Siddharth Mehrotra


Title: Designing for Appropriate Trust in Human-AI Interaction

Location: Aula, Academiegebouw, Delft

Time: September 6th, 2024, 10:00 am

Abstract: Trust is essential to any interaction, especially when interacting with technology that does not (metaphorically) think like we do. Nowadays, many AI systems are being developed that have the potential to make a difference in people’s lives, from health apps to robot companions. However, to reach their potential, people need to have appropriate levels of trust in these AI systems: people should neither over-trust AI, which leads to misuse, nor under-trust it, which leads to disuse. Therefore, AI systems need to understand how humans trust them and what they can do to promote appropriate trust.

In this research, as a first step towards eliciting appropriate trust, we must understand which factors influence trust in AI agents. Despite growing research attention, much is still unknown about people’s perceptions of trust in AI agents. Therefore, this research studied what makes people trust or distrust AI. Additionally, as mentioned above, human trust in the AI must be appropriate. The challenge is to help humans tune their trust in the AI agent, since we cannot directly control human trust. Therefore, in this research, we leverage the idea that if AI agents can reflect on their own trustworthiness through explanations, we may be able to help humans fine-tune their trust in them appropriately. With information about the AI agent’s trustworthiness, a human can adapt to the qualities and limitations of the agent and, consequently, adjust their use of the agent accordingly.

The topic of this thesis relates to hybrid intelligence, in which mutual trust is crucial for effective human-AI interaction. To this end, in this thesis, we designed and developed artificial agents that can reason about and promote appropriate mutual trust. To explore our research questions, the thesis uses three lenses: a formal, a social, and an application lens. This methodological approach ensured a holistic exploration of appropriate human trust, drawing on formal theories, social considerations, and practical insights.

Publications: