Siddharth Mehrotra
Title: Designing for Appropriate Trust in Human-AI Interaction
Location: Aula, Academiegebouw, Delft
Time: September 6th, 2024, 10:00 am
Abstract: Trust is essential to any interaction, especially when interacting with technology that does not (metaphorically) think like we do. Nowadays, many AI systems are being developed that have the potential to make a difference in people’s lives, from health apps to robot companions. However, for these systems to reach their potential, people need to have appropriate levels of trust in them, i.e., people should neither over- nor under-trust AI, since over-trust can lead to misuse and under-trust to disuse. Therefore, AI systems need to understand how humans trust them and what to do to promote appropriate trust.
In this research, as a first step towards eliciting appropriate trust, we must understand what factors influence trust in AI agents. Despite the growing research attention to trust in AI agents, much is still unknown about people’s perceptions of it. Therefore, this research studied what makes people trust or distrust AI. Additionally, as noted above, humans’ trust in AI must be appropriate. The challenge is to help humans calibrate their trust in the AI agent, since we cannot control human behaviour directly. Therefore, in this research, we leverage the idea that if AI agents can reflect on their own trustworthiness through explanations, we may be able to help humans fine-tune their trust in them appropriately. With information about the AI agent’s trustworthiness, a human can adapt to the agent’s qualities and limitations and, consequently, adjust their use of the agent accordingly.
The topic of this thesis relates to hybrid intelligence, in which mutual trust is crucial for effective human-AI interaction. To this end, we designed and developed artificial agents that can reason about and promote appropriate mutual trust. To explore our research questions, this thesis uses three lenses: a formal, a social, and an application lens. This methodological approach ensured a holistic exploration of appropriate human trust, drawing on formal theories, social considerations, and practical insights.
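One simple way to read "appropriate trust" here (an illustrative sketch with notation introduced for this summary, not the thesis's own formalization) is that trust should track trustworthiness: writing T_H(A) for the human's trust in agent A and W(A) for A's actual trustworthiness, trust is appropriate when T_H(A) ≈ W(A), while T_H(A) > W(A) corresponds to over-trust (risking misuse) and T_H(A) < W(A) to under-trust (risking disuse).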
Publications:
- A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges. ACM Journal of Responsible Computing (JRC), 2024 (to appear).
- Integrity-based explanations for fostering appropriate trust in AI agents. ACM Transactions on Interactive Intelligent Systems, 14(1), 1–36, 2024.
- Practising Appropriate Trust in Human-Centred AI Design. In: Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 1–8, May 2024.
- Appropriate context-dependent artificial trust in human-machine teamwork. In: Putting AI in the Critical Loop, pp. 41–60, Academic Press, 2024.
- Building Appropriate Trust in AI: The Significance of Integrity-Centered Explanations. In: HHAI 2023: Augmenting Human Intellect, pp. 436–439, IOS Press, 2023.
- Shaping a multidisciplinary understanding of team trust in human-AI teams: a theoretical framework. European Journal of Work and Organizational Psychology, pp. 1–14, 2023.
- Modelling Trust in Human-AI Interaction. In: Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, pp. 1826–1828, International Foundation for Autonomous Agents and Multiagent Systems, Virtual Event, United Kingdom, 2021.
- More Similar Values, More Trust? – The Effect of Value Similarity on Trust in Human-Agent Interaction. In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 777–783, Association for Computing Machinery, Virtual Event, USA, 2021.
- Trust should correspond to trustworthiness: a formalization of appropriate, mutual trust in human-agent teams. In: Proceedings of the 22nd International Workshop on Trust in Agent Societies, London, UK, 2021.