Associated projects

PhD Student: Enrico Liscio (E.Liscio@tudelft.nl)

Research Project Description
Personal values are the abstract motivations that drive our actions and opinions. Estimating the values behind participants’ utterances in a debate or deliberation means understanding the motives behind their statements, which ultimately helps explain why their opinions may differ. However, the subjective nature of values and their openness to interpretation pose a challenge to automatic estimation by AI systems. To this end, we propose the exploration of hybrid methods for the automatic estimation of personal values, combining the intuition of humans with the scalability of AI. Natural Language Processing techniques will be investigated with the goal of complementing and supporting human high-level abstract reasoning.
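As a purely illustrative sketch of this direction, value estimation can be framed as supervised text classification, with humans reviewing the model’s uncertain predictions; the value labels, example utterances, and model choice below are hypothetical assumptions, not the project’s actual method.

```python
# Hypothetical sketch: estimating the personal values behind utterances as
# a supervised text-classification task. Labels, data, and model choice
# are illustrative assumptions, not the project's actual method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy utterances annotated with the value they (hypothetically) express.
utterances = [
    "We must protect our traditions at all costs.",
    "Everyone should be free to live as they choose.",
    "Public safety comes before individual convenience.",
    "New ideas deserve a chance, even if they are risky.",
]
values = ["tradition", "self-direction", "security", "stimulation"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(utterances, values)

# In a hybrid setup, low-confidence predictions would be deferred to human
# annotators, combining human intuition with machine scalability.
print(model.predict(["I just want to decide things for myself."]))
```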

PhD Student: Mark Adamik (m.adamik@vu.nl)

Supervisors: Ilaria Tiddi (i.tiddi@vu.nl), Piek Vossen (piek.vossen@vu.nl), Stefan Schlobach (k.s.schlobach@vu.nl)

Research Project Description

This PhD project will tackle the problem of enabling intelligent systems to reason over common sense knowledge to achieve their tasks. We will focus on the specific case of embodied agents performing everyday activities such as cooking or cleaning, i.e. tasks that require reasoning over different types of knowledge (we call these knowledge-intensive tasks), using for this purpose the available common sense knowledge graphs, i.e. structured data sources that contain knowledge about the everyday world. We will propose new methods that allow robots to achieve knowledge-intensive tasks by combining existing structured data (knowledge graphs) with situational, personalised knowledge captured through their sensors. We will study the hybrid application of advanced symbolic and subsymbolic AI methods to robotic systems operating in complex real-world scenarios.

PhD student: Masha Tsfasman (m.tsfasman@tudelft.nl)

Supervisors: Catharine Oertel (C.R.M.M.Oertel@tudelft.nl), Catholijn Jonker (c.m.jonker@tudelft.nl)

Research Project Description
The topic of this project is memory for conversational agents. Toward the goal of creating a socially aware memory system, we are building a data-driven model that predicts what is memorable from conversations for the different participants of a meeting. For that, we are exploring different modalities of human-human interaction: our first results indicate that group eye-gaze behaviour is highly predictive of conversational memorability. This topic is relevant for HI since a better understanding of humans is needed to advance the quality of long-term human-agent interactions and collaborations.
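A minimal sketch of what such a data-driven memorability model could look like is given below; the gaze features, data, and classifier are hypothetical stand-ins, not the project’s actual pipeline.

```python
# Hypothetical sketch: predicting whether a conversational moment is
# memorable from group eye-gaze features. Feature names, data, and model
# are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-moment features: [mutual-gaze ratio, gaze-at-speaker ratio,
# gaze-shift rate]; label: 1 if a participant later recalled the moment.
X = np.array([
    [0.8, 0.9, 0.2],
    [0.1, 0.3, 0.7],
    [0.7, 0.8, 0.3],
    [0.2, 0.2, 0.6],
])
y = np.array([1, 0, 1, 0])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([[0.75, 0.85, 0.25]])[0, 1])  # P(memorable)
```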

PhD student: Morita Tarvirdians (m.tarvirdians@tudelft.nl) 

Research Project Description 
The coming years may create diverse needs for intelligent social conversational robots that collaborate with humans in various environments. To build conversational agents and robots capable of human-like conversations, we need to analyse the distinct abilities of humans and of artificial intelligence in conducting conversations. For example, using theory of mind (Premack & Woodruff, 1978), people can infer others’ perspectives and stances. This conception of the mind is an important achievement in human development because it directly impacts effective communication and social interaction (Farra et al., 2017). AI, on the other hand, is capable of performing complex calculations, extracting knowledge from messy data, and connecting to various information sources to quickly update its current data. By combining human skills with artificial intelligence, we can create a hybrid intelligence that understands the stances of humans and the reasons behind their standpoints.

Argument mining in NLP can now determine not only what position people take but also why they hold the opinions they do (Lawrence & Reed, 2020). However, argument mining has mainly been applied to social media content and asynchronous data (Draws et al., 2022; Vilares & He, 2017). Yet conversation in its natural form is multifaceted; therefore, in addition to linguistic features, non-verbal cues for expressing perspectives should be examined. During this project, we will develop a model that captures the multifaceted nature of a person’s perspective on a given topic as it evolves over time, using both human and artificial intelligence, and we will implement this perspective model in a social robot.
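To make the distinction between the “what” and the “why” of an opinion concrete, here is a deliberately naive, hypothetical toy that splits a claim from the reason introduced by “because”; real argument-mining systems (Lawrence & Reed, 2020) rely on far richer models.

```python
# Hypothetical toy: separating the claim ("what") from the reason ("why")
# in an utterance. Real argument mining uses far richer models; this is
# only a rule-based illustration.
import re

def split_claim_reason(utterance: str):
    """Return (claim, reason), where reason is None if no 'because' clause."""
    parts = re.split(r"\bbecause\b", utterance, maxsplit=1, flags=re.IGNORECASE)
    claim = parts[0].strip().rstrip(",")
    reason = parts[1].strip() if len(parts) > 1 else None
    return claim, reason

print(split_claim_reason(
    "We should expand vaccination programs because they protect the vulnerable."
))
```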

PhD student: Siddarth Mehrotra (s.mehrotra@tudelft.nl)

Research Project Description
Trust is an important component of any interaction, but especially so when we interact with a piece of technology that does not think the way we do. Many systems are being developed that have the potential to make a difference in people’s lives, from health apps to robot companions. But to reach their potential, people need to have appropriate levels of trust in these systems. Systems therefore need to understand how humans trust them, and what to do to promote appropriate trust.

In this research, as a first step towards eliciting appropriate trust, we need to understand which factors influence trust in AI agents. Despite the growing research attention on trust in AI agents, much is still unknown about people’s perceptions of it. Therefore, we ask: what is it that makes people trust or distrust AI?

Additionally, it is important that a human’s trust in the artificial agent is appropriate. The challenge is to help humans tune their trust towards the agent, since we do not have control over the human. In this research, we leverage the idea that if agents reflect on and communicate their own trustworthiness, we may be able to influence humans to appropriately fine-tune their trust in them. With information about the agent’s trustworthiness, a human can adapt to the qualities and limitations of the agent and, consequently, adjust their use of the agent accordingly. All in all, we have the following research questions:

  1. How can we guide humans to appropriately trust their AI systems & how can we verify it?
  2. How might we use our understanding of appropriate trust to design Human-AI agent interactions?

This topic relates to hybrid intelligence in that a key driver for achieving effective human-AI interaction is mutual trust. To this end, we should develop artificial agents that are able to reason about and promote appropriate mutual trust.

PhD student: Jonne Maas (j.j.c.maas@tudelft.nl)

Research project description
This research focuses on responsible AI development. In particular, I’m interested in the power dynamics underlying AI systems. Following democratic theories, I argue that current power dynamics are morally problematic: end-users of a system are left to the goodwill of those who shape the system, as well as to the system itself, without means to properly hold development and deployment decisions accountable. Responsible AI, however, requires a responsible development and deployment process. Without just power dynamics, such responsible AI development is difficult, if not impossible, to achieve. My research is thus one step towards Responsible AI.

The direct relevance to the HI Consortium lies in the connection with the responsible research line. Although I do not program myself, the philosophical approach of my dissertation contributes to a better understanding of what it means to develop Responsible AI in general. In addition, the philosophical insights from my dissertation could inform the AI systems that the HI consortium develops.

PhD student: Mustafa Mert Çelikok (M.M.Celikok@tudelft.nl)

Research project description
This project concerns the development of multiagent reinforcement learning methods that will allow an AI agent to interact with a human partner in order to augment the human’s decision-making. The project is closely related to the overall agenda of intelligence augmentation and the long-term goals of the Hybrid Intelligence project.

Candidate: Deborah van Sinttruije

Supervisor: Catharine Oertel (C.R.M.M.Oertel@tudelft.nl)

Research project description

One of the most pressing issues holding robots back from taking on more tasks and reaching widespread deployment in society is their limited ability to understand human communication and take situation-appropriate actions. This research project is dedicated to addressing this gap by developing the underlying data-driven models that enable a robot to engage with humans in a socially aware manner.

This research targets the development of an argumentative dialogue system for human-robot interaction, exploring how to fuse verbal and non-verbal behaviour to infer a person’s perspective. I use, and further develop, reinforcement learning techniques to drive the robot’s argumentative strategy when deliberating topics of current social importance, such as global warming or vaccination.
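As a hedged illustration of how reinforcement learning could drive such an argumentative strategy, the sketch below uses a simple epsilon-greedy bandit over dialogue moves; the action set and reward signal are hypothetical assumptions, not the system’s actual design.

```python
# Hypothetical sketch: an epsilon-greedy bandit choosing the robot's next
# argumentative move. Actions and rewards are illustrative assumptions.
import random

ACTIONS = ["present_evidence", "ask_clarification", "concede_point", "rebut"]
q_values = {a: 0.0 for a in ACTIONS}  # estimated value of each move
counts = {a: 0 for a in ACTIONS}      # times each move was tried
EPSILON = 0.1                         # exploration rate

def choose_action() -> str:
    if random.random() < EPSILON:
        return random.choice(ACTIONS)        # explore a random move
    return max(q_values, key=q_values.get)   # exploit the best-known move

def update(action: str, reward: float) -> None:
    # Incremental mean update of the action-value estimate.
    counts[action] += 1
    q_values[action] += (reward - q_values[action]) / counts[action]

# One simulated turn: in practice the reward could be derived from the
# user's verbal and non-verbal engagement signals.
move = choose_action()
update(move, reward=1.0)
print(move, q_values[move])
```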