PhD Student: Enrico Liscio (E.Liscio@tudelft.nl)
Research Project Description
Personal values are the abstract motivations driving our actions and opinions. Estimating the values behind participants' utterances in a debate or deliberation means understanding the motives behind their statements, which ultimately helps to explain why their opinions differ. However, the subjective nature of values and their openness to interpretation pose a challenge to automatic estimation through AI systems. To this end, we propose the exploration of hybrid methods for the automatic estimation of personal values, combining the intuition of humans with the scalability of AI. Natural Language Processing techniques will be investigated with the goal of complementing and supporting human high-level abstract reasoning.
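As a purely illustrative sketch of the kind of NLP component involved (not the project's actual method), the snippet below scores an utterance against the ten Schwartz value labels with an off-the-shelf zero-shot classifier; the model choice and the score threshold are assumptions:

```python
# A minimal sketch: zero-shot classification of an utterance against
# Schwartz value labels. Model and threshold are illustrative choices.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

SCHWARTZ_VALUES = [
    "self-direction", "stimulation", "hedonism", "achievement", "power",
    "security", "conformity", "tradition", "benevolence", "universalism",
]

utterance = ("We should keep the park open late; "
             "people must be free to decide for themselves.")
result = classifier(utterance, candidate_labels=SCHWARTZ_VALUES,
                    multi_label=True)

# Report values scored above an (arbitrary) threshold.
for label, score in zip(result["labels"], result["scores"]):
    if score > 0.5:
        print(f"{label}: {score:.2f}")
```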
PhD Student: Mark Adamik (m.adamik@vu.nl)
Supervisors: Ilaria Tiddi (i.tiddi@vu.nl), Piek Vossen (piek.vossen@vu.nl), Stefan Schlobach (k.s.schlobach@vu.nl)
Research Project Description
This PhD project will tackle the problem of enabling intelligent systems to reason over common sense knowledge to achieve their tasks. We will focus on the specific case of embodied agents performing everyday activities such as cooking or cleaning, i.e. tasks that require reasoning over different types of knowledge (we call these knowledge-intensive tasks), using for this purpose the available common sense knowledge graphs, i.e. structured data sources that contain knowledge about the everyday world. We will propose new methods that allow robots to achieve knowledge-intensive tasks by combining existing structured data (knowledge graphs) with situational, personalised knowledge captured through their sensors. We will study the hybrid application of advanced symbolic and subsymbolic AI methods to robotic systems operating in complex real-world scenarios.
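To give a concrete flavour of such structured sources, the sketch below queries ConceptNet, one publicly available common sense knowledge graph, for facts about cooking; the helper function and the chosen concept are illustrative assumptions, not part of the project:

```python
# A minimal sketch: fetching common sense triples about a concept
# from ConceptNet's public REST API.
import requests

def commonsense_edges(concept: str, limit: int = 10):
    """Return (start, relation, end) label triples about a concept."""
    url = f"http://api.conceptnet.io/c/en/{concept}"
    data = requests.get(url, params={"limit": limit}).json()
    return [(e["start"]["label"], e["rel"]["label"], e["end"]["label"])
            for e in data.get("edges", [])]

for triple in commonsense_edges("cooking"):
    print(triple)  # e.g. ('cooking', 'RelatedTo', 'food')
```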
PhD Student: Selene Baez (selene.baez.santamaria@gmail.com)
Supervisors: Piek Vossen (piek.vossen@vu.nl), Dan Balliet (d.p.balliet@vu.nl)
Research Project Description
We will study trust as a relationship between social agents in a multimodal world, involving multifaceted skills and complex contexts. This work aims to create and evaluate a computational model of trust from the robot's perspective, modelling how a robot comes to trust humans in collaborative tasks.
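As a point of reference for what a computational trust model can look like, here is a minimal sketch of the classic beta reputation formulation, in which trust is updated from observed interaction outcomes; this is an illustration of the genre, not necessarily the model this project will develop:

```python
# A sketch of the beta reputation model: trust as the expected
# probability of cooperation, updated from interaction outcomes.
from dataclasses import dataclass

@dataclass
class BetaTrust:
    successes: float = 1.0  # alpha prior
    failures: float = 1.0   # beta prior

    def update(self, cooperated: bool) -> None:
        """Record one observed interaction outcome."""
        if cooperated:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def trust(self) -> float:
        """Expected probability that the partner will cooperate."""
        return self.successes / (self.successes + self.failures)

t = BetaTrust()
for outcome in [True, True, False, True]:
    t.update(outcome)
print(f"trust estimate: {t.trust:.2f}")  # 0.67 after 3 successes, 1 failure
```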
PhD Student: Lea Krause (VUA) (l.krause@vu.nl)
Supervisor: Piek Vossen (piek.vossen@vu.nl)
Research Project Description
What if the knowledge graph (KG) does not have a simple answer? How can we formulate a complex answer that is still useful? We do not take a simple no for an answer: an intelligent agent should explain what it does know and how that knowledge is related to the question. A secondary question concerns the status of the answer: the uncertainty of a source, how knowledgeable the source is, how widely the knowledge is shared, whether it is also denied, how recent it is, and how connected it is to other knowledge.
How can an answer be generated in natural language that is correct, informative, and effective?
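A minimal sketch of the underlying retrieval idea, using rdflib with a toy graph (the data, namespace, and fallback strategy are illustrative assumptions, not the project's system); the verbalisation of the related triples into a fluent natural language reply is left aside:

```python
# A sketch: when the graph holds no direct answer, fall back to
# related triples that could be verbalised into an informative reply.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Amsterdam, EX.locatedIn, EX.Netherlands))
g.add((EX.Amsterdam, EX.foundedIn, Literal(1275)))

def answer(subject, predicate):
    """Return the direct answer if present, else what *is* known."""
    direct = list(g.objects(subject, predicate))
    if direct:
        return {"answer": direct}
    # No simple answer: gather related knowledge about the subject.
    related = list(g.predicate_objects(subject))
    return {"answer": None, "related": related}

print(answer(EX.Amsterdam, EX.mayor))  # no direct fact -> related triples
```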
PhD Student: Masha Tsfasman (m.tsfasman@tudelft.nl)
Supervisors: Catharine Oertel (C.R.M.M.Oertel@tudelft.nl), Catholijn Jonker (c.m.jonker@tudelft.nl)
Research Project Description
The topic of this project is memory for conversational agents. On the path towards a socially aware memory system, we are creating a data-driven model that predicts what is memorable from conversations for the different participants of a meeting. To this end, we are exploring different modalities of human-human interaction: our first results indicate that group eye-gaze behaviour is highly predictive of conversational memorability. This topic is relevant for HI because a better understanding of humans helps advance the quality of long-term human-agent interactions and collaborations.
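A minimal sketch of the modelling step, with hypothetical gaze features and synthetic labels (the project's real features, data, and model may well differ):

```python
# A sketch: predict whether a conversation segment is memorable from
# group gaze statistics. Features and labels here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-segment features: mean mutual-gaze ratio,
# gaze-shift rate, fraction of time gazing at the current speaker.
X = rng.random((n, 3))
# Synthetic labels loosely tied to mutual gaze, for illustration only.
y = (X[:, 0] + 0.1 * rng.standard_normal(n) > 0.5).astype(int)

model = LogisticRegression()
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```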
PhD Student: Morita Tarvirdians (m.tarvirdians@tudelft.nl)
Research Project Description
The coming years may create diverse needs for intelligent social conversational robots that are required to collaborate with humans in various environments. Therefore, in order to build conversational agents and robots that can hold human-like conversations, we need to analyse the respective abilities of humans and artificial intelligence in conversation. For example, using theory of mind (Premack & Woodruff, 1978), people can infer others' perspectives and stances. This conception of mind is an important achievement in human development because it directly impacts effective communication and social interaction (Farra et al., 2017). AI, on the other hand, is capable of performing complex calculations, extracting knowledge from messy data, and connecting to various information sources to quickly update its current data. By combining human skills with artificial intelligence, we can create a hybrid intelligence that understands the stances of humans and the reasons behind their standpoints. Argument mining in NLP can now determine not only what position people take but also why they hold the opinions they do (Lawrence & Reed, 2020). However, argument mining has mainly been applied to social media content and asynchronous data (Draws et al., 2022; Vilares & He, 2017). Yet conversation in its natural form is multifaceted; therefore, in addition to linguistic features, non-verbal cues for expressing perspectives should be examined. During this project, we will develop a model that captures the multifaceted nature of a person's perspective on a given topic as it evolves over time, using both human and artificial intelligence, and we will implement this perspective model in a social robot.
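As an illustration of the stance-detection ingredient of argument mining (the model, labels, and hypothesis template are assumptions, not the project's pipeline):

```python
# A sketch: stance detection toward a topic via zero-shot NLI,
# using an illustrative hypothesis template.
from transformers import pipeline

nli = pipeline("zero-shot-classification",
               model="facebook/bart-large-mnli")

utterance = "We simply cannot afford to ignore rising sea levels any longer."
result = nli(
    utterance,
    candidate_labels=["in favor of", "against", "undecided about"],
    hypothesis_template="The speaker is {} taking action on climate change.",
)
print(result["labels"][0])  # most likely stance label
```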
PhD Student: Siddarth Mehrotra (s.mehrotra@tudelft.nl)
Research Project Description
Trust is an important component of any interaction, but especially when we are interacting with a piece of technology which does not think like we do. Nowadays, many systems are being developed which have the potential to make a difference in people’s lives, from health apps to robot companions. But to reach their potential, people need to have appropriate levels of trust in these systems. And therefore, systems need to understand how humans trust them, and what to do to promote appropriate trust.
In this research, as a first step towards eliciting appropriate trust, we need to understand what factors influence trust in AI agents. Despite the growing research attention to trust in AI agents, much is still unknown about people's perceptions of it. Therefore, we ask: what is it that makes people trust or distrust AI?
Additionally, it is important that a human's trust in the artificial agent is appropriate. The challenge is to ensure that humans tune their trust towards the agent, since we do not have control over the human. In this research, we build on the idea that if agents reflect on and communicate their own trustworthiness, we may be able to influence humans to appropriately fine-tune their trust in them. With information about the agent's trustworthiness, a human can adapt to the qualities and limitations of the agent and, consequently, adjust their utilization of the agent accordingly. All in all, we have the following research questions:
- How can we guide humans to appropriately trust their AI systems & how can we verify it?
- How might we use our understanding of appropriate trust to design Human-AI agent interactions?
This topic relates to hybrid intelligence in that a key driver for achieving effective human-AI interaction is mutual trust. To achieve this, we should develop artificial agents that are able to reason about and promote appropriate mutual trust.
PhD Student: Jonne Maas (j.j.c.maas@tudelft.nl)
Research Project Description
This research focuses on responsible AI development. In particular, I'm interested in the power dynamics underlying AI systems. Following democratic theories, I argue that current power dynamics are morally problematic, as end-users are left to the goodwill of those who shape a system, as well as of the system itself, without means to hold development and deployment decisions properly accountable. Responsible AI, however, requires a responsible development and deployment process. Without just power dynamics, such responsible AI development is difficult, if not impossible, to achieve. My research is thus one step towards responsible AI.
The direct relevance to the HI Consortium lies in the connection with the responsible research line. Although I do not program myself, the philosophical approach of my dissertation contributes to a better understanding of what it means to develop responsible AI in general. In addition, the philosophical insights from my dissertation could inform the AI systems that the HI Consortium develops.
Candidate: Deborah van Sinttruije
Supervisor: Catharine Oertel (C.R.M.M.Oertel@tudelft.nl)
Research Project Description
One of the most pressing matters holding robots back from taking on more tasks and reaching widespread deployment in society is their limited ability to understand human communication and take situation-appropriate actions. This research project is dedicated to addressing this gap by developing the underlying data-driven models that enable a robot to engage with humans in a socially aware manner.
This research targets the development of an argumentative dialogue system for human-robot interaction, exploring how to fuse verbal and non-verbal behaviour to infer a person's perspective. I use, and further develop, reinforcement learning techniques to drive the robot's argumentative strategy for deliberating topics of current social importance, such as global warming or vaccination.
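A toy sketch of such a reinforcement learning loop; the state space, dialogue moves, and simulated user below are invented placeholders rather than the project's actual design:

```python
# A sketch: tabular Q-learning over argumentative dialogue moves
# against a placeholder simulated user.
import random
from collections import defaultdict

ACTIONS = ["present_evidence", "ask_opinion", "rebut", "concede"]

def simulate_user(state: int, action: str) -> tuple[int, float]:
    """Placeholder user model: returns (next_state, reward)."""
    reward = random.uniform(0, 1) if action != "concede" else 0.2
    return (state + 1) % 5, reward

Q = defaultdict(float)
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for episode in range(1000):
    state = 0
    for _ in range(10):
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = simulate_user(state, action)
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = next_state

print(max(ACTIONS, key=lambda a: Q[(0, a)]))  # learned opening move
```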
Postdoc: Hüseyin Aydin (h.aydin@uu.nl)
Research Project Description
The project focuses on information privacy in cases that concern multiple users. Considering the complexity of the problem, along with its distributed and dynamic nature, the aims of the study are: (i) modeling the factors that shape people's privacy preferences, (ii) adapting and applying multi-agent reinforcement learning methods to this model to reconcile users' choices, and (iii) evaluating the overall framework on real-life scenarios and making further adjustments to the components where necessary.
The project's novel aspects include (i) the application of reinforcement learning to a privacy problem that concerns a group of people, rather than individual content whose sharing decision depends on a single person, and (ii) the construction of a solution for conflicts between multiple users' privacy preferences that is based entirely on learning, eliminating the need for explicit negotiation methods.
Although this study aims at an autonomous solution by the learning agents with little human intervention, the system should be adaptive to the changing behavior of the humans the agents represent. Hence, the system must interact with humans as needed. In this manner, a balance can be struck between frequently requiring manual actions from the system's end-users and ignoring humans' dynamic behavior by completely automating the process.
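As a toy illustration of the multi-agent learning in aim (ii) above, the sketch below has two independent learners, representing co-owners of a piece of content, converge on a joint sharing decision; the preferences, reward scheme, and aggregation rule are invented for illustration:

```python
# A sketch: two independent learners, each representing a co-owner of a
# photo, learn whether to share it. Numbers and rewards are invented.
import random

ACTIONS = ["share", "withhold"]
prefs = {"alice": 0.9, "bob": 0.2}  # hypothetical desire to share (0..1)

def reward(agent: str, joint: str) -> float:
    """Satisfaction: sharing pleases eager agents, withholding cautious ones."""
    p = prefs[agent]
    return p if joint == "share" else 1 - p

Q = {a: {act: 0.0 for act in ACTIONS} for a in prefs}
alpha, epsilon = 0.1, 0.1

for _ in range(2000):
    choice = {}
    for agent in prefs:
        if random.random() < epsilon:
            choice[agent] = random.choice(ACTIONS)
        else:
            choice[agent] = max(ACTIONS, key=Q[agent].get)
    # The content is shared only if every co-owner agrees.
    joint = "share" if all(c == "share" for c in choice.values()) else "withhold"
    for agent in prefs:
        r = reward(agent, joint)
        Q[agent][choice[agent]] += alpha * (r - Q[agent][choice[agent]])

print({a: max(ACTIONS, key=Q[a].get) for a in prefs})
```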
PhD Student: Amir Homayounirad (A.Homayounirad@tudelft.nl)
Research Project Description
Deliberation is the process of exchanging opinions and debating problems in depth. Moderators are crucial in facilitating this process and act as democratic intermediaries. Their work is subjective, requires specialized understanding and reasoning skills to help users within a discussion, and imposes a large cognitive load depending on the context and scale; moderators can therefore benefit from the speed of AI techniques and their ability to process large datasets. Hence, we will develop hybrid (human + AI) methods, utilizing natural language processing, to assist moderators in facilitating deliberation and to support individuals and groups in having more reflective discussions in online settings. To this end, we (1) employ value theory to understand people's value preferences and determine arguments that help explain why those values are relevant in context, (2) estimate individual preferences, and (3) support individuals and groups in reacting to others' viewpoints.
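As a purely illustrative sketch of step (2), one could aggregate the value labels assigned to a participant's utterances (e.g. by a classifier like the one sketched earlier in this document) into a preference profile; the data here are hypothetical:

```python
# A sketch: estimating a participant's value preferences by
# aggregating per-utterance value labels. Labels are hypothetical.
from collections import Counter

labelled = [
    ("Everyone should get a say in this.", "universalism"),
    ("We must keep the neighbourhood safe.", "security"),
    ("Safety checks are non-negotiable.", "security"),
    ("People can decide for themselves.", "self-direction"),
]

counts = Counter(value for _, value in labelled)
total = sum(counts.values())
profile = {value: count / total for value, count in counts.items()}
print(profile)  # {'universalism': 0.25, 'security': 0.5, 'self-direction': 0.25}
```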