PhD candidate: Aishwarya Suresh

Supervisor: Filippo Santoni de Sio
Co-supervisor: Ibo van de Poel

Research Project Description
Many current AI systems implicitly support or constrain human moral judgement: think of information feeds on Facebook, recommendation systems and targeted advertising. The goal of this project is to make this support (or constraint) explicit as a step towards empowering better human moral judgements. Our working assumption is that AI should support, but not take over, human moral judgements and decisions. To determine the desirable role of AI support, we should first better understand human moral judgement, and particularly the importance of context in it. Two case studies will be carried out on how AI may support human moral judgement in the context of changing values. On the basis of these, desirable (and undesirable) forms of AI support will be identified, and the insights will be translated into a number of design principles for AI support of human moral judgement.

Full proposal

PhD candidate: Cor Steging

Supervisor: Bart Verheij
Co-supervisor: Silja Renooij

Research Project Description
A core puzzle in today’s artificial intelligence is how knowledge, reasoning and data are connected. To what extent can knowledge used in reasoning be recovered from data with implicit structure? Can such knowledge be correctly recovered from data? To what extent does knowledge determine the structure of data that results from reasoning with the knowledge? Can knowledge be selected such that the output data generated by reasoning with the knowledge has desired properties? 

By investigating the relations between knowledge, reasoning and data, we aim to develop mechanisms for the verification and evaluation of hybrid systems that combine manual knowledge-based design and learning from data. The focus will be on structures used in reasoning and decision-making, in particular logical and probabilistic relations (as in Bayesian networks) and reasons (pro and con) and exceptions (as in argument-based decision making).
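A deliberately tiny sketch of the first question, with an invented eligibility rule (none of this comes from the proposal): generate data by reasoning with known knowledge, then check whether that knowledge can be read back off the data.

```python
def rule(age, income):
    """Ground-truth knowledge: an invented eligibility rule."""
    return age >= 18 and income < 50

def recover_age_threshold(data):
    """Attempt to read the age threshold back off labelled data, assuming
    the form of the rule is known -- a toy instance of recovering
    knowledge from data generated by reasoning with that knowledge."""
    eligible_ages = [age for (age, income), label in data if label]
    return min(eligible_ages)

# Data produced by reasoning with the rule over a grid of cases.
data = [((age, income), rule(age, income))
        for age in range(10, 30) for income in (20, 80)]
```

On this exhaustive sample the threshold is recovered exactly; the research questions above concern when and to what extent such recovery is possible for realistic knowledge, reasoning and data.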

Full proposal

PhD candidate: Michiel van der Meer

Supervisor: Catholijn Jonker
Co-supervisor: Pradeep Murukannaiah

Research Project Description
Perspectives in Deliberation is about developing Artificial Intelligence techniques to uncover the underlying structure of debates and group deliberations, with the idea that we can help participants in a debate or deliberation understand why others hold a different opinion. To this end we first want to use computational linguistics to extract what we call micro-propositions from text: proposition mining. Secondly, we want to model the implications of these propositions for stakeholders: implication mining. Thirdly, we want to extract and understand the perspectives of the stakeholders on these implications: perspective mining.

In addition to understanding the debate and the perspectives, the AI developed in this project will interact with stakeholders about the interpretations of the statements and arguments they brought in.

Full proposal

PhD candidate: Tiffany Matej

Supervisor: Dan Balliet
Co-supervisor: Hayley Hung

Research Project Description
This PhD project will be a collaboration between psychology and computer science to develop hybrid intelligence that can understand, predict, and potentially aid in initiating collaborative behavior. To build such machines, we must further develop our understanding of how people select their partners and initiate collaborations. The project will apply state-of-the-art methods and techniques from both fields to advance our understanding of these issues.

Partner selection is fundamental to how people initiate cooperative relations and avoid being exploited by non-cooperative individuals – two key features of human sociality thought to underlie why humans are such a cooperative species (see Barclay, 2013). This research will test and develop theory about how people choose cooperative partners. The project will use both naturalistic settings (e.g., social networking at scientific conferences) and experimental settings (e.g., experimental group tasks) to examine the non-verbal and verbal behaviors during social interactions that predict whether people select another person (or not) as a collaboration partner. During these studies, participants will wear multi-modal sensors and be video recorded while interacting with other people for the first time. These recordings will be used to capture non-verbal and verbal behaviors that can predict how people evaluate their interaction partner (e.g., their traits, motives), the social interaction (e.g., closeness, social power), and behavioral motivations (e.g., avoiding versus approaching the person in future interactions).

This PhD project is a collaboration between Psychology (Daniel Balliet) and Computer Science (Hayley Hung, Rineke Verbrugge), and the PhD candidate will also work closely with another PhD student supervised directly by Hayley Hung. The candidate should be open to working in a multi-disciplinary team, and have a general interest in establishing a closer connection between psychology and the computer sciences.

Full proposal

PhD candidate: Nicole Orzan

Supervisor: Erman Acar
Co-supervisor: Eliseo Ferrante
Promotor: Davide Grossi

Research Project Description
Deliberation is a key ingredient for successful collective decision-making. In this project we will:

– investigate the mathematical and computational foundations of deliberative processes in groups of autonomous agents.
– lay the theoretical groundwork for modelling and analyzing how opinions are exchanged and evolve in groups of autonomous decision-makers.
– develop principled methods for the design of deliberation procedures that can provably improve the quality of group decisions.
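A classical reference point for group decisions provably improving with aggregation is Condorcet's jury theorem; the Monte-Carlo toy below (an illustration, not part of the project) estimates majority accuracy for independent voters of fixed competence.

```python
import random

def majority_accuracy(n_agents, competence, trials=2000, rng=None):
    """Monte-Carlo estimate of the probability that a simple majority of
    independent agents, each individually correct with probability
    `competence`, reaches the correct collective decision."""
    rng = rng or random.Random(42)
    correct = 0
    for _ in range(trials):
        votes_for_truth = sum(rng.random() < competence
                              for _ in range(n_agents))
        if votes_for_truth > n_agents / 2:
            correct += 1
    return correct / trials
```

Running it for growing group sizes shows majority accuracy climbing towards 1 whenever individual competence exceeds 0.5 – the independence baseline that deliberation procedures, which introduce dependence between opinions, aim to improve on.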

Full proposal

PhD candidate: Emre Erdogan

Supervisor: Frank Dignum
Co-supervisor: Pinar Yolum

Research Project Description
At Utrecht University, we are looking for several PhD students within the Hybrid Intelligence project. For the position on the computational theory of mind we look for a PhD student to investigate and develop a computationally efficient theory of mind that is theoretically sound.
The research will focus on which type of general knowledge about other agents has to be kept in order to maintain a usable theory of mind. How much of the historical information and premises about preferences, goals and motivations should be taken into account? The premise is that the theory of mind that needs to be kept depends on the level of interaction that is aimed for and the context in which this interaction takes place.

You will apply the developed theory in the area of collaborative privacy. A typical situation in collaborative privacy is that a user in a collaborative system, such as an online social network, would like to share content that could be co-owned by others, such as group pictures or collaboratively edited documents. At the time of sharing, the user has to take into account how the shared content can or will be used, how this sharing would affect other users, and take an action accordingly. By employing a computationally viable model of theory of mind, a user can reason about other co-owners’ privacy expectations on the content in a given context and make a sharing decision based on that.
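As a deliberately minimal sketch of such a sharing decision (the tag-based co-owner models are invented for illustration and stand in for a real theory of mind):

```python
def share_decision(content_tags, coowner_models):
    """Share a piece of co-owned content only if, under our model of each
    co-owner's privacy preferences, nobody is predicted to object.
    `coowner_models` maps each co-owner to the set of content tags they
    dislike sharing -- a drastic simplification of a theory of mind."""
    for coowner, disliked_tags in coowner_models.items():
        if content_tags & disliked_tags:
            return False, coowner     # predicted objection: withhold
    return True, None                 # no predicted objections: share
```

For a group picture tagged "party", a model predicting that one co-owner dislikes sharing "party" content leads the agent to withhold the picture; the research question is how rich such models must be, and how much history they must retain, to remain both useful and computationally viable.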
The PhD candidate’s principal duty is to conduct scientific research, resulting in a PhD thesis at the end of the appointment. Other duties may include supporting the preparation and teaching of Bachelor’s and Master’s level courses, supervising student theses, managing research infrastructure and participating in public outreach. This position presents an excellent opportunity to develop an academic profile.

Full proposal

Postdoc: Kevin Godin-Dubois

Supervisor: Karine Miras
Co-supervisor: Decebal Mocanu
Promotor: Guszti Eiben

Research Project Description
By jointly addressing NeuroEvolution (NE) and Reinforcement Learning (RL) we open the door to producing robots that can improve at the population and individual levels. In the former case, this encompasses the full scope of genetic exploration (neural topology, sensory apparatus, morphology…). In the latter case, a given robot will leverage this genetic baggage to better accommodate local conditions including, in the case of physical robots, bridging the reality gap. Combining the advantages of offline body/brain optimization, through NE, with online adaptation, through RL, is an essential aspect of biological intelligence. Therefore, devising methodologies to exploit the best of both worlds is expected to have a strong impact on Artificial Intelligence for Embodied Robots.

Full proposal

PhD candidate: Niklas Höpner

Supervisor: Herke van Hoof
Co-supervisor: Ilaria Tiddi

Research Project Description
At the University of Amsterdam, we are looking for a PhD candidate interested in combining (deep) reinforcement learning research with prior knowledge from knowledge graphs to learn explainable strategies for complex sequential tasks.
Many problems of practical interest are concerned with optimising sequential decision making. Think, for example, of finding optimal trajectories for vehicles or robots, or deciding which medical tests to run. Methods for classical planning based on symbolic representations are typically explainable to human collaborators but rely on an exact problem description, while data-driven (e.g. reinforcement learning) approaches do not rely on a provided problem specification, but are data hungry and inscrutable. Combining the complementary strengths of these approaches is expected to advance the state of the art in knowledge representation and reinforcement learning.
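One common way to combine a symbolic prior with reinforcement learning is to let the prior constrain the action space; the sketch below (a generic illustration, not the project's method) masks symbolically invalid actions during tabular Q-learning.

```python
import random

def masked_q_learning(n_states, n_actions, step, allowed,
                      episodes=300, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning in which a symbolic prior (`allowed`) masks out
    actions known to be invalid, so the learner never wastes exploration
    on them.  `step(s, a)` returns (next_state, reward, done)."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(50):                          # cap episode length
            acts = [a for a in range(n_actions) if allowed(s, a)]
            if random.random() < eps:                # explore
                a = random.choice(acts)
            else:                                    # exploit
                a = max(acts, key=lambda x: Q[s][x])
            s2, reward, done = step(s, a)
            future = 0.0 if done else max(
                Q[s2][a2] for a2 in range(n_actions) if allowed(s2, a2))
            Q[s][a] += alpha * (reward + gamma * future - Q[s][a])
            if done:
                break
            s = s2
    return Q
```

Because the mask can be generated from declarative knowledge (e.g. a knowledge graph), the resulting policy is restricted to actions that are explainable in symbolic terms, while the values themselves are still learned from data.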

Full proposal

PhD candidate: Bram Renting

Supervisor: Holger Hoos
Co-supervisor: Catholijn Jonker

Research Project Description
The goal of this PhD project is to develop automated machine learning methods for changing and non-i.i.d. data, with applications to standard learning scenarios as well as to automated negotiation. Such techniques are key to enabling the efficient and robust use of machine learning systems and components for a broad range of human-centred AI applications. They will also contribute to fundamental advances in machine learning. Automated negotiation scenarios are of particular interest, as they play a key role in systems dealing with potentially conflicting interests between multiple users or stakeholders.

Full proposal

PhD candidate: Anna Kuzina

Supervisor: Jakub Tomczak
Co-supervisor: Max Welling

Research Project Description
At Vrije Universiteit Amsterdam, we are looking for an enthusiastic PhD candidate who is interested in formulating and developing new models and algorithms for quantifying uncertainty and making decisions in changing environments. Our project “Continual learning and deep generative modeling for adaptive systems” focuses on fundamental research into combining various learning paradigms for building intelligent systems capable of learning in a continuous manner and evaluating uncertainty of the surrounding environment.

Adaptivity is a crucial capability of living organisms. Current machine learning systems are not equipped with tools that allow them to adjust to new situations and understand their surroundings (e.g., observed data). For instance, a robot should be able to adapt to a new environment or task, and to assess whether the observed reality is known (i.e., likely events) or whether it should contact a human operator due to unusual observations (i.e., high uncertainty). Moreover, we claim that uncertainty assessment is crucial for communicating with human beings and for decision making.

In this project, we aim at designing new models and learning algorithms by combining multiple machine learning methods and developing new ones. In order to quantify uncertainties, we prefer to use the deep generative modeling paradigm and frameworks like Variational Autoencoders and flow-based models. However, we believe that standard learning techniques are insufficient for updating models, and, therefore, continual learning (a.k.a. life-long learning or continuous learning) should be used. Since it is still an open question how continual learning ought to be formulated, we propose to explore different directions that could include, but are not limited to, Bayesian nonparametrics and (Bayesian) model distillation. Moreover, the combination of continual learning and deep generative modeling entails new challenges and new research questions.
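The "contact a human when the observation is unlikely" behaviour can be illustrated with a density model far simpler than a VAE; in this sketch a one-dimensional Gaussian stands in for the deep generative models named above.

```python
import math

def fit_gaussian(samples):
    """Fit a 1-D Gaussian to observed data; a stand-in for the deep
    generative models (VAEs, normalizing flows) mentioned above."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n
    return mu, var

def log_likelihood(x, mu, var):
    """Log-density of x under the fitted Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def is_unusual(x, mu, var, threshold):
    """Flag observations whose likelihood falls below `threshold` -- the
    situation in which the agent should defer to a human operator."""
    return log_likelihood(x, mu, var) < threshold
```

The continual-learning question is then how to update the density model as the environment changes without forgetting what was learned before; the thresholding step itself is unchanged.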

The PhD candidate will be part of the Department of Computer Science of the Vrije Universiteit Amsterdam (Computational Intelligence group) in a close partnership with the Institute of Informatics of the University of Amsterdam (Amsterdam Machine Learning Lab). Daily supervision will be performed by dr. Jakub Tomczak (VU). The promotor will be prof. A.E. Eiben (VU) and the co-promotor will be prof. M. Welling (UvA).

Full proposal

Candidate: Mani Tajaddini

Supervisor: Mark Neerincx
Co-supervisor: Annette ten Teije

Research Project Description
The ambition of this PhD position is to define design patterns for successful configurations of Hybrid Intelligent systems. Such patterns describe how different combinations of machine and human capabilities perform for a given task under a given set of circumstances. We aim to develop a corresponding pattern language to express such design patterns in conceptual, (semi-)formal or computational form, and an empirical method to validate the patterns.

Full proposal

PhD candidate: Taewoon Kim

Supervisor: Michael Cochez
Co-supervisor: Mark Neerincx

Research Project Description
Machines have become reasonably good at answering factual knowledge questions. Search engines and virtual assistants can retrieve relevant information and answer factual and commonsense questions, using what psychologists call “semantic memory”. However, these systems will not remember what they did yesterday (i.e., using “episodic memory”). We want to model an agent that has both semantic and episodic memory systems. In order to tackle this problem, we will take advantage of the human memory systems suggested by theories from cognitive science. 
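A toy version of an agent with both memory systems might look as follows (the class and method names are invented for illustration):

```python
from datetime import datetime

class MemoryAgent:
    """Toy agent with a semantic store (timeless facts) and an episodic
    store (time-stamped events); names and structure are illustrative."""

    def __init__(self):
        self.semantic = {}    # fact -> value, e.g. "capital_of_france" -> "Paris"
        self.episodic = []    # (timestamp, event) pairs, in temporal order

    def learn_fact(self, key, value):
        self.semantic[key] = value

    def record_event(self, event, when=None):
        self.episodic.append((when or datetime.now(), event))

    def answer(self, key):
        """Factual question answering from semantic memory."""
        return self.semantic.get(key)

    def recall_last(self, n=1):
        """'What did I do recently?' -- retrieval from episodic memory."""
        return [event for _, event in self.episodic[-n:]]
```

The research questions hiding behind this sketch are exactly the ones cognitive science theories address: what to encode, what to consolidate from episodic into semantic memory, and what to forget.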

Full proposal

PhD candidate: Annet Onnes

Supervisor: Silja Renooij
Co-supervisor: Roel Dobbe

Research Project Description
Our project “Monitoring and constraining adaptive systems” focuses on fundamental research into integrating interpretable knowledge representation and reasoning with learning in the context of adaptive systems.
Since an adaptive system is allowed to change itself, we need to trust that it does not evolve into a system that violates constraints important for the environment at large in which it operates. We aim to design a monitoring system that is able to detect and react to constraint violations, predict that violations are about to occur, issue warnings, and ultimately get the adaptive system back on track. Being able to predict the behaviour of an adaptive system also allows for analysing and explaining it, which are important aspects of facilitating communication and collaboration between human and artificial adaptive agents.

Full proposal

Candidate: Pei-Yu Chen

Supervisor: Birna van Riemsdijk
Co-supervisor: Myrthe Tielman

Research project description
It is our aim to realize intimate (supportive) technologies that feel like they tend to people with care and sensitivity, allowing them to maintain their space and agency in coaction with technology. This means that the technology needs to continuously tune in to the needs of the user and assess whether its provided support is (still) in alignment with this. We refer to this as Responsible Agency: the technology’s agency is shaped around the user’s agency, and this human-machine co-entity jointly produces actions in the world.

To realize this, a user model is needed that represents what is important to the user in light of the support that is to be provided (e.g., activities, values, capabilities, norms, frequencies of behaviour, etc.). This user model is constructed through direct interactions with the user at run-time, since the necessary information, e.g., on underlying values, often cannot be derived from existing data. Based on this user model, the agent can derive what it deems to be appropriate support actions.

Full proposal

PhD Candidate: Urja Khurana

Supervisor: Antske Fokkens
Co-supervisor: Eric Nalisnick

Research Project Description

Natural language processing has a strong tradition of experimental research, in which various methods are evaluated on gold-standard datasets. Though these experiments can be valuable for determining which methods work best, they do not necessarily provide sufficient insight into the general quality of our methods for real-life applications. In addition to the outcome of a typical NLP experiment, two questions often need to be addressed before we know whether a method is suitable for a real-life application. First, what kind of errors does the method make and how problematic are they for the application? Second, how predictive are results obtained on the benchmark sets for the data that will be used in the real-life application? This project aims to address these two questions by combining advanced systematic error analyses with formal comparison of textual data and language models.

Though potentially erroneous correlations were still relatively easy to identify in the era of extensive feature engineering and methods such as k-nearest neighbors, Naive Bayes, logistic regression and SVMs, this has become more challenging now that technologies predominantly make use of neural networks. The field has become increasingly interested in exploring ways to interpret neural networks, but, once again, many studies focus on field-internal questions (what linguistic information is captured? Which architectures learn compositionality, and to what extent?). We aim to take this research a step further and see if we can use insights into the workings of deep models to predict how they will work for specific applications that use data different from the original evaluation data. Both error analysis and formal comparison methods will contribute to establishing the relation between generic language models, task-specific training data, evaluation data and “new data”. By gaining a more profound understanding of these relations, we aim to define metrics that can be used to estimate or even predict to what extent results on a new dataset will be similar to those reported on the evaluation data (both in terms of overall performance and in terms of types of errors).

Full proposal

Supervisor: Christof Monz
Co-supervisor: Frans Oliehoek

Research project description

In this sub-project, we are looking for a PhD candidate interested in combining deep learning and natural language processing research to model complex contextual information to significantly improve the quality of dialog systems.

While current deep learning methods have been shown to be very effective at generating fluent utterances, these utterances are often only poorly connected to the context of the conversation. In this project, we will investigate the role of context (agent or environment) in natural dialogue generation, and explore several research directions, such as the detection and representation of knowledge gaps, the generation of contextually appropriate responses, and novel ways to represent and access large amounts of contextual information.

Full proposal


PhD candidate: Robert Loftin

Supervisor: Frans Oliehoek
Co-supervisor: Herke van Hoof

Research project Description
This sub-project aims to push machine learning beyond traditional settings that assume a fixed dataset. Specifically, we will investigate interactive learning settings in which two or more learners interact by giving each other feedback to reach an outcome that is desirable from a system designer’s perspective. The goal is to better understand how to structure interactions so as to effectively progress towards the desirable outcome, and to develop practical learning techniques and algorithms that exploit these insights.

Full proposal

Candidate: Bernd Dudzik

Supervisor: Hayley Hung
Co-supervisor: Dan Balliet

Research project description

An important but under-explored problem in computer science is the automated analysis of conversational dynamics in large unstructured social gatherings such as networking or mingling events. Research has shown that attending such events contributes greatly to career and personal success. While much progress has been made in the analysis of small pre-arranged conversations, scaling up robustly presents a number of fundamentally different challenges. Moreover, how these interactions translate into actual collaborations remains unknown.

Unlike analysing small pre-arranged conversations, during mingling, sensor data is seriously contaminated. Moreover, determining who is talking with whom is difficult because groups can split and merge at will. A fundamentally different approach is needed to handle both the complexity of the social situation as well as the uncertainty of the sensor data when analysing such scenes.

The main aim of the project is to address the following question: how can multi-sensor processing and machine learning methods be developed to model the dynamics of conversational interaction in large social gatherings using only non-verbal behaviour? The focus of this project is to measure conversation quality from multi-sensor streams in crowded environments, and its relationship with someone’s willingness to collaborate with their conversation partners.

The successful candidate will develop automated techniques to analyse multi-sensor data (video, acceleration, audio, etc.) of human social behavior, and will interact closely with a PhD student from social science and possibly other PhD students in the Hybrid Intelligence project.

Full proposal


Candidate: Wijnand van Woerkom

Supervisor: Henry Prakken
Co-supervisor: Davide Grossi

Research Project Description

This project aims to explain the outcomes of data-driven machine-learning applications that support decision-making procedures to the end users of such applications, such as lawyers, business people or ordinary citizens. The techniques should apply in contexts where a human decision maker is informed by data-driven algorithms and where the decisions have ethical, legal or societal implications. They should generate explanations of outputs for specific inputs, such that the reasons for an output can be understood and critically examined for quality. The project will especially focus on ‘black-box’ applications, in that it will develop model-agnostic methods, assuming only access to the training data and the possibility to evaluate a model’s output for given input data. This makes the explanation methods independent of a model’s internal structure, which is important since in many real-life applications the learned models are not interpretable or accessible, for instance, when the model is learned by deep learning or when the application is proprietary.
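A minimal model-agnostic probe in this spirit, assuming only query access to the model (the boolean-feature setting and all names are invented for illustration):

```python
def pivotal_features(model, instance):
    """Model-agnostic probing: flip each boolean feature of `instance`
    and report the features whose flip changes the model's output.
    Only the ability to query the model is required, never its internals."""
    base = model(instance)
    reasons = []
    for name, value in instance.items():
        probe = dict(instance)
        probe[name] = not value        # perturb one feature at a time
        if model(probe) != base:
            reasons.append(name)
    return reasons
```

Because the probe only calls the model, it works identically for a deep network or a proprietary service; the research concerns making such explanations faithful and critically examinable, not just computable.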

Full proposal

Candidate: Merle Reimann

Supervisor: Koen Hindriks
Co-supervisor: Christof Monz

Research Project Description

A key challenge in human-machine dialogue is to give the user more flexibility and allow them to co-regulate the conversation, instead of being directed by the agent’s conversation script or flow. We will develop a conversational agent that is able to establish and maintain a common understanding in co-regulation with its user, by offering the flexibility to deviate from the conventional sequences in structured question-and-answer or instructional dialogues. This may, for example, occur in education when a child is asked to solve a math problem and is not sure about the answer, in health care when a patient hesitates over a survey question and would like to ask a clarifying question, or when a user is unclear about a cooking assistant’s instructions. In such cases, rather than moving on to the next math problem or survey question, the user should be allowed to deviate from the main dialogue line (‘the happy flow’) to come to a mutual understanding and restore common ground.

We will collect conversational data from which to extract common conversational patterns that indicate a user wants to take control of the dialogue flow, and will research and design an agent that is able to handle this. For example, the agent should be able to decide to either (i) ask a follow-up question, (ii) provide feedback (verbalize its understanding of the human’s answer), or (iii) give additional explanatory information to assist the user.

A second aim of this project is to address the level of user engagement in such highly structured conversations, which tend to be repetitive, by enabling the agent to memorize and refer back to relevant pieces of information in the conversation history or in earlier conversations with the same user, and by incorporating variation in the agent’s utterances.
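A deliberately crude sketch of the decision among (i)–(iii) (the keyword triggers are placeholders for the conversational patterns the project will actually learn from data):

```python
def next_move(user_utterance):
    """Illustrative decision rule for deviating from the 'happy flow';
    the keyword triggers are placeholders for patterns mined from data."""
    text = user_utterance.lower()
    if "not sure" in text or "maybe" in text:
        return "feedback"       # (ii) verbalize understanding of the answer
    if "?" in text or "what do you mean" in text:
        return "clarify"        # (iii) give additional explanatory information
    return "follow_up"          # (i) move on with a follow-up question
```

A learned agent would replace the keyword tests with classifiers over the collected conversational data, but the control decision it feeds stays the same: deviate from the scripted flow, or continue it.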

Full proposal

PhD Candidate: Jordi Top

Supervisor: Rineke Verbrugge
Co-supervisor: Catholijn Jonker

Research Project description

In hybrid intelligent systems in which humans and agents interact together, it is important to be able to understand and detect non-cooperative behavior such as lying and other forms of deception, both in people and in software agents. Cognitive scientists have shown that understanding the concept of lying and being able to maintain a lie over time requires second-order theory of mind: reasoning about what the other person (or agent) thinks you think. The same level of theory of mind is also required for detecting whether others are lying to you. In this PhD project we will:

  • investigate the logical and computational foundations of deception and deception detection in hybrid groups;
  • lay the theoretical groundwork for modelling and analyzing non-cooperative behavior in several communicative contexts such as negotiation games and coalition formation games;
  • develop principled methods for the design of software agents that can detect when other group members are engaged in non-cooperative behavior such as lying;
  • build agent-based models and/or computational cognitive models of deception and deception detection;
  • use simulation experiments in order to predict the outcomes of lab experiments to be performed.

The objective of the temporary position is the production of a number of research articles in peer-reviewed scientific journals and conference proceedings, which together will form the basis of a thesis leading to a PhD degree (dr.) at the University of Groningen.

Full proposal

Candidate: Íñigo Martínez De Rituerto De Troya

Supervisor: Roel Dobbe
Co-supervisor: Virginia Dignum

Research Project Description

This PhD project starts from two central assumptions. Firstly, that all “artificial intelligence systems” are hybrid, requiring the integration of human and machine activities. More than that, AI systems are fundamentally sociotechnical: they are developed through myriad design tradeoffs and “hard choices”, and situated in a societal context where the system informs, influences or automates human decisions. Secondly, that AI systems contribute to “programmable infrastructures”: these systems do not stand alone but are integrated into, and restructure, existing infrastructures and organizations, often with implications for democratic governance and sovereignty.

The PhD candidate will work in a growing team addressing issues of hard choices and programmable infrastructures, consisting of people with different disciplinary backgrounds. The position focuses on understanding how human factors inform the design of AI systems, making sure these are safe in their behavior, just in their treatment of people, and empower affected stakeholders to hold the system and its managers, operators and developers accountable.

The candidate will work to marry notions from traditional computer science and systems engineering with methods from situated design, participatory design, computational empowerment, action research, science-and-technology studies, and other disciplines aiming to increase empowerment and participation in design. The project studies and facilitates the implementation of a data-driven, learning-based control scheme in the operation of electrical distribution networks, working closely with Dutch utility companies and other societal stakeholders engaged in the transition to renewable energy resources. A second case study will be pursued in a more administrative context within the Digicampus.

In these studies, the PhD candidate will work together with stakeholders from the different sectors to both study and inform system development. The aim is (1) to understand what human factors inform the development process of AI systems and how PD and other design methods can facilitate this, and (2) what human dimensions in the situated context need to be addressed to integrate and operate the AI system in a safe and just manner, seeking collaboration with appropriate fields of expertise. For both these contributions, the PhD candidate will be leaning on emerging literature to structure his/her investigations and seek advice and collaboration across the different disciplines within the department, the Hybrid Intelligence Centre and internationally.

The selected candidate will also play a central role in building an international community of scholars advancing sociotechnical and situated studies of AI systems. This project will benefit from collaborations with dr. Seda Gürses in the Multi-Actor Systems department, prof. Virginia Dignum and the Responsible AI group at Umeå University in Sweden, and prof. Joan Greenbaum.

Full proposal

PhD candidate: Andreas Sauter

Supervisor: Frank van Harmelen
Co-supervisor: Erman Acar
Co-supervisor: Aske Plaat

Research Project Description
We will design an adaptive system to be used as a component in an artificial (virtual or physical) agent, which will enable structural updates on causal diagrams through logical reasoning and learning processes. We will study its theoretical properties and test it empirically in virtual and physical multi-agent environments. We will use the empirical findings to refine its theory.

Full proposal

Candidate: Maria Heuss

Supervisor: Maarten de Rijke
Co-supervisor: Koen Hindriks

Research Project Description

Learning to rank is at the heart of modern search and recommender systems. Increasingly, such systems are optimized using historical, logged interaction data. Such data is known to be biased in a myriad of ways (position bias, trust bias, selection bias, …). In recent years, various de-biasing methods have been proposed, for different types of bias. These methods are limited, either in the types of bias they address, the assumptions about user behavior they make, or the degree to which they are actionable. We will design debiasing methods that are explainable and that can be applied selectively, depending on context, user (or user model), and additional constraints such as fairness.

Interaction with large amounts of information is one of the original scenarios for hybrid intelligence. For interactions to be effective, algorithmic decisions and actions should be explainable. In the context of learning to rank from logged interaction data for search and recommendation, this implies that the underlying debiasing methods need to be explainable – in terms of the bias they aim to address, the assumptions they make, the re-estimations they perform, and their impact on learned search and recommendation behavior. Importantly, the explanations should be actionable in the sense that they inform the target user about the changes required to obtain a different outcome.
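The canonical example of such a debiasing step is inverse propensity scoring for position bias; in the toy sketch below the examination propensities are assumed known, whereas estimating them, and explaining that estimate, is part of the research problem.

```python
def ips_relevance_estimate(logs, propensities):
    """Inverse-propensity-scoring estimate of per-document relevance from
    position-biased click logs.  Each log entry is (doc_id, position,
    clicked); `propensities[p]` is the probability that position p is
    examined by the user."""
    weighted_clicks = {}
    impressions = {}
    for doc, pos, clicked in logs:
        impressions[doc] = impressions.get(doc, 0) + 1
        if clicked:
            # Re-weight each click by how likely its position was examined,
            # so documents shown low on the page are not under-counted.
            weighted_clicks[doc] = (weighted_clicks.get(doc, 0.0)
                                    + 1.0 / propensities[pos])
    return {doc: weighted_clicks.get(doc, 0.0) / n
            for doc, n in impressions.items()}
```

The propensity weights themselves are a natural unit of explanation: they state exactly which bias the method corrects and what user-behavior assumption (examination probability per position) it relies on.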

Full proposal

Candidate: Sharvaree Vadgama (

Supervisor: Erik Bekkers (
Co-supervisor: Jakub Tomczak (

Research Project Description

The aim of this project is to model, study, and visualize decision-making processes in (convolutional) neural networks. In particular, emphasis is put on explainability in critical applications where reliability and trustworthiness of AI systems are crucial, such as in the medical domain.

A main theme will be the design of AI systems through recurrent NNs, with a focus on modeling (and visualizing) the dynamics of decision making rather than relying on the static black-box mechanics of feed-forward networks. For example, when tracing a blood vessel in medical image data, one does not need to represent or remember the vessel as a whole (a feedforward mechanism); it is more important to learn how to connect local line segments (the dynamics of reasoning). Similarly, in image classification problems, it is important to learn how local cues relate to each other and how to “connect the dots”, rather than trying to come to a decision given all information at once. Being able to capture and model such dynamics of reasoning opens doors for interaction with and visualization of these systems, which leads to explainability. Your research will build upon an exciting mix of geometric (equivariance, invertibility, PDEs) and probabilistic (VAEs, normalizing flows) principles to ensure reliability and interpretability of AI systems.

Full proposal

Candidate: Loan Hu (

Supervisor: Stefan Schlobach (
Co-supervisor: Victor de Boer (

Research proposal description

In HI we envision that each individual agent has access to a global set of knowledge sources, some specific formalisation of its own knowledge (and beliefs), and some notion of the other agents in its network, and in particular of what those agents know.

Knowledge graphs (KGs) can play an important role here, both to store and provide access to global knowledge, common and accessible to human and artificial agents alike, and to store the local knowledge of individual agents in a larger network of agents.

Unfortunately, current formalisms are not sufficiently well-designed to work with complex, conflicting, dynamic and contextualised knowledge. To make KGs suitable formalisms for data and knowledge exchange in an HI network, individual agents must be able to adapt their own knowledge in a KG (or at least the active part they reason with) in response to interaction with one or more actors in their network.

This research is part of the research line “Adaptive HI”, where adaptivity mostly refers to the ability to represent and adapt knowledge dynamically based on context and interaction with other agents. We aim to contribute to research question 3, “How can learning systems accommodate changes in user preferences, environments, tasks, and available resources without having to completely re-learn each time something changes?”, by developing new knowledge representation solutions for representing such contexts.

Full proposal

Candidate: Putra Manggala (

Supervisor: Eric Nalisnick (
Co-supervisor: Holger Hoos (

Research project description:

This project aims to use Bayesian priors defined in function space to enable data-efficient and human-guided model adaptation. Bayesian modeling is an appealing paradigm from which to build collaborative hybrid intelligent systems, because priors allow existing information and human expertise to be given primacy in model specification. One obstacle that often prevents the adoption of the Bayesian paradigm is prior specification. The prior must be specified as a distribution on parameter space, which is high-dimensional and complicated for modern models (e.g. neural networks). It is typically challenging to translate human knowledge and intuitions, which are often contextualized in the space of data, to the parameter space. Our recent work overcomes this gap between function and parameter space by defining the prior on the former and then reparameterizing it into a proper distribution on the latter. The intuition is that the prior encourages the model’s predictions to agree with those of a teacher or reference model, which could be a human (imitation learning), a model for a related task (transfer learning), a previous iteration of the same model (continual learning), or a less expressive model (regularization). We then leverage these formulations for HI scenarios in which the model needs to be adapted quickly while protecting against overfitting or respecting human inputs. We plan to validate our results in a healthcare application, in particular the automated gating of cytometry data.
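The agree-with-a-teacher intuition can be sketched in a toy form (everything below is invented for illustration and is not the project’s actual method): instead of penalising the weights directly, the “prior” penalises disagreement with a reference model on a set of context inputs in data space. For a one-parameter linear student, the penalised objective has a closed form:

```python
import numpy as np

rng = np.random.default_rng(1)

# Scarce training data for a 1-D regression problem.
X = rng.uniform(-1, 1, size=(5, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, 5)

# A trusted reference model, e.g. from a related task (transfer learning).
teacher = lambda x: 1.8 * x[:, 0]

# Function-space "prior": penalise disagreement with the teacher on context inputs,
# rather than placing a distribution on the weights themselves.
X_ctx = np.linspace(-1, 1, 50).reshape(-1, 1)
lam = 1.0

# For a linear student f(x) = w*x this yields a ridge-like closed form:
#   w = argmin ||Xw - y||^2 + lam * ||X_ctx w - teacher(X_ctx)||^2
A = X.T @ X + lam * X_ctx.T @ X_ctx
b = X.T @ y + lam * X_ctx.T @ teacher(X_ctx)
w = np.linalg.solve(A, b)

print("student weight:", w)  # pulled toward the teacher, informed by the data
```

With only five observations, the teacher term dominates and the student stays close to the teacher’s behaviour; with more data, the data term would take over, which is the adaptation-without-overfitting trade-off described above.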

Full proposal

Supervisor: Stefan Schlobach (
Co-supervisor: Frank van Harmelen (

Project Description:

We will develop a game-theoretic framework which can capture cooperation scenarios in heterogeneous groups (mixes of human and artificial agents). Such a study will require the development of the necessary game-theoretic concepts; among them, refinements of existing equilibrium concepts, bounded-rationality models, and provably good guarantees.
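As a minimal illustration of the kind of concept involved (the game and its payoffs are made up), a pure-strategy Nash equilibrium of a two-player game can be checked directly from the payoff matrices; equilibrium refinements would then select among the equilibria found:

```python
import numpy as np

# Toy human-agent coordination game: rows = human's action, cols = agent's action.
human = np.array([[3, 0],
                  [0, 2]])
agent = np.array([[2, 0],
                  [0, 3]])

# A pure-strategy Nash equilibrium: neither player gains by deviating unilaterally,
# i.e. each player's payoff is a best response to the other's action.
equilibria = [
    (i, j)
    for i in range(2) for j in range(2)
    if human[i, j] >= human[:, j].max() and agent[i, j] >= agent[i, :].max()
]
print(equilibria)
```

Both coordination outcomes survive this check even though the players rank them differently, which is exactly where a refinement concept would be needed to predict (or prescribe) behaviour.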

Full proposal

Postdoc: Mike Ligthart (

Supervisor: Koen Hindriks (

Research Project Description
A key challenge in human-robot interaction is to ensure engagement over time. This is particularly true for child-robot interaction. In this project we will research how personalisation can be used to address this challenge and motivate children. We will test and evaluate our ideas in the education domain, with the aim of engaging children to keep practicing math problems with a robot. The project will build on previous research on child-robot interaction and on our earlier work on a math tutor robot. Three key challenges will be combined to design a long-term engaging robot. 1) We will create a conversational design for long-term engaging child-robot interaction. 2) We will design a memory of the interaction history (a series of interaction sessions) and strategies to use that memory to personalise the interaction. 3) The final challenge will be to design a personalised user model to further tailor the interaction to the child’s needs and preferences.

Full proposal

PhD Student: Enrico Liscio (

Research Project Description
Personal values are the abstract motivations driving our actions and opinions. Estimating the values behind participants’ utterances in a debate or deliberation means understanding the motives behind their statements, which ultimately helps explain why their opinions may differ. However, the subjective and interpretable nature of values poses a challenge to automatic estimation through AI systems. To this end, we propose the exploration of hybrid methods for the automatic estimation of personal values, combining the intuition of humans with the scalability of AI. Natural Language Processing techniques will be investigated with the goal of complementing and supporting human high-level abstract reasoning.

PhD Student: Mark Adamik (

Research Project Description

This PhD project will tackle the problem of enabling intelligent systems to reason over common-sense knowledge to achieve their tasks. We will focus on the specific case of embodied agents performing everyday activities such as cooking or cleaning, i.e. tasks that require reasoning over different types of knowledge (we call these knowledge-intensive tasks), using the available common-sense knowledge graphs, i.e. structured data sources that contain knowledge about the everyday world. We will propose new methods that allow robots to achieve knowledge-intensive tasks by combining existing structured data (knowledge graphs) with situational, personalised knowledge captured through their sensors. We will study the hybrid application of advanced symbolic and subsymbolic AI methods to robotic systems operating in complex real-world scenarios.

Full Proposal

PhD Student: Selene Baez (

Research Project Description
We will study trust as a relationship between social agents in a multimodal world that involves multifaceted skills and complex contexts. This work aims to create and evaluate a computational model of trust from the robot’s perspective, towards trusting humans in collaborative tasks.

PhD Student: Lea Krause (VUA) (

Research Project Description
What if the KG does not have a simple answer? How can we formulate a complex answer that is still useful? We do not take a simple no for an answer: an intelligent agent should explain what it does know and how that knowledge relates to the question. A secondary issue is that the status of the answer may be questionable: how uncertain is the source, how knowledgeable is it, how widely is the knowledge shared, is it also disputed, how recent is it, and how well connected is it?

How can an answer be generated in natural language that is correct, informative, and effective?

PhD Student: Delaram Javdani (

Research Project Description
Digital information technology is increasingly interwoven with our society and individuals’ daily lives. On the one hand, a tremendous amount of data is becoming available in machine-interoperable formats; at the same time, how humans can intuitively access and learn from such large-scale knowledge is becoming an urgent question. Addressing this challenge requires research on how to shape knowledge-centered human-machine conversations and meaning-making in an engaging and intuitive manner. Therefore, a project called “Conversational access of large-scale knowledge graphs” is proposed, with the aim of investigating conversational AI technologies to achieve meaningful, engaging, and enjoyable interaction between individual persons and large-scale knowledge graphs (KGs). This includes selecting and packaging relevant information into narratives and choosing suitable strategies to convey knowledge. The main research question is: “How to shape knowledge-centered human-machine conversations and meaning-making in an engaging and intuitive manner?”

PhD student: Masha Tsfasman (

Research Project Description
The topic of this project is memory for conversational agents. On the path towards a socially aware memory system, we are creating a data-driven model that predicts what is memorable from conversations for the different participants of a meeting. For that, we are exploring different modalities of human-human interaction: our first results suggest that group eye-gaze behaviour is highly predictive of conversational memorability. This topic is relevant for HI since we are looking into a better understanding of humans to advance the quality of long-term human-agent interactions and collaborations.

PhD student: Morita Tarvirdians ( 

Research Project Description 
The coming years may create diverse needs for intelligent social conversational robots that are required to collaborate with humans in various environments. Therefore, in order to build conversational agents and robots that can hold human-like conversations, we need to analyse the distinct abilities of humans and artificial intelligence in making conversation. For example, using theory of mind (Premack & Woodruff, 1978), people can infer others’ perspectives and stances. This conception of the mind is an important achievement in human development because it directly impacts effective communication and social interaction (Farra et al., 2017). AI, on the other hand, is capable of performing complex calculations, extracting knowledge from messy data, and connecting to various information sources to quickly update its current data. So, by combining human skills and artificial intelligence, we can create a hybrid intelligence that understands the stances of humans and the reasons behind their standpoints. Argument mining in NLP can now determine not only what position people take but also why they hold the opinions they do (Lawrence & Reed, 2020). However, argument mining is mainly applied to social media content and asynchronous data (Draws et al., 2022; Vilares & He, 2017). Yet conversation in its natural form is multifaceted; therefore, in addition to linguistic features, non-verbal cues for expressing perspectives should be examined. During this project, we will develop a model that captures the multifaceted nature of a person’s perspective on a given topic as it evolves over time, using both human and artificial intelligence, and we will implement this perspective model in a social robot.

PhD student: Siddarth Mehrotra (

Research Project Description
Trust is an important component of any interaction, but especially when we are interacting with a piece of technology which does not think like we do. Nowadays, many systems are being developed which have the potential to make a difference in people’s lives, from health apps to robot companions. But to reach their potential, people need to have appropriate levels of trust in these systems. And therefore, systems need to understand how humans trust them, and what to do to promote appropriate trust.

In this research, as a first step towards eliciting appropriate trust, we need to understand what factors influence trust in AI agents. Despite the growing research attention on trust in AI agents, a lot is still unknown about people’s perceptions of trust in them. Therefore, we wish to know: what is it that makes people trust or distrust AI?

Additionally, it is important that the human’s trust in the artificial agent is appropriate. The challenge is to ensure that humans tune their trust towards the agent, since we do not have control over the human. In this research, however, we leverage the idea that if agents reflect on their own trustworthiness, we may be able to influence humans to appropriately fine-tune their trust in them. With information regarding the agent’s trustworthiness, a human can adapt to the qualities and limitations of the agent and, consequently, adjust their utilization of the agent accordingly. All in all, we have the following research questions:

  1. How can we guide humans to appropriately trust their AI systems & how can we verify it?
  2. How might we use our understanding of appropriate trust to design Human-AI agent interactions?

This topic relates to hybrid intelligence in that a key driver for achieving effective human-AI interaction is mutual trust. To this end, we should develop artificial agents that are able to reason about and promote appropriate mutual trust.

PhD student: Ludi van Leeuwen ( 

Research project description
The aim of this project is to investigate the rational use of probabilistic reasoning. Many domains to which AI can be applied have to deal with a lack of “objective” frequency data, which AI systems need. This means that we would need to collect more data or elicit frequency data from domain experts. This is a labour-intensive process, and it is unclear how exactly it should be done.

Furthermore, the subjectivity involved in expert elicitation and data collection is understudied. Hence, the aim of this project is to discover under what circumstances we can use probabilistic methods, in particular Bayesian networks, for reasoning with uncertainty and evidence. Bayesian networks seem like a good tool for reasoning with uncertainty and evidence, but many aspects of this formalism should be open for further investigation in data-poor domains, such as the effect of the modeller’s subjectivity, imprecise probability, and relevance.

Bayesian networks are a hybrid tool for reasoning with evidence: people model the networks and find evidence, but are not so good at calculating the Bayesian inference updates. This is where human modellers and computers complement each other. The use of Bayesian networks, and of probabilistic methods in general, is a developing field, and using them responsibly means mapping out in which areas we can trust them, and knowing where they fall short of a rational model.
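This division of labour can be made concrete with a minimal example (the network and all numbers are invented): the human elicits the structure and the probabilities, and the machine carries out the exact Bayesian update, which is where human intuition is known to be unreliable. Varying the elicited prior also shows how sensitive the conclusion is to the modeller’s subjectivity:

```python
# Hypothetical two-node network for reasoning with evidence: H -> E.
p_h = 0.01                               # prior on hypothesis H, elicited from an expert
p_e_given_h = {True: 0.95, False: 0.10}  # P(E=true | H), also elicited

# The machine's part: an exact Bayesian update, P(H=true | E=true).
joint = {h: (p_h if h else 1.0 - p_h) * p_e_given_h[h] for h in (True, False)}
posterior = joint[True] / sum(joint.values())
print(f"P(H=true | E=true) = {posterior:.3f}")

# Sensitivity to the subjective prior: a tenfold change in the elicited prior
# shifts the posterior substantially, which is what needs to be mapped out
# before trusting the model in a data-poor domain.
for prior in (0.001, 0.01, 0.1):
    j = {h: (prior if h else 1.0 - prior) * p_e_given_h[h] for h in (True, False)}
    print(prior, round(j[True] / sum(j.values()), 3))
```

Despite the strong likelihood ratio, the posterior stays below 10% under the elicited prior, a result that many human reasoners misjudge (base-rate neglect); the machine computes it mechanically.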

PhD student: Muhan Hou (

Research project description
This research focuses on human interactive robot learning, aiming to transfer skills from human teachers to robots in an interactive and intuitive manner. It is closely related to HI, since we share the same goal of improving the interaction between machines (either abstract or embodied agents) and humans, and ultimately endowing artificial intelligence with the capability to survive and thrive in the real world.

PhD student: Jonne Maas (

Research project description
This research focuses on responsible AI development. In particular, I’m interested in the power dynamics underlying AI systems. Following democratic theories, I argue that current power dynamics are morally problematic, as end-users of systems are left to the goodwill of those who shape a system, as well as of the system itself, without means to hold development and deployment decisions properly accountable. Responsible AI, however, requires a responsible development and deployment process. Without just power dynamics, such responsible AI development is difficult, if not impossible, to achieve. My research is thus one step towards Responsible AI.

The direct relevance to the HI Consortium lies in the connection with the responsible research line. Although I do not program myself, the philosophical approach of my dissertation contributes to a better understanding of what it means to develop Responsible AI in general. In addition, the philosophical insights from my dissertation could be used for AI systems that the HI consortium develops.

PhD student: Mustafa Mert Çelikok (

Research project description
This project concerns developing multiagent reinforcement learning methods that will allow an AI agent to interact with a human partner in order to augment the decision-making of the human. The project is closely related to the overall agenda of intelligence augmentation and the long-term goals of the Hybrid Intelligence project.

Candidate: Xinyi Chen

Research project description
Argumentative Dialog Systems

One of the most pressing matters holding robots back from taking on more tasks and reaching widespread deployment in society is their limited ability to understand human communication and take situation-appropriate actions. This research project is dedicated to addressing this gap by developing the underlying data-driven models that enable a robot to engage with humans in a socially aware manner.

This research targets the development of an argumentative dialogue system for human-robot interaction, which will explore how to fuse verbal and non-verbal behaviour to infer a person’s perspective. I use, and further develop, reinforcement learning techniques to drive the robot’s argumentative strategy for deliberating topics of current social importance, such as global warming or vaccination.

PhD student: Johanna Wolff (

Research project description
The idea is to use techniques from knowledge representation and reasoning, and from non-monotonic reasoning, to create a dynamic system that is able to adapt to changes in the user’s knowledge, beliefs, preferences and goals. Ideally, any changes in the system should also be comprehensible to the user, which means that the system should not only acknowledge these changes but also present them to the user in an understandable way.

This is necessary for personal agents to be able to model the user accurately and therefore offer the best support. On the other hand, the user needs to be aware of the impact their input has on the system in order to trust the personal agent.