Funded projects

PhD candidate: Aishwarya Suresh (A.SureshIyer@tudelft.nl)

Supervisor: Filippo Santoni de Sio (f.santonidesio@tudelft.nl)
Co-supervisors: Ibo van de Poel (i.r.vandepoel@tudelft.nl), Victor de Boer (v.de.boer@vu.nl)

Research Project Description
Many current AI systems implicitly support or constrain human moral judgement, e.g. the selection of information on Facebook, recommendation systems, and targeted advertising. The goal of this project is to make this support (or constraint) explicit as a step towards empowering better human moral judgements. Our working assumption is that AI should support but not take over human moral judgement and decisions. To determine the desirable role of AI support, we should first better understand human moral judgement, and particularly the importance of context in it. Two case studies will be carried out on how AI may support human moral judgement in the context of changing values. On the basis of these, desirable (and undesirable) forms of AI support will be identified, and the insights will be translated into a number of design principles for AI support of human moral judgement.

Full proposal
Poster

PhD candidate: Cor Steging (c.c.steging@rug.nl)

Supervisor: Bart Verheij (bart.verheij@rug.nl)
Co-supervisor: Silja Renooij (s.renooij@uu.nl)

Research Project Description
A core puzzle in today’s artificial intelligence is how knowledge, reasoning and data are connected. To what extent can knowledge used in reasoning be recovered from data with implicit structure? Can such knowledge be correctly recovered from data? To what extent does knowledge determine the structure of data that results from reasoning with the knowledge? Can knowledge be selected such that the output data generated by reasoning with the knowledge has desired properties? 

By investigating the relations between knowledge, reasoning and data, we aim to develop mechanisms for the verification and evaluation of hybrid systems that combine manual knowledge-based design and learning from data. The focus will be on structures used in reasoning and decision-making, in particular logical and probabilistic relations (as in Bayesian networks) and reasons (pro and con) and exceptions (as in argument-based decision making).

Full proposal
Poster

PhD candidate: Michiel van der Meer (m.t.van.der.meer@liacs.leidenuniv.nl)

Supervisor: Catholijn Jonker (C.M.Jonker@tudelft.nl)

Co-supervisors: Aske Plaat (a.plaat@liacs.leidenuniv.nl), Piek Vossen (piek.vossen@vu.nl)

Research Project Description
The Perspectives in Deliberation project develops Artificial Intelligence techniques to uncover the underlying structures of debates and group deliberations, with the aim of helping participants understand why others hold a different opinion in the debate. To this end, we first want to use computational linguistics to extract what we call micro-propositions from text: proposition mining. Secondly, we want to model the implications of these propositions for stakeholders: implication mining. Thirdly, we want to extract and understand the perspectives of the stakeholders on these implications: perspective mining.

In addition to understanding the debate and the perspectives, the AI developed in this project will interact with stakeholders about the interpretation of what they contributed as statements and arguments.

Full proposal
Poster

PhD candidate: Tiffany Matej (tiffanymatej@gmail.com)

Supervisor: Dan Balliet (d.p.balliet@vu.nl)
Co-supervisor: Hayley Hung (h.hung@tudelft.nl)

Research Project Description
This PhD project is a collaboration between psychology and computer science to develop hybrid intelligence that can understand, predict, and potentially aid in initiating collaborative behavior. To build such machines, we must further develop our understanding of how people select their partners and initiate collaborations. The project will apply state-of-the-art methods and techniques from both fields to advance our understanding of these issues.

Partner selection is fundamental to understanding how people initiate cooperative relations and avoid being exploited by non-cooperative individuals – two key features of human sociality thought to underlie why humans are such a cooperative species (see Barclay, 2013). This research will test and develop theory about how people choose cooperative partners. The project will use both naturalistic settings (e.g., social networking at scientific conferences) and experimental settings (e.g., experimental group tasks) to examine the non-verbal and verbal behaviors during social interactions that can be used to predict whether people select another person as a collaboration partner. During these studies, participants will wear multi-modal sensors and be video recorded while interacting with other people for the first time; these recordings will be used to capture non-verbal and verbal behaviors that can predict how people evaluate their interaction partner (e.g., their traits, motives), the social interaction (e.g., closeness, social power), and behavioral motivations (e.g., avoiding versus approaching the person in future interactions).

This PhD project is a collaboration between Psychology (Daniel Balliet) and Computer Science (Hayley Hung, Rineke Verbrugge), and the PhD candidate will also work closely with another PhD student supervised directly by Hayley Hung. The candidate should be open to working in a multi-disciplinary team and have a general interest in establishing a closer connection between psychology and computer science.

Full proposal
Poster

PhD candidate: Nicole Orzan (n.orzan@rug.nl)

Supervisor: Davide Grossi (d.grossi@rug.nl)
Co-supervisor: Erman Acar (erman.acar@uva.nl)
Promotor: Davide Grossi (d.grossi@rug.nl)

Research Project Description
Deliberation is a key ingredient for successful collective decision-making. In this project we will:

– investigate the mathematical and computational foundations of deliberative processes in groups of autonomous agents.
– lay the theoretical groundwork for modelling and analyzing how opinions are exchanged and evolve in groups of autonomous decision-makers.
– develop principled methods for the design of deliberation procedures that can provably improve the quality of group decisions.

Full proposal
Poster

PhD candidate: Emre Erdogan (e.erdogan1@uu.nl)

Supervisor: Frank Dignum (f.p.m.dignum@uu.nl)

Co-supervisor: Pinar Yolum (p.yolum@uu.nl), Rineke Verbrugge (l.c.verbrugge@rug.nl) 

Research Project Description
This project at Utrecht University investigates and develops a computationally efficient theory of mind that is theoretically sound.
The research will focus on which type of general knowledge about other agents has to be kept in order to maintain a usable theory of mind. How much of the historical information and premises about preferences, goals and motivations should be taken into account? The premise is that the theory of mind to be kept depends on the level of interaction that is aimed for and the context in which this interaction takes place. The developed theory will be applied in the area of collaborative privacy. A typical situation in collaborative privacy is that a user in a collaborative system, such as an online social network, would like to share content that could be co-owned by others, such as group pictures or collaboratively edited documents. At the time of sharing, the user has to take into account how the shared content can or will be used, how this sharing would affect other users, and act accordingly. By employing a computationally viable model of theory of mind, a user can reason about other co-owners’ privacy expectations for the content in a given context and make a sharing decision based on that.

Full proposal
Poster

Postdoc: Kevin Godin-Dubois (k.j.m.godin-dubois@vu.nl)

Supervisor: Karine Miras (k.dasilvamirasdearaujo@vu.nl)
Co-supervisor: Decebal Mocanu (d.c.mocanu@utwente.nl)
Promotor: Guszti Eiben (a.e.eiben@vu.nl)

Research Project Description
By jointly addressing NeuroEvolution (NE) and Reinforcement Learning (RL) we open the door to producing robots that can improve at the population and individual levels. In the former case, this encompasses the full scope of genetic exploration (neural topology, sensory apparatus, morphology…). In the latter case, a given robot will leverage this genetic baggage to better accommodate local conditions including, in the case of physical robots, bridging the reality gap. Combining the advantages of offline body/brain optimization, through NE, with online adaptation, through RL, is an essential aspect of biological intelligence. Therefore, devising methodologies to exploit the best of both worlds is expected to have a strong impact on Artificial Intelligence for Embodied Robots.
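
As a rough illustration of this combination (not the project's actual algorithms), the sketch below wraps a toy "lifetime learning" phase, standing in for RL, inside an evolutionary outer loop; the genome encoding, reward function and hyperparameters are all made up for the example.

# Illustrative sketch only: an outer neuroevolution loop with an inner
# lifetime-learning phase standing in for reinforcement learning.
# The toy reward and all names are hypothetical, not from the project.
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.normal(size=8)            # hidden optimum of the toy environment

def reward(controller):
    """Toy stand-in for an episode return: higher when closer to TARGET."""
    return -np.sum((controller - TARGET) ** 2)

def lifetime_learning(controller, steps=20, sigma=0.05):
    """Inner loop: a given robot adapts its inherited controller online
    (simple hill climbing here, as a stand-in for RL)."""
    best = controller.copy()
    for _ in range(steps):
        candidate = best + rng.normal(scale=sigma, size=best.shape)
        if reward(candidate) > reward(best):
            best = candidate
    return best

def evolve(pop_size=16, generations=30, sigma=0.3):
    """Outer loop: evolution optimises the genome (initial controller);
    fitness is measured after lifetime learning."""
    population = [rng.normal(size=TARGET.shape) for _ in range(pop_size)]
    for _ in range(generations):
        adapted = [lifetime_learning(g) for g in population]
        fitness = np.array([reward(a) for a in adapted])
        parents = [population[i] for i in np.argsort(fitness)[-pop_size // 2:]]
        population = [p + rng.normal(scale=sigma, size=p.shape)
                      for p in parents for _ in range(2)]
    return max(population, key=lambda g: reward(lifetime_learning(g)))

if __name__ == "__main__":
    best_genome = evolve()
    print("post-learning reward:", round(reward(lifetime_learning(best_genome)), 3))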

Full proposal
Poster

PhD candidate: Niklas Höpner (nhopner@gmail.com)

Supervisor: Herke van Hoof (h.c.vanhoof@uva.nl)
Co-supervisor: Ilaria Tiddi (i.tiddi@vu.nl)

Research Project Description
This project at the University of Amsterdam combines (deep) reinforcement learning research with prior knowledge from knowledge graphs to learn explainable strategies for complex sequential tasks.
Many problems of practical interest are concerned with optimising sequential decision making. Think, for example, of finding optimal trajectories for vehicles or robots, or deciding which medical tests to run. Methods for classical planning based on symbolic representations are typically explainable to human collaborators but rely on an exact problem description, while data-driven (e.g. reinforcement learning) approaches do not rely on a provided problem specification but are data-hungry and inscrutable. Combining the complementary strengths of these approaches is expected to advance the state of the art in knowledge representation and reinforcement learning.

Full proposal
Poster

PhD candidate: Bram Renting (bramrenting@gmail.com)

Supervisor: Holger Hoos (hh@liacs.nl)
Co-supervisor: Catholijn Jonker (c.m.jonker@tudelft.nl)

Research Project Description
The goal of this PhD project is to develop automated machine learning methods for changing/non-IID data, with applications to standard learning scenarios as well as to automated negotiation. Such techniques are key to enabling the efficient and robust use of machine learning systems and components for a broad range of human-centred AI applications. They will also contribute to fundamental advances in machine learning. Automated negotiation scenarios are of particular interest, as they play a key role in systems dealing with potentially conflicting interests between multiple users or stakeholders.

Full proposal
Poster

PhD candidate: Anna Kuzina (av.kuzina@yandex.ru)

Supervisor: Jakub Tomczak (j.m.tomczak@vu.nl)
Co-supervisor: Max Welling (m.welling@uva.nl)

Research Project Description
This project at Vrije Universiteit Amsterdam, “Continual learning and deep generative modeling for adaptive systems”, formulates and develops new models and algorithms for quantifying uncertainty and making decisions in changing environments. It involves fundamental research into combining various learning paradigms for building intelligent systems capable of learning in a continuous manner and evaluating the uncertainty of the surrounding environment.

Adaptivity is a crucial capability of living organisms. Current machine learning systems are not equipped with tools that allow them to adjust to new situations and understand their surroundings (e.g., observed data). For instance, a robot should be able to adapt to a new environment or task, and to assess whether the observed reality is known (i.e., likely events) or whether it should contact a human operator because of unusual observations (i.e., high uncertainty). Moreover, we claim that uncertainty assessment is crucial for communicating with human beings and for decision making.

In this project, we aim at designing new models and learning algorithms by combining multiple machine learning methods and developing new ones. In order to quantify uncertainties, we prefer the deep generative modeling paradigm and frameworks like Variational Autoencoders and flow-based models. However, we believe that standard learning techniques are insufficient to update models and, therefore, continual learning (a.k.a. life-long learning, continuous learning) should be used. Since it is still an open question how continual learning ought to be formulated, we propose to explore different directions that could include, but are not limited to, Bayesian nonparametrics and (Bayesian) model distillation. Moreover, the combination of continual learning and deep generative modeling entails new challenges and new research questions.
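
For readers unfamiliar with these building blocks, the following minimal Variational Autoencoder sketch (in PyTorch) illustrates the kind of deep generative model referred to above; the architecture, toy data and loss weighting are placeholder choices, not the models to be developed in this project.

# Minimal Variational Autoencoder sketch (illustrative only; sizes, data and
# training details are placeholders, not the project's models).
import torch
from torch import nn

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.dec(z), mu, logvar

def neg_elbo(x, x_logits, mu, logvar):
    # Negative ELBO: reconstruction error + KL(q(z|x) || N(0, I)).
    recon = nn.functional.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.shape[0]

# Toy usage with random "data"; a high negative ELBO on a new input can serve
# as a rough signal that the observation is unfamiliar (high uncertainty).
model = VAE()
x = torch.rand(32, 784)
x_logits, mu, logvar = model(x)
print(neg_elbo(x, x_logits, mu, logvar).item())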

The PhD candidate will be part of the Department of Computer Science of the Vrije Universiteit Amsterdam (Computational Intelligence group), in close partnership with the Informatics Institute of the University of Amsterdam (Amsterdam Machine Learning Lab). Daily supervision will be performed by dr. Jakub Tomczak (VU). The promotor will be prof. A.E. Eiben (VU) and the co-promotor will be prof. M. Welling (UvA).

Full proposal
Poster

Candidate: Mani Tajaddini (m.tajaddini@tudelft.nl)

Supervisor: Mark Neerincx (mark.neerincx@tno.nl)
Co-supervisor: Annette ten Teije (annette.ten.teije@vu.nl)

Research Project Description
The ambition of this PhD position is to define design patterns for successful configurations of Hybrid Intelligent systems. Such patterns describe how different combinations of machine and human capabilities perform for a given task under a given set of circumstances. We aim to develop a corresponding pattern language to express such design patterns in conceptual, (semi-)formal or computational form, and an empirical method to validate the patterns.

Full proposal
Poster

PhD candidate: Taewoon Kim (t.kim@vu.nl)

Supervisor: Michael Cochez (m.cochez@vu.nl)
Co-supervisor: Mark Neerincx (mark.neerincx@tno.nl)

Research Project Description
Machines have become reasonably good at answering factual knowledge questions. Search engines and virtual assistants can retrieve relevant information and answer factual and commonsense questions, using what psychologists call “semantic memory”. However, these systems will not remember what they did yesterday (i.e., using “episodic memory”). We want to model an agent that has both semantic and episodic memory systems. In order to tackle this problem, we will take advantage of the human memory systems suggested by theories from cognitive science. 
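
A deliberately naive sketch of this distinction (hypothetical class and retrieval logic, not the project's design): one store holds time-stamped personal events, the other general facts.

# Illustrative sketch of an agent with separate episodic and semantic memory
# stores; the representation and retrieval are deliberately naive and
# hypothetical, not the systems developed in this project.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MemoryAgent:
    episodic: list = field(default_factory=list)   # time-stamped personal events
    semantic: set = field(default_factory=set)     # general (subject, relation, object) facts

    def observe(self, event: str) -> None:
        """Store what happened to this agent, and when (episodic memory)."""
        self.episodic.append((datetime.now(), event))

    def learn_fact(self, subj: str, rel: str, obj: str) -> None:
        """Store a general, time-independent fact (semantic memory)."""
        self.semantic.add((subj, rel, obj))

    def answer(self, question: str) -> str:
        """Very naive retrieval: check semantic facts first, then episodes."""
        q = question.lower()
        for s, r, o in self.semantic:
            if q in f"{s} {r} {o}".lower():
                return f"{s} {r} {o}"
        for when, event in reversed(self.episodic):
            if q in event.lower():
                return f"On {when:%Y-%m-%d %H:%M} I remember: {event}"
        return "I don't know."

agent = MemoryAgent()
agent.learn_fact("Amsterdam", "is capital of", "the Netherlands")
agent.observe("I charged my batteries in the kitchen")
print(agent.answer("capital of"))
print(agent.answer("batteries"))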

Full proposal
Poster

PhD candidate: Annet Onnes (a.t.onnes@uu.nl)

Supervisor: Silja Renooij (S.Renooij@uu.nl)
Co-supervisor: Roel Dobbe (r.i.j.dobbe@tudelft.nl)

Research Project Description
Our project “Monitoring and constraining adaptive systems” focuses on fundamental research into integrating interpretable knowledge representation and reasoning with learning in the context of adaptive systems.
Since an adaptive system is allowed to change itself, we need to trust that it does not evolve into a system that violates constraints that are important for the environment in which it operates. We aim to design a monitoring system that is able to detect and react to violations of constraints, to predict when violations are about to occur, to issue warnings, and ultimately to get the adaptive system back on track. Being able to predict the behaviour of an adaptive system also allows for analysing and explaining it, which are important aspects in facilitating communication and collaboration between human and artificial adaptive agents.

Full proposal
Poster

Candidate: Pei-Yu Chen (P.Y.Chen@tudelft.nl)

Supervisor: Birna van Riemsdijk (m.b.vanriemsdijk@utwente.nl)
Co-supervisor: Myrthe Tielman (M.L.Tielman@tudelft.nl)

Research project description
It is our aim to realize intimate (supportive) technologies that feel like they tend to people with care and sensitivity, allowing them to maintain their space and agency in coaction with technology. This means that the technology needs to continuously tune in to the needs of the user and assess whether its provided support is (still) in alignment with this. We refer to this as Responsible Agency: the technology’s agency is shaped around the user’s agency, and this human-machine co-entity jointly produces actions in the world.

To realize this, a user model is needed that represents what is important to the user in light of the support that is to be provided (e.g., activities, values, capabilities, norms, frequencies of behaviour, etc.). This user model is constructed through direct interactions with the user at run-time, since the necessary information, e.g., on underlying values, often cannot be derived from existing data. Based on this user model, the agent can derive what it deems to be appropriate support actions.

Full proposal
Poster

PhD Candidate: Urja Khurana (u.khurana@vu.nl)

Supervisor: Antske Fokkens (antske.fokkens@vu.nl)
Co-supervisor: Eric Nalisnick (etn22@cam.ac.uk)

Research Project Description

Natural language processing has a strong tradition of experimental research, where various methods are evaluated on gold-standard datasets. Though these experiments can be valuable for determining which methods work best, they do not necessarily provide sufficient insight into the general quality of our methods for real-life applications. In addition to the outcome of a typical NLP experiment, two questions often need to be addressed before we know whether a method is suitable for a real-life application. First, what kind of errors does the method make and how problematic are they for the application? Second, how predictive are results obtained on the benchmark sets for the data that will be used in the real-life application? This project aims to address these two questions by combining advanced systematic error analyses with formal comparison of textual data and language models.

Though potentially erroneous correlations were still relatively easy to identify in the era of extensive feature engineering and methods such as k-nearest neighbors, Naive Bayes, logistic regression and SVMs, this has become more challenging now that technologies predominantly make use of neural networks. The field has become increasingly interested in exploring ways to interpret neural networks, but, once again, many studies focus on field-internal questions (what linguistic information is captured? which architectures learn compositionality, and to what extent?). We aim to take this research a step further and see whether we can use insights into the workings of deep models to predict how they will work for specific applications that use data different from the original evaluation data. Both error analysis and formal comparison methods will contribute to establishing the relation between generic language models, task-specific training data, evaluation data and “new data”. By gaining a more profound understanding of these relations, we will try to define metrics that can be used to estimate or even predict to what extent results on a new dataset will be similar to those reported on the evaluation data (both in terms of overall performance and in terms of types of errors).

Full proposal
Poster

Supervisor: Christof Monz (c.monz@uva.nl)
Co-supervisor: Frans Oliehoek (f.a.oliehoek@tudelft.nl)

Research project description

In this sub-project, we are looking for a PhD candidate interested in combining deep learning and natural language processing research to model complex contextual information to significantly improve the quality of dialog systems.

While current deep learning methods have been shown to be very effective at generating fluent utterances, these utterances are often only poorly connected to the context of the conversation. In this project, we will investigate the role of context (agent or environment) in natural dialogue generation and explore several research directions, such as the detection and representation of knowledge gaps, the generation of contextually appropriate responses, and novel ways to represent and access large amounts of contextual information.

Full proposal
Poster

Contact: f.a.oliehoek@tudelft.nl

Postdoc: Robert Loftin (R.T.Loftin@tudelft.nl)

Supervisor: Frans Oliehoek (f.a.oliehoek@tudelft.nl)
Co-supervisor: Herke van Hoof (h.c.vanhoof@uva.nl)

Research project Description
This sub-project aims to push machine learning beyond traditional settings that assume a fixed dataset. Specifically, we will investigate interactive learning settings in which two or more learners interact by giving each other feedback to reach an outcome that is desirable from a system designer’s perspective. The goal is to better understand how to structure interactions to effectively progress towards the desirable outcome state, and to develop practical learning techniques and algorithms that exploit the resulting insights.

Full proposal
Poster

Candidate: Bernd Dudzik (B.J.W.Dudzik@tudelft.nl)

Supervisor: Hayley Hung (h.hung@tudelft.nl)
Co-supervisor: Dan Balliet (d.p.balliet@vu.nl)

Research project description
An important but under-explored problem in computer science is the automated analysis of conversational dynamics in large unstructured social gatherings such as networking or mingling events. Research has shown that attending such events contributes greatly to career and personal success. While much progress has been made in the analysis of small pre-arranged conversations, scaling up robustly presents a number of fundamentally different challenges. Moreover, the relationship between these interactions and how they translate into actual collaborations is not yet understood.

Unlike in small pre-arranged conversations, sensor data collected during mingling is seriously contaminated by noise. Moreover, determining who is talking with whom is difficult because groups can split and merge at will. A fundamentally different approach is needed to handle both the complexity of the social situation and the uncertainty of the sensor data when analysing such scenes.

The main aim of the project is to address the following question: how can multi-sensor processing and machine learning methods be developed to model the dynamics of conversational interaction in large social gatherings using only non-verbal behaviour? The focus of this project is to measure conversation quality from multi-sensor streams in crowded environments, and its relationship with someone’s willingness to collaborate with other conversation partners.

The candidate will develop automated techniques to analyse multi-sensor data (video, acceleration, audio, etc.) of human social behavior, and will interact closely with a PhD student from social science and possibly other PhD students in the Hybrid Intelligence project.

Full proposal
Poster

Candidate: Wijnand van Woerkom (w.k.vanwoerkom@uu.nl)

Supervisor: Henry Prakken (H.Prakken@uu.nl)
Co-supervisor: Davide Grossi (d.grossi@rug.nl)

Research Project Description

This project aims to explain the outcomes of data-driven machine-learning applications that support decision-making procedures to the end users of such applications, such as lawyers, business people or ordinary citizens. The techniques should apply in contexts where a human decision maker is informed by data-driven algorithms and where the decisions have ethical, legal or societal implications. They should generate explanations of outputs for specific inputs, such that the reasons for the output can be understood and critically examined for their quality. The project will especially focus on explaining ‘black-box’ applications by concentrating on model-agnostic methods, assuming only access to the training data and the possibility to evaluate a model’s output given input data. This makes the explanation methods independent of a model’s internal structure, which is important since in many real-life applications the learned models will not be interpretable or accessible, for instance when the model is learned by deep learning or when the application is proprietary.

Full proposal
Poster

Candidate: Merle Reimann (m.m.reimann@vu.nl)

Supervisor: Koen Hindriks (k.v.hindriks@vu.nl)
Co-supervisor: Christof Monz (c.monz@uva.nl)

Research Project Description

A key challenge in human-machine dialogues is to provide more flexibility to the user and allow the user to co-regulate a conversation (instead of being directed by the agent’s conversation script or flow). We will develop a conversational agent that is able to establish and maintain a common understanding in co-regulation with its user, by offering the flexibility to deviate from the conventional sequences in structured question-and-answer or instructional dialogues. This may, for example, occur in education when a child is asked to solve a math problem and is not sure about the answer, in health care when a patient hesitates while answering a survey question and would like to ask a clarifying question, or when a user is unclear about a cooking assistant’s instructions. In such cases, rather than moving on to the next math problem or survey question, the user should be allowed to deviate from the main dialogue line (‘the happy flow’) to come to a mutual understanding and restore common ground. We will collect conversational data that can be used to extract common conversational patterns indicating that a user wants to take control of the dialogue flow, and will research and design an agent that is able to handle this. For example, the agent should be able to decide to either (i) ask a follow-up question, (ii) provide feedback (verbalize its understanding of the human’s answer), or (iii) give additional explanatory information to assist the user.

A second aim of this project is to address the level of user engagement in such highly structured conversations, which tend to be repetitive, by enabling the agent to memorize and refer back to relevant pieces of information in the conversation history or in earlier conversations with the same user, and by incorporating variation in the agent’s utterances.
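
To make the intended repair behaviour concrete, the toy sketch below chooses between the three options (i)–(iii) above based on a placeholder keyword-based deviation detector; both the detector and the canned responses are illustrative assumptions, not the agent to be developed in this project.

# Illustrative sketch of the repair policy described above: when the user
# deviates from the scripted flow, the agent (i) asks a follow-up question,
# (ii) verbalises its understanding as feedback, or (iii) explains further.
# The keyword-based "deviation detection" is a hypothetical placeholder.
def detect_deviation(user_utterance: str) -> str:
    text = user_utterance.lower()
    if "?" in text or "what do you mean" in text:
        return "clarification_request"
    if any(cue in text for cue in ("not sure", "maybe", "i think", "hmm")):
        return "hesitation"
    return "on_script"

def repair_action(deviation: str, user_utterance: str) -> str:
    if deviation == "hesitation":
        return "follow_up: Can you tell me which part you find difficult?"
    if deviation == "clarification_request":
        return "explain: Let me rephrase the question with an example."
    return f"feedback: So your answer is '{user_utterance}'. Shall we continue?"

for utterance in ["what do you mean?", "hmm, not sure, maybe twelve", "twelve"]:
    print(repair_action(detect_deviation(utterance), utterance))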

Full proposal
Poster 1
Poster 2

PhD Candidate: Jordi Top

Supervisor: Rineke Verbrugge (L.C.Verbrugge@rug.nl)
Co-supervisor: Catholijn Jonker (c.m.jonker@tudelft.nl)

Research Project description

In hybrid intelligent systems in which humans and agents interact together, it is important to be able to understand and detect non-cooperative behavior such as lying and other forms of deception, both in people and in software agents. Cognitive scientists have shown that understanding the concept of lying and being able to maintain a lie over time requires second-order theory of mind: reasoning about what the other person (or agent) thinks you think. The same level of theory of mind is also required for detecting whether others are lying to you. In this PhD project we will:

  • investigate the logical and computational foundations of deception and deception detection in hybrid groups;
  • lay the theoretical groundwork for modelling and analyzing non-cooperative behavior in several communicative contexts such as negotiation games and coalition formation games;
  • develop principled methods for the design of software agents that can detect when other group members are engaged in non-cooperative behavior such as lying;
  • build agent-based models and/or computational cognitive models of deception and deception detection;
  • use simulation experiments in order to predict the outcomes of lab experiments to be performed.

The objective of the temporary position is the production of a number of research articles in peer-reviewed scientific journals and conference proceedings, which together will form the basis of a thesis leading to a PhD degree (dr.) at the University of Groningen.

Full proposal
Poster

Candidate: Íñigo Martínez De Rituerto De Troya (I.MartinezdeRituertodeTroya@tudelft.nl)

Supervisor: Roel Dobbe (roel@ainowinstitute.org)
Co-supervisor: Virginia Dignum (m.v.dignum@tudelft.nl)

Research Project Description

This PhD project starts from two central assumptions. Firstly, all “artificial intelligence systems” are hybrid, requiring the integration of human and machine activities; more fundamentally, AI systems are sociotechnical, developed through myriad design tradeoffs and “hard choices”, and situated in a societal context where the system informs, influences or automates human decisions. Secondly, AI systems contribute to “programmable infrastructures”: these systems do not stand on their own but are integrated into and restructure existing infrastructures and organizations, often with implications for democratic governance and sovereignty.

The PhD candidate will work in a growing team addressing issues of hard choices and programmable infrastructures, consisting of people with different disciplinary backgrounds. The position focuses on understanding how human factors are crucial to the design of AI systems, making sure these systems are safe in their behavior, just in their treatment of people, and allow affected stakeholders to hold the system and its managers/operators/developers accountable.

The candidate will work to marry notions from traditional computer science and systems engineering with methods from situated design, participatory design, computational empowerment, action research, science and technology studies, and other disciplines aiming to increase empowerment and participation in design. The project studies and facilitates the implementation of a data-driven, learning-based control scheme in the operation of electrical distribution networks, working closely with Dutch utility companies and other societal stakeholders involved in the transition to renewable energy resources. A second case study will be pursued in a more administrative context within the Digicampus (https://www.tudelft.nl/en/2019/tu-delft/digicampus-accelerates-innovation-in-digital-government-services/).

In these studies, the PhD candidate will work together with stakeholders from the different sectors to both study and inform system development. The aim is (1) to understand which human factors inform the development process of AI systems and how participatory design and other design methods can facilitate this, and (2) which human dimensions in the situated context need to be addressed to integrate and operate the AI system in a safe and just manner, seeking collaboration with appropriate fields of expertise. For both these contributions, the PhD candidate will lean on emerging literature to structure the investigations and will seek advice and collaboration across the different disciplines within the department, the Hybrid Intelligence Centre and internationally.

The selected candidate will also play a central role in building an international community of scholars advancing sociotechnical and situated studies of AI systems. This project will benefit from collaborations with dr. Seda Gürses in the Multi-Actor Systems department, prof. Virginia Dignum and the Responsible AI group at Umeå University in Sweden, and prof. Joan Greenbaum.

Full proposal
Poster 1
Poster 2

PhD candidate: Andreas Sauter (a.sauter@vu.nl)

Supervisor: Erman Acar (erman.acar@uva.nl)
Co-supervisors: Frank van Harmelen, Aske Plaat (a.plaat@liacs.leidenuniv.nl)

Research Project Description
We will design an adaptive system, to be used as a component in an artificial (virtual or physical) agent, that enables structural updates of causal diagrams through logical reasoning and learning processes. We will study its theoretical properties, test it empirically in virtual and physical multi-agent environments, and use the empirical findings to refine its theory.

Full proposal
Poster

Candidate: Maria Heuss (m.c.heuss@uva.nl)

Supervisor: Maarten de Rijke
Co-supervisor: Koen Hindriks (k.v.hindriks@vu.nl)

Research Project Description

Learning to rank is at the heart of modern search and recommender systems. Increasingly, such systems are optimized using historical, logged interaction data. Such data is known to be biased in a myriad of ways (position bias, trust bias, selection bias, …). In recent years, various de-biasing methods have been proposed, for different types of bias. These methods are limited, either in the types of bias they address, the assumptions about user behavior they make, or the degree to which they are actionable. We will design debiasing methods that are explainable and that can be applied selectively, depending on context, user (or user model), and additional constraints such as fairness.
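
As one concrete example of such a method (purely illustrative; the project is not committed to it), inverse propensity scoring corrects for position bias by re-weighting logged clicks with the probability that the position was examined. The small simulation below uses made-up propensities and relevance values.

# Sketch of inverse propensity scoring (IPS), one standard debiasing method
# for position bias in logged click data. Propensities, relevance values and
# the click model below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)
propensity = np.array([1.0, 0.6, 0.4, 0.25, 0.15])    # P(examined | position)
true_relevance = np.array([0.9, 0.5, 0.4, 0.2, 0.1])  # unknown in practice

n_docs, n_sessions = 5, 20000
naive = np.zeros(n_docs)   # biased click-through estimate
ips = np.zeros(n_docs)     # propensity-weighted estimate
shown = np.zeros(n_docs)

for _ in range(n_sessions):
    ranking = rng.permutation(n_docs)           # docs shuffled over positions
    for pos, doc in enumerate(ranking):
        examined = rng.random() < propensity[pos]
        click = examined and (rng.random() < true_relevance[doc])
        shown[doc] += 1
        naive[doc] += click                     # ignores position bias
        ips[doc] += click / propensity[pos]     # re-weight by examination prob.

print("naive CTR estimate:", np.round(naive / shown, 2))
print("IPS estimate      :", np.round(ips / shown, 2))
print("true relevance    :", true_relevance)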

Interaction with large amounts of information is one of the original scenarios for hybrid intelligence. For interactions to be effective, algorithmic decisions and actions should be explainable. In the context of learning to rank from logged interaction data for search and recommendation, this implies that the underlying debiasing methods need to be explainable – in terms of the bias they aim to address, the assumptions they make, the re-estimations they make, and the impact on learned search and recommendation behavior. Importantly, the explanations should be actionable in the sense that they inform the target user about the changes required to obtain a different outcome.

Full proposal
Poster

Candidate: Sharvaree Vadgama (sharvaree.vadgama@gmail.com)

Supervisor: Erik Bekkers
Co-supervisor: Jakub Tomczak (j.m.tomczak@vu.nl)

Research Project Description

The aim of this project is to model, study, and visualize decision-making processes in (convolutional) neural networks. In particular, emphasis is put on explainability in critical applications where reliability and trustworthiness of AI systems are crucial, such as in the medical domain.

A main theme will be the design of AI systems through recurrent NNs, putting the focus on modeling (and visualizing) the dynamics of decision making rather than relying on the static black-box mechanics of feed-forward networks. For example, when tracing a blood vessel in medical image data, one does not need to represent or remember the vessel as a whole (a feedforward mechanism); it is more important to learn how to connect local line segments (the dynamics of reasoning). Similarly, in image classification problems, it is important to learn how local cues relate to each other and how to “connect the dots”, rather than trying to come to a decision given all information at once. Being able to capture and model such dynamics of reasoning opens doors for interaction with and visualization of the systems, which leads to explainability. The research will build upon an exciting mix of geometric (equivariance, invertibility, PDEs) and probabilistic (VAEs, normalizing flows) principles to ensure the reliability and interpretability of AI systems.

Full proposal
Poster

Candidate: Loan Hu (loanthuyho.cs@gmail.com)

Supervisor: Stefan Schlobach (k.s.schlobach@vu.nl)

Co-supervisor: Victor de Boer (v.de.boer@vu.nl), Birna van Riemsdijk (m.b.vanriemsdijk@utwente.nl), Myrthe Tielman (M.L.Tielman@tudelft.nl)

Research proposal description

In HI, we envision individual agents that have access to a global set of knowledge sources, a specific formalisation of their own knowledge (and beliefs), and some notion of knowledge about the other agents in their network, including what those agents themselves know.

Knowledge graphs (KGs) can play an important role here, both to store and provide access to global knowledge that is common and accessible to human and artificial agents, and to store the local knowledge of individual agents in a larger network of agents.

Unfortunately, current formalisms are not sufficiently well designed to work with complex, conflicting, dynamic and contextualised knowledge. To make KGs suitable formalisms for data and knowledge exchange in an HI network, individual agents need to be able to adapt their own knowledge in a KG (or at least the active part they reason with) with respect to their interactions with one or more actors in their network.

This research is part of the research line “Adaptive HI”. Here, adaptivity mostly refers to the ability to represent and adapt knowledge dynamically based on context and interaction with other agents. We aim to contribute to research question 3, “How can learning systems accommodate changes in user preferences, environments, tasks, and available resources without having to completely re-learn each time something changes?”, by developing new knowledge representation solutions for representing these contexts.

Full proposal
Poster 1
Poster 2

Candidate: Putra Manggala (p.manggala@uva.nl)

Supervisor: Eric Nalisnick (e.t.nalisnick@uva.nl)
Co-supervisor: Holger Hoos (hh@liacs.nl)

Research project description:
This project aims to use Bayesian priors defined in function space to enable data-efficient and human-guided model adaptation. Bayesian modeling is an appealing paradigm from which to build collaborative hybrid intelligent systems, because priors allow existing information and human expertise to be given primacy in model specification. One obstacle that often prevents the adoption of the Bayesian paradigm is prior specification: the prior must be specified as a distribution on parameter space, which is high-dimensional and complicated for current models (e.g. neural networks). It is typically challenging to translate human knowledge and intuitions — which are often contextualized in the space of data — to the parameter space. Our recent work overcomes this gap between function and parameter space by defining the prior on the former and then reparameterizing it to be a proper distribution on the latter. The intuition is that the prior encourages the model predictions to agree with those of a teacher or reference model, which could be a human (imitation learning), a model for a related task (transfer learning), a previous iteration of the same model (continual learning), or a less expressive model (regularization). We will then leverage these various formulations for HI scenarios in which the model needs to be quickly adapted while protecting against overfitting or respecting human inputs. We plan to validate our results in a healthcare application, in particular the automated gating of cytometry data.
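
The sketch below illustrates the general idea in a heavily simplified form (hypothetical architectures, data and weighting, and a plain penalty rather than a properly reparameterised prior): the model is fit to labelled data while being pulled, at a set of context points, towards the predictions of a frozen teacher/reference model.

# Simplified sketch of a function-space prior: instead of placing a prior on
# network weights, we penalise disagreement between the model's predictive
# distribution and that of a reference (teacher) model at context points.
# This is an illustration of the general idea, not the project's formulation.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
teacher = nn.Sequential(nn.Linear(4, 2))   # e.g. an earlier or simpler model
for p in teacher.parameters():
    p.requires_grad_(False)

def functional_prior_penalty(context_x):
    """KL(teacher || model) over class probabilities at the context points."""
    log_p_model = torch.log_softmax(model(context_x), dim=-1)
    p_teacher = torch.softmax(teacher(context_x), dim=-1)
    return torch.sum(p_teacher * (torch.log(p_teacher) - log_p_model), dim=-1).mean()

x, y = torch.randn(64, 4), torch.randint(0, 2, (64,))  # toy labelled data
context_x = torch.randn(256, 4)                        # unlabelled context points
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for _ in range(100):
    opt.zero_grad()
    nll = nn.functional.cross_entropy(model(x), y)
    loss = nll + 1.0 * functional_prior_penalty(context_x)  # weight = prior strength
    loss.backward()
    opt.step()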

Full proposal
Poster

Postdoc: Atefeh Keshavarzi Zafarghandi (Atefeh.Keshavarzi.Zafarghandi@cwi.nl) 

Supervisor: Stefan Schlobach (k.s.schlobach@vu.nl) 
Co-supervisor: Bart Verheij (bart.verheij@rug.nl) 

Project Description:
We will develop a game-theoretic framework that can capture cooperation scenarios in heterogeneous groups (a mix of human and artificial agents). Such a study will require the development of the necessary game-theoretic concepts, amongst them refinements of existing equilibrium concepts, bounded rationality models, and provably good guarantees.

Full proposal
Poster

Postdoc: Mike Ligthart (m.e.u.ligthart@vu.nl)

Supervisor: Koen Hindriks (k.v.hindriks@vu.nl)

Research Project Description
A key challenge in human-robot interaction is to ensure engagement over time. This is particularly true for child-robot interaction. In this project we will research how personalisation can be used to address this challenge and motivate children. We will test and evaluate our ideas in the education domain, with the aim of engaging children to keep practicing math problems with a robot. The project will build on previous research on child-robot interaction and our earlier work on a math tutor robot. Three key challenges will be addressed to design a long-term engaging robot: 1) we will create a conversational design for long-term engaging child-robot interaction; 2) we will design a memory of the interaction history (a series of interaction sessions) and strategies to use that memory to personalise the interaction; and 3) we will design a personalised user model to further tailor the interaction to the child’s needs and preferences.

Full proposal

Poster

PhD Student: Selene Baez (selene.baez.santamaria@gmail.com)

Supervisors: Piek Vossen (piek.vossen@vu.nl), Dan Balliet (d.p.balliet@vu.nl)

Research Project Description
We will study trust as a relationship between social agents in a multimodal world that involves multifaceted skills and complex contexts. This work aims to create and evaluate a computational model of trust from the robot’s perspective, directed towards trusting humans in collaborative tasks.

Full proposal
Poster

PhD Student: Lea Krause (VUA) (l.krause@vu.nl)

Supervisor: Piek Vossen (piek.vossen@vu.nl) 

Research Project Description
What if the KG does not have a simple answer? How can a complex answer be formulated that is still useful? We do not take a simple no for an answer: an intelligent agent should explain what it does know and how that relates to the question. A secondary issue is that the status of the answer may be questionable: how uncertain is the source, how knowledgeable is it, how widely is the knowledge shared, is it also denied elsewhere, how recent is it, and how well connected is it?

How can an answer be generated in natural language that is correct, informative, and effective?

Full proposal
Poster

PhD Student: Delaram Javdani Rikhtehgar (d.javdanirikhtehgar@utwente.nl)

Supervisors: Dirk Heylen (d.k.j.heylen@utwente.nl), Shenghui Wang (shenghui.wang@utwente.nl), Stefan Schlobach (k.s.schlobach@vu.nl) 

Research Project Description
Knowledge Graphs store interlinked descriptions of concepts, entities, relationships, and events with explicit semantics. Various paths are possible to explore such connected knowledge. In the Cultural Heritage domain, special exhibitions are often designed to present subsets of the general KG that focus on designated themes. Knowledge related to the theme is selected, organised and presented to visitors, often following a predefined path without taking into account the interest of visitors and the dynamics of interactions.

In this research, we will explore how to make knowledge accessible to users via a conversational agent that has its own goal of presenting the subset of the KG with a designated theme. The main research question of this project is: how can knowledge-centred human-machine conversations and meaning-making be shaped in an engaging and intuitive manner?

Full proposal
Poster

Postdoc: Davide Dell’Anna (d.dellanna@uu.nl)

Supervisor: Pinar Yolum (p.yolum@uu.nl)

Co-supervisor: Catholijn Jonker (c.m.jonker@tudelft.nl) 

Research project description 
The goal of this research project is to scientifically investigate Hybrid Intelligence and to contribute to establishing it as a research field. The focus of the project is on developing a methodology to identify the different dimensions that characterize hybrid intelligence, designing metrics or processes to measure or assess these dimensions, and creating use-cases to demonstrate different aspects of hybrid intelligence. The developed methodologies are expected to be used to evaluate how well human-AI systems provide solutions for hybrid intelligence, and to facilitate a hybrid design of new human-AI systems.

Full proposal
Poster

PhD candidate: Sergey Troshin (s.troshin@uva.nl) 

Supervisor: Prof. Christof Monz (C.Monz@uva.nl)  

Co-supervisor: Vlad Niculae (v.niculae@uva.nl), Antske Fokkens (antske.fokkens@vu.nl) 

Research project description
The main goal of this project is to develop geometry-aware generative models with applications to natural language processing. There are several possible directions of research: as a first step, we plan to develop non-isotropic probabilistic models on manifolds, e.g. probabilistic modelling of word embeddings on non-Euclidean manifolds. Using these methods, we plan to develop uncertainty-aware generative models for language, as well as deep learning models for constrained language generation, which aligns with the HI research directions, namely Responsible HI (trust via uncertainty estimation, constrained generation).
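
As a small, self-contained example of the kind of non-Euclidean geometry involved (background illustration only, not the project's model), the snippet below computes distances in the Poincaré ball, a manifold commonly used for hierarchical word embeddings; the example points are arbitrary.

# Illustration only: hyperbolic (Poincaré-ball) distance, an example of a
# non-Euclidean manifold on which embedding models can be defined.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the open unit ball."""
    sq_norm_u = np.sum(u * u)
    sq_norm_v = np.sum(v * v)
    sq_diff = np.sum((u - v) ** 2)
    x = 1.0 + 2.0 * sq_diff / ((1.0 - sq_norm_u) * (1.0 - sq_norm_v) + eps)
    return np.arccosh(x)

root = np.array([0.0, 0.0])    # a general concept near the origin
child = np.array([0.7, 0.0])   # a more specific concept near the boundary
leaf = np.array([0.72, 0.05])
print(poincare_distance(root, child), poincare_distance(child, leaf))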

Full proposal
Poster

Supervisor: Andrew Yates, University of Amsterdam
Co-supervisor: Piek Vossen, Vrije Universiteit Amsterdam

Short description:
We will support the investigation step of the scientific method cycle through a scientific assistant that helps identify relevant literature (at the document level) and helps pinpoint relevant information within this literature.

Poster

Postdoc: Floris den Hengst (f.den.hengst@vu.nl)

Supervisor: Annette ten Teije (annette.ten.teije@vu.nl)

Co-supervisor: Herke van Hoof (h.c.vanhoof@uva.nl)

Research project description
Reinforcement learning agents in a medical setting learn a behaviour policy that is often incompatible with the typical strategies of doctors as well as with the preferences of patients. This project aims to develop training methods for an RL agent to learn a strategy that (i) is cooperative with the expert, (ii) displays behaviour which is coherent from the human point of view, and (iii) takes into account the often conflicting goals that a medical treatment is expected to satisfy (maximising life expectancy, minimising treatment burden, optimising quality of life, minimising financial cost). Such conflicting treatment goals are particularly prevalent in multimorbid patients, who are typically under a polypharmacy regime. We want to develop novel techniques in the RL framework based on the medical requirements of learning a treatment strategy, and to demonstrate them in a prototype.
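
One very simple way to make such conflicting goals explicit to an RL agent is a scalarised multi-objective reward with inspectable weights; the sketch below uses invented objectives and weights, and the project may well prefer constrained or preference-based formulations instead.

# Sketch of a scalarised multi-objective reward for a treatment-policy agent.
# Objectives and weights are made up for illustration only.
from dataclasses import dataclass

@dataclass
class TreatmentOutcome:
    life_expectancy_gain: float   # years
    treatment_burden: float       # e.g. number of daily medications
    quality_of_life: float        # 0..1 utility
    cost: float                   # euros per month

def reward(o: TreatmentOutcome, w=(1.0, -0.2, 2.0, -0.001)) -> float:
    """Weighted sum of the (often conflicting) treatment goals; the weights
    make the trade-off explicit and open to discussion with clinicians."""
    return (w[0] * o.life_expectancy_gain + w[1] * o.treatment_burden +
            w[2] * o.quality_of_life + w[3] * o.cost)

aggressive = TreatmentOutcome(2.0, 8, 0.6, 450)
conservative = TreatmentOutcome(1.2, 3, 0.8, 120)
print(reward(aggressive), reward(conservative))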

Full proposal
Poster

PhD candidate: Leon Eshuijs 

Supervisor: Antske Fokkens (antske.fokkens@vu.nl) 

Co-supervisor: Shihan Wang (s.wang2@uu.nl) 

Research project description 
This project brings together two trends in NLP. First, with large language models achieving increasingly impressive results on standard benchmarks, the NLP community is paying increasing attention to evaluations that go beyond standard benchmark sets. One line of research has returned to challenge sets (e.g. King and Falkedal 1990, Lehmann et al. 1996, Ribeiro et al. 2020). Another point of attention is the ACL special theme track “Reality Check”, which raises the question of what happens when models are used in the real world.

Second, as a fast-developing machine learning technique, reinforcement learning (RL) targets sequential decision-making tasks and is capable of adaptively interacting with users to learn a model with target behavior (Sutton and Barto 2018). Because it can take feedback from users into account and update its strategy adaptively according to actual needs, RL has in recent years been widely applied and shown to be a well-suited solution for various NLP tasks (Uc-Cetina et al. 2022).

In this project, we aim to start a research line that brings these two directions together and explores the possibilities of using RL to target specific behavior of models. This could mean training specifically to avoid a socially undesirable bias, but also training models to exhibit robust behavior on a specific phenomenon (e.g. negation, specific syntactic structures, temporal information interacting with truth values).

Full proposal
Poster

Supervisor: Catha Oertel, Technical University Delft
Co-supervisor: Piek Vossen, Vrije Universiteit Amsterdam

Short description:
Successful communication within a multi-party meeting depends on the creation of a common ground between interaction partners: what is relevant for oneself, what is relevant for the interaction partner(s). We focus on memory creation based on affective responses to conversational instances and their relation to real-world objects supporting the interaction.
 

Poster

PhD student: Bernard Hilpert (b.hilpert@liacs.leidenuniv.nl) 

Promotores: Joost Broekens (d.j.broekens@liacs.leidenuniv.nl), Aske Plaat (a.plaat@liacs.leidenuniv.nl)  

Co-promotor: Kim Baraka (k.baraka@vu.nl)

Research project description 
In humans, affective signals reflect the internal state of the person, in particular their appraisal of the situation. This is a natural way to express how one interprets the current situation and, we hypothesize, when simulated by an agent in a human-agent interaction context, it can positively influence the human’s teaching performance. In this project we investigate whether and how robot affective expression, grounded in RL, can help to make the learning process more transparent. Previous research has shown that robot emotion simulation grounded in RL is possible, and simulation studies show that the resulting emotional intensities (joy/distress/hope/worry) are plausible from a psychological perspective.
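
A toy illustration of how RL signals could be mapped to such affective intensities, in the spirit of that earlier work (the exact mapping and scaling are assumptions made for the example, not the project's model): joy and distress track the sign of the temporal-difference error, hope and worry track the anticipated value of the next state.

# Toy mapping from RL signals to affective intensities (illustrative only).
def affect_from_rl(reward, value_s, value_s_next, gamma=0.95):
    td_error = reward + gamma * value_s_next - value_s
    return {
        "joy":      max(td_error, 0.0),      # things went better than expected
        "distress": max(-td_error, 0.0),     # things went worse than expected
        "hope":     max(value_s_next, 0.0),  # good outcomes anticipated
        "worry":    max(-value_s_next, 0.0), # bad outcomes anticipated
    }

print(affect_from_rl(reward=1.0, value_s=0.2, value_s_next=0.8))    # pleasant surprise
print(affect_from_rl(reward=-1.0, value_s=0.5, value_s_next=-0.4))  # setback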

Full proposal
Poster

PhD student: Gülce Günaydin

Supervisor: Daniel Balliet, Vrije Universiteit Amsterdam
Co-supervisor: vacancy

Short description:
CoDa is the first databank to represent an entire field of studies in the social sciences (human cooperation) in a machine-readable way that can be searched to select studies for on-demand meta-analysis. We will generalize CoDa to factors from economics (e.g., altruism, trust), sociology (e.g., social capital), and clinical psychology (i.e., treatments of depression); develop queries that can produce on-demand meta-analyses; and develop an interactive technique to create queries and update meta-analyses (e.g., from the text of a manuscript).

PhD Candidate: Feline Lindeboom (f.l.lindeboom@rug.nl)
Supervisor: Davide Grossi, University of Groningen
Co-supervisor: Pradeep Murukannaiah, Technical University Delft

Short description: 
We will develop a methodology for supporting participatory democratic decision making online. The methodology should support (1) users in participating in online democratic deliberation and reflecting on the alignment between their choices and values, and (2) policy makers in aggregating users’ choices and opinions in a principled and transparent manner.

PhD candidate: Alejandro García Castellanos 
Supervisor: Erik Bekkers, University of Amsterdam 
Co-supervisor: Daniel Pelt, University of Leiden 

Short description: 
We will develop techniques for collaborative human-computer image annotation of training sets for deep learning tasks. Our techniques will suggest relevant annotations to the human annotator, will indicate inconsistencies in the human annotations, and will use concepts from geometric deep learning to handle shapes of image annotations.

PhD Candidate: Mustafa Mert Çelikok (M.M.Celikok@tudelft.nl) 
Supervisor: Frans A. Oliehoek, Technical University Delft

Co-supervisor: Jan-Willem van de Meent, University of Amsterdam

Short description: 
We will investigate how Bayesian methods can be used in multi-agent reinforcement learning (MARL) as a way to infer models of human partners, in order to collaborate with them more effectively.

PhD candidate: Chenxu Hao (C.Hao-1@tudelft.nl) 

Supervisor: Hayley Hung (h.hung@tudelft.nl) 

Co-supervisors: Dan Balliet (d.p.balliet@vu.nl), Bernd Dudzik (B.J.W.Dudzik@tudelft.nl) 

Research project description 
This project goes beyond understanding social encounters for partner selection. We know that there are critical moments in social interactions that contribute to a decision to collaborate with someone, e.g. whether to trust them or whether to select them as a collaborative partner.

However, is an individual accurate in making that assessment? Does their decision on who to collaborate with reflect performance outcomes with their selected team member? Or are there tendencies to make suboptimal social choices based on personal biases? If HI systems were able to help humans to better understand their social encounters and how they evaluate them, could better teams be self-selected for better team and individual outcomes? 

The goal of this project is to bridge the artificial perception and action loop by considering the idea of building a HI Social Guru. This is carried out through (i) goal setting, (ii) enlightenment and (iii) empowerment via the HI Social Guru. The important thing about a Guru is that they assist humans in self-reflection in social settings without needing to interfere directly (synchronously) with the interaction. That is, they provide wisdom and enlightenment asynchronously from the interaction. This provides another take on how HI systems can help humans to perform better in social situations. 

This asynchronous way of working leverages the thought processes of a user, who can fully exploit their cognitive abilities to reflect on and adapt their own behaviours in the future through what we are calling enlightenment and empowerment strategies. This approach aligns with constructive learning theory in coaching settings. The envisaged steps in this process will centre around the partner selection and collaboration task (through trust, warmth, competence, and team performance) of the PACO dataset, for which many measures already exist, allowing the study to hit the ground running. A subsequent intervention experiment will need to be carried out; much of the data already collected in PACO will allow for efficient studies before rolling out that intervention study.

Full proposal

PhD Candidate: Giacomo Zamprogno (g.zamprogno@vu.nl) 
Supervisor: Ilaria Tiddi, Vrije Universiteit Amsterdam

Co-supervisor: Bart Verheij, University of Groningen

Short description: 
How can we use large-scale structured data harvested from the (structured, machine-readable) web in argumentation about complex hypotheses?

PhD Candidate: Saloni Singh (s.m.singh@vu.nl) 
Supervisor: Kim Baraka, Vrije Universiteit Amsterdam

Co-supervisor: Dirk Heylen, University of Twente

Short description: 
How can an AI system foster synergetic exchange with a human during a creative process? We will develop a collection of novel algorithmic tools that will allow a human+AI system to achieve a better outcome when collaborating than when either engages in the creative process alone.

PhD Candidate: Ramira van der Meulen (a.van.der.meulen@liacs.leidenuniv.nl) 
Supervisor: Max van Duijn, University of Leiden

Co-supervisor: Rineke Verbrugge, University of Groningen

Short description:   
The aim is to study how common ground emerges, and how such common ground can support Theory of Mind, using agent-based modeling of a collaborative game and lab experiments in which humans and AI agents play that same game. 

Poster

PhD Candidate: Lena Malnatsky (e.malnatsky@vu.nl) 
Supervisor: dr. Mike Ligthart, Vrije Universiteit Amsterdam

Co-supervisor: dr. Shenghui Wang, University of Twente

Short description: 
Creating engaging and personalized content for sustained human-robot interactions over time, in our case between robots and children in an educational setting, is currently done entirely manually. We will develop a hybrid solution for interaction content generation, ranging from high-level content creation (e.g. robot personality, conversational goals and narratives, and dialog templates) to mini-dialog generation (e.g. dynamically integrating fragments of high-level narratives and the robot’s personal stories with the child-robot conversational history into individual utterances).

Poster

PhD Candidate: Annika Kniele (a.kniele@vu.nl) 
Supervisor: Piek Vossen, Vrije Universiteit Amsterdam

Co-supervisor: Catha Oertel, Technical University Delft

Short description: 
In a collaborative interactive setting between humans and agents, it is crucial that their memories get partially aligned to make decisions on agreed information but also to know how differences across memories can be leveraged. In this project, we create conversational models for agents to achieve such alignment, which is essential to establish shared conceptualizations for memories in a Theory of Mind (ToM) model.

Poster

PhD Candidate: Beliz Anastasia Akkuzu 
Supervisor: Pinar Yolum, University of Utrecht

Co-supervisor: Pradeep Murukannaiah, Technical University Delft

Short description: 
We will investigate how the notion of consent can be used as an abstraction to capture and facilitate interactions among multiple humans and AI agents in realizing hybrid intelligence. How can consent be formally represented? What are the types of consent and how can they be inferred? When and how should an agent request consent?

Poster

Supervisor: Roel Dobbe, Technical University Delft
Co-supervisor: Maarten de Rijke, University of Amsterdam

Short description: 
We will develop new methodologies to design for the emergent nature of hybrid intelligent systems. We will develop formal abstractions for the analysis and design of the dynamics of hybrid intelligent systems. These will inform practical design methodologies that can enforce important emergent properties, such as safety or justice, in the emergent system dynamics.

Supervisor: Victor de Boer, Vrije Universiteit Amsterdam
Co-supervisor: Shenghui Wang, University of Twente

Short description: 
We will investigate interactive methods for eliciting annotations of heritage objects by users from varied backgrounds (cultural, geographical, gender, among others). We will investigate methods to acquire this diverse knowledge through a variety of modalities (dialogue systems, VR, and others), combined with diversity-preserving annotation and crowdsourcing methods.

Poster

PhD student: Ludi van Leeuwen (l.s.van.leeuwen@rug.nl) 

Supervisors: Bart Verheij (bart.verheij@rug.nl), Rineke Verbrugge (l.c.verbrugge@rug.nl)  

Co-supervisor: Silja Renooij (s.renooij@uu.nl) 

Research project description
The aim of this project is to investigate the rational use of probabilistic reasoning. Many domains to which AI can be applied have to deal with a lack of “objective” frequency data, which the AI systems need. This means that we would need to collect more data or elicit frequency data from domain experts, which is a labour-intensive process, and it is unclear how exactly this should be done.

Furthermore, the subjectivity involved in expert elicitation and data collection is understudied. Hence, the aim of this project is to discover under what circumstances we can use probabilistic methods, namely Bayesian networks, for reasoning with uncertainty and evidence. Bayesian networks seem like a good tool for this, but there are many aspects of the formalism that should be open to further investigation in data-poor domains, such as the effect of the modeller’s subjectivity, imprecise probability, and relevance.

Bayesian networks are a hybrid tool for reasoning with evidence: people model the networks and find evidence, but are not good at calculating the Bayesian inference updates, which is where human modellers and computers complement each other. The use of Bayesian networks, and probabilistic methods in general, is a developing field, and using them responsibly means that we have to map out in which areas we can trust them and to know where they fall short of a rational model.
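
A minimal, self-contained illustration of that division of labour (toy network, invented probabilities): a human specifies the structure and conditional probability tables, and the computer performs the posterior update, here by brute-force enumeration.

# Toy Bayesian network: P(Burglary), P(Alarm | Burglary), P(Call | Alarm).
# All numbers are invented for illustration; real networks would be elicited
# from domain experts or data.
from itertools import product

p_b = {True: 0.01, False: 0.99}
p_a_given_b = {True: {True: 0.9, False: 0.1}, False: {True: 0.05, False: 0.95}}
p_c_given_a = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}

def joint(b, a, c):
    """Joint probability factorised along the network structure."""
    return p_b[b] * p_a_given_b[b][a] * p_c_given_a[a][c]

def posterior_burglary(call_observed=True):
    """P(Burglary | Call) by enumerating over the hidden Alarm variable."""
    num = sum(joint(True, a, call_observed) for a in (True, False))
    den = sum(joint(b, a, call_observed) for b, a in product((True, False), repeat=2))
    return num / den

print(f"P(burglary | neighbour calls) = {posterior_burglary(True):.3f}")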

Full proposal
Poster

PhD student: Johanna Wolff (j.d.wolff@utwente.nl)

Supervisor: Birna van Riemsdijk (m.b.vanriemsdijk@utwente.nl)

Co-supervisor: Victor de Boer (v.de.boer@vu.nl)

Research project description
The idea is to use techniques from knowledge representation and reasoning, and non-monotonic reasoning in particular, to create a dynamic system that is able to adapt to changes in the user’s knowledge, beliefs, preferences and goals. Ideally, any changes in the system should also be comprehensible to the user, which means that the system should not only acknowledge these changes but also present them to the user in an understandable way.

This is necessary in order for personal agents to be able to accurately model the user and therefore offer the best support. On the other hand, the user needs to be aware of the impact their input has on the system in order to trust the personal agent.

Full proposal
Poster