Much of the research in the Hybrid Intelligence consortium is performed in the following 27 projects.
See also our organogram to find out which project belongs to which Research Line.

PhD candidate: Aishwarya Suresh

Supervisor: Filippo Santoni de Sio
Co-supervisor: Ibo van de Poel

Research Project Description
Many current AI systems implicitly support or constrain human moral judgement, e.g. information feeds on Facebook, recommendation systems and targeted advertising. The goal of this project is to make this support (or constraint) explicit as a step towards empowering better human moral judgements. Our working assumption is that AI should support, but not take over, human moral judgement and decisions. To determine the desirable role of AI support, we should first better understand human moral judgement, and particularly the importance of context in it. Two case studies will be carried out on how AI may support human moral judgement in the context of changing values. On the basis of these, desirable (and undesirable) forms of AI support will be identified, and the insights will be translated into a number of design principles for AI support of human moral judgement.

Full proposal

PhD candidate: Cor Steging

Supervisor: Bart Verheij
Co-supervisor: Silja Renooij

Research Project Description
A core puzzle in today’s artificial intelligence is how knowledge, reasoning and data are connected. To what extent can knowledge used in reasoning be recovered from data with implicit structure? Can such knowledge be correctly recovered from data? To what extent does knowledge determine the structure of data that results from reasoning with the knowledge? Can knowledge be selected such that the output data generated by reasoning with the knowledge has desired properties? 

By investigating the relations between knowledge, reasoning and data, we aim to develop mechanisms for the verification and evaluation of hybrid systems that combine manual knowledge-based design and learning from data. The focus will be on structures used in reasoning and decision-making, in particular logical and probabilistic relations (as in Bayesian networks) and reasons (pro and con) and exceptions (as in argument-based decision making).
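The logical and probabilistic structures mentioned here can be made concrete with a toy example. The sketch below (purely illustrative; the two-node network and its numbers are invented, not taken from the project) shows inference by enumeration in a minimal Bayesian network:

```python
# Toy two-node Bayesian network: Rain -> WetGrass.
# The probabilities are made-up illustration values.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def p_joint(rain, wet):
    # Chain rule for this network: P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain)
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# Inference by enumeration: P(Rain=True | WetGrass=True)
num = p_joint(True, True)
den = sum(p_joint(r, True) for r in (True, False))
print(num / den)  # ≈ 0.53: wet grass makes rain considerably more likely
```

In an argument-based reading, the same structure supplies a reason pro ("the grass is wet") that shifts the decision, which is exactly the kind of correspondence between probabilistic and argumentative structures the project investigates.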

Full proposal

PhD candidate: Michiel van der Meer

Supervisor: Catholijn Jonker
Co-supervisor: Pradeep Murukannaiah

Research Project Description
Perspectives in Deliberation is about developing Artificial Intelligence techniques to uncover the underlying structures of debates and group deliberations, with the idea that we can help participants in a debate or deliberation understand why others hold a different opinion. To this end, we first want to use computational linguistics to extract what we call micro-propositions from text: proposition mining. Second, we want to model the implications of these propositions for stakeholders: implication mining. Third, we want to extract and understand the stakeholders' perspectives on these implications: perspective mining.

In addition to understanding the debate and the perspectives, the AI you develop will seek to interact with stakeholders about the interpretations of what they brought in as statements and arguments. 

Full proposal

PhD candidate: Tiffany Matej

Supervisor: Dan Balliet
Co-supervisor: Hayley Hung

Research Project Description
This PhD project will be a close collaboration between psychology and computer science to develop hybrid intelligence that can understand, predict, and potentially aid in initiating collaborative behavior. To build such machines, we must further develop our understanding of how people select their partners and initiate collaborations. The project will apply state-of-the-art methods and techniques from both disciplines to advance our understanding of these issues.

Partner selection is fundamental to understanding how people initiate cooperative relations and avoid being exploited by non-cooperative individuals – two key features of human sociality thought to underlie why humans are such a cooperative species (see Barclay, 2013). This research will test and develop theory about how people choose cooperative partners. The project will use both naturalistic settings (e.g., social networking at scientific conferences) and experimental settings (e.g., experimental group tasks) to examine the non-verbal and verbal behaviors during social interactions that can be used to predict whether people select another person (or not) as a collaboration partner. During these studies, participants will wear multi-modal sensors and be video recorded while interacting with other people for the first time; these recordings will be used to capture non-verbal and verbal behaviors that can predict how people evaluate their interaction partner (e.g., their traits, motives), the social interaction (e.g., closeness, social power), and behavioral motivations (e.g., avoiding versus approaching the person in future interactions).

This PhD project is a collaboration between Psychology (Daniel Balliet) and Computer Sciences (Hayley Hung, Rineke Verbrugge), and the PhD will also work closely with another PhD student supervised directly by Hayley Hung. The candidate should have an openness to working in a multi-disciplinary team, and have a general interest in establishing a closer connection between psychology and the computer sciences.

Full proposal

PhD candidate: Nicole Orzan

Supervisor: Davide Grossi
Co-supervisor: Eliseo Ferrante

Research Project Description
Deliberation is a key ingredient for successful collective decision-making. In this project we will:

– investigate the mathematical and computational foundations of deliberative processes in groups of autonomous agents.
– lay the theoretical groundwork for modelling and analyzing how opinions are exchanged and evolve in groups of autonomous decision-makers.
– develop principled methods for the design of deliberation procedures that can provably improve the quality of group decisions.

Full proposal

PhD candidate: Emre Erdogan

Supervisor: Frank Dignum
Co-supervisor: Pinar Yolum

Research Project Description
At Utrecht University, we are looking for several PhD students within the Hybrid Intelligence project. For the position on the computational theory of mind, we are looking for a PhD student to investigate and develop a computationally efficient theory of mind that is theoretically sound.
The research will focus on which type of general knowledge about other agents has to be kept in order to keep a usable theory of mind available. How much of the historical information and premises on preferences, goals and motivations should be taken into account? The premise is that the theory of mind that needs to be kept depends on the level of interaction that is aimed for and the context in which this interaction takes place.
You will apply the developed theory in the area of collaborative privacy. A typical situation in collaborative privacy is that a user in a collaborative system, such as an online social network, would like to share content that could be co-owned by others, such as group pictures or collaboratively edited documents. At the time of sharing, the user has to take into account how the shared content can or will be used and how this sharing would affect other users, and take an action accordingly. By employing a computationally viable model of theory of mind, a user can reason about other co-owners’ privacy expectations on the content in a given context and make a sharing decision based on that.
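The sharing decision described above can be made concrete with a toy sketch. Everything below is invented for illustration: the co-owner models, contexts and threshold are placeholders, and a real computational theory of mind would be far richer than a lookup of believed objection strengths:

```python
# Illustrative-only sketch of a sharing decision informed by a (very shallow)
# model of co-owners' privacy expectations.

def expected_objection(co_owner_model, content_context):
    # First-order "theory of mind": our belief about how strongly this
    # co-owner would object to sharing this kind of content; 0.5 if unknown.
    return co_owner_model.get(content_context, 0.5)

def decide_to_share(co_owner_models, content_context, threshold=0.3):
    """Share only if no co-owner is believed to object too strongly."""
    return all(expected_objection(m, content_context) <= threshold
               for m in co_owner_models)

# Invented belief models about two co-owners of the content.
alice = {"group_photo": 0.1, "work_document": 0.8}
bob = {"group_photo": 0.2, "work_document": 0.4}
print(decide_to_share([alice, bob], "group_photo"))    # True
print(decide_to_share([alice, bob], "work_document"))  # False
```

The project's question is precisely how much of such a model (history, preferences, goals) must be maintained for the decision to remain both usable and computationally viable.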
The PhD candidate’s principal duty is to conduct scientific research, resulting in a PhD thesis at the end of the appointment. Other duties may include supporting the preparation and teaching of Bachelor’s and Master’s level courses, supervising student theses, managing research infrastructure and participating in public outreach. This position presents an excellent opportunity to develop an academic profile.

Full proposal

Postdoc: Karine Miras

Supervisor: Guszti Eiben
Co-supervisor: Aimee van Wynsberghe

Research Project Description
By bringing roboticists and ethicists together, the project addresses two fundamental issues. First, how can ethical requirements be translated into design and technical features? Second, we explore the trade-off between adaptivity and responsibility attribution, trying to find Pareto-optimal solutions.

Full proposal

PhD candidate: Niklas Höpner

Supervisor: Herke van Hoof
Co-supervisor: Ilaria Tiddi

Research Project Description
At the University of Amsterdam, we are looking for a PhD candidate interested in combining (deep) reinforcement learning research with prior knowledge from knowledge graphs to learn explainable strategies for complex sequential tasks.
Many problems of practical interest are concerned with optimising sequential decision making. Think, for example, of finding optimal trajectories for vehicles or robots, or deciding which medical tests to run. Methods for classical planning based on symbolic representations are typically explainable to human collaborators but rely on an exact problem description, while data-driven (e.g. reinforcement learning) approaches do not rely on a provided problem specification but are data hungry and inscrutable. Combining the complementary strengths of these approaches is expected to advance the state of the art in knowledge representation and reinforcement learning.

Full proposal

PhD candidate: Bram Renting

Supervisor: Holger Hoos
Co-supervisor: Catholijn Jonker

Research Project Description
The goal of this PhD project is to develop automated machine learning methods for changing/non-i.i.d. data, with applications to standard learning scenarios as well as to automated negotiation. Such techniques are key to enabling the efficient and robust use of machine learning systems and components for a broad range of human-centred AI applications. They will also contribute to fundamental advances in machine learning. Automated negotiation scenarios are of particular interest, as they play a key role in systems dealing with potentially conflicting interests between multiple users or stakeholders.

Full proposal

PhD candidate: Anna Kuzina

Supervisor: Jakub Tomczak
Co-supervisor: Max Welling

Research Project Description
At Vrije Universiteit Amsterdam, we are looking for an enthusiastic PhD candidate who is interested in formulating and developing new models and algorithms for quantifying uncertainty and making decisions in changing environments. Our project “Continual learning and deep generative modeling for adaptive systems” focuses on fundamental research into combining various learning paradigms for building intelligent systems capable of learning in a continuous manner and evaluating uncertainty of the surrounding environment.

Adaptivity is a crucial capability of living organisms. Current machine learning systems are not equipped with tools that allow them to adjust to new situations and understand their surroundings (e.g., observed data). For instance, a robot should be able to adapt to a new environment or task, and to assess whether the observed reality is known (i.e., likely events) or whether it should contact a human operator due to unusual observations (i.e., high uncertainty). Moreover, we claim that uncertainty assessment is crucial for communicating with human beings and for decision making.

In this project, we aim to design new models and learning algorithms by combining multiple machine learning methods and developing new ones. To quantify uncertainties, we prefer to use the deep generative modeling paradigm and frameworks like Variational Autoencoders and flow-based models. However, we believe that standard learning techniques are insufficient to update models and, therefore, continual learning (a.k.a. life-long learning, continuous learning) should be used. Since it is still an open question how continual learning ought to be formulated, we propose to explore different directions that could include, but are not limited to, Bayesian nonparametrics and (Bayesian) model distillation. Moreover, the combination of continual learning and deep generative modeling entails new challenges and new research questions.

The PhD candidate will be part of the Department of Computer Science of the Vrije Universiteit Amsterdam (Computational Intelligence group) in a close partnership with the Institute of Informatics of the University of Amsterdam (Amsterdam Machine Learning Lab). Daily supervision will be performed by dr. Jakub Tomczak (VU). The promotor will be prof. A.E. Eiben (VU) and the co-promotor will be prof. M. Welling (UvA).

Full proposal

Candidate: Mani Tajaddini

Supervisor: Mark Neerincx
Co-supervisor: Annette ten Teije

Research Project Description
The ambition for this PhD position is to define design patterns for successful configurations of Hybrid Intelligent systems. Such patterns describe how different combinations of machine and human capabilities perform for a given task under a given set of circumstances. We aim to develop a corresponding pattern language to express such design patterns in conceptual, (semi-)formal or computational form, and an empirical method to validate the patterns.

Full proposal

PhD candidate: Taewoon Kim

Supervisor: Piek Vossen
Co-supervisor: Mark Neerincx

Research Project Description
To collaborate with AI, such as robots, both people and systems need to understand how they perceive shared situations differently. Understanding these differences is a first step in collaborating successfully. Communication about these situations is fundamental for resolving misunderstandings, explaining perspectives and informing the other. This project addresses the phenomena of identity, reference and perspective within communicative scenarios between AI and people in real-world situations. Knowledge and awareness of the context helps communication between AI and people (successfully identifying and making reference to the world), and through communication we create the common ground for understanding the context (identifying the world from different perspectives). This project tackles these two sides of the same coin by building personal relationships between people and AI in an adaptive environment to learn how to communicate about shared situations or contexts. By building a collective memory of shared encounters, we can define what there is, what is relevant and why we care.

Full proposal

PhD candidate: Annet Onnes

Supervisor: Silja Renooij
Co-supervisor: Jakub Tomczak

Research Project Description
At Utrecht University we are looking for an enthusiastic PhD candidate who is interested in raising and educating a new generation of artificially intelligent agents: adaptive agents who need to learn to abide by our rules in hybrid intelligent teams. Our project “Monitoring and constraining adaptive systems” focuses on fundamental research into integrating interpretable knowledge representation and reasoning with learning in the context of adaptive systems.
Since an adaptive system is allowed to change itself, we need to trust that it does not evolve into a system that violates constraints important for the wider environment in which it operates. Building upon existing frameworks such as probabilistic graphical models and deep generative modelling, we aim to design a monitoring system that is able to detect and react to violations of constraints, to predict that violations are about to occur, to issue warnings, and ultimately to get the adaptive system back on track. The monitoring system should allow constraints to be captured in a human-intuitive way, so that they can easily be inspected and changed. Being able to predict the behaviour of an adaptive system also allows for analysing and explaining it, which are important aspects of facilitating communication and collaboration between human and artificial adaptive agents.
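At its very simplest, such a constraint monitor can be sketched as follows. This is purely illustrative: the variable names, bounds and warning margin are invented, and the project's actual monitors would build on probabilistic graphical models rather than hard numeric thresholds:

```python
# Minimal sketch of a constraint monitor for an adaptive system.

def monitor(state, constraints, warn_margin=0.1):
    """Check a system state against simple numeric upper-bound constraints.

    constraints: dict mapping variable name -> maximum allowed value.
    Returns a list of (variable, status): 'violation' for exceeded bounds,
    'warning' when the value is within warn_margin of a bound.
    """
    reports = []
    for var, bound in constraints.items():
        value = state.get(var)
        if value is None:
            continue  # variable not observed in this state
        if value > bound:
            reports.append((var, "violation"))
        elif value > bound * (1 - warn_margin):
            reports.append((var, "warning"))
    return reports

# Example: speed approaches its limit, temperature already exceeds it.
state = {"speed": 0.95, "temperature": 80.0}
constraints = {"speed": 1.0, "temperature": 75.0}
print(monitor(state, constraints))
# [('speed', 'warning'), ('temperature', 'violation')]
```

Predictive monitoring, as described above, would additionally forecast future states and run the same checks on the predictions.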
The PhD candidate will be part of the Department of Information and Computing Sciences (Intelligent Systems group) in close partnership with the Department of Computer Science of the Vrije Universiteit Amsterdam (Computational Intelligence group). Daily supervision will be performed by dr. Silja Renooij (UU) and dr. Jakub Tomczak (VU).

Full proposal

Candidate: Pei-Yu Chen

Supervisor: Birna van Riemsdijk
Co-supervisor: Myrthe Tielman

Research Project Description

It is our aim to realize intimate (supportive) technologies that feel like they tend to people with care and sensitivity, allowing them to maintain their space and agency in coaction with technology. This means that the technology needs to continuously tune in to the needs of the user and assess whether its provided support is (still) in alignment with this. We refer to this as Responsible Agency: the technology’s agency is shaped around the user’s agency, and this human-machine co-entity jointly produces actions in the world. 

To realize this, a user model is needed that represents what is important to the user in light of the support that is to be provided (e.g., activities, values, capabilities, norms, frequencies of behaviour, etc.). This user model is constructed through direct interactions with the user at run-time, since the necessary information, e.g., on underlying values, often cannot be derived from existing data. Based on this user model, the agent can derive what it deems to be appropriate support actions. 

Full proposal

Candidate: Urja Khurana

Supervisor: Antske Fokkens
Co-supervisor: Eric Nalisnick

Research Project Description

Natural language processing has a strong tradition in experimental research, where various methods are evaluated on gold-standard datasets. Though these experiments can be valuable for determining which methods work best, they do not necessarily provide sufficient insight into the general quality of our methods for real-life applications. Beyond the outcome of a typical NLP experiment, two questions often need to be addressed before we know whether a method is suitable for a real-life application. First, what kinds of errors does the method make, and how problematic are they for the application? Second, how predictive are results obtained on the benchmark sets for the data that will be used in the real-life application? This project aims to address these two questions by combining advanced systematic error analyses with a formal comparison of textual data and language models.

Though potentially erroneous correlations were still relatively easy to identify in the era of extensive feature engineering and methods such as k-nearest neighbors, Naive Bayes, logistic regression and SVMs, this has become more challenging now that technologies predominantly make use of neural networks. The field has become increasingly interested in exploring ways to interpret neural networks, but, once again, many studies focus on field-internal questions (what linguistic information is captured? Which architectures learn compositionality, and to what extent?). We aim to take this research a step further and see if we can use insights into the workings of deep models to predict how they will perform for specific applications that use data different from the original evaluation data. Both error analysis and formal comparison methods will contribute to establishing the relation between generic language models, task-specific training data, evaluation data and “new data”. By gaining a more profound understanding of these relations, we aim to define metrics that can be used to estimate or even predict to what extent results on a new dataset will be similar to those reported on the evaluation data (both in terms of overall performance and in terms of the types of errors).

Full proposal

Candidate: Kata Naszadi

Supervisor: Christof Monz
Co-supervisor: Frans Oliehoek

Research Project Description

In this sub-project, we are looking for a PhD candidate interested in combining deep learning and natural language processing research to model complex contextual information to significantly improve the quality of dialog systems.

While current deep learning methods have been shown to be very effective in generating fluent utterances, these utterances are often only poorly connected to the context of the conversation. In this project, we will investigate the role of context (agent or environment) in natural dialogue generation, and explore several research directions such as the detection and representation of knowledge gaps, the generation of contextually appropriate responses, and novel ways to represent and access large amounts of contextual information.

Full proposal

Candidate: Bernd Dudzik

Supervisor: Hayley Hung
Co-supervisor: Dan Balliet

Research Project Description

An important but under-explored problem in computer science is the automated analysis of conversational dynamics in large unstructured social gatherings such as networking or mingling events. Research has shown that attending such events contributes greatly to career and personal success. While much progress has been made in the analysis of small pre-arranged conversations, scaling up robustly presents a number of fundamentally different challenges. Moreover, how these interactions translate into actual collaborations is not yet understood.

Unlike analysing small pre-arranged conversations, during mingling, sensor data is seriously contaminated. Moreover, determining who is talking with whom is difficult because groups can split and merge at will. A fundamentally different approach is needed to handle both the complexity of the social situation as well as the uncertainty of the sensor data when analysing such scenes.

The main aim of the project is to address the following question: How can multi-sensor processing and machine learning methods be developed to model the dynamics of conversational interaction in large social gatherings using only non-verbal behaviour? The focus of this project is to measure conversation quality from multi-sensor streams in crowded environments, and its relationship with someone’s willingness to collaborate with other conversation partners.

The successful applicants will develop automated techniques to analyse multi-sensor data (video, acceleration, audio, etc) of human social behavior. They will interact closely with a PhD student from social science and possibly other PhD students on the Hybrid Intelligence project. 

Full proposal

Candidate: Wijnand van Woerkom

Supervisor: Henry Prakken
Co-supervisor: Davide Grossi

Research Project Description

This project aims to explain the outcomes of data-driven machine-learning applications that support decision-making procedures to the end users of such applications, such as lawyers, business people or ordinary citizens. The techniques should apply in contexts where a human decision maker is informed by data-driven algorithms and where the decisions have ethical, legal or societal implications. They should generate explanations of the outputs produced for specific inputs, such that the reasons for an output can be understood and critically examined for their quality. The project will especially focus on explaining ‘black-box’ applications, using model-agnostic methods that assume only access to the training data and the possibility to evaluate a model’s output for given input data. This makes the explanation methods independent of a model’s internal structure, which is important since in many real-life applications the learned models will not be interpretable or accessible, for instance when the model is learned by deep learning or when the application is proprietary.
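The model-agnostic setting described here (access only to inputs and outputs, never to internals) can be illustrated with a minimal perturbation-based sketch. Both the stand-in model and the finite-difference importance measure below are illustrative assumptions, not the project's actual method:

```python
# Sketch of a model-agnostic, perturbation-based feature-importance method:
# only the model's input/output behaviour is queried.

def black_box(x):
    # Stand-in for any learned model; here a simple linear scoring rule.
    return 2.0 * x[0] - 0.5 * x[1] + 0.1 * x[2]

def feature_importance(model, x, eps=1e-3):
    """Estimate each feature's local influence by finite differences."""
    base = model(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps           # nudge one feature at a time
        importances.append((model(perturbed) - base) / eps)
    return importances

print(feature_importance(black_box, [1.0, 1.0, 1.0]))
# the first feature dominates the local explanation, the second counts against
```

Because only `model(...)` calls are used, the same procedure would apply unchanged to a deep network or a proprietary system.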

Full proposal

Candidate: Merle Reimann

Supervisor: Koen Hindriks
Co-supervisor: Christof Monz

Research Project Description

A key challenge in human-machine dialogues is to provide more flexibility to a user and allow the user to co-regulate a conversation (instead of being directed by the agent’s conversation script or flow). We will develop a conversational agent that is able to establish and maintain a common understanding in co-regulation with its user, by offering the flexibility to deviate from the conventional sequences in structured question-and-answer or instructional dialogues. This may, for example, occur in education when a child is asked to solve a math problem and is not sure about the answer, in health care when a patient shows hesitance when answering a survey question and would like to ask a clarification question, or when a user is unclear about a cooking assistant’s instructions.

In such cases, rather than moving on to the next math problem or survey question, the user should be allowed to deviate from the main dialogue line (‘the happy flow’) to come to a mutual understanding and restore common ground. We will collect conversational data that can be used to extract common conversational patterns indicating that a user wants to take control of the dialogue flow, and will research and design an agent that is able to handle this. For example, the agent should be able to decide to either (i) ask a follow-up question, (ii) provide feedback (verbalize its understanding of the human’s answer), or (iii) give additional explanatory information to assist the user.

A second aim of this project is to address the level of user engagement in such highly structured conversations, which tend to be repetitive, by enabling the agent to memorize and refer back to relevant pieces of information in the conversation history or in earlier conversations with the same user, and by incorporating variation in the agent’s utterances.
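As a toy illustration of the choice between the three repair moves named above: the cues and rules below are invented placeholders (the project will instead learn such patterns from collected conversational data):

```python
# Illustrative-only sketch of choosing a dialogue repair move from
# surface cues in the user's reply. Cues and rules are made up.

def choose_repair_move(user_reply):
    reply = user_reply.lower()
    if reply.endswith("?"):
        # The user asks a clarification question -> explain.
        return "give additional explanatory information"
    if any(cue in reply for cue in ("not sure", "i think", "maybe")):
        # Hesitation markers -> probe further before moving on.
        return "ask a follow-up question"
    # Otherwise verbalize the agent's understanding of the answer.
    return "provide feedback"

print(choose_repair_move("What do you mean by numerator?"))
print(choose_repair_move("I'm not sure, maybe 42"))
print(choose_repair_move("The answer is 42"))
```

A learned agent would replace these hand-written cues with patterns extracted from real dialogues, but the decision structure (detect a deviation signal, pick a repair move, then return to the main flow) is the same.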

Full proposal

Candidate: Burcu Arslan

Supervisor: Rineke Verbrugge
Co-supervisor: Catholijn Jonker

Research Project Description

In hybrid intelligent systems in which humans and agents interact together, it is important to be able to understand and detect non-cooperative behavior such as lying and other forms of deception, both in people and in software agents. Cognitive scientists have shown that understanding the concept of lying and being able to maintain a lie over time requires second-order theory of mind: reasoning about what the other person (or agent) thinks you think. The same level of theory of mind is also required for detecting whether others are lying to you. In this PhD project we will:

  • investigate the logical and computational foundations of deception and deception detection in hybrid groups;
  • lay the theoretical groundwork for modelling and analyzing non-cooperative behavior in several communicative contexts such as negotiation games and coalition formation games;
  • develop principled methods for the design of software agents that can detect when other group members are engaged in non-cooperative behavior such as lying;
  • build agent-based models and/or computational cognitive models of deception and deception detection;
  • use simulation experiments in order to predict the outcomes of lab experiments to be performed.

The objective of the temporary position is the production of a number of research articles in peer-reviewed scientific journals and conference proceedings, which together will form the basis of a thesis leading to a PhD degree (dr.) at the University of Groningen.

Full proposal

Candidate: Íñigo Martínez De Rituerto De Troya

Supervisor: Roel Dobbe
Co-supervisor: Virginia Dignum

Research Project Description

This PhD project starts from two central assumptions. Firstly, that all “artificial intelligence systems” are hybrid, requiring the integration of human and machine activities. More fundamentally, AI systems are sociotechnical: developed through myriad design trade-offs and “hard choices”, and situated in a societal context where the system informs, influences or automates human decisions. Secondly, that AI systems contribute to “programmable infrastructures”, meaning that these systems do not stand on their own but are integrated into and restructure existing infrastructures and organizations, often with implications for democratic governance and sovereignty.

The PhD candidate will work in a growing team addressing issues of hard choices and programmable infrastructures, consisting of people with different disciplinary backgrounds. The position focuses on understanding how human factors are crucial for informing the design of AI systems, making sure these are safe in their behavior, just in their treatment of people, and designed so that affected stakeholders are empowered to hold the system and its managers/operators/developers accountable.

The candidate will work to marry notions in traditional computer science and systems engineering with methods from situated design, participatory design, computational empowerment, action research, science and technology studies, and other disciplines aiming to increase empowerment and participation in design. The project studies and facilitates the implementation of a data-driven, learning-based control scheme in the operation of electrical distribution networks, working closely together with Dutch utility companies and other societal stakeholders working on the transition to renewable energy resources. A second case study will be pursued in a more administrative context within the Digicampus.

In these studies, the PhD candidate will work together with stakeholders from the different sectors to both study and inform system development. The aim is (1) to understand what human factors inform the development process of AI systems and how PD and other design methods can facilitate this, and (2) what human dimensions in the situated context need to be addressed to integrate and operate the AI system in a safe and just manner, seeking collaboration with appropriate fields of expertise. For both these contributions, the PhD candidate will be leaning on emerging literature to structure his/her investigations and seek advice and collaboration across the different disciplines within the department, the Hybrid Intelligence Centre and internationally.

The selected candidate will also play a central role in building an international community of scholars advancing sociotechnical and situated studies of AI systems. This project will benefit from collaborations with dr. Seda Gürses in the Multi-Actor Systems department, prof. Virginia Dignum and the Responsible AI group at Umeå University in Sweden, and prof. Joan Greenbaum.

Full proposal

Candidate: Maria Heuss

Supervisor: Maarten de Rijke
Co-supervisor: Koen Hindriks

Research Project Description

Learning to rank is at the heart of modern search and recommender systems. Increasingly, such systems are optimized using historical, logged interaction data. Such data is known to be biased in a myriad of ways (position bias, trust bias, selection bias, …). In recent years, various de-biasing methods have been proposed, for different types of bias. These methods are limited, either in the types of bias they address, the assumptions about user behavior they make, or the degree to which they are actionable. We will design debiasing methods that are explainable and that can be applied selectively, depending on context, user (or user model), and additional constraints such as fairness.

Interaction with large amounts of information is one of the original scenarios for hybrid intelligence. For interactions to be effective, algorithmic decisions and actions should be explainable. In the context of learning to rank from logged interaction data for search and recommendation, this implies that the underlying debiasing methods need to be explainable – in terms of the bias they aim to address, the assumptions they make, the re-estimations they perform, and their impact on learned search and recommendation behavior. Importantly, the explanations should be actionable in the sense that they inform the target user about the changes required to obtain a different outcome.
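One standard remedy for the position bias mentioned above is inverse propensity scoring (IPS), which can be sketched as follows. The log, the propensities and the simple click-credit statistic are illustrative assumptions; real methods must also estimate the propensities from data rather than assume them known:

```python
# Minimal sketch of inverse-propensity-scored (IPS) click counting.

# Logged interactions: (document id, position shown, clicked?)
log = [("d1", 1, True), ("d2", 2, False), ("d2", 1, True),
       ("d1", 3, True), ("d2", 3, False)]

# Assumed examination propensity per position: users rarely look at low ranks,
# so a click at rank 3 is stronger evidence of relevance than one at rank 1.
propensity = {1: 1.0, 2: 0.5, 3: 0.25}

def ips_click_credit(log, propensity):
    """Re-weight each click by 1/propensity to correct for position bias."""
    credit = {}
    for doc, pos, clicked in log:
        if clicked:
            credit[doc] = credit.get(doc, 0.0) + 1.0 / propensity[pos]
    return credit

print(ips_click_credit(log, propensity))
# d1: 1/1.0 + 1/0.25 = 5.0 ; d2: 1/1.0 = 1.0
```

Raw click counts would score d1 and d2 much closer together (2 vs 1); the IPS weights make the low-rank click on d1 count for more, which is exactly the kind of re-estimation an explainable debiasing method would need to expose to its users.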

Full proposal

Candidate: Sharvaree Vadgama

Supervisor: dr. Erik Bekkers
Co-supervisor: Jakub Tomczak

Research Project Description

The aim of this project is to model, study, and visualize decision-making processes in (convolutional) neural networks. In particular, emphasis is put on explainability in critical applications where reliability and trustworthiness of AI systems are crucial, such as in the medical domain.

A main theme will be the design of AI systems through recurrent NNs, in which the focus is put on modeling (and visualizing) the dynamics of decision making rather than relying on the static black-box mechanics of feed-forward networks. For example, when tracing a blood vessel in medical image data, one does not need to represent or remember the vessel as a whole (a feedforward mechanism); it is more important to learn how to connect local line segments (the dynamics of reasoning). Similarly, in image classification problems, it is important to learn how local cues relate to each other and how to “connect the dots”, rather than trying to come to a decision given all information at once. Being able to capture and model such dynamics of reasoning opens doors for interaction with and visualization of these systems, which leads to explainability. Your research will build on an exciting mix of geometric (equivariance, invertibility, PDEs) and probabilistic (VAEs, normalizing flows) principles to ensure the reliability and interpretability of AI systems.
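The “connect the dots” idea can be caricatured in a few lines: the toy tracer below follows a bright curve through an image by repeatedly choosing the most promising neighbouring pixel, i.e. it reasons about local steps instead of the whole image at once. The greedy rule, the synthetic image, and all names are illustrative assumptions, not the learned recurrent architecture the project will develop.

```python
import numpy as np

def trace(image, start, steps):
    """Greedily trace a bright curve: at each step move to the brightest
    unvisited 8-neighbour -- a stand-in for 'connecting local line
    segments' rather than representing the whole structure at once."""
    path = [start]
    visited = {start}
    r, c = start
    for _ in range(steps):
        best, best_val = None, -1.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (nr, nc) in visited or (dr == 0 and dc == 0):
                    continue
                if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                    if image[nr, nc] > best_val:
                        best, best_val = (nr, nc), image[nr, nc]
        if best is None or best_val <= 0:       # no bright continuation: stop
            break
        r, c = best
        path.append(best)
        visited.add(best)
    return path

# Synthetic "vessel": a bright diagonal in a dark background.
img = np.zeros((6, 6))
for i in range(6):
    img[i, i] = 1.0
print(trace(img, (0, 0), 10))
```

A learned recurrent model would replace the hand-coded greedy rule with a trained step predictor, making the per-step decisions available for visualization and interaction.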

Full proposal

Candidate: Loan Hu

Supervisor: Stefan Schlobach
Co-supervisor: Victor de Boer

Research Project Description

In HI, we envision individual agents that have access to a global set of knowledge sources, a specific formalisation of their own knowledge (and beliefs), and some notion of knowledge about the other agents in their network, in particular about what those agents know.

Knowledge graphs (KGs) can play an important role here, both in storing and providing access to global knowledge that is common and accessible to human and artificial agents alike, and in storing the local knowledge of individual agents in a larger network of agents.

Unfortunately, current formalisms are not sufficiently well designed to work with complex, conflicting, dynamic and contextualised knowledge. To make KGs suitable formalisms for data and knowledge exchange in an HI network, individual agents must be able to adapt their own knowledge in a KG (or at least the active part they reason with) with respect to their interactions with one or more actors in their network.
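A minimal sketch of such interaction-driven adaptation is given below: each agent holds its own set of triples and, after interacting with a peer, adopts only those statements that do not conflict with its current beliefs. The triple representation, the toy notion of conflict, and all names are illustrative assumptions, not the knowledge representation solutions the project will develop.

```python
class Agent:
    """Toy agent holding a personal knowledge graph as (s, p, o) triples."""

    def __init__(self, name, triples):
        self.name = name
        self.kg = set(triples)

    def conflicts(self, triple):
        # Toy conflict notion: same subject and predicate, different object.
        s, p, _ = triple
        return {t for t in self.kg if t[0] == s and t[1] == p and t != triple}

    def interact(self, other):
        """Adopt the peer's statements that do not conflict with own beliefs."""
        for t in other.kg:
            if not self.conflicts(t):
                self.kg.add(t)

a = Agent("A", {("amsterdam", "capital_of", "netherlands")})
b = Agent("B", {("amsterdam", "capital_of", "netherlands"),
                ("amsterdam", "population", "872757")})
a.interact(b)
print(sorted(a.kg))
```

In the project's setting, the conflict check would be replaced by proper contextualised reasoning, so that conflicting or context-dependent statements can coexist rather than simply being rejected.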

This research is part of the Research Line “Adaptive HI”. Here, adaptivity mostly refers to the ability to represent and adapt knowledge dynamically, based on context and on interaction with other agents. We aim to contribute to Research Question 3: “How can learning systems accommodate changes in user preferences, environments, tasks, and available resources without having to completely re-learn each time something changes?” by developing new knowledge representation solutions for representing such contexts.

Full proposal

Candidate: Putra Manggala

Supervisor: Eric Nalisnick
Co-supervisor: Holger Hoos

Research Project Description

This project aims to use Bayesian priors defined in function space to enable data-efficient and human-guided model adaptation. Bayesian modeling is an appealing paradigm from which to build collaborative hybrid intelligent systems, because priors allow existing information and human expertise to be given primacy in model specification. One obstacle that often prevents the adoption of the Bayesian paradigm is prior specification. The prior must be specified as a distribution on parameter space, which is high-dimensional and complicated for contemporary models (e.g. neural networks). It is typically challenging to translate human knowledge and intuitions, which are often contextualized in the space of data, to the parameter space. Our recent work overcomes this gap between function and parameter space by defining the prior on the former and then reparameterizing it into a proper distribution on the latter. The intuition is that the prior encourages the model predictions to agree with those of a teacher or reference model, which could be a human (imitation learning), a model for a related task (transfer learning), a previous iteration of the same model (continual learning), or a less expressive model (regularization). We then leverage these formulations for HI scenarios in which the model needs to be adapted quickly while protecting against overfitting or respecting human inputs. We plan to validate our results in a healthcare application, in particular the automated gating of cytometry data.
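To make the teacher-agreement intuition concrete, the sketch below realizes a function-space penalty in the simplest possible setting: a linear model whose predictions are pulled toward those of a reference (“teacher”) model. The data, the teacher, and the ridge-like closed form are illustrative assumptions; the project targets far richer (e.g. neural) models and proper Bayesian priors rather than a penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_teacher = np.array([1.0, -2.0, 0.5])           # trusted reference model
y = X @ np.array([1.2, -1.8, 0.4]) + 0.1 * rng.normal(size=50)

# Minimize ||Xw - y||^2 + lam * ||Xw - X w_teacher||^2:
# the second term is a function-space penalty -- it constrains predictions,
# not parameters directly. Setting the gradient to zero gives the closed form
# (1 + lam) X^T X w = X^T (y + lam X w_teacher).
lam = 5.0    # strength of agreement with the teacher
A = (1 + lam) * X.T @ X
b = X.T @ (y + lam * X @ w_teacher)
w = np.linalg.solve(A, b)
print(w)
```

In this linear case the solution is exactly `(w_ols + lam * w_teacher) / (1 + lam)`, i.e. an interpolation between the data fit and the teacher; as `lam` grows, the model defers increasingly to the reference.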

Full proposal

Postdoc: Erman Acar

Supervisor: Stefan Schlobach
Co-supervisor: Frank van Harmelen

Project Description:

We will develop a game-theoretic framework that can capture cooperation scenarios in heterogeneous groups (a mix of human and artificial agents). Such a study will require the development of the necessary game-theoretic concepts, amongst them refinements of existing equilibrium concepts, bounded-rationality models, and provably good guarantees.
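One standard bounded-rationality model that such a framework might refine is the logit quantal-response equilibrium, in which players respond noisily rather than perfectly best-responding. The fixed-point sketch below computes one for a toy two-player coordination game; the game, the parameters, and the function names are illustrative assumptions, not the consortium's framework.

```python
import numpy as np

def softmax(x, beta):
    z = np.exp(beta * (x - x.max()))   # shift by max for numerical stability
    return z / z.sum()

def quantal_response_equilibrium(A, B, beta, iters=500):
    """Fixed-point iteration for a logit quantal-response equilibrium of a
    two-player matrix game. beta controls rationality: large beta approaches
    exact best response, small beta approaches uniformly random play.
    A[i, j]: row player's payoff; B[i, j]: column player's payoff."""
    p = np.full(A.shape[0], 1.0 / A.shape[0])   # row player's mixed strategy
    q = np.full(A.shape[1], 1.0 / A.shape[1])   # column player's mixed strategy
    for _ in range(iters):
        p = softmax(A @ q, beta)                # noisy best response to q
        q = softmax(B.T @ p, beta)              # noisy best response to p
    return p, q

# Coordination game: both players prefer matching actions, action 0 pays more.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
p, q = quantal_response_equilibrium(A, A, beta=2.0)
print(p, q)
```

For heterogeneous human/AI groups, one direction is to give each player its own `beta` (or a different noise model entirely), capturing that human and artificial agents deviate from perfect rationality in different ways.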

Full proposal


Research Project Description

This sub-project aims to push machine learning beyond traditional settings that assume a fixed dataset. Specifically, we will investigate interactive learning settings in which two or more learners interact by giving each other feedback in order to reach an outcome that is desirable from a system designer's perspective. The goal is to better understand how to structure interactions so as to progress effectively toward the desired outcome state, and to develop practical learning techniques and algorithms that exploit the resulting insights.
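The simplest instance of such a feedback interaction is one learner adjusting its behaviour from another agent's evaluations. In the toy loop below, a "teacher" scores a "learner's" proposed actions until the learner settles on the action the teacher prefers; all names, numbers, and the update rule are illustrative assumptions, not the project's algorithms.

```python
import random

random.seed(0)
true_best = 2                      # the action the teacher wants found
values = [0.0] * 4                 # learner's value estimate per action
counts = [0] * 4

def teacher_feedback(action):
    """Second agent in the loop: evaluates the learner's proposal."""
    return 1.0 if action == true_best else -1.0

for step in range(200):
    if random.random() < 0.1:                      # occasionally explore
        a = random.randrange(4)
    else:                                          # exploit current estimates
        a = max(range(4), key=lambda i: values[i])
    r = teacher_feedback(a)
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]       # incremental mean update

print(values.index(max(values)))
```

The project's question is how to structure richer versions of this interaction, e.g. what feedback the teacher should give, and when, so that the joint system provably progresses toward the designer's desired outcome.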

Job Requirements

We are looking for a curiosity-driven researcher who is motivated to push the boundaries of machine learning in interactive settings. We expect the project to comprise both theoretical work (e.g., proving theorems) and empirical work (e.g., running simulations and analyzing the results). Depending on the skills and aptitude of the candidate, the research could focus more on the former or the latter.

Strict requirements:

  • a PhD degree in AI or closely related topics in computer science, math, or physics.

Other desiderata:

  • thorough knowledge of the general areas of reinforcement learning, decision making under uncertainty, and/or other forms of interactive machine learning, such as generative adversarial networks or online learning
  • a track record of international publications
  • good coding skills and experience with contemporary machine learning frameworks (e.g., TensorFlow, PyTorch)
  • fluent in English
  • self-motivated
  • team player: willing to initiate collaborations with other partners in the project.    

To apply for this position, click here

Supervisor: Erman Acar

Research Project Description
We will design an adaptive system, to be used as a component in an artificial (virtual or physical) agent, that enables structural updates to causal diagrams through logical reasoning and learning processes. We will study its theoretical properties and test it empirically in virtual and physical multi-agent environments, using the empirical findings to refine its theory.
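As a toy illustration of structurally updating a causal diagram from experience, the sketch below drops a hypothesized edge when simulated interventional data shows the supposed cause has no effect. The edge-set representation, the data, and the threshold test are illustrative assumptions, not the project's framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Current causal diagram as a set of directed edges; X -> Y is hypothesized.
edges = {("X", "Y"), ("Z", "Y")}

# Simulated interventions do(X=0) vs do(X=1); in truth Y depends only on Z.
n = 2000
z = rng.normal(size=n)
y_do_x0 = z + 0.1 * rng.normal(size=n)   # outcomes under do(X=0)
y_do_x1 = z + 0.1 * rng.normal(size=n)   # outcomes under do(X=1): same law

def update_edge(edges, cause, effect, y0, y1, threshold=0.1):
    """Remove cause -> effect if intervening on the cause barely shifts
    the effect's mean -- a crude structural update from experience."""
    if abs(y1.mean() - y0.mean()) < threshold:
        edges = edges - {(cause, effect)}
    return edges

edges = update_edge(edges, "X", "Y", y_do_x0, y_do_x1)
print(sorted(edges))
```

The project's system would go further, combining such learning signals with logical reasoning to decide when an update to the diagram is warranted, and also adding edges, not only removing them.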

Full proposal

We value the privacy of your information. The information that you submit in your application is subject to our data privacy statement.