We are hiring for 27 PhD positions before mid-2020. If you are interested in any of these positions, get in touch with the relevant team member mentioned below. For further details about how to apply and about deadlines, see below.

Before you apply, try to find out more about our research: watch our video, read our research page, read what the media wrote about our work, read the public version of our research plan, and look at the publications of the relevant team members.

Projects with PhD Positions already filled
Projects with open PhD positions

PhD candidate: Aishwarya Suresh

Supervisor: Aimee van Wynsberghe

Research Project Description
Many current AI systems implicitly support or constrain human moral judgement, e.g. information feeds on Facebook, recommendation systems, and targeted advertising. The goal of this project is to make this support (or constraint) explicit as a step towards empowering better human moral judgements. Our working assumption is that AI should support, but not take over, human moral judgement and decisions. To determine the desirable role of AI support, we first need to better understand human moral judgement, and particularly the importance of context in it. Two case studies will be carried out on how AI may support human moral judgement in the context of changing values. On the basis of these, desirable (and undesirable) forms of AI support will be identified, and the insights will be translated into a number of design principles for AI support of human moral judgement.

Full proposal

PhD candidate: Cor Steging

Supervisor: Bart Verheij

Research Project Description
A core puzzle in today’s artificial intelligence is how knowledge, reasoning and data are connected. To what extent can knowledge used in reasoning be recovered from data with implicit structure? Can such knowledge be correctly recovered from data? To what extent does knowledge determine the structure of data that results from reasoning with the knowledge? Can knowledge be selected such that the output data generated by reasoning with the knowledge has desired properties? 

By investigating the relations between knowledge, reasoning and data, we aim to develop mechanisms for the verification and evaluation of hybrid systems that combine manual knowledge-based design and learning from data. The focus will be on structures used in reasoning and decision-making, in particular logical and probabilistic relations (as in Bayesian networks) and reasons (pro and con) and exceptions (as in argument-based decision making).
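The reasons (pro and con) with exceptions mentioned above can be made concrete in a few lines. The following is an illustrative sketch, not the project's actual formalism: the loan scenario, the weights, and the `decide` interface are all invented for the example.

```python
# Toy sketch of argument-based decision making: each reason has a premise,
# a signed weight (pro > 0, con < 0), and exceptions that can disable it.

def decide(case, reasons):
    """Sum the weights of applicable reasons; a positive total means accept."""
    total = 0.0
    for reason in reasons:
        # A reason only counts if its premise holds and no exception applies.
        if reason["premise"](case) and not any(exc(case) for exc in reason["exceptions"]):
            total += reason["weight"]
    return "accept" if total > 0 else "reject"

# Hypothetical example: deciding whether to grant a loan.
reasons = [
    {"premise": lambda c: c["income"] > 30000, "weight": 1.0, "exceptions": []},
    {"premise": lambda c: c["debt"] > 10000, "weight": -1.5,
     "exceptions": [lambda c: c["has_guarantor"]]},  # a guarantor cancels the con reason
]

print(decide({"income": 40000, "debt": 15000, "has_guarantor": True}, reasons))
```

A learning component could then try to recover such reason structures from decision data, which is one way to read the knowledge/data questions posed above.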

Full proposal

PhD candidate: Michiel van der Meer

Supervisor: Catholijn Jonker

Research Project Description
The project on perspectives in deliberation develops Artificial Intelligence techniques to uncover the underlying structures of debates and group deliberations, with the idea that we can help participants in a debate or deliberation understand why others hold a different opinion. To this end we first want to use computational linguistics to extract what we call micro-propositions from text: proposition mining. Second, we want to model the implications of these propositions for stakeholders: implication mining. Third, we want to extract and understand the perspectives of the stakeholders on these implications: perspective mining.

In addition to understanding the debate and the perspectives, the AI you develop will seek to interact with stakeholders about the interpretations of what they brought in as statements and arguments. 

Full proposal

PhD candidate: Tiffany Matej

Supervisor: Dan Balliet

Research Project Description
This PhD project is a collaboration between psychology and computer science to develop hybrid intelligence that can understand, predict, and potentially aid in initiating collaborative behavior. To build such machines, we must deepen our understanding of how people select their partners and initiate collaborations. The project will apply state-of-the-art methods and techniques from both fields to advance our understanding of these issues.

Partner selection is fundamental to understanding how people initiate cooperative relations and avoid being exploited by non-cooperative individuals – two key features of human sociality thought to underlie why humans are such a cooperative species (see Barclay, 2013). This research will test and develop theory about how people choose cooperative partners. The project will use both naturalistic settings (e.g., social networking at scientific conferences) and experimental settings (e.g., experimental group tasks) to examine the non-verbal and verbal behaviors during social interactions that predict whether people select another person (or not) as a collaboration partner. During these studies, participants will wear multi-modal sensors and be video-recorded while interacting with other people for the first time. These recordings will be used to capture non-verbal and verbal behaviors that can predict how people evaluate their interaction partner (e.g., their traits, motives), the social interaction (e.g., closeness, social power), and behavioral motivations (e.g., avoiding versus approaching the person in future interactions).

This PhD project is a collaboration between Psychology (Daniel Balliet) and Computer Science (Hayley Hung, Rineke Verbrugge), and the PhD candidate will also work closely with another PhD student supervised directly by Hayley Hung. The candidate should be open to working in a multi-disciplinary team and have a general interest in establishing a closer connection between psychology and computer science.

Full proposal

PhD candidate: Nicole Orzan

Supervisor: Davide Grossi

Research Project Description
Deliberation is a key ingredient for successful collective decision-making. In this project we will:

– investigate the mathematical and computational foundations of deliberative processes in groups of autonomous agents.
– lay the theoretical groundwork for modelling and analyzing how opinions are exchanged and evolve in groups of autonomous decision-makers.
– develop principled methods for the design of deliberation procedures that can provably improve the quality of group decisions.

Full proposal

PhD candidate: Emre Erdogan

Supervisor: Frank Dignum

Research Project Description
At Utrecht University, we are looking for several PhD students within the Hybrid Intelligence project. For the position on the computational theory of mind, we are looking for a PhD student to investigate and develop a computationally efficient theory of mind that is theoretically sound.

The research will focus on which type of general knowledge about other agents has to be kept in order to maintain a usable theory of mind. How much of the historical information and which premises about preferences, goals and motivations should be taken into account? The premise is that the theory of mind that needs to be kept depends on the level of interaction that is aimed for and the context in which this interaction takes place.

You will apply the developed theory in the area of collaborative privacy. A typical situation in collaborative privacy is that a user in a collaborative system, such as an online social network, would like to share content that could be co-owned by others, such as group pictures or collaboratively edited documents. At the time of sharing, the user has to take into account how the shared content can or will be used, how this sharing would affect other users, and take an action accordingly. By employing a computationally viable model of theory of mind, a user can reason about other co-owners’ privacy expectations on the content in a given context and make a sharing decision based on that.
The PhD candidate’s principal duty is to conduct scientific research, resulting in a PhD thesis at the end of the appointment. Other duties may include supporting the preparation and teaching of Bachelor’s and Master’s level courses, supervising student theses, managing research infrastructure and participating in public outreach. This position presents an excellent opportunity to develop an academic profile.
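The collaborative-privacy sharing decision described above can be sketched in miniature. This is a hypothetical illustration, not the project's model: the co-owner names, contexts, and the `share_decision` interface are all invented, and a real theory-of-mind model would be far richer than a lookup table.

```python
# Toy sketch: before posting co-owned content, a user agent consults its
# (simplified) model of each co-owner's mind -- their assumed privacy
# expectation per context -- and shares only if no modelled expectation
# is violated.

# What *I* believe each co-owner expects, per context (a stand-in for a
# computationally viable theory-of-mind model).
co_owner_models = {
    "alice": {"work": "private", "friends": "ok"},
    "bob": {"work": "ok", "friends": "ok"},
}

def share_decision(content_context, co_owners):
    """Return ('share', []) or ('withhold', [objecting co-owners])."""
    objections = [name for name in co_owners
                  if co_owner_models[name].get(content_context) == "private"]
    return ("withhold", objections) if objections else ("share", [])

print(share_decision("work", ["alice", "bob"]))     # alice's modelled expectation blocks sharing
print(share_decision("friends", ["alice", "bob"]))
```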

Full proposal

PhD candidate: Karine Miras

Supervisor: Guszti Eiben

Research Project Description
By bringing roboticists and ethicists together, the project addresses two fundamental issues. First, how can ethical requirements be translated into design and technical features? Second, we explore the trade-off between adaptivity and responsibility attribution, trying to find Pareto-optimal solutions.

Full proposal

PhD candidate: Niklas Höpner

Supervisor: Herke van Hoof

Research Project Description
At the University of Amsterdam, we are looking for a PhD candidate interested in combining (deep) reinforcement learning research with prior knowledge from knowledge graphs to learn explainable strategies for complex sequential tasks.
Many problems of practical interest concern optimising sequential decision making. Think, for example, of finding optimal trajectories for vehicles or robots, or deciding which medical tests to run. Methods for classical planning based on symbolic representations are typically explainable to human collaborators but rely on an exact problem description, while data-driven (e.g. reinforcement learning) approaches do not rely on a provided problem specification but are data-hungry and inscrutable. Combining the complementary strengths of these approaches is expected to advance the state of the art in knowledge representation and reinforcement learning.
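One simple way the two strengths can be combined is to let symbolic knowledge prune the action space a reinforcement learner must explore. The sketch below is an invented toy (a 5-state chain, a single hand-written precondition rule standing in for knowledge-graph-derived priors), not the project's method.

```python
import random

# Tabular Q-learning where a symbolic precondition rule masks out actions
# known to be useless, shrinking the space the data-driven learner explores.

ACTIONS = ["right", "left"]
GOAL = 4  # states 0..4 on a line; "right" moves toward the goal

def allowed(state):
    # Symbolic prior knowledge: never step left off the start state.
    return [a for a in ACTIONS if not (a == "left" and state == 0)]

def step(state, action):
    state = state + 1 if action == "right" else state - 1
    return state, (1.0 if state == GOAL else 0.0), state == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}
    rng = random.Random(0)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            acts = allowed(s)
            a = rng.choice(acts) if rng.random() < eps else max(acts, key=lambda x: Q[(s, x)])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[(s2, x)] for x in allowed(s2))
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

Q = q_learning()
policy = [max(allowed(s), key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # the learned greedy policy should move right toward the goal
```

The masking rule here is trivially hand-written; the research question above is how to obtain and exploit such constraints from structured knowledge while keeping the resulting strategy explainable.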

Full proposal

PhD candidate: Bram Renting

Supervisor: Holger Hoos

Research Project Description
The goal of this PhD project is to develop automated machine learning methods for changing/non-i.i.d. data, with applications to standard learning scenarios as well as to automated negotiation. Such techniques are key to enabling the efficient and robust use of machine learning systems and components in a broad range of human-centred AI applications, and they will also contribute to fundamental advances in machine learning. Automated negotiation scenarios are of particular interest, as they play a key role in systems dealing with potentially conflicting interests between multiple users or stakeholders.

Full proposal

PhD candidate: Anna Kuzina

Supervisor: Jakub Tomczak

Research Project Description
At Vrije Universiteit Amsterdam, we are looking for an enthusiastic PhD candidate who is interested in formulating and developing new models and algorithms for quantifying uncertainty and making decisions in changing environments. Our project “Continual learning and deep generative modeling for adaptive systems” focuses on fundamental research into combining various learning paradigms to build intelligent systems capable of learning continually and evaluating the uncertainty of the surrounding environment.

Adaptivity is a crucial capability of living organisms. Current machine learning systems are not equipped with tools that allow them to adjust to new situations and understand their surroundings (e.g., observed data). For instance, a robot should be able to adapt to a new environment or task and assess whether the observed reality is known (i.e., likely events) or whether it should contact a human operator due to unusual observations (i.e., high uncertainty). Moreover, we claim that uncertainty assessment is crucial for communicating with human beings and for decision making.

In this project, we aim at designing new models and learning algorithms by combining multiple machine learning methods and developing new ones. To quantify uncertainties, we prefer to use the deep generative modeling paradigm and frameworks like Variational Autoencoders and flow-based models. However, we believe that standard learning techniques are insufficient to update models and, therefore, continual learning (a.k.a. life-long learning or continuous learning) should be used. Since it is still an open question how continual learning ought to be formulated, we propose to explore different directions that could include, but are not limited to, Bayesian nonparametrics and (Bayesian) model distillation. Moreover, the combination of continual learning and deep generative modeling entails new challenges and new research questions.
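The "known vs. unusual" assessment above can be illustrated with a deliberately simplified stand-in for the deep generative models named in this project: fit a density model to familiar observations and escalate observations whose likelihood is too low. Everything in this sketch (the 1-D Gaussian, the data, the threshold rule) is invented for illustration.

```python
import math

# Fit a 1-D Gaussian to "known" observations; flag new observations whose
# log-likelihood falls below a crude threshold as high-uncertainty events
# that should be handed over to a human operator.

def fit_gaussian(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def log_likelihood(x, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

known = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]  # observations from the familiar environment
mu, var = fit_gaussian(known)
# Crude calibration: anything less likely than the least likely known point.
threshold = min(log_likelihood(x, mu, var) for x in known)

def escalate(x):
    """True if x is unusual enough to hand over to a human."""
    return log_likelihood(x, mu, var) < threshold

print(escalate(5.0), escalate(9.0))
```

A VAE or flow-based model plays the same role for high-dimensional data, and continual learning would let the density model itself track a changing environment.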

The PhD candidate will be part of the Department of Computer Science of the Vrije Universiteit Amsterdam (Computational Intelligence group) in a close partnership with the Institute of Informatics of the University of Amsterdam (Amsterdam Machine Learning Lab). Daily supervision will be performed by dr. Jakub Tomczak (VU). The promotor will be prof. A.E. Eiben (VU) and the co-promotor will be prof. M. Welling (UvA).

Full proposal

PhD candidate: Taewoon Kim

Supervisor: Piek Vossen

Research Project Description
To collaborate with AI, such as robots, both people and systems need to understand how they perceive shared situations differently. Understanding these differences is a first step in collaborating successfully. Communication about these situations is fundamental for resolving misunderstanding, explaining perspectives and informing the other. This project addresses the phenomena of identity, reference and perspective within communicative scenarios between AI and people in real-world situations. Knowledge and awareness of the context help communication between AI and people (successfully identifying and referring to the world), and through communication we create the common ground for understanding the context (identifying the world from different perspectives). This project tackles these two sides of the same coin by building personal relationships between people and AI in an adaptive environment to learn how to communicate about shared situations or contexts. By building a collective memory of shared encounters, we can define what there is, what is relevant and why we care.

Full proposal

PhD candidate: Annet Onnes

Supervisor: Silja Renooij

Research Project Description
At Utrecht University we are looking for an enthusiastic PhD candidate who is interested in raising and educating a new generation of artificially intelligent agents: adaptive agents that need to learn to abide by our rules in hybrid intelligent teams. Our project “Monitoring and constraining adaptive systems” focuses on fundamental research into integrating interpretable knowledge representation and reasoning with learning in the context of adaptive systems.

Since an adaptive system is allowed to change itself, we need to trust that it does not evolve into a system that violates constraints important for the environment at large in which it operates. Building upon existing frameworks such as probabilistic graphical models and deep generative modelling, we aim to design a monitoring system that is able to detect and react to violations of constraints, predict that violations are about to occur, issue warnings, and ultimately get the adaptive system back on track. The monitoring system should allow constraints to be captured in a human-intuitive way, so that they can easily be inspected and changed. Being able to predict the behaviour of an adaptive system also allows for analysing and explaining it, which are important aspects in facilitating communication and collaboration between human and artificial adaptive agents.

The PhD candidate will be part of the Department of Information and Computing Sciences (Intelligent Systems group) in close partnership with the Department of Computer Science of the Vrije Universiteit Amsterdam (Computational Intelligence group). Daily supervision will be performed by dr. Silja Renooij (UU) and dr. Jakub Tomczak (VU).
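The monitoring idea sketched in this description — human-inspectable constraints, warnings before violations, reports on hard violations — can be made concrete in a minimal form. The interface, constraint names, and perturbation-based "prediction" below are all invented for illustration; the project's actual monitors would build on probabilistic graphical models or generative models.

```python
# Constraints are captured declaratively as (name, predicate) pairs so they
# stay easy to inspect and change; the monitor checks each system state,
# issues a warning when a violation is near, and reports hard violations so
# the adaptive system can be steered back on track.

CONSTRAINTS = [
    ("speed-limit", lambda s: s["speed"] <= 100),
    ("battery-reserve", lambda s: s["battery"] >= 10),
]

def monitor(state, margin=0.1):
    events = []
    for name, ok in CONSTRAINTS:
        if not ok(state):
            events.append(("VIOLATION", name))
        # Crude "about to occur" prediction: re-check a slightly worsened state
        # (speed nudged up, everything else nudged down).
        elif not ok({k: v * (1 + margin) if k == "speed" else v * (1 - margin)
                     for k, v in state.items()}):
            events.append(("WARNING", name))
    return events

print(monitor({"speed": 95, "battery": 50}))   # near the speed limit
print(monitor({"speed": 120, "battery": 8}))   # two hard violations
```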

Full proposal


Research Project Description
The ambition for this Ph.D. position is to define design patterns for successful configurations of Hybrid Intelligent systems. Such patterns describe how different combinations of machine and human capabilities perform for a given task under a given set of circumstances. We aim to develop a corresponding pattern language to express such design patterns in conceptual, (semi-)formal or computational form, and an empirical method to validate the patterns.

Job Requirements
A successful candidate will have an MSc in Artificial Intelligence with an interest in Cognitive Science, or vice versa, or an equivalent degree with proven affinity with AI and Cognitive Science. The candidate should have skills and interest in modelling human-machine systems and collective or blended human-agent cognition, with an interest in empirically evaluating such models. Good communication and presentation skills in English are required, as well as the ability to work in a team and a strong commitment to research.

See also the detailed project description


Challenge: How can AI interpret and adapt to a human partner in a responsible way?
Change: You will develop machine reasoning techniques and conversational strategies.
Impact: AI will be able to constantly align with its human partner via conversation.

Job description

Why build AI systems that replace people if we can build AI systems that collaborate with people? Hybrid Intelligence is the combination of human and machine intelligence, expanding human intellect instead of replacing it. Our goal is to design Hybrid Intelligent systems, an approach to Artificial Intelligence that puts humans at the centre, changing the course of the ongoing AI revolution. The project will be recruiting 15 PhD or postdoc positions in total. For more information, see the project website.

At TU Delft, in collaboration with the University of Twente, we are looking for a PhD student on Interactive Machine Reasoning for Responsible Hybrid Intelligence. The project centers around the concept of an Electronic Partner (e-partner), an intelligent agent that can support its user in a variety of daily activities, for example changing habits. In previous research we have developed knowledge structures for representing desired habits and underlying personal values, and a conversational agent that elicits this information from the user. The goal of the current project is to develop machine reasoning techniques and conversational strategies that allow the e-partner to interpret and adapt this information at run-time, in interaction with the user and based on the context. In other words, the e-partner should be able to tune in to the needs of the user as it provides support, and assess whether its support (still) aligns with user needs. With this project we lay the foundations for the novel area of Interactive Machine Reasoning, in which meaning-making happens in coaction between human and technology, allowing people to maintain their personal space and agency. With this we move beyond a view of AI centered on autonomous decision making, towards Hybrid Intelligence.


Do you have a Master’s degree (or equivalent) in computer science, artificial intelligence or a related field, and a good command of the English language? Are you inspired by shaping the digital society of the future with people at the centre? Are you motivated by the prospect of creating the next generation of AI technologies that align with people’s norms and values and adapt to how people want to live their lives? Then send us your application! Expertise in one or more of the following areas is useful, but anyone with a CS, AI or related background who can contribute to the goals of the project is welcome to apply: multiagent systems, knowledge representation and reasoning, formal methods, conversational agents, user modelling, human-computer interaction, or context-aware systems.

For application information for this position click here: [link to university job site]


Research Project Description
Natural language processing has a strong tradition of experimental research, in which various methods are evaluated on gold-standard datasets. Though these experiments can be valuable for determining which methods work best, they do not necessarily provide sufficient insight into the general quality of our methods for real-life applications. Beyond the outcome of a typical NLP experiment, two questions often need to be addressed before knowing whether a method is suitable for a real-life application. First, what kinds of errors does the method make, and how problematic are they for the application? Second, how predictive are results obtained on benchmark sets for the data that will be used in the real-life application? This project aims to address these two questions by combining advanced systematic error analyses with formal comparison of textual data and language models.

Though potentially erroneous correlations were still relatively easy to identify in scenarios of old-fashioned extensive feature engineering and methods such as k-nearest neighbors, Naive Bayes, logistic regression and SVMs, this has become more challenging now that technologies predominantly make use of neural networks. The field has become increasingly interested in exploring ways to interpret neural networks, but, once again, many studies focus on field-internal questions (What linguistic information is captured? Which architectures learn compositionality, and to what extent?). We aim to take this research a step further and see whether we can use insights into the workings of deep models to predict how they will perform for specific applications that use data different from the original evaluation data. Both error analysis and formal comparison methods will contribute to establishing the relation between generic language models, task-specific training data, evaluation data and “new data”. By gaining a more profound understanding of these relations, we aim to define metrics that can be used to estimate or even predict to what extent results on a new dataset will be similar to those reported on the evaluation data (both in terms of overall performance and in terms of types of errors).
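A deliberately crude stand-in for the formal comparison methods proposed above: estimate how "new" a dataset is relative to the evaluation data via vocabulary overlap, one cheap signal for whether benchmark results are likely to transfer. The data and the `overlap` metric are invented for illustration; real comparisons would also consider, e.g., distributional and task-level differences.

```python
# Compare a "new" dataset against benchmark data by the fraction of its
# vocabulary that the benchmark has already seen.

def vocab(texts):
    return {w for t in texts for w in t.lower().split()}

def overlap(benchmark, new_data):
    b, n = vocab(benchmark), vocab(new_data)
    return len(b & n) / len(n)  # fraction of new-data vocabulary seen before

benchmark = ["the cat sat on the mat", "a dog chased the cat"]
in_domain = ["the dog sat on a mat"]
out_domain = ["quantum annealing minimises ising hamiltonians"]

print(overlap(benchmark, in_domain), overlap(benchmark, out_domain))
```

A low overlap score would suggest that performance figures reported on the benchmark should be trusted less for the new data.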

Job Requirements
The prospective candidate has a Master’s degree (MA/MSc) or equivalent in computational linguistics or a related field (AI, or Computer Science with a focus on NLP). Candidates from other fields with a strong background in machine learning can also apply. Solid programming skills are required. The project involves interdisciplinary collaboration, so the ability to communicate with researchers from different domains is important. Experience with or knowledge of statistical analysis is a plus.

Coming soon: To apply for this position click here


Research Project Description

In this sub-project, we are looking for a PhD candidate interested in combining deep learning and natural language processing research to model complex contextual information to significantly improve the quality of dialog systems.

While current deep learning methods have been shown to be very effective in generating fluent utterances, these utterances are often only poorly connected to the context of the conversation. In this project, we will investigate the role of context (agent or environment) in natural dialogue generation and explore several research directions, such as the detection and representation of knowledge gaps, the generation of contextually appropriate responses, and novel ways to represent and access large amounts of contextual information.

Job Requirements
• Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a closely related field;

• strong scientific and mathematical background in artificial intelligence and a strong interest in solving natural language problems;

• good academic record and eagerness to tackle complex scientific problems;

• ability to implement and evaluate learning algorithms, e.g. using Python deep learning toolkits;

• ability to work well in teams and communicate fluently in written and spoken English.

To apply for this position click here [Link to university application]


Research Project Description

This sub-project aims to push machine learning beyond traditional settings that assume a fixed dataset. Specifically, we will investigate interactive learning settings in which two or more learners interact by giving each other feedback to reach an outcome that is desirable from a system designer’s perspective. The goal is to better understand how to structure interactions so as to make effective progress towards the desirable outcome state, and to develop practical learning techniques and algorithms that exploit the resulting insights.
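A minimal instance of the setting described above, with every detail invented for illustration: a "learner" proposes values, a "teacher" who knows the target replies only with directional feedback, and the structure of the interaction guarantees progress towards the desirable outcome.

```python
# Two parties interact purely through feedback. The learner's proposals plus
# the teacher's "higher"/"lower" replies provably narrow the interval that
# contains the target, so the interaction converges in logarithmically many
# rounds.

def teacher_feedback(proposal, target):
    if proposal == target:
        return "accept"
    return "higher" if proposal < target else "lower"

def interactive_learning(target, lo=0, hi=1023):
    rounds = 0
    while True:
        proposal = (lo + hi) // 2
        rounds += 1
        fb = teacher_feedback(proposal, target)
        if fb == "accept":
            return proposal, rounds
        if fb == "higher":
            lo = proposal + 1
        else:
            hi = proposal - 1

print(interactive_learning(700))  # reaches the target within ~log2(1024) rounds
```

The research questions above concern what happens when the feedback is noisy, strategic, or itself produced by a learning agent, where such clean guarantees no longer hold.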

Job Requirements

We are looking for a curiosity-driven researcher who is motivated to push the boundaries of machine learning in interactive settings. We expect the project to consist of both theoretical parts (e.g., proving theorems) and empirical parts (e.g., running simulations and analyzing the results). Depending on the skills and aptitude of the candidate, the research could focus more on the former or the latter.

Strict requirements:

  • a PhD degree in AI or closely related topics in computer science, math, or physics.

Other desiderata:

  • thorough knowledge of the general area of reinforcement learning, decision making under uncertainty, and/or other forms of interactive machine learning such as generative adversarial networks or online learning;
  • a track record of international publications;
  • good coding skills and experience with contemporary machine learning frameworks (e.g., TensorFlow, PyTorch);
  • fluency in English;
  • self-motivation;
  • team spirit: willingness to initiate collaborations with other partners in the project.

To apply for this position click here


Research Project Description
This project aims to explain the outcomes of data-driven machine-learning applications that support decision-making procedures to the end users of such applications, such as lawyers, business people or ordinary citizens. The techniques should apply in contexts where a human decision maker is informed by data-driven algorithms and where the decisions have ethical, legal or societal implications. They should generate explanations for outputs on specific inputs, such that the reasons for an output can be understood and critically examined for their quality. The project will especially focus on explaining ‘black-box’ applications: it will focus on model-agnostic methods, assuming only access to the training data and the possibility to evaluate a model’s output for given input data. This makes the explanation methods independent of a model’s internal structure, which is important since in many real-life applications the learned models will not be interpretable or accessible, for instance when the model is learned by deep learning or when the application is proprietary.
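The model-agnostic idea can be shown in its simplest form: treat the model as a black box, perturb one input feature at a time, and report how much the output moves. The scoring rule below is a hypothetical stand-in for an opaque model; the explainer only assumes it can query outputs for chosen inputs, exactly the access assumption stated above.

```python
# Perturbation-based feature importance for a black-box model.

def black_box(x):
    # Hidden from the explainer: a linear scoring rule over three features.
    return 3 * x[0] + 0 * x[1] - 1 * x[2]

def feature_importance(model, x, delta=1.0):
    """Nudge each feature by delta and measure how much the output changes."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += delta
        scores.append(abs(model(perturbed) - base))
    return scores

print(feature_importance(black_box, [1.0, 2.0, 3.0]))
```

Here the second feature is (correctly) found to be irrelevant; richer model-agnostic methods build local surrogate models from many such queries rather than single perturbations.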

Job Requirements
We are looking for candidates with an MSc in AI, computer science, mathematics, data science or a related field, and with a strong interest in interdisciplinary AI research that combines the AI subfields of machine learning, data science, knowledge representation & reasoning, and human-computer interaction. Moreover, the candidate should have a commitment to developing computational tools and techniques for helping people make better decisions, and should be able to integrate various research methods and tools, such as formal methods, designing and implementing algorithms, and experimental evaluation.

See also the detailed project description

To apply for this position click here


Enabling co-regulation for long-term engaging semi-structured conversations

Research Project Description

A key challenge in human-machine dialogues is to provide more flexibility to the user and allow the user to co-regulate a conversation (instead of being directed by the agent’s conversation script or flow). We will develop a conversational agent that is able to establish and maintain a common understanding in co-regulation with its user, by offering the flexibility to deviate from the conventional sequences in structured question-and-answer or instructional dialogues. This may, for example, occur in education when a child is asked to solve a math problem and is not sure about the answer, in health care when a patient shows hesitance when answering a survey question and would like to ask a clarification question, or when a user is unclear about a cooking assistant’s instructions. In such cases, rather than moving on to the next math problem or survey question, the user should be allowed to deviate from the main dialogue line (‘the happy flow’) to come to a mutual understanding and restore common ground.

We will collect conversational data that can be used to extract common conversational patterns indicating that a user wants to take control of the dialogue flow, and will research and design an agent that is able to handle this. For example, the agent should be able to decide to either (i) ask a follow-up question, (ii) provide feedback (verbalize its understanding of the human’s answer), or (iii) give additional explanatory information to assist the user.

A second aim of this project is to address the level of user engagement in such highly structured conversations, which tend to be repetitive, by enabling the agent to memorize and refer back to relevant pieces of information in the conversation history or in earlier conversations with the same user, and by incorporating variation in the agent’s utterances.
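The happy flow with user-initiated deviations described above can be sketched as a tiny dialogue manager. The questions, trigger patterns, and replies are all invented; a real agent would detect deviation intents with learned models rather than substring matching.

```python
# A scripted Q&A agent with a "happy flow" of survey questions that lets the
# user deviate -- here, by asking for clarification or expressing hesitation --
# before returning to the main dialogue line.

QUESTIONS = ["How often do you exercise?", "How well did you sleep?"]
DEVIATION_PATTERNS = ["what do you mean", "i'm not sure", "can you explain"]

def wants_control(utterance):
    u = utterance.lower()
    return any(p in u for p in DEVIATION_PATTERNS)

def respond(question_index, utterance):
    """Return (reply, next_question_index)."""
    if wants_control(utterance):
        # Stay on the current question and repair common ground first.
        return ("Let me clarify: " + QUESTIONS[question_index], question_index)
    # Happy flow: acknowledge and move on.
    if question_index + 1 < len(QUESTIONS):
        return ("Thanks. " + QUESTIONS[question_index + 1], question_index + 1)
    return ("Thanks, that was the last question.", question_index)

print(respond(0, "What do you mean by exercise?"))
print(respond(0, "About three times a week."))
```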

Job Requirements

You have a background in Artificial Intelligence or Computational Linguistics, a strong interest in conversational agents and automated conversation management, and experience with natural language processing techniques. The ideal candidate also has experience with user-centered approaches to the design and evaluation of conversational agents.

Link to apply for this position [Application link]


Research Project Description

In hybrid intelligent systems in which humans and agents interact together, it is important to be able to understand and detect non-cooperative behavior such as lying and other forms of deception, both in people and in software agents. Cognitive scientists have shown that understanding the concept of lying and being able to maintain a lie over time requires second-order theory of mind: reasoning about what the other person (or agent) thinks you think. The same level of theory of mind is also required for detecting whether others are lying to you. In this PhD project we will:

  • investigate the logical and computational foundations of deception and deception detection in hybrid groups;
  • lay the theoretical groundwork for modelling and analyzing non-cooperative behavior in several communicative contexts such as negotiation games and coalition formation games;
  • develop principled methods for the design of software agents that can detect when other group members are engaged in non-cooperative behavior such as lying;
  • build agent-based models and/or computational cognitive models of deception and deception detection;
  • use simulation experiments in order to predict the outcomes of lab experiments to be performed.
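The role of second-order theory of mind in lie detection, mentioned above, can be illustrated with a toy belief model. Everything here is invented for the example: the point is only that the detector consults its model of what the speaker believes, not the world itself.

```python
# Toy second-order theory of mind for lie detection: B judges A's statement
# a lie when it contradicts what B believes A believes.

world = {"price": 100}               # the actual state of the world
a_belief = {"price": 100}            # first order: what A believes
b_belief_about_a = {"price": 100}    # second order: what B believes A believes

def is_lie(statement, hearer_model_of_speaker):
    """A statement counts as a lie if it contradicts the hearer's model of
    the speaker's beliefs (not merely the facts)."""
    key, value = statement
    believed = hearer_model_of_speaker.get(key)
    return believed is not None and believed != value

print(is_lie(("price", 150), b_belief_about_a))  # A claims 150; B thinks A believes 100
print(is_lie(("price", 100), b_belief_about_a))
```

Note that if B's second-order model were wrong (B wrongly thinks A believes 150), B would misclassify an honest mistake as a lie, which is why maintaining accurate nested beliefs matters for deception detection.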

The objective of the temporary position is the production of a number of research articles in peer-reviewed scientific journals and conference proceedings, which together will form the basis of a thesis leading to a PhD degree (dr.) at the University of Groningen.

Job Requirements

The successful candidate will investigate the logical and computational foundations of deception and deception detection in hybrid groups, and should have:

  • the motivation to pursue fundamental and interdisciplinary research at the interface of multi-agent systems and computational cognitive science, and a keen interest in the topic;
  • a master’s degree or equivalent in artificial intelligence, computer science, or computational cognitive science;
  • good analytical skills and a positive attitude towards interdisciplinary work.

To apply for this position, click here.


Research Project Description

This PhD project starts from two central assumptions. First, all “artificial intelligence systems” are hybrid, requiring the integration of human and machine activities; more fundamentally, AI systems are sociotechnical, shaped by myriad design tradeoffs and “hard choices”, and situated in a societal context where the system informs, influences or automates human decisions. Second, AI systems contribute to “programmable infrastructures”: they do not stand alone but are integrated into, and restructure, existing infrastructures and organizations, often with implications for democratic governance and sovereignty.

The PhD candidate will work in a growing team addressing issues of hard choices and programmable infrastructures, consisting of people with different disciplinary backgrounds. The position focuses on understanding how human factors are crucial for informing the design of AI systems in making sure these are safe in their behavior, just in their treatment of people, and allow affected stakeholders to be empowered to hold the system and its managers/operators/developers accountable.

The candidate will work to marry notions from traditional computer science and systems engineering with methods from situated design, participatory design, computational empowerment, action research, science-and-technology studies, and other disciplines aiming to increase empowerment and participation in design. The project studies and facilitates the implementation of a data-driven, learning-based control scheme in the operation of electrical distribution networks, working closely with Dutch utility companies and other societal stakeholders on the transition to renewable energy resources. A second case study will be pursued in a more administrative context within the Digicampus.

In these studies, the PhD candidate will work together with stakeholders from the different sectors to both study and inform system development. The aim is (1) to understand which human factors inform the development process of AI systems and how participatory design and other design methods can facilitate this, and (2) to identify which human dimensions in the situated context need to be addressed to integrate and operate the AI system in a safe and just manner, seeking collaboration with appropriate fields of expertise. For both contributions, the PhD candidate will lean on emerging literature to structure their investigations and seek advice and collaboration across the different disciplines within the department, the Hybrid Intelligence Centre and internationally.

The selected candidate will also play a central role in building an international community of scholars advancing sociotechnical and situated studies of AI systems. The project will benefit from collaborations with dr. Seda Gürses in the Multi-Actor Systems department, prof. Virginia Dignum and the Responsible AI group at Umeå University in Sweden, and prof. Joan Greenbaum.


Job Requirements

We seek a societally engaged engineer who loves to build things but is also intrinsically inclined to do so with a critical eye towards societal implications. Possible degree backgrounds are either a computer science field, an engineering field or an interdisciplinary field such as information systems or systems engineering and management, as long as you have a track record of building systems. You need to have experience with design in an engineering domain (electricity is great, but not necessary), and be versed in programming and data-driven methods (e.g. statistical learning, econometrics, artificial intelligence, control theory, optimization). Affinity with design methods, participatory methods or STS is a plus.

This position is a unique opportunity to develop yourself as an interdisciplinary scholar in the sociotechnical treatment of AI systems. It is not focused on building AI in theory: you will go out into the world to build and study what is built, while engaging with domain experts, citizens, policymakers and other important stakeholders. You also need the drive and openness to work with other disciplines from the social sciences and humanities. The department and mentorship provide a welcoming, safe and fun place to do such bridge-building work, while making sure you progress towards recognized contributions in this emerging field.

To apply to this position, click here.

Further positions with prof. Frank van Harmelen (VU).

Contact: prof. Maarten de Rijke

Research Project Description

Learning to rank is at the heart of modern search and recommender systems. Increasingly, such systems are optimized using historical, logged interaction data. Such data is known to be biased in a myriad of ways (position bias, trust bias, selection bias, …). In recent years, various de-biasing methods have been proposed, for different types of bias. These methods are limited, either in the types of bias they address, the assumptions about user behavior they make, or the degree to which they are actionable. We will design debiasing methods that are explainable and that can be applied selectively, depending on context, user (or user model), and additional constraints such as fairness.

Interaction with large amounts of information is one of the original scenarios for hybrid intelligence. For interactions to be effective, algorithmic decisions and actions should be explainable. In the context of learning to rank from logged interaction data for search and recommendation, this implies that the underlying debiasing methods need to be explainable – both in terms of the bias they aim to address, the assumptions they make, the re-estimations they make, and the impact on learned search and recommendation behavior. Importantly, the explanations should be actionable in the sense that they inform the target user about the changes required to obtain a different outcome.
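As one concrete illustration of de-biasing logged interaction data, the sketch below uses inverse propensity scoring (IPS) to correct for position bias, a standard counterfactual learning-to-rank technique. The function name and propensity values are hypothetical, and this is not necessarily the specific method the project will develop:

```python
# Hypothetical IPS sketch: clicks from logged data are reweighted by the
# probability that the user examined the position the item was shown at,
# which de-biases the click signal with respect to position bias.

import numpy as np

def ips_relevance_estimate(clicks, positions, propensities):
    """Estimate per-document relevance from logged clicks, dividing each
    click by the examination propensity of its displayed position."""
    clicks = np.asarray(clicks, dtype=float)
    props = np.asarray([propensities[p] for p in positions])
    return clicks / props  # unbiased under a position-based click model

# Examination probabilities per rank (assumed known or separately estimated).
propensities = {1: 1.0, 2: 0.5, 3: 0.25}
# A click at rank 3 counts four times as much as a click at rank 1,
# because users rarely examine rank 3 in the first place.
estimates = ips_relevance_estimate(clicks=[1, 0, 1], positions=[1, 2, 3],
                                   propensities=propensities)
print(estimates)  # [1. 0. 4.]
```

Note how the propensities themselves are an explainable modelling assumption: an actionable explanation of the de-biased ranking can point directly at the click model and its estimated examination probabilities.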

Job Requirements

  • an MSc in AI or CS;
  • a thorough understanding of both information retrieval and machine learning;
  • an interest in algorithm development and in its implications for the people impacted by those algorithms;
  • solid programming skills and experience with Python and machine learning frameworks;
  • a good knowledge of English (written and spoken).

To apply for this position, click here.

Contact: dr. Erik Bekkers

Research Project Description

The aim of this project is to model, study, and visualize decision-making processes in (convolutional) neural networks. In particular, emphasis is put on explainability in critical applications where reliability and trustworthiness of AI systems are crucial, such as in the medical domain.

A main theme will be the design of AI systems based on recurrent neural networks, in which the focus is put on modeling (and visualizing) the dynamics of decision making rather than relying on the static black-box mechanics of feed-forward networks. For example, when tracing a blood vessel in medical image data, the system does not need to represent or remember the vessel as a whole (a feed-forward mechanism); it is more important to learn how to connect local line segments (the dynamics of reasoning). Similarly, in image classification problems, it is important to learn how local cues relate to each other and how to “connect the dots”, rather than trying to come to a decision given all information at once. Being able to capture and model such dynamics of reasoning opens doors for interaction with and visualization of the systems, which leads to explainability. Your research will build on an exciting mix of geometric (equivariance, invertibility, PDEs) and probabilistic (VAEs, normalizing flows) principles to ensure the reliability and interpretability of AI systems.
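The idea of connecting local line segments step by step, rather than deciding in one feed-forward pass, can be caricatured with a toy greedy tracer over a vesselness map. Everything below (the map, the 8-neighbour rule, the function name) is a simplified assumption for illustration, far from the geometric and probabilistic machinery the project will actually use:

```python
# Toy sketch of "dynamics of reasoning": a tracer walks through a
# vesselness map one step at a time, extending the path via local
# evidence instead of classifying the whole image at once.

import numpy as np

def trace(vesselness, start, steps):
    """Greedy local tracer: from each position, move to the unvisited
    8-neighbour with the highest vesselness score."""
    h, w = vesselness.shape
    path = [start]
    visited = {start}
    for _ in range(steps):
        r, c = path[-1]
        neighbours = [(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)
                      and 0 <= r + dr < h and 0 <= c + dc < w
                      and (r + dr, c + dc) not in visited]
        if not neighbours:
            break
        nxt = max(neighbours, key=lambda p: vesselness[p])
        path.append(nxt)
        visited.add(nxt)
    return path

# A diagonal "vessel" of high response in an otherwise flat map.
v = np.zeros((4, 4))
for i in range(4):
    v[i, i] = 1.0
print(trace(v, (0, 0), 3))  # [(0, 0), (1, 1), (2, 2), (3, 3)]
```

Because each step is a small, local decision, the intermediate state (the growing path) can itself be visualized and interrogated, which is the explainability benefit the paragraph describes.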

Job requirements: We are looking for a PhD student to design Hybrid Intelligent systems based on principles from (visual) attention, geometry, and probabilistic modeling.

To apply for this position click here [link to university job site]



At Vrije Universiteit Amsterdam, we are looking for a PhD student to investigate Knowledge Representation Formalisms for Hybrid Intelligence (HI). 

Knowledge graphs (KGs) can play an important role in representing knowledge for the different agents in Hybrid Intelligence settings. Unfortunately, current KR formalisms are not sufficiently well designed to work with complex, conflicting, dynamic and contextualised knowledge. To make KGs suitable formalisms for data and knowledge exchange in an HI network, individual agents must be able to adapt their own knowledge in a KG (or at least the active part they reason with) in response to interactions with one or more actors in their network.

You will study non-classical logical operators under (possibly changing) contexts, where the contexts are (semi)formal representations of the other agents’ requirements, knowledge, (cultural) background, necessity, and other modalities of choice.  Furthermore, you will design or adapt formalisms appropriate to the knowledge modeling challenges as well as protocols for agents to use the knowledge in HI interaction scenarios. Finally, you will apply and validate the solutions for a select number of application domains. The research project is based at the VU Amsterdam, and will be co-supervised by TU Delft and University of Twente researchers. 
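As a toy illustration of selecting the "active part" of a KG with respect to an interaction context, one might annotate triples with context labels and filter them per interaction. The triples, labels, and function below are invented for illustration and stand in for the far richer (semi)formal context representations the project will study:

```python
# Hypothetical sketch: a KG as context-annotated triples, from which an
# agent activates only the part valid in the current interaction context.

kg = [
    ("coffee", "is_a", "beverage", "any"),       # context-independent
    ("coffee", "served_at", "breakfast", "nl"),  # holds in the "nl" context
    ("coffee", "served_at", "after_dinner", "it"),
]

def active_view(kg, context):
    """Return the triples an agent reasons with in the given context:
    context-independent knowledge plus knowledge tagged for that context."""
    return [(s, p, o) for (s, p, o, ctx) in kg if ctx in ("any", context)]

print(active_view(kg, "it"))
# [('coffee', 'is_a', 'beverage'), ('coffee', 'served_at', 'after_dinner')]
```

Even this crude filter shows the core requirement: the same agent holds mutually conflicting triples, and the interaction context determines which subset is active for reasoning and exchange.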


We are looking for an enthusiastic person who is curious about what kind of knowledge is required by an HI agent to operate in a contextualised and fast changing environment when addressing real-life problems. We are looking for someone with a Master in Computer Science, Artificial Intelligence or related fields with an interest in Knowledge Representation, Logic and Formal Systems, but from a practical and pragmatic perspective. 

If you also want to be considered for one of our other PhD positions, then upload your documents to our Hybrid Intelligence talent pool as well. Your information will then be shared among the researchers in the consortium, and you may be approached for one of the other positions listed there.


A challenging position in a socially involved organization. The salary will be in accordance with university regulations for academic personnel and amounts to €2,395 (PhD) per month during the first year, increasing to €3,061 (PhD) per month during the fourth year, based on full-time employment. The job profile is based on the university job ranking system and is vacant for at least 1 FTE.

The appointment will initially be for 1 year. After a satisfactory evaluation of the initial appointment, the contract will be extended for a duration of 4 years.
Additionally, Vrije Universiteit Amsterdam offers excellent fringe benefits and various schemes and regulations to promote a good work/life balance, such as:

  • a maximum of 41 days of annual leave based on full-time employment
  • 8% holiday allowance and 8.3% end-of-year bonus
  • solid pension scheme (ABP)
  • contribution to commuting expenses
  • optional model for designing a personalized benefits package

To apply for this position click here [link to university job site]


Research project description:

This project aims to use Bayesian priors defined in function space to enable data-efficient and human-guided model adaptation. Bayesian modeling is an appealing paradigm from which to build collaborative hybrid intelligent systems, because priors allow existing information and human expertise to be given primacy in model specification. One obstacle that often prevents the adoption of the Bayesian paradigm is prior specification: the prior must be specified as a distribution on parameter space, which is high-dimensional and complicated for modern models (e.g. neural networks). It is typically challenging to translate human knowledge and intuitions, which are often contextualized in the space of data, to the parameter space.

Our recent work overcomes this gap between function and parameter space by defining the prior on the former and then reparameterizing it to be a proper distribution on the latter. The intuition is that the prior encourages the model predictions to agree with those of a teacher or reference model, which could be a human (imitation learning), a model for a related task (transfer learning), a previous iteration of the same model (continual learning), or a less expressive model (regularization). We would then leverage these various formulations for HI scenarios in which the model needs to be quickly adapted while protecting against overfitting or respecting human inputs. We plan to validate our results in an application in healthcare, in particular the automated gating of cytometry data.
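For a linear model, the core intuition of a function-space prior (penalize disagreement with a teacher's predictions instead of placing a prior directly on weights) can be sketched in a few lines. The quadratic penalty and all names below are a generic illustration, not the specific reparameterization from the cited work:

```python
# Hypothetical sketch of a function-space prior for linear models:
# a Gaussian prior over *predictions*, centred on a teacher model's
# predictions at the observed inputs, rather than a prior over weights.

import numpy as np

def function_space_penalty(student_w, teacher_w, X, scale=1.0):
    """Negative log-density (up to a constant) of the student's outputs
    under a Gaussian centred on the teacher's outputs at the inputs X."""
    f_student = X @ student_w
    f_teacher = X @ teacher_w
    return float(np.sum((f_student - f_teacher) ** 2) / (2 * scale ** 2))

X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
teacher_w = np.array([1.0, 2.0])
print(function_space_penalty(teacher_w, teacher_w, X))              # 0.0: full agreement
print(function_space_penalty(np.array([2.0, 2.0]), teacher_w, X))   # 1.0: penalized disagreement
```

During adaptation one would minimize the task loss plus this penalty, so the student stays close to the teacher (a human, a related-task model, or a previous model iteration) wherever data is scarce, which is the overfitting protection the paragraph describes.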

Job Requirements:

  • A Master’s degree in Machine Learning, Statistics, Computer Science, Mathematics, or a related field;
  • experience in statistical or mathematical modeling;
  • experience in programming and software development.  Familiarity with Python and scientific computing libraries (e.g. NumPy, Stan, TensorFlow, PyTorch) is preferred;
  • enthusiasm for the scientific process: formulating and conducting experiments, data collection and analysis, disseminating findings via writing and oral presentations;
  • ability to cooperate and work effectively within a team;
  • fluency in English, both written and spoken.

Our offer

A temporary contract for 38 hours per week for the duration of 4 years (initial contract will be for a period of 18 months and after satisfactory evaluation it will be extended for a total duration of 4 years) that should lead to a dissertation (PhD thesis). We will draft an educational plan that includes attendance of courses and (international) meetings. We also expect you to assist in teaching undergraduates and master students. 

To apply for this position click here [link to university job site]

We value the privacy of your information. The information that you submit in your application is subject to our data privacy statement.