Research

Developing HI requires fundamentally new solutions to core research problems in AI: current AI technology surpasses humans in many pattern recognition and machine learning tasks, but it falls short on general world knowledge, common sense, and the human capabilities of (i) Collaboration, (ii) Adaptivity, (iii) Responsibility, and (iv) Explainability of norms and values (CARE). These aspects are addressed in the four key objectives of HI research:

 

Collaborative HI: How can we design and build intelligent agents that work in synergy with humans, with awareness of each other’s strengths and limitations? We develop shared mental models for communication between humans and agents and computational theories of mind to enable collaboration, and we exploit multimodal interaction for seamless dialogues.
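To make the idea of a shared mental model concrete, here is a minimal, purely illustrative Python sketch (all names are hypothetical, not taken from the proposal): the agent keeps its own beliefs alongside a first-order estimate of the human teammate’s beliefs, and treats any discrepancy between the two as a candidate for proactive communication.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a first-order "theory of mind" over a shared task.
# The agent tracks its own beliefs and an estimate of the human's beliefs,
# and communicates only where the two models diverge.

@dataclass
class SharedMentalModel:
    agent_beliefs: dict = field(default_factory=dict)   # what the agent knows
    human_beliefs: dict = field(default_factory=dict)   # agent's model of the human

    def observe(self, fact: str, value, human_saw_it: bool = False) -> None:
        """Update the agent's beliefs; update the human model only if the
        human plausibly observed the same event."""
        self.agent_beliefs[fact] = value
        if human_saw_it:
            self.human_beliefs[fact] = value

    def discrepancies(self) -> dict:
        """Facts the agent believes but (it estimates) the human does not
        share -- candidates for proactive communication."""
        return {f: v for f, v in self.agent_beliefs.items()
                if self.human_beliefs.get(f) != v}

model = SharedMentalModel()
model.observe("door_blocked", True, human_saw_it=False)
model.observe("goal_room", "B2", human_saw_it=True)
print(model.discrepancies())   # {'door_blocked': True} -> worth telling the human
```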


Adaptive HI: The world in which hybrid intelligent systems operate is dynamic, as are the teams of humans and agents that make up such systems. HI systems therefore need to operate in situations not anticipated by their designers, and cope with variable team configurations, preferences and roles. This requires progress in online reinforcement learning, AutoML, and the integration of learning and reasoning.
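As a toy illustration of the kind of online adaptation involved (a sketch under our own assumptions, not the project’s method): an epsilon-greedy learner with a constant step size keeps adjusting its value estimates when the environment changes in a way its designer did not anticipate.

```python
import random

# Illustrative sketch: online adaptation on a two-armed bandit whose reward
# probabilities drift mid-run, standing in for an HI system whose operating
# conditions change at run time. The task and numbers are invented.

def drifting_bandit(step: int, arm: int) -> float:
    """Reward probabilities swap halfway through -- an unanticipated change."""
    p = [0.8, 0.2] if step < 500 else [0.2, 0.8]
    return 1.0 if random.random() < p[arm] else 0.0

q = [0.0, 0.0]             # running value estimates per arm
alpha, epsilon = 0.1, 0.1  # constant step size so old evidence decays

for step in range(1000):
    arm = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    reward = drifting_bandit(step, arm)
    q[arm] += alpha * (reward - q[arm])   # incremental online update

print(f"final estimates: {q}")  # tracks the post-drift optimum (arm 1)
```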

 

Responsible HI: Addressing and mitigating some of the perceived risks of Artificial Intelligence technologies requires ethical and legal concerns to be an integral part of the design and operation of HI systems. Values such as transparency, accountability, trust, privacy and fairness can no longer be relegated to regulations that apply only after a system’s deployment. We develop methods to include ethical, legal and societal considerations in the design process (“ethics in design”) and in the performance (“ethics by design”) of HI systems.

 

Explainable HI: Intelligent agents and humans need to be able to explain to each other what is happening (shared awareness), what they want to achieve (shared goals), and what collaborative ways they see of achieving their goals (shared plans and strategies). The challenge is to generate appropriate explanations in different circumstances and for different purposes, even for systems whose internal representations differ vastly from human cognitive concepts. We use causal models as shared representations, develop methods for contrastive, selective and interactive explanations, and combine symbolic and statistical representations.
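The following hypothetical sketch shows what a contrastive explanation (“why P rather than Q?”) can look like for a toy linear classifier; the classes, features and weights are invented for illustration only.

```python
# Hypothetical sketch of a contrastive explanation: for a toy linear
# classifier, rank features by how much they push the score of the
# predicted class (the fact) above the contrast class (the foil).

weights = {                     # per-class feature weights (invented)
    "approve": {"income": 0.9, "debt": -0.6, "tenure": 0.3},
    "reject":  {"income": 0.1, "debt": 0.8,  "tenure": 0.0},
}
x = {"income": 1.2, "debt": 0.4, "tenure": 2.0}   # one (normalised) instance

def contrastive(fact: str, foil: str) -> list:
    """Feature contributions to choosing `fact` over `foil`, largest first."""
    deltas = {f: (weights[fact][f] - weights[foil][f]) * v for f, v in x.items()}
    return sorted(deltas.items(), key=lambda kv: -kv[1])

# "Why approve rather than reject?" -> income and tenure dominate the answer.
print(contrastive("approve", "reject"))
```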



More detailed descriptions of our objectives, the original organisation into four interconnected research lines, our evaluation metrics and other information can be found in the public version of our proposal.

Matrix structure 

In February 2023 a new organisational structure was introduced to create a scalable organisation that accommodates the growth of the HI Centre while still addressing the CARE objectives described above. The new matrix structure of the HI Centre evolves the original four CARE research lines into three perspectives: (i) four core scientific HI Challenges (the problem space); (ii) Special Interest Groups (SIGs) that crosscut the challenges and focus on specific AI techniques (the solution space); and (iii) four Case Studies that anchor our research in concrete societal situations (the evaluation space). Over time the clusters, interest groups and case studies may change, depending on developments in the field.

Challenges  

Collaboration & Synergy – The Collaboration and Synergy cluster brings together scientists to discuss topics and research questions on how HI systems can be built to better facilitate human-human or human-agent collaboration. It addresses the research objective of Collaborative HI.

Dialogue – The Dialogue cluster addresses the research objectives of Collaborative HI and Explainable HI. It connects HI scientists who work on various aspects of dialogue, such as long-term interaction, text versus dialogue, human-agent and human-robot interaction, and knowledge representation for dialogues.

System Design and User Literacy – This challenge focuses on Responsible HI. It discusses HI systems as sociotechnical systems, including how to design them in a situated and integral manner, and the tools and understanding needed to integrate HI technologies into the situations where they can add value.

Assistance and Trust – The focus of this challenge is to develop, improve and analyse settings in which AI agents can assist humans in their tasks, on the premise that hybrid human-agent systems can together perform the required task more effectively than human-only or agent-only systems. This cluster addresses the research objective of Responsible HI.

Special interest groups (SIG) 

Reinforcement Learning – The SIG on Reinforcement Learning is dedicated to what is potentially the most relevant machine learning technique for adaptive HI systems. It connects all members of the HI project who are interested in reinforcement learning, either as an object of study or as a tool.

Deliberation – Deliberation and argumentation are among the most sophisticated forms of information exchange among humans, and hence represent a gold standard for intelligent interaction and a challenge for hybrid intelligent systems to master. This SIG addresses both the Collaborative HI and Explainable HI research objectives and is a home for all colleagues within HI (and beyond) who share an interest in deliberation and argumentation, independently of their disciplinary background or application focus.
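As a pointer to the formal side of this topic, here is a small sketch (our own example, not SIG output) computing the grounded extension of a Dung-style abstract argumentation framework, a standard building block for machine deliberation.

```python
# Illustrative sketch: grounded extension of an abstract argumentation
# framework. The arguments and attack relation below are invented examples.

arguments = {"a", "b", "c"}
attacks = {("b", "a"), ("c", "b")}           # c attacks b, b attacks a

def grounded(args: set, atk: set) -> set:
    """Iteratively accept arguments all of whose attackers are defeated."""
    accepted: set = set()
    defeated: set = set()
    while True:
        newly = {x for x in args - accepted - defeated
                 if all(y in defeated for (y, z) in atk if z == x)}
        if not newly:
            return accepted
        accepted |= newly
        defeated |= {z for (y, z) in atk if y in accepted}

print(grounded(arguments, attacks))   # {'c', 'a'}: c defends a against b
```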

Ethics – The goal of this SIG is to stimulate reflection on the ethical and responsibility aspects of HI across the project, and to connect HI researchers with the Gravitation-funded ESDiT project (and vice versa) to stimulate the exchange of expertise and collaboration. This SIG contributes to the research objective of Responsible HI.

Theory of Mind – This SIG brings together all colleagues whose work relates to Theory of Mind. It contributes to the research objective of Collaborative HI.

Case studies 

Case study Health – Diabetes: Diabetes is a chronic, lifestyle-related disease. A change of lifestyle requires intensive, personalised support involving both the patient and their social environment. A hybrid intelligent coach should help diabetes patients adopt a healthier lifestyle while at the same time lowering the workload of healthcare professionals. A key challenge is the creation of multi-party dialogues between the HI system, the patient and the healthcare professional that foster long-term engagement. Explainability is crucial in this domain.

Case study Education: The aim is to develop a hybrid AI robot tutor and to evaluate it in field experiments at schools with teachers and students. The robot tutor should engage and motivate students, using techniques such as interactive storytelling and mental models. A key challenge is to create long-term personalised interactions, using memory and retrieval mechanisms that adapt to students, provide explanations, and give feedback in a responsible way.

Case study Robotic Surgery: The aim is to create a hybrid intelligent system in which a robotic microscope and a surgeon complement each other’s skills at the level of micrometre surgery. The robotic microscope has to learn how best to align its viewing angle and zoom with the activities of the human surgeon. This requires a mutual understanding of each upcoming surgical procedure, to be acquired during practice sessions. The scientific challenges are human-robot dialogues, theory of mind, shared mental models, shared planning, and collaborative ontology determination.

Case study Scientific Assistant: The aim of the scientific assistant is to support one or more steps in the scientific method cycle: formulating research questions, analysing the literature, formulating a hypothesis, designing an experiment, analysing data, and drawing conclusions. This will require a combination of symbolic and subsymbolic AI techniques, ranging from domain ontologies to deep learning, as well as theory of mind and shared planning to support the collaboration.