Developing HI requires fundamentally new solutions to core research problems in AI: current AI technology surpasses humans in many pattern recognition and machine learning tasks, but it falls short on general world knowledge, common sense, and the human capabilities of (i) Collaboration, (ii) Adaptivity, (iii) Responsibility, and (iv) Explainability of norms and values (CARE). These challenges are being addressed in four interconnected research lines:
Collaborative HI: How do we design and build intelligent agents that work in synergy with humans, with awareness of each other’s strengths and limitations? We develop shared mental models for communication between humans and agents, build computational theories of mind to enable collaboration, and exploit multimodal interaction for seamless dialogues. Coordinators: Dr. Hayley Hung, h.hung@tudelft.nl and Prof. Koen Hindriks, k.v.hindriks@vu.nl
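To make the idea of a shared mental model concrete, the sketch below shows one minimal way an agent could keep two belief stores, its own and a first-order theory-of-mind estimate of a human partner’s, and communicate only where the two diverge. This is an illustration under our own simplifying assumptions, not the project’s implementation; all class and proposition names are invented.

```python
# Minimal sketch (illustrative names, not project code): a "shared mental
# model" as two belief stores -- the agent's own, and a first-order
# theory-of-mind estimate of the human partner's. What the agent should
# say next is derived from the difference between the two.
from dataclasses import dataclass, field

@dataclass
class MentalModel:
    beliefs: dict = field(default_factory=dict)  # proposition -> truth value

@dataclass
class CollaborativeAgent:
    own: MentalModel = field(default_factory=MentalModel)
    partner: MentalModel = field(default_factory=MentalModel)  # estimate of the human's beliefs

    def observe(self, proposition, value, partner_saw_it):
        # Update own beliefs; assume (simplistically) that jointly
        # observed facts also update the partner model.
        self.own.beliefs[proposition] = value
        if partner_saw_it:
            self.partner.beliefs[proposition] = value

    def to_communicate(self):
        # Communicate exactly those beliefs the partner is estimated to lack.
        return {p: v for p, v in self.own.beliefs.items()
                if self.partner.beliefs.get(p) != v}

agent = CollaborativeAgent()
agent.observe("door_locked", True, partner_saw_it=False)
agent.observe("light_on", True, partner_saw_it=True)
print(agent.to_communicate())  # {'door_locked': True} -> worth telling the human
```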
Adaptive HI: The world in which Hybrid Intelligence systems operate is dynamic, as are the teams of humans and agents that make up such HI systems. HI systems must therefore operate in situations not anticipated by their designers and cope with variable team configurations, preferences and roles. This requires progress in online reinforcement learning, AutoML, and the integration of learning and reasoning. Coordinators: Dr. Herke van Hoof, h.c.vanhoof@uva.nl and Dr. Frans Oliehoek, f.a.oliehoek@tudelft.nl
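As a toy illustration of the online reinforcement learning this line builds on, the following sketch runs tabular Q-learning in which the agent updates its value estimates after every single interaction, i.e. while operating. The two-state environment and all parameter values are invented for illustration and are not a project benchmark.

```python
# Illustrative sketch: online tabular Q-learning, the simplest instance of
# learning from a stream of interactions rather than a fixed dataset.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # step size, discount, exploration rate
Q = defaultdict(float)                   # (state, action) -> value estimate
actions = [0, 1]

def step(state, action):
    # Toy dynamics: action 1 in state 1 pays off; everything else does not.
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    next_state = random.choice([0, 1])
    return reward, next_state

state = 0
for _ in range(5000):  # learn online, one interaction at a time
    if random.random() < epsilon:
        action = random.choice(actions)                      # explore
    else:
        action = max(actions, key=lambda a: Q[(state, a)])   # exploit
    reward, next_state = step(state, action)
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```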
Responsible HI: Addressing and mitigating some of the perceived risks of Artificial Intelligence technologies requires ethical and legal concerns to be an integral part of the design and operation of HI systems. Values such as transparency, accountability, trust, privacy and fairness can no longer be relegated to regulations that apply only after a system’s deployment. We develop methods to include ethical, legal and societal considerations in the design process (“ethics in design”) and in the operation (“ethics by design”) of HI systems. Coordinators: Dr. M. Birna van Riemsdijk, m.b.vanriemsdijk@utwente.nl and Prof. Bart Verheij, bart.verheij@rug.nl
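One operational reading of “ethics by design” is that ethical constraints are enforced inside the agent’s action-selection loop at run time rather than checked after deployment. The sketch below illustrates this with an invented privacy constraint and invented actions; it is a deliberate simplification, not a method from the proposal.

```python
# Illustrative sketch of run-time value constraints: candidate actions are
# filtered by a hard ethical constraint before utility maximization, so the
# constraint is part of operation, not an after-the-fact check.
candidate_actions = [
    {"name": "share_full_log", "utility": 0.9, "exposes_personal_data": True},
    {"name": "share_summary",  "utility": 0.7, "exposes_personal_data": False},
    {"name": "do_nothing",     "utility": 0.0, "exposes_personal_data": False},
]

def respects_privacy(action):
    # Hard constraint: never expose personal data, whatever the utility.
    return not action["exposes_personal_data"]

permissible = [a for a in candidate_actions if respects_privacy(a)]
chosen = max(permissible, key=lambda a: a["utility"])
print(chosen["name"])  # 'share_summary': best action among the permissible ones
```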
Explainable HI: Intelligent agents and humans need to be able to mutually explain to each other what is happening (shared awareness), what they want to achieve (shared goals), and what collaborative ways they see of achieving their goals (shared plans and strategies). The challenge is to generate appropriate explanations in different circumstances and for different purposes, even for systems whose internal representations are vastly different from human cognitive concepts. We use causal models for shared representations, develop methods for contrastive, selective and interactive explanations, and combine symbolic and statistical representations. Coordinators: Dr. Antske Fokkens, antske.fokkens@vu.nl, Prof. Piek Vossen, piek.vossen@vu.nl, and Dr. Pradeep Kumar Murukannaiah, p.k.murukannaiah@tudelft.nl
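Contrastive explanations answer “why outcome A rather than B?”. A minimal way to realize this, sketched below, is to search for the smallest change to an input under which a decision procedure would have produced the contrast case instead. The toy decision rule, feature names and candidate edits are invented for illustration.

```python
# Illustrative sketch of a contrastive explanation: find the smallest set of
# feature edits that flips a (toy) decision to the asked-for contrast case.
from itertools import combinations

def decide(applicant):
    # Toy decision rule standing in for an arbitrary model.
    return "approve" if applicant["income"] >= 3000 and applicant["debt"] < 500 else "reject"

def contrastive_explanation(applicant, desired, edits):
    # Try subsets of candidate edits, smallest first, until the decision flips.
    for k in range(1, len(edits) + 1):
        for subset in combinations(edits.items(), k):
            changed = {**applicant, **dict(subset)}
            if decide(changed) == desired:
                return dict(subset)
    return None  # no combination of the offered edits yields the contrast case

applicant = {"income": 2500, "debt": 700}
print(decide(applicant))  # 'reject'
print(contrastive_explanation(applicant, "approve",
                              {"income": 3000, "debt": 400}))
# {'income': 3000, 'debt': 400}: 'reject' rather than 'approve' because
# income was too low AND debt was too high -- both had to change.
```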
More detailed descriptions of our research lines, our evaluation metrics and other information can be found in the public version of our proposal.