Explainable HI

Intelligent agents and humans need to be able to explain to each other what is happening (shared awareness), what they want to achieve (shared goals), and how they intend to achieve those goals together (shared plans and strategies). The challenge is to generate appropriate explanations in different circumstances and for different purposes, even for systems whose internal representations differ vastly from human cognitive concepts.

We use causal models as shared representations, develop methods for contrastive, selective, and interactive explanations, and combine symbolic and statistical representations.
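To illustrate what a contrastive explanation over a causal model can look like, here is a minimal sketch. Everything in it is a hypothetical example (the loan scenario, the `decide` rule, and the threshold are all invented for illustration, not part of our methods): it answers "why was the outcome X rather than Y?" by searching for the smallest interventions on each input variable that would flip the decision to the foil.

```python
# Hypothetical toy example: a contrastive explanation over a simple
# decision rule, treated as a two-variable causal model.

def decide(income, debt):
    """Toy decision rule: approve if income minus debt reaches a threshold."""
    return "approved" if income - debt >= 30 else "denied"

def contrastive_explanation(income, debt, foil="approved"):
    """Find, per variable, the minimal intervention that flips the outcome to the foil.

    Returns the factual outcome and a list of (variable, new_value) pairs,
    each the smallest single-variable change that would yield the foil.
    """
    fact = decide(income, debt)
    if fact == foil:
        return fact, []
    interventions = []
    # Intervene on income: raise it step by step until the foil holds.
    for delta in range(1, 100):
        if decide(income + delta, debt) == foil:
            interventions.append(("income", income + delta))
            break
    # Intervene on debt: lower it step by step until the foil holds.
    for delta in range(1, 100):
        if decide(income, debt - delta) == foil:
            interventions.append(("debt", debt - delta))
            break
    return fact, interventions

fact, how = contrastive_explanation(income=50, debt=40)
print(fact)  # denied
print(how)   # [('income', 70), ('debt', 20)]
```

The returned interventions read directly as a contrastive, selective explanation: "the loan was denied rather than approved because income was below 70 (given this debt) and debt was above 20 (given this income)". Real causal models have richer structure, but the query shape is the same.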

Read about our two-day course on how to demystify complex algorithms, address the risks of big data, and ensure that AI remains transparent, accountable, and aligned with human values.

Stay tuned for new dates and upcoming sessions.

Learn about cutting-edge techniques like emotion recognition and natural language processing, and explore the psychological impact of interacting with social AI.
