Publications

2022

  • E. van Krieken, E. Acar, and F. van Harmelen, “Analyzing differentiable fuzzy logic operators,” Artificial Intelligence, vol. 302, p. 103602, 2022.
    [BibTeX] [Abstract] [Download PDF]

    The AI community is increasingly putting its attention towards combining symbolic and neural approaches, as it is often argued that the strengths and weaknesses of these approaches are complementary. One recent trend in the literature are weakly supervised learning techniques that employ operators from fuzzy logics. In particular, these use prior background knowledge described in such logics to help the training of a neural network from unlabeled and noisy data. By interpreting logical symbols using neural networks, this background knowledge can be added to regular loss functions, hence making reasoning a part of learning. We study, both formally and empirically, how a large collection of logical operators from the fuzzy logic literature behave in a differentiable learning setting. We find that many of these operators, including some of the most well-known, are highly unsuitable in this setting. A further finding concerns the treatment of implication in these fuzzy logics, and shows a strong imbalance between gradients driven by the antecedent and the consequent of the implication. Furthermore, we introduce a new family of fuzzy implications (called sigmoidal implications) to tackle this phenomenon. Finally, we empirically show that it is possible to use Differentiable Fuzzy Logics for semi-supervised learning, and compare how different operators behave in practice. We find that, to achieve the largest performance improvement over a supervised baseline, we have to resort to non-standard combinations of logical operators which perform well in learning, but no longer satisfy the usual logical laws.

    @article{van2022analyzing,
    title={Analyzing differentiable fuzzy logic operators},
    author={van Krieken, Emile and Acar, Erman and van Harmelen, Frank},
    journal={Artificial Intelligence},
    volume={302},
    pages={103602},
    year={2022},
    publisher={Elsevier},
    url = "https://research.vu.nl/ws/portalfiles/portal/146020254/2002.06100v2.pdf",
    abstract = "The AI community is increasingly putting its attention
    towards combining symbolic and neural approaches, as
    it is often argued that the strengths and weaknesses
    of these approaches are complementary. One recent
    trend in the literature are weakly supervised
    learning techniques that employ operators from fuzzy
    logics. In particular, these use prior background
    knowledge described in such logics to help the
    training of a neural network from unlabeled and
    noisy data. By interpreting logical symbols using
    neural networks, this background knowledge can be
    added to regular loss functions, hence making
    reasoning a part of learning. We study, both
    formally and empirically, how a large collection of
    logical operators from the fuzzy logic literature
    behave in a differentiable learning setting. We find
    that many of these operators, including some of the
    most well-known, are highly unsuitable in this
    setting. A further finding concerns the treatment of
    implication in these fuzzy logics, and shows a
    strong imbalance between gradients driven by the
    antecedent and the consequent of the
    implication. Furthermore, we introduce a new family
    of fuzzy implications (called sigmoidal
    implications) to tackle this phenomenon. Finally, we
    empirically show that it is possible to use
    Differentiable Fuzzy Logics for semi-supervised
    learning, and compare how different operators behave
    in practice. We find that, to achieve the largest
    performance improvement over a supervised baseline,
    we have to resort to non-standard combinations of
    logical operators which perform well in learning,
    but no longer satisfy the usual logical laws."
    }
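
    A minimal sketch (ours, not the paper's code) of the antecedent/consequent gradient imbalance the abstract describes, using the Reichenbach implication I(a, c) = 1 - a + a*c, a well-known differentiable fuzzy implication; the example truth values are arbitrary:

    # Reichenbach implication on fuzzy truth values a, c in [0, 1].
    def reichenbach_implication(a: float, c: float) -> float:
        return 1.0 - a + a * c

    # Analytic partials: dI/da = c - 1, dI/dc = a.
    def grad_reichenbach(a: float, c: float) -> tuple:
        return (c - 1.0, a)

    # With a barely-true antecedent (a = 0.1), the consequent receives a tiny
    # gradient (0.1) while the antecedent receives a large one (-0.7): learning
    # mostly pushes the antecedent down instead of the consequent up.
    print(grad_reichenbach(0.1, 0.3))  # (-0.7, 0.1)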

2021

  • M. van Bekkum, M. de Boer, F. van Harmelen, A. Meyer-Vitali, and A. ten Teije, “Modular design patterns for hybrid learning and reasoning systems,” Appl. Intell., vol. 51, iss. 9, pp. 6528–6546, 2021. doi:10.1007/s10489-021-02394-3
    [BibTeX] [Download PDF]
    @article{DBLP:journals/apin/BekkumBHMT21,
    author = {Michael van Bekkum and
    Maaike de Boer and
    Frank van Harmelen and
    Andr{\'{e}} Meyer{-}Vitali and
    Annette ten Teije},
    title = {Modular design patterns for hybrid learning and reasoning systems},
    journal = {Appl. Intell.},
    volume = {51},
    number = {9},
    pages = {6528--6546},
    year = {2021},
    url = {https://doi.org/10.1007/s10489-021-02394-3},
    doi = {10.1007/s10489-021-02394-3},
    timestamp = {Wed, 01 Sep 2021 12:45:13 +0200},
    biburl = {https://dblp.org/rec/journals/apin/BekkumBHMT21.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
    }

  • A. Kuzina, M. Welling, and J. M. Tomczak, “Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks,” in ICLR 2021 Workshop on Robust and Reliable Machine Learning in the Real World, 2021.
    [BibTeX] [Abstract] [Download PDF]

    In this work, we explore adversarial attacks on Variational Autoencoders (VAEs). We show how to modify a data point to obtain a prescribed latent code (supervised attack) or just get a drastically different code (unsupervised attack). We examine the influence of model modifications ($\beta$-VAE, NVAE) on the robustness of VAEs and suggest metrics to quantify it.

    @inproceedings{kuzina2021diagnosing,
    title={Diagnosing Vulnerability of Variational Auto-Encoders to Adversarial Attacks},
    author={Kuzina, Anna and Welling, Max and Tomczak, Jakub M},
    year={2021},
    booktitle = {ICLR 2021 Workshop on Robust and Reliable Machine Learning in the Real World},
    url={https://arxiv.org/pdf/2103.06701.pdf},
    abstract={In this work, we explore adversarial attacks on Variational Autoencoders (VAEs). We show how to modify a data point to obtain a prescribed latent code (supervised attack) or just get a drastically different code (unsupervised attack). We examine the influence of model modifications ($\beta$-VAE, NVAE) on the robustness of VAEs and suggest metrics to quantify it.}
    }
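
    A minimal sketch (our construction, not the authors' code) of an unsupervised latent-space attack in the spirit of the abstract: optimize a small perturbation so the encoder's latent code moves as far as possible from the clean one. The toy encoder, perturbation bound, and step count are assumptions:

    import torch

    torch.manual_seed(0)
    # Stand-in for the mean head of a trained VAE encoder.
    encoder = torch.nn.Sequential(
        torch.nn.Linear(784, 64), torch.nn.Tanh(), torch.nn.Linear(64, 8))

    x = torch.rand(1, 784)                       # a "data point"
    z_clean = encoder(x).detach()
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-2)

    for _ in range(100):
        opt.zero_grad()
        loss = -(encoder(x + delta) - z_clean).pow(2).sum()  # push code away
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-0.1, 0.1)              # keep the perturbation small

    print((encoder(x + delta) - z_clean).norm())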

  • H. Zheng and B. Verheij, “Rules, cases and arguments in artificial intelligence and law,” in Research Handbook on Big Data Law, R. Vogl, Ed., Edward Elgar Publishing, 2021, pp. 373-387.
    [BibTeX] [Abstract] [Download PDF]

    Artificial intelligence and law is an interdisciplinary field of research that dates back at least to the 1970s, with academic conferences starting in the 1980s. In the field, complex problems are addressed about the computational modeling and automated support of legal reasoning and argumentation. Scholars have different backgrounds, and progress is driven by insights from lawyers, judges, computer scientists, philosophers and others. The community investigates and develops artificial intelligence techniques applicable in the legal domain, in order to enhance access to law for citizens and to support the efficiency and quality of work in the legal domain, aiming to promote a just society. Integral to the legal domain, legal reasoning and its structure and process have gained much attention in AI & Law research. Such research is today especially relevant, since in these days of big data and widespread use of algorithms, there is a need in AI to connect knowledge-based and data-driven AI techniques in order to arrive at a social, explainable and responsible AI. By considering knowledge in the form of rules and data in the form of cases connected by arguments, the field of AI & Law contributes relevant representations and algorithms for handling a combination of knowledge and data. In this chapter, as an entry point into the literature on AI & Law, three major styles of modeling legal reasoning are studied: rule-based reasoning, case-based reasoning and argument-based reasoning, which are the focus of this chapter. We describe selected key ideas, leaving out formal detail. As we will see, these styles of modeling legal reasoning are related, and there is much research investigating relations. We use the example domain of Dutch tort law (Section 2) to illustrate these three major styles, which are then more fully explained (Sections 3 to 5).

    @InCollection{Zheng:2021,
    author = {H. Zheng and B. Verheij},
    title = {Rules, cases and arguments in artificial intelligence and law},
    booktitle = {Research Handbook on Big Data Law},
    publisher = {Edward Elgar Publishing},
    editor = {R. Vogl},
    year = 2021,
    url = {https://www.ai.rug.nl/~verheij/publications/handbook2021.htm},
    pages = {373-387},
    abstract = {Artificial intelligence and law is an interdisciplinary field of research that dates back at least to the 1970s, with academic conferences starting in the 1980s. In the field, complex problems are addressed about the computational modeling and automated support of legal reasoning and argumentation. Scholars have different backgrounds, and progress is driven by insights from lawyers, judges, computer scientists, philosophers and others. The community investigates and develops artificial intelligence techniques applicable in the legal domain, in order to enhance access to law for citizens and to support the efficiency and quality of work in the legal domain, aiming to promote a just society. Integral to the legal domain, legal reasoning and its structure and process have gained much attention in AI & Law research. Such research is today especially relevant, since in these days of big data and widespread use of algorithms, there is a need in AI to connect knowledge-based and data-driven AI techniques in order to arrive at a social, explainable and responsible AI. By considering knowledge in the form of rules and data in the form of cases connected by arguments, the field of AI & Law contributes relevant representations and algorithms for handling a combination of knowledge and data. In this chapter, as an entry point into the literature on AI & Law, three major styles of modeling legal reasoning are studied: rule-based reasoning, case-based reasoning and argument-based reasoning, which are the focus of this chapter. We describe selected key ideas, leaving out formal detail. As we will see, these styles of modeling legal reasoning are related, and there is much research investigating relations. We use the example domain of Dutch tort law (Section 2) to illustrate these three major styles, which are then more fully explained (Sections 3 to 5).}
    }
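
    A toy sketch (our illustration, not from the chapter) of the rule-based style applied to the chapter's example domain: a duty to repair damages is derived by forward chaining over conditions resembling Dutch tort law:

    # Established facts about a case, and one rule deriving a legal conclusion.
    facts = {"unlawful_act", "attributable", "damage", "causal_link"}
    rules = [({"unlawful_act", "attributable", "damage", "causal_link"},
              "duty_to_repair")]

    def forward_chain(facts, rules):
        # Repeatedly apply rules whose conditions are all derived.
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if body <= derived and head not in derived:
                    derived.add(head)
                    changed = True
        return derived

    print("duty_to_repair" in forward_chain(facts, rules))  # True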

  • A. C. Kurtan and P. Yolum, “Assisting humans in privacy management: an agent-based approach,” Autonomous Agents and Multi-Agent Systems, vol. 35, iss. 7, 2021. doi:10.1007/s10458-020-09488-1
    [BibTeX] [Abstract] [Download PDF]

    Image sharing is a service offered by many online social networks. In order to preserve privacy of images, users need to think through and specify a privacy setting for each image that they upload. This is difficult for two main reasons: first, research shows that many times users do not know their own privacy preferences, but only become aware of them over time. Second, even when users know their privacy preferences, editing these privacy settings is cumbersome and requires too much effort, interfering with the quick sharing behavior expected on an online social network. Accordingly, this paper proposes a privacy recommendation model for images using tags and an agent that implements this, namely pelte. Each user agent makes use of the privacy settings that its user has set for previous images to predict automatically the privacy setting for an image that is uploaded to be shared. When in doubt, the agent analyzes the sharing behavior of other users in the user’s network to be able to recommend to its user about what should be considered as private. Contrary to existing approaches that assume all the images are available to a centralized model, pelte is compatible with distributed environments since each agent accesses only the privacy settings of the images that the agent owner has shared or those that have been shared with the user. Our simulations on a real-life dataset show that pelte can accurately predict privacy settings even when a user has shared a few images with others, the images have only a few tags or the user’s friends have varying privacy preferences.

    @Article{kurtan-yolum-21,
    author = {A. Can Kurtan and P{\i}nar Yolum},
    title = {Assisting humans in privacy management: an agent-based approach},
    journal = {Autonomous Agents and Multi-Agent Systems},
    year = {2021},
    volume = {35},
    number = {7},
    abstract = {Image sharing is a service offered by many online social networks. In order to preserve privacy of images, users need to think through and specify a privacy setting for each image that they upload. This is difficult for two main reasons: first, research shows that many times users do not know their own privacy preferences, but only become aware of them over time. Second, even when users know their privacy preferences, editing these privacy settings is cumbersome and requires too much effort, interfering with the quick sharing behavior expected on an online social network. Accordingly, this paper proposes a privacy recommendation model for images using tags and an agent that implements this, namely pelte. Each user agent makes use of the privacy settings that its user has set for previous images to predict automatically the privacy setting for an image that is uploaded to be shared. When in doubt, the agent analyzes the sharing behavior of other users in the user's network to be able to recommend to its user about what should be considered as private. Contrary to existing approaches that assume all the images are available to a centralized model, pelte is compatible with distributed environments since each agent accesses only the privacy settings of the images that the agent owner has shared or those that have been shared with the user. Our simulations on a real-life dataset show that pelte can accurately predict privacy settings even when a user has shared a few images with others, the images have only a few tags or the user's friends have varying privacy preferences.},
    url = {https://link.springer.com/article/10.1007/s10458-020-09488-1},
    doi = {10.1007/s10458-020-09488-1}
    }
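
    A minimal sketch (ours; pelte's actual algorithm is in the paper) of the core idea of tag-based privacy prediction: vote over the privacy settings the user chose for previously shared images, weighted by tag overlap. The history entries are made up:

    from collections import defaultdict

    # (tags of a previously shared image, privacy setting the user chose)
    history = [({"beach", "family"}, "private"),
               ({"beach", "sunset"}, "public"),
               ({"party", "family"}, "private")]

    def predict(tags):
        votes = defaultdict(float)
        for seen_tags, setting in history:
            votes[setting] += len(tags & seen_tags)  # weight by tag overlap
        # Default to the safe choice when nothing overlaps.
        return max(votes, key=votes.get) if any(votes.values()) else "private"

    print(predict({"family", "dinner"}))  # private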

  • E. Liscio, M. van der Meer, L. C. Siebert, C. M. Jonker, N. Mouter, and P. K. Murukannaiah, “Axies: Identifying and Evaluating Context-Specific Values,” in Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), Online, 2021, pp. 799–808.
    [BibTeX] [Abstract] [Download PDF]

    The pursuit of values drives human behavior and promotes cooperation. Existing research is focused on general (e.g., Schwartz) values that transcend contexts. However, context-specific values are necessary to (1) understand human decisions, and (2) engineer intelligent agents that can elicit human values and take value-aligned actions. We propose Axies, a hybrid (human and AI) methodology to identify context-specific values. Axies simplifies the abstract task of value identification as a guided value annotation process involving human annotators. Axies exploits the growing availability of value-laden text corpora and Natural Language Processing to assist the annotators in systematically identifying context-specific values. We evaluate Axies in a user study involving 60 subjects. In our study, six annotators generate value lists for two timely and important contexts: Covid-19 measures, and sustainable Energy. Then, two policy experts and 52 crowd workers evaluate Axies value lists. We find that Axies yields values that are context-specific, consistent across different annotators, and comprehensible to end users.

    @inproceedings{Liscio2021a,
    address = {Online},
    author = {Liscio, Enrico and van der Meer, Michiel and Siebert, Luciano C. and Jonker, Catholijn M. and Mouter, Niek and Murukannaiah, Pradeep K.},
    booktitle = {Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021)},
    keywords = {Context, Ethics, Natural Language Processing, Values},
    pages = {799--808},
    publisher = {IFAAMAS},
    title = {{Axies: Identifying and Evaluating Context-Specific Values}},
    year = {2021},
    url = "https://ii.tudelft.nl/~pradeep/doc/Liscio-2021-AAMAS-Axies.pdf",
    abstract = "The pursuit of values drives human behavior and promotes cooperation. Existing research is focused on general (e.g., Schwartz) values that transcend contexts. However, context-specific values are necessary to (1) understand human decisions, and (2) engineer intelligent agents that can elicit human values and take value-aligned actions.
    We propose Axies, a hybrid (human and AI) methodology to identify context-specific values. Axies simplifies the abstract task of
    value identification as a guided value annotation process involving human annotators. Axies exploits the growing availability of value-laden text corpora and Natural Language Processing to assist the annotators in systematically identifying context-specific values.
    We evaluate Axies in a user study involving 60 subjects. In our study, six annotators generate value lists for two timely and important contexts: Covid-19 measures, and sustainable Energy. Then, two policy experts and 52 crowd workers evaluate Axies value lists.
    We find that Axies yields values that are context-specific, consistent across different annotators, and comprehensible to end users."
    }

  • E. Liscio, M. van der Meer, C. M. Jonker, and P. K. Murukannaiah, “A Collaborative Platform for Identifying Context-Specific Values,” in Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021), Online, 2021, pp. 1773–1775.
    [BibTeX] [Abstract] [Download PDF]

    Value alignment is a crucial aspect of ethical multiagent systems. An important step toward value alignment is identifying values specific to an application context. However, identifying context-specific values is complex and cognitively demanding. To support this process, we develop a methodology and a collaborative web platform that employs AI techniques. We describe this platform, highlighting its intuitive design and implementation.

    @inproceedings{Liscio2021,
    address = {Online},
    author = {Liscio, Enrico and van der Meer, Michiel and Jonker, Catholijn M. and Murukannaiah, Pradeep K.},
    booktitle = {Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2021)},
    keywords = {Context, Ethics, Natural Language Processing, Values},
    pages = {1773--1775},
    publisher = {IFAAMAS},
    title = {{A Collaborative Platform for Identifying Context-Specific Values}},
    year = {2021},
    url = "https://www.ifaamas.org/Proceedings/aamas2021/pdfs/p1773.pdf",
    abstract = "Value alignment is a crucial aspect of ethical multiagent systems.
    An important step toward value alignment is identifying values specific to an application context. However, identifying context-specific values is complex and cognitively demanding. To support this process, we develop a methodology and a collaborative web
    platform that employs AI techniques. We describe this platform, highlighting its intuitive design and implementation."
    }

  • K. Miras, J. Cuijpers, B. Gülhan, and A. Eiben, “The Impact of Early-death on Phenotypically Plastic Robots that Evolve in Changing Environments,” in ALIFE 2021: The 2021 Conference on Artificial Life, 2021.
    [BibTeX] [Abstract] [Download PDF]

    In this work, we evolve phenotypically plastic robots – robots that adapt their bodies and brains according to environmental conditions – in changing environments. In particular, we investigate how the possibility of death in early environmental conditions impacts evolvability and robot traits. Our results demonstrate that early-death improves the efficiency of the evolutionary process for the earlier environmental conditions. On the other hand, the possibility of early-death in the earlier environmental conditions results in a dramatic loss of performance in the latter environmental conditions.

    @inproceedings{miras2021impact,
    title={The Impact of Early-death on Phenotypically Plastic Robots that Evolve in Changing Environments},
    author={Miras, Karine and Cuijpers, Jim and G{\"u}lhan, Bahadir and Eiben, A. E.},
    booktitle={ALIFE 2021: The 2021 Conference on Artificial Life},
    year={2021},
    organization={MIT Press},
    url ="https://direct.mit.edu/isal/proceedings-pdf/isal/33/25/1929813/isal_a_00371.pdf",
    abstract = "In this work, we evolve phenotypically plastic robots-robots that adapt their bodies and brains according to environmental conditions-in changing environments. In particular, we investigate how the possibility of death in early environmental conditions impacts evolvability and robot traits. Our results demonstrate that early-death improves the efficiency of the evolutionary process for the earlier environmental conditions. On the other hand, the possibility of early-death in the earlier environmental conditions results in a dramatic loss of performance in the latter environmental conditions."
    }
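
    A toy sketch (ours, heavily simplified) of the early-death mechanism discussed above: candidates that fail a viability test in the early environment are culled before they are ever evaluated in the later environment. The genome representation and fitness functions are made up:

    import random

    random.seed(0)

    def fitness_early(genome):  # performance in the early environment
        return sum(genome) / len(genome)

    def fitness_late(genome):   # performance in the later environment
        return 1.0 - abs(0.7 - sum(genome) / len(genome))

    population = [[random.random() for _ in range(5)] for _ in range(20)]
    survivors = [g for g in population if fitness_early(g) > 0.4]  # early death
    best = max(survivors, key=fitness_late)
    print(len(survivors), round(fitness_late(best), 3))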

  • K. Miras, “Constrained by Design: Influence of Genetic Encodings on Evolved Traits of Robots,” Frontiers Robotics AI, vol. 8, p. 672379, 2021. doi:10.3389/frobt.2021.672379
    [BibTeX] [Abstract] [Download PDF]

    Genetic encodings and their particular properties are known to have a strong influence on the success of evolutionary systems. However, the literature has widely focused on studying the effects that encodings have on performance, i.e., fitness-oriented studies. Notably, this anchoring of the literature to performance is limiting, considering that performance provides bounded information about the behavior of a robot system. In this paper, we investigate how genetic encodings constrain the space of robot phenotypes and robot behavior. In summary, we demonstrate how two generative encodings of different nature lead to very different robots and discuss these differences. Our principal contributions are creating awareness about robot encoding biases, demonstrating how such biases affect evolved morphological, control, and behavioral traits, and finally scrutinizing the trade-offs among different biases.

    @article{DBLP:journals/firai/Miras21,
    author = {Karine Miras},
    title = {Constrained by Design: Influence of Genetic Encodings on Evolved Traits
    of Robots},
    journal = {Frontiers Robotics {AI}},
    volume = {8},
    pages = {672379},
    year = {2021},
    url = {https://doi.org/10.3389/frobt.2021.672379},
    doi = {10.3389/frobt.2021.672379},
    abstract = "Genetic encodings and their particular properties are known to have a strong influence on the success of evolutionary systems. However, the literature has widely focused on studying the effects that encodings have on performance, i.e., fitness-oriented studies. Notably, this anchoring of the literature to performance is limiting, considering that performance provides bounded information about the behavior of a robot system. In this paper, we investigate how genetic encodings constrain the space of robot phenotypes and robot behavior. In summary, we demonstrate how two generative encodings of different nature lead to very different robots and discuss these differences. Our principal contributions are creating awareness about robot encoding biases, demonstrating how such biases affect evolved morphological, control, and behavioral traits, and finally scrutinizing the trade-offs among different biases."
    }
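
    A toy contrast (our illustration) between two encodings of different nature, echoing the abstract: a direct encoding can express arbitrary module strings, while an L-system-style generative encoding only reaches self-similar bodies, biasing which morphologies evolution can find:

    import random

    random.seed(1)
    MODULES = "BJS"  # brick, joint, sensor

    def direct_decode(genome):
        # The genome literally is the body: one gene per module.
        return "".join(genome)

    def generative_decode(axiom, rule, steps=3):
        # A rewriting rule applied repeatedly: only repetitive patterns emerge.
        body = axiom
        for _ in range(steps):
            body = body.replace("B", rule)
        return body

    print(direct_decode(random.choices(MODULES, k=8)))  # arbitrary string
    print(generative_decode("B", "BJB"))                # BJBJBJBJBJBJBJB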

  • P. Manggala, H. H. Hoos, and E. Nalisnick, “Bayesian Regression from Multiple Sources of Weak Supervision,” in ICML 2021 Machine Learning for Data: Automated Creation, Privacy, Bias, 2021.
    [BibTeX] [Abstract] [Download PDF]

    We describe a Bayesian approach to weakly supervised regression. Our proposed framework propagates uncertainty from the weak supervision to an aggregated predictive distribution. We use a generalized Bayes procedure to account for the supervision being weak and therefore likely misspecified.

    @inproceedings{manggala2021bayesianregression,
    title={Bayesian Regression from Multiple Sources of Weak Supervision},
    author={Manggala, Putra and Hoos, Holger H. and Nalisnick, Eric},
    year={2021},
    booktitle = {ICML 2021 Machine Learning for Data: Automated Creation, Privacy, Bias},
    url={https://pmangg.github.io/papers/brfmsows_mhn_ml4data_icml.pdf},
    abstract={We describe a Bayesian approach to weakly supervised regression. Our proposed framework propagates uncertainty from the weak supervision to an aggregated predictive distribution. We use a generalized Bayes procedure to account for the supervision being weak and therefore likely misspecified.}
    }
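
    A minimal sketch (our assumptions, not the paper's exact model) of the aggregation idea: treat each weak source as a Gaussian likelihood with its own noise level, so the posterior over the target is a precision-weighted combination in which weaker sources count for less:

    weak_labels = [2.9, 3.4, 2.5]   # labels from three weak sources
    source_vars = [0.5, 1.0, 2.0]   # per-source noise (larger = weaker)
    prior_mean, prior_var = 0.0, 10.0

    # Conjugate Gaussian update: precisions add, means are precision-weighted.
    precision = 1.0 / prior_var + sum(1.0 / v for v in source_vars)
    mean = (prior_mean / prior_var +
            sum(y / v for y, v in zip(weak_labels, source_vars))) / precision

    print(f"posterior mean={mean:.3f}, variance={1.0 / precision:.3f}")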

2020

  • B. Verheij, “Artificial intelligence as law,” Artif. Intell. Law, vol. 28, iss. 2, pp. 181–206, 2020. doi:10.1007/s10506-020-09266-0
    [BibTeX] [Download PDF]
    @article{Verheij20,
    author = {Bart Verheij},
    title = {Artificial intelligence as law},
    journal = {Artif. Intell. Law},
    volume = {28},
    number = {2},
    pages = {181--206},
    year = {2020},
    url = {https://doi.org/10.1007/s10506-020-09266-0},
    doi = {10.1007/s10506-020-09266-0},
    timestamp = {Fri, 05 Jun 2020 17:08:42 +0200},
    biburl = {https://dblp.org/rec/journals/ail/Verheij20.bib},
    bibsource = {dblp computer science bibliography, https://dblp.org}
    }

  • N. Kökciyan and P. Yolum, “TURP: Managing Trust for Regulating Privacy in Internet of Things,” IEEE Internet Computing, vol. 24, iss. 6, pp. 9-16, 2020. doi:10.1109/MIC.2020.3020006
    [BibTeX] [Abstract] [Download PDF]

    Internet of Things (IoT) applications, such as smart home or ambient assisted living systems, promise useful services to end users. Most of these services rely heavily on sharing and aggregating information among devices; many times raising privacy concerns. Contrary to traditional systems, where privacy of each user is managed through well-defined policies, the scale, dynamism, and heterogeneity of the IoT systems make it impossible to specify privacy policies for all possible situations. Alternatively, this paper argues that handling of privacy has to be reasoned by the IoT devices, depending on the norms, context, as well as the trust among entities. We present a technique, where an IoT device collects information from others, evaluates the trustworthiness of the information sources to decide the suitability of sharing information with others. We demonstrate the applicability of the technique over an IoT pilot study.

    @ARTICLE{turp-ic-2020,
    author={K\"okciyan, Nadin and Yolum, P{\i}nar},
    journal={IEEE Internet Computing},
    title={TURP: Managing Trust for Regulating Privacy in Internet of Things},
    year={2020},
    volume={24},
    number={6},
    pages={9-16},
    abstract = {Internet of Things (IoT) applications, such as smart home or ambient assisted living systems, promise useful services to end users. Most of these services rely heavily on sharing and aggregating information among devices; many times raising privacy concerns. Contrary to traditional systems, where privacy of each user is managed through well-defined policies, the scale, dynamism, and heterogeneity of the IoT systems make it impossible to specify privacy policies for all possible situations. Alternatively, this paper argues that handling of privacy has to be reasoned by the IoT devices, depending on the norms, context, as well as the trust among entities. We present a technique, where an IoT device collects information from others, evaluates the trustworthiness of the information sources to decide the suitability of sharing information with others. We demonstrate the applicability of the technique over an IoT pilot study.},
    url = {https://webspace.science.uu.nl/~yolum001/papers/InternetComputing-20-TURP.pdf},
    doi = {10.1109/MIC.2020.3020006}
    }
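
    A toy sketch (ours, not the TURP implementation) of the core decision: weigh other devices' reports by their trustworthiness before deciding whether sharing a piece of information is privacy-appropriate. The devices, trust scores, and threshold are made up:

    # (source device, trust in it, its claimed privacy setting)
    reports = [("camera", 0.9, "private"),
               ("speaker", 0.4, "public"),
               ("phone", 0.8, "private")]

    # Trust-weighted vote: positive score means "treat as private".
    score = sum(t if claim == "private" else -t for _, t, claim in reports)
    print("do not share" if score > 0 else "share")  # do not share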

  • O. Ulusoy and P. Yolum, “Agents for Preserving Privacy: Learning and Decision Making Collaboratively,” in Multi-Agent Systems and Agreement Technologies, 2020, pp. 116–131. doi:10.1007/978-3-030-66412-1_8
    [BibTeX] [Abstract] [Download PDF]

    Privacy is a right of individuals to keep personal information to themselves. Often online systems enable their users to select what information they would like to share with others and what information to keep private. When information pertains only to a single individual, it is possible to preserve privacy by providing the right access options to the user. However, when information pertains to multiple individuals, such as a picture of a group of friends or a collaboratively edited document, deciding how to share this information and with whom is challenging as individuals might have conflicting privacy constraints. Resolving this problem requires an automated mechanism that takes into account the relevant individuals’ concerns to decide on the privacy configuration of information. Accordingly, this paper proposes an auction-based privacy mechanism to manage the privacy of users when information related to multiple individuals are at stake. We propose to have a software agent that acts on behalf of each user to enter privacy auctions, learn the subjective privacy valuations of the individuals over time, and to bid to respect their privacy. We show the workings of our proposed approach over multiagent simulations.

    @InProceedings{ulusoy-yolum-20,
    title="Agents for Preserving Privacy: Learning and Decision Making Collaboratively",
    author="Ulusoy, Onuralp and Yolum, P{\i}nar",
    editor="Bassiliades, Nick and Chalkiadakis, Georgios and de Jonge, Dave",
    booktitle="Multi-Agent Systems and Agreement Technologies",
    year="2020",
    publisher="Springer International Publishing",
    pages="116--131",
    abstract="Privacy is a right of individuals to keep personal information to themselves. Often online systems enable their users to select what information they would like to share with others and what information to keep private. When an information pertains only to a single individual, it is possible to preserve privacy by providing the right access options to the user. However, when an information pertains to multiple individuals, such as a picture of a group of friends or a collaboratively edited document, deciding how to share this information and with whom is challenging as individuals might have conflicting privacy constraints. Resolving this problem requires an automated mechanism that takes into account the relevant individuals' concerns to decide on the privacy configuration of information. Accordingly, this paper proposes an auction-based privacy mechanism to manage the privacy of users when information related to multiple individuals are at stake. We propose to have a software agent that acts on behalf of each user to enter privacy auctions, learn the subjective privacy valuations of the individuals over time, and to bid to respect their privacy. We show the workings of our proposed approach over multiagent simulations.",
    isbn="978-3-030-66412-1",
    doi = {10.1007/978-3-030-66412-1_8},
    url = {https://webspace.science.uu.nl/~yolum001/papers/ulusoy-yolum-20.pdf}
    }
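
    A minimal sketch (our simplification) of the auction idea: each individual's software agent bids its learned privacy valuation for co-owned content, and the side with the higher aggregate bid decides the sharing configuration. Agent names and valuations are made up:

    # agent -> (preferred outcome, bid reflecting its privacy valuation)
    bids = {"alice": ("restrict", 0.8),
            "bob": ("share", 0.3),
            "carol": ("restrict", 0.4)}

    totals = {}
    for outcome, bid in bids.values():
        totals[outcome] = totals.get(outcome, 0.0) + bid

    print(max(totals, key=totals.get), totals)  # restrict wins, 1.2 vs 0.3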

  • L. Krause and P. Vossen, “When to explain: Identifying explanation triggers in human-agent interaction,” in 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, Dublin, Ireland, 2020, pp. 55–60.
    [BibTeX] [Abstract] [Download PDF]

    With more agents deployed than ever, users need to be able to interact and cooperate with them in an effective and comfortable manner. Explanations have been shown to increase the understanding and trust of a user in human-agent interaction. There have been numerous studies investigating this effect, but they rely on the user explicitly requesting an explanation. We propose a first overview of when an explanation should be triggered and show that there are many instances that would be missed if the agent solely relies on direct questions. For this, we differentiate between direct triggers such as commands or questions and introduce indirect triggers like confusion or uncertainty detection.

    @inproceedings{krause-vossen-2020-explain,
    title = "When to explain: Identifying explanation triggers in human-agent interaction",
    author = "Krause, Lea and
    Vossen, Piek",
    booktitle = "2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence",
    month = nov,
    year = "2020",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.nl4xai-1.12",
    pages = "55--60",
    abstract = "With more agents deployed than ever, users need to be able to interact and cooperate with them in an effective and comfortable manner. Explanations have been shown to increase the understanding and trust of a user in human-agent interaction. There have been numerous studies investigating this effect, but they rely on the user explicitly requesting an explanation. We propose a first overview of when an explanation should be triggered and show that there are many instances that would be missed if the agent solely relies on direct questions. For this, we differentiate between direct triggers such as commands or questions and introduce indirect triggers like confusion or uncertainty detection.",
    }
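
    A toy sketch (ours, not the paper's system) of the distinction the abstract draws: detect direct triggers (explicit questions or commands) separately from indirect triggers (signals of confusion or uncertainty). The keyword lists are illustrative only:

    DIRECT = ("why", "explain", "how come")
    INDIRECT = ("huh", "i don't get", "wait", "not sure")

    def trigger_type(utterance):
        u = utterance.lower()
        if any(k in u for k in DIRECT):
            return "direct"    # user explicitly asks for an explanation
        if any(k in u for k in INDIRECT):
            return "indirect"  # agent should offer an explanation proactively
        return "none"

    print(trigger_type("Why did you turn left?"))  # direct
    print(trigger_type("Huh, that seems odd."))    # indirect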

  • P. K. Murukannaiah, N. Ajmeri, C. M. Jonker, and M. P. Singh, “New Foundations of Ethical Multiagent Systems,” in Proceedings of the 19th Conference on Autonomous Agents and MultiAgent Systems, Auckland, 2020, pp. 1706–1710.
    [BibTeX] [Abstract] [Download PDF]

    Ethics is inherently a multiagent concern. However, research on AI ethics today is dominated by work on individual agents: (1) how an autonomous robot or car may harm or (differentially) benefit people in hypothetical situations (the so-called trolley problems) and (2) how a machine learning algorithm may produce biased decisions or recommendations. The societal framework is largely omitted. To develop new foundations for ethics in AI, we adopt a sociotechnical stance in which agents (as technical entities) help autonomous social entities or principals (people and organizations). This multiagent conception of a sociotechnical system (STS) captures how ethical concerns arise in the mutual interactions of multiple stakeholders. These foundations would enable us to realize ethical STSs that incorporate social and technical controls to respect stated ethical postures of the agents in the STSs. The envisioned foundations require new thinking, along two broad themes, on how to realize (1) an STS that reflects its stakeholders’ values and (2) individual agents that function effectively in such an STS.

    @inproceedings{Murukannaiah-2020-AAMASBlueSky-EthicalMAS,
    author = {Pradeep K. Murukannaiah and Nirav Ajmeri and Catholijn M. Jonker and Munindar P. Singh},
    title = {New Foundations of Ethical Multiagent Systems},
    booktitle = {Proceedings of the 19th Conference on Autonomous Agents and MultiAgent Systems},
    series = {AAMAS '20},
    year = {2020},
    address = {Auckland},
    pages = {1706--1710},
    numpages = {5},
    keywords = {Agents, ethics},
    url = {https://ii.tudelft.nl/~pradeep/doc/Murukannaiah-2020-AAMASBlueSky-EthicalMAS.pdf},
    abstract = {Ethics is inherently a multiagent concern. However, research on AI ethics today is dominated by work on individual agents: (1) how an autonomous robot or car may harm or (differentially) benefit people in hypothetical situations (the so-called trolley problems) and (2) how a machine learning algorithm may produce biased decisions or recommendations. The societal framework is largely omitted. To develop new foundations for ethics in AI, we adopt a sociotechnical stance in which agents (as technical entities) help autonomous social entities or principals (people and organizations). This multiagent conception of a sociotechnical system (STS) captures how ethical concerns arise in the mutual interactions of multiple stakeholders. These foundations would enable us to realize ethical STSs that incorporate social and technical controls to respect stated ethical postures of the agents in the STSs. The envisioned foundations require new thinking, along two broad themes, on how to realize (1) an STS that reflects its stakeholders' values and (2) individual agents that function effectively in such an STS.}
    }

  • Z. Akata, D. Balliet, M. de Rijke, F. Dignum, V. Dignum, G. Eiben, A. Fokkens, D. Grossi, K. Hindriks, H. Hoos, H. Hung, C. Jonker, C. Monz, M. Neerincx, F. Oliehoek, H. Prakken, S. Schlobach, L. van der Gaag, F. van Harmelen, H. van Hoof, B. van Riemsdijk, A. van Wynsberghe, R. Verbrugge, B. Verheij, P. Vossen, and M. Welling, “A Research Agenda for Hybrid Intelligence: Augmenting Human Intellect With Collaborative, Adaptive, Responsible, and Explainable Artificial Intelligence,” IEEE Computer, vol. 53, iss. 08, pp. 18-28, 2020. doi:10.1109/MC.2020.2996587
    [BibTeX] [Abstract] [Download PDF]

    We define hybrid intelligence (HI) as the combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them and achieving goals that were unreachable by either humans or machines. HI is an important new research focus for artificial intelligence, and we set a research agenda for HI by formulating four challenges.

    @ARTICLE {9153877,
    author = {Z. Akata and D. Balliet and M. de Rijke and F. Dignum and V. Dignum and G. Eiben and A. Fokkens and D. Grossi and K. Hindriks and H. Hoos and H. Hung and C. Jonker and C. Monz and M. Neerincx and F. Oliehoek and H. Prakken and S. Schlobach and L. van der Gaag and F. van Harmelen and H. van Hoof and B. van Riemsdijk and A. van Wynsberghe and R. Verbrugge and B. Verheij and P. Vossen and M. Welling},
    journal = {IEEE Computer},
    title = {A Research Agenda for Hybrid Intelligence: Augmenting Human Intellect With Collaborative, Adaptive, Responsible, and Explainable Artificial Intelligence},
    year = {2020},
    volume = {53},
    number = {08},
    issn = {1558-0814},
    pages = {18-28},
    doi = {10.1109/MC.2020.2996587},
    publisher = {IEEE Computer Society},
    address = {Los Alamitos, CA, USA},
    month = {aug},
    url = "http://www.cs.vu.nl/~frankh/postscript/IEEEComputer2020.pdf",
    abstract = "We define hybrid intelligence (HI) as the combination of human
    and machine intelligence, augmenting human intellect and
    capabilities instead of replacing them and achieving goals
    that were unreachable by either humans or machines. HI is an
    important new research focus for artificial intelligence, and we
    set a research agenda for HI by formulating four challenges."
    }

2019

  • F. van Harmelen and A. ten Teije, “A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems,” Journal of Web Engineering, vol. 18, iss. 1-3, pp. 97-124, 2019. doi:10.13052/jwe1540-9589.18133
    [BibTeX] [Abstract] [Download PDF]

    We propose a set of compositional design patterns to describe a large variety of systems that combine statistical techniques from machine learning with symbolic techniques from knowledge representation. As in other areas of computer science (knowledge engineering, software engineering, ontology engineering, process mining and others), such design patterns help to systematize the literature, clarify which combinations of techniques serve which purposes, and encourage re-use of software components. We have validated our set of compositional design patterns against a large body of recent literature.

    @article{JWE2019,
    title = "A Boxology of Design Patterns for Hybrid Learning and Reasoning Systems",
    author = "Frank van Harmelen and Annette ten Teije",
    journal = "Journal of Web Engineering",
    year = "2019",
    volume = "18",
    number = "1-3",
    pages = "97-124",
    doi = "10.13052/jwe1540-9589.18133",
    url = "http://www.cs.vu.nl/~frankh/postscript/JWE2019.pdf",
    abstract = "We propose a set of compositional design patterns to describe a large variety of systems that combine statistical techniques from machine learning with symbolic techniques from knowledge representation. As in other areas of computer science (knowledge engineering, software engineering, ontology engineering, process mining and others), such design patterns help to systematize the literature, clarify which combinations of techniques serve which purposes, and encourage re-use of software components. We have validated our set of compositional design patterns against a large body of recent literature."
    }
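
    A minimal sketch (our illustration) of one such compositional pattern, a statistical classifier feeding a symbolic reasoner: the learned component emits a symbol, and the reasoning component combines it with background knowledge. The classifier, knowledge base, and threshold are stand-ins:

    def ml_classifier(features):
        # Stand-in for a trained model mapping features to a symbol.
        return "cat" if sum(features) > 1.0 else "dog"

    BACKGROUND = {("cat", "mammal"), ("dog", "mammal"), ("mammal", "animal")}

    def kr_reasoner(symbol):
        # Derive all superclasses of the symbol from the is-a knowledge base.
        derived = {symbol}
        changed = True
        while changed:
            changed = False
            for sub, sup in BACKGROUND:
                if sub in derived and sup not in derived:
                    derived.add(sup)
                    changed = True
        return derived

    print(kr_reasoner(ml_classifier([0.7, 0.6])))  # cat, mammal, animal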