PhD position in Argumentative Explainable Artificial Intelligence (1.0 FTE)

Deadline: 31 May 2022

If we want to apply artificial intelligence (AI) in high-risk domains, such as health care and the legal system, the applications need to be transparent and trustworthy. Additionally, they need to comply with an increasing range of ethical and legal guidelines. The fast-growing research area of eXplainable Artificial Intelligence (XAI) aims to increase transparency and trust by providing explanations for AI systems and the decisions they make. Research in XAI has mainly focused on making learning-based approaches to AI more transparent. However, these approaches do not account for the use of prior knowledge and reasoning in human cognition. Furthermore, they do not take into account the important aspect of contestability of explanations. In this project, you will study knowledge-based approaches to XAI, specifically approaches based on computational argumentation (Dung, 1995).
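To give a flavour of what computational argumentation looks like in practice, the following is a minimal sketch (not part of the vacancy text) of Dung's (1995) abstract argumentation framework: a set of arguments plus an attack relation, with the grounded extension computed as the least fixpoint of the characteristic function F(S) = {a | S defends a}. All names in the sketch are illustrative.

```python
# Illustrative sketch of a Dung-style abstract argumentation framework.
# An argument is acceptable w.r.t. a set S if S attacks all its attackers;
# the grounded extension is the least fixpoint of that acceptability function.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of the framework (arguments, attacks).

    `attacks` is a set of (attacker, target) pairs.
    """
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(candidate, s):
        # s defends `candidate` if every attacker of `candidate`
        # is in turn attacked by some member of s.
        return all(
            any((d, b) in attacks for d in s)
            for b in attackers_of[candidate]
        )

    s = set()
    while True:
        next_s = {a for a in arguments if defended(a, s)}
        if next_s == s:  # fixpoint reached
            return s
        s = next_s

# Example: a attacks b, b attacks c. Argument a is unattacked, and a
# defends c against b, so the grounded extension is {a, c}.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # → ['a', 'c']
```

Explanations in argumentation-based XAI typically build on semantics like this one: an explanation for accepting an argument can, for instance, cite the arguments that defend it.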

Knowledge-based approaches in AI are applied in many real-life applications, for example at the Netherlands Police and the Dutch Tax and Customs Administration. Additionally, it has been suggested that learning-based approaches to AI could be made more transparent by combining them with knowledge-based approaches: hybrid or neuro-symbolic AI. Developing good explanations for knowledge-based approaches to AI is therefore essential. Explanations for knowledge-based AI have a long history, and argumentation-based XAI has recently gained momentum (Cyras et al., 2021; Borg and Bex, 2021): human cognition is inherently argumentative, and interactive argumentative dialogues are important for increasing trust in explanations. However, many questions remain. In particular, implementing the argumentative nature and interactive aspects of human explanation with computational argumentation has not yet been sufficiently explored. Therefore, in this PhD project you will:

  • investigate how the benefits of explicit knowledge and argumentation can be implemented for XAI;
  • determine what makes good argumentation-based explanations;
  • and apply argumentation-based approaches to provide explanations for outcomes of other AI approaches, such as (deep) machine learning.

Besides conducting research, you will spend 30% of your time on teaching activities.

You will join the Intelligent Systems Group at the Department of Information and Computing Sciences at Utrecht University. This group has a strong tradition in logic and AI, including computational argumentation. Together with the Hybrid Intelligence Centre and the National Police-lab AI, the group forms a vibrant community of researchers working on XAI and hybrid knowledge-based/machine-learning AI.

(Borg and Bex, 2021) A. Borg, F. Bex (2021). A Basic Framework for Explanations in Argumentation. IEEE Intelligent Systems.
(Cyras et al., 2021) K. Cyras, A. Rago, E. Albini, P. Baroni and F. Toni (2021). Argumentative XAI: A Survey. In: Proceedings of IJCAI’21.
(Dung, 1995) P.M. Dung (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence Journal.


