Formal Explainability in Artificial Intelligence

Location: Melbourne, Victoria

Artificial Intelligence (AI) models are widely used in decision-making procedures in real-world applications across important areas such as finance, healthcare, education, and safety-critical systems. The fast growth, practical achievements, and overall success of modern approaches to AI guarantee that machine learning will prevail as a generic computing paradigm and will find an ever-growing range of practical applications, many of which will affect aspects of human life, including privacy and safety. Because AI models occasionally fail, make biased decisions, and can produce confusing outputs owing to their brittleness, there is a critical need to understand their behaviour, analyse the (potential) failures of the models (or of the data used to train them), debug them, and possibly repair them. This has given rise to growing interest in validating the operation of AI models and has motivated efforts to devise approaches to explainable artificial intelligence (XAI).

This project will build on the methodology of formal explainable AI (FXAI), aiming to advance FXAI technology and broaden its use by investigating (1) how to efficiently represent an AI system of interest in a logical formalism; (2) how to apply efficient formal reasoning to understand the behaviour of an AI system in the form of provably correct abductive and contrastive explanations, concepts well studied in FXAI; and (3) how to provide guarantees of correctness for an AI system in light of the explanations computed.
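
To make the central notion concrete: an abductive explanation (AXp) is a subset-minimal set of features whose values, taken from a given instance, provably entail the model's prediction no matter how the remaining features are set. The sketch below computes one via the standard deletion-based procedure over a hypothetical toy Boolean classifier; the classifier, instance, and helper names are illustrative assumptions only, and in a realistic FXAI pipeline the entailment test in is_sufficient would be delegated to a SAT, SMT, or MILP oracle rather than brute-force enumeration.

```python
from itertools import product

def predict(x):
    # Hypothetical toy classifier over four Boolean features:
    # predicts 1 iff (x0 AND x1) OR x3. In a real FXAI setting this
    # would be a trained model encoded in a logical formalism.
    return int((x[0] and x[1]) or x[3])

def is_sufficient(fixed, v, target, n):
    # Entailment check: does fixing the features in `fixed` to their
    # values in v force the prediction `target` for every completion
    # of the remaining features? Brute-force enumeration stands in
    # here for a call to a formal reasoning oracle.
    free = [i for i in range(n) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = list(v)
        for i, b in zip(free, bits):
            x[i] = b
        if predict(x) != target:
            return False
    return True

def abductive_explanation(v):
    # Deletion-based extraction of a subset-minimal AXp: start from
    # all features and drop any feature whose removal still leaves
    # the remaining set sufficient for the prediction.
    n = len(v)
    target = predict(v)
    xp = set(range(n))
    for i in range(n):
        if is_sufficient(xp - {i}, v, target, n):
            xp.remove(i)
    return sorted(xp), target

if __name__ == "__main__":
    v = (1, 1, 0, 0)  # instance to be explained; predict(v) == 1
    xp, cls = abductive_explanation(v)
    print(f"features {xp} provably entail prediction {cls}")
    # -> features [0, 1]: fixing x0=1 and x1=1 guarantees class 1
    #    regardless of how x2 and x3 are set.
```

Contrastive explanations (CXps), subset-minimal sets of features whose alteration can flip the prediction, are dual to AXps via minimal hitting sets, a relationship that FXAI algorithms exploit when enumerating explanations.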
