Explainability of AI techniques in law enforcement and the judiciary

Location: Melbourne, Victoria

This project will investigate and develop ways in which AI algorithms and practices can be made transparent and explainable for use in law enforcement and judicial applications.

The Faculty of Information Technology has a mission to advance social good through its research. Key to this mission is the AiLECS (Artificial Intelligence for Law Enforcement and Community Safety) research lab. The AiLECS lab is a joint initiative of Monash University and the Australian Federal Police, and researches the ethical application of AI theories and techniques to problems of interest to law enforcement agencies. The work of the lab is applied in nature: we seek to rapidly translate our research into real-world solutions to significant threats to community safety.

The admissible use of AI in law enforcement and judicial domains requires consideration of a number of legal issues. In moving from investigative support tools to a more prominent role in a brief of evidence, AI capabilities need to align with the legal and epistemological frameworks within which laws are enforced and judgements are made. Constructing explanations that outline how data is gathered, curated, and used to train AI algorithms, in addition to illustrating how such algorithms come to their decisions, is crucial in this domain.
