JADS: PhD on Audits for Explainability & Transparency for AI Software (0.8-1.0 FTE)

Job Type: Temporary
Deadline: 21 Oct 2022

Jheronimus Academy of Data Science (JADS) in Den Bosch is proud to start three large Robust AI labs together with:

  • Deloitte (Auditing for Responsible AI Software Systems) - 5 PhDs.
  • DPG Media (Responsible Media Lab) - 5 PhDs, together with the University of Amsterdam (UvA).
  • LaNubia (Innovation Lab for Utilities on Sustainable Technology and Renewable Energy) - 5 PhDs.
JADS is seeking enthusiastic colleagues for the position of PhD student. We operationalize this ambitious AI agenda by explicitly aligning our research agenda on Robust AI with the United Nations' Sustainable Development Goals. The project is funded in a public-private partnership by NWO/NLAIC and the private partners. This position is part of the Deloitte project.

Short Description

The next generation of enterprise applications is quickly becoming AI-enabled, providing novel functionalities with unprecedented levels of automation and intelligence. As we recover, reopen, and rebuild, it is time to rethink the importance of trust. Never has trust been more tested, or more valued, in leaders and in each other. Trust is the basis for connection. Trust is all-encompassing: physical, emotional, digital, financial, and ethical. A nice-to-have is now a must-have; a principle is now a catalyst; a value is now invaluable.

Are you an enthusiastic and ambitious researcher with a completed master's degree in a field related to machine learning (computer science, AI, data science) or in electrical engineering with an affinity for AI and deep learning? Does the idea of working on real-world problems with industry partners excite you? Are you passionate about using trustworthy AI methods for the next generation of auditing processes, which are increasingly AI-enabled and data-driven? And are you interested in delivering new tools to ascertain the fairness of the next generation of AI software?

We are recruiting a PhD candidate who will develop and validate novel concepts, methods, and tools for monitoring, auditing, and fostering the fairness of AI software systems, and who will trial them with industrial partners who work with Deloitte.

Job Description

This vacancy falls under the auspices of the JADE lab, the data/AI engineering and governance research unit of the Jheronimus Academy of Data Science (JADS), and Deloitte. In particular, this position is associated with JADE's ROBUST program on Auditing for Responsible AI Software Systems (SAFE-GUARD), which is financed under the NWO LTP funding scheme with Deloitte as the key industry partner.

While the overall objective of SAFE-GUARD is the auditing of AI software, it can be refined into the following, more elaborate goal: "Explore, develop, and validate novel auditing theories, tools, and methodologies that can monitor and audit whether AI applications adhere to requirements of fairness (no bias), explainability and transparency (easy to explain), robustness and reliability (delivering the same results under various execution environments), respect for privacy (GDPR compliance), and safety and security (no vulnerabilities)."
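To make one of these dimensions concrete: in its simplest form, a robustness and reliability audit could check that a model pipeline produces identical predictions across repeated runs. The sketch below is a minimal, hypothetical check assuming scikit-learn and NumPy; it illustrates the idea and is not part of the SAFE-GUARD toolbox.

```python
# Minimal, hypothetical reliability check: the same training pipeline, run
# twice with the same seed, must yield identical predictions. A real audit
# would also compare across library versions, hardware, and environments.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_and_predict(seed: int) -> np.ndarray:
    rng = np.random.default_rng(42)           # identical synthetic data in both runs
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] > 0).astype(int)
    model = RandomForestClassifier(random_state=seed).fit(X, y)
    return model.predict(X)

run_a = train_and_predict(seed=0)
run_b = train_and_predict(seed=0)
assert np.array_equal(run_a, run_b), "pipeline is not reproducible across runs"
print("Reliability check passed: identical predictions across runs.")
```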

The industrial setting and Deloitte's deep involvement will balance rigour with relevance, ensure fit with societal requirements and trends, and ground validation in industrial case studies.

Scientific Challenge

Explainability and transparency have been recognized as fundamental aspects of responsible AI systems. Attending to these aspects helps ensure that an AI model's decisions are not based, for example, on sex, race, or any other sensitive attribute; hidden dependencies on such attributes can have disastrous consequences.
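As a toy illustration of what an automated check on such dependencies might look like, the sketch below computes a demographic parity gap: the difference in positive-outcome rates between two groups defined by a sensitive attribute. The function name, data, and NumPy dependency are assumptions made for this example, not the project's actual method.

```python
# Illustrative fairness check (hypothetical): demographic parity gap
# between two groups encoded by a sensitive attribute.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between group 0 and group 1."""
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

# Toy data: loan decisions (1 = approved) for two groups of applicants.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(f"Demographic parity gap: {demographic_parity_gap(y_pred, sensitive):.2f}")
# 0.75 vs 0.25 positive rate -> gap of 0.50, which an auditor would flag.
```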

Therefore, Deloitte calls for transparency and responsibility in AI. Transparent AI software makes its (implicit) underlying values explicit, including ethical and moral considerations, and promotes the responsibility of companies for AI-software-based decisions. At the same time, ensuring and assessing system explainability in applied contexts is a daunting task, involving various stakeholders (developers, users, owners) and perspectives (technical, legal, psychological, economic).

This project aims at improving the assessment of the explainability and transparency of business and governmental decisions that are (partially) based on AI software, such as job offers, loan proposals, etc. As such, it goes well beyond simply publishing the AI code, as many suggest, and aims at opening the AI software 'black box' to establish whether the AI algorithms make sense and are well tested and audited. At the same time, explainability involves a balancing act: explaining the workings in layman's terms without oversimplifying. Therefore, part of this project is the development of a toolbox for auditors (and their clients) that helps evaluate transparency and explainability against a coherent set of measurable parameters and suggests improvements.
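As a sketch of what one such measurable parameter could look like, the example below uses permutation importance from scikit-learn to surface which inputs actually drive a model's decisions; a transparency report built from signals like this is one plausible shape for such a toolbox. The feature names and synthetic setup are hypothetical.

```python
# Hypothetical sketch of one measurable transparency signal: permutation
# importance, a model-agnostic estimate of how much each input drives
# the model's decisions. Feature names and data are illustrative only.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # e.g. income, age, tenure
y = (X[:, 0] + 0.2 * X[:, 1] > 0).astype(int)   # outcome mostly driven by "income"

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report each feature's importance so an auditor can see which inputs
# the deployed model actually relies on when making decisions.
for name, importance in zip(["income", "age", "tenure"], result.importances_mean):
    print(f"{name}: {importance:.3f}")
```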


