Jheronimus Academy of Data Science (JADS) in Den Bosch is proud to start three large Robust AI labs together with:
JADS is seeking enthusiastic colleagues for PhD student positions. We operationalize our ambitious AI agenda by explicitly aligning our Robust AI research agenda with the United Nations' Sustainable Development Goals.
The project is funded in a public-private partnership by NWO/NLAIC and the private partners. This position is part of the Deloitte project.
The next generation of enterprise applications is quickly becoming AI-enabled, providing novel functionalities with unprecedented levels of automation and intelligence. As we recover, reopen, and rebuild, it is time to rethink the importance of trust. At no time has trust been more tested, or more valued, in leaders and in each other. Trust is the basis for connection. Trust is all-encompassing: physical, emotional, digital, financial, and ethical. A nice-to-have is now a must-have; a principle is now a catalyst; a value is now invaluable.
Are you an enthusiastic and ambitious researcher with a completed master's degree in a field related to machine learning (Computer science, AI, Data Science) or in Electrical Engineering with an affinity for AI and deep learning? Does the idea of working on real-world problems and with industry partners excite you? Are you passionate about using trustworthy AI methods for the next generation of auditing processes, which are increasingly AI-enabled and data-driven? And are you interested in delivering new tools to ascertain the fairness of the next generation of AI software?
We are recruiting a PhD candidate who will develop and validate novel concepts, methods, and tools for monitoring, auditing, and fostering the fairness of AI software systems, and trial them with industry partners together with Deloitte.
This vacancy falls under the auspices of the JADE lab, the data/AI engineering and governance research unit of the Jheronimus Academy of Data Science (JADS), and Deloitte. In particular, this position is associated with JADE's ROBUST program on Auditing for Responsible AI Software Systems (SAFE-GUARD), which is financed under the NWO LTP funding scheme with Deloitte as the key industry partner.
While the overall objective of SAFE-GUARD is the auditing of AI software, it can be refined into the following more elaborate goal: "Explore, develop, and validate novel auditing theories, tools, and methodologies to monitor and audit whether AI applications adhere to requirements of fairness (no bias), explainability and transparency (easy to explain), robustness and reliability (delivering the same results under various execution environments), respect for privacy (complying with the GDPR), and safety and security (with no vulnerabilities)."
Deloitte's deep involvement in an industrial setting will balance rigour with relevance, ensure fit with societal requirements and trends, and enable validation through industrial case studies.
Explainability and transparency have been recognized as fundamental aspects of responsible AI systems. Attending to these aspects helps ensure that an AI model's decisions are not based on sex, race, or other sensitive attributes, reliance on which can have disastrous consequences.
Therefore, Deloitte calls for transparency and responsibility in AI. Transparent AI software makes its (implicit) underlying values explicit, including ethical and moral considerations, and promotes the accountability of companies for AI-software-based decisions. At the same time, ensuring and assessing system explainability in applied contexts is a daunting task, involving various stakeholders (developers, users, owners) and perspectives (technical, legal, psychological, economic).
This project aims at improving the assessment of the explainability and transparency of business and governmental decisions that are (partially) based on AI software, such as job offers and loan proposals. As such, it goes well beyond simply publishing the AI code, as many have suggested: it aims at opening the AI software 'black box' and explaining whether the AI algorithms make sense, are well tested, and are audited. At the same time, explainability involves a balancing act between explaining the workings in layman's terms and oversimplification. Therefore, part of this project is the development of a toolbox for auditors (and their clients) to help evaluate transparency and explainability against a coherent set of measurable parameters and suggest improvements.