PhD position on "Ethics of artificial intelligence in the defence domain"

Deadline: 17 May 2022

Military AI-enabled systems pose serious ethical, legal and societal challenges. Given the unique nature of work in the defence domain, the stakes are immense. Several scientists and activist groups have warned against the potential emergence of “killer robots”, i.e., autonomous weapon systems that select and attack targets without meaningful human control. Other applications of AI in defence are also subject to ethical and legal objections, such as AI systems that provide situational awareness or collect intelligence. These applications can have serious ethical and legal ramifications, as they may, for instance, bias decision-making on morally sensitive issues. The use of AI technology in defence means handing over some degree of autonomy and responsibility to machines, which may affect human agency, human dignity and human rights in warfare.

Without sufficient consideration of the ethical, legal and societal aspects (ELSA) of the use of AI in the defence domain, risks such as loss of control, biased decision-making, violations of rights and a loss of humanity in warfare may erode public support. These risks and their consequences must be avoided.

Currently, it is unclear which AI-enabled systems are acceptable from ethical, legal and societal perspectives, which are not, and under what conditions or circumstances. This leads to both “over-use” (e.g., deploying AI systems in too many situations without sufficient consideration of the consequences) and “under-use” (e.g., not using AI at all, due to a lack of knowledge or fear of the consequences). Both over-use and under-use of AI in defence may jeopardise the protection of the freedom, safety and security of society.

This PhD project will deliver a methodology for the safe and sound use of AI in the defence domain. The methodology must ensure ethical, legal and societal alignment at all stages of the design, acquisition and operationalization of autonomous systems and military human-machine teams. The project will also identify co-design methods for human-machine teams that can be used to achieve ethical, legal and societal compliance. It will help identify the algorithms needed to ensure this compliance, and will support efforts to incorporate ethical, legal and societal aspects into a system-of-AI-systems.

This PhD position will be part of the ELSA (Ethical, Legal and Societal Aspects) Lab Defence, funded under the NWA call “Human-centred AI for an inclusive society – towards an ecosystem of trust”. See https://www.nwo.nl/en/news/more-10-million-euros-human-centred-ai-research-elsa-labs

The successful candidate will work under the supervision of Filippo Santoni de Sio, Jeroen van den Hoven, Mark Neerincx, and Jurrian van Diggelen. Filippo Santoni de Sio and Jeroen van den Hoven are, respectively, associate and full professor in ethics and philosophy of technology at TU Delft. They have worked, among other things, on design for values, the ethics of digital technologies, and meaningful human control over autonomous systems. Mark Neerincx is full professor in Human-Centered Computing at Delft University of Technology and Principal Scientist in the TNO Department of Human-Machine Teaming. Jurrian van Diggelen is Senior Researcher at TNO Defence, Safety and Security, and coordinator of the ELSA Lab of which this PhD is part.
