Military AI-enabled systems pose serious ethical, legal and societal challenges. Considering the unique nature of work in the defence domain, the stakes are immense. Several scientists and activist groups have warned against the potential emergence of “killer robots”, i.e., autonomous weapon systems that select and attack targets without meaningful human control. Other applications of AI in defence are also subject to ethical and legal objections, such as AI systems that provide situational awareness or collect intelligence. These applications can have serious ethical and legal ramifications, as they may, for instance, bias decision-making processes on morally sensitive issues. The use of AI technology in defence means handing over some degree of autonomy and responsibility to machines, which may impact human agency, human dignity and human rights in warfare.
Without sufficient consideration of the ethical, legal and societal aspects (ELSA) of the use of AI in the defence domain, risks such as loss of control, biased decision-making, violation of rights, and dehumanization of warfare may erode public support. These risks and consequences should be avoided.
Currently, it is unclear which AI-enabled systems are acceptable from ethical, legal and societal perspectives, which are not, and under what conditions. This leads to both “over-use” (e.g., deploying too many AI systems in too many situations, without considering the consequences) and “under-use” (e.g., not using AI at all, due to lack of knowledge or fear of the consequences). Both over-use and under-use of AI in defence may put at risk the freedom, safety and security of society.
This PhD project will deliver a methodology for the safe and sound use of AI in the defence domain. The methodology will have to ensure ethical, legal and societal alignment in all stages of the design, acquisition, and operationalization of autonomous systems and military human-machine teams. The project will also identify co-design methods for human-machine teams that can be used to achieve ethical, legal, and societal compliance. It will help identify the algorithms to be used to ensure this compliance, and will support efforts to incorporate ethical, legal and societal aspects in a system-of-AI-systems.
This PhD position will be part of the ELSA (ethical, legal and societal aspects) Lab Defence, granted under the NWA call “Human-centred AI for an inclusive society – towards an ecosystem of trust”. https://www.nwo.nl/en/news/more-10-million-euros-human-centred-ai-research-elsa-labs
The successful candidate will work under the supervision of Filippo Santoni de Sio, Jeroen van den Hoven, Mark Neerincx, and Jurrian van Diggelen. Filippo Santoni de Sio and Jeroen van den Hoven are, respectively, associate and full professor in ethics and philosophy of technology at TU Delft. They have worked among other things on design for values, the ethics of digital technologies, and meaningful human control over autonomous systems. Mark Neerincx is full professor in Human-Centered Computing at the Delft University of Technology, and Principal Scientist at TNO Department of Human-Machine Teaming. Jurrian van Diggelen is Senior Researcher at TNO Defence, Safety and Security, and coordinator of the ELSA Lab of which this PhD is part.