Public opinion is increasingly informed by online content that is spread via social media and curated algorithmically. The content suggested to users is selected by recommendation engines whose notion of relevance is commonly optimized for a single metric, such as engagement or number of views. This contributes to polarization and reduces deliberate cognition and users' autonomous choice over the content they see. Furthermore, when users consult social media or search engines to inform their opinions, they tend to select information that confirms their pre-existing beliefs. For example, if Mary, an animal rights activist, wants to find out whether zoos should continue to exist, she selects and reads mostly articles that are against zoos. Most people act this way; it is a systematic pattern in human behavior known as confirmation bias. Yet nothing in these systems supports users in reflecting on or identifying their own exposure and consumption biases.
This postdoc will help improve an existing explanation interface for non-expert users like Mary, who consume information about disputed topics (such as whether zoos should exist), and will help evaluate the effectiveness of these explanations in user studies. The exact scope of the postdoc position will be defined based on the preferences and experience of the selected candidate.
The position is embedded in the Explainable Artificial Intelligence group of the Department of Advanced Computing Sciences. The group consists of full professors, associate and assistant professors, postdoctoral researchers, Ph.D. candidates, and master's and bachelor's students. The group works closely together on a day-to-day basis to exchange knowledge, ideas, and research advancements. We conduct both fundamental and applied research, with a focus on Explainable Artificial Intelligence. The candidate will also benefit from a strong industry and research network, including the PI's involvement in the ROBUST Long Term Program in development (https://www.nwo.nl/en/researchprogrammes/knowledge-and-innovation-covenant/long-term-programmes-kic-2020-2023/robust-ltp).
The full-time position (negotiable) is initially offered for a period of one year; extensions may be possible.
The successful candidate will: