Postdoctoral position in Explaining Search on Disputed Topics

Updated: about 2 months ago
Job Type: Temporary
Deadline: 03 Jul 2022

Public opinion is increasingly informed by online content, spread via social media and curated algorithmically. The content suggested to people to read next is selected by recommendation engines whose notion of relevance is commonly optimized for a single metric, such as engagement or number of views. This leads to increased polarization and reduces users' deliberate cognition and autonomous choice over which content they see. Furthermore, when users turn to social media or search to inform their opinions, they tend to select information that confirms their pre-existing beliefs. For example, if Mary, an animal rights activist, wants to find out whether zoos should continue to exist, she selects and reads more articles that are against zoos. Most people would act this way -- this systematic pattern in human behavior is known as confirmation bias. Yet nothing in these systems supports users in reflecting on or identifying their own exposure or consumption biases.

This postdoc will help improve an existing explanation interface for non-expert users like Mary, who consume information about disputed topics (such as whether zoos should exist). The postdoc will also help evaluate the effectiveness of these explanations in user studies. The exact scope of the position will be defined based on the preferences and experience of the selected candidate.

The position is embedded in the Explainable Artificial Intelligence group of the Department of Advanced Computing Sciences. The group consists of full, associate, and assistant professors, postdoctoral researchers, Ph.D. candidates, and master's/bachelor's students. The group works together closely on a day-to-day basis to exchange knowledge, ideas, and research advancements. We conduct both fundamental and applied research, with a focus on Explainable Artificial Intelligence. The candidate will also benefit from a strong industry and research network, including the PIs' involvement with the ROBUST Long Term Program currently in development.

The full-time position (working hours negotiable) is initially offered for a period of one year; extensions may be possible.

The successful candidate will:

  • Support the R&D activities of the department in human-computer interaction and explainable AI.
  • (Co-)author scientific, peer-reviewed papers, submitted to top conferences and journals in the field of data fusion and machine learning.
  • Provide support and/or manage funding acquisition activities and proposal writing (national and international, e.g. Horizon Europe).
  • Support the department in teaching activities.
