Postdoctoral Positions in Multilingual Neural Machine Translation

Deadline: 13 Feb 2022

The Language Technology Lab at the Informatics Institute of the University of Amsterdam invites applications for two fully-funded, three-year postdoc positions in the area of natural language processing (NLP) in general, and neural machine translation in particular.

The two open postdoc positions (in addition to three open PhD positions) are part of an advanced career fellowship project funded by the Dutch Research Council (NWO, Vici scheme) awarded to Prof. Monz.

Modern Neural Machine Translation (NMT) frameworks have resulted in nothing short of a revolution, yielding remarkable advances in translation quality for many language pairs, e.g., English to German. NMT models learn complex translation mappings from bilingual data consisting of human-translated sentence pairs between two languages.

Recent research on multilingual NMT (ML-NMT) has shown that training one model simultaneously on many language pairs can lead to knowledge transfer between languages. Current ML-NMT approaches indeed show impressive results for low-resource translation into English and for zero-shot translation between related languages. However, translation into low-resource languages and zero-shot translation between distant languages remain of low quality, clearly indicating that we are still far from achieving universal translation between all languages.
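One widely used ingredient behind such single-model multilingual systems is to train one encoder-decoder on examples from all language pairs and to signal the desired output language with a token prepended to the source sentence. The sketch below illustrates this idea only; the `<2xx>` tag format and the helper name are illustrative assumptions, not part of this project's codebase.

```python
# Minimal sketch of the target-language-tag trick common in
# multilingual NMT: one model sees training examples from many
# language pairs, with the desired target language indicated by
# a token prepended to the source sentence.
# Tag format "<2xx>" and all names here are illustrative assumptions.

def add_target_tag(source: str, tgt_lang: str) -> str:
    """Prepend a target-language token to a source sentence."""
    return f"<2{tgt_lang}> {source}"

# Mixed training examples from several language pairs feed one model:
corpus = [
    ("Hello world", "Hallo Welt", "de"),   # en -> de
    ("Good morning", "Bonjour", "fr"),     # en -> fr
]
training_pairs = [(add_target_tag(src, lang), tgt)
                  for src, tgt, lang in corpus]
# training_pairs[0] == ("<2de> Hello world", "Hallo Welt")
```

Because the tag, rather than the architecture, selects the output language, such a model can in principle be asked to translate between a pair never seen together in training, which is exactly the zero-shot setting discussed above.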

In this project, you will work in a team of six researchers, devising and investigating novel machine translation models that are more robust under limited training-data conditions and that allow better knowledge transfer from high-quality translation directions to low-quality ones. Your research will be carried out in the context of our existing neural machine translation architecture, implemented in PyTorch.

The research in this project will address a number of overarching questions, including (but not limited to):

  • How can we better train models for language pairs where only a very limited amount of training data is available?
  • What should novel models or training procedures look like that allow us to explicitly stimulate knowledge transfer between language pairs?
  • How can we achieve good quality translations for language pairs without any parallel training data (zero-shot problem)?
  • How can we improve the robustness of neural machine translation?

What are you going to do?

With our help and support, you will:

  • develop novel techniques and models for low-resource machine translation and transfer learning between languages;
  • experimentally verify research hypotheses;
  • design and perform controlled experiments to gain modelling insights and answer fundamental research questions;
  • contribute to (and benefit from) our existing neural machine translation infrastructure;
  • advance the state of the art in machine translation;
  • participate in the supervision of PhD students;
  • become an active member of the research community and collaborate within and outside the Language Technology Lab;
  • publish and present work regularly at international conferences, workshops, and journals.

