PhD position in reliable deep reinforcement learning

Deadline: 16 Sep 2021

Are you interested in machine learning for intelligent decision making? Would you like to overcome fundamental challenges and bridge the gap to physical agents like robots, autonomous cars, or automated factories? Create new cutting-edge neural network architectures to solve theoretical problems in artificial intelligence empirically? Think about broader implications and practical applications?

Deep reinforcement learning (RL) has been at the core of many recent success stories in AI, in particular for playing strategic games like Go, Chess and StarCraft. Despite these spectacular breakthroughs, RL is rarely used in practice, as the learned control policies are generally not considered reliable enough for deployed robots or autonomous cars. We want to change that!

During your PhD, you will develop new algorithms that generalize to situations differing significantly from those seen during training. This will be possible by controlling a graph neural network's internal epistemic uncertainty, that is, how much the network trusts its own computations. You will evaluate your work on multi-task RL benchmarks, where the agent learns a single policy that solves more than one task; examples are simulated MuJoCo robots or different levels of the same computer game. Your challenge will be to study, propose, and empirically verify properties that improve generalization on these benchmarks. You will collaborate with other members of the Algorithmics group who work on related projects. If you are interested, there may also be an opportunity to transfer your work to real robots.
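To give applicants a concrete feel for "epistemic uncertainty", here is a minimal illustrative sketch in PyTorch. It is not the group's actual method: it uses plain feed-forward networks rather than the graph neural networks mentioned above, and the ensemble trick, network sizes, threshold, and fallback action are all hypothetical choices. The disagreement between independently initialized Q-networks serves as a crude epistemic-uncertainty estimate, and the agent falls back to a conservative default action whenever that disagreement is high.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for the sketch; not taken from the project.
STATE_DIM, N_ACTIONS, ENSEMBLE_SIZE = 8, 4, 5

def make_q_net() -> nn.Module:
    # A small feed-forward Q-network mapping a state to action values.
    return nn.Sequential(
        nn.Linear(STATE_DIM, 64), nn.ReLU(),
        nn.Linear(64, N_ACTIONS),
    )

# An ensemble of independently initialized Q-networks; their
# disagreement on a given state approximates epistemic uncertainty.
ensemble = [make_q_net() for _ in range(ENSEMBLE_SIZE)]

def act(state: torch.Tensor, threshold: float = 0.5, fallback: int = 0) -> int:
    """Greedy action w.r.t. the mean Q-values, unless the ensemble
    members disagree too much, in which case a safe default is used."""
    with torch.no_grad():
        q = torch.stack([net(state) for net in ensemble])  # shape (E, A)
    epistemic = q.var(dim=0).mean().item()  # disagreement across members
    if epistemic > threshold:
        return fallback  # the network does not trust itself here
    return int(q.mean(dim=0).argmax().item())

state = torch.randn(STATE_DIM)
print(act(state))
```

A serious approach would use calibrated, state-dependent uncertainty estimates rather than a fixed threshold; the point of the sketch is only the control flow of estimating how much the network trusts its own computations and gating the policy on it.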


