PhD in creating a cognitive framework for visuotactile robotic manipulation

Updated: 2 months ago
Job Type: Temporary
Deadline: 21 Oct 2021

Are you interested in developing intelligent robotic systems that make cognitive decisions using current machine learning approaches? Would you like to study how people can benefit from working together with an intelligent robotic arm? If so, then apply for this PhD position at Eindhoven University of Technology and join the Human Technology Interaction (HTI) group.

Background

Visual robot perception has grown tremendously in the last ten years. Artificial neural networks make it possible to reliably classify objects and segment structures in RGB images, and they enable 3D pose estimation for robot navigation and manipulation in partially structured and unstructured environments. However, vision alone is not sufficient. Versatile interaction with unstructured environments requires a new generation of robots that also fully exploit touch and proprioception (the perception of one's own movement and the associated effort). Combining touch and vision information leads to a better interpretation of the world: touch allows for 3D modeling of unseen parts of the environment as well as estimation of object mass, inertia, and friction. Moreover, Artificial Intelligence (AI), through memory and cognition combined with visuo-tactile sensing, will improve robot decision making, bringing a robotic arm's motions and behaviour closer to human capabilities. You can become part of a team of three PhD students working on integrating the information from these senses into an overarching mental model for the robot.

Job description

As a PhD researcher in the Human-Technology Interaction (HTI) group, you will mainly focus on developing and implementing a mental model based on multisensory integration, enabling robust robot perception and cognition in unstructured environments. To this end, you will work on a solution for data-efficient, yet noise-robust, computation using Bayesian priors in a decision-making process. You will build a library of knowledge containing a rich prior over the events and scenarios the system may encounter. You will then integrate this knowledge with the sensory information produced by the two fellow PhD researchers in your team, who will focus on obtaining visuo-tactile and proprioceptive data as well as real-world estimates of objects and humans in the environment. You will combine these streams of information by developing an overarching Bayesian multisensory integration model with internal memory and a predictive inner physics world, giving a robotic arm the cognitive abilities needed for robust world understanding and decision making.
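To give a flavour of the kind of Bayesian multisensory integration described above, here is a minimal, illustrative sketch (not the project's actual model): two independent Gaussian cues, one visual and one tactile, are fused by precision weighting, so the less noisy cue dominates the combined estimate. The function name and numbers are hypothetical.

```python
import math

def fuse_gaussian_cues(mu_v, sigma_v, mu_t, sigma_t):
    """Precision-weighted fusion of a visual and a tactile cue.

    Each cue is a Gaussian belief (mean, standard deviation).
    Under independence, the fused belief is again Gaussian:
    precisions (1/variance) add, and the mean is a precision-weighted average.
    """
    w_v = 1.0 / sigma_v**2   # precision of the visual cue
    w_t = 1.0 / sigma_t**2   # precision of the tactile cue
    mu = (w_v * mu_v + w_t * mu_t) / (w_v + w_t)
    sigma = math.sqrt(1.0 / (w_v + w_t))
    return mu, sigma

# Hypothetical example: a noisy visual estimate of an object's position (in cm)
# combined with a sharper tactile estimate.
mu, sigma = fuse_gaussian_cues(mu_v=10.0, sigma_v=2.0, mu_t=12.0, sigma_t=1.0)
# The fused mean lies closer to the more precise tactile cue, and the fused
# uncertainty is smaller than either cue's alone.
```

Note how the fused standard deviation is always smaller than that of either individual cue, which is the basic statistical argument for combining touch with vision rather than relying on vision alone.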

Furthermore, your job will be to reflect and capitalize on findings from ongoing studies at TU/e and within HTI on the application of neural networks and Bayesian networks for high-fidelity data fusion. As a result, you will contribute to innovative human-technology interaction in our society. You will publish your findings in top-tier academic journals, preferably those with a multi-disciplinary audience, and proactively disseminate the methodology you develop through education, conferences, and workshops.

Context

The research will follow a multi-disciplinary approach across the departments of Industrial Engineering & Innovation Sciences (IE&IS), Mathematics and Computer Science (M&CS), Mechanical Engineering (ME), and Electrical Engineering (EE). You will become part of the Eindhoven Artificial Intelligence Institute (EAISI), joining forces in creating AI for the real world. You will receive day-to-day supervision from Sanne Schoenmakers (IE&IS), and you will work in a team with Andrei Jalba (M&CS), Alessandro Saccon (ME), Marco Fattori (EE), and Wijnand IJsselsteijn (IE&IS).

