PhD on Visuotactile perception for robot manipulation

Job Type: Temporary
Deadline: 22 Nov 2021

Visual robot perception has grown tremendously in the last ten years. Artificial neural networks now make it possible to reliably classify objects and segment structures from RGB images, and they enable 3D pose estimation for robot manipulation and navigation in partially structured and even unstructured dynamic environments. However, vision alone is not sufficient for manipulation. Versatile interaction with unstructured environments requires a new generation of robots that also fully exploit touch and proprioception (the perception of one's own movement and the associated effort). Combining complementary touch and vision information leads to a better interpretation of the world: touch allows 3D modeling of unseen parts of the environment as well as estimation of object mass, inertia, and friction. Combining visuotactile sensing with memory and cognition will improve robot decision making, bringing a robotic arm's motions and behaviour closer to human manipulation capabilities.

This project is sponsored by the Eindhoven AI Systems Institute (EAISI).

I.Touch2See Project
The I.Touch2See project involves four departments of TU/e (Mechanical Engineering, Electrical Engineering, Mathematics & Computer Science, and Industrial Engineering & Innovation Sciences) collaborating to develop groundbreaking robotics technologies that improve robotic perception for manipulation in unstructured environments. The PhD student will have ample opportunities for interaction with the project's principal investigators and the other two PhD students involved in the project, and may also contribute to other robotics projects currently running at TU/e and its partners (e.g., the I.AM. H2020 EU project, https://i-am-project.eu ).

Your duties
As a PhD researcher in the Dynamics and Control section, within the Robotics and Perception group, you will focus on developing and implementing visual and touch perception algorithms to extract 6D pose, shape, occupancy, friction, weight, and stiffness information of unknown or partially known objects in the scene. You will then integrate this knowledge with the artificial skin and the cognition and memory modules developed in collaboration with your two fellow PhD researchers.

Your job will be to reflect and capitalize on findings from ongoing studies at TU/e on model-based and data-driven approaches for robot-environment physical interaction, including impacts. More specifically, your main tasks will be:

  • Continuously updating a detailed literature review covering: shape and pose reconstruction of deformable and soft objects from RGB(D) and stereo camera images; estimation of friction and dynamic object properties from force/torque and artificial skin sensors; and robot control and perception for physical interaction with rigid and deformable objects;
  • Disseminating the results of your research in international peer-reviewed journals and conferences;
  • Pro-actively enabling the collaboration between the involved disciplines;
  • Writing a successful dissertation based on the developed research and defending it;
  • Assuming educational tasks such as the supervision of Master's students and internships.

