In this project, you will develop novel multi-modal deep learning techniques to detect dynamic and static obstacles in urban environments for self-driving applications. Existing approaches often focus on training models for a single sensor modality (vision, lidar, radar, or audio) or a specific pair of sensors. The overarching goal here is to develop generic multi-modal self-supervised learning approaches that exploit the complementary nature of all available sensors during training, and thus compensate for the lack of annotated data for some sensor modalities (e.g. acoustics, radar). A possible direction is to simultaneously learn a shared 3D environment representation and the sensor models that map each sensor's measurements to this shared space while preserving measurement uncertainty. To regularize the task, we will seek to exploit geometric constraints in the representation and self-supervised learning strategies. Additionally, past and future observations could be incorporated into the learning strategy to exploit the temporal consistency of the dynamic scene, reducing uncertainty in the constructed representation and possibly capturing cues to anticipate future traffic events. If successful, the developed methods will enable self-driving vehicles to achieve state-of-the-art online perception performance for both well-established (vision, lidar) and novel (audio) sensing modalities without the need for extensive data annotation.
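To give a concrete flavour of the kind of technique involved, the sketch below shows one common way to align two sensor modalities in a shared embedding space without labels: per-modality projection heads trained with a symmetric InfoNCE (contrastive) objective, where co-registered camera/lidar measurements of the same scene act as positive pairs. This is an illustrative toy example only, not the project's method; the feature dimensions, projection matrices, and function names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(features, weights):
    """Map raw per-sensor features into the shared space and L2-normalise."""
    z = features @ weights
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def infonce_loss(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE: embeddings of the same scene from two sensors
    should be closer to each other than to any other scene in the batch."""
    logits = (z_a @ z_b.T) / temperature
    idx = np.arange(len(z_a))

    def xent(l):
        # numerically stable log-softmax; positives sit on the diagonal
        l = l - l.max(axis=1, keepdims=True)
        log_p = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_p[idx, idx].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

# toy batch: 8 co-registered camera (32-d) and lidar (48-d) feature vectors
cam_feats = rng.normal(size=(8, 32))
lidar_feats = rng.normal(size=(8, 48))
# hypothetical linear "sensor models" into a 16-d shared space
W_cam = rng.normal(size=(32, 16))
W_lidar = rng.normal(size=(48, 16))

loss = infonce_loss(project(cam_feats, W_cam), project(lidar_feats, W_lidar))
```

In practice the linear projections would be deep networks, the positive pairs would come from calibrated, time-synchronised sensor rigs, and the shared space would be tied to an explicit 3D scene representation; the contrastive objective is just one of several self-supervised signals the project could combine with geometric and temporal-consistency constraints.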
The host research group, the Intelligent Vehicles group, focuses on environment perception, dynamics & control, and interaction with humans for intelligent vehicles and automated driving in complex unstructured urban environments. The group is part of the Cognitive Robotics department at the 3ME Faculty, which aims to develop intelligent robots and vehicles that will advance mobility, productivity and quality of life. The department combines fundamental research with work on physical demonstrators in areas such as self-driving vehicles, collaborative industrial robots, mobile manipulators and haptic interfaces.
The new AI lab on 3D Understanding (3DUU) initiated by the 3D Geoinformation Research Group and the Intelligent Vehicles Group is one of the Delft Artificial Intelligence Labs (DAI-Labs). It is a cross-disciplinary research lab seeking to develop state-of-the-art AI techniques for interpreting 3D data and reconstructing 3D objects for large-scale urban applications.
Department
3DUU is a Delft Artificial Intelligence Lab (DAI-Lab). Artificial intelligence, data, and digitalization are becoming increasingly important when looking for answers to major scientific and societal challenges. In a DAI-Lab, experts in the fundamentals of AI technology run a shared lab together with experts in AI challenges. As a PhD candidate, you will work with at least two academic members of staff and three other PhD candidates. In total, TU Delft will establish 24 DAI-Labs in which 48 tenure trackers and 96 PhD candidates will have the opportunity to push the boundaries of science by using AI. Each team is driven by research questions that arise from scientific and societal challenges and contributes to the development and execution of domain-specific education. Instead of the usual 4-year contract, you will receive a 5-year contract. Approximately a fifth of your time will be allocated to developing groundbreaking learning materials and educating students in these new subjects. The experience you will gain by teaching will be invaluable for your future career prospects. All team members have many opportunities for self-development. You will be a member of the thriving DAI-Lab community that fosters cross-fertilization between talents with different expertise and disciplines.