(M/W) - Thesis: Flash-based visual SLAM for UAVs

Job Type: FullTime
Deadline: 16 Nov 2021

Gipsa-lab is a joint research unit of CNRS, Grenoble-INP (Grenoble Institute of Technology) and the University of Grenoble, under agreement with Inria and the Observatory of Sciences of the Universe of Grenoble.
With 350 people, including about 130 doctoral students, Gipsa-lab is a multidisciplinary research unit developing both basic and applied research on complex signals and systems.
Gipsa-lab develops projects in the strategic areas of energy, environment, communication, intelligent systems, life and health, and language engineering.
Through its research activities, Gipsa-lab maintains a constant link with the economic environment via strong partnerships with companies.
Gipsa-lab staff are involved in teaching and training in the various universities and engineering schools of the Grenoble academic area (Université Grenoble Alpes).
Gipsa-lab is internationally recognized for its research in automatic control and diagnostics, in signal, image, information and data sciences, and in speech and cognition. The research unit develops its projects in 16 teams organized into 4 research centers:
- Automatic Control & Diagnostics
- Data Science
- Geometry, Learning, Information and Algorithms
- Speech and Cognition

Gipsa-lab counts 148 permanent staff and around 260 non-permanent staff (PhD students, post-doctoral researchers, visiting scholars, Master's interns…).

This thesis is proposed within the scope of the ANR Dark-NAV project, whose goal is to equip a multirotor UAV with a flash-based photolocation system inspired by the flashlight fish for navigation purposes. The flashlight fish illuminates its visual scene by triggering and modulating a striking bioluminescent flash as it swims through coral reefs.

The scientific objectives of this project will be the development of a powerful flash-based stereo active photolocation sensor, of aperiodic visual Simultaneous Localization And Mapping (SLAM) algorithms, and of the corresponding stabilization and navigation strategies. The key strategy will consist in piloting the flashing frequency of the sensor according to the external illumination, the distance to obstacles, and the current speed of the UAV. The targeted application context, carried by the industrial partner (SUEZ), is the autonomous inspection of empty water pipelines or tanks. This inspection is crucial for the maintenance of drinking water infrastructures and to prevent unwanted pollution.
This PhD will focus on the aperiodic and self-triggered SLAM part of this project.
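As an illustration of the flash-piloting strategy mentioned above, a minimal sketch is given below; the function name, constants and the simple heuristic are assumptions for illustration only, not the project's actual control law.

    # Hypothetical sketch: adapt the flash rate to ambient light, obstacle
    # proximity and UAV speed, as described above. All names and constants
    # are illustrative assumptions.
    def flash_rate_hz(ambient_lux: float, obstacle_dist_m: float,
                      speed_mps: float, f_min: float = 0.5,
                      f_max: float = 20.0) -> float:
        """Return a flashing frequency in [f_min, f_max] Hz."""
        darkness = 1.0 / (1.0 + ambient_lux)         # ~1 in the dark, ~0 in daylight
        proximity = 1.0 / max(obstacle_dist_m, 0.1)  # grows as obstacles get close
        demand = darkness * (1.0 + speed_mps) * (1.0 + proximity)
        return min(f_max, max(f_min, f_min * demand))

    # Example: a dark pipe, an obstacle 2 m ahead, flying at 1 m/s.
    print(flash_rate_hz(ambient_lux=0.01, obstacle_dist_m=2.0, speed_mps=1.0))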

The Dark-NAV project is supported by a consortium of three laboratories (GIPSA-Lab in Grenoble, ISM in Marseille and ICube in Strasbourg) with strong experience in vision and advanced robot control. The project also includes the industrial partner SUEZ-SERAMM, which uses drones for the inspection and maintenance of water pipes.

The objective of the thesis is to study and design new vision-based SLAM algorithms using aperiodic flash-based camera images, available only after an external trigger (generated by the navigation module) or a self-triggering event (decided by the SLAM module itself).
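To make the two trigger paths concrete, here is a minimal sketch of such an aperiodic acquisition loop; the imu, camera, slam and nav objects and all their methods are hypothetical placeholders, not an existing interface.

    # Hypothetical sketch of an aperiodic, trigger-driven SLAM loop. The imu,
    # camera, slam and nav objects and their methods are assumed interfaces.
    import queue

    def slam_loop(imu, camera, slam, nav, triggers: "queue.Queue[str]") -> None:
        while True:
            # Inertial prediction runs continuously between flashes.
            slam.predict(imu.read())

            # Either the navigation module requests a flash (external trigger)...
            if nav.wants_image():
                triggers.put("external")
            # ...or the SLAM module decides it needs one (self-trigger).
            if slam.needs_image():
                triggers.put("self")

            try:
                source = triggers.get_nowait()
            except queue.Empty:
                continue

            # A flash image is acquired and fused only when a trigger fires.
            rgb, depth = camera.flash_and_capture()
            slam.correct(rgb, depth, trigger_source=source)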

The first task will focus on the visual-inertial SLAM itself and will rely on the "SuperSurfelFusion" algorithm developed at Gipsa-Lab. This method uses an RGB+Depth camera, like the Intel RealSense, to represent the environment as a set of planar patches, called "supersurfels", and localizes the camera by minimizing an error combining interest-point and supersurfel reprojection. To maintain a good estimate between two images and to avoid camera-tracking failures, it will be necessary to integrate the inertial data into "SuperSurfelFusion". As the SLAM method uses interest points, it will also be necessary to study the robustness of interest-point detection and tracking with respect to the flash illumination (in particular to the fact that the light source moves with the UAV).
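As a rough illustration of such a combined error (not the actual SuperSurfelFusion formulation), the sketch below stacks an interest-point reprojection term with a point-to-plane term against the supersurfel patches; the pinhole model, the weights and all variable names are assumptions.

    # Hypothetical sketch of a combined residual mixing interest-point
    # reprojection with point-to-plane errors against planar "supersurfel"
    # patches; not the actual SuperSurfelFusion cost.
    import numpy as np

    def project(K, R, t, pts_world):
        """Pinhole projection of Nx3 world points; (R, t) maps world to camera."""
        pc = (R @ pts_world.T).T + t
        uv = (K @ pc.T).T
        return uv[:, :2] / uv[:, 2:3]

    def combined_residual(K, R, t, map_pts, obs_px,
                          obs_surfel_centers, map_plane_centers, map_plane_normals,
                          w_pts=1.0, w_surf=0.5):
        """Stacked residual to be minimized over the camera pose (R, t)."""
        # Interest points: pixel reprojection error of 3D map points.
        r_pts = (project(K, R, t, map_pts) - obs_px).ravel()
        # Supersurfels: observed patch centers (camera frame) lifted back to the
        # world frame and compared to their matched map planes (point-to-plane).
        obs_world = (R.T @ (obs_surfel_centers - t).T).T
        r_surf = np.einsum("ij,ij->i", obs_world - map_plane_centers, map_plane_normals)
        return np.concatenate([w_pts * r_pts, w_surf * r_surf])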
The second task will be to study the self-triggering mechanism. The developed SLAM system will be able to predict a localization failure or a drop in localization accuracy in order to command new flash image acquisitions. The approach will rely on geometric and dynamic modeling of the problem (for example, covariance matrices of the localization state) or on machine-learning methods such as reinforcement learning.
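As one possible instance of the geometric criterion mentioned above (an assumption, not a specified design), the predicted pose covariance of a filter-based estimator could be thresholded to decide when to fire a new flash:

    # Hypothetical sketch: fire a self-trigger when the predicted position
    # uncertainty (from the pose covariance of a filter-based estimator)
    # exceeds a tolerated bound. The 6x6 layout and threshold are assumptions.
    import numpy as np

    def should_self_trigger(pose_cov: np.ndarray, max_pos_std_m: float = 0.05) -> bool:
        """pose_cov: 6x6 pose covariance with the translation block first."""
        pos_cov = pose_cov[:3, :3]
        # Worst-case 1-sigma position uncertainty along any direction.
        worst_std = float(np.sqrt(np.max(np.linalg.eigvalsh(pos_cov))))
        return worst_std > max_pos_std_m

A reinforcement-learning variant would instead learn when to trigger from a reward trading localization accuracy against the energy cost of flashing.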


