PhD in artificial intelligence for navigation assistance for blind people

Updated: 6 months ago
Location: Tremblay-en-France, Île-de-France
Deadline: 10 Sep 2019

The doctoral student will be based at IRIT in Toulouse but will also be able to work in Singapore for periods of several weeks at the CNRS-IPAL laboratory, in collaboration with NUS.

Navigation assistance for blind people implanted with an artificial retina is only beginning to be studied. With current implants, which have very low resolution, navigation poses great difficulties related to mobility but also to orientation. The objective of the INCA project (Retinal Implant for Contextual and Learning-Based Navigation Support) is to develop a bio-inspired artificial perception system for navigation assistance in blind patients with retinal implants. The reinforcement learning paradigm and its links with deep neural networks will be at the heart of the research work, which includes several original features.

The first original feature of the research work is the use of asynchronous, high-frequency input streams. Visual information will be captured by several event-based (asynchronous) cameras, whose operation is comparable to that of the retina: they transmit a signal only when a change is detected in the input, which drastically limits the amount of information to be analyzed and allows very fast propagation through a first neural network regulated by an unsupervised learning rule (spike-timing-dependent plasticity, STDP). The proposed system will also have a second, likewise asynchronous, source of contextual information. Urban data, in particular 3D models of cities, are now available in addition to cartographic or location information; they can be compressed, transmitted efficiently in mobile situations, and interacted with (Forgione et al. 2018). To our knowledge, the use of asynchronous cameras coupled with contextual information has never been proposed, yet it could be particularly relevant for navigation assistance for blind patients with implants.
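To make the event-camera principle concrete, here is a minimal sketch of change-driven output: events are emitted only for pixels whose brightness changed beyond a threshold, so a static scene produces no data at all. The thresholded log-intensity model and its parameters are simplifying assumptions for illustration, not the project's sensor.

```python
import numpy as np

def events_from_frames(prev, curr, threshold=0.1):
    """Emit sparse (x, y, polarity) events only where brightness changed,
    mimicking an event/asynchronous camera (illustrative model only)."""
    diff = np.log1p(curr.astype(float)) - np.log1p(prev.astype(float))
    ys, xs = np.nonzero(np.abs(diff) > threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return list(zip(xs.tolist(), ys.tolist(), polarity.tolist()))

# A static scene yields no events; only the changed pixel fires.
a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[1, 2] = 200
print(events_from_frames(a, a))  # → []
print(events_from_frames(a, b))  # → [(2, 1, 1)]
```

The sparsity is the point: downstream networks only process the event list, never full frames, which is what enables very fast propagation.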
The second original feature is to control the system, downstream of the sensors described above, from a visual representation of the decision-making state, in the form of an image. This asynchronous representation will be updated very frequently but, being too rich, cannot be reproduced as-is in the implant. An innovative reinforcement learning algorithm will make it possible, via one or more neural networks, to decide on recommendations for navigation and on visual restitution actions (triggering stimulation on the implant).
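The reinforcement learning paradigm underlying this can be illustrated with a toy tabular Q-learning sketch. Everything here is a hypothetical stand-in: the project's actual state (an image-like decision-making representation), action set, and reward are open research questions; this one-dimensional corridor only shows how recommendations can be learned from reward.

```python
import random

# Toy corridor: states 0..4, goal (e.g. a doorway) at state 4.
# Hypothetical stand-in for learning navigation recommendations.
N, GOAL = 5, 4
ACTIONS = [-1, +1]  # recommend "step left" / "step right"

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL  # next state, reward, done

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N)]
alpha, gamma, eps = 0.5, 0.9, 0.3
for _ in range(500):
    s = random.randrange(N - 1)                      # random non-goal start
    for _ in range(20):
        if random.random() < eps:
            i = random.randrange(2)                  # explore
        else:
            i = max((0, 1), key=lambda k: Q[s][k])   # exploit
        s2, r, done = step(s, ACTIONS[i])
        Q[s][i] += alpha * (r + gamma * max(Q[s2]) - Q[s][i])
        s = s2
        if done:
            break

# Greedy policy: every non-goal state should recommend moving right (+1).
policy = [ACTIONS[max((0, 1), key=lambda k: Q[s][k])] for s in range(N)]
print(policy[:N - 1])
```

In the actual project the tabular Q would be replaced by deep neural networks operating on the asynchronous visual representation, but the learning loop (act, observe reward, update a value estimate) is the same.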

In order to carry out experiments, we will use a prosthetic vision simulator (SPV) already developed in the team (Vergnieux et al. 2016), which will allow us to observe the effect of different restitutions on the user's behaviour.

Figure 2: Example of a prosthetic vision simulator (SPV) with a synchronous camera, in the particular case of navigation in a virtual-reality environment. a) and e) Scenes from the virtual environment. b) to d) and f) to h) Different types of navigation-information restitution based on different machine-vision algorithms (from Vergnieux et al. 2017).
As part of the INCA project, we are offering a PhD grant focusing on multimodal data fusion, decision support, and recommendation based on contextual information. This call is open to graduates in Computer Science, Cognitive Sciences, or Engineering.
