4 (four) research grants (BI) for Doctoral Students under the project with the reference UID/EEA/00048

Job Type: Full-Time
Deadline: 15 Dec 2021

CALL FOR 4 (FOUR) RESEARCH GRANTS UNDER THE PROJECT WITH THE REFERENCE UID/EEA/00048

Applications are open from 25 November until 15 December 2021

Ref.ª BD4/2021

The Instituto de Sistemas e Robótica (ISR) – Universidade de Coimbra opens a call for 4 (four) research grants (BI) under the project with the reference UID/EEA/00048, financed by national funds through the FCT / MCTES (PIDDAC) under the following conditions:

Scientific Area: Electrical and Computer Engineering, Electronics Engineering, Computer Science, Informatics Engineering, Biomedical Engineering, Robotics, Applied Machine Learning, and similar areas.

Application profile: Candidates who are enrolled, or who meet the necessary conditions to enrol, in a Doctoral Programme and who intend to develop research activities leading to the academic degree of Doctor at the Institute of Systems and Robotics – University of Coimbra or at its associated host institutions. Candidates must have a good command of written and spoken English.

NOTE: Candidates with foreign academic diplomas must present proof of the equivalence/recognition of such diplomas and of the conversion of the respective final grades to the Portuguese classification scale (whenever a final classification is attributed with the foreign diploma), issued by the Directorate-General for Higher Education or by a Portuguese public higher education institution (regulated by Decree-Law no. 341/2007, of 12 October), or, alternatively, present the document of equivalence/recognition of such foreign qualifications to the corresponding Portuguese qualifications, issued by a Portuguese public higher education institution (regulated by Decree-Law no. 283/83, of 21 June).

Workplan and Goals: The selected candidates will develop tasks in different areas of expertise of ISR, according to the following workplans:

Ref.ª BD4/2021/Topic 1 - Cooperative multimodal perception in outdoor mobile robot teams – a deep learning approach

Supervisors: Prof. Rui P. Rocha, Prof. Jorge Batista, Prof. Cristiano Premebida.

Description: Robotic perception in outdoor, complex, unstructured environments poses several scientific problems that are still unsolved or have been only partially addressed in the state of the art. Cooperative multimodal robotic perception requires multiple robots to autonomously extract semantic information from multimodal data collected over wide areas, which is used to combine percepts from individual robots and build a globally consistent probabilistic semantic map. While incrementally and collaboratively building a perception model, robots must perform cooperative active perception, using the current model to autonomously decide, under uncertainty, the next target viewpoints, either to explore unknown areas or to update information about previously explored areas. Cooperative active perception also requires robots to coordinate their individual actions to optimize team performance in the collective task.

The main goal of this PhD workplan is to tackle cooperative active perception in unmanned terrestrial or aerial robots operating in large outdoor areas (e.g. forests and agricultural areas), which involves the multimodality provided by combining 3D LiDAR sensors with different vision sensors, including RGB, stereo and multispectral cameras. A deep learning approach will be used to extract the domain-specific information required to perform cooperative perception. Forestry inventory and precision agriculture are examples of envisioned testbeds for the PhD project. A highly motivated PhD candidate is sought to develop novel scientific work within this research workplan, covering the following topics: cooperative active perception; multisensory, multimodal perception; deep learning and Bayesian inference.

Ref.ª BD4/2021/Topic 2 - Integration of 6D object pose estimation in a Dynamic P300-based brain-computer interface (BCI)

Supervisors: Prof. Urbano Nunes, Prof. João Barreto, Prof. Rui Araújo

Description: Considering a human-centred mobile robotic system (e.g., a robotic wheelchair) with human-machine collaborative control, our goal is to explore the use of brain-computer interfaces (BCIs) to guide the mobile robot in performing complex tasks. A computer-vision system mounted on the mobile robot is expected to recognize and track relevant objects/passages while the robot is moving. These relevant objects will feed into a dynamic P300-based BCI, by superimposing/highlighting the targets on a computer screen (showing the scene) or projecting the targets using augmented-reality glasses. Machine learning methods (deep learning and deep reinforcement learning) will be applied to the processing of multimodal data in the perceptual components of the project. The workplan is divided into the following tasks:

  • Research on 6D object pose estimation from visual and/or depth sensors

    6D object pose estimation is a common task in multiple real-world applications, such as augmented reality, autonomous navigation, robot manipulation, and surgical navigation.

    The existing approaches can be broadly divided into two groups: Geometric approaches and Data-driven approaches. Currently, data-driven approaches using RGB-D are the top-performers in public 6D object pose estimation challenges. Also, RGB-only strategies are showing very promising results by strongly leveraging data augmentation techniques.

    Based on the team's extensive experience in both geometric and data-driven approaches, a hybrid approach will be pursued in this task. The rationale is that deep learning is clearly the top performer for object segmentation, point matching and overall robustness, while geometric approaches tend to be more accurate (finer-grained optimization) and more efficient.

  • Research on Dynamic P300-based BCI

    A framework will be researched in which relevant objects (possible target destinations) feed into a dynamic P300-based BCI, by superimposing/highlighting the targets on a computer screen (showing the scene) or projecting the targets using augmented-reality glasses (e.g. HoloLens). Using 6D object pose estimation (task 1), the computer-vision system has to compute and/or suggest the best trajectories/approaches to reach a given target (e.g., the best way to dock at a desk, the best position to watch TV, the best orientation to approach a door). The suggestions of the computer-vision system will be evaluated in real time through the automatic recognition of error-related potentials (ErrPs), which are expected to be triggered when wrong movements occur or wrong suggestions are proposed. This is expected to increase the overall system reliability as well as the naturalness of the human-machine interaction. The above scenario can be extended to robotic manipulators for grasping tasks.

  • Experimental exploitation and assessment involving end-users

    The developed approaches will be exploited and evaluated both in simulation and in real scenarios involving end-users.

Ref.ª BD4/2021/Topic 3 - Socio-Spatial Assistance for Human-Robot Interaction

Supervisors: Prof. Helder Araujo, Prof. Lino Marques, Prof. Rui Cortesao

Description: The goal is to contribute to the development of semi-autonomous robot behaviors and user interfaces that exploit the spatial and social contingencies of the local environment of a remotely controlled robot. The objectives include the development of a remote pilot (a robot located in a different location that interacts with people) and the provision of feedback about the robot's local environment, together with high-level command primitives to control the robot's movements. Predictive methods will be developed to mitigate delays and latencies in the communication network and to make teleoperation more natural and smooth for the pilot user. Methods for social signal interpretation, eye-gaze analysis and models of human-robot interaction will be used to increase the robot's social presence and improve interaction with the local user. The work also considers distributed computational implementations in off-board servers (or the cloud) to combine high-performance methods with the limited computational capabilities of commercial telepresence robots.

The workplan is divided into the following tasks:

  • Development of a remote pilot

    The goal of this task is the development of a remote pilot: a physical robot that is located in a different, remote location and that interacts with people. The physical robot to be used is a commercial robot and, in this task, all the elements required to control the robot's movements will be developed. That includes high-level command primitives as well as sensor control and data communication.

  • Methods for social interaction

    The remotely located robot has to interact with people. In this task, methods for social signal interpretation will be developed. Several sensing modalities will be used, namely sound and vision. The methods to be developed include integrated approaches for eye-gaze and voice analysis, as well as global posture and gesture interpretation. The approaches will be grounded in high-level models of human-robot interaction.

  • Experimental validation and end-user studies

    The approaches developed in the previous tasks will be validated both in simulation and in real scenarios. The real scenarios considered involve telepresence in a healthcare context. User needs will be surveyed and the robot platforms deployed remotely. Trials will be conducted according to a protocol previously prepared and approved. The final goal of this task is to demonstrate the usability and efficacy of the proposed system in challenging case studies.

Ref.ª BD4/2021/Topic 4 - Self-powered autonomous biomonitoring wearable systems

Supervisors: Prof. Aníbal T. de Almeida, Prof. Mahmoud Tavakoli, Prof. Paulo Peixoto

Description: This proposal aims at the fabrication of self-powered wearable biomonitoring systems, using novel ultrathin double-layer supercapacitors and integrating IoT systems, to support distributed health care. The proposed work builds on early results with a promising new material that combines nanoparticles of eutectic gallium-indium liquid metal with graphene oxide.

The developed supercapacitors will be combined with printed stretchable antennas for far-field energy harvesting. In this way we intend to move toward self-powered IoT systems, either fixed or mobile. The supercapacitor allows the system to operate for a certain amount of time even when it cannot harvest energy.

The integration of energy harvesting technologies with the Internet of Things (IoT) is becoming more and more common. IoT edge devices, i.e. end-user equipment connected to the network that interacts with other networks and devices, may be located in remote places where mains power is not available or battery replacement is not feasible. Energy harvesting technologies can reduce or eliminate the need for battery replacement in edge devices by recharging supercapacitors or rechargeable batteries in the field. Furthermore, autonomous or rechargeable sensors are environmentally friendly, as they do not contain metals or chemicals that are harmful to the environment.

A low-power IoT system will be implemented as a demonstrator of the capabilities of the developed energy storage technology. This demonstrator will explore the use of Low Power Wide Area (LPWA) networks to allow IoT devices to connect and communicate efficiently and effectively over large distances with minimal power cost.

Regime: The award of the grant does not establish any legal employment relationship; it is undertaken under a regime of exclusive dedication, and the fellow is covered by the Fellow Statute of the Instituto de Sistemas e Robótica – Universidade de Coimbra, in accordance with the Research Grant Statute and the Regulation of Research Grants of the Fundação para a Ciência e a Tecnologia, I.P., both in their current wording.

Location: The work will take place at the Instituto de Sistemas e Robótica – Universidade de Coimbra.

Duration: The grants have a duration of 12 months, renewable up to a maximum of 48 months depending on the grantee's performance, on an exclusive-dedication basis, in accordance with FCT's regulation for advanced human resources training.

Financial conditions: The grant amount is 1.104,64€ per month, corresponding to the monthly stipend stipulated in the FCT table (https://www.fct.pt/apoios/bolsas/valores.phtml.en), plus voluntary social insurance (Seguro Social Voluntário, first-level contributions), if the candidate opts for it, and personal accident insurance. Payment will be made by bank transfer. FCT may also cover tuition fees.

Selection method: Curricular evaluation (50%) and an interview (50%) with the best 8 applicants from the curricular evaluation phase.

Selection and attribution criteria: The curricular evaluation (50%) will consider the following criteria: global merit of the curriculum vitae (40%) and adequacy of the candidate's profile to the project (60%).

The interview (50%), held with the best 3 candidates from the curricular evaluation, will assess the candidate's degree of motivation and commitment to the work to be developed.
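For illustration only (the call does not state the scale of the individual components, so assume each is scored on the same 0 to 20 scale used in the final evaluation), the final classification F of a candidate reaching the interview phase would combine as:

    F = 0.5 × (0.4 × CV + 0.6 × Adequacy) + 0.5 × Interview

For example, a candidate scoring CV = 16, Adequacy = 17 and Interview = 15 would obtain F = 0.5 × (6.4 + 10.2) + 0.5 × 15 = 15.8 points, above the 14-point exclusion threshold stated below.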

NOTE: Candidates who do not obtain a minimum classification of 14 points on a scale from 0 to 20 (i.e. 3.5 on a scale of 5) in the final evaluation are excluded from the selection process. The jury reserves the right not to award the grants if no candidate with the required profile applies.

Jury: The three supervisors of each PhD workplan.

Publication of the results: The evaluation results will be announced within 90 days after the application submission deadline, by notifying the applicants via email. After the announcement of the results, candidates are considered automatically notified and may, if they wish, comment on the results during a preliminary hearing period of 10 days after that date. After this, the selected candidates must declare their acceptance in writing. Unless a justification worthy of consideration is presented, if the declaration is not submitted within the referred period, the candidate is considered to have waived the grant. In case of resignation or withdrawal of a selected candidate, the next candidate with the highest evaluation score will be notified immediately.

Application submission: Applications shall be sent by email to [email protected], specifying the respective reference in the subject line. The application must include the following documents: CV, certificate of qualifications including the final grades obtained in the undergraduate course units, and a motivation letter. Recommendation letters are valued but not mandatory. The references are:

  • Ref.ª BD4/2021/Topic 1 - Cooperative multimodal perception in outdoor mobile robot teams – a deep learning approach;
  • Ref.ª BD4/2021/Topic 2 - Integration of 6D object pose estimation in a Dynamic P300-based brain-computer interface (BCI);
  • Ref.ª BD4/2021/Topic 3 - Socio-Spatial Assistance for Human-Robot Interaction;
  • Ref.ª BD4/2021/Topic 4 - Self-powered autonomous biomonitoring wearable systems.

Submission of applications: The application process is open from 25 November until 15 December 2021.


