-------- PhD in robotics/biomechanics (M/F) --------

Location: Toulouse, MIDI PYRENEES
Job Type: FullTime
Deadline: 17 Feb 2024

28 Jan 2024
Job Information
Organisation/Company

CNRS
Department

Laboratoire d'analyse et d'architecture des systèmes
Research Field

Physics
Researcher Profile

First Stage Researcher (R1)
Country

France
Application Deadline

17 Feb 2024 - 23:59 (UTC)
Type of Contract

Temporary
Job Status

Full-time
Hours Per Week

35
Offer Starting Date

18 Mar 2024
Is the job funded through the EU Research Framework Programme?

Not funded by an EU programme
Is the Job related to staff position within a Research Infrastructure?

No

Offer Description

PhD thesis in the Gepetto team at LAAS-CNRS, which specializes in anthropomorphic motion and the control of biped robots.

Title: Model-free evaluation of worker musculoskeletal disorders from affordable sensors

Context:
Today, 80% of industrial diseases reported in factories are related to Worker Musculoskeletal Disorders (WMSD). This problem is well identified thanks to norms such as ISO 11226, which provides guidelines on workers' gestures for assessing physical workload. Ergonomic indicators are usually determined by professional ergonomists through a simple visual and subjective assessment. Ergonomic scores based on popular scales such as RULA or REBA can therefore vary considerably depending on the ergonomist analyzing the worker's motion. Moreover, with the development of collaborative robots, classical visual evaluation is no longer well suited. One way to fill in these scales automatically is to use a motion capture system that can derive joint angles, velocities, torques, and other biomechanical variables of interest in real time. We therefore propose, in this thesis, to develop a new, truly affordable, easy-to-use, and accurate multi-modal human motion analysis system that provides the worker's kinodynamic state and an ergonomic score using a minimal number of sensors.
The thesis is part of the new ANR HERCULES project, which aims to develop a new human-centered collaborative robot controller.
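As a concrete illustration of the scales mentioned above, the upper-arm component of a RULA score can be derived from the shoulder flexion angle with simple thresholds. This is a simplified sketch following the angle bands of the published RULA worksheet; the posture modifiers (abduction, shoulder raised, arm supported) and the other body regions are omitted:

```python
def rula_upper_arm_score(flexion_deg: float) -> int:
    """Upper-arm score of the RULA worksheet from the shoulder flexion
    angle in degrees (positive = flexion, negative = extension).
    Posture modifiers (abduction, shoulder raised, arm supported)
    are omitted in this simplified sketch."""
    if -20.0 <= flexion_deg <= 20.0:
        return 1                # near-neutral posture
    if flexion_deg < -20.0 or flexion_deg <= 45.0:
        return 2                # extension > 20 deg, or flexion 20-45 deg
    if flexion_deg <= 90.0:
        return 3                # flexion 45-90 deg
    return 4                    # flexion > 90 deg

print(rula_upper_arm_score(60.0))   # prints 3
```

Feeding such scoring functions with joint angles estimated by a motion capture system, instead of a visual guess, is precisely what removes the inter-ergonomist variability.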
State-of-the-art:
To estimate these quantities, accurate but expensive stereophotogrammetric systems (SS) and force plates are considered the gold standard in human motion analysis. Unfortunately, since they require special settings and expert skills, they are not usable outside laboratories. The emergence of affordable sensors, such as IMUs, RGB-D cameras, markerless skeleton tracking algorithms for RGB camera(s), and embedded insoles for force measurement, has recently revived research on human motion analysis outside laboratories. Yet each of these sensors has advantages and drawbacks, summarized below.
IMUs can estimate the position and orientation of each link independently, ignoring human kinematics. Unfortunately, drift in their numerical integration leads to large pose estimation errors over time. Several, often costly, commercial IMU systems are offered by companies such as Xsens (The Netherlands). Their accuracy for 3D joint angle estimation, when compared against reference SS, is still debated in the biomechanics community, with several recent studies reporting errors of up to 20 deg (Bouvier, 2015). Besides, IMUs require magnetometers, which are unreliable in a factory owing to their sensitivity to ferromagnetic disturbances (Wenk, 2015). Last but not least, by requiring at least one IMU per investigated segment, they discourage long-term worker acceptance.
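The drift mentioned above is easy to reproduce: double-integrating even a tiny constant accelerometer bias (the bias value below is illustrative, not a measured figure) yields a position error that grows quadratically with time:

```python
import numpy as np

# A small constant accelerometer bias, double-integrated, illustrates
# IMU drift: the position error grows quadratically with time.
# (Bias and sampling values are illustrative, not measured figures.)
dt, duration = 0.01, 60.0        # 100 Hz samples over one minute
bias = 0.05                      # residual accelerometer bias, m/s^2
t = np.arange(0.0, duration, dt)
accel = np.full_like(t, bias)    # true acceleration is zero; only bias remains

vel = np.cumsum(accel) * dt      # first integration: velocity error
pos = np.cumsum(vel) * dt        # second integration: position error

# Closed form: 0.5 * bias * t^2, i.e. about 90 m after one minute.
print(f"position error after {duration:.0f} s: {pos[-1]:.1f} m")
```

This is why standalone inertial integration is unusable over a full work shift without some drift-free correction.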
Markerless approaches are still considered too inaccurate for real biomechanical applications. Our group was among the first to use markerless data within a biomechanically constrained adaptive filter that outputs the inverse kinematics solution in real time (Colombel, 2020). Yet the accuracy and robustness achieved were insufficient for real applications. Researchers then took a step back to improve offline markerless processing. The latest studies, using data recorded under ideal conditions with several expensive cameras and a biomechanical model to reject outliers, report errors between 3 and 20 deg (Pagnon, 2022). These studies often require a complex calibration phase and do not run in real time. Besides, purely visual markerless skeleton tracking algorithms fail or slow down when several individuals are in close interaction or in partially occluded environments. It is usually claimed that these errors stem from differences in the calibration of the models used by the markerless pipeline and the SS; indeed, the model used in most markerless algorithms cannot be properly calibrated. We believe that part of this error is also due to the fact that most markerless skeleton tracking algorithms estimate the subject's pose from a single sample (image) rather than from a sequence of samples. Li et al. (Li, 2019) use optimization to determine, from RGB images, both the kinematics and the dynamics of subjects performing parkour tasks. Their original approach tracks the joint center positions output by markerless algorithms while simultaneously minimizing dynamic biomechanical cost functions related to interaction forces, and incorporates temporal relations between the components of the state vector as constraints. It yields a reliable, so-called "dynamically consistent inverse kinematics".
Its drawbacks are that it reports only joint center position estimates, relies on a very simplified biomechanical model, and does not run in real time.
When dynamics are involved, it is crucial to have subject-specific dynamic models (Bonnet, 2016). Body segment inertial parameters are often estimated from anthropometric tables, although there is a consensus that these are inaccurate for atypical populations (Bonnet, 2016). Fortunately, the last decade has seen the development of dynamic identification methods for floating-base systems, inspired by the humanoid robotics field (Bonnet, 2016).
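For context, anthropometric tables are typically applied by scaling segment masses as fixed fractions of total body mass. The fractions below are rough illustrative values, not a published table; the fixed-ratio assumption they embody is exactly why such tables misfit atypical morphologies:

```python
# Scaling segment masses as fixed fractions of total body mass, as
# anthropometric tables typically do. Fractions are rough illustrative
# values, not a published table.
SEGMENT_MASS_FRACTION = {
    "thigh": 0.14, "shank": 0.043, "foot": 0.014,
    "upper_arm": 0.027, "forearm": 0.016, "hand": 0.006,
}

def segment_mass(segment: str, body_mass_kg: float) -> float:
    """Return the estimated mass (kg) of a body segment."""
    return SEGMENT_MASS_FRACTION[segment] * body_mass_kg

print(f"{segment_mass('thigh', 70.0):.1f} kg")  # 9.8 kg for a 70 kg subject
```

Subject-specific identification replaces these fixed ratios with parameters estimated from the subject's own motion data.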
Recent studies (Bao, 2022) suggested that merging RGB-D and IMU data could improve joint angle estimates. Our group has proposed an alternative solution by tracking known AR markers, positioned on top of each IMU, for gait analysis in a controlled scenario (Mallat, 2021). Using visual and IMU data, we were able to reduce the total number of embedded sensors to one per kinematic chain. Since we were using an extended Kalman filter, temporal relations between state vector elements were taken into account but not constrained to be dynamically consistent; as a result, the inverse dynamics were not very accurate. Moreover, visual occlusions and kinematic constraints were difficult to handle and could lead to non-physiological solutions.
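The complementarity of the two modalities can be seen in a toy scalar Kalman filter (a minimal sketch with made-up noise values, not the actual estimator of the cited works): integrating a biased IMU-like rate drifts on its own, while noisy but drift-free vision-like measurements keep the fused estimate bounded:

```python
import numpy as np

# Toy scalar Kalman filter: an IMU-like prediction with a constant rate
# bias would drift unboundedly, but noisy, drift-free vision-like
# measurements keep the fused estimate bounded. All noise values are
# made up for illustration.
rng = np.random.default_rng(0)
dt = 0.01
q, r = 1e-4, 0.05 ** 2        # process / measurement noise variances
bias = 0.2                    # rate bias corrupting every prediction
true_x = 0.0                  # true state is constant

x_hat, p = 0.0, 1.0           # state estimate and its variance
for _ in range(2000):
    x_hat += bias * dt                    # predict: integrate the biased rate
    p += q
    z = true_x + rng.normal(0.0, 0.05)    # noisy drift-free measurement
    k = p / (p + r)                       # Kalman gain
    x_hat += k * (z - x_hat)              # correct
    p *= 1.0 - k

open_loop_drift = bias * 2000 * dt        # 4.0 without any correction
print(f"fused error: {abs(x_hat - true_x):.3f} vs open-loop drift {open_loop_drift:.1f}")
```

Note that this toy filter, like the extended Kalman filter discussed above, encodes temporal relations between state elements without enforcing dynamic consistency.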

Methodology:
One could erroneously believe that, with the commercial solutions available nowadays, determining the kinodynamic state of a person in real time is a solved research problem. This is not the case. First, commercial solutions require at least one sensor per segment, which is too invasive for real industrial applications that can only rely on simple and affordable equipment. Second, all the previously mentioned methods require a cumbersome weight-tuning process to reflect the measurement quality of each modality. Third, the calibration of the biomechanical models is too often handled improperly.
Thus, in this thesis we propose to develop a deep learning algorithm that takes as input a minimal amount of data from affordable sensors, together with model calibration parameters (also acquired with affordable sensors), to obtain an accurate kinodynamic state of a worker on the factory floor.
A dataset involving real working environments will be created using several cameras and embedded sensors, and used to train and cross-validate our algorithm. Thanks to the accurate kinodynamic state, the most popular ergonomic scales will be filled in automatically, and we aim to provide a biomechanical score to the worker/ergonomist in real time. A first rough work plan is as follows:
M0-M12: Work in simulation to assess the minimal number of sensors required to accurately estimate the joint kinodynamic state of a worker performing HERCULES industrial tasks, and to develop the deep learning architecture.
M6-M12: Develop a new synchronous sensor network (embedded visual-inertial measurement units, RGB-D cameras) that can be used on the actual factory floor.
M12-M16: Run experiments with at least 10 subjects performing industrial tasks and apply the deep learning algorithm to the collected data.
M16-M20: Benchmark against state-of-the-art inverse kinematics methodologies.
M24-M30: Experimentation on the factory floor at AIRBUS and automatic filling of ergonomic scales. Biomechanical analysis.

M26: Improve the design of the proposed motion capture system. Hopefully submit a patent.
Of course, the prototype of the new system will have to be iteratively redesigned to best fit workers' needs and their feedback on its usability.
References:
Bouvier B. et al. (2015), Upper Limb Kinematics Using Inertial and Magnetic Sensors: Comparison of Sensor-to-Segment Calibrations, Sensors.
Wenk F. et al. (2015), Posture from motion, IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS).
Colombel J., Bonnet V. et al. (2020), Physically Consistent Whole-Body Kinematics Assessment Based on an RGB-D Sensor. Application to Simple Rehabilitation Exercises, Sensors.
Pagnon et al. (2022), Pose2Sim: An open-source Python package for multiview markerless kinematics, The Journal of Open Source Software.
Li Z., Mansard N. et al. (2019), Estimating 3D motion and forces of person-object interactions from monocular video, IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
Bonnet V., Fraisse P., et al. (2016), Optimal exciting dance for identifying inertial parameters of an anthropomorphic structure, IEEE Trans. on Robotics.
Bao Y. et al. (2022), FusePose: IMU-Vision Sensor Fusion in Kinematic Space for Parametric Human Pose Estimation, IEEE Trans. on Multimedia.
Mallat R., Bonnet V., et al. (2021), Sparse Visual-IMU placement for gait kinematics assessment, IEEE Trans. on Neural Systems and Rehabilitation Engineering.


Requirements
Research Field
Physics
Education Level
Master Degree or equivalent

Languages
FRENCH
Level
Basic

Research Field
Physics
Years of Research Experience
None

Additional Information
Website for additional job details

https://emploi.cnrs.fr/Offres/Doctorant/UPR8001-VINBON-001/Default.aspx

Work Location(s)
Number of offers available
1
Company/Institute
Laboratoire d'analyse et d'architecture des systèmes
Country
France
City
TOULOUSE
Geofield


Where to apply
Website

https://emploi.cnrs.fr/Candidat/Offre/UPR8001-VINBON-001/Candidater.aspx

Contact
City

TOULOUSE
Website

http://www.laas.fr

STATUS: EXPIRED