Research Associate

Location: South Kensington, England
Job Type: Full Time
Deadline: 21 May 2020

Job summary

Our vision is to establish the computational foundations for an “AI clinician” (Komorowski and Faisal, 2018, Nature Medicine). While a “fully autonomous doctor” is far more ambitious (and perhaps undesirable) than self-driving cars (Topol, 2019, Nature Medicine), our goal is to ground our work in the established foundations of clinical decision recommendation systems and to evaluate our developments in a meaningful clinical setting.

This vision in turn requires the development of novel RL techniques, which are at the heart of this fellowship and are outlined in the subsequent sections. We have previously shown that RL and Bayesian Optimization can be applied to learn optimal interventions in closed loop. We work on methods that allow us to train RL agents non-interactively (off-policy policy estimation, OPPE) on historical data of medical interventions and the patient responses to them. We want to build on this groundwork, drawing on causal inference and deep- and kernel-based machine learning, to obtain bounds and novel estimation methods for this emerging application of RL (Liu et al, 2018, NeurIPS; Parbhoo et al, 2018, PLOS ONE; Peng et al, 2018, AMIA).
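
As an illustration of the off-policy setting described above, the sketch below shows a minimal per-decision importance-sampling estimator for OPPE on logged treatment trajectories. It is a generic textbook construction, not the group's own method; the trajectory format, the behaviour-policy probabilities and the discount factor are assumptions made purely for this example.

    import numpy as np

    def per_decision_is_estimate(trajectories, target_policy, gamma=0.99):
        """Off-policy value estimate of a target policy from logged data.

        Each trajectory is a list of (state, action, reward, behaviour_prob)
        tuples, where behaviour_prob is the probability the logging
        (clinician) policy assigned to the recorded action, and
        target_policy(state, action) returns the evaluation policy's
        probability of that action.
        """
        values = []
        for traj in trajectories:
            rho = 1.0    # cumulative importance weight along the trajectory
            value = 0.0
            for t, (state, action, reward, behaviour_prob) in enumerate(traj):
                rho *= target_policy(state, action) / behaviour_prob
                value += (gamma ** t) * rho * reward
            values.append(value)
        return float(np.mean(values))

Plain importance sampling of this kind is unbiased but high-variance over long treatment trajectories, which is why weighted (self-normalised) and doubly robust variants, and the bounds referred to above, matter in practice.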

This led to our key work on learning optimal medical interventions from routine clinical data in critical care (Komorowski and Faisal, 2018, Nature Medicine). Our healthcare application thus provided the breakthrough use case for OPPE: most RL research focuses on interactive learning in, for example, robotics, whereas OPPE provides a key pathway to harnessing the knowledge contained in the myriad of healthcare treatment records.


Duties and responsibilities

To achieve the vision of an AI clinician system in the long term, there are four critical objectives that require building on the foundations of AI, and we wish the Research Associate to engage in at least two of these:

  • Creating efficient RL models of partially observable systems from historical data to enable high-quality predictions
  • Unifying RL and causal inference for intervention-based AI
  • Developing Explainable RL to meaningfully interact with experts (clinicians) and keep learning from these interactions      
  • Evaluating the AI framework for intervention-based RL interfaced with hospital systems.
    To take our approach to RL for intervention forward safely and efficiently towards practical deployment in healthcare, we crucially require research on core RL methods.

    The aim of the project is the advancement of Reinforcement Learning methods for clinical intervention; the post offers the unique opportunity to develop core machine learning theory and to evaluate it with clinical end-users as needed.


    Essential requirements

    • A PhD (or equivalent) in an area pertinent to the subject area.

    • Relevant computer science/engineering/mathematics/physics or exceptional bio-science undergraduate background

    • Experience in one or more of the following fields:

      • Reinforcement Learning

      • Machine Learning

      • Robotics

      • Computational Neuroscience

      • Recommender Systems

      • Neural/Behaviour Data Analysis

      • Time Series Analysis

      • Relevant related backgrounds in control systems or Mathematics

    • Thorough understanding of quantitative methods for modelling and/or data-driven analysis



    • Strong analytical, mathematical, programming and machine learning skills.

    Please see job description for full list of essential requirements.


    Further information

    *Candidates who have not yet been officially awarded their PhD will be appointed as Research Assistant within the salary range £35,477 - £38,566 per annum.

    In addition to completing the online application, candidates should attach:

    • A full CV
    • A short statement indicating what you see as the interesting issues relating to the above post and why your expertise is relevant

    For informal enquiries regarding the post, please contact Aldo Faisal: [email protected]

    For queries regarding the application process, contact Jamie Perrins: [email protected]

    For technical issues when applying online, please email [email protected]

    Committed to equality and valuing diversity, we are an Athena SWAN Silver Award winner, a Stonewall Diversity Champion, a Disability Confident Employer and work in partnership with GIRES to promote respect for trans people.

    The College is a proud signatory to the San Francisco Declaration on Research Assessment (DORA), which means that in hiring and promotion decisions, we evaluate applicants on the quality of their work, not the impact factor of the journal in which it is published. For more information, see https://www.imperial.ac.uk/research-and-innovation/about-imperial-research/research-evaluation/

    The College believes that the use of animals in research is vital to improve human and animal health and welfare. Animals may only be used in research programmes where their use is shown to be necessary for developing new treatments and making medical advances. Imperial is committed to ensuring that, in cases where this research is deemed essential, all animals in the College’s care are treated with full respect, and that all staff involved with this work show due consideration at every level. http://www.imperial.ac.uk/research-and-innovation/about-imperial-research/research-integrity/animal-research

