Research Associate in Trustworthy Machine Learning for Malware Detection

Updated: 22 days ago
Location: Strand, ENGLAND
Deadline: 14 Apr 2024

Job description

The Department of Informatics is looking to appoint a Research Associate in Trustworthy Machine Learning for Malware Detection, to work on the EPSRC project XAdv: Robust Explanations for Malware Detection, at the intersection of AI and Systems Security, led by Dr. Fabio Pierazzi (PI).


Project Background

Malware (short for malicious software) refers to any software that performs malicious activities, such as stealing information (e.g., spyware) or damaging systems (e.g., ransomware). Malware authors constantly update their attack strategies to evade detection by antivirus systems, and automatically generate multiple variants of the same malware that are harder to recognize than the original. Traditional malware detection methods relying on manually defined patterns (e.g., sequences of bytes) are time-consuming and error-prone. Hence, academic and industry researchers have started exploring how Machine Learning (ML) can help in detecting new, unseen malware types.

In this context, explaining ML decisions is fundamental for security analysts to verify the correctness of a given decision and to develop patches and remediations faster. However, it has been shown that attackers can induce arbitrary, incorrect explanations in ML systems by carefully modifying only a few bytes of their malware.


Project Objectives

The EPSRC project XAdv (X for explanation, Adv for adversarial robustness) aims to design robust explanations for malware detection, i.e., explanations of model decisions that are easy for security analysts to understand and visualize (to support faster verification of maliciousness and development of patches), and that remain trustworthy and reliable even in the presence of malware evolution over time and evasive malware authors.

Robustness of explanations will be evaluated from two main perspectives: concept drift (i.e., malware evolution over time), and adversarial ML (i.e., ML-aware attackers, who carefully craft malicious samples to evade detection systems).
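To illustrate the concept-drift perspective, a common precaution in this research area is time-aware evaluation: a detector is trained only on samples first seen before a cutoff date and tested on later ones, so that performance reflects drift rather than lucky temporal leakage. The following is a minimal sketch; the dataset, sample names, and cutoff are hypothetical and not part of the posting.

```python
from datetime import date

# Hypothetical toy dataset: (first-seen date, sample id, label) triples,
# where label 1 marks malware and 0 marks goodware.
samples = [
    (date(2021, 1, 10), "app_a", 0),
    (date(2021, 3, 5),  "app_b", 1),
    (date(2021, 6, 20), "app_c", 0),
    (date(2022, 2, 1),  "app_d", 1),
    (date(2022, 7, 15), "app_e", 0),
]

def time_aware_split(samples, cutoff):
    """Train only on samples first seen strictly before the cutoff date,
    and test on samples seen on or after it, so the evaluation respects
    temporal ordering (the detector never 'sees the future')."""
    train = [s for s in samples if s[0] < cutoff]
    test = [s for s in samples if s[0] >= cutoff]
    return train, test

train, test = time_aware_split(samples, date(2022, 1, 1))
# train contains the three 2021 samples; test contains the two 2022 samples.
```

The same split discipline can then be applied when measuring how explanation quality degrades on the post-cutoff samples.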

Moreover, this project will explore how robust explanations can be used to automatically adapt ML-based malware detection models to new threats over time (e.g., through novel active learning strategies), as well as how domain knowledge from security analysts' feedback on robust explanations can be integrated to improve detection accuracy.


The Role

The candidate will be responsible for leading the co-design and development of novel solutions for robust explanations for malware detection, in the presence of concept drift and adversarial attacks.

They will also be expected to lead top security/ML publications (e.g., USENIX Security, S&P, TOPS, ICML, ICLR, AISec, DLSP) under the guidance of Dr. Fabio Pierazzi and the advisory team as required. They will develop prototypes and code frameworks (e.g., in Python) for large-scale experimental evaluations of their research ideas, and also co-lead user studies and methodologies to verify the accuracy of the explanation methods.

The candidate will become part of the Cybersecurity Group (CYS) in Informatics at King's (https://www.kcl.ac.uk/research/cys), and work most closely with the team led by Dr. Fabio Pierazzi (https://fabio.pierazzi.com/team/), currently consisting of 6 Ph.D. students. The wider team of collaborators on this project, including industrial partners such as NCC Group and Avast (now Gen Digital), and universities such as UIUC, TU Berlin, and the University of Cagliari, will ensure excellent development opportunities throughout the project. The researcher will be able to support the supervision of PhD, MSc and BSc students associated with the project, to increase their competitiveness for future academic and industry roles.

For further information about the Department of Informatics at King's, please see https://nms.kcl.ac.uk/luc.moreau/informatics/overview.pdf.

This post will be offered on a fixed-term contract for 18 months*, not exceeding 30th September 2026, which is the end date of the grant.

Anticipated June/July 2024 start

This is a full-time post – 100% full-time equivalent

* There is potential to extend, depending on funding availability and the grant end date (30th September 2026).


Key responsibilities
  • Work closely with the project team to ensure that the aims and objectives of the project are achieved in a timely and effective manner.   
  • Conduct evaluation work of established concepts and prototypes.  
  • Lead and write papers for publication in top security workshops, conferences, journals (e.g., S&P, USENIX Sec, CCS, AISec, DLSP, TOPS, TDSC). 
  • Participate in relevant events within the institution or externally, to build contacts to facilitate the exchange of information and advance thinking.   
  • Support events, conferences, and workshops run by the project to develop the project outputs and research agenda.   
  • Contribute to the development of further research proposals.   

The above list of responsibilities may not be exhaustive, and the post holder will be required to undertake such tasks and responsibilities as may reasonably be expected within the scope and grading of the post.


Skills, knowledge, and experience  

Essential criteria

1. A completed (or close-to-completion) Ph.D. in Computer Science or a related field

2. Effective communication: written and verbal

3. Excellent organization and team-working skills

4. Publication track record in relevant security/ML venues (e.g., S&P, USENIX, CCS, DLSP, AISec, ACSAC, RAID, DIMVA, ICML, ICLR, NeurIPS, SaTML, TOPS, TDSC)

5. Research experience in at least one of the following areas: machine learning-based malware detection, concept drift, adversarial machine learning

Desirable criteria

1. Ability to work independently

2. Willingness and ability to collaborate

3. Research experience in explainability of AI/ML model decisions, and/or in designing and conducting user studies

*Please note that this is a PhD level role but candidates who have submitted their thesis and are awaiting award of their PhDs will be considered. In these circumstances the appointment will be made at Grade 5, spine point 30 (£42,099 per annum, including London Weighting Allowance) with the title of Research Assistant. Upon confirmation of the award of the PhD, the job title will become Research Associate and the salary will increase to Grade 6.


Further information

Application Details

Applicants should explain in their cover letter how they plan to address the research challenges posed in the XAdv EPSRC project, and how their profile and expertise fit the scope of the project and the essential/desirable criteria.

Shortlisted candidates will be invited to online interviews, where they will give a short research presentation followed by a Q&A with the panel. Detailed instructions will be shared with shortlisted candidates (tentatively) by April 19. Interviews will (tentatively) be held between May 1 and May 3.
