-
Efficient and Robust Alignment of Large Language Models
-
Primary supervisor: Dr Teodora Gliga ([email protected]); secondary supervisor: Prof Larissa Samuelson. A fully funded PhD position is available with the Baby Language and Conceptual Knowledge Study
-
School of English Literature, Language and Linguistics ([email protected]), or Dr Kirsten MacLeod, Head of the English Literature Subject Group ([email protected]). We ask
-
led by Dr. Tae-Hee Choi. The research focuses on creating an effective, locally meaningful translanguaging pedagogy model using English as a medium to empower both educators and students from minority
-
DoS: Dr Eve Kelland ([email protected]); 2nd Supervisor: Dr Richard Hosking ([email protected]); 3rd Supervisor: Dr Holly Stephenson ([email protected]); 4th
-
understanding the lived experiences of vulnerable children and young people with speech, language and communication differences. You will work with Professor Clegg and Dr Sarah Spencer to collate and analyse data
-
directed by Dr Andy Seaman (Cardiff University, Principal Investigator) and Dr Charles Insley (The University of Manchester, Co-Investigator). During the later first millennium AD, the landscape known today
-
Funding amount: £18,622 maintenance grant per annum. Lead supervisor: Dr Amor Abdelkader. Project description: Do you have a passion for developing next-generation batteries? Have you ever thought
-
fixed-term post is to provide replacement teaching while Dr Rosa Vidal Doval is on research leave in the 2024–25 academic year. The successful candidate will contribute to the teaching of Spanish within
-
Towards Responsible and Accessible Large Language Models