Large language models for detecting misinformation and disinformation

Location: Melbourne, Victoria

The proliferation of misinformation and disinformation on online platforms has become a critical societal issue. The rapid spread of false information threatens public discourse, decision-making processes, and democratic institutions. Large language models (LLMs) have shown strong capabilities in natural language understanding and generation. This research aims to harness LLMs to develop advanced computational methods for detecting and mitigating misinformation and disinformation. The specific objectives are:

  • To investigate the effectiveness of large language models in identifying and categorizing different types of misinformation and disinformation.
  • To enhance the capabilities of existing LLMs for more accurate detection through fine-tuning and model adaptation (a minimal fine-tuning sketch follows this list).
  • To explore the development of a scalable and adaptable system for real-time monitoring of online content across diverse platforms.
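As a rough illustration of the second objective, the sketch below fine-tunes a pretrained encoder as a binary misinformation classifier using the Hugging Face transformers library. The base model (roberta-base), file names, column names, and label scheme are illustrative assumptions, not part of the project description.

```python
# Minimal sketch: fine-tuning a pretrained transformer as a binary
# misinformation classifier. Model choice, dataset files, and labels
# below are assumptions for illustration only.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "roberta-base"  # assumed base model; any suitable encoder works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Assumes CSV files with columns "text" (claim or post)
# and "label" (0 = reliable, 1 = misinformation).
dataset = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})

def tokenize(batch):
    # Truncate/pad each post to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="misinfo-classifier",
    per_device_train_batch_size=16,
    num_train_epochs=3,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
)

trainer.train()
trainer.evaluate()
```

In practice, this baseline would be extended with domain-adaptive pretraining, instruction tuning, or multi-class labels for different categories of misinformation, as outlined in the objectives above.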

