Disinformation Applications Lab (Disinfo App Lab, or DAL) is a non-profit initiative based in Gatineau, Québec, Canada. Led by the Digital Innovation Foundation (DIF), it brings together an international research team of 9 professors in 6 countries.
The team focuses on Artificial Intelligence (AI) applications that help detect and monitor a very specific type of disinformation: false allegations against politicians in their governance roles (e.g., corruption, bribery, or nepotism). Fake news, whether text or multimedia, can target public sector projects with the goal of tarnishing an otherwise well-functioning government initiative. The resulting doubt can lead to projects or programs being cancelled, depriving citizens of services and impeding national development.
The diagram below shows the wide scope of actors, information, and technologies involved. We identify 5 areas: (1) a blue team fighting the damage done by disinformation; (2) a red team running disinformation campaigns; (3) the government projects being targeted; (4) the parliaments where the attacks have impact; (5) the public to whom the disinformation is distributed. Cybersecurity agencies and monitoring teams sit “in the middle”.
By taking a “whole campaign” perspective, disinformation monitoring can provide at least 5 major functions: (1) anticipation of “next events” in a chain of fake news, helping blue teams fight more strategically; (2) counter-intelligence about dark web actors, to deter and disarm them; (3) transparency enhancement, by linking public project information with politicians’ actions and formal news; (4) collaboration between political parties and their networks of fact checkers, ensuring no false information is propagated in parliaments or used for decision making; (5) filtering and trusted-source systems to support end users, whether the general public or professionals.
As of 2024, the team is in the process of collecting case study data on several corruption cases, some in which the allegations proved true and others in which they proved false. The team is also developing new ontologies and knowledge graphs (KGs) to model disinformation campaigns and attack patterns, which will then be integrated within graph-augmented Large Language Models (LLMs). Finally, a set of use cases will be developed in collaboration with political communications experts, who will help identify how best to use AI, especially KGs and LLMs, to mitigate the impact of disinformation on their work and to ensure faster and more accurate due diligence when resolving corruption allegations against politicians.
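To make the knowledge-graph idea concrete, the sketch below shows one minimal way a disinformation campaign could be modeled as a set of subject–predicate–object triples and then queried. The entity and relation names (e.g., `campaign_42`, `uses_claim`, `fact_check_status`) are hypothetical illustrations, not DAL's actual ontology, and the triple-pattern query is a simplified stand-in for a real graph query language such as SPARQL or Cypher.

```python
# Hypothetical sketch: a disinformation campaign as a tiny knowledge graph
# of (subject, predicate, object) triples. All names below are invented
# for illustration; they do not reflect DAL's actual ontology.

triples = [
    ("campaign_42", "targets", "bridge_project"),
    ("campaign_42", "uses_claim", "claim_bribery"),
    ("claim_bribery", "alleges_against", "minister_x"),
    ("claim_bribery", "fact_check_status", "false"),
    ("bridge_project", "managed_by", "minister_x"),
]

def query(triples, subject=None, predicate=None, obj=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [
        (s, p, o) for (s, p, o) in triples
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Which claims in the graph have been fact-checked as false?
false_claims = [s for (s, p, o)
                in query(triples, predicate="fact_check_status", obj="false")]
print(false_claims)  # ['claim_bribery']
```

In a graph-augmented LLM pipeline, the results of such queries would typically be serialized into the model's prompt as grounding context, so that answers about an alleged corruption case are constrained by fact-checked graph data rather than by the model's unverified associations.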