The next frontier in the fight against disinformation
Against the backdrop of the UK Online Safety Bill, which lacks any significant provisions explicitly targeting misinformation (the unintentional spread of false news) and disinformation (the deliberate proliferation of false news in service of an agenda), and ahead of Rishi Sunak’s global AI governance summit next week, the fight against disinformation faces its next monumental challenge: generative AI.
ChatGPT and other generative models open the door to misinformation and disinformation like never before. AI makes creating and disseminating fake headlines, images, and voice recordings easier and faster than ever, especially through new ‘content farm’ websites, which allow users to generate huge volumes of emotive fake content on an unprecedented scale.
Misinformation and disinformation pose a clear threat to democracy. Campaigns that spread during elections drive toxic societal polarisation and undermine the democratic process, leaving voters to cast their ballots on the basis of false information and fictitious stories designed to discredit political figures and parties.
Undoubtedly, the politically disruptive and deceptive nature of AI-generated disinformation is already evident.
Just two days before Slovakia’s tightly contested recent election, an audio recording appeared on Facebook that seemed to feature Michal Simecka, leader of the liberal Progressive Slovakia party, discussing how to rig the election by buying votes. The party immediately denounced the audio as fake, and fact checkers raced to have the recording removed during the 48-hour pre-poll moratorium, during which politicians and media outlets were supposed to stay quiet.
Ahead of the next general election in the UK, Keir Starmer became the first senior politician here to find himself at the wrong end of a viral AI-generated deepfake, in which he appeared to be swearing at staffers. Against the backdrop of the Online Safety Bill’s ‘missed opportunity’, a YouGov survey of 100 UK MPs in May found that the rise of AI-generated content was their primary AI-related concern ahead of the next election.
The advance of generative AI has shown that the problem of disinformation is complex and ever-evolving. Sunak’s upcoming global AI governance summit is a key opportunity for the UK and other governments to grasp early the magnitude of the threat that AI poses with respect to misinformation and disinformation, and to begin shaping the bottom-up policies that will complement legislative efforts, such as those Polis Analysis has consistently advocated.
First, next week's AI governance summit must recognise that, while generative AI has huge potential benefits, it also poses a severe risk with respect to the proliferation of misinformation and disinformation, and that this risk needs to be addressed with urgency.
Second, the focus in overcoming this challenge should be on protecting the younger generation. For those under 24, social media sites are now the most commonly reported source of news, and misinformation and disinformation remain rife there. According to research from an all-party parliamentary group on literacy, only 2% of children possess the skills to consistently identify misinformation and disinformation, 50% of students worry about being unable to distinguish it, and two-thirds of teachers agree that it harms the wellbeing of young people.
Third, the public should be educated and equipped with skills to identify, understand, and report misinformation and disinformation where they encounter it. This could include dedicated workshops, which should be made more widely available, new programmes to develop students' independent research skills, and an overhaul of the curriculum to include a new focus on digital literacy.
With the rise of generative AI, misinformation and disinformation appear to be here to stay, and set to grow beyond what was previously imaginable. Governments must recognise their responsibility to lead a comprehensive response: legislating against misinformation and disinformation, and promoting initiatives to combat them, given their very real threat to society and democracy.
Tom Barton is the founder and CEO of Polis Analysis.