Election deepfakes: what impact will they have?

“A fake video is circulating from Politics Live today. It contains words I did not use.” 

So it begins - the first major ‘deepfake’ of the 2024 general election campaign. We have been hearing about deepfakes for years, but they are now credible enough not only to go viral, but to be believed. As deepfakes begin to significantly affect a major political event and target the politicians best placed to do something about them, will this lead to a tougher regulatory approach to generative AI once the polls close? 

Since 2010 the battleground for elections has increasingly moved online. This year is no exception, as parties fight it out on platforms from Facebook to TikTok. What is different this time is the explosion of credible, high-quality, manipulated AI content. 

Less than two weeks into the campaign, the BBC has reported on numerous deepfakes spreading disinformation, from the Conservatives’ national service policy to unfounded allegations about Keir Starmer and Jimmy Savile. 

Deepfakes are a significant challenge for political strategists and social media platforms because the barrier to creating them is now so low. While it is straightforward to have a post removed if it is easily proven false (e.g. a graphic saying polling day is Thursday 11 July), removing more subtle and realistic AI-generated content is a trickier challenge. 

This threat has been rising. On the first morning of Labour’s 2023 Party Conference, an AI-generated audio clip of Keir Starmer quickly spread online. The audio imitated a recording of the Labour leader losing his temper and verbally abusing staff. But when the party attempted to have it removed, they ran into difficulty. 

Social media platforms are naturally averse to acting on the immediate request of a political party, opting instead to verify content thoroughly first, a process which can take days. Meanwhile, the content continues to spread, and the more it is engaged with, the faster it reaches new users. 

Deepfakes can also hide behind the pretence of satire or parody, despite fooling voters with their realism.

So far, the strategy adopted by political parties against deepfakes has been to report the content, ignore it, and pray it doesn’t gain traction. But as we have seen with Wes Streeting’s response to the deepfake of him calling Diane Abbott a “silly woman” - when fake content is potentially so damaging, politicians find it increasingly difficult to resist rebutting it. Yet a rebuttal risks attracting more attention, which in turn drives the content to more users.

With trust in politicians already low, there is a real danger that deepfakes will sow further discord, further tarnish politicians’ fragile reputations, and change the way people vote. 

With politicians at the sharp end of this issue, could this election be a turning point towards tougher AI regulation after polling day?

In the UK and abroad, momentum around the regulation of deepfakes continues to grow: 

  • Earlier this year the UK Government made the creation of sexualised deepfakes a criminal offence.

  • The European Union’s Digital Services Act gives authorities the power to fine platforms which fail to tackle election disinformation. 

  • The US Federal Trade Commission has proposed extending its outright ban on the impersonation of businesses and government agencies to cover all individuals.

  • The US Federal Trade Commission is also seeking comment on whether the revised rule should declare it unlawful for a firm (such as an AI platform that creates images, video, or text) to provide services that it knows, or has reason to know, are being used to cause harm.

  • In April this year, several British artists signed an open letter calling for more protection against "the predatory use of AI to steal artists' voices and likenesses". 

  • UK MPs from the All-Party Parliamentary Group on Music backed new legislation, including “a specific personality right” to protect creators and artists from misappropriation.

However, a tough approach to deepfakes poses risks to the generative AI sector more broadly. The EU’s new AI Act regulates applications according to their perceived ‘risk’, requiring companies to comply with the law before applications are made available to the public. This approach has been criticised for the substantial obligations it places on all companies that use AI, and many fear it will make the EU less competitive than other jurisdictions such as China. 

So far the Conservative Government under Rishi Sunak has pursued a light-touch, ‘pro-innovation’ approach to generative AI, declining to give regulators like Ofcom specific ‘take down’ powers or to establish a separate AI regulator. Labour, by contrast, have pledged to introduce a stronger “overarching regulatory framework”.  

Often the most vocal campaigners are themselves victims of the issues they are rallying against. The downside is that this can lead to knee-jerk regulation with unsatisfactory outcomes and unintended consequences. We saw this with the Online Safety Bill, where MPs who had been personally targeted by trolls and online abuse dismissed concerns around privacy and the competitiveness of the UK tech sector. 

The UK is currently seen as a regional leader in tech and AI development, and many within the sector will be anxious not to lose this competitive edge. A concern for businesses who believe in the power of generative AI to drive growth is that MPs returning to Westminster may be tempted to stifle its nascent potential in order to protect their own image online.
