Media

The Use of Deep Fake Videos

June 2024

On 22nd May 2024, Rishi Sunak called a general election to be held on 4th July 2024. Following the announcement, members of the public naturally turned to the news for information on the election and its candidates.

However, where they turn is changing. An Ofcom study found that just under half (47%) of UK adults use social media as a source of news.[1] Ofcom has also described TikTok as the primary news source for 10% of adults.

The nature of social media is that anyone, or anything, can upload information or ‘news’, including false information. Research by MIT found that fake news can spread on social media 10 times faster than accurate reporting by more traditional media outlets.[2]

One of the increasingly popular methods of spreading misinformation is the use of deep fake videos. A deep fake is an Artificial Intelligence (AI) generated video which applies deep learning techniques to real video or audio of a person in order to generate fake content. In other words, deep fakes are a form of impersonation that can replicate voices and mannerisms in a highly believable way.[3] This believability lends false legitimacy to the message they are used to propagate.

TikTok has been the subject of recent news reporting after multiple deep fake videos containing information about the election went viral in the UK. These posts include deep fake videos of Keir Starmer and Rishi Sunak detailing false or exaggerated policies.[4] Many of the posts containing deep fake videos have been watched hundreds of thousands of times. This is concerning given that these videos could significantly influence political views ahead of the upcoming general election.[5]

For example, in September 2023, a fake audio recording of a leading candidate in Slovakia’s parliamentary election, in which he appeared to boast about rigging the vote, went viral days before polling. The candidate went on to lose the election, and allegations surfaced about the recording’s effect on the result.[6]

Deep fake videos are not only being used to spread fake news online; fraudsters are also using AI-based video techniques to target businesses. According to the 2024 State of Information Security Report, nearly 32% of businesses have encountered deep fake security issues. The most common form of attack on businesses is the use of AI-powered voice and video-cloning technologies to trick recipients into making corporate fund transfers, a scam more formally known as Authorised Push Payment (APP) fraud.[7]

In February 2024, Arup, the British multinational design and engineering company which designed the Sydney Opera House, became the victim of a deep fake scam. An employee was duped, through a phishing email, into attending a Zoom call with individuals who he believed were the chief financial officer and other members of staff known to him. Although he had doubts about the email, upon joining the call and seeing and speaking to the ‘individuals’ familiar to him, he carried out their requests, sending US$25 million across 15 transactions. In reality, every other participant on the call was an AI-generated deep fake.[8]

One of the ways to combat deep fake fraud is the use of advanced identity verification systems that utilise multifactor authentication. However, human error is often the root cause of criminals gaining access to business systems. In the example above, the phishing email was the initial point of compromise, and such emails are becoming increasingly sophisticated. Employee training is therefore vital to combat the risks that deep fakes pose.

Quintel have a proven track record of analysing and investigating phishing scams, and use a combination of forensic software to analyse potential deep fakes and mitigate any associated risk.[9] This has allowed our clients to combat potentially damaging online activity involving deep fake material.

Further details of Quintel’s Crisis Management services can be found at https://www.quintelintelligence.com/services/crisis-management/.

Article by Quintel Researcher, Liz Gahan.