Deepfake technology, beyond reality, cybersecurity and life (and death)

If people are a mixture of lights and shadows, technology, as an instrument, can become one. The outcome depends on the ultimate intention of whoever’s in control.

📸 Cottonbro | Pexels
Reading time: 5 min

If people are a mixture of lights and shadows, technology, as an instrument, can become one. The outcome depends on the ultimate intention of whoever’s in control. One of the dark sides of Artificial Intelligence lies in deepfakes. But it’s not all as bad as it seems.

Donald Trump, Barack Obama and Facebook CEO Mark Zuckerberg have all been victims of deepfakes. Even someone like Tom Cruise has faced competition from a deepfake with its own account on TikTok.

Actor Steve Buscemi’s face replaced that of actress Jennifer Lawrence, while Scarlett Johansson was less fortunate and hers was used in pornographic videos.

Its very name tells us everything. The term deepfake combines “deep”, from deep learning, and “fake”. In short, a deepfake is a video in which a person’s face or voice, typically that of someone famous, is synthetically superimposed onto another person with similar physical features, making it appear that they said or did something they never did.

The impersonated person’s image is thus manipulated, and so, of course, is the information the video appears to convey.

This technology is based on Artificial Intelligence and relies on machine learning, which uses data analysis to simulate another person’s face or voice as realistically as possible.

These data are obtained from hundreds of hours of recordings of the person to be “replicated” and are used to generate new material: not just the person’s image, but also his or her gestures and way of speaking.
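One commonly described approach trains a shared encoder with a separate decoder per person, then swaps decoders to transfer a face. Below is a minimal, purely illustrative sketch of that idea, using tiny linear layers and random vectors in place of real face data (all names, dimensions and numbers are invented for the example; production systems use deep convolutional networks trained on thousands of video frames):

```python
import numpy as np

# Toy sketch of the shared-encoder / two-decoder idea behind face swapping.
# "Faces" here are random 8-dimensional vectors, not real images.
rng = np.random.default_rng(0)
d, k, n = 8, 4, 200                  # face dim, latent dim, samples per person
X_a = rng.standard_normal((d, n))    # stand-in for person A's face data
X_b = rng.standard_normal((d, n))    # stand-in for person B's face data

E = 0.1 * rng.standard_normal((k, d))    # shared encoder
D_a = 0.1 * rng.standard_normal((d, k))  # decoder specialised in person A
D_b = 0.1 * rng.standard_normal((d, k))  # decoder specialised in person B

def loss(D, E, X):
    """Mean squared reconstruction error of decode(encode(X))."""
    r = D @ E @ X - X
    return float(np.mean(r ** 2))

# Train both autoencoders with plain gradient descent on the shared encoder.
lr, history = 0.02, []
for step in range(3000):
    history.append(loss(D_a, E, X_a) + loss(D_b, E, X_b))
    for D, X in ((D_a, X_a), (D_b, X_b)):
        z = E @ X                    # latent codes
        r = D @ z - X                # reconstruction residual
        D -= lr * (2 * r @ z.T / n)        # update this person's decoder
        E -= lr * (2 * D.T @ r @ X.T / n)  # update the shared encoder

# The "swap": encode one of A's faces, then decode it with B's decoder,
# producing output in B's style that follows A's input.
swapped = D_b @ E @ X_a[:, :1]
```

The design point the toy captures is that the encoder learns features common to both people, while each decoder learns to render one specific identity; swapping decoders is what transfers the face.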

Familiar tasks

Machine learning is essential for the above. It’s a branch of Artificial Intelligence that gives machines the ability to learn from data without a human having to program every rule. This enables them, for example, to make predictions.

Deep learning constitutes a step further: it trains the machine on layered neural networks that learn, on their own, the features needed for tasks such as image and speech recognition. In different forms, this has been in use for quite a long time.

The sound and image recognition systems found in virtual assistants such as Alexa, Google Assistant, Cortana and Siri, and in some games consoles, are well known.

Computers can thus perform tasks, make sense of the data they handle and work in a way loosely inspired by the human brain. Most importantly, deep learning makes it possible both to recognise patterns and to generate them.

Dangers for companies

News of the application of deepfakes in pornography came to light in 2017. High-profile actresses such as Natalie Portman and world-famous singers such as Taylor Swift had their faces used in pornographic videos.

These scandals were followed by others that were equally damaging, in the form of hoaxes and fake news. Misinformation began to take the shape of manipulated yet strikingly realistic videos. They are now a cause for concern among AI and cybersecurity experts, both because of the erosion of trust in the messages reaching the public and because of the fraud they enable.

According to a Tessian survey, 74% of IT managers are worried about the potential effects of these videos. For companies this can lead to problems in sensitive areas such as cybersecurity and data protection. 

Identity theft and fraud are real risks. Cybercriminals can, for example, impersonate an executive’s image or voice and instruct employees to carry out compromising operations, as the technology consultancy Entelgy warns.

Companies may also be hit by smear campaigns that damage their corporate reputation. All of this is driving the need for improved cybersecurity systems.

Reality under debate

The debate is open: what’s true and what isn’t? In an attempt to shed light on the issue, Michigan State University (USA) and Facebook are working together on a new approach based on reverse engineering.

The aim is not only to detect the manipulation, but also to trace its source and identify the patterns and the AI model that generated the video in real-world settings. This is possible because each image produced by a generative model leaves a characteristic fingerprint, which can be compared with those found in other manipulated videos.
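The core idea, at toy scale: if each generator leaves a fixed trace in its outputs, then averaging many of its outputs reveals that trace, and a new image can be attributed to whichever trace it correlates with best. The sketch below simulates this with random noise standing in for image content (every name, number and the noise-based “fingerprint” are inventions for illustration; the real research recovers far subtler architectural signatures):

```python
import numpy as np

# Toy illustration of model attribution via generation "fingerprints".
# Each fake generator is simulated as adding its own fixed hidden pattern.
rng = np.random.default_rng(42)
size = 64 * 64                      # a flattened 64x64 "image"

patterns = {
    "model_a": 0.1 * rng.standard_normal(size),
    "model_b": 0.1 * rng.standard_normal(size),
}

def make_fake(model):
    """Random zero-mean 'content' plus the model's hidden fingerprint."""
    return rng.standard_normal(size) + patterns[model]

# Estimate each model's fingerprint by averaging many of its outputs:
# the zero-mean content cancels out, leaving the fixed pattern behind.
fingerprints = {
    m: np.mean([make_fake(m) for _ in range(500)], axis=0)
    for m in patterns
}

def attribute(image, fingerprints):
    """Attribute an image to the model whose fingerprint correlates best."""
    return max(fingerprints, key=lambda m: float(image @ fingerprints[m]))

query = make_fake("model_a")        # a new fake of unknown (to us) origin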

Other uses: the upside of deepfake 

Not everything is necessarily bad. This kind of content also has a good, fun side. Entertainment, free of malicious intent, is one of the sectors that can boast deepfakes that harm no one; quite the opposite.

The film world is already benefiting from this technology: thanks to AI, a fan-made deepfake cast Millie Bobby Brown, the actress from the hit series Stranger Things, as a young Princess Leia, restoring the character’s youth.

The same technology made it possible to keep the character alive in Star Wars: Rogue One, despite the death of Carrie Fisher, and gave us a younger Robert De Niro playing his role in The Irishman.

Another example is that of Salvador Dalí, who was brought back to life in an American museum. In an exhibition titled “Dalí Lives”, the genius stars in 125 interactive videos that enhance and enrich the visitor’s experience.

Dalí has also entered the field of advertising in a campaign for the Queen Sofia Foundation on the need to promote research into neuro-degenerative diseases, as the artist suffered from Parkinson’s disease. The world of advertising has also brought back the accent of Lola Flores to promote a well-known brand of beer. 

Although deepfakes remain very difficult for the untrained eye to detect, we’ll always have common sense and the option of verifying anything that seems implausible. The manipulation in the videos described above means the saying “seeing is believing” no longer holds true.


Contact our communication department to request additional material.