AI-based cyberattacks

Automated phishing and deepfakes are some examples of cyberattacks linked to the malicious use of artificial intelligence.

AI-based cyberattacks are one of the greatest challenges of the digital age: a threat that never rests, learns quickly and can act at unprecedented scale and speed.

In this article, we will look at some examples of cyberattacks based on artificial intelligence and learn how to protect ourselves from them, bearing in mind that artificial intelligence itself can also act as an ally.

Examples of cyberattacks based on artificial intelligence

AI not only allows attackers to automate tasks, but also helps them learn and adapt in real time, making attacks far more effective and harder to detect.

Let’s look at two specific examples.

Automated phishing

Phishing is a type of computer scam that seeks to impersonate a person or organisation known to the victim in order to obtain confidential user data, such as banking details (accounts, cards, etc.) or passwords.

Automated phishing refers to the use of technologies and tools to automatically create, send and manage phishing campaigns, allowing attackers to scale their activities, personalise the messages sent and adapt quickly to potential responses from users.

Cybercriminals can rely on AI to create highly personalised phishing campaigns, making them more difficult to detect.

This high degree of personalisation can be achieved by analysing huge amounts of data, which also makes it easier to tailor content to the recipient and thus circumvent traditional security systems.
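As a rough illustration of the defensive side, the sketch below (in Python) scores an incoming email against a few simple warning signs: an unfamiliar sender domain, urgency wording and links whose visible text hides a different target. The keyword list, the allowlisted domains and the markdown-style link format are illustrative assumptions, not how any real mail filter works.

```python
import re

# A minimal sketch of heuristic phishing screening. The keywords, the
# allowlisted domains and the link format are illustrative assumptions.
URGENCY_WORDS = {"urgent", "verify", "suspended", "immediately", "password"}
TRUSTED_DOMAINS = {"mybank.com", "telefonica.com"}  # hypothetical allowlist

def phishing_indicators(sender_domain: str, subject: str, body: str) -> list[str]:
    """Return simple warning signs found in an email."""
    findings = []

    # 1. The sending domain is not one the user normally deals with.
    if sender_domain.lower() not in TRUSTED_DOMAINS:
        findings.append(f"unfamiliar sender domain: {sender_domain}")

    # 2. Pressure language typical of mass phishing campaigns.
    words = set(re.findall(r"[a-z]+", (subject + " " + body).lower()))
    if hits := URGENCY_WORDS & words:
        findings.append(f"urgency keywords: {sorted(hits)}")

    # 3. Links whose visible text and real target point to different places.
    for visible, target in re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", body):
        if visible.startswith("http") and visible.rstrip("/") not in target:
            findings.append(f"link text '{visible}' hides target '{target}'")

    return findings

if __name__ == "__main__":
    for warning in phishing_indicators(
        "secure-account-check.example",
        "Urgent: verify your account",
        "Your account will be suspended. Click "
        "[https://mybank.com](https://evil.example/login) immediately.",
    ):
        print("-", warning)
```

Real anti-phishing systems combine many more signals (sender reputation, attachment analysis, machine-learning classifiers), but the principle of layering several weak indicators is the same.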

Deepfake

The term deepfake combines deep learning and fake, and describes impersonation using advanced AI techniques that, after collecting data such as physical movements, voice or facial features, generate hyper-realistic content in video, image or even audio form.

It should be made clear that the danger does not lie in the technology itself, but in some of the uses that can be made of it: damaging the reputation of the people being impersonated, committing fraud, circumventing biometric authentication, spreading fake news or stealing identities.

However, there are a number of ‘clues’ that can help detect deepfakes, such as an unnatural blink rate, possible inconsistencies between the face and the body, videos that are suspiciously short (because of the editing time involved), or a close look inside the mouth, as the details in this area – tongue, teeth or oral cavity – are difficult to reproduce.
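As a rough illustration of the blink-rate clue, the following Python sketch flags a clip whose blink rate falls far below the typical human range. It assumes the blink timestamps have already been extracted with some facial-landmark tool, and the typical-rate figures and threshold are only illustrative, not a validated detector.

```python
# Typical adults blink roughly 15-20 times per minute at rest.
TYPICAL_BLINKS_PER_MINUTE = (15, 20)

def blink_rate_suspicious(blink_timestamps_s: list[float], clip_length_s: float) -> bool:
    """Flag a clip whose blink rate falls far below the typical human range."""
    if clip_length_s <= 0:
        raise ValueError("clip_length_s must be positive")
    blinks_per_minute = len(blink_timestamps_s) / clip_length_s * 60
    # Many early deepfakes blinked far less often than real people,
    # so a rate well under the lower bound is worth a second look.
    return blinks_per_minute < TYPICAL_BLINKS_PER_MINUTE[0] / 3

if __name__ == "__main__":
    # 60-second clip with only two detected blinks: 2 blinks/min -> suspicious.
    print(blink_rate_suspicious([12.4, 47.9], 60.0))                    # True
    # 30-second clip with eight blinks: 16 blinks/min -> within normal range.
    print(blink_rate_suspicious([2, 5, 9, 13, 17, 21, 25, 29], 30.0))   # False
```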

Risks of AI in cybersecurity

AI not only enables more sophisticated attacks, but also makes defence more difficult.

Traditional cybersecurity systems, such as firewalls and antivirus software, are designed to detect known patterns.

However, AI-based attacks can constantly evolve by modifying their behaviour to avoid detection, which means that the attacker always seems to be one step ahead.
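A deliberately naive example of that limitation: an exact, signature-based check catches a known malicious message but misses a trivially reworded variant. The sample strings and hashes below are made up for illustration.

```python
import hashlib

# A naive signature check: content is flagged only when its hash exactly
# matches a known-bad hash. The sample messages are invented.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"Please verify your account at http://evil.example").hexdigest(),
}

def matches_known_signature(payload: bytes) -> bool:
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

if __name__ == "__main__":
    original = b"Please verify your account at http://evil.example"
    # An AI-generated variant with trivially reworded text evades the exact match.
    variant = b"Kindly confirm your account details at http://evil.example"

    print(matches_known_signature(original))  # True  - the known sample is caught
    print(matches_known_signature(variant))   # False - the reworded copy slips through
```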

How to protect yourself from AI cyberattacks

Although it may seem contradictory, AI can also be a potential ally in defending against cyberattacks.

There are AI-based cybersecurity systems that monitor networks in real time, detect suspicious patterns and respond automatically before a potential attack can develop.
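As a hedged sketch of that idea, the example below trains an anomaly detector (scikit-learn’s IsolationForest) on synthetic ‘normal’ traffic features and flags sessions that deviate strongly from them. The chosen features, figures and thresholds are illustrative assumptions, not a description of any specific product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" traffic: modest transfer sizes, short sessions, few ports.
normal = np.column_stack([
    rng.normal(500, 100, 1000),   # KB sent
    rng.normal(30, 10, 1000),     # session duration (s)
    rng.normal(3, 1, 1000),       # distinct destination ports per minute
])

# Learn what normal traffic looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# New observations: one ordinary session and one that moves a lot of data
# to many ports over a very long-lived connection.
new_sessions = np.array([
    [520, 28, 3],        # looks like normal traffic
    [25000, 600, 120],   # looks like data exfiltration / scanning
])

# predict() returns +1 for inliers and -1 for anomalies.
for features, label in zip(new_sessions, model.predict(new_sessions)):
    status = "anomalous" if label == -1 else "normal"
    print(features, "->", status)
```

In a real deployment this kind of model would feed an alerting or automated-response pipeline rather than simply printing labels.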

Likewise, as users we can also be more critical and cautious about the emails we open, the networks we trust and the applications we install.
