Deepfakes in cybercrime: how AI turns trust into a vulnerability

Categories: Cyber Awareness, Artificial Intelligence

When fraudsters sound and look like real bosses, even secure processes start to falter. AI-supported deepfakes are changing the threat situation and making a genuine awareness culture more important than ever.

Artificial intelligence is not only changing the economy and society; cybercriminals are also using it in an increasingly targeted manner. Attackers use deepfakes and AI-generated voices to impersonate executives and deceive employees into ill-considered actions. Traditional defense mechanisms alone are no longer sufficient here. It is crucial that companies prepare their employees for this new dimension of deception through targeted awareness training.

To understand why AI-supported attacks are so dangerous, it is worth taking a look at the basics of authentication in IT security. 

There are three different authentication factors:

  1. Something you know (e.g. a password),
  2. Something you have (e.g. a smartphone with a token), and
  3. Something you are (e.g. biometric features such as voice or face).

Until now, attackers have primarily focused on the first two factors. However, with the advancing performance of artificial intelligence (AI) and deepfakes, biometric features are also increasingly coming into focus. This means that what was previously considered unique and trustworthy is being targeted.
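To make the second factor concrete: the one-time codes an authenticator app generates on a smartphone are typically computed with the TOTP algorithm (RFC 6238). The following is a minimal illustrative sketch using only the Python standard library, not production code; names like `totp` and the example secret are chosen for illustration.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: derive a time-based one-time code from a shared
    secret -- the "something you have" factor on a smartphone."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Number of 30-second intervals since the Unix epoch
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF)
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # → 287082
```

The point of the sketch: factors one and two can be checked cryptographically, code by code. The third factor, a voice or a face, has no such verification step built into a phone call or video conference, which is exactly the gap deepfakes exploit.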

Deepfakes do not crack systems. They crack people's trust.

An incident with a signal effect

A case from 2024 shows just how real the danger is: an employee of the international engineering firm Arup took part in a video conference with supposed managers. The other participants looked and sounded like his superiors. In reality, they were deepfake imitations. Convinced of their authenticity, the employee transferred around 20 million francs to the fraudsters. More on the Arup case.

No technical security measure detected this attack - nor would it have been able to. Neither firewalls nor virus scanners raise the alarm if the deception is perfectly staged. Only a healthy degree of skepticism on the part of employees - consciously questioning even seemingly trusted sources - could have prevented the incident.

Trust as a gateway

Deepfakes are more than a technical phenomenon - they are a direct attack on the foundation of human interaction: trust. Companies that do not specifically prepare their workforce for such attacks run the risk of trust becoming a weak point.

Only awareness programs that also train and culturally anchor the critical handling of deepfakes will create the necessary vigilance in everyday digital life.

Why modern attacks require new awareness formats

Employees need to know what real attack scenarios feel like: which behavioral patterns social engineering attacks exploit, what manipulation looks like in practice, and why modern attempts at deception often appear so credible. Well-trained teams react to phishing, deepfakes and social engineering faster, more confidently and with more foresight. According to IBM, organizations with targeted awareness programs can significantly reduce their security incidents and cut incident costs by around a third [3].

In a world full of synthetic voices, healthy human doubt becomes one of an organization's most important defense mechanisms.

Next step for your company

e3 has been helping companies build lasting security awareness for many years. Our training courses not only teach how to recognize classic phishing patterns, but also how employees can identify and question deepfakes and AI-supported social engineering attacks.

Success Story Axept

Initial situation

Axept Business Software AG attaches great importance to secure IT. Technical measures alone are not enough: employees are also a decisive factor and should be made aware of cyber risks in the long term, beyond traditional phishing training.

Solution

Together with e3, Axept developed an awareness program that takes modern attack methods such as social engineering and deepfakes into account. This included phishing simulations and practical awareness training for employees with different levels of IT knowledge.

Result

  • Holistic approach with a combination of simulation & training
  • Significantly increased security awareness among the workforce
  • Sustainable anchoring of awareness in everyday corporate life

Would you like more information on this topic?

Register

Find out more about trends. After registering, you can download factsheets and other specialist articles from our Trend Sites.

Please contact us. We will be happy to advise you.

Our experts will be happy to answer any questions you may have on this trend topic.
