Are AI-based attacks too good for security awareness training?




By Tom Tovar, CEO and co-creator of Appdome


In an age when artificial intelligence (AI) continues to advance at an alarming pace, traditional security awareness training is being challenged more than ever before. The rise of sophisticated AI threats, such as smishing and vishing, deepfakes, and AI chatbot attacks, could render this human-centric approach ineffective.


Humans have a slight advantage today


Security awareness training helps individuals recognize the signs and tactics of social engineering attacks. Consumers and employees learn to recognize suspicious emails (phishing), questionable text messages (smishing), and manipulative phone calls (vishing). Training programs help people identify red flags and detect subtle inconsistencies or variations in language and requests.


A well-trained employee may notice that an email purporting to be from a coworker contains unusual language, or that a voicemail requesting sensitive information claims to be "from" a senior executive who should already have access to that information. The public can also be taught to avoid mass-produced vishing and smishing scams.


Even the most well-trained people are not immune to mistakes. Stress, fatigue, or cognitive overload can impair judgment, making it easier for an AI-driven attack to succeed.


Tomorrow, AI will be the winner


In the next two to three years, AI-driven attacks are likely to have access to larger and more capable large language models. They will create more convincing, context-aware interactions that mimic human behavior with alarming accuracy.


AI-supported attack tools can already craft emails and messages that are virtually indistinguishable from those of legitimate contacts. Voice cloning can mimic anyone's speech patterns. In the future, these techniques will be combined with advanced deep-learning models to create near-perfect audio and video deepfakes.


AI-based attacks already have many advantages, including:

  • Seamless Personalization: AI algorithms can analyze large amounts of data and tailor attacks to an individual’s habits, preferences, and communication style.


  • Real-Time Adaptation: AI systems can adapt in real time, changing their tactics according to the feedback they receive. If the initial approach fails, the AI can quickly pivot and try different strategies until it finds an attack that works.


  • Emotional Manipulation: AI can manipulate emotions with unprecedented precision. AI-generated deepfakes of trusted family members in distress can convincingly ask for urgent help, bypassing rational examination and triggering an immediate, emotional response.


Appdome is starting to see exploits that use AI chatbots superimposed on a mobile app via an overlay attack, engaging a customer or employee in a seemingly innocent conversation. Some brands are preparing for the same attack carried out by an AI-powered mobile keyboard the victim installs. In either case, the overlay or keyboard can gather information about the victim, influence the victim, or present malicious choices. It can also act on behalf of the victim to compromise security, accounts, or transactions. In the future, AI-driven attacks may include AI agents that act on behalf of the victim and autonomously craft interactions within applications.
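To make the overlay scenario concrete, here is a minimal defender-side sketch in Kotlin for Android, not a description of any specific product. It assumes the app attaches a touch listener to a sensitive view and supplies its own callback (the callback name is illustrative); it flags touches delivered while another window obscures the app, a common tapjacking/overlay indicator.

```kotlin
// Minimal sketch: flag touches that arrive while another window obscures this app,
// a common indicator of an overlay (tapjacking) attack. Callback name is illustrative.
import android.view.MotionEvent
import android.view.View

class OverlayAwareTouchListener(
    private val onSuspiciousTouch: () -> Unit   // hypothetical callback supplied by the app
) : View.OnTouchListener {

    override fun onTouch(v: View, event: MotionEvent): Boolean {
        val obscured = (event.flags and MotionEvent.FLAG_WINDOW_IS_OBSCURED) != 0 ||
                (event.flags and MotionEvent.FLAG_WINDOW_IS_PARTIALLY_OBSCURED) != 0
        if (obscured) {
            // Another window (possibly a malicious overlay) sits on top of ours.
            onSuspiciousTouch()
            return true   // swallow the touch instead of acting on it
        }
        return false      // no overlay detected: let normal handling proceed
    }
}
```

A sensitive screen, such as a payment confirmation, could attach this listener and route the suspicious-touch callback into the kind of intervention described below.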


The Future of Security Awareness Training


As AI technology advances, traditional security awareness training is under threat, and the margin for human error is shrinking rapidly. The future of security training will require a multifaceted strategy that combines real-time automated interventions, better cyber transparency, and AI detection with human training and intuition.


Technical Attack Intervention


Security awareness training must include not only recognition of an attack but also real technical intervention by the brand or enterprise. Even if an individual cannot tell the difference between a real interaction and a fake one from an attacker, recognizing system-level interventions designed to protect users should be easier. Brands and companies can detect when malicious software or technical methods for spying, control, and account takeover are in use, and then intervene before real damage is done.
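As an illustration only, the sketch below shows what a simple, user-visible intervention gate might look like. The threat-signal enum and the message wording are assumptions for this example, not any vendor's actual product behavior; the point is that the defense response is explicit and recognizable to the user.

```kotlin
// Minimal sketch of a brand-side intervention gate: if a threat signal is present,
// block the sensitive action and explain the block in plain language.
import android.app.AlertDialog
import android.content.Context

enum class ThreatSignal { NONE, OVERLAY_DETECTED, UNTRUSTED_KEYBOARD }   // illustrative signals

fun runSensitiveAction(context: Context, signal: ThreatSignal, action: () -> Unit) {
    if (signal == ThreatSignal.NONE) {
        action()   // no threat detected: proceed normally
        return
    }
    // Intervene before damage is done, and tell the user what happened so they
    // learn to recognize a legitimate, system-level defense response.
    AlertDialog.Builder(context)
        .setTitle("Transaction paused for your protection")
        .setMessage(
            "We detected activity consistent with a known attack technique. " +
            "This request was not completed, and no action is needed from you."
        )
        .setPositiveButton("OK", null)
        .show()
}
```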


Better Cyber Transparency


To ensure that security awareness training is successful, organizations must embrace greater cyber transparency to help users understand the defense response they should expect in applications and systems. This requires robust defense technology measures to begin with. Enterprise policies and consumer-facing release notes should include “what to expect” if a brand or enterprise defense detects a threat.


Recognizing AI agents and automated interactions with apps


Brands and companies must deploy defense methods to detect the unique ways machines interact with applications and systems. This includes patterns of typing, tapping, recording, and in-app or device movements, as well as the systems used to perform these interactions. Non-human patterns can be used to trigger alerts for end users, improve due-diligence workflows within applications, or require additional authorization steps.
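As a rough illustration of the idea, the heuristic below flags unnaturally regular tap timing, one of the interaction patterns mentioned above. The thresholds are illustrative assumptions; a production detector would combine many more signals.

```kotlin
// Heuristic sketch: scripted or machine-driven input often shows unnaturally regular
// timing between taps, while human timing varies. Thresholds are illustrative.
import kotlin.math.sqrt

fun looksMachineDriven(tapTimestampsMs: List<Long>): Boolean {
    if (tapTimestampsMs.size < 5) return false   // too little data to judge
    val intervals = tapTimestampsMs.zipWithNext { a, b -> (b - a).toDouble() }
    val mean = intervals.average()
    val stdDev = sqrt(intervals.map { (it - mean) * (it - mean) }.average())
    // Humans rarely tap with near-zero timing variance; bots and replayed scripts often do.
    return stdDev < 15.0 && mean < 400.0
}
```

An app could feed recent touch timestamps into a check like this and, when it fires, require an additional authorization step, in line with the workflow described above.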


Prepare for the AI-Powered Future


The rise of AI-powered social engineering attacks represents a significant shift in the cybersecurity landscape. If security awareness training is to remain a valuable tool for cyber defense, it must be adapted to include application- and system-level interventions, improved cyber transparency, and the ability to recognize automated interaction with applications and systems. We can protect our brands, enterprises, and future by preparing for the rise of AI-powered deception.



About the Author


Tom Tovar is CEO and co-creator of Appdome, the only fully automated mobile app defense platform. He is a coder and hacker today, as well as a business leader. He began his career as a Stanford-educated, tech-focused corporate and securities attorney, and he has served as a board director and in C-level leadership positions at several cyber and tech companies.
