How much of a threat is AI to the cybersecurity of tech firms?


A recent survey of 400 Chief Security Officers in the UK and US revealed that 72% believed AI solutions could lead to security breaches. At the same time, 80% of respondents said they planned to implement AI tools to defend themselves against AI. This captures both the promise and the threat of AI. On the one hand, AI is a powerful tool that can deliver unprecedented security and enable cybersecurity specialists to go on the offensive against hackers. On the other, it will enable massive automated attacks of incredible sophistication. The big question for tech companies caught up in this arms race is how worried they should be and what they can do to protect themselves.

Let’s first take a step back and examine the current situation. According to data compiled and analyzed by Cobalt Security, cybercrime is projected to cost the global economy $9.4 trillion in 2024, and 75% of security professionals reported an increase in cyberattacks during the past year. The cost of these attacks is likely to rise by at least 15% each year. IBM reported that in 2023 the average cost of a data breach was $4.45 million, a 15% increase since 2020.

As a result, the cost of cybersecurity insurance has increased by 50%, and businesses now spend $215 billion on risk management products and services. Healthcare, finance, and insurance companies and their partners are most at risk. The tech industry faces particular challenges: startups handle large volumes of sensitive information, have limited resources compared with large multinationals, and often have a culture geared toward scaling quickly at the expense of IT infrastructure.


Sebastian Gierlinger, VP of Engineering at Storyblok

The challenge of differentiating AI attacks

CFO magazine reported that 85% of respondents attributed the rise in cyberattacks in 2024 to the use of generative AI. Look closer, however, and there are no statistics that clearly show which attacks were AI-generated or what impact they had. It is difficult to determine whether a cybersecurity event was caused by generative AI, even though the technology can clearly automate phishing emails and social engineering attacks.

Because generative AI mimics human content and responses, it can be difficult to distinguish its output from human-made material. We therefore don’t know the true extent of attacks driven by generative AI or how effective they are, and it is hard to gauge the severity of a problem we cannot quantify.

This means the best course of action for startups is to focus on mitigating threats in general. All the evidence suggests that current cybersecurity measures and solutions, underpinned by best-practice data governance procedures, are up to the task of dealing with the existing AI threat.

The greater cyber risk

Ironically, the greatest threat to organizations comes from their own employees, who may use AI carelessly or fail to follow security procedures. Employees who share sensitive business data while using ChatGPT risk that data being retrieved later, which could lead to leaks and hacks. To reduce this threat, it is important to have proper data protection systems in place and to educate users of generative AI on the risks involved.
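One practical safeguard along these lines is to redact obviously sensitive strings before a prompt ever leaves the company network. The sketch below is a minimal illustration, not a substitute for a real data loss prevention (DLP) tool; the patterns and labels are assumptions chosen for the example.

```python
import re

# Illustrative patterns only; a real DLP tool covers far more data types
# and uses validation beyond regular expressions.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Mask likely-sensitive substrings before a prompt is sent to an
    external generative AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt
```

A filter like this would sit in a proxy or gateway between employees and the AI service, so nothing depends on individual users remembering the policy.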

Education also means helping employees understand AI’s current capabilities, especially when it comes to countering phishing and social engineering attacks. Recently, a finance manager at a major corporation paid out $25 million to fraudsters following a deepfake conference call that mimicked the company’s CFO. So far, so frightening. Dig deeper, though, and the incident was not very sophisticated from an AI perspective. It was only a small step up from a scam perpetrated a few years ago that tricked the finance departments of scores of businesses, many of them startups, into sending money to a fake client account using the email address of the CEO. In both cases, basic compliance and security checks, or simple common sense, would quickly have uncovered the scam. Teaching your employees how AI can generate the voices or appearances of others, and how to detect these attacks, is just as important as having a robust security system.
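The lesson generalizes into a simple control: any large transfer initiated over email or a video call should be blocked until it is confirmed on an independent, known-good channel, because a deepfake can initiate a request but cannot answer a call-back to a number already on file. Below is a minimal sketch of such a policy check; the threshold and field names are invented for illustration, not taken from any real system.

```python
from dataclasses import dataclass

APPROVAL_THRESHOLD = 10_000  # assumed policy limit; tune per organization

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    requested_via: str        # e.g. "email", "video_call"
    callback_verified: bool   # confirmed via a known-good channel?

def approve_payment(req: PaymentRequest) -> bool:
    """Reject large transfers unless independently verified out of band."""
    if req.amount < APPROVAL_THRESHOLD:
        return True
    return req.callback_verified
```

Encoding the rule in the payment workflow, rather than leaving it to individual judgment, is what would have stopped both the deepfake call and the earlier CEO-email scam.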

AI is a long-term cybersecurity threat, but until we see more sophisticated AI, current security measures will suffice, provided they are adhered to. Businesses must continue to follow cybersecurity best practices and review their processes as the threat evolves. Cybersecurity is no stranger to new threats and new bad-actor methods, but businesses cannot afford to rely on outdated security technology or procedures.


This article was created as part of TechRadarPro’s Expert Insights, where we feature some of the brightest minds working in the technology sector today. The views expressed are those of the writer and not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
