Artificial Intelligence and Machine Learning are recognized as important components of the future of cloud security and cybersecurity. How well are these technologies integrated into the current cybersecurity functions?
In a recent survey conducted by Check Point and Cybersecurity Insiders, hundreds of professionals across various industries were asked how they have been using AI, what priority it holds for their companies, and how it has affected their workforce.
Where is AI in cybersecurity right now?
In the survey, respondents were asked about the current state of AI implementation in their cybersecurity plans, including how well it has been implemented and how the implementation is progressing.
Their responses paint a portrait of an industry moving slowly and cautiously, one that hasn’t “gone all-in” with AI as some might expect. Businesses are still weighing the risks and benefits of AI and ML, and they are establishing best practices that comply with relevant regulations.
When asked about their organisation’s adoption of AI and ML for cybersecurity, 61% said it was in the “planning or development” stages – significantly higher than the 24% who classified it as “maturing or advanced.”
In addition, 15% of respondents said that their organisation has not implemented AI or ML in their cybersecurity efforts at all. While the benefits of AI in cybersecurity have convinced many businesses to explore its potential, very few are fully embracing it at this time.
The survey asked respondents to answer a more specific question: “Which cybersecurity functions (cloud) in your organization are currently enhanced by AI/ML?” Malware detection was the top performer at 35%. User behaviour analysis and supply-chain security were right behind.
At the bottom of the list are security posture management and adversarial AI research. Combined with the answers to the earlier question about the state of AI adoption, the data shows that individual applications of AI and ML are still far from universal.
The challenge of navigating an ever-changing regulatory landscape is one reason why AI adoption hasn’t accelerated. Government guidance and laws around AI and cybersecurity are still in their early stages, and businesses cannot afford to take risks when it comes to compliance. Keeping up with the rapid changes is complex and resource-intensive.
How do organisations plan to use AI in the future?
Despite the cautious and slow adoption of AI for cybersecurity, it is regarded as a high priority by 91% of respondents. Only 9% said it was a low priority or not a priority at all.
Respondents clearly see AI’s potential to automate repetitive tasks, improve anomaly detection, and detect malware; in fact, 48% cite this as the area with the most potential. AI reinforcement learning is also a promising area for 41% of respondents – especially interesting given that only 18% currently use AI to manage dynamic security postures. There is no doubt that AI has great potential, but it faces many challenges.
Beyond specific applications, respondents had to identify the benefits they saw in incorporating AI into cybersecurity. The most popular responses were vulnerability assessment and threat identification, but cost-efficiency was the least popular answer at only 21%.
AI is not viewed by most respondents as a money-saving tool, likely due to the high cost of regulatory compliance and implementation.
Conflicting views and concerns about AI in cybersecurity
Additional questions in the survey revealed professional concerns and a lack of clarity regarding some fundamentals of AI. The impact of AI on the cybersecurity workforce, in particular, is still a question without a clear answer.
39% of respondents said that AI requires new skills, while 35% noted that job roles have been redefined. And while 33% said that AI has reduced their workforce, 29% reported that their workforce has actually increased.
AI integration into cybersecurity is still a work-in-progress. While greater efficiency may be realized in the future, many businesses have to hire additional staff to integrate the new technology.
Responses were sharply divided on whether their organisation would be comfortable using generative AI without implementing internal controls for data quality and governance policies.
While 44% of respondents disagreed or strongly disagreed, 37% agreed or strongly agreed.
It’s rare to see such a large split on a question like this in a professional survey. The divide seems to indicate a lack of awareness of the importance of internal controls and governance policies where AI is involved.