How AI can save cybersecurity analysts from drowning under a sea of information

Credit: Unsplash/CC0 Public Domain

Data breaches and privacy violations have become more serious than ever as organizations rely increasingly on networks, online platforms and technology. Combined with the growing frequency and sophistication of cyber threats, this makes strengthening cybersecurity defenses an urgent priority.

Cybersecurity analysts are at the forefront of this battle. They work around the clock in security operations centers (SOCs), the units that protect organizations from cyber threats, sifting through massive amounts of data as they monitor for potential security incidents.

Analysts are overwhelmed by the vast amount of information coming from disparate sources, from network logs to threat intelligence feeds, all while trying to prevent attacks. Artificial intelligence has never had a problem with too much data, so many experts look to AI to boost analysts' performance and ease their burden.

Stephen Schwab, Director of Strategy at USC's Information Sciences Institute (ISI) Networking and Cybersecurity Division, envisions symbiotic human-AI teams working together to improve security. AI can assist analysts and improve performance in these high-stakes environments. Schwab and his team have developed testbeds to research AI-assisted strategies for securing smaller systems.

He said: "We are trying to ensure that these processes will not only assist human analysts but also lighten their workload."

David Balenson is the associate director of ISI's Networking and Cybersecurity Division. He emphasizes that automation plays a critical role in relieving the burden on cybersecurity analysts. "SOCs receive a flood of alerts, which analysts must analyze in real time to decide whether they are symptoms of an actual incident," says Balenson. "AI and automation can help identify patterns or trends in alerts that may indicate a potential incident."
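To make that idea concrete, here is a minimal sketch of the kind of pattern-spotting an automated triage layer might perform: grouping raw alerts into time windows and surfacing unusual bursts for an analyst to review. The alert format, window size and threshold below are illustrative assumptions, not details of ISI's or any SOC's actual tooling.

```python
from collections import Counter
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Alert:
    timestamp: float   # seconds since epoch
    source: str        # e.g., an IP address like "10.0.0.5"
    signature: str     # e.g., "failed-login"

def flag_bursts(alerts, window=300.0, z_threshold=3.0):
    """Group alerts into fixed time windows per (source, signature)
    and flag windows whose alert volume is a statistical outlier."""
    counts = Counter()
    for a in alerts:
        bucket = int(a.timestamp // window)
        counts[(a.source, a.signature, bucket)] += 1

    volumes = list(counts.values())
    if len(volumes) < 2:
        return []  # not enough data to establish a baseline
    mu, sigma = mean(volumes), stdev(volumes)

    flagged = []
    for (source, signature, bucket), n in counts.items():
        # Flag windows whose volume sits far above the baseline.
        if sigma > 0 and (n - mu) / sigma >= z_threshold:
            flagged.append({
                "source": source,
                "signature": signature,
                "window_start": bucket * window,
                "count": n,
                "baseline_mean": round(mu, 1),
            })
    return flagged
```

A real SOC pipeline would layer far richer correlation on top of this, but even a simple aggregation like this reduces thousands of raw alerts to a handful of candidate incidents for a human to judge.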

Transparency and explainability are important

The integration of AI into cybersecurity operations comes with its own challenges. One of the main concerns is the lack of transparency and explainability in many AI-driven systems. Schwab explains that machine learning (ML) systems can monitor networks and end systems when human analysts are tired. "Yet they are a black box: they can send out alerts that seem inexplicable," he said. The human analyst must trust that the ML systems are operating within reason.

Schwab suggests that one solution is to build explainers that analysts can understand. These explainers would present the ML system's reasoning in plain, human-readable English. Marjorie Freedman, a principal scientist at ISI, is researching this. "I have been examining what it means to generate an explanation and what you expect from the explanation," she said. Her team is also looking at how an explanation could help someone verify what a model generates.
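As a rough sketch of what such an explainer might look like, the hypothetical function below turns a model's per-feature attribution scores (for example, from an attribution method such as SHAP) into a one-sentence rationale. The feature names and weights are invented for illustration and do not come from Freedman's research.

```python
def explain_alert(feature_scores, top_k=3):
    """Render a model's top contributing features as a short
    plain-English rationale an analyst can read alongside the alert.

    feature_scores: dict mapping feature name -> contribution weight
    (hypothetical output of a feature-attribution method)."""
    top = sorted(feature_scores.items(),
                 key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    clauses = [
        f"{name} {'increased' if weight > 0 else 'decreased'} "
        f"the risk score by {abs(weight):.2f}"
        for name, weight in top
    ]
    return "This alert fired because " + "; ".join(clauses) + "."

# Example usage with made-up attribution weights:
print(explain_alert({
    "failed_logins_last_hour": 0.62,
    "new_source_country": 0.31,
    "typing_cadence_deviation": 0.18,
    "known_device": -0.25,
}))
```

The point of a template like this is not that it reveals the model's true inner workings, but that it gives the analyst something checkable: each clause names an input the analyst can independently verify.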

The Art of Flagging

Online authentication is a good example of how AI's cybersecurity decisions can be explained. Users enter a PIN or password to authenticate with a system. The AI may flag the entry even when the correct code is typed, because different people punch in the same code with different timing patterns.

The AI may still take these "potentially suspect" patterns into account, even if they are not security breaches. The analyst will better understand the AI's reasoning if, along with the flag, it provides an explanation that includes the input pattern. Armed with this additional information, analysts can make better decisions and take the appropriate action (i.e., confirm or override the AI's determination). Freedman believes cybersecurity operations should use their best ML models to predict, identify and address threats, in parallel with approaches that effectively explain those decisions.
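A minimal sketch of this keystroke-dynamics idea appears below: it compares the timing between keystrokes of a PIN entry against a user's stored profile and returns both a flag and a human-readable explanation. The profile format, tolerance threshold and numbers are illustrative assumptions.

```python
from statistics import mean

def check_typing_pattern(key_intervals, profile_intervals, tolerance=0.35):
    """Compare inter-keystroke intervals of a PIN entry against a
    user's stored timing profile; return (flagged, explanation).

    key_intervals / profile_intervals: seconds between successive
    key presses. Values and threshold here are invented examples."""
    deviations = [
        abs(observed - expected) / expected
        for observed, expected in zip(key_intervals, profile_intervals)
    ]
    avg_dev = mean(deviations)
    flagged = avg_dev > tolerance
    explanation = (
        f"Keystroke timing deviated {avg_dev:.0%} on average from this "
        f"user's profile (threshold {tolerance:.0%}); "
        f"the PIN itself was entered correctly."
    )
    return flagged, explanation

# Example: correct PIN, but typed much more slowly than the profile.
flagged, why = check_typing_pattern([0.42, 0.15, 0.61], [0.20, 0.18, 0.25])
print(flagged, "-", why)
```

Note that the explanation string is produced alongside the flag, so the analyst sees the reasoning at the same moment as the alert rather than having to reconstruct it afterward.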

"If someone is shutting down an expensive system for the company, it's high stakes, and we have to confirm that it's the correct decision," Freedman said. "The AI may not have a clear explanation of how it arrived at its conclusion, but the human analyst might need to know that to decide whether it is correct or not."

Keep your data private and secure

Trust between the human analyst and the machine is one challenge for AI in cybersecurity. Another is trusting that the sensitive or proprietary data the AI is trained on remains private. To train a machine-learning model to protect data or systems, an organization might use operational details or known vulnerabilities.

The potential exposure of this sensitive information about an organization's cyber posture is a concern when integrating AI into cybersecurity operations. "Once information is put into systems such as large language models, it's not possible to stop them from sharing that information, even if it's removed," Schwab said. "We need to find ways to make this sharing space safe for everyone."

Schwab, Freedman, and the ISI team hope their work will lead to new ways of harnessing the strengths of both humans and AI to bolster cyber defenses and stay ahead of sophisticated adversaries.

Citation: How AI can keep cybersecurity analysts from drowning in a sea of data (2024, June 21) retrieved 21 June 2024 from https://techxplore.com/news/2024-06-ai-cybersecurity-analysts-sea.html

This document is subject to copyright. Apart from fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
