Trend Micro’s Dustin Childs Discusses LLMs’ Hacking Ability


Current state of LLMs in cybersecurity

LLMs and Gen AI systems like GPT-4 have sparked discussions about their potential capabilities in cybersecurity, particularly their ability to autonomously hack systems.

The good news is that while these AI models have shown impressive capabilities in natural language processing and code generation, their application in autonomous hacking remains limited and largely theoretical.

“As of today, they can’t. LLMs like GPT-4 or Microsoft’s Copilot are powerful tools for Natural Language Processing (NLP) and generation,” said Dustin. “But they are not inherently designed to autonomously perform hacking or complex attacks such as SQL injections.”

This clarifies that while LLMs can generate code snippets for common exploits when prompted, they lack the inherent ability to autonomously execute complex cyber attacks.

Yet, in analysing the key capabilities of advanced LLMs in finding and exploiting vulnerabilities, Dustin notes: “LLMs are not currently capable of autonomously finding or exploiting vulnerabilities. However, LLMs can assist in gathering information on potential vulnerabilities by summarising known exploits, providing details on how certain vulnerabilities are exploited, and even suggesting tools or techniques to use in penetration testing.

“This means that they rely on external scripts or human operators to carry out actions on real-world systems, limiting their ability to autonomously exploit vulnerabilities.”

This highlights the current role of LLMs as assistive tools rather than autonomous hacking entities. They can provide valuable information and suggestions, but cannot independently carry out complex attacks.

LLMs augmenting attacker abilities

However, the potential misuse of LLMs by threat actors is a concern. 

“Threat actors can leverage LLMs’ capabilities to aid in the creation of exploits, amplifying their malicious activity,” says Dustin. “Let’s consider SQL injections as an example. A threat actor might prompt the LLM to generate different payloads to test various input fields of a web application for SQL injection vulnerabilities.

“They can also use these payloads in the target web application and analyse the responses. If the response changes in a way that indicates a successful injection, further exploitation might be possible,” he explains.

This scenario illustrates how malicious actors could potentially use LLMs to enhance their attack strategies, even if the models themselves cannot autonomously execute attacks.
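The probe-and-compare loop Dustin describes can be sketched in a few lines. This is an illustrative toy, not a working attack: the `vulnerable_lookup` function below is a hypothetical stand-in for a web application endpoint, with its injection behaviour simulated, and the payload list is the kind of boolean-based set an LLM might be prompted to enumerate. Real testing would send HTTP requests to an application you are authorised to assess.

```python
# Classic boolean-based payloads an attacker might ask an LLM to generate.
PAYLOADS = ["' OR '1'='1", "' OR '1'='2", "'--"]

def vulnerable_lookup(user_input: str) -> str:
    """Toy stand-in for an endpoint that naively concatenates input into SQL."""
    query = f"SELECT * FROM users WHERE name = '{user_input}'"
    # Simulated behaviour: the always-true tautology dumps every row.
    if "OR '1'='1" in query:
        return "3 rows returned"
    return "0 rows returned"

def probe(endpoint, payloads):
    """Flag payloads whose response differs from a benign baseline,
    mirroring the 'analyse the responses' step in the quote above."""
    baseline = endpoint("normal input")
    return [p for p in payloads if endpoint(p) != baseline]

suspicious = probe(vulnerable_lookup, PAYLOADS)
print(suspicious)  # only the tautology payload changes the response
```

The point of the sketch is that the comparison logic is trivial; the LLM’s contribution in this scenario is generating the payload variations, not executing the attack.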

LLMs vs traditional cybersecurity tools

Humans still play a key role in cybersecurity, as AI is unlikely to autonomously hack systems without human intervention or prior knowledge of vulnerabilities in the near future.

“At this point, LLMs cannot produce results similar to other automated forms of reverse engineering and exploit development. For example, fuzzing remains a better technology than LLMs when it comes to finding bugs within a closed-source application.”

This comparison underscores that established cybersecurity techniques and tools still outperform LLMs in practical application.
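To make the fuzzing comparison concrete, here is a minimal mutation-fuzzing sketch. The `parse` function is a hypothetical target with a planted bug; real fuzzers such as AFL or libFuzzer add coverage guidance, but even this brute-force loop shows why fuzzing finds crashes without any language understanding at all.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip one random byte -- the simplest form of mutation fuzzing."""
    buf = bytearray(data)
    buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def parse(data: bytes) -> int:
    """Toy parser with a planted bug: crashes whenever 0xFF appears."""
    if b"\xff" in data:
        raise ValueError("parser crash")
    return len(data)

def fuzz(seed_input: bytes, iterations: int = 10_000, seed: int = 0):
    """Feed mutated inputs to the parser; collect the ones that crash."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed_input, rng)
        try:
            parse(candidate)
        except ValueError:
            crashes.append(candidate)
    return crashes

crashes = fuzz(b"hello world")
print(f"{len(crashes)} crashing inputs found")
```

The fuzzer discovers the bug by sheer volume of cheap executions, a regime where LLMs, which reason over text rather than execute inputs, currently cannot compete.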

Looking to the future, Childs suggests a more likely scenario for LLM application in cybersecurity.

“LLMs can be trained to review code for problems before a product ships. It is more likely that this form of code review will be common before an LLM gains the capability to autonomously find vulnerabilities.”

This perspective highlights the potential for LLMs to contribute positively to cybersecurity by enhancing code quality and identifying vulnerabilities before they can be exploited.
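The kind of pre-ship review Childs describes can be approximated, at its crudest, by a static check. The sketch below flags one defect class an LLM reviewer would also catch: SQL built with f-strings or string concatenation instead of parameterised queries. The regex and sample code are illustrative assumptions; a real LLM review is far more general than any single pattern.

```python
import re

# Flags execute(f"...") and execute("..." + ...) but not
# parameterised calls like execute("... %s", params).
RISKY = re.compile(r'execute\(\s*(f["\']|.*["\']\s*\+)')

def review(source: str):
    """Return line numbers where SQL construction looks injectable."""
    return [
        lineno
        for lineno, line in enumerate(source.splitlines(), start=1)
        if RISKY.search(line)
    ]

sample = '''
cur.execute(f"SELECT * FROM users WHERE id = {user_id}")
cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))
'''
print(review(sample))  # flags the f-string query, not the parameterised one
```

Catching such issues before a product ships, rather than exploiting them afterwards, is exactly the near-term role Childs expects LLMs to play.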

While LLMs have shown impressive capabilities in language processing and code generation, their ability to autonomously hack systems remains limited. Their current value in cybersecurity lies more in augmenting human expertise and automating benign tasks rather than in autonomous exploitation.

“By combining technical controls, ethical guidelines, and continuous monitoring, it’s possible to harness the benefits of LLMs while minimising the risks associated with their misuse in autonomous hacking and other malicious activities,” Dustin concludes.

As these technologies continue to evolve, it will be crucial to implement safeguards and ethical guidelines to ensure their responsible use and be prepared for their adversarial use in the field of cybersecurity.

******

Make sure you check out the latest edition of Cyber Magazine and also sign up to our global conference series – Tech & AI LIVE 2024

******

Cyber Magazine is a BizClik brand
