Singapore working on technical guidelines for securing AI systems

Singapore plans to soon release guidance it says will offer “practical measures” to bolster the security of artificial intelligence (AI) tools and systems. The Cyber Security Agency (CSA) is slated to publish its draft Technical Guidelines for Securing AI Systems for public consultation later this month, according to Janil Puthucheary, Singapore’s senior minister of state for the Ministry of Communications and Information.

The voluntary guidelines can be adopted alongside existing security processes that organizations implement to address potential risks in AI systems, Puthucheary said during his opening speech on Wednesday at the Association of Information Security Professionals (AiSP) AI security summit.

Through the technical guidelines, the CSA hopes to offer a useful reference for cybersecurity professionals looking to improve the security of their AI tools, the minister said. He further urged the industry and community to do their part in ensuring AI tools and systems remain safe and secure against malicious threats, even as techniques continue to evolve.


“Over the past couple of years, AI has proliferated rapidly and been deployed in a wide variety of spaces,” Puthucheary said. “This has significantly impacted the threat landscape. We know this rapid development and adoption of AI has exposed us to many new risks, [including] adversarial machine learning, which allows attackers to compromise the function of the model.”

He pointed to research in which security vendor McAfee tricked Mobileye’s camera-based system by subtly altering a speed limit sign, causing the AI model trained to recognize such signs to misread the posted limit.
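The adversarial machine learning technique Puthucheary describes can be sketched in miniature. The example below is a hedged illustration in the spirit of the fast gradient sign method applied to a toy logistic-regression model; it is not the Mobileye attack itself, and the weights, input, and perturbation budget are all made-up values:

```python
import numpy as np

# Toy "model": logistic regression with fixed, assumed weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def predict(x):
    """Probability that input x belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = np.array([1.0, 1.0, 1.0])  # clean input; the model assigns it class 1
p_clean = predict(x)           # above 0.5

# FGSM-style step: for logistic loss with true label y = 1, the gradient
# of the loss w.r.t. x is (p - y) * w. Nudge the input in the direction of
# the sign of that gradient, bounded by a small budget eps.
y = 1.0
grad = (p_clean - y) * w
eps = 0.2
x_adv = x + eps * np.sign(grad)

p_adv = predict(x_adv)         # drops below 0.5: the prediction flips
```

The point of the sketch is that a perturbation no larger than 0.2 in any coordinate, invisible in most real inputs (like tape on a road sign), is enough to flip the model's decision.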

AI is fueling new security risks, and public and private sector organizations must work to understand this evolving threat landscape, Puthucheary said. He added that Singapore’s government CIO, the Government Technology Agency (GovTech), is developing capabilities to simulate potential attacks on AI systems to grasp how they can impact the security of such platforms. “By doing so, this will help us to put the right safeguards in place,” he said.

Puthucheary added that efforts to better guard against existing threats must continue, as AI is vulnerable to “classic” cyber threats, such as those targeting data privacy. He noted that the growing adoption of AI will expand the attack surface through which data can be exposed, compromised, or leaked. He said that AI tools such as WormGPT can be tapped to create increasingly sophisticated malware that can be difficult for existing security systems to detect.


At the same time, AI can be leveraged to improve cyber defense and arm security professionals with the ability to identify risks faster, at scale, and with better precision, the minister said. He said security tools powered by machine learning can help detect anomalies and launch autonomous action to mitigate potential threats. 

According to Puthucheary, AiSP is setting up an AI special interest group, in which its members can exchange insights on developments and capabilities. Established in 2008, AiSP describes itself as an industry group focused on driving the technical competence and interests of Singapore’s cybersecurity community.


In April, the US National Security Agency’s AI Security Center released an information sheet, Deploying AI Systems Securely, which it said offered best practices on deploying and operating AI systems. 

Developed jointly with the US Cybersecurity and Infrastructure Security Agency (CISA), the guidelines aim to enhance the integrity and availability of AI systems and create mitigations for known vulnerabilities in AI systems. The document also outlines methodologies and controls to detect and respond to malicious activities against AI systems and related data.
