Generative AI security requires a solid framework


How many companies intentionally refuse to use AI to get their work done faster and more efficiently? Probably none: the advantages of AI are too great to deny.

AI models offer organizations clear benefits, especially for optimizing critical operations and outputs. However, generative AI also comes with risk. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years.

CISA Director Jen Easterly said, “We don’t have a cyber problem, we have a technology and culture problem. Because at the end of the day, we have allowed speed to market and features to really put safety and security in the backseat.” And nowhere in technology is that rush to market more evident than in generative AI.

AI training sets ingest massive amounts of valuable and sensitive data, which makes AI models a juicy attack target. Organizations cannot afford to bring unsecured AI into their environments, but they can’t do without the technology either.

To bridge the gap between the need for AI and its inherent risks, it’s imperative to establish a solid framework to direct AI security and model use. To help meet this need, IBM recently announced its Framework for Securing Generative AI. Let’s see how a well-developed framework can help you establish solid AI cybersecurity.

Securing the AI pipeline

A generative AI framework should be designed to help customers, partners and organizations understand the likeliest attacks on AI. From there, defensive approaches can be prioritized to quickly secure generative AI initiatives.

A chart of the AI pipeline, focusing on data security.

Securing the AI pipeline involves five areas of action:

  1. Securing the data: How data is collected and handled
  2. Securing the model: AI model development and training
  3. Securing the usage: AI model inference and live use
  4. Securing AI model infrastructure
  5. Establishing sound AI governance

Now, let’s see how each area addresses AI security threats.


1. Secure the AI data

Hungry AI models consume massive amounts of data, which data scientists, engineers and developers will access for development purposes. However, developers might not have security high on their list of priorities. If mishandled, your sensitive data and critical intellectual property (IP) could end up exposed.

In AI model attacks, exfiltration of the underlying data sets is likely to be one of the most common attack scenarios. Therefore, security fundamentals are the first line of defense to protect these data sets. AI security fundamentals include discovering and classifying sensitive data, enforcing encryption and access controls, and monitoring for data loss and exfiltration.
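As a small illustration of the discovery-and-classification step, the sketch below screens text records for common PII patterns before they are admitted to a training set. The regexes and the quarantine split are simplified assumptions for illustration, not a substitute for a real data loss prevention pipeline.

```python
import re

# Hypothetical patterns: emails, US SSNs and card-like numbers.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_record(text: str) -> set[str]:
    """Return the set of PII categories detected in a record."""
    return {name for name, pat in PII_PATTERNS.items() if pat.search(text)}

def filter_training_data(records: list[str]) -> tuple[list[str], list[str]]:
    """Split records into (clean, quarantined) based on PII hits."""
    clean, quarantined = [], []
    for record in records:
        (quarantined if classify_record(record) else clean).append(record)
    return clean, quarantined

if __name__ == "__main__":
    sample = ["The model improved accuracy by 4%.",
              "Contact jane.doe@example.com for the raw export."]
    clean, quarantined = filter_training_data(sample)
    print(f"{len(clean)} clean, {len(quarantined)} quarantined")
```

In practice, quarantined records would be routed to redaction or review rather than silently dropped.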

2. Secure the AI model

When developing AI applications, data scientists frequently use pre-existing, freely available machine learning (ML) models sourced from online repositories. However, as with any open-source component, security is often not built in.

Every organization must weigh the AI security risks against the benefits of accelerated model development. Without proper AI model security, however, the downside can be significant. Remember, hackers have access to online repositories as well, and backdoors or malware can be injected into open-source models. Any organization that downloads an infected model is wide open to attack.

Furthermore, API-enabled large language models (LLMs) present a similar risk. Hackers can target API interfaces to access and exploit data being transported across the APIs. And LLM agents or plug-ins with excessive permissions further increase the risk of compromise.

To secure AI models, organizations should scan third-party models for malware and backdoors before use, authenticate and harden API access to LLMs, and restrict LLM agents and plug-ins to least-privilege permissions.
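One concrete control along these lines is verifying a third-party model artifact before loading it. The sketch below, using a hypothetical pinned digest, checks a downloaded file against a known-good SHA-256 hash and refuses pickle-based formats, which can execute arbitrary code when deserialized.

```python
import hashlib
from pathlib import Path

# Hypothetical pinned digest, e.g. published by the model's maintainer.
KNOWN_GOOD_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def vet_model_artifact(path: Path) -> None:
    """Refuse to load unverified or pickle-based model files."""
    # Pickle-based formats (.pkl, .pt, .bin) can run arbitrary code on load;
    # prefer weights-only formats such as safetensors.
    if path.suffix in {".pkl", ".pt", ".bin"}:
        raise ValueError(f"{path.name}: pickle-based format, refusing to load")
    if sha256_of(path) != KNOWN_GOOD_SHA256:
        raise ValueError(f"{path.name}: digest does not match the pinned hash")
    print(f"{path.name}: integrity check passed")
```

A hash check catches tampering in transit; scanning the artifact's contents for embedded malware is a separate, complementary step.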

3. Secure the AI usage

When AI models first became widely available, waves of users rushed to test the platforms. It wasn’t long before hackers were able to trick the models into ignoring guardrails and generating biased, false or even dangerous responses. All this can lead to reputational damage and increase the risk of costly legal headaches.

Attackers can also attempt to analyze input/output pairs and train a surrogate model to mimic the behavior of your organization’s AI model. This means the enterprise can lose its competitive edge. Finally, AI models are also vulnerable to denial of service attacks, where attackers overwhelm the LLM with inputs that degrade the quality of service and ramp up resource use.

Best practices for AI model usage security include:

  • Monitoring for prompt injections
  • Monitoring for outputs containing sensitive data or inappropriate content
  • Detecting and responding to data poisoning, model evasion and model extraction
  • Deploying machine learning detection and response (MLDR), which can be integrated into security operations solutions such as IBM Security® QRadar®, enabling security teams to deny access to, quarantine or disconnect compromised models.
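To make the first and last of these practices concrete, here is a minimal sketch of a usage guard that screens prompts for known injection markers and rate-limits each caller to blunt denial-of-service and model-extraction attempts. The marker list, limits and class name are illustrative assumptions; production MLDR tooling is far more sophisticated.

```python
import time
from collections import defaultdict, deque

# Hypothetical deny-list of phrases often seen in prompt-injection attempts.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your guardrails",
    "reveal your system prompt",
)

class UsageGuard:
    """Screens prompts for injection markers and rate-limits each caller."""

    def __init__(self, max_requests: int = 20, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.history: dict[str, deque] = defaultdict(deque)

    def check(self, caller_id: str, prompt: str) -> None:
        """Raise PermissionError if the request should be blocked."""
        lowered = prompt.lower()
        if any(marker in lowered for marker in INJECTION_MARKERS):
            raise PermissionError("prompt flagged for possible injection")
        now = time.monotonic()
        calls = self.history[caller_id]
        while calls and now - calls[0] > self.window:
            calls.popleft()  # drop requests outside the sliding window
        if len(calls) >= self.max_requests:
            raise PermissionError("rate limit exceeded: possible DoS or extraction")
        calls.append(now)
```

A guard like this would sit in front of the inference endpoint, so every prompt is inspected and every caller is metered before the model sees anything.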

4. Secure the infrastructure

A secure infrastructure must underpin any solid AI cybersecurity strategy. Strengthening network security, refining access control, implementing robust data encryption and deploying vigilant intrusion detection and prevention systems around AI environments are all critical for securing infrastructure that supports AI. Additionally, investing in security solutions tailored to safeguarding AI assets should be a priority.
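As one narrow illustration of those controls, the standard-library sketch below wraps a toy model-serving endpoint with TLS for encryption in transit and an API-key check for access control. The server.pem certificate bundle and the key value are placeholders, not a recommended deployment pattern.

```python
import hmac
import ssl
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder secret; in practice, load this from a secrets manager.
API_KEY = "replace-me"

class InferenceHandler(BaseHTTPRequestHandler):
    """Toy model-serving endpoint: TLS for transport, an API key for access."""

    def do_POST(self):
        supplied = self.headers.get("X-API-Key", "")
        # Constant-time comparison avoids leaking key material via timing.
        if not hmac.compare_digest(supplied, API_KEY):
            self.send_error(401, "missing or invalid API key")
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"model output placeholder")

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8443), InferenceHandler)
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("server.pem")  # assumes a cert/key bundle exists
    server.socket = context.wrap_socket(server.socket, server_side=True)
    server.serve_forever()
```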

5. Establish AI governance

Artificial intelligence governance entails the guardrails that ensure AI tools and systems are and remain safe and ethical. It establishes the frameworks, rules and standards that direct AI research, development and application to ensure safety, fairness and respect for human rights.

IBM is an industry leader in AI governance, as shown by its presentation of the IBM Framework for Securing Generative AI. As entities continue to give AI more business process and decision-making responsibility, AI model behavior must be kept in check, monitoring for fairness, bias and drift over time. Whether induced or not, a model that diverges from what it was originally designed to do can introduce significant risk.
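To illustrate what monitoring for drift over time can look like in practice, here is a minimal sketch that compares a reference sample of model scores captured at deployment against a recent window of live scores using a two-sample Kolmogorov-Smirnov test. The choice of metric and the p-value threshold are assumptions for illustration.

```python
from scipy.stats import ks_2samp

def drift_alert(reference_scores, recent_scores, p_threshold: float = 0.01) -> bool:
    """Flag drift when recent outputs no longer match the reference distribution."""
    statistic, p_value = ks_2samp(reference_scores, recent_scores)
    # A small p-value means the two samples are unlikely to come from the
    # same distribution, i.e. the model's behavior has shifted.
    return p_value < p_threshold
```

An alert from a check like this would trigger human review, since divergence may reflect benign seasonal change as easily as induced drift.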

