Lakera, a Swiss startup focused on safeguarding generative AI applications from malicious prompts and other security threats, has secured $20 million in a Series A funding round led by Atomico, a prominent European venture capital firm.
Generative AI, exemplified by popular applications like ChatGPT, is gaining traction but raises significant concerns in enterprise environments, particularly regarding security and data privacy.
Large language models (LLMs) power generative AI, enabling machines to understand and produce text akin to human language. However, these models require prompts to guide their output, which can be manipulated to trick the AI into unintended actions such as exposing confidential data or granting unauthorized system access. This vulnerability, known as “prompt injections,” poses a growing threat that Lakera aims to mitigate.
Founded in Zurich in 2021 and formally launched in October 2023 with an initial $10 million funding, Lakera specializes in protecting organizations from LLM vulnerabilities such as data leaks and prompt injections. It provides security solutions applicable to any LLM, including those developed by OpenAI, Google, Meta, and Anthropic.
At its core, Lakera offers “Lakera Guard,” described as a “low-latency AI application firewall.” This technology secures traffic to and from generative AI applications, ensuring that prompts are scrutinized and processed securely.
The company’s approach integrates insights from diverse sources, including open-source datasets like those from Hugging Face, proprietary machine learning research, and innovative tools like the interactive game “Gandalf.” This game challenges users to attempt to deceive the system into revealing a password, helping refine Lakera’s defenses against potential threats.
Overall, Lakera’s innovative solutions aim to bolster the security and integrity of generative AI applications in enterprise contexts, safeguarding against emerging threats and ensuring safe and reliable AI operations.