Anthropic and Menlo Ventures Launch $100 Million AI Startup Fund


Anthropic and Menlo Ventures have launched a $100 million fund for artificial intelligence (AI) startups.

“We created this fund to fuel the next generation of AI startups through the powerful combination of Menlo’s extensive company-building experience and Anthropic’s pioneering AI technology and deep research expertise,” Menlo said in a news release Wednesday (July 17). 

“Through this collaboration, we aim to catalyze innovation and shape the future of artificial intelligence in the startup ecosystem.”

The companies say the name “Anthology” is a nod to “Anthropic,” and that it also represents their shared vision of a “curated collection” of AI innovators working together.

“Just as an anthology represents a collection of diverse works of art that form a masterpiece, our fund connects visionary entrepreneurs with Anthropic’s groundbreaking technology and Menlo’s venture expertise to fuel revolutionary advancements,” the announcement said.

Menlo, one of Silicon Valley’s earliest venture capital firms, is already an investor in Amazon-backed Anthropic, and last year it said it had raised $1.3 billion for investments in up-and-coming AI firms.

Through the Anthology Fund, the companies will invest in startups from the seed stage through the expansion stage, with investments starting at $100,000.

Meanwhile, PYMNTS wrote earlier this month about Anthropic’s new funding program for advanced AI evaluations, noting that industry experts say it could accelerate the adoption of AI across a range of commercial sectors.

That program aims to help third-party organizations develop new methods for assessing AI capabilities and risks, addressing a crucial gap in the rapidly evolving field.

The initiative wants to develop more robust benchmarks for complex AI applications, potentially unlocking billions in commercial value. The lack of comprehensive evaluation tools has hindered widespread adoption as businesses seek to deploy AI solutions.

“We’re seeking evaluations that help us measure the AI Safety Levels (ASLs) defined in our Responsible Scaling Policy,” Anthropic said in its announcement. 

These levels determine safety and security requirements for models with specific capabilities. The impact of this program is expected to be especially significant for complex AI applications, PYMNTS wrote. 

“Straightforward applications like speech recognition already have decent benchmarks, but quantifying a model’s capability in assisting a crime is much more difficult,” Julija Bainiaksina, founder of the AI company MiniMe, told PYMNTS.
