OpenAI has led the movement in commercial AI models (and hype). Increasingly, the ChatGPT maker has also been reaching into the startup world, investing in and acquiring companies at various stages. Over the past two years, the company has backed more than 20 startups and acquired three.
In a new report, CB Insights broke down OpenAI’s investments and what they say about the company’s strategy and its vision for the future of AI. Overall, the deals span developer tools, data management infrastructure, enterprise workflow automation, robotics, media, edtech, and health. Many of the startups OpenAI has backed are directly helping it build its own infrastructure and enterprise-targeted AI. Others offer inroads into what the company believes could be AI’s killer apps for consumers, specifically big bets in learning and personal health.
“OpenAI’s recent investments point to potential growth opportunities for the firm, especially as it faces pressure to earn its eye-watering $80 billion valuation and stay ahead of competitors in the fast-moving generative AI market,” reads the report.
In the near term, it’s clear OpenAI views enterprise AI as its big opportunity. The company has invested in a variety of startups that offer capabilities across horizontal functions like sales, design, and coding, as well as several specific to verticals like health care and law. One notable example is OpenAI’s continued investment in legal AI company Harvey, which it’s backed across four rounds, including the company’s $100 million Series C last week. It makes sense: OpenAI announced in April that ChatGPT Enterprise usage had grown fourfold in just a few months, from 150,000 users in January of this year to 600,000. Enterprises also have the most bandwidth to experiment with generative AI, which requires significant financial and personnel resources, from specialized engineering talent and employee training to compliance and the cost of the technology itself.
The company’s three acquisitions—digital product company Global Illumination (2023), video calling platform Multi (June 2024), and analytics and database company Rockset (June 2024)—also reflect its focus on accelerating its enterprise strategy. That two of those deals closed in June suggests OpenAI is stepping up its investment and M&A efforts.
“OpenAI’s recent acquisitions share a few common traits: they focus on small teams with strong talent, they’re relatively early in their commercial trajectory, and they develop enterprise AI tools or infrastructure,” notes the report.
On the consumer front, OpenAI has thrown its weight behind areas where it could most easily integrate into users’ daily lives. One example is Thrive AI Health, which OpenAI is jointly funding with Huffington Post cofounder and Thrive Global CEO Arianna Huffington. The company aims to build a personalized AI-powered health coach that gives users better insight into their health and gently nudges them toward healthier habits around diet, sleep, and stress through real-time alerts. There’s also Speak, a language-learning app. If the success of Duolingo—which reported 31.4 million daily active users and record profitability in Q1 2024—is any indicator, users are eager to make learning a new language a daily habit.
Perhaps the most interesting of OpenAI’s investments, however, are its long-term bets on companies developing applications to bring AI into the physical world. The OpenAI Startup Fund co-led rounds for humanoid robotics developers 1X and Figure AI, including a $675 million Series B for Figure AI in February that marked its biggest deal to date. Both companies are also integrating OpenAI’s technology into their robots.
Like the company itself, OpenAI’s investments have also come with some controversies related to its unusual corporate structure. Earlier this year, it was revealed that although the OpenAI Startup Fund—which launched in 2021 and is the entity behind many of the company’s investments—was presented as a typical corporate venture arm, Sam Altman himself owned and controlled the fund, making investment decisions and raising money from outside limited partners. OpenAI changed the fund’s governance structure in March to remove him from control. Outside of OpenAI, Altman is also known as one of the most prolific individual investors in AI companies, which has raised concerns about conflicts of interest and how he may be directly benefiting from OpenAI’s success through his wide net of investments. Much of his portfolio is unknown, but the Wall Street Journal reported he’s invested in more than 400 companies.
While OpenAI is ramping up its investments in generative AI startups, others are starting to question whether investment in AI should slow down. In recent weeks, a growing number of Silicon Valley investors and Wall Street analysts have sounded the alarm about the AI gold rush and their fears that a bubble is forming. Goldman Sachs, Barclays, and Sequoia Capital, for example, all recently published reports arguing that the technology may not be able to make enough money to justify all the spending.
Still, venture capital investment in AI startups has yet to slow. In the first half of 2023, 225 AI startups raised $12.3 billion from VCs. In the first half of 2024, VCs again poured $12.3 billion into AI startups, this time across 255 companies, TechCrunch reported. Everyone has sunk too much into generative AI to reel it in just yet.
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI IN THE NEWS
The EU AI Act enters into force. The world’s first major AI law aimed at regulating the way companies develop, deploy, and use AI goes into effect today. The regulation categorizes AI applications by risk and includes stringent requirements for what it calls “general purpose” AI models, such as ChatGPT, that are built to accomplish a broad range of tasks. While the act officially takes effect today, enforcement of its various provisions will begin on a rolling basis over the next two years. As CNBC notes, U.S. tech companies will feel the law’s effects most, as they lead the charge on building and deploying AI models. Apple and Meta have already announced plans to withhold certain AI models and product rollouts in the EU, citing not the AI Act specifically but broader regulatory concerns in the bloc.
Senate committees advance nine AI bills. That’s according to FedScoop. The Senate Commerce, Science, and Transportation Committee advanced a wide variety of bills yesterday aimed at regulating AI and bolstering the country’s leadership in the sector. The bills include the highly anticipated CREATE AI Act, which would establish shared national AI infrastructure for researchers and has been endorsed by entities including Google and the Stanford Institute for Human-Centered Artificial Intelligence. The committee also advanced the National Science Foundation AI Education Act (to expand AI educational opportunities), the VET AI Act and TEST AI Act (both supported by Google), and the Future of AI Innovation Act (which would authorize a variety of actions to help the U.S. maintain leadership in AI, including launching grand challenges to spur research and allowing NIST to develop safety standards), as well as others aimed at bringing transparency to AI systems. Sen. Ed Markey (D-Mass.) withdrew his proposed bill focused on studying the energy consumption of AI systems due to lack of support. In a separate session, the Senate Homeland Security and Governmental Affairs Committee also advanced a bill that would establish guardrails for federal AI acquisitions. While these bills still need to go through several steps to become law, it was a big day for AI regulation.
Meta blames hallucinations after its AI bot said the assassination attempt on Trump didn’t happen. Joel Kaplan, Meta’s global head of policy, called the chatbot’s false answers to questions about the shooting “unfortunate” in a blog post published Tuesday. “Like all generative AI systems, models can return inaccurate or inappropriate outputs,” he wrote. Across models and use cases, “hallucinations”—the term that’s stuck for the way LLMs confidently deliver false and fabricated information—continue to be a major stain on the technology. But Meta’s bot troubles don’t stop there: The company also said it’s scrapping its AI chatbots based on celebrities like Snoop Dogg and Kendall Jenner after less than a year, according to The Information.
OpenAI starts rolling out its hyperrealistic voice mode first debuted with a Scarlett Johansson-like voice. That’s according to TechCrunch. The company will roll out the feature—called Advanced Voice Mode—to paid users throughout the fall, with a small group receiving it yesterday. The initial demo of the feature was steeped in controversy because of the voice’s uncanny similarity to Scarlett Johansson, the actress who voiced the AI assistant in the film Her. Johansson later said she had denied multiple requests from OpenAI to provide her voice for the chatbot. Even though this voice was just one of several options, OpenAI said it would remove it, and it delayed the release of Advanced Voice Mode to improve safety features. Unlike the voice mode currently available in ChatGPT, which chains three models (one to convert the user’s spoken prompt to text, one to process the prompt, and one to convert the model’s output text back to speech), Advanced Voice Mode relies on the multimodal GPT-4o alone.
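To make that architectural difference concrete, here is a rough Python sketch of the two approaches. The helper functions below (transcribe, generate_reply, synthesize, advanced_voice_mode) are hypothetical stand-ins rather than OpenAI’s actual APIs; the point is only that the older pipeline hands text between three separate models, while the new mode makes a single audio-in, audio-out call to one multimodal model.

```python
# Illustrative sketch only: hypothetical stand-in functions, not OpenAI's real APIs.

def transcribe(audio: bytes) -> str:
    """Speech-to-text model: turn the user's spoken prompt into text."""
    return "placeholder transcript"

def generate_reply(prompt: str) -> str:
    """Text-only LLM: process the prompt and produce a text answer."""
    return "placeholder answer"

def synthesize(text: str) -> bytes:
    """Text-to-speech model: turn the text answer back into audio."""
    return b"placeholder audio"

def classic_voice_mode(audio_in: bytes) -> bytes:
    # Three separate models chained together; tone, pauses, and other audio
    # nuance are lost at each text-only hand-off.
    return synthesize(generate_reply(transcribe(audio_in)))

def advanced_voice_mode(audio_in: bytes) -> bytes:
    """Single multimodal model (GPT-4o-style): audio in, audio out, no intermediate text."""
    return b"placeholder audio"  # one end-to-end call in the real system

if __name__ == "__main__":
    classic_voice_mode(b"user audio")
    advanced_voice_mode(b"user audio")
```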
AI startup Friend debuts an always-listening AI necklace to keep users company. Wired has a great article about the product and its eyebrow-raising interview with the founder. Unlike other recent AI wearables framed as assistants to help users complete tasks, the new AI pendant from Friend is framed more like a buddy meant to offer companionship. It doesn’t speak out loud but rather communicates by sending texts and push notifications to the smartphone it’s paired with, offering commentary about the conversations it hears. The device is powered by Anthropic’s Claude 3.5 model and records everything through a microphone that is always on and always listening, with 15 hours of battery life. Cue the alarms—both about the obvious privacy concerns and what it means to increasingly rely on AI for companionship.
FORTUNE ON AI
Microsoft’s fast-growing AI business isn’t fast enough for Wall Street —by Verne Kopytoff
Taco Bell is bringing AI to hundreds of drive-thrus nationwide —by Chris Morris
Surviving the ‘AI winter’: Startups need to know ‘when to go into hibernation’ if they want to thrive —by David Austin
Bosses want to use AI to boost productivity—but 77% of employees say it just creates more work —by Emma Burleigh
Only an AI breakthrough can help humans live longer, argues drug developer: ‘We need a ChatGPT moment in longevity’ —by Christiaan Hetzner
Singapore’s minister says AI is not the new oil—it’s way better —by Nicholas Gordon
AI CALENDAR
Aug. 12-14: Ai4 2024 in Las Vegas
Aug. 13: Made by Google event showcasing Google AI
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024 in Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)
EYE ON AI NUMBERS
98%
That’s the percentage of the time a new machine learning-powered variable speed limit system being tested on a stretch of highway near Nashville, Tenn., works correctly, according to the Vanderbilt University team that trained it. That would be a high score on many tests, but there isn’t much room for error in traffic management: occasionally, the system calls for a speed limit change of more than 10 miles per hour at once, which violates federal law.
“It’s a bad idea if the measured speed is going 80, 80, 80, 80, 20—we don’t want that,” Daniel Work, a professor of computer science and civil and environmental engineering at Vanderbilt University, told New Scientist. “We want it to go 70, 60, 50, 40, 30.”
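For illustration, here is a minimal Python sketch of the kind of guardrail Work describes: capping each change in the posted limit at 10 mph, so a jump from 80 to 20 instead becomes the 70, 60, 50, 40, 30 step-down he mentions. This is a hypothetical sketch, not the Vanderbilt team’s actual implementation.

```python
# Hypothetical guardrail: never change the posted speed limit by more than
# 10 mph at a time, regardless of what the ML model recommends.

MAX_STEP_MPH = 10

def next_posted_limit(current: int, recommended: int) -> int:
    """Move the posted limit toward the recommendation, capped at one 10 mph step."""
    delta = recommended - current
    if abs(delta) <= MAX_STEP_MPH:
        return recommended
    return current + MAX_STEP_MPH if delta > 0 else current - MAX_STEP_MPH

def ramp(current: int, target: int) -> list[int]:
    """Full sequence of postings from the current limit to the target."""
    limits = [current]
    while limits[-1] != target:
        limits.append(next_posted_limit(limits[-1], target))
    return limits

print(ramp(70, 30))  # [70, 60, 50, 40, 30], the step-down Work describes
```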
The system has largely operated unaided on a 16-mile stretch of Interstate 24 since March, but researchers analyzing the results say the impact on efficiency and driver safety is still unclear. The project is ongoing, and the team plans to release more data later this year.