The Main Item
Ilya Sutskever, the OpenAI co-founder and former chief scientist, is taking another swing at steering AI research in a way that prioritizes safety, and, as he told Bloomberg rather pointedly, “we mean safe like nuclear safety as opposed to safe as in ‘trust and safety.'” Last week he unveiled his new startup-slash-research lab, Safe Superintelligence, which has only one goal: to create artificial general intelligence that will help us and not hurt us.
Last fall, Sutskever famously voted with a majority of OpenAI’s non-profit board to remove Sam Altman, who ran OpenAI’s for-profit subsidiary and had pursued an aggressive commercial strategy, after the board lost trust in him. Sutskever reversed course amid a staff revolt, but he still felt OpenAI was not taking safety seriously.
Sutskever stated that the new venture would be “fully protected from outside pressures such as dealing with a large, complex product and being stuck in a competitive race.”
That pledge will be tested by the vast sums of cash he is likely to need to build and train AI models. His co-founders are Daniel Gross, a well-respected AI expert and investor, and Daniel Levy, a former OpenAI colleague. However, no investors have yet been named.
OpenAI all but abandoned its non-profit mission for lack of cash, both to pay for computing and to pay expensive researchers. Sutskever is asking for money on the strength of a promise that may never be fulfilled, while vowing never to pivot.
It’s something he alone might be able to get away with. One investor texted: “Ilya is a brilliant scientist of our generation and the Daniels…it’s the best team start you can get.”
According to another VC, it sounds like a return to OpenAI’s original objectives, which is a mixed bag from an investor’s perspective. “Is it cool? Yah. AGI is definitely something that people want to create. Will it make investors money? I’m not sure with him leading it and their current focus/values,” this person said.
The world is also very different from 2015, when OpenAI launched: the most heavily capitalized companies are now investing billions of dollars in every AI opportunity.
The first investor said: “I think that the vision is noble in theory, but in practice, I don’t believe that one group can be so far ahead and be able to protect that technology. It would be like building a recycling plant and expecting global heating to disappear.”
Sutskever is to be commended for his commitment to principle, especially when he knows he could walk through another door at any time and become an instant billionaire. At the very least, the effort to create a safe AGI deserves a chance.