
Sutskever strikes AI gold with billion-dollar backing for superintelligent AI


Ilya Sutskever, OpenAI Chief Scientist, speaks at Tel Aviv University on June 5, 2023.

On Wednesday, Reuters reported that Safe Superintelligence (SSI), a new AI startup cofounded by OpenAI’s former chief scientist Ilya Sutskever, has raised $1 billion in funding. The three-month-old company plans to focus on developing what it calls “safe” AI systems that surpass human capabilities.

The fundraising effort shows that even amid growing skepticism around massive investments in AI tech that have so far failed to turn a profit, some backers are still willing to place large bets on high-profile talent in foundational AI research. Venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel participated in the SSI funding round.

SSI aims to use the new funds for computing power and hiring talent. With only 10 employees at the moment, the company intends to build a larger team of researchers across locations in Palo Alto, California, and Tel Aviv, Reuters reported.

While SSI did not officially disclose its valuation, sources told Reuters it was valued at $5 billion, a stunningly large sum just three months after the company’s founding and with no publicly known products yet developed.

Son of OpenAI

OpenAI Chief Scientist Ilya Sutskever speaks at TED AI 2023.

Benj Edwards

Much like Anthropic before it, SSI formed as a breakaway company founded in part by former OpenAI employees. Sutskever, 37, cofounded SSI with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.

Sutskever’s departure from OpenAI followed a rough period at the company that reportedly included disenchantment over OpenAI management failing to commit proper resources to his “superalignment” research team, as well as Sutskever’s involvement in the brief ouster of OpenAI CEO Sam Altman last November. After leaving OpenAI in May, Sutskever said his new company would “pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”

Superintelligence, as we have noted previously, is a nebulous term for a hypothetical technology that would far surpass human intelligence. There is no guarantee that Sutskever will succeed in his mission (and skeptics abound), but the star power he gained from his academic bona fides and his role as a key cofounder of OpenAI has made rapid fundraising for his new company relatively easy.

The company plans to spend a couple of years on research and development before bringing a product to market, and its self-proclaimed focus on “AI safety” stems from the belief that powerful AI systems that could cause existential risks to humanity are on the horizon.

The “AI safety” topic has sparked debate within the tech industry, with companies and AI experts taking different stances on proposed safety regulations, including California’s controversial SB-1047, which may soon become law. Since the topic of existential risk from AI remains hypothetical and is frequently guided by personal opinion rather than science, that particular controversy is unlikely to die down anytime soon.
