OpenAI Co-Founder Sutskever’s New AI Safety Startup SSI Secures $1 Billion in Funding

Company aims to develop advanced artificial intelligence systems that surpass human capabilities while ensuring safety, according to executives who spoke with Reuters.

Safe Superintelligence (SSI), a cutting-edge AI startup co-founded by Ilya Sutskever, former chief scientist at OpenAI, has raised $1 billion in funding. The company aims to develop advanced artificial intelligence systems that surpass human capabilities while ensuring safety, according to executives who spoke with Reuters.

With a lean team of 10 employees, SSI plans to utilize the substantial investment to enhance its computing power and recruit top-tier talent. The company is focused on assembling a small, highly trusted group of researchers and engineers, operating out of Palo Alto, California, and Tel Aviv, Israel. Although the company has not disclosed its current valuation, sources suggest it is valued at approximately $5 billion. This significant investment underscores the confidence that investors continue to place in exceptional talent dedicated to foundational AI research, even as general interest in funding such ventures has waned.

Leading venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel participated in the funding round. Additionally, NFDG, an investment partnership led by Nat Friedman and SSI’s CEO Daniel Gross, also contributed.

Gross emphasized the importance of aligning with investors who understand and support SSI’s mission. “It’s crucial for us to be surrounded by investors who share our vision of creating safe superintelligence. Our focus is on spending the next few years conducting R&D before bringing our product to market,” Gross explained in an interview.

The topic of AI safety—preventing AI from causing harm—has gained increasing attention due to concerns that rogue AI could pose significant risks to humanity. A proposed California bill aimed at imposing safety regulations on AI companies has divided the industry. Companies like OpenAI and Google have opposed the bill, while it has garnered support from Anthropic and Elon Musk’s xAI.

At 37, Ilya Sutskever is one of the most influential figures in the AI field. He co-founded SSI in June alongside Daniel Gross, a former leader of AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher. Sutskever serves as SSI’s chief scientist, Levy as principal scientist, and Gross oversees computing power and fundraising efforts.

Sutskever’s decision to launch SSI stems from his identification of a new challenge, or “mountain,” that differs from his previous work. Last year, he played a role in the board’s decision to oust OpenAI CEO Sam Altman, a move that was later reversed following a strong pushback from OpenAI employees. However, the situation led to Sutskever’s diminished role at OpenAI, ultimately resulting in his departure in May. After he left, OpenAI dismantled the “Superalignment” team he had led, which focused on ensuring AI remains aligned with human values as it surpasses human intelligence.

Unlike OpenAI, whose unconventional corporate structure was designed with AI safety in mind but also enabled Altman’s brief ouster, SSI operates as a traditional for-profit entity. The startup is heavily focused on hiring individuals who align with its culture, prioritizing character and exceptional capabilities over formal credentials.

“We’re particularly excited when we find candidates who are genuinely interested in the work, not just in the hype surrounding AI,” Gross added.

SSI intends to partner with cloud providers and chip manufacturers to meet its computing power needs, though it has not yet finalized these partnerships. AI startups often collaborate with companies like Microsoft and Nvidia to address their infrastructure requirements.

Sutskever, an early proponent of the scaling hypothesis—the idea that AI models improve with vast computing resources—hinted that SSI would take a different approach to scaling than his former employer, but did not provide specifics.

“Everyone talks about scaling, but few ask what exactly we are scaling,” Sutskever remarked. “Some people work long hours and simply go down the same path faster. That’s not our style. We aim to do something different, something truly special.”
