OpenAI Co-founder Launches A New AI Company: 'Safe Superintelligence Inc. Is Our Mission, Our Name, And Our Entire Product'

Ilya Sutskever, a co-founder of the AI company OpenAI, has launched a new venture called Safe Superintelligence Inc (SSI). The announcement comes a month after he left the maker of ChatGPT.
Sutskever and Jan Leike, who co-led OpenAI’s ‘Superalignment’ team, left the Sam Altman-run company in May following disagreements with OpenAI’s leadership. Leike now leads a team at rival AI firm Anthropic, which has received investments from tech giants Google and Amazon.

In a post on the social media platform X, Sutskever said, “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus”.

“We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace,” read the post.

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures,” it added.

The new venture is headquartered in Palo Alto, California, with an additional office in Tel Aviv, Israel. SSI places major emphasis on ensuring safety in superintelligence.

What Is SSI?

SSI presents itself as the world’s first “straight-shot SSI lab,” highlighting its exclusive dedication to building safe superintelligence. This strategy calls for advancing AI capabilities and safety precautions in tandem: SSI aims to push the frontiers of AI while maintaining rigorous safety standards, enabling what it terms “peaceful scaling” of AI technologies.

SSI differentiates itself by committing to avoid the distractions common in the tech industry. The company emphasizes a business model and organizational structure designed to insulate it from short-term commercial pressures and management overhead.

The launch of SSI comes amid recent turbulence at OpenAI, marked by prominent departures and former staff voicing concerns over oversight. Sutskever’s exit followed internal disagreements over AI safety and the direction of the company’s leadership.

Recruitment Is On 

SSI is currently recruiting top talent, offering the opportunity to work on what it considers the most important technical challenge of our time.

“We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else,” the blog post said.

In the fast-moving field of AI, Safe Superintelligence Inc. positions itself as a company that puts safety first amid rapid progress. With its recruitment drive for top talent underway, SSI is charting a distinctive path in shaping AI’s evolution and its implications for humanity.