Ilya Sutskever's New Venture: Safe Superintelligence (SSI), the New Kid on the Block
Ilya Sutskever, who recently left OpenAI, announced the launch of his new venture, Safe Superintelligence (SSI). In an X post on Juneteenth (June 19, a US federal holiday), Sutskever, a prominent AI researcher and co-founder of OpenAI, stated: "We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team." The announcement has made waves in the tech community; it follows a tumultuous period at OpenAI and underscores the industry's growing emphasis on AI safety.
Sutskever's Role in Sam Altman's Ouster
In November 2023, Ilya Sutskever played a pivotal role in the controversial firing of OpenAI's CEO, Sam Altman. As a member of OpenAI's board, Sutskever was instrumental in the decision to remove Altman, primarily over disagreements about the pace of AI development and concerns about Altman's communication with the board. The board's description of Altman as "not consistently candid" became emblematic of corporate understatement. The decision was met with significant backlash, however, and Sutskever soon reversed his stance: he joined over 700 OpenAI employees in signing a letter demanding Altman's reinstatement and publicly expressed regret for his involvement, stating, "I deeply regret my participation in the board's actions. I never intended to harm OpenAI."
Other former board members have since shared startling details of Altman's conduct. Helen Toner, speaking on the recent TED AI podcast hosted by Bilawal Sidhu, averred that the board learned of ChatGPT's launch only from Twitter.
Leading the Safety Team at OpenAI
After Altman's return as CEO, Sutskever was removed from OpenAI's board but continued his work at the company by leading the Superalignment team alongside Jan Leike. This team was tasked with ensuring the safe development of artificial general intelligence (AGI). Despite these efforts, concerns among safety-minded employees about OpenAI's priorities began to surface. This led to the eventual departure of Sutskever, Leike, and several other safety-conscious employees last month. The disbandment of the Superalignment team further raised questions about OpenAI's commitment to AI safety, marking a significant shift in the company's internal dynamics.
Sutskever's Ongoing Commitment to AI Safety
Sutskever, who was OpenAI's longtime chief scientist, has founded SSI with former Y Combinator partner Daniel Gross and ex-OpenAI engineer Daniel Levy. At OpenAI, Sutskever played a crucial role in the company's efforts to enhance AI safety, particularly in anticipation of "superintelligent" AI systems, working alongside Jan Leike on the Superalignment team. Both Sutskever and Leike left OpenAI in May following a dramatic disagreement with leadership over AI safety approaches.
Interestingly, Leike now leads a team at rival AI firm Anthropic. Sutskever has long been a vocal advocate for addressing the complex issues of AI safety. In a 2023 blog post co-authored with Leike, he predicted that AI with intelligence surpassing human capabilities could emerge within a decade and emphasized the necessity of researching methods to control and restrict such advanced AI to ensure it is benevolent.
Launching Safe Superintelligence
Today, shortly after leaving OpenAI, Sutskever announced the formation of Safe Superintelligence (SSI), a new AI company dedicated to developing safe superintelligence while insulating its work from commercial pressures. He is joined by co-founders Daniel Gross, a former AI lead at Apple, and Daniel Levy, a former OpenAI employee. The company, with offices in Palo Alto, California, and Tel Aviv, Israel, emphasizes a focused approach to AI development.
Commitment to AI Safety
SSI's mission is clear: to develop safe superintelligence through revolutionary breakthroughs produced by a small, dedicated team. Sutskever has consistently highlighted the importance of AI safety, and his new venture underscores this commitment. He has warned of the potential dangers posed by AI systems with intelligence surpassing human capabilities and has advocated for research into controlling and restricting such systems. This focus on safety is reflected in SSI's business model and operational strategy, which aim to advance AI capabilities while ensuring safety remains paramount.
Future Directions and Industry Impact
Sutskever's decision to establish SSI is seen as a direct response to the growing concerns surrounding AI safety and the need for responsible development of advanced AI systems. Unlike OpenAI, which started as a non-profit organization and later restructured, SSI is designed from the outset as a for-profit entity. This approach, coupled with the team's credentials and the increasing interest in AI, suggests that SSI may attract significant investment and resources. Daniel Gross, one of SSI's co-founders, indicated confidence in the company's ability to raise capital, highlighting the robust interest in their mission.
The launch of SSI marks a significant development in the AI industry: an effort to push the boundaries of AI capabilities while ensuring responsible and secure development. As the field evolves, competition and innovation are likely to intensify, ultimately benefiting consumers. The establishment of SSI underscores the importance of prioritizing safety in the pursuit of artificial general intelligence and sets a new standard for responsible AI development.
Competition is Intense
The AI ecosystem, backed by billions of dollars, is in a state of rapid flux. OpenAI, already partnered with Microsoft, faces tough competition from Google, which recently launched its new and improved Gemini. "Apple Intelligence," in its pseudo-partnership with OpenAI, has stolen the limelight with its emphasis on privacy. Meanwhile, billionaire Elon Musk, ironically a co-founder of OpenAI who has since departed, recently raised $6 billion for his nascent venture xAI. Anthropic's Claude is also becoming a favorite among serious users. How well Ilya Sutskever's new venture fares in this competitive landscape remains to be seen, but there is no doubt about the experience, expertise, and pedigree of SSI's founders. They are poised to challenge the Goliaths of the industry, bringing a fresh perspective and a dedication to AI safety.