Ilya Sutskever steps into the CEO role at Safe Superintelligence

Ilya Sutskever, co‑founder and former chief scientist at OpenAI, has stepped into the CEO role at Safe Superintelligence (SSI), the AI startup he established in June 2024. The leadership transition follows the departure of Daniel Gross, who left SSI to take charge of Meta's forthcoming AI products division.


Gross, a key contributor to AI initiatives at Apple and co-founder of the venture firm NFDG alongside Nat Friedman, joins a wave of top talent Meta is actively recruiting for its newly formed Meta Superintelligence Labs. This division, launched by Mark Zuckerberg, consolidates the company's AI efforts under the leadership of Alexandr Wang and Friedman, and represents Meta's bold move following delays in Llama 4 and other projects.

Meta also reportedly attempted to acquire SSI—at a valuation around $32 billion—but Sutskever declined, emphasizing SSI's independence and continued focus on safely advancing artificial superintelligence. The startup had previously secured $1 billion in funding and maintains a mission devoted solely to developing AI that surpasses human intelligence in a secure, controlled manner.

Prior to founding SSI, Sutskever played a central role at OpenAI and was involved in the high-profile board decision that ousted, then reinstated, CEO Sam Altman in November 2023. He officially left the company in May 2024 to pursue an undistracted path toward AI safety.

Why this matters

Talent competition in AI

Meta’s aggressive hiring of figures like Gross, along with its recruitment overtures to Sutskever, highlights the fierce competition among tech leaders vying for world-class AI expertise.

Strategic positioning

SSI’s refusal to sell and its strong investor backing—including over $1 billion raised last year—demonstrate a commitment to maintaining autonomy in a field often driven by short-term product goals.

Focus on AI safety

Under Sutskever's leadership, SSI's sole objective is to build safe, superintelligent AI, avoiding the commercial and product distractions common elsewhere in the industry.