OpenAI and Sam Altman: Rarely Out of the News
The OpenAI Saga: A Classic Case Study of Executive Unawareness and Internal Disagreements
Introduction
Close on the heels of the launch of GPT-4o earlier this week, OpenAI and its CEO Sam Altman have been in the news again, for all the wrong reasons. The controversial employment agreement and the exit of co-founder Ilya Sutskever have only added to the turmoil. The recent events at OpenAI highlight a startling lack of awareness, feigned or otherwise, among top executives regarding the company's internal operations and priorities. This brief article examines the saga that unfolded, shedding light on three key areas where this disconnect is evident: the ousting and return of Sam Altman, the ambiguous responses regarding technical developments, and the controversies surrounding employment agreements and internal priorities, especially those with ethical nuances.
The OpenAI Saga
For the benefit of those who may not be in touch with the developments in this field, suffice it to say that the saga began in November 2023, when two senior executives approached OpenAI's Board with concerns about CEO Sam Altman's inconsistent candour. The Board, already harbouring doubts from their own experiences, decided to remove him from his position.1 Despite this decisive action, Altman managed to orchestrate a surprising return. During his absence, Microsoft CEO Satya Nadella reportedly courted him to build a new company, leveraging Microsoft's rights over OpenAI's IP.
In the interim, the Board first appointed CTO Mira Murati as interim CEO and, shortly afterwards, named Emmett Shear, the former CEO of Twitch, in her place. However, the twists continued as Murati and co-founder Ilya Sutskever, who had initially delivered Altman's termination notice, reversed their positions and supported Altman's return. This period of internal turmoil revealed significant fractures within the executive team, casting doubt on their collective awareness and decision-making capabilities.
Sam Altman's Controversial Employment Agreement
Another contentious issue emerged regarding an employment agreement that included a highly unconventional clause: employees were forbidden from making disparaging statements about OpenAI, with violations potentially leading to the clawback of previously awarded equity, including vested equity. Sam Altman claimed he was unaware of this clause, which is particularly surprising given his role as CEO. This created quite a furore within and outside the company, prompting Altman to essentially retract and clarify things through a lengthy tweet.2
Altman's purported ignorance of such a significant policy affecting employee rights and compensation casts doubt on his overall awareness and leadership. Furthermore, his promise to "fix things" if reinstated suggests a reactive rather than proactive approach to governance, further highlighting the executive disconnect within OpenAI.
Ilya Sutskever and the Superalignment Team's Discontent
The final area of concern involves Ilya Sutskever and the Superalignment team, who were promised 20% of OpenAI's compute resources to ensure the alignment of artificial general intelligence (AGI) with human values, especially ethics. This initiative, formed less than a year ago under the guidance of Sutskever and Jan Leike, was crucial for maintaining ethical standards and societal safety as AI technology advances. However, internal resistance to allocating these resources led to significant discontent, with some team members vocally departing from the company. Leike expressed concerns that OpenAI's "safety culture and processes have taken a backseat to shiny products," criticising the company for not investing enough resources into crucial safety research.
In early May 2024, Sutskever, the co-founder and former Chief Scientist of OpenAI, and Leike, another long-standing OpenAI veteran, departed the company amid growing discontent over its priorities and direction. Their departures led to the dissolution of the Superalignment team, which was dedicated to ensuring the safety and alignment of potential future ultra-capable artificial intelligence (AI) systems. This discontent reflects broader internal disagreements about priorities and resource allocation, exacerbating the perception of executive unawareness and misalignment. While competitors like Google aggressively leverage their data centres, OpenAI's internal conflicts hinder its ability to keep pace, jeopardising its mission and competitive edge.
The departures of Sutskever and Leike, along with the dissolution of the Superalignment team, highlight the growing tensions between OpenAI's pursuit of cutting-edge AI products and the need to prioritise safety and responsible development, particularly as the company aims to achieve AGI.
Mira Murati's Vague Technical Knowledge: Pretence or Ignorance?
Another striking example of the disconnect within OpenAI's leadership was evident during an interview with Mira Murati, OpenAI's CTO, conducted by Joanna Stern of the Wall Street Journal in March 2024. When Stern asked about the training data for OpenAI's breakthrough Sora AI video-generation model, Murati's response was alarmingly vague: she stated it was trained on "publicly available data." When pressed to specify whether platforms like YouTube or Instagram were used, Murati admitted she did not know, saying, "I'm not going to go into the details of the data" and eventually confessing, "I'm actually not sure about that."
This revelation raised serious questions about Murati's awareness and understanding of a major product under her purview. Whether this was a case of deliberate obfuscation for legal reasons or genuine ignorance remains unclear, but it underscores a significant lapse in executive knowledge about critical company operations. The interview highlights a broader issue of transparency and communication within OpenAI's leadership. Despite direct questioning, Murati was unable or unwilling to disclose specifics about the data sources used to train Sora, pointing to a deeper problem of information flow and accountability in the company's hierarchy. This incident not only damaged Murati's credibility but also cast doubt on OpenAI's commitment to openness and ethical standards, especially as the company positions itself at the forefront of AI development.
Summing Up
The recent saga at OpenAI illuminates a troubling pattern of executive unawareness and internal discord. From the dramatic ousting and return of Sam Altman to the ambiguous technical knowledge of Mira Murati and the controversial employment agreements, these events underscore significant governance challenges. As OpenAI navigates its path forward, addressing these internal issues will be crucial to maintaining its leadership in the AI industry and ensuring alignment with its foundational values.
2. “In regards to recent stuff about how openai handles equity: we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
There was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i've been genuinely embarrassed running openai; i did not know this was happening and i should have.
The team was already in the process of fixing the standard exit paperwork over the past month or so. if any former employee who signed one of those old agreements is worried about it, they can contact me and we'll fix that too. very sorry about this.”