OpenAI Researchers Alerted Board About AI Breakthrough Prior to Altman’s Ouster


Krystal is a technology reporter for Reuters who covers Silicon Valley and beyond through the lens of money and characters. She has a master’s degree from New York University and loves a scoop of matcha ice cream just as much as the next journalist.

Ahead of OpenAI CEO Sam Altman's four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters. The previously unreported letter and AI algorithm were key developments before the board's ouster of Altman, the poster child of generative AI.

The researchers behind the AI project, known internally as Q* (pronounced Q-Star), believe they have made progress toward what is known as "artificial general intelligence," or AGI: a level of capability at which a system can learn and adapt to tasks it was not explicitly programmed for. Scientists once thought AGI was decades away. But with recent developments in generative AI, which produces human-like text and other content, and advances in natural language processing, many experts now think we are closer than ever to reaching AGI.

Researchers say the latest version of their AI model can perform math on the level of grade-school students, a milestone they believe could help it better understand and interpret scientific research. While generative AI has done well at writing and language translation, conquering math is considered a critical next step toward making the technology more like human intelligence. "If it can do science, then it can do everything," one researcher said. "If you can replace a fruit packer or car painter with an AI, wouldn't that benefit society?"

The two sources said that longtime executive Mira Murati, then OpenAI's chief technology officer, told employees about the breakthrough, referred to as Q*, in an internal message and flagged it to the board in a letter before the weekend's turmoil. An OpenAI spokesperson acknowledged the message and the letter but declined to comment on their accuracy. Reuters was unable to independently verify the details of the letter. In the aftermath of the drama, several prominent AI scientists called on OpenAI to slow down its research and avoid rushing to commercialize its advances before they have been thoroughly tested.
