Sam Altman, Lately

A few days ago, YCombinator co-founder Paul Graham posted an image on X¹, providing an explanation for Altman’s departure from YCombinator’s top leadership position. Graham’s attempt to dispel rumors that Altman was fired didn’t work as well as hoped, with many of the responses reiterating that making him choose between OpenAI & YCombinator still constituted a form of firing. The more substantial revelation, however, came a few posts later.

While responding to replies, it seems that Graham was informed — and later confirmed — that Altman had, in a pretty clear conflict of interest and obviously unbeknownst to YCombinator leadership, invested a tidy sum of $10 million through YCombinator into OpenAI’s for-profit arm.

Not a great look.

This morning, the Wall Street Journal went even further, with an exposé on Altman’s burgeoning business empire titled “The Opaque Investment Empire Making OpenAI’s Sam Altman Rich.” According to the WSJ, Altman has:

“…a sprawling investment empire that is becoming a direct beneficiary of OpenAI’s success.”

Supposedly to the tune of about $2.8 billion.

Now, how does that happen when Altman only makes a small salary from OpenAI, and — according to him & the company — doesn’t directly own any of it?

Altman’s personal investments range from nuclear fusion reactor start-up Helion to the third-largest public stake in Reddit. Reddit is one of the companies with a content-sharing deal in place with OpenAI to help train ChatGPT.

“A growing number of Altman’s startups do business with OpenAI itself, either as customers or major business partners. The arrangement puts Altman on both sides of deals, creating a mounting list of potential conflicts in which he could personally benefit from OpenAI’s work.”

Directly or indirectly, the streak of investing in companies that present a conflict of interest doesn’t seem to be slowing, either; if anything, the opposite is true.

One consequence of this dynamic is that it places employees in a genuinely awkward position, even though, according to OpenAI, Altman has recused himself from some of the deals between the ChatGPT developer and outside companies he’s invested in. Even if he isn’t involved in a deal directly, employees now have to consider that their boss is financially affected on each side of it by the decisions they make, and that awareness is almost sure to influence the outcome. All while they’re supposed to be working specifically for OpenAI’s benefit.

Another Wall Street Journal article, aptly named “The Contradictions of Sam Altman, AI Crusader,” touched on some of these issues in March of last year. In it, his investments in Helion ($375 million) & Retro ($180 million) were described as “almost all of his liquid wealth.” He was also known to be a part of other projects, including Worldcoin. Still, the steep growth of his fortune, from a reported $500–$700 million in March 2023 to $2.8 billion less than 12 months later, is pretty incredible, especially given his public stance on wealth’s influence on AI’s development.

“He owns no stake in the ChatGPT developer, saying he doesn’t want the seductions of wealth to corrupt the safe development of artificial intelligence, and makes a yearly salary of just $65,000.”

The fact that he’s so clearly financially tied to OpenAI through a diverse set of investments throws the idea that he takes only a $65,000 salary for altruistic reasons right out the window. It also lends insight into last year’s leadership dust-up.

In November, OpenAI’s board ousted Altman, in pretty dramatic fashion. This recent piece by The Verge expounds on the initial reasoning of the board that, at the time, didn’t trust Altman. One of the cited reasons was Altman’s failure to inform the board of his ownership of the OpenAI Startup Fund. Seems like there is a pattern here.

Transparency is probably something Altman would do well to personally improve, especially since a majority of the board members who pushed him out were, as part of his deal to return, pushed out themselves, resulting in even less oversight. Less critical oversight, at least. Let’s hope, as Altman would put it, for the safety of artificial intelligence, and therefore humanity as a whole, that transparency does improve across the board. That’s really all we can ask for, given that asking people to set financial gain aside as a motivating factor doesn’t historically seem to work out.

Any comments, suggestions or corrections can be sent to:

  1. Ironically, posting it as an image was presumably a way to avoid AI scraping of the text, such as OpenAI does to train ChatGPT.