It would be easy to dismiss Elon Musk’s lawsuit against OpenAI as a case of sour grapes.

Mr. Musk sued OpenAI this week, accusing the company of breaching the terms of its founding agreement and violating its founding principles. In his telling, OpenAI was established as a nonprofit that would build powerful A.I. systems for the good of humanity and give its research away freely to the public. But Mr. Musk argues that OpenAI broke that promise by starting a for-profit subsidiary that took on billions of dollars in investments from Microsoft.

An OpenAI spokeswoman declined to comment on the suit. In a memo sent to employees on Friday, Jason Kwon, the company’s chief strategy officer, denied Mr. Musk’s claims and said, “We believe the claims in this suit may stem from Elon’s regrets about not being involved with the company today,” according to a copy of the memo I viewed.
On one level, the lawsuit reeks of personal beef. Mr. Musk co-founded OpenAI in 2015 along with a group of other tech heavyweights and provided much of its initial funding, but he left in 2018 after disputes with the company’s leadership, and he resents being sidelined in the conversations about A.I. His own A.I. projects haven’t gotten nearly as much traction as ChatGPT, OpenAI’s flagship chatbot. And Mr. Musk’s falling-out with Sam Altman, OpenAI’s chief executive, has been well documented.
But amid all of the animus, there’s a point that is worth drawing out, because it illustrates a paradox that is at the heart of much of today’s A.I. conversation — and a place where OpenAI really has been talking out of both sides of its mouth, insisting both that its A.I. systems are incredibly powerful and that they are nowhere near matching human intelligence.
The claim centers on a term known as A.G.I., or “artificial general intelligence.” Defining what constitutes A.G.I. is notoriously tricky, although most people would agree that it means an A.I. system that can do most or all things that the human brain can do. Mr. Altman has defined A.G.I. as “the equivalent of a median human that you could hire as a co-worker,” while OpenAI itself defines A.G.I. as “a highly autonomous system that outperforms humans at most economically valuable work.”
Most leaders of A.I. companies claim not only that A.G.I. is possible to build but also that it is imminent. Demis Hassabis, the chief executive of Google DeepMind, told me in a recent podcast interview that he thought A.G.I. could arrive as soon as 2030. Mr. Altman has said that A.G.I. may be only four or five years away.
Building A.G.I. is OpenAI’s explicit goal, and it has lots of reasons to want to get there before anyone else. A true A.G.I. would be an incredibly valuable resource, capable of automating huge swaths of human labor and making gobs of money for its creators. It’s also the kind of shiny, audacious goal that investors love to fund, and that helps A.I. labs recruit top engineers and researchers.