It Depends What the Meaning of the Acronym 'AGI' Is

Elon Musk Sues OpenAI for Violating the Company’s Principles
Musk said the prominent A.I. start-up had put profits and commercial interests ahead of seeking to benefit humanity.

When the news first hit, it was almost a throwaway – of course Elon Musk is now suing OpenAI. If we are in fact living in a simulation, this is almost too obvious a plot point. But actually, this may end up being the most important lawsuit Silicon Valley has seen in quite some time. If it were to go to trial – something you have to believe OpenAI does not want, given everything going on there in recent months – a jury, presumably one selected so that the less it knows about any of the above drama the better, which likely means one that knows very little about the underlying tech involved, would effectively be asked to determine the definition of artificial general intelligence (AGI). That seems... wild.

Backing up...

Mr. Musk sued OpenAI and its chief executive, Sam Altman, accusing them of breaching a contract by putting profits and commercial interests in developing artificial intelligence ahead of the public good. A multibillion-dollar partnership that OpenAI developed with Microsoft, Mr. Musk said, represented an abandonment of a founding pledge to carefully develop A.I. and make the technology publicly available.

That reads like sour grapes, and it is, but it also cuts directly to the key argument one level deeper:

The people who started OpenAI, including Mr. Musk, worried that an A.G.I. would be too powerful to be owned by a single entity, and that if they ever got close to building one, they’d need to change the control structure around it, to prevent it from doing harm or concentrating too much wealth and power in a single company’s hands.

Which is why, when OpenAI entered into a partnership with Microsoft, it specifically gave the tech giant a license that applied only to “pre-A.G.I.” technologies. (The New York Times has sued Microsoft and OpenAI over use of copyrighted work.)

According to the terms of the deal, if OpenAI ever built something that met the definition of A.G.I. — as determined by OpenAI’s nonprofit board — Microsoft’s license would no longer apply, and OpenAI’s board could decide to do whatever it wanted to ensure that OpenAI’s A.G.I. benefited all of humanity. That could mean many things, including open-sourcing the technology or shutting it off entirely.

Rightly or wrongly – something which, again, a jury may get to decide – this argument smartly uses the AGI clause in OpenAI's own contract against the company. If OpenAI has achieved AGI, it would void at least part of the partnership with Microsoft. Now, presumably no one really thinks that, as good as GPT-4 may be, it is a fully intelligent system – but that gets back to what the definition of AGI is in the first place, and no one actually has a concrete one, and so we go back to that jury. Musk's lawyers will undoubtedly argue that, even if what OpenAI has created is not what you or I or many others would consider to be true AGI, it is AGI by some definition. Whose definition? Well, funny enough: Microsoft's.

But in his legal filing, Mr. Musk makes an unusual argument. He argues that OpenAI has already achieved A.G.I. with its GPT-4 language model, which was released last year, and that future technology from the company will even more clearly qualify as A.G.I.

“On information and belief, GPT-4 is an A.G.I. algorithm, and hence expressly outside the scope of Microsoft’s September 2020 exclusive license with OpenAI,” the complaint reads.

What Mr. Musk is arguing here is a little complicated. Basically, he’s saying that because it has achieved A.G.I. with GPT-4, OpenAI is no longer allowed to license it to Microsoft, and that its board is required to make the technology and research more freely available.

His complaint cites the now-infamous “Sparks of A.G.I.” paper by a Microsoft research team last year, which argued that GPT-4 demonstrated early hints of general intelligence, among them signs of human-level reasoning.

Of course, OpenAI's and Microsoft's teams will argue that there's nothing to see here – namely, no AGI – and that the paper was flawed. But Musk's team is already trying to get ahead of that by noting that of course they're going to argue that: it's in their business interest to distance their work from AGI.

But the complaint also notes that OpenAI’s board is unlikely to decide that its A.I. systems actually qualify as A.G.I., because as soon as it does, it has to make big changes to the way it deploys and profits from the technology.

Moreover, he notes that Microsoft — which now has a nonvoting observer seat on OpenAI’s board, after an upheaval last year that resulted in the temporary firing of Mr. Altman — has a strong incentive to deny that OpenAI’s technology qualifies as A.G.I. That would end its license to use that technology in its products, and jeopardize potentially huge profits.

“Given Microsoft’s enormous financial interest in keeping the gate closed to the public, OpenAI, Inc.’s new captured, conflicted and compliant board will have every reason to delay ever making a finding that OpenAI has attained A.G.I.,” the complaint reads. “To the contrary, OpenAI’s attainment of A.G.I., like ‘Tomorrow’ in ‘Annie,’ will always be a day away.”

It's not the worst argument! You probably can't let OpenAI and Microsoft decide this, given their biases and conflicts here. But you also can't let Musk decide, given his own. So again, that jury may get to decide...

Anyway, back to the broader argument: look, I'm no lawyer, and this is all rather complicated and nuanced, but I'll go ahead and quote a lawyer here:

Brian Quinn, a law professor at Boston College, said that Mr. Musk’s complaint made a compelling case that OpenAI had abandoned its roots. But, he said, Mr. Musk probably does not have the standing to bring it, because nonprofit law limits challenges of this type to those made by a nonprofit’s dues-paying members, its own directors or state regulators in Delaware, where OpenAI is registered.

“If he were a member of the board of directors, I would say, ‘Ooh, strong case.’ If this was filed by the Delaware secretary of state, I would say, ‘Ooh they’re in trouble,’” Mr. Quinn said. “But he doesn’t have standing. He doesn’t have a case.”

If only there were some pissed-off former directors of OpenAI out there...