
AI development will likely move faster, be more dispersed and less controlled after failed coup at OpenAI

—  Failed coups, as seen at OpenAI, often accelerate the thing that they were trying to prevent


 

Over the past week, OpenAI’s board went through four CEOs in five days.

 

It accused the original chief executive, Sam Altman, of lying, but later backed down from that claim and refused to say what it had meant.

 

Ninety per cent of the organisation’s staff signed an open letter saying they’d quit if the board didn’t. Silicon Valley was both riveted and horrified.

 

By Wednesday, Altman was back, two of the three external board members had been replaced, and everyone could get some sleep.

 

It would be easy to say that this chaos showed that both OpenAI’s board and its curious subdivided non-profit and for-profit structure were not fit for purpose. One could also suggest that the external board members did not have the appropriate background or experience to oversee a $90bn company that has been setting the agenda for a hugely important technology breakthrough.

 

One could probably say less polite things too, and all of that might be true, but it would also be incomplete. As far as we know (and the very fact that I have to say that is also a problem), the underlying conflict inside OpenAI was one that a lot of people have pointed to and indeed made fun of over the past year.

 

OpenAI was created to try to build a machine version of something approximating to human intelligence (so-called “AGI,” or artificial general intelligence). The premise was that this was possible within years rather than decades, and potentially very good but also potentially very dangerous, not just for pedestrian things such as democracy or society but for humanity itself.

That’s the reason for the strange organisational structure — to control the risk. Altman has been building this thing as fast as possible, while also saying very loudly and often that this thing is extremely dangerous and governments should get involved to control any attempts to build it. Well, which is it?

 

Many in tech think that airing such concerns is a straightforward attempt at anti-competitive regulatory capture. This particularly applies to broader moves against open-source AI models (seen in the White House’s executive order on AI last month): people think that OpenAI is trying to get governments to ban competition. That might be true, but I personally think that people who claim AGI is both close and dangerous are sincere, and that makes their desire to build it all the more conflicted. That seems to be the best explanation of what has happened at OpenAI: those who think we should slow down and be careful mounted a coup against those who think we should speed up and be careful.

 

Part of the problem and conflict when it comes to discussing AGI is that it’s an abstract concept — a thought experiment — without any clear or well-understood theoretical model. The engineers on the Apollo Program knew how far away the moon was and how much thrust the rocket had, but we don’t know how far away AGI is, nor how close OpenAI’s large language models are, nor whether they can get there.

You could spend weeks of your life watching videos of machine-learning scientists arguing about this and conclude only that they don’t know either. ChatGPT might scale all the way to the Terminator in five years, or in five decades, or it might not. This might be like looking at a 1920s biplane and worrying that it might go into orbit. We don’t know. This means most conversations about the risk of AI become hunts for metaphors (it’s “like” nuclear weapons, or a meteorite, or indeed the Apollo Program).

 

Or they dredge up half-forgotten undergraduate philosophy classes (Pascal’s wager! Plato’s cave!), or resort to argument from authority (Geoff Hinton is worried! Yann LeCun is not!). In the end, this comes down to how you, instinctively, feel about risk. If you cannot know what is close or not, is that a reason to worry or a reason not to worry? There is no right answer.

 

Unfortunately for the “doomers”, the events of the last week have sped everything up. One of the now-resigned board members was quoted as saying that shutting down OpenAI would be consistent with the mission (better safe than sorry). But the hundreds of companies that were building on OpenAI’s application programming interfaces are scrambling for alternatives, both from its commercial competitors and from the growing wave of open-source projects that aren’t controlled by anyone.

 

AI will now move faster and be more dispersed and less controlled. Failed coups often accelerate the thing that they were trying to prevent.

Indeed, a common criticism of the doomers is that their idea that one powerful piece of software and a few brilliant engineers can transform the world is just another form of naive and simplistic tech utopianism — it fails to understand the real nature of power, complexity and human systems. The doomers on the board demonstrated exactly that — they did not know how power works.


— Benedict Evans / Financial Times

(The writer is a technology analyst)

