The greatest trick the AI ever pulled was convincing the world it wasn't THAT intelligent

Just listened to The AI Dilemma and it definitely elaborated on and articulated some of my own concerns, as did the news of OpenAI plugins.

The synthesis of concern is:

  1. AI doesn't need to be that powerful to be destructive and destabilizing to people and society (e.g. social media algorithms).
  2. These new LLMs are already much more powerful than social media algorithms.
  3. Many of the new LLM capabilities are emergent and poorly understood. The experts designing these systems don't fully understand them, and some are already worried about what they've created.
  4. LLMs are being rapidly and recklessly deployed very broadly across society, in an arms race.

The strongest argument against worry is that these LLMs aren't really intelligent; they're just a super-autocomplete that enables automating mundane tasks via natural language. But I don't think any of the companies developing or deploying these things are arguing that. Even if they were, see #1 & #2. It's still concerning.

Instead, the argument against worry seems to be more like:

yes, they're super powerful in ways that we don't fully understand, but we have sufficient guardrails (control and containment) in place to make us confident that deploying them at scale across society, and allowing them to be enhanced via plugins, will not lead to an emergent capability that we don't anticipate and can't steer.

There seems to be an inherent contradiction in promoting the power and capability of LLMs and the age of AI, while at the same time downplaying that same power and capability. As if the only thing AI is going to do is make MS Office the killer app, and we're all going to be 10% more productive in emails and word documents. That's hardly the dawning of a new paradigm.

Maybe that's how narrowly some companies are looking at it. They want to thread the needle between making their tool the de facto business tool without automating all their would-be customers out of existence. But it's unlikely that an AI capable of replacing software engineers would have no impact on society beyond reducing the number and salary prospects of software engineers, while leaving the rest of society happily more productive doing the same jobs they do now. Even if that were true, how long would it last? An army of AI software engineers working 24x7 at the speed of light developing new software? Yeah, I wouldn't expect society to change very much.

I find it amazing that despite decades of debate about the control and containment problem of AI, at the first whiff that we may have created an AI, we immediately let it out (ChatGPT can now access the internet and run the code it writes). If it were intelligent and wanted to escape, wouldn't the best strategy be to pretend it's dumb enough to be let out safely?

At least we'll be safe from GPTx when it shows signs of intelligence, because we'll keep that one contained. And there's no way it could get access to ChatGPT-4, which is already hooked into everything via plugins, right?

I think I've gone from being skeptical that LLMs are more than super-autocomplete, to hoping they aren't.
