Generative AI: Early thoughts

Having a bit of a background in philosophy and being a software engineer, I’ve found the recent explosion of interest in AI since the ChatGPT debut to be fascinating from multiple perspectives. 

In one sense there are considerations about whether generative AI is really AI, and/or whether it's on the right path to deliver AGI. Relatedly, its ability to pass standardized tests raises questions about what those tests are really testing, and even about what we mean by intelligence and how we measure it.

On the other hand, there are questions about its practical usefulness and impact, regardless of whether it's an actual intelligence or a glorified autocomplete. Can it be safe and reliable? What can it automate? Will it improve productivity and/or lead to massive unemployment? Interesting questions.

I’m neither deep on the philosophical side nor the engineering side of these topics, and many debates seem to mix the two, but I have used ChatGPT 3 and Bing AI, and have some impressions to share.

My first impression is that large language models seem like a strange way to achieve human-like intelligence. Using massive amounts of data and vast compute resources to create something like a super-autocomplete, one that keeps predicting the next correct word until it has generated a full, human-sounding response, is pretty clearly not how the human mind works. It's also confounding that those who are building these systems aren't entirely sure why or how they work as well as they seem to. Whether this massive data and computational approach has led, or can lead, to human-level (or beyond) intelligence is a different question, but if it can, it's hard to imagine it being human-like while working so differently.
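To make the "super-autocomplete" idea concrete, here's a deliberately tiny, purely illustrative sketch (nothing like how a real LLM is actually built): a toy next-word predictor that keeps appending its best guess until it decides to stop. Real models run essentially the same loop, but with a neural network scoring every possible next token instead of a hand-built lookup table.

```python
# Toy illustration of autoregressive "next word" generation.
# A real LLM replaces this hand-built lookup table with a neural
# network that scores every token in its vocabulary given the
# text generated so far.

toy_model = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "<end>": 0.1},
    "down": {"<end>": 1.0},
}

def generate(prompt_words, max_words=10):
    words = list(prompt_words)
    for _ in range(max_words):
        candidates = toy_model.get(words[-1], {"<end>": 1.0})
        # Greedy decoding: pick the single most probable next word.
        next_word = max(candidates, key=candidates.get)
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate(["the"]))  # -> "the cat sat down"
```

The loop is the whole trick: predict one word, append it, predict again. Everything impressive (and everything hallucinated) comes out of that repeated guess.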

Regardless of whether it's truly intelligent, or how far this approach will take artificial intelligence, it can already do some pretty impressive things. In that regard, the more practical questions are going to be about how useful it is, at what cost, and how reliable.

Given my own very limited experience with it, and my skeptical nature, I can't quite tell. I'm not a fanboy yet, but I can't dismiss its potential either.

I've used ChatGPT 3 to help with a number of programming tasks, asking for correct syntax, examples of various approaches, and solutions to error messages, and it was impressively wrong. The vast majority of answers it produced were hallucinated, repeatedly suggesting syntax, commands, and APIs that don't actually exist. Bing AI was a little better, not so much in producing answers as in providing references to the underlying resources its answers were based on. Bing AI acting more as a super searcher than an answer generator is more useful. I have yet to use ChatGPT 4, which many claim is far superior. Irrespective of its wrongness, the answers were impressive.

As for the other use cases, they're interesting, but I'm not sure how compelling they are yet. Summarizing and bookmarking video and audio, and generating documentation or explanations of code commits, sound useful (I haven't tried the latter yet). But generating emails, essays, articles, and blog posts doesn't seem that compelling to me. I'm not sure I need something to guess what I might say and then let me edit it, or to generate a lot of code for me to review. The typing part isn't generally the bottleneck; it's the thinking, or often the remembering. Maybe I'm underestimating the typing tax (voice recognition is definitely faster in some situations). By the same token, the most obvious application seems to be for spammers, fraudsters, plagiarists, etc. who want to generate low-quality content at scale.

As memory and knowledge augmentation (looking things up, giving me best-guess answers) it's very cool. Thinking and speaking for me, I'm not so sure. But if it can, or soon will, then we're in for a true disruption.

*I also wonder whether the value derived justifies the cost of producing this kind of intelligence. Will it be an energy vampire in the way crypto/blockchain is? Is it over-hyped in the same way?

** Comparatively dumb social media algorithms destabilized the world with their power to manipulate populations in unexpected ways. These new systems don’t have to be truly intelligent to do much worse.
