Showing posts from March, 2023

Offminding

There's considerable concern about the prospect of AI putting people out of work across a variety of knowledge-work roles, especially software engineering. These concerns seem to bake in assumptions that are too narrow, and that, explored a bit more fully, shift the perspective. The underlying assumptions appear to be:

1. That the job replacement will be something akin to offshoring, where an AI agent performs essentially the same engineering work in the same timeframe as a person, at a fraction of the cost. Cheap labor, but otherwise business as usual.
2. That an AI agent capable of software engineering wouldn't also imply a level of intelligence and sophistication that would be extremely broadly applicable. An assumption that an AI agent can be a really good software engineer but nothing else.

If you accept the premise that an AI agent is now, or soon will be, capable of replacing a software engineer, these assumptions seem naïve. However I do agree that software engineer

The greatest trick the AI ever pulled was convincing the world it wasn't THAT intelligent

Just listened to The AI Dilemma, and it elaborated on and articulated some of my own concerns, as did the news of OpenAI plugins. The synthesis of concern is:

1. AI doesn't need to be that powerful to be destructive and destabilizing to people and society (e.g. social media algorithms).
2. These new LLMs are already much more powerful than social media algorithms.
3. Many of the new LLM capabilities are emergent and poorly understood. The experts designing these systems don't fully understand them, and some of them are already worried about what they've created.
4. LLMs are being rapidly and recklessly deployed very broadly across society, in an arms race.

The strongest argument against worry is that these LLMs aren't really intelligent, that they're just a super-autocomplete that enables automating mundane tasks via natural language. But I don't think any of the companies developing or deploying these things are arguing that. Even if they were, see #1 &

Generative AI: Early thoughts

Having a bit of a background in philosophy and being a software engineer, I’ve found the recent explosion of interest in AI since the ChatGPT debut fascinating from multiple perspectives. In one sense there are questions about whether generative AI is really AI, and/or whether it's on the right path to deliver AGI. Relatedly, its ability to pass standardized tests raises questions about what those tests are really testing, and even about what we mean by intelligence and how we measure it. On the other hand, there are practical questions about usefulness and impact, regardless of whether it's an actual intelligence or a glorified autocomplete: Can it be safe and reliable? What can it automate? Will it improve productivity and/or lead to massive unemployment? Interesting questions. I’m neither deep on the philosophical side nor the engineering side of these topics, and many debates seem to mix the two, but I have used ChatGPT 3, and Bing AI, a