How vague can I be?
Where do we add value?
Vibe coding, prompt engineering, intent-driven software development, and the promise of fully autonomous coding agents have me wondering: where in the loop do proponents imagine the human sits, and what is that human's differentiating expertise?
If, rather than specifying intent with programming languages, configuration, and deployment tools, I can specify my systems with terse natural-language prompts that spin up agents capable of generating, managing, deploying, and monitoring all of those artifacts, what do I need to be good at?
Do I need to understand all of those technical details (coding, testing, compiling, debugging, deploying) in order to construct prompts, or can I trust the A.I. and not bother with the details?
Do I need to be able to understand architecture, security and cost trade-offs, or can I prompt an A.I. to generate an architecture?
Do I need to be able to translate business requirements and technical documents into prompts, or can I feed them directly to an A.I.?
Do I need to understand the business in order to create business requirements, or can I get an A.I. to generate them?
Can I prompt an A.I. to do all of that in one step? Can I just provide a business overview, a general need, a problem statement?
Is the only skill required the ability to articulate prompts? What exactly is that skill?
Can I get an A.I. to articulate prompts? What is the fewest number of prompts I need?
Am I a race car driver with a faster car, or a race car driver with a self-driving car? Am I driving or just telling the car to win the race? What makes me a better race car prompter than anyone else?
I doubt we can get anywhere close to this imagined intent-based world with current LLM technology, but if we could, would it even be desirable?