<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[NappiSite]]></title><description><![CDATA[Critical thinking, skepticism and doubt]]></description><link>https://blog.nappisite.com</link><image><url>https://blog.nappisite.com/img/substack.png</url><title>NappiSite</title><link>https://blog.nappisite.com</link></image><generator>Substack</generator><lastBuildDate>Mon, 13 Apr 2026 19:52:41 GMT</lastBuildDate><atom:link href="https://blog.nappisite.com/feed" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><webMaster><![CDATA[nappisite@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[nappisite@substack.com]]></itunes:email><itunes:name><![CDATA[jnappi]]></itunes:name></itunes:owner><itunes:author><![CDATA[jnappi]]></itunes:author><googleplay:owner><![CDATA[nappisite@substack.com]]></googleplay:owner><googleplay:email><![CDATA[nappisite@substack.com]]></googleplay:email><googleplay:author><![CDATA[jnappi]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Skeptics' Almanac]]></title><description><![CDATA["Extraordinary claims require extraordinary evidence"]]></description><link>https://blog.nappisite.com/p/skeptics-almanac</link><guid isPermaLink="false">https://blog.nappisite.com/p/skeptics-almanac</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Sun, 08 Mar 2026 19:31:41 GMT</pubDate><content:encoded><![CDATA[<p>A collection of sources I&#8217;ve come across as I wrestle with the hype vs reality of A.I.</p><ul><li><p><a href="https://www.jsnover.com/blog/2026/03/30/chatbots-unsafe-at-any-speed/">Chatbots: Unsafe at Any Speed</a></p></li><li><p><a 
href="https://www.nytimes.com/2026/03/27/opinion/technology-mental-fitness-cognitive.html?unlocked_article_code=1.WlA.6ZKN.Zf47rdbx8Sqm">There&#8217;s a Good Reason You Can&#8217;t Concentrate</a></p></li><li><p><a href="https://www.ft.com/content/7f635a68-3b2a-4e4f-ae3d-926ff06ff068?syn-25a6b1a6=1">AI chatbots often validate delusions and suicidal thoughts, study finds</a></p></li><li><p><a href="https://www.404media.co/a-top-google-search-result-for-claude-plugins-was-planted-by-hackers/">A Top Google Search Result for Claude Plugins Was Planted by Hackers</a></p></li><li><p><a href="https://techcrunch.com/2026/03/30/ai-work-boss-supervisor-us-quinnipiac-poll/">15% of Americans say they&#8217;d be willing to work for an AI boss, according to new poll</a></p></li><li><p><a href="https://www.wired.com/story/i-asked-chatgpt-what-wired-reviewers-recommend-its-answers-were-all-wrong/">I Asked ChatGPT What WIRED&#8217;s Reviewers Recommend. Its Answers Were All Wrong</a></p></li><li><p><a href="https://blog.robbowley.net/2026/04/02/more-code-less-delivery-but-does-the-circleci-2026-report-really-show-1-in-20-teams-are-benefiting/">More code, less delivery but does the CircleCI 2026 Report really show 1 in 20 teams are benefiting?</a></p></li><li><p><a href="https://www.scientificamerican.com/article/anthropic-leak-reveals-claude-code-tracking-user-frustration-and-raises-new/">WTF, Anthropic&#8217;s Claude Code keeps track of every time you swear</a></p></li><li><p><a href="https://www.oreilly.com/radar/the-model-you-love-is-probably-just-the-one-you-use/">The Model You Love Is Probably Just the One You Use</a></p></li><li><p><a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646">Thinking&#8212;Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender</a></p></li><li><p><a href="https://www.propublica.org/article/trump-doge-veterans-affairs-ai-contracts-health-care">DOGE Developed Error-Prone AI Tool to 
&#8220;Munch&#8221; Veterans Affairs Contracts</a></p></li><li><p><a href="https://arstechnica.com/ai/2026/04/research-finds-ai-users-scarily-willing-to-surrender-their-cognition-to-llms/">&#8220;Cognitive surrender&#8221; leads AI users to abandon logical thinking, research finds</a></p></li><li><p><a href="https://blog.robbowley.net/2026/04/04/metrs-developer-productivity-research-2026-update/">METR&#8217;s developer productivity research: 2026 update</a></p></li><li><p><a href="https://www.theregister.com/2026/03/17/ai_businesses_faking_it_reckoning_coming_codestrap/">AI still doesn't work very well, businesses are faking it, and a reckoning is coming</a></p></li><li><p><a href="https://www.theintrinsicperspective.com/p/bits-in-bits-out">Bits in Bits out</a></p></li><li><p><a href="https://www.bloomberg.com/opinion/articles/2026-03-13/the-ai-washing-of-job-cuts-is-corrosive-and-confusing">The AI-Washing of Job Cuts Is Corrosive and Confusing </a></p></li><li><p><a href="https://garymarcus.substack.com/p/breaking-expensive-new-evidence-that">BREAKING: Expensive new evidence that scaling is not all you need</a></p></li><li><p><a href="https://thebulletin.org/2025/03/russian-networks-flood-the-internet-with-propaganda-aiming-to-corrupt-ai-chatbots/">Russian networks flood the Internet with propaganda, aiming to corrupt AI chatbots</a></p></li><li><p><a href="https://www.fastcompany.com/91507219/chatgpt-edu-researchers-project-metadata-universities-exclusive">A configuration in Codex Cloud Environments lets thousands of colleagues see repository names and activity linked to ChatGPT accounts.</a></p></li><li><p><a href="https://www.scientificamerican.com/article/ai-autocomplete-doesnt-just-change-how-you-write-it-changes-how-you-think/">AI autocomplete doesn&#8217;t just change how you write. 
It changes how you think</a></p></li><li><p><a href="https://www.youtube.com/watch?v=kDBeFOscZpc">The Truth About Developer Productivity in the AI Age (IT'S A TRAP)</a></p></li><li><p><a href="https://garymarcus.substack.com/p/a-spate-of-outages-including-incidents">&#8220;A spate of outages, including incidents tied to the use of AI coding tools&#8221;, right on schedule</a></p></li><li><p><a href="https://www.wired.com/story/ai-kill-venture-capital/">Can AI Kill the Venture Capitalist?</a></p></li><li><p><a href="https://robertwachter.substack.com/p/why-do-you-still-have-a-job">Why Do You Still Have a Job?</a></p></li><li><p><a href="https://www.theregister.com/2026/02/09/microsoft_one_prompt_attack/">Microsoft boffins figured out how to break LLM safety guardrails with one simple prompt</a></p></li><li><p><a href="https://www.irishtimes.com/business/2026/01/20/ai-boom-could-falter-without-wider-adoption-microsoft-chief-satya-nadella-warns/">AI boom could falter without wider adoption, Microsoft chief Satya Nadella warns</a></p></li><li><p><a href="https://aboutsignal.com/videos-podcasts/videos/signals-whittaker-on-privacy-in-the-age-of-data-and-ai/">Signal&#8217;s Whittaker on privacy in the age of data and AI</a></p></li><li><p><a href="https://www.theregister.com/2026/01/20/pwc_ai_ceo_survey/">Majority of CEOs report zero payoff from AI splurge</a></p></li><li><p><a href="https://www.businessinsider.com/executives-adopting-ai-higher-rates-than-workers-research-2025-10">87% of execs are using AI on the job, compared with just 27% of employees</a></p></li><li><p><a href="https://www.ft.com/content/3d2669e3-c05e-48c9-8bb3-893c1d66de2e">The AI Shift: where are all the job losses?</a></p></li><li><p><a href="https://www.cnbc.com/2026/01/20/salesforce-benioff-ai-regulation-suicide-coaches.html">Salesforce&#8217;s Benioff calls for AI regulation, says models have become &#8216;suicide coaches&#8217;</a></p></li><li><p><a 
href="https://www.npr.org/2025/09/08/nx-s1-5528762/ai-slop-attention-economy">How AI slop is clogging your brain</a></p></li><li><p><a href="https://www.theglobeandmail.com/business/article-return-on-generative-ai-investments-survey-2-canadian-businesses/">Just 2% of Canadian businesses got return on generative AI investments, survey shows</a></p></li><li><p><a href="https://www.404media.co/ai-darwin-awards-show-ais-biggest-problem-is-human/">AI Darwin Awards Show AI&#8217;s Biggest Problem Is Human</a></p></li><li><p><a href="https://www.economist.com/by-invitation/2025/09/09/ai-agents-are-coming-for-your-privacy-warns-meredith-whittaker?giftId=859d3a46-a913-4ee0-a3d5-060b774c2501">AI agents are coming for your privacy, warns Meredith Whittaker</a></p></li><li><p><a href="https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity">AI-Generated &#8220;Workslop&#8221; Is Destroying Productivity</a></p></li><li><p><a href="https://www.cnbc.com/2025/10/19/firms-are-blaming-ai-for-job-cuts-critics-say-its-a-good-excuse.html">Companies are blaming AI for job cuts. 
Critics say it&#8217;s a &#8216;good excuse&#8217;</a></p></li><li><p><a href="https://www.wsj.com/tech/ai/when-ai-hype-meets-ai-reality-a-reckoning-in-6-charts-bf8043b4">When AI Hype Meets AI Reality: A Reckoning in 6 Charts</a></p></li><li><p><a href="https://www.theverge.com/ai-artificial-intelligence/827820/large-language-models-ai-intelligence-neuroscience-problems">Large language mistake</a></p></li><li><p><a href="https://theconversation.com/i-got-an-ai-to-impersonate-me-and-teach-me-my-own-course-heres-what-i-learned-about-the-future-of-education-262734">I got an AI to impersonate me and teach me my own course &#8211; here&#8217;s what I learned about the future of education</a></p></li><li><p><a href="https://futurism.com/neoscope/advanced-ai-give-medical-advice-real-world">Something Extremely Scary Happens When Advanced AI Tries to Give Medical Advice to Real World Patients</a></p></li><li><p><a href="https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html">OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws</a></p></li><li><p><a href="https://catherinedenial.org/blog/uncategorized/against-generative-ai/">Against Generative AI</a></p></li><li><p><a href="https://www.wired.com/story/ai-bubble-will-burst/">AI Is the Bubble to Burst Them All</a></p></li><li><p><a href="https://arxiv.org/html/2408.06602v1">Super-intelligence or Superstition? 
Exploring Psychological Factors Underlying Unwarranted Belief in AI Predictions</a></p></li><li><p><a href="https://codemanship.wordpress.com/2025/10/28/the-ai-ready-software-developer-13-you-are-the-intelligence/">The AI-Ready Software Developer #13 &#8211; *You* Are The Intelligence</a></p></li><li><p><a href="https://blog.robbowley.net/2025/11/05/findings-from-dxs-2025-report-ai-wont-save-you-from-your-engineering-culture/">Findings from DX&#8217;s 2025 report: AI won&#8217;t save you from your engineering culture</a></p></li><li><p><a href="https://gizmodo.com/ai-capabilities-may-be-overhyped-on-bogus-benchmarks-study-finds-2000682577">AI Capabilities May Be Overhyped on Bogus Benchmarks, Study Finds</a></p></li><li><p><a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai">The state of AI in 2025: Agents, innovation, and transformation</a></p></li><li><p><a href="https://www.economist.com/finance-and-economics/2025/11/26/investors-expect-ai-use-to-soar-thats-not-happening?link_source=ta_bluesky_link&amp;taid=692b6389e171010001edca40">Investors expect AI use to soar. That&#8217;s not happening</a></p></li><li><p><a href="https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding">Where's the Shovelware? 
Why AI Coding Claims Don't Add Up</a></p></li><li><p><a href="https://codemanship.wordpress.com/2026/01/12/yes-maintainability-still-matters-in-ai-assisted-coding/">Yes, Maintainability Still Matters in &#8220;AI&#8221;-assisted Coding</a></p></li><li><p><a href="https://www.theverge.com/news/861576/google-gemini-ai-personal-intelligence-gmail-search-youtube-photos">Google&#8217;s Gemini AI will use what it knows about you from Gmail, Search, and YouTube</a></p></li><li><p><a href="https://www.chronicle.com/article/how-ai-is-changing-higher-education">How AI Is Changing Higher Education</a></p></li><li><p><a href="https://www.theguardian.com/technology/2025/nov/22/ai-workers-tell-family-stay-away">Meet the AI workers who tell their friends and family to stay away from AI</a></p></li><li><p><a href="https://www.youtube.com/shorts/aIvHf8vsWBM">Vibe Coding Fails</a></p></li><li><p><a href="https://www.youtube.com/watch?v=0ANECpNdt-4"> AI Agent, AI Spy</a></p></li><li><p><a href="https://buttondown.com/apperceptive/archive/ai-is-bad-ux/">"AI" is bad UX</a></p></li><li><p><a href="https://futurism.com/artificial-intelligence/chatbots-teen-mental-health-chatgpt-gemini-claude">Report Finds That Leading Chatbots Are a Disaster for Teens Facing Mental Health Struggles</a></p></li><li><p><a href="https://artificialbureaucracy.substack.com/p/context-widows">Context Widows</a></p></li><li><p><a href="https://www.yahoo.com/news/articles/researchers-just-found-something-could-140342111.html">Researchers Just Found Something That Could Shake the AI Industry to Its Core</a></p></li><li><p><a href="https://maarthandam.com/2025/12/25/salesforce-regrets-firing-4000-staff-ai/">Salesforce regrets firing 4000 experienced staff and replacing them with AI</a></p></li><li><p><a href="https://lucumr.pocoo.org/2026/1/18/agent-psychosis/">Agent Psychosis: Are We Going Insane?</a></p></li><li><p><a 
href="https://codemanship.wordpress.com/2026/01/03/the-ai-ready-software-developer-20-its-the-bottlenecks-stupid/">The AI-Ready Software Developer #20 &#8211; It&#8217;s The Bottlenecks, Stupid!</a></p></li><li><p><a href="https://www.theguardian.com/us-news/ng-interactive/2026/jan/18/tech-ai-bubble-burst-reverse-centaur">AI companies will fail. We can salvage something from the wreckage</a></p></li><li><p><a href="https://www.theregister.com/2026/01/22/cursor_ai_wrote_a_browser/">Cursor used agents to write a browser, proving AI can write shoddy code at scale</a></p></li><li><p><a href="https://www.theregister.com/2026/02/07/ai_companion_bots_vishal_sikka_interview/">Whether they are building agents or folding proteins, LLMs need a friend</a></p></li><li><p><a href="https://futurism.com/artificial-intelligence/google-ai-overviews-media">Evidence Grows That Google&#8217;s AI Overviews Have Eviscerated the Media Industry</a></p></li><li><p><a href="https://blog.robbowley.net/2026/01/30/sixty-years-of-learning-the-same-lesson/">Sixty years of learning the same lesson</a></p></li><li><p><a href="https://codemanship.wordpress.com/2026/01/30/coding-is-when-were-least-productive/">Coding Is When We&#8217;re Least Productive</a></p></li><li><p><a href="https://www.anthropic.com/research/AI-assistance-coding-skills">How AI assistance impacts the formation of coding skills</a></p></li><li><p><a href="https://codemanship.wordpress.com/2026/02/02/am-i-anti-ai-no-im-anti-harm/">Am I Anti-AI? No. 
I&#8217;m Anti-Harm.</a></p></li><li><p><a href="https://www.youtube.com/watch?v=aI7XknJJC5Q">Gary Marcus on the Massive Problems Facing AI &amp; LLM Scaling</a></p></li><li><p><a href="https://olivia.science/ai/">critical AI literacies for resisting and reclaiming</a></p></li><li><p><a href="https://www.businessinsider.com/moltbook-openclaw-social-network-boring-2026-2">Moltbook is about as fun as watching two Roombas bump into each other</a></p></li><li><p><a href="https://zenodo.org/records/17065099">Against the Uncritical Adoption of 'AI' Technologies in Academia</a></p></li><li><p><a href="https://www.youtube.com/watch?v=b9EbCb5A408"> We Studied 150 Developers Using AI (Here&#8217;s What's Actually Changed...)</a></p></li><li><p><a href="https://timesofindia.indiatimes.com/technology/tech-news/elon-musk-gives-less-than-a-year-to-coding-as-a-profession-says-there-is-no/articleshow/128244238.cms">Elon Musk gives less than a year to coding as a profession</a></p></li><li><p><a href="https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-study-robert-solow-information-technology-age/">Thousands of CEOs just admitted AI had no impact on employment or productivity&#8212;and it has economists resurrecting a paradox from 40 years ago</a></p></li><li><p><a href="https://clocks.brianmoore.com/">AI World Clocks</a></p></li><li><p><a href="https://codemanship.wordpress.com/2025/10/26/the-ai-ready-software-developer-10-comprehension-debt/">The AI-Ready Software Developer #10 &#8211; Comprehension Debt</a></p></li><li><p><a href="https://codescene.com/blog/agentic-ai-coding-best-practice-patterns-for-speed-with-quality">Agentic AI Coding: Best Practice Patterns for Speed with Quality</a></p></li><li><p><a href="https://profc.substack.com/p/the-ai-treadmill">The AI Treadmill</a></p></li><li><p><a href="https://www.wheresyoured.at/data-center-crisis/">The AI Data Center Financial Crisis</a></p></li><li><p><a href="https://olivia.science/before/">We've been here 
before</a></p></li><li><p><a href="https://markusharrer.de/blog/2026/02/18/ai-productivity-gains-in-different-situations/">AI Productivity Gains in Different Situations</a></p></li><li><p><a href="https://www.wheresyoured.at/premium-the-haters-guide-to-anthropic/">The Hater's Guide to Anthropic</a></p></li><li><p><a href="https://www.theatlantic.com/ideas/2026/02/ai-white-collar-jobs/686031/?gift=SCYx-5scVta3-cr_IlgTye5UuEDMsmIn8A8Cc1O-vk0">The Worst-Case Future for White-Collar Workers</a></p></li><li><p><a href="https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai">&#8216;It&#8217;s destroyed me completely&#8217;: Kenyan moderators decry toll of training of AI models</a></p></li><li><p><a href="https://github.com/anthropics/claude-code/issues/19739">Unified Bug Report: Claude Code Agent Systematic Failure Patterns</a></p></li><li><p><a href="https://www.wsj.com/economy/jobs/tech-has-never-caused-a-job-apocalypse-dont-bet-on-it-now-d192b579">Tech Has Never Caused a Job Apocalypse. 
Don&#8217;t Bet on It Now</a></p></li><li><p><a href="https://anthonymoser.github.io/writing/ai/haterdom/2025/08/26/i-am-an-ai-hater.html">I Am An AI Hater</a></p></li><li><p><a href="https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies">&#8216;Unbelievably dangerous&#8217;: experts sound alarm after ChatGPT Health fails to recognise medical emergencies</a></p></li><li><p><a href="https://arxiv.org/abs/2602.11988">Evaluating AGENTS.md: Are Repository-Level Context Files Helpful for Coding Agents?</a></p></li><li><p><a href="https://www.404media.co/ai-translations-are-adding-hallucinations-to-wikipedia-articles/">AI Translations Are Adding &#8216;Hallucinations&#8217; to Wikipedia Articles</a></p></li><li><p><a href="https://nypost.com/2026/03/06/us-news/blowhard-chatgpt-bot-posed-as-lawyer-convinced-woman-to-fire-her-real-attorney-while-citing-phony-case-law-suit/">Blowhard ChatGPT bot posed as lawyer, convinced woman to fire her real attorney &#8212; while citing phony &#8216;case law&#8217;: suit</a></p></li><li><p><a href="https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it">AI Doesn&#8217;t Reduce Work&#8212;It Intensifies It</a></p></li><li><p><a href="https://osf.io/preprints/psyarxiv/t9u8g_v1">Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity</a></p></li><li><p><a href="https://garymarcus.substack.com/p/1984-but-with-llms">1984, but with LLM&#8217;s</a></p></li><li><p><a href="https://arxiv.org/abs/2410.05229">GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models</a></p></li><li><p><a href="https://arxiv.org/abs/2506.08872">Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task</a></p></li><li><p><a href="https://cdn.sanity.io/files/4zrzovbb/website/45bc6adf039848841ed9e47051fb1209d6bb2b26.pdf">Operating Multi-Client Influence Networks Across Platforms</a></p></li><li><p><a 
href="https://garymarcus.substack.com/p/a-knockout-blow-for-llms">A knockout blow for LLMs?</a></p></li><li><p><a href="https://ai-2027.com/ai-2027.pdf">AI 2027</a></p></li><li><p><a href="https://garymarcus.substack.com/p/breaking-sycophantic-ai-distorts">Breaking: &#8220;sycophantic AI distorts belief, manufacturing certainty where there should be doubt&#8221;</a></p></li><li><p><a href="https://codemanship.wordpress.com/2026/03/03/yeah-about-your-claude-md-file/">Yeah, About Your CLAUDE.md file&#8230;</a></p></li><li><p><a href="https://garymarcus.substack.com/p/ai-agents-have-so-far-mostly-been">AI Agents have, so far, mostly been a dud</a></p></li><li><p><a href="https://www.damiencharlotin.com/hallucinations/">AI Hallucination Cases</a></p></li><li><p><a href="https://www.404media.co/ai-therapy-bots-meta-character-ai-ftc-complaint/">AI Therapy Bots Are Conducting 'Illegal Behavior,' Digital Rights Organizations Say</a></p></li><li><p><a href="https://garymarcus.substack.com/p/ai-layoffs-productivity-and-the-klarna">AI, layoffs, productivity and The Klarna Effect</a></p></li><li><p><a href="https://www.newsguardtech.com/ai-monitor/august-2025-ai-false-claim-monitor/">AI False Information Rate Nearly Doubles in One Year</a></p></li><li><p><a href="https://americansunlight.substack.com/p/bad-actors-are-grooming-llms-to-produce">Bad Actors are Grooming LLMs to Produce Falsehoods</a></p></li><li><p><a href="https://garymarcus.substack.com/p/botshit-gone-wild">Botshit Gone Wild</a></p></li><li><p><a href="https://www.youtube.com/watch?v=cB0_-qKbal4">The AI Dilemma</a></p></li><li><p><a href="https://www.vox.com/future-perfect/415646/artificial-intelligencer-chatgpt-claude-privacy-surveillance">AI can now stalk you with just a single vacation photo</a></p></li><li><p><a href="https://www.adexchanger.com/the-sell-sider/commoditization-2-0-how-agentic-ai-could-undermine-the-open-webs-best-publishers">Commoditization 2.0: How Agentic AI Could Undermine The Open 
Web&#8217;s Best Publishers</a></p></li><li><p><a href="https://mail.cyberneticforests.com/complete-accuracy-collapse/">Complete Accuracy Collapse</a></p></li><li><p><a href="https://garymarcus.substack.com/p/decoding-and-debunking-hard-forks">Decoding (and debunking) Hard Fork&#8217;s Kevin Roose</a></p></li><li><p><a href="https://amandaguinzburg.substack.com/p/diabolus-ex-machina">Diabolus Ex Machina</a></p></li><li><p><a href="https://tante.cc/2024/10/16/does-open-source-ai-really-exist/">Does Open Source AI really exist?</a></p></li><li><p><a href="https://www.youtube.com/watch?v=_zs6BXdesaw">General Purpose is an Accountability Loophole</a></p></li><li><p><a href="https://www.gitclear.com/research/google_dora_2024_summary_ai_impact">Distilled summary of 2024/2025 Google DORA Report</a></p></li><li><p><a href="https://www.youtube.com/watch?v=tbDDYKRFjhk">Does AI Actually Boost Developer Productivity?</a></p></li><li><p><a href="https://www.youtube.com/watch?v=C8ddJ5b2TG0">Empire of AI: Karen Hao &amp; Roger McNamee Expose Sam Altman, Musk &amp; Big Tech</a></p></li><li><p><a href="https://www.architectureandgovernance.com/uncategorized/excerpt-on-multi-agent-systems-agentic-ai-from-the-ebook-mastering-ai-ethics-and-safety">Excerpt on Multi-Agent Systems (Agentic AI) from the eBook &#8216;Mastering AI Ethics and Safety&#8217;</a></p></li><li><p><a href="https://stackoverflow.co/internal/resources/2025-stack-overflow-developer-survey-for-leaders/final-thoughts/">2025 Stack Overflow Developer Survey: A TL;DR for Leaders</a></p></li><li><p><a href="https://garymarcus.substack.com/p/five-quick-updates-about-that-apple">Five quick updates about that Apple reasoning paper that people can&#8217;t stop talking about</a></p></li><li><p><a href="https://garymarcus.substack.com/p/gpt-4s-successes-and-gpt-4s-failures">GPT-4&#8217;s successes, and GPT-4&#8217;s failures</a></p></li><li><p><a 
href="https://cloud.google.com/resources/content/dora-impact-of-gen-ai-software-development">Unlock the potential of gen AI while mitigating risks with actionable insights from the special edition DORA report.</a></p></li><li><p><a href="https://blog.robbowley.net/2025/05/12/genai-coding-most-teams-arent-ready/">GenAI coding: most teams aren&#8217;t ready</a></p></li><li><p><a href="https://garymarcus.substack.com/p/generative-ais-crippling-and-widespread">Generative AI&#8217;s crippling and widespread failure to induce robust models of the world</a></p></li><li><p><a href="https://www.gitclear.com/ai_assistant_code_quality_2025_research">AI Copilot Code Quality: 2025 Look Back at 12 Months of Data</a></p></li><li><p><a href="https://gitclear-public.s3.us-west-2.amazonaws.com/GitClear-AI-Copilot-Code-Quality-2025.pdf">AI Copilot Code Quality</a></p></li><li><p><a href="https://www.cnbc.com/2025/07/11/goldman-sachs-autonomous-coder-pilot-marks-major-ai-milestone.html">Goldman Sachs is piloting its first autonomous coder in major AI milestone for Wall Street</a></p></li><li><p><a href="https://www.404media.co/goldman-sachs-ai-is-overhyped-wildly-expensive-and-unreliable/">Goldman Sachs: AI Is Overhyped, Wildly Expensive, and Unreliable</a></p></li><li><p><a href="https://garymarcus.substack.com/p/horse-rides-astronaut">Horse rides astronaut</a></p></li><li><p><a href="https://garymarcus.substack.com/p/humans-versus-machines-the-hallucination">Humans versus Machines: The Hallucination Edition</a></p></li><li><p><a href="https://www.cjr.org/analysis/i-tested-how-well-ai-tools-work-for-journalism.php">I Tested How Well AI Tools Work for Journalism</a></p></li><li><p><a href="https://stackoverflow.blog/2025/06/02/integrating-ai-agents-navigating-challenges-ensuring-security-and-driving-adoption/">Integrating AI agents: Navigating challenges, ensuring security, and driving adoption</a></p></li><li><p><a 
href="https://garymarcus.substack.com/p/llms-coding-agents-security-nightmare">LLMs + Coding Agents = Security Nightmare</a></p></li><li><p><a href="https://garymarcus.substack.com/p/llms-dishonest-unpredictable-and">LLMs: Dishonest, unpredictable and potentially dangerous.</a></p></li><li><p><a href="https://www.nber.org/papers/w33777#fromrss">Large Language Models, Small Labor Market Effects</a></p></li><li><p><a href="https://www.theregister.com/2025/09/04/m365_copilot_uk_government/">UK government trial of M365 Copilot finds no clear productivity boost</a></p></li><li><p><a href="https://www.restaurantbusinessonline.com/technology/mcdonalds-ending-its-drive-thru-ai-test">McDonald's is ending its drive-thru AI test</a></p></li><li><p><a href="https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/">Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity</a></p></li><li><p><a href="https://www.tomshardware.com/news/microsoft-lost-money-on-ai">Microsoft Lost $20 for Every $10 Copilot AI Subscription</a></p></li><li><p><a href="https://www.theguardian.com/technology/2025/jul/09/grok-ai-praised-hitler-antisemitism-x-ntwnfb">Musk&#8217;s AI firm forced to delete posts praising Hitler from Grok chatbot</a></p></li><li><p><a href="https://www.tomshardware.com/video-games/retro-gaming/not-to-be-outdone-by-chatgpt-microsoft-copilot-humiliates-itself-in-atari-2600-chess-showdown-another-ai-humbled-by-1970s-tech-despite-trash-talk">Not to be outdone by ChatGPT, Microsoft Copilot humiliates itself in Atari 2600 chess showdown &#8212; another AI humbled by 1970s tech despite trash talk</a></p></li><li><p><a href="https://msukhareva.substack.com/p/on-illusion-of-thinking-do-llms-reason">On Illusion of Thinking - Do LLMs Reason?</a></p></li><li><p><a href="https://garymarcus.substack.com/p/openais-waterloo">OpenAI&#8217;s Waterloo? 
[with corrections]</a></p></li><li><p><a href="https://futurism.com/chatgpt-dating-advice-results">People Are Asking ChatGPT for Relationship Advice and It&#8217;s Ending in Disaster</a></p></li><li><p><a href="https://www.youtube.com/watch?v=bIwIE2ZroJs">Pieces AI Productivity Summit</a></p></li><li><p><a href="https://pivot-to-ai.com/2025/08/12/prompt-inject-copilot-studio-ai-via-email-grab-a-companys-whole-salesforce/">Prompt-inject Copilot Studio AI via email, grab a company&#8217;s whole Salesforce</a></p></li><li><p><a href="https://docs.google.com/document/d/1DKpUUvKyH9Ql6_ubftYMiZloXizJU38YSjtP5i8MIx0/edit?tab=t.0">Questioning AI Resource List</a></p></li><li><p><a href="https://www.samharris.org/podcasts/making-sense-episodes/312-the-trouble-with-ai">The Trouble with AI</a></p></li><li><p><a href="https://garymarcus.substack.com/p/slopocalypse-now">Slopocalypse Now</a></p></li><li><p><a href="https://futurism.com/stanford-therapist-chatbots-encouraging-delusions">Stanford Research Finds That &#8220;Therapist&#8221; Chatbots Are Encouraging Users&#8217; Schizophrenic Delusions and Suicidal Thoughts</a></p></li><li><p><a href="https://www.techtarget.com/whatis/feature/Tech-sector-layoffs-explained-What-you-need-to-know">Tech sector layoffs explained: What you need to know</a></p></li><li><p><a href="https://futurism.com/ai-far-away-profit-experts-warn">The AI Industry Is Still Light-Years From Making a Profit, Experts Warn</a></p></li><li><p><a href="https://www.linkedin.com/pulse/age-ai-has-begun-bill-gates/">The Age of AI has begun</a></p></li><li><p><a href="http://The Death of the Student Essay&#8212;and the Future of Cognition">The Death of the Student Essay&#8212;and the Future of Cognition</a></p></li><li><p><a href="https://blog.samaltman.com/the-gentle-singularity">The Gentle Singularity</a></p></li><li><p><a href="https://www.bloodinthemachine.com/p/this-is-the-gentle-singularity">This is the gentle singularity?</a></p></li><li><p><a 
href="https://www.stephendiehl.com/posts/ai_for_coding/">The Stochastic Code Monkey Theorem</a></p></li><li><p><a href="https://www.theintrinsicperspective.com/p/the-joy-of-blackouts-ai-ruins-college">The joy of blackouts; AI ruins college; The Consciousness Wars continue; Peter Singer&#8217;s chatbot betrays him, &amp; more</a></p></li><li><p><a href="https://theneuroscienceofeverydaylife.substack.com/p/this-article-is-more-powerful-than">This article is more powerful than the human brain.</a></p></li><li><p><a href="https://arstechnica.com/ai/2025/05/time-saved-by-ai-offset-by-new-work-created-study-suggests/">Time saved by AI offset by new work created, study suggests</a></p></li><li><p><a href="https://www.baldurbjarnason.com/2025/trusting-your-own-judgement-on-ai/">Trusting your own judgement on &#8216;AI&#8217; is a huge risk</a></p></li><li><p><a href="https://www.thedailybeast.com/tulsi-gabbard-admits-to-asking-ai-what-to-classify-in-jfk-files/">Tulsi Gabbard Admits She Asked AI Which JFK Files Secrets to Reveal</a></p></li><li><p><a href="https://www.politico.com/news/2026/01/27/cisa-madhu-gottumukkala-chatgpt-00749361">Trump&#8217;s acting cyber chief uploaded sensitive files into a public version of ChatGPT</a></p></li><li><p><a href="https://www.economist.com/united-states/2026/01/22/ed-tech-is-profitable-it-is-also-mostly-useless">Ed tech is profitable. 
It is also mostly useless</a></p></li><li><p><a href="https://deepakness.com/raw/vibe-coding-is-not-new/">Vibe coding won't replace software engineers</a></p></li><li><p><a href="https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/">When ChatGPT summarises, it actually does nothing of the kind</a></p></li><li><p><a href="https://garymarcus.substack.com/p/why-do-large-language-models-hallucinate">Why DO large language models hallucinate?</a></p></li><li><p><a href="https://stackoverflow.co/internal/resources/why-high-quality-data-is-essential-for-agentic-ai/">Why high-quality data is essential for agentic AI</a></p></li><li><p><a href="https://www.brainonllm.com/">Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task</a></p></li><li><p><a href="https://thehackernews.com/2025/06/zero-click-ai-vulnerability-exposes.html">Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction</a></p></li><li><p><a href="https://dora.dev/ai/gen-ai-report/dora-impact-of-generative-ai-in-software-development.pdf">Impact of Generative AI in Software Development</a></p></li><li><p><a href="https://arxiv.org/pdf/2506.11928">LiveCodeBench Pro: How Do Olympiad Medalists Judge LLMs in Competitive Programming?</a></p></li><li><p><a href="https://arxiv.org/abs/2507.09089">Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity</a></p></li><li><p><a href="https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf">The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity</a></p></li><li><p><a href="https://garymarcus.substack.com/p/those-claiming-were-mere-months-away">&#8220;Those claiming we&#8217;re mere months away from AI agents replacing most programmers&#8221; should think again</a></p></li><li><p><a href="https://libguides.nus.edu.sg/digitalliteracy">What is digital 
literacy and why is it important?</a></p></li></ul><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[The greatest crowdsourcing story ever sold]]></title><description><![CDATA[Taxed on both ends]]></description><link>https://blog.nappisite.com/p/the-greatest-crowdsourcing-story</link><guid isPermaLink="false">https://blog.nappisite.com/p/the-greatest-crowdsourcing-story</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Fri, 06 Mar 2026 21:39:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fd29f8b0-7e42-423c-bc97-316608f0fd50_1024x608.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sometimes it feels like A.I. has been dumped in our collective laps with a &#8220;you figure out what to do with it&#8221; mentality. The premise, from the A.I. companies, seems to be that they have created an impressive and incredibly powerful, albeit potentially dangerous, technology, but they&#8217;re not quite sure what it&#8217;s good for, or how to make it safe or profitable.  </p><p>That&#8217;s where we come in, citizens of the world. They&#8217;ll give us free or cheap access to A.I. tools, embed them everywhere and see what we come up with.  
These things are so powerful and so impressive, they just have to be equally useful, and ultimately valuable. They must be. If you don&#8217;t think so, you must be using them wrong.  It&#8217;s too important not to try. There are only two left, going fast, buy now.</p><p>I guess in some sense that&#8217;s not unlike the internet, which is not a product in and of itself, but a platform upon which a great deal of innovation occurred and profit was made.  Maybe that&#8217;s the case here too?</p><p>In this case the profits will be built not only on the effort that went into creating the content they are trained on, but also on the work that will go into figuring out how to make them useful. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Why are LLM’s so good at writing code, but bad at everything else?]]></title><description><![CDATA[Guardrails]]></description><link>https://blog.nappisite.com/p/why-are-llms-so-good-at-writing-code</link><guid isPermaLink="false">https://blog.nappisite.com/p/why-are-llms-so-good-at-writing-code</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Wed, 04 Mar 2026 19:34:45 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/4b01ef72-7dfd-4beb-b7a0-f7f7dc7d0903_1024x608.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There seems to be a general consensus forming that while it&#8217;s debatable whether A.I. tools are effective or safe in other professions (education, medicine, legal, etc.), <strong>one place they clearly shine is in software development</strong>. Ignoring for a moment whether that&#8217;s actually true or not, it&#8217;s worth considering why that might be true and what it implies.</p><p>What&#8217;s different about programming? </p><p>The main reason why these tools may seem to be better at coding than other tasks is due to constraints. Programming languages have very precise syntax, and they come with compilers, linters, automated testing frameworks, and feedback mechanisms (output, logging, etc.). All of these constraints provide guardrails that keep the LLM&#8217;s propensity for hallucination in check. LLMs can hallucinate incorrect syntax and non-existent API calls (and they often do), but they don&#8217;t get far because the code won&#8217;t compile. 
They can hallucinate sloppy code and incorrect implementations, but the linters will raise alarms, and the tests will fail. Further, these failures feed information back to the LLMs so they can correct their mistakes (trial-and-error style) until they produce something that compiles, passes static analysis checks, and passes the tests.  While these checks aren&#8217;t perfect and don&#8217;t guarantee success, they are far stricter than anything available for a legal document or medical advice, and they provide iterative feedback much earlier and more cost-effectively.</p><p>It may not be that LLMs are &#8220;better at writing code&#8221;; they are just more constrained there, so some of their unreliability can be mitigated.  That explanation has the advantage of making more sense, as well as demonstrating how they can be made safer, in software development and beyond.  </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How vague can I be?]]></title><description><![CDATA[Where do we add value?]]></description><link>https://blog.nappisite.com/p/how-vague-can-i-be</link><guid isPermaLink="false">https://blog.nappisite.com/p/how-vague-can-i-be</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Fri, 27 Feb 2026 22:12:42 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bb5dc99c-2fc7-4b29-82af-6d99f5e05627_1024x608.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Vibe coding, prompt engineering, intent-driven software development and the promise of fully autonomous coding agents have me wondering where in the loop proponents imagine the human sits, and what their differentiating expertise is.</p><p>If rather than specifying intent with programming languages, configuration and deployment tools, I can specify my systems with terse natural language prompts, spinning up agents capable of generating, managing, deploying and monitoring all of those artifacts, what do I need to be good at?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! 
Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><ul><li><p>Do I need to understand all those technical details (coding, testing, compiling, debugging, deploying) in order to construct prompts, or can I trust the A.I. and not bother with the details?  </p></li><li><p>Do I need to be able to understand architecture, security and cost trade-offs, or can I prompt an A.I. to generate an architecture?</p></li><li><p>Do I need to be able to translate business requirements and technical documents into prompts, or can I feed them directly to an A.I.? </p></li><li><p>Do I need to understand the business in order to create business requirements, or can I get an A.I. to generate them?</p></li><li><p>Can I prompt an A.I. to do all of that in one step? Can I just provide the business overview, a general need, a problem statement?</p></li><li><p>Is the only skill required the ability to articulate prompts? What exactly is that skill?</p></li><li><p>Can I get an A.I. to articulate prompts? What is the fewest number of prompts I need?</p></li></ul><p>Am I a race car driver with a faster car, or a race car driver with a self-driving car? Am I driving or just telling the car to win the race? 
What makes me a better race car prompter than anyone else?</p><p>I doubt we can get anywhere close to this imagined intent-based world with the current LLM technology, but if we could, would that be desirable?</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Where is the coding assistant sweet spot?]]></title><description><![CDATA[How to get to 10X]]></description><link>https://blog.nappisite.com/p/where-is-the-coding-assistant-sweet</link><guid isPermaLink="false">https://blog.nappisite.com/p/where-is-the-coding-assistant-sweet</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Thu, 19 Feb 2026 22:22:24 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/593b846b-2701-44fe-a727-ac3f6105c836_1024x608.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There are times when coding assistants (CA) work really well, and times when they struggle.  When they work they can be truly amazing time savers, but when they don&#8217;t they can be a time sink.  It&#8217;s difficult to see how this one-step-forward, two-steps-back pattern achieves 10X productivity. 
Maybe on an individual task, but not on average over a sustained period.</p><p>To get that huge productivity boost, we&#8217;d need to hit the time savers and avoid the time sinks. Unfortunately, the tasks capable of the greatest time savings, by virtue of doing lots of work for us, are the ones that are most likely to generate time sinks when they go awry.  </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>To optimize our productivity, we need to define tasks for the CA that are narrow enough that it is likely to complete them quickly and correctly on the first try (or rapidly iterate) and that we can verify efficiently. But that leaves us with a conundrum:</p><ol><li><p>A task too narrow is actually less efficient with a CA than without (e.g. renaming a method). The instructing, thinking, editing, and summarizing are slower than a typical refactoring tool would be, and carry with them the possibility of error.</p></li><li><p>A task too broad will increase time sink potential.</p></li></ol><p>We need to choose tasks that are narrow enough to have a high success rate, and broad enough to accomplish more work faster than could be done without the CA.  That&#8217;s the Goldilocks zone.  
But even if you hit it, you&#8217;d need to string a lot of them together to get to a 10X average productivity boost.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Are we really talking about A.I.?]]></title><description><![CDATA[The rise of Agentic Autocomplete]]></description><link>https://blog.nappisite.com/p/are-we-really-talking-about-ai</link><guid isPermaLink="false">https://blog.nappisite.com/p/are-we-really-talking-about-ai</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Sun, 15 Feb 2026 15:13:27 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e3ab657c-f6cf-48ce-8b5e-07896a915b6f_1024x608.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I think the term A.I. is doing a lot of the heavy lifting in most discussions. Not only is it imprecise and difficult to know what anyone is referring to when they say A.I., but it also presupposes intelligence.  The term is applied to automation, pattern recognition, machine learning, chatbots, AGI, and everything in between. </p><p>How would things look if we used a different term? 
Assume this new term is just another name for the same LLM systems with the same capabilities that we currently call A.I. What if we called it Advanced Autocomplete (A.A.) or Interactive Autocomplete (I.A.), with its companion Agentic Autocomplete?  </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>It would be more difficult to ascribe intelligence to autocomplete. After all, we all have experience with the unreliability of autocorrect.  Would we be so quick to anthropomorphize consciousness in an autocomplete system? Would we be as concerned that most white-collar work could be replaced by autocomplete agents? Would we see huge valuations of autocomplete companies?</p><p>Because we so loosely use the A.I. term for all of these tools and systems, the presumption of intelligence gets smuggled in, and is often conflated with AGI. 
It causes us to muddle the potential benefit and impact of true A.I. (AGI) with the benefits and impacts of the current technology (LLMs).</p><p>Before we can evaluate, we need to know what we&#8217;re talking about.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Coding Assistance Case Study]]></title><description><![CDATA[There'll be time enough for counting when the dealing is done]]></description><link>https://blog.nappisite.com/p/coding-assistance-case-study</link><guid isPermaLink="false">https://blog.nappisite.com/p/coding-assistance-case-study</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Thu, 12 Feb 2026 21:12:58 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6a0ec442-a89d-4151-afaa-4877473e9673_1024x608.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I like coding assistants and I use them every day. They are equal parts amazing and boneheaded.  There are so many variables and so much nuance contributing to the boneheaded results that I&#8217;m always looking for the straightforward use cases, the ones that should showcase the productivity-enhancing power.  
</p><p>Recently, I had what I thought to be the classic use case for a coding assistant: remove the tedium and do in seconds something that would take me minutes or hours. </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><h1>Case study</h1><p>I recently upgraded my application&#8217;s MSTests to the newer <a href="https://learn.microsoft.com/en-us/dotnet/core/testing/microsoft-testing-platform-intro?tabs=dotnetcli">Microsoft.Testing.Platform</a>. This migration was done a few weeks back. 
But it left behind hundreds of analyzer warnings imploring me to adopt the new MSTest conventions.</p><p>Ripping through hundreds of tests and swapping out well defined implementations seemed like the perfect use case for a coding assistant.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!LvZG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!LvZG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png 424w, https://substackcdn.com/image/fetch/$s_!LvZG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png 848w, https://substackcdn.com/image/fetch/$s_!LvZG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png 1272w, https://substackcdn.com/image/fetch/$s_!LvZG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!LvZG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png" width="845" height="283" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:283,&quot;width&quot;:845,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:45638,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://blog.nappisite.com/i/187769395?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!LvZG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png 424w, https://substackcdn.com/image/fetch/$s_!LvZG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png 848w, https://substackcdn.com/image/fetch/$s_!LvZG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png 1272w, https://substackcdn.com/image/fetch/$s_!LvZG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb01b0a6b-2aff-4f08-b877-37fb15841831_845x283.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20"></svg></button></div></div></div></a></figure></div><h2>Take 1</h2><p>I started out in Visual Studio Copilot with Claude Sonnet 4.5.  I instructed Copilot to find and correct all of the MSTest analyzer warnings.</p><p>Copilot did a reasonable job of compiling the solution and analyzing the build output to identify the MSTest warnings. It started to replace occurrences of [DataTestMethod] with [TestMethod], which is a sensible first step. But it got stuck processing for at least twenty minutes. It updated a few dozen occurrences, then declared victory. This was already off-track, because a simple find and replace would have accomplished this in seconds, yet Copilot processed for long durations, and missed a lot of them.  I then instructed it to finish the rest.  Copilot more or less engaged in the same process of long processing time, fixing a few warnings, and then giving up, again declaring it was done.  
I tried this a few more times, prompting it to finish, which got it to restart. Eventually Copilot made a change that created a compiler error, and then went off the rails breaking and fixing unrelated things. </p><p>I eventually stopped it.  This exercise took me a few hours and I didn&#8217;t really have anything useful to show for it.</p><h2>Take 2</h2><p>I decided to give Copilot CLI with GPT-5-mini a shot. The experience was different but similar. On the first attempt Copilot decided to suppress the warnings rather than fix them. After I rejected the warning suppression change, Copilot performed like the Visual Studio attempt, although it required much more interaction. Copilot stopped frequently, requiring me to intervene. It often declared it was done, sometimes claiming there were no more warnings several times in a row before eventually continuing. </p><p>I eventually stopped it.  This exercise took me a few hours and I didn&#8217;t really have anything useful to show for it.</p><h2>Take 3</h2><p>I decided to try Rider with Junie for a completely different tooling perspective. The results were similar. In this case it did seem to successfully perform some edits, albeit extremely slowly, spending hours to get about 20% of the way through. It did crash a few times, and I had to intervene and tell it to continue. But like the others, it took too long and needed to be periodically corrected, prompted to continue, or reminded that it was not done.</p><h1>Summary</h1><p>Needless to say, these tools did not remove the tedium of a rudimentary task. In fact, they failed miserably. Now one could argue it was too many files, or too vague an instruction (I should have told it to use find-replace, for example), but isn&#8217;t this what it&#8217;s supposed to be good at? If it can&#8217;t do this, how much autonomy can we give these things?</p><p>Even if it took hours to complete, without requiring intervention, that would have had some value. 
I could have let it do its work and come back at the end of the day to find it all done. But they couldn&#8217;t even do that.</p><p>I often feel like a gambler. When I get a good result in the morning, and the coding assistant does something impressive or makes things easier for me, I should take my winnings for the day and walk away from the table. If I play long enough, inevitably I lose my winnings.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Would I pay an A.I. agent minimum wage?]]></title><description><![CDATA["Only drug dealers and software developers refer to their customers as 'users'"]]></description><link>https://blog.nappisite.com/p/would-i-pay-an-ai-agent-minimum-wage</link><guid isPermaLink="false">https://blog.nappisite.com/p/would-i-pay-an-ai-agent-minimum-wage</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Mon, 09 Feb 2026 22:02:41 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/bd5ec765-0bc9-4587-bd6e-d6c76c67d2a3_1024x608.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I think it&#8217;s instructive to think about how much you&#8217;d be willing to pay, given the value you get out of these tools. 
If I&#8217;m 10x more productive, <a href="https://youtu.be/aJUuJtGgkQg">can take the day off</a>, or am capable of doing something that I was not able to do before, then surely that is worth something?  </p><p>Would I gladly pay for it, or would I decide not to use it?</p><p>Presently, there are unnatural economics around these tools. They are produced at great cost, but provided for free or for mere dollars a day. More than provided, they are thrust upon us.  I think that should suggest something about their value.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[Will A.I. destroy my business?]]></title><description><![CDATA[What can I sell to the unemployed?]]></description><link>https://blog.nappisite.com/p/will-ai-destroy-my-business</link><guid isPermaLink="false">https://blog.nappisite.com/p/will-ai-destroy-my-business</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Fri, 06 Feb 2026 14:31:47 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/e5bbb46e-dde1-438d-816d-bc9fd7c5a70f_1024x608.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There&#8217;s a lot of discussion about the incredible wealth that A.I. 
will create for businesses, allowing them to eliminate &#8220;the tax of labor&#8221;, and the corollary negative implications for workers. Workers will experience somewhere between massive dystopian unemployment and massive utopian leisure time. </p><p>That picture tends to assume a zero-sum game in which employers thrive at the expense of workers. But it&#8217;s a picture that only examines the supply side (labor); what about the demand side (consumers)?  What does a company produce that will generate massive wealth without consumers? </p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://blog.nappisite.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Thanks for reading NappiSite! Subscribe for free to receive new posts and support my work.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>If everyone is unemployed, who&#8217;s buying tech devices, apps and subscriptions, traveling, dining out, attending events?  How am I, as a business owner, going to get rich selling my product to the unemployed masses?  Are my products recession-proof? Are they Great Depression-proof? Are they great-destruction-proof?</p><p>Do I sell products to other businesses that are used by a human workforce? What if those businesses no longer have a human workforce? Do I have a government contract? Can the government afford to buy my product when there is no tax base?  </p><p>Are all these businesses racing towards their own destruction, aggressively adopting the seeds of their own demise?  
The one-dimensional view of the A.I. economy, in which employers get rich and workers get poor, seems too simplistic. </p>]]></content:encoded></item><item><title><![CDATA[Why do I need to learn A.I.?]]></title><description><![CDATA[Is it an expert tool or a replacement for software engineers?]]></description><link>https://blog.nappisite.com/p/why-do-i-need-to-learn-ai</link><guid isPermaLink="false">https://blog.nappisite.com/p/why-do-i-need-to-learn-ai</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Sun, 01 Feb 2026 15:49:53 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/926e1d47-f8db-4902-b777-5d3e3af0449f_670x600.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I see the hyping, the boosting, the peer pressure; I feel the FOMO.  If I&#8217;m not using it, I&#8217;ll soon be obsolete. But it&#8217;s not just using it; I have to master it: all the tools, the latest tricks and techniques, of which there is a nearly endless supply, and they are constantly changing. 
I&#8217;m mostly talking about coding assistants, because that&#8217;s where my job and career intersect with A.I., but I imagine it&#8217;s the same for GenAI more broadly.</p><p>If I&#8217;m not ten times more productive and haven&#8217;t been able to do and learn things never before possible, it&#8217;s because I&#8217;m not using it right. Go back to step one. I need better prompting techniques, or I have the wrong combination of tools and models.</p><p>But if it&#8217;s as powerful as the hype, why do I need to spend so much time learning how to use it and adapting to its peculiarities? The question is three-fold:</p><ol><li><p>Shouldn&#8217;t it mostly just work? Shouldn&#8217;t I be more or less able to give it instructions and have it do the right thing? Shouldn&#8217;t it ask for clarification? Shouldn&#8217;t it know and automatically configure whatever tools and dependencies it needs, and know which of the latest tricks and techniques work best? Isn&#8217;t all this learning and adapting to the tools a drag on productivity? </p></li><li><p>How is it possible to be left behind? Maybe it&#8217;s not &#8220;there yet&#8221; but is getting better all the time.  
Can&#8217;t I just wait until it &#8220;mostly just works&#8221;, and wouldn&#8217;t I immediately catch up? How do I fall behind in a race to democratize programming and obsolesce software engineers?</p></li><li><p>Why do I need to learn a job it knows how to do better than me, so that I can better instruct it to do what I want?</p></li></ol><p>In some sense you&#8217;d have to doubt the hype if you&#8217;re spending time mastering it. Whatever advantage you&#8217;d gain, by your own admission, would be fleeting.  If you spend all your time mastering this generation of tools, and I just wait for the next one, won&#8217;t I just leapfrog you in an instant? I&#8217;d be eleven times more productive!  On the other hand, if it&#8217;s an expert tool (like a mass spectrometer) only mastered by those in the field, and it&#8217;s somewhat plateaued in capabilities, then it makes sense to master it. But then isn&#8217;t it more like a programming language, framework or IDE, reserved for and mastered by software engineers, an incremental improvement but not revolutionary?</p>]]></content:encoded></item><item><title><![CDATA[Can A.I. 
learn for me?]]></title><description><![CDATA[Can I get stronger lifting weights with a forklift?]]></description><link>https://blog.nappisite.com/p/can-ai-learn-for-me</link><guid isPermaLink="false">https://blog.nappisite.com/p/can-ai-learn-for-me</guid><pubDate>Tue, 24 Jun 2025 20:56:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/beed724d-5aa7-438b-8574-639a8fc16020_2000x1184.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There is a lot of enthusiasm for all of the things A.I. can do for us: reading, writing, coding, art, research and analysis, even legal/medical advice and therapy. But there is something particularly odd about the claim that A.I. can help us learn to do all those things while at the same time doing them for us.</p><p>Setting aside, for the moment, the suggestion that reading, writing, (thinking) are the mundane tasks that we want to be freed from, it&#8217;s unclear to me how having A.I. do things for us is good for learning. </p><p>The most common form of the learn-faster-with-A.I. approach is the &#8220;summary&#8221;: feeding books, research papers, podcasts, and email threads into A.I. and having it provide summaries.  
The idea is that A.I. can read text faster and therefore provide all the required information in a quick summary. All the learning, in a fraction of the time.</p><p>This would imply that all those books, research papers, podcasts and email discussions are mainly superfluous language. All that bloat can then be stripped out and the core information distilled down like a form of <a href="https://en.wikipedia.org/wiki/Lossless_compression">lossless compression</a>.  One wonders why the authors didn&#8217;t think of that.  Imagine how great a movie buff I&#8217;d be if, instead of wasting time watching movies, I just watched trailers?</p><p>As suspect as &#8220;summary&#8221;-based expertise is, that&#8217;s not the only flaw. We&#8217;re also playing <a href="https://officialgamerules.org/game-rules/two-truths-and-a-lie/">two truths and a lie</a> with the A.I. We know they <a href="https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)">hallucinate</a>, but since we didn&#8217;t actually read anything, we&#8217;re very unlikely to catch the lies.  We&#8217;re asking an A.I. to read and summarize information that we can&#8217;t be sure it actually read, or that the summary it produces is true to the source.  I guess that&#8217;s vibe learning: we&#8217;ll know it&#8217;s correct because it sounds plausible?</p><p>How well will I retain this information that I didn&#8217;t actually read? I guess I can always ask the A.I. for the answers again.  Like using a <a href="https://en.wikipedia.org/wiki/Replicator_(Star_Trek)">Star Trek replicator</a> to learn to cook, I won&#8217;t be using my critical thinking, analysis and problem solving skills, so they&#8217;ll be out of practice. </p><p>Maybe I won&#8217;t need those mundane skills anymore, but I will acquire new ones. I&#8217;ll need prompting skills, because how I ask the question is just as important as what I&#8217;m asking. 
The <a href="https://en.wikipedia.org/wiki/Butterfly_effect">butterfly effect</a> of <a href="https://arxiv.org/html/2406.11050v1#S1">token bias</a> means slight changes in the phrasing or context of my A.I. prompt can lead to different results, making prompts more akin to <a href="https://www.restonyc.com/what-are-the-three-rules-of-genie-wishes/">genie wishes</a> and <a href="https://en.wikipedia.org/wiki/Incantation">incantations</a>, producing answers based more on the <a href="https://en.wikipedia.org/wiki/Wisdom_of_the_crowd">wisdom of the crowd</a> than on expertise.</p><p>That also means that either I&#8217;ve already done the learning and know what the expected results should be, or I have a feeling for what I want them to be; either way, I can keep prompting until I get the &#8220;right&#8221; answer.</p>]]></content:encoded></item><item><title><![CDATA[A.I. 
Coding assistance?]]></title><description><![CDATA[The illusion of productivity]]></description><link>https://blog.nappisite.com/p/ai-coding-assistance</link><guid isPermaLink="false">https://blog.nappisite.com/p/ai-coding-assistance</guid><pubDate>Fri, 13 Jun 2025 23:27:43 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/f1591961-473b-4f5e-8b11-72f4f6fd36c6_500x293.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I&#8217;ve been trying out Copilot agent-mode in both VS Code and Visual Studio 2022 the past couple of weeks. They are very impressive, fascinating and fun to work with. Curiously observing what it understands, how it responds, the code it writes, and how it reacts to errors and mistakes is hours of entertainment. It&#8217;s a little bit like playing a video game, issuing commands to see what you can get it to do, genuinely engrossing.</p><p>While engaging and fun to work with, at least experimentally, I&#8217;m not sure that&#8217;s quite delivering on the promise. What we really want are dramatic productivity and business value gains, right?</p><p>Well, those are nowhere in sight.  I&#8217;ve set it loose on a number of fairly discrete, real refactoring tasks on a real codebase, and it&#8217;s slow, error-prone, in constant need of correction, sometimes wildly off-base. </p><p>The experience reminded me of off-shoring years ago, and it may be analogous in more than one way. The promise of off-shoring was not unlike that of A.I. coding assistants: access to a dynamic pool of engineering resources that could be scaled up and down according to demand, cheaply.  With those resources being in different time zones, we could have around-the-clock coding taking place and unlock massive throughput.</p><p>However, my experience with off-shoring didn&#8217;t match those ideals. While those resources may have been cheap and &#8220;easy&#8221; to scale, they were not productive.  
There were language barriers and cultural differences, unfamiliarity with the domain and the code, skill gaps and work-hour mismatches, high turnover.  There was even the issue of being allotted 5 engineers but really having more than 5 people rotate in and out of those 5 slots, exacerbating the above issues. To overcome this, we needed to spend more time specifying the work, more time reviewing it, explaining it, requesting rework, and chasing quality issues. In many cases we had to explicitly provide a pre-coded solution to the off-shore team, showing them exactly what to type. Rather than a software production boon, it was an inefficiency quagmire.  </p><p>Agent-mode coding assistance has had that same character for me. A task that I could implement myself in an hour often takes several hours of experimenting with the right incantation to get a reasonable solution. Time is spent scrutinizing results, trying to nudge it to correct its mistakes, waiting patiently as it&#8217;s slowly &#8220;thinking&#8221; and slowly altering files, causing compilation issues, and then trial-and-error fixing its own issues. The occasional lies about what it has or hasn&#8217;t done, or its going wildly off the rails with rippling changes it can&#8217;t figure its way out of, sap trust and confidence along the way. 
</p><p>It&#8217;s early, and the vibe coders are vibing, but maybe the analogy is an allegory.</p>]]></content:encoded></item><item><title><![CDATA[All the Vibes]]></title><description><![CDATA[Vibe coding is all about embracing a dynamic flow with AI, moving past traditional manual coding by simply describing your ideas in plain language and letting the AI do the heavy lifting.]]></description><link>https://blog.nappisite.com/p/all-the-vibes</link><guid isPermaLink="false">https://blog.nappisite.com/p/all-the-vibes</guid><pubDate>Fri, 16 May 2025 02:29:42 GMT</pubDate><content:encoded><![CDATA[<p><strong>Vibe coding</strong> is all about embracing a dynamic flow with AI, moving past traditional manual coding by simply describing your ideas in plain language and letting the AI do the heavy lifting. It captures the essence of trusting the AI to generate functional code rapidly, often by reacting to its output or even accepting changes without deep review, which dramatically lowers the barrier to entry for anyone, regardless of their coding background. This creative approach empowers a wider range of people&#8212;from designers and product managers to hobbyists and aspiring creators&#8212;to quickly turn their visions into tangible applications, games, or tools, making rapid prototyping and bringing ideas to life faster and more accessible than ever before.</p><p><strong>Vibe lawyering</strong> is all about embracing a dynamic flow with AI, moving past traditional legal research and drafting by simply describing your legal needs in plain language and letting the AI do the heavy lifting. It captures the essence of trusting the AI to generate functional legal documents, analysis, or research rapidly, often by reacting to its output or even accepting suggestions without deep review, which potentially lowers the barrier to entry for anyone, regardless of their legal training. 
This approach could empower a wider range of people&#8212;from paralegals and business professionals to students and aspiring legal assistants&#8212;to quickly generate drafts or find information, making initial legal tasks and turning concepts into tangible documents faster and more accessible than ever before.</p><p><strong>Vibe medicine</strong> would embody a dynamic flow with AI in healthcare, moving past traditional diagnosis and research methods by simply describing symptoms or medical questions in plain language and letting the AI do the heavy lifting. It captures the essence of trusting the AI to generate potential diagnoses, treatment summaries, or relevant research findings rapidly, often by reacting to its output or accepting suggestions with limited deep clinical review. This approach could potentially lower the barrier for providing health care, allowing new medical staff to access medical information, diagnose health issues and develop treatment plans.</p><p><strong>Vibe accounting</strong> involves embracing a dynamic flow with AI for financial tasks, moving past tedious manual data entry and complex form navigation by simply describing your accounting and tax preparation needs in plain language and letting the AI handle the detailed calculations and form filling. It captures the essence of trusting the AI to rapidly generate preliminary reports, statements, analyses, and especially perform initial tax preparations, often by reacting to its output or accepting suggestions without deep line-by-line review. 
This approach offers the benefit of potentially lowering the barrier to managing finances and preparing taxes for small business owners or individuals, making these often daunting tasks faster and potentially more accessible by automating the routine and complex aspects of accounting and tax filing.</p><p><strong>Vibe investing</strong> would represent a dynamic flow with AI in managing assets, moving past traditional research and strategy development by simply describing your investment goals or market questions in plain language and letting the AI do the heavy lifting. It captures the essence of trusting the AI to generate potential investment strategies, market analyses, or recommendations rapidly, often by reacting to its output or accepting suggestions with limited deep personal review. This approach could potentially lower the barrier to entry for new investors or provide quick insights for experienced ones, making initial research and developing strategies faster and potentially more accessible for hobbyists, students, or those seeking automated financial guidance.</p><p><strong>Vibe management</strong> involves adopting a fluid collaboration with AI in overseeing projects and teams, moving beyond traditional planning and task assignment by describing management goals, project needs, or team challenges in plain language and allowing the AI to handle the complex information organization and task generation. Activities include describing project requirements, generating plans or tasks via AI, and reviewing/implementing suggestions, trusting the AI to rapidly produce things like project outlines, task breakdowns, resource allocations, or communication drafts, often by reacting to or accepting outputs with limited deep manual planning. 
This approach offers the benefit of potentially streamlining administrative and planning tasks for managers, making the process of organizing work and communicating goals faster and more accessible, thereby boosting productivity and allowing focus on higher-level strategic thinking.</p>]]></content:encoded></item><item><title><![CDATA[A.I. Implications]]></title><description><![CDATA[A poverty of imagination]]></description><link>https://blog.nappisite.com/p/ai-implications</link><guid isPermaLink="false">https://blog.nappisite.com/p/ai-implications</guid><pubDate>Mon, 12 May 2025 14:52:34 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/745fc23b-a60b-454f-9fd7-04a7c37051b4_626x358.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>There seems to be an inherent contradiction from those who promote the power and capability of A.I. and at the same time downplay the impact of that power.  It suggests both a lack of imagination and the existence of unarticulated underlying assumptions, if not outright (self or otherwise) deception. 
Assumptions like:</p><ul><li><p>We&#8217;re going to be able to thread the needle, making applications that automate a lot of knowledge work and even eliminate entire jobs, while stopping short of automating so much that we no longer have enough people employed to keep consuming.  </p></li><li><p><a href="https://www.theatlantic.com/technology/archive/2025/03/gsa-chat-doge-ai/681987/">That public sector jobs can be replaced by A.I. but not private sector.</a></p></li><li><p><a href="https://www.businessinsider.com/marc-andreessen-ai-cant-vc-tech-investing-jobs-career-2025-5?op=1">That my job can&#8217;t be done with A.I., but yours can</a>.  </p></li></ul><h2>A.I. replacement ground zero</h2><p>Let&#8217;s just take one example, and assume the hype around A.I. code assistants is well-founded.  Assume that A.I. agents are or will soon be able to replace much of the work of software engineers.</p><p>It seems unlikely that an A.I. capable of replacing software engineers will only impact the opportunities and salary prospects of software engineers (<a href="https://jkempenergy.com/2024/03/08/energy-transitions-the-decline-of-whaling-and-the-rise-of-petroleum/">akin to whale oilers</a>), while leaving the rest of society happily more productive doing the same jobs they do now.  </p><p>What would we have to assume for that to be true?</p><ul><li><p>That the reasoning capabilities of a software engineering A.I. agent are extremely narrow and it can only think about software engineering problems. It can&#8217;t do anything else: not accounting, finance, legal, medical, marketing, advertising, etc.</p><ul><li><p>Although somehow it would still be capable of creating applications for use by all of the other disciplines.</p></li></ul></li><li><p>That A.I. agents produce roughly the same quality software at the same pace as a typical engineer. 
</p><ul><li><p>Otherwise, if it produces software substantially better and faster, then the pace of automating larger swaths of other disciplines goes up.</p></li></ul></li><li><p>That A.I. agents can&#8217;t produce other A.I. agents.</p><ul><li><p>Otherwise a software engineering A.I. agent could produce accounting A.I. agents, etc.</p></li></ul></li></ul><p>Those don&#8217;t appear to be very safe assumptions.</p><ul><li><p>It is extremely unlikely that the reasoning capability required to be a software engineer is narrowly applicable.</p></li><li><p>It&#8217;s extremely unlikely that we&#8217;d produce a one-for-one replacement of a software engineer with an A.I. agent. More likely we&#8217;d be replacing one engineer with hundreds or thousands of A.I. agents, capable of working 24x7 at light speed.</p><ul><li><p>The only limiting factor would be compute resources and their cost.  </p></li></ul></li><li><p>With an explosion of A.I. agents working around the clock, we&#8217;d be looking at exponential growth of software automation. It seems highly unlikely that other jobs wouldn&#8217;t be automated out of existence too.</p><ul><li><p>As we automate other disciplines, who&#8217;d be left to use products like MS Office?  A.I. agents certainly don&#8217;t need to communicate via Word documents and spreadsheets. Business software used by people would cease to exist, because there&#8217;d be no one left to use it.</p></li></ul></li></ul><h2>Knowledge workers can be replaced, but the trades will still be safe</h2><p>There&#8217;s another set of assumptions required to believe the common refrain that working in trades like plumbing will be safer than knowledge work.  </p><p>We&#8217;d have to assume:</p><ul><li><p>A.I. 
agents can&#8217;t be used to create robotics capable of replacing plumbers, electricians, etc.</p></li><li><p>Plumber and electrician will still be good-paying jobs when all knowledge workers are either unemployed (and can&#8217;t afford to pay a plumber) or competing for the plumbing jobs.</p></li></ul><p>Those assumptions don&#8217;t appear any safer. </p><p>The same could be said of executives and managers, who similarly imagine themselves to be immune.</p><ul><li><p>There&#8217;s not a strong reason to believe management can&#8217;t be done via chatbot, nor that, if that&#8217;s the only job left, it won&#8217;t be super competitive.</p></li></ul><h2>Logical conclusions</h2><p>Simply following the A.I. hype to its logical conclusions quickly leads to some pretty radical impacts on society. Either we don&#8217;t really believe the hype or we haven&#8217;t really thought through the ramifications.</p><p>And that&#8217;s without even considering the more academic concerns:</p><ul><li><p>What happened to <a href="https://en.wikipedia.org/wiki/AI_capability_control">containment</a>?</p></li><li><p>What are the moral implications of a massive A.I. slave labor force that doesn&#8217;t question, whistle-blow, or otherwise resist questionable or unethical instructions?</p></li></ul>]]></content:encoded></item><item><title><![CDATA[RIF Notes #59]]></title><description><![CDATA[&#8220;The reason was uncovered in a study by Zheng Wang at Ohio State University.]]></description><link>https://blog.nappisite.com/p/default</link><guid isPermaLink="false">https://blog.nappisite.com/p/default</guid><dc:creator><![CDATA[jnappi]]></dc:creator><pubDate>Mon, 10 Feb 2020 14:27:16 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ee1cf605-a7be-4e64-949e-aacb1c0d77d6_1200x1200.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#8220;The reason was uncovered in a study by Zheng Wang at Ohio State University. She tracked students and found that when they multi-tasked, it made them feel more productive, even though in reality they were being unproductive. Other studies found that the more you multitask, the more addicted you get to it.&#8221;</p><ul><li><p><a href="https://changelog.com/posts/monoliths-are-the-future">Monoliths are the future</a> &#8211; Always a fan of contrarian opinions. Like any good tech fad, microservice backlash is coming.</p></li><li><p><a href="https://www.gallup.com/workplace/283985/working-remotely-effective-gallup-research-says-yes.aspx">Is Working Remotely Effective? 
Gallup Research Says Yes</a></p></li><li><p><a href="https://holub.com/individual-performance-appraisals-just-say-no/" title="https://holub.com/individual-performance-appraisals-just-say-no/">Individual Performance Appraisals, Just Say No!</a></p></li><li><p><a href="https://devblogs.microsoft.com/dotnet/configureawait-faq/">ConfigureAwait FAQ</a></p></li><li><p><a href="http://blogs.tedneward.com/post/2020-tech-predictions/">2020 Tech Predictions</a></p></li><li><p><a href="https://www.troyhunt.com/promiscuous-cookies-and-their-impending-death-via-the-samesite-policy">Promiscuous Cookies and Their Impending Death via the SameSite Policy</a></p></li><li><p><a href="https://www.codeproject.com/Articles/5061258/The-Psychological-Reasons-of-Software-Project-Fail" title="The Psychological Reasons of Software Project Failures">The Psychological Reasons of Software Project Failures</a></p></li><li><p><a href="https://medium.com/@alexkatrompas/the-fall-of-the-software-engineer-the-rise-of-the-programmer-technician-451a572d28b0" title="The Fall of The Software Engineer, The Rise of The Programmer Technician">The Fall of The Software Engineer, The Rise of The Programmer Technician</a></p></li></ul>]]></content:encoded></item></channel></rss>