Can A.I. learn for me?
Can I get stronger lifting weights with a forklift?
There is a lot of enthusiasm for all the things A.I. can do for us: reading, writing, coding, art, research and analysis, even legal and medical advice and therapy. But there is something particularly odd about the claim that A.I. can help us learn to do all those things while at the same time doing them for us.
Setting aside, for the moment, the suggestion that reading, writing, and thinking are the mundane tasks we want to be freed from, it's unclear to me how having A.I. do things for us is good for learning.
The most common form of the "learn faster with A.I." approach is the summary: feeding books, research papers, podcasts, and email threads into an A.I. and having it produce summaries. The idea is that A.I. can read the text faster and therefore deliver all the required information in a quick summary. All the learning, in a fraction of the time.
This would imply that all those books, research papers, podcasts, and email discussions are mostly superfluous language, bloat that can be stripped out and the core information distilled down, like a form of lossless compression. One wonders why the authors didn't think of that. Imagine how great a movie buff I'd be if, instead of wasting time watching movies, I just watched trailers.
As suspect as "summary"-based expertise is, that's not the only flaw. We're also playing two truths and a lie with the A.I. We know these models hallucinate, but since we didn't actually read anything, we're very unlikely to catch the lies. We're asking an A.I. to read and summarize information when we can't be sure it actually read the source, or that the summary it produces is true to it. I guess that's vibe learning: we'll know it's correct because it sounds plausible?
How well will I retain information that I didn't actually read? I guess I can always ask the A.I. for the answers again. It's like using a Star Trek replicator to learn to cook: I won't be using my critical thinking, analysis, and problem-solving skills, so they'll fall out of practice.
Maybe I won't need those mundane skills anymore, but I will acquire new ones. I'll need prompting skills, because how I ask the question is just as important as what I'm asking. The butterfly effect of token bias means slight changes in the phrasing or context of my A.I. prompt can lead to different results, making prompts more akin to genie wishes and incantations, producing answers based more on the wisdom of the crowd than on expertise.
That also means that either I've already done the learning and know what the expected results should be, or I have a feeling for what I want them to be. Either way, I can keep prompting until I get the "right" answer.