Prompting != Learning
As I spend more time focused on Generative AI products and their adoption beyond software engineering use cases, I’ve noticed a concerning trend, especially among Generation Z professionals: they’re sacrificing crucial foundational learning (“apprentice mode”) by adopting Generative AI tools too early. LLMs provide instant, above-average answers, but they don’t add real, lasting experience to your skill set.
Conventional Learning Paths
In fields like Medicine, you typically study for about 8 years, shadow experienced professionals, practice under supervision, complete multi-year specialization, and then you’re ready to start.
Other careers, such as software engineering, rely more on experiential learning, where mistakes rarely threaten lives (unless you’re building software for self-driving vehicles, rockets, or medical devices). Some engineers complete 2-5 years of formal education. Others are entirely self-taught. Great software engineers come from anywhere (including nowhere), and an unfit professional software engineer could just as easily be an Ivy League graduate.
It’s positive that LLMs are challenging how schools and universities grade their students. High-quality content is freely available, communities of practice flourish, and knowledge acquisition follows multiple valid paths. Centuries-old educational institutions no longer monopolize knowledge. In the past, people were graded on how much knowledge they retained (and tools as simple as a calculator were prohibited). Now the focus is shifting toward how effectively a person, augmented by AI, applies knowledge to real-world challenges.
Human neurons beat neural networks
Current LLMs can already pass professional exams, and they’re getting more capable and cheaper every other week.
Scientists are still learning about the human brain and its neurons. Some argue the brain functions more like an analog device than a digital one. Others suggest it combines analog and digital characteristics with quantum dynamics. Neurons create new connections on a daily basis. Serendipity is more than a fancy word. When humans interact with other humans (and their environment), new knowledge emerges. That’s not true of the current generation of LLMs: instead, they hallucinate, sometimes absurdly.
When you’re in “apprentice mode,” you’re forming your very own new neuronal connections. You grasp fundamentals by doing, failing, and trying again. Malcolm Gladwell popularized the 10,000 Hour Rule in his excellent 2008 book Outliers. Though the concept sparked debate (with some rejecting it entirely), remember: LLMs won’t transform you into an expert instantly, no matter how impressive their outputs seem. Their summaries might help you pretend to understand something you really don’t. They might suffice when explaining concepts to novices, but they’ll likely embarrass you in front of real experts.
My approach to learning in an AI-first world
As we increasingly encounter forward-looking statements like “AI-first” and “post-agentic era” alongside doomsday prophecies about AI replacing humanity, I prefer biological reality: Humans dominate Earth and all other species. Humans have enough power to trigger mass extinction events with the press of a button. Most of that power comes from learning, curiosity, and challenging the status quo.
Halfway through my professional career, I’m confident I’ll learn more in the next 5-10 years, augmented by AI, than in the previous 20. Yet I’ll try hard to preserve the learning methods I enjoy the most: reading books and blogs, writing summaries, drawing my own version of “mind maps with box diagrams,” and consuming videos and podcasts. I completed about 6 years of formal professional education, including recent “executive education” courses, and I’m not closed to the idea of going back to school if something really attracts me in the future.
Almost a year ago, I began using LLMs to analyze my summaries, writing, and raw notes. The results impressed me. LLMs suggested complementary learning paths (including some creative but completely wrong ones), improved my writing, and made me more thoughtful as I observed how a massive non-deterministic algorithm, designed to predict tokens one by one, boldly pretends to surpass human intelligence.
I started calling it the AI-later approach to learning. And I’m loving it.