💭 What Happens When the Apprentice Magician Plays with God’s Wands?
Building on my previous posts about meta-understanding and my relationship with AI tools, today I’m sharing a bit of an inner conflict.
In recent months, every conversation seems to carry AI with it. It is everywhere, and it won’t stop or go away anytime soon. We can’t really deny what’s in front of us: the capabilities these tools have gained in the last few months are frankly staggering, and I’m not talking only about content-creation tools, but about deeply technical ones as well.
As much as I’m concerned with privacy and all that, it’s impossible to ignore that AI is here and changing everything right now—for better or for worse. I often remind myself of the movie Don’t Look Up when I’m too stubborn to see what’s going on.
The Grail for Developers
One particular tool that piqued my interest is Google’s code wiki (an AI-powered codebase explorer), where you can dive into repositories and learn whatever you want using an AI assistant. As a software developer, this feels like a grail. You cherry-pick what you want, and it’s delivered exactly as you need it.
I cannot imagine the amount of knowledge you can draw from that. Agentic AI systems (autonomous agents) can do even more, but hey, this is already here and free to use. What I want to say is that possibilities are opening everywhere for almost anyone with the right interest. Intent is no longer constrained by time or resources the way it used to be. With tools that can narrate a codebase or explain a complex paradigm in seconds, the floor of the ocean feels closer than ever. We are no longer limited by the speed of our reading, but by the clarity of our desire.
This power excites me as a developer, yet it’s exactly why my dilemma starts to grow.
Lisp, AI, and Meta-Understanding
Lisp and AI share a kind of DNA: they are both declarative. They allow us to bypass the “how” and focus on the “intent.” But Lisp is a declarative sun—it illuminates the logic. AI is a declarative shadow—it gives you the result, but swallows the entire process.
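To make that contrast concrete, here is a minimal Common Lisp sketch (the function names and the example are mine, purely illustrative): the same computation written twice, once spelling out the “how” step by step, once stating only the intent.

```lisp
;; Imperative style: spell out the "how" step by step.
(defun sum-even-squares-loop (numbers)
  (let ((total 0))
    (dolist (n numbers total)
      (when (evenp n)
        (incf total (* n n))))))

;; Declarative style: state the intent and let the language do the walking.
(defun sum-even-squares (numbers)
  (reduce #'+ (mapcar (lambda (n) (* n n))
                      (remove-if-not #'evenp numbers))))

;; (sum-even-squares '(1 2 3 4 5 6)) => 56
```

The second version still shows you every step if you care to read it; that is the sun. An AI answer hands you the 56 and keeps the rest to itself; that is the shadow.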
While AI helps me grasp the fine details of almost anything, the AI itself remains a black box: a closed room where the lights are off. We know little about how it truly works, and we can’t accurately predict what it will do. We are using an opaque mind to create a transparent world: an apprentice magician playing with God’s tools.
If I use a tool I do not understand to explain a tool I want to master, where does the meta-understanding actually live?
Closing Thought
“The one who invented the boat also invented the shipwreck,” as the saying goes, but the unease persists for me. There is a new light, a very strong one. It is the future, yet I can’t shake the uncomfortable feeling this new light gives me.
I’m curious what others think. Do you share similar feelings? What’s your take on all this?