Why using AI requires us to be even better (as developers and beyond)
In almost every movie involving air or space travel, the pilot relies on autopilot most of the time. But there always comes a heroic moment when they have to switch to manual mode and save the crew through sheer skill that almost no one else aboard possesses.

I have observed strikingly similar situations while debugging code with the best AI models available.
When a problem is even slightly out of the ordinary, the AI copilot will quickly modify the codebase, form wrong hypotheses that reinforce themselves as the code evolves, and lead you into false beliefs about why things are not working. It takes real skill to hit pause, step back, and understand what actually happened, especially when you are already in unfamiliar territory: perhaps using a library you have not touched in a while, or solving a problem with no obvious precedent. Resisting the confident, fluent misdirection of an LLM in that moment is genuinely harder than not using an LLM at all.
AI copilots are remarkable. But it is tempting to conclude: "We do not need a pilot anymore" — or worse, "We can get away with a less qualified one." I see many decision-makers fall into exactly that trap.
As long as what we are building involves any degree of innovation, that thinking is extremely risky.
The good news: knowledge workers will increasingly get to work on problems with higher degrees of innovation, because AI can absorb almost all of the non-innovative work. The question worth sitting with is: are we cultivating the deeper skills that make that possible, or are we quietly letting them atrophy?
Have you experienced something similar? I would love to hear about it.
