Today I wanted to talk through a couple of ideas that have been on my mind while working on some projects recently. They revolve around two related themes: how we should think about using large language models and how software development itself is evolving.
A lot of people today want to solve every problem with an LLM. If you watch the current wave of tooling and demos, it almost feels like the answer to every engineering question is "just throw an LLM at it." I understand the excitement, and to be clear, I think these tools are incredibly powerful. But we should be careful not to forget that traditional approaches still matter.
In many situations, a simple model or algorithm is still the right answer. Sometimes a basic technique like linear regression or a deterministic rule-based system can outperform something far more complicated. LLMs are powerful, but they are not a universal replacement for every existing technique. They are just another tool in the toolbox.
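To make the "simple baseline first" point concrete, here is a minimal sketch of one-variable linear regression using the closed-form least-squares solution, in plain Python with no framework at all. The data is made up for illustration; it is not from any real project.

```python
# Fit y = slope * x + intercept by ordinary least squares.
# No libraries needed: the closed-form solution is a few lines of arithmetic.

def fit_linear(xs, ys):
    """Return (slope, intercept) minimizing squared error over the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y, and variance of x (both unnormalized).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# A noisy-but-linear signal: exactly the case where a heavyweight model
# adds cost and opacity without adding accuracy.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
slope, intercept = fit_linear(xs, ys)
print(f"y = {slope:.2f}x + {intercept:.2f}")  # prints: y = 1.99x + 0.05
```

Twenty lines, fully inspectable, and trivially cheap to run. That is the bar a more complicated approach has to clear before it earns its complexity.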
What has fundamentally changed is the speed at which we can build and iterate.
Historically, the typical development model was to build an MVP, ship it, and then close the project or move on to the next thing. The focus was on getting something minimally viable into production as quickly as possible.
I think that model is evolving.
Instead of “build an MVP and ship it,” the new workflow looks more like this:
- Build the MVP quickly using AI-assisted tools.
- Deploy it into a sandbox environment.
- Experiment aggressively.
- Iterate rapidly using feedback from both humans and AI systems.
In other words, the MVP is no longer the end of the process. It is the starting point.
This also leads to an interesting cultural tension among engineers. On one side, some engineers distrust AI systems and want to rely entirely on manual processes. On the other side, others treat AI output as if it were automatically correct.
Both extremes are flawed.
If you rely only on manual processes, you will inevitably miss things. Humans forget details. Humans overlook patterns. We all do it. Ironically, some engineers will say they trust their own design reviews more than automated tools, yet the automated system may detect something obvious, like a banned function or a security issue, that the reviewer overlooked.
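The kind of check a machine never gets tired of is purely mechanical. Here is a hedged sketch of a banned-function scanner; the banned list and the sample snippet are hypothetical, chosen only to illustrate the category of mistake a human reviewer skims past.

```python
import re

# Hypothetical ban list; a real one would come from your project's policy.
BANNED = {"strcpy", "gets", "eval"}

def find_banned_calls(source: str):
    """Return (line_number, function_name) pairs for every banned call found."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name in sorted(BANNED):
            # \b keeps us from flagging e.g. "my_eval(" as a hit.
            if re.search(rf"\b{name}\s*\(", line):
                hits.append((lineno, name))
    return hits

snippet = "result = eval(user_input)\nprint('ok')\n"
print(find_banned_calls(snippet))  # prints: [(1, 'eval')]
```

A reviewer who has read forty files that day will miss that `eval`. The script will not, and that is the whole argument for running both.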
At the same time, blindly trusting AI is just as dangerous.
The real shift is not about replacing engineers with AI. It is about AI-enabled engineers who use these systems to augment their workflow.
An AI system can scan massive codebases, detect patterns, flag problems, and generate initial solutions. But the engineer still provides judgment, architecture decisions, and context.
The engineers who will be the most effective in this new environment are not the ones who reject AI, nor the ones who blindly trust it. They are the ones who integrate it into their workflow thoughtfully.
There is one habit that I think becomes especially important in this world: documentation.
As you experiment, prototype, and iterate with AI tools, you should capture what you are doing. Document the prompts. Document the results. Document the decisions that worked and the ones that failed.
In a world where experimentation becomes cheap and rapid, the real value is in the knowledge you accumulate over time.
If you build something useful, write it down. If you discover a good pattern, save it somewhere. If a prompt produces a useful output, store it.
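The capture habit does not need heavy tooling. As one possible shape, here is a minimal sketch that appends each experiment to a JSON Lines file; the file name and record fields are my own choices, not a prescribed format.

```python
import json
import time
from pathlib import Path

# Hypothetical log location; any append-only file works.
LOG_PATH = Path("prompt-log.jsonl")

def log_experiment(prompt: str, result: str, verdict: str) -> None:
    """Append one experiment record so the log doubles as a searchable history."""
    record = {
        "ts": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "prompt": prompt,
        "result": result,
        "verdict": verdict,  # e.g. "worked", "failed", "promising"
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_experiment(
    "Summarize the release notes as three bullet points",
    "Three bullets covering the 2.0 changes",
    "worked",
)
```

One line per experiment, grep-friendly, and trivial to load back into any analysis tool later. The point is not the format; it is that writing it down costs seconds and compounds for years.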
Over time, that knowledge becomes a multiplier.
AI can accelerate development, but engineers who systematically capture what they learn will move even faster.