This morning, on the drive home after dropping off one of my kids, I started thinking about something that sits at the intersection of engineering, privacy, and intellectual property. It’s something I’ve thought about before, but the rise of AI coding tools makes the question much more interesting.
I’ve always had a clear stance when it comes to intellectual property. If I build an application on a company laptop, during company time, solving a company problem, that code belongs to the organization. If you’re being paid to build something for a company, the output should belong to the company. There’s nothing controversial about that.
Where things get more complicated is when we move beyond the code itself.
An organization owns the code that you produce. But it doesn’t own the ideas that you develop while producing it.
If you learn a better way to structure an application, or you discover a pattern that makes a system easier to test, scale, or maintain, that knowledge becomes part of your skill set as an engineer. Those ideas are portable. They follow you wherever you go. The same design intuition that helps you solve one problem can often be applied to dozens of others.
This happens all the time in engineering conversations. Someone shares an idea that can be summarized in a few sentences. Build the application this way. Add this type of testing. Make sure dashboards allow users to drill into the underlying data. Make labels clickable so they route to filtered tables. These are small ideas, but they are incredibly reusable. Once you see the pattern, you start applying it everywhere.
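To make the drill-down idea concrete, here is a minimal sketch of that pattern in TypeScript. The route shape, the `Filter` type, and the function names are all assumptions for illustration, not any particular framework's API: the point is just that a clickable label is nothing more than a link carrying a filter to a table view.

```typescript
// Hypothetical shape of a single-column filter carried in the URL.
type Filter = { column: string; value: string };

// Build the URL a clickable dashboard label should route to, encoding
// the filter so the destination table view can apply it on load.
function filteredTableUrl(baseRoute: string, filter: Filter): string {
  const params = new URLSearchParams({
    column: filter.column,
    value: filter.value,
  });
  return `${baseRoute}?${params.toString()}`;
}

// The table view applies the same filter to its rows when rendering.
function applyFilter<T extends Record<string, string>>(
  rows: T[],
  filter: Filter
): T[] {
  return rows.filter((row) => row[filter.column] === filter.value);
}
```

For example, a label summarizing failed jobs might link to `filteredTableUrl("/table", { column: "status", value: "error" })`, landing the user on a table showing only the rows behind that number. Once you see it, the pattern applies to almost any dashboard.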
What’s changed recently is the amount of effort required to turn those ideas into working software.
Before generative AI tools, the bottleneck wasn’t usually the concept. The bottleneck was the time it took to implement it. You might know exactly what you wanted to build, but it still meant hours of wiring things together, writing boilerplate code, setting up tests, building UI interactions, and debugging everything along the way.
Now that overhead is shrinking quickly.
If you understand how to describe the behavior you want, you can prompt a system to generate most of the scaffolding. You can say something like: given this feature, allow users to click on a label, route them to a filtered table view, and generate unit tests for the logic. A good system can produce a large portion of that implementation almost instantly.
That shift changes what actually matters when building software. The limiting factor is no longer how quickly someone can write code. The limiting factor is how clearly they understand the problem they are trying to solve.
This raises an interesting question about how organizations will think about intellectual property in the future. Companies can protect source code and repositories. They can enforce policies around confidential data and internal tools. But they can’t really protect the knowledge engineers gain while solving problems.
If an engineer becomes better at designing systems, understanding patterns, or prompting AI tools effectively, that capability belongs to the engineer. When they move to another organization or build something on their own, they bring that experience with them.
In many ways, that’s unavoidable. It’s also probably healthy for the industry.
The old saying about teaching someone to fish applies here. If you give someone a fish, you feed them for a day. If you teach them how to fish, you feed them for a lifetime. When organizations invest in engineers and teach them better ways to think about systems, those engineers inevitably become more capable everywhere they go.
AI doesn’t remove that dynamic. It just accelerates it.
I see something similar when I work with my kids on math problems. When I was younger, learning math often meant hunting for the right explanation. You might spend hours digging through textbooks, indexes, or example problems before you even understood the method needed to solve the question.
Today the path from confusion to explanation is much shorter. You can see the solution almost immediately, along with the reasoning behind it.
That’s a huge shift.
The concern people raise is that students might skip the struggle that builds intuition. That’s a fair concern. But there’s another possibility as well. If students spend less time searching for answers, they might spend more time exploring the ideas behind them. They can test variations, ask deeper questions, and focus on understanding patterns instead of simply locating the right formula.
It’s still unclear how this will shape the next generation of engineers. In some ways they may be far more capable, because they can iterate on ideas much faster. In other ways they might lack some of the foundational friction that earlier generations experienced while learning.
I’m genuinely curious to see how that balance plays out.
When I think about my own kids, I sometimes wonder what engineering will look like if they go down that path. My son picks things up quickly and remembers patterns very well. In a world where tools can generate most of the mechanical parts of coding, that kind of pattern recognition becomes incredibly powerful.
The next generation of engineers may not be defined by how well they memorize syntax or how quickly they can write code. Instead, they’ll be defined by how well they understand problems and how effectively they guide tools toward solutions.
And that kind of thinking is something no organization can truly claim ownership over.