Over the past few weeks I have been testing a few AI code assistants like Claude Code, OpenAI Codex and GitHub Copilot. I'm using them to vibe engineer some personal projects, but also to try out new ways of building software at Mindera.
Using AI to write most of my code makes me think that the role of a developer might evolve into more of an architect: someone who thinks about what they need/want to build and then orchestrates AI to actually get it built. But as I do this, I am seeing two types of situations that will make a full evolution like this a bit slower than some predict, or even challenge it. Namely:
- When I'm building something that is less standard than most of the things I build (i.e. something that isn't just CRUD pages and typical interfaces), AI tends to struggle more to deliver working solutions, or at least takes longer to get things right than with more conventional examples. I've seen this described as building things that fall off the distribution curve. The tech stack also has an influence here: AI might push for more standardisation, as many of these models are much better trained on stacks like ReactJS. Tech choices will need to take this into account.
- When I'm building something that I've never built before versus something that I have built before. For example, the other day I was building a user invitation system for a project, similar to the one I built for Kronflow earlier this year, so I was able to see that Claude Code wasn't building the feature in the right way, and I could intervene and ask it to adjust its approach. This means that when I'm building something I haven't built before, there is a higher chance that it will be written poorly without me realising.
 
Point number one might be a bigger issue than point number two, because poorly written code might still be acceptable, as long as it isn't insecure and bug-ridden. But if AI can't build the thing properly in the first place, a human will still need to step in.
What this tells me is that to guide AI adoption we probably want to categorise the projects that our teams are working on by tech stack in use, and by "distance to distribution curve". This can help us deliver training where it's most impactful and allocate the best people to the projects where AI can't be relied on as much.
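To make that categorisation a bit more concrete, here is a minimal sketch of what such a project inventory could look like, assuming a TypeScript codebase. The type names, categories and scoring rule are purely illustrative assumptions on my part, not an established framework.

```typescript
// Hypothetical sketch of a project inventory for AI-adoption planning.
// Category names and the scoring heuristic are illustrative, not a standard.

type StackFamiliarity = "well-trodden" | "mixed" | "niche"; // how well models know the stack
type DistributionDistance = "standard-crud" | "some-novelty" | "off-distribution";

interface Project {
  name: string;
  stack: StackFamiliarity;
  distance: DistributionDistance;
}

// Crude heuristic: the further a project sits from "standard CRUD on a
// popular stack", the more senior oversight it likely needs.
function oversightLevel(p: Project): "light" | "moderate" | "heavy" {
  if (p.stack === "well-trodden" && p.distance === "standard-crud") return "light";
  if (p.stack === "niche" || p.distance === "off-distribution") return "heavy";
  return "moderate";
}

const portfolio: Project[] = [
  { name: "admin-dashboard", stack: "well-trodden", distance: "standard-crud" },
  { name: "scheduling-engine", stack: "mixed", distance: "off-distribution" },
];

portfolio.forEach((p) => console.log(p.name, "→", oversightLevel(p)));
```

The point is not the code itself but the exercise: making the stack and the "distance" explicit per project makes it easier to decide where AI can run ahead and where senior review needs to stay close.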
The other topic that I think will continue to be hot is how to train juniors to become good engineers in the age of AI, when we know that experience and knowledge remain key, but the temptation of AI and the shifting roles might make that experience harder to develop.
Lots to figure out still.