Thoughts on AI-assisted software engineering (AI-free version)

2026-02-01

I recently read this post by Andrej Karpathy and it got me thinking. It covers many interesting topics. If you haven’t read it, go read it first and come back.

Scrolling through LinkedIn or YouTube, it feels like everyone is either trying to sell you something or scare you. Posts with titles like the following seem to take over my feed:

  • “I vibe-coded this $1M MRR app in one weekend”
  • “Company XYZ lays off 80% of their staff because of AI”
  • “How to be 20x more efficient with this new AI workflow”

Depending on who you are and what you do, this can sound either very scary or like a huge opportunity. This post is my attempt to give both ends of the spectrum, and everyone in between, a more realistic perspective.

I’m not an expert on AI, but I use it almost daily: for software engineering in large, complex code bases at work, for smaller, less complex personal side-projects, and for daily tasks and research. I’d argue this gives me at least some ability and credibility to write about the topic. I use the terms “AI” and “LLM” interchangeably throughout the post.

Ego & Identity

When I read Karpathy’s post, it confirmed a lot of what I, and probably many others, have experienced over the past ~3 months.

“[…] I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write… in words. It hurts the ego a bit but […]”

Especially the part “it hurts the ego” resonates. Why? Software engineering and writing code is part of my identity. If you asked me in a professional context who I am, I would probably say “My name is Joshua, I’m a software engineer”. I’ve spent a lot of my time writing and debugging code - during university, at work, and on several side-projects. Stack Overflow felt like a second home, and every now and then you would end up on some random person’s blog post to find out how to get something working that’s documented nowhere else.

Writing code is, to some extent, what earns me a living. I always enjoyed it and I still do. AIs are pretty good these days at writing code, so it feels like your hard-earned coding skills (and therefore you as a person) are less valuable. And I think that is exactly why it hurts the ego.

The good part is that software engineering is much more than writing code. It’s about communication and collaboration. It’s about asking the right questions, understanding what your users want, and figuring out whether what they say they want is what they actually need. It’s about complex design and architectural decisions that affect both the technical and the social/organisational parts of the system.

Fortunately for all the software engineers out there, in my experience, AIs are not yet very good at those tasks - especially not in complex socio-technical systems. I think there are several reasons why, but probably the biggest is lack of context. A chat at the coffee machine, a conversation during lunch with a colleague from another team - none of this is available to the AI unless you provide it. A lot of information is implicit, somewhere in the back of your head, until you remember it at the right moment.

Maybe it’s naive, but I think we should worry less about being replaced by AI or about there being no need for software engineers in the future. Instead, we should focus on the skills that will become more important and on how we can use AI-assisted workflows to our advantage. Or as Kent Beck writes:

“The future isn’t about AI replacing developers - it’s about developers learning to dance with these powerful new partners while maintaining the discipline that creates sustainable software.”

Current Limitations

Karpathy describes some of the problems with the current state of AI-assisted coding:

“They [the LLMs] also really like to overcomplicate code and APIs, they bloat abstractions, they don’t clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it’s up to you to be like “umm couldn’t you just do this instead?” and they will be like “of course!” and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don’t like or don’t sufficiently understand as side effects, even if it is orthogonal to the task at hand.”

That’s exactly what I experience when using AI for more complex tasks. I think that’s why so many people with little or no technical experience who try AI-assisted coding hit a wall sooner rather than later: they don’t have the experience and intuition to judge that the code is bloated or that the internal structure is too complex. I ran several experiments where I did not look at the code at all and only prompted the AI based on the output and error messages. At some point it was running in circles, trying to fix bugs caused by poor design, or issues that could have been prevented in the first place. An improved workflow, like “putting it [the AI] in the loop” and providing the right tools, might solve some of these issues, but I don’t think it will catch fundamental design flaws.
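To make this concrete, here’s a hypothetical, heavily condensed sketch of the pattern - the names and the scenario are invented for illustration, and the real thing usually spans many more lines and files:

```ts
// Hypothetical illustration of the "bloated abstraction" pattern.
// Everything here is invented for the sake of the example.

// What the agent tends to produce: a strategy interface, a factory,
// and a registry - for a task that needs none of them.
interface GreetingStrategy {
  greet(name: string): string;
}

class FormalGreetingStrategy implements GreetingStrategy {
  greet(name: string): string {
    return `Good day, ${name}.`;
  }
}

class GreetingStrategyFactory {
  private readonly strategies = new Map<string, GreetingStrategy>();

  register(key: string, strategy: GreetingStrategy): void {
    this.strategies.set(key, strategy);
  }

  create(key: string): GreetingStrategy {
    const strategy = this.strategies.get(key);
    if (!strategy) throw new Error(`Unknown greeting strategy: ${key}`);
    return strategy;
  }
}

// What you actually asked for - and what the model happily produces
// after a single "umm couldn't you just do this instead?":
function greet(name: string): string {
  return `Good day, ${name}.`;
}
```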

Someone who has written and read a lot of code will most likely spot those problems earlier and can steer the AI in a different direction or fix things themselves. Peter Steinberger said something like “[…] everything is just a question away but you have to know what question to ask” during an interview with The Pragmatic Engineer, and I think that’s what a lot of people are missing when they start with AI-assisted coding.

The software and AI industry is changing rapidly. Models will get better and make fewer mistakes, the tooling and workflows will improve, and hopefully the resources required for training and inference will decrease. Still, I think the most important factor for successful software engineering will remain a human with solid technical expertise and experience. Software engineering is more than writing code, which means there is still a big portion left to do. And even for writing code, I think you are more likely to succeed if you have a good technical understanding and have developed some intuition for what’s good vs. not-so-good code.

Learning & Failure

Karpathy notes that his ability to write code manually is declining.

“I’ve already noticed that I am slowly starting to atrophy my ability to write code manually.”

I think most people with a similar workflow would agree. It will be interesting to see how “writing less code” plays out in the long term and how it affects those who never learned to write code by hand.

When I write code manually, I usually have to iterate several times until I’m happy with the result. Translating my thoughts into code helps me expose gaps in my thinking and learn more about the problem and the system context. In some sense it’s similar to journaling or writing a blog post.

What I find even more interesting in the context of learning (or unlearning) is failure (and friction). Failure is an important part of learning. A big part of university was learning through failure. Few things are as satisfying as spending several hours on a hard problem, failing at first, and iterating until it works and the dopamine hits.

Karpathy also talks about failure and mistakes:

“The models definitely still make mistakes […] the models make wrong assumptions […] They also don’t manage their confusion, they don’t seek clarifications, they don’t surface inconsistencies, they don’t present tradeoffs, they don’t push back when they should […]”

While it’s technically correct that the models are making the mistakes, it feels like we are offloading the failure. When I use AI and it generates slop, I tend to blame it and not myself. I know that I’m still responsible for the outcome and the code I ship, but in the process it feels less like I’m the one making the mistake. It can even provoke angry thoughts after telling it to use Svelte 5 syntax for the fifth time. I’m not sure if other people feel the same, but I wonder how this affects the process of learning.
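For those who haven’t run into this: models trained before the Svelte 5 release tend to fall back to the legacy Svelte 4 reactivity syntax instead of the new runes like $state and $derived. A minimal sketch of the difference, using a hypothetical counter component:

```svelte
<!-- Counter.svelte - hypothetical example of the syntax gap -->
<script lang="ts">
  // Svelte 4 (legacy) - what the model keeps generating:
  //   let count = 0;
  //   $: doubled = count * 2;

  // Svelte 5 (runes) - what I actually asked for:
  let count = $state(0);
  let doubled = $derived(count * 2);
</script>

<button onclick={() => count++}>
  {count} doubled is {doubled}
</button>
```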

On the other hand, being more detached from the code is often mentioned as a positive effect. It’s probably better not to take code reviews too personally. Nevertheless, I think the question about the importance of failure and friction for learning remains, especially for people who are just entering university or school.

While I was writing this post, Anthropic published a paper called “How AI Impacts Skill Formation”. They summarized their results in a post that addresses some of the questions and concerns of this section. Most importantly:

“The largest gap in scores between the two groups was on debugging questions, suggesting that the ability to understand when code is incorrect and why it fails may be a particular area of concern if AI impedes coding development.”

which suggests that if you want to maximize system understanding, you might want to reduce your use of AI initially and follow one of the “High-scoring interaction patterns”, such as the “Generation-then-comprehension” approach. I’m not an expert on study design and cannot say whether the results are solid, but it is at least an indicator.

The takeaway: if you want to learn something new or develop a new skill, resist the temptation to use AI from the start to generate all the solutions. Use it to ask for hints or to have concepts explained. If you are just experimenting or building something where you don’t need a deep understanding and might throw it away anyway, I suggest taking the fastest path and using AI to generate the code, knowing that you will learn less. I came up with the following diagram. It’s far from perfect, but I think it serves as a good guideline.

[Diagram: Understanding vs. Importance]

Where to go from here?

I think the following is a good starting point to think about this question.

“[…] LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.”

Those who primarily liked coding and don’t adopt AI-assisted workflows because of their ego will probably be left behind at some point. On the other hand, those who are only interested in building and don’t care about the quality of the code might hit barriers earlier, due to a lack of understanding and being out of touch with the code. I think the sweet spot is somewhere in the middle. If you ask me what to do with that, here is my strategy:

Learn the Basics: It will be more important to understand the fundamentals and less important to know the details of every language or framework. When you start learning a new technology or programming language, use AI as a coach or mentor, not to generate all the code. Intentionally avoid AI occasionally to experience failure and friction, and optimize for understanding instead of speed.

Experiment: Everything is changing rapidly and it’s almost impossible not to get AI FOMO right now. Stay curious about new trends, and at the same time avoid the temptation to jump on every hype train. Try out new tools and workflows every now and then. If something works, keep using it. If not, try something else - easy as that. Find blogs or people you trust and observe what they are doing or using. Most importantly, build a solid foundation of tools and workflows first and avoid jumping between 5 different IDEs.

Build Something: It has never been easier or cheaper to build and ship software. See if you like the other aspects of shipping a product, like marketing or sales. It doesn’t have to be the next unicorn; it can be something small, just for you or your friends. Maybe you can replace one of your monthly subscriptions by building it yourself. You can do this as a side project, with minimal time investment.

Soft Skills: I think the human side will become even more important and soft skills will matter more. Why? Almost everyone is now able to invert a binary tree or solve hard LeetCode problems using AI. What matters more is whether you can work in a team and communicate your thoughts. Learn how to communicate, learn how to be more authentic, learn how to lead.

Networking & Personal Brand: With AI everywhere, I think trust will be more important than ever. I would not hire someone based on whether they can solve hard LeetCode problems. Instead, I would look for someone I trust to get the job done and who knows what it takes to ship great software. How do you know who you can trust? I think this has become a lot more difficult. A lot of traditional software engineering interviews are broken because they focus primarily on hard skills and the ability to write code. Therefore, my assumption is that building a network and a reputation will become much more important. You can do this by contributing to open-source projects, sharing what you learn on a blog, building your own products, and connecting with people in and outside of work.

None of this is magic, and none of it will make you a great software engineer overnight. All of this was already important before AI, but the weighting has shifted. Every now and then it helps to recalibrate your compass and consider what actually helps you move forward personally and career-wise. It will be interesting to read this post in 5 years and see if the strategy worked out.


This post got longer than planned, but it felt good to write about it and share some thoughts. I hope you learned something, that it triggered some thoughts of your own, or that it encouraged you to see things more positively amid all the AI hype and scary headlines.

Btw, I initially wrote the entire post without the help of AI and only later used it to improve spelling and clarity. You just read the AI-free version.