An early goodbye to 2025

Hey! You can find me on Mastodon and Bluesky!

There are less than two weeks left in 2025, which means it is time to look back at the past year. In last year’s review I ended on this note:

For the next year, I hope to spend more time painting in broad strokes so I get better at that specifically: more “create a good-enough first version” and less “polish version 2.0 to be even better”. More 80% solutions. More “let’s just try this.” More “creating”, less “changing.”

I am happy to say that I succeeded in this:

  • I accidentally created cpp2better, which is now a product I am licensing out. It takes the code generated by Unity’s il2cpp pipeline and optimizes it, which makes a real difference for some games (see the sketch after this list for a flavor of what that means). I learned a lot from this, and I have a better appreciation for people in sales. But it’s definitely a thing I created! It’s terrible in many ways (“this tool parses a subset of C++ and rewrites it to be more efficient”) but it’s still doing its job.
  • I have reduced my contracting work to focus on developing new things. I can’t talk about my main project right now (soon, maybe?), but I have several side projects that will hopefully see the light of day soon.
  • I have tried to be more consistent in writing this blog, and I managed almost one post a week until October. Then I felt both that the quality of what I had to say was dropping and that my mind was drawn more towards programming than writing. We’ll see how that continues in the next year.
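To give a flavor of what “rewrites it to be more efficient” can mean, here is a hypothetical, heavily simplified sketch. To be clear: this is not cpp2better’s actual transformation, and real il2cpp output looks quite different (generated names, runtime macros, object headers). The pattern it illustrates is a common one, though: transpiled code tends to re-emit safety checks on every array access, and a source-to-source rewriter can hoist them out of hot loops.

    // Hypothetical, heavily simplified sketch of il2cpp-style transpiled
    // code and the kind of rewrite a source-to-source optimizer can apply.
    // NOT cpp2better's actual transformation; real il2cpp output uses
    // generated names, runtime macros, and inline object headers.
    #include <cstdint>
    #include <iostream>
    #include <stdexcept>

    // Stand-in for a managed array. Real runtimes store the payload inline
    // after an object header; a pointer keeps this sketch well-defined C++.
    struct Int32Array {
        int32_t  length;
        int32_t* data;
    };

    // "Before": transpiled straight from the IL, so the null check and the
    // bounds checks are re-emitted on every single iteration.
    int32_t Sum_Transpiled(Int32Array* arr) {
        int32_t sum = 0;
        int32_t i = 0;
        while (true) {
            if (arr == nullptr) throw std::runtime_error("NullReferenceException");
            if (i >= arr->length) break;
            if (i < 0) throw std::runtime_error("IndexOutOfRangeException");
            sum += arr->data[i];
            ++i;
        }
        return sum;
    }

    // "After": once the rewriter has proven that neither `arr` nor `length`
    // can change inside the loop and that `i` can never be negative, the
    // checks are hoisted out and the loop body shrinks to the actual work.
    int32_t Sum_Rewritten(Int32Array* arr) {
        if (arr == nullptr) throw std::runtime_error("NullReferenceException");
        const int32_t n = arr->length; // loaded once instead of every iteration
        int32_t sum = 0;
        for (int32_t i = 0; i < n; ++i)
            sum += arr->data[i];
        return sum;
    }

    int main() {
        int32_t values[] = {1, 2, 3, 4};
        Int32Array arr{4, values};
        // Both variants compute the same result; only the per-iteration
        // overhead differs.
        std::cout << Sum_Transpiled(&arr) << " == " << Sum_Rewritten(&arr) << "\n";
    }

Whatever the actual transformations look like, this is the general shape of such a win: move provably redundant work out of hot loops without changing observable behavior.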

I have also found a good way to paint in very broad strokes. But explaining this requires a detour through a topic that everyone hates: it is unfortunately impossible to talk about 2025 without also talking about AI. So let’s get it over with.

From my perspective, AI tools are a little bit like weapons: I am generally against weapons. Weapons are bad. The people who most want weapons are usually the people who should not be given access to them. Unfortunately, many of them do have access to weapons, and (unlike me) they have zero hesitation to use them. At the international level, at least, it is therefore clear that countries must be armed unless they want to risk being bullied or straight-up invaded by their neighbors. There is no point in being “morally right” if that means handing power to the assholes or risking getting your family killed.

Now for AI. If AI is effective and useful, then it is irresponsible to leave it to the people who have no concerns about what it will do to society. If it is effective, then the people with the least concern for others will get more powerful, and, as with weapons, you are better off if your side is also armed.

I do not believe that we are in an AI tech bubble in the sense that crypto and web3 were a tech bubble that brought about zero meaningful change. I am not qualified to talk about the financials, but AI is going to have a lasting impact on technology. I am firmly in the camp of “AI is effective and useful”, so by the argument above I also believe it is irresponsible to leave its usage to people who have no hesitation about it.

I believe the net effect of AI will be two things: society is going to become more unequal, and the floor is going to be raised. These two don’t contradict each other. It’s perfectly possible to live in a society that has cured cancer but also enslaves 99% of its population as workers without rights. That’s basically the premise of the typical dystopian sci-fi oligarchy: nobody lives below the poverty line anymore, yet you can’t choose what you eat either. We are far away from that at the moment, but the general direction of “the rich get richer” is certainly not going to be slowed down by putting much more powerful automation in the hands of capital.

Why do I believe that AI is “effective and useful”? With the latest models (since November 2025), AI agents have become more effective programmers than I am, at least in the context in which I operate. I have used them extensively and have written quite a few tools with them. Modern models understand novel concepts, they understand completely foreign codebases, they pick up new domain-specific languages, and they write code much more quickly than you or I could. At least in my context, all of that is true. Claude Code is a better programmer than me. By “programmer” I mean “the one doing the programming”: they still require instruction, guidance, architecture review, product sense, and so on.

A common theme, then, is to argue about the “quality” of the code. Yes, they make mistakes. Yes, they need supervision. Luckily, my personal strengths in programming are on the analysis side: quickly understanding a lot of code (though not as quickly as an AI), reviewing code, and making something bad much better. If you give me the ability to iterate with a handful of “people” that do the busy work of writing a first shitty implementation, that’s a huge upgrade. Do I want to review the code of a human co-worker after a week, or do I want to get a first-order approximation from an AI after 5 minutes and then quickly iterate? Iteration beats everything else, no matter how many percentage points worse the model is (I still want the co-worker as well, please). The first-order approximation usually already reveals why the direction I picked was fundamentally flawed, for example. I also do not think that agents make more mistakes than I do. They just make them much more quickly.

Let’s talk about two more arguments I frequently hear:

  1. Pricing. Is the current world we live in, where you get a ton of Claude Code usage for 200 USD a month, sustainable? I don’t know. But as far as I can tell, agents have so far gotten better and more efficient over time. Even if prices went up substantially, it would still be a very good deal.

  2. Is this a sustainable pipeline for producing people who can oversee AI agents? I.e., “what about junior programming positions?” This question makes a lot of sense at first but turns out to be an irrelevant concern. First, so far everything suggests that the amount of supervision needed is going to decrease (and it has already decreased massively). Second, if humans need to learn supervision (or some other new skill) to be employable (“economically viable”), then people are going to learn that skill.

    It’s the same structural argument as “people don’t learn to write assembly anymore.” I personally believe that being able to fluently read lots of x64 assembly is the single most impactful technical skill I have ever learned. Yet even in video game programming, the number of people who can do basic bullshit detection on compiler output is shockingly low, and there are more games than ever.

    The majority of the market will not look at AI output. It will be shitty in some way, the same way that modern software is shitty in many ways. It will be “good enough” for many, and there are going to be specialists that do look at AI output, in the same way that looking at compiler output is now a “speciality” and not a basic expectation. Having a deep understanding of “everything computers” is going to be a bigger advantage than ever before.

All of this is to say that I am quite sure that programming is going to change, a lot, and that not learning how to use AI tooling effectively is going to be a disadvantage. Claiming the moral high ground by not using AI is something I sympathize with, but there is no point in going down with that ship, because then no skeptics will be left and we will have lost by default.

Tying this back to my thoughts from last year, AI is a massive upgrade for painting in broad strokes and getting 80% solutions. I believe I have gotten better at that even without AI, but having a tool at hand that can quickly rewrite entire codebases (assuming you have very good test coverage) makes it much easier to actually believe that “I am going to clean this up later.” So far, this has been a great success.

With all of this said, here is what I want to do next year:

  • I want to get closer to users again and release products that have users beyond build systems (as is the case for cpp2better). I like interacting with people.
  • I want to take more time to stress less, and will probably pick up some meditation practice. The world is a terrible place to be in, but locally it can still be nice, and I need to find more space to appreciate that.
  • I want to listen to more unapologetically stupid music. It is one of the most consistent sources of joy in my life, and there is more of it out there than ever.

To a fantastic and hopefully boring-in-terms-of-politics 2026.
