I want to be clear about what this post is before you read it. Nothing in here is new. Everything I am about to say has already been said, probably better, by someone else. This isn’t a hot take. It’s just an honest description of something I think a lot of people in tech are feeling right now. I am not offering a resolution or a framework or five things you can do to future-proof your career, because I don’t really know either.

I am writing this purely as a bit of a therapeutic exercise.


The Oscillation

I go back and forth constantly. Not in a dramatic, crisis-of-identity way. More like a low-level hum that never fully switches off.

Some days I genuinely believe that AI is not going to fundamentally change what I do. The job is complex. It requires judgment, context, and an understanding of systems that interact in ways no prompt can fully capture. I have been doing this for years and I know things that are genuinely hard to articulate, let alone automate. I will be fine.

Other days I think: “who am I kidding, this is already happening”.

It can flip in the space of an hour. I will solve something non-trivial at work and feel completely secure in what I do, and then I will open YouTube, X, or Reddit and feel like I am already behind.

I suspect most people who claim to have completely figured out this back-and-forth are performing, pretending to be more confident than they really are. For most of us, the truth is that we’re somewhere in between and still trying to work things out.


The Two Poles

I think the oscillation sits between two poles.

On one side, there is the denial pole. This is the belief that AI is overhyped, that it cannot truly understand complex systems, that it makes things up, and that what we do is too nuanced to be replaced. This is the “it will not happen” position.

On the other side, there is the inevitable pole. Not because anything has already happened, but because it feels like it will. Not today, maybe not even next year, but the direction of travel seems clear. This is the “it has not happened yet, but it is coming” position.

I move between these two constantly, but neither position feels entirely honest on its own.

Earlier this year, Matt Schumer published Something Big Is Happening. Schumer is an AI CEO, so it is worth applying appropriate scepticism, but the article went viral because it articulated something a lot of people were already feeling. He wrote that he is “no longer needed for the actual technical work” of his job. He also referenced a prediction from Dario Amodei, one of the founders of Anthropic, that 50% of entry-level white-collar jobs could disappear within one to five years.

I read it and felt it land somewhere specific.

Schumer did later walk parts of it back in a CNBC interview, but the reaction to it is the interesting part. You don’t get that kind of spread unless it’s tapping into something real, even if the details aren’t entirely accurate.

On the other side, I have had colleagues say AI will not take our jobs because it cannot understand complex problems and makes things up. I understand the instinct, but I think it is wrong as a long-term bet.

The argument relies on AI’s current limitations staying current. The hallucinations that plagued models last year are almost a non-issue today. The pace of improvement is not steady; it is compounding. Extrapolate that forward two years, five years, ten years. The “it makes things up” objection gets weaker every quarter.

That is what pushes me back towards the inevitable pole: not Schumer’s specific predictions, but the overall direction things are heading.


The Ground Keeps Shifting

I’m a fan of NetworkChuck. His energetic, YouTube-style, overly excited and exaggerated delivery isn’t for everyone, but his videos are undeniably educational and a great resource.

NetworkChuck recently posted a video titled, with the usual YouTube clickbait, “I almost quit YouTube…”. He talked about nearly quitting, taking a sabbatical, and eventually landing on what he describes as “relentless optimism”. I am not citing it because I think he has the answer. He has a large audience and a very specific creator-optimism brand. None of that is what I am after.

What I found useful was a specific feeling he described: by the time you have processed one shift, the landscape has already moved. It is not that you are slow. It is that it genuinely never stops.

That description hits the nail on the head. It is exhausting in a way that is different from normal keeping-up-with-tech exhaustion. Normally, you learn something new, a framework or a tool, and eventually things stabilise.

This does not feel like that. It feels like the foundation of the work itself is shifting, and nobody really knows what it is going to look like in the end.


The FOMO That Needs To Be Scratched

I want to be honest about what my FOMO actually is, because it is not sophisticated.

I just want to play with the latest toys. It’s the same itch that got me interested in setting up a home lab, collecting old Cisco routers, and running Kubernetes on anything I can. The latest version of that itch is playing with self-hosted AI models.

I don’t really have a good use case for it other than wiring it into Home Assistant. It’s just that I love to experiment and try new things. This desire to play around with technology is what gets me going, and self-hosting AI models is simply the next thing on my list.

I have been looking at Mac Studios. The M-series chips are far more power-efficient than a server full of Nvidia GPUs, and local inference on an M-series Mac works very well, especially on the higher-end configurations. I have not bought one (yet). It is a lot of money and I cannot justify the cost by saying “I just want a new toy”.

(I’ll be honest, I’m also a little conflicted: self-hosting addict and Kubernetes nerd that I am, I would really love some Nvidia GPUs in my K8s cluster, but the power draw is just a bit too unreasonable.)

At the moment, I pay for Claude Pro (the cheapest paid tier). I also have an Ollama Cloud subscription (which is actually quite nice! You get a lot of tokens, and Kimi K2.5 works pretty well!). I have also tinkered with OpenClaw, the terrifying yet kinda interesting AI personal assistant.
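For anyone wondering what this tinkering actually looks like in practice, here is a minimal sketch of talking to a locally hosted model through Ollama’s HTTP API (it listens on port 11434 by default). The model name is just whatever you have pulled locally, and `ask_local_model` obviously assumes an Ollama server is running on your machine.

```python
import json
import urllib.request

# Ollama's default local API endpoint (assumes a locally running server).
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(model: str, prompt: str) -> str:
    """Send a prompt to a locally hosted model and return its reply text."""
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Something like `ask_local_model("llama3", "Summarise this Kubernetes manifest")` is the whole trick; the barrier to entry really is that low, which is part of why the itch is so persistent.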

But here is where I get stuck: the hammer and the nail problem. I have the hammer, but I can’t find any nails.

I watch video after video where the premise is “IT’S MY 24/7 AI ASSISTANT AGENT” and the explanation of what it actually does never arrives. The basic takeaway from a lot of these videos has been: “My AI personal assistant is amazing, it runs in a loop and manages other agents. Much wow.” Or they are running their 24/7 AI assistant to generate “leads” for an undefined business (usually selling AI courses). The hype feeds itself in a loop. Just looking at the /r/openclaw subreddit shows countless posts along the lines of “So I’ve set up OpenClaw… now what are your use cases?”.

There are also more than enough posts explaining how dangerous something like OpenClaw can be, so I’m not going to go into that here, but the enthusiast content treats this as a solved problem, or ignores it entirely. It is not solved.

So I sit with the itch unscratched, which might be the honest outcome. Or it might be that I have not found the right thing yet.


What is the job now?

AI can already write Terraform. It writes Ansible playbooks. It writes Kubernetes manifests. It troubleshoots a lot of basic issues quite well, and also speeds up working on more difficult problems with a bit of guidance. I use it for all of these things and it is genuinely useful. It can get you 80% of the way there very quickly, but that last 20% still requires real understanding.

I am not redundant today. The job still requires someone who can evaluate the output, catch the subtle mistakes, understand the system the config is being dropped into, and take responsibility for what actually runs in production. AI does not do any of that. I do.

But here is the thing: I am not confident about what the job looks like in five years. I am even less confident about what it looks like in ten years. The shape of the work is already changing and I cannot see clearly where the trajectory ends up.

That uncertainty about the trajectory is its own kind of anxiety, separate from the immediate job security question. It is the not-knowing-what-to-prepare-for that is hard.


When “Just in Case” Becomes a Real Thought

I think a lot of people wish they had a side hustle of some sort. Something “just in case”.

Lately, I have been thinking about it more and more. Not because anything has gone wrong. Not because I am in trouble. Just because tomorrow feels genuinely unknowable in a way it did not before.

I do not have a plan. I have the thought.


Where I actually am

I am tinkering. I am paying for tools. I am reading things. I am having the same circular conversations with my brother, both of us technical, both of us finding the technology genuinely interesting, both of us finding the conversations end somewhere uncomfortable and a little bit depressing.

I have not figured out how to integrate AI into my work in a way that feels strategic rather than reactive. I have not resolved the oscillation. I do not have a clear answer to the “embrace AI” advice because nobody has defined what that means for someone in infrastructure and platform work.

What I have is the question.

And the uncomfortable feeling that not having an answer might be the most honest position right now.

I went back and forth on whether to include this, but it feels dishonest not to: parts of this were written with the help of AI.

Which, I suppose, is part of the point.
