Yes, you’re right

- AI slop is ruining the internet.
- Given half a chance AI will delete your inbox or worse (even if you work in Safety and Alignment at Meta):

  Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb. pic.twitter.com/XAxyRwPJ5R
  — Summer Yue (@summeryue0) February 23, 2026

- Low-effort AI contributions are harming the open-source ecosystem.
- LLMs hallucinate.

…etc etc, ad infinitum.
But you’re also so, so wrong.
ME: Here's this tool to help you do your job better
THEM: Cool!
ME: It uses AI
THEM: IT'S A CON, RUN AWAY, DON'T BELIEVE THEM
AI is fundamentally changing how we do things, whether you like it or not.
AI is not just another hype cycle, and I’ll tell you why.
Consider:
- The Internet
- The Cloud
- Big Data
- Blockchain & Crypto
- Data Mesh
- GenAI/Coding Agents
Some hits and some misses there. The difference with AI [1] is that the people shouting excitedly about it are actually using it and getting real value from it.
Contrast that with when folk were running around trying to convince themselves that they needed to learn Pig to process their "Big Data" when SQL on Oracle would have been just fine, or that there really was a use for Blockchain beyond a handful of niche cases (some of them even legal).
Just because something is hyped doesn’t mean there’s nothing in it.
Of course, we’ve all been burnt. I distinctly remember sitting around in 2021 convincing myself I ought to be learning how to write a smart contract for Ethereum. Oh, how we laughed.
But if you’re the kind of person who wants to stay relevant in the jobs market, part of what you should always be doing is keeping an eye on developments in the industry, even if some of it turns out to be hokum.
Why is this still an argument?
AI is here to stay, and those of us keen to have relevant and rewarding jobs in the future really ought to be actively figuring out what on earth AI means for our particular disciplines. And this is me here, trying to figure it out.
This article is from last September; ancient by AI commentary standards. But it remains an important and relevant read. I’d crudely summarise it thus: simply keeping on doing what you’re doing won’t work.
The trajectories that things used to follow are changing, and no-one knows where they’re going. As Sam Newman notes:
Whatever you might think about the problems or downside of AI for software dev, you need to keep a roof over your head.
When things are changing, or have changed, human instinct varies. Many people, myself included, hate their cheese being moved. Change creates uncertainty. Uncertainty is unsettling. This reaction is understandable.
Brittany Ellich wrote an excellent article this week, titled Embrace the Uncertainty. Her article is considered, thoughtful, and articulate—I recommend you read it. Much more calmly than I’m doing, she argues that we don’t really have a choice; pretending that we can ignore the impact of AI is pointless. Instead, per the title: embrace it.
Agentic tools aren’t just "a fancy version of auto-complete"…
The difference between the tools I’m using and getting excited about (such as Claude Code), and the "chat bot" LLMs you played with and dismissed as a fun curiosity is that the tools I’m using are agentic.
That damned buzzword. The marketers have ruined it.
But agentic actually means something: the tool has agency. Of its own accord, it will:

- Look things up
- Read documents, and "understand" them
- Edit files
- Execute code
- Look at test results, "figure out" the problem, and change the source code to fix the problem.
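That last step — run, read the failure, edit, run again — is the whole trick. Here is a toy sketch of that loop, not any real tool’s implementation: the "model" is a stub that fixes one known bug, standing in for an actual LLM, and all the names are illustrative.

```python
# A minimal sketch of an agentic edit/execute/fix loop. The "model" here is a
# stub standing in for a real LLM; everything else is illustrative only.
import os
import subprocess
import sys
import tempfile

def run_code(source: str) -> tuple[bool, str]:
    """Execute the candidate code and report whether it ran cleanly."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=10)
        return result.returncode == 0, result.stderr
    finally:
        os.unlink(path)

def stub_model(source: str, error: str) -> str:
    """Stand-in for an LLM: 'fixes' one known typo when it sees the error."""
    if "NameError" in error:
        return source.replace("pritn", "print")
    return source

def agent_loop(source: str, max_iterations: int = 3) -> str:
    """Execute -> read the failure -> edit -> execute again, until it passes."""
    for _ in range(max_iterations):
        ok, error = run_code(source)
        if ok:
            return source
        source = stub_model(source, error)  # the agent rewrites the code itself
    raise RuntimeError("agent gave up")

fixed = agent_loop('pritn("hello")')
print(fixed)  # the loop converged on working code
```

A real agent swaps the stub for model calls and adds tool use (search, file edits, shell commands), but the feedback loop is the same shape.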
…although, it is just writing code
Consider two key arguments, both of which are true:
- LLMs make shit up
- LLMs are not deterministic. Run the same prompt twice, and you get different results. Maybe it’s two different ways of saying the same thing, maybe one is right and one is wrong. Maybe both are wrong.
Does that mean that we shouldn’t use them?
That would be…short-sighted.
LLMs, and coding agents, are tools. That’s all. Startlingly productivity-boosting, and rather fun to use—but tools nonetheless. And just like any other tool, they have their correct uses, and their incorrect ones.
- Correct use of agentic coding: making you more productive at writing code. Code that you should still test and verify.
- Incorrect use of agentic coding: blindly trusting whatever it does.
In the context of data engineering, I’ve seen the concern raised multiple times that LLMs can’t work with data because of their non-deterministic nature. That’s completely true, and completely missing the point.
When we’re using agentic coding tools to build data pipelines we’re getting them to write the code. They write the code that is then executed by deterministic systems. I’m not using an LLM to work out 2+2 and find that sometimes it tells me it’s 4, or maybe 6 or 7. I’m using an LLM to write some code (SQL) that says something like:
SELECT col_1 + col_2 FROM src_table;
and then the RDBMS does the calculation. No hallucinations. Either the code is right, or it’s wrong. And that’s concretely testable and verifiable.
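To make that concrete, here is a toy sketch of the same point, run against an in-memory SQLite database (an assumption for illustration; the table and column names are just the ones from the snippet above):

```python
# The LLM writes the SQL; a deterministic engine executes it. So the output is
# repeatable and testable, whatever the model's own quirks. sqlite3 and the
# table/column names are illustrative assumptions.
import sqlite3

# Pretend this string came back from a coding agent.
generated_sql = "SELECT col_1 + col_2 FROM src_table;"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src_table (col_1 INTEGER, col_2 INTEGER)")
conn.execute("INSERT INTO src_table VALUES (2, 2)")

# Run it as many times as you like: the RDBMS gives the same answer every time.
results = [conn.execute(generated_sql).fetchone()[0] for _ in range(3)]
print(results)  # [4, 4, 4] -- never 6 or 7
```

If the agent had written the wrong SQL, the assertion in your test suite fails the same way every run — which is exactly what makes it verifiable.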
AI is a force-multiplier
Put yourself in the shoes of an employer. In front of you are two candidates for a job. Both equally skilled and experienced. One embraces AI tooling as a way to be more productive. One doesn’t.
Who is going to get the job?
We can argue until we’re blue in the face regarding other scenarios (good engineer vs bad engineer with AI, engineer vs AI, etc), but if nothing else, the above framing should convince you that it’s worth understanding where AI can fit into your work (and where it can’t…yet).
Even if you’re happy where you are—and not planning to be in the hypothetical situation above of being a candidate for a new job—it might not be AI that replaces you, but another human. What’s stopping some junior half your age who is actively adopting AI from running rings around you and taking your job?
P.S.
Learning this shit is fun.
Any half-decent employer at the moment will be offering up access to AI tools—bite their hand off and take the chance to learn it.
Now, maybe that’s because their ulterior motive is to replace you. Then again, smart employers are simply realising that AI is a productivity tool and they want their staff to use it.
And if your employer is just planning to replace you with AI, is that not even more reason to embrace the opportunity to learn it now and skill yourself up for the jobs market that’s to come?
Credits and Blame

I wrote this blog title as a joke on LinkedIn, but enough people egged me on that I then fleshed it out into a full article. If that was you and you were joking…oops.