Tech Radar (Nov 2025) - data blips
The latest Thoughtworks Tech Radar is out. Here are some of the more data-related ‘blips’ (as they’re called on the radar) that I noticed.
What with Current NOLA 2025 happening this week, and some very last-minute preparations for the demo at the keynote on day 2, this month’s links roundup is pushing it right up to the wire :) The demo was pretty cool, and finally I have a good example of how this AI stuff actually fits into a workflow ;) I’ll write it up as a blog post (or two, probably)—stay tuned!
A presentation about effective blog writing for developers, covering why to blog, what to write about, and how to structure your content.
A short series of notes for myself as I learn more about the AI ecosystem as of Autumn [Fall] 2025. The driver for all this is understanding more about Apache Flink’s Flink Agents project, and Confluent’s Streaming Agents.
I started off this series—somewhat randomly, with hindsight—looking at Model Context Protocol (MCP). It’s a helper technology that makes models easier to use and provides a richer experience. Next I tried to wrap my head around Models—mostly LLMs, but also with an addendum discussing other types of model too. Along the lines of MCP, Retrieval Augmented Generation (RAG) is another helper technology that on its own doesn’t do anything but, combined with an LLM, gives it added smarts. I took a brief moment in part 4 to try and build a clearer understanding of the difference between ML and AI.
So whilst RAG and MCP combined make for a bunch of nice capabilities beyond what models such as LLMs can do alone, what I’m really circling around here is what we can do when we combine all these things: Agents! But…what is an Agent, both conceptually and in practice? Let’s try and figure it out.
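To make that concrete, here’s roughly the shape I have in my head: an agent is basically a loop in which an LLM decides whether to answer or to call a tool, with RAG supplying grounding context along the way. This is a minimal sketch only—every function name below is a hypothetical stub, not the API of any particular library (and certainly not Flink Agents itself).

```python
# A minimal, illustrative agent loop. All functions are hypothetical stubs;
# the point is the shape: the LLM decides, the tools (à la MCP) and
# retrieval (à la RAG) do the actual work.

def retrieve_context(question: str) -> str:
    """RAG: look up documents relevant to the question to ground the answer."""
    return "Flink Agents is a sub-project of Apache Flink for event-driven agents."

def call_tool(name: str, args: dict) -> str:
    """MCP-style tool call: the model asks, something external does the work."""
    if name == "get_topic_lag":
        return f"Consumer lag on {args['topic']}: 42 messages"
    return "unknown tool"

def call_llm(prompt: str) -> dict:
    """Stand-in for an LLM call; a real one returns either text or a tool request."""
    if "Tool result" in prompt:
        return {"type": "answer", "text": "Consumer lag on orders is 42 messages."}
    if "lag" in prompt:
        return {"type": "tool_call", "name": "get_topic_lag", "args": {"topic": "orders"}}
    return {"type": "answer", "text": "Nothing to do."}

def agent(question: str) -> str:
    # Combine RAG context with the user's question...
    prompt = f"Context: {retrieve_context(question)}\n\nQuestion: {question}"
    # ...then loop: the LLM either answers or asks for a tool, until it's done.
    for _ in range(5):  # cap iterations so it can't loop forever
        decision = call_llm(prompt)
        if decision["type"] == "answer":
            return decision["text"]
        result = call_tool(decision["name"], decision["args"])
        prompt += f"\n\nTool result: {result}"
    return "Gave up after too many steps."

print(agent("What's the consumer lag on the orders topic?"))
```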
Sneaking it in just before the end of the month!
It’s a bumper set of links this month—I started with an original backlog of 125 links to get through. Some fell by the wayside, but plenty of others (78, to be precise) made the cut. Without further ado, let’s get cracking!
Not got time for all this? I’ve marked 🔥 for my top reads of the month :)
A short series of notes for myself as I learn more about the AI ecosystem as of September 2025. The driver for all this is understanding more about Apache Flink’s Flink Agents project, and Confluent’s Streaming Agents.
Having poked around MCP and Models, next up is RAG.
RAG has been one of the buzzwords of the last couple of years, with any vendor worth their salt finding a way to crowbar it into their product. I’d been sufficiently put off it by the hype to steer away from actually understanding what it is. In this blog post, let’s fix that—because if I’ve understood it correctly, it’s a pattern that’s not scary at all.
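As a preview of where the post ends up, here’s the pattern boiled down to its bones: retrieve relevant documents, bolt them onto the prompt, and let the LLM generate an answer grounded in that context. Everything below is a toy stub of my own making—no real embedding model, vector store, or LLM—just the shape of it.

```python
# The RAG pattern in its simplest form (all stubs are hypothetical; real
# implementations swap in an embedding model, a vector store, and an LLM):
#   1. Retrieve documents relevant to the question.
#   2. Augment the prompt with those documents.
#   3. Generate an answer with the LLM, grounded in that context.

DOCS = [
    "Apache Flink is a framework for stateful stream processing.",
    "Flink Agents is a project for building event-driven AI agents on Flink.",
    "Apache Kafka is a distributed event streaming platform.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Toy retrieval: rank docs by how many words they share with the question."""
    words = set(question.lower().split())
    return sorted(DOCS, key=lambda d: -len(words & set(d.lower().split())))[:k]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; a real one would send the prompt to a model."""
    return f"(answer grounded in the supplied context)\n---\n{prompt}"

def rag(question: str) -> str:
    context = "\n".join(retrieve(question))                    # retrieve
    prompt = f"Context:\n{context}\n\nQuestion: {question}"    # augment
    return generate(prompt)                                    # generate

print(rag("What is Flink Agents?"))
```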
A short series of notes for myself as I learn more about the AI ecosystem as of September 2025. The driver for all this is understanding more about Apache Flink’s Flink Agents project, and Confluent’s Streaming Agents.
Having poked around MCP and got a broad idea of what it is, I want to next look at Models. What used to be as simple as "I used AI" actually breaks down into several discrete areas, particularly when one starts looking at using LLMs beyond writing a rap about Apache Kafka in the style of Monty Python and using them to build agents (like the Flink Agents that prompted this exploration in the first place).
A short series of notes for myself as I learn more about the AI ecosystem as of September 2025. The driver for all this is understanding more about Apache Flink’s Flink Agents project, and Confluent’s Streaming Agents.
The first thing I want to understand better is MCP.