Interesting links - July 2025
Not got time for all this? I’ve marked 🔥 for my top reads of the month :)
Iceberg nicely decouples storage from ingest and query (yay!). When we say "decouples", it's a fancy way of saying "doesn't do". Which, in the case of ingest and query, is really powerful. It means that we can store data in an open format, populated by one or more tools, and queried by the same tools or different ones. Iceberg gets to be very opinionated and optimised around what it was built for: storing tabular data in a flexible way that can be efficiently queried. This is amazing!
But, what Iceberg doesn’t do is any housekeeping on its data and metadata. This means that getting data in and out of Apache Iceberg isn’t where the story stops.
Without wanting to mix my temperature metaphors, Iceberg is the new hawtness, and getting data into it from other places is a common task. I wrote previously about using Flink SQL to do this, and today I’m going to look at doing the same using Kafka Connect.
Kafka Connect can send data to Iceberg from any Kafka topic. The source Kafka topic(s) can be populated by a Kafka Connect source connector (such as Debezium), or a regular application producing directly to it.
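As a rough illustration of what this looks like in practice, here is a sketch of a configuration for the Apache Iceberg sink connector. The topic name (`orders`), table name (`db.orders`), and catalog URI are placeholder assumptions; your catalog type and settings will differ.

```json
{
  "name": "iceberg-sink",
  "config": {
    "connector.class": "org.apache.iceberg.connect.IcebergSinkConnector",
    "topics": "orders",
    "iceberg.tables": "db.orders",
    "iceberg.catalog.type": "rest",
    "iceberg.catalog.uri": "http://localhost:8181"
  }
}
```

Posting this JSON to the Kafka Connect REST API creates the connector, which then reads from the topic and commits the records to the Iceberg table.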
In this blog post I’ll show how you can use Flink SQL to write to Iceberg on S3, storing metadata about the Iceberg tables in the AWS Glue Data Catalog. First off, I’ll walk through the dependencies and a simple smoke-test, and then put it into practice using it to write data from a Kafka topic to Iceberg.
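By way of a preview, the smoke-test boils down to creating an Iceberg catalog in Flink SQL that's backed by Glue. This is a hedged sketch, not the full walkthrough; the warehouse bucket name is a placeholder, and you'll need the Iceberg, AWS, and Hadoop dependencies on the Flink classpath.

```sql
-- Register an Iceberg catalog backed by AWS Glue, with data on S3.
-- 's3://example-bucket/warehouse' is a placeholder path.
CREATE CATALOG glue_catalog WITH (
  'type'         = 'iceberg',
  'catalog-impl' = 'org.apache.iceberg.aws.glue.GlueCatalog',
  'io-impl'      = 'org.apache.iceberg.aws.s3.S3FileIO',
  'warehouse'    = 's3://example-bucket/warehouse'
);

USE CATALOG glue_catalog;
```

With the catalog in place, a plain `CREATE TABLE` and `INSERT INTO` from Flink SQL is enough to confirm that data and metadata are landing where you expect.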
After a week’s holiday ("vacation", for y’all in the US) without a glance at anything work-related, what joy to return and find that the DuckDB folk have been busy, not only with the recent 1.3.0 DuckDB release, but also a brand new project called DuckLake.
Here are my brief notes on DuckLake.
SQL. Three simple letters.
Ess Queue Ell.
/ˌɛs kjuː ˈɛl/
In the data world they bind us together, yet separate us.
As the saying goes, England and America are two countries divided by the same language, and the same goes for the batch and streaming world and some elements of SQL.
Another year, another Current—another 5k run/walk for anyone who’d like to join!
Whether you’re processing data in batch or as a stream, the concept of time is an important part of accurate processing logic.
Because we process data after it happens, there are at least two different types of time to consider:
When it happened, known as Event Time
When we process it, known as Processing Time (or system time or wall clock time)
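To make the distinction concrete, here is a minimal Python sketch (the timestamps are invented for illustration). An event happens at 10:00 but isn't processed until 10:05; windowing on processing time would assign it to the wrong five-minute window, while windowing on event time puts it where it belongs.

```python
from datetime import datetime, timezone

# When it happened (event time): the payment occurred at 10:00 UTC.
event_time = datetime(2025, 7, 1, 10, 0, tzinfo=timezone.utc)

# When we process it (processing time): due to network delay, 10:05 UTC.
processing_time = datetime(2025, 7, 1, 10, 5, tzinfo=timezone.utc)

# The lag between the two is why the distinction matters.
lag = processing_time - event_time
print(lag)  # 0:05:00

# Assign the event to a five-minute tumbling window by each notion of time.
def window_start(ts: datetime) -> datetime:
    """Floor a timestamp to the start of its five-minute window."""
    return ts.replace(minute=(ts.minute // 5) * 5, second=0, microsecond=0)

print(window_start(event_time))       # 10:00 window -- correct for the event
print(window_start(processing_time))  # 10:05 window -- where it happened to arrive
```

The same event lands in different windows depending on which clock you use, which is exactly why accurate processing logic has to be explicit about its choice.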