🏃🚶 The unofficial Current London 2025 Run/Walk 🏃🚶
Another year, another Current—another 5k run/walk for anyone who’d like to join!
Whether you’re processing data in batch or as a stream, the concept of time is an important part of accurate processing logic.
Because we process data after it happens, there are at least two different types of time to consider:
- When it happened, known as Event Time
- When we process it, known as Processing Time (also called system time or wall clock time)
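To make the distinction concrete, here's a minimal Python sketch (mine, not from the post) of an event that carries its own timestamp versus the wall-clock time at which it happens to be handled:

```python
from datetime import datetime, timezone, timedelta

# An event carries its own timestamp (event time), set where it happened...
event = {
    "payload": {"order_id": 42, "amount": 9.99},
    "event_time": datetime(2025, 5, 19, 10, 0, 0, tzinfo=timezone.utc),
}

# ...while processing time (wall clock time) is simply "now" when we handle it.
processing_time = datetime.now(timezone.utc)

# The gap between the two is the event's lag; late-arriving data is why
# accurate logic (e.g. windowing) usually needs to key off event time.
lag = processing_time - event["event_time"]
print(f"event time:      {event['event_time'].isoformat()}")
print(f"processing time: {processing_time.isoformat()}")
print(f"lag:             {lag}")

# Example: assigning the event to a 5-minute tumbling window by event time.
# Event-time windows give deterministic, replayable results; processing-time
# windows depend on when the job happened to run.
window = timedelta(minutes=5)
offset_into_window = event["event_time"].timestamp() % window.total_seconds()
window_start = event["event_time"] - timedelta(seconds=offset_into_window)
print(f"event-time window start: {window_start.isoformat()}")
```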
So. Many. Interesting. Links. Not got time for all this? I’ve marked 🔥 for my top reads of the month :)
Confluent Cloud for Apache Flink lets you run Flink workloads on a serverless platform within Confluent Cloud. After poking around the Confluent Cloud API for configuring connectors, I wanted to take a look at the same for Flink.
The API is particularly useful if you want to script a deployment, or automate a bulk operation that would be tiresome to do by hand. It’s also handy if you just prefer living in the CLI :)
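As a flavour of what that scripting can look like, here's a rough Python sketch of submitting a Flink SQL statement over HTTP. The regional endpoint, URL path, payload shape, and IDs here are assumptions and placeholders rather than a verified contract; see the post and the official API reference for the real details.

```python
import requests

# Placeholders -- substitute your own regional endpoint, org/env IDs, and
# a Flink API key pair.
FLINK_REST = "https://flink.eu-west-1.aws.confluent.cloud"  # assumed regional endpoint
ORG_ID = "00000000-0000-0000-0000-000000000000"
ENV_ID = "env-xxxxx"
API_KEY, API_SECRET = "FLINK_API_KEY", "FLINK_API_SECRET"

# A statement resource: the SQL to run and the compute pool to run it on.
statement = {
    "name": "orders-last-10",
    "spec": {
        "statement": "SELECT * FROM orders LIMIT 10",  # your Flink SQL
        "compute_pool_id": "lfcp-xxxxx",               # placeholder compute pool
        "properties": {
            "sql.current-catalog": "my_env",
            "sql.current-database": "my_cluster",
        },
    },
}

# Submit the statement (HTTP basic auth with the Flink API key/secret).
resp = requests.post(
    f"{FLINK_REST}/sql/v1/organizations/{ORG_ID}/environments/{ENV_ID}/statements",
    json=statement,
    auth=(API_KEY, API_SECRET),
    timeout=30,
)
resp.raise_for_status()
print(resp.status_code, resp.json())
```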
The problem with publishing February’s interesting links at the beginning of the month and only now getting around to publishing March’s at the end is that I have nearly two months' worth of links to share 😅 So without further ado, let’s crack on.
tl;dr: Upload a PDF document in which each slide of the carousel is one page.
I wanted to publish a carousel post on LinkedIn, but had to wade through a million pages of crap in Google from companies trying to sell shit. Here’s how to do it simply.
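For example, if each slide is exported as an image, a few lines of Python (using img2pdf, which is my choice here rather than anything LinkedIn mandates) will bundle them into the one-page-per-slide PDF to upload:

```python
import glob
import img2pdf  # pip install img2pdf

# Assumes the slides have already been exported as slides/slide-01.png,
# slides/slide-02.png, ... -- one image per carousel slide.
slides = sorted(glob.glob("slides/slide-*.png"))

# img2pdf puts each image on its own page, which is what the carousel needs.
with open("carousel.pdf", "wb") as f:
    f.write(img2pdf.convert(slides))

print(f"Wrote carousel.pdf with {len(slides)} pages")
```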
In this blog post I’m going to explore how, as a data engineer in the field today, I might go about putting together a rudimentary data pipeline. I’ll take some operational data and wrangle it into a form that makes it easily pliable for analytics work.
After a somewhat fevered and nightmarish period during which people walked around declaring that "Schema on Read" was the future, that "Data is the new oil", and "Look at the size of my big data", the path of IT history is somewhat doubling back on itself towards a more sensible approach to things.
As they say:
What’s old is new
This is good news for me, because I am old and what I knew then is 'new' now ;)
DuckDB added a very cool UI last week and I’ve been using it as my primary interface to DuckDB since.
One thing that bothered me was that the SQL I was writing in the notebooks wasn’t exportable. Since the DuckDB UI uses DuckDB in the background for storing notebooks, getting the SQL out is easy enough.
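Something along these lines should work in Python, although the storage path and schema are assumptions on my part, so inspect what's actually there rather than trusting the names below:

```python
import duckdb
from pathlib import Path

# Assumption: the UI keeps its state in a DuckDB file under the extension's
# data directory. Open it read-only and look around with plain SQL.
ui_db = Path.home() / ".duckdb" / "extension_data" / "ui" / "ui.db"  # assumed path

con = duckdb.connect(str(ui_db), read_only=True)

# Discover the tables rather than guessing their names.
print(con.sql("SHOW ALL TABLES"))

# Once you've spotted the table holding notebook cells, select the SQL out, e.g.:
# print(con.sql("SELECT * FROM <notebook_table>"))
```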
I wrote a couple of weeks ago about using DuckDB and Rill Data to explore a new data source that I’m working with. I wanted to understand the data’s structure and distribution of values, as well as how different entities related. This week DuckDB 1.2.1 was released and that little 0.0.1 version boost brought with it the DuckDB UI.
Here I’ll go through the same process as I did before, and see how much of what I was doing can be done in DuckDB alone now.
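As a taste of the kind of profiling involved, here's a small DuckDB-from-Python sketch (the file and column names are illustrative, not from the post) for checking structure, value distributions, and how entities hang together:

```python
import duckdb

con = duckdb.connect()

# Structure: column names and types of the source file.
print(con.sql("DESCRIBE SELECT * FROM 'events.parquet'"))

# Distribution of values: min/max, approximate uniques, null %, quantiles.
print(con.sql("SUMMARIZE SELECT * FROM 'events.parquet'"))

# How entities relate: e.g. row counts per (assumed) foreign key.
print(con.sql("""
    SELECT customer_id, COUNT(*) AS n
    FROM 'events.parquet'
    GROUP BY customer_id
    ORDER BY n DESC
    LIMIT 10
"""))
```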