Kafka Connect in distributed mode uses Kafka itself to persist the offsets of any source connectors. This is a great way of doing things, because it means you can easily add more workers, rebuild existing ones, and so on, without having to worry about where the state is persisted. I personally always recommend using distributed mode, even for a single worker instance: it just makes things easier, and more standard.
When you create a sink connector in Kafka Connect, by default it will start reading from the beginning of the topic and stream all of the existing—and new—data to the target. The setting that controls this behaviour is auto.offset.reset, and you can see its value in the worker log when the connector runs:
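If you don't want a sink connector to start from the beginning of the topic, the underlying consumer's setting can be overridden per connector. A hypothetical sketch (connector and topic names are made up; the `consumer.override.` prefix needs Apache Kafka 2.3+ and `connector.client.config.override.policy=All` set on the worker):

```json
{
  "name": "jdbc-sink-orders",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "consumer.override.auto.offset.reset": "latest"
  }
}
```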
I’ve been using Replicator as a powerful way to copy data from my Kafka rig at home onto my laptop’s Kafka environment. It means that when I’m on the road I can continue to work with the same set of data and develop pipelines etc. With a VPN back home I can even keep them in sync directly if I want to.
I hit a problem the other day where Replicator was running, but I had no data in my target topics on my laptop.
Alfred is one of my favourite productivity tools. One of its best features is the clipboard history; when I moved laptops and it didn't transfer, I realised quite how much I rely on this functionality in my day-to-day work.
Whilst Alfred has the option to synchronise its preferences across machines, it seems that it doesn't synchronise the clipboard database. To get it to work I did the following:
I write and speak a lot about Kafka, and get a fair few questions as a result. The most common question actually has nothing to do with Kafka, but is instead:
How do you make those cool diagrams?
I wrote about this originally last year but since then have evolved my approach. I’ve now pretty much ditched Paper, in favour of Concepts. It was recommended to me after I published the previous post.
This week I was scheduled in to a couple of meetups, in Vienna and Munich. Flying is an inevitable part of travel since I also happen to like being home seeing my family and airplanes are usually the quickest way to make this happen. I don’t particularly enjoy flying, and there’s the environmental impact of it too—so when I realised that Vienna and Munich are relatively close to each other I looked at getting the train.
Kafka Connect has a REST API through which all config should be done, including removing connectors that have been created. Sometimes though, you might have reason to want to do this manually—and since Kafka Connect running in distributed mode uses Kafka as its persistent data store, you can achieve this by writing to the topic yourself.
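The shape of the trick is this: each connector's config lives in the config topic under the key `connector-<name>`, and writing a NULL (tombstone) value for that key removes the connector. A sketch, with a hypothetical connector name (the kafkacat flags shown are my recollection; check your version's docs):

```shell
# Hypothetical connector name; the topic is whatever config.storage.topic is set to
CONNECTOR="source-foo-00"

# Kafka Connect keys each connector's config as "connector-<name>" in the config topic
KEY="connector-${CONNECTOR}"
echo "Tombstone key to produce: ${KEY}"

# With kafkacat, -Z sends an empty value as NULL, deleting the connector:
#   echo "${KEY}:" | kafkacat -b localhost:9092 -t connect-configs -P -Z -K:
```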
Here’s a hacky way to automatically restart Kafka Connect connectors if they fail. Restarting automatically only makes sense if it’s a transient failure; if there’s a problem with your pipeline (e.g. bad records or a mis-configured server) then you don’t gain anything from this. You might want to check out Kafka Connect’s error handling and dead letter queues too.
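As a sketch of the idea (the connector name and the exact response shape here are assumed; compare against your own worker's output): pull the status of every connector, find tasks in FAILED state, and build the restart endpoint for each.

```shell
# In practice you'd fetch this from the worker:
#   STATUS=$(curl -s "http://localhost:8083/connectors?expand=status")
# Simplified sample response with one failed task, for illustration:
STATUS='{"sink-foo":{"status":{"name":"sink-foo","connector":{"state":"RUNNING"},"tasks":[{"id":0,"state":"FAILED"},{"id":1,"state":"RUNNING"}]}}}'

# Build a REST path for every task in FAILED state
RESTARTS=$(echo "$STATUS" | jq -r '
  to_entries[]
  | .value.status as $s
  | $s.tasks[]
  | select(.state=="FAILED")
  | "/connectors/\($s.name)/tasks/\(.id)/restart"')

echo "$RESTARTS"

# Each path can then be POSTed back to the worker, e.g.:
#   echo "$RESTARTS" | xargs -I{} curl -s -X POST "http://localhost:8083{}"
```

Run this from cron (or similar) and transient task failures get picked up and restarted automatically.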
Kafka Connect configuration is easy - you just write some JSON! But what if you've got credentials that you need to pass? Embedding those in a config file is not always such a smart idea. Fortunately, with KIP-297 (released in Apache Kafka 2.0) there is support for external secrets. It's extensible with your own ConfigProvider, and ships with one for reading credentials from a file, which I'll show here. You can read more here.
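Concretely (the file path and key names here are invented for illustration): enable the FileConfigProvider in the worker properties, put the secret in a properties file, and reference it from the connector config with the `${file:...}` syntax.

```
# Worker config (e.g. connect-distributed.properties)
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
```

Then in the connector's JSON, instead of the literal credentials:

```json
{
  "connection.user": "${file:/data/credentials.properties:db_user}",
  "connection.password": "${file:/data/credentials.properties:db_password}"
}
```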
Kafka Connect exposes a REST interface through which all config and monitoring operations can be done. You can create connectors, delete them, restart them, check their status, and so on. But I found myself in a situation recently where I needed to delete a connector and couldn't do so with the REST API. Here's another way to do it: by amending the Kafka topic that Kafka Connect in distributed mode uses to persist configuration information for connectors. Note that this is not a recommended way of working with Kafka Connect - the REST API is there for a good reason :)
Kafka Connect is an API within Apache Kafka, and its modular nature makes it powerful and flexible. Converters are part of the API, but not always fully understood. I’ve written previously about Kafka Connect converters, and this post is just a hands-on example to show even further what they are—and are not—about.
When you run Kafka Connect in distributed mode it uses a Kafka topic to store the offset information for each connector. Because it’s just a Kafka topic, you can read that information using any consumer.
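For example (topic name, kafkacat flags, and the record contents here are my assumptions; the topic is whatever `offset.storage.topic` is set to): each record's key identifies the connector and source partition, and the value is the offset.

```shell
# In practice, consume the topic directly, printing keys alongside values:
#   kafkacat -b localhost:9092 -t connect-offsets -C -f 'Key: %k  Value: %s\n'
# A hypothetical record for illustration - key is [connector name, source partition]:
KEY='["jdbc-source-foo",{"table":"orders"}]'
VALUE='{"incrementing":42}'

# The connector name is the first element of the key
NAME=$(echo "$KEY" | jq -r '.[0]')
echo "$NAME"
```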
Prompted by a question on StackOverflow: the requirement is to take a series of events sharing a common key and, for each key, output a series of aggregates derived from a changing value in the events. I’ll use the data from the question, based on ticket statuses. Each ticket can go through various stages, and the requirement was to show, per customer, how many tickets are currently at each stage.
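A sketch of one way to express this in KSQL (stream, column, and topic names are invented here; the original question's schema differs): model the ticket events as a table, so that each ticket id holds only its latest status, then aggregate over that table.

```sql
-- Latest status per ticket: table semantics mean new events overwrite old ones per key
CREATE TABLE TICKET_STATUS (TICKET_ID VARCHAR, CUSTOMER VARCHAR, STATUS VARCHAR)
  WITH (KAFKA_TOPIC='ticket-events', VALUE_FORMAT='JSON', KEY='TICKET_ID');

-- Count of tickets currently at each stage, per customer
CREATE TABLE TICKETS_PER_STAGE AS
  SELECT CUSTOMER, STATUS, COUNT(*) AS TICKET_COUNT
  FROM TICKET_STATUS
  GROUP BY CUSTOMER, STATUS;
```

As a ticket moves from one stage to the next, its contribution to the old stage's count is retracted and added to the new one.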
Before you can drop a stream or table that’s populated by a query in KSQL, you have to terminate any queries that depend on it. Here’s a bit of jq & xargs magic to terminate all queries that are currently running.
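The shape of the trick (endpoint and response shape as I recall them; verify against your KSQL version): list the running queries via the REST API, pull out their ids with jq, then POST a TERMINATE for each one with xargs.

```shell
# In practice, fetch the running queries from the KSQL REST API:
#   QUERIES=$(curl -s -X POST http://localhost:8088/ksql \
#     -H 'Content-Type: application/vnd.ksql.v1+json; charset=utf-8' \
#     -d '{"ksql":"SHOW QUERIES;"}')
# Simplified sample response, for illustration:
QUERIES='[{"queries":[{"id":"CSAS_ENRICHED_0"},{"id":"CTAS_AGG_1"}]}]'

# Extract every query id
IDS=$(echo "$QUERIES" | jq -r '.[].queries[].id')
echo "$IDS"

# Then terminate each one:
#   echo "$IDS" | xargs -I{} curl -s -X POST http://localhost:8088/ksql \
#     -H 'Content-Type: application/vnd.ksql.v1+json; charset=utf-8' \
#     -d '{"ksql":"TERMINATE {};"}'
```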
This post is the companion to an earlier one that I wrote about conference abstracts. In the same way that the last one was inspired by reviewing a ton of abstracts and noticing a recurring pattern in my suggestions, so this one comes from reviewing a bunch of slide decks for a forthcoming conference. They all look like good talks, but in several cases these great talks are fighting to get out from underneath the deadening weight of slides.
Herewith follows my highly-opinionated, fairly-subjective, and extremely-terse advice and general suggestions for slide decks. You can also find related ramblings in this recent post too. My friend and colleague Vik Gamov also wrote a good post on this same topic, and linked to a good video that I’d recommend you watch.
I’ve written quite a few talks over the years, but usually as a side-line to my day job. In my role as a Developer Advocate, talks are part of What I Do, and so I can dedicate more time to it. A lot of the talks I’ve done previously have evolved through numerous iterations, and with a new talk to deliver for the "Spring Season" of conferences, I thought it would be interesting to track what it took from concept to actual delivery.