IBM MQ on Docker - Channel was blocked
I was running IBM MQ in a Docker container, and the client connecting to it was throwing repeated Channel was blocked errors.
One of my favourite hacks for getting data into Kafka is using kafkacat and stdin, often from jq. You can see this in action with Wi-Fi data, IoT data, and data from a REST endpoint. This is fine for getting values into a Kafka message - but Kafka messages are key/value, and being able to specify a key can often be important.
Here’s a way to do that, using a separator and some jq magic. Note that at the moment kafkacat only supports single byte separator characters, so you need to choose carefully. If you pick a separator that also appears in your data, it’s possibly going to have unintended consequences.
Readers of a certain age and RDBMS background will probably remember northwind, or HR, or OE databases - or quite possibly not just remember them but still be using them. Hardcoded sample data is fine, and it’s great for repeatable tutorials and examples - but it’s boring as heck if you want to build an example with something that isn’t using the same data set for the 100th time.
Here’s a collection of Kafka-related talks, just for you.
Each one has 🍿🎥 a recording, 📔 slides, and 👾 code to go and try out.
Prompted by a question on StackOverflow I thought I’d take a quick look at setting up ksqlDB to ingest CDC events from Microsoft SQL Server using Debezium. Some of this is based on my previous article, Streaming data from SQL Server to Kafka to Snowflake ❄️ with Kafka Connect.
I like standalone, repeatable, demo code. For that reason I love using Docker Compose and I embed everything in there - connector installation, the kitchen sink - the works.
I use Hugo for my blog, hosted on GitHub pages. One of the reasons I’m really happy with it is that I can use Asciidoc to author my posts. I was writing a blog recently in which I wanted to include some code that’s hosted on GitHub. I could have copied & pasted it into the blog but that would be lame!
With Asciidoc you can use the include:: directive to include both local files and remote ones, such as that code hosted on GitHub.
Kafka Connect is the integration API for Apache Kafka. Check out this video for an overview of what Kafka Connect enables you to do, and how to do it.
There’s ways, and then there’s ways, to count the number of records/events/messages in a Kafka topic. Most of them are potentially inaccurate, or inefficient, or both. Here’s one that falls into the potentially inefficient category: using kafkacat to read all the messages and pipe them to wc, which with the -l flag tells you how many lines there are - and since each message is printed on its own line, that’s how many messages you have in the Kafka topic:
$ kafkacat -b broker:29092 -t mytestopic -C -e -q | wc -l
3
Google Chrome automagically adds sites that you visit which support searching to a list of custom search engines. For each one you can set a keyword which activates it, so if I want to search Amazon I can just type its keyword (a), press <tab>, and then enter my search term.
I had the pleasure of presenting at DataEngBytes recently, and am delighted to share with you the 🗒️ slides, 👾 code, and 🎥 recording of my ✨brand new talk✨:
A tiny snippet since I wasted 10 minutes going around the houses on this one…
tl;dr: If you try to create a command that is not in lower case (e.g. Alert not alert) then the setMyCommands API will return BOT_COMMAND_INVALID
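For illustration, here’s a minimal sketch in Go of calling setMyCommands directly over HTTP (the bot token and command details are placeholders, and this deliberately skips any bot client library):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Placeholder token - substitute your own bot token
	token := "123456:ABC-DEF"

	// Command names must be lower case; "Alert" here would be rejected
	// with BOT_COMMAND_INVALID, whereas "alert" is accepted.
	payload, _ := json.Marshal(map[string]interface{}{
		"commands": []map[string]string{
			{"command": "alert", "description": "Send me an alert"},
		},
	})

	resp, err := http.Post(
		fmt.Sprintf("https://api.telegram.org/bot%s/setMyCommands", token),
		"application/json",
		bytes.NewReader(payload),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Print the raw API response so any error description is visible
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```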
The server sends Transfer-Encoding: chunked data, and you want to work with the data as it arrives, instead of waiting for the server to finish and the EOF to fire before processing it?
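In Go this falls out fairly naturally, because the HTTP response body is a stream. Here’s a minimal sketch, assuming a line-delimited payload and a placeholder URL:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical endpoint that streams chunked data
	resp, err := http.Get("http://localhost:8080/stream")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// resp.Body is a stream: Scan returns as data arrives,
	// so each line can be processed without waiting for EOF.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println("got:", scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}
```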
At the beginning of all this my aim was to learn something new (Go), and use it to write a version of a utility that I’d previously hacked together in Python that checks your Apache Kafka broker configuration for possible problems with the infamous advertised.listeners setting. Check out a blog that I wrote which explains all about Apache Kafka and listener configuration.
You can find the code at https://github.com/rmoff/kafka-listeners
So far I’ve been running all my code either in the Go Tour sandbox, using Go Playground, or from a single file in VS Code. My explorations in the previous article ended up with a source file that was starting to get a little bit unwieldy, so let’s take a look at how that can be improved.
Within my most recent code, I have the main function and the doProduce function, which is fine when collapsed down:
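The actual code is in the post; as a rough sketch of that shape (assuming the confluent-kafka-go client, with broker and topic names as placeholders), it looks something like this:

```go
package main

import (
	"fmt"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

// doProduce connects to the broker and sends a single message.
func doProduce(broker, topic string) error {
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": broker})
	if err != nil {
		return fmt.Errorf("creating producer: %w", err)
	}
	defer p.Close()

	if err := p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("Hello world"),
	}, nil); err != nil {
		return fmt.Errorf("producing message: %w", err)
	}

	// Wait up to 5s for outstanding messages to be delivered
	p.Flush(5000)
	return nil
}

func main() {
	if err := doProduce("localhost:9092", "test_topic"); err != nil {
		fmt.Printf("doProduce failed: %v\n", err)
	}
}
```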
When I set out to learn Go one of the aims I had in mind was to write a version of this little Python utility which accompanies a blog I wrote recently about understanding and diagnosing problems with Kafka advertised listeners. Having successfully got Producer, Consumer, and AdminClient API examples working, it is now time to turn to that task.
Last time I looked at creating my first Apache Kafka consumer in Go, which used the now-deprecated channel-based consumer. Whilst idiomatic for Go, it has some issues which mean that the function-based consumer is recommended for use instead. So let’s go and use it!
Instead of reading from the Events() channel of the consumer, we read events using the Poll() function with a timeout. The way we handle events (a switch based on their type) is the same:
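A minimal sketch of that loop, assuming the confluent-kafka-go client (broker, group, and topic names are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"group.id":          "rmoff_learning_go",
		"auto.offset.reset": "earliest",
	})
	if err != nil {
		log.Fatalf("creating consumer: %v", err)
	}
	defer c.Close()

	if err := c.Subscribe("test_topic", nil); err != nil {
		log.Fatalf("subscribing: %v", err)
	}

	for {
		// Poll waits up to 100 ms for an event, instead of ranging over Events()
		ev := c.Poll(100)
		if ev == nil {
			continue
		}
		// Handle the event based on its type, just as before
		switch e := ev.(type) {
		case *kafka.Message:
			fmt.Printf("Message on %v: %s\n", e.TopicPartition, string(e.Value))
		case kafka.Error:
			fmt.Printf("Error: %v\n", e)
		default:
			fmt.Printf("Ignoring event: %v\n", e)
		}
	}
}
```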
Having written my first Kafka producer in Go, and even added error handling to it, the next step was to write a consumer. It follows closely the pattern of Producer code I finished up with previously, using the channel-based approach for the Consumer:
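As a rough sketch of that channel-based pattern (assuming the confluent-kafka-go client with go.events.channel.enable set; broker, group, and topic names are placeholders, and this is the API that was later deprecated):

```go
package main

import (
	"fmt"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	// The channel-based consumer needs go.events.channel.enable set to true
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers":        "localhost:9092",
		"group.id":                 "rmoff_learning_go",
		"auto.offset.reset":        "earliest",
		"go.events.channel.enable": true,
	})
	if err != nil {
		log.Fatalf("creating consumer: %v", err)
	}
	defer c.Close()

	if err := c.Subscribe("test_topic", nil); err != nil {
		log.Fatalf("subscribing: %v", err)
	}

	// Events are delivered on the consumer's channel; switch on their type
	for ev := range c.Events() {
		switch e := ev.(type) {
		case *kafka.Message:
			fmt.Printf("Message on %v: %s\n", e.TopicPartition, string(e.Value))
		case kafka.Error:
			fmt.Printf("Error: %v\n", e)
		}
	}
}
```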
I looked last time at the very bare basics of writing a Kafka producer using Go. It worked, but only with everything lined up and pointing the right way. There was no error handling of any sort. Let’s see about fixing this now.
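A minimal sketch of the kind of handling involved, assuming the confluent-kafka-go client (broker and topic are placeholders): check the error returned when creating the producer, and watch the delivery reports coming back asynchronously on the Events() channel for per-message failures.

```go
package main

import (
	"fmt"
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	topic := "test_topic"

	// Errors here are usually configuration problems; broker connectivity
	// problems tend to surface asynchronously via Events() instead.
	p, err := kafka.NewProducer(&kafka.ConfigMap{"bootstrap.servers": "localhost:9092"})
	if err != nil {
		log.Fatalf("creating producer: %v", err)
	}
	defer p.Close()

	// Delivery reports (and errors) come back asynchronously on Events()
	go func() {
		for ev := range p.Events() {
			switch e := ev.(type) {
			case *kafka.Message:
				if e.TopicPartition.Error != nil {
					fmt.Printf("Delivery failed: %v\n", e.TopicPartition.Error)
				} else {
					fmt.Printf("Delivered to %v\n", e.TopicPartition)
				}
			case kafka.Error:
				fmt.Printf("Producer error: %v\n", e)
			}
		}
	}()

	if err := p.Produce(&kafka.Message{
		TopicPartition: kafka.TopicPartition{Topic: &topic, Partition: kafka.PartitionAny},
		Value:          []byte("Hello world"),
	}, nil); err != nil {
		fmt.Printf("Produce failed: %v\n", err)
	}

	// Wait up to 5s for outstanding messages to be delivered
	p.Flush(5000)
}
```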