For whatever reason, CSV still exists as a ubiquitous data interchange format. It doesn’t get much simpler: chuck some plaintext with fields separated by commas into a file and stick .csv on the end. If you’re feeling helpful, you can include a header row with the field names.
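By way of illustration, here's a minimal (entirely made-up) CSV with a header row, read using Python's built-in csv module:

```python
import csv
import io

# A made-up three-field CSV, header row first
data = """name,age,city
alice,34,Leeds
bob,28,York
"""

# DictReader uses the header row to name each field
for row in csv.DictReader(io.StringIO(data)):
    print(row["name"], row["city"])
```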
Alfred is one of my favourite productivity apps for the Mac. It’s a file indexer, a clipboard manager, a snippet expander - and that’s just scratching the surface really. I recently got a new machine without it installed and realised just how much I rely on Alfred, particularly its clipboard manager.
Imagine you’ve got a stream of data; it’s not “big data,” but it’s certainly a lot. Within the data, you’ve got some bits you’re interested in, and of those bits, you’d like to be able to query information about them at any point. Sounds fun, right?
What if you didn’t need any datastore other than Apache Kafka itself to be able to do this? What if you could ingest, filter, enrich, aggregate, and query data with just Kafka?
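As a sketch of what that looks like in ksqlDB (all the stream, table, and topic names here are illustrative):

```sql
-- Ingest a topic as a stream, aggregate it into a table,
-- then query the table's current state directly with a pull query.
CREATE STREAM ORDERS (ITEM VARCHAR, PRICE DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

CREATE TABLE ORDERS_BY_ITEM AS
  SELECT ITEM, COUNT(*) AS ORDER_COUNT
  FROM ORDERS
  GROUP BY ITEM;

-- Pull query: no external datastore involved
SELECT ORDER_COUNT FROM ORDERS_BY_ITEM WHERE ROWKEY='widget';
```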
Screenflow has a useful Markers feature for adding notes to the timeline.
You can use these to helpfully add a table of contents to your YouTube video, but unfortunately Screenflow doesn’t have the option to export them directly. Instead, use the free Subler program as an intermediary (download it from here):
1. Export from Screenflow with a chapters track
2. Open the file in Subler and export to a text file
☁️Confluent Cloud is a great solution for a hosted and managed Apache Kafka service, with the additional benefits of Confluent Platform such as ksqlDB and managed Kafka Connect connectors. But as a developer, you won’t always have a reliable internet connection. Trains, planes, and automobiles—not to mention crappy hotel or conference Wi-Fi. Wouldn’t it be useful if you could have a replica of your Cloud data on your local machine? One that just pulled down new data automagically, without needing to be restarted each time you got back on the network?
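One way to sketch the idea (broker addresses, credentials, and topic name below are all placeholders) is a tiny consume-and-reproduce loop. Because the consumer group's offsets live in the cluster and the client quietly retries on its own, it resumes from where it left off as soon as the network comes back:

```python
from confluent_kafka import Consumer, Producer

# Placeholders throughout: broker addresses, credentials, topic name
source = Consumer({
    'bootstrap.servers': 'XXXX.confluent.cloud:9092',
    'security.protocol': 'SASL_SSL',
    'sasl.mechanisms': 'PLAIN',
    'sasl.username': 'CCLOUD_API_KEY',
    'sasl.password': 'CCLOUD_API_SECRET',
    'group.id': 'local-replica',
    'auto.offset.reset': 'earliest',
})
target = Producer({'bootstrap.servers': 'localhost:9092'})

source.subscribe(['my_topic'])
while True:
    msg = source.poll(1.0)          # client retries quietly while offline…
    if msg is None or msg.error():  # …and resumes from the stored offset
        continue
    target.produce(msg.topic(), key=msg.key(), value=msg.value())
    target.poll(0)                  # serve producer delivery callbacks
```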
kafkacat is one of my go-to tools when working with Kafka. It’s a producer and consumer, but also a Swiss Army knife of debugging and troubleshooting capabilities. So when I built a new Fedora server recently, I needed to get it installed. Unfortunately there’s no pre-built package available via yum, so here’s how to do it manually.
Pre-requisite installs

We’ll need some packages from the Confluent repo, so set this up for yum first by creating /etc/yum.
Updated 16 April 2020 to cover formatting tricks & add info on importing to Google Docs
Short and sweet, this one. I’ve written in the past about how I love Markdown, but I’ve actually moved on from that and now firmly throw my hat into the AsciiDoc ring. I’ll write another post another time explaining why in more detail, but in short: it’s just more powerful whilst still being simple and readable without compilation.
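To give a flavour of what I mean, here's a fragment of AsciiDoc (contents invented for illustration); it reads perfectly well as plain text, yet carries more structure than vanilla Markdown comfortably can:

```asciidoc
= My document title

Some *bold* and _italic_ text, a link:https://asciidoctor.org[link],
and a footnote.footnote:[Try doing this tidily in vanilla Markdown.]

[source,sql]
----
SELECT * FROM FOO;
----

NOTE: Admonition blocks like this one are built in.
```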
I’ve been poking around recently with capturing Wi-Fi packet data and streaming it into Apache Kafka, from where I’m processing and analysing it. Kafka itself is rock-solid - because I’m using ☁️Confluent Cloud and someone else worries about provisioning it, scaling it, and keeping it running for me. But whilst Kafka works just great, my side of the setup—tshark running on a Raspberry Pi—is less than stable. For whatever reason it sometimes stalls, and I have to restart both the Raspberry Pi and the capture process.
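For a sense of the capture side (broker, topic, fields, and interface here are illustrative, and this assumes the interface is already set up for capturing), it boils down to reading tshark's output line by line and producing it to Kafka:

```python
import subprocess
from confluent_kafka import Producer

# Assumptions: local broker, illustrative topic name
producer = Producer({'bootstrap.servers': 'localhost:9092'})

# -l line-buffers the output; each -e field becomes a tab-separated column
proc = subprocess.Popen(
    ['tshark', '-i', 'wlan0', '-l',
     '-T', 'fields', '-e', 'wlan.sa', '-e', 'wlan.fc.type_subtype'],
    stdout=subprocess.PIPE, text=True)

for line in proc.stdout:
    producer.produce('pcap', value=line.strip())
    producer.poll(0)
```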
🦠COVID-19 has well and truly hit the tech scene this week. As well as being full of "WFH tips" for all the tech workers suddenly banished from their offices, my particular Twitter bubble is abuzz with DevRel folk musing and debating about what this interruption means to our profession. For sure, in the short term the spring conference season is screwed: all the conferences are cancelled (or postponed).
But what about the future?
Wi-Fi is now ubiquitous in most populated areas, and the way devices communicate leaves a lot of 'digital exhaust'. Usually a computer will have a Wi-Fi adapter that’s configured to connect to a given network, but these adapters can often be configured instead to pick up the background Wi-Fi chatter of surrounding devices.
There are good reasons—and bad—for doing this. Just as taking equipment apart teaches us how it works, dissecting and examining protocol traffic teaches us how these devices actually communicate.
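As an example of that exhaust, here's a sketch using scapy (not necessarily the tooling I used) that listens for 802.11 probe requests, assuming an interface named wlan0mon already in monitor mode:

```python
from scapy.all import sniff
from scapy.layers.dot11 import Dot11Elt, Dot11ProbeReq

def handle(pkt):
    # Probe requests: devices calling out for networks they already know
    if pkt.haslayer(Dot11ProbeReq):
        elt = pkt.getlayer(Dot11Elt)
        ssid = elt.info.decode(errors='replace') if elt else ''
        print(f"{pkt.addr2} is probing for '{ssid}'")

# wlan0mon is assumed to be an interface already in monitor mode
sniff(iface='wlan0mon', prn=handle)
```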
My name’s Robin, and I’m a Developer Advocate. What that means in part is that I build a ton of demos, and Docker Compose is my jam. I love using Docker Compose for the same reasons that many people do:
- Spin up and tear down fully-functioning multi-component environments with ease. No bespoke builds, no cloning of VMs to preserve "that magic state where everything works".
- Repeatability. It’s the same each time.
- Portability. I can point someone at a docker-compose.yml that I’ve written and they can run the same on their machine with the same results almost guaranteed (a minimal example follows).
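For instance, a docker-compose.yml as small as this sketch (image tags and listener config are just one possible set-up) brings up a single-node Kafka accessible from the host, and docker-compose down tears it away without trace:

```yaml
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.4.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:5.4.0
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```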
ksqlDB 0.7 will add support for message keys as primitive data types beyond just STRING (which is all we’ve had to date). That means that Kafka messages are going to be much easier to work with, and will require less wrangling to get into the form in which you need them. Take the example of a database table that you’ve ingested into a Kafka topic and want to join to a stream of events. Previously, you’d have had to run a ksqlDB processor over that topic to re-key the messages before ksqlDB could join on them. Friends, I am here to tell you that this is no longer needed!
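Concretely, something along these lines should now be possible (names are illustrative, syntax as per the 0.7 primitive-key support):

```sql
-- Declare the key as an INT directly in the schema…
CREATE TABLE CUSTOMERS (ROWKEY INT KEY, NAME VARCHAR)
  WITH (KAFKA_TOPIC='customers', VALUE_FORMAT='JSON');

CREATE STREAM ORDERS (CUSTOMER_ID INT, AMOUNT DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- …and join on it with no re-keying step in between
SELECT O.AMOUNT, C.NAME
  FROM ORDERS O
  JOIN CUSTOMERS C ON O.CUSTOMER_ID = C.ROWKEY
  EMIT CHANGES;
```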
I’m quite a fan of Sonos audio equipment, but recently had trouble with some of the devices glitching and even cutting out whilst playing. Under the covers Sonos stuff is running Linux (of course) and exposes some diagnostics through a rudimentary frontend that you can access at http://<sonos player IP>:1400/support/review.
Whilst this gives you the current state, you can’t get historical data on it. It felt like the problems were happening "all the time", but were they actually?
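That's answerable with a loop as simple as this sketch (IP address, polling interval, and file naming are all placeholder choices), snapshotting the status page so that there's history to look back over:

```python
import time
import requests

SONOS_IP = '192.168.10.98'       # placeholder: your player's IP

while True:
    r = requests.get(f'http://{SONOS_IP}:1400/support/review', timeout=10)
    # Snapshot the raw page with a timestamp in the name; parse it later
    with open(f'sonos-{int(time.time())}.html', 'w') as f:
        f.write(r.text)
    time.sleep(60)               # poll once a minute
```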