For the first time ever, I’ve intentionally rethemed one of my blogs. New Media Hack got a new theme only because I had to apply something during the transfer to WordPress. Mass Programming Resistance hadn’t had a style change since its inception. Now it does.
The theme is pretty minimal. I forked a repository from Giulio Fidente that is spartan black text on white background. Mainly I just decided to go with square instead of rounded corners on the buttons and to tighten up the gutter between the sidebar and the main content.
Along with fixing up the RSS and Atom feeds, the theme change was the last thing to finish before declaring the transition to static publishing complete.
This is my first post using Markdown and Pelican. Currently https://crossjam.net/ is resolving properly and the content has transferred over. There’s a bunch of redirects, which are a little ugly, but they serve the purpose of dealing with the old WordPress path prefixes. Mass Programming Resistance and New Media Hack (still preserved, bless its heart) are separated out into two different domains.
Now I need to work on a good workflow for publishing. The upside is that I can now use Emacs to create posts, which will accelerate my ability to spit out text. But I’ll need some key bindings to smooth out some of the linking work. Also, I need to figure out a comfortable flow to update the production server. Pushing to a git repo seems like a little too much work, although Netlify has an interesting take on that path. Maybe for a dev site, just to get some experience.
Okay, I’m making the move to static site generation. With any luck, this will be my last post using WordPress. There will be some domain name shifting so probably some upset to feed readers. Such is life.
Herewith, some enjoyable email newsletters that I’m subscribed to.
After leaving O’Reilly Media, Ben Lorica decided to keep up the podcasting bug with The Data Exchange. Gradient Flow is his parallel site and newsletter on the topics of data, business, machine learning, and AI.
PyCoder’s Weekly provides a nice assortment of Python links. The overall volume and topic distribution works for me.
I’ve been subscribed to Python Weekly for ages. The number of links is a little higher than is useful to me and the number of unsummarized links has increased recently. That said, there’s usually one or two worth following in every edition.
Link parkin’: Materialize
The simplicity of SQL queries, but with millisecond-level latency for real-time data. That is Materialize, the only true SQL streaming database for building internal tools, interactive dashboards, and customer-facing experiences.
Wire format compatible with PostgreSQL, so you can use the psql command line tool, even though there’s not a Postgres database underneath. Possibly a worthy challenger to ksqlDB.
Also in the bin of interesting potential side projects would be deploying an Algo server for a personal VPN.
From the original Algo announcement post:
Today we’re introducing Algo, a self-hosted personal VPN server designed for ease of deployment and security. Algo automatically deploys an on-demand VPN service in the cloud that is not shared with other users, relies on only modern protocols and ciphers, and includes only the minimal software you need.
There’s richer and more recent detail in The Changelog podcast Episode 377.
Note to self, future blog post(s) on recommended podcasts.
The 20.04 Ubuntu Long Term Support release (Focal Fossa) officially supports the Raspberry Pi!
Running Ubuntu Server on your Raspberry Pi is easy. Just pick the OS image you want, flash it onto a microSD card, load it onto your Pi and away you go.
With handy installation instructions to boot.
Long term readers may remember that I put together a stack of Raspberry Pis just for fun one Christmas season. They haven’t been doing much since, but because I’m a heavy user of Ubuntu at work, maybe I can get more mileage out of them.
Alternatively, looks like incorporating Algolia wouldn’t be too challenging.
Hat tip to this tutorial by Maxime Laboissonniere.
Personally, 2019 was a real struggle of a year for me. But 2020 is flat godawful from a global societal perspective. And we haven’t even quite hit mf’in June yet!
The fact that the President of the United States was impeached for only the third time in the country’s history will probably be a footnote in this year’s historical record. Back to Normal (TM) is oxymoronic. This is your normal. Best be about making it better, and quick.
More technical and entertainment shenanigans are forthcoming. But a facade of “This is Fine” would be irresponsible. Had to uncork a little and at least acknowledge the turmoil.
Forced to work from home more, I’m at least enjoying the opportunity to play music out loud over speakers. Look Ma! No office headphones. Maybe in another post I’ll document my foray into the world of Sonos, but suffice it to say it’s fun to listen to music out loud for a change.
I’ve also been committed to finding some new artists to get in my rotation. Don’t know how I discovered her, but Ms. Mada has been a revelation, especially her “Late Night With Ms. Mada” playlist on Soundcloud. Broadly speaking, her sets are House Music but not categorizable, at least by me, in any one of its many splintered genres. Noticeably different from the traditional stylings out of Chicago, New York, and San Francisco, but I’m really enjoying the stripped down, excellently blended rhythms.
BTW, Sonos gear may be a bit of a pain in the ass and often counterintuitive, but it integrates well with Internet streaming audio services like Soundcloud, Spotify, and TuneIn. And apparently there’s a Sonos device API, scriptable with Python.
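In fact, the third-party SoCo package is the usual way in from Python (an assumption here: it’s installed via pip; `soco.discover()` and the `volume` property are part of its documented surface). A sketch, not tested against my own gear:

```python
def set_group_volume(speakers, level):
    """Set every speaker in the collection to the same volume (0-100)."""
    for speaker in speakers:
        speaker.volume = level

def quiet_the_house(level=20):
    """Discover Sonos zones on the LAN and turn them all down.

    Requires the third-party SoCo package and actual Sonos hardware,
    so the import is deferred until this is called.
    """
    import soco  # pip install soco
    zones = soco.discover() or set()
    set_group_volume(zones, level)
    return zones
```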
At the day job, we rely heavily on GitLab, which is a beast of a platform, even if you only consider all the features in the FOSS version. Looks like I’m going to have to automate some processes on top of GitLab and thankfully, there’s a rich RESTful API.
Even better for me, there’s a nice looking Python client package for the API that also provides a command line interface. Looks like the client library is pretty well maintained, although the CLI needs some love.
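And if the client package ever feels like overkill, the v4 REST API is plain HTTP. A minimal standard-library sketch (the instance URL and token are placeholders):

```python
import json
import urllib.request

def project_list_url(base, page=1, per_page=20):
    """Build the v4 endpoint for listing projects, with pagination."""
    return f"{base}/api/v4/projects?page={page}&per_page={per_page}"

def list_projects(base, token, page=1):
    """Fetch one page of projects, authenticated with a personal access token."""
    req = urllib.request.Request(
        project_list_url(base, page),
        headers={"PRIVATE-TOKEN": token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```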
One new piece of kit in my tech toolbox is the Caddy web server. Small, self-contained, and full featured, it definitely comes in handy for personal tech projects and professional prototypes. Biggest win is that it automatically handles HTTPS certificates through LetsEncrypt. As in, you just put the domain name in the config file and HTTPS cert management is solved.
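To illustrate, a Caddyfile for a static site can be about this short (Caddy 2 syntax; the domain and path are placeholders). The bare domain on the first line is all it takes to trigger automatic HTTPS:

```
blog.example.com {
    root * /var/www/blog
    file_server
}
```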
So I’ve said in the past that I wanted to investigate the Discogs.com data dumps as a side project. I’ve basically failed, other than to keep collecting the data dumps continuously. To date, I believe there are 459 data files, totaling about 380+ GB of compressed data. My finger-in-the-wind estimate is that the uncompressed total is nearly 4 TB.
Just going to keep trying to take baby steps to build momentum on this. The available dataset now spans over a decade, which makes it interesting in its own right almost independent of what’s in the dumps. But it’s challenging because of data dirt from the early years and data scale recently. Just getting what I think is an accurate listing and count of the file dumps was surprisingly difficult to generate.
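For the curious, the dumps follow a discogs_YYYYMMDD_&lt;type&gt;.xml.gz naming convention, so even the inventory step is mostly filename parsing. A rough sketch of the rollup I had in mind:

```python
import re
from collections import Counter
from pathlib import Path

# Matches e.g. discogs_20200401_releases.xml.gz
DUMP_RE = re.compile(r"discogs_(\d{4})(\d{2})(\d{2})_(\w+)\.xml\.gz$")

def inventory(dump_dir):
    """Count dump files and total compressed bytes, grouped by year."""
    counts, sizes = Counter(), Counter()
    for path in Path(dump_dir).glob("discogs_*.xml.gz"):
        m = DUMP_RE.search(path.name)
        if not m:
            continue  # skip anything that doesn't fit the convention
        year = m.group(1)
        counts[year] += 1
        sizes[year] += path.stat().st_size
    return counts, sizes
```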
So I made my own fork of pelican to just extract the raw content from a WordPress XML export. Initial indicators are pretty good. Using pelican’s built-in web serving capability, a small sampling of the derived posts seem to have come through completely intact and appear to do well with pelican’s transformation to HTML.
Next up, exploration of pelican theming to eventually give this blog a fresh face.
Longtime NetNewsWire fan here. Glad to see Brent Simmons has reached a significant milestone in bringing NetNewsWire to full functionality. With the feedbin.me integration, NetNewsWire is now really useful on a personal basis. I can once again have a great macOS desktop RSS experience. Looking forward to what’s coming down the pike.
After five years of work — including getting the name NetNewsWire back, and a beautiful new app icon by Brad Ellis — NetNewsWire 5 has finally hit the alpha stage.
The datasette project is just totally darn cool. If you have some data in an sqlite database (or in a csv file, which is easily turned into an sqlite db), then datasette makes it easy to publish that data on the Web. With data-oriented retrieval APIs for free!
The project mostly emerged while I was out in the blogging wilderness. I’ve had the opportunity to use it in anger and can highly recommend it. There have been more sophisticated and complex means to achieve the same goals, and maybe even more elegant toolkits that never quite got adoption, but datasette strikes a beautiful balance between pragmatism, utility, and beauty.
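The csv-to-sqlite step needs nothing beyond the Python standard library (Simon Willison’s csvs-to-sqlite tool does it better, but this is the gist). Once the db exists, `datasette serve data.db` puts it on the Web:

```python
import csv
import sqlite3

def csv_to_sqlite(csv_path, db_path, table):
    """Load a csv file into a new sqlite table; columns are left untyped,
    which sqlite happily tolerates."""
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]
    cols = ", ".join(f'"{c}"' for c in header)
    marks = ", ".join("?" for _ in header)
    con = sqlite3.connect(db_path)
    con.execute(f'CREATE TABLE "{table}" ({cols})')
    con.executemany(f'INSERT INTO "{table}" VALUES ({marks})', data)
    con.commit()
    con.close()
```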
Go figure. Based upon these instructions from Codeholics, I started to convert MPR using the Python powered Pelican. Pelican did a good job of cleanly processing my WordPress export, and extracting my posts and other content. It looked like I was on my way.
Except my posts are written in Markdown and stored that way in the WordPress database. Which Pelican’s import function conveniently escapes when it generates Markdown. Sigh.
It may not be the most elegant way, but I think I can work around this with a custom fork of the Pelican source code. At worst, I’ll be flexing some atrophying Git muscles.
I’ve managed to make it fifty-two years on this here planet. Yay me!
First thing to do is to convert over to serving static HTML.
More to come …
I’ve been tracking the progress of TimescaleDB for a while now. One thing that really stands out is the company’s pragmatic nature. Sure they came up with an innovative way to scale time series data storage, management, and querying. But it seems like they’ve really caught traction by meeting many customers where they’re at: relational DB knowledgeable and okay with using PostgreSQL. In a number of recent podcasts, I haven’t really heard the founders geek out about the underlying techniques but instead focus on how the product, not the technology, addresses customer pain points.
By using Prometheus and TimescaleDB together, you can combine the simplicity of Prometheus with the reliability, power, flexibility, and scalability of TimescaleDB, and pick the approach that makes most sense for the task at hand. In particular, it is because Prometheus and TimescaleDB are so different that they become the perfect match, with each complementing the other. For example, as mentioned earlier, you can use either PromQL or full SQL for your queries, or both.
In particular, TimescaleDB engineers have done some of the heavy lifting in creating a PostgreSQL connector for the Grafana metrics visualization framework. That’s putting skin in the game that customers can see.
Also, “It’s just Postgres,” is a great talking point.
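To see why, compare a typical TimescaleDB query, which is just SQL plus the time_bucket function (the metrics table here is hypothetical):

```sql
-- average a hypothetical metrics table into 5-minute buckets
SELECT time_bucket('5 minutes', ts) AS bucket,
       avg(value) AS avg_value
FROM metrics
GROUP BY bucket
ORDER BY bucket DESC
LIMIT 12;
```

Any Postgres client, ORM, or Grafana datasource that speaks SQL can run that unmodified, which is the whole pitch.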
I like where these guys are going.
Well, that was fun.
Started back to full employment this past Monday. The onboarding has been painless so far, even to the point of my personally designed Uplift standing desk being already assembled when I walked in the door.
Obviously a long way to go, but the commute reduction is feeling like a ridiculous win. Plus the team I joined is really even keel, low drama, and generally quiet in the open space lab we occupy, when there are actually people there. About half the company is remote. Sure, I’d love an office with a door, but sometimes you gotta live with the tradeoffs.
More to come …
The great thing about this post from Cloudflare, “How to drop 10 million packets per second”, is all the fun little low-level networking tools (e.g. conntrack) I learned about.
Dropping packets hitting our servers, as simple as it sounds, can be done on multiple layers. Each technique has its advantages and limitations. In this blog post we’ll review all the techniques we tried thus far.
One of the things about having an ARM-based RPi cluster is a need to serve custom images. Even though there are a number of well run, cloud stored image registries, including Docker Hub and Google Container Registry, it feels like this is a homebrew style service that one should be able to host on their own. Straight Docker Distribution is surprisingly barebones.
Meanwhile, VMWare has open-sourced Harbor, an image registry which seems much more full featured:
Project Harbor is an an open source trusted cloud native registry project that stores, signs, and scans content. Harbor extends the open source Docker Distribution by adding the functionalities usually required by users such as security, identity and management. Harbor supports advanced features such as user management, access control, activity monitoring, and replication between instances. Having a registry closer to the build and run environment can also improve image transfer efficiency.
While there are quite a few of them, the fundamental conceptual elements of Kubernetes are fairly accessible. Nodes? Check. Containers? Check. Pods? Check. Services? Pretty straightforward, although there is some not-oft-mentioned complexity in the underlying network routing across pods.
A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector (see below for why you might want a Service without a selector).
Check out the “Virtual IPs and service proxies,” subhead of the Services docs to see what I mean about networking.
Exposing services to the outside world? Not so much. Alternatively, if you can make sense of this gobbledygook, you’re a better person than I. Service Types are the singular concept where I have yet to see a good, comprehensible tutorial, either written, audio, or video. Something I’ll be on the lookout for.
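For the record, here’s the shape of the thing. A NodePort Service is the bluntest way to poke a hole to the outside world (names and ports below are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: NodePort        # also: ClusterIP (default), LoadBalancer, ExternalName
  selector:
    app: hello          # routes to Pods carrying this label
  ports:
    - port: 80          # the Service's own port inside the cluster
      targetPort: 8080  # the container port on the selected Pods
      nodePort: 30080   # exposed on every node's IP (30000-32767 range)
```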
Since I’m a Digital Ocean customer, this article was quite handy. Getting Started with Software-Defined Networking and Creating a VPN with ZeroTier One by Sam Cater:
ZeroTier One is an open-source application which uses some of the latest developments in SDN to allow users to create secure, manageable networks and treat connected devices as though they’re in the same physical location. ZeroTier provides a web console for network management and endpoint software for the clients. It’s an encrypted Peer-to-Peer technology, meaning that unlike traditional VPN solutions, communications don’t need to pass through a central server or router — messages are sent directly from host to host. As a result it is very efficient and ensures minimal latency. Other benefits include ZeroTier’s simple deployment and configuration process, straightforward maintenance, and that it allows for centralized registration and management of authorized nodes via the Web Console.
By following this tutorial, you will connect a client and server together in a simple point-to-point network. Since Software-Defined Networking doesn’t utilize the traditional client/server design, there is no central VPN server to install and configure; this streamlines deployment of the tool and the addition of any supplementary nodes. Once connectivity is established, you’ll have the opportunity to utilize ZeroTier’s VPN capability by using some clever Linux functionalities to allow traffic to leave your ZeroTier network from your server and instruct a client to send it’s traffic in that direction.
The following was helpful in getting ZeroTier up and running on my home k8s cluster. Accessing your Raspberry Pi securely from the Internet using ZeroTier by Kelvin Zhang:
When you need to access your Raspberry Pi from home, exposing your public IP/using dynamic DNS and opening ports can expose your Pi to potential security threats, especially if you’re using password-based authentication or running services behind these ports.
The well-known method of doing it is to use a VPN. Whereas OpenVPN is a common solution, ZeroTier heavily outshines it. OpenVPN can be cumbersome to set up and maintain (especially if things go wrong), and provisioning new devices can be a pain with having to generate certificates. In comparison, ZeroTier can be installed with a single bash script, and your virtual network can be managed with their web panel which enables you to provision devices, assign static IPs and more.
Give ’em a read if this stuff interests you.
Last November, I threatened to build a Kubernetes cluster out of Raspberry Pi 3s. Well I actually did it starting during the December holidays and finishing up in January. Here’s a picture of it:
The one warning, that’s not obvious from the construction guide, is that the Raspberry Pi ARM processor architecture typically doesn’t have popular Docker images publicly available. This makes it somewhat challenging to do anything further usefully non-trivial. All-in-all, while not cheap, it was still a fun project and handy to have a k8s lab at home to play with.
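The workaround, as far as I can tell, is Docker Hub’s per-architecture namespaces; a Pi-friendly Dockerfile mostly means swapping the base image (the app itself is a placeholder):

```dockerfile
# arm32v7 images run on a Raspberry Pi 3; plain amd64 images will not
FROM arm32v7/python:3.7-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
```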
I swear I’ve written about ZeroTier somewhere else before, but apparently not on this blog. The company and technology first came across my radar in a PacketPushers podcast episode that was a really deep technical dive. From the current front page of the website:
ZeroTier delivers the capabilities of VPNs, SDN, and SD-WAN with a single system. Manage all your connected resources across both local and wide area networks as if the whole world is a single data center.
Behind the scenes, ZeroTier uses software defined networking and cryptographic techniques to build secure, planetary-scale, virtual Ethernet networks. For the administrator, installation and setup is a relatively painless experience as these things go. Meanwhile, devices in a ZeroTier network can interconnect as if they were on the same local-area network (LAN) wherever they are. ZeroTier endpoints conveniently figure out ways to punch through firewalls and other network obstructions. Sort of like VPNs with 90% less hassle and 90% more fun from a networking perspective.
Recently I setup ZeroTier on my personal laptop and a home Raspberry Pi 3 cluster. The cluster is behind the firewall of a wireless router and my service provider, but it’s been pretty seamless to remotely SSH into the cluster from just about anywhere.
The only potential downer, if you’re really into this stuff, is that the free service relies on a kernel of centralized infrastructure maintained by the ZeroTier company. Using the service thus places trust in ZeroTier’s security, infrastructure capabilities, technical competence, etc. etc. A not negligible concern to an entity’s business processes. This is counterbalanced by an open source codebase and a commercial option for on-prem deployment if full accountability is needed.
For me though, ZeroTier has worked better than expected and there’s some interesting underlying tech below the surface.
Well for once, I do actually seem to be maintaining a book reading habit.
Four more completed to add to the list:
- “The Last Good Man,” Linda Nagata
- “The Lean Startup,” Eric Ries
- “The Peripheral,” William Gibson
- “Lexicon,” Max Barry
A recurring theme in my reading of William Gibson novels is their improvement upon reread. I didn’t really cotton to “Zero History” until after a few visits. As in that case, the initial review was muddled. This time around, the subtle breakneck pace of the narrative (events only occurred over the better part of a week) and the general inhumane nature of The Jackpot and The Klept were more resonant. “The Peripheral” also got a new sheen in light of political events in the US that happened after its publication.
“The Lean Startup” has achieved a bit of a cult like status, but it feels like a useful framework for guiding a startup. A qualifier on this statement since I’ve only notionally been involved with startups and never really in the breach. The innovation accounting methods didn’t feel all that actionable though.
I sort of bought “The Last Good Man” on a whim. It was a solid purchase and an enjoyable read. Four or five different narrative perspectives popped up, which was probably two or three too many for me, and there were a lot of named characters to track. The background theme of autonomous warfare was compelling. Loved the character of True Brighton.
A friend sent me a copy of “Lexicon” a while ago, I gave it a start, didn’t catch fire, and then got sucked into it on a cross-country flight. The book lives up to its reviews and caught me by surprise. My only knock is a villain that’s a bit too close to infallible, but otherwise just a great fantastical thriller spiced with interesting social commentary. And a great love story to boot.
Link parkin’ since this is so obscure, yet useful. https://calendar.google.com/calendar/iphoneselect
Sync other calendars
- On your computer, visit the Calendar sync page.
- Check or uncheck the names of any calendars.
- In the bottom right corner, click Save.
- When you’re done, refresh your calendar.
This is how I got shared calendars that I added to my online Google Calendar, which showed up under “Other Calendars”, to become visible in the Google Calendar app on iOS. Once visible, then you can check off the shared calendar to have it become part of the overall calendar view. Handy for incorporating app specific calendars.
Survived yet another year. The opportunity to be gainfully underemployed is turning out to be a great birthday present.
Definitely feeling a change in the air.
Link parkin’: Awesome-Kubernetes
A curated list for awesome kubernetes sources Inspired by @sindresorhus’ awesome
Looks perilously overwhelming, but probably good to have in the locker.
So great, you’ve got your Raspberry Pi Kubernetes cluster up and running. Now what? Luckily, the k8s ecosystem seems to be supporting three different approaches to low friction, “serverless” computing-style deployment of applications. It’s nice to have choice, but sometimes a little advice helps.
This whitepaper will explore how we can take the very useful design parameters and service orchestration features of K8s and marry them with serverless frameworks and Functions as a Service (FaaS). In particular, we will hone in on the features and functionalities, operational performance and efficiencies of three serverless frameworks that have been architected on a K8s structure: (i) Fission; (ii) OpenFaaS; and (iii) Kubeless.
The nice thing about Hasan’s blog post is that it gets into the deployment details of each toolkit. This is good to understand in addition to the developer experience that each platform provides. Clear contrasts can be seen, and now I have a better understanding of where the pain points might emerge.
There have been many times in recent memory where I’ve said, “I’m going to read more books.” Subsequently, that’s been followed by abject failure. Given my most recent such declaration, abject failure would be a reasonable prediction.
So far though, three books have been completed in my self-defined time off:
- “All the Birds in the Sky,” Charlie Jane Anders
- “You Are Now Less Dumb,” David McRaney
- “Deep Work: Rules for Focused Success in a Distracted World,” Cal Newport
Quick thoughts on each book.
“All the Birds in the Sky,” wasn’t quite what I expected. It was a little too fantastic for what I wanted at the moment. Enjoyed a lot of the premise, characters, and writing, but I never really connected.
“You Are Now Less Dumb,” is a worthy successor to “You Are Not So Smart,” but the topics are a bit more complex and don’t fit as well into the originator’s format, which relied on tidy, bite sized chunks discussing cognitive biases. Now we get bigger, less tidy hunks. Definitely worth reading but not quite as satisfying as the first book.
I had been subscribed to Cal Newport’s blog and was very sympathetic to his principles without being a practitioner. With time off, I’m now actively pursuing the potential to put his teachings in play. You can get most of the concepts from his blog, but the book provides excellent organization and refinement (not to mention some change in his pocket). Writing more about deep work, “Deep Work: …,” and how to apply it will make for good blog fodder.
I finally got a chance to register for and take Paco Nathan’s online course on NLP for O’Reilly, “Get Started with NLP in Python”. Totally enjoyable, although it was very much “intro” material. Paco has this down cold. However, I found the use of Jupyter Notebooks and JupyterHub to be truly amazing. As far as I can tell, 100+ participants got a tiny, ephemeral, on-demand compute server combined with well structured computational content.
And. Stuff. Just. Worked.
I didn’t see any complaints in the hosted chat or Slack channels about setup. The amount of install and deployment toil eliminated must have been phenomenal. This might be the way of the future.
However, a Datanami interview with Neha Narkhede pretty much vindicates my concerns:
But like most things in IT, the devil is in the details. “It’s actually not that easy,” says Neha Narkhede, the CTO and co-founder of Confluent, the commercial venture behind open source Apache Kafka. “Kubernetes is amazing, but it was designed for stateless applications.”
Like all stateful applications, Kafka makes certain assumptions about the infrastructure that it’s running on. “If machines come and go, you have to maintain the logical context of what a node is,” Narkhede tells Datanami. “As the underlying hardware changes, you need to make sure that that node concept stays the same. In addition to that, there’s a bunch of networking-layer details that need to be right.”
Two big gotchas on this announcement. First, it’s not shipping yet and even early availability won’t happen until later this summer. Second, the Confluent Operator is going into the closed source, proprietary, revenue generating bucket of the Confluent business. I can totally understand this decision, but it’ll probably be a bit of a bummer for those without an enterprise grade checkbook.
Wonder if either a really good open source deployment of Kafka on k8s emerges or this leaves a window open for other streaming platforms (Pulsar, NATS Streaming) to be more k8s friendly and garner wide adoption.
So it turns out that Feedbin, the RSS feed aggregator service that I pay for, has long supported receiving email newsletters and treating the source as a feed:
You can now receive email newsletters in Feedbin.
… To use this feature, go to the settings page and find your secret Feedbin email address. Use this email address whenever you sign up for an email newsletter. Anything sent to it will show up as a feed in Feedbin, grouped by sender.
Reading email in an email app feels like work to me. However, there’s a certain class of email that I want to enjoy reading, and Feedbin is where I go when I want to read for pleasure.
At first, I scoffed at this notion. Recently I checked my “Newsletters” label in Gmail and was gobsmacked by how many old issues, across a number of newsletters, had piled up. What the heck, let’s give this Feedbin hack a whirl.
So far it’s actually way better than I thought. The reading is improved because Gmail cuts off longer messages after a certain number of characters, which modern newsletter emails oft run afoul of. Then you have to follow a link to read the rest which is a pain in the ass. Also, the messages show up in a place that I’m much more prone to check for reading material on a regular basis. Finally, the autogrouping by source works better than a label in my e-mail reader. Sure I could set up Gmail to do that, but hey, work.
The only gotcha is that for new signups, you have to go into the reader and click the verification link. No big deal, but sorta weird.
Herewith are a few newsletters that I like and have thrown into this scheme:
I haven’t dived too far down the Kubernetes rabbit hole yet, but one thing I was trying to tinker with was deploying Kafka within a k8s cluster. The results were … unsatisfying. The folks at Cockroach Labs have observed similar issues and are offering advice on how to deal with stateful k8s apps.
In short: managing state in Kubernetes is difficult because the system’s dynamism is too chaotic for most databases to handle––especially SQL databases that offer strong consistency.
I’ll note that for Kafka, the odd peccadillos of ZooKeeper configuration make the process “anti-cloud native”. And how to expose long-lived, stateful connections, also seems to work against the Kubernetes grain.
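Concretely, a broker’s server.properties has to pin down stable network identities, which is exactly what vanilla Kubernetes doesn’t promise. A sketch, assuming a StatefulSet with headless services (host names are placeholders):

```
broker.id=0
zookeeper.connect=zk-0.zk-hs:2181,zk-1.zk-hs:2181,zk-2.zk-hs:2181
listeners=PLAINTEXT://0.0.0.0:9092
# clients connect back to this name, so it must survive pod rescheduling
advertised.listeners=PLAINTEXT://kafka-0.kafka-hs:9092
```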
I’m sure someone, somewhere has wrangled through all of these problems but there does seem to be a lot of toil here.