Here in the states we celebrate a Thursday holiday with food and entertainment gluttony, followed by a Friday of manufactured physical retail (traditionally) consumer gluttony, followed the next Monday with manufactured online consumer gluttony.
And then we try to cleanse the palate with “Giving Tuesday”.
I’ve pretty much trended toward flipping the script on those days, although I do love a bit of food gluttony. This year I managed to purchase nothing on Black Friday and Cyber Monday, nor even over the weekend, which used to be my consumer gluttony escape hatch. On Giving Tuesday I batched up a whole bunch of my annual giving, which usually gets done the last week of December.
Not going to go into all of my charitable contributions, but just wanted to plug supporting membership in the Python Software Foundation. I’ve gotten a ton out of Python over my career and I really enjoy the programming language, so it’s only right I give back a little. I have no idea what benefits accrue from supporting membership but I’m doing my bit to publicize their availability and encourage other Pythonistas to join up.
Some folks at work were waxing ecstatic about how a Pi-hole made such a huge difference in ad blocking. I thought installation would be a little more effort than it was worth for my time. Then I found a small window of free time this weekend and thought “what the heck, let’s give it a shot.”
Commonly associated with Raspberry Pi devices, Pi-hole is actually quite easy to deploy on an Ubuntu device. After less than an hour’s worth of work, I had the whole stack up and running. And boy does it make a difference! The screen cap below highlights the amount of ad blockage for just a couple of hours and a few devices.
Sorry I waited so long!
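For anyone else on the fence, deployment really is minimal. On Ubuntu, the official one-step installer from the Pi-hole docs did the job; inspect the script first if piping curl to bash makes you twitchy:

curl -sSL https://install.pi-hole.net | bash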
Back in October I had this weird uncharacteristic media buying splurge. Physical or digital it didn’t matter. Novels, novellas, short story collections, graphic novels, pamphlets, didn’t matter. Sports, history, culture, politics, media, tech, didn’t matter. Training sites, news apps, digital long form magazines, didn’t matter.
Some of it was authors I like releasing new books that seem really intriguing. Some of it was recommendations from authors I really like. Some are from authors I know personally. Some of it was second order linkage from references. Some of it was deal sniping and some of it was remainder bin diving. Some of it was stuff I want to revisit. Some of it was material I always thought I should visit.
At this point, I have so much material I’d be totally happy spending every waking hour for the rest of the calendar year reading as much as possible.
Thankfully I avoided any Black Friday urges and didn’t purchase anything. Looking forward to Giving Tuesday.
kerfuffle | kərˈfəf(ə)l |
noun [in singular] British informal
a commotion or fuss, especially one caused by conflicting views: there was a kerfuffle over the chairmanship.
I abdicated Twitter well over three years ago, but with the recent events regarding the company changing ownership I decided to check in and see how long I’d actually been on the service:
According to Wikipedia, that “Joined February 2007” means my account was established within the service’s first year, about 7 months after its initial public launch 😮.
Abdicating was driven by the pursuit of “deep work” and avoidance of the cognitive and emotional drain the site represents. I know there are some vibrant, useful community pockets within Twitter that could potentially interest me. Good on them, but I haven’t missed it for a moment. There is a somewhat juvenile impulse to jump back on for a one-tweet dunk on the order of “Look at me! I bailed before all the shit really hit the fan. What are y’all still doing here?!” but I like to think I’ve gotten more mature with age.
There was about 30 seconds where I thought it might be cool to join the fediverse and see where this all might eventually land, but I quickly came to my senses. If I didn’t need what was on offer before why would I need it now?
This adventure was prompted by an article entitled “What happens to sports media if Twitter dies?”. It will likely be my last commentary about Twitter for quite some time.
TIL Twitter Gradient Accounts
Other people on Twitter had noticed them as well and referred to them (usually with irritation) as “gradient accounts,” because many of their profile pictures are not of human faces or anything else, just color gradients. Gradient accounts have usernames that sound like AIM usernames: @f41ryluvrr, @urf41ryg1rl, @moonlouvrr, @newmoonbaby2, @glitteryxhearts. Through their tweets, they identify as overthinkers and dreamers and hot people, and they often profess melancholy and romantic longing. The romantic longing sometimes clashes with casual misanthropy; the all-lowercase disclosures of trauma and malaise are mixed with playful Gossip Girl memes. Their content is more popular than I can possibly explain, and they know it.
Was totally media nerd sniped by this, with thoughts of the Marly Krushkova arc from Count Zero running through my head. Somehow real people participating in a mass consensual hallucination makes it even more Gibsonian.
As in technology for hip-hop DJs. Scratch Cyborgs: The Hip-Hop DJ as Technology may be the best thing I’ve ever run across on Hacker News.
Hip-hop DJ culture provides a rich site for exploring how culture and industry can converge and collaborate, as well as how they need each other to move forward.
The references are just a rabbit hole for this technologist, hip-hop lover, and former amateur club DJ.
And as an MIT alum, how did I not know about the MIT Press Reader? 😧
Two great tastes that taste great together. Jamie Zawinski’s XScreensaver for macOS and Jared Tarbell’s substrate generative art algorithm. Stunningly beautiful on a 27” iMac.
Seeing as how I’ve actually used XScreensaver on 1990s vintage UNIX workstations, and also mostly reimplemented substrate for my own ends, I’m embarrassed to learn substrate has been in XScreensaver since 2004. I’ll blame it on being distracted by the XMatrix digital rain plugin. That’s the one I default to using via XScreensaver. Love the little Matrix Reloaded “Easter egg”. The randomizer feature is looking better though, now that you can make useful subselections of the available screensavers.
It’s been a minute, or two, since we’ve done this. From May to August, 15 titles were closed out. That’ll take this year’s total to 25. 40 finished is conceivable.
Lots to get to. Let’s get stuck in.
2022-10-04 Now with even more commentary.
Like many of his other novels, William Gibson’s The Peripheral grew on me after first reading. In late October, Amazon Prime will be airing a TV series adaptation. The trailer looks good, but then again the trailer always looks good.
Yesterday I busted my 434 day music listening streak, dangit! 😢
I hung my listening habit off of my exercise habit, but had a little change up yesterday. Went for a walk in the morning instead of the evening and rushed out the door to meet the daily schedule. Then never got around to picking out a tracklist and firing it up.
Ah well. We go again.
Said the Manchester City fan, heh.
Link parkin’: Spotify Million Playlist Dataset Challenge
The Spotify Million Playlist Dataset Challenge consists of a dataset and evaluation to enable research in music recommendations. It is a continuation of the RecSys Challenge 2018, which ran from January to July 2018. The dataset contains 1,000,000 playlists, including playlist titles and track titles, created by users on the Spotify platform between January 2010 and October 2017. The evaluation task is automatic playlist continuation: given a seed playlist title and/or initial set of tracks in a playlist, to predict the subsequent tracks in that playlist. This is an open-ended challenge intended to encourage research in music recommendations, and no prizes will be awarded (other than bragging rights).
Mmmmm… data. Unfortunately, terms of service limit usage to participation in the challenge contest. So no data munging and redistributing.
TIL: Podcast Index
The Podcast Index is here to preserve, protect and extend the open, independent podcasting ecosystem.
We do this by enabling developers to have access to an open, categorized index that will always be available for free, for any use.
Has an API. Interestingly, the entire podcast database (guessing the full content of enclosures isn’t included) can be downloaded. Mmmmmm… data.
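A quick sketch of hitting the API from Python, based on my read of their docs: each request carries your key, a unix timestamp, and an Authorization header that’s the SHA-1 of key + secret + timestamp. The search term and result handling are just for illustration:

import hashlib
import time

import requests

API_KEY = "XXXXXXX"     # issued at api.podcastindex.org
API_SECRET = "XXXXXXX"

epoch = str(int(time.time()))
headers = {
    "User-Agent": "blog-example/0.1",
    "X-Auth-Key": API_KEY,
    "X-Auth-Date": epoch,
    # Authorization is the SHA-1 hex digest of key + secret + date
    "Authorization": hashlib.sha1((API_KEY + API_SECRET + epoch).encode()).hexdigest(),
}
resp = requests.get(
    "https://api.podcastindex.org/api/1.0/search/byterm",
    params={"q": "python"},
    headers=headers,
)
for feed in resp.json().get("feeds", []):
    print(feed["title"])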
The team behind the Podcast Index is leading a Podcasting 2.0 “movement” to add some features to RSS to improve the podcast experience for publishers and listeners.
Podcasting 2.0 is a set of forward looking ideas combined with the technology to realize them. It’s a vision for what the podcast listener experience can and should be. That experience has stagnated for over a decade, with almost all of the improvements coming in isolated sections of the ecosystem. There hasn’t been a single, unified vision from the podcasting community acting together with one voice. So, we’ve ended up with fragments of innovation across the podcasting landscape with no central driving goal in mind. Podcasting 2.0 is the expression of what that goal could be.
Bonus project: Sucky pulls down an RSS feed and all of its enclosures.
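I haven’t peeked at Sucky’s actual source, but the core idea is simple enough that a minimal sketch fits in a dozen lines of Python (the feed URL is hypothetical):

from pathlib import Path
from urllib.request import urlretrieve

import feedparser

feed = feedparser.parse("https://example.com/podcast.xml")  # hypothetical feed
dest = Path("enclosures")
dest.mkdir(exist_ok=True)
for entry in feed.entries:
    for enc in entry.get("enclosures", []):
        # Name the local file after the last path segment of the enclosure URL
        urlretrieve(enc.href, dest / enc.href.rsplit("/", 1)[-1])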
Hat tip to a great Changelog podcast episode, natch, discussing RSS with Ben Ubois, the creator of one of my favorite services, feedbin.
My default starting point for building a new piece of software is to create a command line interface (CLI) app. I’ve even got my own personalized Python cookiecutter template to generate them quickly. Independent of some personal preferences that this supports, like options for logging, thanks to the click argument parsing toolkit, the autogenerated tool immediately integrates well with the wider UNIX ecosystem.
Buuuut, I’ve never gotten a good handle on how to integrate this approach with the modern Web API OAuth+API token methodology. OAuth gives me a headache 😆.
I’m still digesting this article from the folks at notia.ai, entitled “Building an authenticated Python CLI,” but on first read it seems really well done.
When building out the Notia client, we found a real lack of resources around building a persistently authenticated Python library.
To address this, we are going to be building an interactive, authenticated Python CLI that uses the Twitter API to fetch the top Machine Learning tweets of the week! You can see the final result in the video demo above - or you can skip to the final code here.
Building this CLI will let us explore concepts like authenticating a local device between uses, accepting CLI arguments with Click, and displaying our data interactively with Rich.
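The linchpin is persisting credentials locally between invocations so the CLI only asks once. This isn’t the article’s code, just a minimal sketch of that idea using click; the config path and prompt are my own assumptions:

import json
from pathlib import Path

import click

TOKEN_PATH = Path.home() / ".config" / "mycli" / "token.json"  # hypothetical location

def load_token() -> str:
    """Return a cached API token, prompting and saving it on first run."""
    if TOKEN_PATH.exists():
        return json.loads(TOKEN_PATH.read_text())["token"]
    token = click.prompt("Paste your API token", hide_input=True)
    TOKEN_PATH.parent.mkdir(parents=True, exist_ok=True)
    TOKEN_PATH.write_text(json.dumps({"token": token}))
    return token

@click.command()
def main():
    token = load_token()
    click.echo("Authenticated and ready to call the API.")

if __name__ == "__main__":
    main()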
I’ll definitely be taking their advice in building some future X-to-sqlite applications.
Due credit to Simon Willison for providing the upstream basis of my cookiecutter template.
Chanced across a completely new word to me: stfnal
stfnal (comparative more stfnal, superlative most stfnal)
Of or pertaining to scientifiction or science fiction.
Via a Cory Doctorow review of A Half-Built Garden, which I am running out and buying tomorrow in hardcover.
xonsh has been growing on me as an interactive shell. One area I haven’t delved into much is the history capabilities. I guess I shouldn’t be too surprised that sqlite makes an appearance:
Xonsh has a second built-in history backend powered by sqlite (other than the JSON version mentioned all above in this tutorial). It shares the same functionality as the JSON version in most ways, except it currently doesn’t support the history diff action and does not store the output of commands, as the json-backend does. E.g. xonsh.history[-1].out will always be None.
The Sqlite history backend can provide a speed advantage in loading history into a just-started xonsh session. The JSON history backend may need to read potentially thousands of json files and the sqlite backend only reads one. Note that this does not affect startup time, but the amount of time before all history is available for searching.
Combine with sqlite’s full text search capabilities for even more entertainment.
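Flipping the switch is one line in ~/.xonshrc:

$XONSH_HISTORY_BACKEND = 'sqlite'

From there the history database is plain sqlite, ripe for an FTS experiment. The file location and the table/column names below are educated guesses, so verify against your own install before copying:

import sqlite3
from pathlib import Path

# Default location under $XONSH_DATA_DIR; adjust for your setup
db = Path.home() / ".local" / "share" / "xonsh" / "xonsh-history.sqlite"
con = sqlite3.connect(db)
# Build a throwaway FTS5 index over the command text and search it
con.execute("CREATE VIRTUAL TABLE IF NOT EXISTS hist_fts USING fts5(inp)")
con.execute("INSERT INTO hist_fts SELECT inp FROM xonsh_history")
for (cmd,) in con.execute("SELECT inp FROM hist_fts WHERE hist_fts MATCH 'ffmpeg'"):
    print(cmd)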
Data Machina is a weekly link newsletter on AI/ML topics broadly construed. It’s distinctive in the variety of areas it touches on (e.g. specific language sections for Python, Scala, Lisp/Clojure, R, plus segments on datasets and distributed systems), along with the sheer number of links. I was enjoying the content way back when it started out on TinyLetter, but it went on hiatus and thence behind a paywall, so it effectively disappeared from my radar.
Why subscribe to Data Machina?
Data Machina brings you a highly curated selection of the best in Machine Learning, AI, Data Science, and Data Engineering every week, 52 weeks per year.
Loaded with useful, unique, and interesting content, Data Machina is read by thousands of AI/ML professionals and researchers around the world.
Data Machina is published in a minimalistic, easy-to-read format, with pure, simple text, and structured in clearly marked sections so you can scan them quickly without being disturbed by ads, banners, icons, images or other annoying stuff.
Now it’s back in a free version and looking as good as ever. Glad to make your acquaintance again!
Also, yet another plug for feedreading email newsletters. The latest versions of Data Machina quietly popped up in my feeds unannounced. No muss, no fuss. Just back to reading the high quality linkfest in its best habitat for me.
I was hoping to fool around with Mopidy as an audio playback engine because it’s written in Python and supports the MPD protocol according to the documentation. When I went to install it using homebrew on my MacBook Air, the latest version had problems with its plugins, whereupon I discovered there was already an outstanding issue on GitHub. Unfortunately, a solution didn’t look promising, but at least I chimed in with my interest.

So off I go, working on other things and forgetting about the problem. Lo and behold, another user reports the real source of the issue and a convenient fix. With an export GST_PLUGIN_PATH=/opt/homebrew/lib/gstreamer-1.0/ my Mopidy server now works perfectly and can play back audio on my MacBook. Score one for just registering interest on GitHub!!
Mystery solved. Onwards to implementing my own database driven, dynamically created playlists.
TIL Steampipe. From the intro announcement:
Steampipe, a new open source project from Turbot, enables cloud pros (e.g. software developers, operations engineers and security teams) to query their favorite cloud services with SQL. It has quickly become one of our favorite tools in-house and we hope it finds a way into your tool box as well.
The heart of Steampipe is an intuitive command line interface (CLI) that solves the challenges encountered when asking questions of cloud resources and services. Traditional tools and custom scripts that provide visibility into these services are cumbersome, inconsistent across providers and painful to maintain. Steampipe provides a consistent, explorable and interactive approach across IaaS, PaaS and SaaS services.
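As a taste, after installing a provider plugin (steampipe plugin install aws), inventorying S3 buckets is just SQL; the column choice here follows their published table docs:

steampipe query "select name, region from aws_s3_bucket"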
Via an O’Reilly Radar Post by Jon Udell. Glad to see him back!!
TIL pgloader
pgloader has two modes of operation. It can either load data from files, such as CSV or Fixed-File Format; or migrate a whole database to PostgreSQL.
pgloader supports several RDBMS solutions as a migration source, and fetches information from the catalog tables over a connection to then create an equivalent schema in PostgreSQL. This means that you can migrate to PostgreSQL in a single command-line!
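The single command-line really is about this short. The connection strings below are placeholders, but the shape matches the pgloader docs:

pgloader mysql://user@localhost/source_db pgsql:///target_db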
Via a Twilio blog post linked from the PyCoders Weekly newsletter, Issue 533
Actually, mutagen is probably the right tool for the MP4 metadata job at hand, especially with the EasyMP4 class available.
Mutagen is a Python module to handle audio metadata. It supports ASF, FLAC, MP4, Monkey’s Audio, MP3, Musepack, Ogg Opus, Ogg FLAC, Ogg Speex, Ogg Theora, Ogg Vorbis, True Audio, WavPack, OptimFROG, and AIFF audio files. All versions of ID3v2 are supported, and all standard ID3v2.4 frames are parsed. It can read Xing headers to accurately calculate the bitrate and length of MP3s. ID3 and APEv2 tags can be edited regardless of audio format. It can also manipulate Ogg streams on an individual packet/page level.
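EasyMP4 papers over the raw MP4 atom names with friendly keys. A minimal sketch of tagging a file, with the filename and tag values made up:

from mutagen.easymp4 import EasyMP4

audio = EasyMP4("track.m4a")       # hypothetical file
audio["title"] = "Some Title"
audio["artist"] = "Some Artist"
audio["album"] = "Some Album"
audio.save()
print(audio.pprint())              # eyeball the result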
Link parkin’: mp4v2
A C/C++ library to create, modify and read MP4 files
This is the new MP4v2 project, a fork of the abandoned MP4v2 library project now archived at Google Code.
Seems a little more convenient, vice ffmpeg, for working with .m4a files as converted for or ripped by Apple’s Music.app. While primarily a library, there are a few cli tools such as mp4info and mp4tags.
Per usual, Simon Willison has pushed out yet another impressive SQLite-oriented tool: sqlite-comprehend
I built a new tool this week: sqlite-comprehend, which passes text from a SQLite database through the AWS Comprehend entity extraction service and stores the returned entities.
My attention was caught by multiple aspects:
- The usage of many pieces of his toolkit but especially db-to-sqlite to grab data out of PostgreSQL, since I have some interesting data in guess what … PostgreSQL
- Outsourcing entity extraction to AWS Comprehend
- The application of SQLite’s full text search capabilities
- And of course Simon’s way of writing this all up, which I aspire to emulate. I’m getting there with potential content.
Bottom line, I think it’s eminently possible to take my Discogs tables and Fabric views, export them into a single SQLite / Datasette instance, and have an easily searchable Discogs artifact that’s simple to distribute as one SQLite file on a CDN.
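The export step would be db-to-sqlite’s documented one-liner; the connection string and output name here are placeholders for my setup:

db-to-sqlite "postgresql://localhost/discogs" discogs.db --all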
Don’t know if anyone else would use it, but it’s an itch I’d like to scratch.
TIL about Bytewax
Bytewax is an open source Python framework for building highly scalable dataflows in a streaming or batch context.
This week I learned about PostgreSQL’s conditional expressions in general and the COALESCE expression in particular. A big part of the grunginess of my Discogs Postgres views is dealing with the data’s usage of alternative name variations, or anvs, in the fabric_track_artists view, which are quite often NULL. This propagates into a crappy ad hoc value for the track_artists via abuse of concat_ws. I’ve got a pretty good feeling that can be handled more elegantly with a COALESCE.
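Untested against my views yet, but the shape of the fix looks something like this; anv comes straight from the data, while the fallback column name is a guess at my own schema:

-- Prefer the artist name variation when present, fall back to the main name
SELECT COALESCE(NULLIF(anv, ''), artist_name) AS track_artist
FROM fabric_track_artists;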
A couple of other things that need investigating:
- The regexps for fabric vs fabriclive should be collapsed into one
- Rename the fabric_live column to a more general fabric_series and compute it from the title column
column - Reexamine the UNION statements to see if they can be handled by a more appropriate join
Lots of redundancy that can be cleaned up.
Just for giggles, following up on my pondering regarding the SQLite schema within NetNewsWire, I poked around in the DB and pulled the schemas:
Thinking out loud. With a nice library around the Feedbin API it wouldn’t be too hard to grab the data and stuff it into SQLite. Alternatively, a Feedbin account could be registered with NetNewsWire and then the underlying SQLite DB inspected.
The former seems more elegant while the latter is radically pragmatic.
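To make the elegant path concrete, here’s a rough sketch against Feedbin’s documented v2 entries endpoint, paired with sqlite-utils’ Python API; the credentials and the replace-on-rerun choice are mine:

import requests
import sqlite_utils

resp = requests.get(
    "https://api.feedbin.com/v2/entries.json",
    auth=("user@example.com", "password"),  # hypothetical credentials
)
resp.raise_for_status()
db = sqlite_utils.Database("feedbin.db")
# Entries carry a stable id, so reruns can upsert instead of duplicating
db["entries"].insert_all(resp.json(), pk="id", replace=True)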
If for nothing else, poking around in the NetNewsWire SQLite DB probably illustrates a highly performant SQL data model and schemas for RSS data and feed management. Or even better, just read the source code.
Link parkin’, since I have a largish collection of largish csv files I’m interested in processing: zsv
Preliminary performance results compare favorably vs other CSV utilities (xsv, tsv-utils, csvkit, mlr (miller) etc). Below were results on a pre-M1 macOS MBA; on most platforms zsvlib was 2x faster, though in some cases the advantage was smaller (e.g. 15-25%)
Link parkin’. A couple of publicly available modules for taking personal data and stuffing it into an SQLite database, congruent with Simon Willison’s Dogsheep initiative
Dogsheep is a collection of tools for personal analytics using SQLite and Datasette.
I’ve been working on automating metadata additions to my Fabric collection using information from Discogs. I was poking around for cli ways, especially via ffmpeg, to add the info to a music file and chanced across a really useful gist.
A quick guide on how to read/write/modify ID3 metadata tags for audio / media files using ffmpeg.
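Distilled down, writing tags without re-encoding looks like this; the filenames and tag values are illustrative:

ffmpeg -i input.m4a -metadata title="Fabric 01" -metadata artist="Craig Richards" -codec copy output.m4a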
At the bottom of the gist is a mention of ffprobe. Much more appropriate for the task at hand, especially since it can generate output in JSON.
ffprobe gathers information from multimedia streams and prints it in human- and machine-readable fashion.
… ffprobe output is designed to be easily parsable by a textual filter, and consists of one or more sections of a form defined by the selected writer, which is specified by the print_format option.
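The standard JSON-emitting incantation, per the ffprobe docs, ready for piping into jq or json.loads:

ffprobe -v quiet -print_format json -show_format -show_streams input.m4a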
Although to be fair, ffprobe doesn’t seem to be able to write metadata to a file.
Bonus! ffprobe tips
Don’t know why, but I was reminded that Simon Willison had a neat utility to process data exported from Apple HealthKit, specifically from an Apple Watch: healthkit-to-sqlite
Convert an Apple Healthkit export zip to a SQLite database
Includes export instructions from your watch. Worked surprisingly well.
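Usage is the documented one-liner, pointed at the zip the Health app exports:

healthkit-to-sqlite export.zip healthkit.db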
The neat part is that you can then use Datasette and the Datasette cluster map plugin to visualize outdoor workouts on a map. And of course there’s always good old, exploratory data analysis using sqlite and Pandas.
I was dorking around last night and came up with the following in about 15 minutes of work in a xonsh command line session.
from pathlib import Path

# Batch convert every WAV in the current directory to 128k AAC
for fname in !(ls *.wav):
    p = Path(fname.strip())
    ffmpeg -i @(p) -c:a libfdk_aac -b:a 128k @(f"{p.stem}.m4a")
🎉 💥 💥 🎉
At the day job, I got sucked into trying to understand two PostgreSQL data types, timestamp and timestamptz. Thought I knew what I was doing, then read the docs and came away even more confused. Luckily, the folks at Cybertec had a pretty recent blog post on just this topic: Time Zone Management in PostgreSQL.
Next to character encoding, time zones are among the least-loved topics in computing. In addition, PostgreSQL’s implementation of timestamp with time zone is somewhat surprising. So I thought it might be worth to write up an introduction to time zone management and recommendations for its practical use.
The punchline …
Even though it is easy to get confused with time zones, you can steer clear of most problems if you use timestamp with timezone everywhere, stick with IANA time zone names and make sure to set the TimeZone parameter to the time zone on the client side. Then PostgreSQL will do all the heavy lifting for you.
But really, read the whole thing. There’s a lot of nuance and the proper handling of timezones in Postgres is definitely not obvious. I may actually circle back and illustrate what dragged me into this tarpit and how I currently understand things.
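A two-line taste of why the distinction bites, in the shape of a psql session (the session zone is arbitrary):

SET TimeZone = 'America/Chicago';
SELECT '2022-06-01 12:00:00+00'::timestamptz;  -- 2022-06-01 07:00:00-05, shifted into the session zone
SELECT '2022-06-01 12:00:00+00'::timestamp;    -- 2022-06-01 12:00:00, the offset silently dropped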
I think I discovered their blog during my last writing hiatus, so time to give ’em some love. The fine folks at fly.io have been doing excellent technical blogging for a few years now. Thomas Ptacek’s stuff, like this one on process isolation, is the pinnacle but there’s all around good material from many quarters. For example, here are some quality posts on Firecracker
It’s a small start, but my musicapp command line utility, for working with XML out of Apple’s Music.app, can be used in a one liner with sqlite-utils:
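The flavor is something like the following, where the musicapp subcommand is a stand-in for whatever emits JSON; the sqlite-utils half is real, with - reading from stdin and --nl taking newline-delimited JSON:

musicapp tracks Library.xml | sqlite-utils insert music.db tracks - --nl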
Link parkin’: pg_timetable
pg_timetable is an advanced job scheduler for PostgreSQL, offering many advantages over traditional schedulers such as cron and others. It is completely database driven and provides a couple of advanced concepts.
In any serious development effort, I’m likely to have PostgreSQL in the stack so might as well take advantage of it for scheduled tasks too. One less piece of kit to worry about.
Plus: Pavlo Golub’s series of blog posts on pg_timetable. Pavlo is the creator of pg_timetable and part of CYBERTEC’s PostgreSQL Professional Services.
Link parkin’: sqlite-utils.
CLI tool and Python utility functions for manipulating SQLite databases
This library and command-line utility helps create SQLite databases from an existing collection of data.
Can’t believe I haven’t stashed Simon Willison’s insanely useful toolkit on this here blog. Makes it dead easy to do stuff with sqlite databases from the command line and from within Python. For example:
If you have data as JSON, you can use sqlite-utils insert tablename to insert it into a database. The table will be created with the correct (automatically detected) columns if it does not already exist.
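Concretely, with made-up data:

echo '[{"id": 1, "title": "Fabric 01"}]' | sqlite-utils insert music.db albums -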
PostgreSQL is totally awesome. But sometimes it’s more useful to have pure file(s) storage and query for your data. Herewith a collection of data storage engines that somewhat cover the space of more well-known engines:
Previously I mentioned libpytunes and went to kick the tires. I thought it was published on PyPI but turns out it wasn’t. So here I am going pip install libpytunes and wondering why I can’t subsequently do an import libpytunes.
I’ve always known you can do pip install from a git repository, but a while back Adam Johnson wrote up some of the details. There are plenty of other good overviews out there (e.g. Simon Willison’s); this one just caught my eye recently.
Now pip install git+https://github.com/liamks/libpytunes actually installs the module and my import statement works as expected. Bonus, you can put git+https://github.com/liamks/libpytunes into requirements.txt and setup.py files as well, to achieve similar results.
Unfortunately the liamks version got hit by a trivial API change in plistlib in Python 3.9, so there was still breakage on my end, but Anirudh Acharya has a forked repo with the necessary one liner fix. Of course I used pip install git..., and now my Music.app experiments are proceeding apace.
Link parkin’: Music Library Exporter
Music Library Exporter allows you to export your library and playlists from the native macOS Music app.
The library is exported in an XML format, and is compatible with other applications, services, and tools that rely on the Music (previously iTunes) XML library format.
🎉 🎊 🥳 BONUS!! CLI SUPPORT 🥳 🎊 🎉
Aside from the main Music Library Exporter application, this project also includes a command-line program called library-generator.
Now licking my chops for some serious Music.app automation, although I’m a little nervous about compatibility. Will give it a test drive and report back.