home ¦ Archives ¦ Atom ¦ RSS

Agentic Coding and Generative Art

Following up on my prior musings about Python and generative art. Agentic coding could of course be part of building a Processing-like platform for Python. At the end of the day though, the actual generative code is just code.

What would agentic coding support for generative art pieces look like? On one hand, you could start very “vibe codey” with almost purely natural language expressions of what an artist wants. Spit out some code. Run a visual display. Iterate in natural language.

On the other hand, the agentic framework could assist proficient coders with leveraging underlying code features of the platform, even helping with domain specific mathematical, vector graphic, and low level pixel array manipulation (bit bliting) languages to support artistic flair.

Time for some research.


The CLAUDECODE Environment Variable

Across my personal gear, the bash login startup has a pretty florid banner generated by hyfetch. This is fine for the occasional human-paced terminal shell instantiation. However, when using Claude Code for agentic coding there’s a Bash tool which seems to use my login profile. The banner is just junk that eats up the context token budget, so I was casting about for a means to disable it when Claude was doing the invocation. Claude swore it wasn’t using a login shell, despite evidence to the contrary. I figured with all the agentic coding excitement, surfacing a discussion of this point would be relatively straightforward. Sadly I ended up a bit disappointed.

Turns out there’s a CLAUDECODE environment variable that is set to 1 for the Bash tool. I couldn’t really find any documentation other than this acknowledgment in a GitHub issue. Thought I’d publish a note on the Web in case someone else is struggling here as well. In any event, the following bash snippet conserves a few tokens.

# Skip the banner when the shell was spawned by Claude Code's Bash tool,
# which sets CLAUDECODE=1.
if [[ -z "$CLAUDECODE" ]]; then
   if [[ -x $HOME/.local/bin/neowofetch ]]; then
      neowofetch --package_managers off --package_minimal
   fi
fi

Alternatively, the Claude Code documentation indicates the ability to customize the environment through a settings.json file, thereby allowing you to define your own flag if you’d like.
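For illustration only, here’s the sort of thing that customization might look like. The env key in settings.json is documented; the NO_BANNER flag name is my own invention for a shell profile to test:

```json
{
  "env": {
    "NO_BANNER": "1"
  }
}
```

With that in place, a login profile could check "$NO_BANNER" instead of relying on the undocumented CLAUDECODE variable.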


More On Prek

From Hugo van Kemenade, some benchmarking of the performance of prek for managing git commit hooks:

prek is noticeably quicker at installing hooks.

⚡ About 10x faster than pre-commit and uses only a third of disk space. This 10x is a comparison of installing hooks using the excellent hyperfine benchmarking tool.

Here’s my own comparison.

Kemenade doesn’t see a full 10x, but he does measure a significant performance boost. He also provides some handy shell aliases for pre-commit and prek.


TSDProxy

Link parkin’: TSDProxy

TSDProxy is an application that automatically creates a proxy to virtual addresses in your Tailscale network. Easy to configure and deploy, based on Docker container labels or a simple proxy list file. It simplifies traffic redirection to services running inside Docker containers, without the need for a separate Tailscale container for each service.

I landed on TSDProxy via exploration of incorporating dokku into my tailnet. On cursory examination, here be dragons. The tricky bit is if you want to serve SSL traffic from a container, which then requires some DNS and cert jujitsu. There is also a dokku tailscale plugin but the same configuration caveats apply.

Seems like an adventure worth diving into.


Modernizing Python Generative Art

Over 15 years ago, I completed a generative art hack in Python. Effectively I transliterated a work from Processing into a combination of pygame and some lower level bit manipulation libraries. Further work never got all that far into making a robust framework or re-implementing other works.

Fast forward to the current era and maybe it’s time for a revisit. Especially as I’m looking for some new themes for this blog. First off, there’s likely a free order of magnitude or two of speedup that’s been gained just through processor and GPU improvements. Second, thanks to the AI bubble, the software layers for programming with accelerators in Python have vastly improved. Third, it looks like OpenGL isn’t the only way to do fast bit level graphics anymore.

Just for grins, I’ve been tasking Gemini Deep Research to create reports on how this could be done. Here’s a sample:

I. Executive Synthesis: Framework Recommendations and Performance Benchmarks

A. The Architecture of Choice: Optimal Stack for Real-Time Generative Art

The investigation into Python frameworks suitable for real-time generative art, specifically those handling large bitmap images represented by NumPy arrays, identifies an optimal stack that minimizes CPU overhead and leverages modern hardware capabilities. The most architecturally sound and performance-oriented solution centers on the use of pygfx and fastplotlib, relying on the WGPU graphics backend.

This combination is strategically superior because it addresses the constraints inherent in legacy visualization tools. Frameworks such as VisPy and Pyglet, while mature, are typically built upon OpenGL. In contrast, WGPU represents a crucial evolutionary step in graphics APIs, serving as a high-level abstraction layer that translates uniformly across modern, low-level APIs, including Vulkan, Metal, and Direct3D. Since WGPU itself is a Rust implementation with C bindings, adopting a WGPU-based library inherently satisfies the requirement for low-level language bindings while maintaining a high-level Python application interface. This architectural choice is not merely an incremental performance improvement but a foundational necessity to fully exploit the parallel processing power of modern GPUs, ensuring greater long-term stability and maximizing performance gains compared to frameworks constrained by the legacy overhead of OpenGL.

This could also be an interesting exploratory application of agentic coding. I know a bit about graphics and generative art, but I’m not an expert. Maybe I could coding-centaur my way into a useful framework in a reasonable amount of time.
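Whatever the rendering backend, the core data structure in that report is the large bitmap as a NumPy array. A minimal sketch of producing one frame that way (the interference pattern is an arbitrary toy of mine, not anything from the report):

```python
import numpy as np

def render_frame(width: int, height: int, t: float) -> np.ndarray:
    """Compute one RGB frame as a (height, width, 3) uint8 array."""
    # Normalized pixel coordinate grids.
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    xs /= width
    ys /= height
    # A toy interference pattern, animated by time t.
    r = np.sin(2 * np.pi * (xs * 5 + t))
    g = np.sin(2 * np.pi * (ys * 5 - t))
    b = np.sin(2 * np.pi * ((xs + ys) * 5 + t))
    rgb = np.stack([r, g, b], axis=-1)
    # Map [-1, 1] into [0, 255] for an 8-bit bitmap.
    return ((rgb + 1.0) * 127.5).astype(np.uint8)

frame = render_frame(640, 480, t=0.25)
```

A GPU-backed viewer such as fastplotlib would then just need to be handed an array like this each tick.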


Diggin’ On Mëstiza

Lately I’ve been diggin’ on DJ mixes from the fabulous duo known as Mëstiza (warning, hella glitz on their site; here’s a more accessible biography and a Wikipedia entry). Blurb directly from their site, all caps included.

MËSTIZA HAS SOLIDIFIED THEMSELVES AS A DYNAMIC FORCE IN THE GLOBAL MUSIC SCENE, SEAMLESSLY AND UNIQUELY INTERTWINING ELECTRONIC MUSIC WITH THE RICH TRADITIONS OF FLAMENCO. THEIR INNOVATIVE ARTISTRY CELEBRATES THEIR SPANISH HERITAGE WHILE CHAMPIONING FEMALE EMPOWERMENT, CREATING A VIBRANT FUSION OF MUSIC, FASHION, AND CULTURAL STORYTELLING.

Rummaging around on Apple Music I discovered their Ushuaïa Ibiza mix sets. Outstanding blending of house and Spanish vibes. Love the stage garb as well. Now on the lookout for other live collections. Highly recommended.


prek

Link parkin’: prek

pre-commit is a framework to run hooks written in many languages, and it manages the language toolchain and dependencies for running the hooks.

prek is a reimagined version of pre-commit, built in Rust. It is designed to be a faster, dependency-free and drop-in alternative for it, while also providing some additional long-requested features.

I’ve plugged prek into a few repos and it feels like a winner.


Hopper: Python Developer Tooling Handbook

Link parkin’: Python Developer Tooling Handbook, by Tim Hopper.

This is not a book about programming Python. Instead, the goal of this book is to help you understand the ecosystem of tools used to make Python development easier and more productive. For example, this book will help you make sense of the complex world of building Python packages: what exactly are uv, Poetry, Flit, Setuptools, and Hatch? What are the pros and cons of each? How do they compare to each other? It also covers tools for linting, formatting, and managing dependencies.

Hopper’s handbook is a really rich resource. Despite the mention of other Python packaging frameworks and tools, clearly the uv wave (really the Astral wave if you add in ruff and ty) landed on those shores. There’s a lot of good actionable advice. And the Explanation section has a bunch of foundational, non-Astral, Python packaging wisdom. The attendant blog looks great as well.

Tim’s excellent PyBites Podcast interview episode tipped me off to the handbook.


Bootstrapping Python CLI Packages

As an avowed command line interface (CLI) guy, my default approach to building new Python functionality is to write a package that’s embedded within a CLI right from the get-go. Fortunately, Python is blessed with many packages to support this. click and typer are my go-tos. I so admire click that I believe the package should be a part of the Python standard library.
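To make that concrete, here’s the minimal shape of a click entry point (a sketch; the command and option names are placeholders, not from any of my actual tools):

```python
import click

@click.command()
@click.option("--name", default="world", help="Who to greet.")
@click.option("--shout", is_flag=True, help="Uppercase the greeting.")
def cli(name: str, shout: bool) -> None:
    """A tiny example CLI entry point."""
    greeting = f"Hello, {name}!"
    if shout:
        greeting = greeting.upper()
    click.echo(greeting)
```

Point a [project.scripts] entry in pyproject.toml at cli and uv or pip will install it as a console command.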

I also have a few opinions regarding packaging (uv please), logging (use loguru), and configuration (platform user directories + TOML files). If you do enough of these CLIs within a certain period of time, you start to yearn for some bootstrapping automation. Recently I landed on a couple of packages that align with my preferences and could really help here.

First off is a batteries included cookiecutter for new Python packages.

There are many cookiecutter templates, but this one is mine and I’m sharing it with you. Create complete Python packages with zero configuration - including CLI, testing, documentation, and automated PyPI publishing via GitHub Actions.

Second is the typerdrive package (background from Tucker Beck)

During my time as an engineer working primarily with Python, I’ve written a fair number of CLIs powered by Typer. One type of project that has been popping up for me a lot lately involves writing CLI programs that interface with RESTful APIs. These are pretty common these days with so many service companies offering fully operational battlestations…I mean, platforms that can be accessed via API.

I’ve established some pretty useful and re-usable patterns in building these kinds of apps, and I keep finding new ways to improve both the developer experience and the user experience. Every time I go about porting some feature across to a new or old CLI, I wish there was a library that wrapped them all up in a nice package. Now, there is typerdrive:

These are the challenges I found myself facing repeatedly when building CLIs that talk to APIs:

  • Settings management: so you’re not providing the same values as arguments over and over
  • Cache management: to store auth tokens you use to access a secure API
  • Handling errors: repackaging ugly errors and stack traces into nice user output
  • Client management: serializing data into and out of your requests
  • Logging management: storing and reviewing logs to diagnose errors

typerdrive wraps all this up into an interface that’s easy for users and developers as well.

I could see myself creating a combination of these two into a new cookiecutter with a few tweaks of my own for AI engineered CLIs and REPLs. My thanks to the fine gentlemen who authored these packages and made them publicly available.


Python 3.14 Released

Python 3.14 got released recently. The team at Astral has a good feature overview amongst the many floating around on the ’Net. The overview is admittedly tinged with a focus on uv and ruff, so getting a few differing takes is a good idea. There’s nothing in this particular Python release I’m in a rush to try out, but the progress on exploiting processor concurrency is heartening.


marimo and quarto

Link parkin’: marimo + quarto

This repository provides a framework for integrating Quarto with marimo, enabling markdown documents to be executed in a marimo environment, and reactive in page.

Previously I’ve written about how marimo is an interesting project that’s advancing the state of the art in the Python computational notebook space. One of quarto’s claims to fame is straightforward incorporation of Jupyter notebooks in scientific publishing. As I’m getting up to speed with quarto, I ventured out to see how well marimo was integrated.

After giving it a quick test drive, the linked extension looks promising, but is a tad glitchy. Apparently a JavaScript support library for marimo that’s a few point releases behind main is necessary to get embedded interactivity working. Not that I desperately need that feature, but it’s mildly annoying.

If the extension sees continued support and improvement, I’ll be putting it to good use.


The Next Era

Typically I’m a “meta is murder” blogger. I prefer delivering content to talking about delivering content. Today I’m making a minor exception. Mainly for posterity.

Most of the action in the technology space writ large is driven by AI. It’s close to inescapable. Being an active technologist I’m swept up in it as well. Even if I wasn’t planning on making it a big part of the next day job (🤞), I’d likely be diving in out of pure curiosity.

The topic is big enough and more career oriented that I’m going to break my work out in the space into another site. memexponent.net will house all of my work and thoughts on AI engineering, hopefully building up a useful portfolio over time.

Where does that leave Mass Programming Resistance (MPR)? To Be Determined. Here’s a quick laundry list of areas I might take this site back to in depth:

  • Popular Media: Books, Music, Podcasts, Movies, Sports, Episodic Series (is it really TV anymore?)
  • Generative Art
  • Programming Language Design and Implementation
  • Data Management, Data Engineering, and Analytics
  • Non-AI Technology

Me being me, there has to be a tech angle. I can’t go pure culture and criticism though. Just need to find a proper balance.

More to come…


Pelican YAML Metadata

Link parkin’: a Pelican plugin that enables YAML formatted front matter

This Pelican plugin allows articles written in Markdown to define their metadata using a YAML header. This format is compatible with other popular static site generators like Jekyll or Hugo.

It is fully backwards-compatible with the default metadata parsing.

I’m also working up another blog that uses quarto. Quarto Markdown is Pandoc Markdown which is extended to use YAML for its metadata. Eventually I’ll do some agentic coding to build a CLI tool to assist in creating new posts for either style of blog. So getting them lined up on the same format is a good thing.


uv and .env

From Daniel Roy Greenfeld, “TIL: Loading .env files with uv run”

We don’t need python-dotenv, use uv run with --env-file, and your env vars from .env get loaded.

Good to know, even though I’m all in on direnv to auto-load .env files. It’s also handy for making Poe the Poet tasks that invoke uv under the covers really explicit.


Trey Hunner Cheatsheets

Link parkin’, from Trey Hunner’s site: Python Articles on Cheat Sheets

A collection of the many Python cheat sheets within Python Morsels articles and screencasts.

I especially like the cheatsheet on the pathlib module:

I now use pathlib for nearly all file-related code in Python, especially when I need to construct or deconstruct file paths or ask questions of file paths.

I’d like to make the case for Python’s pathlib module… but first let’s look at a cheat sheet of common path operations.
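In the cheat sheet’s spirit, here’s a quick taste of the construct/deconstruct/interrogate pattern with pathlib (the paths are made up for illustration):

```python
from pathlib import Path

# Construct a path with the / operator.
config = Path.home() / ".config" / "myapp" / "settings.toml"

# Deconstruct it into its pieces.
assert config.name == "settings.toml"
assert config.suffix == ".toml"
assert config.stem == "settings"
parent = config.parent  # .../.config/myapp

# Ask questions of the path without touching the filesystem.
in_dotdir = config.relative_to(Path.home()).parts[0].startswith(".")

# When ready, create the directory and write the file:
# parent.mkdir(parents=True, exist_ok=True)
# config.write_text('greeting = "hello"\n')
```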


Digging On Amapiano

Now that I have working search on this here blog I can ask questions like, “Have I ever mentioned amapiano music?” And as of this moment, the answer is “no”.

Let’s fix that.

Here’s the intro paragraph on amapiano from Wikipedia:

Amapiano is a genre of music from South Africa that became popular in mid-2012 with an earlier regular occurrence on South African radio stations in the early 2000s. It is a hybrid of kwaito, deep house, gqom, jazz, soul, and lounge music characterized by synths and wide, percussive basslines. The word “amapiano” derives from the IsiZulu word for “pianos”.

I can’t pinpoint an exact moment, but it most likely was a few years ago, coming out of the pandemic, when I first bumped into the genre. YouTube tags me as streaming this video of TxC playing Boiler Room London four years ago. I’m pretty sure that was the first hit because I was also somewhat astounded that I’d sit through an hour-long video of a DJ session. Or at least have it on in the background. Also, TxC are definitely a hot look on stage. Last but not least, that was likely my intro to Boiler Room, which will get a whole post of its own someday.

I’m still a House and DnB guy in the main, but I’m always game to toss in an amapiano mix discovery on Apple Music, which does some curation and promotion of the form, or anything eye catching from Boiler Room. Since amapiano partially derives from House this makes complete sense. Connecting with the African continent is a cherry on top.

Highly recommended.


Litestar Lookin’

I enjoyed a relatively recent James Bennett, erm, broadside “Litestar is worth a look”. Broadside is probably too strong a term but gives you a sense of the tone. His post discusses why one should consider an alternative Python based HTTP serving engine, Litestar, as a productive modern framework. In particular, he got in a few healthy shots at a couple of my faves FastAPI and pydantic.

You save this as app.py, run with litestar run or hand it directly to the ASGI server of your choice, and it launches a web application. You go to /greet?name=Bob and it replies “Hi, Bob!”. Leave out the name parameter and it responds with an HTTP 400 telling you the name parameter is required.

So what. Big deal. The FastAPI Evangelism Strike Force will be along soon to bury you under rocket-ship emoji while explaining that FastAPI does the same thing but a million times better. And if you’re a Java person used to Spring, or a .NET person used to ASP.NET MVC, well, there’s nothing here that’s new to you; you’ve had this style of annotation/signature-driven framework for years (and in fact one thing I like about Litestar is how often it reminds me of the good parts of those frameworks). And did anyone tell you FastAPI does this, too! 🚀🚀🚀🚀🚀🚀🚀🚀🚀

But there are a lot of things that make Litestar stand out to me in the Python world. I’m going to pick out three to talk about today, and one of them is hiding in plain sight in that simple example application.

Here’s my summarization of his three points:

  1. Scalable management of route specification and organization
  2. Decoupling from Pydantic for schema validation and serialization/deserialization which enables …
  3. Application of SQLAlchemy, best of breed in the Python ecosystem, for database integration

The entire blog post is well worth reading and reasonably argued. Litestar won’t immediately become the first thing I reach for when building an HTTP backend. However, Bennett succeeded in provoking me to at least consider exploring Litestar for some future projects just to understand the tradeoffs and the developer experience. Robust alternatives are always good to know about. His closing graf captures the intent and outcome.

I could go on for a lot longer listing things I like about Litestar, and probably wind up way too far into my own subjective preferences, but hopefully I’ve given you enough of a realistic taste of what it offers that, next time you’re about to build a Python web app, you might decide to reach for 💡⭐ to carry you to the moon 🚀🚀🚀.


zsa cards

Link parkin’, just because they’re so beautiful, zsa cards

The tagline is “A deck of inspiration and connection.” I’ve bought both the Original and Premium versions and have a deck within easy reach right on my desk. When I get stuck in a monotonous monthly status update Zoom call that doesn’t really need my participation, I just flip through and admire the cards, letting my mind wander.

As advertised, they are quite attractive, entrancing objects. Highly recommended.


The marimo moment

In my previous full time gig, I did a bit of work implementing a set of APIs using FastAPI and deploying them into AWS. I needed to get a picture of API usage from external partners and AWS didn’t have anything to easily use straight out of the box. So I set about doing some dashboard development and initially thought about using Jupyter but decided to take a sidequest into marimo which had been popping up quite a bit on my podcast radar. Worked like a charm.

Also as part of my job, I built a little NLP model training platform on top of Coiled running in AWS. Highly recommend Coiled if you need to scale compute on AWS but have minimal in-house cloud and ops expertise and staffing. Especially coiled batch, which let us effectively use AWS GPU nodes. We were running super lean and didn’t have time to really drill down on all that AWS had to offer.

And now Coiled has illustrated running marimo notebooks on coiled.

My spider sense tells me marimo is having a moment and building towards escape velocity. It won’t dislodge Jupyter so much as provide a complement in the notebook ecosystem, similar to how polars complements pandas in the dataframe ecosystem.

Here are some data points about marimo being on my radar. These are all from podcasts I subscribe to and where I consume episodes regularly:

A Listen Notes search would seem to confirm my intuition that marimo is making a push to increase visibility, possibly due to a round of venture funding at the end of last year. Podcasts aren’t the only place marimo has been popping up. The founder, Akshay Agrawal, did a PyCon US 2025 talk which is available on YouTube, and the tech features prominently in the TalkPython course, “LLM Building Blocks in Python”.

I’ve found Agrawal to be pretty engaging and thoughtful in all of these conversations. He seems to be coming from a pragmatic place of hard won experience. It’s giving me confidence that this project might have legs.

My limited experimentation with marimo has shown promising shoots, although it’s a fast moving target. I really like the reactive execution design choice and the underlying usage of plain Python as the notebook storage format. They’re integrating agentic AI features, of course, but there are interesting possibilities for agentic co-development of an interactive computational artifact along with the user. Probably worthy of some digging into the CS research literature for comps.

Here’s to marimo finding traction and enduring.


ffmpeg, homebrew, and aac

Link parkin’ this Stack Overflow solution for personal reference:

Homebrew v2.0 dropped all of the extra options that are not explicitly enabled in each formulae. So the --with options no longer work if you use the core Homebrew formulae.

Instead you can use a third-party repository (or “tap”) such as homebrew-ffmpeg. This tap was created in response to the removal of the options from the core formulae.

$ brew tap homebrew-ffmpeg/ffmpeg
$ brew install homebrew-ffmpeg/ffmpeg/ffmpeg --with-fdk-aac
# or
$ brew install homebrew-ffmpeg/ffmpeg/ffmpeg --with-fdk-aac --HEAD

I have a script that uses ffmpeg to convert WAV files to m4a files for Apple Music. It needs the non-free Fraunhofer FDK AAC to properly encode the data and write out in the correct file format. It’s not often I use the script though, and a skoosh of bitrot had settled into my installation with homebrew. This solution fixed things up in a pinch.


Blogaversary

What a momentous day!! According to my math, this blog is heading into its 17th year of existence. It all started way back when with a white plastic MacBook. That thing is still kicking as well 😲. An Intel Mac, I’ve had it happily running Ubuntu Linux for a few years. Despite only having 2 CPU cores and 4GB of RAM it still comes in handy for varied experiments.

But a downer as well. Today I got laid off from my current gig. I had a pretty good streak of picking my exits, but this came out of the blue. Won’t define me though. Just more time to really figure out what’s real and what’s BS in the agentic tool space.

Onwards to new adventures 🏴‍☠️ & 🥷!

Addendum: here’s the first post


Terminal Craft Time Sink

Hola peeps! It’s been a minute. Time for an update.

Nothing major on the life front. In fact, a bit of summer relief from the tyranny of kid activity. I’ve actually been able to sleep in a few days. And work slogs on per usual.

On the personal tech front though, I’ve been spending some quality time seriously reworking my terminal lifestyle. The last couple of posts hint at what’s been going on. Those were just initial steps and much more buffing, waxing, polishing, and refinement were needed. Let’s dive in …

read more ...


Adapting Atuin

As previously mentioned, atuin has been something of a godsend. Interacting with the bash command history is a complete joy. There’s a minor configuration tweak, illustrated below, that I want to mention in case it helps someone else out. I prioritized session and directory history ahead of global for lookup. Tab sprawl is the name of the game for me, each one like an individual context. So session makes the right place to start even if it’s initially empty. global is just a few C-r hits away if needed.

[search]
## The list of enabled filter modes, in order of priority.
## The "workspace" mode is skipped when not in a workspace or workspaces = false.
## Default filter mode can be overridden with the filter_mode setting.
# filters = [ "global", "host", "session", "workspace", "directory" ]
filters = [ "session", "directory", "global", "host", "workspace" ]

The above TOML goes in ~/.config/atuin/config.toml

Meanwhile, I spent some quality time working on direnv configuration. direnv is less of an immediate win because there’s some effort needed to add it to existing projects and to use it for initiating new ones. Also, it works more as implicit magic than explicit navigation. Trey Hunner provided a good starting point but I’m molding his approach to my workflow. I’m working on a bash function to properly set up an .envrc for my pre-existing uv based Python projects. pyenv is going the way of the dodo. Viva la Frank Wiles.
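A sketch of the kind of per-project .envrc I’m converging on for uv-managed projects (my own pattern, not Hunner’s verbatim; assumes uv is on the PATH and uses direnv’s stdlib dotenv_if_exists helper):

```shell
# .envrc: auto-activate the project's uv-managed virtualenv
if [ ! -d .venv ]; then
  uv venv
fi
source .venv/bin/activate
# Pull in any project-local environment variables.
dotenv_if_exists .env
```

Run direnv allow once in the project directory to bless the file.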

zoxide, however, is going to take some getting used to. The directory teleportation mental model needs some burn-in.


Adopting Atuin

For the longest time, like a decade or more, I’ve been really irritated by the behavior of the bash C-r key binding. It’s supposed to be a reverse history search by default. It has some non-obvious behavior though if you decide you’ve gone too far in your history.

The fix to bash is probably straightforward and I could have gotten on with my life. But sometimes I’m a dope. Anyway, I’m retooling my shell life, prompted by Frank Wiles’ post My CLI World

Enter atuin:

Atuin CLI

The magical shell history tool loved by developers worldwide. Sync your commands across machines, search everything instantly, and keep your data encrypted. Open source.

Yeah, atuin is what I needed. And being the dope I am, I’ve known about it for a while now. The cross-host syncing seems cool, but the default sync server being someone else’s host put me off a little.

The initial experience is looking great and exceeding my expectations.

Yup. I really am a dope. Onwards to integrating direnv and zoxide.


Man In The AIrena

Let it be known that I made my first foray into agentic AI coding two days ago, on July 6th, 2025. I worked with Claude Code to start prototyping a little tool to build m3u playlists. This will come in handy for making my local collection of music files exposable as part of my OwnTone project.

It wasn’t quite vibe coding, as I didn’t let Claude off the leash to make changes on its own. All of its requests were reviewed personally. At the same time, I haven’t written a line of code in the repo. In fact, I’ve barely looked at any of the source code. Meantime, my initial runs for building playlists look pretty good. To top it off, now a bunch of ideas for extending the tool are cooking in my head.

The experience hasn’t been mindblowingly life altering, but vaguely satisfying. This is a project I would have likely procrastinated on interminably. With an hour of effort, give or take, at least I’m started. Lots of boilerplate and mind numbing testing avoided. So there’s a path forward. And thoughts of many other deferred projects that could now be within reach. Credit to Andrew Ng, writing in The Batch, for the nudge to finally push out from the dock.

I’m completely sympathetic to all the folks who are apprehensive to pessimistic on where this is going and what it will ultimately cost. For me, guarded experimentation is the right path forward. It feels like there is some there there. Maybe (probably) not enough to match the huckster BS and snake oil, but possibly a useful normal technology. YMMV.


Owntone Dialtone

For the longest time, I’ve been dreaming of a hackable solution to drive my multiple Sonos speaker setup. A while back, I thought I could cobble something together based upon Music Player Daemon (mpd). Didn’t really have time to dig deep into setting things up. The other bit is that getting Sonos to pick up streams from mpd seemed a bit kludgey.

I kept plugging away doing background research and discovered OwnTone

OwnTone is a media server that lets you play audio sources such as local files, Spotify, pipe input or internet radio to AirPlay 1 and 2 receivers, Chromecast receivers, Roku Soundbridge, a browser or the server’s own sound system. Or you can listen to your music via any client that supports mp3 streaming.

You control the server via a web interface, Apple Remote, an Android remote (e.g. Retune), an MPD client, json API or DACP.

OwnTone also serves local files via the Digital Audio Access Protocol (DAAP) to iTunes (Windows), Apple Music (macOS) and Rhythmbox (Linux), and via the Roku Server Protocol (RSP) to Roku devices.

Did some holiday hunkering down and got an OwnTone server deployed on my homeLAN. Had some challenges doing a build under Linux on a mini-PC but I found a community package that worked out well. Turns out I had some Linuxbrew stuff causing conflicts. After that it was a little bit of firewall configuration aaaand …

crossjam’s OwnTone server dashboard

The frontend is nice, while OwnTone also provides an mpd facade that’s controllable with mpd clients like mpc. There’s also a good looking JSON API. Plus it supports Last.fm integration, although I’ll keep on with my soco-scribbler work. Even more beautiful is the fact that it’s reachable over my Tailscale network. All in all, a side project hacker’s delight.

Some burn-in needs to happen with a full playlist run, but this is promising. Given the protocols that OwnTone is designed to service (DAAP and AirPlay), it shouldn’t be a surprise it does an excellent job of finding my AirPlay 2 speakers as output targets. I also have a pair of HomePod Minis that are only lightly used. Maybe this will give them renewed life.


On Tyranny

Today’s a good day to start a reread of Timothy Snyder’s On Tyranny. Might even complete it in one sitting.

Don’t Be A Bystander


1Password CLI

Just have to throw a shout out to the 1Password CLI tool. It’s actually a really elegant way to plug your friendly neighborhood secrets manager into the terminal/shell life. And the integration with Apple’s Touch ID is well done. Highly recommended.


Context Engineering

Simon Willison noted an AI related neologism “context engineering”, emerging on The Socials (TM).

The term context engineering has recently started to gain traction as a better alternative to prompt engineering. I like it. I think this one may have sticking power.

I used to scoff at the term prompt engineering. Its early connotation was one of messing around with text prompts in an ad hoc fashion and seeing what happens. Adding goofy stuff like “my grandmother’s life depends on you getting this answer correct.” Not much engineering involved.

Given the explosion in maximum context window sizes, and how much the construction of a context impacts LLM performance, I’m now in the camp that prompt engineering is serious business. In my head, I was noodling with the term “context programming” to express the many ways of crafting contexts for LLMs that have arisen in engineering practice. Cf. DSPy. There’s also a bit of science emerging around where and how models interact with large contexts.

If “context engineering” catches on, I’m good with it.


platformdirs

Link parkin’: platformdirs

platformdirs is a library to determine platform-specific system directories. This includes directories where to place cache files, user data, configuration, etc.

A handy module for use within other projects.
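To get a feel for what platformdirs abstracts away, here’s a rough stdlib-only sketch of the kind of per-platform branching it handles. The real library covers far more (app authors, versioning, roaming profiles, Android, multipath XDG variables), so treat this as illustration, not a substitute:

```python
import os
import sys
from pathlib import Path

def user_config_dir(appname: str) -> Path:
    """Sketch of a platform-appropriate per-user config directory."""
    if sys.platform == "darwin":  # macOS
        return Path.home() / "Library" / "Application Support" / appname
    if sys.platform == "win32":  # Windows
        base = os.environ.get("APPDATA", str(Path.home()))
        return Path(base) / appname
    # Linux/BSD: honor the XDG Base Directory spec
    base = os.environ.get("XDG_CONFIG_HOME", str(Path.home() / ".config"))
    return Path(base) / appname
```

The payoff of the library is never writing this branching yourself, and getting the edge cases right.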


TIL Inspect AI

TIL Inspect

An open-source framework for large language model evaluations

Looks like a nice piece of open source kit from the UK government’s AI Security Institute.

A big part of the day job is LLM evaluation so this is definitely of interest.


soco-scribbler

I forked sonos-lastfm just to noodle around with Sonos monitoring generally. Dubbed the project soco-scribbler, as it’ll scribble down locally what’s been played for later usage. Goal one is to record plays to SQLite via sqlite-utils. Then onwards to reconciliation with Last.fm, assuming there’s a parallel scrobbler going, like the hit-or-miss Sonos plugin. Afterwards, who knows.
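Goal one could look something like the following stdlib sketch. The table and column names are my invention, not soco-scribbler’s actual schema, and sqlite-utils would shrink this further by auto-creating the table from a dict:

```python
import sqlite3

def record_play(db_path, artist, title, played_at):
    """Append one observed Sonos play to a local SQLite database."""
    con = sqlite3.connect(db_path)
    # Hypothetical schema; created lazily on first write.
    con.execute(
        """CREATE TABLE IF NOT EXISTS plays
               (artist TEXT, title TEXT, played_at TEXT)"""
    )
    con.execute(
        "INSERT INTO plays VALUES (?, ?, ?)", (artist, title, played_at)
    )
    con.commit()
    con.close()
```

With plays landing in a table like this, the later Last.fm reconciliation becomes a join against scrobble timestamps.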

Forza.


Gemini Chat Link Longevity

Since I just used the chat link feature of Gemini, I thought it might be a good idea to ask how long the links last. And who better to query than Gemini!

Public links to Gemini chats generally stay alive as long as the associated chat is saved in your Gemini Apps Activity. …

In summary, the lifespan of a public Gemini chat link is tied to how long you keep the original chat in your Gemini Apps Activity, unless you choose to delete the link specifically.

If I’d figured this out earlier, I’d have used a Gemini link in my discussion of mixes and streaming platforms. Which reminds me to check in and see how to do this with ChatGPT and Claude.


Beats In Space

Speaking of Beats In Space, this is a trove of mix history that I need to dive into:

Beats In Space is a radio show that started in 1999 on WNYU 89.1FM in New York City and is now broadcasting every week on Apple Music.

Beats In Space is brought to you by New York DJ Tim Sweeney.

Not only does Sweeney host a guest mix, often he also does a mix of his own, along with an interview with the guest DJ:

I’m excited to share that all Beats In Space interviews are now available for FREE on Apple Podcasts! Whether you’re a long-time listener or just discovering the show, you can now dive into in-depth conversations with some of the most influential DJs and producers in electronic music—no paywall, just pure music talk. Subscribe and leave us a comment!

Great backgrounder at the fifteen-year mark on Sweeney and the show. Now at twenty-five-plus years and counting 😮, with only a single hiatus!

Previously I have spoken about the notion of a retrocast for spoken word podcasts, but the same could be done for music podcasts / mix series, tracing their evolution over time. I chatted up Gemini about this during some background research on Carlita and asked “Are there any such tools that look at longitudinal track trends, DJ relations and history, and/or track genealogy”. There’s a whole bunch more to the thread but this kicks off a lot of intellectual provocation.

You’re asking about a fascinating and cutting-edge area of AI in music, moving beyond simple track identification to deeper, more interconnected analysis. While the precise, all-in-one tools for “longitudinal track trends, DJ relations and history, and/or track genealogy” aren’t yet widely available as off-the-shelf consumer products, the underlying AI and research are certainly advancing in these directions.

Mixes, episodes, data integration and AI 🤔. Feels like a great rabbit hole to get stuck into.


The Platform for DJ Mixes

I’ve recently fallen down a rabbit hole of adding DJ mix sets to my Apple Music library. Boiler Room and Fabric have been on the radar now for a year or two, with solid presences in the Apple Music service. For whatever reason, one day recently I started chasing the “More By …” and “You Might Also Like…” connections underneath a set’s tracklist and discovered:

  • The Warehouse Project and
  • Glitterbox and
  • Tomorrowland and
  • Defected Broadcasting and
  • The Lost Village and
  • Movement and
  • you get the picture …

The nice thing is that I’m coming across new-to-me DJs like Carlita, TSHA, Seth Troxler, Skream, Chase & Status, Sonny Fodera, Denis Sulta, and Kilimanjaro. Meantime there’s a heaping helping of old friends with mix sets I’ve never heard before: Josh Wink, Todd Terry, MK, Roger Sanchez, Armand van Helden, Lil’ Louie Vega, Masters at Work, Marshall Jefferson, and Basement Jaxx.

Some of these sets are bananas in terms of length, including a 5-hour ride with Seth Troxler B2B Skream.

Out of curiosity, I asked the major AI bots to compare and contrast Spotify with Apple Music on this front. Surely Spotify must be in the arena here? Apparently that’s not really the case. Here’s what Claude had to say:

Bottom Line

Apple Music is the clear winner for electronic DJ mixes, offering:

  • Professional-grade content: Thousands of expertly crafted continuous DJ mixes
  • Industry partnerships: Direct integration with major DJ software and hardware
  • Regular updates: Monthly curated mixes and exclusive content from top DJs
  • Variety: Comprehensive coverage of electronic music genres

Spotify falls short with mainly user-generated playlist collections rather than true DJ mixes, limited professional content, and no current support for DJ software integration.

For serious electronic music fans and DJs, Apple Music provides a significantly richer and more varied selection of actual DJ mixes compared to Spotify’s more basic offerings.

I need to do some further work to verify they’re not all hallucinating, but this matches my general vibe. Full disclosure: I ditched a Spotify premium subscription a few years back since I got sucked into the Apple One bundle.

What really grabs me about Apple Music is the relationships with curators, promoters, and labels such as The Warehouse Project, Boiler Room, Fabric, and Defected. Plus they have their own series, Beats in Space, and provide DJ-mix-themed genres, although you have to dig a little. This provides some serendipitous discovery without needing to go completely algorithmic. Great folks like Boiler Room just put up phenomenal events on Apple Music, and the archives live on in perpetuity for someone like me to fall into.

Have to acknowledge the creator-focused platforms Mixcloud and SoundCloud. They’re both great sources for sets directly from DJs. Bonus: they integrate nicely with the Sonos platform. They just don’t provide the personal track library that Apple Music does. He said grudgingly.

Now if only Apple would do something about their janky desktop app.


20 Years on Last.fm ?!

Happened to be looking at my profile on Last.fm and noticed the following:

Screenshot of Last.fm profile - Joined June 5, 2005

In case you can’t make out the text from the screen capture, there’s a “scrobbling since June 5 2005” hiding in there.

Hang on! I’ve been scrobbling for 20 years 😱 ?!

Just for calibration, that’s one year longer than The Site Formerly Known as Twitter has existed, and one year less than TheFacebook.

Some good ’ole Web 2.0 services just keep hanging in there.


Searching Mass Programming Resistance

Boy am I a dope.

I’ve been fiending for search capabilities on this here ’blog for ages. Never got around to putting something together, dreading the grind of HTML and JavaScript coding. So I just limped along with site:crossjam.net modifiers in search engines.

About this time last year, I posted about pagefind. pagefind is a JavaScript toolkit for creating embedded search indexes to go along with static server pages. Looked easy peasy to run and use. Lame old me still dragged his feet.

Finally for some Friday hacking, I decided to push through and see if I could get pagefind based search working in an experimental fashion.

After reacquainting myself with the pagefind docs, I had it working in an hour 🙄. Still have a few bits to polish, but once it’s done there will be an honest-to-gosh search page.


A New Breed of AI Engineers

Being a Systems guy, I don’t really know much about ML and AI. In my attempt to keep up, I’m trying to draft off the best that I know of. Andrew Ng is a legend, so I’m subscribed to his newsletter “The Batch”. The most recent edition, entitled Meet The New Breed of GenAI Application Engineers neatly summarizes a lot of what’s going on beyond the various echo chambers:

There’s a new breed of GenAI Application Engineers who can build more-powerful applications faster than was possible before, thanks to generative AI. Individuals who can play this role are highly sought-after by businesses, but the job description is still coming into focus. Let me describe their key skills, as well as the sorts of interview questions I use to identify them.

Skilled GenAI Application Engineers meet two primary criteria: (i) They are able to use the new AI building blocks to quickly build powerful applications. (ii) They are able to use AI assistance to carry out rapid engineering, building software systems in dramatically less time than was possible before. In addition, good product/design instincts are a significant bonus.

About two years ago, I would have been one to scoff at the notion of AI Engineers, thinking of them as overblown prompt tweakers and API cobblers. I’ve come around because of two major shifts.

First, systematically, robustly, and safely building capabilities on top of LLMs really does require engineering. We’ve moved past the point of just bolting a chat window into an app and calling it a day. Compound AI systems (Go Bears!) with agentic components are a whole new ballgame.

Second, the shape and quality of system building is changing. The baseline velocity of software development is going up. Vibe coding may lead to crap applications, but there are enough folks doing the required experimentation to lead to better outcomes. I still have concerns about the security, ops, and maintenance aspects, but those become worthy avenues to provide value.

If you click through on Ng’s newsletter, that list of AI building blocks is daunting. Good taste on which of them to early adopt, lag, and deep dive will be another defining skill. As always, those who can discern the enduring qualities will be well positioned.


Synology and Sonos

I have a Synology NAS and a few Sonos devices. Pulling these two nuggets of information together in case someone else needs to rework their setup. Previously I had all my music data on one machine, but since the NAS has plenty of space, it made sense to migrate the Apple Music library and then point Sonos in the right direction.

First you need to create a new library in Apple Music:

You can have more than one music library in Music. For example, you could have a library of holiday music that wouldn’t appear in Music the rest of the year. Or you could keep some of your music in a library on an external storage device.

Then these Sonos instructions illustrate how to get your old data over to the NAS library.

Choose in Apple Music under the File menu the ‘Import’ option. Select the folder where your music files are stored on your local drive. The Apple Music app will now copy all the files in the selected folder and subfolders to the NAS and update its music library. Bear in mind that all files will be copied to the NAS but not be moved, so the initial Apple Music library and local music folders will remain without change on your local drive.

This definitely can take a while. Unfortunately, Apple Music goes through a few different phases during import, displays only minimal messaging, and definitely offers no progress reporting. Oh well.


OpenRouter

Link parkin’: OpenRouter

From the OpenRouter docs

OpenRouter provides a unified API that gives you access to hundreds of AI models through a single endpoint, while automatically handling fallbacks and selecting the most cost-effective options. Get started with just a few lines of code using your preferred SDK or framework.

I actually have an API key already, just need to get into some mischief with it.
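Since OpenRouter speaks an OpenAI-compatible chat completions protocol, the mischief could start with a plain stdlib sketch. The endpoint URL is from the OpenRouter docs; the model slug below is purely illustrative:

```python
import json
from urllib.request import Request, urlopen

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, prompt: str) -> Request:
    """Assemble an OpenRouter chat completion request without sending it."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return Request(API_URL, data=body, headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    })

# Actually sending it is then one line:
#   response = json.load(urlopen(build_request(key, model, prompt)))
```

One nice property of the unified endpoint: swapping providers is just a different model string.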

Good podcast episode with the CEO of OpenRouter, Alex Atallah.

© 2008-2025 C. Ross Jam. Built using Pelican. Theme based upon Giulio Fidente’s original svbhack, and slightly modified by crossjam.