
claude-code-transcripts

Simon Willison has released a new tool: claude-code-transcripts.

A new way to extract detailed transcripts from Claude Code

I’ve released claude-code-transcripts, a new Python CLI tool for converting Claude Code transcripts to detailed HTML pages that provide a better interface for understanding what Claude Code has done than even Claude Code itself. The resulting transcripts are also designed to be shared, using any static HTML hosting or even via GitHub Gists.

This is great! AI session transcripts make excellent fodder for blog posts. I wish the desktop AI agents provided consistent and convenient ways to extract chat transcripts.


TIL: Lazygit

Seen the Lazygit name kicking around, so I figured it was time to find out what all the fuss was about.

Here’s the elevator pitch:

Rant time: You’ve heard it before, git is powerful, but what good is that power when everything is so damn hard to do? Interactive rebasing requires you to edit a goddamn TODO file in your editor? Are you kidding me? To stage part of a file you need to use a command line program to step through each hunk and if a hunk can’t be split down any further but contains code you don’t want to stage, you have to edit an arcane patch file by hand? Are you KIDDING me?! Sometimes you get asked to stash your changes when switching branches only to realise that after you switch and unstash that there weren’t even any conflicts and it would have been fine to just checkout the branch directly? YOU HAVE GOT TO BE KIDDING ME!

If you’re a mere mortal like me and you’re tired of hearing how powerful git is when in your daily life it’s a powerful pain in your ass, lazygit might be for you.

I live in magit for git work, but something handy for the occasional terminal session seems useful.


mdformat

I’ve created a handy tool for copyediting my blog posts. Unfortunately, the models tend to extend my text into long lines and don’t properly word-wrap.

Enter mdformat

Mdformat is an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files. Mdformat is a Unix-style command-line tool as well as a Python library.

The features/opinions of the formatter include:

  • Consistent indentation and whitespace across the board
  • Always use ATX style headings
  • Move all link references to the bottom of the document (sorted by label)
  • Reformat indented code blocks as fenced code blocks
  • Use 1. as the ordered list marker if possible, also for noninitial list items

Mdformat will not change word wrapping by default. The rationale for this is to support Semantic Line Breaks.

These two plugins, mdformat-footnote and mdformat-front-matters, have been useful. A long time ago, I picked up the Pandoc footnote style in Markdown. This year, I started using YAML front matter to align with my use of Quarto on another blog.
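For reference, mdformat also exposes a small Python API, which is handy if you want to run it from your own tooling instead of the CLI. A minimal sketch, assuming the footnote and front-matter plugins are installed and that their extension names match the package names:

```python
import mdformat

# Read a post, format it, and write it back.
# The extension names here are assumptions based on the plugin package names
# (mdformat-footnote and mdformat-front-matters); both must be installed.
with open("post.md", encoding="utf-8") as fh:
    source = fh.read()

formatted = mdformat.text(source, extensions={"footnote", "frontmatter"})

with open("post.md", "w", encoding="utf-8") as fh:
    fh.write(formatted)
```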

I actually started implementing this feature by hand without assistance from an agentic coder. Then I asked GitHub Copilot to review the PR. It found a few issues, including a couple of clear brainos by the human, and made some suggestions that were easy to adopt. The two of us delivered a better product.

GitHub Copilot feels like a vastly underappreciated entry into the agentic coding space.

P.S. This post was copyedited with my own copyediting tool.


Claude Plan Mode

Armin Ronacher dug into Claude Code’s plan mode as an interested bystander.

However today I had two interesting conversations with people who really like plan mode. As a non-user of plan mode, I wanted to understand how it works. So I specifically looked at the Claude Code implementation to understand what it does, how it prompts the agent, and how it steers the client. I wanted to use the tool loop just to get a better understanding of what I’m missing out on.

This post is basically just what I found out about how it works, and maybe it’s useful to someone who also does not use plan mode and wants to know what it actually does.

This one took me a few days to get around to, but it turned out to be an interesting reflection on how I work with Claude. In essence, like Ronacher, I’ve unknowingly recreated many key aspects of plan mode!

Over the last month, I’ve started most agentic coding sessions with a prompt along the lines of: “generate a plan, in markdown format, include a timestamp, with a task checklist, and write it into a plans folder.” Then I review the plan, typically only making minor changes. I commit these plans to the working repository.

With a plan in place, I’ll kick the agent into implementation action. If the plan is fairly complex, with multiple phases, and might hit usage limits, I often ask the agent to complete only the first couple of phases, then pause so I can review.

Here’s more from Armin:

Plan mode as it exists in Claude has this sort of weirdness in my mind where it doesn’t come quite natural to me. It might come natural to others! But why can I not just ask the model to plan with me? Why do I have to switch the user interface into a different mode? Plan mode is just one of many examples where I think that because we are already so used to writing or talking to machines, bringing in more complexity in the user interface takes away some of the magic. I always want to look into whether just working with the model can accomplish something similar enough that I don’t actually need to have another user interaction or a user interface that replicates something that natural language could potentially do.

This is particularly true because my workflow involves wanting to double check what these plans are, to edit them, and to manipulate them. I feel like I’m more in control of that experience if I have a file on disk somewhere that I can see, that I can read, that I can review, that I can edit before actually acting on it. The Claude integrated user experience is just a little bit too far away from me to feel natural. I understand that other people might have different opinions on this, but for me that experience really was triggered by the thought that if people have such a great experience with plan mode, I want to understand what I’m missing out on.

And now I know: I’m mostly a custom prompt to give it structure, and some system reminders and a handful of examples.

Emphasis mine on that last line.

The other nice thing about making plans into explicit documentation is that:

  1. Work can be handed off to another agentic system.
  2. You can try the same plan with multiple different agents.
  3. Resets and restarts are easy.

YMMV.


Merry Christmas 2025

The previously mentioned festivitas makes for a great holiday desktop. This screen capture isn’t doing it justice, but the snowflakes are actually a nice update.

A screen capture of a macOS desktop. The menubar and sidebar have festive seasonal ornaments from the Festivitas app. The background is a spectacular space shot of a galaxy captured by the James Webb Telescope.

Merry Christmas y’all! 🎄


Python Supply Chain Security

Michael Kennedy does the Python community a service and explains how to integrate pip-audit into package development to help secure the dependency supply chain:

pip-audit is great because you can just run it on the command line. It will check against PyPA’s official list of vulnerabilities and tell you if anything in your virtual environment or requirements files is known to be malicious.

You could even set up a GitHub Action to do so, and I wouldn’t recommend against that at all. But it’s also valuable to make this check happen on developers’ machines. It’s a simple two-step process to do so …

Here’s the précis on pip-audit:

pip-audit is a tool for scanning Python environments for packages with known vulnerabilities. It uses the Python Packaging Advisory Database (https://github.com/pypa/advisory-database) via the PyPI JSON API as a source of vulnerability reports.

Kennedy illustrates the basic installation and usage of pip-audit from the command line. He also incorporates it into a pytest test. Personally, I think I’d rather add it as a Poe The Poet task and then roll it into a qa meta task. That approach already fits into my GitHub Actions workflow.
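Here’s roughly what the pytest angle looks like; a minimal sketch (not Kennedy’s exact code) that simply fails the suite if pip-audit reports anything:

```python
# test_supply_chain.py -- minimal sketch: fail the test run if pip-audit
# finds known vulnerabilities in the active environment.
import subprocess
import sys


def test_no_known_vulnerabilities():
    result = subprocess.run(
        [sys.executable, "-m", "pip_audit"],
        capture_output=True,
        text=True,
    )
    # pip-audit exits non-zero when it finds vulnerable packages.
    assert result.returncode == 0, f"pip-audit findings:\n{result.stdout}{result.stderr}"
```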

Also, TIL about uv’s dependency cooldowns:

Dependency cooldowns

uv also supports dependency “cooldowns” in which resolution will ignore packages newer than a duration. This is a good way to improve security posture by delaying package updates until the community has had the opportunity to vet new versions of packages.

This feature is available via the exclude-newer option and shares the same semantics.

Define a dependency cooldown by specifying a duration instead of an absolute value. Either a “friendly” duration (e.g., 24 hours, 1 week, 30 days) or an ISO 8601 duration (e.g., PT24H, P7D, P30D) can be used.

Even better, this can be specified in the tool.uv section of a pyproject.toml file.


TIL: Gemmaverse

TIL: Gemmaverse

Strictly speaking, last night I learned about the compendium of open, smaller models made available by Google. Thanks to the exceedingly generous Hugo Bowne-Anderson (do check out the Vanishing Gradients podcast) for hosting a pop-up seminar with Ravin Kumar of Google DeepMind.

Gemma Models Overview

Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is named after the Latin gemma, meaning precious stone. The Gemma model weights are supported by developer tools that promote innovation, collaboration, and the responsible use of artificial intelligence (AI). You can get multiple variations of Gemma for general and specific use cases:

  • Gemma 3: Solve a wide variety of generative AI tasks with text and image input, support for over 140 languages, and long 128K context window.

  • CodeGemma: Complete programming tasks with this lightweight, coding-focused generative model.

  • PaliGemma 2: Build visual data processing AI solutions with a model that’s built to be fine-tuned for your image data processing applications and available in multiple resolutions.

  • ShieldGemma 2: Evaluate the safety of generative AI models’ input and output against defined policies.

Many more Gemma variants are available from Google and our AI developer community. Check them out on Kaggle Models and Hugging Face. Get inspired by what our community members have built with Gemma in the Gemmaverse.

I’d heard of a few of the Gemma models but didn’t realize how diverse the collection had become.

Bonus: Kumar dropped a Colab Notebook on finetuning the FunctionGemma 270M model.


mlx-whisper

Link parkin’: mlx-whisper

Speech recognition with Whisper in MLX. Whisper is a set of open source speech recognition models from OpenAI, ranging from 39 million to 1.5 billion parameters.

I had been experimenting with transcription and diarization using WhisperX. This turned out to be pretty slow on an M2 MacBook. mlx-whisper is pretty honking fast, although it only does transcription. I think diarization can be addressed by complementary application of pyannote.audio.

pyannote.audio is an open-source toolkit written in Python for speaker diarization. Based on PyTorch machine learning framework, it comes with state-of-the-art pretrained models and pipelines, that can be further finetuned to your own data for even better performance.

Thought I’d mentioned mlx-whisper ahead of parakeet-mlx. In any event, I’ve actually put it to the test a little bit for retrocast. The processing rate is quite acceptable for high-quality transcription. However, this needs some serious benchmarking to confirm. moonshine is also in the mix.
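For the record, the Python API is about as small as it gets. A minimal sketch; the Hugging Face repo name is an assumption, and any mlx-community Whisper conversion should slot in:

```python
import mlx_whisper

# Transcribe a local audio file with an MLX-converted Whisper model.
# The repo name below is an assumption; swap in whichever conversion you prefer.
result = mlx_whisper.transcribe(
    "episode.mp3",
    path_or_hf_repo="mlx-community/whisper-large-v3-mlx",
)
print(result["text"])
```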


Two New Side Projects

I’ve been putting in the reps with agentic coding, gaining patterns and confidence. So much so that I’m committing to launching two projects here at the end of the year.

Humble Librarian

Let’s start with Humble Librarian. I’m a bit of a Humble Book Bundle addict.

BOOK BUNDLES

Looking for your next great read? Immerse yourself in adventurous comics, discover spectacular fantasy worlds, whip up your soon-to-be favorite recipe, run a new tabletop RPG system, and more with our curated book bundles. Whether you’re a voracious reader or want to explore new genres, our exclusive bundles can help you find your next literary love. Plus, a portion of the bundle proceeds go to charity!

Over the last few years, I’ve plunked down for 80+ book bundles. I tend toward tech book and comics bundles, with dashes of science fiction and self-help. I’ve also purchased a number of eBooks directly from Manning, Pragmatic Programmers, No Starch, eBooks.com, and self-publishers. Each bundle typically includes 10 or more books.

Bottom line, I have a metric crap-ton of EPUBs, PDFs, CBZs, and MOBIs to deal with. Half the time when I’m looking at a new Humble Book Bundle, I’m not sure if I already have the books on offer.

Let’s see how far we can get using agentic coding to build a file-system crawler, indexer, and a chat UX for managing my personal library.
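The crawler piece, at least, is mostly pathlib. A minimal sketch of where I expect to start; the library root and the set of extensions are assumptions:

```python
from collections import defaultdict
from pathlib import Path

# File extensions I care about; adjust to taste.
EBOOK_SUFFIXES = {".epub", ".pdf", ".cbz", ".mobi"}


def crawl(root: Path) -> dict[str, list[Path]]:
    """Walk the library root and bucket ebook files by extension."""
    index: dict[str, list[Path]] = defaultdict(list)
    for path in root.rglob("*"):
        if path.is_file() and path.suffix.lower() in EBOOK_SUFFIXES:
            index[path.suffix.lower()].append(path)
    return index


if __name__ == "__main__":
    for suffix, files in sorted(crawl(Path("~/HumbleBundle").expanduser()).items()):
        print(f"{suffix}: {len(files)} files")
```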

Peyote

Over fifteen years ago I made some headway on a project named peyote. The idea was to use Python, Pygame, and PyOpenGL to create generative art on the order of Processing. I managed to get one piece completed and then ran out of steam.

Below is a portion of a plan that I asked Claude Code to create for a screensaver, hextrail, that recently caught my eye.

HexTrail Screensaver - Code Summary and Python Port Plan

Original Code Summary

HexTrail is an XScreenSaver module written in C with OpenGL that creates a mesmerizing animated pattern of colored lines growing across a hexagonal grid.

Core Concept

The screensaver creates a honeycomb grid of hexagonal cells. Animated “arms” (lines) grow from the centers of hexagons toward their edges, then continue into neighboring hexagons, creating a branching network of colorful trails across the hexagonal substrate.

Key Data Structures

  1. arm (hextrail.c:39-43)

    • State: EMPTY, IN, WAIT, OUT, or DONE
    • ratio: Growth progress (0.0 to 1.0)
    • speed: Animation speed
  2. hexagon (hextrail.c:45-53)

    • Position (XYZ coordinates)
    • 6 neighbors (pointers to adjacent hexagons)
    • 6 arms (one for each edge)
    • Color index
    • Border state and animation ratio
  3. hextrail_configuration (hextrail.c:55-70)

    • Grid dimensions and hexagon array
    • Color palette (8 colors)
    • OpenGL context and rotation state
    • Animation state (FIRST, DRAW, FADE)
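As a rough idea of how those C structs might translate for the Python port, here’s a hedged dataclass sketch; the names follow the plan above, and the field details are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ArmState(Enum):
    EMPTY = auto()
    IN = auto()
    WAIT = auto()
    OUT = auto()
    DONE = auto()


@dataclass
class Arm:
    state: ArmState = ArmState.EMPTY
    ratio: float = 0.0  # growth progress, 0.0 to 1.0
    speed: float = 1.0  # animation speed


@dataclass
class Hexagon:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    neighbors: list["Hexagon | None"] = field(default_factory=lambda: [None] * 6)
    arms: list[Arm] = field(default_factory=lambda: [Arm() for _ in range(6)])
    color_index: int = 0
    border_ratio: float = 0.0
```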

That’s just an initial assessment from the XScreenSaver C source code for hextrail. I’m feeling confident I can work with Claude and Codex to make a passable start on a featureful, modernized peyote app, with hextrail and my old sketch, substrate, as starters.

Bottom Line

As many others diving into agentic coding point out, the process might not be more productive for straightforward code creation. But these tools lower the barrier for kicking off new projects and getting them above the threshold.

Forza!


Mirascope and Lilypad

Link parkin’: Mirascope for programming with LLMs

Mirascope is a powerful, flexible, and user-friendly library that simplifies the process of working with LLMs through a unified interface that works across various supported providers, including OpenAI, Anthropic, Mistral, Google (Gemini/Vertex), Groq, Cohere, LiteLLM, Azure AI, and Bedrock.

Whether you’re generating text, extracting structured information, or developing complex AI-driven agent systems, Mirascope provides the tools you need to streamline your development process and create powerful, robust applications.

And Lilypad for observability and context engineering

Context engineering refers to structuring everything an LLM sees so it provides the right response. This involves curating and sequencing the information that’s sent to the model, a task that goes beyond just writing prompt instructions.

That’s why we built Lilypad, a context engineering framework that versions, traces, and evaluates everything influencing an LLM’s output, not just the prompt, allowing you to reproduce, compare, and improve every input, parameter, and piece of context systematically, rather than through trial and error.

I’m eyeing Lilypad as a light(ish)weight option for getting started with AI evals on a small scale. The Docker Compose deployment to launch the necessary services doesn’t look too bad, only kicking off Zookeeper, Kafka, Postgres, and Lilypad containers. And with a modernized version of Kafka you could probably kill the Zookeeper process to lighten the load a bit.

The TalkPython course on LLM Building Blocks for Python has some coverage on Mirascope:

Dive into LLM Building Blocks for Python, a concise 1.2-hour video course that equips you with everything you need to integrate large language models into your Python applications. You’ll learn to move beyond “text in → text out” by turning your prompts into structured data, orchestrating chat-style workflows, and building interactive prototypes. From rapid-fire notebook experiments to production-ready async pipelines and caching, this course gives you practical, code-first techniques for real-world LLM development.

I actually went through the course when it first came out in the middle of the summer (shout out again Vincent Warmerdam), but it didn’t really sink in. Now, with the benefit of more experience with AI API client frameworks, it makes more sense. The decorator and “prompt function” styles feel attractive. Also, Vincent didn’t cover Lilypad, which I’m hoping can provide a starter kit deployment for prototyping processes around LLM product evals.
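To give a flavor of that decorator style, here’s a hedged sketch; the import path and call signature vary across Mirascope versions, so treat it as illustrative rather than canonical:

```python
from mirascope import llm


# A "prompt function": the body returns the prompt, the decorator handles the call.
# Provider, model name, and response attribute are assumptions for this sketch.
@llm.call(provider="openai", model="gpt-4o-mini")
def recommend_track(genre: str) -> str:
    return f"Recommend one {genre} track and explain why in a sentence."


response = recommend_track("liquid drum and bass")
print(response.content)
```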


parakeet-mlx

Link parkin’: parakeet-mlx

Parakeet MLX

An implementation of the Parakeet models - Nvidia’s ASR (Automatic Speech Recognition) models - for Apple Silicon using MLX.

Regarding the models, here’s info from a June NVIDIA blog post:

NVIDIA Parakeet TDT 0.6B v2 is a 600-million-parameter automatic speech recognition (ASR) model designed for high-quality English transcription. It is currently ranked #1 on the Hugging Face ASR leaderboard, alongside four other top-ranking NVIDIA Parakeet models. NVIDIA NeMo Canary models have also made their mark on the Hugging Face ASR leaderboard.

This post explores how these and other cutting-edge NVIDIA speech AI models are setting new benchmarks for accuracy, speed, and versatility in automatic speech recognition (ASR). We will review model highlights, leaderboard performance, and practical deployment options so you can leverage these state-of-the-art models for real-world applications.

Another possible transcription backend for retrocast.


Diggin’ On PAUZA

Listened to the EDC Mexico set from PAUZA on Apple Music and really enjoyed it. It’s very much in the same vein as Mestiza. The set was so authentic that it even included a train wreck or two 😆.

I was working with Claude to confirm the name and get some background.

You were right! PAUZA is indeed a DJ duo—specifically Cuba’s first female electronic music duo, formed by Paula Fernández and Zahira Sánchez from Havana.

The name PAUZA is actually a combination of their names: “PAU” (from Paula) and “ZA” (from Zahira), not directly a translation from Spanish or Slavic languages.

They’re known for blending electronic music with traditional Afro-Cuban rhythms and have over 25 million Spotify streams.

I apologize for my initial assumption about the name’s origin!

From PAUZA’s Resident Advisor bio:

PAUZA is a Cuban DJ duo conformed by Paula Fernández and Zahira Sánchez, known for their distinctive fusion of electronic music and traditional / folkloric Afro Cuban rhythms. Since their formation, they have stood out on the international music scene with their unique and energetic style.

Latin-themed mix sets have been providing good vibes recently.


TIL: Liquid Funk

Quoting from Wikipedia on Liquid funk

Liquid funk, liquid drum & bass, liquid DnB, melodic drum & bass, or sometimes just liquid is a subgenre of drum and bass. While it uses similar basslines and bar layouts to other styles, it contains fewer bar-oriented samples and more instrumental layers (both synthesized and natural), harmonies, melodies and ambiance, producing a sentimental atmosphere directed at home listeners as well as nightclub and rave audiences. Music genres such as jazz, soul and sometimes blues have a pivotal influence on liquid funk.

Previously, I had personally referred to this style of music as “Soulful DnB”. I can’t remember exactly what triggered a deeper investigation (probably liner notes on an Apple Music mix) but I poked ChatGPT to investigate:

Is “liquid dnb” a well recognized term? Check your work, provide citations and an explanation if true

What “Liquid DnB” Means

  • Liquid DnB is commonly used as shorthand for liquid drum & bass — a subgenre of drum & bass (DnB) music.
  • It’s characterized by lush melodies, smoother rhythms, soulful harmonies, and atmospheric textures, often incorporating influences from jazz, soul, funk, and ambient music.
  • This style typically contrasts with harder and darker DnB subgenres like neurofunk or techstep by focusing on musicality and emotion. 

Confidence was bolstered when artists I recognized appeared in the response: DJ Marky, Calibre, London Elektricity, High Contrast, etc. I additionally cross-checked with Claude, which also returned the Wikipedia link. Oddly, there were no follow-up questions, unlike ChatGPT. +1 to OpenAI.

This all worked out well: I learned more about a genre of music that I really love. I had no idea that this style has been around for over 25 years! Bonus: a few more artists to look into.


Marimo and Agentic Coding

Eric J. Ma dives into agents and notebooks with Use coding agents to write Marimo notebooks.

In this blog post, I share how combining AI coding assistants with Marimo notebooks can supercharge your Python development and data science workflows. I walk through handy features like the --watch flag for live updates, the marimo check command for code quality, and even advanced options like MCP and built-in AI editing. Curious how you can automate and speed up your notebook workflow while keeping your code clean?

If you’re like me, you might find coding with AI assistants somewhat addictive. And if you’re like me, you might also like to write code in Marimo notebooks, the modern alternative to Jupyter that offers better reproducibility and cleaner Python development.

Turns out there’s a way to put these two together for automated Python development and data science workflows, creating a powerful combination for rapid prototyping and iterative coding.
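Part of why this combination works is that a marimo notebook is just a Python file, so a coding agent can read and edit it like any other module. Roughly (the details vary by marimo version), a notebook looks like this:

```python
import marimo

app = marimo.App()


@app.cell
def _():
    import marimo as mo
    return (mo,)


@app.cell
def _(mo):
    mo.md("Cells are plain decorated functions, so diffs and reviews just work.")
    return


if __name__ == "__main__":
    app.run()
```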

Previously, I noted how JupyterAI has an interesting notebook editing model. I still have respect for what’s going on there. But Marimo seems to be pushing really hard in this space. There’s something to be said for startup energy.

AI-assisted coding

marimo is an AI-native editor, with support for full-cell AI code generation:

  • generating new cells from a prompt
  • refactoring existing cells from a prompt
  • generating entire notebooks

as well as inline autocompletion (like GitHub Copilot).

marimo’s AI assistant is specialized for working with data: unlike traditional assistants that only have access to the text of your program, marimo’s assistant has access to the values of variables in memory, letting it code against your dataframe and database schemas.

This guide provides an overview of these features and how to configure them.


A Lesson From 90,000 Scrobbles

last.fm 90,000 scrobbles screen capture

I recently reached 90,000 scrobbles recorded on last.fm! That spans a little over 20 years 😲 — roughly 4,500 per year, approximately 12 tracks a day. Depending upon the style of music, that’s easily 1 complete CD / album / mix a day, every day, for 20 years.

Let’s dig in a bit …

read more ...


Spotify Prompted Playlists

I dumped Spotify a few years back and still firmly believe Apple Music best fits my taste for DJ mixes. This new Spotify feature, Prompted Playlists, at least sounds intriguing.

Imagine a Spotify that doesn’t just passively learn from you but literally listens to you. One you can steer and shape with your own words. For the first time, your ideas, your logic, and your creativity can actually power the Spotify algorithm, directing how it thinks, adapts, and responds to you.

Introducing Prompted Playlist, where your words bring your playlist to life

Starting December 11, Premium listeners in New Zealand will get early access to our latest beta feature: Prompted Playlist. It’s the first feature that puts control of the algorithm and the broader Spotify experience directly in your hands.

Prompted Playlists let you describe exactly what you want to hear and set the rules for your personalized playlist. And unlike anything before it, this feature taps into your entire Spotify listening history, all the way back to day one. Each playlist reflects not only what you love today, but the full arc of your taste. Spotify then curates and keeps it fresh based on your listening patterns and world knowledge.

It seems slightly odd that the press release doesn’t explicitly mention “artificial intelligence.” It would be hard to believe there isn’t some of it under the covers. But maybe that’s deft expectations management. Carefully avoiding promises of ChatGPT-like affordances is a good idea.

This also augurs battles between existing online services and major AI platforms with prominent UIs: OpenAI, Anthropic, Google, etc. In principle, the AI platforms can add features like Prompted Playlists. In practice, they would have to fully take on the incumbent to match its established advantage. Spotify has years of insight, knowledge, technology, and lock-in related to playlists. It’s hard to see an AI platform being as effective or convenient without that head start.

Do AI features trend toward being invisible in successful products, with only one or two viable “omni-knowledge” platforms?

If users enthusiastically adopt Prompted Playlists, what other playlist types (e.g., RSS and podcast subscriptions, social media follow bundles, streaming video catalog dives) are amenable to this approach? Might be fun to cook up a few prototypes for reps and exploration.

Can’t wait for Apple Music to match!


whisperX

Link parkin’: whisperX

This repository provides fast automatic speech recognition (70x realtime with large-v2) with word-level timestamps and speaker diarization.

  • ⚡️ Batched inference for 70x realtime transcription using whisper large-v2
  • 🪶 faster-whisper backend, requires <8GB gpu memory for large-v2 with beam_size=5
  • 🎯 Accurate word-level timestamps using wav2vec2 alignment
  • 👯‍♂️ Multispeaker ASR using speaker diarization from pyannote-audio (speaker ID labels)
  • 🗣️ VAD preprocessing, reduces hallucination & batching with no WER degradation

Whisper is an ASR model developed by OpenAI, trained on a large dataset of diverse audio. Whilst it does produce highly accurate transcriptions, the corresponding timestamps are at the utterance-level, not per word, and can be inaccurate by several seconds. OpenAI’s whisper does not natively support batching.

retrocast is my personal RAG “auditionware” project for investigating podcast episodes. It’s meant to combine:

  • agentic coding
  • AI-powered search
  • AI-powered UX

So far, the agentic coding has accelerated my ability to implement features. Recently, episode archiving was completed. Next up: generating transcripts for full-text and embedding search.

Looks like whisperX fits the bill for a first crack at ASR for retrocast.
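For my own notes, a transcription pass following the README’s quickstart pattern looks roughly like this; the model size, device, and batch size are assumptions to tune per machine:

```python
import whisperx

device = "cpu"  # "cuda" if you have the GPU memory
model = whisperx.load_model("large-v2", device, compute_type="int8")

audio = whisperx.load_audio("episode.mp3")
result = model.transcribe(audio, batch_size=8)

for segment in result["segments"]:
    print(f"[{segment['start']:7.1f} - {segment['end']:7.1f}] {segment['text']}")
```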


Television, Internet, Everything

Cal Newport recently engaged with an article by Derek Thompson entitled Everything is Television. I came across Newport’s thinking via a recent podcast episode. Here’s the gist:

Last month, Derek Thompson published an intriguing essay that made waves in technology criticism circles. It was titled: “Everything is Television.” In today’s episode, Cal takes a closer look at this essay, unpacking and expanding Thompson’s arguments, and ultimately concluding with a series of predictions about what to expect next from the internet.

Both posts are worth a read and I’m not going to spoil the punchlines here. Give ’em a click and digest in full. They did trigger a burr that’s been itching me lately. As someone who was around when straight blogging took off, I’ve always been somewhat bemused about how much more successful podcasting has become. Even as curmudgeonly as I am, I subscribe to a decent number of podcast feeds, while admittedly listening to a small handful of episodes. I probably average fewer than five episodes a week, if that.

Still, the pivot to video is disappointing. The fact that a podcast is really a hosted YouTube or Twitch stream invariably shows through. The conversation inevitably snags on some visual element. Part of the reason I came to enjoy podcasts is that they were an intelligent form of radio that I could use mostly as background noise with occasional moments of insight.

Also, video requires higher costs and greater production value to be sustainable. So now episodes come with a heaping helping of “check out our livestream,” and “don’t forget to hit those like, subscribe, and share buttons.” 🤮 The drive to build a large enough audience to stay credible on YouTube, TikTok, and maybe The Socials (TM) is amplified.

Enough whinging. Like many things, I’ll stick with listening to podcast episodes despite my complaints. I’ve also been lucky not to hit (much) AI slop. At some point it’ll be “not for me,” and I’ll move on.

Bonus Ramblin’

Just some incoherent follow-up on the bigger issue Newport and Thompson are engaged with. Thompson’s definition of television is similar to, but not the same as, the TV that I grew up with:

By “television,” I am referring to something bigger than broadcast TV, the cable bundle, or Netflix. In his 1974 book Television: Technology and Cultural Form, Raymond Williams wrote that “in all communications systems before [television], the essential items were discrete.” That is, a book is bound and finite, existing on its own terms. A play is performed in a particular theater at a set hour. Williams argued that television shifted culture from discrete and bounded products to a continuous, streaming sequence of images and sounds, which he called “flow.” When I say “everything is turning into television,” what I mean is that disparate forms of media and entertainment are converging on one thing: the continuous flow of episodic video.

If all media (Thompson), or “just” Internet media (Newport), collapse into television, there are some interesting dynamics. I find resonance with Newport’s idea that social media behemoths may fall by the wayside, as television doesn’t have the same social network lock-in. Alternatively, they might try to morph into the new form … and fail.

Also, from my personal experience, content bifurcates into two bins: one, high-priced live-event spectacles (sports, superstar concerts, breaking news); and two, an infinite, mostly undifferentiated catalog of on-demand narratives (everything else).

On the latter point, I’m probably not the only one to notice that the UX of many streaming services is frightfully similar. Sometimes I wonder if Netflix, Prime, Peacock, Paramount+, and Disney+ all paid a visit to the same design shop. Apple, of course, has to be different.

I’m not quite sure whether drop-in, screaming-head outrage generation (cable news, sports talk) deserves its own bin due to eminent disposability.

Lastly, we’ve seen that collapse into an undifferentiated mass with text on the web in general and print news in particular. It hasn’t gone very well.


copyedit: Update

copyedit is just a personal tool, built with agentic coding, in the spirit of Simon Willison. It prompts an LLM to copyedit text. At first, I wasn’t sure how well it would work, but I’ve been using it to clean up posts here for a while now. It actually works quite well.

There’s likely a plethora of ways to solve this problem (e.g. vale), but I’ve had fun putting this together.

In building it I’ve learned a few things:

  • The utility of hijacking a CLI module like Willison’s llm has been immense. There’s a lot of underlying plumbing that comes in handy. At the same time, llm hasn’t clicked as a Python package for me.
  • With a bit of prompting, LLMs are pretty good at keeping my general style, but there are a few mannerisms I’m going to need to add to the system prompt, for example leaving off leading ‘I’s and ‘It’s on fragments.
  • Annoyingly, LLMs don’t seem to comprehend word wrapping, although I should actually test that out. It feels a bit heavyweight to apply AI to that task, so I should probably apply a word-wrapping post-processing step, along the lines of the sketch below.
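A minimal sketch of that post-processing step; the column width and the blank-line paragraph heuristic are assumptions, and a real version would need to skip fenced code blocks and lists:

```python
import textwrap


def rewrap(text: str, width: int = 72) -> str:
    """Re-wrap prose paragraphs to a fixed column width."""
    paragraphs = text.split("\n\n")
    wrapped = [
        textwrap.fill(" ".join(p.split()), width=width) if p.strip() else p
        for p in paragraphs
    ]
    return "\n\n".join(wrapped)
```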

Speaking of vale:

Vale is an open-source, command-line tool that brings your editorial style guide to life.

And looks like there’s a good book from Pragmatic Programmers:

Write Better with Vale

Automate Your Style Guides and Lint Prose Like You Lint Code

by Brian P. Hogan

Create consistent content that gives readers confidence with Vale, the open-source prose linter that helps you enforce your style guide automatically. Use battle-tested rules based on freely available, popular style guides, apply your brand’s terms with a custom vocabulary, and integrate Vale into your text editor, Git hooks, and CI pipeline. Catch typos and inclusive-language issues before they ship, and spend your energy on shaping ideas instead of fixing copy. Whether you’re a technical writer working in a docs-as-code environment, or a software engineer who occasionally writes, you’ll ship clean, consistent copy every time.


Pondering Ghostty

I’m gonna make an investment in Ghostty for some of my agentic coding adventures. This will supplement my long-lived, go-to terminal, iTerm2. Two projects I want to actively explore — Amp and opencode — suggest they fit better with super-modern, graphics-accelerated terminal emulators. Also, it’s probably time for me to get out and about a bit in the terminal space. There’s evidence that I’ve been using iTerm2 for at least 13 years 😳!

Mitchell Hashimoto’s “About” statement for the project resonates.

About Ghostty

Ghostty is a terminal emulator that differentiates itself by being fast, feature-rich, and native. While there are many excellent terminal emulators available, they all force you to choose between speed, features, or native UIs. Ghostty provides all three.

In all categories, I am not trying to claim that Ghostty is the best (i.e. the fastest, most feature-rich, or most native). But when I set out to create Ghostty, I felt all terminals made you choose at most two of these categories. I wanted to create a terminal that was competitive in all three categories and I believe Ghostty achieves that goal.

Before diving into the details, I also want to note that Ghostty is a passion project started by Mitchell Hashimoto (that’s me!). It’s something I work on in my free time and is a labor of love. Please don’t forget this when interacting with the project. I’m doing my best to make something great along with the lovely contributors, but it’s not a full-time job for any of us.

Full disclosure, I’m a GitHub sponsor of the iTerm2 project for a token amount monthly.


Festivitas Redux

After picking up Festivitas last year, I enjoyed the app so much that I basically ran it all the time on multiple personal computers.

Meanwhile, Simon Støvring, the Festivitas author, shipped out an email with the following information:

This season, Festivitas turns your Mac into a winter wonderland with snow. Just like the festive lights, the snow is fully customizable so you can create anything from gentle drifting flakes to a full blizzard ❄️

The update will be automatically installed when you launch Festivitas on your Mac.

And for the first time, Festivitas comes to iPhone and iPad with animated festive lights on your Home Screen 🎄📱

Snow on the desktop is pretty cool ⛄️!

I did a quick install and poked around the iOS app. I’m not quite sure I understand how it’s supposed to work, but I’m game to give it some more time. I may just make some in-app purchases to support Simon.


Discogs Data 2025 Complete

You can search this blog and see that I’ve got a long history with Discogs Data Dumps. Over the years, though, it seems that some glitch would prevent delivery of a dump for a month here or there. I should do an analysis to illustrate the most recent calendar year that had a “complete” set of dumps.

The good news is that 2025 resulted in captures for all twelve months. I’ve been monitoring the Discogs discussion forums, and unfortunately there were continual data-quality issues. Seems like they got solved with the latest captures for November and December. Way to finish strong, Discogs.

Here’s to 2026 being a banner year for Discogs Data Dumps 🤞!


Diggin’ On: FABRICLIVE presents Pola & Bryson

Pola & Bryson’s mix has been in my rotation for a couple of months now. Had some highway time today, and it demanded an audience with CarPlay. It’s of a piece with the work of DJ Marky and Calibre. There’s probably a more fashionable term, but I call it “Soulful DnB”. Definitely drum ’n’ bass at the core: less jump-up, not much toasting, more soulful vocals, and yes, piano licks (yellow shoes …).

FABRIC MIXES

FABRICLIVE presents Pola & Bryson

Rising D&B duo Pola & Bryson launch FABRICLIVE presents, a brand new mix and compilation series paying homage to the iconic FABRICLIVE series, which ran from 2001 to 2018 and became a defining platform for bass music culture in the UK.

It includes lead single, ‘Worlds Apart’ ft. Emily Makis, and the much-awaited follow-up collaboration since 2023’s ‘Phoneline’. More new music comes from Etherwood & Hugh Hardie, Xeonz, Charli Brix and Molecular, not forgetting a jam-packed mix featuring heavyweight names in D&B such as Camo & Krooked, Mefjus and Breakage, alongside newcomers like IYAMAH, Monrroe, and Thread.

Regular readers know I’m in the tank for anything Fabric London. I’ve got a digital version of all 100 releases for both Fabric and FabricLive, plus a decent collection of the original physical CDs. What I didn’t realize is that this Pola & Bryson episode is the start of a new mix series. I had just read it as another release in the Fabric Presents series.

Looking forward to seeing more from the renewed FabricLive.

Memo to self: download that Fabric Presents Floorplan, stat


Amp Coding Agent

Link parkin’: Amp

Why Amp?

Amp is a frontier coding agent for your terminal and editor, built by Sourcegraph.

  • Multi-Model: Sonnet, GPT-5, fast models—Amp uses them all, for what each model is best at.
  • Opinionated: You’re always using the good parts of Amp. If we don’t use and love a feature, we kill it.
  • On the Frontier: Amp goes where the models take it. No backcompat, no legacy features.
  • Threads: You can save and share your interactions with Amp. You wouldn’t code without version control, would you?

Amp has 2 modes: smart (unconstrained state-of-the-art model use) and free (free of charge, using fast basic models).

Want to go much deeper? Watch our Raising an Agent podcast that chronicles the first few months of building Amp, and see our FIF.

This post was started before a recent edition of Hamel Husain’s newsletter crossed my transom today (apologies, can’t find a Web page version to link to). He’s positive on Amp as well:

Even though Cursor is my daily driver, I frequently try new coding agents. One that’s stood out for me over the last few months is Amp. Amp is the Omakase experience of coding agents: the team maniacally experiments with the frontier of coding models, and tweak the tools, system prompt, etc accordingly for you. For example, over the past week they moved the default model from Gemini 3 to Opus 4.5. (BTW all of this is backed by rigorous internal evals, which allows them to move quickly).

Typical of Husain, his email contains much more detail, including caveats for working with Amp. For example, the version of Amp that he uses is priced on a per-token basis, which can get expensive if you’re not careful. He also reported that the Amp CLI had problems with iTerm but worked well in Ghostty. Hopefully the newsletter will be made available somewhere easy to reference.

The field of terminal user-interface coding agents is getting crowded, but Amp is worth watching.


TIL: podcast-archiver

TIL: podcast-archiver

A fast and simple command line client to archive all episodes from your favorite podcasts.

Podcast Archiver takes the feed URLs of your favorite podcasts and downloads all available episodes for you—even those “hidden” in paged feeds. You’ll end up with a complete archive of your shows. The archiver also supports updating an existing archive, so that it lends itself to be set up as a cronjob.

For my retrocast project, I’ve been casting about for a solution to reliably download all the episodes from an RSS feed. aria2 is the current path being pursued. The approach has been to fork aria2 as a subprocess and then feed it episode URLs over one of aria2’s RPC-over-HTTP interfaces. Not as elegant as I’d like.

A trick I’ve picked up is hijacking click‑based CLI projects and blending them into my own CLI projects. Either via eyeball inspection or metaprogramming, it’s easy to grab commands and wire them into my own app. This technique needs some refinement to deal with potential configuration impedance mismatches between CLI toolkits. I’ll go into detail in another post, but I put it to good effect by integrating Simon Willison’s llm package into another personal project. I got the 90% of llm that seamlessly manages models, plugins, prompts, and AI vendor APIs. My extra bits of functionality and porcelain benefit from it.
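The core of the trick is that click lets you mount someone else’s Command onto your own Group. A minimal sketch; the podcast_archiver import path here is hypothetical, so check where the project actually defines its top-level command:

```python
import click

# Hypothetical import path -- the real module and attribute names need checking.
from podcast_archiver.cli import main as archive_cmd


@click.group()
def cli():
    """retrocast's own top-level CLI."""


# Mount the third-party command under our own subcommand name.
cli.add_command(archive_cmd, name="archive")


if __name__ == "__main__":
    cli()
```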

So next up, let’s apply this to retrocast and podcast-archiver.

Bonus TIL: rich-click

rich-click is a wrapper around Click that renders help output nicely using Rich.

  • Click is a “Python package for creating beautiful command line interfaces”.
  • Rich is a “Python library for rich text and beautiful formatting in the terminal”.

The intention of rich-click is to provide attractive help output from Click, formatted with Rich, with minimal customization required.

Discovered this because podcast-archiver uses rich-click to implement a pretty text interface. I’m a heavy user of both packages, so this is definitely one to take a look at.
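The drop-in pattern is appealingly small. A minimal sketch; the option is just illustrative:

```python
import rich_click as click  # drop-in replacement for "import click"


@click.command()
@click.option("--feed", help="Podcast feed URL to archive.")
def archive(feed: str) -> None:
    """Download every episode from FEED."""
    click.echo(f"Archiving {feed} ...")


if __name__ == "__main__":
    archive()
```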


TaskiPy

Link parkin’: TaskiPy

The complementary task runner for python.

Every development pipeline has tasks, such as test, lint or publish. With taskipy, you can define those tasks in one file and run them with a simple command.

I’m sort of in the tank for PoeThePoet, but it’s good to be aware of other options. Need to track down a comparison.

Via Vincent Warmerdam


Onyx

Link parkin’: Onyx

Onyx is a feature-rich, self-hostable Chat UI that works with any LLM. It is easy to deploy and can run in a completely airgapped environment.

Onyx comes loaded with advanced features like Agents, Web Search, RAG, MCP, Deep Research, Connectors to 40+ knowledge sources, and more.

From the docs:

Why Onyx?

Why choose Onyx over ChatGPT, Microsoft Copilot, Google Gemini, or Glean?

  • Reliable Responses from Internal Knowledge: Onyx builds a knowledge index using LLM-native techniques. Powered with contextual retrieval, advanced RAG, hybrid search, and AI-generated knowledge graphs, Onyx ensures the most relevant results and the least hallucinations.

  • Open Source: Onyx is made for security, data privacy, and ease of self-hosting. It’s easy and free to get started. For teams investing in a long-term GenAI strategy, Onyx can be easily extended or customized to your team’s future needs.

  • Highly Configurable: Onyx is designed to be flexible, so you can optimize the experience for your team. Plug and play any LLM model, connect only the data you need, and enable the features you want your team to use.

You can’t have too many options for implementing interfaces for AI applications.


Enjoying GitHub Copilot Agents

In my agentic coding explorations, I really like the coding agents provided by GitHub Copilot. Here’s the 101 intro.

Earlier this year, GitHub introduced an integrated, enterprise-ready coding agent for Copilot. Coding agent is a software engineering (SWE) agent that runs independently in the background to complete assigned tasks — similar to a peer developer.

The agent starts its work when you hand it a task. Then it spins up a fully customizable development environment, powered by GitHub Actions. You can track it every step of the way, from issue to pull request to review to approval.

The agent is designed to help you offload tasks like fixing bugs, test coverage, or refactoring code, so you can work on what interests you most. ✨

More from the docs:

Benefits over traditional AI workflows

When used effectively, Copilot coding agent offers productivity benefits over traditional AI assistants in IDEs:

With AI assistants in IDEs, coding happens locally. Individual developers pair in synchronous sessions with the AI assistant. Decisions made during the session are untracked and lost to time unless committed. Although the assistant helps write code, the developer still has a lot of manual steps to do: create the branch, write commit messages, push the changes, open the PR, write the PR description, get a review, iterate in the IDE, and repeat. These steps take time and effort that may be hard to justify for simple or routine issues.

With Copilot coding agent, all coding and iterating happens on GitHub as part of the pull request workflow. You can create multiple custom agents that specialize in different types of tasks. Copilot automates branch creation, commit message writing and pushing, PR opening, and PR description writing. Developers let the agents work in the background and then steer Copilot to a final solution using PR reviews. Working on GitHub adds transparency, with every step happening in a commit and being viewable in logs, and opens up collaboration opportunities for the entire team.

The integration with GitHub, no surprise, is excellent and has been working well for me. It’s easy to interact with an agent through a PR. Launching agents based on GitHub Issues is trivial and natural. I can push commits to an agent’s development branch seamlessly. The UX for agent session tracking has been better than what I’ve experienced with Claude Code, Codex, and Jules. Not a huge difference, but the polish feels a notch above.

The only downside so far is that you don’t have any way to specify a particular model to use. Despite that, the code results have been on par with what I’ve seen from other coding agents. In any event, I ponied up for a $10-a-month subscription, and I’m already cranking through features on personal projects. It feels like good value.


Agor: Agent Orchestration

Link parkin’: Agor

Think Figma, but for AI coding assistants. Orchestrate Claude Code, Codex, and Gemini sessions on a multiplayer canvas. Manage git worktrees, track AI conversations, and visualize your team’s agentic work in real-time.

TL;DR: Agor is a multiplayer spatial canvas where you coordinate multiple AI coding assistants on parallel tasks, with GitHub-linked worktrees, automated workflow zones, and isolated test environments—all running simultaneously.

Picked this up from an episode of the Data Engineering Podcast featuring Max Beauchemin of Apache Airflow and Apache Superset fame. Definitely give it a listen.

In this crossover episode, Max Beauchemin explores how multiplayer, multi‑agent engineering is transforming the way individuals and teams build data and AI systems. He digs into the shifting boundary between data and AI engineering, the rise of “context as code,” and how just‑in‑time retrieval via MCP and CLIs lets agents gather what they need without bloating context windows. Max shares hard‑won practices from going “AI‑first” for most tasks, where humans focus on orchestration and taste, and the new bottlenecks that appear — code review, QA, async coordination — when execution accelerates 2–10x. He also dives deep into Agor, his open‑source agent orchestration platform: a spatial, multiplayer workspace that manages Git worktrees and live dev environments, templatizes prompts by workflow zones, supports session forking and sub‑sessions, and exposes an internal MCP so agents can schedule, monitor, and even coordinate other agents.

It’s very early days for Agor, but the UX Beauchemin described was interesting. Effectively, he hijacked Kanban cards as a frontend for agent orchestration. The claim is that Agor is great for individual users and is showing promise for teams. Also, a wide range of agentic coders are supported, including Claude Code, Codex, Gemini, and opencode. User interfaces in the orchestration space should be open season for experimentation.


Gemini CLI Tips

Link parkin’, from Addy Osmani: Gemini CLI Tips & Tricks

This guide covers ~30 pro-tips for effectively using Gemini CLI for agentic coding

Gemini CLI is an open-source AI assistant that brings the power of Google’s Gemini model directly into your terminal. It functions as a conversational, “agentic” command-line tool - meaning it can reason about your requests, choose tools (like running shell commands or editing files), and execute multi-step plans to help with your development workflow.

In practical terms, Gemini CLI acts like a supercharged pair programmer and command-line assistant. It excels at coding tasks, debugging, content generation, and even system automation, all through natural language prompts. Before diving into pro tips, let’s quickly recap how to set up Gemini CLI and get it running.

Tip 16 on integrating shell interaction seems particularly handy. Seems like all the other CLI tools have this in one form or another, but this tip is a good discussion of the utility of the feature.

Previously, I found Gemini CLI to be less effective than Claude Code. Just last week, Google had a raft of AI announcements, including a batch of agentic coding upgrades connected to Gemini CLI.

Gemini 3 Pro is now available in Gemini CLI

We’ve integrated Gemini 3 Pro, our most intelligent model, directly into Gemini CLI to unlock a new level of performance and productivity in the terminal. This powerful combination delivers state-of-the-art reasoning for executing better commands, enhances support for complex engineering work through agentic coding, and enables smarter, more tailored workflows via advanced tool use.

It’s probably time to revisit Gemini CLI, although the passel of agentic coding CLI tools is getting hard to keep track of.


Switching to marimo

Parul Pandey has a lot of experience with JupyterLab and Notebooks. She switched to marimo and documented the whys and wins.

TL;DR: After years of using JupyterLab, I have moved most of my work to marimo notebooks, a new kind of Python notebook that addresses many long-standing issues with traditional ones. This article covers the reasons behind my transition and how marimo fits naturally into my current workflow, with full gratitude to Project Jupyter for building the notebook ecosystem that shaped data science, research and education.

There’s a lot of content, but it captures much of the marimo upside that has come across my transom. None of the observations were particularly new to me, but it’s good to have them in one place.

Pandey’s point on AI integration covered an item I discussed previously, JupyterAI.

marimo also has some really useful AI-assisted coding features built in, which have been very helpful for me. To be honest, Jupyter also has a JupyterLab extension for AI features called JupyterAI (and yes, I have written about that too), but having AI support built into marimo removes a lot of the friction for me.

JupyterAI is the Jupyter project’s effort to bring in AI capabilities. I just generated some top-level thoughts, but Pandey wrote up her experience, Build Your Own AI Coding Assistant in JupyterLab with Ollama and Hugging Face, in depth. It’s good:

As you can gauge from this article, Jupyter AI makes it easy to set up a coding assistant, provided you have the right installations and setup in place. I used a relatively small model, but you can choose from a variety of models supported by Ollama or Hugging Face. The key advantage here is that using a local model offers significant benefits: it enhances privacy, reduces latency, and decreases dependence on proprietary model providers. However, running large models locally with Ollama can be resource-intensive so ensure that you have sufficient RAM. With the rapid pace at which open-source models are improving, you can achieve comparable performance even with these alternatives.


Armin and Agents

Armin Ronacher has been working on agentic systems. He’s one of the reasons I’m skeptically optimistic about AI coding agents. His work is a prime example of principled, considerate, measured, and well-informed exploration of the possibilities.

Currently he thinks Agent Design Is Still Hard

TL;DR: Building agents is still messy. SDK abstractions break once you hit real tool use. Caching works better when you manage it yourself, but differs between models. Reinforcement ends up doing more heavy lifting than expected, and failures need strict isolation to avoid derailing the loop. Shared state via a file-system-like layer is an important building block. Output tooling is surprisingly tricky, and model choice still depends on the task.

Lots of goodness and, to be honest, stuff that’s over my head at the moment. I’m definitely not as deep in the weeds as Armin. However, I was struck by this statement:

Testing and Evals

We find testing and evals to be the hardest problem here. This is not entirely surprising, but the agentic nature makes it even harder. Unlike prompts, you cannot just do the evals in some external system because there’s too much you need to feed into it. This means you want to do evals based on observability data or instrumenting your actual test runs. So far none of the solutions we have tried have convinced us that they found the right approach here. Unfortunately, I have to report that at the moment we haven’t found something that really makes us happy. I hope we’re going to find a solution for this because it is becoming an increasingly frustrating aspect of building an agent.

I’m primarily attracted to the plumbing aspects, tracing and observability, but AI evals practice does not seem to be converging on common processes.


Last Tea Shop Complete

L. M. Orchard had an intriguing reference to a narrative tabletop role-playing game (TTRPG), entitled Last Tea Shop Complete

Last Tea Shop Complete is a revised and expanded version of a one-page solo game about tea and conversation. Set up your stall and wait for visitors to emerge out of the mist. Over a cup of tea, chat to the visitor about their life and help prepare them for their trip into the Lands of the Dead. Record their thoughts, fears, dreams and hopes in your journal for posterity.

Orchard put this in a category of solo journaling games. Considering myself a connoisseur of good narrative and a halfway decent writer, this might be a way to cut down on screen time and exercise the creative muscles.

Here’s a quick Kagi search discovery for a definition from Tina Alberion

Solo journaling games are incredibly diverse, but their unifying elements are prompts and recording/documenting. Many solo journaling games allow the player to determine their journaling method. Some strongly recommend a specific method for thematic purposes (like voice recordings for Call Your Exes), but for the most part, players may use a journal, voice or video recordings, or even just their imagination to play.

And a deeper dive from Jordan Rocke

What is a Solo Journaling TTRPG?

OK, so what are they? Basically, it’s a sub-genre of TableTop Role Playing Games (like Dungeons & Dragons, etc). TTRPGs are games where you tell a story with some friends, using some external element to determine how successful you are at doing what your character wants to do. That’s normally dice, but coins, playing cards, tarot cards and other elements are slowly entering the field.

The issue with games like Dungeons & Dragons, and basically all the other big name RPG systems, is that they only function as multiplayer games. Clearly, as being an adult is scheduling hell, a lot of people hit the point where they wanted to try playing games that weren’t reliant on a group of busy people finding time and energy to play. Enter solo TTRPGs! …

Instead, let me try to sell you specifically on solo journaling RPGs! The premise is simple: you are writing an account of your experiences from the perspective of a fictional character. The normal format is that you are given a prompt, and have to write a short (or sometimes long) passage to respond to it. The line between TTRPG and creative writing exercise is a little murky, but it’s fun all the same. The specific ways these games work varies A LOT, and it’s a little difficult to actually summarise in any way that covers all solo journaling RPGs. So, let me take you through some of the games I’ve tried, and maybe you’ll see something interesting!

I think I’ve found a new rabbit hole to fall into.


Hone, Enamored

I’m not huge on affirmations. I lean more towards stoicism than positivity and spiritually manifesting success. On a daily basis, though, I try to mentally hold on to a few concise phrases for motivation and emotional guardrails. Shorter is better, and single words are often best.

Today, I became enamored of the verb hone, defined in Wiktionary as:

Verb

hone (third-person singular simple present hones, present participle honing, simple past and past participle honed)

  1. (transitive) To sharpen with a hone; to whet.
  2. (transitive) To use a hone to produce a precision bore.
  3. (transitive) To refine (a skill especially) by learning.
  4. To make more acute, intense, or effective.

The last two meanings resonate. Moment to moment, I’m working to refine myself into a more acute, intense, and effective instrument. Added to the kit.


TIL: Click Context Parameter Source

TIL that click, my favorite CLI processing toolkit, has a mechanism to determine how a CLI option was provided: Context.get_parameter_source.

Get the source of a parameter. This indicates the location from which the value of the parameter was obtained.

This can be useful for determining when a user specified a value on the command line that is the same as the default value. It will be DEFAULT only if the value was actually taken from the default.

Via Stack Overflow
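
Here’s a minimal sketch of how that check might look. The --retries option and the messages are invented for illustration; get_parameter_source and the ParameterSource enum are part of click 8’s public API:

```python
import click
from click.core import ParameterSource


@click.command()
@click.option("--retries", default=3, show_default=True)
def main(retries):
    # Did --retries come from the command line, the environment, or the default?
    ctx = click.get_current_context()
    source = ctx.get_parameter_source("retries")
    if source == ParameterSource.DEFAULT:
        click.echo(f"retries={retries} (fell back to the default)")
    else:
        click.echo(f"retries={retries} (provided via {source.name})")


if __name__ == "__main__":
    main()
```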


MacWhisper

I’m probably going to spring for MacWhisper Pro.

Quickly and easily transcribe audio files into text with OpenAI’s state-of-the-art transcription technology Whisper as well as Nvidia Parakeet. Whether you’re recording a meeting, lecture, or other important audio, MacWhisper quickly and accurately transcribes your audio files into text.

It got solid marks from Simon Willison. Vincent Warmerdam is only using the free version, but he’s got MacWhisper wired up to a programmatic keyboard. Both gentlemen have good taste in my eyes.

Initially, I’d use it like Warmerdam does, for agentic coding. If it can be automated from the CLI, it could become an essential part of the retrocast pipeline.


Some Recent eBooks of Note

Herewith, some recommendations of independently published eBooks. If you’re inclined to shop small, give these authors a shout.

Adam Johnson is a Django project committer. He’s written multiple books on DX, or “developer experience” in my book: instruction on how developers can improve their experience with certain development tools.

First, the older Boost Your Git DX:

This book covers the best command-line Git tools, techniques, tips, tricks, and tactical tidbits I’ve encountered from over a decade of using Git. The selection reflects my preferences for well-maintained tools that provide high value with little need for customization.

Note that this book is not an introduction to Git and assumes that you’re already using Git on a daily basis. It offers ways to improve your developer experience with Git that ultimately help you code faster.

I bought it a while back and read it end-to-end. Since I’m a magit user on a minute-to-minute basis, much of it wasn’t directly relevant. But I learned a lot about Git, and there are some recommendations I need to implement for those actual terminal interactions with the Git CLI.

Second, the newer Boost Your GitHub DX:

This is a book about using GitHub more effectively, touring you through the fundamental features with my experience-based commentary. It covers settings, keyboard shortcuts, hidden features, syntax, techniques, CLI commands, and even improving your writing. These tools will help you collaborate more effectively with your team and deliver higher-quality software faster.

This book is aimed at anyone using GitHub daily, assuming you have some familiarity with GitHub and Git and want to go deeper.

Purchased but not yet read. As I attempt to push more development onto agentic coders, I’m using GitHub as the central platform to coordinate with systems like Claude Code, Codex, and Gemini. Specifically, I want to heavily leverage issues and PRs to track feature implementation. I bet Adam’s book will come in handy, especially in taking advantage of the GitHub CLI tool.
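
For flavor, here’s a hedged sketch of the kind of gh workflow I have in mind; the issue and PR numbers are invented, but the subcommands are standard GitHub CLI:

```bash
# File a feature request as an issue for an agent to pick up
gh issue create --title "Add RSS feed for tags" --body "Details in the design doc."

# Review the agent's work as a pull request
gh pr checkout 57          # pull the branch down locally
gh pr diff 57              # eyeball the changes
gh pr review 57 --approve  # sign off
gh pr merge 57 --squash    # land it
```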

Third, Michael Kennedy’s Talk Python in Production:

Welcome to Talk Python in Production, a hands-on guide for Python developers determined to master real-world deployment and infrastructure management. Have you ever felt locked into pricey cloud services or struggled with overly complex DevOps configurations? This book’s stack-native approach offers a refreshing alternative.

You’ll learn to containerize Python apps, secure them with NGINX, tap into CDNs for global performance, and manage everything on a single, powerful server, without sacrificing reliability. Along the way, you’ll see exactly how we evolved Talk Python’s own web infrastructure over many years of practical deployment experience. This isn’t just theory: discover cost-saving tips, real-world examples, and step-by-step tutorials you can put to use right away. By the end, you’ll be confident in running your Python applications at scale, with minimal cloud lock-in and maximum control over your technology stack.

My primary interest here is in self-deploying HTTP traffic analytics and uptime monitoring. I’ve got multiple sites on the web and a minor interest in understanding visitors to one or two. I’ve plunked down for Kennedy’s book and am looking forward to getting stuck into it. Full disclosure: I’m a TalkPython fanboy, having bought the course bundle multiple times, and I listen to the podcast often, along with PythonBytes.


Anthropic’s Claude Code Best Practices

Link parkin’: “Claude Code: Best practices for agentic coding”

Claude Code is a command-line tool for agentic coding. This post covers tips and tricks that have proven effective for using Claude Code across various codebases, languages, and environments.

In particular, I’m looking at extending my own Claude Code environment as documented in this piece to integrate the GitHub CLI.

For repeated workflows—debugging loops, log analysis, etc.—store prompt templates in Markdown files within the .claude/commands folder. These become available through the slash commands menu when you type /. You can check these commands into git to make them available for the rest of your team.

Custom slash commands can include the special keyword $ARGUMENTS to pass parameters from command invocation.

For example, here’s a slash command that you could use to automatically pull and fix a GitHub issue:
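
The original post follows that with a sample command file. I won’t reproduce Anthropic’s exact example here, but as a rough sketch, a hypothetical .claude/commands/fix-github-issue.md might look like this, with $ARGUMENTS standing in for whatever you type after the command:

```markdown
Please analyze and fix GitHub issue #$ARGUMENTS.

1. Run `gh issue view $ARGUMENTS` to read the issue details.
2. Locate the relevant code and implement a fix.
3. Add or update tests and make sure they pass.
4. Commit the change and open a pull request with `gh pr create`.
```

Invoking /fix-github-issue 1234 from the Claude Code prompt would then substitute 1234 for $ARGUMENTS.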

I need to do some research on whether this approach or Agent Skills is better. Maybe this note from the Skills docs will settle the question:

How Skills are invoked: Skills are model-invoked — Claude autonomously decides when to use them based on your request and the Skill’s description. This is different from slash commands, which are user-invoked (you explicitly type /command to trigger them).


TIL: opencode

TIL opencode:

The AI coding agent built for the terminal

OpenCode is fully open source, giving you control and freedom to use any provider, any model, and any editor.

As someone who has used Claude Code CLI, Codex CLI, Gemini CLI, and aider, it seems like a no-brainer to give opencode a test drive. The only hesitation is the terminal program requirements:

Prerequisites

To use OpenCode, you’ll need:

- A modern terminal emulator like:
  - WezTerm, cross-platform
  - Alacritty, cross-platform
  - Ghostty, Linux and macOS
  - Kitty, Linux and macOS
- API keys for the LLM providers you want to use.

I’m quite happy with iTerm2. The terminals listed are all advanced variations on the genre, typically with GPU-accelerated rendering and/or AI integration. Sounds wonderful, but a bit heavyweight for me. Hopefully the “like” means iTerm2 qualifies too.

Via Alex Ellis. His prototype Code Review Bot post is worth a shout as well.


KuzuDB and LadybugDB

KuzuDB is a professional take on a graph database engine. It’s also embeddable, so it’s sort of like SQLite for graph and vector data.
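
To make “embeddable” concrete, here’s a minimal sketch using the kuzu Python package as it was documented before the archive; the schema and data are invented for illustration:

```python
import kuzu

# Embedded database: everything lives in a local directory, no server process.
db = kuzu.Database("./demo_db")
conn = kuzu.Connection(db)

# Define a tiny schema and load a couple of nodes plus a relationship.
conn.execute("CREATE NODE TABLE Person(name STRING, PRIMARY KEY (name))")
conn.execute("CREATE REL TABLE Follows(FROM Person TO Person)")
conn.execute("CREATE (:Person {name: 'Alice'})")
conn.execute("CREATE (:Person {name: 'Bob'})")
conn.execute(
    "MATCH (a:Person {name: 'Alice'}), (b:Person {name: 'Bob'}) "
    "CREATE (a)-[:Follows]->(b)"
)

# Query with Cypher and iterate the results.
result = conn.execute("MATCH (a:Person)-[:Follows]->(b:Person) RETURN a.name, b.name")
while result.has_next():
    print(result.get_next())
```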

Unfortunately, the company behind the open source project up and pulled up stakes, according to The Register. It’s one of the more annoying possible outcomes of a VC-backed project (cf. Marimo).

The KuzuDB embedded graph database, open source under the MIT license, has been abandoned by its creator and sponsor Kùzu Inc, leaving its community pondering whether to fork or find an alternative.

A few days ago, the GitHub project was archived and a note appeared stating that “Kuzu is working on something new.” In addition, the documentation and blog post archive were moved from the Kuzu website to GitHub.

Since the project was developed under a generous open source license, forking was always a realistic possibility. Enter LadybugDB:

LadybugDB is a modern graph database designed with a primary focus on object storage. Unlike traditional databases that treat storage as an afterthought, LadybugDB places object storage at the core of its architecture, enabling efficient management of complex, interconnected data structures while maintaining the flexibility and scalability that modern applications demand.

Built on top of KuzuDB, LadybugDB inherits a robust and modular implementation of cypher. KuzuDB was previously developed by Kùzu Inc.

Here’s the repo. The emphasis on object storage takes it in a bit of a different direction, though, moving into DuckDB territory.

© 2008-2025 C. Ross Jam. Licensed under CC BY-NC-SA 4.0 Built using Pelican. Theme based upon Giulio Fidente’s original svbhack, and slightly modified by crossjam.