
Milvus Lite

Link parkin’: Milvus Lite

Introducing Milvus Lite: the Lightweight Version of Milvus

Milvus is an open-source vector database purpose-built to index, store, and query embedding vectors generated by deep neural networks and other machine learning (ML) models at billion scale. It has become a popular choice for many companies, researchers, and developers who must perform similarity searches on large-scale datasets.

However, some users may find the full version of Milvus too heavy or complex. To address this problem, Bin Ji, one of the most active contributors in the Milvus community, built Milvus Lite, a lightweight version of Milvus.

Introducing Milvus Lite: Start Building a GenAI Application in Seconds

Milvus Lite supports all the basic operations available in Milvus, such as creating collections and inserting, searching, and deleting vectors. It will soon support advanced features like hybrid search. Milvus Lite loads data into memory for efficient searches and persists it as an SQLite file.
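
To get a feel for how lightweight it is, the basic flow looks roughly like this. This is a sketch based on the pymilvus MilvusClient quickstart; the collection name, dimension, and field names are placeholders of my own choosing:

    from pymilvus import MilvusClient

    # Milvus Lite persists the whole thing into a single local file
    client = MilvusClient("milvus_demo.db")
    client.create_collection(collection_name="demo", dimension=4)

    # Insert a couple of toy vectors, then run a similarity search
    client.insert(
        collection_name="demo",
        data=[
            {"id": 1, "vector": [0.1, 0.2, 0.3, 0.4], "label": "first"},
            {"id": 2, "vector": [0.4, 0.3, 0.2, 0.1], "label": "second"},
        ],
    )
    hits = client.search(collection_name="demo", data=[[0.1, 0.2, 0.3, 0.4]], limit=1)
    print(hits)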

GitHub repo


Running Local LLMs

Link parkin’: 50+ Open-Source Options for Running LLMs Locally

Vince Lam put together a comprehensive resource for running LLMs on your own hardware:

There are many open-source tools for hosting open-weights LLMs locally for inference, from command-line interface (CLI) tools to full GUI desktop applications. Here, I’ll outline some popular options and provide my own recommendations. I have split this post into the following sections:

  1. All-in-one desktop solutions for accessibility
  2. LLM inference via the CLI and backend API servers
  3. Front-end UIs for connecting to LLM backends

GitHub Repo, helpful Google Sheet


Twenty Plus Years …

Hola Peeps!

It’s been a moment. The archives record my last post as late September 2023. Yikes! Suffice it to say quite a bit happened after I mentioned going on sabbatical (TL;DR: my elderly dad got ill and needed a lot of assistance getting back on his feet; an extended, elaborate, distracting recruiting process for an intriguing employment position fell through), but I survived. Can’t say I achieved what I set out to, but life hits you with a curve ball every now and then.

The last month or so I’ve really started to get back into my own personal technical diversions. Any longtime followers will have a good idea what that entails. Of course, I’ll have to write about them, although fair warning, I’ll also be messing about quite a bit with experimental publishing to other sites I own. More to come.

To the title of this particular post. Between Mass Programming Resistance and New Media Hack, I’ve pushed out at least a few blog posts every year since January 18, 2003. By my overeducated arithmetic, this one extends the streak to 22 years in a row. 😲 Definitely bursty, but it’s some sort of an accomplishment. Many other bloggers, prominent or not, have fallen by the wayside. Just says I’m something of an Internet old-timer. If they only knew.

Having been through a few social media eras, and not really being into TheSocials (tm) of this moment, I plan to keep plugging away at this here site, passing along nuggets and commenting on the various and sundry things that catch my eye. Definitely good therapy.

To 20 more years. Forza!


beanstalkd

TIL beanstalkd

Beanstalk is a simple, fast work queue.

Its interface is generic, but was originally designed for reducing the latency of page views in high-volume web applications by running time-consuming tasks asynchronously.

Apparently it’s been around forever. And I called myself a messaging nerd. Shame.
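
For a sense of what using it looks like, here’s a minimal producer/consumer sketch. The greenstalk client and the localhost address are my assumptions; beanstalkd itself just speaks a simple text protocol on port 11300:

    import greenstalk

    # Connect to a beanstalkd server on the default port
    with greenstalk.Client(("127.0.0.1", 11300)) as client:
        # Producer side: enqueue a job body
        client.put("resize-image:42")

        # Worker side: reserve a job, do the work, then delete it
        job = client.reserve()
        print("working on", job.body)
        client.delete(job)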


My Emerging Knowledge Toolchain

Due to the research demands of my self-imposed sabbatical, I wound up stacking up a whole bunch of browser tabs across multiple computers, windows, browsers, etc. It was a bit appalling.

The breadth of content types was remarkable as well. Long-form web articles. GitHub repos of interest. Academic papers from arXiv. Academic papers from other places. PDFs from all across the web. Email newsletters stuck in multiple GMail accounts. And plenty of plain old web pages for one project of note or another.

Not to mention the bits and bobs trapped in my RSS feed reader stars and my podcast app.

I finally got some time this week to sit down and begin dealing with this knowledge sprawl. There had been some prior investigation on tools worth picking up and growing into. Here’s where I am at the moment:

  • Omnivore, for stashing long web articles to read later
  • Zotero, for academic reference and paper management
  • Zotfile, to shift papers from Zotero into a Dropbox folder
  • Pinboard, for parking links of interest
  • Goodnotes, an app recommended by a colleague for annotating PDFs
  • Apple’s “Books” app for reading technical books

So far so good. All of these apps bridge macOS and iOS, which is great since I’m deeply wedded to the Apple ecosystem and especially want to put my iPad to good use. They also seem to have reasonably decent browser-based apps. Omnivore and Zotero have solid cross-browser extensions that make it easy to shovel in content directly from the web page being viewed. For Pinboard, I just do it the old-fashioned way, with a JavaScript bookmarklet. I guess I shouldn’t be surprised that saving bookmarks to Pinboard has advanced a bit. Might be time to try yet another browser extension.

Early doors, but mopping up those open tabs is going to plan. Next I have to make all that capture productive. I’m leaning towards pulling Obsidian into the mix since it has nice integration with Omnivore. Then I’m contemplating some linkblog experimentation via various APIs in combination with Pelican’s static site generation capabilities. Fun times.


Some jq Stuff

jq is one of my favorite command-line tools, and there’s a bit of news. jq 1.7 has finally shipped:

After a five year hiatus we’re back with a GitHub organization, with new admins and new maintainers who have brought a great deal of energy to make a long-awaited and long-needed new release. We’re very grateful for all the new owners, admins, and maintainers. Special thanks go to Owen Ou (@owenthereal) for pushing to set up a new GitHub organization for jq, Stephen Dolan (@stedolan) for transferring the jq repository to the new organization, @itchyny for doing a great deal of work to get the release done, Mattias Wadman (@wader) and Emanuele Torre (@emanuele6) for many PRs and code reviews. Many others also contributed PRs, issues, and code reviews as well, and you can find their contributions in the Git log and on the closed issues and PRs page.

Also, is there anything Postgres can’t do? Enter pgJQ as a supplement to Postgres’ built-in jsonb support.

The pgJQ extension embeds the standard jq compiler and brings the much loved jq lang to Postgres.

It adds a jqprog data type to express jq programs and a jq(jsonb, jqprog) function to execute them on jsonb objects. It works seamlessly with standard jsonb functions, operators, and jsonpath.

Very much feels like alpha software, but could still be a useful addition to one’s toolbox.


That One Discogs Release

Any significantly large, human-created dataset is going to have some weird entries. In the Discogs Data Dumps, there’s a release (careful, following that link might blow up your browser) whose title is a whole bunch of Unicode characters plus the word “Unicode”. The title is a little under 628K (yes, six hundred twenty-eight thousand) octets.

Does it matter at all? Well, if you stuff that record into a PostgreSQL database and then build an index on the title column, you’ll get a sad trombone: B-tree index entries are capped at roughly a third of an 8 KB page, so a ~628K title blows way past the limit.

I’m thinking of hacking my personal fork of discogs-xml2db to have an option for limiting field size.
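
If I do, the core of the change is probably just a byte-budget clamp along these lines. This is a hypothetical helper, not actual discogs-xml2db code, and the default limit is an arbitrary choice:

    def clamp_field(value: str, max_bytes: int = 2000) -> str:
        """Truncate a string to a byte budget without splitting a
        multi-byte UTF-8 character."""
        encoded = value.encode("utf-8")
        if len(encoded) <= max_bytes:
            return value
        return encoded[:max_bytes].decode("utf-8", errors="ignore")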


MusicBrainz Database

Link parkin’: MusicBrainz Database

The MusicBrainz Database is built on the PostgreSQL relational database engine and contains all of MusicBrainz’ music metadata. This data includes information about artists, release groups, releases, recordings, works, and labels, as well as the many relationships between them. The database also contains a full history of all the changes that the MusicBrainz community has made to the data.

Mmmmmmm, data.


What Are Embeddings?

Vicki Boykis put together a free primer (delivered in PDF) on vector space embeddings.

Peter Norvig urges us to teach ourselves programming in ten years. In this spirit, after several years of working with embeddings, foundational data structures in deep learning models, I realized it’s not trivial to have a good conceptual model of them. Moreover, when I did want to learn more, there was no good, general text I could refer to as a starting point. Everything was either too deep and academic or too shallow and content from vendors in the space selling their solution. So I started a project to understand the fundamental building blocks of machine learning and natural language processing, particularly as they relate to recommendation systems today. The results of this project are the PDF on this site, which is aimed at a generalist audience and not trying to sell you anything except the idea that vectors are cool. I’ve also been working on Viberary to implement these ideas in practice.

The post also points to plenty of follow-on educational material. I’m particularly intrigued by the supplemental content related to deep learning and recommender systems.
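
For anyone new to the topic, the core intuition fits in a few lines: items get mapped to vectors, and relatedness becomes a geometric measure like cosine similarity. A toy sketch with made-up three-dimensional vectors:

    import numpy as np

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity: closer to 1.0 means more similar directions
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    dog = np.array([0.9, 0.1, 0.3])
    puppy = np.array([0.85, 0.15, 0.35])
    invoice = np.array([0.05, 0.9, 0.2])

    print(cosine(dog, puppy))    # high: related concepts
    print(cosine(dog, invoice))  # low: unrelated concepts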


Building Your Own Chatbot

Mozilla ran an interesting, meticulously documented, time-bounded effort to create an in-house, open source chatbot based on large language models.

With this goal in mind, a small team within Mozilla’s innovation group recently undertook a hackathon at our headquarters in San Francisco. Our objective: build a Mozilla internal chatbot prototype, one that’s…

  • Completely self-contained, running entirely on Mozilla’s cloud infrastructure, without any dependence on third-party APIs or services.
  • Built with free, open source large language models and tooling.
  • Imbued with Mozilla’s beliefs, from trustworthy AI to the principles espoused by the Mozilla Manifesto.

Heads up. I’m on sabbatical, attempting to do a bit of education on deep learning, generative models, and AI broadly. Case studies like these are really useful.

Despite the hype train on generative AI, there’s promise, … and peril. Expect to see quite a bit more of this in my feed.


llm and Llama 2

The prolific Simon Willison has put together llm, a Python library and CLI for messing around with AI:

A CLI utility and Python library for interacting with Large Language Models, including OpenAI, PaLM and local models installed on your own machine.

Eminently convenient is a plugin mechanism that supports using local open source models.
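
The Python API is just as tidy. A rough sketch per the llm docs; the model alias below is an assumption, since whatever you pass to get_model depends on which plugin-provided models you have installed:

    import llm

    # Resolve a model by alias; for local Llama 2 the alias comes from
    # the plugin you installed (the name here is assumed, not canonical)
    model = llm.get_model("llama-2-7b-chat")
    response = model.prompt("Five creative names for a pet pelican")
    print(response.text())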

I followed the directions on the tin and was able to run the latest Llama 2 model on my M2 MacBook within about 15 minutes. Most of that time was spent waiting for downloads.

However, the model does run as slowly as advertised, taking about 20 seconds to respond to a prompt. Still, it’s nice not to be beholden to our Big Tech overlords for ungoverned experimentation. OpenAI access is definitely not cheap!


Futzing With Fabric

Previously, I claimed to have “finished fabric”. That was just gathering the bits. But then there’s actually getting all that music into the macOS Music app. Le Sigh!

A couple of weeks ago I started reconciling the release lists with the playlists in my Music app. Turns out I was actually missing one release (FabricLive 34, Krafty Kuts) and hadn’t really finished Fabric. I had been corresponding with the Fabric team for bulk digital downloads, and the Krafty Kuts mix accidentally got swapped with another one.

Thankfully, Fabric First digital members now have access to all of the releases. So problem solved. P.S. For the quarterly cost of digital membership, you get incredible value. Even if you can’t get to the club.

And then there’s the metadata issue. MusicBrainz Picard is a godsend for fixing local file metadata. All I’ll say is that getting iCloud Music Library to sync with that metadata is a bear. Missing metadata really annoys me, especially since Fabric has such great cover art.

After a whole bunch of futzing, I think I’ve finally got it nailed, but maybe it’s time for some automation to confirm that. Even at this stage, I find the odd omission in a playlist or metadata that hasn’t synced across all my devices. Thanks Apple!


Pelican Plugin Fixes

Just a quick self-pat on the back for making some fixes to an old Pelican plugin, pelican_json_feed, which I forked into my own personal version.

My commit just made the plugin compatible with some changes in the Jinja templating library that were blocking rebuilds of this here blog.

Gotta celebrate the wins!!


Grist and Bookstack

Two things I’m planning to mess about with for personal knowledge management, both with the potential for self-hosting. First, Grist:

Grist is a software product to organize, analyze, and share data.

Grist combines the best of spreadsheets and databases. Grist lets you work with simple grids and lists, and is at its best when data gets more complex.

And Bookstack, which is similar to Confluence but with Markdown support:

BookStack is a simple, self-hosted, easy-to-use platform for organising and storing information.


MusicBrainz Picard

Link parkin’: MusicBrainz Picard

Picard is a cross-platform music tagger powered by the MusicBrainz database.

Picard helps you organize your music collection by renaming your music files and sorting them into a folder structure exactly the way you want it. A variety of plugins are available and you can even write your own. Picard supports a wide range of audio formats and can also lookup an entire CD for you.

During my completist phase of gathering the Fabric and FabricLive digital releases, I wound up with a bunch of .wav files. These files had no metadata, which makes for a crappy playback experience. I had been mucking around trying to pull something together using Discogs data and ffmpeg. Things got complex way too fast.

So I started digging around for MP3 taggers, looking especially at some of the commercial ones for macOS. Google happened to turn up Picard. I was a bit skeptical, since open source apps and commercial music data haven’t worked well together for me in the past.

Man, was I wrong! Picard so far is really good. I’m actually impressed at how well it’s handled metadata for the mix CDs I’ve thrown at it. The MusicBrainz metadata repository hasn’t been stumped, Picard lines up data and tracks well, and updating tags is just a button push. Bonus: cover art gets added properly as well.

Now to get about cleaning up a big pile of mixes, recently and not so recently acquired.


Algorithmic Amplification and Society

I must say, I’ve been enjoying the output from Arvind Narayanan’s algorithmic amplification project at Columbia University:

Most online speech today is hosted on algorithmic platforms that are designed to optimize for engagement. But algorithms are not neutral. They amplify some speech and suppress others. Some effects are positive, such as the decreased power of gatekeepers in identifying new talent. Others exert a pervasive distorting effect on everything, whether the production and dissemination of science or the tourism industry.

Under the direction of the Knight Institute’s 2022-2023 Visiting Research Scientist Arvind Narayanan, the Institute will examine how algorithmic amplification and distortion shape specific domains, markets, or facets of society, and explore ways to modify algorithms or design in order to minimize harmful amplifying or distorting effects.

Still sitting on the social media sidelines, just admiring the constant conflagration. Glad someone’s digging into what’s happening with a modicum of scientific rigor, though.


Pagefind

Link parkin’: Pagefind

Pagefind is a fully static search library that aims to perform well on large sites, while using as little of your users’ bandwidth as possible, and without hosting any infrastructure.

Pagefind runs after Hugo, Eleventy, Jekyll, Next, Astro, SvelteKit, or any other SSG. The installation process is always the same: Pagefind only requires a folder containing the built static files of your website, so in most cases no configuration is needed to get started.

Might be the search solution this site has been looking for.


812 Discogs Data Files

Back in December, I noted a script that I had written to count the number of data files pointed to at data.discogs.com. Back then it was 782.

I dusted off the old script just to see if it still runs. Sure enough, it works. The latest total is 812 files, which feels right: six months later, at five files per month, that’s 30 additional.

Impressed that the script still works. Also, I should note that the todos from the total size calculation have been completed, so it’s a one-liner to create an SQLite db of file info. Now I need to turn it all into a Python package so I can use pipx to install it.

Or maybe I should keep it simple and just copy it into my personal ~/.local/bin directory 🤔.
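
For the record, the SQLite step really is about this small. A sketch using sqlite-utils; the database, table, and column names are my assumptions, not what the script actually uses:

    import sqlite_utils

    # Placeholder rows; the real script generates these from data.discogs.com
    rows = [
        {"url": "https://data.discogs.com/placeholder.xml.gz", "size": 123456},
    ]

    db = sqlite_utils.Database("discogs_files.db")
    db["files"].insert_all(rows, pk="url", replace=True)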


Back in the Saddle

I’m gainfully unemployed again. 5 years, 2 DARPA programs from start to finish, 2 more business thrusts launched with long runway, and 1 global pandemic to top it off! The separation was completely amicable and I initiated it. The previous company and I both evolved and were trending in different directions. There’s no immediate employment lined up.

Commence sabbatical. Relaxing, recharging, and reprogramming myself. Need to work on some family and health issues. Making room to catch up with friends old and new. Time to learn enough about generative AI to seriously tinker and be able to call BS on snake oil salesmen. Plus space for a few other fun tech things like music data hacking and Rust programming. Haven’t traveled outside of the DMV for three years. Gonna actually go places.

I’ve twice left positions without the “next” thing lined up. The first time, there was really no plan. Bad idea. Wound up muddling along for about three months and almost induced financial ruin. Lucked into a good position that led to great returns. The second time, I had a half-baked plan but didn’t game out the financial timing well. Really could have easily gone sideways, but at least there were backup plans. Thankfully it all worked out.

This time around? Financial runway secured. High probability opportunities being worked and my personal network tapped for more as insurance. If all goes to plan, it’ll be about a three month sabbatical, but if it goes longer I won’t enter panic mode until 2024.

Which means I now have plenty of time to blog. There’s a fair bit to say. We go again!


xmlstarlet

Link parkin’: XMLStarlet

XMLStarlet is a set of command line utilities (tools) which can be used to transform, query, validate, and edit XML documents and files using a simple set of shell commands, in a similar way as is done for plain text files using the UNIX grep, sed, awk, diff, patch, join, etc. commands.

XMLStarlet is a venerable Swiss Army knife for carving up XML. I was having some issues with xmllint while checking Discogs Data XML files for well-formedness and stumbled upon XMLStarlet. On an initial test drive, it seemed to handle the large Discogs files a bit better.


hyperfine cli benchmarking

TIL hyperfine:

A command-line benchmarking tool.

It might not read as obvious, but it’s a command-line tool that looks really handy for benchmarking other command-line tools. It can replace the time command, but it’s way more than that. It has a ton of features, including things like “statistical analysis across multiple runs” and exporting results to CSV, JSON, Markdown, etc.


sqlite-xsv

Link parkin’: sqlite-xsv

sqlite-xsv is a new SQLite extension for querying CSVs, TSVs, and other-SVs, written in Rust, based on sqlite-loadable and the excellent csv crate.

Currently in SQLite, you have 2 options for parsing CSVs: the official SQLite CSV virtual table and sqlean’s vsv counterpart. Both work as advertised, and are perfectly fine by themselves!

However, sqlite-xsv provides various ergonomic benefits over these two extensions.

Another Alex Garcia special

h/t Simon Willison


Overcast Renewal

My subscription for the Overcast podcast player renewed today, for which I have no qualms. Really happy with the app, glad to support an independent developer, and would highly recommend to others.

I’m sure there are other fine iOS podcast apps out there, but Overcast works really well for me.


Working Locally with ClickHouse

ClickHouse is a database tool I haven’t had a chance to sink my teeth into, but I’ve heard good things about it. Extracting, converting, and querying data in local files using clickhouse-local is an overview of using ClickHouse to work with data sitting locally on a single node.

Sometimes we have to work with files, like CSV or Parquet, resident locally on our computers, readily accessible in S3, or easily exportable from MySQL or Postgres databases. Wouldn’t it be nice to have a tool to analyze and transform the data in those files using the power of SQL, and all of the ClickHouse functions, but without having to deploy a whole database server or write custom Python code?

Fortunately, this is precisely why clickhouse-local was created! The name “local” indicates that it is designed and optimized for data analysis using the local compute resources on your laptop or workstation. In this blog post, we’ll give you an overview of the capabilities of clickhouse-local and how it can increase the productivity of data scientists and engineers working with data in these scenarios.

Also noting yet another database with embedded HTTP request capabilities.


Plug for Libby

Just wanted to give a plug for the Libby app by OverDrive.

Borrow ebooks, audiobooks, magazines, and more from your local library for free! Libby is the newer library reading app by OverDrive, loved by millions of readers worldwide.

Works great with the Kindle App on an iPad. Excellent alternative to forking over full price for an eBook you might not be sure about.

Also, public libraries everywhere and their staffs deserve all the love they can get.


Algorithmic Boundaries

Thanks to Tim Bray for putting together an overview and thoughts on some of the issues the fediverse community has with global content surveillance, a.k.a. content search and indexing. Even though I’m out as a social media participant, I’m mentally noodling on technical means that could support some of the proposed social goals without going full-bore, end-to-end encryption. Wondering if some notion of crawler/indexer algorithmic transparency and auditing via computational means could help.

I wonder if these discussions will ever intersect with the work of Princeton University’s Arvind Narayanan and his project on Algorithmic Amplification and Society.

The distribution of online speech today is almost wholly algorithm-mediated. To talk about speech, then, we have to talk about algorithms. In computer science, the algorithms driving social media are called recommendation systems, and they are the secret sauce behind Facebook and YouTube, with TikTok more recently showing the power of an almost purely algorithm-driven platform.

Relatively few technologists participate in the debates on the societal and legal implications of these algorithms. As a computer scientist, that makes me excited about the opportunity to help fill this gap by collaborating with the Knight First Amendment Institute at Columbia University as a visiting senior research scientist — I’m on sabbatical leave from Princeton this academic year. Over the course of the year, I’ll lead a major project at the Knight Institute focusing on algorithmic amplification.

While Narayanan seems focused on amplification via algorithmic-mediated speech, the consideration of obfuscation feels like a worthy part of the discussion.


Pelican: Draft View vs Published View

One thing that’s been annoying me recently with the static site publishing tool that I use, Pelican, is how it treats draft posts and publishing. Out of the box, I haven’t been able to easily generate one development site that includes all draft (and unlabeled-status) posts and another “officially” published site that omits draft posts. The development site handled by Pelican’s built-in HTTP server does provide a drafts folder, but I find its usage clunky.

However, there may be a workaround using pelican’s default Makefile and some smart configuration:

  • Have the devserver and devserver-global targets in the Makefile generate output into a distinct dev output dir, which is where the dev content gets served from
  • Modify the pelicanconf.py config file’s DEFAULT_METADATA to set status to published
  • Leave the publishconf.py as is, or make sure its default status is draft
  • Maybe put all draft posts in a draftposts subdir so they get tagged in the dev server and are easier to find.

That should cleanly isolate the dev content from the published content, yet put all the draft material in one place during development. At the same time, it should prevent accidental publishing of draft material to the production site. I’m going to experiment with this approach for a few days and report back.
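
Concretely, the configuration half of that plan would look roughly like the sketch below. The output path and status strings are my assumptions about how I’d wire it up, not tested settings:

    # pelicanconf.py -- dev build: posts without an explicit Status
    # default to published, and output lands in a dev-only directory
    OUTPUT_PATH = "output-dev/"
    DEFAULT_METADATA = {"status": "published"}

    # publishconf.py -- production build: unlabeled posts stay drafts
    # and never make it onto the live site
    DEFAULT_METADATA = {"status": "draft"}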

P.S. Combined with Tailscale, mobile life and authoring on the go should become quite palatable. Hmmm 🤔, how well does self-hosted file-sharing tech like NFS and SMB work over Tailscale? Given my personal toolkit, worst comes to worst, I could also employ Emacs TRAMP over SSH.

Oh wait. Is it even possible to get Emacs on an iPad? There’s gotta be a way, right? Apparently not.


Awesome macOS CLI

Link parkin’: Awesome macOS Command Line

A curated list of shell commands and tools specific to macOS.

Does what it says on the tin.

Bonus: Awesome Command Line Apps

A curated list of useful command line apps, in celebration of the TUI.

Love me some TUI.


Tailscale On the Road

So I’ve taken my iPad out and about, putting Tailscale to the test for remote access to a home machine. Results have been somewhat mixed.

First off, that it works at all, making it seamlessly possible to ping all my devices as if they were on a virtual LAN across the worldwide Internet, is impressive. I did manage to get some quality extended remote SSH time on the home machine. I’ve been able to do this successfully multiple times.

But for whatever reason, on one of my trips things went really sideways. I don’t know what happened, but I lost access, maybe due to the café Wi-Fi connectivity, and then Tailscale was a bit out of sorts. I couldn’t resolve hosts using Tailscale DNS for a few moments; then, when resolution came back, I couldn’t complete connections. I switched to my phone as a mobile hotspot and things were still a bit borked. There was quite a bit of teasing potential for a full recovery, but things just never got right. Finally I just gave up and went back home.

I freely admit I didn’t do any serious diagnosis, so please, Tailscale support, no need to ring my number. I’ll be a good customer and file a proper bug report if it happens again.


Dragged Into GitHub Actions

Thanks to Simon Willison’s click-app cookiecutter template, I’ve been sucked into GitHub Actions. His cookiecutter has Python testing baked in, and GitHub Actions that leverage that testing are turned on by default. Normally I wouldn’t care, but the repo workflow sends you email and badges itself red when failing, which annoys me enough that I have to go fix it.

Not at all a bad thing, though, to get introduced to GitHub Actions, since they seem to be core knowledge for good GitHub citizenship. It just takes some getting used to while being part of a GitLab CI/CD shop at the day job.


Carmo and Mastodon

Still no plans for me to get back into social media, but I appreciate how deeply Rui Carmo dives into his Mastodon experience:

It’s the quiet week before New Year’s, so I thought it worthwhile to tag some loose notes together and take a snapshot of what I’m doing with Mastodon and how it differs from Twitter. Everyone else seems to be doing it, so why not?

I will admit a slight interest in the technical underpinnings of the fediverse, especially ActivityPub and how information is disseminated.

Plug here for Carmo’s Tao of Mac blog. Great technical information and great writing.


GNU Screen Locking Fix

Since I started using an iPad Pro cover, SSH, and screen, this problem has been biting me in the rear:

I have been using GNU Screen for a while now. I usually create multiple new windows (Ctrl-a c) every day. Then I flip back and forth between the multiple screens (Ctrl-a n) or just toggle between the last 2 windows used (Ctrl-a a). Sometimes my finger slips and I hit Ctrl-a x which provides you with a password prompt. This is GNU Screen’s lockscreen function.

Normally you just enter your user password and the screen will unlock. Screen is using /local/bin/lck or /usr/bin/lock by default. This is all fine and dandy if you have a user account password to enter. If you have servers that only use SSH keys and don’t use passwords you will have no valid password to enter. In other words you are stuck at this lock screen. One way around it is to login to the same machine and re-attach to the same screen session (normally “screen -r” if you have only 1 session open). Then kill the session with the lock screen. This is annoying to have to do.

It really is a pain, but it has an extremely simple solution: disable the default binding for locking and go on about your business.
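
For reference, the couple of lines involved go in ~/.screenrc. Per the screen manual, bind with no command removes the binding; still, treat this as a sketch to verify against your own setup:

    # ~/.screenrc: drop the lockscreen key bindings so a slipped
    # Ctrl-a x no longer locks the session
    bind x
    bind ^x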


Discogs Data Total Size

The next question I have about the Discogs Data is: what’s the total amount to download? An initial step is updating my URL-gathering script to grab the Content-Length header from an HTTP probe of each URL and start generating CSV-compatible output.
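
Roughly the shape of that step, as a sketch; the real script’s structure and output columns may differ:

    import csv
    import sys

    import requests

    def probe_sizes(urls):
        """Emit url,content_length CSV rows from HTTP HEAD probes."""
        writer = csv.writer(sys.stdout)
        writer.writerow(["url", "content_length"])
        for url in urls:
            resp = requests.head(url, allow_redirects=True, timeout=30)
            writer.writerow([url, resp.headers.get("Content-Length", "")])

Sum the second column and the total download size falls right out.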




782 Discogs Data Files

I wanted to know how many distinct data files (checksums and compressed XML data) were referenced from data.discogs.com. This is a prelude to attempting an extremely polite crawl of all the files for some longitudinal analysis. So I threw together a little script and learned a few things.

The answer turned out to be 782.




Doctorow’s Memex Method

Apologies for the extensive quoting, but I don’t think Cory Doctorow would actually mind (too much). Pointing back to a 2021 piece on Medium (full post) and Pluralistic (condensed summary, podcast teaser) entitled The Memex Method. It’s sort of the inspiration for some of the personal content hacking I’ve gotten interested in over the last few months.

Blogging is the process by which I take everything that seems significant and fix it in my memory; the process of explaining why something seems significant for strangers is powerfully mnemonic in exactly the way that scrawling notes in a private notebook isn’t.

The fulltext, searchable, tagged database of everything I’ve ever given real thought to is how I synthesize whatever new things snag my attention into longer, more reflective pieces – which go into the searchable, tagged database, too.

From the Medium piece: Memex, combined with Pluralistic — the solo blog I started after I left Boing Boing — is a vast storehouse of nearly everything I found to be significant since 2001. When one of those nucleation events occurs, the full-text search and tag-based retrieval tools built into WordPress allow me to bring up everything I’ve ever written on the subject, both to refresh my memory as to the salient details and to provide webby links to expansions of related ideas.

Not just because it created a daily writing habit, nor because it helped me organize my thoughts – but also because it is iterative, a way of structuring and auditioning arguments for an audience that refines how to present technical, difficult material.

I’m impressed that simply combining tagging with fulltext search can provide so much creative fuel.


Welcome Back

What with the earthquakes occurring in the social media space, quite a few folks are reemerging in the blogosphere. Glad to see some familiar gentlemen (hmmm?) routinely popping up again in my feed reader.

I also have to mention the return of Jason Kottke of kottke.org, whom, oddly enough, I hadn’t ever really cottoned to, not being so much into popular culture items. But I’m getting interested in linkblogging, and he has always been the epitome of fine linkblogging.


Whisper is Python

TIL the original Whisper is mostly Python, and I was also introduced to Sumana Harihareswara.

Whisper, from OpenAI, is a new open source tool that “approaches human level robustness and accuracy on English speech recognition”; “Moreover, it enables transcription in multiple languages, as well as translation from those languages into English.” …

Whisper is an open source software tool written mostly in the Python programming language. Instructions on how to download, install, and run it are relatively straightforward, if you are comfortable running commands in a terminal. It depends on Python, a few Python libraries, and Rust. In case you want to try Whisper but you don’t want to fiddle with installing it on your computer, the machine learning company Replicate is hosting a web-based version of Whisper so you can upload a sound file and get a transcription. But of course then you don’t get the privacy benefits of running it entirely on your own machine.

I will definitely be giving this a test drive during my holiday time off.
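
The Python usage really is as simple as advertised; a rough sketch, where the model size and audio file name are placeholders:

    import whisper

    # "base" trades accuracy for speed; larger models are available
    model = whisper.load_model("base")
    result = model.transcribe("interview.mp3")
    print(result["text"])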

Meanwhile, Harihareswara seems like quite the interesting individual. I appreciate her thoughtfulness regarding the ethics of using Whisper. Will be adding her blog to my feeds.

Via Simon Willison


An iPad Cover

A while back, I was reading about Rui Carmo’s adventures doing development on his iPad Pro. Since I have the same iPad model, I cribbed his recommendation for the Logitech Combo Pro as a cover/keyboard. I haven’t chased Carmo’s development aims but just as a keyboard, the Combo Pro works well.

My singular complaint is that I use my iPad as a reading device a lot, sort of like an overpriced Kindle. When in this mode, the Combo Pro is a little inconvenient. Either the keyboard lies in front, consuming space unnecessarily, or it folds back without providing protection for the iPad screen. Detaching the keyboard to switch modes is a little unwieldy.

However, the iPad Pro is a much more powerful device than a garden-variety Kindle. I did not pursue Carmo’s goal of turning my iPad into a full-on, self-hosted development platform. Instead, I just took the baby step of using it as a tablet terminal for SSHing into remote machines. Tailscale has amplified this by making access to my virtual, personal home LAN seamless. Panic’s Prompt is my current iOS SSH tool of choice. I’m creating this post from my iPad Pro, using Prompt and the Combo Pro, at this very moment. Next stop is taking this on the road, outside of my home.

Minor tip. For this keyboard, map ESC to the CAPS-lock key. Doubly so if you’re an Emacs user.


A Quamina Bug

Have to take a moment to acknowledge doing my duty as a good open source citizen. Hey, it was only reporting a tiny bug (bug: Quote escaping termination failure) in Tim Bray’s Quamina library, but every little bit counts. It also gave me a good opportunity to limber up my issue-reporting muscles.


Python keyring

keyring looks like a nice cross-platform Python module for storing passwords securely using platform-appropriate mechanisms.

The Python keyring library provides an easy way to access the system keyring service from python. It can be used in any application that needs safe password storage.
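
Usage is about as minimal as it gets. A quick sketch, with made-up service and account names:

    import keyring

    # Store a secret in the platform keychain (Keychain on macOS,
    # Secret Service on Linux, Credential Locker on Windows)
    keyring.set_password("my-app", "crossjam", "s3kr1t")

    # ...and fetch it back later
    print(keyring.get_password("my-app", "crossjam"))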
