Thanks to Tim Bray for putting together an overview and thoughts on some of the issues the fediverse community has with global content surveillance, a.k.a. content search and indexing. Even though I’m out as a social media participant, I’m mentally noodling on technical means that could support some of the proposed social goals without going full-bore with end-to-end encryption. Wondering if some notion of crawler/indexer algorithmic transparency and auditing via computational means could help.
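To make that idea slightly more concrete, here’s a minimal sketch of what “auditing via computational means” might look like. Everything in it is my own assumption, not any existing fediverse or ActivityPub mechanism: the `indexable` flag, the per-crawler consent set, and the audit function are all hypothetical placeholders for a crawler indexing only under a declared, machine-checkable consent rule that anyone can rerun against what actually got indexed.

```python
# Hypothetical sketch: per-actor indexing consent plus a simple audit check.
# The field names and policy shape are assumptions, not any real spec.

from dataclasses import dataclass


@dataclass
class Actor:
    actor_id: str
    indexable: bool        # has this account opted in to search indexing?
    allowed_crawlers: set  # crawler identities the account consents to


@dataclass
class Post:
    post_id: str
    author: Actor
    text: str


def crawler_may_index(post: Post, crawler_id: str) -> bool:
    """The transparent rule: index only opted-in authors, and only for
    crawlers the author has consented to."""
    a = post.author
    return a.indexable and crawler_id in a.allowed_crawlers


def audit_index(index: dict, posts: dict, crawler_id: str) -> list:
    """Recompute the consent rule over everything the crawler indexed and
    report violations -- the 'auditing via computational means' part."""
    violations = []
    for post_id in index:
        if not crawler_may_index(posts[post_id], crawler_id):
            violations.append(post_id)
    return violations


if __name__ == "__main__":
    alice = Actor("https://example.social/users/alice", True, {"search.example"})
    bob = Actor("https://example.social/users/bob", False, set())
    posts = {
        "1": Post("1", alice, "hello fediverse"),
        "2": Post("2", bob, "please do not index me"),
    }
    # A misbehaving crawler that indexed everything regardless of consent:
    index = {"1": posts["1"].text, "2": posts["2"].text}
    print(audit_index(index, posts, "search.example"))  # -> ['2']
```

The point isn’t this particular rule; it’s that if crawlers publish the rule they claim to follow, third parties can mechanically verify it, which is a weaker but much cheaper guarantee than end-to-end encryption.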
I wonder if these discussions will ever intersect with the work of Princeton University’s Arvind Narayanan on his Algorithmic Amplification and Society project. Narayanan writes:
The distribution of online speech today is almost wholly algorithm-mediated. To talk about speech, then, we have to talk about algorithms. In computer science, the algorithms driving social media are called recommendation systems, and they are the secret sauce behind Facebook and YouTube, with TikTok more recently showing the power of an almost purely algorithm-driven platform.
Relatively few technologists participate in the debates on the societal and legal implications of these algorithms. As a computer scientist, that makes me excited about the opportunity to help fill this gap by collaborating with the Knight First Amendment Institute at Columbia University as a visiting senior research scientist — I’m on sabbatical leave from Princeton this academic year. Over the course of the year, I’ll lead a major project at the Knight Institute focusing on algorithmic amplification.
While Narayanan seems focused on amplification of algorithm-mediated speech, the consideration of obfuscation feels like a worthy part of the discussion as well.