
rqlite

Just because I dig the many cool uses SQLite has been put to:

rqlite is a distributed relational database, which uses SQLite as its storage engine. rqlite uses Raft to achieve consensus across all the instances of the SQLite databases, ensuring that every change made to the system is made to a quorum of SQLite databases, or none at all. It also gracefully handles leader elections, and tolerates failures of machines, including the leader. rqlite is available for Linux, OSX, and Microsoft Windows.
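Since rqlite's storage engine is stock SQLite, the SQL you would send to a cluster is ordinary SQLite SQL. A minimal local sketch using Python's sqlite3 module (the table and values are made up; against a real cluster you would POST these statements to rqlite's HTTP API instead):

```python
import sqlite3

# rqlite's storage engine is stock SQLite, so statements destined for a
# cluster can be exercised locally first. An in-memory sketch:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE foo (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO foo (name) VALUES (?)", ("fiona",))
conn.commit()

rows = conn.execute("SELECT id, name FROM foo").fetchall()
print(rows)  # [(1, 'fiona')]
```

The difference in the distributed case is that each write only commits once a Raft quorum of nodes has logged it.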


Return of the Runtimes

A HotOS 2017 paper (PDF) authored by Maas, Asanović, and Kubiatowicz hits on some of the systems trends that have piqued my interest. Definitely from a cloud-computing, datacenter perspective, but vertically integrated programming stacks on top of disaggregated resources hold promise. Does this make me look buzzword compliant?

In this paper, we argue that we should rethink how language runtimes are designed for the Cloud 3.0 era. We do this by laying out seven tenets of building language runtimes for the next generation of cloud data centers. We then distill these tenets into a proposal for a shared substrate to underpin these future runtimes.

With a title like “Return of the Runtimes: Rethinking the Language Runtime System for the Cloud 3.0 Era,” how could you go wrong?

Brush with greatness: Asanović and I were in the same incoming grad cohort at Berkeley CS. We even had a class or two together.


Center for Network Programming

Speaking of Princeton, I’m intrigued by the Cornell-Princeton Center for Network Programming:

The Center for Network Programming supports research on languages, algorithms, and tools for network programming, and facilitates closer interactions with partners in industry and government.

As a bona fide, card carrying, UC Berkeley computer systems junkie, this warms the cockles of my heart. Especially projects like frenetic. In particular, I sense that the new era of software defined networking (SDN) and network programming languages is a phase change. I may just be an old PLDI nerd but this feels like a big deal. At least it will be a lot of fun!


WebTAP, OpenWPM, and Data

WebTAP is a public interest and research project at Princeton University looking into how Web entities track users through a variety of techniques. Based upon rigorous research methods they provide the public at large with insights and policy recommendations regarding online privacy.

OpenWPM is the software framework that WebTAP uses to conduct a large scale census of websites:

OpenWPM is a web privacy measurement framework which makes it easy to collect data for privacy studies on a scale of thousands to millions of sites. OpenWPM is built on top of Firefox, with automation provided by Selenium. It includes several hooks for data collection, including a proxy, a Firefox extension, and access to Flash cookies. Check out the instrumentation section below for more details.

OpenWPM is the basis of an extensive academic publication (which I need to read).

The cherry on top is the collection, documentation, and archiving of their Web census data. According to a recent blog post on a new notebook wrapper around the Web Census data, they collect 500GB of data on a monthly basis. Juicy!

Great work by computer science and policy researchers on behalf of the greater good.


Culture of Fear

Love this track:

https://www.youtube.com/watch?v=GWvqkDvv4mE

Thievery Corporation’s Culture of Fear

Security alert on orange
It's been on orange since '01, G 
I mean wassup man, can't a brother get yellow, man
Just for like two months or something
Goddamn, sick of that


Coates On Conversational IoT

A bit dusty, but I really enjoyed this O’Reilly Bots Podcast discussion between Jon Bruner and Tom Coates. Given the potential for a buzzword laden, hype explosion (voice interfaces, IoT, connected home, bots) the conversation was surprisingly thoughtful.

In this episode of the O’Reilly Bots Podcast, I speak with Tom Coates, co-founder of Thington, a service layer for the Internet of Things. Thington provides a conversational, messaging-like interface for controlling devices like lights and thermostats, but it’s also conversational at a deeper level: its very architecture treats the interactions between different devices like a conversation, allowing devices to make announcements to any other device that cares to listen.

Internet of Things is an area which feels compelling to me, on a number of different tech angles, despite the hype. Still pondering on bots.


AgensGraph

Link parkin’: AgensGraph

AgensGraph is a new generation multi-model graph database for the modern complex data environment, which supports the relational and graph data models at the same time. AgensGraph supports ANSI-SQL and openCypher. SQL and Cypher can be integrated into single queries in AgensGraph.

AgensGraph is yet another PostgreSQL derivative. So at this point you could conceivably have one DB engine that has strong relational credibility, hardened geospatial functionality, support for time series via extension, semi-structured document data capabilities, and at least commercially developed graph data support.

Could this be the small to medium data management Rapture? Ha, ha! Only serious.


Cloud FPGAs

FPGA stands for Field-Programmable Gate Array (sic). I was struck by this article from The New Stack summarizing ways FPGAs can be incorporated into cloud computing offerings.

The array of gates that make up an FPGA can be programmed to run a specific algorithm, using the combination of logic gates (usually implemented as lookup tables), arithmetic units, digital signal processors (DSPs) to do multiplication, static RAM for temporarily storing the results of those computations, and switching blocks that let you control the connections between the programmable blocks. Some FPGAs are essentially systems-on-a-chip (SoC), with CPUs, PCI Express and DMA connections and Ethernet controllers, turning the programmable array into a custom accelerator for the code running on the CPU.

The combination means that FPGAs can offer massive parallelism targeted only for a specific algorithm, and at much lower power compared to a GPU. And unlike an application-specific integrated circuit (ASIC), they can be reprogrammed when you want to change that algorithm (that’s the field-programmable part).
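The lookup-table idea in the excerpt is easy to illustrate: a k-input LUT is just a 2^k-entry truth table whose contents are set at configuration time. A toy sketch in Python, not real FPGA tooling:

```python
# An FPGA "logic gate" is typically a k-input lookup table (LUT): a
# 2**k-entry truth table filled in when the device is configured.
def make_lut(truth_table):
    def lut(*inputs):
        # Pack the input bits into an index into the truth table.
        index = sum(bit << i for i, bit in enumerate(inputs))
        return truth_table[index]
    return lut

# A 2-input LUT programmed as XOR; entries cover (0,0), (1,0), (0,1), (1,1).
xor = make_lut([0, 1, 1, 0])
print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Reprogramming is just swapping the table contents, e.g. `make_lut([0, 0, 0, 1])` yields AND, which is the "field-programmable" part in miniature.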

Give the article a read to hear about how Microsoft is using FPGAs to accelerate network packet processing for software defined networking, SDN, applications. FPGAs have always seemed to only be applicable in really niche, vertical applications, but this feels like a relatively broad use case to me. Also, a number of other really important verticals (crypto, *omics) along with potential to join in the AI hype wave would seem to make for a bright FPGA future. The “hardware microservices” portmanteau is a good soundbite.

FPGA programming has always been extremely difficult. I’m surprised that tightly integrated programming stacks haven’t emerged to make this a lot easier, given the relatively high value they would seem to bring. The article does hint at this potential future though. Alternatively, one can look at cloud APIs as eventually becoming the “programming stack” that many developers use to exploit FPGAs.

Feels like a trend to keep an eye on.


Crux

Completed Ramez Naam’s Crux today. Crux is the sequel to Nexus, which I read a while back. The basic premise rests on the intersection of nano-computation and cognitive augmentation leading to the emergence of post-human capabilities. Bits of quantum computing, climate change, and other speculative technologies are thrown in to boot. Mayhem ensues. In Crux, there are much heavier political and social dilemmas woven throughout.

Crux maintains the rapid action, technothriller pace of its predecessor. Per usual, the breakneck combat and carnage aren’t really my thing, but the rapid plot advances make it an easy read. It’s been a while since I read Nexus, but I don’t remember a similarly bewildering number of characters as were eventually put in play in Crux.

Relative to other science fiction that I really love, Crux is lacking those slower, interstitial moments where the author paints out many of the unspectacular details of the world. Being almost all chase and conflict, there’s not much time for reflection on how the Nexus drug plays out in the more quotidian aspects of people’s lives.

Of course Crux ends in a cliffhanger since it’s part of a trilogy. I’ll have to read Apex to see how it all ends, but boy are there a lot of threads to tie off.

While it wouldn’t be my first selection off the bookshelf to read, Crux is definitely not time wasted.


TimescaleDB

Link parkin’: TimescaleDB

An open-source time-series database optimized for fast ingest and complex queries. Looks, feels, speaks like Postgres.


My Alien Covenant Review

First, watch the Alien: Covenant Prologue trailers online, such as “The Crossing” (https://youtu.be/XeMVrnYNwus)

and “The Last Supper”.

Now you’ve seen the best parts of the entire Alien: Covenant production. None of which appear in the theatrical release.

Use your money to buy a ticket for Guardians of the Galaxy v2.

You’re welcome!


Brendan Gregg and eBPF

Brendan Gregg is a performance analysis superstar and I’ve already told you eBPF is wicked cool. I’m not going to the Velocity conference but I’ll check out the talk once it shows up on Safari.

What is eBPF and why is it useful?

eBPF is a weird Linux kernel technology that powers low-overhead custom analysis tools, which can be run in production to find performance wins that no other tool can. With it, we can pull out millions of new metrics from the kernel and applications, and explore running software like never before. It’s a superpower. It’ll benefit many people on Linux as they’ll add a toolkit of new analysis tools, or use new plugins for deep monitoring. That’s what I’ll show in my Velocity talk: new tools you can use.

There are four other good questions to go along with the above.


Data Intelligence Conference

I bought a ticket for the Data Intelligence conference:

The 2017 Data Intelligence conference, which will take place in McLean, Virginia, is the first machine learning gathering for the community using and developing machine learning and data intelligence. It is produced and underwritten by NumFOCUS, the 501(c)(3) nonprofit that supports and promotes world-class, innovative, open source scientific computing. Through the Data Intelligence Conference, NumFOCUS advances its mission of growing the international community of open source developers.

To be honest, the event description is a bit buzzword laden for my taste. I think this conference is a substitute for last year’s PyData conference in DC. This one is local though, mostly over a weekend, and the entrance fee was the right price. Maybe I’m missing them, but DC based technology events that are grassroots and outside of the Federal space are hard to find. So I’m really looking forward to the conference.

Given how hiccupy (sic) this blog has been, I doubt there’s anyone reading who might also be in attendance, but give me a shout if you do actually exist.


Data Science on the GCP

Still in early release, Data Science on the Google Cloud Platform might be a good read:

Valliappa (Lak) Lakshmanan, Technical Lead for Data & ML Professional Services at Google Cloud, is the author of the upcoming O’Reilly Media book “Data Science on the Google Cloud Platform” (now in Early Release). In the following Q&A, Lak describes his reasons for writing this book, its intended readers, what readers will learn and how to think about the practice of data science on Google Cloud Platform (GCP)-based architecture.

The pull quote is from a post about the book over at the Google Cloud Platform blog.


The Three Body Problem

Knocked off Liu Cixin’s The Three Body Problem this weekend, an extremely entertaining tale of initial alien encounter. While the overall plot and literary execution are outstanding, the key factor is that this is a translation from a popular Chinese work. In general, the shift from Western norms is bracing and in particular, the Communist Revolution in China is woven throughout the story to devastating effect. The overall reverence for science, apparent in the text and Liu’s afterword, is also refreshing.

The tale additionally invokes serious consideration into humanity, inhumanity, and the fate of man on Earth.

My only nit is that at the end, the aliens are heavily anthropomorphized, which didn’t work for me. But I acknowledge the wry symmetry that Liu invoked by doing so.

Apologies for the copyediting twitches.


The ArchiTECHt

Derrick Harris used to be a GigaOM reporter on the big data beat. When GigaOM went under he moved on to doing media for Mesosphere.

Looks like Harris launched out on his own again in January, doing a combo of blogging, newsletter, and podcasting. All of it can be found at architecht.io. The interviewee lineup on the podcast looks especially good with some high profile names like Eric Brewer, Mike Olson, Jay Kreps, and Julia Austin.


Cormode on Data Sketching

Graham Cormode, who is probably one of the most knowledgeable people in the world on the topic, has written an insanely good article on data sketching:

The aim of this article has been to introduce a selection of recent techniques that provide approximate answers to some general questions that often occur in data analysis and manipulation. In all cases, simple alternative approaches can provide exact answers, at the expense of keeping complete information. The examples shown here have illustrated, however, that in many cases the approximate approach can be faster and more space efficient. The use of these methods is growing. Bloom filters are sometimes said to be one of the core technologies that “big data experts” must know. At the very least, it is important to be aware of sketching techniques to test claims that solving a problem a certain way is the only option. Often, fast approximate sketch-based techniques can provide a different tradeoff.

I say “insanely good” because there is some seriously hairy math behind these techniques. Yet Cormode makes the principles easily accessible to a general, admittedly already technically inclined, audience. As a former instructor, this is an article you could give to a bunch of upperclassmen and then spend two good lectures working through details and implications. No mean feat. Plus, these types of data structures are increasingly important to know about.
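As a taste of the techniques Cormode covers, here is a minimal Bloom filter sketch in Python. The parameters m and k and the hashing scheme are arbitrary illustrative choices, not anything from the article:

```python
import hashlib

class BloomFilter:
    """A tiny Bloom filter: k hash positions over an m-bit array.
    Membership tests can yield false positives, never false negatives."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Derive k positions by salting a cryptographic hash; real
        # implementations use cheaper hash families.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

bf = BloomFilter()
for word in ["sketch", "stream", "quantile"]:
    bf.add(word)
print("sketch" in bf)  # True; an unseen word is False with high probability
```

The space win is the whole point: 128 bytes stand in for the full set, at the cost of an occasional false positive.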


That’s Rich

As a very early del.icio.us fanboy, and current Pinboard customer, Maciej Ceglowski’s chutzpah impresses me.

Pinboard has acquired Delicious. Here’s what you need to know:

If you’re a Pinboard user, nothing will change. Sad!

If you’re a Delicious user, you will have to find another place to save your bookmarks. The site will stay online, but on June 15, I will put Delicious into read-only mode. You won’t be able to save new bookmarks after that date, or use the API.

Not sure if I’m more surprised that del.icio.us is still live or that it went for so cheap.


Simit

Link parkin’:

Simit, a language for computing on sparse systems.

Simit is a new programming language that makes it easy to compute on sparse systems using linear algebra. Simit programs are typically shorter than Matlab programs yet are competitive with hand-optimized code and also run on GPUs.

With Simit you build a graph that describes your sparse system (e.g. a spring system, a mesh or the world wide web). You then compute on the system in two ways: locally or globally. Local computations apply update functions to each vertex or edge of the graph that update local state based on the vertex or the edge and its endpoints. This part of the language is similar to what you find in graph processing frameworks such as GraphLab and its descendants.


Just Because I Dig the Title

Occupy the Cloud: Distributed Computing for the 99%

Distributed computing remains inaccessible to a large number of users, in spite of many open source platforms and extensive commercial offerings. While distributed computation frameworks have moved beyond a simple map-reduce model, many users are still left to struggle with complex cluster management and configuration tools, even for running simple embarrassingly parallel jobs. We argue that stateless functions represent a viable platform for these users, eliminating cluster management overhead, fulfilling the promise of elasticity. Furthermore, using our prototype implementation, PyWren, we show that this model is general enough to implement a number of distributed computing models, such as BSP, efficiently. Extrapolating from recent trends in network bandwidth and the advent of disaggregated storage, we suggest that stateless functions are a natural fit for data processing in future computing environments.

Actually, PyWren seems like yet another top notch UC Berkeley CS research project. Go Bears!
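The stateless-function model is, at heart, a map over an executor you don't manage. A local stand-in using only the standard library; with PyWren the executor would dispatch each call to a serverless function instead of a thread (the hypotenuse workload is just an illustrative stand-in for an embarrassingly parallel job):

```python
from concurrent.futures import ThreadPoolExecutor

# The paper's pitch: plain "map" over stateless functions, no cluster
# management. Each call is independent and carries all its own state.
def hypotenuse(xy):
    x, y = xy
    return (x * x + y * y) ** 0.5

inputs = [(3, 4), (5, 12), (8, 15)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(hypotenuse, inputs))
print(results)  # [5.0, 13.0, 17.0]
```

Swap the executor for one backed by cloud functions and the program shape stays the same, which is exactly the elasticity argument in the abstract.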


Some AI Curmudgeon Considerations

Even though Prismatic never really worked for me and never caught on in general, I like Bradford Cross’s musings on Twitter. His latest venture recently surfaced, and he was the interviewee on the first episode of The ArchiTECHt podcast I listened to. This reminded me of some thoughts he had on AI startups earlier this year:

With AI in a full-fledged mania, 2017 will be the year of reckoning. Pure hype trends will reveal themselves to have no fundamentals behind them. Paradoxically, 2017 will also be the year of breakout successes from a handful of vertically-oriented AI startups solving full-stack industry problems that require subject matter expertise, unique data, and a product that uses AI to deliver its core value proposition.

Seems like production AI is more of a Formula One type endeavor rather than stock car. Cross might be right, but it would be interesting if an ML equivalent of Hadoop emerged. Interesting, but low probability.


Streaming Open Data With Satori

I’ve been poking around for a while on the Interwebs looking for accessible streaming data sources besides the oversubscribed Twitter feeds. Today I stumbled across Satori, with an initial description of the service from their blog:

Why? Because the world of open data needs to change. Right now there is a trove of open, public data available all over the world. But instead of being able to realize its potential, that data is at-rest on a variety of disparate websites across the internet.

By coalescing the world’s open data into streaming live data and making it available for free, we’ll be able to see new solutions to big problems and ideas that haven’t even been thought of yet.

With Satori, any developer with a computer, anywhere in the world can create a free account and have unlimited access to live open data to build the live data apps.

Right now I’d be nervous building anything serious on such a new service, lest they run out of money and abruptly shut down the service. For throwaway noodling and proofs-of-concept it looks like Satori provides something valuable. If nothing else, a convenient feed of Wikipedia edits would be interesting to experiment with.
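I haven't used Satori's API, so as a generic sketch: edit feeds like this are commonly delivered as server-sent-events style `data: {json}` lines, and parsing them is straightforward. The sample below is fabricated and its field names are illustrative, not Satori's actual schema:

```python
import json

def parse_sse(lines):
    """Yield JSON payloads from server-sent-events style 'data:' lines,
    skipping comments and heartbeats."""
    for line in lines:
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):].strip())

# Fabricated sample in the rough shape of a Wikipedia edit event:
sample = [
    ": heartbeat",
    'data: {"title": "SQLite", "user": "ExampleUser", "minor": false}',
    'data: {"title": "Raft (algorithm)", "user": "AnotherUser", "minor": true}',
]
events = list(parse_sse(sample))
print([e["title"] for e in events])  # ['SQLite', 'Raft (algorithm)']
```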


Paco Nathan’s NLP in Python

https://twitter.com/pacoid/status/850480933534785536

Diggin’ through some old Twitter faves and found that @pacoid is doing online courses covering Natural Language Processing:

Keep in mind, these courses are the opposite of MOOCs. We realized how the industry had swung too far in the wrong direction with Ed Tech, how VC-backed tech startups had taken seriously detrimental short-cuts to attempt scale in learning, how current trends in “education” at scale opposed our ethos and experience at O’Reilly. Our origin story as a company was about peer teaching, with Tim and Dale active at Unix user group meetings. We’ve always been about peer teaching — that’s one reason I was eager to lead this program, calling back to my teaching fellowship many years ago at Stanford, where I’d helped establish a popular peer teaching program there.

If you’re already an O’Reilly Safari subscriber it looks like a great, quick (couple of hours), intro. Unfortunately, all the upcoming sessions are already full with a waitlist! Hopefully, Paco can sneak in a few more sessions this year.


Packet Pushers Podcast

I’ve really gotten into podcasts recently where once I was just not a fan. They make a great alternative to broadcast radio if, like me, you’re stuck in transit a lot. Thanks to Wes Felter’s blog, the Packet Pushers Podcast came across my radar. Definitely deep, meaty, technical stuff and recommended. Now I feel like I actually understand what NFV means.


The Player of Games

Be it resolved that I will complete reading 50 books in the remainder of this calendar year.

To begin this journey, I knocked off Iain M. Banks’ The Player of Games yesterday. I had been slogging through the first half of the book, but then the latter half really moved for me. Previously, I was not a big fan of Banks’ Consider Phlebas. I can recommend The Player of Games, although it too can be a tad disturbing at times.



50

Older, wiser, definitely grayer. F’in readers! The last decade has been a bit of a roller coaster.

A few regrets, but extremely thankful I’ve managed to last this long and for the many gifts I’ve been blessed with.

“To infinity and beyond,” although it feels like everyone who’s reached this point is on a month-to-month lease.


eBPF is Wicked Cool

Quentin Monnet is making it easy to dive into eBPF.

So instead, here is what we will do. After all, I spent some time reading and learning about BPF, and while doing so, I gathered a fair amount of material about BPF: introductions, documentation, but also tutorials or examples. There is a lot to read, but in order to read it, one has to find it first. Therefore, as an attempt to help people who wish to learn and use BPF, the present article introduces a list of resources. These are various kinds of readings, that hopefully will help you dive into the mechanics of this kernel bytecode.

A little virtual machine running inside the kernel is such a wicked concept.


4K Monitor Buyer’s Guide

I’m in the market for a new desktop monitor. 4K resolution seems to have hit a reasonable price and quality point. The Wirecutter has a top notch list of suggestions.


Au Revoir AMPLab

https://twitter.com/bigdata/status/800006369772322817

End of Project has arrived for UC Berkeley’s AMPLab. Spark and the other varied projects of the group hit my radar back in July of 2012. The project was run according to Prof. Dave Patterson’s guidance for collaborative research centers. I think it’s fair to say AMPLab was a success.

Looking forward to what comes next and Go Bears!


Thanksgivings

Duly noting a number of things I’m thankful for this year:

  • My enduring family, surviving and thriving as African-Americans in the US. Special remembrance for my Aunt Gracie who passed away this summer after a long battle with cancer. She went to work for NASA straight out of high school and served the country for 38 (!!) years. Her eulogy highlighted some of the BS she overcame but I never knew about. I’ll always remember her as a faith filled, uplifting spirit.
  • Widely scattered friends, who nevertheless helped me get through a very challenging year.
  • The continuing gift of my education, focused and funded by my parents, that keeps opening up opportunities of all sorts, especially chances to help others.
  • Interesting, challenging, and impactful employment.
  • Eight years of President Obama, an intelligent, even keeled, leader who did the office proud.
  • That I actually have a lot to be thankful for. There are many out there struggling to find even a tiny morsel of hope.

kube-ui Trick

Eventually I’ll get into k8s enough that this trick will be useful to know:

In my opinion, the standard kube-ui is pretty spartan. It doesn’t really give me a good overview of what is going on in my cluster.

Weave Scope is an open source tool that helps you monitor and visualize your cluster. It is currently very beta, but I think it has a lot of potential!

Running it is also super easy.

Plus I really like what the Weave folks have been up to.

Thanks Sandeep!


Link Parkin’: The Flattened Big Data Reading List

I’ve already read a bunch of links on this reading list, but there are some new nuggets. I also like Lars Albertsson’s take on the process of building data intensive systems:

This is a curated recommended read and watch list for scalable data processing. It is primarily aimed towards software architects and developers, but there is material also for people in leadership position as well as data scientists, in particular in the first section. The content has been chosen with a bias towards material that conveys a good understanding of the field as a whole and is relevant for building practical applications.


Processing A Billion Taxi Rides

Mark Litwintschik has been doing yeoman’s work with the New York City Taxi & Limousine Commission data. Over a series of blog posts he’s taken this one dataset and processed it with a number of data management and “big data” technologies, including purchasing an Amazon Redshift cluster:

Over the past few months I’ve been benchmarking a dataset of 1.1 billion taxi journeys made in New York City over a six year period on a number of data stores and cloud services. Among these has been AWS Redshift. Up until now I’ve been using their free-tier and single-compute node clusters. While these make Data Warehousing very inexpensive they aren’t going to be representative of the incredible query speeds which can be achieved with Redshift.

In this post I’ll be looking at how fast a 6-node ds2.8xlarge Redshift Cluster can query over a billion records from the dataset I’ve put together.

Litwintschik is admirable in how well he documents the steps he takes to actually run queries against such a large dataset. Just getting such data into the right place to work with is challenging. There are lots of places you can trip up doing this stuff and his work can save others a lot of trouble.

Something along these lines is what I’m aspiring to do with Fun With Discogs Data.


Preordered

https://twitter.com/djmarkfarina/status/738738259866615808

Heck yeah, you know I ordered up some of Mark Farina’s forthcoming Mushroom Jazz 8. Now it’s just a matter of what and when I add some other items.


Diggin’ On: Data Machina

Really been enjoying the weekly Data Machina e-mail newsletter put out by @ds_ldn a.k.a. Data Science London. It’s actually fairly dense, but usefully eclectic, focused on things relevant to “data science”, broadly construed. I always find at least a handful of links that are out and out great.

Subscribe now or try before you buy.


FWDD 0: Fun With Discogs Data

This may become a recurring aspect of the blog, thus the abbreviation and numbering. I’ve managed to catch up and download the entirety of discogs.com data dump archives to my personal laptop. As of this writing, it’s about 153 GB of mostly compressed XML data, at varying levels of quality going all the way back to 2008.

I don’t really have much of a plan other than to explore an interesting longitudinal data set. One thing I’m hoping to do is come up with a modernish set of tools to process the data, including normalizing and transforming to other formats. The other goal is to push it all up into Google Cloud Platform and see what working with data in that environment is like. Also, planning to make code and generated data open, since Discogs provides it under an extremely liberal license.
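For dumps this size, streaming the XML rather than loading it whole is the only practical route. A sketch with the standard library's iterparse over a tiny gzipped sample; the tag names are illustrative rather than the exact Discogs dump schema:

```python
import gzip
import io
import xml.etree.ElementTree as ET

# Stand-in for one of the gzipped dump files.
sample = b"""<releases>
  <release id="1"><title>First</title></release>
  <release id="2"><title>Second</title></release>
</releases>"""
buf = io.BytesIO(gzip.compress(sample))

# iterparse yields elements as they complete, so memory stays flat no
# matter how large the file is, provided we clear each record when done.
titles = []
with gzip.open(buf) as fh:
    for event, elem in ET.iterparse(fh, events=("end",)):
        if elem.tag == "release":
            titles.append(elem.findtext("title"))
            elem.clear()  # free the subtree before the next record
print(titles)  # ['First', 'Second']
```

The same loop, pointed at the real archives, is the natural first tool for normalizing and transforming into other formats.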


Consul Service Discovery

Nice little video overview of service discovery in microservice architectures and how Consul can fill that role. “Why is service discovery important? (And what is Consul?)”. Fair notice, it’s a teaser for an O’Reilly video training course on microservices.


What’s A Unikernel?

I recently worked on a proposal that heavily incorporated the notion of unikernels. Even still, I’m not really sure I could have explained what they were, even to someone else technically proficient.

Enter the Google Cloud Platform Podcast. Listening to Pivotal’s John Feminella I finally heard a clear, clean explanation. Check it out for yourself, but the notion of an automatically constructed, application specific, machine image that can run on a hypervisor nails it for me.

They’re still extremely bleeding edge, but it looks like unikernel based approaches will have a place in the microservices oriented future.

P.S. I just started listening to the GCP Podcast, but I’m encouraged by how informative these first couple of episodes have been.


Armbrust on Spark Structured Streaming

I enjoyed this O’Reilly Data Podcast conversation with Michael Armbrust regarding Apache Spark 2.0’s Structured Streaming:

With the release of Spark version 2.0, streaming starts becoming much more accessible to users. By adopting a continuous processing model (on an infinite table), the developers of Spark have enabled users of its SQL or DataFrame APIs to extend their analytic capabilities to unbounded streams.

Within the Spark community, Databricks Engineer, Michael Armbrust is well-known for having led the long-term project to move Spark’s interactive analytics engine from Shark to Spark SQL. (Full disclosure: I’m an advisor to Databricks.) Most recently he has turned his efforts to helping introduce a much simpler stream processing model to Spark Streaming (“structured streaming”).

You’ll need a login, but there’s also a deeper dive video from Armbrust and Tathagata Das going into more details of Structured Streaming.

At one point, Ben Lorica asked Armbrust about the dimensions upon which developers should evaluate streaming platforms. The obvious ones (delivery guarantees, latency, throughput) were brought up. I’d add a few more:

  • expressiveness, how convenient is it to express common streaming computations and how possible is it to implement exquisite solutions
  • agility, the ease with which stream processing code can be correctly updated and re-deployed
  • monitoring, getting useful performance metrics and debugging information out of the system

Apache Spark Structured Streaming, Kafka Streams, Twitter Heron, Apache Flink. So much to choose from.
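To make the expressiveness point concrete, here is a tumbling-window count, the bread-and-butter streaming aggregation, as a plain Python sketch. Any real engine runs this incrementally over unbounded input rather than over a finished list:

```python
from collections import Counter

def tumbling_window_counts(events, width):
    """Count keys per tumbling window of `width` seconds: the sort of
    aggregation a streaming engine maintains incrementally."""
    windows = {}
    for ts, key in events:
        start = ts - (ts % width)  # align the event to its window
        windows.setdefault(start, Counter())[key] += 1
    return windows

# Fabricated (timestamp, key) events.
events = [(0, "a"), (3, "b"), (5, "a"), (12, "a")]
counts = tumbling_window_counts(events, 10)
print(counts)  # {0: Counter({'a': 2, 'b': 1}), 10: Counter({'a': 1})}
```

How convenient each platform makes expressing this, and updating it once deployed, is exactly what the extra evaluation dimensions above are about.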


Twitter Heron

Like I said, a crowded space.

Last year we announced the introduction of our new distributed stream computation system, Heron. Today we are excited to announce that we are open sourcing Heron under the permissive Apache v2.0 license. Heron is a proven, production-ready, real-time stream processing engine, which has been powering all of Twitter’s real-time analytics for over two years. Prior to Heron, we used Apache Storm, which we open sourced in 2011. Heron features a wide array of architectural improvements and is backward compatible with the Storm ecosystem for seamless adoption.

Twitter Heron, now an open source project.

© 2008-2024 C. Ross Jam. Built using Pelican. Theme based upon Giulio Fidente’s original svbhack, and slightly modified by crossjam.