
Kafka 0.10.0

As I said, I use (and like) Kafka quite a bit. The new release of Kafka 0.10.0, as covered by Confluent’s Neha Narkhede, has a number of interesting features:

I am very excited to announce the availability of the 0.10 release of Apache Kafka and the 3.0 release of the Confluent Platform. This release marks the availability of Kafka Streams, a simple solution to stream processing and Confluent Control Center, the first comprehensive management and monitoring system for Apache Kafka. Around 112 contributors provided bug fixes, improvements, and new features such that in total 413 JIRA issues and 13 KIPs were resolved.

Kafka Streams gets the headline, but I suspect the timestamped messages will turn out to be just as handy.
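For instance, with a reasonably recent kafka-python client against a 0.10 broker, every consumed record should surface its timestamp. A minimal sketch (the broker address and topic name are placeholders):

```python
from kafka import KafkaConsumer

# Minimal sketch: print the timestamp attached to each consumed record.
# Assumes a 0.10 broker at localhost:9092, a kafka-python version that exposes
# record timestamps, and a placeholder topic named "events".
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
)

for record in consumer:
    # timestamp is epoch milliseconds; timestamp_type distinguishes the
    # producer-assigned CreateTime from the broker-assigned LogAppendTime
    print(record.timestamp, record.timestamp_type, record.value)
```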


K8S Udacity Course

K8S stands for Kubernetes, which is a container orchestration platform from Google. Translation? Kubernetes is a system for running distributed code with high availability at scale. Looks like there’s a nice bite-sized Udacity course on Kubernetes serving as an introduction.

This course is designed to teach you about managing application containers, using Kubernetes. We’ve built this course in partnership with experts such as Kelsey Hightower and Carter Morgan from Google and Netflix’s former Cloud Architect, Adrian Cockcroft (current Technology Fellow at Battery Ventures), who provide critical learning throughout the course.

Mastering highly resilient and scalable infrastructure management is very important, because the modern expectation is that your favorite sites will be up 24/7, and that they will roll out new features frequently and without disruption of the service. Achieving this requires tools that allow you to ensure speed of development, infrastructure stability and ability to scale. Students with backgrounds in Operations or Development who are interested in managing container based infrastructure with Kubernetes are recommended to enroll!
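For a concrete taste of what “managing application containers” looks like from code, here’s a minimal sketch using the official Kubernetes Python client (my illustration, not course material; it assumes a reachable cluster and an already configured kubeconfig):

```python
from kubernetes import client, config

# Minimal sketch (not from the course): list every pod in the cluster.
# Assumes a reachable cluster and a local kubeconfig set up by kubectl.
config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)
```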

Might have to carve out some time for this one.


Storm Still Going Strong

Apache Storm is approaching 5 years as an interesting, useful, important, and vibrant open source project. P. Taylor Goetz, one of the project leads, is doing an overview of Storm in concert with the 1.0 release.

In this series of blog posts, we will provide an in-depth look at select features introduced with the release of Apache Storm (Storm) 1.0. To kick off the series, we’ll take a look at how Storm has evolved over the years from its beginnings as an open source project, up to the 1.0 milestone release.

The space of computing over and developing against streaming data has grown crowded in the past year or two. Storm is one of those technologies good to know about as it provides a useful baseline of features to discuss and has significant “burn in”. And it’s still getting better!


Kafka Summit Recordings

These days, I do a lot of work with Apache Kafka. Kafka implements partitioned, replicated, append-only logs. If you squint enough, those logs can look like a messaging system. Turns out Kafka is pretty good for a lot of distributed system and “big data” use cases.
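To make the log model concrete, here’s a minimal producer sketch with the kafka-python client (the broker address and topic name are placeholders); every send comes back with the partition and offset at which the record was appended to the log:

```python
from kafka import KafkaProducer

# Minimal sketch: append a few records to a topic and report where each one
# landed in the log. Assumes a broker at localhost:9092 and a placeholder
# topic named "events".
producer = KafkaProducer(bootstrap_servers="localhost:9092")

for i in range(5):
    # send() is asynchronous and returns a future for the record's metadata
    future = producer.send("events", key=b"key-%d" % i, value=b"hello %d" % i)
    metadata = future.get(timeout=10)
    print(metadata.topic, metadata.partition, metadata.offset)

producer.flush()
```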

I couldn’t make it to the inaugural Kafka Summit, but the organizers have made video recordings and slides available for all of the presentations. Well done!


2016-07-22

On July 22, 2016, Farina returns with the 8th installment of one of electronic music’s longest running compilations, Mushroom Jazz, celebrating the 25th anniversary of the series.

There is nothing else that can be added to this announcement. You know I’m all over this one.


The New Smack

You know you’ve been around awhile when you start observing “acronym recycling”.

Circa 2010:

Storage, MapReduce, Query

Circa 2016:

Spark, Mesos, Akka, Cassandra, Kafka

When I first heard the term in Ben Lorica’s O’Reilly Data Show podcast episode with Evan Chan, I did a double take. Trawling the interwebs a bit, it looks like there might be some there there. MeetUps. Slides. Talks. Conferences, sort of. Even manifestos!

Not exactly one-to-one, but definitely squarely in the same ecosystem. The 2010 Radar article is still surprisingly relevant. And if you think of the trends in “Big Data” over the last 5+ years, SMACK is basically an evolution of SMAQ, refined for the rise of Spark as a compute engine, and updated for the emergence of streaming, unbounded data processing.

SMACK HARD is a little too cute by half though, if you ask me.


Diggin’ On Evil Nine


I’ve been on a bit of a spending binge on products from the Fabric London store. When in the mood for some “new to me” music, I trawl their back catalog. To be honest, the purchases have been a combination of secondary market CDs and digital downloads from the Amazon Music Store. Hopefully, this plug will send some purchases their way. But over the course of the past month, I’ve collected over 6 different titles.

The current leader of the pack is Evil Nine’s entry into the FabricLive list. The end-to-end DJ mix is a nice journey into breaks territory. Opens quite well, dips a little, then really picks up steam near the tail end. Highlights are Technologic, All I Wanna Do Is Break Some Hearts, and Nowhere Girl. Not to mention an inspired outro with The Clash’s London Calling.

The musical style is most reminiscent of stuff from Fatboy Slim, The Chemical Brothers, and an incredible one-shot effort, The Dirtchamber Sessions, by The Prodigy. Give Evil Nine a whirl if you fancy that particular flavor.


Spark-TS

Link parkin’:

Large-scale time-series data shows up across a variety of domains. In this post, I’ll introduce Spark-TS, a library developed by Cloudera’s Data Science team (and in use by customers) that enables analysis of data sets comprising millions of time series, each with millions of measurements. Spark-TS runs atop Apache Spark, and exposes Scala and Python APIs.

Deployed by Cloudera with real customers, according to them. Sorely needed. Appreciate the Python modules, which I hope aren’t too far behind the Scala API.
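I haven’t dug into the Spark-TS API itself yet, but for a sense of the shape of the problem, here’s a plain-PySpark sketch of a per-series computation (the input path and record layout are made up); Spark-TS exists to make this kind of thing, and far more, less manual:

```python
from pyspark import SparkContext

# Plain-PySpark sketch of a per-series computation, NOT the Spark-TS API.
# Assumes CSV records of (series_key, timestamp, value) at a made-up path.
sc = SparkContext(appName="per-series-baseline")

def parse(line):
    key, ts, value = line.split(",")
    return key, (ts, float(value))

observations = sc.textFile("hdfs:///data/measurements/*.csv").map(parse)

def mean_value(obs):
    # obs is all (timestamp, value) pairs for one series
    values = [v for _, v in obs]
    return sum(values) / len(values)

# Group each series' observations together, then compute a per-series mean.
per_series_mean = observations.groupByKey().mapValues(mean_value)
print(per_series_mean.take(5))
```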


NodeBox for OpenGL

A while ago, I started a project called AdoptedArt, where I attempted to transliterate Matt Pearson’s work at AbandonedArt.org into Python. Back then there were two impediments. One, there really weren’t any graphical toolkits that were a solid equivalent of Processing. I cobbled something out of pyprocessing but it wasn’t very satisfying. Not to mention the project wasn’t particularly active. Two, my lil’ ole White MacBook really didn’t have enough horsepower to compensate for the Python performance penalty.

AdoptedArt fell by the wayside, but just for giggles, over Thanksgiving I went on a lark to see if it could be resurrected. Now I have two things on my side. One, the new MacBook Pro is easily an order of magnitude faster thanks to processing speedups, multiple cores, GPU acceleration, and a big old SSD. Second, NodeBox for OpenGL emerged, adding image manipulation capabilities and hardware acceleration to the NodeBox vector drawing API. Moore’s Law FTW! Plus, the install was painless using Continuum’s Anaconda, even though there were some C-based extensions to be built from source.

Bottom line, it only took me a little bit of work to adapt my adoption of AbandonedArt’s first Processing sketch, Spirograph, into NodeBoxGL. And it ran smooth as silk, with the MacBook barely breaking a sweat.
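For reference, the basic NodeBox for OpenGL animation loop looks roughly like the following. This is a sketch from memory of the library’s documented pattern, not my actual Spirograph port, so treat the details as approximate:

```python
from nodebox.graphics import *

# Rough sketch of the NodeBox for OpenGL animation loop (from memory of the
# library's documented pattern; not the actual Spirograph port).
def draw(canvas):
    canvas.clear()
    translate(canvas.width / 2, canvas.height / 2)
    rotate(canvas.frame % 360)          # spin a little further each frame
    for i in range(12):
        rotate(30)
        ellipse(120, 0, 40, 40, fill=(0.2, 0.4, 0.8, 0.5))

canvas.size = 600, 600
canvas.run(draw)
```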

I’ve got high hopes to revive this project as a creative endeavor and a complete diversion from work stuff. We’ll see how it goes!


Whysabouts

Just for posterity, some of the reasoning behind coming back to these here parts, besides fulfilling a stated commitment.

TL;DR Commuting and creating. Less of the first, more of the latter.

Death to Schleppy!!

I live in the center-west portion of Loudoun County, roughly 30ish miles from Arlington and thence further into the city. Arlington has been about the best of my regular workplace locations, as I’ve had stops in downtown DC, Crystal City, Oakton, and Rockville, MD.

The Washington DC Metroplex is brutally hostile to commuters. Typically I spend about 75 to 90 minutes getting to work, one way. Doesn’t matter what combo of driving and public transportation, minimum 3 hours in transit every business day, punctuated by bi-weekly traffic disasters just to accentuate the pain.

In June of 2016, I’ll have been doing it for a full decade. Years two through eight, I developed Stockholm syndrome and just rolled with it. In the last year, a certain despair has crept in about the amount of my life dripping away on the commute. Family constraints lock in the residential location. So I’m now resolved to alter my career trajectory as needed to radically reduce my commute.

By May 31st of 2017, my one way trip time to work will be 30 minutes or less. Preferably way less.

What’s the blog relation? To best position myself to Kill Schleppy, I want to sharpen a number of technical skills to increase leverage with whoever employs my talents. Some of this will happen through self-instruction and side projects. (This and lack of software upgrades for Ye Olde MacBook led to the new Macaversary.) What better place than Mass Programming Resistance to record my efforts and build up an independent, web visible portfolio? Maybe even land a “remote first” opportunity. Going to the home office or local co-working space sounds really enticing.

Parallel to that, in Whatabouts I noted the elimination of certain things that I felt were a drag on mental and emotional health. That space can be filled with productive, creative energy. Mass Programming Resistance can once again be the outlet for that energy. Blogging’s dead, blah, blah, blah. But in the past, when I’ve been on a regular routine, there was a sense of purpose and achievement. It works for me.


Conda Envs

I don’t find Python’s virtualenvs to be as cumbersome as Tim Hopper does, especially when combined with virtualenvwrapper, but his argument for using Anaconda environments is compelling:

In 2015, I have almost exclusively used Python installations provided through Continuum Analytics’s Conda/Anaconda platform. I have also switched from using virtualenvs to using conda environments, and I am loving it.

The only contra is that pip and virtualenv are so widely distributed and easily deployed with distro packages that they’re somewhat safe to rely on. Conda, not so much, even though it’s a light lift and spreading.

Interestingly, I probably overlapped with some of Tim’s Qadium colleagues on a DARPA program.


Whatabouts

Stuff I’m currently interested in and will probably frequently comment on here. First off on the technical front:

  • Processing of streaming data. On the day job, I’ve deeply embraced Apache Kafka. We haven’t had much success doing the actual data processing, but the Googlish (among others) concepts of unbounded processing are attractive.
  • Speaking of which, Apache Spark (Go Bears!) still commands a lot of my attention, especially Spark Streaming.
  • Distributed consensus algorithms, à la Paxos, ZAB, Raft, et al. Also toolkits such as ZooKeeper, Consul, and etcd, which leverage these algorithms to additionally enable service discovery.
  • Containerization, principally Docker but also getting back to LXC and BSD jails, the originals. Unikernels fall into the same space although I’m still trying to make sense of them.
  • Programming languages. Still primarily a Pythonista; I’ve actually contributed some Go to an open source project, I’m getting dragged into Scala and back into Java due to Spark/Kafka, and I’ll always have a soft spot for Common Lisp, Scheme, and their ancestors.
  • Dev tools and practices. The day job has made me a passable git user, an emerging test driven developer, and well aware of continuous integration and continuous deployment practices. Planning to actually top up my skills on modern SW development environments.
  • Performance monitoring and analysis. I’ve gladly joined the cult of Brendan Gregg. It’s a very interesting and underserved area that integrates a lot of technologies, engineering techniques, and analysis methodologies. Proficiency in this domain seems to be a bit of a superpower.

The overall technical theme is what I’ll term “distributed programming in the small”. I’ve come to grips with the realization that my dreams of becoming a massive scale systems builder are probably not going to come true. Google, Amazon, Facebook, Netflix, et al. have moved the goalposts so far out that unless you work at one of those places, or a scientific computing institution, you’re pretty much an amateur. However, there are plenty of use cases that can exploit key technologies from that crowd to build reliable and dynamic data processing systems even without having to reach global scale. Plenty of problems still out there that force you to use more than one computer at a time. But when you go that route, to the extent reasonable, do what the pros do.

Musically still diggin’ on House, DnB, downtempo, trip-hop, and breaks DJ mixes. Find myself going retro a bit more into 80’s, less so 90’s, pop and dance through a Spotify Premium account. I pay for mixes and subscribe for singles.

Crate diggin’ through all the old material I have in the form of old blog posts, RSS feed stars, Twitter faves (F likes), and pinboard bookmarks. Having been around for a while now affords me the space to do some reflection in addition to observation.

Not so much:

  • TV. Cut off the national drip about a year ago. Mental and emotional health much improved.
  • Sports. Really dialed it back. It mostly went out the window with the One Eyed Monster. Basically do audio streams of games while conducting other activities, visit the Verizon Center to watch The Wiz with my dad, and check scores on my phone every now and then. Finding the NFL and NCAA despicable.
  • Science fiction reading. It’s become intermittent due to Life (TM). If I get back on the bandwagon it’ll make a return but “underpromise and overdeliver”.
  • Movie stuff. Ditto. In the last two calendar years, I’ve seen exactly one feature film in the theater: Mad Max: Fury Road. Highly recommended but again, I don’t get out for entertainment much. There’s a plethora of on-demand at my disposal, but can’t muster the energy to take advantage.
  • Social media. I’ve been around long enough to remember when the notion of “IRL: In Real Life” made sense and we called it “social software”. Now it’s All Real Life, All The Time and increasingly disruptive of our social fabric (not in a good way).
  • Celebrity. You can get plenty of that in other venues.

Consider yourselves, … warned.


GLA Podcast 50

https://twitter.com/djmarkfarina/status/665242414693654532

I’ve given it two listens so far, and it’s good stuff. A welcome return to straight-ahead house music.


Whereabouts

It occurs to me that there is a major event that I have not noted in this venue. About 18 months ago, I gave up my position at Lockheed Martin and moved to a much smaller company, Invincea. That web site is almost exclusively about the commercial product side of Invincea, but we also have a federal services division called Invincea Labs.

Labs is in exactly the same DoD Science and Technology research space I worked in at LM. We hustle Contracted Research and Development (CRAD) from various agencies looking for technical solutions to bleeding edge problems.

It’s an amusing story of how it came about, but I literally ended at LM on a Friday afternoon and started across the street at Invincea the following Monday. The biggest change is that I went down 3 orders of magnitude in employee head count. Also, Labs has a pure focus on cybersecurity. No more worrying about expensive jet fighters and all that. Lean, mean, and a relaxed attitude have been a refreshing change of pace.

I’m part of the Cyber Analytics team, and Labs has a lot of open positions. We work on all sorts of bleeding edge projects so shoot me an e-mail at bria n.d ennis@invincea.com if any of them seem to fit you.


It’s Been A Long Time…

…, I shouldn’t have left you,
Without a strong rhyme to step to.
Think of how many weak shows you slept through.
Time’s up. I’m sorry I kept you…

— Rakim

MacBook Pro 2015

It’s a new Macaversary around here.


Gibson’s “The Peripheral”

I finished reading William Gibson’s The Peripheral about a day ago. As an avowed Gibson fanboy, I’ve oddly got some pretty mixed feelings about the book.

First off, I did this weird thing of pre-ordering from Amazon, so I had the book on the first day of availability. October 28th, 2014. Six. Months. Ago. For whatever reason, I parked the hardcover and never got started. Maybe it was the dread of hauling the hefty tome around.

So taking advantage of some vacation time, I jumped in and devoured it promptly. Ticked all of my Gibson check boxes. But in the end I felt a bit unsatisfied.

Can’t quite put my finger on it, but I really didn’t get a rush. Since Cayce Pollard, I haven’t really felt like a Gibson protagonist has been in much peril. Just a matter of waiting to see how they get out of it. Yeah, Flynne got kidnapped and threatened, but everybody came out clean in the end.

And the Chinese server explanation I found lacking.

Maybe this one just needs to grow on me like the Blue Ant Trilogy.


EMR Spark

Muy bueno! Spark is now an official part of Amazon Elastic MapReduce.

I’m happy to announce that Amazon EMR now supports Apache Spark. Amazon EMR is a web service that makes it easy for you to process and analyze vast amounts of data using applications in the Hadoop ecosystem, including Hive, Pig, HBase, Presto, Impala, and others. We’re delighted to officially add Spark to this list. Although many customers have previously been installing Spark using custom scripts, you can now launch an Amazon EMR cluster with Spark directly from the Amazon EMR Console, CLI, or API.
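A sketch of what launching a Spark-enabled cluster looks like from Python with boto3 (my own illustration, not from the announcement; the release label, instance types, and IAM roles are placeholders to adjust for your account):

```python
import boto3

# Sketch: launch an EMR cluster with Spark installed. The release label,
# instance types, counts, and roles are placeholders; AWS credentials are
# assumed to be configured already.
emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="spark-playground",
    ReleaseLabel="emr-4.0.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "MasterInstanceType": "m3.xlarge",
        "SlaveInstanceType": "m3.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)

print(response["JobFlowId"])
```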


Shades of Redis

Good piece from Charles Leifer highlighting a few alternative takes on the Redis key/data-structure store:

Recently I’ve learned about a few new Redis-like databases: Rlite, Vedis and LedisDB. Each of these projects offers a slightly different take on the data-structure server you find in Redis, so I thought that I’d take some time and see how they worked. In this post I’ll share what I’ve learned, and also show you how to use these databases with Walrus, as I’ve added support for them in the latest 0.3.0 release.

I’m particularly intrigued by the embedded Rlite store. Seems like something useful for situations slightly less relational than what SQLite can service.


Deep Cassandra

Andrew Montalenti relates parse.ly’s experience with Cassandra. Lots of interesting tidbits, but the money graf is this:

A well-seasoned technologist friend of mine was not at all surprised when I walked him through some of these issues we had with Cassandra. He said, “You honestly expected that adopting a data store at your scale would not require you to learn all of its internals?” He has a point. After all, we didn’t adopt Elasticsearch until we really grokked Lucene.


Upserts, mmmmmm!

I don’t know how I missed Craig Kerstiens’ post on upserts in PostgreSQL 9.5, but I’m glad they’re here.

If you’ve followed anything I’ve written about Postgres, you know that I’m a fan. At the same time you know that there’s been one feature that so many other databases have, which Postgres lacks and it causes a huge amount of angst for not being in Postgres… Upsert. Well the day has come, it’s finally committed and will be available in Postgres 9.5.
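For the record, the new syntax is INSERT … ON CONFLICT … DO UPDATE. A minimal sketch driven from psycopg2 against a made-up table:

```python
import psycopg2

# Minimal sketch of a PostgreSQL 9.5 upsert from psycopg2. The table and
# connection details are made up.
conn = psycopg2.connect("dbname=scratch")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS counters (
        name text PRIMARY KEY,
        hits integer NOT NULL
    )
""")

# Insert a row, or bump the counter if the key already exists.
cur.execute("""
    INSERT INTO counters (name, hits)
    VALUES (%s, %s)
    ON CONFLICT (name)
    DO UPDATE SET hits = counters.hits + EXCLUDED.hits
""", ("homepage", 1))

conn.commit()
cur.close()
conn.close()
```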


GearPump

Since my last extended run of blogging, I’ve really gotten into message system infrastructure and streaming data computation architectures. May have to kick the tires on GearPump.

GearPump is a lightweight real-time big data streaming engine. It is inspired by recent advances in the Akka framework and a desire to improve on existing streaming frameworks. … Per initial benchmarks we are able to process 11 million messages/second (100 bytes per message) with a 17ms latency on a 4-node cluster.

That seems like a lot of msgs/sec. Gotta see the specs on that cluster.

Via

https://twitter.com/bigdata/status/586190819297734656


Customizing IPython for Pandas

Link parkin’

Chris Moffit’s blog post doesn’t have a lot of detail, but the attendant IPython notebook looks chock-full of goodness.

The combination of IPython + Jupyter + Pandas makes it easy to interact with and display your data. Not surprisingly, these tools are easy to customize and configure for your own needs. This article summarizes some of the most useful and interesting options.
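The flavor of thing involved, with a handful of common pandas display options; these are generic examples, not necessarily the exact ones Moffit covers:

```python
import pandas as pd

# A few common display tweaks; generic examples, not necessarily the ones in
# the notebook.
pd.set_option("display.max_rows", 100)           # show more rows before truncating
pd.set_option("display.max_columns", 50)         # and more columns
pd.set_option("display.width", 120)              # wider frames before wrapping
pd.set_option("display.float_format", "{:,.2f}".format)  # friendlier floats

df = pd.DataFrame({"views": [1234567.891, 23456.7], "ctr": [0.0123, 0.0456]})
print(df)
```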


Ignite v Spark

I’m an unabashed Apache Spark fanboy, but it’s good intellectual hygiene to know about the technical alternatives. Apache Ignite is one that has slipped beneath my radar. Konstantin Boudnik contrasts Ignite and Spark:

Complimentary to my earlier post on Apache Ignite in-memory file-system and caching capabilities I would like to cover the main differentiation points of the Ignite and Spark. I see questions like this coming up repeatedly. It is easier to have them answered, so you don’t need to fish around the net for the answers.

Clearly not from a native English speaker, but definitely worth the read.


Get Famous With Spark

You too can win fame and fortune using Apache Spark! Helps to invent it and write up a great PhD dissertation.

Matei Zaharia won the 2014 Doctoral Dissertation Award for his innovative solution to tackling the surge in data processing workloads, and accommodating the speed and sophistication of complex multi-stage applications and more interactive ad-hoc queries. His work proposed a new architecture for cluster computing systems, achieving best-in-class performance in a variety of workloads while providing a simple programming model that lets users easily and efficiently combine them.

Go Bears!


Reconciling Streaming Jargon

I really enjoyed reading Martin Kleppmann’s treatise on varying communities and terminology related to stream processing:

Some people call it stream processing. Others call it Event Sourcing or CQRS. Some even call it Complex Event Processing. Sometimes, such self-important buzzwords are just smoke and mirrors, invented by companies who want to sell you stuff. But sometimes, they contain a kernel of wisdom which can really help us design better systems.

In this talk, we will go in search of the wisdom behind the buzzwords. We will discuss how event streams can help make your application more scalable, more reliable and more maintainable. Founded in the experience of building large-scale data systems at LinkedIn, and implemented in open source projects like Apache Kafka and Apache Samza, stream processing is finally coming of age.

On the day job, I’m on my third deployment of a message queueing system to support prototyping of stream processing algorithms. I’m really starting to appreciate the fundamental differences between various approaches. I can also say there’s no “right way” to do it. Each use case has to be looked at individually and there definitely will be some bespoke customization. Carefully define your correctness and performance guarantees and there’s a chance you’ll get it right.

Dispatches like Kleppmann’s though, are helpful in understanding what the landscape looks like and where you’d like to be.


Sprung For Pinner

I’ve gotten back into consistently collecting links of interest and started heavily using the fine Pinboard product. Capturing links from iOS devices was a drag though, having to go through a JavaScript bookmarklet. Felt sort of convoluted. Too much friction. But my iPhone and iPad are the places where I run across most of the links I want to stash.

Enter Pinner, a $4.99 app. A little pricey, but it looks very nice and provides a clean experience for browsing Pinboard. Best of all it adds a custom share sheet so posting a link to Pinboard is just one click away.


At What COST?

Frank McSherry published a useful reminder that one must carefully calibrate the need to deploy “big data” solutions:

Lots of people struggle with the complexities of getting big data systems up and running, when they possibly shouldn’t be using the systems in the first place. The data sets above are certainly not small (billions of edges), but still run just fine on a laptop. Much faster than the distributed systems, at least.

Here are two helpful guidelines (for largely disjoint populations):

  1. If you are going to use a big data system for yourself, see if it is faster than your laptop.
  2. If you are going to build a big data system for others, see that it is faster than my laptop.

This brings back memories of the CMU work on GraphChi, where they processed graphs with billions of edges on a Mac Mini.

I’ll have to dig up Frank’s paper once it gets published.


Well That Explains It

I’ve been seeing some weird host naming issues on my Mac OS X machine for work. Thought it was an honest-to-gosh conflict with another machine, but it turns out there’s glitchiness in the name discovery machinery Apple shipped with OS X 10.10:

Duplicate machine names. We use an old Mac named “nirrti” as a file- and iTunes server. In the pre-10.10 days, once in a blue moon nirrti would rename herself to “nirrti (2)”, presumably because it looked like another machine was already using the name “nirrti”. Under 10.10, this now happens a lot, sometimes getting all the way to nirrti (7). Changing back the computer name in the Sharing pane of the System Preferences usually doesn’t take. Apart from looking bad, this also makes opening network connections and playing iTunes content harder, as you need to connect to the right version of the name or nothing happens.

Good to know, but I wouldn’t go so far as to attempt the modifications described in the article. Seems like a recipe for later pain on further application and operating system upgrades.


GLA Podcast 49

https://twitter.com/djmarkfarina/status/552184777038520320

I am all over that. Will definitely be checking it out during tomorrow’s commute.

Fiending for a Mushroom Jazz 8 release though. It’s been over 3 years since Mushroom Jazz 7 hit the street.


spark-kafka

You can also turn a Kafka topic into a Spark RDD:

Spark-kafka is a library that facilitates batch loading data from Kafka into Spark, and from Spark into Kafka.

This library does not provide a Kafka Input DStream for Spark Streaming. For that please take a look at the spark-streaming-kafka library that is part of Spark itself.

This could come in handy to pre-ingest some data to build up some history before connecting to a Kafka data stream using Spark Streaming.
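And on the Spark Streaming side, the built-in connector looks roughly like this (a minimal sketch; the broker address and topic are placeholders, and the spark-streaming-kafka artifact has to be on the classpath):

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

# Minimal sketch of Spark Streaming's own Kafka connector (not the spark-kafka
# batch library above). Broker and topic names are placeholders, and the
# spark-streaming-kafka jar must be on the classpath.
sc = SparkContext(appName="kafka-dstream")
ssc = StreamingContext(sc, batchDuration=10)

stream = KafkaUtils.createDirectStream(
    ssc, ["events"], {"metadata.broker.list": "localhost:9092"}
)

# Each element is a (key, message) pair; count messages per micro-batch.
stream.map(lambda kv: kv[1]).count().pprint()

ssc.start()
ssc.awaitTermination()
```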


Elasticsearch and Spark

In the day job, I was casting about for ways to integrate Apache Spark with the open source search engine Elasticsearch. Basically, I had some megawads of JSON data which Elasticsearch happily inhales, but I needed a compute platform to work with the data. Spark is my weapon of choice.

Turns out there’s a really nice Elasticsearch Hadoop toolkit that includes making Spark RDDs out of Elasticsearch searches. I have to thank Sloan Ahrens for tipping me off with a nice clear explanation of putting the connector in action:

In this post we’re going to continue setting up some basic tools for doing data science. The ultimate goal is to be able to run machine learning classification algorithms against large data sets using Apache Spark™ and Elasticsearch clusters in the cloud.

… we will continue where we left off, by installing Spark on our previously-prepared VM, then doing some simple operations that illustrate reading data from an Elasticsearch index, doing some transformations on it, and writing the results to another Elasticsearch index.
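The PySpark end of that connector boils down to something like the following sketch, following the pattern in Ahrens’ posts and the es-hadoop docs; the node address and index/type names are placeholders, and the es-hadoop jar has to be on the classpath:

```python
from pyspark import SparkContext

# Sketch: read an Elasticsearch index into a Spark RDD via the
# elasticsearch-hadoop connector. The node address and index/type names are
# placeholders, and the es-hadoop jar must be on the classpath.
sc = SparkContext(appName="es-to-rdd")

es_conf = {
    "es.nodes": "localhost",
    "es.port": "9200",
    "es.resource": "myindex/mytype",   # index/type to read
}

rdd = sc.newAPIHadoopRDD(
    inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf=es_conf,
)

# Each element is a (document_id, document_fields) pair
print(rdd.take(1))
```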


Into the Blockchain

I’m way late to the Bitcoin party, but I think the notion of applications built from blockchain concepts will be a Big Deal (™). Andreas Antonopoulos’ new book Mastering Bitcoin is getting me up to speed. Here’s a taste:

One way to think about the blockchain is like layers in a geological formation, or a glacier core sample. The surface layers may change with the seasons, or even be blown away before they have time to settle. But once you go a few inches deep, geological layers become more and more stable. By the time you look a few hundred feet down, you are looking at a snapshot of the past that has remained undisturbed for millennia or millions of years. In the blockchain, the most recent few blocks may be revised if there is a chain recalculation due to a fork. The top six blocks are like a few inches of topsoil. But once you go deeper into the blockchain, beyond six blocks, blocks are less and less likely to change. After 100 blocks back, there is so much stability that the “coinbase” transaction, the transaction containing newly mined bitcoins, can be spent. A few thousand blocks back (a month) and the blockchain is settled history. It will never change.

From what I’ve read so far, the book is a nice blend of high level overview and technical details, with code samples no less.
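To make the immutability intuition concrete, here’s a toy sketch of my own (not one of the book’s samples) showing why rewriting a buried block means redoing every hash above it:

```python
import hashlib

# Toy illustration (mine, not the book's): each block commits to its
# predecessor's hash, so changing a deep block changes every hash above it.
# The "header" layout here is fake; only the double-SHA256 chaining idea is real.
def block_hash(prev_hash, payload):
    header = prev_hash + payload.encode()
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def build_chain(payloads):
    chain, prev = [], b"\x00" * 32
    for payload in payloads:
        prev = block_hash(prev, payload)
        chain.append(prev)
    return chain

original = build_chain(["tx-a", "tx-b", "tx-c", "tx-d"])
tampered = build_chain(["tx-a", "TAMPERED", "tx-c", "tx-d"])

# Every block from the altered one onward now hashes differently.
for depth, (o, t) in enumerate(zip(original, tampered)):
    print(depth, "match" if o == t else "differs")
```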


Yes, It Still Works

Yes this blog is still fully operational. As sole owner, proprietor, publisher, and author, I’m committing to more content in 2015. I guarantee it’s going to be a more interesting year in these here parts.


Spark, IPython, Kafka

A couple of good overviews from the fine folks at Cloudera:

First, Gwen Shapira & Jeff Holoman on “Apache Kafka for Beginners”

Apache Kafka is creating a lot of buzz these days. While LinkedIn, where Kafka was founded, is the most well known user, there are many companies successfully using this technology.

So now that the word is out, it seems the world wants to know: What does it do? Why does everyone want to use it? How is it better than existing solutions? Do the benefits justify replacing existing systems and infrastructure?

In this post, we’ll try to answer those questions. We’ll begin by briefly introducing Kafka, and then demonstrate some of Kafka’s unique features by walking through an example scenario. We’ll also cover some additional use cases and also compare Kafka to existing solutions.

And Uri Laserson on “How-to: Use IPython Notebook with Apache Spark”

Here I will describe how to set up IPython Notebook to work smoothly with PySpark, allowing a data scientist to document the history of her exploration while taking advantage of the scalability of Spark and Apache Hadoop.
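The core of that recipe, as I remember it from that era, is an IPython profile startup file along these lines (the py4j zip name and paths are version-specific assumptions, and this is the Python 2 idiom of the day):

```python
# ~/.ipython/profile_pyspark/startup/00-pyspark-setup.py
# Sketch of the common recipe of the era (not necessarily Laserson's exact
# steps): put PySpark on sys.path and bootstrap a SparkContext when the
# notebook starts. SPARK_HOME and the py4j zip name are assumptions.
import os
import sys

spark_home = os.environ["SPARK_HOME"]
sys.path.insert(0, os.path.join(spark_home, "python"))
sys.path.insert(0, os.path.join(spark_home, "python/lib/py4j-0.8.2.1-src.zip"))

# Runs pyspark/shell.py, which creates the `sc` SparkContext for the session
execfile(os.path.join(spark_home, "python/pyspark/shell.py"))
```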


Ramez Naam’s “Nexus”

Generally, I dislike the technothriller genre (cf. Daemon), but I quite enjoyed “Nexus” by Ramez Naam. The technical and philosophical aspects of bio-hacking were well done. I wasn’t particularly fond of the technothriller clash of nation states, American exceptionalism, and military/intelligence complex sycophancy tropes, but I knew what I was getting into. At least there was some interesting cultural diversity and introspection in the mix.

I may actually pick up the sequel, “Crux”.


Blaze Expressions

“tl;dr Blaze abstracts tabular computation, providing uniform access to a variety of database technologies”

Haven’t gotten a chance to dig in yet, but Continuum Analytics’ new Blaze Expressions library is worthy of further inspection:

Occasionally we run across a dataset that is too big to fit in our computer’s memory. In this case NumPy and Pandas don’t fit our needs and we look to other tools to manage and analyze our data. Popular choices include databases like Postgres and MongoDB, out-of-disk storage systems like PyTables and BColz and the menagerie of tools on top of the Hadoop File System (Hadoop, Spark, Impala and derivatives.) Each of these systems has their own strengths and weaknesses and an experienced data analyst will choose the right tool for the problem at hand. Unfortunately learning how each system works and pushing data into the proper form often takes most of the data scientist’s time.

The startup costs of learning to munge and migrate data between new technologies often dominate biggish-data analytics.

Blaze strives to reduce this friction. Blaze provides a uniform interface to a variety of database technologies and abstractions for migrating data.

I especially like the notion of exploiting multiple different frameworks such as in-memory (Pandas), SQL, NoSQL (MongoDB), and Big Data (Apache Spark) for tabular backend engines.
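A small taste of the expression style, as best I recall Blaze’s interface at the time (the API was evolving quickly, so treat this as a sketch; the CSV file and its columns are made up):

```python
from blaze import Data, by

# Sketch of Blaze expressions; the CSV file and its columns are made up, and
# the API was in flux at the time.
events = Data("events.csv")   # could just as easily be a SQL or MongoDB URI

# Total amount per user, expressed once; Blaze pushes the computation down to
# whichever backend actually holds the data (Pandas here).
totals = by(events.user, total=events.amount.sum())
print(totals)
```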


Apache Spark’s Unified Model

I’ve been a fan of Apache Spark (Go Bears!) for a while despite not having a real good opportunity to put the toolkit to practical use. Last year I got to AMPCamp 3 and the first Spark Summit. At the latter event, The AMPLab started singing a new tune about the benefits of a unified model for big data processing, moving on from selling in-memory computing.

Cloudera’s Gwen Shapira posted a good case study of the upside:

But the biggest advantage Spark gave us in this case was Spark Streaming, which allowed us to re-use the same aggregates we wrote for our batch application on a real-time data stream. We didn’t need to re-implement the business logic, nor test and maintain a second code base. As a result, we could rapidly deploy a real-time component in the limited time left — and impress not just the users but also the developers and their management.
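In code, the appeal boils down to writing an aggregate once and pointing it at both an RDD and a DStream. A minimal sketch (the paths, socket source, and toy aggregate are mine, not the case study’s actual pipeline):

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

# Minimal sketch of the unified model: one aggregate, reused for batch and
# streaming. Input path, socket source, and toy logic are placeholders.
def aggregate(rdd):
    # shared "business logic": count events per key
    return rdd.map(lambda line: (line.split(",")[0], 1)).reduceByKey(lambda a, b: a + b)

sc = SparkContext(appName="unified-model")

# Batch: run the aggregate over historical data
history = sc.textFile("hdfs:///events/archive/*.csv")
print(aggregate(history).take(10))

# Streaming: reuse the exact same function on each micro-batch
ssc = StreamingContext(sc, batchDuration=10)
live = ssc.socketTextStream("localhost", 9999)
live.transform(aggregate).pprint()

ssc.start()
ssc.awaitTermination()
```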


Items Of Note

A bit dated, but hopefully not completely useless:


Can’t Wait

https://twitter.com/GreatDismal/status/487685164768563202

Maybe I’ll have finished re-reading the Blue Ant trilogy by then.


Foxy 538

Welcome back Nate!

The breadth of our coverage will be much clearer at this new version of FiveThirtyEight, which is launching Monday under the auspices of ESPN. We’ve expanded our staff from two full-time journalists to 20 and counting. Few of them will focus on politics exclusively; instead, our coverage will span five major subject areas — politics, economics, science, life and sports.

What I like about this particular post (go read it all, seriously) is the level of humility Silver expresses. A lot of people can, and do, do the math and follow the predictive approaches he espouses. But putting it to the principled service of informing The Public, within the current dynamic of Internet social media, is innovative. Computer Assisted Reporting was just a precursor. As a recovering new media hack, I can appreciate all the roots of this iteration of his work.

Plus, I love this attitude:

It’s time for us to start making the news a little nerdier.

© 2008-2024 C. Ross Jam. Built using Pelican. Theme based upon Giulio Fidente’s original svbhack, and slightly modified by crossjam.