Portfolio Companies

The history of computing can largely be described as a series of architectural eras demarcated by a near-continuous ebb and flow between centralized and distributed computing. The first generation was centralized: mainframes and dumb terminals defined the era. All computing was done centrally by the mainframe, with the terminal merely displaying the results.

As endpoints (and networks) became more capable, with additional processing and storage, the client-server generation took hold. This architecture leveraged both endpoint and central capacity, giving users high-fidelity applications that communicated with centralized data stores in a seamless (or so intended) fashion. Unlocking inexpensive, readily available compute at the endpoint unleashed an entire generation of new applications and businesses such as Facebook, Twitter, Square and many others.

Over the past decade, cloud computing and software as a service (SaaS) have moved the needle back once again toward a centralized architecture. Processing is centralized in the cloud datacenter, and endpoints simply display the results, albeit in a more colorful way than their simple terminal predecessors. This is now changing.

Our mobile devices have become supercomputers in our hands. In processing power and storage capacity, these devices are 100x more capable than the PCs of 20 years ago. History has shown that as processing capacity becomes available, new applications and architectures happily utilize the excess. Enter the new world of cloud-client computing, where applications and compute services execute in a balanced and synchronized fashion between your mobile endpoint and the cloud.

Because smartphones are such beefy computers, developers have been rushing to take advantage of the available horsepower. Until now, this mostly meant writing native applications using Apple’s Xcode or Eclipse/ADT for Android. But native apps are a pain: they typically require separate front-end engineers, little code is shared between apps, and there is no built-in coordination with back-end (cloud) services. All of this work must be handcrafted on a per-app, per-OS basis, rendering it costly and error-prone. It’s a duct-tape-and-baling-twine approach to delivering a marginally better user experience.

That is, until Meteor. The Meteor platform brings back the goodness of the Web without sacrifice. Teams can share code across Web and mobile front ends, build super slick apps with a single code base across Android and iOS, and utilize a framework for integrating front-end and cloud operations. For the first time, there is a simple and elegant way to create applications that leverage the best of the client and the cloud, yielding applications that are high-fidelity and synchronized with the most advanced back-end/cloud services.
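The client/cloud coordination described above can be sketched in miniature. The Python below is purely illustrative (it is not Meteor code, and the class names are invented): it shows the latency-compensation pattern such frameworks popularized, where the client applies a write to its local cache immediately, then reconciles with the server’s authoritative result once the round trip completes.

```python
# Illustrative sketch of latency compensation, the client/cloud sync pattern
# frameworks like Meteor popularized. Not Meteor's API; all names are invented.

class Server:
    """The authoritative cloud-side store."""
    def __init__(self):
        self.docs = {}

    def insert(self, doc_id, doc):
        if doc_id in self.docs:      # authoritative validation
            return False
        self.docs[doc_id] = doc
        return True

class ClientCache:
    """A local mirror that updates optimistically, then reconciles."""
    def __init__(self, server):
        self.server = server
        self.docs = {}

    def insert(self, doc_id, doc):
        prev = self.docs.get(doc_id)
        self.docs[doc_id] = doc      # optimistic write: the UI updates instantly
        if not self.server.insert(doc_id, doc):   # the (simulated) round trip
            # Server rejected the write: roll the local cache back.
            if prev is None:
                del self.docs[doc_id]
            else:
                self.docs[doc_id] = prev

server = Server()
client = ClientCache(server)
client.insert("task-1", {"text": "ship it"})
client.insert("task-1", {"text": "dupe"})   # rejected by the server, rolled back
print(client.docs == server.docs)           # → True: client and cloud converge
```

The payoff is responsiveness: the user sees the write immediately, while the cloud remains the source of truth.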

Meteor delivers dramatic productivity improvements for developers who need to deliver great experiences across Web, iOS, Android and other mobile platforms, and it puts the computational “oomph” of smartphones to work on more than just rendering HTML. Meteor delights users with Web and app experiences that are fluid and fast.

Meteor has the technology to usher in the new world of cloud-client computing and we couldn’t be more proud to be investors in the team that makes all of this happen.


A while back I wrote a blog post suggesting that datacenter infrastructure would move from an on-premise operation to the cloud. It may have seemed counter-intuitive that the infrastructure itself would become available from the cloud, but that’s exactly what’s happening.

We’ve now seen everything from security to system management to storage evolve into as-a-service datacenter offerings, yielding all the benefits of SaaS — rapid innovation, pay-as-you-go, no hardware installation — while at the same time providing rich enterprise functionality.

As the datacenter gets disintermediated by the as-a-service paradigm, an interesting opportunity exists for the “big data” layer to move to the cloud. While big data is one of the newer parts of the infrastructure stack — and should have been architected and delivered as a service from the start — an estimated 90+% of Fortune 2000 companies carry out their big data analytics on-premise. These on-premise deployments are complex, hard to implement, and have already become something of a boat anchor for attempts to speed up big data analytics. They perfectly define the term “big drag.”

Without question the time has come to move big data to the cloud and deliver this part of the infrastructure stack as a service. Enter Cazena — our latest investment in the big data sector. The Cazena founders were leaders at Netezza, the big data appliance pioneer that went public and was acquired by IBM for $1.7 billion. Prat Moghe, founder & CEO of Cazena, previously led strategy, product and marketing at Netezza. Prat has teamed up with Jit Saxena, co-founder of Netezza, and Jim Baum, Netezza’s CEO — all leaders in the big data industry.

This team knows a great deal about big data and agility of deployment. Ten years ago (long before the term big data was being used), the Netezza team came up with a radically simple big data appliance. Appliances reduced the sheer complexity of data warehouse projects — the amount of time and resources it took to deploy and implement big data.

In the next decade, even faster deployment cycles will be required as businesses want data on-demand. Additionally, the consumption pattern has changed as the newer data stack built using Hadoop and Spark has broadened the use of data. A new cloud-based, service-oriented deployment model will be required. The Cazena team is uniquely positioned to make this a reality.

We could not be more thrilled to be backing the team that has the domain expertise and thought leadership to change the face of big data deployments. Big data is changing the way the world processes information, and Cazena is uniquely positioned to accelerate these efforts.

The last few years have seen the incredible growth of cloud computing. Applications and services that were developed for on-premise use have all found a new home in the cloud. As with most technology transformations, early adoption often occurs around a hobbyist developer community that then expands into more mainstream adoption and use. The cloud is no exception; as it grows it continues to empower developers to shape technology and change the world.

What started as a primitive, manual, and cumbersome infrastructure service has evolved into a variety of cloud vendors offering vast collections of services targeted at a number of different audiences — perhaps too vast. We have Database-as-a-Service, Compute-as-a-Service, Analytics-as-a-Service, Storage-as-a-Service, as well as deployment and network environments, and everything in between. It has left the developer community with more options, functionality, and cost than it needs or wants.

It’s time for the cloud to once again focus on developers, and that is where DigitalOcean comes in.

Started by Ben Uretsky and his brother Moisey, with the additional intellectual brawn of an eclectic group of passionate developers, DigitalOcean has focused on one goal: making developers’ lives easier by providing a powerful yet simple Infrastructure-as-a-Service.

SOURCE: Netcraft

The DigitalOcean service is purpose-built for the developer, offering automated web infrastructure for deploying web-based applications. The results have been eye-popping. From a standing start in December 2012, DigitalOcean has grown from 100 web-facing computers to over 50,000 today, making it one of the fastest-growing cloud computing providers in the world. It is now the ninth largest web infrastructure provider on the planet. With this round of funding, the management team intends to aggressively hire more in-house and remote software engineers to accelerate that already tremendous momentum.
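To make “purpose-built for the developer” concrete: spinning up a server (a “droplet”) is a single authenticated API call. The sketch below builds such a request in the shape of DigitalOcean’s present-day v2 REST API; the endpoint and the slug values are assumptions drawn from current documentation, not necessarily what existed when this was written, and the request is constructed but deliberately not sent.

```python
# Hypothetical sketch of provisioning a DigitalOcean droplet. Endpoint shape
# follows today's v2 API docs; region/size/image slugs are assumptions.
import json
import urllib.request

API_BASE = "https://api.digitalocean.com/v2"

def create_droplet_request(token, name, region, size, image):
    """Build (but do not send) a create-droplet request."""
    payload = {"name": name, "region": region, "size": size, "image": image}
    return urllib.request.Request(
        f"{API_BASE}/droplets",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = create_droplet_request(
    "YOUR_API_TOKEN", "web-1", "nyc3", "s-1vcpu-1gb", "ubuntu-22-04-x64"
)
print(req.full_url)                  # → https://api.digitalocean.com/v2/droplets
print(json.loads(req.data)["name"])  # → web-1
# To actually provision, you would send it: urllib.request.urlopen(req)
```

One HTTP POST, one running server: that is the level of simplicity the developer audience responded to.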


DigitalOcean is also taking a page out of the open source world, using and contributing to the most relevant open source projects. In the same way that GitHub or Facebook or Twitter offers open source as a service, DigitalOcean does the same. A few weeks back, I wrote a post presenting several viable models for open source deployments, and DigitalOcean is a case study. We are thrilled to be working with the DigitalOcean team as they continue to build a cloud that developers love.


Mobile devices have put supercomputers in our hands, and, along with their first cousin the tablet, they represent the largest shift in computing since the PC era. The capacity and power of these devices are in their infancy, and all expectations point to a doubling of capability every 18 months. In the same way that the PC era unlocked the imagination and innovation of an entire generation, we are seeing a repeat of that pattern with mobile devices at unprecedented scale.
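An 18-month doubling compounds faster than intuition suggests; a quick back-of-the-envelope calculation:

```python
def growth_factor(years, doubling_months=18):
    """Capability multiple after `years`, doubling every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

print(round(growth_factor(5)))    # → 10   (an order of magnitude in five years)
print(round(growth_factor(10)))   # → 102  (roughly 100x in a decade)
```

At that rate, a decade of hardware progress turns today’s flagship phone into tomorrow’s bargain-bin baseline.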

History has shown that as compute capacity becomes available, new applications and programs happily consume the excess. Additional memory, disk, and processing power always lead to substantially better and more innovative products, serving an ever-broader set of consumers. We saw it with the PC, and we will see it with mobile as the number of devices grows well past a billion. Yet-to-be-developed applications are waiting to take advantage of this processing capability, and it’s going to require mobile operating system innovation to expose this awesome power.

An operating system is one of the most fundamental and important pieces of software. Great operating systems leverage new hardware, provide a consistent way to run applications, and offer a foundation for all interaction with a computing system. For PCs, Windows is the dominant operating system; for servers, Linux is dominant; and for mobile, Android enjoys a staggering 82% market share (Gartner, November 2013). Like Linux (and unlike Windows), Android is open source, which means no one company owns the code. Anyone can improve Android by adding new functionality and tools.

One reason Android is winning is that open source spirit of additive innovation. Because consumers are clamoring for more personalization and customization options, the Android open source community has happily taken up the task of fulfilling that demand. What’s more, the growing enterprise trend of BYOD (bring your own device) is here to stay, and it will further add to that demand as consumers use their mobile devices at home, at work, and on the road — all requiring customized functionality.

Enter Cyanogen, our newest portfolio company, which is well on its way to building a new operating system, CyanogenMod (CM), leveraging core open source Android to provide the fastest, most innovative mobile operating system platform. CM takes the best of what Android offers and adds innovative features to create a clean yet customizable user experience. CM is 100% compatible with all Android applications, yet brings fabulous new capabilities to Android such as enhanced security, performance, device support, and personalization. Cyanogen has been powered by the open source community — led by its founder, Steve Kondik — ever since it launched four years ago. The community continues to work at a feverish pace, helping to bring both newly launched and existing Android devices up on the latest Cyanogen builds.

Today, tens of millions of devices are running Cyanogen worldwide, and we believe that CM has the opportunity to become one of the world’s largest mobile operating systems. As history suggests, companies such as Microsoft and Red Hat have done exceedingly well by being independent of hardware, and we believe this trend will accelerate in the mobile world. The rapid success of CM indicates a growing consumer desire for a fully compatible Android operating system that is truly independent of any hardware company or OEM. Consumers win because Cyanogen can launch updates more frequently, fix bugs faster, and deploy new features more regularly than OEMs, whose organizations are optimized for building fantastic hardware.

We’re incredibly excited to lead their Series B round of financing and to work with the Cyanogen team, a majority of whom have been “sourced” from their “open source” community! Their expertise in building Android products and their desire to create a world-class mobile user experience will guide their decisions as they continue building on their success to date. Software is eating the world, Android is eating mobile, and we think Cyanogen has only just finished its appetizer and is moving on to the entrée.

One of the holy grails of the storage market has been a piece of software that could eliminate the need for an external storage array. The software would provide all the capabilities of an enterprise-class storage device, install on commodity servers alongside applications, eliminate the need for a storage network, and provide shared storage semantics, high availability, and scale-out. With Maxta, the search for that holy grail ends here.

The external storage array and associated storage network have been a staple of enterprise computing for several decades.  Innovations in storage have been all about making the external storage array faster and more reliable.  Even with all the recent excitement of flash replacing spinning disk, the entire focus of the $30B storage market has been around incrementally improving the external array.   Incrementalism as opposed to (literally) thinking outside the box.

Maxta is announcing a revolutionary shift in storage.  Not only are storage arrays and networks eliminated, but, as a result, compute and storage are co-located.  This convergence keeps the application close to its data, improving performance, reliability, and simplicity.  A layer of software to replace a storage array sounds too good to be true, except Maxta has paying customers and production deployments, and has delivered several releases of their software prior to today’s announcement.

Maxta would not be possible without CEO Yoram Novick, a world-class expert in storage software and datacenter design. Yoram holds 25 patents and was previously CEO of Topio, a successful storage company that was acquired by NetApp several years ago. He’s a storage software genius, with a penchant for engineering design and feature completeness as opposed to fluffy marketing announcements and future promises. He’s the real deal and a true storage geek at heart.

When I met Yoram several years ago, he came to us with the radical idea to build a software layer to change the storage landscape.  Leverage commodity components and put all the hard stuff in software.  Within minutes, we decided to invest and we haven’t looked back since.  We are thrilled to be working with Yoram and team as they use software to deliver one of the holy grails of the storage market.

With all the recent innovations in flash storage design, you’d think we’d have a smooth path toward supporting storage requirements for new hyper-scale datacenters and cloud computing. However, nothing could be further from the truth! Existing storage architectures, despite taking advantage of flash, are doomed in the hyper-scale world. Simply put, storage has not evolved in 30 years, resulting in a huge disconnect between the requirements of the new datacenter and the capability of existing storage systems.

There are two fundamental problems right now: 1) existing storage does not scale for the hyper-scale datacenter and 2) traditional storage stacks have not been architected to take advantage of the recent innovations in flash.

Current storage systems don’t scale because they were designed in the mainframe era, a world where a single mainframe provided the compute and a handful of storage arrays hung off it to support data storage. This one-to-one architecture persists today, even as the compute side of the hyper-scale datacenter expands to hundreds or thousands of individual servers in enterprise datacenters, much like at Google or Amazon. As you can imagine, you get theoretically unlimited compute capacity only to be severely bottlenecked on the storage end of things.

Furthermore, while flash storage has become the hot new thing—super fast, energy efficient, with a smaller form factor—the other internal parts of the storage subsystem have not changed at all. Adding flash to an outdated system is like adding a jet engine to the Wright Brothers’ airplane: pretty much doomed to fail, despite the hype.

This brings me to Coho Data, a team I’ve worked closely with for years. The founding team includes Ramana Jonnala, Andy Warfield and Keir Fraser, superb product visionaries and architects with deep domain expertise in virtualization and systems, having built the XenSource open source virtualization stack and scaled it to support some of the biggest public clouds around. This team has built infrastructure software that has been used by hundreds of millions of end users and installed on millions of machines. By applying their expertise and adding key talent with network virtualization experience, they are challenging the fundamentals of storage.

A year after we funded their Series A, a year spent heads-down building product and piloting with customers, I’m really excited to share that Coho Data is today announcing a revolutionary storage design, built from the inside out to challenge how companies of all sizes think about storing and accessing their data. The team has rebuilt the entire storage array with new software and integrated networking to offer the fastest, most scalable storage system on the market, effectively turning the Wright Brothers’ airframe into an F-16 fighter jet. The Coho DataStream architecture supports the most demanding hyper-scale environments while optimizing for commodity flash, all with standard, extensible interfaces and simple integration. As hyper-scale datacenters become the new standard, monolithic storage arrays will go the way of the mainframe.

Coho Data is changing the storage landscape from the inside out and I could not be more thrilled to be part of the most exciting storage company of the cloud generation.

Today I’m excited to announce the company that will transform the networking industry for the cloud era, Cumulus Networks, which has been in stealth for over three years. We were seed investors in Cumulus Networks and later went on to lead their Series A.

In the last decade, the compute side of the datacenter was completely revolutionized by Linux and virtualization running on commodity servers. By untethering the software (i.e., the operating system) from the hardware, the Lintel (Linux and Intel) stack obviated the need for dedicated hardware solutions such as Sun Servers in one fell swoop, bringing radically new economics, performance, scale and innovation to datacenter and cloud environments. This shift became the foundation for the software-defined datacenter.

However, while the server has undergone a complete transformation, the networking stack has remained completely unchanged. In spite of all the recent excitement with OpenFlow and Software-Defined Networking, the OS running inside network gear—the networking switch—is still very much proprietary and tied to proprietary hardware. Today’s most “innovative” network gear resembles a last-generation Sun Server: proprietary, inflexible, expensive and difficult to maintain.

Enter Cumulus Networks, which brings Linux to networking and can be combined with software-defined networking (e.g., Nicira) to complete the transformation of the network stack for the cloud era. Just as Linux transformed the server, Cumulus Linux will transform the network by making proprietary hardware and software obsolete. Cloud and enterprise datacenters can now choose commodity hardware plus Cumulus Linux software to achieve cloud-scale networking that provides the flexibility and superior economics that only software can deliver.
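One concrete consequence: a Cumulus Linux switch exposes its front-panel ports as ordinary Linux network interfaces (conventionally named swp1, swp2, …), so they can be managed with standard Linux tooling rather than a proprietary CLI. The helper below is a hypothetical sketch that renders an ifupdown-style stanza bridging a set of ports; the function and its exact output format are illustrative, not a supported configuration tool.

```python
def interfaces_stanza(ports, bridge="br0"):
    """Render an ifupdown-style config stanza bridging a list of switch ports.

    The swpN names follow Cumulus Linux's convention for front-panel ports,
    but this helper is a hypothetical sketch, not a supported tool.
    """
    lines = []
    for port in ports:
        lines += [f"auto {port}", f"iface {port}", ""]
    lines += [f"auto {bridge}",
              f"iface {bridge}",
              f"    bridge-ports {' '.join(ports)}"]
    return "\n".join(lines)

# A switch port is configured exactly like any other Linux interface.
print(interfaces_stanza(["swp1", "swp2"]))
```

Because the switch is just Linux, the same configuration can be generated and deployed by whatever automation a shop already uses for its servers, which is precisely the economic point.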

Cumulus Networks would not be possible without the team behind the solution. The founding team, which includes JR Rivers and Nolan Leake, has deep expertise in networking, virtualization and cloud infrastructure. The team also includes a number of senior engineers from Juniper, as well as Cisco Fellows, an elite group of engineers responsible for the most innovative advancements in networking. The founders knew a secret: networking would no longer require proprietary hardware and software, and the shift to cloud-scale environments would require a new and modern approach. Cumulus Networks is bringing the Linux revolution to networking.

Our investment in Cumulus Networks underscores our excitement about the future of software-based networking. I am thrilled to be representing Andreessen Horowitz on the board of Cumulus Networks and look forward to seeing the transformation of older generation networking to the new architecture of the cloud era.