The history of computing can largely be described as a series of architectural eras demarcated by the near-continuous ebb and flow between centralized and distributed computing. The first generation was centralized, with mainframes and dumb terminals defining the era. All computing was done centrally by the mainframe, with the terminal merely displaying the resulting operations.

As endpoints (and networks) became more capable, with additional processing and storage, the client-server generation of computing took hold. This architecture leveraged both endpoint and central capacity, giving users high-fidelity applications that communicated with centralized data stores in a seamless (or so intended) fashion. Unlocking less expensive, readily available compute at the endpoint unleashed an entire generation of new applications and businesses such as Facebook, Twitter, Square, and many others.

Over the past decade, cloud computing and software as a service (SaaS) have moved the needle back once again toward a centralized architecture. Processing is centralized in the cloud datacenter, and endpoints simply display the resulting operations, albeit in a more colorful way than their simple terminal predecessors. This is now changing.

Our mobile devices have become supercomputers in our hands. These devices now pack 100x the processing power and storage capacity of PCs from 20 years ago. History has shown that as processing power becomes available, new applications and architectures happily utilize the excess capacity. Enter the new world of cloud-client computing, where applications and compute services are executed in a balanced and synchronized fashion between your mobile endpoint and the cloud.

Because smartphones are such beefy computers, developers have been rushing to take advantage of the available computing horsepower. Until now, this mostly meant writing native applications using Apple’s Xcode or Eclipse/ADT for Android. But native apps are a pain: they typically require separate front-end engineers, little code is shared between apps, and there is no concept of coordination with the back-end (cloud) services. All of this work must be handcrafted on a per-app and per-OS basis, rendering it costly and error-prone. It’s a duct tape and baling twine approach for delivering a marginally better user experience.

That is, until Meteor. The Meteor platform brings back the goodness of the Web without sacrifice. Teams can share code across Web and mobile front ends, build super slick apps with a single code base across Android and iOS, and utilize a framework for integrating front-end and cloud operations. For the first time, there is a simple and elegant way to create applications that leverage the best of the client and the cloud, yielding applications that are high-fidelity and synchronized with the most advanced back-end/cloud services.

Meteor delivers dramatic productivity improvements for developers who need to deliver great experiences across Web, iOS, Android and other mobile platforms and enables the computational “oomph” available on smartphones to do more than just render HTML. Meteor delights users with Web and app experiences that are fluid and fast.

Meteor has the technology to usher in the new world of cloud-client computing and we couldn’t be more proud to be investors in the team that makes all of this happen.

 

A while back I wrote a blog post suggesting that datacenter infrastructure would move from an on-premise operation to the cloud. It may have seemed counter-intuitive that the infrastructure itself would become available from the cloud, but that’s exactly what’s happening.

We’ve now seen everything from security to system management to storage evolve into as-a-service datacenter offerings, yielding all the benefits of SaaS — rapid innovation, pay-as-you-go, no hardware installation — while at the same time providing rich enterprise functionality.

As the datacenter gets dis-intermediated with the as-a-service paradigm, an interesting opportunity exists for the “big data” layer to move to the cloud. While big data is one of the newer parts of the infrastructure stack — and should have been architected and delivered as a service from the start — an estimated 90+% of Fortune 2000 companies carry out their big data analytics on-premise.  These on-premise deployments are complex, hard to implement, and have already become something of a boat anchor when it comes to attempts to speed up big data analytics. They perfectly define the term “big drag.”

Without question the time has come to move big data to the cloud and deliver this part of the infrastructure stack as a service. Enter Cazena — our latest investment in the big data sector. The Cazena founders were former leaders at Netezza, the big data appliance leader that went public and was acquired by IBM for $1.7 billion. Prat Moghe, founder & CEO of Cazena, previously led strategy, product and marketing at Netezza. Prat has teamed up with Jit Saxena, co-founder of Netezza, and Jim Baum, the CEO of Netezza — all leaders in the big data industry.

This team knows a great deal about big data and agility of deployment. Ten years ago (long before the term big data was being used), the Netezza team came up with a radically simple big data appliance. Appliances reduced the sheer complexity of data warehouse projects — the amount of time and resources it took to deploy and implement big data.

In the next decade, even faster deployment cycles will be required as businesses want data on-demand. Additionally, the consumption pattern has changed as the newer data stack built using Hadoop and Spark has broadened the use of data. A new cloud-based, service-oriented deployment model will be required. The Cazena team is uniquely positioned to make this a reality.

We could not be more thrilled to be backing the team that has the domain expertise and thought leadership to change the face of big data deployments. Big data is changing the way the world processes information, and Cazena is uniquely positioned to accelerate these efforts.

The mobile revolution has spread beyond the mini supercomputers in our hands all the way to the datacenter.

With our expanded use of smartphones comes increased pressure on the servers that drive these devices: The activity we see every day on our phones is a mere pinhole view into all that’s happening behind the scenes, in the massive cloud infrastructure powering all those apps, photo-shares, messages, notifications, tweets, emails, and more. Add in the billions of devices coming online through the Internet of Things — which scales with the number of new endpoints, not just the number of users — and you begin to see why the old model of datacenters built around PCs is outdated. We need more power. And our old models for datacenters are simply not enough.

That’s where mobile isn’t just pressuring, but actually changing the shape of the datacenter — displacing incumbents and creating new opportunities for startups along the way.

The promise of big data has ushered in an era of data intelligence. From machine data to human thought streams, we now collect so much data each day that 90% of the data in the world today was created in the last two years alone. In fact, every day we create 2.5 quintillion bytes of data — by some estimates that’s one new Google every four days, and the rate is only increasing. Our desire to use, interact with, and learn from this data will become increasingly important and strategic to businesses and society as a whole.

Yet, while we are collecting and storing massive amounts of data, our ability to analyze and make use of that data is stuck in information hell. Even our most modern tools reflect an older, batch-oriented era that relies on queries and specialized programs to extract information. The result is slow, complex, and time-consuming processes that struggle to keep up with an ever-increasing corpus of data. Quite often, the answers to our queries are long outdated by the time the system completes the task. While this may sound like a problem of 1970s mainframes and spinning tape, it is exactly how things work in even the most modern Hadoop environments of today.

More data means more insight, better decisions, better cures, better security, better predictions — but it requires re-thinking last-generation tools, architectures, and processes. The “holy grail” will allow any person or program to fluidly interact with data in an easy, real-time, interactive format — similar to a Facebook or Google search. Information must become a seamless and fundamental property of all systems, yielding new insights by learning from the knowns and predicting the unknowns.

That’s why we’re investing in Adatao, which is on the leading edge of this transformation, combining big compute and big data under one beautiful document user interface. This combination offers a remarkable system that sifts through massive amounts of data, aggregating and applying machine learning while hiding the complexities, and helping all users, for the first time, deal with big data analytics in a real-time, flexible, interactive way.

For example, a business user in the airline industry can ask Adatao’s system (in natural language) to predict future airline delay ratios by quickly exploring 20 years of arrival/departure data (124 million rows) to break down past delays by week, month, and cause. In the same way Google Docs allows teams all over the world to collaborate, Adatao allows data scientists and business users to collaborate on massive datasets, see the same views, and together produce a visual model in just three seconds.
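
To give a flavor of the kind of breakdown described above, here is a minimal sketch in pandas. It is not Adatao’s natural-language interface, and the file name and column names are hypothetical.

```python
# A rough sketch (not Adatao's actual interface) of breaking down airline
# delays by month and by cause. The file and column names are hypothetical.
import pandas as pd

# Assume a CSV of historical flight records with a scheduled departure
# timestamp, a delay in minutes, and a coded delay cause.
flights = pd.read_csv("flights.csv", parse_dates=["scheduled_departure"])

flights["delayed"] = flights["delay_minutes"] > 15            # delayed-flight flag
flights["month"] = flights["scheduled_departure"].dt.to_period("M")
flights["week"] = flights["scheduled_departure"].dt.to_period("W")

# Share of delayed flights per month and per delay cause.
delay_ratio_by_month = flights.groupby("month")["delayed"].mean()
delay_ratio_by_cause = flights.groupby("delay_cause")["delayed"].mean()

print(delay_ratio_by_month.tail())
print(delay_ratio_by_cause.sort_values(ascending=False))
```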

The Adatao software would not be possible if not for the incredible team behind the project. I first met Christopher Nguyen, founder and CEO, at a breakfast meeting in Los Altos and was blown away by his humble personality. I knew at that moment that I wanted to find a way to partner with him. Here’s a guy who grew up in Vietnam and came to the US with a desire to make a difference. Since then, Christopher has started several successful companies, served as engineering director of Google Apps, earned a PhD from Stanford and a BS from UC Berkeley, and received the prestigious “Google Founders Award”.

He’s assembled a crack technical team of engineers and PhDs in parallel systems and machine learning. They all want to change the world and solve the most pressing data and information issues of our generation.

I am honored to be joining the board and look forward to partnering with this incredibly talented team. Adatao’s approach, team, and spirit of innovation will usher in a new generation of real-time, information intelligence that we believe will be the future of big data.

A new architectural era in computing is upon us, and the datacenter is changing to accommodate it. The cloud generation of companies has ramped up its dominance and proven its models, and the legacy enterprise is close behind in making this massive shift. These new datacenters—as pioneered and designed by Facebook, Google, and Twitter—are defined by hyper-scale deployments of thousands of servers, requiring a new software architecture to manage and aggregate these systems. Mesosphere is that software, and we believe this architecture will be as disruptive to the datacenter as Linux and virtualization have been over the past decade.

Today’s application architectures and big data workloads are scale-out, stateless, and built to leverage the seemingly infinite processing capacity of the modern datacenter. These hyper-scale datacenters are the equivalent of giant supercomputers: they run massively parallel applications that serve millions of user requests a second. We are moving from a collection of servers running discrete, stateful applications to massive scale-out applications that treat the hardware as one giant server.

In that “giant server” view of the world, Mesosphere is the obvious foundation for this new cloud stack and adoption is scaling fast. Look under the datacenter hood in many forward-looking, hyper-scale environments, including Twitter, Airbnb, eBay, and OpenTable, and you will find Mesosphere.

The Future of the Datacenter is Aggregation (not Virtualization)

Ten years ago, virtual machines (VMs) revolutionized the datacenter. This was because while the servers were getting bigger and bigger, the apps running on them pretty much stayed the same size. In order to make better use of those large servers, it made sense to virtualize the machines so that you could run multiple applications on the same machine at the same time.

Today, aggregation is fomenting a similar revolution, because applications no longer fit on single machines. In today’s world, applications run at a much larger scale (millions of users, billions of data points, and in real time), and they are essentially large-scale distributed systems composed of dozens (or even thousands) of services running across all the machines (virtual and physical) in the datacenter. In this world, you want to stitch together all of the resources on those machines into one common pool from which all the applications and services can draw.

Aggregation has proven itself among the A-list of hyperscale companies, like Google and Twitter. They’ve demonstrated that it’s much more efficient to aggregate machines—pooling all of their resources—and then build applications against the datacenter behaving as a single machine.

Aggregation, and the tools to manage it at scale, is what Mesosphere is bringing to everybody —and it’s what we believe the future of the datacenter looks like.

The companies that buy into this architecture do not abandon virtualization, containers, or other approaches; these remain important infrastructure components. But the way they manage their entire datacenter will evolve beyond the duct-tape-and-band-aid, highly manual approach of scripting IT operations tasks and “recipes” and configuring dependencies each time a new application is brought online or a server goes down.

Mesos: From UC Berkeley to Reality

In 2009, Mesosphere Co-founder Florian Leibert was working at Twitter to scale the application in response to its exponential growth. At the time, he spotted a new open source technology that had been built at UC Berkeley called Mesos and he helped Twitter bring it into full production.

Today, almost all of Twitter’s infrastructure is built on top of Mesos, which is now an Apache open source project and is at the core of Mesosphere’s products. The Mesosphere stack, which includes Apache Mesos, is not a hypothetical technology. It’s highly mature and battle-tested, in large-scale production, running in both private datacenters and in public cloud environments. Other organizations using Mesos include: Hubspot, Airbnb, Atlassian, eBay, OpenTable, PayPal, Shopify, and Netflix.

Mesosphere is harnessing the core open source technology of Apache Mesos and making it possible for everyone to tap into its power. By building an entire ecosystem around Mesos, they are making it easy to install, operate, and manage. For developers, Mesosphere provides simple command-line and API access to compute clusters for deploying and scaling applications, without relying on IT operations. For IT operations, Mesos abstracts the most difficult low-level tasks related to deploying and managing services, virtual machines, and containers in scale-out cloud and datacenter environments, and provides true automation, fault tolerance, and improved server utilization for modern scale requirements. Finally, Mesos allows applications to move between different environments without any change to the application.
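
The post itself contains no code, but as a hedged illustration of the “API access for deploying and scaling applications” described above, here is a minimal sketch using Marathon, a scheduler commonly run on top of Mesos in the Mesosphere stack. The hostname, app id, and resource numbers are made up.

```python
# Illustrative only: deploying and scaling an app through Marathon's REST
# API, letting Mesos place instances anywhere in the aggregated cluster.
# The endpoint, app id, and resource numbers below are hypothetical.
import requests

MARATHON = "http://marathon.example.com:8080"  # hypothetical Marathon endpoint

app = {
    "id": "/hello-web",
    "cmd": "python3 -m http.server $PORT",
    "cpus": 0.25,       # fraction of a CPU per instance
    "mem": 64,          # MB of memory per instance
    "instances": 3,     # Mesos finds machines with spare capacity
}

# Create the app across the cluster.
requests.post(f"{MARATHON}/v2/apps", json=app).raise_for_status()

# Scale out later by simply updating the instance count.
requests.put(f"{MARATHON}/v2/apps/hello-web", json={"instances": 10}).raise_for_status()
```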

Mesosphere will help define the next generation datacenter. I am honored to be joining the board of a team of dedicated system-level software engineers who will change the face of enterprise computing.

The last few years have seen the incredible growth of cloud computing. Applications and services that were developed for on-premise use have all found a new home in the cloud. As with most technology transformations, early adoption often occurs around a hobbyist developer community that then expands into more mainstream adoption and use. The cloud is no exception; as it grows it continues to empower developers to shape technology and change the world.

What started as a primitive, manual, and cumbersome infrastructure service has evolved into a variety of cloud vendors offering vast collections of services targeted at a number of different audiences – perhaps too vast. We have Database-as-a-Service, Compute-as-a-Service, Analytics-as-a-Service, Storage-as-a-Service, as well as deployment and network environments, and everything in between. It has left the developer community with more options, functionality, and cost than it needs or wants.

It’s time for the cloud to once again focus on developers, and that is where DigitalOcean comes in.

Started by Ben Uretsky and his brother Moisey, with the additional intellectual brawn of an eclectic group of passionate developers, DigitalOcean has focused on one goal: making developers’ lives easier by providing a powerful yet simple Infrastructure-as-a-Service.

[Chart omitted. Source: Netcraft]

The DigitalOcean service is purpose-built for the developer, offering automated web infrastructure for deploying web-based applications. The results have been eye-popping. From a standing start in December 2012, DigitalOcean has grown from 100 web-facing computers to over 50,000 today, making it one of the fastest-growing cloud computing providers in the world. It is now the ninth-largest web infrastructure provider on the planet. With this round of funding, the management team intends to aggressively hire more in-house and remote software engineers to accelerate that already tremendous momentum.
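
As a hedged illustration of what “purpose-built for the developer” looks like in practice, here is a minimal sketch of creating a server (a “droplet”) through DigitalOcean’s v2 REST API. The token variable and the region, size, and image slugs are placeholders rather than values from the post; consult the API documentation for current ones.

```python
# Illustrative sketch: spinning up a droplet via DigitalOcean's v2 API.
# DO_TOKEN is a hypothetical environment variable holding an API token;
# the region, size, and image slugs are placeholders.
import os
import requests

API = "https://api.digitalocean.com/v2"
headers = {"Authorization": f"Bearer {os.environ['DO_TOKEN']}"}

droplet = {
    "name": "web-01",
    "region": "nyc3",             # placeholder region slug
    "size": "s-1vcpu-1gb",        # placeholder size slug
    "image": "ubuntu-22-04-x64",  # placeholder image slug
}

resp = requests.post(f"{API}/droplets", json=droplet, headers=headers)
resp.raise_for_status()
print(resp.json()["droplet"]["id"])  # id of the newly created server
```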

[Chart omitted. Source: Netcraft]

DigitalOcean is also taking a page out of the open source world, using and contributing to the most relevant open source projects. In the same way that GitHub or Facebook or Twitter offers open source as a service, DigitalOcean does the same. A few weeks back, I wrote a post presenting several viable models for open source deployments, and DigitalOcean is a case study. We are thrilled to be working with the DigitalOcean team as they continue to build a cloud that developers love.

NOTE: Chart data from Netcraft.

Open source software powers the world’s technology. In the past decade, there has been an inexorable adoption of open source in most aspects of computing. Without open source, Facebook, Google, Amazon, and nearly every other modern technology company would not exist. Thanks to an amazing community of innovative, top-notch programmers, open source has become the foundation of cloud computing, software-as-a-service, next generation databases, mobile devices, the consumer internet, and even Bitcoin.

Yet, with all that momentum, there’s a vocal segment of software insiders who preach the looming failure of open source software against competition from proprietary software vendors. The future for open source, they argue, is as also-ran software, relegated to niche projects. It’s the proprietary software vendors that will handle the really critical stuff.

So which is it? The success of technology companies built on open source alongside the apparent failure of open source companies is a head-scratcher. Both are true, but not for the reasons some would have you believe. The success or failure of open source lies not in the software itself – it’s definitely up to the tasks required of it – but in the underlying business model.

It started (and ended) with Red Hat

Red Hat, the Linux operating system company, pioneered the original open source business model. Red Hat gives away open source software for free but charges a support fee to those customers who rely on Red Hat for maintenance, support, and installation. As revenue began to roll into Red Hat, a race began among startups to develop an open source offering for each proprietary software counterpart and then wrap a Red Hat-style service offering around it. Companies such as MySQL, XenSource, SugarCRM, Ubuntu, and Revolution Analytics were born in this rush toward open source.

Red Hat is a fantastic company, and a pioneer in successfully commercializing open source. However, beyond Red Hat the effort has largely been a failure from a business standpoint. Consider that the “support” model has been around for 20 years, and Red Hat remains the only standalone public company built on offering an alternative to a proprietary counterpart. When you compare the market cap and revenue of Red Hat to Microsoft or Amazon or Oracle, even Red Hat starts to look like a lukewarm success. The overwhelming success of Linux is disproportionate to the performance of Red Hat. Great for open source, a little disappointing for Red Hat.

[Chart: Red Hat’s market cap and revenue compared with proprietary software vendors]

There are many reasons why the Red Hat model doesn’t work, but its key point of failure is that the business model simply does not generate enough revenue to fund ongoing investment. The consequence of the model is minimal product differentiation, resulting in limited pricing power and a corresponding lack of revenue. As shown below, the open source support model generates a fraction of the revenue of other licensing models. For that reason, it’s nearly impossible to properly invest in product development, support, or sales the way that companies like Microsoft or Oracle or Amazon can.

[Chart: revenue generated by the open source support model vs. other software licensing models]

And if that weren’t tough enough, pure open source companies have other factors stacked against them. Product roadmaps and requirements are often left to a distributed group of developers. Unless a company employs a majority of the inventors of a particular open source project, there is a high likelihood that the project never gains traction or that another company decides to fork the technology. Balancing a stable, controlled roadmap against innovating quickly enough to prevent a fork is a vicious problem for small organizations.

To make matters worse, the more successful an open source project, the more large companies want to co-opt the code base. I experienced this first-hand as CEO at XenSource, where every major software and hardware company leveraged our code base with nearly zero revenue coming back to us. We had made the product so easy to use and so important, that we had out-engineered ourselves. Great for the open source community, not so great for us.

If you think this is past history and not relevant, I see a similar situation occurring today with OpenStack, and it is likely happening with many other successful open source projects. As an open source company, you are not only competing with proprietary incumbents, you are competing with the open source community itself. It’s a veritable shit-show.

If you’re lucky and have a super-successful open source project, maybe a large company will pay you a few bucks for one-time support, or ask you to build a “shim” or a “foo” or a “bar.” If you are really lucky (as we were with XenSource), you might be acquired as a “strategic” acquisition. But, most open source companies don’t have that kind of luck, and the chances of going public and creating a large standalone company are pretty darn slim.

Even with all that stacked against them, we still see entrepreneurs pitching their companies as the “next Red Hat of…” Here is the problem with that vision: there has never been a “next Red Hat of…” It’s not to say we won’t see another Red Hat, but the odds are long and the path is littered with the corpses of companies that have tried the support model.

But there is a model that works.

Selling open source as a service

The winning open source model turns open source 1.0 on its head. By packaging open source into a service (as in cloud computing or software-as-a-service) or as a software or hardware appliance, companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.

Many of today’s most successful new companies rely on an ecosystem of standardized open source components that are generally re-used and updated by the industry at large. Companies that use these open source building blocks are more than happy to contribute to their ongoing success. These open source building blocks are the foundation of all modern cloud and SaaS offerings, and they are being monetized beautifully in many cases.

Depending on the company and the product, an organization may develop more open source software specific to their business or build some amount of proprietary software to complete the product offering. Amazon, Facebook, GitHub and scores of others mix open source components with their own proprietary code, and then sell the combination as a service.

This recipe – combining open source with a service or appliance model – is producing staggering results across the software landscape. Cloud and SaaS adoption is accelerating an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation.

Beyond SaaS, I would expect future models for open source monetization to emerge, which is great for the industry.

So what are you waiting for?

Build a big business on top of and around a successful platform by adding something of your own that is both substantial and differentiated. Take, for example, our national road and highway system. If you view it as the transportation platform, you start to see the host of highly differentiated businesses that have been built on top of it, ranging from FedEx to Tesla. The ridesharing service Lyft is building its business on top of that same transportation platform, as well as Amazon’s AWS platform.

If you extend that platform worldview, Red Hat’s support model amounts to selling a slightly better version of the road – in this case, the Linux operating system – which is already good enough for most people.

Sure, when you first launch a business built using open source components, it’s important to grow the size of the platform and cater to your early adopters to drive initial success. So you might start off looking a little like Red Hat. But if all goes well, you’ll start to look more like Facebook, GitHub, Amazon, or Cumulus Networks as you layer your own special something on top of the platform and deliver it as a service, or package it as an appliance. Becoming the next Red Hat is an admirable goal, but when you look at the trends today, maybe even Red Hat should think about becoming the next Amazon.