We need a third mobile OS to move us towards a more open environment for anyone to innovate, without permission. Especially as mobile phones have begun to democratize and broaden the reach of technology around the world… why shouldn’t we then also democratize the mobile operating system?
A while back I wrote a blog post suggesting that datacenter infrastructure would move from an on-premise operation to the cloud. It may have seemed counter-intuitive that the infrastructure itself would become available from the cloud, but that’s exactly what’s happening.
We’ve now seen everything from security to system management to storage evolve into as-a-service datacenter offerings, yielding all the benefits of SaaS — rapid innovation, pay-as-you-go, no hardware installation — while at the same time providing rich enterprise functionality.
As the datacenter gets disintermediated by the as-a-service paradigm, an interesting opportunity exists for the “big data” layer to move to the cloud. While big data is one of the newer parts of the infrastructure stack — and should have been architected and delivered as a service from the start — an estimated 90+% of Fortune 2000 companies carry out their big data analytics on-premise. These on-premise deployments are complex, hard to implement, and have already become something of a boat anchor when it comes to attempts to speed up big data analytics. They perfectly define the term “big drag.”
Without question the time has come to move big data to the cloud and deliver this part of the infrastructure stack as a service. Enter Cazena — our latest investment in the big data sector. The Cazena founders were former leaders at Netezza, the big data appliance leader that went public and was acquired by IBM for $1.7 billion. Prat Moghe, founder & CEO of Cazena, previously led strategy, product and marketing at Netezza. Prat has teamed up with Jit Saxena, co-founder of Netezza, and Jim Baum, the CEO of Netezza — all leaders in the big data industry.
This team knows a great deal about big data and agility of deployment. Ten years ago (long before the term big data was being used), the Netezza team came up with a radically simple big data appliance. Appliances reduced the sheer complexity of data warehouse projects — the amount of time and resources it took to deploy and implement big data.
In the next decade, even faster deployment cycles will be required as businesses want data on-demand. Additionally, the consumption pattern has changed as the newer data stack built using Hadoop and Spark has broadened the use of data. A new cloud-based, service-oriented deployment model will be required. The Cazena team is uniquely positioned to make this a reality.
We could not be more thrilled to be backing the team that has the domain expertise and thought leadership to change the face of big data deployments. Big data is changing the way the world processes information, and Cazena is uniquely positioned to accelerate these efforts.
I was introduced to Paula Long, the CEO of DataGravity, about the same time I arrived at a16z (nearly four years ago). Every time a new storage deal was pitched to us, I would call Paula to get her thoughts. Given my own background in storage and systems software, I was blown away by Paula’s depth of knowledge in the space. Not only did she articulate every technical nuance of the project we discussed, she had an uncanny feel for what was likely to happen in the future.
Paula casually rattled off every company doing similar things, price and performance of solid-state storage, file systems, volume managers, device drivers, block interfaces, metadata, NAS, SAN, objects, and security. It was enough to make my head spin, yet she analyzed every situation with a clarity that I had never seen before. I had known Paula as the founder of EqualLogic (her prior storage company, acquired by Dell for $1.4 billion in 2008), but her insight and wisdom about everything storage far exceeded that of anyone I had met. When she came to me with her own ideas for a new storage company there was no hesitation. Betting on Paula would result in something really special. In December 2012 we invested in DataGravity.
When we talked about DataGravity in those days, Paula would tell me how the real future of storage was unlocking the information residing in the gazillions of files and terabytes of unstructured data that organizations store but never use. She articulated that most other storage companies were in a race to zero; chasing the faster and cheaper angle, with their solid-state storage and incremental innovation. “Table stakes,” she would say. “DataGravity is going to do something never done before. We are going to unlock the value of storage. Storage is the obvious place for intelligence to be surfaced.” This all sounded great, but – even with my background in the space – I never fully appreciated what Paula had envisioned. She had a secret.
Today, DataGravity is unveiling the world’s first data-aware storage system. The system is quite simply revolutionary. We saw a demonstration of the system’s capability at a board meeting a few months ago, and that is when it all came together for me. This was not some incremental system that everyone else was building, but an entirely new way of managing storage and information. I left the board meeting thinking that all storage systems in the future would have elements of the DataGravity concepts. It was truly new thinking.
The secret sauce DataGravity brings to the market is making dumb storage smart, all in a single system. DataGravity is both a primary storage array and an analytics system combined into one. The combination — without any performance or operational penalty — means, for the first time, that organizations can use their primary storage for file storage, IT operations, AND analytics at the point of storage. “Data-aware” means indexing and giving storage intelligence before it is stored. Instead of having dedicated and expensive secondary systems for analytics, operations and data analysis, DataGravity does it all in one place.
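To make the “data-aware” idea concrete, here is a toy sketch of what indexing at the point of storage might look like. This is purely illustrative — it is not DataGravity’s actual implementation, and every class and method name here is hypothetical — but it shows the key inversion: the index is built on the write path, so search and governance questions can be answered the moment data lands, with no secondary analytics system.

```python
from collections import defaultdict

class DataAwareStore:
    """Toy storage layer that indexes content on the write path,
    so search is available the moment a file is stored."""

    def __init__(self):
        self.blocks = {}               # path -> raw bytes (primary storage)
        self.index = defaultdict(set)  # term -> set of paths, built at write time
        self.owners = {}               # path -> writing user (for governance queries)

    def write(self, path, data, user):
        # Index BEFORE persisting: no after-the-fact ETL pass is needed.
        for term in data.decode("utf-8", errors="ignore").lower().split():
            self.index[term].add(path)
        self.owners[path] = user
        self.blocks[path] = data

    def search(self, term):
        return sorted(self.index.get(term.lower(), set()))

    def files_owned_by(self, user):
        return sorted(p for p, u in self.owners.items() if u == user)

store = DataAwareStore()
store.write("/plans/q3.txt", b"Revenue forecast and hiring plan", user="paula")
store.write("/eng/roadmap.txt", b"Storage roadmap and hiring notes", user="alice")
print(store.search("hiring"))         # both files found, no separate analytics system
print(store.files_owned_by("paula"))  # ownership answered by the storage layer itself
```

A real system would index far richer signals (access patterns, content types, change history), but the design choice is the same: intelligence is captured where the data already lives, rather than copied out to a dedicated secondary system.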
DataGravity is about to change the way we think about storage. From the demographics of data, to data security, to searching and trend information, the system will unlock an entire class of capabilities that we have not yet begun to comprehend. For example, imagine knowing when a file is being written or corrupted, before it is accessed. Or being able to identify subject-matter experts in an organization based on who is writing the most content on what and when. Or determining data ownership and control and correlating this with active or inactive employees. All this from a “storage” system.
So here we are today at an amazing inflection point in the history of storage. Twenty years from now, we’ll look back at this day as the day storage went from being dumb to being smart. The day that transformed the way the world stores its information. Just as Paula predicted, and just as Paula knew.
The mobile revolution has spread beyond the mini supercomputers in our hands all the way to the datacenter.
With our expanded use of smartphones comes increased pressure on servers to help drive these devices: The activity we see every day on our phones is a mere pinhole view into all that’s happening behind the scenes, in the massive cloud infrastructure powering all those apps, photo-shares, messages, notifications, tweets, emails, and more. Add in the billions of devices coming online through the Internet of Things — which scales through the number of new endpoints, not just the number of users — and you begin to see why the old model of datacenters built around PCs is outdated. We need more power. And our old models for datacenters are simply not enough.
That’s where mobile isn’t just pressuring, but actually changing the shape of the datacenter — displacing incumbents and creating new opportunities for startups along the way.
The promise of big data has ushered in an era of data intelligence. From machine data to human thought streams, we are now collecting more data each day — so much so that 90% of the data in the world today has been created in the last two years alone. In fact, every day we create 2.5 quintillion bytes of data — by some estimates that’s one new Google every four days, and the rate is only increasing. Our ability to use, interact with, and learn from this data will become increasingly important and strategic to businesses and society as a whole.
Yet, while we are collecting and storing massive amounts of data, our ability to analyze and make use of the data is stuck in information hell. Even our most modern tools reflect an older, batch-oriented era that relies on queries and specialized programs to extract information. The result is slow, complex, and time-consuming processes that struggle to keep up with an ever-increasing corpus of data. Quite often, answers to our queries are long outdated before the system completes the task. While this may sound like a problem of 1970s mainframes and spinning tape, this is exactly how things work in even the most modern Hadoop environments of today.
More data means more insight, better decisions, better cures, better security, better predictions — but it requires re-thinking last-generation tools, architectures, and processes. The “holy grail” will allow all people or programs to fluidly interact with their data in an easy, real-time, interactive format — similar to Facebook search or Google search. Information must become a seamless and fundamental property of all systems, yielding new insights by learning from the knowns and predicting the unknowns.
That’s why we’re investing in Adatao, which is on the leading edge of this transformation by combining big compute and big data under one beautiful document user interface. This combination offers a remarkable system that sifts through massive amounts of data, aggregating and machine-learning, while hiding the complexities and helping all users, for the first time, to deal with big data analytics in a real-time, flexible, interactive way.
For example, a business user in the airline industry can ask (in natural language) Adatao’s system to predict future airline delay ratios by quickly exploring 20 years of arrival/departure data (124 million rows of data) to break down past delays by week, month, and cause. In the same way Google Docs allows teams all over the world to collaborate, Adatao allows data scientists and business users to collaborate on massive datasets, see the same views, and together produce a visual model in just three seconds.
The Adatao software would not be possible if not for the incredible team behind the project. I first met Christopher Nguyen, founder and CEO, at a breakfast meeting in Los Altos and was blown away by his humble personality. I knew at that moment that I wanted to find a way to partner with him. Here’s a guy who grew up in Vietnam and came to the US with a desire to make a difference. Since then, Christopher has started several successful companies, was engineering director of Google Apps, earned a PhD from Stanford and a BS from UC Berkeley, and is a recipient of the prestigious “Google Founders Award”.
He’s assembled a crack technical team of engineers and PhDs in parallel systems and machine learning. They all want to change the world and solve the most pressing data and information issues of our generation.
I am honored to be joining the board and look forward to partnering with this incredibly talented team. Adatao’s approach, team, and spirit of innovation will usher in a new generation of real-time, information intelligence that we believe will be the future of big data.
A new architectural era in computing is upon us, and the datacenter is changing to accommodate it. The cloud generation of companies has ramped up its dominance and proven its models, and the legacy enterprise is close behind in making this massive shift. These new datacenters — as pioneered and designed by Facebook, Google, and Twitter — are defined by hyper-scale deployments of thousands of servers, requiring a new software architecture to manage and aggregate these systems. Mesosphere is that software, and we believe this architecture will be as disruptive to the datacenter as Linux and virtualization have been over the past decade.
Today’s application architectures and big data workloads are scale-out, stateless, and built to leverage the seemingly infinite processing capacity of the modern datacenter. These modern hyper-scale datacenters are the equivalent of giant supercomputers: they run massively parallel applications that serve millions of user requests a second. We are moving from a collection of servers running discrete, stateful applications, to massive scale-out applications that treat the hardware as one giant server.
In that “giant server” view of the world, Mesosphere is the obvious foundation for this new cloud stack and adoption is scaling fast. Look under the datacenter hood in many forward-looking, hyper-scale environments, including Twitter, Airbnb, eBay, and OpenTable, and you will find Mesosphere.
The Future of the Datacenter is Aggregation (not Virtualization)
Ten years ago, virtual machines (VMs) revolutionized the datacenter. This was because while the servers were getting bigger and bigger, the apps running on them pretty much stayed the same size. In order to make better use of those large servers, it made sense to virtualize the machines so that you could run multiple applications on the same machine at the same time.
Today, aggregation is fomenting a similar revolution: applications no longer fit on single machines. In today’s world, applications run at a much larger scale (millions of users, billions of data points, and in real time), and they are essentially large-scale distributed systems, composed of dozens (or even thousands) of services running across all the machines (virtual and physical) in the datacenter. In this world, you want to stitch together all of the resources on those machines into one common pool from which all the applications and services can draw.
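Here is a toy sketch of what that “one common pool” means in practice. This is not Mesos — real schedulers handle failures, fairness, and constraints — but it captures the core idea: the caller asks the pool for resources, never for a particular machine.

```python
class Machine:
    """A single physical or virtual server with spare capacity."""
    def __init__(self, name, cpus, mem_gb):
        self.name, self.cpus, self.mem_gb = name, cpus, mem_gb

class Pool:
    """Aggregates many machines into one logical 'giant server'."""
    def __init__(self, machines):
        self.machines = machines
        self.placements = []  # (task, machine name)

    def total_capacity(self):
        # What the applications see: one big pool, not individual boxes.
        return (sum(m.cpus for m in self.machines),
                sum(m.mem_gb for m in self.machines))

    def place(self, task, cpus, mem_gb):
        # First-fit placement: the caller never names a machine.
        for m in self.machines:
            if m.cpus >= cpus and m.mem_gb >= mem_gb:
                m.cpus -= cpus
                m.mem_gb -= mem_gb
                self.placements.append((task, m.name))
                return m.name
        raise RuntimeError(f"pool exhausted for task {task}")

pool = Pool([Machine("node-1", 8, 32), Machine("node-2", 8, 32)])
print(pool.total_capacity())                 # the app sees (16, 64), not two boxes
pool.place("web", cpus=6, mem_gb=8)          # lands on node-1
pool.place("analytics", cpus=6, mem_gb=16)   # node-1 is now too small, so node-2
```

The design point is that placement decisions move out of human runbooks and into software, which is what makes thousands of machines operable as a single resource.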
Aggregation has proven itself in the A-lists of hyperscale companies, like Google and Twitter. They’ve demonstrated that it’s much more efficient to aggregate machines—pooling all of the resources—and then build applications against the datacenter behaving as a single machine.
Aggregation, and the tools to manage it at scale, is what Mesosphere is bringing to everybody —and it’s what we believe the future of the datacenter looks like.
The companies that buy into this architecture do not abandon virtualization, containers, or other approaches. These become important infrastructure components. But the way they manage their entire datacenter will evolve beyond the duct-tape-and-band-aid, highly manual approach of scripting IT operations tasks and “recipes” and configuring dependencies each time a new application is brought online or a server goes down.
Mesos: From UC Berkeley to Reality
In 2009, Mesosphere Co-founder Florian Leibert was working at Twitter to scale the application in response to its exponential growth. At the time, he spotted a new open source technology that had been built at UC Berkeley called Mesos and he helped Twitter bring it into full production.
Today, almost all of Twitter’s infrastructure is built on top of Mesos, which is now an Apache open source project and is at the core of Mesosphere’s products. The Mesosphere stack, which includes Apache Mesos, is not a hypothetical technology. It’s highly mature and battle-tested, in large-scale production, running in both private datacenters and in public cloud environments. Other organizations using Mesos include: Hubspot, Airbnb, Atlassian, eBay, OpenTable, PayPal, Shopify, and Netflix.
Mesosphere is harnessing the core open source technology of Apache Mesos, and making it possible for everyone to tap into its power. By building an entire ecosystem around Mesos, they are making it easy to install, operate, and manage. For developers, Mesosphere provides simple command-line and API access to compute clusters for deploying and scaling applications, without relying on IT operations. For IT operations, Mesos abstracts the most difficult low-level tasks related to deploying and managing services, virtual machines, and containers in scale-out cloud and datacenter environments, and provides true automation, fault tolerance, and server utilization for modern scale requirements. Finally, Mesos allows applications to move between different environments without any change to the application.
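As an illustration of that developer-facing workflow, here is a sketch of deploying an app through a Marathon-style REST API (Marathon is the scheduler in the Mesosphere stack; the endpoint URL is hypothetical and the field names are best-effort assumptions, so treat this as a shape, not a reference). The point is what is absent: no machine names, no SSH sessions, no IT ticket — just a description of what to run and how much of the pool it needs.

```python
import json
from urllib import request

MARATHON = "http://marathon.example.com:8080"  # hypothetical cluster endpoint

def make_deploy_request(app_def, base_url=MARATHON):
    """Build the POST that asks the scheduler to run an app somewhere in the pool."""
    return request.Request(
        base_url + "/v2/apps",
        data=json.dumps(app_def).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

app = {
    "id": "/hello-web",
    "cmd": "python3 -m http.server $PORT",
    "cpus": 0.5,      # resources requested from the pool, not from a named machine
    "mem": 128,
    "instances": 3,   # the scheduler spreads these across the datacenter
}

req = make_deploy_request(app)
print(req.full_url)  # http://marathon.example.com:8080/v2/apps
# Scaling out is a one-field change ("instances": 10) followed by an update
# call -- the scheduler, not an operator, decides where the new copies run.
```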
Mesosphere will help define the next generation datacenter. I am honored to be joining the board of a team of dedicated system-level software engineers who will change the face of enterprise computing.
“Veterans” are not the first thing that comes to mind when one thinks of Silicon Valley and tech. But there’s actually a growing community of veterans in tech here, and leadership is top of mind for them. Which makes sense when you think about what the military does: It’s all about putting the right foundation, the right process, the right people in place. That’s what a hyper-growth company needs, as LinkedIn CEO Jeff Weiner observes.
I interviewed Weiner at our recent Veterans in Technology Leadership event, which connected veterans in our network to portfolio CEOs. Weiner, who has had the chance to meet with some Delta Force teams, SEALs and senior military leaders, said, “If you ask the common person on the street if a grizzled vet is going to be compassionate, they’re going to be thinking about warfare, about the aggression of war. But I think that’s a huge misconception. I’ve been wildly impressed by the caliber and the integrity and the humanity of these people. That’s what makes great leaders.”
There’s a very clear distinction between managers and leaders. “Managers essentially tell people what to do, but leaders inspire them to do it.” Here are some more highlights from our conversation…
PL: Let’s start with your leadership philosophy.
JW: Inspiration lies at the heart of leadership. There are three ways for leaders to inspire: (1) Possess a clarity of vision. (2) Hold to the courage of your conviction. (3) Have the ability to effectively communicate both of the above. If you’re not forthcoming with information, you’re going to create a problem. It’s almost an inherent conflict, because your employees are not going to feel trusted.
PL: How does trust play into how you and your colleagues work together?
JW: When people say, “Just trust me,” and you’ve never met them before, it’s not going to happen. Trust takes a long time to build up. It can be lost in milliseconds now, literally milliseconds. I’m not saying that figuratively.
Trust is the bedrock of everything that we do, quite literally. So, if we lose the trust of our members, we’re done. We’re completely done. An old friend once taught me that trust is consistency over time. It’s a simple formula for a complex thing.
PL: What changes did you go through as LinkedIn went from a small company to a large company?
JW: The first continuum is the difference between problem solving and coaching. If there’s a day-to-day problem and you’re just diving in, falling back to the instinct that got you to that place to begin with, you’re never going to scale. You will never, ever scale. If you’re really doing your job as a CEO, you’re going to leave your people with the tools to be able to coach their teams and so forth.
The second continuum is tactical execution on the one end, and strategic thinking and proactive thinking on the other. By the time you’re at 300 people, you’d better accept that you’re going to have competition. If you are just reacting to them, it’s too late. Trust me, the competition is thinking strategically. You’ve got to be constantly thinking about what’s next, by looking out three to five years and working your way back in terms of what’s going to be necessary to get there.
PL: What was your personal journey as a leader?
JW: I think I made a very natural mistake that a lot of us make, which is to project my own worldview onto other people. That if I thought a certain way, why didn’t they think the same way? And if I did things a certain way, why didn’t they do things the same way?
What I failed to realize back then was that just because I enjoy that part of the business, or I may have a certain facility with that part of the business, doesn’t mean others need to. And I need to take the time and understand where they’re coming from. This is my first principle of management, which is managing compassionately. And that means putting yourself in someone else’s shoes and seeing the world through their lens, their perspective.
PL: What skills do you look for in team members?
JW: The holy grail is the “five-tool” player:
1. It starts with technology vision. Because technology drives essentially everything we do, you have to understand where it’s going and how it’s going to impact society.
2. The next tool is product sensibility. You need to be able to harness that vision and meet unmet needs in the marketplace.
3. The third is business acumen. If you don’t have a sustainable business model, it’s not going to go anywhere over time.
4. The fourth is leadership and the ability to evangelize.
5. The fifth and last is the most important tool, and it’s resourcefulness. It’s just getting shit done.
If you find people with more than one of these skills and they’re superlative, do everything you can to hire them. Anyone with close to four skills is a superstar. People with all five are the ones who change the world.
PL: Are we as a nation doing enough to create not just jobs, but the right jobs?
JW: This is one of the most significant issues of our time. Youth-based unemployment in the country is 2X that of general unemployment (6.7 percent), so roughly 13 percent. There are approximately 73 million unemployed youth between the ages of 15 and 24 on a global basis. We have to get this right and we’re not: I think we’re still training our youth for the jobs of a prior economy that no longer exists — not the jobs that are, or will be.
So, what can we do? We can start with vocational training and identifying where the jobs are. There are 3.8 million available jobs in the country and some people would say it’s even higher than that. That number has risen every year since 2008, despite the fact we have all this unemployment.
I’m not just talking about becoming a software engineer in the valley, because training for computer science degrees and that kind of thing may take a little longer. Of the 4 million or so jobs out there, you’ve got a lot of jobs that are in retail, in real estate, or in construction. But the people looking for work may not even know what jobs are available, and where they are.
It’s about identifying those jobs wherever they are in the country, and making sure we have the right resources to give people the right skills so they can make themselves available for the right jobs. I’d love for us to be investing more of our collective time, energy, and resources in finding the way.
PL: What role will education need to play here?
JW: I think most people have familiarized themselves with the fact that our education system is highly antiquated. Very different skills are required for the knowledge economy. Our education system needs less rote learning and more critical reasoning, creative problem solving, and collaboration. Even when we were in primary school, individuality was the model we were taught: taking tests as individuals, doing individual work, being graded individually.
How many of you are going to be successful by working individually? There’s something a little out of whack when we have a school system where every kid is being treated as an individual.
I think compassion is the single most important thing you can teach a child, seriously. I don’t mean that in some spiritual, new-age way. I mean it, period. In a more global, interdependent, interconnected society, compassion should be taught. Just like reading, just like math.
The last few years have seen the incredible growth of cloud computing. Applications and services that were developed for on-premise use have all found a new home in the cloud. As with most technology transformations, early adoption often occurs around a hobbyist developer community that then expands into more mainstream adoption and use. The cloud is no exception; as it grows it continues to empower developers to shape technology and change the world.
What started as a primitive, manual, and cumbersome infrastructure service has evolved into a variety of cloud vendors offering vast collections of services targeted at a number of different audiences – perhaps too vast. We have Database-as-a-Service, Compute-as-a-Service, Analytics-as-a-Service, Storage-as-a-Service, as well as deployment and network environments, and everything in between. It has left the developer community with more options, functionality, and cost than it needs or wants.
It’s time for the cloud to once again focus on developers, and that is where DigitalOcean comes in.
Started by Ben Uretsky and his brother Moisey, with the additional intellectual brawn of an eclectic group of passionate developers, DigitalOcean has focused on one goal: making developers’ lives easier by providing a powerful, yet simple, Infrastructure-as-a-Service.
The DigitalOcean service is purpose-built for the developer, offering automated web infrastructure for deploying web-based applications. The results have been eye-popping. From a standing start in December 2012, DigitalOcean has grown from 100 web-facing computers to over 50,000 today, making it one of the fastest growing cloud computing providers in the world. It is now the ninth largest web infrastructure provider on the planet. With this round of funding, the management team intends to aggressively hire more in-house and remote software engineers to accelerate that already tremendous momentum.
DigitalOcean is also taking a page out of the open source world and is using and contributing to the most relevant open source projects. In the same way that GitHub or Facebook or Twitter offers open source as a service, DigitalOcean does the same. A few weeks back, I wrote a post presenting several viable models for open source deployments, and DigitalOcean is a case study. We are thrilled to be working with the DigitalOcean team as they continue to build a cloud that developers love.
NOTE: Chart data from Netcraft.
Open source software powers the world’s technology. In the past decade, there has been an inexorable adoption of open source in most aspects of computing. Without open source, Facebook, Google, Amazon, and nearly every other modern technology company would not exist. Thanks to an amazing community of innovative, top-notch programmers, open source has become the foundation of cloud computing, software-as-a-service, next generation databases, mobile devices, the consumer internet, and even Bitcoin.
Yet, with all that momentum, there’s a vocal segment of software insiders that preach the looming failure of open source software against competition from proprietary software vendors. The future for open source, they argue, is as also-ran software, relegated to niche projects. It’s proprietary software vendors that will handle the really critical stuff.
So which is it? The simultaneous success of technology companies built on open source and the apparent failure of open source companies is a head-scratcher. Yet both are true, but not for the reasons some would have you believe. The success or failure of open source lies not in the software itself – it’s definitely up to the tasks required of it – but in the underlying business model.
It started (and ended) with Red Hat
Red Hat, the Linux operating system company, pioneered the original open source business model. Red Hat gives away open source software for free but charges a support fee to those customers who rely on Red Hat for maintenance, support, and installation. As revenue began to roll into Red Hat, a race began among startups to develop an open source offering for each proprietary software counterpart and then wrap a Red Hat-style service offering around it. Companies such as MySQL, XenSource, SugarCRM, Ubuntu, and Revolution Analytics were born in this rush toward open source.
Red Hat is a fantastic company, and a pioneer in successfully commercializing open source. However, beyond Red Hat the effort has largely been a failure from a business standpoint. Consider that the “support” model has been around for 20 years, and other than Red Hat there are no other public standalone companies that have been able to offer an alternative to their proprietary counterpart. When you compare the market cap and revenue of Red Hat to Microsoft or Amazon or Oracle, even Red Hat starts to look like a lukewarm success. The overwhelming success of Linux is disproportionate to the performance of Red Hat. Great for open source, a little disappointing for Red Hat.
There are many reasons why the Red Hat model doesn’t work, but its key point of failure is that the business model simply does not enable adequate funding of ongoing investments. The consequence of the model is minimal product differentiation resulting in limited pricing power and corresponding lack of revenue. As shown below, the open source support model generates a fraction of the revenue of other licensing models. For that reason it’s nearly impossible to properly invest in product development, support, or sales the way that companies like Microsoft or Oracle or Amazon can.
And if that weren’t tough enough, pure open source companies have other factors stacked against them. Product roadmaps and requirements are often left to a distributed group of developers. Unless a company employs a majority of the inventors of a particular open source project, there is a high likelihood that the project never gains traction or that another company decides to fork the technology. For a small organization, the tension between controlling a stable roadmap and innovating quickly enough to prevent a fork is vicious.
To make matters worse, the more successful an open source project, the more large companies want to co-opt the code base. I experienced this first-hand as CEO at XenSource, where every major software and hardware company leveraged our code base with nearly zero revenue coming back to us. We had made the product so easy to use and so important that we had out-engineered ourselves. Great for the open source community, not so great for us.
If you think this is past history and not relevant, I see a similar situation occurring today with OpenStack, and it is likely happening with many other successful open source projects. As an open source company, you are not only competing with proprietary incumbents, you are competing with the open source community itself. It’s a veritable shit-show.
If you’re lucky and have a super-successful open source project, maybe a large company will pay you a few bucks for one-time support, or ask you to build a “shim” or a “foo” or a “bar.” If you are really lucky (as we were with XenSource), you might be acquired as a “strategic” acquisition. But, most open source companies don’t have that kind of luck, and the chances of going public and creating a large standalone company are pretty darn slim.
Even with all that stacked against them, we still see entrepreneurs pitching their companies as the “next Red Hat of…” Here is the problem with that vision: there has never been a “next Red Hat of…” It’s not to say we won’t see another Red Hat, but the odds are long and the path is littered with the corpses of companies that have tried the support model.
But there is a model that works.
Selling open source as a service
The winning open source model turns open source 1.0 on its head. By packaging open source into a service (as in cloud computing or software-as-a-service) or as a software or hardware appliance, companies can monetize open source with a far more robust and flexible model, one that encourages innovation and ongoing investment in software development.
Many of today’s most successful new companies rely on an ecosystem of standardized open source components that are generally re-used and updated by the industry at large. Companies that use these open source building blocks are more than happy to contribute to their ongoing success. These open source building blocks are the foundation of all modern cloud and SaaS offerings, and they are being monetized beautifully in many cases.
Depending on the company and the product, an organization may develop more open source software specific to their business or build some amount of proprietary software to complete the product offering. Amazon, Facebook, GitHub and scores of others mix open source components with their own proprietary code, and then sell the combination as a service.
This recipe – combining open source with a service or appliance model – is producing staggering results across the software landscape. Cloud and SaaS adoption is growing an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation.
Beyond SaaS, I expect future models for open source monetization to emerge, which is great for the industry.
So what are you waiting for?
Build a big business on top of and around a successful platform by adding something of your own that is both substantial and differentiated. Take, for example, our national road and highway system. If you view it as the transportation platform, you start to see the host of highly differentiated businesses that have been built on top of it, ranging from FedEx to Tesla. The ridesharing service Lyft is building its business on top of that same transportation platform, as well as Amazon’s AWS platform.
If you extend that platform worldview, Red Hat’s support model amounts to selling a slightly better version of the road – in this case, the Linux operating system – which is already good enough for most people.
Sure, when you first launch a business built using open source components, it’s important to grow the size of the platform and cater to your early adopters to drive initial success. So you might start off looking a little like Red Hat. But if all goes well, you’ll start to look more like Facebook, GitHub, Amazon or Cumulus Networks as you layer your own special something on top of the platform and deliver it as a service, or package it as an appliance. Becoming the next Red Hat is an admirable goal, but when you look at the trends today, maybe even Red Hat should think about becoming the next Amazon.
Mobile devices have put supercomputers in our hands, and, along with their first cousin the tablet, represent the largest shift in computing since the PC era. The capacity and power of these devices are in their infancy, and all expectations point to a doubling of capability every 18 months. In the same way that the PC era unlocked the imagination and innovation of an entire generation, we are seeing a repeat pattern with mobile devices at unprecedented scale.
History has shown that as compute capacity becomes available, new applications and programs happily consume the excess. Additional memory, disk, and processing power always lead to substantially better and more innovative products, serving an ever-broader set of consumers. We saw it with the PC, and we will see it with mobile as the number of devices grows well past a billion. Yet-to-be-developed applications are waiting to take advantage of this processing capability, and it’s going to require mobile operating system innovation to expose this awesome power.
An operating system is one of the most fundamental and important pieces of software. Great operating systems leverage new hardware, provide a consistent way to run applications, and provide a foundation for all interaction with a computing system. For PCs, Windows is the dominant operating system; for servers, Linux is dominant; and for mobile, Android enjoys a staggering 82% market share (Gartner, November 2013). Like Linux (and unlike Windows), Android is open source, which means no one company owns the code. Anyone can improve Android by adding new functionality and tools.
One reason Android is winning is that open source spirit of additive innovation. Consumers are clamoring for increased personalization and customization options, and the Android open source community has happily taken up the task of fulfilling that demand. What’s more, the growing enterprise trend of BYOD (bring your own device) is here to stay, which will further add to that demand as consumers use their mobile devices at home, at work, and on the road—all requiring customized functionality.
Enter Cyanogen, our newest portfolio company, which is well on its way to building a new operating system, CyanogenMod (CM), leveraging core open source Android to provide the fastest, most innovative mobile operating system platform. CM takes the best of what Android offers and adds innovative features to create a clean yet customizable user experience. CM is 100% compatible with all Android applications, yet brings fabulous new capabilities to Android such as enhanced security, performance, device support, and personalization. Cyanogen has been powered by the open source community—led by its founder Steve Kondik—ever since it launched four years ago. The community continues to work at a feverish pace, helping to bring up both newly launched and existing Android devices with the latest Cyanogen builds.
Today, tens of millions of devices are running Cyanogen worldwide, and we believe that CM has the opportunity to become one of the world’s largest mobile operating systems. As history suggests, companies such as Microsoft and Red Hat have done exceedingly well by being independent of hardware, and we believe that this trend will accelerate in the mobile world. The rapid success of CM indicates a growing consumer desire for a fully compatible Android operating system that is truly independent of any hardware company or OEM. Consumers win because Cyanogen can launch updates more frequently, fix bugs faster, and deploy new features more regularly than OEMs, whose organizations are optimized for building fantastic hardware.
We’re incredibly excited to lead their Series B round of financing and to work with the Cyanogen team, a majority of whom have been “sourced” from their “open source” community! Their expertise in building Android products and their desire to create a world-class mobile user experience will guide their decisions as they continue building on their success to date. Software is eating the world, Android is eating mobile, and we think Cyanogen has only just finished its appetizer and is moving on to the entrée.