
The history of computing can be largely described by architectural eras demarcated by the near-continuous ebb and flow between centralized and distributed computing. The first generation was centralized, with mainframes and dumb terminals defining the era. All computing was done centrally by the mainframe, with the terminal merely displaying the results.

As endpoints (and networks) became more capable, with additional processing and storage, the client-server generation of computing took hold. This architecture leveraged both endpoint and central capacity, giving users the benefit of high-fidelity applications that communicated with centralized data stores in a seamless (or so intended) fashion. Unlocking less expensive, readily available compute at the endpoint unleashed an entire generation of new applications and businesses such as Facebook, Twitter, Square and many others.

Over the past decade, cloud computing and software as a service (SaaS) have moved the needle back once again toward a centralized architecture. Processing is centralized in the cloud datacenter and endpoints simply display the resulting operations, albeit in a more colorful way than their simple terminal predecessors. This is now changing.

Our mobile devices have become supercomputers in our hands. These devices now pack 100x the processing power and storage capacity of PCs from 20 years ago. History has shown that as processing power becomes available, new applications and architectures happily utilize the excess capacity. Enter the new world of cloud-client computing, where applications and compute services are executed in a balanced and synchronized fashion between your mobile endpoint and the cloud.

Because smartphones are such beefy computers, developers have been rushing to take advantage of the available computing horsepower. Until now, this mostly meant writing native applications using Apple’s Xcode or Eclipse/ADT for Android. But native apps are a pain: they typically require separate front-end engineers, little code is shared between apps, and there is no concept of coordination with the back-end (cloud) services. All of this work must be handcrafted on a per-app, per-OS basis, rendering it costly and error-prone. It’s a duct-tape-and-baling-twine approach for delivering a marginally better user experience.

That is, until Meteor. The Meteor platform brings back the goodness of the Web without sacrifice. Teams can share code across Web and mobile front ends, build super slick apps with a single code base across Android and iOS, and utilize a framework for integrating front-end and cloud operations. For the first time, there is a simple and elegant way to create applications that leverage the best of the client and the cloud, yielding applications that are high-fidelity and synchronized with the most advanced back-end/cloud services.

Meteor delivers dramatic productivity improvements for developers who need to deliver great experiences across Web, iOS, Android and other mobile platforms and enables the computational “oomph” available on smartphones to do more than just render HTML. Meteor delights users with Web and app experiences that are fluid and fast.

Meteor has the technology to usher in the new world of cloud-client computing and we couldn’t be more proud to be investors in the team that makes all of this happen.

 

A while back I wrote a blog post suggesting that datacenter infrastructure would move from an on-premise operation to the cloud. It may have seemed counter-intuitive that the infrastructure itself would become available from the cloud, but that’s exactly what’s happening.

We’ve now seen everything from security to system management to storage evolve into as-a-service datacenter offerings, yielding all the benefits of SaaS — rapid innovation, pay-as-you-go, no hardware installation — while at the same time providing rich enterprise functionality.

As the datacenter gets dis-intermediated with the as-a-service paradigm, an interesting opportunity exists for the “big data” layer to move to the cloud. While big data is one of the newer parts of the infrastructure stack — and should have been architected and delivered as a service from the start — an estimated 90+% of Fortune 2000 companies carry out their big data analytics on-premise.  These on-premise deployments are complex, hard to implement, and have already become something of a boat anchor when it comes to attempts to speed up big data analytics. They perfectly define the term “big drag.”

Without question the time has come to move big data to the cloud and deliver this part of the infrastructure stack as a service. Enter Cazena — our latest investment in the big data sector. The Cazena founders were former leaders at Netezza, the big data appliance leader that went public and was acquired by IBM for $1.7 billion. Prat Moghe, founder & CEO of Cazena, previously led strategy, product and marketing at Netezza. Prat has teamed up with Jit Saxena, co-founder of Netezza, and Jim Baum, the CEO of Netezza — all leaders in the big data industry.

This team knows a great deal about big data and agility of deployment. Ten years ago (long before the term big data was being used), the Netezza team came up with a radically simple big data appliance. Appliances reduced the sheer complexity of data warehouse projects — the amount of time and resources it took to deploy and implement big data.

In the next decade, even faster deployment cycles will be required as businesses want data on-demand. Additionally, the consumption pattern has changed as the newer data stack built using Hadoop and Spark has broadened the use of data. A new cloud-based, service-oriented deployment model will be required. The Cazena team is uniquely positioned to make this a reality.

We could not be more thrilled to be backing the team that has the domain expertise and thought leadership to change the face of big data deployments. Big data is changing the way the world processes information, and Cazena is uniquely positioned to accelerate these efforts.

The mobile revolution has spread beyond the mini supercomputers in our hands all the way to the datacenter.

With our expanded use of smartphones comes increased pressure on the servers that drive these devices: the activity we see every day on our phones is a mere pinhole view into all that’s happening behind the scenes, in the massive cloud infrastructure powering all those apps, photo-shares, messages, notifications, tweets, emails, and more. Add in the billions of devices coming online through the Internet of Things — which scales with the number of new endpoints, not just the number of users — and you begin to see why the old model of datacenters built around PCs is outdated. We need more power. And our old models for datacenters are simply not enough.

That’s where mobile isn’t just pressuring, but actually changing the shape of the datacenter — displacing incumbents and creating new opportunities for startups along the way.

The promise of big data has ushered in an era of data intelligence. From machine data to human thought streams, we are now collecting more data each day, so much so that 90% of the data in the world today has been created in the last two years alone. In fact, every day we create 2.5 quintillion bytes of data — by some estimates that’s one new Google every four days, and the rate is only increasing. Our desire to use, interact with, and learn from this data will become increasingly important and strategic to businesses and society as a whole.

Yet, while we are collecting and storing massive amounts of data, our ability to analyze and make use of the data is stuck in information hell. Even our most modern tools reflect an older, batch-oriented era that relies on queries and specialized programs to extract information. The result is a set of slow, complex, and time-consuming processes that struggle to keep up with an ever-increasing corpus of data. Quite often, the answers to our queries are long outdated by the time the system completes the task. While this may sound like a problem of 1970s mainframes and spinning tape, it is exactly how things work in even the most modern Hadoop environments of today.

More data means more insight, better decisions, better cures, better security, better predictions — but it requires re-thinking last-generation tools, architectures, and processes. The “holy grail” will allow any person or program to fluidly interact with data in an easy, real-time, interactive format — similar to a Facebook or Google search. Information must become a seamless and fundamental property of all systems, yielding new insights by learning from the knowns and predicting the unknowns.

That’s why we’re investing in Adatao, which is on the leading edge of this transformation by combining big compute and big data under one beautiful document user interface. This combination offers a remarkable system that sifts through massive amounts of data, aggregating and applying machine learning, while hiding the complexities and helping all users, for the first time, to deal with big data analytics in a real-time, flexible, interactive way.

For example, a business user in the airline industry can ask Adatao’s system (in natural language) to predict future airline delay ratios by quickly exploring 20 years of arrival/departure data (124 million rows) to break down past delays by week, month, and cause. In the same way Google Docs allows teams all over the world to collaborate, Adatao allows data scientists and business users to collaborate on massive datasets, see the same views, and together produce a visual model in just three seconds.
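To make that concrete, here is a minimal sketch of the kind of distributed aggregation that sits underneath such a question. This is not Adatao’s actual engine or API; it is just an illustrative Spark job, with a hypothetical file path and column names (year, month, cause, dep_delay) assumed for the example.

```python
# Illustrative only: the sort of Spark aggregation that could sit behind a
# natural-language question like "break down past delays by month and cause."
# The dataset path and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("airline-delays").getOrCreate()

# Load 20 years of arrival/departure records (assumed CSV layout).
flights = spark.read.csv("hdfs:///data/flights/*.csv", header=True, inferSchema=True)

# Average departure delay and flight counts, grouped by month and cause.
delay_by_month = (
    flights
    .groupBy("year", "month", "cause")
    .agg(
        F.avg("dep_delay").alias("avg_delay_minutes"),
        F.count("*").alias("flights"),
    )
    .orderBy("year", "month")
)

delay_by_month.show(24)
```

The value Adatao adds is hiding this kind of plumbing behind its natural-language, document-style interface, so business users and data scientists can collaborate without ever seeing the query layer.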

The Adatao software would not be possible without the incredible team behind the project. I first met Christopher Nguyen, founder and CEO, at a breakfast meeting in Los Altos and was blown away by his humble personality. I knew at that moment I wanted to find a way to partner with him. Here’s a guy who grew up in Vietnam and came to the US with a desire to make a difference. Since then, Christopher has started several successful companies, served as engineering director of Google Apps, earned a PhD from Stanford and a BS from UC Berkeley, and received the prestigious “Google Founders Award”.

He’s assembled a crack technical team of engineers and PhDs in parallel systems and machine learning. They all want to change the world and solve the most pressing data and information issues of our generation.

I am honored to be joining the board and look forward to partnering with this incredibly talented team. Adatao’s approach, team, and spirit of innovation will usher in a new generation of real-time, information intelligence that we believe will be the future of big data.

A new architectural era in computing is upon us, and the datacenter is changing to accommodate it. The cloud generation of companies has established its dominance and proven its model, and the legacy enterprise is close behind in making this massive shift. These new datacenters—as pioneered and designed by Facebook, Google, and Twitter—are defined by hyper-scale deployments of thousands of servers, requiring a new software architecture to manage and aggregate these systems. Mesosphere is that software, and we believe this architecture will be as disruptive to the datacenter as Linux and virtualization have been over the past decade.

Today’s application architectures and big data workloads are scale-out, stateless, and built to leverage the seemingly infinite processing capacity of modern datacenters. These hyper-scale datacenters are the equivalent of giant supercomputers: they run massively parallel applications that serve millions of user requests a second. We are moving from a collection of servers running discrete, stateful applications to massive scale-out applications that treat the hardware as one giant server.

In that “giant server” view of the world, Mesosphere is the obvious foundation for this new cloud stack and adoption is scaling fast. Look under the datacenter hood in many forward-looking, hyper-scale environments, including Twitter, Airbnb, eBay, and OpenTable, and you will find Mesosphere.

The Future of the Datacenter is Aggregation (not Virtualization)

Ten years ago, virtual machines (VMs) revolutionized the datacenter. This was because while the servers were getting bigger and bigger, the apps running on them pretty much stayed the same size. In order to make better use of those large servers, it made sense to virtualize the machines so that you could run multiple applications on the same machine at the same time.

Today, aggregation is fomenting a similar revolution, because applications no longer fit on single machines. Applications now run at a much larger scale (millions of users, billions of data points, and in real-time), and they are essentially large-scale distributed systems, composed of dozens (or even thousands) of services running across all the machines (virtual and physical) in the datacenter. In this world, you want to stitch together all of the resources on those machines into one common pool from which all the applications and services can draw.

Aggregation has proven itself in the A-lists of hyperscale companies, like Google and Twitter. They’ve demonstrated that it’s much more efficient to aggregate machines—pooling all of the resources—and then build applications against the datacenter behaving as a single machine.

Aggregation, and the tools to manage it at scale, is what Mesosphere is bringing to everybody — and it’s what we believe the future of the datacenter looks like.

The companies that buy into this architecture do not abandon virtualization, containers, or other approaches; those remain important infrastructure components. But the way they manage their entire datacenter will evolve beyond the duct-tape-and-band-aid, highly manual approach of scripting IT operations tasks and “recipes” and configuring dependencies each time a new application is brought online or a server goes down.

Mesos: From UC Berkeley to Reality

In 2009, Mesosphere Co-founder Florian Leibert was working at Twitter to scale the application in response to its exponential growth. At the time, he spotted a new open source technology that had been built at UC Berkeley called Mesos and he helped Twitter bring it into full production.

Today, almost all of Twitter’s infrastructure is built on top of Mesos, which is now an Apache open source project and is at the core of Mesosphere’s products. The Mesosphere stack, which includes Apache Mesos, is not a hypothetical technology. It’s highly mature and battle-tested, in large-scale production, running in both private datacenters and in public cloud environments. Other organizations using Mesos include: Hubspot, Airbnb, Atlassian, eBay, OpenTable, PayPal, Shopify, and Netflix.

Mesosphere is harnessing the core open source technology of Apache Mesos, and making it possible for everyone to tap into its power. By building an entire ecosystem around Mesos, they are making it easy to install, operate, and manage. For developers, Mesosphere provides simple command-line and API access to compute clusters for deploying and scaling applications, without relying on IT operations. For IT operations, Mesos abstracts the most difficult low-level tasks related to deploying and managing services, virtual machines, and containers in scale-out cloud and datacenter environments, and provides true automation, fault tolerance, and server utilization for modern scale requirements. Finally, Mesos allows applications to move between different environments without any change to the application.
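As a rough illustration of that developer-facing API access, here is a minimal sketch of deploying and scaling an application through a Marathon-style scheduler API on a Mesos cluster. The endpoint URL and app definition are hypothetical placeholders, not taken from Mesosphere’s documentation; the point is that the developer declares the app and its resource needs, and the cluster decides where it runs.

```python
# A minimal sketch of deploying and scaling an app via a Marathon-style REST
# API on a Mesos cluster. The endpoint and app definition are hypothetical.
import requests

MARATHON = "http://marathon.example.com:8080"  # assumed scheduler endpoint

app = {
    "id": "/web/frontend",
    "cmd": "python3 -m http.server $PORT0",  # placeholder workload
    "cpus": 0.25,
    "mem": 128,
    "instances": 3,
}

# Deploy: the scheduler places three instances wherever the cluster has spare
# CPU and memory; the developer never picks individual machines.
resp = requests.post(f"{MARATHON}/v2/apps", json=app)
resp.raise_for_status()

# Scale: changing a single field rebalances the app across the pooled resources.
requests.put(f"{MARATHON}/v2/apps/web/frontend", json={"instances": 10}).raise_for_status()
```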

Mesosphere will help define the next generation datacenter. I am honored to be joining the board of a team of dedicated system-level software engineers who will change the face of enterprise computing.

The last few years have seen the incredible growth of cloud computing. Applications and services that were developed for on-premise use have all found a new home in the cloud. As with most technology transformations, early adoption often occurs around a hobbyist developer community that then expands into more mainstream adoption and use. The cloud is no exception; as it grows it continues to empower developers to shape technology and change the world.

What started as a primitive, manual, and cumbersome infrastructure service has evolved into a variety of cloud vendors offering vast collections of services targeted at a number of different audiences – perhaps too vast. We have Database-as-a-Service, Compute-as-a-Service, Analytics-as-a-Service, Storage-as-a-Service, as well as deployment and network environments, and everything in between. It has left the developer community with more options, functionality, and cost than it needs or wants.

It’s time for the cloud to once again focus on developers, and that is where DigitalOcean comes in.

Started by Ben Uretsky and his brother Moisey, with the additional intellectual brawn of an eclectic group of passionate developers, DigitalOcean has focused on one goal: making developers’ lives easier by providing a powerful, yet simple, Infrastructure-as-a-Service.

[Chart (source: Netcraft)]

The DigitalOcean service is purpose-built for the developer, offering automated web infrastructure for deploying web-based applications. The results have been eye-popping. From a standing start in December 2012, DigitalOcean has grown from 100 web-facing computers to over 50,000 today, making it one of the fastest-growing cloud computing providers in the world. It is now the ninth largest web infrastructure provider on the planet. With this round of funding, the management team intends to aggressively hire more in-house and remote software engineers to accelerate that already tremendous momentum.

[Chart (source: Netcraft)]
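To give a sense of the simplicity developers are responding to, here is a minimal sketch of spinning up a server through DigitalOcean’s public API: one authenticated call creates a running machine. The token handling and the region, size, and image slugs below are placeholder assumptions; the current API documentation lists the valid values.

```python
# A minimal sketch of creating a DigitalOcean droplet via the v2 API.
# Region, size, and image slugs are assumed placeholders.
import os
import requests

API = "https://api.digitalocean.com/v2"
headers = {"Authorization": f"Bearer {os.environ['DO_TOKEN']}"}  # personal access token

droplet = {
    "name": "web-01",
    "region": "nyc3",              # assumed region slug
    "size": "s-1vcpu-1gb",         # assumed size slug
    "image": "ubuntu-22-04-x64",   # assumed image slug
}

# One POST provisions a running server; no hardware, no datacenter visit.
resp = requests.post(f"{API}/droplets", headers=headers, json=droplet)
resp.raise_for_status()
print(resp.json()["droplet"]["id"])
```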

DigitalOcean is also taking a page out of the open source world, using and contributing to the most relevant open source projects. In the same way that GitHub or Facebook or Twitter offers open source as a service, so does DigitalOcean. A few weeks back, I wrote a post presenting several viable models for open source deployments, and DigitalOcean is a case study. We are thrilled to be working with the DigitalOcean team as they continue to build a cloud that developers love.

NOTE: Chart data from Netcraft.

Open source software powers the world’s technology. In the past decade, there has been an inexorable adoption of open source in most aspects of computing. Without open source, Facebook, Google, Amazon, and nearly every other modern technology company would not exist. Thanks to an amazing community of innovative, top-notch programmers, open source has become the foundation of cloud computing, software-as-a-service, next generation databases, mobile devices, the consumer internet, and even Bitcoin.

Yet, with all that momentum, there’s a vocal segment of software insiders who preach the looming failure of open source software against competition from proprietary software vendors. The future for open source, they argue, is as also-ran software, relegated to niche projects. It’s the proprietary software vendors that will handle the really critical stuff.

So which is it? The success of technology companies built with open source, set against the apparent failure of open source companies, is a head-scratcher. Yet both are true, though not for the reasons some would have you believe. The success or failure of open source lies not in the software itself – it’s definitely up to the tasks required of it – but in the underlying business model.

It started (and ended) with Red Hat

Red Hat, the Linux operating system company, pioneered the original open source business model. Red Hat gives away open source software for free but charges a support fee to those customers who rely on Red Hat for maintenance, support, and installation. As revenue began to roll into Red Hat, a race began among startups to develop an open source offering for each proprietary software counterpart and then wrap a Red Hat-style service offering around it. Companies such as MySQL, XenSource, SugarCRM, Ubuntu, and Revolution Analytics were born in this rush toward open source.

Red Hat is a fantastic company, and a pioneer in successfully commercializing open source. However, beyond Red Hat the effort has largely been a failure from a business standpoint. Consider that the “support” model has been around for 20 years, and other than Red Hat there are no other public standalone companies that have been able to offer an alternative to their proprietary counterpart. When you compare the market cap and revenue of Red Hat to Microsoft or Amazon or Oracle, even Red Hat starts to look like a lukewarm success. The overwhelming success of Linux is disproportionate to the performance of Red Hat. Great for open source, a little disappointing for Red Hat.


There are many reasons why the Red Hat model doesn’t work, but its key point of failure is that the business model simply does not enable adequate funding of ongoing investments. The consequence of the model is minimal product differentiation, resulting in limited pricing power and a corresponding lack of revenue. As shown below, the open source support model generates a fraction of the revenue of other licensing models. For that reason it’s nearly impossible to properly invest in product development, support, or sales the way that companies like Microsoft or Oracle or Amazon can.

[Chart: revenue generated by the open source support model versus other software licensing models]

And if that weren’t tough enough, pure open source companies have other factors stacked against them. Product roadmaps and requirements are often left to a distributed group of developers. Unless a company employs a majority of the inventors of a particular open source project, there is a high likelihood that the project never gains traction or that another company decides to create a fork of the technology. Balancing a stable, controlled roadmap against innovating quickly enough to prevent a fork is a vicious problem for small organizations.

To make matters worse, the more successful an open source project, the more large companies want to co-opt the code base. I experienced this first-hand as CEO at XenSource, where every major software and hardware company leveraged our code base with nearly zero revenue coming back to us. We had made the product so easy to use and so important that we had out-engineered ourselves. Great for the open source community, not so great for us.

If you think this is past history and not relevant, I see a similar situation occurring today with OpenStack, and it is likely happening with many other successful open source projects. As an open source company, you are not only competing with proprietary incumbents, you are competing with the open source community itself. It’s a veritable shit-show.

If you’re lucky and have a super-successful open source project, maybe a large company will pay you a few bucks for one-time support, or ask you to build a “shim” or a “foo” or a “bar.” If you are really lucky (as we were with XenSource), you might be acquired as a “strategic” acquisition. But, most open source companies don’t have that kind of luck, and the chances of going public and creating a large standalone company are pretty darn slim.

Even with all that stacked against them, we still see entrepreneurs pitching their companies as the “next Red Hat of…” Here is the problem with that vision: there has never been a “next Red Hat of…” It’s not to say we won’t see another Red Hat, but the odds are long and the path is littered with the corpses of companies that have tried the support model.

But there is a model that works.

Selling open source as a service

The winning open source model turns open source 1.0 on its head. By packaging open source into a service (as in cloud computing or software-as-a-service) or as a software or hardware appliance, companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.

Many of today’s most successful new companies rely on an ecosystem of standardized open source components that are generally re-used and updated by the industry at large. Companies that use these open source building blocks are more than happy to contribute to their ongoing success. These building blocks are the foundation of all modern cloud and SaaS offerings, and they are being monetized beautifully in many cases.

Depending on the company and the product, an organization may develop more open source software specific to their business or build some amount of proprietary software to complete the product offering. Amazon, Facebook, GitHub and scores of others mix open source components with their own proprietary code, and then sell the combination as a service.

This recipe – combining open source with a service or appliance model – is producing staggering results across the software landscape. Cloud and SaaS adoption is growing an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation.

Beyond SaaS, I would expect future models for open source monetization to emerge, which would be great for the industry.

So what are you waiting for?

Build a big business on top of and around a successful platform by adding something of your own that is both substantial and differentiated. Take, for example, our national road and highway system. If you view it as the transportation platform, you start to see the host of highly differentiated businesses that have been built on top of it, ranging from FedEx to Tesla. The ridesharing service Lyft is building its business on top of that same transportation platform, as well as Amazon’s AWS platform.

If you extend that platform worldview, Red Hat’s support model amounts to selling a slightly better version of the road – in this case, the Linux operating system – which is already good enough for most people.

Sure, when you first launch a business built using open source components, it’s important to grow the size of the platform and cater to your early adopters to drive initial success. So you might start off looking a little like Red Hat. But if all goes well, you’ll start to look more like Facebook, GitHub, Amazon or Cumulus Networks as you layer your own special something on top of the platform and deliver it as a service, or package it as an appliance. Becoming the next Red Hat is an admirable goal, but when you look at the trends today, maybe even Red Hat should think about becoming the next Amazon.

I recently had the privilege of hosting a fireside chat with Lieutenant General John Vines, who is regarded as one of the most influential U.S. military leaders of the past twenty years. You can see the talk here:

At the time, he was the only military general to have led combat operations in both Iraq and Afghanistan in the post-9/11 era, overseeing an organization of more than 160,000 troops.

For a man of his stature, he’s refreshingly humble. He jokes that he was the only guy to cause Defense Secretary Donald Rumsfeld to lose his voice from screaming at him in two separate wars. The General is one of those people you want to keep hanging out with after only a few minutes in his company. No wonder he is such an extraordinary leader.

During our conversation there were many leadership lessons from his experience that are highly relevant to entrepreneurs and CEOs. Here are a few of my favorites:

1. Leadership is different from management.

“In the end those who follow you willingly do it because they trust you and are inspired by you. They are counting on you to have their backs and to be right.” Great leaders rely on relationships and intuition. In a challenging situation, a good leader knows what their reports will do and what the outcome will be.

Vines underscored that management and leadership, while related, have very different characteristics. Management is the science that undergirds leadership. Leadership is the art. “Where leaders earn their pay is applying their judgment, skill and wisdom to all the data. Because if we were purely a data driven organization, then we could plug it all into some algorithm and it could tell us what the answer is.”

It follows, then, that not all great managers become great leaders. This concept has always resonated with me, and I have seen it first-hand. A great leader gets the team to follow her into battle and does it with purpose and conviction. Great leaders also understand how to instinctively deploy resources toward the best possible outcome.

2. Leadership awareness

“It is almost impossible to really see yourself as an organization and as an individual.”

Vines relayed the story of a complex combat operation that required the deployment of several infantry units. Vines ordered a large quantity of heavy equipment to support the mission, which his reporting system indicated was available. The problem was that it had already been provided to another unit and to Iraqi counterparts.

In response, Vines devised something he dubbed a “Delta Report”, which reconciled, over a 30-day period, all the things that were supposed to be available but weren’t: the equipment that was supposed to be repaired but was still in the shop; the gear that had arrived that no one even knew about.

In his words: “That 30-day Delta came out to be $11 billion of end-items, things like tanks and trucks, that we had been ordering from the States because we thought we needed them, but they were already there. We couldn’t see ourselves.”

Vines admitted his team spent a lot of time understanding the threat (you might call it the competition), but couldn’t see their own operations in real time. As a result, he made some large, painful changes, but ultimately made sure the right processes were in place for him and his team to see themselves in real time.

I’ve always believed that self-awareness and company awareness are key attributes to being a great leader. I’ve seen all too many examples of companies and CEOs who are breathing their own exhaust. Leaders need self-awareness in order to have a complete and accurate picture of themselves and their company.

3. Identifying catastrophic risk helps to prepare for the unknown, but you can’t see all the “Black Swans” that lie in wait.

Vines relayed an example of a massive air operation he was preparing that required the use of hundreds of helicopters in Afghanistan. “It was massive, the number of planes used in the operation would have made it one of the largest air forces in the world. And that was just helicopters.”

At the moment of execution of the military mission, the key guy on the ground responsible for checking the purity of the helicopter fuel became ill. There was nobody who could fulfill his job. Mission aborted.

“We spent hundreds, even thousands of hours assessing risk, but what we didn’t understand is that there were points of failure in this enterprise that we hadn’t even considered. We certainly hadn’t looked around corners.”

This story was particularly interesting to me in that, even with all the planning, something still caused the mission to go sideways. I’ve seen this in companies: they plan and plan, yet something always comes up that forces a real-time change. In my experience, planning is a great tool, but leaders always need to be prepared to solve the unknown as issues arise. You can never plan for all contingencies all the time.

4. The higher up the organization, the more time leaders should be spending with people in the organization as opposed to doing “tasks”.

As his organization got larger, Vines could no longer spend time with every person. However, he spent most of his time away from headquarters talking with his lieutenants, making sure everyone developed a “shared consciousness”, a shared vision.

“I believe leadership should be eyes on, hands off.” So Vines deployed wide-scale use of video conferencing to discuss high-level thinking and strategy with the troops. “Once an organization understands the objectives, a mid-level person can figure out the strategy.” Once they knew the thinking and the strategy behind what they were about to do, “the orders almost follow themselves.”

As an entrepreneur, your natural instinct as your company grows may be to spend more time doing tasks you are good at. If you are an engineer by training, this may mean spending time with the engineering group. My advice as your company grows is to spend time with all groups and help to create a deep bench of executives who each do a better job than you in their given areas. Spend lots of time with them; spend lots of time with your employees. The organization will see you as a great leader as opposed to a micro-manager.

Veteran talent

My time with General Vines gave me a much deeper appreciation for the similarities between military leadership and leadership in companies. Over 120 folks working in high tech around the Bay Area attended our fireside chat, and the vast majority were veterans.

One of my other takeaways: Veterans can bring really important leadership qualities to your organization. These folks are truly amazing.

As Vines put it, “Sometimes the scale is different. Sometimes the cost is different in blood and treasure. But there are more similarities than differences in business and warfare.

“Every person that we asked to go forth to do something at extreme risk—at risk of their life—we owed it to them to do everything we could to create conditions that would allow them to do that, and come back alive and intact to their families.

“If you could look in the mirror and say, ‘I have done everything humanly possible to create an environment of mitigated risk,’ I think you can live with yourself. If something goes wrong because you are lazy, or because you didn’t devote the proper rigor to it, then you have to live with those consequences too.”

The views expressed by LTG (ret.) Vines in this article are his own and do not represent the views of the U.S. Military or Government. “Warlord 6” was LTG Vines’ call sign in Iraq.

DataGravity is poised to transform the storage landscape. The company represents a once-in-a-decade opportunity to create an entirely new category of storage by unlocking the value of data that today sits idle in a storage system. I call the category “Storage Intelligence” and the transformation will be profound.

The story starts with DataGravity’s incredible founding team: Paula Long and John Joseph. Paula is a technical visionary in the storage world and was the co-founder of EqualLogic, a storage company acquired by Dell in 2008 for $1.4 billion. John was also an early member of the EqualLogic team and brings great talent in sales, marketing and operations. Unsatisfied with the pace of innovation in the storage world, Paula and John have teamed up again to royally disrupt the staid storage industry.

DataGravity’s focus on storage intelligence highlights entirely new thinking in storage innovation. We’ve seen hundreds of new storage companies in the past few years and most have followed the well-worn path of incremental feature development, focusing on storing dumb bits of data at lower cost with faster access. Interesting and incremental—hardly transformative. A race to zero does not make for a killer new category.

Unlocking the next generation of storage requires looking at stored data not as a dumb repository of expensive bits, but as the foundation for usable, intelligent information. We’ve overlooked the data as the true asset to our business and we store it away without giving any thought to what it’s saying about our business, our customers and our users. The DataGravity team will take what is considered an idle operating expense and convert it to near instant business value.

We’ve only begun to see the explosion of data and its value to businesses of all sizes. DataGravity’s mission of turning dumb storage into meaningful information will give new meaning to storage infrastructure. As data centers evolve and information becomes central to the competitive advantage of organizations, DataGravity will fill a storage need that goes far beyond today’s storage developments.

I am pleased to be joining the board of DataGravity and working with the team that is going to transform storage.

Everything that can be invented has been invented.
—Charles H. Duell, Commissioner, U.S. patent office, 1899

Last month, we gathered 75 of the top CIOs from around the country to discuss the new generation of enterprise software and the redefined role of the CIO. These CIOs are dealing with an unprecedented level of experimentation and innovative new approaches focused on unsolved problems in enterprise software. The end result will be a complete remaking of the entire enterprise software stack at the intersection of cloud, mobile and SaaS.

All of the CIOs are also facing a changed environment, one where every department within an organization makes its own software buying decisions, outside the purview of the CIO. This “departmentalization of applications”—from Box for collaboration to GitHub for software development to Tidemark for Enterprise Performance Management—means the CIO not only needs to figure out how to enable the department and employee to leverage these software products, but also how to meet the security and compliance requirements of the larger corporate environment—which, by the way, Bromium, CipherCloud and Okta allow you to do. These CIOs know that they can adapt or their organizations will adapt without them.

Their jobs weren’t always so difficult. For those of you old enough to remember, there was a time when enterprise computing was almost exclusively dominated by Microsoft, Oracle and Cisco. It was a time when on-premise, Windows-based applications were the de-facto standard and there was no alternative. The enterprise was so entrenched that challenging the status quo was viewed as suicidal and very stupid. So hardened was the thinking that most innovation in the enterprise was relegated to mere feature extensions of existing solutions.

Fast-forward to today and the world of enterprise computing has done a 180. Traditional IT is being blown to bits as cloud infrastructure, Software-as-a-Service and mobile computing become the new standard. We are experiencing innovation and usage as never seen before. It is truly a renaissance of massive scale. Hundreds of billions of dollars are up for grabs as buyers shift to new architectures and away from old, as new users and new markets embrace the availability and ease by which they can consume technology.

On the Road to a Revolution

VMware and Salesforce catalyzed this movement from unlikely origins. Both were little known and under-funded, but against all conventional wisdom each visualized a new world order—a world where the data center was virtual and where applications would run off-premise, eliminating op-ex and painful software upgrades. The world watched but there were few believers. “Suicidal,” people said. “Why would I ever permit my precious customer data to reside outside my firewall?”

But momentum grew. VMware figured out how to effectively break apart the functionality of software from the hardware it resides on, driving a new set of economics into data centers. Salesforce began expanding beyond CRM, demonstrating the wider viability of subscription-based payments and the customer benefits of constant iteration. Customers began to believe that this new vision might actually come true. From a single virtual server and a single customer relationship app, both companies paved the way for a new world order.

Every part of the business software stack is now being remade—from infrastructure to applications to mobile to analytics—with every incumbent in danger of having its core business eroded. And, sure, incumbents will try to buy innovative products and will try to develop their own competing technologies, but the reality is that this new paradigm disrupts the entirety of these businesses. A foundational shift like this cannot be overcome by a simple product buy or even a strategy change—the new breed of enterprise software startups has different revenue recognition policies, different sales models, different go-to-market models, and different engineering processes than the incumbents. We are talking about transformations occurring simultaneously in technology and business models! It’s an entirely new approach to IT.

The Departmentalization of Applications

Buyers are clamoring for this new approach. None of our portfolio companies use Oracle. Some use Microsoft, but the majority opts for Google or an open source package. In our own Executive Briefing Center, where we connect and facilitate exchange amongst global brands and the rising stars in tech, we’re finding that even enterprise CIOs are looking beyond mature players to new and emerging technology companies, especially in areas like cloud computing, mobile, big data and SaaS. These are the early indicators of a more permanent shift in IT consumption habits. This shift is resulting in software applications that are targeted for specific business functions. Apptio, for example, has built a world-class application that specifically targets the CIO as a customer. Mixpanel helps companies learn from their data and grow their business, with a specific focus on analytics for mobile applications. This shift is what I am calling the “departmentalization of applications”.

And entrepreneurs know that incumbents are vulnerable. We see a tremendous number of entrepreneurs bringing a new approach to this crusty, old enterprise software market. We see entrepreneurs like Ben Werther of Platfora, who is passionate about up-ending the Business Intelligence market, and Ash Ashutosh of Actifio, who is creating the next generation of storage software.

These are entrepreneurs who choose to do the hard work of building software for companies to use, and the software they are creating is elegant, fast, fairly priced, and does what it’s supposed to do. This is an unbeatable value proposition. For everyone except perhaps the incumbents, this is a great time to be involved with enterprise software.