The last few years have seen the incredible growth of cloud computing. Applications and services that were developed for on-premise use have all found a new home in the cloud. As with most technology transformations, early adoption often occurs around a hobbyist developer community that then expands into more mainstream adoption and use. The cloud is no exception; as it grows it continues to empower developers to shape technology and change the world.

What started as a primitive, manual, and cumbersome infrastructure service has evolved into a variety of cloud vendors offering vast collections of services targeted at a number of different audiences – perhaps too vast. We have Database-as-a-Service, Compute-as-a-Service, Analytics-as-a-Service, Storage-as-a-Service, as well as deployment and network environments, and everything in between. It has left the developer community with more options, functionality, and cost than it needs or wants.

It’s time for the cloud to once again focus on developers, and that is where DigitalOcean comes in.

Started by Ben Uretsky and his brother Moisey, with the additional intellectual brawn of an eclectic group of passionate developers, DigitalOcean has focused on one goal: making developers’ lives easier by providing a powerful, yet simple Infrastructure-as-a-Service.


The DigitalOcean service is purpose-built for the developer, offering automated web infrastructure for deploying web-based applications. The results have been eye-popping. From a standing start in December 2012, DigitalOcean has grown from 100 web-facing computers to over 50,000 today, making it one of the fastest-growing cloud computing providers in the world. It is now the ninth largest web infrastructure provider on the planet. With this round of funding, the management team intends to aggressively hire more in-house and remote software engineers to accelerate that already tremendous momentum.


DigitalOcean is also taking a page out of the open source world, using and contributing to the most relevant open source projects. Like GitHub, Facebook, and Twitter, DigitalOcean offers open source as a service. A few weeks back, I wrote a post presenting several viable models for open source deployments, and DigitalOcean is a case study. We are thrilled to be working with the DigitalOcean team as they continue to build a cloud that developers love.

NOTE: Chart data from Netcraft.

Open source software powers the world’s technology. In the past decade, there has been an inexorable adoption of open source in most aspects of computing. Without open source, Facebook, Google, Amazon, and nearly every other modern technology company would not exist. Thanks to an amazing community of innovative, top-notch programmers, open source has become the foundation of cloud computing, software-as-a-service, next generation databases, mobile devices, the consumer internet, and even Bitcoin.

Yet, with all that momentum, there is a vocal segment of software insiders who preach the looming failure of open source software against competition from proprietary software vendors. The future for open source, they argue, is as also-ran software, relegated to niche projects. It’s proprietary software vendors that will handle the really critical stuff.

So which is it? The success of technology companies built on open source alongside the apparent failure of open source companies is a head scratcher. Yet both are true, though not for the reasons some would have you believe. The success or failure of open source lies not in the software itself – it’s definitely up to the tasks required of it – but in the underlying business model.

It started (and ended) with Red Hat

Red Hat, the Linux operating system company, pioneered the original open source business model. Red Hat gives away open source software for free but charges a support fee to those customers who rely on Red Hat for maintenance, support, and installation. As revenue began to roll into Red Hat, a race began among startups to develop an open source offering for each proprietary software counterpart and then wrap a Red Hat-style service offering around it. Companies such as MySQL, XenSource, SugarCRM, Ubuntu, and Revolution Analytics were born in this rush toward open source.

Red Hat is a fantastic company, and a pioneer in successfully commercializing open source. However, beyond Red Hat the effort has largely been a failure from a business standpoint. Consider that the “support” model has been around for 20 years, and Red Hat remains the only public standalone company to have built a business offering an alternative to a proprietary counterpart. When you compare the market cap and revenue of Red Hat to Microsoft or Amazon or Oracle, even Red Hat starts to look like a lukewarm success. The overwhelming success of Linux is disproportionate to the performance of Red Hat. Great for open source, a little disappointing for Red Hat.


There are many reasons why the Red Hat model doesn’t work, but its key point of failure is that the business model simply does not enable adequate funding of ongoing investments. The consequence of the model is minimal product differentiation resulting in limited pricing power and corresponding lack of revenue. As shown below, the open source support model generates a fraction of the revenue of other licensing models. For that reason it’s nearly impossible to properly invest in product development, support, or sales the way that companies like Microsoft or Oracle or Amazon can.

[Chart: revenue generated by the open source support model versus other software licensing models]

And if that weren’t tough enough, pure open source companies have other factors stacked against them. Product roadmaps and requirements are often left to a distributed group of developers. Unless a company employs a majority of the inventors of a particular open source project, there is a high likelihood that the project never gains traction or that another company decides to fork the technology. Balancing a stable, controlled roadmap against innovating quickly enough to prevent a fork is a vicious problem for small organizations.

To make matters worse, the more successful an open source project, the more large companies want to co-opt the code base. I experienced this first-hand as CEO at XenSource, where every major software and hardware company leveraged our code base with nearly zero revenue coming back to us. We had made the product so easy to use and so important that we had out-engineered ourselves. Great for the open source community, not so great for us.

If you think this is ancient history and no longer relevant, I see a similar situation occurring today with OpenStack, and it is likely happening with many other successful open source projects. As an open source company, you are not only competing with proprietary incumbents, you are competing with the open source community itself. It’s a veritable shit-show.

If you’re lucky and have a super-successful open source project, maybe a large company will pay you a few bucks for one-time support, or ask you to build a “shim” or a “foo” or a “bar.” If you are really lucky (as we were with XenSource), you might be acquired as a “strategic” acquisition. But, most open source companies don’t have that kind of luck, and the chances of going public and creating a large standalone company are pretty darn slim.

Even with all that stacked against them, we still see entrepreneurs pitching their companies as the “next Red Hat of…” Here is the problem with that vision: there has never been a “next Red Hat of…” That’s not to say we won’t see another Red Hat, but the odds are long and the path is littered with the corpses of companies that have tried the support model.

But there is a model that works.

Selling open source as a service

The winning open source model turns open source 1.0 on its head. By packaging open source into a service (as in cloud computing or software-as-a-service) or as a software or hardware appliance, companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.

Many of today’s most successful new companies rely on an ecosystem of standardized open source components that are generally re-used and updated by the industry at large. Companies that use these open source building blocks are more than happy to contribute to their ongoing success. These building blocks are the foundation of all modern cloud and SaaS offerings, and in many cases they are being monetized beautifully.

Depending on the company and the product, an organization may develop more open source software specific to their business or build some amount of proprietary software to complete the product offering. Amazon, Facebook, GitHub and scores of others mix open source components with their own proprietary code, and then sell the combination as a service.

This recipe – combining open source with a service or appliance model – is producing staggering results across the software landscape. Cloud and SaaS adoption is growing an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation.

Beyond SaaS, I expect new models for open source monetization to emerge, which is great for the industry.

So what are you waiting for?

Build a big business on top of and around a successful platform by adding something of your own that is both substantial and differentiated. Take, for example, our national road and highway system. If you view it as the transportation platform, you start to see the host of highly differentiated businesses that have been built on top of it, ranging from FedEx to Tesla. The ridesharing service Lyft is building its business on top of that same transportation platform – as well as on Amazon’s AWS platform.

If you extend that platform worldview, Red Hat’s support model amounts to selling a slightly better version of the road – in this case, the Linux operating system – which is already good enough for most people.

Sure, when you first launch a business built using open source components, it’s important to grow the size of the platform and cater to your early adopters to drive initial success. So you might start off looking a little like Red Hat. But if all goes well, you’ll start to look more like Facebook, GitHub, Amazon, or Cumulus Networks as you layer your own special something on top of the platform and deliver it as a service, or package it as an appliance. Becoming the next Red Hat is an admirable goal, but when you look at the trends today, maybe even Red Hat should think about becoming the next Amazon.

Mobile devices have put supercomputers in our hands, and – along with their first cousin, the tablet – they represent the largest shift in computing since the PC era. The capacity and power of these devices are in their infancy, and all expectations point to a doubling of capability every 18 months. In the same way that the PC era unlocked the imagination and innovation of an entire generation, we are seeing a repeat pattern with mobile devices at unprecedented scale.

History has shown that as compute capacity becomes available, new applications and programs happily consume the excess. Additional memory, disk, and processing power always lead to substantially better and more innovative products, serving an ever-broader set of consumers. We saw it with the PC, and we will see it with mobile as the number of devices grows well past a billion. Yet-to-be-developed applications are waiting to take advantage of this processing capability, and it’s going to require mobile operating system innovation to expose this awesome power.

An operating system is one of the most fundamental and important pieces of software. Great operating systems leverage new hardware, provide a consistent way to run applications, and provide a foundation for all interaction with a computing system. For PCs, Windows is the dominant operating system; for servers, Linux is dominant; and for mobile, Android enjoys a staggering 82% market share (Gartner, November 2013). Like Linux (and unlike Windows), Android is Open Source, which means no one company owns the code. Anyone can improve Android by adding new functionality and tools.

One reason Android is winning is that open source spirit of additive innovation. Because consumers are clamoring for increased personalization and customization options, the Android open source community has been happily taking up the task of fulfilling that demand. What’s more, the growing enterprise trend of BYOD (bring your own device) is here to stay, which will further add to that demand as consumers use their mobile devices at home, at work, and on the road – all requiring customized functionality.

Enter Cyanogen, our newest portfolio company, which is well on its way to building a new operating system, CyanogenMod (CM), leveraging core open source Android to provide the fastest, most innovative mobile operating system platform. CM takes the best of what Android offers and adds innovative features to create a clean yet customizable user experience. CM is 100% compatible with all Android applications, yet brings fabulous new capabilities to Android such as enhanced security, performance, device support, and personalization. Cyanogen has been powered by the open source community – led by its founder Steve Kondik – ever since it launched four years ago. The community continues to work at a feverish pace, helping to bring up both newly launched and existing Android devices with the latest Cyanogen builds.

Today, tens of millions of devices are running Cyanogen worldwide, and we believe that CM has the opportunity to become one of the world’s largest mobile operating systems. As history suggests, companies such as Microsoft and Red Hat have done exceedingly well by being independent of hardware, and we believe that this trend will accelerate in the mobile world. The rapid success of CM indicates a growing consumer desire for a fully compatible Android operating system that is truly independent of any hardware company or OEM. Consumers win as Cyanogen can launch updates more frequently, fix bugs faster, and deploy new features more regularly, compared to OEMs whose organizations are optimized for building fantastic hardware.

We’re incredibly excited to lead their Series B round of financing and to work with the Cyanogen team, a majority of which has been “sourced” from their “open source” community! Their expertise in building Android products and their desire to create a world-class mobile user experience will guide their decisions as they continue building on their success to date. Software is eating the world, Android is eating mobile, and we think Cyanogen has only just finished its appetizer and is moving on to the entrée.

I recently had the pleasure of interviewing eBay CEO John Donahoe before a crowd of military veterans at a16z. Below is an abridged version of our discussion, which focused on the type of leader John has become as he led the turnaround at eBay.

Peter Levine: You describe your management style as “servant leadership.” Where does that come from?

John Donahoe: It started with Tom Tierney, who was a mentor of mine. He was my boss at Bain & Company, and is on the board of eBay. He’s one of those leaders who care enough to always give constructive feedback.

To give you a sense of what Tom’s like, after the last eBay board meeting he calls me and asks, “How do you think it went?” And I go, “How do you think it went?” And he says, “You know John, that conversation around X, Y, and Z. If what you were trying to get across is that you felt fairly emotional about the issue, you’d already decided what you wanted to do, and you didn’t want to hear anybody else’s opinion, you did a good job. If on the other hand, what you were trying to demonstrate is that you are a seasoned and sophisticated CEO, that you are open-minded and wanted to hear others’ opinions, that you knew you could make a decision but you were actually engaging in an authentic discussion with them, eh, not so good.”

That’s something I’ve taken away, which is, I think a good leader cares enough to give his or her best people feedback. But Tom early on captured the phrase for me – servant leadership. And that’s how I’d describe my leadership: servant leadership. In most companies it’s a classic hierarchy: the person on top is the CEO; in the military it’s the general, whoever is in charge. That’s never really worked for me. I’ve always been trained with the inverted pyramid, where the customer is on top. They’re why we’re here. They are the people who give us a sense of purpose.

And inside our organization, the people I talk about on top of our org chart are the people who deal with customers every day – they’re our customer teammates, our sales team, and our support teams. And everybody inside the company exists to help them serve the customers better. And I’m at the bottom of that pyramid, and ultimately my job is to clear channels to serve our customers as well. It’s to serve. If you want to have the absolute most talented people working for you, they can’t feel like they are working for you.

The one other person who had more impact than he realizes is General Colin Powell, talking about followership. The focus is not about me, the leader; the focus is on how do I create followership. We’ve all had leaders we want to follow, and usually that leader empowers us, has our back, and treats us better than they are.

PL: Is that philosophy something that you can go to a class and learn about? Or is it experiential? And are other leadership styles acceptable as well?

JD: I think each of us has to discover what our leadership style is. You can’t copy another person’s. If I think about the leaders I respect the most, they can have different styles, but what they all are, they are authentic in understanding who they are and who they want to lead. They are transparent and consistent about that. I think that’s the job of any leader. I wouldn’t try to copy someone else’s personality. I followed Meg Whitman, I had big shoes to fill. But I couldn’t be Meg Whitman. I had to be me. The leaders that create followership, if there’s one common quality it is that they are authentic. Having good values, and then being authentic and transparent.

PL: What has been the origin of your own mistakes?

JD: I’ve made a lot of mistakes. The truth is, my biggest mistakes have been not taking enough risk. It’s not been what I’ve done, it’s been what I’ve not done. There have been times where I haven’t moved fast enough or taken enough risk. When I was running the Marketplace before becoming CEO, I was scared of taking the risk of labeling what was going on. We had stopped innovating, we weren’t delivering good experiences for our customers, and we were taking them for granted. We were living a narrative that was no longer true.

It was only when it got so bad that I spoke up and spoke the truth and took on the risk. It was hard, it hurt, and everyone hated me. By the time I became CEO it became clear to me that I was going to be presiding over this – I’m going to catch a falling knife. I was named on a Wednesday. On Monday, we had a seller meeting in Washington, DC where we announced the biggest set of changes in eBay’s history, and I labeled it as a turnaround, a word that everyone hated. We stood up and told the truth, and it felt so good. We finally labeled what everyone knew was true. It felt good for 24 hours, and then all hell broke loose.

PL: What are the cultural and character ingredients about building a great and enduring business?

JD: The first thing is picking the right company. I was at Bain for 20 years. I loved it. And when Meg first called me to join eBay, it was the hottest company on earth at that stage. I said, Meg, that’s not me. I’m not a Valley guy in that way. It doesn’t light my fire. There’s nothing wrong with it, it just doesn’t light my fire. And she said, “I want you to meet eBay’s founder, Pierre Omidyar.” I’ll never forget – it was a rainy day in November 2004 at this place where eBay was having a leadership meeting. I was curious. I had never met one of the most famous Internet entrepreneurs. I went in thinking I would meet somebody like Steve Jobs – some larger-than-life personality, maybe a little brash and arrogant. And I could not have been more surprised. Pierre is soft-spoken, and one of the most humble, centered humans I’ve ever met. And I sit down and we’re talking and I say, “Pierre, how do you measure success for eBay?” And he didn’t say a thing about growth rate or revenue or stock price or reputation in the short term. He said, “John, what I care about is I want eBay to positively impact hundreds of millions of people’s lives all over the world and I want to do it over decades. I don’t just want it to be a flash in the pan. If we’re going to have lasting impact and help the world be a better place, we’ve got to last.” And I was like, you had me at ‘hello.’ I literally walked into that interview thinking I wasn’t going to leave Bain for eBay, and I walked out saying, “I want to follow this guy.”

PL: How should the rest of us think about picking the right company? Does “hot” matter?

JD: What I would suggest is, don’t listen to what everyone else says. Don’t join something because everyone else is. You need to ask yourself, can you personally relate to the purpose of the company that you’re joining? It’s interesting, because Silicon Valley has produced 98 percent of the greatest technology, innovation, startup companies, and entrepreneurs in the last 50 years. There’s no place that’s even close. But during that period of time – 50 years – Silicon Valley has produced five scale-enduring companies. And I’m defining scale as above a $20 billion market cap, and I’m defining enduring as having been successful for a 20-year period or longer: HP, Intel, Apple, Oracle, and Cisco. That’s it. Google’s not 20, eBay’s not 20, none of the Internet companies are. And the ethos around here is short-term – the hot thing. And you know, when it stops being hot, I’m going to jump to the next one.

What it takes to commit to building a great, enduring company is a different mindset. What is true about Silicon Valley is that innovation is the lifeblood. Innovation drives competitive advantage, but what I think is not talked about is the timeless principles of management. To build a great, enduring company you have to marry innovation with the timeless principles of management. How do you scale, how do you build a team, how do you go global, how do you develop and retain people? A lot of what we’re trying to do at eBay is marry those timeless principles of management with the ability to innovate. The question is how do we try to do both? We’re right in the middle of that experiment.

One of the holy grails in the storage market has been to deliver a piece of software that could eliminate the need for an external storage array.  The software would provide all the capabilities of an enterprise-class storage device, install on commodity servers alongside applications, eliminate the need for a storage network, and provide shared storage semantics, high availability, and scale-out. With Maxta, the search for such a holy grail ends here.

The external storage array and associated storage network have been a staple of enterprise computing for several decades.  Innovations in storage have been all about making the external storage array faster and more reliable.  Even with all the recent excitement of flash replacing spinning disk, the entire focus of the $30B storage market has been around incrementally improving the external array.   Incrementalism as opposed to (literally) thinking outside the box.

Maxta is announcing a revolutionary shift in storage.  Not only are storage arrays and networks eliminated, but, as a result, compute and storage are co-located.  This convergence keeps the application close to its data, improving performance, reliability, and simplicity.  A layer of software to replace a storage array sounds too good to be true, except Maxta has paying customers and production deployments, and has delivered several releases of their software prior to today’s announcement.

Maxta would not be possible without CEO Yoram Novick, who is a world-class expert in storage software and data center design. Yoram holds 25 patents and was previously CEO of Topio, a successful storage company that was acquired by NetApp several years ago. He’s a storage software genius, with a penchant for engineering design and feature completeness as opposed to fluffy marketing announcements and future promises. He’s the real deal and a true storage geek at heart.

When I met Yoram several years ago, he came to us with the radical idea to build a software layer to change the storage landscape.  Leverage commodity components and put all the hard stuff in software.  Within minutes, we decided to invest and we haven’t looked back since.  We are thrilled to be working with Yoram and team as they use software to deliver one of the holy grails of the storage market.

With all the recent innovations in flash storage design, you’d think we’d have a smooth path toward supporting storage requirements for new hyper-scale datacenters and cloud computing. However, nothing could be further from the truth! Existing storage architectures, despite taking advantage of flash, are doomed in the hyper-scale world. Simply put, storage has not evolved in 30 years, resulting in a huge disconnect between the requirements of the new datacenter and the capability of existing storage systems.

There are two fundamental problems right now: 1) existing storage does not scale for the hyper-scale datacenter and 2) traditional storage stacks have not been architected to take advantage of the recent innovations in flash.

Current storage systems don’t scale because they were designed in the mainframe era. Mainframe-style arrays were designed for a world where a single mainframe provided the compute and a handful of storage arrays hung off the mainframe to support data storage. This one-to-one architecture continues to be used today, despite the fact that the compute side of the hyper-scale datacenter has expanded to hundreds or thousands of individual servers in enterprise datacenters, much as it has at Google or Amazon. As you can imagine, you get theoretically unlimited capacity for compute, only to be severely bottlenecked on the storage end of things.

Furthermore, while flash storage has become the hot new thing—super fast, energy efficient, with a smaller form factor—the other internal parts of the storage subsystem have not changed at all. Adding flash to an outdated system is like adding a jet engine to the Wright Brothers’ airplane: pretty much doomed to fail, despite the hype.

This brings me to Coho Data (formerly known as Convergent.io) and a team I’ve worked closely with for years. The founding team includes Ramana Jonnala, Andy Warfield and Keir Fraser, superb product visionaries and architects, with deep domain expertise in virtualization and systems, having built the XenSource open source virtualization stack and scaled it to support some of the biggest public clouds around. This team has built infrastructure software that has been used by hundreds of millions of end users and installed on millions of machines. By applying their expertise and adding key talent with network virtualization experience to the team, they are challenging the fundamentals of storage.

A year after we funded their Series A – a year the team spent heads-down building product and piloting with customers – I’m really excited to share that Coho Data is today announcing a revolutionary design in storage, built from the inside out to challenge how companies of all sizes think about how they store and deliver access to their data. The team has rebuilt the entire storage array with new software and integrated networking to offer the fastest, most scalable storage system in the market, effectively turning the Wright Brothers’ airframe into an F-16 fighter jet. The Coho DataStream architecture supports the most demanding hyper-scale environments while optimizing for the use of commodity flash, all with standard and extensible interfaces and simple integration. As hyper-scale datacenters become the new standard, monolithic storage arrays will go the way of the mainframe.

Coho Data is changing the storage landscape from the inside out and I could not be more thrilled to be part of the most exciting storage company of the cloud generation.

The SaaS Manifesto has so far examined the rise of the departmental user as a major influencer of enterprise buying decisions and why building a real sales team is integral to the process. In this final installment, the last critical consideration is the need to rethink the procurement process for SaaS.

The corporate one-size-fits-all process for procuring applications is broken. Many companies adopting SaaS are still using 85-page perpetual license agreements that were written years ago and designed for on-premise software purchases. It’s no wonder that SaaS companies want to avoid central IT and purchasing like the plague! Fortunately, the solution is simple: Every company needs to adopt a SaaS policy and treat SaaS software purchasing in an entirely different fashion from on-premise practices.

The perpetual license and SaaS don’t mix

Perpetual licenses made sense when companies invested millions of dollars in on-premise software, services, and corresponding infrastructure. This resulted in procurement and legal teams requiring a slew of contractual terms, including indemnification and liability limits, future updates and releases, maintenance fees, perpetual use, termination and continued use, future software modules, infrastructure procurement, beta test criteria, deployment roll-out, and many others.

The use of existing procurement practices makes no sense for SaaS and results in a sluggish and frustrating experience for everyone involved.  Frustrating for the departmental user who wants fast access to much needed business capability; frustrating to the procurement and legal groups who spend an inordinate amount of time negotiating useless terms; frustrating to the SaaS provider who is trying to go around the entire process; and frustrating to the CIO who fears rogue deployment of SaaS apps and their impact on security and compliance.

Like oil and water, the two practices don’t mix, and it’s time to change the game and create an effective three-way partnership between the user, the SaaS provider and the CIO.

Creating an “express lane” SaaS policy

CIOs can enable rapid adoption of SaaS by creating a repeatable, streamlined purchasing process that is significantly faster than the traditional approach. SaaS policies and procurement checklists should reflect this, and CIOs can help key business stakeholders identify and map their requirements to the available SaaS offerings and ensure that appropriate due diligence is done.

A starting point should be the creation of different policies and procurement processes for mission-critical and departmental applications. Mission-critical applications still require 24×7 support, high availability Service Level Agreements, data security and disaster recovery considerations. Because these applications may directly impact revenue or customers, substantial up-front diligence is required. Nonetheless, many of the old, on-premise contract requirements are still irrelevant here since there is no infrastructure, no maintenance, no perpetuity, etc.

Aside from mission-critical SaaS apps, 80-90% of all SaaS solutions are departmental apps and these are the applications that should have an “express lane” for procurement. Imagine a streamlined process for most SaaS purchases that enables fast, easy and safe deployment. Everyone wins and everyone is thrilled at the result.

The Express Lane checklist for departmental apps

The following checklist is a framework for establishing the Express Lane:

  • Data ownership and management – Clearly define that all data is owned by the company, not the SaaS provider, and can be pulled out of the provider’s system at any time.
  • Security – Ensure the ability to control the level of access by the SaaS provider and the company’s employees, as well as to shut off access when an employee leaves the company.
  • Dedicated support and escalation path – Require access to a fully staffed support team, though it probably does not need to be 24×7.
  • Reliability – Set a baseline for the provider’s historical reliability, including lack of downtime and data loss.
  • Disaster recovery processes – Maintain an internal plan to minimize information loss if the vendor goes out of business or violates the user contract.

Applying a framework creates a partnership with all stakeholders and has the following benefits:

  • Cost savings – Save money on the company resources needed to take providers through the approval process and on up-front overhead from long-term contract commitments.
  • Scalability – Use a SaaS solution for a specific segment of the business or across the entire enterprise.
  • Efficiency – Streamline the process with legal and billing.
  • Flexibility – A month-to-month contract, instead of an SLA, means the company is not locked into a long-term commitment.
  • Eliminates rogue solutions – IT gains greater control by eliminating the practice of employees signing up for services on their personal credit cards and expensing them.

Ending the use of perpetual license practices for SaaS applications will result in much better alignment between the SaaS supplier, the internal user, and the CIO. And the creation of an Express Lane for the vast majority of departmental apps will enable rapid adoption of business applications by the departmental user, while giving IT the oversight it needs in the rapidly evolving world of SaaS.

The SaaS revolution has changed not only the way products are delivered and developed, but also the way products are brought to market. SaaS and the departmentalization of IT put the line-of-business buyer at the center of the purchasing decision, and new sales approaches target that buyer with a combination of freemium and inside sales. Couple this with a streamlined process for procuring SaaS and we create the perfect storm for a massive transformation in application deployment and productivity.

Note: Part 3 of “The SaaS Manifesto” first ran in The Wall Street Journal‘s CIO Journal.

I recently had the privilege of hosting Dick Costolo (@dickc), CEO of Twitter, at Andreessen Horowitz for a fireside chat. The event was the second in a series hosted by my firm aimed at strengthening the network of military veterans in Silicon Valley and expanding this network’s connection to the greater tech ecosystem. (As much as I’d love to claim that we precipitated the filing of Twitter’s S-1, this event and the S-1 occurring on the same day was purely coincidental.)

What is not a coincidence is how Dick’s leadership style and personality have transformed Twitter into one of the most successful technology companies of our time.  We had the opportunity to spend an hour together and here’s what I learned about leadership, culture and veterans at Twitter:

How has Twitter’s culture changed over your tenure as CEO?

Dick joked that when he came on board someone could have “thrown a hand grenade into the company at 5:30pm and only hit the cleaning people.” He started holding people accountable and rewarding projects where hard work was visible. He’d go into the office at 10pm, get to know the people who were around, and then prioritize their projects. People quickly got the message.

Dick describes himself as a hands-on manager and expects the same from his team. He instituted a leadership class for all new managers, which covers topics like how to give transparent feedback (hint: don’t sugarcoat it) and how to deliver difficult news to your team, like your project getting axed (hint: don’t throw leadership under the bus). Of course, he also practices what he preaches, holding weekly 1:1 meetings with direct reports and remaining accessible to employees, regardless of rank.

Open communication is a nice objective, but how does Twitter do this given its size and scope?  

Dick admits open communication is difficult to maintain because of two opposing pressures: the first is the desire to limit communication to reduce the risk of leaks; the second is simply that everyone can’t know everything. Dick leans toward over-communicating and trusts management to synthesize relevant information, rather than publishing a transcript of every meeting. As the company continues to grow, synthesizing is increasingly important.

On leadership and veterans

Dick’s passion for good leadership is an anomaly in the tech industry. And I don’t think it’s a coincidence that military veterans are under-represented in a sector notorious for shunning authority.

Dick certainly made a strong statement traveling from Twitter’s HQ in San Francisco to our office in Menlo Park, in the middle of rush hour, to participate in our event for local veterans on the same day that Twitter filed its S-1. He clearly understands and appreciates the value that military talent can bring to the table (case in point: Russ Laraway, a rising star at Twitter, oversees their SMB unit and is a former Marine).

As for hiring vets, I couldn’t agree more with Dick when he said there are lots of roles in the technology industry for which the job requirements are a load of hogwash. For example, I can’t understand why engineering skills are part of the criteria for project management roles. Good communication skills and adherence to strict deadlines are not strengths for many engineers. But I can think of more than a few vets who would really shine with this responsibility.

On behalf of everyone at Andreessen Horowitz, I want to thank Dick and our veteran community for making this event the absolute highlight of my week!

It may sound a bit ungrateful, especially coming from someone who invests in these things, but many early SaaS companies have in many ways been successful in spite of themselves. SaaS customers have had their pick of great software products, all available from the cloud, and without the long, tortured installation efforts of previous generations of software. On the back of these frictionless software deals, SaaS companies have been growing like mad, and often without any formal sales effort. But if they haven’t already, these up-and-to-the-right companies are about to hit a wall. The reason is that early deployments and usage do not necessarily translate into sustainable revenue growth.

In order for SaaS businesses to really scale and reach their full potential as industry leaders, they need a real and robust sales effort. That’s right, you need to build a sales team.

It won’t be easy. I won’t pretend that it is. But scaling sales, while expensive and culturally challenging to implement, changes the size of the potential opportunity. The big opportunity for SaaS companies is to drive adoption across the whole organization, which requires a centralized effort to redesign corporate processes, facilitate training, and manage customer success. This is especially the case with tools that work best when used by everyone at a company, like CRM, human resources, or accounting.

Freemium is only part of the story

Before we dive in, there’s one thing I have to set straight: Freemium is a fantastic starting point for SaaS, but freemium is not the same as building a sales organization. Freemium is a product and marketing strategy designed to generate a massive base of users, which can be approached for a future sale. Freemium is all about seeding the market and establishing a platform for building a winning offering. The best SaaS companies use their free product to iterate and improve their offering with data and feedback. But even with an effective freemium go-to-market strategy, SaaS companies still need to think about augmenting with a sales organization. Start with freemium, but don’t end there.

The evolving role of the CIO

The CIO’s role is evolving: for most SaaS applications, the department will drive the purchase. This is different from past generations of software, where on-premise installations and routine software upgrades required the CIO to hand-hold every buying decision. In a sense, SaaS has liberated the CIO to focus on longer-term strategic business issues, rather than worry about the next Oracle or SAP upgrade. The CIO will still influence security, support, and data protection policies, so understanding these up-front becomes a key part of the selling process.

The balance of influence between the departmental buyer and the CIO differs by application as well as by company—the more mission critical, secure, and integrated, the larger the CIO role. For infrastructure purchases, as an example, the CIO continues to be highly involved in the purchasing decision.

A framework for an effective SaaS sales organization

Designing a sales and marketing function targeted at the departmental buyer is key to creating long-term competitive advantage. I’ve seen many early SaaS companies reluctantly stumble into half-baked sales efforts, only to find a flattening in revenue and customer engagement.

To convince the skeptics, I’ve asked Dan Shapero at LinkedIn, one of the most successful SaaS companies of our time, to weigh in. Dan is the VP of Talent and Insights at LinkedIn and runs a 1,200-person sales organization. While most people think that LinkedIn sells itself with a great product and no sales effort, nothing could be further from the truth. Here’s the framework that LinkedIn has developed:

Organize around the buyer. LinkedIn has multiple business lines that work with three different corporate functions: talent, marketing and sales. These departments typically make discrete decisions, with independent budgets, so LinkedIn has different teams that focus on partnering with each function.

Distinguish between new account acquisition and account success. Two of the most important lessons at LinkedIn have been (1) successful clients buy more over time and (2) the process and expertise required to acquire a new customer is very different from nurturing that customer. As a result, there are separate and distinct teams, sales processes and measures of success for managing new and existing customers.

Land, then expand. With SaaS, customers can purchase on a small scale before going all in. Rather than focusing on landing huge deals, LinkedIn has been better served by acquiring many smaller-scale deals at clients with huge long-term potential. Though smaller at the outset, a successful initial deployment often results in tremendous upside in the second, third, and fourth years of a client’s tenure. Expanding in a SaaS/freemium model is particularly effective because you not only can demonstrate success, but you can also pinpoint and size future demand based on who is using the technology for free.

Leverage inside sales for the mid-market. The creation of a robust, inside sales organization to serve clients over the phone, from regional hubs around the world, is a critical part of LinkedIn’s successful SaaS franchise. Inside sales reps close their own business and manage their own territory. The SaaS model enables the client to be engaged, sold, provisioned and serviced in a highly scalable way, without the need for an in-person visit.

Monitor customer engagement. SaaS provides incredible transparency into how actively engaged customers are with the product. Understanding where usage is strong and weak across customers allows LinkedIn to improve the customer experience by deploying training resources proactively, offering targeted advice on best practices, and improving the product roadmap. Most enterprise vendors are flying blind when it comes to understanding the success of their customers, while SaaS companies have a fundamental information advantage.

When a company is in the throes of viral adoption, building out a sales organization is not immediately intuitive. Right now, those of you in all the rapid-growth SaaS companies might still be thinking, “Not us, we’ll just keep booking those inbound leads.” You’ll keep thinking that until the inbound stops. The paradox of great SaaS companies is that the more successful a SaaS company is with early deployments, the more challenging it becomes for that organization to recognize and embrace building a formal sales organization to address the needs of the enterprise buyer. That’s why I want every SaaS company to consider the SaaS Manifesto a call to arms. We are at a point in the maturity of SaaS where a mature sales effort matters.

Up next: The SaaS Manifesto: Part 3 – the requirements for enterprise-wide SaaS adoption and deployment 

Note: Part 2 of “The SaaS Manifesto” first ran in The Wall Street Journal‘s CIO Journal. You can read Part 1 here.

It doesn’t happen often – every 10 to 15 years or so – but we are in the throes of a reordering of the $4 trillion corporate IT market. And depending on which side of that transformation you sit, this is either the best time to be an enterprise technology company (see: renaissance in enterprise computing), or reason to start looking for a new line of work.

I certainly sit among the group that sees this as a huge opportunity, and it’s far from finished. If the first phase was to build replacement technologies for every part of the IT stack, the next phase—and the next golden opportunity—is to re-imagine the business side of the equation and change how buyers and vendors come together. That is where this SaaS Manifesto comes in. Think of it as a three-part field guide to the new way enterprise computing will be bought and sold.

Part 1: Navigating the Departmentalization of IT

In the enterprise IT world, companies like Oracle, Microsoft, and SAP are established giants, so entrenched that every new company has had to either peacefully co-exist with them or face getting steamrolled into oblivion. But that strength comes with a weakness: these companies are slow to adopt new practices and evolve to new models. In fact, both SAP and Oracle recently attributed their missed earnings targets to “the cloud.”

And there is a major change occurring in the enterprise: beyond the technical and architectural innovation we see in new products, there are fundamental opportunities appearing on the distribution and customer side that simply never existed in the past. In past technological shifts (e.g., from mainframe to client-server, or from client-server to PC), purchasing was always done through a centralized CIO organization, no matter the product. Large vendors could rely on the depth of their existing sales channel and the reluctance of customers to move outside their respective fiefdoms to successfully enter newer areas. Sure, some vendors were left behind in each shift, but that was largely due to a lack of new technology rather than any change in the go-to-market landscape.

Today, the new buyer is the operating department—HR, sales, development, marketing—and the decisions of which technologies to procure are no longer solely centralized through the CIO. In fact, nearly 50% of all IT purchasing decisions are now being influenced and/or made by an operating department, says an August 2013 study by Enterprise Strategy Group, as these departments look for purpose-built applications. This change creates one of the most meaningful differences in the new world of enterprise computing: Not only do the large players have to create or buy new technology, but they must also adapt their offerings and sales models to appeal to this new buyer.

Here’s why this shift is difficult for established players:

Perpetual vs. subscription licensing. Many current operating plans and sales organizations at the largest technology companies are built on the perpetual license model, where a customer pays one large sum up front and the vendor immediately recognizes nearly 100% of that payment as revenue. This perpetual license gives customers the “privilege” of paying an annual maintenance fee regardless of whether or not they take advantage of future upgrades. With subscription licensing, however, revenue is recognized over the life of the contract, making this an extremely difficult economic and organizational shift for an existing vendor.

Product cycle and software development methodology. For packaged software, new features are delivered (in the best case) twice a year. Often these feature releases are never deployed due to the complexity of field upgrades, resulting in users working with software that is years old. With SaaS, development is near continuous, allowing for rapid feature innovation and instant deployment of new features to all users.

Ease of adoption and trial-use. In the pre-SaaS, on-premise world, software purchases were made through a central CIO organization, which was equipped to deploy infrastructure and then test, certify, and validate every new application. This highly concerted – not to mention costly – effort required salespeople and systems engineers to run pilots, alphas, and internal rollouts. The process would often take months, and by the time the software was ready to be deployed, there was no clear indication as to whether the product was really useful to the company.

However, with the advent of cloud and SaaS, the end user/department can easily try new software without an on-premise install, often at no cost. Developers and startups have found a replicable, reliable way to circumvent the iron grip of the industry’s major players and innovate, rather than iterate, on solutions to complex business problems. It’s a meritocracy of applications, where the best wins.

Inside sales leverage. With easy adoption and trial-use, a typical SaaS customer will have used a product and know its capability. This makes selling an upgrade or enterprise-wide deployment much easier and more economical.

Because of SaaS, the inside sales function is growing at 15 times the pace of direct sales. For existing companies that have large direct sales groups, moving to an inside sales model requires a complete re-tooling of the sales organization. This is a difficult transition and provides an opportunity for new companies to prevail.

Customer relationships. Customer lock-in has long been the hallmark of incumbent companies. Selling on-premise software directly to the CIO resulted in a tight relationship between the CIO buyer and the incumbent vendor. No matter how slow the rollout or how buggy the end result, the fact that any new product had a receptive, locked-in customer made it incredibly difficult for a new company to wedge its way in.

But with departmentalization, individual operational units have more autonomy to purchase technology. This gives the newcomer a real opportunity to establish relationships that the incumbent may not have. In fact, several large, incumbent companies are now making an effort to get to know the departmental buyer and get ahead of this trend.

The cloud and SaaS are stripping the complexity of IT to the point where any given operational department now has the confidence to purchase the tools they need directly from the vendor, circumventing a large part of the traditional IT procurement process. Moreover, the new buyers are not encumbered by risk-averse, IT decision-makers who operate under the belief that “nobody gets fired for buying IBM.”

By targeting this departmental user, Goliath topples almost before he sees David coming.

Up next: The SaaS Manifesto: Part 2 – building a sales organization that caters to the department but recognizes the critical aspects of enterprise-wide requirements, and the changing role of the CIO.

Note: Part 1 of “The SaaS Manifesto” first ran in The Wall Street Journal‘s CIO Journal.