The last few years have seen the incredible growth of cloud computing. Applications and services that were developed for on-premise use have all found a new home in the cloud. As with most technology transformations, early adoption often occurs around a hobbyist developer community that then expands into more mainstream adoption and use. The cloud is no exception; as it grows it continues to empower developers to shape technology and change the world.

What started as a primitive, manual, and cumbersome infrastructure service has evolved into a variety of cloud vendors offering vast collections of services targeted at a number of different audiences – perhaps too vast. We have Database-as-a-Service, Compute-as-a-Service, Analytics-as-a-Service, Storage-as-a-Service, as well as deployment and network environments, and everything in between. It has left the developer community with more options, functionality, and cost than it needs or wants.

It’s time for the cloud to once again focus on developers, and that is where DigitalOcean comes in.

Started by Ben Uretsky and his brother Moisey, with the additional intellectual brawn of an eclectic group of passionate developers, DigitalOcean has focused on one goal: making developers’ lives easier by providing a powerful, yet simple Infrastructure-as-a-Service.

SOURCE: Netcraft

The DigitalOcean service is purpose-built for the developer, offering automated web infrastructure for deploying web-based applications. The results have been eye-popping. From a standing start in December 2012, DigitalOcean has grown from 100 web-facing computers to over 50,000 today, making it one of the fastest growing cloud computing providers in the world. It is now the ninth largest web infrastructure provider on the planet. With this round of funding, the management team intends to aggressively hire more in-house and remote software engineers to accelerate that already tremendous momentum.


DigitalOcean is also taking a page out of the open source world, using and contributing to the most relevant open source projects. In the same way that GitHub, Facebook, and Twitter offer open source as a service, so does DigitalOcean. A few weeks back, I wrote a post presenting several viable models for open source deployments, and DigitalOcean is a case study. We are thrilled to be working with the DigitalOcean team as they continue to build a cloud that developers love.


Open source software powers the world’s technology. In the past decade, there has been an inexorable adoption of open source in most aspects of computing. Without open source, Facebook, Google, Amazon, and nearly every other modern technology company would not exist. Thanks to an amazing community of innovative, top-notch programmers, open source has become the foundation of cloud computing, software-as-a-service, next generation databases, mobile devices, the consumer internet, and even Bitcoin.

Yet, with all that momentum, there’s a vocal segment of software insiders that preach the looming failure of open source software against competition from proprietary software vendors. The future for open source, they argue, is as also-ran software, relegated to niche projects. It’s proprietary software vendors that will handle the really critical stuff.

So which is it? The simultaneous success of technology companies using open source and the apparent failure of open source companies is a head-scratcher. Both are true, but not for the reasons some would have you believe. The success or failure of open source lies not in the software itself – it’s definitely up to the tasks required of it – but in the underlying business model.

It started (and ended) with Red Hat

Red Hat, the Linux operating system company, pioneered the original open source business model. Red Hat gives away open source software for free but charges a support fee to those customers who rely on Red Hat for maintenance, support, and installation. As revenue began to roll into Red Hat, a race began among startups to develop an open source offering for each proprietary software counterpart and then wrap a Red Hat-style service offering around it. Companies such as MySQL, XenSource, SugarCRM, Ubuntu, and Revolution Analytics were born in this rush toward open source.

Red Hat is a fantastic company, and a pioneer in successfully commercializing open source. However, beyond Red Hat the effort has largely been a failure from a business standpoint. Consider that the “support” model has been around for 20 years, and other than Red Hat no other standalone public company has been able to offer an alternative to its proprietary counterpart. When you compare the market cap and revenue of Red Hat to Microsoft or Amazon or Oracle, even Red Hat starts to look like a lukewarm success. The overwhelming success of Linux is disproportionate to the performance of Red Hat. Great for open source, a little disappointing for Red Hat.


There are many reasons why the Red Hat model doesn’t work, but its key point of failure is that the business model simply does not enable adequate funding of ongoing investments. The consequence of the model is minimal product differentiation resulting in limited pricing power and corresponding lack of revenue. As shown below, the open source support model generates a fraction of the revenue of other licensing models. For that reason it’s nearly impossible to properly invest in product development, support, or sales the way that companies like Microsoft or Oracle or Amazon can.


And if that weren’t tough enough, pure open source companies have other factors stacked against them. Product roadmaps and requirements are often left to a distributed group of developers. Unless a company employs a majority of the inventors of a particular open source project, there is a high likelihood that the project never gains traction or that another company decides to fork the technology. The tension between defining and controlling a stable roadmap and innovating quickly enough to prevent a fork is vicious for small organizations.

To make matters worse, the more successful an open source project, the more large companies want to co-opt the code base. I experienced this first-hand as CEO at XenSource, where every major software and hardware company leveraged our code base with nearly zero revenue coming back to us. We had made the product so easy to use and so important that we had out-engineered ourselves. Great for the open source community, not so great for us.

If you think this is past history and not relevant, I see a similar situation occurring today with OpenStack, and it is likely happening with many other successful open source projects. As an open source company, you are not only competing with proprietary incumbents, you are competing with the open source community itself. It’s a veritable shit-show.

If you’re lucky and have a super-successful open source project, maybe a large company will pay you a few bucks for one-time support, or ask you to build a “shim” or a “foo” or a “bar.” If you are really lucky (as we were with XenSource), you might be acquired as a “strategic” acquisition. But, most open source companies don’t have that kind of luck, and the chances of going public and creating a large standalone company are pretty darn slim.

Even with all that stacked against them, we still see entrepreneurs pitching their companies as the “next Red Hat of…” Here is the problem with that vision: there has never been a “next Red Hat of…” It’s not to say we won’t see another Red Hat, but the odds are long and the path is littered with the corpses of companies that have tried the support model.

But there is a model that works.

Selling open source as a service

The winning open source model turns open source 1.0 on its head. By packaging open source into a service (as in cloud computing or software-as-a-service) or as a software or hardware appliance, companies can monetize open source with a far more robust and flexible model, encouraging innovation and ongoing investment in software development.

Many of today’s most successful new companies rely on an ecosystem of standardized open source components that are generally re-used and updated by the industry at large. Companies that use these open source building blocks are more than happy to contribute to their ongoing success. These open source building blocks are the foundation of all modern cloud and SaaS offerings, and they are being monetized beautifully in many cases.

Depending on the company and the product, an organization may develop more open source software specific to their business or build some amount of proprietary software to complete the product offering. Amazon, Facebook, GitHub and scores of others mix open source components with their own proprietary code, and then sell the combination as a service.

This recipe – combining open source with a service or appliance model – is producing staggering results across the software landscape. Cloud and SaaS adoption is accelerating an order of magnitude faster than on-premise deployments, and open source has been the enabler of this transformation.

Beyond SaaS, I would expect future models for open source monetization to emerge, which is great for the industry.

So what are you waiting for?

Build a big business on top of and around a successful platform by adding something of your own that is both substantial and differentiated. Take, for example, our national road and highway system. If you view it as the transportation platform, you start to see the host of highly differentiated businesses that have been built on top of it, ranging from FedEx to Tesla. The ridesharing service Lyft is building its business on top of that same transportation platform, as well as Amazon’s AWS platform.

If you extend that platform worldview, Red Hat’s support model amounts to selling a slightly better version of the road – in this case, the Linux operating system – which is already good enough for most people.

Sure, when you first launch a business built using open source components, it’s important to grow the size of the platform and cater to your early adopters to drive initial success. So you might start off looking a little like Red Hat. But if all goes well, you’ll come to resemble Facebook, GitHub, Amazon, or Cumulus Networks as you layer your own special something on top of the platform and deliver it as a service, or package it as an appliance. Becoming the next Red Hat is an admirable goal, but when you look at the trends today, maybe even Red Hat should think about becoming the next Amazon.

Mobile devices have put supercomputers in our hands and, along with their first cousin the tablet, represent the largest shift in computing since the PC era. The capacity and power of these devices are in their infancy, and all expectations point to a doubling of capability every 18 months. In the same way that the PC era unlocked the imagination and innovation of an entire generation, we are seeing a repeat pattern with mobile devices at unprecedented scale.

History has shown that as compute capacity becomes available, new applications and programs happily consume the excess. Additional memory, disk, and processing power always lead to substantially better and more innovative products, serving an ever-broader set of consumers. We saw it with the PC, and we will see it with mobile as the number of devices grows well past a billion. Yet-to-be-developed applications are waiting to take advantage of this processing capability, and it’s going to require mobile operating system innovation to expose this awesome power.

An operating system is one of the most fundamental and important pieces of software. Great operating systems leverage new hardware, provide a consistent way to run applications, and provide a foundation for all interaction with a computing system. For PCs, Windows is the dominant operating system; for servers, Linux is dominant; and for mobile, Android enjoys a staggering 82% market share (Gartner, November 2013). Like Linux (and unlike Windows), Android is open source, which means no one company owns the code. Anyone can improve Android by adding new functionality and tools.

One reason Android is winning is that open source spirit of additive innovation. Because consumers are clamoring for increased personalization and customization options, the Android open source community has happily taken up the task of fulfilling that demand. What’s more, the growing enterprise trend of BYOD (bring your own device) is here to stay, which will further add to that demand as consumers use their mobile devices at home, at work, and on the road – all requiring customized functionality.

Enter Cyanogen, our newest portfolio company, which is well on its way to building a new operating system, CyanogenMod (CM), leveraging core open source Android to provide the fastest, most innovative mobile operating system platform. CM takes the best of what Android offers and adds innovative features to create a clean yet customizable user experience. CM is 100% compatible with all Android applications, yet brings fabulous new capabilities to Android such as enhanced security, performance, device support, and personalization. Cyanogen has been powered by the open source community – led by its founder Steve Kondik – ever since it launched four years ago. The community continues to work at a feverish pace, helping to bring up both newly launched and existing Android devices with the latest Cyanogen builds.

Today, tens of millions of devices are running Cyanogen worldwide, and we believe that CM has the opportunity to become one of the world’s largest mobile operating systems. As history suggests, companies such as Microsoft and Red Hat have done exceedingly well by being independent of hardware, and we believe this trend will accelerate in the mobile world. The rapid success of CM indicates a growing consumer desire for a fully compatible Android operating system that is truly independent of any hardware company or OEM. Consumers win as Cyanogen can launch updates more frequently, fix bugs faster, and deploy new features more regularly than OEMs, whose organizations are optimized for building fantastic hardware.

We’re incredibly excited to lead their Series B round of financing and to work with the Cyanogen team, a majority of which has been “sourced” from their “open source” community! Their expertise in building Android products and their desire to create a world-class mobile user experience will guide their decisions as they continue building on their success to date. Software is eating the world, Android is eating mobile, and we think Cyanogen has only just finished its appetizer and is moving on to the entree.

I recently had the pleasure of interviewing eBay CEO John Donahoe before a crowd of military veterans at a16z. Below is an abridged version of our discussion, which focused on the type of leader John has become as he led the turnaround at eBay.

Peter Levine: You describe your management style as “servant leadership.” Where does that come from?

John Donahoe: It started with Tom Tierney, who was a mentor of mine. He was my boss at Bain & Company, and is on the board of eBay. He’s one of those leaders who care enough to always give constructive feedback.

To give you a sense of what Tom’s like, after the last eBay board meeting he calls me and asks, “How do you think it went?” And I go, “How do you think it went?” And he says, “You know John, that conversation around X, Y, and Z. If what you were trying to get across is that you felt fairly emotional about the issue, you’d already decided what you wanted to do, and you didn’t want to hear anybody else’s opinion, you did a good job. If, on the other hand, what you were trying to demonstrate is that you are a seasoned and sophisticated CEO, that you are open-minded and wanted to hear others’ opinions, that you knew you could make a decision but you were actually engaging in an authentic discussion with them, eh, not so good.”

That’s something I’ve taken away, which is, I think a good leader cares enough to give his or her best people feedback. But Tom early on captured the phrase for me – servant leadership. And that’s how I’d describe my leadership: servant leadership. In most companies it’s a classic hierarchy: the person on top is the CEO; in the military it’s the general, whoever is in charge. That’s never really worked for me. I’ve always been trained with the inverted pyramid, where the customer is on top. They’re why we’re here. They are the people who give us our sense of purpose.

And inside our organization, the people I talk about on top of our org chart are the people who deal with customers every day – they’re our customer teammates, our sales team, and our support teams. And everybody inside the company exists to help them serve the customers better. And I’m at the bottom of that pyramid, and ultimately my job is to clear channels to serve our customers as well. It’s to serve. If you want to have the absolute most talented people working for you, they can’t feel like they are working for you.

The one other person who had more impact than he realizes is General Colin Powell, talking about followership. The focus is not on me, the leader; the focus is on how I create followership. We’ve all had leaders we want to follow, and usually that leader empowers us, has our back, and treats us better than they treat themselves.

PL: Is that philosophy something that you can go to a class and learn about? Or is it experiential? And are other leadership styles acceptable as well?

JD: I think each of us has to discover what our leadership style is. You can’t copy another person’s. If I think about the leaders I respect the most, they can have different styles, but what they all are, they are authentic in understanding who they are and who they want to lead. They are transparent and consistent about that. I think that’s the job of any leader. I wouldn’t try to copy someone else’s personality. I followed Meg Whitman, I had big shoes to fill. But I couldn’t be Meg Whitman. I had to be me. The leaders that create followership, if there’s one common quality it is that they are authentic. Having good values, and then being authentic and transparent.

PL: What has been the origin of your own mistakes?

JD: I’ve made a lot of mistakes. The truth is, my biggest mistakes have been not taking enough risk. It’s not been what I’ve done, it’s been what I’ve not done. There have been times where I haven’t moved fast enough or taken enough risk. When I was running the Marketplace before becoming CEO, I was scared of taking the risk of labeling what was going on. We had stopped innovating, we weren’t delivering good experiences for our customers, and we were taking them for granted. We were living a narrative that was no longer true.

It was only when it got so bad that I spoke up and spoke the truth and took on the risk. It was hard, it hurt, and everyone hated me. By the time I became CEO it became clear to me that I was going to be presiding over this – I’m going to catch a falling knife. I was named on a Wednesday. On Monday, we had a seller meeting in Washington, DC where we announced the biggest set of changes in eBay’s history, and I labeled it a turnaround, a word that everyone hated. We stood up and told the truth, and it felt so good. We finally labeled what everyone knew was true. It felt good for 24 hours, and then all hell broke loose.

PL: What are the cultural and character ingredients about building a great and enduring business?

JD: The first thing is picking the right company. I was at Bain for 20 years. I loved it. And when Meg first called me to join eBay, it was the hottest company on earth at that stage. I said, Meg, that’s not me. I’m not a Valley guy in that way. It doesn’t light my fire. There’s nothing wrong with it, it just doesn’t light my fire. And she said, “I want you to meet eBay’s founder, Pierre Omidyar.” I’ll never forget – it was a rainy day in November 2004 at this place where eBay was having a leadership meeting. I was curious. I had never met one of the most famous Internet entrepreneurs. I went in thinking I would meet somebody like Steve Jobs – some larger-than-life personality, maybe a little brash and arrogant. And I could not have been more surprised. Pierre is soft-spoken, and one of the most humble, centered humans I’ve ever met. And I sit down and we’re talking and I say, “Pierre, how do you measure success for eBay?” And he didn’t say a thing about growth rate or revenue or stock price or reputation in the short term. He said, “John, what I care about is I want eBay to positively impact hundreds of millions of people’s lives all over the world, and I want to do it over decades. I don’t just want it to be a flash in the pan. If we’re going to have lasting impact and help the world be a better place, we’ve got to last.” And I was like, you had me at ‘hello.’ I literally walked into that interview thinking I wasn’t going to leave Bain for eBay, and I walked out saying, “I want to follow this guy.”

PL: How should the rest of us think about picking the right company? Does “hot” matter?

JD: What I would suggest is, don’t listen to what everyone else says. Don’t join something because everyone else is. You need to ask yourself, can you personally relate to the purpose of the company that you’re joining? It’s interesting, because Silicon Valley has produced 98 percent of the greatest technology, innovation, startup companies, and entrepreneurs in the last 50 years. There’s no place that’s even close. But during that period of time – 50 years – Silicon Valley has produced five scale-enduring companies. And I’m defining scale as above $20 billion market cap, and I’m defining enduring as having been successful for a 20-year period or longer: HP, Intel, Apple, Oracle and Cisco. That’s it. Google’s not 20, eBay’s not 20, none of the Internet companies are. And the ethos around here is short-term, the hot. And you know, when it stops being hot, I’m going to jump to the next one.

What it takes to commit to build a great, enduring company is a different mindset. What is true about Silicon Valley is that innovation is the lifeblood. Innovation drives competitive advantage, but what I think is not talked about is the timeless principles of management. To build a great, enduring company you have to marry innovation with the timeless principles of management. How do you scale, how do you build a team, how do you go global, how do you develop and retain people? A lot of what we’re trying to do at eBay is marry those timeless principles of management with the ability to innovate. The question is, how do we try to do both? We’re right in the middle of that experiment.

One of the holy grails in the storage market has been to deliver a piece of software that could eliminate the need for an external storage array.  The software would provide all the capabilities of an enterprise-class storage device, install on commodity servers alongside applications, eliminate the need for a storage network, and provide shared storage semantics, high availability, and scale-out. With Maxta, the search for such a holy grail ends here.

The external storage array and associated storage network have been a staple of enterprise computing for several decades.  Innovations in storage have been all about making the external storage array faster and more reliable.  Even with all the recent excitement of flash replacing spinning disk, the entire focus of the $30B storage market has been around incrementally improving the external array.   Incrementalism as opposed to (literally) thinking outside the box.

Maxta is announcing a revolutionary shift in storage.  Not only are storage arrays and networks eliminated, but, as a result, compute and storage are co-located.  This convergence keeps the application close to its data, improving performance, reliability, and simplicity.  A layer of software to replace a storage array sounds too good to be true, except Maxta has paying customers and production deployments, and has delivered several releases of their software prior to today’s announcement.

Maxta would not be possible without CEO Yoram Novick, who is a world-class expert in storage software and data center design.  Yoram holds 25 patents and was previously CEO of Topio, a successful storage company that was acquired by NTAP several years ago.  He’s a storage software genius, with a penchant for engineering design and feature completeness as opposed to fluffy marketing announcements and future promises.  He’s the real deal and a true storage geek at heart.

When I met Yoram several years ago, he came to us with the radical idea to build a software layer to change the storage landscape.  Leverage commodity components and put all the hard stuff in software.  Within minutes, we decided to invest and we haven’t looked back since.  We are thrilled to be working with Yoram and team as they use software to deliver one of the holy grails of the storage market.

With all the recent innovations in flash storage design, you’d think we’d have a smooth path toward supporting storage requirements for new hyper-scale datacenters and cloud computing. However, nothing could be further from the truth! Existing storage architectures, despite taking advantage of flash, are doomed in the hyper-scale world. Simply put, storage has not evolved in 30 years, resulting in a huge disconnect between the requirements of the new datacenter and the capability of existing storage systems.

There are two fundamental problems right now: 1) existing storage does not scale for the hyper-scale datacenter and 2) traditional storage stacks have not been architected to take advantage of the recent innovations in flash.

Current storage systems don’t scale because they were designed in the mainframe era. Mainframe-style arrays were designed for a world where a single mainframe provided the compute and a handful of storage arrays hung off the mainframe to support data storage. This one-to-one architecture persists today, even as the compute side of the hyper-scale datacenter expands to hundreds or thousands of individual servers, with enterprise datacenters coming to resemble Google’s or Amazon’s. As you can imagine, you achieve theoretically unlimited capacity for compute only to be severely bottlenecked on the storage end of things.

Furthermore, while flash storage has become the hot new thing—super fast, energy efficient, with a smaller form factor—the other internal parts of the storage subsystem have not changed at all. Adding flash to an outdated system is like adding a jet engine to the Wright Brothers’ airplane: pretty much doomed to fail, despite the hype.

This brings me to Coho Data (formerly known as Convergent.io) and a team I’ve worked closely with for years. The founding team includes Ramana Jonnala, Andy Warfield and Keir Fraser, superb product visionaries and architects, with deep domain expertise in virtualization and systems, having built the XenSource open source virtualization stack and scaled it to support some of the biggest public clouds around. This team has built infrastructure software that has been used by hundreds of millions of end users and installed on millions of machines. By applying their expertise and adding key talent with network virtualization experience to the team, they are challenging the fundamentals of storage.

A year after we funded their Series A, having spent that time heads-down building product and piloting with customers, I’m really excited to share that Coho Data today is announcing a revolutionary design in storage that has been built from the inside out to challenge how companies of all sizes think about how they store and deliver access to their data. The team has rebuilt the entire storage array with new software and integrated networking to offer the fastest, most scalable storage system in the market, effectively turning the Wright Brothers’ airframe into an F-16 fighter jet. The Coho DataStream architecture supports the most demanding hyper-scale environments, while at the same time optimizing for the use of commodity flash, all with standard and extensible interfaces and simple integration. As hyper-scale datacenters become the new standard, monolithic storage arrays will go the way of the mainframe.

Coho Data is changing the storage landscape from the inside out and I could not be more thrilled to be part of the most exciting storage company of the cloud generation.

This final installment of the SaaS Manifesto, which has examined the rise of the departmental user as a major influencer of enterprise buying decisions and how building a real sales team is integral to the process, turns to the last critical consideration: the need to rethink the procurement process for SaaS.

The corporate one-size-fits-all process for procuring applications is broken. Many companies adopting SaaS are still using 85-page perpetual license agreements that were written years ago and designed for on-premise software purchases. It’s no wonder that SaaS companies want to avoid central IT and purchasing like the plague! Fortunately, the solution is simple: Every company needs to adopt a SaaS policy and treat SaaS software purchasing in an entirely different fashion from on-premise practices.

The perpetual license and SaaS don’t mix

Perpetual licenses made sense when companies invested millions of dollars in on-premise software, services and corresponding infrastructure. This resulted in procurement and legal teams requiring a slew of contractual terms, including indemnification and liability limits, future updates and releases, maintenance fees, perpetual use, termination and continued use, future software modules, infrastructure procurement, beta test criteria, deployment roll-out and many others.

The use of existing procurement practices makes no sense for SaaS and results in a sluggish and frustrating experience for everyone involved. Frustrating for the departmental user who wants fast access to much-needed business capability; frustrating to the procurement and legal groups who spend an inordinate amount of time negotiating useless terms; frustrating to the SaaS provider who is trying to go around the entire process; and frustrating to the CIO who fears rogue deployment of SaaS apps and their impact on security and compliance.

Like oil and water, the two practices don’t mix, and it’s time to change the game and create an effective three-way partnership between the user, the SaaS provider and the CIO.

Creating an “express lane” SaaS policy

CIOs can enable rapid adoption of SaaS by creating a repeatable, streamlined purchasing process that is significantly faster than a traditional approach. SaaS policies and procurement checklists should reflect this, and CIOs can help key business stakeholders identify and map their requirements to the available SaaS offerings and ensure that appropriate due diligence is done.

A starting point should be the creation of different policies and procurement processes for mission-critical and departmental applications. Mission-critical applications still require 24×7 support, high availability Service Level Agreements, data security and disaster recovery considerations. Because these applications may directly impact revenue or customers, substantial up-front diligence is required. Nonetheless, many of the old, on-premise contract requirements are still irrelevant here since there is no infrastructure, no maintenance, no perpetuity, etc.

Aside from mission-critical SaaS apps, 80-90% of all SaaS solutions are departmental apps and these are the applications that should have an “express lane” for procurement. Imagine a streamlined process for most SaaS purchases that enables fast, easy and safe deployment. Everyone wins and everyone is thrilled at the result.

The Express Lane checklist for departmental apps

The following checklist is a framework for establishing the Express Lane:

  • Data ownership and management – Clearly define that all data is owned by the company, not the SaaS provider, and can be pulled out of the provider’s system at any time.
  • Security – The ability to control the level of access by the SaaS provider and the company’s employees, as well as to shut off access when an employee leaves the company.
  • Dedicated support and escalation path – Access to a fully staffed support team, but probably does not need to be 24×7.
  • Reliability – Set a baseline for the provider’s historical reliability, including lack of downtime and data loss.
  • Disaster recovery processes – Internal plan to minimize information loss if the vendor goes out of business or violates the user contract.

Applying a framework creates a partnership with all stakeholders and has the following benefits:

  • Cost savings – Save money on the company resources needed to take providers through the approval process and on up-front overhead from long-term contract commitments.
  • Scalability – Ability to use a SaaS solution for a specific segment of a business or across an entire enterprise.
  • Efficiency – Streamlined process with legal and billing.
  • Flexibility – A month-to-month contract, instead of an SLA, means a company is not locked into a long-term commitment.
  • Eliminates rogue solutions – IT gains greater control by eliminating the practice of employees signing up for services on their personal credit cards and expensing.

Ending the use of perpetual license practices for SaaS applications will result in much better alignment between the SaaS supplier, the internal user and the CIO. And the creation of an Express Lane for the vast majority of all departmental apps will enable rapid adoption of business applications by the departmental user, while giving IT the oversight in the rapidly evolving world of SaaS.

The SaaS revolution has changed not only the way products are delivered and developed, but the way products are brought to market. SaaS and the departmentalization of IT recognize the line-of-business buyer, and new sales approaches target that buyer with a combination of freemium and inside sales. Couple this with a streamlined process for procuring SaaS, and we have the perfect storm for a massive transformation in application deployment and productivity.

Note: Part 3 of “The SaaS Manifesto” first ran in The Wall Street Journal‘s CIO Journal.