Archive


Virtualization has been a key driver behind every major trend in software, from search to social networks to SaaS, over the past decade. In fact, most of the applications we use — and cloud computing as we know it today — would not have been possible without the server utilization and cost savings that resulted from virtualization.

But now, new cloud architectures are reimagining the entire data center. Virtualization as we know it can no longer keep up.

As data centers transform, the core insight behind virtualization — that of carving up a large, expensive server into several virtual machines — is being turned on its head. Instead of divvying up the resources of individual servers, large numbers of servers are aggregated into a single warehouse-scale (though still virtual!) “computer” to run highly distributed applications.

Every IT organization and developer will be affected by these changes, especially as scaling demands increase and applications get more complex every day. How can companies that have already invested in the current paradigm of virtualization understand the shift? What’s driving it? And what happens next?

Without a file system, modern computers would not operate. At some point, all I/O and most data on a computer find their way into a file system — making it one of the most indispensable system software components.

File systems have always been designed for a disk-drive-centric storage environment. It’s a fundamental paradigm that has existed since the beginning of computing. But the world is changing. As we confront unprecedented amounts of data, much of it arriving in real time, there is a shift from disk-centric to memory-centric computing. With the cost of system memory steadily decreasing, we can reliably predict that the same memory will serve both storage and compute.

This shift to memory-centric computing requires an entirely new file and storage system. And that’s where Tachyon comes into play.

I was introduced to Paula Long, the CEO of DataGravity, around the same time I arrived at a16z (nearly four years ago). Every time a new storage deal was pitched to us, I would call Paula to get her thoughts. Given my own background in storage and systems software, I was blown away by Paula’s depth and knowledge in the space. Not only did she articulate every technical nuance of the project we discussed, she had an uncanny feel for what was likely to happen in the future.

Paula casually rattled off every company doing similar things, the price and performance of solid-state storage, file systems, volume managers, device drivers, block interfaces, metadata, NAS, SAN, objects, and security. It was enough to make my head spin, yet she analyzed every situation with a clarity I had never seen before. I had known Paula as the founder of EqualLogic (her prior storage company, acquired by Dell for $1.4 billion in 2008), but her insight and wisdom about everything storage far exceeded that of anyone I had met. When she came to me with her own ideas for a new storage company, there was no hesitation. Betting on Paula would result in something really special. In December 2012 we invested in DataGravity.

When we talked about DataGravity in those days, Paula would tell me how the real future of storage was unlocking the information residing in the gazillions of files and terabytes of unstructured data that organizations store but never use. She argued that most other storage companies were in a race to zero, chasing the faster-and-cheaper angle with solid-state storage and incremental innovation. “Table stakes,” she would say. “DataGravity is going to do something never done before. We are going to unlock the value of storage. Storage is the obvious place for intelligence to be surfaced.” This all sounded great, but – even with my background in the space – I never fully appreciated what Paula had envisioned. She had a secret.

Today, DataGravity is unveiling the world’s first data-aware storage system. The system is quite simply revolutionary. We saw a demonstration of the system’s capability at a board meeting a few months ago, and that is when it all came together for me. This was not some incremental system that everyone else was building, but an entirely new way of managing storage and information. I left the board meeting thinking that all storage systems in the future would have elements of the DataGravity concepts. It was truly new thinking.


The secret sauce DataGravity brings to the market is making dumb storage smart, all in a single system. DataGravity is both a primary storage array and an analytics system combined into one. The combination — without any performance or operational penalty — means, for the first time, that organizations can use their primary storage for file storage, IT operations, AND analytics at the point of storage. “Data-aware” means indexing and giving storage intelligence before it is stored. Instead of having dedicated and expensive secondary systems for analytics, operations and data analysis, DataGravity does it all in one place.
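
To make the point-of-storage indexing idea concrete, here is a toy sketch in Python. It is purely illustrative and is not DataGravity’s implementation or API; all class and field names are invented for the example.

```python
# Toy sketch of "data-aware" storage: index content on the write path so
# search and analytics live alongside primary storage, instead of in a
# separate, dedicated analytics system. Purely illustrative; this is not
# DataGravity's implementation or API, and all names here are invented.
from collections import defaultdict


class DataAwareStore:
    def __init__(self):
        self._blobs = {}                 # primary storage: path -> bytes
        self._index = defaultdict(set)   # inverted index: term -> {paths}
        self._owners = {}                # metadata captured at write time

    def write(self, path, data, owner):
        # Index *as* the data is stored, rather than scanning it later.
        for term in data.decode(errors="ignore").lower().split():
            self._index[term].add(path)
        self._owners[path] = owner
        self._blobs[path] = data

    def read(self, path):
        return self._blobs[path]

    def search(self, term):
        # Analytics at the point of storage: no export, no second copy.
        return sorted(self._index.get(term.lower(), set()))


store = DataAwareStore()
store.write("/docs/q3-plan.txt", b"revenue forecast for q3", owner="paula")
print(store.search("forecast"))   # ['/docs/q3-plan.txt']
```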

DataGravity is about to change the way we think about storage. From the demographics of data, to data security, to searching and trend information, the system will unlock an entire class of capabilities that we have not yet begun to comprehend. For example, imagine knowing when a file is being written or corrupted, before it is accessed. Or being able to identify subject-matter experts in an organization based on who is writing the most content, on what, and when. Or determining data ownership and control and correlating it with active or inactive employees. All this from a “storage” system.

So here we are today at an amazing inflection point in the history of storage. Twenty years from now, we’ll look back at this day as the day storage went from being dumb to being smart. The day that transformed the way the world stores its information. Just as Paula predicted, and just as Paula knew.


A new architectural era in computing is upon us, and the datacenter is changing to accommodate it. The cloud generation of companies has risen to dominance and proven its model, and the legacy enterprise is close behind in making this massive shift. These new datacenters—as pioneered and designed by Facebook, Google, and Twitter—are defined by hyper-scale deployments of thousands of servers, which require a new software architecture to manage and aggregate them. Mesosphere is that software, and we believe this architecture will be as disruptive to the datacenter as Linux and virtualization have been over the past decade.

Today’s application architectures and big data workloads are scale-out, stateless, and built to leverage the seemingly infinite processing capacity of modern datacenters. These hyper-scale datacenters are the equivalent of giant supercomputers: they run massively parallel applications that serve millions of user requests a second. We are moving from a collection of servers running discrete, stateful applications to massive scale-out applications that treat the hardware as one giant server.

In that “giant server” view of the world, Mesosphere is the obvious foundation for this new cloud stack and adoption is scaling fast. Look under the datacenter hood in many forward-looking, hyper-scale environments, including Twitter, Airbnb, eBay, and OpenTable, and you will find Mesosphere.

The Future of the Datacenter is Aggregation (not Virtualization)

Ten years ago, virtual machines (VMs) revolutionized the datacenter. This was because, while servers were getting bigger and bigger, the apps running on them pretty much stayed the same size. To make better use of those large servers, it made sense to virtualize the machines so that you could run multiple applications on the same machine at the same time.

Today, aggregation is fomenting a similar revolution, because applications no longer fit on single machines. In today’s world, applications run at a much larger scale (millions of users, billions of data points, and in real time) and are essentially large-scale distributed systems, composed of dozens (or even thousands) of services running across all the machines (virtual and physical) in the datacenter. In this world, you want to stitch together all of the resources on those machines into one common pool from which all the applications and services can draw.
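
To make the aggregation idea concrete, here is a minimal Python sketch of a resource pool that places tasks against a fleet of machines as if they were one giant server. It is a toy model in the spirit of Mesos-style scheduling, not the Apache Mesos API; the names and the first-fit policy are invented for the example.

```python
# Minimal sketch of aggregation: pool the CPU and memory of many machines
# and place application tasks against the pool as if it were one giant
# server. A toy model in the spirit of Mesos-style scheduling, not the
# Apache Mesos API.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    cpus: float
    mem_gb: float


class Cluster:
    """Treats a fleet of nodes as a single pool of CPU and memory."""

    def __init__(self, nodes):
        self.nodes = nodes

    def total(self):
        # The whole datacenter, seen as one machine.
        return (sum(n.cpus for n in self.nodes),
                sum(n.mem_gb for n in self.nodes))

    def place(self, task_name, cpus, mem_gb):
        # First-fit placement: the application asks the *pool* for resources
        # and never needs to know which machine it lands on.
        for node in self.nodes:
            if node.cpus >= cpus and node.mem_gb >= mem_gb:
                node.cpus -= cpus
                node.mem_gb -= mem_gb
                return f"{task_name} -> {node.name}"
        raise RuntimeError("resource pool exhausted")


cluster = Cluster([Node("rack1-a", 32, 128), Node("rack1-b", 32, 128)])
print(cluster.total())                       # (64, 256)
print(cluster.place("web-frontend", 4, 8))   # scheduler picks the node
```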

Aggregation has proven itself in the A-list of hyperscale companies, like Google and Twitter. They’ve demonstrated that it’s much more efficient to aggregate machines—pooling all of their resources—and then build applications against the datacenter behaving as a single machine.

Aggregation, and the tools to manage it at scale, is what Mesosphere is bringing to everybody, and it’s what we believe the future of the datacenter looks like.

The companies that buy into this architecture do not abandon virtualization, containers, or other approaches; these remain important infrastructure components. But the way they manage their entire datacenter will evolve beyond the duct-tape-and-band-aid, highly manual approach of scripting IT operations tasks and “recipes” and reconfiguring dependencies each time a new application is brought online or a server goes down.

Mesos: From UC Berkeley to Reality

In 2009, Mesosphere co-founder Florian Leibert was working at Twitter to scale its application in response to exponential growth. At the time, he spotted a new open source technology built at UC Berkeley called Mesos, and he helped Twitter bring it into full production.

Today, almost all of Twitter’s infrastructure is built on top of Mesos, which is now an Apache open source project and is at the core of Mesosphere’s products. The Mesosphere stack, which includes Apache Mesos, is not a hypothetical technology. It’s highly mature and battle-tested, in large-scale production, running in both private datacenters and in public cloud environments. Other organizations using Mesos include: Hubspot, Airbnb, Atlassian, eBay, OpenTable, PayPal, Shopify, and Netflix.

Mesosphere is harnessing the core open source technology of Apache Mesos and making it possible for everyone to tap into its power. By building an entire ecosystem around Mesos, the company is making it easy to install, operate, and manage. For developers, Mesosphere provides simple command-line and API access to compute clusters for deploying and scaling applications, without relying on IT operations. For IT operations, Mesos abstracts away the most difficult low-level tasks of deploying and managing services, virtual machines, and containers in scale-out cloud and datacenter environments, and provides true automation, fault tolerance, and high server utilization for modern scale requirements. Finally, Mesos allows applications to move between different environments without any change to the application.
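
As a rough illustration of that developer workflow, the sketch below deploys and scales a hypothetical app through Marathon, the scheduler in the Mesosphere stack, via its v2 REST API. The endpoint URL and app definition are assumptions for this example; consult your Marathon version’s documentation for the exact fields.

```python
# Rough illustration of the developer workflow described above: deploy and
# scale an app against a Mesos cluster through Marathon (the scheduler in
# the Mesosphere stack) via its v2 REST API, with no IT operations in the
# loop. The endpoint URL and the app definition are assumptions for this
# example; check your Marathon version's documentation for exact fields.
import requests

MARATHON = "http://localhost:8080"   # assumed Marathon endpoint

app = {
    "id": "/hello-web",                       # hypothetical app name
    "cmd": "python3 -m http.server $PORT",    # command the cluster runs
    "cpus": 0.25,
    "mem": 128,
    "instances": 3,                           # spread across the pool
}

# Create the application.
resp = requests.post(f"{MARATHON}/v2/apps", json=app)
resp.raise_for_status()

# Scale out later by updating the instance count; no server names involved.
requests.put(f"{MARATHON}/v2/apps/hello-web", json={"instances": 10})
```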

Mesosphere will help define the next generation datacenter. I am honored to be joining the board of a team of dedicated system-level software engineers who will change the face of enterprise computing.

The SaaS Manifesto has examined the rise of the departmental user as a major influencer of enterprise buying decisions, and how building a real sales team is integral to the process. In this final installment, the last critical consideration is the need to rethink the procurement process for SaaS.

The corporate one-size-fits-all process for procuring applications is broken. Many companies adopting SaaS are still using 85-page perpetual license agreements that were written years ago and designed for on-premise software purchases. It’s no wonder that SaaS companies want to avoid central IT and purchasing like the plague! Fortunately, the solution is simple: Every company needs to adopt a SaaS policy and treat SaaS software purchasing in an entirely different fashion from on-premise practices.

The perpetual license and SaaS don’t mix

Perpetual licenses made sense when companies invested millions of dollars in on-premise software, services and corresponding infrastructure. This resulted in procurement and legal teams requiring a slew of contractual terms, including indemnification and liability limits, future updates and releases, maintenance fees, perpetual use, termination and continued use, future software modules, infrastructure procurement, beta test criteria, deployment roll-out and many others.

The use of existing procurement practices makes no sense for SaaS and results in a sluggish and frustrating experience for everyone involved: frustrating for the departmental user who wants fast access to much-needed business capability; frustrating for the procurement and legal groups who spend an inordinate amount of time negotiating useless terms; frustrating for the SaaS provider who is trying to go around the entire process; and frustrating for the CIO who fears rogue deployment of SaaS apps and their impact on security and compliance.

Like oil and water, the two practices don’t mix, and it’s time to change the game and create an effective three-way partnership between the user, the SaaS provider and the CIO.

Creating an “express lane” SaaS policy

CIOs can enable rapid adoption of SaaS by creating a repeatable, streamlined purchasing process that is significantly faster than the traditional approach. SaaS policies and procurement checklists should reflect this, and CIOs can help key business stakeholders identify and map their requirements to the available SaaS offerings and ensure that appropriate due diligence is done.

A starting point should be the creation of different policies and procurement processes for mission-critical and departmental applications. Mission-critical applications still require 24×7 support, high-availability service level agreements, data security and disaster recovery considerations. Because these applications may directly impact revenue or customers, substantial up-front diligence is required. Nonetheless, many of the old on-premise contract requirements are irrelevant here, since there is no infrastructure, no maintenance, no perpetuity, etc.

Aside from mission-critical SaaS apps, 80-90% of all SaaS solutions are departmental apps and these are the applications that should have an “express lane” for procurement. Imagine a streamlined process for most SaaS purchases that enables fast, easy and safe deployment. Everyone wins and everyone is thrilled at the result.

The Express Lane checklist for departmental apps

The following checklist is a framework for establishing the Express Lane:

  • Data ownership and management – Clearly define that all data is owned by the company, not the SaaS provider, and can be pulled out of the provider’s system at any time.
  • Security – The ability to control the level of access by the SaaS provider and the company’s employees, as well as to shut off access when an employee leaves the company.
  • Dedicated support and escalation path – Access to a fully staffed support team, but probably does not need to be 24×7.
  • Reliability – Set a baseline for the provider’s historical reliability, including lack of downtime and data loss.
  • Disaster recovery processes – Internal plan to minimize information loss if the vendor goes out of business or violates the user contract.

Applying a framework creates a partnership with all stakeholders and has the following benefits:

  • Cost savings – Save money both on the company resources needed to take providers through the approval process and on the up-front overhead of long-term contract commitments.
  • Scalability – Ability to use a SaaS solution for a specific segment of a business or across an entire enterprise.
  • Efficiency – Streamlined process with legal and billing.
  • Flexibility – A month-to-month contract, instead of an SLA, means a company is not locked into a long-term commitment.
  • Eliminates rogue solutions – IT gains greater control by eliminating the practice of employees signing up for services on their personal credit cards and expensing them.

Ending the use of perpetual license practices for SaaS applications will result in much better alignment between the SaaS supplier, the internal user and the CIO. And the creation of an Express Lane for the vast majority of departmental apps will enable rapid adoption of business applications by the departmental user, while giving IT oversight in the rapidly evolving world of SaaS.

The SaaS revolution has not only changed the way products are delivered and developed, but also the way products are brought to market. SaaS and the departmentalization of IT recognize the line-of-business buyer, and new sales approaches target that buyer with a combination of freemium and inside sales. Couple this with a streamlined process for procuring SaaS and we create the perfect storm for a massive transformation in application deployment and productivity.

Note: Part 3 of “The SaaS Manifesto” first ran in The Wall Street Journal‘s CIO Journal.

It may sound a bit ungrateful, especially coming from someone who invests in these things, but many early SaaS companies in many ways have been successful in spite of themselves. SaaS customers have had their pick of great software products, all available from the cloud, and without the long, tortured installation efforts of previous generations of software. On the back of these frictionless software deals, SaaS companies have been growing like mad, and often without any formal sales effort. But if they haven’t already, these up-and-to-the-right companies are about to hit a wall. The reason is that early deployments and usage do not necessarily translate into sustainable revenue growth.

In order for SaaS businesses to really scale and reach their full potential as industry leaders, they need a real and robust sales effort. That’s right, you need to build a sales team.

It won’t be easy. I won’t pretend that it is. But scaling sales, while expensive and culturally challenging to implement, changes the size of the potential opportunity. The big opportunity for SaaS companies is to drive adoption across the whole organization, which requires a centralized effort to redesign corporate processes, facilitate training and manage customer success. This is especially the case with tools that work best when used by everyone at a company, like CRM, human resources or accounting.

Freemium is only part of the story

Before we dive in, there’s one thing I have to set straight: Freemium is a fantastic starting point for SaaS, but freemium is not the same as building a sales organization. Freemium is a product and marketing strategy designed to generate a massive base of users, which can be approached for a future sale. Freemium is all about seeding the market and establishing a platform for building a winning offering. The best SaaS companies use their free product to iterate and improve their offering with data and feedback. But even with an effective freemium go-to-market strategy, SaaS companies still need to think about augmenting with a sales organization. Start with freemium, but don’t end there.

The evolving role of the CIO

The CIO’s role is evolving: for most SaaS applications, the department will drive the purchase. This is different from past generations of software, where on-premise installations and routine software upgrades required the CIO to hand-hold every buying decision. In a sense, SaaS has liberated the CIO to focus on longer-term strategic business issues, rather than worry about the next Oracle or SAP upgrade. The CIO will still influence security, support and data protection policies, so understanding these up-front becomes a key part of the selling process.

The balance of influence between the departmental buyer and the CIO differs by application as well as by company—the more mission critical, secure, and integrated, the larger the CIO role. For infrastructure purchases, as an example, the CIO continues to be highly involved in the purchasing decision.

A framework for an effective SaaS sales organization

Designing a sales and marketing function targeted at the departmental buyer is key to creating long-term competitive advantage. I’ve seen many early SaaS companies reluctantly stumble into half-baked sales efforts, only to find a flattening in revenue and customer engagement.

To convince the skeptics, I’ve asked Dan Shapero at LinkedIn, one of the most successful SaaS companies of our time, to weigh in. Dan is the VP of Talent and Insights at LinkedIn and runs a 1,200-person sales organization. While most people think that LinkedIn sells itself with a great product and no sales effort, nothing could be further from the truth. Here’s the framework that LinkedIn has developed and applies:

Organize around the buyer. LinkedIn has multiple business lines that work with three different corporate functions: talent, marketing and sales. These departments typically make discrete decisions, with independent budgets, so LinkedIn has different teams that focus on partnering with each function.

Distinguish between new account acquisition and account success. Two of the most important lessons at LinkedIn have been (1) successful clients buy more over time and (2) the process and expertise required to acquire a new customer is very different from nurturing that customer. As a result, there are separate and distinct teams, sales processes and measures of success for managing new and existing customers.

Land, then expand. With SaaS, customers can purchase on a small scale before going all in. Rather than focusing on landing huge deals, LinkedIn has been better served by acquiring many smaller-scale deals at clients with huge long-term potential. Though smaller at the outset, successful initial deployments often result in tremendous upside in the second, third, and fourth years of a client’s tenure. Expanding in a SaaS/freemium model is particularly effective because you not only can demonstrate success, but you can also pinpoint and size future demand based on who is using the technology for free.

Leverage inside sales for the mid-market. The creation of a robust, inside sales organization to serve clients over the phone, from regional hubs around the world, is a critical part of LinkedIn’s successful SaaS franchise. Inside sales reps close their own business and manage their own territory. The SaaS model enables the client to be engaged, sold, provisioned and serviced in a highly scalable way, without the need for an in-person visit.

Monitor customer engagement. SaaS provides incredible transparency into how actively engaged customers are with the product. Understanding where usage is strong and weak across customers allows LinkedIn to improve the customer experience by deploying training resources proactively, offering targeted advice on best practices, and improving the product roadmap. Most enterprise vendors are flying blind when it comes to understanding the success of their customers, while SaaS companies have a fundamental information advantage.

When a company is in the throes of viral adoption, building out a sales organization is not immediately intuitive. Right now, those of you at rapid-growth SaaS companies might still be thinking, “Not us, we’ll just keep booking those inbound leads.” You’ll keep thinking that until the inbound stops. The paradox of great SaaS companies is that the more successful a SaaS company is with early deployments, the more challenging it becomes for that organization to recognize and embrace building a formal sales organization to address the needs of the enterprise buyer. That’s why I want every SaaS company to consider the SaaS Manifesto a call to arms. We are at a point in the maturity of SaaS where a mature sales effort matters.

Up next: The SaaS Manifesto: Part 3 – the requirements for enterprise-wide SaaS adoption and deployment 

Note: Part 2 of “The SaaS Manifesto” first ran in The Wall Street Journal‘s CIO Journal. You can read Part 1 here.