Tag Archives: Storage

I was introduced to Paula Long, the CEO of DataGravity, about the same time I arrived at a16z (nearly four years ago). Every time a new storage deal was pitched to us, I would call Paula to get her thoughts. Given my own background in storage and systems software, I was blown away by Paula's depth of knowledge in the space. Not only did she articulate every technical nuance of the projects we discussed, she had an uncanny feel for what was likely to happen in the future.

Paula casually rattled off every company doing similar things, along with the price and performance of solid-state storage, file systems, volume managers, device drivers, block interfaces, metadata, NAS, SAN, objects, and security. It was enough to make my head spin, yet she analyzed every situation with a clarity I had never seen before. I had known Paula as the founder of EqualLogic (her prior storage company, acquired by Dell for $1.4 billion in 2008), but her insight and wisdom about everything storage far exceeded that of anyone I had met. When she came to me with her own ideas for a new storage company, there was no hesitation: betting on Paula would result in something really special. In December 2012, we invested in DataGravity.

When we talked about DataGravity in those days, Paula would tell me how the real future of storage was unlocking the information residing in the gazillions of files and terabytes of unstructured data that organizations store but never use. She argued that most other storage companies were in a race to zero, chasing the faster-and-cheaper angle with their solid-state storage and incremental innovation. “Table stakes,” she would say. “DataGravity is going to do something never done before. We are going to unlock the value of storage. Storage is the obvious place for intelligence to be surfaced.” This all sounded great, but – even with my background in the space – I never fully appreciated what Paula had envisioned. She had a secret.

Today, DataGravity is unveiling the world’s first data-aware storage system. The system is quite simply revolutionary. We saw a demonstration of the system’s capability at a board meeting a few months ago, and that is when it all came together for me. This was not some incremental system that everyone else was building, but an entirely new way of managing storage and information. I left the board meeting thinking that all storage systems in the future would have elements of the DataGravity concepts. It was truly new thinking.

The secret sauce DataGravity brings to the market is making dumb storage smart, all in a single system. DataGravity is both a primary storage array and an analytics system combined into one. The combination carries no performance or operational penalty, which means, for the first time, organizations can use their primary storage for file storage, IT operations, AND analytics at the point of storage. “Data-aware” means the system indexes data and extracts intelligence in the write path, before the bits are even committed to disk. Instead of maintaining dedicated and expensive secondary systems for analytics, operations and data analysis, DataGravity does it all in one place.
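
To make “data-aware” concrete, here is a minimal sketch of what an index-on-write storage path might look like. This is purely illustrative Python with invented names; DataGravity has not published its internals, and the real system operates at the array level, not in a toy class. The point is only that the index and activity trail are built in the same operation that persists the data, so analytics questions can be answered by primary storage directly.

    # Toy sketch of a "data-aware" write path: the same operation that
    # persists the data also builds a content index and an activity trail.
    # All names here are hypothetical, not DataGravity's API.
    import re
    import time
    from collections import defaultdict

    class DataAwareStore:
        def __init__(self):
            self.blocks = {}                    # path -> bytes (primary storage)
            self.term_index = defaultdict(set)  # term -> paths containing it
            self.activity = []                  # (timestamp, user, path) trail

        def write(self, path, data, user):
            """Persist data AND extract intelligence at the point of storage."""
            self.blocks[path] = data            # the traditional "dumb" part
            text = data.decode("utf-8", "ignore")
            for term in re.findall(r"\w+", text):
                self.term_index[term.lower()].add(path)      # inline indexing
            self.activity.append((time.time(), user, path))  # who wrote what, when

        def search(self, term):
            """Answer a content query from primary storage, no second system."""
            return self.term_index[term.lower()]

    store = DataAwareStore()
    store.write("/finance/q3.txt", b"quarterly revenue forecast", user="paula")
    print(store.search("revenue"))  # {'/finance/q3.txt'}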

DataGravity is about to change the way we think about storage. From the demographics of data, to data security, to search and trend information, the system will unlock an entire class of capabilities that we have not yet begun to comprehend. For example, imagine knowing when a file is being written or corrupted, before it is accessed. Or being able to identify subject-matter experts in an organization based on who is writing the most content on what, and when. Or determining data ownership and control and correlating it with active or inactive employees. All this from a “storage” system.
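
The subject-matter-expert example, for instance, reduces to a simple aggregation over the kind of write-time metadata sketched above. A toy illustration, again with invented sample data rather than anything DataGravity has described:

    # Toy illustration: rank likely subject-matter experts by who writes
    # the most content on a topic. Events are invented sample data of the
    # kind an index-on-write system could collect.
    from collections import Counter

    write_events = [                   # (user, path, terms seen at write time)
        ("amit", "/eng/raid.md",    {"raid", "parity"}),
        ("amit", "/eng/rebuild.md", {"raid", "rebuild"}),
        ("joan", "/hr/policy.md",   {"vacation"}),
    ]

    def experts_on(topic):
        """Count write events per user that touched the given topic."""
        return Counter(user for user, _, terms in write_events if topic in terms)

    print(experts_on("raid").most_common(1))  # [('amit', 2)]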

So here we are today at an amazing inflection point in the history of storage. Twenty years from now, we’ll look back at this day as the day storage went from being dumb to being smart. The day that transformed the way the world stores its information. Just as Paula predicted, and just as Paula knew.

One of the holy grails in the storage market has been to deliver a piece of software that could eliminate the need for an external storage array.  The software would provide all the capabilities of an enterprise-class storage device, install on commodity servers alongside applications, eliminate the need for a storage network, and provide shared storage semantics, high availability, and scale-out. With Maxta, the search for such a holy grail ends here.

The external storage array and associated storage network have been a staple of enterprise computing for several decades.  Innovations in storage have been all about making the external storage array faster and more reliable.  Even with all the recent excitement of flash replacing spinning disk, the entire focus of the $30B storage market has been around incrementally improving the external array.   Incrementalism as opposed to (literally) thinking outside the box.

Maxta is announcing a revolutionary shift in storage.  Not only are storage arrays and networks eliminated, but, as a result, compute and storage are co-located.  This convergence keeps the application close to its data, improving performance, reliability, and simplicity.  A layer of software to replace a storage array sounds too good to be true, except Maxta has paying customers and production deployments, and has delivered several releases of their software prior to today’s announcement.
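
As a rough illustration of the concept (and emphatically not Maxta's actual implementation, which is proprietary), a hyperconverged storage layer runs on every commodity server and replicates each write to peer nodes; that is how a pool of ordinary boxes can stand in for a highly available shared array:

    # Toy model of hyperconverged storage: every commodity server runs both
    # the application and a storage service, and each write is replicated to
    # peers so the pool survives a node failure. Purely illustrative; this is
    # not Maxta's design, which is not public at this level of detail.
    class Node:
        def __init__(self, name):
            self.name = name
            self.local_disk = {}  # key -> value, on this server's own drives

    class HyperconvergedPool:
        def __init__(self, nodes, copies=2):
            self.nodes = nodes
            self.copies = copies  # replicas kept of every write

        def write(self, key, value):
            # Place replicas on distinct nodes via simple hash-based placement.
            start = hash(key) % len(self.nodes)
            for i in range(self.copies):
                self.nodes[(start + i) % len(self.nodes)].local_disk[key] = value

        def read(self, key):
            # Any surviving replica can serve the read (high availability).
            for node in self.nodes:
                if key in node.local_disk:
                    return node.local_disk[key]
            raise KeyError(key)

    pool = HyperconvergedPool([Node("srv1"), Node("srv2"), Node("srv3")])
    pool.write("vm-image-42", b"disk image bytes")
    pool.nodes[hash("vm-image-42") % 3].local_disk.clear()  # simulate node loss
    print(pool.read("vm-image-42"))  # still served from the surviving replica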

Maxta would not be possible without CEO Yoram Novick, who is a world-class expert in storage software and data center design.  Yoram holds 25 patents and was previously CEO of Topio, a successful storage company that was acquired by NTAP several years ago.  He’s a storage software genius, with a penchant for engineering design and feature completeness as opposed to fluffy marketing announcements and future promises.  He’s the real deal and a true storage geek at heart.

When I met Yoram several years ago, he came to us with a radical idea for a software layer that would change the storage landscape: leverage commodity components and put all the hard stuff in software.  Within minutes, we decided to invest and we haven't looked back since.  We are thrilled to be working with Yoram and team as they use software to deliver one of the holy grails of the storage market.

With all the recent innovations in flash storage design, you’d think we’d have a smooth path toward supporting storage requirements for new hyper-scale datacenters and cloud computing. However, nothing could be further from the truth! Existing storage architectures, despite taking advantage of flash, are doomed in the hyper-scale world. Simply put, storage has not evolved in 30 years, resulting in a huge disconnect between the requirements of the new datacenter and the capability of existing storage systems.

There are two fundamental problems right now: 1) existing storage does not scale for the hyper-scale datacenter and 2) traditional storage stacks have not been architected to take advantage of the recent innovations in flash.

Current storage systems don't scale because they were designed in the mainframe era. Mainframe-style arrays came from a world where a single mainframe provided the compute and a handful of storage arrays hung off it to hold the data. That one-to-one architecture persists today, even as the compute side of the datacenter expands to hundreds or thousands of individual servers and enterprise datacenters start to resemble Google's or Amazon's. As you can imagine, you get theoretically unlimited compute capacity only to be severely bottlenecked on the storage end of things.
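
A back-of-envelope calculation shows the mismatch. Every number below is an assumption chosen for illustration, not a measurement or vendor figure:

    # Back-of-envelope arithmetic on the scale mismatch. All numbers are
    # assumptions chosen for illustration, not measurements or vendor specs.
    servers         = 1000     # compute nodes in a scaled-out datacenter
    iops_per_server = 2000     # assumed I/O demand of one busy server
    array_iops      = 500_000  # assumed ceiling of one monolithic array

    demand = servers * iops_per_server  # 2,000,000 IOPS of aggregate demand
    print(f"demand {demand:,} IOPS vs. array ceiling {array_iops:,} IOPS")
    print(f"oversubscribed {demand / array_iops:.0f}x")
    # Compute scales linearly as servers are added, but every server funnels
    # I/O into the same array: the storage tier becomes the bottleneck.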

Furthermore, while flash storage has become the hot new thing—super fast, energy efficient, with a smaller form factor—the other internal parts of the storage subsystem have not changed at all. Adding flash to an outdated system is like adding a jet engine to the Wright Brothers’ airplane: pretty much doomed to fail, despite the hype.

This brings me to Coho Data (formerly known as Convergent.io) and a team I’ve worked closely with for years. The founding team includes Ramana Jonnala, Andy Warfield and Keir Fraser, superb product visionaries and architects, with deep domain expertise in virtualization and systems, having built the XenSource open source virtualization stack and scaled it to support some of the biggest public clouds around. This team has built infrastructure software that has been used by hundreds of millions of end users and installed on millions of machines. By applying their expertise and adding key talent with network virtualization experience to the team, they are challenging the fundamentals of storage.

The team has spent the year since we funded their Series A heads-down, building product and piloting with customers. So I'm really excited to share that today Coho Data is announcing a revolutionary storage design, built from the inside out to challenge how companies of all sizes store and deliver access to their data. The team has rebuilt the entire storage array with new software and integrated networking to offer the fastest, most scalable storage system on the market, effectively turning the Wright Brothers' airframe into an F-16 fighter jet. The Coho DataStream architecture supports the most demanding hyper-scale environments while optimizing for commodity flash, all with standard, extensible interfaces and simple integration. As hyper-scale datacenters become the new standard, monolithic storage arrays will go the way of the mainframe.

Coho Data is changing the storage landscape from the inside out and I could not be more thrilled to be part of the most exciting storage company of the cloud generation.

DataGravity is poised to transform the storage landscape. The company represents a once-in-a-decade opportunity to create an entirely new category of storage by unlocking the value of data that today sits idle in a storage system. I call the category “Storage Intelligence” and the transformation will be profound.

The story starts with DataGravity’s incredible founding team: Paula Long and John Joseph. Paula is a technical visionary in the storage world and was the co-founder of EqualLogic, a storage company acquired by Dell in 2008 for $1.4 billion. John was also an early member of the EqualLogic team and brings great talent in sales, marketing and operations. Unsatisfied with the pace of innovation in the storage world, Paula and John have teamed up again to royally disrupt the staid storage industry.

DataGravity’s focus on storage intelligence highlights entirely new thinking in storage innovation.   We’ve seen hundreds of new storage companies in the past few years and most have followed the well-worn path of incremental feature development, focusing on storing dumb bits of data at lower cost with faster access. Interesting and incremental—hardly transformative. A race to zero does not make for a killer new category.

Unlocking the next generation of storage requires looking at stored data not as a dumb repository of expensive bits, but as the foundation for usable, intelligent information. We've overlooked data as a true asset of the business: we store it away without giving any thought to what it says about our business, our customers and our users. The DataGravity team will take what is today an idle operating expense and convert it into near-instant business value.

We've only begun to see the explosion of data and its value to businesses of all sizes. DataGravity's mission of turning dumb storage into meaningful information will give new purpose to storage infrastructure. As data centers evolve and information becomes central to organizations' competitive advantage, DataGravity will fill a need that goes far beyond today's storage developments.

I am pleased to be joining the board of DataGravity and working with the team that is going to transform storage.