Some observers have pointed to the technical challenges of keeping data in sync over large distances. Has EMC solved that problem?
We point to two bodies of work that we believe have enabled us to come up with this technology to solve the problem. One is the deep analysis we have done, particularly as part of our Symmetrix product, where we have built the world's largest embedded cache environment over many years. We've gained a lot of insight into workloads: what caches well, what doesn't, and how to do that kind of work. We also acquired technology, originally from YottaYotta, that was a distributed cache protocol with coherence algorithms for distributed data.
By bringing those two bodies of expertise together, we think we've actually been able to solve the problem of global federation: being able to cache data over distance.
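The coherence idea mentioned above can be illustrated with a minimal sketch. This is a generic, textbook-style invalidation scheme, not EMC's or YottaYotta's actual protocol; the `Directory` and `Site` names and the single shared backing store are assumptions for illustration only:

```python
# Illustrative sketch of directory-based, invalidation-style cache
# coherence across distributed sites. NOT the actual EMC/YottaYotta
# protocol; a generic scheme assumed for demonstration.

class Directory:
    """Tracks which sites hold a cached copy of each block."""
    def __init__(self):
        self.holders = {}  # block id -> set of site ids with a copy

    def register(self, block, site_id):
        self.holders.setdefault(block, set()).add(site_id)

    def invalidate_others(self, block, writer_id, sites):
        # On a write, drop every remote copy so no site serves stale data.
        for sid in self.holders.get(block, set()) - {writer_id}:
            sites[sid].cache.pop(block, None)
        self.holders[block] = {writer_id}

class Site:
    """One location with a local cache over a shared backing store."""
    def __init__(self, site_id, directory, backing):
        self.site_id, self.directory = site_id, directory
        self.backing = backing  # shared backing store (a dict here)
        self.cache = {}         # local cached copies

    def read(self, block):
        if block not in self.cache:  # miss: fetch and register the copy
            self.cache[block] = self.backing[block]
            self.directory.register(block, self.site_id)
        return self.cache[block]

    def write(self, block, value, sites):
        self.backing[block] = value
        self.cache[block] = value
        self.directory.register(block, self.site_id)
        self.directory.invalidate_others(block, self.site_id, sites)
```

After a write at one site, a subsequent read at any other site misses locally and re-fetches the current value, which is the coherence property a protocol like this has to guarantee over distance.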
Now, having been a cache designer in my own career, starting with the 486 processor cache that I personally designed, you realize that not everything works in a cache. There are sub-workloads that don't cache well. However, you find that most things actually work pretty well. As we say, most things follow the 80-20 rule: a small portion of the data gets used most of the time. As long as you live within the limits of the bandwidth you acquire and make available, we're quite convinced this is a very significant new technology, maybe even a breakthrough, and a new capability that differentiates this storage vision from anything we've seen before.
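The 80-20 effect described above is easy to demonstrate with a small simulation. This sketch assumes a Zipf-like skewed access pattern and an LRU eviction policy, both my own illustrative choices rather than anything stated in the interview; a cache holding only 10% of the blocks ends up absorbing the majority of reads:

```python
# Small simulation of the "80-20" cache effect: under a skewed
# (Zipf-like) access distribution, a cache sized at 10% of the data
# captures most accesses. Workload shape and LRU policy are assumptions.
import random
from collections import OrderedDict

def lru_hit_rate(n_blocks=1000, cache_size=100, n_accesses=10_000, seed=42):
    rng = random.Random(seed)
    # Zipf-like skew: block at rank k is accessed with weight 1/k.
    weights = [1 / rank for rank in range(1, n_blocks + 1)]
    cache, hits = OrderedDict(), 0
    for block in rng.choices(range(n_blocks), weights=weights, k=n_accesses):
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / n_accesses
```

With these parameters the hit rate comes out well above the 10% cache-to-data ratio, which is the point being made: a little bit of data gets used a lot, so caching it pays off even over distance, provided the bandwidth budget holds.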
How close is EMC to making a public demonstration of this technology?
We wouldn't have talked about it if it weren't well underway. Watch this space, and hopefully you'll see more of the specifics at EMC World in May. As I indicated at the briefing we gave, we're already engaged with customers who are experimenting with this and doing proofs of concept with some of the early appliances we're building in this space.
The technology will first hit the market as an appliance rather than a capability built into an existing product?
To get it right, the first version is an appliance, because we're working out a new algorithm involving caches, memories, traffic and bandwidth. However, the core technology is a software protocol that, as I indicated in the briefing, we'll embed in some of our arrays, and we'll potentially have other software versions of the product available over time as well. Generations one and, probably, two will be an appliance. Over the roadmap, you'll see us build this into some of our other storage arrays and perhaps find other ways of productizing it.