X-IO Storage and adjustable data de-dupe in All Flash

05/07/2018

Summary: Chat with Bill Miller, CEO of XIO, about how they let you tune the data reduction functionality so you can turn it off for volumes of data that aren't very reducible. That saves a lot of overhead, especially when data reduction already occurs further up the stack at the application layer. Just one of the features covered in this video short.

Transcript:

Mike Matchett:                  Hi. I'm Mike Matchett with Small World Big Data, and today we're going to talk about all-flash. The all-flash market: what's happening, is it becoming commoditized, and what should you be looking out for if you're in the market for all-flash storage solutions? I'm going to have on Bill Miller, who's the CEO of XIO, and he's going to talk us through their perspective on the all-flash market.

                                                      Before I get into that, though, I just want to say a few things about what we're seeing in the storage industry. Obviously, all-flash is becoming more popular. It's dropping in price. People are considering all-flash data centers, for example. On the other end, we're also seeing hybrid arrays, which can take advantage of multiple tiers of storage, benefiting, so there's a natural tension there. But generally, things are moving toward a solid state world, and there are a lot of questions you have to ask about what solid state can really do for you. Is it really prime time to replace everything in your data center?

                                                      With that, let me introduce Bill Miller. Hi, Bill.

Bill Miller:                              Hi, Mike.

Mike Matchett:                  So tell us, just in a nutshell, a little bit about the G4 that you guys have just come out with. What does that really bring to market in the all-flash space, first?

Bill Miller:                              Yeah, so with our ISE G4 ... XIO has, of course, been in the data storage business for a long time, and G4 is the fourth generation. The fourth generation of the architecture, and especially of the code that runs these arrays. The earliest generations were really focused on making disk drives work a lot better. The company has roots inside of Seagate, so they really cared about how drives perform in arrays.

                                                      So for years that was really what XIO was known for, but there was a lot of really interesting code and IP in these arrays that applies to flash. We've done a complete rehash of it to make sure we kept the stuff that worked really well and applied it to flash, and we got rid of some of the stuff that was overhead and maybe didn't work so well in the new world.

                                                      So generation four was really focused on flash. There are some real benefits to the way we do data layout onto the media that have always been there. Even in the disk world it was focused mostly on performance, but it really works well with flash, providing a level of overprovisioning and wear leveling at the array level that makes sure you get good reliability out of the flash over its lifetime. You can get even greater performance out of the flash itself, and then we ...

                                                      You know, flash arrays needed deduplication and data reduction, so we've added data reduction into the code. We've added features that are really table stakes in this market, like snapshots and asynchronous replication. We completely redid our management and UI capability to simplify it and make it a modern, web services-based interface.

Mike Matchett:                  So yeah, you guys sound like ... I was going to ask you about data reduction and you got there. So you guys are all in on converting to all-flash. What's happened with flash in the last couple of years? It used to be pretty expensive and tony, and then companies like Pure and others came around and said, "No, we can try to convince you to do all-flash." You guys have now made the switch to all-flash as well. What's happening with the market there?

Bill Miller:                              Well, I think ... you know, the main thing is that flash arrays have now gotten to a place where, certainly when you apply data reduction and get reasonable data reduction ratios out of the data you're storing, they end up being cheaper than disk drives. And when you look at the simplicity of it ... when I talk to some of our customers about flash and their experience with it after years of managing disks, what they tell us is, it's just a lot easier to manage.

                                                      You don't really have to think at all about data placement on your arrays, you get roughly equal performance everywhere, and the reliability is greater. They tend not to fail as much, so you don't have to babysit your arrays much when you have flash arrays. So on a total cost of ownership basis, flash is simply cheaper now.
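
For a rough sense of that economics argument, here is a minimal back-of-the-envelope sketch; the per-GB prices and reduction ratios below are assumptions for illustration, not figures from the interview.

```python
# Back-of-the-envelope sketch: effective cost per usable GB once data
# reduction is applied. All prices and ratios are assumed for illustration.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per GB of logical capacity after data reduction."""
    return raw_cost_per_gb / reduction_ratio

FLASH_RAW = 0.25  # assumed $/GB of raw flash capacity
DISK_RAW = 0.08   # assumed $/GB of raw nearline disk capacity

for ratio in (1.0, 2.0, 3.0, 4.0):
    flash_eff = effective_cost_per_gb(FLASH_RAW, ratio)
    winner = "flash" if flash_eff <= DISK_RAW else "disk"
    print(f"{ratio:.0f}:1 reduction -> flash ${flash_eff:.3f}/GB vs disk ${DISK_RAW:.3f}/GB ({winner} cheaper)")
```

Under these assumed numbers, flash only undercuts disk on raw media cost once the reduction ratio is healthy, which is why "reasonable data reduction values" matter to the claim.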

Mike Matchett:                  Well, I guess that depends on who you're buying it from and which solution you're getting, I think. But your argument is, you can bring a lot of IP that you've had for a long time, tailor it for flash, and really make a very cost-efficient, table-stakes-equivalent flash array, like you said, compared to anybody else in the market, right?

Bill Miller:                              Yeah, that's right. I mean, the way we looked at the marketplace as we were bringing our ISE G4 900 series to market was that there are a bunch of vendors out there competing in the all-flash array market space. They're all relatively the same. They're substitutable. Customers were telling us that they're going to shop between vendors, and they'll probably buy arrays from more than one vendor because there really isn't a stickiness, switching cost or complexity issue with these things. They're easy to manage, so you can easily have two or three vendors in your shop, no problem. It doesn't create any additional cost for you, and it does drive your price down.

                                                      So we really focused on price, and I mentioned data reduction ... when our engineering team was looking at how best to do data reduction, we discovered something. In this way, I guess we had a bit of an advantage in coming a little late to this game, because we were able to look at how other people had done it, look at some fundamentals, and really come up with an invention. That invention around data reduction allows us to get the same results ... the same data reduction ratios and the same performance out of a data-reduced volume that others do, at a fraction of the cost. We get it at a fraction of the cost because we're able to do it with much less in the way of CPU and memory resources.

                                             So data reduction is very CPU- and memory-intensive the way others implement it. If you can come up with, as we did, a patent-pending invention in data reduction that uses only about 25% of those resources, and then amortize that over the cost of the flash in the array, you end up with a bill of materials cost that's only 60% to 70% of what it's costing others to build that same array. In a market that is very competitive and really becoming commoditized, where there's not a lot of stickiness or switching cost, price matters. So we use our cost advantage, pass it along to our customers, and give them a better price in the flash array market.
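
A quick sketch of that bill-of-materials arithmetic: the flash/controller cost split below is an assumption for illustration; only the roughly 25% resource figure comes from the interview.

```python
# Rough sketch of the BOM claim. The 50/50 flash vs. controller split is
# assumed; only the ~25% CPU/memory figure is quoted in the interview.

flash_share = 0.50       # assumed fraction of BOM spent on flash media
controller_share = 0.50  # assumed fraction spent on CPU/memory for data reduction
resource_factor = 0.25   # stated: data reduction using ~25% of the usual resources

relative_bom = flash_share + controller_share * resource_factor
print(f"Relative BOM cost vs. a conventional design: {relative_bom:.0%}")
# ~62%, which lands in the 60% to 70% range quoted above
```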

Mike Matchett:                  So this sounds like a great opportunity for you, but maybe not such good news for EMC and NetApp and those bigger guys that have been really making money pushing their larger, high-end, all-flash arrays, right? So you're-

Bill Miller:                              [crosstalk 00:06:32] and I think if you look at some of the sales and market share numbers from the past few quarters from the big vendors like EMC, you can see that they're losing a lot of ground on a revenue line in that market space, and some of that is because they're being forced to take lower margins and push the cost of those things down in a very commoditized, competitive market space.

Mike Matchett:                  Yeah, yeah, and I think sometimes the features those high-end arrays have ... like dedupe that has to be on for the entire array ... really constrain the kinds of workloads you can put on there. You guys have even thought about that and allow data reduction, as we were talking about, on a [inaudible 00:07:09] basis, so now you can say you can consolidate many different kinds of workloads onto a G4 kind of solution.

Bill Miller:                              Yeah, absolutely. Again, coming a little bit late to this game, most of the people who preceded us in bringing all-flash arrays with data reduction to market had the idea of data reduction all the time: all workloads, all data, all the time, the whole array reduced. One of the things we recognized is that in the very early days of data-reduced flash arrays, people liked to talk about VDI workloads that were getting very large data reduction numbers, like 15:1, and a lot of that was because early VDI software would take all of the code on every desktop and stick it out in the VDI environment as is. So they had a lot of replicated code, an entire copy of the operating system and the application environment, and everything for every desktop.

                                                      Then the VDI guys said, "We can fix that. We can go back and centralize some of those things that are multiple copies across all desktops and only store them once." And so, as a result, even for data-reduced arrays, more and more of the data reduction is being done up at the application software layers or infrastructure layers where [crosstalk 00:08:25].

Mike Matchett:                  Yeah, up the stack, yeah.

Bill Miller:                              ... up the stack, and so the data that gets down to the array may not be as deduplicatable as it was a few years ago. So we said, there is a cost. There's a penalty when you run data reduction all the time, and it's going to hurt your performance. You're using the performance of flash to prop up the performance of data reduction, which ends up not much better than raw disk arrays ... certainly not the ones we built, in terms of performance.

                                                      But why don't we let people turn it off? Why don't we have an alternative path through the code that avoids all that overhead of data reduction, because if they have certain data sets or certain very high-performance workloads and they just want that flash performance, let's let them do it by volume. So we allow them to have data-reduced volumes and raw volumes on the same array, and the raw volumes get about twice the IOPS of a data-reduced volume, and twice the IOPS of the guys who are doing all-flash arrays with data reduction all the time.
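
To make the per-volume idea concrete, here is a purely hypothetical sketch of what provisioning mixed volumes could look like. This is not X-IO's actual management API; the names and fields are invented to illustrate toggling data reduction per volume.

```python
# Hypothetical illustration of per-volume data reduction on one array.
# NOT X-IO's actual API; all names and fields here are invented for clarity.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Volume:
    name: str
    size_gb: int
    data_reduction: bool  # True: dedupe/compress path; False: raw path (~2x IOPS)

@dataclass
class FlashArray:
    volumes: List[Volume] = field(default_factory=list)

    def create_volume(self, name: str, size_gb: int, data_reduction: bool = True) -> Volume:
        vol = Volume(name, size_gb, data_reduction)
        self.volumes.append(vol)
        return vol

array = FlashArray()
array.create_volume("vdi-desktops", 8192, data_reduction=True)     # highly reducible data
array.create_volume("oltp-redo-logs", 1024, data_reduction=False)  # latency-critical, reduced upstream
for v in array.volumes:
    mode = "reduced" if v.data_reduction else "raw"
    print(f"{v.name}: {v.size_gb} GB, {mode}")
```

The design point being illustrated is simply that the reduction decision lives on the volume, not the array, so reducible and latency-critical workloads can share one box.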

Mike Matchett:                  So it sounds like there's a really compelling argument here to take a look around your storage environment and say, "I probably don't really need those more complex hybrid solutions, with tiering and decisions to be made about quality of service, and I probably don't need those really expensive all-flash arrays, because the high-end services are moving up the stack and there are definitely cost-efficient alternatives like XIO available that we can bring in side by side." And it's not a rip and replace. They can co-exist.

                                                      So it sounds like a pretty bright future. Just in the last few seconds here: where's XIO going next? NVMe or convergence? What's happening?

Bill Miller:                              Yeah, so both of those. So at XIO, we've been in the high reliability, high performance external data storage array business for a long time. Long before I got here. Great, great vendor in that marketplace, and certainly our ISE G4 900 series arrays are a great next step there. Beyond that, we're going to stay in that marketplace.

                                                      We have a roadmap there that will bring an NVMe array to market. I'm not willing to talk about the time frame quite yet, but what I will say is there's another [inaudible 00:10:34] that we've been going down here at XIO for the last couple of years, which is around edge computing: edge micro data centers, edge micro clouds. We see an emerging market opportunity for converged compute, compute offload and storage in single systems, where we're using our expertise in systems design to build what we call fabric express, a switched PCIe fabric that lets you put a lot of compute horsepower, a lot of compute offload horsepower and a lot of NVMe storage on one fabric in a very small container.

                                                      And those are being used for really interesting applications around real-time streaming data analytics and big data ... you know, taking hyper-scale concepts and big data analytics and collapsing them down into one node to make them either more deployable, or just less expensive and easier to manage. I like to think that hyper-scale data center architecture and cloud architecture are great for applications that interact with people, but in a world of autonomous machines, where suddenly sub-second response times are not good enough and you have to think about sub-microsecond response times for ingesting data, doing analytics against that data and generating responses those machines can use, you have to make that happen closer to where those machines are. You can't make it happen in some faraway [crosstalk 00:11:50].
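
For a rough sense of why distance matters at these response times, here is a small latency sketch; the distances and the roughly 200 km-per-millisecond figure for light in fiber are illustrative assumptions, not figures from the interview.

```python
# Rough speed-of-light sketch. Light in optical fiber covers roughly
# 200 km per millisecond one way; distances below are illustrative.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Minimum possible round-trip time over fiber, before any processing."""
    return 2 * distance_km / FIBER_KM_PER_MS

for label, km in [("edge, same site", 0.1),
                  ("metro data center", 100),
                  ("remote cloud region", 2000)]:
    print(f"{label:>20}: >= {round_trip_ms(km):.3f} ms round trip")
```

Even before any compute, a distant region costs tens of milliseconds of round trip, which is why microsecond-scale response loops have to happen near the machines generating the data.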

Mike Matchett:                  I mean, the speed of light still is a problem for us all, right? Can't solve that one.

Bill Miller:                              Yeah, [crosstalk 00:11:53] is fundamental, right? It's not going to change.

Mike Matchett:                  Not going to change.

Bill Miller:                              [crosstalk 00:11:57] get there and back, it's a problem.

Mike Matchett:                  It's a problem. I think that's all the time we have today, Bill. Thank you for being on here. Thanks for explaining all this flash market stuff to us.

Bill Miller:                              Thanks, Mike. Appreciate it. Appreciate the opportunity.

Mike Matchett:                  And thank you for watching. This is Mike Matchett with Small World Big Data, and I can't wait to have XIO come back and tell us more about bouquet filters and edge convergence and some of the new things they're working on when they're ready. Take care, guys. Thanks.

Bill Miller:                              Looking forward to it. Thanks, Mike.
