In this brief video, Wayne Lam, CEO of Cirrus Data Solutions, and I discuss the challenges of migrating block-level data, non-disruptively, to a local or remote target (heterogeneous or otherwise). Nearly everyone will face this complex problem at some point in their career. Part 2 of a 4-part series.
Transcript:
David Littman: Hi, Dave Littman, Truth in IT. I am joined again by Wayne Lam. Wayne is CEO of Cirrus Data. Wayne, welcome.
Wayne Lam: Thank you, David.
David Littman: All right, so hey, listen, Cirrus Data focuses on data migration of block-level storage. Wayne is going to go through some of the problems that many of his clients have faced. This video is going to be maybe seven to 10 minutes, and then we're going to have a second video that talks about their solutions specifically. In the interest of time, Wayne, I'll let you take it away.
Wayne Lam: Thank you. Let's get started. Okay, so first, let me give a brief introduction of Cirrus Data. We've been around for about four or five years now. We started in 2011, and in the past three or four years we have been very successful in marketing a data migration product.
It is enabled by a number of patents that allow us to simplify the entire migration process from end-to-end so there is no downtime. Now, why is that important? Let's look at how normally other products do the job of storage migration.
So first, the host-based approach. You go to each host, install the migration software, and then the host reads from the old storage and writes to the new storage. But this is pretty problematic if you consider two factors. Number one, you have to go to each host, whether it's Linux, Windows, Unix, and who knows what else, and make sure that your software runs on each of them. Right? So, that's a problem.
The second, and more serious, problem is that while it's reading and writing, it pulls all the data up into the host and pushes it all back down. What is that going to do to the load on a server that may already be challenged for resources? Right? These are the problems, so we want to avoid that, okay?
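The host-based data path Wayne describes can be sketched in shell. In a real migration the source and destination would be block devices (e.g. `/dev/mapper/...` paths); the plain files used here are stand-ins for the LUNs, not anything Cirrus-specific:

```shell
#!/bin/sh
# Sketch of host-based migration: every block is read up into the host
# and written back out, consuming host CPU, memory, and I/O bandwidth.
# Plain files stand in for LUNs so this sketch is safe to run.
set -e
SRC=/tmp/old_lun.img   # stand-in for a LUN on the old array
DST=/tmp/new_lun.img   # stand-in for a LUN on the new array

# Create a 4 MiB "source LUN" of random data.
dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null

# The migration itself: all data flows through this host's buffers,
# which is exactly the load problem described above.
dd if="$SRC" of="$DST" bs=1M conv=fsync 2>/dev/null

# Verify the copy is byte-for-byte identical.
cmp -s "$SRC" "$DST" && echo "copy verified"
```

Multiply this by every LUN on every host, while production I/O is still running, and the resource problem Wayne describes becomes clear.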
The second approach is to use an appliance, so that the appliance sits in the fabric, the Fibre Channel fabric, for example. In the middle, it does the copying from the old storage to the new on behalf of the host. That's the best approach, because it is neutral to any platform, so one solution fits all. It also sits above the storage layer, so it can be applicable to all the different storage out there.
That's good. However, the problem with that kind of appliance, called a storage virtualization appliance, and I used to be in that business too at my previous companies, is that it takes a lot of work when it comes to integration. You have to touch the host. You have to touch the fabric zones, and you could have thousands of zones there. You have to change the LUN masking at the source storage.
So, all of these changes can be very risky, and one mistake could inadvertently lose or corrupt data. We want to avoid all that, and our inventions, those patents I mentioned earlier, allow us to do exactly that. We can go in and avoid touching the host. In fact, the host doesn't even know that we are doing it. We avoid touching the switch and avoid touching the old storage. Just plug it in and migrate transparently.
David Littman: Fabulous. Fabulous. That sounds very interesting. Now, Wayne, there are also some risks and pain points with other migration appliances, though, like multipathing software and LUN masks. Talk to us a little bit about that.
Wayne Lam: Absolutely. So, you'd think it's easy to just go to any fabric, plug in something new, and somehow it would magically be able to read the source and write to the destination, which is what migration is all about. However, an appliance that you deploy in a Fibre Channel fabric is really not easy, because it has its own identity.
That necessitates going to each of the hosts and perhaps changing the multipath configuration to route the traffic through the appliance, so that the appliance is aware of the reads and writes and you can migrate while everything stays online.
Now of course, if it's not online, if you can shut everything down and migrate, then yes, it's really easy. You just plug it in, take two minutes of stoppage, and then it copies every byte. But nobody can give you extended downtime for a mission-critical system.
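The per-host multipath change Wayne mentions can be illustrated with Linux device-mapper multipath, which is configured in `/etc/multipath.conf`. This fragment is purely hypothetical (the WWID and alias are invented); it only shows the kind of per-LUN, per-host configuration that has to be revisited when an in-band appliance takes over the paths:

```
# /etc/multipath.conf -- illustrative fragment only; the WWID and alias
# below are invented. An entry like this can exist for every LUN on
# every host, and inserting an appliance into the data path can mean
# revisiting each one.
multipaths {
    multipath {
        wwid  360000000000000000000000000000001
        alias old_lun
    }
}
```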
Second, there is the switch. To introduce another entity into the fabric you normally have to make extensive zoning changes, which our patent avoids. It's a really simple patent, which I will explain when I have the opportunity later. Then finally, the original storage normally presents the disks to the original host. Now you have to present them to the migration appliance first instead.
These are all changes that can be quite risky, and that's why people normally take downtime to make that kind of appliance deployment, for everyone else's appliance except ours.
David Littman: Okay, fabulous, Wayne. So, just to summarize: migrating block-level data for mission-critical applications that can't have any downtime raises all sorts of complexities and problems, especially in heterogeneous environments.
Wayne Lam: Yes.