As an EMC partner, I’m definitely seeing a lot of excitement around Isilon right now…which, given that it’s a fantastic product, is very cool. While VNX and Isilon both have clear strengths, they do overlap at a high level, so I’ve found myself having a “what’s the best fit?” conversation somewhat regularly with customers.
Along those lines, I recently had a query from a customer running a pair of replicating EMC Celerra arrays with VMware Site Recovery Manager, plus EMC Avamar for backup. They were pondering whether to go VNX (VNX in production, leveraging the Celerras in DR) or move into the brave new Isilon world.
We had several discussions and I ended up putting together an email outlining some distinctions. While I’ve sanitized/modified it a bit just to be respectful of the customer in question and their environment details, I’m guessing this may be of interest to others out there. So without further ado, here you go (and be warned — novel alert).
More after the jump…
Isilon is a great fit for:
- File data (e.g. CIFS shares….not VMware)
- Large file systems – 16 TB or greater in a single file system (this would really be a single application in a way).
- High Capacity — 30 TB up to 20 PB
- High Growth — adding TB/month.
- Sequential file access — lots of sequential reads.
- Admin Overhead — very, very low.
For VMware as a primary storage array, Isilon’s current sweet spot is:
- High Capacity — multiple PB
- High volume — 10,000s of VMs.
- Note: you can run VMware with much smaller environments than this…but as a primary storage array without anything else this is the current best fit.
- Fast Growth in general — potentially adding TB/month (with Isilon you add capacity a node at a time).
- Single Filesystem — part of what makes expansion easy and also enables the non-RAID data protection scheme.
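To make the node-at-a-time growth model concrete, here’s a minimal sketch. The per-node capacity and minimum node count are assumptions for illustration only; actual Isilon node capacities vary by model and generation.

```python
# A minimal sketch of node-at-a-time scale-out growth.
# tb_per_node is a hypothetical figure, not a real Isilon spec.

def cluster_capacity_tb(node_count, tb_per_node=36):
    """Raw capacity of a scale-out cluster that grows one node at a time."""
    MIN_NODES = 3  # a cluster needs a minimum node count to form (see below)
    if node_count < MIN_NODES:
        raise ValueError(f"need at least {MIN_NODES} nodes")
    return node_count * tb_per_node

# Growth is linear and incremental: each added node brings both capacity
# and compute, so there is no forklift upgrade step.
for nodes in (3, 4, 5, 10):
    print(nodes, "nodes ->", cluster_capacity_tb(nodes), "TB raw")
```

The point of the sketch is the shape of the curve, not the numbers: capacity (and performance) scale linearly as nodes join the single filesystem.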
I could expand this out a good bit into other areas as well actually….please don’t hesitate to ask (I have a 30-minute or so Isilon discussion I love walking through with people). There’s also a great review and deep-dive on Isilon by Ars Technica here (as with everything on Ars, it’s very detailed).
(A note that wasn’t in the email: there’s a lot more to say about Isilon architecture and benefits….we’d discussed those before, so I left them out of this email.)
Some considerations specific to your environment and Isilon today:
- Site Recovery Manager — supported, but it’s effectively a version 1.0 integration….the SRM “test” operation requires double the storage because there’s no “zero space clone” support. You also have to wait for the data to copy during the clone process. I do anticipate this changing at some point but can’t provide a date right now.
- Avamar & Isilon — NDMP from Isilon to Avamar is not currently supported. You’d have to set up a Windows machine (physical or VM) to mount the CIFS/SMB shares and then back them up. If you’ve been used to NDMP backup windows, this can be rather painful.
- Isilon does have a Backup Accelerator node which can write to tape drives but is not specifically supported with Avamar.
- Ironically, Isilon does have NDMP backup integration with other backup products.
- Response times — Chad Sakac covered this thoroughly in a blog post covering where Isilon does fit — test/dev, vCloud Director, vFabric development. The summary statement was….
- “Think 10-30ms as opposed to the 2-10ms you would see for block or NAS workloads on VNX or block workloads on VMAX. Also, the $/IOps characteristics for EMC Isilon mean that the fit is best for VMs that are relatively large, but do a relatively small number of IOps.” http://virtualgeek.typepad.com/virtual_geek/2011/08/nfs-changes-in-vsphere-5-and-true-scale-out-nas-isilon.html
- When I look at the current Celerra performance, you’d actually see slower response times from a backend perspective than you do today, based on that statement.
- Node Architecture — you have to start out with a minimum of 3 nodes per disk type. If you want multiple disk types (i.e. slow disk and fast disk….SAS or FC vs. NL-SAS or SATA), you’d actually need 6 nodes to start.
- Replication — you cannot replicate from Celerra to Isilon using either Replicator or RecoverPoint. This means you’d either need (2) Isilon grids (one at each site), or if you ran (1) Isilon grid in production you’d be taking a step backwards from a DR perspective (short of layering on additional 3rd-party options with additional management/complexity/etc.). Basically, VNX and Celerra can replicate between each other (via a mix of Replicator, MirrorView, and RecoverPoint) and Isilon grids can replicate between each other.
- Data Protection — Isilon has an incredibly strong non-RAID data protection scheme. However, for files/writes below 128k (which can happen in VMware environments as well as with generic file sharing data), the data is simply mirrored (not split up and laid across the grid nicely). This can have unanticipated impacts on both performance and usable space. Ars Technica also did a great writeup on Isilon which covers this among many other points.
- Note: there are code changes coming in Isilon at some point to address this as Chad foreshadowed in his blog post….but no public timelines.
- Software — software is packaged differently on Isilon. For instance, “quota” capability is an add-on vs. being built-in (such as it is) on the VNX. This is more of a note…..just something that’s different.
Areas where Isilon isn’t the right fit today:
- Simultaneous block and file access (i.e. iSCSI….one of the reasons we don’t talk much about iSCSI on Isilon right now, to be honest — it’s there but not a default recommendation).
- Exchange – not recommended due to the notes under data protection as well as challenges with high random write I/O.
- Random access of small files — lots of home directories with small files and many students accessing them, for instance. This is just the flip side of Isilon being really strong on sequential file access.
- Full VMware Integration (pretty light right now….nowhere near VSI and Unisphere integration).
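To illustrate the small-file data protection point above, here’s a rough back-of-the-envelope model. This is not Isilon’s actual protection algorithm — the mirror count and stripe overhead are assumptions for the sketch — but it shows why sub-128k files cost proportionally more usable space under mirroring than large files do under a striped protection layout.

```python
# Simplified model (not Isilon's actual algorithm): files below the
# threshold are stored as full mirrors; larger files pay only a
# parity-style stripe overhead (assumed 25% here for illustration).

SMALL_FILE_THRESHOLD = 128 * 1024  # bytes; files below this are mirrored

def protected_size(file_bytes, mirrors=2, stripe_overhead=0.25):
    """On-disk footprint of a file after protection, under the model above."""
    if file_bytes < SMALL_FILE_THRESHOLD:
        return file_bytes * mirrors          # e.g. 2x for a 2-way mirror
    return file_bytes * (1 + stripe_overhead)

small = 64 * 1024           # a 64 KB file
large = 10 * 1024 * 1024    # a 10 MB file
print(protected_size(small) / small)   # -> 2.0  (footprint doubles)
print(protected_size(large) / large)   # -> 1.25 (footprint grows only 25%)
```

Multiply that 2x footprint across millions of small home-directory files or small VMware writes and the usable-space (and performance) impact becomes very real.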