Dell EMC World 2016 – DSSD!

This is an overdue post from Dell EMC World (Oct 17-20). Full disclosure: Dell EMC paid for my flight/hotel/conference pass. Also, full disclosure that I’ve since accepted a position with a company that competes with EMC in certain areas (blog post on that to come later this week). You be the judge of whether any of my commentary is influenced by any of that.
Love this quote – “Bandwidth is the work of man, latency is the work of god.”

As part of being at Dell EMC World via an Influencer pass as EMC Elect, I had a briefing with Matthew McDonough around DSSD. Matthew leads product management, solutions, and marketing for DSSD, so he definitely knows his stuff. Given that I had some previous background with DSSD (I’d been in a briefing on it at EMC World 2015), we decided not to do a standard product overview/walkthrough but just chat about DSSD and follow the conversation wherever it went. Here are my notes from the conversation (albeit a bit cleaned up).

Note: if you’re looking for a DSSD 101 post, this isn’t it – I’d recommend Google. 😉 To be a bit more helpful, Chad has a great post (as always) on DSSD from back at EMC World 2015. If anything, this blog post is targeted at someone with general DSSD knowledge who is looking for additional details that you might not easily find in regular marketing materials.

On to the discussion…

  • DSSD arrays connect to up to 48 hosts using host connect cards – the cards are proprietary, but nothing else can run the way this needs to, so there was no choice but to go proprietary. A non-transparent bridge exposes a massive PCIe fabric on the D5.
    • Provides a connection into the server at PCIe/NVMe speeds.
  • Secret sauce – an OS that manages all the storage and sets up and tears down I/O requests. The controller level has all the background processes that handle flash translation and the multi-dimensional RAID capabilities. The custom flash modules don’t do garbage collection because it can be done more efficiently at scale in the controller (a toy sketch of controller-level GC follows this list).
    • My take: this is interesting to me given the huge difference in opinions on where GC should be done. It’s also interesting to hear that some of the DSSD OS differentiators are around things we tend to take for granted but that are really hard at much higher speeds than traditional storage.
  • With SSDs you’re buying a certain capacity, but you’re also buying a certain amount of performance in each NAND chip – and most arrays don’t deliver all of that NAND performance due to bottlenecks right after the NAND layer (see the back-of-the-envelope math after this list).
  • DSSD is not much more expensive on $/GB than a regular flash array – and way less expensive on $/IOPS. It eliminates the dual markup of regular arrays (worked numbers after this list).
    • My take: this absolutely makes sense – removing the dual markup brings the $/GB into the “realm of rational comparison,” and then the performance can push a purchase decision over the edge. That’s if the customer doesn’t just need the performance as the main driver, of course.
  • Overprovisioning? Same as other arrays – they have to be able to handle cells going bad and leave room for garbage collection.
  • Hot markets – financial & federal. The interest is more around the workloads – most people aren’t too concerned about the cards.
    • My take: I found it interesting that customers aren’t pushing back much on the cards. That makes sense if you can’t get the performance DSSD offers anywhere else, and it may also mean the target markets lean more toward rackmount servers than blades.
  • Seeing deployments of both traditional and next-generation workloads.
    • Traditional = Oracle databases, SAS analytics, genomics, life sciences, structural. Some applications that run off GPFS use DSSD as a block device (a minimal block-device access sketch follows this list).
  • You can’t get DSSD performance outside of Exadata – that drives customer interest.
  • What’s the Total Addressable Market (TAM)? They look at the server flash market as the overall TAM, and performance requirements are moving in the direction of DSSD.
    • My take: I love looking at the TAM as something that’s moving – right now the overall datacenter is shifting to all-flash, but it’s not like that shift will stop (call it “TAM shift momentum” if you will). As we keep moving along the performance curve, more customers will wander into wanting/needing what DSSD offers.
  • iSCSI or FC support? There’s a VxRack solution for DSSD using Dell servers as a way to make it more easily consumable – in their mind, that’s a better answer than supporting iSCSI or FC.
  • NVMe via PCIe is the first instantiation of DSSD. Investments are coming to help push NVMe over Ethernet. They’ll likely never get rid of the NVMe cards, albeit they may morph.
  • Validation + Performance Testing
  • Focusing on the application owner – infrastructure folks don’t know what to do with 10x performance.
    • My take: as someone with an infrastructure background, this made me laugh while also acknowledging how true it is. Most infrastructure folks, myself included, don’t know what to do with a 10x performance increase past flash.
  • Anyone pushed DSSD to the max? Yes – a customer running healthcare analytics started with a need for 30 GB/s of bandwidth, then 50, then 100 (a quick scan-time calculation follows this list).
    • My take: I love that a batch style job can still push the boundaries of the fastest thing out there.
  • Great quote – “Bandwidth is the work of man, latency is the work of god.”
  • Good recent EMC Pulse blog from John McCool
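Since “do garbage collection in the controller, not the drive” is easy to gloss over, here’s a toy Python sketch of what a GC pass does conceptually. This is purely illustrative and not DSSD’s actual algorithm – the geometry and the greedy victim selection are my own assumptions; the point is just that GC is copy-and-erase work, and whoever does it with the widest view of the flash can do it with the least write amplification.

```python
# Toy flash garbage-collection sketch (illustrative only, not DSSD's
# actual algorithm). Flash can't be overwritten in place: pages are
# written once, and a whole block must be erased to reclaim space.
# GC picks a victim block, relocates its still-valid pages, erases it.

PAGES_PER_BLOCK = 256  # assumed geometry

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = stale/invalid page
        self.valid = 0

def gc_pass(blocks, free_block):
    """Greedy GC: reclaim the block with the fewest valid pages, since
    that minimizes the data copied (i.e., the write amplification)."""
    victim = min(blocks, key=lambda b: b.valid)
    copied = 0
    for page in victim.pages:
        if page is not None:              # live data must be relocated
            free_block.pages[copied] = page
            copied += 1
    free_block.valid = copied
    victim.pages = [None] * PAGES_PER_BLOCK  # "erase" the victim block
    victim.valid = 0
    return victim                          # erased block rejoins the free pool
```

A controller that sees every flash module at once can pick victims globally and schedule erases around live I/O, which is the efficiency-at-scale argument Matthew was making.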
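On the “you buy NAND performance but don’t get it” point, here’s the back-of-the-envelope version. Every number below is an assumption I made up for illustration (not DSSD’s or any vendor’s spec); the takeaway is just how little raw NAND bandwidth survives a typical array’s controller and fabric path.

```python
# Illustrative math only: all figures are assumptions, not vendor specs.
dies_per_ssd = 64           # assumed NAND dies per SSD
read_mb_s_per_die = 400     # assumed sustained read per die
ssds = 24                   # assumed drives in the array

raw_gb_s = dies_per_ssd * read_mb_s_per_die * ssds / 1000
delivered_gb_s = 20         # assumed front-end limit (controllers, SAS/FC)

print(f"raw NAND read bandwidth: ~{raw_gb_s:,.0f} GB/s")
print(f"delivered by the array:  ~{delivered_gb_s} GB/s")
print(f"NAND performance realized: {delivered_gb_s / raw_gb_s:.1%}")
```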
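To make the $/GB vs. $/IOPS point concrete, here’s a quick worked comparison. The prices and performance figures are hypothetical placeholders, not quotes from anyone; it’s the shape of the math that matters.

```python
# Hypothetical numbers only - the shape of the math is the point.
arrays = {
    # name: (usable_TB, price_$, IOPS)
    "traditional AFA":  (100, 1_000_000, 500_000),
    "rack-scale flash": (100, 1_200_000, 10_000_000),
}

for name, (tb, price, iops) in arrays.items():
    print(f"{name:16s}  ${price / (tb * 1000):,.2f}/GB   ${price / iops:.3f}/IOPS")
```

With numbers like these the $/GB gap is small enough to argue about, while the $/IOPS gap is more than an order of magnitude – which is exactly the “realm of rational comparison” dynamic above.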
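Since “use DSSD as a block device” is doing a lot of work in that bullet, here’s a minimal sketch of what raw block-device access looks like from Python on Linux. The device path is a made-up placeholder (DSSD’s own client stack exposes its own device names and richer native APIs); this just shows the plain block-device path that an application, or a filesystem like GPFS, would sit on.

```python
import mmap
import os

DEV = "/dev/dssd0"   # hypothetical device node - substitute your block device
BLOCK = 4096         # read in aligned, page-sized chunks

# O_DIRECT bypasses the kernel page cache - the usual choice when the
# application (or a filesystem layered on top) manages its own caching.
fd = os.open(DEV, os.O_RDONLY | os.O_DIRECT)
try:
    # An anonymous mmap gives a page-aligned buffer, which O_DIRECT requires.
    buf = mmap.mmap(-1, BLOCK)
    nread = os.readv(fd, [buf])
    print(f"read {nread} bytes from {DEV}")
finally:
    os.close(fd)
```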
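Finally, to show why a batch analytics job keeps asking for more bandwidth, here’s the scan-time arithmetic behind the healthcare-analytics story. The dataset size is my own assumption; the bandwidth figures are the ones from the bullet above.

```python
# How long does one full pass over a dataset take at each bandwidth level?
dataset_tb = 50  # assumed dataset size, for illustration
for gb_s in (30, 50, 100):
    seconds = dataset_tb * 1000 / gb_s
    print(f"{gb_s:3d} GB/s -> full scan in {seconds / 60:4.1f} minutes")
```

Every bump in bandwidth directly shortens the batch window, so it’s easy to see how a job like that keeps walking up the curve.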


That’s a wrap! As I mentioned above, hopefully this is interesting for someone who has a baseline awareness of DSSD but is looking for some additional information.
