Session Notes – VMware vSphere 5 Storage Troubleshooting Boot Camp – VMware Partner Exchange

Summary = great design considerations around VDI and awesome deep dive by Mostafa — buy his book when it comes out.

Presenters = Mostafa (VCDX #02) for Troubleshooting and Jeff Whitman & Jim Yanik (both Senior SEs) for the initial VDI part.

Note: as with all my session notes this is a mix of what’s on the screen, what the presenters are saying, and my own thoughts. If something doesn’t sound like what the presenters would say, assume it’s just my opinion. 😉 Or ask in the comments if not sure…

More after the jump…

First Section = Storage Performance and Sizing (mostly VDI Focused). Not as much Mostafa to start out….mostly 2 VMware SE’s on the VDI side.

  • Focused on desktop I/O at the start at least.
    • Asks if anyone has a clue about desktop I/O and I/O patterns – I’m the only one who raises my hand.
  • An Example – 6×450 GB 15k RPM SAS Drives RAID 10
    • Easy to get enough capacity for VDI desktops, but you must be aware of I/O, not just space (quick math below).
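    A quick back-of-the-envelope on that example array (my own math, assuming roughly 175 IOPS per 15K SAS spindle, a figure the session didn't give):

        # 6 x 450 GB 15K SAS in RAID 10
        echo $(( 6 * 450 / 2 ))    # usable GB (RAID 10 keeps half the raw capacity) -> 1350
        echo $(( 6 * 175 ))        # aggregate read IOPS from the spindles -> 1050
        echo $(( 6 * 175 / 2 ))    # effective write IOPS with the RAID 10 penalty of 2 -> 525

    Plenty of space, comparatively little I/O headroom, which is exactly the trap in the real-world example below.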
  • The Problem – Real World Example
    • POC – Performance great, then performance terrible (only planned for space increase and not I/O increase).
    • Production assembly line – performance acceptable during normal operations; writing files/updates/AV killed things.
    • Space — I’ve got space…why should I buy disk?
  • Rules of thumb – light = 5 IOPS (not much time at the desktop), medium = 10 (most of the day at the desktop, typical office worker), heavy = 20 (developer, power user); a quick sketch follows this sub-list.
    • One customer thought they’d be safe by doubling heavy from 20 to 40, bought a bunch of stuff, their workload was actually 100.
    • Don’t forget Read/Write mix.
    • Don’t forget Read/Write mix.
    • We really mean it.
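    As a sanity check on those rules of thumb, a tiny sketch with a hypothetical mix (my numbers, not the presenters'):

        # 1,000 desktops: 200 light, 600 medium, 200 heavy
        echo $(( 200*5 + 600*10 + 200*20 ))    # frontend IOPS -> 11000

    That is before the read/write mix and RAID write penalty are applied; those come back in the write IOPS discussion further down.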
  • Recommending IOPS calculator at http://myvirtualcloud.net/?page_id=1076
    • By Andre Leibovici.
  • Also recommending Lakeside Software and Liquidware Labs for assessments.
    • Agree for really large deployments….I like the Varrow assessment approach for smaller/medium type setups.
    • Discussion (after my question) about the tools being too heavy….comments that VMware PS has seen the same thing, so they sometimes present nothing but a summary. They also sometimes run it in tandem, as it can take 1-3 months to gather really good data.
    • VMware saying they’ve found sometimes these tools slow things down – paralysis by analysis.
    • Use this as background/backend data.
  • Assessment Examples – Real World
    • 4,417 Desktops – Disk IO/SEC SCAN
      • Average of 6 IOPs per user, 2:1 R/W (4503 total)
      • Max IOPs = 38, 1:3 R/W (51863 total)
      • 95th Percentile = 14 IOPs (17,807 total)
    • 5,026 Desktops
      • Spike at login each morning – boot storm.
      • Average I/O per desktop = 1.4, 1:1 R/W
      • Max I/O per Desktop = 4.2, 1:1 R/W
      • 95th Percentile = 3.4
  • Storage Solutions – Read IOPs
    • Array based Cache = RAM based, SSD Based, Flash Cache, FAST Cache
    • Host Based Cache – next version of View will allow you to allocate RAM as cache specifically for boot storms, etc.
    • View Composer Storage Tiering – Replica on SSD Storage
    • Offload Operations
      • Profile – View Virtual Profiles
      • User Data
      • Applications
    • No Write penalty for cache stuff.
  • Write IOPs is trickier.
    • In general you need to build out the number of spindles to support write IOPs
    • Don’t forget the RAID type caveat – write penalty.
    • SSD writes are generally slower than reads and can degrade over time.
    • Some vendor specific solutions may help.
      • FAST at Rest, WAFL for example.
      • Vendors with dedup and serializing I/O.
      • This is where the Disclaimer slide applies.
    • Question about Atlantis Computing – it does what it claims but it's expensive.
    • Sizing for write IOPs is probably the most critical area (a quick sketch of the arithmetic follows this list).
    • Nutanix was mentioned as well, using built-in Fusion-io to provide a dense number of VDI desktops in a small form factor.
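    A minimal sketch of the write-sizing arithmetic (hypothetical numbers: 1,000 medium users at 10 IOPS each, 1:1 read/write mix; nothing vendor-specific):

        FRONTEND=$(( 1000 * 10 ))        # 10000 frontend IOPS
        READS=$(( FRONTEND / 2 ))        # 5000 read IOPS (1:1 mix)
        WRITES=$(( FRONTEND / 2 ))       # 5000 write IOPS
        # backend: reads pass through, writes are multiplied by the RAID write penalty
        echo $(( READS + WRITES * 2 ))   # RAID 10, penalty 2 -> 15000 backend IOPS
        echo $(( READS + WRITES * 4 ))   # RAID 5, penalty 4 -> 25000 backend IOPS

    Divide the backend number by your per-spindle IOPS figure and the spindle-count (and cost) difference between RAID levels becomes obvious.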
  • Dealing with Peaks
    • Boot Storms – Set Power Policy to Always On
    • Antivirus – vShield Endpoint, Randomized Full Disk Scans, No Full Disk Scans
    • Login – View Virtual Profiles, leave users logged in
    • Image Optimization
  • Latency is generally best indicator of problem.
    • Need a monitoring tool in place for this.
    • 20 ms latency is getting into trouble.
    • 50 ms will light up the Help Desk.
    • Big latency numbers are possible if you don't design well, and it will seem like a service outage.
    • Watch latency numbers as you add desktops.
    • Don't use Linked Clones if not using the R-R-R features (refresh, recompose, rebalance).
      • Just use the full clones.
      • Over time read IOPs can grow dramatically if not refreshed.
      • One environment didn't refresh its linked clones for a full year….it majorly hurt IOPS, with read IOPS starting to come out of the linked clones themselves.
  • The limit of 8 hosts in a View cluster was originally a VMFS issue; with View 5 and NFS it's still an "issue", but really just an artificial limit kept for supportability reasons.
    • Big deal = if you do NFS, there's no technical NFS limitation with linked clones (just not supported currently). Later versions of View will remove the NFS supportability question and also raise VMFS cluster size limits (I realize this is forward-looking, but a statement of the obvious, I think?).
  • Need to be aware of RAID choice impact.
  • Look at CapEX cost per IOP – cheapest disk may be most expensive solution.
    • Keep power consumption in mind on OpEx cost.
  • NFS vs. block-based storage – performance should be a relative wash; NFS advantages vs. VAAI on block, etc. roughly cancel out.
  • Must Pilot if need to guarantee performance.
  • Great Blog Links
  • Load Testing – View Planner is free for partners but clumsy.
    • LoginVSI is good but you have to pay for it.
    • Caution on View Planner – it doesn't do a good job of sizing PCoIP traffic requirements (it handles storage fairly well).

Now transitioning to Mostafa – mostly live demos for Storage Troubleshooting – recording it since almost no slides and will post recording.

  • vmhba numbers = a USB key gets a higher number, the software iSCSI adapter is higher, anything hardware gets a lower vmhba number.
    • DAVG/cmd = raw response time from storage device.
    • KAVG = time spent in vmkernel
    • GAVG = response time as perceived by the virtual machine; DAVG + KAVG = GAVG.
  • Can add columns (hit “F” to add fields) – best ones are LATSTATS/rd (read latency stats – ms) and LATSTATS/wr (write latency stats – ms)
  • Also watch the CPU load in the top of esxtop to see if CPU contention is playing a role in I/O contention.
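  A minimal way to watch these counters yourself (standard esxtop usage; the 5-second / 60-sample values are just an illustration):

      # interactive: d = disk adapter view, u = disk device view, v = per-VM disk view,
      # then press f to toggle fields such as LATSTATS/rd and LATSTATS/wr
      esxtop

      # batch mode: capture 60 samples at 5-second intervals for offline analysis
      esxtop -b -d 5 -n 60 > /tmp/esxtop-capture.csv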
  • What are correct values for these response times?
    • It depends…..lower = better of course.
    • Sometimes lower is just because the operation couldn't actually be completed (fast abort).
    • ESX will function with almost any response time albeit poorly.
    • Any command not acknowledged by SAN within 5000ms (5 seconds) will be aborted. This is where perceived disk performance falls off a cliff.
  • Long discussion around VASA (it matters…go learn it if you don’t) and then using storage throttling with Storage DRS.
  • Now a live demo of vm-support script.
    • GUI is now under the Administration menu – Export System Logs.
    • Can select one host, all hosts, vCenter, etc.
    • Can select which System Logs you want (bunch of options)
    • Gather performance data – time range. Runs esxtop on the hosts.
      • Default = 300 seconds duration, sampled every 5 seconds.
      • VMware can run esxtop against the results to replay what was happening on your box.
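      You can also replay a performance snapshot yourself; a rough sketch (the file name is hypothetical, and this assumes the bundle was collected with performance data included):

          # extract the bundle, then point esxtop at it in replay mode
          tar -xzvf esx-host01-vm-support.tgz
          esxtop -R esx-host01-vm-support    # -R replays the captured samples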
  • Much of what is below is from full walkthrough of a live .vmx file.
  • /proc nodes in Linux – used to have /proc nodes in Service Console, now have VSI as structured memory based filesystem in ESXi.
    • VMware accidentally shipped a tool for troubleshooting VSI info….since it shipped, they're now leaving it in.
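    I believe the tool in question is vsish (my assumption; the presenter didn't name it on a slide). A quick poke, if it's on your build:

        # interactive VSI browser
        vsish
        # or one-shot queries
        vsish -e ls /    # list the top-level VSI nodes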
  • In the /etc/vmware/vm-support directory there are a bunch of .mfx files – manifest files, which are actually plain ASCII files.
    • These list out the actual commands run to gather the vm-support data….very cool to see.
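    For example, straight from the ESXi shell:

        # the manifests are plain ASCII and list the exact commands run to gather the data
        ls /etc/vmware/vm-support/
        cat /etc/vmware/vm-support/*.mfx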
  • /vmfs/volumes – just symlinks for “human readable” names.
    • For the long real datastore name, if VMFS the last 12 characters are the MAC address of the host that created the VMFS filesystem.
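    A quick way to see both on a host:

        # friendly datastore names are just symlinks to the VMFS UUIDs
        ls -l /vmfs/volumes/
        # map each VMFS volume to its backing device(s)
        esxcli storage vmfs extent list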
  • Discussing SR-IOV and how it shows up in “lspci” output.
  • sched.scsi0:0.throughputCap setting in .vmx actually dictates if SIOC is on for a vmdk.
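    Easy to check from the host (the datastore and VM names here are hypothetical):

        # look for the SIOC throttling knob in a VM's .vmx
        grep -i "sched.scsi" /vmfs/volumes/datastore1/myvm/myvm.vmx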
  • Snapshots – Delete means Commit (big I/O impact)…be very, very careful.
    • Oh, how I know this…
  • vm-support gives you a .tgz file — regular “tar -xzvf” to extract it. Contents are….
    • boot option details
    • CIM info – 3rd party hardware details
    • esxcfg-* — still some in there….deprecated in future releases.
  • esxcli and localcli are very similar, but localcli can run before the services esxcli depends on have loaded; it's more of a during-boot and/or troubleshooting thing.
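  For example, the same query through both entry points (localcli bypasses hostd, so save it for when the normal path is broken):

      esxcli storage core device list     # normal path, goes through hostd
      localcli storage core device list   # direct path, works even before/without hostd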
  • In esxcli, anything not related to native multipathing lives outside the nmp namespace (e.g. under esxcli storage core).
  • In ESXi 5 divided storage stack into layers – previously any storage changes required a full kernel update (not very practical).
    • NMP is a huge PSA plugin, not just for third-party possibilities but also to lower the number of kernel updates.
  • PSP does path selection but NMP handles the actual failover events.
  • MRU is actually a plugin at the same level as PSP
  • NMP –> PSP –> fixed/generic/roundrobin
  • NMP –> MRU (same level)
  • Mostafa is focusing on this for a while to ease us into the idea of frameworks.
  • Never set Round Robin at the SATP level, or it could affect arrays that don't handle Round Robin well (or at all); a safer per-device approach is sketched after this sub-list.
    • esxcli storage nmp satp list
    • Shows the default PSPs for each storage array type.
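  If you do want Round Robin somewhere, a safer pattern is to set it per device rather than per SATP (the device ID below is made up):

      # see which SATP and PSP each device currently uses
      esxcli storage nmp device list
      # change the PSP for one specific device only
      esxcli storage nmp device set --device naa.600601601234567890abcdef12345678 --psp VMW_PSP_RR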
  • esxcli storage core claimrule list
    • shows the default rules and then rules which have taken over.
  • ESX/ESXi assumes that anything smaller than 50 MB is a management LUN and should be excluded/hidden.
  • PowerPath fully takes over anything that would otherwise be claimed by NMP, unless you specifically exclude devices.
  • NMP and/or PowerPath updates — likely require reboot….but if you do a “dry run” install and poke inside scripts it’s often stuff you could reload manually (like kernel modules) as sometimes reboot is just to let the “jumpstart” script run nicely.
    • Handy to know if you can't vMotion for some reason (VFCache, maybe).
    • Reboot is often required by the manufacturer, so we're way outside supportability, but there's stuff under the covers we can look at if we're desperate.
