VSP2247 10Gb & FCoE Real World Design Considerations

Got here a bit late, as I stayed to talk with Bryan Kuhn and Chad Sakac after the last session… discussion around NFS for VDI, etc.

Speaker = Don Mann from ePlus

  • Two Approaches
    • Throttling – examples…
      • vSS – throttle egress (outbound) traffic from the VM (shaping sketch after this list)
      • vDS – throttle ingress/egress to/from VM
      • HP FlexFabric – FlexNICs – 4 NICs per 10 Gb link
    • Prioritization
      • vDS with vSphere 4.1 or higher – Network I/O Control
      • Cisco Nexus 1000v — as of 1.4 can do Bandwidth Prioritization Queueing
      • Cisco Virtual Interface Card – part of UCS – very clean.
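
To make the vSS throttling option concrete, here is a minimal pyVmomi sketch that enables egress traffic shaping on a standard vSwitch port group. The vCenter endpoint, credentials, and port group name are hypothetical placeholders; this is a sketch of the approach, not anything shown in the session.

```python
# Minimal pyVmomi sketch: egress traffic shaping on a vSS port group.
# Endpoint, credentials, and port group name are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; skips cert checks
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first host; a real script would look one up by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]
net_sys = host.configManager.networkSystem

pg_name = "VM Network"
for pg in net_sys.networkInfo.portgroup:
    if pg.spec.name == pg_name:
        spec = pg.spec
        shaping = vim.host.NetworkPolicy.TrafficShapingPolicy()
        shaping.enabled = True
        shaping.averageBandwidth = 4_000_000_000  # bits/sec (~4 Gbps)
        shaping.peakBandwidth = 5_000_000_000
        shaping.burstSize = 104_857_600           # bytes
        spec.policy.shapingPolicy = shaping
        net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)

Disconnect(si)
```

Note that on a vSS this shapes outbound traffic only; the vDS adds the ingress direction, per the bullets above.
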
  • vSwitch Options in vSphere 5 – some of these are new
    • vNetwork Standard Switch – same as vSwitch in VI3
    • Distributed vSwitch — private VLANs, NIOC, Datacenter QoS, DVMirror, NetFlow
    • Nexus 1000v — network managed by net team, “Virtual Security Gateway”, Nexus 1010 w/Network Analysis Module
  • Not just the vSphere hosts…
    • Total Solution Design
    • Traffic congestion possible at…
      • blade chassis uplinks
      • storage uplinks
      • switching interconnects
    • Link Failure Detection
      • Cisco Link State Tracking
      • ESX Beacon Probing (sketch after this list)
      • HP SmartLink
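
For the beacon probing item, a hedged pyVmomi sketch (the vSwitch name is a placeholder, and `host` is a `vim.HostSystem` obtained as in the earlier sketch). Beacon probing is generally only meaningful with three or more uplinks.

```python
# Sketch: turn on beacon probing for a vSwitch's NIC teaming policy.
from pyVmomi import vim

def enable_beacon_probing(host, vswitch_name="vSwitch0"):
    net_sys = host.configManager.networkSystem
    for vsw in net_sys.networkInfo.vswitch:
        if vsw.name == vswitch_name:
            spec = vsw.spec
            teaming = spec.policy.nicTeaming
            if teaming.failureCriteria is None:
                teaming.failureCriteria = \
                    vim.host.NetworkPolicy.NicFailureCriteria()
            # checkBeacon=True moves failure detection from link status
            # only to beacon probing across the team.
            teaming.failureCriteria.checkBeacon = True
            net_sys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)
```
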
  • Quality of Service
    • Priority Flow Control (802.1Qbb) – switch-side sketch after this list
      • enables lossless fabrics for each class of service.
      • PAUSE is sent per virtual lane when the buffer limit is exceeded.
    • Class of Service based bandwidth management
      • 802.1Qaz – Enhanced Transmission Selection (ETS)
    • FCoE — biggest difference is the physical media.
      • FC frames over an L2 Ethernet transport – the lossless extensions in the 10 GbE standards enable this
      • Single adapter
      • No gateways
      • Mostly single-hop… a bit of multi-hop, but not much.
      • FCoE doesn’t require SAN support… you can run FCoE to the hosts and FC to the SAN
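
The lossless behavior described above lives on the switch, not in vSphere. As a hedged illustration, here is a netmiko sketch pushing a typical Nexus network-qos policy (no-drop PAUSE and FCoE MTU on class-fcoe). The device details are hypothetical, and the exact commands vary by platform and NX-OS release, so verify against Cisco's docs.

```python
# Sketch: apply a no-drop (PFC) network-qos policy for FCoE on a Nexus switch.
# Device details are hypothetical; verify commands for your NX-OS release.
from netmiko import ConnectHandler

switch = {
    "device_type": "cisco_nxos",
    "host": "nexus5k.example.com",
    "username": "admin",
    "password": "secret",
}

commands = [
    "feature fcoe",
    "policy-map type network-qos fcoe-nq",
    "  class type network-qos class-fcoe",
    "    pause no-drop",  # PFC: per-priority PAUSE makes this lane lossless
    "    mtu 2158",       # large enough for a full FC frame in Ethernet
    "system qos",
    "  service-policy type network-qos fcoe-nq",
]

with ConnectHandler(**switch) as conn:
    print(conn.send_config_set(commands))
```
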
  • Very good network stack comparison slide… get this from the slide deck.
    • Less overhead than FCIP or iSCSI
  • General recommendations
    • read vendor best practices
    • Check HCL and update drivers/firmware as needed.
    • Identify network failures: Beacon Probing, SmartLink, Link State Tracking
    • Utilize vmxnet3 where possible
    • Leverage Twinax cables where feasible for runs < 10 m (MUCH lower cost)
    • Jumbo Frames – should only be enabled for unique workloads (MTU sketch after this list)
    • Test, Test, Test
      • Ideally before it goes into production
      • Verify the throughput/functionality on the card.
    • Enable spanning-tree PortFast on upstream switch ports.
    • Tag VLANs into vSphere (virtual switch tagging) – provides easier troubleshooting
    • Consider Multi-NIC vMotion with proper traffic management.
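
For the jumbo frames item above, a minimal pyVmomi sketch that raises the MTU on a vSwitch and a vmkernel NIC (names are placeholders; `host` is a `vim.HostSystem` as before). Remember the physical switches in the path must carry the same MTU end to end.

```python
# Sketch: enable jumbo frames (MTU 9000) on a vSwitch and a vmkernel NIC.
def enable_jumbo_frames(host, vswitch_name="vSwitch1", vmk_device="vmk1"):
    net_sys = host.configManager.networkSystem

    # Raise the vSwitch MTU first.
    for vsw in net_sys.networkInfo.vswitch:
        if vsw.name == vswitch_name:
            spec = vsw.spec
            spec.mtu = 9000
            net_sys.UpdateVirtualSwitch(vswitchName=vswitch_name, spec=spec)

    # Then the vmkernel interface riding on it (vMotion, IP storage, etc.).
    for vnic in net_sys.networkInfo.vnic:
        if vnic.device == vmk_device:
            vnic.spec.mtu = 9000
            net_sys.UpdateVirtualNic(device=vmk_device, nic=vnic.spec)
```
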
  • Rackmount server recommendations
    • Leverage Network I/O Control features
      • Enable QoS priority tag (and on upstream switch)
        • Or throttle vMotion to ~3–4 Gbps (on vSS and vDS) – NIOC sketch after this list
      • Enable NetIOC on the ~6 Gbps that remain if leveraging FCoE (~4 Gb goes to FC)
    • Optionally – upgrade to the Nexus 1000v for its additional capabilities (check the slides for the list)
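
A hedged pyVmomi sketch of those NIOC knobs: enable network resource management on the vDS, then cap the built-in "vmotion" pool. The dvSwitch name is a placeholder, and I'm assuming the pool limit is expressed in Mbps; verify class names and units against the vSphere API reference before relying on it.

```python
# Sketch: enable NIOC on a vDS and cap the vMotion pool (~4 Gbps).
# Assumes `si` is a connected ServiceInstance; dvSwitch name is hypothetical.
from pyVmomi import vim

def cap_vmotion(si, dvs_name="dvSwitch0", limit_mbps=4000):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == dvs_name)

    # Turn on Network I/O Control (vSphere 4.1+ API).
    dvs.EnableNetworkResourceManagement(enable=True)

    # Find the built-in vMotion resource pool and set a host limit.
    pool = next(p for p in dvs.networkResourcePool if p.key == "vmotion")
    spec = vim.DVSNetworkResourcePoolConfigSpec()
    spec.key = pool.key
    spec.configVersion = pool.configVersion
    spec.allocationInfo = vim.DVSNetworkResourcePoolAllocationInfo()
    spec.allocationInfo.limit = limit_mbps  # host limit, in Mbps (assumed)
    dvs.UpdateNetworkResourcePool(configSpec=[spec])
```
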
  • HP FlexFabric Recommendations – multiple; check the slides.
  • Cisco UCS Recommendations
    • Leverage Cisco VIC – Hardware QoS on adapter
    • Fully configure QoS
      • Many best practice docs are missing some of the steps.
    • Leverage port-channels and pin groups for uplinks
    • Consider Updating Templates for consistency
    • Set Maintenance Policy to “user-ack”
    • Enable CDP under Network Control Policy (sketch of these last two below)
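
A sketch of those last two items using Cisco's ucsmsdk Python SDK. The class and property names here are from memory (e.g. `uptime_disr`, `cdp`) and should be verified against the SDK docs; the endpoint and policy names are hypothetical.

```python
# Sketch: UCS maintenance policy (user-ack) and CDP network control policy.
# Class/property names are best-effort recollections of ucsmsdk; verify them.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.lsmaint.LsmaintMaintPolicy import LsmaintMaintPolicy
from ucsmsdk.mometa.nwctrl.NwctrlDefinition import NwctrlDefinition

handle = UcsHandle("ucsm.example.com", "admin", "secret")  # hypothetical
handle.login()

# Maintenance policy: require user acknowledgement before disruptive changes.
maint = LsmaintMaintPolicy(parent_mo_or_dn="org-root",
                           name="user-ack-policy",
                           uptime_disr="user-ack")
handle.add_mo(maint, modify_present=True)

# Network control policy with CDP enabled.
nwctrl = NwctrlDefinition(parent_mo_or_dn="org-root",
                          name="cdp-on",
                          cdp="enabled")
handle.add_mo(nwctrl, modify_present=True)

handle.commit()
handle.logout()
```
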
  • 3 Ways to use Cisco VIC card
    • Adapter-FEX – he likes this one the most… simplest, as it presents vmnics to ESX the same way as normal setups.
    • 1000v
    • VM FEX — gives vNICs directly to VMs
