Presented by Venky Deshpande, Sr. Technical Marketing Manager with VMware
Summary = Venky really knows his stuff….great walkthrough on vDS and different scenarios with it. Some good audience questions at the end too.
Now the standard disclaimer….but there’s nothing new here so who cares? 😉
Looks like most people in the room are generally familiar with vDS
Overview – the obvious stuff
- vMotion Aware
- Unified vSwitch management independent of physical fabric.
- Manage datacenter wide and not host by host.
- Some new features – Load Based Teaming and Network I/O Control
- vSphere 5 adds NetFlow and Port Mirroring
- LBT — some question about ingress vs. egress – it watches both: when an uplink's send or receive utilization stays above 75% over a 30-second window, LBT moves flows to a less-loaded uplink.
- NIOC has Shaper and Scheduler functions — the shaper enforces limits and the scheduler enforces shares — that's where the policy is applied and how often it is applied.
- # of Uplinks determines # of Schedulers – one scheduler per uplink, so 8 uplinks = 8 schedulers.
- Same concept of shares as CPU/memory — a share value only means something relative to the other active traffic classes, so the size of each piece shifts as you add/remove traffic classes (quick sketch just below).
- Same diagram as I usually show in vSphere 5 update presentations.
- Can also tag with 802.1p – tag important traffic higher so it won't get dropped upstream.
- End-to-end QoS does matter.
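Since the shares math is where people get tripped up, here's a quick back-of-the-napkin sketch in plain Python (share values and uplink size invented for illustration): each active class gets its shares divided by the total active shares, times the uplink bandwidth, and the slices re-balance when a class goes idle.

```python
def nioc_bandwidth(shares_by_class, uplink_gbps):
    """Worst-case bandwidth per traffic class under contention.

    Shares only matter relative to the other classes actively using
    the same uplink (remember: one scheduler per uplink).
    """
    total = sum(shares_by_class.values())
    return {cls: uplink_gbps * s / total for cls, s in shares_by_class.items()}

# Hypothetical shares on one 10 GigE uplink.
active = {"vm": 100, "vmotion": 50, "iscsi": 50}
print(nioc_bandwidth(active, 10))   # vm: 5.0, vmotion: 2.5, iscsi: 2.5

# vMotion finishes, its class drops out, and the remaining slices grow.
del active["vmotion"]
print(nioc_bandwidth(active, 10))   # vm: ~6.67, iscsi: ~3.33
```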
- vDS Parameters
- Uplink Port Group – # of uplinks depends on # of physical network adapters
- Look at the vSphere Configuration Maximums — (10) or so 10 GigE and (32) GigE NICs per host
- Distributed Port Group – depends on # of traffic types or number of tenants
- Physical Switch Parameters
- Switch Port Parameters – Trunk/Access, VLAN, MTU, Spanning Tree (BPDU Guard – err-disables the port if a BPDU shows up on it, protecting STP from the re-convergence a rogue bridge could trigger; Port Fast – skips the listening/learning Spanning Tree states so the port forwards immediately)
- Resiliency and Performance – EtherChannel (IP Hash), switch clustering/stacking, Link State Tracking – detects a failure between the aggregation and core layers and takes the server-facing ports down so NIC teaming can fail over.
Design Considerations
- Separate Infrastructure Traffic from VM Traffic
- VMs shouldn’t see infrastructure traffic – security violation
- Method of Traffic separation – either Physical or Logical
- Logical – VLANs
- one vDS, all pNICs
- 2 pNICs are sufficient
- very common as people move to 10 GigE and use NIOC
- Physical
- One vDS for each physical network.
- Create portgroups, etc.
- Need at least (4) pNICs – 2 per vSwitch
- No Single Point of Failure – SPoF
- 2 or more physical NICs, preferably to separate physical switches.
- Understand your traffic flows.
- Use NetFlow to see them (a sketch of enabling it follows this list).
- Use the data to come up with traffic shares.
- Prioritize traffic — we’re talking QoS/802.1p here.
- Traffic Characteristics – think about the traffic type, bandwidth required, impact if starved, etc.
- Nice chart on this but too much to put down….had some good info to be aware of.
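NetFlow keeps coming up as the way to learn your flows, so here's a minimal pyVmomi sketch of pointing a vDS at a collector. Assumptions: the dvs object lookup through vCenter is omitted, the collector address is made up, and the timeout/sampling values are just illustrative defaults.

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def enable_netflow(dvs, collector_ip, collector_port=9995):
    """Point the vDS at a NetFlow (IPFIX) collector -- new in vSphere 5."""
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion   # reconfig needs the current version
    spec.ipfixConfig = vim.dvs.VmwareDistributedVirtualSwitch.IpfixConfig(
        collectorIpAddress=collector_ip,            # hypothetical collector
        collectorPort=collector_port,
        activeFlowTimeout=60,                       # export long-lived flows every 60s
        idleFlowTimeout=15,                         # export idle flows after 15s
        samplingRate=0,                             # 0 = look at every packet
        internalFlowsOnly=False)
    WaitForTask(dvs.ReconfigureDvs_Task(spec))

# enable_netflow(dvs, "192.0.2.10")   # dvs is a vim.DistributedVirtualSwitch
```

Monitoring is then switched on per port group (the ipfixEnabled flag in its port policy), so you only export the traffic types you care about.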
Different Deployments
- Customer Needs
- Reliability and security on the network side.
- SLA guarantees for certain types of traffic.
- Traffic management to help with multi-tenancy and Tier 1 Apps in a consolidated environment.
- Specific Security Requirements – Legal, Rules/Regs, etc.
- Visibility and Monitoring Support – NetFlow and SPAN now added in vSphere 5.
- Infrastructure
- Blades vs. Rack mount
- GigE vs. 10 GigE
- Scenario: Multiple GigE Ports
- Port Group A with 4 GigE uplinks to one physical switch.
- Port Group B with 4 GigE uplinks to another physical switch.
- Audience Question – Link State Tracking vs. Beacon Probing
- Link State Tracking is only on Cisco switches….use it when you have Cisco.
- Beacon probing if not on Cisco switches.
- Audience Question – trunking DMZ across data centers, etc.
- Just VLAN it….sometimes that’s an auditing issue though.
- No good answer except to drive towards as much logical isolation and as little physical isolation as possible.
vDS Configuration Steps
- Configure Uplink Port Groups
- Number of uplinks depends on number of NICs
- Define Port Groups based on number of traffic types
- Configure the following parameters for each PG
- Teaming
- VLAN setting
- Add Hosts to vDS
- Easy when you have the same # of vmnics across all hosts, harder when the # of vmnics varies by host – be careful. (A minimal sketch of the port group steps follows below.)
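Here's what the port group piece of those steps looks like scripted — a minimal pyVmomi sketch, assuming you've already fetched the dvs object; the names, VLAN IDs, and port count are made up:

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def add_portgroup(dvs, name, vlan_id, num_ports=128):
    """Create one distributed port group with its VLAN set (steps 2-3 above)."""
    pg = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
    pg.name = name
    pg.numPorts = num_ports
    pg.type = "earlyBinding"     # static binding, the usual default
    port = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
        vlanId=vlan_id, inherited=False)
    pg.defaultPortConfig = port
    WaitForTask(dvs.AddDVPortgroup_Task([pg]))

# One port group per traffic type -- names and VLANs are hypothetical.
# for name, vlan in [("dvpg-mgmt", 10), ("dvpg-vmotion", 20), ("dvpg-vm", 30)]:
#     add_portgroup(dvs, name, vlan)
```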
- Option 1 – Static Port Group to NIC Mapping
- Take your traffic types and set the teaming options individually.
- Ex. iSCSI gets one uplink with explicit failover, VM gets 2 uplinks with Load Based Teaming
- One NIC for other types with Explicit Failover
- Very static, simple, maps to existing customer methods (see the teaming sketch after the switch config steps below).
- Physical Switch Config Steps
- Trunk
- Allow the needed VLANs (or deny the rest)
- Port Fast
- BPDU Guard
- No Link Agg
- Don’t need Link State Tracking if you have a mesh topology
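And here's roughly how those per-traffic-type teaming options look through the API, as a hedged pyVmomi sketch with hypothetical uplink names; each policy would be dropped into the matching port group's defaultPortConfig as its uplinkTeamingPolicy:

```python
from pyVmomi import vim

def teaming(mode, active, standby=None):
    """Build an uplink teaming policy for a distributed port group.

    mode: "failover_explicit" for the static iSCSI mapping above,
          "loadbalance_loadbased" for LBT on the VM port group.
    """
    team = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    team.policy = vim.StringPolicy(value=mode)
    team.uplinkPortOrder = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=active, standbyUplinkPort=standby or [])
    return team

# Hypothetical uplink names, mirroring the example above.
iscsi_team = teaming("failover_explicit", ["Uplink 1"], ["Uplink 2"])
vm_team = teaming("loadbalance_loadbased", ["Uplink 3", "Uplink 4"])
```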
- Option 2 – Static with Enhancements
- 2 Gig for iSCSI up from 1 Gig
- 2 Gig for vMotion up from 1 Gig
- iSCSI B/W improvement via iSCSI MPIO – 2 separate PGs, binding each vmkernel NIC to its own physical vmnic
- vMotion B/W improvement via multi-NIC vMotion (sketch after this option)
- Good chart showing this one just like Option 1
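Multi-NIC vMotion boils down to two mirrored vMotion port groups, each active on a different uplink, so each vmkernel port lands on its own physical NIC. A hedged pyVmomi sketch (port group and uplink names are made up):

```python
from pyVim.task import WaitForTask
from pyVmomi import vim

def vmotion_pg(name, active_uplink, standby_uplink):
    """One of the two mirrored port groups used for multi-NIC vMotion."""
    pg = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
        name=name, type="earlyBinding")
    team = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortTeamingPolicy()
    team.uplinkPortOrder = vim.dvs.VmwareDistributedVirtualSwitch.UplinkPortOrderPolicy(
        activeUplinkPort=[active_uplink], standbyUplinkPort=[standby_uplink])
    port = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
    port.uplinkTeamingPolicy = team
    pg.defaultPortConfig = port
    return pg

# The two specs simply swap active and standby uplinks.
# WaitForTask(dvs.AddDVPortgroup_Task([
#     vmotion_pg("dvpg-vmotion-1", "Uplink 1", "Uplink 2"),
#     vmotion_pg("dvpg-vmotion-2", "Uplink 2", "Uplink 1"),
# ]))
```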
- Option 3 – Dynamic – use NIOC and config Shares and Limits
- Need B/W info for the traffic types so shares don't land too high or too low – use NetFlow to figure it out.
- Use all uplinks for all traffic types, all active, no standby.
- One Port Group per Traffic type.
- Set NIOC shares; no NIOC limits in this one.
- How Shares Translate to Bandwidth
- Management and FT traffic go through uplink 1.
- Regular share math, frankly – each class gets (its shares ÷ total active shares) × the uplink's bandwidth, per the earlier sketch. (Checking what's actually configured via the API is sketched below.)
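To sanity-check the shares on a real switch, a small pyVmomi sketch that turns NIOC on and dumps each system traffic pool's settings; it's read-only apart from the enable call (writing new values goes through the switch's UpdateNetworkResourcePool method, omitted here), and the dvs lookup is assumed to have happened elsewhere.

```python
def show_nioc(dvs):
    """Enable NIOC and print each traffic pool's share settings.

    dvs is a pyVmomi vim.DistributedVirtualSwitch, fetched elsewhere.
    """
    dvs.EnableNetworkResourceManagement(enable=True)
    for pool in dvs.networkResourcePool:    # management, vmotion, iSCSI, VM, ...
        alloc = pool.allocationInfo
        limit = "none" if alloc.limit in (None, -1) else alloc.limit
        print(f"{pool.key:>16}  shares={alloc.shares.shares:<5} limit={limit}")
```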
- Option 4 – Rack Mount Server with 10 GigE
- Pretty basic….just 2 10 GigE links and shares are easy.
- Option 5 – Blade Server with 10 GigE
- Access layer moves into the chassis…but same other than that.
Pros and Cons
- Static Option Pros
- Deterministic traffic allocation — very strong.
- Admins have complete control over where traffic flows go.
- Physical separation of traffic through separate physical interfaces.
- Static Option Cons
- Underutilized I/O
- Lots of management overhead, etc.
- Dynamic Pros
- Better use of I/O and the network
- Logical separation
- Traffic SLA via NIOC
- Resiliency via Active/Active Paths
- Dynamic Cons
- Needs every traffic path to handle any traffic type's characteristics — flows can shift between uplinks.
- Need VLAN expertise…may have to debate with the network guys (you're not pruning as many VLANs, or any).
Key Takeaways
- vDS simplifies a lot of stuff.
- Use NIOC and LBT to push up network utilization.
- vDS is very flexible and scalable.
Audience Questions
- What about when vCenter goes down?
- Network continues to operate.
- Customers can’t change the vDS config while vCenter is down.
- A misconfigured management network is the real challenge – the host can’t connect to vCenter & vCenter can’t connect to the hosts.
- To recover from this, use the DCUI locally to reconfigure the management network onto a standard vSwitch.
- Once the host can reach vCenter again, migrate management back off the standard vSwitch.
- Can you export vDS info from one vCenter to another vCenter? (my question)
- Scenario would be setting up a 2nd site for SRM and starting with the same vDS config and then adjusting from there.
- Answer is no….but coming in the next version or two.
- Might we see vDS at the Enterprise license level?
- No….but a lot of discussion around that and a lot of other people want to see it.
- Scripts coming to help with migration from Standard vSwitch to Distributed vSwitch.