VMworld 2013: Advanced VMware NSX Architecture

Twitter: #NET5716; Bruce Davie, VMware

This was by far the best session of the day. While my background is not in networking, even I got excited about what software defined networking can do for an enterprise. The session was also a fire hose of NSX advanced details, and the standby line was huge. Even if you aren’t a networking professional, this will have a big impact on how server and virtualization administrators consume network services. According to the speaker, SDN is the biggest change to networking in a generation. NSX is a shipping product, so this wasn’t some pie-in-the-sky PowerPoint slide deck about what may be possible in the future. Per the keynote this morning, eBay, GE, and Citigroup are using NSX on a massive scale in production.

Why we need network virtualization

  • Provisioning is slow
  • Placement is limited
  • Mobility is limited
  • Hardware dependent
  • Operationally intensive
  • A VM is chained to the network infrastructure

Network Virtualization Abstraction Layer

  • Programmatic provisioning
  • Place any workload anywhere
  • Move any workload anywhere
  • Decoupled from hardware
  • Operationally efficient
  • eBay shrank its network change window from 7 days to a matter of minutes (and they were highly automated to begin with)

What is network virtualization?

  • Provides full L2, L3, and L4-L7 network services
  • Requirement: IP transport
  • Starting point is the virtual switch
  • NSX works on vSphere, KVM, and XenServer
  • Controller cluster maintains state
  • The NSX API is how a cloud management platform programmatically creates and manages virtual networks (a sketch follows this list)
  • The local vSwitches are programmed with forwarding rules to provide L2, firewall functionality, etc.
  • Packets on the wire are tunneled between hypervisors. You just need IP connectivity.
  • When you change the virtual networks, the underlying physical switches won’t know the difference.
  • NSX Gateway: Connects to physical hosts and across the WAN. It is an ISO image that can run in a VM or on bare metal
  • Big announcement: Hardware partner program
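
To make the programmatic-provisioning point concrete, here is a minimal sketch of driving a controller cluster through a REST-style API from Python. The endpoint path, field names, and credentials are illustrative assumptions on my part, not the documented NSX API.

    # Hypothetical sketch of provisioning a logical switch via a
    # REST-style controller API (endpoints and fields are assumptions).
    import requests

    CONTROLLER = "https://nsx-controller.example.com"
    session = requests.Session()
    session.auth = ("admin", "password")  # placeholder credentials
    session.verify = False                # lab only; use real certs in production

    def create_logical_switch(name, transport_zone_id):
        """Ask the controller cluster to create a logical L2 network."""
        body = {"display_name": name, "transport_zone_id": transport_zone_id}
        resp = session.post(f"{CONTROLLER}/api/logical-switches", json=body)
        resp.raise_for_status()
        return resp.json()["id"]  # UUID of the new logical switch

    # A cloud management platform would call this instead of filing a
    # change ticket; the controllers then program each vSwitch's rules.
    web_tier = create_logical_switch("web-tier", "tz-overlay-01")

This is the mechanism behind eBay’s change window collapsing from days to minutes: an API call replaces the human workflow.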

VMware NSX Controller Scale out

  • Controller Cluster
  • Logically centralized, but implemented as a distributed, highly available, scale-out cluster of x86 servers
  • All nodes are active
  • Start out with three nodes
  • Live software upgrades – Virtual networks stay up, packets keep flowing
  • Workload sliced among nodes
  • Each logical network has a primary and a backup node (see the sketch after this list)
  • Biggest deployment has 5 nodes and supports 5K hypervisors and 100K ports
  • Fault tolerant
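
Here is a minimal sketch of what “workload sliced among nodes” could mean in practice: each logical network deterministically hashes to a primary controller, with the next node in the ring as its backup. The hashing scheme is my illustration, not NSX’s actual placement algorithm.

    # Illustrative slicing of logical networks across an active-active
    # controller cluster; not NSX's real algorithm.
    import hashlib

    def assign_nodes(logical_network_id, nodes):
        """Return (primary, backup) controller nodes for one logical network."""
        digest = hashlib.sha256(logical_network_id.encode()).digest()
        i = int.from_bytes(digest[:4], "big") % len(nodes)
        return nodes[i], nodes[(i + 1) % len(nodes)]

    cluster = ["ctrl-1", "ctrl-2", "ctrl-3"]  # start out with three nodes
    primary, backup = assign_nodes("ls-web-tier", cluster)
    # If the primary fails, the backup already holds that slice's state,
    # so the virtual network stays up and packets keep flowing.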

Tunnels

  • STT for hypervisor-to-hypervisor communication
  • VXLAN for third-party networking devices (chip-level support)
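
For a sense of how thin the overlay encapsulation is, here is a sketch of the fixed 8-byte VXLAN header that carries the 24-bit segment ID (VNI) between tunnel endpoints; the inner Ethernet frame rides in a UDP packet over the plain IP underlay.

    # Build the 8-byte VXLAN header (flags word + 24-bit VNI).
    import struct

    def vxlan_header(vni):
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI is a 24-bit field")
        flags = 0x08 << 24                 # 'I' flag set: VNI is valid
        return struct.pack("!II", flags, vni << 8)  # low byte reserved

    assert len(vxlan_header(5001)) == 8    # precedes the inner L2 frame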

Visibility and Virtual Networks

  • You can monitor networks via the NSX API
  • You can view health and a whole slew of other state for the entire virtual network from a single point, in software
  • Hyper visibility into the network state
  • All from a single controller API
  • Can synthetically insert traffic as if the VM sent it
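
A hedged sketch of what monitoring from a single controller API could look like; the URL paths and payloads below are placeholders I invented for illustration, not the real NSX endpoints.

    # Hypothetical monitoring calls against a controller API.
    import requests

    CONTROLLER = "https://nsx-controller.example.com"

    def port_health(port_id):
        """Fetch status for one logical port (link state, counters, ...)."""
        resp = requests.get(f"{CONTROLLER}/api/logical-ports/{port_id}/status",
                            verify=False)  # lab only
        resp.raise_for_status()
        return resp.json()

    def inject_test_frame(port_id, frame_hex):
        """Synthetically inject a frame as if the VM on this port sent it."""
        resp = requests.post(f"{CONTROLLER}/api/logical-ports/{port_id}/inject",
                             json={"frame": frame_hex}, verify=False)
        resp.raise_for_status()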

Hardware VTEPs

  • Benefits: fine-grained access control, and connecting bare-metal workloads at higher performance/throughput
  • Same operational model (provisioning, monitoring) as virtual networks
  • Consistent model (HUGE) regardless of whether the workload is a VM or not
  • Partners: Arista, HP, Brocade, Dell, Juniper, Cumulus Networks

Connecting the Physical to the Virtual

  • Physical switch connects to NSX controller cluster API
  • Shares the VM MAC and physical MAC databases (see the sketch after this list)
  • No multicast requirement for underlay network
  • State sharing avoids MAC-learning floods
  • Physical ports are treated just like virtual ports
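
A toy model of that controller-mediated MAC database: instead of flood-and-learn, every VTEP (virtual or physical) is told where each MAC lives, so unknown-unicast flooding is unnecessary. The data structures here are purely illustrative.

    # Controller's view: MAC address -> owning VTEP tunnel endpoint IP.
    mac_table = {}

    def publish_mac(mac, vtep_ip):
        """A hypervisor or hardware VTEP reports a locally attached MAC."""
        mac_table[mac] = vtep_ip  # controller records it and pushes to peers

    def lookup(mac):
        """Forwarding decision: unicast-tunnel to the owner, never flood."""
        return mac_table.get(mac)

    publish_mac("00:50:56:aa:bb:cc", "10.0.0.11")   # VM on a hypervisor
    publish_mac("00:1b:21:dd:ee:ff", "10.0.0.200")  # bare metal behind a hardware VTEP
    assert lookup("00:50:56:aa:bb:cc") == "10.0.0.11"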

Distributed Services

  • NSX architecture allows many services to be implemented in a fully distributed way
  • Examples include firewalls (stateful or stateless), logical routing, and load balancing
  • Scale: no central bottleneck, no hairpinning
  • Ensure all packets get appropriate services applied (e.g. firewall)
  • Distributed L3 Forwarding
  • The vSwitch does two-thirds of the work: L2 and L3 forwarding
  • The controller cluster calculates all the needed info and pushes the config to each hypervisor host’s virtual switch (sketch below)
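
Here is a minimal sketch of that push model: the controllers compute the full desired state centrally, then distribute per-hypervisor rule sets so enforcement happens at every vSwitch. The rule format is made up for illustration.

    # Expand logical policy into per-hypervisor vSwitch rules and push them.
    def compute_rules(logical_networks):
        rules = {}
        for net in logical_networks:
            for hv in net["hypervisors"]:
                rules.setdefault(hv, []).append(
                    {"net": net["id"], "action": "allow", "proto": "tcp", "dport": 443}
                )
        return rules

    def push(rules):
        for hv, ruleset in rules.items():
            # In a real system this would be an RPC to an agent on each host.
            print(f"pushing {len(ruleset)} rules to {hv}")

    push(compute_rules([{"id": "ls-web", "hypervisors": ["esx-01", "esx-02"]}]))

Because every hypervisor enforces its own slice of the policy, traffic never hairpins through a central firewall or router.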

Connecting across the WAN

  • Option A: Map logical networks to VLANs. Manual process (creating VRFs, etc.; see the sketch after this list)
  • Future: a much more automated solution; the NSX Gateway will label MPLS packets
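
For contrast with that automated future, here is a trivial illustration of what Option A’s manual bookkeeping amounts to today; the names and VLAN IDs are made up.

    # Operator-maintained mapping of logical networks to WAN-facing VLANs.
    vlan_map = {
        "ls-web-tier": 210,  # logical switch -> VLAN ID at the WAN edge
        "ls-db-tier": 220,
    }

    def wan_vlan(logical_switch):
        return vlan_map[logical_switch]  # gateway tags frames with this VLAN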

What’s Next?

  • Snapshot, rollback, what if testing
  • Federation Multi-DC use cases
  • Physical/Virtual Integration
  • Advanced L4-L7 services
  • Use business rules to define compliant networks (e.g. HIPAA, PCI) and make them cookie-cutter