VMworld 2016: Architecture Future of Network Virtualization

Session: NET8193R. Bruce Davie, CTO Networking

Software developers need to be treated as a first-class customer. The developer is king.

Network virtualization is the bridge to the future.

Network architecture today: Data plane, control plane, management plane, cloud consumption. Distributed data plane, centralized control.

Management Plane Availability

  • Developers need access to the management plane and it needs higher availability than in years past
  • New: scalable persistence
  • Write and read scalability
  • Durability
  • Shrink-wrapped
  • Consistent snapshots
  • Atomic transactions
  • Driving innovation: Distributed, shared log – No single point of failure
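The distributed, shared log idea above can be sketched in miniature. This is a toy illustration of the principle (replicated appends, quorum reads, no single point of failure), not NSX's actual implementation, which the session did not detail; all names are illustrative.

```python
class LogReplica:
    """One replica of the shared log; holds an ordered list of entries."""
    def __init__(self):
        self.entries = []

    def append(self, entry):
        self.entries.append(entry)

class SharedLog:
    """Writes go to all replicas; a read succeeds when a majority of
    replicas hold the entry, so no single replica is a point of failure."""
    def __init__(self, num_replicas=3):
        self.replicas = [LogReplica() for _ in range(num_replicas)]

    def append(self, entry):
        for r in self.replicas:
            r.append(entry)

    def read(self, index):
        # Majority read: only return a value a quorum of replicas holds.
        values = [r.entries[index] for r in self.replicas
                  if index < len(r.entries)]
        if len(values) > len(self.replicas) // 2:
            return values[0]
        raise RuntimeError("no quorum for log index %d" % index)

log = SharedLog()
log.append({"op": "create-logical-switch", "id": "ls-1"})
print(log.read(0))
```

A real shared log would also need consensus on write ordering; the point here is only that durability comes from replication rather than from one special node.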

Control Plane Evolution

  • Heterogeneity – Hypervisors, gateways, top-of-rack switches, public cloud workloads, containers
  • Scalability – Thousands of hypervisors, 10,000s of logical ports
  • Central control plane – Generalized instructions that don’t require understanding the heterogeneity below
  • Local control plane – Hypervisor-specific controls (vSphere, KVM, Hyper-V, AWS, Azure, etc.)
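The central/local split can be sketched as follows: the central control plane emits one hypervisor-agnostic rule, and each host's local control plane translates it into platform-specific configuration. This is a hedged illustration of the pattern only; the rule format, platform names, and generated commands are hypothetical, not NSX's.

```python
# A generic, platform-agnostic rule as the central control plane might emit it.
GENERIC_RULE = {"logical_port": "lp-7", "allow": "tcp/443"}

def translate(rule, platform):
    """Local control plane: map a generic rule to a platform-specific form.
    The output strings are illustrative, not real NSX configuration."""
    port = rule["logical_port"]
    if platform == "kvm":
        return "ovs: allow tcp dst 443 on " + port
    if platform == "esx":
        return "dfw: allow tcp dst 443 on " + port
    raise ValueError("unsupported platform: " + platform)

# The central plane pushes the same rule everywhere; each host adapts it.
for host, platform in [("host-a", "esx"), ("host-b", "kvm")]:
    print(host, "->", translate(GENERIC_RULE, platform))
```

The design point is that only the thin local layer needs to know about each hypervisor, so the central cluster stays simple as new platforms are added.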

What about non-virtualized workloads?

  • NSX has solutions for this problem

High-performance Data Plane

  • x86 processors can forward hundreds of millions of packets a second
  • DPDK – Data Plane Development Kit from Intel
  • Active-active edge cluster
  • Active-hot-standby for stateful services

Takeaway: Developers are key, and we need to make them successful.

Extend NSX to the public cloud – VMware is starting with AWS

Network virtualization for containers – Put a vSwitch in the guest OS


VMUnderground Opening Acts: Networking


This was a panel discussion hosted by vBrownBag and VMUnderground.

Panelists: Chris Wahl, Scott Lowe, and several others that I didn’t catch. Sorry! These are real-time notes so forgive any typos.

1. SDN was defined because networks are too complex and slow to change.

2. The networking industry is long overdue for a change. Customers are under pressure to be more fluid. The hypervisor can become a natural place for SDN to live.

3. How does SDN change your life or is it important? Networks are inherently frail, and there’s a big human element. SDN will help automate network changes. “This VM needs to talk to that VM, through this firewall.” SDN can take care of the implementation details, and it has checks and balances.

4. The new SDN will enable policy among different devices to be shared, and unify management. Cumulus white box delivery model was mentioned.

5. SDN will be a long term process.

6. What is SDN? Separating the control plane from the data plane. Orchestration now becomes much easier. An API is really another CLI.

7. Step back, look at the problem you are trying to solve, then find the best set of tools. SDN is one tool to solve business problems.

8. How do admin roles change with SDN? The lines continue to blur between server, network and virtualization admins. We have to move past technology. SDN can enable businesses to differentiate their services.

9. The end goal is to enable what you want to do with the network. The specs are changing all of the time. Many business processes are not adapted for SDN.

10.  You will always need “ditch diggers” to rack hardware and plug in cables. The mid-tier network admins are going away. You need to expand your skillset to SDN.

11. The MCSE moved up the stack to be a virtualization administrator. The network admins haven’t had to change for 30 years.

12. Manual QoS configuration will cause you to go bald. SDN can help solve your QoS problems.

13. The key to SDN is to use business requirement justification for implementing SDN. SDN is a tool, and a good tool. “What is the problem I’m trying to solve.” Then you can assemble the right kind of tools.

14. SDN enables deep application introspection, much like VMware enabled easier application management.

15. One of the leading use cases of NSX is micro-segmentation: the “zero trust model.” It’s about having a firewall on every VM. But the overhead of doing that with old technology was huge, and impractical. NSX is also great for multi-tenancy.

16. SDN should not require network folks to be a programmer or use languages like Python. GUIs are coming.

17. Be ready for change, and embrace it.

VMworld 2013: Advanced VMware NSX Architecture

Twitter: #NET5716; Bruce Davie, VMware

This was by far the best session of the day. While my background is not in networking, even I got excited about what software-defined networking can do for an enterprise. The session was a fire hose of advanced NSX details, and the standby line was huge. Even if you aren’t a networking professional, this will have a big impact on how server and virtualization administrators consume network services. According to the speaker, SDN is the biggest change to networking in a generation. NSX is a shipping product, so this wasn’t some pie-in-the-sky PowerPoint slide deck about what may be possible in the future. Per the keynote this morning, eBay, GE, and Citi are using NSX on a massive scale in production.

Why we need network virtualization

  • Provisioning is slow
  • Placement is limited
  • Mobility is limited
  • Hardware dependent
  • Operationally intensive
  • A VM is chained to the network infrastructure

Network Virtualization Abstraction Layer

  • Programmatic provisioning
  • Place any workload anywhere
  • Move any workload anywhere
  • Decoupled from hardware
  • Operationally efficient
  • eBay shrank its network change window from 7 days to a matter of minutes (and they were highly automated to begin with)

What is network virtualization?

  • Provides full L2, L3, L4-7 Network services
  • Requirement: IP transport
  • Starting point is the virtual switch
  • NSX works on vSphere, KVM, and XenServer
  • Controller cluster maintains state
  • The NSX API is how a cloud management platform programmatically creates networks
  • The local vSwitches are programmed with forwarding rules to provide L2, firewall functionality, etc.
  • Packets on the wire are tunneled between hypervisors. You just need IP connectivity.
  • When you change the virtual networks, the underlying physical switches won’t know the difference.
  • NSX Gateway: Connects to physical hosts and across the WAN. It is an ISO image that can run in a VM or on bare metal
  • Big announcement: Hardware partner program
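To make the "programmatic" part concrete, here is a hedged sketch of what a cloud management platform's call into a REST-style controller API might look like. The endpoint path and payload fields here are hypothetical, for illustration only; the real schema lives in the NSX API reference.

```python
import json

def build_create_switch_request(name, vni):
    """Construct the HTTP request a cloud management platform might send
    to provision a logical switch. Path and fields are hypothetical."""
    return {
        "method": "POST",
        "path": "/api/logical-switches",   # hypothetical endpoint
        "body": json.dumps({"display_name": name, "vni": vni}),
    }

req = build_create_switch_request("web-tier", 5001)
print(req["method"], req["path"], req["body"])
```

The contrast with box-by-box CLI configuration is the point: one API call describes the desired network, and the controller programs every affected vSwitch.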

VMware NSX Controller Scale out

  • Controller Cluster
  • Logically centralized, but physically a distributed, highly available, scale-out cluster of x86 servers
  • All nodes are active
  • Start out with three nodes
  • Live software upgrades – Virtual networks stay up, packets keep flowing
  • Workload sliced among nodes
  • Each logical network has a primary and backup node
  • Biggest deployment has 5 nodes and supports 5K hypervisors and 100K ports
  • Fault tolerant
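The slicing described above (each logical network owned by a primary node with a distinct backup) can be sketched with a simple hash-based assignment. The real placement algorithm was not described in the session, so this is an assumption used purely to illustrate the idea.

```python
NODES = ["node-1", "node-2", "node-3"]  # the controller cluster

def assign(logical_network_id):
    """Pick a primary and a distinct backup controller node for a
    logical network. Hash-based placement is an illustrative assumption."""
    i = hash(logical_network_id) % len(NODES)
    primary = NODES[i]
    backup = NODES[(i + 1) % len(NODES)]  # never the same node as primary
    return primary, backup

primary, backup = assign("ls-green")
print("ls-green:", primary, "backup:", backup)
```

Because every node is active and each network has a backup owner, losing one node only forces a failover for the slices it owned, rather than taking down the control plane.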


  • STT for hypervisor to hypervisor comms
  • VXLAN for third party networking devices (chip level support)
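The VXLAN framing that makes "you just need IP connectivity" work is small enough to show directly. Per RFC 7348, an 8-byte VXLAN header carrying a 24-bit VNI (the logical network ID) is prepended to the original frame and carried over UDP (port 4789), so the underlay only ever sees ordinary IP packets between hypervisors.

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header per RFC 7348:
    flags byte 0x08 (VNI-present bit), 3 reserved bytes,
    24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Word 1: flags in the top byte; word 2: VNI in the top 24 bits.
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(5001)
print(hdr.hex())  # '0800000000138900' (0x1389 == 5001)
```

Changing the VNI changes which logical network a frame belongs to without touching the physical switches, which is why reconfiguring virtual networks is invisible to the underlay.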

Visibility and Virtual Networks

  • You can monitor networks via the NSX API
  • You can check health, and a whole slew of functionality for the entire virtual network, from a single point in software
  • Deep visibility into the network state
  • All from a single controller API
  • Can synthetically insert traffic as if the VM sent it

Hardware VTEPs

  • Benefits: Fine-grained access and connect bare metal workloads with higher performance/throughput
  • Same operational model (provisioning, monitoring) as virtual networks
  • Consistent model (HUGE) regardless of workload type, VM or non-VM
  • Partners: Arista, HP, Brocade, Dell, Juniper, Cumulus Networks

Connecting the Physical to the Virtual

  • Physical switch connects to NSX controller cluster API
  • Shares VMAC and PHYMAC databases
  • No multicast requirement for underlay network
  • State sharing to avoid MAC learn flooding
  • Physical ports are treated just like virtual ports
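The state-sharing point above can be sketched as a controller-distributed MAC-to-VTEP table: instead of flood-and-learn, every switch (physical or virtual) looks destinations up the same way. This is an illustrative toy, not the actual OVSDB/VTEP exchange; all names are made up.

```python
mac_table = {}  # MAC address -> VTEP IP, as pushed by the controller

def controller_push(mac, vtep_ip):
    """Controller shares learned state with every participating switch."""
    mac_table[mac] = vtep_ip

def forward(dst_mac):
    """Tunnel straight to the known VTEP; no flooding for known MACs."""
    vtep = mac_table.get(dst_mac)
    return "tunnel to " + vtep if vtep else "unknown MAC: punt to controller"

controller_push("aa:bb:cc:00:00:01", "10.0.0.5")
print(forward("aa:bb:cc:00:00:01"))  # tunnel to 10.0.0.5
print(forward("aa:bb:cc:00:00:02"))  # unknown MAC: punt to controller
```

Because the table is pushed rather than learned by flooding, a physical top-of-rack port and a vSwitch port behave identically from the control plane's point of view.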

Distributed Services

  • NSX architecture allows many services to be implemented in a fully distributed way
  • Examples include firewalls (stateful or stateless), logical routing, and load balancing
  • Scale: No central bottleneck, no hairpinning
  • Ensure all packets get appropriate services applied (e.g. firewall)
  • Distributed L3 Forwarding
  • vSwitch does 2/3 of the work – L2 and L3
  • Controller cluster calculates all needed info and pushes the config to each hypervisor host virtual switches
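Distributed L3 forwarding, as described above, means the controller precomputes the routing table and pushes it to every hypervisor's vSwitch, so inter-subnet traffic is routed locally instead of hairpinning through a central appliance. A minimal sketch, with illustrative addresses and names:

```python
import ipaddress

routes = {}  # logical network -> logical router interface, pushed by controller

def push_routes(table):
    """Controller cluster: compute once, push to every hypervisor vSwitch."""
    routes.update({ipaddress.ip_network(net): iface
                   for net, iface in table.items()})

def route_locally(dst_ip):
    """Each vSwitch performs the L3 lookup itself; no hairpin needed."""
    addr = ipaddress.ip_address(dst_ip)
    for net, iface in routes.items():
        if addr in net:
            return iface
    return "default-gateway"

push_routes({"10.1.0.0/24": "lif-web", "10.2.0.0/24": "lif-db"})
print(route_locally("10.2.0.9"))  # lif-db
```

Every hypervisor holding the same table is what removes the central bottleneck: first-hop routing happens wherever the VM happens to run.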

Connecting across the WAN

  • Option A: Map logical networks to VLANs. Manual process (creating VRFs, etc.)
  • Future: A much more automated solution is coming – the NSX gateway will apply MPLS labels

What’s Next?

  • Snapshot, rollback, what if testing
  • Federation Multi-DC use cases
  • Physical/Virtual Integration
  • Advanced L4-L7 services
  • Use business rules to define compliant networks (e.g. HIPAA, PCI) and make them cookie-cutter