VSP3111: Nexus 1000v Architecture, Deployment, Management

This session focused on Cisco's distributed virtual switch, the Nexus 1000v. The speaker was very knowledgeable and a great presenter. Lots of great details, but he was going so fast that I didn't catch all of them. You can check out his blog at jasonnash.com.


  • The VSM is the virtual supervisor module, which acts as the brains of the switch, just like the supervisor in a physical chassis switch.
  • The VEM is the virtual Ethernet module, which is, in essence, a virtual line card that resides on each ESXi host.
  • VSM-to-VEM communications are critical, and you have various deployment options:
    • Layer 2 only: Uses two to three VLANs; this is the default option and the most commonly deployed architecture.
    • Layer 3: Uses UDP over port 4785, so it can be routed.
  • In Layer 2 mode you need to configure the control, management, and packet networks:
    • Management: The endpoint you SSH into to manage the VSM; it also maintains the connection to vCenter. Needs to be routable.
    • Control: VSM-to-VEM communications (this is where most problems occur).
    • Packet: Used for CDP and ERSPAN traffic.
  • Nexus 1000v deployment best practices
    • Place the primary and secondary VSMs on different datastores.
    • You CAN run vCenter on a host that utilizes the N1K DVS
    • ALWAYS, ALWAYS run the very latest code. Latest as of Sept 1, 2011 is 1.4a, which does work with vSphere 5.0.
    • Don’t clone or snapshot the VSM, but DO use regular Cisco config backup commands
    • Always, always deploy VSMs in pairs (no extra licensing cost, so you are dumb not to do it).
  • Port profile types
    • Ethernet profiles: Applied to physical NICs and serve as the uplinks out of the host. These are the uplink profiles.
    • vEthernet profiles: Exposed as port groups in vCenter; changing these is the most common type of administrative change made in the VSM.
  • Uplink teaming
    • N1Kv supports LACP, but the physical switch must support it as well.
    • vPC-HM – Requires hardware support from the upstream switch and is more complex to troubleshoot.
    • vPC-HM w/ MAC pinning – Most common configuration and easy to setup/troubleshoot.
  • On Cisco switches, enable BPDU filter and BPDU guard on the physical switch ports that connect to N1K uplinks.
  • Configure the VSM management, control, and packet VLANs, plus the Fault Tolerance and vMotion VLANs, as “system” VLANs in the N1K so they are available at ESXi host boot time and don’t have to wait on the VSM to come up.
  • For excellent troubleshooting information check out Cisco DOC 26204.
  • You can also check out the N1KV v1.4a troubleshooting guide.
  • The network team may prefer to use the Nexus 1010, which is a hardware appliance that runs the VSMs. This removes the VSM from the ESXi hosts, and could be better for availability, plus the network guys can use a serial cable into the 1010. You would deploy 1010s in pairs, and they have bundles that really bring down the price.
  • You can deploy multiple VSMs on the same VLANs, but just be sure to assign each VSM pair a different “DOMAIN” ID.
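To tie the Layer 2 deployment notes together, here is a minimal sketch of the svs-domain configuration on the VSM. The domain ID and VLAN numbers are example values of my own, not anything from the session:

```
svs-domain
  domain id 100          ! must be unique per VSM pair sharing the same VLANs
  control vlan 260       ! VSM-to-VEM control traffic
  packet vlan 261        ! CDP / ERSPAN traffic
  svs mode L2            ! default Layer 2 control mode
```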
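A sketch of the two port-profile types described above, combining the mac-pinning uplink recommendation with the system-VLAN guidance. Profile names and VLAN numbers are placeholders I made up for illustration:

```
! Ethernet (uplink) profile, applied to physical NICs
port-profile type ethernet SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 260-261,270-271,300
  channel-group auto mode on mac-pinning   ! vPC-HM with MAC pinning
  no shutdown
  system vlan 260-261,270-271              ! control/packet/FT/vMotion up at host boot
  state enabled

! vEthernet profile, shows up as a port group in vCenter
port-profile type vethernet VM-DATA
  vmware port-group
  switchport mode access
  switchport access vlan 300
  no shutdown
  state enabled
```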
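And on the upstream Cisco switch (IOS syntax), the BPDU filter/guard recommendation might look like the following sketch; the interface number and description are assumptions:

```
interface GigabitEthernet1/0/10
  description ESXi host uplink (N1Kv VEM)
  switchport mode trunk
  spanning-tree bpdufilter enable
  spanning-tree bpduguard enable
```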

Not mentioned in this session are additional Cisco products that layer on top of the 1000v, such as the forthcoming Virtual ASA (firewall), a virtual NAM, and the Virtual Security Gateway (VSG). The ASA is used for edge protection, while the VSG would be used for internal VM-to-VM protection.
