NSX Home LAB Part 4 – Logical Switch

Logical Switch Overview

The following overview of the Logical Switch is taken from the great work of Max Ardica and Nimish Desai in the official NSX Design Guide:

The Logical Switching capability in the NSX-v platform provides customers the ability to spin up isolated logical L2 networks with the same flexibility and agility as spinning up virtual machines. Endpoints, both virtual and physical, can then connect to those logical segments and establish connectivity independently from the specific location where they are deployed in the data center network. This is possible because of the decoupling between network infrastructure and logical networks (i.e. underlay and overlay networks) provided by NSX network virtualization.

Logical Switch Overview

The figure above displays the logical and physical network views when logical switching is deployed leveraging the VXLAN overlay technology, which allows stretching an L2 domain (logical switch) across multiple server racks, independently of the underlay inter-rack connectivity (L2 or L3).

With reference to the example of the multi-tier application deployment previously discussed, the logical switching function makes it possible to create the different L2 segments mapped to the different tiers where the various workloads (virtual machines or physical hosts) are connected.

Creating a Logical Switch

It is worth noticing that logical switching functionality must enable both virtual-to-virtual and virtual-to-physical communication in each segment, and that NSX VXLAN-to-VLAN bridging is also required to allow physical nodes to connect to the logical space, as is often the case for the DB tier.

LAB Topology Current State

Before starting Lab 4, I deleted one ESXi host to save memory and storage space on my laptop.

So the Compute cluster is now built from two ESXi hosts, without VSAN.

My shared storage is OpenFiler.

Lab4 Topology Starting Point

Creating the VTEP VMkernel Interface

For an ESXi host to be able to send VXLAN traffic, we need to create a special VMkernel interface called a VTEP (VXLAN Tunnel End Point).

We have two options for assigning the IP address of this VTEP: DHCP or an IP pool. My preference is the IP pool method.

Go to Host Preparation:

Create VTEP VMkernel Interface

Click “Configure” in the VXLAN column; as a result, a new form pops up.

The minimum MTU is 1600; do not lower this value.
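As a sanity check on that 1600-byte requirement, here is a quick back-of-the-envelope calculation (header sizes per RFC 7348; nested 802.1Q tags or an IPv6 underlay would add a few more bytes, which is why 1600 rather than exactly 1550 is the recommended floor):

```python
# Why the transport network MTU must be at least 1600 for VXLAN.
INNER_PAYLOAD = 1500   # standard guest VM IP packet (default 1500-byte MTU)
INNER_ETHERNET = 14    # inner Ethernet header carried inside the tunnel
VXLAN_HEADER = 8       # VXLAN header carrying the 24-bit VNI
OUTER_UDP = 8          # outer UDP header
OUTER_IP = 20          # outer IPv4 header

# Size of the outer IP packet that the physical network must carry:
outer_packet = INNER_PAYLOAD + INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IP
print(outer_packet)          # 1550 -> the "VXLAN standard" test size seen later
print(outer_packet <= 1600)  # True: a 1600-byte MTU carries it unfragmented
```

This is also where the 1550-byte "VXLAN standard" test packet used later in this post comes from.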


Select “Use IP Pool” and choose to create a new pool.

Create VTEP VMkernel Interface

A new form will appear; type your range of IP addresses for the VMkernel IP pool.

Create VTEP IP Pool

Click OK. For the teaming policy, choose Fail Over (a must for nested ESXi).

After a few seconds, three vmk1 interfaces will be created, each with a different IP address.

Create VTEP VMkernel Interface

The topology with the VMkernel interfaces, shown in black:

Lab4 Topology With VTEP IP address

Create the Segment ID

Each VXLAN has a unique ID, represented as a Segment ID; this number is called the VNI – VXLAN Network Identifier.

Instead of creating a new VNI each time we need a new Logical Switch, we will create a pool of VNIs.

VNI numbers start from 5000.
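For context, the VNI is a 24-bit field in the VXLAN header, which is why the segment ID space is so much larger than the VLAN space. A quick illustration (the 5000–5999 pool range below is just an example, not a required value):

```python
# The VNI is a 24-bit field, so the theoretical ID space is 2**24 segments,
# far beyond the 4094 usable VLAN IDs of a traditional L2 network.
vni_bits = 24
total_vnis = 2 ** vni_bits
print(total_vnis)  # 16777216

# NSX-v starts allocating segment IDs at 5000; for example, a pool of
# 5000-5999 is enough for 1000 logical switches.
pool_start, pool_end = 5000, 5999
print(pool_end - pool_start + 1)  # 1000
```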

Click on Segment ID, then Edit, and choose your range:

Create Pools of VNI

Transport Zone

In the simplest sense, a Transport Zone defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure. As previously mentioned, this communication happens leveraging one (or more) specific interface defined on each ESXi host and named VXLAN Tunnel EndPoints (VTEPs).

A Transport Zone extends across one or more ESXi clusters and in a loose sense defines the span of logical switches. To understand this better, it is important to clarify the relationship existing between Logical Switch, VDS and Transport Zone. A VDS can span across a certain number of ESXi hosts, since it is possible to add/remove single ESXi hosts from a specific VDS. In a real life NSX deployment, it is very likely that multiple VDS are defined in a given NSX Domain. Figure 14 shows a scenario where a “Compute-VDS” spans across all the ESXi hosts part of compute clusters, and a separate “Edge-VDS” extends across ESXi hosts in the edge clusters.

Think of a Transport Zone as a large tube that carries all the VNIs inside it.

The zone can work in three different modes: Unicast, Multicast, and Hybrid. (A dedicated blog post would be needed to explain these three modes.)

We will choose Unicast because this mode works without multicast support on the physical switches.

We can decide which clusters join the Transport Zone.

In our lab, both the Management and Compute clusters will join the same Transport Zone, called “Lab Zone”.

Note: an NSX Domain can have more than one Transport Zone.

Create Transport Zone

Create the Logical Switch

At this point we can create the Logical Switch. The function of the logical switch is to connect virtual machines from different ESXi hosts (or the same one).

The magic of NSX is that each ESXi host can be in a different IP subnet.

For this lab, the Logical Switch will be named “VNI-5000”.

A logical switch is tied to a Transport Zone.

Create Logical Switch

The result of creating the logical switch:

Logical Switch

Connect virtual machines to a Logical Switch

To connect a VM to a Logical Switch, we need to click the +VM icon:

Connect Virtual Machines to logical switch

Select VM

Connect Virtual Machines to logical switch2

Pick the specific vNIC to add to the logical switch:

Connect Virtual Machines to logical switch3

Click Finish

Connect Virtual Machines to logical switch4

Test Logical Switch connectivity

We have two different ways to test logical switch connectivity:

Option 1 GUI:

Double-click on the Logical Switch icon, for example VNI-5000, and select the Monitor tab:

Test Logical Switch connectivity1

In the “Size of test packet” field we have two different options: “VXLAN standard” or “Minimum”. The difference is the packet size.

Test Logical Switch connectivity2

VXLAN standard size is 1550 bytes (should match the physical infrastructure MTU) without fragmentation. This allows NSX to check connectivity and verify that the infrastructure is prepared for VXLAN traffic.

Minimum packet size allows fragmentation. Hence, NSX can check only connectivity but not whether the infrastructure is ready for the larger frame size.

Use the browse button to select the source ESXi host and the destination ESXi host:

Test Logical Switch connectivity3

Click “Start Test”:

Test Logical Switch connectivity4

Option 2 CLI:

Use the command:

ping ++netstack=vxlan <IP_address>

for example:

ping ++netstack=vxlan -d -s 1550 <destination_VTEP_IP>

The IP address is the destination VTEP IP address.

The “-d” flag sets the DF (don't fragment) bit.

The “-s” flag sets the ICMP payload size; 1550 bytes forces a packet larger than the default 1500-byte MTU, so the test only succeeds if the transport path really supports the larger frames.
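Since -s sets the ICMP payload rather than the on-wire packet size, a small calculation helps choose the value (assuming IPv4 with no IP options; the helper function name is mine):

```python
# ping's -s flag is the ICMP payload size; the resulting IP packet is
# payload + 8 bytes (ICMP header) + 20 bytes (IPv4 header).
def ping_packet_size(payload):
    return payload + 8 + 20

# A 1550-byte payload yields a 1578-byte packet: larger than a default
# 1500-byte MTU, so with the DF bit set it only passes if the VTEP
# transport path really supports the bigger frames.
print(ping_packet_size(1550))  # 1578

# Largest payload that fits a 1600-byte transport MTU without fragmentation:
print(1600 - 8 - 20)  # 1572
```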

Useful post:

NSX-v Troubleshooting L2 Connectivity

Lab 4 Summary – Logical Switch:

After creating the Logical Switch VNI-5000 (marked in yellow), VM1 will be able to talk with VM2.

Note the magic: the ESXi hosts running these two virtual machines do not have L2 connectivity between them!

LAB 4 Final Topology with Logical Switch

Related Posts:

NSX Manager

NSX Controller

Host Preparation

Logical Switch

Distributed Logical Router