NSX Home LAB Part 4 – Logical Switch

Logical Switch Overview

The following overview of the Logical Switch is taken from the great work of Max Ardica and Nimish Desai in the official NSX Design Guide:

The Logical Switching capability in the NSX-v platform provides customers the ability to spin up isolated logical L2 networks with the same flexibility and agility as exists for spinning up virtual machines. Endpoints, both virtual and physical, can then connect to those logical segments and establish connectivity independently from the specific location where they are deployed in the data center network. This is possible because of the decoupling between network infrastructure and logical networks (i.e. underlay and overlay networks) provided by NSX network virtualization.

Logical Switch Overview

The figure above displays the logical and physical network views when logical switching is deployed leveraging the VXLAN overlay technology, which allows stretching an L2 domain (logical switch) across multiple server racks, independently from the underlay inter-rack connectivity (L2 or L3).

With reference to the example of the multi-tier application deployment discussed previously, the logical switching function allows creating the different L2 segments mapped to the different tiers to which the various workloads (virtual machines or physical hosts) are connected.

Creation of Logical Switches

It is worth noting that the logical switching functionality must enable both virtual-to-virtual and virtual-to-physical communication in each segment, and that NSX VXLAN-to-VLAN bridging is also required to allow physical nodes to connect into the logical space, as is often the case for the DB tier.

LAB Topology Current State

Before starting Lab 4, I deleted one ESXi host to save memory and storage space on my laptop.

So the Compute cluster will be built from two ESXi hosts without VSAN.

My shared storage is OpenFiler.

Lab4 Topology Starting Point

Creating the VTEP VMkernel Interface

In order for the ESXi host to be able to send VXLAN traffic, we need to create a special VMkernel interface called a VTEP (VXLAN Tunnel End Point).

We have two options for assigning an IP address to this VTEP: DHCP or an IP pool. My preference is the IP pool method.

Go to Host Preparation:

Create VTEP VMkernel Interface

Click "Configure" in the VXLAN column; as a result, a new form pops up.

The minimum MTU is 1600; do not lower this value.

https://roie9876.wordpress.com/2014/04/29/nsx-minimum-mtu/
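For context, here is the arithmetic behind that number (my own back-of-the-envelope calculation, not taken from the original post): VXLAN encapsulation adds roughly 50 bytes to every frame.

Outer Ethernet header: 14 bytes
Outer IP header: 20 bytes
Outer UDP header: 8 bytes
VXLAN header: 8 bytes

So a 1500-byte guest frame becomes about 1550 bytes on the wire, and an MTU of 1600 covers the encapsulated frame with some headroom to spare.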

Select "Use IP Pool" and choose "New IP Pool".

Create VTEP VMkernel Interface

A new form will show; type your range of IP addresses for the VMkernel IP pool.

Create VTEP IP Pool

Click OK. For the teaming policy, choose Fail Over (a must for nested ESXi).

After a few seconds, three vmk1 interfaces will be created, each with a different IP address from the pool.

Create VTEP VMkernel Interface

The topology with the VMkernel interfaces is shown in black:

Lab4 Topology With VTEP IP address
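If you want to double-check the new VTEP VMkernel interfaces from the ESXi shell, the commands below are a quick sanity check (a minimal sketch; the exact vmk numbers and output depend on your host preparation):

esxcfg-vmknic -l                       # lists all VMkernel interfaces with their IP addresses and MTU
esxcli network ip netstack list        # the dedicated vxlan TCP/IP stack created by host preparation should appear here
esxcli network ip interface ipv4 get   # shows the IPv4 address each vmk interface received from the pool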

Create the Segment ID

Each VXLAN has a unique ID, represented as a Segment ID; this number is called the VNI (VXLAN Network Identifier).

Instead of creating a new VNI each time we need a new Logical Switch, we will create a pool of VNIs.

VNI numbers start from 5000.

Click on Segment ID, then Edit, and choose your range:

Create Pools of VNI
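For reference, the same segment ID pool can also be created through the NSX Manager REST API. The call below is a sketch from memory rather than a verified example, so check the endpoint and XML fields against the NSX-v API guide for your version; the NSX Manager FQDN, credentials and range are placeholders:

curl -k -u admin:VMware1! -X POST \
  -H "Content-Type: application/xml" \
  -d '<segmentRange><name>Lab-Segment-Pool</name><begin>5000</begin><end>5999</end></segmentRange>' \
  https://nsx-manager.lab.local/api/2.0/vdn/config/segments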

Transport Zone

In the simplest sense, a Transport Zone defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure. As previously mentioned, this communication happens leveraging one (or more) specific interface defined on each ESXi host and named VXLAN Tunnel EndPoints (VTEPs).

A Transport Zone extends across one or more ESXi clusters and in a loose sense defines the span of logical switches. To understand this better, it is important to clarify the relationship existing between Logical Switch, VDS and Transport Zone. A VDS can span across a certain number of ESXi hosts, since it is possible to add/remove single ESXi hosts from a specific VDS. In a real life NSX deployment, it is very likely that multiple VDS are defined in a given NSX Domain. The design guide shows a scenario where a “Compute-VDS” spans across all the ESXi hosts that are part of the compute clusters, and a separate “Edge-VDS” extends across ESXi hosts in the edge clusters.

Think of a Transport Zone as a large tube that carries all the VNIs inside it.

The zone can work in three different modes: Unicast, Multicast and Hybrid (a separate blog post will be needed to explain these three modes).

We will choose Unicast because this mode works without multicast support on the physical switches.

We can decide which clusters join the Transport Zone.

In our lab, both the Management and Compute clusters will join the same Transport Zone, called "Lab Zone".

Note: an NSX domain can have more than one Transport Zone.

Create Transport Zone
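The transport zone can be created through the REST API as well. Again this is a hedged sketch, not a copy of the official call: verify the XML schema in the NSX-v API guide, and replace the cluster object IDs (domain-cXX, visible in the vCenter MOB) and the NSX Manager address with your own values:

curl -k -u admin:VMware1! -X POST \
  -H "Content-Type: application/xml" \
  -d '<vdnScope><name>Lab Zone</name><clusters><cluster><cluster><objectId>domain-c7</objectId></cluster></cluster><cluster><cluster><objectId>domain-c26</objectId></cluster></cluster></clusters><controlPlaneMode>UNICAST_MODE</controlPlaneMode></vdnScope>' \
  https://nsx-manager.lab.local/api/2.0/vdn/scopes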

Create the Logical Switch

At this point we can create the Logical Switch. The function of the logical switch is to connect virtual machines on different ESXi hosts (or on the same one).

The magic of NSX is that each ESXi host can be in a different IP subnet.

For this lab, the Logical Switch will be named "VNI-5000".

A logical switch is tied to a Transport Zone.

Create Logical Switch

Results of creating the logical switch:

Logical Switch
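Once the logical switch exists, every ESXi host in the transport zone should know about the new VNI. A quick way to confirm this from the ESXi shell (the esxcli vxlan namespace is added by the NSX VIBs during host preparation; the VDS name Compute_VDS is just an example, so replace it with your own):

esxcli network vswitch dvs vmware vxlan list                                  # shows the VDS that is configured for VXLAN
esxcli network vswitch dvs vmware vxlan network list --vds-name=Compute_VDS   # VNI 5000 should be listed here

If the sub-commands differ on your build, running esxcli network vswitch dvs vmware vxlan on its own prints the available options.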

Connect virtual machines to/from a Logical Switch

To connect a VM to the Logical Switch, we need to click the +VM icon:

Connect Virtual Machines to logical switch

Select VM

Connect Virtual Machines to logical switch2

Pick the specific NIC to add to the logical switch:

Connect Virtual Machines to logical switch3

Click Finish

Connect Virtual Machines to logical switch4

Test Logical Switch connectivity

We have two different ways to test logical switch connectivity:

Option 1 GUI:

Double-click on the Logical Switch icon (for example VNI-5000) and select the Monitor tab:

Test Logical Switch connectivity1

In the "Size of test packet" field we have two different options:

"VXLAN standard" or "Minimum"; the difference is the packet size.

Test Logical Switch connectivity2

VXLAN standard size is 1550 bytes (should match the physical infrastructure MTU) without fragmentation. This allows NSX to check connectivity and verify that the infrastructure is prepared for VXLAN traffic.

Minimum packet size allows fragmentation. Hence, NSX can check only connectivity but not whether the infrastructure is ready for the larger frame size.

Use the Browse button to select the source and destination ESXi hosts:

Test Logical Switch connectivity3

Click “Start Test”:

Test Logical Switch connectivity4

Option 2 CLI:

Use the command:

ping ++netstack=vxlan <IP_address>

for example:

ping ++netstack=vxlan 192.168.150.52 -d -s 1550

The IP address is the destination VTEP IP address.

The -d option sets the DF (don't fragment) bit.

The -s option sets the ICMP payload size (not the full MTU).
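To put numbers on it (my own arithmetic, not from the original post): the IP and ICMP headers add another 28 bytes on top of the payload, so -s 1550 produces a 1578-byte packet, which only gets through if the underlay MTU is comfortably above the default 1500:

ping ++netstack=vxlan 192.168.150.52               # small packet, proves basic VTEP-to-VTEP reachability
ping ++netstack=vxlan 192.168.150.52 -d -s 1550    # 1550 + 28 = 1578 bytes with DF set, exercises the larger MTU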

Useful post:

NSX-v Troubleshooting L2 Connectivity

Lab 4 Summary: Logical Switch

After creating the Logical Switch VNI-5000 (marked in yellow), VM1 will be able to talk with VM2.

Note the magic: the ESXi hosts running these two virtual machines do not share any physical L2 connectivity!

LAB 4 Final with Logical Switch Topology
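As a final check you can simply ping between the two virtual machines themselves. The address below is a placeholder for whatever you configured inside VM2; run the command from VM1's guest OS:

ping 172.16.10.12    # run inside VM1; replace with VM2's IP address

The ping succeeds even though the two ESXi hosts sit in different IP subnets, because the guest traffic is carried inside VXLAN between the VTEPs.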

Related Posts:

NSX Manager

NSX Controller

Host Preparation

Logical Switch

Distributed Logical Router

4 comments on "NSX Home LAB Part 4 – Logical Switch"
  1. koren says:

    Roie, before NSX, vCenter, vmk0 on all ESXi hosts, the NSX Manager etc. were communicating through some 'regular' (VLAN or flat) port group, maybe on vSwitch0 on nic0 on the servers. Is it mandatory to create a 'management' network inside VXLAN for the architecture to work?
    Is it OK to create a VXLAN for, let's say, vmk1 (maybe on nic1 of each server) and use that as a logical switch and not migrate mgmt at all?
    In the second option, the VTEPs will be on vmk1s; is it mandatory to have the VTEPs communicate with the controller, or have only the UWAs communicate with the controller?

    • roie9876@gmail.com says:

      Hello Koren,
      1. Is it mandatory to create a 'management' network inside VXLAN for the architecture to work?
      Answer: What do you mean by "management network inside VXLAN"? All NSX management runs only over
      the management interface of the ESXi host; management does not run over VXLAN.

      2. Is it OK to create a VXLAN for, let's say, vmk1 (maybe on nic1 of each server) and use that as a logical switch and not migrate mgmt at all?
      Answer: Management traffic can't run over the VXLAN VMkernel interface.

      3. In the second option, the VTEPs will be on vmk1s; is it mandatory to have the VTEPs communicate with the controller, or have only the UWAs communicate with the controller?
      Answer: The VTEP/UWA communication to the NSX Controller runs over the management interface of the ESXi host.

  2. xlunt says:

    Hello Roie, I do not think the information presented here on this page is accurate. VMware Workstation does not support an MTU larger than 1500, yet you have posted on this very page GUI ping tests via VXLAN. I think the lab setup was changed or modified in a way not mentioned in the previous posts. I have got the lab set up in Workstation with VMnet 9 assigned for VXLAN-only traffic. Ping tests via GUI and CLI fail; Minimal works fine. All other communications are working except the VXLAN pings. VMK ++netstack=vxlan -d -s 1000 = ok, -s 1550 = fail. My lab setup has physical NICs assigned to the vDS.

  3. Dan says:

    Indeed, I've also realized that the guide misses some really important info about nested ESXi configuration, which prevents fully utilizing VXLAN in a nested environment in VMware Workstation. By default the network adapters for your VMs in Workstation are e1000, which do not understand frames larger than 1500 bytes. For your nested ESXi hosts you need to edit the vmx file and change the adapter type (from ethernet*.virtualDev = "e1000" to ethernet*.virtualDev = "vmxnet3"). Once completed, power up your nested ESXi and you will see 10000 Mb under network adapters. Now if you use these vmxnet3 adapters to build the dvSwitch for VXLAN, you should have no problems with VXLAN communication.
