NSX Home LAB Part 4 – Logical Switch

Logical Switch Overview

This next overview of Logical Switch was taken from the great work of Max Ardica and Nimish Desai in the official NSX Design Guide:

The Logical Switching capability in the NSX-v platform provides customers the ability to spin up isolated logical L2 networks with the same flexibility and agility, as it is to spin up virtual machines. Endpoints, both virtual and physical, can then connect to those logical segments and establish connectivity independently from the specific location where they are deployed in the data center network. This is possible because of the decoupling between network infrastructure and logical networks (i.e. underlay and overlay networks) provided by NSX network virtualization.

Logical Switch Overview

The Figure above displays the logical and physical network views when logical switching is deployed leveraging the VXLAN overlay technology that allows stretching a L2 domain (logical switch) across multiple server racks, independently from the underlay inter-rack connectivity (L2 or L3).

With reference to the example of the deployment of the multi-tier application previously discussed, the logical switching function allows us to create the different L2 segments mapped to the different tiers where the various workloads (virtual machines or physical hosts) are connected.

Creating a Logical Switch

It is worth noticing that logical switching functionality must enable both virtual-to-virtual and virtual-to-physical communication in each segment and that the use of NSX VXLAN-to-VLAN bridging is also required to allow connectivity to the logical space to physical nodes, as it is often the case for the DB tier.

LAB Topology Current State

Before starting Lab 4, I deleted one ESXi host to save memory and storage space on my laptop.

So the Compute cluster is built from 2 ESXi hosts, without VSAN.

My shared storage is OpenFiler.

Lab4 Topology Starting Point

Creating the VTEP kernel Interface

For an ESXi host to be able to send VXLAN traffic, we need to create a special VMkernel interface called a VTEP (VXLAN Tunnel End Point).

We have two options for assigning an IP address to this VTEP:

DHCP or an IP pool. My preference is the IP pool method.

Go to Host Preparation:

Create VTEP VMkernel Interface

Click “Configure” in the VXLAN column; as a result, a new form pops up.

The minimum MTU is 1600; do not lower this value.
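That minimum comes from the VXLAN encapsulation overhead: the original guest frame is wrapped in new outer headers. A small sketch of the arithmetic (header sizes assume IPv4 outer headers without options):

```python
# VXLAN adds outer headers around the original (inner) Ethernet frame.
OUTER_ETHERNET = 14  # outer MAC header
OUTER_IPV4 = 20      # outer IP header, no options
OUTER_UDP = 8        # UDP header
VXLAN_HEADER = 8     # VXLAN header carrying the 24-bit VNI

overhead = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER
print(overhead)         # 50 bytes of encapsulation overhead
print(1500 + overhead)  # 1550: a standard 1500-byte guest frame after encapsulation
```

With a 1600-byte MTU there is comfortable headroom for a 1500-byte guest frame plus the 50 bytes of overhead.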


Select “Use IP Pool” and choose New IP Pool.

Create VTEP VMkernel Interface

A new form will show; type your range of IP addresses for the VMkernel IP pool.

Create VTEP IP Pool

Click OK. For the teaming policy, choose Failover (a must for nested ESXi).

After a few seconds, 3 vmk1 interfaces with 3 different IP addresses are created.
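Under the hood an IP pool is just a contiguous range that NSX hands addresses out from, one VTEP address per prepared host. A minimal sketch of that allocation logic (the 192.168.150.x range is a hypothetical pool for this lab's three hosts):

```python
import ipaddress

def allocate_vtep_ips(start: str, end: str, count: int) -> list:
    """Hand out the first `count` addresses of an inclusive pool range (sketch)."""
    first = int(ipaddress.IPv4Address(start))
    last = int(ipaddress.IPv4Address(end))
    if count > last - first + 1:
        raise ValueError("IP pool exhausted")
    return [str(ipaddress.IPv4Address(first + i)) for i in range(count)]

# One address per ESXi host being prepared (hypothetical range):
print(allocate_vtep_ips("192.168.150.51", "192.168.150.60", 3))
# ['192.168.150.51', '192.168.150.52', '192.168.150.53']
```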

Create VTEP VMkernel Interface

The topology with the VMkernel interfaces, shown in black:

Lab4 Topology With VTEP IP address

Create the Segment ID

Each VXLAN has a unique ID, represented as a Segment ID; this number is called the VNI (VXLAN Network Identifier).

Instead of creating a new VNI each time we need a new logical switch, we will create a pool of VNIs.

The VNI numbers start from 5000.
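The VNI field in the VXLAN header is 24 bits wide, so it can address about 16 million segments; NSX starts allocating at 5000, well clear of the VLAN ID range. A quick illustrative sketch of a segment-ID sanity check:

```python
# The VNI is a 24-bit field, so valid values run up to 2**24 - 1.
# NSX begins allocating segment IDs at 5000 (sketch of the range check).
VNI_MIN = 5000
VNI_MAX = 2**24 - 1  # 16777215

def valid_segment_id(vni: int) -> bool:
    return VNI_MIN <= vni <= VNI_MAX

print(valid_segment_id(5000))  # True
print(valid_segment_id(4999))  # False
print(VNI_MAX)                 # 16777215
```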

Click on Segment ID, then Edit, and choose your range:

Create Pools of VNI

Transport Zone

In the simplest sense, a Transport Zone defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure. As previously mentioned, this communication happens leveraging one (or more) specific interface defined on each ESXi host and named VXLAN Tunnel EndPoints (VTEPs).

A Transport Zone extends across one or more ESXi clusters and in a loose sense defines the span of logical switches. To understand this better, it is important to clarify the relationship existing between Logical Switch, VDS and Transport Zone. A VDS can span across a certain number of ESXi hosts, since it is possible to add/remove single ESXi hosts from a specific VDS. In a real life NSX deployment, it is very likely that multiple VDS are defined in a given NSX Domain. Figure 14 shows a scenario where a “Compute-VDS” spans across all the ESXi hosts part of compute clusters, and a separate “Edge-VDS” extends across ESXi hosts in the edge clusters.

Think of a Transport Zone as a large tube that carries all the VNIs inside it.

The zone can work in 3 different modes: Unicast, Multicast and Hybrid. (A separate blog post would be needed to explain these three modes.)

We will choose Unicast, because this mode works without multicast support on the physical switches.

We can decide which clusters join a Transport Zone.

In our lab, both the Management and Compute clusters will join the same Transport Zone, called “Lab Zone”.

Note: an NSX Domain can have more than one Transport Zone.
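The same Transport Zone can also be created through the NSX Manager REST API (a POST to /api/2.0/vdn/scopes in the 6.x releases). The sketch below only builds the XML payload; the element names follow the 6.x API and the cluster MoRef IDs (domain-c7, domain-c9) are hypothetical placeholders for the Management and Compute clusters:

```python
# Sketch: build the vdnScope payload the NSX-v 6.x REST API expects when
# creating a Transport Zone. Endpoint and element names may differ between
# releases; cluster MoRef IDs here are hypothetical placeholders.
def transport_zone_payload(name: str, control_plane_mode: str,
                           cluster_moids: list) -> str:
    clusters = "".join(
        f"<cluster><cluster><objectId>{moid}</objectId></cluster></cluster>"
        for moid in cluster_moids)
    return (f"<vdnScope><name>{name}</name>"
            f"<clusters>{clusters}</clusters>"
            f"<controlPlaneMode>{control_plane_mode}</controlPlaneMode>"
            f"</vdnScope>")

payload = transport_zone_payload("Lab Zone", "UNICAST_MODE",
                                 ["domain-c7", "domain-c9"])
print(payload)
```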

Create Transport Zone

Create the Logical Switch

At this point we can create the logical switch. The function of the logical switch is to connect virtual machines from different ESXi hosts (or the same one).

The magic of NSX is that each ESXi host can be in a different IP subnet.

For this lab, the logical switch will be named “VNI-5000”.

A logical switch is tied to a Transport Zone.
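Creating the logical switch can likewise be scripted against the REST API (a POST to /api/2.0/vdn/scopes/{scopeId}/virtualwires in 6.x). Again this sketch only builds the payload; the scope ID vdnscope-1 in the comment is a hypothetical placeholder:

```python
# Sketch: build the virtualWireCreateSpec XML body for creating a logical
# switch via the NSX-v 6.x REST API (element names may differ by release).
def logical_switch_payload(name: str, tenant: str = "virtual wire tenant") -> str:
    return (f"<virtualWireCreateSpec>"
            f"<name>{name}</name>"
            f"<tenantId>{tenant}</tenantId>"
            f"</virtualWireCreateSpec>")

# Would be POSTed to /api/2.0/vdn/scopes/vdnscope-1/virtualwires
# (vdnscope-1 is a hypothetical Transport Zone scope ID):
print(logical_switch_payload("VNI-5000"))
```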

Create Logical Switch

Results of creating the logical switch:

Logical Switch

Connect virtual machines to/from a Logical Switch

To connect a VM to the logical switch, we need to click the +VM icon:

Connect Virtual Machines to logical switch

Select VM

Connect Virtual Machines to logical switch2

Pick the specific NIC to add to the logical switch:

Connect Virtual Machines to logical switch3

Click Finish

Connect Virtual Machines to logical switch4

Test Logical Switch connectivity

We have two different ways to test logical switch connectivity:

Option 1 GUI:

Double-click the logical switch icon, for example VNI-5000, and select the Monitor tab:

Test Logical Switch connectivity1

For the size of the test packet we have two different options:

“VXLAN standard” or “Minimum”; the difference is the MTU size.

Test Logical Switch connectivity2

VXLAN standard size is 1550 bytes (should match the physical infrastructure MTU) without fragmentation. This allows NSX to check connectivity and verify that the infrastructure is prepared for VXLAN traffic.

Minimum packet size allows fragmentation. Hence, NSX can check only connectivity but not whether the infrastructure is ready for the larger frame size.

Use the Browse button to select the source and destination ESXi hosts:

Test Logical Switch connectivity3

Click “Start Test”:

Test Logical Switch connectivity4

Option 2 CLI:

Use the command:

ping ++netstack=vxlan <IP_address>

For example:

ping ++netstack=vxlan -d -s 1550 <destination_VTEP_IP>

The IP address is the destination VTEP IP address.

The “-d” flag sets the DF (Don't Fragment) bit.

The “-s” flag sets the ICMP payload size.
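Note that -s actually sets the ICMP payload size, not the on-wire MTU; the payload for a given target MTU is the MTU minus the IP and ICMP headers. A quick sketch of that arithmetic (assumes IPv4 without options):

```python
# The vmkping -s value is the ICMP payload size. To exercise a given MTU
# without fragmentation, subtract the IPv4 (20B) and ICMP (8B) headers.
IP_HEADER = 20
ICMP_HEADER = 8

def icmp_payload_for_mtu(mtu: int) -> int:
    return mtu - IP_HEADER - ICMP_HEADER

print(icmp_payload_for_mtu(1600))  # 1572, i.e. ping ++netstack=vxlan -d -s 1572 ...
print(icmp_payload_for_mtu(1550))  # 1522
```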

Useful post:

NSX-v Troubleshooting L2 Connectivity

Lab 4 Summary – Logical Switch:

After creating the logical switch VNI-5000 (marked in yellow), VM1 will be able to talk with VM2.

Note the magic: these two virtual machines do not have underlying L2 connectivity!

LAB 4 Final with Logical Switch Topology

Related Post:

NSX Manager

NSX Controller

Host Preparation

Logical Switch

Distributed Logical Router

NSX Home LAB Part 3

Host Preparation

This post was updated on 18.10.14.

Host Preparation overview

This next overview of Host Preparation was taken from the great work of Max Ardica and Nimish Desai in the official NSX Design Guide:

The NSX Manager is responsible for the deployment of the controller clusters and for the ESXi host preparation, installing on those the various vSphere Installation Bundles (VIBs) to enable VXLAN, Distributed Routing, Distributed Firewall and a user world agent used to communicate at the control plane level. The NSX Manager is also responsible for the deployment and configuration of the NSX Edge Services Gateways and associated network services (Load Balancing, Firewalling, NAT, etc). Those functionalities will be described in much more detail in following sections of this paper.

The NSX Manager also ensures security of the control plane communication of the NSX architecture by creating self-signed certificates for the nodes of the controller cluster and for each ESXi hosts that should be allowed to join the NSX domain. The NSX Manager installs those certificates to the ESXi hosts and the NSX Controller(s) over a secure channel; after that, mutual authentication of NSX entities occurs by verifying the certificates. Once this mutual authentication is completed, control plane communication is encrypted.

Note: in NSX-v software release 6.0, SSL is disabled by default. In order to ensure confidentiality of the control-plane communication, it is recommended to enable SSL via an API call. From 6.1 release the default value is changed and SSL is enabled.

In terms of resiliency, since the NSX Manager is a virtual machine, the recommendation is to leverage the usual vSphere functionalities such as vSphere HA to ensure that the NSX Manager can be dynamically moved should the ESXi host where it is running experience a failure. It is worth noticing that such a failure scenario would temporarily impact only the NSX management plane, while the already deployed logical networks would continue to operate seamlessly.

Note: the NSX Manager outage may affect specific functionalities (when enabled) as for example Identity Based Firewall (since it won’t be possible to resolve usernames to AD groups), and flow monitoring collection.

Finally, NSX Manager data, including system configuration, events and audit log tables (stored in the internal data base), can be backed up at any time by performing an on-demand backup from the NSX Manager GUI and saved to a remote location that must be accessible by the NSX Manager. It is also possible to schedule periodic backups to be performed (hourly, daily or weekly). Notice that restoring a backup is only possible on a freshly deployed NSX Manager appliance that can access one of the previously backed up instances.

Host Preparation

Successful host preparation on a cluster will do the following:

  • Install network fabric VIBs (host kernel components) on ESXi hosts in the cluster.
  • Configure host messaging channel for communication with NSX manager.
  • Make hosts ready for Distributed Firewall, VXLAN & VDR configuration.

Host Preparation


UWA (User World Agent) is a TCP (SSL) client that communicates with the Controller using the control plane protocol.

The UWA communicates with the message bus agent to retrieve control-plane-related information from the NSX Manager.

We can think of the UWA as middleware between the ESXi kernel modules and the NSX Controller.

The UWA deployment steps:

  • The agent is packaged into VXLAN VIB (vSphere Installation Bundle)
  • Installed by NSX Manager via EAM (ESX Agent Manager) during host preparation
  • Runs as a service daemon on ESXi: netcpa


VTEP (VXLAN Tunnel End Point)

  • VMkernel interface which serves as the endpoint for encapsulation/de-encapsulation of VXLAN traffic
  • Collects network information, which is then reported to the Controller via the User World Agent (UWA)

Preparing the ESXi Hosts

Both the UWA and VTEP are installed on the ESXi hosts in one easy step: click the Install button 🙂

We will do it for both clusters, Management and Compute.

Host Preparation

After a few seconds we will get these results:

Successful Host Preparation

If you face issues with Host Preparation, you can read this post: NSX-v Host Preparation.

We can verify the status of the UWA and VXLAN from the CLI. SSH to ESX-COMP-1.nsx.local.

UWA status verification

/etc/init.d/netcpad status

UWA netcpad status

From esxtop we can see the daemon running:



In NSX we have 3 different VIBs. The VIB names are:




Verify VXLAN VIB is installed:

esxcli software vib get --vibname esx-vxlan



Summary of Part 3 – System Level Architecture

We installed the UWA and the VXLAN VIBs; the results of these steps from a high-level view:

  • NSX Manager deploys Controllers and prepares vSphere Clusters for VXLAN
  • Controllers are clustered for scale out and high availability
  • VTEPs collect network information, which is then reported to the Controller via User World Agent (UWA)

NSX Infrastructure Architecture


NSX Home LAB Part 2

NSX Controller

This post was updated on 18.10.14.

NSX Controller Overview



The NSX control plane runs in the NSX controller. In a vSphere-optimized environment with VDS the controller enables multicast free VXLAN and control plane programming of elements such as Distributed Logical Routing (DLR).

In all cases the controller is purely a part of the control plane and does not have any data plane traffic passing through it. The controller nodes are also deployed in a cluster of odd members in order to enable high-availability and scale.

The Controller roles in the NSX architecture are:

  • Enables the VXLAN control plane by distributing network information.
  • Controllers are clustered in odd numbers (1, 3) for scale-out and high availability.
  • A TCP (SSL) server implements the control plane protocol.
  • An extensible framework that supports multiple applications: currently VXLAN and Distributed Logical Router.
  • Provides a CLI interface for statistics and runtime states.
  • Clustering, data persistence/replication, and the REST API framework from NVP are leveraged by the controller.

This next overview of NSX Controller was taken from the great work of Max Ardica and Nimish Desai in the official NSX Design Guide:

The Controller cluster in the NSX platform is the control plane component that is responsible in managing the switching and routing modules in the hypervisors. The controller cluster consists of controller nodes that manage specific logical switches. The use of controller cluster in managing VXLAN based logical switches eliminates the need for multicast support from the physical network infrastructure. Customers now don’t have to provision multicast group IP addresses and also don’t need to enable PIM routing or IGMP snooping features on physical switches or routers.

Additionally, the NSX Controller supports an ARP suppression mechanism that reduces the need to flood ARP broadcast requests across the L2 network domain where virtual machines are connected. The different VXLAN replication mode and the ARP suppression mechanism will be discussed in more detail in the “Logical Switching” section.

For resiliency and performance, production deployments must deploy a Controller Cluster with multiple nodes. The NSX Controller Cluster represents a scale-out distributed system, where each Controller Node is assigned a set of roles that define the type of tasks the node can implement.

In order to increase the scalability characteristics of the NSX architecture, a “slicing” mechanism is utilized to ensure that all the controller nodes can be active at any given time.

Slicing Controller

The above illustrates how the roles and responsibilities are fully distributed between the different cluster nodes. This means, for example, that different logical networks (or logical routers) may be managed by different Controller nodes: each node in the Controller Cluster is identified by a unique IP address and when an ESXi host establishes a control-plane connection with one member of the cluster, a full list of IP addresses for the other members is passed down to the host, so to be able to establish communication channels with all the members of the Controller Cluster. This allows the ESXi host to know at any given time what specific node is responsible for a given logical network.

In the case of failure of a Controller Node, the slices for a given role that were owned by the failed node are reassigned to the remaining members of the cluster. In order for this mechanism to be resilient and deterministic, one of the Controller Nodes is elected as a “Master” for each role. The Master is responsible for allocating slices to individual Controller Nodes and determining when a node has failed, so to be able to reallocate the slices to the other nodes using a specific algorithm. The master also informs the ESXi hosts about the failure of the cluster node, so that they can update their internal information specifying what node owns the various logical network slices.

The election of the Master for each role requires a majority vote of all active and inactive nodes in the cluster. This is the main reason why a Controller Cluster must always be deployed leveraging an odd number of nodes.

Controller Nodes Majority

Figure above highlights the different majority number scenarios depending on the number of Controller Cluster nodes. It is evident how deploying 2 nodes (traditionally considered an example of a redundant system) would increase the scalability of the Controller Cluster (since at steady state two nodes would work in parallel) without providing any additional resiliency. This is because with 2 nodes, the majority number is 2 and that means that if one of the two nodes were to fail, or they lost communication with each other (dual-active scenario), neither of them would be able to keep functioning (accepting API calls, etc.). The same considerations apply to a deployment with 4 nodes that cannot provide more resiliency than a cluster with 3 elements (even if providing better performance).
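The majority numbers in the figure follow directly from the quorum rule: more than half of the deployed nodes must agree. A one-line sketch makes the even-node argument concrete:

```python
# Quorum (majority) needed for a cluster of `nodes` members: more than half.
# This is why an even node count adds no resiliency over n-1 nodes.
def majority(nodes: int) -> int:
    return nodes // 2 + 1

for n in (1, 2, 3, 4, 5):
    print(n, majority(n))
# 2 nodes need 2 -> losing one loses quorum, no better than a single node;
# 4 nodes need 3 -> tolerates one failure, same as a 3-node cluster.
```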

Note: NSX currently (as of software release 6.1) supports only clusters with 3 nodes. The various examples above with different numbers of nodes were given just to illustrate how the majority vote mechanism works.

NSX controller nodes are deployed as virtual appliances from the NSX Manager UI. Each appliance is characterized by an IP address used for all control-plane interactions and by specific settings (4 vCPUs, 4GB of RAM) that cannot currently be modified.

Downsizing NSX Controller

In order to ensure reliability to the Controller cluster, it is good practice to spread the deployment of the cluster nodes across separate ESXi hosts, to ensure that the failure of a single host would not cause the loss of majority number in the cluster. NSX does not currently provide any embedded capability to ensure this, so the recommendation is to leverage the native vSphere DRS anti-affinity rules to avoid deploying more than one controller node on the same ESXi server.

For more information on how to create a VM-to-VM anti-affinity rule, please refer to the following KB article. An example of such a rule:

NSX Management Cluster and DRS Rules


Deploying NSX Controller

From the NSX Controller menu, we click the green + button.

Deploying Controller

The Add Controller window pops up; we will place the controller in the Management Cluster.


The IP Pool box is needed to automatically allocate an IP address for each controller node.


After clicking OK, the NSX Manager will deploy a controller node in the Management Cluster.


We will need to wait until the node status changes from Deploying to Normal.


At this point we have one node in the NSX Controller cluster.


If you have a problem deploying the controller, read my post:

Deploying NSX-V controller failed

We can now SSH to Controller and run some show commands:

show control-cluster status

nvp-controller # show control-cluster status
Type Status Since
Join status: Join complete 04/16 02:55:19
Majority status: Connected to cluster majority 04/16 02:55:11
Restart status: This controller can be safely restarted 04/16 02:55:17
Cluster ID: 14d6067f-c1d2-4541-ae45-d20d1c47009f
Node UUID: 14d6067f-c1d2-4541-ae45-d20d1c47009f

Role Configured status Active status
api_provider enabled activated
persistence_server enabled activated
switch_manager enabled activated
logical_manager enabled activated
directory_server enabled activated

Join status: when this node joined the cluster and the status of the join; in this output we get “Join complete”.

Check which nodes are up and running, and their IP addresses:

show control-cluster startup-nodes

nvp-controller # show control-cluster startup-nodes

Controller Cluster and High Availability

Controller Cluster

For testing, we will install 3 nodes joined as one controller cluster.

Install Controller Cluster

From the startup-nodes output we can see we have 3 nodes.

show control-cluster startup-nodes

nvp-controller # show control-cluster startup-nodes

One of the node members will be elected as Master.

To see which node member was elected as Master, we can run the command:

show control-cluster roles

Master Election

Node 1 was chosen as Master.

Now let's check what happens if we restart Node 1.

Restart Node1

After a few seconds, Node 2 is elected as Master:

Node2 Elected as Master

To save memory on my laptop, I will keep Node 1 and delete Node 2 and Node 3.

Want to know more about how to troubleshoot NSX controllers?

Read my post:

Troubleshooting NSX-V Controller

Summary of Part 2

We installed the NSX Controllers and saw the high-availability functionality of the NSX cluster.

Lab topology:

Summary of home lab part 2


NSX Home LAB Part 1

Built the NSX infrastructure

This post was updated on 18/10/2014.

Thanks to VMware for giving me a DELL PRECISION M4800 today.

It’s about time to create my first NSX home lab inside my Laptop.

Lab Hardware Specification for M4800


4th generation Intel® Core™ i7 processors, up to Core i7 Extreme Edition, Intel vPro™ advanced management on select CPUs


4 DIMM slots: 32GB 1600MHz

Hard Drive

SSD (solid-state drive) 512GB SATA 6Gb/s

Lab topology from 5,000 feet

Any NSX implementation will need a Management Cluster and a Compute Cluster.

Lab Topology

Both Layer 3 and Layer 2 transport networks are fully supported by NSX, however to most effectively demonstrate the flexibility and scalability of network virtualization a L3 topology with different routed networks for the Compute and Management/Edge Clusters is recommended.

Management VM Sizing Requirements:

Management VM Sizing

WorkStation VM and Nested VM

Some of the VMs will be installed as virtual machines inside VMware Workstation; others will be part of the nested ESXi installation.

Workstation VM       Nested VM
vCenter              All NSX components
ESXi hosts           Win 2008 A/D, DNS, CA
WAN Router (Olive)   Test VM Win7/Linux

WorkStation Network Connectivity

We will use 4 Network Adapters:

WorkStation Network Connectivity

Adapter Name        Function         Workstation Adapter   vSphere Function
Network Adapter     Management       VMnet1                vMotion, VSAN, Management (Active/Standby NIC teaming)
Network Adapter 1   Management       VMnet1                vMotion, VSAN, Management (Active/Standby NIC teaming)
Network Adapter 2   Transport Zone   VMnet3                VXLAN

NIC teaming mode for Management will be set to failover (this is mandatory in a nested environment).

Lab Topology Before NSX

The starting point for this lab assumes we know how to install vCenter and ESXi with a distributed switch.

The whole lab will run on top of VMware Workstation 10.

The vSphere infrastructure is built from 2 ESXi clusters:

a Management Cluster with 1 ESXi host,

and a Compute Cluster with 3 ESXi hosts.

Lab Topology Before NSX

Lab vSphere Versions

For this lab we will use ESXi 5.5U1, and vCenter will be the Virtual Appliance 5.5U1.


The Compute Cluster with its 3 ESXi hosts will form one VSAN cluster.

There are many blogs on this topic, so I will not dwell on it.

I used William Lam's blog.


Since my hard drive is an SSD, ESXi sees all disks as SSD, so I needed to mark some of them as non-SSD.

esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device mpx.vmhba1:C0:T2:L0 --option "enable_local disable_ssd"

esxcli storage core claiming reclaim -d mpx.vmhba1:C0:T2:L0

VSAN Disk Management and DataStore

This VSAN cluster is built from 3 ESXi hosts.

Each host has a 5GB SSD and a 25GB non-SSD disk.

The end result is a total VSAN datastore of 48.06 GB.


Installing the NSX Manager

NSX Manager

The NSX Manager's role in the NSX architecture:

The NSX management plane is built by the NSX manager. The NSX manager provides the single point of configuration and the REST API entry-points in a vSphere environment for NSX.

The consumption of NSX can be driven directly via the NSX manager UI. In a vSphere environment this is available via the vSphere Web UI itself. Typically end-users tie in network virtualization to their cloud management platform for deploying applications. NSX provides a rich set of integration into virtually any CMP via the REST API. Out of the box integration is also currently available through VMware vCloud Automation Center (vCAC) and vCloud Director (vCD).

NSX Manager:

• 1:1 mapping between an NSX Manager and vCenter Server (one NSX Manager per vCenter)
• Provides the management UI and API for NSX
• vSphere Web Client plugin
• Deploys NSX Controller and NSX Edge virtual appliances (OVF)
• Installs VXLAN, Distributed Routing and Firewall kernel modules plus the UW Agent on ESXi hosts
• Configures the Controller Cluster via a REST API and hosts via a message bus
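Everything the UI does ultimately goes through that REST API, authenticated with HTTP Basic auth. A minimal sketch of building the authorization header a script would send to the NSX Manager (the credentials are placeholders):

```python
import base64

# Sketch: NSX Manager API calls are plain HTTPS requests with Basic auth,
# which is why any CMP can drive NSX programmatically.
# The credentials below are hypothetical placeholders.
def basic_auth_header(user: str, password: str) -> dict:
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

print(basic_auth_header("admin", "default"))
```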

NSX deployment is a very simple task; all NSX components are deployed from a single OVA image file.

NSX Manager deploy OVA_1

Place it in the Management Cluster.

NSX Manager deploy OVA_2

To save space in the lab, I will use Thin Provisioning (in production, use Thick).

NSX Manager deploy OVA_3

Use the Management Network

NSX Manager deploy OVA_4

Fill in the Password and DNS/NTP

NSX Manager deploy OVA_5

Finish the wizard and power up the VM.

NSX Manager deploy OVA_6

After a few minutes we can access the NSX Manager GUI.


NSX Manager Configure_1

On the main summary page we can see the service status of the NSX Manager.

All services must be in Running state, except the SSH service.

vPostgres is the built-in database for the NSX Manager.

RabbitMQ is the message bus from the Manager to the other NSX components.

The NSX Manager service is the main service.

Clicking on the Manage tab takes us to the General menu.

Time sync is critical for SSO to work properly.

The best way is to use NTP; otherwise, set the time and time zone manually.

NSX Manager Configure_2

Back up your NSX Manager according to your company's backup policy.

NSX Manager Configure_3

Connect NSX Manager to vCenter

This is the place we create the link between NSX Manager and vCenter.

Confirm that the user has administrative privileges.


Lookup Server:

To be able to log in A/D users to the NSX Manager, we will need to configure the SSO Lookup Service (in my lab, the SSO server runs inside the vCenter).

Confirm that the user has admin privileges.

With the Lookup Service we can assign different privileges to different users from A/D, i.e. Role-Based Access Control.

Want to know how? Read this post.

NSX Manager Configure_7

For vCenter, just enter the IP, user and password.

After successfully completing these 2 steps, the results look like this:

NSX Manager Configure_9

If you have a problem with the registration process, read my post:

NSX-V Troubleshooting registration to vCenter


We need to wait around 4 minutes before the NSX menu shows up in the vCenter GUI.

NSX Manager Configure_10

Summary of Part 1

We installed the NSX Manager image.

Registered the NSX Manager in the vCenter.

Registered the SSO service in the vCenter.

The current topology at the end of Part 1:

Summary of Part 1

