Infrastructure Adventures

10/15/2011

Deploying Cisco UCS VM-FEX for vSphere – Part 3: DVS and Guest Configuration

Filed under: Network, Virtualization — Joe Keegan @ 2:50 PM

In Part 2, I covered what needs to be done in UCSM to configure VM-FEX, how to integrate UCSM with VMware, and the creation of the VM-FEX DVS. In this part I’ll cover configuration of the VM-FEX DVS from a VMware point of view and what you need to do on your guests to enable DirectPath I/O. If you are new to VM-FEX or want a refresher on the concepts, check out Part 1.

DVS Configuration

I’m not going to go into every step of configuring a DVS; there are plenty of places to get that information. I just want to cover some considerations when dealing with VM-FEX.

As mentioned in Part 2, your DVS will need to have one or more up-link ports configured. These up-links don’t appear to be used for any traffic: they do not show up in the MAC table on the Fabric Interconnects, and VMware does not report any traffic on them. But they are needed, I would guess mostly because vCenter expects them to be there.

In my config I have the up-links for the VM-FEX DVS configured like any other vSwitch, one vNIC on Fabric-A and one on Fabric-B. The vNICs are configured not to fail over, and teaming is handled in ESXi.

When it’s all said and done it looks like this.

For the most part it looks like a normal DVS, except for some subtle differences. First, even though the port groups FEX-UCS_Guest and FEX-UCS_Mgmt are on different VLANs, there is no way to get the VLAN, or most other network information, via the vSphere Client. Instead, you need to look in UCSM for information on the port group configuration.

Also, the VM-FEX DVS includes the port group deleted-pg. This port group is created automatically and the description states “Ports belonging to deleted profiles are stored here”. I’m not entirely sure how ports get here. I would assume that if you deleted a profile, all the vNICs associated with that profile would be deleted as well, but I have not tried that yet. In any case, don’t assign any VMs to this port group since they will not be reachable via the network.

Enabling High Performance Mode

For a VM to utilize DirectPath I/O it must have a reservation for all of its memory. You can check whether DirectPath I/O is active for a VM by editing the VM’s settings and looking at the virtual NIC. Assuming the VM has all of its memory reserved, you will see that DirectPath I/O is active.
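If you prefer to set the reservation from the command line rather than clicking through the vSphere Client, something along these lines in PowerCLI should do it. This is just a sketch; the vCenter address and VM name are placeholders for your own environment.

# Reserve all of the VM's configured memory so DirectPath I/O can engage
Connect-VIServer -Server vcenter.example.com
$vm = Get-VM -Name "web01"
$vm | Get-VMResourceConfiguration | Set-VMResourceConfiguration -MemReservationMB $vm.MemoryMB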

You can also view all the VMs connected to the VM-FEX DVS and whether DirectPath I/O is active for each one.

Above you can see that these ten VMs on the DVS are in High Performance mode and have DirectPath I/O active. Normally the maximum number of DirectPath I/O devices supported per ESXi server is eight, but apparently VM-FEX in high performance mode does not count against that limit. In fact, if you look at the ESXi Advanced Settings, where you configure devices for DirectPath I/O, or display DirectPath I/O devices via PowerCLI, you won’t see any of the VM-FEX vNICs configured for DirectPath I/O. This must be part of the VM-FEX and VEM special sauce.
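As an example of the kind of PowerCLI check I mean, the snippet below lists the DirectPath I/O (passthrough) devices for a host and for a VM; the VM-FEX dynamic vNICs don’t show up in either list. The host and VM names are placeholders.

# List the devices configured for DirectPath I/O passthrough
Get-PassthroughDevice -VMHost (Get-VMHost "esx01.example.com")
Get-PassthroughDevice -VM (Get-VM "web01")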

Now that I have VM-FEX configured for High Performance mode I plan to do some performance testing. I’d like to understand not only how VM-FEX impacts network performance, but also what impact it has on CPU utilization. Stay tuned for a future installment that will cover the results of my testing.

10/09/2011

Deploying Cisco UCS VM-FEX for vSphere – Part 2: UCSM Config and VMware Integration

Filed under: Network, Virtualization — Joe Keegan @ 5:49 PM

In Part 1 of this series I covered the concepts necessary to understand Cisco’s VM-FEX. In this part I’ll cover how to go about configuring UCS Manager and performing the necessary VMware integration.

Configuration of UCS Policies

First we need to configure the policies in UCS that will allow our server profile to support VM-FEX.

Dynamic vNIC Connection Policy

The first policy relates to the vNICs that will be used for each VM. To create a Dynamic vNIC Connection Policy, look under the Policies section of the LAN tab, right-click, and select the option to create a new policy. You’ll get the following screen, in which you will need to fill out or select the relevant options.

The important options are:

Number of Dynamic vNICs – This is the number of vNICs that will be available for dynamic assignment to VMs. Remember that the VIC has a limit to the number of vNICs that it can support, and this limit is based on the number of uplinks between the IOM and the FI. At least this is the case with the 2104 IOM and the M81KR VIC, which supports ((# IOM Links * 15) – 2) vNICs. Also remember that your ESXi server will already have a number of vNICs used for other traffic such as Mgmt, vMotion, storage, etc., and that these count against the limit. See the quick calculation after this list for a worked example.

Adapter Policy – This determines the vNIC adapter config (HW queue config, TCP offload, etc) and you must select VMWarePassThru to support VM-FEX in High Performance mode.

Protection – This determines the initial placement of the vNICs: all of them are placed on Fabric A or Fabric B, or they are alternated between the two fabrics if you just select the “Protected” option. Failover is always enabled on these vNICs, and there is no way to disable the protection.
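As a quick worked example of the vNIC math from the Number of Dynamic vNICs item above, here it is in PowerShell. The link count and static vNIC count are only illustrative; plug in your own numbers.

# vNIC budget for an M81KR behind a 2104 IOM, per the (links * 15) - 2 formula
$iomLinks    = 4                      # uplinks between the IOM and the FI (example)
$maxVnics    = ($iomLinks * 15) - 2   # 58 vNICs supported by the VIC
$staticVnics = 6                      # e.g. Mgmt, vMotion, storage and DVS up-links (example)
$maxVnics - $staticVnics              # 52 vNICs left for dynamic assignment to VMs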

BIOS Policies

You need to ensure that all the BIOS virtualization options are enabled on the blades so they can support VM-FEX. To do this, create a BIOS policy and set the Virtual Technology (VT) option under the Processor section, along with all the options under the Intel Directed IO section, to enabled.

While it seems like these would all be enabled by default, when I looked at a blade that uses the default BIOS policy with each option set to “Platform Default”, it had Coherency Support set to disabled. So make sure to create the policy to ensure everything is set correctly.

Create UCS Service Profile

To create a service profile that supports VM-FEX, create the service profile using “expert mode”. You will need to select the Dynamic vNIC Connection Policy on the Networking screen (Step 3) and select the BIOS policy on the Operational Policies screen (Step 8).

vNICs for the VM-FEX DVS

As mentioned in the concepts portion of this series, VM-FEX looks like a DVS to vCenter. All virtual switches need to be configured with up-link interfaces so that traffic on the virtual switch can be sent to the physical world, but what about the VM-FEX?

With VM-FEX each VM is assigned to its own vNIC, so it seems logical that the DVS would not need any up-link interfaces, but this is not the case. If you do not configure any up-link interfaces on the VM-FEX DVS, your VMs’ NICs will be disconnected when they boot up and they will not be assigned a Veth on the UCS Fabric Interconnect.

So you need to make sure to configure a pair of static vNICs to be used as DVS up-links. The static vNICs used for up-links do not have to be configured for any specific VLANs or adapter policy. As far as I can tell they are not actually used for any traffic.

Assign Service Profile to a Blade

You should be able to see the dynamic vNICs under the network tab of the service profile once it’s assigned to a blade. You may need to boot the blade before they show up and it should look something like this.

Install ESXi 5.0 and Cisco VEM Software Bundle

Next you need to install ESXi 5.0 on the blade. There is no Cisco or UCS specific ESXi image or installer; just use the one you can download from VMware.

Once ESXi is installed you will need to install the Cisco VEM Software Bundle. You access the bundle from the UCS Manager launch web page.

This link will bring you to a page with the VIB that you will need to download to your ESXi server. Look for “ESXi 5.0 or later” in the description and find the URL for cross_cisco-vem-v132-4.2.1.1.4.1.0-3.0.4.vib. You will need the URL of the VIB for the next step of installing it on your ESXi server.

Once you have the URL for the VIB you will need to enable SSH on your ESXi server and SSH into the server. Then execute the command in the following example to install the VIB. Make sure to replace the URL in the example below with the URL of the VIB file on your UCSM.

~ # esxcli software vib install --viburl http://hq-demo-ucsm/cisco/vibs/VEM/4.1.0/VEM-4.1.0-patch01/cross_cisco-vem-v132-4.2.1.1.4.1.0-3.0.4.vib
Installation Result
   Message: Operation finished successfully.
   Reboot Required: false
   VIBs Installed: Cisco_bootbank_cisco-vem-v132-esx_4.2.1.1.4.1.0-3.0.4
   VIBs Removed:
   VIBs Skipped:
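If you’d rather double-check the install from PowerCLI instead of SSHing back into the host, Get-EsxCli exposes the same esxcli namespace remotely. A rough sketch, assuming you swap in your own host name:

# Confirm the Cisco VEM VIB is present on the host
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com")
$esxcli.software.vib.list() | Where-Object { $_.Name -match "cisco-vem" }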

Configuring VMware Integration and Creating the VM-FEX DVS

You will need to enable the integration between vCenter and UCSM since the VM-FEX leverages VMware’s DVS.

Modify Extension Key

This is optional, but it makes the integration with vCenter easier to spot (you’ll see why later) and is really simple to do, so I would recommend it.

In UCSM go to the VM tab and click on VMware. On the right side there will be a Modify Extension Key link under Actions. Click on this and enter a name for the UCSM vCenter integration. I just used the name of my UCSM.

VMware Integration Wizard

Next we will run the VMware Integration Wizard, which is launched by selecting Configure VMware Integration, below the Modify Extension Key link. This wizard will take you through the steps to integrate with VMware and create a VM-FEX DVS.

Step 1: Install Plug-in on vCenter Server

The first step involves installing the UCSM vCenter plug-in on your vCenter server. When the VMware Integration Wizard is launched, the first screen will give you the option to export the plug-in. Click the Export button and save the plug-in to your desktop.

Once the plug-in is downloaded, go into your vSphere Client and import the plug-in by selecting Manage Plug-ins… under the Plug-ins menu. The Plug-in Manager screen will launch; right-click on an empty area of the screen and select New Plug-in. Browse to where you downloaded the plug-in and select it. A new plug-in should then appear in the Plug-in Manager with the name that you entered when you modified the extension key.

In my example you can see that there are two plug-ins, one called hq-demo-ucsm and the other called Cisco-UCSM-<snip>. The Cisco-UCSM-<snip> plug-in was from a previous UCSM integration where I did not modify the extension key.
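If you want to see exactly which extension keys vCenter has registered (handy when you end up with leftovers like the Cisco-UCSM-<snip> one above), a quick look at the ExtensionManager from PowerCLI will list them. A minimal sketch:

# List the extension keys registered with vCenter; the UCSM integration
# appears under the key you set with Modify Extension Key
(Get-View ExtensionManager).ExtensionList | Select-Object Key, Company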

Step 2: Define VMware DVS

The next step defines the information that will be used to create the DVS. This step’s screen looks like this.

You need to make sure that the vCenter info and Datacenter name match what you currently have in your VMware environment.

The DVS folder name and DVS name can be whatever you would like, but obviously make them something that makes sense to you. Also make sure that the DVS is set to Enabled.

Step 4: Define Port Profile

This step is used to define the port profile that will be used to create the port groups on the DVS.

Here you enter the name of the port profile, the desired policies, and VLANs. In general you are going to select one VLAN and make it native (untagged). If you select more than one VLAN, then each vNIC will be configured as a trunk and the VM will have to be configured to do 802.1q.

The second part is the Profile Client information where you enter a name and select which DVS you want to create the port group on. In my config I just named the profile client the same as the port profile and selected the DVS I just created. You could choose “All” for the DVS and it would add the port group to any future VM-FEX DVS that is created.

Step 5: Apply Configuration

In this step you just click Finish and all the configuration is applied. If you switch over to the vCenter client, you can watch the DVS being created and configured.

Once all the configuration has completed you should see the new DVS under the Networking Inventory section of the vCenter client.
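You can also confirm this from PowerCLI rather than the GUI. A minimal sketch using the distributed switch support in the standard cmdlets of this era (later PowerCLI versions have dedicated VDS cmdlets):

# List the distributed virtual switches and distributed port groups vCenter knows about
Get-VirtualSwitch -Distributed
Get-VirtualPortGroup -Distributed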

Step 6: Enable High Performance Mode

According to the documentation you are supposed to select high performance mode when using the VMware Integration Wizard, but I didn’t see a place to set it. I’ve run through the wizard a few times and I don’t think it is there. So instead I just changed the setting on the Port Profile’s General tab.

VM-FEX Configuration Complete

If you followed along you should now have VM-FEX deployed to one or more of your ESXi 5.0 servers. In Part 3 of this series I’ll walk through the configuration of the DVS and VMs to support High Performance Mode.

09/29/2011

Deploying Cisco UCS VM-FEX for vSphere – Part 1: Concepts

Filed under: Network, Virtualization — Joe Keegan @ 5:48 PM

Fabric Extender (FEX) Overview

Fabric Extenders (FEX) have been part of Cisco’s Data Center 3.0 strategy since the beginning, with the introduction of the Nexus 5000 and Nexus 2000. Fabric extenders are also utilized in the Cisco UCS, ensuring that all server-to-server traffic is switched through the Fabric Interconnects. There are pros and cons to this design, with a major pro being that the traffic patterns are very predictable. Traffic between Server A and Server B will always go through the Fabric Interconnect, even if Server A and Server B are in the same chassis. As you can see in the diagram below, all traffic is sent through the Fabric Interconnect no matter the source or destination of the traffic.

This is opposed to a traditional switched architecture where each blade chassis performs its own local switching. Here you can see that sometimes the traffic leaves the chassis, such as traffic between Server B and Server C. Other times the traffic is locally switched and never leaves the chassis, such as the traffic between Server A and Server B.

As a network admin this means that you need to make sure that you can reach down into each of the chassis switches. If you need to do a packet capture, then each of those chassis switches must be capable of doing some sort of port mirror. On top of that, you as the network admin need to know where to perform the capture. This can be pretty easy in a small static environment, but in larger environments and cloud-type environments, where a server could move from one blade to another on any given day, things can get a lot more complicated.

With the UCS’s fabric design you’ll always know that traffic between any two servers will traverse the Fabric Interconnect, and you can always look there for your traffic flows, right? Well, not exactly; not if you are like every other company and are running virtual servers.

With virtual servers and virtual switching you end up in the exact same place as you were with local chassis switches. Traffic between two VMs on the same host stays local to the server, handled by the host’s virtual switch. Only if that traffic leaves the server will the traffic flow traverse the Fabric Interconnect. To resolve this issue Cisco has worked with VMware to integrate the Cisco UCS with vSphere’s distributed virtual switching in the form of VM-FEX, previously referred to as UCS Pass Through Switching (PTS).

With VM-FEX all the traffic from one VM to another VM is sent through the Fabric Interconnect, just as if they were physical hosts.

VM-FEX works by leveraging UCS vNICs (not to be confused with VMware vNICs), where each VM is assigned to a UCS vNIC; or, from a more technical aspect, the traffic from each VM is tagged with a specific VNTag.

On top of making sure your traffic flows are handled consistently for all the servers hosted in your UCS, VM-FEX can be used to provide ASIC-based switching for all your VMs, which can potentially improve network performance and lower the CPU utilization of your physical hosts. To see how this works you need to understand the different VM-FEX modes.

VM-FEX Modes

VM-FEX now has two modes: a Standard mode, which was the only mode available in previous versions, and a new High-Performance mode that leverages DirectPath I/O, sometimes also referred to as VMDirectPath (i.e. hypervisor bypass).

Standard mode utilizes the Nexus 1000V VEM Distributed Virtual Switch to direct the traffic from each VM to its assigned UCS vNIC. As seen in this diagram from Cisco, the DVS is still in the path, but it’s as if each VM has its own dedicated port group, with its own dedicated UCS vNIC as the uplink for that port group.

In High-Performance mode VM-FEX utilizes DirectPath I/O to bypass the hypervisor and the DVS. As mentioned, the selling points of this are improved network performance, since it removes layers of software from the network path, and reduced CPU load on the physical host, since it is no longer handling the network traffic. Normally when using DirectPath I/O there are some major drawbacks, in that you lose a lot of functionality, such as vMotion, HA, DRS, snapshots and suspend/resume. This is not the case with DirectPath I/O with VM-FEX, because a VM can switch from High-Performance mode back to Standard mode when it needs to perform an operation where DirectPath I/O is not supported, such as a vMotion.

The diagram below from Cisco gives an overview of how this is done.

Step 1 Shows two VMs configured in VM-FEX High-Performance mode using DirectPath I/O on one host.

Step 2 Shows that when a vMotion for the VM on the left is initiated, it switches back to VM-FEX Standard mode. The VM is no longer using DirectPath I/O and can now be migrated to another host.

Step 3 Shows the VM moved to the second host. The VM is still in Standard mode.

Step 4 Shows the VM switching back to VM-FEX High-Performance mode and starting to use DirectPath I/O once more.

This is pretty amazing since you get all the benefits of DirectPath I/O without many of the drawbacks. And to top it off, it looks like VM-FEX allows you to run more VMs using DirectPath I/O than the vSphere 5.0 limit of 8, since I was able to get 10 VMs running in High-Performance mode and all the VMs reported that DirectPath I/O was active.

VM-FEX Considerations

So if VM-FEX High Performance mode is so amazing, why not use it for every VM? Well, there are some considerations and still a few drawbacks. The first drawback is that each VM using High-Performance mode must have a memory reservation for all of its memory. A VM with 4 GB of vRAM must have a 4 GB memory reservation. This can quickly eat up memory on your ESX hosts and reduce your oversubscription rate.

Second, Fault Tolerance does not support DirectPath I/O, so you cannot run a VM using Fault Tolerance in High-Performance mode. But since both modes are supported on the same DVS, it doesn’t really impact your network design.

The last consideration is that each of your VMs, in both Standard and High-Performance mode, needs a UCS vNIC, and those are limited by the number of connections between your chassis IOMs and the Fabric Interconnects.

Port Profiles, Port Groups and Profile Clients

When creating the VM-FEX it’s important to understand UCS Port Profiles, VMware Port Groups and UCS Profile Clients.

Port Groups are used in VMware virtual networking to define a configuration for a group of VM NICs. A port group defines things like VLAN, traffic shaping, failover, etc., but with VM-FEX this is defined in UCSM via a UCS port profile.

A UCS Port Profile defines the configuration for a group of vNICs in the UCS, covering things like QoS policy, VLANs, pin group, etc. UCS Port Profiles show up in VMware as Port Groups.

You may have multiple clusters and DVSs hosted within a UCS, and you may want some port profiles to be available on certain DVSs while other port profiles span DVSs. This is where the profile client comes in. When you create a profile client you define which port profiles you want to make available to each of your DVSs, and those port profiles will show up as port groups on the specified DVS.

For example, say we are hosting two DVSs, DVS1 & DVS2, in our UCS cluster, along with three Port Profiles – PPA, PPB & PPC. We want PPC available on both DVSs, but only PPA on DVS1 and only PPB on DVS2. We would configure something along these lines.

Here we have three Port Profiles defined (red tinted boxes), along with three Profile Clients (white boxes in the middle). Profile Client A (PCA) is configured to make Port Profile A (PPA) available on DVS1. PPA shows up on DVS1 as Port Group A (PGA). The same is done on DVS2 with Port Profile B. Profile Client C is created and configured to put Port Profile C onto all the DVSs, and in turn Port Group C shows up on both DVSs.

Now that we have the concepts out of the way, I’ll focus on how to configure and implement VM-FEX for vSphere. Stay tuned for Part Two.
