XenServer

Hello there! First of all, I'd like to thank you for visiting my blog, and I sincerely hope you'll find it informative and useful for your work or study. In this post, I'll walk through the workshop I performed with Citrix XenServer at my company and share my findings with you. My intention is to cover the complete XenServer deployment process: preparing server and network requirements, installing the hypervisor, configuring storage, setting up networking, creating VMs, and implementing HA, XenMotion and DR.

I understand that you're probably eager to jump right into the setup and configuration of XenServer; however, I'd first recommend going through some basic concepts about XenServer. In case you're already familiar with the basics of Xen, you can skip this introductory section and continue from the next one.

What is Xen?

Xen is an open-source virtualization hypervisor initially developed at the University of Cambridge Computer Laboratory. Technically, it runs at a more privileged CPU level than any other software on the machine: the hypervisor acts as a thin layer between the hardware and the guest software, while its control domain (dom0) controls and manages resources and tasks on behalf of the other guests.

Later, a company called Citrix built on the Xen platform to create its own virtualization product, and Citrix XenServer was launched. XenServer added many new features to Xen, most notably XenCenter, and became an industry-ready virtualization platform. This same Citrix XenServer is the platform of choice in this workshop.

A few years back, Citrix open-sourced XenServer (the underlying Xen Project is now hosted by the Linux Foundation), and it is currently managed through xenserver.org; not to mention, it's completely free and open source. Besides Citrix, another giant running Xen is Amazon, whose Amazon Web Services (one of the most popular cloud solutions) is also based on Xen.

Why is Xen Important?

Xen is important because it's an open-source virtualization platform, which means it's free; even more importantly, it is developed and maintained by a community of developers from around the globe. For the same reason, it has achieved rapid growth, and the support from the community is excellent.

Introduction to My XenServer Workshop

Since our company specializes in network and system integration as well as support, and we had some pretty good servers and switches sitting in our vault for some time, we decided to perform an enterprise-grade workshop on virtualization technologies, mainly VMware and Citrix XenServer. Because I got involved with XenServer, this blog covers XenServer.

To talk about the devices used in this workshop, we had three Cisco UCS C220 M4 rack servers and two Cisco 3650 switches. We planned to design a highly available and fault-tolerant infrastructure at both the network level and the server level. Regarding storage, as we didn't have a dedicated SAN/NAS device, we used one of the servers to set up a virtual SAN using OpenFiler. Now that we have prepared the three main components of any IT infrastructure (Computing, Networking and Storage), let's begin to plan and design the system architecture.

Architecture Design

XenServer Workshop Architecture

Please observe the above diagram carefully and think about what we can infer from it. I assume you'll find it quite simple, but I'll describe it anyway. The server at the top is for storage: we'll install OpenFiler on it so that it can later serve storage to the other servers. Below the storage server are two switches (Switch1 and Switch2), Layer 3 switches to be specific (in case you're using Layer 2 switches, you'll need a router for inter-VLAN routing). At the bottom are two servers on which we'll install the Citrix XenServer hypervisor. You can also see in the picture that every server has two connections, each going to a separate switch. For example, if a server has two NICs (NIC1 and NIC2), then NIC1 is connected to Switch1 and NIC2 is connected to Switch2.

You might also have noticed an ellipse drawn around each pair of links between server and switch, as well as between the two switches. Those ellipses denote Link Aggregation. I assume you already know about Link Aggregation, but even if you don't, no reason to worry. Link Aggregation (LAG) is simply the logical combination of multiple physical links so that they act as a single link with combined capabilities. For example, imagine your server has two physical NICs of 1 Gbps each and you want a total bandwidth of 2 Gbps. Configuring the two NICs individually won't give you 2 Gbps; they'll just act as two separate 1 Gbps connections, each with its own IP address, and if one NIC fails the other won't take its place. With LAG, we don't assign separate IP addresses to each NIC because there is only one logical interface with a single IP address. On top of that, we get the expected 2 Gbps of aggregate bandwidth, along with benefits like load balancing and high availability.

Note: For LAG to work, the server's NICs must be connected to what is logically the same switch. You can connect your servers to multiple physical switches, but those switches must be stacked together so that the stack acts as a single logical unit.

Installation Procedure

Enough with the basics; let's get into the setup process of the workshop. First, let's begin with the setup and configuration of the servers (Cisco UCS servers in this case). If you're familiar with Cisco's UCS technology, you're probably well acquainted with the Cisco Integrated Management Controller (CIMC). CIMC lets us control and manage a Cisco server over the management network through a web-based interface. With CIMC, we can browse the summary/stats of the server's components, configure BIOS settings, manage disk volumes, access the virtual console (KVM), and so on. In addition, we can mount an ISO image as a virtual CD/DVD via the CIMC KVM and boot the server without needing a CD-ROM, a USB drive, or any sort of physical access, for that matter.

Installing XenServer

After we unpack a server, we need to attach a keyboard and monitor to it for the first time in order to configure CIMC. As soon as the server powers up, pressing F8 takes us to the CIMC configuration screen, where we can specify the management interface and its IP information. Then all we need to do is connect the server's management interface to the network, and we can log into CIMC from a web browser. The default login credentials for a Cisco server are username admin and password password. Next, we configure the logical volumes (RAID) on the disks, which we can do either in the Storage section of CIMC or from the server's console by pressing Ctrl+M or Ctrl+V or similar, depending on the version/model of the server. The storage setup process and configuration vary greatly depending on the server model, the RAID card used, and the HBA for a SAN/NAS setup.

Creating RAID volume from CLI mode.

Now that our disk volume is configured, we can begin installing the XenServer hypervisor on the server. For that, we can launch the KVM console from CIMC, which depends on a Java runtime environment being available in our browser. Once the KVM is launched, we can mount the ISO image of the XenServer installer as a virtual CD/DVD and map it to the server. Next, we restart the server by sending Ctrl+Alt+Del from the KVM's macros, or simply restart it from CIMC. When the server reboots, we enter the BIOS settings or boot-device selection and choose the virtual CD/DVD as the boot device. The server then boots into the XenServer installer from the virtual CD/DVD and the installation begins. The XenServer installation window looks like the figure below:

XenServer Welcome Screen

Now we just need to answer some of the installer's questions, such as preferred language, keyboard layout, location, the disk to install on, network configuration, time, installation of supplemental packs, and so on. That's it! Our XenServer is installed and ready to go. Regarding network configuration, we set each XenServer's IP details on the management network, i.e. 172.16.40.0/24. In this workshop, the management IP of the Xen1 server is 172.16.40.13 and that of Xen2 is 172.16.40.14. Accordingly, the management network's gateway is 172.16.40.254, and it falls in VLAN 5.
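
Before moving on to the storage server, it's worth verifying each host's management interface from its console. Here's a quick sanity check via xe (the fields below are standard xe PIF parameters):

xen1# xe pif-list management=true params=uuid,device,IP,netmask,gateway

This should print the management PIF with the IP details we entered during installation (e.g. 172.16.40.13 on Xen1).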

Installing Storage Server

As mentioned earlier in the workshop description, I'm going to tell you how I set up a virtual SAN for my virtualization environment. First, I downloaded the OpenFiler ISO from their website, mounted it to my server's virtual DVD via the KVM, and booted the server from it. The first few steps of the installation went well; however, when I reached the step where I needed to select and configure a disk volume for OpenFiler, the installer didn't display any disks. I realized the installer lacked the drivers needed to detect the embedded RAID controller present in the server.

Disk Detection Issue in OpenFiler

I tried the driver pack provided by Cisco for this purpose, but unfortunately it didn't work. I also downloaded drivers suggested in many related blogs and articles, but again without success. After trying hard and being very limited in time, I had to switch my OpenFiler system to a virtualized platform: I first installed XenServer on the same server and then created a VM for OpenFiler. Installing OpenFiler in a VM went smoothly, without any disk detection problems. There wasn't much to configure during installation, so OpenFiler was installed quickly and easily.

Note: If you know how to install OpenFiler on Cisco UCS server with embedded MegaRAID controller, please let me know in the comment section. I’ll truly appreciate it.

After OpenFiler was successfully installed, I browsed to its IP address from my web browser and logged in using the default username and password, openfiler and password respectively. From there I created the required LUNs and mapped them.

Storage Volume distribution Chart
LUN Mapping
Setting Up Cisco Switches

I'm using two Cisco 3650 switches here. First, I stacked them together so that I'd have a single logical switch. You can learn how to stack two Cisco switches from here:

After I had my switches stacked together, I first performed basic device configuration, like setting up console, VTY and SSH logins. Next, I created four VLANs: VLAN 5 (Management, 172.16.40.0/24), VLAN 10 (XenMotion, 10.10.10.0/24), VLAN 20 (Storage, 10.10.20.0/24) and VLAN 30 (Data Network, 10.10.30.0/24). I configured gateways for all VLANs using .254 as the last octet of each network. I also used the network 192.168.1.0/24 for our native VLAN, i.e. VLAN 1, which is connected via the uplink.

Then I started connecting my servers to the switches. I connected the NICs of Server-1 to port 13 of each switch, those of Server-2 to port 14 of each switch, and the NICs of the storage server to port 24 of each switch. Let's look at the configuration made on the switch stack:

Creating VLANs:

Cisco-Stack(config)# vlan 5
Cisco-Stack(config-vlan)# name Management-VLAN
Cisco-Stack(config-vlan)# exit
Cisco-Stack(config)# vlan 10
Cisco-Stack(config-vlan)# name XenMotion-VLAN
Cisco-Stack(config-vlan)# exit
Cisco-Stack(config)# vlan 20
Cisco-Stack(config-vlan)# name Storage-VLAN
Cisco-Stack(config-vlan)# mtu 9216
Cisco-Stack(config-vlan)# exit
Cisco-Stack(config)# vlan 30
Cisco-Stack(config-vlan)# name Data-VLAN
Cisco-Stack(config-vlan)# exit

Cisco-Stack(config)# interface vlan 5
Cisco-Stack(config-if)# ip address 172.16.40.254 255.255.255.0
Cisco-Stack(config-if)# exit
Cisco-Stack(config)# interface vlan 10
Cisco-Stack(config-if)# ip address 10.10.10.254 255.255.255.0
Cisco-Stack(config-if)# exit
Cisco-Stack(config)# interface vlan 20
Cisco-Stack(config-if)# ip address 10.10.20.254 255.255.255.0
Cisco-Stack(config-if)# exit
Cisco-Stack(config)# interface vlan 30
Cisco-Stack(config-if)# ip address 10.10.30.254 255.255.255.0
Cisco-Stack(config-if)# exit

You may notice in the configuration for VLAN 20 that I have set "mtu 9216", which means I want to allow frames larger than the default size of 1500 bytes. These larger frames are called jumbo frames, and they save resources when carrying storage traffic, which tends to be large and frequent, because fewer, bigger frames mean less per-frame overhead. We'll set a matching MTU on the Storage network in the XenServer pool too, so that both ends can transmit larger frames.
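
For reference, here's a sketch of how we'll later set the matching MTU on the XenServer side using xe; the UUID below is a placeholder you'd obtain from xe network-list (XenServer commonly uses an MTU of 9000 for jumbo frames, which fits comfortably within the switch's 9216-byte limit):

xen1# xe network-list params=uuid,name-label
xen1# xe network-param-set uuid=<storage-network-uuid> MTU=9000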

Common config for all ports:

Cisco-Stack(config)# interface range GigabitEthernet1/0/13 - 14, GigabitEthernet2/0/13 - 14
Cisco-Stack(config-if-range)# switchport mode trunk
Cisco-Stack(config-if-range)# switchport trunk native vlan 5
Cisco-Stack(config-if-range)# switchport trunk allowed vlan 5,10,20,30
Cisco-Stack(config-if-range)# channel-protocol lacp

The above configuration is common to all switchports connecting to the servers, as they all carry a VLAN trunk with VLANs 5, 10, 20 and 30. I'm setting the native VLAN to 5 because our management network belongs to VLAN 5; you can also think of it as the network in which the servers' physical ports lie. Finally, I enabled the Link Aggregation Control Protocol (LACP) on all of these ports.

Config for Server-1:

Cisco-Stack(config)# interface GigabitEthernet1/0/13
Cisco-Stack(config-if)# description XenServer1-NIC1
Cisco-Stack(config-if)# channel-group 1 mode active

Cisco-Stack(config)# interface GigabitEthernet2/0/13
Cisco-Stack(config-if)# description XenServer1-NIC2
Cisco-Stack(config-if)# channel-group 1 mode active

The "channel-group 1 mode active" statement on both of the above interfaces tells the switch stack to place both links in port-channel 1. The links between Server-1 and the switch stack are now logically aggregated into a single link.
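
To verify that the bundle actually formed, the standard EtherChannel show commands can be run from privileged EXEC mode:

Cisco-Stack# show etherchannel summary
Cisco-Stack# show lacp neighbor

A healthy LACP bundle shows the port-channel flagged SU (Layer 2, in use) with both member ports in the P (bundled) state.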

Config for Server-2:

Cisco-Stack(config)# interface GigabitEthernet1/0/14
Cisco-Stack(config-if)# description XenServer2-NIC1
Cisco-Stack(config-if)# channel-group 2 mode active

Cisco-Stack(config)# interface GigabitEthernet2/0/14
Cisco-Stack(config-if)# description XenServer2-NIC2
Cisco-Stack(config-if)# channel-group 2 mode active

Similar to Server-1, this configuration logically aggregates the links between Server-2 and the switch stack into port-channel 2.

Config for Storage Server:

Cisco-Stack(config)# interface range GigabitEthernet1/0/24, GigabitEthernet2/0/24
Cisco-Stack(config-if-range)# switchport mode trunk
Cisco-Stack(config-if-range)# switchport trunk native vlan 5
Cisco-Stack(config-if-range)# switchport trunk allowed vlan 5,10,20
Cisco-Stack(config-if-range)# channel-protocol lacp
Cisco-Stack(config-if-range)# channel-group 3 mode active
Cisco-Stack(config-if-range)# exit

Cisco-Stack(config)# interface GigabitEthernet1/0/24
Cisco-Stack(config-if)# description Storage-NIC1

Cisco-Stack(config)# interface GigabitEthernet2/0/24
Cisco-Stack(config-if)# description Storage-NIC2

The first section of this configuration is almost the same as the common configuration applied to the switchports connecting the other servers, except for the VLANs being passed. Since this server was dedicated to storage alone, i.e. the OpenFiler system, I only needed to pass the Storage VLAN, VLAN 20, to it. In that case I might not have needed trunk mode at all, as the server would have needed access to just one network. However, since I couldn't install OpenFiler on the bare-metal server and moved it into a virtualized XenServer environment instead, I had to pass additional VLANs, VLAN 5 and VLAN 10, to this server for XenServer management and XenMotion. Therefore, I configured these switchports in trunk mode and added them to port-channel group 3.

Config for Port-Channels:

Cisco-Stack(config)# interface range port-channel 1-2
Cisco-Stack(config-if-range)# switchport mode trunk
Cisco-Stack(config-if-range)# switchport trunk native vlan 5
Cisco-Stack(config-if-range)# switchport trunk allowed vlan 5,10,20,30
Cisco-Stack(config-if-range)# exit
Cisco-Stack(config)# interface port-channel 3
Cisco-Stack(config-if)# switchport mode trunk
Cisco-Stack(config-if)# switchport trunk native vlan 5
Cisco-Stack(config-if)# switchport trunk allowed vlan 5,10,20
Cisco-Stack(config-if)# exit
Cisco-Stack(config)# interface port-channel 1
Cisco-Stack(config-if)# description XenServer-1
Cisco-Stack(config-if)# exit
Cisco-Stack(config)# interface port-channel 2
Cisco-Stack(config-if)# description XenServer-2
Cisco-Stack(config-if)# exit
Cisco-Stack(config)# interface port-channel 3
Cisco-Stack(config-if)# description Storage
Cisco-Stack(config-if)# end

The configuration on the port-channel interfaces is not much different from that on the member interfaces. They are also configured in trunk mode, and the allowed VLAN list on each port-channel must match its member links: port-channels 1 and 2 also carry the Data VLAN (30) for the XenServer hosts, while port-channel 3 carries only the VLANs the storage server needs.
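
As a final check on the trunks, we can confirm which VLANs are actually allowed and forwarding across each port-channel:

Cisco-Stack# show interfaces trunk

Each port-channel should list its allowed VLANs (5, 10, 20 and, for the XenServer hosts, 30) as active and forwarding.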

This concludes the network setup for the workshop. I hope you've been following and understanding what I've done up to now. Please let me know in the comments section if you have any doubts or corrections on my work. Now, let's get started with the post-installation part of XenServer.

XenServer Post-Installation Tasks

After XenServer is installed on the servers, the first thing we need is a way to access and manage those hosts. For that, we can use either the command-line tool xe or the graphical client XenCenter. Since a GUI is much easier to get started with, let's begin there. You can download the XenCenter installer from Citrix's official website and run it on your personal computer or on a dedicated VM for XenServer management. Once XenCenter is installed and running, it looks something like this:

XenCenter Home

Then we need to add the servers to XenCenter, which can be done simply by clicking ADD a server on the XenCenter home screen. A window pops up asking for the server's IP address (management network, 172.16.40.0/24) and login credentials.

Adding a Server to XenCenter

I added all three XenServers (172.16.40.13, 172.16.40.14 and 172.16.40.100) to XenCenter. I then configured the necessary networking on each server by selecting the server and going to its Networking tab. Since I needed to aggregate the two NICs in each server into a single link, I selected those two NICs and created a bond.

Creating a Network Bond

As you can see in the bond creation window, we have four options when creating a network bond. The top two, Active-active and Active-passive, don't require corresponding configuration on the switch they connect to, whereas the lower two (the LACP modes) do. The Active-active setup places both NICs in a forwarding state, making use of both of them, although it doesn't provide aggregated bandwidth or full load balancing. The Active-passive setup is similar in features, except that only one NIC is active while the other remains passive for failover purposes.

The latter two (LACP) modes both make optimum use of the network interfaces, providing bandwidth aggregation, dynamic load balancing and high availability. The only difference is the basis on which they balance load: source and destination IP and port, or source MAC address. I chose LACP with load balancing based on source MAC address because that's how I configured my switch stack. Choose whichever matches your switch configuration.

Now, let's create some networks on our servers to communicate with the VLANs we created earlier. Before creating them, I realized that I needed identical network and storage configuration on both of my XenServers. So I first created a pool named "lab" by clicking New Pool and adding the two servers, xen1 and xen2, to it. Once both servers were in the pool, I selected the pool "lab" and navigated to its Networking tab. There, I created all four of my networks: Management on VLAN 5, XenMotion on VLAN 10, Storage on VLAN 20 and Data Network on VLAN 30, and I attached all of them to my new aggregated network interface, bond 0+1.

Networking on my XenServer Pool
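
If you prefer the CLI, the same VLAN networks can be created pool-wide with xe. Here's a sketch in which the network name and UUIDs are placeholders (the PIF UUID is that of the bond, found via xe pif-list):

xen1# xe network-create name-label=Storage MTU=9000
xen1# xe pif-list params=uuid,device,VLAN
xen1# xe pool-vlan-create network-uuid=<network-uuid> pif-uuid=<bond-pif-uuid> vlan=20

The pool-vlan-create command creates the VLAN interface on the corresponding bond of every host in the pool at once, which is exactly what we want for pool-wide consistency.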

Now that I'd created my pool and handled the networking part, I went on to utilize the virtual storage I created earlier using OpenFiler. Since I installed OpenFiler on my third XenServer, I first configured that XenServer from XenCenter just as I did with the first two. I mounted the OpenFiler ISO on that XenServer, booted a VM from it and installed OpenFiler using the entire disk space on the server. I assigned the OpenFiler system an IP address (10.10.20.1) in the Storage network (10.10.20.0/24), because it'll be serving storage to the other servers and all storage traffic will flow in this network alone.

For this workshop, I had created two LUNs in OpenFiler: one for the shared storage used by both XenServers to store their virtual machines, and the other for the heartbeat storage required by the XenServer pool to enable High Availability. Those two LUNs were then mapped for iSCSI with CHAP authentication so that the XenServer pool could consume them as block storage. To add shared storage to a XenServer pool, select the pool and click the New Storage button in the top section of XenCenter. XenCenter presents multiple options when adding storage; since I planned on using the iSCSI storage served by OpenFiler, I went for iSCSI shared storage. In the pop-up window asking for information about the storage server, I entered the IP address of the OpenFiler server, 10.10.20.1, as the target. Since I had configured CHAP authentication on OpenFiler, I entered that username and password for authentication. After scanning the target host, I could see the IQN of my OpenFiler's LUN map and my two LUNs. I added a storage repository for each of those LUNs (Shared iSCSI and HeartBeat SR).

Creating iSCSI Storage Repository
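
For the record, the same SR can also be created from xe. This is only a sketch: the IQN, SCSI ID and CHAP credentials are placeholders you'd discover via xe sr-probe and your own OpenFiler configuration:

xen1# xe sr-probe type=lvmoiscsi device-config:target=10.10.20.1 device-config:targetIQN=<target-iqn>
xen1# xe sr-create name-label="Shared iSCSI" shared=true type=lvmoiscsi device-config:target=10.10.20.1 device-config:targetIQN=<target-iqn> device-config:SCSIid=<scsi-id> device-config:chapuser=<chap-user> device-config:chappassword=<chap-password>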

Now that we've got our shared storage and networking ready, we can start creating our virtual machines. Creating a VM in XenServer is very simple: right-click on a pool or server, click New VM and follow a straightforward dialog. However, there is one small inconvenience in XenServer when creating a VM, and that is mounting the operating system's ISO image. XenCenter lets us boot a VM from an ISO on our personal computer or a network share. Unfortunately, that isn't a convenient or reliable option if we use large ISO files, need them frequently, or have a low-bandwidth connection. So I wanted an ISO library inside the XenServer pool, where I could easily upload my ISO files and use them as quickly as possible. Unfortunately, as far as I know, XenCenter doesn't let us create an ISO library that sits inside XenServer and use those ISOs directly to boot our VMs. For that purpose, I had to turn to XenServer's CLI utility, xe, from which I could create an ISO library and boot my VMs from the ISOs in it. As a matter of fact, xe is a really powerful and complete solution for managing the XenServer platform, providing full control over XenServer, unlike its GUI-based counterpart XenCenter.

To access and use xe, we can SSH to one of our XenServers or simply open its console in XenCenter.

xen1# mkdir /var/run/ISOStore
xen1# xe sr-create content-type=iso type=iso name-label=ISOStore device-config:location=/var/run/ISOStore device-config:legacy_mode=true
xen1# xe sr-list

The first command creates a directory named ISOStore inside /var/run/. The second command creates a storage repository that acts as an ISO library, allowing XenServer to use the ISOs in it for VMs. Similarly, we could create an ISO library kept on a NAS or another network location. We cannot use XenCenter to upload ISO files to this library, even though it can display the repository and its contents; to upload an ISO, we use other file transfer tools such as SCP or FTP. Since I personally prefer SCP and SFTP, I uploaded my ISO files using PSFTP, which comes with PuTTY, an SSH client.
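
As an illustration (the ISO filename here is just an example), an upload from a Windows machine with PuTTY's command-line tools, followed by a rescan of the repository so the new file shows up, looks like this:

C:\> pscp CentOS-6.5-x86_64-minimal.iso root@172.16.40.13:/var/run/ISOStore/
xen1# xe sr-list name-label=ISOStore params=uuid
xen1# xe sr-scan uuid=<sr-uuid>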

As I mentioned earlier, XenCenter is something to get started with, but not the ultimate weapon in Xen platform. For advanced and detailed control over XenServer, we must get our hands on xe. We’ll also use xe frequently in further sections along with XenCenter.

Setting Up High Availability

Xen provides a pretty straightforward solution for configuring HA across XenServers. To configure HA, we first select the Xen pool and then navigate to the HA tab, where we find a Configure HA button.

Configure HA

The prerequisites for implementing HA are that all XenServers share common storage (in my case, Shared iSCSI) and a shared heartbeat storage (HeartBeat SR). It is also recommended that the servers have the same CPU family (in my case, Intel Xeon with 8 cores) and the same RAM size to prevent over-commit issues (in my case, 16 GB of RAM on both servers).
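
Incidentally, HA can also be enabled from xe instead of the wizard; a minimal sketch, assuming the heartbeat SR's UUID has already been looked up with xe sr-list:

xen1# xe pool-ha-enable heartbeat-sr-uuids=<heartbeat-sr-uuid>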

Configuring HA – Heartbeat SR

After the heartbeat storage repository has been successfully selected, HA can be guaranteed for our Xen platform and we can plan our HA setup. During planning, we set the HA restart priority, Start order, "Attempt to start next VM after" delay, and the number of failures tolerated for the virtual machines installed on our XenServers.

HA Plan

After the HA plan has been configured to our requirements, we can complete the HA setup and continue using our HA-enabled virtual infrastructure.
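
The same per-VM HA settings can also be applied from xe; here's a sketch with placeholder values (ha-restart-priority accepts restart or best-effort):

xen1# xe vm-param-set uuid=<uuid-vm> ha-restart-priority=restart order=1 start-delay=0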

Health Status of HA
VM Creation

Now, to test the VM migration feature, I first installed a minimal CentOS 6.5 operating system in a VM, mounting a CentOS ISO from the ISO library I created earlier. After my VM was installed, I mounted xen-tools.iso on the VM and installed Xen Tools so that the VM gets the optimum benefit of Xen. Having Xen Tools installed in a VM enables dynamic memory allocation, auto-startup on boot, deeper integration with the Xen hypervisor and many other features. Installing Xen Tools is quite simple too: on a Windows system, just run the executable and follow the installation wizard; on a Linux system, mount the CD-ROM containing xen-tools.iso to a directory and run the installer script.
On my CentOS system, I did the following after attaching xen-tools.iso to it:

root@centos# mount /dev/xvdd /tmp
root@centos# cd /tmp/Linux
root@centos# chmod +x install.sh
root@centos# ./install.sh
root@centos# reboot

After installing Xen Tools on my VM, I also enabled the auto-startup feature on this VM so that it starts itself when the server/pool boots up. While doing so, I also enabled auto-startup for my pool; all of these tasks must be performed from xe, since XenCenter doesn't expose these controls.

xen1#  xe pool-list

This command lists the pools this Xen host can access, along with their UUIDs and names. I took the UUID of my pool, lab, and used it in the following command.

xen1# xe pool-param-set uuid=[uuid-pool] other-config:auto_poweron=true

Then I enabled the auto-startup feature on my VM, first listing the VMs in my pool and then using the UUID of the particular VM I wanted to configure.

xen1# xe vm-list
xen1# xe vm-param-set uuid=[uuid-vm] other-config:auto_poweron=true

Cloning, or quick creation of VMs, is also very easy in XenServer. We can clone a VM by right-clicking the VM and then clicking Clone. Alternatively, we can create a template from a VM and then use that template to create any number of identical VMs within a matter of minutes. Xen implements Copy-on-Write (COW), which allows multiple VMs to share the same base image while a difference image is maintained for each VM. This technology enables significant storage savings and rapid VM creation from templates.
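
From xe, cloning is a one-liner; a sketch with placeholder values (note that a plain clone requires the source VM to be shut down first):

xen1# xe vm-clone uuid=<uuid-vm> new-name-label=CentOS6.5-clone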

VM Migration/XenMotion

Now that I've got my XenServers in High Availability mode and some VMs created, I can easily migrate my VMs from one Xen host to another without impacting their state or service. It's really easy, simple and quick too. I ran a ping test against a VM while migrating it and saw only one ping packet lost before the VM came up on the destination XenServer. To migrate a VM manually, we can right-click the VM, click Migrate to and select the destination host.

VM (CentOS6.5-test4) being Migrated from Xen1 to Xen2

In the next screenshot, you can see that the VM has been successfully migrated from xen1 to xen2 without going down or becoming unavailable.

VM After Migration
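
For completeness, the same live migration can also be triggered from xe; a sketch in which the VM and host names are placeholders:

xen1# xe vm-migrate vm=<vm-name> host=<destination-host> live=true
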
Testing HA

Having implemented HA on my Xen platform and successfully migrated VMs from one host to another, I tested the reliability and performance of my HA setup. To test HA, I simply needed to create a failure on a host or its network manually. So I started by unplugging the network cables connected to the Xen1 server (I could also have just powered off the host itself) and observed the VMs on Xen1 being automatically moved to Xen2 and restarted there. You can see in the screenshots below that a VM called "CentOS6.5-test2" was running on Xen1 before it failed. After Xen1 failed, that VM was immediately restarted on Xen2.

Pool Status before and after Xen1 Failed

The following screenshot shows the health status and failure capacity of my High Availability setup after one host failed.

Status of Pool after Xen1 failed