Category Archives: OVM

How to determine the type of VM from inside the VM in OVM x86 ?

In OVM we deal with three types of VMs. Normally we determine the type of a VM from the OVM GUI, but we can do the same thing at the VM level as well. This method can help you respond quickly on a bridge call.

  • HVM
  • PVM
  • HVM with PV drivers.

Case 1 : HVM (Hardware Virtualized Machine)

In this case the VM is not customized to run in a virtual environment. Below are the methods to verify the nature of the VM from inside it.

First, we can check dmesg.

vm-hvm01 ~> dmesg | grep -i xen
ACPI: RSDP 00000000000ea020 00024 (v02 Xen)
ACPI: XSDT 00000000fc00eaa0 00034 (v01 Xen HVM 00000000 HVML 00000000)
ACPI: FACP 00000000fc00e8c0 000F4 (v04 Xen HVM 00000000 HVML 00000000)
ACPI: DSDT 00000000fc002c40 0BBF1 (v02 Xen HVM 00000000 INTL 20110112)
ACPI: APIC 00000000fc00e9c0 000D8 (v02 Xen HVM 00000000 HVML 00000000)

If we look at the modules, we will find that the network driver in use is a stock kernel module, not a Xen-specific one.

vm-hvm01 ~> cat /etc/modprobe.conf | grep -i eth0
alias eth0 8139cp

vm-hvm01 ~> modinfo 8139cp
filename: /lib/modules/2.6.32-100.26.2.el5/kernel/drivers/net/8139cp.ko
license: GPL
version: 1.3
description: RealTek RTL-8139C+ series 10/100 PCI Ethernet driver
author: Jeff Garzik <jgarzik@pobox.com>
srcversion: 93DF48CB0077E555C8819AE
alias: pci:v00000357d0000000Asv*sd*bc*sc*i*
alias: pci:v000010ECd00008139sv*sd*bc*sc*i*
depends: mii
vermagic: 2.6.32-100.26.2.el5 SMP mod_unload modversions
parm: debug:8139cp: bitmapped message enable number (int)
parm: multicast_filter_limit:8139cp: maximum number of filtered multicast addresses (int)
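
The driver alias by itself is a quick tell. A minimal sketch, where `eth0_kind` is a hypothetical helper and the driver lists are assumptions (a few common emulated QEMU NICs versus the Xen PV frontend), not exhaustive:

```shell
# eth0_kind is a hypothetical helper; the driver lists are assumptions
# (common emulated NICs vs. the Xen PV frontend), not exhaustive.
eth0_kind() {
  case "$1" in
    xennet|xen_netfront)   echo "paravirtual (Xen frontend)" ;;
    8139cp|8139too|e1000)  echo "emulated hardware" ;;
    *)                     echo "unknown driver: $1" ;;
  esac
}

eth0_kind 8139cp    # the HVM case above
eth0_kind xennet    # the PVM case shown later
```

On a real guest you would feed it the alias from /etc/modprobe.conf.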

Further, we can check the hardware configuration of the VM using the command below.

vm-hvm01 ~> cat /etc/sysconfig/hwconf | grep -i qemu
desc: "ATA QEMU HARDDISK"
desc: "ATA QEMU HARDDISK"

Case 2 : PVM (Para-Virtualized Machine)

Here we do not see any HVM-related entries in dmesg; instead, the kernel boots paravirtualized.

[root@phl3dsmfdb02 ~]# dmesg | grep -i xen
Xen: 0000000000000000 - 00000000000a0000 (usable)
Xen: 00000000000a0000 - 0000000000100000 (reserved)
Xen: 0000000000100000 - 0000001580000000 (usable)
#1 [000d4bb000 - 000d52a000] XEN PAGETABLES ==> [000d4bb000 - 000d52a000]
#5 [00028b8000 - 000d4bb000] XEN START INFO ==> [00028b8000 - 000d4bb000]
Booting paravirtualized kernel on Xen
Xen version: 4.1.3OVM (preserve-AD)
Xen: using vcpu_info placement
Xen: using vcpuop timer interface
installing Xen timer for CPU 0
installing Xen timer for CPU 1
installing Xen timer for CPU 2
installing Xen timer for CPU 3
xen_balloon: Initialising balloon driver.
Switching to clocksource xen
input: Xen Virtual Keyboard as /class/input/input1
input: Xen Virtual Pointer as /class/input/input2
XENBUS: Device with no driver: device/vbd/51712
XENBUS: Device with no driver: device/vbd/51728
XENBUS: Device with no driver: device/vbd/51744
XENBUS: Device with no driver: device/vbd/51760
XENBUS: Device with no driver: device/vbd/51776
XENBUS: Device with no driver: device/vif/0
XENBUS: Device with no driver: device/vif/1
XENBUS: Device with no driver: device/vif/2

We can also look at the module configuration file to check the status of the modules; it clearly shows that we are using Xen modules.

[root@phl3dsmfdb02 ~]# cat /etc/modprobe.conf
alias eth0 xennet
alias scsi_hostadapter xenblk

[root@phl3dsmfdb02 ~]# lsmod | grep -i xen
xen_netfront 16356 0
xen_blkfront 12731 13

[root@phl3dsmfdb02 ~]# modinfo xen_netfront
filename: /lib/modules/2.6.32-100.26.2.el5/kernel/drivers/net/xen-netfront.ko
alias: xennet
alias: xen:vif
license: GPL
description: Xen virtual network device frontend
srcversion: 770DD0C26EFB5A5D1CAA2A9
depends:
vermagic: 2.6.32-100.26.2.el5 SMP mod_unload modversions

Another verification step can be checking the hardware configuration file.

[root@phl3dsmfdb02 ~]# cat /etc/sysconfig/hwconf

class: OTHER
bus: PSAUX
detached: 0
driver: pcspkr
desc: "PC Speaker"

class: OTHER
bus: PSAUX
detached: 0
desc: "Xen Virtual Pointer"

class: NETWORK
bus: XEN
detached: 0
device: eth0
driver: xennet
desc: "Xen Virtual Ethernet"
network.hwaddr: 00:16:3e:5d:9e:0d

class: MOUSE
bus: PSAUX
detached: 0
device: input/mice
driver: generic3ps/2
desc: "Macintosh mouse button emulation"

class: VIDEO
bus: XEN
detached: 0
device: fb0
driver: xenfb
desc: "Xen Virtual Framebuffer"
video.xdriver: fbdev

class: HD
bus: XEN
detached: 0
device: xvda
driver: xenblk
desc: "Xen Virtual Block Device"

class: KEYBOARD
bus: PSAUX
detached: 0
desc: "Xen Virtual Keyboard"

Case 3 : HVM with PV drivers

At first look you would say this is an HVM machine, but it is not.

[root@phl3fisndb03 ~]# dmesg | grep -i xen
DMI: Xen HVM domU, BIOS 4.1.3OVM 02/22/2014
Hypervisor detected: Xen HVM
Xen version 4.1.
Xen Platform PCI: I/O protocol version 1
Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
ACPI: RSDP 00000000000ea020 00024 (v02 Xen)
ACPI: XSDT 00000000fc00eaa0 00034 (v01 Xen HVM 00000000 HVML 00000000)
ACPI: FACP 00000000fc00e8c0 000F4 (v04 Xen HVM 00000000 HVML 00000000)
ACPI: DSDT 00000000fc002c40 0BBF1 (v02 Xen HVM 00000000 INTL 20110112)
ACPI: APIC 00000000fc00e9c0 000D8 (v02 Xen HVM 00000000 HVML 00000000)
Booting paravirtualized kernel on Xen HVM
xen:events: Xen HVM callback vector for event delivery is enabled

If we check the loaded modules, they belong to the Xen frontends. So this is an HVM guest with PV drivers.

[root@phl3fisndb03 ~]# lsmod | grep -i xen
xen_netfront 21082 0
xen_blkfront 21314 4
[root@phl3fisndb03 ~]# modinfo xen_netfront
filename: /lib/modules/3.8.13-16.3.1.el6uek.x86_64/kernel/drivers/net/xen-netfront.ko
alias: xennet
alias: xen:vif
license: GPL
description: Xen virtual network device frontend
srcversion: 8072CF1E596590C4EC5603F
depends:
intree: Y
vermagic: 3.8.13-16.3.1.el6uek.x86_64 SMP mod_unload modversions
[root@phl3fisndb03 ~]# modinfo xen_blkfront
filename: /lib/modules/3.8.13-16.3.1.el6uek.x86_64/kernel/drivers/block/xen-blkfront.ko
alias: xenblk
alias: xen:vbd
alias: block-major-202-*
license: GPL
description: Xen virtual block device frontend
srcversion: 391F354010EE075E7AA3A79
depends:
intree: Y
vermagic: 3.8.13-16.3.1.el6uek.x86_64 SMP mod_unload modversions
parm: max:Maximum amount of segments in indirect requests (default is 32) (int)
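
Putting the three cases together: a minimal classification sketch based only on the dmesg markers shown above. `classify_guest` is a hypothetical helper, and the exact marker strings may differ between kernel versions:

```shell
# classify_guest is a hypothetical helper; the marker strings come from the
# dmesg outputs in the three cases above and may vary between kernels.
classify_guest() {
  if echo "$1" | grep -q "Booting paravirtualized kernel on Xen HVM"; then
    echo "HVM with PV drivers"
  elif echo "$1" | grep -q "Booting paravirtualized kernel on Xen"; then
    echo "PVM"
  elif echo "$1" | grep -q "Xen HVM"; then
    echo "HVM"
  else
    echo "no Xen markers found"
  fi
}

# Sample one-line inputs taken from the outputs above:
classify_guest "ACPI: XSDT 00000000fc00eaa0 00034 (v01 Xen HVM 00000000 HVML 00000000)"
classify_guest "Booting paravirtualized kernel on Xen"
classify_guest "Booting paravirtualized kernel on Xen HVM"
```

On a live guest you would run it against the real ring buffer, e.g. classify_guest "$(dmesg)". Note the order of the checks matters, since the HVM-with-PV line contains the plain PVM line as a prefix.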

I hope this will help you while troubleshooting 🙂

Repository Space calculation in OVM x86 environment.

I presented a 50 GB LUN from storage.

When I created a repository on the 50 GB LUN, 4.3 GB of space was used by the overhead associated with it.

[root@OVS-2 ~]# df -h /OVS/Repositories/0004fb0000030000f1532acb312df8a2
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/14f504e46494c45006a504d3265522d31386d342d5a76416f 50G 4.3G 46G 9% /OVS/Repositories/0004fb0000030000f1532acb312df8a2

At this point it is showing me equal usage in the GUI and at the OVS CLI.
So far I have not put anything on the repository. Now I am going to upload an ISO image to the repository using WinSCP.
I connected a WinSCP session to the OVS server, browsed to the path /OVS/Repositories/0004fb0000030000f1532acb312df8a2/ISOs, and started transferring the ISO image to that path.

If I check the utilization at the OVS CLI level it shows the correct utilization, but at the GUI level the utilization is misleading: it still shows 9%. We need to refresh the repository to see the correct utilization.

[root@OVS-1 ~]# df -h /OVS/Repositories/0004fb0000030000f1532acb312df8a2/
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/14f504e46494c45006a504d3265522d31386d342d5a76416f 50G 12G 39G 23% /OVS/Repositories/0004fb0000030000f1532acb312df8a2

[root@OVS-1 ~]# cd !$
cd /OVS/Repositories/0004fb0000030000f1532acb312df8a2/

[root@OVS-1 0004fb0000030000f1532acb312df8a2]# du -sh *
0 Assemblies
7.0G ISOs
0 lost+found
0 Templates
0 VirtualDisks
0 VirtualMachines
[root@OVS-1 0004fb0000030000f1532acb312df8a2]#

If we perform operations through the GUI, the repository will reflect the correct utilization automatically. If we do them from the CLI, we have to refresh it manually.
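
The manual refresh does not have to be done from the GUI alone. A hedged sketch of doing it from the OVM Manager CLI instead (verify the exact verb against your CLI release; the repository name is the one used in this lab):

```
OVM> refresh Repository name=LinuxRepo-1
```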

How to check the OVM x86 cluster DB status ?

We know that whenever we perform an operation in the GUI, files ultimately get created on the server. The same goes for pools: while we create pools using the GUI, DB files get created on the physical servers.

After the creation of a clustered pool in the OVM GUI, an OCFS2 file system is created and mounted on all physical servers that are part of that cluster, as shown later in this document.

Apart from that, some other DB files also get created, as shown below.

Method 1 : Pulling the data from /etc/ovs-agent/db.

Case 1 : Go to the path below, then issue the ll command to see the contents and the file * command to see the nature of the files.

[root@OVS-2 ~]# cd /etc/ovs-agent/db/
[root@OVS-2 db]# pwd
/etc/ovs-agent/db

[root@OVS-2 db]# ll
total 40
-rw------- 1 root root 12288 Oct 12 22:16 aproc
-rw------- 1 root root 12288 Oct 12 16:46 exports
-rw------- 1 root root 12288 Oct 12 19:40 repository
-rw------- 1 root root 12288 Oct 14 18:42 server

[root@OVS-2 db]# file *
aproc:      Berkeley DB (Hash, version 9, native byte-order)
exports:    Berkeley DB (Hash, version 9, native byte-order)
repository: Berkeley DB (Hash, version 9, native byte-order)
server:     Berkeley DB (Hash, version 9, native byte-order)

Case 2 : After that we can check the status of the cluster using the command below. By default it picks the data from /etc/ovs-agent/db; it is not necessary to be in that path to issue the command.

[root@OVS-2 db]# ovs-agent-db dump_db server
{'cluster_state': 'DLM_Ready',
'clustered': True,
'is_master': False,
'manager_event_url': 'https://192.168.111.110:7002/ovm/core/wsapi/rest/internal/Server/56:4d:ee:1f:d9:79:c7:3c:5c:f2:a4:f1:93:13:5d:5c/Event',
'manager_ip': '192.168.111.110',
'manager_statistic_url': 'https://192.168.111.110:7002/ovm/core/wsapi/rest/internal/Server/56:4d:ee:1f:d9:79:c7:3c:5c:f2:a4:f1:93:13:5d:5c/Statistic',
'manager_uuid': '0004fb00000100006c89d905006ea09d',
'node_number': 1,
'pool_alias': 'mypool-1',
'pool_uuid': '0004fb0000020000723250d652ed73ba',
'pool_virtual_ip': '192.168.111.112',
'poolfs_nfsbase_uuid': '',
'poolfs_target': '/dev/mapper/14f504e46494c45004e74666b654f2d6854444f2d79775143',
'poolfs_type': 'lun',
'poolfs_uuid': '0004fb00000500002d107c91a367306b',
'registered_hostname': 'OVS-2',
'registered_ip': '192.168.111.121',
'roles': set(['utility', 'xen'])}

Case 3 : We can check the repository database as well, using the command below.

[root@OVS-2 db]# ovs-agent-db dump_db repository
{'0004fb0000030000f1532acb312df8a2': {'alias': u'LinuxRepo-1',
'filesystem': 'ocfs2',
'fs_location': '/dev/mapper/14f504e46494c45006a504d3265522d31386d342d5a76416f',
'manager_uuid': u'0004fb00000100006c89d905006ea09d',
'mount_point': '/OVS/Repositories/0004fb0000030000f1532acb312df8a2',
'version': u'3.0'}}

Method 2 : Now coming to the OCFS2 file system. If we want to pull the DB information from the mounted OCFS2 file system, we have to use the -c option with the above command.

Case 1 : We can determine the OCFS2 file system path with the command below.

[root@OVS-2 db]# ovs-agent-db get_cluster_db_home
'/poolfsmnt/0004fb00000500002d107c91a367306b/db'

Case 2 : Let's see the contents of the OCFS2 file system. It also contains DB files.

[root@OVS-2 db]# df -h /poolfsmnt/0004fb00000500002d107c91a367306b/db
Filesystem                                                     Size  Used Avail Use% Mounted on
/dev/mapper/14f504e46494c45004e74666b654f2d6854444f2d79775143   14G  263M   14G   2% /poolfsmnt/0004fb00000500002d107c91a367306b

[root@OVS-1 ~]# cd /poolfsmnt/0004fb00000500002d107c91a367306b/db/

[root@OVS-1 db]# ll
total 36
-rw------- 1 root root 12288 Oct 12 22:16 monitored_vms
-rw------- 1 root root 12288 Oct 14 18:53 server_pool
-rw------- 1 root root 12288 Oct 12 18:27 server_pool_servers

Case 3 : If we want to check the VMs running on the cluster, we use the command below.

[root@OVS-1 db]# ovs-agent-db dump_db -c monitored_vms
{‘0004fb00-0006-0000-4689-b1d1cc6e83d9’: {‘repo_id’: ‘0004fb0000030000f1532acb312df8a2’,
‘vm_id’: ‘0004fb00000600004689b1d1cc6e83d9’}}

The command above shows the repository ID on which the vDisks are hosted, along with the UUID of the VM.

[root@OVS-1 db]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
0004fb00000600004689b1d1cc6e83d9             1  1024     1     -b----     39.2
Domain-0                                     0   823     4     r-----    640.8

Case 4 : Let's check the status of the servers present in the pool using the command below. It shows which physical node is currently the master of the pool.

[root@OVS-1 db]# ovs-agent-db dump_db -c server_pool_servers
{'OVS-1': {'is_master': True,
'node_number': 0,
'registered_ip': '192.168.111.120',
'roles': set(['utility', 'xen'])},
'OVS-2': {'is_master': False,
'node_number': 1,
'registered_ip': '192.168.111.121',
'roles': set(['utility', 'xen'])}}
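
These dumps are Python dict literals, so they are easy to post-process with standard text tools. A rough sketch that pulls the master's name out of a server_pool_servers dump; `find_master` is a hypothetical helper and relies on the server name and its is_master flag sharing a line, as in the output above:

```shell
# find_master is a hypothetical helper built around the dump layout above:
# the server name and its is_master flag appear on the same line.
find_master() {
  echo "$1" | grep "'is_master': True" | sed "s/.*'\([^']*\)': {'is_master'.*/\1/"
}

# Sample dump trimmed down from the output above:
sample="{'OVS-1': {'is_master': True,
'node_number': 0},
'OVS-2': {'is_master': False,
'node_number': 1}}"

find_master "$sample"
```

On a live node you would pipe the real dump in instead of the sample.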

Case 5 : If we want to check the status of the server pool, we use the command below.

[root@OVS-1 db]# ovs-agent-db dump_db -c server_pool
{'auto_remaster': True,
'pool_alias': 'mypool-1',
'pool_master_hostname': 'OVS-1',
'pool_member_ip_list': ['192.168.111.120', '192.168.111.121'],
'pool_uuid': '0004fb0000020000723250d652ed73ba',
'pool_virtual_ip': '192.168.111.112'}

Tip : We are using bridge networking in OVS. If we want to switch to Open vSwitch, we have to change the parameter in the file below 🙂 Not related to the topic, but I found it worth sharing.

[root@OVS-2 ~]# cat /etc/ovs-agent/agent.ini | grep -i virtualnetwork
;To use open vswitch change the virtualnetwork value to openvswitch
virtualnetwork=linuxbridge

How to determine various Disk States in OVM x86 using vm.cfg file ?

Today I came across a very interesting topic: disk modes in the configuration file of a VM. I simulated it in my lab environment. Below are my findings.

Case 1 : Assigning a physical disk to a VM in OVM x86 is just like an RDM LUN in VMware. Its entry differs from the other vDisks assigned to the VM, and we can see that difference in the VM configuration file as well.

I have assigned a physical disk to the VM LinTest1; in the configuration file its entry starts with 'phy' instead of the 'file' used for a simple vDisk. At the end we see 'w', which shows that the disk is in read-write mode.

[root@OVS-2 0004fb00000600004689b1d1cc6e83d9]# cat vm.cfg
vif = ['mac=00:21:f6:cd:c2:87,bridge=103a1e612f']
OVM_simple_name = 'LinTest1'
guest_os_type = 'linux'
disk = ['file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000f81558f292b2f52e.img,xvda,w', 'file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000ea645f4fa6a4abf8.img,xvdb,w', 'file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000ad06906d5a735bd0.img,xvdc,w', 'phy:/dev/mapper/14f504e46494c45004c32597330422d746257662d34434677,xvdd,w']

Note : We can't create a clone of a VM that has a physical disk assigned to it.

Case 2 : If we share a single disk between two VMs, for file systems like GFS2, then that disk's entry in the configuration file ends with "w!". It appears in the configuration files of both VMs to which the disk is shared.

[root@OVS-2 0004fb00000600004689b1d1cc6e83d9]# cat vm.cfg
vif = [‘mac=00:21:f6:cd:c2:87,bridge=103a1e612f’]
OVM_simple_name = ‘LinTest1’
guest_os_type = ‘linux’
disk = [‘file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000f81558f292b2f52e.img,xvda,w’, ‘file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000ea645f4fa6a4abf8.img,xvdb,w’, ‘file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000ad06906d5a735bd0.img,xvdc,w’, ‘file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000b1c0d51195d1a633.img,xvdd,w!’]

So whenever you see w! in a VM configuration file, it means the disk is shared.

Case 3 : If a device or virtual disk is presented to the VM read-only, its entry ends with "r" in the configuration file.

In the case below I have assigned an ISO image to the VM; the r at the end of the entry shows that it is in read-only mode.

[root@OVS-2 0004fb00000600004689b1d1cc6e83d9]# cat vm.cfg
vif = ['mac=00:21:f6:cd:c2:87,bridge=103a1e612f']
OVM_simple_name = 'LinTest1'
guest_os_type = 'linux'
disk = ['file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000f81558f292b2f52e.img,xvda,w', 'file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000ea645f4fa6a4abf8.img,xvdb,w', 'file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000ad06906d5a735bd0.img,xvdc,w', 'file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000b1c0d51195d1a633.img,xvdd,w!', 'file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/ISOs/V41362-01.iso,xvde:cdrom,r']
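
To summarize the three cases: the mode is simply the last comma-separated field of each disk entry. A small sketch, where `disk_mode` is a hypothetical helper and the paths are shortened for illustration:

```shell
# disk_mode is a hypothetical helper: the access mode is the last
# comma-separated field of a vm.cfg disk entry (w, w! or r).
disk_mode() {
  case "${1##*,}" in
    'w!') echo "shared read-write"    ;;
    w)    echo "exclusive read-write" ;;
    r)    echo "read-only"            ;;
    *)    echo "unknown mode"         ;;
  esac
}

# Shortened example entries in the three forms shown above:
disk_mode 'file:/OVS/Repositories/.../VirtualDisks/rootdisk.img,xvda,w'
disk_mode 'file:/OVS/Repositories/.../VirtualDisks/shared.img,xvdd,w!'
disk_mode 'file:/OVS/Repositories/.../ISOs/V41362-01.iso,xvde:cdrom,r'
```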

I hope this helps while troubleshooting OVM x86 issues 🙂

How to differentiate between Sparse and non-sparse disk in OVM x86?

In OVM x86, people often have difficulty figuring out whether a disk is sparse or non-sparse. If you have a good naming convention you are lucky; if not, you are in trouble. I have figured out two ways to determine the nature of a disk.

Case 1 :  Using OVM manager CLI.

I connected to the CLI of the OVM Manager.

[root@OVM-1 ~]# ssh -l admin OVM-1 -p 10000
admin@ovm-1's password:

Issuing the command below shows the mapping of disk IDs to disk names.

OVM> list VirtualDisk
Command: list VirtualDisk
Status: Success
Time: 2014-10-14 20:25:48,246 IST
Data:
id:0004fb0000120000ad06906d5a735bd0.img  name:Non-sparse1-LIn1
id:0004fb0000120000ea645f4fa6a4abf8.img  name:Sparse-1-Lin1
id:0004fb0000120000f81558f292b2f52e.img  name:rootdisk-1

I have a naming convention from which I can tell which disk is sparse and which is non-sparse. Without one, we can issue the command below.

The name here is taken from the previous command. Have a close look at the "Max (GiB)" and "Used (GiB)" values.

If the Max size equals the Used size, it is a non-sparse disk for sure. If the Max size is greater than the Used size, it is a sparse disk.

  • Sparse Disk :

OVM> show VirtualDisk name=Sparse-1-Lin1
Command: show VirtualDisk name=Sparse-1-Lin1
Status: Success
Time: 2014-10-14 20:26:00,547 IST
Data:
VmDiskMapping 1 = 0004fb0000130000e9a90559c1e2b127  [Mapping for disk Id (0004fb0000120000ea645f4fa6a4abf8.img)]
  Max (GiB) = 1.0
  Used (GiB) = 0.93
Shareable = No
Repository Id = 0004fb0000030000f1532acb312df8a2  [LinuxRepo-1]
Id = 0004fb0000120000ea645f4fa6a4abf8.img  [Sparse-1-Lin1]
Name = Sparse-1-Lin1
Locked = false

  • Non-Sparse Disk 

OVM> show VirtualDisk name=Non-sparse1-LIn1
Command: show VirtualDisk name=Non-sparse1-LIn1
Status: Success
Time: 2014-10-14 20:26:29,649 IST
Data:
VmDiskMapping 1 = 0004fb0000130000e70f5443092d366b  [Mapping for disk Id (0004fb0000120000ad06906d5a735bd0.img)]
  Max (GiB) = 1.0
  Used (GiB) = 1.0
Shareable = No
Repository Id = 0004fb0000030000f1532acb312df8a2  [LinuxRepo-1]
Id = 0004fb0000120000ad06906d5a735bd0.img  [Non-sparse1-LIn1]
Name = Non-sparse1-LIn1
Description = Non Sparse Disk for Linux 1 vm
Locked = false

Sometimes people ask: if the disk is fully utilized, how can they differentiate between sparse and non-sparse? I have simulated that case as well. Even if a sparse disk is 100% utilized, its Used size never reaches the exact "Max size" value. The sparse example shown above was in fact taken after the disk was 100% utilized at the level of the VM to which it is assigned 🙂
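
The Max/Used rule amounts to a one-line comparison. A tiny sketch, where `sparse_check` is a hypothetical helper and the values are taken from the two show VirtualDisk outputs above (exact string comparison is enough for these values; real scripts would compare numerically):

```shell
# sparse_check is a hypothetical helper for the rule above:
# Used == Max means non-sparse, Used < Max means sparse.
sparse_check() {
  max=$1; used=$2
  if [ "$used" = "$max" ]; then echo "non-sparse"; else echo "sparse"; fi
}

sparse_check 1.0 0.93   # Sparse-1-Lin1
sparse_check 1.0 1.0    # Non-sparse1-LIn1
```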

Case 2 : Checking the size of the vDisk in the repository

I opened a PuTTY session to the physical server and changed to the repository path where my vDisks are located.

[root@OVS-2 VirtualDisks]# pwd
/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks

Then I issued the command below to distinguish the sparse and non-sparse disks.

If the first value (the allocated size) is greater than or equal to the second value (the apparent size), the disk is non-sparse, e.g. the first vDisk.

If the first value is less than the second value, it is a sparse disk, e.g. the second and third vDisks.

[root@OVS-2 VirtualDisks]# ls -lsh
total 4.8G
1.0G -rw------- 1 root root 1.0G Oct 14  2014 0004fb0000120000ad06906d5a735bd0.img
956M -rw------- 1 root root 1.0G Oct 14  2014 0004fb0000120000ea645f4fa6a4abf8.img
2.9G -rw------- 1 root root  10G Oct 14  2014 0004fb0000120000f81558f292b2f52e.img
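
The same allocated-versus-apparent-size distinction can be reproduced with ordinary files. A self-contained demonstration (not real vDisks, just a sparse file and a fully written one of the same apparent size):

```shell
#!/bin/sh
# Create a sparse file (apparent size 10M, no blocks allocated) and a fully
# written file of the same apparent size, then compare allocated sizes.
workdir=$(mktemp -d)
cd "$workdir"

truncate -s 10M sparse.img                                # no blocks written
dd if=/dev/zero of=full.img bs=1M count=10 2>/dev/null    # all blocks written

# The first column of `ls -ls` is the allocated size, as in the listing above.
ls -ls sparse.img full.img
du -k sparse.img full.img
```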

How to create cluster pool in OVM ?

This follows the installation of OVM, shown in a previous article here.

I have installed two OVS servers on VMware Workstation. It's time to scan those two servers into OVM.

Prerequisites:

An installed OVM Manager.

An Ethernet interface configured to handle the storage network traffic.

A filer attached for storage, reachable over that storage Ethernet interface.

Step 1 : Installation of an OVS server is very simple, like any Linux OS installation. The only difference I noticed is that it asks for one more password during installation: the Oracle VM agent password.

We need this password while scanning the OVS server into OVM.

By default, after a successful installation of the OVS server, bond0 is created on top of eth0.

[root@OVS-1 ~]# ifconfig -a
bond0     Link encap:Ethernet  HWaddr 00:0C:29:81:60:F9
inet addr:192.168.111.120  Bcast:192.168.111.255  Mask:255.255.255.0
UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
RX packets:49 errors:0 dropped:0 overruns:0 frame:0
TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5881 (5.7 KiB)  TX bytes:6999 (6.8 KiB)

My OVS version is :

[root@OVS-1 ~]# cat /etc/ovs-release
Oracle VM server release 3.3.1

[root@OVS-1 ~]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0   832     4     r-----     53.9

Step 2 : I have assigned one more network interface to each OVS server, so now there are two interfaces on each:
eth0 is used for management, cluster heartbeat and live migration.
eth1 is used for storage.
[root@OVS-1 ~]# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr 00:0C:29:81:60:03
inet addr:192.168.112.11  Bcast:192.168.112.255  Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
RX packets:7700 errors:0 dropped:0 overruns:0 frame:0
TX packets:4885 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:9714910 (9.2 MiB)  TX bytes:446684 (436.2 KiB)

Step 3 : While creating a pool in OVM, the disk used for the cluster pool heartbeat should be at least 12 GB in size.

I scanned the two physical servers OVS-1 and OVS-2 into the unassigned pool before creating the new pool. I have not shown that here because it is very simple.

Below is the link to the document with screenshots.

https://drive.google.com/file/d/0B7F4NEbnRvYiZFVZMHkzTHJDMnM/view?usp=sharing

After the successful creation of the cluster pool, you will find a new file system mounted on each OVS server.

[root@OVS-1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       7.8G  1.2G  6.3G  15% /
tmpfs           369M   16K  369M   1% /dev/shm
/dev/sda1       477M   47M  401M  11% /boot
none            369M   40K  369M   1% /var/lib/xenstored
/dev/dm-8        14G  369M   14G   3% /poolfsmnt/0004fb00000500002d107c91a367306b

[root@OVS-1 ~]# cat /etc/mtab  | grep -i ocfs2
ocfs2_dlmfs /dlm ocfs2_dlmfs rw 0 0
/dev/mapper/14f504e46494c45004e74666b654f2d6854444f2d79775143 /poolfsmnt/0004fb00000500002d107c91a367306b ocfs2 rw,_netdev,heartbeat=global 0 0

Below is the cluster configuration file, created after the pool was built with two OVS servers.

[root@OVS-1 ~]# cat /etc/ocfs2/cluster.conf
node:
name = OVS-1
cluster = 723250d652ed73ba
number = 0
ip_address = 192.168.111.120
ip_port = 7777

node:
name = OVS-2
cluster = 723250d652ed73ba
number = 1
ip_address = 192.168.111.121
ip_port = 7777

cluster:
name = 723250d652ed73ba
heartbeat_mode = global
node_count = 2

heartbeat:
cluster = 723250d652ed73ba
region = 0004FB00000500002D107C91A367306B

Xen Hypervisor (xm) command Cheat Sheet

In this article I am going to show you some useful xm commands.

How to list the currently running VM on physical server ?

[root@OVS-2 ~]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
0004fb00000600004689b1d1cc6e83d9             1  1027     1     r-----    293.5
Domain-0                                     0   830     4     r-----   1242.7
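
Output in this fixed-column form is easy to post-process. For example, a sketch that lists only the guest domains, skipping the header line and Domain-0; `guest_ids` is a hypothetical helper, shown here against inlined sample output:

```shell
# guest_ids is a hypothetical helper: drop the header line and Domain-0,
# then print each guest's name and numeric domain ID.
guest_ids() {
  awk 'NR > 1 && $1 != "Domain-0" { print $1, $2 }'
}

# On a live server you would pipe `xm list` in; here the sample is inlined:
guest_ids <<'EOF'
Name                                        ID   Mem VCPUs      State   Time(s)
0004fb00000600004689b1d1cc6e83d9             1  1027     1     r-----    293.5
Domain-0                                     0   830     4     r-----   1242.7
EOF
```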

How to list the virtual CPUs which are assigned to VM with domain ID 1 ?

[root@OVS-2 ~]# xm vcpu-list 1
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
0004fb00000600004689b1d1cc6e83d9     1     0     2   -b-     287.8 any cpu

How to check the state of domain ?

[root@OVS-2 ~]# xm domstate 1
idle

How to list the vNICs which are assigned to VM ?

[root@OVS-2 ~]# xm network-list 1
Idx BE     MAC Addr.     handle state evt-ch tx-/rx-ring-ref BE-path
0   0  00:21:f6:cd:c2:87    0     4      6     768  /769     /local/domain/0/backend/vif/1/0

How to check the up time of the VM ?

[root@OVS-2 VirtualDisks]# xm uptime
Name                                ID Uptime
0004fb00000600004689b1d1cc6e83d9     2  0:01:07
Domain-0                             0  4:21:58

How to list the block devices associated with VM ?

[root@OVS-2 ~]# xm block-list 1
Vdev  BE handle state evt-ch ring-ref BE-path
51712  0    0     4      12     9     /local/domain/0/backend/vbd/1/51712
51728  0    0     4      13     10    /local/domain/0/backend/vbd/1/51728

How to reboot domain from OVS server ?

[root@OVS-2 VirtualDisks]# xm reboot 2

The reboot operation changes the domain ID.

[root@OVS-2 VirtualDisks]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
0004fb00000600004689b1d1cc6e83d9             3  1033     1     -b----      1.3
Domain-0                                     0   831     4     r-----   1461.7

How to run the dry check to see whether domain is able to access the resources ?

[root@OVS-2 crash]# xm dry-run /OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualMachines/0004fb00000600004689b1d1cc6e83d9/vm.cfg
Using config file "/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualMachines/0004fb00000600004689b1d1cc6e83d9/vm.cfg".
Checking domain:
0004fb00000600004689b1d1cc6e83d9: PERMITTED
Checking resources:
file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/VirtualDisks/0004fb0000120000f81558f292b2f52e.img: PERMITTED
file:/OVS/Repositories/0004fb0000030000f1532acb312df8a2/ISOs/V41362-01.iso: PERMITTED
Dry Run: PASSED