How to check the OVM x86 cluster DB status?

We know that whenever we perform an operation in the OVM Manager GUI, the corresponding files ultimately get created on the servers. The same goes for pools: when we create a pool from the GUI, DB files get created on the physical servers.

After a clustered pool is created in the OVM GUI, an OCFS2 file system is created and mounted on all physical servers that are part of that cluster, as shown in the later part of this document.

Apart from that, some other DB files also get created, as shown below.

Method 1 : Pulling the data from /etc/ovs-agent/db.

Case 1 : Go to the path below, then run ll to list the contents and file * to see what kind of files they are.

[root@OVS-2 ~]# cd /etc/ovs-agent/db/
[root@OVS-2 db]# pwd
/etc/ovs-agent/db

[root@OVS-2 db]# ll
total 40
-rw------- 1 root root 12288 Oct 12 22:16 aproc
-rw------- 1 root root 12288 Oct 12 16:46 exports
-rw------- 1 root root 12288 Oct 12 19:40 repository
-rw------- 1 root root 12288 Oct 14 18:42 server

[root@OVS-2 db]# file *
aproc:      Berkeley DB (Hash, version 9, native byte-order)
exports:    Berkeley DB (Hash, version 9, native byte-order)
repository: Berkeley DB (Hash, version 9, native byte-order)
server:     Berkeley DB (Hash, version 9, native byte-order)
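
Since ovs-agent-db can dump any of these databases by name, a small helper loop (my own sketch, not part of OVM) is handy to eyeball all the local agent DBs in one go:

#!/bin/bash
# Sketch: dump every local ovs-agent database listed above.
# Assumes the stock /etc/ovs-agent/db layout of OVM Server 3.x.
for db in aproc exports repository server; do
    echo "=== $db ==="
    ovs-agent-db dump_db "$db"
done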

Case 2 : After that we can check the status of the cluster using the command below. By default it reads the data from /etc/ovs-agent/db, so it is not necessary to be in that directory to run it.

[root@OVS-2 db]# ovs-agent-db dump_db server
{'cluster_state': 'DLM_Ready',
'clustered': True,
'is_master': False,
'manager_event_url': 'https://192.168.111.110:7002/ovm/core/wsapi/rest/internal/Server/56:4d:ee:1f:d9:79:c7:3c:5c:f2:a4:f1:93:13:5d:5c/Event',
'manager_ip': '192.168.111.110',
'manager_statistic_url': 'https://192.168.111.110:7002/ovm/core/wsapi/rest/internal/Server/56:4d:ee:1f:d9:79:c7:3c:5c:f2:a4:f1:93:13:5d:5c/Statistic',
'manager_uuid': '0004fb00000100006c89d905006ea09d',
'node_number': 1,
'pool_alias': 'mypool-1',
'pool_uuid': '0004fb0000020000723250d652ed73ba',
'pool_virtual_ip': '192.168.111.112',
'poolfs_nfsbase_uuid': '',
'poolfs_target': '/dev/mapper/14f504e46494c45004e74666b654f2d6854444f2d79775143',
'poolfs_type': 'lun',
'poolfs_uuid': '0004fb00000500002d107c91a367306b',
'registered_hostname': 'OVS-2',
'registered_ip': '192.168.111.121',
'roles': set(['utility', 'xen'])}
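
The fields to watch here are cluster_state and clustered. As a rough sketch (the grep pattern assumes the exact output format shown above), a check like the one below can flag a node whose DLM is not ready:

#!/bin/bash
# Sketch: warn if the local agent DB does not report DLM_Ready.
if ovs-agent-db dump_db server | grep -q "'cluster_state': 'DLM_Ready'"; then
    echo "OK: cluster_state is DLM_Ready on this node"
else
    echo "WARNING: cluster_state is not DLM_Ready on this node" >&2
    exit 1
fi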

Case 3 : We can check the repository database as well, using the command below.

[root@OVS-2 db]# ovs-agent-db dump_db repository
{'0004fb0000030000f1532acb312df8a2': {'alias': u'LinuxRepo-1',
'filesystem': 'ocfs2',
'fs_location': '/dev/mapper/14f504e46494c45006a504d3265522d31386d342d5a76416f',
'manager_uuid': u'0004fb00000100006c89d905006ea09d',
'mount_point': '/OVS/Repositories/0004fb0000030000f1532acb312df8a2',
'version': u'3.0'}}
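
The mount_point entries should correspond to file systems that are actually mounted on the server. A rough cross-check, assuming repositories live under /OVS/Repositories/ as in the output above:

# Sketch: verify that the repository mount points recorded in the agent DB
# are really mounted on this server.
ovs-agent-db dump_db repository \
    | grep -o "/OVS/Repositories/[0-9a-f]*" \
    | sort -u \
    | while read mp; do
          if grep -q " $mp " /proc/mounts; then
              echo "OK: $mp is mounted"
          else
              echo "MISSING: $mp is not mounted" >&2
          fi
      done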

Method 2 : Now coming to our OCFS2 file system. If we want to pull the DB information from the mounted OCFS2 (cluster) file system, we have to use the -c option with the above command.

Case 1 : We can determine the OCFS2 file system path with the command below.

[root@OVS-2 db]# ovs-agent-db get_cluster_db_home
'/poolfsmnt/0004fb00000500002d107c91a367306b/db'
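
Note that the command prints the path wrapped in quotes, so a little cleanup is needed before reusing it in a script. A minimal sketch:

# Sketch: capture the cluster DB path and list its contents.
# tr strips the quotes that ovs-agent-db prints around the path.
poolfs_db=$(ovs-agent-db get_cluster_db_home | tr -d "'")
ls -l "$poolfs_db"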

Case 2 : Let's see the contents of the OCFS2 file system; it also contains DB files.

[root@OVS-2 db]# df -h /poolfsmnt/0004fb00000500002d107c91a367306b/db
Filesystem                                                     Size  Used Avail Use% Mounted on
/dev/mapper/14f504e46494c45004e74666b654f2d6854444f2d79775143   14G  263M   14G   2% /poolfsmnt/0004fb00000500002d107c91a367306b

[root@OVS-1 ~]# cd /poolfsmnt/0004fb00000500002d107c91a367306b/db/

[root@OVS-1 db]# ll
total 36
-rw------- 1 root root 12288 Oct 12 22:16 monitored_vms
-rw------- 1 root root 12288 Oct 14 18:53 server_pool
-rw------- 1 root root 12288 Oct 12 18:27 server_pool_servers

Case 3 : If we want to check the VMs which are running on the cluster, we can dump the monitored_vms database.

[root@OVS-1 db]# ovs-agent-db dump_db -c monitored_vms
{'0004fb00-0006-0000-4689-b1d1cc6e83d9': {'repo_id': '0004fb0000030000f1532acb312df8a2',
'vm_id': '0004fb00000600004689b1d1cc6e83d9'}}

The above command shows the ID of the repository on which the vDisks are hosted, along with the UUID of the VM.

[root@OVS-1 db]# xm list
Name                                        ID   Mem VCPUs      State   Time(s)
0004fb00000600004689b1d1cc6e83d9             1  1024     1     -b----     39.2
Domain-0                                     0   823     4     r-----    640.8
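
Since the Xen domain is named after the vm_id stored in the cluster DB, the two outputs can be cross-checked. A rough sketch, assuming the dump format above and that every domain keeps that naming (run it on the node that hosts the VM):

# Sketch: check whether each VM recorded in monitored_vms is running
# under Xen on this node (domains assumed to be named after vm_id).
ovs-agent-db dump_db -c monitored_vms \
    | grep -o "'vm_id': '[0-9a-f]*'" \
    | grep -o "[0-9a-f]\{32\}" \
    | while read vm; do
          if xm list "$vm" >/dev/null 2>&1; then
              echo "running on this node: $vm"
          else
              echo "not running on this node: $vm"
          fi
      done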

Case 4 : Let's check the status of the servers present in the pool using the command below. It shows which physical node is currently the master of the pool.

[root@OVS-1 db]# ovs-agent-db dump_db -c server_pool_servers
{'OVS-1': {'is_master': True,
'node_number': 0,
'registered_ip': '192.168.111.120',
'roles': set(['utility', 'xen'])},
'OVS-2': {'is_master': False,
'node_number': 1,
'registered_ip': '192.168.111.121',
'roles': set(['utility', 'xen'])}}
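
Since each server's hostname and its is_master flag land on the same line of the dump, a simple grep (assuming the output format above) is enough to spot the current master:

# Sketch: print only the node that the cluster DB marks as pool master.
ovs-agent-db dump_db -c server_pool_servers | grep "'is_master': True"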

Case 5 : If we want to check the status of the server pool itself, we can use the command below.

[root@OVS-1 db]# ovs-agent-db dump_db -c server_pool
{'auto_remaster': True,
'pool_alias': 'mypool-1',
'pool_master_hostname': 'OVS-1',
'pool_member_ip_list': ['192.168.111.120', '192.168.111.121'],
'pool_uuid': '0004fb0000020000723250d652ed73ba',
'pool_virtual_ip': '192.168.111.112'}
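
Putting the above commands together, a small "pool at a glance" script (a sketch built only from the commands shown in this post) gives a quick summary from any member node:

#!/bin/bash
# Sketch: summarize pool health using the agent and cluster DB dumps above.
echo "--- local server state ---"
ovs-agent-db dump_db server | grep -E "cluster_state|clustered|is_master|pool_alias"
echo "--- pool master and virtual IP ---"
ovs-agent-db dump_db -c server_pool | grep -E "pool_master_hostname|pool_virtual_ip"
echo "--- pool members ---"
ovs-agent-db dump_db -c server_pool_servers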

Tip : We are using Linux bridge networking in OVS; if we want to switch to Open vSwitch, we have to change the parameter in the file below 🙂 Not related to the topic, but I found it worth sharing.

[root@OVS-2 ~]# cat /etc/ovs-agent/agent.ini | grep -i virtualnetwork
;To use open vswitch change the virtualnetwork value to openvswitch
virtualnetwork=linuxbridge
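
For completeness, a hedged sketch of the switch itself. The sed pattern assumes the stock agent.ini entry shown above, and I would expect the ovs-agent service to need a restart before the change takes effect:

# Sketch: switch the agent from Linux bridge to Open vSwitch networking.
# Take a backup first; an ovs-agent restart is assumed to be required.
cp /etc/ovs-agent/agent.ini /etc/ovs-agent/agent.ini.bak
sed -i 's/^virtualnetwork=linuxbridge$/virtualnetwork=openvswitch/' /etc/ovs-agent/agent.ini
grep -i "^virtualnetwork" /etc/ovs-agent/agent.ini
service ovs-agent restart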
