Yesterday I created a Sun Cluster in my test lab environment, running on Solaris 10.
After configuring the cluster, I used the commands below to check its status and add a resource to the cluster.
Step 1 : Instead of typing the full path to the cluster commands again and again, I added /usr/cluster/bin to the PATH:
export PATH=$PATH:/usr/cluster/bin/
Then I checked the cluster status and found that both nodes were Online.
SolNode1:> clnode status
=== Cluster Nodes ===

--- Node Status ---

Node Name    Status
---------    ------
SolNode2     Online
SolNode1     Online
Step 2 : As this is a newly configured cluster, no quorum device is configured yet. I added one shared disk in Oracle VirtualBox, which is a very easy task, and then added that disk to the cluster as the quorum device.
SolNode1:> clq status
=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

Needed   Present   Possible
------   -------   --------
1        1         1

--- Quorum Votes by Node (current status) ---

Node Name    Present   Possible   Status
---------    -------   --------   ------
SolNode2     1         1          Online
SolNode1     0         0          Online
I checked the current status of the devices present in the cluster.
SolNode2:> cldev status
=== Cluster DID Devices ===

Device Instance     Node       Status
---------------     ----       ------
/dev/did/rdsk/d1    SolNode2   Ok
/dev/did/rdsk/d4    SolNode1   Ok
I scanned for the new device in the cluster so that it could be configured as the quorum disk.
SolNode2:> cldev populate
Configuring DID devices
I checked the status with the command below to see whether the new disk had appeared.
SolNode2:> cldev status
=== Cluster DID Devices ===

Device Instance     Node       Status
---------------     ----       ------
/dev/did/rdsk/d1    SolNode2   Ok
/dev/did/rdsk/d2    SolNode1   Ok
                    SolNode2   Ok
/dev/did/rdsk/d3    SolNode1   Ok
                    SolNode2   Ok
/dev/did/rdsk/d4    SolNode1   Ok
The new disk appeared as d2, so I added it as the cluster quorum device.
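Only devices seen by both nodes make valid quorum candidates, and that check is easy to script. A minimal sketch, with the `cldev status` lines hardcoded and normalized so each line carries the device name (purely for illustration; a real script would parse the live command output):

```shell
# Sample cldev status lines, one per node per device (illustrative).
cldev_output='/dev/did/rdsk/d1 SolNode2 Ok
/dev/did/rdsk/d2 SolNode1 Ok
/dev/did/rdsk/d2 SolNode2 Ok
/dev/did/rdsk/d3 SolNode1 Ok
/dev/did/rdsk/d3 SolNode2 Ok
/dev/did/rdsk/d4 SolNode1 Ok'

# Count the nodes reporting each device; keep devices seen by both.
shared=$(printf '%s\n' "$cldev_output" |
    awk 'NF == 3 { seen[$1]++ } END { for (d in seen) if (seen[d] == 2) print d }' |
    sort)
echo "$shared"
```

With the output above this lists d2 and d3, matching the two disks visible from both nodes.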
SolNode2:> clquorum add d2
Now check the status of the quorum.
SolNode2:> clq status
=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

Needed   Present   Possible
------   -------   --------
1        1         1

--- Quorum Votes by Node (current status) ---

Node Name    Present   Possible   Status
---------    -------   --------   ------
SolNode2     1         1          Online
SolNode1     0         0          Online

--- Quorum Votes by Device (current status) ---

Device Name   Present   Possible   Status
-----------   -------   --------   ------
d2            0         0          Offline
The quorum device was showing Offline for me, so I issued the reset command, which brought the device online.
SolNode2:> clq reset
SolNode2:> clq status
=== Cluster Quorum ===

--- Quorum Votes Summary from (latest node reconfiguration) ---

Needed   Present   Possible
------   -------   --------
2        2         3

--- Quorum Votes by Node (current status) ---

Node Name    Present   Possible   Status
---------    -------   --------   ------
SolNode2     1         1          Online
SolNode1     1         1          Online

--- Quorum Votes by Device (current status) ---

Device Name   Present   Possible   Status
-----------   -------   --------   ------
d2            1         1          Online
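For reference, the Needed column follows simple majority voting: the cluster needs more than half of the possible quorum votes to stay up. A minimal sketch of that arithmetic, assuming the three-vote configuration shown above:

```shell
# Majority voting: needed = floor(possible / 2) + 1.
possible=3                        # one vote per node + one for device d2
needed=$(( possible / 2 + 1 ))    # integer division: 3 / 2 + 1 = 2
echo "$needed"
```

This matches the output: with two nodes and no quorum device the cluster had 1 possible vote needing 1, and after adding d2 it has 3 possible votes needing 2, so it survives the loss of either node.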
Step 3 : Before adding the resource to the cluster, we have to register the resource type.
SolNode2:> /usr/cluster/bin/clrt register SUNW.HAStoragePlus
SolNode2:> clrt list
SUNW.LogicalHostname:4
SUNW.SharedAddress:2
SUNW.HAStoragePlus:10
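Re-running the registration for a type that is already registered is not idempotent, so a script can guard the `clrt register` call on the listing first. A minimal sketch, with the `clrt list` output above hardcoded for illustration (a real script would capture the live command output):

```shell
# Registered resource types, as shown by clrt list (illustrative).
rt_list='SUNW.LogicalHostname:4
SUNW.SharedAddress:2
SUNW.HAStoragePlus:10'

# Register only when the type is not in the list yet.
action=$(case "$rt_list" in
    *SUNW.HAStoragePlus*) echo "skip" ;;
    *)                    echo "register" ;;
esac)
echo "$action"
```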
Step 4 : Create a ZFS pool named SharePool1. Once again, the disk used here (d3) is shared between the two nodes.
SolNode2:> zpool create -f SharePool1 /dev/did/dsk/d3s2
SolNode2:> zpool status SharePool1
  pool: SharePool1
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE     READ WRITE CKSUM
        SharePool1           ONLINE       0     0     0
          /dev/did/dsk/d3s2  ONLINE       0     0     0

errors: No known data errors
Step 5 : Create a cluster resource group (RG) and add the resource to it. Here I added the ZFS pool as an HAStoragePlus resource in the resource group.
SolNode2:> clrg create RG1
SolNode2:> clrs create -g RG1 -t SUNW.HAStoragePlus -p Zpools=SharePool1 CLSharePool1
SolNode2:> clrs status
=== Cluster Resources ===

Resource Name   Node Name   State     Status Message
-------------   ---------   -----     --------------
CLSharePool1    SolNode2    Offline   Offline
                SolNode1    Offline   Offline
I checked the status of the resource group; it was showing Offline as well.
SolNode2:> clrg status
=== Cluster Resource Groups ===

Group Name   Node Name   Suspended   Status
----------   ---------   ---------   ------
RG1          SolNode2    No          Offline
             SolNode1    No          Offline
Step 6 : We brought the resource group online on one node.
SolNode2:> clrg online -M -n SolNode2 RG1
SolNode2:> clrs status
=== Cluster Resources ===

Resource Name   Node Name   State     Status Message
-------------   ---------   -----     --------------
CLSharePool1    SolNode2    Online    Online
                SolNode1    Offline   Offline
SolNode2:> clrg status
=== Cluster Resource Groups ===

Group Name   Node Name   Suspended   Status
----------   ---------   ---------   ------
RG1          SolNode2    No          Online
             SolNode1    No          Offline
Step 7 : I migrated RG1 to the other node.
SolNode2:> clrg switch -n SolNode1 RG1
SolNode1:> clrg status
=== Cluster Resource Groups ===

Group Name   Node Name   Suspended   Status
----------   ---------   ---------   ------
RG1          SolNode2    No          Offline
             SolNode1    No          Online
SolNode1:> df -h /SharePool1/
Filesystem   size   used   avail   capacity   Mounted on
SharePool1   984M   31K    984M    1%         /SharePool1
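A script that verifies where the group landed after a switch could parse this kind of output. A minimal sketch, with the `clrg status` lines hardcoded for illustration and the group name repeated on the continuation line (a real check would read the live command output instead):

```shell
# Sample clrg status lines after the switch (illustrative).
clrg_output='RG1 SolNode2 No Offline
RG1 SolNode1 No Online'

# Extract the node where RG1 is currently Online.
online_node=$(printf '%s\n' "$clrg_output" |
    awk '$1 == "RG1" && $4 == "Online" { print $2 }')
echo "$online_node"
```

Here it prints SolNode1, confirming that the resource group, and with it SharePool1, followed the switch to the other node.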