VSP 6.3: Serviceguard-packaged VMs with NFS backing store and guest application monitoring

First, on the NFS server, we create a filesystem for the VM disks and a backing file inside it, and then we export it via NFS:

root@BL860c:/etc/dfs> cd /so-nfs-disk
root@BL860c:/so-nfs-disk> dd if=/dev/zero of=vm1-so.disk bs=1024k count=30000
30000+0 records in
30000+0 records out
root@BL860c:/so-nfs-disk> bdf
root@BL860c:/so-nfs-disk> du -sk vm1-so.disk
30720016 vm1-so.disk

The backing file needs bin:sys ownership to work:

root@BL860c:/so-nfs-disk> chown bin:sys vm1-so.disk
root@BL860c:/so-nfs-disk> ls -l
total 69828672
-rw-r--r-- 1 bin sys 4294967296 Jul 3 01:22 disk1
drwxr-xr-x 2 root root 96 Feb 24 2015 lost+found
-rw-r--r-- 1 bin sys 31457280000 Feb 24 2015 vm1-so.disk

root@BL860c:/> cat /etc/dfs/dfstab
# place share(1M) commands here for automatic execution
# on entering init state 3.
#
# share [-F fstype] [ -o options] [-d ""]
# .e.g,
# share -F nfs -o rw=engineering -d "home dirs" /home
share -F nfs -o anon=2 /so-nfs-disk
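
To activate the new export without a reboot we can run shareall and then list the shares (a minimal sketch, assuming the standard HP-UX 11i v3 dfstab workflow):

root@BL860c:/> shareall -F nfs
root@BL860c:/> share -F nfs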

Now we mount it on both VSPs. For a VM backing store the NFS mount must be a hard mount over TCP, using NFSv3.
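
The mount command itself is not in the original capture; here is a minimal sketch of what it looks like, using the same options as the fstab entry shown later (adjust the server address and path to your environment):

root@bl870c:/> mkdir -p /so-nfs-disk
root@bl870c:/> mount -F nfs -o llock,hard,proto=tcp,vers=3 19.132.168.80:/so-nfs-disk /so-nfs-disk

After mounting, bdf on both VSPs shows the share: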

root@bl870c:/> bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1392640 230392 1153232 17% /
/dev/vg00/lvol1 1835008 217280 1605168 12% /stand
/dev/vg00/lvol8 8912896 1579032 7288336 18% /var
/dev/vg00/lvol7 7847936 3290360 4521976 42% /usr
/dev/vg00/lvol4 524288 20976 499384 4% /tmp
/dev/vg00/lvol6 11993088 6365008 5584192 53% /opt
/dev/vg00/lvol5 114688 5416 108424 5% /home
19.132.168.80:/so-nfs-disk
102400000 30811758 67113984 31% /so-nfs-disk
root@bl870c:/>

root@rx7640:/> bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1409024 228976 1170896 16% /
/dev/vg00/lvol1 1835008 391976 1431816 21% /stand
/dev/vg00/lvol8 8912896 1720568 7146528 19% /var
/dev/vg00/lvol7 7864320 3283752 4544880 42% /usr
/dev/vg00/lvol4 524288 20808 499552 4% /tmp
/dev/vg00/lvol6 11862016 6335096 5483856 54% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
19.132.168.80:/so-nfs-disk
102400000 30811758 67113984 31% /so-nfs-disk

We install the VSP software:

root@bl870c:/> swlist -l bundle | grep -i integri
BB068AA B.06.40 HP-UX vPars & Integrity VM v6
SG-IVS-Toolkit B.02.00 Serviceguard Toolkit for Integrity Virtual Servers
T8718AC B.06.40 Integrity VM Online Migration Software

root@rx7640:/> swlist -l bundle | grep -i integri
BB068AA B.06.40 HP-UX vPars & Integrity VM v6
SG-IVS-Toolkit B.02.00 Serviceguard Toolkit for Integrity Virtual Servers
T8718AC B.06.40 Integrity VM Online Migration Software

We are going to create a shared virtual switch on both VSP's, with the same net, same vlan:

root@rx7640:/> hpvmnet -c -S vmprod -n 0
root@rx7640:/> hpvmnet
Name Number State Mode NamePPA MAC Address IPv4 Address
===================== ====== ======= ========= ======= ============== ===============
localnet 1 Up Shared N/A N/A
vmprod 2 Down Shared lan0 19.0.0.31
root@rx7640:/> hpvmnet -b -S vmprod
root@rx7640:/> hpvmnet
Name Number State Mode NamePPA MAC Address IPv4 Address
===================== ====== ======= ========= ======= ============== ===============
localnet 1 Up Shared N/A N/A
vmprod 2 Up Shared lan0 0x00110a429e7c 19.0.0.31

root@bl870c:/> hpvmnet -c -S vmprod -n 0
root@bl870c:/> hpvmnet -b -S vmprod
root@bl870c:/> hpvmnet -S vmprod
Name Number State Mode NamePPA MAC Address IPv4 Address
===================== ====== ======= ========= ======= ============== ===============
vmprod 2 Up Shared lan0 0x0017a477000c 19.132.168.75

No port is configured.

[Configured Host IP Address(es)]
19.132.168.75

Now, on one of the VSPs, we create the test VM, with a file backing store under the NFS filesystem.

root@rx7640:/> hpvmcreate -P vm1 -K 19.132.168.79 -L 255.0.0.0 -O HPUX -c 2 -r 2048
root@rx7640:/> hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # Type OS Type State #VCPUs #Devs #Nets Memory
==================== ===== ==== ======= ========= ====== ===== ===== =======
vm1 1 SH HPUX Off 2 0 0 2048 MB
root@rx7640:/opt/hpvm/bin> hpvmmodify -P vm1 -a network:avio_lan::vswitch:vmprod

root@rx7640:/opt/hpvm/bin> hpvmmodify -P vm1 -a disk:avio_stor::file:/so-nfs-disk/vm1-so.disk
root@rx7640:/opt/hpvm/bin> hpvmstatus -v -P vm1
Version B.06.40.00
[Virtual Machine Details]
Virtual Machine Name VM # Type OS Type State
==================== ===== ==== ======= ========
vm1 1 SH HPUX Off

[Runnable Status Details]
Runnable status : Runnable

[Remote Console]
Remote Console Ip Address: 19.132.168.79
Remote Console Net Mask: 255.0.0.0

[Authorized Administrators]
Oper Groups :
Admin Groups :
Oper Users :
Admin Users :

[Virtual CPU Details]
#vCPUs Ent Min Ent Max
====== ======= =======
2 10.0% 100.0%

[Memory Details]
Total Reserved Overhead
Memory Memory Memory
======= ======== ========
2048 MB 64 MB 128 MB

[Storage Interface Details]
vPar/VM Physical
Device Adapter Bus Dev Ftn Tgt Lun Storage Device
======= ========== === === === === === ========= =========================
disk avio_stor 0 1 0 0 0 file /so-nfs-disk/vm1-so.disk

[Network Interface Details]
Interface Adaptor Name/Num PortNum Bus Dev Ftn Mac Address
========= ========== ===================== ======= === === === =================
vswitch avio_lan vmprod 1 0 0 0 56-45-84-be-c6-7e

[Direct I/O Interface Details]
vPar/VM Physical
Device Adapter Bus Dev Ftn Mac Address Storage Device
======= ======= === === === ================= ========= ===========

[Misc Interface Details]
vPar/VM Physical
Device Adapter Bus Dev Ftn Tgt Lun Storage Device
======= ========== === === === === === ========= =========================
serial com1 tty console

We now start the VM and, in our case, install it from Ignite:

hpvmstart -P vm1

...........

add net default: gateway 19.132.168.1
* Reading configuration information from server...
* Loading configuration utility...
* Beginning installation from source: 19.132.168.80
======= 07/03/16 01:40:40 EDT Starting system configuration...
* Configure_Disks: Begin
* Will install B.11.31 onto this system.
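
To follow the Ignite installation interactively we can attach to the guest's virtual console from the VSP (a quick aside, not part of the original capture):

root@rx7640:/> hpvmconsole -P vm1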

Once the guest has been installed, we can try online VM migration (OVMM).

We first configure ssh between our VSPs. Normally you would use a dedicated trunk (preferably 10 GbE) for online migration; since this is just an example, we use the same network/NICs for everything:

Normal private rsa key config:

# /opt/hpvm/bin/secsetup -r rx7640 bl870c

root@rx7640:/> cat /.ssh/*keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxtbxqLGWPie38IRcUqjqPedL/J4nbjpMuk8o0OyPN/fLC3i8RebwAibpcKj6kB7E0Gcxz2I0w7t0o8yC2DGjbGbFF2HLNJXFO3JuU/S9f9l5s3cPdh3XMSxoGLENKQUvbp4Ryy/curTSMIyddqT/xfoHkeVKc8PzUQdujj+HRDwA8rYwGMt29IF1Ml40REfOt9eaZWI4JfAEWuX5ldvvtCgyp1wHa2zCDbEAXfea5O5N76C3T+Zty30AUq0vEwarAEphd6AdKboa91HC+Yt1pJHEpve54qzm1bhi/A6R0C9WiTp3qaE8XsxVO9AWJLUSvEId7lmP0xXjSMyHo9a//Q== root@rx6600
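
Before attempting the migration it is worth confirming that passwordless root ssh works in both directions (a quick check, not part of the original capture):

root@rx7640:/> ssh bl870c hostname
bl870c
root@bl870c:/> ssh rx7640 hostname
rx7640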

And once that is ready, let's migrate from VSP rx7640 to VSP bl870c:

root@rx7640:/.ssh> hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # Type OS Type State #VCPUs #Devs #Nets Memory
==================== ===== ==== ======= ========= ====== ===== ===== =======
vm1 1 SH HPUX On (OS) 2 2 1 1792 MB
root@rx7640:/> hpvmmigrate -o -P vm1 -h bl870c
hpvmmigrate: Connected to target VSP using 'bl870c'
hpvmmigrate: Starting vPar/VM 'vm1' on target VSP host 'bl870c'
(C) Copyright 2000 - 2016 Hewlett-Packard Development Company, L.P.

hpvmmigrate: Init phase (step 4) - progress 0%
Creation of VM minor device 1
Device file = /var/opt/hpvm/uuids/08667964-3d34-11e6-a109-00110a429e7c/vm_dev
guestStatsStartThread: Started guestStatsCollectLoop - thread = 6
allocating datalogger memory: FF800000-FF900000 (1024KB) ramBaseLog 6000000180700000
allocating firmware RAM (fff00000-100000000, 1024KB) ramBaseFw 6000000180600000
Starting event polling thread

Online migration initiated by source 'rx7640' (19.0.0.31)
Target:0: online migration started with encryption algorithm AES-128-CBC.
hpvmmigrate: Init phase completed successfully.
hpvmmigrate: Copy phase - progress 0%

The server continues to respond to ping; there is only a small lapse during the frozen phase:

hpvmmigrate: Init phase completed successfully.
hpvmmigrate: Copy phase completed successfully.
hpvmmigrate: I/O quiesce phase completed successfully.
hpvmmigrate: Frozen phase (step 4) - progress 6%

64 bytes from 19.0.0.71: icmp_seq=66. time=5. ms
64 bytes from 19.0.0.71: icmp_seq=67. time=4. ms
64 bytes from 19.0.0.71: icmp_seq=68. time=5. ms
64 bytes from 19.0.0.71: icmp_seq=99. time=1610. ms
64 bytes from 19.0.0.71: icmp_seq=100. time=597. ms
64 bytes from 19.0.0.71: icmp_seq=101. time=25. ms
64 bytes from 19.0.0.71: icmp_seq=102. time=1. ms
64 bytes from 19.0.0.71: icmp_seq=103. time=0. ms

hpvmmigrate: Frozen phase (step 27) - progress 70%
Event: configuration file renamed to /var/opt/hpvm/uuids/08667964-3d34-11e6-a109-00110a429e7c/vmm_config.current
hpvmmigrate: Frozen phase completed successfully.
hpvmmigrate: vPar/VM migrated successfully.

So the online migration worked. Now we are going to configure Serviceguard and convert our VM into a Serviceguard package.

First we install Serviceguard and configure the cluster; I won't go into the installation details, since they are not related to the topic at hand.
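
As a quick sanity check that Serviceguard is installed on both nodes (the exact bundle name may differ in your environment):

root@rx7640:/> swlist -l bundle | grep -i serviceguard
root@bl870c:/> swlist -l bundle | grep -i serviceguard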

root@rx7640:/etc/cmcluster> cat cmclnodelist
rx7640 root
bl870c root
root@rx7640:/etc/cmcluster> scp cmclnodelist bl870c:/etc/cmcluster/
cmclnodelist 100% 28 0.0KB/s 0.0KB/s 00:00
root@rx7640:/etc/cmcluster> cmquerycl -v -k -w local -C cluster-test.ascii -n bl870c -n rx7640 -q 19.0.0.22
Number of configured IPv6 interfaces found: 0.
Warning: Unable to determine local domain name for rx7640
check_cdsf_group, no cdsf group specified.
Looking for other clusters ... Done
Gathering storage information
Found 1 devices on node bl870c
Found 2 devices on node rx7640
Analysis of 3 devices should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Found 1 volume groups on node bl870c
Found 1 volume groups on node rx7640
Analysis of 2 volume groups should take approximately 1 seconds
0%----10%----20%----30%----40%----50%----60%----70%----80%----90%----100%
Gathering network information
Beginning network probing

root@rx7640:/etc/cmcluster> cmcheckconf -v -C cluster-test.ascii -k
Begin cluster verification...
Checking cluster file: cluster-test.ascii.
MAX_CONFIGURED_PACKAGES configured to 300.
Checking nodes ... Done
Checking existing configuration ... Done
MAX_CONFIGURED_PACKAGES configured to 300.
Serviceguard Extension for RAC is installed on node: bl870c
Serviceguard Extension for RAC is not installed on node rx7640
Please make sure that Serviceguard Extension for RAC is installed on all nodes in the cluster before attempting to deploy Oracle RAC in this cluster.
Gathering storage information

root@rx7640:/etc/cmcluster> cmapplyconf -v -C cluster-test.ascii -k
Begin cluster verification...
Checking cluster file: cluster-test.ascii
MAX_CONFIGURED_PACKAGES configured to 300.
check_cdsf_group, no cdsf group specified.
Adding node bl870c to cluster hpvm_cluster
Adding node rx7640 to cluster hpvm_cluster
Completed the cluster creation
root@rx7640:/etc/cmcluster> cmviewcl

CLUSTER STATUS
hpvm_cluster down

NODE STATUS STATE
bl870c down unknown
rx7640 down unknown

root@rx7640:/etc/cmcluster> cmruncl
Serviceguard Extension for RAC is installed on node: bl870c
Serviceguard Extension for RAC is not installed on node rx7640
Please make sure that Serviceguard Extension for RAC is installed on all nodes in the cluster before attempting to deploy Oracle RAC in this cluster.
cmruncl: Validating network configuration...
cmruncl: Network validation complete
Waiting for cluster to form .... done
Cluster successfully formed.
Check the syslog files on all nodes in the cluster to verify that no warnings occurred during startup.
root@rx7640:/etc/cmcluster> cmviewcl

CLUSTER STATUS
hpvm_cluster up

NODE STATUS STATE
bl870c up running
rx7640 up running

OK, so now we have a Serviceguard cluster running; let's create the package with our VM inside.

First, because of a bug, I have to run the following command on both VSP nodes (note that from this point on the test VM appears as vm2 in the captures):

root@rx7640:/etc/cmcluster> hpvmmodify -P vm2 -x runnable_status=enabled -x modify_status=enabled -x visible_status=enabled
root@bl870c:/etc/cmcluster> hpvmmodify -P vm2 -x runnable_status=enabled -x modify_status=enabled -x visible_status=enabled

Otherwise the deploy command would fail. Now we run it:

root@rx7640:/etc/cmcluster> cmdeployvpkg -P vm2 -n rx7640 -n bl870c

This is the HP Serviceguard Integrity Virtual Servers Toolkit package creation
script.

This script will assist the user to develop and manage Serviceguard
packages for VM and associated package configuration files.

We recommend you to review and modify the configuration file created by this
script, as needed for your particular environment.

Do you wish to continue? (y/n):y

[Virtual Machine Details]
Virtual Machine Name VM # Type OS Type State
==================== ===== ==== ======= ========
vm2 1 SH HPUX Off
[Storage Interface Details]
vPar/VM Physical
Device Adapter Bus Dev Ftn Tgt Lun Storage Device
======= ========== === === === === === ========= =========================
disk avio_stor 0 0 0 0 0 file /so-nfs-disk/vm1-so.disk
[Network Interface Details]
Interface Adaptor Name/Num PortNum Bus Dev Ftn Mac Address
========= ========== ===================== ======= === === === =================
vswitch avio_lan vmprod 1 0 1 0 06-71-75-1a-3a-7c

Package the VM summarized above? (y/n):y

Checking the VM and cluster configuration

Determining package attributes and modules...

Creating modular style package files for VM : vm2

Review and/or modify the package configuration file (optional)? (y/n):n

Copy the package files to each cluster member? (y/n):y

The VM has been successfully configured as a Serviceguard package.

Use cmcheckconf check the package configuration file (optional)? (y/n):y
cmcheckconf: Verification completed. No errors found.
Use the cmapplyconf command to apply the configuration.
Apply the package configuration file to the cluster (optional)? (y/n):y

Modify the package configuration ([y]/n)? y
Completed the cluster update

Please see the HP Serviceguard Toolkit for Integrity Virtual Servers user guide for
additional instructions on configuring Virtual Machines or Virtual Partitions
(vPar) as Serviceguard packages.

Before running this package the following steps may need to be performed:

1. Review the files located in /etc/cmcluster/vm2/.
2. Add new LVM Volume Groups to the cluster configuration file, if any.
3. Check the cluster and/or package configuration using the cmcheckconf command.
4. Apply the cluster and/or package configuration using the cmapplyconf command.
5. Un-mount file systems and deactivate non-shared volumes used by the VM.
6. Start dependent packages associated with shared LVM, CVM or CFS backing stores.
7. Start the package (on the node where the vm2 is running) using: cmrunpkg vm2

root@rx7640:/etc/cmcluster>

root@rx7640:/etc/cmcluster> cmviewcl

CLUSTER STATUS
hpvm_cluster up

NODE STATUS STATE
bl870c up running
rx7640 up running

UNOWNED_PACKAGES

PACKAGE STATUS STATE AUTO_RUN NODE
vm2 down halted disabled unowned

root@rx7640:/etc/cmcluster> umount /so-nfs-disk
root@rx7640:/etc/cmcluster> hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # Type OS Type State #VCPUs #Devs #Nets Memory
==================== ===== ==== ======= ========= ====== ===== ===== =======
vm2 1 SH HPUX Off 2 1 1 1792 MB
root@rx7640:/etc/cmcluster> cmrunpkg vm2
Running package vm2 on node rx7640
Successfully started package vm2 on node rx7640
cmrunpkg: All specified packages are running
root@rx7640:/etc/cmcluster> hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # Type OS Type State #VCPUs #Devs #Nets Memory
==================== ===== ==== ======= ========= ====== ===== ===== =======
vm2 1 SH HPUX On (EFI) 2 1 1 1792 MB
root@rx7640:/etc/cmcluster> cmviewcl

CLUSTER STATUS
hpvm_cluster up

NODE STATUS STATE
bl870c up running
rx7640 up running

PACKAGE STATUS STATE AUTO_RUN NODE
vm2 up running disabled rx7640
root@rx7640:/etc/cmcluster> bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 1409024 229936 1169936 16% /
/dev/vg00/lvol1 1835008 391976 1431816 21% /stand
/dev/vg00/lvol8 8912896 1746352 7120792 20% /var
/dev/vg00/lvol7 7864320 3283752 4544880 42% /usr
/dev/vg00/lvol4 524288 20808 499552 4% /tmp
/dev/vg00/lvol6 11862016 6335096 5483856 54% /opt
/dev/vg00/lvol5 131072 5480 124616 4% /home
19.132.168.80:/so-nfs-disk
102400000 35006078 63181809 36% /so-nfs-disk
root@rx7640:/etc/cmcluster>

Now that we have our package/VM running inside Serviceguard, we can move the package from one VSP to another using the cmhaltpkg/cmrunpkg commands; this is an offline migration.

root@rx7640:/> cmhaltpkg vm2
root@rx7640:/> cmrunpkg -n bl870c vm2
root@rx7640:/> cmviewcl

CLUSTER STATUS
hpvm_cluster up

NODE STATUS STATE
bl870c up running

PACKAGE STATUS STATE AUTO_RUN NODE
vm2 up running disabled bl870c

NODE STATUS STATE
rx7640 up running

If we want to do an online migration we have to use cmmovevpkg from the toolkit. The problem is that NFS is not supported as a backing store with cmmovevpkg, so to get it working we have to remove the NFS filesystem from the package and add the NFS mount to /etc/fstab instead:

root@rx7640:/etc/cmcluster/scripts/tkit/vtn> cat /etc/fstab
# System /etc/fstab file. Static information about the file systems
# See fstab(4) and sam(1M) for further details on configuring devices.
/dev/vg00/lvol3 / vxfs delaylog 0 1
/dev/vg00/lvol1 /stand vxfs tranflush 0 1
/dev/vg00/lvol4 /tmp vxfs delaylog 0 2
/dev/vg00/lvol5 /home vxfs delaylog 0 2
/dev/vg00/lvol6 /opt vxfs delaylog 0 2
/dev/vg00/lvol7 /usr vxfs delaylog 0 2
/dev/vg00/lvol8 /var vxfs delaylog 0 2
bl860c:/so-nfs-disk /so-nfs-disk nfs llock,hard,proto=tcp 0 0
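
Since the filesystem is no longer managed by the package, the same fstab entry must exist and be mounted on every VSP node. A quick check on the other node (not part of the original capture):

root@bl870c:/> grep so-nfs-disk /etc/fstab
bl860c:/so-nfs-disk /so-nfs-disk nfs llock,hard,proto=tcp 0 0
root@bl870c:/> bdf /so-nfs-disk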

Comment out all the filesystem-related entries in the package configuration file:

root@rx7640:/etc/cmcluster/vm2> grep -E '^# fs|filesystem' vm2.conf
#module_name sg/filesystem
#module_version 1
#operation_sequence $SGCONF/scripts/sg/filesystem.sh
# fs_mount_retry_count 0
# fs_umount_retry_count 1
# fs_name /so-nfs-disk
# fs_server 19.132.168.80
# fs_directory /so-nfs-disk
# fs_type "nfs"
# fs_mount_opt "-o llock,hard"
# fs_umount_opt ""
# fs_fsck_opt ""

Now we stop the package and then apply the config:

root@rx7640:/>cmhaltpkg vm2
root@rx7640:/>cmapplyconf -v -P /etc/cmcluster/vm2/vm2.conf
root@rx7640:/> cmrunpkg -n bl870c vm2

Config done; let's try out the live migration:

root@bl870c:/> cmviewcl

CLUSTER STATUS
hpvm_cluster up

NODE STATUS STATE
rx7640 up running
bl870c up running

PACKAGE STATUS STATE AUTO_RUN NODE
vm2 up running disabled bl870c
root@bl870c:/> hpvmstatus
[Virtual Machines]
Virtual Machine Name VM # Type OS Type State #VCPUs #Devs #Nets Memory
==================== ===== ==== ======= ========= ====== ===== ===== =======
vm2 1 SH HPUX On (EFI) 2 1 1 1792 MB

root@bl870c:/> cmmovevpkg -v -h rx7640 -P vm2
Moving (online) Serviceguard package vm2 from node bl870c to node rx7640
Disabling package failover for package : vm2
Package vm2 is already disabled
WARNING: Any failure of the package vm2 or the node bl870c, will cause the package to fail and not fail over
cmmodpkg: Completed successfully on all packages specified
hpvmmigrate: Connected to target VSP using 'rx7640'
hpvmmigrate: Starting vPar/VM 'vm2' on target VSP host 'rx7640'
(C) Copyright 2000 - 2016 Hewlett-Packard Development Company, L.P.
Creation of VM minor device 1
Device file = /var/opt/hpvm/uuids/cca60b12-4222-11e6-9400-00110a429e7c/vm_dev
guestStatsStartThread: Started guestStatsCollectLoop - thread = 6
allocating datalogger memory: FF800000-FF900000 (1024KB) ramBaseLog 6000000180700000
allocating firmware RAM (fff00000-100000000, 1024KB) ramBaseFw 6000000180600000
Starting event polling thread

hpvmmigrate: Init phase (step 4) - progress 0%

Online migration initiated by source 'bl870c' (19.132.168.75)
Target:0: online migration started with encryption algorithm AES-128-CBC.
hpvmmigrate: Init phase completed successfully.
hpvmmigrate: Copy phase completed successfully.
hpvmmigrate: I/O quiesce phase completed successfully.
hpvmmigrate: Frozen phase completed successfully.
hpvmmigrate: vPar/VM migrated successfully.
Waiting for Serviceguard to detect node change....
Re-integrating vm2 in Serviceguard on rx7640
Package vm2 is already enabled on node rx7640
cmmodpkg: Completed successfully on all packages specified
Running package vm2 on node rx7640
Successfully started package vm2 on node rx7640
cmrunpkg: All specified packages are running
cmmodpkg: Completed successfully on all packages specified
Package vm2 is already enabled on node rx7640
cmmodpkg: Completed successfully on all packages specified

root@bl870c:/> cmviewcl

CLUSTER STATUS
hpvm_cluster up

NODE STATUS STATE
rx7640 up running

PACKAGE STATUS STATE AUTO_RUN NODE
vm2 up running enabled rx7640

NODE STATUS STATE
bl870c up running

Ok working...

Now let's configure application monitoring inside the guest VM, so that a failure of the application running in the guest will trigger a restart or a package failover to another node.

On the VSP Host:

root@rx7640:/> mkdir /etc/cmcluster/cmappmgr
root@rx7640:/> cd /etc/cmcluster/cmappmgr
/opt/java8/bin/keytool -genkey -alias clientprivate -keystore client.private -storepass clientpw -keypass clientpw -validity 400
/opt/java8/bin/keytool -export -alias clientprivate -keystore client.private -file temp.key -storepass clientpw
/opt/java8/bin/keytool -import -noprompt -alias clientpublic -keystore client.public -file temp.key -storepass public
root@rx7640:/etc/cmcluster/cmappmgr> ls
client.private client.public temp.key

On the VM guest:

root@vm1:/opt/hp/cmappserver> /opt/java8/bin/keytool -genkey -alias serverprivate -keystore server.private -storepass serverpw -keypass serverpw -validity 400
root@vm1:/opt/hp/cmappserver> /opt/java8/bin/keytool -export -alias serverprivate -keystore server.private -file temp.key -storepass serverpw
Certificate stored in file
root@vm1:/opt/hp/cmappserver> /opt/java8/bin/keytool -import -noprompt -alias serverpublic -keystore server.public -file temp.key -storepass public
Certificate was added to keystore

Key distribution:

Copy from the host to the guest

root@rx7640:/etc/cmcluster/cmappmgr> scp client.public vm1:/opt/hp/cmappserver
Password:
client.public 100% 894 0.9KB/s 0.9KB/s 00:00

Guest to host

root@vm1:/opt/hp/cmappserver> scp server_vm1.public rx7640:/etc/cmcluster/cmappmgr/
The authenticity of host 'rx7640 (19.0.0.31)' can't be established.
ECDSA key fingerprint is da:e9:1b:e3:a4:0a:c3:06:36:7d:31:08:6f:75:6e:ee.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'rx7640,19.0.0.31' (ECDSA) to the list of known hosts.
Password:
server_vm1.public

And from this host to the rest of the VSP hosts:

root@rx7640:/etc/cmcluster> scp -pr cmappmgr root@bl870c:/etc/cmcluster/
client.private 100% 1303 1.3KB/s 1.3KB/s 00:00
temp.key 100% 825 0.8KB/s 1.3KB/s 00:00
client.public 100% 894 0.9KB/s 1.3KB/s 00:00
server_vm1.public 100% 894 0.9KB/s 1.3KB/s 00:00

Configure the cmappmgr.conf file and copy it to all VSPs:

root@rx7640:/etc/cmcluster> cat /etc/cmappmgr.conf
###############################################################
# (C) Copyright 2009-2010 Hewlett-Packard Development Company, L.P.
# @(#) SG cmappmgr Configuration File
# @(#) Product Name : HP SG cmappmgr conf file
# @(#) Product Version : A.11.20.00
# @(#) Patch Name : PHSS_43842
#
###############################################################
keyStore=/etc/cmcluster/cmappmgr/client.private
# If unspecified, the default value is /etc/client.private
keyStorePassword=
# If unspecified, the default value is clientpw
# Specify node name where the trustStore comes from, followed by a ":", e.g.,
vm1:
trustStore=/etc/cmcluster/cmappmgr/server_vm1.public
trustStorePassword=public
# If unspecified, the default value is /etc/server.public
# If unspecified, the default value is public
root@rx7640:/etc/cmcluster> scp /etc/cmappmgr.conf bl870c:/etc/cmappmgr.conf
cmappmgr.conf

Install cmappserver depots on VM guests:

root@rx7640:/opt/hp/serviceguard/cmappserver> cd 11iv3
root@rx7640:/opt/hp/serviceguard/cmappserver/11iv3> ls
cmappserver.depot
root@rx7640:/opt/hp/serviceguard/cmappserver/11iv3> scp cmappserver.depot vm1:/tmp/
Password:
cmappserver.depot 100% 780KB 780.0KB/s 780.0KB/s 00:00
root@rx7640:/opt/hp/serviceguard/cmappserver/11iv3>

root@vm1:/tmp> swlist -s $PWD/cmappserver.depot
# Initializing...
# Contacting target "vm1"...
#
# Target: vm1:/tmp/cmappserver.depot
#

#
# No Bundle(s) on vm1:/tmp/cmappserver.depot
# Product(s):
#

CMAPPSERVER A.11.20.00 Application Server for Serviceguard (PHSS_43153)
root@vm1:/tmp> swinstall -s $PWD/cmappserver.depot CMAPPSERVER

======= 07/12/16 12:56:01 MDT BEGIN swinstall SESSION
(non-interactive) (jobid=vm1-0002)

* Session started for user "root@vm1".

* Beginning Selection
* Target connection succeeded for "vm1:/".
* Source: /tmp/cmappserver.depot
* Targets: vm1:/
* Software selections:
CMAPPSERVER.CMAPP,r=A.11.20.00,a=HP-UX_B.11.31_IA,v=HP
* Selection succeeded.

* Beginning Analysis and Execution
* Session selections have been saved in the file
"/.sw/sessions/swinstall.last".
* The analysis phase succeeded for "vm1:/".
* The execution phase succeeded for "vm1:/".
* Analysis and Execution succeeded.

NOTE: More information may be found in the agent logfile using the
command "swjob -a log vm1-0002 @ vm1:/".

======= 07/12/16 12:56:50 MDT END swinstall SESSION (non-interactive)
(jobid=vm1-0002)

Once it is installed, modify the cmappserver.conf file on the guest if necessary.
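
A quick way to confirm the server side is up inside the guest is simply to look for its process (a generic check, not from the original capture; the exact daemon invocation may differ):

root@vm1:/> ps -ef | grep -i cmappserver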

Now we can test if our configuration is working:

root@rx7640:/opt/hp/serviceguard/cmappserver> /usr/sbin/cmappmgr -node vm1 -cmappserver_timeout 240 -service /usr/sbin/ping 19.132.168.1
java version is "1.6.0.27"
==========cmappmgr command output is also logged in syslog
/opt/java6/jre/bin/java
-node vm1 -cmappserver_timeout 240 -service /usr/sbin/ping 19.132.168.1

On the guest we can see the process running:

root@vm1:/opt/hp/cmappserver> ps -ef | grep -i ping
root 5195 5055 0 13:07:35 ? 0:00 /usr/sbin/ping 19.132.168.1

So everything is configured correctly; we can now modify our package to add the needed service.

We add the following to the /etc/cmcluster/vm2/vm2.conf file:

service_name vm2-ping
service_cmd "/usr/sbin/cmappmgr -node vm1 -cmappserver_timeout 240 -service /usr/sbin/ping 19.132.168.1"
service_restart none
#service_fail_fast_enabled
service_halt_timeout 300

root@rx7640:/etc/cmcluster/vm2> cmapplyconf -P vm2.conf
One or more of the specified packages are running. Any error in the
proposed configuration change could cause these packages to fail.
Ensure configuration changes have been tested before applying them.
Modify the package configuration ([y]/n)? y
Completed the cluster update

We can check that our service is configured:

root@rx7640:/etc/cmcluster/vm2> cmviewcl -v -p vm2 | grep -i service
Service up 0 0 vm2
Service up 0 0 vm2-ping

Now let's see if it works; I'm going to kill the ping inside the VM:

root@vm1:/> kill -9 5195
root@vm1:/> ps -ef | grep -i ping
root 5223 4872 0 13:15:57 pts/0 0:00 grep -i ping

In the host's syslog:

Jul 12 13:14:05 rx7640 cmappmgr[2234]: Done.
Jul 12 13:22:36 rx7640 cmappmgr[2234]: Main client session exits...
Jul 12 13:22:36 rx7640 cmappmgr[2234]: EOF reached during cmappmgr message receive
Jul 12 13:22:36 rx7640 cmappmgr[2234]: PROGRAM EXIT CODE 0
Jul 12 13:22:36 rx7640 cmserviced[18109]: Service vm2-ping completed successfully with an exit(0).
Jul 12 13:22:36 rx7640 cmcld[18103]: Service vm2-ping in package vm2 has gone down.
Jul 12 13:22:36 rx7640 cmcld[18103]: Disabled node rx7640 from running package vm2.
Jul 12 13:23:11 rx7640 cmcld[18103]: (bl870c) Started package vm2 on node bl870c.

After the failover we can confirm the package is now running on bl870c, and the monitored ping has been restarted inside the guest:

root@bl870c:/> cmviewcl

CLUSTER STATUS
hpvm_cluster up

NODE STATUS STATE
rx7640 up running
bl870c up running

PACKAGE STATUS STATE AUTO_RUN NODE
vm2 up running enabled bl870c

root@vm1:/> ps -ef | grep -i ping
root 2604 2507 0 13:26:20 ? 0:00 /usr/sbin/ping 19.132.168.1
