
Part 8. Sun Cluster 3.2: Creating a Failover RG Using Non-Voting Nodes (Zones)

Why use non-voting nodes for your cluster apps:

-Criteria for Using Support for Solaris Zones Directly Through the RGM

Use support for Solaris zones directly through the RGM if any of the following criteria are met:
Your application cannot tolerate the additional failover time that is required to boot a zone.
You require minimum downtime during maintenance.
You require dual-partition software upgrade.
You are configuring a data service that uses a shared address resource for network load balancing.

-Requirements for Using Support for Solaris Zones Directly Through the RGM

-If you plan to use support for Solaris zones directly through the RGM for an application, ensure that the following requirements are met:

The application is supported to run in non-global zones.
The data service for the application is supported to run in non-global zones.

-If you use support for Solaris zones directly through the RGM, configure resources and resource groups as follows:

Ensure that resource groups that are related by an affinity are configured to run in the same zone. An affinity between resource groups that are configured to run in different zones on the same node has no effect; see the sketch after this list.

Configure every application in the non-global zone as a resource.
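
As a sketch of the affinity rule (the DB-rg and WEB-rg names are made up for illustration, not part of this setup): a strong positive affinity only takes effect when both groups are configured on the same node:zone list:

[root@vm1:/]# clrg create -n vm1:zone1,vm2:zone1 DB-rg
[root@vm1:/]# clrg create -n vm1:zone1,vm2:zone1 -p RG_affinities=++DB-rg WEB-rg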

We are first going to install two zones, one on node vm1 and one on node vm2. The zone name will be the same on both, zone1, which makes administration easier.

We add the following to /etc/hosts on all our nodes:

11.0.0.106 zone1
11.0.0.107 zone2

We configure zone1 in vm1:

[root@vm1:/]# zfs create rpool/zones
[root@vm1:/]# zfs set mountpoint=/zones rpool/zones
[root@vm1:/]# mkdir /zones/zone1 (10-15 04:42)
[root@vm1:/]# chmod -R 700 /zones (10-15 04:42)
[root@vm1:/]# zonecfg -z zone1 (10-15 04:37)
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create -b
zonecfg:zone1> add net
zonecfg:zone1:net> set address=11.0.0.106
zonecfg:zone1:net> set physical=e1000g2
zonecfg:zone1:net> end
zonecfg:zone1> set autoboot=true
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> commit
zonecfg:zone1> exit
[root@vm1:/]# zoneadm list -cv (10-15 04:43)
  ID NAME        STATUS      PATH            BRAND    IP
   0 global      running     /               native   shared
   - zone1       configured  /zones/zone1    native   shared
[root@vm1:/]# zoneadm -z zone1 install (10-15 04:43)
A ZFS file system has been created for this zone.
Preparing to install zone <zone1>.
Creating list of files to copy from the global zone.
Copying <5999> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1251> packages on the zone.
Initialized <1251> packages on zone.
Zone is initialized.
Installation of <1> packages was skipped.
The file </zones/zone1/root/var/sadm/system/logs/install_log> contains a log of the zone installation.
[root@vm1:/]# zoneadm -z zone1 ready (10-15 04:50)

Once the zone is in the ready state, we assign it a private hostname so it can communicate with the cluster over the private network:
[root@vm1:/]# clnode set -p zprivatehostname=zone1 vm1:zone1 (10-15 04:51)
[root@vm1:/]# zoneadm -z zone1 boot (10-15 04:51)
[root@vm1:/]# zlogin -C zone1
We go through the zone's initial system configuration (hostname, time zone, root password, and so on) ....

Once finished, we check that the network is OK:
zone1# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
e1000g2:3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
        inet 11.0.0.106 netmask ffffff00 broadcast 11.0.0.255
clprivnet0:1: flags=1009843<UP,BROADCAST,RUNNING,MULTICAST,MULTI_BCAST,PRIVATE,IPv4> mtu 1500 index 5
        inet 172.16.4.65 netmask fffffe00 broadcast 172.16.5.255
zone1#

Everything is ready for zone1. Now we are going to create zone1 on vm2 (its hostname will be zone2). We export the zone1 configuration on vm1 so we can use it to create the zone on vm2:

[root@vm1:/]# zonecfg -z zone1 export > zone1.cfg

We modify the .cfg file and change the IP address:
set address=11.0.0.106 becomes: set address=11.0.0.107
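
If you prefer to script the edit rather than open an editor, plain sed does it (nothing cluster-specific here):

[root@vm1:/]# sed 's/address=11.0.0.106/address=11.0.0.107/' zone1.cfg > zone1.cfg.new
[root@vm1:/]# mv zone1.cfg.new zone1.cfg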

We copy the file to vm2 via scp:
[root@vm1:/]# scp zone1.cfg vm2:/tmp/ (10-15 05:01)
zone1.cfg 100% |*******************************************************************************************************************************| 283 00:00

[root@vm2:/tmp]# zonecfg -z zone1 -f zone1.cfg (10-12 18:43)
[root@vm2:/tmp]# zoneadm list -cv (10-12 18:43)
  ID NAME        STATUS      PATH            BRAND    IP
   0 global      running     /               native   shared
   - zone1       configured  /zones/zone1    native   shared
[root@vm2:/tmp]# mkdir /zones/zone1 (10-12 18:43)
[root@vm2:/tmp]# chmod -R 700 /zones (10-12 18:43)
[root@vm2:/tmp]# zoneadm -z zone1 verify (10-12 18:43)
[root@vm2:/tmp]# zoneadm -z zone1 install (10-12 18:44)
A ZFS file system has been created for this zone.
Preparing to install zone <zone1>.
Creating list of files to copy from the global zone.
Copying <5999> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1251> packages on the zone.
Initialized <1251> packages on zone.
Zone is initialized.
Installation of <1> packages was skipped.
The file </zones/zone1/root/var/sadm/system/logs/install_log> contains a log of the zone installation.
[root@vm2:/tmp]# zoneadm -z zone1 ready (10-12 18:48)
[root@vm2:/tmp]# clnode set -p zprivatehostname=zone2 vm2:zone1 (10-12 19:06)
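
As on vm1, the zone still has to be booted and taken through its initial system configuration; those steps were not captured above, but they are the same two commands:

[root@vm2:/tmp]# zoneadm -z zone1 boot
[root@vm2:/tmp]# zlogin -C zone1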

On both nodes, we have to modify /etc/nsswitch.conf and put cluster in front of files for the hosts and netmasks entries:

hosts: cluster files
ipnodes: files
networks: files
protocols: files
rpc: files
netmasks: cluster files
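
A quick way to confirm the entries took effect on each node (the output shown is simply what we expect given the edits above):

[root@vm1:/]# egrep '^(hosts|netmasks):' /etc/nsswitch.conf
hosts: cluster files
netmasks: cluster files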

Now that we have our zones ready, we are going to create our new Apache RG, called APA2-rg, using the zones we just created on vm1 and vm2:

[root@vm1:/]# clrg create -n vm1,vm2 -z zone1 APA2-rg (10-15 08:00)
[root@vm1:/]# clrg status | grep zone (10-15 08:02)
APA2-rg         vm1:zone1      No          Unmanaged
                vm2:zone1      No          Unmanaged
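
For reference, the node list can also be written in node:zone form, per clresourcegroup(1CL); this is the form you would need if the zone names differed between nodes:

[root@vm1:/]# clrg create -n vm1:zone1,vm2:zone1 APA2-rg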

Let's start with the resources, first the logical hostname resource.

We add the following line to /etc/hosts on both the global nodes and the non-global zones:

11.0.0.108 apache2-ip

Then we are ready to create the resource:

[root@vm1:/]# clreslogicalhostname create -g APA2-rg -h apache2-ip APA2-ip (10-15 08:05)
[root@vm1:/]# clresource status | grep zone (10-15 08:06)
APA2-ip         vm1:zone1      Offline     Offline
                vm2:zone1      Offline     Offline

Now we are going to create the filesystem and the HA storage resource.

We are going to use our existing metaset/device group. It is probably better to run each RG with its own device group, but it's not mandatory:

[root@vm1:/]# metainit -s otromas d200 -p d1 60mb (10-15 08:11)
d200: Soft Partition is setup
[root@vm1:/]# newfs /dev/md/otromas/rdsk/d200 (10-15 08:14)
newfs: construct a new file system /dev/md/otromas/rdsk/d200: (y/n)? y
/dev/md/otromas/rdsk/d200: 122880 sectors in 60 cylinders of 64 tracks, 32 sectors
60.0MB in 4 cyl groups (16 c/g, 16.00MB/g, 7680 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 32832, 65632, 98432,

On all nodes:

[root@vm1:/]# mkdir /global/apache2

Add to the /etc/vfstab on all nodes:

/dev/md/otromas/dsk/d200 /dev/md/otromas/rdsk/d200 /global/apache2 ufs 2 no global
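
Optionally, the vfstab entry can be sanity-checked by mounting and unmounting the filesystem by hand on one node (this check was not part of the original session):

[root@vm1:/]# mount /global/apache2
[root@vm1:/]# umount /global/apache2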

We create the resource:

[root@vm1:/]# clrs create -g APA2-rg -t SUNW.HAStoragePlus -p FilesystemMountPoints=/global/apache2 -p AffinityOn=true APA2-disk
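
Note that the SUNW.HAStoragePlus resource type must already be registered for this to succeed; it was on this cluster from earlier parts of the series, but on a fresh cluster you would first run:

[root@vm1:/]# clrt register SUNW.HAStoragePlus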

[root@vm1:/]# clresource status | grep -i zone (10-15 08:26)
APA2-ip         vm1:zone1      Offline     Offline
                vm2:zone1      Offline     Offline
APA2-disk       vm1:zone1      Offline     Offline
                vm2:zone1      Offline     Offline

Now we can bring the RG online and then finally add the Apache resource.
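
The exact command was not captured in the session; an unmanaged RG is brought online with clrg online, where -M puts the group under RGM management and -e enables its resources:

[root@vm1:/]# clrg online -eM APA2-rg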

[root@vm1:/]# clrg status (10-15 08:28)

=== Cluster Resource Groups ===

Group Name      Node Name      Suspended   Status
----------      ---------      ---------   ------
NFSrg           vm1            No          Online
                vm2            No          Offline
                vm3            No          Offline

APACHE1rg       vm2            No          Offline
                vm1            No          Online

APA2-rg         vm1:zone1      No          Online
                vm2:zone1      No          Offline

[root@vm1:/]# clresource status | grep -i zone (10-15 08:28)
APA2-ip         vm1:zone1      Online      Online - LogicalHostname online.
                vm2:zone1      Offline     Offline
APA2-disk       vm1:zone1      Online      Online
                vm2:zone1      Offline     Offline
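
Before moving on, failover between the non-voting nodes can be verified by switching the RG to the zone on vm2 and back (a quick sanity check, not from the original session):

[root@vm1:/]# clrg switch -n vm2:zone1 APA2-rg
[root@vm1:/]# clrg switch -n vm1:zone1 APA2-rg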

Now we can add the Apache resource itself. I will skip this bit, as it follows the same steps as in a previous post:

http://www.hpuxtips.es/?q=node/298
