Part 16: Red Hat Storage Server (Gluster). My Study Notes for the Red Hat Certificate of Expertise in Clustering and Storage Management Exam (EX436)

Red Hat Storage:

We are going to install a Gluster server. The first thing we need to do is remove all the existing cluster configuration, so we run:

[root@centos-clase1 samba:sambapub]# ccs -h centos-clase1 --stopall
Stopped centosclu3hb1
Stopped centosclu1hb1
Stopped centosclu2hb1

Reboot all the nodes. Once they are back up, we check that cman, rgmanager and clvmd are no longer running, and then we can reconfigure the storage. We are going to remove all the old VGs so we can reuse the PVs in new VGs.
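
A quick sanity check across the nodes might look like this (a sketch, using the same password-less SSH as the other loops in these notes):

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "service cman status; service rgmanager status; service clvmd status"; done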

But then we find we cannot change the clustered VGs. This doesn't work:
[root@centos-clase1 ~]# vgchange -c n /dev/vghanfs
Skipping clustered volume group vghanfs

So, on all nodes, we disable cluster locking (this resets locking_type to 1, local file-based locking, in /etc/lvm/lvm.conf):
[root@centos-clase2 ~]# lvmconf --disable-cluster
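
We can confirm the change took effect with a quick grep (a hypothetical check):

[root@centos-clase2 ~]# grep locking_type /etc/lvm/lvm.conf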

[root@centos-clase1 ~]# vgchange -c n vghanfs --config 'global {locking_type = 0}'
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
Unable to determine exclusivity of lvnfs
Unable to determine exclusivity of lvxfslog
Volume group "vghanfs" successfully changed
[root@centos-clase1 ~]# vgchange -c n vgsamba --config 'global {locking_type = 0}'
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
Unable to determine exclusivity of lvsamba
Volume group "vgsamba" successfully changed
[root@centos-clase1 ~]# vgchange -c n vgcluster --config 'global {locking_type = 0}'
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
Unable to determine exclusivity of lvol1
Volume group "vgcluster" successfully changed
[root@centos-clase1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg_rootvg 1 4 0 wz--n- 19.59g 6.88g
vgcluster 1 1 0 wz--n- 1016.00m 616.00m
vghanfs 1 2 0 wz--n- 1.14g 12.00m
vgsamba 2 1 0 wz--n- 864.00m 164.00m
[root@centos-clase1 ~]#
[root@centos-clase1 ~]# vgremove vgsamba vghanfs vgcluster
Do you really want to remove volume group "vgsamba" containing 1 logical volumes? [y/n]: y
Logical volume "lvsamba" successfully removed
Volume group "vgsamba" successfully removed
Do you really want to remove volume group "vghanfs" containing 2 logical volumes? [y/n]: y
Logical volume "lvnfs" successfully removed
Logical volume "lvxfslog" successfully removed
Volume group "vghanfs" successfully removed
Do you really want to remove volume group "vgcluster" containing 1 logical volumes? [y/n]: y
Logical volume "lvol1" successfully removed
Volume group "vgcluster" successfully removed

OK, we are ready to reuse the PVs. We remove all partitions from the disks, and we are going to use:

centos-clase1: pv clase1hd0 -> vg vgclase1sto0 -> 1 GB lv lvclase1lv0 -> XFS filesystem with 512-byte inodes
centos-clase2: pv clase2hd0 -> vg vgclase2sto0 -> 1 GB lv lvclase2lv0 -> XFS filesystem with 512-byte inodes
centos-clase3: pv clase3hd0 -> vg vgclase3sto0 -> 1 GB lv lvclase3lv0 -> XFS filesystem with 512-byte inodes

Here is the output of creating this structure on centos-clase1. We create the XFS filesystem with a 512-byte inode size, the size recommended for GlusterFS so its extended attributes fit in the inode:

[root@centos-clase1 ~]# pvcreate /dev/mapper/clase1hd0
Physical volume "/dev/mapper/clase1hd0" successfully created
[root@centos-clase1 ~]# vgcreate vgclase1sto0 /dev/mapper/clase1hd0
Volume group "vgclase1sto0" successfully created
[root@centos-clase1 ~]# lvcreate -L 1G -n lvclase1lv0 vgclase1sto0
Logical volume "lvclase1lv0" created
[root@centos-clase1 ~]# mkfs.xfs -i size=512 /dev/vgclase1sto0/lvclase1lv0
meta-data=/dev/vgclase1sto0/lvclase1lv0 isize=512 agcount=4, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

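The same structure is needed on centos-clase2 and centos-clase3; starting from scratch, all three nodes could be prepared from the admin host with a loop like this (a sketch, assuming the device names listed above):

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "pvcreate /dev/mapper/clase${i}hd0 && vgcreate vgclase${i}sto0 /dev/mapper/clase${i}hd0 && lvcreate -L 1G -n lvclase${i}lv0 vgclase${i}sto0 && mkfs.xfs -i size=512 /dev/vgclase${i}sto0/lvclase${i}lv0"; done
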
[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "mkdir /n${i}_exp"; done
[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "mount /dev/vgclase${i}sto0/lvclase${i}lv0 /n${i}_exp"; done
[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "df -h | grep -i exp"; done
1014M 33M 982M 4% /n1_exp
1014M 33M 982M 4% /n2_exp
1014M 33M 982M 4% /n3_exp

We add the mount points to /etc/fstab to make them permanent:

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "echo "/dev/mapper/vgclase$isto0-lvclase$ilv0 /n$i_exp xfs defaults 1 2" >> /etc/fstab"

That was just the first part: we have created the bricks on our nodes. Now we proceed to install the Gluster software:

Download and install. I should make a repo for this, but I'm too lazy right now; I will copy the RPMs to all nodes via scp:
[root@centos-clase1 gluster]# wget -l 1 -nd -nc -r -A.rpm http://download.gluster.org/pub/gluster/glusterfs/LATEST/RHEL/epel-6.4/x...
[root@centos-clase1 gluster]# rpm -ivh *.rpm
warning: glusterfs-3.4.0-8.el6.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 89ccae8b: NOKEY
Preparing... ########################################### [100%]
1:glusterfs-libs ########################################### [ 9%]
2:glusterfs ########################################### [ 18%]
3:glusterfs-api ########################################### [ 27%]
4:glusterfs-cli ########################################### [ 36%]
5:glusterfs-fuse ########################################### [ 45%]
6:glusterfs-server ########################################### [ 55%]
error reading information on service glusterfsd: No such file or directory
7:glusterfs-geo-replicati########################################### [ 64%]
8:glusterfs-api-devel ########################################### [ 73%]
9:glusterfs-devel ########################################### [ 82%]
10:glusterfs-rdma ########################################### [ 91%]
11:glusterfs-debuginfo ########################################### [100%]

After installing the software on all nodes, we check that NTP is in sync, start glusterd, and configure our peers using a dedicated network:

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "ntpq -p "; done
remote refid st t when poll reach delay offset jitter
==============================================================================
*gw.fomvslab.com 130.206.3.166 2 u 53 64 377 0.247 -0.087 0.085
remote refid st t when poll reach delay offset jitter
==============================================================================
*gw.fomvslab.com 130.206.3.166 2 u 38 64 377 0.287 2.612 0.591
remote refid st t when poll reach delay offset jitter
==============================================================================
*gw.fomvslab.com 130.206.3.166 2 u 17 128 377 0.256 -0.069 0.124

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "chkconfig glusterd on; service glusterd start"; done
Starting glusterd:[ OK ]
Starting glusterd:[ OK ]
Starting glusterd:[ OK ]

I'm going to use this dedicated network (hosts entries):
3.3.3.2 centosclu1hb1
3.3.3.3 centosclu2hb1
3.3.3.4 centosclu3hb1
3.3.3.5 centosclu4hb1
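
If the entries are not already present on every node, a hypothetical way to push them out:

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "printf '3.3.3.2 centosclu1hb1\n3.3.3.3 centosclu2hb1\n3.3.3.4 centosclu3hb1\n3.3.3.5 centosclu4hb1\n' >> /etc/hosts"; done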

[root@centos-clase1 ~]# gluster peer probe centosclu2hb1
peer probe: success
[root@centos-clase1 ~]# gluster peer probe centosclu3hb1
peer probe: success
[root@centos-clase1 ~]# gluster peer status
Number of Peers: 2

Hostname: centosclu2hb1
Port: 24007
Uuid: 0b850e6c-e28d-4cfa-abeb-36e1cc6197e3
State: Peer in Cluster (Connected)

Hostname: centosclu3hb1
Port: 24007
Uuid: fb00a563-0590-4bab-af70-fcab7ac74f75
State: Peer in Cluster (Connected)

Next we create a striped volume across the three bricks. The volume create syntax is:

Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>] [device vg] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> ...
[root@centos-clase1 ~]# gluster volume create volume1 stripe 3 transport tcp centosclu1hb1:/n1_exp centosclu2hb1:/n2_exp centosclu3hb1:/n3_exp
volume create: volume1: success: please start the volume to access data
[root@centos-clase1 ~]#
[root@centos-clase1 ~]# gluster volume info all

Volume Name: volume1
Type: Stripe
Volume ID: 98b6e75c-ef00-486f-b085-e915b976c906
Status: Created
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/n1_exp
Brick2: centosclu2hb1:/n2_exp
Brick3: centosclu3hb1:/n3_exp

There are no active volume tasks

[root@centos-clase1 ~]# gluster volume start volume1
volume start: volume1: success

[root@centos-clase1 ~]# gluster volume status all
Status of volume: volume1
Gluster process Port Online Pid
------------------------------------------------------------------------------
Brick centosclu1hb1:/n1_exp 49152 Y 2631
Brick centosclu2hb1:/n2_exp 49152 Y 2814
Brick centosclu3hb1:/n3_exp 49152 Y 3023
NFS Server on localhost 2049 Y 2641
NFS Server on centosclu3hb1 2049 Y 3033
NFS Server on centosclu2hb1 2049 Y 2824

There are no active volume tasks

[root@centos-clase1 ~]#

Once the volume is started, you can access it via the GlusterFS native client, NFS version 3, or CIFS if you configure Samba:

[root@centos-clase1 ~]# showmount -e
Export list for centos-clase1.fomvslab.com:
/volume1 *

We are now going to mount it using NFSv3 from a client host:
[root@foserver01 ~]# mount -t nfs -o proto=tcp,vers=3 3.3.3.2:/volume1 /mnt
[root@foserver01 ~]# df -h | grep -i mnt
3.3.3.2:/volume1 3.0G 97M 2.9G 4% /mnt
[root@foserver01 ~]# cd /mnt

We create a file called file1 with a size of 300M; as we can see, the file's data is spread among the 3 striped bricks:
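
The test file was created on the NFS mount with something like:

[root@foserver01 mnt]# dd if=/dev/zero of=file1 bs=1M count=300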

[root@foserver01 mnt]# for i in 1 2 3 ; do ssh centos-clase$i "du -sh /n${i}_exp/file1"; done
128M /n1_exp/file1
128M /n2_exp/file1
128M /n3_exp/file1
[root@foserver01 mnt]# du -sh file1
300M file1

We can also mount the volume from any node to access the same data:

[root@foserver01 ~]# umount /mnt
[root@foserver01 ~]# mount -t nfs -o proto=tcp,vers=3 3.3.3.3:/volume1 /mnt
[root@foserver01 ~]# ls -l /mnt
total 2558076
-rw-r--r-- 1 root root 104857600 Sep 15 20:41 file1
-rw-r--r-- 1 root root 104857600 Sep 15 20:42 file2
-rw-r--r-- 1 root root 104857600 Sep 15 20:42 file3
-rw-r--r-- 1 root root 104857600 Sep 15 20:42 file4
-rw-r--r-- 1 root root 209715200 Sep 15 20:44 file7
-rw-r--r-- 1 root root 209715200 Sep 15 20:45 file8
-rw-r--r-- 1 root root 97779712 Sep 15 20:45 file9
[root@foserver01 ~]#

Now we are going to create new 500M bricks and build a replicated volume:

[root@foserver01 log]# for i in 1 2 3 ; do ssh centos-clase$i "lvcreate -L 500M -n lvclase${i}lv1 /dev/vgclase${i}sto0"; done
Logical volume "lvclase1lv1" created
Logical volume "lvclase2lv1" created
Logical volume "lvclase3lv1" created
[root@foserver01 log]#
[root@foserver01 log]# for i in 1 2 3 ; do ssh centos-clase$i "mkfs.xfs -i size=512 /dev/vgclase${i}sto0/lvclase${i}lv1"; done

meta-data=/dev/vgclase1sto0/lvclase1lv1 isize=512 agcount=4, agsize=32000 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=128000, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=1200, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0

[root@foserver01 log]# for i in 1 2 3 ; do ssh centos-clase$i "mkdir -p /gluster/replica${i}"; done
[root@foserver01 log]# for i in 1 2 3 ; do ssh centos-clase$i "mount /dev/vgclase${i}sto0/lvclase${i}lv1 /gluster/replica${i}"; done
[root@foserver01 log]# for i in 1 2 3 ; do ssh centos-clase$i "df -h | grep exp1"; done
496M 26M 471M 6% /n1_exp1
496M 26M 471M 6% /n2_exp1
496M 26M 471M 6% /n3_exp1

We are only going to use the bricks on node1 and node2, but we create one on node3 as well for future use.
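
Note that the striped volume1 from before has to be stopped and deleted first so the name can be reused; something along the lines of:

[root@centos-clase1 ~]# gluster volume stop volume1
[root@centos-clase1 ~]# gluster volume delete volume1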

[root@centos-clase1 ~]# gluster volume create volume1 replica 2 transport tcp centosclu1hb1:/gluster/replica1 centosclu2hb1:/gluster/replica2
[root@centos-clase1 ~]# gluster volume start volume1
volume start: volume1: success

[root@centos-clase2 ~]# gluster volume info volume1

Volume Name: volume1
Type: Replicate
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
Brick2: centosclu2hb1:/gluster/replica2

We are going to install the GlusterFS native client; it can fail over between servers:

[root@foserver01 tmp]# rpm -ivh *.rpm
warning: glusterfs-3.4.0-8.el6.x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 89ccae8b: NOKEY
Preparing... ########################################### [100%]
1:glusterfs-libs ########################################### [ 33%]
2:glusterfs ########################################### [ 67%]
3:glusterfs-fuse ########################################### [100%]

[root@foserver01 glusterfs]# mount -t glusterfs 3.3.3.3:/volume1 /mnt
[root@foserver01 glusterfs]# df -h | grep -i mnt
3.3.3.3:/volume1 496M 226M 270M 46% /mnt

The client writes a mount log named after the mount point:
/var/log/glusterfs/mnt.log
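
The server named on the mount command line is only used to fetch the volume file; after that the client talks to all bricks directly, which is what makes the failover test below possible. A backup volfile server can also be given at mount time (a sketch; I believe mount.glusterfs accepts backupvolfile-server in this version):

[root@foserver01 glusterfs]# mount -t glusterfs -o backupvolfile-server=3.3.3.4 3.3.3.3:/volume1 /mnt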

We do a hard reset of centos-clase2:
[root@foserver01 glusterfs]# virsh reset centos-clase2
Domain centos-clase2 was reset

And we check that we can keep using our mount point /mnt:

[root@foserver01 mnt]# cd /mnt
[root@foserver01 mnt]# ls
100M fsdfs lol otro100M pdsa
[root@foserver01 mnt]# mkdir dfsd
[root@foserver01 mnt]# dd if=/dev/zero of=50M bs=1M count=50
50+0 records in
50+0 records out
52428800 bytes (52 MB) copied, 5.22528 s, 10.0 MB/s

We can check the heal status on the volume and see how it is out of sync:

[root@centos-clase1 replica1]# gluster volume heal volume1 info
Gathering Heal info on volume volume1 has been successful

Brick centosclu1hb1:/gluster/replica1
Number of entries: 6
/
/lol
/fsdfs
/pdsa
/dfsd
/50M

Brick centosclu2hb1:/gluster/replica2
Status: Brick is Not connected
Number of entries: 0

Once node2 is up again, it automatically heals the files that were out of sync:

[root@centos-clase1 replica1]# gluster volume heal volume1 info
Gathering Heal info on volume volume1 has been successful

Brick centosclu1hb1:/gluster/replica1
Number of entries: 1
/50M

Brick centosclu2hb1:/gluster/replica2
Status: self-heal-daemon is not running on 0b850e6c-e28d-4cfa-abeb-36e1cc6197e3
[root@centos-clase1 replica1]# gluster volume heal volume1 info
Gathering Heal info on volume volume1 has been successful

Brick centosclu1hb1:/gluster/replica1
Number of entries: 0

Brick centosclu2hb1:/gluster/replica2
Number of entries: 0

We can add another brick, so we have one more node of redundancy:

[root@centos-clase1 replica1]# gluster volume add-brick volume1 replica 3 centosclu3hb1:/gluster/replica3
volume add-brick: success

[root@centos-clase1 replica1]# gluster volume info volume1

Volume Name: volume1
Type: Replicate
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
Brick2: centosclu2hb1:/gluster/replica2
Brick3: centosclu3hb1:/gluster/replica3

We prepare a third LV on each node (lvclase${i}lv2, created and formatted the same way as the previous ones with lvcreate and mkfs.xfs -i size=512), then create the mount points and mount them:

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "mkdir /gluster/n${i}_exp1" ; done
[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "mount /dev/vgclase${i}sto0/lvclase${i}lv2 /gluster/n${i}_exp1"; done

For example, to turn this into a distributed replica, here two distributed sets that are each a 3-way mirror (2 x 3), we can do:
[root@centos-clase1 ~]# gluster volume add-brick volume1 replica 3 centosclu1hb1:/gluster/n1_exp1 centosclu2hb1:/gluster/n2_exp1 centosclu3hb1:/gluster/n3_exp1
volume add-brick: success
[root@centos-clase1 ~]# gluster volume info volume1

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
Brick2: centosclu2hb1:/gluster/replica2
Brick3: centosclu3hb1:/gluster/replica3
Brick4: centosclu1hb1:/gluster/n1_exp1
Brick5: centosclu2hb1:/gluster/n2_exp1
Brick6: centosclu3hb1:/gluster/n3_exp1

So now we should have double the space we had before in the volume:

before:
[root@foserver01 glusterfs]# df -h | grep -i mnt
3.3.3.3:/volume1 496M 226M 270M 46% /mnt

now:
[root@foserver01 ~]# df -h | grep -i mnt
3.3.3.3:/volume1 991M 301M 690M 31% /mnt
[root@foserver01 ~]#

Once we add or remove bricks, new files will automatically be distributed across the new bricks, but existing files have to be rebalanced manually:

[root@centos-clase3 replica3]# gluster volume rebalance volume1 start
volume rebalance: volume1: success: Starting rebalance on volume volume1 has been successful.
ID: 507594fa-fb3c-4d98-a638-d43e043e2ce8

[root@centos-clase2 ~]# gluster volume rebalance volume1 status
Node Rebalanced-files size scanned failures status run time in secs
--------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 0 0Bytes 5 0 completed 1.00
localhost 0 0Bytes 5 0 completed 1.00
localhost 0 0Bytes 5 0 completed 1.00
3.3.3.2 0 0Bytes 1 0 in progress 5.00
volume rebalance: volume1: success:
[root@centos-clase2 ~]# gluster volume rebalance volume1 status
Node Rebalanced-files size scanned failures status run time in secs
--------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 0 0Bytes 5 0 completed 1.00
localhost 0 0Bytes 5 0 completed 1.00
localhost 0 0Bytes 5 0 completed 1.00
3.3.3.2 2 200.0MB 7 2 completed 91.00

We can watch the used space grow on the new bricks while the rebalance runs (successive df snapshots):

[root@centos-clase3 replica3]# df -h
/dev/mapper/vgclase3sto0-lvclase3lv2
496M 26M 471M 6% /gluster/n3_exp1
496M 108M 388M 22% /gluster/n3_exp1
496M 226M 271M 46% /gluster/n3_exp1

[root@centos-clase3 replica3]# gluster volume rebalance volume1 status
Node Rebalanced-files size scanned failures status run time in secs
--------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 0 0Bytes 5 0 completed 1.00
localhost 0 0Bytes 5 0 completed 1.00
localhost 0 0Bytes 5 0 completed 1.00
centosclu2hb1 0 0Bytes 5 0 completed 1.00

We can now undo this last configuration:

[root@centos-clase3 replica3]# gluster volume remove-brick volume1 centosclu3hb1:/gluster/n3_exp1 centosclu2hb1:/gluster/n2_exp1 centosclu1hb1:/gluster/n1_exp1
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
[root@centos-clase3 replica3]# gluster volume info volume1

Volume Name: volume1
Type: Replicate
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
Brick2: centosclu2hb1:/gluster/replica2
Brick3: centosclu3hb1:/gluster/replica3

[root@centos-clase3 replica3]# gluster volume remove-brick volume1 replica 2 centosclu3hb1:/gluster/replica3
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success

[root@centos-clase3 replica3]# gluster volume remove-brick volume1 replica 1 centosclu2hb1:/gluster/replica2
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success
[root@centos-clase3 replica3]# gluster volume info volume1

Volume Name: volume1
Type: Distribute
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
[root@centos-clase3 replica3]#

Now we are going to create a distributed volume with 3 bricks and then replicate them:

[root@centos-clase3 ~]# gluster volume add-brick volume1 centosclu2hb1:/gluster/replica2 centosclu3hb1:/gluster/replica3
volume add-brick: success
[root@centos-clase3 ~]# gluster volume info volume1

Volume Name: volume1
Type: Distribute
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
Brick2: centosclu2hb1:/gluster/replica2
Brick3: centosclu3hb1:/gluster/replica3

Now we make this 3-brick distributed volume replicated:

[root@centos-clase1 ~]# gluster volume add-brick volume1 replica 2 centosclu1hb1:/gluster/n1_exp1 centosclu2hb1:/gluster/n2_exp1 centosclu3hb1:/gluster/n3_exp1
volume add-brick: failed: /gluster/n1_exp1 or a prefix of it is already part of a volume ----> when you get this error, you need to umount and reformat bricks that were previously used in another volume
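
A sketch of that cleanup, assuming the LV names used earlier (reformatting destroys any data left on the bricks):

[root@foserver01 ~]# for i in 1 2 3 ; do ssh centos-clase$i "umount /gluster/n${i}_exp1 && mkfs.xfs -f -i size=512 /dev/vgclase${i}sto0/lvclase${i}lv2 && mount /dev/vgclase${i}sto0/lvclase${i}lv2 /gluster/n${i}_exp1"; done
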
[root@centos-clase1 ~]# gluster volume add-brick volume1 replica 2 centosclu1hb1:/gluster/n1_exp1 centosclu2hb1:/gluster/n2_exp1 centosclu3hb1:/gluster/n3_exp1
volume add-brick: success
[root@centos-clase1 ~]# gluster volume info volume1

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
Brick2: centosclu1hb1:/gluster/n1_exp1
Brick3: centosclu2hb1:/gluster/replica2
Brick4: centosclu2hb1:/gluster/n2_exp1
Brick5: centosclu3hb1:/gluster/replica3
Brick6: centosclu3hb1:/gluster/n3_exp1

Now we can check the client: the volume now shows 1.5G, the combined size of the 3 distributed replica pairs, and it hasn't lost any data:

[root@foserver01 ~]# df -h | grep -i mnt
3.3.3.3:/volume1 1.5G 126M 1.4G 9% /mnt
[root@foserver01 ~]# ls -l /mnt
total 51224
-rw-r--r-- 1 root root 52428800 Sep 16 18:16 50M
drwxr-xr-x 2 root root 8198 Sep 16 19:28 dfsd
drwxr-xr-x 2 root root 8198 Sep 16 19:28 fsdfs
drwxr-xr-x 2 root root 8198 Sep 16 19:28 lol
-rw-r--r-- 1 root root 0 Sep 16 18:16 pdsa
-rw-r--r-- 1 root root 0 Sep 16 18:31 pollo

Volume properties: how to list and modify them:

[root@centos-clase2 ~]# gluster volume set volume1 auth.allow "3.3.3.*"
volume set: success
[root@centos-clase2 ~]# gluster volume info volume1

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
Brick2: centosclu1hb1:/gluster/n1_exp1
Brick3: centosclu2hb1:/gluster/replica2
Brick4: centosclu2hb1:/gluster/n2_exp1
Brick5: centosclu3hb1:/gluster/replica3
Brick6: centosclu3hb1:/gluster/n3_exp1
Options Reconfigured:
auth.allow: 3.3.3.* -------------> options that differ from the defaults appear here.
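
The full list of tunable options and their current values can be printed with:

[root@centos-clase2 ~]# gluster volume set help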

We can reset all properties with the gluster volume reset command:

[root@centos-clase2 ~]# gluster volume reset volume1
volume reset: success
[root@centos-clase2 ~]# gluster volume info volume1

Volume Name: volume1
Type: Distributed-Replicate
Volume ID: e4bdb3b7-2cbf-4154-bfe5-9dde1c2b7862
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: centosclu1hb1:/gluster/replica1
Brick2: centosclu1hb1:/gluster/n1_exp1
Brick3: centosclu2hb1:/gluster/replica2
Brick4: centosclu2hb1:/gluster/n2_exp1
Brick5: centosclu3hb1:/gluster/replica3
Brick6: centosclu3hb1:/gluster/n3_exp1


Comments

Well, these being your study notes, especially on such an advanced topic, that's OK, but a bit of the theory behind it, and even the initial objective, are foggy.

One or two images would have helped as well.

This WILL be a good source of information once I learn a bit more about Gluster and eventually iSCSI on top of it.

Thank you for sharing.

It's true; as I am really short of free time, I just sort of copy and paste my notes, in case they can help someone researching this topic.
