
Red Hat OpenStack Administration (CL210) exam preparation notes

Some notes I wrote while preparing for the CL210 exam.

Exam preparation:


    Install and configure Red Hat Enterprise Linux OpenStack Platform
    Manage users
    Manage projects
    Manage flavors
    Manage roles
    Set quotas
    Manage images
    Configure images at instantiation
    Add additional compute nodes
    Manage Swift storage
    Manage networking
    Manage Cinder storage


1. MANAGE ROLES, USERS, PROJECTS:

We create a project called liquid and two users, liquid1 and liquid2:

(openstack) project create liquid
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| enabled     | True                             |
| id          | 22606b0b2b7f411abd08b23f8ebe03a1 |
| name        | liquid                           |
+-------------+----------------------------------+
(openstack) user create liquid1
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | b744bf4403954da0aae267325293411a |
| name     | liquid1                          |
| username | liquid1                          |
+----------+----------------------------------+
(openstack) user create liquid2
+----------+----------------------------------+
| Field    | Value                            |
+----------+----------------------------------+
| email    | None                             |
| enabled  | True                             |
| id       | 7695af16ee3940be99bacf031732f6cd |
| name     | liquid2                          |
| username | liquid2                          |
+----------+----------------------------------+
(openstack) user set --password Amena2006 liquid1
(openstack) user set --password Amena2006 liquid2
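
These credentials are what the keystonerc_* files sourced later in these notes export; a minimal sketch of keystonerc_liquid1 (the auth URL is an assumption based on the controller address used elsewhere in these notes):

export OS_USERNAME=liquid1
export OS_TENANT_NAME=liquid
export OS_PASSWORD=Amena2006
export OS_AUTH_URL=http://10.10.10.31:5000/v2.0/
export PS1='[\u@\h \W(keystone_liquid1)]\$ '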

We are going to give liquid1 the admin role for project liquid, and liquid2 the default _member_ role:

(openstack) role add --project liquid --user liquid1 admin
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 8569de17075142ccb6351d7a3eea7ca9 |
| name  | admin                            |
+-------+----------------------------------+
(openstack) role add --project liquid --user liquid2 _member_
+-------+----------------------------------+
| Field | Value                            |
+-------+----------------------------------+
| id    | 9fe2ff9ee4384b1894a90878d3e92bab |
| name  | _member_                         |
+-------+----------------------------------+
(openstack) role add --project liquid --user liquid2 ResellerAdmin


2. MANAGE FLAVORS:

Running nova flavor-create with too few arguments only prints the usage:

usage: nova flavor-create [--ephemeral <ephemeral>] [--swap <swap>]
                          [--rxtx-factor <factor>] [--is-public <is-public>]
                          <name> <id> <ram> <disk> <vcpus>
error: too few arguments
Try 'nova help flavor-create' for more information.
[root@controller1 ~(keystone_liquid2)]# nova flavor-create m1.disk 6 512 0 2    ------> if we set disk to 0, the root disk will be the same size as the image
ERROR (Forbidden): Policy doesn't allow compute_extension:flavormanage to be performed. (HTTP 403) (Request-ID: req-6bf3b220-a4ac-4f45-b61d-40b9681ccf6b)
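
The 403 comes from the Nova policy: flavor management is restricted to admin by default. The relevant entry in /etc/nova/policy.json looks roughly like this (a sketch, the exact rule can differ per release):

[root@controller1 ~(keystone_liquid2)]# grep flavormanage /etc/nova/policy.json
    "compute_extension:flavormanage": "rule:admin_api",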

As you can see, liquid2 doesn't have permission, so let's use liquid1, who is admin in project liquid:

[root@controller1 ~(keystone_liquid2)]# source keystonerc_liquid1
[root@controller1 ~(keystone_liquid1)]# nova flavor-create m1.disk 6 512 0 2
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.disk | 512       | 0    | 0         |      | 2     | 1.0         | True      |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller1 ~(keystone_liquid1)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 6  | m1.disk   | 512       | 0    | 0         |      | 2     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+

Now we are going to create a private flavor:

[root@controller1 ~(keystone_liquid1)]# openstack flavor create --id 7 --ram 1024 --disk 10 --ephemeral 2 --vcpus 1 --private m1.disk.1
+----------------------------+-----------+
| Field                      | Value     |
+----------------------------+-----------+
| OS-FLV-DISABLED:disabled   | False     |
| OS-FLV-EXT-DATA:ephemeral  | 2         |
| disk                       | 10        |
| id                         | 7         |
| name                       | m1.disk.1 |
| os-flavor-access:is_public | False     |
| ram                        | 1024      |
| rxtx_factor                | 1.0       |
| swap                       |           |
| vcpus                      | 1         |
+----------------------------+-----------+

As you can see, it is not listed in the flavor list:

[root@controller1 ~(keystone_liquid1)]# nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 6  | m1.disk   | 512       | 0    | 0         |      | 2     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
[root@controller1 ~(keystone_liquid1)]# 

If you are an admin you can list all flavors with the --all flag:

[root@controller1 ~(keystone_liquid1)]# nova flavor-list --all
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
| 6  | m1.disk   | 512       | 0    | 0         |      | 2     | 1.0         | True      |
| 7  | m1.disk.1 | 1024      | 10   | 2         |      | 1     | 1.0         | False     |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+


Next we check the flavor's access list, which is empty:

[root@controller1 ~(keystone_liquid1)]# nova  flavor-access-list --flavor m1.disk.1
+-----------+-----------+
| Flavor_ID | Tenant_ID |
+-----------+-----------+
+-----------+-----------+

And add the liquid tenant:

[root@controller1 ~(keystone_liquid1)]# nova flavor-access-add  m1.disk.1 liquid
+-----------+-----------+
| Flavor_ID | Tenant_ID |
+-----------+-----------+
| 7         | liquid    |
+-----------+-----------+
[root@controller1 ~(keystone_liquid1)]# nova flavor-show m1.disk.1
+----------------------------+-----------+
| Property                   | Value     |
+----------------------------+-----------+
| OS-FLV-DISABLED:disabled   | False     |
| OS-FLV-EXT-DATA:ephemeral  | 2         |
| disk                       | 10        |
| extra_specs                | {}        |
| id                         | 7         |
| name                       | m1.disk.1 |
| os-flavor-access:is_public | False     |
| ram                        | 1024      |
| rxtx_factor                | 1.0       |
| swap                       |           |
| vcpus                      | 1         |
+----------------------------+-----------+

Here we boot an instance using the "private flavor":

[root@controller1 ~(keystone_liquid1)]# nova boot --flavor m1.disk.1 --image cirros --security-groups default --nic net-id=6629041d-6d40-4ed5-a6cd-9ddcf3365ca5 cirros
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
| OS-EXT-SRV-ATTR:host                 | -                                             |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                             |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                             |
| OS-EXT-STS:power_state               | 0                                             |
| OS-EXT-STS:task_state                | scheduling                                    |
| OS-EXT-STS:vm_state                  | building                                      |
| OS-SRV-USG:launched_at               | -                                             |
| OS-SRV-USG:terminated_at             | -                                             |
| accessIPv4                           |                                               |
| accessIPv6                           |                                               |
| adminPass                            | K6qXXDjFmj8J                                  |
| config_drive                         |                                               |
| created                              | 2016-01-05T13:37:19Z                          |
| flavor                               | m1.disk.1 (7)                                 |
| hostId                               |                                               |
| id                                   | 2787bff8-210f-47ba-a13d-f87d10332a77          |
| image                                | cirros (8004e7f5-34d3-4980-875d-508e1ba7ac88) |
| key_name                             | -                                             |
| metadata                             | {}                                            |
| name                                 | cirros                                        |
| os-extended-volumes:volumes_attached | []                                            |
| progress                             | 0                                             |
| security_groups                      | default                                       |
| status                               | BUILD                                         |
| tenant_id                            | 22606b0b2b7f411abd08b23f8ebe03a1              |
| updated                              | 2016-01-05T13:37:19Z                          |
| user_id                              | b744bf4403954da0aae267325293411a              |
+--------------------------------------+-----------------------------------------------+
[root@controller1 ~(keystone_liquid1)]# nova list
+--------------------------------------+--------+--------+------------+-------------+----------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks             |
+--------------------------------------+--------+--------+------------+-------------+----------------------+
| 2787bff8-210f-47ba-a13d-f87d10332a77 | cirros | ACTIVE | -          | Running     | private=192.168.99.6 |
+--------------------------------------+--------+--------+------------+-------------+----------------------+

If we try with a user from another tenant, it fails:

[root@controller1 ~(keystone_admin)]# source keystonerc_tenant2
[root@controller1 ~(keystone_tenant2)]# nova boot --flavor m1.disk.1 --image cirros --security-groups default --nic net-id=6629041d-6d40-4ed5-a6cd-9ddcf3365ca5 cirros3
ERROR (CommandError): No flavor with a name or ID of 'm1.disk.1' exists.

If we use a PUBLIC flavor, it works:

[root@controller1 ~(keystone_tenant2)]# nova boot --flavor m1.disk --image cirros --security-groups default --nic net-id=6629041d-6d40-4ed5-a6cd-9ddcf3365ca5 cirros3
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
..........


Flavors also have extra_specs, which can be system or user defined; system-defined attributes depend on the hypervisor we are using. For example, we can use extra_specs to limit the bandwidth of the virtual interface of instances created with this flavor:

[root@controller1 ~(keystone_liquid1)]# nova flavor-key m1.disk set "quota:vif_inbound_average=10240"
[root@controller1 ~(keystone_liquid1)]# nova flavor-show m1.disk
+----------------------------+-----------------------------------------+
| Property                   | Value                                   |
+----------------------------+-----------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                   |
| OS-FLV-EXT-DATA:ephemeral  | 0                                       |
| disk                       | 0                                       |
| extra_specs                | {"quota:vif_inbound_average": "10240"} |
| id                         | 6                                       |
| name                       | m1.disk                                 |
| os-flavor-access:is_public | True                                    |
| ram                        | 512                                     |
| rxtx_factor                | 1.0                                     |
| swap                       |                                         |
| vcpus                      | 2                                       |
+----------------------------+-----------------------------------------+

We can now boot an instance using the flavor with extra specs:

[root@controller1 ~(keystone_liquid1)]# nova boot --flavor m1.disk --image cirros --security-groups default --nic net-id=6629041d-6d40-4ed5-a6cd-9ddcf3365ca5 cirros_extra

If we check the libvirt domain XML of the instance running on the compute node, the interface now carries the inbound bandwidth limit (a <bandwidth> element with an inbound average of 10240 under the instance's network interface):

[root@compute1 ~]# virsh dumpxml instance-00000007 | grep -A 2 -B 2 inbound

That's the end of our flavor review.


3. MANAGE IMAGES:

We are going to manually create a cloud image, in a very basic fashion.

We are going to use our already-working CentOS 7 image as the base:

[root@openstackbox os]# cp -p packstack.qcow2 cloud-img.qcow2
[root@openstackbox os]# virt-clone -o controller1 --preserve-data -f cloud-img.qcow2 -n template-temporal 
[root@openstackbox os]# virsh start template-temporal
[root@openstackbox os]# virsh console template-temporal

Once inside our cloud image we remove all network configuration, leaving only the eth0 config in network-scripts:

[root@localhost network-scripts]# cat ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="dhcp"

We install cloud-init and the cloud utilities:

[root@localhost ~]# yum install cloud-init.x86_64  cloud-utils.x86_64 

You can modify the cloud image behaviour in /etc/cloud/cloud.cfg; here we add the resolv-conf module so that cloud-init configures DNS on the instances:

[root@localhost cloud]# cat /etc/cloud/cloud.cfg | grep -i resolv
 - resolv-conf
[root@localhost cloud]# echo "NOZEROCONF=yes" > /etc/sysconfig/network
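
For reference, after the change the module list in /etc/cloud/cloud.cfg looks roughly like this (a sketch; the exact list and order vary per cloud-init version, the point is that resolv-conf is included):

cloud_init_modules:
 - migrator
 - bootcmd
 - write-files
 - resolv-conf
 - growpart
 - resizefs
 - set_hostname
 - update_hostname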

We also check we have serial console output configured:

[root@localhost cloud]# cat /etc/default/grub | grep -i ttyS0
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap crashkernel=auto console=ttyS0,115200n8"

OK, ready to go, let's shut down the template:

[root@localhost cloud]# shutdown -h now

Let's reset the template and clear all machine-specific info:

[root@openstackbox os]# virt-sysprep -d template-temporal

And finally sparsify the image to reclaim the empty space:

[root@openstackbox os]# du -sh cloud-img.qcow2
1.4G	cloud-img.qcow2
[root@openstackbox os]# mkdir /vm/os/temp
[root@openstackbox os]# export TMPDIR=/vm/os/temp
[root@openstackbox os]# virt-sparsify --compress cloud-img.qcow2 cloud-img-ok.qcow2
Input disk virtual size = 32212258816 bytes (30.0G)
Create overlay file in /vm/os/temp to protect source disk ...
Examine source disk ...
Sparsify operation completed with no errors.  Before deleting the old disk, 
carefully check that the target disk boots and works correctly.
[root@openstackbox os]# du -sh cloud-img-ok.qcow2
508M	cloud-img-ok.qcow2

Perfect, our image is ready. Let's upload it to glance.


[root@controller1 ~(keystone_liquid1)]# glance image-create --name centos7 --disk-format qcow2 --container-format bare --file cloud-img-ok.qcow2 --is-public True --human-readable --progress
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 7854b7a70d40e83ee117f0e540ddf958     |
| container_format | bare                                 |
| created_at       | 2016-01-07T08:45:32.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | bbdbcb62-f0ca-409d-9b5f-85928e7a8944 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | centos7                              |
| owner            | 22606b0b2b7f411abd08b23f8ebe03a1     |
| protected        | False                                |
| size             | 508MB                                |
| status           | active                               |
| updated_at       | 2016-01-07T08:45:41.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# glance image-list
+--------------------------------------+---------+-------------+------------------+-----------+--------+
| ID                                   | Name    | Disk Format | Container Format | Size      | Status |
+--------------------------------------+---------+-------------+------------------+-----------+--------+
| bbdbcb62-f0ca-409d-9b5f-85928e7a8944 | centos7 | qcow2       | bare             | 532724736 | active |
| 8004e7f5-34d3-4980-875d-508e1ba7ac88 | cirros  | qcow2       | bare             | 13200896  | active |
+--------------------------------------+---------+-------------+------------------+-----------+--------+

If no other backend (swift, ceph, etc.) is configured to store glance images, they are stored on the server where the glance services run, inside the /var/lib/glance directory:

[root@controller1 images(keystone_liquid1)]# pwd
/var/lib/glance/images
[root@controller1 images(keystone_liquid1)]# ls -ltr
total 533132
-rw-r-----. 1 glance glance  13200896 Dec 31 10:42 8004e7f5-34d3-4980-875d-508e1ba7ac88
-rw-r-----. 1 glance glance 532724736 Jan  7 09:45 bbdbcb62-f0ca-409d-9b5f-85928e7a8944
[root@controller1 images(keystone_liquid1)]# 
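
That path comes from the filesystem store setting in glance-api.conf; the typical default looks like this:

[root@controller1 ~(keystone_liquid1)]# grep -i filesystem_store_datadir /etc/glance/glance-api.conf | grep -v ^#
filesystem_store_datadir=/var/lib/glance/images/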

If, for example, we have swift running, we can store images in swift. Here is an example of how to configure glance to use swift as a storage backend:

[root@controller1 ~(keystone_liquid1)]# cat /etc/glance/glance-api.conf | grep -v ^# | grep -v ^$ | grep -i swift
         glance.store.swift.Store
default_store=swift
swift_store_auth_address='http://10.10.10.31:35357/v2.0/'
swift_store_user=services:glance
swift_store_key=cec8b7fe03d7499a
swift_store_container=glance
swift_store_create_container_on_put=True

After changing the configuration we restart the glance API service:

[root@controller1 ~(keystone_liquid1)]# systemctl restart openstack-glance-api.service

Now when we upload the image it's stored in swift:

[root@controller1 ~(keystone_liquid1)]# glance image-create --name swift.test --store swift --disk-format qcow2 --container-format bare --file cloud-img-ok.qcow2 --is-public True --human-readable --progress 
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 7854b7a70d40e83ee117f0e540ddf958     |
| container_format | bare                                 |
| created_at       | 2016-01-07T14:03:03.000000           |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 599e00ad-ee7c-4a31-b463-ef0a9955cb00 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | swift.test                           |
| owner            | 22606b0b2b7f411abd08b23f8ebe03a1     |
| protected        | False                                |
| size             | 508MB                                |
| status           | active                               |
| updated_at       | 2016-01-07T14:03:14.000000           |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# swift --os-username glance --os-project-name services --os-password cec8b7fe03d7499a list
glance
[root@controller1 ~(keystone_liquid1)]# swift --os-username glance --os-project-name services --os-password cec8b7fe03d7499a list glance
599e00ad-ee7c-4a31-b463-ef0a9955cb00

We can still upload to the file backend by using the --store modifier on the glance client:

[root@controller1 ~(keystone_liquid1)]# glance image-create --name file.test --store file --disk-format qcow2 --container-format bare --file cloud-img-ok.qcow2 --is-public True --human-readable --progress 
[root@controller1 ~(keystone_liquid1)]# nova boot --flavor m1.disk --image swift.test --security-groups default --nic net-id=6629041d-6d40-4ed5-a6cd-9ddcf3365ca5 centos7
[root@controller1 ~(keystone_liquid1)]# nova list
+--------------------------------------+--------------+--------+------------+-------------+-----------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks              |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------+
| 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 | centos7      | ACTIVE | -          | Running     | private=192.168.99.12 |
[root@controller1 ~(keystone_liquid1)]# nova console-log centos7
.........
CentOS Linux 7 (Core)
Kernel 3.10.0-229.14.1.el7.x86_64 on an x86_64

localhost login: [   95.696137] cloud-init[584]: Cloud-init v. 0.7.5 running 'init-local' at Thu, 07 Jan 2016 14:24:19 +0000. Up 92.36 seconds.
[  141.974999] cloud-init[938]: Cloud-init v. 0.7.5 running 'init' at Thu, 07 Jan 2016 14:25:05 +0000. Up 138.66 seconds.
[  144.063434] cloud-init[938]: ci-info: +++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++
[  144.184776] cloud-init[938]: ci-info: +--------+------+---------------+---------------+-------------------+
[  144.187769] cloud-init[938]: ci-info: | Device |  Up  |    Address    |      Mask     |     Hw-Address    |
[  144.258363] cloud-init[938]: ci-info: +--------+------+---------------+---------------+-------------------+
[  144.264373] cloud-init[938]: ci-info: |  lo:   | True |   127.0.0.1   |   255.0.0.0   |         .         |
[  144.292861] cloud-init[938]: ci-info: | eth0:  | True | 192.168.99.12 | 255.255.255.0 | fa:16:3e:09:1b:df |
[  144.298840] cloud-init[938]: ci-info: +--------+------+---------------+---------------+-------------------+

So cloud-init is working, and our image boots OK.
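
One more thing before we leave images: since cloud-init is in the image, we can also configure instances at instantiation by passing user data at boot time; a minimal sketch (the file name and its contents are just an example, not from the lab):

[root@controller1 ~(keystone_liquid1)]# cat userdata.txt
#cloud-config
hostname: web1
packages:
 - httpd
runcmd:
 - systemctl enable httpd
 - systemctl start httpd
[root@controller1 ~(keystone_liquid1)]# nova boot --flavor m1.disk --image centos7 --security-groups default --nic net-id=6629041d-6d40-4ed5-a6cd-9ddcf3365ca5 --user-data userdata.txt web1

NEXT PLEASE!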


4. MANAGE CINDER VOLUMES:

First the basics: create a volume and attach it to a running instance:

[root@controller1 cinder(keystone_liquid1)]# cinder create --display-name DataVol --display-description "Apache data Volume" 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-01-11T15:50:59.972229      |
| display_description |          Apache data Volume          |
|     display_name    |               DataVol                |
|      encrypted      |                False                 |
|          id         | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  2                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@controller1 cinder(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |   DataVol    |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+

There are multiple backends we can use (ceph, lvm, nfs, etc.); let's check which one we are using:

[root@controller1 cinder(keystone_liquid1)]# cat /etc/cinder/cinder.conf | grep -i enabled_backends
#enabled_backends=
enabled_backends=lvm

We are using lvm, so let's check what happened when we ran the cinder create command:

[root@controller1 cinder(keystone_liquid1)]# lvs | grep -i cinder
  Configuration setting "snapshot_autoextend_percent" invalid. It's not part of any section.
  Configuration setting "snapshot_autoextend_threshold" invalid. It's not part of any section.
  volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 cinder-volumes -wi-a-----  2.00g                                                    

As you can see, a 2 GiB logical volume called volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 was created in the cinder-volumes VG. You can configure the VG with any name you like, and you can also configure several backends/VGs (a sketch follows below):

[root@controller1 cinder(keystone_liquid1)]# cat /etc/cinder/cinder.conf | grep -i cinder-volumes | grep -v ^#
volume_group=cinder-volumes
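
A sketch of what a multi-backend setup could look like in /etc/cinder/cinder.conf (backend and VG names here are just an example; each backend section would also carry its own volume_driver setting):

enabled_backends=lvm1,lvm2

[lvm1]
volume_group=cinder-volumes
volume_backend_name=LVM_1

[lvm2]
volume_group=cinder-volumes2
volume_backend_name=LVM_2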

Now we are going to attach this volume to our instance: the LVM volume will be exported (target) via iSCSI, and the compute node will import it (initiator) and add it to the VM configuration via the libvirt API.

So right now we don't have any iscsi volumes exported:

[root@controller1 cinder(keystone_liquid1)]# targetcli ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 0]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 0]
  o- loopback ......................................................................................................... [Targets: 0]

Let's attach the volume:

[root@controller1 cinder(keystone_liquid1)]# nova list
+--------------------------------------+--------------+--------+------------+-------------+-----------------------+
| ID                                   | Name         | Status | Task State | Power State | Networks              |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------+
| 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 | centos7      | ACTIVE | -          | Running     | private=192.168.99.12 |
| 6380d79e-8cbf-4071-be89-fbaf39b6b2be | cirros_extra | ACTIVE | -          | Running     | private=192.168.99.11 |
+--------------------------------------+--------------+--------+------------+-------------+-----------------------+
[root@controller1 cinder(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |   DataVol    |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@controller1 cinder(keystone_liquid1)]# nova volume-attach centos7 a7ea8e59-8300-4503-a168-2ab2c4a1c106  auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
| serverId | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| volumeId | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
+----------+--------------------------------------+
[root@controller1 cinder(keystone_liquid1)]# cinder list
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 | in-use |   DataVol    |  2   |      -      |  false   | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
+--------------------------------------+--------+--------------+------+-------------+----------+--------------------------------------+

As you can see, we have attached our volume to the centos7 instance. Let's check iSCSI on the controller node (target) and on the compute node (initiator):

[root@controller1 cinder(keystone_liquid1)]# targetcli ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 1]
  | | o- iqn.2010-10.org.openstack:volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106  [/dev/cinder-volumes/volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 (2.0GiB) write-thru activated]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.2010-10.org.openstack:volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 ............................................. [TPGs: 1]
  |   o- tpg1 .......................................................................................... [no-gen-acls, auth per-acl]
  |     o- acls .......................................................................................................... [ACLs: 1]
  |     | o- iqn.1994-05.com.redhat:66f9c19a3cf0 ...................................................... [1-way auth, Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................. [lun0 block/iqn.2010-10.org.openstack:volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 1]
  |     | o- lun0  [block/iqn.2010-10.org.openstack:volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 (/dev/cinder-volumes/volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 10.10.10.31:3260 ................................................................................................. [OK]
  o- loopback ......................................................................................................... [Targets: 0]

We can see that a block backstore has been created and exported as lun0 under the target IQN iqn.2010-10.org.openstack:volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106, with an ACL for the compute node's initiator IQN iqn.1994-05.com.redhat:66f9c19a3cf0; the portal to access the LUN is 10.10.10.31:3260.
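
nova-compute performs the iSCSI discovery and login automatically when we attach the volume, but done by hand on the compute node it would look roughly like this:

[root@compute1 ~]# iscsiadm -m discovery -t sendtargets -p 10.10.10.31:3260
[root@compute1 ~]# iscsiadm -m node -T iqn.2010-10.org.openstack:volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 -p 10.10.10.31:3260 --login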

On the compute host, we can see how the volume is attached to the compute node:

[root@compute1 ~]# dmesg
[1053061.777716] iscsi: registered transport (tcp)
[1053061.781246] scsi host2: iSCSI Initiator over TCP/IP
[1053061.791024] scsi 2:0:0:0: Direct-Access     LIO-ORG  IBLOCK           4.0  PQ: 0 ANSI: 5
[1053061.796122] scsi 2:0:0:0: alua: supports implicit and explicit TPGS
[1053061.805527] scsi 2:0:0:0: alua: No target port descriptors found
[1053061.808866] scsi 2:0:0:0: alua: not attached
[1053061.817612] scsi 2:0:0:0: Attached scsi generic sg0 type 0
[1053061.827302] sd 2:0:0:0: [sda] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
[1053061.832521] sd 2:0:0:0: [sda] Write Protect is off
[1053061.835243] sd 2:0:0:0: [sda] Mode Sense: 43 00 10 08
[1053061.835668] sd 2:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
[1053061.844541]  sda: unknown partition table
[1053061.847802] sd 2:0:0:0: [sda] Attached SCSI disk
[root@compute1 ~]# iscsiadm -m session -P 1
Target: iqn.2010-10.org.openstack:volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 (non-flash)
	Current Portal: 10.10.10.31:3260,1
	Persistent Portal: 10.10.10.31:3260,1
		**********
		Interface:
		**********
		Iface Name: default
		Iface Transport: tcp
		Iface Initiatorname: iqn.1994-05.com.redhat:66f9c19a3cf0
		Iface IPaddress: 10.10.10.41
		Iface HWaddress: 
		Iface Netdev: 
		SID: 1
		iSCSI Connection State: LOGGED IN
		iSCSI Session State: LOGGED_IN
		Internal iscsid Session State: NO CHANGE


And also how it is configured in the instance's libvirt definition, where the attached volume shows up as a virtio disk (target vdb) backed by the imported iSCSI device:

[root@compute1 ~]# virsh dumpxml instance-00000008 | grep -A 2 -B 2 -i vol

Finally, inside the instance, we can check that our 2 GiB volume is available, partition it, and put a filesystem on it:

[root@host-192-168-99-12 ~]# parted /dev/vdb mklabel msdos
Information: You may need to update /etc/fstab.

[root@host-192-168-99-12 ~]# parted /dev/vdb print                
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start  End  Size  Type  File system  Flags

[root@host-192-168-99-12 ~]# mkfs.xfs -f /dev/vdb
meta-data=/dev/vdb               isize=256    agcount=4, agsize=131072 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0
data     =                       bsize=4096   blocks=524288, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@host-192-168-99-12 ~]# mkdir /data
[root@host-192-168-99-12 ~]# mount /dev/vdb /data
[root@host-192-168-99-12 ~]# cd /data
[root@host-192-168-99-12 data]# dd if=/dev/zero of=100M bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 26.1474 s, 4.0 MB/s
[root@host-192-168-99-12 data]# ls
100M
[root@host-192-168-99-12 data]# 


We just added a 100 MB file to the volume, so in later examples we can check that the data is still there.


We are going to make the volume read-only; multi-attach to several hosts is not implemented yet, although it is on its way.

First we detach it from the instance (it needs to be unmounted and not in use inside the instance):

[root@controller1 ~(keystone_liquid1)]# nova volume-detach centos7 a7ea8e59-8300-4503-a168-2ab2c4a1c106
[root@controller1 ~(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |   DataVol    |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@controller1 ~(keystone_liquid1)]# cinder readonly-mode-update a7ea8e59-8300-4503-a168-2ab2c4a1c106 True
[root@controller1 ~(keystone_liquid1)]# cinder metadata-show a7ea8e59-8300-4503-a168-2ab2c4a1c106
+-------------------+-------+
| Metadata-property | Value |
+-------------------+-------+
|      readonly     |  True |
+-------------------+-------+

This is enforced at the qemu/libvirt level: when the volume is attached, the vdb disk definition carries the volume serial a7ea8e59-8300-4503-a168-2ab2c4a1c106 and a readonly flag:

[root@compute1 ~]# virsh dumpxml instance-00000008 | grep -B 2 -A 4 -i vdb

We are going to leave it read-write:

[root@controller1 ~(keystone_liquid1)]# nova volume-detach centos7 a7ea8e59-8300-4503-a168-2ab2c4a1c106
[root@controller1 ~(keystone_liquid1)]# cinder readonly-mode-update a7ea8e59-8300-4503-a168-2ab2c4a1c106 False

Changing the owner of a volume: let's say you need to move a volume to another project/tenant; this is how it's done:

[root@controller1 ~(keystone_liquid1)]# cinder transfer-create a7ea8e59-8300-4503-a168-2ab2c4a1c106
+------------+--------------------------------------+
|  Property  |                Value                 |
+------------+--------------------------------------+
|  auth_key  |           577c6cde5b5fc6b3           |
| created_at |      2016-01-12T07:50:58.027708      |
|     id     | 3f866f2c-6b7f-4b7e-8892-8fdfe802c306 |
|    name    |                 None                 |
| volume_id  | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
+------------+--------------------------------------+
[root@controller1 ~(keystone_tenant2)]# cinder transfer-list
+--------------------------------------+--------------------------------------+------+
|                  ID                  |              Volume ID               | Name |
+--------------------------------------+--------------------------------------+------+
| d61adc1b-fb1e-4cc9-99ae-b527c693b0ee | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |  -   |
+--------------------------------------+--------------------------------------+------+
[root@controller1 ~(keystone_liquid1)]# source keystonerc_tenant2 
[root@controller1 ~(keystone_tenant2)]# cinder transfer-accept 3f866f2c-6b7f-4b7e-8892-8fdfe802c306 577c6cde5b5fc6b3     --> here is the transfer id and the auth_key
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | 3f866f2c-6b7f-4b7e-8892-8fdfe802c306 |
|    name   |                 None                 |
| volume_id | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
+-----------+--------------------------------------+
[root@controller1 ~(keystone_tenant2)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |   DataVol    |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@controller1 ~(keystone_tenant2)]# source keystonerc_liquid1 
[root@controller1 ~(keystone_liquid1)]# cinder list
+----+--------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+----+--------+--------------+------+-------------+----------+-------------+
+----+--------+--------------+------+-------------+----------+-------------+
[root@controller1 ~(keystone_liquid1)]# 


Creating snapshots of a volume (if the volume is attached to an instance we need to use the --force flag, shown later):

[root@controller1 ~(keystone_liquid1)]# cinder snapshot-create --display-name "data-snap" --display-description "data snapshot 12 Jan" a7ea8e59-8300-4503-a168-2ab2c4a1c106
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|      created_at     |      2016-01-12T08:48:04.292362      |
| display_description |         data snapshot 12 Jan         |
|     display_name    |              data-snap               |
|          id         | df961fb9-f6c6-4797-b50f-1bb626e88310 |
|       metadata      |                  {}                  |
|         size        |                  2                   |
|        status       |               creating               |
|      volume_id      | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
+---------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+--------------+------+
|                  ID                  |              Volume ID               |   Status  | Display Name | Size |
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| df961fb9-f6c6-4797-b50f-1bb626e88310 | a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |  data-snap   |  2   |
+--------------------------------------+--------------------------------------+-----------+--------------+------+

With the LVM backend, snapshots are created using the LVM snapshot feature:

[root@controller1 ~(keystone_liquid1)]# lvs | grep cinder-volumes
  Configuration setting "snapshot_autoextend_percent" invalid. It's not part of any section.
  Configuration setting "snapshot_autoextend_threshold" invalid. It's not part of any section.
  _snapshot-df961fb9-f6c6-4797-b50f-1bb626e88310 cinder-volumes swi-a-s---  2.00g      volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 0.00                                   
  volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106    cinder-volumes owi-a-s---  2.00g                                                                                         

Now that we have a snapshot, we can do two things: create a clone from the snapshot, or back the snapshot up to the object store.

First we are going to clone the snapshot and attach it to the cirros_extra instance:

[root@controller1 ~(keystone_liquid1)]# cinder create --snapshot-id df961fb9-f6c6-4797-b50f-1bb626e88310 --display-name data-clon --display-description "Clon from data volume" 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-01-12T08:53:30.687666      |
| display_description |        Clon from data volume         |
|     display_name    |              data-clon               |
|      encrypted      |                False                 |
|          id         | e8be1cb6-bc5d-42c1-94f0-1ce337227a73 |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  2                   |
|     snapshot_id     | df961fb9-f6c6-4797-b50f-1bb626e88310 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |   DataVol    |  2   |      -      |  false   |             |
| e8be1cb6-bc5d-42c1-94f0-1ce337227a73 | available |  data-clon   |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@controller1 ~(keystone_liquid1)]# lvs | grep -i cinder-volumes
  Configuration setting "snapshot_autoextend_percent" invalid. It's not part of any section.
  Configuration setting "snapshot_autoextend_threshold" invalid. It's not part of any section.
  _snapshot-df961fb9-f6c6-4797-b50f-1bb626e88310 cinder-volumes swi-a-s---  2.00g      volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 0.00                                   
  volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106    cinder-volumes owi-a-s---  2.00g                                                                                         
  volume-e8be1cb6-bc5d-42c1-94f0-1ce337227a73    cinder-volumes -wi-a-----  2.00g                                                                                         

[root@controller1 ~(keystone_liquid1)]# nova volume-attach centos7 e8be1cb6-bc5d-42c1-94f0-1ce337227a73  auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | e8be1cb6-bc5d-42c1-94f0-1ce337227a73 |
| serverId | 6380d79e-8cbf-4071-be89-fbaf39b6b2be |
| volumeId | e8be1cb6-bc5d-42c1-94f0-1ce337227a73 |
+----------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |   DataVol    |  2   |      -      |  false   |                                      |
| e8be1cb6-bc5d-42c1-94f0-1ce337227a73 |   in-use  |  data-clon   |  2   |      -      |  false   | 6380d79e-8cbf-4071-be89-fbaf39b6b2be |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+

We are now going to connect to the centos7 instance, and check the data is there:

[root@host-192-168-99-12 ~]# parted /dev/vdb print
Model: Virtio Block Device (virtblk)
Disk /dev/vdb: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags: 

Number  Start  End     Size    File system  Flags
 1      0.00B  2147MB  2147MB  xfs

[root@host-192-168-99-12 ~]# mount /dev/vdb /mnt
[root@host-192-168-99-12 ~]# ls -l /mnt
total 102400
-rw-r--r--. 1 root root 104857600 Jan 12 08:20 100M

Perfect, snapshotting and then cloning is done; now let's get into cinder backup. A cinder backup is a full backup of the volume that can later be restored. To restore the volume we need the volume metadata saved by the cinder backup service, and we can also back up that metadata separately in case we lose all of cinder's DB information.
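
The restore side is driven from the cinder client as well; a rough sketch (IDs are placeholders, these commands are not run in this lab):

cinder backup-restore <backup-id>                     # restore the backup into a new volume (or --volume-id <existing-volume>)
cinder backup-export <backup-id>                      # dump the backup metadata record
cinder backup-import <backup_service> <backup_url>    # re-import that metadata into the cinder database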

Backups:

As of the Kilo release, the Cinder backup service has the following functionality:

    Back up available Cinder volumes
    Multiple drivers available: RGW/Swift, Ceph, TSM, NFS
    Back up encrypted volumes
    Incremental backups
    Metadata support in Cinder Backups
    Ability to import/export backups into Cinder

Normally the backups are done against an object storage backend (swift or ceph); in our case we have swift configured as the backup backend:

[root@controller1 ~(keystone_liquid1)]# cat /etc/cinder/cinder.conf | grep -i backup | grep -v ^#
backup_compression_algorithm=zlib
backup_swift_url=http://10.10.10.31:8080/v1/AUTH_
backup_swift_container=volumes_backup
backup_swift_object_size=52428800
backup_swift_retry_attempts=3
backup_swift_retry_backoff=2
backup_driver=cinder.backup.drivers.swift
backup_topic=cinder-backup
backup_manager=cinder.backup.manager.BackupManager
backup_api_class=cinder.backup.api.API
backup_name_template=backup-%s
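
For backups to work, the cinder backup service also has to be enabled and running on the controller (service name as packaged in RDO):

[root@controller1 ~(keystone_liquid1)]# systemctl enable openstack-cinder-backup.service
[root@controller1 ~(keystone_liquid1)]# systemctl start openstack-cinder-backup.service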

To take a backup the volume has to be in a detached state, so what we do is first snapshot the volume, then clone the snapshot, and finally take a backup of the clone. Let's show an example of the full cycle:

So let's start: we have a volume attached to our centos7 instance, mounted on /data with a file inside:

[root@host-192-168-99-12 data]# pwd
/data
[root@host-192-168-99-12 data]# ls
100M  lost+found
[root@host-192-168-99-12 data]# df -h .
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/datavg-datalv  976M  103M  807M  12% /data

We are going to take the snapshot with the volume attached, so we temporarily freeze the filesystem:

[root@host-192-168-99-12 data]# fsfreeze --freeze /data

If we try to snapshot with the volume attached, we get an error:

[root@controller1 ~(keystone_liquid1)]# cinder snapshot-create --display-name "data-snap" --display-description "data snap" a7ea8e59-8300-4503-a168-2ab2c4a1c106
ERROR: Invalid volume: Volume a7ea8e59-8300-4503-a168-2ab2c4a1c106 status must be available, but current status is: in-use. (HTTP 400) (Request-ID: req-ae628589-de62-4c8b-a48f-dd2fe859878e)

We need to use the --force modifier:

[root@controller1 ~(keystone_liquid1)]# cinder snapshot-create --force True  --display-name "data-snap" --display-description "data snap" a7ea8e59-8300-4503-a168-2ab2c4a1c106
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|      created_at     |      2016-01-12T10:05:47.024561      |
| display_description |              data snap               |
|     display_name    |              data-snap               |
|          id         | bd26510b-bef5-428f-9b7b-c6e8864afcc7 |
|       metadata      |                  {}                  |
|         size        |                  2                   |
|        status       |               creating               |
|      volume_id      | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
+---------------------+--------------------------------------+
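
Once the snapshot exists we can unfreeze the filesystem inside the instance so writes resume:

[root@host-192-168-99-12 data]# fsfreeze --unfreeze /data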

Now we are going to clone the snapshot and then take a backup:

[root@controller1 ~(keystone_liquid1)]# cinder create --snapshot-id bd26510b-bef5-428f-9b7b-c6e8864afcc7 --display-name "data-clone" 2
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-01-12T10:08:14.571706      |
| display_description |                 None                 |
|     display_name    |              data-clone              |
|      encrypted      |                False                 |
|          id         | c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  2                   |
|     snapshot_id     | bd26510b-bef5-428f-9b7b-c6e8864afcc7 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                 None                 |
+---------------------+--------------------------------------+

We have the clone, now let's back up the clone:

[root@controller1 ~(keystone_liquid1)]# cinder list | grep c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb
| c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb | available |  data-clone  |  2   |      -      |  false   |                                      |
[root@controller1 ~(keystone_liquid1)]# cinder backup-create --display-name "data_backup_12_Jan_2016" c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb
+-----------+--------------------------------------+
|  Property |                Value                 |
+-----------+--------------------------------------+
|     id    | 11b493ba-2b6d-4804-84fb-b0b745740260 |
|    name   |       data_backup_12_Jan_2016        |
| volume_id | c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb |
+-----------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# cinder backup-list
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+
|                  ID                  |              Volume ID               |   Status  |           Name          | Size | Object Count |   Container    |
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+
| 11b493ba-2b6d-4804-84fb-b0b745740260 | c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb | available | data_backup_12_Jan_2016 |  2   |      42      | volumes_backup |
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+

If we check in Swift, we can see that the container volumes_backup has been created:

[root@controller1 ~(keystone_liquid1)]# swift list volumes_backup
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00001
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00002
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00003
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00004
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00005
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00006
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00007
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00008
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00009
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00010
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00011
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00012
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00013
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00014
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00015
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00016
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00017
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00018
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00019
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00020
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00021
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00022
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00023
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00024
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00025
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00026
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00027
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00028
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00029
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00030
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00031
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00032
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00033
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00034
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00035
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00036
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00037
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00038
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00039
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00040
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260-00041
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260_metadata
volume_c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb/20160112101008/az_nova_backup_11b493ba-2b6d-4804-84fb-b0b745740260_sha256file

Backup of in-use volumes was identified as a potential improvement for the backup service in the Kilo release, and so the non-disruptive backup feature was added in Liberty. This new feature brings not only the convenience of creating backups of attached volumes with a single API call, but it also preserves the original volume ID reference in the backup record and can do incremental backups as well.
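
On this Kilo-based lab the snapshot/clone dance above is the workaround; with a Liberty (or newer) client and API the in-use volume could be backed up directly, roughly like this (sketch only, the --force and --incremental flags come from the newer python-cinderclient):

[root@controller1 ~(keystone_liquid1)]# cinder backup-create --force --incremental a7ea8e59-8300-4503-a168-2ab2c4a1c106   ------> needs Liberty+ cinder client/API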

Once the backup is ready, let's remove the snapshot and the clone:

[root@controller1 ~(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 |   in-use  |   DataVol    |  2   |      -      |  false   | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb | available |  data-clone  |  2   |      -      |  false   |                                      |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# cinder snapshot-list
+--------------------------------------+--------------------------------------+-----------+--------------+------+
|                  ID                  |              Volume ID               |   Status  | Display Name | Size |
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| bd26510b-bef5-428f-9b7b-c6e8864afcc7 | a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |  data-snap   |  2   |
+--------------------------------------+--------------------------------------+-----------+--------------+------+
[root@controller1 ~(keystone_liquid1)]# cinder snapshot-delete bd26510b-bef5-428f-9b7b-c6e8864afcc7
[root@controller1 ~(keystone_liquid1)]# cinder delete c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb

Ok, let's test our backup with a restore of the volume:

Let's delete a file from the /data FS:

[root@host-192-168-99-12 data]# file 100M
100M: data
[root@host-192-168-99-12 data]# rm 100M
[root@host-192-168-99-12 data]# ls
lost+found
[root@host-192-168-99-12 ~]# umount /data
[root@host-192-168-99-12 ~]# lvchange -a n /dev/datavg/datalv
[root@host-192-168-99-12 ~]# vgexport /dev/datavg
  Volume group "datavg" successfully exported

[root@controller1 ~(keystone_liquid1)]# nova volume-detach centos7 a7ea8e59-8300-4503-a168-2ab2c4a1c106
[root@controller1 ~(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 | available |   DataVol    |  2   |      -      |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@controller1 ~(keystone_liquid1)]# cinder backup-list
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+
|                  ID                  |              Volume ID               |   Status  |           Name          | Size | Object Count |   Container    |
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+
| 11b493ba-2b6d-4804-84fb-b0b745740260 | c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb | available | data_backup_12_Jan_2016 |  2   |      42      | volumes_backup |
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+

To restore, we use the destination volume ID we want to restore onto, and the backup ID of the source:

[root@controller1 ~(keystone_liquid1)]# cinder backup-restore --volume-id a7ea8e59-8300-4503-a168-2ab2c4a1c106 11b493ba-2b6d-4804-84fb-b0b745740260
[root@controller1 ~(keystone_liquid1)]# cinder backup-list
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+
|                  ID                  |              Volume ID               |   Status  |           Name          | Size | Object Count |   Container    |
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+
| 11b493ba-2b6d-4804-84fb-b0b745740260 | c0ba395f-0463-4bcc-8aaa-ffeebb44f3bb | restoring | data_backup_12_Jan_2016 |  2   |      42      | volumes_backup |
+--------------------------------------+--------------------------------------+-----------+-------------------------+------+--------------+----------------+

[root@controller1 ~(keystone_liquid1)]# tail -2 /var/log/cinder/backup.log 
2016-01-12 14:17:35.445 26477 INFO cinder.backup.manager [req-f6021da9-d725-4708-883c-45a9bafddf42 b744bf4403954da0aae267325293411a 22606b0b2b7f411abd08b23f8ebe03a1 - - -] Restore backup started, backup: 11b493ba-2b6d-4804-84fb-b0b745740260 volume: a7ea8e59-8300-4503-a168-2ab2c4a1c106.

Once it has finished, we can re-attach the volume, mount the filesystem, and see that all the contents are there:

2016-01-12 14:33:05.505 26477 INFO cinder.backup.manager [req-d4b7ac70-3725-4847-848a-6f6106dafda8 - - - - -] Restore backup finished, backup 11b493ba-2b6d-4804-84fb-b0b745740260 restored to volume a7ea8e59-8300-4503-a168-2ab2c4a1c106.
[root@controller1 ~(keystone_liquid1)]# nova volume-attach centos7 a7ea8e59-8300-4503-a168-2ab2c4a1c106  auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
| serverId | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| volumeId | a7ea8e59-8300-4503-a168-2ab2c4a1c106 |
+----------+--------------------------------------+


[root@host-192-168-99-12 ~]# lvs
  LV     VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root   centos -wi-ao---- 27.46g                                                    
  swap   centos -wi-ao----  2.00g                                                    
  datalv datavg -wi-a-----  1.00g                                                    
[root@host-192-168-99-12 ~]# mount /dev/datavg/datalv /data
[root@host-192-168-99-12 ~]# ls -l /data
total 102416
-rw-r--r--. 1 root root 104857600 Jan 12 10:56 100M
drwx------. 2 root root     16384 Jan 12 10:55 lost+found

Backup/restore of Cinder volumes: DONE!


Now let's configure multiple Cinder storage backends (NFS, Ceph, NetApp, etc.). In this example we are going to add another LVM backend, backed by SATA disks.

Before modifying Cinder's config file (/etc/cinder/cinder.conf) we had:

enabled_backends=lvm
[lvm]
iscsi_helper=lioadm
volume_group=cinder-volumes
iscsi_ip_address=10.10.10.31
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
iscsi_protocol=iscsi
volume_backend_name=lvm

We modify the file and add another type:

enabled_backends=lvm,lvm-sata
[lvm]
iscsi_helper=lioadm
volume_group=cinder-volumes
iscsi_ip_address=10.10.10.31
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
iscsi_protocol=iscsi
volume_backend_name=lvm
[lvm-sata]
iscsi_helper=lioadm
volume_group=sata-volumes
iscsi_ip_address=10.10.10.31
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
iscsi_protocol=iscsi


The difference is the volume group, in this case made up of 10k SATA disks.
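
If you prefer to script the cinder.conf change instead of editing it by hand, openstack-config (the crudini wrapper from the openstack-utils package) can do the same thing; a rough sketch:

[root@controller1 ~]# openstack-config --set /etc/cinder/cinder.conf DEFAULT enabled_backends lvm,lvm-sata
[root@controller1 ~]# openstack-config --set /etc/cinder/cinder.conf lvm-sata volume_driver cinder.volume.drivers.lvm.LVMVolumeDriver
[root@controller1 ~]# openstack-config --set /etc/cinder/cinder.conf lvm-sata volume_group sata-volumes
[root@controller1 ~]# openstack-config --set /etc/cinder/cinder.conf lvm-sata iscsi_helper lioadm
[root@controller1 ~]# openstack-config --set /etc/cinder/cinder.conf lvm-sata iscsi_ip_address 10.10.10.31
[root@controller1 ~]# openstack-config --set /etc/cinder/cinder.conf lvm-sata iscsi_protocol iscsi
[root@controller1 ~]# openstack-config --set /etc/cinder/cinder.conf lvm-sata volumes_dir /var/lib/cinder/volumes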

Now we have to create the volume types and we are ready to go. Because we are just testing, we are going to create the VG from a loopback file:

[root@controller1 ~(keystone_liquid1)]# truncate /var/lib/cinder/sata-volumes  -s 1000G
[root@controller1 ~(keystone_liquid1)]# ls -s /var/lib/cinder/sata-volumes
0 /var/lib/cinder/sata-volumes
[root@controller1 ~(keystone_liquid1)]# losetup /dev/loop1 /var/lib/cinder/sata-volumes
[root@controller1 ~(keystone_liquid1)]# losetup 
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         1  0 /srv/loopback-device/swiftloopback
/dev/loop1         0      0         0  0 /var/lib/cinder/sata-volumes
/dev/loop2         0      0         0  0 /var/lib/cinder/cinder-volumes
[root@controller1 ~(keystone_liquid1)]# pvcreate /dev/loop1
  Physical volume "/dev/loop1" successfully created
[root@controller1 ~(keystone_liquid1)]# vgcreate sata-volumes /dev/loop1
  Volume group "sata-volumes" successfully created
[root@controller1 ~(keystone_liquid1)]# vgs
  VG             #PV #LV #SN Attr   VSize    VFree   
  centos           1   2   0 wz--n-   29.51g   44.00m
  cinder-volumes   1   2   1 wz--n-   20.60g   16.60g
  sata-volumes     1   0   0 wz--n- 1000.00g 1000.00g

[root@controller1 ~(keystone_liquid1)]# openstack-service restart cinder
[root@controller1 ~(keystone_liquid1)]#
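
To confirm that cinder-volume picked up both backends after the restart, we can list the volume services; with this config I would expect to see two cinder-volume entries, something like controller1@lvm and controller1@lvm-sata (the exact host strings are an assumption based on the section names):

[root@controller1 ~(keystone_liquid1)]# cinder service-list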

So now let's configure the new volume type through the Cinder API:

[root@controller1 cinder(keystone_liquid1)]# cinder type-create iscsi-sata
+--------------------------------------+------------+
|                  ID                  |    Name    |
+--------------------------------------+------------+
| 63619417-d861-440e-b58b-357e0a4c0e92 | iscsi-sata |
+--------------------------------------+------------+
[root@controller1 cinder(keystone_liquid1)]# cinder type-key iscsi-sata set volume_backend_name=lvm-sata
[root@controller1 cinder(keystone_liquid1)]#  cinder extra-specs-list
+--------------------------------------+------------+---------------------------------------+
|                  ID                  |    Name    |              extra_specs              |
+--------------------------------------+------------+---------------------------------------+
| 0b39a592-8d02-44cd-9a12-fbb405bf9657 |   iscsi    |    {u'volume_backend_name': u'lvm'}   |
| 63619417-d861-440e-b58b-357e0a4c0e92 | iscsi-sata | {u'volume_backend_name': u'lvm-sata'} |
+--------------------------------------+------------+---------------------------------------+
[root@controller1 cinder(keystone_liquid1)]# cinder create --volume-type iscsi-sata --display-name cirros-boot-vol2 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-01-12T15:11:54.415990      |
| display_description |                 None                 |
|     display_name    |           cirros-boot-vol2           |
|      encrypted      |                False                 |
|          id         | f924422a-f284-4f3d-9a5b-cf87b8d99404 |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |              iscsi-sata              |
+---------------------+--------------------------------------+
[root@controller1 cinder(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |   Display Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 |   in-use  |     DataVol      |  2   |      -      |  false   | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| f924422a-f284-4f3d-9a5b-cf87b8d99404 | available | cirros-boot-vol2 |  1   |  iscsi-sata |  false   |                                      |
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
[root@controller1 cinder(keystone_liquid1)]# lvs
  LV                                          VG             Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root                                        centos         -wi-ao---- 27.46g                                                    
  swap                                        centos         -wi-ao----  2.00g                                                    
  volume-a7ea8e59-8300-4503-a168-2ab2c4a1c106 cinder-volumes -wi-ao----  2.00g                                                    
  volume-f924422a-f284-4f3d-9a5b-cf87b8d99404 sata-volumes   -wi-a-----  1.00g                        

QoS (quality of service) for volumes:

Both QEMU and KVM support I/O rate limitation.
This is implemented through libvirt and exposed as an extra XML element, iotune, inside the disk section of the domain definition.

QoS options are:

    total_bytes_sec: the total allowed bandwidth for the guest per second
    read_bytes_sec: sequential read limitation
    write_bytes_sec: sequential write limitation
    total_iops_sec: the total allowed IOPS for the guest per second
    read_iops_sec: random read limitation
    write_iops_sec: random write limitation

The consumer type can be:

front-end = Compute node
back-end = Cinder service node
both = both front and back

[root@controller1 cinder(keystone_liquid1)]# cinder qos-create high-iops consumer="front-end" read_iops_sec=2000 write_iops_sec=1000
+----------+---------------------------------------------------------+
| Property |                          Value                          |
+----------+---------------------------------------------------------+
| consumer |                        front-end                        |
|    id    |           0eebddff-4932-4dd4-8db9-4ebf7fd9096d          |
|   name   |                        high-iops                        |
|  specs   | {u'write_iops_sec': u'1000', u'read_iops_sec': u'2000'} |
+----------+---------------------------------------------------------+
[root@controller1 cinder(keystone_liquid1)]# cinder type-create high-iops
+--------------------------------------+-----------+
|                  ID                  |    Name   |
+--------------------------------------+-----------+
| 02179a34-7816-47a3-8325-7c7a34585859 | high-iops |
+--------------------------------------+-----------+
[root@controller1 cinder(keystone_liquid1)]# cinder qos-associate 0eebddff-4932-4dd4-8db9-4ebf7fd9096d 02179a34-7816-47a3-8325-7c7a34585859
[root@controller1 cinder(keystone_liquid1)]# cinder qos-list
+--------------------------------------+-----------+-----------+---------------------------------------------------------+
|                  ID                  |    Name   |  Consumer |                          specs                          |
+--------------------------------------+-----------+-----------+---------------------------------------------------------+
| 0eebddff-4932-4dd4-8db9-4ebf7fd9096d | high-iops | front-end | {u'write_iops_sec': u'1000', u'read_iops_sec': u'2000'} |
+--------------------------------------+-----------+-----------+---------------------------------------------------------+

We can now create a volume with type high-iops and attach it to an instance:

[root@controller1 ~(keystone_liquid1)]# cinder create --display-name highiops --volume-type high-iops 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-01-14T08:07:48.456906      |
| display_description |                 None                 |
|     display_name    |               highiops               |
|      encrypted      |                False                 |
|          id         | 94f81e17-f95a-4e96-adf3-8c1dc268fc36 |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |              high-iops               |
+---------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |   Display Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
| 94f81e17-f95a-4e96-adf3-8c1dc268fc36 | available |     highiops     |  1   |  high-iops  |  false   |                                      |
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 |   in-use  |     DataVol      |  2   |      -      |  false   | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| f924422a-f284-4f3d-9a5b-cf87b8d99404 | available | cirros-boot-vol2 |  1   |  iscsi-sata |  false   |                                      |
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# nova volume-attach centos7 94f81e17-f95a-4e96-adf3-8c1dc268fc36  auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdc                             |
| id       | 94f81e17-f95a-4e96-adf3-8c1dc268fc36 |
| serverId | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| volumeId | 94f81e17-f95a-4e96-adf3-8c1dc268fc36 |
+----------+--------------------------------------+


As we said, you can check the new attached volume's QoS info in the instance XML configuration:

[root@compute1 ~]# virsh dumpxml instance-00000008 | grep -B 4 -A 1 iops
      <iotune>
        <read_iops_sec>2000</read_iops_sec>
        <write_iops_sec>1000</write_iops_sec>
      </iotune>
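
Another way to check the throttling, without grepping the XML, is virsh blkdeviotune; the device name vdc below is assumed from the /dev/vdc attach above:

[root@compute1 ~]# virsh blkdeviotune instance-00000008 vdc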




Encryption:

We need to add a 16-digit hex key as fixed_key inside the [keymgr] section of /etc/cinder/cinder.conf:

[root@controller1 ~]# cat /etc/cinder/cinder.conf | grep -v ^# | grep -A 6 keymgr | grep -v ^$
[keymgr]
fixed_key=5228F56D8326DB68

[root@controller1 ~]# openstack-service restart cinder-volume
[root@controller1 ~]# 


I got the random hex string using:
[root@controller1 ~]# for i in $(seq 1 16); do echo -n $(echo "obase=16; $(($RANDOM % 16))" | bc); done; echo
5228F56D8326DB68

NOTE: just as an example, 16 zeros would also work: 0000000000000000
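
A simpler way to generate the same thing, assuming openssl is available, is:

[root@controller1 ~]# openssl rand -hex 8    ------> 8 random bytes printed as 16 hex digits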

We also have to add the key to the compute node:

[root@compute1 ~]#  cat /etc/nova/nova.conf | grep -v ^# | grep -A 6 keymgr | grep -v ^$
[keymgr]
fixed_key=5228F56D8326DB68
[root@compute1 ~]# openstack-service restart nova
[root@compute1 ~]#

Now we can create the volume type and associate an encryption (cipher) type with it:

[root@controller1 ~(keystone_liquid1)]# cinder type-create AESCRYPT
+--------------------------------------+----------+
|                  ID                  |   Name   |
+--------------------------------------+----------+
| 9592d693-0f33-4b72-8c00-889d2370173e | AESCRYPT |
+--------------------------------------+----------+
[root@controller1 ~(keystone_liquid1)]# cinder encryption-type-create --cipher aes --key_size 512 --control_location front-end AESCRYPT nova.volume.encryptors.luks.LuksEncryptor
+--------------------------------------+-------------------------------------------+--------+----------+------------------+
|            Volume Type ID            |                  Provider                 | Cipher | Key Size | Control Location |
+--------------------------------------+-------------------------------------------+--------+----------+------------------+
| 9592d693-0f33-4b72-8c00-889d2370173e | nova.volume.encryptors.luks.LuksEncryptor |  aes   |   512    |    front-end     |
+--------------------------------------+-------------------------------------------+--------+----------+------------------+
[root@controller1 ~(keystone_liquid1)]# cinder extra-specs-list
+--------------------------------------+------------+---------------------------------------+
|                  ID                  |    Name    |              extra_specs              |
+--------------------------------------+------------+---------------------------------------+
| 02179a34-7816-47a3-8325-7c7a34585859 | high-iops  |                   {}                  |
| 0b39a592-8d02-44cd-9a12-fbb405bf9657 |   iscsi    |    {u'volume_backend_name': u'lvm'}   |
| 63619417-d861-440e-b58b-357e0a4c0e92 | iscsi-sata | {u'volume_backend_name': u'lvm-sata'} |
| 9592d693-0f33-4b72-8c00-889d2370173e |  AESCRYPT  |                   {}                  |
+--------------------------------------+------------+---------------------------------------+
[root@controller1 ~(keystone_liquid1)]# cinder create --display-name cryptvol --volume-type AESCRYPT 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2016-01-14T08:27:24.977603      |
| display_description |                 None                 |
|     display_name    |               cryptvol               |
|      encrypted      |                 True                 |
|          id         | e7891136-aa82-44db-82fb-8ee0db29a52d |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |               AESCRYPT               |
+---------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# cinder list
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |   Display Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
| 94f81e17-f95a-4e96-adf3-8c1dc268fc36 |   in-use  |     highiops     |  1   |  high-iops  |  false   | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| a7ea8e59-8300-4503-a168-2ab2c4a1c106 |   in-use  |     DataVol      |  2   |      -      |  false   | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| e7891136-aa82-44db-82fb-8ee0db29a52d | available |     cryptvol     |  1   |   AESCRYPT  |  false   |                                      |
| f924422a-f284-4f3d-9a5b-cf87b8d99404 | available | cirros-boot-vol2 |  1   |  iscsi-sata |  false   |                                      |
+--------------------------------------+-----------+------------------+------+-------------+----------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# nova volume-attach centos7 e7891136-aa82-44db-82fb-8ee0db29a52d  auto
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdd                             |
| id       | e7891136-aa82-44db-82fb-8ee0db29a52d |
| serverId | 8b1beff9-2e22-4d22-bf15-22030bd9b0d7 |
| volumeId | e7891136-aa82-44db-82fb-8ee0db29a52d |
+----------+--------------------------------------+

As we can see, a LUKS-encrypted block device is created, and it is the compute node (front-end) that decrypts the volume:

[root@compute1 nova]# tail -40 /var/log/nova/nova-compute.log | grep -i luks
2016-01-14 09:35:42.336 29384 WARNING nova.volume.encryptors.luks [req-63ec4ac2-8eec-48a8-bcc0-9e263117bda9 b744bf4403954da0aae267325293411a 22606b0b2b7f411abd08b23f8ebe03a1 - - -] isLuks exited abnormally (status 1): Device /dev/sdc is not a valid LUKS device.
2016-01-14 09:35:42.337 29384 INFO nova.volume.encryptors.luks [req-63ec4ac2-8eec-48a8-bcc0-9e263117bda9 b744bf4403954da0aae267325293411a 22606b0b2b7f411abd08b23f8ebe03a1 - - -] /dev/sdc is not a valid LUKS device; formatting device for first use

[root@compute1 mapper]# lsblk --fs | grep LUKS
sdc                                                                                               crypto_LUKS       b6f90101-d30e-44ca-a12f-00656dc606b7   
[root@compute1 mapper]# dmsetup ls --target crypt
ip-10.10.10.31:3260-iscsi-iqn.2010-10.org.openstack:volume-fd340911-df46-4734-a374-f95c9210487f-lun-0	(253, 3)

On the controller node there is no LUKS device open:
[root@controller1 ~]# dmsetup ls --target crypt
[root@controller1 ~]# 
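
If you want to double-check the on-disk format from the compute node, cryptsetup can read the LUKS header directly (the device name /dev/sdc is taken from the log above; yours may differ):

[root@compute1 ~]# cryptsetup luksDump /dev/sdc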


SWIFT PSEUDO FOLDERS.

I have only been able to create pseudo folders with curl or from Horizon; I was not able to do it via the swift CLI.

Here is an example of a pseudo folder; it gives us the ability to nest folders/files inside containers.

In this example sh-env is a container:

[root@controller1 ~(keystone_liquid1)]# swift list --lh
    4 316K 2016-01-14 10:48:18 sh-env
   43 6.6M 2016-01-12 10:10:08 volumes_backup
   47 6.9M

Inside we have 2 pseudo folders, liquid-project/ and test-project/; into one of the folders we have uploaded a file:

[root@controller1 ~(keystone_liquid1)]# swift list --lh sh-env
 237 2016-01-14 10:48:38 keystonerc_liquid1
   0 2016-01-14 11:07:08 liquid-project/
316K 2016-01-14 11:07:46 liquid-project/openstack-kilo.txt
   0 2016-01-14 11:07:23 test-project/
316K
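
For reference, this is roughly how a pseudo folder can be created with curl: pull the storage URL and token from swift stat -v, then PUT a zero-byte object whose name ends in a slash (the application/directory content type is what Horizon uses; <token> and <StorageURL> below are placeholders taken from your own stat output):

[root@controller1 ~(keystone_liquid1)]# swift stat -v | egrep 'StorageURL|Auth Token'
[root@controller1 ~(keystone_liquid1)]# curl -X PUT -d '' -H "X-Auth-Token: <token>" -H "Content-Type: application/directory" <StorageURL>/sh-env/another-project/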

5. MANAGE NETWORKING
------------------------

We need to have a network bridge if we want to use the NIC for the controller and also as the public network gateway to the outside world.

To configure an Open vSwitch bridge on eth0:

[root@controller1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT="yes"
BOOTPROTO="none"
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-eth0
[root@controller1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-br-eth0 
DEVICE=br-eth0
BOOTPROTO=static
IPADDR=192.168.122.31
NETMASK=255.255.255.0
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
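
For reference, the same layout can be built by hand with ovs-vsctl (doing it via the ifcfg files above is persistent across reboots, this is not; also be careful if eth0 is the NIC you are connected through):

[root@controller1 ~]# ovs-vsctl add-br br-eth0
[root@controller1 ~]# ovs-vsctl add-port br-eth0 eth0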


Once the network restarts, you get:

9: br-eth0:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether ae:76:e4:74:7d:4d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.31/24 brd 192.168.122.255 scope global br-eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::ac76:e4ff:fe74:7d4d/64 scope link 
       valid_lft forever preferred_lft forever

[root@controller1 ~]# ovs-vsctl show
............
    Bridge "br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
...............


We are first going to create a public network, from which floating IPs will be assigned to the instances so they can be reached from the outside world.

[root@controller1 ~(keystone_liquid1)]# neutron net-create --shared --router:external --provider:network_type vxlan public
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 23797775-a630-4fea-b66f-7085ef7a47f8 |
| mtu                       | 0                                    |
| name                      | public                               |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 81                                   |
| router:external           | True                                 |
| shared                    | True                                 |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 22606b0b2b7f411abd08b23f8ebe03a1     |
+---------------------------+--------------------------------------+

Now we create a subnet for the public network; we disable DHCP and use a pool of IPs (.150 to .200) for floating IPs:

[root@controller1 ~(keystone_liquid1)]# neutron subnet-create --name public-sub --gateway 192.168.122.1 --allocation-pool start=192.168.122.150,end=192.168.122.200 --disable-dhcp public 192.168.122.0/24
Created a new subnet:
+-------------------+--------------------------------------------------------+
| Field             | Value                                                  |
+-------------------+--------------------------------------------------------+
| allocation_pools  | {"start": "192.168.122.150", "end": "192.168.122.200"} |
| cidr              | 192.168.122.0/24                                       |
| dns_nameservers   |                                                        |
| enable_dhcp       | False                                                  |
| gateway_ip        | 192.168.122.1                                          |
| host_routes       |                                                        |
| id                | 176009b3-ae96-4e79-acb9-eddf51343cea                   |
| ip_version        | 4                                                      |
| ipv6_address_mode |                                                        |
| ipv6_ra_mode      |                                                        |
| name              | public-sub                                             |
| network_id        | 23797775-a630-4fea-b66f-7085ef7a47f8                   |
| subnetpool_id     |                                                        |
| tenant_id         | 22606b0b2b7f411abd08b23f8ebe03a1                       |
+-------------------+--------------------------------------------------------+

[root@controller1 ~(keystone_liquid1)]# neutron net-list
+--------------------------------------+--------+-------------------------------------------------------+
| id                                   | name   | subnets                                               |
+--------------------------------------+--------+-------------------------------------------------------+
| 23797775-a630-4fea-b66f-7085ef7a47f8 | public | 176009b3-ae96-4e79-acb9-eddf51343cea 192.168.122.0/24 |
+--------------------------------------+--------+-------------------------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron subnet-list
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| id                                   | name       | cidr             | allocation_pools                                       |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| 176009b3-ae96-4e79-acb9-eddf51343cea | public-sub | 192.168.122.0/24 | {"start": "192.168.122.150", "end": "192.168.122.200"} |
+--------------------------------------+------------+------------------+--------------------------------------------------------+

So that's our public network. Now let's create a private network with 2 subnets (though, as the note below explains, it's not really worth using 2 subnets on the same network!).

*******************************************NOTE ON USING 2 SUBNETS IN THE SAME NETWORK************************************
When using 2 subnets in the same network you can't specify, with the nova python client, the subnet ID to which you want to attach the new instance.
You can only use net-id or port-id. A blueprint was submitted to add this functionality to the nova python CLI, but it never made it, because
the nova project wants to get rid of proxies to other services little by little, so they are not adding "more functionality":

https://blueprints.launchpad.net/nova/+spec/selecting-subnet-when-creating-vm

To specify the subnet you have to assign a static IP, for example:
# nova boot --flavor m1.disk --image cirros --security-group default --nic net-id=90820573-6d34-40a3-915b-272533e41096,v4-fixed-ip=10.20.20.6 cirros-backend

This is a big drawback for automatic deployments, Heat templates, etc.

So right now, as it stands, it is probably better to create a 1-to-1 relation between network and subnet for each net you need; besides this, there is no real obvious benefit I can see from using 2 subnets in the same network.
***********************************************************************************************************************


[root@controller1 ~(keystone_liquid1)]# neutron net-create  --provider:network_type vxlan internal
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 90820573-6d34-40a3-915b-272533e41096 |
| mtu                       | 0                                    |
| name                      | internal                             |
| provider:network_type     | vxlan                                |
| provider:physical_network |                                      |
| provider:segmentation_id  | 25                                   |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 22606b0b2b7f411abd08b23f8ebe03a1     |
+---------------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron subnet-create --name frontend --allocation-pool start=10.10.10.5,end=10.10.10.250 --dns-nameserver 8.8.8.8 --enable-dhcp --ip-version 4 internal 10.10.10.0/24
Created a new subnet:
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "10.10.10.5", "end": "10.10.10.250"} |
| cidr              | 10.10.10.0/24                                  |
| dns_nameservers   | 8.8.8.8                                        |
| enable_dhcp       | True                                           |
| gateway_ip        | 10.10.10.1                                     |
| host_routes       |                                                |
| id                | 67bd32b5-113d-43cb-a806-3b9a8f3e40ea           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | frontend                                       |
| network_id        | 90820573-6d34-40a3-915b-272533e41096           |
| subnetpool_id     |                                                |
| tenant_id         | 22606b0b2b7f411abd08b23f8ebe03a1               |
+-------------------+------------------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron subnet-create --name backend --allocation-pool start=10.20.20.5,end=10.20.20.250 --dns-nameserver 8.8.8.8 --enable-dhcp --ip-version 4 internal 10.20.20.0/24
Created a new subnet:
+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "10.20.20.5", "end": "10.20.20.250"} |
| cidr              | 10.20.20.0/24                                  |
| dns_nameservers   | 8.8.8.8                                        |
| enable_dhcp       | True                                           |
| gateway_ip        | 10.20.20.1                                     |
| host_routes       |                                                |
| id                | 318432be-cd5f-4a40-ada0-1056c00e0d88           |
| ip_version        | 4                                              |
| ipv6_address_mode |                                                |
| ipv6_ra_mode      |                                                |
| name              | backend                                        |
| network_id        | 90820573-6d34-40a3-915b-272533e41096           |
| subnetpool_id     |                                                |
| tenant_id         | 22606b0b2b7f411abd08b23f8ebe03a1               |
+-------------------+------------------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron subnet-list
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| id                                   | name       | cidr             | allocation_pools                                       |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| 176009b3-ae96-4e79-acb9-eddf51343cea | public-sub | 192.168.122.0/24 | {"start": "192.168.122.150", "end": "192.168.122.200"} |
| 67bd32b5-113d-43cb-a806-3b9a8f3e40ea | frontend   | 10.10.10.0/24    | {"start": "10.10.10.5", "end": "10.10.10.250"}         |
| 318432be-cd5f-4a40-ada0-1056c00e0d88 | backend    | 10.20.20.0/24    | {"start": "10.20.20.5", "end": "10.20.20.250"}         |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron net-list
+--------------------------------------+----------+-------------------------------------------------------+
| id                                   | name     | subnets                                               |
+--------------------------------------+----------+-------------------------------------------------------+
| 23797775-a630-4fea-b66f-7085ef7a47f8 | public   | 176009b3-ae96-4e79-acb9-eddf51343cea 192.168.122.0/24 |
| 90820573-6d34-40a3-915b-272533e41096 | internal | 67bd32b5-113d-43cb-a806-3b9a8f3e40ea 10.10.10.0/24    |
|                                      |          | 318432be-cd5f-4a40-ada0-1056c00e0d88 10.20.20.0/24    |
+--------------------------------------+----------+-------------------------------------------------------+

So now we need a router to connect our private network with the public network and the outside world:

[root@controller1 ~(keystone_liquid1)]# neutron router-create --distributed False --ha False liquid-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| distributed           | False                                |
| external_gateway_info |                                      |
| ha                    | False                                |
| id                    | 50e49712-1a58-4c54-a404-7e1f42683b6f |
| name                  | liquid-router                        |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 22606b0b2b7f411abd08b23f8ebe03a1     |
+-----------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron router-list
+--------------------------------------+---------------+-----------------------+-------------+-------+
| id                                   | name          | external_gateway_info | distributed | ha    |
+--------------------------------------+---------------+-----------------------+-------------+-------+
| 50e49712-1a58-4c54-a404-7e1f42683b6f | liquid-router | null                  | False       | False |
+--------------------------------------+---------------+-----------------------+-------------+-------+

We also add the router's external gateway:

[root@controller1 ~(keystone_liquid1)]# neutron router-gateway-set liquid-router public
Set gateway for router liquid-router
[root@controller1 ~(keystone_liquid1)]# neutron router-list
+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name          | external_gateway_info                                                                                                                                                                       | distributed | ha    |
+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| 50e49712-1a58-4c54-a404-7e1f42683b6f | liquid-router | {"network_id": "23797775-a630-4fea-b66f-7085ef7a47f8", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "176009b3-ae96-4e79-acb9-eddf51343cea", "ip_address": "192.168.122.150"}]} | False       | False |
+--------------------------------------+---------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

Looking at the router ports, we can see the external port was assigned the first IP of the public subnet:

[root@controller1 ~(keystone_liquid1)]# neutron router-port-list liquid-router
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                              |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+
| b89abff0-4c14-4fe8-8960-c19a9f572158 |      | fa:16:3e:a7:81:64 | {"subnet_id": "176009b3-ae96-4e79-acb9-eddf51343cea", "ip_address": "192.168.122.150"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------------+

Now we are going to add the frontend subnet interface to the router, so we can assign floating IPs to our web front ends:

[root@controller1 ~(keystone_liquid1)]# neutron subnet-list
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| id                                   | name       | cidr             | allocation_pools                                       |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
| 176009b3-ae96-4e79-acb9-eddf51343cea | public-sub | 192.168.122.0/24 | {"start": "192.168.122.150", "end": "192.168.122.200"} |
| 67bd32b5-113d-43cb-a806-3b9a8f3e40ea | frontend   | 10.10.10.0/24    | {"start": "10.10.10.5", "end": "10.10.10.250"}         |
| 318432be-cd5f-4a40-ada0-1056c00e0d88 | backend    | 10.20.20.0/24    | {"start": "10.20.20.5", "end": "10.20.20.250"}         |
+--------------------------------------+------------+------------------+--------------------------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron router-interface-add liquid-router 67bd32b5-113d-43cb-a806-3b9a8f3e40ea

Inside the router's network namespace (qrouter-<router-id>) we can now see both the external gateway port (qg-) and the new internal interface (qr-):

[root@controller1 ~(keystone_liquid1)]# ip netns exec qrouter-50e49712-1a58-4c54-a404-7e1f42683b6f ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
17: qg-b89abff0-4c:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:a7:81:64 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.150/24 brd 192.168.122.255 scope global qg-b89abff0-4c
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fea7:8164/64 scope link 
       valid_lft forever preferred_lft forever
18: qr-523932d4-25:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:e0:73:fd brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/24 brd 10.10.10.255 scope global qr-523932d4-25
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fee0:73fd/64 scope link 
       valid_lft forever preferred_lft forever
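
A quick sanity check at this point is to ping the external gateway (192.168.122.1) from inside the router namespace; if this works, the router's leg towards the public network is fine:

[root@controller1 ~(keystone_liquid1)]# ip netns exec qrouter-50e49712-1a58-4c54-a404-7e1f42683b6f ping -c 2 192.168.122.1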


The DHCP service is for the internal network, but since that network has 2 different subnets, tap4eee1b90-5a has one IP on each subnet and acts as the DHCP server for both:

[root@controller1 ~(keystone_liquid1)]# ip netns exec qdhcp-90820573-6d34-40a3-915b-272533e41096 ip a
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
16: tap4eee1b90-5a:  mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether fa:16:3e:bb:07:92 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.5/24 brd 10.10.10.255 scope global tap4eee1b90-5a
       valid_lft forever preferred_lft forever
    inet 10.20.20.5/24 brd 10.20.20.255 scope global tap4eee1b90-5a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:febb:792/64 scope link 
       valid_lft forever preferred_lft forever

If we look at the dnsmasq service, we can see it uses dnsmasq tags to differentiate between both subnets:

[root@controller1 ~(keystone_liquid1)]# ps -ef | grep -i dnsm
nobody   11928     1  0 Jan14 ?        00:00:00 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap4eee1b90-5a --except-interface=lo --pid-file=/var/lib/neutron/dhcp/90820573-6d34-40a3-915b-272533e41096/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/90820573-6d34-40a3-915b-272533e41096/host --addn-hosts=/var/lib/neutron/dhcp/90820573-6d34-40a3-915b-272533e41096/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/90820573-6d34-40a3-915b-272533e41096/opts --dhcp-leasefile=/var/lib/neutron/dhcp/90820573-6d34-40a3-915b-272533e41096/leases --dhcp-range=set:tag0,10.10.10.0,static,86400s --dhcp-range=set:tag1,10.20.20.0,static,86400s --dhcp-lease-max=512 --conf-file=/etc/neutron/dnsmasq-neutron.conf --domain=openstacklocal

[root@controller1 ~(keystone_liquid1)]# cat /var/lib/neutron/dhcp/90820573-6d34-40a3-915b-272533e41096/opts
tag:tag0,option:dns-server,8.8.8.8
tag:tag0,option:classless-static-route,10.20.20.0/24,0.0.0.0,0.0.0.0/0,10.10.10.1
tag:tag0,249,10.20.20.0/24,0.0.0.0,0.0.0.0/0,10.10.10.1
tag:tag0,option:router,10.10.10.1
tag:tag1,option:dns-server,8.8.8.8
tag:tag1,option:classless-static-route,10.10.10.0/24,0.0.0.0,0.0.0.0/0,10.20.20.1
tag:tag1,249,10.10.10.0/24,0.0.0.0,0.0.0.0/0,10.20.20.1
tag:tag1,option:router,10.20.20.1
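
The static leases dnsmasq hands out live next to the opts file referenced in the process arguments above; each Neutron port should show up there as a MAC/hostname/IP entry:

[root@controller1 ~(keystone_liquid1)]# cat /var/lib/neutron/dhcp/90820573-6d34-40a3-915b-272533e41096/host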

Both the router and DHCP services are joined by VLAN tagging in Open vSwitch, under the br-int bridge:

    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap4eee1b90-5a"
            tag: 3
            Interface "tap4eee1b90-5a"
                type: internal
        Port "qr-523932d4-25"
            tag: 3
            Interface "qr-523932d4-25"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.4.0"

The router and DHCP services/agents reach the instances on the compute nodes via a VXLAN overlay:

[root@controller1 ~(keystone_liquid1)]# ovs-vsctl show
57fdf2c1-0e2a-4fd8-a8e9-cb8a245e8621
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a0a0a29"
            Interface "vxlan-0a0a0a29"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.10.10.31", out_key=flow, remote_ip="10.10.10.41"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}

And on the compute node:

[root@compute1 ~]# ovs-vsctl show
57fdf2c1-0e2a-4fd8-a8e9-cb8a245e8621
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a0a0a1f"
            Interface "vxlan-0a0a0a1f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.10.10.41", out_key=flow, remote_ip="10.10.10.31"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"


That's just to give a little idea of what is happening on the OS, but there is much more going on: iptables, SNAT, etc.

We are going to create 2 instances, one on each subnet (frontend and backend):

[root@controller1 ~(keystone_liquid1)]# nova boot --flavor m1.disk --image cirros --security-groups default --nic net-id=90820573-6d34-40a3-915b-272533e41096 cirros-frontend

To create servers on the backend subnet we need to specify a fixed IP, because the nova python CLI has no option to specify the subnet we want to use:

[root@controller1 ~(keystone_liquid1)]# nova boot --flavor m1.disk --image cirros --security-group default --nic net-id=90820573-6d34-40a3-915b-272533e41096,v4-fixed-ip=10.20.20.6 cirros-backend
[root@controller1 ~(keystone_liquid1)]# nova list
+--------------------------------------+-----------------+--------+------------+-------------+---------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks            |
+--------------------------------------+-----------------+--------+------------+-------------+---------------------+
| 96c291a1-1fb0-4959-a585-0fc71e63b454 | cirros-backend  | ACTIVE | -          | Running     | internal=10.20.20.6 |
| da08a1ef-b09a-4eed-8af5-22efea92678a | cirros-frontend | ACTIVE | -          | Running     | internal=10.10.10.7 |
+--------------------------------------+-----------------+--------+------------+-------------+---------------------+
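
An alternative to hard-coding a fixed IP is to pre-create a port on the backend subnet with neutron and boot from it. A rough sketch, using the backend subnet ID (which also appears in the neutron port-list output further down) and a hypothetical instance name:

[root@controller1 ~(keystone_liquid1)]# neutron port-create internal --fixed-ip subnet_id=318432be-cd5f-4a40-ada0-1056c00e0d88
[root@controller1 ~(keystone_liquid1)]# nova boot --flavor m1.disk --image cirros --security-groups default --nic port-id=<port-id-from-previous-command> cirros-backend2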

We also have to create or modify the default security group. Ideally you should create one security group per type of server (web, database, etc.), but here we are just going to modify the default security group to allow SSH; ping was already enabled:

[root@controller1 ~(keystone_liquid1)]# nova secgroup-delete-rule default tcp 22 22 192.168.99.0/24
+-------------+-----------+---------+-----------------+--------------+
| IP Protocol | From Port | To Port | IP Range        | Source Group |
+-------------+-----------+---------+-----------------+--------------+
| tcp         | 22        | 22      | 192.168.99.0/24 |              |
+-------------+-----------+---------+-----------------+--------------+
[root@controller1 ~(keystone_liquid1)]# nova secgroup-list-rules default
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
|             |           |         |           | default      |
|             |           |         |           | default      |
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
[root@controller1 ~(keystone_liquid1)]# nova secgroup-add-rule default tcp 22 22 192.168.122.0/24
+-------------+-----------+---------+------------------+--------------+
| IP Protocol | From Port | To Port | IP Range         | Source Group |
+-------------+-----------+---------+------------------+--------------+
| tcp         | 22        | 22      | 192.168.122.0/24 |              |
+-------------+-----------+---------+------------------+--------------+
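
The same rule can also be managed with the neutron client instead of the nova secgroup commands; a roughly equivalent call would be:

[root@controller1 ~(keystone_liquid1)]# neutron security-group-rule-create --direction ingress --protocol tcp --port-range-min 22 --port-range-max 22 --remote-ip-prefix 192.168.122.0/24 default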

We are now going to request a floating IP from the public network pool and assign it to an instance.

Until a floating IP is associated with an instance, its port stays in the DOWN state:

[root@controller1 ~(keystone_liquid1)]# neutron port-list | grep 192.168.122.150
| b89abff0-4c14-4fe8-8960-c19a9f572158 |      | fa:16:3e:a7:81:64 | {"subnet_id": "176009b3-ae96-4e79-acb9-eddf51343cea", "ip_address": "192.168.122.150"} |
[root@controller1 ~(keystone_liquid1)]# neutron port-show b89abff0-4c14-4fe8-8960-c19a9f572158 | grep -i status
| status                | DOWN                                                                                   |

IP requested:

[root@controller1 ~(keystone_liquid1)]# nova floating-ip-create public
+--------------------------------------+-----------------+-----------+----------+--------+
| Id                                   | IP              | Server Id | Fixed IP | Pool   |
+--------------------------------------+-----------------+-----------+----------+--------+
| ba1dc27d-cfa7-48e5-906a-e8a3578735b7 | 192.168.122.151 | -         | -        | public |
+--------------------------------------+-----------------+-----------+----------+--------+

Now we associate it with the instance:

[root@controller1 ~(keystone_liquid1)]# nova floating-ip-associate cirros-frontend 192.168.122.151
[root@controller1 ~(keystone_liquid1)]# nova list
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks                             |
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------------+
| 96c291a1-1fb0-4959-a585-0fc71e63b454 | cirros-backend  | ACTIVE | -          | Running     | internal=10.20.20.6                  |
| da08a1ef-b09a-4eed-8af5-22efea92678a | cirros-frontend | ACTIVE | -          | Running     | internal=10.10.10.7, 192.168.122.151 |
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# ping 192.168.122.151
PING 192.168.122.151 (192.168.122.151) 56(84) bytes of data.
64 bytes from 192.168.122.151: icmp_seq=1 ttl=63 time=2.26 ms
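
With the SSH rule in place we should also be able to log in over the floating IP from a host in the 192.168.122.0/24 range we just allowed (the cirros image ships with a default 'cirros' user):

[root@controller1 ~(keystone_liquid1)]# ssh cirros@192.168.122.151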

We can also attach a floating IP using neutron:

[root@controller1 ~(keystone_liquid1)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| ba1dc27d-cfa7-48e5-906a-e8a3578735b7 | 10.20.20.7       | 192.168.122.151     | eb7408c2-b6a2-4311-a55c-d13f6d90b7f1 |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron floatingip-create public
Created a new floatingip:
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| fixed_ip_address    |                                      |
| floating_ip_address | 192.168.122.152                      |
| floating_network_id | 23797775-a630-4fea-b66f-7085ef7a47f8 |
| id                  | 80ff9c2f-437c-4474-8d33-36546aea0ad0 |
| port_id             |                                      |
| router_id           |                                      |
| status              | DOWN                                 |
| tenant_id           | 22606b0b2b7f411abd08b23f8ebe03a1     |
+---------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron port-list
+--------------------------------------+-------------+-------------------+----------------------------------------------------------------------------------------+
| id                                   | name        | mac_address       | fixed_ips                                                                              |
+--------------------------------------+-------------+-------------------+----------------------------------------------------------------------------------------+
| 068650f8-1ab8-4728-b8ed-748bd7ef3eac |             | fa:16:3e:21:5e:64 | {"subnet_id": "176009b3-ae96-4e79-acb9-eddf51343cea", "ip_address": "192.168.122.152"} |
| 37dba003-2b84-4560-be09-b0f331b3cc41 |             | fa:16:3e:63:43:81 | {"subnet_id": "318432be-cd5f-4a40-ada0-1056c00e0d88", "ip_address": "10.20.20.6"}      |
| 4eee1b90-5a54-43d7-9c68-ba006b89d96a |             | fa:16:3e:bb:07:92 | {"subnet_id": "67bd32b5-113d-43cb-a806-3b9a8f3e40ea", "ip_address": "10.10.10.5"}      |
|                                      |             |                   | {"subnet_id": "318432be-cd5f-4a40-ada0-1056c00e0d88", "ip_address": "10.20.20.5"}      |
| 523932d4-25a5-4e5e-9756-abb0cf99a78d |             | fa:16:3e:e0:73:fd | {"subnet_id": "67bd32b5-113d-43cb-a806-3b9a8f3e40ea", "ip_address": "10.10.10.1"}      |
| 54aee187-7ddc-42ee-9f88-0c4aadc6ff37 |             | fa:16:3e:0c:a4:65 | {"subnet_id": "318432be-cd5f-4a40-ada0-1056c00e0d88", "ip_address": "10.20.20.1"}      |
| 73139063-7137-4aae-a657-974455bd5bef |             | fa:16:3e:fd:e1:f1 | {"subnet_id": "176009b3-ae96-4e79-acb9-eddf51343cea", "ip_address": "192.168.122.151"} |
| b89abff0-4c14-4fe8-8960-c19a9f572158 |             | fa:16:3e:a7:81:64 | {"subnet_id": "176009b3-ae96-4e79-acb9-eddf51343cea", "ip_address": "192.168.122.150"} |
| eb7408c2-b6a2-4311-a55c-d13f6d90b7f1 | nic-fr-cirr | fa:16:3e:dd:ac:f0 | {"subnet_id": "318432be-cd5f-4a40-ada0-1056c00e0d88", "ip_address": "10.20.20.7"}      |
+--------------------------------------+-------------+-------------------+----------------------------------------------------------------------------------------+
[root@controller1 ~(keystone_liquid1)]# neutron floatingip-associate 80ff9c2f-437c-4474-8d33-36546aea0ad0 37dba003-2b84-4560-be09-b0f331b3cc41
Associated floating IP 80ff9c2f-437c-4474-8d33-36546aea0ad0
[root@controller1 ~(keystone_liquid1)]# neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id                                   | fixed_ip_address | floating_ip_address | port_id                              |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 80ff9c2f-437c-4474-8d33-36546aea0ad0 | 10.20.20.6       | 192.168.122.152     | 37dba003-2b84-4560-be09-b0f331b3cc41 |
| ba1dc27d-cfa7-48e5-906a-e8a3578735b7 | 10.20.20.7       | 192.168.122.151     | eb7408c2-b6a2-4311-a55c-d13f6d90b7f1 |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@controller1 ~(keystone_liquid1)]# nova list
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------------+
| ID                                   | Name            | Status | Task State | Power State | Networks                             |
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------------+
| 96c291a1-1fb0-4959-a585-0fc71e63b454 | cirros-backend  | ACTIVE | -          | Running     | internal=10.20.20.6, 192.168.122.152 |
| 97fa2734-d050-48a4-97e2-9e466a2166bb | cirros-frontend | ACTIVE | -          | Running     | internal=10.20.20.7, 192.168.122.151 |
+--------------------------------------+-----------------+--------+------------+-------------+--------------------------------------+

It's just another way of doing the same thing. As you can see, we now have access to our instances from the outside.
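
If you later need to reclaim a floating IP, the reverse operations are a disassociate followed by a delete, for example with the one we just attached:

[root@controller1 ~(keystone_liquid1)]# neutron floatingip-disassociate 80ff9c2f-437c-4474-8d33-36546aea0ad0
[root@controller1 ~(keystone_liquid1)]# neutron floatingip-delete 80ff9c2f-437c-4474-8d33-36546aea0ad0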

Here we have covered basic network functionality.

6.    Add additional compute nodes

We have installed a new CentOS 7 system and called it compute2; we are going to configure it as an additional compute node in our OpenStack installation:

[root@controller1 ~(keystone_admin)]# ping compute2
PING compute2 (10.10.10.42) 56(84) bytes of data.
64 bytes from compute2 (10.10.10.42): icmp_seq=1 ttl=64 time=0.222 ms

We need to modify the packstack answer file and add the compute2 IP address:

[root@controller1 ~(keystone_admin)]# cat openstack-config.pack | grep -v ^# | grep -iE '(compute_hosts|PROVISION_DEMO)'
CONFIG_COMPUTE_HOSTS=10.10.10.41,10.10.10.42
CONFIG_PROVISION_DEMO=n
CONFIG_PROVISION_DEMO_FLOATRANGE=192.168.122.100/24
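
A simple way to make that edit is with sed; packstack also supports an EXCLUDE_SERVERS parameter, which (if it is present in your answer file) can be set to the already-deployed hosts so they are not reconfigured. A sketch, using the IPs from this setup:

[root@controller1 ~(keystone_admin)]# sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=10.10.10.41,10.10.10.42/' openstack-config.pack
[root@controller1 ~(keystone_admin)]# sed -i 's/^EXCLUDE_SERVERS=.*/EXCLUDE_SERVERS=10.10.10.31,10.10.10.41/' openstack-config.pack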

We run packstack again with the same answer file and enter the root password for the compute2 node:

[root@controller1 ~(keystone_admin)]# packstack --answer-file=openstack-config.pack
Welcome to the Packstack setup utility

The installation log file is available at: /var/tmp/packstack/20160127-173725-NeJc3e/openstack-setup.log

Installing:
Clean Up                                             [ DONE ]
Discovering ip protocol version                      [ DONE ]
root@10.10.10.42's password: 

Once it finishes, compute2 has been added to our OpenStack cloud:

[root@controller1-ext ~(keystone_admin)]# nova hypervisor-list
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | compute1            | up    | enabled |
| 2  | compute2            | up    | enabled |
+----+---------------------+-------+---------+
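
It is also worth confirming that the nova-compute service on the new node is up and enabled:

[root@controller1-ext ~(keystone_admin)]# nova service-list | grep compute2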

Also, very importantly, we can see that the iptables rules have been added on the controller node:

[root@controller1-ext ~(keystone_admin)]# iptables -nL | grep 10.10.10.42 
ACCEPT     tcp  --  10.10.10.42          0.0.0.0/0            multiport dports 5671,5672 /* 001 amqp incoming amqp_10.10.10.42 */
ACCEPT     tcp  --  10.10.10.42          0.0.0.0/0            multiport dports 3260 /* 001 cinder incoming cinder_10.10.10.42 */
ACCEPT     tcp  --  10.10.10.42          0.0.0.0/0            multiport dports 3306 /* 001 mariadb incoming mariadb_10.10.10.42 */
ACCEPT     udp  --  10.10.10.42          0.0.0.0/0            multiport dports 4789 /* 001 neutron tunnel port incoming neutron_tunnel_10.10.10.31_10.10.10.42 */
ACCEPT     tcp  --  10.10.10.42          0.0.0.0/0            multiport dports 6000,6001,6002,873 /* 001 swift storage and rsync incoming swift_storage_and_rsync_10.10.10.42 */

It also adds the rules on the compute1 node:

[root@compute1 ~]# iptables -nL | grep 10.10.10.42
ACCEPT     udp  --  10.10.10.42          0.0.0.0/0            multiport dports 4789 /* 001 neutron tunnel port incoming neutron_tunnel_10.10.10.41_10.10.10.42 */
ACCEPT     tcp  --  10.10.10.42          0.0.0.0/0            multiport dports 16509,49152:49215 /* 001 nova qemu migration incoming nova_qemu_migration_10.10.10.41_10.10.10.42 */

Now we launch some instances, some of which land on the new compute node:

[root@controller1-ext ~(keystone_admin)]# nova boot  --max-count 7 --flavor m1.tiny --image cirros  --security-groups default --nic net-id=56c50a4f-880e-4f10-9cb0-1e5d5dc05345 cirro^C
[root@controller1-ext ~(keystone_admin)]# nova hypervisor-servers compute2
+--------------------------------------+-------------------+---------------+---------------------+
| ID                                   | Name              | Hypervisor ID | Hypervisor Hostname |
+--------------------------------------+-------------------+---------------+---------------------+
| bd095621-46b6-4c65-9679-d8fb58c00755 | instance-00000004 | 2             | compute2            |
| 6439e288-fb61-4637-8da1-5585f12a0e50 | instance-00000006 | 2             | compute2            |
| f06c845f-39f7-47d1-9c71-5952323eb7a1 | instance-00000008 | 2             | compute2            |
| dccf48de-b1da-4e0f-b161-f2423a714c6a | instance-0000000a | 2             | compute2            |
+--------------------------------------+-------------------+---------------+---------------------+
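
The scheduler spread the instances across both hypervisors on its own. If you want to force an instance onto the new node, an admin can pass an availability zone with a host suffix (the instance name here is just an example):

[root@controller1-ext ~(keystone_admin)]# nova boot --availability-zone nova:compute2 --flavor m1.tiny --image cirros --security-groups default --nic net-id=56c50a4f-880e-4f10-9cb0-1e5d5dc05345 cirros-on-compute2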

We add a floating IP to an instance that is running on the new compute node and check connectivity:

[root@controller1-ext ~(keystone_admin)]# nova list | grep cirros4-6
| 1520b03c-c32c-4376-ab41-4a207dffefbc | cirros4-6 | ACTIVE | -          | Running     | private=10.0.0.7, 192.168.122.4 |
[root@controller1-ext ~(keystone_admin)]# ping 192.168.122.4
PING 192.168.122.4 (192.168.122.4) 56(84) bytes of data.
64 bytes from 192.168.122.4: icmp_seq=1 ttl=63 time=1.11 ms
^C
--- 192.168.122.4 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.116/1.116/1.116/0.000 ms


So we are finished adding a compute node; let's continue with Swift.


7.    Manage Swift storage

Comments

Thanks for the notes, they are very valuable. What version of OpenStack did you take the exam on?

RHOSP 6

What do you recommend for the exam: configure the network with a static IP first, or do the installation first and then configure the network and bridge with a static IP (since at the start it uses DHCP)?

Also, do we need to configure network tunnelling with eth1, or is that not needed unless asked?
