
Let’s play with Cinder and RBD (part 2)


The idea is to help you get familiar with what cinder can do, how rbd makes it happen, and what it looks like on the backend.

Create a volume from a snapshot
Create a volume from another volume

Create volume

Let’s create a logical volume:

$ cinder create <size> --name volume1 --description cinder-volume
+--------------------------------+------------------------------------------------+
| Property                       | Value                                          |
+--------------------------------+------------------------------------------------+
| attachments                    | []                                             |
| availability_zone              | nova                                           |
| bootable                       | false                                          |
| consistencygroup_id            | None                                           |
| created_at                     | 2016-07-29T00:21:49.000000                     |
| description                    | None                                           |
| encrypted                      | False                                          |
| id                             | 1b681f9f-81f6-4965-ad89-28ffb10c1ede           |
| metadata                       | {}                                             |
| migration_status               | None                                           |
| multiattach                    | False                                          |
| name                           | volume1                                        |
| os-vol-host-attr:host          | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None                                           |
| os-vol-mig-status-attr:name_id | None                                           |
| os-vol-tenant-attr:tenant_id   | 80463e7d9d8847169acd70b156ac3b61               |
| replication_status             | disabled                                       |
| size                           | 1                                              |
| snapshot_id                    | None                                           |
| source_volid                   | None                                           |
| status                         | available                                      |
| updated_at                     | 2016-07-29T00:21:52.000000                     |
| user_id                        | 4180c9a6469b480cbbf0c5e79dc478fb               |
| volume_type                    | ceph                                           |
+--------------------------------+------------------------------------------------+

Positional arguments:

<size> : Size of the volume, in GiB. (Required unless snapshot-id/source-volid is specified.) Check cinder help create for more info.

Let’s verify with sudo rbd ls volumes to check what rbd has:

vagrant@vagrant-ubuntu-trusty-64:~/devstack$ sudo rbd ls volumes
bar
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede

bar is an image created directly with rbd; you can see that all the cinder volumes are named ‘ volume-<uuid> ’.
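As a sketch (this is an illustration of the naming convention above, not the actual driver code), the mapping from a cinder volume ID to its backing rbd image name is just a prefix:

```python
import uuid

def rbd_image_name(volume_id: str) -> str:
    """Name the ceph backend gives the rbd image backing a cinder volume."""
    # uuid.UUID() validates the ID and normalizes it to lowercase hyphenated form
    return f"volume-{uuid.UUID(volume_id)}"

print(rbd_image_name("1b681f9f-81f6-4965-ad89-28ffb10c1ede"))
# volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede
```

This is why you can always tie an image in the volumes pool back to a cinder volume by stripping the volume- prefix.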

Create snapshot

So for rbd everything is thinly provisioned, and snapshots and clones use copy-on-write .

When you have a copy-on-write ( cow ) snapshot, it has a dependency on the parent it was created from. The only unique blocks in the snapshot or clone are the blocks that have been modified (and thus copied). That makes snapshot creation very, very fast (you only need to update metadata), and data doesn’t actually move or copy anywhere … instead it’s copied on demand, or on write !

This dependency means that you cannot, for example, delete a volume that has snapshots, because that would make those snapshots unusable, like pulling the rug out from under them.
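The copy-on-write behavior can be sketched in a few lines. This is a toy model of the semantics, assuming a simple block map and a parent link; it is not how Ceph stores data internally:

```python
class Image:
    """Toy COW image: stores only blocks written to it; reads fall through to the parent."""
    def __init__(self, parent=None):
        self.blocks = {}      # block index -> data, only privately written blocks
        self.parent = parent  # the COW dependency on the parent image

    def write(self, idx, data):
        self.blocks[idx] = data          # the copy happens on write, not at snapshot time

    def read(self, idx):
        if idx in self.blocks:
            return self.blocks[idx]      # a modified (copied) block
        if self.parent is not None:
            return self.parent.read(idx) # unmodified blocks come from the parent
        return b"\0"                     # thin provisioning: unwritten blocks read as zeros

base = Image()
base.write(0, b"original")

clone = Image(parent=base)   # "snapshot/clone": instant, just a metadata (parent) link
assert clone.read(0) == b"original"   # no data was copied yet

clone.write(0, b"changed")            # only now does the clone get a private copy
assert clone.read(0) == b"changed"
assert base.read(0) == b"original"    # the parent is untouched
```

Deleting `base` here would leave `clone` unable to resolve its unmodified blocks, which is exactly the dependency described above.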

$ cinder snapshot-create <volume ID or name> --name snap1
$ cinder snapshot-create volume1 --name snap1
$ sudo rbd ls -l volumes
NAME                                                         SIZE  PARENT  FMT  PROT  LOCK
bar                                                          1024M         1
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede                  1024M         2
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e9..d  1024M         2    yes

You can see that the snapshot keeps the same volume ID, with an @snapshot-<snapshot ID> suffix.

Create volume from snapshot

Get the snapshot ID from cinder snapshot-list :

$ cinder create --snapshot-id 6e93e928-2558-4f12-a9ab-12d25cd72dbd --name v-from-s
+--------------------------------+------------------------------------------------+
| Property                       | Value                                          |
+--------------------------------+------------------------------------------------+
| attachments                    | []                                             |
| availability_zone              | nova                                           |
| bootable                       | false                                          |
| consistencygroup_id            | None                                           |
| created_at                     | 2016-07-29T00:40:00.000000                     |
| description                    | None                                           |
| encrypted                      | False                                          |
| id                             | 18966249-f68b-4ed3-901e-7447a25dad03           |
| metadata                       | {}                                             |
| migration_status               | None                                           |
| multiattach                    | False                                          |
| name                           | v-from-s                                       |
| os-vol-host-attr:host          | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None                                           |
| os-vol-mig-status-attr:name_id | None                                           |
| os-vol-tenant-attr:tenant_id   | 80463e7d9d8847169acd70b156ac3b61               |
| replication_status             | disabled                                       |
| size                           | 1                                              |
| snapshot_id                    | 6e93e928-2558-4f12-a9ab-12d25cd72dbd           |
| source_volid                   | None                                           |
| status                         | creating                                       |
| updated_at                     | 2016-07-29T00:40:00.000000                     |
| user_id                        | 4180c9a6469b480cbbf0c5e79dc478fb               |
| volume_type                    | ceph                                           |
+--------------------------------+------------------------------------------------+

Create a volume from another volume

Since rbd can only clone from a snapshot, cloning from a volume means cinder must first create a snapshot of the source volume behind the scenes.

$ cinder create --source-volid 1b681f9f-81f6-4965-ad89-28ffb10c1ede --name v-from-v
+--------------------------------+------------------------------------------------+
| Property                       | Value                                          |
+--------------------------------+------------------------------------------------+
| attachments                    | []                                             |
| availability_zone              | nova                                           |
| bootable                       | false                                          |
| consistencygroup_id            | None                                           |
| created_at                     | 2016-07-29T00:44:47.000000                     |
| description                    | None                                           |
| encrypted                      | False                                          |
| id                             | 9f79be73-4df6-4ab1-ab70-02c91df96439           |
| metadata                       | {}                                             |
| migration_status               | None                                           |
| multiattach                    | False                                          |
| name                           | v-from-v                                       |
| os-vol-host-attr:host          | vagrant-ubuntu-trusty-64.localdomain@ceph#ceph |
| os-vol-mig-status-attr:migstat | None                                           |
| os-vol-mig-status-attr:name_id | None                                           |
| os-vol-tenant-attr:tenant_id   | 80463e7d9d8847169acd70b156ac3b61               |
| replication_status             | disabled                                       |
| size                           | 1                                              |
| snapshot_id                    | None                                           |
| source_volid                   | 1b681f9f-81f6-4965-ad89-28ffb10c1ede           |
| status                         | creating                                       |
| updated_at                     | 2016-07-29T00:44:47.000000                     |
| user_id                        | 4180c9a6469b480cbbf0c5e79dc478fb               |
| volume_type                    | ceph                                           |
+--------------------------------+------------------------------------------------+

$ sudo rbd ls -l volumes
NAME                                                                                               SIZE  PARENT                                                                                             FMT  PROT  LOCK
bar                                                                                                1024M                                                                                                    1
volume-18966249-f68b-4ed3-901e-7447a25dad03                                                        1024M volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e93e928-2558-4f12-a9ab-12d25cd72dbd 2
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede                                                        1024M                                                                                                    2
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@snapshot-6e93e928-2558-4f12-a9ab-12d25cd72dbd          1024M                                                                                                    2    yes
volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@volume-9f79be73-4df6-4ab1-ab70-02c91df96439.clone_snap 1024M                                                                                                    2    yes
volume-9f79be73-4df6-4ab1-ab70-02c91df96439                                                        1024M volumes/volume-1b681f9f-81f6-4965-ad89-28ffb10c1ede@volume-9f79be73-4df6-4ab1-ab70-02c91df96439.clone_snap 2

In this case the source volume gains a hidden snapshot named volume-<source ID>@volume-<clone ID>.clone_snap , and the new volume is a clone parented on it.
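To make the naming concrete, here is a small sketch (using the two volume IDs from the output above) of how the clone_snap name and the clone’s PARENT column are composed:

```python
# IDs taken from the walkthrough above
src = "1b681f9f-81f6-4965-ad89-28ffb10c1ede"    # source volume (volume1)
clone = "9f79be73-4df6-4ab1-ab70-02c91df96439"  # new volume (v-from-v)

# the protected, hidden snapshot created on the source volume
clone_snap = f"volume-{src}@volume-{clone}.clone_snap"

# the clone's PARENT column: <pool>/<image>@<snapshot>
parent_of_clone = f"volumes/{clone_snap}"

print(clone_snap)
print(parent_of_clone)
```

The @ separates the image name from the snapshot name, and the pool prefix ( volumes/ ) only appears when rbd reports a parent in another image’s listing.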

