Bare-metal servers (ironic): attaching Ceph cloud disks via ceph-iscsi-gateway
Attaching Ceph cloud disks to bare-metal servers
1300042631@qq.com
2022/04/29
Table of contents
- Attaching Ceph cloud disks to bare-metal servers
- 1300042631@qq.com
- 2022/04/29
- Background
- I. Basic environment information
- 1. Version dependencies
- 2. Cluster information
- II. Environment deployment
- 1. Deploy a Ceph cluster and confirm its health is OK
- 2. Customize /etc/ceph/ceph.conf
- 3. Deploy ceph-iscsi
- 4. Configure and test ceph-iscsi
- 5. iSCSI initiator for Linux
- III. Integrating ceph-iscsi with OpenStack Cinder
- 1. Modify the cinder-volume configuration file
- 2. Modify the cinder-volume code and add the custom RBDISCSIDriver
- IV. Modify the ironic code: custom attach_volume and detach_volume
- 1. Ironic integration
- V. Modify the nova code: customize the ironic driver
- 1. Modify the driver
- VI. Modifications to rbd_iscsi_client
- 1. Install rbd_iscsi_client
- 2. Modify rbd_iscsi_client
- Modified code
- VII. Attach test (the bare-metal image must have the dependencies installed)
- VIII. fio tests
- IX. Summary
- 1. Bare-metal servers support attaching and detaching cloud disks
- 2. Cloud disks can be extended and shrunk
- 3. Both VMs and bare-metal servers can attach ceph-iscsi disks
- 4. Measured performance meets expectations
Background
Our OpenStack deployment has grown to a very large scale, and the bare-metal server feature has gradually gone live in production, with in-house development on the ironic component and the surrounding ironic-inspector services. By default, however, a bare-metal server uses local disks, i.e. directly attached physical disks, for both the system disk and the data disks. Performance is stable, but it also causes some pain, because bare-metal servers cannot scale as elastically as KVM instances: a VM's CPU and memory flavor can be changed almost in real time, while a bare-metal server currently cannot.
- For example, a business team that requested a bare-metal server a year ago suddenly finds the data disks too small and wants to change the configuration. That is awkward, because it touches ordering and billing: different data disk sizes mean different prices and RAID layouts. Changing, say, 2 x 4T data disks to 2 x 8T requires manual work to update the order, the flavor, and the properties field of the nodes table in the ironic database. This wastes effort and carries risk: ironic manages nodes mainly through ipmitool, and a careless change to the wrong field can affect the bare-metal server itself.
- In other cases, provisioning keeps failing after a bare-metal server is racked, or the node reports "boot failed", and after a round of troubleshooting the cause turns out to be that the system disk has no RAID configured. Anyone familiar with bare metal knows that the seemingly sophisticated automatic provisioning is really just PXE / DHCP / TFTP / image download / cloud-init. When provisioning fails because the boot disk has no RAID, troubleshooting is very time-consuming, since in production a bare-metal server takes roughly twenty minutes to power cycle.
- For these two important scenarios, bare-metal servers therefore also need some elasticity, limited here to the data disks and the system disk; CPU core count and memory are out of scope.
- Since the VMs already sit on Ceph block storage, the plan is to use ceph-iscsi-gateway and customize ironic so that bare-metal servers can attach cloud disks. The details and configuration are described below; cloud disk performance turns out to be quite respectable.
I. Basic environment information
1. Version dependencies
Component | Version used | Version requirement |
---|---|---|
OS | CentOS Linux release 7.6.1810 (Core) | Red Hat Enterprise Linux/CentOS 7.5 (or newer);Linux kernel v4.16 (or newer) |
Ceph | Ceph Nautilus Stable (14.2.x) | Ceph Luminous Stable (12.2.x) or newer |
Ceph-Iscsi-Gateway | ceph-iscsi-3.2 | ceph-iscsi-3.2 or newer package |
Targetcli | targetcli-2.1.fb47 | targetcli-2.1.fb47 or newer package |
Python-rtslib | python-rtslib-2.1.fb68 | python-rtslib-2.1.fb68 or newer package |
Tcmu-runner | tcmu-runner-1.5.2 | tcmu-runner-1.4.0 or newer package |
2. Cluster information:
IP | Role | OS |
---|---|---|
192.168.9.101 | ceph-node1 | CentOS Linux release 7.6.1810 (Core) |
192.168.10.135 | ceph-node2 | CentOS Linux release 7.6.1810 (Core) |
192.168.9.190 | ceph-node3 | CentOS Linux release 7.6.1810 (Core) |
192.31.162.123 | rg2-test-control001.ostack.hfb3.iflytek.net | CentOS Linux release 7.6.1810 (Core) |
192.31.162.124 | rg2-test-control002.ostack.hfb3.iflytek.net | CentOS Linux release 7.6.1810 (Core) |
192.31.162.125 | rg2-test-control003.ostack.hfb3.iflytek.net | CentOS Linux release 7.6.1810 (Core) |
II. Environment deployment
The iSCSI gateway presents a highly available (HA) iSCSI target that exports RADOS Block Device (RBD) images as SCSI disks.
The iSCSI protocol allows clients (initiators) to send SCSI commands to storage devices (targets) over a TCP/IP network,
so that clients without native Ceph client support can still access Ceph block storage.
This includes Microsoft Windows and even BIOS.
Each iSCSI gateway uses the Linux IO target kernel subsystem (LIO) to provide the iSCSI protocol support.
LIO in turn uses userspace passthrough (TCMU) to interact with Ceph's librbd library
and expose RBD images to iSCSI clients.
(Figure: ceph-iscsi gateway architecture, ceph.png)
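To make the data path concrete: once an initiator has logged in, the exported RBD images show up as ordinary SCSI disks whose vendor/product strings are LIO-ORG / TCMU device, the same strings matched later in /etc/multipath.conf. The sketch below is a small check you can run on an initiator host; it only assumes a Linux host that has already logged in to the gateway.
import glob
import os

def lio_tcmu_disks():
    """List SCSI disks whose vendor string marks them as LIO/TCMU-backed LUNs."""
    disks = []
    for vendor_path in glob.glob('/sys/block/sd*/device/vendor'):
        device_dir = os.path.dirname(vendor_path)
        with open(vendor_path) as f:
            vendor = f.read().strip()
        with open(os.path.join(device_dir, 'model')) as f:
            model = f.read().strip()
        if vendor == 'LIO-ORG':
            name = os.path.basename(os.path.dirname(device_dir))
            disks.append((name, vendor, model))
    return disks

if __name__ == '__main__':
    for name, vendor, model in lio_tcmu_disks():
        print('%s: %s %s' % (name, vendor, model))   # e.g. sdb: LIO-ORG TCMU device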
1. Deploy a Ceph cluster and confirm its health is OK
# Current state of the test cluster
[root@ceph-node1 ~]# ceph -s
cluster:
id: c0df9eb6-5d23-4f14-8136-e2351fa215f7
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 28h)
mgr: ceph-node1(active, since 2w)
mds: cephfs-demo:1 {0=ceph-node2=up:active} 2 up:standby
osd: 15 osds: 15 up (since 2w), 15 in (since 2w)
rgw: 1 daemon active (ceph-node1)
task status:
data:
pools: 13 pools, 544 pgs
objects: 32.73k objects, 126 GiB
usage: 395 GiB used, 1.1 TiB / 1.4 TiB avail
pgs: 544 active+clean
io:
client: 1.5 KiB/s rd, 1 op/s rd, 0 op/s wr
[root@ceph-node1 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 1.44571 root default
-3 0.46884 host ceph-node1
0 hdd 0.09769 osd.0 up 0.81003 1.00000
3 hdd 0.09769 osd.3 up 0.81003 1.00000
6 hdd 0.09769 osd.6 up 1.00000 1.00000
9 hdd 0.09769 osd.9 up 1.00000 1.00000
10 hdd 0.07809 osd.10 up 1.00000 1.00000
-5 0.48843 host ceph-node2
1 hdd 0.09769 osd.1 up 0.95001 1.00000
4 hdd 0.09769 osd.4 up 1.00000 1.00000
7 hdd 0.09769 osd.7 up 1.00000 1.00000
11 hdd 0.09769 osd.11 up 1.00000 1.00000
13 hdd 0.09769 osd.13 up 0.90001 1.00000
-7 0.48843 host ceph-node3
2 hdd 0.09769 osd.2 up 1.00000 1.00000
5 hdd 0.09769 osd.5 up 0.81003 1.00000
8 hdd 0.09769 osd.8 up 1.00000 1.00000
12 hdd 0.09769 osd.12 up 0.81003 1.00000
14 hdd 0.09769 osd.14 up 1.00000 1.00000
[root@ceph-node1 ~]#
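Before moving on, it is worth gating any automation on cluster health. A minimal sketch, assuming the ceph CLI and an admin keyring are available on the node, which reads the same information as `ceph -s` in machine-readable form:
import json
import subprocess

def cluster_is_healthy():
    """Return True when `ceph -s` reports HEALTH_OK."""
    out = subprocess.check_output(['ceph', '-s', '--format', 'json'])
    status = json.loads(out)
    health = status['health']['status']    # HEALTH_OK / HEALTH_WARN / HEALTH_ERR
    print('cluster health: %s' % health)
    return health == 'HEALTH_OK'

if __name__ == '__main__':
    raise SystemExit(0 if cluster_is_healthy() else 1)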
2. Customize /etc/ceph/ceph.conf
1). Lower the default heartbeat settings for detecting down OSDs, to reduce the possibility of initiator timeouts
[osd]
osd heartbeat grace = 20
osd heartbeat interval = 5
2). Apply the same settings at runtime via the Ceph monitors
ceph tell <daemon_type>.<id> config set <parameter_name> <new_value>
ceph tell osd.* config set osd_heartbeat_grace 20
ceph tell osd.* config set osd_heartbeat_interval 5
3). Apply the settings at runtime on each OSD node (through the admin socket)
ceph daemon <daemon_type>.<id> config set osd_client_watch_timeout 15
ceph daemon osd.0 config set osd_heartbeat_grace 20
ceph daemon osd.0 config set osd_heartbeat_interval 5
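To confirm the values actually took effect, they can be read back through the same admin socket interface used above. A small sketch, assuming it runs on an OSD node and that the listed OSD ids are local to that node:
import json
import subprocess

def get_osd_option(osd_id, option):
    """Read one option back via `ceph daemon osd.<id> config get <option>`."""
    out = subprocess.check_output(
        ['ceph', 'daemon', 'osd.%d' % osd_id, 'config', 'get', option])
    return json.loads(out)[option]

if __name__ == '__main__':
    for osd_id in (0, 3, 6):   # adjust to the OSD ids hosted on this node
        for option in ('osd_heartbeat_grace', 'osd_heartbeat_interval'):
            print('osd.%d %s = %s' % (osd_id, option,
                                      get_osd_option(osd_id, option)))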
3. Deploy ceph-iscsi
# You can deploy with ceph-ansible or step by step from the command line; see the official documentation:
# https://docs.ceph.com/en/quincy/rbd/iscsi-target-cli-manual-install/
1. On all iSCSI gateway nodes, install ceph-iscsi:
[root@ceph-node1 ~]# yum install ceph-iscsi
2. On all iSCSI gateway nodes, install tcmu-runner:
[root@ceph-node1 ~]# yum install tcmu-runner
# In practice you can either build from source or install the RPM packages; the packages are not easy to find:
# https://1.chacra.ceph.com/r/tcmu-runner/master/
#9c84f7a4348ac326ac269fbdda507953dba6ec2c/centos/7/flavors/default/x86_64/tcmu-runner-1.5.2-1.el7.x86_64.rpm
# https://1.chacra.ceph.com/r/tcmu-runner/master/
#9c84f7a4348ac326ac269fbdda507953dba6ec2c/centos/7/flavors/default/x86_64/libtcmu-devel-1.5.2-1.el7.x86_64.rpm
#https://1.chacra.ceph.com/r/tcmu-runner/master/
#9c84f7a4348ac326ac269fbdda507953dba6ec2c/centos/7/flavors/default/x86_64/libtcmu-1.5.2-1.el7.x86_64.rpm
3. Configure the iSCSI gateway:
1). ceph-iscsi uses a pool named rbd by default, so create one first and then check that it exists:
[root@ceph-node1 ~]# ceph osd lspools
2). Create an iscsi-gateway.cfg configuration file
[root@ceph-node1 ~]# touch /etc/ceph/iscsi-gateway.cfg
3). Edit the /etc/ceph/iscsi-gateway.cfg file
[root@ceph-node1 ~]# cat /etc/ceph/iscsi-gateway.cfg
[config]
# Name of the Ceph storage cluster. A suitable Ceph configuration file allowing
# access to the Ceph storage cluster from the gateway node is required, if not
# colocated on an OSD node.
cluster_name = ceph
# Place a copy of the ceph cluster's admin keyring in the gateway's /etc/ceph
# directory and reference the filename here
gateway_keyring = ceph.client.admin.keyring
# API settings.
# The API supports a number of options that allow you to tailor it to your
# local environment. If you want to run the API under https, you will need to
# create cert/key files that are compatible for each iSCSI gateway node, that is
# not locked to a specific node. SSL cert and key files *must* be called
# 'iscsi-gateway.crt' and 'iscsi-gateway.key' and placed in the '/etc/ceph/' directory
# on *each* gateway node. With the SSL files in place, you can use 'api_secure = true'
# to switch to https mode.
# To support the API, the bare minimum settings are:
api_secure = false
# Additional API configuration options are as follows, defaults shown.
# api_user = admin
# api_password = admin
# api_port = 5001
trusted_ip_list = 192.168.9.101,192.168.10.135,192.168.33.146,192.31.162.123
4). Copy the iscsi-gateway.cfg file to all iSCSI gateway nodes.
4. On all iSCSI gateway nodes, enable and start the API service:
[root@ceph-node1 ~]# systemctl daemon-reload
[root@ceph-node1 ~]# systemctl enable rbd-target-gw
[root@ceph-node1 ~]# systemctl start rbd-target-gw
[root@ceph-node1 ~]# systemctl enable rbd-target-api
[root@ceph-node1 ~]# systemctl start rbd-target-api
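Once rbd-target-api is running on every gateway, a quick reachability check saves time later. The sketch below assumes the defaults shown in iscsi-gateway.cfg above (admin/admin credentials, api_secure = false, default api_port 5001); the cinder configuration later in this article points at port 5000, so use whatever api_port your gateways actually listen on.
import requests

API_URL = 'http://192.168.9.101:5001'   # one of the hosts in trusted_ip_list; adjust port to your api_port
AUTH = ('admin', 'admin')               # api_user / api_password

resp = requests.get(API_URL + '/api', auth=AUTH, timeout=10)
print(resp.status_code)                 # 200 means rbd-target-api is answering on this gateway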
4. Configure and test ceph-iscsi
# gwcli creates and configures the iSCSI target and the RBD images; the configuration is fairly simple and follows the official example
1. As root, on a iSCSI gateway node, start the iSCSI gateway command-line interface:
[root@ceph-node1 ~]# gwcli
2. Go to iscsi-targets and create a target with the name iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw:
> /> cd /iscsi-targets
> /iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
3. Create the iSCSI gateways.
The IPs used below are the ones that will be used for iSCSI data like READ and WRITE commands.
They can be the same IPs used for management operations listed in trusted_ip_list,
but it is recommended that different IPs are used.
> /iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
> /iscsi-target...-igw/gateways> create ceph-gw-1 192.168.9.101
> /iscsi-target...-igw/gateways> create ceph-gw-2 192.168.10.135
If not using RHEL/CentOS or using an upstream or ceph-iscsi-test kernel,
the skipchecks=true argument must be used.
This will avoid the Red Hat kernel and rpm checks:
> /iscsi-targets> cd iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/gateways
> /iscsi-target...-igw/gateways> create ceph-gw-1 192.168.9.101 skipchecks=true
> /iscsi-target...-igw/gateways> create ceph-gw-2 192.168.10.135 skipchecks=true
4. Add a RBD image with the name disk_1 in the pool rbd:
> /iscsi-target...-igw/gateways> cd /disks
> /disks> create pool=rbd image=disk_1 size=90G
5. Create a client with the initiator name iqn.1994-05.com.redhat:rh7-client:
> /disks> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw/hosts
> /iscsi-target...eph-igw/hosts> create iqn.1994-05.com.redhat:rh7-client
6. Set the client’s CHAP username to myiscsiusername and password to myiscsipassword:
> /iscsi-target...at:rh7-client> auth username=myiscsiusername password=myiscsipassword
# CHAP must always be configured. Without CHAP, the target will reject any login requests.
7. Add the disk to the client:
> /iscsi-target...at:rh7-client> disk add rbd/disk_1
8. example:
[root@ceph-node1 ~]# gwcli
/> ls
o- / ......................................................................................................................... [...]
o- cluster ......................................................................................................... [Clusters: 1]
| o- ceph ............................................................................................................ [HEALTH_OK]
| o- pools ......................................................................................................... [Pools: 13]
| | o- .rgw.root ............................................................. [(x3), Commit: 0.00Y/252112656K (0%), Used: 768K]
| | o- backups ............................................................... [(x3), Commit: 0.00Y/252112656K (0%), Used: 192K]
| | o- cephfs_data .......................................................... [(x3), Commit: 0.00Y/252112656K (0%), Used: 0.00Y]
| | o- cephfs_metadata ................................................... [(x3), Commit: 0.00Y/252112656K (0%), Used: 1577484b]
| | o- default.rgw.control .................................................. [(x3), Commit: 0.00Y/252112656K (0%), Used: 0.00Y]
| | o- default.rgw.log ...................................................... [(x3), Commit: 0.00Y/252112656K (0%), Used: 0.00Y]
| | o- default.rgw.meta ...................................................... [(x3), Commit: 0.00Y/252112656K (0%), Used: 384K]
| | o- images ........................................................ [(x3), Commit: 0.00Y/252112656K (0%), Used: 32298369142b]
| | o- imagess .............................................................. [(x3), Commit: 0.00Y/252112656K (0%), Used: 0.00Y]
| | o- libvirt ........................................................ [(x3), Commit: 0.00Y/252112656K (0%), Used: 3217097458b]
| | o- rbd ......................................................... [(x3), Commit: 550G/252112656K (228%), Used: 371312952985b]
| | o- vms ................................................................... [(x3), Commit: 0.00Y/252112656K (0%), Used: 192K]
| | o- volumes ........................................................... [(x3), Commit: 0.00Y/252112656K (0%), Used: 1769991b]
| o- topology ............................................................................................... [OSDs: 15,MONs: 3]
o- disks ........................................................................................................ [550G, Disks: 7]
| o- rbd ............................................................................................................ [rbd (550G)]
| o- disk_1 ......................................................................................... [rbd/disk_1 (Online, 90G)]
| o- disk_2 ........................................................................................ [rbd/disk_2 (Online, 100G)]
| o- disk_3 ........................................................................................ [rbd/disk_3 (Online, 120G)]
| o- disk_4 ......................................................................................... [rbd/disk_4 (Online, 50G)]
| o- disk_5 ......................................................................................... [rbd/disk_5 (Online, 50G)]
| o- volume-972f0121-59e3-44aa-87c0-d770827d755f .............. [rbd/volume-972f0121-59e3-44aa-87c0-d770827d755f (Online, 1168G)]
| o- volume-fe5ecbb9-8440-4115-9824-605c39a2f7eb ............... [rbd/volume-fe5ecbb9-8440-4115-9824-605c39a2f7eb (Online, 10G)]
o- iscsi-targets ............................................................................... [DiscoveryAuth: None, Targets: 1]
o- iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw ......................................................... [Auth: None, Gateways: 2]
o- disks .......................................................................................................... [Disks: 7]
| o- rbd/disk_1 .................................................................................. [Owner: ceph-node1, Lun: 0]
| o- rbd/disk_2 .................................................................................. [Owner: ceph-node2, Lun: 1]
| o- rbd/disk_3 .................................................................................. [Owner: ceph-node1, Lun: 2]
| o- rbd/disk_4 .................................................................................. [Owner: ceph-node2, Lun: 3]
| o- rbd/disk_5 .................................................................................. [Owner: ceph-node1, Lun: 6]
| o- rbd/volume-972f0121-59e3-44aa-87c0-d770827d755f ............................................. [Owner: ceph-node2, Lun: 5]
| o- rbd/volume-fe5ecbb9-8440-4115-9824-605c39a2f7eb ............................................. [Owner: ceph-node1, Lun: 4]
o- gateways ............................................................................................ [Up: 2/2, Portals: 2]
| o- ceph-node1 .......................................................................................... [192.168.9.101 (UP)]
| o- ceph-node2 ......................................................................................... [192.168.10.135 (UP)]
o- host-groups .................................................................................................. [Groups : 0]
o- hosts ....................................................................................... [Auth: ACL_ENABLED, Hosts: 8]
o- iqn.1994-05.com.redhat:rh7-client .......................................................... [Auth: CHAP, Disks: 2(190G)]
| o- lun 0 ............................................................................ [rbd/disk_1(90G), Owner: ceph-node1]
| o- lun 1 ........................................................................... [rbd/disk_2(100G), Owner: ceph-node2]
o- iqn.1995-05.com.redhat:rh7-client ............................................... [LOGGED-IN, Auth: CHAP, Disks: 1(120G)]
| o- lun 2 ........................................................................... [rbd/disk_3(120G), Owner: ceph-node1]
o- iqn.1996-05.com.redhat:rh7-client ........................................................... [Auth: CHAP, Disks: 1(50G)]
| o- lun 3 ............................................................................ [rbd/disk_4(50G), Owner: ceph-node2]
o- iqn.1994-05.com.redhat:336ea081fb32 .............................................. [LOGGED-IN, Auth: CHAP, Disks: 1(50G)]
| o- lun 6 ............................................................................ [rbd/disk_5(50G), Owner: ceph-node1]
o- iqn.1994-05.com.redhat:186aa3199292 ............................................. [LOGGED-IN, Auth: CHAP, Disks: 2(140G)]
| o- lun 4 ....................................... [rbd/volume-fe5ecbb9-8440-4115-9824-605c39a2f7eb(10G), Owner: ceph-node1]
| o- lun 5 ...................................... [rbd/volume-972f0121-59e3-44aa-87c0-d770827d755f(1168G), Owner: ceph-node2]
o- iqn.1994-05.com.redhat:53725a46-ddc3-4282-962a-4c0a4f37e217 ............................... [Auth: CHAP, Disks: 0(0.00Y)]
o- iqn.1994-05.com.redhat:5b5d7a76e .......................................................... [Auth: CHAP, Disks: 0(0.00Y)]
o- iqn.1994-05.com.redhat:82e2285f-2196-417c-ba42-cdb08a567c8d ............................... [Auth: CHAP, Disks: 0(0.00Y)]
/>
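The same information that `gwcli ls` prints can be pulled from rbd-target-api as a single JSON config object, which is what the cinder driver in section III consumes through get_config(). A small sketch, with the same endpoint and credential assumptions as before:
import requests

API_URL = 'http://192.168.9.101:5000'   # rbd_iscsi_api_url used later in cinder.conf
AUTH = ('admin', 'admin')
TARGET_IQN = 'iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw'

config = requests.get(API_URL + '/api/config', auth=AUTH, timeout=10).json()
target = config['targets'][TARGET_IQN]
print('disks registered on the gateways:', list(config['disks']))
print('clients (initiators) on the target:', list(target['clients']))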
5. iSCSI initiator for Linux
# Install the dependencies
# yum install iscsi-initiator-utils
# yum install device-mapper-multipath
1. Create the default /etc/multipath.conf file and enable the multipathd service:
# mpathconf --enable --with_multipathd y
2. Add the following to /etc/multipath.conf file:
devices {
device {
vendor "LIO-ORG"
product "TCMU device"
hardware_handler "1 alua"
path_grouping_policy "failover"
path_selector "queue-length 0"
failback 60
path_checker tur
prio alua
prio_args exclusive_pref_bit
fast_io_fail_tmo 25
no_path_retry queue
}
}
3. Restart the multipathd service:
# systemctl reload multipathd
4. iSCSI Discovery and Setup:
1). If CHAP was setup on the iSCSI gateway,
provide a CHAP username and password by updating the /etc/iscsi/iscsid.conf file accordingly.
#node.session.auth.authmethod = CHAP
#node.session.auth.username = username
#node.session.auth.password = password
2). Discover the target portals:
# iscsiadm -m discovery -t st -p 192.168.56.101
192.168.56.101:3260,1 iqn.2003-01.org.linux-iscsi.rheln1
192.168.56.102:3260,2 iqn.2003-01.org.linux-iscsi.rheln1
3). Login to target:
# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.rheln1 -l
4). Logout from target:
# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.rheln1 -u
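Wrapped in Python, the discovery and login flow above looks like the sketch below; this is essentially what the ironic attach_volume patch in section IV drives through the in-guest agent. It assumes iscsi-initiator-utils is installed and the script runs as root on the initiator host.
import subprocess

def iscsi_login(target_portal):
    """Discover targets behind target_portal ('ip:3260') and log in to each."""
    out = subprocess.check_output(
        ['iscsiadm', '-m', 'discovery', '-t', 'st', '-p', target_portal])
    for line in out.decode().splitlines():
        # each line looks like: 192.168.9.101:3260,1 iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
        iqn = line.split()[-1]
        subprocess.check_call(
            ['iscsiadm', '-m', 'node', '-T', iqn, '-p', target_portal, '-l'])

def iscsi_logout(target_portal):
    """Log out of every target discovered behind target_portal."""
    out = subprocess.check_output(
        ['iscsiadm', '-m', 'discovery', '-t', 'st', '-p', target_portal])
    for line in out.decode().splitlines():
        iqn = line.split()[-1]
        subprocess.check_call(
            ['iscsiadm', '-m', 'node', '-T', iqn, '-p', target_portal, '-u'])

if __name__ == '__main__':
    iscsi_login('192.168.9.101:3260')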
III. Integrating ceph-iscsi with OpenStack Cinder
1. Modify the cinder-volume configuration file
# Edit /etc/kolla/cinder-volume/cinder.conf
enabled_backends = local-lvm-sata, local-lvm-ssd, rbd-sata, ceph_iscsi
# This test environment has four storage backends configured
[ceph_iscsi]
volume_driver = cinder.volume.drivers.ceph.rbd_iscsi.RBDISCSIDriver
volume_backend_name = ceph_iscsi
#rbd_iscsi_target_iqns = iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
rbd_iscsi_target_iqn = iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_iscsi_controller_ips = 192.168.9.101:5000
rbd_iscsi_api_url = http://192.168.9.101:5000
rbd_iscsi_api_user = admin
rbd_iscsi_api_password = admin
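With the backend defined, volumes are routed to it through a volume type whose extra spec volume_backend_name matches the name above. A sketch using python-cinderclient; the auth URL and credentials below are placeholders for this environment:
from cinderclient import client
from keystoneauth1 import loading
from keystoneauth1 import session

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://keystone:5000/v3',   # placeholder
                                username='admin', password='secret',
                                project_name='admin',
                                user_domain_name='Default',
                                project_domain_name='Default')
cinder = client.Client('3', session=session.Session(auth=auth))

# Tie a volume type to the ceph_iscsi backend defined in cinder.conf above.
vtype = cinder.volume_types.create('ceph_iscsi')
vtype.set_keys({'volume_backend_name': 'ceph_iscsi'})

# Create a test volume on that backend.
vol = cinder.volumes.create(10, name='bm-data-disk', volume_type='ceph_iscsi')
print(vol.id, vol.status)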
2. Modify the cinder-volume code and add the custom RBDISCSIDriver
1. cinder/opts.py
...
from cinder.volume.drivers.ceph import rbd_iscsi as \
cinder_volume_drivers_ceph_rbdiscsi
...
def list_opts():
...
cinder_volume_drivers_ceph_rbdiscsi.RBD_ISCSI_OPTS,
...
2. cinder/tests/unit/volume/drivers/ceph/__init__.py
3. cinder/tests/unit/volume/drivers/ceph/fake_rbd_iscsi_client.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Fake rbd-iscsi-client for testing without installing the client."""
import sys
from unittest import mock
from cinder.tests.unit.volume.drivers.ceph \
import fake_rbd_iscsi_client_exceptions as clientexceptions
rbdclient = mock.MagicMock()
rbdclient.version = "0.1.5"
rbdclient.exceptions = clientexceptions
sys.modules['rbd_iscsi_client'] = rbdclient
4. cinder/tests/unit/volume/drivers/ceph/fake_rbd_iscsi_client_exceptions.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
"""Fake client exceptions to use."""
class UnsupportedVersion(Exception):
"""Unsupported version of the client."""
pass
class ClientException(Exception):
"""The base exception class for these fake exceptions."""
_error_code = None
_error_desc = None
_error_ref = None
_debug1 = None
_debug2 = None
def __init__(self, error=None):
if error:
if 'code' in error:
self._error_code = error['code']
if 'desc' in error:
self._error_desc = error['desc']
if 'ref' in error:
self._error_ref = error['ref']
if 'debug1' in error:
self._debug1 = error['debug1']
if 'debug2' in error:
self._debug2 = error['debug2']
def get_code(self):
return self._error_code
def get_description(self):
return self._error_desc
def get_ref(self):
return self._error_ref
def __str__(self):
formatted_string = self.message
if self.http_status:
formatted_string += " (HTTP %s)" % self.http_status
if self._error_code:
formatted_string += " %s" % self._error_code
if self._error_desc:
formatted_string += " - %s" % self._error_desc
if self._error_ref:
formatted_string += " - %s" % self._error_ref
if self._debug1:
formatted_string += " (1: '%s')" % self._debug1
if self._debug2:
formatted_string += " (2: '%s')" % self._debug2
return formatted_string
class HTTPConflict(ClientException):
http_status = 409
message = "Conflict"
def __init__(self, error=None):
if error:
super(HTTPConflict, self).__init__(error)
if 'message' in error:
self._error_desc = error['message']
def get_description(self):
return self._error_desc
class HTTPNotFound(ClientException):
http_status = 404
message = "Not found"
class HTTPForbidden(ClientException):
http_status = 403
message = "Forbidden"
class HTTPBadRequest(ClientException):
http_status = 400
message = "Bad request"
class HTTPUnauthorized(ClientException):
http_status = 401
message = "Unauthorized"
class HTTPServerError(ClientException):
http_status = 500
message = "Error"
def __init__(self, error=None):
if error and 'message' in error:
self._error_desc = error['message']
def get_description(self):
return self._error_desc
5. cinder/tests/unit/volume/drivers/ceph/test_rbd_iscsi.py
# Copyright 2012 Josh Durgin
# Copyright 2013 Canonical Ltd.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from unittest import mock
import ddt
from cinder import context
from cinder import exception
from cinder.tests.unit import fake_constants as fake
from cinder.tests.unit import fake_volume
from cinder.tests.unit import test
from cinder.tests.unit.volume.drivers.ceph \
import fake_rbd_iscsi_client as fake_client
import cinder.volume.drivers.ceph.rbd_iscsi as driver
# This is used to collect raised exceptions so that tests may check what was
# raised.
# NOTE: this must be initialised in test setUp().
RAISED_EXCEPTIONS = []
@ddt.ddt
class RBDISCSITestCase(test.TestCase):
def setUp(self):
global RAISED_EXCEPTIONS
RAISED_EXCEPTIONS = []
super(RBDISCSITestCase, self).setUp()
self.context = context.get_admin_context()
# bogus access to prevent pep8 violation
# from the import of fake_client.
# fake_client must be imported to create the fake
# rbd_iscsi_client system module
fake_client.rbdclient
self.fake_target_iqn = 'iqn.2019-01.com.suse.iscsi-gw:iscsi-igw'
self.fake_valid_response = {'status': '200'}
self.fake_clients = \
{'response':
{'Content-Type': 'application/json',
'Content-Length': '55',
'Server': 'Werkzeug/0.14.1 Python/2.7.15rc1',
'Date': 'Wed, 19 Jun 2019 20:13:18 GMT',
'status': '200',
'content-location': 'http://192.168.121.11:5001/api/clients/'
'XX_REPLACE_ME'},
'body':
{'clients': ['iqn.1993-08.org.debian:01:5d3b9abba13d']}}
self.volume_a = fake_volume.fake_volume_obj(
self.context,
**{'name': u'volume-0000000a',
'id': '4c39c3c7-168f-4b32-b585-77f1b3bf0a38',
'size': 10})
self.volume_b = fake_volume.fake_volume_obj(
self.context,
**{'name': u'volume-0000000b',
'id': '0c7d1f44-5a06-403f-bb82-ae7ad0d693a6',
'size': 10})
self.volume_c = fake_volume.fake_volume_obj(
self.context,
**{'name': u'volume-0000000a',
'id': '55555555-222f-4b32-b585-9991b3bf0a99',
'size': 12,
'encryption_key_id': fake.ENCRYPTION_KEY_ID})
def setup_configuration(self):
config = mock.MagicMock()
config.rbd_cluster_name = 'nondefault'
config.rbd_pool = 'rbd'
config.rbd_ceph_conf = '/etc/ceph/my_ceph.conf'
config.rbd_secret_uuid = None
config.rbd_user = 'cinder'
config.volume_backend_name = None
config.rbd_iscsi_api_user = 'fake_user'
config.rbd_iscsi_api_password = 'fake_password'
config.rbd_iscsi_api_url = 'http://fake.com:5000'
return config
@mock.patch(
'rbd_iscsi_client.client.RBDISCSIClient',
spec=True,
)
def setup_mock_client(self, _m_client, config=None, mock_conf=None):
_m_client = _m_client.return_value
# Configure the base constants, defaults etc...
if mock_conf:
_m_client.configure_mock(**mock_conf)
if config is None:
config = self.setup_configuration()
self.driver = driver.RBDISCSIDriver(configuration=config)
self.driver.set_initialized()
return _m_client
@mock.patch('rbd_iscsi_client.version', '0.1.0')
def test_unsupported_client_version(self):
self.setup_mock_client()
with mock.patch('cinder.volume.drivers.rbd.RBDDriver.do_setup'):
self.assertRaises(exception.InvalidInput,
self.driver.do_setup, None)
@ddt.data({'user': None, 'password': 'foo',
'url': 'http://fake.com:5000', 'iqn': None},
{'user': None, 'password': None,
'url': 'http://fake', 'iqn': None},
{'user': None, 'password': None,
'url': None, 'iqn': None},
{'user': 'fake', 'password': 'fake',
'url': None, 'iqn': None},
{'user': 'fake', 'password': 'fake',
'url': 'fake', 'iqn': None},
)
@ddt.unpack
def test_min_config(self, user, password, url, iqn):
config = self.setup_configuration()
config.rbd_iscsi_api_user = user
config.rbd_iscsi_api_password = password
config.rbd_iscsi_api_url = url
config.rbd_iscsi_target_iqn = iqn
self.setup_mock_client(config=config)
with mock.patch('cinder.volume.drivers.rbd.RBDDriver'
'.check_for_setup_error'):
self.assertRaises(exception.InvalidConfigurationValue,
self.driver.check_for_setup_error)
@ddt.data({'response': None},
{'response': {'nothing': 'nothing'}},
{'response': {'status': '1680'}})
@ddt.unpack
def test_do_setup(self, response):
mock_conf = {
'get_api.return_value': (response, None)}
mock_client = self.setup_mock_client(mock_conf=mock_conf)
with mock.patch('cinder.volume.drivers.rbd.RBDDriver.do_setup'), \
mock.patch.object(driver.RBDISCSIDriver,
'_create_client') as mock_create_client:
mock_create_client.return_value = mock_client
self.assertRaises(exception.InvalidConfigurationValue,
self.driver.do_setup, None)
@mock.patch('rbd_iscsi_client.version', "0.1.4")
def test_unsupported_version(self):
self.setup_mock_client()
self.assertRaises(exception.InvalidInput,
self.driver._create_client)
@ddt.data({'status': '200',
'target_iqn': 'iqn.2019-01.com.suse.iscsi-gw:iscsi-igw',
'clients': ['foo']},
{'status': '1680',
'target_iqn': 'iqn.2019-01.com.suse.iscsi-gw:iscsi-igw',
'clients': None}
)
@ddt.unpack
def test__get_clients(self, status, target_iqn, clients):
config = self.setup_configuration()
config.rbd_iscsi_target_iqn = target_iqn
response = self.fake_clients['response']
response['status'] = status
response['content-location'] = (
response['content-location'].replace('XX_REPLACE_ME', target_iqn))
body = self.fake_clients['body']
mock_conf = {
'get_clients.return_value': (response, body),
'get_api.return_value': (self.fake_valid_response, None)
}
mock_client = self.setup_mock_client(mock_conf=mock_conf,
config=config)
with mock.patch('cinder.volume.drivers.rbd.RBDDriver.do_setup'), \
mock.patch.object(driver.RBDISCSIDriver,
'_create_client') as mock_create_client:
mock_create_client.return_value = mock_client
self.driver.do_setup(None)
if status == '200':
actual_response = self.driver._get_clients()
self.assertEqual(actual_response, body)
else:
# we expect an exception
self.assertRaises(exception.VolumeBackendAPIException,
self.driver._get_clients)
@ddt.data({'status': '200',
'body': {'created': 'someday',
'discovery_auth': 'somecrap',
'disks': 'fakedisks',
'gateways': 'fakegws',
'targets': 'faketargets'}},
{'status': '1680',
'body': None})
@ddt.unpack
def test__get_config(self, status, body):
config = self.setup_configuration()
config.rbd_iscsi_target_iqn = self.fake_target_iqn
response = self.fake_clients['response']
response['status'] = status
response['content-location'] = (
response['content-location'].replace('XX_REPLACE_ME',
self.fake_target_iqn))
mock_conf = {
'get_config.return_value': (response, body),
'get_api.return_value': (self.fake_valid_response, None)
}
mock_client = self.setup_mock_client(mock_conf=mock_conf,
config=config)
with mock.patch('cinder.volume.drivers.rbd.RBDDriver.do_setup'), \
mock.patch.object(driver.RBDISCSIDriver,
'_create_client') as mock_create_client:
mock_create_client.return_value = mock_client
self.driver.do_setup(None)
if status == '200':
actual_response = self.driver._get_config()
self.assertEqual(body, actual_response)
else:
# we expect an exception
self.assertRaises(exception.VolumeBackendAPIException,
self.driver._get_config)
6. cinder/volume/drivers/ceph/__init__.py
7. cinder/volume/drivers/ceph/rbd_iscsi.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""RADOS Block Device iSCSI Driver"""
from distutils import version
from oslo_config import cfg
from oslo_log import log as logging
from oslo_utils import netutils
from cinder import exception
from cinder.i18n import _
from cinder import interface
from cinder import utils
from cinder.volume import configuration
from cinder.volume.drivers import rbd
from cinder.volume import volume_utils
try:
import rbd_iscsi_client
from rbd_iscsi_client import client
from rbd_iscsi_client import exceptions as client_exceptions
except ImportError:
rbd_iscsi_client = None
client = None
client_exceptions = None
LOG = logging.getLogger(__name__)
RBD_ISCSI_OPTS = [
cfg.StrOpt('rbd_iscsi_api_user',
default='',
help='The username for the rbd_target_api service'),
cfg.StrOpt('rbd_iscsi_api_password',
default='',
secret=True,
               help='The password for the rbd_target_api service'),
cfg.StrOpt('rbd_iscsi_api_url',
default='',
help='The url to the rbd_target_api service'),
cfg.BoolOpt('rbd_iscsi_api_debug',
default=False,
help='Enable client request debugging.'),
cfg.StrOpt('rbd_iscsi_target_iqn',
default=None,
help='The preconfigured target_iqn on the iscsi gateway.'),
]
CONF = cfg.CONF
CONF.register_opts(RBD_ISCSI_OPTS, group=configuration.SHARED_CONF_GROUP)
MIN_CLIENT_VERSION = "0.1.8"
@interface.volumedriver
class RBDISCSIDriver(rbd.RBDDriver):
"""Implements RADOS block device (RBD) iSCSI volume commands."""
VERSION = '1.0.0'
# ThirdPartySystems wiki page
CI_WIKI_NAME = "Cinder_Jenkins"
SUPPORTS_ACTIVE_ACTIVE = True
STORAGE_PROTOCOL = 'iSCSI'
CHAP_LENGTH = 16
# The target IQN to use for creating all exports
# we map all the targets for OpenStack attaches to this.
target_iqn = None
def __init__(self, active_backend_id=None, *args, **kwargs):
super(RBDISCSIDriver,self).__init__(*args, **kwargs)
self.configuration.append_config_values(RBD_ISCSI_OPTS)
@classmethod
def get_driver_options(cls):
additional_opts = cls._get_oslo_driver_opts(
'replication_device', 'reserved_percentage',
'max_over_subscription_ratio', 'volume_dd_blocksize',
'driver_ssl_cert_verify', 'suppress_requests_ssl_warnings')
return rbd.RBD_OPTS + RBD_ISCSI_OPTS + additional_opts
def _create_client(self):
client_version = rbd_iscsi_client.version
if (version.StrictVersion(client_version) <
version.StrictVersion(MIN_CLIENT_VERSION)):
ex_msg = (_('Invalid rbd_iscsi_client version found (%(found)s). '
'Version %(min)s or greater required. Run "pip'
' install --upgrade rbd-iscsi-client" to upgrade'
' the client.')
% {'found': client_version,
'min': MIN_CLIENT_VERSION})
LOG.error(ex_msg)
raise exception.InvalidInput(reason=ex_msg)
config = self.configuration
ssl_warn = config.safe_get('suppress_requests_ssl_warnings')
cl = client.RBDISCSIClient(
config.safe_get('rbd_iscsi_api_user'),
config.safe_get('rbd_iscsi_api_password'),
config.safe_get('rbd_iscsi_api_url'),
secure=config.safe_get('driver_ssl_cert_verify'),
suppress_ssl_warnings=ssl_warn
)
return cl
def _is_status_200(self, response):
return (response and 'status' in response and
response['status'] == '200')
def do_setup(self, context):
"""Perform initialization steps that could raise exceptions."""
super(RBDISCSIDriver, self).do_setup(context)
if client is None:
msg = _("You must install rbd-iscsi-client python package "
"before using this driver.")
raise exception.VolumeDriverException(data=msg)
# Make sure we have the basic settings we need to talk to the
# iscsi api service
config = self.configuration
self.client = self._create_client()
self.client.set_debug_flag(config.safe_get('rbd_iscsi_api_debug'))
resp, body = self.client.get_api()
if not self._is_status_200(resp):
# failed to fetch the open api url
raise exception.InvalidConfigurationValue(
option='rbd_iscsi_api_url',
value='Could not talk to the rbd-target-api')
# The admin had to have setup a target_iqn in the iscsi gateway
# already in order for the gateways to work properly
self.target_iqn = self.configuration.safe_get('rbd_iscsi_target_iqn')
LOG.info("Using target_iqn '%s'", self.target_iqn)
def check_for_setup_error(self):
"""Return an error if prerequisites aren't met."""
super(RBDISCSIDriver, self).check_for_setup_error()
required_options = ['rbd_iscsi_api_user',
'rbd_iscsi_api_password',
'rbd_iscsi_api_url',
'rbd_iscsi_target_iqn']
for attr in required_options:
val = getattr(self.configuration, attr)
if not val:
raise exception.InvalidConfigurationValue(option=attr,
value=val)
def _get_clients(self):
# make sure we have
resp, body = self.client.get_clients(self.target_iqn)
if not self._is_status_200(resp):
msg = _("Failed to get_clients() from rbd-target-api")
raise exception.VolumeBackendAPIException(data=msg)
return body
def _get_config(self):
resp, body = self.client.get_config()
if not self._is_status_200(resp):
msg = _("Failed to get_config() from rbd-target-api")
raise exception.VolumeBackendAPIException(data=msg)
return body
def _get_disks(self):
resp, disks = self.client.get_disks()
if not self._is_status_200(resp):
msg = _("Failed to get_disks() from rbd-target-api")
raise exception.VolumeBackendAPIException(data=msg)
return disks
def create_client(self, initiator_iqn):
"""Create a client iqn on the gateway if it doesn't exist."""
client = self._get_target_client(initiator_iqn)
if not client:
try:
self.client.create_client(self.target_iqn,
initiator_iqn)
except client_exceptions.ClientException as ex:
raise exception.VolumeBackendAPIException(
data=ex.get_description())
def _get_target_client(self, initiator_iqn):
"""Get the config information for a client defined to a target."""
config = self._get_config()
target_config = config['targets'][self.target_iqn]
if initiator_iqn in target_config['clients']:
return target_config['clients'][initiator_iqn]
def _get_auth_for_client(self, initiator_iqn):
initiator_config = self._get_target_client(initiator_iqn)
if initiator_config:
auth = initiator_config['auth']
return auth
def _set_chap_for_client(self, initiator_iqn, username, password):
"""Save the CHAP creds in the client on the gateway."""
# username is 8-64 chars
# Password has to be 12-16 chars
LOG.debug("Setting chap creds to %(user)s : %(pass)s",
{'user': username, 'pass': password})
try:
self.client.set_client_auth(self.target_iqn,
initiator_iqn,
username,
password)
except client_exceptions.ClientException as ex:
raise exception.VolumeBackendAPIException(
data=ex.get_description())
def _get_lun(self, iscsi_config, lun_name, initiator_iqn):
lun = None
target_info = iscsi_config['targets'][self.target_iqn]
luns = target_info['clients'][initiator_iqn]['luns']
if lun_name in luns:
lun = {'name': lun_name,
'id': luns[lun_name]['lun_id']}
return lun
def _lun_name(self, volume_name):
"""Build the iscsi gateway lun name."""
return ("%(pool)s/%(volume_name)s" %
{'pool': self.configuration.rbd_pool,
'volume_name': volume_name})
def get_existing_disks(self):
"""Get the existing list of registered volumes on the gateway."""
resp, disks = self.client.get_disks()
return disks['disks']
@utils.trace
def create_disk(self, volume_name):
"""Register the volume with the iscsi gateways.
We have to register the volume with the iscsi gateway.
Exporting the volume won't work unless the gateway knows
about it.
"""
try:
self.client.find_disk(self.configuration.rbd_pool,
volume_name)
except client_exceptions.HTTPNotFound:
try:
# disk isn't known by the gateways, so lets add it.
self.client.create_disk(self.configuration.rbd_pool,
volume_name)
except client_exceptions.ClientException as ex:
LOG.exception("Couldn't create the disk entry to "
"export the volume.")
raise exception.VolumeBackendAPIException(
data=ex.get_description())
@utils.trace
def register_disk(self, target_iqn, volume_name):
"""Register the disk with the target_iqn."""
lun_name = self._lun_name(volume_name)
try:
self.client.register_disk(target_iqn, lun_name)
except client_exceptions.HTTPBadRequest as ex:
desc = ex.get_description()
search_str = ('is already mapped on target %(target_iqn)s' %
{'target_iqn': self.target_iqn})
            if search_str in desc:
# The volume is already registered
return
else:
LOG.error("Couldn't register the volume to the target_iqn")
raise exception.VolumeBackendAPIException(
data=ex.get_description())
except client_exceptions.ClientException as ex:
LOG.exception("Couldn't register the volume to the target_iqn",
ex)
raise exception.VolumeBackendAPIException(
data=ex.get_description())
@utils.trace
def unregister_disk(self, target_iqn, volume_name):
"""Unregister the volume from the gateway."""
lun_name = self._lun_name(volume_name)
try:
self.client.unregister_disk(target_iqn, lun_name)
except client_exceptions.ClientException as ex:
LOG.exception("Couldn't unregister the volume to the target_iqn",
ex)
raise exception.VolumeBackendAPIException(
data=ex.get_description())
@utils.trace
def export_disk(self, initiator_iqn, volume_name, iscsi_config):
"""Export a volume to an initiator."""
lun_name = self._lun_name(volume_name)
LOG.debug("Export lun %(lun)s", {'lun': lun_name})
lun = self._get_lun(iscsi_config, lun_name, initiator_iqn)
if lun:
LOG.debug("Found existing lun export.")
return lun
try:
LOG.debug("Creating new lun export for %(lun)s",
{'lun': lun_name})
self.client.export_disk(self.target_iqn, initiator_iqn,
self.configuration.rbd_pool,
volume_name)
resp, iscsi_config = self.client.get_config()
return self._get_lun(iscsi_config, lun_name, initiator_iqn)
except client_exceptions.ClientException as ex:
raise exception.VolumeBackendAPIException(
data=ex.get_description())
@utils.trace
def unexport_disk(self, initiator_iqn, volume_name, iscsi_config):
"""Remove a volume from an initiator."""
lun_name = self._lun_name(volume_name)
LOG.debug("unexport lun %(lun)s", {'lun': lun_name})
lun = self._get_lun(iscsi_config, lun_name, initiator_iqn)
if not lun:
LOG.debug("Didn't find LUN on gateway.")
return
try:
LOG.debug("unexporting %(lun)s", {'lun': lun_name})
self.client.unexport_disk(self.target_iqn, initiator_iqn,
self.configuration.rbd_pool,
volume_name)
except client_exceptions.ClientException as ex:
LOG.exception(ex)
raise exception.VolumeBackendAPIException(
data=ex.get_description())
def find_client_luns(self, target_iqn, client_iqn, iscsi_config):
"""Find luns already exported to an initiator."""
if 'targets' in iscsi_config:
if target_iqn in iscsi_config['targets']:
target_info = iscsi_config['targets'][target_iqn]
if 'clients' in target_info:
clients = target_info['clients']
client = clients[client_iqn]
luns = client['luns']
return luns
@utils.trace
def initialize_connection(self, volume, connector):
"""Export a volume to a host."""
# create client
initiator_iqn = connector['initiator']
self.create_client(initiator_iqn)
auth = self._get_auth_for_client(initiator_iqn)
username = initiator_iqn
if not auth['password']:
password = volume_utils.generate_password(length=self.CHAP_LENGTH)
self._set_chap_for_client(initiator_iqn, username, password)
else:
LOG.debug("using existing CHAP password")
password = auth['password']
# add disk for export
iscsi_config = self._get_config()
# First have to ensure that the disk is registered with
# the gateways.
self.create_disk(volume.name)
self.register_disk(self.target_iqn, volume.name)
iscsi_config = self._get_config()
# Now export the disk to the initiator
lun = self.export_disk(initiator_iqn, volume.name, iscsi_config)
# fetch the updated config so we can get the lun id
iscsi_config = self._get_config()
target_info = iscsi_config['targets'][self.target_iqn]
ips = target_info['ip_list']
target_portal = ips[0]
if netutils.is_valid_ipv6(target_portal):
target_portal = "[{}]:{}".format(
target_portal, "3260")
else:
target_portal = "{}:3260".format(target_portal)
data = {
'driver_volume_type': 'iscsi',
'data': {
'target_iqn': self.target_iqn,
'target_portal': target_portal,
'target_lun': lun['id'],
'auth_method': 'CHAP',
'auth_username': username,
'auth_password': password,
}
}
return data
def _delete_disk(self, volume):
"""Remove the defined disk from the gateway."""
# We only do this when we know it's not exported
# anywhere in the gateway
lun_name = self._lun_name(volume.name)
config = self._get_config()
# Now look for the disk on any exported target
found = False
for target_iqn in config['targets']:
# Do we have the volume we are looking for?
target = config['targets'][target_iqn]
for client_iqn in target['clients'].keys():
if lun_name in target['clients'][client_iqn]['luns']:
found = True
if not found:
# we can delete the disk definition
LOG.info("Deleteing volume definition in iscsi gateway for {}".
format(lun_name))
self.client.delete_disk(self.configuration.rbd_pool, volume.name,
preserve_image=True)
def _terminate_connection(self, volume, initiator_iqn, target_iqn,
iscsi_config):
# remove the disk from the client.
self.unexport_disk(initiator_iqn, volume.name, iscsi_config)
# Try to unregister the disk, since nobody is using it.
self.unregister_disk(self.target_iqn, volume.name)
config = self._get_config()
# If there are no more luns exported to this initiator
# then delete the initiator
luns = self.find_client_luns(target_iqn, initiator_iqn, config)
if not luns:
LOG.debug("There aren't any more LUNs attached to %(iqn)s."
"So we unregister the volume and delete "
"the client entry",
{'iqn': initiator_iqn})
try:
self.client.delete_client(target_iqn, initiator_iqn)
except client_exceptions.ClientException:
LOG.warning("Tried to delete initiator %(iqn)s, but delete "
"failed.", {'iqns': initiator_iqn})
def _terminate_all(self, volume, iscsi_config):
"""Find all exports of this volume for our target_iqn and detach."""
disks = self._get_disks()
lun_name = self._lun_name(volume.name)
if lun_name not in disks['disks']:
LOG.debug("Volume {} not attached anywhere.".format(
lun_name
))
return
for target_iqn_tmp in iscsi_config['targets']:
if self.target_iqn != target_iqn_tmp:
# We don't touch exports for targets
# we aren't configured to manage.
continue
target = iscsi_config['targets'][self.target_iqn]
for client_iqn in target['clients'].keys():
if lun_name in target['clients'][client_iqn]['luns']:
self._terminate_connection(volume, client_iqn,
self.target_iqn,
iscsi_config)
self._delete_disk(volume)
@utils.trace
def terminate_connection(self, volume, connector, **kwargs):
"""Unexport the volume from the gateway."""
iscsi_config = self._get_config()
if not connector:
# No connector was passed in, so this is a force detach
# we need to detach the volume from the configured target_iqn.
            self._terminate_all(volume, iscsi_config)
            return
initiator_iqn = connector['initiator']
self._terminate_connection(volume, initiator_iqn, self.target_iqn,
iscsi_config)
self._delete_disk(volume)
8. doc/source/reference/support-matrix.ini
...
[driver.rbd_iscsi]
title=(Ceph) iSCSI Storage Driver (iSCSI)
driver.rbd_iscsi=complete
...
9. driver-requirements.txt
# RBD-iSCSI
rbd-iscsi-client # Apache-2.0
10. lower-constraints.txt
rbd-iscsi-client==0.1.8
11. setup.cfg
rbd-iscsi-client>=0.1.8 # Apache-2.0
rbd_iscsi =
rbd-iscsi-client>=0.1.8 # Apache-2.0
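For reference, initialize_connection() in the driver above hands back a connection_info dict shaped like the following (values are illustrative, taken from this test environment); the target_portal and CHAP credentials in the 'data' section are exactly what the ironic attach_volume patch in the next section pushes down to the bare-metal node.
connection_info = {
    'driver_volume_type': 'iscsi',
    'data': {
        'target_iqn': 'iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw',
        'target_portal': '192.168.9.101:3260',
        'target_lun': 4,
        'auth_method': 'CHAP',
        'auth_username': 'iqn.1994-05.com.redhat:186aa3199292',
        'auth_password': '<generated 16-character CHAP secret>',
    },
}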
IV. Modify the ironic code: custom attach_volume and detach_volume
1. Ironic integration
# ironic/api/controllers/v1/node.py
class NodesController(rest.RestController):
"""REST controller for Nodes."""
...
_custom_actions = {
...
'attach_volume' : ['POST'],
'detach_volume' : ['POST'],
...
}
...
@METRICS.timer('NodesController.attach_volume')
@expose.expose(wtypes.text, wtypes.text,wtypes.text)
def attach_volume(self,instance_uuid,target_portal):
context = pecan.request.context
LOG.debug(context)
list_object = objects.Node.list(context)
LOG.debug("instance_uuid: %s" % instance_uuid)
for obj in list_object:
if obj.instance_uuid == instance_uuid:
uuid = obj.uuid
rpc_node = api_utils.get_rpc_node_with_suffix(uuid)
if (rpc_node.power_state != ir_states.POWER_ON and
rpc_node.provision_state != ir_states.ACTIVE):
return "Failed: node system states error"
topic = pecan.request.rpcapi.get_topic_for(rpc_node)
ip = pecan.request.rpcapi.get_node_ip(context,
rpc_node.uuid,
topic)
os_type = obj.instance_info['image_properties'].get('os_type')
if "windows" == os_type:
shell_txt = "powershell.exe $(Get-InitiatorPort).NodeAddress"
else:
shell_txt = "cat /etc/iscsi/initiatorname.iscsi"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("get initiatorname Failed.")
if instance_uuid not in rslt:
LOG.error("set initiator name")
if "windows" == os_type:
shell_txt = "powershell.exe Set-InitiatorPort " + rslt.split(' ')[-1].strip('\n') + " -NewNodeAddress " + instance_uuid
else:
shell_txt = "echo " + "InitiatorName=iqn.1994-05.com.redhat:" + instance_uuid + "> /etc/iscsi/initiatorname.iscsi"
shell_txt1 = "sed -i '/^#node.session.auth.authmethod = CHAP/cnode.session.auth.authmethod = CHAP' /etc/iscsi/iscsid.conf"
shell_txt2 = "sed -i '/^#node.session.auth.username = username/cnode.session.auth.username = " + instance_uuid[-12:] + "' /etc/iscsi/iscsid.conf"
shell_txt3 = "sed -i '/^#node.session.auth.password = password/cnode.session.auth.password = " + instance_uuid[-12:] + "' /etc/iscsi/iscsid.conf"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt1)
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt2)
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt3)
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("create initiatorname Failed.")
if "windows" != os_type:
shell_txt = "/etc/init.d/open-iscsi restart"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("restart open-iscsi service Failed.")
shell_txt = "/etc/init.d/iscsid restart"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("restart iscsid service Failed.")
if "windows" == os_type:
LOG.error("discovery storge target.")
shell_txt = "powershell.exe New-IscsiTargetPortal -TargetPortalAddress " + target_portal.split(":")[0]
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("discovery storge target Failed.")
shell_txt = "powershell.exe $(Get-IscsiTarget).NodeAddress"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("get storge target info Failed.")
LOG.error("connect iscsi target")
lines = rslt.splitlines()
for line in lines:
if "iqn" in line:
shell_txt = "powershell.exe Connect-IscsiTarget -NodeAddress " + line.strip('\n') + " -IsPersistent $true -IsMultipathEnabled $true"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("connect storge target Failed.")
else:
shell_txt = "iscsiadm -m discovery -t st -p " + target_portal
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("discovery storge target Failed.")
lines = rslt.splitlines()
for line in lines:
shell_txt = "iscsiadm -m node -T " + line.split(' ')[-1].strip() + " -p " + target_portal + " -l"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("connect storge target Failed.")
shell_txt = "rescan-scsi-bus.sh -f && rescan-scsi-bus.sh -r && rescan-scsi-bus.sh -u"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("flush scsi host Failed.")
shell_txt = "iscsiadm -m session --rescan"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise("rescan iscsi session Failed.")
@METRICS.timer('NodesController.detach_volume')
@expose.expose(wtypes.text, wtypes.text, wtypes.text)
def detach_volume(self, instance_uuid, target_portal):
context = pecan.request.context
list_object = objects.Node.list(context)
LOG.debug("instance_uuid: %s" % instance_uuid)
for obj in list_object:
if obj.instance_uuid == instance_uuid:
uuid = obj.uuid
rpc_node = api_utils.get_rpc_node_with_suffix(uuid)
if (rpc_node.power_state != ir_states.POWER_ON and
rpc_node.provision_state != ir_states.ACTIVE):
return "Failed: node system states error"
topic = pecan.request.rpcapi.get_topic_for(rpc_node)
ip = pecan.request.rpcapi.get_node_ip(context,
rpc_node.uuid,
topic)
os_type = obj.instance_info['image_properties'].get('os_type')
if "windows" == os_type:
LOG.error("discovery storge target.")
else:
shell_txt = "iscsiadm -m discovery -t st -p " + target_portal
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise ("discovery storge target Failed.")
lines = rslt.splitlines()
for line in lines:
shell_txt = "iscsiadm -m node -T " + line.split(' ')[-1].strip() + " -p " + target_portal + " -u"
rslt = self._send_cmd_to_agent(ip, "shell_stream", shell_txt)
if "Failed" in rslt:
raise ("connect storge target Failed.")
V. Modify the nova code: customize the ironic driver
1. Modify the driver
# nova/virt/ironic/driver.py
class IronicDriver(virt_driver.ComputeDriver):
"""Hypervisor driver for Ironic - bare metal provisioning."""
...
def attach_volume(self, context, connection_info, instance, mountpoint,
disk_bus=None, device_type=None, encryption=None):
try:
LOG.info("attach volume to bm Failed.")
self.ironicclient.call("node.attach_volume",instance.uuid,connection_info['data']['target_portal'])
except Exception as exc:
LOG.error("attach volume to bm Failed.")
raise exception.VolumeAttachFailed(
volume_id=connection_info['serial'],
reason=exc)
def detach_volume(self, context, connection_info, instance, mountpoint,
encryption=None):
try:
LOG.info("attach volume to bm Failed.111")
self.ironicclient.call("node.detach_volume",instance.uuid,connection_info['data']['target_portal'])
except Exception as exc:
LOG.error("detach volume to bm Failed.")
raise exception.VolumeDetachFailed(
volume_id=connection_info['serial'],
reason=exc)
...
VI. Modifications to rbd_iscsi_client
1. Install rbd_iscsi_client
# pip install rbd-iscsi-client
REST:
get_api - Get all the api endpoints
get_config - get the entire gateway config
get_targets - Get all of the target_iqn’s defined in the gateways
create_target_iqn - create a new target_iqn
delete_target_iqn - delete a target_iqn
get_clients - get the clients (initiators) defined in the gateways
get_client_info - get the client information
create_client - Register a new client (initiator) with the gateways
delete_client - unregister a client (initiator) from the gateways
set_client_auth - set CHAP credentials for the client (initiator)
get_disks - get list of volumes defined to the gateways
create_disk - create a new volume/disk that the gateways can export
find_disk - Find a disk that the gateway knows about
delete_disk - delete a disk from the gateway and pool
register_disk - Make the disk available to export to a client.
unregister_disk - Make a disk unavailable to export to a client.
export_disk - Export a registered disk to a client (initiator)
unexport_disk - unexport a disk from a client (initiator)
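As a standalone sanity check, the client can also be exercised directly against the gateway API. The constructor arguments match the client.py listed below; the endpoint and credentials are the same assumptions as in cinder.conf above.
from rbd_iscsi_client import client

cl = client.RBDISCSIClient('admin', 'admin', 'http://192.168.9.101:5000')

resp, body = cl.get_api()
print(resp['status'])        # '200' when the gateway API is reachable

resp, config = cl.get_config()
target_iqn = 'iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw'
resp, clients = cl.get_clients(target_iqn)
print(clients)               # e.g. {'clients': ['iqn.1994-05.com.redhat:rh7-client', ...]}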
2. Modify rbd_iscsi_client
Modified code:
(openstack-base)[root@rg2-test-control001 rbd_iscsi_client]# cat client.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License."""
""" RBDISCSIClient.
.. module: rbd_iscsi_client
:Author: Walter A. Boring IV
:Description: This is the HTTP REST Client that is used to make calls to
the ceph-iscsi/rbd-target-api service running on the ceph iscsi gateway
host.
"""
import json
import logging
import time
from rbd_iscsi_client import exceptions
import requests
class RBDISCSIClient(object):
"""REST client to rbd-target-api."""
USER_AGENT = "os_client"
#username = None
username = 'admin'
#password = None
password = 'admin'
#api_url = None
api_url = 'http://192.168.9.101:5000'
auth = None
http_log_debug = False
tries = 5
delay = 0
backoff = 2
timeout = 60
_logger = logging.getLogger(__name__)
retry_exceptions = (exceptions.HTTPServiceUnavailable,
requests.exceptions.ConnectionError)
def __init__(self, username, password, base_url,
suppress_ssl_warnings=False, timeout=None,
secure=False, http_log_debug=False):
super(RBDISCSIClient, self).__init__()
self.username = username
self.password = password
self.api_url = base_url
self.timeout = timeout
self.secure = secure
self.times = []
self.set_debug_flag(http_log_debug)
if suppress_ssl_warnings:
requests.packages.urllib3.disable_warnings()
self.auth = requests.auth.HTTPBasicAuth(username, password)
def set_debug_flag(self, flag):
"""Turn on/off http request/response debugging."""
if not self.http_log_debug and flag:
ch = logging.StreamHandler()
self._logger.setLevel(logging.DEBUG)
self._logger.addHandler(ch)
self.http_log_debug = True
def _http_log_req(self, args, kwargs):
if not self.http_log_debug:
return
string_parts = ['curl -i']
for element in args:
if element in ('GET', 'POST'):
string_parts.append(' -X %s' % element)
else:
string_parts.append(' %s' % element)
for element in kwargs['headers']:
header = ' -H "%s: %s"' % (element, kwargs['headers'][element])
string_parts.append(header)
if 'data' in kwargs:
string_parts.append(' -d ')
for key in kwargs['data']:
string_parts.append('%(key)s=%(value)s&' %
{'key': key,
'value': kwargs['data'][key]})
self._logger.debug("\nREQ: %s\n" % "".join(string_parts))
def _http_log_resp(self, resp, body):
if not self.http_log_debug:
return
# Replace commas with newlines to break the debug into new lines,
# making it easier to read
self._logger.debug("RESP:%s\n", str(resp).replace("',", "'\n"))
self._logger.debug("RESP BODY:%s\n", body)
def request(self, *args, **kwargs):
"""Perform an HTTP Request.
You should use get, post, delete instead.
"""
kwargs.setdefault('headers', kwargs.get('headers', {}))
kwargs['headers']['User-Agent'] = self.USER_AGENT
kwargs['headers']['Accept'] = 'application/json'
if 'data' in kwargs:
payload = kwargs['data']
else:
payload = None
# args[0] contains the URL, args[1] contains the HTTP verb/method
http_url = args[0]
http_method = args[1]
self._http_log_req(args, kwargs)
r = None
resp = None
body = None
while r is None and self.tries > 0:
try:
# Check to see if the request is being retried. If it is, we
# want to delay.
if self.delay:
time.sleep(self.delay)
if self.timeout:
r = requests.request(http_method, http_url, data=payload,
headers=kwargs['headers'],
auth=self.auth,
verify=self.secure,
timeout=self.timeout)
else:
r = requests.request(http_method, http_url, data=payload,
auth=self.auth,
headers=kwargs['headers'],
verify=self.secure)
resp = r.headers
body = r.text
if isinstance(body, bytes):
body = body.decode('utf-8')
# resp['status'], resp['content-location'], and resp.status
# need to be manually set as Python Requests doesn't provide
# them automatically.
resp['status'] = str(r.status_code)
resp.status = r.status_code
if 'location' not in resp:
resp['content-location'] = r.url
r.close()
self._http_log_resp(resp, body)
# Try and convert the body response to an object
# This assumes the body of the reply is JSON
if body:
try:
body = json.loads(body)
except ValueError:
pass
else:
body = None
if resp.status >= 400:
if body and 'message' in body:
body['desc'] = body['message']
raise exceptions.from_response(resp, body)
except requests.exceptions.SSLError as err:
self._logger.error(
"SSL certificate verification failed: (%s). You must have "
"a valid SSL certificate or disable SSL "
"verification.", err)
raise exceptions.SSLCertFailed(
"SSL Certificate Verification Failed.")
except self.retry_exceptions as ex:
# If we catch an exception where we want to retry, we need to
# decrement the retry count and prepare to try again.
r = None
self.tries -= 1
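# exponential backoff: with the defaults (delay=0, backoff=2) the
# waits between retries grow as 1s, 3s, 7s, ...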
self.delay = self.delay * self.backoff + 1
# Raise exception, we have exhausted all retries.
if self.tries == 0:
raise ex
except requests.exceptions.HTTPError as err:
raise exceptions.HTTPError("HTTP Error: %s" % err)
except requests.exceptions.URLRequired as err:
raise exceptions.URLRequired("URL Required: %s" % err)
except requests.exceptions.TooManyRedirects as err:
raise exceptions.TooManyRedirects(
"Too Many Redirects: %s" % err)
except requests.exceptions.Timeout as err:
raise exceptions.Timeout("Timeout: %s" % err)
except requests.exceptions.RequestException as err:
raise exceptions.RequestException(
"Request Exception: %s" % err)
return resp, body
def _time_request(self, url, method, **kwargs):
start_time = time.time()
resp, body = self.request(url, method, **kwargs)
self.times.append(("%s %s" % (method, url),
start_time, time.time()))
return resp, body
def _cs_request(self, url, method, **kwargs):
resp, body = self._time_request(self.api_url + url, method,
**kwargs)
return resp, body
def get(self, url, **kwargs):
return self._cs_request(url, 'GET', **kwargs)
def post(self, url, **kwargs):
return self._cs_request(url, 'POST', **kwargs)
def put(self, url, **kwargs):
return self._cs_request(url, 'PUT', **kwargs)
def delete(self, url, **kwargs):
return self._cs_request(url, 'DELETE', **kwargs)
def get_api(self):
"""Get the API endpoints."""
return self.get("/api")
def get_config(self):
"""Get the complete config object."""
return self.get("/api/config")
def get_sys_info(self, type):
"""Get system info of <type>.
Valid types are:
ip_address
checkconf
checkversions
"""
api = "/api/sysinfo/%(type)s" % {'type': type}
return self.get(api)
def get_gatewayinfo(self):
"""Get the number of active sessions on local gateway."""
return self.get("/api/gatewayinfo")
def get_targets(self):
"""Get the list of targets defined in the config."""
api = "/api/targets"
return self.get(api)
def get_target_info(self, target_iqn):
"""Returns the total number of active sessions for <target_iqn>"""
api = "/api/targetinfo/%(target_iqn)s" % {'target_iqn': target_iqn}
return self.get(api)
def create_target_iqn(self, target_iqn, mode=None, controls=None):
"""Create the target iqn on the gateway."""
api = "/api/target/%(target_iqn)s" % {'target_iqn': target_iqn}
payload = {}
if mode:
payload['mode'] = mode
if controls:
payload['controls'] = controls
return self.put(api, data=payload)
def delete_target_iqn(self, target_iqn):
"""Delete a target iqn from the gateways."""
api = "/api/target/%(target_iqn)s" % {'target_iqn': target_iqn}
return self.delete(api)
def get_clients(self, target_iqn):
"""List clients defined to the configuration."""
api = "/api/clients/%(target_iqn)s" % {'target_iqn': target_iqn}
return self.get(api)
def get_client_info(self, target_iqn, client_iqn):
"""Fetch the Client information from the gateways.
Alias, IP address and state for each connected portal.
"""
api = ("/api/clientinfo/%(target_iqn)s/%(client_iqn)s" %
{'target_iqn': target_iqn,
'client_iqn': client_iqn})
return self.get(api)
def create_client(self, target_iqn, client_iqn):
"""Delete a client."""
api = ("/api/client/%(target_iqn)s/%(client_iqn)s" %
{'target_iqn': target_iqn,
'client_iqn': client_iqn})
return self.put(api)
def delete_client(self, target_iqn, client_iqn):
"""Delete a client."""
api = ("/api/client/%(target_iqn)s/%(client_iqn)s" %
{'target_iqn': target_iqn,
'client_iqn': client_iqn})
return self.delete(api)
def set_client_auth(self, target_iqn, client_iqn, username, password):
"""Set the client chap credentials."""
url = ("/api/clientauth/%(target_iqn)s/%(client_iqn)s" %
{'target_iqn': target_iqn,
'client_iqn': client_iqn})
args = {'username': username,
'password': password}
return self.put(url, data=args)
def get_disks(self):
"""Get the rbd disks defined to the gateways."""
return self.get("/api/disks")
def create_disk(self, pool, image, size=None, extras=None):
"""Add a disk to the gateway."""
url = ("/api/disk/%(pool)s/%(image)s" %
{'pool': pool,
'image': image})
args = {'pool': pool,
'image': image,
'mode': 'create'}
if size:
args['size'] = size
if extras:
args.update(extras)
return self.put(url, data=args)
def find_disk(self, pool, image):
"""Find the disk in the gateway."""
url = ("/api/disk/%(pool)s/%(image)s" %
{'pool': pool,
'image': image})
return self.get(url)
def delete_disk(self, pool, image, preserve_image=True):
"""Delete a disk definition from the gateway.
By default it will not delete the rbd image from the pool.
If preserve_image is set to True, then this only tells the
gateway to forget about this volume. This is typically done
when the volume isn't used as an export anymore.
"""
url = ("/api/disk/%(pool)s/%(image)s" %
{'pool': pool,
'image': image})
if preserve_image is True:
preserve = 'true'
else:
preserve = 'false'
payload = {
'preserve_image': preserve
}
return self.delete(url, data=payload)
def register_disk(self, target_iqn, volume):
"""Add the volume to the target definition.
This is done after the disk is created in a pool, and
before the disk can be exported to an initiator.
"""
url = ("/api/targetlun/%(target_iqn)s" %
{'target_iqn': target_iqn})
args = {'disk': volume}
return self.put(url, data=args)
def unregister_disk(self, target_iqn, volume):
"""Remove the volume from the target definition.
This is done after the disk is unexported from an initiator
and before the disk can be deleted from the gateway.
"""
url = ("/api/targetlun/%(target_iqn)s" %
{'target_iqn': target_iqn})
args = {'disk': volume}
return self.delete(url, data=args)
def export_disk(self, target_iqn, client_iqn, pool, disk):
"""Add a disk to export to a client."""
url = ("/api/clientlun/%(target_iqn)s/%(client_iqn)s" %
{'target_iqn': target_iqn,
'client_iqn': client_iqn})
args = {'disk': "%(pool)s/%(disk)s" % {'pool': pool, 'disk': disk},
'client_iqn': client_iqn}
return self.put(url, data=args)
def unexport_disk(self, target_iqn, client_iqn, pool, disk):
"""Remove a disk to export to a client."""
url = ("/api/clientlun/%(target_iqn)s/%(client_iqn)s" %
{'target_iqn': target_iqn,
'client_iqn': client_iqn})
args = {'disk': "%(pool)s/%(disk)s" % {'pool': pool, 'disk': disk}}
return self.delete(url, data=args)
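To tie the wrapped endpoints above together, here is a minimal usage sketch (not part of the patched package) showing how the attach and detach paths drive rbd-target-api through this client. The initiator IQN, image name, size and CHAP credentials are placeholders; the gateway endpoint, admin credentials, target IQN and pool match the test environment used elsewhere in this article.

from rbd_iscsi_client import client

GATEWAY_API = 'http://192.168.9.101:5000'                 # rbd-target-api on ceph-node1
TARGET_IQN = 'iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw'  # target defined on the gateways
INITIATOR_IQN = 'iqn.1994-05.com.redhat:example-client'   # placeholder initiator IQN
POOL, IMAGE = 'rbd', 'volume-example'                     # placeholder RBD image

cli = client.RBDISCSIClient('admin', 'admin', GATEWAY_API,
                            suppress_ssl_warnings=True)

# attach path: create the RBD image, add it to the target as a LUN,
# make sure the initiator exists with CHAP credentials, then export the LUN
cli.create_disk(POOL, IMAGE, size='10G')                  # placeholder size, e.g. '10G'
cli.register_disk(TARGET_IQN, '%s/%s' % (POOL, IMAGE))
cli.create_client(TARGET_IQN, INITIATOR_IQN)
cli.set_client_auth(TARGET_IQN, INITIATOR_IQN, 'chap_user', 'chap_secret12')
cli.export_disk(TARGET_IQN, INITIATOR_IQN, POOL, IMAGE)

# detach path: reverse order -- unexport, remove the LUN, then drop the disk
cli.unexport_disk(TARGET_IQN, INITIATOR_IQN, POOL, IMAGE)
cli.unregister_disk(TARGET_IQN, '%s/%s' % (POOL, IMAGE))
cli.delete_disk(POOL, IMAGE, preserve_image=False)

The customized cinder RBDISCSIDriver and the ironic attach_volume/detach_volume code described earlier follow roughly this sequence, except that the endpoint, credentials and IQNs come from cinder.conf and the node's volume connector rather than constants.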
七、 Test attaching a volume (the required dependencies must already be installed in the bare-metal image)
[root@rg2-test-control001 ~]# openstack server list
+--------------------------------------+----------------+--------+-----------------------------------------------+-------------------------------------+----------------------------------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+----------------+--------+-----------------------------------------------+-------------------------------------+----------------------------------------+
| 53725a46-ddc3-4282-962a-4c0a4f37e217 | ironic-vms | ACTIVE | Classical_Network_Ironic_leaf1=192.31.165.235 | Centos-7.4-sec-partition-bare-image | bmc.lf1.c32.m64.d150.d0.Sugon-I620-G20 |
| 1f6ef3f1-6738-459e-acce-33b406a45821 | Ironic-compute | ACTIVE | Classical_Network_163=192.31.163.13 | Centos-7.4 | t8.16medium |
| 8235067b-8f62-48c6-a84f-ed791848f328 | test_rebuild | ACTIVE | Classical_Network_163=192.31.163.11 | Windows-10 | t2.4medium |
| 0cb5abad-39f0-4739-a001-d4d4b4345f66 | vms_01 | ACTIVE | Classical_Network_163=192.31.163.17 | Centos-7.6 | t2.4medium |
+--------------------------------------+----------------+--------+-----------------------------------------------+-------------------------------------+----------------------------------------+
[root@rg2-test-control001 ~]# openstack volume list
+--------------------------------------+--------------------------+-----------+------+-----------------------------------------+
| ID | Name | Status | Size | Attached to |
+--------------------------------------+--------------------------+-----------+------+-----------------------------------------+
| 1ffc1ee2-dd1f-435a-bfa6-8e5e17469f7f | test-iscsi-ironic-vol_66 | available | 60 | |
| 972f0121-59e3-44aa-87c0-d770827d755f | test-iscsi-ironic-vol_05 | in-use | 130 | Attached to Ironic-compute on /dev/vdc |
| 6da3ec1e-1e38-4816-b7f8-64ffb3d09c62 | test-iscsi-ironic-vol_03 | available | 100 | |
| e1933718-aa64-43ba-baea-a9ad64b4f10b | test-rbd-vol_02 | available | 10 | |
| b8daa91a-22aa-4fd0-a79d-83f2c9efe83c | test-iscsi-ironic-vol_02 | available | 10 | |
| fe5ecbb9-8440-4115-9824-605c39a2f7eb | test-iscsi-vol_01 | in-use | 10 | Attached to Ironic-compute on /dev/vdb |
| 3b2a93ae-1909-4e72-ab2f-2a504ad7fa6c | test-rbd-vol_01 | available | 10 | |
| 916644b7-eb14-47c9-9577-05655f933652 | ceph-rbd-sata_01 | available | 10 | |
| 2f79a45f-82d3-4aae-9a5b-ef0d585eea83 | test-rebuild-vol02 | in-use | 20 | Attached to test_rebuild on /dev/vdb |
| 45aac623-cd72-41a9-9495-f642f842bb5b | test-rebuild-vol01 | in-use | 10 | Attached to test_rebuild on /dev/vdc |
| d3414687-0a9b-4632-97cd-c51ea2954f27 | sata_test_vol_10_01 | available | 10 | |
| c5e4c5f3-5cd3-42d7-b94f-0a5c862f124e | sata_test_vol_10_01 | available | 10 | |
| f1a88991-b5d7-4063-b079-0b2c3424f81d | sata_test_vol_10_01 | available | 10 | |
+--------------------------------------+--------------------------+-----------+------+-----------------------------------------+
[root@rg2-test-control001 ~]# nova help | grep att
interface-attach Attach a network interface to a server.
interface-list List interfaces attached to a server.
image or a specified image, attaching the
volume-attach Attach a volume to a server.
volume-attachments List all the volumes attached to a server.
volume-update Update the attachment on the server. Migrates
the data from an attached volume to the
active attachment to the new volume.
[root@rg2-test-control001 ~]# nova volume-attachments 53725a46-ddc3-4282-962a-4c0a4f37e217
+----+--------+-----------+-----------+
| ID | DEVICE | SERVER ID | VOLUME ID |
+----+--------+-----------+-----------+
+----+--------+-----------+-----------+
[root@rg2-test-control001 ~]# nova volume-attach
usage: nova volume-attach <server> <volume> [<device>]
error: too few arguments
Try 'nova help volume-attach' for more information.
[root@rg2-test-control001 ~]# nova volume-attach 53725a46-ddc3-4282-962a-4c0a4f37e217 1ffc1ee2-dd1f-435a-bfa6-8e5e17469f7f
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/sdb |
| id | 1ffc1ee2-dd1f-435a-bfa6-8e5e17469f7f |
| serverId | 53725a46-ddc3-4282-962a-4c0a4f37e217 |
| volumeId | 1ffc1ee2-dd1f-435a-bfa6-8e5e17469f7f |
+----------+--------------------------------------+
[root@rg2-test-control001 ~]# nova volume-attachments 53725a46-ddc3-4282-962a-4c0a4f37e217
+--------------------------------------+----------+--------------------------------------+--------------------------------------+
| ID | DEVICE | SERVER ID | VOLUME ID |
+--------------------------------------+----------+--------------------------------------+--------------------------------------+
| 1ffc1ee2-dd1f-435a-bfa6-8e5e17469f7f | /dev/sdb | 53725a46-ddc3-4282-962a-4c0a4f37e217 | 1ffc1ee2-dd1f-435a-bfa6-8e5e17469f7f |
+--------------------------------------+----------+--------------------------------------+--------------------------------------+
[root@rg2-test-control001 ~]#
# log in to the bare-metal instance and check
[mmwei3@xfyunmanager001 ~]$ sudo ssh 192.31.165.235
Warning: Permanently added '192.31.165.235' (ECDSA) to the list of known hosts.
root@192.31.165.235's password:
Last login: Thu Apr 28 19:05:52 2022 from 192.31.162.123
[root@ironic-vms ~]#
[root@ironic-vms ~]#
[root@ironic-vms ~]#
[root@ironic-vms ~]#
[root@ironic-vms ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223.1G 0 disk
├─sda1 8:1 0 1M 0 part
└─sda2 8:2 0 223.1G 0 part /
sdb 8:16 0 60G 0 disk
└─1LIO-ORG_22108494-593a-43ea-84b7-2e17e7c61390 253:0 0 60G 0 mpath
[root@ironic-vms ~]# multipath -ll
1LIO-ORG_22108494-593a-43ea-84b7-2e17e7c61390 dm-0 LIO-ORG ,TCMU device
size=60G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
`- 21:0:0:7 sdb 8:16 active ready running
[root@ironic-vms ~]#
# check on the ceph-iscsi gateway (gwcli) for 53725a46-ddc3-4282-962a-4c0a4f37e217
/> cd /iscsi-targets/
/iscsi-targets> ls
o- iscsi-targets ................................................................................. [DiscoveryAuth: None, Targets: 1]
o- iqn.2003-01.com.redhat.iscsi-gw:iscsi-igw ........................................................... [Auth: None, Gateways: 2]
o- disks ............................................................................................................ [Disks: 8]
| o- rbd/disk_1 .................................................................................... [Owner: ceph-node1, Lun: 0]
| o- rbd/disk_2 .................................................................................... [Owner: ceph-node2, Lun: 1]
| o- rbd/disk_3 .................................................................................... [Owner: ceph-node1, Lun: 2]
| o- rbd/disk_4 .................................................................................... [Owner: ceph-node2, Lun: 3]
| o- rbd/disk_5 .................................................................................... [Owner: ceph-node1, Lun: 6]
| o- rbd/volume-1ffc1ee2-dd1f-435a-bfa6-8e5e17469f7f ............................................... [Owner: ceph-node2, Lun: 7]
| o- rbd/volume-972f0121-59e3-44aa-87c0-d770827d755f ............................................... [Owner: ceph-node2, Lun: 5]
| o- rbd/volume-fe5ecbb9-8440-4115-9824-605c39a2f7eb ............................................... [Owner: ceph-node1, Lun: 4]
o- gateways .............................................................................................. [Up: 2/2, Portals: 2]
| o- ceph-node1 ............................................................................................ [192.168.9.101 (UP)]
| o- ceph-node2 ........................................................................................... [192.168.10.135 (UP)]
o- host-groups .................................................................................................... [Groups : 0]
o- hosts ......................................................................................... [Auth: ACL_ENABLED, Hosts: 8]
o- iqn.1994-05.com.redhat:rh7-client ............................................................ [Auth: CHAP, Disks: 2(190G)]
| o- lun 0 .............................................................................. [rbd/disk_1(90G), Owner: ceph-node1]
| o- lun 1 ............................................................................. [rbd/disk_2(100G), Owner: ceph-node2]
o- iqn.1995-05.com.redhat:rh7-client ................................................. [LOGGED-IN, Auth: CHAP, Disks: 1(120G)]
| o- lun 2 ............................................................................. [rbd/disk_3(120G), Owner: ceph-node1]
o- iqn.1996-05.com.redhat:rh7-client ............................................................. [Auth: CHAP, Disks: 1(50G)]
| o- lun 3 .............................................................................. [rbd/disk_4(50G), Owner: ceph-node2]
o- iqn.1994-05.com.redhat:336ea081fb32 ................................................ [LOGGED-IN, Auth: CHAP, Disks: 1(50G)]
| o- lun 6 .............................................................................. [rbd/disk_5(50G), Owner: ceph-node1]
o- iqn.1994-05.com.redhat:53725a46-ddc3-4282-962a-4c0a4f37e217 ........................ [LOGGED-IN, Auth: CHAP, Disks: 1(60G)]
| o- lun 7 ......................................... [rbd/volume-1ffc1ee2-dd1f-435a-bfa6-8e5e17469f7f(60G), Owner: ceph-node2]
o- iqn.1994-05.com.redhat:186aa3199292 ............................................... [LOGGED-IN, Auth: CHAP, Disks: 2(140G)]
| o- lun 4 ......................................... [rbd/volume-fe5ecbb9-8440-4115-9824-605c39a2f7eb(10G), Owner: ceph-node1]
| o- lun 5 ........................................ [rbd/volume-972f0121-59e3-44aa-87c0-d770827d755f(130G), Owner: ceph-node2]
o- iqn.1994-05.com.redhat:5b5d7a76e ............................................................ [Auth: CHAP, Disks: 0(0.00Y)]
o- iqn.1994-05.com.redhat:82e2285f-2196-417c-ba42-cdb08a567c8d ................................. [Auth: CHAP, Disks: 0(0.00Y)]
/iscsi-targets>
八、 fio tests
# fio
# random-write IOPS: vdc attached via nova volume-attach (iscsi)
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=3G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/vdc -name=Rand_Write_Testing
[root@ironic-compute /]# fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=3G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/vdc -name=Rand_Write_Testing
Rand_Write_Testing: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=464KiB/s][r=0,w=116 IOPS][eta 00m:00s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=9458: Fri Apr 22 14:06:19 2022
write: IOPS=1968, BW=7876KiB/s (8065kB/s)(771MiB/100304msec)
slat (usec): min=3, max=4918, avg=11.09, stdev=14.17
clat (msec): min=3, max=5207, avg=64.99, stdev=161.24
lat (msec): min=3, max=5207, avg=65.00, stdev=161.24
clat percentiles (msec):
| 1.00th=[ 8], 5.00th=[ 13], 10.00th=[ 17], 20.00th=[ 24],
| 30.00th=[ 31], 40.00th=[ 38], 50.00th=[ 45], 60.00th=[ 53],
| 70.00th=[ 61], 80.00th=[ 74], 90.00th=[ 105], 95.00th=[ 159],
| 99.00th=[ 376], 99.50th=[ 567], 99.90th=[ 2333], 99.95th=[ 5000],
| 99.99th=[ 5201]
bw ( KiB/s): min= 16, max=13832, per=100.00%, avg=8397.66, stdev=3136.03, samples=188
iops : min= 4, max= 3458, avg=2099.40, stdev=784.01, samples=188
lat (msec) : 4=0.01%, 10=2.54%, 20=12.37%, 50=42.55%, 100=31.65%
lat (msec) : 250=8.71%, 500=1.52%, 750=0.35%, 1000=0.11%
cpu : usr=1.15%, sys=3.01%, ctx=91965, majf=0, minf=31
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,197492,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
WRITE: bw=7876KiB/s (8065kB/s), 7876KiB/s-7876KiB/s (8065kB/s-8065kB/s), io=771MiB (809MB), run=100304-100304msec
Disk stats (read/write):
vdc: ios=38/197425, merge=0/0, ticks=32/12772120, in_queue=12806445, util=99.98%
# fio against a manually attached multipath device
[root@ironic-compute /]# fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=3G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/mapper/mpatha -name=Rand_Write_Testing
Rand_Write_Testing: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=6330KiB/s][r=0,w=1582 IOPS][eta 00m:00s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=9465: Fri Apr 22 14:08:45 2022
write: IOPS=1961, BW=7847KiB/s (8036kB/s)(767MiB/100038msec)
slat (usec): min=4, max=1097, avg=13.92, stdev=14.86
clat (msec): min=3, max=1592, avg=65.23, stdev=90.10
lat (msec): min=3, max=1592, avg=65.24, stdev=90.10
clat percentiles (msec):
| 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 19], 20.00th=[ 26],
| 30.00th=[ 32], 40.00th=[ 39], 50.00th=[ 45], 60.00th=[ 52],
| 70.00th=[ 62], 80.00th=[ 78], 90.00th=[ 116], 95.00th=[ 178],
| 99.00th=[ 397], 99.50th=[ 625], 99.90th=[ 1133], 99.95th=[ 1167],
| 99.99th=[ 1552]
bw ( KiB/s): min= 336, max=15392, per=100.00%, avg=8044.54, stdev=3165.54, samples=195
iops : min= 84, max= 3848, avg=2011.10, stdev=791.38, samples=195
lat (msec) : 4=0.01%, 10=1.18%, 20=10.46%, 50=46.29%, 100=29.29%
lat (msec) : 250=10.06%, 500=2.08%, 750=0.26%, 1000=0.04%
cpu : usr=1.07%, sys=4.08%, ctx=95412, majf=0, minf=168
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,196254,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
WRITE: bw=7847KiB/s (8036kB/s), 7847KiB/s-7847KiB/s (8036kB/s-8036kB/s), io=767MiB (804MB), run=100038-100038msec
[root@ironic-compute /]#
# random-read IOPS
fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/vdc -name=Rand_Read_Testing
[root@ironic-compute /]# fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/vdc -name=Rand_Read_Testing
Rand_Read_Testing: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [r(1)][100.0%][r=97.7MiB/s,w=0KiB/s][r=25.0k,w=0 IOPS][eta 00m:00s]
Rand_Read_Testing: (groupid=0, jobs=1): err= 0: pid=9473: Fri Apr 22 14:10:53 2022
read: IOPS=21.1k, BW=82.5MiB/s (86.5MB/s)(1024MiB/12410msec)
slat (usec): min=3, max=961, avg= 8.85, stdev= 8.32
clat (usec): min=577, max=126316, avg=6046.47, stdev=5855.76
lat (usec): min=593, max=126321, avg=6055.78, stdev=5855.96
clat percentiles (usec):
| 1.00th=[ 1057], 5.00th=[ 1549], 10.00th=[ 1958], 20.00th=[ 2704],
| 30.00th=[ 3425], 40.00th=[ 4146], 50.00th=[ 4883], 60.00th=[ 5604],
| 70.00th=[ 6390], 80.00th=[ 7439], 90.00th=[ 10159], 95.00th=[ 15008],
| 99.00th=[ 30802], 99.50th=[ 38536], 99.90th=[ 63701], 99.95th=[ 84411],
| 99.99th=[106431]
bw ( KiB/s): min=57216, max=100416, per=100.00%, avg=84753.21, stdev=12292.36, samples=24
iops : min=14304, max=25104, avg=21188.25, stdev=3073.08, samples=24
lat (usec) : 750=0.04%, 1000=0.66%
lat (msec) : 2=9.79%, 4=27.46%, 10=51.82%, 20=7.43%, 50=2.56%
lat (msec) : 100=0.22%, 250=0.02%
cpu : usr=7.36%, sys=24.85%, ctx=83617, majf=0, minf=164
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
READ: bw=82.5MiB/s (86.5MB/s), 82.5MiB/s-82.5MiB/s (86.5MB/s-86.5MB/s), io=1024MiB (1074MB), run=12410-12410msec
Disk stats (read/write):
vdc: ios=261988/0, merge=0/0, ticks=1571246/0, in_queue=1572183, util=99.28%
[root@ironic-compute /]# fio -direct=1 -iodepth=128 -rw=randread -ioengine=libaio -bs=4k -size=1G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/mapper/mpatha -name=Rand_Read_Testing
Rand_Read_Testing: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [r(1)][92.3%][r=85.7MiB/s,w=0KiB/s][r=21.9k,w=0 IOPS][eta 00m:01s]
Rand_Read_Testing: (groupid=0, jobs=1): err= 0: pid=9479: Fri Apr 22 14:11:58 2022
read: IOPS=20.6k, BW=80.6MiB/s (84.5MB/s)(1024MiB/12709msec)
slat (usec): min=2, max=41683, avg=10.84, stdev=34.23
clat (usec): min=844, max=1136.7k, avg=6190.53, stdev=24830.10
lat (usec): min=923, max=1136.7k, avg=6201.99, stdev=24829.94
clat percentiles (usec):
| 1.00th=[ 1942], 5.00th=[ 2573], 10.00th=[ 3032],
| 20.00th=[ 3687], 30.00th=[ 4293], 40.00th=[ 4817],
| 50.00th=[ 5342], 60.00th=[ 5866], 70.00th=[ 6456],
| 80.00th=[ 7177], 90.00th=[ 8356], 95.00th=[ 9634],
| 99.00th=[ 14615], 99.50th=[ 17957], 99.90th=[ 26870],
| 99.95th=[ 36963], 99.99th=[1132463]
bw ( KiB/s): min=19512, max=98984, per=100.00%, avg=85426.88, stdev=16630.47, samples=24
iops : min= 4878, max=24746, avg=21356.67, stdev=4157.60, samples=24
lat (usec) : 1000=0.01%
lat (msec) : 2=1.25%, 4=24.03%, 10=70.60%, 20=3.76%, 50=0.30%
cpu : usr=5.48%, sys=27.63%, ctx=28641, majf=0, minf=162
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=262144,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
READ: bw=80.6MiB/s (84.5MB/s), 80.6MiB/s-80.6MiB/s (84.5MB/s-84.5MB/s), io=1024MiB (1074MB), run=12709-12709msec
[root@ironic-compute /]#
# sequential-write throughput
fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/vdc -name=Write_PPS_Testing
[root@ironic-compute /]# fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/vdc -name=Write_PPS_Testing
Write_PPS_Testing: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [W(1)][90.9%][r=0KiB/s,w=17.0MiB/s][r=0,w=17 IOPS][eta 00m:02s]
Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=9483: Fri Apr 22 14:13:38 2022
write: IOPS=49, BW=49.5MiB/s (51.9MB/s)(1024MiB/20681msec)
slat (usec): min=110, max=2136.4k, avg=19853.55, stdev=99022.75
clat (msec): min=256, max=4429, avg=1269.02, stdev=1098.16
lat (msec): min=256, max=4437, avg=1288.88, stdev=1112.12
clat percentiles (msec):
| 1.00th=[ 305], 5.00th=[ 435], 10.00th=[ 485], 20.00th=[ 527],
| 30.00th=[ 558], 40.00th=[ 600], 50.00th=[ 651], 60.00th=[ 844],
| 70.00th=[ 1334], 80.00th=[ 2333], 90.00th=[ 3272], 95.00th=[ 3608],
| 99.00th=[ 4329], 99.50th=[ 4396], 99.90th=[ 4396], 99.95th=[ 4463],
| 99.99th=[ 4463]
bw ( KiB/s): min=14336, max=131072, per=100.00%, avg=57882.59, stdev=38516.75, samples=34
iops : min= 14, max= 128, avg=56.47, stdev=37.66, samples=34
lat (msec) : 500=14.36%, 750=40.43%, 1000=13.38%
cpu : usr=0.40%, sys=0.74%, ctx=277, majf=0, minf=31
IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=0,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=1024MiB (1074MB), run=20681-20681msec
Disk stats (read/write):
vdc: ios=40/2997, merge=0/0, ticks=45/2538107, in_queue=2557539, util=99.14%
[root@ironic-compute /]# fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=1G -numjobs=1 -runtime=1000 -group_reporting -filename=/dev/mapper/mpatha -name=Write_PPS_Testing
Write_PPS_Testing: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [W(1)][94.7%][r=0KiB/s,w=29.0MiB/s][r=0,w=29 IOPS][eta 00m:01s]
Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=9490: Fri Apr 22 14:14:20 2022
write: IOPS=55, BW=55.5MiB/s (58.2MB/s)(1024MiB/18464msec)
slat (usec): min=64, max=1738, avg=205.10, stdev=71.21
clat (msec): min=316, max=4138, avg=1152.03, stdev=770.55
lat (msec): min=316, max=4138, avg=1152.24, stdev=770.55
clat percentiles (msec):
| 1.00th=[ 355], 5.00th=[ 393], 10.00th=[ 443], 20.00th=[ 510],
| 30.00th=[ 617], 40.00th=[ 676], 50.00th=[ 877], 60.00th=[ 1133],
| 70.00th=[ 1536], 80.00th=[ 1737], 90.00th=[ 1921], 95.00th=[ 3138],
| 99.00th=[ 3775], 99.50th=[ 3842], 99.90th=[ 4144], 99.95th=[ 4144],
| 99.99th=[ 4144]
bw ( KiB/s): min= 8192, max=180224, per=100.00%, avg=63476.71, stdev=44472.36, samples=31
iops : min= 8, max= 176, avg=61.87, stdev=43.49, samples=31
lat (msec) : 500=19.43%, 750=25.59%, 1000=9.47%
cpu : usr=0.46%, sys=0.77%, ctx=422, majf=0, minf=29
IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=0,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=55.5MiB/s (58.2MB/s), 55.5MiB/s-55.5MiB/s (58.2MB/s-58.2MB/s), io=1024MiB (1074MB), run=18464-18464msec
# ironic server, iscsi
# random write against a manually attached multipath device
fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=3G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/mapper/36001405e306130e3c4543d9a4b72a6e2 -name=Rand_Write_Testing
[root@ironic-vms ~]# fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=3G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/mapper/36001405e306130e3c4543d9a4b72a6e2 -name=Rand_Write_Testing
Rand_Write_Testing: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=10.0MiB/s][r=0,w=2812 IOPS][eta 00m:00s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=32635: Fri Apr 22 14:30:04 2022
write: IOPS=1897, BW=7588KiB/s (7770kB/s)(742MiB/100117msec)
slat (usec): min=2, max=50645, avg= 5.37, stdev=164.20
clat (msec): min=3, max=1355, avg=67.47, stdev=91.89
lat (msec): min=3, max=1355, avg=67.47, stdev=91.89
clat percentiles (msec):
| 1.00th=[ 9], 5.00th=[ 14], 10.00th=[ 18], 20.00th=[ 25],
| 30.00th=[ 33], 40.00th=[ 40], 50.00th=[ 48], 60.00th=[ 57],
| 70.00th=[ 68], 80.00th=[ 85], 90.00th=[ 121], 95.00th=[ 178],
| 99.00th=[ 418], 99.50th=[ 592], 99.90th=[ 1167], 99.95th=[ 1200],
| 99.99th=[ 1267]
bw ( KiB/s): min= 200, max=13912, per=100.00%, avg=7825.75, stdev=2808.63, samples=194
iops : min= 50, max= 3478, avg=1956.42, stdev=702.15, samples=194
lat (msec) : 4=0.01%, 10=2.14%, 20=11.42%, 50=39.57%, 100=32.82%
lat (msec) : 250=11.54%, 500=1.87%, 750=0.20%, 1000=0.06%
cpu : usr=0.28%, sys=0.87%, ctx=122785, majf=0, minf=78
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,189924,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
WRITE: bw=7588KiB/s (7770kB/s), 7588KiB/s-7588KiB/s (7770kB/s-7770kB/s), io=742MiB (778MB), run=100117-100117msec
[root@ironic-vms ~]#
[root@ironic-vms ~]# fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=5G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/mapper/36001405e306130e3c4543d9a4b72a6e2 -name=Write_PPS_Testing
Write_PPS_Testing: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=77.1MiB/s][r=0,w=77 IOPS][eta 00m:00s]
Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=754: Fri Apr 22 14:33:18 2022
write: IOPS=106, BW=106MiB/s (111MB/s)(5120MiB/48165msec)
slat (usec): min=52, max=37735, avg=181.50, stdev=593.52
clat (msec): min=315, max=1588, avg=601.72, stdev=189.61
lat (msec): min=315, max=1589, avg=601.90, stdev=189.61
clat percentiles (msec):
| 1.00th=[ 372], 5.00th=[ 414], 10.00th=[ 443], 20.00th=[ 481],
| 30.00th=[ 510], 40.00th=[ 535], 50.00th=[ 558], 60.00th=[ 592],
| 70.00th=[ 625], 80.00th=[ 676], 90.00th=[ 776], 95.00th=[ 885],
| 99.00th=[ 1469], 99.50th=[ 1502], 99.90th=[ 1586], 99.95th=[ 1586],
| 99.99th=[ 1586]
bw ( KiB/s): min=24576, max=176128, per=100.00%, avg=111353.72, stdev=28281.16, samples=93
iops : min= 24, max= 192, avg=108.71, stdev=27.63, samples=93
lat (msec) : 500=26.62%, 750=61.33%, 1000=8.55%
cpu : usr=0.56%, sys=1.22%, ctx=20168, majf=0, minf=29
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=5120MiB (5369MB), run=48165-48165msec
# tests after the ironic attach flow is automated
[root@ironic-vms ~]# fio -direct=1 -iodepth=128 -rw=randwrite -ioengine=libaio -bs=4k -size=3G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/mapper/1LIO-ORG_a3f27a9a-7a86-4478-b467-b4f583e5c703 -name=Rand_Write_Testing
Rand_Write_Testing: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [w(1)][100.0%][r=0KiB/s,w=11.4MiB/s][r=0,w=2908 IOPS][eta 00m:00s]
Rand_Write_Testing: (groupid=0, jobs=1): err= 0: pid=131689: Fri Apr 22 20:31:37 2022
write: IOPS=2021, BW=8088KiB/s (8282kB/s)(791MiB/100099msec)
slat (nsec): min=1547, max=209849, avg=16848.76, stdev=2209.31
clat (msec): min=3, max=2731, avg=63.30, stdev=109.20
lat (msec): min=3, max=2731, avg=63.30, stdev=109.20
clat percentiles (msec):
| 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 17], 20.00th=[ 24],
| 30.00th=[ 31], 40.00th=[ 38], 50.00th=[ 45], 60.00th=[ 52],
| 70.00th=[ 62], 80.00th=[ 78], 90.00th=[ 112], 95.00th=[ 163],
| 99.00th=[ 355], 99.50th=[ 451], 99.90th=[ 2232], 99.95th=[ 2333],
| 99.99th=[ 2601]
bw ( KiB/s): min= 96, max=15744, per=100.00%, avg=8471.03, stdev=3148.45, samples=191
iops : min= 24, max= 3936, avg=2117.73, stdev=787.12, samples=191
lat (msec) : 4=0.01%, 10=2.32%, 20=12.19%, 50=43.24%, 100=29.84%
lat (msec) : 250=10.35%, 500=1.69%, 750=0.17%, 1000=0.01%
cpu : usr=0.29%, sys=0.78%, ctx=132272, majf=0, minf=77
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,202394,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=128
Run status group 0 (all jobs):
WRITE: bw=8088KiB/s (8282kB/s), 8088KiB/s-8088KiB/s (8282kB/s-8282kB/s), io=791MiB (829MB), run=100099-100099msec
[root@ironic-vms ~]#
[root@ironic-vms ~]# fio -direct=1 -iodepth=64 -rw=write -ioengine=libaio -bs=1024k -size=5G -numjobs=1 -runtime=100 -group_reporting -filename=/dev/mapper/1LIO-ORG_a3f27a9a-7a86-4478-b467-b4f583e5c703 -name=Write_PPS_Testing
Write_PPS_Testing: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=64
fio-3.7
Starting 1 process
Jobs: 1 (f=1): [W(1)][100.0%][r=0KiB/s,w=33.0MiB/s][r=0,w=33 IOPS][eta 00m:00s]
Write_PPS_Testing: (groupid=0, jobs=1): err= 0: pid=13629: Fri Apr 22 20:33:24 2022
write: IOPS=113, BW=113MiB/s (119MB/s)(5120MiB/45125msec)
slat (usec): min=60, max=529, avg=177.95, stdev=45.43
clat (msec): min=79, max=2255, avg=563.73, stdev=366.67
lat (msec): min=79, max=2256, avg=563.90, stdev=366.67
clat percentiles (msec):
| 1.00th=[ 309], 5.00th=[ 342], 10.00th=[ 363], 20.00th=[ 384],
| 30.00th=[ 405], 40.00th=[ 426], 50.00th=[ 447], 60.00th=[ 472],
| 70.00th=[ 506], 80.00th=[ 617], 90.00th=[ 776], 95.00th=[ 1670],
| 99.00th=[ 2123], 99.50th=[ 2198], 99.90th=[ 2265], 99.95th=[ 2265],
| 99.99th=[ 2265]
bw ( KiB/s): min= 8192, max=192512, per=100.00%, avg=126295.72, stdev=44530.59, samples=82
iops : min= 8, max= 188, avg=123.32, stdev=43.49, samples=82
lat (msec) : 100=0.10%, 250=0.49%, 500=67.87%, 750=20.68%, 1000=4.14%
cpu : usr=0.79%, sys=1.23%, ctx=2059, majf=0, minf=29
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=64
Run status group 0 (all jobs):
WRITE: bw=113MiB/s (119MB/s), 113MiB/s-113MiB/s (119MB/s-119MB/s), io=5120MiB (5369MB), run=45125-45125msec
[root@ironic-vms ~]#
九、 Summary
1、Bare-metal (ironic) instances support attaching and detaching cloud disks
2、Cloud disks can be expanded and shrunk
3、Both cloud VMs and bare-metal machines can attach ceph-iscsi volumes
4、The measured performance meets expectations