**一、Error when adding a monitor**
~~~
[wlwjfx25][DEBUG ] connected to host: WLWJFX32
[wlwjfx25][INFO ] Running command: ssh -CT -o BatchMode=yes wlwjfx25
[wlwjfx25][DEBUG ] connection detected need for sudo
sudo: sorry, you must have a tty to run sudo
[ceph_deploy][ERROR ] RuntimeError: connecting to host: wlwjfx25 resulted in errors: IOError cannot send (already closed?)
~~~
**Solution:** When running scripts under a non-root account, sudo often fails with "sudo: sorry, you must have a tty to run sudo". Adjusting the sudo configuration fixes this:
~~~
vi /etc/sudoers    (preferably use the visudo command)
Comment out the "Defaults requiretty" line:
#Defaults requiretty
~~~
This setting means sudo requires a TTY by default; commenting it out lets sudo run non-interactively in the background.

**二、Error from ceph-deploy mon create-initial**
~~~
admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
~~~
Add the following to the configuration file:
~~~
[osd]
osd max object name len = 256       # required, otherwise creating the mon fails
osd max object namespace len = 64   # same as above
rbd default features = 1
~~~

**三、Ceph status is HEALTH_WARN**
~~~
[root@WLWJFX62 ~]# ceph -s
    cluster e062ce71-bfb3-4895-8373-6203de2fa793
     health HEALTH_WARN
            too few PGs per OSD (10 < min 30)
     monmap e1: 3 mons at {WLWJFX23=10.255.213.133:6789/0,WLWJFX24=10.255.213.134:6789/0,WLWJFX25=10.255.213.135:6789/0}
            election epoch 10, quorum 0,1,2 WLWJFX23,WLWJFX24,WLWJFX25
     mdsmap e7: 1/1/1 up {0=WLWJFX34=up:active}
     osdmap e611: 145 osds: 145 up, 145 in
      pgmap v1283: 512 pgs, 3 pools, 11667 bytes data, 20 objects
            742 GB used, 744 TB / 785 TB avail
                 512 active+clean
~~~
Running ceph health confirms the cause:
~~~
[root@WLWJFX62 ~]# ceph health
HEALTH_WARN too few PGs per OSD (10 < min 30)
~~~
pg_num and pgp_num need to be increased.
1. List the existing pools:
~~~
[root@WLWJFX23 ceph]# ceph osd pool stats
pool rbd id 0
  nothing is going on

pool fs_data id 3
  nothing is going on

pool fs_metadata id 4
  nothing is going on
~~~
2. Get the current pg_num and pgp_num values of each pool:
~~~
ceph osd pool get fs_data pg_num
ceph osd pool get fs_data pgp_num
ceph osd pool get fs_metadata pg_num
ceph osd pool get fs_metadata pgp_num
~~~
3. Set new pg_num and pgp_num values for each pool:
~~~
ceph osd pool set fs_data pg_num 512
ceph osd pool set fs_data pgp_num 512
ceph osd pool set fs_metadata pg_num 512
ceph osd pool set fs_metadata pgp_num 512
~~~
Check again with ceph -s:
~~~
[root@WLWJFX23 ceph]# ceph -s
    cluster e062ce71-bfb3-4895-8373-6203de2fa793
     health HEALTH_WARN
            too few PGs per OSD (26 < min 30)
     monmap e1: 3 mons at {WLWJFX23=10.255.213.133:6789/0,WLWJFX24=10.255.213.134:6789/0,WLWJFX25=10.255.213.135:6789/0}
            election epoch 10, quorum 0,1,2 WLWJFX23,WLWJFX24,WLWJFX25
     mdsmap e7: 1/1/1 up {0=WLWJFX34=up:active}
     osdmap e627: 145 osds: 145 up, 145 in
      pgmap v1352: 1280 pgs, 3 pools, 11667 bytes data, 20 objects
            742 GB used, 744 TB / 785 TB avail
                1280 active+clean
~~~
If "too few PGs per OSD (26 < min 30)" is still reported, pg_num and pgp_num must be increased further; the values should preferably be an **integer power of 2** (a rough sizing sketch follows at the end of this section).
4. Note that pg_num can only be increased, never decreased:
~~~
[root@mon1 ~]# ceph osd pool set rbd pg_num 64
Error EEXIST: specified pg_num 64 <= current 128
~~~
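To pick a sensible target, a common rule of thumb is roughly 100 PGs per OSD, divided by the replica size and the number of pools, then rounded up to a power of two. The sketch below only illustrates that arithmetic; the target of 100 PGs per OSD and the replica size of 3 are assumptions, not values read from this cluster, and the result should be sanity-checked against the official PG calculator guidance before applying it.
~~~
# Rough sketch: estimate a per-pool pg_num as a power of two.
# Assumed rule of thumb: ~100 PGs per OSD, replica size 3, 3 pools.
osds=145      # number of OSDs in this cluster (from `ceph -s` above)
target=100    # assumed target PGs per OSD
size=3        # assumed pool replica size
pools=3       # number of pools sharing the PG budget

raw=$(( osds * target / size / pools ))

# round up to the next integer power of 2
pg=1
while [ "$pg" -lt "$raw" ]; do pg=$(( pg * 2 )); done

echo "suggested pg_num / pgp_num per pool: $pg"
~~~
With 145 OSDs and these assumptions the sketch suggests 2048 per pool, comfortably above the 30-PGs-per-OSD warning threshold; since pg_num cannot be reduced later, err on the conservative side if unsure.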
**四、Error when creating OSDs**
~~~
[ceph_deploy][ERROR ] RuntimeError: bootstrap-osd keyring not found; run 'gatherkeys'
~~~
Log in to the admin (jump) node and run:
~~~
ceph-deploy gatherkeys WLWJFX{64..72}
~~~
Things to watch out for:
1. Ceph 10 (Jewel) requires the glibc version shipped with the CentOS 1611 release; otherwise incompatibility errors appear.
2. Check that the clocks on all hosts are consistent (a quick check is sketched at the end of this section).
3. yum fails while installing yum-plugin-priorities:
~~~
[root@xhw342 ~]# yum -y install yum-plugin-priorities
Loaded plugins: fastestmirror
CentOS7_1611-media                        | 3.6 kB  00:00:00
ZStack                                    | 3.6 kB  00:00:00
ceph-jewel                                | 2.9 kB  00:00:00
ceph-jewel_deprpm                         | 2.9 kB  00:00:00
ceph-jewel_noarch                         | 2.9 kB  00:00:00

 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Disable the repository, so yum won't use it by default. Yum will then
        just ignore the repository until you permanently enable it again or use
        --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>

     4. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus yum will be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again
~~~
**Solution:** go into /etc/yum.repos.d and delete epel.repo and epel-testing.repo.
4. rpm cannot import the Ceph release key because download.ceph.com cannot be resolved:
~~~
[xhw342][DEBUG ] Configure Yum priorities to include obsoletes
[xhw342][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[xhw342][INFO ] Running command: rpm --import https://download.ceph.com/keys/release.asc
[xhw342][WARNIN] curl: (6) Could not resolve host: download.ceph.com; Unknown error
[xhw342][WARNIN] error: https://download.ceph.com/keys/release.asc: import read failed(2).
[xhw342][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm --import https://download.ceph.com/keys/release.asc
~~~
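For point 2 above (clock consistency), a minimal check run from the admin node is sketched below. It assumes passwordless SSH to the listed hosts (the same access ceph-deploy already uses) and only detects second-level drift; the monitors are far stricter than that, so real synchronization still needs ntpd or chrony on every node.
~~~
# Rough sketch: compare each node's clock against the admin node.
# Assumes passwordless SSH to the hosts (as required by ceph-deploy).
for h in WLWJFX{64..72}; do
    remote=$(ssh -o BatchMode=yes "$h" date +%s)   # remote epoch seconds
    local_now=$(date +%s)                          # local epoch seconds
    echo "$h offset: $(( remote - local_now )) s"
done
~~~
Any host showing an offset of more than a second or two here needs its time source fixed before continuing with the deployment.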