Ceph Installation

This article walks through deploying a small Ceph (jewel) cluster with ceph-deploy.

Environment

Host 1: yunwei-store-1, CentOS 7, 192.168.213.155, roles: mon, osd, rgw, ceph-deploy
Host 2: yunwei-store-2, CentOS 7, 192.168.213.154, roles: mon, osd
Host 3: yunwei-store-3, CentOS 7, 192.168.213.153, roles: mon, osd
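ceph-deploy and the monitors address the nodes by hostname, so each host must be able to resolve the others. A minimal sketch of the /etc/hosts entries this assumes on all three hosts (skip if DNS already resolves the names):

192.168.213.155 yunwei-store-1
192.168.213.154 yunwei-store-2
192.168.213.153 yunwei-store-3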

Add the Ceph repository

Run on all hosts; /etc/yum.repos.d/ceph.repo should contain:
cat /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for x86_64
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

Then run yum makecache to rebuild the yum metadata cache.
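A quick way to confirm the new repositories are visible after the cache is built:

yum repolist enabled | grep -i ceph   # should list the Ceph, Ceph-noarch and ceph-source repos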

Set up passwordless SSH login

Run on all hosts:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Then copy each host's ~/.ssh/id_rsa.pub into the ~/.ssh/authorized_keys file on every other host.
chmod 600 ~/.ssh/authorized_keys
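As an alternative to copying key files by hand, ssh-copy-id pushes a key to the other hosts; a sketch from yunwei-store-1, assuming deployment is done as root and password login is still enabled at this point:

ssh-copy-id root@yunwei-store-2   # prompts for the password once
ssh-copy-id root@yunwei-store-3
ssh yunwei-store-2 hostname       # should print the remote hostname without asking for a password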

Install the deployment tool

Run on yunwei-store-1:
yum -y install ceph-deploy
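To confirm the tool is installed and on PATH before proceeding:

ceph-deploy --version   # prints the ceph-deploy version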

Create the cluster

Run on yunwei-store-1:
mkdir -p /opt/ceph/ceph-cluster
cd /opt/ceph/ceph-cluster
ceph-deploy new yunwei-store-1 yunwei-store-2 yunwei-store-3
ceph-deploy generates three files in the ceph-cluster directory: ceph.conf is the Ceph configuration file, ceph-deploy-ceph.log is the ceph-deploy log, and ceph.mon.keyring is the Ceph monitor keyring.
Add osd pool default size = 2 to ceph.conf.
osd pool default size is the default number of replicas for each pool.
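After the edit, ceph.conf should look roughly like the sketch below. The fsid, mon_initial_members, mon_host and auth lines are generated by ceph-deploy new (the values here are illustrative); only the last line is added by hand:

[global]
fsid = a3771c3a-af5a-4f94-9be2-63f8f49e6a3a
mon_initial_members = yunwei-store-1, yunwei-store-2, yunwei-store-3
mon_host = 192.168.213.155,192.168.213.154,192.168.213.153
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2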

Install Ceph

Run on yunwei-store-1:
ceph-deploy install yunwei-store-1 yunwei-store-2 yunwei-store-3
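To verify the packages landed on all three nodes, the installed version can be checked on each host (jewel releases are 10.2.x):

ceph --version   # run on every host; all three should report the same jewel (10.2.x) release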

Initialize the monitors

Run on yunwei-store-1:
ceph-deploy mon create-initial
Check the cluster status:
ceph -s
    cluster a3771c3a-af5a-4f94-9be2-63f8f49e6a3a
     health HEALTH_ERR
            no osds
     monmap e1: 3 mons at {yunwei-store-1=192.168.225.22:6789/0,yunwei-store-2=192.168.225.24:6789/0,yunwei-store-3=192.168.225.23:6789/0}
            election epoch 6, quorum 0,1,2 yunwei-store-1,yunwei-store-3,yunwei-store-2
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise,require_jewel_osds
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
Monitors are now running on all three hosts.
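Besides ceph -s, the monitor quorum can be inspected directly with standard mon commands:

ceph mon stat                              # one-line summary of the monitor map and quorum
ceph quorum_status --format json-pretty    # detailed quorum membership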

Create OSDs

Run on yunwei-store-1:
OSDs are where the data is ultimately stored. Check the available disk first:
fdisk -l
Disk /dev/vdc: 21.5 GB, 21474836480 bytes, 41943040 sectors
(1) Prepare the OSDs:
ceph-deploy osd prepare yunwei-store-1:vdc yunwei-store-2:vdc yunwei-store-3:vdc
(2) Activate the OSDs:
ceph-deploy osd activate yunwei-store-1:/dev/vdc1 yunwei-store-2:/dev/vdc1 yunwei-store-3:/dev/vdc1
Check the status:
ceph -s
    cluster a3771c3a-af5a-4f94-9be2-63f8f49e6a3a
     health HEALTH_OK
     monmap e1: 3 mons at {yunwei-store-1=192.168.225.22:6789/0,yunwei-store-2=192.168.225.24:6789/0,yunwei-store-3=192.168.225.23:6789/0}
            election epoch 6, quorum 0,1,2 yunwei-store-1,yunwei-store-3,yunwei-store-2
     osdmap e16: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v37: 64 pgs, 1 pools, 0 bytes data, 0 objects
            322 MB used, 45724 MB / 46046 MB avail
                  64 active+clean
Three OSDs are up and in.
View the cluster OSD information:
ceph osd tree
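ceph-deploy also offers an osd create subcommand that combines the prepare and activate steps, if your ceph-deploy version supports it. Once the OSDs are in, cluster and per-OSD usage can be checked with:

ceph df        # cluster-wide and per-pool usage
ceph osd df    # per-OSD utilization and weight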

Distribute the keys and configuration

Run on yunwei-store-1:
ceph-deploy admin yunwei-store-1 yunwei-store-2 yunwei-store-3
Run on all hosts to set read permission on the keyring file:
chmod +r /etc/ceph/ceph.client.admin.keyring
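With ceph.conf and the admin keyring now present under /etc/ceph on every host, any of the three nodes should be able to query the cluster; a quick check:

ceph health    # expect HEALTH_OK
ceph -s        # full status, as shown above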