
Installing Ceph on CentOS 7

May 30, 2016

1. Installation environment

admin -----|-----node1 (mon, osd)   sda is the system disk; sdb and sdc are OSD disks
           |-----node2 (mon, osd)   sda is the system disk; sdb and sdc are OSD disks
           |-----node3 (mon, osd)   sda is the system disk; sdb and sdc are OSD disks
           |-----client

Ceph monitors communicate with each other on port 6789 by default; OSDs communicate with each other on ports in the range 6800-7300 by default.
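
If you would rather keep firewalld running than disable it (as section 2.2 below does), opening just these ports is enough. A minimal sketch:

firewall-cmd --zone=public --add-port=6789/tcp --permanent       # monitor port
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent  # OSD port range
firewall-cmd --reload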

2. Preparation (all nodes)

2.1 Configure the IP address

vim /etc/sysconfig/network-scripts/ifcfg-em1

IPADDR=192.168.130.205

NETMASK=255.255.255.0

GATEWAY=192.168.130.2

2.2 Disable the firewall

systemctl stop firewalld.service      # stop firewalld
systemctl disable firewalld.service   # prevent firewalld from starting at boot
firewall-cmd --state                  # check firewall status

2.3 Replace the yum repository

cd /etc/yum.repos.d

mv CentOS-Base.repo CentOS-Base.repo.bk

wget http://mirrors.163.com/.help/CentOS7-Base-163.repo   # use the CentOS 7 repo file, not the CentOS 6 one

yum makecache

2.4 Set the time zone and enable NTP

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

yum -y install ntp

systemctl enable ntpd

systemctl start ntpd

ntpstat

2.5 Edit /etc/hosts

vim /etc/hosts

192.168.130.205 admin

192.168.130.204 client

192.168.130.203 node3

192.168.130.202 node2

192.168.130.201 node1

2.6 Install the EPEL repository, add the Ceph yum repository, and refresh the package cache

Install the EPEL repository:

rpm -vih http://mirrors.sohu.com/fedora-epel/7/x86_64/e/epel-release-7-2.noarch.rpm

Add the Ceph yum repository (a separate noarch section is needed because ceph-deploy is a noarch package):

vim /etc/yum.repos.d/ceph.repo

[ceph]
name=Ceph packages for x86_64
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/x86_64/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-hammer/el7/noarch/
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

2.7 Install ceph-deploy and ceph (install ceph on all Ceph nodes; ceph-deploy is only needed on the admin node)

yum -y update && yum -y install ceph ceph-deploy   # the hammer release is already pinned by the repo from 2.6; yum has no --release option
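
To confirm that the packages installed correctly:

ceph --version          # should report a 0.94.x (hammer) build
ceph-deploy --version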

3. Enable password-less SSH login (admin node)

3.1 Generate an SSH key pair. When prompted with "Enter passphrase", just press Enter to leave the passphrase empty:

ssh-keygen

3.2 Copy the public key to every node

ssh-copy-id root@node1

ssh-copy-id root@node2

ssh-copy-id root@node3

ssh-copy-id root@client

3.3 Verify that you can log in over SSH without a password

ssh node1

ssh node2

ssh node3

ssh client
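
Optionally, you can also list the nodes in ~/.ssh/config on the admin node so that ceph-deploy and ssh always use the right login user without it being typed each time. A minimal sketch, assuming the root user from section 3.2:

Host node1
    User root
Host node2
    User root
Host node3
    User root
Host client
    User root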

4. Create the monitors (admin node)

4.1 Create monitors on node1, node2, and node3

mkdir myceph

cd myceph

ceph-deploy new node1 node2 node3

4.2 Set the default number of OSD replicas by appending osd pool default size = 2 to the end of the file

vim ceph.conf   # the ceph.conf generated by ceph-deploy new in the current working directory (myceph)

osd pool default size = 2
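
After the edit, the file looks roughly like the sketch below. The fsid is generated by ceph-deploy new, so yours will differ:

[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = node1, node2, node3
mon_host = 192.168.130.201,192.168.130.202,192.168.130.203
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2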

4.3 Deploy the initial monitor(s) and gather all keys

ceph-deploy mon create-initial
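
When this succeeds, the working directory should contain the gathered keyrings, such as ceph.client.admin.keyring and ceph.bootstrap-osd.keyring:

ls *.keyring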

5. Create the OSDs (admin node)

5.1 List the disks

ceph-deploy disk list node1
ceph-deploy disk list node2
ceph-deploy disk list node3

5.2 Zap (erase) the disks

ceph-deploy disk zap node1:sdb

ceph-deploy disk zap node1:sdc

ceph-deploy disk zap node2:sdb

ceph-deploy disk zap node2:sdc

ceph-deploy disk zap node3:sdb

ceph-deploy disk zap node3:sdc

5.3 Prepare and activate the OSDs

ceph-deploy osd prepare node1:sdb

ceph-deploy osd prepare node1:sdc

ceph-deploy osd prepare node2:sdb

ceph-deploy osd prepare node2:sdc

ceph-deploy osd prepare node3:sdb

ceph-deploy osd prepare node3:sdc

ceph-deploy osd activate node1:sdb1

ceph-deploy osd activate node1:sdc1

ceph-deploy osd activate node2:sdb1

ceph-deploy osd activate node2:sdc1

ceph-deploy osd activate node3:sdb1

ceph-deploy osd activate node3:sdc1
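
To verify that all six OSDs are up and in, run the following from a monitor node, or from the admin node once the keys have been pushed out in section 5.5:

ceph osd tree   # every OSD should show status "up"
ceph -s         # cluster status summary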

5.4 Remove an OSD (using osd.3 as an example)

ceph osd out osd.3
ssh node1 service ceph stop osd.3
ceph osd crush remove osd.3
ceph auth del osd.3   # remove from authentication
ceph osd rm 3         # remove the OSD
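
Marking an OSD out triggers data migration, so it is worth watching the cluster rebalance before removing the daemon for good:

ceph -w   # watch PG states until the cluster returns to active+clean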

5.5 Copy the configuration file and admin key to each node, so that you do not need to specify the monitor address and ceph.client.admin.keyring every time you run a Ceph command

ceph-deploy admin admin node1 node2 node3

5.6 Check cluster health

ceph health
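
Once all placement groups are active+clean this reports HEALTH_OK. For a more detailed view:

ceph -s   # monitors, OSD counts, and PG states in one summary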

6. Configure a block device (client node)

6.1 Create an image

rbd create foo --size 4096 [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]

rbd create foo --size 4096 -m node1 -k /etc/ceph/ceph.client.admin.keyring

6.2 Map the image to a block device

sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]

sudo rbd map foo --name client.admin -m node1 -k /etc/ceph/ceph.client.admin.keyring

6.3 Create a file system

sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo

6.4 Mount the file system

sudo mkdir /mnt/ceph-block-device

sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device

cd /mnt/ceph-block-device
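
A quick sanity check that the RBD-backed file system is mounted and writable (testfile is just an arbitrary name):

df -h /mnt/ceph-block-device            # the 4 GB image should appear here
echo "hello ceph" | sudo tee testfile   # write a small test file
cat testfile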

Pool operations

1. List pools

ceph osd lspools

2. Create a pool

ceph osd pool create pool-name pg-num pgp-num

ceph osd pool create test 512 512
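
The 512 here follows the usual rule of thumb from the Ceph documentation: total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. For this cluster:

# (6 OSDs x 100) / 2 replicas = 300 -> next power of two = 512
ceph osd pool create test 512 512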

3. Delete a pool

ceph osd pool delete test test --yes-i-really-really-mean-it

4. Rename a pool

ceph osd pool rename current-pool-name new-pool-name

ceph osd pool rename test test2

5. Show pool statistics

rados df

6. Set a pool option value

ceph osd pool set test size 3   # set the object replica count

7. Get a pool option value

ceph osd pool get test size   # get the object replica count

Block device image operations

1. Create a block device image

rbd create --size {megabytes} {pool-name}/{image-name}

rbd create --size 1024 test/foo

2. List block device images

rbd ls        # lists the default rbd pool
rbd ls test   # lists a specific pool

3. Retrieve image information

rbd info {image-name}

rbd info foo

rbd info {pool-name}/{image-name}

rbd info test/foo

4. Resize a block device image

rbd resize --size 512 test/foo --allow-shrink   # shrink
rbd resize --size 4096 test/foo                 # grow
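
Growing the image does not grow a file system already on it. If the image is mapped and formatted with ext4 as in section 6, the file system must be resized separately; a sketch, assuming the image is mapped at /dev/rbd0:

sudo resize2fs /dev/rbd0   # grow the ext4 file system to fill the enlarged image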

5. Remove a block device image

rbd rm test/foo

Kernel module operations

1. Map a block device

sudo rbd map {pool-name}/{image-name} --id {user-name}

sudo rbd map test/foo2 --id admin

If cephx authentication is enabled, you must also specify the keyring:

sudo rbd map test/foo2 --id admin --keyring /etc/ceph/ceph.client.admin.keyring

2. Show mapped devices

rbd showmapped
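
Illustrative output (the id and device name will vary):

# id pool image snap device
# 0  test foo2  -    /dev/rbd0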

3. Unmap a block device

sudo rbd unmap /dev/rbd/{poolname}/{imagename}

sudo rbd unmap /dev/rbd/test/foo2