anduin revised this gist
1 file changed, 10 insertions, 2 deletions
notes.md
````diff
@@ -65,10 +65,18 @@ d g w
 
 ## Prepare the Ceph files
 
-Go to those nodes
+Go to those nodes; run this on every node:
 
 ```bash
-mkdir /etc/ceph
+sudo mkdir /etc/ceph
+```
+
+Install Docker on every node. Ceph inherently needs Docker to deploy many of its own components.
+
+```bash
+curl -fsSL get.docker.com -o get-docker.sh
+CHANNEL=stable sh get-docker.sh
+rm get-docker.sh
 ```
 
 Initialize Ceph; run this on node1
````
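The per-node prep from this revision (the `/etc/ceph` directory plus the Docker convenience script) can be fanned out from one machine. This is only a sketch under the guide's assumptions (hostnames `node1`..`node3`, root ssh enabled earlier); it writes the commands to a file for review instead of executing anything remotely.

```shell
# Dry-run sketch: generate the per-node prep commands into a reviewable script.
# The node list and root ssh access are assumptions from this guide.
NODES="node1 node2 node3"
: > prep-nodes.sh
for n in $NODES; do
  echo "ssh root@$n 'mkdir -p /etc/ceph'" >> prep-nodes.sh
  echo "ssh root@$n 'curl -fsSL get.docker.com -o get-docker.sh && CHANNEL=stable sh get-docker.sh && rm get-docker.sh'" >> prep-nodes.sh
done
cat prep-nodes.sh
```

Once the generated commands look right, `sh prep-nodes.sh` runs them for real.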
anduin revised this gist
1 file changed, 2 insertions
notes.md
````diff
@@ -15,6 +15,8 @@ sudo vim /etc/ssh/sshd_config
 
 Change `PermitRootLogin` in it to `yes`, and `PasswordAuthentication` to `yes`
 
+Yes, this is very inelegant. Ceph likes to ssh into every node as root and run a pile of commands. But that is simply how it is designed.
+
 ```bash
 sudo systemctl restart ssh sshd
 ```
````
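The two `sshd_config` edits can also be scripted instead of done in vim. A minimal `sed` sketch, rehearsed here against a sample file; on a real node the target would be `/etc/ssh/sshd_config`, edited with sudo and followed by the restart above.

```shell
# Build a sample with stock-style directives (one commented out, one set to no).
cat > sshd_config.sample <<'EOF'
#PermitRootLogin prohibit-password
PasswordAuthentication no
EOF

# Force both directives to yes, whether or not they were commented out.
sed -i -E \
  -e 's/^#?[[:space:]]*PermitRootLogin[[:space:]].*/PermitRootLogin yes/' \
  -e 's/^#?[[:space:]]*PasswordAuthentication[[:space:]].*/PasswordAuthentication yes/' \
  sshd_config.sample
cat sshd_config.sample
```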
anduin revised this gist
1 file changed, 1 insertion, 1 deletion
notes.md
````diff
@@ -5,7 +5,7 @@
 * node2
 * node3
 
-Each of the three nodes gets a **new** large disk plugged in. It should be `sdb`. It doesn't matter if it is something else.
+Each of the three nodes gets a **new** large disk plugged in. Assume it is `/dev/sdb`.
 
 ## Enable root ssh
 
````
anduin revised this gist
1 file changed, 1 insertion, 1 deletion
notes.md
````diff
@@ -5,7 +5,7 @@
 * node2
 * node3
 
-Each of the three nodes gets a **new** large disk plugged in. It should be `sdb`.
+Each of the three nodes gets a **new** large disk plugged in. It should be `sdb`. It doesn't matter if it is something else.
 
 ## Enable root ssh
 
````
anduin revised this gist
1 file changed, 2 insertions
notes.md
````diff
@@ -5,6 +5,8 @@
 * node2
 * node3
 
+Each of the three nodes gets a **new** large disk plugged in. It should be `sdb`.
+
 ## Enable root ssh
 
 ```bash
````
anduin revised this gist
1 file changed, 2 insertions
notes.md
````diff
@@ -97,9 +97,11 @@ apt install -y ceph-common
 
 ## Add hosts
 
+```bash
 ceph orch host add node2
 ceph orch host add node3
 ceph orch host ls
+```
 
 ## Add OSDs
 
````
anduin revised this gist
1 file changed, 13 insertions, 5 deletions
notes.md
````diff
@@ -1,4 +1,10 @@
 
+Here we will install Ceph on 3 nodes. The hostnames of the three nodes are:
+
+* node1
+* node2
+* node3
+
 ## Enable root ssh
 
 ```bash
@@ -13,22 +19,22 @@ sudo systemctl restart ssh sshd
 
 ## Set the password
 
+```bash
 sudo su
 passwd
+```
 
 Set a root password
 
-## Stop the services
+## Stop the services (optional)
 
-The current services are based on Glusterfs. First stop all Docker stacks, then stop the glusterfs service.
+If your disks were already set up with Glusterfs, first stop all Docker stacks, then stop the glusterfs service.
 
 ```bash
 docker stack rm $(docker stack ls --format '{{.Name}}')
 ```
 
-## Wipe the old disks
-
-Stop glusterfs
+Stop glusterfs.
 
 ```bash
 sudo systemctl stop glusterd
@@ -44,6 +50,8 @@ rm -rvf /swarm-vol
 rm -rvf /var/no-direct-write-here/gluster-bricks
 ```
 
+## Wipe the old disks
+
 Completely erase the old disk. Remember to back it up before erasing.
 
 ```bash
````
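The interactive `fdisk` session in the wipe step (`d`, `g`, `w`: delete partitions, write a new GPT label, save) has a scriptable equivalent. A dry-run sketch, assuming `wipefs` from util-linux, the `/dev/sdb` disk and root ssh from this guide; it prints the commands for review rather than running them, since they destroy data.

```shell
# Dry-run sketch of a scripted wipe; review wipe-disks.sh before piping it to sh.
# Node names, the mountpoint, and the /dev/sdb device are this guide's assumptions.
for n in node1 node2 node3; do
  echo "ssh root@$n 'umount /swarm-vol 2>/dev/null; wipefs --all /dev/sdb'"
done > wipe-disks.sh
cat wipe-disks.sh
```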
anduin revised this gist
1 file changed, 5 insertions, 1 deletion
notes.md
````diff
@@ -1,11 +1,15 @@
 
 ## Enable root ssh
 
+```bash
 sudo vim /etc/ssh/sshd_config
+```
 
-Change PermitRootLogin in it to yes, and PasswordAuthentication to yes
+Change `PermitRootLogin` in it to `yes`, and `PasswordAuthentication` to `yes`
 
+```bash
 sudo systemctl restart ssh sshd
+```
 
 ## Set the password
 
````
anduin revised this gist
1 file changed, 117 insertions
notes.md (file created)
````diff
@@ -0,0 +1,117 @@
+
+## Enable root ssh
+
+sudo vim /etc/ssh/sshd_config
+
+Change PermitRootLogin in it to yes, and PasswordAuthentication to yes
+
+sudo systemctl restart ssh sshd
+
+## Set the password
+
+sudo su
+passwd
+
+Set a root password
+
+## Stop the services
+
+The current services are based on Glusterfs. First stop all Docker stacks, then stop the glusterfs service.
+
+```bash
+docker stack rm $(docker stack ls --format '{{.Name}}')
+```
+
+## Wipe the old disks
+
+Stop glusterfs
+
+```bash
+sudo systemctl stop glusterd
+sudo systemctl disable glusterd
+```
+
+Unmount the old disk; don't forget to umount all the locations. On the old disk run
+
+```bash
+umount /swarm-vol
+umount /var/no-direct-write-here/gluster-bricks
+rm -rvf /swarm-vol
+rm -rvf /var/no-direct-write-here/gluster-bricks
+```
+
+Completely erase the old disk. Remember to back it up before erasing.
+
+```bash
+fdisk /dev/sdb
+d g w
+```
+
+## Prepare the Ceph files
+
+Go to those nodes
+
+```bash
+mkdir /etc/ceph
+```
+
+Initialize Ceph; run this on node1
+
+```bash
+curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
+chmod +x cephadm
+mkdir -p /etc/ceph
+./cephadm bootstrap --mon-ip $MYIP
+```
+
+Wait 90 seconds, and don't forget to copy the output.
+
+Copy these files to node2 and node3
+
+```bash
+scp /etc/ceph/ceph.conf root@node2:/etc/ceph/ceph.conf
+scp /etc/ceph/ceph.conf root@node3:/etc/ceph/ceph.conf
+scp /etc/ceph/ceph.client.admin.keyring root@node2:/etc/ceph/ceph.client.admin.keyring
+scp /etc/ceph/ceph.client.admin.keyring root@node3:/etc/ceph/ceph.client.admin.keyring
+scp /etc/ceph/ceph.pub root@node2:/root/.ssh/authorized_keys
+scp /etc/ceph/ceph.pub root@node3:/root/.ssh/authorized_keys
+```
+
+Install ceph-common everywhere
+
+```bash
+apt install -y ceph-common
+```
+
+## Add hosts
+
+ceph orch host add node2
+ceph orch host add node3
+ceph orch host ls
+
+## Add OSDs
+
+We can first let Ceph auto-detect for a round
+
+```bash
+ceph orch apply osd --all-available-devices
+```
+
+If there are problems, you can add them manually
+
+```bash
+ceph orch device zap node1 /dev/sdb --force
+ceph orch device zap node2 /dev/sdb --force
+ceph orch device zap node3 /dev/sdb --force
+```
+
+## Mount the new filesystem
+
+```bash
+echo "
+# mount cephfs
+node1,node2,node3:/ /swarm-vol ceph name=admin,noatime,_netdev 0 0
+" | sudo tee -a /etc/fstab
+sudo mkdir /swarm-vol
+sudo mount /swarm-vol
+```
````
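The cephfs `fstab` entry in the mount step is easy to get wrong before a reboot. A small local sanity check of the entry's shape (the line string is copied from the notes above; `awk` just counts the six whitespace-separated fstab fields: device, mountpoint, fstype, options, dump, pass):

```shell
# The fstab line from the notes, checked for the expected 6-field fstab layout.
line='node1,node2,node3:/ /swarm-vol ceph name=admin,noatime,_netdev 0 0'
fields=$(echo "$line" | awk '{print NF}')
echo "fstab entry has $fields fields (expected 6)"
```

On a real node, `sudo mount /swarm-vol` followed by `df -h /swarm-vol` confirms the entry actually mounts.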