This is my first time working with Ceph. After two days of experimenting (I've lost count of how many attempts), the cluster ends up in this state almost every time it starts. I've read through many mailing-list threads but haven't found a solution; the cause is still unknown.

Deployed inside Vagrant.
```
[root@ceph1 ~]# ceph -v
ceph version 0.87.1 (283c2e7cfa2457799f534744d7d549f83ea1335e)
```
System information:
```
[root@ceph1 ~]# cat /etc/issue
CentOS release 6.6 (Final)
Kernel \r on an \m
[root@ceph1 ~]# uname -a
Linux ceph1 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
```

ceph.conf configuration file:

```
[global]
auth service required = cephx
auth client required = cephx
auth cluster required = cephx
auth supported = none
filestore_xattr_use_omap = true # Just 4 ext4
chooseleaf = 0
fsid = a1115100-85e8-405d-8b36-35d2428bab46
osd pool default size = 2
osd pool default min size = 1
[mon]
mon data = /var/lib/ceph/mon/$name
[mon.a]
host = ceph1
mon addr = 192.168.251.100:6789
[osd]
osd journal size = 1024
osd journal = /var/lib/ceph/osd/$name/journal
osd data = /var/lib/ceph/osd/$name
[osd.0]
host = ceph1
[osd.1]
host = ceph1
```

Ceph cluster information:

```
[root@ceph1 ~]# ceph osd tree
-1 0.4 root default
-2 0 host ceph
-3 0.4 host ceph1
0 0.2 osd.0 up 1
1 0.2 osd.1 up 1
```

```
[root@ceph1 ~]# ceph -s
    cluster a1115100-85e8-405d-8b36-35d2428bab46
     health HEALTH_WARN 64 pgs degraded; 64 pgs stuck degraded; 64 pgs stuck unclean; 64 pgs stuck undersized; 64 pgs undersized
     monmap e1: 1 mons at {a=192.168.251.100:6789/0}, election epoch 1, quorum 0 a
     osdmap e25: 3 osds: 2 up, 2 in
      pgmap v47: 64 pgs, 1 pools, 0 bytes data, 0 objects
            6399 MB used, 8906 MB / 16124 MB avail
                  64 active+undersized+degraded
```
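For anyone debugging the same symptom, a standard next step is to ask Ceph which PGs are stuck and why. A sketch using stock commands (the pgid `0.0` is only an example; pick one from the dump):

```shell
# List each unhealthy PG and the reason it is flagged
ceph health detail

# Dump PGs stuck in the unclean state (available in 0.87)
ceph pg dump_stuck unclean

# Query one PG in depth to see which OSDs it wants vs. has
ceph pg 0.0 query    # 0.0 is an example pgid
```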
#1 bfti · 2015-04-15 19:59:47 +08:00
Are many companies actually running Ceph these days?
#3 feicheche · 2015-04-16 18:29:35 +08:00
You have 3 OSDs but only 2 of them are up. Check whether one of the OSDs failed to start.
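Acting on this suggestion might look like the following. On CentOS 6 the Ceph daemons are managed by the sysvinit script; the OSD id shown is illustrative, since the pasted `ceph osd tree` only lists osd.0 and osd.1:

```shell
# See which OSDs the cluster thinks are down
ceph osd tree

# Check which daemons are actually running on this node
service ceph status

# Start the daemon that is down, e.g. (illustrative id):
service ceph start osd.2
```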
#4 xanpeng · 2015-04-18 00:49:47 +08:00
Haven't looked at Ceph in a while. My guesses at the cause: 1) what feicheche said, the "3 osds: 2 up, 2 in"; 2) the default seems to be 3 replicas placed across distinct hosts, so even if you have enough OSDs, with too few hosts the cluster will still report HEALTH_WARN.
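If the second cause is the one biting here (both OSDs live on the same host, while the default CRUSH rule wants each replica on a different host), a common workaround for single-host test clusters is to make CRUSH choose leaves at the OSD level instead of the host level. A sketch; verify the rule text in your own decompiled crushmap before editing:

```shell
# Option 1: before deploying, in ceph.conf under [global]:
#   osd crush chooseleaf type = 0    # 0 = osd, 1 = host (the default)

# Option 2: on a running cluster, edit the CRUSH map in place:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# in crushmap.txt, change the replicated rule's step from:
#   step chooseleaf firstn 0 type host
# to:
#   step chooseleaf firstn 0 type osd
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
```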
#5 resettarget · 2015-04-25 22:32:42 +08:00
I also think it's an OSD problem; you need at least 3 active OSDs before you can get HEALTH_OK.