1) Start with:
ceph health detail
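Alongside the detailed health output, the standard status command gives a one-screen summary of monitor quorum, OSD counts, and pool usage, which helps put the warnings below in context:
ceph -s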
2) OSD_FULL
Ceph prevents writing to a full OSD. By default the 'full' ratio is set to 0.95. As a temporary workaround, raise it slightly, e.g. to 0.96:
ceph osd set-full-ratio 0.96
PS: to get more info:
1- ceph osd dump | grep full_ratio
2- ceph df
(ref https://docs.ceph.com/en/quincy/rados/operations/health-checks/)
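Raising the ratio is only a stopgap. Once space has been freed (data deleted or new OSDs added), restore the defaults; a sketch, assuming the stock default thresholds (full 0.95, nearfull 0.85, backfillfull 0.90):
# restore the default full ratio after freeing space or adding OSDs
ceph osd set-full-ratio 0.95
# the related warning thresholds can be tuned the same way
ceph osd set-nearfull-ratio 0.85
ceph osd set-backfillfull-ratio 0.90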
3) POOL_TOO_FEW_PGS
List pools:
ceph osd lspools
To let the cluster adjust the number of PGs automatically, enable the autoscaler:
ceph osd pool set <pool-name> pg_autoscale_mode on
Or set the PG count for a pool manually:
ceph osd pool set <pool-name> pg_num <new-pg-num>
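Before changing anything, it is worth checking what the autoscaler recommends; the status command lists current and target PG counts per pool:
ceph osd pool autoscale-status
# check the current value for a single pool
ceph osd pool get <pool-name> pg_num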
4) BLUESTORE_NO_PER_POOL_OMAP
Stop the OSD, repair it, then start it again:
e.g., osd.123:
systemctl stop ceph-osd@123
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-123
systemctl start ceph-osd@123
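While an OSD is stopped for repair, the cluster will eventually mark it out and start rebalancing. Setting the noout flag first avoids that unnecessary data movement; a sketch around the same steps as above:
# prevent the cluster from marking the stopped OSD out and rebalancing
ceph osd set noout
# ... stop, repair, and start the OSD as shown above ...
# re-enable normal out-marking once the OSD is back up
ceph osd unset noout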
Thank you.