Single Host Ceph Server
Starting from a clean CentOS 8 install.
Contents
- 1 Basic Stuff and cephadm
- 2 Bootstrap the monitor on its own IP
- 3 Install ceph
- 4 Create OSDs with all disks (get the exact command from Hoerup)
- 5 Create a new rule that uses OSD as the failure domain (instead of 3 hosts)
- 6 EC stuff, here with 4+1
- 7 Hint that this pool will be used for block storage
- 8 Create the filesystem
- 9 What's still missing?
- 10 Sources n crap
Basic Stuff and cephadm
yum install -y python3 podman chrony lvm2 wget
wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x /root/cephadm
mkdir -p /etc/ceph
./cephadm add-repo --release octopus
./cephadm install
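Optional sanity check that the packaged cephadm landed on the PATH and reports its version:
which cephadm
cephadm version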
Bootstrap the monitor on its own IP
cephadm bootstrap --mon-ip 192.168.2.206
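Bootstrap prints dashboard credentials at the end. To check that the containerized mon and mgr came up (this goes through cephadm's shell wrapper, since the plain ceph CLI isn't on the host yet):
cephadm shell -- ceph -s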
Install ceph
cephadm add-repo --release octopus
cephadm install ceph-common
cephadm install ceph
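With ceph-common in place the ceph CLI works directly on the host (bootstrap left a config and admin keyring in /etc/ceph), so a quick check:
ceph -v
ceph status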
Create OSDs with all disks (get the exact command from Hoerup)
ceph orch apply osd --all-available-devices
ceph status
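To see which devices the orchestrator considers available and to confirm that one OSD per disk actually got created:
ceph orch device ls
ceph osd tree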
Create a new rule that uses OSD as the failure domain (instead of 3 hosts)
ceph osd crush rule create-replicated repl1 default osd
ceph osd pool ls
ceph osd pool set device_health_metrics crush_rule repl1
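Background: the default replicated_rule places replicas on distinct hosts, which a single-host cluster can never satisfy, so pools stay undersized until they use a rule with OSD as the failure domain. To inspect the new rule and confirm the pool picked it up:
ceph osd crush rule dump repl1
ceph osd pool get device_health_metrics crush_rule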
EC stuff, here with 4+1
ceph osd pool create rbdmeta replicated repl1
ceph osd erasure-code-profile get default
ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd
ceph osd pool create rbddata erasure ec41
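Note that k=4 m=1 with crush-failure-domain=osd needs at least 5 OSDs on the box. To double-check the profile and the resulting pools:
ceph osd erasure-code-profile get ec41
ceph osd pool ls detail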
Hint that this pool will be used for block storage
ceph osd pool application enable rbddata rbd
ceph osd pool application enable rbdmeta rbd
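To confirm the application tag stuck on both pools:
ceph osd pool application get rbddata
ceph osd pool application get rbdmeta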
Create the filesystem
Allow EC block overwrites
ceph osd pool set rbddata allow_ec_overwrites true
rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1
rbd ls rbdmeta
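rbd info shows the image details and should list rbddata as its data pool:
rbd info rbdmeta/ectestimage1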
Map an rbd image as a block device
rbd map rbdmeta/ectestimage1
Add this line to /etc/ceph/rbdmap:
rbdmeta/ectestimage1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
systemctl enable rbdmap.service
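The image was already mapped by hand above; rbdmap.service just re-maps it at boot. To see which /dev/rbdN it got (the mkfs and mount below assume /dev/rbd0):
rbd showmapped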
Mount the filesystem
mkfs.xfs /dev/rbd0
mkdir /storage
mount -t xfs /dev/rbd0 /storage/
df -h /storage/
Add to /etc/fstab:
/dev/rbd0 /storage/ xfs defaults,_netdev 0 0
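A quick way to exercise the fstab entry without a reboot (assumes /storage is currently mounted by hand):
umount /storage
mount -a
df -h /storage/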
What's still missing?
- Clean shutdown / reboot?
- Ceph logs?
- Scrubbing?
- Monitoring / Prometheus?
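A rough sketch for two of the open points, using standard Ceph commands (nothing specific to this setup): set noout around planned reboots so OSDs aren't marked out and rebalanced, and enable the mgr Prometheus module for monitoring.
ceph osd set noout      # before a planned reboot / shutdown
ceph osd unset noout    # once the OSDs are back up
ceph mgr module enable prometheus   # metrics are served by the mgr, port 9283 by default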
Sources n crap
https://docs.ceph.com/en/latest/cephadm/install/
https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e
https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/
Zap disk for re-use
ceph-volume lvm zap /dev/sdX
or
dd if=/dev/zero of=/dev/vdc bs=1M count=10
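With cephadm the orchestrator can also wipe a device so it shows up as available again; hostname and device below are placeholders:
ceph orch device zap <hostname> /dev/sdX --force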