Single Host Ceph Server

Clean CentOS 8

Basic stuff and cephadm

yum install -y python3 podman chrony lvm2 wget 
wget -O /root/cephadm https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x /root/cephadm
mkdir -p /etc/ceph
./cephadm add-repo --release octopus
./cephadm install
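
A quick sanity check of my own that cephadm is usable (this may pull the Ceph container image the first time it runs):

./cephadm version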


Bootstrap the monitor on the host's own IP

cephadm bootstrap --mon-ip 192.168.2.206   
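
Bootstrap prints the dashboard URL and admin password; cluster health can be checked right away from a containerized shell (my addition, not in the original notes):

cephadm shell -- ceph -s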

Install ceph

cephadm add-repo --release octopus
cephadm install ceph-common
cephadm install ceph 
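
With ceph-common on the host, the cluster can now be queried directly instead of going through cephadm shell (quick check of my own):

ceph -v
ceph orch status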

Create OSDs on all available disks (get the exact command from Hoerup)

ceph orch apply osd --all-available-devices
ceph status
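
The devices cephadm considers usable, and the resulting OSD layout, can be checked with (my addition):

ceph orch device ls
ceph osd tree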


Create a new rule that uses OSD as the failure domain (instead of 3 hosts)

ceph osd crush rule create-replicated repl1 default osd
ceph osd pool ls
ceph osd pool set device_health_metrics crush_rule repl1
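
To confirm the new rule exists and the pool picked it up (verification commands, my addition):

ceph osd crush rule ls
ceph osd crush rule dump repl1
ceph osd pool get device_health_metrics crush_rule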


Block Device

EC stuff, here with 4+1 (k=4, m=1)

ceph osd pool create rbdmeta replicated repl1
ceph osd erasure-code-profile get default
ceph osd erasure-code-profile set ec41 k=4 m=1 crush-failure-domain=osd
ceph osd pool create rbddata erasure ec41
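
The resulting profile and pools can be inspected like this (verification of my own):

ceph osd erasure-code-profile get ec41
ceph osd pool ls detail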

Hint that these pools will be used for block storage (rbd)

ceph osd pool application enable rbddata rbd
ceph osd pool application enable rbdmeta rbd
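
To double-check that the application tag took effect (my addition):

ceph osd pool application get rbddata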

Allow overwrites on the EC pool (required for rbd on erasure-coded pools)

ceph osd pool set rbddata allow_ec_overwrites true
rbd create --size 40G --data-pool rbddata rbdmeta/ectestimage1
rbd ls rbdmeta
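
To confirm the image really keeps its data on the EC pool (verification of my own):

rbd info rbdmeta/ectestimage1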

Map an rbd image as a block device

rbd map rbdmeta/ectestimage1
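
The mapping can be verified like this (not in the original notes):

rbd showmapped
lsblk /dev/rbd0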

Add the image to /etc/ceph/rbdmap so it is mapped again at boot

rbdmeta/ectestimage1    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
systemctl enable rbdmap.service
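
To test the boot-time mapping without rebooting (a check of my own; assumes the image is not mounted anywhere yet):

rbd unmap rbdmeta/ectestimage1
systemctl start rbdmap.service
rbd showmapped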


Mount the filesystem

mkfs.xfs /dev/rbd0 
mkdir /storage
mount -t xfs /dev/rbd0 /storage/
df -h /storage/

/etc/fstab

/dev/rbd0       /storage/       xfs     defaults,_netdev        0       0
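
The fstab entry can be tested without a reboot (my own check; assumes nothing is using /storage yet):

umount /storage
mount -a
findmnt /storage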

CephFS - Filesystem

# We call the filesystem myfs
# Before setting it up, create the data pool with the EC profile
ceph osd pool create cephfs.myfs.data erasure ec41
# setup the metadata server
ceph orch apply mds myfs
# create the volume
ceph fs volume create myfs
# metadata must be replicated, but we set the crush rule
ceph osd pool set cephfs.myfs.meta crush_rule repl1
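
A minimal sketch of mounting the new filesystem with the kernel client; the secret-file path and /mnt/myfs mount point are examples of mine, and with only one filesystem in the cluster no fs name needs to be given:

# check that the MDS is up and the filesystem exists
ceph fs ls
ceph fs status myfs
# dump the admin key to a secret file (example path)
ceph auth get-key client.admin > /etc/ceph/admin.secret
mkdir -p /mnt/myfs
mount -t ceph 192.168.2.206:/ /mnt/myfs -o name=admin,secretfile=/etc/ceph/admin.secret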

What are we still missing?

Clean shutdown / reboot ?

ceph logs ?

Scrubbing ?

Monitoring / Prometheus?

Defective disk, new disk.

REST API

Sources n crap

https://docs.ceph.com/en/latest/cephadm/install/

https://medium.com/@balderscape/setting-up-a-virtual-single-node-ceph-storage-cluster-d86d6a6c658e

https://linoxide.com/linux-how-to/hwto-configure-single-node-ceph-cluster/


Zap disk for re-use

ceph-volume lvm zap /dev/sdX

or

dd if=/dev/zero of=/dev/vdc bs=1M count=10
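
The cephadm orchestrator can also wipe a device it manages; a sketch of my own, assuming this Octopus build ships the subcommand (substitute the real hostname and device):

ceph orch device zap $(hostname) /dev/sdX --force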