cluster
Resizing placement groups - again
Ceph really keeps us active. 😄
cluster
We need to upgrade our Ceph version from 17 (Quincy) to 18 (Reef) to be prepared for the next major Proxmox upgrade.
cluster
After many manual changes, our Ceph configs still contain a few leftovers that we now clean up.
cluster
After the removal of nodes cluster-02 and cluster-06, the number of placement groups had to be optimized again.
cluster
It's time to remove the old cluster-02 node.
cluster
We add a spare Ceph monitor and Ceph manager on node cluster-12 using the web UI.
cluster
This time we have a 'too many placement groups' message on our SSD pool.
cluster-06
We need to remove the cluster-06 node, which was just a temporary replacement for the bricked cluster-01.
cluster
We now extend the cluster with the new node.
cluster-12
After installing the cluster-12 node, we change the temporary network settings to the real ones.
cluster
Our cluster suffers a major Ceph breakdown.
cluster-12
We install Proxmox VE 8.1 on the cluster-12 hardware using the terminal installer.
cluster
After the CEPH_CLUSTER_LAN outage, we were left with a crashed OSD, which needs to be cleaned up.
cluster
Our cluster has a major outage without any warning.
cluster
As all VM disks have been migrated to the SSD pool, we can now remove the HDD pool and its OSDs.
cluster
We currently have a 'too few placement groups' message on our SSD pool.