GlusterFS
It is a very common problem: you need a filesystem that is synced between several computers and stays available even if some servers are down. Wikipedia has a nice overview of such filesystems: https://en.wikipedia.org/wiki/Comparison_of_distributed_file_systems
https://en.wikipedia.org/wiki/Gluster#GlusterFS is one of those filesystems.
Install
https://www.osradar.com/install-and-configure-glusterfs-debian-10/
On each server, install the package and start the daemon:
apt install glusterfs-server
systemctl enable glusterd
systemctl start glusterd
systemctl status glusterd
On any one server, probe all the other servers:
gluster peer probe thorsten-gluster-test-3.example.com
...
Additionally, one of the other servers needs to probe the first server so that it joins the pool too:
gluster peer probe thorsten-gluster-test-1.example.com
This lists all the servers except yourself:
gluster peer status
This lists all the servers including yourself:
gluster pool list
Volumes
Create a volume by listing all the servers and the brick path on each server where the data should be stored (the volume name my_volume is used in the examples below):
gluster volume create my_volume replica 3 \
  thorsten-gluster-test-1.example.com:/mnt/my_glusterfs_test \
  thorsten-gluster-test-2.example.com:/mnt/my_glusterfs_test \
  thorsten-gluster-test-3.example.com:/mnt/my_glusterfs_test
gluster volume start my_volume
Mount a volume (this is a naive way, as it only works if the named server is online at the moment you mount):
mount -t glusterfs thorsten-gluster-test-1.example.com:/my_volume /mnt/my_mountpoint
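To avoid depending on a single server at mount time, the GlusterFS FUSE client accepts a backup-volfile-servers mount option listing fallback servers. A sketch of an /etc/fstab entry under that assumption (the mount point /mnt/my_mountpoint is just an example):

```
# /etc/fstab — mount via server 1, fall back to servers 2 and 3 for the volfile
thorsten-gluster-test-1.example.com:/my_volume  /mnt/my_mountpoint  glusterfs  defaults,_netdev,backup-volfile-servers=thorsten-gluster-test-2.example.com:thorsten-gluster-test-3.example.com  0 0
```

The _netdev option delays the mount until the network is up, which matters on boot.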
If a node is down you can force-remove it from all volumes that contain it and then detach it (but you need to lower the replica count to the number of servers that will remain):
gluster volume remove-brick my_volume replica 2 bad-server.example.com:/data1/glusterfs_volumes/my_volume/brick force
gluster peer detach bad-server.example.com
Later you can add a new server and increase the replica count again, e.g. (hostname and brick path here are examples):
gluster peer probe new-server.example.com
gluster volume add-brick my_volume replica 3 new-server.example.com:/data1/glusterfs_volumes/my_volume/brick
gluster volume heal my_volume full
Kubernetes PVC based on GlusterFS
For an on-premise Kubernetes cluster you often need a storage solution that does not have a single point of failure. GlusterFS is nice for this.
apiVersion: v1
kind: PersistentVolume
metadata:
  # The name of the PV, which is referenced in pod definitions and shown by kubectl/oc volume commands.
  name: my-glusterfs-pvc
spec:
  capacity:
    # The amount of storage allocated to this volume.
    storage: 8Gi
  accessModes:
    # Access modes are used as labels to match a PV and a PVC. They do not enforce any access control.
    - ReadWriteOnce
  # The glusterfs plug-in defines the volume type being used.
  glusterfs:
    # Name of an Endpoints object that lists the GlusterFS servers.
    endpoints: glusterfs-cluster
    # Gluster volume name, preceded by /
    path: /my_volume
    readOnly: false
  # Reclaim policy: with Delete the volume is removed after the claim releasing it is deleted.
  # Accepted values are Retain, Delete, and Recycle.
  persistentVolumeReclaimPolicy: Delete
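The endpoints name in the PV refers to an Endpoints object that lists the GlusterFS servers, and pods consume the PV through a PersistentVolumeClaim. A sketch of both, assuming the names above (the IP addresses are placeholders for the real GlusterFS servers):

```yaml
# Endpoints object that "endpoints: glusterfs-cluster" in the PV refers to.
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.0.2.1
      - ip: 192.0.2.2
      - ip: 192.0.2.3
    ports:
      # The port value is ignored for GlusterFS but must be set; 1 is the conventional placeholder.
      - port: 1
---
# A claim that can bind to the PV above (size and access mode must match).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-glusterfs-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```

Pods then reference the claim by name (my-glusterfs-claim) in a persistentVolumeClaim volume entry.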