Multinode/multireplica configurations

Hello!
I have 5 FST nodes and I’m trying to add all of their storage space into the same pool. I also need triple redundancy, or some RAID-like scenario, so that I can lose or decommission nodes and still have all my data available. Please point me to some docs on that, or give an example of a 5-node configuration with triple redundancy.


Currently I’ve created a filesystem

[root@master1 ~]# eos fs ls
┌────────────────────────┬────┬──────┬────────────────────────────────┬────────────────┬────────────────┬────────────┬──────────────┬────────────┬──────┬────────┬────────────────┐
│host                    │port│    id│                            path│      schedgroup│          geotag│        boot│  configstatus│       drain│ usage│  active│          health│
└────────────────────────┴────┴──────┴────────────────────────────────┴────────────────┴────────────────┴────────────┴──────────────┴────────────┴──────┴────────┴────────────────┘
 fst5.eos                 1095      1                         /data/01        default.0       local::geo       booted             rw      nodrain  14.92   online      no smartctl 

just on a single FST, and I can successfully use it via the EOS CLI and the FUSE mount. But when I try the same on another FST node:

[root@fst4 ~]# eosfstregister -r master1.eos:1094 /data/01 default:1
...
error: scheduling group default.0 is full
error: no group available for file system
error: cannot boot filesystem - no filesystem with uuid=

or

eosfstregister -r master1.eos:1094 /data/01 default:2
...
error: Your policy definitions don't match the number of file systems I have found [ #filesystem = 1 #fspolicies = 2 ]
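
In hindsight, the “scheduling group default.0 is full” error simply means the default space only allowed one filesystem per scheduling group at that point. If I remember correctly, you can check that from the MGM with something like this (the exact output varies between EOS versions):

eos space status default    # prints the space parameters, including groupsize/groupmod
eos group ls                # shows how many filesystems each scheduling group holds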

I’ve edited this message to describe what I’ve finally figured out:
The manual says this: “To write a file EOS selects a group and tries place the file into a single group. If you want now to write files with two replicas you have to have at least 2 filesystems per group, if you want to use erasure coding e.g. RAID6, you would need to have 6 filesystems per group.” So, I’ve added all filesystems from all nodes (one fs per node) to the single default.0 group.
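
I believe an already-registered filesystem can also be moved into a group with “eos fs mv” instead of re-registering it, roughly like this (the fsid 3 below is just an example):

eos group ls            # list scheduling groups and how many filesystems they hold
eos fs mv 3 default.0   # move the filesystem with fsid 3 into scheduling group default.0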

[root@master1 ~]# eos fs ls
┌────────────────────────┬────┬──────┬────────────────────────────────┬────────────────┬────────────────┬────────────┬──────────────┬────────────┬──────┬────────┬────────────────┐
│host                    │port│    id│                            path│      schedgroup│          geotag│        boot│  configstatus│       drain│ usage│  active│          health│
└────────────────────────┴────┴──────┴────────────────────────────────┴────────────────┴────────────────┴────────────┴──────────────┴────────────┴──────┴────────┴────────────────┘
 fst1.eos                 1095      1                         /data/01        default.0       local::geo       booted             rw      nodrain  15.00   online      no smartctl 
 fst2.eos                 1095      2                         /data/01        default.0       local::geo       booted             rw      nodrain  14.99   online      no smartctl 
 fst3.eos                 1095      3                         /data/01        default.0       local::geo       booted             rw      nodrain  14.99   online      no smartctl 
 fst4.eos                 1095      4                         /data/01        default.0       local::geo       booted             rw      nodrain  14.94   online      no smartctl 
 fst5.eos                 1095      5                         /data/01        default.0       local::geo       booted             rw      nodrain  14.90   online      no smartctl 

I did this by running the following commands on master1 (allowing up to 10 filesystems per scheduling group):

eos space define default 10
eos space set default on
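
To double-check that the new group size was picked up, something like this should now report groupsize=10 for the default space (field names may differ slightly between versions):

eos space status default | grep -i group   # groupsize should read 10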

and on each FST node:

rm -rf /data
mkdir -p /data/01
chown daemon:daemon /data/01
eosfstregister -r master1.eos:1094 /data/01 default:1
systemctl restart eos5-fst@fst
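
After registering, I would verify from the MGM that each node is online and its filesystem actually booted; if one gets stuck, a boot can be re-triggered (fsid 4 below is just an example). The “eos fs ls” output below then shows all five filesystems booted.

eos node ls     # all five FST nodes should show up as online
eos fs boot 4   # re-trigger the boot of a single filesystem if needed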

on master1:

[root@master1 ~]# eos fs ls
┌────────────────────────┬────┬──────┬────────────────────────────────┬────────────────┬────────────────┬────────────┬──────────────┬────────────┬──────┬────────┬────────────────┐
│host                    │port│    id│                            path│      schedgroup│          geotag│        boot│  configstatus│       drain│ usage│  active│          health│
└────────────────────────┴────┴──────┴────────────────────────────────┴────────────────┴────────────────┴────────────┴──────────────┴────────────┴──────┴────────┴────────────────┘
 fst1.eos                 1095      1                         /data/01        default.0       local::geo       booted             rw      nodrain  15.00   online      no smartctl 
 fst2.eos                 1095      2                         /data/01        default.0       local::geo       booted             rw      nodrain  14.99   online      no smartctl 
 fst3.eos                 1095      3                         /data/01        default.0       local::geo       booted             rw      nodrain  14.99   online      no smartctl 
 fst4.eos                 1095      4                         /data/01        default.0       local::geo       booted             rw      nodrain  14.94   online      no smartctl 
 fst5.eos                 1095      5                         /data/01        default.0       local::geo       booted             rw      nodrain  14.90   online      no smartctl 
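
At this point “eos group ls” should confirm that default.0 holds all five filesystems (one per node), which is what makes a 3-stripe replica layout possible:

eos group ls    # default.0 should now list 5 filesystems, enough for nstripes=3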

and finally:

[root@master1 ~]# eos mkdir /eos/dev/test/ 
[root@master1 ~]# eos attr set default=replica /eos/dev/test/
[root@master1 ~]# eos attr set sys.forced.nstripes=3 /eos/dev/test/
[root@master1 ~]# eos chmod 777 /eos/dev/test/
success: mode of file/directory /eos/dev/test/ is now '777'
[root@master1 ~]# eos cp /etc/passwd /eos/dev/test/
[eoscp] passwd                   Total 0.00 MB  |====================| 100.00 % [0.0 MB/s]
[eos-cp] copied 1/1 files and 1111 B in 0.07 seconds with 15.34 kB/s
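
To confirm the file really got three replicas on different FSTs, I would check the directory attributes and the file’s replica locations, roughly like this (output shortened):

eos attr ls /eos/dev/test/           # should show sys.forced.layout="replica" and sys.forced.nstripes="3" among others
eos file info /eos/dev/test/passwd   # prints the layout and the FSTs holding each of the three replicas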