RAID6 (RAIN6) implementation in eos-citrine on CentOS 7.5

Dear All,
We have been configuring EOS Citrine on our new 1.1 PB storage, which has 7 (seven) FSTs and 2 MGMs. Each FST contains 16 × 10 TB HDDs for data storage, with the OS on a separate disk. That gives 7 × 16 = 112 data filesystems in total, which we arranged into 16 groups of 7 filesystems each, one from each FST.

[root@eos-mgm ~]# eos -b group ls
┌──────────┬────────────────┬────────────┬──────┬────────────┬────────────┬────────────┬──────────┬──────────┬──────────┐
│type      │            name│      status│ N(fs)│ dev(filled)│ avg(filled)│ sig(filled)│ balancing│   bal-shd│ drain-shd│
└──────────┴────────────────┴────────────┴──────┴────────────┴────────────┴────────────┴──────────┴──────────┴──────────┘
 groupview         default.0           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.1           on      7         0.00         0.00         0.00       idle          0          0
 groupview        default.10           on      7         0.00         0.00         0.00       idle          0          0
 groupview        default.11           on      7         0.00         0.00         0.00       idle          0          0
 groupview        default.12           on      7         0.00         0.00         0.00       idle          0          0
 groupview        default.13           on      7         0.00         0.00         0.00       idle          0          0
 groupview        default.14           on      7         0.00         0.00         0.00       idle          0          0
 groupview        default.15           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.2           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.3           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.4           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.5           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.6           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.7           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.8           on      7         0.00         0.00         0.00       idle          0          0
 groupview         default.9           on      7         0.00         0.00         0.00       idle          0          0

[root@eos-mgm ~]# eos -b node ls
┌──────────┬────────────────────────────────┬────────────────┬──────────┬────────────┬──────┬──────────┬────────┬────────┬────────────────┬─────┐
│type      │                        hostport│          geotag│    status│      status│  txgw│ gw-queued│  gw-ntx│ gw-rate│  heartbeatdelta│ nofs│
└──────────┴────────────────────────────────┴────────────────┴──────────┴────────────┴──────┴──────────┴────────┴────────┴────────────────┴─────┘
 nodesview       eos04.tier2-kol.res.in:1095    geotagdefault     online           on    off          0       10      120                1    16
 nodesview       eos05.tier2-kol.res.in:1095    geotagdefault     online           on    off          0       10      120                1    16
 nodesview       eos06.tier2-kol.res.in:1095    geotagdefault     online           on    off          0       10      120                2    16
 nodesview       eos07.tier2-kol.res.in:1095    geotagdefault     online           on    off          0       10      120                2    16
 nodesview       eos08.tier2-kol.res.in:1095    geotagdefault     online           on    off          0       10      120                2    16
 nodesview       eos09.tier2-kol.res.in:1095    geotagdefault     online           on    off          0       10      120                2    16
 nodesview       eos10.tier2-kol.res.in:1095    geotagdefault     online           on    off          0       10      120                2    16

[root@eos-mgm ~]# eos -b space ls
┌──────────┬────────────────┬────────────┬────────────┬──────┬─────────┬───────────────┬──────────────┬─────────────┬─────────────┬──────┬──────────┬───────────┬───────────┬──────┬────────┬───────────┬──────┬────────┬───────────┐
│type      │            name│   groupsize│    groupmod│ N(fs)│ N(fs-rw)│ sum(usedbytes)│ sum(capacity)│ capacity(rw)│ nom.capacity│ quota│ balancing│  threshold│  converter│   ntx│  active│        wfe│   ntx│  active│ intergroup│
└──────────┴────────────────┴────────────┴────────────┴──────┴─────────┴───────────────┴──────────────┴─────────────┴─────────────┴──────┴──────────┴───────────┴───────────┴──────┴────────┴───────────┴──────┴────────┴───────────┘
 spaceview           default            0           24    112       112         4.05 GB        1.10 PB       1.10 PB           0 B    off        off          20         off      2        0         off      1        0         off
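Side note: our understanding is that the group geometry (filesystems per group, number of groups) is declared with "space define". A minimal sketch with our intended values of 7 filesystems per group and 16 groups, shown here only as an example; please correct us if this is wrong:

[root@eos-mgm ~]# eos -b space define default 7 16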

==========================

Now, how do we implement RAID6 (4+2) on this storage?
We use the following parameters to enable raid6:
++++++
[root@eos-mgm ~]# eos -b attr ls /eos/alicekolkata/grid

sys.forced.blockchecksum="crc32c"
sys.forced.blocksize="1M"
sys.forced.checksum="adler"
sys.forced.layout="raid6"
sys.forced.nstripes="6"
sys.forced.space="default"
+++++++
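For completeness, the commands we used to set these attributes on the directory were of the form:

[root@eos-mgm ~]# eos -b attr set sys.forced.layout=raid6 /eos/alicekolkata/grid
[root@eos-mgm ~]# eos -b attr set sys.forced.nstripes=6 /eos/alicekolkata/grid
[root@eos-mgm ~]# eos -b attr set sys.forced.blocksize=1M /eos/alicekolkata/grid
[root@eos-mgm ~]# eos -b attr set sys.forced.blockchecksum=crc32c /eos/alicekolkata/grid
[root@eos-mgm ~]# eos -b attr set sys.forced.checksum=adler /eos/alicekolkata/grid
[root@eos-mgm ~]# eos -b attr set sys.forced.space=default /eos/alicekolkata/grid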
Are the above parameters right? Our understanding is that layout=raid6 with nstripes=6 gives 4 data + 2 parity stripes, i.e. the 4+2 we want. If anything is wrong, please suggest corrections.
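To see what a new file actually gets, we plan to copy in a test file and inspect its layout (the file name below is just an example):

[root@eos-mgm ~]# eos -b cp /tmp/testfile /eos/alicekolkata/grid/testfile
[root@eos-mgm ~]# eos -b file info /eos/alicekolkata/grid/testfile

"file info" should then report the forced layout (raid6, 6 stripes) and the location of each stripe.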

Also, we configured the recycle bin in EOS, but it gives an error when we assign a size, i.e.:
eos -b recycle config --size 100G
Please advise.
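For reference, the fuller sequence we believe is required (lifetime and the recycled subtree configured along with the size; the numeric values below are examples only, and we are not sure whether the "100G" suffix is accepted or whether the size must be given in bytes):

[root@eos-mgm ~]# eos -b recycle config --lifetime 604800
[root@eos-mgm ~]# eos -b recycle config --size 100000000000
[root@eos-mgm ~]# eos -b recycle config --add-bin /eos/alicekolkata/grid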