EOS FST - ConversionJob: [3009] Unable to get free physical space

Hello,
We have deployed EOS 5.1+ (currently 5.1.29) for the ATLAS GRID infrastructure at our site: a single MGM with QuarkDB and 5 FST nodes, all running on AlmaLinux 8.8.
For some reason, 2 of our FSs on a single FST have really low space usage, holding far less data than any other FS on any of the FSTs. While investigating the situation we found a lot of ConversionJob errors from the group balancer (example):
ERROR ConversionJob:379 msg="[ERROR] Server responded with an error: [3009] Unable to get free physical space /eos/cyfr/proc/conversion/000000000003f3a7:default.12#00100002^groupbalancer^; No space left on device" tpc_src=root://localhost:1094//eos/atlas/atlasdatadisk/rucio/data17_13TeV/ea/22/DAOD_JETM1.32822114._000073.pool.root.1 tpc_dst=root://localhost:1094//eos/cyfr/proc/conversion/000000000003f3a7:default.12#00100002^groupbalancer^ conversion_id=000000000003f3a7:default.12#00100002^groupbalancer^
despite there being plenty of free space (all of the FSs are online and in RW mode; screenshot attached to the original post).
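For reference, the per-filesystem state can also be checked from the CLI, e.g. (fsid is a placeholder for one of the under-used file systems):

eos fs ls                 # overview of all file systems and their config/boot status
eos fs status <fsid>      # detailed status, including free/used space counters, for a given fsid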

Each /fst/archiveXX directory is owned by daemon:daemon and has permissions set to 755. We can manually create, modify and delete files (tested up to a couple of gigabytes) on the problematic FSs.
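Roughly, the manual write test looked like this (illustrative path; archiveXX stands for one of the affected mounts):

# create a 2 GiB test file directly on the problematic file system, then remove it
dd if=/dev/zero of=/fst/archiveXX/writetest.bin bs=1M count=2048
ls -lh /fst/archiveXX/writetest.bin
rm /fst/archiveXX/writetest.bin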

We do not know where to look further in order to solve this problem. It also affects the draining process.
Any suggestions and feedback are much appreciated; I will update this question if more details are needed.
Cheers

Hi Oskar,

Can you please post the output of the following command?
eos geosched show param

Thanks,
Elvin

Hi,
sure, here's the output:

### GeoTreeEngine parameters :
skipSaturatedAccess = 1
skipSaturatedDrnAccess = 1
skipSaturatedBlcAccess = 1
proxyCloseToFs = 1
penaltyUpdateRate = 1
plctDlScorePenalty = 17.4838(default) | 10(1Gbps) | 10(10Gbps) | 10(100Gbps) | 10(1000Gbps)
plctUlScorePenalty = 17.4838(defaUlt) | 10(1Gbps) | 10(10Gbps) | 10(100Gbps) | 10(1000Gbps)
accessDlScorePenalty = 17.4838(default) | 10(1Gbps) | 10(10Gbps) | 10(100Gbps) | 10(1000Gbps)
accessUlScorePenalty = 17.4838(defaUlt) | 10(1Gbps) | 10(10Gbps) | 10(100Gbps) | 10(1000Gbps)
fillRatioLimit = 80
fillRatioCompTol = 100
saturationThres = 10
timeFrameDurationMs = 1000

Cheers

Hi Oskar,

Please run the following commands and then restart the MGM daemon. Once this is done, all of your FSTs should be used by the groupbalancer and also for direct file placement. There is a bug in the scheduling implementation that is triggered when skipSaturatedAccess & co. are enabled, and it prevents some of the disks from being properly used for the placement of new files.

sudo eos geosched set skipSaturatedAccess 0
sudo eos geosched set skipSaturatedDrnAccess 0
sudo eos geosched set skipSaturatedBlcAccess 0
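
For a systemd-managed EOS 5 installation, the restart and a quick verification could look like this (the unit name may differ on your setup):

sudo systemctl restart eos@mgm   # restart the MGM daemon
eos geosched show param          # confirm the three skipSaturated* parameters are now 0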

Cheers,
Elvin

Hi,
sadly, changing those parameters did not solve our problem. Apart from restarting the MGM daemon itself, we have also tried rebooting the MGM node and, afterwards, the FST containing the “bad” FSs.
I attach some new logs in case they have changed since:

230926 14:33:48 ERROR ConversionJob:379              msg="[ERROR] Server responded with an error: [3009] Unable to get free physical space /eos/cyfr/proc/conversion/00000000004f0ad6:default.12#00100002^groupbalancer^; No space left on device" tpc_src=root://localhost:1094//eos/atlas/atlasdatadisk/rucio/mc23_13p6TeV/ab/d3/HITS.34928691._003890.pool.root.1 tpc_dst=root://localhost:1094//eos/cyfr/proc/conversion/00000000004f0ad6:default.12#00100002^groupbalancer^ conversion_id=00000000004f0ad6:default.12#00100002^groupbalancer^ 
230926 14:33:48 ERROR ConversionJob:379              msg="[ERROR] Server responded with an error: [3009] Unable to get free physical space /eos/cyfr/proc/conversion/000000000005884e:default.12#00100002^groupbalancer^; No space left on device" tpc_src=root://localhost:1094//eos/atlas/atlasdatadisk/rucio/data16_13TeV/d3/85/DAOD_HIGG1D1.27457343._000174.pool.root.1 tpc_dst=root://localhost:1094//eos/cyfr/proc/conversion/000000000005884e:default.12#00100002^groupbalancer^ conversion_id=000000000005884e:default.12#00100002^groupbalancer^

We are looking forward to any further clues.
Cheers

Hi Oskar,

Can you post the output of the following commands:
eos geosched show tree
eos geosched show param
eos attr ls /eos/cyfr/proc/conversion
eos space ls default

Thanks,
Elvin

Hi,
here is the output of the requested commands:

eos geosched show tree

┌──────────┬──────┬────┬──────────────────────┬────────┬─────┬───┬────────┐
│group     │geotag│fsid│                  node│branches│leavs│sum│  status│
└──────────┴──────┴────┴──────────────────────┴────────┴─────┴───┴────────┘
 default.0                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   70 eos02.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.1                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   71 eos02.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.10                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶   80 eos05.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.11                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶   81 eos05.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.12                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶   82 eos06.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.13                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶   83 eos06.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.14                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶  100 eos02.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.15                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    1 eos02.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.16                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    2 eos03.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.17                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    3 eos03.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.18                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    4 eos04.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.19                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    5 eos04.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.2                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   72 eos02.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.20                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    6 eos05.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.21                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    7 eos05.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.22                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    8 eos06.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.23                                           2     1   3          
      └───▶ cyfr                                      1     1   2          
              └──▶    9 eos06.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.3                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   73 eos03.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.4                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   74 eos03.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.5                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   75 eos03.grid.cyfronet.pl        0     1   1    DinRW 
 ------------------------------------------------------------------------- 
 default.6                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   76 eos04.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.7                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   77 eos04.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.8                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   78 eos04.grid.cyfronet.pl        0     1   1 UnvDinRW 
 ------------------------------------------------------------------------- 
 default.9                                            2     1   3          
     └────▶ cyfr                                      1     1   2          
              └──▶   79 eos05.grid.cyfronet.pl        0     1   1 UnvDinRW

eos geosched show param

### GeoTreeEngine parameters :
skipSaturatedAccess = 0
skipSaturatedDrnAccess = 0
skipSaturatedBlcAccess = 0
proxyCloseToFs = 1
penaltyUpdateRate = 1
plctDlScorePenalty = 5.36768(default) | 10(1Gbps) | 10(10Gbps) | 10(100Gbps) | 10(1000Gbps)
plctUlScorePenalty = 5.36768(defaUlt) | 10(1Gbps) | 10(10Gbps) | 10(100Gbps) | 10(1000Gbps)
accessDlScorePenalty = 5.36768(default) | 10(1Gbps) | 10(10Gbps) | 10(100Gbps) | 10(1000Gbps)
accessUlScorePenalty = 5.36768(defaUlt) | 10(1Gbps) | 10(10Gbps) | 10(100Gbps) | 10(1000Gbps)
fillRatioLimit = 80
fillRatioCompTol = 100
saturationThres = 10
timeFrameDurationMs = 1000

eos attr ls /eos/cyfr/proc/conversion
It returns nothing, as no attributes are set for this directory. In case it is needed, I also post the attributes of the proc directory, which were set for accounting:
eos attr ls /eos/cyfr/proc

sys.accounting.storageendpoints.0.assignedshares.0="all"
sys.accounting.storageendpoints.0.endpointurl="root://eos01.grid.cyfronet.pl:1094/"
sys.accounting.storageendpoints.0.interface="xrootd"
sys.accounting.storageendpoints.0.name="eoscyfrxrootd"
sys.accounting.storageendpoints.1.assignedshares.0="all"
sys.accounting.storageendpoints.1.endpointurl="https://eos01.grid.cyfronet.pl:8443/"
sys.accounting.storageendpoints.1.interface="https"
sys.accounting.storageendpoints.1.name="eoscyfrhttps"

In addition, I checked the ownership of the files/directories inside proc. Not all of them are owned by daemon, but I do not know whether that matters for EOS.
eos ls -lah /eos/cyfr/proc

drwxr-xr-x   1 root     root          20.48 k Sep 22 14:00 .
drwxrwxr-x   1 root     root          20.48 k Jan  1  1970 ..
-rwxr-xr-x   1 root     root                0 Sep 22 14:00 accounting
drwxrwx---   1 daemon   daemon              0 Jan  1  1970 archive
drwxr-xr-x   1 daemon   daemon              0 Jan  1  1970 clone
drwxrwx---   1 daemon   daemon              0 Sep 27 12:32 conversion
-rw-r--r--   0 root     root             4096 Jul 27 13:44 master
-rw-r--r--   0 root     root             4096 Jul 27 13:44 quota
-rw-r--r--   0 root     root             4096 Jul 27 13:44 reconnect
drwx------   1 root     root                0 Jan  1  1970 recycle
drwxr-xr-x   1 root     root                0 Jan  1  1970 tape-rest-api
drwx------   1 root     root                0 Jan  1  1970 token
drwx------   1 daemon   root                0 Jan  1  1970 tracker
-rw-r--r--   0 root     root             4096 Jul 27 13:44 who
-rw-r--r--   0 root     root             4096 Jul 27 13:44 whoami
drwx------   1 daemon   root                0 Jan  1  1970 workflow

eos space ls default

┌──────────┬────────────────┬────────────┬────────────┬──────┬─────────┬───────────────┬──────────────┬─────────────┬─────────────┬──────────────┬──────┬──────────┬───────────┬───────────┬──────┬────────┬───────────┬──────┬────────┬───────────┐                           
│type      │            name│   groupsize│    groupmod│ N(fs)│ N(fs-rw)│ sum(usedbytes)│ sum(capacity)│ capacity(rw)│ nom.capacity│sched.capacity│ quota│ balancing│  threshold│  converter│   ntx│  active│        wfe│   ntx│  active│ intergroup│                           
└──────────┴────────────────┴────────────┴────────────┴──────┴─────────┴───────────────┴──────────────┴─────────────┴─────────────┴──────────────┴──────┴──────────┴───────────┴───────────┴──────┴────────┴───────────┴──────┴────────┴───────────┘                           
 spaceview           default            0           24     24        24       650.79 TB      918.50 TB     918.50 TB       1.00 PB      188.76 TB     on         on          20          on     20        0         off      1        0          on

I hope the formatting won't break those CLI tables.
Cheers

Hi Oskar,

Do you plan to use 2 replicas for the layouts, or just simple files? If you don’t use any replication, then put all the file systems in just one group (default.0). You can do this with the eos fs mv -f command; see the sketch below.
If you do plan to use replication, then I would recommend a number of groups equal to the maximum number of file systems on any one of your machines, and then put each machine’s file systems into different groups, so that no two file systems from the same machine end up in the same group.
Once your groups and file systems are restructured, issue eos geosched forcerefresh and send the output of eos geosched show tree again.
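
A minimal sketch of the single-group restructuring, using the fsids from your tree output above (adjust the list to your actual fsids):

# move every file system into group default.0, then refresh the scheduler view
for fsid in 1 2 3 4 5 6 7 8 9 70 71 72 73 74 75 76 77 78 79 80 81 82 83 100; do
    eos fs mv -f $fsid default.0
done
eos geosched forcerefresh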

Cheers,
Elvin

Hi,
As each of our FSs provides RAID6-like capabilities, we are not using any kind of file replication.
We have moved all FSs to a single group and ran the required commands, and they now behave more or less as expected: new files are landing on all FSs, including the problematic ones. Here's the output of eos geosched show tree:

┌─────────┬──────┬────┬──────────────────────┬────────┬─────┬───┬─────────┐
│group    │geotag│fsid│                  node│branches│leavs│sum│   status│
└─────────┴──────┴────┴──────────────────────┴────────┴─────┴───┴─────────┘
 default.0                                           2    33  35           
     └───▶ cyfr                                      1    33  34           
             ├──▶    1 eos02.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶  100 eos02.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   18 eos06.grid.cyfronet.pl        0     1   1     DinRW 
             ├──▶   19 eos02.grid.cyfronet.pl        0     1   1  UnvDinRW 
             ├──▶    2 eos03.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   20 eos03.grid.cyfronet.pl        0     1   1  UnvDinRW 
             ├──▶    3 eos03.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   36 eos04.grid.cyfronet.pl        0     1   1     DinRW 
             ├──▶   37 eos05.grid.cyfronet.pl        0     1   1  UnvDinRW 
             ├──▶   38 eos06.grid.cyfronet.pl        0     1   1  UnvDinRW 
             ├──▶   39 eos02.grid.cyfronet.pl        0     1   1  UnvDinRW 
             ├──▶    4 eos04.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   40 eos03.grid.cyfronet.pl        0     1   1  UnvDinRW 
             ├──▶   41 eos04.grid.cyfronet.pl        0     1   1  UnvDinRW 
             ├──▶    5 eos04.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶    6 eos05.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶    7 eos05.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   70 eos02.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   71 eos02.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   72 eos02.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   73 eos03.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   74 eos03.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   75 eos03.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   76 eos04.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   77 eos04.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   78 eos04.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   79 eos05.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶    8 eos06.grid.cyfronet.pl        0     1   1     DinRW 
             ├──▶   80 eos05.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   81 eos05.grid.cyfronet.pl        0     1   1 BoutDinRW 
             ├──▶   82 eos06.grid.cyfronet.pl        0     1   1     DinRW 
             ├──▶   83 eos06.grid.cyfronet.pl        0     1   1 BoutDinRW 
             └──▶    9 eos06.grid.cyfronet.pl        0     1   1 BoutDinRW

Although our problems have been fixed, this is only a stopgap solution; the group balancer bug (?) still remains. My understanding of the documentation is that our previous configuration was a correct way of achieving our goals.

Thanks for all the guidance and for presenting a working solution,
Cheers

Hi Oskar,

Given that you now have file systems in just one group, you no longer need to run the groupbalancer since there is nothing to balance between groups. At this point you should enable the simple balancer to make sure the data is evenly distributed between the file systems in the same group (in your case the only group default.0).

My suspicion about why the group balancer failed is that, by default, the space policy assumes a replica layout with 2 replicas. In the beginning you only had groups with a single file system each, so no group could be selected to place a file with 2 replicas; this is the reason for the initial error.

In the current setup you only have one group, so the groupbalancer cannot find another group to balance files into; this is probably the reason for the failures you currently get.
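
If you want to double-check which placement policy applies to the space, something along these lines should show it and, if needed, override it (policy keys as I recall them from the EOS policy documentation):

eos space status default                             # look for the space.policy.* entries
eos space config default space.policy.layout=plain   # example: request plain (non-replicated) layouts
eos space config default space.policy.nstripes=1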

Therefore, please try enabling the normal balancer (running one more eos geosched forcerefresh before that) and let me know if things improve.
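
A minimal sketch of what that could look like, assuming the usual space configuration keys:

# refresh the scheduler view after the group restructuring
eos geosched forcerefresh
# enable the intra-group balancer on the default space
eos space config default space.balancer=on
# optionally tune the deviation threshold (in percent) at which balancing kicks in
eos space config default space.balancer.threshold=5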

Cheers,
Elvin