Filesystems with status "nodrain"

Hello,
I do not know why, but I have a server whose 4 filesystems have the “nodrain” status:

eos -b fs ls
[…]
nanxrd05.in2p3.fr 1095 46 /data4 default.0 Subatech-H002 booted rw online N/A
nanxrd06.in2p3.fr 1095 47 /data1 default.0 Subatech-H002 booted rw nodrain online N/A
nanxrd06.in2p3.fr 1095 48 /data2 default.0 Subatech-H002 booted rw nodrain online N/A
nanxrd06.in2p3.fr 1095 49 /data3 default.0 Subatech-H002 booted rw nodrain online N/A
nanxrd06.in2p3.fr 1095 50 /data4 default.0 Subatech-H002 booted rw nodrain online N/A
nanxrd07.in2p3.fr 1095 51 /data1 default.0 Subatech-H002 booted rw online N/A
[…]

What does it mean and how do I remove it?
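For reference, a single filesystem can also be inspected in more detail with eos fs status (fsid 47 taken from the listing above as an example; the exact keys in the output may differ between EOS versions):

# Show the detailed parameters of one filesystem (example fsid 47)
eos -b fs status 47
# Keep only the drain-related fields (key names may vary by version)
eos -b fs status 47 | grep -i drain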

Thank you

JM

Hi Jean-Michel,
I think for filesystems 46/51 the value is still uninitialized … @amanzi can probably explain better when the nodrain status is set. In any case, it is not causing a problem.

Hi,
this has been fixed by Elvin


and the fix is available since EOS 4.3.0.
Anyway, this is just a visualization issue: the first time the drainstatus changes, it will be set correctly.
cheers
Andrea

Thank you Andrea,

These are not newly added EOS filesystems; this appeared after an update of EOS on the file servers, so it is a bit strange… Is there a way to clear this status so that they look exactly like the other filesystems?

JM

Hi Jean-Michel,
are you running different versions of EOS on the servers hosting the filesystems involved? Can you report the versions that you are running?
There is no way to remove the drainstatus from a filesystem; the field should be available on all of them.
cheers
Andrea

Andrea,

I am sorry for the long delay. All EOS cluster members are running the same version:
rpm -qa | grep eos
eos-folly-2017.09.18.00-4.el6.x86_64
libmicrohttpd-0.9.38-eos.xves.el6.x86_64
eos-apmon-1.1.4-1.x86_64
eos-server-4.2.25-1.el6.x86_64
eos-client-4.2.25-1.el6.x86_64

I can’t remember whether this specific host, nanxrd06, was treated differently, but it is the only one with the drainstatus set to “nodrain” for its 4 filesystems; the drainstatus is blank for all the filesystems on the other servers.

JM

We recently added many filesystems and, indeed, we also observe that the drainstatus field is not set. It is not only a visualization issue, as the output of eos fs ls -d is now not empty and contains all these new FS. What would be the way to change the drainstatus value?
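For context, this is roughly how we check it (a sketch; the monitoring-format key names may differ between EOS versions):

# List the filesystems currently shown in the drain view
eos -b fs ls -d
# Dump key=value pairs and keep only the drain-related fields
eos -b fs ls -m | grep -i drain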

Hi Frank, Jean-Michel,
you can work around this by forcing a drainstatus change.
For instance, you can start a filesystem drain:

fs config <fsid> configstatus=drain

The drain will not start immediately, as there is a 60-second preparation period during which the drainstatus is set to “prepare”.

Within these 60 seconds you should then stop the drain:

fs config <fsid> configstatus=rw

This should set the drainstatus to “nodrain”.
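Put together, a minimal sketch of the whole workaround from the shell could look like this (fsid 47 is just an example from the listing above; the 30-second sleep keeps you inside the 60-second prepare window):

# Workaround sketch for one filesystem (example fsid 47)
# Starting a drain moves the drainstatus to "prepare" for ~60 seconds
eos -b fs config 47 configstatus=drain
# Stay inside the 60-second preparation window before reverting
sleep 30
# Put the filesystem back to rw; the drainstatus should now read "nodrain"
eos -b fs config 47 configstatus=rw
# Check the result
eos -b fs ls | grep nanxrd06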

let me know
cheers
Andrea

Hi Andrea, Franck,

I just tried it on one FS and it worked.

Thanks

JM