Hi Jean-Michel,
I think for filesystems 46/51 the value is still uninitialized … @amanzi can probably explain better when the nodrain status is set. In any case, this does not cause a problem,
and the fix has been available since EOS 4.3.0.
Anyway, this is just a visualization issue: the first time the drainstatus changes, it will be set correctly.
cheers
Andrea
This is not a newly added EOS filesystem; it appeared after an update of EOS on the fileservers, so it is a bit strange… Is there a way to suppress this status so that it looks exactly like the other filesystems?
Hi Jean-Michel,
Are you running different versions of EOS on the fileservers involved? Can you report the versions you are running?
There is no way to remove the drainstatus from a filesystem; it should be present on all of them.
cheers
Andrea
I am sorry for the long delay. All EOS cluster members are running the same version:
rpm -qa | grep eos
eos-folly-2017.09.18.00-4.el6.x86_64
libmicrohttpd-0.9.38-eos.xves.el6.x86_64
eos-apmon-1.1.4-1.x86_64
eos-server-4.2.25-1.el6.x86_64
eos-client-4.2.25-1.el6.x86_64
I can’t remember whether this specific host, nanxrd06, was treated differently, but it is the only one whose four filesystems have the drainstatus set to “nodrain”; the drainstatus is blank for all the filesystems on the other servers.
We recently added many filesystems and, indeed, we also observe that their drainstatus field is not set. It is not only a visualization issue: the output of eos fs ls -d is now non-empty and lists all these new filesystems. What would be the way to change the drainstatus value?
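For what it's worth, a sketch of how one might inspect and reset the status from the MGM console (fsid 46 is taken from the earlier message purely as an example; adjust to your own filesystem IDs, and note this assumes the standard eos CLI subcommands):

```shell
# List only filesystems currently in a drain-related state;
# ideally this output is empty on a healthy instance
eos fs ls -d

# Inspect the detailed status of a single filesystem (fsid 46 as an example)
eos fs status 46

# Setting the config status back to rw should clear a stale or
# finished drain state on that filesystem
eos fs config 46 configstatus=rw
```

These commands talk to a live MGM, so they need to be run from a node with an authenticated eos client session.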