MGM disk space management

We’ve had a few cases of the MGM’s metadata log disk filling up to the point where a compactification can no longer be performed. Unfortunately, EOS doesn’t handle this very well.

Looking at mgm/Master.cc, Master::Supervisor() has some code for handling writes to a nearly full metadata disk, but as far as I can see the compactification code has no similar check. The result is that compactifications can fail, requiring an MGM restart and possibly a manual recovery of the mdlog files.

In one extreme case, I managed to create a situation where I had to copy the directory mdlog from a file handle in /proc for the MGM thread, after it had been unlinked from the filesystem.

I could write a check in the compactification section to handle this, although I’m leaning towards making the MGM write stalling configurable, or basing it on the current mdlog sizes. I propose setting the threshold at a percentage over the current metadata size. This means compactification would succeed at least for the current pass, making the condition easy to log and detect before it becomes an actual production issue, and allowing earlier handling of bad situations that would otherwise require heroic effort to recover from.

How are others managing this at the moment?

At the JRC we used to have similar problems with the disk space filling up. Since the /var partition is used for both logging and mdlog file storage, we addressed it by:

  1. Selecting appropriate log levels for our needs
  2. Running logrotate every hour instead of every 24 h
  3. Giving /var/eos/ its own mount point on a big volume (5+ TB)

Nothing fancy but it works.
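For item 2 above, a policy along these lines is what we run (the paths, rotation count, and directives here are examples for illustration, not the stock EOS packaging; adjust to your layout). The file goes in /etc/logrotate.d/ and logrotate itself is invoked from an hourly cron job rather than the daily default:

```
# Example hourly rotation policy for MGM logs (illustrative paths).
/var/log/eos/mgm/*.log {
    hourly
    rotate 48
    compress
    missingok
    notifempty
    copytruncate
}
```

Note that the `hourly` directive only takes effect if logrotate actually runs every hour, e.g. via a script in /etc/cron.hourly calling `logrotate /etc/logrotate.conf`.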

We have similar monitoring in place, filesystem paths on different storage, and clean-up processes. We currently rotate our logs every minute if they’re over a certain size, which allows us to turn on debugging.

Part of the problem is that a compactification can be started without sufficient disk space; this blocks further work by the MGM and creates a risky situation for the mdlogs on disk.

I’m thinking about submitting a patch to make this configurable as a ratio of current mdlog size to free disk space, and to have the MGM stall writing clients (as it does presently). I’d rather do that, and be able to clean things up without moving data around, even if the MGM has gone read-only, than risk running out of disk.

Running out of disk on the MGM gets very messy very quickly.

How do you perform compactification? Do you use eos-log-compact? If so, what is the proper way to use this tool?

On our in-memory instances of EOS, we use the ns compact command in the eos CLI. We currently run it on a manual schedule because we’re paranoid.

On our QDB instances we don’t have to do this, as it’s handled by the system.

Thank you.