EOSPPS qdb config

I’ve been experimenting with running QuarkDB and I’ve got a working setup, but it’s kind of clunky.

A couple questions:

  • How are you guys defining all the qdb node hostname/port combinations for mgmofs.qdbcluster and for the quarkdb-create command? Is it manual, or scripted somehow?
  • Is there an easy way to determine just the current leader of the cluster? At the moment I’m running raft-info and grepping for LEADER; is there a nicer way? I had a quick look through the code and didn’t find anything.
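For context, the grep approach looks something like this. The sample output below only mimics the shape of a raft-info reply (field names and hosts are placeholders); in practice you would pipe in the live output, e.g. from `redis-cli -p 7777 raft-info`:

```shell
#!/usr/bin/env bash
# Sketch of extracting the leader from raft-info-style output.
# The heredoc stands in for: redis-cli -p 7777 raft-info
raft_info_output=$(cat <<'EOF'
1) TERM 14
2) LOG-START 0
3) LEADER qdb2.example.org:7777
4) STATUS FOLLOWER
EOF
)

# Match the field name exactly rather than grepping the whole line,
# so e.g. a "STATUS LEADER" line would not cause a false match.
leader=$(echo "$raft_info_output" | awk '$2 == "LEADER" {print $3}')
echo "$leader"
```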


Hi Crystal,

  1. mgmofs.qdbcluster is set through puppet in our setup, and we run quarkdb-create manually when first configuring the nodes, at least for now. We’ll probably add some logic to our scripts to run quarkdb-create during installation.

  2. Now, there is. :stuck_out_tongue: Try “raft-info leader”. I may extend this to allow filtering of any field on “raft-info”, but for now it only works for the leader.
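On point 1, an install-time step for quarkdb-create could be sketched roughly like this. The hostnames, cluster ID, and database path are made-up placeholders, and the flags are as documented for quarkdb-create; treat this as a sketch, not the actual CERN script:

```shell
#!/usr/bin/env bash
# Hypothetical installer step: initialise a new QuarkDB node.
NODES="qdb1.example.org:7777,qdb2.example.org:7777,qdb3.example.org:7777"
CLUSTER_ID="b50f4f8e-example-cluster-id"
DB_PATH="/var/lib/quarkdb/node"

CMD="quarkdb-create --path $DB_PATH --clusterID $CLUSTER_ID --nodes $NODES"

# Only run for real if the binary is installed; dry run otherwise.
if command -v quarkdb-create >/dev/null 2>&1; then
  $CMD
else
  echo "would run: $CMD"
fi
```

The same NODES string could then be reused to template mgmofs.qdbcluster, so the two never drift apart.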


That’s awesome, thanks so much Georgios!! :grinning: I’d like to see if I can automate a node joining the cluster somehow :smiley:
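In case it helps anyone else thinking about the same thing, a rough sketch of the join step, assuming the new node is already running with the same cluster ID. The membership commands (raft-add-observer, then raft-promote-observer) are issued against the current leader; hostnames and port here are placeholders, and this version only prints the invocations rather than executing them:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of joining a new node to an existing cluster.
LEADER="qdb1.example.org:7777"   # e.g. from: redis-cli -p 7777 raft-info leader
NEW_NODE="qdb4.example.org:7777"
LEADER_HOST="${LEADER%:*}"
LEADER_PORT="${LEADER#*:}"

# Dry run: print the redis-cli invocations instead of executing them.
for cmd in raft-add-observer raft-promote-observer; do
  echo "redis-cli -h $LEADER_HOST -p $LEADER_PORT $cmd $NEW_NODE"
done
```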

Since you’re doing the host/port configuration via Puppet, is the list just hardcoded somewhere? I had a quick look through the cernops/puppet-eosserver GitHub repo and couldn’t find where those values were being set (though I haven’t actually used Puppet before, so I gotta admit I’m not entirely sure what to look for).

(i did find this hilarious though: ‘listkeys’ => %w[my little poney])

Hi, it’s set in this file; grep for “qdbcluster”:

The quarkdb-create invocation is run manually for now, and redis.myself is set to the FQDN of each QDB machine in question.
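For anyone following along, a minimal sketch of what the per-node QuarkDB xrootd configuration can look like. The directive names come from the QuarkDB documentation; the file path, hostname, port, and database path here are illustrative placeholders, not our production values:

```
# illustrative QuarkDB xrootd config (e.g. /etc/xrootd/xrootd-quarkdb.cfg)
xrd.port 7777
xrd.protocol redis:7777 libXrdQuarkDB.so
redis.mode raft
redis.database /var/lib/quarkdb/node
redis.myself qdb1.example.org:7777
```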

I don’t appear to have view access for that repository! :persevere:

Ah, indeed, that repo also contains internal configuration for production instances, and is locked down. Anyway, here is the relevant section from that link:

```yaml
xrootd.async: off nosf
xrootd.chksum: adler32
xrd.sched: mint 64 maxt 2048 idle 300
xrd.timeout: idle 86400
all.export: /
all.role: manager
oss.fdlimit: 16384 32768
sec.protocol:
  - unix
  - sss -c /etc/eos.keytab -s /etc/eos-archive.keytab
  - krb5 /etc/krb5.keytab.eospps xrootd/
  - gsi -crl:1 -moninfo:1 -cert:/etc/grid-security/daemon/hostcert.pem -key:/etc/grid-security/daemon/hostkey.pem -gridmap:/etc/grid-security/grid-mapfile -d:1 -gmapopt:2
sec.level: all relaxed standard
mgmofs.fs: /
mgmofs.targetport: 1095
mgmofs.authdir: /var/eos/auth
mgmofs.trace: all debug
mgmofs.broker: root://localhost:1097//eos/
mgmofs.instance: eospps
mgmofs.configdir: /var/eos/config
mgmofs.metalog: /var/eos/md
mgmofs.reportnamespace: false
mgmofs.reportstore: true
mgmofs.reportstorepath: /var/eos/report
mgmofs.txdir: /var/eos/tx
mgmofs.autoloadconfig: default
mgmofs.autosaveconfig: true
mgmofs.archivedir: /var/eos/archive/
mgmofs.nslib: /usr/lib64/
mgmofs.centraldraining: true
sec.protbind:
  - localhost.localdomain sss unix
  - localhost sss unix
  - '* only krb5 gsi sss unix'
```

The value of mgmofs.qdbcluster is simply hardcoded for each instance. Hope it helps!
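To give a concrete (but hypothetical) picture of what “hardcoded per instance” can look like in hiera data; the key name and hosts below are made up and may not match the actual cernops/puppet-eosserver layout:

```yaml
# Hypothetical hiera snippet, not the actual production data.
eosserver::mgm::xrd_options:
  mgmofs.qdbcluster: 'qdb1.example.org:7777 qdb2.example.org:7777 qdb3.example.org:7777'
```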

Yep thanks!! :grinning: