There are two things you have to do to split an instance into pieces:
Deploy a ‘router’ front-end for XRootd & HTTP traffic.
You can set up one or more front-end EOS servers, where you configure routing on the MGM for XRootD and HTTP traffic.
[root@ajp console]# eos route -h
Usage: route [ls|link|unlink]
namespace routing to redirect clients to external instances
route ls [<path>]
list all routings or the one matching for the given path
route link <path> <dst_host>[:<xrd_port>[:<http_port>]],...
create routing from <path> to destination host. If the xrd_port
is ommited the default 1094 is used, if the http_port is ommited
the default 8000 is used. Several dst_hosts can be specified by
separating them with ",". The redirection will go to the MGM
from the specified list
e.g route /eos/dummy/ foo.bar:1094:8000
route unlink <path>
remove routing matching path
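In practice, routing a subtree to an external instance looks like the following sketch (the hostname is a placeholder, and the ports are the defaults mentioned in the help text):

```shell
# On the front-end MGM: route /eos/dummy/ to the MGM on foo.bar
# (XRootD port 1094, HTTP port 8000)
eos route link /eos/dummy/ foo.bar:1094:8000

# List the currently configured routes
eos route ls

# Remove the route again
eos route unlink /eos/dummy/
```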
For FUSE access we use AUTOFS to map paths to instances. The mounts cannot use the front-end router service; they need to talk directly to the instances.
As an example, you deploy:
(empty MGM): eosentry.foo
Now you define routing like this:
/eos/tree1/ => eosbranch1
/eos/tree2/ => eosbranch2
All XRootD and HTTP clients always talk to eosentry.foo.
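On the entry MGM, the routing table above could be configured like this (a sketch using the example hostnames; the ports are the documented defaults):

```shell
# Run on eosentry.foo: redirect the two subtrees to their branch instances
eos route link /eos/tree1/ eosbranch1:1094:8000
eos route link /eos/tree2/ eosbranch2:1094:8000

# Verify the routing table
eos route ls
```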
The FUSE mounts have to be configured to match the routing table. One can actually use the eos route ls command to derive the autofs mount table.
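An autofs table matching the example routing could look roughly like this. This is only a sketch: the map-entry syntax for mounting EOS via FUSE (eosxd) varies between EOS versions and deployments, so treat the fstype and device fields below as assumptions to be adapted:

```shell
# /etc/auto.master.d/eos.autofs - register a direct map (assumption):
#   /-  /etc/auto.eos
#
# /etc/auto.eos - one entry per routing rule, each FUSE-mounting
# the target instance directly (syntax is an assumption):
/eos/tree1  -fstype=fuse,allow_other  :eosxd\#eosbranch1
/eos/tree2  -fstype=fuse,allow_other  :eosxd\#eosbranch2
```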
At CERN we are additionally experimenting with bind mounts to avoid having too many mount points.
E.g. we mount five EOS instances and then define bind mounts on top of them to get more fine-grained assignments like:
/eos/letter/a => /eos/instance1/
/eos/letter/b => /eos/instance2/
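The bind mounts themselves are ordinary Linux bind mounts stacked on top of the instance mounts, e.g. (paths as in the example above; requires root, and assumes the instances are already FUSE-mounted):

```shell
# /eos/instance1 and /eos/instance2 are already mounted EOS instances
mkdir -p /eos/letter/a /eos/letter/b
mount --bind /eos/instance1 /eos/letter/a
mount --bind /eos/instance2 /eos/letter/b
```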
This is not as flexible as splitting the MGM tree internally, but it creates 100% independent trees, which do not share the same points of failure.