Inconsistent mount points on AlmaLinux 9.x after FST OS upgrade

Hi,

After upgrading all FSTs of our EOS instance (ALICE::Kolkata::EOS2) from CentOS 7.9 to AlmaLinux 9.x, we are facing a peculiar problem: the mount points of the disks change arbitrarily on every reboot.

We have 16 mount points for EOS data (16 NL-SAS HDDs, 10 TB each, in RAID-0) and 2 SSDs for the OS (480 GB, RAID-1) in each of our 8 FST servers.
++++++++++++
[root@************ ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 8.9T 0 disk
└─sda1 8:1 0 8.9T 0 part /xdata1
sdb 8:16 0 8.9T 0 disk
└─sdb1 8:17 0 8.9T 0 part /xdata0
sdc 8:32 0 8.9T 0 disk
└─sdc1 8:33 0 8.9T 0 part /xdata5
sdd 8:48 0 446.6G 0 disk
├─sdd1 8:49 0 256M 0 part /boot/efi
├─sdd2 8:50 0 1G 0 part /boot
├─sdd3 8:51 0 160G 0 part /var
├─sdd4 8:52 0 90G 0 part /
├─sdd5 8:53 0 70G 0 part /tmp
├─sdd6 8:54 0 33.4G 0 part /localdata
├─sdd7 8:55 0 32G 0 part [SWAP]
├─sdd8 8:56 0 30G 0 part /home
└─sdd9 8:57 0 30G 0 part /opt
sde 8:64 0 8.9T 0 disk
└─sde1 8:65 0 8.9T 0 part /xdata2
sdf 8:80 0 8.9T 0 disk
└─sdf1 8:81 0 8.9T 0 part /xdata4
sdg 8:96 0 8.9T 0 disk
└─sdg1 8:97 0 8.9T 0 part /xdata3
sdh 8:112 0 8.9T 0 disk
└─sdh1 8:113 0 8.9T 0 part /xdata7
sdi 8:128 0 8.9T 0 disk
└─sdi1 8:129 0 8.9T 0 part /xdata6
sdj 8:144 0 8.9T 0 disk
└─sdj1 8:145 0 8.9T 0 part /xdata9
sdk 8:160 0 8.9T 0 disk
└─sdk1 8:161 0 8.9T 0 part /xdata10
sdl 8:176 0 8.9T 0 disk
└─sdl1 8:177 0 8.9T 0 part /xdata14
sdm 8:192 0 8.9T 0 disk
└─sdm1 8:193 0 8.9T 0 part /xdata12
sdn 8:208 0 8.9T 0 disk
└─sdn1 8:209 0 8.9T 0 part /xdata11
sdo 8:224 0 8.9T 0 disk
└─sdo1 8:225 0 8.9T 0 part /xdata8
sdp 8:240 0 8.9T 0 disk
└─sdp1 8:241 0 8.9T 0 part /xdata13
sdq 65:0 0 8.9T 0 disk
└─sdq1 65:1 0 8.9T 0 part /xdata15
[root@eos10 ~]#
++++++++++++++++++++++++++++
After every reboot, the /dev/sd* device backing each “/xdata*” mount point changes.
For example:
/dev/sdb1 9.0T 7.2T 1.8T 81% /xdata0

In the output above, /dev/sdb1 is mounted on /xdata0, but after a reboot /dev/sdb1 may end up on /xdata5, /xdata10, /xdata2, etc., and /xdata0 may be backed by /dev/sdq1, /dev/sda1, and so on. It is inconsistent. We also tried different methods, i.e. UUID, multipath and the raw device path (/dev/sd*), but the mapping still changes with every reboot, e.g.:

/dev/sdb1 9.0T 7.2T 1.8T 81% /xdata10
or
/dev/sda1 9.0T 7.2T 1.8T 81% /xdata0
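
To see the shuffling, the device/UUID/mount-point mapping can be dumped after each boot with standard util-linux tools (a sketch; /xdata0 is just one example):
++++++++++++
# device, filesystem UUID and mount point for every disk
lsblk -o NAME,UUID,MOUNTPOINTS

# or per mount point
findmnt -no SOURCE,UUID /xdata0
++++++++++++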

However, on CentOS 7 and AlmaLinux 8 there was no such issue; all the HDDs were mounted on the desired mount points.

Any suggestions would be appreciated.

Regards
Prasun and WLCG Kolkata Team

Hi Prasun! This was a problem on CentOS 7 as well; maybe it just never happened to you before (in my case it was a change of enumeration between the motherboard SATA controller and the HBA). With an increasing number of devices, the “resolving” of devices to logical names can become non-sequential, and as such it will change between reboots. The general recommendation (even for md RAID volumes) is to use UUIDs in fstab.
Check the partition UUID with the blkid command and use that UUID instead of a /dev/ path; this gives you a guaranteed mapping (as it is not possible to have multiple partitions with the same UUID).
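
For example (the UUID below is a made-up placeholder, not from these machines):
++++++++++++
# read the filesystem UUID from the partition
blkid /dev/sdb1
/dev/sdb1: UUID="2f5e8c9a-1234-4b6d-9e7f-0a1b2c3d4e5f" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="..."

# /etc/fstab entry using the UUID instead of the /dev/sdb1 path
UUID=2f5e8c9a-1234-4b6d-9e7f-0a1b2c3d4e5f  /xdata0  xfs  defaults  0 0
++++++++++++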

Hi all,
another option is to use a filesystem label. We use XFS (the label can be set after mkfs with xfs_admin) and label each disk by FST host and group number; the corresponding mount entry in fstab then uses LABEL=.
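A sketch (the label naming scheme is just an example; xfs_admin -L needs the filesystem unmounted):
++++++++++++
# label the filesystem by FST host and group number
xfs_admin -L eos10grp0 /dev/sdb1

# /etc/fstab entry using the label
LABEL=eos10grp0  /xdata0  xfs  defaults  0 0
++++++++++++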
Best,
Erich

Hi Adrian, Erich and the EOS team,

Thank you for your suggestions, and apologies for the late reply.

We tried using the UUID (from blkid and lsblk -fp) instead of the /dev/ path, and setting the label with xfs_admin -L "newlabel" /dev/sdX1, but it did not help. We also regenerated the UUIDs with xfs_admin -U generate /dev/sdX1 and tried again, but that failed as well.

Then we found and checked the disk IDs (addresses in “wwn-0x…” format) of each disk under /dev/disk/by-id/. We copied the address of each particular disk, i.e. its wwn-0x path, and pointed it at the corresponding mount point (/xdata*) in /etc/fstab.
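
For example (the WWN below is a placeholder, not one of our disks):
++++++++++++
# list the persistent WWN-based names of the disks and partitions
ls -l /dev/disk/by-id/ | grep wwn-

# /etc/fstab entry mounting the partition via its persistent path
/dev/disk/by-id/wwn-0x5000c500a1b2c3d4-part1  /xdata0  xfs  defaults  0 0
++++++++++++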

After doing the above, the issue is fixed: even after several reboots, the mount points no longer change.
Thanks again.
Regards
Prasun