Currently, the nmpost cluster is a mix of Torque/Moab nodes (nmpost{001..090}) and HTCondor nodes (nmpost{091..120}, devhost{001..002}). Eventually we would like to replace Torque/Moab with Slurm, as we think it can do most of what Torque/Moab does while being free and regularly releasing new versions; Torque/Moab development seems to have stalled.
We upgraded to Torque-6/Moab-9 in 2018, which is when we started having to pay for Torque/Moab. We did this because Torque-6 understood cgroups and NUMA nodes (although it doesn't handle NUMA nodes the way I would like it to), but Torque-6 was no longer compatible with the free Maui scheduler, forcing us to purchase the Moab scheduler. Since then we have leveraged a couple of things Moab can do that Maui never could, like increasing the number of jobs the scheduler looks ahead at when scheduling. This allows Moab to start reserving space for pending vlass jobs on vlasstest nodes, but that is not a critical requirement. Largely, the win was cgroups for resource separation and NUMA awareness to double the number of interactive nodes, both of which only required the new version of Torque, which in turn required Moab, which in turn we had to pay for. See what they did there? You can read more about it at https://staff.nrao.edu/wiki/bin/view/DMS/SCGTorque6Moab9Presentation
Another option for replacing Torque/Moab, instead of Slurm, is OpenPBS, which is the free, open-source version of PBS Pro maintained by Altair Engineering. I haven't used OpenPBS yet, but it may be a simpler transition than Slurm.
- To Do
- upgrade testpost-master to RHEL7 so it can run Slurm
- upgrade nmpost-master to RHEL7 so it can run Slurm
- Implement some sort of mechanism to keep vlass jobs on vlass nodes, hera jobs on hera nodes, etc.
- Document how to use HTCondor and Slurm with emphasis on transitioning from Torque/Moab
- https://staff.nrao.edu/wiki/bin/view/NM/HTCondor#Simple_Documentation
- https://staff.nrao.edu/wiki/bin/view/NM/SlurmExampleSubmit
- I will convert these into pages in https://info.nrao.edu/computing/guide/cluster-processing/
- Create a small subset of nmpost cluster that only runs Slurm for users to test.
- Identify stakeholders (e.g. operations, DAs, sci-staff, SSA, observers) and give them the chance to test Slurm and provide opinions
- implement useful feedback
- Set a date to transition remaining cluster to Slurm
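For the node-routing item above, one way Slurm can keep project jobs on project nodes is to tag nodes with Features and group them into partitions restricted by account. A minimal slurm.conf sketch along those lines (the node ranges, account names, and partition layout here are assumptions for illustration, not our actual configuration):

```
# Hypothetical slurm.conf fragment: tag nodes with a feature and define
# per-project partitions restricted to the matching accounts.
NodeName=nmpost[001-030] Features=vlass
NodeName=nmpost[031-060] Features=hera
PartitionName=vlass Nodes=nmpost[001-030] AllowAccounts=vlass
PartitionName=hera  Nodes=nmpost[031-060] AllowAccounts=hera
```

Users would then submit with `sbatch --partition=vlass job.sh`, or request a feature explicitly with `sbatch --constraint=vlass job.sh`.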
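For the transition documentation item above, the core of it will likely be a Torque-to-Slurm command mapping. A rough sketch of the kind of translation users will need (the resource values shown are illustrative, not a site policy):

```
# Torque/Moab:
qsub -l nodes=1:ppn=8,mem=16gb,walltime=04:00:00 job.sh
qstat -u $USER
qdel 12345

# Slurm equivalents:
sbatch --nodes=1 --cpus-per-task=8 --mem=16G --time=04:00:00 job.sh
squeue -u $USER
scancel 12345
```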
- DONE
- DONE: Set a PoolName for the testpost and nmpost clusters, e.g. NRAO-NM-PROD and NRAO-NM-TEST. They don't have to be all caps.
- DONE: Change Slurm so that nodes return to service properly after a reboot instead of being marked down as "unexpectedly rebooted" (ReturnToService=2)
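For reference, the two completed items above boil down to small config settings. A hedged sketch of what they look like (using COLLECTOR_NAME as the HTCondor knob is my assumption about how the pool name was set; the value shown is the example name from above):

```
# HTCondor condor_config: the collector name serves as the pool name.
COLLECTOR_NAME = NRAO-NM-PROD

# slurm.conf: with ReturnToService=2, a DOWN node that reboots and
# registers with a valid configuration returns to service automatically
# instead of staying down as "unexpectedly rebooted".
ReturnToService=2
```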