...

  • Interactive: The ability to assign all or part of a node to a user with shell-level access (nodescheduler, qsub -I, etc).  The current minimum granularity is one NUMA node, but finer granularity could be useful.  Slurm and HTCondor lack the uniqueuser feature of Moab, so implementing nodescheduler will at best be different, if not difficult, and at worst be impossible.  One thought is to ditch nodescheduler and just use the interactive commands that come with Slurm and HTCondor, but I am having some success implementing nodescheduler in Slurm with the --exclude syntax.

    • nodescheduler: Was written before I understood what qsub -I did.  Had I known, I might have argued to use qsub -I instead of nodescheduler, as it is much simpler, is consistent with other installations of Torque, and might have forced some users to use batch processing, which is much more efficient.
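      • For reference, a bare Torque interactive request would look something like the following (the resource list and two-week walltime are only examples):
        qsub -I -l nodes=1:ppn=8 -l mem=16gb -l walltime=336:00:00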
      • nodescheduler features we like
        • It's not tied to any tty so a user can login multiple times from multiple places to their reserved node without requiring something like screen, tmux, or vnc.  It also means that users aren't all going through nmpost-master.
        • Its creation is asynchronous.  If the cluster is full you don't wait around for your reservation to start, you get an email message when it is ready.
        • It's time limited (e.g. two weeks).  We might be able to do the same with a queue/partition setting but could we then extend that reservation?
        • We get to define the shape of a reservation (whole node, NUMA node, etc).  If we just let people use qsub -I they could reserve all sorts of sizes, which may be less efficient.  Then again it may be more efficient.  But either way I think nodescheduler is simpler for our users.
      • nodescheduler features we dislike
        • With Torque/Moab, asking for a NUMA node doesn't work as I would like.  Because of bugs and limitations, I still have to ask for a specific amount of memory.  The whole point of asking for a NUMA node was that I didn't need to know the resources of a node ahead of time but could just ask for half of a node.  Sadly, that doesn't work with Torque/Moab.
        • Because of the way I maintain the cgroup for the user, with /etc/cgrules.conf, I cannot let a user have more than one nodescheduler job on the same node or it will be impossible to know which cgroup an ssh connection should use.  The interactive commands (qsub -I, etc) don't have this problem.
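          • For context, the hack amounts to the prolog appending a line like the one below to /etc/cgrules.conf and then poking cgrulesengd so that subsequent ssh logins land in the job's cgroup (the user, controllers, and destination path here are only illustrative):
            # <user>    <controllers>     <destination cgroup>
            krowe       cpuset,memory     torque/<jobid>/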
      • Slurm
        • To Do cgroups are not removed after nodescheduler ends.  Is this because of reaper killing something it shouldn't?  Is it because of the cgrules stuff?
          • If I change nodescheduler to name the job interactive_X the cgroup gets removed.  So clearly this has something to do with the cgred stuff.
          • Adding sleep 30 to the end of the epilog didn't help.
          • Exiting the epilog script early doesn't help.
          • Exiting early from the prolog script does seem to help.
          • If I don't send any signals to cgrulesengd from the prolog script, the cgroup is removed properly.
          • Using killall --signal SIGUSR2 cgrulesengd instead of systemctl restart cgred in the prolog doesn't help.
          • Using killall --signal SIGUSR1 cgrulesengd doesn't make it reread /etc/cgrules.conf, but Slurm does remove the cgroup.
          • Putting a sleep 5 before restarting cgred in the prolog doesn't help.
          • Adding freezer to the list of controllers in /etc/cgrules.conf doesn't help.
          • Adding Alloc to PrologFlags
          • So, why does restarting or sending SIGUSR2 to cgrulesengd somehow prevent Slurm from cleaning up the cgroup?
          • Increase the logging.
          • A different method could be to create /etc/profile.d/ssh.sh that finds the path to the cgroup and uses cgclassify to put the shell in that cgroup.  Not as slick as using cgred but also may not prevent Slurm from removing the cgroup.  See https://unix.stackexchange.com/questions/526994/limit-resources-cpu-mem-only-in-ssh-session
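            • A rough sketch of that profile.d idea (cgroup v1 layout assumed; it also ignores the multiple-jobs-per-node ambiguity described above):
              # /etc/profile.d/ssh.sh -- move interactive ssh shells into the user's Slurm job cgroup
              if [ -n "$SSH_CONNECTION" ]; then
                  cg=$(find /sys/fs/cgroup/memory/slurm/uid_$(id -u) -maxdepth 1 -name 'job_*' 2>/dev/null | head -1)
                  [ -n "$cg" ] && cgclassify -g memory:"${cg#/sys/fs/cgroup/memory}" $$
              fi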
          • Another option would be to forcibly remove the cgroup in the epilog but I don't like it.
        • srun -p interactive --pty bash This logs the user into an interactive shell on a node with defaults (1 core, 1 GB memory) in the interactive partition.
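          • For a larger interactive shell the resources can be spelled out on the command line (the values below are only an example):
            srun -p interactive --cpus-per-task=4 --mem=16G --time=14-00:00:00 --pty bash -l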
        • NUMA I don't see how Slurm can reserve NUMA nodes so we may have to just reserve X tasks with Y memory.
        • naccesspolicy=uniqueuser I don't know how to keep Slurm from giving a user multiple portions of the same host.  With Moab I used naccesspolicy=uniqueuser which prevents the ambiguity of which ssh connection goes to which cgroup.  I could have nodescheduler check the nodes and assign one that the user isn't currently using but this is starting to turn nodescheduler into a scheduler of its own and I think may be more complication than we want to maintain.
          • one job only What about enforcing one interactive job per user?  nodescheduler could exit with an error if the user already has an interactive job running.
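            • A minimal sketch of that check (assuming interactive jobs are named interactive_<something>, as above):
              if squeue --noheader --user="$USER" --format="%j" | grep -q '^interactive_'; then
                  echo "You already have an interactive job running." >&2
                  exit 1
              fi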
          • routing queue What if I create a routing queue (Slurm can do those, yes?) and then walk that queue assigning jobs to nodes?  Yes, this would be starting to implement my own scheduler.
          • exclude There is a -x, --exclude=<node name list> argument to sbatch.
            • But -x will only work if nodescheduler can find a free node at the moment. If it has to wait, then that excluded node may no longer be running a job by the user.  Worse yet, the node that nodescheduler is about to give the user may have a new job by this user.

            • What about combining -x with a test-and-resubmit function in the prolog script?  Before setting up cgred, if there is already an interactive job running on this node as the user, add this node to the exclude list and resubmit the interactive job.
            • What if nodescheduler excludes nodes that the user is running interactive jobs on instead of letting it go to the prolog?
            • SOLUTION: Using the ExcNodeList option combined with requeuehold and some other jiggery pokery in prolog and epilog scripts seems to have nodescheduler working.
            • Better SOLUTION: Just have nodescheduler build a list of the nodes where the user is already running interactive_* jobs and then pass that list to sbatch with --exclude.  This was James's idea.  I hate it when he has good ideas.
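              • A rough sketch of that idea (the job-name prefix and the submit script name are assumptions):
                exclude=$(squeue --noheader --user="$USER" --states=RUNNING --format="%j %N" \
                          | awk '/^interactive_/ {print $2}' | sort -u | paste -sd, -)
                sbatch ${exclude:+--exclude="$exclude"} interactive.sh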
        • cgrules Slurm has system-level prolog/epilog functionality that should allow nodescheduler to set /etc/cgrules.conf, but pam_slurm_adopt.so pretty much removes the need for /etc/cgrules.conf.
        • PAM The pam_slurm.so module can be used without modifying systemd and will block users that don't have a job running from logging in.  The pam_slurm_adopt.so module required removing some pam_systemd modules; it does what pam_slurm.so does and also puts the user's login shell in the same cgroup as the Slurm job expected to run the longest, which could replace my /etc/cgrules.conf hack.  This still doesn't solve the problem of multiple interactive jobs by the same user on the same node.  Removing the pam_systemd.so module prevents the creation of things like /run/user/<UID>, XDG_RUNTIME_DIR, and XDG_SESSION_ID, which breaks VNC.  So we may want to use just pam_slurm.so and not pam_slurm_adopt.so.
          • But Slurm at CHTC has neither pam_slurm.so nor pam_slurm_adopt.so configured and their nodes don't create /run/user/<UID> either.  So it might just be Slurm itself and not the PAM modules causing the problem.
          • Also, in order to install pam_slurm_adopt.so you have to not only disable systemd-logind but you must mask it as well.  This prevents /run/user/<UID> from being created even if you login with ssh (e.g. no Slurm, Torque, or HTCondor involved).
        • nodeextendjob Can Slurm extend the MaxTime of an interactive job?  Yes: scontrol update timelimit=+7-0:0:0 jobid=489 extends the MaxTime by seven days (the format is days-hours:minutes:seconds).
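          • So a trivial nodeextendjob wrapper could be little more than the following (job ID as the first argument, number of days as an optional second; just a sketch):
            scontrol update jobid="$1" timelimit=+"${2:-7}"-0:00:00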
      • HTCondor
        • condor_submit -i This logs the user into an interactive shell on a node with defaults (1 core equivalent, 0.5 GB memory).
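          • Larger requests can presumably be appended at submit time (the values are only an example):
            condor_submit -interactive -append 'request_cpus = 4' -append 'request_memory = 16 GB'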
        • NUMA I don't see how HTCondor can reserve NUMA nodes so we may have to just reserve X tasks with Y memory.
        • naccesspolicy=uniqueuser I don't think I need to worry about giving a user multiple portions of the same host if we are using condor_ssh_to_job.  But if we aren't using condor_ssh_to_job then we could exclude hosts with requirements = Machine != hostname
        • cgrules I don't know if HTCondor has the prologue/epilogue functionality to implement my /etc/cgrules.conf hack.
        • PAM How can we allow a user to login to a node they have an interactive job running on via nodescheduler?  With Torque or Slurm there are PAM modules but there isn't one for HTCondor.
        • Could run a sleep job just like we do with Torque and use condor_ssh_to_job which seems to do X11 properly.  We would probably want to make gygax part of the nmpost pool.
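          • A sketch of such a sleep-job submit file (the resource values and two-week lifetime are only examples); submit it with condor_submit and attach later with condor_ssh_to_job <clusterid>:
            # sleep.sub -- hold a slot for two weeks for interactive use
            executable     = /bin/sleep
            arguments      = 14d
            request_cpus   = 8
            request_memory = 16 GB
            queue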
        • cgred I don't know if HTCondor has system-level prolog and epilog scripts to edit /etc/cgrules.conf.
      • OpenPBS
        • Does not have a uniqueuser option so cannot do nodescheduler like Torque/Moab.
    • Nodevnc Given the limitations of Slurm and HTCondor, and that we already recommend users use VNC on their interactive nodes, why don't we just provide a nodevnc script that reserves a node (via Torque, Slurm, or HTCondor), starts a VNC server, and then tells the user it is ready and how to connect to it?  If someone still needs/wants just simple terminal access, then qsub -I or srun --pty bash or condor_submit -i might suffice.
      • DONE: Torque
        • I can actually successfully launch a VNC session using my nodevnc-pbs script, which I have not changed in six months, even though there is no /run/user/<UID> on the node.  This works because, while Torque doesn't create /run/user/<UID> (just like Slurm), Torque also doesn't set the XDG_RUNTIME_DIR variable the way Slurm does.  This is good news: since Torque neither creates /run/user/<UID> nor sets XDG_RUNTIME_DIR, and we have been using RHEL7 since late 2020 without issue, unsetting XDG_RUNTIME_DIR in Slurm is not likely to cause us problems.
      • DONE: Slurm
        • /run/user/<UID> Slurm doesn't actually run /bin/login, so things like /run/user/<UID> are not created, yet XDG_RUNTIME_DIR is still set for some reason, which causes vncserver to produce errors like "Call to lnusertemp failed" upon connection with vncviewer.
          • If I unset XDG_RUNTIME_DIR in the slurm script, I can successfully connect to VNC.  Why is Slurm setting this when it isn't making the directory?  I think this may be a bug in Slurm.  Perhaps Slurm is setting this variable instead of letting pam_systemd.so and/or systemd-logind set it. There is a bug report https://bugs.schedmd.com/show_bug.cgi?id=5920 where the developers think this is being caused because of their pam_slurm_adopt.so module but I don't think that is the case.
          • Would it be best to just unset XDG_RUNTIME_DIR in a system prolog?
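            • If so, it would probably have to be a TaskProlog rather than the node Prolog, since (as I understand it) only "export"/"unset" lines printed by the TaskProlog are applied to the job's environment.  A sketch, assuming TaskProlog points at this script in slurm.conf:
              #!/bin/sh
              echo "unset XDG_RUNTIME_DIR"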
          • YES: loginctl enable-linger krowe, then run vnc, then loginctl disable-linger krowe.  I could maybe put this in a prolog/epilog.
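            • A sketch of what that prolog/epilog pair might look like (both run as root on the node, assuming SLURM_JOB_USER is available in their environment):
              # Prolog
              loginctl enable-linger "$SLURM_JOB_USER"
              # Epilog
              loginctl disable-linger "$SLURM_JOB_USER"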
          • I can successfully run Xvfb without /run/user/<UID>.
          • I have successfully run small CASA tests with xvfb-run.
          • A work-around could be something like the following, but other things might be broken because of the missing /run/user/<UID>, and ${XDG_RUNTIME_DIR}/gvfs is actually a FUSE mount that reaper does not know how to unmount.
            • mkdir /tmp/${USER}
            • export XDG_RUNTIME_DIR=/tmp/${USER}
        • Reading up on how pam_slurm_adopt works, it will probably never cooperate with systemd and therefore it is a hack and not future-proof.  https://github.com/systemd/systemd/issues/13535  I am unsure how wise it is to start using pam_slurm_adopt in the first place.
        • So if I don't install pam_slurm_adopt.so, which I only installed because it seemed better than my /etc/cgrules.conf hack, which I only created for nodescheduler after we started using cgroups, then I think I can get nodevnc working as a pseudo-replacement for nodescheduler.  If we do use nodevnc and don't use nodescheduler (which we mostly can't), then we may not want to use the pam_slurm.so module either, so that users can't login to nodes they have reserved and possibly use resources they aren't scheduled for.  If they really need to login to a node where they are running a job, Slurm has something similar to HTCondor's condor_ssh_to_job: srun --jobid jobid --pty bash -l.  But you need to set PrologFlags=x11 in slurm.conf, only one terminal can connect with srun in this way at a time, and the DISPLAY seems to only work under certain situations.  Basically, this is not a useful mechanism for users.  X11 forwarding works a little better if I use salloc instead of sbatch sleep.sh, but it still only allows one terminal at a time and doesn't work with the --no-shell option.
      • HTCondor
        • HTCondor doesn't seem to create /run/user/<UID> either here (8.9.7) or at CHTC (8.9.11).  I can get vncserver to run at CHTC by setting HOME=$TMPDIR and transferring ~/.vnc but I am unable to connect to it via vncviewer; the connection times out.  This makes me think that even if I can get vncserver working, which I may have done at CHTC, it will still give me the lnusertemp error because of the missing /run/user/<UID>.
        • Xserver Since we run an X server on our nmpost nodes, ironically to allow VNC and remote X from thin clients, starting a vncserver from HTCondor fails.  This is because vncserver doesn't see the /tmp/.X11-unix socket of the running X server because HTCondor has bind mounted a fresh /tmp for us so vncserver tries to start an X server which fails because the port is already in use.
        • Mar. 30, 2021 krowe: I upgraded all the execute hosts to 8.9.11 for the fix to James's memory problem (actually fixed in 8.9.9) and now my nodevnc-htcondor script works.  Perhaps something in the new version of condor fixed things?  It still isn't creating a /run/user/<UID> but maybe that isn't really necessary.
        • Apr. 12, 2021 krowe: nodevnc-htcondor did not start when HTCondor selected a node that was running a job for James on nmpost106.  Yet it seems to let me run two nodevnc jobs on the same testpost node.  Is it because of James's job, the nmpost node, or something else?  After James's job finished I was able to run nodevnc on nmpost106, so it was James's job.  The problem is that xvfb-run was preventing nodevnc from establishing a listening socket.
    • screen?
    • tmux?

...