
I have an idea for how to make one OS image that can be used for both the HTCondor cluster and the Slurm cluster, such that we can have HTCondor jobs glide in to the Slurm cluster.


CONDOR_CONFIG

The condor_startd (like the other HTCondor daemons and tools) uses the CONDOR_CONFIG environment variable to locate its config file instead of the default /etc/condor/condor_config, and it exits with an error if there is a problem reading that file.

https://htcondor.readthedocs.io/en/latest/admin-manual/introduction-to-configuration.html?highlight=condor_config#ordered-evaluation-to-set-the-configuration
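For example (a minimal sketch; glidein-slurm.conf here is just a stand-in for whatever alternate config is used), a daemon or tool can be pointed at a different config like this:

export CONDOR_CONFIG=/etc/condor/glidein-slurm.conf
condor_config_val -config     # shows which config file(s) are actually in effect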


DAEMON_SHUTDOWN

The condor_startd daemon will shut down gracefully and not be restarted if the ClassAd expression STARTD.DAEMON_SHUTDOWN evaluates to True. E.g.

STARTD.DAEMON_SHUTDOWN = size(ChildState) == 0 && size(ChildActivity) == 0 && (MyCurrentTime - EnteredCurrentActivity) > 600

MASTER.DAEMON_SHUTDOWN = STARTD_StartTime == 0
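To sanity-check what the config files actually define for these two knobs (optional, just a quick check):

condor_config_val STARTD.DAEMON_SHUTDOWN MASTER.DAEMON_SHUTDOWN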

https://htcondor.readthedocs.io/en/latest/admin-manual/configuration-macros.html

https://htcondor.readthedocs.io/en/latest/classad-attributes/machine-classad-attributes.html


sysconfig

The condor.service unit in systemd reads /etc/sysconfig/condor as an EnvironmentFile but does not evaluate it with a shell, so command substitution and variable expansion are not performed. Adding something like the following to /etc/sysconfig/condor therefore won't work:

CONDOR_CONFIG=$(cat /var/run/condor/config)

I could instead add a second EnvironmentFile like so

EnvironmentFile=-/etc/sysconfig/condor
EnvironmentFile=-/var/run/condor/config

where /var/run/condor/config sets CONDOR_CONFIG=/etc/condor/condor_config
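One way to add that second line without editing the packaged unit file would be a systemd drop-in (a sketch; the drop-in file name is my own choice):

# /etc/systemd/system/condor.service.d/glidein.conf
[Service]
EnvironmentFile=-/var/run/condor/config

followed by systemctl daemon-reload so systemd picks up the drop-in.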

But I can use this mechanism to keep HTCondor from starting, just as I do with Torque and Slurm: set CONDOR_CONFIG=/dontstartcondor in /etc/sysconfig/condor in the OS image and override it with a snapshot. Then stop setting 99-nrao as a snapshot.
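A quick way to sanity-check that behavior on a node (illustrative only):

CONDOR_CONFIG=/dontstartcondor condor_master -f
# exits immediately with a configuration error because the file cannot be read,
# which is exactly what keeps condor.service from starting on boot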


OS image

All three schedulers (Torque, Slurm, HTCondor) will be configured to start via systemd. The files pbs_mom, slurm, and condor in /etc/sysconfig will be set such that all of these schedulers fail to start on boot:

/etc/sysconfig/pbs_mom: PBS_ARGS="-h"
/etc/sysconfig/slurm: SLURMD_OPTIONS="-h"
/etc/sysconfig/condor: CONDOR_CONFIG=/nosuchfile

If any of these schedulers should start on boot, the appropriate /etc/sysconfig file (pbs_mom, slurm, condor) will be altered via a snapshot. For example, a snapshot that enables only HTCondor would leave the files looking like this:

/etc/sysconfig/pbs_mom: PBS_ARGS="-h"
/etc/sysconfig/slurm: SLURMD_OPTIONS="-h"
/etc/sysconfig/condor: CONDOR_CONFIG=/etc/condor/condor_config

Change LOCAL_CONFIG_FILE in HTCondor to point to a file that will contain the configuration needed for a Slurm node to run an HTCondor Pilot job (e.g. STARTD.DAEMON_SHUTDOWN). This file will be created by the Pilot job.

echo 'LOCAL_CONFIG_FILE = /var/run/condor/condor_config.local' >> /etc/condor/condor_config
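One thing to verify: /var/run/condor/condor_config.local will not exist until a Pilot job creates it, and whether a missing local config file is treated as an error depends on the REQUIRE_LOCAL_CONFIG_FILE knob, so it may be safest to set that explicitly as well (a sketch, assuming the default in the installed HTCondor version is not already suitable):

echo 'REQUIRE_LOCAL_CONFIG_FILE = false' >> /etc/condor/condor_config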

The alternative was to make a complete copy of condor_config and all its sub-config files into /etc/condor/glidein-slurm.conf and add the DAEMON_SHUTDOWN settings there as well. This seems dangerous to me because the two sets of config files could drift apart.


Pilot Job

The Pilot job submitted to Slurm will use one of the two following options, depending on the results of my testing:

echo 'CONDOR_CONFIG=/etc/condor/glidein-slurm.conf' > /var/run/condor/config
# startd shuts down after 10 idle minutes; the master follows once the startd is gone
echo 'STARTD.DAEMON_SHUTDOWN = State == "Unclaimed" && Activity == "Idle" && (MyCurrentTime - EnteredCurrentActivity) > 600' > /var/run/condor/condor_config.local
echo 'MASTER.DAEMON_SHUTDOWN = STARTD_StartTime == 0' >> /var/run/condor/condor_config.local
# run in the foreground so the Pilot job blocks until DAEMON_SHUTDOWN fires
condor_master -f
rm -f /var/run/condor/condor_config.local
rm -f /var/run/condor/config
exit

or

echo 'CONDOR_CONFIG=/etc/condor/glidein-slurm.conf' > /var/run/condor/config
echo 'STARTD.DAEMON_SHUTDOWN = State == "Unclaimed" && Activity == "Idle" && (MyCurrentTime - EnteredCurrentActivity) > 600' > /var/run/condor/condor_config.local
systemctl start condor
# loop until condor_startd is no longer a running process
systemctl stop condor
rm -f /var/run/condor/condor_config.local
rm -f /var/run/condor/config
exit

If the Payload job is very small and exits before the Pilot job can start blocking on condor_startd, then the Pilot job may never end. So it may need some code to exit after some amount of time if condor_startd hasn't been seen.
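A minimal sketch of that watch loop, assuming pgrep is available on the node and that 600 seconds is an acceptable grace period (both are assumptions):

# wait up to 10 minutes for condor_startd to appear at all
waited=0
while ! pgrep -x condor_startd > /dev/null && [ $waited -lt 600 ]; do
    sleep 10
    waited=$((waited + 10))
done
# then block until condor_startd is gone (DAEMON_SHUTDOWN evaluated to True)
while pgrep -x condor_startd > /dev/null; do
    sleep 30
done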

If the Pilot job starts condor_master then I may not need to add the EnvironmentFile=-/var/run/condor/config line in the condor unit file.


Factory

The factory process that watches the clusters and launches Pilot jobs should be pretty simple (a rough sketch in shell follows the outline):

If jobs are waiting in the HTCondor cluster (perhaps only vlapipe jobs)
    If nodes are available in the Slurm cluster (if not, perhaps send email)
        Launch one Pilot job
Sleep some amount of time, presumably longer than the time HTCondor takes to launch a job
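A rough sketch of that loop, assuming condor_q can see the HTCondor pool, sinfo and sbatch can see the Slurm cluster, and pilot.sh is the Pilot job batch script (all of these are assumptions; restricting the check to vlapipe jobs could be done by passing vlapipe to condor_q):

while true; do
    # idle (JobStatus == 1) jobs waiting in the HTCondor cluster
    idle=$(condor_q -allusers -constraint 'JobStatus == 1' -af ClusterId | wc -l)
    # idle nodes available in the Slurm cluster
    free=$(sinfo -h -t idle -o '%D' | awk '{s+=$1} END {print s+0}')
    if [ "$idle" -gt 0 ]; then
        if [ "$free" -gt 0 ]; then
            sbatch pilot.sh    # launch one Pilot job
        else
            echo "jobs waiting but no free Slurm nodes" | mail -s "glidein factory" root
        fi
    fi
    sleep 600    # presumably longer than HTCondor needs to start a job on the new glidein
done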


Problems



Ideas

Instead of using systemd to start condor, I could run condor_master -f from the Pilot script. I can set both the STARTD and MASTER DAEMON_SHUTDOWN variables, which will cause condor_master to exit, and therefore I won't need to watch the condor_startd process. This may still cause weirdness with cgroups (the HTCondor processes being a subset of the Slurm job's processes), but I will have to try it to find out.


