This work is a joint effort of SCG and ARDG to decompose imaging into small work units that can be processed as independent jobs in an HTC environment, making use of HTCondor's DAGMan.
Initial testing consists of running a single gridding cycle on a previously partitioned MS. Original scripts are located at /lustre/aoc/users/sbhatnag/11B-157/Continuum/IMAGING_CTB80/MTWBAWP/PARALLEL/HTCondor/SCRIPT_TEST
At this stage, all scripts assume a shared file system (Lustre) across all compute nodes. A short description of each script follows; illustrative sketches of the generated DAG, the submit description, and the gridding call are given after the table.
| imaging.py | Python script that splits the input MS into smaller sub-MSes and writes the DAG to the tclean.dag file. This script also holds the tclean parameters. |
| mkres.py | Python script that sets up the SynthesisImager tool of CASA, runs the gridder on the input MS, and produces images with the given basename. The input MSes (passed in via the DAG nodes) are the sub-MSes produced by imaging.py. |
| tclean.dag | The DAG that converts the sub-MSes into sub-images. Uses the tclean.htc HTCondor submit description. |
| tclean.htc | The HTCondor submit description that runs CASA with mkres.py on a (sub-)MS and image name. |
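The DAG-writing step of imaging.py amounts to emitting one JOB/VARS pair per sub-MS. The sketch below only illustrates that shape; the sub-MS names, node names, number of sub-MSes, and VARS keys (subms, imagename) are assumptions, and the actual partitioning and tclean parameter handling are omitted.

```python
# Hypothetical sketch of the DAG-writing step in imaging.py; the real script also
# partitions the MS and carries the tclean parameters.
n_subms = 16                                       # assumed number of sub-MSes
sub_mses = ["ctb80_subms_%d.ms" % i for i in range(n_subms)]

with open("tclean.dag", "w") as dag:
    for i, subms in enumerate(sub_mses):
        node = "grid%d" % i
        dag.write("JOB  %s  tclean.htc\n" % node)
        dag.write('VARS %s  subms="%s"  imagename="ctb80_img_%d"\n' % (node, subms, i))
```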
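The resulting tclean.dag is then a flat list of independent nodes that all point at the same submit description. A minimal sketch, assuming the node names and VARS keys used above:

```
# Hypothetical tclean.dag: one independent node per sub-MS; no PARENT/CHILD
# ordering is needed because the gridding jobs do not depend on each other.
JOB  grid0  tclean.htc
VARS grid0  subms="ctb80_subms_0.ms"  imagename="ctb80_img_0"
JOB  grid1  tclean.htc
VARS grid1  subms="ctb80_subms_1.ms"  imagename="ctb80_img_1"
# ... one JOB/VARS pair per sub-MS produced by imaging.py
```

The whole set of gridding jobs is submitted with `condor_submit_dag tclean.dag`.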
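tclean.htc itself only has to launch CASA non-interactively with mkres.py and the per-node macros. A hedged sketch, assuming a vanilla-universe job and a CASA launcher visible on the shared file system (the executable path and log/output file names are placeholders):

```
# Hypothetical tclean.htc; executable path and file names are placeholders.
universe              = vanilla
executable            = /path/to/casa
arguments             = "--nogui --nologger -c mkres.py $(subms) $(imagename)"
# Lustre is mounted on every execute node, so no HTCondor file transfer is needed.
should_transfer_files = NO
output                = grid_$(Cluster)_$(Process).out
error                 = grid_$(Cluster)_$(Process).err
log                   = tclean_dag.log
queue
```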
The convolution functions have to be obtained before the MS is partitioned. They are contained in the (sub)directory cf.tt_tclean_allSPW_withW.ps. A copy of the original scripts, the data (before and after partitioning), and the convolution functions is located in the directory script_test_0 under /lustre/aoc/sciops/fmadsen/HTCondor/imaging_ctb80, which will be used as the root directory for subsequent testing.
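mkres.py drives the SynthesisImager tool directly; as a rough task-level approximation, the single gridding cycle it performs on each sub-MS corresponds to a tclean call with the awproject gridder pointed at this CF cache and no deconvolution. The image geometry and W-plane settings below are placeholders, not the values used in the test:

```python
# Task-level approximation of the single gridding cycle; mkres.py uses the
# SynthesisImager tool directly, and all parameter values here are placeholders.
from casatasks import tclean          # CASA 6 module; in CASA 5 tclean is a built-in task

tclean(vis='ctb80_subms_0.ms',
       imagename='ctb80_img_0',
       imsize=4096, cell='2.0arcsec',            # placeholder image geometry
       gridder='awproject',                      # A/W-projection gridder
       cfcache='cf.tt_tclean_allSPW_withW.ps',   # pre-computed convolution functions
       wprojplanes=-1,                           # auto-select number of W planes (assumption)
       niter=0)                                  # gridding only, no deconvolution
```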