The AOC has 30 16-core cluster nodes, which jointly provide roughly 4 million core hours per year. SE Continuum imaging requires between 30M and 60M core hours per year (50M to 100M per 18 months). To realize operational-scale imaging, the NRAO needs to identify 10 to 15 times its current capacity.
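
A quick back-of-the-envelope check of these figures (a sketch only; it assumes round-the-clock availability of all 30 nodes, which overstates what is actually schedulable, so the real shortfall lands closer to the 10 to 15x quoted above):

```python
# Back-of-the-envelope capacity check for the figures quoted above.
# Assumes all 30 nodes are available around the clock, which overstates
# the schedulable capacity; the realistic shortfall is nearer 10-15x.
nodes, cores_per_node, hours_per_year = 30, 16, 24 * 365

aoc_core_hours = nodes * cores_per_node * hours_per_year
print(f"AOC capacity: ~{aoc_core_hours / 1e6:.1f}M core hours / year")   # ~4.2M

for demand in (30e6, 60e6):
    print(f"{demand / 1e6:.0f}M core hours / year -> ~{demand / aoc_core_hours:.0f}x the AOC")
```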

VLASS SE imaging in its current form presents three distinct challenges to external processing facilities:

  1. Input data sizes are large
    1. The calibrated MS is ~1 TB for most SBs; it could be reduced to 8 GB by pre-splitting the MS down to the relevant visibilities (a staging sketch follows this list)
    2. The CFCache is 30 GB; it could be generated on the fly at a minor run-time cost (for monolithic imaging)
    3. The CASA code stack is ~1 GB (this may only be an issue for more fine-grained decompositions)
  2. Memory footprints are large
    1. Need to characterize major- and minor-cycle memory usage
    2. Need to examine the CASA 6 refactored imaging code in this context
  3. Imaging run times are viewed as long
    1. Single-node, 16-way parallel run times are in the 200 to 500 hour wall clock range. For external facilities without preemption, wall clock time is important
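
As a rough illustration of the pre-splitting point above, a minimal staging sketch using CASA's mstransform to carve a single image's visibilities out of the full calibrated MS; the MS names and the field/spw selections are placeholders, not the SE pipeline's actual selections:

```python
# Minimal staging sketch: carve the ~1 TB calibrated MS down to only the
# visibilities one SE image needs, so the shipped input is a few GB.
# MS names and the field/spw selections are placeholders.
from casatasks import mstransform

mstransform(
    vis="calibrated_sb.ms",           # full ~1 TB calibrated MS (placeholder name)
    outputvis="staged_for_image.ms",  # small MS to ship to the external site
    datacolumn="data",                # calibrated data after the pipeline's applycal/split
    field="0~45",                     # only fields contributing to this image (placeholder)
    spw="2~17",                       # only the relevant spectral windows (placeholder)
    reindex=True,
)
```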

All three areas can be addressed by decomposing the external processing request from a full pipeline execution down to a portion of the imaging step. Possible decomposition boundaries are:

  • Split the workflow into separate data-staging and imaging pipelines to avoid manipulating 1 TB-class MSes
  • Per tclean call (the scripted pipeline has 12 unique calls to tclean)
  • Per major cycle (submit jobs on a per-major-cycle basis; assuming 400-hour jobs with 20 major cycles, this yields ~20-hour jobs from the external host's perspective); see the tclean sketch after this list
  • Separate major and minor cycles (run the major cycle on external hosts and the minor cycle locally to better balance memory demands)
  • Separate major cycle per SPW (a per-SPW staging sketch also follows this list)
    • Reduces the input data and CFCache size by 16x
    • Reduces per-job runtime by an additional 16x; 20-hour per-major-cycle jobs become 1+ hour jobs
    • Linearly reduces CFCache creation time as well; it may still be more cost effective to build it on the fly than to distribute it
  • Separate by W term (ship 1024 distinct gridding jobs, one per SPW per W plane, for each major cycle)
    • Finest practical granularity; run time approaches scatter/gather time
    • Not currently possible
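
A rough sketch of how a single major cycle might be driven in isolation on an external host, assuming tclean's documented restart behaviour (with restart=True and an existing <imagename>.model on disk, niter=0 performs only the residual computation against that model). All file names and imaging parameters below are placeholders rather than the SE pipeline's production settings:

```python
# Rough sketch of one major-cycle execution on an external host, assuming
# tclean restart behaviour: with restart=True and an existing
# <imagename>.model on disk, niter=0 performs only the residual
# (major-cycle) computation against that model.  All names and imaging
# parameters are placeholders, not the SE pipeline's settings.
from casatasks import tclean

tclean(
    vis="staged_for_image.ms",      # pre-split MS from the staging step
    imagename="vlass_se_image",     # shared name; .model produced by the minor cycle elsewhere
    imsize=[12500, 12500],          # placeholder image geometry
    cell="0.6arcsec",
    specmode="mfs",
    deconvolver="mtmfs",
    nterms=2,
    gridder="awproject",
    wprojplanes=32,                 # placeholder W-plane count
    cfcache="cfcache_scratch.cf",   # generated on the fly if it does not already exist
    niter=0,                        # no minor cycle: residual computation only
    calcres=True,
    calcpsf=False,                  # PSF assumed to exist from the first cycle
    restart=True,                   # reuse the existing imaging products
    parallel=False,
)
```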

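For the per-SPW option, a minimal staging sketch that breaks the staged MS into one small MS per spectral window; the 16 SPW ids are assumptions, not the actual VLASS continuum SPW layout:

```python
# Per-SPW staging sketch: one small MS per spectral window, cutting the
# input shipped with each per-SPW major-cycle job by roughly 16x.
# The 16 SPW ids are assumed, not the actual VLASS continuum layout.
from casatasks import split

for spw in range(2, 18):
    split(
        vis="staged_for_image.ms",
        outputvis=f"staged_spw{spw:02d}.ms",
        spw=str(spw),
        datacolumn="data",
    )
```
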
We've identified HTCondor and CHTC+OSG as the preferred distribution stack and resource providers. As the memory footprint and runtime of jobs decrease, more resources become available. For facilities that do not preempt running jobs in favor of lower-priority tasks, it is critical that a maximum runtime be enforced.
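
A hedged sketch of what a per-major-cycle submission could look like through the HTCondor Python bindings; the resource requests follow the second row of the table below (16 cores, 32 GB per core), and the periodic_remove expression is one way to enforce a maximum runtime on pools that will not preempt the job themselves. The wrapper script, file names, and the 24-hour cap are assumptions, not the production workflow:

```python
# Sketch of a per-major-cycle HTCondor submission via the Python bindings.
# The wrapper script, file names, and the 24 h cap are illustrative only.
import htcondor

submit = htcondor.Submit({
    "executable": "run_major_cycle.sh",      # hypothetical wrapper around the tclean call above
    "arguments": "$(cycle)",
    "request_cpus": "16",
    "request_memory": "512GB",               # 16 cores x 32 GB per core (table below)
    "request_disk": "100GB",
    "transfer_input_files": "staged_for_image.ms.tgz",  # CF cache is rebuilt on the fly
    "should_transfer_files": "YES",
    "when_to_transfer_output": "ON_EXIT",
    # Enforce a maximum runtime: remove the job after ~24 h of execution.
    "periodic_remove": "(JobStatus == 2) && (time() - EnteredCurrentStatus > 24*3600)",
})

schedd = htcondor.Schedd()
schedd.submit(submit, count=1)
```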

| Cores per job | Memory per core (GB) | Runtime per job (hours) | Concurrent jobs per imaging run | Total jobs per imaging run | Core hours per imaging run | Total pipeline wall time | Available core hours per year | Pipeline characteristics | Notes |
|---|---|---|---|---|---|---|---|---|---|
| 16 | 32 | 400 | 1 | 1 | 6400 | 6400 | 400K | Stock pipeline as is | Stock |
| 16 | 32 | 20 | 1 | ~20 | ~6400 | ~7000 | 2M | Per major+minor cycle executions | Useful first step, unlocks other modes, provides upwards of an AOC worth of hardware |
| 16 | 32?? | 20 | 1 | ~20 | ~6400 | ~7500 | 2M | Major cycle only | Not interesting by itself, necessary precursor |
| 1 | 16 | 20 | 16 | 320 | ~6400 | ~8000 | 30M+ | Major cycle only, per SPW | Increase in per-image runtime, access to 10x AOC |
| 1 | 4-8 | 0.8 | 512 | ~10000 | ~8000 | ~10000 | 100M+ | Major cycle only, per SPW per W | Access to OSG, substantial increase in per-job runtime |