
Following the successful conclusion of the VIP project, we now have a model for processing a statistically significant percentage of the sky. There are two tracks we need to follow to get to operational-scale imaging:

  1. Implement the example script within the CASA Pipeline framework
  2. Demonstrate a model for at-scale processing, where "at scale" means roughly 1000 square degrees imaged per month.

At any given time on the path to operations, there will be at least three different implementations of the workflow:

  1. Scripted Pipeline: the example Python script provided by VIP. This is the plan of record and should be kept up to date with the other forms until it is deprecated. (https://gitlab.nrao.edu/jmarvil/vlass-imaging-project/)
  2. CASA Pipeline: the standard pipeline, in both a Pythonic and an XML-based form
  3. External processing script: initially based on (1); it will hopefully converge with (2), but may need to remain a separately maintained entity due to external constraints.

There are three main tracks of development:

  1. Characterize resource usage and runtime, and develop the external processing mode
    1. Runtime and memory per tclean call (see the measurement sketch after this list)
    2. Questions regarding creation vs. distribution of the cfcache
    3. Questions of decomposition axis (major/minor cycle, per-SPW gridding, per-SPW+W gridding)
    4. Questions regarding distribution of the CASA software stack (per square degree × per SPW × per W gridding = 30 PBytes of CASA tarball distribution; see the estimate sketch after this list)
  2. Port the scripted pipeline to the CASA pipeline format
    1. CASA 6 based?
    2. Questions regarding PyBDSF incorporation (see the source-finding sketch after this list)
  3. Implement CASA-pipeline-based execution within SSA workflows
    1. PyBDSF inclusion
    2. Tracking split, calibrated MSes as temporary secondary products
    3. Tracking the state of external processes (TBD)
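
A minimal sketch of how the per-tclean-call runtime and peak memory could be captured, assuming a CASA 6 environment where casatasks is importable as a Python module and the call runs serially in-process; the MS name and imaging parameters shown are placeholders, not the VIP values.

    # Wrap a single tclean call with wall-clock and peak-RSS measurement.
    # Assumes CASA 6 (casatasks importable); all inputs below are placeholders.
    import resource
    import time

    from casatasks import tclean

    def timed_tclean(**tclean_kwargs):
        """Run one tclean call and report wall time and peak resident memory."""
        t0 = time.time()
        tclean(**tclean_kwargs)
        wall = time.time() - t0
        # ru_maxrss is reported in KiB on Linux; convert to GiB.
        peak_gib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 2**20
        print(f"tclean wall time: {wall:.1f} s, peak RSS: {peak_gib:.2f} GiB")
        return wall, peak_gib

    # Hypothetical usage for one tile / SPW work unit:
    # timed_tclean(vis='split_calibrated.ms', imagename='test_tile',
    #              imsize=[12500, 12500], cell='0.6arcsec',
    #              gridder='awproject', wprojplanes=32, niter=0)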
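
To make the data-volume concern in item 1.4 concrete, a back-of-the-envelope estimate along the following lines shows how shipping a software tarball with every (square degree × SPW × W) work unit reaches the petabyte regime; every count and size below is an illustrative assumption, not a project number.

    # Rough estimate of tarball traffic if a CASA tarball is shipped with
    # every (sq deg x SPW x W) work unit. All values are illustrative
    # assumptions, not measured project numbers.
    sky_area_sq_deg = 34000    # assumed survey area
    spw_count = 16             # assumed spectral windows per unit area
    w_planes = 32              # assumed W-term gridding planes
    tarball_gb = 1.5           # assumed size of one CASA tarball, GB

    work_units = sky_area_sq_deg * spw_count * w_planes
    total_pb = work_units * tarball_gb / 1e6   # GB -> PB (decimal)

    print(f"{work_units:,} work units -> ~{total_pb:.0f} PB of tarball traffic")
    # With these assumptions: 17,408,000 work units -> ~26 PB, the same order
    # of magnitude as the ~30 PBytes quoted above.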
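
As a reference point for the PyBDSF questions in tracks 2 and 3, a stand-alone invocation might look like the sketch below; the image name and detection thresholds are placeholder assumptions, and how this step is wired into the CASA pipeline is exactly the open question.

    # Stand-alone PyBDSF source finding on one restored tile image.
    # File name and thresholds are placeholder assumptions.
    import bdsf

    # Run source detection on an exported FITS image.
    img = bdsf.process_image('tile_restored.fits',
                             thresh_isl=3.0,   # island threshold (sigma)
                             thresh_pix=5.0)   # peak threshold (sigma)

    # Write the resulting source list as a FITS catalog.
    img.write_catalog(outfile='tile_restored.cat.fits',
                      format='fits', catalog_type='srl', clobber=True)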





