  1. Someone with casaadm account credentials (Drew or Karlee) copies the release candidate into /home/casa/packages/pipeline.
  2. They copy (or remove) the old test symbolic link to a different name, e.g. test_5.6.1.
  3. They then make a new symbolic link named test pointing at the new package, e.g. ln -sf /home/casa/packages/pipeline/<my new casa package> test.
  4. The pipeline is set to run on incoming data by setting the ciplRunState variable to RUN in /home/casa/capo/dsoc-test.properties. (A minimal sketch of steps 2-4 follows this list.)
  5. The test pipeline is run in parallel with the production pipeline for about one week, and the results are checked against those from production.
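
A minimal sketch of steps 2-4, assuming the outgoing version is 5.6.1, keeping the placeholder package name from step 3, and with the property syntax in the final comment illustrative:

    cd /home/casa/packages/pipeline

    # Step 2: preserve the old "test" link under a versioned name
    mv test test_5.6.1

    # Step 3: point "test" at the new release candidate
    ln -sf /home/casa/packages/pipeline/<my new casa package> test

    # Step 4: edit /home/casa/capo/dsoc-test.properties so that
    #   ciplRunState = RUN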

Test pipeline (workspaces)

  • As above, but note that VLASS SECI has its own CASA version property in the dsoc-xxxx.properties CAPO file, edu.nrao.workspaces.ProcessingSettings.CasaVersion.vlassSeci, which you may or may not wish to change (see the excerpt after this list).
  • Workspaces has a separate CAPO property for the RUN/PAUSE/STOP state of automated calibration: edu.nrao.workspaces.StandardCalibrationSettings.runState.
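
An illustrative excerpt of those two properties (the version string is a placeholder; actual values depend on the release under test):

    # CASA version used by VLASS SECI jobs; change only if intended
    edu.nrao.workspaces.ProcessingSettings.CasaVersion.vlassSeci = <casa version>

    # Run state of automated standard calibration: RUN, PAUSE, or STOP
    edu.nrao.workspaces.StandardCalibrationSettings.runState = RUN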

7 Comments

  1. For 6.1.1 the pipeline fails to execute offline due to a Python path issue, so testing needs to be done via runs started from the command line. Furthermore, the method of invoking the hifv recipe has changed in 6.x (PIPE-813). The easiest way to run it from the CASA command line (as recommended by John) is probably:

    import pipeline.recipereducer

    # vis must be a list, even for a single ASDM (see note below)
    pipeline.recipereducer.reduce(vis=['../rawdata/path-to-asdm'], procedure='procedure_hifv.xml', loglevel='summary')

    Note that when run like this the weblog appears in a "procedure_hifv" sub-directory. The square brackets are very important: the ASDM must be given as a list, even with only one element, otherwise bad things happen. Also, each run needs to be started in an empty directory (i.e. not one containing a previous pipeline run); otherwise the pipeline will attempt to append existing calibrations from the previous run. (A minimal per-run setup is sketched below.)
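
    A minimal sketch of the per-run setup, with a hypothetical directory name and assuming a CASA 6 installation that includes the pipeline:

    # fresh, empty directory for each run, so no calibrations
    # from a previous run get appended
    mkdir 6.1.1_run1 && cd 6.1.1_run1

    # start CASA with the pipeline tasks loaded, then paste in the
    # recipereducer call shown above
    casa --pipeline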

  2. For the 6.1.1 pipeline, will it be alright to run these as vlapipe in /lustre/aoc/cluster/pipeline/dsoc-test/qa2/<JobID>, or do we need to do this in some other area on lustre, such as our sciops areas? Also, do we know if there is some kind of special directory structure this method expects? (Similar to how casa_piperestorescript.py expects to see products/, rawdata/, and working/ directories.)

    1. I would run these elsewhere in lustre, just to separate them from the regular tests; your sciops area would be fine, provided we can all see it to review the weblogs. This method has no special directory structure; you can run it on an SDM in the same directory if you want. I'm going to wait until my test run has finished, though, before setting you loose on these tests, in case I find more issues...
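
      For reference, a sketch of the layout casa_piperestorescript.py expects (per the question above; the parent directory name is illustrative, and recipereducer needs none of this):

      <JobID>/
          products/
          rawdata/
          working/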

  3. OK, so I have gotten things to run using the recipe reducer as listed in the comments above, so feel free to pick a dataset and have a go once you get the chance. I did an L+C Multiband from project SC1006, which seemed to work well. Talking to Drew, we think one per band (and maybe one more Multiband) should be enough to validate the pipeline.

  4. We have the standard band tests we could use; data are in /lustre/aoc/sciops/dmedlin/pipeline/pl_data/bands/. We have four SRDP DAs, so I'd suggest we each take two.

  5. OK, sounds like a good plan.

  6. It was discovered that John T. has already done the band tests, so we are processing some recently observed datasets to see whether the results match those from the current production pipeline.