Checklist for ALMA optimized imaging workflow:
- The DA is notified by email when there is a new image dataset to review. The data are in /lustre/naasc/web/almapipe/pipeline/vaprod/image-qa. Note: the operations manager will still need to check the processing area for stalled or failed jobs.
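  A convenient (optional) way to spot newly arrived datasets is to list the QA area by modification time; this uses only the path given above:
      ls -lt /lustre/naasc/web/almapipe/pipeline/vaprod/image-qa | head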
- QA: in the pilot, only channel widths and ranges can be set, so QA should be focused on those, along with general QA issues:
- Was the restoration of the calibration successful?
- Is the continuum subtraction satisfactory (hif_findcont task)? If not, the DA should pick a new continuum range and rerun (step 3 below), or consult an SRDP Scientist if it is not clear how best to do this.
- Does the rms noise in a line-free cube channel match the theoretical estimate from the hif_makeimages step in the pipeline (no more than 10% above the theoretical estimate for Bands 3-6, 15% above for Bands 7-8, and 20% above for Bands 9-10; and no more than 20% below it)? A small helper sketch for this check follows the QA list below. If the rms is outside these ranges, the DA should perform a limited investigation:
- Check the original QA2 report in AQUA to see if any problems were reported there.
- In cases of dynamic range limits, is the dynamic range >200 (50 for 7m data)? If so, the product may be marked as a QA pass, but the DA should note in the QA report that the user should attempt self-calibration.
- If there is bad target data, the DA should flag the bad data and rerun the PPR using the pipeline rerun script supplied by SSA.
- If neither of these is the case, or the RMS is more than 20% below the theoretical estimate, the DA should consult a scientist, and the data may be marked as a QA fail (in the case of a QA fail, products are not ingested into the archive).
- Are there artifacts in the image suggesting that target flagging is needed? If so, flag the data and rerun the PPR using the pipeline rerun script supplied by SSA.
- Does the cube as made cover the region of interest requested by the PI (in the PPR)? If large parts of the cube are blank, and/or a line is cut off at the edge of the cube, the DA should consult a scientist.
- Is the synthesized beam highly elliptical (axial ratio > 3:1)? If so, check that this is not due to heavy flagging of the target. If flagging is the cause (and not the observing HA and Dec), consult a scientist as to whether or not the job should fail QA.
- Is the synthesized beam sufficiently well sampled (>3 pixels across the minor axis)? If not, the DA should rerun with a smaller pixel scale (and the image size in pixels adjusted to cover the same area) using the pipeline rerun script supplied by SSA.
- Check that the continuum images look sensible, compare the theoretical to the achieved RMS, and note in the QA report (step 4) if any are dynamic range limited.
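  A minimal sketch of the rms check referenced above, assuming the theoretical and measured rms (in the same units) are read from the hif_makeimages weblog page; the helper name rms_range is hypothetical and the tolerances are those listed above:
      # Print the acceptable measured-rms range for a given theoretical rms and band.
      # Lower bound: no more than 20% below the estimate; upper bound: band-dependent.
      rms_range () {
        awk -v t="$1" -v b="$2" 'BEGIN {
          frac = 0.10;                        # Bands 3-6
          if (b >= 7 && b <= 8) frac = 0.15;  # Bands 7-8
          if (b >= 9) frac = 0.20;            # Bands 9-10
          printf "acceptable rms: %.4g to %.4g\n", 0.8 * t, (1 + frac) * t;
        }'
      }
      # e.g. rms_range 1.0 6  ->  acceptable rms: 0.8 to 1.1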
- To rerun a job, log in as almapipe (ssh almapipe@localhost), modify the necessary files in the working directory (e.g. PPR.xml, cont.dat and/or flagtargetstemplate.txt), and run almaReimageCube -r <job id> <MOUS UID>; an illustrative invocation is shown below.
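  An illustrative rerun, reusing the job id and MOUS UID that appear in other examples in this checklist (substitute the values for the job in hand; the exact UID spelling expected by the script, uid://... vs uid___..., should be confirmed with SSA):
      ssh almapipe@localhost
      # edit PPR.xml, cont.dat and/or flagtargetstemplate.txt in the working directory, then:
      almaReimageCube -r 320755390 uid://A001/X13d5/X224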
- Update the Google Spreadsheet with the QA state: https://docs.google.com/spreadsheets/d/1USJ5rQRNbR3ORj80-UuEuqGnSYm4l6h_A6ac5s1FyJ0/edit#gid=0
- Run tar -xvzf on the tarred, gzipped weblog file in the products directory.
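  For example, using the weblog name that appears in the re-packaging step below (the actual filename differs per MOUS):
      cd <products directory>
      tar -xvzf uid___A001_X13d5_X224.hifa_cubeimage.weblog.tgz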
- Log in as almapipe (if not already done above). Write a short QA report suitable for transmission to the user using the attached template qa_notes.html and add this to the weblog html directory (as qa_notes.html); a minimal placement sketch follows this list. In the qa_comments section:
- If the image failed QA, briefly give a reason.
- Note any target flagging (on the level of antennas/spw).
- Note any change to the continuum range(s).
- Add any other comments (e.g. if the image is dynamic range limited and self calibration is recommended).
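  A minimal placement sketch (paths are illustrative: the pipeline-* directory name comes from the extracted weblog, the html subdirectory is the weblog html directory mentioned above, and the attached template should be copied from wherever it was saved):
      cp /path/to/attached/qa_notes.html pipeline-20191021T185739/html/qa_notes.html
      # then edit the qa_comments section as described above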
- Tar/gzip the weblog with the qa_notes added, e.g. tar cvzf uid___A001_X13d5_X224.hifa_cubeimage.weblog.tgz pipeline-20191021T185739 (this should overwrite the old weblog.tgz file).
- If the image products passed QA, initiate archive ingest using the audiPass script: audiPass <job id> -E <your email>, e.g. audiPass 320755390 -E pfisher@nrao.edu.
- Remove the products/raw/working directories after the data have been successfully ingested into the archive.
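  For example, assuming the job directory sits under the QA area shown in step 1 and uses the subdirectory names listed in this step (double-check the path before deleting):
      cd /lustre/naasc/web/almapipe/pipeline/vaprod/image-qa/<job directory>
      rm -rf products raw working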