  1. The DA is notified by email when there is a new image dataset to review; for now, the Operations Manager will assign datasets for review. The data are in /lustre/naasc/web/almapipe/pipeline/naasc-prod/image-qa. Note: the Operations Manager will still need to check the processing area for stalled or failed jobs. The initiator of any such job can be found in spool/<jobid>/metadata.json for contact by Helpdesk email if needed.
  2. Log in as almapipe (ssh almapipe@localhost or almapipe@<your desktop>, using the ssh key) and set the appropriate environment (type activate_profile naasc-prod on the command line), as in the example below.
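     For example, a minimal session sketch (the hostname and the contents of the listing are illustrative; job directories are named by <job id>, see step 4):
        ssh almapipe@localhost        # or almapipe@<your desktop>, using the ssh key
        activate_profile naasc-prod   # select the naasc-prod environment
        cd /lustre/naasc/web/almapipe/pipeline/naasc-prod/image-qa
        ls                            # job directories awaiting review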
  3. QA: in the pilot, only channel widths and ranges can be set, so QA should focus on those, along with general QA issues. Note that firefox (to browse the weblog), casa, and casaviewer are not in the default path for almapipe; they live in /opt/local/bin, so they need to be invoked with the full path, e.g. /opt/local/bin/firefox or /opt/local/bin/casaviewer (see the example after this list):
    1. Was the restoration of the calibration successful?
    2. Is the continuum subtraction satisfactory (hif_findcont task)? If not, the DA should pick a new continuum range and rerun (step 4 below), or consult with an SRDP Scientist if it is not clear how best to do this.
    3. Does the rms noise in a line-free cube channel match the theoretical estimate from the hif_makeimages step in the pipeline? The measured rms should be no more than ~10% above the theoretical estimate for bands 3-6 (15% for bands 7-8, 20% for bands 9-10), and no more than 30% below it (e.g., for a band 6 cube with a theoretical rms of 1.0 mJy/beam, the measured line-free rms should lie between roughly 0.7 and 1.1 mJy/beam). If it falls outside these ranges, the DA should perform a limited investigation:
      1. Check the original QA2 report in AQUA to see if any problems were reported there.
      2. In cases of dynamic range limits, is the dynamic range >200 (50 for 7m data)? If so, the product may be marked as a QA pass, but the DA should note in the QA report that the user should attempt self-calibration.
      3. If there is bad target data, the DA should flag the bad data and rerun the PPR using the pipeline rerun script supplied by SSA.
      4. If neither of the above cases applies, or the rms is more than 30% below the theoretical estimate, the DA should consult a scientist.
    4. Are there artifacts in the image suggesting that target flagging is needed? If so, flag the data and rerun the PPR using the pipeline rerun script supplied by SSA (see step 4 below).
    5. Does the cube, as made, seem likely to have covered the region of interest requested by the PI (in the PPR)? If large parts of the cube are blank, and/or a line is cut off at the edge of the cube, the DA should consult with a scientist.
    6. Is the synthesized beam highly elliptical (axial ratio > 3:1)? If so, check that this is not due to heavy flagging of the target. If flagging is the cause (and not the observing HA and Dec), consult with a scientist as to whether or not the job should fail QA.
    7. Is the synthesized beam sufficiently well sampled (>3 pixels across the minor axis)? If not, the DA should rerun with a smaller pixel scale (and the image size in pixels adjusted to cover the same area) using the pipeline rerun script supplied by SSA (see step 4 below).
    8. Check that the continuum images look sensible, compare the theoretical to the achieved rms, and note in the QA report (step 7) if any are dynamic range limited.
    9. If the user has requested a non-default angular resolution, the imageprecheck task (stage 3 in the weblog) will indicate the approximate requested beam size, and the last two lines of the table in the stage 3 weblog will report the taper used (if any) and the expected beam size of the product (the task itself may show a fail until the score heuristics are updated; this can be ignored). The achieved beam size in the final product should be checked against the values in imageprecheck (it need not be exactly the same, but should be within ~20%).
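     As noted above, the QA tools must be invoked with their full paths; for example (the weblog and image names below are placeholders, not actual file names for a given job):
        /opt/local/bin/firefox <path to weblog>/index.html &    # browse the weblog
        /opt/local/bin/casaviewer <cube image name> &           # inspect cubes and continuum images
        /opt/local/bin/casa                                      # start CASA for any interactive checks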
  4. To rerun a job, modify the necessary files in the working directory (e.g. PPR.xml, cont.dat, and/or flagtargetstemplate.txt) and run almaReimageCube -r <job id> <UID of job directory>, e.g. almaReimageCube -r 320578787 uid___A002_Xe29133_X3610. Note that the UID of the job directory is not the MOUS uid, but rather that of one of the ASDMs included; it can be found by going to image_qa/<jobID>/ and copying the name of the UID directory there, as in the example below.
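     For example, to find the job-directory UID and rerun (a sketch using the illustrative job ID and UID from above, and assuming image_qa here is the image-qa area given in step 1):
        cd /lustre/naasc/web/almapipe/pipeline/naasc-prod/image-qa/320578787
        ls -d uid___*                              # the ASDM UID directory, e.g. uid___A002_Xe29133_X3610
        almaReimageCube -r 320578787 uid___A002_Xe29133_X3610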
  5. If the job is still not passing QA, please contact a scientist. QA-fail jobs are not archived; in most cases we will email the user via the Helpdesk stating the reasons for the failure and making helpful suggestions, or suggest an ALMA QA3 report if there are problems with the data that were missed by ALMA QA2.
  6. Update the Google Spreadsheet with the QA state: https://docs.google.com/spreadsheets/d/1USJ5rQRNbR3ORj80-UuEuqGnSYm4l6h_A6ac5s1FyJ0/edit#gid=0 (to find the user who requested the reprocessing, go to the spool directory /lustre/naasc/web/almapipe/pipeline/naasc-prod/spool/<job id> and look for the userEmail field in the metadata.json file, as in the example below).
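     For example (job ID illustrative; if metadata.json is not formatted one field per line, simply open the file in a pager or editor instead):
        grep userEmail /lustre/naasc/web/almapipe/pipeline/naasc-prod/spool/320578787/metadata.json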
  7. Log in as almapipe (if not already done above). Optional, but especially recommended if there were issues that needed a rerun: write a short QA report suitable for transmission to the user using the attached template qa_notes.html, and add this to the weblog html directory (as qa_notes.html; see the sketch after this list). In the qa_comments section:
    • Note any target flagging (on the level of antennas/spw).
    • Note any change to the continuum range(s).
    • Add any other comments (e.g. if the image is dynamic range limited and self calibration is recommended).
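     A minimal sketch of this step (the template location and the weblog html directory are placeholders; use the actual locations for the job):
        cp <template location>/qa_notes.html .          # start from the attached qa_notes.html template
        vi qa_notes.html                                # fill in the qa_comments section
        cp qa_notes.html <weblog html directory>/       # place the report alongside the weblog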
  8. If the image products passed QA, initiate archive ingest using the audiPass script: audiPass <job id> -E <your email>, e.g. audiPass 320755390 -E pfisher@nrao.edu.
  9. Reporting issues: if a software problem is encountered during this process, please alert the Operations Manager, who will file a JIRA ticket with SSA if it is not already a known issue.