Page for presenting results; a description of the tests is on the parent page.

Calibration pipeline

Goal: Test the behavior of Tier0 parallelization of calibrator imaging in the calibration pipeline


Table: Runtime of the calibration pipeline vs. CASA version, 8-way parallel runs (unless stated otherwise) with a 256 GB memory limit.

Project        | casa-pipeline-release-5.6.1-8.el7 | casa-6.1.0-63 | casa-CAS-9386-53 | casa-CAS-9386-53_2MPI
2017.1.00717.S | 28h51m11s                         | 32h14m43s     | 31h37m52s        | 33h09m28s
2017.1.00750.T | 05h17m12s                         | 04h52m57s     | 04h50m23s        | 04h42m11s
2017.1.00884.S | 09h15m26s                         | 08h46m16s     | 08h43m33s        | 08h36m50s
2017.1.00983.S | 55h41m06s                         | 51h29m49s     | 51h27m05s        | 58h05m08s
2017.1.01214.S | 28h54m29s                         | 22h41m17s     | 22h39m23s        | 20h49m49s
E2E6.1.00080.S | 14h01m18s                         | 13h14m39s     | 13h18m04s        | 14h25m03s
E2E6.1.00092.S | 61h26m59s                         | 65h46m20s     | 66h13m38s        | 70h14m20s


No obvious issues in calibration; the slight increase in runtime is not unexpected given the tclean() runtime change. Some behaviors could be investigated further, but there are no current plans to do so given competing time demands.


Imaging pipeline

Goal: Demonstrate that the refactored code has the desired memory footprint effect.


Table: Memory footprint of the entire pipeline run vs. CASA version, 8-way parallel runs with a 128 GB memory limit.


Memory footprint (GB)
Project        | casa-pipeline-release-5.6.1-8.el7 | casa-6.1.0-63 | casa-CAS-9386-53
2017.1.00717.S | 15.96028519                       | 15.35385895   | 17.16140747
2017.1.00750.T | 5.300235748                       | 3.992595673   | 4.630950928
2017.1.00884.S | 48.88618088                       | 53.2240715    | 79.35606766
2017.1.00983.S | 50.89418411                       | 50.69113159   | 59.88224411
2017.1.01214.S | 20.72197723                       | 20.30682755   | 24.82666397
E2E6.1.00080.S | 47.14878082                       | 45.94787979   | 49.49978256
E2E6.1.00092.S | 23.10336304                       | 22.3993721    | 57.35647202


Memory footprint is systematically higher for the refactor case, which is the opposite of what was expected. Potentially this reflects an efficiency improvement in the chanchunk estimation rather than the actual natural unconstrained limit, which would suggest that memory usage would increase as more memory became available.
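
For context, a minimal sketch of the kind of chanchunk estimate under discussion, assuming a simple per-channel memory model; the function name, the 8-images-per-channel factor, and the byte sizes are illustrative assumptions, not the actual CASA implementation:

    def estimate_chanchunk(nchan, imsize, mem_limit_bytes=128 * 1024**3,
                           nbytes_per_pixel=4, images_per_chan=8):
        """Estimate how many channel chunks are needed so the per-chunk
        image products fit under the memory limit (hypothetical model)."""
        bytes_per_chan = imsize[0] * imsize[1] * nbytes_per_pixel * images_per_chan
        total_bytes = nchan * bytes_per_chan
        return max(1, -(-total_bytes // mem_limit_bytes))  # ceiling division

    # e.g. a 3840-channel, 4096x4096 cube under a 128 GB limit:
    print(estimate_chanchunk(3840, (4096, 4096)))  # -> 15 chunks

Under such a model, a more accurate per-channel estimate yields fewer, larger chunks, so the footprint would track the available memory rather than settle at a natural limit.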


Goal: Demonstrate the runtime cost of the refactored code, and whether it is a fixed overhead, so that its contribution goes to zero for larger data sets, or whether the overhead scales with image complexity.


Table: Runtime of the imaging pipeline vs. CASA version, 8-way parallel runs with a 128 GB memory limit.

Project        | casa-pipeline-release-5.6.1-8.el7 | casa-6.1.0-63 | casa-CAS-9386-53
2017.1.00717.S | 18h36m15s                         | 17h21m13s     | 15h43m25s
2017.1.00750.T | 06h41m32s                         | 05h30m07s     | 02h55m08s
2017.1.00884.S | 07h17m28s                         | 06h45m21s     | 05h50m07s
2017.1.00983.S | 110h33m48s                        | 117h53m42s    | 276h24m43s
2017.1.01214.S | 06h46m09s                         | 06h11m58s     | 05h31m04s
E2E6.1.00080.S | 35h22m22s                         | 35h16m56s     | 29h31m27s
E2E6.1.00092.S | 126h30m37s                        | 130h18m17s    | 53h35m03s
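
One way to probe the fixed-vs-scaling question in the goal above is to compare the refactor delta against the baseline runtime per project: a fixed overhead shrinks as a fraction of runtime on larger runs, while a scaling overhead keeps the ratio roughly constant. A minimal sketch using values from the table (the duration parsing and the project subset are illustrative):

    import re

    def to_hours(s):
        """Parse an 'HHhMMmSSs' duration string into hours."""
        h, m, sec = map(int, re.fullmatch(r"(\d+)h(\d+)m(\d+)s", s).groups())
        return h + m / 60 + sec / 3600

    # baseline (casa-6.1.0-63) vs. refactor (casa-CAS-9386-53), from the table
    runs = {
        "2017.1.00750.T": ("05h30m07s", "02h55m08s"),
        "2017.1.00983.S": ("117h53m42s", "276h24m43s"),
        "E2E6.1.00092.S": ("130h18m17s", "53h35m03s"),
    }
    for proj, (base, refac) in runs.items():
        b, r = to_hours(base), to_hours(refac)
        print(f"{proj}: delta = {r - b:+.1f} h, ratio = {r / b:.2f}")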


Several unanticipated or difficult-to-explain behaviors were seen in the imaging pipeline:

  • Why the significant refactor runtime increase (>2x) with 983.S
  • Why the significant decrease with 092.S
  • Why the systematic slight decrease in runtime with all other data sets, when a slight increase was expected

For 2017.1.00983.S, the two most likely culprits are the time per major cycle and the number of major cycles.

The average major-cycle time plus the major-to-minor-cycle transition time per tclean call varies, with a bias toward the refactored code taking longer, as expected, but not at the 2x-plus level. In some cases it is faster, which is yet to be explained.

There is a significant increase in the number of major cycles for SPWs 29 and 31 for both targets. So for 2017.1.00983.S, the runtime increase consists of a slight increase in per-major-cycle time and a significant increase in the number of major cycles for two SPWs. The latter probably warrants further examination; it is likely that real-world data sets could see significantly longer run times due to convergence issues with the new cube stopping criteria.
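
For reference, a rough sketch of how the major-cycle counts per tclean call could be scraped from a casa log; the marker strings ("Begin Task: tclean", "Major Cycle") are assumptions about the log format and may differ between CASA versions:

    from collections import Counter

    def count_major_cycles(logfile):
        """Count major-cycle log lines per tclean call in a casa log
        (hypothetical scraper; the marker strings are assumptions)."""
        counts = Counter()
        call = 0  # ordinal of the current tclean call in the log
        with open(logfile) as f:
            for line in f:
                if "Begin Task: tclean" in line:
                    call += 1
                elif "Major Cycle" in line and call:
                    counts[call] += 1
        return counts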


For E2E6.1.00092.S, there are three possible explanations: decreased major-cycle runtime (which shouldn't be possible), a decreased number of major cycles, or parallelization effects.

The plot above shows much lower runtime per major cycle for the refactored code, about 5x, which would be consistent with serial vs. parallel execution with 8 engines.
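
As a quick sanity check, assuming an Amdahl's-law speedup model (an assumption; the pipeline's actual scaling may differ), a ~5x speedup on 8 engines corresponds to a plausible parallel fraction:

    def speedup(p, n=8):
        """Amdahl's law: the serial fraction (1 - p) caps the speedup."""
        return 1 / ((1 - p) + p / n)

    # Solving speedup(p, 8) == 5 for p gives p = (1 - 1/5) / (1 - 1/8)
    p = (1 - 1 / 5) / (1 - 1 / 8)
    print(p, speedup(p))  # ~0.914 -> 5.0, so 5x from 8 engines is plausible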

The number of major cycles per tclean call is consistent between versions.


The Ganglia plots below show the CPU load of cube imaging for casa-6.1.0-63 and casa-CAS-9386-53.el7, respectively.





The casa5 and casa6 logs show that tclean is explicitly called with parallel=False, while the refactored version is called with parallel=True.
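
Schematically, the difference amounts to the following (the vis/imagename/specmode values are placeholders, not the pipeline's actual inputs):

    # casa5 / casa6 pipeline runs, per the logs:
    tclean(vis='target.ms', imagename='cube', specmode='cube', parallel=False)
    # refactored (casa-CAS-9386-53) runs:
    tclean(vis='target.ms', imagename='cube', specmode='cube', parallel=True)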

The I/O wait at 15:50 is most likely a local Lustre response issue.

Lastly, the log-scale plot of imaging pipeline runtime shows an improvement, the inverse of what was expected.

Below is a plot of the number of major-cycle references per CASA version per data set (excluding E2E6.1.00092.S and 2017.1.00750.T because of parallelization). This is across *all* tclean calls for all imaging cases in the pipeline.


