Page for presenting results; a description of the tests is presented on the parent page:
...
Memory footprint is systematically higher for the refactor case, which is the opposite of what was expected. Potentially this reflects an efficiency improvement in the chanchunk estimation rather than the actual natural unconstrained limit, which would suggest that memory usage would increase as available memory increases.
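For context, a minimal sketch of what a memory-capped chanchunk estimate could look like, assuming a budget derived from currently available host memory. The function name, the use of psutil, and the numbers are illustrative assumptions, not the refactor's actual heuristic:

```python
import psutil  # assumption: psutil is available to query host memory

def estimate_chanchunks(nchan, plane_bytes, mem_fraction=0.8):
    """Hypothetical sketch of a memory-capped channel-chunk estimate:
    split the cube into enough chunks that one chunk fits within a
    fraction of currently available memory (not CASA's actual logic)."""
    budget = int(psutil.virtual_memory().available * mem_fraction)
    cube_bytes = nchan * plane_bytes
    return max(1, -(-cube_bytes // budget))  # ceiling division

# Example: 1000-channel cube of 4096x4096 float32 planes (~64 MiB per plane)
print(estimate_chanchunks(1000, 4096 * 4096 * 4))
```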
...
Several unanticipated or difficult-to-explain behaviors were seen in the imaging pipeline:
Why the significant refactor runtime increase (>2x) with 983.S.
Why the significant decrease with 092.S.
Why the systematic slight decrease in runtime with all other data sets; a slight increase in runtime was expected.
For 2017.1.00983.S the two most likely culprits are time per major cycle and number of major cycles.
The average major-cycle time plus the transition time to the minor cycle per tclean call varies, with a bias toward the refactor code taking longer, as expected, but not at the 2x-plus level. For some cases it is faster, which is yet to be explained.
There is a significant increase in the number of major cycles for SPWs 29 and 31 for both targets. So for 2017.1.00983.S the increase in pipeline runtime comes from a slight increase in per-major-cycle time and a significant increase in the number of major cycles for two SPWs. The latter probably warrants further examination; it is likely that real-world data sets could see significantly longer run times due to convergence issues with the new cube stopping criteria.
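One way to cross-check the major-cycle counts per tclean call is to tally mentions in the CASA logs. The sketch below assumes log phrases along the lines of "Begin Task: tclean" and "Major Cycle"; the exact phrasing should be verified against the real logs:

```python
import re
from collections import Counter

# Hypothetical sketch: tally major-cycle mentions per tclean call in a CASA
# log. The log phrases and the way calls are delimited are assumptions to
# be checked against the actual logs, not the pipeline's own bookkeeping.
def count_major_cycles(logfile,
                       cycle_pattern=r"[Mm]ajor [Cc]ycle",
                       call_pattern=r"Begin Task: tclean"):
    counts = Counter()
    call_index = 0
    with open(logfile) as log:
        for line in log:
            if re.search(call_pattern, line):
                call_index += 1          # a new tclean call starts
            elif re.search(cycle_pattern, line):
                counts[call_index] += 1  # major-cycle mention within that call
    return counts

# Usage: compare per-call counts between the two versions, e.g.
# count_major_cycles("casa-6.1.0-63/casa.log") vs
# count_major_cycles("casa-CAS-9386-53/casa.log")
```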
For E2E6.1.00092.S there are three possible explanations: decreased major-cycle runtime (which shouldn't be possible), a decreased number of major cycles, or parallelization effects.
The above plot shows much less runtime per major cycle for the refactored code, roughly 5x less, which would be consistent with serial versus parallel execution with 8 engines (an ideal speedup would approach 8x, so ~5x is plausible given overheads).
The number of major cycles per tclean call is consistent.
The ganglia plots below show the CPU load of cube imaging for casa-6.1.0-63 and casa-CAS-9386-53.el7, respectively.
The casa5 and casa6 logs show that tclean is explicitly called with parallel=False, while the refactor calls it with parallel=True (see the sketch after this block).
The I/O wait at 15:50 is most likely a local Lustre response issue.
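For reference, the invocation difference noted above looks roughly like the following. The vis/imagename values are placeholders and the parameter list is minimal (real pipeline calls set imsize, cell, and many other parameters); running with parallel=True generally requires CASA to be started under mpicasa:

```python
# Illustrative only: placeholder MS and image names, minimal parameters.
from casatasks import tclean

# casa-6.1.0-63 pipeline runs: serial cube imaging
tclean(vis='uid___placeholder.ms', imagename='cube_serial',
       specmode='cube', parallel=False)

# refactor (CAS-9386) runs: parallel cube imaging across mpicasa engines
tclean(vis='uid___placeholder.ms', imagename='cube_parallel',
       specmode='cube', parallel=True)
```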
Lastly, the log-scale plot of imaging pipeline runtime shows the inverse of the expected improvement.
Below is a plot of the number of major-cycle references per CASA version per data set (excluding E2E6.1.00092.S and 2017.1.00750.T because of parallelization). This is across *all* tclean calls for all imaging cases in the pipeline.
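The counts behind such a plot could be gathered with a simple aggregate in the same spirit as the per-call tally above. The directory layout (version/dataset/casa.log) and the "Major Cycle" phrase are assumptions for illustration only:

```python
import re
from pathlib import Path

# Rough aggregate of major-cycle references per (CASA version, data set),
# assuming logs live at <root>/<version>/<dataset>/casa.log.
def tally_major_cycle_refs(root, phrase=r"[Mm]ajor [Cc]ycle"):
    totals = {}
    for log in Path(root).glob("*/*/casa.log"):
        version, dataset = log.parts[-3], log.parts[-2]
        text = log.read_text(errors="ignore")
        totals[(version, dataset)] = len(re.findall(phrase, text))
    return totals

# e.g. tally_major_cycle_refs("benchmark_logs") -> {(version, dataset): count, ...}
```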