...
This is a crazy idea, but what about using checkpointing with SSA's workflow? Right now they have a three-step process: download, process, upload, all of which use lustre. But what if we ran checkpointing after each step? Would that allow the data to be downloaded directly to local storage instead of lustre, then processed, then uploaded? Now that I write it out, I don't see how this is much better than the current process of copying from archive to lustre to local to lustre to local to lustre. Have to think about it more.
This checkpointing is kind of a trick to get multiple jobs (actually checkpoints of one job) to run on the same host, something we wanted a while ago.
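For reference, a rough sketch of what a self-checkpointing submit file looks like in HTCondor. The executable and file names here are made up for illustration; the real knobs are `checkpoint_exit_code` (the job exits with this code each time it writes a checkpoint, and HTCondor restarts it) and `transfer_checkpoint_files`, but the details should be checked against the HTCondor docs before trying this with SSA's jobs.

```
# Hypothetical submit file sketch for a self-checkpointing job.
# calibrate.sh and state.dat are placeholder names, not SSA's files.
executable                = calibrate.sh
checkpoint_exit_code      = 85
transfer_checkpoint_files = state.dat
should_transfer_files     = YES
when_to_transfer_output   = ON_EXIT
queue
```

Each "step" (download, process, upload) would exit with code 85 after saving its state, which is what would let one logical job resume on the same host between steps.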
Let me see if I can explain what I think the process is for SSA's std_calibration, which I think is a DAG:
- fetch - Copies data from someplace (perhaps the archive) to local storage on nmpost node.
- Then DAG node ends and data is returned to lustre.
- envoy - Copies data from lustre to local storage and runs calibration.
- Then DAG node ends and data is returned to lustre.
- convey - Copies data from lustre to local storage and then delivers it someplace.
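The three steps above would look something like this as a DAGMan input file. The node and submit-file names are guesses based on the step names, not SSA's actual files:

```
# Sketch of what std_calibration's DAG might be.
JOB fetch  fetch.sub
JOB envoy  envoy.sub
JOB convey convey.sub

# Run the three nodes strictly in sequence.
PARENT fetch CHILD envoy
PARENT envoy CHILD convey
```

Since each DAG node is a separate job, there's no guarantee envoy lands on the same host as fetch, which is why everything round-trips through lustre between nodes.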
Though probably the best solution is to keep SSA from doing their unnecessary three-step process.
...