StackStorm is:

a platform for integration and automation across services and tools. It ties together your existing infrastructure and application environment so you can more easily automate that environment. It has a particular focus on taking actions in response to events.

This sounds a lot like what Amygdala does, which is what prompted this story.

This page documents the planning and analysis of StackStorm for this story.

StackStorm Terminology

Amygdala is a little shy on terminology. It essentially has one major interface, Loader, whose role is to sit on a particular channel of an AMQP connection and do something with the events that come in.
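To make that concrete, here is a minimal sketch of the Loader pattern in Python using pika, purely for illustration (the real Loader is Java); the broker host, queue name, and handler body are hypothetical.

    # Minimal sketch of the Loader pattern: sit on an AMQP channel and do something
    # with each event that arrives. Python/pika is used only for illustration; the
    # broker host, queue name, and handler body are hypothetical.
    import json

    import pika

    def handle_event(channel, method, properties, body):
        event = json.loads(body)
        print("received event:", event)
        channel.basic_ack(delivery_tag=method.delivery_tag)

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="amygdala.events", durable=True)  # hypothetical queue
    channel.basic_consume(queue="amygdala.events", on_message_callback=handle_event)
    channel.start_consuming()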

StackStorm, in contrast, has a fairly rich suite of interfaces:

  1. Sensors, which watch external systems and emit triggers when something happens
  2. Triggers, which represent events entering the system
  3. Rules, which match triggers and decide which actions to run
  4. Actions, which do the actual work
  5. Workflows, which compose actions into larger procedures
  6. Packs, which bundle sensors, triggers, rules, actions, and workflows for deployment

More information about this can be found in the StackStorm "About" documentation.

Amygdala Migration

The purpose of this section is to sketch out which Amygdala processes exist and how to convert them to StackStorm processes.

Monitor and Control Systems

The package edu.nrao.archive.amygdala.bridges contains several classes which collaborate to handle new science data models (SDMs) and binary data files (BDFs) as they come off the array. The earliest moment our software could possibly be apprised of these files is when the metadata capture and formatting (MCAF) process finishes producing the SDM, or when the correlator backend (CBE) finishes producing the BDF. Amygdala was architected around this idea: the class MCMulticastBridge connects to MCAF, registers a webhook pointing back at itself, and begins processing Monitor & Control UDP packets, level-changing them into Java EE events and AMQP events.

From here, the BdfIngestionBatcher runs on an adjustable timer (currently five minutes), batch ingesting however many BDFs have arrived since the last run. SdmIngestionTrigger simply initiates an SDM ingestion whenever a complete SDM is generated; this occurs just once per observation, in contrast to the BDFs, which are generated continuously throughout the observation.
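For orientation, here is a conceptual Python sketch of the batching behavior described above (the real implementation is the Java class named earlier): every few minutes, gather whatever BDFs have appeared since the last pass and submit them as one batch. The staging directory and the submit function are hypothetical stand-ins.

    # Conceptual sketch of BdfIngestionBatcher's behavior; the real class is Java.
    # The staging directory and submit_ingestion_batch() are hypothetical stand-ins.
    import time
    from pathlib import Path

    BDF_DIR = Path("/path/to/bdf/staging")   # hypothetical staging directory
    INTERVAL_SECONDS = 5 * 60                # adjustable; currently five minutes

    def submit_ingestion_batch(paths):
        """Stand-in for whatever actually kicks off a BDF ingestion batch."""
        print(f"ingesting batch of {len(paths)} BDFs")

    already_seen = set()
    while True:
        current = {p for p in BDF_DIR.iterdir() if p.is_file()}
        new_files = current - already_seen
        if new_files:
            submit_ingestion_batch(sorted(new_files))
            already_seen |= new_files
        time.sleep(INTERVAL_SECONDS)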

This architecture is now considered flawed. It is prone to network filesystem synchronization issues: just because the CBE is done writing a file does not mean that the file is visible yet elsewhere on the network. We have also seen that the MCAF webhook needs to be reissued whenever MCAF is restarted, and since we are not apprised of exactly when that happens, we wind up re-registering the webhook on a periodic timer.

Overall, it's clear that this whole approach is brittle and offers no particular benefit over the older way of doing this, pioneered by John Benson: monitoring the filesystem periodically.

StackStorm version

This will wind up being a pair of very short workflows that look something like this:

  1. Filesystem monitor for BDF files
  2. Run the Workspaces ingestion workflow/capability (sketched below)
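A rough sketch of the second step, assuming Workspaces can be asked over HTTP to run its ingestion capability; the endpoint, payload shape, and "workspaces_url" config key are hypothetical, and only the StackStorm Action base class is real API.

    # Sketch of a StackStorm Python-runner action that asks Workspaces to run its
    # ingestion capability. The Workspaces URL, endpoint, and payload shape are
    # hypothetical placeholders; only the Action base class is real StackStorm API.
    import requests

    from st2common.runners.base_action import Action


    class RunIngestionCapability(Action):
        def run(self, sdm_path):
            base_url = (self.config or {}).get("workspaces_url", "http://localhost:3457")
            response = requests.post(
                f"{base_url}/capabilities/ingestion/request",  # hypothetical endpoint
                json={"parameters": {"sdm_path": sdm_path}},
                timeout=30,
            )
            response.raise_for_status()
            return (True, response.json())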

StackStorm: Necessary Functionality

Monitor and Control Systems

Filesystem Monitoring

Per the discussion above, the correct approach here is a filesystem monitor: every so often (at a configurable interval), emit an event if new files are present in the directory.

There are several ways to do this:

  1. Emit a trigger about new files arriving in the directory
  2. Emit a trigger when there are new files, but include all the files in the directory
  3. Emit a trigger whenever there are any files in the directory at all, with or without listing them

Option 1 seems like the most useful, but requires the most state on the part of the sensor emitting the trigger. It also opens the question of whether StackStorm is an "at least once" or an "at most once" system.

Option 2 requires less state and seems like it would be more robust if StackStorm turns out to be an "at most once" system; a sketch of this option appears below.

Option 3 seems like it would be the most irritating and least helpful, except for the fact that it would be totally sufficient for this problem and requires no particular state.
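As a point of reference, here is a rough sketch of option 2 as a StackStorm polling sensor. The PollingSensor base class and the sensor_service dispatch/datastore calls are real StackStorm APIs; the pack and trigger names, the "bdf_directory" config key, and the watched path are hypothetical.

    # Sketch of option 2: when new files show up in the watched directory, dispatch
    # a trigger whose payload lists every file currently present. The pack/trigger
    # names, config key, and directory below are hypothetical.
    import os

    from st2reactor.sensor.base import PollingSensor


    class BdfDirectorySensor(PollingSensor):
        DATASTORE_KEY = "bdf_directory_sensor.seen_files"

        def setup(self):
            self._directory = (self._config or {}).get("bdf_directory", "/path/to/bdf/staging")

        def poll(self):
            current = sorted(
                name for name in os.listdir(self._directory)
                if os.path.isfile(os.path.join(self._directory, name))
            )
            previous = (self.sensor_service.get_value(self.DATASTORE_KEY) or "").split("\n")
            if set(current) - set(previous):
                self.sensor_service.dispatch(
                    trigger="nrao_archive.bdf_files_present",  # hypothetical trigger ref
                    payload={"directory": self._directory, "files": current},
                )
            self.sensor_service.set_value(self.DATASTORE_KEY, "\n".join(current))

        def cleanup(self):
            pass

        def add_trigger(self, trigger):
            pass

        def update_trigger(self, trigger):
            pass

        def remove_trigger(self, trigger):
            pass

Because the payload carries the full file listing, a missed trigger costs nothing more than one polling interval of delay, which is what makes this option attractive if StackStorm turns out to be "at most once".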

Queue Runner/Loaders

These come in two flavors: ALMA and VLBA. They share a similar architecture, which is very interesting and cool (I wrote the first one) but mainly exists to cap the number of concurrent executions, something Workspaces already provides intrinsically (capability queue limits). So it should not be necessary to bring much of that complexity along for the migration.

ALMA

Despite the name, AlmaReingestionQueueLoader actually handles three things:

  1. Reingestion of raw ASDMs from the ALMA instrument
  2. Reingestion of calibrations from ALMA
  3. Marking ASDMs as not calibrateable

What is needed to implement this is:

  1. A source of ALMA ASDMs to load
  2. A source of ALMA calibrations to load
  3. An action for marking an ASDM as not calibrateable (sketched below)
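The third item is the only one that is an action rather than an event source; the first two are sources that look much like the sensors already discussed. How an ASDM actually gets marked not calibrateable is not specified here, so the following is only a shape sketch: the StackStorm Action base class is real, but the metadata-service URL, endpoint, and config key are hypothetical placeholders.

    # Shape sketch of a StackStorm Python-runner action for item 3. The Action
    # base class is real StackStorm API; the archive metadata endpoint, request
    # shape, and "archive_metadata_url" config key are hypothetical.
    import requests

    from st2common.runners.base_action import Action


    class MarkAsdmNotCalibrateable(Action):
        def run(self, asdm_uid):
            base_url = (self.config or {}).get("archive_metadata_url", "http://localhost:8080")
            response = requests.put(
                f"{base_url}/asdms/{asdm_uid}/calibrateable",  # hypothetical endpoint
                json={"calibrateable": False},
                timeout=30,
            )
            response.raise_for_status()
            return (True, {"asdm_uid": asdm_uid, "calibrateable": False})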

VLBA

The VLBA system is simpler: it only handles the appearance of VLBA data to ingest. There is no automatic calibration of VLBA data. What is needed to implement this is therefore only a source of VLBA data to load.

StackStorm Checklist