To assess the efficacy of the SRDP capabilities, understand usage patterns, and drive data-informed decision making, we will rely on metrics throughout the project.
Note: Unless otherwise noted, collected metrics should be available on weekly, monthly, semester, and cumulative timescales.
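As a sketch of how such a roll-up might be implemented (the event source, key formats, and semester boundaries below are illustrative assumptions, not defined by this document), a metric recorded as timestamped events could be bucketed onto each required timescale:

```python
from collections import Counter
from datetime import date

def bucket_keys(d: date):
    """Yield one aggregation key per required timescale for a metric event.

    Assumes ISO weeks and NRAO-style observing semesters
    (A: Feb-Jul, B: Aug-Jan); both conventions are assumptions here.
    """
    iso_year, iso_week, _ = d.isocalendar()
    semester = "A" if 2 <= d.month <= 7 else "B"
    sem_year = d.year if d.month >= 2 else d.year - 1
    yield f"week-{iso_year}-W{iso_week:02d}"
    yield f"month-{d.year}-{d.month:02d}"
    yield f"semester-{sem_year}{semester}"
    yield "cumulative"

def aggregate(event_dates):
    """Count events on weekly, monthly, semester, and cumulative timescales."""
    counts = Counter()
    for d in event_dates:
        counts.update(bucket_keys(d))
    return counts

# Example: three hypothetical ingestion events
print(aggregate([date(2020, 1, 15), date(2020, 2, 3), date(2020, 2, 10)]))
```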
Archive Content Metrics
Description | Purpose | Traceability | Status
---|---|---|---
Report the number and volume of calibration results ingested into the archive. | Track usage of space and coverage of SRDP products in the archive. | TUC02.5.1 |
Report the number and volume of Execution Blocks ingested into the archive (differentiating those from PI projects from those generated by the system). | Overall archive utilization. | |
Report the number and volume of image products ingested into the archive (differentiated by telescope and collection). | Track usage of space and coverage of SRDP products in the archive. | |
Report the number and volume of data products reingested into the archive as a result of QA processes on data products already delivered (QA3) or reprocessing with better pipelines. | Track usage of space in the archive. Determine whether the data replacement method should involve removal of old products or their retention as deprecated items. | New |
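A minimal sketch of the count-and-volume roll-up these metrics require, assuming a hypothetical ingestion record with `product_type`, `origin`, and `size_bytes` fields (none of these names come from the archive itself):

```python
from collections import defaultdict

def ingestion_totals(records):
    """Sum the number and volume of ingested products per (type, origin).

    `records` is an iterable of dicts with assumed keys:
    product_type ('calibration', 'execution_block', 'image', ...),
    origin ('pi' or 'system'), and size_bytes.
    """
    totals = defaultdict(lambda: {"count": 0, "bytes": 0})
    for rec in records:
        key = (rec["product_type"], rec["origin"])
        totals[key]["count"] += 1
        totals[key]["bytes"] += rec["size_bytes"]
    return dict(totals)

print(ingestion_totals([
    {"product_type": "execution_block", "origin": "pi", "size_bytes": 40e9},
    {"product_type": "execution_block", "origin": "system", "size_bytes": 12e9},
    {"product_type": "calibration", "origin": "system", "size_bytes": 2e9},
]))
```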
Archive Usage Metrics
Unless otherwise noted, all metrics in this section should be reported both as the number of primary products and as the total data volume.
Description | Purpose | Traceability | Status
---|---|---|---
Number of registered users that download each type of product. | This metric, as distinct from the total download volume, allows us to determine how widespread the SRDP user community is. | TUC12.5.1 |
Total number of product downloads of each type (including anonymous users), reported both as a total and per collection. | The total volume of data and products going out is one of the key performance indicators for the project. | TUC12.5.1 |
Number of products downloaded by a member of the observing team, both during and after the product proprietary period. | Understand data reuse and other usage patterns. | TUC12.5.1 |
Method of data delivery. | Understand how users prefer to access data; deprecate unused methods. | TUC12.5.1 |
Number of products downloaded by users not on the original observing teams of particular projects. (JT) | Understand impact, legacy value, and community interest in data from particular projects, with particular attention to large projects. | |
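The distinction between user reach (first row) and download volume (second row) could be computed from a download log along these lines; the log fields here are assumptions for illustration:

```python
def usage_summary(download_log):
    """Summarize downloads per product type from a hypothetical log.

    Each entry is a dict with assumed keys: product_type, collection,
    and user (None for anonymous downloads).
    """
    summary = {}
    for entry in download_log:
        s = summary.setdefault(
            entry["product_type"],
            {"downloads": 0, "users": set(), "per_collection": {}},
        )
        s["downloads"] += 1  # total downloads, anonymous included
        if entry["user"] is not None:
            s["users"].add(entry["user"])  # distinct registered users
        coll = entry["collection"]
        s["per_collection"][coll] = s["per_collection"].get(coll, 0) + 1
    # Report the count of distinct users rather than the set itself.
    return {ptype: {**s, "users": len(s["users"])} for ptype, s in summary.items()}
```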
Capability Usage Metrics
Description | Purpose | Traceability | Status
---|---|---|---
Report the number of times the standard calibration pipeline was triggered. | Determine overall usage of the pipeline. Normalize error reporting and other pipeline-related reports. | TUC02.5.1 |
Number and type of customization choices made by the user as part of the download from the archive. | Understand usage patterns and gauge interest in various options. Understand the resource implications of capabilities. | TUC12.5.1 |
Number of times (by telescope) each capability is triggered, and the options set. | Guide development of new capabilities; ensure delivered capabilities are useful. | |
Physical location of pipeline runs. | CV, NM, XSEDE, AWS, PartnerX, etc. | New |
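A sketch of how capability triggers might be tallied by telescope and option set; the event shape and field names are assumptions, not taken from any SRDP interface:

```python
from collections import Counter

def capability_counts(events):
    """Count capability triggers keyed by (telescope, capability, options).

    Options are reduced to a sorted tuple of (name, value) pairs so that
    equivalent option sets land in the same bucket.
    """
    counts = Counter()
    for ev in events:
        options = tuple(sorted(ev.get("options", {}).items()))
        counts[(ev["telescope"], ev["capability"], options)] += 1
    return counts

counts = capability_counts([
    {"telescope": "VLA", "capability": "restore", "options": {"band": "L"}},
    {"telescope": "VLA", "capability": "restore", "options": {"band": "L"}},
])
print(counts.most_common())
```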
Workflow Metrics
Description | Purpose | Traceability | Status
---|---|---|---
Report the number of times (and the reason) that the standard products required intervention before passing QA. | Gauge the efficiency and maturity of the pipeline. Estimate the effort required for QA in future performance periods. | TUC02.5.1 |
The number of times that a pipeline failed to produce a valid product and processing had to be switched to manual. | Understand the success rate of the pipeline to weigh investment in additional modes against increased robustness. | New |
Duration of each pipeline run (wall clock). | Monitoring of computing resource usage. | New |
Number of cores for each pipeline run. | Monitoring of computing resource usage. | New |
Memory allocation for each pipeline run. | Monitoring of computing resource usage. | New |
Time needed to flag data for calibration (if any). | Monitoring of human and computing resource usage. | New |
Time taken for weblog review of each pipeline run. | Monitoring of human resource usage. | New |
Data volume of products from each pipeline run. | Monitoring of computing/archive resource usage. | New |
Time spent in the queue before execution. | Resource estimation and management. | |
Total elapsed time between submission and delivery. | User experience monitoring. | |
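Most of these per-run quantities could be captured in a single record per pipeline execution; a minimal sketch follows, with all field names assumed rather than drawn from any existing SRDP schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PipelineRunMetrics:
    """One record per pipeline run; all field names are illustrative."""
    submitted: datetime        # request enters the queue
    started: datetime          # execution begins
    finished: datetime         # execution ends
    delivered: datetime        # products delivered to the user
    cores: int                 # cores allocated to the run
    memory_gb: float           # memory allocation
    product_bytes: int         # data volume of resulting products
    weblog_review: timedelta   # human time spent on weblog review
    flagging_time: timedelta   # human time spent flagging data, if any

    @property
    def queue_time(self) -> timedelta:
        """Time spent in the queue before execution."""
        return self.started - self.submitted

    @property
    def wall_clock(self) -> timedelta:
        """Duration of the pipeline run itself."""
        return self.finished - self.started

    @property
    def total_elapsed(self) -> timedelta:
        """Submission-to-delivery time as seen by the user."""
        return self.delivered - self.submitted
```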
Product Quality Metrics
Description | Purpose | Traceability | Status
---|---|---|---
Report the number of times that calibrations are reported by the PI as deficient. | Gauge the efficiency and maturity of the pipeline. Estimate the effort required for QA in future performance periods. Monitor the rate of bad data released by the SRDP project. | TUC02.5.1 |
Report the number of times that image products are reported by the PI as deficient, and the final disposition (not sufficient for science, bad pipeline, PI education). | Gauge the efficiency and maturity of the pipeline. Estimate the effort required for QA in future performance periods. Monitor the rate of bad data released by the SRDP project. | New |
Report the number of times that pipeline runs are rerun as a result of an error being discovered in the pipeline or in the observational data/metadata by NRAO staff (after delivery to the PI). | Monitor the rate of bad data released by the SRDP project. Help estimate resource requirements for reprocessing of large volumes of data. | New |
Report the number of pipeline runs needed on a dataset before delivery to the PI (for flagging, or tweaking of pipeline parameters to produce a satisfactory result). | Gauge the efficiency and maturity of the pipeline. Monitoring of human and computing resource usage. | New |
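These deficiency metrics reduce to rates over the number of deliveries; a sketch under an assumed record shape (the `deficient` and `disposition` keys are hypothetical):

```python
from collections import Counter

def deficiency_report(deliveries):
    """Summarize PI-reported deficiencies over a set of delivered products.

    Each delivery is a dict with assumed keys: 'deficient' (bool) and,
    when deficient, 'disposition' ('not sufficient for science',
    'bad pipeline', or 'PI education').
    """
    total = len(deliveries)
    deficient = [d for d in deliveries if d["deficient"]]
    return {
        "deliveries": total,
        "deficient": len(deficient),
        "rate": len(deficient) / total if total else 0.0,
        "dispositions": Counter(d["disposition"] for d in deficient),
    }
```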
Deployment Metrics
Description | Purpose | Traceability | Status
---|---|---|---
Average and peak number of simultaneous users (Archive and Workflow frontends). | Resource estimation, load balancing. | New |
MTBF (mean time between failures) and MTTR (mean time to repair). | Service uptime reporting, reliability. | New |
Average and peak number of executions of each capability. | Resource estimation, load balancing (might be redundant, but some capabilities can be quite resource intensive depending on the dataset). | New |
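MTBF and MTTR could be derived from a log of outage intervals; a minimal sketch, assuming each outage is recorded as a non-overlapping (start, end) pair within the reporting period:

```python
from datetime import datetime

def mtbf_mttr(period_start, period_end, outages):
    """Compute MTBF and MTTR in hours from (start, end) outage intervals.

    MTBF = total uptime / number of failures;
    MTTR = total downtime / number of failures.
    """
    n = len(outages)
    if n == 0:
        return None, None  # no failures: MTBF is undefined for the period
    downtime = sum((end - start).total_seconds() for start, end in outages)
    uptime = (period_end - period_start).total_seconds() - downtime
    return uptime / n / 3600.0, downtime / n / 3600.0

# Example: one two-hour outage in a one-month reporting period
mtbf, mttr = mtbf_mttr(
    datetime(2020, 1, 1), datetime(2020, 2, 1),
    [(datetime(2020, 1, 10, 3), datetime(2020, 1, 10, 5))],
)
print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h")
```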