Test State Descriptions
- TO DO: This is the initial state for each test, indicating that the test has been defined but is not yet ready to be executed, usually because the software the test is designed to validate has not yet been made available.
- READY TO TEST: Designates that all prerequisites for the test have been met and testing may commence. Transition to this state may be the result of a formal Test Readiness Review or a less formal process.
- TEST IN PROGRESS: Indicates that testing is underway.
- FAILED: A test in the FAILED state indicates that the software is not validated for the functionality tested.
- PASSED: The software under test fully meets the requirements of the test.
- PARTIAL: Some subtests (either formal subtests in the Jira sense or informal subtests within the test case) are in the PASSED or CLOSED state and others are not. The functionality tested is "partially" validated.
- CLOSED: Because subsequent releases of the software may introduce defects to previously validated functionality, tests remain in the PASSED state until a regression test is created that ensures such a defect is not introduced. Fully validated tests with regression tests in place are moved to the CLOSED state and do not need to be manually executed in the future.
- CANCELLED: The test is deemed to no longer be useful or required and will not be executed in the future. For example, if a scope change at the project level removes requirements from the project, the tests validating that functionality should be cancelled.
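Purely for illustration, the workflow above can be summarized as a simple enumeration. This is a minimal sketch; the Python names below (TestState and its members) are assumptions for the example and are not taken from any actual Jira configuration.

from enum import Enum

class TestState(Enum):
    """Sketch of the test workflow states described above."""
    TO_DO = "TO DO"                        # defined, but prerequisites not yet met
    READY_TO_TEST = "READY TO TEST"        # prerequisites met, testing may commence
    TEST_IN_PROGRESS = "TEST IN PROGRESS"  # testing is underway
    FAILED = "FAILED"                      # functionality not validated
    PASSED = "PASSED"                      # requirements fully met, no regression test yet
    PARTIAL = "PARTIAL"                    # some subtests passed/closed, others not
    CLOSED = "CLOSED"                      # validated and covered by a regression test
    CANCELLED = "CANCELLED"                # no longer useful or required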
Parent States
Often we group sets of subtests together for tracking and organizational reasons. The parent states are derived from the states of the subtests as follows (a sketch of this derivation appears after the list):
- CANCELLED: All subtests are in the CANCELLED state. Otherwise, CANCELLED subtests are not considered when determining the state of the parent.
- TO DO: All subtests are in the TO DO state.
- READY TO TEST: All subtests are in the READY TO TEST or TO DO state and at least one subtest is in the READY TO TEST state.
- TEST IN PROGRESS: All subtests are in the TO DO, READY TO TEST, or TEST IN PROGRESS state and at least one subtest is in the TEST IN PROGRESS state.
- PASSED: All subtests are in the PASSED or CLOSED states.
- CLOSED: All subtests are in the CLOSED state.
- FAILED: At least one subtest is in the FAILED state and no subtests are in the PASSED, CLOSED, PARTIAL, or TEST IN PROGRESS states.
- PARTIAL: All other cases.
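As a concrete illustration of the rules above, here is a minimal Python sketch of the parent-state derivation. The function name parent_state and the use of plain status strings are assumptions for the example only; this is not part of any actual Jira tooling.

def parent_state(subtest_states):
    """Derive a parent state from a list of subtest status strings."""
    if not subtest_states:
        raise ValueError("parent has no subtests")
    states = set(subtest_states)

    # CANCELLED: only when every subtest is cancelled; otherwise cancelled
    # subtests are ignored when determining the parent state.
    if states == {"CANCELLED"}:
        return "CANCELLED"
    states.discard("CANCELLED")

    if states == {"TO DO"}:
        return "TO DO"
    if states <= {"TO DO", "READY TO TEST"} and "READY TO TEST" in states:
        return "READY TO TEST"
    if states <= {"TO DO", "READY TO TEST", "TEST IN PROGRESS"} and "TEST IN PROGRESS" in states:
        return "TEST IN PROGRESS"
    # CLOSED is checked before PASSED because it is the more specific case.
    if states == {"CLOSED"}:
        return "CLOSED"
    if states <= {"PASSED", "CLOSED"}:
        return "PASSED"
    if "FAILED" in states and not states & {"PASSED", "CLOSED", "PARTIAL", "TEST IN PROGRESS"}:
        return "FAILED"
    return "PARTIAL"

For example, parent_state(["PASSED", "CLOSED", "CANCELLED"]) returns "PASSED", reflecting the rule that CANCELLED subtests are otherwise ignored.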
3 Comments
Mark Lacy
A couple of comments:
1) I have kept cancelled sub-task tickets (which were cancelled because the expected functionality, for example ALMA Cycle 0 or legacy VLA data searches, was deemed out of scope) in the hope that they will eventually be testable. It seems that JIRA allows the transition from Cancelled to Ready to Test. I guess that means, though, that the corresponding parent tickets should be set to Partial until these capabilities can be successfully tested.
2) Every time a new branch of a new deployment is deployed to test, at least some of the tests are rerun as regressions. It's usually not practical to re-execute the entire test suite on the short timescales of the build deployments, so things are often only partially tested between builds. I think this is probably OK, but just be aware that, although I try, it's hard to be sure that something tested in build 1 of a test deployment will still work in build N (or indeed in production)... It would be much nicer, though, if SSA just did one deployment to test (and used dev for the partial builds).
Jeff Kern
I think that the ones we have cancelled are no longer considered, so the parent ticket could be in the Passed state even with sub-tests in the Cancelled state. If we decide to revisit the scope decision at some point in the future, we can do a search for all cancelled (sub-)tests, move them to the To Do (or Ready to Test) state, and then the parent ticket would go back to Partial.
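If it helps, the search described here could be scripted against Jira. The sketch below uses the Python jira client library; the server URL, credentials, project key, and the status and issue-type names in the JQL are placeholders for illustration only.

from jira import JIRA

# Placeholder connection details; substitute the real server and credentials.
jira = JIRA(server="https://jira.example.org", basic_auth=("user", "api-token"))

# Find cancelled sub-tests so they can be moved back to To Do / Ready to Test
# if the scope decision is revisited (project key "SSA" is a placeholder).
issues = jira.search_issues('project = SSA AND issuetype = Sub-task AND status = Cancelled')
for issue in issues:
    print(issue.key, issue.fields.summary)
    # jira.transition_issue(issue, "To Do")  # uncomment to perform the transition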
Now that we have them actually deploying to test, I think we want to start being a bit more formal about when we are testing (does this new thing work?) and when we are validating (including the regressions). I think we should have a TRR (probably at the Wed meetings) before we start the validation. It is not reasonable for them to expect continual testing; that is what their regression tests are for.
Robert Treacy
Mark makes the point in 2) that it is impractical to execute fully comprehensive manual regression tests. Regression tests so far have not even carried from one wave to the next, but they already seem to present quite a tax between bug fixes and re-deployments. I know the goal is to minimize this churn. Since we have not yet started to track manual regression testing from wave to wave, I am thinking about how best to track this in Jira and in the schedule. There is a growing suite of tests with each release (new and regression), but will the time window to validate grow by a comparable amount? I am thinking that is not practical. I have checked most validation tests in the schedule for Pilot as complete and there are still a fair number of partials, but I am not sure yet how to carry regression testing of old tickets into the next wave. This looks like it will lead to fairly tedious maintenance of the Jira tickets and either a need to incrementally increase validation time in each wave or a need to find a way to compress manual regression tests.
I am making the assumption that the state of the regression tests is captured in the Jira status, and I have a few questions.
If manual regression testing is not comprehensive, how can the Jira test status and the schedule accurately reflect that?
If previously validated tickets have a change in status to reflect a failed regression test, then at any given moment during a validation period the overall status reflects only the state of known validation for that deployment; there may be unknowns in regression tests that were not run. This is OK, but can we establish a mechanism to easily look at history to see trends in validation for long-term metrics? For example: Are there patterns to the regression test failures? Is the overall validation state making the progress it should, or do we have a growing number of partials, which may be an indication that technical debt is not under control, etc.?