...
Page Properties

Objective | Be able to compare any outputs of an extract to corresponding loaded inputs |
---|---|
Success metrics | ... |
Assumptions | ... |
Milestones | ... |
Requirements
...
(linked Jira issue)
...
(linked Jira issue)
...
(linked Jira issue)
...
Develop a framework that allows QA to contribute test scenarios to a repository, so that the scenarios can be run as regression tests against any GDM instance.
User Story
As a member of the development team, I need an automated back-end regression testing framework so that test scenarios can be created, added to the repository, and run to verify the health of the GDM code.
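Before the detailed requirements, a minimal sketch of what a contributed scenario might look like as plain data in the repository. Every field name here is hypothetical, not an existing GDM structure; it only illustrates that load and extract scenarios could share one declarative shape.

```python
# Minimal sketch of a declarative test scenario, assuming scenarios live as
# plain data files in the repository. All field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str                 # scenario directory name; reused for debug artifacts
    kind: str                 # "load" or "extract"
    input_files: list[str]    # paths relative to the scenario directory
    expected_outputs: list[str] = field(default_factory=list)  # empty = no comparison
    positive: bool = True     # negative tests expect the job to fail

sample_load = Scenario(name="load_samples_basic",
                       kind="load",
                       input_files=["samples.tsv", "samples.json"])
```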
- Load:
  - create the entities needed (the creation order is sketched after this list)
    - for samples, need to create:
      - PI
      - project
      - experiment
      - platform
      - protocol
      - vendor
      - vendor-protocol
    - for markers, need to create:
      - platform
      - mapset
      - marker group(s), if needed in a subsequent extract
    - for datasets, need to create analyses:
      - calling, and any other(s)
  - files to be provided for any load scenario:
    - input file
    - JSON file
  - loads must go through Data Validation
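The entity-creation step has ordering constraints (a project needs a PI, an experiment needs a project, and so on). A sketch of how the framework might encode those prerequisites, assuming a hypothetical `create` callable that would wrap the GDM web services or DB layer:

```python
# Prerequisite entities per load type, in dependency order. The lists mirror
# the requirements above; the create callable is a hypothetical stand-in.
PREREQS = {
    "samples":  ["PI", "project", "experiment", "platform",
                 "protocol", "vendor", "vendor-protocol"],
    "markers":  ["platform", "mapset"],   # plus marker group(s) if a later
                                          # extract will need them
    "datasets": ["calling analysis"],     # and any other analyses
}

def create_prerequisites(load_type: str, create) -> None:
    """Create each prerequisite entity in order via the supplied callable."""
    for entity in PREREQS[load_type]:
        create(entity)

create_prerequisites("samples", create=print)  # stand-in for a real creator
```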
- Extract:
  - appropriate loads need to happen first
  - files to be provided for extract scenario(s) (input-list preparation is sketched after this list):
    - for Extract by Sample: a list of entities in .txt format (germplasm names, external codes, or dnasample names)
    - for Extract by Marker: a list of marker names in .txt format, or the name of a marker group (the marker group will need to be created first)
  - type of test: positive or negative, and whether comparisons are needed at the end
  - specify location:
    - host needs to be dynamic
    - crop needs to be dynamic
    - input files
    - output files, if a comparison needs to be done
  - No emails will be sent
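For the extract inputs, a small helper could generate the .txt lists. This sketch assumes one name per line, which is an assumption about the file layout rather than a confirmed GDM convention:

```python
# Write a one-name-per-line .txt list for Extract by Sample or Extract by
# Marker. The one-per-line layout is an assumed format.
from pathlib import Path

def write_extract_list(names: list[str], path: Path) -> Path:
    path.write_text("\n".join(names) + "\n")
    return path

write_extract_list(["GP-001", "GP-002"], Path("by_sample.txt"))    # germplasm names
write_extract_list(["mk_0001", "mk_0002"], Path("by_marker.txt"))  # marker names
```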
- view results to see which tests have passed and which tests have failed
  - for failed tests, need:
    - access to logs
    - access to all artifacts
- if a comparison is needed, produce a report (a comparison sketch follows this list):
  - Status: Failed/Passed
  - Success Report:
    - # of records
    - # of columns
    - execution time
  - Failed Report:
    - Status: Failed/Passed
    - for each failure:
      - failure type
      - record #
      - column #
  - Failure Types:
    - Data Validation failure
    - Genotype mismatch
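A minimal sketch of the comparison step, assuming the loaded input and the extracted output are tab-delimited genotype matrices of the same shape; the report fields mirror the spec above:

```python
# Compare a loaded input matrix against an extracted output matrix and build
# a report with the fields listed above. Tab-delimited input is an assumption.
import csv
import time

def compare(input_path: str, output_path: str) -> dict:
    start = time.monotonic()
    with open(input_path) as a, open(output_path) as b:
        loaded = list(csv.reader(a, delimiter="\t"))
        extracted = list(csv.reader(b, delimiter="\t"))
    failures = []
    for rec, (in_row, out_row) in enumerate(zip(loaded, extracted)):
        for col, (x, y) in enumerate(zip(in_row, out_row)):
            if x != y:
                failures.append({"failure_type": "Genotype mismatch",
                                 "record": rec, "column": col})
    return {"status": "Failed" if failures else "Passed",
            "records": len(loaded),
            "columns": len(loaded[0]) if loaded else 0,
            "execution_time_s": round(time.monotonic() - start, 3),
            "failures": failures}
```

Only genotype mismatches are detectable in this sketch; Data Validation failures would presumably be captured during the load step and folded into the same report.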
- Once a test has completed:
  - for load and extract failures, copy the following to a directory named after the scenario, to be used for debugging purposes (sketched below):
    - logs
    - input files
    - instruction file
    - all the files in the digest and/or extract folder
  - All entities created by the test need to be removed from the DB
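A sketch of the artifact-preserving step; the debug-directory layout is an assumption, and the source paths would come from the scenario's crop-specific folders (see the open questions below):

```python
# Copy a failed test's artifacts (logs, input files, instruction file,
# digest/extract folders) into debug/<scenario-name>/ for later inspection.
import shutil
from pathlib import Path

def preserve_artifacts(scenario: str, sources: list[Path],
                       debug_root: Path = Path("debug")) -> Path:
    dest = debug_root / scenario
    dest.mkdir(parents=True, exist_ok=True)
    for src in sources:
        if src.is_dir():
            shutil.copytree(src, dest / src.name, dirs_exist_ok=True)
        elif src.exists():
            shutil.copy2(src, dest / src.name)
    return dest
```

DB cleanup is the separate second step: the entities the scenario created would be deleted in reverse dependency order, leaving the copied artifacts untouched.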
- Documentation:
  - how to use it to test branches in individual environments
  - how to add additional test cases
  - how to integrate into Bamboo
Success metrics
Goal | Metric |
---|---|
Be able to compare any outputs of an extract to corresponding loaded inputs | |
1a. Run five (5) Scenarios provided by Deb | |
1b. Run Scenarios in the repo – there are 11 others | |
1b.1. Report file that indicates status of tests and where failures occurred | |
1c. Demo to Team | |
2a. Additional Scenarios to be added by Deb | |
2b. Roy run the Scenarios | |
2c. Integrate with Bamboo | |
Assumptions
Stories
(linked Jira issue)
...
User interaction and design
Open Questions
Question | Answer | Date Answered |
---|---|---|
log files? not in gobii_bundle, nor the dev folder | |
input files? not in gobii_bundle/crops/dev/files | |
instruction file? not in gobii_bundle/crops/dev/loader | |
should we be deleting these artifacts? I'm thinking we should keep these for debugging purposes when a job fails | |
digest files are in place - great! please keep these | |
what should be deleted? I think table entries should be deleted, but not artifacts from a job | |