
Target release: 2.2
Epic: GDM-40
Document status: IN PROGRESS
Document owner:
Designer: Deb Weigand
Tech lead: Luke Cook
Technical writers:
QA: Deb Weigand, Roy Petrie

Objective

Develop a framework that allows QA to contribute test scenarios to a repository that can be run as regression tests against any GDM instance.

User Story

As a member of the development team, I need an automated back-end regression testing framework so that test scenarios can be created, added, and run to ensure the health of the GDM code.

  • Load:
    • create the entities needed
      • for samples, need to create:
        • PI
        • project
        • experiment
        • platform
        • protocol
        • vendor
        • vendor-protocol
      • for markers, need to create:
        • platform
        • mapset
        • marker group(s), if needed in a subsequent extract
      • for datasets, need to create the analyses
        • calling, and any other(s)
    • files to be provided for any load scenario (see the scenario sketch after this list):
      • input file
      • JSON file
    • loads must go through Data Validation
  • Extract
    • the appropriate loads need to happen first
    • files to be provided for extract scenario(s):
    • for Extract by Sample:
      • list of entities in .txt file format (germplasm names, external codes, or dnasample names)
    • for Extract by Marker:
      • list of marker names in .txt file format, or
      • name of the marker group
        • the marker group will need to be created first
  • type of test: positive or negative
    • and whether comparisons are needed at the end
  • specify location
    • host needs to be dynamic
    • crop needs to be dynamic
    • input files
    • output files, if a comparison needs to be done
  • No emails will be sent
  • view results to see which tests passed and which failed
    • for failed tests, need:
      • access to logs
      • access to all artifacts
  • if a comparison is needed (see the comparison sketch after this list)
    • Status: Failed/Passed
    • Success Report:
      • # of records
      • # of columns
      • execution time
    • Failed Report:
      • for each failure:
        • failure type
        • record #
        • column #
    • Failure Types:
      • Data Validation failure
      • Genotype mismatch

  • Once a test has completed
    • all entities created by the test need to be removed from the DB (see the teardown sketch after this list)
  • Documentation
    • how to use the framework to test branches in individual environments
    • how to add additional test cases
    • how to integrate it into Bamboo
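
As a sketch of how these requirements might translate into a scenario repository, here is one possible Python declaration. Everything in it (the Scenario class, its field names, the host and file names) is a hypothetical illustration under the assumption of a Python-based framework, not an existing GDM API.

# A minimal sketch of a declarative scenario entry; all names are
# hypothetical illustrations, not an existing GDM API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Scenario:
    name: str                        # unique scenario id in the repository
    kind: str                        # "load" or "extract"
    host: str                        # dynamic: the GDM instance under test
    crop: str                        # dynamic: the crop database to target
    input_files: List[str]           # e.g. the input file and its JSON file
    expected_output: Optional[str] = None   # set only when a comparison is needed
    is_negative: bool = False        # positive vs. negative test
    entities: List[str] = field(default_factory=list)  # entities to create first

# a sample-load scenario would create the full entity chain before loading
sample_load = Scenario(
    name="load_samples_basic",
    kind="load",
    host="gdm-test.example.org",
    crop="dev",
    input_files=["samples.txt", "samples.json"],
    entities=["PI", "project", "experiment", "platform",
              "protocol", "vendor", "vendor-protocol"],
)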
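For the comparison step, the sketch below shows how an extracted output file could be checked against its loaded input and summarized in the report structure described above (status, record and column counts, execution time, and a 1-based record/column location for each mismatch). The tab-delimited file format and the function name are assumptions.

# A sketch of the comparison step, assuming both files are tab-delimited text.
import csv
import time

def compare_files(loaded_path: str, extracted_path: str) -> dict:
    start = time.perf_counter()
    with open(loaded_path, newline="") as f_in:
        loaded = list(csv.reader(f_in, delimiter="\t"))
    with open(extracted_path, newline="") as f_out:
        extracted = list(csv.reader(f_out, delimiter="\t"))

    failures = []
    # record and column numbers are reported 1-based, as in the Failed Report
    for rec_no, (in_row, out_row) in enumerate(zip(loaded, extracted), start=1):
        for col_no, (a, b) in enumerate(zip(in_row, out_row), start=1):
            if a != b:
                failures.append({"failure_type": "Genotype mismatch",
                                 "record": rec_no, "column": col_no})
    return {
        "status": "Failed" if failures or len(loaded) != len(extracted) else "Passed",
        "records": len(loaded),
        "columns": len(loaded[0]) if loaded else 0,
        "execution_time_s": round(time.perf_counter() - start, 3),
        "failures": failures,
    }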
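For the cleanup requirement, here is a teardown sketch assuming PostgreSQL access via psycopg2 and that each test records the ids of the rows it inserted. The table and key names are hypothetical, not the actual GDM schema; note that it removes only table entries and deliberately leaves on-disk artifacts (logs, digest files) for debugging, per the open questions below.

# A sketch of post-test cleanup; table and key names are hypothetical.
import psycopg2
from psycopg2 import sql

def remove_test_entities(conn_params: dict, created: list) -> None:
    """created is a list of (table, pk_column, ids) tuples, ordered so that
    dependent rows (e.g. vendor-protocol) come before the rows they
    reference (e.g. vendor), keeping foreign keys satisfied."""
    with psycopg2.connect(**conn_params) as conn:
        with conn.cursor() as cur:
            for table, pk_column, ids in created:
                cur.execute(
                    sql.SQL("DELETE FROM {} WHERE {} = ANY(%s)").format(
                        sql.Identifier(table), sql.Identifier(pk_column)
                    ),
                    (ids,),
                )
    # the connection context manager commits the transaction on success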

Success metrics

Goal: Be able to compare any outputs of an extract to the corresponding loaded inputs

Metrics:

1a. Run the five (5) scenarios provided by Deb
1b. Run the scenarios in the repo (there are 11 others)
  1b.1. Produce a report file that indicates the status of the tests and where failures occurred
1c. Demo to the team
2a. Additional scenarios to be added by Deb
2b. Roy runs the scenarios
2c. Integrate with Bamboo

Assumptions

Stories


User interaction and design

Open Questions

Question: log files?
Answer: not in gobii_bundle, nor the dev folder

Question: input files?
Answer: not in gobii_bundle/drops/dev/files

Question: instruction file?
Answer: not in gobii_bundle/crops/dev/loader

Question: should we be deleting these artifacts?
Answer: I'm thinking we should keep these for debugging purposes when a job fails. The digest files are in place - great! Please keep these.

Question: what should be deleted?
Answer: I think table entries should be deleted, but not artifacts from a job.

Out of Scope
