...

  • Load:
    • create the entities needed
      • for samples, need to create
        • PI
        • project
        • experiment
        • platform
        • protocol
        • vendor
        • vendor-protocol
      • for markers, need to create
        • platform
        • mapset
        • marker group(s), if needed in a subsequent extract
      • for datasets, need to create analyses
        • the calling analysis, and any other(s) required
    • files to be provided for any load scenario (see the load scenario sketch after this list)
      • input file
      • JSON instruction file
    • Loads must go through Data Validation
  • Extract
    • appropriate loads need to happen first
    • files to be provided for extract scenario(s) (see the extract list sketch after this list)
    • for Extract by Sample:
      • list of entities in .txt file format (germplasm names, external codes, or dnasample names)
    • for Extract by Marker:
      • list of marker names in .txt file format, or
      • name of a marker group
        • the marker group will need to be created first
  • type of test: positive or negative
    • and whether comparisons are needed at the end
  • specify locations (see the configuration sketch after this list)
    • host needs to be dynamic
    • crop needs to be dynamic
    • input files
    • output files, if a comparison needs to be done
  • No emails will be sent
  • view results to see which tests have passed and which tests have failed
    • for failed tests, need
      • access to logs
      • access to all artifacts
  • if a comparison is needed (see the comparison report sketch after this list)
    • Status: Failed/Passed
      • Success Report
        • # of records
        • # of columns
        • Execution time
    • Failed Report:
      • Status: Failed/Passed
        • for each failure:
          • failure type
          • record #
          • column #
    • Failure Types:
      • Data Validation failure
      • Genotype mismatch

  • Once a test has completed
    • for load and extract failures, copy the following to a directory named after the scenario, to be used for debugging (see the cleanup sketch after this list):
      • logs
      • input files
      • instruction file
      • all the files in the digest and/or extract folder
    • All entities created by the test need to be removed from the DB
  • Documentation
    • how to use it to test branches in individual environments
    • how to add additional test cases
    • how to integrate into Bamboo
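
Load scenario sketch. The load requirements above can be captured in a small scenario definition. This is a minimal sketch, assuming Python for the harness; the field names (name, entities, input_file, instruction_file), the paths, and the overall shape are placeholders for discussion, not the actual GOBii instruction-file schema.

```python
import json
from pathlib import Path

# Hypothetical description of a sample load: the entities the scenario must
# create and the two files (input file + JSON instruction file) the loader
# receives. Every key and path below is a placeholder, not the real schema.
load_scenario = {
    "name": "load_samples_basic",
    "entities": ["PI", "project", "experiment", "platform",
                 "protocol", "vendor", "vendor-protocol"],
    "input_file": "data/load_samples_basic.txt",
    "instruction_file": "data/load_samples_basic.json",
    "requires_data_validation": True,
}

def write_scenario(scenario: dict, out_dir: str = "scenarios") -> Path:
    """Write the scenario definition to disk so a runner can pick it up."""
    out = Path(out_dir) / (scenario["name"] + ".json")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(scenario, indent=2))
    return out

if __name__ == "__main__":
    print("wrote", write_scenario(load_scenario))
```

Keeping each scenario in its own JSON file would let new load cases be added without touching the runner itself.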
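
Extract list sketch. The .txt list files for Extract by Sample and Extract by Marker are one name per line. A minimal sketch, assuming made-up germplasm and marker names; real scenarios would derive the names from the data loaded earlier in the run.

```python
from pathlib import Path

def write_name_list(names: list[str], path: str) -> Path:
    """Write one name per line, the plain .txt list format the extract takes."""
    out = Path(path)
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text("\n".join(names) + "\n")
    return out

# Placeholder names purely for illustration.
write_name_list(["GERM_001", "GERM_002", "GERM_003"], "lists/extract_by_sample.txt")
write_name_list(["marker_A", "marker_B"], "lists/extract_by_marker.txt")
```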
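
Configuration sketch. One way to keep host and crop dynamic is to read them from the environment at run time, so the same scenario can point at any environment or crop without edits. The environment variable names (GOBII_TEST_HOST, GOBII_TEST_CROP) and every field below are assumptions, not an existing interface.

```python
import os
from dataclasses import dataclass, field

@dataclass
class ScenarioConfig:
    """Hypothetical per-run configuration; names and defaults are illustrative."""
    name: str
    test_type: str                     # "positive" or "negative"
    compare_output: bool               # run a load/extract comparison at the end?
    # Host and crop come from the environment so they stay dynamic per run.
    host: str = field(default_factory=lambda: os.environ.get("GOBII_TEST_HOST", "localhost"))
    crop: str = field(default_factory=lambda: os.environ.get("GOBII_TEST_CROP", "dev"))
    input_files: list[str] = field(default_factory=list)
    expected_output_files: list[str] = field(default_factory=list)

cfg = ScenarioConfig(
    name="extract_by_sample_basic",
    test_type="positive",
    compare_output=True,
    input_files=["lists/extract_by_sample.txt"],
    expected_output_files=["expected/extract_by_sample.txt"],
)
```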
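
Comparison report sketch. The report described above (overall status, record and column counts, execution time, and per-failure type/record/column detail) could come from a cell-by-cell diff of the loaded input against the extracted output. This sketch assumes both files are plain tab-delimited matrices in the same row and column order, which is a simplification of the real load and extract formats.

```python
import csv
import time
from dataclasses import dataclass, field

@dataclass
class Failure:
    failure_type: str   # e.g. "Genotype mismatch" or "Data Validation failure"
    record: int         # 1-based row number
    column: int         # 1-based column number

@dataclass
class ComparisonReport:
    status: str = "Passed"
    records: int = 0
    columns: int = 0
    execution_time_s: float = 0.0
    failures: list[Failure] = field(default_factory=list)

def compare_matrices(loaded_path: str, extracted_path: str) -> ComparisonReport:
    """Cell-by-cell comparison of two tab-delimited genotype matrices."""
    start = time.monotonic()
    report = ComparisonReport()
    with open(loaded_path, newline="") as a, open(extracted_path, newline="") as b:
        loaded = list(csv.reader(a, delimiter="\t"))
        extracted = list(csv.reader(b, delimiter="\t"))
    report.records = len(loaded)
    report.columns = len(loaded[0]) if loaded else 0
    for r, (row_a, row_b) in enumerate(zip(loaded, extracted), start=1):
        for c, (cell_a, cell_b) in enumerate(zip(row_a, row_b), start=1):
            if cell_a != cell_b:
                report.failures.append(Failure("Genotype mismatch", r, c))
    if len(loaded) != len(extracted) or report.failures:
        report.status = "Failed"
    report.execution_time_s = time.monotonic() - start
    return report
```

A runner could serialize this structure into the per-test report file and keep the failure list alongside the other artifacts for failed tests.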
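
Cleanup sketch. After a failed load or extract, the artifacts listed above could be copied into a directory named after the scenario, and the rows the test created could then be removed from the database. The source paths, table names, and the psycopg2/PostgreSQL connection are placeholder assumptions; the real folder layout and schema would dictate the details.

```python
import shutil
from pathlib import Path

def collect_artifacts(scenario: str, sources: list[str], dest_root: str = "debug") -> Path:
    """Copy logs, input files, the instruction file, and digest/extract folders
    into a per-scenario directory for later debugging. Source paths are
    placeholders for wherever the real job writes its files."""
    dest = Path(dest_root) / scenario
    dest.mkdir(parents=True, exist_ok=True)
    for src in map(Path, sources):
        if src.is_dir():
            shutil.copytree(src, dest / src.name, dirs_exist_ok=True)
        elif src.is_file():
            shutil.copy2(src, dest)
    return dest

def remove_created_entities(conn, created: dict[str, list[int]]) -> None:
    """Delete the rows the test created, table by table. Table names and the
    'id' column are placeholders; the real schema would dictate these."""
    from psycopg2 import sql  # assumes a PostgreSQL backend with psycopg2 installed
    with conn.cursor() as cur:
        for table, ids in created.items():
            cur.execute(
                sql.SQL("DELETE FROM {} WHERE id = ANY(%s)").format(sql.Identifier(table)),
                (ids,),
            )
    conn.commit()
```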

Success metrics

Goal: Be able to compare any outputs of an extract to corresponding loaded inputs

Metrics:
  • 1a. Run five (5) Scenarios provided by Deb
  • 1b. Run Scenarios in the repo – there are 11 others
    • 1b.1. Report file that indicates the status of tests and where failures occurred
  • 1c. Demo to Team
  • 2a. Additional Scenarios to be added by Deb
  • 2b. Roy runs the Scenarios
  • 2c. Integrate with Bamboo

...

User interaction and design

Open Questions

  • Question: log files?
    Answer: not in gobii_bundle, nor in the dev folder
  • Question: input files?
    Answer: not in gobii_bundle/crops/dev/files
  • Question: instruction file?
    Answer: not in gobii_bundle/crops/dev/loader
  • Question: should we be deleting these artifacts?
    Answer: I'm thinking we should keep these for debugging purposes when a job fails
  • Question: digest files?
    Answer: they are in place - great! please keep these
  • Question: what should be deleted?
    Answer: I think table entries should be deleted, but not artifacts from a job

Out of Scope