Release 2.2

Release Date -  

GDM Loader download - Rel2.2_LoaderUI


Software QA Test Cases

  • Rel 2.2 - Extractor-props_IDs-TestCases.xlsx (Microsoft Excel Spreadsheet), modified Apr 27, 2020 by Deb Weigand

  • Rel 2.2 - Tetraploid-TestCases.xlsx (Microsoft Excel Spreadsheet), modified Apr 27, 2020 by Deb Weigand

  • Rel 2.2 - TimeScope_MarkerDelete-TestCases.xlsx (Microsoft Excel Spreadsheet), modified Apr 27, 2020 by Deb Weigand

New Features



GDM-440

Users would like to be able to extract the meta information they have loaded into GDM, both system-defined and custom.


In addition, there is a need for IDs to be extracted for various entities.

  • For sample.file:

    • Props (both system and user defined)

      • germplasm props

      • dnasample props

      • project props

    • IDs:

      • germplasm_id

      • dnasample_id

      • dnarun_id

  • For marker.file:

    • marker props (both system and user defined)

    • IDs:

      • marker_id

      • linkage_group_id

  • Headers for props fields in the files should be prefixed with the main entity table name; e.g., trial_name in dnasample props should have the header "dnasample_trial_name" in sample.file (see the sketch after this list)

  • Dataset.hmp.txt will contain all the props fields, both system and user defined
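
To make the header-prefixing rule above concrete, here is a minimal sketch (not the actual extractor code); the prop names used in it are examples only.

```python
# Minimal sketch of the header-prefixing rule for props columns.
# The prop names used here (trial_name, plate_name, heterotic_group) are
# examples only; the real extractor derives them from the system-defined
# and user-defined props stored in GDM.

def prefixed_headers(entity: str, prop_names: list[str]) -> list[str]:
    """Prefix each prop column header with its main entity table name."""
    return [f"{entity}_{prop}" for prop in prop_names]

print(prefixed_headers("dnasample", ["trial_name", "plate_name"]))
# ['dnasample_trial_name', 'dnasample_plate_name']

print(prefixed_headers("germplasm", ["heterotic_group"]))
# ['germplasm_heterotic_group']
```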

GDM-444

Provide the ability for a user to upload a dataset type of 'nucleotide_4_letter' as per other data types.



    • New dataset type of 'nucleotide_4_letter', available in the LoaderUI dropdown list on the 'Create' dataset page

    • Dataset type of 'nucleotide_4_letter' added to the cv table with cvgroup_id = 1

    • Contains exactly 4 elements

    • New type transformation in Digest/Extract, similar to 'nucleotide_2_letter'

  Acceptance Criteria:

    • querying the database with select * from cv where cvgroup_id = 1; returns 'nucleotide_4_letter' in the cv table (a sketch of an automated check follows this list)

    • 'nucleotide_4_letter' is selectable in the LoaderUI dropdown list on the 'Create' dataset page

    • I can successfully load a matrix with the allowed alleles and separators as described above

    • any attempted load with alleles other than those listed above will fail

    • any attempted load with separators other than those listed above will fail

    • ? has been added to the missingIndicators.txt file
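
A hedged sketch of how the first acceptance criterion could be checked automatically is shown below; it assumes a PostgreSQL backend, the psycopg2 driver, and that the cv name lives in a column called term, none of which are stated on this page.

```python
# Sketch of an automated check for the first acceptance criterion above:
# 'nucleotide_4_letter' must appear in the cv table for cvgroup_id = 1.
# Assumes a PostgreSQL backend reachable via psycopg2, and assumes the cv
# name is stored in a column called "term"; adjust to the real schema.
import psycopg2

def nucleotide_4_letter_registered(conn_str: str) -> bool:
    with psycopg2.connect(conn_str) as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT term FROM cv WHERE cvgroup_id = 1;")
            terms = {row[0] for row in cur.fetchall()}
    return "nucleotide_4_letter" in terms

if __name__ == "__main__":
    # Placeholder connection string; replace with the environment's settings.
    print(nucleotide_4_letter_registered("dbname=gdm user=gdm_reader host=localhost"))
```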

GDM-86

The ability to delete markers, marker groups, and linkage groups in TimeScope

  • User Interface Specifications:

    For the markers tab, the right (content) panel will contain:

    1. Filtering controls (a query sketch follows this list):

      1. marker_id (range, ex: 100-210)

        1. OR marker.names list (text area: line-separated list)

      2. Platform

    2. A submit button to query the database

    3. A clear button to clear the current filters and result set

    4. A table listing the data resulting from the filters

      • This table will provide pagination for optimal performance

      • This table will provide a capability to select rows individually

      • And a select all button to select all rows

    5. A delete button to delete the rows selected

    6. For the result table: every column that displays an ID should have a column next to it displaying the corresponding name. Columns should be sortable.
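
As a rough illustration of how the filtering controls above could translate into a database query, here is a minimal sketch. The table and column names (marker, marker_id, name, platform_id) are assumptions for illustration, not the actual TimeScope schema or code.

```python
# Sketch of how the filtering controls above could be turned into a
# parameterized query. Table and column names (marker, marker_id, name,
# platform_id) are assumptions for illustration, not the TimeScope schema.

def build_marker_filter(id_range=None, names=None, platform_id=None):
    """Return (sql, params) for the markers-tab filter.

    id_range:    (low, high) tuple, e.g. (100, 210)
    names:       list of marker names from the line-separated text area
    platform_id: the selected platform
    """
    clauses, params = [], []
    if id_range:
        clauses.append("marker_id BETWEEN %s AND %s")
        params.extend(id_range)
    elif names:  # the spec treats the names list as an alternative to the ID range
        clauses.append("name = ANY(%s)")
        params.append(list(names))
    if platform_id is not None:
        clauses.append("platform_id = %s")
        params.append(platform_id)
    where = " AND ".join(clauses) if clauses else "TRUE"
    return f"SELECT * FROM marker WHERE {where}", params

# Example: markers 100-210 on platform 3.
# sql, params = build_marker_filter(id_range=(100, 210), platform_id=3)
```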

Functional Specifications:

Upon selecting a set of markers from the result table and clicking the delete button, the system will:

  1. Prompt the user to confirm what will be deleted (basic yes-no prompt window with a list of all the datasets that will be deleted)

  2. PREREQUISITE CHECK: markers may only be deleted IF they are not being used in any dataset AND not in any marker_groups (see the sketch after this list). If there are datasets or marker_groups using those markers:

    1. Provide a window/prompt/page that displays the list of dataset references for the markers, and do not allow deletion if there are references. Basically, you have to delete the datasets first.

      1. A message that essentially displays: "Marker 1 is being used on dataset A, B, and C. Marker 2 is being used on dataset D and E. Please delete those datasets first."

      2. Presentation of this message and list of datasets is up to the developer. If it's more performant and user-friendly to have a separate page with a table in it (in case the list is big) then do it that way.

    2. Provide a window/prompt/page that displays the list of marker_groups references for the markers, and do not allow deletion if there are references. Basically, you have to update the marker_groups first.

      1. A message that essentially displays: "Marker 1 is being used on marker_group A, B, and C. Marker 2 is being used on marker_group D and E. Please update those marker groups first using the loaderUI."

      2. Presentation of this message and list of marker_groups is up to the developer. If it's more performant and user-friendly to have a separate page with a table in it (in case the list is big) then do it that way.

  3. If the user clicks yes, the system will then delete the following, in this order (details on how to delete them are given below):

    • Marker_linkage_group rows for the list of markers being deleted

    • Marker rows

  4. If the user clicks no, the operation will simply abort.
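
A minimal sketch of the prerequisite check and the ordered delete described above is given below, assuming a PostgreSQL backend accessed via psycopg2; the table and column names (dataset_marker, marker_group_marker, marker_linkage_group, marker) are illustrative assumptions, not necessarily the real GDM schema.

```python
# Sketch of the prerequisite check and the ordered delete described above.
# Assumes a PostgreSQL backend via psycopg2. The table and column names
# (dataset_marker, marker_group_marker, marker_linkage_group, marker) are
# illustrative assumptions, not necessarily the real GDM schema.
import psycopg2

def delete_markers(conn, marker_ids):
    """marker_ids: list of marker IDs selected in the result table."""
    with conn.cursor() as cur:
        # Prerequisite: markers must not be referenced by any dataset.
        cur.execute(
            "SELECT marker_id, dataset_id FROM dataset_marker WHERE marker_id = ANY(%s)",
            (marker_ids,),
        )
        dataset_refs = cur.fetchall()
        # Prerequisite: markers must not be referenced by any marker group.
        cur.execute(
            "SELECT marker_id, marker_group_id FROM marker_group_marker WHERE marker_id = ANY(%s)",
            (marker_ids,),
        )
        group_refs = cur.fetchall()
        if dataset_refs or group_refs:
            # Caller shows the "delete those datasets / update those marker
            # groups first" prompts and aborts the deletion.
            return {"deleted": 0, "dataset_refs": dataset_refs, "marker_group_refs": group_refs}
        # Delete in the order given by the spec: marker_linkage_group rows
        # for the selected markers first, then the marker rows themselves.
        cur.execute("DELETE FROM marker_linkage_group WHERE marker_id = ANY(%s)", (marker_ids,))
        cur.execute("DELETE FROM marker WHERE marker_id = ANY(%s)", (marker_ids,))
        deleted = cur.rowcount
    conn.commit()
    return {"deleted": deleted, "dataset_refs": [], "marker_group_refs": []}
```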

Every time a deletion is made, show:

Prompt Messages:

A warning message box will be shown that clearly indicates that the operation is final and the data will be deleted, providing quick statistics on how many rows will be affected (ex. Are you sure you want to delete 102321 markers?). The user is then given the ability to cancel or go through with the operation.

Result Report Page:

Upon completion of deletion, a summary page of what was deleted will be displayed. This will contain:

  • Total number of rows deleted

  • Filtering criteria

  • Deletion duration

Add a footer: one or two sentences that warn the user to take regular backups, or at least one backup, before using this tool, to allow for recoverability in case of user mistakes.

GDM-461

Web app to extract data from GDM, preprocess it, and generate a Flapjack project file to be used in the Flapjack application for pedigree verification.

  • User will be presented with a list of sample meta column names they can use for splitting 
  • There will be a selection box where the user can select one or more options 
  • There is a set list in the leftmost list box  
    • The user can select one or more items in the leftmost list and push to the rightmost list 
    • Once items have been moved to the rightmost list the user can select and move an item back to the leftmost list 
  • The list selected and in the rightmost list box can be reordered 
  • The order of the list will be used to split the data 
  • The list contains the following options for splitting (see the sketch after this list):
    • dnasample_sample_group
    • dnasample_sample_group_cycle
    • germplasm_par1
    • germplasm_par2
    • germplasm_pedigree
    • germplasm_type

GDM-571 - Deploy Haplo Tool and DArT View with Marker Portal
GDM-570
GDM-569

GDM-40

Automated back-end regression testing (BeRT) framework so that test scenarios can be created, added, and run to ensure the health of the GDM code

  • Load:
    • create the entities needed
      • for samples, need to create
        • PI
        • project
        • experiment
        • platform
        • protocol
        • vendor
        • vendor-protocol
      • for markers, need to create
        • platform
        • mapset
        • marker group(s), if needed in subsequent extract
      • for datasets, need to create analyses
        • calling, and any other(s)
    • files to be provided for any load scenario
      • input file
      • json file
    • Loads must go through Data Validation
  • Extract
    • appropriate loads need to happen first 
    • files to be provided for extract scenario(s)
    • for Extract by Sample -
      • list of entities in .txt file format (germplasm names, external codes, or dnasample names)
    • for Extract by Marker - 
      • list of marker names in .txt file format, or
      • name of marker group
        • marker group will need to be created first
  • type of test - positive or negative
    • and whether comparisons are needed at the end
  • specify location (see the scenario sketch after this list)
    • host needs to be dynamic
    • crop needs to be dynamic 
    • input files
    • output files if comparison needs to be done
  • No emails will be sent
  • view results to see which tests have passed and which tests have failed
    • for failed tests, need
      • access to logs
      • access to all artifacts
  • if a comparison is needed
    • Status: Failed/Passed

      • Success Report

        • # of records

        • # of columns

        • Execution time

    • Failed Report:

      • Status: Failed/Passed

        • for each failure:

          • failure type

          • Record #

          • column #

    • Failure Types:

      • Data Validation failure

      • Genotype mismatch

  • Once a test has completed
    • for load and extract failures, copy
      • logs
      • input files
      • instruction file
      • all the files in the digest and/or extract folder
      • to a directory with the scenario name to be used for debugging purposes
    • All entities created by the test need to be removed from the DB
  • Documentation
    • how to use it to test branches in individual environments
    • how to add additional test cases
    • how to integrate into Bamboo
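
To tie the requirements above together, here is a minimal sketch of what a scenario definition could look like; the Scenario class, its field names, and the example values are assumptions, not the actual BeRT implementation.

```python
# Sketch of how a BeRT scenario might be described, based on the requirement
# list above. The Scenario class and its field names are illustrative
# assumptions, not the actual BeRT implementation.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    kind: str                      # "load" or "extract"
    positive: bool = True          # positive or negative test
    compare: bool = False          # whether output comparison is needed at the end
    host: str = ""                 # dynamic: supplied per environment
    crop: str = ""                 # dynamic: supplied per crop database
    input_files: list = field(default_factory=list)
    instruction_file: str = ""     # json file accompanying a load
    expected_files: list = field(default_factory=list)  # only used when compare is True

# Example: an extract-by-marker scenario compared against expected output.
extract_by_marker_group = Scenario(
    name="extract_by_marker_group",
    kind="extract",
    compare=True,
    host="gdm-test.example.org",
    crop="maize",
    input_files=["marker_names.txt"],
    expected_files=["expected_dataset.hmp.txt"],
)
```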

Significant Bug Fixes
