
GenoEx-GDE User’s manual v.1.2

part 3 - API Manual (including the use of the gxapi.py program).

See also part 1 in GDE_user_manual and part 2 in GDE_gxprep_manual. This part assumes that those previous parts have already been read and understood.

The gxapi.py support program, maintained and distributed by the Interbull Centre, provides easy access to the API for upload and download of the 706 and 711 files associated with the GenoEx-GDE database, and is an easy way to get started with using the API. For those who can read the Python code it is written in, it also serves as an additional source of detailed documentation of the API.

This manual describes each of the calls of the API along with the usage of the gxapi.py program. The descriptions are organized into four sections: the first provides an overview of, and some general information about, the API; the middle two sections focus on the main uses of the API; and the last section focuses on the gxapi.py program.

Section 1, overview and general information

The API is provided as an alternative way to access the functionality available via the https://genoex.org/ web site interface, and is implemented as POST calls on the same site. All operations share a basic structure: each call requires arguments in JSON format, split into parameters and auth, both of which are key/value mappings.
The auth part always contains the keys username and pw, whose values should be your registered email address and associated password.
The parameters part contains different keys depending on the call, but always includes, at least, version information.
An example call via the curl program looks like (in one long command):
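Such a call can be sketched as follows (the call path gde_call is a placeholder for an actual call name, not a confirmed endpoint; the version value 220805 matches the API version this manual documents):

```shell
curl -s https://genoex.org/gde_call \
     -F 'parameters={"version": "220805"}' \
     -F 'auth={"username": "username@company.com", "pw": "test"}'
```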

Where the username@company.com and test strings would need to be substituted with your registered email address and associated password. This represents the common basic structure of every call to the API, although most calls require additional parameters.

The data returned is a JSON encoded data structure containing, at a minimum, the keys status and status_message. If status has the value true, then an additional key named return_values is also provided.

The details below are up-to-date with the 220805 version of the API (and gxapi.py).

The API is in large part asynchronous, i.e. an operation is first initiated, and the user then needs to periodically poll for the status of that operation until it terminates, either successfully or with a failure. This mode of operation is needed to avoid the timeouts inherent in typical implementations of the HTTP protocol for long-running operations.

The return value of every call is a JSON data structure looking, at the top level, like:
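Reconstructed from the key names given in the surrounding text:

```
{
    "status": true,
    "status_message": "...",
    "return_values": {...}
}
```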

Where the "..." would be a set of key/value pairs that vary between calls.
Whenever the value of "status" is false, the value of "status_message" reports the error message. Furthermore, even if the value of "status" is true, the value of "return_values" should still be inspected for possible error messages (specifically, the keys "error" and/or "error_list") before retrieving the real return values.
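As a sketch of this checking, assuming a reply already captured as a JSON string (the sample reply and its job_id value here are illustrative only):

```shell
# Illustrative reply; a real one comes back from an API call.
REPLY='{"status": true, "status_message": "OK", "return_values": {"job_id": "abc"}}'
JOB_ID=$(echo "$REPLY" | python3 -c '
import json, sys
reply = json.load(sys.stdin)
# A false status means the call failed; the reason is in status_message.
if not reply["status"]:
    sys.exit("call failed: " + reply.get("status_message", ""))
rv = reply["return_values"]
# Even with a true status, check for error/error_list inside return_values.
for key in ("error", "error_list"):
    if rv.get(key):
        sys.exit("call reported: %s" % rv[key])
print(rv["job_id"])
')
echo "$JOB_ID"
```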

The following two sections focus on the primary functionalities provided: upload and download of 706/711 files.

Section 2, upload of 706/711 files

This is a two-step operation: a submit call (once), followed by intermittently (once per minute or so) polling the status of that submission until a terminating state is reached.

An example of a submit call via the curl program looks like:
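A sketch of such a submission (the call path gde_submit and the form-field names file706/file711 are assumptions for illustration; the paths after @ are examples):

```shell
curl -s https://genoex.org/gde_submit \
     -F 'parameters={"version": "220805"}' \
     -F 'auth={"username": "username@company.com", "pw": "test"}' \
     -F 'file706=@/data/mydir/genotypes.706;type=text/plain' \
     -F 'file711=@/data/mydir/genotypes.711;type=text/plain'
```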

As in all these examples, the username@company.com and test strings would need to be substituted with your registered email address and associated password before running this.
In addition, the paths and filenames specified (i.e. the parts between @ and ; inside the JSON strings) need to be adapted to your own situation.
Note that a single backslash at the end of a line is just a way to show that the single command continues on the next line.

This example shows how to upload a 706 file and the associated 711 file in one go; if only one of these file types is to be uploaded, simply remove the -F switch, and the associated JSON string, related to the file you are not going to upload.

The above submission call will return a JSON data structure containing, if successful, the job_id assigned to this submission:
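If successful, the reply looks like this (reconstructed sketch; the job_id shown is an example value):

```
{
    "status": true,
    "status_message": "...",
    "return_values": {"job_id": "9be6c0bf-de9f-4951-b9e1-27217ec1e0c4"}
}
```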

Note that in all calls, if the key status has a false value, then the error message is found in status_message. Even if status is true, there may still be errors described inside the return_values data structure.

The second step, polling for status, is accomplished via a call like:
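A sketch of such a status call (the call path gde_job_status, and the exact placement of the job_id parameter, are assumptions):

```shell
curl -s https://genoex.org/gde_job_status \
     -F 'parameters={"version": "220805", "job_id": "9be6c0bf-de9f-4951-b9e1-27217ec1e0c4"}' \
     -F 'auth={"username": "username@company.com", "pw": "test"}'
```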

The 9be6c0bf-de9f-4951-b9e1-27217ec1e0c4 string needs to be replaced with the value of the job_id key provided in the return data structure of the submit call above.

This last call is then repeated intermittently, with no change, until a job_status of either "FINISHED" or "FAILED" is reached and returned in a JSON data structure:
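For example (reconstructed sketch; the nesting of job_status inside return_values is an assumption):

```
{
    "status": true,
    "status_message": "...",
    "return_values": {"job_status": "FINISHED"}
}
```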

Section 3, download of 706/711 files

Download operations are a bit different from upload in that 711 files are downloaded in synchronous mode, while 706 files are downloaded in asynchronous mode, similar to upload.

In addition, there is an optional preliminary step to list all the available values to choose from when selecting the parameter values to provide in the download operation.
You may want to redirect the output to a file (see params.log in the command line of the example) to have the results handy, and refresh this file from time to time by repeating this operation:
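A sketch of this call (the call name gde_get_parameters is the one referenced in section 4; the path prefix is an assumption):

```shell
curl -s https://genoex.org/gde_get_parameters \
     -F 'parameters={"version": "220805"}' \
     -F 'auth={"username": "username@company.com", "pw": "test"}' \
     > params.log
```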

This is a synchronous operation and hence a single step is sufficient.

The return_values data structure in the reply will include keys: breeds, countries, orgs, gender and arrays. The value of each key is a list of strings to choose from when specifying the corresponding parameter in calls below. This data roughly corresponds to the data shown in the download dialog of the web browser interface.

Download 706 files

This is a three-step operation: an extraction call (once), followed by intermittently (every 30 seconds or so) polling the status of that extraction until a terminating state is reached, and finally, if the status of the extraction is "FINISHED", downloading the resulting assembled zip file.

The extraction call is where the specification for what data to download is provided.
The allowed values for different parts of the specification are:

Example call:
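A sketch (the call path gde_extract is an assumption; the parameter keys follow the lists returned by gde_get_parameters, and the gender value "M" is an illustrative guess):

```shell
curl -s https://genoex.org/gde_extract \
     -F 'parameters={"version": "220805", "breeds": ["BSW"], "countries": [], "arrays": [], "orgs": [], "gender": "M", "quality_criteria": null}' \
     -F 'auth={"username": "username@company.com", "pw": "test"}'
```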

Note that in this example, the values of keys "countries", "arrays" and "orgs" are specified as empty lists. This means "all values included".
The value of "quality_criteria" is null, which also means "anything goes": the results of the quality checks are ignored, i.e. all genotypes are considered for extraction.

This extraction call will return a JSON data structure containing, if successful, the job_id assigned to this submission:

The second step, intermittent polling for status, is performed identically to how it is done for the upload, except that the job_id is extracted from the reply of the extraction call.
See section 2 step 2 "polling for status", for an example.

The third step, if the extraction was successful (i.e. polling ended with status "FINISHED"), is simply a call to download the zip file associated with the prepared extraction. Example:
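A sketch (the call path gde_download is an assumption; the job_id is the one returned by the extraction call, and -o names the local file the zip is written to):

```shell
curl -s https://genoex.org/gde_download \
     -F 'parameters={"version": "220805", "job_id": "9be6c0bf-de9f-4951-b9e1-27217ec1e0c4"}' \
     -F 'auth={"username": "username@company.com", "pw": "test"}' \
     -o extraction.zip
```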

Download 711 files

This is a single step operation which is fully specified in a single curl call:
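A sketch (the call path gde_download_711 is an assumption, as is the empty gender string; empty lists again mean "all values included"):

```shell
curl -s https://genoex.org/gde_download_711 \
     -F 'parameters={"version": "220805", "breeds": [], "countries": [], "gender": "", "arrays": []}' \
     -F 'auth={"username": "username@company.com", "pw": "test"}' \
     -o access.711
```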

The parameters "breeds", "countries", "gender" and "arrays" are used precisely as for the download of 706 files described above.

Section 4, using the gxapi.py program

The gxapi.py program is fetched from the web browser interface on the GDE -> UPLOAD page.
Note that the gxapi.py program requires a fairly recent version of python (3.7 or newer) with the requests module installed.

In the examples below, the gxapi.py program is assumed to be located in the current directory, but that is not a requirement.
If it is located in another directory, just precede gxapi.py in the examples with the path to where it is stored, i.e. use something like path-to-installdir/gxapi.py instead of gxapi.py.

An alternative (Linux only) way to install it and execute it is to put the gxapi.py file in one of the directories in your execution path, PATH, and enable the execution flag on it. In that case, the leading "python " can be removed from the examples.

To get a quick overview of how to execute it, run it with the -h switch:
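That is, assuming gxapi.py is in the current directory:

```shell
python gxapi.py -h
```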

Upload of 706/711 files

To get a quick overview of how to execute upload, run it with the -h switch:
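Assuming upload is implemented as a subcommand (an assumption based on the wording here; check the -h output above for the actual form):

```shell
python gxapi.py upload -h
```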

To upload a pair of 706/711 files in one go, simply run it like this:
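A sketch (the switch names --file706 and --file711 are hypothetical; the actual names are shown by the help output):

```shell
python gxapi.py upload --file706 genotypes.706 --file711 genotypes.711
```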

To upload only one file, either a 706 or a 711 file, just omit the argument for the other file in the example above.

Download of 706/711 files

The optional preliminary step, calling gde_get_parameters, is performed via:
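A sketch (the subcommand name get_parameters is hypothetical; the redirection matches the note that follows):

```shell
python gxapi.py get_parameters > params.log
```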

(complete with redirecting the stdout to a file, params.log, to save output for later).

To get an overview of how to execute download, run it with the -h switch:
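Assuming download is implemented as a subcommand:

```shell
python gxapi.py download -h
```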

Here, the switches are divided into groups "optional arguments" (used for both 706 and 711 files), "genotypes" (used for 706 files only) and "access" (used for 711 files only):

optional arguments:

genotypes:

access:

Note that the switch -A selects whether a 711 file (if present) or a 706 file (if omitted) is downloaded.
At the end of the help output, a couple of small explicit examples are shown; here follow a couple more.

For example, to download a 706 file containing the best genotypes of BSW bulls, regardless of country, array, organization, date-of-upload or quality status, execute:
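A sketch (-q '' is explained just below; the breed and gender switch names -b and -g are hypothetical and should be checked against the -h output):

```shell
python gxapi.py download -b BSW -g M -q ''
```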

Adding the switch --all would remove the limitation of downloading only the best genotype of each animal. To do the same download but include only genotypes that pass all quality checks, omit -q '' from the above command. To further limit the data downloaded, add switches for breeds, countries, arrays, organizations and/or dates, and either replace the empty string after -q with a suitable specification (e.g. pedigree,call_rate) or omit -q and its associated string completely (which is the same as specifying -q frequency,pedigree,call_rate).

Another example, downloading a 711 file for all animals available:
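A sketch, using the -A switch to select a 711 file and no filtering switches so that all animals are included:

```shell
python gxapi.py download -A
```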