
GenoEx-GDE User’s manual v.1.2

part 3 - API Manual (including the use of the gxapi.py program).

See also part 1 in the GDE_user_manual and part 2 in the GDE_gxprep_manual. This part assumes that those previous parts have already been read and understood.

The gxapi.py support program, maintained and distributed by the Interbull Centre, allows easy access to the API for upload and download of 706 and 711 files associated with the GenoEx-GDE database, and is provided as an easy way to get started with using the API. For those who can read the Python code it is written in, it also provides an additional source of detailed documentation of the API.

This manual describes each of the calls of the API along with the usage of the gxapi.py program. The descriptions are organized into four sections: the first provides an overview of, and some general information about, the API; the last focuses on the gxapi.py program; and the remaining two sections each focus on a different use of the API.

Section 1, overview and general information

The API is provided as an alternative way to access the functionality available via the https://genoex.org/ web site interface and is implemented as POST calls on the same site. The operations have a basic structure where each call requires arguments in JSON format, split into parameters and auth, both of which are key/value mappings.
The auth part always contains the keys username and pw, whose respective values should be your registered email address and associated password.
The parameters part contains different keys depending on the call, but always at least version.
An example call via the curl program looks like (in one long command):
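A sketch of such a call is shown below. The endpoint path (written here as ENDPOINT) and the value of the version parameter are placeholders, as the real values are not reproduced in this manual:

```shell
# One long command: a POST with a single JSON document containing the
# two sections "parameters" and "auth".  ENDPOINT is a placeholder for
# the path of the specific call, and "..." for the current API version.
curl -X POST 'https://genoex.org/ENDPOINT' -d '{"parameters": {"version": "..."}, "auth": {"username": "username@company.com", "pw": "test"}}'
```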

Naturally, the username@company.com and test strings need to be substituted with something more appropriate before running this. This shows the common basic structure of every call to the API, but most calls need additional parameters.

The data returned is a JSON-encoded data structure containing, at a minimum, the keys status and status_message; if status has the value True, then an additional key return_values is provided.

The details below are up-to-date with the 220805 version of the API (and gxapi.py).

The API is in large part asynchronous, i.e. an operation is first initiated and the user then needs to periodically poll for the status of that operation until it terminates, either successfully or with a failure. This mode of operation is needed to avoid the timeouts inherent in normal implementations of the HTTP protocol for long-running operations.

The return value of every call is a JSON data structure looking, at the top level, like:
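A sketch of that top-level shape (only the general structure is shown):

```json
{
    "status": true,
    "status_message": "some status text",
    "return_values": { ... }
}
```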

Naturally the "..." is really a set of key/value pairs, but these vary between the different calls.
When the value of "status" is false, the value of "status_message" is the error message. If the value of "status" is instead true, then the value of "return_values" should be inspected for possible error messages before retrieving the actual return values.

The following two sections focus on the primary functionality provided: upload of 706/711 files, and then download of 706/711 files.

Section 2, upload of 706/711 files

This is a two-step operation: a submit call (once), followed by intermittently (once per minute or so) polling the status of that submission until a terminating state is reached.

An example submit call via the curl program looks like:
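The shape of such a call is sketched below. The endpoint path and the exact -F field names are placeholders not confirmed by this manual; only the overall shape (the JSON parts, the @path;type file parts, and the backslash line continuations) follows from the surrounding text:

```shell
# Multipart POST: two JSON parts plus the file(s) to upload.
# The part between @ and ; in each file switch is the local path,
# which must be adapted to your own situation.
curl -X POST 'https://genoex.org/ENDPOINT' \
  -F 'parameters={"version": "..."};type=application/json' \
  -F 'auth={"username": "username@company.com", "pw": "test"};type=application/json' \
  -F 'files706=@/path/to/your.706;type=text/plain' \
  -F 'files711=@/path/to/your.711;type=text/plain'
```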

As always, the username@company.com and test strings need to be substituted with something more appropriate before running this.
In addition, the paths and filenames specified (i.e. the parts between @ and ; inside the JSON strings) need to be adapted to your own situation.
Note that a single backslash at the end of a line simply indicates that the single command continues on the next line.

This example specifies the aggregate upload of both a 706 file and the associated 711 file in one go, but if only one of these file types is to be uploaded, then simply omit the other file's -F switch and associated JSON string.

This submission call will return a JSON data structure containing, if successful, the job_id assigned to this submission:
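A sketch of a successful reply (only the general shape is shown; the job_id value is the example used in the polling step below):

```json
{
    "status": true,
    "status_message": "some status text",
    "return_values": { "job_id": "9be6c0bf-de9f-4951-b9e1-27217ec1e0c4" }
}
```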

Note that in all calls, if the key status has a False value, then the error message is found in status_message. Even if status is True, there may still be errors described inside the return_values data structure.

The second step, polling for status, is accomplished via a call like:
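A sketch of such a status call; the endpoint path and the use of job_id as a parameter key name are placeholders/assumptions:

```shell
# Poll the status of the submission identified by job_id.
curl -X POST 'https://genoex.org/ENDPOINT' -d '{"parameters": {"version": "...", "job_id": "9be6c0bf-de9f-4951-b9e1-27217ec1e0c4"}, "auth": {"username": "username@company.com", "pw": "test"}}'
```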

The 9be6c0bf-de9f-4951-b9e1-27217ec1e0c4 string needs to be replaced with the value of the job_id key provided in the return data structure of the submit call above.

This call is then repeated intermittently, with no change, until a job_status of either FINISHED or FAILED is returned in a JSON data structure:
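A sketch of the reply once the job has terminated (job_status may instead be FAILED, or an in-progress value while the job is still running):

```json
{
    "status": true,
    "status_message": "some status text",
    "return_values": { "job_status": "FINISHED" }
}
```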

Section 3, download of 706/711 files

Download operations are a bit different from upload: 711 files are downloaded in synchronous mode, while 706 files are downloaded in asynchronous mode similar to upload.

In addition, there is an optional preliminary step to retrieve the available values to choose from when selecting the parameter values to provide in the download operation
(you may want to redirect the output to a file {see params.log in the command line} to keep the results handy, and refresh this file from time to time by repeating this operation):
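A sketch of such a call; the endpoint path and version value are placeholders:

```shell
# Retrieve the currently available parameter values and keep them in
# params.log for later reference.
curl -X POST 'https://genoex.org/ENDPOINT' -d '{"parameters": {"version": "..."}, "auth": {"username": "username@company.com", "pw": "test"}}' > params.log
```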

This is a synchronous operation and hence a single step is sufficient.

The return_values data structure in the reply will include keys: breeds, countries, orgs, gender and arrays. The value of each key is a list of strings to choose from when specifying the corresponding parameter in calls below.

Download 706 files

This is a three-step operation: an extraction call (once), followed by intermittent (every 15 seconds or so) polling of the status of that extraction until a terminating state is reached, and finally fetching the resulting assembled zip file.

The extraction call is where the specification for what data to download is provided.
The allowed values for different parts of the specification are:
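The lists returned by the parameter-retrieval call described earlier (breeds, countries, orgs, gender and arrays) supply the values to choose from. A sketch of an extraction call follows; the endpoint path, the breed value and any parameter key names beyond those discussed in this section are placeholders/assumptions:

```shell
# Extraction call: specify what data to download.  Empty lists and a
# null quality_criteria mean "anything goes" for that part of the
# specification.
curl -X POST 'https://genoex.org/ENDPOINT' \
  -d '{"parameters": {"version": "...",
                      "breeds": ["..."],
                      "countries": [],
                      "arrays": [],
                      "orgs": [],
                      "quality_criteria": null},
       "auth": {"username": "username@company.com", "pw": "test"}}'
```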

Note that in this example, the values of the keys "countries", "arrays" and "orgs" are specified as empty lists. This means "anything goes".
The value of "quality_criteria" is null, also meaning "anything goes": the results of the quality checks are ignored, i.e. all genotypes are considered for extraction.

This extraction call will return a JSON data structure containing, if successful, the job_id assigned to this submission:
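A sketch of a successful reply (only the general shape is shown):

```json
{
    "status": true,
    "status_message": "some status text",
    "return_values": { "job_id": "..." }
}
```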

XXX

Download 711 files

This is a single-step operation, which can be specified in a single curl call:
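A sketch of such a call; the endpoint path and the selection parameter key names are placeholders/assumptions (the selection parameters follow the same pattern as in the 706 extraction call), and the reply is redirected to a local file since this operation is synchronous:

```shell
# Synchronous download of a 711 file.
curl -X POST 'https://genoex.org/ENDPOINT' -d '{"parameters": {"version": "...", "breeds": ["..."], "countries": [], "orgs": []}, "auth": {"username": "username@company.com", "pw": "test"}}' > download.711
```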

Section 4, using the gxapi.py program