TranQL is a query language for interactive exploration of federated knowledge graphs.
Previous work focused on a workflow language for automating biomedical queries. We've also prototyped similar workflows using the Common Workflow Language (CWL).
Workflow languages generally provide capabilities to support large scale, fault tolerant, reproducible, automated computing. These are critical features where processes that have been refined by teams over time must be automated and shared. Common characteristics of these systems include:
- The ability to manage multiple, interacting, long-running third-party programs (e.g., genomic sequence alignment)
- Infrastructure level support for reproducibility via highly technical artifacts like Docker containers.
- Complex syntax in formats like YAML, which are generally unfamiliar to clinical data and medical experts.
While these features are essential for some applications, they are neither targeted at nor well suited to:
- Iterative, interactive exploration of large data sets.
- Accessibility to communities like clinical data specialists and medical experts.
- A programmatic interface between applications and a data source.
The ability to explore large data sets with queries is extremely familiar to clinical data experts and many medical informatics specialists. To make semantic databases more accessible to these communities, we designed TranQL to share structural and syntactic similarities with the most familiar and widely used languages for interactive distributed data analytics, while providing an interface to heterogeneous semantic graph services that existing environments don't address.
In particular, the Structured Query Language (SQL) is among the most pervasive query languages in use, and it is vital to the work of clinical data specialists. TranQL borrows concepts from SQL and elements of graph semantics from query languages like Cypher.
The W3C Semantic Web stack has the most robust and mature toolkit in this space, built around technologies including RDF and SPARQL. However, widespread adoption of this stack has not approached the levels of technologies like SQL, REST, and OpenAPI. Also, the W3C stack envisions a homogeneous RDF/SPARQL environment; we sought something able to embrace more heterogeneous data sources.
On a final contextual note, we've also evaluated a GraphQL interface to these federated data services. GraphQL, its name notwithstanding, provides few constructs that let the user think explicitly in terms of a knowledge graph, compared to Cypher or SPARQL. And, again, its query syntax and approach are highly unfamiliar to the clinical data and medical communities.
TranQL is a classic interpreter with a lexical analyzer and parser that produce a token stream. The tokens are interpreted to build an abstract syntax tree modeling the program's constructs, which are then executed sequentially. The grammar supports three types of statements:
- `SET`: Assign a value to a variable.

  ```
  SET <variable> = <value>
  ```

- `SELECT`: Select a graph described by a pattern from a service, given various constraints. Graph patterns are expressed using concepts from the biolink-model.

  ```
  SELECT <graph> FROM <service> [WHERE <constraint> [AND <constraint>]*] [[SET <jsonpath> AS <var>] | [SET <var>]]*
  ```

- `CREATE GRAPH`: Create a graph at a service.

  ```
  CREATE GRAPH <var> AT <service> AS <name>
  ```
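To make the interpreter pipeline concrete, here is a minimal sketch of tokenizing and executing a `SET` statement. This is an illustration of the general lex/interpret pattern, not TranQL's actual implementation; the real parser builds a full abstract syntax tree before execution.

```python
import re

def tokenize(program):
    """Split a TranQL-like statement into tokens (illustrative only).

    Quoted strings are kept whole; '=' and bare words become
    individual tokens.
    """
    return re.findall(r"'[^']*'|=|\S+", program)

def interpret(tokens, env):
    """Execute a SET <variable> = <value> statement against an
    environment. A real interpreter would dispatch on statement type
    after building an AST; this handles only the SET form."""
    if tokens[0].upper() == "SET" and tokens[2] == "=":
        env[tokens[1]] = tokens[3].strip("'")
    return env

env = interpret(tokenize("SET disease = 'asthma'"), {})
print(env)  # {'disease': 'asthma'}
```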
The Translator standard graph API is a protocol for exchanging graphs with federated data sources. TranQL works with endpoints supporting this standard.
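As a sketch of what such an exchange looks like, the snippet below builds a question graph in the general style of the Translator standard API. The exact field names vary by API version and are illustrative here; consult the spec for the precise wire format.

```python
import json

# Illustrative question graph: "find chemical substances related to a
# disease". Field names approximate the Translator standard API and
# should not be taken as the exact schema.
question = {
    "question_graph": {
        "nodes": [
            {"id": "disease", "type": "disease", "curie": "MONDO:0004979"},
            {"id": "chemical", "type": "chemical_substance"},
        ],
        "edges": [
            {"id": "e1", "source_id": "disease", "target_id": "chemical"},
        ],
    }
}

# A client like TranQL would POST this JSON to an endpoint implementing
# the standard graph API and receive a knowledge graph in response.
print(json.dumps(question, indent=2))
```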
The TranQL Backplane is a collection of endpoints supporting the standard API which implement reusable question answering services, or modules.
Backplane modules support a simplified syntax in the language for greater readability.
The example program begins with a multi-line comment describing its intent:
The first statement selects a graph pattern connecting disease nodes to chemical substances, both biolink-model concepts.
The from clause specifies the path to a Backplane endpoint. Because it begins with a "/", TranQL prepends the protocol, host, and port of a configured TranQL Backplane service. The service can be any endpoint implementing the standard graph endpoint interface.
The first where constraint parameterizes the disease question node sent to the service. In this case, it resolves an English word into ontology identifiers using the bionames API. If curies are supplied, they are used directly. The latter approach is more robust and specific; the former may be more helpful for quick interactive exploration, and may serve as a bridge to an NLP interface.
The rest of the constraints, because they do not map to graph query elements, are transmitted to the service as options
in the standard protocol. The service being invoked validates and interprets the options. In the case above, the endpoint passes the options along to define a cohort in the ICEES clinical reasoner.
The final part of the select statement is a set statement which uses a JSONPath query to extract chemical identifiers from the result and store them as a variable.
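The extraction step can be illustrated in plain Python. The answer structure and identifiers below are invented for the example, and the list comprehension stands in for a JSONPath filter expression:

```python
# Illustrative answer graph; the node shapes and identifiers are made
# up for this example, not the exact wire format.
answer = {
    "knowledge_graph": {
        "nodes": [
            {"id": "MONDO:0004979", "type": "disease"},
            {"id": "CHEBI:15365", "type": "chemical_substance"},
            {"id": "CHEBI:28177", "type": "chemical_substance"},
        ]
    }
}

# Roughly equivalent to a JSONPath query like:
#   $.knowledge_graph.nodes[?(@.type == 'chemical_substance')].id
chemical_ids = [
    node["id"]
    for node in answer["knowledge_graph"]["nodes"]
    if node["type"] == "chemical_substance"
]
print(chemical_ids)  # ['CHEBI:15365', 'CHEBI:28177']
```

The resulting list is what a `SET <jsonpath> AS <var>` clause binds to a variable for use in later statements.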
The second select statement sends a different graph query to the Gamma reasoner and parameterizes the chemical_substance concept with identifiers from the first, clinical step. The resulting graph is saved as a variable.
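Putting the pieces described above together, a hypothetical program might look like the following. The service paths, constraint names, options, and JSONPath expression are all illustrative, not the exact values used by the Backplane:

```
-- Hypothetical sketch; paths, constraints, and options are illustrative.
SELECT disease->chemical_substance
  FROM "/clinical/cohort/disease_to_chemical_exposure"
 WHERE disease = "asthma"
   AND max_p_value = "0.5"
   SET "$.knowledge_graph.nodes.[*].id" AS chemical_ids

SELECT chemical_substance->gene
  FROM "/graph/gamma/quick"
 WHERE chemical_substance = $chemical_ids
   SET answer_graph

CREATE GRAPH $answer_graph AT "/visualize/ndex" AS "answer"
```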
There is not yet a standard way to discover which graph patterns each endpoint supports; SQL, by contrast, lets users list the schema of a data source. Toward addressing this, Robokop (Gamma) provides a metadata endpoint describing its meta-knowledge-graph. The shallow hierarchy describes the kinds of transitions it supports and the data sources implementing those transitions. See the "Viewer" tab here.
The TranQL Backplane implements two standard API endpoints for visualizing a knowledge graph. One supports the UCSD NDEx network sharing platform and the other supports Gamma's answer visualization facility.
The program ends by publishing the answer set to both services.
Here's a link to the Gamma visualization for the answer.
TranQL is brand new and strictly alpha.
Requires Python >= 3.10.
```
git clone <repository>
cd tranql
pip install -r tranql/requirements.txt
bin/test
```
Run TranQL locally in a development environment. This setup uses the ./src/tranql/dev-conf.yml configuration file and the ./src/tranql/conf/dev-schema.yml file. Update those to edit your local configuration.
- Set up local environment variables:

  ```
  export REDIS_PASSWORD=<password>
  export SCHEMA_CONFIG_PATH=./src/tranql/conf/dev-schema.yml
  ```
- Look at the dev-schema.yml file. The current default is to use redis as the database.
- Install an editable build:

  ```
  pip install -e .
  ```
- Choose your server
  - Non-docker: to run a program, first start the backplane:

    ```
    cd backplane
    PYTHONPATH=$PWD/../.. python server.py
    ```

  - Docker with redis:

    ```
    docker-compose build redis
    docker-compose up redis
    ```
- Run a query:

  ```
  python main.py --source queries/workflow-5.tranql
  ```
Run TranQL in a local docker environment using docker-compose. This will use the configuration in the docker-compose.yml file.
- Set the local environment variables:

  ```
  export REDIS_PASSWORD=<password>
  export SCHEMA_CONFIG_PATH=./src/tranql/conf/redis-schema.yml
  ```
- Build and start the docker containers:

  ```
  docker-compose build
  docker-compose up
  ```
- Run a query:

  ```
  python main.py --source queries/workflow-5.tranql
  ```
To run the web app, first start the TranQL API:
```
cd tranql
PYTHONPATH=$PWD/../ python api.py
```
Then follow the instructions in web/ to start the website.
Run the interactive interpreter:

```
bin/tranql --shell
```
```
$ bin/tranql --help
usage: main.py [-h] [-d] [-c] [-b BACKPLANE] [-i] [-s SOURCE] [-o OUTPUT]

TranQL

optional arguments:
  -h, --help            show this help message and exit
  -d, --verbose         Verbose mode. (default: False)
  -c, --cache           Cache responses from backplane services?
                        (default: False)
  -b BACKPLANE, --backplane BACKPLANE
                        Backplane URL prefix (default: http://localhost:8099)
  -i, --shell           The interpreter read-eval-print-loop (REPL).
                        (default: False)
  -s SOURCE, --source SOURCE
                        The program's source file (default: None)
  -o OUTPUT, --output OUTPUT
                        Output destination (default: None)
```
- Move to the latest standard API version (0.9.0)
- Implement basic NDEx visualization connectivity
- Implement basic Gamma visualization connectivity
- Query graphs with multiple occurrences of the same concept.
- Does the standard API need to support multiple values per question-graph node? (So far, looks like no)
- Queries with bidirectional links.
- Validate query terms exist in the biolink-model.
- Predicates in queries.
- Enforce constraint identifiers are sub-types of biolink-model parent types where this info exists.
- Index previous answer via standard API "knowledge_maps" construct.
- Integrate graph metadata API.