Query Privileges

Users with the querywriter role or greater (designer, admin, globaldesigner, and superuser) can create, install, and drop queries.

Any user with queryreader role or greater for a given graph can run the queries for that graph.

To implement fine-grained control over which queries can be executed by which sets of users:

  1. Group your queries into your desired privilege groups.

  2. Define a graph for each privilege group. These graphs can all have the same domain if you wish.

  3. Create your queries, assigning each to its appropriate privilege group.

"(" [parameterList] ")"
[FOR GRAPH graphName]
[RETURNS "(" baseType | accumType ")"]
[API "(" stringLiteral ")"]
[SYNTAX syntaxName]
"{" queryBody "}"
interpretQuery := INTERPRET QUERY "(" ")"
[FOR GRAPH graphName]
[SYNTAX syntaxName]
"{" queryBody "}"
parameterValueList := parameterValue ["," parameterValue]*
parameterValue := parameterConstant
| "[" parameterValue ["," parameterValue]* "]" // BAG or SET
| "(" stringLiteral, stringLiteral ")" // generic VERTEX value
parameterConstant := numeric | stringLiteral | TRUE | FALSE
parameterList := parameterType paramName ["=" constant]
["," parameterType paramName ["=" constant]]*
syntaxName := name
queryBody := [typedefs] [declStmts] [declExceptStmts] queryBodyStmts
typedefs := (typedef ";")+
declStmts := (declStmt ";")+
declStmt := baseDeclStmt | accumDeclStmt | fileDeclStmt
declExceptStmts := (declExceptStmt ";")+
queryBodyStmts := (queryBodyStmt ";")+
installQuery := INSTALL QUERY [installOptions] ( "*" | ALL | queryName ["," queryName]* )
runQuery := RUN QUERY [runOptions] queryName "(" parameterValueList ")"
showQuery := SHOW QUERY queryName
dropQuery := DROP QUERY ( "*" | ALL | queryName ["," queryName]* )

A GSQL query is a sequence of data retrieval-and-computation statements executed as a single operation. Users can write queries to explore a data graph however they like, to read and make computations on the graph data along the way, to update the graph, and to deliver resulting data. A query is analogous to a user-defined procedure or function: it can have one or more input parameters, and it can produce output in two ways: by returning a value or by printing. A query can be run in one of three ways:

  1. Define and run an unnamed query immediately:

    1. INTERPRET QUERY: execute the query's statements

Alternately, there is also a built-in REST++ endpoint to interpret a query string: POST /gsqlserver/interpreted_query. See the RESTPP API User Guide for details.

  2. Define a named query and then run it.

    1. CREATE QUERY: define the functionality of the query

    2. INTERPRET QUERY: execute the query with input values

  3. Define a named query, compile it to optimize performance, and then run it.

    1. CREATE QUERY: define the functionality of the query

    2. INSTALL QUERY: compile the query

    3. RUN QUERY: execute the query with input values

There are some limitations to Interpreted mode. See the section on INTERPRET QUERY and the appendix section Interpreted GSQL Limitations.


"(" [parameterList] ")"
[FOR GRAPH graphName]
[RETURNS "(" baseType | accumType ")"]
[API "(" stringLiteral ")"]
[SYNTAX syntaxName]
"{" queryBody "}"
queryBody := [typedefs] [declStmts] [declExceptStmts] queryBodyStmts

CREATE QUERY defines the functionality of a query on a given graph schema.

A query has a name, a parameter list, the name of the graph being queried, an optional RETURNS type (see Section "RETURN Statement" for more details), optional specifiers for the output API and the language syntax version, and a body. The body consists of an optional sequence of typedefs, followed by an optional sequence of declarations, followed by one or more statements. The body defines the behavior of the query.

DYNAMIC Query Support

As of TigerGraph 3.0+, FOR GRAPH graphName is optional, as long as the graph has been specified already, either when entering gsql: GSQL -g graphName [<gsql_command>] or once inside the GSQL shell, by using the USE GRAPH graphName command. This is one aspect of Dynamic Querying.

If the optional keywords OR REPLACE are included, then this query definition, if error-free, will replace a previous definition with the same query name. The new query will not be installed. That is, CREATE OR REPLACE QUERY name acts like DROP QUERY name followed by CREATE QUERY name. However, if there are any errors in this query definition, then the previous query definition will be maintained. If the OR REPLACE option is not used, then GSQL will reject a CREATE QUERY command that uses an existing name.

The DISTRIBUTED option applies only to installations where the graph has been distributed across a cluster. If specified, the query will run with a different execution model which may give better performance for queries that traverse a large portion of the cluster. Not all GSQL query language features are supported in DISTRIBUTED mode. For details, see the separate document: Distributed Query Mode.

Typedefs allow the programmer to define custom types for use within the body. The declarations support definition of accumulators (see Chapter "Accumulators" for more details) and global/local variables. All accumulators and global variables must be declared before any statements. Various types of statements can be used within the body. Typically, the core statements in the body of a query are one or more SELECT, UPDATE, INSERT, or DELETE statements. The language supports conditional statements such as IF, as well as looping constructs such as WHILE and FOREACH. It also supports calling functions, assigning variables, printing, and modifying the graph data.

The query body may include calls to other queries. That is, the other queries are treated as subquery functions. See the subsection on "Queries as Functions".

Example of a CREATE QUERY statement
CREATE QUERY createQueryEx (STRING uid) FOR GRAPH socialNet RETURNS (int) {
  # declaration statements
  users = {person.*};
  # body statements
  posts = SELECT p
          FROM users:u-(posted)->:p
          WHERE u.id == uid;
  PRINT posts;
  RETURN posts.size();
}

Query Parameter and Return Types

This table lists the supported data types for input parameters and return values.

Parameter Types


  • any base type (e.g., INT, UINT, FLOAT, DOUBLE, STRING, BOOL, DATETIME, or VERTEX)

  • SET<baseType>, BAG<baseType>

  • Exception: The EDGE and JSONOBJECT types are not supported, either as a primitive parameter or as part of a complex type.

Return Types


  • any base type

  • any accumulator type, except GroupByAccum

API (JSON output format)

Currently, the only option is "v2" (default).

SYNTAX (query language syntax version)

v1 (default) or v2 (pattern matching). See the SELECT Statement section for an outline of the differences. See Pattern Matching for details on v2.

Dynamic Querying

TigerGraph 3.0+ supports Dynamic Querying. This means the query can be written and installed as a saved procedure without referencing a particular graph. Schema details -- the name of the graph, vertex types, edge types, and attributes -- can all be parameterized. They only need to be specified at run time.

Here are the ingredients of a dynamic query:

  1. Graph name: When creating a query, FOR GRAPH graphName is optional, as long as the graph has been specified already, either when entering gsql: GSQL -g graphName [<gsql_command>] or once inside the GSQL shell, by using the USE GRAPH graphName command.

  2. Vertex type and edge type in SELECT statements. Typically, the FROM clause mentions the name of specific vertex types and edge types. String or string set parameters can be used here instead.

  3. Attribute names. The getAttr and setAttr functions, which take attribute name and data type as string parameters, can be used to parameterize attribute access.

  4. INSERT statements: If you are using INSERT to add data to your graph, you need to specify what type of vertex or edge you want to add. This can also be parameterized.

Here is a simple example that demonstrates how to apply Dynamic GSQL Query techniques, using the PageRank algorithm from our GSQL Graph Algorithm Library. First, here it is with schema information embedded statically in the query:

  • graph name = social

  • vertex type = Page

  • edge type = Link

  • vertex attribute = Score

CREATE QUERY pageRank (FLOAT maxChange=0.00, INT maxIter=25, FLOAT damping=0.85)
FOR GRAPH social {
  MaxAccum<float> @@maxDiff = 9999;
  SumAccum<float> @received_score = 0;
  SumAccum<float> @score = 1;
  WHILE @@maxDiff > maxChange LIMIT maxIter DO
    @@maxDiff = 0;
    V = SELECT s
        FROM Page:s -(Link>:e)- Page:t
        ACCUM t.@received_score += s.@score/(s.outdegree("Link"))
        POST-ACCUM s.@score = (1.0-damping) + damping * s.@received_score,
                   s.@received_score = 0,
                   @@maxDiff += abs(s.@score - s.@score');
  END;
  V = SELECT s FROM Page:s
      POST-ACCUM s.Score = s.@score;
}
RUN QUERY pageRank(_,_,_)

Here is the same algorithm written in Dynamic Querying style:

CREATE QUERY pageRankDyn (FLOAT maxChange=0.00, INT maxIter=25, FLOAT damping=0.85,
                          STRING vType, STRING eType, STRING attr) { // Parameters
  MaxAccum<float> @@maxDiff = 9999;
  SumAccum<float> @received_score = 0;
  SumAccum<float> @score = 1;
  WHILE @@maxDiff > maxChange LIMIT maxIter DO
    @@maxDiff = 0;
    V = SELECT s
        FROM vType:s -(eType>:e)- vType:t // Parameterized
        ACCUM t.@received_score += s.@score/(s.outdegree(eType)) // Parameterized
        POST-ACCUM s.@score = (1.0-damping) + damping * s.@received_score,
                   s.@received_score = 0,
                   @@maxDiff += abs(s.@score - s.@score');
  END;
  V = SELECT s FROM vType:s
      POST-ACCUM s.setAttr(attr, s.@score); // Parameterized
}
RUN QUERY pageRankDyn(_,_,_, "Page", "Link", "Score")

Statement Types

A statement is a standalone instruction that expresses an action to be carried out. The most common statements are data manipulation language (DML) statements. DML statements include the SELECT, UPDATE, INSERT INTO, DELETE FROM, and DELETE statements.

A GSQL query has two levels of statements. The upper-level statement type is called a query-body-level statement, or query-body statement for short. This statement type is part of either the top-level block or a query-body control flow block. For example, each of the statements at the top level directly under CREATE QUERY is a query-body statement. If one of the statements is a CASE statement with several THEN blocks, each of the statements in the THEN blocks is also a query-body statement. Each query-body statement ends with a semicolon.

The lower-level statement type is called a DML-sub-level statement, or DML-sub-statement for short. This statement type is used inside certain query-body DML statements to define particular data manipulation actions. DML-sub-statements are comma-separated. There is no comma or semicolon after the last DML-sub-statement in a block. For example, if one of the top-level statements is a SELECT statement, each of the statements in its ACCUM clause is a DML-sub-statement. If one of those DML-sub-statements is a CASE statement, each of the statements in its THEN blocks is a DML-sub-statement.

There is some overlap in the types. For example, an assignStmt can be used either at the query-body level or the DML-sub-level.

queryBodyStmts := (queryBodyStmt ";")+
queryBodyStmt := assignStmt // Assignment
| vSetVarDeclStmt // Declaration
| gAccumAssignStmt // Assignment
| gAccumAccumStmt // Assignment
| lAccumAccumStmt // Assignment
| funcCallStmt // Function Call
| selectStmt // Select
| queryBodyCaseStmt // Control Flow
| queryBodyIfStmt // Control Flow
| queryBodyWhileStmt // Control Flow
| queryBodyForEachStmt // Control Flow
| BREAK // Control Flow
| CONTINUE // Control Flow
| updateStmt // Data Modification
| insertStmt // Data Modification
| queryBodyDeleteStmt // Data Modification
| printStmt // Output
| printlnStmt // Output
| logStmt // Output
| returnStmt // Output
| raiseStmt // Exception
| tryStmt // Exception
DMLSubStmtList := DMLSubStmt ["," DMLSubStmt]*
DMLSubStmt := assignStmt // Assignment
| funcCallStmt // Function Call
| gAccumAccumStmt // Assignment
| lAccumAccumStmt // Assignment
| attrAccumStmt // Assignment
| vAccumFuncCall // Function Call
| localVarDeclStmt // Declaration
| DMLSubCaseStmt // Control Flow
| DMLSubIfStmt // Control Flow
| DMLSubWhileStmt // Control Flow
| DMLSubForEachStmt // Control Flow
| BREAK // Control Flow
| CONTINUE // Control Flow
| insertStmt // Data Modification
| DMLSubDeleteStmt // Data Modification
| printlnStmt // Output
| logStmt // Output

Guidelines for understanding statement type hierarchy:

  • Top-level statements are Query-Body type (each statement ending with a semicolon).

  • The statements within a DML statement are DML-sub statements (comma-separated list).

  • The blocks within a Control Flow statement have the same type as the entire Control Flow statement itself.

Schematic illustration of relationship between queryBodyStmt and DMLSubStmt
# Each statement's operation type is either ControlFlow, DML, or other.
# Each statement's syntax type is either queryBodyStmt or DMLSubStmt.
CREATE QUERY stmtTypes (parameterList) FOR GRAPH g {
other queryBodyStmt1;
ControlFlow queryBodyStmt2 # ControlFlow inside top level.
other queryBodyStmt2.1; # subStmts in ControlFlow are queryBody unless inside DML.
ControlFlow queryBodyStmt2.2 # ControlFlow inside ControlFlow inside top level
other queryBodyStmt2.2.1;
other queryBodyStmt2.2.2;
DML queryBodyStmt2.3 # DML inside ControlFlow inside top-level
other DMLSubStmt2.3.1, # switch to DMLSubStmt
other DMLSubStmt2.3.2
DML queryBodyStmt3 # DML inside top level.
other DMLSubStmt3.1, # All subStmts in DML must be DMLSubStmt type
ControlFlow DMLSubStmt3.2 # ControlFlow inside DML inside top level
other DMLSubStmt3.2.1,
other DMLSubStmt3.2.2
DML DMLSubStmt3.3 # DML nested inside DML
other DMLSubStmt3.3.1,
other DMLSubStmt3.3.2
other queryBodyStmt4;
}

Here is a descriptive list of query-body statements (EBNF term, common name, and where each is described):

  • assignStmt: Assignment Statement. See "Declaration and Assignment Statements".

  • vSetVarDeclStmt: Vertex Set Variable Declaration Statement. See "Declaration and Assignment Statements".

  • gAccumAssignStmt: Global Accumulator Assignment Statement. See "Declaration and Assignment Statements".

  • gAccumAccumStmt: Global Accumulator Accumulation Statement. See "Declaration and Assignment Statements".

  • lAccumAccumStmt: Local Accumulator Accumulation Statement. See "Declaration and Assignment Statements".

  • funcCallStmt: Function Call or Query Call Statement. See "Declaration and Assignment Statements".

  • selectStmt: SELECT Statement. See "SELECT Statement".

  • queryBodyCaseStmt: query-body CASE statement. See "Control Flow Statements".

  • queryBodyIfStmt: query-body IF statement. See "Control Flow Statements".

  • queryBodyWhileStmt: query-body WHILE statement. See "Control Flow Statements".

  • queryBodyForEachStmt: query-body FOREACH statement. See "Control Flow Statements".

  • updateStmt: UPDATE Statement. See "Data Modification Statements".

  • insertStmt: INSERT INTO Statement. See "Data Modification Statements".

  • queryBodyDeleteStmt: query-body DELETE Statement. See "Data Modification Statements".

  • printStmt: PRINT Statement. See "Output Statements".

  • logStmt: LOG Statement. See "Output Statements".

  • returnStmt: RETURN Statement. See "Output Statements".

  • raiseStmt: RAISE Statement. See "Exception Statements".

  • tryStmt: TRY Statement. See "Exception Statements".

Here is a descriptive list of DML-sub-statements (EBNF term, common name, and where each is described):

  • assignStmt: Assignment Statement. See "Declaration and Assignment Statements".

  • funcCallStmt: Function Call Statement. See "Declaration and Assignment Statements".

  • gAccumAccumStmt: Global Accumulator Accumulation Statement. See "Declaration and Assignment Statements".

  • lAccumAccumStmt: Local Accumulator Accumulation Statement. See "Declaration and Assignment Statements".

  • attrAccumStmt: Attribute Accumulation Statement. See "Declaration and Assignment Statements".

  • vAccumFuncCall: Vertex-attached Accumulator Function Call Statement. See "Declaration and Assignment Statements".

  • localVarDeclStmt: Local Variable Declaration Statement. See "SELECT Statement".

  • DMLSubCaseStmt: DML-sub CASE statement. See "Control Flow Statements".

  • DMLSubIfStmt: DML-sub IF statement. See "Control Flow Statements".

  • DMLSubWhileStmt: DML-sub WHILE statement. See "Control Flow Statements".

  • DMLSubForEachStmt: DML-sub FOREACH statement. See "Control Flow Statements".

  • insertStmt: INSERT INTO Statement. See "Data Modification Statements".

  • DMLSubDeleteStmt: DML-sub DELETE Statement. See "Data Modification Statements".

  • printlnStmt: PRINTLN Statement. See "Output Statements".

  • logStmt: LOG Statement. See "Output Statements".


INTERPRET QUERY runs a query by translating it line-by-line. This is in contrast to the 2-step flow: (1) INSTALL to pre-translate and optimize a query, then (2) RUN to execute the installed query. The basic trade-off between INTERPRET QUERY and INSTALL/RUN QUERY is as follows:


INTERPRET QUERY:

    • Starts running immediately but may take longer to finish than running an INSTALLed query.

    • Suitable for ad hoc exploration of a graph or when developing and debugging an application, and rapid experimentation is desired.

    • Supports most but not all of the features of the full GSQL query language. See the Appendix section Interpreted GSQL Limitations.


INSTALL QUERY followed by RUN QUERY:

    • Takes up to a minute to INSTALL.

    • Runs faster than INTERPRET, from only a few percent faster to twice as fast.

    • Should always be used for production environments with fixed queries.

There are two GSQL syntax options for Interpreted GSQL: Immediate mode and Saved-query mode. In addition there is also a predefined RESTful endpoint for running interpreted GSQL: POST /gsqlserver/interpreted_query. The query body is sent as the payload of the request. The syntax is like the Immediate query option, except that it is possible to provide parameters, using the query string of the endpoint's request URL. The example below shows a parameterized query using the POST /gsqlserver/interpreted_query endpoint. For more details, see the RESTPP API User Guide.

Interpreted GSQL REST Endpoint with Immediate Query
curl --user tigergraph:tigergraph -X POST 'localhost:14240/gsqlserver/interpreted_query?a=10' -d '
INTERPRET QUERY (int a) FOR GRAPH gsql_demo {
  PRINT a;
}'
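The same interpreted-query call can be composed with Python's standard library. This is a sketch under the same assumptions as the curl example above (server address, tigergraph:tigergraph credentials, and a placeholder PRINT body):

```python
from urllib.parse import urlencode

# Parameters go in the query string of the endpoint URL;
# the GSQL text itself is the POST body.
params = urlencode({"a": 10})
url = "http://localhost:14240/gsqlserver/interpreted_query?" + params
body = "INTERPRET QUERY (int a) FOR GRAPH gsql_demo {\n  PRINT a;\n}"

print(url)  # the request target, equivalent to the curl example

# To actually send it (requires a running server):
# import urllib.request, base64
# req = urllib.request.Request(url, data=body.encode())
# req.add_header("Authorization",
#                "Basic " + base64.b64encode(b"tigergraph:tigergraph").decode())
# urllib.request.urlopen(req)
```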

Immediate Mode: Define and Interpret

interpret-anonymous-query syntax
interpretQuery := INTERPRET QUERY "(" ")"
[FOR GRAPH graphName]
[SYNTAX syntaxName]
"{" queryBody "}"

This syntax is similar in concept to SQL queries. Queries are not named, do not accept parameters, and are not saved after being run. Syntax differences from compiled GSQL:

  1. The keyword CREATE is replaced with INTERPRET.

  2. The query is executed immediately by the INTERPRET statement. The INSTALL and RUN statements are not used.

  3. Parameters are not accepted.

Compare the example below to the example in the Create Query section:

  • No query name, no parameters, no RETURN

  • Because no parameter is allowed, the parameter uid is set within the query.

Example of Immediate Mode for INTERPRET QUERY
INTERPRET QUERY () FOR GRAPH socialNet {
  # declaration statements
  STRING uid = "Jane.Doe";
  users = {person.*};
  # body statements
  posts = SELECT p
          FROM users:u-(posted)->:p
          WHERE u.id == uid;
  PRINT posts, posts.size();
}

Interpret a Saved Query

interpret-saved-query syntax
runQuery := (RUN | INTERPRET) QUERY [runOptions] queryName "(" parameterValueList ")"

This syntax is like RUN query, except

  1. The keyword RUN is replaced with INTERPRET.

  2. Some options may not be supported.

Example of Interpret-Only Mode for INTERPRET QUERY
INTERPRET QUERY createQueryEx ("Jane.Doe")


Installing a Query

installQuery := INSTALL QUERY [installOptions] ( "*" | ALL | queryName ["," queryName]* )

A query must be installed before it can be executed. The INSTALL QUERY command will install the queries listed:

INSTALL QUERY queryName1, queryName2, ...

It can also install all uninstalled queries, using either of the following commands:

INSTALL QUERY *
INSTALL QUERY ALL

Note: Installing takes several seconds for each query. The current version does not support concurrent installation and running of queries. Other concurrent graph operations will be delayed until the installation finishes.

The following options are available:

-force Option

Reinstall the query even if the system indicates the query is already installed. This is useful for overwriting an installation that is corrupted or otherwise outdated, without having to drop and then recreate the query. If this option is not used, the GSQL shell will refuse to re-install a query that is already installed.


-OPTIMIZE Option

During standard installation, the user-defined queries are dynamically linked to the GSQL language code. Anytime after INSTALL QUERY has been performed, the statement INSTALL QUERY -OPTIMIZE can be executed. The names of the individual queries are not needed. This operation optimizes all previously installed queries, reducing their run times by about 20%. Optimize a query if query run time is more important to you than query installation time.


CREATE QUERY query1...
INSTALL QUERY query1
RUN QUERY query1(...)
CREATE QUERY query2...
INSTALL QUERY query2
INSTALL QUERY -OPTIMIZE # (optional) optimizes run time performance for query1 and query2
RUN QUERY query1(...) # runs faster than before
RUN QUERY query2(...) # runs faster than before




If you have a distributed database deployment, installing the query in DISTRIBUTED mode can increase performance for single queries, using a single worker from each available machine to yield results. Certain cases may benefit more from this option than others; more detailed information is available on the next page: Distributed Query Mode.


Running a Query

Installing a query creates a REST++ endpoint. Once a query is installed, there are two ways of executing it: using the GSQL RUN QUERY command or sending a REST request.

Query output size limitation

There is a maximum size limit of 2GB for the result set of a SELECT block. A SELECT block is the main component of a query which searches for and returns data from the graph. If the result of the SELECT block is larger than 2GB, the system will return no data. No error message is produced.


RUN QUERY syntax
runQuery := (RUN | INTERPRET) QUERY [runOptions] queryName "(" parameterValueList ")"
runOptions := ( "-av" | "-d" )*
parameterValueList := parameterValue [, parameterValue]*
RUN QUERY example
RUN QUERY RunQueryEx(1, "test", 3.14)

Running a Query as a REST Endpoint

Application developers may find it more convenient to run queries by sending an HTTP request to a REST endpoint. Installed queries have an endpoint:

https://server_ip:9000/query/<graphName>/<queryName>

If the REST++ server is local, then server_ip is localhost. The request can use either the GET or POST method. The query parameter values are either included directly in the query string of the HTTP request's URL or supplied using a data payload. The basic format for the parameters as a query string is:

parameter1=value1&parameter2=value2&...
The following two curl commands are each equivalent to the RUN QUERY command above. The first gives the parameter values in the query string in a URL. This example illustrates the simple format for primitive data types such as INT, DOUBLE, and STRING. The second gives the parameter values through the curl command's data payload -d option.

Running a query via HTTP request
curl -X GET "http://localhost:9000/query/testGraph/RunQueryEx?p1=1&p2=test&p3=3.14"
curl -d @RunQueryEx.dat -X POST "http://localhost:9000/query/testGraph/RunQueryEx"

where RunQueryEx.dat contains the same string as the query string in the first URL.
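For comparison, here is a sketch of the GET form composed with Python's standard library (server address and graph/query names follow the examples above):

```python
from urllib.parse import urlencode

# Build the query string for RunQueryEx(1, "test", 3.14).
params = {"p1": 1, "p2": "test", "p3": 3.14}
query_string = urlencode(params)  # values are converted to strings and encoded
url = "http://localhost:9000/query/testGraph/RunQueryEx?" + query_string

print(url)

# To actually issue the GET request (requires a running server):
# from urllib.request import urlopen
# with urlopen(url) as resp:
#     body = resp.read().decode()
```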


To see a list of the parameter names and types for the user-installed GSQL queries, run the following REST++ request:

curl -X GET "http://localhost:9000/endpoints?dynamic=true"

By using the data payload option, the user can avoid using a long and complex URL, as well as keep their parameters more secure. In fact, to call the same query but with different parameters, only the data payload file contents need to be changed; the HTTP request can be the same. The file loader loads the entire file, appends multiple lines into one, and uses the resulting string as the URL query string. If both a query string and a data payload are given (which we strongly discourage), both are included, where the URL query string's parameter values overwrite the values given in the data payload.
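The line-joining behavior described above can be sketched in Python: the payload file holds the same string as the URL query string, possibly split across lines, and the loader appends the lines into one (file name follows the curl example above):

```python
# Write the payload file used by:  curl -d @RunQueryEx.dat -X POST ...
lines = ["p1=1&p2=test",
         "&p3=3.14"]
with open("RunQueryEx.dat", "w") as f:
    f.write("\n".join(lines))

# What RESTPP effectively receives after multiple lines are appended into one:
with open("RunQueryEx.dat") as f:
    joined = "".join(line.strip() for line in f)
print(joined)
```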

Some curl options may be either required or recommended for security, user authentication, or error messages. Please see RESTPP Requests for general advice on using TigerGraph REST endpoints.

Complex Type Parameter Passing

This subsection describes how to format complex-type parameter values when executing a query with RUN QUERY or a curl command. More details about all parameter types are described in Section "Query Parameter Types".

Parameter type: SET or BAG of primitives

RUN QUERY syntax: Square brackets enclose the collection of values.
Example: a set p1 of integers: [1,5,10]

GET /query request syntax: Assign multiple values to the same parameter name.
Example: a set p1 of integers: p1=1&p1=5&p1=10

Parameter type: VERTEX<type> (type pre-specified)

RUN QUERY syntax: If the vertex type is specified in the query definition, then the vertex argument is simply vertex_id.
Example: vertex type is person and desired id is person2: "person2"

GET /query request syntax: Assign the id to the parameter name.
Example: vertex type is person and desired id is person2: vp=person2

Parameter type: VERTEX (type not pre-specified)

RUN QUERY syntax: If the type is not defined in the query definition, then the argument must provide both the id and type in parentheses: (vertex_id, vertex_type).
Example: a vertex va with id="person1" and type="person": ("person1","person")

GET /query request syntax: Give the id and type as separate parameters.
Example: parameter vertex va when type="person" and id="person1": va=person1&va.type=person

Parameter type: SET or BAG of VERTEX<type>

RUN QUERY syntax: Same as a SET or BAG of primitives, where the primitive type is vertex_id.
Example: [ "person3", "person4" ]

GET /query request syntax: Same as a SET or BAG of primitives, where the primitive type is vertex_id.
Example: vp=person3&vp=person4

Parameter type: SET or BAG of VERTEX (type not pre-specified)

RUN QUERY syntax: Same as a SET or BAG of vertices, with vertex type not pre-specified. Square brackets enclose a comma-separated list of vertex (id, type) pairs. Mixed types are permitted.
Example: [ ("person1","person"), ("11","post") ]

GET /query request syntax: The SET or BAG must be treated like an array, specifying the first, second, etc. elements with indices [0], [1], etc.
Example: va[0]=person1&va[0].type=person&va[1]=11&va[1].type=post


When square brackets are used in a curl URL, the -g option or escape characters must be used. If the parameters are given by data payload (either by file or data payload string), the -g option is not needed and escape characters should not be used.

Below are examples.

Running a query via HTTP request - complex parameter type
# 1. SET or BAG
CREATE QUERY RunQueryEx2(SET<INT> p1) FOR GRAPH testGraph{ .... }
# To run this query (either RUN QUERY or curl):
GSQL > RUN QUERY RunQueryEx2([1,5,10])
curl -X GET "http://localhost:9000/query/testGraph/RunQueryEx2?p1=1&p1=5&p1=10"
# 2. VERTEX.
# First parameter is any vertex; second parameter must be a person type.
CREATE QUERY printOneVertex(VERTEX va, VERTEX<person> vp) FOR GRAPH socialNet {
PRINT va, vp;
}
# To run this query:
GSQL > RUN QUERY printOneVertex(("person1","person"),"person2") # 1st param must give type: (vertex_id, vertex_type)
curl -X GET 'http://localhost:9000/query/socialNet/printOneVertex?va=person1&va.type=person&vp=person2'
# 3. BAG or SET of VERTEX, any type
CREATE QUERY printOneBagVertices(BAG<VERTEX> va) FOR GRAPH socialNet {
PRINT va;
}
# To run this query:
GSQL > RUN QUERY printOneBagVertices([("person1","person"), ("11","post")]) # [(vertex_1_id, vertex_1_type), (vertex_2_id, vertex_2_type), ...]
curl -X GET 'http://localhost:9000/query/socialNet/printOneBagVertices?va\[0\]=person1&va\[0\].type=person&va\[1\]=11&va\[1\].type=post'
curl -g -X GET 'http://localhost:9000/query/socialNet/printOneBagVertices?va[0]=person1&va[0].type=person&va[1]=11&va[1].type=post'
# 4. BAG or SET of VERTEX, pre-specified type
CREATE QUERY printOneSetVertices(SET<VERTEX<person>> vp) FOR GRAPH socialNet {
PRINT vp;
}
# To run this query:
GSQL > RUN QUERY printOneSetVertices(["person3", "person4"]) # [vertex_1_id, vertex_2_id, ...]
curl -X GET 'http://localhost:9000/query/socialNet/printOneSetVertices?vp=person3&vp=person4'
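The indexed encoding for untyped vertex collections can be generated programmatically. A sketch in Python (the helper name is my own; note that urlencode percent-encodes the square brackets, which is equivalent to the escaped brackets above and sidesteps the curl -g issue):

```python
from urllib.parse import urlencode

def encode_vertex_bag(name, vertices):
    """Expand (id, type) pairs into the indexed form
    name[0]=id0&name[0].type=type0&... used by REST++."""
    pairs = []
    for i, (vid, vtype) in enumerate(vertices):
        pairs.append((f"{name}[{i}]", vid))
        pairs.append((f"{name}[{i}].type", vtype))
    return urlencode(pairs)  # brackets become %5B / %5D

qs = encode_vertex_bag("va", [("person1", "person"), ("11", "post")])
print(qs)
```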

Payload Size Limit

This data payload option can accept a file up to 128MB by default. To increase this limit to xxx MB, use the following command:

gadmin --set nginx.client_max_body_size xxx -f

The upper limit of this setting is 1024 MB. Raising the size limit for the data payload buffer reduces the memory available for other operations, so be cautious about increasing this limit.

For more detailed information about REST++ endpoints and requests, see the RESTPP API User Guide .

The following options are available when running a query:

All-Vertex Mode -av Option

Some queries run with all or almost all vertices in a SELECT statement, e.g., the PageRank algorithm. In this case, the graph processing engine can run much more efficiently in all-vertex mode. In all-vertex mode, all vertices are always selected, and the following actions become ineffective:

  • Filtering with selected vertices or vertex types. The source vertex set must be all vertices.

  • Filtering with the WHERE clause.

  • Filtering with the HAVING clause.

  • Assigning a designated vertex or a designated vertex type, e.g., X = {vertex_type.*}

To run the query in all-vertex mode, use the -av option in shell mode or include __GQUERY__USING_ALL_ACTIVE_MODE=true in the query string of an HTTP request.

GSQL > RUN QUERY -av test()
## In a curl URL call. Note the use of both single and double underscores.
curl -X GET 'http://localhost:9000/query/graphname/queryname?__GQUERY__USING_ALL_ACTIVE_MODE=true'

Diagnose -d Option

The diagnose option can be turned on in order to produce a diagnostic monitoring log, which contains the processing time of each SELECT block. To turn on the monitoring log, use the -d option in shell mode or __GQUERY__monitor=true in the query string of an HTTP request.

GSQL > RUN QUERY -d test()
## In a curl URL call. Note the use of both single and double underscores.
curl -X GET 'http://localhost:9000/query/graphname/queryname?__GQUERY__monitor=true'

The path of the generated log file will be shown as part of the output message. An example log is shown below:

Query Block Start (#6) start at 11:52:06.415284
Query Block Start (#6) end at 11:52:06.415745 (takes 0.000442 s)
Query test takes totally 0.001 s (restpp's pre/post process time not included)
---------------- Summary (sort by total_time desc) ----------------
Query Block Start on Line 6
total iterations count : 1
avg iterations stats : 0.000442s
max iterations stats : 0.000442s
min iterations stats : 0.000442s
total activated vertex count : 2
max activated vertex count : 2
min activated vertex count : 2

GSQL Query Output Format

The standard output of GSQL queries is in industry-standard JSON format. A JSON object is an unordered set of key:value pairs, enclosed in curly braces. Among the acceptable data types for a JSON value are array and object. A JSON array is an ordered list of values, enclosed in square brackets. Since values can be objects or arrays, JSON supports hierarchical, nested structures. Strings are enclosed in double quotation marks. We also use the term field to refer to a key (or a key:value pair) of a given object.

At the top level of the JSON structure are four required fields ("version", "error", "message", and "results") and one dependent field ("code"). If a query is successful, the value of "error" will be "false", the "message" value will be empty, and the "results" value will be the intended output of the query. If an error or exception occurred during query execution, the "error" value will be "true", the "message" value will be a string message describing the error condition, and the "results" field will be empty. Also, the "code" field will contain an error code.

Beginning with version 2 (v2) of the output specification, an additional top-level field is required: "version". The "version" value is an object with the following fields:

"version" field



String specifying the output API version. Values are specified as follows:

  • "v1": Output API used in TigerGraph platform v0.8 through v1.0. NOTE: "v1" support is no longer available as of TigerGraph v3.0.

  • "v2" (default): Output API introduced in TigerGraph platform v1.1 This is the latest API.


String indicating the edition of the product. Current possible values are "developer" and "enterprise".


Integer representing which version of the user's graph schema is currently in use. When a CREATE GRAPH statement is executed, the version is initialized to 0. Each time a SCHEMA_CHANGE JOB is run, the schema value is incremented (e.g., 1, 2, etc.).

Other top-level objects, such as "code", may appear in certain circumstances. Note that the top-level objects are enclosed in curly braces, meaning that they form an unordered set. They may appear in any order.

Below is an example of the output of a successful query:

Top Level JSON of a Valid Query - Example
"version": {"edition": "developer","api": "v2","schema": "1"},
"error": false,
"message": "",
"results": [

The value of the "results" key-value pair is a sequential list of the data objects specified by the PRINT statements of the query. The list order follows the order of PRINT execution. The detailed format of the PRINT statement results is described in Output Statements and FILE Objects.

The following REST request misspells the name of the endpoint:

GET echo/ Request and Response
curl -X GET "http://localhost:9000/eco"

and generates the following output:

"version": {"edition":"developer","api":"v2","schema":0},
"error": true,
"message": "Endpoint is not found from url = /eco, please use GET /endpoints to list all valid endpoints.",
"code": "REST-1000"

Changing the Default Output API

The following GSQL statement can be used to set the JSON output API configuration.

SET json_api = <version_string>

This statement sets a persistent system parameter. Each version of the TigerGraph platform is pre-configured to the latest output API available at the time of release. For example, platform version 1.1 is configured so that each query will produce v2 output by default.

As of TigerGraph v3.0, the only supported JSON API is "v2".


Showing a Query

To show the GSQL text of a query, run SHOW QUERY query_name. The query_name argument can use * or ? wildcards from Linux globbing, or it can be a regular expression when preceded by -r. See SHOW: View Parts of the Catalog.

Additionally, the "ls" GSQL command lists all created queries and identifies which queries have been installed.


Dropping a Query

To drop a query, run DROP QUERY query_name. The query will be uninstalled (if it has been installed) and removed from the dictionary. GSQL will refuse to drop an installed query Q if another installed query R calls query Q. That is, all calling queries must be dropped before or at the same time as their called subqueries.

To drop all queries, either of the following commands can be used:

DROP QUERY ALL
DROP QUERY *

The scope of ALL depends on the user's current scope. If the user has set a working graph, then DROP QUERY ALL removes all the queries for that graph. If a superuser has set their scope to be global, then DROP QUERY ALL removes all queries across all graph spaces.