
Connecting to BigQuery

Info

A valid Enterprise license of PuppyGraph is required to connect to BigQuery.

Please see https://www.puppygraph.com/pricing to learn more and start a free trial with PuppyGraph Enterprise.

Prerequisites

  • The BigQuery service is accessible over the network from the PuppyGraph instance.
  • A service account with proper permissions (see below) is available for PuppyGraph to connect to BigQuery with.

Service Account Permission

PuppyGraph uses a Google Service Account for reading data from BigQuery. The service account needs to have the following roles:

  • roles/bigquery.dataViewer
  • roles/bigquery.jobUser
  • roles/bigquery.readSessionUser

To validate the required permissions on the service account, you can use the following gcloud command:

gcloud projects get-iam-policy <PROJECT_ID> --flatten="bindings[].members" --format='table(bindings.role)' --filter="bindings.members:<SERVICE_ACCOUNT>"
Replace <PROJECT_ID> with your project ID and <SERVICE_ACCOUNT> with your service account email. The output should contain the following lines:
ROLE: roles/bigquery.dataViewer
ROLE: roles/bigquery.jobUser
ROLE: roles/bigquery.readSessionUser
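
If any of these roles are missing, they can be granted with gcloud. The following is a sketch using the same <PROJECT_ID> and <SERVICE_ACCOUNT> placeholders as above:

```shell
# Grant the three roles PuppyGraph needs (sketch; substitute your own
# <PROJECT_ID> and <SERVICE_ACCOUNT> values before running).
for role in roles/bigquery.dataViewer roles/bigquery.jobUser roles/bigquery.readSessionUser; do
  gcloud projects add-iam-policy-binding <PROJECT_ID> \
    --member="serviceAccount:<SERVICE_ACCOUNT>" \
    --role="$role"
done
```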

Configuration

Configuration Explanation
JDBC URI

A JDBC compatible connection URI of the data source. Read this page for more details on how to construct the URI.

Specifically, the following parameters need to be set according to your service account configuration:

  • ProjectId: set to the project ID of your service account.
  • OAuthServiceAcctEmail: set to your service account email.
  • OAuthPvtKeyPath: set to a key file path that is accessible from inside the Docker container.
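
Putting these together, a complete URI looks like the following. This is a sketch with placeholder values; my-project and bigquery-sa@my-project.iam.gserviceaccount.com are assumptions to be replaced with your own project ID and service account email:

```
jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=my-project;OAuthType=0;OAuthServiceAcctEmail=bigquery-sa@my-project.iam.gserviceaccount.com;OAuthPvtKeyPath=/home/key.json;EnableSession=1;
```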

JDBC Driver Class

The class name of the JDBC Driver.

The default value is com.simba.googlebigquery.jdbc.Driver

Demo

Prerequisites

The demo assumes that PuppyGraph has been deployed at localhost according to the instructions in Launching PuppyGraph from AWS Marketplace or Launching PuppyGraph in Docker.

In this demo, we use the username puppygraph and password puppygraph123.

Data Preparation (Optional)

ID   age  name
v1   29   marko
v2   27   vadas

refId  source  referred  weight
e1     v1      v2        0.5

The demo uses people and referral information as shown above.

The following steps will create tables and insert data to BigQuery in the GCP web console.

Firstly, create a dataset named demo with multi-region support.

Then, create tables using the web console.
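
Alternatively, if you prefer the command line to the web console, the dataset and tables can be created with the bq CLI. This is a sketch; the dataset name demo matches the SQL below, while the US location and the column types are assumptions:

```shell
# Create a multi-region dataset named "demo" (location US is an assumption).
bq mk --dataset --location=US demo

# Create the person and referral tables with inline schemas
# (column names follow the sample data above; types are assumptions).
bq mk --table demo.person ID:STRING,age:INTEGER,name:STRING
bq mk --table demo.referral refId:STRING,source:STRING,referred:STRING,weight:FLOAT
```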

Finally, open the query tab and execute the following SQL.

insert into `demo.person` values ('v1', 29, 'marko'), ('v2', 27, 'vadas');
insert into `demo.referral` values ('e1', 'v1', 'v2', 0.5);

Authentication

Firstly, you must create a service account and generate a key for it.
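
As a sketch, the service account and key can also be created with gcloud; the account name bigquery-sa and the project ID PJID follow the example configuration in this guide and are assumptions:

```shell
# Create the service account (the name "bigquery-sa" is an assumption).
gcloud iam service-accounts create bigquery-sa \
  --display-name="PuppyGraph BigQuery reader"

# Generate a JSON key for it; replace PJID with your project ID.
gcloud iam service-accounts keys create key.json \
  --iam-account="bigquery-sa@PJID.iam.gserviceaccount.com"
```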

Then, download the key and place it in PuppyGraph's environment. Assume the downloaded key file is named key.json and the PuppyGraph container is named puppy:

docker cp key.json puppy:/home/key.json

Upload the schema

Now the data are ready in BigQuery. We need a PuppyGraph schema before querying it. Let's create a schema file bigquery.json:

bigquery.json
{
  "catalogs": [
    {
      "name": "jdbc_bigquery",
      "type": "bigquery",
      "jdbc": {
        "jdbcUri": "jdbc:bigquery://https://www.googleapis.com/bigquery/v2:443;ProjectId=PJID;OAuthType=0;OAuthServiceAcctEmail=bigquery-sa@PJID.iam.gserviceaccount.com;OAuthPvtKeyPath=/home/key.json;EnableSession=1;",
        "driverClass": "com.simba.googlebigquery.jdbc.Driver"
      }
    }
  ],
  "vertices": [
    {
      "label": "person",
      "mappedTableSource": {
        "catalog": "jdbc_bigquery",
        "schema": "demo",
        "table": "person",
        "metaFields": {
          "id": "ID"
        }
      },
      "attributes": [
        {
          "type": "Long",
          "name": "age"
        },
        {
          "type": "String",
          "name": "name"
        }
      ]
    }
  ],
  "edges": [
    {
      "label": "knows",
      "mappedTableSource": {
        "catalog": "jdbc_bigquery",
        "schema": "demo",
        "table": "referral",
        "metaFields": {
          "id": "refId",
          "from": "source",
          "to": "referred"
        }
      },
      "from": "person",
      "to": "person",
      "attributes": [
        {
          "type": "Double",
          "name": "weight"
        }
      ]
    }
  ]
}
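
Before uploading, it can help to confirm the file is syntactically valid JSON. A minimal check, assuming python3 is available on the host:

```shell
# Validate bigquery.json locally; prints "schema OK" on success.
python3 -m json.tool bigquery.json > /dev/null \
  && echo "schema OK" \
  || echo "schema has a JSON syntax error"
```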

Here are some notes on this schema:

  1. A catalog jdbc_bigquery is added to specify the remote data source in BigQuery.
  2. Set type to bigquery.
  3. Set driverClass to com.simba.googlebigquery.jdbc.Driver.
  4. jdbcUri must be set according to your service account configuration.
  5. ProjectId: set to the project ID of your service account (PJID in the example).
  6. OAuthServiceAcctEmail: set to your service account email.
  7. OAuthPvtKeyPath: set to the key file path inside the Docker container, here /home/key.json.

Now we can upload the schema file bigquery.json to PuppyGraph with the following shell command, assuming that PuppyGraph is running on localhost:

curl -XPOST -H "content-type: application/json" --data-binary @./bigquery.json --user "puppygraph:puppygraph123" localhost:8081/schema

Query the data

Connect to PuppyGraph at http://localhost:8081 and start the Gremlin Console from the "Query" section:

[PuppyGraph]> console
         \,,,/
         (o o)
-----oOOo-(3)-oOOo-----
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
plugin activated: tinkerpop.tinkergraph

Now that we are connected to the Gremlin Console, we can query the graph:

gremlin> g.V().hasLabel("person").out("knows").values("name")
==>vadas