BigQuery (Service Account)
What you can do
Connect this agent connector to let your agent:
- Insert query job — Submit an asynchronous BigQuery query job
- Cancel job — Request cancellation of a running BigQuery job
- Dry run query — Validate a SQL query and estimate its cost without executing it
- List tables — List all tables and views in a BigQuery dataset
- Get table — Retrieve metadata and schema for a specific BigQuery table or view, including column names, types, descriptions, and table properties
Authentication
This connector uses Service Account authentication.
Before calling this connector from your code, create the BigQuery (Service Account) connection in AgentKit > Connections and copy the exact Connection name from that connection into your code. The value in code must match the dashboard exactly.
Set up the connector
In the Scalekit dashboard, go to AgentKit > Connections > Create Connection. Find BigQuery (Service Account) and click Create.
That’s it — no OAuth credentials or redirect URIs needed. BigQuery Service Account uses server-to-server authentication handled entirely through your GCP service account credentials.
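The service account key is sensitive, so a common pattern is to load it from a key file at runtime rather than pasting it inline in code. A minimal sketch of that pattern (the helper name and validation are illustrative, not part of the Scalekit SDK):

```python
import json

def load_service_account_json(path: str) -> str:
    """Read a GCP service account key file and sanity-check its shape
    before passing it to the connector as the service account JSON."""
    with open(path) as f:
        raw = f.read()
    info = json.loads(raw)  # fails fast on malformed JSON
    if info.get("type") != "service_account":
        raise ValueError("Key file is not a GCP service account key")
    return raw
```

The returned string can then be supplied as the `serviceAccountJson` / `service_account_json` value in the examples below.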
Code examples
Connect to BigQuery using a GCP service account — Scalekit handles authentication automatically using your service account credentials.
```typescript
import { ScalekitClient } from '@scalekit-sdk/node';
import 'dotenv/config';

const connectionName = 'bigqueryserviceaccount'; // get your connection name from connection configurations
const identifier = 'user_123'; // your unique user identifier

// Get your credentials from app.scalekit.com → Developers → Settings → API Credentials
const scalekit = new ScalekitClient(
  process.env.SCALEKIT_ENV_URL,
  process.env.SCALEKIT_CLIENT_ID,
  process.env.SCALEKIT_CLIENT_SECRET
);
const actions = scalekit.actions;

// Create a connected account with your service account credentials
await actions.getOrCreateConnectedAccount({
  connectionName,
  identifier,
  authorizationDetails: {
    staticAuth: {
      serviceAccountJson: '<paste your GCP service account JSON here>',
    },
  },
});

// Execute a BigQuery tool
const result = await actions.executeTool({
  toolName: 'bigqueryserviceaccount_run_query',
  connectionName,
  identifier,
  toolInput: {
    query: 'SELECT 1 AS test',
  },
});
console.log(result);
```

```python
import scalekit.client
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

scalekit = scalekit.client.ScalekitClient(
    os.getenv("SCALEKIT_ENV_URL"),
    os.getenv("SCALEKIT_CLIENT_ID"),
    os.getenv("SCALEKIT_CLIENT_SECRET")
)
actions = scalekit.actions

CONNECTOR = "bigqueryserviceaccount"
IDENTIFIER = "user_123"

# Service account JSON (replace with a real one)
SERVICE_ACCOUNT_JSON = """{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "key-id",
  "private_key": "-----BEGIN PRIVATE KEY-----\\nREPLACE_WITH_REAL_PRIVATE_KEY\\n-----END PRIVATE KEY-----\\n",
  "client_email": "my-sa@my-gcp-project.iam.gserviceaccount.com",
  "client_id": "123456789",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/my-sa%40my-gcp-project.iam.gserviceaccount.com",
  "universe_domain": "googleapis.com"
}"""

# Step 1: Get or create connected account with service account credentials
response = actions.get_or_create_connected_account(
    connection_name=CONNECTOR,
    identifier=IDENTIFIER,
    authorization_details={
        "static_auth": {
            "service_account_json": SERVICE_ACCOUNT_JSON
        }
    }
)

account = response.connected_account
print(f"Connected account: {account.id} | Status: {account.status}")

# Step 2: Execute a BigQuery tool
result = actions.execute_tool(
    tool_name="bigqueryserviceaccount_run_query",
    connection_name=CONNECTOR,
    identifier=IDENTIFIER,
    tool_input={
        "query": "SELECT 1 AS test"
    }
)

print("Query result:", result.data)
```

Proxy API Calls
```typescript
// Make a direct BigQuery REST API call via Scalekit proxy
// Base URL is already scoped to: .../bigquery/v2/projects/{project_id}
const result = await actions.request({
  connectionName,
  identifier,
  path: '/datasets',
  method: 'GET',
});
console.log(result);
```

```python
# Make a direct BigQuery REST API call via Scalekit proxy
# Base URL is already scoped to: .../bigquery/v2/projects/{project_id}
result = actions.request(
    connection_name=CONNECTOR,
    identifier=IDENTIFIER,
    path="/datasets",
    method="GET"
)
print(result)
```

Tool list
Use the exact tool names from the Tool list below when you call execute_tool. If you’re not sure which name to use, list the tools available for the current user first.
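For example, bigqueryserviceaccount_dry_run_query returns statistics.totalBytesProcessed, which you can use as a cost guard before submitting the real query. A sketch of such a guard (the helper name and the 10 GiB default are illustrative; the response shape assumes BigQuery's standard job statistics, where totalBytesProcessed is a string):

```python
def within_byte_budget(dry_run_result: dict, max_bytes: int = 10 * 1024**3) -> bool:
    """Return True if the dry-run estimate fits within the byte budget.

    BigQuery reports totalBytesProcessed as a string, so convert before comparing.
    """
    estimated = int(dry_run_result["statistics"]["totalBytesProcessed"])
    return estimated <= max_bytes
```

An agent could call the dry-run tool first, pass its result through this check, and only then invoke bigqueryserviceaccount_run_query or bigqueryserviceaccount_insert_query_job.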
bigqueryserviceaccount_cancel_job — Request cancellation of a running BigQuery job. Returns the final job resource. Cancellation is best-effort and the job may complete before it can be cancelled.
- job_id (string, required): The ID of the job to cancel
- location (string, optional): Geographic location where the job was created, e.g. US or EU

bigqueryserviceaccount_dry_run_query — Validate a SQL query and estimate its cost without executing it. Returns statistics.totalBytesProcessed so you can check byte usage before running the real job.
- query (string, required): SQL query to validate and estimate
- location (string, optional): Geographic location where the job should run, e.g. US or EU
- use_legacy_sql (boolean, optional): Use BigQuery legacy SQL syntax instead of standard SQL

bigqueryserviceaccount_get_dataset — Retrieve metadata for a specific BigQuery dataset, including location, description, labels, access controls, and creation/modification times.
- dataset_id (string, required): The ID of the dataset to retrieve

bigqueryserviceaccount_get_job — Retrieve the status and configuration of a BigQuery job by its job ID. Use this to poll for completion of an async query job submitted via Insert Query Job.
- job_id (string, required): The ID of the job to retrieve
- location (string, optional): Geographic location where the job was created, e.g. US or EU

bigqueryserviceaccount_get_model — Retrieve metadata for a specific BigQuery ML model, including model type, feature columns, label columns, and training run details.
- dataset_id (string, required): The ID of the dataset containing the model
- model_id (string, required): The ID of the model to retrieve

bigqueryserviceaccount_get_query_results — Retrieve the results of a completed BigQuery query job. Supports pagination via page tokens. Use after polling Get Job until status is DONE.
- job_id (string, required): The ID of the completed query job
- location (string, optional): Geographic location where the job was created, e.g. US or EU
- max_results (integer, optional): Maximum number of rows to return per page
- page_token (string, optional): Page token from a previous response to retrieve the next page of results
- timeout_ms (integer, optional): Maximum milliseconds to wait if the query has not yet completed

bigqueryserviceaccount_get_routine — Retrieve the definition and metadata of a specific BigQuery routine (stored procedure or UDF), including its arguments, return type, and body.
- dataset_id (string, required): The ID of the dataset containing the routine
- routine_id (string, required): The ID of the routine to retrieve

bigqueryserviceaccount_get_table — Retrieve metadata and schema for a specific BigQuery table or view, including column names, types, descriptions, and table properties.
- dataset_id (string, required): The ID of the dataset containing the table
- table_id (string, required): The ID of the table or view to retrieve

bigqueryserviceaccount_insert_query_job — Submit an asynchronous BigQuery query job. Returns a job ID that can be used with Get Job or Get Query Results to poll for completion and retrieve results.
- query (string, required): SQL query to execute
- create_disposition (string, optional): Specifies whether the destination table is created if it does not exist
- destination_dataset_id (string, optional): Dataset ID to write query results into
- destination_table_id (string, optional): Table ID to write query results into
- location (string, optional): Geographic location where the job should run, e.g. US or EU
- maximum_bytes_billed (string, optional): Maximum bytes that can be billed for this query; query fails if limit is exceeded
- priority (string, optional): Job priority: INTERACTIVE (default) or BATCH
- use_legacy_sql (boolean, optional): Use BigQuery legacy SQL syntax instead of standard SQL
- write_disposition (string, optional): Specifies the action when the destination table already exists

bigqueryserviceaccount_list_datasets — List all BigQuery datasets in the project. Supports filtering by label and pagination.
- all (boolean, optional): If true, includes hidden datasets in the results
- filter (string, optional): Label filter expression to restrict results, e.g. labels.env:prod
- max_results (integer, optional): Maximum number of datasets to return per page
- page_token (string, optional): Page token from a previous response to retrieve the next page

bigqueryserviceaccount_list_jobs — List BigQuery jobs in the project. Supports filtering by state and projection, and pagination.
- all_users (boolean, optional): If true, returns jobs for all users in the project; otherwise returns only the current user's jobs
- max_results (integer, optional): Maximum number of jobs to return per page
- page_token (string, optional): Page token from a previous response to retrieve the next page
- projection (string, optional): Controls the fields returned: minimal (default) or full
- state_filter (string, optional): Filter jobs by state: done, pending, or running

bigqueryserviceaccount_list_models — List all BigQuery ML models in a dataset, including their model type, training status, and creation time.
- dataset_id (string, required): The ID of the dataset to list models from
- max_results (integer, optional): Maximum number of models to return per page
- page_token (string, optional): Page token from a previous response to retrieve the next page

bigqueryserviceaccount_list_routines — List all stored procedures and user-defined functions (UDFs) in a BigQuery dataset.
- dataset_id (string, required): The ID of the dataset to list routines from
- filter (string, optional): Filter expression to restrict results, e.g. routineType:SCALAR_FUNCTION
- max_results (integer, optional): Maximum number of routines to return per page
- page_token (string, optional): Page token from a previous response to retrieve the next page

bigqueryserviceaccount_list_table_data — Read rows directly from a BigQuery table without writing a SQL query. Supports pagination, row offset, and field selection.
- dataset_id (string, required): The ID of the dataset containing the table
- table_id (string, required): The ID of the table to read rows from
- max_results (integer, optional): Maximum number of rows to return per page
- page_token (string, optional): Page token from a previous response to retrieve the next page
- selected_fields (string, optional): Comma-separated list of fields to return; if omitted all fields are returned
- start_index (integer, optional): Zero-based row index to start reading from

bigqueryserviceaccount_list_tables — List all tables and views in a BigQuery dataset. Supports pagination.
- dataset_id (string, required): The ID of the dataset to list tables from
- max_results (integer, optional): Maximum number of tables to return per page
- page_token (string, optional): Page token from a previous response to retrieve the next page

bigqueryserviceaccount_run_query — Execute a SQL query synchronously against BigQuery and return results immediately. Best for short-running queries. For long-running queries use Insert Query Job instead.
- query (string, required): SQL query to execute
- create_session (boolean, optional): If true, creates a new session and returns a session ID in the response
- dry_run (boolean, optional): If true, validates the query and returns estimated bytes processed without executing
- location (string, optional): Geographic location of the dataset, e.g. US or EU
- max_results (integer, optional): Maximum number of rows to return in the response
- timeout_ms (integer, optional): Maximum milliseconds to wait for query completion before returning
- use_legacy_sql (boolean, optional): Use BigQuery legacy SQL syntax instead of standard SQL
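The async tools compose into a submit-poll-fetch workflow: bigqueryserviceaccount_insert_query_job returns a job ID, bigqueryserviceaccount_get_job is polled until the job's status state reaches DONE, and bigqueryserviceaccount_get_query_results then fetches the rows. A minimal polling helper sketch (hypothetical, not part of the SDK; get_job is any callable returning the job resource, e.g. a small wrapper around execute_tool):

```python
import time

def poll_until_done(get_job, job_id: str, interval_s: float = 2.0,
                    timeout_s: float = 300.0) -> dict:
    """Poll a get_job callable until the BigQuery job resource reports DONE."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = get_job(job_id)
        if job["status"]["state"] == "DONE":
            # DONE does not imply success: callers should still check
            # job["status"].get("errorResult") before fetching results.
            return job
        time.sleep(interval_s)
    raise TimeoutError(f"Job {job_id} did not complete within {timeout_s}s")
```

Choosing a fixed interval keeps the sketch simple; a production loop might back off exponentially and respect the maximum_bytes_billed and timeout_ms parameters described above.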