> **Building with AI coding agents?** If you're using an AI coding agent, install the official Scalekit plugin. It gives your agent full awareness of the Scalekit API — reducing hallucinations and enabling faster, more accurate code generation.
>
> - **Claude Code**: `/plugin marketplace add scalekit-inc/claude-code-authstack` then `/plugin install <auth-type>@scalekit-auth-stack`
> - **GitHub Copilot CLI**: `copilot plugin marketplace add scalekit-inc/github-copilot-authstack` then `copilot plugin install <auth-type>@scalekit-auth-stack`
> - **Codex**: run the bash installer, restart, then open the Plugin Directory and enable `<auth-type>`
> - **Skills CLI** (Windsurf, Cline, 40+ agents): `npx skills add scalekit-inc/skills --list` then `--skill <skill-name>`
>
> `<auth-type>` / `<skill-name>`: `agentkit`, `full-stack-auth`, `mcp-auth`, `modular-sso`, `modular-scim` — [Full setup guide](https://docs.scalekit.com/dev-kit/build-with-ai/)

---

# Databricks Workspace

**Authentication:** Service Principal (OAuth 2.0)
**Categories:** Data, Analytics, Automation

## What you can do

Connect this agent connector to let your agent:

- **Discover schemas** — List all schemas within a catalog using INFORMATION_SCHEMA.SCHEMATA
- **Inspect table constraints** — List PRIMARY KEY and FOREIGN KEY constraints for tables in a schema using INFORMATION_SCHEMA.TABLE_CONSTRAINTS
- **Browse Unity Catalog** — List Unity Catalog catalogs, the schemas within a catalog, and the tables within a schema in the Databricks workspace
- **Run SQL and fetch paginated results** — Execute SQL statements on a SQL warehouse and fetch specific result chunks for paginated statement results
- **List tables and views** — List tables and views in a schema using INFORMATION_SCHEMA.TABLES
- **Inspect columns** — List columns for a table using INFORMATION_SCHEMA.COLUMNS

## Authentication

This connector uses **Service Principal (OAuth 2.0)** authentication.

Before calling this connector from your code, create the Databricks Workspace connection in **AgentKit** > **Connections** and copy its **Connection name** into your code. The value in code must match the dashboard exactly.

## Tool list

Use the exact tool names from the **Tool list** below when you call `execute_tool`. If you're not sure which name to use, list the tools available for the current user first.
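
For orientation, here is a minimal sketch of what a call from application code can look like. Only `execute_tool` and the tool names on this page come from the connector; the wrapper function, environment variable, and connection name value below are hypothetical placeholders, and the real call goes through the Scalekit client you initialise per the AgentKit documentation.

```python
import os

# Must match the Connection name shown in AgentKit > Connections character-for-character.
# The environment variable name and fallback value here are hypothetical.
CONNECTION_NAME = os.environ.get("DATABRICKS_CONNECTION_NAME", "databricks-workspace")


def execute_tool(tool_name: str, tool_input: dict) -> dict:
    """Stand-in for the execute_tool call exposed by your Scalekit AgentKit client.

    Replace the body with the real SDK call, bound to CONNECTION_NAME.
    """
    return {}  # placeholder so the sketch runs end-to-end


# Use the exact tool name from the Tool list below; if unsure, list the available tools first.
clusters = execute_tool("databricksworkspace_clusters_list", {})
```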

### `databricksworkspace_cluster_get`

Get details of a specific Databricks cluster by cluster ID.

Parameters:

- `cluster_id` (`string`, required): The unique identifier of the cluster.

### `databricksworkspace_cluster_start`

Start a terminated Databricks cluster by cluster ID.

Parameters:

- `cluster_id` (`string`, required): The unique identifier of the cluster to start.

### `databricksworkspace_cluster_terminate`

Terminate a Databricks cluster by cluster ID. Termination releases the cluster's compute resources; the cluster configuration is retained so the cluster can be restarted later.

Parameters:

- `cluster_id` (`string`, required): The unique identifier of the cluster to terminate.

### `databricksworkspace_clusters_list`

List all clusters in the Databricks workspace.
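
As a usage sketch, the cluster tools above compose into a simple readiness check: list clusters, find the one you need, and start it if it is terminated. The cluster name below is hypothetical, the response keys assume the connector passes through the Databricks Clusters API shape, and the `execute_tool` stub stands in for your AgentKit client call.

```python
def execute_tool(tool_name: str, tool_input: dict) -> dict:
    """Stand-in for the AgentKit execute_tool call; replace with your SDK's real call."""
    return {}  # placeholder response


# Find a cluster by (hypothetical) name and make sure it is running before submitting work.
clusters = execute_tool("databricksworkspace_clusters_list", {}).get("clusters", [])
target = next((c for c in clusters if c.get("cluster_name") == "analytics-prod"), None)

if target and target.get("state") == "TERMINATED":
    execute_tool("databricksworkspace_cluster_start", {"cluster_id": target["cluster_id"]})
elif target:
    # Already provisioned or running: fetch full details instead.
    details = execute_tool("databricksworkspace_cluster_get", {"cluster_id": target["cluster_id"]})
```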

### `databricksworkspace_information_schema_columns`

List columns for a table using INFORMATION_SCHEMA.COLUMNS. Returns column name, data type, nullability, numeric precision/scale, max char length, and comment.

Parameters:

- `catalog` (`string`, required): The catalog containing the table.
- `schema` (`string`, required): The schema containing the table.
- `table` (`string`, required): The table to list columns for.
- `warehouse_id` (`string`, required): The ID of the SQL warehouse to run the query on.

### `databricksworkspace_information_schema_schemata`

List all schemas within a catalog using INFORMATION_SCHEMA.SCHEMATA. Used for schema discovery during setup.

Parameters:

- `catalog` (`string`, required): The catalog to list schemas from.
- `warehouse_id` (`string`, required): The ID of the SQL warehouse to run the query on.

### `databricksworkspace_information_schema_table_constraints`

List PRIMARY KEY and FOREIGN KEY constraints for tables in a schema using INFORMATION_SCHEMA.TABLE_CONSTRAINTS. Used to auto-detect join keys.

Parameters:

- `catalog` (`string`, required): The catalog containing the schema.
- `schema` (`string`, required): The schema to list constraints from.
- `warehouse_id` (`string`, required): The ID of the SQL warehouse to run the query on.

### `databricksworkspace_information_schema_tables`

List tables and views in a schema using INFORMATION_SCHEMA.TABLES. Returns table name, type (MANAGED, EXTERNAL, VIEW, etc.), and comment for schema discovery.

Parameters:

- `catalog` (`string`, required): The catalog to query INFORMATION_SCHEMA from.
- `schema` (`string`, required): The schema to list tables from.
- `warehouse_id` (`string`, required): The ID of the SQL warehouse to run the query on.
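
Together, the four INFORMATION_SCHEMA tools above support a schema-discovery pass: enumerate schemas, then tables, then columns and constraints. The sketch below chains them with hypothetical catalog, schema, table, and warehouse identifiers; the `execute_tool` stub stands in for your AgentKit client call.

```python
def execute_tool(tool_name: str, tool_input: dict) -> dict:
    """Stand-in for the AgentKit execute_tool call; replace with your SDK's real call."""
    return {}  # placeholder response


# Hypothetical identifiers; substitute your own catalog, schema, table, and warehouse.
CATALOG, WAREHOUSE_ID = "main", "1234567890abcdef"

# 1. Which schemas exist in the catalog?
schemas = execute_tool(
    "databricksworkspace_information_schema_schemata",
    {"catalog": CATALOG, "warehouse_id": WAREHOUSE_ID},
)

# 2. Which tables and views live in a schema of interest?
tables = execute_tool(
    "databricksworkspace_information_schema_tables",
    {"catalog": CATALOG, "schema": "sales", "warehouse_id": WAREHOUSE_ID},
)

# 3. Column details and join keys for one table.
columns = execute_tool(
    "databricksworkspace_information_schema_columns",
    {"catalog": CATALOG, "schema": "sales", "table": "orders", "warehouse_id": WAREHOUSE_ID},
)
constraints = execute_tool(
    "databricksworkspace_information_schema_table_constraints",
    {"catalog": CATALOG, "schema": "sales", "warehouse_id": WAREHOUSE_ID},
)
```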

### `databricksworkspace_job_get`

Get details of a specific Databricks job by job ID.

Parameters:

- `job_id` (`integer`, required): The unique identifier of the job.

### `databricksworkspace_job_run_now`

Trigger an immediate run of a Databricks job by job ID.

Parameters:

- `job_id` (`integer`, required): The unique identifier of the job to run.

### `databricksworkspace_job_runs_list`

List all job runs in the Databricks workspace, optionally filtered by job ID.

Parameters:

- `job_id` (`integer`, optional): Filter runs by a specific job ID. If omitted, returns runs for all jobs.
- `limit` (`integer`, optional): The number of runs to return. Defaults to 20. Maximum is 1000.
- `offset` (`integer`, optional): The offset of the first run to return.

### `databricksworkspace_jobs_list`

List all jobs in the Databricks workspace.

Parameters:

- `limit` (`integer`, optional): The number of jobs to return. Defaults to 20. Maximum is 100.
- `offset` (`integer`, optional): The offset of the first job to return.
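
Because both job listing tools page with `limit` and `offset`, a caller typically loops until a short page comes back. The sketch below does that, then triggers a run for a hypothetical job ID; the response key and the `execute_tool` stub are assumptions to replace with your AgentKit client call.

```python
def execute_tool(tool_name: str, tool_input: dict) -> dict:
    """Stand-in for the AgentKit execute_tool call; replace with your SDK's real call."""
    return {}  # placeholder response


# Page through jobs 20 at a time using limit/offset.
offset, jobs = 0, []
while True:
    page = execute_tool("databricksworkspace_jobs_list", {"limit": 20, "offset": offset})
    batch = page.get("jobs", [])  # assumes the Databricks Jobs API response shape
    jobs.extend(batch)
    if len(batch) < 20:
        break
    offset += 20

# Trigger one job and then inspect its recent runs (the job_id is hypothetical).
execute_tool("databricksworkspace_job_run_now", {"job_id": 42})
runs = execute_tool("databricksworkspace_job_runs_list", {"job_id": 42, "limit": 5})
```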

### `databricksworkspace_scim_me_get`

Retrieve information about the currently authenticated service principal in the Databricks workspace.

### `databricksworkspace_scim_users_list`

List all users in the Databricks workspace using the SCIM v2 API.

Parameters:

- `count` (`integer`, optional): Maximum number of results to return per page.
- `filter` (`string`, optional): SCIM filter expression to narrow results (e.g. `userName eq "user@example.com"`).
- `startIndex` (`integer`, optional): 1-based index of the first result to return. Used for pagination.
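
A quick sketch of both lookup styles, assuming the connector returns the standard SCIM v2 envelope (entries under `Resources`): a targeted `filter` query and a `startIndex`/`count` pagination loop. The email address and the `execute_tool` stub are placeholders.

```python
def execute_tool(tool_name: str, tool_input: dict) -> dict:
    """Stand-in for the AgentKit execute_tool call; replace with your SDK's real call."""
    return {}  # placeholder response


# Targeted lookup by userName with a SCIM filter (hypothetical address).
match = execute_tool(
    "databricksworkspace_scim_users_list",
    {"filter": 'userName eq "ada@example.com"'},
)

# Page through all users 50 at a time; startIndex is 1-based.
start = 1
while True:
    page = execute_tool(
        "databricksworkspace_scim_users_list", {"startIndex": start, "count": 50}
    )
    users = page.get("Resources", [])  # SCIM v2 list responses nest entries under "Resources"
    if not users:
        break
    start += len(users)
```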

### `databricksworkspace_secrets_scopes_list`

List all secret scopes available in the Databricks workspace.

### `databricksworkspace_sql_statement_cancel`

Cancel a running SQL statement by its statement ID.

Parameters:

- `statement_id` (`string`, required): The ID of the SQL statement to cancel.

### `databricksworkspace_sql_statement_execute`

Execute a SQL statement on a Databricks SQL warehouse and return the results.

Parameters:

- `statement` (`string`, required): The SQL statement to execute.
- `warehouse_id` (`string`, required): The ID of the SQL warehouse to execute the statement on.
- `catalog` (`string`, optional): The catalog to use for the statement execution.
- `schema` (`string`, optional): The schema to use for the statement execution.

### `databricksworkspace_sql_statement_get`

Get the status and results of a previously executed SQL statement by its statement ID.

Parameters:

- `statement_id` (`string`, required): The ID of the SQL statement to retrieve.

### `databricksworkspace_sql_statement_result_chunk_get`

Fetch a specific result chunk for a paginated SQL statement result. Use when a statement result has multiple chunks (large result sets).

Parameters:

- `chunk_index` (`integer`, required): The index of the result chunk to fetch (0-based).
- `statement_id` (`string`, required): The ID of the SQL statement.
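
The three statement tools above form one flow: submit with `sql_statement_execute`, poll with `sql_statement_get` until the state leaves PENDING/RUNNING, then fetch any additional chunks with `sql_statement_result_chunk_get`. The sketch below assumes the connector returns the Databricks SQL Statement Execution API shape (`statement_id`, `status.state`, `manifest.total_chunk_count`) and uses hypothetical warehouse and query values; the `execute_tool` stub stands in for your AgentKit client call.

```python
import time


def execute_tool(tool_name: str, tool_input: dict) -> dict:
    """Stand-in for the AgentKit execute_tool call; replace with your SDK's real call."""
    return {}  # placeholder response


# Submit a statement (hypothetical warehouse ID and query).
submitted = execute_tool(
    "databricksworkspace_sql_statement_execute",
    {
        "statement": "SELECT * FROM sales.orders LIMIT 10000",
        "warehouse_id": "1234567890abcdef",
        "catalog": "main",
    },
)
statement_id = submitted.get("statement_id", "")

# Poll until the statement finishes.
while submitted.get("status", {}).get("state") in ("PENDING", "RUNNING"):
    time.sleep(2)
    submitted = execute_tool(
        "databricksworkspace_sql_statement_get", {"statement_id": statement_id}
    )

# Walk any extra result chunks; chunk 0 is assumed to arrive with the statement result.
total_chunks = submitted.get("manifest", {}).get("total_chunk_count", 1)
for chunk_index in range(1, total_chunks):
    chunk = execute_tool(
        "databricksworkspace_sql_statement_result_chunk_get",
        {"statement_id": statement_id, "chunk_index": chunk_index},
    )
```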

### `databricksworkspace_sql_warehouse_get`

Get details of a specific Databricks SQL warehouse by its ID.

Parameters:

- `warehouse_id` (`string`, required): The ID of the SQL warehouse to retrieve.

### `databricksworkspace_sql_warehouse_start`

Start a stopped Databricks SQL warehouse by its ID.

Parameters:

- `warehouse_id` (`string`, required): The ID of the SQL warehouse to start.

### `databricksworkspace_sql_warehouse_stop`

Stop a running Databricks SQL warehouse by its ID.

Parameters:

- `warehouse_id` (`string`, required): The ID of the SQL warehouse to stop.

### `databricksworkspace_sql_warehouses_list`

List all SQL warehouses available in the Databricks workspace.
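
A small sketch of the warehouse tools in combination: find a warehouse by name, start it if stopped, and read its state back. The warehouse name is hypothetical and the response keys assume the Databricks SQL Warehouses API shape; the `execute_tool` stub stands in for your AgentKit client call.

```python
def execute_tool(tool_name: str, tool_input: dict) -> dict:
    """Stand-in for the AgentKit execute_tool call; replace with your SDK's real call."""
    return {}  # placeholder response


# Pick a warehouse by (hypothetical) name and make sure it is running before executing SQL.
warehouses = execute_tool("databricksworkspace_sql_warehouses_list", {}).get("warehouses", [])
target = next((w for w in warehouses if w.get("name") == "Serverless Starter"), None)

if target and target.get("state") == "STOPPED":
    execute_tool("databricksworkspace_sql_warehouse_start", {"warehouse_id": target["id"]})
if target:
    status = execute_tool("databricksworkspace_sql_warehouse_get", {"warehouse_id": target["id"]})
```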

### `databricksworkspace_unity_catalog_catalogs_list`

List all Unity Catalogs accessible to the service principal in the Databricks workspace.

### `databricksworkspace_unity_catalog_schemas_list`

List all schemas within a Unity Catalog in the Databricks workspace.

Parameters:

- `catalog_name` (`string`, required): The name of the catalog to list schemas from.

### `databricksworkspace_unity_catalog_tables_list`

List all tables and views within a schema in a Unity Catalog in the Databricks workspace.

Parameters:

- `catalog_name` (`string`, required): The name of the catalog containing the schema.
- `schema_name` (`string`, required): The name of the schema to list tables from.
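
The three Unity Catalog tools naturally nest: catalogs contain schemas, and schemas contain tables. The walk below assumes the responses use the Databricks Unity Catalog list shapes (`catalogs`, `schemas`, and `tables` arrays with `name` fields); the `execute_tool` stub stands in for your AgentKit client call.

```python
def execute_tool(tool_name: str, tool_input: dict) -> dict:
    """Stand-in for the AgentKit execute_tool call; replace with your SDK's real call."""
    return {}  # placeholder response


# Walk the Unity Catalog hierarchy: catalogs -> schemas -> tables.
catalogs = execute_tool("databricksworkspace_unity_catalog_catalogs_list", {})
for catalog in catalogs.get("catalogs", []):
    schemas = execute_tool(
        "databricksworkspace_unity_catalog_schemas_list",
        {"catalog_name": catalog["name"]},
    )
    for schema in schemas.get("schemas", []):
        tables = execute_tool(
            "databricksworkspace_unity_catalog_tables_list",
            {"catalog_name": catalog["name"], "schema_name": schema["name"]},
        )
```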


---

## More Scalekit documentation

| Resource | What it contains | When to use it |
|----------|-----------------|----------------|
| [/llms.txt](/llms.txt) | Structured index with routing hints per product area | Start here — find which documentation set covers your topic before loading full content |
| [/llms-full.txt](/llms-full.txt) | Complete documentation for all Scalekit products in one file | Use when you need exhaustive context across multiple products or when the topic spans several areas |
| [sitemap-0.xml](https://docs.scalekit.com/sitemap-0.xml) | Full URL list of every documentation page | Use to discover specific page URLs you can fetch for targeted, page-level answers |
