Connect your AI Agents to Google BigQuery in minutes

Available tools
list_datasets
List datasets in a BigQuery project. Returns dataset IDs, descriptions, and locations. Use list_projects first to get available project IDs.
get_dataset
Get detailed information about a specific BigQuery dataset including description, location, labels, and access controls. Use list_datasets to find available dataset IDs.
create_dataset
Create a new dataset in a BigQuery project. Specify location (US, EU, etc.) at creation time — it cannot be changed later. Use list_projects to find available project IDs.
update_dataset
Update a BigQuery dataset's description, friendly name, labels, or default table expiration. Location cannot be changed. Use list_datasets to find available dataset IDs.
delete_dataset
Delete a BigQuery dataset. Set delete_contents=true to delete all tables in the dataset. Without it, the dataset must be empty. Use list_datasets to find available dataset IDs.
list_jobs
List BigQuery jobs in a project. Filter by state (done, pending, running) and projection level. Returns job IDs, types, status, and statistics.
get_job
Get detailed information about a specific BigQuery job including status, configuration, and statistics. Use list_jobs to find job IDs, or use the job_id from a run_query response.
cancel_job
Cancel a running BigQuery job. The job may still complete if it finishes before the cancellation takes effect. Use list_jobs with state_filter='running' to find cancellable jobs.
list_projects
List all BigQuery projects accessible to the authenticated user. Returns project IDs, names, and numeric IDs. Use this to discover available projects before querying datasets or tables.
run_query
Execute a SQL query in BigQuery and return results. Supports GoogleSQL (default) and legacy SQL. If the query doesn't complete within the timeout, use get_query_results with the returned job_id to poll for results (an example call sequence appears after this tool list).
get_query_results
Get results from a previously executed query using its job ID. Use this to poll for long-running query completion or to paginate through large result sets. The job_id comes from a previous run_query response.
list_tables
List tables in a BigQuery dataset. Returns table IDs, types, and creation times. Use list_datasets first to get available dataset IDs.
get_table
Get detailed information about a BigQuery table including schema, row count, size, and partitioning config. Use list_tables to find available table IDs.
create_table
Create a new table in a BigQuery dataset with a typed schema. Supports STRING, INTEGER, FLOAT, BOOLEAN, TIMESTAMP, RECORD, and more. Use list_datasets to find available dataset IDs.
update_table
Update a BigQuery table's description, friendly name, labels, or schema (add new columns only — cannot remove/rename). Use list_tables to find available table IDs.
delete_table
Delete a BigQuery table permanently. This cannot be undone. Use list_tables to find available table IDs.
list_table_data
Read rows from a BigQuery table. Returns data as column:value dicts with pagination support. For filtered/aggregated data, use run_query instead. Use list_tables to find table IDs.
insert_table_data
Stream-insert rows into a BigQuery table. Each row is a dict of column_name:value pairs. Returns insert errors if any rows fail. Use list_tables to find table IDs.
copy_table
Copy a BigQuery table to a new location (within/across datasets or projects). This is an async operation that returns a job ID — use get_job to poll for completion. Source and destination datasets must be in the same location. Use list_tables to find available table IDs.
export_table_to_gcs
Export a BigQuery table to Google Cloud Storage in CSV, JSON, Avro, or Parquet format. This is an async operation — use get_job to poll for completion. For tables larger than 1 GB, use a wildcard in the URI (gs://bucket/file-*.csv). CSV does not support nested data — use JSON, Avro, or Parquet instead. Use list_tables to find available table IDs.
validate_credential
Validate Google BigQuery credentials. Verifies credentials during setup.
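A typical session chains several of these tools together: discover a project, find a dataset, run a query, then poll for results. The sketch below shows the calls an agent might make in order; the argument names (project_id, query, job_id, max_results) and the sample values are illustrative assumptions, not a normative request schema.

[
  // 1. Discover projects the authenticated user can access
  { "tool": "list_projects", "arguments": {} },

  // 2. List datasets in the chosen project ("my-project" is a placeholder)
  { "tool": "list_datasets", "arguments": { "project_id": "my-project" } },

  // 3. Run a query; if it doesn't finish within the timeout, the response includes a job_id
  {
    "tool": "run_query",
    "arguments": {
      "project_id": "my-project",
      "query": "SELECT status, COUNT(*) AS order_count FROM `my-project.sales.orders` GROUP BY status"
    }
  },

  // 4. Poll for completion or paginate results using that job_id (value is hypothetical)
  { "tool": "get_query_results", "arguments": { "job_id": "job_abc123", "max_results": 100 } }
]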

How to set up Merge Agent Handler
In an mcp.json file, add the configuration below, and restart Cursor.
Learn more in the official documentation ↗
{
  "mcpServers": {
    "agent-handler": {
      "url": "https://ah-api-develop.merge.dev/api/v1/tool-packs/{TOOL_PACK_ID}/registered-users/{REGISTERED_USER_ID}/mcp",
      "headers": {
        "Authorization": "Bearer yMt*****"
      }
    }
  }
}
Open your Claude Desktop configuration file and add the server configuration below. You'll also need to restart the application for the changes to take effect.
Make sure Claude is using Node v20 or later.
Learn more in the official documentation ↗
{
  "mcpServers": {
    "agent-handler": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://ah-api-develop.merge.dev/api/v1/tool-packs/{TOOL_PACK_ID}/registered-users/{REGISTERED_USER_ID}/mcp",
        "--header",
        "Authorization: Bearer ${AUTH_TOKEN}"
      ],
      "env": {
        "AUTH_TOKEN": "yMt*****"
      }
    }
  }
}
Open your Windsurf MCP configuration file and add the server configuration below.
Then click the refresh button in the top right of the Manage MCP servers page, or the box icon in the top right of the chat box.
Learn more in the official documentation ↗
{
  "mcpServers": {
    "agent-handler": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://ah-api.merge.dev/api/v1/tool-packs/<tool-pack-id>/registered-users/<registered-user-id>/mcp",
        "--header",
        "Authorization: Bearer ${AUTH_TOKEN}"
      ],
      "env": {
        "AUTH_TOKEN": "<ah-production-access-key>"
      }
    }
  }
}
In the Command Palette (Cmd+Shift+P on macOS, Ctrl+Shift+P on Windows), run "MCP: Open User Configuration".
You can then add the configuration below and click "Start" directly beneath the server entry. Enter the auth token when prompted.
Learn more in the official documentation ↗
{
  "inputs": [
    {
      "type": "promptString",
      "id": "agent-handler-auth",
      "description": "Agent Handler AUTH_TOKEN", // enter "yMt*****" when prompted
      "password": true
    }
  ],
  "servers": {
    "agent-handler": {
      "type": "stdio",
      "command": "npx",
      "args": [
        "-y",
        "mcp-remote@latest",
        "https://ah-api-develop.merge.dev/api/v1/tool-packs/{TOOL_PACK_ID}/registered-users/{REGISTERED_USER_ID}/mcp",
        "--header",
        "Authorization: Bearer ${input:agent-handler-auth}"
      ]
    }
  }
}
FAQs on using Merge's Google BigQuery MCP server
What is a Google BigQuery MCP?
It's an MCP server that connects your agents directly to Google BigQuery's cloud data warehouse via tools. Your agents can invoke these tools to run SQL queries, explore dataset schemas, retrieve table metadata, list available resources, and more.
Google offers an official BigQuery MCP server, but you can also use one from a third-party platform, like Merge Agent Handler.
How can I use the Google BigQuery MCP server?
The use cases naturally depend on the agent you've built, but here are a few common ones:
- On-demand analytics from conversational agents: When a user asks a business question in a chat interface, an agent translates it into SQL, runs the query against BigQuery, and returns a formatted summary with key numbers and trends without requiring the user to open a BI tool
- Pipeline validation after ETL jobs: After a data pipeline completes, an agent queries BigQuery to verify row counts, check for nulls in required fields, and flag anomalies before downstream reports or dashboards are generated (a sketch of such a check appears after this list)
- Scheduled business metric reporting: On a daily or weekly schedule, an agent runs a set of predefined SQL queries against BigQuery, formats the results into a readable digest, and posts them to the relevant Slack channel or sends them as an email summary
- Schema discovery for data onboarding: When a new dataset is added to a project, an agent lists the available tables, retrieves schema details for each, and assembles a data dictionary document in Notion or Google Docs for the analytics team
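As a concrete sketch of the pipeline-validation case above, the agent's check could be a single run_query call like the following. The table, column names, and argument schema are illustrative assumptions, and the _PARTITIONDATE filter assumes an ingestion-time-partitioned table.

{
  "tool": "run_query",
  "arguments": {
    "project_id": "my-project",
    "query": "SELECT COUNT(*) AS row_count, COUNTIF(customer_id IS NULL) AS null_customer_ids FROM `my-project.analytics.orders` WHERE _PARTITIONDATE = CURRENT_DATE()"
  }
}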
What are popular tools for Google BigQuery's MCP server?
Here are some of the most commonly used tools:
- execute_sql: runs a SQL statement against BigQuery with read-write access, supporting DML and DDL operations. Use this when an agent needs to insert, update, or transform data as part of an automated pipeline or data preparation workflow
- execute_sql_readonly: runs a read-only SQL query against BigQuery, blocking any mutations. Good for analytics agents that need to query and report on data without any risk of modifying underlying tables (see the example call after this list)
- list_dataset_ids: returns the dataset IDs available in a BigQuery project. Call this when an agent needs to discover what data is available before running queries or building a schema map for a new workflow
- list_table_ids: lists the tables within a specified BigQuery dataset. Helpful when an agent is navigating a dataset's structure to identify the right table before querying or performing schema validation
- get_dataset_info: retrieves metadata for a specific BigQuery dataset, including location, creation time, and access configuration. Use this when an agent needs to confirm dataset properties before executing operations against it
- get_table_info: returns detailed information about a table, including its schema, row count, and partitioning settings. Useful for agents building dynamic queries or verifying that a table's structure matches an expected format before processing
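For instance, a read-only analytics call might look like the sketch below. The tool name comes from the list above, but the argument key ("sql") and overall request shape can vary by server version, so treat them as assumptions.

{
  "tool": "execute_sql_readonly",
  "arguments": {
    "sql": "SELECT product, SUM(revenue) AS total_revenue FROM `my-project.sales.daily_sales` GROUP BY product ORDER BY total_revenue DESC LIMIT 10"
  }
}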
What makes Merge Agent Handler's Google BigQuery MCP server better than alternative Google BigQuery MCP servers?
Google has an official BigQuery MCP server, but running it through Merge Agent Handler adds enterprise controls that matter when agents are touching production data:
- Enterprise-grade security and DLP: Merge Agent Handler includes built-in data loss prevention controls that let you block or redact sensitive fields before they reach an agent. For BigQuery, this means you can prevent raw query results, table contents, and schema details from being surfaced to agents that don't need full access to the underlying data
- Managed authentication and credentials: Merge stores and manages your BigQuery OAuth credentials on your behalf. You never inject service account keys or tokens into agent configuration, and you don't need to handle re-authorization as credentials rotate
- Real-time observability and audit trail: Every tool call against BigQuery is logged: which query ran, which dataset or table was targeted, and what the response contained. Data and security teams can audit exactly what an agent read or wrote without any custom instrumentation
- Tool Packs and controlled access: Tool Packs let you bundle specific BigQuery tools with tools from other connectors into a single MCP endpoint, scoped to a specific use case. An agent gets exactly the tools it needs, nothing more
How can I start using Merge Agent Handler's Google BigQuery MCP server?
You can take the following steps:
1. Create or log into your Merge Agent Handler account and navigate to Tool Packs (collections of connector tools scoped to a specific use case).
2. Create a new Tool Pack, then find and enable the Google BigQuery connector. Match the tools to your use case: execute_sql_readonly and the list_* and get_* tools are enough for reporting and discovery agents, while execute_sql is needed for agents that write or transform data.
3. Add a Registered User inside the Tool Pack. This is the identity context under which your agent operates. Merge generates a unique MCP URL scoped to this user once it's created.
4. From the Registered User detail page, authenticate Google BigQuery by completing the OAuth flow. Merge stores and manages the credentials going forward.
5. Copy the MCP URL from the Tool Pack detail page and generate an API key from Settings. You'll need both to connect your agent.
6. Add the MCP server to your agent or IDE using the MCP URL and API key. Your Google BigQuery tools are now accessible through that endpoint.
Ready to try it out?
Whether you're an engineer experimenting with agents or a product manager looking to add tools, you can get started for free now.