API Reference
Powerful data platform API for datasets, queries, and analytics. Hotdata exposes a /v1/* HTTP API at api.hotdata.dev.
Authentication
Most /v1/* endpoints require two headers:
Authorization: Bearer <api_token>
X-Workspace-Id: <workspace_public_id>
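As a sketch, a client can attach both headers to every request. The token and workspace ID values below are placeholders:

```python
# Build the headers required by most /v1/* endpoints.
# The token and workspace ID here are placeholder values.
def auth_headers(api_token: str, workspace_id: str) -> dict:
    return {
        "Authorization": f"Bearer {api_token}",
        "X-Workspace-Id": workspace_id,
    }

headers = auth_headers("hd_example_token", "ws_abc123")
```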
Workspaces
Workspace management
List workspaces
Endpoint: GET /v1/workspaces
Query parameters:
- organization_public_id (string) — Filter by organization. Defaults to the user's current organization.
Response: 200 Successful response
{
"ok": true,
"workspaces": [
{
"public_id": "ws_abc123",
"name": "production-analytics",
"active": true,
"favorite": true,
"provision_status": "provisioned",
"namespace": "workspace-ws_abc123"
}
]
}
Errors: 401 Missing or invalid authorization, 403 Forbidden — not a member of the organization or workspace token used, 404 Organization not found
Create a workspace
Endpoint: POST /v1/workspaces
Request body:
- name (string) — required. Name for the new workspace.
- organization_public_id (string) — Target organization. Defaults to the user's current organization.
{
"name": "production-analytics",
"organization_public_id": "string"
}
Response: 201 Workspace created
{
"ok": true,
"workspace": {
"public_id": "ws_abc123",
"name": "production-analytics",
"provision_status": "pending",
"namespace": "workspace-ws_abc123"
}
}
Errors: 400 Invalid JSON body, 401 Missing or invalid authorization, 403 Forbidden, 404 Organization not found, 422 Validation error (e.g. name required)
Query
Execute SQL queries against connected data sources. Use standard Postgres-compatible SQL syntax to query and join across multiple connections and datasets. Results are returned inline and also persisted asynchronously for later retrieval.
Execute SQL query
Endpoint: POST /v1/query
Request body:
- async (boolean) — When true, execute the query asynchronously and return a query run ID for polling via GET /v1/query-runs/{id}. The query results can be retrieved via GET /v1/results/{id} once the query run status is "succeeded".
- async_after_ms (integer, null) — If set with async=true, wait up to this many milliseconds for the query to complete synchronously before returning an async response. Minimum: 1000. Ignored if async is false.
- sql (string) — required
{
"async": true,
"async_after_ms": null,
"sql": "string"
}
Response: 200 Query executed successfully
{
"columns": [
"string"
],
"execution_time_ms": 0,
"nullable": [
true
],
"query_run_id": "string",
"result_id": null,
"row_count": 0,
"rows": [
[
null
]
],
"warning": null
}
Response: 202 Query submitted asynchronously
{
"query_run_id": "string",
"reason": null,
"status": "string",
"status_url": "string"
}
Errors: 400 Invalid request, 500 Internal server error
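The async flow above can be sketched as a polling loop. This is an illustration, not an official client; the function takes a fetcher callable so the HTTP transport (and its stubbing in tests) stays out of the control flow:

```python
import time

# Poll GET /v1/query-runs/{id} until the run reaches a terminal status.
# `fetch_status` is any callable returning the query-run JSON as a dict,
# so the HTTP layer can be swapped out or stubbed.
def wait_for_query_run(query_run_id, fetch_status, interval_s=1.0, timeout_s=300.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        run = fetch_status(query_run_id)
        if run["status"] in ("succeeded", "failed"):
            return run
        time.sleep(interval_s)
    raise TimeoutError(f"query run {query_run_id} did not finish in {timeout_s}s")

# Stubbed example: the run succeeds on the second poll.
responses = iter([{"status": "running"},
                  {"status": "succeeded", "result_id": "res_1"}])
final = wait_for_query_run("qr_1", lambda _id: next(responses), interval_s=0.0)
```

Once the final status is "succeeded", fetch the persisted rows via GET /v1/results/{result_id}.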
Connections
Manage connections to remote databases (Postgres, MySQL, Snowflake, BigQuery, DuckDB, Iceberg, etc.). Creating a connection registers the source and triggers automatic schema discovery. Each connection's tables are cached locally for fast query performance.
List connections
Endpoint: GET /v1/connections
Response: 200 List of connections
{
"connections": [
{
"id": "string",
"name": "string",
"source_type": "string"
}
]
}
Create connection
Endpoint: POST /v1/connections
Request body:
- config (object) — required. Connection configuration object. Fields vary by source type (host, port, database, etc.).
- name (string) — required
- secret_id (string, null) — Optional reference to a secret by ID (e.g. "secr_abc123"). If provided, this secret will be used for authentication. Mutually exclusive with secret_name.
- secret_name (string, null) — Optional reference to a secret by name. If provided, this secret will be used for authentication. Mutually exclusive with secret_id.
- skip_discovery (boolean) — If true, skip automatic schema discovery after registering the connection. The connection will be created but no tables will be discovered. You can run discovery later via the refresh endpoint.
- source_type (string) — required
{
"config": {},
"name": "string",
"secret_id": null,
"secret_name": null,
"skip_discovery": true,
"source_type": "string"
}
Response: 201 Connection created
{
"discovery_error": null,
"discovery_status": "success",
"id": "string",
"name": "string",
"source_type": "string",
"tables_discovered": 0
}
Errors: 400 Invalid request, 409 Connection already exists
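For illustration, a create-connection body for a Postgres source might be assembled as follows. The host, database, and secret names are placeholders, and the mutual-exclusivity check on secret_id/secret_name is enforced client-side here only as a convenience:

```python
# Assemble a body for POST /v1/connections. Optional secret fields are
# omitted rather than sent as null when not provided.
def connection_body(name, source_type, config, secret_id=None, secret_name=None,
                    skip_discovery=False):
    if secret_id is not None and secret_name is not None:
        raise ValueError("secret_id and secret_name are mutually exclusive")
    body = {"name": name, "source_type": source_type, "config": config,
            "skip_discovery": skip_discovery}
    if secret_id is not None:
        body["secret_id"] = secret_id
    if secret_name is not None:
        body["secret_name"] = secret_name
    return body

body = connection_body(
    name="orders-db",
    source_type="postgres",
    config={"host": "db.example.com", "port": 5432, "database": "orders"},
    secret_name="orders-db-password",
)
```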
Get connection
Endpoint: GET /v1/connections/{connection_id}
Path parameters:
- connection_id (string) — Connection ID
Response: 200 Connection details
{
"id": "string",
"name": "string",
"source_type": "string",
"synced_table_count": 0,
"table_count": 0
}
Errors: 404 Connection not found
Delete connection
Endpoint: DELETE /v1/connections/{connection_id}
Path parameters:
- connection_id (string) — Connection ID
Response: 204 Connection deleted
Errors: 404 Connection not found
Purge connection cache
Endpoint: DELETE /v1/connections/{connection_id}/cache
Path parameters:
- connection_id (string) — Connection ID
Response: 204 Cache purged
Errors: 404 Connection not found
Check connection health
Endpoint: GET /v1/connections/{connection_id}/health
Path parameters:
- connection_id (string) — Connection ID
Response: 200 Connection health status
{
"connection_id": "string",
"error": null,
"healthy": true,
"latency_ms": 0
}
Errors: 404 Connection not found
Purge table cache
Endpoint: DELETE /v1/connections/{connection_id}/tables/{schema}/{table}/cache
Path parameters:
- connection_id (string) — Connection ID
- schema (string) — Schema name
- table (string) — Table name
Response: 204 Table cache purged
Errors: 404 Not found
Get table profile
Endpoint: GET /v1/connections/{connection_id}/tables/{schema}/{table}/profile
Path parameters:
- connection_id (string) — Connection ID
- schema (string) — Schema name
- table (string) — Table name
Response: 200 Column profile statistics
{
"columns": [
{
"cardinality": 0,
"data_type": "string",
"name": "string",
"null_count": 0,
"null_percentage": 0,
"profile": null
}
],
"connection": "string",
"row_count": 0,
"schema": "string",
"synced_at": null,
"table": "string"
}
Errors: 404 Table or profile not found
Connection Types
Discover available connection types and their configuration requirements. Each type describes the config parameters and authentication needed to create a connection.
List connection types
Endpoint: GET /v1/connection-types
Response: 200 Available connection types
{
"connection_types": [
{
"label": "string",
"name": "string"
}
]
}
Get connection type details
Endpoint: GET /v1/connection-types/{name}
Path parameters:
- name (string) — Connection type name (e.g. postgres, mysql, snowflake)
Response: 200 Connection type details
{
"auth": null,
"config_schema": null,
"label": "string",
"name": "string"
}
Errors: 404 Unknown connection type
Refresh
Refresh schema metadata and table data for connections. Schema refresh re-discovers tables and column definitions from the remote source. Data refresh re-syncs cached data with the latest from the remote tables.
Refresh connection data
Endpoint: POST /v1/refresh
Request body:
- async (boolean) — When true, submit the refresh as a background job and return immediately with a job ID for status polling. Only supported for data refresh operations.
- connection_id (string, null)
- data (boolean)
- dataset_id (string, null)
- include_uncached (boolean) — Controls whether uncached tables are included in connection-wide data refresh.
  - false (default): Only refresh tables that already have cached data. This is the common case for keeping existing data up to date.
  - true: Also sync tables that haven't been cached yet, essentially performing an initial sync for any new tables discovered since the connection was created.
  This field only applies to connection-wide data refresh (when data=true and table_name is not specified). It has no effect on single-table refresh or schema refresh operations.
- schema_name (string, null)
- table_name (string, null)
{
"async": true,
"connection_id": null,
"data": true,
"dataset_id": null,
"include_uncached": true,
"schema_name": null,
"table_name": null
}
Response: 200 Refresh completed
null
Errors: 400 Invalid request, 404 Connection not found
Information Schema
Inspect table and column metadata across all connections. Returns schema information including column names, data types, and sync status for every discovered table.
List tables
Endpoint: GET /v1/information_schema
Query parameters:
- connection_id (string) — Filter by connection ID
- schema (string) — Filter by schema name (supports % wildcards)
- table (string) — Filter by table name (supports % wildcards)
- include_columns (boolean) — Include column definitions (default: false)
- limit (integer) — Maximum number of tables per page
- cursor (string) — Pagination cursor from a previous response
Response: 200 Table metadata
{
"count": 0,
"has_more": true,
"limit": 0,
"next_cursor": null,
"tables": [
{
"columns": null,
"connection": "string",
"last_sync": null,
"schema": "string",
"synced": true,
"table": "string"
}
]
}
Errors: 404 Connection not found
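The cursor parameters above support walking the full table list page by page. A sketch, with the page fetch injected as a callable so the loop can be exercised against a stub:

```python
# Iterate all tables from GET /v1/information_schema, following next_cursor.
# `fetch_page` takes an optional cursor and returns the response dict;
# back it with any HTTP client (or a stub, as below).
def iter_tables(fetch_page):
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["tables"]
        if not page["has_more"]:
            return
        cursor = page["next_cursor"]

# Stubbed example: two pages of results.
pages = {
    None: {"tables": [{"table": "a"}], "has_more": True, "next_cursor": "c1"},
    "c1": {"tables": [{"table": "b"}], "has_more": False, "next_cursor": None},
}
names = [t["table"] for t in iter_tables(lambda c: pages[c])]
```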
Secrets
Store and manage credentials used by connections. Secrets are encrypted at rest and referenced by connections for authentication. Secret values are never returned by the API — only metadata (name, timestamps) is exposed.
List secrets
Endpoint: GET /v1/secrets
Response: 200 List of secrets
{
"secrets": [
{
"created_at": "2026-01-01T00:00:00Z",
"name": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
]
}
Create secret
Endpoint: POST /v1/secrets
Request body:
- name (string) — required
- value (string) — required
{
"name": "string",
"value": "string"
}
Response: 201 Secret created
{
"created_at": "2026-01-01T00:00:00Z",
"id": "string",
"name": "string"
}
Errors: 409 Secret already exists
Get secret
Endpoint: GET /v1/secrets/{name}
Path parameters:
- name (string) — Secret name
Response: 200 Secret metadata
{
"created_at": "2026-01-01T00:00:00Z",
"name": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 404 Secret not found
Update secret
Endpoint: PUT /v1/secrets/{name}
Path parameters:
- name (string) — Secret name
Request body:
- value (string) — required
{
"value": "string"
}
Response: 200 Secret updated
{
"name": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 404 Secret not found
Delete secret
Endpoint: DELETE /v1/secrets/{name}
Path parameters:
- name (string) — Secret name
Response: 204 Secret deleted
Errors: 404 Secret not found
Results
Retrieve persisted query results. Every query execution persists its results asynchronously. Results transition through statuses: processing → ready (or failed). Once ready, the full result data can be retrieved by ID.
List results
Endpoint: GET /v1/results
Query parameters:
- limit (integer) — Maximum number of results (default: 100, max: 1000)
- offset (integer) — Pagination offset (default: 0)
Response: 200 List of results
{
"count": 0,
"has_more": true,
"limit": 0,
"offset": 0,
"results": [
{
"created_at": "2026-01-01T00:00:00Z",
"error_message": null,
"id": "string",
"status": "string"
}
]
}
Get result
Endpoint: GET /v1/results/{id}
Path parameters:
- id (string) — Result ID
Response: 200 Result data
{
"columns": null,
"error_message": null,
"nullable": null,
"result_id": "string",
"row_count": null,
"rows": null,
"status": "string"
}
Errors: 404 Result not found
Query Runs
View the history of executed queries. Each query run records the SQL text, execution time, result reference, and status. Useful for auditing and debugging query performance.
List query runs
Endpoint: GET /v1/query-runs
Query parameters:
- limit (integer) — Maximum number of results
- cursor (string) — Pagination cursor
- status (string) — Filter by status (comma-separated, e.g. status=running,failed)
- saved_query_id (string) — Filter by saved query ID
Response: 200 List of query runs
{
"count": 0,
"has_more": true,
"limit": 0,
"next_cursor": null,
"query_runs": [
{
"completed_at": null,
"created_at": "2026-01-01T00:00:00Z",
"error_message": null,
"execution_time_ms": null,
"id": "string",
"result_id": null,
"row_count": null,
"saved_query_id": null,
"saved_query_version": null,
"server_processing_ms": null,
"snapshot_id": "string",
"sql_hash": "string",
"sql_text": "string",
"status": "string",
"trace_id": null,
"user_public_id": null,
"warning_message": null
}
]
}
Get query run
Endpoint: GET /v1/query-runs/{id}
Path parameters:
- id (string) — Query run ID
Response: 200 Query run details
{
"completed_at": null,
"created_at": "2026-01-01T00:00:00Z",
"error_message": null,
"execution_time_ms": null,
"id": "string",
"result_id": null,
"row_count": null,
"saved_query_id": null,
"saved_query_version": null,
"server_processing_ms": null,
"snapshot_id": "string",
"sql_hash": "string",
"sql_text": "string",
"status": "string",
"trace_id": null,
"user_public_id": null,
"warning_message": null
}
Errors: 404 Query run not found
Uploads
Upload files (CSV, JSON, Parquet) that can be used as sources for datasets. Files are stored temporarily and referenced by ID when creating a dataset. Maximum upload size is 2 GB.
List uploads
Endpoint: GET /v1/files
Query parameters:
- status (string) — Filter by upload status
Response: 200 List of uploads
{
"uploads": [
{
"content_type": null,
"created_at": "2026-01-01T00:00:00Z",
"id": "string",
"size_bytes": 0,
"status": "string"
}
]
}
Upload file
Endpoint: POST /v1/files
Query parameters:
- streaming (boolean) — Stream the upload to disk for large files (up to 20 GB)
Request body: Raw bytes (application/octet-stream)
Response: 201 File uploaded
{
"content_type": null,
"created_at": "2026-01-01T00:00:00Z",
"id": "string",
"size_bytes": 0,
"status": "string"
}
Errors: 400 Invalid request
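A minimal upload sketch. The endpoint path is from above; note that the Content-Type header value is an assumption for raw-byte uploads, and sending the request is left to your HTTP client:

```python
# Prepare a raw-byte upload request for POST /v1/files.
# Returns (url, headers, body); dispatching it is up to the caller.
def build_upload(base_url, api_token, workspace_id, data: bytes, streaming=False):
    url = f"{base_url}/v1/files"
    if streaming:
        url += "?streaming=true"
    headers = {
        "Authorization": f"Bearer {api_token}",
        "X-Workspace-Id": workspace_id,
        "Content-Type": "application/octet-stream",  # assumed for raw bytes
    }
    return url, headers, data

csv_bytes = b"id,name\n1,alpha\n2,beta\n"
url, headers, body = build_upload("https://api.hotdata.dev", "hd_token",
                                  "ws_abc123", csv_bytes)
```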
Datasets
User-managed tables created from uploaded files or inline data. Unlike connection tables (which mirror remote sources), datasets are fully managed and owned by you. Create a dataset from a CSV/JSON/Parquet upload or from inline data, then query it with SQL like any other table.
List datasets
Endpoint: GET /v1/datasets
Query parameters:
- limit (integer) — Maximum number of datasets (default: 100, max: 1000)
- offset (integer) — Pagination offset (default: 0)
Response: 200 List of datasets
{
"count": 0,
"datasets": [
{
"created_at": "2026-01-01T00:00:00Z",
"id": "string",
"label": "string",
"latest_version": 0,
"pinned_version": null,
"schema_name": "string",
"table_name": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
],
"has_more": true,
"limit": 0,
"offset": 0
}
Create dataset
Endpoint: POST /v1/datasets
Request body:
- label (string) — required
- source (DatasetSource) — required. Dataset source specification.
- table_name (string, null) — Optional table name; if not provided, derived from label.
{
"label": "string",
"source": null,
"table_name": null
}
Response: 201 Dataset created
{
"created_at": "2026-01-01T00:00:00Z",
"id": "string",
"label": "string",
"schema_name": "string",
"status": "string",
"table_name": "string"
}
Errors: 400 Invalid request, 409 Dataset already exists
Get dataset
Endpoint: GET /v1/datasets/{id}
Path parameters:
- id (string) — Dataset ID
Response: 200 Dataset details
{
"columns": [
{
"data_type": "string",
"name": "string",
"nullable": true
}
],
"created_at": "2026-01-01T00:00:00Z",
"id": "string",
"label": "string",
"latest_version": 0,
"pinned_version": null,
"schema_name": "string",
"source_type": "string",
"table_name": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 404 Dataset not found
Update dataset
Endpoint: PUT /v1/datasets/{id}
Path parameters:
- id (string) — Dataset ID
Request body:
- label (string, null)
- pinned_version (integer, null) — Pin to a specific version, or send null to unpin. Omit the field entirely to leave pinning unchanged.
- table_name (string, null)
{
"label": null,
"pinned_version": null,
"table_name": null
}
Response: 200 Dataset updated
{
"id": "string",
"label": "string",
"latest_version": 0,
"pinned_version": null,
"table_name": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 404 Dataset not found, 409 Conflict
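The omit-versus-null distinction for pinned_version matters when serializing the body: an explicit null unpins, while a missing key leaves pinning unchanged. A sketch using a sentinel to keep the two cases apart (the UNSET sentinel is an illustration, not part of the API):

```python
import json

UNSET = object()  # sentinel meaning "field not provided at all"

# Build a PUT /v1/datasets/{id} body. pinned_version=None serializes to an
# explicit JSON null (unpin); leaving it UNSET omits the key (unchanged).
def update_dataset_body(label=UNSET, pinned_version=UNSET, table_name=UNSET):
    body = {}
    if label is not UNSET:
        body["label"] = label
    if pinned_version is not UNSET:
        body["pinned_version"] = pinned_version
    if table_name is not UNSET:
        body["table_name"] = table_name
    return body

unpin = json.dumps(update_dataset_body(pinned_version=None))  # explicit null
rename = json.dumps(update_dataset_body(label="sales-2026"))  # pinning untouched
```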
Delete dataset
Endpoint: DELETE /v1/datasets/{id}
Path parameters:
- id (string) — Dataset ID
Response: 204 Dataset deleted
Errors: 404 Dataset not found
List dataset versions
Endpoint: GET /v1/datasets/{id}/versions
Path parameters:
- id (string) — Dataset ID
Query parameters:
- limit (integer) — Maximum number of versions (default: 100, max: 1000)
- offset (integer) — Pagination offset (default: 0)
Response: 200 List of dataset versions
{
"count": 0,
"dataset_id": "string",
"has_more": true,
"limit": 0,
"offset": 0,
"versions": [
{
"created_at": "2026-01-01T00:00:00Z",
"id": "string",
"source_type": "string",
"version": 0
}
]
}
Errors: 404 Dataset not found
Saved Queries
Save, version, and execute named SQL queries. Each update creates a new version, preserving the full history. Saved queries are automatically classified by category (e.g., aggregation, join, filtered scan) and can be executed by ID.
List saved queries
Endpoint: GET /v1/queries
Query parameters:
- limit (integer) — Maximum number of results
- offset (integer) — Pagination offset
Response: 200 List of saved queries
{
"count": 0,
"has_more": true,
"limit": 0,
"offset": 0,
"queries": [
{
"created_at": "2026-01-01T00:00:00Z",
"description": "string",
"id": "string",
"latest_version": 0,
"name": "string",
"tags": [
"string"
],
"updated_at": "2026-01-01T00:00:00Z"
}
]
}
Create saved query
Endpoint: POST /v1/queries
Request body:
- description (string, null)
- name (string) — required
- sql (string) — required
- tags (array, null)
{
"description": null,
"name": "string",
"sql": "string",
"tags": null
}
Response: 201 Saved query created
{
"category": null,
"created_at": "2026-01-01T00:00:00Z",
"description": "string",
"has_aggregation": null,
"has_group_by": null,
"has_join": null,
"has_limit": null,
"has_order_by": null,
"has_predicate": null,
"id": "string",
"latest_version": 0,
"name": "string",
"num_tables": null,
"sql": "string",
"sql_hash": "string",
"table_size": null,
"tags": [
"string"
],
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 400 Invalid request
Get saved query
Endpoint: GET /v1/queries/{id}
Path parameters:
- id (string) — Saved query ID
Response: 200 Saved query details
{
"category": null,
"created_at": "2026-01-01T00:00:00Z",
"description": "string",
"has_aggregation": null,
"has_group_by": null,
"has_join": null,
"has_limit": null,
"has_order_by": null,
"has_predicate": null,
"id": "string",
"latest_version": 0,
"name": "string",
"num_tables": null,
"sql": "string",
"sql_hash": "string",
"table_size": null,
"tags": [
"string"
],
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 404 Saved query not found
Update saved query
Endpoint: PUT /v1/queries/{id}
Path parameters:
- id (string) — Saved query ID
Request body:
- category_override (string, null) — Override the auto-detected category. Send null to clear (revert to auto).
- description (string, null)
- name (string, null) — Optional new name. When omitted, the existing name is preserved.
- sql (string, null) — Optional new SQL. When omitted, the existing SQL is preserved.
- table_size_override (string, null) — User annotation for table size. Send null to clear.
- tags (array, null)
{
"category_override": null,
"description": null,
"name": null,
"sql": null,
"table_size_override": null,
"tags": null
}
Response: 200 Saved query updated
{
"category": null,
"created_at": "2026-01-01T00:00:00Z",
"description": "string",
"has_aggregation": null,
"has_group_by": null,
"has_join": null,
"has_limit": null,
"has_order_by": null,
"has_predicate": null,
"id": "string",
"latest_version": 0,
"name": "string",
"num_tables": null,
"sql": "string",
"sql_hash": "string",
"table_size": null,
"tags": [
"string"
],
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 400 Invalid request, 404 Saved query not found
Delete saved query
Endpoint: DELETE /v1/queries/{id}
Path parameters:
- id (string) — Saved query ID
Response: 204 Saved query deleted
Errors: 404 Saved query not found
Execute saved query
Endpoint: POST /v1/queries/{id}/execute
Path parameters:
- id (string) — Saved query ID
Response: 200 Query executed
{
"columns": [
"string"
],
"execution_time_ms": 0,
"nullable": [
true
],
"query_run_id": "string",
"result_id": null,
"row_count": 0,
"rows": [
[
null
]
],
"warning": null
}
Errors: 404 Saved query not found
List saved query versions
Endpoint: GET /v1/queries/{id}/versions
Path parameters:
- id (string) — Saved query ID
Query parameters:
- limit (integer) — Maximum number of versions
- offset (integer) — Pagination offset
Response: 200 List of versions
{
"count": 0,
"has_more": true,
"limit": 0,
"offset": 0,
"saved_query_id": "string",
"versions": [
{
"category": null,
"created_at": "2026-01-01T00:00:00Z",
"has_aggregation": null,
"has_group_by": null,
"has_join": null,
"has_limit": null,
"has_order_by": null,
"has_predicate": null,
"num_tables": null,
"sql": "string",
"sql_hash": "string",
"table_size": null,
"version": 0
}
]
}
Errors: 404 Saved query not found
Indexes
Create, list, and delete indexes on cached tables. Supports sorted indexes for range queries and BM25 full-text indexes for keyword search.
List indexes on a table
Endpoint: GET /v1/connections/{connection_id}/tables/{schema}/{table}/indexes
Path parameters:
- connection_id (string) — Connection identifier
- schema (string) — Schema name
- table (string) — Table name
Response: 200 Indexes listed
{
"indexes": [
{
"columns": [
"string"
],
"created_at": "2026-01-01T00:00:00Z",
"index_name": "string",
"index_type": "string",
"metric": null,
"status": "ready",
"updated_at": "2026-01-01T00:00:00Z"
}
]
}
Errors: 404 Table not found, 500 Internal server error
Create an index on a table
Endpoint: POST /v1/connections/{connection_id}/tables/{schema}/{table}/indexes
Path parameters:
- connection_id (string) — Connection identifier
- schema (string) — Schema name
- table (string) — Table name
Request body:
- async (boolean) — When true, create the index as a background job and return a job ID for polling.
- columns (string[]) — required. Columns to index. Required for all index types.
- description (string, null) — User-facing description of the embedding (e.g. "product descriptions").
- dimensions (integer, null) — Output vector dimensions. Some models support multiple dimension sizes (e.g. OpenAI text-embedding-3-small supports 512 or 1536). If omitted, the model's default dimensions are used. Minimum: 0.
- embedding_provider_id (string, null) — Embedding provider ID. When set for a vector index, the source column is treated as text and embeddings are generated automatically. The vector index is then built on the generated embedding column ({column}_embedding by default).
- index_name (string) — required
- index_type (string) — Index type: "sorted" (default), "bm25", or "vector"
- metric (string, null) — Distance metric for vector indexes: "l2", "cosine", or "dot". When omitted, defaults to "l2" for float array columns or the provider's preferred metric for text columns with auto-embedding.
- output_column (string, null) — Custom name for the generated embedding column. Defaults to {column}_embedding.
{
"async": true,
"columns": [
"string"
],
"description": null,
"dimensions": null,
"embedding_provider_id": null,
"index_name": "string",
"index_type": "string",
"metric": null,
"output_column": null
}
Response: 201 Index created
{
"columns": [
"string"
],
"created_at": "2026-01-01T00:00:00Z",
"index_name": "string",
"index_type": "string",
"metric": null,
"status": "ready",
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 400 Invalid request, 404 Table not found, 500 Internal server error
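As an illustration, a request body for a vector index with auto-embedding over a text column might look like the following. The provider ID and column names are placeholders:

```python
# Body for POST /v1/connections/{connection_id}/tables/{schema}/{table}/indexes
# creating a vector index with auto-embedding on a text column.
body = {
    "index_name": "idx_description_vec",
    "index_type": "vector",
    "columns": ["description"],
    "embedding_provider_id": "emb_abc123",  # placeholder provider ID
    "metric": "cosine",
    "async": True,  # build as a background job, poll via GET /v1/jobs/{id}
}

# Per the docs above, the generated embedding column defaults to
# "{column}_embedding" unless output_column is set.
expected_output_column = body["columns"][0] + "_embedding"
```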
Delete an index
Endpoint: DELETE /v1/connections/{connection_id}/tables/{schema}/{table}/indexes/{index_name}
Path parameters:
- connection_id (string) — Connection identifier
- schema (string) — Schema name
- table (string) — Table name
- index_name (string) — Index name
Response: 204 Index deleted
Errors: 404 Index not found, 500 Internal server error
Embedding Providers
Manage embedding providers that generate vector embeddings for text columns. Providers can be service-based (e.g., OpenAI) or local. Register a provider, then reference it when creating vector indexes on text columns.
List embedding providers
Endpoint: GET /v1/embedding-providers
Response: 200 List of embedding providers
{
"embedding_providers": [
{
"config": null,
"created_at": "2026-01-01T00:00:00Z",
"has_secret": true,
"id": "string",
"name": "string",
"provider_type": "string",
"source": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
]
}
Create embedding provider
Endpoint: POST /v1/embedding-providers
Request body:
- api_key (string, null) — Inline API key. If provided, a secret is auto-created and referenced. Cannot be used together with secret_name.
- config (any) — Provider-specific configuration (model name, base URL, dimensions, etc.)
- name (string) — required
- provider_type (string) — required. Provider type: "local" or "service"
- secret_name (string, null) — Reference an existing secret by name (for service providers).
{
"api_key": null,
"config": null,
"name": "string",
"provider_type": "string",
"secret_name": null
}
Response: 201 Embedding provider created
{
"config": null,
"created_at": "2026-01-01T00:00:00Z",
"id": "string",
"name": "string",
"provider_type": "string"
}
Errors: 400 Invalid request, 409 Provider with this name already exists
Get embedding provider
Endpoint: GET /v1/embedding-providers/{id}
Path parameters:
- id (string) — Embedding provider ID
Response: 200 Embedding provider details
{
"config": null,
"created_at": "2026-01-01T00:00:00Z",
"has_secret": true,
"id": "string",
"name": "string",
"provider_type": "string",
"source": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 404 Provider not found
Update embedding provider
Endpoint: PUT /v1/embedding-providers/{id}
Path parameters:
- id (string) — Embedding provider ID
Request body:
- api_key (string, null) — Inline API key. If provided, updates (or creates) the auto-managed secret.
- config (any)
- name (string, null)
- secret_name (string, null) — Secret name containing the API key. Pass null to clear.
{
"api_key": null,
"config": null,
"name": null,
"secret_name": null
}
Response: 200 Embedding provider updated
{
"id": "string",
"name": "string",
"updated_at": "2026-01-01T00:00:00Z"
}
Errors: 404 Provider not found
Delete embedding provider
Endpoint: DELETE /v1/embedding-providers/{id}
Path parameters:
- id (string) — Embedding provider ID
Response: 204 Embedding provider deleted
Errors: 404 Provider not found
Jobs
Track background jobs. Jobs are submitted internally by other APIs when async execution is requested. Poll job status by ID or list all jobs.
List jobs
Endpoint: GET /v1/jobs
Query parameters:
- job_type (JobType) — Filter by job type
- status (string) — Filter by status (comma-separated, e.g. status=pending,running)
- limit (integer) — Max results (default: 50)
- offset (integer) — Offset for pagination
Response: 200 List of jobs
{
"jobs": [
{
"attempts": 0,
"completed_at": null,
"created_at": "2026-01-01T00:00:00Z",
"error_message": null,
"id": "string",
"job_type": "noop",
"result": null,
"status": "pending"
}
]
}
Get job status
Endpoint: GET /v1/jobs/{id}
Path parameters:
- id (string) — Job ID
Response: 200 Job status
{
"attempts": 0,
"completed_at": null,
"created_at": "2026-01-01T00:00:00Z",
"error_message": null,
"id": "string",
"job_type": "noop",
"result": null,
"status": "pending"
}
Errors: 404 Job not found
Error responses
All endpoints may return error responses in this format:
{
"message": "Error description",
"status": 400
}
Rate Limiting
API requests are subject to rate limiting. When rate limits are exceeded, the API returns a 429 Too Many Requests response with a Retry-After header.
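One way to honor the Retry-After header is to sleep for the advertised duration before retrying. A sketch with the request function injected, so the wait logic is testable without a live API:

```python
import time

# Call `send` (any callable returning (status_code, headers, body)) and
# retry on 429, sleeping for the Retry-After duration between attempts.
def request_with_retry(send, max_retries=3, sleep=time.sleep):
    for attempt in range(max_retries + 1):
        status, headers, body = send()
        if status != 429 or attempt == max_retries:
            return status, body
        sleep(float(headers.get("Retry-After", 1)))

# Stubbed example: one 429 with Retry-After: 2, then success.
responses = iter([(429, {"Retry-After": "2"}, None),
                  (200, {}, {"ok": True})])
waits = []
status, body = request_with_retry(lambda: next(responses), sleep=waits.append)
```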