Control spend and long-running hybrid queries

Some queries run quickly and frequently, while others scan large datasets and take longer. Managing both requires clear control over execution and results. You can run queries asynchronously, retrieve past results, refresh connections in the background, and inspect active or completed jobs through a consistent interface.

How it works

Step 1 — Prefer async when integrations might lag

How many events did we get today in the huge events table? If the answer isn’t ready yet, show me how to check when the count is done.

When the engine decides a synchronous wait isn't worth it, the CLI may return a query_run_id so you can poll for status instead of blocking.

Step 2 — Reuse stored results instead of re-running

Show my last 5 query runs, then let me read the stored rows from one of them without re-running the same SQL.
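Reading stored rows back is a lookup, not a re-execution. A minimal sketch, assuming a hypothetical result store keyed by query_run_id (real stored results would come from the CLI's query-run history, and the run id and row values below are invented):

```python
# Hypothetical result store: maps a query_run_id to the SQL that produced it
# and the rows already materialized for it.
_RESULTS = {
    "qr_101": {"sql": "SELECT count(*) FROM events", "rows": [(41230,)]},
}

def list_runs(limit=5):
    """Return up to `limit` recent query runs (id, metadata), newest first."""
    return list(_RESULTS.items())[-limit:][::-1]

def read_rows(query_run_id):
    """Read rows stored for a past run -- no warehouse compute, no re-run."""
    return _RESULTS[query_run_id]["rows"]

run_id, meta = list_runs()[0]
rows = read_rows(run_id)  # same rows the original run produced
```

The design choice worth copying is the split: listing runs returns metadata only, and row retrieval is a separate call you make once you've picked a run.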

Step 3 — Tune connection refresh, not blind rescans

Our shared finance Snowflake is out of date—refresh it so the table list here matches the warehouse again.

connections refresh kicks off catalog reconciliation in the background. Wait for it to finish before relying on tables list to reflect upstream DDL.
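The refresh-then-wait discipline can be sketched as a tiny class. This is an assumption-laden illustration, not the CLI's implementation: the Connection class, its refresh/wait_refreshed methods, and the simulated upstream catalog are all hypothetical.

```python
import threading
import time

class Connection:
    """Hypothetical connection whose table catalog refreshes in the background."""

    def __init__(self, tables):
        self._tables = list(tables)
        self._fresh = threading.Event()

    def refresh(self, upstream_tables, delay=0.01):
        """Kick off background catalog reconciliation and return immediately."""
        def work():
            time.sleep(delay)  # simulate scanning the upstream catalog
            self._tables = list(upstream_tables)
            self._fresh.set()
        threading.Thread(target=work, daemon=True).start()

    def wait_refreshed(self, timeout=5.0):
        """Block until reconciliation completes before trusting tables()."""
        if not self._fresh.wait(timeout):
            raise TimeoutError("refresh did not complete")

    def tables(self):
        return self._tables

conn = Connection(["ledger"])
conn.refresh(["ledger", "invoices"])  # upstream DDL added an invoices table
conn.wait_refreshed()                 # without this, tables() may be stale
assert "invoices" in conn.tables()
```

The key behavior mirrors the doc's warning: between refresh() returning and the event being set, the table list is stale, so anything that reads it must wait first.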

Step 4 — Watch jobs that balloon latency

What’s running in the background right now? Then show me full detail for one of those runs.

jobs list shows work that is still running (for example, long index builds); jobs <id> reports status for a single run until it completes.
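The list/detail split can be sketched as two functions over a jobs registry. The registry contents, job ids, and field names below are invented for illustration; they stand in for whatever the CLI's jobs commands return.

```python
# Hypothetical jobs registry, standing in for the CLI's job tracker.
_JOBS = {
    "job_7": {"kind": "index build", "status": "running", "progress": 0.42},
    "job_8": {"kind": "query", "status": "completed", "progress": 1.0},
}

def jobs_list():
    """Ids of jobs still running, like `jobs list` -- a cheap summary view."""
    return [jid for jid, job in _JOBS.items() if job["status"] == "running"]

def job_detail(job_id):
    """Full detail for one job, like `jobs <id>`."""
    return _JOBS[job_id]

active = jobs_list()            # only the running index build appears
detail = job_detail(active[0])  # then drill into that one run
```

Keeping the summary cheap matters when latency balloons: you scan jobs_list often, and pay for full detail only on the job you actually care about.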

Who uses this

  • Finance or ops roles correlating usage with query-run metadata.
  • ML or evaluation pipelines issuing many short reads against shared warehouses.