Find slow SQL in history and add indexes for your filters
Repeated queries from dashboards or tools tend to follow consistent patterns. You can use run history to see execution time, inspect the tables and columns involved, and add indexes that align with common filters and joins. This gives you a direct way to improve performance based on actual usage.
How it works
Step 1 — Mine query runs for slow or frequent SQL
"Show my last 25 queries and how long each took; I'm trying to spot the slow ones."

Each run records execution_time_ms and sql_text, so you can sort by observed latency and inspect the offending statements directly.
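A minimal local sketch of what that prompt resolves to. The query_runs table, its rows, and the SQLite connection are all placeholders for your tool's real run-history store; only the execution_time_ms and sql_text fields come from the run records described above.

```python
import sqlite3

# Hypothetical run-history store; the real table name and connection
# depend on your tool. Only execution_time_ms and sql_text are given.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE query_runs ("
    "id INTEGER PRIMARY KEY, sql_text TEXT, execution_time_ms REAL)"
)
conn.executemany(
    "INSERT INTO query_runs (sql_text, execution_time_ms) VALUES (?, ?)",
    [
        ("SELECT * FROM orders WHERE status = 'open'", 1840.0),
        ("SELECT count(*) FROM users", 12.0),
        ("SELECT * FROM orders WHERE created_at > '2024-01-01'", 950.0),
    ],
)

# The equivalent of the prompt above: recent runs, slowest first.
rows = conn.execute(
    "SELECT sql_text, execution_time_ms FROM query_runs "
    "ORDER BY execution_time_ms DESC LIMIT 25"
).fetchall()
for sql_text, ms in rows:
    print(f"{ms:8.1f} ms  {sql_text}")
```

Sorting by execution_time_ms surfaces the statements worth indexing; recurring filter shapes (here, orders by status and created_at) are the candidates for the next steps.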
Step 2 — Align with the physical model
"Our slow reports filter orders by status and when they were created. Show the structure of the orders table on our commerce database."

Step 3 — Create indexes that match common access paths
"Speed up those filters: add an index on status and created_at for orders. If the build can happen in the background, use that."

With --async, index creation returns a job. Poll hotdata jobs until it finishes before expecting faster lookups.
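Steps 2 and 3 can be sketched against a local SQLite stand-in. The table and column names (orders, status, created_at) come from the steps above; the schema details and the single-statement index build are assumptions — your commerce database and the async job flow will differ.

```python
import sqlite3

# Illustrative stand-in for the orders table on the commerce database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders ("
    "id INTEGER PRIMARY KEY, status TEXT, created_at TEXT)"
)

# Step 2: inspect the physical structure before indexing.
for cid, name, ctype, *_ in conn.execute("PRAGMA table_info(orders)"):
    print(cid, name, ctype)

# Step 3: a composite index matching the common filter shape.
conn.execute(
    "CREATE INDEX idx_orders_status_created ON orders (status, created_at)"
)

# Confirm the planner now uses the index for the slow filter.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders "
    "WHERE status = 'open' AND created_at > '2024-01-01'"
).fetchall()
for row in plan:
    print(row)
```

Column order matters: putting the equality column (status) first lets the range predicate on created_at narrow the scan within each status value, which is why the composite index matches this filter shape better than two single-column indexes.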
Who uses this
- Platform engineers tying production query patterns to index changes.
- Analytics owners tuning indexes on tables heavily used by BI and agents.
- Teams operating agents whose generated SQL volume grows over time.