Try the Demo

See Scry catch a 92x query regression in under 5 minutes.

Prerequisites

  • Docker and Docker Compose — the demo runs entirely in containers
  • ~2GB disk space — for container images

That's it. No Rust toolchain, no source code, no database setup.

Install the CLI

Download the latest scry binary from GitHub releases:

# Download the latest release for your platform
# https://github.com/scrydata/scry-cli/releases
curl -sL https://github.com/scrydata/scry-cli/releases/latest/download/scry-cli-x86_64-unknown-linux-gnu.tar.gz | tar xz
sudo mv scry /usr/local/bin/

# Or on macOS (Apple Silicon)
curl -sL https://github.com/scrydata/scry-cli/releases/latest/download/scry-cli-aarch64-apple-darwin.tar.gz | tar xz
sudo mv scry /usr/local/bin/

# Verify
scry --version

Run the Demo

One command launches the full stack and walks you through an interactive scenario:

scry demo

This starts a Docker Compose stack (source PostgreSQL, NATS, scry-platform, scry-backfill, scry-proxy, and a query generator) and guides you through seven steps:

  1. Services start — Docker Compose brings up the stack. An architecture overview prints while the services start, then each service reports healthy with elapsed time.
  2. Shadow syncs — A shadow database is provisioned and synchronized. An animated progress indicator shows the sync phase (Provisioning → SchemaSync → Backfill → CaughtUp), the percentage complete, the current table being synced, and the running event count.
  3. Queries captured — A query workload runs through scry-proxy, capturing ~25 query events across 4 types: ORDER lookups by status, JOIN queries with users, GROUP BY aggregations, and user-specific order history.
  4. Review scenario — A brief narrative sets the scene: "Your teammate added a region column. The migration passed staging with 100 rows. Production has 25,000 orders." Press Enter to continue.
  5. Apply migration & replay — The migration SQL is displayed in a bordered box. It's applied to the shadow (the source database is never touched), then captured queries are replayed against the changed shadow.
  6. Regression report — The results appear (see "What You'll See" below).
  7. Apply fix & verify — The suggested index is applied to the shadow, replay re-runs, and the report shows "ALL QUERIES PASSED — no regressions detected."
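
For orientation: the default scenario's migration is of the "add a column" kind. A minimal sketch of what such a migration might look like (illustrative only; the demo prints the actual SQL it applies in step 5):

```sql
-- Illustrative sketch, not the demo's actual migration script.
-- The scenario narrative: a teammate adds a region column, and
-- queries start filtering on it with no supporting index.
ALTER TABLE orders ADD COLUMN region text;
```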

The demo runs in an alternate terminal screen (details below); press Ctrl+C at any time to exit cleanly.

What You'll See

The replay report shows three query patterns that regressed after the migration:

REGRESSION DETECTED

3 query pattern(s) regressed:

  SELECT * FROM orders WHERE region = $1 AND status = $2
  |-- Before:  p50=2ms   p99=8ms
  |-- After:   p50=184ms p99=312ms  (92x slower)
  `-- Cause:   Seq Scan on orders (missing index on region)

  SELECT region, status, COUNT(*) FROM orders GROUP BY ...
  |-- Before:  p50=5ms   p99=15ms
  |-- After:   p50=210ms p99=380ms  (42x slower)
  `-- Cause:   Seq Scan on orders

  SELECT o.*, u.email FROM orders o JOIN users u ...
  |-- Before:  p50=8ms   p99=25ms
  |-- After:   p50=165ms p99=290ms  (20x slower)
  `-- Cause:   Seq Scan on orders

The demo then shows the EXPLAIN ANALYZE for the worst offender, confirming the sequential scan:

EXPLAIN ANALYZE
SELECT * FROM orders WHERE region = $1 AND status = $2

Seq Scan on orders  (cost=0.00..1842.00 rows=25000 width=128)
                    (actual time=0.015..184.230 rows=312 loops=1)
  Filter: ((region = 'us-west-2') AND (status = 'pending'))
  Rows Removed by Filter: 24688
Planning Time: 0.085 ms
Execution Time: 184.312 ms

The suggested fix is displayed in a cyan-bordered box:

CREATE INDEX idx_orders_region ON orders (region);
CREATE INDEX idx_orders_region_status ON orders (region, status);

After applying the fix and re-running the replay, the post-fix EXPLAIN confirms the improvement:

EXPLAIN ANALYZE
SELECT * FROM orders WHERE region = $1 AND status = $2

Index Scan using idx_orders_region_status on orders
                    (cost=0.29..8.42 rows=312 width=128)
                    (actual time=0.021..3.105 rows=312 loops=1)
  Index Cond: ((region = 'us-west-2') AND (status = 'pending'))
Planning Time: 0.092 ms
Execution Time: 3.187 ms

Visual Details

The demo runs in an alternate terminal screen with a polished UI:

  • Animated spinners — braille-character spinners show progress during sync, capture, and replay
  • Color-coded output — green for passed queries, red for regressed, cyan for SQL display
  • Bordered SQL boxes — migration and fix SQL displayed in cyan-bordered panels
  • Status bar — always-visible bar at the bottom showing current step, total progress, and elapsed time

After the fix, every query pattern returns to near-baseline latency. Read The Post-Mortem We Never Had to Write for the full story.

Other Scenarios

The demo ships with three scenarios that demonstrate different types of regressions:

# Default: missing index after adding a column
scry demo

# Large UPDATE blocking concurrent queries
scry demo --scenario table-locking

# Dropped index that was still serving queries
scry demo --scenario index-drop

# Add explanatory context between steps
scry demo --explain

table-locking

A bulk UPDATE on a large table blocks concurrent queries. The report shows: UPDATE duration (47s), queries blocked (156), max wait time (38s), queries timed out (12). The fix demonstrates batching the UPDATE with SKIP LOCKED to avoid holding a table-level lock.
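
The fix pattern is standard PostgreSQL batching. A sketch (table and column names are illustrative; the demo's actual fix SQL may differ): each transaction claims a bounded slice of rows with FOR UPDATE SKIP LOCKED, so concurrent statements never queue behind one long-running UPDATE.

```sql
-- Illustrative batched UPDATE: run in a loop (from the application
-- or a scheduler) until it reports 0 rows updated. Each iteration
-- locks at most 1000 rows, instead of holding row locks across the
-- whole table for the duration of one giant UPDATE.
UPDATE orders o
SET    status = 'archived'
FROM  (
        SELECT id
        FROM   orders
        WHERE  status = 'stale'
        ORDER  BY id
        LIMIT  1000
        FOR UPDATE SKIP LOCKED  -- skip rows already locked elsewhere
      ) batch
WHERE  o.id = batch.id;
```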

index-drop

An "unused" index is dropped — but it was actually serving 2,400 queries/hour (stats were reset after a recent failover, making it look unused). The report shows 5 query patterns regressed, with impact quantified: ~18,720 queries/day affected, +83ms average regression. Scry catches what pg_stat_user_indexes missed.
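
The trap here is the standard "unused index" check against cumulative statistics, which a stats reset (for example, after a failover) silently zeroes. A sketch of that check, and the sanity test to run first, using only stock PostgreSQL views:

```sql
-- The usual "find unused indexes" query. After a stats reset or
-- failover, idx_scan restarts at zero, so a busy index can look idle.
SELECT schemaname, relname, indexrelname, idx_scan
FROM   pg_stat_user_indexes
WHERE  idx_scan = 0;

-- Always check how long the counters have been accumulating first.
SELECT stats_reset
FROM   pg_stat_database
WHERE  datname = current_database();
```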

--explain

Adds architectural explanation boxes that pause between steps, covering: the demo stack layout, how shadow databases and journals work, how scry-proxy captures queries transparently, and how regression detection compares before/after performance. Good for a first run or when presenting to a team.

Cleanup

The demo prompts you to clean up when it finishes. To clean up manually:

scry demo --cleanup

This removes the Docker containers and the local ./data/scry-platform.db SQLite database created during the run.

If you skip cleanup, the stack (including the query generator) keeps running, and you can explore the management API's Swagger UI at http://localhost:8081/swagger-ui/.

Next Steps

Ready to go beyond the demo? Request early access.

Need Help?

If you run into issues with the demo, check the Troubleshooting Guide or reach out to us for support.