Stockyard stores everything in one SQLite file. Traces, costs, audit logs, cached responses, module configs, encrypted API keys. Backing up the entire platform is one command.
The database file is at /data/stockyard.db by default (or wherever you configured the data directory). Copy it and you have a complete backup of the entire platform.
This works while Stockyard is running, with one caveat. WAL mode lets SQLite readers and writers coexist safely, but a plain `cp` is not a SQLite reader: it can capture the main database file and the write-ahead log out of sync. For a live backup, use SQLite's online backup command (`.backup`) or `VACUUM INTO`, both of which produce a consistent snapshot without stopping the service. If Stockyard is stopped, a plain file copy is fine.
If you are running in Docker:
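A sketch, assuming the container is named `stockyard` and the image ships the `sqlite3` CLI (both are assumptions, not documented facts):

```shell
# Take a consistent snapshot inside the container, then copy it out.
docker exec stockyard sqlite3 /data/stockyard.db ".backup '/data/backup.db'"
docker cp stockyard:/data/backup.db ./stockyard-backup.db
```

If the data directory is bind-mounted on the host, you can skip `docker exec` and run the `.backup` against the host path directly.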
Stop Stockyard, copy the backup file to the data directory, start Stockyard. The binary reads the database and picks up where the backup left off.
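As a restore sketch, assuming a systemd unit named `stockyard` and the default data directory (both assumptions):

```shell
systemctl stop stockyard
cp /backups/stockyard-2025-06-01.db /data/stockyard.db      # filename is illustrative
rm -f /data/stockyard.db-wal /data/stockyard.db-shm         # drop stale WAL/SHM from the old database
systemctl start stockyard
```

Deleting the leftover `-wal` and `-shm` files matters: they belong to the database you are replacing, and SQLite would otherwise try to replay them against the restored file.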
If the backup is from an older version, Stockyard runs schema migrations automatically on startup. No separate migration step required.
A cron job is all you need for automated backups:
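A daily crontab entry that snapshots the database and prunes copies older than a week might look like this (paths and schedule are assumptions; `%` is backslash-escaped because cron treats a bare `%` as a line separator):

```shell
0 2 * * * sqlite3 /data/stockyard.db ".backup '/backups/stockyard-$(date +\%F).db'" && find /backups -name 'stockyard-*.db' -mtime +7 -delete
```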
That is a complete backup rotation. One line, one cron entry. No backup agent, no snapshot scheduler, no managed backup service.
The SQLite file contains the complete state of the platform: request traces with model, tokens, cost, and latency; cost records and summaries; the hash-chained audit ledger; cached LLM responses; all 76 module configurations; model alias mappings; provider API keys (AES-256-GCM encrypted); rate limiter and circuit breaker state; and product state for all 150 tools.
When you copy this file, you copy everything. There is no separate config store, no external cache, and no state held in memory that is not also persisted to disk.
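Because the copy is the whole platform, it is worth verifying. SQLite can check a backup in place (the `BACKUP_FILE` path is an assumption):

```shell
# Prints "ok" when the file is internally consistent.
sqlite3 "${BACKUP_FILE:-/backups/stockyard-latest.db}" "PRAGMA integrity_check;"
```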
If you lose the server entirely, recovery is: install Stockyard on a new machine, copy the database file to the data directory, start the binary. The new instance has the same traces, the same costs, the same configs, and the same encrypted keys as the one you lost.
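That sequence, sketched for a fresh machine; the backup host, paths, and the binary's install location are all assumptions:

```shell
mkdir -p /data
scp backup-host:/backups/stockyard-latest.db /data/stockyard.db
/usr/local/bin/stockyard    # on startup it runs any schema migrations the backup needs
```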
Time to recovery is however long it takes to copy the file and start the binary. There is no database import, no schema recreation, and no config replay.
File-copy backups are full snapshots. Stockyard does not currently support point-in-time recovery, incremental backups, or continuous replication. If you need those capabilities, you would need to layer them on top with a tool like Litestream, which provides continuous SQLite replication to S3 or other storage.
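For example, Litestream's `replicate` command streams WAL changes continuously (the bucket path is an assumption):

```shell
# Replicate the live database to S3 until interrupted; typically run as a service.
litestream replicate /data/stockyard.db s3://my-bucket/stockyard
```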
For most single-node deployments, daily file-copy backups provide sufficient protection.
No pg_dump, no managed snapshots, no point-in-time recovery configuration. Copy the file.