
Architecture Decisions Behind Stockyard

· Michael

Every architectural decision in Stockyard traces back to one principle: a developer should be able to go from zero to a fully-instrumented LLM proxy in under 60 seconds. Here’s what that forced us to choose, and what we gave up.

Go over Python

The LLM tooling ecosystem is overwhelmingly Python. We chose Go anyway. The reason is deployment: a Go binary is a single static file. No virtualenvs, no pip dependencies, no version conflicts, no runtime. Download it, run it.

The performance characteristics matter too. Go’s goroutine-per-request model handles concurrent proxy traffic cleanly. The median overhead of the full 76-module middleware chain is 0.12ms. In Python with ASGI, that number would be measured in milliseconds, not fractions of milliseconds.

The tradeoff: fewer community contributions from the heavily-Python LLM developer community. We accept that.

SQLite over Postgres

Most proxy tools need a Postgres database for persistence and a Redis instance for caching. That’s two additional services to provision, configure, connect, and monitor.

Stockyard embeds SQLite. The database file lives at ~/.stockyard/stockyard.db. It’s created on first boot. Backups are a file copy. For the full reasoning, see why we chose SQLite.

SQLite handles our workload well. Traces, audit ledger entries, prompt templates, workflow definitions, and config state are all write-light, read-heavy patterns. For Brand’s hash-chained ledger, SQLite’s sequential write performance is actually ideal — it’s an append-only log.
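The hash-chaining idea behind the ledger is simple to sketch in Go: each entry's hash covers its payload plus the previous entry's hash, so editing any historical row breaks every link after it. This is an illustrative in-memory version, not Brand's actual schema (in Stockyard, each entry would be a row appended to SQLite):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Entry is one record in a hash-chained, append-only ledger.
type Entry struct {
	Payload  string
	PrevHash string
	Hash     string
}

// Append derives the new entry's hash from its payload and the
// previous entry's hash, linking the chain.
func Append(ledger []Entry, payload string) []Entry {
	prev := ""
	if len(ledger) > 0 {
		prev = ledger[len(ledger)-1].Hash
	}
	sum := sha256.Sum256([]byte(prev + payload))
	return append(ledger, Entry{payload, prev, hex.EncodeToString(sum[:])})
}

// Verify recomputes every link; any tampered row fails the walk.
func Verify(ledger []Entry) bool {
	prev := ""
	for _, e := range ledger {
		sum := sha256.Sum256([]byte(prev + e.Payload))
		if e.PrevHash != prev || e.Hash != hex.EncodeToString(sum[:]) {
			return false
		}
		prev = e.Hash
	}
	return true
}

func main() {
	var l []Entry
	l = Append(l, "request received")
	l = Append(l, "response sent")
	fmt.Println(Verify(l)) // true
	l[0].Payload = "tampered"
	fmt.Println(Verify(l)) // false
}
```

Because writes only ever land at the tail, the access pattern is exactly the sequential append that SQLite handles well.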

The tradeoff: no multi-writer horizontal scaling. Stockyard Cloud handles this by running dedicated instances per customer rather than sharding a shared database.

Middleware chain over plugin system

We could have built a plugin system with dynamic loading. Instead, every module is compiled into the binary and wrapped with toggle.Wrap, which checks the module’s enabled state on every request.

This means the binary is ~25MB and contains all 76 modules whether you use them or not. But it also means there is no plugin ABI to version, no dynamic loading to fail at runtime, and every module can be switched on or off live, since its enabled state is checked on each request.

The tradeoff: you can’t write custom modules yet. Everything ships in the binary. This is intentional for now — we want to get the built-in modules right before opening up extension points.

Nine apps over six services

Lookout, Brand, Tack Room, Forge, and Trading Post could each be a separate microservice. Instead, they all compile into the same binary, share one SQLite database, and share one HTTP server on port 4200.

The upside: one process to manage, one port to expose, one database to back up. The proxy writes traces to Lookout and audit events to Brand automatically — no inter-service communication, no message queues, no eventual consistency.

The tradeoff: you can’t scale Lookout independently of the proxy. For most deployments (even at thousands of requests per minute), this is fine. For the rare case where it isn’t, Stockyard Cloud runs on dedicated infrastructure.
