Compatibility

OpenAI-compatible proxy. One URL change, 16 providers.

Stockyard speaks the OpenAI API format. If your app, SDK, or tool talks to OpenAI, point it at Stockyard instead. Same request format, same response format, 16 providers behind it.

The only change

# Before: direct to OpenAI
client = OpenAI(
    api_key="sk-...",
    base_url="https://api.openai.com/v1"
)

# After: through Stockyard
client = OpenAI(
    api_key="sk-...",
    base_url="http://localhost:4200/v1"
)

# Everything else is identical. Same methods, same parameters.

Cursor: Settings → Models → Override OpenAI Base URL → http://localhost:4200/v1

Windsurf / Copilot: Set the OpenAI-compatible endpoint in settings. Stockyard handles the rest.

Aider / Cline: Set OPENAI_API_BASE=http://localhost:4200/v1 and they route through Stockyard automatically.

Any OpenAI SDK: Python, Node, Go, Rust — any SDK that lets you set a base URL works out of the box.

Use any provider with OpenAI format

Your app sends OpenAI-formatted requests. Stockyard translates to the right provider format underneath. Request claude-sonnet-4-20250514 as the model and Stockyard sends it to the Anthropic API. Request gemini-pro and it goes to Google. Your app code stays the same.

# Request Claude through OpenAI format
$ curl localhost:4200/v1/chat/completions \
    -d '{"model":"claude-sonnet-4-20250514","messages":[...]}'
→ translated and sent to Anthropic API

# Request Gemini through the same format
$ curl localhost:4200/v1/chat/completions \
    -d '{"model":"gemini-pro","messages":[...]}'
→ translated and sent to Google API

# Both return OpenAI-format responses
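The same idea in Python: because routing happens on the model string, one request-building helper serves every provider. A minimal sketch, assuming a Stockyard instance at localhost:4200; the helper name is illustrative, not part of any SDK.

```python
# Sketch: one OpenAI-format payload, any provider behind it.
# chat_payload is a hypothetical helper, not a Stockyard or OpenAI API.

def chat_payload(model: str, prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-format /v1/chat/completions request body.

    Stockyard routes by model name, so the same payload shape reaches
    Anthropic, Google, or OpenAI unchanged on the client side.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

# Same shape, different providers -- only the model string changes.
claude_req = chat_payload("claude-sonnet-4-20250514", "Summarize this diff.")
gemini_req = chat_payload("gemini-pro", "Summarize this diff.")

# With the official OpenAI SDK, sending either is one call:
#   from openai import OpenAI
#   client = OpenAI(api_key="sk-...", base_url="http://localhost:4200/v1")
#   client.chat.completions.create(**claude_req)
```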

Set "stream": true in your request and Stockyard streams server-sent events in OpenAI format, regardless of which provider is behind it. Failover works mid-stream — if a provider drops the connection, Stockyard retries on the next provider.

What you get for free by proxying

Every request through Stockyard is automatically traced with cost, latency, and token count. You get a tamper-proof audit ledger, rate limiting, content filtering, PII redaction, caching, and failover — all without changing your application code beyond that one URL.

The 76 middleware modules run on every request. You don't configure them individually — they work out of the box. Turn specific modules on or off when you need to.

One base URL. Everything else stays the same.

Install Stockyard, change your base URL, and every LLM request gains tracing, controls, and multi-provider routing. No SDK swap, no code rewrite.

Install Stockyard
Editor Setup Guides → Proxy-Only Setup → Model Aliasing →

Frequently Asked Questions

Is Stockyard compatible with the OpenAI API?
Yes. Stockyard speaks the same /v1/chat/completions protocol as OpenAI. Change your base URL and your existing OpenAI SDK code works without modification.
Does it work with the Python OpenAI SDK?
Yes. Set base_url to your Stockyard instance and api_key to your Stockyard key or provider key. Streaming, function calling, and all standard features work.
Can I use non-OpenAI models through the OpenAI API format?
Yes. Stockyard includes shim modules that translate requests to Anthropic, Google Gemini, and other providers. Send an OpenAI-format request for claude-sonnet-4-5 and Stockyard handles the translation.
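To make the translation concrete, here is a simplified sketch of what such a shim does for Anthropic. The two visible differences in the public Anthropic Messages API are that the system prompt is a top-level field rather than a message, and that max_tokens is required. The function below is illustrative; Stockyard's actual shim internals are an assumption.

```python
def openai_to_anthropic(req: dict) -> dict:
    """Hypothetical shim: OpenAI chat request -> Anthropic Messages request.

    Hoists any system message to the top-level "system" field and
    supplies the max_tokens value Anthropic requires.
    """
    system = ""
    messages = []
    for msg in req["messages"]:
        if msg["role"] == "system":
            system = msg["content"]       # top-level field on Anthropic
        else:
            messages.append(msg)          # user/assistant pass through
    out = {
        "model": req["model"],
        "max_tokens": req.get("max_tokens", 1024),  # required by Anthropic
        "messages": messages,
    }
    if system:
        out["system"] = system
    return out

translated = openai_to_anthropic({
    "model": "claude-sonnet-4-20250514",
    "messages": [
        {"role": "system", "content": "Be terse."},
        {"role": "user", "content": "Hi"},
    ],
})
print(translated["system"])      # → Be terse.
print(translated["max_tokens"])  # → 1024
```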
What tools and frameworks work with Stockyard?
Any tool that supports the OpenAI API works: LangChain, Vercel AI SDK, LiteLLM, Instructor, Continue.dev, Cursor, and any other OpenAI-compatible client.
Explore: Proxy-only mode · Best self-hosted proxy · Model aliasing · Gateway vs proxy
Stockyard also makes 150 focused self-hosted tools — browse the catalog or get everything for $29/mo.