Add cost tracking, caching, failover, and 76 middleware modules to your OpenAI requests. One URL change, no SDK swap.
OpenAI is the most-used LLM provider but gives you limited visibility into costs until your bill arrives. Proxying through Stockyard adds per-request cost tracking, response caching (identical prompts skip the API entirely), rate limiting per user or project, and automatic failover to Anthropic or other providers when OpenAI has an outage.
Your application code does not change. Stockyard speaks the same OpenAI-compatible API, so any SDK that works with OpenAI works with Stockyard.
```sh
# Install Stockyard
curl -fsSL stockyard.dev/install.sh | sh

# Set your OpenAI API key
export OPENAI_API_KEY=your-key-here

# Start the proxy
stockyard
# Provider: openai (from OPENAI_API_KEY)
# Proxy listening on :4200

# Send a request through the proxy
curl http://localhost:4200/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hello"}]}'
```
OpenAI models are auto-detected from the model name in the request. No extra config needed beyond the API key.
**Without Stockyard:** requests go straight to api.openai.com. No cost visibility until the monthly invoice. No caching. No failover. No audit trail.

**With Stockyard:** every request is logged with its cost. Identical prompts are cached automatically. If OpenAI goes down, traffic fails over to Claude or Gemini. A hash-chained audit trail supports compliance.
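One way to see the cache in action is to send the same prompt twice and compare latency; the second call should be served by the proxy without touching OpenAI. This sketch uses only the standard library and assumes the proxy is running on :4200 as in the quick-start (the request shape is the standard OpenAI chat completions API).

```python
# Sketch: identical prompts through the proxy; the repeat should be a cache hit.
# Assumes Stockyard is running locally on :4200.
import json
import time
import urllib.request

def ask(prompt):
    """POST one chat completion through the proxy; return (latency_s, body)."""
    req = urllib.request.Request(
        "http://localhost:4200/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    t0 = time.monotonic()
    with urllib.request.urlopen(req) as r:
        body = json.load(r)
    return time.monotonic() - t0, body

try:
    first, _ = ask("hello")
    second, _ = ask("hello")  # identical prompt: eligible for the cache
    print(f"first: {first:.2f}s, repeat: {second:.3f}s")
except OSError:
    print("proxy not running; start `stockyard` first")
```

A large gap between the two timings indicates the repeat never left the proxy.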
Route OpenAI through Stockyard in under 60 seconds.
Install Guide · All 16 providers · Proxy-only mode · What is an LLM proxy? · vs LiteLLM · vs Helicone