Provider

Route OpenAI through Stockyard

Add cost tracking, caching, failover, and 76 middleware modules to your OpenAI requests. One URL change, no SDK swap.

Environment variable
OPENAI_API_KEY
Models
gpt-4o, gpt-4o-mini, gpt-4.1, gpt-4.1-mini, o3-mini
Failover to
Anthropic Claude, Google Gemini, or Groq
API format
OpenAI-compatible

Why proxy OpenAI?

OpenAI is the most widely used LLM provider, but it offers little visibility into costs until the bill arrives. Proxying through Stockyard adds per-request cost tracking, response caching (identical prompts skip the API entirely), rate limiting per user or project, and automatic failover to Anthropic or other providers when OpenAI has an outage.

Your application code does not change. Stockyard speaks the same OpenAI-compatible API, so any SDK that works with OpenAI works with Stockyard.
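Because the proxy speaks the OpenAI wire format, you don't even need an SDK: any HTTP client can hit it directly. A minimal Python sketch using only the standard library, assuming Stockyard's default listen address of localhost:4200 from the quick start below:

```python
import json
import urllib.request

# Same chat-completion call as the curl quick start, built from Python.
# http://localhost:4200 is Stockyard's default proxy address.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "hello"}],
}
req = urllib.request.Request(
    "http://localhost:4200/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the proxy running, send it and read the reply:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

If you are using the official OpenAI Python SDK, the equivalent change is pointing the client's `base_url` at `http://localhost:4200/v1`; everything else in your code stays the same.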

Quick start

# Install Stockyard
curl -fsSL https://stockyard.dev/install.sh | sh

# Set your OpenAI API key
export OPENAI_API_KEY=your-key-here

# Start the proxy
stockyard
# Provider: openai (from OPENAI_API_KEY)
# Proxy listening on :4200

# Send a request through the proxy
curl http://localhost:4200/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-4o","messages":[{"role":"user","content":"hello"}]}'

Good to know

OpenAI models are auto-detected from the model name in the request. No extra config needed beyond the API key.

Direct OpenAI vs through Stockyard

DIRECT TO OPENAI

Requests go straight to api.openai.com. No cost visibility until the monthly invoice. No caching. No failover. No audit trail.

THROUGH STOCKYARD

Every request logged with cost. Identical prompts cached automatically. If OpenAI goes down, failover to Claude or Gemini. Hash-chained audit trail for compliance.

Route OpenAI through Stockyard in under 60 seconds.

Install Guide

All 16 providers · Proxy-only mode · What is an LLM proxy? · vs LiteLLM · vs Helicone
