Migration Guide
Switch from LiteLLM, Helicone, Portkey, or direct API calls.
From direct API calls
If your application calls OpenAI, Anthropic, or other providers directly, migration is one line:
# Before
OPENAI_BASE_URL=https://api.openai.com/v1

# After
OPENAI_BASE_URL=http://localhost:4200/v1
Your application code does not change. Stockyard speaks the OpenAI-compatible API. Requests are proxied to the original provider with cost tracking, caching, and all middleware applied.
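A minimal sketch of why the code stays the same: if the request URL is built from OPENAI_BASE_URL, flipping that one variable redirects traffic through the proxy. The payload and stdlib request construction here are illustrative; real applications would typically use an SDK that reads the same variable.

```python
import json
import os
import urllib.request


def build_chat_request(payload: dict) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request.

    The base URL comes from OPENAI_BASE_URL, so pointing the same
    code at a local proxy is purely a configuration change.
    """
    base = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
    return urllib.request.Request(
        url=f"{base.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )


# Same call, different destination once OPENAI_BASE_URL is flipped:
os.environ["OPENAI_BASE_URL"] = "http://localhost:4200/v1"
req = build_chat_request(
    {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "hi"}]}
)
print(req.full_url)  # http://localhost:4200/v1/chat/completions
```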
From LiteLLM
LiteLLM and Stockyard both expose /v1/chat/completions. The migration path:
1. Install Stockyard and set your provider API keys as environment variables.
2. Change your application's base URL from the LiteLLM endpoint to Stockyard's endpoint.
3. Remove Postgres and Redis if they were only used by LiteLLM.
# Before (LiteLLM)
OPENAI_BASE_URL=http://litellm:4000/v1

# After (Stockyard)
OPENAI_BASE_URL=http://localhost:4200/v1
Virtual keys in LiteLLM map to team API keys in Stockyard. Budget limits map to spend caps. See Stockyard vs LiteLLM for a full feature comparison.
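As a sketch of the key mapping above: LiteLLM virtual keys carry fields such as team_id, max_budget, and an allowed-model list, which translate naturally to a team entry with a spend cap. The Stockyard-side field names here are assumptions for illustration; check the Stockyard configuration reference for the actual schema.

```python
def litellm_key_to_stockyard_team(vk: dict) -> dict:
    """Translate a LiteLLM virtual-key record into a Stockyard team entry.

    The output field names ("team", "spend_cap_usd", "models") are
    illustrative, not Stockyard's documented schema.
    """
    return {
        "team": vk.get("team_id", "default"),
        "spend_cap_usd": vk.get("max_budget"),  # budget limit -> spend cap
        "models": vk.get("models", []),         # allowed-model list carries over
    }


team = litellm_key_to_stockyard_team(
    {"team_id": "search", "max_budget": 250.0, "models": ["gpt-4o"]}
)
```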
From Helicone
Helicone proxies requests through its own domain and configures them via an injected Helicone-Auth header. With Stockyard, you change the base URL instead:
# Before (Helicone)
OPENAI_BASE_URL=https://oai.helicone.ai/v1
HELICONE_AUTH=Bearer sk-helicone-...

# After (Stockyard)
OPENAI_BASE_URL=http://localhost:4200/v1
Remove the Helicone-Auth header from your requests. Stockyard's Lookout replaces Helicone's observability dashboard. See Stockyard vs Helicone.
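If your request code assembles headers in one place, removing the Helicone-specific ones is a small filter. This sketch assumes Helicone's convention of Helicone-* header names (Helicone-Auth, Helicone-Cache-Enabled, etc.); the example values are placeholders.

```python
def strip_helicone_headers(headers: dict) -> dict:
    """Drop Helicone-specific headers when moving to Stockyard.

    Helicone configures behavior through Helicone-* request headers;
    Stockyard needs none of them, only the new base URL.
    """
    return {
        k: v for k, v in headers.items()
        if not k.lower().startswith("helicone-")
    }


clean = strip_helicone_headers({
    "Authorization": "Bearer sk-...",
    "Helicone-Auth": "Bearer sk-helicone-...",
    "Helicone-Cache-Enabled": "true",
})
# Only the Authorization header remains.
```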
From Portkey
If you use Portkey's gateway (self-hosted), the migration is a base URL change. If you use Portkey's cloud, you also gain data residency since requests no longer leave your infrastructure:
# Before (Portkey)
OPENAI_BASE_URL=https://api.portkey.ai/v1

# After (Stockyard)
OPENAI_BASE_URL=http://localhost:4200/v1
See Stockyard vs Portkey for feature mapping.
Zero-downtime migration
For production migrations, run both proxies in parallel:
1. Start Stockyard alongside your existing proxy.
2. Route a percentage of traffic to Stockyard using a load balancer or feature flag.
3. Compare costs, latency, and error rates in both systems.
4. When satisfied, cut over fully and decommission the old proxy.
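Step 2 can be sketched as a deterministic percentage router, assuming you split traffic in application code rather than at a load balancer. Hashing the user id pins each user to one proxy, which keeps the cost and latency comparison in step 3 clean; the endpoint URLs are taken from the examples above.

```python
import hashlib


def route(base_urls: dict, user_id: str, stockyard_pct: int) -> str:
    """Route a fixed percentage of users to Stockyard, deterministically.

    Each user hashes to a stable bucket in [0, 100), so the same user
    always hits the same proxy while the rollout percentage ramps up.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return base_urls["stockyard"] if bucket < stockyard_pct else base_urls["old"]


urls = {"old": "http://litellm:4000/v1", "stockyard": "http://localhost:4200/v1"}
# Start small, then raise the percentage as metrics hold up.
target = route(urls, "user-42", 10)
```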