# Migrate from Direct LLM Calls
Migrating to Rivano requires only two changes: update your LLM client’s base URL and add the Rivano authorization header. Your existing request and response handling code stays the same.
## What changes
| Before | After |
|---|---|
| Base URL points directly to the LLM provider | Base URL points to Rivano gateway |
| Provider API key in Authorization header | Provider key managed by Rivano; Rivano API key in header |
| No tracing, policies, or cost tracking | Automatic tracing, policy enforcement, cost calculation |
## Step 1: Register your provider in Rivano
Add your LLM provider API key to Rivano. This key is encrypted and used by the gateway when forwarding requests:
```bash
curl -X POST https://api.rivano.ai/api/providers \
  -H "Authorization: Bearer rv_api_..." \
  -H "Content-Type: application/json" \
  -d '{
    "name": "openai-prod",
    "provider": "openai",
    "apiKey": "sk-..."
  }'
```
Or via the dashboard: Settings → Providers → + New Provider.
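If you prefer to register providers from a script, the curl call above translates directly to `fetch`. This is a minimal sketch; `buildRegistrationRequest` and `registerProvider` are illustrative helper names, and `RIVANO_API_KEY` is assumed to hold an api-scoped key:

```typescript
// Programmatic equivalent of the curl call above: register a provider
// with the Rivano management API. Rivano stores the key encrypted and
// uses it when the gateway forwards requests.
interface ProviderRegistration {
  name: string;     // your label for this provider, e.g. "openai-prod"
  provider: string; // "openai", "anthropic", "google", or "azure"
  apiKey: string;   // the provider's own key; your clients never send it again
}

function buildRegistrationRequest(reg: ProviderRegistration) {
  return {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.RIVANO_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(reg),
  };
}

async function registerProvider(reg: ProviderRegistration): Promise<void> {
  const res = await fetch(
    'https://api.rivano.ai/api/providers',
    buildRegistrationRequest(reg),
  );
  if (!res.ok) throw new Error(`Provider registration failed: ${res.status}`);
}
```

Called as, for example, `await registerProvider({ name: 'openai-prod', provider: 'openai', apiKey: 'sk-...' })`.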
## Step 2: Update your LLM client
The only code change is the base URL (and header, if you were passing the provider key directly):
### OpenAI

```typescript
import OpenAI from 'openai';

// Before
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// After — change baseURL and add the Rivano header
const client = new OpenAI({
  apiKey: 'placeholder', // not used — Rivano manages the provider key
  baseURL: 'https://gateway.rivano.ai/openai/v1',
  defaultHeaders: {
    'Authorization': `Bearer ${process.env.RIVANO_API_KEY}`,
  },
});
```

### Anthropic
```typescript
import Anthropic from '@anthropic-ai/sdk';

// Before
const client = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// After
const client = new Anthropic({
  apiKey: 'placeholder',
  baseURL: 'https://gateway.rivano.ai/anthropic',
  defaultHeaders: {
    'Authorization': `Bearer ${process.env.RIVANO_API_KEY}`,
  },
});
```

### Google (Gemini)
```typescript
// The Gemini SDK (@google/generative-ai) does not support custom base
// URLs natively. Use the REST API directly through Rivano instead:
const response = await fetch(
  'https://gateway.rivano.ai/google/v1beta/models/gemini-pro:generateContent',
  {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.RIVANO_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      contents: [{ parts: [{ text: 'Hello, world.' }] }],
    }),
  },
);
```

## Step 3: Verify traces appear
After updating the base URL, make a test request and confirm it appears in Observability → Traces. You should see the trace within a few seconds.
```bash
# Quick smoke test
curl https://gateway.rivano.ai/openai/v1/chat/completions \
  -H "Authorization: Bearer rv_ingest_..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say hello."}]
  }'
```
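The same smoke test can run from application code. A sketch assuming `RIVANO_API_KEY` holds an ingest-scoped key; `buildSmokeTestBody` and `smokeTest` are illustrative helper names:

```typescript
// Send one chat completion through the gateway. A 200 response means
// the proxy path works, and the trace should appear within seconds.
function buildSmokeTestBody(): string {
  return JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Say hello.' }],
  });
}

async function smokeTest(): Promise<void> {
  const res = await fetch('https://gateway.rivano.ai/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.RIVANO_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: buildSmokeTestBody(),
  });
  if (!res.ok) throw new Error(`Gateway returned ${res.status}`);
  console.log('Smoke test passed; check Observability → Traces.');
}
```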
Open the dashboard, go to Observability → Traces, and confirm the trace appears with the correct agent, model, and status.
If traces are not appearing, check that you are using an ingest-scoped key (`rv_ingest_...`). An api-scoped key (`rv_api_...`) is for the management API, not for proxy requests.
## Provider-specific notes
- **OpenAI** — All endpoints under `/v1/` are supported, including streaming, embeddings, and function calling.
- **Anthropic** — The Messages API is fully supported. The legacy Completions API is not proxied.
- **Google Gemini** — The REST API is supported. The native Gemini SDK requires HTTP interception (see above). Streaming is supported.
- **Azure OpenAI** — Use the `azure` provider type when registering. The gateway rewrites the URL to your Azure endpoint.
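Because streaming is proxied for OpenAI, a streaming request needs nothing beyond `stream: true` in the request body. A raw-REST sketch that prints the server-sent-events chunks as they arrive (helper names are illustrative, and `RIVANO_API_KEY` is assumed to hold an ingest-scoped key):

```typescript
// Stream a chat completion through the gateway using the REST API.
// The gateway passes OpenAI's server-sent-events stream through.
function buildStreamBody(prompt: string): string {
  return JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: prompt }],
    stream: true, // tokens arrive incrementally as SSE "data:" lines
  });
}

async function streamThroughGateway(prompt: string): Promise<void> {
  const res = await fetch('https://gateway.rivano.ai/openai/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.RIVANO_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: buildStreamBody(prompt),
  });
  if (!res.ok || !res.body) throw new Error(`Gateway returned ${res.status}`);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value)); // raw SSE chunks
  }
}
```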
## Rolling back
To revert, change the base URL back to the original provider URL and restore the provider API key in your application. No data is deleted in Rivano.
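One way to keep rollback a configuration change rather than a code change is to branch the client options on a flag. A sketch for the OpenAI client; `clientConfig` and the `USE_RIVANO` flag are hypothetical names, not part of Rivano:

```typescript
// Build OpenAI client options for either path. With useRivano = false
// the app talks to the provider directly, exactly as before migration.
function clientConfig(useRivano: boolean) {
  return useRivano
    ? {
        apiKey: 'placeholder', // unused; Rivano manages the provider key
        baseURL: 'https://gateway.rivano.ai/openai/v1',
        defaultHeaders: { Authorization: `Bearer ${process.env.RIVANO_API_KEY}` },
      }
    : { apiKey: process.env.OPENAI_API_KEY }; // direct to the provider
}

// e.g. new OpenAI(clientConfig(process.env.USE_RIVANO === 'true'))
```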
## Related
- Gateway Overview — Gateway configuration and routing
- Hybrid Deployment — Self-hosted gateway with cloud control plane
- Security Policies — Apply guardrails after migration
- Cost Tracking — Set up budgets after traces are flowing