What is Rivano

Rivano is an AI platform that lets you deploy, govern, and observe AI agents without changing your existing LLM code. You point your application at the Rivano proxy, and every request flows through a configurable middleware pipeline before reaching your LLM provider.
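To make "middleware pipeline" concrete, here is the general shape of the idea: a chain of functions that each inspect or transform a request before handing it to the next stage. This sketch is purely illustrative; none of the names below come from Rivano's actual internals or API.

```typescript
// Illustrative only: the general shape of a middleware pipeline,
// not Rivano's actual internals or API.
type Req = { model: string; prompt: string };
type Middleware = (req: Req) => Req;

// Compose stages left to right: each stage's output feeds the next.
const pipeline = (...stages: Middleware[]): Middleware =>
  (req) => stages.reduce((r, stage) => stage(r), req);

// Two toy stages: redact email-like tokens, then cap prompt length.
const redact: Middleware = (req) => ({
  ...req,
  prompt: req.prompt.replace(/\S+@\S+/g, '[REDACTED]'),
});
const capPrompt: Middleware = (req) => ({
  ...req,
  prompt: req.prompt.slice(0, 2000),
});

const handle = pipeline(redact, capPrompt);
const out = handle({ model: 'gpt-4o', prompt: 'Email me at a@b.com' });
// out.prompt is now 'Email me at [REDACTED]'
```

Because each stage has the same signature, stages can be reordered or swapped without touching the application that sends the request, which is the property the proxy model relies on.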

Quickstart preview

The fastest way to see Rivano in action is to point your OpenAI SDK at the Rivano gateway. No new SDK required.

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: 'http://localhost:8080/openai/v1',
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Summarize this contract.' }],
});

console.log(response.choices[0].message.content);

The gateway records the request as a trace, evaluates it against your policies, and forwards it to OpenAI. The response comes back as usual; nothing in your existing application code needs to change.

Architecture

Your App ──► Rivano Gateway ──► LLM Provider (OpenAI / Anthropic / ...)
                  │
                  ▼
            Control Plane
           (api.rivano.ai)
                  │
                  ▼
             Dashboard
           (app.rivano.ai)

Rivano Gateway — A self-hosted proxy (or managed cloud endpoint) that sits between your application and any LLM provider. It enforces policies, detects PII, scores injection risk, and logs every request as a trace.

Control Plane — The hosted API at api.rivano.ai. Manages agents, policies, teams, costs, compliance reports, and alerting. The gateway can run fully offline or connect to the control plane for cloud sync.
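The offline-or-sync choice is a deployment decision, so it would plausibly live in the gateway's own configuration. The fragment below is only a sketch of that idea; the file layout and every key name are assumptions, not documented Rivano settings.

```yaml
# Hypothetical gateway config; key names are illustrative, not official.
gateway:
  listen: 0.0.0.0:8080
control_plane:
  enabled: false            # fully offline: no data leaves your network
  # enabled: true           # or sync traces and policies to the cloud
  # endpoint: https://api.rivano.ai
```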

Dashboard — A web UI at app.rivano.ai where you view traces, manage policies, inspect costs, and configure governance.

Three pillars

Build

Define AI agents as named, versioned workloads in a rivano.yaml file. Deploy them with rivano deploy. Roll back to any previous configuration instantly.
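The actual rivano.yaml schema is not shown in this overview, so the following is only a guess at its general shape (a named, versioned agent bound to a provider and model), not a documented format.

```yaml
# Hypothetical rivano.yaml; this schema is an assumption for illustration.
agents:
  - name: contract-summarizer
    version: 3
    provider: openai
    model: gpt-4o
```

Under this sketch, rivano deploy would push the file's configuration, and rolling back would mean redeploying an earlier version of it.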

Govern

Write declarative policies that fire on request or response. Block prompt injection. Redact PII from responses. Warn when token counts exceed a threshold. Apply the foundational policy pack with one command.
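The three policy examples in this paragraph could be expressed in a declarative syntax along these lines. The keys and values below are a hypothetical sketch, not Rivano's confirmed policy schema.

```yaml
# Hypothetical policy definitions; key names are illustrative only.
policies:
  - name: block-prompt-injection
    on: request
    when: injection_score > 0.8
    action: block
  - name: redact-pii
    on: response
    action: redact_pii
  - name: token-budget-warning
    on: request
    when: prompt_tokens > 8000
    action: warn
```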

Observe

Every proxied request becomes a trace with full span detail — model, latency, token counts, cost, and quality scores. Aggregate stats let you see cost by agent, error rates by provider, and quality trends over time.
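To make the shape of a trace concrete, here is a sketch of the fields the paragraph above lists, written as a TypeScript type, plus one of the aggregations it mentions (cost by agent). The field names are assumptions chosen for illustration, not Rivano's actual trace schema.

```typescript
// Hypothetical trace record; field names are illustrative, not Rivano's schema.
interface Trace {
  agent: string;
  model: string;
  latencyMs: number;
  promptTokens: number;
  completionTokens: number;
  costUsd: number;
  qualityScore: number; // e.g. 0..1
}

// Sum cost per agent across a batch of traces.
function costByAgent(traces: Trace[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const t of traces) {
    totals[t.agent] = (totals[t.agent] ?? 0) + t.costUsd;
  }
  return totals;
}

const totals = costByAgent([
  { agent: 'summarizer', model: 'gpt-4o', latencyMs: 820,
    promptTokens: 1200, completionTokens: 300, costUsd: 0.012, qualityScore: 0.9 },
  { agent: 'summarizer', model: 'gpt-4o', latencyMs: 640,
    promptTokens: 900, completionTokens: 250, costUsd: 0.009, qualityScore: 0.85 },
  { agent: 'router', model: 'gpt-4o', latencyMs: 120,
    promptTokens: 200, completionTokens: 20, costUsd: 0.001, qualityScore: 0.95 },
]);
```

The same record shape supports the other aggregates mentioned (error rates by provider, quality trends over time) by grouping on different fields.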

💡 You do not need to instrument your application code. Rivano captures observability data at the proxy layer automatically.

Who Rivano is for

  • Teams shipping AI features who need visibility into what their agents are doing in production
  • Platform and DevOps engineers who want to enforce guardrails without coupling policies to application code
  • Compliance-conscious organizations that need audit trails and framework reports (SOC 2, GDPR, ISO 27001)
  • Cost-sensitive teams who want per-agent cost breakdowns and budget alerts before a bill arrives

Next steps

  • Quickstart — Send your first proxied request in 5 minutes
  • Installation — Install the SDK, CLI, and gateway
  • Core Concepts — Understand agents, policies, traces, and providers