# KairosRoute vs Portkey
Portkey is an observability + caching gateway. KairosRoute is a routing + observability gateway. The wedge is different.
- **KairosRoute: router-first.** We pick the cheapest model that clears a quality bar for the task; observability is a side effect of auditing that decision per request.
- **Portkey: observability-first.** They put a logging + caching gateway in front of any model SDK; routing is configurable, not the wedge.
## Side-by-side
| | KairosRoute | Portkey |
|---|---|---|
| Wedge | Quality-gated routing | Observability + caching |
| OpenAI SDK compatibility | Yes | Yes |
| Per-request audit receipts | Yes (full funnel) | Logs (trace per request) |
| Quality-gated routing | Yes (built-in) | Configurable rules |
| Self-tuning per workspace | Yes (Business+) | No |
| Prompt caching | Yes (semantic) | Yes (semantic + simple) |
| Markup on managed keys | 4% flat | Per-tier subscription |
| BYOK markup | 0% | 0% |
## Where Portkey wins
- Mature observability layer. If your primary need is "log every LLM call with structured traces", Portkey is purpose-built.
- Built-in caching across providers. If your workload is heavy on cache-friendly prompts, that's a big win.
- Guardrails (PII redaction, content filters). We don't ship as many out of the box.
- Larger team, longer track record on enterprise observability features.
## Where KairosRoute wins
- Routing is the wedge, not a configurable extension. The classifier + quality-gate is on for every request unless you opt out.
- Per-request receipt funnel. Not just "we logged it" — we record which models passed, which got dropped, and why.
- Signal-loop tuning. After ~200 requests on Business+, routing weights are shaped by your workspace's reality, not a global default.
- Zero markup on BYOK on every tier. The 4% managed-key fee is flat and capped — no tier-based markup escalation.
- Smaller, simpler product surface. If you don't need 50+ guardrail integrations, you don't pay for them.
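To make the pricing rows concrete, here is a minimal sketch of the flat 4% managed-key fee versus 0% BYOK. The function name and the example spend are illustrative, and the fee cap mentioned above is omitted because its amount isn't specified here.

```typescript
// Illustrative sketch of the pricing rows above: managed keys add a flat 4%
// on provider spend, BYOK adds nothing, on every tier.
// (The cap on the managed-key fee is omitted; its amount isn't given here.)
const MANAGED_KEY_FEE_RATE = 0.04;

function gatewayFeeUsd(providerSpendUsd: number, byok: boolean): number {
  return byok ? 0 : providerSpendUsd * MANAGED_KEY_FEE_RATE;
}

// $500/month of provider spend: $20 fee on managed keys, $0 on BYOK.
const managedFee = gatewayFeeUsd(500, false); // 20
const byokFee = gatewayFeeUsd(500, true);     // 0
```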
## Migrating from Portkey
Both APIs are OpenAI-compatible: swap the base URL and API key and you're live. If you already use Portkey for observability and want quality-gated routing on top, you can run KairosRoute as the upstream and keep Portkey's existing integrations downstream.
**Before (Portkey):**

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.portkey.ai/v1',
  apiKey: process.env.PORTKEY_API_KEY,
  defaultHeaders: { 'x-portkey-virtual-key': 'openai-vk' },
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

**After (KairosRoute):**

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.kairosroute.com/v1',
  apiKey: process.env.KAIROSROUTE_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'auto', // hands model selection to the quality-gated router
  messages: [{ role: 'user', content: 'Hello' }],
});
```

## Try the playground
The playground runs 21 curated prompts in the browser, with the full routing decision and cost comparison for each. No signup, no card. Or sign up for the free tier and run your own traffic through the gateway.
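The chained setup from the migration section (Portkey stays in front for logging and caching, forwarding to KairosRoute as its upstream) might look like the sketch below. The `x-portkey-custom-host` header name is an assumption based on Portkey's custom-host convention; verify it against your Portkey version before relying on it.

```typescript
// Hypothetical chained config: Portkey remains the entrypoint, with its
// upstream pointed at the KairosRoute gateway.
// ASSUMPTION: `x-portkey-custom-host` follows Portkey's custom-host
// convention and may differ in your Portkey version.
const chainedConfig = {
  baseURL: 'https://api.portkey.ai/v1', // Portkey stays in front
  apiKey: '<PORTKEY_API_KEY>',          // placeholder; use your real key
  defaultHeaders: {
    'x-portkey-custom-host': 'https://api.kairosroute.com/v1',
  },
};
```

Pass `chainedConfig` to the OpenAI SDK constructor as in the migration snippets; requests then flow client → Portkey (logs, cache) → KairosRoute (routing) → provider.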