# KairosRoute vs OpenRouter
OpenRouter aggregates models; KairosRoute routes them. Same OpenAI-compatible API surface, different products.

With KairosRoute, you set model="auto" and we pick the cheapest model that clears a quality bar for the task. Every decision is audited in a receipt. Zero markup on BYOK; a flat 4% on managed-key traffic.

With OpenRouter, you pick the model and OpenRouter passes the call through to the provider. They charge a 5.5% fee on credit purchases on top of provider rates. The catalog is broader (300+ models).
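The quality-gated routing described above boils down to a two-step filter-then-minimize. A conceptual sketch (the model names, scores, and prices here are made up for illustration, not KairosRoute's actual internals):

```typescript
// Illustrative only: filter candidates that clear the quality bar
// for the task category, then take the cheapest survivor.
interface Candidate {
  model: string;
  qualityScore: number; // 0..1, per task category
  costPerMTok: number;  // USD per million tokens
}

function pickModel(candidates: Candidate[], qualityBar: number): string | null {
  const cleared = candidates.filter((c) => c.qualityScore >= qualityBar);
  if (cleared.length === 0) return null; // nothing clears the bar
  cleared.sort((a, b) => a.costPerMTok - b.costPerMTok);
  return cleared[0].model;
}

const pool: Candidate[] = [
  { model: 'frontier-xl', qualityScore: 0.97, costPerMTok: 15.0 },
  { model: 'mid-tier',    qualityScore: 0.91, costPerMTok: 3.0 },
  { model: 'small-fast',  qualityScore: 0.74, costPerMTok: 0.4 },
];

console.log(pickModel(pool, 0.9)); // 'mid-tier': cheapest model that clears the bar
```

The point of the sketch: the frontier model is never chosen just because it is best, only when nothing cheaper clears the bar.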
## Side-by-side
| | KairosRoute | OpenRouter |
|---|---|---|
| Catalog size | 45+ models | 300+ models |
| Routing strategy | Quality-gated, classifier-driven | Manual model selection |
| Audit trail per request | Yes — full receipt with funnel | No |
| Markup on BYOK | 0% | 5.5% credit-purchase fee |
| Markup on managed keys | 4% flat | 5.5% credit-purchase fee |
| Self-tuning per workspace | Yes (Business+) | No |
| Multi-key provider pool | Yes (cooling + rotation) | No |
| OpenAI SDK compatibility | Yes | Yes |
| Free tier | 100K BYOK + $5 trial | Free credits, no subscription |
## Where OpenRouter wins
- Catalog breadth — if you specifically need an obscure or niche model, OpenRouter probably has it before we do.
- Pure passthrough billing — if your workload is "always pin GPT-5", a 5.5% fee on credit purchase may be cheaper than our 4% per-request fee plus a subscription.
- No subscription — pay-as-you-go without a base seat. Useful for hobby workloads.
## Where KairosRoute wins
- You don't pick the model. The classifier categorizes the prompt (97.6% accuracy on our seed eval) and the router picks the cheapest model that clears a quality bar for that category.
- Per-request receipts. Every decision logs the candidate pool, why each model passed or got dropped, and what the frontier would have cost. Auditable.
- Zero markup on BYOK forever. If you bring your own provider keys, providers bill you direct at provider rates — we only charge the gateway fee against your plan.
- Signal-loop tuning on Business+. The router's quality scores adapt to your workspace's observed traffic after ~200 requests. Stops being our opinion, starts being your reality.
- Multi-key provider pool with cooling and rotation. One rate-limit on OpenAI doesn't take the gateway down.
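The multi-key pool can be sketched as a cooldown-aware round-robin: a key that hits a provider rate limit is benched for a cooling period and skipped until it expires. This is an illustrative sketch of the technique, not KairosRoute's actual implementation:

```typescript
// Illustrative multi-key pool with cooling and rotation.
interface PooledKey {
  key: string;
  cooledUntil: number; // epoch ms; 0 = available
}

class KeyPool {
  private cursor = 0;
  constructor(private keys: PooledKey[], private cooldownMs: number) {}

  // Round-robin over keys, skipping any that are still cooling.
  next(now: number = Date.now()): string | null {
    for (let i = 0; i < this.keys.length; i++) {
      const k = this.keys[(this.cursor + i) % this.keys.length];
      if (k.cooledUntil <= now) {
        this.cursor = (this.cursor + i + 1) % this.keys.length;
        return k.key;
      }
    }
    return null; // every key is cooling
  }

  // Called when the provider returns 429 for this key.
  cool(key: string, now: number = Date.now()): void {
    const k = this.keys.find((p) => p.key === key);
    if (k) k.cooledUntil = now + this.cooldownMs;
  }
}

const keyPool = new KeyPool(
  [{ key: 'sk-a', cooledUntil: 0 }, { key: 'sk-b', cooledUntil: 0 }],
  60_000, // 60s cooldown
);
const first = keyPool.next(0);  // 'sk-a'
keyPool.cool('sk-a', 0);        // rate-limited: bench it for 60s
const second = keyPool.next(0); // 'sk-b': the gateway keeps serving
```

One rate-limited key rotates out of the pool instead of failing the request, which is the property the bullet above describes.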
## Migrating from OpenRouter
Both APIs are OpenAI-compatible. Migrating from OpenRouter is a base-URL swap and an API-key swap. Pin a specific model on either side and the call is identical. Use model="auto" on KairosRoute to opt into routing.
```ts
// Before: OpenRouter
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY,
});

const response = await client.chat.completions.create({
  model: 'anthropic/claude-3.5-sonnet',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

```ts
// After: KairosRoute
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.kairosroute.com/v1',
  apiKey: process.env.KAIROSROUTE_API_KEY,
});

// Either pin a model, or let auto-routing pick:
const response = await client.chat.completions.create({
  model: 'auto',
  messages: [{ role: 'user', content: 'Hello' }],
});
```

## Try the playground
21 curated prompts, each with the full routing decision and cost comparison, live in the browser. No signup, no card. Or sign up for the free tier and run your own traffic through the gateway.