## Key Features

- **300+ Models**: OpenAI, Anthropic, Google, Meta, Mistral, DeepSeek, Cohere, and a whole lot more. All from one endpoint.
- **Cost Savings**: Smart model selection and response caching that actually saves you money. Your wallet will thank you.
- **Smart Routing**: Picks the best provider based on availability, latency, and cost. No more babysitting your API calls.
- **Automatic Failover**: Provider goes down? Requests get rerouted automatically. Zero downtime, zero drama.
- **Response Caching**: Same request twice? Get it from cache. Faster responses, lower costs. Configurable TTL and cache keys.
- **Usage Analytics**: Token usage, costs, latency, error rates. All your models, all your providers, one dashboard.
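To make the routing and failover behavior concrete, here is a simplified client-side sketch of the idea. The provider fields, scoring weights, and function names are illustrative assumptions; the real router does all of this server-side.

```python
# Illustrative sketch of score-based routing with automatic failover.
# Provider records and the scoring formula are made up for this example;
# the actual router's selection logic is internal.

def route_request(providers, send):
    """Try providers in score order; fall back to the next on failure."""
    # Lower score is better: weigh latency and cost, skip unavailable providers.
    ranked = sorted(
        (p for p in providers if p["available"]),
        key=lambda p: p["latency_ms"] + p["cost_per_1k_tokens"] * 1000,
    )
    for provider in ranked:
        try:
            return send(provider["name"])
        except ConnectionError:
            continue  # automatic failover: reroute to the next-best provider
    raise RuntimeError("all providers failed")
```

The point is the shape of the logic: rank by availability, latency, and cost, then walk down the list until a provider answers.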
## Supported Providers
| Provider | Example Models |
|---|---|
| OpenAI | openai/gpt-4o, openai/gpt-4o-mini, openai/o1, openai/o3-mini |
| Anthropic | anthropic/claude-sonnet-4-20250514, anthropic/claude-3-5-haiku-20241022 |
| Google | google/gemini-2.5-pro-preview-06-05, google/gemini-2.0-flash |
| Meta | meta-llama/llama-4-maverick, meta-llama/llama-3.3-70b-instruct |
| DeepSeek | deepseek/deepseek-r1, deepseek/deepseek-chat |
| Mistral | mistralai/mistral-large-latest, mistralai/codestral-latest |
| Cohere | cohere/command-r-plus, cohere/command-r |
| Amazon | amazon/nova-pro-v1, amazon/nova-lite-v1 |
This is just a taste. Check out the full menu on the Available Models page.
## Model Format

Models use the `provider/model-name` format.
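For example, using model IDs from the table above:

```
openai/gpt-4o
anthropic/claude-3-5-haiku-20241022
meta-llama/llama-3.3-70b-instruct
```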
## How It Works

1. **Send a request**: Make an OpenAI-compatible API call to `https://api.eachlabs.ai/v1` with your Eachlabs API key. That's it.
2. **LLM Router does its thing**: Validates your request, checks the cache, picks the best provider. All in milliseconds.
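A minimal sketch of what that request looks like. Since the endpoint is OpenAI-compatible, this assumes the standard `/chat/completions` path and bearer-token auth; `YOUR_API_KEY` is a placeholder, and you should confirm the exact path against the API reference.

```python
import json

def build_chat_request(model: str, prompt: str, api_key: str):
    """Build an OpenAI-style chat completion request for the router."""
    # Assumed path: the OpenAI convention appended to the base URL from the docs.
    url = "https://api.eachlabs.ai/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,  # provider/model-name format
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_chat_request(
    "openai/gpt-4o-mini", "Hello!", "YOUR_API_KEY"
)
```

Any OpenAI-compatible SDK should also work by pointing its base URL at `https://api.eachlabs.ai/v1`.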