Point any OpenAI-compatible SDK to the each::labs base URL:
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_EACHLABS_API_KEY",
    base_url="https://api.eachlabs.ai/v1",
)
3. Make your first request
Use the provider/model-name format to pick any model:
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
)
print(response.choices[0].message.content)
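Because the API is OpenAI-compatible, the SDK call above is just a standard HTTP request. Here's a minimal sketch with only the standard library, assuming the usual OpenAI-style `POST /chat/completions` path and Bearer auth (check the API reference if your setup differs):

```python
import json
import urllib.request

BASE_URL = "https://api.eachlabs.ai/v1"

def build_request(api_key: str, model: str, content: str) -> urllib.request.Request:
    # Same payload shape the SDK sends to POST /chat/completions.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("YOUR_EACHLABS_API_KEY", "openai/gpt-4o", "What is the capital of France?")
print(req.full_url)
# Send it with urllib.request.urlopen(req) and json-decode the body.
```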
Streaming is not supported yet. Yeah, we know. It’s painful. We’re on it and it’ll be here soon. For now, all requests return the complete response at once.
Once streaming lands, you’ll use it like this:
# Coming soon
stream = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about coding"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
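Until streaming lands, you can approximate incremental output on the client side by chunking the complete response yourself. This is purely cosmetic (the full reply has already arrived), and `fake_stream` is our own helper, not part of any SDK:

```python
def fake_stream(text: str, chunk_size: int = 8):
    # Yield an already-complete response in small pieces,
    # mimicking the shape of a streaming loop.
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

# Stand-in for response.choices[0].message.content:
full_reply = "Loops within loops spin / ..."
for piece in fake_stream(full_reply):
    print(piece, end="")
print()
```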