
Providers

exagent supports OpenAI and Anthropic out of the box. Both use the same interface — swap one line to switch.


Choosing a provider

# OpenAI
self.set_model("openai", "gpt-4.1-mini")

# Anthropic
self.set_model("anthropic", "claude-sonnet-4-6")

set_model takes the provider name and any model string the underlying SDK accepts.


Supported models

OpenAI

Model Notes
gpt-4.1 Most capable
gpt-4.1-mini Fast, cost-effective — good default
gpt-4o Multimodal

Uses the OpenAI Responses API (client.responses.create).

Anthropic

Model Notes
claude-opus-4-6 Most capable
claude-sonnet-4-6 Balanced speed and quality
claude-haiku-4-5-20251001 Fastest, lowest cost

Uses the Anthropic Messages API (client.messages.create).


Provider-specific options

Any keyword argument passed to set_model is forwarded to the SDK client or request.

self.set_model(
    "openai",
    "gpt-4.1",
    api_key="sk-...",          # override key
)
self.set_model(
    "anthropic",
    "claude-sonnet-4-6",
    api_key="sk-ant-...",      # override key
    max_tokens=4096,           # default is 1024
)

API key resolution

exagent resolves API keys in this order:

  1. api_key kwarg passed to set_model
  2. Environment variable (OPENAI_API_KEY or ANTHROPIC_API_KEY)
  3. .env file in the working directory

If none are found, a ValueError is raised with a clear message.


Using different providers per agent

Each agent has its own model configuration. You can use different providers in the same program:

class ResearchAgent(Agent):
    def __init__(self):
        self.set_model("anthropic", "claude-opus-4-6")  # powerful model for research
        ...

class SummaryAgent(Agent):
    def __init__(self):
        self.set_model("openai", "gpt-4.1-mini")  # fast model for summarising
        ...

This is especially useful with the Orchestrator — each specialist agent can use whichever model suits its task.


Adding a custom provider

Subclass BaseProvider and implement generate(). Optionally implement stream() for streaming support.

from exagent import BaseProvider
from exagent.types import ProviderResponse, ToolCall

class MyProvider(BaseProvider):
    def generate(self, history, system=None, tools=None) -> ProviderResponse:
        # call your LLM here
        ...
        return ProviderResponse(
            text="response text",
            tool_calls=[],
            assistant_message={"role": "assistant", "content": [...]},
            stop_reason="end_turn",
        )

Then register it:

from exagent.providers import get_provider

# monkey-patch or subclass get_provider to return your provider

Note

Custom provider support via the registry is not yet part of the public API. The BaseProvider interface is stable.
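Until registry support lands, a custom provider can still be exercised directly. Below is a self-contained sketch: BaseProvider and ProviderResponse are stubbed as minimal stand-ins so the example runs on its own (in a real project you would import them from exagent as shown above), and the "LLM" is a trivial echo backend:

```python
from dataclasses import dataclass, field

@dataclass
class ProviderResponse:  # stand-in for exagent.types.ProviderResponse
    text: str
    tool_calls: list = field(default_factory=list)
    assistant_message: dict = field(default_factory=dict)
    stop_reason: str = "end_turn"

class BaseProvider:  # stand-in for exagent.BaseProvider
    def generate(self, history, system=None, tools=None) -> ProviderResponse:
        raise NotImplementedError

class EchoProvider(BaseProvider):
    """A toy provider that echoes the most recent user message."""

    def generate(self, history, system=None, tools=None) -> ProviderResponse:
        # Find the last user message in the conversation history
        last_user = next(
            (m["content"] for m in reversed(history) if m["role"] == "user"),
            "",
        )
        text = f"echo: {last_user}"
        return ProviderResponse(
            text=text,
            tool_calls=[],
            assistant_message={"role": "assistant", "content": text},
            stop_reason="end_turn",
        )

provider = EchoProvider()
reply = provider.generate([{"role": "user", "content": "hello"}])
```

The key point is the shape of the contract: generate receives the message history (plus optional system prompt and tools) and must return a ProviderResponse with the fields shown in the interface above.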


Next

Shaping behaviour with skills →