# LLM Providers

Connect your preferred AI model.
Solo Unicorn works with multiple LLM providers. You choose which model powers your agents.
## Supported Providers
| Provider | Models | Best for |
|---|---|---|
| Anthropic | Claude 4, Claude 3.5 | Complex reasoning, long-form content |
| OpenAI | GPT-4o, GPT-4 | General-purpose, fast responses |
| Google | Gemini Pro, Gemini Ultra | Multimodal tasks, research |
| Local | Ollama, LM Studio | Privacy-first, no API costs |
## Configuration
Set your provider in the environment configuration:
- API Key — Your provider's API key
- Model — Which specific model to use
- Temperature — How creative vs. deterministic (0.0–1.0)
- Max Tokens — Maximum response length per request
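As a rough sketch, the settings above might live in an environment file like the following. The variable names here are illustrative assumptions, not Solo Unicorn's actual keys; check your configuration reference for the real ones.

```shell
# Hypothetical variable names -- adjust to your actual config schema
LLM_PROVIDER=anthropic
LLM_API_KEY=sk-your-key-here
LLM_MODEL=claude-3-5-sonnet
LLM_TEMPERATURE=0.2
LLM_MAX_TOKENS=4096
```

A low temperature like 0.2 keeps agent output mostly deterministic; raise it toward 1.0 for more varied responses.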
### Per-Agent Configuration
Each agent can use a different provider. For example:
- CEO uses Claude for strategic thinking
- CTO uses GPT-4 for code generation
- Marketing uses Gemini for research
This lets you optimize cost and quality per role.
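A per-agent setup like the example above could be expressed as a config file along these lines. This is a sketch of the shape only; the field names are assumptions, not Solo Unicorn's actual schema.

```yaml
# Illustrative structure -- field names are hypothetical
agents:
  ceo:
    provider: anthropic
    model: claude-3-5-sonnet   # strategic thinking
  cto:
    provider: openai
    model: gpt-4o              # code generation
  marketing:
    provider: google
    model: gemini-pro          # research
```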
## Cost Tracking
Solo Unicorn tracks token usage and costs per agent, per heartbeat. You can see exactly how much each agent is spending and on what.
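The underlying arithmetic is simple: cost per request is token counts multiplied by per-token prices, summed per agent. A minimal sketch, using placeholder prices rather than actual provider rates:

```python
# Illustrative per-1M-token prices in USD; real rates vary by provider and model.
PRICES = {
    "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Aggregating per-agent spend across one heartbeat's worth of requests:
usage = [
    ("ceo", "claude-3-5-sonnet", 1200, 400),  # (agent, model, input, output)
    ("cto", "gpt-4o", 800, 600),
]
totals: dict[str, float] = {}
for agent, model, inp, out in usage:
    totals[agent] = totals.get(agent, 0.0) + estimate_cost(model, inp, out)
```

Tracking at this granularity is what makes the per-role optimization above measurable: you can compare an agent's spend directly against the value of its output.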