Send a prompt through the full orchestration pipeline — intent classification, skill loading, model routing, agent loop, and response.
**Request parameters**

- `mode` (string, default `"auto"`): with `auto`, Theo classifies the prompt and selects the optimal engine automatically. Available modes:

| Mode | Description |
|---|---|
| `auto` | Classify prompt and route to best engine (default) |
| `fast` | Low-latency responses for simple queries |
| `think` | Deep reasoning for complex analysis |
| `code` | Code generation (Theo Code engine, extended output budget) |
| `image` | Image generation (Theo Create) |
| `video` | Video generation (async) |
| `research` | Deep web research with citations (async) |
| `roast` | Humorous, irreverent tone |
| `genui` | Generate interactive UI components (OpenUI Lang) |

- `stream` (boolean): when `true`, returns a `text/event-stream` response instead of JSON. See Streaming Completions.
- Skills: list available skill IDs via `GET /api/v1/skills`, or in the E.V.I. Canvas Input node. See Activating Skills via API for the full guide.
- Persona: `"theo"` selects the default Theo persona, `"none"` disables the persona (raw model output), and `{ "system_prompt": "You are..." }` supplies a custom system prompt.
- Engine overrides: keys are modes (e.g., `"code"`, `"think"`); values are Theo engine IDs (e.g., `"theo-1-reason"`, `"theo-1-flash"`). See List Models for valid engine IDs.
- `format` (string): `"theo"` for the default format, `"openai"` for OpenAI-compatible format.

**Response fields**

- Completion ID (prefixed with `cmpl_`)
- Object type, always `"completion"`
- The mode requested (e.g., `"auto"`)
- The mode the request was routed to (e.g., `"fast"`, `"think"`, `"code"`)

**OpenAI compatibility**

Set `format: "openai"` to receive responses in OpenAI's `chat.completions` format. This allows drop-in replacement in existing OpenAI-based applications: the response follows the `chat.completion` schema with `choices`, `usage`, and `model` fields.
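The request parameters can be sketched as a small payload builder. The field names (`prompt`, `mode`, `stream`, `format`) and the allowed mode values come from the parameter list; everything else here, including the idea of validating client-side before sending, is an illustrative assumption rather than part of the API contract.

```python
import json

# Mode values taken from the parameter documentation above.
ALLOWED_MODES = {
    "auto", "fast", "think", "code", "image",
    "video", "research", "roast", "genui",
}

def build_request(prompt: str, mode: str = "auto",
                  stream: bool = False, fmt: str = "theo") -> str:
    """Validate inputs and return the JSON body for a completion request.

    This is a client-side sketch: it rejects unknown modes/formats early
    instead of waiting for a 400 validation_error from the server.
    """
    if mode not in ALLOWED_MODES:
        raise ValueError(f"unknown mode: {mode!r}")
    if fmt not in {"theo", "openai"}:
        raise ValueError(f"unknown format: {fmt!r}")
    body = {"prompt": prompt, "mode": mode, "stream": stream, "format": fmt}
    return json.dumps(body)
```

Validating locally mirrors the server's `validation_error` behavior, so a typo in `mode` fails fast in the client rather than costing a round trip.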
Identical requests (scoped by `conversation_id`) are automatically cached. Identical requests return cached results instantly at zero cost. See Semantic Caching.
Cached responses include "_cached": true in the response body.
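A minimal sketch of detecting cache hits on the client. Only the `"_cached": true` marker comes from the documentation above; the surrounding response shape is illustrative.

```python
def is_cached(response: dict) -> bool:
    """Return True when a response body was served from the semantic cache.

    Cached responses carry "_cached": true; fresh responses omit the field,
    so a strict identity check against True handles both cases.
    """
    return response.get("_cached") is True
```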
| Status | Code | Description |
|---|---|---|
| 400 | validation_error | Invalid request body (missing prompt, invalid mode, etc.) |
| 401 | invalid_api_key | Missing or invalid API key |
| 402 | insufficient_credits | Account has insufficient balance |
| 404 | not_found | Conversation ID not found |
| 429 | rate_limit_exceeded | Too many requests — check Retry-After header |
| 500 | server_error | Internal server error |