## `theo.complete(request)`
Sends a prompt through the full orchestration pipeline and returns the complete response.
### CompletionRequest
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `prompt` | `string` | ✅ | — | The prompt text |
| `mode` | `ChatMode` | — | `"auto"` | Execution mode |
| `conversation_id` | `string` | — | — | Continue an existing conversation |
| `skills` | `string[]` | — | — | Skill slugs to activate |
| `tools` | `ToolDef[]` | — | — | Inline tool definitions |
| `persona` | `PersonaInput` | — | `"theo"` | Persona override |
| `temperature` | `number` | — | — | Sampling temperature |
| `max_iterations` | `number` | — | `8` | Max agent loop iterations |
| `stream` | `boolean` | — | `false` | Enable SSE streaming |
| `model_overrides` | `Record<string, string>` | — | — | Override the model per mode |
| `metadata` | `Record<string, unknown>` | — | — | Custom metadata |
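The table above can be expressed as a TypeScript shape. This is an illustrative sketch, not the SDK's published typings: the interface and the `withDefaults` helper are assumptions inferred from the Field, Type, and Default columns, and `ToolDef`/`PersonaInput` are left loose since their definitions are not shown here.

```typescript
// Hypothetical request shape inferred from the CompletionRequest table.
type ChatMode = string; // e.g. "auto"; the full set of modes is not listed here

interface CompletionRequest {
  prompt: string;                            // required
  mode?: ChatMode;                           // default "auto"
  conversation_id?: string;                  // continue an existing conversation
  skills?: string[];                         // skill slugs to activate
  tools?: unknown[];                         // ToolDef[] in the real SDK
  persona?: unknown;                         // PersonaInput; defaults to "theo"
  temperature?: number;                      // sampling temperature
  max_iterations?: number;                   // default 8
  stream?: boolean;                          // default false
  model_overrides?: Record<string, string>;  // override the model per mode
  metadata?: Record<string, unknown>;        // custom metadata
}

// Illustrative helper (not part of the SDK): fills in the documented defaults,
// letting caller-supplied fields win via the spread.
function withDefaults(req: CompletionRequest): CompletionRequest {
  return { mode: "auto", persona: "theo", max_iterations: 8, stream: false, ...req };
}
```

A minimal request only needs `prompt`; everything else falls back to the defaults in the table.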
## `theo.stream(request)`
Returns an `AsyncGenerator<StreamEvent>` for real-time token delivery.
### StreamEvent Types
| Type | Data | Description |
|---|---|---|
| `meta` | Model info, mode, skills | Emitted first |
| `token` | `{ token: string }` | Each generated token |
| `tool` | Tool call info | A tool was invoked |
| `artifact` | File/image data | Generated artifact |
| `genui_meta` | Component library info | GenUI mode metadata |
| `done` | Full response + usage | Final event |
| `error` | Error details | Processing error |
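A typical consumption loop dispatches on the event type and stops at `done` or `error`. The sketch below is an assumption built from the table: the `StreamEvent` union and the payload fields for `meta`, `done`, and `error` are inferred (only `token`'s payload is given verbatim), and a stub generator stands in for `theo.stream(request)` so the example is self-contained.

```typescript
// Hypothetical event union inferred from the StreamEvent table above.
type StreamEvent =
  | { type: "meta"; data: { model: string; mode: string; skills: string[] } }
  | { type: "token"; data: { token: string } }          // payload shown in the docs
  | { type: "tool"; data: unknown }                     // tool call info
  | { type: "artifact"; data: unknown }                 // file/image data
  | { type: "genui_meta"; data: unknown }               // GenUI mode metadata
  | { type: "done"; data: { response: string; usage: unknown } }
  | { type: "error"; data: { message: string } };

// Stub standing in for theo.stream(request); emits a plausible sequence.
async function* stubStream(): AsyncGenerator<StreamEvent> {
  yield { type: "meta", data: { model: "example-model", mode: "auto", skills: [] } };
  yield { type: "token", data: { token: "Hel" } };
  yield { type: "token", data: { token: "lo" } };
  yield { type: "done", data: { response: "Hello", usage: {} } };
}

// Accumulate token events into the final text; stop on "done" or "error".
async function collectTokens(events: AsyncGenerator<StreamEvent>): Promise<string> {
  let text = "";
  for await (const ev of events) {
    if (ev.type === "token") text += ev.data.token;
    if (ev.type === "done" || ev.type === "error") break;
  }
  return text;
}
```

Because the union is discriminated on `type`, TypeScript narrows `ev.data` inside each branch, so `ev.data.token` type-checks only in the `token` case.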
