The Developer’s Choice

You want AI in your product. You have options:
  1. Direct model APIs — Call a single provider directly
  2. Model proxy — Use a unified gateway for multi-model access
  3. Framework — Build your own pipeline with LangChain / LangGraph
  4. Orchestration API — Use Theo for classification, routing, skills, tools, and billing in one call

Comparison

| Capability | Direct API | Model Proxy | LangChain | Theo |
| --- | --- | --- | --- | --- |
| Model access | 1 provider | 300+ models | Any (you configure) | 300+ models |
| Intent classification | ❌ | ❌ | Build it | ✅ Built-in |
| Model routing | ❌ | Manual | Build it | ✅ Automatic |
| Automatic failover | ❌ | ❌ | Build it | ✅ Built-in |
| Tool execution | Provider-specific | ❌ | Build it | ✅ Agent loop |
| Skills / domain expertise | ❌ | ❌ | ❌ | ✅ Marketplace |
| Custom persona (E.V.I.) | ❌ | Build it | Build it | ✅ One parameter |
| Memory | ❌ | ❌ | Build it | ✅ Cross-session |
| Streaming | ❌ | ❌ | Build it | ✅ SSE |
| Billing / credits | Provider billing | Unified | ❌ | ✅ Per-token |
| Semantic cache | ❌ | ❌ | Build it | ✅ Built-in |
| Audit trail | ❌ | ❌ | Build it | ✅ Immutable |
| Time to production | Weeks | Days | Months | Hours |
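To make "Build it" concrete: even a bare-bones DIY failover wrapper is real code you own and maintain. A minimal sketch, assuming a generic provider interface (the `Provider` shape below is a stand-in for illustration, not any real SDK):

```typescript
// Minimal DIY failover sketch. `Provider` is a hypothetical interface,
// not a real SDK type. Tries each backend in order until one succeeds.
type Completion = { text: string };
type Provider = {
  name: string;
  complete: (prompt: string) => Promise<Completion>;
};

async function completeWithFailover(
  providers: Provider[],
  prompt: string,
): Promise<Completion> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.complete(prompt);
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw lastError;
}
```

And this sketch still ignores retries with backoff, per-provider timeouts, rate-limit awareness, and partial-stream recovery — the parts that make failover genuinely hard.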

Code Comparison

DIY Approach — You handle everything

const model = pickModelForTask(prompt);                    // you build this
const skills = loadDomainKnowledge(userContext);           // you build this
const tools = resolveToolDefinitions(skills);              // you build this
const systemPrompt = buildPrompt(persona, skills, tools);  // you build this
const messages = [
  { role: "system", content: systemPrompt },
  { role: "user", content: prompt },
];
const response = await llmProvider.complete({ model, messages, tools });
const toolResults = await executeTools(response.tool_calls);        // you build this
const finalResponse = await llmProvider.complete({
  model,
  messages: [...messages, response.message, ...toolResults],
});
await debitCredits(user, [response.usage, finalResponse.usage]);    // you build this — both calls cost tokens
await logAudit(user, finalResponse);                                // you build this

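Even the first line hides real work. A naive version of `pickModelForTask` might look like this — the keyword heuristics and model names are illustrative assumptions, not recommendations:

```typescript
// Naive sketch of the "pickModelForTask" step above. The regex heuristics
// and model names are made up for illustration; a production router would
// use a classifier, cost tables, and context-length checks.
type ModelChoice = { model: string; reason: string };

function pickModelForTask(prompt: string): ModelChoice {
  const p = prompt.toLowerCase();
  if (/\b(code|function|bug|refactor)\b/.test(p)) {
    return { model: "codestral-latest", reason: "code intent" };
  }
  if (prompt.length > 4000) {
    return { model: "claude-sonnet", reason: "long context" };
  }
  return { model: "gpt-4o-mini", reason: "default cheap tier" };
}
```

Keyword matching breaks down quickly ("debug my marketing copy" routes to the code model), which is why intent classification usually ends up as its own model call — one more thing to build, host, and pay for.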
Theo — One call

import { Theo } from "@hitheo/sdk";

const theo = new Theo({ apiKey: "theo_sk_..." });
const response = await theo.complete({
  prompt: "Check inventory across all warehouses",
  skills: ["inventory-check"],
  persona: { system_prompt: "You are Nova..." },
});

When to Use What

Use Theo when:
  • You want AI features without building orchestration infrastructure
  • You need multi-model routing with automatic failover
  • You want domain expertise via installable skills
  • You’re building an E.V.I. (embedded AI with custom persona)
  • You need built-in billing, caching, and audit
Use a model proxy when:
  • You just need model access with unified billing
  • You’re building your own orchestration layer
Use LangChain when:
  • You need total control over every pipeline decision
  • Your use case requires custom agent architectures
  • You have the engineering team to build and maintain the infrastructure
Use direct APIs when:
  • You only need one model from one provider
  • You have specific provider requirements (SLA, data residency)