# Unified Multi-Provider AI SDKs for TypeScript — Landscape and Recommendation

## Summary
The content factory currently uses `@anthropic-ai/claude-agent-sdk` (Anthropic's Claude Agent SDK) for all LLM interactions. To support OpenAI models alongside Anthropic without maintaining separate provider packages, the Vercel AI SDK (the `ai` package, v6) is the clear winner: 10M+ weekly downloads, first-class support for both Anthropic and OpenAI, streaming, structured output, tool calling, agent patterns, and extended thinking support. It also works in plain Node.js/Express (no Edge runtime required), and the migration path from the Claude Agent SDK's `query()` pattern to the AI SDK's `generateText()`/`streamText()` is straightforward.
## Comparison Matrix
| Criteria | Vercel AI SDK (ai) | OpenAI Agents SDK (@openai/agents) | TanStack AI (@tanstack/ai) | Mastra (@mastra/core) | LiteLLM JS (litellmjs) | Claude Agent SDK (current) |
|---|---|---|---|---|---|---|
| npm weekly downloads | 10.2M | 463K | 44K | 576K | Dead (no npm) | 3.6M |
| Latest version | 6.0.146 | 0.8.2 | 0.10.0 (alpha) | -- | 0.12.0 (Jan 2024) | 0.2.92 |
| Multi-provider | 15+ providers | OpenAI native + adapter for others | OpenAI, Anthropic, Google, Ollama | 94 providers (via AI SDK) | 8 providers, partial | Anthropic only |
| Streaming | Yes (streamText, pipeToResponse) | Yes (StreamedRunResult) | Yes | Yes | Yes (basic) | Yes (async iterator) |
| Structured output | Yes (Zod/JSON Schema) | Yes (via tools) | Yes (Zod) | Yes (Zod) | No | Yes (outputFormat) |
| Tool calling | Yes (tool(), dynamicTool()) | Yes (6 tool types + MCP) | Yes (isomorphic tools) | Yes | No | Yes (tools array) |
| Agent patterns | Yes (ToolLoopAgent, v6) | Yes (Agent, Handoff, Guardrails) | Yes (tool loop) | Yes (Agent, Workflow, RAG) | No | Minimal (query loop) |
| Extended thinking | Yes (providerOptions.anthropic.thinking) | No (OpenAI only) | Unknown | Yes (via AI SDK) | No | Yes (native) |
| Prompt caching | Yes (cacheControl) | No | Unknown | Yes (via AI SDK) | No | Yes (native) |
| Express/Node.js | Yes (pipeTextStreamToResponse) | Yes | Yes | Yes | Yes | Yes |
| TypeScript-first | Yes | Yes | Yes | Yes | Yes | Yes |
| Actively maintained | Yes (Vercel-backed) | Yes (OpenAI-backed) | Alpha stage | Yes (YC W25, $13M) | No (last commit Jan 2024) | Yes (Anthropic-backed) |
| Install | npm i ai @ai-sdk/anthropic @ai-sdk/openai | npm i @openai/agents | npm i @tanstack/ai | npm i mastra @mastra/core | -- | npm i @anthropic-ai/claude-agent-sdk |
## Detailed Analysis

### 1. Vercel AI SDK (ai package) -- RECOMMENDED
What it is: The dominant TypeScript AI SDK, maintained by Vercel. v6 (current) introduces a formal Agent abstraction (ToolLoopAgent) and unifies structured output with tool calling. Provider-agnostic: install @ai-sdk/anthropic for Claude, @ai-sdk/openai for GPT/o-series, and swap with one line.
Key strengths for Kendo:
- Drop-in provider switching. Change `anthropic('claude-sonnet-4-6')` to `openai('gpt-4o')` -- same `generateText()`/`streamText()` call, same tool definitions, same output handling.
- Extended thinking support. `providerOptions: { anthropic: { thinking: { type: 'adaptive' } } }` returns `reasoningText` alongside `text`. This is critical if we ever want to expose Claude's reasoning chain.
- Structured output. `generateText()` with `output: Output.object({ schema: z.object({...}) })` replaces our current `outputFormat` JSON schema approach. Zod integration means compile-time type safety.
- Streaming to Express. `result.pipeTextStreamToResponse(res)` works in plain Node.js -- no Edge runtime needed, despite what some comparison articles claim. A full example exists at `ai-sdk.dev/cookbook/api-servers/express`.
- Prompt caching. Anthropic prompt caching is exposed via `providerOptions.anthropic.cacheControl`.
- Massive ecosystem. 10.2M weekly downloads. Every tutorial, every example, every AI library assumes AI SDK compatibility. The `@ai-sdk/anthropic` provider alone has 4.1M weekly downloads.
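To make the "drop-in switching" point concrete at the config level, here is a hypothetical per-agent model registry. The `AGENT_MODELS` map and `resolveModel` helper are our own sketch (not AI SDK API); the idea is simply that because the AI SDK takes the model as a plain function parameter, switching a role to OpenAI becomes a one-line string change.

```typescript
// Hypothetical registry (our naming, not an AI SDK API): each agent role
// maps to a "provider:model" string, so swapping providers is a config edit.
const AGENT_MODELS: Record<string, string> = {
  strategist: 'anthropic:claude-haiku-4-5-20251001',
  writer: 'anthropic:claude-sonnet-4-6',
  evaluator: 'anthropic:claude-sonnet-4-6',
  scorer: 'anthropic:claude-haiku-4-5-20251001',
};

// Split "provider:model" into its parts; the caller would then pick
// anthropic(model) or openai(model) based on the provider field.
function resolveModel(id: string): { provider: string; model: string } {
  const i = id.indexOf(':');
  if (i < 0) throw new Error(`expected "provider:model", got "${id}"`);
  return { provider: id.slice(0, i), model: id.slice(i + 1) };
}
```

Swapping the writer to GPT-4o would then be a single edit to the registry entry, with no changes at the call sites.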
Provider-specific features:
- Anthropic: extended thinking (adaptive + budget), prompt caching, computer use tool, PDF support, `disableParallelToolUse`, speed/effort settings, MCP server connections
- OpenAI: function calling, structured outputs, embeddings, file search, web search
Migration path from Claude Agent SDK:
```typescript
// BEFORE (claude-agent-sdk)
import { query } from '@anthropic-ai/claude-agent-sdk';

const conversation = query({
  prompt: userPrompt,
  options: {
    systemPrompt: SYSTEM,
    model: 'claude-haiku-4-5-20251001',
    tools: [],
    outputFormat: { type: 'json_schema', schema: OUTPUT_SCHEMA },
  },
});
for await (const message of conversation) { ... }

// AFTER (ai sdk)
import { generateText, Output } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
  model: anthropic('claude-haiku-4-5-20251001'),
  system: SYSTEM,
  prompt: userPrompt,
  output: Output.object({ schema: zodSchema }),
});
const spec = result.object; // typed!
```

Gotchas:
- v6 deprecates `generateObject()` in favor of `generateText()` with the `output` property -- a migration codemod is available (`npx @ai-sdk/codemod v6`)
- Provider-specific features require `providerOptions` -- not every feature is portable across providers
- The `@ai-sdk/gateway` dependency in v6 suggests Vercel is steering toward their paid gateway service, though direct provider connections still work fine
- Bundle size is small (19.5 kB gzipped for the OpenAI provider), but you install one package per provider
### 2. OpenAI Agents SDK (@openai/agents)
What it is: OpenAI's agent framework for TypeScript, evolved from the Swarm experiment. Provides Agent, Handoff, Guardrails primitives plus 6 tool types including MCP support. Primary use case: multi-agent orchestration with OpenAI models.
Key strengths:
- Rich agent primitives. Agent handoffs, guardrails, sessions, tracing -- a more opinionated agent framework than the AI SDK
- MCP-native. Built-in `MCPServerStdio` and `MCPServerSSE` support
- Realtime agents. Voice agent support with interruption detection
- Multi-provider via adapter. The `@openai/agents-extensions` package bridges to Vercel AI SDK providers:

```typescript
import { aisdk } from '@openai/agents-extensions/ai-sdk';
import { anthropic } from '@ai-sdk/anthropic';
import { Agent } from '@openai/agents';

const model = aisdk(anthropic('claude-sonnet-4-6'));
const agent = new Agent({ model, instructions: '...', tools: [...] });
```
Why NOT for Kendo's content factory:
- OpenAI-first design. Anthropic support is a second-class citizen via an adapter that's still in beta. Feature parity is not guaranteed -- OpenAI-specific features (hosted tools, file search, web search) won't work with Anthropic models.
- Heavier abstraction. The Agent/Handoff/Guardrail pattern adds complexity we don't need. Our content factory has a simple three-agent pipeline -- we don't need handoffs or guardrails.
- Lower adoption. 463K weekly downloads vs AI SDK's 10.2M. The extensions package is only at 46K downloads.
- 0.x version. Still pre-1.0, API may change significantly.
When it would make sense: If Kendo ever builds a complex multi-agent system with dynamic handoffs, guardrails, and MCP tools -- and OpenAI is the primary model -- the Agents SDK would be the right choice. For our use case (content generation with structured output), it's overkill.
### 3. TanStack AI (@tanstack/ai)
What it is: Provider-agnostic AI SDK from the TanStack team (React Query, TanStack Router). Focuses on "no vendor lock-in" with clean TypeScript APIs and isomorphic tool definitions.
Key strengths:
- Truly vendor-neutral (no Vercel platform tie-in)
- Isomorphic tools (define once, run on server or client)
- Supports OpenAI, Anthropic, Ollama, Google Gemini, OpenRouter
Why NOT for Kendo:
- Alpha stage. v0.10.0 and the API is unstable; TanStack's own "Alpha 2" blog post confirms the status.
- Tiny adoption. 44K weekly downloads -- 230x smaller than AI SDK.
- Missing documentation. Extended thinking, prompt caching, and provider-specific features are undocumented.
- No ecosystem. No Express integration examples, no agent framework, limited community content.
When it would make sense: If Vercel AI SDK ever becomes too tightly coupled to Vercel's commercial platform and we need a pure open-source alternative. Worth revisiting in 6-12 months when it reaches 1.0.
### 4. Mastra (@mastra/core)
What it is: Full-featured TypeScript agent framework from the Gatsby team (YC W25, $13M funding). Built on top of Vercel AI SDK for LLM calls, adds agents, workflows, RAG, memory, evals, and MCP support.
Key strengths:
- Full agent framework with memory, workflows, RAG, evals
- 94 providers via AI SDK integration
- Supervisor pattern for multi-agent orchestration
- Observational memory (background agents compress conversation history)
Why NOT for Kendo's content factory:
- Framework, not library. Mastra wants to own the entire agent lifecycle. Our content factory is a focused tool, not a platform.
- Excessive abstraction. We need `generateText()` with structured output, not RAG pipelines and memory systems.
- Dependency chain. Mastra -> AI SDK -> Provider SDK -- two layers of abstraction above the model. If the AI SDK already gives us what we need, adding Mastra is pure overhead.
- Young project. Active but still rapidly evolving APIs.
When it would make sense: If Kendo builds a standalone AI agent product with persistent memory, multi-step workflows, and RAG -- Mastra would be worth evaluating. For the content factory, it's like using Rails when you need a single Express endpoint.
### 5. LiteLLM JS (litellmjs)
What it is: Community JavaScript port of the Python LiteLLM library. The official LiteLLM is Python-only; this is an unofficial adaptation.
Why NOT for Kendo:
- Dead project. Last commit January 2024. Not on npm anymore. 147 GitHub stars.
- No tool calling. Only basic completions and embeddings.
- Partial provider support. 8 providers, many incomplete.
- No structured output, no agents, no streaming beyond basic chunks.
The only viable LiteLLM path for TypeScript is running the Python proxy server and calling it via OpenAI-compatible API -- which adds operational complexity for no benefit when AI SDK exists.
### 6. Anthropic SDKs (current setup)
@anthropic-ai/claude-agent-sdk (what we use): Anthropic's agent SDK that wraps the Claude API with a query() function, structured output, tool use, and session management. 3.6M weekly downloads.
@anthropic-ai/sdk (lower-level): Direct Anthropic API client. 12.2M weekly downloads. Does not support other providers.
Neither SDK supports non-Anthropic models. The only multi-provider path via Anthropic's SDK is using a gateway service (Braintrust, Requesty) that translates between API formats -- adding latency, cost, and a third-party dependency.
## Recommendation for Kendo
Migrate the content factory from @anthropic-ai/claude-agent-sdk to Vercel AI SDK (ai + @ai-sdk/anthropic).
Then, when OpenAI model support is needed, add @ai-sdk/openai and swap the model string. No other code changes required.
### Migration scope estimate
The content factory has 4 agent files that use query():
- `server/agents/strategist.ts` -- structured output (Haiku)
- `server/agents/writer.ts` -- free-form text (Sonnet)
- `server/agents/evaluator.ts` -- structured output (Sonnet)
- `server/agents/scorer.ts` -- structured output (Haiku)
Each file follows the same pattern: call query(), iterate async, extract result. The AI SDK equivalent is generateText() with optional output for structured data. Estimated migration: half a day of work per agent, including testing. The prompt files (server/prompts/) need zero changes -- they're just strings.
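Since all four agents share the same call shape, the post-migration pattern can be sketched as a small SDK-agnostic helper. Names like `runAgent` and `AgentConfig` are ours, not from either SDK, and the generate function is injected so the sketch stays runnable without API keys; the real code would call `generateText` directly.

```typescript
// Hypothetical shared runner (our naming): every agent reduces to
// model + system prompt + output parser.
type AgentConfig<T> = {
  model: string;             // e.g. 'claude-haiku-4-5-20251001'
  system: string;            // loaded from server/prompts/
  parse: (raw: string) => T; // Zod schema validation in the real code
};

type GenerateFn = (args: {
  model: string;
  system: string;
  prompt: string;
}) => Promise<string>;

async function runAgent<T>(
  cfg: AgentConfig<T>,
  prompt: string,
  generate: GenerateFn, // injected so this sketch is SDK-independent
): Promise<T> {
  const raw = await generate({ model: cfg.model, system: cfg.system, prompt });
  return cfg.parse(raw);
}
```

Each of the four agent files would then shrink to a config object plus a call to this helper, which is why the per-agent migration estimate stays small.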
### What we gain
- Provider flexibility. Add `@ai-sdk/openai` and test GPT-4o or o3 for any agent role. Compare quality/cost/speed without rewriting agent logic.
- Better structured output. Zod schemas with compile-time type inference replace our manual JSON schema + fallback parsing.
- Ecosystem alignment. The AI SDK is the de facto standard; future tools, tutorials, and libraries assume it.
- Extended thinking. Anthropic extended thinking is exposed via `providerOptions` -- no special SDK needed.
- Prompt caching. Reduce costs on repeated system prompts via `cacheControl`.
### What we lose
- The Claude Agent SDK's `permissionMode`/`persistSession` options. These are specific to the Claude Code environment. Our content factory doesn't use them meaningfully (all agents run with `tools: []` and `persistSession: false`).
- Direct Anthropic SDK features we might not know about. The AI SDK provider is a thin wrapper, so provider-specific capabilities are generally accessible via `providerOptions`.
### Package changes
```diff
// package.json dependencies
- "@anthropic-ai/claude-agent-sdk": "^0.2.85",
+ "ai": "^6.0.0",
+ "@ai-sdk/anthropic": "^3.0.0",
+ // Add when needed:
+ // "@ai-sdk/openai": "^3.0.0",
```

## Open Questions
- Per-agent model selection. Should the model be configurable per agent at runtime (e.g., via the dashboard), or stay a constant per agent file? The AI SDK makes this trivial -- the model is just a function parameter.
- Cost tracking. The AI SDK returns `usage` data (prompt/completion tokens). Should we track costs per run in `meta.json`? The current setup doesn't do this.
- Streaming to WebSocket. The content factory streams progress via WebSocket. The AI SDK's `streamText()` returns an async iterable that could pipe partial content to the frontend in real time -- today we only send progress events, not partial text.
- OpenAI model candidates. Which OpenAI models would we actually test? GPT-4o for the writer (creative)? o3 for the evaluator (reasoning)? This needs experimentation once the SDK is swapped.
- Fallback chains. The AI SDK supports model fallbacks (try Anthropic, fall back to OpenAI on failure). Worth implementing for reliability, or unnecessary complexity for a solo-founder tool?
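The fallback-chain question can be prototyped without committing to any SDK feature. A minimal sketch (`withFallback` is our own helper, not an AI SDK API) just tries generate functions in order:

```typescript
// Minimal fallback chain (our sketch): try each async attempt in order,
// return the first success, rethrow the last error if all fail.
async function withFallback<T>(attempts: Array<() => Promise<T>>): Promise<T> {
  let lastErr: unknown = new Error('no attempts provided');
  for (const attempt of attempts) {
    try {
      return await attempt();
    } catch (err) {
      lastErr = err; // remember the failure and try the next provider
    }
  }
  throw lastErr;
}
```

In practice each attempt would wrap a `generateText` call against a different model; whether this is worth the extra failure-mode testing, or whether the AI SDK's own fallback support covers it, is exactly the open question above.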