Unified Multi-Provider AI SDKs for TypeScript — Landscape and Recommendation

Summary

The content factory currently uses @anthropic-ai/claude-agent-sdk (Anthropic's Claude Agent SDK) for all LLM interactions. To support OpenAI models alongside Anthropic without maintaining separate provider packages, the Vercel AI SDK (the ai package, v6) is the clear winner: 10M+ weekly downloads, first-class support for both Anthropic and OpenAI, streaming, structured output, tool calling, agent patterns, and extended thinking, all in plain Node.js/Express (no Edge runtime required). The migration from the Claude Agent SDK's query() pattern to the AI SDK's generateText()/streamText() is straightforward.

Comparison Matrix

| Criteria | Vercel AI SDK (ai) | OpenAI Agents SDK (@openai/agents) | TanStack AI (@tanstack/ai) | Mastra (@mastra/core) | LiteLLM JS (litellmjs) | Claude Agent SDK (current) |
|---|---|---|---|---|---|---|
| npm weekly downloads | 10.2M | 463K | 44K | 576K | Dead (no npm) | 3.6M |
| Latest version | 6.0.14 | 0.8.2 | 0.10.0 (alpha) | -- | 0.12.0 (Jan 2024) | 0.2.92 |
| Multi-provider | 15+ providers | OpenAI native + adapter for others | OpenAI, Anthropic, Google, Ollama | 94 providers (via AI SDK) | 8 providers, partial | Anthropic only |
| Streaming | Yes (streamText, pipeToResponse) | Yes (StreamedRunResult) | Yes | Yes | Yes (basic) | Yes (async iterator) |
| Structured output | Yes (Zod/JSON Schema) | Yes (via tools) | Yes (Zod) | Yes (Zod) | No | Yes (outputFormat) |
| Tool calling | Yes (tool(), dynamicTool()) | Yes (6 tool types + MCP) | Yes (isomorphic tools) | Yes | No | Yes (tools array) |
| Agent patterns | Yes (ToolLoopAgent, v6) | Yes (Agent, Handoff, Guardrails) | Yes (tool loop) | Yes (Agent, Workflow, RAG) | No | Minimal (query loop) |
| Extended thinking | Yes (providerOptions.anthropic.thinking) | No (OpenAI only) | Unknown | Yes (via AI SDK) | No | Yes (native) |
| Prompt caching | Yes (cacheControl) | No | Unknown | Yes (via AI SDK) | No | Yes (native) |
| Express/Node.js | Yes (pipeTextStreamToResponse) | Yes | Yes | Yes | Yes | Yes |
| TypeScript-first | Yes | Yes | Yes | Yes | Yes | Yes |
| Actively maintained | Yes (Vercel-backed) | Yes (OpenAI-backed) | Alpha stage | Yes (YC W25, $13M) | No (last commit Jan 2024) | Yes (Anthropic-backed) |
| Install | npm i ai @ai-sdk/anthropic @ai-sdk/openai | npm i @openai/agents | npm i @tanstack/ai | npm i mastra @mastra/core | -- | npm i @anthropic-ai/claude-agent-sdk |

Detailed Analysis

1. Vercel AI SDK (ai)

What it is: The dominant TypeScript AI SDK, maintained by Vercel. v6 (current) introduces a formal Agent abstraction (ToolLoopAgent) and unifies structured output with tool calling. Provider-agnostic: install @ai-sdk/anthropic for Claude, @ai-sdk/openai for GPT/o-series, and swap with one line.

Key strengths for Kendo:

  • Drop-in provider switching. Change anthropic('claude-sonnet-4-6') to openai('gpt-4o') -- same generateText()/streamText() call, same tool definitions, same output handling.
  • Extended thinking support. providerOptions: { anthropic: { thinking: { type: 'adaptive' } } } returns reasoningText alongside text. This is critical if we ever want to expose Claude's reasoning chain.
  • Structured output. generateText() with output: Output.object({ schema: z.object({...}) }) replaces our current outputFormat JSON schema approach. Zod integration means compile-time type safety.
  • Streaming to Express. result.pipeTextStreamToResponse(res) works in plain Node.js -- no Edge runtime needed despite what some comparison articles claim. Full example exists at ai-sdk.dev/cookbook/api-servers/express.
  • Prompt caching. Anthropic prompt caching exposed via providerOptions.anthropic.cacheControl.
  • Massive ecosystem. 10.2M weekly downloads. Every tutorial, every example, every AI library assumes AI SDK compatibility. The @ai-sdk/anthropic provider alone has 4.1M weekly downloads.

Provider-specific features:

  • Anthropic: extended thinking (adaptive + budget), prompt caching, computer use tool, PDF support, disableParallelToolUse, speed/effort settings, MCP server connections
  • OpenAI: function calling, structured outputs, embeddings, file search, web search

Migration path from Claude Agent SDK:

```typescript
// BEFORE (claude-agent-sdk)
import { query } from '@anthropic-ai/claude-agent-sdk';

const conversation = query({
    prompt: userPrompt,
    options: {
        systemPrompt: SYSTEM,
        model: 'claude-haiku-4-5-20251001',
        tools: [],
        outputFormat: { type: 'json_schema', schema: OUTPUT_SCHEMA },
    },
});
for await (const message of conversation) { /* ... */ }

// AFTER (ai sdk)
import { generateText, Output } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

const result = await generateText({
    model: anthropic('claude-haiku-4-5-20251001'),
    system: SYSTEM,
    prompt: userPrompt,
    output: Output.object({ schema: zodSchema }),
});
const spec = result.object; // typed!
```

Gotchas:

  • v6 deprecates generateObject() in favor of generateText() with output property -- migration codemod available (npx @ai-sdk/codemod v6)
  • Provider-specific features require providerOptions -- not every feature is portable across providers
  • The @ai-sdk/gateway dependency in v6 suggests Vercel is steering toward their paid gateway service, though direct provider connections still work fine
  • Bundle size is small (19.5 kB gzipped for OpenAI provider) but you install one package per provider

2. OpenAI Agents SDK (@openai/agents)

What it is: OpenAI's agent framework for TypeScript, evolved from the Swarm experiment. Provides Agent, Handoff, Guardrails primitives plus 6 tool types including MCP support. Primary use case: multi-agent orchestration with OpenAI models.

Key strengths:

  • Rich agent primitives. Agent handoffs, guardrails, sessions, tracing -- more opinionated agent framework than AI SDK
  • MCP-native. Built-in MCPServerStdio and MCPServerSSE support
  • Realtime agents. Voice agent support with interruption detection
  • Multi-provider via adapter. @openai/agents-extensions package bridges to Vercel AI SDK providers:
    ```typescript
    import { Agent } from '@openai/agents';
    import { aisdk } from '@openai/agents-extensions/ai-sdk';
    import { anthropic } from '@ai-sdk/anthropic';

    const model = aisdk(anthropic('claude-sonnet-4-6'));
    const agent = new Agent({ model, instructions: '...', tools: [...] });
    ```

Why NOT for Kendo's content factory:

  • OpenAI-first design. Anthropic support is a second-class citizen via an adapter that's still in beta. Feature parity is not guaranteed -- OpenAI-specific features (hosted tools, file search, web search) won't work with Anthropic models.
  • Heavier abstraction. The Agent/Handoff/Guardrail pattern adds complexity we don't need. Our content factory has a simple three-agent pipeline -- we don't need handoffs or guardrails.
  • Lower adoption. 463K weekly downloads vs AI SDK's 10.2M. The extensions package is only at 46K downloads.
  • 0.x version. Still pre-1.0, API may change significantly.

When it would make sense: If Kendo ever builds a complex multi-agent system with dynamic handoffs, guardrails, and MCP tools -- and OpenAI is the primary model -- the Agents SDK would be the right choice. For our use case (content generation with structured output), it's overkill.

3. TanStack AI (@tanstack/ai)

What it is: Provider-agnostic AI SDK from the TanStack team (React Query, TanStack Router). Focuses on "no vendor lock-in" with clean TypeScript APIs and isomorphic tool definitions.

Key strengths:

  • Truly vendor-neutral (no Vercel platform tie-in)
  • Isomorphic tools (define once, run on server or client)
  • Supports OpenAI, Anthropic, Ollama, Google Gemini, OpenRouter

Why NOT for Kendo:

  • Alpha stage. v0.10.0, API is unstable. "Alpha 2" blog post from TanStack.
  • Tiny adoption. 44K weekly downloads -- 230x smaller than AI SDK.
  • Missing documentation. Extended thinking, prompt caching, and provider-specific features are undocumented.
  • No ecosystem. No Express integration examples, no agent framework, limited community content.

When it would make sense: If Vercel AI SDK ever becomes too tightly coupled to Vercel's commercial platform and we need a pure open-source alternative. Worth revisiting in 6-12 months when it reaches 1.0.

4. Mastra (@mastra/core)

What it is: Full-featured TypeScript agent framework from the Gatsby team (YC W25, $13M funding). Built on top of Vercel AI SDK for LLM calls, adds agents, workflows, RAG, memory, evals, and MCP support.

Key strengths:

  • Full agent framework with memory, workflows, RAG, evals
  • 94 providers via AI SDK integration
  • Supervisor pattern for multi-agent orchestration
  • Observational memory (background agents compress conversation history)

Why NOT for Kendo's content factory:

  • Framework, not library. Mastra wants to own the entire agent lifecycle. Our content factory is a focused tool, not a platform.
  • Excessive abstraction. We need generateText() with structured output, not RAG pipelines and memory systems.
  • Dependency chain. Mastra -> AI SDK -> Provider SDK. Two layers of abstraction above the model. If AI SDK already gives us what we need, adding Mastra is pure overhead.
  • Young project. Active but still rapidly evolving APIs.

When it would make sense: If Kendo builds a standalone AI agent product with persistent memory, multi-step workflows, and RAG -- Mastra would be worth evaluating. For the content factory, it's like using Rails when you need a single Express endpoint.

5. LiteLLM JS (litellmjs)

What it is: Community JavaScript port of the Python LiteLLM library. The official LiteLLM is Python-only; this is an unofficial adaptation.

Why NOT for Kendo:

  • Dead project. Last commit January 2024. Not on npm anymore. 147 GitHub stars.
  • No tool calling. Only basic completions and embeddings.
  • Partial provider support. 8 providers, many incomplete.
  • No structured output, no agents, no streaming beyond basic chunks.

The only viable LiteLLM path for TypeScript is running the Python proxy server and calling it via OpenAI-compatible API -- which adds operational complexity for no benefit when AI SDK exists.
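For completeness, that proxy path reduces to plain OpenAI-compatible HTTP. In this sketch the localhost URL, port, and model name are assumptions, not a documented deployment:

```typescript
// Sketch: calling a self-hosted LiteLLM proxy over its OpenAI-compatible API.
// The URL, port, and model name are assumptions for illustration.
export async function viaLiteLLMProxy(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:4000/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'claude-haiku-4-5-20251001',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`proxy error: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Even with this working, the team would be operating a Python service just to reach a second provider, which is the complexity the AI SDK avoids.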

6. Anthropic SDKs (current setup)

@anthropic-ai/claude-agent-sdk (what we use): Anthropic's agent SDK that wraps the Claude API with a query() function, structured output, tool use, and session management. 3.6M weekly downloads.

@anthropic-ai/sdk (lower-level): Direct Anthropic API client. 12.2M weekly downloads. Does not support other providers.

Neither SDK supports non-Anthropic models. The only multi-provider path via Anthropic's SDK is using a gateway service (Braintrust, Requesty) that translates between API formats -- adding latency, cost, and a third-party dependency.

Recommendation for Kendo

Migrate the content factory from @anthropic-ai/claude-agent-sdk to Vercel AI SDK (ai + @ai-sdk/anthropic).

Then, when OpenAI model support is needed, add @ai-sdk/openai and swap the model string. No other code changes required.

Migration scope estimate

The content factory has 4 agent files that use query():

  • server/agents/strategist.ts -- structured output (Haiku)
  • server/agents/writer.ts -- free-form text (Sonnet)
  • server/agents/evaluator.ts -- structured output (Sonnet)
  • server/agents/scorer.ts -- structured output (Haiku)

Each file follows the same pattern: call query(), iterate async, extract result. The AI SDK equivalent is generateText() with optional output for structured data. Estimated migration: half a day of work per agent, including testing. The prompt files (server/prompts/) need zero changes -- they're just strings.

What we gain

  1. Provider flexibility. Add @ai-sdk/openai and test GPT-4o or o3 for any agent role. Compare quality/cost/speed without rewriting agent logic.
  2. Better structured output. Zod schemas with compile-time type inference replace our manual JSON schema + fallback parsing.
  3. Ecosystem alignment. AI SDK is the de facto standard. Future tools, tutorials, and libraries assume it.
  4. Extended thinking. Anthropic extended thinking exposed via providerOptions -- no special SDK needed.
  5. Prompt caching. Reduce costs on repeated system prompts via cacheControl.

What we lose

  1. Claude Agent SDK's permissionMode/persistSession options. These are specific to the Claude Code environment. Our content factory doesn't use them meaningfully (all agents run with tools: [] and persistSession: false).
  2. Direct Anthropic SDK features we might not know about. The AI SDK provider is a thin wrapper, so provider-specific capabilities are generally accessible via providerOptions.

Package changes

```diff
// package.json dependencies
- "@anthropic-ai/claude-agent-sdk": "^0.2.85",
+ "ai": "^6.0.0",
+ "@ai-sdk/anthropic": "^3.0.0",
+ // Add when needed:
+ // "@ai-sdk/openai": "^3.0.0",
```

Open Questions

  1. Per-agent model selection. Should we make the model configurable per agent at runtime (e.g., via the dashboard), or keep it as a constant per agent file? AI SDK makes this trivial -- the model is just a function parameter.
  2. Cost tracking. AI SDK returns usage data (prompt/completion tokens). Should we track costs per run in meta.json? The current setup doesn't do this.
  3. Streaming over WebSocket. The content factory streams progress via WebSocket today -- progress events only, not partial text. AI SDK's streamText() returns an async iterable that could pipe partial content to the frontend in real time. Is that worth wiring up?
  4. OpenAI model candidates. Which OpenAI models would we actually test? GPT-4o for the writer (creative)? o3 for the evaluator (reasoning)? This needs experimentation once the SDK is swapped.
  5. Fallback chains. AI SDK supports model fallbacks (try Anthropic, fall back to OpenAI on failure). Worth implementing for reliability, or unnecessary complexity for a solo-founder tool?
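On question 2, the usage object that generateText() returns is enough to derive a cost entry for meta.json. A sketch, with placeholder per-million-token prices (not current list prices):

```typescript
// Sketch: turning AI SDK usage data into a cost entry for meta.json.
// Prices are placeholders -- substitute current per-million-token rates.
interface Usage { inputTokens: number; outputTokens: number }

const PRICE_PER_MTOK: Record<string, { input: number; output: number }> = {
  'claude-haiku-4-5-20251001': { input: 1.0, output: 5.0 },   // placeholder USD
  'claude-sonnet-4-6': { input: 3.0, output: 15.0 },          // placeholder USD
};

function runCostUSD(model: string, usage: Usage): number {
  const p = PRICE_PER_MTOK[model];
  if (!p) throw new Error(`no price entry for ${model}`);
  return (usage.inputTokens * p.input + usage.outputTokens * p.output) / 1_000_000;
}

// e.g. a Sonnet run with 10k input / 2k output tokens:
const cost = runCostUSD('claude-sonnet-4-6', { inputTokens: 10_000, outputTokens: 2_000 });
// cost === 0.06
```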