Integrate Morpheus API Gateway with Vercel AI SDK
Learn how to integrate the Morpheus API Gateway with Vercel’s AI SDK v5 to build AI-powered applications with free, decentralized AI inference. This guide covers streaming responses, tool calling, and handling Morpheus-specific implementation details.

Overview
The Morpheus API Gateway provides free AI inference through a decentralized compute marketplace. By integrating with Vercel’s AI SDK, you get access to powerful models like Qwen, Llama, and more while maintaining a familiar OpenAI-compatible API structure.

The Morpheus API Gateway is currently in Open Beta, providing free access to AI inference without requiring wallet connections or staking MOR tokens.
Prerequisites
Before you begin, ensure you have:
- Node.js 18+ installed on your system
- A Morpheus API key from openbeta.mor.org
- Basic knowledge of Next.js and React
- Familiarity with TypeScript
1. Create a Morpheus API Key
Visit openbeta.mor.org and sign in to create your API key.
- Navigate to the API Keys section
- Click “Create API Key” and provide a name
- Copy your API key immediately (it won’t be shown again)
Store your API key securely. Never commit it to version control or expose it in client-side code.
2. Install Required Dependencies
Install the Vercel AI SDK and OpenAI-compatible provider:
Verify installation by running npm list ai to see the installed version.

3. Configure Environment Variables
Create a .env.local file in your project root:

.env.local

For production, you can optionally use type-safe environment validation with @t3-oss/env-nextjs:

src/lib/env.ts
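Your .env.local holds the key as `MORPHEUS_API_KEY=<your-key>`. The validation module can be sketched roughly as follows, assuming @t3-oss/env-nextjs and zod are installed (option names here follow the library's documented API; double-check against the version you install):

```typescript
// src/lib/env.ts — a sketch, assuming @t3-oss/env-nextjs and zod.
import { createEnv } from "@t3-oss/env-nextjs";
import { z } from "zod";

export const env = createEnv({
  server: {
    // Matches the MORPHEUS_API_KEY entry in .env.local;
    // fails fast at startup if the variable is missing.
    MORPHEUS_API_KEY: z.string().min(1),
  },
  // Server-only variables are read from process.env;
  // this map is only needed for client-exposed variables.
  experimental__runtimeEnv: {},
});
```

Importing `env.MORPHEUS_API_KEY` instead of `process.env.MORPHEUS_API_KEY` gives you a build-time error rather than a runtime surprise when the key is absent.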
Never commit your API key to version control. Add .env.local to your .gitignore file.

Basic Integration
Setting Up the Morpheus Provider
The Morpheus API Gateway is OpenAI-compatible, allowing you to use the @ai-sdk/openai-compatible provider:
lib/morpheus.ts
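A minimal provider setup might look like the following. The baseURL shown is an assumption; confirm the current gateway endpoint in the Morpheus docs:

```typescript
// lib/morpheus.ts — a sketch; verify the gateway baseURL before use.
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

export const morpheus = createOpenAICompatible({
  name: "morpheus",
  baseURL: "https://api.mor.org/api/v1",
  apiKey: process.env.MORPHEUS_API_KEY,
});
```

Calling `morpheus("llama-3.3-70b")` then returns a language model you can pass to `generateText` or `streamText`.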
The createOpenAICompatible provider allows any OpenAI-compatible API to work seamlessly with the AI SDK, including Morpheus.

Available Models
Query the available models using the Morpheus API:
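Querying models can be sketched with a plain fetch against the OpenAI-compatible /models endpoint (the base URL and response shape below are assumptions; adjust to your gateway configuration):

```typescript
// Hypothetical response shape for an OpenAI-compatible /models endpoint.
interface ModelList {
  data: { id: string }[];
}

// Pure helper: pull model ids out of a /models response body.
export function extractModelIds(body: ModelList): string[] {
  return body.data.map((m) => m.id);
}

export async function listMorpheusModels(apiKey: string): Promise<string[]> {
  const res = await fetch("https://api.mor.org/api/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Failed to list models: ${res.status}`);
  return extractModelIds((await res.json()) as ModelList);
}
```

Keeping the extraction logic in a pure function makes it easy to unit test without network access.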
Common Morpheus Models
Popular models available through Morpheus:
- llama-3.3-70b:web - Meta’s Llama 3.3 with web search tool calling
- llama-3.3-70b - Meta’s Llama 3.3 base model
- qwen3-235b:web - Qwen 3 with web search tool calling
- qwen3-235b - Qwen 3 base model
Model availability may vary based on provider availability in the Morpheus marketplace. The API automatically routes to the highest-rated provider for your selected model. The :web suffix indicates models optimized for web content and browsing tasks.

Text Generation
Basic Text Generation
Use the generateText function for simple, non-streaming text generation:
server-action.ts
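A server action sketch, assuming the `morpheus` provider from lib/morpheus.ts above:

```typescript
"use server";

// server-action.ts — non-streaming generation with generateText.
import { generateText } from "ai";
import { morpheus } from "@/lib/morpheus";

export async function askMorpheus(prompt: string): Promise<string> {
  const { text } = await generateText({
    model: morpheus("llama-3.3-70b"),
    prompt,
  });
  return text;
}
```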
Streaming Text Generation
For interactive applications, use streamText to stream responses in real-time:
app/api/chat/route.ts
Use toUIMessageStreamResponse() for easy integration with AI SDK UI components. For more control, use toTextStreamResponse() or iterate over result.textStream directly.

Complete API Route Implementation
Here’s a complete example of a chat API route using Morpheus:

app/api/chat/route.ts
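A sketch of such a route, using AI SDK v5's message conversion and assuming the `morpheus` provider from lib/morpheus.ts:

```typescript
// app/api/chat/route.ts — streaming chat route (AI SDK v5).
import { streamText, convertToModelMessages, type UIMessage } from "ai";
import { morpheus } from "@/lib/morpheus";

// Allow streaming responses up to 30 seconds.
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: morpheus("llama-3.3-70b"),
    system: "You are a helpful assistant.",
    messages: convertToModelMessages(messages),
  });

  // Stream UI messages back to the useChat hook on the client.
  return result.toUIMessageStreamResponse();
}
```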
The convertToModelMessages function transforms AI SDK UI messages into the format expected by language models. It handles user messages, assistant messages, and system prompts automatically.

Tool Calling
Enable your AI models to execute functions and interact with external systems through tool calling.

Defining Tools
Define tools using Zod schemas for type-safe parameter validation:

lib/tools.ts
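A tool definition sketch; the weather tool and its return values are hypothetical, and `inputSchema` follows AI SDK v5 naming:

```typescript
// lib/tools.ts — a sketch; replace the execute body with a real lookup.
import { tool } from "ai";
import { z } from "zod";

export const getWeather = tool({
  description:
    "Get the current weather for a city. Use only when the user asks about weather.",
  inputSchema: z.object({
    city: z.string().describe("City name, e.g. 'Berlin'"),
  }),
  execute: async ({ city }) => {
    // Hypothetical stub; call a real weather API here.
    return { city, temperatureC: 21, conditions: "partly cloudy" };
  },
});
```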
Using Tools with Streaming
Integrate the tools you defined with your streaming endpoint:

app/api/chat/route.ts
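Wiring the tools into streamText can be sketched as follows, assuming the provider and tools modules from earlier:

```typescript
// app/api/chat/route.ts — tools wired into streamText (AI SDK v5).
import {
  streamText,
  convertToModelMessages,
  stepCountIs,
  type UIMessage,
} from "ai";
import { morpheus } from "@/lib/morpheus";
import { getWeather } from "@/lib/tools";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: morpheus("llama-3.3-70b"),
    messages: convertToModelMessages(messages),
    tools: { getWeather },
    // Cap multi-step tool loops at 5 steps.
    stopWhen: stepCountIs(5),
  });

  return result.toUIMessageStreamResponse();
}
```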
Use stopWhen: stepCountIs(n) to prevent infinite tool calling loops. The AI SDK will automatically handle multi-step tool execution.

Tool Calling Best Practices
Clear descriptions
Provide detailed descriptions for tools and parameters to help the model understand when and how to use them.
Validate inputs
Use Zod schemas to enforce parameter types and constraints, preventing invalid tool executions.
Handle errors gracefully
Wrap tool execution logic in try-catch blocks and return meaningful error messages to the model.
Limit steps
Use stopWhen: stepCountIs(n) to prevent infinite tool calling loops and control costs.

Client-Side Implementation
Build an interactive chat interface using the AI SDK’s useChat hook:
app/page.tsx
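A minimal client component sketch using the v5 useChat API from @ai-sdk/react (in v5 the input field is managed by your own state, and messages expose typed `parts`):

```typescript
// app/page.tsx — minimal chat UI sketch (AI SDK v5 useChat).
"use client";

import { useState } from "react";
import { useChat } from "@ai-sdk/react";

export default function Chat() {
  const { messages, sendMessage, status } = useChat();
  const [input, setInput] = useState("");

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}: </strong>
          {m.parts.map((part, i) =>
            part.type === "text" ? <span key={i}>{part.text}</span> : null
          )}
        </div>
      ))}
      <form
        onSubmit={(e) => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput("");
        }}
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          disabled={status !== "ready"}
          placeholder="Say something..."
        />
      </form>
    </div>
  );
}
```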
The useChat hook automatically handles message state, streaming updates, and tool invocations. It provides a simple interface for building chat applications with minimal boilerplate.

Model Selection
Allow users to switch between different Morpheus models:

app/page.tsx
app/api/chat/route.ts
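On the server, the route can read the requested model from the request body and fall back to a default (the allow-list below just reuses the model ids from this guide). On the client, the model id can be sent alongside a message, e.g. via `sendMessage({ text }, { body: { model } })`:

```typescript
// app/api/chat/route.ts — per-request model selection sketch.
import { streamText, convertToModelMessages, type UIMessage } from "ai";
import { morpheus } from "@/lib/morpheus";

const ALLOWED_MODELS = [
  "llama-3.3-70b",
  "llama-3.3-70b:web",
  "qwen3-235b",
  "qwen3-235b:web",
];

export async function POST(req: Request) {
  const { messages, model }: { messages: UIMessage[]; model?: string } =
    await req.json();

  // Ignore unknown model ids from the client and use a safe default.
  const modelId =
    model && ALLOWED_MODELS.includes(model) ? model : "llama-3.3-70b";

  const result = streamText({
    model: morpheus(modelId),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```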
The :web suffix indicates models optimized for web browsing and content generation. These models typically perform better for tasks involving current events or web-based information.

Troubleshooting
Tool calls fail with 'Expected function.name to be a string'
Cause: Morpheus may send tool call metadata and arguments in separate chunks during streaming.

Solution: This issue has been resolved in recent versions of the Morpheus API. If you still encounter it, implement a custom stream transformer:
Model calls tools for simple questions
Cause: The system prompt doesn’t provide clear guidance on when to use tools vs. direct answers.

Solution: Add explicit instructions in your system prompt:
Stream stops with 'finishReason: unknown'
Cause: Morpheus may be sending error JSON mixed into the SSE stream.

Solution: Ensure your transformer filters out non-SSE error messages:
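One way to sketch such a filter: a pure function that keeps only well-formed SSE lines (`data: ...` and blank separators), plus a TransformStream wrapper (Web Streams are global in Node 18+). In a real pipeline you would decode bytes with TextDecoderStream first and buffer partial lines across chunk boundaries:

```typescript
// Pure helper: drop bare JSON error blobs mixed into an SSE chunk,
// keeping "data: ..." lines and blank event separators.
export function filterSseChunk(chunk: string): string {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:") || line.trim() === "")
    .join("\n");
}

// Wrap the helper so it can be piped into a response body stream.
export const sseErrorFilter = new TransformStream<string, string>({
  transform(chunk, controller) {
    const filtered = filterSseChunk(chunk);
    if (filtered.trim() !== "") controller.enqueue(filtered);
  },
});
```

Keeping the line filter pure makes it trivial to unit test separately from the streaming plumbing.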
Multi-parameter tool calls fail
Cause: Some Morpheus models struggle with complex tool calls requiring multiple parameters.

Solution: Try a different model (llama-3.3-70b often performs better) or simplify your tools. Provide explicit examples in tool descriptions:
API returns 'invalid response format' errors
Cause: Morpheus occasionally sends malformed error responses that break SSE parsing.

Solution: The stream transformer should filter these out. Add comprehensive logging to identify problematic chunks:
Advanced Configuration
Custom Headers and Options
Pass additional configuration to the Morpheus provider:

lib/morpheus.ts
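For example, custom headers can be attached at provider creation. The header name below is purely illustrative, and the baseURL is the same assumption as earlier:

```typescript
// lib/morpheus.ts — provider with extra options; header is illustrative only.
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

export const morpheus = createOpenAICompatible({
  name: "morpheus",
  baseURL: "https://api.mor.org/api/v1",
  apiKey: process.env.MORPHEUS_API_KEY,
  headers: {
    // Hypothetical header for tracing requests in your own logs.
    "X-Request-Source": "my-nextjs-app",
  },
});
```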
Token Usage Tracking
Track token consumption using the onFinish callback:
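A sketch of usage tracking inside the chat route; in AI SDK v5 the usage object reports `inputTokens`, `outputTokens`, and `totalTokens`:

```typescript
// Token accounting via streamText's onFinish callback (AI SDK v5).
import { streamText, convertToModelMessages, type UIMessage } from "ai";
import { morpheus } from "@/lib/morpheus";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: morpheus("llama-3.3-70b"),
    messages: convertToModelMessages(messages),
    onFinish: ({ usage }) => {
      // Log (or persist) per-request consumption for capacity planning.
      console.log("tokens:", usage.inputTokens, usage.outputTokens, usage.totalTokens);
    },
  });

  return result.toUIMessageStreamResponse();
}
```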
While Morpheus currently provides free inference during the Open Beta, tracking usage is good practice for understanding your application’s resource consumption.
Error Handling
Implement robust error handling for production applications:

app/api/chat/route.ts
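One possible shape: retry transient failures via `maxRetries`, catch everything else, and map errors to HTTP statuses with a small helper (the status mapping below is a hypothetical heuristic, not part of the Morpheus API):

```typescript
// app/api/chat/route.ts — error handling sketch.
import { streamText, convertToModelMessages, type UIMessage } from "ai";
import { morpheus } from "@/lib/morpheus";

// Hypothetical heuristic mapping provider errors to HTTP statuses.
export function errorStatus(err: unknown): number {
  const msg = err instanceof Error ? err.message : String(err);
  if (msg.includes("401") || /unauthorized/i.test(msg)) return 401;
  if (msg.includes("429") || /rate limit/i.test(msg)) return 429;
  return 500;
}

export async function POST(req: Request) {
  try {
    const { messages }: { messages: UIMessage[] } = await req.json();
    const result = streamText({
      model: morpheus("llama-3.3-70b"),
      messages: convertToModelMessages(messages),
      maxRetries: 2, // retry transient network/provider failures
    });
    return result.toUIMessageStreamResponse();
  } catch (err) {
    return new Response(JSON.stringify({ error: "Chat request failed" }), {
      status: errorStatus(err),
      headers: { "Content-Type": "application/json" },
    });
  }
}
```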
Use maxRetries to automatically retry failed requests. This is especially useful for handling temporary network issues or provider timeouts.

Next Steps
Explore Models
Browse all available models in the Morpheus marketplace and their capabilities.
AI SDK Documentation
Dive deeper into the Vercel AI SDK’s features, including agents, embeddings, and more.
Morpheus API Reference
Complete API documentation for all Morpheus Gateway endpoints.
Example Projects
Explore example projects and templates using the AI SDK.
Summary
You’ve successfully integrated the Morpheus API Gateway with Vercel’s AI SDK! Key takeaways:
- OpenAI Compatibility: Morpheus works seamlessly with the AI SDK’s OpenAI-compatible provider
- Streaming Support: Real-time streaming responses work out of the box with streamText
- Tool Calling: Define tools with Zod schemas for type-safe, multi-step interactions
- Model Selection: Choose between different Morpheus models (Llama, Qwen) based on your needs
- Free Inference: Build AI applications with free, decentralized inference during the Open Beta

