Welcome to the Morpheus Inference API
The Morpheus Inference API gives you a simple way to access the Morpheus Inference Marketplace. The API is fully OpenAI-compatible, so it integrates seamlessly with existing AI applications while routing requests through a decentralized marketplace that protects you and your data.
OpenAI Compatibility
The Morpheus Inference API implements the OpenAI API specification, ensuring compatibility with existing OpenAI clients and tools. Simply change the base URL and API key to switch from OpenAI to Morpheus:
curl

```bash
curl https://api.mor.org/api/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama-3.3-70b",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
```
Python

```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.mor.org/api/v1"
)

response = client.chat.completions.create(
    model="llama-3.3-70b",
    messages=[
        {"role": "user", "content": "Hello!"}
    ]
)

print(response.choices[0].message.content)
```
JavaScript

```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "YOUR_API_KEY",
  baseURL: "https://api.mor.org/api/v1",
});

const response = await client.chat.completions.create({
  model: "llama-3.3-70b",
  messages: [
    { role: "user", content: "Hello!" }
  ],
});

console.log(response.choices[0].message.content);
```
TypeScript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.MORPHEUS_API_KEY!,
  baseURL: "https://api.mor.org/api/v1",
});

const response = await client.chat.completions.create({
  model: "llama-3.3-70b",
  messages: [
    { role: "user", content: "Hello!" }
  ],
});

console.log(response.choices[0].message.content);
```
Any application that supports “OpenAI-compatible” APIs can work with Morpheus. This includes Cursor, Continue, LangChain, and hundreds of other tools.
Base URL
https://api.mor.org/api/v1
All API endpoints are prefixed with /api/v1. For example:
- Models: https://api.mor.org/api/v1/models
- Chat: https://api.mor.org/api/v1/chat/completions
- Embeddings: https://api.mor.org/api/v1/embeddings
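Because every endpoint shares the `/api/v1` prefix, a small helper can build full URLs from the base. The `endpoint` function below is purely illustrative and not part of any SDK:

```python
BASE_URL = "https://api.mor.org/api/v1"

def endpoint(path: str) -> str:
    """Join an endpoint path onto the versioned base URL (illustrative helper)."""
    return f"{BASE_URL}/{path.lstrip('/')}"

print(endpoint("models"))             # https://api.mor.org/api/v1/models
print(endpoint("/chat/completions"))  # https://api.mor.org/api/v1/chat/completions
```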
Authentication
The Inference API uses API key authentication. Generate your API key from the Admin Portal at app.mor.org.
Include your API key in the Authorization header:
Authorization: Bearer sk-xxxxxxxxxxxxx
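If you are calling the API without an SDK, the same header can be attached with Python's standard library. This is a minimal sketch; the key below is a placeholder:

```python
import urllib.request

API_KEY = "sk-xxxxxxxxxxxxx"  # placeholder -- generate a real key at app.mor.org

# Build a request carrying the bearer token in the Authorization header.
req = urllib.request.Request(
    "https://api.mor.org/api/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(req.get_header("Authorization"))  # Bearer sk-xxxxxxxxxxxxx
```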
API Categories
- Models: List and query available AI models and bids
- Chat: OpenAI-compatible chat completion endpoints
- Embeddings: Embeddings endpoint to support vector storage (RAG)
- Utility: Health Checks
Rate Limits
Rate limits are applied at the user level and vary based on your account tier. Contact support for enterprise rate limits.
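When a request hits your tier's limit, an exponential-backoff retry keeps clients well behaved. The sketch below assumes the API signals rate limiting with HTTP 429, which is conventional for OpenAI-compatible APIs but not confirmed by the status-code list in this document:

```python
import time

def with_retries(call, max_attempts=4, base_delay=1.0):
    """Invoke a zero-argument request function, retrying on HTTP 429.

    `call` must return a (status_code, body) tuple. The delay doubles
    after each rate-limited attempt (assumed 429 status; see note above).
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status != 429:
            return status, body
        time.sleep(base_delay * (2 ** attempt))
    return status, body  # give up and surface the last response
```

Wrap any HTTP client call in a lambda to use it, e.g. `with_retries(lambda: do_request())`.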
Error Handling
The API returns standard HTTP status codes:
200 - Success
201 - Created
204 - No Content (successful deletion)
400 - Bad Request
401 - Unauthorized
403 - Forbidden
404 - Not Found
422 - Validation Error
500 - Internal Server Error
Error responses include a JSON body with details:
```json
{
  "detail": [
    {
      "loc": ["body", "field_name"],
      "msg": "Field required",
      "type": "missing"
    }
  ]
}
```
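Each entry in `detail` pairs a field location (`loc`) with a message (`msg`), so validation errors can be flattened into readable strings. The `format_validation_errors` helper below is illustrative, not part of the API:

```python
import json

def format_validation_errors(body: str) -> list[str]:
    """Flatten a 422 error body into 'location: message' strings (illustrative)."""
    detail = json.loads(body).get("detail", [])
    return [f"{'.'.join(map(str, e['loc']))}: {e['msg']}" for e in detail]

sample = '{"detail": [{"loc": ["body", "field_name"], "msg": "Field required", "type": "missing"}]}'
print(format_validation_errors(sample))  # ['body.field_name: Field required']
```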