
Integrate

Copy-paste snippets for your favorite tools, agents, and languages. Fill in your details below and all code blocks update automatically.

The server speaks three protocols: OpenAI-compatible Chat Completions, Anthropic Messages, and the OpenAI Responses API.

🤖 AI Coding Agents

Pi by Mario Zechner

1. Install

npm install -g @mariozechner/pi-coding-agent

2. Configure ~/.pi/agent/models.json

Codex CLI by OpenAI

1. Set environment variable

2. Configure ~/.codex/config.toml
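A sketch of both steps, following Codex's model_providers config format; the provider id myserver and the env var name MYSERVER_API_KEY are arbitrary placeholders:

```toml
# ~/.codex/config.toml
model = "MODEL_NAME"
model_provider = "myserver"

[model_providers.myserver]
name = "My OpenAI-compatible server"
base_url = "SERVER_URL/v1"
wire_api = "chat"              # or "responses"; this server supports both
env_key = "MYSERVER_API_KEY"   # Codex reads the API key from this env var
```

For step 1, set the variable before launching: export MYSERVER_API_KEY="API_KEY".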

Aider

Run with environment variables
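A minimal sketch, assuming the standard OpenAI environment variables aider reads for OpenAI-compatible endpoints; the openai/ prefix on the model name routes the request through the OpenAI-compatible protocol:

```shell
export OPENAI_API_BASE="SERVER_URL/v1"
export OPENAI_API_KEY="API_KEY"
aider --model openai/MODEL_NAME
```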

Continue.dev VS Code extension

Add to ~/.continue/config.yaml
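A sketch of one model entry in Continue's YAML config format; the display name is arbitrary:

```yaml
models:
  - name: My Server        # arbitrary display name shown in the UI
    provider: openai        # speaks the OpenAI-compatible protocol
    model: MODEL_NAME
    apiBase: SERVER_URL/v1
    apiKey: API_KEY
```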

Cursor

Configure via the Cursor settings UI:

1. Open Settings > Models
2. Set "Override OpenAI Base URL" to:
SERVER_URL/v1
3. Set "OpenAI API Key" to your API key
4. Add your model name: MODEL_NAME
💻 OpenAI Chat Completions /v1/chat/completions

curl
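A minimal sketch of the request; replace SERVER_URL, API_KEY, and MODEL_NAME with your values:

```shell
curl SERVER_URL/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer API_KEY" \
  -d '{
    "model": "MODEL_NAME",
    "messages": [{"role": "user", "content": "Tell me a fun fact."}]
  }'
```

For the streaming variant, add "stream": true to the JSON body and pass -N to curl.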

Python (pip install openai)

Node.js / TypeScript (npm install openai)

Go (github.com/sashabaranov/go-openai)

curl (stream; use -N for unbuffered output)

Python (stream; pip install openai)

Node.js / TypeScript (stream; npm install openai)

Go (stream; github.com/sashabaranov/go-openai)

🧠 Anthropic Messages API /v1/messages

curl

Uses the Anthropic x-api-key and anthropic-version headers instead of a Bearer token.
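A sketch of the request with those headers; 2023-06-01 is the stable anthropic-version string, and max_tokens is required by the Messages API:

```shell
curl SERVER_URL/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -d '{
    "model": "MODEL_NAME",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Tell me a fun fact."}]
  }'
```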

Python (pip install anthropic)

curl (stream; use -N for unbuffered output)

Python (stream; pip install anthropic)

🔭 OpenAI Responses API /v1/responses

curl

The newer OpenAI Responses API uses a simpler input field instead of a messages array.
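A minimal sketch showing the input field; replace the placeholders with your values:

```shell
curl SERVER_URL/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer API_KEY" \
  -d '{
    "model": "MODEL_NAME",
    "input": "Tell me a fun fact."
  }'
```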

Python (pip install openai)

curl (stream; use -N for unbuffered output)

Python (stream; pip install openai)

📄 Response Formats

Chat Completions /v1/chat/completions

Standard OpenAI chat completions response format.

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "your-model-name",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! Here's a fun fact: ..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 42,
    "total_tokens": 54
  }
}

Chat Completions Streaming (Server-Sent Events)

When stream: true, returns SSE events with delta content.

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1700000000,"model":"your-model-name","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1700000000,"model":"your-model-name","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1700000000,"model":"your-model-name","choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1700000000,"model":"your-model-name","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}

data: [DONE]
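A consumer concatenates the delta.content fields and stops at the [DONE] sentinel. A minimal stdlib-only sketch of that loop, fed the events above:

```python
import json

def collect_chat_stream(lines):
    """Accumulate assistant text from Chat Completions SSE 'data:' lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)

events = [
    'data: {"choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"!"},"finish_reason":null}]}',
    'data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}',
    "data: [DONE]",
]
print(collect_chat_stream(events))  # Hello!
```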

Anthropic Messages /v1/messages

Standard Anthropic Messages API response format.

{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! Here's a fun fact: ..."
    }
  ],
  "model": "your-model-name",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 10,
    "output_tokens": 42
  }
}

Anthropic Messages Streaming (Server-Sent Events)

When stream: true, returns named SSE events with content deltas.

event: message_start
data: {"type":"message_start","message":{"id":"msg_abc123","type":"message","role":"assistant","content":[],"model":"your-model-name"}}

event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hello!"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"end_turn"},"usage":{"output_tokens":42}}

event: message_stop
data: {"type":"message_stop"}
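Unlike the Chat Completions stream, these events are named, so a consumer tracks the current event name and pulls text only from content_block_delta payloads. A minimal stdlib-only sketch:

```python
import json

def collect_messages_stream(lines):
    """Accumulate text from Anthropic Messages SSE, keyed on named events."""
    text = []
    event = None
    for line in lines:
        if line.startswith("event: "):
            event = line[len("event: "):]  # remember the current event name
        elif line.startswith("data: ") and event == "content_block_delta":
            data = json.loads(line[len("data: "):])
            if data["delta"]["type"] == "text_delta":
                text.append(data["delta"]["text"])
    return "".join(text)

events = [
    "event: content_block_start",
    'data: {"type":"content_block_start","index":0,"content_block":{"type":"text","text":""}}',
    "event: content_block_delta",
    'data: {"type":"content_block_delta","index":0,"delta":{"type":"text_delta","text":"Hello!"}}',
    "event: message_stop",
    'data: {"type":"message_stop"}',
]
print(collect_messages_stream(events))  # Hello!
```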

OpenAI Responses /v1/responses

Newer OpenAI Responses API format.

{
  "id": "resp_abc123",
  "object": "response",
  "created_at": 1700000000,
  "model": "your-model-name",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Hello! Here's a fun fact: ..."
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 10,
    "output_tokens": 42,
    "total_tokens": 52
  }
}
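The assistant text sits two levels deep (output item, then content part), so a small helper is handy. A minimal sketch that joins the output_text parts of every message item:

```python
import json

def output_text(response):
    """Join all output_text parts across message items in 'output'."""
    parts = []
    for item in response["output"]:
        if item["type"] != "message":
            continue  # skip non-message output items
        for part in item["content"]:
            if part["type"] == "output_text":
                parts.append(part["text"])
    return "".join(parts)

response = json.loads("""{
  "id": "resp_abc123",
  "output": [
    {"type": "message", "role": "assistant",
     "content": [{"type": "output_text", "text": "Hello! Here's a fun fact: ..."}]}
  ]
}""")
print(output_text(response))
```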

Responses API Streaming (Server-Sent Events)

When stream: true, returns named SSE events.

event: response.created
data: {"type":"response.created","response":{"id":"resp_abc123","object":"response","status":"in_progress"}}

event: response.output_text.delta
data: {"type":"response.output_text.delta","delta":"Hello"}

event: response.output_text.delta
data: {"type":"response.output_text.delta","delta":"!"}

event: response.output_text.done
data: {"type":"response.output_text.done","text":"Hello!"}

event: response.completed
data: {"type":"response.completed","response":{"id":"resp_abc123","object":"response","status":"completed"}}
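Here each data payload carries its own type field, and the delta is a plain string rather than an object, so a consumer can filter on the type alone. A minimal stdlib-only sketch:

```python
import json

def collect_responses_stream(lines):
    """Accumulate text from Responses API 'response.output_text.delta' events."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip 'event:' lines and blank separators
        data = json.loads(line[len("data: "):])
        if data["type"] == "response.output_text.delta":
            text.append(data["delta"])  # delta is a plain string here
    return "".join(text)

events = [
    'data: {"type":"response.created","response":{"id":"resp_abc123","status":"in_progress"}}',
    'data: {"type":"response.output_text.delta","delta":"Hello"}',
    'data: {"type":"response.output_text.delta","delta":"!"}',
    'data: {"type":"response.output_text.done","text":"Hello!"}',
    'data: {"type":"response.completed","response":{"id":"resp_abc123","status":"completed"}}',
]
print(collect_responses_stream(events))  # Hello!
```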
📚 Quick Reference
API Base URL
SERVER_URL/v1
Authorization Header
Bearer API_KEY
Supported Endpoints
/v1/chat/completions · /v1/messages · /v1/responses · /v1/models