
The Claw Bay Docs

The Claw Bay is fully OpenAI-compatible. Point your existing OpenAI SDK at our base URL, set your bearer key, and your code works with no other changes.

OpenAI Base URL: https://api.theclawbay.com/v1

Quick Setup

Get up and running with The Claw Bay in minutes. Choose your preferred setup method below.

# 1) Install Node.js (if needed)
brew install node

# 2) Install CLI tools
npm i -g @openai/codex theclawbay

# 3) Run one-time setup
theclawbay setup

# 4) Start Codex
codex

Supported clients

Codex CLI, Codex VS Code, Codex App, Continue, Cline, OpenClaw, OpenCode, Kilo Code, Roo Code, Aider

Configuration

Three steps to make your first request. No extra SDK needed - use the official OpenAI library with a custom baseURL.

OpenAI-compatible apps use https://api.theclawbay.com/v1. Native Codex config uses https://api.theclawbay.com/backend-api/codex, and theclawbay setup manages that route for you when you link a device.

1. Get your key

Open the dashboard, reveal your key, and store it as THECLAWBAY_API_KEY.

2. Set base URL

Point your OpenAI client at the Claw Bay base URL shown above.

3. Call /models first

Fetch the live model list so your app picks an available model at runtime.

SDK Examples

Use the official OpenAI SDK for GPT/Codex with The Claw Bay's OpenAI-compatible base URL.

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.THECLAWBAY_API_KEY,
  baseURL: "https://api.theclawbay.com/v1",
});

const models = await client.models.list();
const model = models.data[0]?.id ?? "gpt-5.5";

const response = await client.responses.create({
  model,
  input: "Write a short launch note for a new SaaS feature.",
  reasoning: { effort: "medium" },
});

console.log(response.output_text);

Supported Endpoints

All routes live under the base URL. Authenticate every request with Authorization: Bearer <key>.

GET /models

List all models currently available. Call this first so your app selects a live model.

POST /responses

Responses API - recommended for new integrations. Supports reasoning, tools, and streaming.

POST /chat/completions

OpenAI-compatible chat completions. Drop-in replacement for existing apps.

POST /images/generations

Direct GPT Image 1.5 endpoint. Returns OpenAI-style image responses.

GET /quota

Returns your current usage - 5-hour and weekly windows, percent used.

Image Generation

You can generate images either through the direct OpenAI-compatible /images/generations route with model: "gpt-image-1.5", or through the Responses API using the hosted image_generation tool.

Billing is based on the image metadata the upstream actually returns. If the upstream upgrades a request from low to medium quality, The Claw Bay bills for the returned medium-quality image rather than the lower requested setting.

Direct Images API - JavaScript
const image = await client.images.generate({
  model: "gpt-image-1.5",
  prompt: "A minimalist black sailboat on a white background.",
  size: "1024x1024",
  quality: "low",
});

console.log(image.data[0]?.b64_json);
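The b64_json payload above is base64-encoded PNG data, so it can be written straight to disk. A minimal sketch, with the live response stubbed by an inline 1x1 PNG for illustration:

```js
import { writeFileSync } from "node:fs";

// Stand-in for the client.images.generate response above: a 1x1 PNG.
const image = {
  data: [
    {
      b64_json:
        "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg==",
    },
  ],
};

// Decode the base64 payload and save it as a PNG file.
const b64 = image.data[0]?.b64_json;
if (b64) writeFileSync("sailboat.png", Buffer.from(b64, "base64"));
```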
Responses API image_generation tool - JavaScript
const response = await client.responses.create({
  model: "gpt-5.5",
  input: "Generate a minimalist black sailboat on a white background.",
  tools: [
    {
      type: "image_generation",
      size: "1024x1024",
      quality: "low",
      background: "opaque",
    },
  ],
  tool_choice: { type: "image_generation" },
});

console.log(response.output);
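With the hosted tool, the image arrives as an item inside response.output rather than a top-level field. A sketch of pulling it out, assuming the tool emits an image_generation_call item carrying base64 data in its result field:

```js
// Find the hosted tool's output item and return its base64 payload,
// or undefined if no image item is present.
function extractImageB64(output) {
  const item = output.find((o) => o.type === "image_generation_call");
  return item?.result;
}

// Usage with the response above:
// const b64 = extractImageB64(response.output);
```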

Streaming

Pass stream: true to receive server-sent events. The event type response.output_text.delta carries each text chunk.

Streaming - JavaScript
const stream = await client.responses.create({
  model,
  input: "Stream a concise product update.",
  stream: true,
  reasoning: { effort: "low" },
});

for await (const event of stream) {
  if (event.type === "response.output_text.delta") {
    process.stdout.write(event.delta ?? "");
  }
}
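If you need the complete text after the stream ends, the same delta events can be folded into one string. A small helper, shown with sample events rather than a live stream:

```js
// Accumulate response.output_text.delta events into the final text,
// ignoring every other event type.
function collectText(events) {
  let text = "";
  for (const event of events) {
    if (event.type === "response.output_text.delta") {
      text += event.delta ?? "";
    }
  }
  return text;
}
```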

Tool Calling

Function-style tools are fully supported via /chat/completions. Define your schema, set tool_choice: "auto", and parse the response.

Tool calling - JavaScript
const completion = await client.chat.completions.create({
  model,
  messages: [
    { role: "user", content: "What is the weather in Boston?" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather for a city.",
        parameters: {
          type: "object",
          properties: {
            location: { type: "string" },
            unit: { type: "string", enum: ["c", "f"] },
          },
          required: ["location"],
        },
      },
    },
  ],
  tool_choice: "auto",
});

console.log(completion.choices[0]?.message?.tool_calls ?? []);
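After parsing tool_calls, run the function locally and send each result back as a role: "tool" message so the model can produce its final answer. A sketch, with get_current_weather stubbed by a hypothetical local implementation:

```js
// Hypothetical local implementation of the tool declared above.
function getCurrentWeather({ location, unit = "c" }) {
  return { location, unit, temperature: 21, conditions: "clear" };
}

// Turn each tool call into the role:"tool" message the API expects:
// parse the JSON arguments, run the function, stringify the result.
function runToolCalls(toolCalls) {
  return toolCalls.map((call) => ({
    role: "tool",
    tool_call_id: call.id,
    content: JSON.stringify(
      getCurrentWeather(JSON.parse(call.function.arguments)),
    ),
  }));
}

// Then continue the conversation with the results appended:
// const toolMessages = runToolCalls(completion.choices[0].message.tool_calls ?? []);
// const followUp = await client.chat.completions.create({
//   model,
//   messages: [...messages, completion.choices[0].message, ...toolMessages],
// });
```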

Quota & Errors

Check your current usage at any time. Rate-limit errors follow the standard OpenAI shape with an extra theclawbayError field.

Check quota
curl "https://theclawbay.com/api/codex-auth/v1/quota" \
  -H "Authorization: Bearer $THECLAWBAY_API_KEY"
Legacy Codex/OpenCode-compatible quota
curl "https://theclawbay.com/api/codex-auth/v1/quota?format=legacy_codex" \
  -H "Authorization: Bearer $THECLAWBAY_API_KEY"

Most apps should use the standard OpenAI-compatible base URL only. If an OpenCode quota or auth plugin expects the older Codex quota schema, request /api/codex-auth/v1/quota?format=legacy_codex instead.

Error Headers

x-theclawbay-request-id - Unique ID for debugging
x-theclawbay-error-code - Machine-readable error code
x-theclawbay-retryable - Whether retrying will help
Retry-After - Seconds until the limit resets

Common Error Codes

weekly_cost_limit_reached - Weekly spend cap hit
5h_cost_limit_reached - 5-hour spend cap hit
invalid_api_key - Key missing or malformed
model_not_found - Requested model unavailable
Example error response
HTTP/1.1 429 Too Many Requests
x-theclawbay-error-code: weekly_cost_limit_reached

{
  "error": "weekly cost limit reached for this account",
  "theclawbayError": {
    "category": "quota",
    "code": "weekly_cost_limit_reached",
    "retryable": false
  }
}
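The headers above are enough to drive a simple retry policy. A sketch; the 5-second fallback when Retry-After is absent is an assumption on our part, not documented gateway behavior:

```js
// Decide whether (and how long) to wait before retrying a failed request.
// Returns milliseconds to wait, or null when a retry won't help.
function retryDelayMs(status, headers) {
  // The gateway says retrying won't help: give up immediately.
  if (headers["x-theclawbay-retryable"] === "false") return null;
  // Prefer the server-provided Retry-After (seconds).
  const retryAfter = Number(headers["retry-after"]);
  if (Number.isFinite(retryAfter) && retryAfter > 0) return retryAfter * 1000;
  // Assumed fallback when no Retry-After header is present.
  return status === 429 || status >= 500 ? 5000 : null;
}
```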

API Reference

Reasoning levels in model metadata

The Claw Bay now exposes supported reasoning levels directly on the OpenAI-compatible /v1/models response. That gives endpoint-driven apps a clean way to discover which models support reasoning and which effort levels are valid before sending a request.

Returned on GET /v1/models

Each reasoning-capable model includes supported efforts and a default effort.

{
  "id": "gpt-5.5",
  "object": "model",
  "owned_by": "theclawbay",
  "supports_reasoning": true,
  "supported_reasoning_efforts": ["low", "medium", "high", "xhigh"],
  "default_reasoning_effort": "xhigh"
}
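This metadata makes effort selection data-driven. A sketch that validates a requested effort against a model entry from GET /v1/models and falls back to the model's default when the request isn't supported:

```js
// Pick a valid reasoning effort for a model entry from GET /v1/models.
// Returns undefined for models without reasoning support.
function resolveEffort(model, requested) {
  if (!model.supports_reasoning) return undefined;
  const supported = model.supported_reasoning_efforts ?? [];
  return supported.includes(requested)
    ? requested
    : model.default_reasoning_effort;
}

// Usage with the live list:
// const models = await client.models.list();
// const entry = models.data.find((m) => m.id === "gpt-5.5");
// const effort = resolveEffort(entry, "low");
```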

Current reasoning-capable models

GPT-5.5 (gpt-5.5): default xhigh; supports low, medium, high, xhigh

GPT-5.4 (gpt-5.4): default medium; supports minimal, low, medium, high

GPT-5.4 Mini (gpt-5.4-mini): default medium; supports minimal, low, medium, high

GPT-5.3 Codex (gpt-5.3-codex): default medium; supports low, medium, high

GPT-5.2 Codex (gpt-5.2-codex): default medium; supports low, medium, high, xhigh

GPT-5.2 (gpt-5.2): default medium; supports none, low, medium, high, xhigh

GPT-5.1 Codex Max (gpt-5.1-codex-max): default medium; supports none, medium, high, xhigh

GPT-5.1 Codex Mini (gpt-5.1-codex-mini): default medium; supports medium, high