Reasoning models are LLMs trained with reinforcement learning to think before they answer, producing a long internal chain of thought before responding. They excel at complex problem solving, coding, scientific reasoning, and multi-step planning for agentic workflows.
The Responses API is best supported on the Enterprise plan. Use https://enterprise.blackbox.ai as the base URL for full model availability and production reliability. The API is also available on standard plans at https://api.blackbox.ai, where it is currently experimental.

Basic Reasoning Request

Pass a reasoning object with an effort field to control how much the model thinks before responding:
const response = await fetch('https://enterprise.blackbox.ai/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    reasoning: {
      effort: 'medium',
    },
    input: [
      {
        role: 'user',
        content:
          'Write a bash script that takes a matrix represented as a string with format [1,2],[3,4],[5,6] and prints the transpose in the same format.',
      },
    ],
  }),
});

const data = await response.json();
const msg = data.output.find((item: any) => item.type === 'message');
const text = msg?.content?.find((part: any) => part.type === 'output_text')?.text;
console.log(text);

Reasoning Response

When reasoning is enabled, the output array includes a reasoning item with the model’s internal thinking, followed by a message item with the visible response:
{
  "id": "resp_QpQxYjZSgteaeH8SXShotw4n...",
  "object": "response",
  "model": "blackboxai/openai/gpt-5.3-codex",
  "status": "completed",
  "output": [
    {
      "id": "rs_1772502265541_jj0h02gs7k",
      "type": "reasoning",
      "summary": [],
      "content": [
        {
          "type": "reasoning_text",
          "text": "Calculating arithmetic\n\nI need to resolve a simple arithmetic problem: what's 15 times 27? The answer is 405."
        }
      ],
      "encrypted_content": "gAAAAABppjz3bJfZv5_WjuJI..."
    },
    {
      "id": "msg_abc456",
      "type": "message",
      "role": "assistant",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "text": "15 × 27 = **405**.",
          "annotations": []
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 75,
    "output_tokens": 1186,
    "output_tokens_details": {
      "reasoning_tokens": 1024
    },
    "total_tokens": 1261
  }
}
The encrypted_content field contains an opaque token that can be passed back in multi-turn conversations so the model can reference its previous reasoning. Use include: ["reasoning.encrypted_content"] to receive it.
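As a sketch (assuming the response JSON has already been parsed into a dict), the encrypted reasoning token can be pulled out of the output array with a small helper; get_encrypted_reasoning is a hypothetical convenience function, not part of the API:

```python
def get_encrypted_reasoning(output):
    """Return the encrypted_content of the first reasoning item, or None.

    `output` is the parsed `output` array of a Responses API response.
    """
    for item in output:
        if item.get('type') == 'reasoning':
            return item.get('encrypted_content')
    return None

# Example with the output shape shown above (values shortened):
output = [
    {'type': 'reasoning', 'encrypted_content': 'gAAAAABppjz3...'},
    {'type': 'message', 'role': 'assistant'},
]
print(get_encrypted_reasoning(output))  # gAAAAABppjz3...
```

The returned token can then be carried forward verbatim inside the reasoning item when building the next turn's input.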

Reasoning with Tool Calling

When reasoning is combined with tool calling, the model thinks first (producing a reasoning item), then calls the tool (producing a function_call item):
import os
import requests

response = requests.post(
    'https://enterprise.blackbox.ai/v1/responses',
    headers={
        'Content-Type': 'application/json',
        'Authorization': f"Bearer {os.environ['BLACKBOX_API_KEY']}",
    },
    json={
        'model': 'blackboxai/openai/gpt-5.3-codex',
        'reasoning': {'effort': 'medium'},
        'include': ['reasoning.encrypted_content'],
        'input': [
            {'role': 'user', 'content': "What's the weather in Paris?"}
        ],
        'tools': [{
            'type': 'function',
            'name': 'get_weather',
            'description': 'Get the current weather in a location',
            'parameters': {
                'type': 'object',
                'properties': {
                    'city': {'type': 'string', 'description': 'City name'}
                },
                'required': ['city']
            }
        }]
    },
)
data = response.json()

for item in data['output']:
    if item['type'] == 'reasoning':
        content = item.get('content') or [{}]
        print(f"[Reasoning] {content[0].get('text', '')[:100]}...")
    elif item['type'] == 'function_call':
        print(f"[Tool] {item['name']}({item['arguments']})")
Response — Reasoning + Tool Call
{
  "id": "resp_WKnCTWadviYHbIbazz8YU1C1...",
  "object": "response",
  "model": "blackboxai/openai/gpt-5.3-codex",
  "status": "completed",
  "output": [
    {
      "id": "rs_1772502274510_1t9dmjqoee5j",
      "type": "reasoning",
      "summary": [],
      "content": [
        {
          "type": "reasoning_text",
          "text": "I need to use the weather tool to get the weather for Paris."
        }
      ],
      "encrypted_content": "gAAAAABppj0BM4w_s3F9t4kb..."
    },
    {
      "id": "fc_1772502274510_g7twdw0wz7v",
      "type": "function_call",
      "name": "get_weather",
      "arguments": "{\"city\":\"Paris\"}",
      "call_id": "call_7imH3aTWGafLCivAHxauCCrV",
      "status": "completed"
    }
  ],
  "usage": {
    "input_tokens": 591,
    "output_tokens": 120,
    "output_tokens_details": { "reasoning_tokens": 64 },
    "total_tokens": 711
  }
}

Multi-Turn with Reasoning

After executing the tool, pass back the reasoning item, function call, and tool result in the input array. The model uses its previous reasoning context to generate the final answer:
import os
import requests
import json

url = 'https://enterprise.blackbox.ai/v1/responses'
headers = {
    'Content-Type': 'application/json',
    'Authorization': f"Bearer {os.environ['BLACKBOX_API_KEY']}",
}

tools = [{
    'type': 'function',
    'name': 'calculator',
    'description': 'Perform a calculation',
    'parameters': {
        'type': 'object',
        'properties': {'expression': {'type': 'string'}},
        'required': ['expression']
    }
}]

# Turn 1: Model reasons and calls the tool
r1 = requests.post(url, headers=headers, json={
    'model': 'blackboxai/openai/gpt-5.3-codex',
    'reasoning': {'effort': 'medium'},
    'include': ['reasoning.encrypted_content'],
    'input': [{'role': 'user', 'content': 'What is 15 + 27? Use the calculator.'}],
    'tools': tools
}).json()

# Build input for Turn 2: include all output items + tool result
input_turn2 = [{'role': 'user', 'content': 'What is 15 + 27? Use the calculator.'}]
for item in r1['output']:
    input_turn2.append(item)
    if item['type'] == 'function_call':
        input_turn2.append({
            'type': 'function_call_output',
            'call_id': item['call_id'],
            'output': '42'
        })

# Turn 2: Model uses reasoning context + tool result for final answer
r2 = requests.post(url, headers=headers, json={
    'model': 'blackboxai/openai/gpt-5.3-codex',
    'reasoning': {'effort': 'medium'},
    'input': input_turn2,
    'tools': tools
}).json()

msg = next(item for item in r2.get('output', []) if item.get('type') == 'message')
text = next(part['text'] for part in msg['content'] if part['type'] == 'output_text')
print(text)  # "15 + 27 = 42"
Turn 2 Response — Final Answer
{
  "id": "resp_AaQmI9orpLppdWHhpAkPGu5o...",
  "object": "response",
  "model": "blackboxai/openai/gpt-5.3-codex",
  "status": "completed",
  "output": [
    {
      "id": "msg_1772502284627_3tpxqy1rums",
      "type": "message",
      "role": "assistant",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "text": "15 + 27 = **42**.",
          "annotations": []
        }
      ]
    }
  ],
  "usage": {
    "input_tokens": 85,
    "output_tokens": 13,
    "output_tokens_details": { "reasoning_tokens": 0 },
    "total_tokens": 98
  }
}

Effort Levels

The reasoning.effort parameter guides how many reasoning tokens the model generates before responding. The default is medium.
| Value | Description |
| --- | --- |
| "low" | Favors speed and economical token usage |
| "medium" | Balanced between speed and reasoning accuracy (default) |
| "high" | Favors more complete reasoning for complex tasks |
| "xhigh" | Maximum reasoning depth; allocates the largest share of tokens to thinking |
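For illustration, a request body with a validated effort level might be assembled like this; build_reasoning_request is a hypothetical helper, not part of the API:

```python
# The four effort levels accepted by reasoning.effort.
VALID_EFFORTS = {'low', 'medium', 'high', 'xhigh'}

def build_reasoning_request(prompt, effort='medium'):
    """Build a Responses API request body with a validated reasoning effort."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"effort must be one of {sorted(VALID_EFFORTS)}, got {effort!r}")
    return {
        'model': 'blackboxai/openai/gpt-5.3-codex',
        'reasoning': {'effort': effort},
        'input': [{'role': 'user', 'content': prompt}],
    }

body = build_reasoning_request('Summarize this diff.', effort='high')
print(body['reasoning'])  # {'effort': 'high'}
```

Validating the effort value client-side catches typos like "xlarge" before they reach the API.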

Reasoning Summaries

The reasoning.summary field is part of the OpenAI Responses API spec and is accepted by the API, but support varies by model. When a model supports it, setting the field causes the output array to include, before the message item, a reasoning item containing a human-readable summary of the model's internal thinking.
| Value | Description |
| --- | --- |
| "auto" | Uses the most detailed summarizer available (recommended) |
| "detailed" | Full step-by-step reasoning summary |
| "concise" | A shorter, high-level summary of the reasoning process |
reasoning.summary is not currently supported by blackboxai/openai/gpt-5.3-codex. When unsupported, the field is accepted but ignored — the response returns only the message output item.
When a model supports reasoning.summary, the output array contains a reasoning item before the message:
[
  {
    "id": "rs_abc123",
    "type": "reasoning",
    "summary": [
      {
        "type": "summary_text",
        "text": "**Answering a simple question**\n\nThe capital of France is Paris — a well-known fact. I'll keep the answer brief and direct."
      }
    ]
  },
  {
    "id": "msg_abc456",
    "type": "message",
    "status": "completed",
    "role": "assistant",
    "content": [
      {
        "type": "output_text",
        "text": "The capital of France is Paris.",
        "annotations": []
      }
    ]
  }
]
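Because support varies by model, client code should read the summary defensively and tolerate its absence. A sketch (reasoning_summary is a hypothetical helper, not part of the API):

```python
def reasoning_summary(output):
    """Return the concatenated summary text of reasoning items, or None if absent.

    Handles models that ignore reasoning.summary (no reasoning item, or an
    empty summary array) by returning None instead of raising.
    """
    parts = []
    for item in output:
        if item.get('type') == 'reasoning':
            for s in item.get('summary', []):
                if s.get('type') == 'summary_text':
                    parts.append(s['text'])
    return '\n'.join(parts) if parts else None

# With the example output shown above:
output = [
    {'type': 'reasoning', 'summary': [
        {'type': 'summary_text', 'text': '**Answering a simple question**'}
    ]},
    {'type': 'message', 'role': 'assistant'},
]
print(reasoning_summary(output))  # **Answering a simple question**
```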

Token Usage

Reasoning tokens are generated internally and are not visible in the response, but they are billed as output tokens and appear in usage.output_tokens_details:
{
  "usage": {
    "input_tokens": 75,
    "output_tokens": 1186,
    "output_tokens_details": {
      "reasoning_tokens": 1024
    },
    "total_tokens": 1261
  }
}
Reasoning tokens occupy space in the model’s context window and are billed as output tokens even though they are not returned in the response.
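Since reasoning tokens are counted inside output_tokens, the number of tokens in the visible response can be derived from the usage block. A minimal sketch (visible_output_tokens is a hypothetical helper):

```python
def visible_output_tokens(usage):
    """Tokens in the visible response: output_tokens minus reasoning_tokens."""
    details = usage.get('output_tokens_details', {})
    return usage['output_tokens'] - details.get('reasoning_tokens', 0)

# Using the usage block from the example above:
usage = {
    'input_tokens': 75,
    'output_tokens': 1186,
    'output_tokens_details': {'reasoning_tokens': 1024},
    'total_tokens': 1261,
}
print(visible_output_tokens(usage))  # 162
```

Tracking this split is useful for cost monitoring, since a high effort level can make reasoning tokens dominate the bill even when the visible answer is short.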

Handling Incomplete Responses

If the model runs out of token budget while reasoning, it may return status: incomplete — or in some cases return status: completed with an empty output_text when all tokens were consumed by internal reasoning. Reserve enough tokens for both reasoning and the visible output to avoid this.
const response = await fetch('https://enterprise.blackbox.ai/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    reasoning: { effort: 'medium' },
    max_output_tokens: 300,
    input: [
      {
        role: 'user',
        content: 'Write a bash script that transposes a matrix.',
      },
    ],
  }),
});

const data = await response.json();
if (data.status === 'incomplete' && data.incomplete_details?.reason === 'max_output_tokens') {
  console.log('Ran out of tokens');
  const msg = data.output?.find((item: any) => item.type === 'message');
  const text = msg?.content?.find((part: any) => part.type === 'output_text')?.text;
  if (text) {
    console.log('Partial output:', text);
  } else {
    console.log('Ran out of tokens during reasoning — no visible output produced');
  }
}

Request Parameters

reasoning (object)
Configures reasoning behavior for the model.

max_output_tokens (integer)
Maximum number of tokens to generate in the response, including reasoning tokens. Increase this when using higher effort levels to avoid incomplete responses.

Next Steps

Text Generation

Learn the basics of the Responses API

Tool Calling

Combine reasoning with tool calls for complex agentic workflows

Best Practices

Preserve reasoning signatures across turns and avoid common errors