These examples use the Enterprise plan endpoint https://enterprise.blackbox.ai. The API is also available on standard plans at https://api.blackbox.ai, where it is currently experimental.

Structuring Tool Call IDs

When sending multi-turn conversations that include tool calls, every tool_call ID and its matching tool_call_id must only contain letters, numbers, underscores, and hyphens ([a-zA-Z0-9_-]).

What causes the error

Using IDs with dots, colons, or other special characters. This commonly happens when you construct IDs yourself or copy them from provider-internal formats:
// These IDs will be REJECTED — they contain dots
{
  "role": "assistant",
  "tool_calls": [
    {
      "id": "toolu_vrtx_01QhtesphwJp7uBdvuFWVhMd.nested.id",
      "type": "function",
      "function": { "name": "get_weather", "arguments": "{}" }
    }
  ]
}

The correct way

Always use the id from the model’s response exactly as returned. If you are generating your own IDs (for example, when replaying a conversation), follow the pattern call_<alphanumeric>:
from openai import OpenAI

client = OpenAI(
    base_url="https://enterprise.blackbox.ai",  # the SDK appends /chat/completions
    api_key="<BLACKBOX_API_KEY>",
)

# Step 1: Send the initial request with tools
response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }]
)

message = response.choices[0].message

# Step 2: Use the tool_call ID exactly as the model returned it
# The ID will look like "call_abc123" — never modify it
tool_call_id = message.tool_calls[0].id

# Step 3: Send the tool result back, matching the exact ID
messages = [
    {"role": "user", "content": "What's the weather in Tokyo?"},
    {
        "role": "assistant",
        "content": message.content,
        "tool_calls": [
            {
                "id": tool_call_id,  # Use the exact ID from the response
                "type": "function",
                "function": message.tool_calls[0].function
            }
        ]
    },
    {
        "role": "tool",
        "tool_call_id": tool_call_id,  # Must match the tool_call ID above
        "content": '{"temperature": 22, "condition": "sunny"}'
    }
]

follow_up = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=messages,
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                },
                "required": ["location"]
            }
        }
    }]
)
Never construct tool call IDs manually with characters like . or :. If you need to generate your own IDs (for testing or replaying conversations), use a format like call_abc123 — only alphanumeric characters, hyphens, and underscores.
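If you do generate IDs yourself, it is cheap to enforce the charset before a request ever reaches the API. A minimal sketch (the helper names are ours, not part of any SDK):

```python
import re
import secrets

# Allowed charset for tool call IDs: letters, digits, underscores, hyphens
TOOL_CALL_ID_RE = re.compile(r"^[a-zA-Z0-9_-]+$")

def make_tool_call_id() -> str:
    """Generate a replay-safe ID in the call_<alphanumeric> format."""
    return f"call_{secrets.token_hex(12)}"

def is_valid_tool_call_id(tool_call_id: str) -> bool:
    """Check an ID against the allowed charset before sending it."""
    return bool(TOOL_CALL_ID_RE.fullmatch(tool_call_id))
```

Generated IDs always pass the check, while an ID like "toolu_vrtx_01Qh.nested.id" fails because of the dots.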

Pairing Every Tool Call with a Tool Result

When the model responds with one or more tool_calls, every tool call must have a corresponding tool message with a matching tool_call_id immediately after the assistant message.

What causes the error

Sending an assistant message that contains tool_calls without providing a tool result for each one in the next turn. This commonly happens when you only handle the first tool call and ignore the rest, or when you skip the tool result entirely and send another user message instead:
// The model returned TWO tool calls, but only ONE tool result is provided
{
  "messages": [
    { "role": "user", "content": "Weather and time in London?" },
    {
      "role": "assistant",
      "content": null,
      "tool_calls": [
        { "id": "call_abc", "type": "function", "function": { "name": "get_weather", "arguments": "{}" } },
        { "id": "call_def", "type": "function", "function": { "name": "get_time", "arguments": "{}" } }
      ]
    },
    { "role": "tool", "tool_call_id": "call_abc", "content": "{\"temp\": 15}" }
    // MISSING: tool result for "call_def" — this will cause a 400 error
  ]
}
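A history like the one above can be caught mechanically before it is sent. A sketch of such a check (the function name is ours, not part of the API):

```python
def find_unanswered_tool_calls(messages: list[dict]) -> list[str]:
    """Return the IDs of tool_calls that have no matching tool result yet."""
    pending: list[str] = []
    for msg in messages:
        if msg.get("role") == "assistant":
            pending.extend(tc["id"] for tc in msg.get("tool_calls") or [])
        elif msg.get("role") == "tool" and msg.get("tool_call_id") in pending:
            pending.remove(msg["tool_call_id"])
    return pending

history = [
    {"role": "user", "content": "Weather and time in London?"},
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_abc", "type": "function",
         "function": {"name": "get_weather", "arguments": "{}"}},
        {"id": "call_def", "type": "function",
         "function": {"name": "get_time", "arguments": "{}"}},
    ]},
    {"role": "tool", "tool_call_id": "call_abc", "content": '{"temp": 15}'},
]
print(find_unanswered_tool_calls(history))  # ['call_def'] (would cause a 400)
```

An empty return value means every tool call is paired and the request is safe to send.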

The correct way

Loop through all tool_calls and send a tool message for each one:
from openai import OpenAI

client = OpenAI(
    base_url="https://enterprise.blackbox.ai",  # the SDK appends /chat/completions
    api_key="<BLACKBOX_API_KEY>",
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_time",
            "description": "Get current time for a timezone",
            "parameters": {
                "type": "object",
                "properties": {"timezone": {"type": "string"}},
                "required": ["timezone"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "user", "content": "What's the weather and time in London?"}
    ],
    tools=tools
)

message = response.choices[0].message

# The model may return multiple tool_calls — handle ALL of them
messages = [
    {"role": "user", "content": "What's the weather and time in London?"},
    {
        "role": "assistant",
        "content": message.content,
        "tool_calls": [
            {"id": tc.id, "type": "function", "function": tc.function}
            for tc in message.tool_calls
        ]
    }
]

# Add a tool result for EVERY tool call
for tool_call in message.tool_calls:
    if tool_call.function.name == "get_weather":
        result = '{"temperature": 15, "condition": "cloudy"}'
    elif tool_call.function.name == "get_time":
        result = '{"time": "14:30 GMT"}'
    else:
        result = '{"error": "Unknown tool"}'

    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result
    })

# Now send all tool results back
follow_up = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=messages,
    tools=tools
)
If a tool call fails on your side, still send back a tool message for it — set the content to a JSON error like {"error": "service unavailable"}. The model can use this to adjust its response. Never skip a tool result.
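One way to guarantee that rule is to route every tool call through a wrapper that converts exceptions into an error payload. A sketch (run_tool is our name, and the weather values are placeholders):

```python
import json
from types import SimpleNamespace

def run_tool(tool_call) -> str:
    """Execute a tool call and ALWAYS return a JSON string, even on failure."""
    try:
        # Parse arguments first so malformed JSON surfaces as an error payload
        args = json.loads(tool_call.function.arguments or "{}")
        if tool_call.function.name == "get_weather":
            return json.dumps({"location": args.get("location"),
                               "temperature": 15, "condition": "cloudy"})
        return json.dumps({"error": f"unknown tool: {tool_call.function.name}"})
    except Exception as exc:
        # The model can recover from an error payload, not from a missing result
        return json.dumps({"error": str(exc)})

# Even malformed arguments produce a tool result instead of a skipped message
broken = SimpleNamespace(function=SimpleNamespace(name="get_weather",
                                                  arguments="{not json"))
result = run_tool(broken)
```

Appending `{"role": "tool", "tool_call_id": tool_call.id, "content": run_tool(tool_call)}` for every call then satisfies the pairing rule unconditionally.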

Preserving Reasoning Blocks in Multi-Turn Requests

When using reasoning models with tool calling, the model returns reasoning_details alongside its response. If you send those reasoning blocks back in a follow-up request, they must be exactly as the model returned them — including any signature fields.

What causes the error

Modifying, reordering, or stripping fields from the reasoning_details before sending them back. This commonly happens when you serialize the response to a database and a field gets dropped, or when you manually rebuild the assistant message and forget the signature:
// The signature has been altered or removed — this will cause a 400 error
{
  "role": "assistant",
  "content": null,
  "tool_calls": [{ "id": "call_abc", "type": "function", "function": { "name": "get_weather", "arguments": "{}" } }],
  "reasoning_details": [
    {
      "type": "reasoning.text",
      "text": "Let me think about what clothes to recommend...",
      "signature": null
    }
  ]
}
The signature was originally a cryptographic string like "erUBMkiJvNVMxLa..." but was set to null during serialization. The provider rejects the entire request because the signature no longer matches.

The correct way

Pass the entire reasoning_details array back without touching it:
from openai import OpenAI

client = OpenAI(
    base_url="https://enterprise.blackbox.ai",  # the SDK appends /chat/completions
    api_key="<BLACKBOX_API_KEY>",
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"]
        }
    }
}]

# First call — model reasons and requests a tool
response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=[
        {"role": "user", "content": "What should I wear in Boston today?"}
    ],
    tools=tools,
    extra_body={"reasoning": {"max_tokens": 2000}}
)

message = response.choices[0].message

# Pass reasoning_details back EXACTLY as received — do not modify
messages = [
    {"role": "user", "content": "What should I wear in Boston today?"},
    {
        "role": "assistant",
        "content": message.content,
        "tool_calls": [
            {"id": tc.id, "type": "function", "function": tc.function}
            for tc in message.tool_calls
        ],
        # Keep reasoning_details intact — do not edit, reorder, or
        # strip the signature field
        "reasoning_details": message.reasoning_details
    },
    {
        "role": "tool",
        "tool_call_id": message.tool_calls[0].id,
        "content": '{"temperature": 45, "condition": "rainy"}'
    }
]

# Second call — model continues from where it left off
response2 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-sonnet-4.5",
    messages=messages,
    tools=tools,
    extra_body={"reasoning": {"max_tokens": 2000}}
)
The reasoning_details array contains signature fields that are cryptographically verified by the provider. If you serialize and deserialize these blocks (for example, storing them in a database), make sure no fields are dropped or altered. The entire sequence must match the original output exactly.
If you don’t need to preserve reasoning continuity across turns, you can simply omit the reasoning_details field from the assistant message. The model will start fresh reasoning for the next turn, which avoids signature validation entirely.
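If you take that route, strip the field consistently when rebuilding history. A minimal sketch, assuming plain-dict messages:

```python
def strip_reasoning(messages: list[dict]) -> list[dict]:
    """Remove reasoning_details from every turn so no signature is validated."""
    return [{k: v for k, v in m.items() if k != "reasoning_details"}
            for m in messages]

history = [
    {"role": "user", "content": "What should I wear in Boston today?"},
    {"role": "assistant", "content": "Checking the weather.",
     "reasoning_details": [{"type": "reasoning.text", "text": "...",
                            "signature": "sig"}]},
]
clean = strip_reasoning(history)
```

The trade-off: the model re-reasons from scratch each turn, which costs tokens but removes signature integrity from your storage requirements.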

Avoiding Invalid Thinking Signatures

When using models with extended thinking, the model returns thinking blocks with a signature field that cryptographically validates the thinking content. If you send these blocks back in a multi-turn conversation, the signature must be intact or the API will reject the request with:
messages.N.content.0: Invalid `signature` in `thinking` block

What causes the error

The signature field gets corrupted during serialization, storage, or reconstruction of the conversation history. The most common cause is null coercion — your database, ORM, or serialization layer converts the signature string to null. Common scenarios that corrupt signatures:
Scenario | What happens | Result
ORM/DB stores signature as a nullable column | Value becomes NULL on read | 400 error
JSON.parse → modify → JSON.stringify drops the field | signature key missing entirely | Silently accepted (but no continuity)
Manual reconstruction with signature: null | Explicit null value | 400 error
Signature string truncated (e.g. by a column length limit) | Partial signature | 400 error
Thinking blocks stripped entirely | No thinking in history | Silently accepted (fresh reasoning)

How to reproduce

There are three ways to trigger this error. All examples below use the Blackbox AI endpoint — get a valid response first, then corrupt the signature before sending it back. Setup (shared across all examples):
from openai import OpenAI

client = OpenAI(
    base_url="https://enterprise.blackbox.ai",  # the SDK appends /chat/completions
    api_key="<BLACKBOX_API_KEY>",
)

# Get a response with thinking — we'll corrupt it in different ways below
response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=[
        {"role": "user", "content": "What is 25 * 47?"}
    ],
    extra_body={"reasoning": {"max_tokens": 2000}}
)

msg = response.choices[0].message
original_reasoning = msg.reasoning_details  # Save the original
1. Null coercion — the most common cause. Your ORM, database, or serialization layer converts the signature string to null:
# Simulate: ORM reads signature column as NULL
corrupted = [{**block, "signature": None} for block in original_reasoning]

response2 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=[
        {"role": "user", "content": "What is 25 * 47?"},
        {"role": "assistant", "content": msg.content, "reasoning_details": corrupted},
        {"role": "user", "content": "Now multiply that by 2"}
    ],
    extra_body={"reasoning": {"max_tokens": 2000}}
)
# 400: "Invalid `signature` in `thinking` block"
2. Truncation — the signature string is cut short, e.g. by a VARCHAR(255) column that can’t hold the full signature (often 500–1500 characters):
# Simulate: database column truncates the signature to 100 characters
corrupted = [{**block, "signature": block["signature"][:100]} for block in original_reasoning]

response2 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=[
        {"role": "user", "content": "What is 25 * 47?"},
        {"role": "assistant", "content": msg.content, "reasoning_details": corrupted},
        {"role": "user", "content": "Now multiply that by 2"}
    ],
    extra_body={"reasoning": {"max_tokens": 2000}}
)
# 400: "Invalid `signature` in `thinking` block"
3. Character corruption — the signature is modified during encoding, URL-escaping, or manual string manipulation (e.g. replacing characters, re-encoding base64):
# Simulate: a few characters in the signature get changed
corrupted = []
for block in original_reasoning:
    sig = block.get("signature", "")
    if sig and len(sig) > 10:
        # Flip some characters in the middle of the signature
        sig = sig[:10] + "XXXXXX" + sig[16:]
    corrupted.append({**block, "signature": sig})

response2 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=[
        {"role": "user", "content": "What is 25 * 47?"},
        {"role": "assistant", "content": msg.content, "reasoning_details": corrupted},
        {"role": "user", "content": "Now multiply that by 2"}
    ],
    extra_body={"reasoning": {"max_tokens": 2000}}
)
# 400: "Invalid `signature` in `thinking` block"
Omitting reasoning_details entirely (or stripping thinking blocks completely) does not cause an error — the model simply starts fresh reasoning. The error only occurs when you include a thinking block with a corrupted signature value.

The fix

Option 1 (recommended): Pass thinking blocks back exactly as received. Do not serialize individual fields — store and return the entire reasoning_details array as an opaque blob:
from openai import OpenAI
import json

client = OpenAI(
    base_url="https://enterprise.blackbox.ai",  # the SDK appends /chat/completions
    api_key="<BLACKBOX_API_KEY>",
)

# First call — model reasons and responds
response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=[
        {"role": "user", "content": "What tools do I need to build a bookshelf?"}
    ],
    extra_body={"reasoning": {"max_tokens": 2000}}
)

msg = response.choices[0].message

# Store reasoning_details as a JSON blob — do NOT decompose into columns
# In your database: reasoning_details_json TEXT NOT NULL
stored_reasoning = json.dumps(msg.reasoning_details)  # Opaque blob

# When loading back, parse the blob directly
loaded_reasoning = json.loads(stored_reasoning)

# Second call — pass reasoning_details back untouched
messages = [
    {"role": "user", "content": "What tools do I need to build a bookshelf?"},
    {
        "role": "assistant",
        "content": msg.content,
        "reasoning_details": loaded_reasoning,  # Exact original data
    },
    {"role": "user", "content": "What about wood glue?"}
]

response2 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=messages,
    extra_body={"reasoning": {"max_tokens": 2000}}
)
Option 2: Strip thinking blocks entirely. If you cannot guarantee the signature will survive your storage pipeline, omit reasoning_details from previous assistant turns. The model will start fresh reasoning each turn:
# Simply don't include reasoning_details when building the message history
messages = [
    {"role": "user", "content": "What tools do I need to build a bookshelf?"},
    {
        "role": "assistant",
        "content": msg.content,
        # No reasoning_details — model will re-reason from scratch
    },
    {"role": "user", "content": "What about wood glue?"}
]

response2 = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=messages,
    extra_body={"reasoning": {"max_tokens": 2000}}
)
Database storage checklist:
  • Store reasoning_details as a single JSON/JSONB column — not as separate fields
  • Ensure the column has no character length limit (signatures can be 1000+ characters)
  • Do not use ORMs that auto-convert unknown fields to null
  • Test round-trip serialization: parse(stringify(reasoning_details)) should produce byte-identical output
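The last checklist item can run as a startup self-check. A sketch (the sample signature below is fabricated purely for the test):

```python
import json

def round_trip_is_safe(reasoning_details: list[dict]) -> bool:
    """True if serializing and reloading leaves the blob byte-identical."""
    blob = json.dumps(reasoning_details, ensure_ascii=False)
    restored = json.loads(blob)
    return (restored == reasoning_details
            and json.dumps(restored, ensure_ascii=False) == blob)

# Fabricated sample with a long signature, mimicking real block shape
sample = [{"type": "reasoning.text", "text": "thinking...",
           "signature": "erUBMki" + "A" * 1200}]
assert round_trip_is_safe(sample)
```

Run this against your actual storage pipeline (write, read back, compare), not just in-memory json, to catch ORM and column-type coercion.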
Thinking signatures work across providers (e.g. a response from blackboxai/anthropic/claude-opus-4.6 can be sent to blackboxai/vertex_ai/claude-opus-4.6 and vice versa). You do not need to pin the provider — only the content integrity matters.

GPT-5.4 Parameter Compatibility

GPT-5.4 (and other GPT-5 reasoning models) restrict certain parameters based on the reasoning_effort setting. Sending unsupported parameters will return a 400 error like:
gpt-5 models (including gpt-5-codex) don't support temperature=0.7.
Only temperature=1 is supported.

Which parameters are restricted

The following parameters are only supported when reasoning_effort is set to none:
Parameter | With reasoning_effort=none | With reasoning_effort=low/medium/high
temperature (values other than 1) | Supported | Rejected
top_p | Supported | Rejected
logprobs | Supported | Rejected
temperature=1 is always allowed regardless of reasoning effort. For full details, see OpenAI’s GPT-5.4 parameter documentation.
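A request builder can enforce this table before the API does. A sketch (the helper name is ours; a missing reasoning_effort is treated as "reasoning active", matching the first rejected example below):

```python
RESTRICTED_WITH_REASONING = {"temperature", "top_p", "logprobs"}

def sanitize_gpt5_params(params: dict) -> dict:
    """Drop sampling parameters that GPT-5 rejects when reasoning is active."""
    if params.get("reasoning_effort") == "none":
        return dict(params)  # everything is allowed
    cleaned = {k: v for k, v in params.items()
               if k not in RESTRICTED_WITH_REASONING}
    if params.get("temperature") == 1:
        cleaned["temperature"] = 1  # temperature=1 is always accepted
    return cleaned

print(sanitize_gpt5_params({"temperature": 0.7, "reasoning_effort": "low"}))
# {'reasoning_effort': 'low'}
```

Pass the cleaned dict as keyword arguments to `client.chat.completions.create` so the 400 never happens.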

What causes the error

Sending temperature, top_p, or logprobs with a reasoning effort other than none:
# This will be REJECTED — temperature=0.7 with reasoning enabled
response = client.chat.completions.create(
    model="blackboxai/openai/gpt-5.4",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.7,  # Not allowed when reasoning is active
)
# This will ALSO be REJECTED — explicit reasoning_effort + temperature
response = client.chat.completions.create(
    model="blackboxai/openai/gpt-5.4",
    messages=[{"role": "user", "content": "Hello"}],
    temperature=0.7,
    extra_body={"reasoning_effort": "low"},  # Any value except "none"
)

The fix

Option 1: Set reasoning_effort to none when you need sampling parameters. This disables reasoning and allows full control over temperature, top_p, and logprobs:
from openai import OpenAI

client = OpenAI(
    base_url="https://enterprise.blackbox.ai",
    api_key="<BLACKBOX_API_KEY>",
)

# /v1/responses — use reasoning.effort = "none"
response = client.responses.create(
    model="blackboxai/openai/gpt-5.4",
    input="Write a creative poem",
    max_output_tokens=200,
    temperature=0.7,
    reasoning={"effort": "none"},
)

# /chat/completions — use reasoning_effort = "none"
response = client.chat.completions.create(
    model="blackboxai/openai/gpt-5.4",
    messages=[{"role": "user", "content": "Write a creative poem"}],
    max_tokens=200,
    temperature=0.7,
    extra_body={"reasoning_effort": "none"},
)
Option 2: Remove sampling parameters when reasoning is enabled. If you want reasoning, don’t send temperature, top_p, or logprobs:
# Correct: reasoning enabled, no sampling parameters
response = client.responses.create(
    model="blackboxai/openai/gpt-5.4",
    input="Solve this step by step: what is 127 * 389?",
    max_output_tokens=500,
    reasoning={"effort": "medium"},
    # No temperature, top_p, or logprobs
)
This restriction applies to all GPT-5 reasoning models including gpt-5.4, gpt-5.3-codex, gpt-5.2-codex, and gpt-5. The only exception is temperature=1, which is always accepted.

Avoiding Thinking with Forced Tool Choice

When using extended thinking (reasoning), you cannot force the model to use a specific tool via tool_choice. Anthropic-based models require tool_choice to be "auto" (or omitted) when thinking/reasoning is enabled.

What causes the error

Setting tool_choice to "required", "any", or {"type": "function", "function": {"name": "..."}} while also enabling reasoning:
Error: "Thinking may not be enabled when tool_choice forces tool use."
# This will be REJECTED — forced tool_choice + reasoning
response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=[{"role": "user", "content": "What's the weather?"}],
    tools=[...],
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
    extra_body={"reasoning": {"max_tokens": 2000}}  # Conflict!
)
All of these tool_choice values are rejected when reasoning is enabled:
  • "required" — forces the model to call any tool
  • "any" — same as required
  • {"type": "function", "function": {"name": "..."}} — forces a specific tool
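A guard that normalizes tool_choice before the request can prevent the conflict. A sketch (the helper name is ours):

```python
def safe_tool_choice(tool_choice, reasoning_enabled: bool):
    """Downgrade any forcing tool_choice to "auto" when reasoning is on."""
    forces_tool_use = tool_choice in ("required", "any") or (
        isinstance(tool_choice, dict) and tool_choice.get("type") == "function"
    )
    if reasoning_enabled and forces_tool_use:
        return "auto"
    return tool_choice

print(safe_tool_choice("required", reasoning_enabled=True))  # auto
```

Note this silently changes behavior: the model may now answer without calling a tool, so log the downgrade if forced tool use matters to your application.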

The fix

Use tool_choice: "auto" (or omit it entirely) when reasoning is enabled. The model will still call tools when appropriate — thinking models are highly capable of deciding when to use tools on their own:
from openai import OpenAI

client = OpenAI(
    base_url="https://enterprise.blackbox.ai",  # the SDK appends /chat/completions
    api_key="<BLACKBOX_API_KEY>",
)

# Correct: use tool_choice="auto" with reasoning
response = client.chat.completions.create(
    model="blackboxai/anthropic/claude-opus-4.6",
    messages=[
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"]
            }
        }
    }],
    tool_choice="auto",  # Must be "auto" when reasoning is enabled
    extra_body={"reasoning": {"max_tokens": 2000}}
)
If you absolutely need to force a specific tool, disable reasoning for that request. You can re-enable reasoning on subsequent turns once the forced tool call is complete.