The Responses API supports tool calling to give models access to external functions. Define tools in your request with a name, description, and JSON schema for parameters. When the model determines it needs a tool to answer the user’s question, it returns a function_call output with the tool name and arguments for you to execute.
The Responses API is best supported on the Enterprise plan. Use https://enterprise.blackbox.ai as the base URL for full model availability and production reliability. The API is also available on standard plans at https://api.blackbox.ai, where it is currently experimental.
Important — Tool Format: The Responses API uses a flat tool structure where name, description, and parameters are at the top level. This is different from the Chat Completions API, which nests them under function. Using the Chat Completions nested format on the Responses API will result in an invalid_request_error. See the Chat Completions tool calling docs if you are using /chat/completions instead.
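As a sketch, here is the same get_weather tool in both shapes: the flat format this endpoint expects, and the nested Chat Completions format it rejects.

```python
# Flat format — the Responses API expects name, description,
# and parameters at the top level of the tool object.
responses_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather in a location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

# Nested Chat Completions format — sending this shape to /v1/responses
# results in an invalid_request_error.
chat_completions_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a location",
        "parameters": responses_tool["parameters"],
    },
}
```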

Basic Tool Calling

const response = await fetch('https://enterprise.blackbox.ai/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    instructions: 'You are a helpful assistant. Use tools when appropriate.',
    input: [
      {
        type: 'message',
        role: 'user',
        content: 'What is the weather like in New York?',
      },
    ],
    tools: [
      {
        type: 'function',
        name: 'get_weather',
        description: 'Get the current weather in a location',
        parameters: {
          type: 'object',
          properties: {
            location: {
              type: 'string',
              description: 'The city and state, e.g. San Francisco, CA',
            },
          },
          required: ['location'],
        },
      },
    ],
    tool_choice: 'auto',
  }),
});

const data = await response.json();
console.log(data);

Tool Call Response

When the model decides to call a tool, the response includes a function_call output:
{
  "output": [
    {
      "type": "function_call",
      "name": "get_weather",
      "arguments": "{\"location\": \"New York, NY\"}",
      "call_id": "call_abc123"
    }
  ]
}
Parse the arguments and execute your function, then send the result back in a follow-up request:
// Parse and execute the function call
const functionCall = data.output[0];
const args = JSON.parse(functionCall.arguments);
const weatherResult = await getWeather(args.location);

// Send the result back
const followUp = await fetch('https://enterprise.blackbox.ai/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    instructions: 'You are a helpful assistant. Use tools when appropriate.',
    input: [
      {
        type: 'message',
        role: 'user',
        content: 'What is the weather like in New York?',
      },
      {
        type: 'function_call',
        call_id: functionCall.call_id,
        name: functionCall.name,
        arguments: functionCall.arguments,
      },
      {
        type: 'function_call_output',
        call_id: functionCall.call_id,
        output: JSON.stringify(weatherResult),
      },
    ],
    tools: [
      {
        type: 'function',
        name: 'get_weather',
        description: 'Get the current weather in a location',
        parameters: {
          type: 'object',
          properties: {
            location: { type: 'string', description: 'The city and state' },
          },
          required: ['location'],
        },
      },
    ],
  }),
});

const finalData = await followUp.json();
console.log(finalData.output[0].content[0].text);

Multi-Turn Tool Calling

In a multi-turn conversation, you build up the full history — user messages, model outputs (including function_call items), and tool results (function_call_output items) — and send it with each request. The Responses API is stateless, so every request must contain the complete conversation. This example walks through a two-turn exchange where the model calls a tool in the first turn, receives the result, then calls a second tool in the second turn before giving a final answer.
Codex models (gpt-5.3-codex, gpt-5.2-codex): These models always operate under Zero Data Retention (store: false), so the include: ["reasoning.encrypted_content"] option is always enabled, even if you do not provide it. Encrypted reasoning tokens are returned in every response and can be passed back in subsequent requests to maintain reasoning continuity without any server-side storage. See Zero Data Retention for details.
This section breaks down what happens at each turn so you can see exactly how messages flow.

Turn 1 — Initial request

Send the user’s message and tool definitions:
const response1 = await fetch('https://enterprise.blackbox.ai/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    instructions: 'You are a helpful assistant. Use tools when appropriate.',
    input: [
      {
        type: 'message',
        role: 'user',
        content: 'Read the file config.json and tell me what environment it is configured for.',
      },
    ],
    tools: TOOLS,
    tool_choice: 'auto',
  }),
});

const turn1 = await response1.json();
// turn1.output[0] → { type: 'function_call', name: 'read_file', call_id: 'call_1', arguments: '{"path":"config.json"}' }

Turn 1 — Execute the tool and send the result back

Append the model’s function_call output and your function_call_output to the history, then make the next request:
const fc1 = turn1.output[0]; // the function_call item
const fileContents = readFile(JSON.parse(fc1.arguments).path); // your implementation

const response2 = await fetch('https://enterprise.blackbox.ai/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    instructions: 'You are a helpful assistant. Use tools when appropriate.',
    input: [
      // Original user message
      { type: 'message', role: 'user', content: 'Read the file config.json and tell me what environment it is configured for.' },
      // Model's function_call from turn 1 — must be included verbatim
      { type: 'function_call', name: fc1.name, call_id: fc1.call_id, arguments: fc1.arguments },
      // Your tool result
      { type: 'function_call_output', call_id: fc1.call_id, output: fileContents },
    ],
    tools: TOOLS,
    tool_choice: 'auto',
  }),
});

const turn2 = await response2.json();
If the model calls another tool (e.g. search_file), repeat the same pattern — append its function_call, execute it, append the function_call_output, and send again. When the model returns a message output with no further function calls, the conversation is complete.

Conversation history shape

After two tool calls, the input array you send looks like this:
[
  { "type": "message", "role": "user", "content": "Read config.json and tell me the environment." },
  { "type": "function_call", "name": "read_file", "call_id": "call_1", "arguments": "{\"path\":\"config.json\"}" },
  { "type": "function_call_output", "call_id": "call_1", "output": "{\"env\":\"production\",\"debug\":false}" },
  { "type": "function_call", "name": "search_file", "call_id": "call_2", "arguments": "{\"path\":\"config.json\",\"pattern\":\"env\"}" },
  { "type": "function_call_output", "call_id": "call_2", "output": "1: {\"env\":\"production\"}" },
  { "type": "message", "role": "assistant", "content": "The file is configured for the production environment with debug mode disabled." }
]
Always include every function_call item from the model’s output array verbatim in the next request’s input. Omitting any item will cause the API to reject the conversation as malformed.
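A simple way to catch a malformed history before sending it is a small helper (hypothetical, not part of any SDK) that checks every function_call in the input array is immediately followed by a function_call_output with the same call_id:

```python
def validate_history(input_items: list) -> list:
    """Return a list of problems; an empty list means the history is well-formed."""
    problems = []
    for i, item in enumerate(input_items):
        if item.get("type") != "function_call":
            continue
        nxt = input_items[i + 1] if i + 1 < len(input_items) else None
        if nxt is None or nxt.get("type") != "function_call_output":
            # Tool results must appear immediately after their function_call
            problems.append(f"{item['call_id']}: no function_call_output follows")
        elif nxt.get("call_id") != item.get("call_id"):
            problems.append(f"{item['call_id']}: paired with wrong call_id {nxt.get('call_id')}")
    return problems
```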

Multi-Turn with User Messages

After returning a tool result, you can append a new user message in the same request to continue the conversation. This lets the user ask follow-up questions based on the tool’s output without starting over.

Turn 1 — User asks a question

const response1 = await fetch('https://enterprise.blackbox.ai/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    instructions: 'You are a helpful assistant. Use tools when appropriate.',
    input: [
      { type: 'message', role: 'user', content: 'What is the weather like in New York?' },
    ],
    tools: TOOLS,
    tool_choice: 'auto',
  }),
});

const turn1 = await response1.json();
// turn1.output[0] → { type: 'function_call', name: 'get_weather', call_id: 'call_abc', arguments: '{"location":"New York, NY"}' }

Turn 2 — Return tool result and add a follow-up user message

Append the model’s function_call, your function_call_output, and the new user message together in input:
const fc = turn1.output[0];
const weatherResult = await getWeather(JSON.parse(fc.arguments).location);

const response2 = await fetch('https://enterprise.blackbox.ai/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.BLACKBOX_API_KEY}`,
  },
  body: JSON.stringify({
    model: 'blackboxai/openai/gpt-5.3-codex',
    instructions: 'You are a helpful assistant. Use tools when appropriate.',
    input: [
      // Original user message
      { type: 'message', role: 'user', content: 'What is the weather like in New York?' },
      // Model's function_call — included verbatim
      { type: 'function_call', name: fc.name, call_id: fc.call_id, arguments: fc.arguments },
      // Tool result
      { type: 'function_call_output', call_id: fc.call_id, output: JSON.stringify(weatherResult) },
      // Follow-up user message
      { type: 'message', role: 'user', content: 'Is that good weather for a picnic?' },
    ],
    tools: TOOLS,
    tool_choice: 'auto',
  }),
});

const turn2 = await response2.json();
console.log(turn2.output[0].content[0].text);
// "Yes — that's great picnic weather. At 72°F and sunny with moderate humidity..."
The conversation history sent in Turn 2 looks like this:
[
  { "type": "message", "role": "user", "content": "What is the weather like in New York?" },
  { "type": "function_call", "name": "get_weather", "call_id": "call_abc", "arguments": "{\"location\":\"New York, NY\"}" },
  { "type": "function_call_output", "call_id": "call_abc", "output": "{\"temperature\":\"72°F\",\"condition\":\"Sunny\",\"humidity\":\"45%\"}" },
  { "type": "message", "role": "user", "content": "Is that good weather for a picnic?" }
]
The new user message goes after the function_call_output, not before it. The API processes the input array in order and expects tool results to appear immediately after their corresponding function_call.

Tool Choice Options

Control how the model uses tools with the tool_choice parameter:
Value         Behavior
"auto"        The model decides whether to call a tool
"required"    The model must call at least one tool
"none"        The model cannot call any tools

Request Parameters

tools
array
required
Array of tool definitions. Each tool object contains:
  • type: Always "function"
  • name: The function name the model will use
  • description: Describes when and how to use this tool
  • parameters: JSON Schema object defining the function’s parameters
tool_choice
string | object
Controls tool usage. Set to "auto" (default), "required", or "none". To force a specific tool, pass {"type": "function", "name": "tool_name"}.
parallel_tool_calls
boolean
When true, the model may call multiple tools simultaneously. Default: true
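For example, a request body sketch that forces the model to call get_weather and limits it to one tool call per response (payload only, using the parameters documented above):

```python
payload = {
    "model": "blackboxai/openai/gpt-5.3-codex",
    "input": [
        {"type": "message", "role": "user", "content": "What is the weather in Boston?"},
    ],
    "tools": [{
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather in a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }],
    # Object form of tool_choice: force this specific tool
    "tool_choice": {"type": "function", "name": "get_weather"},
    # Disable parallel calls so at most one tool is called per response
    "parallel_tool_calls": False,
}
```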

Use Case: Coding Agent

A coding agent gives the model a set of file system and terminal tools and runs an agentic loop — calling the API, executing whatever tools the model requests, and feeding the results back — until the model returns a plain text response with no further tool calls. Define seven SWE tools:
Python
TOOLS = [
    {
        "type": "function",
        "name": "read_file",
        "description": "Read the full contents of a file at the given path.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File path to read"},
            },
            "required": ["path"],
        },
    },
    {
        "type": "function",
        "name": "write_file",
        "description": "Write content to a file, creating it if it doesn't exist.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File path to write to"},
                "content": {"type": "string", "description": "Full content to write"},
            },
            "required": ["path", "content"],
        },
    },
    {
        "type": "function",
        "name": "edit_file",
        "description": "Replace the first occurrence of old_string with new_string in a file.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "old_string": {"type": "string", "description": "Exact string to find"},
                "new_string": {"type": "string", "description": "Replacement string"},
            },
            "required": ["path", "old_string", "new_string"],
        },
    },
    {
        "type": "function",
        "name": "search_file",
        "description": "Search for a regex pattern in a file and return matching lines with line numbers.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string"},
                "pattern": {"type": "string", "description": "Regex pattern to search for"},
            },
            "required": ["path", "pattern"],
        },
    },
    {
        "type": "function",
        "name": "execute_command",
        "description": (
            "Run a shell command and return its output. Use this to execute scripts, "
            "run tests, install packages, compile code, or inspect the environment."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "command": {"type": "string", "description": "Shell command to execute"},
                "working_directory": {
                    "type": "string",
                    "description": "Directory to run the command in (default: current directory)",
                },
            },
            "required": ["command"],
        },
    },
    {
        "type": "function",
        "name": "list_directory",
        "description": "List the files and subdirectories in a directory.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "Directory path to list (default: current directory)"},
            },
            "required": [],
        },
    },
    {
        "type": "function",
        "name": "glob_files",
        "description": "Find files matching a glob pattern, e.g. '**/*.py' or 'src/**/*.ts'.",
        "parameters": {
            "type": "object",
            "properties": {
                "pattern": {"type": "string", "description": "Glob pattern to match files against"},
                "directory": {"type": "string", "description": "Root directory for the search (default: current directory)"},
            },
            "required": ["pattern"],
        },
    },
]
Then run the agentic loop:
Python
import os, json, requests

API_KEY = os.environ["BLACKBOX_API_KEY"]
BASE_URL = "https://enterprise.blackbox.ai/v1/responses"
MODEL = "blackboxai/openai/gpt-5.3-codex"


def run_agent(task: str, max_iterations: int = 10) -> str:
    messages = [
        {
            "type": "message",
            "role": "system",
            "content": (
                "You are a coding assistant with access to file system and terminal tools. "
                "Use the tools to read, write, edit, search files, run terminal commands, "
                "list directories, and find files to complete the task. "
                "When done, summarize what you did."
            ),
        },
        {"type": "message", "role": "user", "content": task},
    ]

    for _ in range(max_iterations):
        data = requests.post(
            BASE_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={"model": MODEL, "input": messages, "tools": TOOLS, "tool_choice": "auto"},
            timeout=60,
        ).json()

        outputs = data.get("output", [])
        function_calls = [o for o in outputs if o.get("type") == "function_call"]

        if not function_calls:
            # No more tool calls — agent is done; return the final text
            for out in outputs:
                for part in out.get("content", []):
                    if isinstance(part, dict) and part.get("type") == "output_text":
                        return part["text"]
            return ""  # finished, but no text output was produced

        # Append model outputs to conversation
        messages.extend(outputs)

        # Execute each tool call and return results
        for fc in function_calls:
            args = json.loads(fc.get("arguments", "{}"))
            result = execute_tool(fc["name"], args)          # your dispatch function
            messages.append({
                "type": "function_call_output",
                "call_id": fc["call_id"],
                "output": result,
            })

    return "Max iterations reached."
The agent loop continues until the model returns a response with no function_call outputs. Always set a max_iterations guard to prevent runaway loops.
Example tasks this agent handles:
  • "Read main.py and tell me what the entry point function does."
  • "Write a file /tmp/utils.py with a helper function for parsing JSON, then read it back to confirm."
  • "Search app.py for all lines containing 'TODO' and list their line numbers."
  • "Edit config.py: replace DEBUG = False with DEBUG = True, then verify the change."
  • "Run python3 tests/test_api.py and report any failures."
  • "List the project root and find all TypeScript files under src/."

Tool Calling with Encrypted Reasoning (ZDR)

Codex models (gpt-5.3-codex, gpt-5.2-codex) always operate under Zero Data Retention: store is enforced to false and no data is persisted between requests. To support stateless multi-turn tool calling, the include: ["reasoning.encrypted_content"] option is always enabled for these models, even if you do not provide it. This means every response includes reasoning output items with an encrypted_content field. When building multi-turn conversations, pass these reasoning items back verbatim in the next request’s input array. The encrypted content is decrypted in memory to generate the next response and then securely discarded; no intermediate state is ever persisted.

Basic Example — Weather Tool with Encrypted Reasoning

from openai import OpenAI
import json

client = OpenAI(
    api_key="YOUR_BLACKBOX_API_KEY",
    base_url="https://enterprise.blackbox.ai/v1",
)

tools = [{
    "type": "function",
    "name": "get_weather",
    "description": "Get current temperature for provided coordinates in celsius.",
    "parameters": {
        "type": "object",
        "properties": {
            "latitude": {"type": "number"},
            "longitude": {"type": "number"}
        },
        "required": ["latitude", "longitude"],
        "additionalProperties": False
    },
    "strict": True
}]

# Turn 1: Ask the model — it will call the tool
input_items = [{"role": "user", "content": "What's the weather like in Paris today?"}]

response = client.responses.create(
    model="blackboxai/openai/gpt-5.3-codex",
    input=input_items,
    tools=tools,
    include=["reasoning.encrypted_content"],
)

# Append ALL output items (reasoning + function_call) to context
for item in response.output:
    input_items.append(item.model_dump())

# Find the function call and execute it
tool_call = next(item for item in response.output if item.type == "function_call")
args = json.loads(tool_call.arguments)
result = get_weather(args["latitude"], args["longitude"])  # your implementation

# Append the tool result
input_items.append({
    "type": "function_call_output",
    "call_id": tool_call.call_id,
    "output": str(result),
})

# Turn 2: Send everything back — model uses decrypted reasoning to respond
response_2 = client.responses.create(
    model="blackboxai/openai/gpt-5.3-codex",
    input=input_items,
    tools=tools,
    include=["reasoning.encrypted_content"],
)

print(response_2.output_text)

Multi-Turn Agentic Loop with Encrypted Reasoning

For agentic workflows where the model may call multiple tools across several turns, use a loop that automatically handles encrypted reasoning items:
from openai import OpenAI
import json

client = OpenAI(
    api_key="YOUR_BLACKBOX_API_KEY",
    base_url="https://enterprise.blackbox.ai/v1",
)

tools = [
    {
        "type": "function",
        "name": "read_file",
        "description": "Read the contents of a file at the given path",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {"type": "string", "description": "File path to read"}
            },
            "required": ["path"]
        }
    },
    {
        "type": "function",
        "name": "execute_command",
        "description": "Run a shell command and return its output",
        "parameters": {
            "type": "object",
            "properties": {
                "command": {"type": "string", "description": "Shell command to execute"}
            },
            "required": ["command"]
        }
    }
]

input_items = [
    {"role": "user", "content": "Read main.py and run the tests."}
]

for turn in range(10):
    response = client.responses.create(
        model="blackboxai/openai/gpt-5.3-codex",
        input=input_items,
        tools=tools,
        include=["reasoning.encrypted_content"],
    )

    # Append ALL output items — reasoning items with encrypted_content
    # are included automatically and must be passed back verbatim
    for item in response.output:
        input_items.append(item.model_dump())

    # Find tool calls
    tool_calls = [
        item for item in response.output if item.type == "function_call"
    ]

    if not tool_calls:
        # No tool calls — model responded with text, done
        print(response.output_text)
        break

    # Execute each tool and append results
    for tc in tool_calls:
        args = json.loads(tc.arguments)
        result = execute_tool(tc.name, args)  # your dispatch function
        input_items.append({
            "type": "function_call_output",
            "call_id": tc.call_id,
            "output": str(result),
        })
Always include all output items from each response — including reasoning items with encrypted_content — when building the input for the next turn. Omitting reasoning items will break the model’s chain of thought and may degrade response quality.

Next Steps

Text Generation

Learn the basics of the Responses API

Streaming

Stream tool calls in real time as they are generated

Best Practices

Tool call IDs, pairing tool results, reasoning signatures, and more

Zero Data Retention

Learn more about ZDR and encrypted reasoning for Codex models