
GPT-5.4

OpenAI's most capable frontier model with native computer-use capabilities, Tool Search for large tool ecosystems, and the longest context window in the GPT family at 1.05 million tokens.

Model Story

GPT-5.4 was released by OpenAI on March 5, 2026, as the culmination of the GPT-5 series that began rolling out in late 2025. Positioned as OpenAI's "most capable and efficient frontier model for professional work," GPT-5.4 combines the reasoning advances of the o-series models with the general-purpose versatility of the GPT line, unified into a single architecture with selectable reasoning effort levels.

The defining feature of GPT-5.4 is its native computer-use capability. Unlike earlier models that relied on external scaffolding to interact with operating systems, GPT-5.4 was trained end-to-end to perceive screenshots, navigate interfaces, and execute actions. On the OSWorld-Verified benchmark, which measures completion of real computer tasks, GPT-5.4 achieves 75.0%, surpassing the human baseline of 72.4%. This makes it the first general-purpose model to deliver state-of-the-art computer use for agentic workloads.
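The perceive-and-act cycle described above can be sketched as a simple loop. Everything here is illustrative: `fake_model` is a stand-in for a computer-use model call, and the action vocabulary is a hypothetical minimum, not OpenAI's actual interface.

```python
import base64
from dataclasses import dataclass

@dataclass
class Action:
    kind: str   # e.g. "click", "type", "done"
    args: dict

def fake_model(screenshot_b64: str, goal: str, step: int) -> Action:
    """Stand-in for a computer-use model call; returns a canned plan."""
    plan = [Action("click", {"x": 120, "y": 300}),
            Action("type", {"text": goal}),
            Action("done", {})]
    return plan[min(step, len(plan) - 1)]

def agent_loop(goal, take_screenshot, execute, max_steps=10):
    """Perceive-act loop: screenshot -> model -> action, until 'done'."""
    for step in range(max_steps):
        shot = base64.b64encode(take_screenshot()).decode()
        action = fake_model(shot, goal, step)
        if action.kind == "done":
            return step          # number of actions executed
        execute(action)
    return max_steps

steps = agent_loop("search for invoices",
                   take_screenshot=lambda: b"\x89PNG-stub",
                   execute=lambda a: None)
print(steps)  # 2
```

A production loop would add error recovery and step budgets, but the structure (capture state, ask the model for one action, apply it, repeat) is the same.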

Another key innovation is Tool Search. Organizations building large internal tool ecosystems previously had to load all tool definitions into the context window, consuming precious tokens. GPT-5.4 can efficiently search and retrieve tool definitions on demand, enabling integration with thousands of internal APIs without context bloat.
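The core idea of Tool Search, retrieving only matching tool definitions instead of loading the entire catalogue into context, can be sketched with a local keyword index. The registry, tool names, and ranking here are hypothetical stand-ins, not OpenAI's API:

```python
from dataclasses import dataclass

@dataclass
class ToolDef:
    name: str
    description: str
    schema: dict  # JSON-schema-style parameter spec

# Hypothetical internal tool catalogue; only the matching definitions
# would be placed in the model's context, not all of them.
REGISTRY = [
    ToolDef("crm_lookup", "Fetch a customer record from the CRM",
            {"customer_id": "string"}),
    ToolDef("invoice_create", "Create a draft invoice",
            {"customer_id": "string", "amount": "number"}),
    ToolDef("ticket_search", "Search support tickets by keyword",
            {"query": "string"}),
]

def search_tools(query: str, registry=REGISTRY, limit=2):
    """Rank tools by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = []
    for tool in registry:
        text = f"{tool.name} {tool.description}".lower()
        score = sum(1 for t in terms if t in text)
        if score:
            scored.append((score, tool))
    scored.sort(key=lambda pair: -pair[0])
    return [tool for _, tool in scored[:limit]]

matches = search_tools("search support tickets")
print([t.name for t in matches])  # ['ticket_search']
```

With thousands of tools, the context cost becomes proportional to the handful of retrieved definitions rather than the size of the whole catalogue.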

For Faraday Machines customers, GPT-5.4 is typically deployed in a hybrid architecture: local open-source models handle sensitive internal data, while GPT-5.4 is called via API for computer-use automation, web research, and tasks requiring its broad knowledge base. The model's availability through Azure OpenAI Service also enables data residency guarantees for Canadian enterprises.

Key Specifications

Developer: OpenAI
Release Date: March 5, 2026
Architecture: Undisclosed (proprietary)
Context Window: 1,050,000 tokens
Max Output: 128,000 tokens
License: Proprietary (API / enterprise license)
Knowledge Cutoff: August 31, 2025
Modalities: Text and image input; text output
Special Features: Native computer use, Tool Search, reasoning effort levels
Availability: ChatGPT Plus/Team/Pro; OpenAI API; Azure OpenAI; Codex

API Pricing

Standard Input: $2.50 per 1M tokens
Standard Output: $15.00 per 1M tokens
Cached Input: $0.25 per 1M tokens

GPT-5.4 Pro is priced at $30 input / $180 output per 1M tokens for the highest capability tier. Long context prompts exceeding 272K tokens incur a 2x input and 1.5x output surcharge across all tiers. Batch and Flex processing offer 50% discounts. Priority processing costs 2x standard rates. Regional data residency adds a 10% uplift.
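The standard-tier rates and modifiers above can be combined into a simple cost estimator. This is an illustrative sketch: in particular, whether the long-context surcharge also applies to cached input is an assumption, flagged in the code.

```python
def gpt54_cost(input_tokens, output_tokens, cached_tokens=0,
               long_context=False, batch=False, priority=False,
               residency=False):
    """Estimate a standard-tier GPT-5.4 request cost in USD from the
    published per-1M-token rates. Illustrative only."""
    in_rate, out_rate, cached_rate = 2.50, 15.00, 0.25
    if long_context:            # prompt exceeds 272K tokens
        in_rate *= 2.0
        cached_rate *= 2.0      # assumption: surcharge covers cached input
        out_rate *= 1.5
    cost = (input_tokens * in_rate
            + cached_tokens * cached_rate
            + output_tokens * out_rate) / 1_000_000
    if batch:                   # batch / Flex processing discount
        cost *= 0.5
    if priority:                # priority processing premium
        cost *= 2.0
    if residency:               # regional data residency uplift
        cost *= 1.10
    return round(cost, 6)

# A 400K-token long-context prompt with 20K output tokens:
print(gpt54_cost(400_000, 20_000, long_context=True))  # 2.45
```

Swapping in the Pro-tier rates ($30 / $180) is a one-line change to `in_rate` and `out_rate`.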

Benchmarks

SWE-bench Pro: 57.7% (complex software tasks)
OSWorld-Verified: 75.0% (computer use; beats the 72.4% human baseline)
BrowseComp: 82.7% (web browsing; GPT-5.4 Pro: 89.3%)

On-Premises Considerations

GPT-5.4 is a fully proprietary model with no open-weight release, which means it cannot be self-hosted on Faraday Machines hardware directly. However, Faraday supports several hybrid deployment patterns that preserve data privacy while leveraging GPT-5.4's unique capabilities.

For Canadian organizations with data residency requirements, Azure OpenAI Service offers GPT-5.4 deployment in Canadian datacenters with private networking. Faraday Machines can architect pipelines where sensitive data is processed locally by Kimi or GLM, with sanitized or abstracted queries sent to GPT-5.4 for computer-use automation, web research, or broad knowledge tasks.
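One way to express the split described above is a routing policy: queries touching sensitive identifiers stay on the local model, while everything else is redacted before going to the API. The patterns, model names, and redaction rules below are placeholder assumptions, not Faraday's actual pipeline:

```python
import re

# Hypothetical sensitivity rules; a real deployment would use a
# proper PII classifier rather than two regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{3}-\d{3}\b"),      # e.g. SIN-formatted numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),    # email addresses
]

def redact(text: str) -> str:
    """Mask any sensitive spans before a query leaves the premises."""
    for pat in SENSITIVE_PATTERNS:
        text = pat.sub("[REDACTED]", text)
    return text

def route(query: str):
    """Return (model, payload): sensitive queries stay on the local
    model verbatim; everything else is redacted and sent to the API."""
    if any(pat.search(query) for pat in SENSITIVE_PATTERNS):
        return ("local-model", query)
    return ("gpt-5.4-api", redact(query))

print(route("Summarize the ticket from ops@example.com"))
```

Under this policy the first query routes to the local model because it contains an email address, while a generic research question would go to the API unchanged.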

For organizations evaluating the total cost of ownership, Faraday's TCO calculator includes GPT-5.4 API pricing alongside local model costs, making it easy to compare hybrid architectures against pure cloud or pure on-premises approaches.