
Claude 4.5 Haiku (Reasoning) vs GPT-5.4 mini (Non-Reasoning)

Side-by-side comparison of pricing, 12 benchmarks, and generation speed.

Anthropic: Claude 4.5 Haiku (Reasoning)
Input: $1/M
Output: $5/M
Speed: 139 tok/s
TTFT: 9.86s

OpenAI: GPT-5.4 mini (Non-Reasoning)
Input: $0.75/M
Output: $4.50/M
Speed: 202 tok/s
TTFT: 0.45s

Winner by Category

Cheaper: GPT-5.4 mini (Non-Reasoning)
Faster (tok/s): GPT-5.4 mini (Non-Reasoning)
Lower Latency: GPT-5.4 mini (Non-Reasoning)
Benchmarks (11-0): Claude 4.5 Haiku (Reasoning)

Pricing Comparison

Metric                Claude 4.5 Haiku (Reasoning)   GPT-5.4 mini (Non-Reasoning)
Input ($/M tokens)    $1.00                          $0.75
Output ($/M tokens)   $5.00                          $4.50

Cost for 1M input + 100K output tokens:
Claude 4.5 Haiku (Reasoning): $1.50
GPT-5.4 mini (Non-Reasoning): $1.20
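The workload costs above follow directly from the per-token prices; a minimal sketch of the arithmetic, with the prices hard-coded from the table:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_price_per_m: float, output_price_per_m: float) -> float:
    """Dollar cost of a workload, given prices quoted in $ per million tokens."""
    return (input_tokens * input_price_per_m +
            output_tokens * output_price_per_m) / 1_000_000

# 1M input + 100K output tokens
claude = request_cost(1_000_000, 100_000, 1.00, 5.00)
gpt = request_cost(1_000_000, 100_000, 0.75, 4.50)
print(f"Claude 4.5 Haiku (Reasoning): ${claude:.2f}")  # $1.50
print(f"GPT-5.4 mini (Non-Reasoning): ${gpt:.2f}")     # $1.20
```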

Speed Comparison

Output Speed (tokens/s) — higher is better
Claude 4.5 Haiku (Reasoning): 139 tok/s
GPT-5.4 mini (Non-Reasoning): 202 tok/s

Time to First Token (seconds) — lower is better
Claude 4.5 Haiku (Reasoning): 9.86s
GPT-5.4 mini (Non-Reasoning): 0.45s
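TTFT and output speed combine into the total time a user waits for a full response. A rough model of that, assuming a constant decode rate and using the figures above (illustrative only; real latency varies with load and prompt size):

```python
def total_time(ttft_s: float, tok_per_s: float, n_tokens: int) -> float:
    """Seconds until a full response arrives: first-token latency + decode time."""
    return ttft_s + n_tokens / tok_per_s

# Time to stream a 1,000-token response
print(f"Claude 4.5 Haiku (Reasoning): {total_time(9.86, 139, 1000):.1f}s")  # 17.1s
print(f"GPT-5.4 mini (Non-Reasoning): {total_time(0.45, 202, 1000):.1f}s")  # 5.4s
```

The gap is dominated by TTFT: even at identical decode speeds, the 9.4s difference in first-token latency would persist.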

Benchmark Comparison

Data from Artificial Analysis API — 12 benchmarks

Benchmark                Claude 4.5 Haiku (Reasoning)   GPT-5.4 mini (Non-Reasoning)
Intelligence Index       37.1                           23.3
Coding Index             32.6                           25.3
Math Index               83.7                           N/A
GPQA Diamond             67.2%                          60.6%
MMLU-Pro                 76.0%                          N/A
LiveCodeBench            61.5%                          N/A
AIME 2025                83.7%                          N/A
MATH-500                 N/A                            N/A
Humanity's Last Exam     9.7%                           5.7%
SciCode                  43.3%                          39.6%
IFBench                  54.3%                          38.8%
TerminalBench            27.3%                          18.2%

Score: Claude 4.5 Haiku (Reasoning) 11 wins, GPT-5.4 mini (Non-Reasoning) 0 wins. (N/A = no score reported.)

Frequently Asked Questions

Which is cheaper, Claude 4.5 Haiku (Reasoning) or GPT-5.4 mini (Non-Reasoning)?

GPT-5.4 mini (Non-Reasoning) is cheaper overall. Its blended price (3:1 input/output ratio) is $1.69/M tokens vs $2.00/M for Claude 4.5 Haiku (Reasoning).
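The blended price averages input and output rates with a 3:1 weighting; a quick check of the figures quoted above:

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_weight: int = 3, output_weight: int = 1) -> float:
    """Weighted-average $/M-token price for a 3:1 input/output token mix."""
    return ((input_weight * input_per_m + output_weight * output_per_m)
            / (input_weight + output_weight))

print(blended_price(1.00, 5.00))  # 2.0     -> $2.00/M for Claude 4.5 Haiku
print(blended_price(0.75, 4.50))  # 1.6875  -> ~$1.69/M for GPT-5.4 mini
```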

Which model performs better on benchmarks?

Claude 4.5 Haiku (Reasoning) wins 11 out of 12 benchmarks compared to 0 for GPT-5.4 mini (Non-Reasoning). See the detailed benchmark chart above for per-category results.

Which is faster for real-time applications?

GPT-5.4 mini (Non-Reasoning) generates tokens faster at 202 tok/s vs 139 tok/s, and it also has a much lower time-to-first-token (0.45s vs 9.86s), making it the stronger choice for real-time applications.

When should I use Claude 4.5 Haiku (Reasoning) vs GPT-5.4 mini (Non-Reasoning)?

Choose based on your priorities: GPT-5.4 mini (Non-Reasoning) for lower cost, faster generation, and lower latency; Claude 4.5 Haiku (Reasoning) for stronger benchmark performance. For latency-sensitive apps, check the TTFT comparison above.