Claude 4.5 Haiku (Non-reasoning) vs DeepSeek R1 0528 (May '25)

Side-by-side comparison of pricing, 12 benchmarks, and generation speed.

Anthropic

Claude 4.5 Haiku (Non-reasoning)

Input
$1/M
Output
$5/M
Speed
103 tok/s
TTFT
0.47s
DeepSeek

DeepSeek R1 0528 (May '25)

Input
$1.35/M
Output
$5.4/M
Speed
Not available
TTFT
Not available

Winner by Category

Cheaper
Claude 4.5 Haiku (Non-reasoning)
Faster (tok/s)
Claude 4.5 Haiku (Non-reasoning)
Lower Latency
Not determined (no TTFT data reported for DeepSeek R1 0528 (May '25))
Benchmarks (8 of 12)
DeepSeek R1 0528 (May '25)

Pricing Comparison

Metric | Claude 4.5 Haiku (Non-reasoning) | DeepSeek R1 0528 (May '25)
Input ($/M tokens) | $1.00 | $1.35
Output ($/M tokens) | $5.00 | $5.40

Cost for 1M input + 100K output tokens:
Claude 4.5 Haiku (Non-reasoning): $1.50
DeepSeek R1 0528 (May '25): $1.89
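The per-request costs above follow directly from the per-million-token rates. A minimal sketch of that arithmetic (the helper name is illustrative, not from any SDK):

```python
# Hypothetical helper: estimate a request's cost from per-million-token rates.
def request_cost(input_tokens, output_tokens, in_rate, out_rate):
    """in_rate and out_rate are USD per 1M tokens."""
    return (input_tokens / 1e6) * in_rate + (output_tokens / 1e6) * out_rate

# 1M input + 100K output, using the rates from the table above.
claude = request_cost(1_000_000, 100_000, 1.00, 5.00)    # $1.50
deepseek = request_cost(1_000_000, 100_000, 1.35, 5.40)  # $1.89
```

Scaling the token counts lets you project monthly spend for your own traffic mix.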

Speed Comparison

Output Speed (tokens/s) — higher is better
Claude 4.5 Haiku (Non-reasoning): 103 tok/s
DeepSeek R1 0528 (May '25): not available

Time to First Token (seconds) — lower is better
Claude 4.5 Haiku (Non-reasoning): 0.47s
DeepSeek R1 0528 (May '25): not available
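TTFT and output speed combine into a rough end-to-end latency estimate: time to first token plus streaming time for the remaining tokens. A sketch using Claude 4.5 Haiku's figures and an assumed 500-token reply:

```python
def response_time(ttft_s, n_tokens, tok_per_s):
    # Rough end-to-end latency: time to first token, then streaming
    # the reply at the measured generation speed.
    return ttft_s + n_tokens / tok_per_s

# Claude 4.5 Haiku (Non-reasoning): 0.47s TTFT, 103 tok/s; 500-token reply assumed.
t = response_time(0.47, 500, 103)  # ≈ 5.32 s
```

For short replies TTFT dominates; for long replies, tokens/s does.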

Benchmark Comparison

Data from Artificial Analysis API — 12 benchmarks

Benchmark | Claude 4.5 Haiku (Non-reasoning) | DeepSeek R1 0528 (May '25)
Intelligence Index | 31.1 | 27.1
Coding Index | 29.6 | 24.0
Math Index | 39.0 | 76.0
GPQA Diamond | 64.6% | 81.3%
MMLU-Pro | 80.0% | 84.9%
LiveCodeBench | 51.1% | 77.0%
AIME 2025 | 39.0% | 76.0%
MATH-500 | — | 98.3%
Humanity's Last Exam | 4.3% | 14.9%
SciCode | 34.4% | 40.3%
IFBench | 42.0% | 39.6%
TerminalBench | 27.3% | 15.9%

Tally: Claude 4.5 Haiku (Non-reasoning) 4 wins, DeepSeek R1 0528 (May '25) 8 wins.

Frequently Asked Questions

Which is cheaper, Claude 4.5 Haiku (Non-reasoning) or DeepSeek R1 0528 (May '25)?

Claude 4.5 Haiku (Non-reasoning) is cheaper overall. Its blended price (3:1 input/output ratio) is $2.00/M tokens vs $2.36/M for DeepSeek R1 0528 (May '25).
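The blended price is a weighted average of input and output rates at the stated 3:1 input/output token ratio. A minimal sketch of that calculation (the function name is illustrative):

```python
def blended_price(in_rate, out_rate, ratio=3):
    # Weighted average assuming `ratio` input tokens per output token.
    return (ratio * in_rate + out_rate) / (ratio + 1)

claude_blended = blended_price(1.00, 5.00)    # $2.00/M
deepseek_blended = blended_price(1.35, 5.40)  # $2.3625/M, shown as $2.36/M
```

If your workload is output-heavy (e.g. long generations), recompute with a lower ratio; the gap between the two models widens as output share grows.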

Which model performs better on benchmarks?

DeepSeek R1 0528 (May '25) wins 8 out of 12 benchmarks compared to 4 for Claude 4.5 Haiku (Non-reasoning). See the detailed benchmark chart above for per-category results.

Which is faster for real-time applications?

Claude 4.5 Haiku (Non-reasoning) generates tokens at 103 tok/s with a 0.47s time-to-first-token. Output speed and TTFT figures for DeepSeek R1 0528 (May '25) were not available in this dataset, so a direct speed comparison is not possible.

When should I use Claude 4.5 Haiku (Non-reasoning) vs DeepSeek R1 0528 (May '25)?

Choose based on your priorities: Claude 4.5 Haiku (Non-reasoning) for lower cost and faster measured generation, DeepSeek R1 0528 (May '25) for stronger benchmark performance. For latency-sensitive apps, check the TTFT comparison above.