
Claude 3.5 Haiku vs DeepSeek R1 (Jan '25)

Side-by-side comparison of pricing, 12 benchmarks, and generation speed.

Anthropic
Claude 3.5 Haiku
Input: $0.80/M tokens
Output: $4.00/M tokens
Speed: not reported
TTFT: not reported

DeepSeek
DeepSeek R1 (Jan '25)
Input: $1.35/M tokens
Output: $4.00/M tokens
Speed: not reported
TTFT: not reported

Winner by Category

Cheaper: Claude 3.5 Haiku
Faster (tok/s): Tie (no speed data reported)
Lower Latency: Tie (no latency data reported)
Benchmarks (11 of 12): DeepSeek R1 (Jan '25)

Pricing Comparison

Metric | Claude 3.5 Haiku | DeepSeek R1 (Jan '25)
Input ($/M tokens) | $0.80 | $1.35
Output ($/M tokens) | $4.00 | $4.00

Cost for 1M input + 100K output tokens:
Claude 3.5 Haiku: $1.20
DeepSeek R1 (Jan '25): $1.75
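The workload costs above follow directly from the per-million-token prices. A minimal sketch of that arithmetic (the function name is illustrative, not any provider's API):

```python
def workload_cost(input_price_per_m: float, output_price_per_m: float,
                  input_tokens: int, output_tokens: int) -> float:
    """Total cost in dollars for a workload, given $/M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Prices from the table above: 1M input + 100K output tokens.
haiku = workload_cost(0.80, 4.00, 1_000_000, 100_000)
deepseek = workload_cost(1.35, 4.00, 1_000_000, 100_000)
print(f"Claude 3.5 Haiku: ${haiku:.2f}")          # $1.20
print(f"DeepSeek R1 (Jan '25): ${deepseek:.2f}")  # $1.75
```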

Speed Comparison

Speed data is not available for these models.

Benchmark Comparison

Data from Artificial Analysis API — 12 benchmarks

Benchmark | Claude 3.5 Haiku | DeepSeek R1 (Jan '25)
Intelligence Index | 18.7 | 18.8
Coding Index | 10.7 | 15.9
Math Index | n/a | 68.0
GPQA Diamond | 40.8% | 70.8%
MMLU-Pro | 63.4% | 84.4%
LiveCodeBench | 31.4% | 61.7%
AIME 2025 | n/a | 68.0%
MATH-500 | 72.1% | 96.6%
Humanity's Last Exam | 3.5% | 9.3%
SciCode | 27.4% | 35.7%
IFBench | 42.8% | 39.0%
TerminalBench | 2.3% | 6.1%

Note: for Math Index and AIME 2025 only one score was reported; it is listed under DeepSeek R1 (Jan '25), consistent with the overall win tally.

Win tally: Claude 3.5 Haiku 1 win, DeepSeek R1 (Jan '25) 11 wins.
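The win tally can be reproduced from the benchmarks where both scores are reported (10 of the 12; Math Index and AIME 2025 list only a single value). Higher is better on every metric here. A minimal sketch:

```python
# (Haiku score, DeepSeek R1 score) pairs from the benchmark table above.
scores = {
    "Intelligence Index": (18.7, 18.8),
    "Coding Index": (10.7, 15.9),
    "GPQA Diamond": (40.8, 70.8),
    "MMLU-Pro": (63.4, 84.4),
    "LiveCodeBench": (31.4, 61.7),
    "MATH-500": (72.1, 96.6),
    "Humanity's Last Exam": (3.5, 9.3),
    "SciCode": (27.4, 35.7),
    "IFBench": (42.8, 39.0),
    "TerminalBench": (2.3, 6.1),
}

haiku_wins = sum(1 for h, d in scores.values() if h > d)
deepseek_wins = sum(1 for h, d in scores.values() if d > h)
print(haiku_wins, deepseek_wins)  # 1 9
```

Haiku's single win is IFBench; DeepSeek R1 takes the other nine here, and the page's 11-win total includes the two single-value benchmarks.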

Frequently Asked Questions

Which is cheaper, Claude 3.5 Haiku or DeepSeek R1 (Jan '25)?

Claude 3.5 Haiku is cheaper overall. Computed from the listed per-token prices, its blended price (3:1 input:output ratio) is $1.60/M tokens vs about $2.01/M for DeepSeek R1 (Jan '25).
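A 3:1 blended price is simply a weighted average of the input and output prices, three parts input to one part output. A quick check from the listed prices (the function name is illustrative):

```python
def blended_price(input_per_m: float, output_per_m: float,
                  input_ratio: int = 3, output_ratio: int = 1) -> float:
    """Weighted-average $/M tokens; defaults to a 3:1 input:output mix."""
    total = input_ratio + output_ratio
    return (input_ratio * input_per_m + output_ratio * output_per_m) / total

print(f"{blended_price(0.80, 4.00):.2f}")  # 1.60  (Claude 3.5 Haiku)
print(f"{blended_price(1.35, 4.00):.2f}")  # 2.01  (DeepSeek R1, Jan '25)
```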

Which model performs better on benchmarks?

DeepSeek R1 (Jan '25) wins 11 out of 12 benchmarks compared to 1 for Claude 3.5 Haiku. See the detailed benchmark chart above for per-category results.

Which is faster for real-time applications?

Speed data is not available for either model, so no generation-speed advantage can be established from this comparison.

When should I use Claude 3.5 Haiku vs DeepSeek R1 (Jan '25)?

Choose based on your priorities: Claude 3.5 Haiku for lower cost, DeepSeek R1 (Jan '25) for stronger benchmark performance. Speed and TTFT data are not available for either model here, so measure latency yourself before committing to one for latency-sensitive apps.
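That decision rule can be captured in a tiny routing helper. This is a hypothetical sketch: the model identifiers and the `priority` parameter are illustrative, not any provider's API.

```python
def pick_model(priority: str) -> str:
    """Hypothetical router: map a stated priority to a model choice."""
    if priority == "cost":
        return "claude-3.5-haiku"  # lower blended price ($1.60/M vs ~$2.01/M)
    if priority == "quality":
        return "deepseek-r1"       # wins 11 of 12 benchmarks in this comparison
    raise ValueError("priority must be 'cost' or 'quality'")

print(pick_model("cost"))     # claude-3.5-haiku
print(pick_model("quality"))  # deepseek-r1
```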