
Claude 3 Opus vs o1-preview

Side-by-side comparison of pricing, 12 benchmarks, and generation speed.

Anthropic: Claude 3 Opus
Input: $15/M tokens
Output: $75/M tokens
Speed: not available
TTFT: not available

OpenAI: o1-preview
Input: $16.50/M tokens
Output: $66/M tokens
Speed: not available
TTFT: not available

Winner by Category

Cheaper: o1-preview
Faster (tok/s): Tie
Lower latency: Tie
Benchmarks (5-3): Claude 3 Opus

Pricing Comparison

Metric | Claude 3 Opus | o1-preview
Input ($/M tokens) | $15 | $16.50
Output ($/M tokens) | $75 | $66

Cost for 1M input + 100K output tokens:
Claude 3 Opus: $22.50
o1-preview: $23.10
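The worked example above can be reproduced with a few lines of arithmetic. This is a minimal sketch, not an official pricing tool: the `PRICES` dict and the `workload_cost` helper are made up for illustration, with values taken from the table above.

```python
# Per-million-token prices from the comparison table above:
# ($ per 1M input tokens, $ per 1M output tokens)
PRICES = {
    "Claude 3 Opus": (15.00, 75.00),
    "o1-preview": (16.50, 66.00),
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a workload, given token counts and $/M-token prices."""
    in_price, out_price = PRICES[model]
    return (input_tokens / 1e6) * in_price + (output_tokens / 1e6) * out_price

# 1M input + 100K output tokens, matching the example above:
print(round(workload_cost("Claude 3 Opus", 1_000_000, 100_000), 2))  # 22.5
print(round(workload_cost("o1-preview", 1_000_000, 100_000), 2))     # 23.1
```

At this workload shape the cheaper output price of o1-preview does not yet offset its higher input price, which is why Claude 3 Opus comes out slightly ahead here despite losing on the blended price.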

Speed Comparison

Speed data not available for these models.

Benchmark Comparison

Data from Artificial Analysis API — 12 benchmarks

Benchmark | Claude 3 Opus | o1-preview
Intelligence Index | 18.0 | 23.7
Coding Index | 19.5 | 34.0
Math Index | n/a | n/a
GPQA Diamond | 48.9% | n/a
MMLU-Pro | 69.6% | n/a
LiveCodeBench | 27.9% | n/a
AIME 2025 | n/a | n/a
MATH-500 | 64.1% | 92.4%
Humanity's Last Exam | 3.1% | n/a
SciCode | 23.3% | n/a
IFBench | n/a | n/a
TerminalBench | n/a | n/a

Claude 3 Opus: 5 wins · o1-preview: 3 wins

Frequently Asked Questions

Which is cheaper, Claude 3 Opus or o1-preview?

o1-preview is cheaper overall. Its blended price (3:1 input/output ratio) is $28.88/M tokens vs $30.00/M for Claude 3 Opus.
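The blended prices quoted above follow directly from the listed per-token rates. As a sketch, assuming the standard 3:1 input-to-output token ratio stated in the answer (the `blended_price` helper is illustrative, not an official formula from the source):

```python
def blended_price(input_price: float, output_price: float, ratio: int = 3) -> float:
    """Blended $/M-token price, weighting input tokens `ratio`:1 over output."""
    return (ratio * input_price + output_price) / (ratio + 1)

print(round(blended_price(15.00, 75.00), 2))  # 30.0  (Claude 3 Opus)
print(round(blended_price(16.50, 66.00), 2))  # 28.88 (o1-preview)
```

Note that the blended price is sensitive to the assumed ratio: workloads with proportionally more output tokens shift the comparison toward o1-preview, since its output price is lower.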

Which model performs better on benchmarks?

Claude 3 Opus wins 5 out of 12 benchmarks compared to 3 for o1-preview. See the detailed benchmark chart above for per-category results.

Which is faster for real-time applications?

Speed data is not available for either model, so a direct comparison of generation speed is not possible.

When should I use Claude 3 Opus vs o1-preview?

Choose based on your priorities: o1-preview offers a lower blended price, while Claude 3 Opus wins more of the reported benchmarks. Speed and TTFT data are not available for either model, so for latency-sensitive applications you will need to benchmark them yourself.