
Claude 3.5 Sonnet (June '24) vs Grok 4

Side-by-side comparison of pricing, 12 benchmarks, and generation speed.

Anthropic: Claude 3.5 Sonnet (June '24)
Input: $3/M
Output: $15/M
Speed: not reported
TTFT: not reported

xAI: Grok 4
Input: $3/M
Output: $15/M
Speed: 47 tok/s
TTFT: 8.38s

Winner by Category

Cheaper: Tie
Faster (tok/s): not determinable (output speed for Claude 3.5 Sonnet not reported)
Lower Latency: not determinable (TTFT for Claude 3.5 Sonnet not reported)
Benchmarks (0 vs 12 wins): Grok 4

Pricing Comparison

Metric                Claude 3.5 Sonnet (June '24)   Grok 4
Input ($/M tokens)    $3                             $3
Output ($/M tokens)   $15                            $15

Cost for 1M input + 100K output tokens:
Claude 3.5 Sonnet (June '24): $4.50
Grok 4: $4.50
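The per-request figure above follows directly from the per-million-token prices. A minimal sketch of that arithmetic (the helper name is illustrative, not from any provider SDK):

```python
def generation_cost(input_tokens, output_tokens,
                    input_price_per_m, output_price_per_m):
    """Cost in dollars for one request, given $/M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_m \
         + (output_tokens / 1_000_000) * output_price_per_m

# Both models list $3/M input and $15/M output.
cost = generation_cost(1_000_000, 100_000, 3.0, 15.0)
print(f"${cost:.2f}")  # → $4.50
```

Because the two models share the same list prices, any workload costs the same on either; the $4.50 figure is just $3 of input plus $1.50 of output.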

Speed Comparison

Output Speed (tokens/s) — higher is better
Claude 3.5 Sonnet (June '24): not reported
Grok 4: 47 tok/s

Time to First Token (seconds) — lower is better
Claude 3.5 Sonnet (June '24): not reported
Grok 4: 8.38s
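Where both numbers are available, they combine into a rough end-to-end latency estimate: TTFT plus output length divided by generation speed. A small illustrative sketch using the Grok 4 figures above (the 500-token reply length is an assumption, not source data):

```python
def response_time(ttft_s, output_tokens, tokens_per_s):
    """Approximate end-to-end latency: TTFT plus streaming time."""
    return ttft_s + output_tokens / tokens_per_s

# Grok 4 figures from above: 8.38s TTFT, 47 tok/s.
print(f"{response_time(8.38, 500, 47):.1f}s")  # ≈ 19.0s for a 500-token reply
```

Note how much the 8.38s TTFT dominates short replies; for interactive use, TTFT often matters more than raw tokens/s.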

Benchmark Comparison

Data from Artificial Analysis API — 12 benchmarks

Benchmark               Claude 3.5 Sonnet (June '24)   Grok 4
Intelligence Index      14.2                           41.5
Coding Index            26.0                           40.5
Math Index              —                              92.7
GPQA Diamond            56.0%                          87.7%
MMLU-Pro                75.1%                          86.6%
LiveCodeBench           —                              81.9%
AIME 2025               —                              92.7%
MATH-500                69.5%                          99.0%
Humanity's Last Exam    3.7%                           23.9%
SciCode                 31.6%                          45.7%
IFBench                 —                              53.7%
TerminalBench           —                              37.9%
(— = score not reported for this model)

Claude 3.5 Sonnet (June '24): 0 wins
Grok 4: 12 wins

Frequently Asked Questions

Which is cheaper, Claude 3.5 Sonnet (June '24) or Grok 4?

Both models have identical list pricing: $3 per million input tokens and $15 per million output tokens, so the example workload above costs $4.50 on either.

Which model performs better on benchmarks?

Grok 4 wins 12 out of 12 benchmarks compared to 0 for Claude 3.5 Sonnet (June '24). See the detailed benchmark chart above for per-category results.

Which is faster for real-time applications?

Grok 4 generates 47 tok/s with a time-to-first-token of 8.38s. Output-speed and TTFT figures for Claude 3.5 Sonnet (June '24) are not reported in this comparison, so a direct speed comparison is not possible here.

When should I use Claude 3.5 Sonnet (June '24) vs Grok 4?

Choose based on your priorities: the two models are priced identically, and Grok 4 leads on all 12 reported benchmarks. For speed- and latency-sensitive apps, note that no generation-speed or TTFT data for Claude 3.5 Sonnet (June '24) is reported above, so benchmark both models on your own workload.