# Ling-flash-2.0
AI model by InclusionAI. Real-time pricing and benchmark data.
## Benchmarks

| Benchmark | Score |
|---|---|
| Coding Index | 16.7 |
| Math Index | 65.3 |
| GPQA Diamond | 65.7% |
| MMLU-Pro | 77.7% |
| LiveCodeBench | 58.9% |
| AIME 2025 | 65.3% |
| SciCode | 28.9% |
| IFBench | 34.4% |
| TerminalBench | 10.6% |
## Compare with similar models

| Model | Input ($/1M tokens) | Output ($/1M tokens) | Speed |
|---|---|---|---|
| Ling-flash-2.0 (current) | $0.14 | $0.57 | 64 tok/s |
| Seed-OSS-36B-Instruct | $0.21 | $0.57 | 40 tok/s |
| Llama 3.1 Instruct 70B | $0.56 | $0.56 | 31 tok/s |
| gpt-oss-120B (low) | $0.15 | $0.60 | 255 tok/s |
| gpt-oss-120B (high) | $0.15 | $0.60 | 253 tok/s |
| Mistral Small 4 (Non-reasoning) | $0.15 | $0.60 | 159 tok/s |
## Example Costs

| Scenario | Token volume | Cost |
|---|---|---|
| Single request | 1.0K in / 500 out | $0.0004 |
| 1K requests/day | 1.0M in / 500.0K out | $0.4250 |
| 10K requests/day | 10.0M in / 5.0M out | $4.25 |
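The example costs above follow directly from the per-1M-token rates in the comparison table ($0.14 input, $0.57 output). A minimal sketch of that arithmetic; the function name is illustrative, not part of any API:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.14, output_rate: float = 0.57) -> float:
    """Cost in USD, given rates quoted per 1M tokens (Ling-flash-2.0 pricing)."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Single request: 1.0K in / 500 out
print(round(request_cost(1_000, 500), 4))           # → 0.0004
# 1K requests/day: 1.0M in / 500.0K out
print(round(request_cost(1_000_000, 500_000), 4))   # → 0.425
# 10K requests/day: 10.0M in / 5.0M out
print(round(request_cost(10_000_000, 5_000_000), 2))  # → 4.25
```

Output tokens dominate the bill at these rates: each output token costs roughly 4× an input token.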