LFM2.5-1.2B-Instruct vs gpt-oss-120B (low)

Side-by-side comparison of pricing, 12 benchmarks, and generation speed.

LFM2.5-1.2B-Instruct (Liquid AI)
Input: $0/M · Output: $0/M · Speed: — · TTFT: —

gpt-oss-120B (low) (OpenAI)
Input: $0.15/M · Output: $0.60/M · Speed: 255 tok/s · TTFT: 0.51s

Winner by Category

Cheaper: LFM2.5-1.2B-Instruct
Faster (tok/s): gpt-oss-120B (low)
Lower Latency: not comparable (no TTFT reported for LFM2.5-1.2B-Instruct)
Benchmarks (10 of 12 wins): gpt-oss-120B (low)

Pricing Comparison

Metric                LFM2.5-1.2B-Instruct   gpt-oss-120B (low)
Input ($/M tokens)    $0                     $0.15
Output ($/M tokens)   $0                     $0.60

Cost for 1M input + 100K output tokens:
LFM2.5-1.2B-Instruct: $0.00
gpt-oss-120B (low): $0.21
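
The $0.21 figure is just the per-token rates applied to the workload. A minimal Python sketch of that arithmetic (the `workload_cost` helper is illustrative, not part of any vendor SDK):

```python
def workload_cost(input_rate: float, output_rate: float,
                  input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a workload, given $/M-token rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# gpt-oss-120B (low): 1M input + 100K output tokens
print(workload_cost(0.15, 0.60, 1_000_000, 100_000))  # 0.21

# LFM2.5-1.2B-Instruct is listed at $0/M in both directions
print(workload_cost(0.00, 0.00, 1_000_000, 100_000))  # 0.0
```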

Speed Comparison

Output Speed (tokens/s) — higher is better
LFM2.5-1.2B-Instruct: not reported
gpt-oss-120B (low): 255 tok/s

Time to First Token (seconds) — lower is better
LFM2.5-1.2B-Instruct: not reported
gpt-oss-120B (low): 0.51s

Benchmark Comparison

Data from Artificial Analysis API — 12 benchmarks

Benchmark               LFM2.5-1.2B-Instruct   gpt-oss-120B (low)
Intelligence Index      8.0                    24.5
Coding Index            0.8                    15.5
Math Index              —                      66.7
GPQA Diamond            32.6%                  67.2%
MMLU-Pro                —                      77.5%
LiveCodeBench           —                      70.7%
AIME 2025               —                      66.7%
MATH-500                —                      —
Humanity's Last Exam    6.8%                   5.2%
SciCode                 2.3%                   36.0%
IFBench                 43.8%                  58.3%
TerminalBench           0.0%                   5.3%

Wins: gpt-oss-120B (low) 10 · LFM2.5-1.2B-Instruct 1 (— = score not reported)

Frequently Asked Questions

Which is cheaper, LFM2.5-1.2B-Instruct or gpt-oss-120B (low)?

LFM2.5-1.2B-Instruct is cheaper overall. Its blended price (3:1 input/output ratio) is $0.00/M tokens vs $0.26/M for gpt-oss-120B (low).
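
The blended figures follow from the 3:1 input/output weighting stated above; a quick sketch of that calculation (the `blended_price` function name is illustrative):

```python
def blended_price(input_rate: float, output_rate: float, ratio: float = 3.0) -> float:
    """Blended $/M-token price, assuming `ratio` input tokens per output token."""
    return (ratio * input_rate + output_rate) / (ratio + 1)

print(blended_price(0.15, 0.60))  # 0.2625 -> ~$0.26/M for gpt-oss-120B (low)
print(blended_price(0.00, 0.00))  # 0.0    -> $0.00/M for LFM2.5-1.2B-Instruct
```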

Which model performs better on benchmarks?

gpt-oss-120B (low) wins 10 out of 12 benchmarks compared to 1 for LFM2.5-1.2B-Instruct. See the detailed benchmark chart above for per-category results.

Which is faster for real-time applications?

gpt-oss-120B (low) is the only model of the two with published API speed figures: 255 tok/s output and a 0.51s time-to-first-token. Throughput and TTFT are not reported for LFM2.5-1.2B-Instruct, so a direct real-time comparison isn't possible from this data.

When should I use LFM2.5-1.2B-Instruct vs gpt-oss-120B (low)?

Choose based on your priorities: LFM2.5-1.2B-Instruct for lower cost, gpt-oss-120B (low) for stronger benchmark performance and faster measured generation. For latency-sensitive apps, note that TTFT data is only available for gpt-oss-120B (low).