
Best Gemini 2.5 Flash Alternatives

Gemini 2.5 Flash by Google is a budget model priced at $0.15 input / $0.60 output per 1M tokens. It's already affordable, but you might want different strengths or features.
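The percentage deltas quoted throughout this page are relative to Gemini 2.5 Flash's base rates. A minimal sketch of that arithmetic (the `price_delta` helper is ours, not part of any pricing API):

```python
def price_delta(alt: float, base: float) -> str:
    """Relative price difference vs. a baseline rate, in $/1M tokens."""
    pct = round((alt - base) / base * 100)
    if pct > 0:
        return f"{pct}% more"
    if pct < 0:
        return f"{-pct}% cheaper"
    return "Same price"

# Gemini 2.5 Flash baseline: $0.15 in / $0.60 out per 1M tokens
print(price_delta(0.50, 0.15))  # Gemini 3 Flash input -> "233% more"
print(price_delta(0.28, 0.60))  # DeepSeek V3 output  -> "53% cheaper"
```

This reproduces the figures shown on the cards below, e.g. Gemini 3 Flash's $0.50 input is 233% more than the $0.15 baseline.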

Gemini 2.5 Flash (Google, Budget)

- Input: $0.15/1M
- Output: $0.60/1M
- Context: 1M
- Max Output: 66K

Why Switch from Gemini 2.5 Flash?

- Weaker than Gemini 3 Flash on most benchmarks
- Inconsistent output quality on edge cases

Top Alternatives

#1 Gemini 3 Flash (Google, Budget)

Comparable performance at higher rates.

- Input: $0.50/1M (233% more)
- Output: $3.00/1M (400% more)
- Context: 1M
- Max Output: 66K
- MMLU-Pro: 78% (+2.0%) | HumanEval: 90% (+0.5%)
#2 o3 (OpenAI, Reasoning)

Higher benchmark scores.

- Input: $0.40/1M (167% more)
- Output: $1.60/1M (167% more)
- Context: 200K
- Max Output: 100K
- MMLU-Pro: 87% (+11.0%) | HumanEval: 94.5% (+5.0%)
#3 DeepSeek V3 (DeepSeek, Open Source)

44% cheaper on blended pricing, comparable performance, open-source and self-hostable.

- Input: $0.14/1M (7% cheaper)
- Output: $0.28/1M (53% cheaper)
- Context: 164K
- Max Output: 16K
- MMLU-Pro: 78% (+2.0%) | HumanEval: 89% (-0.5%)
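The headline "44% cheaper" for DeepSeek V3 is larger than either per-direction delta because it blends input and output rates. A sketch of that calculation, assuming an even 50/50 token split between input and output (the `blended_rate` helper and the split are our assumptions):

```python
def blended_rate(inp: float, out: float, out_share: float = 0.5) -> float:
    """Blended $/1M-token rate for a given share of output tokens."""
    return inp * (1 - out_share) + out * out_share

base = blended_rate(0.15, 0.60)  # Gemini 2.5 Flash -> 0.375
alt = blended_rate(0.14, 0.28)   # DeepSeek V3      -> 0.21
saving = (base - alt) / base * 100
print(f"{saving:.0f}% cheaper blended")  # 44% cheaper blended
```

Your real savings depend on your actual input/output ratio; prompt-heavy workloads will see closer to the 7% input figure.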
#4 GPT-4o Mini (OpenAI, Budget)

Same category, different trade-offs.

- Input: $0.15/1M (same price)
- Output: $0.60/1M (same price)
- Context: 128K
- Max Output: 16K
- MMLU-Pro: 68% (-8.0%) | HumanEval: 87.2% (-2.3%)
#5 Claude Haiku 4.5 (Anthropic, Budget)

Same category, different trade-offs.

- Input: $0.80/1M (433% more)
- Output: $4.00/1M (567% more)
- Context: 200K
- Max Output: 8K
- MMLU-Pro: 69.4% (-6.6%) | HumanEval: 88.1% (-1.4%)
#6 o4-mini (OpenAI, Reasoning)

Higher benchmark scores.

- Input: $1.10/1M (633% more)
- Output: $4.40/1M (633% more)
- Context: 200K
- Max Output: 100K
- MMLU-Pro: 85% (+9.0%) | HumanEval: 93.5% (+4.0%)
#7 Mistral Large 3 (Mistral, Flagship)

Higher benchmark scores.

- Input: $2.00/1M (1233% more)
- Output: $5.00/1M (733% more)
- Context: 128K
- Max Output: 16K
- MMLU-Pro: 83% (+7.0%) | HumanEval: 91% (+1.5%)
#8 Llama 4 Maverick (Meta, Open Source)

Higher benchmark scores, open-source and self-hostable.

- Input: $0.31/1M (107% more)
- Output: $0.85/1M (42% more)
- Context: 1M
- Max Output: 32K
- MMLU-Pro: 80.5% (+4.5%) | HumanEval: 90.2% (+0.7%)

Full Comparison Table

| Model | Provider | Input $/1M | Output $/1M | Context | MMLU-Pro | HumanEval | Score |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Gemini 3 Flash | Google | $0.50 (233% more) | $3.00 (400% more) | 1M | 78% (+2.0%) | 90% (+0.5%) | 80 |
| o3 | OpenAI | $0.40 (167% more) | $1.60 (167% more) | 200K | 87% (+11.0%) | 94.5% (+5.0%) | 75 |
| DeepSeek V3 | DeepSeek | $0.14 (7% cheaper) | $0.28 (53% cheaper) | 164K | 78% (+2.0%) | 89% (-0.5%) | 73 |
| GPT-4o Mini | OpenAI | $0.15 (same price) | $0.60 (same price) | 128K | 68% (-8.0%) | 87.2% (-2.3%) | 69 |
| Claude Haiku 4.5 | Anthropic | $0.80 (433% more) | $4.00 (567% more) | 200K | 69.4% (-6.6%) | 88.1% (-1.4%) | 67 |
| o4-mini | OpenAI | $1.10 (633% more) | $4.40 (633% more) | 200K | 85% (+9.0%) | 93.5% (+4.0%) | 65 |
| Mistral Large 3 | Mistral | $2.00 (1233% more) | $5.00 (733% more) | 128K | 83% (+7.0%) | 91% (+1.5%) | 65 |
| Llama 4 Maverick | Meta | $0.31 (107% more) | $0.85 (42% more) | 1M | 80.5% (+4.5%) | 90.2% (+0.7%) | 63 |
| MiniMax M2.5 | MiniMax | $0.30 (100% more) | $1.20 (100% more) | 200K | 82% (+6.0%) | 90% (+0.5%) | 63 |
| GLM-4.7 | Zhipu AI | $0.60 (300% more) | $2.20 (267% more) | 200K | 84.3% (+8.3%) | n/a | 59 |
| GLM-5 | Zhipu AI | $1.00 (567% more) | $3.20 (433% more) | 200K | 70.4% (-5.6%) | 91% (+1.5%) | 58 |
| Llama 4 Scout | Meta | $0.18 (20% more) | $0.63 (5% more) | 10M | 74.2% (-1.8%) | 86% (-3.5%) | 56 |
| Claude Opus 4.6 | Anthropic | $5.00 (3233% more) | $25.00 (4067% more) | 200K | 89.5% (+13.5%) | 95% (+5.5%) | 55 |
| Claude Sonnet 4.6 | Anthropic | $3.00 (1900% more) | $15.00 (2400% more) | 200K | 86% (+10.0%) | 94% (+4.5%) | 55 |
| GPT-5.3 Codex | OpenAI | $2.00 (1233% more) | $16.00 (2567% more) | 200K | 90% (+14.0%) | 96.5% (+7.0%) | 55 |
| GPT-5.2 Codex | OpenAI | $1.75 (1067% more) | $14.00 (2233% more) | 200K | 89% (+13.0%) | 95.5% (+6.0%) | 55 |
| Mistral Medium 3 | Mistral | $0.40 (167% more) | $2.00 (233% more) | 128K | 76% (same) | 87% (-2.5%) | 52 |
| GPT-5 | OpenAI | $1.25 (733% more) | $10.00 (1567% more) | 128K | 88.5% (+12.5%) | 95% (+5.5%) | 50 |
| Gemini 3.1 Pro | Google | $2.00 (1233% more) | $12.00 (1900% more) | 1M | 91% (+15.0%) | 95% (+5.5%) | 50 |
| Gemini 3 Pro | Google | $2.00 (1233% more) | $12.00 (1900% more) | 1M | 89.8% (+13.8%) | 94% (+4.5%) | 50 |
| Gemini 2.5 Pro | Google | $1.25 (733% more) | $10.00 (1567% more) | 1M | 87.5% (+11.5%) | 93.5% (+4.0%) | 50 |
| Grok 4 | xAI | $3.00 (1900% more) | $15.00 (2400% more) | 128K | 86% (+10.0%) | 93% (+3.5%) | 50 |
| Claude Sonnet 4.5 | Anthropic | $3.00 (1900% more) | $15.00 (2400% more) | 200K | 84.5% (+8.5%) | 93% (+3.5%) | 48 |
| DeepSeek R1 | DeepSeek | $0.55 (267% more) | $2.19 (265% more) | 128K | 84% (+8.0%) | 92% (+2.5%) | 46 |
| GPT-4o | OpenAI | $2.50 (1567% more) | $10.00 (1567% more) | 128K | 80.5% (+4.5%) | 91% (+1.5%) | 38 |
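To turn these per-million rates into a monthly bill for your own workload, multiply each rate by your token volume. A sketch under an assumed workload (4K prompt, 1K completion, 1M requests/month; the `request_cost` helper and the workload numbers are illustrative, not from any provider API):

```python
def request_cost(in_tok: int, out_tok: int, in_rate: float, out_rate: float) -> float:
    """Dollar cost of one request, given $/1M-token rates."""
    return in_tok / 1e6 * in_rate + out_tok / 1e6 * out_rate

REQUESTS = 1_000_000
flash = request_cost(4000, 1000, 0.15, 0.60) * REQUESTS  # Gemini 2.5 Flash
o3 = request_cost(4000, 1000, 0.40, 1.60) * REQUESTS     # o3
print(f"Flash: ${flash:,.0f}/mo, o3: ${o3:,.0f}/mo")     # Flash: $1,200/mo, o3: $3,200/mo
```

At this prompt-heavy mix, o3's "167% more" on both rates translates directly into roughly 2.7x the monthly spend, so the benchmark gains come at a real cost.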
