Best Mistral Large 3 Alternatives

Mistral Large 3 is Mistral's flagship model, priced at $2 per 1M input tokens and $5 per 1M output tokens. Looking for a better deal or different capabilities? Here are the best options.

Mistral Large 3 (Mistral · Flagship)

Input: $2/1M
Output: $5/1M
Context: 128K
Max Output: 16K
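Per-1M-token prices translate directly into per-request costs. A minimal sketch using Mistral Large 3's listed rates (the token counts in the example are illustrative):

```python
# Cost of one request at Mistral Large 3's listed rates ($ per 1M tokens).
INPUT_PRICE, OUTPUT_PRICE = 2.00, 5.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request: tokens / 1M * price per 1M."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# Example: a 10K-token prompt with a 2K-token completion.
cost = request_cost(10_000, 2_000)
print(f"${cost:.3f}")  # 10K * $2/1M + 2K * $5/1M = $0.02 + $0.01 = $0.030
```

The same function works for any model below; just swap in that model's input and output prices.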

Why Switch from Mistral Large 3?

Lower benchmark scores than top-tier competitors
Smaller ecosystem and tooling support

Top Alternatives

#1 GPT-5.3 Codex (OpenAI · Flagship)

Comparable performance, 66K max output.

Input: $2/1M (same price)
Output: $16/1M (220% more)
Context: 200K
Max Output: 66K
MMLU-Pro: 90% (+7.0%) · HumanEval: 96.5% (+5.5%)
#2 Gemini 3.1 Pro (Google · Flagship)

Higher benchmark scores, 1M context (8x more), 64K max output.

Input: $2/1M (same price)
Output: $12/1M (140% more)
Context: 1M
Max Output: 64K
MMLU-Pro: 91% (+8.0%) · HumanEval: 95% (+4.0%)
#3 GLM-5 (Zhipu AI · Flagship)

40% cheaper overall, 128K max output.

Input: $1/1M (50% cheaper)
Output: $3.20/1M (36% cheaper)
Context: 200K
Max Output: 128K
MMLU-Pro: 70.4% (-12.6%) · HumanEval: 91% (same)
#4 GPT-5.2 Codex (OpenAI · Flagship)

Comparable performance, 66K max output.

Input: $1.75/1M (13% cheaper)
Output: $14/1M (180% more)
Context: 200K
Max Output: 66K
MMLU-Pro: 89% (+6.0%) · HumanEval: 95.5% (+4.5%)
#5 GPT-5 (OpenAI · Flagship)

Comparable performance, adds audio.

Input: $1.25/1M (38% cheaper)
Output: $10/1M (100% more)
Context: 128K
Max Output: 16K
MMLU-Pro: 88.5% (+5.5%) · HumanEval: 95% (+4.0%)
#6 o4-mini (OpenAI · Reasoning)

21% cheaper overall, 100K max output.

Input: $1.10/1M (45% cheaper)
Output: $4.40/1M (12% cheaper)
Context: 200K
Max Output: 100K
MMLU-Pro: 85% (+2.0%) · HumanEval: 93.5% (+2.5%)
#7 Gemini 3 Pro (Google · Flagship)

Comparable performance, 1M context (8x more), 66K max output.

Input: $2/1M (same price)
Output: $12/1M (140% more)
Context: 1M
Max Output: 66K
MMLU-Pro: 89.8% (+6.8%) · HumanEval: 94% (+3.0%)
#8 Gemini 3 Flash (Google · Budget)

50% cheaper overall, 1M context (8x more), 66K max output.

Input: $0.50/1M (75% cheaper)
Output: $3/1M (40% cheaper)
Context: 1M
Max Output: 66K
MMLU-Pro: 78% (-5.0%) · HumanEval: 90% (-1.0%)

Full Comparison Table

| Model | Input $/1M | Output $/1M | Context | MMLU-Pro | HumanEval | Score |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-5.3 Codex (OpenAI) | $2.00 (same price) | $16.00 (220% more) | 200K | 90% (+7.0%) | 96.5% (+5.5%) | 90 |
| Gemini 3.1 Pro (Google) | $2.00 (same price) | $12.00 (140% more) | 1M | 91% (+8.0%) | 95% (+4.0%) | 85 |
| GLM-5 (Zhipu AI) | $1.00 (50% cheaper) | $3.20 (36% cheaper) | 200K | 70.4% (-12.6%) | 91% (same) | 85 |
| GPT-5.2 Codex (OpenAI) | $1.75 (13% cheaper) | $14.00 (180% more) | 200K | 89% (+6.0%) | 95.5% (+4.5%) | 83 |
| GPT-5 (OpenAI) | $1.25 (38% cheaper) | $10.00 (100% more) | 128K | 88.5% (+5.5%) | 95% (+4.0%) | 78 |
| o4-mini (OpenAI) | $1.10 (45% cheaper) | $4.40 (12% cheaper) | 200K | 85% (+2.0%) | 93.5% (+2.5%) | 78 |
| Gemini 3 Pro (Google) | $2.00 (same price) | $12.00 (140% more) | 1M | 89.8% (+6.8%) | 94% (+3.0%) | 78 |
| Gemini 3 Flash (Google) | $0.50 (75% cheaper) | $3.00 (40% cheaper) | 1M | 78% (-5.0%) | 90% (-1.0%) | 78 |
| Grok 4 (xAI) | $3.00 (50% more) | $15.00 (200% more) | 128K | 86% (+3.0%) | 93% (+2.0%) | 78 |
| Claude Opus 4.6 (Anthropic) | $5.00 (150% more) | $25.00 (400% more) | 200K | 89.5% (+6.5%) | 95% (+4.0%) | 73 |
| GLM-4.7 (Zhipu AI) | $0.60 (70% cheaper) | $2.20 (56% cheaper) | 200K | 84.3% (+1.3%) | | 72 |
| Claude Sonnet 4.6 (Anthropic) | $3.00 (50% more) | $15.00 (200% more) | 200K | 86% (+3.0%) | 94% (+3.0%) | 68 |
| o3 (OpenAI) | $0.40 (80% cheaper) | $1.60 (68% cheaper) | 200K | 87% (+4.0%) | 94.5% (+3.5%) | 68 |
| Gemini 2.5 Flash (Google) | $0.15 (93% cheaper) | $0.60 (88% cheaper) | 1M | 76% (-7.0%) | 89.5% (-1.5%) | 68 |
| DeepSeek R1 (DeepSeek) | $0.55 (73% cheaper) | $2.19 (56% cheaper) | 128K | 84% (+1.0%) | 92% (+1.0%) | 66 |
| Claude Haiku 4.5 (Anthropic) | $0.80 (60% cheaper) | $4.00 (20% cheaper) | 200K | 69.4% (-13.6%) | 88.1% (-2.9%) | 64 |
| Mistral Medium 3 (Mistral) | $0.40 (80% cheaper) | $2.00 (60% cheaper) | 128K | 76% (-7.0%) | 87% (-4.0%) | 64 |
| Gemini 2.5 Pro (Google) | $1.25 (38% cheaper) | $10.00 (100% more) | 1M | 87.5% (+4.5%) | 93.5% (+2.5%) | 63 |
| Claude Sonnet 4.5 (Anthropic) | $3.00 (50% more) | $15.00 (200% more) | 200K | 84.5% (+1.5%) | 93% (+2.0%) | 60 |
| Llama 4 Maverick (Meta) | $0.31 (85% cheaper) | $0.85 (83% cheaper) | 1M | 80.5% (-2.5%) | 90.2% (-0.8%) | 56 |
| MiniMax M2.5 (MiniMax) | $0.30 (85% cheaper) | $1.20 (76% cheaper) | 200K | 82% (-1.0%) | 90% (-1.0%) | 56 |
| GPT-4o Mini (OpenAI) | $0.15 (93% cheaper) | $0.60 (88% cheaper) | 128K | 68% (-15.0%) | 87.2% (-3.8%) | 54 |
| GPT-4o (OpenAI) | $2.50 (25% more) | $10.00 (100% more) | 128K | 80.5% (-2.5%) | 91% (same) | 50 |
| Llama 4 Scout (Meta) | $0.18 (91% cheaper) | $0.63 (87% cheaper) | 10M | 74.2% (-8.8%) | 86% (-5.0%) | 48 |
| DeepSeek V3 (DeepSeek) | $0.14 (93% cheaper) | $0.28 (94% cheaper) | 164K | 78% (-5.0%) | 89% (-2.0%) | 46 |
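The "cheaper/more" labels in the table are simple percentage differences against Mistral Large 3's $2 input / $5 output baseline. A minimal sketch that reproduces them (the model names and prices are taken from the rows above):

```python
# Price deltas relative to Mistral Large 3 ($2 in / $5 out per 1M tokens).
BASE_INPUT, BASE_OUTPUT = 2.00, 5.00

def delta(price: float, base: float) -> str:
    """Express a price as 'N% cheaper', 'N% more', or 'same price' vs. a baseline."""
    pct = round((price - base) / base * 100)
    if pct == 0:
        return "same price"
    return f"{abs(pct)}% {'cheaper' if pct < 0 else 'more'}"

# A few rows from the comparison table: (input $/1M, output $/1M).
alternatives = {
    "GPT-5.3 Codex": (2.00, 16.00),
    "GLM-5": (1.00, 3.20),
    "Gemini 3 Flash": (0.50, 3.00),
}

for name, (inp, out) in alternatives.items():
    print(f"{name}: input {delta(inp, BASE_INPUT)}, output {delta(out, BASE_OUTPUT)}")
```

Running this prints, e.g., "GPT-5.3 Codex: input same price, output 220% more", matching the table's annotations.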

Head-to-Head Comparisons

Alternatives for Other Models