# Best GLM-4.7 Alternatives

GLM-4.7 by Zhipu AI is a mid-tier model priced at $0.60 input / $2.20 output per 1M tokens. It's already affordable, but you may want different strengths or features.
**GLM-4.7** (Zhipu AI, mid-tier)
- Input: $0.60/1M
- Output: $2.20/1M
- Context: 200K
- Max Output: 128K
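The per-token rates above translate into per-request cost with simple arithmetic. A minimal sketch, using the GLM-4.7 rates from the card above (the token counts in the example are illustrative):

```python
# Cost of one request at GLM-4.7 rates: $0.60 in / $2.20 out per 1M tokens.
IN_RATE, OUT_RATE = 0.60, 2.20  # USD per 1M tokens

def request_cost(in_tokens: int, out_tokens: int) -> float:
    """Return the USD cost of a single request."""
    return in_tokens / 1e6 * IN_RATE + out_tokens / 1e6 * OUT_RATE

# Example: 10K-token prompt with a 2K-token completion.
print(f"${request_cost(10_000, 2_000):.4f}")  # $0.0104
```

The same function works for any model in the table below once you swap in its rates.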
## Top Alternatives
**Mistral Medium 3** (Mistral): 14% cheaper overall, adds tool-use.
- Input: $0.40/1M (33% cheaper)
- Output: $2.00/1M (9% cheaper)
- Context: 128K
- Max Output: 16K
**o3** (OpenAI): 29% cheaper overall, comparable performance, adds tool-use.
- Input: $0.40/1M (33% cheaper)
- Output: $1.60/1M (27% cheaper)
- Context: 200K
- Max Output: 100K
**Llama 4 Maverick** (Meta): dramatically cheaper (59% less overall), comparable performance, 1M context (5x more).
- Input: $0.31/1M (48% cheaper)
- Output: $0.85/1M (61% cheaper)
- Context: 1M
- Max Output: 32K
**MiniMax M2.5** (MiniMax): 46% cheaper overall, comparable performance, open-source and self-hostable.
- Input: $0.30/1M (50% cheaper)
- Output: $1.20/1M (45% cheaper)
- Context: 200K
- Max Output: 128K
**DeepSeek R1** (DeepSeek): 2% cheaper overall.
- Input: $0.55/1M (8% cheaper)
- Output: $2.19/1M (same price)
- Context: 128K
- Max Output: 64K
**Gemini 2.5 Pro** (Google): comparable performance, 1M context (5x more), adds tool-use and audio.
- Input: $1.25/1M (108% more)
- Output: $10.00/1M (355% more)
- Context: 1M
- Max Output: 66K
**Mistral Large 3** (Mistral): comparable performance, adds tool-use.
- Input: $2.00/1M (233% more)
- Output: $5.00/1M (127% more)
- Context: 128K
- Max Output: 16K
**Claude Sonnet 4.6** (Anthropic): comparable performance, adds tool-use.
- Input: $3.00/1M (400% more)
- Output: $15.00/1M (582% more)
- Context: 200K
- Max Output: 16K
## Full Comparison Table
Price deltas and MMLU-Pro deltas are relative to GLM-4.7 ($0.60/$2.20 per 1M, MMLU-Pro 84.3%).

| Model | Input $/1M | Output $/1M | Context | MMLU-Pro | HumanEval | Score |
|---|---|---|---|---|---|---|
| Mistral Medium 3 (Mistral) | $0.40 (33% cheaper) | $2.00 (9% cheaper) | 128K | 76% (-8.3%) | 87% | 81 |
| o3 (OpenAI) | $0.40 (33% cheaper) | $1.60 (27% cheaper) | 200K | 87% (+2.7%) | 94.5% | 79 |
| Llama 4 Maverick (Meta) | $0.31 (48% cheaper) | $0.85 (61% cheaper) | 1M | 80.5% (-3.8%) | 90.2% | 78 |
| MiniMax M2.5 (MiniMax) | $0.30 (50% cheaper) | $1.20 (45% cheaper) | 200K | 82% (-2.3%) | 90% | 78 |
| DeepSeek R1 (DeepSeek) | $0.55 (8% cheaper) | $2.19 (same price) | 128K | 84% (-0.3%) | 92% | 71 |
| Gemini 2.5 Pro (Google) | $1.25 (108% more) | $10.00 (355% more) | 1M | 87.5% (+3.2%) | 93.5% | 70 |
| Mistral Large 3 (Mistral) | $2.00 (233% more) | $5.00 (127% more) | 128K | 83% (-1.3%) | 91% | 69 |
| Claude Sonnet 4.6 (Anthropic) | $3.00 (400% more) | $15.00 (582% more) | 200K | 86% (+1.7%) | 94% | 67 |
| Claude Sonnet 4.5 (Anthropic) | $3.00 (400% more) | $15.00 (582% more) | 200K | 84.5% (+0.2%) | 93% | 67 |
| o4-mini (OpenAI) | $1.10 (83% more) | $4.40 (100% more) | 200K | 85% (+0.7%) | 93.5% | 62 |
| Gemini 3 Flash (Google) | $0.50 (17% cheaper) | $3.00 (36% more) | 1M | 78% (-6.3%) | 90% | 62 |
| Gemini 2.5 Flash (Google) | $0.15 (75% cheaper) | $0.60 (73% cheaper) | 1M | 76% (-8.3%) | 89.5% | 62 |
| Llama 4 Scout (Meta) | $0.18 (70% cheaper) | $0.63 (71% cheaper) | 10M | 74.2% (-10.1%) | 86% | 61 |
| DeepSeek V3 (DeepSeek) | $0.14 (77% cheaper) | $0.28 (87% cheaper) | 164K | 78% (-6.3%) | 89% | 61 |
| GPT-5.3 Codex (OpenAI) | $2.00 (233% more) | $16.00 (627% more) | 200K | 90% (+5.7%) | 96.5% | 59 |
| GPT-5.2 Codex (OpenAI) | $1.75 (192% more) | $14.00 (536% more) | 200K | 89% (+4.7%) | 95.5% | 59 |
| GPT-5 (OpenAI) | $1.25 (108% more) | $10.00 (355% more) | 128K | 88.5% (+4.2%) | 95% | 55 |
| Gemini 3.1 Pro (Google) | $2.00 (233% more) | $12.00 (445% more) | 1M | 91% (+6.7%) | 95% | 55 |
| Gemini 3 Pro (Google) | $2.00 (233% more) | $12.00 (445% more) | 1M | 89.8% (+5.5%) | 94% | 55 |
| GLM-5 (Zhipu AI) | $1.00 (67% more) | $3.20 (45% more) | 200K | 70.4% (-13.9%) | 91% | 54 |
| GPT-4o (OpenAI) | $2.50 (317% more) | $10.00 (355% more) | 128K | 80.5% (-3.8%) | 91% | 50 |
| Claude Opus 4.6 (Anthropic) | $5.00 (733% more) | $25.00 (1036% more) | 200K | 89.5% (+5.2%) | 95% | 49 |
| Claude Haiku 4.5 (Anthropic) | $0.80 (33% more) | $4.00 (82% more) | 200K | 69.4% (-14.9%) | 88.1% | 48 |
| GPT-4o Mini (OpenAI) | $0.15 (75% cheaper) | $0.60 (73% cheaper) | 128K | 68% (-16.3%) | 87.2% | 48 |
| Grok 4 (xAI) | $3.00 (400% more) | $15.00 (582% more) | 128K | 86% (+1.7%) | 93% | 48 |
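The "cheaper"/"more" percentages in the table are plain relative differences against GLM-4.7's rates ($0.60 in / $2.20 out per 1M tokens). A minimal sketch that reproduces a couple of the table's figures:

```python
# Relative price difference vs the GLM-4.7 baseline rates.
GLM_IN, GLM_OUT = 0.60, 2.20  # USD per 1M tokens

def delta(price: float, baseline: float) -> str:
    """Format a price as 'N% cheaper' or 'N% more' relative to the baseline."""
    pct = round((price - baseline) / baseline * 100)
    return f"{-pct}% cheaper" if pct < 0 else f"{pct}% more"

print(delta(0.40, GLM_IN))    # Mistral Medium 3 input  -> 33% cheaper
print(delta(15.00, GLM_OUT))  # Claude Sonnet 4.6 output -> 582% more
```

Note that rounding the percentage can make two close prices show the same delta, which is why DeepSeek R1's $2.19 output rounds to roughly the same price as GLM-4.7's $2.20.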