
Gemini 3.1 Pro vs GLM-4.7

A detailed comparison of Gemini 3.1 Pro (Google) and GLM-4.7 (Zhipu AI) across pricing, performance, and features.

Pricing Comparison

| Metric             | Gemini 3.1 Pro | GLM-4.7 | Difference |
|--------------------|----------------|---------|------------|
| Input / 1M tokens  | $2.00          | $0.60   | -70%       |
| Output / 1M tokens | $12.00         | $2.20   | -82%       |
| Context window     | 1M             | 200K    |            |
| Max output         | 64K            | 128K    |            |
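To see what the per-token rates above mean in practice, here is a minimal cost estimate for a hypothetical monthly workload (50M input tokens, 5M output tokens; the workload numbers are illustrative, only the rates come from the table):

```python
# USD per 1M tokens: (input, output), taken from the pricing table above.
RATES = {
    "Gemini 3.1 Pro": (2.00, 12.00),
    "GLM-4.7": (0.60, 2.20),
}

def monthly_cost(model, input_millions=50, output_millions=5):
    """Cost in USD for a workload given in millions of tokens."""
    in_rate, out_rate = RATES[model]
    return input_millions * in_rate + output_millions * out_rate

for model in RATES:
    print(f"{model}: ${monthly_cost(model):,.2f}")
# Gemini 3.1 Pro: $160.00
# GLM-4.7: $41.00
```

For this mix, GLM-4.7 comes out roughly 4x cheaper; the gap widens for output-heavy workloads, since the output-price difference (-82%) is larger than the input one.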

Benchmark Comparison

| Benchmark | Gemini 3.1 Pro | GLM-4.7 |
|-----------|----------------|---------|
| MMLU-Pro  | 91%            | 84.3%   |
| HumanEval | 95%            |         |
| GPQA      | 94.3%          | 85.7%   |

Capabilities

Capabilities compared:

  • audio
  • code
  • reasoning
  • text
  • tool-use
  • vision

Gemini 3.1 Pro Strengths

  • #1 on 12 of 18 tracked benchmarks
  • 94.3% GPQA Diamond — highest of any model
  • Same price as Gemini 3 Pro (free upgrade)
  • 1M context with configurable thinking levels

Gemini 3.1 Pro Weaknesses

  • Still in preview
  • Context-tiered pricing ($4/$18 above 200K)
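The context-tiered pricing noted above can be sketched as follows. This is a rough model only: it assumes (as with earlier Gemini Pro tiers) that the higher $4/$18 rate applies to the entire request once the prompt exceeds 200K tokens; check Google's current pricing page for the exact rules.

```python
TIER_THRESHOLD = 200_000  # prompt tokens

def gemini_request_cost(prompt_tokens, output_tokens):
    """Estimated USD cost of one Gemini 3.1 Pro request under tiered pricing."""
    if prompt_tokens <= TIER_THRESHOLD:
        in_rate, out_rate = 2.00, 12.00   # <=200K tier, USD per 1M tokens
    else:
        in_rate, out_rate = 4.00, 18.00   # >200K tier, USD per 1M tokens
    return (prompt_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A 300K-token prompt costs well over double a 150K-token one,
# because the whole request moves to the higher tier:
print(gemini_request_cost(150_000, 2_000))  # 0.324
print(gemini_request_cost(300_000, 2_000))  # 1.236
```

The practical takeaway: long-context workloads that regularly cross the 200K boundary pay the premium rate on every token, not just the overflow.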

GLM-4.7 Strengths

  • Excellent value — strong benchmarks at $0.60/$2.20
  • Open-weight (MIT license)
  • Top scores on AIME 25 and BrowseComp

GLM-4.7 Weaknesses

  • No tool-use support yet
  • 358B parameters — still heavy for self-hosting
  • Smaller ecosystem than OpenAI/Anthropic

Quick Verdict

Best value: GLM-4.7 is the more affordable option at $0.60/$2.20 per 1M tokens.

Higher benchmarks: Gemini 3.1 Pro scores higher on every shared benchmark, averaging 93.4% across those listed.

Larger context: Gemini 3.1 Pro supports 1M tokens.

Choose GLM-4.7 if cost matters most. Choose Gemini 3.1 Pro if you need the best possible quality for complex tasks.
