Context Window Comparison
How much text can each AI model process at once? Context windows compared visually with real-world equivalents.
Quick Reference
- 1K tokens ≈ 750 words
- 1 page ≈ 400 tokens
- 1 book ≈ 100K tokens
- 1M tokens ≈ 10 books
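As a rough sketch, these rules of thumb can be turned into a small converter. The constants below are this page's own approximations (the table's figures work out to about 1.3 tokens per word, ~300 words per page, and ~75K words per book), not exact tokenizer counts:

```python
# Rough token-size conversions using this page's rules of thumb.
# Real token counts depend on the tokenizer, language, and content type.

WORDS_PER_TOKEN = 1 / 1.3  # ~0.77 words/token (≈ "1K tokens ≈ 750 words")
WORDS_PER_PAGE = 300       # ≈ "1 page ≈ 400 tokens"
WORDS_PER_BOOK = 75_000    # ≈ "1 book ≈ 100K tokens"

def describe(tokens: int) -> dict:
    """Translate a context window size into real-world equivalents."""
    words = tokens * WORDS_PER_TOKEN
    return {
        "words": round(words),
        "pages": round(words / WORDS_PER_PAGE),
        "books": round(words / WORDS_PER_BOOK, 1),
    }

print(describe(200_000))    # a Claude/GPT-class 200K window
print(describe(1_000_000))  # a Gemini-class 1M window
```

Running this reproduces the table's rows: a 200K window comes out to ~153,846 words, 513 pages, 2.1 books.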
Visual Comparison
[Bar chart: context window sizes from Llama 4 Scout (10M tokens) down to the 128K-class models, each bar annotated with its word, page, and book equivalents. The Detailed Breakdown table below lists the same figures.]
Detailed Breakdown
| Model | Context | Max Output | ~Words | ~Pages | ~Books | Input $/1M |
|---|---|---|---|---|---|---|
| Llama 4 Scout | 10M | 32K | 7,692,308 | 25,641 | 102.6 | $0.18 |
| Gemini 3.1 Pro | 1M | 64K | 769,231 | 2,564 | 10.3 | $2.00 |
| Gemini 3 Pro | 1M | 66K | 769,231 | 2,564 | 10.3 | $2.00 |
| Gemini 3 Flash | 1M | 66K | 769,231 | 2,564 | 10.3 | $0.50 |
| Gemini 2.5 Pro | 1M | 66K | 769,231 | 2,564 | 10.3 | $1.25 |
| Gemini 2.5 Flash | 1M | 66K | 769,231 | 2,564 | 10.3 | $0.15 |
| Llama 4 Maverick | 1M | 32K | 769,231 | 2,564 | 10.3 | $0.31 |
| Claude Opus 4.6 | 200K | 32K | 153,846 | 513 | 2.1 | $5.00 |
| Claude Sonnet 4.6 | 200K | 16K | 153,846 | 513 | 2.1 | $3.00 |
| Claude Sonnet 4.5 | 200K | 16K | 153,846 | 513 | 2.1 | $3.00 |
| Claude Haiku 4.5 | 200K | 8K | 153,846 | 513 | 2.1 | $0.80 |
| GPT-5.3 Codex | 200K | 66K | 153,846 | 513 | 2.1 | $2.00 |
| GPT-5.2 Codex | 200K | 66K | 153,846 | 513 | 2.1 | $1.75 |
| o4-mini | 200K | 100K | 153,846 | 513 | 2.1 | $1.10 |
| o3 | 200K | 100K | 153,846 | 513 | 2.1 | $0.40 |
| GLM-5 | 200K | 128K | 153,846 | 513 | 2.1 | $1.00 |
| GLM-4.7 | 200K | 128K | 153,846 | 513 | 2.1 | $0.60 |
| MiniMax M2.5 | 200K | 128K | 153,846 | 513 | 2.1 | $0.30 |
| DeepSeek V3 | 164K | 16K | 126,154 | 421 | 1.7 | $0.14 |
| GPT-5 | 128K | 16K | 98,462 | 328 | 1.3 | $1.25 |
| GPT-4o | 128K | 16K | 98,462 | 328 | 1.3 | $2.50 |
| GPT-4o Mini | 128K | 16K | 98,462 | 328 | 1.3 | $0.15 |
| DeepSeek R1 | 128K | 64K | 98,462 | 328 | 1.3 | $0.55 |
| Grok 4 | 128K | 16K | 98,462 | 328 | 1.3 | $3.00 |
| Mistral Large 3 | 128K | 16K | 98,462 | 328 | 1.3 | $2.00 |
| Mistral Medium 3 | 128K | 16K | 98,462 | 328 | 1.3 | $0.40 |
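For budgeting, the input cost of filling a context window is just tokens × price per million. A quick sketch using the table's listed prices (provider pricing changes often, so treat the figures as illustrative):

```python
# Input cost to fill a context window, given a price in $ per 1M tokens.
# Prices are the table's listed figures and may be out of date.

def fill_cost(context_tokens: int, price_per_million: float) -> float:
    """Dollar cost of sending `context_tokens` input tokens."""
    return context_tokens / 1_000_000 * price_per_million

# e.g. a full 200K-token prompt at $3.00/1M input tokens:
print(f"${fill_cost(200_000, 3.00):.2f}")    # → $0.60

# versus a full 1M-token prompt at $2.00/1M:
print(f"${fill_cost(1_000_000, 2.00):.2f}")  # → $2.00
```

Note this covers input only; output tokens are typically billed at a higher, separate rate.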
Word counts are approximate, computed at 1.3 tokens per word (≈0.77 words per token) for English text. Actual token counts vary by language, formatting, and content type; code typically has more tokens per word than prose.