
Context Window Comparison

How much text can each AI model process at once? Context windows compared visually with real-world equivalents.

Quick Reference

- 1K tokens ≈ 750 words
- 1 page ≈ 400 tokens
- 1 book ≈ 100K tokens
- 1M tokens ≈ 10 books
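The rules of thumb above can be combined into a quick converter. A minimal sketch in Python, using the approximate ratios from this page (they are rough averages for English prose, not exact values):

```python
# Rules of thumb from this page (approximations, not exact values):
WORDS_PER_TOKEN = 0.75      # 1K tokens ≈ 750 words
TOKENS_PER_PAGE = 400       # 1 page ≈ 400 tokens
TOKENS_PER_BOOK = 100_000   # 1 book ≈ 100K tokens

def context_equivalents(tokens: int) -> dict:
    """Convert a token budget into rough real-world equivalents."""
    return {
        "words": round(tokens * WORDS_PER_TOKEN),
        "pages": round(tokens / TOKENS_PER_PAGE),
        "books": round(tokens / TOKENS_PER_BOOK, 1),
    }

print(context_equivalents(200_000))    # a 200K-context model
print(context_equivalents(1_000_000))  # a 1M-context model
```

For a 200K window this yields roughly 150,000 words, 500 pages, and 2.0 books; the table below uses a slightly different words-per-token ratio, which is why its figures differ a little.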

Visual Comparison

[Bar chart: models ranked by context window size, from ~10M tokens (~7.7M words, 25,641 pages, 102.6 books) down to 128K tokens (~98K words, 328 pages, 1.3 books). Per-model figures appear in the table below.]

Detailed Breakdown

| Model | Context | Max Output | ~Words | ~Pages | ~Books | Input $/1M |
|---|---|---|---|---|---|---|
| Llama 4 Scout | 10M | 32K | 7,692,308 | 25,641 | 102.6 | $0.18 |
| Gemini 3.1 Pro | 1M | 64K | 769,231 | 2,564 | 10.3 | $2.00 |
| Gemini 3 Pro | 1M | 66K | 769,231 | 2,564 | 10.3 | $2.00 |
| Gemini 3 Flash | 1M | 66K | 769,231 | 2,564 | 10.3 | $0.50 |
| Gemini 2.5 Pro | 1M | 66K | 769,231 | 2,564 | 10.3 | $1.25 |
| Gemini 2.5 Flash | 1M | 66K | 769,231 | 2,564 | 10.3 | $0.15 |
| Llama 4 Maverick | 1M | 32K | 769,231 | 2,564 | 10.3 | $0.31 |
| Claude Opus 4.6 | 200K | 32K | 153,846 | 513 | 2.1 | $5.00 |
| Claude Sonnet 4.6 | 200K | 16K | 153,846 | 513 | 2.1 | $3.00 |
| Claude Sonnet 4.5 | 200K | 16K | 153,846 | 513 | 2.1 | $3.00 |
| Claude Haiku 4.5 | 200K | 8K | 153,846 | 513 | 2.1 | $0.80 |
| GPT-5.3 Codex | 200K | 66K | 153,846 | 513 | 2.1 | $2.00 |
| GPT-5.2 Codex | 200K | 66K | 153,846 | 513 | 2.1 | $1.75 |
| o4-mini | 200K | 100K | 153,846 | 513 | 2.1 | $1.10 |
| o3 | 200K | 100K | 153,846 | 513 | 2.1 | $0.40 |
| GLM-5 | 200K | 128K | 153,846 | 513 | 2.1 | $1.00 |
| GLM-4.7 | 200K | 128K | 153,846 | 513 | 2.1 | $0.60 |
| MiniMax M2.5 | 200K | 128K | 153,846 | 513 | 2.1 | $0.30 |
| DeepSeek V3 | 164K | 16K | 126,154 | 421 | 1.7 | $0.14 |
| GPT-5 | 128K | 16K | 98,462 | 328 | 1.3 | $1.25 |
| GPT-4o | 128K | 16K | 98,462 | 328 | 1.3 | $2.50 |
| GPT-4o Mini | 128K | 16K | 98,462 | 328 | 1.3 | $0.15 |
| DeepSeek R1 | 128K | 64K | 98,462 | 328 | 1.3 | $0.55 |
| Grok 4 | 128K | 16K | 98,462 | 328 | 1.3 | $3.00 |
| Mistral Large 3 | 128K | 16K | 98,462 | 328 | 1.3 | $2.00 |
| Mistral Medium 3 | 128K | 16K | 98,462 | 328 | 1.3 | $0.40 |
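One practical use of the pricing column is estimating what it would cost to fill a model's entire context window with input. A minimal sketch, using context sizes and per-million-token input prices from the table above:

```python
def cost_to_fill(context_tokens: int, price_per_million: float) -> float:
    """Input cost in USD of sending a full context window's worth of tokens."""
    return context_tokens / 1_000_000 * price_per_million

# Figures taken from the table above:
print(f"${cost_to_fill(200_000, 5.00):.2f}")     # Claude Opus 4.6 → $1.00
print(f"${cost_to_fill(10_000_000, 0.18):.2f}")  # Llama 4 Scout → $1.80
print(f"${cost_to_fill(1_000_000, 0.15):.2f}")   # Gemini 2.5 Flash → $0.15
```

Note that this counts input tokens only; output tokens are typically billed at a separate, higher rate, and some providers discount cached input.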

Word counts are approximate (1 token ≈ 0.75 words for English text). Actual token counts vary by language, formatting, and content type. Code typically has more tokens per word than prose.