@@ -15,34 +15,34 @@ A context window is how much text the AI can look at and remember at one time. T
 **Important**: Bigger context windows let the AI see more of your project, but they cost more money and take longer.
 </Tip>
 
-### Quick reference
+### Size Guide
 
-| Size | Tokens | Approximate Words | Use Case |
+| Size | Tokens | About How Many Words | Good For |
 | --------------- | ------ | ----------------- | ------------------------- |
-| **Small** | 8K-32K | 6,000-24,000 | Single files, quick fixes |
-| **Medium** | 128K | ~96,000 | Most coding projects |
-| **Large** | 200K | ~150,000 | Complex codebases |
-| **Extra Large** | 400K+ | ~300,000+ | Entire applications |
-| **Massive** | 1M+ | ~750,000+ | Multi-project analysis |
+| **Small** | 8K-32K | 6,000-24,000 | One file, quick fixes |
+| **Medium** | 128K | ~96,000 | Most projects |
+| **Large** | 200K | ~150,000 | Big projects |
+| **Extra Large** | 400K+ | ~300,000+ | Whole apps |
+| **Huge** | 1M+ | ~750,000+ | Multiple projects |
 
-### Model context windows
+### Different AI Models
 
-| Model | Context Window | Effective Window \* | Notes |
+| AI Model | Context Window | Actually Works Well \* | Notes |
 | --------------------- | -------------- | ------------------ | ------------------------------ |
-| **Claude Sonnet 4.5** | 1M tokens | ~500K tokens | Best quality at high context |
-| **GPT-5** | 400K tokens | ~300K tokens | Three modes affect performance |
-| **Gemini 2.5 Pro** | 1M+ tokens | ~600K tokens | Excellent for documents |
-| **DeepSeek V3** | 128K tokens | ~100K tokens | Optimal for most tasks |
-| **Qwen3 Coder** | 256K tokens | ~200K tokens | Good balance |
+| **Claude Sonnet 4.5** | 1M tokens | ~500K tokens | Best for big projects |
+| **GPT-5** | 400K tokens | ~300K tokens | Has three different modes |
+| **Gemini 2.5 Pro** | 1M+ tokens | ~600K tokens | Great for reading documents |
+| **DeepSeek V3** | 128K tokens | ~100K tokens | Good for most things |
+| **Qwen3 Coder** | 256K tokens | ~200K tokens | Nice balance |
 
-\* Effective window is where the model maintains high quality. Beyond this point, the AI may start "forgetting" earlier parts of your conversation.
+\* The AI works best up to this point. After that, it might start forgetting earlier parts of your chat.
 
-### What counts toward context
+### What Uses Up the Context Window
 
-1. **Your current conversation** - All messages in the chat
-2. **File contents** - Any files you've shared or CodinIT has read
-3. **Tool outputs** - Results from executed commands
-4. **System prompts** - CodinIT's instructions (minimal impact)
+1. **Your messages** - Everything you and the AI say in the chat
+2. **Your files** - Files the AI reads from your project
+3. **Command results** - Output from commands that run
+4. **System instructions** - CodinIT's background instructions (uses very little)
 
 ### Optimization strategies
 
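The word estimates in the tables above follow the common rule of thumb that one token corresponds to roughly 0.75 English words (actual ratios vary by tokenizer and language). A minimal sketch of that conversion, using a hypothetical `approx_words` helper:

```python
# Rough token-to-word conversion behind the tables above.
# Assumes the common heuristic of ~0.75 English words per token;
# real tokenizers vary, so treat these as ballpark figures only.
WORDS_PER_TOKEN = 0.75

def approx_words(tokens: int) -> int:
    """Return the approximate English word count for a token budget."""
    return int(tokens * WORDS_PER_TOKEN)

print(approx_words(128_000))  # 96000 -- the "Medium" row
print(approx_words(200_000))  # 150000 -- the "Large" row
```

This is why a 128K-token window is listed as about 96,000 words: 128,000 × 0.75 = 96,000.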