---
title: "xAI (Grok)"
description: "Configure xAI's Grok models with large context windows and reasoning capabilities."
---

**Website:** [https://x.ai/](https://x.ai/)

## Getting an API Key

1. Go to the [xAI Console](https://console.x.ai/) and sign in
2. Navigate to the API Keys section
3. Create a new API key and give it a descriptive name (e.g., "CodinIT")
4. Copy the key immediately - you won't be able to see it again
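
Once you have a key, you can sanity-check it outside CodinIT before configuring the extension. The sketch below assumes xAI exposes an OpenAI-compatible chat endpoint at `https://api.x.ai/v1` (verify the current base URL in the xAI docs) and reads the key from a hypothetical `XAI_API_KEY` environment variable:

```python
import json
import os
import urllib.request

# Assumed endpoint: xAI's OpenAI-compatible chat API. Check the xAI
# documentation for the current base URL and paths.
XAI_BASE_URL = "https://api.x.ai/v1"

def build_chat_request(api_key: str, model: str = "grok-3-beta") -> urllib.request.Request:
    """Build a minimal chat-completion request for a quick key check."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    }
    return urllib.request.Request(
        f"{XAI_BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    key = os.environ.get("XAI_API_KEY")
    if key:  # only hit the network when a key is actually configured
        with urllib.request.urlopen(build_chat_request(key)) as resp:
            print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

A `200` response with a `choices` array confirms the key works; a `401` means the key was copied incorrectly or revoked.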

## Configuration

1. Click the settings icon (⚙️) in CodinIT
2. Select "xAI" as the API Provider
3. Paste your xAI API key
4. Choose your Grok model

## Supported Models

**Grok-3 Series (131K context):**
- `grok-3-beta` (Default)
- `grok-3-fast-beta`
- `grok-3-mini-beta` (Reasoning enabled)
- `grok-3-mini-fast-beta` (Reasoning enabled)

**Grok-2 Series (131K context):**
- `grok-2-latest`
- `grok-2`
- `grok-2-1212`

**Vision Models:**
- `grok-2-vision-latest` (32K context)
- `grok-2-vision` (32K context)
- `grok-vision-beta` (8K context)
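
The context windows above determine how much code you can pack into a prompt. A small sketch of a pre-flight check (model sizes taken from the list above; treating "131K" as 131,072 tokens is an assumption, and the ~4 characters/token estimate is a rough heuristic, not a real tokenizer):

```python
# Context windows from the model list above, in tokens.
# Assumption: "131K" = 131,072, "32K" = 32,768, "8K" = 8,192.
CONTEXT_WINDOWS = {
    "grok-3-beta": 131_072,
    "grok-3-fast-beta": 131_072,
    "grok-3-mini-beta": 131_072,
    "grok-3-mini-fast-beta": 131_072,
    "grok-2-latest": 131_072,
    "grok-2": 131_072,
    "grok-2-1212": 131_072,
    "grok-2-vision-latest": 32_768,
    "grok-2-vision": 32_768,
    "grok-vision-beta": 8_192,
}

def fits_context(model: str, prompt: str, reserved_output: int = 4_096) -> bool:
    """Roughly check that a prompt leaves room for the model's response.

    Uses the common ~4 characters/token heuristic; for exact counts,
    use a real tokenizer for the model in question.
    """
    estimated_tokens = len(prompt) // 4
    return estimated_tokens + reserved_output <= CONTEXT_WINDOWS[model]
```

This matters most for the vision models: a prompt that fits comfortably in `grok-3-beta`'s 131K window can overflow `grok-vision-beta`'s 8K window.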

## Reasoning Capabilities

Available on `grok-3-mini-beta` and `grok-3-mini-fast-beta`:

- **Step-by-step problem solving:** The model thinks through a problem methodically before answering
- **Reasoning effort control:** Set `reasoning_effort` to `low` for quick responses or `high` for complex problems
- **Reasoning trace access:** The model's thinking is returned in the `reasoning_content` field of the response
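
The three bullets above can be sketched as plain request/response handling. This assumes the `reasoning_effort` request field and `reasoning_content` response field named in this section; the helper names and the sample response shape are illustrative, not part of any SDK:

```python
import json

def build_reasoning_payload(prompt: str, effort: str = "high") -> dict:
    """Chat-completion body for a reasoning-enabled Grok model.

    Per the docs above, only the grok-3-mini models accept
    `reasoning_effort`: "low" for quick answers, "high" for harder
    problems where latency matters less.
    """
    if effort not in ("low", "high"):
        raise ValueError("reasoning_effort must be 'low' or 'high'")
    return {
        "model": "grok-3-mini-beta",
        "reasoning_effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }

def extract_reasoning(response: dict) -> tuple[str, str]:
    """Split a chat-completion response into (reasoning trace, answer).

    Assumes the trace arrives on the message's `reasoning_content`
    field alongside the usual `content`.
    """
    message = response["choices"][0]["message"]
    return message.get("reasoning_content", ""), message["content"]
```

For a quick math query you would send `build_reasoning_payload("What is 17 * 24?", effort="low")` and read both the trace and the final answer back with `extract_reasoning`.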

## Notes

- **Context window:** Up to 131K tokens for most models
- **Vision support:** Available on select models (`grok-2-vision-latest`, `grok-2-vision`, `grok-vision-beta`)
- **Pricing:** Varies by model; see the xAI documentation for current rates