
Commit 10fea4a

Update providers/ollama.mdx
Co-Authored-By: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
1 parent 5a87ac0 commit 10fea4a

1 file changed

Lines changed: 31 additions & 64 deletions

providers/ollama.mdx
@@ -1,79 +1,46 @@
 ---
 title: "Ollama"
-description: "Set up Ollama to run AI models locally with CodinIT for enhanced privacy, offline access, and complete control over your development."
+description: "Run AI models locally with Ollama for privacy and offline access."
 ---

-CodinIT supports running models locally using Ollama. This approach offers privacy, offline access, and potentially reduced costs. It requires some initial setup and a sufficiently powerful computer. Because of the present state of consumer hardware, it's not recommended to use Ollama with CodinIT as performance will likely be poor for average hardware configurations.
+Run models locally using Ollama for privacy, offline access, and control. Requires initial setup and sufficient hardware.

 **Website:** [https://ollama.com/](https://ollama.com/)

-### Setting up Ollama
+## Setup

-1. **Download and Install Ollama:**
-   Obtain the Ollama installer for your operating system from the [Ollama website](https://ollama.com/) and follow their installation guide. Ensure Ollama is running. You can typically start it with:
+1. **Install Ollama:** Download the installer from [ollama.com](https://ollama.com/) and install it
+2. **Start Ollama:** Run `ollama serve` in a terminal
+3. **Download a model:**
+   ```bash
+   ollama pull qwen2.5-coder:32b
+   ```
+4. **Configure the context window** (a scripted alternative is sketched just below):
+   ```bash
+   ollama run qwen2.5-coder:32b
+   /set parameter num_ctx 32768
+   /save your_custom_model_name
+   ```
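If you'd rather script step 4 than type into the interactive session, the same customization can typically be done with a Modelfile, which Ollama supports natively. A minimal sketch; the base model and `num_ctx` value simply mirror the example above:

```bash
# Sketch: set num_ctx via a Modelfile instead of /set + /save in the REPL.
# Base model and context size mirror the example above; adjust to your hardware.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
EOF

# Register the customized model under a new name.
ollama create your_custom_model_name -f Modelfile
```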

-```bash
-ollama serve
-```
+## Configuration in CodinIT

-2. **Download a Model:**
-   Ollama supports a wide variety of models. A list of available models can be found on the [Ollama model library](https://ollama.com/library). Some models recommended for coding tasks include:
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "ollama" as the API Provider
+3. Enter your saved model name
+4. (Optional) Set the base URL if not using the default `http://localhost:11434` (see the connectivity check below)
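Before wiring CodinIT up, it helps to confirm the server is actually reachable at that base URL. A quick check against Ollama's standard REST API; swap in your own host, port, and saved model name:

```bash
# Lists locally available models; any JSON response confirms Ollama is up.
curl http://localhost:11434/api/tags

# Optional end-to-end test using the model name saved during setup.
curl http://localhost:11434/api/generate -d '{
  "model": "your_custom_model_name",
  "prompt": "Say hello",
  "stream": false
}'
```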

-- `codellama:7b-code` (a good, smaller starting point)
-- `codellama:13b-code` (offers better quality, larger size)
-- `codellama:34b-code` (provides even higher quality, very large)
-- `qwen2.5-coder:32b`
-- `mistralai/Mistral-7B-Instruct-v0.1` (a solid general-purpose model)
-- `deepseek-coder:6.7b-base` (effective for coding)
-- `llama3:8b-instruct-q5_1` (suitable for general tasks)
+## Recommended Models

-To download a model, open your terminal and execute:
+- `qwen2.5-coder:32b` - Excellent for coding
+- `codellama:34b-code` - High quality, large size
+- `deepseek-coder:6.7b-base` - Effective for coding
+- `llama3:8b-instruct-q5_1` - General tasks

-```bash
-ollama pull <model_name>
-```
+See the [Ollama model library](https://ollama.com/library) for the full list.
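On modest hardware it can be worth starting with one of the smaller entries above before committing to a 32B download; for example:

```bash
# Smaller model: faster download and a much lower RAM/VRAM footprint.
ollama pull deepseek-coder:6.7b-base
```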

-For instance:
+## Notes

-```bash
-ollama pull qwen2.5-coder:32b
-```
-
-3. **Configure the Model's Context Window:**
-   By default, Ollama models often use a context window of 2048 tokens, which can be insufficient for many CodinIT requests. A minimum of 12,000 tokens is advisable for decent results, with 32,000 tokens being ideal. To adjust this, you'll modify the model's parameters and save it as a new version.
-
-   First, load the model (using `qwen2.5-coder:32b` as an example):
-
-   ```bash
-   ollama run qwen2.5-coder:32b
-   ```
-
-   Once the model is loaded within the Ollama interactive session, set the context size parameter:
-
-   ```
-   /set parameter num_ctx 32768
-   ```
-
-   Then, save this configured model with a new name:
-
-   ```
-   /save your_custom_model_name
-   ```
-
-   (Replace `your_custom_model_name` with a name of your choice.)
-
-4. **Configure CodinIT:**
-   - Open the CodinIT sidebar (usually indicated by the CodinIT icon).
-   - Click the settings gear icon (⚙️).
-   - Select "ollama" as the API Provider.
-   - Enter the Model name you saved in the previous step (e.g., `your_custom_model_name`).
-   - (Optional) Adjust the base URL if Ollama is running on a different machine or port. The default is `http://localhost:11434`.
-   - (Optional) Configure the Model context size in CodinIT's Advanced settings. This helps CodinIT manage its context window effectively with your customized Ollama model.
-
-### Tips and Notes
-
-- **Resource Demands:** Running large language models locally can be demanding on system resources. Ensure your computer meets the requirements for your chosen model.
-- **Model Choice:** Experiment with various models to discover which best fits your specific tasks and preferences.
-- **Offline Capability:** After downloading a model, you can use CodinIT with that model even without an internet connection.
-- **Token Usage Tracking:** CodinIT tracks token usage for models accessed via Ollama, allowing you to monitor consumption.
-- **Ollama's Own Documentation:** For more detailed information, consult the official [Ollama documentation](https://ollama.com/docs).
+- **Context window:** Minimum 12,000 tokens recommended, 32,000 ideal (see the check below)
+- **Resource demands:** Large models require significant system resources
+- **Offline capability:** Works without internet after model download
+- **Performance:** May be slow on average hardware
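To double-check that a saved model actually carries the larger context window, Ollama's own CLI can be used; the exact output format varies by Ollama version:

```bash
# Confirm the custom model was saved.
ollama list

# Inspect the model; the parameters section should show num_ctx 32768.
ollama show your_custom_model_name
```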
