
Commit 9c71539

Merge pull request #4 from codinit-dev/mintlify/general-suggestions-56058
Consolidate navigation and improve discoverability
2 parents 5ee049a + 53c01e4, commit 9c71539

3 files changed: 71 additions & 92 deletions


docs.json

Lines changed: 11 additions & 7 deletions

```diff
@@ -66,6 +66,14 @@
         "features/development/webcontainer",
         "features/development/workbench"
       ]
+    },
+    {
+      "group": "AI Features",
+      "expanded": false,
+      "pages": [
+        "essentials/ai-chat-commands",
+        "essentials/project-templates"
+      ]
     }
   ]
 },
@@ -136,18 +144,14 @@
   "expanded": false,
   "pages": ["integrations/vercel", "integrations/netlify", "integrations/cloudflare"]
 },
-{
-  "group": "Local Providers",
-  "expanded": false,
-  "pages": ["providers/lmstudio", "providers/ollama"]
-},
 {
   "group": "Running Models Locally",
   "icon": "cpu",
   "expanded": false,
   "pages": [
-    "running-models-locally/lm-studio",
-    "running-models-locally/local-model-setup"
+    "running-models-locally/local-model-setup",
+    "providers/lmstudio",
+    "providers/ollama"
   ]
 }
]
```
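After this change, the former "Local Providers" group is gone and both provider pages live under the single "Running Models Locally" group in docs.json, which now reads:

```json
{
  "group": "Running Models Locally",
  "icon": "cpu",
  "expanded": false,
  "pages": [
    "running-models-locally/local-model-setup",
    "providers/lmstudio",
    "providers/ollama"
  ]
}
```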

providers/lmstudio.mdx

Lines changed: 60 additions & 6 deletions

```diff
@@ -63,14 +63,52 @@ LM Studio downloads and runs AI models locally using your computer's resources.
 ## Setup Instructions
 
 <Steps>
-  <Step title="Download LM Studio">Visit [LM Studio website](https://lmstudio.ai/) and download the application</Step>
-  <Step title="Install and Launch">Install LM Studio and launch the application</Step>
-  <Step title="Download Models">Browse the model library and download models you want to use</Step>
-  <Step title="Start Local Server">Click "Start Server" in LM Studio to begin the local API server</Step>
-  <Step title="Configure in Codinit">Set the server URL (usually http://localhost:1234) in Codinit settings</Step>
-  <Step title="Test Connection">Verify the connection and start using local AI models</Step>
+  <Step title="Download LM Studio">
+    Visit [lmstudio.ai](https://lmstudio.ai) and download the application for your operating system.
+
+    ![LM Studio download page](/assets/images/lmstudio.webp)
+  </Step>
+  <Step title="Install and Launch">
+    Install LM Studio and launch the application. You'll see four tabs on the left:
+    - **Chat**: Interactive chat interface
+    - **Developer**: Where you will start the server
+    - **My Models**: Your downloaded models storage
+    - **Discover**: Browse and add new models
+  </Step>
+  <Step title="Download a Model">
+    Navigate to the "Discover" tab, browse available models, and download your preferred model. Wait for the download to complete.
+
+    **Recommended**: Use **Qwen3 Coder 30B A3B Instruct** for the best experience with CodinIT. This model delivers strong coding performance and reliable tool use.
+  </Step>
+  <Step title="Start the Server">
+    Navigate to the "Developer" tab and toggle the server switch to "Running". The server will run at `http://localhost:51732`.
+
+    ![Starting the LM Studio server](/assets/images/lmstudio.webp)
+  </Step>
+  <Step title="Configure Model Settings">
+    After loading your model in the Developer tab, configure these critical settings:
+    - **Context Length**: Set to 262,144 (the model's maximum)
+    - **KV Cache Quantization**: Leave unchecked (critical for consistent performance)
+    - **Flash Attention**: Enable if available (improves performance)
+  </Step>
+  <Step title="Configure in CodinIT">
+    Set the server URL in CodinIT settings and verify the connection to start using local AI models.
+  </Step>
 </Steps>
 
+### Quantization Guide
+
+Choose quantization based on your available RAM:
+
+- **32GB RAM**: Use 4-bit quantization (~17GB download)
+- **64GB RAM**: Use 8-bit quantization (~32GB download) for better quality
+- **128GB+ RAM**: Consider full precision or larger models
+
+### Model Format
+
+- **Mac (Apple Silicon)**: Use MLX format for optimized performance
+- **Windows/Linux**: Use GGUF format
+
 ## Key Features
 
 <BadgeGroup>
```
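The RAM-to-download-size pairings in the new Quantization Guide follow a standard rule of thumb: weight size is roughly parameter count times bits per weight, divided by 8. This sketch is an approximation for intuition only; real GGUF/MLX files add overhead, which is why the guide's figures run a little higher.

```python
def approx_weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough size of model weights in GB: parameters * bits per weight / 8 bits per byte."""
    return params_billions * bits_per_weight / 8

# A 30B-parameter model (e.g. Qwen3 Coder 30B) at the guide's quantization levels:
print(approx_weights_gb(30, 4))   # 4-bit -> 15.0 GB (guide lists ~17GB with overhead)
print(approx_weights_gb(30, 8))   # 8-bit -> 30.0 GB (guide lists ~32GB with overhead)
```

The same arithmetic explains the RAM tiers: you need the quantized weights plus headroom for the KV cache and the OS, so a ~17GB download fits comfortably in 32GB of RAM but not in 16GB.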
```diff
@@ -209,3 +247,19 @@ LM Studio downloads and runs AI models locally using your computer's resources.
 <Callout type="warning">
   **Resource Intensive**: Large models require significant RAM and may run slowly on lower-end hardware.
 </Callout>
+
+## Troubleshooting
+
+If CodinIT can't connect to LM Studio:
+
+1. Verify LM Studio server is running (check Developer tab)
+2. Ensure a model is loaded
+3. Check your system meets hardware requirements
+4. Confirm the server URL matches in CodinIT settings
+
+## Important Notes
+
+- Start LM Studio before using with CodinIT
+- Keep LM Studio running in background
+- First model download may take several minutes depending on size
+- Models are stored locally after download
```
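The new Troubleshooting steps (server running, model loaded, URL correct) can be checked from a script. LM Studio's local server speaks an OpenAI-compatible REST API, so a `GET` to `/v1/models` covers the first two checks; this is a sketch using only the Python standard library, and it assumes the port shown in the doc (`51732`) — adjust to whatever your Developer tab reports.

```python
import json
import urllib.request
import urllib.error

def list_local_models(base_url: str, timeout: float = 3.0):
    """Return model IDs served by an LM Studio server, or None if unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/v1/models", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return None  # server not started, wrong port, or connection refused

models = list_local_models("http://localhost:51732", timeout=1.0)
if models is None:
    print("Can't reach LM Studio -- is the server toggled to 'Running'?")  # step 1
elif not models:
    print("Server is up but no model is loaded.")                          # step 2
else:
    print("Available models:", models)
```

If this returns model IDs but CodinIT still fails to connect, the mismatch is almost certainly the server URL configured in CodinIT settings (step 4).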

running-models-locally/lm-studio.mdx

Lines changed: 0 additions & 79 deletions
This file was deleted.
