
Commit 5327d08

Update providers/lmstudio.mdx
Co-Authored-By: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
1 parent 5ee049a commit 5327d08

1 file changed

Lines changed: 44 additions & 6 deletions

@@ -63,14 +63,52 @@ LM Studio downloads and runs AI models locally using your computer's resources.
 ## Setup Instructions
 
 <Steps>
-<Step title="Download LM Studio">Visit [LM Studio website](https://lmstudio.ai/) and download the application</Step>
-<Step title="Install and Launch">Install LM Studio and launch the application</Step>
-<Step title="Download Models">Browse the model library and download models you want to use</Step>
-<Step title="Start Local Server">Click "Start Server" in LM Studio to begin the local API server</Step>
-<Step title="Configure in Codinit">Set the server URL (usually http://localhost:1234) in Codinit settings</Step>
-<Step title="Test Connection">Verify the connection and start using local AI models</Step>
+<Step title="Download LM Studio">
+Visit [lmstudio.ai](https://lmstudio.ai) and download the application for your operating system.
+
+![LM Studio download page](/assets/images/lmstudio.webp)
+</Step>
+<Step title="Install and Launch">
+Install LM Studio and launch the application. You'll see four tabs on the left:
+- **Chat**: Interactive chat interface
+- **Developer**: Where you will start the server
+- **My Models**: Storage for your downloaded models
+- **Discover**: Browse and add new models
+</Step>
+<Step title="Download a Model">
+Navigate to the "Discover" tab, browse available models, and download your preferred model. Wait for the download to complete.
+
+**Recommended**: Use **Qwen3 Coder 30B A3B Instruct** for the best experience with CodinIT. This model delivers strong coding performance and reliable tool use.
+</Step>
+<Step title="Start the Server">
+Navigate to the "Developer" tab and toggle the server switch to "Running". The server will run at `http://localhost:51732`.
+
+![Starting the LM Studio server](/assets/images/lmstudio.webp)
+</Step>
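The toggle in this step starts LM Studio's local REST server, which speaks an OpenAI-compatible API under `/v1`. A minimal connectivity check, as a sketch — the base URL assumes the port shown above, though default installs often use 1234, as the earlier revision of this page noted:

```python
import json
import urllib.request

# Base URL from the step above; match whatever address the Developer tab
# actually shows (default installs often use port 1234 instead).
BASE_URL = "http://localhost:51732/v1"

def models_url(base_url: str = BASE_URL) -> str:
    """URL of the OpenAI-compatible model-listing endpoint."""
    return f"{base_url}/models"

def list_models(base_url: str = BASE_URL) -> list[str]:
    """Return the IDs of the models the running server reports."""
    with urllib.request.urlopen(models_url(base_url), timeout=10) as resp:
        return [m["id"] for m in json.load(resp).get("data", [])]

# Usage (requires the server toggled to "Running"):
# print(list_models())
```

If the call fails with a connection error, the server is not running or the port differs from the one configured here.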
+<Step title="Configure Model Settings">
+After loading your model in the Developer tab, configure these critical settings:
+- **Context Length**: Set to 262,144 (the model's maximum)
+- **KV Cache Quantization**: Leave unchecked (critical for consistent performance)
+- **Flash Attention**: Enable if available (improves performance)
+</Step>
+<Step title="Configure in CodinIT">
+Set the server URL in CodinIT settings and verify the connection to start using local AI models.
+</Step>
 </Steps>
 
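Beyond CodinIT's built-in connection check, the setup can also be verified from a script against the server's OpenAI-compatible `/v1/chat/completions` endpoint. A sketch, with the model name left as a placeholder:

```python
import json
import urllib.request

def build_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a chat completion request for the local server."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

def ask(base_url: str, model: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    req = build_request(base_url, model, prompt)
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Usage (needs the server from the "Start the Server" step running):
# print(ask("http://localhost:51732/v1", "<your-loaded-model>", "Say hello"))
```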
+
+### Quantization Guide
+
+Choose quantization based on your available RAM:
+
+- **32GB RAM**: Use 4-bit quantization (~17GB download)
+- **64GB RAM**: Use 8-bit quantization (~32GB download) for better quality
+- **128GB+ RAM**: Consider full precision or larger models
+
+### Model Format
+
+- **Mac (Apple Silicon)**: Use MLX format for optimized performance
+- **Windows/Linux**: Use GGUF format
+
 ## Key Features
 
 <BadgeGroup>