Commit b1bce4b

Update providers/lmstudio.mdx
Co-Authored-By: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
1 parent 10fea4a commit b1bce4b

1 file changed

Lines changed: 45 additions & 237 deletions

providers/lmstudio.mdx

@@ -1,265 +1,73 @@
 ---
 title: LM Studio
-description: Run AI models locally with LM Studio's user-friendly interface for privacy, speed, and offline development capabilities.
+description: Run AI models locally with LM Studio's user-friendly interface for privacy and offline development.
 ---

-LM Studio provides a user-friendly way to run large language models locally on your computer, offering privacy, speed, and offline capabilities without requiring an internet connection.
+LM Studio provides a user-friendly way to run AI models locally with privacy, speed, and offline capabilities.

-## Overview
+**Website:** [https://lmstudio.ai](https://lmstudio.ai)

-LM Studio bridges the gap between powerful AI models and local computing, allowing you to run advanced AI models directly on your machine. It's perfect for users who want privacy, speed, and control over their AI interactions.
+## Setup

-<CardGroup cols={3}>
-<Card title="Local Execution" icon="computer">
-Run AI models directly on your computer
-</Card>
-<Card title="Privacy First" icon="shield">
-Keep conversations and data completely private
-</Card>
-<Card title="Offline Capable" icon="wifi-off">
-Work without internet connectivity
-</Card>
-</CardGroup>
+1. **Download:** Visit [lmstudio.ai](https://lmstudio.ai) and download for your OS
+2. **Install and launch:** Open LM Studio
+3. **Download a model:** Go to "Discover" tab and download a model
+   - **Recommended:** Qwen3 Coder 30B A3B Instruct for best CodinIT experience
+4. **Start server:** Go to "Developer" tab and toggle server to "Running" (runs at `http://localhost:51732`)
+5. **Configure model settings:**
+   - **Context Length:** Set to 262,144 (maximum)
+   - **KV Cache Quantization:** Leave unchecked (critical for performance)
+   - **Flash Attention:** Enable if available

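The setup steps added above end with the local server running at `http://localhost:51732`. Before wiring it into CodinIT, it can help to confirm the server answers on its OpenAI-compatible endpoints. The sketch below is illustrative only, not part of LM Studio or CodinIT; it assumes the port shown in these docs and a Node 18+ runtime with built-in `fetch` (adjust the address if the Developer tab reports something different):

```ts
// Quick reachability check for the LM Studio local server.
// Adjust BASE_URL if the Developer tab reports a different address.
const BASE_URL = "http://localhost:51732";

async function listLocalModels(): Promise<void> {
  const res = await fetch(`${BASE_URL}/v1/models`);
  if (!res.ok) {
    throw new Error(`LM Studio server responded with HTTP ${res.status}`);
  }
  const body = (await res.json()) as { data?: { id: string }[] };
  // Each `id` is a model identifier you can later select in CodinIT.
  for (const model of body.data ?? []) {
    console.log(model.id);
  }
}

listLocalModels().catch((err) => console.error("Could not reach LM Studio:", err));
```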
-## How It Works
+## Configuration in CodinIT

-LM Studio downloads and runs AI models locally using your computer's resources. It provides a simple interface to manage models, start local servers, and connect to various applications including Codinit.
+1. Click the settings icon (⚙️) in CodinIT
+2. Select "LM Studio" as the API Provider
+3. Set server URL to `http://localhost:51732`
+4. Choose your model

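CodinIT talks to LM Studio over the same OpenAI-compatible API, so the configuration above can be sanity-checked without CodinIT by posting one chat completion directly. This is a minimal sketch under the same assumptions as the previous snippet; the model id is a placeholder for whichever id `/v1/models` returned, and the request shape follows the standard OpenAI chat-completions format that LM Studio's server implements:

```ts
// Post a single chat completion to the locally loaded model.
const BASE_URL = "http://localhost:51732";

async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "your-model-id", // placeholder: use an id listed by /v1/models
      messages: [{ role: "user", content: prompt }],
      temperature: 0.2,
    }),
  });
  if (!res.ok) {
    throw new Error(`Request failed with HTTP ${res.status}`);
  }
  const body = (await res.json()) as {
    choices: { message: { content: string } }[];
  };
  return body.choices[0].message.content;
}

askLocalModel("Say hello in one short sentence.").then(console.log);
```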
-<AccordionGroup>
-<Accordion title="Model Management" icon="download">
-### Downloading Models
-Choose from thousands of available models in various sizes and capabilities.
+## Quantization Guide

-- **Model Library**: Browse and download models from Hugging Face
-- **Size Options**: From small 1GB models to large 100GB+ models
-- **Format Support**: GGUF, SafeTensor, and other formats
-- **Automatic Updates**: Stay current with latest model versions
+Choose based on available RAM:
+- **32GB RAM:** 4-bit quantization (~17GB download)
+- **64GB RAM:** 8-bit quantization (~32GB download)
+- **128GB+ RAM:** Full precision or larger models

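The download sizes in the new quantization table follow roughly from parameter count × bits per weight. A back-of-the-envelope sketch for the 30B model recommended above; real GGUF/MLX files run somewhat larger because metadata and some tensors stay at higher precision:

```ts
// Rough size estimate for a quantized model: params × bits ÷ 8 bytes.
// Real downloads are larger due to metadata and mixed-precision tensors.
function approxModelGiB(paramsBillions: number, bitsPerWeight: number): number {
  const bytes = paramsBillions * 1e9 * (bitsPerWeight / 8);
  return bytes / 1024 ** 3;
}

console.log(approxModelGiB(30, 4).toFixed(1)); // ≈ 14 GiB, ~17GB on disk with overhead
console.log(approxModelGiB(30, 8).toFixed(1)); // ≈ 28 GiB, ~32GB on disk with overhead
```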
-</Accordion>
+## Model Format

-<Accordion title="Local Server" icon="server">
-### Running AI Locally
-Start a local API server that applications can connect to.
+- **Mac (Apple Silicon):** Use MLX format
+- **Windows/Linux:** Use GGUF format

-- **One-Click Setup**: Start local server with single button
-- **API Compatibility**: OpenAI-compatible API endpoints
-- **Multi-Platform**: Windows, macOS, and Linux support
-- **Resource Management**: Monitor CPU/GPU usage and memory
+## Features

-</Accordion>
-
-<Accordion title="Performance Tuning" icon="settings">
-### Optimization Settings
-Fine-tune performance based on your hardware capabilities.
-
-- **GPU Acceleration**: Utilize NVIDIA/AMD GPUs when available
-- **CPU Optimization**: Efficient CPU inference for all systems
-- **Memory Management**: Control RAM usage and model loading
-- **Quantization**: Balance speed vs. quality with different precisions
-
-</Accordion>
-</AccordionGroup>
-
-## Setup Instructions
-
-<Steps>
-<Step title="Download LM Studio">
-Visit [lmstudio.ai](https://lmstudio.ai) and download the application for your operating system.
-
-![LM Studio download page](/assets/images/lmstudio.webp)
-</Step>
-<Step title="Install and Launch">
-Install LM Studio and launch the application. You'll see four tabs on the left:
-- **Chat**: Interactive chat interface
-- **Developer**: Where you will start the server
-- **My Models**: Your downloaded models storage
-- **Discover**: Browse and add new models
-</Step>
-<Step title="Download a Model">
-Navigate to the "Discover" tab, browse available models, and download your preferred model. Wait for the download to complete.
-
-**Recommended**: Use **Qwen3 Coder 30B A3B Instruct** for the best experience with CodinIT. This model delivers strong coding performance and reliable tool use.
-</Step>
-<Step title="Start the Server">
-Navigate to the "Developer" tab and toggle the server switch to "Running". The server will run at `http://localhost:51732`.
-
-![Starting the LM Studio server](/assets/images/lmstudio.webp)
-</Step>
-<Step title="Configure Model Settings">
-After loading your model in the Developer tab, configure these critical settings:
-- **Context Length**: Set to 262,144 (the model's maximum)
-- **KV Cache Quantization**: Leave unchecked (critical for consistent performance)
-- **Flash Attention**: Enable if available (improves performance)
-</Step>
-<Step title="Configure in CodinIT">
-Set the server URL in CodinIT settings and verify the connection to start using local AI models.
-</Step>
-</Steps>
-
-### Quantization Guide
-
-Choose quantization based on your available RAM:
-
-- **32GB RAM**: Use 4-bit quantization (~17GB download)
-- **64GB RAM**: Use 8-bit quantization (~32GB download) for better quality
-- **128GB+ RAM**: Consider full precision or larger models
-
-### Model Format
-
-- **Mac (Apple Silicon)**: Use MLX format for optimized performance
-- **Windows/Linux**: Use GGUF format
-
-## Key Features
-
-<BadgeGroup>
-<Badge variant="secondary">Local Execution</Badge>
-<Badge variant="secondary">Privacy Focused</Badge>
-<Badge variant="secondary">Offline Capable</Badge>
-<Badge variant="secondary">Cost Free</Badge>
-<Badge variant="secondary">Customizable</Badge>
-</BadgeGroup>
-
-### Platform Advantages
-
-- **Complete Privacy**: All conversations stay on your device
-- **No API Costs**: Run unlimited AI interactions for free
-- **Offline Operation**: Work without internet connectivity
-- **Hardware Flexibility**: Run on any modern computer
-- **Model Variety**: Access thousands of different AI models
-
-## Use Cases
-
-<AccordionGroup>
-<Accordion title="Private Development" icon="lock">
-### Secure Development
-Perfect for sensitive development work and private projects.
-
-- Code review without sharing code externally
-- Private documentation and analysis
-- Secure brainstorming and planning
-- Confidential business applications
-
-</Accordion>
-
-<Accordion title="Offline Work" icon="plane">
-### Offline Productivity
-Continue working with AI assistance even without internet.
-
-- Travel and remote work scenarios
-- Limited connectivity environments
-- Data-sensitive offline processing
-- Emergency backup AI capabilities
-
-</Accordion>
-
-<Accordion title="Cost Optimization" icon="dollar-sign">
-### Budget-Friendly AI
-Access advanced AI capabilities without ongoing costs.
-
-- Unlimited usage without API fees
-- No per-token or per-request charges
-- One-time setup, ongoing free usage
-- Cost-effective for heavy AI users
-
-</Accordion>
-
-<Accordion title="Learning & Experimentation" icon="graduation-cap">
-### Educational Use
-Learn about AI and experiment with different models.
-
-- Study different model architectures
-- Compare model performance and capabilities
-- Learn prompt engineering techniques
-- Understand AI model behaviors
-
-</Accordion>
-</AccordionGroup>
+- **Complete privacy:** All data stays on your device
+- **No API costs:** Unlimited free usage
+- **Offline operation:** Works without internet
+- **Hardware flexibility:** Runs on any modern computer

 ## System Requirements

-<AccordionGroup>
-<Accordion title="Minimum Requirements" icon="check-circle">
-### Basic Setup
-Requirements for running small to medium models.
-
-- **RAM**: 8GB minimum, 16GB recommended
-- **Storage**: 10GB free space for models and application
-- **OS**: Windows 10+, macOS 10.15+, Ubuntu 18.04+
-- **CPU**: Modern multi-core processor
-
-</Accordion>
-
-<Accordion title="Recommended Setup" icon="star">
-### Optimal Performance
-Recommended specifications for large models and best performance.
-
-- **RAM**: 32GB or more for large models
-- **GPU**: NVIDIA GPU with 8GB+ VRAM (optional but recommended)
-- **Storage**: SSD with 50GB+ free space
-- **CPU**: Multi-core processor with AVX2 support
-
-</Accordion>
-
-<Accordion title="GPU Support" icon="gpu">
-### Hardware Acceleration
-Utilize GPU acceleration for faster inference speeds.
-
-- **NVIDIA GPUs**: CUDA support for maximum performance
-- **AMD GPUs**: ROCm support on Linux
-- **Apple Silicon**: Native acceleration on M1/M2/M3 Macs
-- **CPU Fallback**: Automatic fallback to CPU when GPU unavailable
+**Minimum:**
+- 8GB RAM (16GB recommended)
+- 10GB free storage
+- Modern multi-core CPU

-</Accordion>
-</AccordionGroup>
-
-## Model Selection Guide
-
-<AccordionGroup>
-<Accordion title="Model Sizes" icon="scale">
-### Choosing Model Size
-Balance between performance and resource requirements.
-
-- **Small Models (1-3GB)**: Fast, basic capabilities, good for simple tasks
-- **Medium Models (3-7GB)**: Balanced performance, good for most applications
-- **Large Models (7-20GB)**: High quality, slower but more capable
-- **XL Models (20GB+)**: Maximum quality, requires powerful hardware
-
-</Accordion>
-
-<Accordion title="Use Case Models" icon="target">
-### Specialized Models
-Choose models based on your specific needs.
-
-- **Code Models**: Code generation, debugging, technical writing
-- **General Chat**: Conversation, analysis, creative writing
-- **Math/Science**: Mathematical reasoning, scientific analysis
-- **Multilingual**: Support for multiple languages and cultures
-
-</Accordion>
-</AccordionGroup>
-
-<Callout type="info">**Free Forever**: LM Studio is completely free to use. No subscriptions or hidden costs.</Callout>
-
-<Callout type="tip">
-**Start Small**: Begin with smaller models to test your setup, then upgrade to larger models as needed.
-</Callout>
-
-<Callout type="warning">
-**Resource Intensive**: Large models require significant RAM and may run slowly on lower-end hardware.
-</Callout>
+**Recommended:**
+- 32GB+ RAM for large models
+- NVIDIA GPU with 8GB+ VRAM (optional)
+- SSD with 50GB+ free space

 ## Troubleshooting

-If CodinIT can't connect to LM Studio:
-
-1. Verify LM Studio server is running (check Developer tab)
+If CodinIT can't connect:
+1. Verify LM Studio server is running
 2. Ensure a model is loaded
-3. Check your system meets hardware requirements
-4. Confirm the server URL matches in CodinIT settings
+3. Check system meets hardware requirements
+4. Confirm server URL matches in CodinIT settings

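The first two checks can be scripted against the same `/v1/models` endpoint used earlier. This is a rough probe under the same assumptions as the snippets above; what the endpoint lists when no model is loaded can vary between LM Studio versions, so treat an empty list only as a hint:

```ts
// Distinguish "server not running" from "server up but no model listed".
const BASE_URL = "http://localhost:51732";

async function probeLmStudio(): Promise<void> {
  let res: Response;
  try {
    res = await fetch(`${BASE_URL}/v1/models`);
  } catch {
    console.error("Server unreachable: toggle the Developer tab server to Running.");
    return;
  }
  const body = (await res.json()) as { data?: { id: string }[] };
  if (!body.data || body.data.length === 0) {
    console.warn("Server is up but listed no models: load a model in LM Studio.");
  } else {
    console.log("Server reachable; models:", body.data.map((m) => m.id).join(", "));
  }
}

probeLmStudio();
```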
-## Important Notes
+## Notes

 - Start LM Studio before using with CodinIT
 - Keep LM Studio running in background
-- First model download may take several minutes depending on size
+- First model download may take several minutes
 - Models are stored locally after download
