
Commit ae4cf22

Update providers/cloud-providers.mdx
Co-Authored-By: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
1 parent 46ac1cf commit ae4cf22

1 file changed

Lines changed: 73 additions & 167 deletions

File tree

providers/cloud-providers.mdx

```diff
@@ -3,227 +3,133 @@ title: 'Providers'
 description: 'Connect CodinIT with 19+ AI providers including cloud models, local inference, and specialized services.'
 ---
 
-### Enterprise & Research Models
+## Enterprise & Research Models
 
 <CardGroup cols={2}>
 <Card title="Anthropic" icon="/assets/ai-icons/anthropic.svg" href="/providers/anthropic">
-Claude models with advanced reasoning capabilities
+Claude models with advanced reasoning
+</Card>
+<Card title="OpenAI" icon="/assets/ai-icons/openai.svg" href="/providers/openai">
+GPT-5 and o-series models
+</Card>
+<Card title="Google" icon="/assets/ai-icons/google.svg" href="/providers/google">
+Gemini models via GCP Vertex AI
 </Card>
-
-<Card title="OpenAI" icon="/assets/ai-icons/openai.svg" href="/providers/openai">
-GPT-5 and GPT-4 models for versatile AI assistance
-</Card>
-
-<Card title="Google" icon="/assets/ai-icons/google.svg" href="/providers/google">
-Gemini models with multimodal capabilities
-</Card>
-
 <Card title="DeepSeek" icon="/assets/ai-icons/deepseek.svg" href="/providers/deepseek">
-Advanced reasoning models for complex tasks
+Advanced reasoning models
 </Card>
 </CardGroup>
 
```
```diff
-### Specialized & Fast Inference
+## Fast Inference & Specialized
 
 <CardGroup cols={3}>
 <Card title="Groq" icon="/assets/ai-icons/groq.svg" href="/providers/groq">
-Ultra-fast inference with LPU technology
+Ultra-fast LPU inference
+</Card>
+<Card title="Together AI" icon="/assets/ai-icons/togetherai.svg" href="/providers/togetherai">
+50+ open-source models
+</Card>
+<Card title="Hyperbolic" icon="/assets/ai-icons/hyperbolic.svg" href="/providers/hyperbolic">
+Optimized open-source inference
+</Card>
+<Card title="Perplexity" icon="/assets/ai-icons/perplexity-color.svg" href="/providers/perplexity">
+AI with integrated web search
 </Card>
-
-<Card title="Together AI" icon="/assets/ai-icons/togetherai.svg" href="/providers/togetherai">
-Access to 50+ open-source models
-</Card>
-
-<Card title="Hyperbolic" icon="/assets/ai-icons/hyperbolic.svg" href="/providers/hyperbolic">
-Optimized inference for open-source models
-</Card>
-
-<Card title="Perplexity" icon="/assets/ai-icons/perplexity-color.svg" href="/providers/perplexity">
-AI models with integrated web search
-</Card>
-
 <Card title="XAI Grok" icon="/assets/ai-icons/xai.svg" href="/providers/xai-grok">
-X.AI's Grok models with real-time knowledge
+Grok models with large context
+</Card>
+<Card title="Fireworks" icon="/assets/ai-icons/fireworks.svg" href="/providers/fireworks">
+Fast inference, 40+ models
 </Card>
 </CardGroup>
 
```
```diff
-### Open Source & Community
+## Open Source & Community
 
 <CardGroup cols={3}>
 <Card title="Cohere" icon="/assets/ai-icons/cohere.svg" href="/providers/cohere">
-Command R series models for coding and analysis
+Command R series models
+</Card>
+<Card title="HuggingFace" icon="/assets/ai-icons/huggingface.svg" href="/providers/huggingface">
+Thousands of community models
+</Card>
+<Card title="Mistral AI" icon="/assets/ai-icons/mistral.svg" href="/providers/mistral-ai">
+Mistral and Codestral models
 </Card>
-
-<Card title="HuggingFace" icon="/assets/ai-icons/huggingface.svg" href="/providers/huggingface">
-Open-source model hub with community models
-</Card>
-
-<Card title="Mistral AI" icon="/assets/ai-icons/mistral.svg" href="/providers/mistral-ai">
-Open-source and commercial Mistral models
-</Card>
-
 <Card title="Moonshot" icon="/assets/ai-icons/moonshot.svg" href="/providers/moonshot">
-Chinese language models with Kimi series
+Kimi series, Chinese language
 </Card>
 </CardGroup>
 
```
```diff
-### Unified & Routing
+## Unified & Routing
 
 <CardGroup cols={2}>
 <Card title="OpenRouter" icon="/assets/ai-icons/openrouter.svg" href="/providers/openrouter">
-Access multiple models through a unified API
+Multiple models, unified API
 </Card>
-
 <Card title="OpenAI Compatible" icon="/assets/ai-icons/openai.svg" href="/providers/openai-like">
-Connect to any OpenAI-compatible API endpoint
+Any OpenAI-compatible endpoint
 </Card>
 </CardGroup>
 
```
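The "OpenAI Compatible" card above works because many providers expose the same chat-completions wire format, so only the base URL, API key, and model name change between them. A minimal sketch using only Python's standard library (the URL, key, and model below are illustrative placeholders, not CodinIT configuration):

```python
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a chat-completions request for any OpenAI-compatible endpoint.

    Only base_url, api_key, and model differ between providers; the
    payload shape stays the same.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Swapping the base URL retargets the same helper at a different provider.
req = chat_request("https://api.openai.com/v1", "sk-placeholder", "gpt-4o", "Hello")
# To actually send it: urllib.request.urlopen(req)
```

The request is built but not sent here, since sending requires a valid key; the point is that one helper covers every provider behind this card.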

```diff
-### Cloud & Enterprise
+## Cloud & Enterprise
 
 <CardGroup cols={2}>
 <Card title="AWS Bedrock" icon="/assets/ai-icons/bedrock.svg" href="/providers/aws-bedrock">
-Enterprise-grade AI models through AWS infrastructure
+Enterprise AI via AWS
 </Card>
-
 <Card title="GitHub Models" icon="/assets/ai-icons/github.svg" href="/providers/github">
-Access OpenAI and other models through GitHub
+Models through GitHub platform
 </Card>
 </CardGroup>
 
```
```diff
-### Local & Private
+## Local & Private
 
 <CardGroup cols={2}>
 <Card title="Ollama" icon="/assets/ai-icons/ollama.svg" href="/providers/ollama">
-Run open-source models locally with Ollama
+Run models locally with Ollama
 </Card>
-
 <Card title="LM Studio" icon="/assets/ai-icons/lmstudio.svg" href="/providers/lmstudio">
-Desktop app for running models locally
+Desktop app for local models
 </Card>
 </CardGroup>
 
```
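Local providers like Ollama listen on localhost, so there is no API key and prompts never leave the machine. As an illustration of what a client request looks like (Ollama's default port is 11434 and its native endpoint is `/api/generate`; the model name is a placeholder, and this is not CodinIT's own setup code):

```python
import json
import urllib.request

# Ollama's default local endpoint; adjust host/port if you changed them.
OLLAMA_BASE = "http://localhost:11434"

def local_generate_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a request against Ollama's native /api/generate endpoint.

    No API key is needed; data stays on the local machine.
    """
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        url=f"{OLLAMA_BASE}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = local_generate_request("llama3", "Write a haiku about code")
# With an Ollama server running locally: urllib.request.urlopen(req)
```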

```diff
-## Choosing the Right Provider
-
-With 19+ AI providers available, selecting the right model depends on your specific needs. Consider these key factors:
-
-<AccordionGroup>
-<Accordion title="Performance & Speed">
-* **Ultra-fast inference**: Groq (LPU technology), Together AI
-* **Best reasoning**: Anthropic Claude, DeepSeek, OpenAI o1
-* **Balanced performance**: OpenAI GPT-4, Google Gemini, Cohere
-* **Local speed**: Ollama, LM Studio (no network latency)
-</Accordion>
-
-<Accordion title="Cost Considerations">
-* **Free/Low-cost**: Local models (Ollama, LM Studio), OpenRouter
-* **Budget-friendly**: Together AI, HuggingFace, Hyperbolic
-* **Premium**: Anthropic, OpenAI, Google (higher quality)
-* **Enterprise**: AWS Bedrock, GitHub Models (included benefits)
-</Accordion>
-
-<Accordion title="Privacy & Security">
-* **Maximum privacy**: Local models (Ollama, LM Studio) - data never leaves your device
-* **Enterprise-grade**: AWS Bedrock, Anthropic (SOC 2 compliant)
-* **Cloud security**: OpenAI, Google, Cohere (encrypted transmission)
-* **Specialized**: Perplexity (search integration with privacy considerations)
-</Accordion>
-
-<Accordion title="Model Capabilities">
-* **Code generation**: All providers support coding, specialized: Cohere, Together AI, GitHub
-* **Multimodal**: Google Gemini, OpenAI GPT-4 Vision, Moonshot
-* **Long context**: Claude (200K+), Gemini (1M+), GPT-4 (128K)
-* **Function calling**: OpenAI, Anthropic, Google, Cohere
-* **Search integration**: Perplexity (real-time web search)
-* **Multilingual**: Cohere, Google, Moonshot (Chinese), Mistral
-</Accordion>
-
-<Accordion title="Use Case Optimization">
-* **Rapid prototyping**: Groq, Together AI (fast iteration)
-* **Production applications**: Anthropic, OpenAI, AWS Bedrock
-* **Research & analysis**: DeepSeek, Perplexity, Cohere
-* **Offline development**: Ollama, LM Studio
-* **Enterprise integration**: AWS Bedrock, GitHub Models
-* **Cost optimization**: Hyperbolic, HuggingFace, OpenRouter
-</Accordion>
-</AccordionGroup>
+## Choosing a Provider
 
-## Quick Start
+**Performance & Speed:**
+- Ultra-fast: Groq, Together AI, Fireworks
+- Best reasoning: Anthropic Claude, DeepSeek, OpenAI o1
+- Balanced: OpenAI GPT-4, Google Gemini, Cohere
 
-<Steps>
-<Step title="Choose Your Provider">
-Select from 19+ providers based on your needs: speed, cost, capabilities, or privacy requirements
-</Step>
+**Cost:**
+- Free/Low-cost: Local models (Ollama, LM Studio), OpenRouter
+- Budget-friendly: Together AI, HuggingFace, Hyperbolic
+- Premium: Anthropic, OpenAI, Google
 
-<Step title="Get API Credentials">
-For cloud providers: Sign up and get API keys. For local providers: Download and install the software
-</Step>
+**Privacy:**
+- Maximum privacy: Local models (Ollama, LM Studio)
+- Enterprise-grade: AWS Bedrock, Anthropic
+- Cloud security: OpenAI, Google, Cohere
 
-<Step title="Configure in CodinIT">
-Add your credentials in CodinIT's settings under AI Providers or use provider-specific setup prompts
-</Step>
+**Capabilities:**
+- Code generation: All providers, specialized: Cohere, Together AI
+- Multimodal: Google Gemini, OpenAI GPT-4 Vision, Moonshot
+- Long context: Claude (200K+), Gemini (1M+), GPT-4 (128K)
+- Search integration: Perplexity
+- Multilingual: Cohere, Google, Moonshot (Chinese)
 
-<Step title="Select Your Model">
-Choose from available models within your selected provider, considering context limits and capabilities
-</Step>
+## Quick Start
 
-<Step title="Start Building">
-Begin using AI assistance in your development workflow with the configured provider
-</Step>
+<Steps>
+<Step title="Choose Provider">Select based on needs: speed, cost, capabilities, or privacy</Step>
+<Step title="Get Credentials">Sign up and get API keys (cloud) or install software (local)</Step>
+<Step title="Configure CodinIT">Add credentials in CodinIT settings</Step>
+<Step title="Select Model">Choose from available models</Step>
+<Step title="Start Building">Begin using AI in your workflow</Step>
 </Steps>
 
```
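The "Get Credentials" and "Configure CodinIT" steps amount to storing one key per cloud provider. Keeping keys in environment variables is a common pattern; the variable names below are the providers' conventional ones and the lookup helper is hypothetical, not something CodinIT mandates:

```python
import os

# Conventional environment variable per provider (illustrative subset).
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "groq": "GROQ_API_KEY",
    "ollama": None,  # local server: no key required
}

def get_api_key(provider: str):
    """Return a provider's API key, or None for keyless local providers."""
    var = PROVIDER_ENV_VARS[provider]
    return os.environ.get(var) if var else None

os.environ["OPENAI_API_KEY"] = "sk-example"  # demo only; set this in your shell
print(get_api_key("openai"))  # sk-example
print(get_api_key("ollama"))  # None
```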

```diff
-## Configuration Tips
-
-<Tip>
-**Multi-Provider Setup**: Configure multiple providers simultaneously and switch between them based on task requirements, cost considerations, or performance needs.
-</Tip>
-
-<Info>
-**API Key Security**: Your API keys are stored locally and never transmitted to CodinIT servers. They are only used to communicate directly with your chosen AI provider.
-</Info>
-
-<Warning>
-**Rate Limits**: Each provider has different rate limits and usage quotas. Monitor your usage and consider provider switching for high-volume workloads.
-</Warning>
-
-<Callout type="tip">
-**Provider Switching**: Easily switch between providers mid-project. CodinIT maintains separate contexts for different providers, allowing you to leverage specialized capabilities as needed.
-</Callout>
-
-<Callout type="info">
-**Local vs Cloud**: Local providers (Ollama, LM Studio) offer maximum privacy but require hardware resources. Cloud providers offer convenience and advanced features but involve data transmission.
-</Callout>
-
-## Next Steps
-
-<CardGroup cols={3}>
-<Card title="Model Configuration" icon="sliders" href="/model-config/context-windows">
-Learn about context windows and model parameters
-</Card>
-
-<Card title="Compare Models" icon="chart-line" href="/model-config/model-comparison">
-Compare different models and their capabilities
-</Card>
-
-<Card title="Run Models Locally" icon="server" href="/running-models-locally/local-model-setup">
-Set up local models for complete privacy
-</Card>
-
-<Card title="Prompt Engineering" icon="book" href="/prompting/prompt-engineering-guide">
-Optimize your prompts for better results
-</Card>
-
-<Card title="Token Efficiency" icon="zap" href="/prompting/maximize-token-efficiency">
-Optimize costs and performance across providers
-</Card>
-
-<Card title="Integration Guides" icon="plug" href="/integrations/supabase">
-Connect with databases, deployments, and APIs
-</Card>
-</CardGroup>
+## Notes
 
-<Callout type="success">
-**Provider Ecosystem**: With 19+ AI providers, you can choose the perfect model for every task - from rapid prototyping to production deployment, from cost optimization to maximum privacy.
-</Callout>
+- **Multi-provider:** Configure multiple providers and switch between them
+- **API security:** Keys stored locally, never transmitted to CodinIT servers
+- **Rate limits:** Each provider has different limits
+- **Local vs Cloud:** Local offers privacy but requires hardware; cloud offers convenience and advanced features
```
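Because each provider enforces its own rate limits, client code that drives these APIs usually wraps calls in retry-with-backoff. A generic sketch, not a CodinIT API (the exception type and delays are placeholders):

```python
import random
import time

class RateLimitError(Exception):
    """Placeholder for whatever a provider raises on HTTP 429."""

def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on rate-limit errors with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            # base, 2*base, 4*base, ... plus jitter to avoid thundering herds
            time.sleep(base_delay * 2 ** attempt + random.random() * 0.1)

# Example: a call that is throttled twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # ok
```

Switching providers for high-volume workloads, as the note above suggests, is the complementary strategy when backoff alone is not enough.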
