
Commit 49b940f

Update providers/openrouter.mdx
Co-Authored-By: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
1 parent d6168de commit 49b940f

1 file changed

Lines changed: 19 additions & 24 deletions

providers/openrouter.mdx
---
title: "OpenRouter"
description: "Access multiple AI models through a unified API with OpenRouter."
---

OpenRouter provides access to models from multiple providers through a single API.

**Website:** [https://openrouter.ai/](https://openrouter.ai/)
## Getting an API Key

1. Go to [OpenRouter](https://openrouter.ai/) and sign in with Google or GitHub
2. Navigate to the [keys page](https://openrouter.ai/keys)
3. Copy your API key (or create a new one)

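CodinIT makes the API calls for you, but the same key also works against OpenRouter's OpenAI-compatible REST endpoint if you want to test it directly. A minimal sketch using only the standard library; the key value and model id are placeholders:

```python
import json
import urllib.request

def build_chat_request(api_key: str, model: str, messages: list) -> urllib.request.Request:
    """Build a POST request for OpenRouter's OpenAI-compatible chat endpoint."""
    return urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "sk-or-...",      # placeholder: your key from https://openrouter.ai/keys
    "openai/gpt-4o",  # placeholder: any model id from https://openrouter.ai/models
    [{"role": "user", "content": "Hello"}],
)
# urllib.request.urlopen(req) would send it; omitted here to avoid a live call.
```

Because the endpoint follows the OpenAI request shape, OpenAI-compatible client libraries can also be pointed at it by overriding the base URL.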
## Configuration

1. Click the settings icon (⚙️) in CodinIT
2. Select "OpenRouter" as the API Provider
3. Paste your API key
4. Choose your model

## Supported Models

CodinIT automatically fetches available models. See [OpenRouter Models](https://openrouter.ai/models) for the complete list.

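The same list CodinIT fetches is available from OpenRouter's public `/api/v1/models` endpoint, which returns JSON with a top-level `data` array of model objects. A sketch of parsing that response; the sample payload below is a trimmed-down illustration, not live data:

```python
import json

def model_ids(models_response: str) -> list:
    """Extract model ids from an OpenRouter /api/v1/models JSON response."""
    return [m["id"] for m in json.loads(models_response)["data"]]

# Illustrative payload in the shape the endpoint returns:
sample = json.dumps({
    "data": [
        {"id": "openai/gpt-4o", "context_length": 128000},
        {"id": "anthropic/claude-3.5-sonnet", "context_length": 200000},
    ]
})
print(model_ids(sample))  # ['openai/gpt-4o', 'anthropic/claude-3.5-sonnet']
```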
## Features

- **Message transforms:** Enable "Compress prompts and message chains to context size" to handle prompts that exceed a model's context window
- **Prompt caching:** Caching requests are passed automatically to models that support it
- **Gemini caching:** Manually enable "Enable Prompt Caching" for Gemini models

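The compression checkbox corresponds to OpenRouter's `transforms` request field (documented on its message-transforms page). If you call the API directly, the equivalent request body can be sketched like this; the model id is a placeholder:

```python
def with_middle_out(payload: dict) -> dict:
    """Return a copy of the request body with OpenRouter's middle-out
    transform enabled (trims the middle of over-long prompts)."""
    return {**payload, "transforms": ["middle-out"]}

body = with_middle_out({
    "model": "openai/gpt-4o",  # placeholder model id
    "messages": [{"role": "user", "content": "A very long prompt..."}],
})
print(body["transforms"])  # ['middle-out']
```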
## Notes

- **Pricing:** Based on underlying model pricing. See [OpenRouter Models](https://openrouter.ai/models)
