
Commit e6d130a

Inline ARM64 guidance in EmbeddingServer step
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent fdc70b9 commit e6d130a

File tree

1 file changed (+5, -12 lines)


docs/toolhive/tutorials/mcp-optimizer.mdx

Lines changed: 5 additions & 12 deletions
@@ -89,16 +89,6 @@ Before starting this tutorial, make sure you have:
 - An MCP client (Visual Studio Code with GitHub Copilot is used in this
   tutorial)
 
-:::tip[ARM64 support]
-
-The default TEI image is x86_64-only. If you are running on ARM64 nodes (for
-example, Apple Silicon with kind), set the `image` field in your EmbeddingServer
-to use the ARM64 image. See
-[EmbeddingServer resource](../guides-vmcp/optimizer.mdx#embeddingserver-resource)
-for details.
-
-:::
-
 ## Step 1: Create an MCPGroup and deploy backend MCP servers
 
 Create an MCPGroup to organize the backend MCP servers that the optimizer will
@@ -195,15 +185,18 @@ The optimizer uses semantic search to find relevant tools. This requires an
 EmbeddingServer, which runs a text embeddings inference (TEI) server.
 
 Create an EmbeddingServer with default settings. This deploys the
-`BAAI/bge-small-en-v1.5` model:
+`BAAI/bge-small-en-v1.5` model. If you are running on ARM64 nodes (for example,
+Apple Silicon with kind), uncomment the `image` line to use the ARM64 build:
 
 ```yaml title="embedding-server.yaml"
 apiVersion: toolhive.stacklok.dev/v1alpha1
 kind: EmbeddingServer
 metadata:
   name: optimizer-embedding
   namespace: toolhive-system
-spec: {}
+spec:
+  # Uncomment for Apple Silicon or other ARM64 platforms
+  # image: ghcr.io/huggingface/text-embeddings-inference:cpu-arm64-latest
 ```
 
 Apply the resource:
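For readers following the updated tutorial on ARM64: after uncommenting, the manifest would read as below. This is a sketch assembled from the lines this commit adds; the field names and the image tag are taken verbatim from the diff, not from independent verification of available TEI tags.

```yaml
# embedding-server.yaml with the ARM64 override enabled
apiVersion: toolhive.stacklok.dev/v1alpha1
kind: EmbeddingServer
metadata:
  name: optimizer-embedding
  namespace: toolhive-system
spec:
  # ARM64 (e.g. Apple Silicon with kind) build of the TEI server,
  # replacing the default x86_64-only image
  image: ghcr.io/huggingface/text-embeddings-inference:cpu-arm64-latest
```

On x86_64 nodes, leaving `spec:` empty (the commented-out form above) keeps the default image, matching the previous `spec: {}` behavior.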
