This repository contains Check Point's fork of Presenton. The fork adds enterprise-grade Azure OpenAI integration:
✅ Azure OpenAI Native Support - Works with Azure OpenAI Legacy API endpoints
✅ GPT-5 Reasoning Models - Advanced custom template creation with GPT-5
✅ Dual Model Strategy - Fast gpt-4o-mini for basic, GPT-5 for advanced features
✅ Production Ready - Fully tested with Check Point Azure infrastructure
- Docker
- Azure OpenAI endpoint access
- (Optional) Pexels API key for images
```bash
docker build -t presenton-azure:latest .

docker run -d --name presenton -p 5000:80 \
  -e LLM="custom" \
  -e CUSTOM_LLM_URL="https://your-azure-endpoint.azure-api.net/path" \
  -e CUSTOM_LLM_API_KEY="your-azure-api-key" \
  -e CUSTOM_MODEL="gpt-4o-mini-2024-07-18" \
  -e OPENAI_API_KEY="your-azure-api-key" \
  -e GPT5_MODEL="gpt-5-2025-08-07" \
  -e IMAGE_PROVIDER="pexels" \
  -e PEXELS_API_KEY="your-pexels-key" \
  -e GOOGLE_API_KEY="optional" \
  -v "./app_data:/app_data" \
  presenton-azure:latest
```

Access at: http://localhost:5000
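Once the container is running, you can sanity-check it from Python. This is a minimal probe of the FastAPI docs route; the port assumes the `-p 5000:80` mapping above.

```python
import urllib.request


def server_up(base_url: str = "http://localhost:5000") -> bool:
    """Return True if the Presenton FastAPI docs page responds with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/docs", timeout=5) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False
```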
| Variable | Description | Example |
|---|---|---|
| `CUSTOM_LLM_URL` | Azure endpoint URL | `https://your-endpoint.azure-api.net/path` |
| `CUSTOM_LLM_API_KEY` | Azure API key | `your-api-key` |
| `CUSTOM_MODEL` | Fast model for basic presentations | `gpt-4o-mini-2024-07-18` |
| `OPENAI_API_KEY` | Same as `CUSTOM_LLM_API_KEY` (for frontend) | `your-api-key` |
| Variable | Description | Default |
|---|---|---|
| `GPT5_MODEL` | Model for custom templates | `gpt-5-2025-08-07` |
| `IMAGE_PROVIDER` | Image service | `pexels` |
| `PEXELS_API_KEY` | Pexels API key | - |
| `GOOGLE_API_KEY` | Google API key | - |
- `CUSTOM_MODEL` (gpt-4o-mini): Fast basic presentations (~5-10 seconds)
- `GPT5_MODEL` (gpt-5): Advanced custom templates (~30-60 seconds per slide)
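The routing between the two models can be sketched as a hypothetical helper (`pick_model` is illustrative, not the fork's actual code; the defaults mirror the tables above):

```python
import os


def pick_model(task: str) -> str:
    """Choose a model per the dual-model strategy (illustrative only)."""
    if task == "custom_template":
        # Slow path: GPT-5 reasoning for custom template processing
        return os.environ.get("GPT5_MODEL", "gpt-5-2025-08-07")
    # Fast path: gpt-4o-mini for basic presentation generation
    return os.environ.get("CUSTOM_MODEL", "gpt-4o-mini-2024-07-18")
```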
- **Azure Legacy API Client** (`servers/fastapi/services/llm_client.py`)
  - Auto-detects Azure URLs
  - Uses the `AsyncAzureOpenAI` client
  - Handles the Azure authentication format
- **Model Validation Skip** (`servers/fastapi/utils/`)
  - Graceful handling when the Azure endpoint lacks a `/models` endpoint
  - Proceeds with the configured model without validation
- **GPT-5 Support** (`servers/fastapi/api/v1/ppt/endpoints/slide_to_html.py`)
  - Converted Responses API → Chat Completions API
  - Azure-compatible parameters: `max_completion_tokens` instead of `max_tokens`
  - No `temperature` parameter (GPT-5 default only)
  - 16000-token limit for reasoning
  - Separate `GPT5_MODEL` configuration
- **Docker Dependencies** (`Dockerfile`)
  - Added `zstd` for Ollama
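The Azure URL auto-detection and client selection can be sketched roughly as follows. The hostname suffixes, `create_client` helper, and `api_version` value are assumptions rather than the fork's exact logic; `AsyncAzureOpenAI` and `AsyncOpenAI` are real client classes from the `openai` package.

```python
from urllib.parse import urlparse


def is_azure_url(url: str) -> bool:
    """Heuristic check for an Azure OpenAI endpoint (illustrative)."""
    host = urlparse(url).hostname or ""
    return host.endswith(".azure-api.net") or host.endswith(".openai.azure.com")


def create_client(url: str, api_key: str):
    """Pick the async client class based on the endpoint URL."""
    if is_azure_url(url):
        from openai import AsyncAzureOpenAI  # uses Azure's api-key auth format
        return AsyncAzureOpenAI(
            azure_endpoint=url,
            api_key=api_key,
            api_version="2024-08-01-preview",  # assumed; match your deployment
        )
    from openai import AsyncOpenAI  # standard OpenAI-compatible endpoint
    return AsyncOpenAI(base_url=url, api_key=api_key)
```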
**Basic presentations**
- Navigate to http://localhost:5000
- Enter a topic and click "Generate"
- Uses gpt-4o-mini for speed

**Custom templates**
- Go to http://localhost:5000/custom-template
- Upload a .pptx file
- Click "Process File"
- Uses GPT-5 reasoning for quality
- Expect 30-60 seconds per slide (reasoning tokens)
FastAPI docs: http://localhost:5000/docs
**Authentication errors (401)**
- Verify `CUSTOM_LLM_API_KEY` is correct
- Check that the API key has model access

**Slow custom templates**
- Expected: GPT-5 reasoning takes time
- Normal: 30-60 seconds per slide

**Empty GPT-5 responses**
- This fork includes a fix: 16000-token limit
- GPT-5 needs tokens for reasoning + content
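The GPT-5 parameter constraints above amount to a request shape like this (a sketch; the message content is a placeholder, not the fork's actual prompt):

```python
# Sketch of a Chat Completions request body for GPT-5 (illustrative only).
request = {
    "model": "gpt-5-2025-08-07",
    "messages": [{"role": "user", "content": "Convert this slide to HTML."}],
    # GPT-5 rejects `max_tokens`; `max_completion_tokens` must be large enough
    # to cover hidden reasoning tokens *plus* the visible output.
    "max_completion_tokens": 16000,
    # Note: no `temperature` key -- GPT-5 accepts only the default value.
}
```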
Branches:
- `main` - Sync with upstream Presenton
- `azure-openai-legacy-api` - Azure integration (active)

Key commits:
- `fc13b2e` - Initial Azure OpenAI support
- `2d230e7` - GPT-5 Chat Completions API
- Upstream: presenton/presenton
- Original README: See README.md
- Docs: https://docs.presenton.ai
Apache License 2.0 (same as upstream)
- Azure OpenAI issues: File in this repo
- Upstream issues: presenton/presenton
Note: No sensitive data (API keys, endpoints) is committed to this repository. All configuration is via environment variables.