TTS Providers
OpenReader supports OpenAI-compatible TTS providers through a common API shape.
Tip: If you are running a self-hosted TTS server (Kokoro, Orpheus, etc.), use Custom OpenAI-Like in Settings.
Quick Setup by Provider
OpenAI
- In Settings, choose provider: OpenAI.
- Keep the default API_BASE (auto-filled).
- Set API_KEY to your OpenAI key.
- Choose model/voice.
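If you want to confirm your key, model, and voice outside the app first, a short request against OpenAI's speech endpoint is enough. This is a minimal sketch (Node 18+, run as an ES module, e.g. with tsx); `tts-1` and `alloy` are example values, not OpenReader defaults.

```ts
// verify-openai-tts.ts: minimal check that the key, model, and voice work.
// Not part of OpenReader; assumes Node 18+ (global fetch) and OPENAI_API_KEY in the environment.
import { writeFile } from "node:fs/promises";

async function main() {
  const res = await fetch("https://api.openai.com/v1/audio/speech", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "tts-1", // example model; use the one you select in Settings
      voice: "alloy", // example voice
      input: "OpenReader test sentence.",
    }),
  });
  if (!res.ok) throw new Error(`TTS request failed: ${res.status} ${await res.text()}`);
  await writeFile("test.mp3", Buffer.from(await res.arrayBuffer()));
  console.log("Wrote test.mp3");
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```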
DeepInfra
- In Settings, choose provider: DeepInfra.
- Keep the default API_BASE (auto-filled).
- Set API_KEY to your DeepInfra key.
- Choose model/voice.
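The same request shape works for DeepInfra; only the base URL, key, and model id change. Both the base URL and the model id in this sketch are assumptions, so confirm them against DeepInfra's documentation before relying on them.

```ts
// verify-deepinfra-tts.ts: same call as the OpenAI sketch, with DeepInfra-specific values.
// Run as an ES module (e.g. with tsx). Base URL and model id below are assumptions.
const API_BASE = "https://api.deepinfra.com/v1/openai"; // assumed OpenAI-compatible base

const res = await fetch(`${API_BASE}/audio/speech`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.DEEPINFRA_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "hexgrad/Kokoro-82M", // assumed model id; use whatever you pick in Settings
    voice: "af_bella",           // assumed voice name
    input: "OpenReader test sentence.",
  }),
});
console.log(res.ok ? "TTS request succeeded" : `TTS request failed: ${res.status}`);
```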
Custom OpenAI-Like
- In Settings, choose provider: Custom OpenAI-Like.
- Set API_BASE to your endpoint (example: http://host.docker.internal:8880/v1).
- Set API_KEY if your provider requires one.
- Choose model/voice (you can list the server's voices as sketched below).
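To find the voice names a self-hosted server accepts, you can query its voices route (see the API shape below). A minimal sketch, assuming the example API_BASE from the steps above and a Kokoro-style `{ voices: [...] }` response:

```ts
// list-voices.ts: query a self-hosted OpenAI-like TTS server for its voice list.
// Run as an ES module (e.g. with tsx); adjust API_BASE to your deployment.
const API_BASE = "http://host.docker.internal:8880/v1";

const res = await fetch(`${API_BASE}/audio/voices`);
if (!res.ok) throw new Error(`voices request failed: ${res.status}`);
// Kokoro-style servers typically return { voices: [...] }; the exact shape is provider-specific.
console.log(await res.json());
```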
Provider Dropdown Behavior
In Settings, provider options include:
- OpenAI
- DeepInfra
- Custom OpenAI-Like (Kokoro, Orpheus, and other OpenAI-compatible endpoints)
API_BASE guidance:
- OpenAI and DeepInfra auto-fill default endpoints.
- Custom OpenAI-Like requires setting API_BASE manually.
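In code terms, the dropdown behavior amounts to a resolver like the one below. The OpenAI default shown is the public `https://api.openai.com/v1` base; the DeepInfra default is an assumption, and OpenReader's actual auto-filled values may differ.

```ts
type Provider = "openai" | "deepinfra" | "custom";

// Illustrative defaults only; OpenReader's real auto-fill values may differ.
const DEFAULT_API_BASE: Partial<Record<Provider, string>> = {
  openai: "https://api.openai.com/v1",
  deepinfra: "https://api.deepinfra.com/v1/openai", // assumption
};

function resolveApiBase(provider: Provider, userApiBase?: string): string {
  // Custom OpenAI-Like has no default: the user must supply API_BASE.
  const base = userApiBase ?? DEFAULT_API_BASE[provider];
  if (!base) throw new Error(`API_BASE is required for provider "${provider}"`);
  return base;
}
```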
OpenAI-Compatible API Shape
Custom providers should expose:
- GET /v1/audio/voices
- POST /v1/audio/speech
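Expressed as TypeScript types, the contract looks roughly like this. The speech request fields follow the OpenAI speech API; the voices response shape varies between servers, so the `{ voices: string[] }` form shown is an assumption.

```ts
// Rough contract for an OpenAI-compatible TTS provider as a custom endpoint would expose it.

// GET {API_BASE}/audio/voices -> list of voice identifiers.
// Servers differ here; { voices: string[] } is a common Kokoro-style form (assumption).
interface VoicesResponse {
  voices: string[];
}

// POST {API_BASE}/audio/speech -> raw audio bytes (e.g. audio/mpeg).
interface SpeechRequest {
  model: string;            // model id, e.g. "tts-1" or a Kokoro model name
  input: string;            // text to synthesize
  voice: string;            // voice identifier, ideally one returned by the voices route
  response_format?: string; // e.g. "mp3"; optional on most servers
  speed?: number;           // playback speed multiplier; optional
}
```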
Server-Reachable API Base
TTS requests are sent from the Next.js server, not directly from the browser, so API_BASE must be reachable from the server runtime. For Docker deployments this usually means using host.docker.internal or a service name rather than localhost, which would point at the container itself.
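A quick way to verify this is to run a check from wherever the Next.js server actually runs (for Docker, inside the container via `docker exec`), not from your desktop browser. A sketch, assuming API_BASE is available as an environment variable in that runtime:

```ts
// check-api-base.ts: run this where the Next.js server runs, e.g. inside its container.
// A URL that works in your browser can still be unreachable from the server runtime.
const API_BASE = process.env.API_BASE ?? "http://host.docker.internal:8880/v1";

try {
  // Any cheap request proves reachability; the voices route avoids needing a request body.
  const res = await fetch(`${API_BASE}/audio/voices`, { signal: AbortSignal.timeout(5000) });
  console.log(`Reached ${API_BASE} (HTTP ${res.status})`);
} catch (err) {
  console.error(`Could not reach ${API_BASE} from this runtime:`, err);
  process.exit(1);
}
```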