# KittenTTS-FastAPI
Run KittenTTS-FastAPI locally and connect it to OpenReader using the Custom OpenAI-Like provider. Lightweight and CPU-friendly.
## Run KittenTTS
```bash
docker run -it --rm \
  --name kittentts-fastapi \
  -e KITTEN_MODEL_REPO_ID="KittenML/kitten-tts-nano-0.8-fp32" \
  -p 8005:8005 \
  ghcr.io/richardr1126/kittentts-fastapi-cpu
```
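Once the container is up, you can exercise it directly with an OpenAI-style speech request. The sketch below builds such a request in Python; the `/v1/audio/speech` path and JSON body follow the OpenAI audio API convention that "OpenAI-Like" servers typically mirror, so verify them against your deployment before relying on them.

```python
# Sketch: build an OpenAI-compatible text-to-speech request for the local
# KittenTTS-FastAPI server. The /audio/speech path and payload fields are
# assumptions based on the OpenAI audio API convention.
import json
import urllib.request

def build_speech_request(api_base: str, model: str, text: str) -> urllib.request.Request:
    """Build a POST request for the OpenAI-style /audio/speech endpoint."""
    url = f"{api_base.rstrip('/')}/audio/speech"
    payload = {"model": model, "input": text}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_speech_request("http://localhost:8005/v1", "kitten-tts", "Hello from KittenTTS!")
print(req.full_url)  # http://localhost:8005/v1/audio/speech

# To actually fetch audio (requires the container to be running):
# with urllib.request.urlopen(req) as resp:
#     open("out.wav", "wb").write(resp.read())
```

Use `http://localhost:8005/v1` when testing from the host; from another container, use the container name as shown in the next section.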
## Connect to OpenReader
Environment variables (recommended for deployment):

```bash
API_BASE=http://kittentts-fastapi:8005/v1
```

Use `kittentts-fastapi` if that's the container name, or `host.docker.internal` if not.
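If OpenReader itself runs in Docker, putting both services on the same Compose network lets the `kittentts-fastapi` hostname resolve directly. A sketch under assumptions (the OpenReader image name is a placeholder; substitute your actual image and settings):

```yaml
# docker-compose.yml sketch: services on the default Compose network can
# reach each other by service name, so OpenReader can use
# http://kittentts-fastapi:8005/v1 as its API_BASE.
services:
  kittentts-fastapi:
    image: ghcr.io/richardr1126/kittentts-fastapi-cpu
    environment:
      - KITTEN_MODEL_REPO_ID=KittenML/kitten-tts-nano-0.8-fp32
    ports:
      - "8005:8005"
  openreader:
    image: your-openreader-image  # placeholder: substitute your actual image
    environment:
      - API_BASE=http://kittentts-fastapi:8005/v1
```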
Or in-app via Settings → TTS Provider:

- Set the provider to `Custom OpenAI-Like`.
- Set `API_BASE` to your KittenTTS endpoint (e.g. `http://kittentts-fastapi:8005/v1`).
- Leave `API_KEY` blank unless your deployment requires one.
- Choose the model `kitten-tts` (or the model your deployment exposes).
Values set in the Settings modal override environment variables. See TTS Providers for how the two layers interact.
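The precedence rule above can be illustrated with a tiny resolver. This is illustrative only: the function name and lookup are assumptions, not OpenReader's actual code.

```python
# Sketch of "modal value wins, env var is the fallback" precedence.
# Hypothetical helper -- not OpenReader's real implementation.
import os

def effective_api_base(modal_value):
    """Return the Settings-modal value if set, else the API_BASE env var."""
    return modal_value or os.environ.get("API_BASE")

os.environ["API_BASE"] = "http://kittentts-fastapi:8005/v1"
print(effective_api_base(None))                        # env var is used
print(effective_api_base("http://localhost:8005/v1"))  # modal value overrides it
```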