Once your deployment is running, you can test it with the commands below. Make sure you can reach the service over the network, for example via port forwarding or an ingress.
List Voices
curl "http://<your-host>:<port>/voices" \
-H "Cartesia-Version: 2025-04-16" \
-H "X-API-Key: $CARTESIA_API_KEY" | jq '.'
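To feed a voice into a TTS request you usually just need its ID. A minimal sketch of pulling the IDs out of the response with a jq filter, assuming the endpoint returns a top-level JSON array of voice objects that each carry an `id` field (the sample payload below is placeholder data, and the filter may need adjusting if your deployment wraps the list in a key):

```shell
# Placeholder standing in for the /voices response body (shape is an assumption)
response='[{"id":"bf0a246a-8642-498a-9950-80c35e9276b5","name":"Example Voice"},
           {"id":"a1b2c3d4-0000-1111-2222-333344445555","name":"Other Voice"}]'

# Extract just the voice IDs, one per line
echo "$response" | jq -r '.[].id'
```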
Text-to-Speech
curl -X POST "http://<your-host>:<port>/tts/bytes" \
-H "Cartesia-Version: 2025-04-16" \
-H "X-API-Key: $CARTESIA_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model_id": "sonic-2",
"transcript": "Hello, this is a test of the Cartesia text-to-speech API.",
"voice": {
"mode": "id",
"id": "bf0a246a-8642-498a-9950-80c35e9276b5"
},
"output_format": {
"container": "wav",
"encoding": "pcm_f32le",
"sample_rate": 44100
},
"language": "en"
}' > output.wav
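Because curl writes whatever the server returns into `output.wav`, a failed request can leave a JSON error body in the file instead of audio. A quick sanity check is to look for the WAV container's "RIFF" magic bytes; this sketch assumes any non-audio response will fail that check:

```shell
# A WAV file begins with the 4-byte ASCII magic "RIFF"
if [ "$(head -c 4 output.wav)" = "RIFF" ]; then
  echo "output.wav looks like a valid WAV file"
else
  echo "output.wav is not a WAV file; inspect it for an error message" >&2
fi
```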
Benchmarking
We provide a benchmarking tool in the cartesia-kube repository for measuring TTS performance metrics such as time to first audio (TTFA) and end-to-end latency.
cd cartesia-kube/benchmarking
export CARTESIA_API_KEY="your-api-key"
export CARTESIA_API_URL="wss://your-ingress-host"
# Run with default concurrency (4)
uv run tts_benchmark.py
# Run with custom concurrency
uv run tts_benchmark.py --concurrency 8
See the benchmarking README for detailed usage and output format.
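If you want a quick summary of the latency distribution across runs, percentiles can be computed from a list of samples with standard tools. A minimal sketch, assuming you have collected per-request latencies in milliseconds, one per line (the `latencies.txt` file and its values below are placeholder data; see the benchmarking README for the tool's actual output format):

```shell
# Placeholder latency samples in milliseconds
printf '%s\n' 120 95 300 110 180 90 250 105 > latencies.txt

# percentile P FILE: sort numerically and pick the value at the percentile rank
percentile() {
  local p=$1 file=$2
  local n rank
  n=$(wc -l < "$file")
  rank=$(( (n * p + 99) / 100 ))   # ceiling of n*p/100
  sort -n "$file" | sed -n "${rank}p"
}

echo "p50: $(percentile 50 latencies.txt) ms"
echo "p95: $(percentile 95 latencies.txt) ms"
```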