The Cartesia WebSocket API lets you stream text input and audio output simultaneously. Stream text to Cartesia in chunks and receive audio chunks in real time. This is ideal for realtime use cases where text is generated incrementally, such as voice agents consuming output from an LLM.

Prerequisites

  • A Cartesia API key. Create one here, then add it to your .bashrc or .zshrc:
    export CARTESIA_API_KEY=<your api key here>
    
  • ffplay (part of FFmpeg), used to play audio output:
    brew install ffmpeg
    

Stream text and play audio

1. Install the SDK

pip install 'cartesia[websockets]'
2. Stream text over a WebSocket

realtime-tts.py
from cartesia import Cartesia
import subprocess
import os

client = Cartesia(api_key=os.getenv("CARTESIA_API_KEY"))

print("Starting ffplay to play streaming audio output...")
player = subprocess.Popen(
    ["ffplay", "-f", "f32le", "-ar", "44100", "-probesize", "32", "-analyzeduration", "0", "-nodisp", "-autoexit", "-loglevel", "quiet", "-"],
    stdin=subprocess.PIPE,
    bufsize=0,
)

print("Connecting to Cartesia via websockets...")
with client.tts.websocket_connect() as connection:
    ctx = connection.context(
        model_id="sonic-3",
        voice={"mode": "id", "id": "f786b574-daa5-4673-aa0c-cbe3e8534c02"},
        output_format={
            "container": "raw",
            "encoding": "pcm_f32le",
            "sample_rate": 44100,
        },
    )

    print("Sending chunked text input...")
    for part in ["Hi there! ", "Welcome to ", "Cartesia Sonic."]:
        ctx.push(part)

    ctx.no_more_inputs()

    for response in ctx.receive():
        if response.type == "chunk" and response.audio:
            print(f"Received audio chunk ({len(response.audio)} bytes)")
            # Here we pipe audio to ffplay. In a production app you might play audio in
            # a client, or forward it to another service, e.g. a telephony provider.
            player.stdin.write(response.audio)
        elif response.type == "done":
            break

player.stdin.close()
player.wait()
3. Run the quickstart

python3 realtime-tts.py
This streams the text inputs to Cartesia and plays the streaming audio output using ffplay. (Make sure your device volume is turned on!)
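Because the output format above is raw 32-bit float PCM at 44,100 Hz, you can work out how much audio each chunk contains from its byte length alone. A minimal sketch (the helper name is our own, not part of the SDK; it assumes mono audio):

```python
# Each pcm_f32le sample is 4 bytes; mono audio at 44100 samples/sec.
BYTES_PER_SAMPLE = 4
SAMPLE_RATE = 44100

def chunk_duration_seconds(num_bytes: int, sample_rate: int = SAMPLE_RATE) -> float:
    """Duration of a raw pcm_f32le chunk, assuming a single channel."""
    return num_bytes / (BYTES_PER_SAMPLE * sample_rate)

# A 176,400-byte chunk holds exactly one second of audio.
print(chunk_duration_seconds(176400))  # → 1.0
```

This kind of bookkeeping is handy for measuring time-to-first-audio or pacing playback buffers.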

How it works

The WebSocket connection manages multiple contexts, each representing an independent, continuous stream of speech. A Cartesia context works much like an LLM context: the server stores the previously generated speech so that new speech matches it in tone. To summarize, here's what the code does after establishing a WebSocket connection:
  1. Create a context with context().
  2. Push text incrementally with push(). Each chunk continues seamlessly from the previous one using continuations.
  3. Signal completion with no_more_inputs() to tell the model no more text is coming.
  4. Receive audio chunks as they are generated.
This maps directly to LLM token streaming — push each token or sentence fragment as it arrives, and audio begins streaming back even if the full text is not yet available.
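For instance, when consuming an LLM token stream you might buffer tokens and flush a fragment at natural break points, so each push carries a speakable phrase rather than a single token. A hedged sketch (the `buffer_tokens` helper and its flush rule are our own illustration, not part of the Cartesia SDK):

```python
def buffer_tokens(tokens):
    """Group incoming LLM tokens into fragments, flushing at
    sentence-ending punctuation so each fragment is a natural phrase."""
    buffer = ""
    for token in tokens:
        buffer += token
        if buffer and buffer[-1] in ".!?":
            yield buffer
            buffer = ""
    if buffer:  # flush any trailing partial fragment
        yield buffer

# With a live connection you would call ctx.push(fragment) for each
# fragment; here we just print them.
tokens = ["Hi", " there", "!", " Welcome", " to", " Cartesia", "."]
for fragment in buffer_tokens(tokens):
    print(fragment)
```

Flushing on punctuation is one simple policy; you could equally flush on a token count or a timeout, trading latency against prosodic naturalness.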

What’s next

Stream inputs using continuations

Deep dive into context management and buffering.

Choose a Voice

Browse voices and learn how to pick the right one for your use case.

Choosing TTS parameters

Pick the right output format, sample rate, and encoding for your use case.