The Line SDK is a Python framework for building voice agents. It handles audio infrastructure, speech recognition, and conversation flow. Install it with uv:

uv add cartesia-line
New to Line? Start with the Quickstart to build and deploy your first agent.

Core Concepts

  • Agent: Controls the input/output event loop via a process method
  • LlmAgent: Built-in agent that wraps 100+ LLM providers via LiteLLM
  • Tools: Functions your agent can call, such as database lookups, handoffs, and web search
  • VoiceAgentApp: HTTP server that connects your agent to Cartesia's audio infrastructure

A minimal agent looks like this:
import os
from line.llm_agent import LlmAgent, LlmConfig, end_call
from line.voice_agent_app import VoiceAgentApp

# Called once per incoming call; returns the agent that will handle it
async def get_agent(env, call_request):
    return LlmAgent(
        model="anthropic/claude-haiku-4-5-20251001",
        api_key=os.getenv("ANTHROPIC_API_KEY"),
        tools=[end_call],
        config=LlmConfig(
            system_prompt="You are a helpful assistant.",
            introduction="Hello! How can I help you today?",
        ),
    )

app = VoiceAgentApp(get_agent=get_agent)
The agent speaks the introduction when a call starts, then responds to whatever the user says using the LLM.
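
To run the server locally, add an entry point. A minimal sketch, assuming VoiceAgentApp exposes a run() method as in the SDK's template projects:

if __name__ == "__main__":
    app.run()  # starts the HTTP server that receives calls and invokes get_agent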

Features

  • Real-time interruption support — Handles audio interruptions and turn-taking out of the box
  • Tool calling — Connect to databases, APIs, and external services
  • Multi-agent handoffs — Route conversations between specialized agents
  • Web search — Built-in tool for real-time information lookup

Add Capabilities

Look up information

from typing import Annotated
from line.llm_agent import loopback_tool

@loopback_tool
async def get_order_status(ctx, order_id: Annotated[str, "The order ID"]):
    """Look up an order's current status."""
    order = await db.get_order(order_id)  # "db" is your application's database client
    return f"Order {order_id} is {order.status}"

Handoff to another agent

from line.llm_agent import LlmAgent, LlmConfig, agent_as_handoff, end_call

spanish_agent = LlmAgent(
    model="gpt-5-nano",
    api_key=os.getenv("OPENAI_API_KEY"),
    tools=[end_call],
    config=LlmConfig(
        system_prompt="You speak only in Spanish.",
        introduction="¡Hola! ¿Cómo puedo ayudarte?",
    ),
)

main_agent = LlmAgent(
    model="anthropic/claude-haiku-4-5-20251001",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    tools=[
        end_call,
        agent_as_handoff(
            spanish_agent,
            name="transfer_to_spanish",
            description="Transfer when user requests Spanish.",
        ),
    ],
    config=LlmConfig(...),
)

Search the web

from line.llm_agent import web_search

agent = LlmAgent(
    tools=[end_call, web_search],  # Add built-in web search
    ...
)
See Tools for the full guide.

Code Examples

  • Basic Chat: Simple conversational agent
  • Form Filler: Collect structured data via conversation
  • Multi-Agent: Hand off between specialized agents
  • Chat Supervisor: Fast chat model with powerful reasoning escalation
  • Echo Tool: Custom handoff tool implementation

Integrations

  • Exa Web Research: Real-time web search
  • Browserbase: Fill web forms via voice

Next Steps