Patterns for production voice agents: observability, tool design, multi-agent systems, and guardrails.
Complete Example: Multi-Agent Customer Service
This example combines prompting, all three tool types, and multi-agent handoffs:
import os
from typing import Annotated

from line import CallRequest
from line.llm_agent import (
    LlmAgent, LlmConfig, loopback_tool, passthrough_tool,
    agent_as_handoff, end_call
)
from line.events import AgentSendText, AgentTransferCall
from line.voice_agent_app import AgentEnv, VoiceAgentApp

# Loopback tool: fetch order info for the LLM to contextualize.
# `db` is your application's data layer, shown here for illustration.
@loopback_tool
async def get_order_status(ctx, order_id: Annotated[str, "The order ID"]):
    """Look up order status by ID."""
    order = await db.get_order(order_id)
    return f"Order {order_id}: {order.status}, delivers {order.delivery_date}"

# Passthrough tool: deterministic transfer action
@passthrough_tool
async def transfer_to_human(ctx):
    """Transfer to a human agent."""
    yield AgentSendText(text="Let me connect you with a team member who can help further.")
    yield AgentTransferCall(target_phone_number="+18005551234")

SYSTEM_PROMPT = """You are a friendly customer service agent for Acme Corp.
You can:
- Look up order status using get_order_status
- Transfer to a human agent using transfer_to_human
- Transfer to Spanish support using transfer_to_spanish
- End calls politely using end_call
Rules:
- Always confirm the order ID before looking it up
- Offer to transfer to a human if you can't resolve the issue
- Transfer to Spanish support if the user speaks Spanish or requests it
- Be empathetic and professional
"""

async def get_agent(env: AgentEnv, call_request: CallRequest):
    # Spanish-speaking specialist agent
    spanish_agent = LlmAgent(
        model="gpt-5-nano",
        api_key=os.getenv("OPENAI_API_KEY"),
        tools=[get_order_status, transfer_to_human, end_call],
        config=LlmConfig(
            system_prompt="Eres un agente de servicio al cliente amigable para Acme Corp. Habla solo en español.",
            introduction="¡Hola! Gracias por llamar a Acme Corp. ¿Cómo puedo ayudarte hoy?",
        ),
    )

    # Main English-speaking agent with handoff capability
    return LlmAgent(
        model="anthropic/claude-haiku-4-5-20251001",
        api_key=os.getenv("ANTHROPIC_API_KEY"),
        tools=[
            get_order_status,
            transfer_to_human,
            agent_as_handoff(
                spanish_agent,
                handoff_message="Transferring you to our Spanish-speaking team...",
                name="transfer_to_spanish",
                description="Transfer to Spanish support when user speaks Spanish or requests it.",
            ),
            end_call,
        ],
        config=LlmConfig(
            system_prompt=SYSTEM_PROMPT,
            introduction="Hi! Thanks for calling Acme Corp. How can I help you today?",
        ),
    )

app = VoiceAgentApp(get_agent=get_agent)

if __name__ == "__main__":
    app.run()
Observability
Log Metrics
Track performance and business metrics:
import time

from line.events import LogMetric, LogMessage

@loopback_tool
async def process_order(ctx, order_id: Annotated[str, "Order ID"]):
    """Process a customer order."""
    start = time.time()
    result = await api.process_order(order_id)

    # Log timing metric
    yield LogMetric(name="order_processing_ms", value=(time.time() - start) * 1000)

    # Log business event
    yield LogMessage(
        name="order_processed",
        level="info",
        message=f"Processed order {order_id}",
        metadata={"status": result.status}
    )

    # An async generator can't `return` a value, so yield the tool result last
    yield f"Order {order_id} processed: {result.status}"
Built-in LLM Agent Metrics
LlmAgent automatically emits three timing metrics on every turn, with no extra code:

| Metric | Description |
|---|---|
| llm_first_chunk_ms | Time from start of response generation to first chunk (text or tool call) from the LLM |
| llm_first_text_ms | Time from start of response generation to first text chunk |
| agent_turn_ms | Total agent processing time for the turn |
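If you export metrics to an external backend, the agent-wrapper pattern described later on this page can forward these events as they stream out of the agent. A minimal sketch, assuming LogMetric events appear in the agent's output stream; the statsd_client here is a hypothetical stand-in for your metrics client:
from line.events import LogMetric

class MetricsExportWrapper:
    def __init__(self, inner_agent, statsd_client):
        self.inner = inner_agent
        self.statsd = statsd_client  # hypothetical metrics client

    async def process(self, env, event):
        async for output in self.inner.process(env, event):
            if isinstance(output, LogMetric):
                # Forward built-in timings such as llm_first_chunk_ms and agent_turn_ms
                self.statsd.timing(output.name, output.value)
            yield output  # always pass the event through unchanged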
Input Validation
Validate inputs before processing:
@loopback_tool
async def book_appointment(
    ctx,
    date: Annotated[str, "Date in YYYY-MM-DD format"],
    time: Annotated[str, "Time in HH:MM format"]
):
    """Book an appointment."""
    from datetime import datetime

    try:
        dt = datetime.strptime(f"{date} {time}", "%Y-%m-%d %H:%M")
    except ValueError:
        return "Invalid date or time format. Please use YYYY-MM-DD and HH:MM."

    if dt < datetime.now():
        return "Cannot book appointments in the past."

    # Proceed with booking
    return f"Appointment booked for {dt.strftime('%B %d at %I:%M %p')}"
Timeouts
Handle long-running operations with proper timeout handling:
import asyncio

@loopback_tool
async def search_inventory(ctx, query: Annotated[str, "Search query"]):
    """Search inventory with timeout protection."""
    try:
        result = await asyncio.wait_for(
            inventory_api.search(query),
            timeout=5.0
        )
        return f"Found {len(result.items)} items matching '{query}'"
    except asyncio.TimeoutError:
        return "Search is taking longer than expected. Please try a more specific query."
Error Handling
Handle errors gracefully in tools:
@loopback_tool
async def get_account_info(ctx, account_id: Annotated[str, "Account ID"]):
    """Look up account information."""
    try:
        account = await api.get_account(account_id)
        return f"Account {account_id}: Balance ${account.balance:.2f}"
    except AccountNotFoundError:
        return f"Account {account_id} not found."
    except Exception as e:
        logger.error(f"Error fetching account: {e}")
        return "Sorry, I couldn't retrieve that account information right now."
Agent Wrappers
Agent wrappers add cross-cutting behavior (logging, validation, routing) without modifying the underlying agent.
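For example, a minimal logging wrapper echoes every event and output without changing them. This is a sketch using Python's standard logging module; the LoggingWrapper name matches the composition example at the end of this page:
import logging

logger = logging.getLogger("voice_agent")

class LoggingWrapper:
    def __init__(self, inner_agent):
        self.inner = inner_agent

    async def process(self, env, event):
        # Log the inbound event, then stream the inner agent's outputs unchanged
        logger.info("event: %s", type(event).__name__)
        async for output in self.inner.process(env, event):
            logger.info("output: %s", type(output).__name__)
            yield output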
Guardrails: Safety and Content Filtering
Wrappers are ideal for implementing guardrails that filter unsafe content in both directions:
from line.events import AgentSendText, LogMessage, UserTurnEnded

class GuardrailsAgent:
    def __init__(self, inner_agent, safety_api):
        self.inner = inner_agent
        self.safety_api = safety_api

    async def process(self, env, event):
        # Pre-processing: check user input for unsafe content
        if isinstance(event, UserTurnEnded):
            user_text = event.content[0].content if event.content else ""
            if await self.safety_api.is_unsafe(user_text):
                yield AgentSendText(text="I'm here to help with appropriate requests. Let's keep our conversation respectful.")
                return

        # Post-processing: check agent output for safety issues
        async for output in self.inner.process(env, event):
            if isinstance(output, AgentSendText):
                if await self.safety_api.is_unsafe(output.text):
                    yield LogMessage(
                        name="safety_violation",
                        level="warning",
                        message=f"Blocked unsafe output: {output.text[:100]}..."
                    )
                    yield AgentSendText(text="I apologize, but I can't provide that information.")
                    continue
            yield output
Common guardrail patterns:
- Content safety filtering (toxicity, hate speech, PII)
- Rate limiting and abuse prevention (see the sketch after this list)
- Compliance checks (HIPAA, financial regulations)
- Brand safety (off-brand responses)
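Rate limiting, for instance, fits the same wrapper shape. A minimal sketch, assuming UserTurnEnded marks one user turn; the 30-turns-per-minute threshold is illustrative:
import time

class RateLimitWrapper:
    def __init__(self, inner_agent, max_turns: int = 30, window_s: float = 60.0):
        self.inner = inner_agent
        self.max_turns = max_turns
        self.window_s = window_s
        self.turn_times: list[float] = []

    async def process(self, env, event):
        if isinstance(event, UserTurnEnded):
            # Keep only turns inside the sliding window, then record this one
            now = time.monotonic()
            self.turn_times = [t for t in self.turn_times if now - t < self.window_s]
            self.turn_times.append(now)
            if len(self.turn_times) > self.max_turns:
                yield AgentSendText(text="You're sending requests very quickly. Let's slow down for a moment.")
                return

        async for output in self.inner.process(env, event):
            yield output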
Routing Between Multiple Agents
Dynamically switch between specialized agents based on conversation context:
class RouterAgent:
    def __init__(self, default_agent, specialists: dict):
        self.default = default_agent
        self.specialists = specialists
        self.current = default_agent

    async def process(self, env, event):
        # Switch agent based on user input
        if isinstance(event, UserTurnEnded):
            user_text = event.content[0].content if event.content else ""
            if "billing" in user_text.lower():
                self.current = self.specialists.get("billing", self.default)
            elif "technical" in user_text.lower():
                self.current = self.specialists.get("technical", self.default)

        async for output in self.current.process(env, event):
            yield output
Use with LlmAgent:
async def get_agent(env, call_request):
    return RouterAgent(
        default_agent=LlmAgent(
            model="gpt-5-nano",
            api_key=os.getenv("OPENAI_API_KEY"),
            config=LlmConfig(system_prompt="You are a helpful assistant..."),
        ),
        specialists={
            "billing": LlmAgent(
                model="gpt-5-nano",
                api_key=os.getenv("OPENAI_API_KEY"),
                config=LlmConfig(system_prompt="You are a billing specialist..."),
            ),
            "technical": LlmAgent(
                model="anthropic/claude-haiku-4-5-20251001",
                api_key=os.getenv("ANTHROPIC_API_KEY"),
                config=LlmConfig(system_prompt="You are a technical support specialist..."),
            ),
        },
    )
Best Practices
Keep wrappers focused on a single responsibility. Use `async for` and `yield` to preserve streaming. Stack simple wrappers rather than building one complex one.
# Composable wrappers
agent = LoggingWrapper(
    ValidationWrapper(
        LlmAgent(...)
    )
)
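ValidationWrapper above is a stand-in for any single-purpose check. One possible implementation, assuming the policy is simply to drop empty user turns:
class ValidationWrapper:
    def __init__(self, inner_agent):
        self.inner = inner_agent

    async def process(self, env, event):
        if isinstance(event, UserTurnEnded):
            user_text = event.content[0].content if event.content else ""
            if not user_text.strip():
                return  # empty turn: nothing to respond to, skip it

        async for output in self.inner.process(env, event):
            yield output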
Example Implementations
Full working examples demonstrating these patterns:
| Example | Pattern | Description |
|---|---|---|
| Form Filler | Stateful tools | Walk users through a YAML-defined form with validation |
| Multi-Agent Transfer | agent_as_handoff | English/Spanish agent handoff |
| Chat Supervisor | Background research | Separate agents for fast conversation and slower background reasoning |