Tips

Tips for building voice agents with the Line SDK.

Observability

Monitor your voice agents by tracking custom events and metrics. The Line SDK automatically captures these for analysis on the Cartesia platform.

Track Custom Events

Define and track custom events during your voice agent calls.

```python
from line import Bridge, CallRequest, VoiceAgentSystem, register_observability_event
from line.events import UserStoppedSpeaking
from pydantic import BaseModel

# Define your custom event
class LeadCaptured(BaseModel):
    customer_name: str
    interest_level: str
    contact_method: str

async def handle_new_call(system: VoiceAgentSystem, call_request: CallRequest):
    # Set up your agent.
    chat_node = ChatNode()
    bridge = Bridge(chat_node)
    system.with_speaking_node(chat_node, bridge)

    # Register the event type for tracking.
    register_observability_event(
        system.user_bridge,
        system.harness,
        LeadCaptured,
    )

    # Your agent can now yield LeadCaptured events.
    bridge.on(UserStoppedSpeaking).stream(chat_node.generate).broadcast()
```
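The event model is just a typed payload whose fields become the tracked data. A minimal stdlib sketch of the same shape, using `dataclasses` purely for illustration (the SDK examples here use pydantic's `BaseModel`):

```python
from dataclasses import asdict, dataclass

@dataclass
class LeadCaptured:
    customer_name: str
    interest_level: str
    contact_method: str

event = LeadCaptured("Ada", "high", "phone")
payload = asdict(event)  # dict form of the event's fields
```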

Log Metrics

Log metrics during a call to track performance or other characteristics of the conversation.

```python
import time

from line.events import LogMetric

async def track_response_time(msg: Message):
    start_time = time.time()

    # Process the request
    result = await process_user_request(msg.event.content)

    # Calculate and log timing
    duration = time.time() - start_time
    yield LogMetric(name="response_time_seconds", value=duration)

    # Also yield business events
    if result.is_qualified_lead:
        yield LeadCaptured(
            customer_name=result.name,
            interest_level="high",
            contact_method="phone",
        )

# Track timing for all user requests
bridge.on(UserStoppedSpeaking).stream(track_response_time).broadcast()
```
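One caveat on timing: `time.time()` follows the wall clock, which can jump if the system clock is adjusted mid-call. For measuring durations, the stdlib's monotonic `time.perf_counter()` is the safer choice:

```python
import time

start = time.perf_counter()
# ... await your processing here ...
time.sleep(0.01)  # stand-in for real work
duration = time.perf_counter() - start  # elapsed seconds, immune to clock changes
```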

Using loguru for Proper Logging

Configure loguru as your logger to see results in the UI and have logs captured:

```python
import sys

from loguru import logger

# Example configuration: send logs to stderr at DEBUG level.
logger.remove()
logger.add(sys.stderr, level="DEBUG")

# Use in your nodes
class ChatNode(ReasoningNode):
    async def process_context(self, context: ConversationContext):
        logger.info(f"Processing {len(context.events)} events")

        # Your processing logic
        messages = convert_messages_to_openai(context.events)

        logger.debug(f"Generated {len(messages)} messages for LLM")

        for chunk in client.chat.completions.create(
            model="gpt-4", messages=messages, stream=True
        ):
            if chunk.choices[0].delta.content:
                content = chunk.choices[0].delta.content
                logger.trace(f"Streaming content: {content[:50]}...")
                yield AgentResponse(content=content)
```

Performance

Efficient Event Filtering

Filter events at the bridge level for better performance:

```python
# Good: filter at the bridge level so handlers only see relevant events
bridge.on(UserTranscriptionReceived, source=node.id).map(handle_user_input)
bridge.on(ToolCall, tool_name="transfer").map(handle_transfer)
```
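The benefit is that non-matching events never reach your handler at all, rather than being filtered inside it. A rough stdlib sketch of the idea (a hypothetical `MiniBridge`, not the SDK's actual implementation):

```python
class MiniBridge:
    def __init__(self):
        self.routes = []  # (event type, attribute filters, handler)

    def on(self, event_type, handler, **filters):
        self.routes.append((event_type, filters, handler))

    def dispatch(self, event):
        for event_type, filters, handler in self.routes:
            if isinstance(event, event_type) and all(
                getattr(event, key, None) == value for key, value in filters.items()
            ):
                handler(event)  # only matching events invoke the handler

class ToolCall:
    def __init__(self, tool_name):
        self.tool_name = tool_name

handled = []
bridge = MiniBridge()
bridge.on(ToolCall, handled.append, tool_name="transfer")
bridge.dispatch(ToolCall("transfer"))  # matches the filter, handled
bridge.dispatch(ToolCall("lookup"))    # dropped before the handler runs
```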