Tips

Tips for building voice agents with the Line SDK.

Observability

Monitor your voice agents by tracking custom events and metrics. The Line SDK automatically captures these for analysis on the Cartesia platform.

Track Custom Events

Define and track custom events during your voice agent calls.

```python
# Import paths for Bridge, CallRequest, VoiceAgentSystem, and
# UserStoppedSpeaking follow the SDK's usual layout.
from line import Bridge, CallRequest, VoiceAgentSystem, register_observability_event
from line.events import UserStoppedSpeaking
from pydantic import BaseModel


# Define your custom event
class LeadCaptured(BaseModel):
    customer_name: str
    interest_level: str
    contact_method: str


async def handle_new_call(
    system: VoiceAgentSystem, call_request: CallRequest
):
    # Set up your agent. ChatNode is your own reasoning node.
    chat_node = ChatNode()
    bridge = Bridge(chat_node)
    system.with_speaking_node(chat_node, bridge)

    # Register the event type for tracking.
    register_observability_event(
        system.user_bridge, system.harness, LeadCaptured
    )

    # Your agent can now yield LeadCaptured events.
    bridge.on(UserStoppedSpeaking).stream(
        chat_node.generate
    ).broadcast()
```

Log Metrics

Log metrics during a call to track performance and other call characteristics.

```python
import time

from line.events import LogMetric


# `process_user_request` is your application's own helper.
async def track_response_time(msg: Message):
    start_time = time.time()

    # Process the request
    result = await process_user_request(msg.event.content)

    # Calculate and log timing
    duration = time.time() - start_time
    yield LogMetric(name="response_time_seconds", value=duration)

    # Also yield business events
    if result.is_qualified_lead:
        yield LeadCaptured(
            customer_name=result.name,
            interest_level="high",
            contact_method="phone",
        )


# Track timing for all user requests
bridge.on(UserStoppedSpeaking).stream(track_response_time).broadcast()
```
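When a handler times several steps, a small helper can keep the measurement logic in one place. This is a plain-Python sketch, not part of the Line SDK; inside a handler you would yield a `LogMetric` with the recorded duration rather than append to a list:

```python
import time
from contextlib import contextmanager


@contextmanager
def timed(metrics, name):
    """Record a (name, elapsed_seconds) pair into `metrics` on exit."""
    start = time.perf_counter()
    try:
        yield
    finally:
        metrics.append((name, time.perf_counter() - start))


metrics = []
with timed(metrics, "response_time_seconds"):
    time.sleep(0.01)  # stand-in for awaiting process_user_request
```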

Using loguru for Proper Logging

Configure loguru as your logger to see results in the UI and have logs captured:

```python
from loguru import logger


# Use in your nodes. `client` is your OpenAI client and
# `convert_messages_to_openai` your own conversion helper.
class ChatNode(ReasoningNode):
    async def process_context(self, context: ConversationContext):
        logger.info(f"Processing {len(context.events)} events")

        # Your processing logic
        messages = convert_messages_to_openai(context.events)

        logger.debug(f"Generated {len(messages)} messages for LLM")

        for chunk in client.chat.completions.create(
            model="gpt-4", messages=messages, stream=True
        ):
            if chunk.choices[0].delta.content:
                content = chunk.choices[0].delta.content
                logger.trace(f"Streaming content: {content[:50]}...")
                yield AgentResponse(content=content)
```

Performance

Efficient Event Filtering

Filter events at the bridge level for better performance:

```python
# Good: filter at the bridge level
bridge.on(UserTranscriptionReceived, source=node.id).map(
    handle_user_input
)
bridge.on(ToolCall, tool_name="transfer").map(handle_transfer)
```
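Why this helps can be seen with a toy, SDK-free dispatcher (the names below are illustrative, not Line APIs): when the filter is part of the registration, non-matching events are dropped before any handler code runs, instead of every handler receiving every event and filtering inside its body.

```python
from collections import defaultdict


class MiniBridge:
    """Toy dispatcher: handlers are stored per event type, together with
    an attribute filter that is checked before the handler is invoked."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event_type, **filters):
        def register(handler):
            self._handlers[event_type].append((filters, handler))
            return handler
        return register

    def emit(self, event_type, **attrs):
        results = []
        for filters, handler in self._handlers[event_type]:
            # Non-matching events never reach the handler.
            if all(attrs.get(k) == v for k, v in filters.items()):
                results.append(handler(attrs))
        return results


mini = MiniBridge()


@mini.on("ToolCall", tool_name="transfer")
def handle_transfer(event):
    return f"transfer via {event['tool_name']}"


matched = mini.emit("ToolCall", tool_name="transfer")  # handler runs
ignored = mini.emit("ToolCall", tool_name="search")    # filtered out
```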