March 2026
Platform-wide API, PVC, and client library updates for this month are in Changelog 2026 (March 2026).
February 4, 2026
AgentUpdateCall Output Event
Added AgentUpdateCall event for dynamically updating call configuration during a conversation:
```python
from line.events import AgentUpdateCall

# In an agent's process method:
yield AgentUpdateCall(voice_id="5ee9feff-1265-424a-9d7f-8e4d431a12c7")
yield AgentUpdateCall(pronunciation_dict_id="dict-123")
```
| Field | Description |
|---|---|
| voice_id | Updates the agent's voice |
| pronunciation_dict_id | Updates the pronunciation dictionary |
All fields are optional—only set fields are updated. See Events for details.
February 1, 2026
Line SDK v0.2 — Major Release
We’re releasing Line SDK v0.2, a complete redesign of the voice agent framework focused on simplicity, streaming performance, and seamless LLM integration. This release introduces a new async iterable architecture that replaces the previous event bus system.
Breaking Changes: v0.2 is not backwards compatible with v0.1.x. See the Migration Guide below for detailed upgrade instructions.
What’s changing? Line SDK v0.2 makes it much simpler to build voice agents. Instead of manually wiring together multiple components (systems, bridges, nodes), you now write a single function that returns your agent. The SDK handles audio, interruptions, and conversation flow automatically.
Why upgrade?
- Faster development — Build agents in hours instead of days with less boilerplate code
- Easier maintenance — Fewer moving parts means fewer bugs and simpler debugging
- Better reliability — Built-in error handling, retries, and fallback models
- More flexibility — Switch between 100+ AI providers (OpenAI, Anthropic, Google, etc.) without code changes
- Powerful tools — Add capabilities like web search, call transfers, and multi-agent handoffs with one line of code
What’s New in v0.2
Simplified Agent Architecture
The new architecture replaces the VoiceAgentSystem, Bus, Bridge, and ReasoningNode pattern with a single async iterable function:
```python
import os

from line import CallRequest
from line.llm_agent import LlmAgent, LlmConfig, end_call
from line.voice_agent_app import AgentEnv, VoiceAgentApp


async def get_agent(env: AgentEnv, call_request: CallRequest):
    return LlmAgent(
        model="anthropic/claude-haiku-4-5-20251001",
        api_key=os.getenv("ANTHROPIC_API_KEY"),
        tools=[end_call],
        config=LlmConfig(
            system_prompt="You are a helpful assistant.",
            introduction="Hello! How can I help you today?",
        ),
    )


app = VoiceAgentApp(get_agent=get_agent)
```
Benefits:
- Less boilerplate code
- No manual event routing or bridge configuration
- Automatic conversation history management
- Built-in interruption handling
- Quick and easy tool definition
Built-in LLM Support via LiteLLM
LlmAgent provides unified access to 100+ LLM providers through LiteLLM:
```python
# OpenAI
LlmAgent(model="gpt-5-nano", api_key=os.getenv("OPENAI_API_KEY"), ...)

# Anthropic
LlmAgent(model="anthropic/claude-haiku-4-5-20251001", api_key=os.getenv("ANTHROPIC_API_KEY"), ...)

# Google Gemini
LlmAgent(model="gemini/gemini-2.5-flash-preview-09-2025", api_key=os.getenv("GEMINI_API_KEY"), ...)

# With fallbacks
LlmAgent(
    model="gpt-5-nano",
    config=LlmConfig(fallbacks=["anthropic/claude-haiku-4-5-20251001", "gemini/gemini-2.5-flash-preview-09-2025"]),
    ...
)
```
Decorator-Based Tool System
Define agent capabilities using simple decorators. Three tool types cover all common scenarios:
| Tool Type | Decorator | What It Does | Example Use Case |
|---|---|---|---|
| Loopback | @loopback_tool | Fetches information, then the agent speaks the answer naturally | Looking up order status, checking account balance |
| Passthrough | @passthrough_tool | Takes an immediate action without additional AI processing | Ending a call, transferring to a phone number |
| Handoff | @handoff_tool | Transfers the conversation to a different specialized agent | Routing to Spanish support, escalating to billing |
```python
from typing import Annotated

from line.events import AgentEndCall
from line.llm_agent import loopback_tool, passthrough_tool, handoff_tool


@loopback_tool
async def get_weather(ctx, city: Annotated[str, "City name"]) -> str:
    """Get current weather for a city."""
    return f"72°F and sunny in {city}"


@passthrough_tool
async def end_call(ctx):
    """End the call."""
    yield AgentEndCall()


@handoff_tool
async def transfer_to_support(ctx, event):
    """Transfer to support agent."""
    # support_agent is another agent instance defined elsewhere
    async for output in support_agent.process(ctx.turn_env, event):
        yield output
```
Long-running tools can execute in the background without blocking the LLM:
```python
from typing import Annotated

from line.llm_agent import loopback_tool


@loopback_tool(is_background=True)
async def check_bank_balance(ctx, account_id: Annotated[str, "Account ID"]):
    """Check account balance (may take a few seconds)."""
    yield "Checking your balance..."  # Immediate acknowledgment
    balance = await api.get_balance(account_id)  # Long operation (api is your backend client)
    yield f"Your balance is ${balance:.2f}"  # Triggers new LLM completion
```
Common operations available out of the box:
```python
from line.llm_agent import end_call, send_dtmf, transfer_call, web_search, agent_as_handoff

agent = LlmAgent(
    tools=[
        end_call,       # End the call
        send_dtmf,      # Send DTMF tones
        transfer_call,  # Transfer to phone number
        web_search,     # Real-time web search
        agent_as_handoff(other_agent, name="transfer_to_billing"),
    ],
    ...
)
```
Multi-Agent Workflows
Create sophisticated agent routing with agent_as_handoff:
```python
spanish_agent = LlmAgent(
    model="gpt-5-nano",
    config=LlmConfig(system_prompt="Speak only in Spanish.", ...),
    ...
)

main_agent = LlmAgent(
    tools=[
        agent_as_handoff(
            spanish_agent,
            handoff_message="Transferring to Spanish support...",
            name="transfer_to_spanish",
            description="Transfer when user requests Spanish.",
        ),
    ],
    ...
)
```
Structured Event System
Events are how your agent communicates with the outside world. Output events are actions your agent takes (speaking, ending calls). Input events are things that happen during a call (user speaks, call starts).
Output Events (agent → harness):
- AgentSendText — Send text to be spoken
- AgentEndCall — End the call
- AgentTransferCall — Transfer to another number
- AgentSendDtmf — Send DTMF tone
- AgentToolCalled / AgentToolReturned — Tool execution tracking
- LogMetric / LogMessage — Observability
Input Events (harness → agent):
- CallStarted / CallEnded — Call lifecycle
- UserTurnStarted / UserTurnEnded — User speaking
- UserTextSent / UserDtmfSent — User content
- AgentHandedOff — Handoff notification
All input events include a history field with the complete conversation context.
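For example, here is a minimal sketch of the flow: the agent receives input events, can inspect their history, and yields output events. The GreeterAgent name is illustrative, and it assumes history is the list of prior events as described under Structural Changes below:

```python
from line.events import AgentSendText, CallStarted, UserTurnEnded


class GreeterAgent:
    """Illustrative sketch: input events in, output events out."""

    async def process(self, env, event):
        if isinstance(event, CallStarted):
            yield AgentSendText(text="Hi! You've reached the demo line.")
        elif isinstance(event, UserTurnEnded):
            # Every input event carries the conversation so far.
            prior_events = len(event.history)
            yield AgentSendText(text=f"Thanks! That makes {prior_events} events so far.")
```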
Enhanced Configuration
Fine-tune how your agent thinks and responds. LlmConfig lets you control the AI’s personality, response length, creativity, and reliability:
```python
LlmConfig(
    system_prompt="You are a helpful assistant.",
    introduction="Hello! How can I help?",
    # Sampling parameters
    temperature=0.7,
    max_tokens=1024,
    top_p=0.95,
    # Resilience
    num_retries=2,
    fallbacks=["gpt-5-nano"],
    timeout=30.0,
    # Provider-specific options
    extra={"reasoning_effort": "high"},
)
```
Migration Guide from v0.1.x to v0.2
This guide walks you through upgrading your existing v0.1.x agents to v0.2. The migration involves updating imports, simplifying your agent setup, and adopting the new tool system. Most agents can be migrated in under an hour.
Overview of Changes
| v0.1.x | v0.2 |
|---|---|
| VoiceAgentSystem + Bus + Bridge | VoiceAgentApp with get_agent callback |
| ReasoningNode subclasses | LlmAgent or custom Agent protocol |
| call_handler(system, request) | get_agent(env, request) -> Agent |
| Manual event routing | Automatic event dispatch with filters |
| process_context() method | process(env, event) async iterable |
Step 1: Update Imports
```python
# v0.1.x
from line.voice_agent_app import VoiceAgentApp
from line.voice_agent_system import VoiceAgentSystem
from line.bridge import Bridge
from line.nodes import ReasoningNode
from line.events import (
    AgentSpeechSent,
    UserTranscriptionReceived,
    EndCall,
    TransferCall,
)

# v0.2
from line.voice_agent_app import VoiceAgentApp, AgentEnv
from line.llm_agent import LlmAgent, LlmConfig
from line.llm_agent import end_call, transfer_call, loopback_tool, passthrough_tool
from line.events import (
    AgentSendText,
    AgentEndCall,
    AgentTransferCall,
    UserTurnEnded,
    CallStarted,
)
```
Step 2: Replace VoiceAgentSystem with get_agent
In v0.1.x, event routing was configured manually via bridge.on(). In v0.2, event dispatch is automatic with customizable run and cancel filters.
```python
# v0.1.x
from line.voice_agent_app import VoiceAgentApp
from line.voice_agent_system import VoiceAgentSystem
from line.bridge import Bridge
from line.nodes import ReasoningNode
from line.events import (
    UserTranscriptionReceived,
    UserStoppedSpeaking,
    DTMFInputEvent,
)


class MyReasoningNode(ReasoningNode):
    async def process_context(self, context):
        # Your LLM logic here
        response = await call_llm(context.messages)
        yield AgentResponse(content=response)


async def call_handler(system: VoiceAgentSystem, call_request):
    node = MyReasoningNode(system_prompt="You are helpful.")
    bridge = Bridge(node)
    system.with_speaking_node(node, bridge)

    # Manual event routing with bridge.on()
    bridge.on(UserTranscriptionReceived).map(node.add_event)
    bridge.on(UserStoppedSpeaking).stream(node.generate).broadcast()

    # DTMF events required explicit routing
    bridge.on(DTMFInputEvent).map(node.handle_dtmf)

    await system.start()
    await system.send_initial_message("Hello!")
    await system.wait_for_shutdown()


app = VoiceAgentApp(call_handler=call_handler)
```
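The v0.2 replacement is a single get_agent callback. A minimal sketch, following the same shape as the Simplified Agent Architecture example above:

```python
import os

from line import CallRequest
from line.llm_agent import LlmAgent, LlmConfig
from line.voice_agent_app import AgentEnv, VoiceAgentApp


async def get_agent(env: AgentEnv, call_request: CallRequest):
    # No Bus or Bridge: event dispatch and interruption handling are automatic.
    # DTMF routing is covered by run filters (see below).
    return LlmAgent(
        model="gpt-5-nano",
        api_key=os.getenv("OPENAI_API_KEY"),
        config=LlmConfig(system_prompt="You are helpful.", introduction="Hello!"),
    )


app = VoiceAgentApp(get_agent=get_agent)
```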
Run and Cancel Filters
Filters control your agent’s behavior during a call:
- Run filters determine what triggers your agent to respond (e.g., when the user finishes speaking)
- Cancel filters determine what interrupts your agent (e.g., when the user starts talking over the agent)
You can customize these by returning a tuple instead of just the agent:
```python
from typing import Union, Tuple

# A filter is either a list of event types or a callable (InputEvent) -> bool
AgentSpec = Union[Agent, Tuple[Agent, run_filter, cancel_filter]]
```
| Filter | Purpose | Default |
|---|---|---|
| run_filter | Events that trigger agent processing | [CallStarted, UserTurnEnded, CallEnded] |
| cancel_filter | Events that cancel in-progress agent tasks | [UserTurnStarted] |
Example: Agent that responds to DTMF input
```python
from line.events import (
    CallStarted, CallEnded, UserTurnEnded, UserTurnStarted, UserDtmfSent
)


async def get_agent(env: AgentEnv, call_request: CallRequest):
    agent = LlmAgent(...)

    # Include UserDtmfSent in run_filter to process DTMF
    run_filter = [CallStarted, UserTurnEnded, UserDtmfSent, CallEnded]
    cancel_filter = [UserTurnStarted]

    return (agent, run_filter, cancel_filter)
```
Example: Agent that doesn’t get interrupted
```python
async def get_agent(env: AgentEnv, call_request: CallRequest):
    agent = LlmAgent(...)

    # Empty cancel_filter = agent won't be interrupted
    run_filter = [CallStarted, UserTurnEnded, CallEnded]
    cancel_filter = []

    return (agent, run_filter, cancel_filter)
```
Example: Custom filter function
```python
from line.events import CallEnded, CallStarted, InputEvent, UserTurnEnded, UserTurnStarted


def my_run_filter(event: InputEvent) -> bool:
    """Only process events during business hours."""
    if isinstance(event, CallStarted):
        return is_business_hours()  # your own scheduling helper
    return isinstance(event, (UserTurnEnded, CallEnded))


async def get_agent(env: AgentEnv, call_request: CallRequest):
    agent = LlmAgent(...)
    return (agent, my_run_filter, [UserTurnStarted])
```
Step 3: Migrate Event Handling
```python
# v0.1.x event names
AgentSpeechSent            # Agent spoke
UserTranscriptionReceived  # User spoke
EndCall                    # End call
TransferCall               # Transfer call
```
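The v0.2 equivalents (the full mapping is in the Event Renames table under Breaking Changes Summary):

```python
# v0.2 event names
AgentSendText      # Agent speaks (output); appears as AgentTextSent in history
UserTextSent       # User spoke (UserTurnEnded marks the end of their turn)
AgentEndCall       # End call
AgentTransferCall  # Transfer call
```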
```python
# Manual event handling in ReasoningNode
class MyNode(ReasoningNode):
    async def process_context(self, context):
        for event in context.events:
            if isinstance(event, UserTranscriptionReceived):
                user_message = event.transcription
```
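In v0.2, per-event handling moves into the agent's process method over typed input events. A minimal sketch, mirroring the Custom Agent Protocol example below:

```python
from line.events import AgentSendText, UserTurnEnded


class MyAgent:
    async def process(self, env, event):
        if isinstance(event, UserTurnEnded):
            user_message = event.content[0].content
            yield AgentSendText(text=f"You said: {user_message}")
```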
Step 4: Migrate Tool Handling

```python
# Manual tool handling in ReasoningNode
class MyNode(ReasoningNode):
    async def process_context(self, context):
        response = await call_llm(context)
        # Parse tool calls from LLM response
        if tool_call := extract_tool_call(response):
            result = await self.execute_tool(tool_call)
            # Manually add to context and call LLM again
            context.add_tool_result(result)
            response = await call_llm(context)
```
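In v0.2 the SDK parses tool calls, runs the tool, and feeds the result back to the LLM for you; you only declare the tool. A sketch reusing the weather tool from above:

```python
from typing import Annotated

from line.llm_agent import LlmAgent, loopback_tool


@loopback_tool
async def get_weather(ctx, city: Annotated[str, "City name"]) -> str:
    """Get current weather for a city."""
    return f"72°F and sunny in {city}"


agent = LlmAgent(model="gpt-5-nano", tools=[get_weather], ...)
```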
Step 5: Migrate Multi-Agent Patterns
```python
# Manual agent switching
class MainNode(ReasoningNode):
    def __init__(self, spanish_node):
        self.spanish_node = spanish_node
        self.use_spanish = False

    async def process_context(self, context):
        if self.should_switch_to_spanish(context):
            self.use_spanish = True
        # Complex manual state management
```
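In v0.2 this collapses to a single agent_as_handoff tool (see Multi-Agent Workflows above):

```python
main_agent = LlmAgent(
    tools=[
        agent_as_handoff(
            spanish_agent,  # defined as in Multi-Agent Workflows above
            handoff_message="Transferring to Spanish support...",
            name="transfer_to_spanish",
        ),
    ],
    ...
)
```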
Removed APIs
The following APIs from v0.1.x have been removed with no direct replacement:
| Removed | Alternative |
|---|---|
| VoiceAgentSystem | Use VoiceAgentApp with get_agent |
| Bus | Events are dispatched automatically |
| Bridge | Use run/cancel filters on AgentSpec |
| ReasoningNode | Use LlmAgent or implement the Agent protocol |
| ConversationHarness | Handled internally by ConversationRunner |
| EventsRegistry | Use typed event classes directly |
Custom Agent Protocol
If you need custom logic beyond LlmAgent, implement the Agent protocol:
```python
from typing import AsyncIterable

from line.events import (
    InputEvent,
    OutputEvent,
    AgentSendText,
    CallStarted,
    UserTurnEnded,
)


class CustomAgent:
    """Custom agent implementing the Agent protocol."""

    async def process(self, env, event: InputEvent) -> AsyncIterable[OutputEvent]:
        if isinstance(event, CallStarted):
            yield AgentSendText(text="Hello from custom agent!")
        elif isinstance(event, UserTurnEnded):
            # Your custom logic here
            user_message = event.content[0].content
            response = await your_custom_logic(user_message, event.history)
            yield AgentSendText(text=response)
```
Breaking Changes Summary
This section provides a quick reference for all breaking changes. Use this as a checklist when migrating your code.
Event Renames
| v0.1.x | v0.2 |
|---|---|
| AgentSpeechSent | AgentSendText (output) / AgentTextSent (input) |
| UserTranscriptionReceived | UserTextSent / UserTurnEnded |
| UserStartedSpeaking | UserTurnStarted |
| UserStoppedSpeaking | UserTurnEnded |
| AgentStartedSpeaking | AgentTurnStarted |
| AgentStoppedSpeaking | AgentTurnEnded |
| EndCall | AgentEndCall |
| TransferCall | AgentTransferCall |
| DTMFInputEvent | UserDtmfSent |
| DTMFOutputEvent | AgentSendDtmf |
Output vs. Input events: AgentSendText is an output event you yield to make the agent speak. AgentTextSent is an input event you receive confirming what was spoken (appears in history).
Structural Changes
- History in events: All input events now include an optional history field with complete conversation context. When history is None, the event is inside a history list; when it contains a list, the event has full context attached.
- Tool events: ToolCall / ToolResult replaced with structured AgentToolCalled / AgentToolReturned
- Event IDs: All events now have stable event_id fields for tracking
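A short sketch of the history convention described above (the helper name is illustrative):

```python
def describe(event):
    if event.history is None:
        # This event is itself an entry inside another event's history list.
        return f"history entry {event.event_id}"
    # Top-level input event: full conversation context is attached.
    return f"{event.event_id} with {len(event.history)} prior events"
```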
Configuration Changes
| v0.1.x | v0.2 |
|---|---|
| CallRequest.agent.system_prompt | LlmConfig.system_prompt |
| CallRequest.agent.introduction | LlmConfig.introduction |
| Manual LLM parameters | LlmConfig with full LiteLLM support |
Use LlmConfig.from_call_request(call_request, fallback_system_prompt="...", fallback_introduction="...") to automatically inherit configuration from the Cartesia Playground while providing sensible defaults. See Agents documentation for details.
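For example:

```python
config = LlmConfig.from_call_request(
    call_request,
    fallback_system_prompt="You are a helpful assistant.",
    fallback_introduction="Hello! How can I help?",
)
agent = LlmAgent(model="gpt-5-nano", config=config, ...)
```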
New Dependencies
v0.2 introduces the following dependencies:
```
litellm              # Multi-provider LLM support
pydantic             # Type validation for events
phonenumbers >= 9.0  # Phone number validation for transfer_call
```
Optional dependencies for examples:
```
exa-py             # Exa web search integration
duckduckgo-search  # Fallback web search
```
Getting Help