Tools let your agent perform actions and retrieve information. The SDK supports three tool paradigms that differ in how they affect conversation flow.

Defining Tools

Any properly annotated function can be a tool. The SDK uses the function’s docstring as the description and type annotations for parameters:
from typing import Annotated

async def get_weather(
    ctx,
    city: Annotated[str, "The city to check weather for"],
    units: Annotated[str, "celsius or fahrenheit"] = "fahrenheit"
):
    """
    Look up the current weather in a given city.
    """
    return f"72°F and sunny in {city}"
The first parameter of every tool must be ctx (the tool context). This provides access to conversation state and is required for forward compatibility. Your tool parameters follow after ctx.
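The SDK's own schema-building code isn't shown here, but the Annotated metadata it relies on can be introspected with nothing but the standard library. This sketch (using the same get_weather example) shows how a per-parameter description map can be recovered; describe_params is a hypothetical helper, not an SDK function:

```python
from typing import Annotated, get_args, get_origin, get_type_hints

async def get_weather(
    ctx,
    city: Annotated[str, "The city to check weather for"],
    units: Annotated[str, "celsius or fahrenheit"] = "fahrenheit",
):
    """Look up the current weather in a given city."""
    return f"72°F and sunny in {city}"

def describe_params(fn):
    """Collect (type, description) for each Annotated parameter."""
    # include_extras=True preserves the Annotated metadata.
    hints = get_type_hints(fn, include_extras=True)
    params = {}
    for name, hint in hints.items():
        if get_origin(hint) is Annotated:
            base, *meta = get_args(hint)
            params[name] = (base, meta[0])
    return params

params = describe_params(get_weather)
print(params)
```

Note that ctx carries no annotation, so it is naturally excluded from the parameter schema.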

Tool Types

Plain functions passed in the tools list are automatically wrapped as loopback tools. Use decorators (@loopback_tool, @passthrough_tool, @handoff_tool) for explicit control.

Loopback Tools (@loopback_tool)

The default behavior. The tool’s result is sent back to the LLM, which can then continue generating a response.
from line.llm_agent import loopback_tool

@loopback_tool
async def get_account_balance(ctx, account_id: Annotated[str, "The account ID"]):
    """Look up the balance for a customer account."""
    balance = await api.get_balance(account_id)
    return f"${balance:.2f}"
Use for: Information retrieval, calculations, API queries.

Passthrough Tools (@passthrough_tool)

Output events go directly to the user, bypassing the LLM. Use for deterministic actions.
from line.llm_agent import passthrough_tool
from line.events import AgentSendText, AgentEndCall

@passthrough_tool
async def end_call_with_message(ctx, message: Annotated[str, "Goodbye message"]):
    """End the call with a custom goodbye message."""
    yield AgentSendText(text=message)
    yield AgentEndCall()
Use for: Call control (EndCall, TransferCall, SendDtmf), deterministic responses.
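Because a passthrough tool is an async generator, its yielded events can be consumed directly. This self-contained sketch uses simple dataclasses as stand-ins for the real line.events classes to show what the runtime receives and forwards to the user:

```python
import asyncio
from dataclasses import dataclass

# Stand-ins for line.events; in real code these come from the SDK.
@dataclass
class AgentSendText:
    text: str

@dataclass
class AgentEndCall:
    pass

async def end_call_with_message(ctx, message: str):
    """End the call with a custom goodbye message."""
    yield AgentSendText(text=message)
    yield AgentEndCall()

async def main():
    # The runtime would forward each yielded event straight to the user,
    # without sending anything back to the LLM.
    return [e async for e in end_call_with_message(None, "Goodbye!")]

events = asyncio.run(main())
print(events)  # [AgentSendText(text='Goodbye!'), AgentEndCall()]
```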

Handoff Tools (@handoff_tool)

Transfers control to another handler. All future events are routed to the handoff target instead of the original agent.
from typing import Annotated
from line.llm_agent import handoff_tool
from line.events import AgentHandedOff, AgentSendText, UserTurnEnded, AgentEndCall

@handoff_tool
async def run_satisfaction_survey(
    ctx,
    customer_name: Annotated[str, "The customer's name"],
    event
):
    """Hand off to a customer satisfaction survey at the end of the call."""
    if isinstance(event, AgentHandedOff):
        # First call - send introduction
        yield AgentSendText(
            text=f"Thank you for your call, {customer_name}. "
            "Please stay on the line for a brief satisfaction survey. "
            "On a scale of 1 to 5, how would you rate your experience today?"
        )
        return

    # Subsequent calls - handle survey responses
    if isinstance(event, UserTurnEnded):
        user_response = event.content[0].content if event.content else ""
        yield AgentSendText(text=f"You rated us {user_response}. Thank you for your feedback!")
        yield AgentEndCall()
Use for: Custom multi-step flows, specialized handlers with their own logic. When using a handoff tool, the event parameter receives different values depending on timing:
  • First call: event is AgentHandedOff — use this to send a transition message
  • Subsequent calls: event is the actual InputEvent (UserTurnEnded, etc.)
Once a handoff occurs, the original agent no longer receives events. The handoff tool function handles all future conversation turns.
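The two-phase event flow can be simulated without the SDK. In this sketch the event classes are simplified stand-ins for line.events (UserTurnEnded here carries a plain text field rather than the real structured content), and the tool is invoked twice the way the runtime would: once with AgentHandedOff, then with the user's reply:

```python
import asyncio
from dataclasses import dataclass

# Simplified stand-ins for line.events.
@dataclass
class AgentHandedOff:
    pass

@dataclass
class AgentSendText:
    text: str

@dataclass
class AgentEndCall:
    pass

@dataclass
class UserTurnEnded:
    text: str  # simplified: the real event carries structured content

async def run_satisfaction_survey(ctx, customer_name, event):
    if isinstance(event, AgentHandedOff):
        # First call: send the transition message.
        yield AgentSendText(text=f"Thank you, {customer_name}. Rate us 1 to 5?")
        return
    if isinstance(event, UserTurnEnded):
        # Subsequent calls: handle survey responses.
        yield AgentSendText(text=f"You rated us {event.text}. Thank you!")
        yield AgentEndCall()

async def main():
    # First delivery: the handoff event triggers the introduction.
    intro = [e async for e in run_satisfaction_survey(None, "Alice", AgentHandedOff())]
    # Every later event is routed to the same tool instead of the original agent.
    reply = [e async for e in run_satisfaction_survey(None, "Alice", UserTurnEnded("5"))]
    return intro, reply

intro, reply = asyncio.run(main())
```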
To hand off to another LlmAgent, use the agent_as_handoff helper instead of writing a raw @handoff_tool. It handles the delegation automatically.

Built-in Tools

from line.llm_agent import end_call, send_dtmf, transfer_call, web_search

agent = LlmAgent(
    model="anthropic/claude-haiku-4-5-20251001",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    tools=[end_call, send_dtmf, transfer_call, web_search],
    config=LlmConfig(...),
)
| Tool | Description | When to Use |
| --- | --- | --- |
| end_call | Ends the call (with a goodbye message) | User says "goodbye", issue resolved, or conversation complete |
| send_dtmf | Sends DTMF tones (keypad digits) | Navigating phone menus and IVR systems |
| transfer_call | Transfers to another number (E.164 format) | Escalating to human agents, routing to departments |
| web_search | Searches the web for real-time info | Current events, live prices, recent news the LLM doesn't know |
Examples:
# End call: Let the LLM decide when conversation is complete
tools=[end_call]  # LLM calls this when user says "thanks, bye!"

# Transfer: Route to human support
tools=[transfer_call]  # LLM calls transfer_call(target_phone_number="+18005551234")

# Web search with custom context size
tools=[web_search(search_context_size="high")]  # More context for complex queries

agent_as_handoff

Creates a handoff tool from another Agent—the easiest way to implement multi-agent workflows.
from line.llm_agent import LlmAgent, LlmConfig, agent_as_handoff, end_call

spanish_agent = LlmAgent(
    model="gpt-5-nano",
    api_key=os.getenv("OPENAI_API_KEY"),
    tools=[end_call],
    config=LlmConfig(
        system_prompt="You speak only in Spanish.",
        introduction="¡Hola! ¿Cómo puedo ayudarte?",
    ),
)

main_agent = LlmAgent(
    model="anthropic/claude-haiku-4-5-20251001",
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    tools=[
        end_call,
        agent_as_handoff(
            spanish_agent,
            handoff_message="Transferring to Spanish support...",
            name="transfer_to_spanish",
            description="Use when user requests Spanish.",
        ),
    ],
    config=LlmConfig(...),
)
| Parameter | Type | Description |
| --- | --- | --- |
| agent | Agent | The agent to hand off to |
| handoff_message | Optional[str] | Message spoken before the handoff |
| name | Optional[str] | Tool name for the LLM |
| description | Optional[str] | When the LLM should use this tool |
When called, agent_as_handoff automatically sends the handoff message, triggers the new agent’s introduction, and routes all future events to it.
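The helper's observable behavior can be sketched without the SDK. Here agent_as_handoff_sketch and FakeAgent are hypothetical stand-ins (the real helper also takes over event routing, which this sketch only notes in a comment):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class AgentSendText:
    text: str

class FakeAgent:
    """Minimal stand-in for an agent with an introduction."""
    def __init__(self, introduction: str):
        self.introduction = introduction

def agent_as_handoff_sketch(agent, handoff_message: str):
    async def tool(ctx, event):
        # 1. Speak the transition message.
        yield AgentSendText(text=handoff_message)
        # 2. Trigger the target agent's introduction.
        yield AgentSendText(text=agent.introduction)
        # 3. (The real helper would now route all future events to `agent`.)
    return tool

async def main():
    spanish = FakeAgent(introduction="¡Hola! ¿Cómo puedo ayudarte?")
    tool = agent_as_handoff_sketch(spanish, "Transferring to Spanish support...")
    return [e.text async for e in tool(None, None)]

texts = asyncio.run(main())
print(texts)
```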
See Advanced Patterns for a complete multi-agent example with loopback, passthrough, and handoff tools.

Long-Running Tools

By default, a tool call is terminated when the agent is interrupted (though any reasoning and tool call response values already produced are preserved for the next generation). For tools that are expected to take a long time to complete, set is_background=True: the tool keeps running in the background until it finishes, regardless of interruptions, then loops back to the LLM to produce a response.
from typing import Annotated
from line.llm_agent import loopback_tool

@loopback_tool(is_background=True)
async def search_database(ctx, query: Annotated[str, "Search query"]) -> str:
    """Search the database - may take several seconds."""
    results = await slow_database_search(query)
    return format_results(results)

@loopback_tool(is_background=True)
async def generate_report(ctx, report_type: Annotated[str, "Type of report"]) -> str:
    """Generate a detailed report - runs in background."""
    report = await compile_report(report_type)
    return report
Background tools are useful when:
  • The operation may take longer than typical user patience (e.g., complex searches, report generation)
  • You want the user to be able to speak while the operation completes
  • The result should be delivered even if the user interrupts with another question
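The "survives interruption" behavior is roughly what asyncio.shield provides: the caller's await can be cancelled while the underlying task runs to completion. This is a plain-asyncio sketch of the idea, not the SDK's implementation:

```python
import asyncio

async def slow_search(query: str) -> str:
    # Stands in for a slow database query.
    await asyncio.sleep(0.05)
    return f"results for {query!r}"

async def main():
    # is_background=True behaves roughly like this: the work runs as an
    # independent task, so interrupting the caller does not cancel it.
    task = asyncio.create_task(slow_search("orders"))
    protected = asyncio.shield(task)

    await asyncio.sleep(0.01)
    protected.cancel()  # simulate the user interrupting the agent
    try:
        await protected
    except asyncio.CancelledError:
        pass  # the caller's await was cancelled...

    return await task  # ...but the work still completes

result = asyncio.run(main())
print(result)  # results for 'orders'
```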