
Integrate Langfuse with Claude Agent SDK

This notebook demonstrates how to capture detailed traces from the Claude Agent SDK with Langfuse using OpenTelemetry.

What is Claude Agent SDK?
The Claude Agent SDK is Anthropic’s open-source framework for building AI agents. It provides a clean API for creating tool-using agents, including native support for MCP.

What is Langfuse?
Langfuse is the open source LLM engineering platform. It provides detailed tracing, monitoring, and analytics for every prompt, model response, and tool call in your agent, making it easy to debug, evaluate, and iterate on LLM applications.

Step 1: Install Dependencies

%pip install langfuse claude-agent-sdk "langsmith[claude-agent-sdk]" "langsmith[otel]" -q

Step 2: Set Up Environment Variables

Set up your Langfuse API keys (Langfuse Cloud or self-hosted) and your Anthropic API key (Anthropic Console).

import os
 
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_BASE_URL"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_BASE_URL"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region
 
# Your Anthropic API key
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."
 
# Enable OpenTelemetry export for the LangSmith-based Claude Agent SDK instrumentation
os.environ["LANGSMITH_OTEL_ENABLED"] = "true"
os.environ["LANGSMITH_OTEL_ONLY"] = "true"
os.environ["LANGSMITH_TRACING"] = "true"

With the environment variables set, we can initialize the Langfuse client. get_client() reads the credentials from the environment variables configured above.

from langfuse import get_client
 
langfuse = get_client()
 
# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")

Step 3: OpenTelemetry Instrumentation

Use the Claude Agent SDK instrumentation shipped with the langsmith package to instrument the agent SDK and send OpenTelemetry spans to Langfuse.

from langsmith.integrations.claude_agent_sdk import configure_claude_agent_sdk
 
# Configure Claude Agent SDK with OpenTelemetry tracing
configure_claude_agent_sdk()

Step 4: Build a Hello World Agent

Every tool call and model completion is captured as an OpenTelemetry span and forwarded to Langfuse.

import asyncio
from claude_agent_sdk import (
    ClaudeAgentOptions,
    ClaudeSDKClient,
    tool,
    create_sdk_mcp_server,
)
from typing import Any
 
@tool(
    "get_weather",
    "Gets the current weather for a given city",
    {
        "city": str,
    },
)
async def get_weather(args: dict[str, Any]) -> dict[str, Any]:
    """Simulated weather lookup tool"""
    city = args["city"]
 
    # Simulated weather data
    weather_data = {
        "Berlin": "Cloudy, 59°F",
        "New York": "Sunny, 75°F",
    }
 
    weather = weather_data.get(city, "Weather data not available")
    return {"content": [{"type": "text", "text": f"Weather in {city}: {weather}"}]}
 
 
async def main():
    # Create SDK MCP server with the weather tool
    weather_server = create_sdk_mcp_server(
        name="weather",
        version="1.0.0",
        tools=[get_weather],
    )
 
    options = ClaudeAgentOptions(
        model="claude-sonnet-4-5-20250929",
        system_prompt="You are a friendly travel assistant who helps with weather information.",
        mcp_servers={"weather": weather_server},
        allowed_tools=["mcp__weather__get_weather"],
    )
 
    async with ClaudeSDKClient(options=options) as client:
        await client.query("What's the weather like in Berlin and New York?")
 
        async for message in client.receive_response():
            print(message)
 
 
await main()
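
The top-level await main() call works in a Jupyter notebook because the kernel already runs an event loop. If you instead run this example as a standalone Python script, start the event loop explicitly (a minimal sketch, not part of the original notebook):

import asyncio

# Entry point for running the Step 4 agent outside a notebook.
if __name__ == "__main__":
    asyncio.run(main())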

Step 5: View the Trace in Langfuse

Head over to your Langfuse dashboard → Traces. You should see traces including all tool calls and model inputs/outputs.

Claude Agent SDK example trace in Langfuse

Link to trace in Langfuse
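
Spans are exported asynchronously in the background, so a trace may take a moment to appear. In short-lived scripts or notebooks you can flush the Langfuse client before the process exits; a minimal sketch using the client initialized in Step 2:

# Force export of any buffered Langfuse spans before the script or notebook kernel shuts down.
langfuse.flush()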

Interoperability with the Python SDK

You can use this integration together with the Langfuse Python SDK to add additional attributes to the trace.

The @observe() decorator provides a convenient way to wrap your instrumented code and enrich the resulting trace with attributes such as user_id, session_id, tags, and metadata.

from langfuse import observe, propagate_attributes, get_client
 
langfuse = get_client()
 
@observe()
def my_llm_pipeline(input):
    # Add additional attributes (user_id, session_id, metadata, version, tags) to all spans created within this execution scope
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-trace"],
        metadata={"email": "user@langfuse.com"},
        version="1.0.0"
    ):
 
        # YOUR APPLICATION CODE HERE
        result = call_llm(input)
 
        # Update the trace input and output
        langfuse.update_current_trace(
            input=input,
            output=result,
        )
 
        return result
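
As a hypothetical end-to-end sketch (not part of the original notebook), you could wrap the Step 4 agent in @observe() and propagate_attributes so the Claude Agent SDK spans are grouped under a single, annotated Langfuse trace. This assumes main() from Step 4 is already defined in the session; whether the SDK spans nest under this trace depends on OpenTelemetry context propagation in your setup.

from langfuse import observe, propagate_attributes
 
@observe()
async def traced_weather_run():
    # Attributes set here are propagated to spans created within this scope
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        tags=["claude-agent-sdk"],
    ):
        await main()  # the agent defined in Step 4
 
await traced_weather_run()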

Learn more about using the @observe() decorator in the Langfuse SDK instrumentation docs.

Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.
