
Trace Anthropic Models in Langfuse

Anthropic provides advanced language models like Claude, known for their safety, helpfulness, and strong reasoning capabilities. By combining Anthropic’s models with Langfuse, you can trace, monitor, and analyze your AI workloads in development and production.

This notebook demonstrates two different ways to use Anthropic models with Langfuse:

  1. OpenTelemetry Instrumentation: Use the AnthropicInstrumentor from the opentelemetry-instrumentation-anthropic package to wrap Anthropic SDK calls and send OpenTelemetry spans to Langfuse.
  2. OpenAI SDK: Use Anthropic’s OpenAI-compatible endpoints via Langfuse’s OpenAI SDK wrapper.

What is Anthropic?
Anthropic is an AI safety company that develops Claude, a family of large language models designed to be helpful, harmless, and honest. Claude models excel at complex reasoning, analysis, and creative tasks.

What is Langfuse?
Langfuse is an open source platform for LLM observability and monitoring. It helps you trace and monitor your AI applications by capturing metadata, prompt details, token usage, latency, and more.

Step 1: Install Dependencies

Before you begin, install the necessary packages in your Python environment:

%pip install anthropic openai langfuse opentelemetry-instrumentation-anthropic

Step 2: Configure Langfuse SDK

Next, set up your Langfuse API keys. You can get these keys by signing up for a free Langfuse Cloud account or by self-hosting Langfuse. These environment variables are essential for the Langfuse client to authenticate and send data to your Langfuse project.

Also set your Anthropic API key, which you can create in the Anthropic Console.

import os
 
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." 
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region
 
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # Your Anthropic API key

With the environment variables set, we can now initialize the Langfuse client. get_client() reads these credentials from the environment and returns a client bound to your Langfuse project.

from langfuse import get_client
 
langfuse = get_client()
 
# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")

Langfuse client is authenticated and ready!

Approach 1: OpenTelemetry Instrumentation

Use the AnthropicInstrumentor from the opentelemetry-instrumentation-anthropic package to wrap Anthropic SDK calls and send OpenTelemetry spans to Langfuse.

from opentelemetry.instrumentation.anthropic import AnthropicInstrumentor
 
# Instrument the Anthropic SDK: every subsequent call emits OpenTelemetry spans
AnthropicInstrumentor().instrument()
 
from anthropic import Anthropic
 
# Initialize the Anthropic client
client = Anthropic(
    api_key=os.environ.get("ANTHROPIC_API_KEY")
)
 
# Make the API call to Anthropic
message = client.messages.create(
    model="claude-opus-4-20250514",
    max_tokens=1000,
    temperature=1,
    system="You are a world-class poet. Respond only with short poems.",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Why is the ocean salty?"
                }
            ]
        }
    ]
)
print(message.content)
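 
Streaming works the same way. The instrumentor should also capture streamed calls as spans, though exact behavior depends on the instrumentor version; here is a minimal sketch using the Anthropic SDK's streaming helper:

# Stream the response; the instrumented call is still traced
with client.messages.stream(
    model="claude-opus-4-20250514",
    max_tokens=1000,
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)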

Approach 2: OpenAI SDK Drop-in Replacement

Anthropic provides OpenAI-compatible endpoints that allow you to use the OpenAI SDK to interact with Claude models. This is particularly useful if you have existing code using the OpenAI SDK that you want to switch to Claude.

# Langfuse OpenAI client
from langfuse.openai import OpenAI
 
client = OpenAI(
    api_key=os.environ.get("ANTHROPIC_API_KEY"),  # Your Anthropic API key
    base_url="https://api.anthropic.com/v1/"  # Anthropic's API endpoint
)
 
response = client.chat.completions.create(
    model="claude-opus-4-20250514", # Anthropic model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who are you?"}
    ],
)
 
print(response.choices[0].message.content)
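 
The Langfuse OpenAI wrapper also accepts Langfuse-specific arguments on the create call, such as a custom generation name and metadata (the values below are purely illustrative):

response = client.chat.completions.create(
    name="claude-openai-compat",  # generation name shown in Langfuse
    metadata={"integration": "anthropic-openai-sdk"},  # illustrative metadata
    model="claude-opus-4-20250514",
    messages=[{"role": "user", "content": "Who are you?"}],
)
print(response.choices[0].message.content)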

View Traces in Langfuse

After executing the application, navigate to your Langfuse Trace Table. You will find detailed traces of the application’s execution, providing insights into the agent conversations, LLM calls, inputs, outputs, and performance metrics.
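 
If you run this code in a short-lived environment such as a script or serverless function, flush the client before the process exits so that buffered spans are not lost:

# Ensure all buffered events are sent to Langfuse before exiting
langfuse.flush()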

Example trace of an Anthropic call in the Langfuse UI

Interoperability with the Python SDK

You can use this integration together with the Langfuse Python SDK to add additional attributes to the trace.

The @observe() decorator provides a convenient way to automatically wrap your instrumented code and add additional attributes to the trace.

from langfuse import observe, get_client
 
langfuse = get_client()
 
@observe()
def my_instrumented_function(input):
    # my_llm_call stands in for your instrumented LLM call (sketch below)
    output = my_llm_call(input)
 
    langfuse.update_current_trace(
        input=input,
        output=output,
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-trace"],
        metadata={"email": "[email protected]"},
        version="1.0.0"
    )
 
    return output
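 
For illustration, my_llm_call could reuse the instrumented Anthropic client from Approach 1; the helper below is a sketch, not part of the integration itself:

from anthropic import Anthropic
 
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
 
def my_llm_call(input):
    # Instrumented by AnthropicInstrumentor, so this call appears as a
    # nested observation inside the @observe() trace
    message = anthropic_client.messages.create(
        model="claude-opus-4-20250514",
        max_tokens=1000,
        messages=[{"role": "user", "content": input}],
    )
    return message.content[0].text
 
print(my_instrumented_function("Why is the ocean salty?"))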

Learn more about using the @observe() decorator in the Python SDK docs.

Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.
