
Integrate Langfuse with the Strands Agents SDK

This notebook demonstrates how to monitor and debug your Strands Agent effectively using Langfuse. By following this guide, you will be able to trace your agent’s operations, gaining insights into its behavior and performance.

What is the Strands Agents SDK? The Strands Agents SDK, developed by AWS, is a toolkit for building AI agents that can interact with various tools and services, including AWS Bedrock.

What is Langfuse? Langfuse is an open-source LLM engineering platform. It provides robust tracing, debugging, evaluation, and monitoring capabilities for AI agents and LLM applications. Langfuse integrates seamlessly with multiple tools and frameworks through native integrations, OpenTelemetry, and its SDKs.

Get Started

We’ll guide you through a simple example of using Strands agents and integrating them with Langfuse for observability.

Step 1: Install Dependencies

ℹ️

To enable OTEL exporting, install Strands Agents with the otel extra dependencies: pip install 'strands-agents[otel]'

%pip install "strands-agents[otel]" strands-agents-tools langfuse

Step 2: Set Environment Variables

Next, we need to configure the environment variables for Langfuse and OpenAI (the model provider used in this example).

import os
 
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." 
os.environ["LANGFUSE_BASE_URL"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_BASE_URL"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region
 
# Your OpenAI API key
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
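Strands emits spans over OTLP, so the OpenTelemetry exporter also needs to know where to send them. A minimal sketch of pointing it at Langfuse's public OTel endpoint (/api/public/otel), using the credentials set above; the placeholder fallbacks are only there to keep the sketch self-contained:

```python
import base64
import os

# Assemble Basic-auth credentials from the Langfuse keys set above
# (placeholder fallbacks keep this sketch runnable on its own)
public_key = os.environ.get("LANGFUSE_PUBLIC_KEY", "pk-lf-...")
secret_key = os.environ.get("LANGFUSE_SECRET_KEY", "sk-lf-...")
langfuse_auth = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()

# Point the OTLP exporter at Langfuse's OpenTelemetry endpoint
host = os.environ.get("LANGFUSE_BASE_URL", "https://cloud.langfuse.com")
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = f"{host}/api/public/otel"
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {langfuse_auth}"
```

These standard OTel environment variables are read by the OTLP exporter that Strands sets up when the otel extra is installed.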

With the environment variables set, we can now initialize the Langfuse client. get_client() initializes the Langfuse client using the credentials provided in the environment variables.

from langfuse import get_client
 
langfuse = get_client()
 
# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")

Step 3: Initialize the Strands Agent

With the environment set up, we can now initialize the Strands agent. This involves defining the agent's behavior via a system prompt and configuring the underlying LLM.

from strands import Agent
from strands.models.openai import OpenAIModel
 
 
# Configure the OpenAI model to be used by the agent
model = OpenAIModel(
    model_id="gpt-5", # Example model ID
)
 
# Configure the agent
agent = Agent(
    model=model,
    system_prompt="You are a helpful assistant that can answer questions and help with tasks.",
)
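You can also attach Langfuse-specific attributes (session, user, tags) to the agent's traces. The sketch below shows the attribute shape; the trace_attributes parameter and StrandsTelemetry setup in the comments follow the Strands tracing docs, and the concrete values are hypothetical examples:

```python
# Hypothetical example values; the attribute keys follow the naming
# used by Strands' tracing integration.
trace_attributes = {
    "session.id": "abc-1234",           # groups traces into one session in Langfuse
    "user.id": "user@example.com",      # associates traces with a user
    "langfuse.tags": ["strands-demo"],  # tags shown on the Langfuse trace
}

# With strands-agents[otel] installed, the attributes would be wired up like this
# (commented out so the sketch stays self-contained):
#
# from strands import Agent
# from strands.models.openai import OpenAIModel
# from strands.telemetry import StrandsTelemetry
#
# StrandsTelemetry().setup_otlp_exporter()  # export spans via the OTLP env vars
# agent = Agent(
#     model=OpenAIModel(model_id="gpt-5"),
#     system_prompt="You are a helpful assistant.",
#     trace_attributes=trace_attributes,
# )
```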

Step 4: Run the Agent

Now it’s time to run the initialized agent with a sample query. The agent will process the input, and Langfuse will automatically trace its execution via the OpenTelemetry integration configured earlier.

results = agent("Hi, where can I eat in San Francisco?")

Step 5: View Traces in Langfuse

After running the agent, you can navigate to your Langfuse project to view the detailed traces. These traces provide a step-by-step breakdown of the agent's execution, including LLM calls, tool usage (if any), inputs, outputs, latencies, and costs.

Example trace of a Strands agent interaction in Langfuse

Public Example Strands Agent Trace

Interoperability with the Python SDK

You can use this integration together with the Langfuse SDKs to add additional attributes to the observation.

The @observe() decorator provides a convenient way to automatically wrap your instrumented code and add additional attributes to the observation.

from langfuse import observe, propagate_attributes, get_client
 
langfuse = get_client()
 
@observe()
def my_llm_pipeline(user_input):
    # Add additional attributes (user_id, session_id, metadata, version, tags)
    # to all spans created within this execution scope
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-observation"],
        metadata={"email": "user@langfuse.com"},
        version="1.0.0"
    ):
 
        # YOUR APPLICATION CODE HERE
        result = call_llm(user_input)  # call_llm is a placeholder for your own logic
 
        return result
 
# Run the function
my_llm_pipeline("Hi")

Learn more about using the @observe() decorator in the Langfuse SDK instrumentation docs.

Troubleshooting

No observations appearing

First, enable debug mode in the Python SDK:

export LANGFUSE_DEBUG="True"

Then run your application and check the debug logs:

  • OTel observations appear in the logs: Your application is instrumented correctly but observations are not reaching Langfuse. To resolve this:
    1. Call langfuse.flush() at the end of your application to ensure all observations are exported.
    2. Verify that you are using the correct API keys and base URL.
  • No OTel spans in the logs: Your application is not instrumented correctly. Make sure the instrumentation runs before your application code.
Unwanted observations in Langfuse

The Langfuse SDK is based on OpenTelemetry. Other libraries in your application may emit OTel spans that are not relevant to you. These still count toward your billable units, so you should filter them out. See Unwanted spans in Langfuse for details.

Missing attributes

Some attributes may be stored in the metadata object of the observation rather than being mapped to the Langfuse data model. If a mapping or integration does not work as expected, please raise an issue on GitHub.

Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.
