
Integrate Langfuse with AutoGen

This notebook provides a step-by-step guide on integrating Langfuse with AutoGen to achieve comprehensive observability and debugging for your multi-agent LLM applications.

What is AutoGen? AutoGen (GitHub) is an open-source framework developed by Microsoft for building LLM applications, including agents capable of complex reasoning and interactions. AutoGen simplifies the creation of conversational agents that can collaborate or compete to solve tasks.

What is Langfuse? Langfuse is an open-source LLM engineering platform. It offers tracing and monitoring capabilities for AI applications. Langfuse helps developers debug, analyze, and optimize their AI systems by providing detailed insights and integrating with a wide array of tools and frameworks through native integrations, OpenTelemetry, and dedicated SDKs.

Getting Started

Let’s walk through a practical example of using AutoGen and integrating it with Langfuse via OpenTelemetry for comprehensive tracing.

Step 1: Install Dependencies

⚠️ Note: This notebook utilizes the Langfuse OTel Python SDK v3. For users of Python SDK v2, please refer to our legacy AutoGen integration guide.

%pip install langfuse openlit "autogen-agentchat" "autogen-ext[openai]" -U

Step 2: Configure Langfuse SDK

Next, set up your Langfuse API keys. You can get these keys by signing up for a free Langfuse Cloud account or by self-hosting Langfuse. These environment variables are essential for the Langfuse client to authenticate and send data to your Langfuse project.

import os
 
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region
 
# Your OpenAI key
os.environ["OPENAI_API_KEY"] = "sk-proj-..."

With the environment variables set, we can now initialize the Langfuse client. get_client() reads the credentials from the environment and returns an authenticated client.

from langfuse import get_client
 
langfuse = get_client()
 
# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")
 

Step 3: Initialize OpenLIT Instrumentation

Now, we initialize the OpenLIT instrumentation. OpenLIT automatically captures AutoGen operations and exports OpenTelemetry (OTel) spans to Langfuse.

import openlit
 
# Initialize OpenLIT instrumentation. disable_batch=True processes traces immediately instead of batching them.
openlit.init(tracer=langfuse._otel_tracer, disable_batch=True)

Step 4: Basic AutoGen Application

Let’s create a straightforward AutoGen application. In this example, we’ll create a simple multi-agent conversation where an Assistant agent answers a user’s question. This will serve as the foundation for demonstrating Langfuse tracing.

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
 
model_client = OpenAIChatCompletionClient(model="gpt-4o")
agent = AssistantAgent("assistant", model_client=model_client)
print(await agent.run(task="Say 'Hello World!'"))
await model_client.close()

Step 5: View Traces in Langfuse

After executing the application, navigate to your Langfuse Trace Table. You will find detailed traces of the application’s execution, providing insights into the agent conversations, LLM calls, inputs, outputs, and performance metrics. The trace will show the complete flow from the initial user query through the agent interactions to the final response.
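Spans are exported in the background, so in a short-lived script (as opposed to a long-running notebook kernel) they may not all have been sent by the time the process exits. Calling flush() on the Langfuse client sends any buffered spans before you check the UI:

# Send any buffered spans to Langfuse before the process exits
# (mainly relevant in scripts rather than notebooks)
langfuse.flush()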

Langfuse Trace

You can also view the public trace here: Langfuse Trace Example

Interoperability with the Python SDK

You can use this integration together with the Langfuse Python SDK to add additional attributes to the trace.

The @observe() decorator provides a convenient way to automatically wrap your instrumented code and add additional attributes to the trace.

from langfuse import observe, get_client
 
langfuse = get_client()
 
@observe()
def my_instrumented_function(input):
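    # my_llm_call is a placeholder for your own LLM or agent invocation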
    output = my_llm_call(input)
 
    langfuse.update_current_trace(
        input=input,
        output=output,
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-trace"],
        metadata={"email": "[email protected]"},
        version="1.0.0"
    )
 
    return output

Learn more about using the Decorator in the Python SDK docs.
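As a sketch of how the two pieces fit together, you can wrap the AutoGen call from Step 4 in an @observe()-decorated function. The OpenLIT spans emitted inside agent.run() should then be nested under the trace created by the decorator, which you can annotate with your own attributes. Note that run_agent_task is an illustrative name, not part of either SDK:

from langfuse import observe, get_client
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient

langfuse = get_client()

@observe()
async def run_agent_task(task: str):
    # Spans captured by the OpenLIT instrumentation during agent.run()
    # are grouped under the trace created by @observe()
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent("assistant", model_client=model_client)
    result = await agent.run(task=task)
    await model_client.close()

    # Add custom attributes to the surrounding trace
    langfuse.update_current_trace(user_id="user_123", tags=["autogen"])
    return result

# In a notebook cell: await run_agent_task("Say 'Hello World!'")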

Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.
