This guide is available as a Jupyter notebook.

Integrate Langfuse with Google’s Agent Development Kit

This notebook demonstrates how to capture detailed traces from a Google Agent Development Kit (ADK) application with Langfuse using the OpenTelemetry (OTel) protocol.

Why Agent Development Kit?
Google’s Agent Development Kit streamlines building, orchestrating, and tracing generative-AI agents out of the box, letting you move from prototype to production far faster than wiring everything yourself.

Why Langfuse?
Langfuse gives you a detailed dashboard and rich analytics for every prompt, model response, and function call in your agent, making it easy to debug, evaluate, and iterate on LLM apps.

Step 1: Install dependencies

%pip install langfuse google-adk openinference-instrumentation-google-adk -q

Step 2: Set up environment variables

Fill in your Langfuse API keys and your Gemini API key.

import os
 
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." 
os.environ["LANGFUSE_BASE_URL"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_BASE_URL"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region
 
# Gemini API Key (Get from Google AI Studio: https://aistudio.google.com/app/apikey)
os.environ["GOOGLE_API_KEY"] = "..." 

With the environment variables set, we can now initialize the Langfuse client. get_client() initializes the Langfuse client using the credentials provided in the environment variables.

from langfuse import get_client
 
langfuse = get_client()
 
# Verify connection
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")

Langfuse client is authenticated and ready!

Step 3: OpenTelemetry Instrumentation

Use the GoogleADKInstrumentor from the openinference-instrumentation-google-adk package to wrap ADK calls and send OpenTelemetry spans to Langfuse.

from openinference.instrumentation.google_adk import GoogleADKInstrumentor
 
GoogleADKInstrumentor().instrument()

Step 4: Build a hello world agent

Every tool call and model completion is captured as an OpenTelemetry span and forwarded to Langfuse.

from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types
 
def say_hello():
    """Return a friendly greeting for the agent to pass back to the user."""
    return {"greeting": "Hello Langfuse 👋"}
 
agent = Agent(
    name="hello_agent",
    model="gemini-2.0-flash",
    instruction="Always greet using the say_hello tool.",
    tools=[say_hello],
)
 
APP_NAME = "hello_app"
USER_ID = "demo-user"
SESSION_ID = "demo-session"
 
session_service = InMemorySessionService()
# create_session is async → await it in notebooks
await session_service.create_session(app_name=APP_NAME, user_id=USER_ID, session_id=SESSION_ID)
 
runner = Runner(agent=agent, app_name=APP_NAME, session_service=session_service)
 
user_msg = types.Content(role="user", parts=[types.Part(text="hi")])
for event in runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=user_msg):
    if event.is_final_response():
        print(event.content.parts[0].text)

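Langfuse exports spans asynchronously in the background. In short-lived scripts or notebooks it can help to flush the client so the trace from the run above is exported before you look at the dashboard:

# Ensure all spans from the run above are exported
langfuse.flush()
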
Step 5: View the trace in Langfuse

Head over to your Langfuse dashboard → Traces. You should see traces including all tool calls and model inputs/outputs.

Google ADK example trace in Langfuse

Link to trace in Langfuse

Interoperability with the Python SDK

You can use this integration together with the Langfuse SDKs to add additional attributes to the trace.

The @observe() decorator provides a convenient way to automatically wrap your instrumented code and add additional attributes to the trace.

from langfuse import observe, propagate_attributes, get_client
 
langfuse = get_client()
 
@observe()
def my_llm_pipeline(input):
    # Add additional attributes (user_id, session_id, metadata, version, tags) to all spans created within this execution scope
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-trace"],
        metadata={"email": "user@langfuse.com"},
        version="1.0.0"
    ):
 
        # YOUR APPLICATION CODE HERE
        result = call_llm(input)
 
        # Update the trace input and output
        langfuse.update_current_trace(
            input=input,
            output=result,
        )
 
        return result

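For instance, a minimal sketch that combines the decorator with the hello-world agent from Step 4 (reusing runner, USER_ID, and SESSION_ID defined above) could look like this; ask_agent is a hypothetical wrapper for illustration, not part of the ADK or Langfuse APIs:

from langfuse import observe, get_client
from google.genai import types
 
langfuse = get_client()
 
@observe()
def ask_agent(question: str) -> str:
    # ADK spans emitted inside this function are nested under the trace created by @observe()
    message = types.Content(role="user", parts=[types.Part(text=question)])
    answer = ""
    for event in runner.run(user_id=USER_ID, session_id=SESSION_ID, new_message=message):
        if event.is_final_response():
            answer = event.content.parts[0].text
    # Attach readable input/output to the trace itself
    langfuse.update_current_trace(input=question, output=answer)
    return answer
 
print(ask_agent("hi again"))
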
Learn more about using the decorator in the Langfuse SDK instrumentation docs.

Troubleshooting

No traces appearing

First, enable debug mode in the Python SDK:

export LANGFUSE_DEBUG="True"
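
If you are working in a notebook instead of a shell, you can set the same variable with os.environ before the Langfuse client is initialized:

import os
 
# Must be set before get_client() creates the Langfuse client
os.environ["LANGFUSE_DEBUG"] = "True"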

Then run your application and check the debug logs:

  • OTel spans appear in the logs: Your application is instrumented correctly but traces are not reaching Langfuse. To resolve this:
    1. Call langfuse.flush() at the end of your application to ensure all traces are exported.
    2. Verify that you are using the correct API keys and base URL.
  • No OTel spans in the logs: Your application is not instrumented correctly. Make sure the instrumentation runs before your application code (see the sketch below).
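
For example, in a fresh script the rough order would be as follows (a minimal sketch using the same components as above):

from langfuse import get_client
from openinference.instrumentation.google_adk import GoogleADKInstrumentor
from google.adk.agents import Agent
 
langfuse = get_client()                # 1. initialize Langfuse (reads the env vars)
GoogleADKInstrumentor().instrument()   # 2. instrument ADK before any agent code runs
 
agent = Agent(                         # 3. only now build and run your agents
    name="hello_agent",
    model="gemini-2.0-flash",
    instruction="Say hello.",
)
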
Unwanted observations in Langfuse

The Langfuse SDK is based on OpenTelemetry. Other libraries in your application may emit OTel spans that are not relevant to you. These still count toward your billable units, so you should filter them out. See Unwanted spans in Langfuse for details.

Missing attributes

Some attributes may be stored in the metadata object of the observation rather than being mapped to the Langfuse data model. If a mapping or integration does not work as expected, please raise an issue on GitHub.

Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.
