
Trace Google Gemini Models in Langfuse

This notebook shows how to trace and observe Google Gemini models with Langfuse and the Google GenAI SDK.

What is Google Gemini? Google Gemini is Google’s family of multimodal generative models (text, images, audio, video, code) available through the Gemini API and Vertex AI, with tiers like Flash and Pro for different speed/quality needs.

What is the Google GenAI SDK? The Google GenAI SDK is a unified client library (Python/JavaScript) that simplifies calling Gemini—handling auth (API key or ADC), streaming, tool/function calling, and safety—so you can integrate models in a few lines.

What is Langfuse? Langfuse is an open source platform for LLM observability and monitoring. It helps you trace and monitor your AI applications by capturing metadata, prompt details, token usage, latency, and more.

Step 1: Install Dependencies

Before you begin, install the necessary packages in your Python environment:

%pip install google-genai openai langfuse openinference-instrumentation-google-genai

Step 2: Configure Langfuse SDK

Next, set up your Langfuse API keys. You can get these keys by signing up for a free Langfuse Cloud account or by self-hosting Langfuse. These environment variables are essential for the Langfuse client to authenticate and send data to your Langfuse project.

Also set your Google Gemini API key. If you call Gemini through Vertex AI instead, configure Application Default Credentials (ADC), for example from a service account key file.

import os
 
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." 
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region
 
# Your Google Gemini API key
os.environ["GOOGLE_API_KEY"] = "***"  

With the environment variables set, we can now initialize the Langfuse client; get_client() picks up the credentials from the environment.

from langfuse import get_client
 
# Initialise Langfuse client and verify connectivity
langfuse = get_client()
assert langfuse.auth_check(), "Langfuse auth failed - check your keys ✋"

Step 3: OpenTelemetry Instrumentation

Use the OpenInference GoogleGenAIInstrumentor to automatically wrap Google GenAI SDK calls and send the resulting OpenTelemetry spans to Langfuse.

from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor
 
GoogleGenAIInstrumentor().instrument()
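If you want to detach the instrumentation again later (for example in a test teardown), instrumentors built on the standard OpenTelemetry BaseInstrumentor interface expose a matching uninstrument() method; a minimal sketch:

from openinference.instrumentation.google_genai import GoogleGenAIInstrumentor
 
# Detach the instrumentation; subsequent Gemini calls are no longer traced
GoogleGenAIInstrumentor().uninstrument()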

Step 4: Run an Example

from google import genai
 
client = genai.Client()
 
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is Langfuse?",
)
print(response.text)

# Streaming Example
for chunk in client.models.generate_content_stream(
    model="gemini-2.5-flash",
    contents="What is Langfuse?",
):
    print(chunk.text, end="", flush=True)
print()  # newline after streaming
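The instrumentation typically also captures multi-turn conversations made through the SDK's chat interface, since it builds on the same generate_content calls. A minimal sketch reusing the client from above (model name and prompts are placeholders):

# Multi-turn chat example - reuses the `client` created above
chat = client.chats.create(model="gemini-2.5-flash")
 
first = chat.send_message("Give me a one-sentence summary of Langfuse.")
print(first.text)
 
follow_up = chat.send_message("Now list three of its core features.")
print(follow_up.text)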

View Traces in Langfuse

After executing the application, navigate to your Langfuse Trace Table. You will find detailed traces of the application’s execution, with insights into the model calls, inputs, outputs, token usage, and latencies.
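If you run the example in a short-lived script, flush the client before checking the UI so that all buffered spans have been sent:

# Ensure all buffered spans are sent to Langfuse before the process exits
langfuse.flush()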

Langfuse Trace

View trace in Langfuse

Interoperability with the Python SDK

You can use this integration together with the Langfuse Python SDK to add additional attributes to the trace.

The @observe() decorator provides a convenient way to automatically wrap your instrumented code and add additional attributes to the trace.

from langfuse import observe, get_client
 
langfuse = get_client()
 
@observe()
def my_instrumented_function(input):
    output = my_llm_call(input)  # placeholder for your own instrumented LLM call
 
    langfuse.update_current_trace(
        input=input,
        output=output,
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-trace"],
        metadata={"email": "[email protected]"},
        version="1.0.0"
    )
 
    return output
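For example, you can combine the decorator with the Gemini client from Step 4; the instrumented generate_content call is then nested inside the decorated span. A sketch (the model name, user ID, and tags are placeholders):

from google import genai
from langfuse import observe, get_client
 
langfuse = get_client()
client = genai.Client()
 
@observe()
def answer_question(question: str) -> str:
    # The instrumented Gemini call is nested under the @observe() span
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=question,
    )
    langfuse.update_current_trace(
        user_id="user_123",
        tags=["gemini", "demo"],
    )
    return response.text
 
print(answer_question("What is Langfuse?"))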

Learn more about using the @observe() decorator in the Python SDK docs.

Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.
