
Pydantic AI Integration via OpenTelemetry

Langfuse offers an OpenTelemetry backend that ingests trace data from a variety of OpenTelemetry instrumentation libraries. In this guide, we demonstrate how to use the Pydantic Logfire instrumentation library to instrument your Pydantic AI agents.

About Pydantic AI: Pydantic AI is a Python agent framework designed to simplify the development of production-grade generative AI applications. It brings the same type safety, ergonomic API design, and developer experience found in FastAPI to the world of GenAI app development.

Step 1: Install Dependencies

Before you begin, install the necessary Python packages. The command below installs the pydantic-ai package with the logfire extra, which provides the OpenTelemetry instrumentation used to send traces to Langfuse:

%pip install pydantic-ai[logfire]

Step 2: Configure Environment Variables

To forward trace data to Langfuse, you must set up the required environment variables. This includes providing your Langfuse API keys and the proper OpenTelemetry exporter endpoint. Additionally, you need to specify your OpenAI API key if you are using OpenAI for your generative tasks.

import os
import base64
 
LANGFUSE_PUBLIC_KEY = "pk-lf-..."
LANGFUSE_SECRET_KEY = "sk-lf-..."
LANGFUSE_AUTH = base64.b64encode(f"{LANGFUSE_PUBLIC_KEY}:{LANGFUSE_SECRET_KEY}".encode()).decode()
 
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://cloud.langfuse.com/api/public/otel" # EU data region
# os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://us.cloud.langfuse.com/api/public/otel" # US data region
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = f"Authorization=Basic {LANGFUSE_AUTH}"
 
# Your OpenAI API key
os.environ["OPENAI_API_KEY"] = "sk-..."
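If you want to sanity-check the Basic auth header before sending traces, you can verify that the Base64 encoding round-trips cleanly. This is an optional, stdlib-only sketch; the key values are placeholders:

```python
import base64

# Encode "public:secret" as done for OTEL_EXPORTER_OTLP_HEADERS,
# then decode it again to confirm the header is well-formed.
public_key, secret_key = "pk-lf-...", "sk-lf-..."  # placeholders
auth = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
decoded = base64.b64decode(auth).decode()
assert decoded == f"{public_key}:{secret_key}"
print("auth header round-trips")
```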

Step 3: Initialize Instrumentation

Now, initialize Logfire’s instrumentation and define a sample Pydantic AI agent that makes use of dependency injection and tool registration. In this example, we create a “roulette” agent. The agent is configured to call a tool function (roulette_wheel), which checks if a given square is a winner. The agent is type-safe, ensuring that the dependency (deps_type) and the output (result_type) have defined types.

import nest_asyncio
nest_asyncio.apply()
import logfire
 
logfire.configure(
    service_name='my_logfire_service',
 
    # By default, data is also sent to Logfire; disable this so
    # traces go only to the OTLP endpoint configured above (Langfuse).
    send_to_logfire=False,
)
from pydantic_ai import Agent, RunContext
 
roulette_agent = Agent(
    'openai:gpt-4o',
    deps_type=int,
    result_type=bool,
    system_prompt=(
        'Use the `roulette_wheel` function to see if the '
        'customer has won based on the number they provide.'
    ),
)
 
 
@roulette_agent.tool
async def roulette_wheel(ctx: RunContext[int], square: int) -> str:
    """check if the square is a winner"""
    return 'winner' if square == ctx.deps else 'loser'
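Stripped of the agent machinery, the tool's decision is a plain comparison between the customer's square and the value injected via ctx.deps. A minimal sketch of that logic (the function name here is illustrative, not part of the Pydantic AI API):

```python
# Core tool logic: the winning number arrives via dependency
# injection (ctx.deps); the tool compares it to the customer's square.
def check_square(winning_number: int, square: int) -> str:
    return 'winner' if square == winning_number else 'loser'

print(check_square(18, 18))  # winner
print(check_square(18, 5))   # loser
```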

Step 4: Run the Agent

Finally, run your Pydantic AI agent to generate trace data that will be sent to Langfuse. In the example below, the agent is executed with a dependency value (the winning square) and natural-language input; the agent's final result is then printed.

# Run the agent
success_number = 18
result = roulette_agent.run_sync('Put my money on square eighteen', deps=success_number)
print(result.data)

Step 5: Explore Traces in Langfuse

With the instrumentation in place, all trace data generated by the agent will be sent to Langfuse. You can view detailed trace logs—including operation timings, debugging information, and performance metrics—by accessing your Langfuse dashboard. For example, check out a sample trace to see the flow of a Pydantic AI request.

Example trace in Langfuse


result = roulette_agent.run_sync('I bet five is the winner', deps=success_number)
print(result.data)

Example trace
