# Cookbook: Trace OpenAI Structured Outputs with Langfuse

In this cookbook you will learn how to use Langfuse to monitor OpenAI Structured Outputs.

## What are structured outputs?

Generating structured data from unstructured inputs is a core AI use case today. Structured outputs make chained LLM calls, UI component generation, and model-based evaluation in particular more reliable. Structured Outputs is a capability of the OpenAI API that builds on JSON mode and function calling to enforce a strict schema in the model's output.
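
To make the idea concrete, here is a minimal sketch of the `response_format` payload that Structured Outputs expects. The schema name and fields are illustrative, not part of the API itself; the full math-tutor example below uses the same shape:

```python
# A minimal sketch of the request payload shape used by Structured Outputs.
# "example_schema" and the "answer" field are hypothetical placeholders.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "example_schema",
        "schema": {
            "type": "object",
            "properties": {"answer": {"type": "string"}},
            "required": ["answer"],
            "additionalProperties": False,
        },
        "strict": True,  # enforce that the output matches the schema exactly
    },
}
```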

## How to trace structured output in Langfuse?

If you use the OpenAI Python SDK, you can use the Langfuse drop-in replacement to get full logging by changing only the import. With that, you can monitor the structured outputs generated by OpenAI in Langfuse.

```
- import openai
+ from langfuse.openai import openai
```

Alternative imports:

```
from langfuse.openai import OpenAI, AsyncOpenAI, AzureOpenAI, AsyncAzureOpenAI
```

## Step 1: Initialize Langfuse

Initialize the Langfuse client with your API keys from the project settings in the Langfuse UI and add them to your environment.

`%pip install langfuse openai --upgrade`

```
import os
# Get keys for your project from the project settings page
# https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region
# Your OpenAI API key
os.environ["OPENAI_API_KEY"] = ""
```
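
Before making any calls, it can help to fail fast when a credential is missing. This small stdlib check is our own convention for this notebook, not part of the Langfuse or OpenAI SDKs:

```python
import os

def missing_env_vars(required):
    """Return the names of required environment variables that are unset or empty."""
    return [key for key in required if not os.environ.get(key)]

# The four variables this notebook sets above:
REQUIRED = ["LANGFUSE_PUBLIC_KEY", "LANGFUSE_SECRET_KEY", "LANGFUSE_HOST", "OPENAI_API_KEY"]

if missing_env_vars(REQUIRED):
    print("Warning: credentials not set:", missing_env_vars(REQUIRED))
```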

## Step 2: Math tutor example

In this example, we'll build a math tutoring tool that outputs steps to solve a math problem as an array of structured objects.

This setup is useful for applications where each step needs to be displayed separately, allowing users to progress through the solution at their own pace.

(Example adapted from the OpenAI Cookbook.)

**Note:** While OpenAI also offers structured output parsing via its beta API (`client.beta.chat.completions.parse`), this approach currently does not allow setting Langfuse-specific attributes such as `name`, `metadata`, or `userId`. Please use `response_format` with the standard `client.chat.completions.create` as described below.
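
With the drop-in replacement, those attributes travel as extra keyword arguments on the `create()` call itself. Sketching just the argument dict (the argument names follow the Langfuse OpenAI integration; the values are illustrative):

```python
# Extra keyword arguments understood by the Langfuse OpenAI wrapper.
# Langfuse strips these before the request goes to OpenAI and attaches
# them to the resulting trace. All values below are illustrative.
langfuse_kwargs = {
    "name": "math-tutor-generation",  # generation name shown in the Langfuse UI
    "user_id": "user-1234",           # hypothetical user identifier
    "metadata": {"grade_level": 8},   # arbitrary JSON-serializable metadata
}

# Usage (requires API keys, so shown as a comment):
# response = client.chat.completions.create(model=openai_model, messages=messages, **langfuse_kwargs)
```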

```
# Use the Langfuse drop-in replacement to get full logging by changing only the import.
# With that, you can monitor the structured output generated by OpenAI in Langfuse.
from langfuse.openai import OpenAI
import json
openai_model = "gpt-4o-2024-08-06"
client = OpenAI()
```

In the `response_format` parameter you can now supply a JSON Schema via `json_schema`. When using `response_format` with `strict: true`, the model's output will adhere to the provided schema.

Function calling remains similar, but with the new parameter `strict: true`, you can now ensure that the schema provided for the functions is strictly followed.

```
math_tutor_prompt = '''
You are a helpful math tutor. You will be provided with a math problem,
and your goal will be to output a step by step solution, along with a final answer.
For each step, just provide the output as an equation and use the explanation field to detail the reasoning.
'''

def get_math_solution(question):
    response = client.chat.completions.create(
        model=openai_model,
        messages=[
            {"role": "system", "content": math_tutor_prompt},
            {"role": "user", "content": question},
        ],
        response_format={
            "type": "json_schema",
            "json_schema": {
                "name": "math_reasoning",
                "schema": {
                    "type": "object",
                    "properties": {
                        "steps": {
                            "type": "array",
                            "items": {
                                "type": "object",
                                "properties": {
                                    "explanation": {"type": "string"},
                                    "output": {"type": "string"},
                                },
                                "required": ["explanation", "output"],
                                "additionalProperties": False,
                            },
                        },
                        "final_answer": {"type": "string"},
                    },
                    "required": ["steps", "final_answer"],
                    "additionalProperties": False,
                },
                "strict": True,
            },
        },
    )
    return response.choices[0].message
```
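
Chat completion messages also carry a `refusal` field, which is set when the model declines to answer instead of producing `content`. Before parsing, it is worth checking it; `to_math_steps` below is our own helper name, sketched with the stdlib only:

```python
import json

def to_math_steps(message_content, refusal=None):
    """Parse a strict-schema math_reasoning payload, surfacing refusals as errors.

    message_content is the JSON string from response.choices[0].message.content;
    refusal mirrors the message's refusal field (None when the model answered).
    """
    if refusal:
        raise ValueError(f"Model refused to answer: {refusal}")
    data = json.loads(message_content)
    # With strict=True, these keys are guaranteed to exist when the model answered.
    return data["steps"], data["final_answer"]
```

Usage: `steps, final_answer = to_math_steps(result.content, result.refusal)`.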

```
# Testing with an example question
question = "how can I solve 8x + 7 = -23"
result = get_math_solution(question)
print(result.content)
```

`{"steps":[{"explanation":"We need to isolate the term with the variable, 8x. So, we start by subtracting 7 from both sides to remove the constant term on the left side.","output":"8x + 7 - 7 = -23 - 7"},{"explanation":"The +7 and -7 on the left side cancel each other out, leaving us with 8x. The right side simplifies to -30.","output":"8x = -30"},{"explanation":"To solve for x, divide both sides of the equation by 8, which is the coefficient of x.","output":"x = -30 / 8"},{"explanation":"Simplify the fraction -30/8 by finding the greatest common divisor, which is 2.","output":"x = -15 / 4"}],"final_answer":"x = -15/4"}`

```
# Print results step by step
result = json.loads(result.content)
steps = result['steps']
final_answer = result['final_answer']

for i, step in enumerate(steps, start=1):
    print(f"Step {i}: {step['explanation']}\n")
    print(step['output'])
    print("\n")

print("Final answer:\n\n")
print(final_answer)
```

```
Step 1: We need to isolate the term with the variable, 8x. So, we start by subtracting 7 from both sides to remove the constant term on the left side.
8x + 7 - 7 = -23 - 7
Step 2: The +7 and -7 on the left side cancel each other out, leaving us with 8x. The right side simplifies to -30.
8x = -30
Step 3: To solve for x, divide both sides of the equation by 8, which is the coefficient of x.
x = -30 / 8
Step 4: Simplify the fraction -30/8 by finding the greatest common divisor, which is 2.
x = -15 / 4
Final answer:
x = -15/4
```

## Step 3: See your trace in Langfuse

You can now see the trace and the JSON schema in Langfuse.

Example trace in Langfuse

## Alternative: Using the SDK `parse` helper

The new SDK version adds a `parse` helper, allowing you to use your own Pydantic model instead of defining a JSON schema.

```
from pydantic import BaseModel

class MathReasoning(BaseModel):
    class Step(BaseModel):
        explanation: str
        output: str

    steps: list[Step]
    final_answer: str

def get_math_solution(question: str):
    response = client.beta.chat.completions.parse(
        model=openai_model,
        messages=[
            {"role": "system", "content": math_tutor_prompt},
            {"role": "user", "content": question},
        ],
        response_format=MathReasoning,
    )
    return response.choices[0].message
```

```
result = get_math_solution(question).parsed
print(result.steps)
print("Final answer:")
print(result.final_answer)
```

```
[Step(explanation='To isolate the term with the variable on one side of the equation, start by subtracting 7 from both sides.', output='8x = -23 - 7'), Step(explanation='Combine like terms on the right side to simplify the equation.', output='8x = -30'), Step(explanation='Divide both sides by 8 to solve for x.', output='x = -30 / 8'), Step(explanation='Simplify the fraction by dividing both the numerator and the denominator by their greatest common divisor, which is 2.', output='x = -15 / 4')]
Final answer:
x = -15/4
```
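
The same Pydantic model can double as an offline validator, e.g. for re-checking a raw JSON payload retrieved from a Langfuse trace. A sketch, assuming Pydantic v2 (`model_validate_json`); the payload here is a shortened, illustrative version of the output above:

```python
from pydantic import BaseModel

class MathReasoning(BaseModel):
    class Step(BaseModel):
        explanation: str
        output: str

    steps: list[Step]
    final_answer: str

# Re-validate a stored JSON string against the schema, without any API call.
raw = '{"steps": [{"explanation": "Divide both sides by 8.", "output": "x = -30 / 8"}], "final_answer": "x = -15/4"}'
parsed = MathReasoning.model_validate_json(raw)
print(parsed.final_answer)  # x = -15/4
```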

## See your trace in Langfuse

You can now see the trace and your supplied Pydantic model in Langfuse.

Example trace in Langfuse

## Feedback

If you have any feedback or requests, please create a GitHub issue or share your idea with the community on Discord.