Prompt Config
The prompt config in Langfuse is an optional, arbitrary JSON object attached to each prompt that can be used by the code executing the LLM call. Common use cases include:
- storing model parameters (model, temperature, max_tokens)
- storing structured output schemas (response_format)
- storing function/tool definitions (tools, tool_choice)
Because the config is versioned together with the prompt, you can manage all parameters in one place. This makes it easy to switch models, update schemas, or tune behavior without touching your application code.
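For example, a config covering the first two use cases might look like this (the keys are standard OpenAI parameters; the values are purely illustrative):
{
  "model": "gpt-4o",
  "temperature": 0,
  "max_tokens": 1024,
  "response_format": { "type": "json_object" }
}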

Setting the config
You can set the config either in the Langfuse prompt UI or programmatically via the SDKs (see the SDK sketch after the steps below).
To add or edit a config for your prompt:
- Navigate to Prompt Management in the Langfuse UI
- Select or create a prompt
- In the prompt editor, find the Config field (JSON editor)
- Enter your config as a valid JSON object
- Save the prompt — the config is now versioned with this prompt version
You can test your prompt with its config directly in the Playground.
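If you manage prompts from code, the config can be set when creating a prompt version via the SDK. Below is a minimal sketch with the Python SDK, reusing the invoice-extractor prompt from the examples further down; the message and config values are illustrative:
from langfuse import get_client

langfuse = get_client()

# Create a new prompt version; the config dict is stored and versioned with it
langfuse.create_prompt(
    name="invoice-extractor",
    type="chat",
    prompt=[{"role": "system", "content": "Extract the invoice data as JSON."}],
    config={"model": "gpt-4o", "temperature": 0},
    labels=["production"],
)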
Using the config
The example below retrieves the AI model and temperature from the prompt config.
After fetching a prompt, access the config via the config property and pass the values to your LLM call.
This example uses the Langfuse OpenAI integration for tracing, but this is optional. You can use any method to call your LLM (e.g., OpenAI SDK directly, other providers, etc.).
from langfuse import get_client
# Initialize Langfuse OpenAI client for this example.
from langfuse.openai import OpenAI
client = OpenAI()
langfuse = get_client()
# Fetch prompt
prompt = langfuse.get_prompt("invoice-extractor")
# Access config values
cfg = prompt.config
model = cfg.get("model")
temperature = cfg.get("temperature")
# Use in your LLM call
client.chat.completions.create(
    model=model,
    temperature=temperature,
    messages=prompt.prompt,
)

Example use cases
Structured Outputs
When you need your LLM to return data in a specific JSON format, store the schema in your prompt config. This keeps the schema versioned alongside your prompt and lets you update it without code changes.
Best practice: Use response_format with type: "json_schema" and strict: true to enforce the schema. This ensures the model’s output exactly matches your expected structure. If you’re using Pydantic models, convert them with type_to_response_format_param — see the OpenAI Structured Outputs guide.
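For illustration, here is a sketch of building such a response_format from a Pydantic model before storing it in the prompt config. Note that type_to_response_format_param lives in a private module of the OpenAI Python SDK, so the import path below is an assumption and may differ between SDK versions:
from pydantic import BaseModel

# Assumption: private import path, may move between openai SDK versions
from openai.lib._parsing._completions import type_to_response_format_param

class Invoice(BaseModel):
    invoice_number: str
    total: float

# Returns a {"type": "json_schema", "json_schema": {...}} dict with strict mode enabled,
# ready to be stored under "response_format" in the prompt config
response_format = type_to_response_format_param(Invoice)
At runtime, fetch the prompt and pass the stored response_format straight to the LLM call: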
from langfuse import get_client
from langfuse.openai import OpenAI
langfuse = get_client()
client = OpenAI()
# Fetch prompt with config containing response_format
prompt = langfuse.get_prompt("invoice-extractor")
system_message = prompt.compile()
# Extract parameters from config
cfg = prompt.config
# Example config:
# {
#   "response_format": {
#     "type": "json_schema",
#     "json_schema": {
#       "name": "invoice_schema",
#       "schema": {
#         "type": "object",
#         "properties": {
#           "invoice_number": { "type": "string" },
#           "total": { "type": "number" }
#         },
#         "required": ["invoice_number", "total"],
#         "additionalProperties": false
#       },
#       "strict": true
#     }
#   }
# }
response_format = cfg.get("response_format")
res = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "Extract invoice number and total from: ..."},
    ],
    response_format=response_format,
    langfuse_prompt=prompt,  # Links this generation to the prompt version in Langfuse
)

# Response is guaranteed to match your schema
content = res.choices[0].message.content

Function Calling
For agents and tool-using applications, store your function definitions in the prompt config. This allows you to version and update your available tools alongside your prompts.
Best practice: Store tools (function definitions with JSON Schema parameters) and tool_choice in your config. This keeps your function signatures versioned and lets you add, modify, or remove tools without deploying code changes.
from langfuse import get_client
from langfuse.openai import OpenAI
langfuse = get_client()
client = OpenAI()
# Fetch prompt with config containing tools
prompt = langfuse.get_prompt("weather-agent")
system_message = prompt.compile()
# Extract parameters from config
cfg = prompt.config
# Example config:
# {
#   "tools": [
#     {
#       "type": "function",
#       "function": {
#         "name": "get_current_weather",
#         "description": "Get the current weather in a given location",
#         "parameters": {
#           "type": "object",
#           "properties": {
#             "location": { "type": "string", "description": "City and country" },
#             "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
#           },
#           "required": ["location"],
#           "additionalProperties": false
#         }
#       }
#     }
#   ],
#   "tool_choice": "auto"
# }
tools = cfg.get("tools", [])
tool_choice = cfg.get("tool_choice")
res = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_message},
        {"role": "user", "content": "What's the weather in Berlin?"},
    ],
    tools=tools,
    tool_choice=tool_choice,
    langfuse_prompt=prompt,  # Links this generation to the prompt version in Langfuse
)

For complete end-to-end examples, see the OpenAI Functions cookbook and the Structured Outputs cookbook.
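Before diving into the cookbooks, here is a minimal sketch of inspecting the tool calls returned by the example above (attribute names follow the OpenAI Python SDK response objects; the dispatch logic is up to your application):
import json

message = res.choices[0].message
# The model either answers directly or requests one of the configured tools
for call in message.tool_calls or []:
    # call.function.arguments is a JSON string shaped by the parameters schema from the config
    args = json.loads(call.function.arguments)
    print(f"Model requested {call.function.name} with {args}")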