Get Started with Tracing
This quickstart helps you ingest your first trace in Langfuse.
Get API keys
- Create a Langfuse account or self-host Langfuse.
- Create new API credentials in the project settings.
Ingest your first trace
Use the drop-in replacement for the OpenAI Python SDK to get full observability.
pip install langfuse
Add your Langfuse credentials as environment variables.
LANGFUSE_SECRET_KEY = "sk-lf-..."
LANGFUSE_PUBLIC_KEY = "pk-lf-..."
LANGFUSE_HOST = "https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST = "https://us.cloud.langfuse.com" # 🇺🇸 US region
Change the import to use the OpenAI drop-in replacement.
from langfuse.openai import openai
Use the OpenAI SDK as usual.
completion = openai.chat.completions.create(
    name="test-chat",
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a very accurate calculator. You output only the result of the calculation."},
        {"role": "user", "content": "1 + 1 = "},
    ],
    metadata={"someMetadataKey": "someValue"},
)
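Events are sent to Langfuse asynchronously in the background, so short-lived scripts and serverless functions can exit before everything is delivered. A small sketch of flushing before shutdown, reusing the get_client accessor from the Python SDK shown further down this page:
from langfuse import get_client

# Block until all queued events have been sent
get_client().flush()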
Use the Langfuse wrapper function around the OpenAI JS/TS SDK for full observability.
npm install langfuse openai
Add your Langfuse credentials to your environment variables. Make sure that you have a .env file in your project root and a package like dotenv to load the variables.
LANGFUSE_SECRET_KEY = "sk-lf-..."
LANGFUSE_PUBLIC_KEY = "pk-lf-..."
LANGFUSE_BASEURL = "https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_BASEURL = "https://us.cloud.langfuse.com" # 🇺🇸 US region
With your environment configured, call OpenAI SDK methods as usual from the wrapped client.
import OpenAI from "openai";
import { observeOpenAI } from "langfuse";
const openai = observeOpenAI(new OpenAI());
const res = await openai.chat.completions.create({
messages: [{ role: "user", content: "Tell me a story about a dog." }],
model: "gpt-4o",
max_tokens: 300,
});
Use the Langfuse CallbackHandler to get full observability of the LangChain Python SDK.
pip install langfuse langchain-openai
Add your Langfuse credentials as environment variables.
LANGFUSE_SECRET_KEY = "sk-lf-..."
LANGFUSE_PUBLIC_KEY = "pk-lf-..."
LANGFUSE_HOST = "https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST = "https://us.cloud.langfuse.com" # 🇺🇸 US region
Initialize the Langfuse callback handler.
from langfuse.langchain import CallbackHandler
langfuse_handler = CallbackHandler()
Add the Langfuse callback handler to your chain.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
llm = ChatOpenAI(model_name="gpt-4o")
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | llm
response = chain.invoke(
    {"topic": "cats"},
    config={"callbacks": [langfuse_handler]},
)
Use the Langfuse CallbackHandler to get full observability of the LangChain JS/TS SDK.
npm i langfuse-langchain
Add your Langfuse credentials to your environment variables. Make sure that you have a .env file in your project root and a package like dotenv to load the variables.
LANGFUSE_SECRET_KEY = "sk-lf-..."
LANGFUSE_PUBLIC_KEY = "pk-lf-..."
LANGFUSE_BASEURL = "https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_BASEURL = "https://us.cloud.langfuse.com" # 🇺🇸 US region
Initialize the Langfuse callback handler and add it to your chain.
import { CallbackHandler } from "langfuse-langchain";
// Deno: import CallbackHandler from "https://esm.sh/langfuse-langchain";
const langfuseHandler = new CallbackHandler();
// Your Langchain code
// Add Langfuse handler as callback to `run` or `invoke`
await chain.invoke({ input: "<user_input>" }, { callbacks: [langfuseHandler] });
Use the Langfuse Python SDK to wrap any LLM or agent.
pip install langfuse
LANGFUSE_SECRET_KEY = "sk-lf-..."
LANGFUSE_PUBLIC_KEY = "pk-lf-..."
LANGFUSE_HOST = "https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST = "https://us.cloud.langfuse.com" # 🇺🇸 US region
There are three main ways of creating traces with the Python SDK:
The @observe decorator is the simplest way to instrument your application. It is a function decorator that can be applied to any function: it sets the current span in the context for automatic nesting of child spans, automatically ends the span when the function returns, and captures the function name, arguments, and return value.
from langfuse import observe, get_client
@observe
def my_function():
    return "Hello, world!"  # Input/output and timings are automatically captured

my_function()
# Flush events in short-lived applications
langfuse = get_client()
langfuse.flush()
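Because @observe puts the current span into the context, decorated functions that call each other produce nested spans without extra wiring. A short sketch (function names are purely illustrative):
from langfuse import observe, get_client

@observe
def fetch_context(query: str) -> str:
    # Becomes a child span of `answer` because it is called from a decorated function
    return f"context for {query}"

@observe
def answer(query: str) -> str:
    context = fetch_context(query)
    return f"answer based on: {context}"

answer("What is tracing?")
get_client().flush()  # flush in short-lived applications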
Context managers are the recommended way to instrument chunks of work in your application, as they automatically handle the start and end of spans and set the current span in the context for automatic nesting of child spans. They provide more control than the @observe decorator.
from langfuse import get_client
langfuse = get_client()
# Create a span using a context manager
with langfuse.start_as_current_span(name="process-request") as span:
    # Your processing logic here
    span.update(output="Processing complete")

    # Create a nested generation for an LLM call
    with langfuse.start_as_current_generation(name="llm-response", model="gpt-3.5-turbo") as generation:
        # Your LLM call logic here
        generation.update(output="Generated response")
# All spans are automatically closed when exiting their context blocks
# Flush events in short-lived applications
langfuse.flush()
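From inside an active span you can also attach trace-level attributes such as a user or session ID. A sketch that assumes the v3 client method update_current_trace (verify the exact method name against your installed SDK version):
from langfuse import get_client

langfuse = get_client()

with langfuse.start_as_current_span(name="handle-request") as span:
    # Attributes set here apply to the whole trace, not only this span
    langfuse.update_current_trace(user_id="user-123", session_id="session-abc", tags=["quickstart"])
    span.update(output="done")

langfuse.flush()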
Manual observations give you control over when spans start and end, but they do not set the current span in the context, so child spans are not nested automatically. You must explicitly call .end() when they are complete.
from langfuse import get_client
langfuse = get_client()
# Create a span without a context manager
span = langfuse.start_span(name="user-request")
# Your processing logic here
span.update(output="Request processed")
# Child spans must be created using the parent span object
nested_span = span.start_span(name="nested-span")
nested_span.update(output="Nested span output")
# Important: Manually end the span
nested_span.end()
# Important: Manually end the parent span
span.end()
# Flush events in short-lived applications
langfuse.flush()
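Because manual spans are never ended automatically, an unhandled exception can leave a span open. A defensive sketch using try/finally; the level and status_message fields are assumptions about the span update API, so adjust them to your SDK version:
from langfuse import get_client

langfuse = get_client()

span = langfuse.start_span(name="user-request")
try:
    # Your processing logic here
    span.update(output="Request processed")
except Exception as exc:
    # Record the failure on the span before re-raising
    span.update(level="ERROR", status_message=str(exc))
    raise
finally:
    span.end()  # always end the span, even on errors

langfuse.flush()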
Use the Langfuse JS/TS SDK to wrap any LLM or agent.
npm i langfuse
LANGFUSE_SECRET_KEY = "sk-lf-...";
LANGFUSE_PUBLIC_KEY = "pk-lf-...";
LANGFUSE_BASEURL = "https://cloud.langfuse.com"; 🇪🇺 EU region
# LANGFUSE_BASEURL = "https://us.cloud.langfuse.com"; 🇺🇸 US region
import { Langfuse } from "langfuse"; // or "langfuse-node"
const langfuse = new Langfuse();
import { Langfuse } from "langfuse"; // or "langfuse-node"
const langfuse = new Langfuse({
secretKey: "sk-lf-...",
publicKey: "pk-lf-...",
baseUrl: "https://cloud.langfuse.com", // 🇪🇺 EU region
// baseUrl: "https://us.cloud.langfuse.com", // 🇺🇸 US region
// optional
release: "v1.0.0",
requestTimeout: 10000,
enabled: true, // set to false to disable sending events
});
const trace = langfuse.trace({
name: "my-AI-application-endpoint",
});
// Example generation creation
const generation = trace.generation({
name: "chat-completion",
model: "gpt-4o",
input: messages,
});
// Application code
const chatCompletion = await llm.respond(prompt);
// End generation - sets endTime
generation.end({
output: chatCompletion,
});
In short-lived environments (e.g. serverless functions), make sure to always call langfuse.shutdownAsync() at the end to await all pending requests.
Use the agent mode of your editor to integrate Langfuse into your existing codebase.
Depending on your codebase, this may or may not work well. Please share any feedback or issues on GitHub.
1. Install the Langfuse Docs MCP Server (optional)
The agent will use the Langfuse searchLangfuseDocs tool (docs) to find the correct documentation for the integration you are looking for. This step is optional; alternatively, the agent can use its native web search capabilities.
Add Langfuse Docs MCP to Cursor via the one-click install:
Manual configuration
Add the following to your mcp.json:
{
"mcpServers": {
"langfuse-docs": {
"url": "https://langfuse.com/api/mcp"
}
}
}
Add Langfuse Docs MCP to Copilot in VSCode via the following steps:
- Open Command Palette (⌘+Shift+P)
- Open “MCP: Add Server…”
- Select HTTP
- Paste https://langfuse.com/api/mcp
- Select a name (e.g. langfuse-docs) and whether to save it in user or workspace settings
- You’re all set! The MCP server is now available in Agent mode
Add Langfuse Docs MCP to Claude Code via the CLI:
claude mcp add \
--transport http \
langfuse-docs \
https://langfuse.com/api/mcp \
--scope user
Manual configuration
Alternatively, add the following to your settings file:
- User scope: ~/.claude/settings.json
- Project scope: your-repo/.claude/settings.json
- Local scope: your-repo/.claude/settings.local.json
{
"mcpServers": {
"langfuse-docs": {
"transportType": "http",
"url": "https://langfuse.com/api/mcp",
"verifySsl": true
}
}
}
One-liner JSON import
claude mcp add-json langfuse-docs \
'{"type":"http","url":"https://langfuse.com/api/mcp"}'
Once added, start a Claude Code session (claude) and type /mcp to confirm the connection.
Add Langfuse Docs MCP to Windsurf via the following steps:
- Open Command Palette (⌘+Shift+P)
- Open “MCP Configuration Panel”
- Select Add custom server
- Add the following configuration:
{
  "mcpServers": {
    "langfuse-docs": {
      "command": "npx",
      "args": ["mcp-remote", "https://langfuse.com/api/mcp"]
    }
  }
}
Langfuse uses the streamableHttp protocol to communicate with the MCP server. This is supported by most clients.
{
"mcpServers": {
"langfuse-docs": {
"url": "https://langfuse.com/api/mcp"
}
}
}
If you use a client that does not support streamableHttp (e.g. Windsurf), you can use the mcp-remote command as a local proxy.
{
"mcpServers": {
"langfuse-docs": {
"command": "npx",
"args": ["mcp-remote", "https://langfuse.com/api/mcp"]
}
}
}
2. Run Agent
Copy and execute the following prompt in the agent mode of your editor:
# Langfuse Agentic Onboarding

## Goals

Your goal is to help me integrate Langfuse tracing into my codebase.

## Rules

Before you begin, you must understand these three fundamental rules:

1. Do Not Change Business Logic: You are strictly forbidden from changing, refactoring, or altering any of my existing code's logic. Your only task is to add the necessary code for Langfuse integration, such as decorators, imports, handlers, and environment variable initializations.
2. Adhere to the Workflow: You must follow the step-by-step workflow outlined below in the exact sequence.
3. If available, use the langfuse-docs MCP server and the `searchLangfuseDocs` tool to retrieve information from the Langfuse docs. If it is not available, please use your websearch capabilities to find the information.

## Integration Workflow

### Step 1: Language and Compatibility Check

First, analyze the codebase to identify the primary programming language.

- If the language is Python or JavaScript/TypeScript, proceed to Step 2.
- If the language is not Python or JavaScript/TypeScript, you must stop immediately. Inform me that the codebase is currently unsupported for this AI-based setup, and do not proceed further.

### Step 2: Codebase Discovery & Entrypoint Confirmation

Once you have confirmed the language is compatible, explore the entire codebase to understand its purpose.

- Identify all files and functions that contain LLM calls or are likely candidates for tracing.
- Present this list of files and function names to me.
- If you are unclear about the main entry point of the application (e.g., the primary API route or the main script to execute), you must ask me for confirmation on which parts are most critical to trace before proceeding to the next step.

### Step 3: Discover Available Integrations

After I confirm the files and entry points, get a list of available integrations from the Langfuse docs by calling the `getLangfuseOverview` tool.

### Step 4: Analyze Confirmed Files for Technologies

Based on the files we confirmed in Step 2, perform a deeper analysis to identify the specific LLM frameworks or SDKs being used (e.g., OpenAI SDK, LangChain, LlamaIndex, Anthropic SDK, etc.). Search the Langfuse docs for the integration instructions for these frameworks via the `searchLangfuseDocs` tool. If you are unsure, repeatedly query the Langfuse docs via the `searchLangfuseDocs` tool.

### Step 5: Propose a Development Plan

Before you write or modify a single line of code, you must present me with a clear, step-by-step development plan. This plan must include:

- The Langfuse package(s) you will install.
- The files you intend to modify.
- The specific code changes you will make, showing the exact additions.
- Instructions on where I will need to add my Langfuse API keys after your work is done.

I will review this plan and give you my approval before you proceed.

### Step 6: Implement the Integration

Once I approve your plan, execute it. First, you must use your terminal access to run the necessary package installation command (e.g., pip install langfuse, npm install langfuse) yourself. After the installation is successful, modify the code exactly as described in the plan. When done, please review the code changes. The goal here is to keep the integration as simple as possible.

### Step 7: Request User Review and Wait

After you have made all the changes, notify me that your work is complete. Explicitly ask me to run the application and confirm that everything is working correctly and that you can make changes/improvements if needed.

### Step 8: Debug and Fix if Necessary

If I report that something is not working correctly, analyze my feedback. Use the knowledge you have to debug the issue. If required, re-crawl the relevant Langfuse documentation to find a solution, propose a fix to me, and then implement it.
Explore all integrations and frameworks that Langfuse supports.
See your trace in Langfuse
After running your application, visit the Langfuse interface to view the trace you just created. (Example LangGraph trace in Langfuse)