# Upgrade Paths: Langchain Integration
This doc is a collection of upgrade paths for different versions of the integration. If you want to add the integration to your project, you should start with the latest version and follow the integration guide.
Langfuse and Langchain are under active development, so we are constantly improving the integration. This means that we sometimes need to make breaking changes to our APIs or react to breaking changes in Langchain. We try to keep these to a minimum and to provide clear upgrade paths when we do make them.
## Python
### From v2.x.x to v3.x.x
Python SDK v3 introduces a completely revised Langfuse core with a new observability API. While the LangChain integration still relies on a `CallbackHandler`, nearly all ergonomics have changed. The table below highlights the most important breaking changes:
| Topic | v2 | v3 |
| --- | --- | --- |
| Package import | `from langfuse.callback import CallbackHandler` | `from langfuse.langchain import CallbackHandler` |
| Client handling | Multiple instantiated clients | Singleton pattern, accessed via `get_client()` |
| Trace/span context | `CallbackHandler` optionally accepted a `root` to group runs | Use context managers with `langfuse.start_as_current_span(...)` |
| Dynamic trace attributes | Pass via LangChain config, e.g. `metadata["langfuse_user_id"]` | Use `metadata["langfuse_user_id"]` or `span.update_trace(user_id=...)` |
| Constructor args | `CallbackHandler(sample_rate=..., user_id=...)` | No constructor args; configure the Langfuse client or spans instead |
Minimal migration example:
```bash
# Install the latest SDK (>=3.0.0)
pip install --upgrade langfuse
```
```python
# v2 code (for reference)
# from langfuse.callback import CallbackHandler
# handler = CallbackHandler()
# chain.invoke({"topic": "cats"}, config={"callbacks": [handler]})

# v3 code
from langfuse import Langfuse, get_client
from langfuse.langchain import CallbackHandler
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

# 1. Create/configure the Langfuse client (once at startup)
Langfuse(
    public_key="your-public-key",
    secret_key="your-secret-key",
)

# 2. Access the singleton instance and create a handler
langfuse = get_client()
handler = CallbackHandler()

# 3. Option 1: Use metadata in the chain invocation (simplest migration)
llm = ChatOpenAI(model="gpt-4o")
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | llm

response = chain.invoke(
    {"topic": "cats"},
    config={
        "callbacks": [handler],
        "metadata": {"langfuse_user_id": "user_123"},
    },
)

# 3. Option 2: Wrap the LangChain execution in a span (for more control)
# with langfuse.start_as_current_span(name="tell-joke") as span:
#     span.update_trace(user_id="user_123", input={"topic": "cats"})
#     response = chain.invoke({"topic": "cats"}, config={"callbacks": [handler]})
#     span.update_trace(output={"joke": response.content})

# (Optional) Flush events in short-lived scripts
langfuse.flush()
```
- All arguments such as `sample_rate` or `tracing_enabled` must now be provided when constructing the Langfuse client (or via environment variables), not on the handler.
- Functions like `flush()` and `shutdown()` moved to the client instance (`get_client().flush()`).
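As a minimal sketch of that client-level configuration (the argument names are taken from the notes above; consult the v3 reference for the authoritative list):

```python
from langfuse import Langfuse, get_client

# Configure once at application startup; in v2 these options lived on the handler.
Langfuse(
    public_key="your-public-key",
    secret_key="your-secret-key",
    sample_rate=0.5,        # previously CallbackHandler(sample_rate=...)
    tracing_enabled=True,   # previously configured on the handler (see note above)
)

# Lifecycle methods are now called on the client instance.
get_client().flush()     # e.g. at the end of a short-lived script
get_client().shutdown()  # e.g. on application exit
```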
### From v1.x.x to v2.x.x
The `CallbackHandler` can be used in multiple invocations of a Langchain chain as shown below.
```python
from langfuse.callback import CallbackHandler

langfuse_handler = CallbackHandler(PUBLIC_KEY, SECRET_KEY)

# Set up Langchain
from langchain.chains import LLMChain

...
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[langfuse_handler])

# Add the Langfuse handler as a callback on each invocation
chain.run(input="<first_user_input>", callbacks=[langfuse_handler])
chain.run(input="<second_user_input>", callbacks=[langfuse_handler])
```
Previously, invoking the chain multiple times grouped all observations in a single trace:
```
TRACE
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAI
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAI
```
We changed this so that each invocation ends up in its own trace. This allows Langfuse to derive the user inputs and outputs of Langchain applications:
```
TRACE_1
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAI

TRACE_2
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAI
```
If you still want to group multiple invocations in one trace, you can use the Langfuse SDK combined with the Langchain integration (more details).
```python
from langfuse import Langfuse

langfuse = Langfuse()

# Get a Langchain handler scoped to a trace
trace = langfuse.trace()
langfuse_handler = trace.get_langchain_handler()

# langfuse_handler will use this trace for all invocations
```
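As a short usage sketch (assuming the same `chain` as in the earlier example, which is not part of this snippet), every invocation that receives this handler is recorded on that single trace:

```python
# Hypothetical usage: `chain` is the LLMChain defined in the earlier example.
# Both runs are grouped under the one trace created above.
chain.run(input="<first_user_input>", callbacks=[langfuse_handler])
chain.run(input="<second_user_input>", callbacks=[langfuse_handler])
```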
## JS/TS
### From v2.x.x to v3.x.x
Requires `langchain ^0.1.10`. Langchain released a new stable version of the callback handler interface, and this version of the Langfuse SDK implements it. Older Langchain versions are no longer supported.
### From v1.x.x to v2.x.x
The `CallbackHandler` can be used in multiple invocations of a Langchain chain as shown below.
```typescript
import { CallbackHandler } from "langfuse-langchain";
import { LLMChain } from "langchain/chains";

// Create a handler
const langfuseHandler = new CallbackHandler({
  publicKey: LANGFUSE_PUBLIC_KEY,
  secretKey: LANGFUSE_SECRET_KEY,
});

// Create a chain
const chain = new LLMChain({
  llm: model,
  prompt,
  callbacks: [langfuseHandler],
});

// Execute the chain
await chain.call(
  { product: "<user_input_one>" },
  { callbacks: [langfuseHandler] }
);
await chain.call(
  { product: "<user_input_two>" },
  { callbacks: [langfuseHandler] }
);
```
Previously, invoking the chain multiple times grouped all observations in a single trace:
```
TRACE
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAI
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAI
```
We changed this so that each invocation ends up in its own trace. This is a more sensible default for most users:
```
TRACE_1
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAI

TRACE_2
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAI
```
If you still want to group multiple invocations in one trace, you can use the Langfuse SDK combined with the Langchain integration (more details).
```typescript
import { Langfuse } from "langfuse";
import { CallbackHandler } from "langfuse-langchain";

const langfuse = new Langfuse();

const trace = langfuse.trace({ id: "special-id" });
// CallbackHandler will use the trace with the id "special-id" for all invocations
const langfuseHandler = new CallbackHandler({ root: trace });
```