Integrate with Langfuse in seconds using the new Langchain Integration
For teams building their LLM app with Langchain, adopting Langfuse just got easier: we added a
CallbackHandler to the Langfuse Python SDK that natively integrates with Langchain Callbacks.
```python
# Initialize Langfuse handler
from langfuse.callback import CallbackHandler

handler = CallbackHandler(PUBLIC_KEY, SECRET_KEY)

# Set up Langchain
from langchain.chains import LLMChain

...
chain = LLMChain(llm=llm, prompt=prompt)

# Add Langfuse handler as callback
chain.run(input="<user_input>", callbacks=[handler])
```
From the Langchain integration docs
When using Langchain, the CallbackHandler tracks every action in a run.
All actions are automatically nested based on the call tree and include inputs, outputs, model configurations, token counts, latencies, and errors.
Demo of the debug view in Langfuse:
You can find the full code for these examples in the Langchain integration docs.
Langfuse is an open-source product analytics platform for LLM applications. Teams use it to track and analyze the quality, cost, and latency of their LLM app in production across product releases and use cases. In addition, the Langfuse Debug UI helps visualize the control flow of LLM apps in production. Read our launch post if you want to learn more.
- Read the Langchain integration docs for more details and examples to get started.
- Not (exclusively) using Langchain in production? Follow the quickstart to get started with the TypeScript and Python SDKs, which let you integrate with your custom LLM app.