
Langchain Integration (JS/TS)

GitHub repository: langfuse/langfuse-js · npm package: langfuse-langchain

If you are working with Node.js, Deno, or Edge functions, the langfuse-langchain library is the simplest way to integrate Langfuse into your Langchain application. The library queues calls to make them non-blocking.

Supported runtimes

  • Node.js
  • Edge: Vercel, Cloudflare, ...
  • Deno

Want to work without Langchain? Use Langfuse for tracing and LangfuseWeb to capture feedback from the browser.

Installation

# npm
npm i langfuse-langchain
 
# or yarn
yarn add langfuse-langchain
 
# or deno
import CallbackHandler from 'https://esm.sh/langfuse-langchain'

In your application, create a client using the API keys from the project settings in the Langfuse UI.

import { CallbackHandler } from "langfuse-langchain";
 
const langfuseHandler = new CallbackHandler({
  secretKey: "sk-lf-...",
  publicKey: "pk-lf-...",
  // options
});

Options

| Variable | Description | Default value |
| --- | --- | --- |
| baseUrl | Base URL of the Langfuse API | "https://cloud.langfuse.com" |
| release | The release number/hash of the application to provide analytics grouped by release. | process.env.LANGFUSE_RELEASE or common system environment names |
| version | The version of the application, see experimentation docs for details. | undefined |
| userId | For user-level analytics (docs) | undefined |
| sessionId | For session-level tracing (docs) | undefined |
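
For illustration, the options above could be passed to the handler like this; all values are placeholders you would replace with your own:

const langfuseHandler = new CallbackHandler({
  secretKey: "sk-lf-...",
  publicKey: "pk-lf-...",
  baseUrl: "https://cloud.langfuse.com", // default
  release: "v1.2.3", // placeholder release identifier
  version: "prompt-experiment-1", // placeholder version for experimentation
  userId: "user-1234", // placeholder user id for user-level analytics
  sessionId: "session-5678", // placeholder session id for session-level tracing
});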
ℹ️ In short-lived environments (e.g. serverless functions), make sure to always call langfuseHandler.shutdownAsync() at the end to flush the queue and await all pending requests. (Learn more)
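
As a minimal sketch, a short-lived function could flush the handler like this; the surrounding handler signature is illustrative and depends on your platform:

import { CallbackHandler } from "langfuse-langchain";
 
export async function handler(request: Request): Promise<Response> {
  const langfuseHandler = new CallbackHandler({
    publicKey: "pk-lf-...",
    secretKey: "sk-lf-...",
  });
 
  // ... run your Langchain chain with the handler ...
 
  // Flush the queue and await all pending requests before the runtime is frozen
  await langfuseHandler.shutdownAsync();
  return new Response("Done");
}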

Create a simple LLM call using Langchain

import { PromptTemplate } from "@langchain/core/prompts";
import { OpenAI } from "@langchain/openai";
 
import { CallbackHandler } from "langfuse-langchain";
 
// Create a callback handler
const langfuseHandler = new CallbackHandler({
  publicKey: LANGFUSE_PUBLIC_KEY,
  secretKey: LANGFUSE_SECRET_KEY,
});
 
// Choose a model
const model = new OpenAI({
  temperature: 0,
  openAIApiKey: "YOUR-API-KEY",
  callbacks: [langfuseHandler] // Register your Langfuse callback in the constructor
});
 
// Create a prompt
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
 
// Create a chain and add the Langfuse callback handler
const chain = prompt.pipe(model).withConfig({ callbacks: [langfuseHandler] }); // Register your Langfuse callback on chain creation
 
// Invoke the chain
const result = await chain.invoke(
  { product: "colorful hockey sticks" },
  { callbacks: [langfuseHandler] } // Register your Langfuse callback as run config
);
 
...
 
await langfuseHandler.flushAsync(); // Flush queued events to Langfuse

There are two ways to integrate callbacks into Langchain:

  • Constructor Callbacks: Set when initializing an object, like new LLMChain({ ..., callbacks: [langfuseHandler] }). This approach will use the callback for every call made on that specific object. However, it won't apply to its child objects, making it limited in scope.
  • Request Callbacks: Defined when issuing a request, like chain.invoke(..., { callbacks: [langfuseHandler] }). This not only uses the callback for that specific request but also for any subsequent sub-requests it triggers.

For comprehensive data capture, especially for complex chains or agents, it's advised to use both approaches, as demonstrated above (see the Langchain docs).

Stateful Langchain callbacks

The Langchain callback handler can also be constructed with a trace or span as its root. This allows you to nest Langchain executions anywhere in a trace and hence to add any metadata and ids that are important.

import { OpenAI } from "@langchain/openai";
import { CallbackHandler, Langfuse } from "langfuse-langchain";
 
// Instantiate the standard Langfuse SDK
const langfuse = new Langfuse({
  publicKey: LANGFUSE_PUBLIC_KEY,
  secretKey: LANGFUSE_SECRET_KEY,
  baseUrl: LANGFUSE_BASEURL,
});
 
// Create a trace and a handler nested into the trace.
const parentTrace = langfuse.trace({ name: "parent-trace" });
const langfuseHandler = new CallbackHandler({ root: parentTrace });
 
// Call the LLM with the handler
const llm = new OpenAI({ callbacks: [langfuseHandler] });
await llm.call("Tell me a joke", { callbacks: [langfuseHandler] });
 
// Create a span within the trace
const childSpan = parentTrace.span({ name: "child-span" });
const langfuseSpanHandler = new CallbackHandler({ root: childSpan });
 
// Invoke the nested call to the LLM with the corresponding handler
const llmSpan = new OpenAI({ callbacks: [langfuseSpanHandler] });
await llmSpan.call("Tell me a better joke", {
  callbacks: [langfuseSpanHandler],
});
 
await langfuse.flushAsync();

Shutdown

The Langfuse SDKs buffer events and flush them asynchronously to the Langfuse server. You should call shutdown to exit cleanly before your application exits.

await langfuseHandler.shutdownAsync();

Execution identifier

The SDK provides getTraceId() to return the id of the trace that is currently used. Similarly, getLangchainRunId() exposes the Langchain runId. This runId is the latest top-level id of a Langchain run, which is also used to create Spans or Generations in Langfuse.

Both of these ids can be used to create scores for the correct Span in Langfuse.

import { OpenAI } from "@langchain/openai";
 
import { CallbackHandler, Langfuse } from "langfuse-langchain";
 
// Create a Langfuse JS client
const langfuse = new Langfuse();
 
// Create a trace and a handler nested into the trace.
const trace = langfuse.trace({ id: "special-id" });
const langfuseHandler = new CallbackHandler({ root: trace });
 
langfuseHandler.getTraceId(); // returns "special-id"
 
// Call the LLM with the handler
const llm = new OpenAI({ callbacks: [langfuseHandler] });
await llm.call("Tell me a joke", { callbacks: [langfuseHandler] });
 
langfuseHandler.getLangchainRunId(); // returns the latest run id
 
await llm.call("Tell me a better joke", { callbacks: [langfuseHandler] });
 
langfuseHandler.getLangchainRunId(); // returns the latest run id, different from the first one
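
As an illustrative sketch, these ids could then be passed to the standard Langfuse client to attach a score to the correct trace or observation; the score name and value below are placeholders:

const traceId = langfuseHandler.getTraceId();
const observationId = langfuseHandler.getLangchainRunId();
 
await langfuse.score({
  traceId,
  observationId, // optional: attaches the score to the specific observation
  name: "user-feedback", // placeholder score name
  value: 1, // placeholder score value
});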

Upgrading from v2.x.x to v3.x.x

Requires langchain ^0.1.10. Langchain released a new stable version of the Callback Handler interface and this version of the Langfuse SDK implements it. Older versions are no longer supported.

Upgrading from v1.x.x to v2.x.x

The CallbackHandler can be used in multiple invocations of a Langchain chain as shown below.

import { CallbackHandler } from "langfuse-langchain";
import { LLMChain } from "langchain/chains";
 
// create a handler
const langfuseHandler = new CallbackHandler({
  publicKey: LANGFUSE_PUBLIC_KEY,
  secretKey: LANGFUSE_SECRET_KEY,
});
 
// create a chain (model and prompt as defined in the example above)
const chain = new LLMChain({
  llm: model,
  prompt,
  callbacks: [langfuseHandler],
});
 
// execute the chain
await chain.call(
  { product: "<user_input_one>" },
  { callbacks: [langfuseHandler] }
);
await chain.call(
  { product: "<user_input_two>" },
  { callbacks: [langfuseHandler] }
);

Previously, invoking the chain multiple times would group the observations in one trace.

TRACE
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAi
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAi

We changed this so that each invocation ends up in its own trace. This is a more sensible default setting for most users.

TRACE_1
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAi
 
TRACE_2
|
|-- SPAN: Retrieval
|   |
|   |-- SPAN: LLM Chain
|   |   |
|   |   |-- GENERATION: ChatOpenAi

If you still want to group multiple invocations on one trace, you can scope the CallbackHandler to a single trace using the following approach. See docs above for more information.

const trace = langfuse.trace({ id: "special-id" });
const langfuseHandler = new CallbackHandler({ root: trace });
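
For example, reusing the chain from the snippet above, both invocations are then grouped under the trace with id "special-id":

await chain.call(
  { product: "<user_input_one>" },
  { callbacks: [langfuseHandler] }
);
await chain.call(
  { product: "<user_input_two>" },
  { callbacks: [langfuseHandler] }
);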
