Vercel AI SDK - Observability & Analytics

Telemetry is an experimental feature of the AI SDK and might change in the future.

The Vercel AI SDK offers native instrumentation with OpenTelemetry. You can enable the Vercel AI SDK telemetry by passing { experimental_telemetry: { isEnabled: true } } to your AI SDK function calls.
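For example, a single generateText call with telemetry enabled could look as follows (a minimal sketch; the model, prompt, functionId, and metadata values are placeholders and not part of the original example):

import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
 
const { text } = await generateText({
  model: openai("gpt-4o-mini"), // placeholder model
  prompt: "Tell me a joke about tracing.",
  experimental_telemetry: {
    isEnabled: true,
    functionId: "joke-generator", // optional: identifies this call in the trace
    metadata: { feature: "demo" }, // optional: custom metadata added to the spans
  },
});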

The LangfuseSpanProcessor automatically detects and handles multimodal data in your traces.

Here is a full example of how to set up tracing with:

  • AI SDK v5
  • Next.js
  • deployed on Vercel

instrumentation.ts
import { LangfuseSpanProcessor, ShouldExportSpan } from "@langfuse/otel";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
 
// Optional: filter out Next.js infra spans
const shouldExportSpan: ShouldExportSpan = (span) => {
  return span.otelSpan.instrumentationScope.name !== "next.js";
};
 
export const langfuseSpanProcessor = new LangfuseSpanProcessor({
  shouldExportSpan,
});
 
const tracerProvider = new NodeTracerProvider({
  spanProcessors: [langfuseSpanProcessor],
});
 
tracerProvider.register();
💡 If you are using Next.js, please use a manual OpenTelemetry setup via the NodeTracerProvider rather than registerOTel from @vercel/otel. The @vercel/otel package does not yet support the OpenTelemetry JS SDK v2, on which the @langfuse/tracing and @langfuse/otel packages are based.

route.ts
import { streamText, type UIMessage } from "ai";
import { after } from "next/server";
 
import { openai } from "@ai-sdk/openai";
import {
  observe,
  updateActiveObservation,
  updateActiveTrace,
} from "@langfuse/tracing";
import { trace } from "@opentelemetry/api";
 
import { langfuseSpanProcessor } from "@/src/instrumentation";
 
const handler = async (req: Request) => {
  const {
    messages,
    chatId,
    userId,
  }: { messages: UIMessage[]; chatId: string; userId: string } =
    await req.json();
 
  // Extract the latest user message text to use as input
  const inputText = messages[messages.length - 1].parts.find(
    (part) => part.type === "text"
  )?.text;
 
  updateActiveObservation({
    input: inputText,
  });
 
  // Set trace name, session id and user id on the active trace
  updateActiveTrace({
    name: "my-ai-sdk-trace",
    sessionId: chatId,
    userId,
    input: inputText,
  });
 
  const result = streamText({
    // ... other streamText options ...
    experimental_telemetry: {
      isEnabled: true,
    },
    onFinish: async (result) => {
      updateActiveObservation({
        output: result.content,
      });
      updateActiveTrace({
        output: result.content,
      });
 
      // End span manually after stream has finished
      trace.getActiveSpan()?.end();
    },
  });
 
  // Important in serverless environments: schedule flush after request is finished
  after(async () => await langfuseSpanProcessor.forceFlush());
 
  return result.toUIMessageStreamResponse();
};
 
export const POST = observe(handler, {
  name: "handle-chat-message",
  endOnExit: false, // end observation _after_ stream has finished
});

Learn more about AI SDK telemetry in the Vercel AI SDK documentation on Telemetry.

Short-lived environments

In short-lived environments such as Vercel Functions or AWS Lambda, you can force an export and flush of spans after function execution and before the environment is frozen or shut down by awaiting a call to the forceFlush method on the LangfuseSpanProcessor instance.
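For example, in an AWS Lambda handler this could look as follows (a minimal sketch; it assumes langfuseSpanProcessor is exported from your tracing setup file, as in the instrumentation.ts above, and the handler body is a placeholder):

import { langfuseSpanProcessor } from "./instrumentation";
 
export const handler = async (event: unknown) => {
  // ... run your traced AI SDK calls here ...
 
  // Export all buffered spans before the environment is frozen or shut down
  await langfuseSpanProcessor.forceFlush();
 
  return { statusCode: 200, body: "ok" };
};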

Learn more

See the telemetry documentation of the Vercel AI SDK for more information.
