Langfuse Overview

Langfuse is an open-source LLM engineering platform that helps teams collaboratively debug, analyze, and iterate on their LLM applications. All platform features are natively integrated to accelerate the development workflow. Langfuse is open, self-hostable, and extensible (see Why Langfuse? below).

Tracing

  • Log traces
  • Low-level transparency
  • Understand cost and latency

Prompts

  • Version control and deploy
  • Collaborate on prompts
  • Test prompts and models

Evals

  • Measure output quality
  • Monitor production health
  • Test changes in development

Platform

  • API-first architecture
  • Data exports to blob storage
  • Enterprise security and administration
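
The platform is API-first: everything available in the UI can also be queried or managed programmatically. As a minimal sketch, the snippet below lists recent traces via the public API, assuming a Langfuse Cloud project and the /api/public/traces endpoint (adjust the host for self-hosted deployments; the key placeholders and response handling are illustrative):

    import requests

    LANGFUSE_HOST = "https://cloud.langfuse.com"  # or your self-hosted URL
    PUBLIC_KEY = "pk-lf-..."                      # project API key pair
    SECRET_KEY = "sk-lf-..."

    # The public API uses HTTP basic auth: public key as username,
    # secret key as password.
    resp = requests.get(
        f"{LANGFUSE_HOST}/api/public/traces",
        auth=(PUBLIC_KEY, SECRET_KEY),
        params={"limit": 10},
    )
    resp.raise_for_status()

    # Paginated responses carry the items under "data" (assumed shape).
    for trace in resp.json()["data"]:
        print(trace["id"], trace.get("name"))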

Tracing

Tracing is essential for understanding and debugging LLM applications. Unlike traditional software, LLM applications involve complex, non-deterministic interactions that can be challenging to monitor and debug. Langfuse provides comprehensive tracing capabilities that help you understand exactly what’s happening in your application.

  • Traces capture all LLM and non-LLM steps, including retrieval, embedding, API calls, and more
  • Support for tracking multi-turn conversations as sessions and attributing traces to users
  • Agents can be represented as graphs
  • Capture traces via our native SDKs for Python/JS, 50+ library/framework integrations, OpenTelemetry, or via an LLM Gateway such as LiteLLM

Want to see an example? Play with the interactive demo.

Traces allow you to track every LLM call and other relevant logic in your app.
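
As a concrete starting point, here is a minimal sketch using the Python SDK's @observe decorator together with the drop-in OpenAI wrapper. Import paths differ between SDK versions, and credentials are assumed to be set via the LANGFUSE_* (and OPENAI_API_KEY) environment variables:

    from langfuse import observe          # older SDKs: from langfuse.decorators import observe
    from langfuse.openai import openai    # drop-in OpenAI wrapper that logs model calls as generations

    @observe()  # creates a trace for this function; nested calls become child observations
    def answer_question(question: str) -> str:
        # Non-LLM steps (retrieval, tool calls, ...) wrapped with @observe()
        # appear as nested spans in the same trace.
        completion = openai.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": question}],
        )
        return completion.choices[0].message.content

    print(answer_question("What does Langfuse trace?"))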

Prompt Management

Prompt management is critical to building effective LLM applications. Langfuse provides tools to help you manage, version, and optimize your prompts throughout the development lifecycle.

  • Get started with prompt management
  • Manage, version, and optimize your prompts throughout the development lifecycle
  • Test prompts interactively in the LLM Playground
  • Run Prompt Experiments against datasets to test new prompt versions directly within Langfuse

Create a new prompt via UI, SDKs, or API.
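
At runtime, prompts are fetched from Langfuse and compiled with their variables. A minimal sketch with the Python SDK, where "movie-critic" is a hypothetical prompt name and credentials come from the LANGFUSE_* environment variables:

    from langfuse import Langfuse

    langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST

    # Fetch the current production version of a managed prompt.
    prompt = langfuse.get_prompt("movie-critic")

    # Fill in {{movie}}-style placeholders defined in the prompt template.
    compiled = prompt.compile(movie="Dune 2")

    print(compiled)        # ready to send to the model
    print(prompt.version)  # served version, useful for linking prompts to traces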

Evaluations

Evals are crucial for ensuring the quality and reliability of your LLM applications. Langfuse provides flexible evaluation tools that adapt to your specific needs, whether you’re testing in development or monitoring production performance.

  • Get started with different evaluation methods: LLM-as-a-judge, user feedback, manual labeling, or custom scores via the API/SDKs
  • Identify issues early by running evaluations on production traces
  • Create and manage Datasets for systematic testing in development, so you can verify that your application performs reliably across different scenarios

Plot evaluation results in the Langfuse Dashboard.
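
Evaluation results are attached to traces as scores, whether they come from an LLM judge, user feedback, or your own pipeline. A minimal sketch of writing a score with the Python SDK; the trace id and score name are illustrative, and the method is called score() in older SDK versions:

    from langfuse import Langfuse

    langfuse = Langfuse()

    # Attach a numeric evaluation result to an existing trace.
    langfuse.create_score(          # older SDK versions: langfuse.score(...)
        trace_id="abc-123",         # hypothetical id captured by your application
        name="accuracy",
        value=1.0,                  # numeric, boolean, or categorical values are supported
        comment="Answer matched the reference.",
    )

    langfuse.flush()  # ensure the event is sent before the process exits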

Where to start?

Setting up the full workflow of online tracing, prompt management, production evaluations to identify issues, and offline evaluations on datasets takes some time. This guide helps you figure out what is most important for your use case.

Simplified lifecycle from PoC to production:

Langfuse Features along the development lifecycle

Why Langfuse?

  • Open source: Fully open source with public API for custom integrations
  • Production optimized: Designed with minimal performance overhead
  • Best-in-class SDKs: Native SDKs for Python and JavaScript
  • Framework support: Integrated with popular frameworks like OpenAI SDK, LangChain, and LlamaIndex
  • Multi-modal: Support for tracing text, images and other modalities
  • Full platform: Suite of tools for the complete LLM application development lifecycle

Community & Contact

We actively develop Langfuse in open source together with our community.

Langfuse evolves quickly; check out the changelog for the latest updates, and subscribe to the mailing list to get notified about major new features.
