
Introducing the observe() decorator for Python

Decorator Integration
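The Langfuse `observe()` decorator wraps a Python function so that each call is captured as part of a trace (name, inputs, outputs, timing) without manual instrumentation. As a conceptual illustration only — not the actual Langfuse implementation, whose decorator additionally handles nesting, async functions, and sending data to the Langfuse backend — the general pattern can be sketched like this:

```python
import functools
import time

# Hypothetical in-memory trace store, standing in for the Langfuse backend.
TRACE = []

def observe(name=None):
    """Illustrative observe()-style decorator: records each call's
    name, duration, input, and output into TRACE."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = fn(*args, **kwargs)
            TRACE.append({
                "name": name or fn.__name__,
                "duration": time.time() - start,
                "input": {"args": args, "kwargs": kwargs},
                "output": result,
            })
            return result
        return wrapper
    return decorator

@observe()
def add(a, b):
    return a + b

add(2, 3)  # call is recorded in TRACE as {"name": "add", ..., "output": 5}
```

In the real SDK you would import the decorator from the Langfuse Python package and apply `@observe()` the same way; the resulting observations then appear in the Langfuse UI rather than in a local list.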

Learn more

  • Decorator docs
  • Blog post
  • Notebook demonstrating all features
  • Rap battle example notebook (video)