
Get Started with Langfuse Tracing

This quickstart helps you integrate your LLM application with Langfuse Tracing. It logs a single LLM call to get you started.

If you are looking for other features, see the overview.

Create a new project in Langfuse

  1. Create a Langfuse account or self-host Langfuse
  2. Create a new project
  3. Create new API credentials in the project settings

Log your first LLM call to Langfuse

The @observe() decorator makes it easy to trace any Python LLM application. In this quickstart we also use the Langfuse OpenAI integration to automatically capture all model parameters.

Not using OpenAI? Switch to the "Python Decorator + any LLM" tab.

pip install langfuse openai
.env
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
# 🇪🇺 EU region
LANGFUSE_HOST="https://cloud.langfuse.com"
# 🇺🇸 US region
# LANGFUSE_HOST="https://us.cloud.langfuse.com"
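Before running the app, it can help to confirm that all three variables are actually set. The helper below is a hypothetical convenience, not part of the Langfuse SDK; it only checks that the variable names from the .env file above are present in the environment.

```python
# Hypothetical helper (not part of the Langfuse SDK) to verify that the
# credentials from the .env file above are set before running main.py.
import os

REQUIRED_KEYS = ("LANGFUSE_SECRET_KEY", "LANGFUSE_PUBLIC_KEY", "LANGFUSE_HOST")

def missing_credentials(env=os.environ):
    """Return the names of required Langfuse variables that are unset."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]

if missing_credentials():
    print("Set the Langfuse environment variables before running main.py")
```

If anything is missing, export the variables (or load the .env file with a tool of your choice) and re-run the check.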
main.py
from langfuse.decorators import observe
from langfuse.openai import openai # OpenAI integration
 
@observe()
def story():
    return openai.chat.completions.create(
        model="gpt-3.5-turbo",
        max_tokens=100,
        messages=[
          {"role": "system", "content": "You are a great storyteller."},
          {"role": "user", "content": "Once upon a time in a galaxy far, far away..."}
        ],
    ).choices[0].message.content
 
@observe()
def main():
    return story()
 
main()

Done! Now visit the Langfuse interface to view the trace you just created.

All Langfuse platform features

This was a very brief introduction to get started with Langfuse. Explore all Langfuse platform features in detail.

  - Develop
  - Monitor
  - Test
  - References
  - FAQ
