Quickstart

This quickstart shows you how to integrate your LLM application with Langfuse by logging a single LLM call.

Create a new project in Langfuse

  1. Create a Langfuse account or self-host Langfuse
  2. Create a new project
  3. Create new API credentials in the project settings

Log your first LLM call to Langfuse

The @observe() decorator makes it easy to trace any Python LLM application. In this quickstart we also use the Langfuse OpenAI integration to automatically capture all model parameters.

Not using OpenAI? Switch to the "Python Decorator + any LLM" tab.

pip install langfuse openai
.env
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com" # πŸ‡ͺπŸ‡Ί EU region
# LANGFUSE_HOST="https://us.cloud.langfuse.com" # πŸ‡ΊπŸ‡Έ US region
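The SDK reads these credentials from the environment at startup. As an illustrative sketch (the `require_env` helper is hypothetical, not part of Langfuse), you can fail fast when a required variable is missing:

```python
import os

# Placeholder credentials for illustration only; in practice these come
# from the .env file shown above (e.g. loaded via python-dotenv).
os.environ.setdefault("LANGFUSE_SECRET_KEY", "sk-lf-...")
os.environ.setdefault("LANGFUSE_PUBLIC_KEY", "pk-lf-...")

def require_env(name: str) -> str:
    """Return a required environment variable or fail fast with a clear error."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

secret_key = require_env("LANGFUSE_SECRET_KEY")
public_key = require_env("LANGFUSE_PUBLIC_KEY")
# Default to the EU region host when LANGFUSE_HOST is unset.
host = os.getenv("LANGFUSE_HOST", "https://cloud.langfuse.com")
```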
main.py
from langfuse.decorators import observe
from langfuse.openai import openai # OpenAI integration
 
@observe()
def story():
    return openai.chat.completions.create(
        model="gpt-3.5-turbo",
        max_tokens=100,
        messages=[
          {"role": "system", "content": "You are a great storyteller."},
          {"role": "user", "content": "Once upon a time in a galaxy far, far away..."}
        ],
    ).choices[0].message.content
 
@observe()
def main():
    return story()
 
main()
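Conceptually, `@observe()` wraps each function and records its name, inputs, and output as part of a trace. A minimal pure-Python sketch of that idea (not Langfuse's actual implementation, which also captures nesting and timing and ships events to the server asynchronously):

```python
import functools

# Toy in-memory trace log; the real SDK sends structured events to the Langfuse API.
trace_log = []

def observe(func=None):
    """Record a function's name, arguments, and return value on each call."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            result = f(*args, **kwargs)
            trace_log.append({"name": f.__name__, "args": args, "output": result})
            return result
        return wrapper
    # Support both @observe and @observe() usage.
    return decorator(func) if callable(func) else decorator

@observe()
def story():
    return "Once upon a time..."

@observe()
def main():
    return story()

main()
# trace_log now holds one entry per observed call, innermost first.
```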

Done! Now visit the Langfuse interface to look at the trace you just created.

All Langfuse platform features

This was a very brief introduction to get started with Langfuse. Explore all Langfuse platform features in detail.

Develop

Monitor

Test

References
