
Observability for Qwen with Langfuse

This notebook shows how to trace Qwen API calls with Langfuse using the OpenAI SDK drop-in replacement.

What is Qwen? Qwen is a family of large language models developed by Alibaba Cloud, offering models like Qwen-Plus, Qwen-Max, and Qwen-Turbo through an OpenAI-compatible API.

What is Langfuse? Langfuse is an open-source LLM engineering platform that helps teams trace, debug, and evaluate their LLM applications.

Step 1: Install Dependencies

%pip install openai langfuse

Step 2: Set Up Environment Variables

Get your Langfuse keys from the project settings in Langfuse Cloud or set up self-hosting.

import os

# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." 
os.environ["LANGFUSE_BASE_URL"] = "https://cloud.langfuse.com" # 🇪🇺 EU region
# os.environ["LANGFUSE_BASE_URL"] = "https://us.cloud.langfuse.com" # 🇺🇸 US region

os.environ["DASHSCOPE_API_KEY"] = "sk-..."  # Get your API key from https://qwen.ai/apiplatform
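Before making any calls, it can help to confirm that all of the variables above are actually set. A minimal stdlib-only sketch (the variable names match the cells above; the helper name is our own):

```python
import os

REQUIRED_VARS = [
    "LANGFUSE_PUBLIC_KEY",
    "LANGFUSE_SECRET_KEY",
    "LANGFUSE_BASE_URL",
    "DASHSCOPE_API_KEY",
]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

problems = missing_env_vars()
if problems:
    print("Missing configuration:", ", ".join(problems))
```

Running this before the steps below surfaces a missing or empty key immediately, rather than as an authentication error mid-request.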

Step 3: Use Langfuse OpenAI Drop-in Replacement

Instead of importing openai directly, import it from langfuse.openai. All calls through this client are automatically traced in Langfuse.

The Qwen API is OpenAI-compatible, so we just point the client at the DashScope endpoint.

import os
from langfuse.openai import openai

client = openai.OpenAI(
    api_key=os.environ.get("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

Step 4: Run an Example

Send a chat completion request to a Qwen model. The trace will appear in your Langfuse dashboard automatically.

response = client.chat.completions.create(
    model="qwen-plus",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What makes open-source AI models important?"},
    ],
    name="Qwen-Trace",
)
print(response.choices[0].message.content)

Step 5: View Traces in Langfuse

After running the example, open your Langfuse dashboard to see the full trace including prompts, completions, tool calls, token usage, and latency.
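Token usage is read from the standard `usage` object that OpenAI-compatible APIs, including DashScope, return with each response. Sketched on a stubbed response so it runs standalone (field names follow the OpenAI schema; the values are made up):

```python
from types import SimpleNamespace

# Stub mirroring the shape of an OpenAI-compatible chat completion response.
response = SimpleNamespace(
    usage=SimpleNamespace(prompt_tokens=24, completion_tokens=87, total_tokens=111),
)

u = response.usage
print(f"prompt={u.prompt_tokens} completion={u.completion_tokens} total={u.total_tokens}")
```

These are the same counts you will see on the generation in the Langfuse UI, alongside the recorded latency.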

Example Qwen trace in Langfuse

Interoperability with the Python SDK

You can use this integration together with the Langfuse Python SDK to add attributes such as user ID, session ID, or metadata to the resulting observations.

The @observe() decorator provides a convenient way to wrap your instrumented code and attach these attributes automatically.

from langfuse import observe, propagate_attributes, get_client
 
langfuse = get_client()
 
@observe()
def my_llm_pipeline(input):
    # Add additional attributes (user_id, session_id, metadata, version, tags) to all spans created within this execution scope
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-observation"],
        metadata={"email": "user@langfuse.com"},
        version="1.0.0"
    ):

        # YOUR APPLICATION CODE HERE
        result = call_llm(input)

        return result

# Run the function
my_llm_pipeline("Hi")

Learn more about using the Decorator in the Langfuse SDK instrumentation docs.

Alternatively, you can wrap your instrumented code in a context manager (a with statement), which also lets you add attributes to the observation.

from langfuse import get_client, propagate_attributes

langfuse = get_client()

with langfuse.start_as_current_observation(
    as_type="span",
    name="my-observation",
    trace_context={"trace_id": "abcdef1234567890abcdef1234567890"},  # Must be 32 hex chars
) as observation:

    # Add additional attributes (user_id, session_id, metadata, version, tags)
    # to all observations created within this execution scope
    with propagate_attributes(
        user_id="user_123",
        session_id="session_abc",
        metadata={"experiment": "variant_a", "env": "prod"},
        version="1.0",
    ):
        # YOUR APPLICATION CODE HERE
        result = call_llm("some input")

# Flush events in short-lived applications
langfuse.flush()
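The trace_id passed above must be exactly 32 lowercase hex characters (16 bytes, matching the W3C Trace Context format). If you need to generate one yourself, a stdlib sketch is:

```python
import uuid

def new_trace_id() -> str:
    """Generate a 32-character lowercase hex trace ID (16 random bytes)."""
    return uuid.uuid4().hex

trace_id = new_trace_id()
print(trace_id, len(trace_id))
```

Recent versions of the Langfuse SDK also ship a helper for this, so check the SDK reference before rolling your own in production code.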

Learn more about using the Context Manager in the Langfuse SDK instrumentation docs.

Troubleshooting

No observations appearing

First, enable debug mode in the Python SDK:

export LANGFUSE_DEBUG="True"

Then run your application and check the debug logs:

  • OTel observations appear in the logs: Your application is instrumented correctly but observations are not reaching Langfuse. To resolve this:
    1. Call langfuse.flush() at the end of your application to ensure all observations are exported.
    2. Verify that you are using the correct API keys and base URL.
  • No OTel spans in the logs: Your application is not instrumented correctly. Make sure the instrumentation runs before your application code.
Unwanted observations in Langfuse

The Langfuse SDK is based on OpenTelemetry. Other libraries in your application may emit OTel spans that are not relevant to you. These still count toward your billable units, so you should filter them out. See Unwanted spans in Langfuse for details.

Missing attributes

Some attributes may be stored in the metadata object of the observation rather than being mapped to the Langfuse data model. If a mapping or integration does not work as expected, please raise an issue on GitHub.

Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.
