
Cookbook: Observability for Groq Models (Python)

This cookbook shows two ways to interact with Groq models and trace the calls with Langfuse:

  1. Using the OpenAI SDK (pointed at the Groq API) via Langfuse's drop-in wrapper
  2. Using the native Groq SDK together with the OpenInference instrumentation library

By following these examples, you’ll learn how to log and trace interactions with Groq language models, enabling you to debug and evaluate the performance of your AI-driven applications.

ℹ️ Note: Langfuse is also natively integrated with LangChain, LlamaIndex, LiteLLM, and other frameworks. If you use one of them, any use of Groq models is instrumented right away.

To get started, set up your environment variables for Langfuse and Groq:

import os
 
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." 
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # πŸ‡ͺπŸ‡Ί EU region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # πŸ‡ΊπŸ‡Έ US region
 
# Your Groq API key
os.environ["GROQ_API_KEY"] = "gsk_..."

Option 1: Using the OpenAI SDK to interact with the Groq model

Note: This example shows how to use the OpenAI Python SDK. If you use JS/TS, have a look at our OpenAI JS/TS SDK.

Install Required Packages

%pip install langfuse openai --upgrade

Import Necessary Modules

Instead of importing openai directly, import it from langfuse.openai. Also, import any other necessary modules.

# Instead of: import openai
from langfuse.openai import OpenAI
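
The drop-in wrapper exposes the same client classes as the openai package, so existing code keeps working unchanged. A minimal sketch of the alternative import styles:

# Module-style import also works
from langfuse.openai import openai
 
# Async client with identical tracing behavior
from langfuse.openai import AsyncOpenAI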

Initialize the OpenAI Client for the Groq Model

Initialize the OpenAI client, but point it to the Groq API endpoint. The client reads your Groq API key from the GROQ_API_KEY environment variable set earlier.

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key=os.environ.get("GROQ_API_KEY")
)

Chat Completion Request

Use the client to make a chat completion request to the Groq model.

completion = client.chat.completions.create(
    model="llama3-8b-8192",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": "Write a poem about language models"
        }
    ]
)
print(completion.choices[0].message.content)

Example trace in Langfuse
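
The wrapper also traces streaming requests. A minimal sketch using the same client and model as above:

stream = client.chat.completions.create(
    model="llama3-8b-8192",
    messages=[{"role": "user", "content": "Write a haiku about inference speed"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries a partial delta of the response
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")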

Option 2: Using the OpenInference instrumentation library

This option will use the OpenInference instrumentation library to send traces to Langfuse.

For more detailed guidance on the Groq SDK, please refer to the Groq Documentation and the Langfuse Documentation.

Install Required Packages

%pip install groq langfuse openinference-instrumentation-groq

Set your Langfuse and Groq credentials (same as in Option 1):

import os
 
# Get keys for your project from the project settings page: https://cloud.langfuse.com
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..." 
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..." 
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com" # πŸ‡ͺπŸ‡Ί EU region
# os.environ["LANGFUSE_HOST"] = "https://us.cloud.langfuse.com" # πŸ‡ΊπŸ‡Έ US region
 
# Your Groq API key
os.environ["GROQ_API_KEY"] = "gsk_..."

With the environment variables set, we can now initialize the Langfuse client. get_client() initializes the Langfuse client using the credentials provided in the environment variables.

from langfuse import get_client
 
# Initialise Langfuse client and verify connectivity
langfuse = get_client()
assert langfuse.auth_check(), "Langfuse auth failed - check your keys ✋"
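
If you prefer not to rely on environment variables, the client can also be constructed explicitly; the keyword arguments mirror the environment variables above:

from langfuse import Langfuse
 
# Explicit initialization instead of environment variables
langfuse = Langfuse(
    public_key="pk-lf-...",
    secret_key="sk-lf-...",
    host="https://cloud.langfuse.com",
)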

OpenTelemetry Instrumentation

Use the OpenInference instrumentation library to wrap the Groq SDK calls and send OpenTelemetry spans to Langfuse.

from openinference.instrumentation.groq import GroqInstrumentor
 
GroqInstrumentor().instrument()
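
The instrumentor follows the standard OpenTelemetry BaseInstrumentor interface, so tracing can be switched off again if needed, e.g. when re-running notebook cells or in tests:

# Remove the instrumentation hooks when tracing is no longer wanted
GroqInstrumentor().uninstrument()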

Example LLM Call

from groq import Groq
 
# Initialize Groq client
groq_client = Groq(api_key=os.environ["GROQ_API_KEY"])
chat_completion = groq_client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of fast language models",
        }
    ],
    model="llama-3.3-70b-versatile",
)
 
print(chat_completion.choices[0].message.content)
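
If your application is async, the Groq SDK's AsyncGroq client works the same way; recent versions of the OpenInference Groq instrumentation cover the async client as well (a sketch, verify against your installed version):

import asyncio
from groq import AsyncGroq
 
async def main() -> str:
    async_client = AsyncGroq(api_key=os.environ["GROQ_API_KEY"])
    chat_completion = await async_client.chat.completions.create(
        messages=[{"role": "user", "content": "Explain the importance of fast language models"}],
        model="llama-3.3-70b-versatile",
    )
    return chat_completion.choices[0].message.content
 
# In a notebook, use `await main()` instead of asyncio.run(main())
print(asyncio.run(main()))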

See Traces in Langfuse

After running the example model call, you can see the traces in Langfuse. You will see detailed information about your Groq API calls, including:

  • Request parameters (model, messages, temperature, etc.)
  • Response content
  • Token usage statistics
  • Latency metrics

Example trace in Langfuse
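
Langfuse sends spans asynchronously in the background. In short-lived scripts or notebooks, flush the client before the process exits so no trace data is lost:

# Ensure all buffered spans and events are delivered before exiting
langfuse.flush()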

Interoperability with the Python SDK

You can use this integration together with the Langfuse Python SDK to add additional attributes to the trace.

The @observe() decorator provides a convenient way to automatically wrap your instrumented code and add additional attributes to the trace.

from langfuse import observe, get_client
 
langfuse = get_client()
 
@observe()
def my_instrumented_function(input):
    output = my_llm_call(input)  # placeholder for your instrumented LLM call
 
    langfuse.update_current_trace(
        input=input,
        output=output,
        user_id="user_123",
        session_id="session_abc",
        tags=["agent", "my-trace"],
        metadata={"email": "[email protected]"},
        version="1.0.0"
    )
 
    return output
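
Putting both pieces together, a minimal end-to-end sketch (the function name is illustrative) that wraps the instrumented Groq client in a decorated function:

from groq import Groq
from langfuse import observe, get_client
 
langfuse = get_client()
groq_client = Groq(api_key=os.environ["GROQ_API_KEY"])
 
@observe()
def explain(topic: str) -> str:  # illustrative function name
    completion = groq_client.chat.completions.create(
        messages=[{"role": "user", "content": f"Explain {topic} in one paragraph."}],
        model="llama-3.3-70b-versatile",
    )
    output = completion.choices[0].message.content
    # Attach additional attributes to the surrounding trace
    langfuse.update_current_trace(user_id="user_123", tags=["groq-cookbook"])
    return output
 
print(explain("fast language models"))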

Learn more about using the Decorator in the Python SDK docs.

Next Steps

Once you have instrumented your code, you can manage, evaluate, and debug your application in Langfuse.
