Langfuse Integration for Inferable

Inferable (GitHub) is an open-source platform that helps you build reliable agentic automations at scale.

With the native integration, you can use Inferable to quickly create distributed agentic automations and then use Langfuse to monitor and improve them. No code changes required.

Get Started

Get Langfuse API keys

  1. Create an account and project on cloud.langfuse.com
  2. Copy the API keys for your project
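
If you want to sanity-check the keys before wiring them into Inferable, a minimal sketch using the langfuse TypeScript SDK (the key values are placeholders; this step is optional and not part of the Inferable setup):

```typescript
import { Langfuse } from "langfuse";

// Placeholder credentials: use the values from your Langfuse project settings.
const langfuse = new Langfuse({
  publicKey: "pk-lf-...",
  secretKey: "sk-lf-...",
  baseUrl: "https://cloud.langfuse.com",
});

// Create a throwaway trace; if it shows up in the Langfuse dashboard,
// the keys and base URL are correct.
langfuse.trace({ name: "key-check" });
await langfuse.flushAsync();
```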

Configure Inferable with Langfuse

  1. Navigate to the Integrations tab of your preferred cluster in Inferable
  2. Add your Langfuse credentials:
    • Secret API Key: Your Langfuse Secret API Key
    • Public API Key: Your Langfuse Public API Key
    • Base URL: Your Langfuse Base URL (e.g. https://cloud.langfuse.com)
    • Send Message Payloads: Whether to send inputs and outputs of LLM calls and function calls to Langfuse
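
Conceptually, these settings amount to a small configuration object. The TypeScript interface below is purely illustrative (the field names are invented; the actual configuration is entered in the Inferable UI):

```typescript
// Hypothetical shape of the integration settings; field names are
// illustrative only. The real values are entered in the Integrations tab.
interface LangfuseIntegrationConfig {
  secretApiKey: string;         // Langfuse Secret API Key (sk-lf-...)
  publicApiKey: string;         // Langfuse Public API Key (pk-lf-...)
  baseUrl: string;              // e.g. "https://cloud.langfuse.com"
  sendMessagePayloads: boolean; // send LLM/function inputs and outputs
}
```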

Features

Tracing

Once you have enabled the Langfuse integration, you will start to see traces in the Langfuse dashboard. Every Run in Inferable will be mapped to its own trace in Langfuse.

Inferable trace in Langfuse

You will find two types of spans in the trace (a code sketch of this structure follows the list):

  • Tool Calls: Denoted by the function name. A span is created for each tool call the LLM makes in the Run.
  • LLM Calls: Denoted by GENERATION. Inferable creates a new span for each LLM call in the Run, including:
    • Agent loop reasoning
    • Utility calls (e.g., Summarization, Title generation)
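
To make the mapping concrete, here is a sketch of that structure built directly with the langfuse TypeScript SDK. Inferable emits these spans for you; the names, model, and payloads below are illustrative assumptions:

```typescript
import { Langfuse } from "langfuse";

// Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_BASEURL from env.
const langfuse = new Langfuse();

// One Run in Inferable -> one trace in Langfuse.
const trace = langfuse.trace({ name: "run-01931f0c" }); // placeholder Run ID

// LLM call -> GENERATION observation (agent loop reasoning, utility calls).
const generation = trace.generation({
  name: "agent-loop",
  model: "claude-3-5-sonnet", // placeholder model name
  input: [{ role: "user", content: "Cancel order #4213" }],
});
generation.end({ output: { role: "assistant", content: "..." } });

// Tool call -> span named after the function.
const toolCall = trace.span({
  name: "cancelOrder", // placeholder function name
  input: { orderId: 4213 },
});
toolCall.end({ output: { status: "cancelled" } });

await langfuse.flushAsync();
```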

Learn more about the Langfuse Tracing data structure in the Langfuse documentation.

Evaluations

Whenever you submit an evaluation on a Run via the Playground or the API, Inferable will send a score to Langfuse on the trace for that Run.

If you’re using Langfuse for evaluation, this lets you correlate each evaluation back to the specific trace it belongs to.

Inferable evaluation in Langfuse
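
In SDK terms, the integration is attaching a score to the Run’s trace. A minimal sketch with the langfuse TypeScript SDK (the trace ID and score name are placeholders; check your dashboard for the names the integration actually uses):

```typescript
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();

// Attach an evaluation result to the trace for a given Run.
langfuse.score({
  traceId: "run-01931f0c", // placeholder: the trace mapped to the Run
  name: "user-evaluation", // placeholder score name
  value: 1,                // e.g. 1 = positive, 0 = negative
});

await langfuse.flushAsync();
```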

Message Payload Security

By default, Inferable only sends metadata about LLM calls and function calls: the model, Run ID, token usage, latency, and so on.

If you have Send Message Payloads enabled, Inferable will also send the inputs and outputs of the LLM calls and function calls. This includes:

  • Prompts
  • Responses
  • Tool calls
  • Tool call arguments
  • Tool call results
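
In terms of what lands on a Langfuse observation, the difference looks roughly like this. The exact fields Inferable sends are not documented here; this sketch only illustrates the metadata-versus-payload split, with placeholder values:

```typescript
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();
const trace = langfuse.trace({ name: "run-01931f0c" }); // placeholder

// Send Message Payloads disabled: metadata only, no input/output.
trace.generation({
  name: "agent-loop",
  model: "claude-3-5-sonnet",                         // placeholder model
  usage: { promptTokens: 812, completionTokens: 96 },
  metadata: { runId: "run-01931f0c" },                // placeholder Run ID
});

// Send Message Payloads enabled: prompts, responses, and tool calls too.
trace.generation({
  name: "agent-loop",
  model: "claude-3-5-sonnet",
  usage: { promptTokens: 812, completionTokens: 96 },
  metadata: { runId: "run-01931f0c" },
  input: [{ role: "user", content: "Cancel order #4213" }],
  output: { role: "assistant", content: "Order cancelled." },
});

await langfuse.flushAsync();
```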

Other notes

  • Traces usually appear in Langfuse within a few seconds, but may take up to 30 seconds to be sent.
  • If you run into trouble with the integration, you can report an issue on the Inferable GitHub repository.
