February 27, 2024

LlamaIndex integration (Python)

Hassieb Pakzad

Automatically capture detailed traces and metrics for every request of your LlamaIndex application with the new Langfuse Integration.

We're excited to announce our newest Python integration with LlamaIndex (GitHub), bringing advanced observability to a popular library in the LLM ecosystem for building RAG (retrieval-augmented generation) applications. RAG lets you inject additional context derived from private data sources (PDFs, SQL databases, slide decks) into your LLM prompts.

Our LlamaIndex integration lets you seamlessly track and monitor the performance, traces, and metrics of your LlamaIndex applications. Detailed traces of LlamaIndex's context-augmentation and LLM-querying steps are captured and can be inspected directly in the Langfuse UI. By pairing Langfuse's observability with LlamaIndex, we aim to give you the transparency needed to optimize your private-data-enhanced LLM applications.

We're incredibly excited about this release. Let us know if you have any questions or feedback!

PS: If you are interested in an integration with LlamaIndex.TS, add your upvote/comments here.

🕹️ How it works

from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager
from langfuse.llama_index import LlamaIndexCallbackHandler

# Configure the Langfuse callback handler (keys can also be provided via the
# LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY environment variables)
langfuse_callback_handler = LlamaIndexCallbackHandler(
    public_key="pk-lf-...",
    secret_key="sk-lf-...",
    host="https://cloud.langfuse.com",
)

# Register the handler globally so all LlamaIndex operations are traced
Settings.callback_manager = CallbackManager([langfuse_callback_handler])

Done! ✨ Traces and metrics from your LlamaIndex application are now automatically tracked in Langfuse.


LlamaIndex Integration

Based on the LlamaIndex Cookbook.

📚 More details

Check out the full documentation or end-to-end cookbook for more details on how to use this integration.

See our announcement blog post
