Blog
The latest updates from Langfuse. See Changelog for more product updates.
Should you use an LLM Proxy to Build your Application?
We go over the advantages and disadvantages of using an LLM proxy. Read more →
knowledge · 2024/09/23 · by Clemens, Marc
AI Agent Observability with Langfuse
Learn about agents and the importance of monitoring and tracking performance, cost, and user interactions. Explore tools like LangGraph, Llama Agents, Dify, Flowise, and Langflow, and see how Langfuse helps to trace and optimize your application. Read more →
showcase · 2024/07/31 · by Jannik
Dify x Langfuse: Built in observability & analytics
The popular LLM development framework Dify.AI now integrates with Langfuse out of the box and in one click. Read more →
integration · 2024/07/01 · by Clemens
Monitoring LLM Security & Reducing LLM Risks
LLM security requires effective run-time checks and ex-post monitoring and evaluation. Learn how to use Langfuse together with popular security libraries to prevent common security risks. Read more →
showcase · 2024/06/06 · by Lydia You
Haystack <> Langfuse Integration
Easily monitor and trace your Haystack pipelines with this new Langfuse integration. Read more →
integration · 2024/05/16 · by Lydia
Introducing Langfuse 2.0: the LLM Engineering Platform
Extending Langfuse’s core tracing with evaluations, prompt management, LLM playground and datasets. Read more →
integration · 2024/04/26 · by Clemens
Trace complex LLM applications with the Langfuse decorator (Python)
When building RAG pipelines or agents, many LLM calls and non-LLM inputs feed into the final output. The Langfuse decorator allows you to trace and evaluate your application holistically. Read more →
integration · 2024/04/24 · by Marc
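As a minimal sketch of the approach described in that post (assuming the Python SDK's observe decorator; function names here are illustrative, not from the post), nested tracing might look like:

```python
from langfuse.decorators import observe

@observe()  # creates a trace for the top-level call
def rag_pipeline(question: str) -> str:
    context = retrieve(question)
    return generate(question, context)

@observe()  # nested call is recorded as a child observation
def retrieve(question: str) -> str:
    return "retrieved context"  # placeholder for a real retriever

@observe(as_type="generation")  # recorded as an LLM generation
def generate(question: str, context: str) -> str:
    return f"answer to {question} using {context}"  # placeholder for a real LLM call
```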
Langfuse Launch Week #1
A week full of new feature releases. Read more →
launchweek · 2024/04/19 · by Clemens
Native LlamaIndex Integration
Context retrieval is a mainstay of LLM engineering. This latest integration brings Langfuse's observability to LlamaIndex applications for simple tracing, monitoring, and evaluation of RAG pipelines. Read more →
integration · 2024/03/06 · by Clemens
Langfuse adds >20 evals with UpTrain.ai integration
Run UpTrain's >20 pre-configured evaluations on your production and development data in Langfuse. Read more →
cookbook · 2024/03/05 · by Clemens
Langfuse raises $4M
We're thrilled to announce that Langfuse, the open source observability and analytics solution for LLM applications, has raised a $4M seed round from Lightspeed Venture Partners, La Famiglia, and Y Combinator. Read more →
announcement · 2023/11/07 · by Marc
Langfuse Update — October 2023
Improved dashboard, OpenAI integration, RAG evals, doubled API speed, simplified Docker deployments. Read more →
update · 2023/11/01 · by Marc
Langfuse Update — September 2023
Model-based evals, datasets, core improvements (query engine, complex filters, exports, sharing) and new integrations (Langflow, Flowise, LiteLLM). Read more →
update · 2023/10/02 · by Marc
Langflow x Langfuse
Build no-code in Langflow, observe and analyze in Langfuse. Read more →
integration · 2023/09/21 · by Max
Langfuse Update — August 2023
Improved data ingestion, integrations and UI. Read more →
update · 2023/09/06 · by Max
Langfuse Update — July 2023
Async SDKs, automated token counts, new Trace UI, Langchain integration, public GET API, new reports, ... Read more →
update · 2023/08/02 · by Marc
🤖 Q&A Chatbot for Langfuse Docs
Summary of how we've built a Q&A chatbot for the Langfuse docs and how Langfuse helps us to improve it. Read more →
showcase · 2023/07/31 · by Marc
🦜🔗 Langchain Integration
Integrate with Langfuse in seconds using the new Langchain Integration. Read more →
release · 2023/07/27 · by Max
LLM Chatbot Showcase
We integrated Langfuse with the Vercel AI Chatbot. This example includes streaming responses, automated token counts, collection of user feedback in the frontend, grouping of threads into a single trace, and more. Read more →
showcase · 2023/07/21 · by Marc
Launch YC
Cross-post of our Launch YC (W23) post which explains why we're building Langfuse. Read more →
announcement · 2023/07/19 · by Clemens