Langfuse Roadmap
Langfuse is open source and we want to be fully transparent about what we’re working on and what’s next. This roadmap is a living document that we’ll update as we make progress.
Your feedback is highly appreciated. Feel like something is missing? Add new ideas on GitHub or vote on existing ones. Both are great ways to contribute to Langfuse and help us understand what is important to you.
🚀 Released
The 10 most recent changelog items:
- Slack Integration for Prompt Webhooks
- Usage Alerts
- LLM Playground with Side-by-Side Comparison
- Sessions in Annotation Queues
- LiveKit Agents Tracing Integration
- Trigger Remote Custom Experiments from UI
- Full-Text Search Across Prompt Content
- AWS SDK Default Credential Provider Chain Support
- Webhooks for Prompt Changes
- n8n Node for Langfuse Prompt Management
Subscribe to our mailing list to get occasional email updates about new features.
🚧 In progress
- Tracing
  - Unified agent graphs
  - New JS SDK based on OpenTelemetry (#1291)
- Evaluation
  - Improvements to core eval views (e.g., compare run view)
  - Annotate dataset experiments
🔮 Planned
- Agent Observability
- Evaluation
  - Rule‑based evaluators (regex, structural checks) (#4671, #4484)
  - Trace LLM‑as‑judge evaluations for debugging & cost tracking
  - Evaluation comparison dashboard: correlation, confusion matrix, overlap histogram
  - SDK abstraction for easy experiment setup and UI‑triggered external runners
  - Sessions & observations support for annotation queues (#7551)
  - Session-level and observation-level LLM‑as‑judge evaluations
- Datasets
- Playground
  - Multi‑modal message support (#6017)
  - Dataset experiments in playground
- Prompt Management
- Data Platform
🙏 Feature requests and bug reports
The best way to support Langfuse is to share your feedback, report bugs, and upvote ideas suggested by others.
Feature requests
Bug reports