LLM Playground
Test and refine your prompts directly in the Langfuse Prompt Playground. Tweak the prompt and model parameters to see how different models respond to these changes. This lets you quickly iterate on your prompts and optimize them for the best results in your LLM app without switching between tools or writing any code.
Core features
Side-by-Side Comparison View
Compare multiple prompt variants alongside each other. Execute them all at once or focus on a single variant. Each variant keeps its own LLM settings, variables, tool definitions, and placeholders so you can immediately see the impact of every change.
Open your prompt in the playground
You can open a prompt you created with Langfuse Prompt Management in the playground.
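If you manage prompts via the SDK, a prompt created this way can then be opened in the playground. Below is a minimal sketch, assuming the Langfuse Python SDK's `create_prompt` method; the prompt name `movie-critic` and its content are illustrative:

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY and LANGFUSE_HOST from the environment
langfuse = Langfuse()

# Create a chat prompt; it appears in Prompt Management and can be
# opened in the playground from there.
langfuse.create_prompt(
    name="movie-critic",  # illustrative name
    type="chat",
    prompt=[
        {"role": "system", "content": "You are a movie critic."},
        {"role": "user", "content": "Review {{movie}} in two sentences."},
    ],
    labels=["production"],
)
```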
Save your prompt to Prompt Management
When you’re satisfied with your prompt, you can save it to Prompt Management by clicking the save button.
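Once saved, the prompt is versioned in Prompt Management and can be fetched in your application code. A minimal sketch, assuming the Python SDK and the illustrative prompt name `movie-critic` from above:

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Fetch the latest version of the prompt you saved from the playground
prompt = langfuse.get_prompt("movie-critic")  # illustrative name
print(prompt.prompt)  # the raw template, including any {{variables}}
```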
Open a generation in the playground
You can open a generation from Langfuse Observability in the playground by clicking the Open in Playground button on the generation details page.
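For a generation to appear in Observability (and therefore be openable in the playground), it needs to be traced. A minimal sketch, assuming the Python SDK's `@observe` decorator; the function, model call, and inputs are illustrative:

```python
from langfuse import observe  # older SDK versions: from langfuse.decorators import observe

@observe(as_type="generation")
def call_llm(user_message: str) -> str:
    # Call your LLM provider here; this function's input and output are
    # captured as a generation in Langfuse Observability.
    return "..."  # placeholder response

call_llm("Summarize the plot of Dune in one sentence.")
```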
Tool calling and structured outputs
The Langfuse Playground supports tool calling and structured output schemas, enabling you to define, test, and validate prompts that rely on tool calls or that must return a specific response format.
Tool Calling
- Define custom tools with JSON schema definitions (see the example after this list)
- Test prompts that rely on tools in real time by mocking tool responses
- Save tool definitions to your project
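A tool definition in the playground is a JSON schema, similar to the OpenAI function-calling format. A sketch of what such a definition could look like, written as a Python dict; the tool name and parameters are illustrative:

```python
# Illustrative tool definition in the OpenAI function-calling style;
# in the playground you can define a tool like this and mock its response.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}
```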
Structured Output
- Enforce response formats using JSON schemas (see the example after this list)
- Save schemas to your project
- Jump into the playground directly from an OpenAI generation that uses structured output
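A structured output schema constrains the shape of the model's response. A sketch in the OpenAI `response_format` style, written as a Python dict; the schema itself is illustrative:

```python
# Illustrative JSON schema for structured output, in the OpenAI response_format style.
movie_review_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "movie_review",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "rating": {"type": "integer", "minimum": 1, "maximum": 10},
                "summary": {"type": "string"},
            },
            "required": ["title", "rating", "summary"],
            "additionalProperties": False,
        },
    },
}
```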
Add prompt variables
You can add prompt variables in the playground to simulate different inputs to your prompt.
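Variables in Langfuse prompts are written with double curly braces, e.g. `{{movie}}`, and the playground lets you fill them in before running the prompt. The same substitution can be done in code with the SDK's `compile` method; a sketch, assuming the illustrative `movie-critic` prompt from above:

```python
from langfuse import Langfuse

langfuse = Langfuse()

prompt = langfuse.get_prompt("movie-critic")  # illustrative prompt from above

# Substitute the {{movie}} variable, just as you would fill it in the playground
messages = prompt.compile(movie="Dune: Part Two")
print(messages)
```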
Use your favorite model
You can use your favorite model by adding the API key for the provider of the model you want to use in the Langfuse project settings. You can learn how to set up an LLM connection here.