LLM Playground
Test and iterate on your prompts directly in the Langfuse Prompt Playground. Tweak the prompt and model parameters to see how different models respond to your changes. This lets you quickly iterate on your prompts and optimize them for the best results in your LLM app, without switching between tools or writing any code.
Core features
Open your prompt in the playground
You can open a prompt you created with Langfuse Prompt Management in the playground.
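For example, a prompt created via the SDK can later be opened and iterated on in the playground. A minimal sketch with the Python SDK, assuming credentials are set via the standard LANGFUSE_* environment variables; the prompt name and content are hypothetical:

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and LANGFUSE_HOST
# from the environment.
langfuse = Langfuse()

# Create a text prompt; {{movie}} is a prompt variable that can be
# filled in later in the playground.
langfuse.create_prompt(
    name="movie-critic",  # hypothetical prompt name
    prompt="As a film critic, rate the movie {{movie}} on a scale of 1-10.",
    labels=["production"],
)
```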
Open a generation in the playground
You can open a generation from Langfuse Observability in the playground by clicking the Open in Playground button on the generation details page.
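To have a generation to open, you first need to log one. Below is a minimal sketch using the Python SDK's low-level tracing API (v2-style; newer SDK versions may expose a different interface); the trace and generation names are hypothetical:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* env vars

# Log a trace with a nested generation observation. Once ingested,
# the generation details page offers the Open in Playground button.
trace = langfuse.trace(name="chat-request")
trace.generation(
    name="chat-completion",
    model="gpt-4o",
    input=[{"role": "user", "content": "Write a haiku about autumn."}],
    output="Crisp leaves drift and fall / ...",
)
langfuse.flush()  # ensure events are sent before the script exits
```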
Tool calling and structured outputs
The Langfuse Playground supports tool calling and structured output schemas, enabling you to define, test, and validate LLM executions that rely on tool calls and enforce specific response formats.
Tool Calling
- Define custom tools with JSON schema definitions (see the sketch after this list)
- Test prompts that rely on tools in real time by mocking tool responses
- Save tool definitions to your project
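As an illustration, a tool definition in the playground follows the common OpenAI-style function-calling JSON schema; the tool name and parameters below are hypothetical:

```python
# Hypothetical tool definition in OpenAI-style function-calling format.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}
```

When the model decides to call the tool, the playground asks you for a mocked tool response, so the conversation can continue without executing any real code.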
Structured Output
- Enforce response formats using JSON schemas (see the sketch after this list)
- Save schemas to your project
- Jump into the playground directly from an OpenAI generation that uses structured output
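Similarly, a structured output schema constrains the model's response to a fixed shape. A sketch of an OpenAI-style response format with a hypothetical schema:

```python
# Hypothetical JSON schema enforcing a structured movie-rating response.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "movie_rating",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "rating": {"type": "integer", "minimum": 1, "maximum": 10},
                "reasoning": {"type": "string"},
            },
            "required": ["title", "rating", "reasoning"],
            "additionalProperties": False,
        },
    },
}
```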
Add prompt variables
You can add prompt variables in the playground to simulate different inputs to your prompt.
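Variables are written with double curly braces in the prompt text. A minimal sketch of fetching a prompt with the Python SDK and filling its variables, mirroring what the playground does when you set variable values in the UI; the prompt name and variable are hypothetical:

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Fetch the prompt and fill in its {{movie}} variable.
prompt = langfuse.get_prompt("movie-critic")
compiled = prompt.compile(movie="Dune: Part Two")
print(compiled)
# -> "As a film critic, rate the movie Dune: Part Two on a scale of 1-10."
```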
Select a model
You can use your preferred model by adding its provider's API key in the Langfuse project settings.