LLM-as-a-Judge

LLM-as-a-Judge (also known as Model-based Evaluations) is an evaluation method that uses an LLM as an evaluator to score the output of an application. The judge LLM is given a trace or a dataset entry and asked to score the output and reason about its judgment. The resulting scores include the chain-of-thought reasoning as a comment.
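
For intuition, the sketch below shows the core pattern: a judge model receives the application's input and output plus a scoring rubric and returns a score with its reasoning as JSON. The `openai` client, model name, and rubric wording are illustrative assumptions for this sketch, not the prompt or implementation Langfuse uses internally.

```python
# Minimal LLM-as-a-Judge sketch (illustrative; not Langfuse's internal implementation).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

def judge_helpfulness(app_input: str, app_output: str) -> dict:
    """Ask a judge model to score an output and explain its reasoning."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any model that can reliably return JSON
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "You are an evaluator. Score how helpful the assistant's answer is "
                    "on a scale from 0 to 1 and explain your reasoning. "
                    'Respond as JSON: {"score": <float>, "reasoning": "<string>"}'
                ),
            },
            {
                "role": "user",
                "content": f"User input:\n{app_input}\n\nAssistant answer:\n{app_output}",
            },
        ],
    )
    return json.loads(response.choices[0].message.content)

# Example result: {'score': 0.9, 'reasoning': 'The answer directly addresses the question ...'}
print(judge_helpfulness("What is Langfuse?", "Langfuse is an open-source LLM engineering platform."))
```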

Why use LLM-as-a-judge?

  • Scalable: Judge thousands of outputs quickly versus human annotators.
  • Human‑like: Captures nuance (e.g. helpfulness, toxicity, relevance) better than simple metrics, especially when rubric‑guided.
  • Repeatable: With a fixed rubric, you can rerun the same prompts to get consistent scores.

Set up step-by-step

Create a new LLM-as-a-Judge evaluator

Navigate to the Evaluators page and click on the + Set up Evaluator button.

Evaluator create

Set the default model

Next, define the default model used for the evaluations. This step requires an LLM Connection to be set up. Please see LLM Connections for more information.

It’s crucial that the chosen default model supports structured output, as Langfuse relies on it to correctly interpret the evaluation results returned by the LLM judge.
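
To make "structured output" concrete, the snippet below sketches the kind of score-plus-reasoning schema a judge must be able to fill reliably. The field names are illustrative assumptions; this is not Langfuse's internal result schema.

```python
# Illustrative shape of a structured judge result (not Langfuse's internal schema).
from pydantic import BaseModel

class JudgeResult(BaseModel):
    score: float    # numeric score, e.g. 0.0 to 1.0
    reasoning: str  # the judge's explanation, surfaced as the score comment

# The default model must be able to reliably produce output matching such a schema
# (e.g. via JSON mode or native structured output) so the score can be parsed.
print(JudgeResult(score=0.8, reasoning="The answer is accurate and directly addresses the question."))
```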

Pick an Evaluator

Evaluator select

Next, select an evaluator. There are two main options: pick a Langfuse-managed evaluator from the catalog, or create a custom evaluator with your own evaluation prompt.

Langfuse ships a growing catalog of evaluators built and maintained by us and partners like Ragas. Each evaluator captures best-practice evaluation prompts for a specific quality dimension—e.g. Hallucination, Context-Relevance, Toxicity, Helpfulness.

  • Ready to use: no prompt writing required.
  • Continuously expanded: OSS and partner-maintained evaluators, as well as new evaluator types (e.g. regex-based), are added over time.
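
Conceptually, each catalog evaluator boils down to an evaluation prompt template with placeholders that are later filled from your data. The template below is a simplified, hypothetical hallucination-style prompt, not the exact text of a Langfuse-managed evaluator.

```python
# Simplified, hypothetical evaluator prompt template (not the actual catalog prompt).
# The {{input}}, {{output}}, and {{context}} placeholders are filled from your trace
# or dataset item in the variable-mapping step described below.
HALLUCINATION_PROMPT = """\
You are evaluating whether an answer is grounded in the provided context.

Context:
{{context}}

Question:
{{input}}

Answer:
{{output}}

Return a JSON object with a "score" between 0 (fully hallucinated) and 1 (fully grounded)
and a short "reasoning" that explains your judgment.
"""
```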

Choose which Data to Evaluate

With your evaluator and model selected, you now specify which data to run the evaluations on. You can choose between running on production tracing data or Dataset Experiments.

Evaluating live production traffic allows you to monitor the performance of your LLM application in real-time.

  • Scope: Choose whether to run on new traces as they arrive, on existing traces once (for backfilling), or both. When in doubt, we recommend running on new traces.
  • Filter: Narrow down the evaluation to a specific subset of data you’re interested in. You can filter by trace name, tags, userId, and many more. Combine filters freely.
  • Preview: Langfuse shows a sample of traces from the last 24 hours that match your current filters, allowing you to sanity-check your selection.
  • Sampling: To manage costs and evaluation throughput, you can configure the evaluator to run on a percentage (e.g., 5%) of the matched traces.

Production tracing data

Map Variables & preview Evaluation Prompt

The evaluation prompt contains variables such as {{input}} and {{output}}. You now need to tell Langfuse which properties of your trace or dataset item should populate these variables. For instance, you might map your system’s logged trace input to the prompt’s {{input}} variable, and the LLM response (i.e. the trace output) to the prompt’s {{output}} variable. This mapping is crucial for ensuring the evaluation is sensible and relevant.

  • Prompt Preview: As you configure the mapping, Langfuse shows a live preview of the evaluation prompt populated with actual data. This preview uses historical traces from the last 24 hours that matched your filters (from the previous step). You can navigate through several example traces to see how their respective data fills the prompt, helping you build confidence that the mapping is correct.
  • JSONPath: If the data is nested (e.g., within a JSON object), you can use a JSONPath expression (like $.choices[0].message.content) to precisely locate it; see the sketch below.
Filter preview
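
The sketch below mimics what the JSONPath mapping does, using the third-party `jsonpath_ng` package (`pip install jsonpath-ng`): it pulls a nested value out of an example trace output and places it into the prompt's {{output}} variable. The trace payload shown is a made-up, OpenAI-style response.

```python
# Sketch of the JSONPath-based variable mapping (illustrative; Langfuse performs this for you).
from jsonpath_ng import parse

trace_output = {  # hypothetical nested trace output logged by an OpenAI-style app
    "choices": [
        {"message": {"role": "assistant", "content": "Langfuse is an open-source LLM engineering platform."}}
    ]
}

expr = parse("$.choices[0].message.content")
output_text = expr.find(trace_output)[0].value

# The extracted value is what ends up in the evaluation prompt's {{output}} variable.
prompt = "Answer to evaluate:\n{{output}}".replace("{{output}}", output_text)
print(prompt)
```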

Trigger the evaluation

To see your evaluator in action, you need to either send traces (fastest) or trigger an experiment run (takes longer to set up) via the UI or SDK. Make sure the target data in the evaluator settings matches how you want to trigger the evaluation.
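
If you target production tracing data, the quickest way to see the evaluator run is to send a trace whose attributes match your filters. Below is a minimal sketch using the Python SDK's `@observe` decorator with v2-style imports (in SDK v3, `observe` is imported from `langfuse` directly); the trace name, user id, and tags are placeholder values.

```python
# Minimal sketch: send a trace that matches the evaluator's filters (v2-style Python SDK imports).
from langfuse.decorators import observe, langfuse_context

@observe()  # creates a trace; the function's arguments and return value become trace input/output
def answer_question(question: str) -> str:
    langfuse_context.update_current_trace(
        name="qa-request",          # match this in the evaluator's trace-name filter
        user_id="user-123",         # placeholder user id, usable in the userId filter
        tags=["production", "qa"],  # placeholder tags, usable in the tags filter
    )
    return "Langfuse is an open-source LLM engineering platform."  # replace with your LLM call

answer_question("What is Langfuse?")
langfuse_context.flush()  # make sure the trace is sent before the script exits
```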

✨ Done! You have successfully set up an evaluator which will run on your data.

Need custom logic? Use the SDK instead—see Custom Scores or an external pipeline example.

Debug LLM-as-a-Judge Executions

Every LLM-as-a-Judge evaluator execution creates a full trace, giving you complete visibility into the evaluation process. This allows you to debug prompt issues, inspect model responses, monitor token usage, and trace evaluation history.

You can view the LLM-as-a-Judge execution traces by filtering the tracing table for the environment langfuse-llm-as-a-judge:

Tracing table filtered to langfuse-llm-as-a-judge environment
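
You can also pull these execution traces programmatically. The sketch below assumes the public traces endpoint accepts an environment query parameter mirroring the UI filter; check the API reference for your version before relying on it.

```python
# Sketch: list LLM-as-a-Judge execution traces via the public API.
# Assumption: GET /api/public/traces accepts an `environment` query parameter,
# mirroring the environment filter available in the tracing table UI.
import os
import requests

resp = requests.get(
    "https://cloud.langfuse.com/api/public/traces",  # or your self-hosted base URL
    params={"environment": "langfuse-llm-as-a-judge", "limit": 10},
    auth=(os.environ["LANGFUSE_PUBLIC_KEY"], os.environ["LANGFUSE_SECRET_KEY"]),
)
resp.raise_for_status()
for trace in resp.json()["data"]:
    print(trace["id"], trace.get("name"))
```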

LLM-as-a-Judge Execution Status
  • Completed: Evaluation finished successfully.
  • Error: Evaluation failed (click execution trace ID for details).
  • Delayed: Evaluation hit the LLM provider’s rate limits and is being retried with exponential backoff.
  • Pending: Evaluation is queued and waiting to run.
