
Corrected Outputs

Corrections allow you to capture improved versions of LLM outputs directly in trace and observation views. Domain experts can document what the model should have generated, creating a foundation for fine-tuning datasets and continuous improvement.

Corrected output with diff view

Why Use Corrections?

  • Domain expert feedback: Subject matter experts provide what the model should have output based on their expertise
  • Fine-tuning datasets: Export corrected outputs alongside original inputs to create high-quality training data from production traces
  • Quality benchmarking: Compare actual and expected outputs across your production traces to surface systematic issues
  • Human-in-the-loop workflows: Capture corrections during review processes, especially useful in annotation queues
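The fine-tuning use case above boils down to a small transform: take traces that carry a corrected output and turn them into (prompt, completion) training pairs, preferring the expert's correction over the model's original output. A minimal sketch (the trace field names here, such as `corrected_output`, are illustrative assumptions, not a documented export schema):

```python
import json

def to_finetune_records(traces):
    """Convert traces that carry a corrected output into
    (prompt, completion) training pairs, preferring the expert's
    correction over the model's original output."""
    records = []
    for t in traces:
        corrected = t.get("corrected_output")
        if not corrected:
            continue  # skip traces no expert has corrected yet
        records.append({"prompt": t["input"], "completion": corrected})
    return records

def to_jsonl(records):
    """Serialize records as JSON Lines, one training example per line."""
    return "\n".join(json.dumps(r) for r in records)

# Hypothetical trace shapes, for illustration only:
traces = [
    {"input": "Summarize the ticket", "output": "bad summary",
     "corrected_output": "Customer reports login failures since Tuesday."},
    {"input": "Classify sentiment", "output": "neutral"},  # no correction
]
print(to_jsonl(to_finetune_records(traces)))
```

Only corrected traces make it into the dataset, so the output quality is bounded by your experts' review coverage rather than by the model's raw outputs.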

How It Works

Add corrected outputs to any trace or observation through the UI or API. Corrections appear alongside the original output with a diff view showing what changed. Each trace or observation can have one corrected output.
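For the API path, a correction request is a small authenticated POST against your Langfuse host. The sketch below is an assumption-heavy illustration: the endpoint path `/api/public/corrections` and the payload field names (`traceId`, `observationId`, `correctedOutput`) are hypothetical placeholders, not the documented schema; only the basic-auth pattern with the project key pair reflects how the public API is generally authenticated.

```python
import base64
import json
import urllib.request

LANGFUSE_HOST = "https://cloud.langfuse.com"  # or your self-hosted base URL

def build_correction_payload(trace_id, corrected_output, observation_id=None):
    """Assemble a correction request body.
    Field names here are illustrative assumptions, not a documented schema."""
    payload = {"traceId": trace_id, "correctedOutput": corrected_output}
    if observation_id is not None:
        payload["observationId"] = observation_id
    return payload

def post_correction(public_key, secret_key, payload,
                    endpoint="/api/public/corrections"):  # hypothetical path
    """POST the correction using HTTP basic auth with the project key pair."""
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    req = urllib.request.Request(
        LANGFUSE_HOST + endpoint,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    body = build_correction_payload(
        trace_id="trace-123",
        corrected_output={"answer": "The refund window is 30 days."},
    )
    # post_correction("pk-lf-...", "sk-lf-...", body)  # needs real keys
    print(body)
```

Because each trace or observation holds at most one corrected output, re-posting for the same target would overwrite rather than accumulate corrections.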

Adding Corrections

Via the UI

Navigate to any trace or observation detail page:

  1. Find the “Corrected Output” field below the original output
  2. Click to add or edit the correction
  3. Enter the improved version of the output
  4. Toggle between JSON validation mode and plain text mode to match your data format
  5. View the diff to compare original vs corrected output

Adding a correction in the UI

The editor auto-saves as you type and provides real-time validation feedback in JSON mode.

Fetching Corrections

Corrections are stored as scores and can be fetched programmatically to build datasets or analyze model performance.
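Until dedicated SDK support lands, one plausible approach is to page through scores from the public API and pick out the correction entries. This is a sketch under stated assumptions: the score name `"correction"` and the `stringValue` field are guesses at how corrections surface in the scores payload, and the `/api/public/scores` query shape should be checked against the API reference for your version.

```python
import base64
import json
import urllib.request

def extract_corrections(scores, correction_name="correction"):
    """Map traceId -> corrected output from a list of score records.
    The score name "correction" and the "stringValue" field are
    assumptions about how corrections appear in the payload."""
    return {
        s["traceId"]: s["stringValue"]
        for s in scores
        if s.get("name") == correction_name and s.get("stringValue") is not None
    }

def fetch_scores(host, public_key, secret_key, page=1, limit=50):
    """Fetch one page of scores from the public API (query shape assumed)."""
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    req = urllib.request.Request(
        f"{host}/api/public/scores?page={page}&limit={limit}",
        headers={"Authorization": f"Basic {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["data"]

# Offline illustration with a mocked scores payload:
scores = [
    {"traceId": "t1", "name": "correction",
     "stringValue": "Corrected answer text"},
    {"traceId": "t2", "name": "helpfulness", "value": 0.8},
]
print(extract_corrections(scores))  # → {'t1': 'Corrected answer text'}
```

Joining the resulting map back to trace inputs gives you the (input, corrected output) pairs needed for dataset export or benchmarking.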

Coming soon: Fetch corrections via the SDK.
