# Link Prompts to Traces
Linking prompts to traces lets you track metrics and evaluations per prompt version, which is the foundation for improving prompt quality over time.

After prompts and traces are linked, the generation span in Langfuse highlights the prompt that was used to generate the response. To access the metrics, navigate to your prompt and click on the Metrics tab.
## How to Link Prompts to Traces
### Decorators

With the `@observe` decorator, fetch the prompt inside the decorated function and attach it to the current generation:
```python
from langfuse import observe, get_client

langfuse = get_client()

@observe(as_type="generation")
def nested_generation():
    prompt = langfuse.get_prompt("movie-critic")

    # Attach the prompt to the current generation to create the link
    langfuse.update_current_generation(
        prompt=prompt,
    )

@observe()
def main():
    nested_generation()

main()
```
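In practice, the decorated function usually also makes the model call. Below is a minimal sketch of that fuller flow, assuming the OpenAI Python SDK and a `{{movie}}` variable in the `movie-critic` prompt (both are illustrative assumptions, not part of the snippet above):

```python
from langfuse import observe, get_client
from openai import OpenAI  # assumed provider, for illustration only

langfuse = get_client()
client = OpenAI()

@observe(as_type="generation")
def movie_review(movie: str) -> str:
    prompt = langfuse.get_prompt("movie-critic")
    compiled = prompt.compile(movie=movie)  # assumes a {{movie}} variable

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": compiled}],
    )
    answer = response.choices[0].message.content

    # Record input/output and attach the prompt to create the link
    langfuse.update_current_generation(
        input=compiled,
        output=answer,
        prompt=prompt,
    )
    return answer
```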
### Context Managers

Alternatively, pass the prompt when starting a generation span as a context manager:
```python
from langfuse import get_client

langfuse = get_client()

prompt = langfuse.get_prompt("movie-critic")

with langfuse.start_as_current_generation(
    name="movie-generation",
    model="gpt-4o",
    prompt=prompt
) as generation:
    # Your LLM call here

    generation.update(output="LLM response")
```
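If you call the model through Langfuse's OpenAI drop-in wrapper instead, the link can be created by passing the prompt to the completion call. A sketch, assuming that integration and a `{{movie}}` prompt variable:

```python
from langfuse import get_client
from langfuse.openai import openai  # Langfuse drop-in replacement for the OpenAI SDK

langfuse = get_client()
prompt = langfuse.get_prompt("movie-critic")

completion = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt.compile(movie="Dune 2")}],
    langfuse_prompt=prompt,  # links the resulting generation to this prompt version
)
```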
If a fallback prompt is used, no link will be created.
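For context, a fallback is typically supplied via the `fallback` argument of `get_prompt`, so your application keeps working even if the prompt cannot be fetched. A minimal sketch (the fallback text is illustrative):

```python
from langfuse import get_client

langfuse = get_client()

# If fetching the prompt fails, the fallback string is used instead.
# Generations made with a fallback are not linked to any prompt version.
prompt = langfuse.get_prompt(
    "movie-critic",
    fallback="Do you like the movie {{movie}}?",  # illustrative fallback text
)

if prompt.is_fallback:
    print("Using fallback prompt; no prompt link will be created.")
```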
## Metrics Reference

The Metrics tab reports the following per prompt version:
- Median generation latency
- Median generation input tokens
- Median generation output tokens
- Median generation costs
- Generation count
- Median score value
- First and last generation timestamp