Cookbook: LangServe Integration

This is a Jupyter notebook.

LangServe (Python)

LangServe helps developers deploy LangChain runnables and chains as a REST API.

This library is integrated with FastAPI and uses pydantic for data validation.

In addition, it provides a client that can be used to call into runnables deployed on a server. A JavaScript client is available in LangChain.js.

This cookbook demonstrates how to trace applications deployed via LangServe with Langfuse (using the Langfuse LangChain integration). We'll run both the server and the client in this notebook.


Install dependencies and configure environment

!pip install fastapi sse_starlette httpx langserve langfuse langchain-openai langchain
import os
# Get keys for your project from the project settings page
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
os.environ["LANGFUSE_HOST"] = "" # 🇪🇺 EU region
# os.environ["LANGFUSE_HOST"] = "" # 🇺🇸 US region
# Your openai key
os.environ["OPENAI_API_KEY"] = ""

Simple LLM Call Example

Initialize the Langfuse callback handler and configure the LLM to use it as a callback. Then add the runnable to the FastAPI app via LangServe's add_routes().

from langchain_openai import ChatOpenAI
from langchain_core.runnables.config import RunnableConfig
from langfuse import Langfuse
from langfuse.callback import CallbackHandler
from fastapi import FastAPI
from langserve import add_routes
langfuse_handler = CallbackHandler()

# Tests the SDK connection with the server
langfuse_handler.auth_check()

llm = ChatOpenAI()
config = RunnableConfig(callbacks=[langfuse_handler])
llm_with_langfuse = llm.with_config(config)

# Setup server
app = FastAPI()

# Add Langserve route
add_routes(
    app,
    llm_with_langfuse,
    path="/test-simple-llm-call",
)

Note: We use FastAPI's TestClient in this example so that we can run the server inside the notebook.

from fastapi.testclient import TestClient
# Initialize TestClient
client = TestClient(app)
# Test simple route
response = client.post("/test-simple-llm-call/invoke", json={"input": "Tell me a joke?"})

Example trace: Trace of LangServe Simple LLM Call
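Under the hood, LangChain notifies every registered callback handler at the start and end of each run, and Langfuse's CallbackHandler turns those events into trace spans. A minimal stdlib-only sketch of that observer pattern (the names `MiniHandler` and `run_llm` are invented for illustration and are not the real LangChain or Langfuse API):

```python
class MiniHandler:
    """Collects start/end events, roughly like a tracing callback handler."""
    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt):
        self.events.append(("start", prompt))

    def on_llm_end(self, output):
        self.events.append(("end", output))

def run_llm(prompt, callbacks):
    # Notify handlers before the (faked) model call ...
    for cb in callbacks:
        cb.on_llm_start(prompt)
    output = f"echo: {prompt}"  # stand-in for the real model response
    # ... and after it, so handlers can record latency, output, etc.
    for cb in callbacks:
        cb.on_llm_end(output)
    return output

handler = MiniHandler()
run_llm("Tell me a joke?", callbacks=[handler])
print(handler.events)
# → [('start', 'Tell me a joke?'), ('end', 'echo: Tell me a joke?')]
```

Because the handler is passed via the runnable's config, every request served by the route above is traced without changing the chain itself.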

LCEL example

from langchain.prompts import ChatPromptTemplate
from langchain.schema import StrOutputParser
from langserve import add_routes
# Create Chain
prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
chain = prompt | llm | StrOutputParser()
# Add new route
add_routes(
    app,
    chain,
    path="/test-chain",
)

# Test chain route
response = client.post("/test-chain/invoke", json={"input": {"topic": "Berlin"}})

Example trace: Trace of LangServe LCEL Example
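The `|` in `prompt | llm | StrOutputParser()` builds a sequence by overloading Python's `__or__`. A rough stdlib-only sketch of that composition idea (a simplification with made-up names, not LangChain's actual Runnable implementation):

```python
class Step:
    """Minimal composable step; `a | b` chains their invoke calls."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Return a new Step that runs self first, then feeds its output to other.
        return Step(lambda x: other.invoke(self.invoke(x)))

prompt = Step(lambda d: f"Tell me a joke about {d['topic']}")
llm = Step(lambda p: p.upper())      # stand-in for a model call
parser = Step(lambda s: s.strip())   # stand-in for StrOutputParser

chain = prompt | llm | parser
print(chain.invoke({"topic": "Berlin"}))
# → TELL ME A JOKE ABOUT BERLIN
```

This is why the whole chain can be mounted with a single add_routes call: the composed object exposes the same invoke interface as each of its parts.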

Agent Example

from langchain_core.tools import tool
from langchain_core.utils.function_calling import convert_to_openai_tool
from langchain.agents.format_scratchpad.openai_tools import (
    format_to_openai_tool_messages,
)
from langchain.agents import AgentExecutor
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langserve.pydantic_v1 import BaseModel
from langchain_core.prompts import MessagesPlaceholder

class Input(BaseModel):
    input: str

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

@tool
def word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [word_length]
llm_with_tools = llm.bind(tools=[convert_to_openai_tool(tool) for tool in tools])

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_tool_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIToolsAgentOutputParser()
)

agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_config = RunnableConfig({"run_name": "agent"}, callbacks=[langfuse_handler])

# Add agent route
add_routes(
    app,
    agent_executor.with_config(agent_config),
    path="/test-agent",
    input_type=Input,
)

# Test agent route
response = client.post("/test-agent/invoke", json={"input": {"input": "How long is Leonardo DiCaprio's last name?"}})

Example trace: Trace of LangServe Agent Example
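Conceptually, the AgentExecutor runs a loop: the model either emits a tool call, which the executor runs and feeds back into the scratchpad, or a final answer, which ends the loop. A toy stdlib-only version of that loop (the `fake_model` and its canned decisions are invented for illustration; the real protocol uses OpenAI tool-call messages):

```python
def word_length(word):
    return len(word)

TOOLS = {"word_length": word_length}

def fake_model(question, scratchpad):
    """Stand-in for the LLM: first requests a tool, then answers."""
    if not scratchpad:
        # Pretend the model decides it needs the tool for "DiCaprio".
        return {"tool": "word_length", "args": {"word": "DiCaprio"}}
    return {"final": f"The last name has {scratchpad[-1]} letters."}

def run_agent(question, max_steps=5):
    scratchpad = []  # tool results fed back to the model each turn
    for _ in range(max_steps):
        action = fake_model(question, scratchpad)
        if "final" in action:
            return action["final"]
        # Execute the requested tool and record its result.
        result = TOOLS[action["tool"]](**action["args"])
        scratchpad.append(result)
    raise RuntimeError("agent did not finish within max_steps")

print(run_agent("How long is Leonardo DiCaprio's last name?"))
# → The last name has 8 letters.
```

In the real setup above, each iteration of this loop (model call, tool call, final answer) shows up as a nested observation in the Langfuse trace.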
