Self-Hosting Langfuse - Open Source LLM Observability
Langfuse Server, which includes the API and Web UI, is open-source and can be self-hosted using Docker.
For a detailed component and architecture diagram, refer to CONTRIBUTING.md.
Looking for a managed solution? Consider Langfuse Cloud maintained by the Langfuse team.
Prerequisites: Postgres Database
Langfuse requires a persistent Postgres database (version 12 or higher) to store its state. You can use a managed service on AWS, Azure, or GCP, or host it yourself. Once the database is ready, keep the connection string handy.
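For reference, a Postgres connection string generally has the following shape; the user, password, host, and database name below are placeholders, not values prescribed by Langfuse:
DATABASE_URL=postgresql://<user>:<password>@<host>:5432/<database>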
Deploying the Application
Deploy the application container to your infrastructure. You can use managed services like AWS ECS, Azure Container Instances, or GCP Cloud Run, or host it yourself.
During the container startup, all database migrations will be applied automatically. This can be optionally disabled via environment variables.
docker pull langfuse/langfuse:2
# ENCRYPTION_KEY can be generated via: openssl rand -hex 32
docker run --name langfuse \
-e DATABASE_URL=postgresql://hello \
-e NEXTAUTH_URL=http://localhost:3000 \
-e NEXTAUTH_SECRET=mysecret \
-e SALT=mysalt \
-e ENCRYPTION_KEY=0000000000000000000000000000000000000000000000000000000000000000 \
-p 3000:3000 \
-a STDOUT \
langfuse/langfuse
We follow semantic versioning for Langfuse releases, i.e. breaking changes are only introduced in a new major version.
- We recommend automated updates within a major version to benefit from the latest features, bug fixes, and security patches (docker pull langfuse/langfuse:2).
- Subscribe to our mailing list to get notified about new releases and new major versions.
Recommended Instance Size
For production environments, we suggest using a configuration of 2 CPU cores and 3 GB of RAM for the Langfuse container. On AWS, this would equate to a t3.medium
instance. The container is stateless, allowing you to autoscale it based on actual resource usage.
Configuring Environment Variables
Langfuse can be configured using environment variables (.env.prod.example). Some are mandatory as defined in the table below:
Variable | Required / Default | Description |
---|---|---|
DATABASE_URL | Required | Connection string of your Postgres database. Instead of DATABASE_URL , you can also use DATABASE_HOST , DATABASE_USERNAME , DATABASE_PASSWORD and DATABASE_NAME . |
DIRECT_URL | DATABASE_URL | Connection string of your Postgres database used for database migrations. Use this if you want to use a different user for migrations or use connection pooling on DATABASE_URL . For large deployments, configure the database user with long timeouts as migrations might need a while to complete. |
SHADOW_DATABASE_URL | | If your database user lacks the CREATE DATABASE permission, you must create a shadow database and configure the SHADOW_DATABASE_URL. This is often the case if you use a Cloud database. Refer to the Prisma docs for detailed instructions. |
NEXTAUTH_URL | Required | URL of your deployment, e.g. https://yourdomain.com or http://localhost:3000 . Required for successful authentication via OAuth. |
NEXTAUTH_SECRET | Required | Used to validate login session cookies, generate secret with at least 256 bits of entropy using openssl rand -base64 32 . |
SALT | Required | Used to salt hashed API keys, generate secret with at least 256 bits of entropy using openssl rand -base64 32 . |
ENCRYPTION_KEY | Required | Used to encrypt sensitive data. Must be 256 bits, 64 string characters in hex format, generate via: openssl rand -hex 32 . |
LANGFUSE_CSP_ENFORCE_HTTPS | false | Set to true to set CSP headers to only allow HTTPS connections. |
PORT | 3000 | Port the server listens on. |
HOSTNAME | localhost | In some environments it needs to be set to 0.0.0.0 to be accessible from outside the container (e.g. Google Cloud Run). |
LANGFUSE_DEFAULT_ORG_ID | | Configure optional default organization for new users. When users create an account they will be automatically added to this organization. |
LANGFUSE_DEFAULT_ORG_ROLE | VIEWER | Role of the user in the default organization (if set). Possible values are OWNER , ADMIN , MEMBER , VIEWER . See roles for details. |
LANGFUSE_DEFAULT_PROJECT_ID | | Configure optional default project for new users. When users create an account they will be automatically added to this project. |
LANGFUSE_DEFAULT_PROJECT_ROLE | VIEWER | Role of the user in the default project (if set). Possible values are OWNER , ADMIN , MEMBER , VIEWER . See roles for details. |
SMTP_CONNECTION_URL | | Configure optional SMTP server connection for transactional email. Connection URL is passed to Nodemailer (docs). |
EMAIL_FROM_ADDRESS | | Configure from address for transactional email. Required if SMTP_CONNECTION_URL is set. |
S3_ENDPOINT S3_ACCESS_KEY_ID S3_SECRET_ACCESS_KEY S3_BUCKET_NAME S3_REGION | | Optional S3 configuration for enabling large exports from the UI. S3_BUCKET_NAME is required to enable exports. The other variables are optional and will use the default provider credential chain if not specified. |
LANGFUSE_S3_MEDIA_UPLOAD_ENABLED LANGFUSE_S3_MEDIA_UPLOAD_BUCKET LANGFUSE_S3_MEDIA_UPLOAD_REGION LANGFUSE_S3_MEDIA_UPLOAD_ACCESS_KEY_ID LANGFUSE_S3_MEDIA_UPLOAD_SECRET_ACCESS_KEY LANGFUSE_S3_MEDIA_UPLOAD_ENDPOINT LANGFUSE_S3_MEDIA_UPLOAD_FORCE_PATH_STYLE LANGFUSE_S3_MEDIA_UPLOAD_PREFIX | false | S3 configuration for enabling multi-modal attachments. All variables are optional and will use the default values shown if not specified. Set LANGFUSE_S3_MEDIA_UPLOAD_ENABLED=true to enable multi-modal attachments. Configured storage bucket must have a publicly resolvable hostname to support direct uploads via our SDKs and media asset fetching directly from the browser. |
DB_EXPORT_PAGE_SIZE | 1000 | Optional page size for streaming exports to S3 to avoid memory issues. The page size can be adjusted if needed to optimize performance. |
LANGFUSE_AUTO_POSTGRES_MIGRATION_DISABLED | false | Set to true to disable automatic database migrations on docker startup. |
LANGFUSE_LOG_LEVEL | info | Set the log level for the application. Possible values are trace , debug , info , warn , error , fatal . |
LANGFUSE_LOG_FORMAT | text | Set the log format for the application. Possible values are text , json . |
NEXT_PUBLIC_BASE_PATH | | Set the base path for the application. This is useful if you want to deploy Langfuse on a subpath, especially when integrating Langfuse into existing infrastructure. Refer to the section below for details. |
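For example, the required secrets can be generated with the commands referenced in the table above; store the output in your environment or secret manager:
# generate secrets for the required environment variables
openssl rand -base64 32   # use as NEXTAUTH_SECRET
openssl rand -base64 32   # use as SALT
openssl rand -hex 32      # use as ENCRYPTION_KEY (256 bit, hex encoded)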
Authentication
Email/Password
Email/password authentication is enabled by default. Users can sign up and log in using their email and password.
To disable email/password authentication, set AUTH_DISABLE_USERNAME_PASSWORD=true. In this case, you need to set up SSO instead.
If you want to provision a default user for your Langfuse instance, you can use the LANGFUSE_INIT_* environment variables.
Password Reset
- If transactional emails are configured on your instance via the SMTP_CONNECTION_URL and EMAIL_FROM_ADDRESS environment variables, users can reset their password by using the “Forgot password” link on the login page.
- If transactional emails are not set up, passwords can be reset by following these steps:
  1. Update the email associated with your user account in the database, for example by adding a prefix.
  2. Sign up again with a new password.
  3. Reassign any organizations you were associated with via the organization_memberships table in the database.
  4. Finally, remove the old user account from the users table in the database.
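As an illustration of these manual steps, the following psql sketch assumes the table names mentioned above (users, organization_memberships); the column names (email, user_id, id) and the IDs are assumptions, so verify them against your database before running anything:
# 1) free up the email of the locked account by adding a prefix (assumed column: email)
psql "$DATABASE_URL" -c "UPDATE users SET email = 'old_' || email WHERE email = '<your-email>';"
# 2) sign up again via the UI, then re-assign memberships to the new account (assumed column: user_id)
psql "$DATABASE_URL" -c "UPDATE organization_memberships SET user_id = '<new-user-id>' WHERE user_id = '<old-user-id>';"
# 3) finally remove the old user account (assumed column: id)
psql "$DATABASE_URL" -c "DELETE FROM users WHERE id = '<old-user-id>';"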
SSO
To enable OAuth/SSO provider sign-in for Langfuse, add the following environment variables:
Provider | Variables | OAuth Redirect URL |
---|---|---|
Google | AUTH_GOOGLE_CLIENT_ID AUTH_GOOGLE_CLIENT_SECRET AUTH_GOOGLE_ALLOW_ACCOUNT_LINKING=true (optional)AUTH_GOOGLE_ALLOWED_DOMAINS=langfuse.com,google.com (optional, list of allowed domains based on hd OAuth claim) | /api/auth/callback/google |
GitHub | AUTH_GITHUB_CLIENT_ID AUTH_GITHUB_CLIENT_SECRET AUTH_GITHUB_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/github |
GitHub Enterprise | AUTH_GITHUB_ENTERPRISE_CLIENT_ID AUTH_GITHUB_ENTERPRISE_CLIENT_SECRET AUTH_GITHUB_ENTERPRISE_BASE_URL AUTH_GITHUB_ENTERPRISE_ALLOW_ACCOUNT_LINKING=false (optional) | /api/auth/callback/github-enterprise |
GitLab | AUTH_GITLAB_CLIENT_ID AUTH_GITLAB_CLIENT_SECRET AUTH_GITLAB_ISSUER (optional)AUTH_GITLAB_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/gitlab |
AzureAD/Entra ID | AUTH_AZURE_AD_CLIENT_ID AUTH_AZURE_AD_CLIENT_SECRET AUTH_AZURE_AD_TENANT_ID AUTH_AZURE_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/azure-ad |
Okta | AUTH_OKTA_CLIENT_ID AUTH_OKTA_CLIENT_SECRET AUTH_OKTA_ISSUER AUTH_OKTA_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/okta |
Auth0 | AUTH_AUTH0_CLIENT_ID AUTH_AUTH0_CLIENT_SECRET AUTH_AUTH0_ISSUER AUTH_AUTH0_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/auth0 |
AWS Cognito | AUTH_COGNITO_CLIENT_ID AUTH_COGNITO_CLIENT_SECRET AUTH_COGNITO_ISSUER AUTH_COGNITO_ALLOW_ACCOUNT_LINKING=true (optional) | /api/auth/callback/cognito |
Custom OAuth (source) | AUTH_CUSTOM_CLIENT_ID AUTH_CUSTOM_CLIENT_SECRET AUTH_CUSTOM_ISSUER AUTH_CUSTOM_NAME (any, used only in UI)AUTH_CUSTOM_ALLOW_ACCOUNT_LINKING=true (optional)AUTH_CUSTOM_SCOPE (optional, defaults to "openid email profile" ) | /api/auth/callback/custom |
Use *_ALLOW_ACCOUNT_LINKING to allow merging accounts with the same email address. This is useful when users sign in with different providers or email/password but have the same email address. You need to be careful with this setting as it can lead to security issues if the emails are not verified.
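For example, Google sign-in can be enabled by adding the variables from the table above to your deployment; the client ID and secret come from your OAuth app, and /api/auth/callback/google must be registered as the redirect URL:
AUTH_GOOGLE_CLIENT_ID=<your-client-id>
AUTH_GOOGLE_CLIENT_SECRET=<your-client-secret>
# optional: restrict sign-in to specific domains (hd OAuth claim)
AUTH_GOOGLE_ALLOWED_DOMAINS=langfuse.com,google.com
# optional: merge accounts that share the same verified email address
AUTH_GOOGLE_ALLOW_ACCOUNT_LINKING=true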
Need another provider? Langfuse uses Auth.js, which integrates with many providers. Add a feature request on GitHub if you want us to add support for a specific provider.
Additional configuration
Variable | Description |
---|---|
AUTH_DOMAINS_WITH_SSO_ENFORCEMENT | Comma-separated list of domains that are only allowed to sign in using SSO. Email/password sign in is disabled for these domains. E.g. domain1.com,domain2.com |
AUTH_DISABLE_SIGNUP | Set to true to disable sign up for new users. Only existing users can sign in. This affects all new users that try to sign up, also those who received an invite to a project and have no account yet. |
AUTH_SESSION_MAX_AGE | Set the maximum age of the session (JWT) in minutes. The default is 30 days (43200 ). The value must be greater than 5 minutes, as the front-end application refreshes its session every 5 minutes. |
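For instance, to lock down a public-facing instance you might combine these settings (illustrative values):
# only allow SSO sign-in for these domains
AUTH_DOMAINS_WITH_SSO_ENFORCEMENT=domain1.com,domain2.com
# disable sign-up of new users entirely
AUTH_DISABLE_SIGNUP=true
# keep the default 30-day session lifetime (in minutes)
AUTH_SESSION_MAX_AGE=43200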
Headless Initialization
By default, you need to create a user account, organization and project via the Langfuse UI before being able to use the API. You can find the API keys in the project settings within the UI.
If you want to automatically initialize these resources, you can optionally use the following LANGFUSE_INIT_*
environment variables. When these variables are set, Langfuse will automatically create the specified resources on startup if they don’t already exist. This allows for easy integration with infrastructure-as-code and automated deployment pipelines.
Environment Variable | Description | Required to Create Resource | Example |
---|---|---|---|
LANGFUSE_INIT_ORG_ID | Unique identifier for the organization | Yes | my-org |
LANGFUSE_INIT_ORG_NAME | Name of the organization | No | My Org |
LANGFUSE_INIT_PROJECT_ID | Unique identifier for the project | Yes | my-project |
LANGFUSE_INIT_PROJECT_NAME | Name of the project | No | My Project |
LANGFUSE_INIT_PROJECT_PUBLIC_KEY | Public API key for the project | Yes | lf_pk_1234567890 |
LANGFUSE_INIT_PROJECT_SECRET_KEY | Secret API key for the project | Yes | lf_sk_1234567890 |
LANGFUSE_INIT_USER_EMAIL | Email address of the initial user | Yes | [email protected] |
LANGFUSE_INIT_USER_NAME | Name of the initial user | No | John Doe |
LANGFUSE_INIT_USER_PASSWORD | Password for the initial user | Yes | password123 |
The different resources depend on each other in the following way. For example, you can initialize an organization and a user without also initializing a project and API keys, but you cannot initialize a project without also initializing an organization.
Organization
├── Project (part of organization)
│ └── API Keys (set for project)
└── User (owner of organization)
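Putting this together, a minimal headless initialization could look like the following environment block; the IDs and keys are the example values from the table above, and the email and password placeholders need to be replaced:
LANGFUSE_INIT_ORG_ID=my-org
LANGFUSE_INIT_PROJECT_ID=my-project
LANGFUSE_INIT_PROJECT_PUBLIC_KEY=lf_pk_1234567890
LANGFUSE_INIT_PROJECT_SECRET_KEY=lf_sk_1234567890
LANGFUSE_INIT_USER_EMAIL=<admin-email>
LANGFUSE_INIT_USER_PASSWORD=<admin-password>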
Troubleshooting:
- If you use LANGFUSE_INIT_* in Docker Compose, do not double-quote the values (GitHub issue).
- The resources depend on one another (see note above). For example, you must create an organization to initialize a project.
Configuring the Enterprise Edition
The Enterprise Edition (compare versions) of Langfuse includes additional optional configuration options that can be set via environment variables.
Variable | Description |
---|---|
LANGFUSE_ALLOWED_ORGANIZATION_CREATORS | Comma-separated list of allowlisted users that can create new organizations. By default, all users can create organizations. E.g. [email protected],[email protected] . |
LANGFUSE_UI_API_HOST | Customize the hostname that is referenced in the settings. Defaults to window.origin . |
LANGFUSE_UI_DOCUMENTATION_HREF | Customize the documentation link reference in the menu and settings. |
LANGFUSE_UI_SUPPORT_HREF | Customize the support link reference in the menu and settings. |
LANGFUSE_UI_FEEDBACK_HREF | Replace the default feedback widget with your own feedback link. |
LANGFUSE_UI_LOGO_DARK_MODE_HREF LANGFUSE_UI_LOGO_LIGHT_MODE_HREF | Co-brand the Langfuse interface with your own logo. Langfuse adapts to the logo width, with a maximum aspect ratio of 1:3. Narrower ratios (e.g., 2:3, 1:1) also work. The logo is fitted into a bounding box, so there are no specific pixel constraints. For reference, the example logo is 160px x 400px. |
LANGFUSE_UI_DEFAULT_MODEL_ADAPTER | Set the default model adapter for the LLM playground and evals. Options: OpenAI , Anthropic , Azure . Example: Anthropic |
LANGFUSE_UI_DEFAULT_BASE_URL_OPENAI | Set the default base URL for OpenAI API in the LLM playground and evals. Example: https://api.openai.com/v1 |
LANGFUSE_UI_DEFAULT_BASE_URL_ANTHROPIC | Set the default base URL for Anthropic API in the LLM playground and evals. Example: https://api.anthropic.com |
LANGFUSE_UI_DEFAULT_BASE_URL_AZURE_OPENAI | Set the default base URL for Azure OpenAI API in the LLM playground and evals. Example: https://{instanceName}.openai.azure.com/openai/deployments |
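For example, to default the playground and evals to Anthropic and point the documentation link at an internal wiki (the documentation URL is an illustrative placeholder):
LANGFUSE_UI_DEFAULT_MODEL_ADAPTER=Anthropic
LANGFUSE_UI_DEFAULT_BASE_URL_ANTHROPIC=https://api.anthropic.com
LANGFUSE_UI_DOCUMENTATION_HREF=https://wiki.example.com/langfuse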
Health and Readiness Check Endpoint
Langfuse includes a health check endpoint at /api/public/health and a readiness check endpoint at /api/public/ready.
The health check endpoint checks the API functionality and indicates if the application is alive.
The readiness check endpoint indicates if the application is ready to serve traffic.
Access the health and readiness check endpoints:
curl http://localhost:3000/api/public/health
curl http://localhost:3000/api/public/ready
The potential responses from the health check endpoint are:
- 200 OK: The API is functioning normally and a successful connection to the database was made.
- 503 Service Unavailable: The API is not functioning or it couldn’t establish a connection to the database.
The potential responses from the readiness check endpoint are:
- 200 OK: The application is ready to serve traffic.
- 500 Internal Server Error: The application received a SIGTERM or SIGINT and should not receive traffic.
Applications and monitoring services can call this endpoint periodically for health updates.
By default, the health check endpoint does not validate whether the database is reachable, as there are cases where the database is unavailable but the application still serves traffic. If you want to run database health checks, you can add ?failIfDatabaseUnavailable=true to the health check endpoint.
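For example, to make the probe fail when the database is unreachable (using the endpoint and parameter described above):
curl "http://localhost:3000/api/public/health?failIfDatabaseUnavailable=true"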
Encryption
Encryption in transit (HTTPS)
For encryption in transit, HTTPS is strongly recommended. Langfuse itself does not handle HTTPS directly. Instead, HTTPS is typically managed at the infrastructure level. There are two main approaches to handle HTTPS for Langfuse:
-
Load Balancer Termination: In this approach, HTTPS is terminated at the load balancer level. The load balancer handles the SSL/TLS certificates and encryption, then forwards the decrypted traffic to the Langfuse container over HTTP. This is a common and straightforward method, especially in cloud environments.
- Pros: Simplifies certificate management as it is usually a fully managed service (e.g. AWS ALB), offloads encryption overhead from application servers.
- Cons: Traffic between load balancer and Langfuse container is unencrypted (though typically within a secure network).
-
Service Mesh Sidecar: This method involves using a service mesh like Istio or Linkerd. A sidecar proxy is deployed alongside each Langfuse container, handling all network traffic including HTTPS.
- Pros: Provides end-to-end encryption (mutual TLS), offers advanced traffic management and observability.
- Cons: Adds complexity to the deployment, requires understanding of service mesh concepts.
Once HTTPS is enabled, you can set LANGFUSE_CSP_ENFORCE_HTTPS=true to ensure browsers only allow HTTPS connections when using Langfuse.
Encryption at rest (database)
All Langfuse data is stored in your Postgres database. Database-level encryption is recommended for a secure production deployment and available across cloud providers.
The Langfuse team has implemented this for Langfuse Cloud and it is fully ISO27001, SOC2 Type 2 and GDPR compliant (security page).
Additional application-level encryption
In addition to in-transit and at-rest encryption, sensitive data is also encrypted or hashed at the application level.
Data | Encryption |
---|---|
API keys | Hashed using SALT |
Langfuse Console JWTs | Encrypted via NEXTAUTH_SECRET |
LLM API credentials stored in Langfuse | Encrypted using ENCRYPTION_KEY |
Integration credentials (e.g. PostHog) | Encrypted using ENCRYPTION_KEY |
Input/Outputs of LLM Calls, Traces, Spans | Work in progress, reach out to [email protected] if you are interested in this |
Build Langfuse from source
While we recommend using the prebuilt docker image, you can also build the image yourself from source.
The repo includes multiple Dockerfile files. You only need to build the web Dockerfile as shown below.
# clone repo
git clone https://github.com/langfuse/langfuse.git
cd langfuse
# checkout v2 branch
# main branch includes unreleased changes that might be unstable
git checkout v2
# build image
docker build -t langfuse/langfuse -f ./web/Dockerfile .
Custom Base Path
If you want to deploy Langfuse behind a custom base path (e.g. https://yourdomain.com/langfuse), you can set the NEXT_PUBLIC_BASE_PATH environment variable. This is useful if you want to deploy Langfuse on a subpath, especially when integrating Langfuse into existing infrastructure.
As this base path is inlined in static assets, you cannot use the prebuilt docker image. You need to build the image from source with the NEXT_PUBLIC_BASE_PATH environment variable set at build time.
When using a custom base path, NEXTAUTH_URL must be set to the full URL including the base path and /api/auth. For example, if you are deploying Langfuse at https://yourdomain.com/langfuse-base-path, you need to set:
NEXT_PUBLIC_BASE_PATH="/langfuse-base-path"
NEXTAUTH_URL="https://yourdomain.com/langfuse-base-path/api/auth"
Build the image with NEXT_PUBLIC_BASE_PATH as a build argument:
# clone repo
git clone https://github.com/langfuse/langfuse.git
cd langfuse
# checkout v2 branch
# main branch includes unreleased changes that might be unstable
git checkout v2
# build image with NEXT_PUBLIC_BASE_PATH
docker build -t langfuse/langfuse --build-arg NEXT_PUBLIC_BASE_PATH=/langfuse-base-path -f ./web/Dockerfile .
Once your Langfuse instance is running, you can access both the API and console through your configured custom base path. When connecting via SDKs, make sure to include the custom base path in the hostname.
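For example, when configuring the SDKs via environment variables, the host should include the base path; the key values are placeholders and the variable names follow the Langfuse SDK conventions (treat them as an assumption and check the SDK docs for your language):
LANGFUSE_HOST="https://yourdomain.com/langfuse-base-path"
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_SECRET_KEY="sk-lf-..."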
Troubleshooting
If you encounter issues, ensure the following:
- NEXTAUTH_URL exactly matches the URL you’re accessing Langfuse with. Pay attention to the protocol (http vs https) and the port (e.g., 3000 if you do not expose Langfuse on port 80).
- Set HOSTNAME to 0.0.0.0 if you cannot access Langfuse.
- Encode special characters in DATABASE_URL, see this StackOverflow answer for details.
- If you use the SDKs to connect with Langfuse, use auth_check() to verify that the connection works.
- Make sure you are at least on Postgres 12.
- When using Docker Compose / Kubernetes, your application needs to connect to the Langfuse container at the docker internal network address that you specified, e.g. http://langfuse:3000 / http://langfuse.docker.internal:3000. Learn more: docker compose networking documentation, kubernetes networking documentation.
- SSO
  - Ensure that the OAuth provider is configured correctly. The return path needs to match the NEXTAUTH_URL, and the OAuth client needs to be configured with the correct callback URL.
  - Langfuse uses NextAuth.js. Please refer to the NextAuth.js documentation for more information.
  - If you encounter issues with your custom SSO setup, please raise an issue and submit a PR to improve Langfuse. Alternatively, the Langfuse team provides support for setting up custom SSO configurations under a commercial agreement. For more information, please reach out to [email protected].
Updating the Application
We recommend enabling automated updates within the current major version to benefit from the latest features, bug fixes, and security patches.
To update the application:
- Stop the container.
- Pull the latest container.
- Restart the application.
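For the plain Docker deployment shown above, this is a sketch of that flow (container name and tag as in the earlier example):
# pull the latest release within the pinned major version
docker pull langfuse/langfuse:2
# stop and remove the running container
docker stop langfuse
docker rm langfuse
# then start it again with the same docker run command and environment variables as in the deployment section above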
During container startup, any necessary database migrations will be applied automatically if the database schema has changed. This can be optionally disabled via environment variables.
Langfuse is released through tagged semver releases. Check GitHub releases for information about the changes in each version.
(Optional) Apply newly supported models to existing data in Langfuse
This is only necessary if you want new model prices to be applied to existing traces/generations. Most users will not need to do this, as applying prices only to newly ingested traces is sufficient when updating regularly.
Langfuse includes a list of supported models for usage and cost tracking. If a Langfuse update includes support for new models, these will only be applied to newly ingested traces/generations.
Optionally, you can apply the new model definitions to existing data using the following steps. During the migration, the database remains available (non-blocking).
1. Clone the repository and create an .env file:
# Clone the Langfuse repository
git clone https://github.com/langfuse/langfuse.git
# Navigate to the Langfuse directory
cd langfuse
# Install all dependencies
pnpm i
# Create an .env file
cp .env.dev.example .env
2. Edit the .env to connect to your database from your machine:
# .env
NODE_ENV=production
# Replace with your database connection string
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres
3. Execute the migration. Depending on the size of your database, this might take a while.
pnpm run models:migrate
4. Clean up: remove the .env file to avoid connecting to the production database from your local machine.
Kubernetes deployments
Kubernetes is a popular choice for deploying Langfuse when teams maintain the rest of their infrastructure using Kubernetes. You can find community-maintained templates and Helm Charts in the langfuse/langfuse-k8s repository.
If you encounter any bugs or have suggestions for improvements, please contribute to the repository by submitting issues or pull requests.
Platform-specific information
This section is work in progress and relies on community contributions. The Langfuse team/maintainers do not have the capacity to maintain or test this section. If you have successfully deployed Langfuse on a specific platform, consider contributing a guide either via a GitHub PR/Issue or by reaching out to the maintainers. Please also let us know if one of these guides does not work anymore or if you have a better solution.
Railway
Porter.run
If you use Porter to deploy your application, you can easily add a Langfuse instance to your cluster via the “Add-ons”. The add-on will automatically configure the necessary environment variables, inject your database credentials, and deploy and autoscale the Langfuse container. Learn more about this in our changelog.
AWS
We recommend deploying Langfuse on AWS using the Elastic Container Service (ECS) and Fargate for a scalable and low-maintenance container deployment. Note: you can use AWS Cognito for SSO.
Have a look at this configuration template: aws-samples/deploy-langfuse-on-ecs-with-fargate
Azure
Deploy Langfuse to Azure using the Azure Container Instances service for a flexible and low-maintenance container deployment. Note: you can use Azure AD for SSO.
You can deploy Langfuse to Azure via the Azure Developer CLI using this template: Azure-Samples/langfuse-on-azure.
Google Cloud Platform (Cloud Run & Cloud SQL)
The simplest way to deploy Langfuse on Google Cloud Platform is to use Cloud Run for the containerized application and Cloud SQL for the database.
Option 1: UI Deployment
Create Cloud SQL Instance:
- Open Google Cloud SQL.
- Click on Create Instance.
- Choose PostgreSQL and configure the instance according to your requirements.
- You’ll need the following details:
  - default > user: postgres
  - default > database schema: public
  - setup > password: <password>
  - connection > connection name: <google-cloud-project-id>:<region-id>:<sql-instance-id>
Optionally: Create OAuth Credentials for sign-in with Google
- Open API Credentials
- Click “Create Credentials” and then “OAuth Client ID”
- Choose “Web Application” and then give it an appropriate name
- Click Create
Create Secrets:
- Open Secret Manager
- For each secret needed (at least AUTH_GOOGLE_CLIENT_ID, AUTH_GOOGLE_CLIENT_SECRET, DATABASE_URL, DIRECT_URL, NEXTAUTH_SECRET, NEXTAUTH_URL, and SALT), click “Create Secret” and fill in the name and value.
Notes:
- DATABASE_URL is the connection string to the Cloud SQL instance: postgresql://<user-name>:<password>@localhost/<db-name>/?host=/cloudsql/<google-cloud-project-id>:<region-id>:<sql-instance-id>&sslmode=none&pgbouncer=true
- DIRECT_URL is for database migrations, without &pgbouncer=true; the value should look like this: postgresql://<user-name>:<password>@localhost/<db-name>/?host=/cloudsql/<google-cloud-project-id>:<region-id>:<sql-instance-id>&sslmode=none
- Set
NEXTAUTH_URL
tohttp://localhost:3000
. This is a placeholder, we’ll update it later.
Deploy on Cloud Run:
- Open Google Cloud Run.
- Click on Create Service.
- Enter the following container image URL: docker.io/langfuse/langfuse:2. We use tag 2 to pin the major version.
- Configure the service name and region according to your requirements.
- Select authentication as ‘Allow unauthenticated invocations’, as Langfuse will have its own built-in Authentication that you can use.
- Choose ‘CPU Allocation and Pricing’ as “CPU is only allocated during request processing” to scale down the instance to 0 when there are no requests.
- Configure ingress control according to your needs. For most cases, ‘All’ should suffice.
- “Container(s), Volumes, Networking & Security”:
  - Specify container port as 3000.
  - On the “Variables & Secrets” tab, add the required environment variables (see table above): SALT, NEXTAUTH_URL, NEXTAUTH_SECRET, DATABASE_URL, etc.
-
Scroll all the way down to enable the Cloud SQL connections. Select the created Cloud SQL instance in the dropdown. Context: Your Cloud Run service won’t be assigned a static IP, so you can’t whitelist the ingress IP in Cloud SQL or any other hosted databases. Instead, we use the Google Cloud SQL Proxy.
-
Finally, you can finish deploying the application.
-
While the application is deployed for the first time, you can see how the database migrations are applied in the logs.
-
Once the application is up and running, you can find the Cloud Run service URL on top of the page. Now, choose “Edit and deploy new revision” to update the
NEXTAUTH_URL
environment variable to the Cloud Run service URL ending in.run.app
. -
Optionally, configure a custom domain for the Cloud Run service.
Troubleshooting: Cloud SQL Connection Issues
If you encounter an error like “Error 403: boss::NOT_AUTHORIZED: Not authorized to access resource” or “Possibly missing permission cloudsql.instances.connect” when deploying the Langfuse container, you may need to grant ‘Cloud SQL Client’ permissions to the relevant service accounts. Here’s how to resolve this:
- In the Google Cloud search box, search for and select “Service Accounts”.
- Find the service accounts with names ending in @appspot.gserviceaccount.com and [email protected].
- In the Google Cloud search box, search for and select “IAM & Admin”.
- Click “Grant Access”, then “Add Principals”.
- Enter the name of the first service account you found.
- Select the “Cloud SQL Client” role and save.
- Repeat steps 4-6 for the second service account.
After granting these permissions, try redeploying your Cloud Run service. This should resolve any authorization issues related to connecting to your Cloud SQL instance.
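Equivalently, the Cloud SQL Client role can be granted from the command line; the project ID and service-account email are the ones you identified in the steps above:
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:<service-account-email>" \
  --role="roles/cloudsql.client"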
Option 2: Cloud Build
Google Cloud Build is GCP’s continuous integration and continuous deployment (CI/CD) service that automates the building, testing, and deployment of your applications. To deploy Langfuse, you can specify your workflow in a cloudbuild.yaml file. Additionally, GCP’s Secret Manager can be used to securely handle sensitive information like DATABASE_URL and NEXTAUTH_SECRET. Below is an example of how to set up a Cloud Build configuration:
# Deployment configuration for Langfuse on Google Cloud Run
substitutions:
_SERVICE_NAME: langfuse
_REGION: europe-west1 # Change to your desired region
_PROJECT_ID: your-project-id # Change to your Google Cloud project ID
_SQL_INSTANCE_ID: my-cool-db # the name of the cloud sql database you create
tags: ["${_PROJECT_ID}", "${_SERVICE_NAME}"]
steps:
# Step to deploy the Docker image to Google Cloud Run
- name: "gcr.io/cloud-builders/gcloud"
id: deploy-cloud-run
entrypoint: bash
args:
- "-c"
- |
gcloud run deploy ${_SERVICE_NAME} --image docker.io/langfuse/langfuse:2 \
--region ${_REGION} \
--project ${_PROJECT_ID} \
--platform managed \
--port 3000 \
--allow-unauthenticated \
--memory 2Gi \
--cpu 1 \
--min-instances 0 \
--max-instances 3 \
--set-env-vars HOSTNAME=0.0.0.0 \
--add-cloudsql-instances=${_PROJECT_ID}:${_REGION}:${_SQL_INSTANCE_ID} \
--update-secrets AUTH_GOOGLE_CLIENT_ID=AUTH_GOOGLE_CLIENT_ID:latest,AUTH_GOOGLE_CLIENT_SECRET=AUTH_GOOGLE_CLIENT_SECRET:latest,SALT=SALT:latest,NEXTAUTH_URL=NEXTAUTH_URL:latest,NEXTAUTH_SECRET=NEXTAUTH_SECRET:latest,DATABASE_URL=DATABASE_URL:latest,DIRECT_URL=DIRECT_URL:latest
To submit this build, use the following command in your local console, in the directory containing the cloudbuild.yaml file:
gcloud builds submit .
For automatic rebuilds upon new commits, set up a Cloud Build Trigger linked to your repository holding the cloudbuild.yaml
file. This will redeploy Langfuse whenever changes are pushed to the repository.
Note on AlloyDB
AlloyDB is a fully-managed, Postgres-compatible database offered by Google Cloud Platform that is tuned for better performance on tasks such as analytical queries and in-database embeddings. It is recommended to use it within a Shared VPC with your Cloud Run runtime, which will expose AlloyDB’s private IP address to your application. If you are using it, the DB connection string changes slightly:
# ALLOYDB_CONNECTION_STRING
postgresql://<USER>:<PASSWORD>@<ALLOY_DB_PRIVATE_IP>:5432/<ALLOY_DB_DATABASE>/?sslmode=none&pgbouncer=true
# ALLOYDB_DIRECT_URL
postgresql://<USER>:<PASSWORD>@<ALLOY_DB_PRIVATE_IP>:5432/<ALLOY_DB_DATABASE>/?sslmode=none
Heroku
To deploy this image on heroku you have to run through the steps in the following deployment guide:
-
Pull the docker image. This can be achieved by running the following command in your terminal:
docker pull langfuse/langfuse:2
-
Get the ID of the pulled image
Linux / MacOS:
Running the following command should result in directly printing the image ID
docker images | grep langfuse/langfuse | awk '{print $3}'
Following this tutorial, you will always have to insert this image ID when [IMAGE_ID] is written.
Windows:
On Windows, you can print the full information of the pulled image using:
docker images | findstr /S "langfuse/langfuse"
This will result in something like:
langfuse/langfuse 2 cec90c920468 28 hours ago 595MB
Here you have to manually retrieve the image ID, which in this case is cec90c920468. It should be located between the tag 2 and the created 28 hours ago in this example.
Prepare your terminal and docker image
First of all, you will have to be logged in to heroku using
heroku login
If this is not working, please visit the heroku CLI setup.
If you succeeded in logging in to heroku via the CLI, you can continue by following the next steps:
Tag the docker image (Insert your image ID into the command). You will also have to insert the name of your heroku app/dyno into [HEROKU_APP_NAME]:
docker tag [IMAGE_ID] registry.heroku.com/[HEROKU_APP_NAME]/web
-
Setup a database for your heroku app
In the dashboard of your heroku app, add the Heroku Postgres add-on. This will add a PostgreSQL database to your application.
Set the environment variables
For the minimum deployment in heroku, you will have to set the following environment variables (see table above). The
DATABASE_URL
is your database connection string starting withpostgres://
in the configuration of your added PostgreSQL database.DATABASE_URL= NEXTAUTH_SECRET= NEXTAUTH_URL= SALT=
Have a look at the other optional environment variables in the table above and set them if needed to configure your deployment.
-
Push to heroku container registry
In this step you will push the docker image to the heroku container registry: (Insert the name of your heroku app/dyno)
docker push registry.heroku.com/[HEROKU_APP_NAME]/web
-
Deploy the docker image from the heroku registry
In the last step you will have to execute the following command to finally deploy the image. Again insert the name of your heroku app:
heroku container:release web --app=[HEROKU_APP_NAME]
Support
If you experience any issues, please join us on Discord or contact the maintainers at [email protected].
For support with production deployments, the Langfuse team provides dedicated enterprise support. To learn more, reach out to [email protected] or schedule a demo.
Alternatively, you may consider using Langfuse Cloud, which is a fully managed version of Langfuse. You can find information about its security and privacy here.
FAQ
- Are prebuilt ARM images available?
- Are older versions of the SDK compatible with newer versions of Langfuse?
- I cannot connect to my docker deployment, what should I do?
- I have forgotten my password
- How can I restrict access on my self-hosted instance to internal users?
- How to manage different environments in Langfuse?
- Can I deploy multiple instances of Langfuse behind a load balancer?
- What kind of telemetry does Langfuse collect?