Here's a custom guide you can use to deploy Open WebUI with Ollama using Docker:


Open WebUI with Ollama - Custom Docker Deployment Guide

This guide walks you through setting up Open WebUI, integrated with Ollama, to run local large language models (LLMs) for AI interactions. You'll learn how to configure the environment, deploy with Docker, and manage the installation with persistent data storage.

1. Prerequisites

Before starting, ensure you have the following tools installed:

  • Docker: Install Docker
  • Basic understanding of Docker commands
  • A machine with enough resources to run LLM models (minimum 8GB RAM recommended)
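Before starting, you can verify that Docker is installed and the daemon is running:

docker --version
docker info

If docker info reports an error, start the Docker service before continuing.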

2. Docker Command for Deployment

Run the following command in your terminal to start the Open WebUI container with Ollama:

docker run -d \
  -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama

Explanation of the Command:

  • -d: Runs the container in the background (detached mode).
  • -p 3000:8080: Maps port 8080 inside the container to port 3000 on your machine.
  • -v ollama:/root/.ollama: Mounts a persistent volume for Ollama's data to ensure no data is lost after restarts.
  • -v open-webui:/app/backend/data: Mounts a persistent volume for Open WebUI's backend data.
  • --restart always: Ensures the container restarts automatically if it stops or the Docker daemon is restarted.
  • ghcr.io/open-webui/open-webui:ollama: Specifies the Docker image to pull from the GitHub Container Registry, with Ollama bundled in.
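Once the command returns, you can confirm the container actually started (on first launch, Docker pulls the image, which may take several minutes):

docker ps --filter name=open-webui

The STATUS column should read "Up" once the container is healthy.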

3. Accessing the Open WebUI Interface

Once the container is up and running, you can access the Open WebUI interface via a web browser:

http://localhost:3000

If you're hosting it on a remote server, replace localhost with the server's IP address.
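If the page doesn't load, a quick check from the host itself can confirm whether the service is answering:

curl -I http://localhost:3000

A response line such as HTTP/1.1 200 OK should mean Open WebUI is serving; a refused connection points back to the container status or port mapping.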

4. Managing Your Installation

Persistent Data Storage

This setup ensures that your configurations and data (models, preferences) are stored across restarts using Docker volumes:

  • Ollama data is stored in the ollama volume (/root/.ollama).
  • Open WebUI data is stored in the open-webui volume (/app/backend/data).

If you prefer to store data at a specific location on your host system, you can replace the named volumes with bind mounts to host paths:

-v /path/to/ollama:/root/.ollama
-v /path/to/open-webui:/app/backend/data
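For example, a complete command using host directories might look like this (the /srv paths below are placeholders; substitute locations that suit your system):

mkdir -p /srv/open-webui/ollama /srv/open-webui/data
docker run -d \
  -p 3000:8080 \
  -v /srv/open-webui/ollama:/root/.ollama \
  -v /srv/open-webui/data:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama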

Restart and Removal Commands

To stop or restart your container:

docker stop open-webui
docker start open-webui

To remove the container and delete its persistent data volumes:

docker rm -f open-webui
docker volume rm ollama open-webui
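To update to a newer image instead, one common approach is to pull the latest tag and recreate the container; because the data lives in the named volumes, it survives the recreation:

docker pull ghcr.io/open-webui/open-webui:ollama
docker rm -f open-webui
docker run -d \
  -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama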

5. Optional: GPU Support

If your machine has a compatible NVIDIA GPU (with the NVIDIA Container Toolkit installed on the host), you can leverage it by modifying the Docker run command as follows:

docker run -d \
  -p 3000:8080 \
  --gpus=all \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama

This command enables GPU support for faster model inference.
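Once the container is running, you can check that the GPU is actually visible inside it (the NVIDIA runtime injects the nvidia-smi tool when --gpus is used):

docker exec open-webui nvidia-smi

If nvidia-smi lists your GPU, Ollama can use it for inference.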

6. Additional Configuration

You can pass environment variables into your Docker command to customize the setup. For example, if Ollama is hosted on a different server, point Open WebUI at it with OLLAMA_BASE_URL. In that case, use the image without the bundled Ollama and drop the ollama volume:

docker run -d \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=https://ollama.example.com \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main

For security, you can also control user access by enabling or disabling login. To disable login for single-user setups:

-e WEBUI_AUTH=False
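For instance, a single-user deployment with login disabled would combine the pieces above like this (a sketch; adjust ports and volumes to your setup):

docker run -d \
  -p 3000:8080 \
  -e WEBUI_AUTH=False \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:ollama

Note that Open WebUI is not designed to switch an existing multi-user installation to WEBUI_AUTH=False; use it on fresh, single-user deployments.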

7. Troubleshooting

  • Connection Issues: Ensure that host port 3000 is open in your firewall or server's security settings (port 8080 stays internal to the container). If the UI is unreachable, first confirm the container is running and the port is mapped (see the check after this list).
  • Data Not Persisting: Verify that Docker volumes are mounted correctly by inspecting them:
    docker volume inspect ollama
    docker volume inspect open-webui
    
  • Check Logs: If the container fails, check the logs for error messages:
    docker logs open-webui
    
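As a first check for any of the issues above, confirm that the container is running and that the port mapping took effect:

docker ps -a --filter name=open-webui
docker port open-webui

For the setup in this guide, docker port should print 8080/tcp -> 0.0.0.0:3000.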

This custom guide provides a complete setup for Open WebUI with Ollama using Docker. You can modify paths, settings, and commands as needed to suit your environment. Enjoy experimenting with your local AI setup!

Deploying Open WebUI with Ollama on TrueNAS SCALE Using the Custom App Button

TrueNAS SCALE allows for easy deployment of applications through its Custom Apps feature, which leverages Kubernetes in the background. Here's how you can deploy Open WebUI with Ollama using this feature:

1. Prepare the Helm Chart or Docker Image

Before deploying, identify the Docker image you will use. TrueNAS SCALE lets you add custom applications from a Docker image or a Helm chart. The image used here is:

  • Image: ghcr.io/open-webui/open-webui:ollama
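Optionally, if you have Docker available on another machine, you can sanity-check that the image reference resolves before configuring TrueNAS:

docker pull ghcr.io/open-webui/open-webui:ollama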

2. Access the TrueNAS SCALE Web UI

  • Open your browser and access your TrueNAS SCALE dashboard.
  • Navigate to Apps in the left sidebar.
  • Click on the Manage Catalogs button.

3. Add the Custom App

  • Select Add Catalog to register a new application catalog.
  • In the Catalog URL field, enter the link to a catalog that contains your custom Helm chart.
  • If you're deploying the Docker image directly, you can skip the catalog step entirely and proceed to the custom app deployment below.

4. Deploy via Custom App Button

Once the catalog is added (or skipped, for a direct Docker deployment), proceed to the app deployment:

  • Click on Available Applications, then select Launch Docker Image (if using the Docker image directly) or install your Helm chart.
  • For Docker:
    • Name your app, for example, open-webui-ollama.
    • In the Image Repository field, enter ghcr.io/open-webui/open-webui, and set the Image Tag to ollama.
    • Set Application Ports: map the internal container port 8080 to an external node port such as 3000. (TrueNAS SCALE may reject node ports below 9000 by default; if so, pick a higher port and adjust the access URL accordingly.)
    • Set up Environment Variables as needed:
      • WEBUI_AUTH=False (if disabling login).
    • Configure the Volumes:
      • Add a volume for Ollama's data, mounted at /root/.ollama.
      • Add another volume for Open WebUI's data, mounted at /app/backend/data.
  • For Helm Chart:
    • Follow the prompts and provide values for the image, ports, and volume mounts.

5. Complete the Deployment

Once you finish configuring the application:

  • Click Install, and TrueNAS SCALE will deploy the application on its Kubernetes backend.
  • You can monitor the deployment process in the Apps section.
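If you prefer the command line, TrueNAS SCALE runs its apps on k3s, and each release lives in an ix-<app-name> namespace. Assuming the app name open-webui-ollama used above, you can watch the pod come up from a shell on the NAS:

sudo k3s kubectl get pods -n ix-open-webui-ollama

Wait for the pod to report Running before opening the web interface.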

6. Access Open WebUI

After the deployment is complete:

  • Visit http://your-truenas-ip:3000 (or the node port you chose) to access the Open WebUI interface.

This setup lets you manage and run Open WebUI with Ollama on your TrueNAS SCALE environment, deploying from either a Docker image or a Helm chart, with Kubernetes handling container orchestration in the background and persistent storage maintained across restarts.