Ollama + Open WebUI
Ollama is a lightweight framework for running large language models locally, supporting models like Llama 3, Mistral, Gemma, Phi, and many more. This stack combines Ollama with Open WebUI, providing a polished ChatGPT-style web interface for interacting with your local models.

Open WebUI supports conversation history, model switching, system prompts, RAG (Retrieval-Augmented Generation), multi-modal inputs, and user management. Ollama handles model downloading, quantization, and GPU acceleration (NVIDIA CUDA, AMD ROCm), and exposes an OpenAI-compatible API. Models and chat history are persisted in named volumes, and the Ollama API is accessible on port 11434 for programmatic access and integration with other tools.

After deployment, access Open WebUI on port 8080, create your account, and pull a model (e.g., `ollama pull llama3.2`) to start chatting.
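Beyond the web UI, the API on port 11434 can be called directly. A minimal Python sketch against Ollama's native `/api/chat` endpoint, using only the standard library (the `llama3.2` model name and `localhost` port come from the description above; adjust for your deployment):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # host port published by the compose file


def build_chat_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint (non-streaming)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def chat(model: str, prompt: str) -> str:
    """Send a single-turn chat request and return the assistant's reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Because the API is also OpenAI-compatible, the same server can be pointed at from any OpenAI client by setting its base URL to `http://localhost:11434/v1`.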
Included Services
ollama
Image: ollama/ollama:latest
Environment variables: OLLAMA_HOST, OLLAMA_NUM_PARALLEL, OLLAMA_MAX_LOADED_MODELS
open-webui
Image: ghcr.io/open-webui/open-webui:main
Environment variables: OLLAMA_BASE_URL, WEBUI_SECRET_KEY, ENABLE_SIGNUP, DEFAULT_MODELS
Generated YAML
# Generated by ComposeHub (composehub.dev)
name: ollama
services:
  ollama:
    image: ollama/ollama:latest
    restart: always
    ports:
      - 11434:11434
    volumes:
      - ollama_data:/root/.ollama
    environment:
      OLLAMA_HOST: ${OLLAMA_HOST:-0.0.0.0}
      OLLAMA_NUM_PARALLEL: ${OLLAMA_NUM_PARALLEL:-1}
      OLLAMA_MAX_LOADED_MODELS: ${OLLAMA_MAX_LOADED_MODELS:-1}
    networks:
      - ollama
    healthcheck:
      test:
        - CMD-SHELL
        - curl -f http://localhost:11434/api/version || exit 1
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "4.00"
          memory: 8192M
        reservations:
          cpus: "1.00"
          memory: 2048M
    labels:
      com.composehub.description: Ollama LLM inference server
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    ports:
      - 8080:8080
    volumes:
      - openwebui_data:/app/backend/data
    environment:
      OLLAMA_BASE_URL: http://ollama:11434
      WEBUI_SECRET_KEY: ${WEBUI_SECRET_KEY:-changeme}
      ENABLE_SIGNUP: ${ENABLE_SIGNUP:-true}
      DEFAULT_MODELS: ${DEFAULT_MODELS:-}
    networks:
      - ollama
    depends_on:
      ollama:
        condition: service_healthy
    healthcheck:
      test:
        - CMD-SHELL
        - curl -f http://localhost:8080/health || exit 1
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "2.00"
          memory: 2048M
        reservations:
          cpus: "0.25"
          memory: 256M
    labels:
      com.composehub.description: "Open WebUI - ChatGPT-style interface for Ollama"
networks:
  ollama:
    driver: bridge
volumes:
  ollama_data:
    driver: local
  openwebui_data:
    driver: local
Quick Info
- Services: 2
- Networks: 1
- Volumes: 2
When to Use
This template is ideal for setting up an Ollama + Open WebUI environment. All services are pre-configured with healthchecks, resource limits, and sensible defaults. Customize environment variables before deploying to production.
Tips
- Change all default passwords before deploying
- Review resource limits for your hardware
- Add a reverse proxy for production HTTPS
- Configure backup strategies for data volumes
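For the reverse-proxy tip, a minimal nginx sketch terminating HTTPS in front of Open WebUI (the domain, certificate paths, and upstream address are placeholders to adapt to your setup):

```
server {
    listen 443 ssl;
    server_name chat.example.com;  # placeholder domain

    ssl_certificate     /etc/ssl/certs/chat.example.com.pem;   # your certificate
    ssl_certificate_key /etc/ssl/private/chat.example.com.key; # your key

    location / {
        proxy_pass http://127.0.0.1:8080;  # Open WebUI host port from the compose file
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # WebSocket upgrade headers so streaming chat responses work through the proxy
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

If the proxy runs on the same Docker network as the stack, `proxy_pass` can target `http://open-webui:8080` instead of the published host port.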