Ollama + Open WebUI
Ollama is a lightweight framework for running large language models locally, supporting models like Llama 3, Mistral, Gemma, Phi, and many more. This stack combines Ollama with Open WebUI, providing a polished ChatGPT-style web interface for interacting with your local models. Open WebUI supports conversation history, model switching, system prompts, RAG (Retrieval-Augmented Generation), multi-modal inputs, and user management.

Ollama handles model downloading, quantization, GPU acceleration (NVIDIA CUDA, AMD ROCm), and exposes an OpenAI-compatible API. Models and chat history are persisted in named volumes, and the Ollama API is accessible on port 11434 for programmatic access and integration with other tools.

After deployment, access Open WebUI on port 8080, create your account, and pull a model (e.g., `ollama pull llama3.2`) to start chatting.
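The quick-start steps above can be sketched as a short shell session. This assumes the stack is already running on the local host with Docker Compose v2, and that the Ollama service is named `ollama` as in the generated YAML:

```shell
# Pull a model inside the running ollama container
docker compose exec ollama ollama pull llama3.2

# Confirm the Ollama API is reachable on its published port
curl -s http://localhost:11434/api/version

# Generate a completion via Ollama's native API
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

Once a model is pulled, it also appears in Open WebUI's model picker on port 8080.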
Included Services
- ollama (ollama/ollama:latest)
- open-webui (ghcr.io/open-webui/open-webui:main)

The environment variables for each service are listed in the generated YAML below.
Generated YAML
# Generated by ComposeHub (composehub.dev)
name: ollama
services:
  ollama:
    image: ollama/ollama:latest
    restart: always
    ports:
      - 11434:11434
    volumes:
      - ollama_data:/root/.ollama
    environment:
      OLLAMA_HOST: ${OLLAMA_HOST:-0.0.0.0}
      OLLAMA_NUM_PARALLEL: ${OLLAMA_NUM_PARALLEL:-1}
      OLLAMA_MAX_LOADED_MODELS: ${OLLAMA_MAX_LOADED_MODELS:-1}
    networks:
      - ollama
    healthcheck:
      test:
        - CMD-SHELL
        - curl -f http://localhost:11434/api/version || exit 1
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "4.00"
          memory: 8192M
        reservations:
          cpus: "1.00"
          memory: 2048M
    labels:
      com.composehub.description: Ollama LLM inference server
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    ports:
      - 8080:8080
    volumes:
      - openwebui_data:/app/backend/data
    environment:
      OLLAMA_BASE_URL: http://ollama:11434
      WEBUI_SECRET_KEY: ${WEBUI_SECRET_KEY:-changeme}
      ENABLE_SIGNUP: ${ENABLE_SIGNUP:-true}
      DEFAULT_MODELS: ${DEFAULT_MODELS:-}
    networks:
      - ollama
    depends_on:
      ollama:
        condition: service_healthy
    healthcheck:
      test:
        - CMD-SHELL
        - curl -f http://localhost:8080/health || exit 1
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "2.00"
          memory: 2048M
        reservations:
          cpus: "0.25"
          memory: 256M
    labels:
      com.composehub.description: Open WebUI — ChatGPT-style interface for Ollama
networks:
  ollama:
    driver: bridge
volumes:
  ollama_data:
    driver: local
  openwebui_data:
    driver: local
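A typical deployment flow for the file above can be sketched as follows, using Docker Compose v2 commands (the secret-generation step is an illustrative assumption; any sufficiently random value works):

```shell
# Set a real secret before first start instead of the "changeme" default
export WEBUI_SECRET_KEY="$(openssl rand -hex 32)"

# Bring the stack up in the background
docker compose up -d

# Check that both services report healthy; open-webui waits for
# ollama's healthcheck before starting
docker compose ps

# Tail logs if a service fails to become healthy
docker compose logs -f ollama open-webui
```

Because `open-webui` uses `depends_on` with `condition: service_healthy`, it only starts once the Ollama healthcheck passes.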
Quick Info
- Services: 2
- Networks: 1
- Volumes: 2
When to Use It
This template is ideal for setting up an Ollama + Open WebUI environment. All services are preconfigured with healthchecks, resource limits, and sensible defaults. Customize the environment variables before deploying to production.
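Because Ollama exposes an OpenAI-compatible API under `/v1`, existing OpenAI clients can simply be pointed at port 11434. A minimal curl sketch, assuming the `llama3.2` model has already been pulled:

```shell
# Chat completion via Ollama's OpenAI-compatible endpoint
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```

Any OpenAI SDK can target this endpoint by setting its base URL to `http://localhost:11434/v1` (the API key can be any placeholder string).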
Tips
- Change all default passwords before deploying
- Review the resource limits against your hardware
- Add a reverse proxy for HTTPS in production
- Set up backup strategies for the data volumes
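The volume-backup tip can be sketched with a throwaway Alpine container. The `ollama_` prefix below assumes the default Compose project name from `name: ollama`; adjust it if your project name differs:

```shell
# Archive each named volume into the current directory
docker run --rm -v ollama_ollama_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/ollama_data.tgz -C /data .
docker run --rm -v ollama_openwebui_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/openwebui_data.tgz -C /data .
```

Restoring is the reverse: mount the volume and extract the archive into `/data` before starting the stack.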