Ollama + Open WebUI
Ollama is a lightweight framework for running large language models locally, supporting models like Llama 3, Mistral, Gemma, Phi, and many more. This stack combines Ollama with Open WebUI, providing a polished ChatGPT-style web interface for interacting with your local models.

Open WebUI supports conversation history, model switching, system prompts, RAG (Retrieval-Augmented Generation), multi-modal inputs, and user management. Ollama handles model downloading, quantization, and GPU acceleration (NVIDIA CUDA, AMD ROCm), and exposes an OpenAI-compatible API. Models and chat history are persisted in named volumes.

The Ollama API is accessible on port 11434 for programmatic access and integration with other tools. After deployment, access Open WebUI on port 8080, create your account, and pull a model (e.g., `ollama pull llama3.2`) to start chatting.
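For programmatic access, the OpenAI-compatible endpoint can be called with plain HTTP. A minimal sketch in Python using only the standard library, assuming the stack is running locally with the default port mapping and that `llama3.2` has already been pulled:

```python
import json
import urllib.request

# Ollama's OpenAI-compatible chat endpoint (default port mapping
# from this template; adjust the host if you deploy elsewhere).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming chat-completion request for the local API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("llama3.2", "Why is the sky blue?")
# The call below requires the stack to be up, so it is left commented out:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the API is OpenAI-compatible, existing OpenAI client libraries can also be pointed at `http://localhost:11434/v1` instead of hand-rolling requests.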
Included Services
- ollama, image `ollama/ollama:latest`. Environment variables: `OLLAMA_HOST`, `OLLAMA_NUM_PARALLEL`, `OLLAMA_MAX_LOADED_MODELS`.
- open-webui, image `ghcr.io/open-webui/open-webui:main`. Environment variables: `OLLAMA_BASE_URL`, `WEBUI_SECRET_KEY`, `ENABLE_SIGNUP`, `DEFAULT_MODELS`.
Generated YAML
```yaml
# Generated by ComposeHub (composehub.dev)
name: ollama
services:
  ollama:
    image: ollama/ollama:latest
    restart: always
    ports:
      - 11434:11434
    volumes:
      - ollama_data:/root/.ollama
    environment:
      OLLAMA_HOST: ${OLLAMA_HOST:-0.0.0.0}
      OLLAMA_NUM_PARALLEL: ${OLLAMA_NUM_PARALLEL:-1}
      OLLAMA_MAX_LOADED_MODELS: ${OLLAMA_MAX_LOADED_MODELS:-1}
    networks:
      - ollama
    healthcheck:
      test:
        - CMD-SHELL
        - curl -f http://localhost:11434/api/version || exit 1
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "4.00"
          memory: 8192M
        reservations:
          cpus: "1.00"
          memory: 2048M
    labels:
      com.composehub.description: Ollama LLM inference server
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    ports:
      - 8080:8080
    volumes:
      - openwebui_data:/app/backend/data
    environment:
      OLLAMA_BASE_URL: http://ollama:11434
      WEBUI_SECRET_KEY: ${WEBUI_SECRET_KEY:-changeme}
      ENABLE_SIGNUP: ${ENABLE_SIGNUP:-true}
      DEFAULT_MODELS: ${DEFAULT_MODELS:-}
    networks:
      - ollama
    depends_on:
      ollama:
        condition: service_healthy
    healthcheck:
      test:
        - CMD-SHELL
        - curl -f http://localhost:8080/health || exit 1
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s
    deploy:
      resources:
        limits:
          cpus: "2.00"
          memory: 2048M
        reservations:
          cpus: "0.25"
          memory: 256M
    labels:
      com.composehub.description: Open WebUI — ChatGPT-style interface for Ollama
networks:
  ollama:
    driver: bridge
volumes:
  ollama_data:
    driver: local
  openwebui_data:
    driver: local
```
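Every `${VAR:-default}` reference in the YAML above can be overridden from a `.env` file placed next to the compose file. A sketch with illustrative values (the variable names come from the template; the values are placeholders you should replace):

```env
WEBUI_SECRET_KEY=replace-with-a-long-random-string
ENABLE_SIGNUP=false
OLLAMA_NUM_PARALLEL=2
DEFAULT_MODELS=llama3.2
```

Variables left out of `.env` fall back to the defaults shown in the YAML.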
Quick Facts
- Services: 2
- Networks: 1
- Volumes: 2
When to Use
This template is ideal for setting up an Ollama + Open WebUI environment. All services come preconfigured with healthchecks, resource limits, and sensible defaults. Customize the environment variables before deploying to production.
Tips
- Change all default passwords before deploying
- Review the resource limits for your hardware
- Add a reverse proxy for HTTPS in production
- Configure backup strategies for the data volumes
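For the HTTPS tip, a minimal nginx sketch in front of Open WebUI. The hostname and certificate paths are placeholders, not part of the template; the Upgrade headers matter because Open WebUI uses websockets:

```nginx
server {
    listen 443 ssl;
    server_name chat.example.com;                               # placeholder hostname

    ssl_certificate     /etc/ssl/certs/chat.example.com.pem;    # placeholder path
    ssl_certificate_key /etc/ssl/private/chat.example.com.key;  # placeholder path

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_http_version 1.1;
        # Open WebUI needs websocket upgrades for streaming chat
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With this in place you can also drop the `8080:8080` port mapping and expose Open WebUI only through the proxy.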