# mediaeng/llmproxy
A lightweight reverse proxy for [Ollama](https://ollama.com) that manages API keys with configurable token and request quotas. Incoming requests in OpenAI-compatible format are authenticated, checked against the quota, and forwarded to the configured Ollama server.
Ollama does not need to run on the same host — `OLLAMA_URL` can point to any reachable server: the Docker host itself, another machine on the network, or a remote server.
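For example, a standard OpenAI-style request is authenticated, checked against the key's quota, and forwarded unchanged; a minimal sketch, where the API key is a placeholder for one created in the admin interface:
```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```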
## Features
- OpenAI-compatible endpoint (`/v1/chat/completions`, `/v1/models`)
- API key management with daily and monthly token/request limits
- Web-based admin interface (port 8001)
- Streaming support (Server-Sent Events); see the sketch after this list
- Tool use / function calling passthrough
- Rotating usage logs
- SQLite (default) or PostgreSQL
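A minimal streaming sketch, using the same placeholder key as above: with `"stream": true` the proxy relays the Server-Sent Events from Ollama, and curl's `-N` disables output buffering so chunks appear as they arrive:
```bash
curl -N http://localhost:8000/v1/chat/completions \
  -H "Authorization: Bearer <your-api-key>" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Write a haiku."}],
    "stream": true
  }'
```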
## Ports
| Port | Service |
|------|---------|
| `8000` | Proxy endpoint (OpenAI API) |
| `8001` | Admin API + web interface (do not expose) |
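Port `8001` should stay internal. If the admin interface needs to be reachable for setup, one option is to bind it to the loopback interface only, so it is accessible from the Docker host but not the network; a sketch of the relevant Compose fragment:
```yaml
services:
  llmproxy:
    image: mediaeng/llmproxy:latest
    ports:
      - "8000:8000"            # proxy endpoint, safe to publish
      - "127.0.0.1:8001:8001"  # admin interface, host-only
```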
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `ADMIN_PASSWORD` | | **Required.** Password for the admin interface |
| `OLLAMA_URL` | `http://localhost:11434` | URL of the Ollama server (without `/v1` suffix) |
| `DEFAULT_MODEL` | `llama3` | Model used when the client does not specify one |
| `DATABASE_URL` | `sqlite:///./test.db` | Database connection string (SQLite or PostgreSQL) |
| `PROXY_HOST` | `0.0.0.0` | Proxy bind address |
| `PROXY_PORT` | `8000` | Proxy port |
| `ADMIN_PORT` | `8001` | Admin API port |
| `APP_TZ` | `Europe/Berlin` | Timezone for daily/monthly quota resets |
| `LOG_FILE` | `logs/usage.log` | Path of the rotating usage log file |
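All of these can also be supplied from an env file instead of inline `-e` flags or `environment:` entries; a minimal sketch with example values:
```bash
# .env (example values, adjust before use)
ADMIN_PASSWORD=changeme
OLLAMA_URL=http://host.docker.internal:11434
DEFAULT_MODEL=llama3
APP_TZ=Europe/Berlin
```
Pass it with `docker run --env-file .env ...` or reference it via `env_file:` in a Compose service.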
## Docker Compose: External Ollama, SQLite
Use this when Ollama runs outside of Docker — on the Docker host or any other reachable server. Adjust `OLLAMA_URL` accordingly.
```yaml
services:
  llmproxy:
    image: mediaeng/llmproxy:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
    environment:
      ADMIN_PASSWORD: changeme
      OLLAMA_URL: http://host.docker.internal:11434  # or http://<ip>:11434
      DEFAULT_MODEL: llama3
      APP_TZ: Europe/Berlin
    volumes:
      - llmproxy-data:/app/backend
    # On Linux, add extra_hosts since host.docker.internal is not
    # available automatically:
    # extra_hosts:
    #   - "host.docker.internal:host-gateway"

volumes:
  llmproxy-data:
```
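A quick smoke test after bringing the stack up, assuming a key has already been created in the admin interface (the placeholder stands for that key):
```bash
docker compose up -d
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer <your-api-key>"
```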
## Docker Compose: External Ollama, PostgreSQL
```yaml
services:
  llmproxy:
    image: mediaeng/llmproxy:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
    environment:
      ADMIN_PASSWORD: changeme
      OLLAMA_URL: http://host.docker.internal:11434  # or http://<ip>:11434
      DEFAULT_MODEL: llama3
      APP_TZ: Europe/Berlin
      DATABASE_URL: postgresql://llmproxy:secret@db:5432/llmproxy
    volumes:
      - llmproxy-data:/app/backend
    depends_on:
      db:
        condition: service_healthy
    # extra_hosts:
    #   - "host.docker.internal:host-gateway"

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: llmproxy
      POSTGRES_USER: llmproxy
      POSTGRES_PASSWORD: secret
    volumes:
      - pg-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U llmproxy"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  llmproxy-data:
  pg-data:
```
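Database state then lives in the `pg-data` volume; one way to back it up is a plain `pg_dump` through the running container (a sketch using the service and user names above, with `-T` to keep the dump clean of TTY output):
```bash
docker compose exec -T db pg_dump -U llmproxy llmproxy > llmproxy-backup.sql
```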
## Docker Compose: Ollama as Container, SQLite
Ollama and llmproxy run together in Docker; model and proxy data are persisted in named volumes.
```yaml
services:
  llmproxy:
    image: mediaeng/llmproxy:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
    environment:
      ADMIN_PASSWORD: changeme
      OLLAMA_URL: http://ollama:11434
      DEFAULT_MODEL: llama3
      APP_TZ: Europe/Berlin
    volumes:
      - llmproxy-data:/app/backend
    depends_on:
      - ollama

  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    volumes:
      - ollama-data:/root/.ollama

volumes:
  llmproxy-data:
  ollama-data:
```
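Note that the `ollama` container starts with no models, so the default model needs to be pulled once before the first request, for example:
```bash
docker compose exec ollama ollama pull llama3
```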
## Docker Compose: Ollama as Container, PostgreSQL
For production environments that use a standalone PostgreSQL database instead of the embedded SQLite.
```yaml
services:
  llmproxy:
    image: mediaeng/llmproxy:latest
    restart: unless-stopped
    ports:
      - "8000:8000"
    environment:
      ADMIN_PASSWORD: changeme
      OLLAMA_URL: http://ollama:11434
      DEFAULT_MODEL: llama3
      APP_TZ: Europe/Berlin
      DATABASE_URL: postgresql://llmproxy:secret@db:5432/llmproxy
    depends_on:
      db:
        condition: service_healthy
      ollama:
        condition: service_started

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: llmproxy
      POSTGRES_USER: llmproxy
      POSTGRES_PASSWORD: secret
    volumes:
      - pg-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U llmproxy"]
      interval: 5s
      timeout: 5s
      retries: 5

  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    volumes:
      - ollama-data:/root/.ollama

volumes:
  pg-data:
  ollama-data:
```
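To avoid repeating the database password in two places, Compose variable substitution can feed both settings from a single value; a sketch of the relevant excerpt, where `DB_PASSWORD` is an assumed variable defined in an `.env` file next to the Compose file:
```yaml
services:
  llmproxy:
    environment:
      DATABASE_URL: postgresql://llmproxy:${DB_PASSWORD}@db:5432/llmproxy
  db:
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
```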
## Quick Start
```bash
docker run -d \
  -p 8000:8000 \
  -e ADMIN_PASSWORD=changeme \
  -e OLLAMA_URL=http://host.docker.internal:11434 \
  -v llmproxy-data:/app/backend \
  mediaeng/llmproxy:latest
```
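On Linux, `host.docker.internal` is not defined by default; the same command works with an explicit host-gateway mapping:
```bash
docker run -d \
  --add-host=host.docker.internal:host-gateway \
  -p 8000:8000 \
  -e ADMIN_PASSWORD=changeme \
  -e OLLAMA_URL=http://host.docker.internal:11434 \
  -v llmproxy-data:/app/backend \
  mediaeng/llmproxy:latest
```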
## Client Configuration
Configure the proxy as an OpenAI-compatible endpoint:
```
Base URL: http://<host>:8000/v1
API Key: <API key created in the admin interface>
```