Running in Docker
DipDup provides prebuilt Docker images hosted on Docker Hub. You can use them as is or build custom images based on them.
| Link | Latest tag |
| --- | --- |
| Docker Hub | dipdup/dipdup:7 |
| GitHub Container Registry | ghcr.io/dipdup-io/dipdup:7 |
| GitHub Container Registry (nightly) | ghcr.io/dipdup-io/dipdup:next |
All images are based on python:3.11-slim-bookworm and support both amd64 and arm64 architectures. The default user is dipdup with UID 1000 and home directory /home/dipdup. The entrypoint is set to the dipdup command.
Nightly builds are published on every push to the next branch for developers' convenience. Do not use nightly builds in production! You can also use X.Y and X.Y.Z tags to pin to a specific version, as shown below.
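For example, to pin to a specific release and verify that the entrypoint works (the version number here is only an illustration; check the registry for available tags):
docker pull dipdup/dipdup:7.5
docker run --rm dipdup/dipdup:7.5 --help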
Usage
To run DipDup in a container, copy or mount your project directory and config file into the container. Assuming your project source code is in the dipdup_indexer directory, you can run the DipDup container with a bind mount using the following command:
docker run \
  -v "$(pwd)/dipdup_indexer":/home/dipdup/dipdup_indexer \
  dipdup/dipdup:7 -c dipdup_indexer run
If you're using a SQLite database, mount the database file as well:
docker run \
  -v "$(pwd)/dipdup_indexer":/home/dipdup/dipdup_indexer \
  -v "$(pwd)/dipdup_indexer.sqlite":/home/dipdup/dipdup_indexer.sqlite \
  dipdup/dipdup:7 -c dipdup_indexer run
Building custom image
Start by creating a .dockerignore file for your project if it's missing:
# Ignore all
*
# Add metadata and build files
!dipdup_indexer
!pyproject.toml
!pdm.lock
!README.md
# Add Python code
!**/*.py
**/.*_cache
**/__pycache__
# Add configs and scripts (but not env!)
!**/*.graphql
!**/*.json
!**/*.sql
!**/*.yaml
!**/*.yml
!**/*.j2
!**/.keep
Then copy your code and config file to the image:
FROM dipdup/dipdup:7
# FROM ghcr.io/dipdup-io/dipdup:7
# FROM ghcr.io/dipdup-io/dipdup:next
# COPY --chown=dipdup pyproject.toml README.md .
# RUN pip install .
COPY --chown=dipdup . dipdup_indexer
WORKDIR dipdup_indexer
If you need to install additional Python dependencies, just call pip directly during the build stage:
RUN pip install --no-cache -r requirements.txt
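With the Dockerfile and .dockerignore in place, you can build and tag the image. The tag name here is just an example, and the command assumes the Dockerfile is in the current directory:
docker build -t dipdup_indexer:latest .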
Deploying with Docker Compose
Here's an example compose.yaml file:
version: "3.8"
name: dipdup_indexer
services:
  dipdup:
    build:
      context: ..
      dockerfile: deploy/Dockerfile
    restart: always
    env_file: .env
    ports:
      - 46339
      - 9000
    command: ["-c", "dipdup.yaml", "-c", "configs/dipdup.compose.yaml", "run"]
    depends_on:
      - db
      - hasura
  db:
    image: postgres:15
    ports:
      - 5432
    volumes:
      - db:/var/lib/postgresql/data
    restart: always
    env_file: .env
    environment:
      - POSTGRES_USER=dipdup
      - POSTGRES_DB=dipdup
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dipdup"]
      interval: 10s
      timeout: 5s
      retries: 5
  hasura:
    image: hasura/graphql-engine:latest
    ports:
      - 8080
    depends_on:
      - db
    restart: always
    environment:
      - HASURA_GRAPHQL_DATABASE_URL=postgres://dipdup:${POSTGRES_PASSWORD}@db:5432/dipdup
      - HASURA_GRAPHQL_ADMIN_SECRET=${HASURA_SECRET}
      - HASURA_GRAPHQL_ENABLE_CONSOLE=true
      - HASURA_GRAPHQL_DEV_MODE=true
      - HASURA_GRAPHQL_LOG_LEVEL=info
      - HASURA_GRAPHQL_ENABLE_TELEMETRY=false
      - HASURA_GRAPHQL_UNAUTHORIZED_ROLE=user
      - HASURA_GRAPHQL_STRINGIFY_NUMERIC_TYPES=true
volumes:
  db:
Environment variables are expanded in the DipDup config file; in this example, the PostgreSQL password and the Hasura admin secret are forwarded from the host environment, typically via an .env file like the one sketched below.
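A minimal .env file, assuming it sits next to compose.yaml (the values are placeholders; substitute your own secrets):
POSTGRES_PASSWORD=changeme
HASURA_SECRET=changeme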
You can create a separate dipdup.<environment>.yaml file for this stack to apply environment-specific config overrides:
database:
  kind: postgres
  host: ${POSTGRES_HOST:-db}
  port: 5432
  user: ${POSTGRES_USER:-dipdup}
  password: ${POSTGRES_PASSWORD}
  database: ${POSTGRES_DB:-dipdup}
hasura:
  url: http://${HASURA_HOST:-hasura}:8080
  admin_secret: ${HASURA_SECRET}
  allow_aggregations: true
  camel_case: true
sentry:
  dsn: ${SENTRY_DSN:-""}
  environment: ${SENTRY_ENVIRONMENT:-""}
prometheus:
  host: 0.0.0.0
logging: ${LOGLEVEL:-INFO}
Then modify the command in compose.yaml:
services:
  dipdup:
    command: ["-c", "dipdup.yaml", "-c", "dipdup.prod.yaml", "run"]
  ...
Note the hostnames (resolved inside the Docker network) and environment variables (expanded by DipDup); the command omits the leading dipdup because it is already the image entrypoint.
Build and run the containers:
docker-compose up -d --build
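Once the stack is up, you can follow the indexer logs (dipdup is the service name from compose.yaml above):
docker-compose logs -f dipdup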
Deploying with Docker Swarm
Scaffolded projects also contain a compose file for Docker Swarm. Before deploying this stack, create the external networks traefik-public and prometheus-private, as shown below. Optionally, deploy Traefik and Prometheus and attach them to these networks to get a fully functional stack.
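One way to create these networks with the standard Docker CLI (the overlay driver is required for Swarm services):
docker network create --driver overlay traefik-public
docker network create --driver overlay prometheus-private
The Swarm compose file looks like this: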
version: "3.8"
name: dipdup_indexer
services:
  dipdup:
    image: ${IMAGE:-ghcr.io/dipdup-io/dipdup}:${TAG:-7}
    depends_on:
      - db
      - hasura
    command: ["-c", "dipdup.yaml", "-c", "configs/dipdup.swarm.yaml", "run"]
    env_file: .env
    networks:
      - internal
      - prometheus-private
    deploy:
      mode: replicated
      replicas: ${INDEXER_ENABLED:-1}
      labels:
        - prometheus-job=${SERVICE}
        - prometheus-port=8000
      placement: &placement
        constraints:
          - node.labels.${SERVICE} == true
    logging: &logging
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"
        tag: "{{.Name}}.{{.ImageID}}"
  db:
    image: postgres:15
    volumes:
      - db:/var/lib/postgresql/data
    env_file: .env
    environment:
      - POSTGRES_USER=dipdup
      - POSTGRES_DB=dipdup
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U dipdup"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - internal
    deploy:
      mode: replicated
      replicas: 1
      placement: *placement
    logging: *logging
  hasura:
    image: hasura/graphql-engine:latest
    depends_on:
      - db
    environment:
      - HASURA_GRAPHQL_DATABASE_URL=postgres://dipdup:${POSTGRES_PASSWORD}@dipdup_indexer_db:5432/dipdup
      - HASURA_GRAPHQL_ADMIN_SECRET=${HASURA_SECRET}
      - HASURA_GRAPHQL_ENABLE_CONSOLE=true
      - HASURA_GRAPHQL_DEV_MODE=false
      - HASURA_GRAPHQL_LOG_LEVEL=warn
      - HASURA_GRAPHQL_ENABLE_TELEMETRY=false
      - HASURA_GRAPHQL_UNAUTHORIZED_ROLE=user
      - HASURA_GRAPHQL_STRINGIFY_NUMERIC_TYPES=true
    networks:
      - internal
      - traefik-public
    deploy:
      mode: replicated
      replicas: 1
      labels:
        - traefik.enable=true
        - traefik.http.services.${SERVICE}.loadbalancer.server.port=8080
        - "traefik.http.routers.${SERVICE}.rule=Host(`${HOST}`) && (PathPrefix(`/v1/graphql`) || PathPrefix(`/api/rest`))"
        - traefik.http.routers.${SERVICE}.entrypoints=http,${INGRESS:-ingress}
        - "traefik.http.routers.${SERVICE}-console.rule=Host(`${SERVICE}.${SWARM_ROOT_DOMAIN}`)"
        - traefik.http.routers.${SERVICE}-console.entrypoints=https
        - traefik.http.middlewares.${SERVICE}-console.headers.customrequestheaders.X-Hasura-Admin-Secret=${HASURA_SECRET}
        - traefik.http.routers.${SERVICE}-console.middlewares=authelia@docker,${SERVICE}-console
      placement: *placement
    logging: *logging
volumes:
  db:
networks:
  internal:
  traefik-public:
    external: true
  prometheus-private:
    external: true
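Assuming the file above is saved as compose.swarm.yaml and the variables it references (SERVICE, HOST, SWARM_ROOT_DOMAIN, POSTGRES_PASSWORD, HASURA_SECRET, and so on) are exported in your shell, the stack can be deployed with:
docker stack deploy --compose-file compose.swarm.yaml dipdup_indexer
Note that docker stack deploy substitutes variables from the current environment but, unlike docker-compose, does not read .env files automatically.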