Warning: This page is incomplete and/or outdated. See the Config reference for actual keys.

Config file reference

advanced

advanced:
  early_realtime: False
  merge_subscriptions: False
  postpone_jobs: False
  reindex:
    manual: wipe
    migration: exception
    rollback: ignore
    config_modified: exception
    schema_modified: exception

This config section lets you tune system-wide options that are either experimental or unsuitable for generic configurations.

  • reindex — Mapping of reindexing reasons to actions DipDup performs
  • scheduler — apscheduler scheduler config
  • postpone_jobs — Do not start the job scheduler until all indexes are in realtime state
  • early_realtime — Establish a realtime connection immediately after startup
  • merge_subscriptions — Subscribe to all operations instead of exact channels

CLI flags take priority over the AdvancedConfig fields of the same name.

contracts

A list of contracts you can use in index definitions. Each contract entry has two fields:

  • address — either originated or implicit account address encoded in base58.
  • typename — an alias for the particular contract script, meaning that two contracts sharing the same code can have the same type name.
contracts:
  kusd_dex_mainnet:
    address: KT1CiSKXR68qYSxnbzjwvfeMCRburaSDonT2
    typename: quipu_fa12
  tzbtc_dex_mainnet:
    address: KT1N1wwNPqT5jGhM91GQ2ae5uY8UzFaXHMJS
    typename: quipu_fa12
  kusd_token_mainnet:
    address: KT1K9gCRgaLRFKTErYt1wVxA3Frb9FjasjTV
    typename: kusd_token
  tzbtc_token_mainnet:
    address: KT1PWx2mnDueood7fEmfbBDKx1D9BAnnXitn
    typename: tzbtc_token

If the typename field is not set, a contract alias will be used instead.

A contract entry does not contain information about the network, so it's a good idea to include the network name in the alias. This design choice makes generic index parameterization via templates possible. See index templates for details.

If multiple contracts you index have the same interface but different code, see the FAQ to learn how to avoid conflicts.
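
For example, a contract alias can be referenced from an index definition. A minimal sketch of an operation index (tzkt_mainnet is an assumed datasource alias, and the handler name is illustrative):

indexes:
  kusd_dex_operations:
    kind: tezos.tzkt.operations
    datasource: tzkt_mainnet
    contracts:
      - kusd_dex_mainnet
    handlers:
      - callback: on_kusd_dex_transaction
        pattern:
          - type: transaction
            destination: kusd_dex_mainnet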

custom

An arbitrary YAML object you can use to store internal indexer configuration.

package: my_indexer
...
custom:
  foo: bar

Access or modify it from any callback:

ctx.config.custom['foo'] = 'buzz'
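
A slightly fuller sketch, assuming the default on_restart hook (the foo key and buzz value come from the example above):

from dipdup.context import HookContext

async def on_restart(ctx: HookContext) -> None:
    # Read a value defined in the custom section of the config
    foo = ctx.config.custom.get('foo')
    # Modify it; the change is visible to subsequent callbacks
    ctx.config.custom['foo'] = 'buzz'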

database

DipDup supports several database engines for development and production. The required kind field specifies which engine to use:

  • sqlite
  • postgres (and compatible engines)

The Database engines article may help you choose a database that suits your needs.

SQLite

The path field must be either a path to a .sqlite3 file or :memory: to keep the database in memory only (the default):

database:
  kind: sqlite
  path: db.sqlite3

  • kind — always 'sqlite'
  • path — Path to the .sqlite3 file; leave the default for an in-memory database
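
To keep the database in memory (the default), omit the path field or set it explicitly:

database:
  kind: sqlite
  path: ':memory:'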

PostgreSQL

Requires host, port, user, password, and database fields. You can set schema_name to values other than public, but Hasura integration won't be available.

database:
  kind: postgres
  host: db
  port: 5432
  user: dipdup
  password: ${POSTGRES_PASSWORD:-changeme}
  database: dipdup
  schema_name: public

  • kind — always 'postgres'
  • host — Host
  • port — Port
  • user — User
  • password — Password
  • database — Database name
  • schema_name — Schema name
  • immune_tables — List of tables to preserve during reindexing
  • connection_timeout — Connection timeout in seconds

You can also use Compose-style environment variable substitution with default values for secrets and other fields. See DipDup environment variables.

Immune tables

You might want to keep several tables during a schema wipe if the data in them does not depend on index state but is heavy to reindex. A typical example is indexing IPFS data — changes in your code won't affect off-chain storage, so you can safely reuse this data.

database:
  immune_tables:
    - ipfs_assets

immune_tables is an optional array of table names that are ignored during a schema wipe. Once an immune table is created, DipDup will never touch it again; to change the schema of an immune table, you need to perform the migration manually. Check the schema export output before doing so to ensure the resulting schema matches the one Tortoise ORM would generate.
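
Putting it together, immune_tables is defined inside the database section. A sketch based on the PostgreSQL example above:

database:
  kind: postgres
  host: db
  port: 5432
  user: dipdup
  password: ${POSTGRES_PASSWORD:-changeme}
  database: dipdup
  immune_tables:
    - ipfs_assets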

datasources

A list of API endpoints DipDup uses to retrieve indexing data.

A datasource config entry is an alias for the endpoint URI; it contains no network information. Thus, it's a good idea to include the network name in the datasource alias, e.g. tzkt_mainnet.
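
For example, you can define one datasource per network. A sketch (the ghostnet endpoint URL is an assumption; check the TzKT documentation for actual endpoints):

datasources:
  tzkt_mainnet:
    kind: tezos.tzkt
    url: https://api.tzkt.io
  tzkt_ghostnet:
    kind: tezos.tzkt
    url: https://api.ghostnet.tzkt.io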

tzkt

dipdup.yaml
datasources:
  tzkt:
    kind: tezos.tzkt
    url: ${TZKT_URL:-https://api.tzkt.io}
    http:
      retry_count:  # retry infinitely
      retry_sleep:
      retry_multiplier:
      ratelimit_rate:
      ratelimit_period:
      connection_limit: 100
      connection_timeout: 60
      batch_size: 10000
    buffer_size: 0

coinbase

dipdup.yaml
datasources:
  coinbase:
    kind: coinbase

dipdup-metadata

dipdup.yaml
datasources:
  metadata:
    kind: tzip_metadata
    url: https://metadata.dipdup.net
    network: mainnet|ghostnet|mumbainet

ipfs

dipdup.yaml
datasources:
  ipfs:
    kind: ipfs
    url: https://ipfs.io/ipfs

hasura

This optional section is used by the DipDup executor to automatically configure the Hasura engine to track your tables.

dipdup.yaml
hasura:
  url: http://hasura:8080
  admin_secret: ${HASURA_SECRET:-changeme}
  allow_aggregations: false
  camel_case: true
  rest: true
  select_limit: 100
  source: default

hooks

Hooks are user-defined callbacks you can execute with a job scheduler or within another callback (with ctx.fire_hook).

dipdup.yaml
hooks:
  calculate_stats:
    callback: calculate_stats
    atomic: False
    args:
      major: bool
      depth: int
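
The args mapping defines the callback signature: keys map to argument names and values to type annotations. A sketch of what the callback for the hook above could look like (stubs like this are generated by dipdup init):

from dipdup.context import HookContext

async def calculate_stats(ctx: HookContext, major: bool, depth: int) -> None:
    ...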

jobs

Jobs are schedules for hook execution. Add the following section to the DipDup config:

dipdup.yaml
jobs:
  midnight_stats:
    hook: calculate_stats
    crontab: "0 0 * * *"
    args:
      major: True
  leet_stats:
    hook: calculate_stats
    interval: 1337  # in seconds
    args:
      major: False

If you're unfamiliar with crontab syntax, the online service crontab.guru will help you build the desired expression.

logging

You can configure the amount of logging output by modifying the logging field.

dipdup.yaml
logging: DEBUG

At the moment, these values are equivalent to setting the dipdup logger level to INFO, WARNING, or DEBUG, but this may change in the future.

package

DipDup uses this field to discover the Python package of your project.

dipdup.yaml
package: my_indexer_name

DipDup will search for a package named my_indexer_name in PYTHONPATH, starting from the current working directory.

The DipDup configuration file is decoupled from the indexer implementation, which gives you more flexibility in managing the source code.

See project package for details.

prometheus

dipdup.yaml
prometheus:
  host: 0.0.0.0

Prometheus integration options

  • host — Host to bind to
  • port — Port to bind to
  • update_interval — Interval to update some metrics, in seconds
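
A sketch with all fields set (the port and update interval values are illustrative, not documented defaults):

prometheus:
  host: 0.0.0.0
  port: 8000
  update_interval: 1.0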

sentry

dipdup.yaml
sentry:
  dsn: https://...
  environment: dev
  debug: False

  • dsn — DSN of the Sentry instance
  • environment — Environment to report to Sentry (informational only)
  • debug — Catch warning messages and more context

spec_version

The DipDup specification version defines the format of the configuration file and available features.

dipdup.yaml
spec_version: 2.0

This table shows which DipDup releases support which spec_version values.

  • spec_version 1.0 — DipDup >=1.0, <2.0
  • spec_version 1.1 — DipDup >=2.0, <3.0
  • spec_version 1.2 — DipDup >=3.0, <7.0
  • spec_version 2.0 — DipDup >=7.0

If you're getting MigrationRequiredError after updating the framework, run the dipdup migrate command to perform project migration.

At the moment, spec_version has not changed for a very long time. If your configuration file contains another value, consider recreating the package from scratch and migrating the logic manually.

templates

dipdup.yaml
indexes:
  foo:
    template: bar
    first_level: 12341234
    template_values:
      network: mainnet
templates:
  bar:
    kind: tezos.tzkt.operations
    datasource: tzkt_<network>  # resolves into `tzkt_mainnet`
    ...

  • kind — always template
  • name — Name of the index template
  • template_values — Values to be substituted in the template (each <key> placeholder is replaced with the corresponding value)
  • first_level — Level to start indexing from
  • last_level — Level to stop indexing at (DipDup will terminate at this level)
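
To instantiate the same template for another network, add a second index with different template values. A sketch (the tzkt_ghostnet datasource alias is an assumption):

indexes:
  foo_ghostnet:
    template: bar
    template_values:
      network: ghostnet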