Quickstart
This page will guide you through the steps to get your first selective indexer up and running in a few minutes without getting too deep into the details.
Let's create an indexer for the USDt token contract. Our goal is to save all token transfers to the database and then calculate some statistics of its holders' activity.
Install DipDup
A modern Linux/macOS distribution with Python 3.12 installed is required to run DipDup.
The recommended way to install DipDup CLI is pipx. We also provide a convenient helper script that installs all necessary tools. Run the following command in your terminal:
curl -Lsf https://dipdup.io/install.py | python3.12
See the Installation page for all options.
Create a project
DipDup CLI has a built-in project generator. Run the following command in your terminal:
dipdup new
Choose the Starknet network and the demo_starknet_events template. If you'd rather start from scratch, choose the [none] network and the demo_blank template instead, then proceed to the Config section. Follow the instructions; the project will be created in a new directory.
Write a configuration file
In the project root, you'll find a file named dipdup.yaml. It's the main configuration file of your indexer. We will discuss it in detail in the Config section; for now, it has the following content:
spec_version: 3.0
package: demo_starknet_events

datasources:
  subsquid:
    kind: starknet.subsquid
    url: ${SUBSQUID_URL:-https://v2.archive.subsquid.io/network/starknet-mainnet}
  node:
    kind: starknet.node
    url: ${NODE_URL:-https://starknet-mainnet.g.alchemy.com/v2}/${NODE_API_KEY:-''}

contracts:
  stark_usdt:
    kind: starknet
    address: '0x68f5c6a61780768455de69077e07e89787839bf8166decfbf92b645209c0fb8'
    typename: stark_usdt

indexes:
  starknet_usdt_events:
    kind: starknet.events
    datasources:
      - subsquid
      - node
    handlers:
      - callback: on_transfer
        contract: stark_usdt
        name: Transfer
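The ${VAR:-default} placeholders in the config are substituted from environment variables, falling back to the value after :- when the variable is unset, following the usual shell parameter-expansion convention. A quick illustration of that convention in a plain shell:

```shell
# DipDup-style ${VAR:-default} placeholders follow shell expansion rules:
unset SUBSQUID_URL
echo "${SUBSQUID_URL:-https://v2.archive.subsquid.io/network/starknet-mainnet}"
# falls back to the default archive URL; export SUBSQUID_URL to override it
```

Set SUBSQUID_URL, NODE_URL and NODE_API_KEY in your environment (or in the .env files under deploy/) to point the indexer at your own endpoints.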
Generate types and stubs
Now it's time to generate typeclasses and callback stubs based on the definitions from the config. The examples below use demo_starknet_events as the package name; yours may differ.
Run the following command:
dipdup init
DipDup will create a Python package demo_starknet_events with everything you need to start writing your indexer. Use the package tree command to see the generated structure:
$ dipdup package tree
demo_starknet_events [.]
├── abi
│   └── stark_usdt/cairo_abi.json
├── configs
│   ├── dipdup.compose.yaml
│   ├── dipdup.sqlite.yaml
│   ├── dipdup.swarm.yaml
│   └── replay.yaml
├── deploy
│   ├── .env.default
│   ├── Dockerfile
│   ├── compose.sqlite.yaml
│   ├── compose.swarm.yaml
│   ├── compose.yaml
│   ├── sqlite.env.default
│   └── swarm.env.default
├── graphql
├── handlers
│   └── on_transfer.py
├── hasura
├── hooks
│   ├── on_index_rollback.py
│   ├── on_reindex.py
│   ├── on_restart.py
│   └── on_synchronized.py
├── models
│   └── __init__.py
├── sql
├── types
│   └── stark_usdt/starknet_events/transfer.py
└── py.typed
That's a lot of files and directories! But don't worry, we will only need the models and handlers directories in this guide.
Define data models
DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use a custom ORM based on Tortoise ORM as an abstraction layer.
First, you need to define a model class. Our schema will consist of a single model, Holder, with the following fields:

address   | account address
balance   | token amount held by the account
turnover  | total amount of transfer/mint calls
tx_count  | number of transfers/mints
last_seen | time of the last transfer/mint
Here's how to define this model in DipDup:
from dipdup import fields
from dipdup.models import CachedModel


class Holder(CachedModel):
    address = fields.TextField(primary_key=True)
    balance = fields.DecimalField(decimal_places=6, max_digits=20, default=0)
    turnover = fields.DecimalField(decimal_places=6, max_digits=20, default=0)
    tx_count = fields.BigIntField(default=0)
    last_seen = fields.BigIntField(null=True)

    class Meta:
        maxsize = 2**18
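USDt amounts are stored on-chain as integers with 6 decimal places, which is why the balance and turnover fields are declared with decimal_places=6. Converting a raw on-chain amount to a token amount is plain Python; the helper name below is ours, not part of DipDup:

```python
from decimal import Decimal

# USDt stores amounts as integers scaled by 10**6
USDT_DECIMALS = 6

def to_token_amount(raw: int) -> Decimal:
    # Divide by 10**6 using Decimal to keep exact precision
    return Decimal(raw) / 10**USDT_DECIMALS

print(to_token_amount(1_500_000))  # 1.5
```

Using Decimal instead of float avoids rounding errors when balances are accumulated over millions of transfers.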
Implement handlers
Everything's ready to implement the actual indexing logic. Our task is to index all balance updates. Add some code to the on_transfer handler callback to process matched events:
from decimal import Decimal

from demo_starknet_events import models as models
from demo_starknet_events.types.stark_usdt.starknet_events.transfer import TransferPayload
from dipdup.context import HandlerContext
from dipdup.models.starknet import StarknetEvent
from tortoise.exceptions import DoesNotExist


async def on_transfer(
    ctx: HandlerContext,
    event: StarknetEvent[TransferPayload],
) -> None:
    amount = Decimal(event.payload.value) / (10**6)
    if not amount:
        return

    address_from = f'0x{event.payload.from_:x}'
    await on_balance_update(
        address=address_from,
        balance_update=-amount,
        level=event.data.level,
    )
    address_to = f'0x{event.payload.to:x}'
    await on_balance_update(
        address=address_to,
        balance_update=amount,
        level=event.data.level,
    )


async def on_balance_update(
    address: str,
    balance_update: Decimal,
    level: int,
) -> None:
    try:
        holder = await models.Holder.cached_get(pk=address)
    except DoesNotExist:
        holder = models.Holder(
            address=address,
            balance=0,
            turnover=0,
            tx_count=0,
            last_seen=None,
        )
        holder.cache()
    holder.balance += balance_update
    holder.turnover += abs(balance_update)
    holder.tx_count += 1
    holder.last_seen = level
    await holder.save()
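One detail worth noting: from_ and to in the decoded payload are integers (Starknet felts), so the handler formats them as hex strings before using them as primary keys. The formatting itself is plain Python; the felt value below is just an example:

```python
# A Starknet address as an integer felt (example value: the USDt contract)
felt = 0x68f5c6a61780768455de69077e07e89787839bf8166decfbf92b645209c0fb8

# Same formatting as in the handler: lowercase hex with a 0x prefix
address = f'0x{felt:x}'
print(address)
```

Note that this produces addresses without zero-padding; if you need fixed-width addresses, pad the hex digits accordingly.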
And that's all! We can run the indexer now.
Next steps
Run the indexer in memory:
dipdup run
Store data in a SQLite database:
dipdup -c . -c configs/dipdup.sqlite.yaml run
Or spawn a Compose stack with PostgreSQL and Hasura:
cd deploy
cp .env.default .env
# Edit .env file before running
docker-compose up
DipDup will fetch all the historical data and then switch to realtime updates. You can check the progress in the logs.
If you use SQLite, run this query to check the data:
sqlite3 demo_starknet_events.sqlite 'SELECT * FROM holder LIMIT 10'
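You can also compute the holder statistics this guide set out to collect directly in SQL, for example the most active holders by turnover (table and column names as defined in the Holder model above):

```shell
sqlite3 demo_starknet_events.sqlite \
  'SELECT address, turnover, tx_count FROM holder ORDER BY turnover DESC LIMIT 5'
```

The same query works against the PostgreSQL database if you run the Compose stack instead.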
If you run a Compose stack, open http://127.0.0.1:8080 in your browser to see the Hasura console (the exposed port may differ). You can use it to explore the database and build GraphQL queries.
Congratulations! You've just created your first DipDup indexer. Proceed to the Getting Started section to learn more about DipDup configuration and features.