The escope program indexes an EVM-like chain using go-ethereum clients over RPC.
Build the application using:
- Set up the Go ABI modules via abigen using `./scripts/setup_abi.sh`.
- `make build`, or `make build -B` to force a rebuild, to build `build/escope`.
- `make install` to install the built `escope` into `/usr/local/bin/escope`, bringing it into `$PATH`.
- `make uninstall` removes `/usr/local/bin/escope`.
While building, escope is stamped with build information such as the git commit, CI process, and time of build, so that a running binary can be identified in production.
```
$ escope --version
escope version v0.0.1 build main@5c6c6c5
compiled at 23-05-2022 04:29:22 by travisci
using go version go1.17.8 linux/amd64
```
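A banner like the one above is typically produced by package-level variables overridden at link time. A minimal sketch, assuming escope follows the common `-ldflags -X` pattern (the variable and function names below are illustrative, not necessarily escope's actual ones):

```go
package main

import (
	"fmt"
	"runtime"
)

// Defaults used for local builds; a CI build would presumably override
// them at link time, e.g.:
//   go build -ldflags "-X main.version=v0.0.1 -X main.commit=main@5c6c6c5"
var (
	version = "dev"
	commit  = "unknown"
	date    = "unknown"
	builder = "local"
)

// versionString assembles the banner printed by `escope --version`.
func versionString() string {
	return fmt.Sprintf(
		"escope version %s build %s\ncompiled at %s by %s\nusing go version %s %s/%s",
		version, commit, date, builder,
		runtime.Version(), runtime.GOOS, runtime.GOARCH)
}

func main() {
	fmt.Println(versionString())
}
```

Because the variables are plain strings in package `main`, the linker can stamp them without any code changes per release.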
escope runs in two different modes:
- Realtime: aims to stay on the head of the concerned blockchain and update the backend database in realtime. Run using `escope realtime -c <config.yaml>`.
- Backfill: aims to backfill a range of blocks in the past and update the backend database, rewriting the entries for the concerned blocks. Run using `escope backfill -c <config.yaml>`.
An example config file is available at `test/configs/testcfg.yaml`.
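The authoritative schema is whatever `test/configs/testcfg.yaml` contains; purely as an illustration of the shape such a config might take, a minimal file for an indexer like this could look as follows (every key below is an assumption, not escope's actual schema):

```yaml
# Illustrative only -- consult test/configs/testcfg.yaml for the real schema.
rpc_endpoint: "http://localhost:8545"    # go-ethereum client RPC (assumed key)
db_dsn: "postgresql://proguser:progpass@localhost:5432/devdb"   # assumed key
```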
- Install docker via `paru -S docker` (Arch Linux) and run it using `sudo systemctl start docker`.
- Install psycopg2 using `sudo pacman -S python-psycopg2`.
- Install `pgcli` for interacting with a postgres instance via the CLI using `pip3 install pgcli`.
- Install migrate using `./scripts/setup_migrate.sh`.
- Set up a pgsql docker container using `./scripts/start_db.sh`. This will generate a new `pgdata/.pgdata_XXXXX` directory (XXXXX being random for each invocation) which will be used by postgresql. Every time this script is invoked, the DB is launched anew with no data.
- To start a docker container with a previous data directory, invoke e.g. `./scripts/start_db.sh a6df1` if the concerned data directory is `pgdata/.pgdata_a6df1`. In this mode, DB migrations are not run.
You should now have two users:
- devuser: accessible via `pgcli postgresql://devuser:devpass@localhost:5432/devdb` for DB superuser access.
- proguser: accessible via `pgcli postgresql://proguser:progpass@localhost:5432/devdb` for insert-only access to the `blocks` and `pool_actions_geth` tables.
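For programmatic access with the same credentials, a connection URL matching the pgcli ones above can be assembled with the standard library. This is only a sketch of DSN construction; how escope itself builds its connection string may differ:

```go
package main

import (
	"fmt"
	"net/url"
)

// dsn builds a postgres connection URL of the same shape as the pgcli
// URLs above. The credentials are the dev defaults from start_db.sh.
func dsn(user, pass, host, db string) string {
	u := url.URL{
		Scheme: "postgresql",
		User:   url.UserPassword(user, pass), // percent-escapes special chars
		Host:   host,
		Path:   "/" + db,
	}
	return u.String()
}

func main() {
	fmt.Println(dsn("proguser", "progpass", "localhost:5432", "devdb"))
	// postgresql://proguser:progpass@localhost:5432/devdb
}
```

Using `net/url` rather than string concatenation keeps passwords with special characters correctly escaped.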
- Install docker via `paru -S docker` (Arch Linux) and run it using `sudo systemctl start docker`.
- Set up a RabbitMQ container using `./scripts/start_rmq.sh`. This will create a fresh cluster each time it is invoked.

A single user has all access to RabbitMQ in dev mode. Access the management console in a browser using: http://devuser:devpass@localhost:15672/#/queues
To provide parity with existing build steps, scripts have been added to the Makefile for building the docker image. Use `make docker` to build the project or `make dockerbuildx` to execute a multiarch build (see Multiarchitecture Builds for details).
Currently the image expects the runner to mount the files it needs for operation: `config.yaml`, `chainlink_oracle_dumpfile.csv`, and `dex_dumpfile.csv`. Additionally, the application stores stateful data in the `lb.badger.db` folder, which is managed by the local backend. Currently this is also mounted so that state is maintained between application runs; a more architecturally stable solution for this should be explored in the future. For instance, if your current working directory is the project root and these files are also located at the project root, you could run the container like this:
```
docker run \
    --name=geth-indexer \
    --mount type=bind,source="$(pwd)"/config.yaml,target=/geth-indexer/config.yaml \
    --mount type=bind,source="$(pwd)"/chainlink_oracle_dumpfile.csv,target=/geth-indexer/chainlink_oracle_dumpfile.csv \
    --mount type=bind,source="$(pwd)"/dex_dumpfile.csv,target=/geth-indexer/dex_dumpfile.csv \
    --mount type=bind,source="$(pwd)"/lb.badger.db,target=/geth-indexer/lb.badger.db \
    geth-indexer
```
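The mount targets above imply the image uses `/geth-indexer` as its working directory. As a hedged sketch only, the corresponding Dockerfile might be structured roughly like this (the base images, stage layout, and binary path are assumptions, not the project's actual Dockerfile):

```dockerfile
# Illustrative multi-stage build; the real Dockerfile may differ.
FROM golang:1.17 AS builder
WORKDIR /src
COPY . .
RUN make build                    # produces build/escope per the build steps above

FROM debian:bullseye-slim
WORKDIR /geth-indexer             # matches the bind-mount targets above
COPY --from=builder /src/build/escope /usr/local/bin/escope
ENTRYPOINT ["escope"]
```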
Building for multiple architectures currently requires the `docker buildx` CLI plugin. See the Docker docs, Install Docker Buildx, for details on setup.
Building for multiple architectures requires that the created artifacts be immediately pushed to a docker remote, which means multiarchitecture builds cannot be used for local testing. Instead, for testing against your local docker server, use the default single-architecture build; see this Github issue for discussion of the topic. Multiarchitecture builds should therefore be run in a pipeline where the resulting images can be automatically pushed.
