Warning: the names of some of these environment variables will be changed at some point in the near future.
This page lists the environment variables used by `graph-node` and the effect
they have. Some environment variables can be used instead of command line
flags; those are not listed here. Consult `graph-node --help` for details on
those.
- `ETHEREUM_POLLING_INTERVAL`: how often to poll Ethereum for new blocks (in ms, defaults to 500ms).
- `ETHEREUM_RPC_MAX_PARALLEL_REQUESTS`: maximum number of concurrent HTTP requests to an Ethereum RPC endpoint (defaults to 64).
- `GRAPH_ETHEREUM_TARGET_TRIGGERS_PER_BLOCK_RANGE`: the ideal number of triggers to be processed in a batch. If this is too small, it may cause too many requests to the Ethereum node; if it is too large, it may cause unreasonably expensive calls to the Ethereum node and excessive memory usage (defaults to 100).
- `ETHEREUM_TRACE_STREAM_STEP_SIZE`: `graph-node` queries traces for a given block range when a subgraph defines call handlers or block handlers with a call filter. The value of this variable controls the number of blocks to scan in a single RPC request for traces from the Ethereum node.
- `DISABLE_BLOCK_INGESTOR`: set to `true` to disable block ingestion. Leave unset or set to `false` to leave block ingestion enabled.
- `ETHEREUM_BLOCK_BATCH_SIZE`: number of Ethereum blocks to request in parallel (defaults to 50).
- `GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE`: maximum number of blocks to scan for triggers in each request (defaults to 100000).
- `GRAPH_ETHEREUM_MAX_EVENT_ONLY_RANGE`: maximum range size for `eth.getLogs` requests that don't filter on contract address, only on event signature.
- `GRAPH_ETHEREUM_JSON_RPC_TIMEOUT`: timeout for Ethereum JSON-RPC requests.
- `GRAPH_ETHEREUM_REQUEST_RETRIES`: number of times to retry JSON-RPC requests made against Ethereum. This is used for requests that will not fail the subgraph if the limit is reached, but will simply restart the syncing step, so it can be low. This limit guards against scenarios such as requesting a block hash that has been reorged (defaults to 10).
- `GRAPH_ETHEREUM_CLEANUP_BLOCKS`: set to `true` to clean up unneeded blocks from the cache in the database. When this is `false` or unset (the default), blocks will never be removed from the block cache. This setting should only be used during development to reduce the size of the database. In production environments, it will cause repeated downloads of the same blocks and therefore slow the system down.
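As an illustration, the variables above are ordinary environment variables that can be exported before starting `graph-node`. The values below are hypothetical tuning choices for a rate-limited RPC provider, not recommendations:

```shell
# Hypothetical tuning for a rate-limited Ethereum RPC provider.
# Poll for new heads every 2 seconds instead of the 500ms default.
export ETHEREUM_POLLING_INTERVAL=2000
# Cap concurrent HTTP requests to the RPC endpoint (default 64).
export ETHEREUM_RPC_MAX_PARALLEL_REQUESTS=16
# Scan at most 2000 blocks per trigger-scanning request (default 100000).
export GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE=2000
```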
- `GRAPH_MAPPING_HANDLER_TIMEOUT`: amount of time a mapping handler is allowed to take (in seconds, default is unlimited).
- `GRAPH_IPFS_SUBGRAPH_LOADING_TIMEOUT`: timeout for IPFS requests made to load subgraph files from IPFS (in seconds, default is 60).
- `GRAPH_IPFS_TIMEOUT`: timeout for IPFS requests from mappings using `ipfs.cat` or `ipfs.map` (in seconds, default is 60).
- `GRAPH_MAX_IPFS_FILE_BYTES`: maximum size for a file that can be retrieved with `ipfs.cat` (in bytes, default is unlimited).
- `GRAPH_MAX_IPFS_MAP_FILE_SIZE`: maximum size of files that can be processed with `ipfs.map`. When a file is processed through `ipfs.map`, the entities generated from it are kept in memory until the entire file is done processing. This setting therefore limits how much memory a call to `ipfs.map` may use (in bytes, defaults to 256MB).
- `GRAPH_MAX_IPFS_CACHE_SIZE`: maximum number of files cached in the `ipfs.cat` cache (defaults to 50).
- `GRAPH_MAX_IPFS_CACHE_FILE_SIZE`: maximum size of files that are cached in the `ipfs.cat` cache (defaults to 1MiB).
- `GRAPH_ENTITY_CACHE_SIZE`: size of the entity cache, in kilobytes (defaults to 10000, i.e. 10MB).
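For example, a node whose mappings fetch sizable files from IPFS might raise the relevant limits before launch; the numbers here are illustrative only:

```shell
# Illustrative limits for mappings that fetch sizable IPFS files.
# Abort any single mapping handler after 10 minutes (default: unlimited).
export GRAPH_MAPPING_HANDLER_TIMEOUT=600
# Allow ipfs.cat / ipfs.map requests up to 120 seconds (default 60).
export GRAPH_IPFS_TIMEOUT=120
# Refuse ipfs.cat files larger than 10 MB (value is in bytes).
export GRAPH_MAX_IPFS_FILE_BYTES=10485760
# Grow the entity cache to 50 MB (value is in kilobytes, default 10000).
export GRAPH_ENTITY_CACHE_SIZE=50000
```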
- `GRAPH_GRAPHQL_QUERY_TIMEOUT`: maximum execution time for a GraphQL query, in seconds. Default is unlimited.
- `SUBSCRIPTION_THROTTLE_INTERVAL`: while a subgraph is syncing, subscriptions to that subgraph get updated at most this often, in ms. Default is 1000ms.
- `GRAPH_GRAPHQL_MAX_COMPLEXITY`: maximum complexity for a GraphQL query. Default is unlimited. Typical introspection queries have a complexity of just over 1 million, so setting a value below that may interfere with introspection done by GraphQL clients.
- `GRAPH_GRAPHQL_MAX_DEPTH`: maximum depth of a GraphQL query. Default (and maximum) is 255.
- `GRAPH_GRAPHQL_MAX_FIRST`: maximum value that can be used for the `first` argument in GraphQL queries. If not provided, `first` defaults to 100. The default value for `GRAPH_GRAPHQL_MAX_FIRST` is 1000.
- `GRAPH_GRAPHQL_MAX_OPERATIONS_PER_CONNECTION`: maximum number of GraphQL operations per WebSocket connection. Any operation created after the limit is reached will return an error to the client. Default: unlimited.
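A publicly exposed query node might combine several of these limits as guardrails. This sketch uses illustrative values; note the complexity limit is kept above the roughly 1 million that introspection queries need:

```shell
# Illustrative guardrails for a publicly exposed query node.
# Kill queries that run longer than 30 seconds (default: unlimited).
export GRAPH_GRAPHQL_QUERY_TIMEOUT=30
# Keep complexity above ~1 million so client introspection still works.
export GRAPH_GRAPHQL_MAX_COMPLEXITY=2000000
# Limit query nesting depth (default and maximum is 255).
export GRAPH_GRAPHQL_MAX_DEPTH=32
# Allow `first` values up to 500 (default limit is 1000).
export GRAPH_GRAPHQL_MAX_FIRST=500
```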
- `GRAPH_NODE_ID`: sets the node ID, allowing multiple Graph Nodes to run in parallel and deployments to target specific nodes; each ID must be unique among the set of nodes.
- `GRAPH_LOG`: controls log levels, using the same syntax as `RUST_LOG`.
- `THEGRAPH_STORE_POSTGRES_DIESEL_URL`: Postgres instance used when running tests. Set to `postgresql://<DBUSER>:<DBPASSWORD>@<DBHOST>:<DBPORT>/<DBNAME>`.
- `GRAPH_KILL_IF_UNRESPONSIVE`: if set, the process will be killed if it becomes unresponsive.
- `GRAPH_LOG_QUERY_TIMING`: controls whether the process logs details of processing GraphQL and SQL queries. The value is a comma-separated list of `sql` and `gql`. If `gql` is present in the list, each GraphQL query made against the node is logged at level `info`. The log message contains the subgraph that was queried, the query, its variables, the amount of time the query took, and a unique `query_id`. If `sql` is present, the SQL queries that a GraphQL query causes are logged. The log message contains the subgraph, the query, its bind variables, the amount of time it took to execute the query, the number of entities found by the query, and the `query_id` of the GraphQL query that caused the SQL query. These SQL queries are marked with `component: GraphQlRunner`. Additional SQL queries get logged when `sql` is given: queries caused by mappings when processing blocks for a subgraph, and queries caused by subscriptions. Defaults to no logging.
- `STORE_CONNECTION_POOL_SIZE`: how many simultaneous connections to allow to the store. Due to implementation details, this value may not be strictly adhered to. Defaults to 10.
- `GRAPH_LOG_POI_EVENTS`: logs Proof of Indexing events deterministically. This may be useful for debugging.
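When debugging slow queries, the logging variables above can be combined; this is an illustrative setup, not a recommended default:

```shell
# Illustrative logging setup for debugging slow queries.
# Log each GraphQL query and the SQL statements it generates.
export GRAPH_LOG_QUERY_TIMING=gql,sql
# RUST_LOG-style filter controlling overall log verbosity.
export GRAPH_LOG=info
```

With both `gql` and `sql` enabled, the `query_id` field described above lets you correlate a logged GraphQL query with the SQL statements it produced.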