HyperIndex Complete Documentation
This document contains all HyperIndex documentation consolidated into a single file for LLM consumption.
| Topic | Details |
|---|---|
| What it is | A blazing-fast, developer-friendly multichain blockchain indexer that transforms on-chain events into structured, queryable databases with GraphQL APIs |
| Data engine | Powered by HyperSync - up to 2000x faster than traditional RPC endpoints |
| Performance | Ranked #1 fastest indexer in independent Sentio benchmarks (April 2025) - up to 6x faster than the nearest competitor, 63x faster than TheGraph |
| Supported chains | 70+ EVM chains and Fuel, with new networks added regularly; all EVM-compatible chains supported via RPC |
| Languages | TypeScript, JavaScript, ReScript |
| Key files | config.yaml (indexer settings), schema.graphql (data schema), src/EventHandlers.* (event logic) |
| Prerequisites | Node.js v22+, pnpm v8+, Docker Desktop (local dev only) |
| Deployment | Hosted service (managed, no API token needed) or self-hosted |
| API token | Required for local dev and self-hosted deployments from 3 November 2025 via ENVIO_API_TOKEN env variable |
| Query interface | GraphQL API auto-generated from your schema |
| Multichain | Native multichain indexing with unordered_multichain_mode support |
| Wildcard indexing | Index by event signature rather than contract address |
| Migration | Straightforward migration path from TheGraph subgraphs |
| Get started | pnpx envio init |
| Support | Discord · GitHub |
Overview
File: overview.md
HyperIndex is a blazing-fast, developer-friendly multichain indexer, optimized for both local development and reliable hosted deployment. It empowers developers to effortlessly build robust backends for blockchain applications.
HyperIndex is Envio's full-featured blockchain indexing framework that transforms on-chain events into structured, queryable databases with GraphQL APIs.
HyperSync is the high-performance data engine that powers HyperIndex. It provides the raw blockchain data access layer, delivering up to 2000x faster performance than traditional RPC endpoints.
While HyperIndex gives you a complete indexing solution with schema management and event handling, HyperSync can be used directly for custom data pipelines and specialized applications.
Feature Roadmap
Upcoming features on our development roadmap:
- Isolated Multichain Mode
- Polished Solana Support
- Indexing 1,000,000+ events per second
Quick Links
Getting Started
File: getting-started.md
Learn how to create and run a blockchain indexer with Envio's HyperIndex, from initialization to local testing and deployment.
Indexer Initialization
Prerequisites
- Node.js (v22 or newer recommended)
- pnpm (recommended but not required)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is required only if you plan to run your indexer locally. You can skip installing Docker if you'll only be using Envio Cloud.
Additionally for Windows users:
- WSL (Windows Subsystem for Linux)
Essential Files
After initialization, your indexer will contain three main files that are essential for its operation:
- config.yaml: defines indexing settings such as blockchain endpoints, events to index, and advanced behaviors.
- schema.graphql: defines the GraphQL schema for indexed data and its structure for efficient querying.
- src/EventHandlers.*: contains the logic for processing blockchain events.
Note: The file extension for event handlers (*.ts, *.js, or *.res) depends on the programming language chosen (TypeScript, JavaScript, or ReScript).
You can customize your indexer by modifying these files to meet your specific requirements.
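As a rough illustration of how these pieces fit together, here is a minimal config.yaml sketch (the indexer name, contract name, address, and event signature are placeholders, not from a real project):

```yaml
# Placeholder values for illustration only
name: my-indexer
networks:
  - id: 1 # Ethereum mainnet
    start_block: 0
    contracts:
      - name: MyToken
        address: 0x0000000000000000000000000000000000000000
        handler: src/EventHandlers.ts
        events:
          - event: Transfer(address indexed from, address indexed to, uint256 value)
```

schema.graphql and src/EventHandlers.* then define the entities and the handler logic for each event listed here.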
For a complete walkthrough of the process, refer to the Quickstart guide.
Quickstart with AI
File: quickstart-with-ai.md
Build an Envio HyperIndex indexer end-to-end with an AI coding assistant.
Most developers now reach for an AI coding assistant before they open a file. This guide walks through an AI-centric flow for creating, developing, and deploying a HyperIndex indexer. It is semi-generic, so any capable AI coding assistant (Cursor, Windsurf, Copilot Agent, Continue, etc.) will work. That said, we've seen the best results with Claude Code and recommend starting there.
If you'd rather drive the CLI yourself, see Getting Started and the Quickstart.
Step 1. Give the Assistant Access to the Envio Docs (MCP)
Envio ships a Model Context Protocol server so your AI assistant can search and read Envio documentation directly instead of guessing from stale training data.
Claude Code:
claude mcp add --transport http envio-docs https://docs.envio.dev/mcp
For Cursor / VS Code / other MCP clients, add the endpoint to your MCP config:
{
  "mcpServers": {
    "envio-docs": {
      "url": "https://docs.envio.dev/mcp"
    }
  }
}
Full setup details in the MCP Server guide. If your assistant doesn't support MCP, you can still point it at the LLM-friendly docs bundle.
Step 3. Develop with the Built-in Claude Skills
HyperIndex v3 ships with Claude skills that teach AI assistants how HyperIndex works: config, schema, handlers, loaders, dynamic contracts, testing, and migration checklists. When an assistant is attached to a v3 project, it can read these skills directly instead of inventing patterns.
A productive loop with skills + the docs MCP looks like:
- Describe the behavior you want in plain English.
- Let the assistant edit config.yaml, schema.graphql, and src/EventHandlers.*.
- Ask it to run pnpm envio codegen and pnpm dev to validate.
- Iterate on failures together.
The three files you'll spend most of your time in:
- config.yaml: networks, contracts, events
- schema.graphql: entities and relationships
- src/EventHandlers.*: per-event logic
Step 5. Deploy Programmatically with envio-cloud
Once your indexer runs locally, the envio-cloud CLI lets an assistant (or a CI job) deploy and manage the hosted indexer without opening the dashboard.
npm install -g envio-cloud
envio-cloud login --token $ENVIO_GITHUB_TOKEN
envio-cloud indexer add --name my-indexer --repo my-repo
envio-cloud deployment status my-indexer --watch-till-synced
envio-cloud deployment logs my-indexer --follow
Every command supports -o json, which makes it easy for assistants and scripts to parse results. Full reference: Envio Cloud CLI.
Contract Import
File: contract-import.md
The Quickstart enables you to instantly autogenerate a powerful blockchain indexer and start querying blockchain data in minutes. This is the fastest and easiest way to begin using HyperIndex.
Example: Autogenerate an indexer for the EigenLayer contract and index its entire history in less than 5 minutes by simply running pnpx envio@3.0.0-rc.0 init and providing the contract address from Etherscan.
Video Tutorials
Contract Import Methods
There are two convenient methods to import your contract:
- Block Explorer (verified contracts on supported explorers like Etherscan and Blockscout)
- Local ABI (custom or unverified contracts)
1. Block Explorer Import
This method uses a verified contract's address from a supported blockchain explorer (Etherscan, Routescan, etc.) to automatically fetch the ABI.
Steps:
a. Select the blockchain
? Which blockchain would you like to import a contract from?
> ethereum-mainnet
goerli
optimism
base
bsc
gnosis
polygon
[↑↓ to move, enter to select]
HyperIndex supports all EVM-compatible chains. If your desired chain is not listed, you can import via the local ABI method or manually adjust the config.yaml file after initialization.
b. Enter the contract address
? What is the address of the contract?
[Use proxy address if ABI is for a proxy implementation]
If using a proxy contract, always specify the proxy address, not the implementation address.
c. Select events to index
? Which events would you like to index?
> [x] ClaimRewards(address indexed from, address indexed reward, uint256 amount)
[x] Deposit(address indexed from, uint256 indexed tokenId, uint256 amount)
[x] NotifyReward(address indexed from, address indexed reward, uint256 indexed epoch, uint256 amount)
[x] Withdraw(address indexed from, uint256 indexed tokenId, uint256 amount)
[space to select, → to select all, ← to deselect all]
d. Finish or add more contracts
You'll be prompted to continue adding more contracts or to complete the setup:
? Would you like to add another contract?
> I'm finished
Add a new address for same contract on same network
Add a new network for same contract
Add a new contract (with a different ABI)
Generated Files & Configuration
The Quickstart automatically generates key files:
1. config.yaml
Automatically configured parameters include:
- Network ID
- Start Block
- Contract Name
- Contract Address
- Event Signatures
By default, all selected events are included, but you can manually adjust the file if needed. See the detailed guide on config.yaml.
2. GraphQL Schema
- Entities are automatically generated for each selected event.
- Fields match the event parameters emitted.
See more details in the schema file guide.
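For instance, a selected Transfer event might produce an entity along these lines (illustrative; the exact generated names can differ):

```graphql
type MyToken_Transfer {
  id: ID!
  from: String!
  to: String!
  value: BigInt!
}
```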
3. Event Handlers
- Handlers are autogenerated for each event.
- Handlers create event-specific entities.
Learn more in the event handlers guide.
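Conceptually, each autogenerated handler derives a unique id from the event and writes an entity through the handler context. The following self-contained sketch imitates that flow with a stubbed context (the real context comes from Envio's generated code; all names here are illustrative):

```typescript
// Stub standing in for the generated handler context (illustration only).
type TransferEntity = { id: string; from: string; to: string; value: bigint };
const store = new Map<string, TransferEntity>();
const context = {
  MyToken_Transfer: { set: (e: TransferEntity) => store.set(e.id, e) },
};

// Shape of an autogenerated handler body: build an entity keyed by a unique
// event id (chainId_blockNumber_logIndex is one common scheme) and store it.
function handleTransfer(event: {
  chainId: number;
  block: { number: number };
  logIndex: number;
  params: { from: string; to: string; value: bigint };
}): void {
  context.MyToken_Transfer.set({
    id: `${event.chainId}_${event.block.number}_${event.logIndex}`,
    from: event.params.from,
    to: event.params.to,
    value: event.params.value,
  });
}

handleTransfer({
  chainId: 1,
  block: { number: 21689089 },
  logIndex: 7,
  params: { from: "0xaaa", to: "0xbbb", value: 1n },
});
console.log(store.get("1_21689089_7")?.from); // "0xaaa"
```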
HyperIndex Performance Benchmarks
File: benchmarks.md
Overview
HyperIndex delivers industry-leading performance for blockchain data indexing. Independent benchmarks have consistently shown Envio's HyperIndex to be the fastest blockchain indexing solution available, with dramatic performance advantages over competitive offerings.
Recent Independent Benchmarks
The most comprehensive and up-to-date benchmarks were conducted by Sentio in April 2025 and are available in the sentio-benchmark repository. These benchmarks compare Envio's HyperIndex against other popular blockchain indexers across multiple real-world scenarios:
Key Performance Highlights
| Case | Description | Envio | Nearest Competitor | The Graph | Ponder |
|---|---|---|---|---|---|
| LBTC Token Transfers | Event handling, No RPC calls, Write-only | 3m | 8m - 2.6x slower (Sentio) | 3h9m - 63x slower | 1h40m - 33x slower |
| LBTC Token with RPC calls | Event handling, RPC calls, Read-after-write | 1m | 6m - 6x slower (Sentio) | 1h3m - 63x slower | 45m - 45x slower |
| Ethereum Block Processing | 100K blocks with Metadata extraction | 7.9s | 1m - 7.5x slower (Subsquid) | 10m - 75x slower | 33m - 250x slower |
| Ethereum Transaction Gas Usage | Transaction handling, Gas calculations | 1m 26s | 7m - 4.8x slower (Subsquid) | N/A | 33m - 23x slower |
| Uniswap V2 Swap Trace Analysis | Transaction trace handling, Swap decoding | 41s | 2m - 3x slower (Subsquid) | 8m - 11x slower | N/A |
| Uniswap V2 Factory | Event handling, Pair and swap analysis | 8s | 2m - 15x slower (Subsquid) | 19m - 142x slower | 21m - 157x slower |
The independent benchmark results demonstrate that HyperIndex consistently outperforms all competitors across every tested scenario. This includes the most realistic real-world indexing scenario, LBTC Token with RPC calls, where HyperIndex was up to 6x faster than the nearest competitor and over 63x faster than The Graph.
Historical Benchmarking Results
Our internal benchmarking from October 2023 showed similar performance advantages. When indexing the Uniswap V3 ETH-USDC pool contract on Ethereum Mainnet, HyperIndex achieved:
- 2.1x faster indexing than the nearest competitor
- Over 100x faster indexing than some popular alternatives
You can read the full details in our Indexer Benchmarking Results blog post.
Verify For Yourself
We encourage developers to run their own benchmarks. You can also use the templates provided in the Open Indexer Benchmark repository.
How to Migrate Using AI
File: migrate-with-ai.md
HyperIndex v3 includes built-in Claude skills that guide AI programming assistants through the full subgraph migration process, from understanding your existing logic to converting handlers and running quality checks. This is the recommended way to migrate complex subgraphs.
Prerequisites
- An AI programming assistant (Cursor or Claude Code)
- pnpm installed
- HyperIndex v3 (Claude skills are available in v3)
Step 1: Initialize a Boilerplate HyperIndex Indexer
Create a new HyperIndex indexer that indexes the same contracts and events as the subgraph you are migrating. Run the following in a new directory:
pnpx envio@3.0.0-rc.0 init
Follow the CLI prompts to set up the boilerplate indexer with the same contracts and events as your existing subgraph.
The Claude skills are only available in HyperIndex v3. See the v3 migration guide for current install guidance.
Step 2: Set Up a Monorepo Structure
Create a parent directory that contains both your new HyperIndex boilerplate indexer and the existing subgraph repo you want to migrate:
my-migration/
├── my-subgraph/ # Your existing subgraph repo
└── my-hyperindex-indexer/ # The boilerplate HyperIndex indexer from Step 1
This structure gives your assistant visibility into both projects so it can read and understand your subgraph logic while writing the HyperIndex implementation.
Step 3: Run Your AI Programming Assistant
Open the monorepo root with your AI programming assistant running there (for example, run Claude Code in the monorepo root or open the monorepo in Cursor). Put your assistant in plan mode first, then provide a prompt like the following (replace the repo names with your own):
This monorepo contains two indexers:
- `my-subgraph/` - an existing Graph Protocol subgraph indexer (source of truth)
- `my-hyperindex-indexer/` - a HyperIndex boilerplate scaffolded from the same
  contracts (migration target)
Migrate the subgraph indexer to a fully working HyperIndex indexer.
Follow these phases in order:
Phase 1 - Plan
- Produce a migration plan mapping each subgraph component to its HyperIndex
equivalent.
- Flag anything that has no direct equivalent and propose a workaround.
- Do NOT write code yet.
Phase 2 - Implement
- Migrate the entire subgraph following the plan and skill guides.
- Process one handler file at a time.
- After each file, run `pnpm envio codegen` to validate, and verify it against
the migration checklist before moving on.
Phase 3 - Verify
- Walk through every checklist item from the migration skill and confirm it
passes.
- Run any available build or type check commands.
- List any items you could not complete and why.
- Only modify files in `my-hyperindex-indexer/`. Do not change the subgraph repo.
- Preserve all entity fields and event mappings from the subgraph.
- Do not skip or summarize checklist items; execute every one.
- If you are uncertain about a migration decision, pause and ask me.
- After migration, run `pnpm dev` to verify the indexer runs correctly
- Use the Indexer Migration Validator to compare outputs between your subgraph and the new HyperIndex indexer
Manual Migration
For a detailed manual migration guide covering the step-by-step conversion of subgraph.yaml, schema, and event handlers, see Migrate from The Graph.
Migrate from The Graph to Envio
File: migration-guide.md
Please reach out to our team on Discord for personalized migration assistance.
Introduction
Migrating your existing subgraph to Envio's HyperIndex is designed to be a developer-friendly process. HyperIndex draws strong inspiration from The Graph's subgraph architecture, which makes the migration simple, especially with the help of coding assistants like Cursor and AI tools (don't forget to use our AI-friendly docs).
The process is simple but requires a good understanding of the underlying concepts. If you are new to HyperIndex, we recommend starting with the Getting Started guide.
If you want an assistant-led workflow, see How to Migrate Using AI for a guided process that works in both Cursor and Claude Code.
Why Migrate to HyperIndex?
- Superior Performance: Up to 100x faster indexing speeds
- Lower Costs: Reduced infrastructure requirements and operational expenses
- Better Developer Experience: Simplified configuration and deployment
- Advanced Features: Access to capabilities not available in other indexing solutions
- Seamless Integration: Easy integration with existing GraphQL APIs and applications
Subgraph to HyperIndex Migration Overview
Migration consists of three major steps:
- Subgraph.yaml migration
- Schema migration - a near copy-paste
- Event handler migration
At any point in the migration, run
pnpm envio codegen
to verify that the config.yaml and schema.graphql files are valid, or run
pnpm dev
to verify that the indexer runs and indexes correctly.
0.5 Use pnpx envio@3.0.0-rc.0 init to generate a boilerplate
As a first step, we recommend using pnpx envio@3.0.0-rc.0 init to generate a boilerplate for your project. This will handle the creation of the config.yaml file and a basic schema.graphql file with generic handler functions.
1. subgraph.yaml → config.yaml
pnpx envio@3.0.0-rc.0 init will generate this for you. It's a simple configuration file conversion: effectively, you specify which contracts to index, which networks to index them on (multiple networks can be specified with Envio), and which events from those contracts to index.
Take the following conversion as an example, where a subgraph.yaml file is converted to config.yaml; the comparison below is for the Uniswap v4 PositionManager subgraph.
The Graph - subgraph.yaml
specVersion: 0.0.4
description: Uniswap is a decentralized protocol for automated token exchange on Ethereum.
repository: https://github.com/Uniswap/v4-subgraph
schema:
  file: ./schema.graphql
features:
  - nonFatalErrors
  - grafting
dataSources:
  - kind: ethereum/contract
    name: PositionManager
    network: mainnet
    source:
      abi: PositionManager
      address: "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e"
      startBlock: 21689089
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mappings/index.ts
      entities:
        - Position
      abis:
        - name: PositionManager
          file: ./abis/PositionManager.json
      eventHandlers:
        - event: Subscription(indexed uint256,indexed address)
          handler: handleSubscription
        - event: Unsubscription(indexed uint256,indexed address)
          handler: handleUnsubscription
        - event: Transfer(indexed address,indexed address,indexed uint256)
          handler: handleTransfer
HyperIndex - config.yaml
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: uni-v4-indexer
networks:
  - id: 1
    start_block: 21689089
    contracts:
      - name: PositionManager
        address: 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
        handler: src/EventHandlers.ts
        events:
          - event: Subscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Unsubscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Transfer(address indexed from, address indexed to, uint256 indexed id)
For any potential hurdles, please refer to the Configuration File documentation.
2. Schema migration
Copy and paste the schema from the subgraph into the HyperIndex schema.graphql file.
Small differences to note:
- You can remove the @entity directive
- Enums
- BigDecimals
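These tweaks look like this in practice (illustrative entity; enum and BigDecimal usage carries over unchanged):

```graphql
# Subgraph schema
type Position @entity {
  id: ID!
  status: PositionStatus!
  liquidity: BigDecimal!
}

# HyperIndex schema: identical, minus the @entity directive
type Position {
  id: ID!
  status: PositionStatus!
  liquidity: BigDecimal!
}

enum PositionStatus {
  OPEN
  CLOSED
}
```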
3. Event handler migration
This consists of two parts:
- Converting assemblyscript to typescript
- Converting the subgraph syntax to HyperIndex syntax
3.1 Converting AssemblyScript to TypeScript
Subgraphs write event handlers in AssemblyScript, while HyperIndex handlers are typically written in TypeScript. Since AssemblyScript is a subset of TypeScript, it's usually simple to copy and paste the code, especially for pure functions.
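One concrete difference worth knowing: subgraph AssemblyScript works with a BigInt class and method calls, while HyperIndex TypeScript handlers typically receive native bigint values (verify against your generated types; this sketch assumes that mapping):

```typescript
// AssemblyScript (subgraph): const total = a.plus(b), built via BigInt.fromI32(...)
// TypeScript (HyperIndex): native bigint arithmetic works directly.
const a: bigint = 1000000000000000000n; // e.g. a token amount, 1e18
const b: bigint = 500000000000000000n;
const total = a + b;
console.log(total.toString()); // "1500000000000000000"
```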
3.2 Converting the subgraph syntax to HyperIndex syntax
There are some subtle differences in the syntax of the subgraph and HyperIndex. Including but not limited to the following:
- Replace Entity.save() with context.Entity.set()
- Convert to async handler functions
- Use await for loading entities: const x = await context.Entity.get(id)
- Use dynamic contract registration to register contracts
The below code snippets can give you a basic idea of what this difference might look like.
The Graph - eventHandler.ts
export function handleSubscription(event: SubscriptionEvent): void {
  const subscription = new Subscribe(
    event.transaction.hash.toHexString() + "-" + event.logIndex.toString(),
  );
  subscription.tokenId = event.params.tokenId;
  subscription.address = event.params.subscriber.toHexString();
  subscription.logIndex = event.logIndex;
  subscription.blockNumber = event.block.number;
  subscription.position = event.params.tokenId;
  subscription.save();
}
HyperIndex - eventHandler.ts
PoolManager.Subscription.handler(async ({ event, context }) => {
  const entity = {
    id: `${event.transaction.hash}-${event.logIndex}`,
    tokenId: event.params.tokenId,
    address: event.params.subscriber,
    blockNumber: event.block.number,
    logIndex: event.logIndex,
    position: event.params.tokenId,
  };
  context.Subscription.set(entity);
});
Extra tips
HyperIndex is a powerful tool that can be used to index any contract. Some of its features go beyond what subgraphs offer, so in some cases you may want to optimize your migration further to take advantage of them. Here are some useful tips:
- Use field_selection to opt into optional transaction and block fields (e.g. hash, status, gasUsed) that are not included by default. See Transaction receipts for a migration-focused example and the field selection docs for the full list.
- Use the unordered_multichain_mode option to enable unordered multichain mode; this covers the most common need for multichain indexing, but it comes with tradeoffs worth understanding. Doc here: unordered multichain mode
- Use wildcard indexing to index by event signature rather than by contract address.
- HyperIndex uses the standard GraphQL query language, whereas TheGraph uses a custom GraphQL syntax. You can read about the differences and how to convert queries in our Query Conversion Guide. We also provide a query converter tool for backwards compatibility with existing TheGraph queries.
- Loaders are a powerful feature to optimize historical sync performance. You can read more about them here.
- HyperIndex is flexible enough to index offchain data too, or to send messages to a queue for fetching external data; you can further optimize such fetching with the Effect API.
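For example, unordered multichain mode is a single top-level flag in config.yaml (sketch; the chain ids are placeholders, and see the linked doc for the tradeoffs):

```yaml
unordered_multichain_mode: true
networks:
  - id: 1 # Ethereum mainnet
    # ...
  - id: 10 # Optimism
    # ...
```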
Transaction receipts
In The Graph, you opt into receipt data per-handler with receipt: true in subgraph.yaml:
eventHandlers:
  - event: Transfer(indexed address,indexed address,indexed uint256)
    handler: handleTransfer
    receipt: true
This makes event.receipt available inside the handler with fields like status, gasUsed, and logs.
In HyperIndex, receipt-level fields are part of transaction_fields and must be requested via field_selection in config.yaml. There is no separate receipt object; the fields are accessed directly on event.transaction:
field_selection:
  transaction_fields:
    - hash
    - status # 1 = success, 0 = reverted
    - gasUsed
    - cumulativeGasUsed
    - contractAddress # non-null for contract-creation transactions
    - logsBloom
MyContract.Transfer.handler(async ({ event, context }) => {
  const { status, gasUsed } = event.transaction;
  // ...
});
See the full list of available transaction_fields in the Configuration File docs.
Validating Your Migration
After completing your migration, it's important to verify that your HyperIndex indexer produces the same data as your original subgraph. Use the Indexer Migration Validator CLI tool to compare results between both endpoints and identify any discrepancies. The tool automatically generates entity configs from your GraphQL schema and provides detailed field-level analysis of differences.
Share Your Learnings
If you discover helpful tips during your migration, we'd love contributions! Open a PR to this guide and help future developers.
Getting Help
Join Our Discord: The fastest way to get personalized help is through our Discord community.
Migrate from Ponder to HyperIndex
File: migrate-from-ponder.md
Need help? Reach out on Discord for personalized migration assistance.
Migrating from Ponder to HyperIndex is straightforward: both frameworks use TypeScript, index EVM events, and expose a GraphQL API. The key differences are the config format, schema syntax, and entity operation API.
If you are new to HyperIndex, start with the Getting Started guide first.
For an assistant-led workflow, see How to Migrate Using AI, which includes a shared process for Cursor and Claude Code.
Why Migrate to HyperIndex?
- Up to 158x faster historical sync via HyperSync
- Multichain by default: index any number of chains in one config
- Same language: your TypeScript logic transfers directly
Migration Overview
Migration has three steps:
- ponder.config.ts → config.yaml
- ponder.schema.ts → schema.graphql
- Event handlers → adapt syntax and entity operations
At any point, run:
pnpm envio codegen # validate config + schema, regenerate types
pnpm dev # run the indexer locally
Step 1: ponder.config.ts → config.yaml
Ponder
export default createConfig({
  chains: {
    mainnet: { id: 1, rpc: process.env.PONDER_RPC_URL_1 },
  },
  contracts: {
    MyToken: {
      abi: myTokenAbi,
      chain: "mainnet",
      address: "0xabc...",
      startBlock: 18000000,
    },
  },
});
HyperIndex (v3)
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: my-indexer
contracts:
  - name: MyToken
    abi_file_path: ./abis/MyToken.json
    handler: ./src/EventHandlers.ts
    events:
      - event: Transfer
      - event: Approval
chains:
  - id: 1
    start_block: 0
    contracts:
      - name: MyToken
        address:
          - 0xabc...
        start_block: 18000000
v2 note: HyperIndex v2 uses networks instead of chains. See the v2→v3 migration guide.
Key differences:
| Concept | Ponder | HyperIndex |
|---|---|---|
| Config format | ponder.config.ts (TypeScript) | config.yaml (YAML) |
| Chain reference | Named + viem object | Numeric chain ID |
| RPC URL | In config | RPC_URL_ env var |
| ABI source | TypeScript import | JSON file (abi_file_path) |
| Events to index | Inferred from handlers | Explicit events: list |
| Handler file | Inferred | Explicit handler: per contract |
Convert your ABI: Ponder uses TypeScript ABI exports (as const). HyperIndex needs a plain JSON file in abis/. Strip the export const ... = wrapper and as const and save as .json.
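For example, a Ponder-style TypeScript ABI export such as export const myTokenAbi = [...] as const; becomes a plain abis/MyToken.json file (illustrative fragment):

```json
[
  {
    "type": "event",
    "name": "Transfer",
    "inputs": [
      { "name": "from", "type": "address", "indexed": true },
      { "name": "to", "type": "address", "indexed": true },
      { "name": "value", "type": "uint256", "indexed": false }
    ]
  }
]
```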
Field selection: accessing transaction and block fields
By default, only a minimal set of fields is available on event.transaction and event.block. Fields like event.transaction.hash are undefined unless explicitly requested.
events:
  - event: Transfer
    field_selection:
      transaction_fields:
        - hash
Or declare once at the top level to apply to all events:
name: my-indexer
field_selection:
  transaction_fields:
    - hash
contracts:
  # ...
See the full list of available fields in the Configuration File docs.
Migrate from Alchemy
File: migrate-from-alchemy.md
Note: Alchemy subgraphs sunset on Dec 8th, 2025. Envio is offering affected Alchemy users 2 months of free hosting on Envio, along with full white-glove migration support to help projects move over smoothly.
For more info on how you can start your free trial or book migration support, visit this page to learn more.
Migrating Alchemy subgraphs to Envio's HyperIndex is a simple and developer-friendly process. Alchemy subgraphs follow The Graph's model and HyperIndex uses a very similar structure, so most of your existing setup can carry over cleanly.
If you're familiar with The Graph's libraries, the migration process should be straightforward. You can also utilize tools like Cursor to speed things up. If you are new to HyperIndex, we strongly recommend starting with our Getting Started guide before you begin your migration from Alchemy.
Why Migrate to Envio's HyperIndex?
- High Speed Performance: 143x faster than subgraphs
- Lower Costs: Reduced infrastructure requirements and operational expenses
- Better Developer Experience: Simplified configuration and deployment
- Multichain Native: Index data across multiple EVM chains through a single HyperIndex project
- Local Development: Run your indexers locally for fast iteration and easier debugging
- White Glove Migration Support: Get direct support from the Envio team for a smoother migration.
- GitOps Ready Deployments: Link your GitHub repo and manage multiple deployments in a clean unified workflow
- Advanced Features: Access to features like external calls and block handlers
- Seamless Integration: Easily integrate existing GraphQL APIs and applications
How to Migrate from Alchemy to Envio in 4 easy steps
This migration consists of four major steps:
- Create a HyperIndex Project
- subgraph.yaml Migration to config.yaml
- schema.graphql Migration
- Event Handler Migration
Create a HyperIndex Project
Start by spinning up a basic HyperIndex project with this command:
pnpx envio@3.0.0-rc.0 init template --name alchemy-migration --directory alchemy-migration --template greeter --api-token "YOUR_ENVIO_API_KEY"
Once the project is created, drop your API key into the .env file and you're good to go.
subgraph.yaml Migration to config.yaml
In HyperIndex, all project configuration lives in config.yaml. This is where you define contract addresses, the networks you want to index, and the specific events you want to track from those contracts.
Below is an example showing how a Uniswap V4 subgraph.yaml maps to a HyperIndex config.yaml in a real migration.
The Graph - subgraph.yaml
specVersion: 0.0.4
description: Uniswap is a decentralized protocol for automated token exchange on Ethereum.
repository: https://github.com/Uniswap/v4-subgraph
schema:
  file: ./schema.graphql
features:
  - nonFatalErrors
  - grafting
dataSources:
  - kind: ethereum/contract
    name: PositionManager
    network: mainnet
    source:
      abi: PositionManager
      address: "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e"
      startBlock: 21689089
    mapping:
      kind: ethereum/events
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mappings/index.ts
      entities:
        - Position
      abis:
        - name: PositionManager
          file: ./abis/PositionManager.json
      eventHandlers:
        - event: Subscription(indexed uint256,indexed address)
          handler: handleSubscription
        - event: Unsubscription(indexed uint256,indexed address)
          handler: handleUnsubscription
        - event: Transfer(indexed address,indexed address,indexed uint256)
          handler: handleTransfer
HyperIndex - config.yaml
# yaml-language-server: $schema=./node_modules/envio/evm.schema.json
name: uni-v4-indexer
networks:
  - id: 1
    start_block: 21689089
    contracts:
      - name: PositionManager
        address: 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
        handler: src/EventHandlers.ts
        events:
          - event: Subscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Unsubscription(uint256 indexed tokenId, address indexed subscriber)
          - event: Transfer(address indexed from, address indexed to, uint256 indexed id)
If you hit any issues, check the Configuration File docs or reach out to our team in Discord.
schema.graphql Migration
This step is simple. You keep the entire file as is, with one small change: remove all @entity directives from your entities. Everything else stays the same.
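In practice the change is as small as this (illustrative entity):

```graphql
# Before (subgraph)
type Pool @entity {
  id: ID!
  token0: String!
}

# After (HyperIndex)
type Pool {
  id: ID!
  token0: String!
}
```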
Event Handler Migration
This is the final step of the migration which consists of two parts:
- Moving from AssemblyScript to TypeScript
- Updating Subgraph syntax to HyperIndex syntax
AssemblyScript to TypeScript
HyperIndex uses TypeScript instead of AssemblyScript. Since AssemblyScript is a subset of TypeScript, you can simply copy most of your code over without worrying about major syntax changes.
Subgraph to HyperIndex
The HyperIndex workflow is very similar to Subgraphs, but there are a few important differences to keep in mind:
- Replace ENTITY.save() with context.ENTITY.set(VALUES)
- Handlers need to be async
- Use await when loading entities
As you start using HyperIndex, you'll pick up the differences quickly.
Here is a code snippet to give you a sense of what these changes look like in practice.
The Graph - eventHandler.ts
export function handleSubscription(event: SubscriptionEvent): void {
  const subscription = new Subscribe(
    event.transaction.hash.toHexString() + "-" + event.logIndex.toString(),
  );
  subscription.tokenId = event.params.tokenId;
  subscription.address = event.params.subscriber.toHexString();
  subscription.logIndex = event.logIndex;
  subscription.blockNumber = event.block.number;
  subscription.position = event.params.tokenId;
  subscription.save();
}
HyperIndex - eventHandler.ts
PositionManager.Subscription.handler(async (event, context) => {
const entity = {
id: event.transaction.hash + event.logIndex,
tokenId: event.params.tokenId,
address: event.params.subscriber,
blockNumber: event.block.number,
logIndex: event.logIndex,
position: event.params.tokenId
}
context.Subscription.set(entity);
})
For a few extra tips on migrating from Alchemy to Envio, check out our other migration guide in our docs.
Share Your Learnings
If you come across anything useful during your migration, please feel free to contribute. Simply open a PR to this guide and help future developers.
Getting Help
Join our Discord if you need support. It is the fastest way to get direct help from the team and the community.
Migrate to HyperIndex V3
File: migrate-to-v3.md
Fifteen full months have passed since the official HyperIndex v2.0.0 release. Since then, we have shipped 32 minor releases and multiple patches with zero breaking changes to the documented API. We also received PRs from 6 external contributors, grew from 1 GitHub star to over 470, and saw many big projects rely on HyperIndex.
HyperIndex V3 focuses on modernizing the codebase and laying the foundation for many more months of development. This guide walks you through upgrading from V2 to V3.
New Features
Unified Handlers API
In V3 all handler registrations now happen through a single indexer value. Contract-specific exports (ERC20.Transfer.handler, UniV3.PoolFactory.contractRegister, etc.) have been removed in favor of indexer.onEvent, indexer.contractRegister, and indexer.onBlock.
Event handlers with indexer.onEvent:
indexer.onEvent(
{
contract: "ERC20",
event: "Transfer",
wildcard: true,
where: ({ chain }) => ({
params: [
{ from: chain.Safe.addresses },
{ to: chain.Safe.addresses },
],
}),
},
async ({ event, context }) => {
// Handler logic
},
);
Dynamic contracts with indexer.contractRegister:
indexer.contractRegister(
{
contract: "UniV3",
event: "PoolFactory",
},
async ({ event, context }) => {
context.chain.Pool.add(event.params.poolAddress);
},
);
Block handlers with indexer.onBlock consolidate across chains in a single call:
indexer.onBlock(
{ name: "EveryBlock" },
async ({ block, context }) => {
// Handler logic
},
);
For chain-specific or interval-based block handlers, use the where callback:
indexer.onBlock(
{
name: "Ranges",
where: ({ chain }) => {
if (chain.id !== 1) return false;
return {
block: {
number: {
_gte: 20_000_000,
_lte: 22_000_000,
_every: 100,
},
},
};
},
},
async ({ block, context }) => {
// Handler logic
},
);
Per-Event Start Block
Handlers can specify custom start blocks per chain via where.block.number._gte, overriding contract and chain configuration:
indexer.onEvent(
{
contract: "UniV4",
event: "Pool",
where: ({ chain }) => {
let startBlock: number;
switch (chain.id) {
case 1:
startBlock = 18_000_000;
break;
case 8453:
startBlock = 2_000_000;
break;
default: {
const _exhaustive: never = chain.id;
return false;
}
}
return {
block: { number: { _gte: startBlock } },
};
},
},
async ({ event, context }) => {
// Handler logic
},
);
CommonJS → ESM
We migrated HyperIndex from CommonJS-only to ESM-only. This enables:
- Using the latest versions of libraries that have long since abandoned CommonJS support
- Top-level await in handler files
Top-Level Await
Thanks to the migration to ESM, you can now use await directly in handler and other files:
const ZERO_ADDRESS = "0x0000000000000000000000000000000000000000";
// Load data before registering handlers
const addressesFromServer = await loadWhitelistedAddresses();
indexer.onEvent(
{
contract: "ERC20",
event: "Transfer",
wildcard: true,
where: {
params: [
{ from: ZERO_ADDRESS, to: addressesFromServer },
{ from: addressesFromServer, to: ZERO_ADDRESS },
],
},
},
async ({ event, context }) => {
// ... your handler logic
},
);
3x Historical Backfill Performance
This was achieved by adding chunking logic that requests events across multiple ranges at once. The change also fixed overfetching for contracts with a much later start_block in the config and sped up dynamic contract registration. If data fetching was your bottleneck, 25k events per second is now standard.
Automatic Handler Registration (src/handlers)
We introduced automatic registration of handler files located in src/handlers.
Previously, you needed to specify an explicit path to a handler file for every contract in config.yaml. Now you can remove all of the paths from config.yaml and simply move the files to src/handlers. You can name the files however you want, but we suggest using contract names and having a file per contract.
If you don't like src/handlers, use the handlers option in config.yaml to customize it.
The explicit handler field in config.yaml still works, so you don't need to change anything immediately.
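A hedged sketch of the customization mentioned above (the exact key placement and value format are assumptions based on the paragraph; see the Configuration File docs for the canonical shape):

```yaml
# config.yaml
# Assumed shape: point handler auto-discovery at a custom directory
handlers: ./src/my-handlers
```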
RPC for Realtime Indexing
Built by external contributor @cairoeth, this allows specifying realtime mode for an RPC data source for low-latency head tracking:
rpc:
- url: https://eth-mainnet.your-rpc-provider.com
for: realtime
In this case, the RPC won't be used for historical sync but will be used as the primary source once the indexer enters realtime mode.
Chain State on Context
The Handler Context object provides chain state via the chain property:
indexer.onEvent(
{ contract: "ERC20", event: "Approval" },
async ({ context }) => {
console.log(context.chain.id); // 1 - The chain id of the event
console.log(context.chain.isRealtime); // true - Whether the indexer entered realtime mode
},
);
Indexer State & Config
As a replacement for the deprecated and removed getGeneratedByChainId, we introduce the indexer value. It provides nicely typed chains and contract data from your config, as well as the current indexing state, such as isRealtime and addresses. Use indexer either at the top level of the file or directly from handlers. It returns the latest indexer state.
With this change, we also introduce new official types: Indexer, EvmChainId, FuelChainId, and SvmChainId.
indexer.name; // "uniswap-v4-indexer"
indexer.description; // "Uniswap v4 indexer"
indexer.chainIds; // [1, 42161, 10, 8453, 137, 56]
indexer.chains[1].id; // 1
indexer.chains[1].startBlock; // 0
indexer.chains[1].endBlock; // undefined
indexer.chains[1].isRealtime; // false
indexer.chains[1].PoolManager.name; // "PoolManager"
indexer.chains[1].PoolManager.abi; // unknown[]
indexer.chains[1].PoolManager.addresses; // ["0x000000000004444c5dc75cB358380D2e3dE08A90"]
On indexer restart, reading indexer at the top level of a handler file returns values restored from the database, including dynamically registered contract addresses, rather than only what's declared in config.yaml:
// Includes initial + dynamically registered addresses persisted in the DB
console.log(indexer.chains.eth.Pool.addresses);
Conditional Event Handlers
Now it's possible to return a boolean value from the where function to disable or enable the handler conditionally.
indexer.onEvent(
{
contract: "ERC20",
event: "Transfer",
wildcard: true,
where: ({ chain }) => {
// Skip all ERC20 on Polygon
if (chain.id === 137) {
return false;
}
// Track all ERC20 on Ethereum Mainnet
if (chain.id === 1) {
return true;
}
// Track only whitelisted addresses on other chains
return {
params: [
{ from: ZERO_ADDRESS, to: WHITELISTED_ADDRESSES[chain.id] },
{ from: WHITELISTED_ADDRESSES[chain.id], to: ZERO_ADDRESS },
],
};
},
},
async ({ event, context }) => {
// ... your handler logic
},
);
Automatic Contract Configuration
HyperIndex now automatically configures all globally defined contracts. This fixes an issue where addContract crashed because the contract was defined globally but not linked to a specific chain:
contracts:
- name: UniswapV3Factory
events: # ...
- name: UniswapV3Pool
events: # ...
chains:
- id: 1
start_block: 0
contracts:
- name: UniswapV3Factory
address: 0x1F98431c8aD98523631AE4a59f267346ea31F984
# UniswapV3Pool no longer needed here - auto-configured from global contracts
- id: 10
start_block: 0
contracts:
- name: UniswapV3Factory
address: 0x1F98431c8aD98523631AE4a59f267346ea31F984
# UniswapV3Pool no longer needed here - auto-configured from global contracts
ClickHouse Storage (Experimental)
HyperIndex can now run with multiple storage backends at the same time. Postgres remains the primary database, and entities can additionally be written to a ClickHouse database that is restart- and reorg-resistant. Prometheus metrics carry a storage-name label so you can distinguish backends.
Enable both backends in config.yaml:
storage:
postgres: true
clickhouse: true
envio dev automatically spins up a ClickHouse Docker container for local development. For envio start, provide your own connection via the environment variables ENVIO_CLICKHOUSE_HOST, ENVIO_CLICKHOUSE_DATABASE, ENVIO_CLICKHOUSE_USERNAME, and ENVIO_CLICKHOUSE_PASSWORD. Currently supported only on Dedicated Plan.
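For envio start, supplying the connection might look like this (all values are placeholders):

```shell
# Connection for the experimental ClickHouse backend (placeholder values)
export ENVIO_CLICKHOUSE_HOST=https://your-clickhouse-host:8443
export ENVIO_CLICKHOUSE_DATABASE=envio
export ENVIO_CLICKHOUSE_USERNAME=default
export ENVIO_CLICKHOUSE_PASSWORD=your_password
```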
Do not run multiple indexers writing to the same ClickHouse database at the same time.
HyperSync Source Improvements
Multiple updates on the HyperSync side to achieve smaller latency and less traffic:
- Server-Sent Events instead of polling to get updates about new blocks
- CapnProto instead of JSON for query serialization
- Cache for queries with repetitive filters - huge egress saving when indexing thousands of addresses
- Improved connection establishment behind a proxy
- Configurable log level via the `ENVIO_HYPERSYNC_LOG_LEVEL` environment variable
- Automatic rate-limiting handling on the client side
- Better reconnection logic, logging, and fallbacks for HyperSync SSE and RPC WebSocket height streaming for more stable indexing at the chain head
Fuel Block Handler Support
Block handlers are now supported for Fuel indexing.
Solana Support (Experimental)
HyperIndex now supports Solana with RPC as a source. This feature is experimental and may undergo minor breaking changes. Solana exposes its block-stream handler as indexer.onSlot (rather than onBlock) to match Solana's slot-based model.
To initialize a Solana project:
pnpx envio@3.0.0-rc.0 init svm
See the Solana documentation for more details.
pnpx envio@3.0.0-rc.0 init Improvements
- Removed language selection to prefer TypeScript by default
- Cleaned up templates to follow the latest best practices
- Added new templates to highlight HyperIndex features: `Feature: Factory Contract`, `Feature: External Calls`
- Pre-configured GitHub Actions workflow for running tests and an initialized git repository
- Generated projects include Cursor/Claude skills to support agent-driven development
Block Handler Only Indexers
It's now possible to create indexers with only block handlers. Previously, at least one event handler was required. The contracts field is now optional in config.yaml.
Flexible Entity Fields
We no longer restrict entity field names, such as `type` and others. Shape your entities any way you want. Database columns are also now generated in the same order as fields are defined in schema.graphql.
Unordered Multichain Mode by Default
Unordered multichain mode is now the default behavior, providing better performance for most use cases. If you need ordered multichain behavior, explicitly set `multichain: ordered` in your config.
Preload Optimization by Default
Preload optimization is now enabled by default, replacing the previous loaders and preload_handlers options. This improves historical sync performance automatically.
TUI Improvements
We gave our TUI some love, making it look more beautiful and compact. It also consumes fewer resources, shares a link to the Hasura playground, and dynamically adjusts to the terminal width.
The TUI is now auto-disabled in CI environments and when running under AI agents, so logs stay clean without manual configuration. The legacy TUI_OFF=true environment variable was renamed to ENVIO_TUI=false.
New Testing Framework
HyperIndex ships a purpose-built testing framework powered by createTestIndexer(). Write tests against the same indexer that runs in production: no database, no Docker, no manual mock wiring.
The framework integrates with Vitest, replacing the previous mocha/chai setup with a single package that doesn't require configuration by default and includes snapshot testing out-of-the-box. It also provides typed test assertions and utilities to read/write entities in-between processing runs.
Three ways to feed events
1. Auto-exit: processes the first block with matching events, then exits. Each subsequent call continues where the last one stopped. Zero config needed.
describe("ERC20 indexer", () => {
it("processes the first block with events", async (t) => {
const indexer = createTestIndexer();
const result = await indexer.process({ chains: { 1: {} } });
// Auto-filled by Vitest on first run - just review and commit
t.expect(result).toMatchInlineSnapshot(`
{
"changes": [
{
"Transfer": {
"sets": [
{
"blockNumber": 10861674,
"from": "0x0000000000000000000000000000000000000000",
"id": "1-10861674-23",
"to": "0x41653c7d61609D856f29355E404F310Ec4142Cfb",
"transactionHash": "0x4b37d2f343608457ca...",
"value": 1000000000000000000000000000n,
},
],
},
"block": 10861674,
"chainId": 1,
"eventsProcessed": 1,
},
],
}
`);
});
});
2. Explicit block range: pin to specific blocks for deterministic CI snapshots.
const result = await indexer.process({
chains: {
1: {
startBlock: 10_861_674,
endBlock: 10_861_674,
},
},
});
3. Simulate: feed typed synthetic events for pure unit tests. No network, no block ranges.
await indexer.process({
chains: {
137: {
simulate: [
{
contract: "Greeter",
event: "NewGreeting",
params: { greeting: "Hello", user: "0x123..." },
},
],
},
},
});
Key capabilities
- Snapshot-driven assertions: `result.changes` captures every entity set/delete per block. Pair with `toMatchInlineSnapshot` for auto-generated, reviewable snapshots.
- Direct entity access: `indexer.Entity.get()`, `.getOrThrow()`, `.getAll()`, and `.set()` for reading and presetting state.
- Real pipeline, real confidence: tests exercise the full indexer pipeline, including dynamic contract registration, multi-chain support, and handler context.
- Parallel test execution via worker thread isolation.
The test indexer also exposes chain information:
const indexer = createTestIndexer();
indexer.chainIds; // [1, 42161]
indexer.chains[1].id; // 1
indexer.chains[1].startBlock; // 0
indexer.chains[1].ERC20.addresses; // ["0x..."]
// Read/write entities between processing runs
await indexer.Account.set({ id: "0x123...", balance: 100n });
const account = await indexer.Account.get("0x123...");
See the Testing documentation for more details.
Podman Support
Beyond Docker, HyperIndex now supports Podman for local development environments. This provides an alternative container runtime for developers who prefer Podman or have it available in their environment.
Nested Tuples for Contract Import
The envio init command now supports contracts with nested tuples in event signatures, which was previously a limitation when importing contracts.
PostgreSQL Update for Local Docker Compose
The local development Docker Compose setup now uses PostgreSQL 18.1 (upgraded from 17.5).
contractName and eventName on Event
Events now include contractName and eventName fields, making it easier to identify which contract and event you're working with in handlers:
indexer.onEvent(
{ contract: "ERC20", event: "Transfer" },
async ({ event }) => {
console.log(event.contractName); // "ERC20"
console.log(event.eventName); // "Transfer"
},
);
New Official Exported Types
Generated code now exports official generic types for entities, enums, and events. These replace the previous contract-specific type exports:
import type {
  MyEntity, // Still exported, but the generic Entity type is preferred
  Entity, // Generic entity type, parameterized by entity name
  Enum, // Generic enum type, parameterized by enum name (replaces the direct MyEnum export)
  EvmEvent, // Generic event type, parameterized by contract and event
  // Access specific fields: EvmEvent["block"]
} from "envio";
Support for DESC Indices
Descending (DESC) index fields are a nice way to improve your query performance as well:
type PoolDayData
@index(fields: ["poolId", ["date", "DESC"]]) {
id: ID!
poolId: String!
date: Timestamp!
}
RPC Source Improvements
Added a `polling_interval` option for RPC source configuration. Also added missing support for receipt-only fields (`gasUsed`, `cumulativeGasUsed`, `effectiveGasPrice`) that are not available via `eth_getTransactionByHash`. HyperIndex will additionally perform an `eth_getTransactionReceipt` request when one of these fields is added to `field_selection`.
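A hedged config sketch combining both features (key names follow the paragraph above; the exact nesting and the interval's units are assumptions):

```yaml
chains:
  - id: 1
    rpc:
      - url: https://eth-mainnet.your-rpc-provider.com
        polling_interval: 1000 # assumed interval value
    field_selection:
      transaction_fields:
        # receipt-only fields - adding one triggers eth_getTransactionReceipt
        - gasUsed
        - effectiveGasPrice
```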
WebSocket Support (Experimental)
Experimental WebSocket support for RPC source to improve head latency. Please create a GitHub issue if you come across any problems.
chains:
- id: 1
rpc:
url: ${ENVIO_RPC_ENDPOINT}
ws: ${ENVIO_WS_ENDPOINT}
for: realtime
Prometheus Metrics for Data Providers
Added a Prometheus metric to track requests to data providers, providing better observability into your indexer's data fetching patterns.
GraphQL-Style getWhere API
The getWhere query API has been redesigned using GraphQL-style syntax:
// Before
const transfers = await context.Transfer.getWhere.from.eq("0x123...");
// After
const transfers = await context.Transfer.getWhere({ from: { _eq: "0x123..." } });
Additionally, three new filter operators are available following Hasura-style conventions:
context.Entity.getWhere({ amount: { _gte: 100n } })
context.Entity.getWhere({ amount: { _lte: 500n } })
context.Entity.getWhere({ status: { _in: ["active", "pending"] } })
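To make the operator semantics concrete, here is a tiny self-contained sketch of how Hasura-style filter objects evaluate against plain records - an illustration only, not Envio's implementation:

```typescript
// Hasura-style filter operators as used by the redesigned getWhere API.
// Illustration only: a tiny in-memory evaluator, not Envio's implementation.
type Filter = { _eq?: unknown; _gte?: any; _lte?: any; _in?: unknown[] };
type Where = Record<string, Filter>;

function matches(entity: Record<string, any>, where: Where): boolean {
  return Object.keys(where).every((field) => {
    const f = where[field]!;
    const v = entity[field];
    if (f._eq !== undefined && v !== f._eq) return false;
    if (f._gte !== undefined && v < f._gte) return false;
    if (f._lte !== undefined && v > f._lte) return false;
    if (f._in !== undefined && !f._in.includes(v)) return false;
    return true;
  });
}

type Transfer = { from: string; amount: bigint };
const transfers: Transfer[] = [
  { from: "0x123", amount: 50n },
  { from: "0x123", amount: 200n },
  { from: "0xabc", amount: 400n },
];

// In spirit: context.Transfer.getWhere({ from: { _eq: "0x123" }, amount: { _gte: 100n } })
const result = transfers.filter((t) =>
  matches(t, { from: { _eq: "0x123" }, amount: { _gte: 100n } }),
);
console.log(result.length); // 1
```

Only the transfer that both matches `_eq` on `from` and clears the `_gte` threshold survives; the real API applies the same object shape as a database query filter.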
Direct RPC Client
Replaced Ethers.js with a direct RPC client implementation, reducing dependencies and improving performance.
Block Lag Configuration
A new per-chain `block_lag` option indexes behind the chain head by a specified number of blocks. It replaces the global ENVIO_INDEXING_BLOCK_LAG environment variable and defaults to 0. This is for advanced use cases; only use it if you know what you're doing.
chains:
- id: 1
block_lag: 5
Official /metrics Endpoint
Prometheus metrics are now official. We cleaned up metric names, switched time units to seconds instead of milliseconds, and followed Prometheus naming conventions more closely. Metrics also cover data points previously available only via the --bench feature. A separate /metrics/runtime endpoint with a dedicated Prometheus registry is available for runtime metrics, isolated from the default /metrics endpoint.
Starting from the v3.0.0 release, Prometheus metrics will follow semver and be documented.
Breaking changes:
- Cleaned up metric names and switched time units from milliseconds to seconds
- Removed `--bench` support; use the `/metrics` endpoint instead
Use the new envio metrics CLI command to fetch the Prometheus metrics of a locally running indexer without curling the endpoint manually.
Continue on Config Change
HyperIndex can now keep indexing through some config.yaml changes (rpc configuration is the first to land) instead of erroring out on every restart. Where a change is incompatible, the CLI prints exactly which fields were touched and offers two clear options: revert, or envio dev -r to wipe and re-index. More flexibility will be unlocked over time; open a GitHub issue if you need a specific field supported.
Double Handler Registration
It's now possible to register multiple handlers for the same event with similar filters:
indexer.onEvent(
{ contract: "ERC20", event: "Transfer" },
async ({ event, context }) => {
// Your logic here
},
);
indexer.onEvent(
{ contract: "ERC20", event: "Transfer" },
async ({ event, context }) => {
// And here
},
);
Improved Multiple Data-Sources Support
After switching to a fallback source, HyperIndex now attempts to recover to the primary source 60 seconds later. Previously, it would stay on the fallback until the fallback was down or the indexer was restarted. The source selection logic has also been improved for better indexing resilience and stricter enforcement of the realtime mode configuration.
Updated Dev Docker Flow
envio dev no longer uses a generated Docker Compose file and manages containers, network, and volumes directly for greater flexibility. For example, disabling Hasura with ENVIO_HASURA now prevents envio dev from pulling the Hasura image. Use envio dev --restart (or -r) to forcefully clear the database even if there are no config changes detected.
Envio Dev Update
envio dev no longer automatically resets the database on incompatible config or schema changes. Use envio dev -r to explicitly allow this.
Envio Start Update
envio start now has a clear role: to run HyperIndex in the production environment. Use envio dev for local development to enable debugging with Dev Console.
Optimized envio codegen
envio codegen is now near-instant. We no longer run pnpm i for the generated package, and we no longer recompile ReScript every time you change config.yaml or schema.graphql. The output is also a lot quieter.
Smaller envio Package (-88MB)
By eliminating dynamically generated ReScript code, we no longer need to ship or run a ReScript compiler at runtime. The published npm package shrank from 141MB to 53MB.
No Hard pnpm Requirement
Internal use of pnpm is gone. The generated package no longer has its own dependency tree, so HyperIndex works with whichever package manager you prefer.
Bun Support
Run HyperIndex on Bun:
bun --bun envio dev
Choose Your Package Manager on envio init
envio init now accepts --package-manager=pnpm|npm|bun|yarn so you can scaffold projects without committing to pnpm.
Better Tuples Developer Experience
Solidity struct components used to be generated as positional tuples in handler params, which made handler code awkward. They are now generated as objects with named fields:
struct CreateEventCommon {
address funder;
address sender;
address recipient;
Lockup.CreateAmounts amounts;
IERC20 token;
bool cancelable;
bool transferable;
Lockup.Timestamps timestamps;
string shape;
address broker;
}
event CreateLockupTranchedStream(
uint256 indexed streamId,
Lockup.CreateEventCommon commonParams,
LockupTranched.Tranche[] tranches
);
// Before
event.params.commonParams[5];
event.params.commonParams[3][0];
// After
event.params.commonParams.cancelable;
event.params.commonParams.amounts.deposit;
Improved Multichain Backfill
For large multichain indexers, HyperIndex now throttles chains that have already reached the head so they don't compete for resources while the rest finish backfilling. Once every chain has caught up, throttling is lifted and all chains continue indexing equally.
Toolchain Upgrades
- ReScript upgraded from v11 to v12 (internally and in `envio init` templates)
- TypeScript upgraded from v5 to v6 (internally and in `envio init` templates)
Breaking Changes
Node.js & Runtime
- Node.js 22 is now the minimum required version, while 24 is the recommended version
- Changes in handler files don't trigger codegen on `pnpm dev`
Handler API Changes
- Unified handler registration via `indexer`: contract-specific exports were removed. Replace `Contract.Event.handler(handler, options)` with `indexer.onEvent({ contract, event, ...options }, handler)`. Replace `Contract.Event.contractRegister(...)` with `indexer.contractRegister({ contract, event }, ...)`. The old API has been hard removed and no longer works.
- Dynamic contract registration: `context.add(address)` is replaced with `context.chain.<ContractName>.add(address)`.
- `eventFilters` renamed to `where`: the `where` callback receives `{ chain }` (not `{ chainId }`) and returns `false`, `true`, or `{ params: [...], block?: { number: { _gte, _lte, _every } } }`. The previous array shorthand is no longer accepted at the top level; wrap it in `{ params: [...] }`.
- Block handlers consolidated: the standalone `onBlock` export and per-chain `forEach` registration are gone. Use a single `indexer.onBlock({ name, where? }, handler)` call. For chain-specific or interval-based handlers, return `{ block: { number: { _gte, _lte, _every } } }` from `where`, or `false` to skip a chain.
- Per-event start block: handlers can now override the configured start block per chain via `where.block.number._gte`.
- Removed `experimental_createEffect` in favor of `createEffect`
- Renamed transaction field `kind` to `type`
- For block handlers: `block.chainId` is removed in favor of `context.chain.id`
- `Address` type changed from `string` to `` `0x${string}` ``
- Removed `transaction.chainId` from field selection; use `context.chain.id` or `event.chainId` instead
- `getWhere` API redesigned: changed from `context.Entity.getWhere.fieldName.eq(value)` to `context.Entity.getWhere({ fieldName: { _eq: value } })` using GraphQL-style filter syntax
- Events now include `contractName` and `eventName` fields
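The `Address` change is easy to see in plain TypeScript - a minimal sketch with the alias written out locally rather than imported from envio:

```typescript
// V3 narrows Address from string to a template-literal type.
// Local sketch: the alias is written out here instead of imported from envio.
type Address = `0x${string}`;

const pool: Address = "0x000000000004444c5dc75cB358380D2e3dE08A90"; // compiles
// const bad: Address = "not-an-address"; // would now fail to type-check

// A runtime guard matching the compile-time shape.
function isAddress(value: string): value is Address {
  return value.startsWith("0x");
}

console.log(isAddress(pool)); // true
console.log(isAddress("deadbeef")); // false
```

In practice this means code that previously passed arbitrary strings where an address was expected now needs a narrowing check like the guard above.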
config.yaml Changes
- Renamed `networks` to `chains`
- Renamed `confirmed_block_threshold` to `max_reorg_depth`
- Removed the `unordered_multichain_mode` flag, replaced with `multichain: ordered | unordered` (default: `unordered`)
- Removed the `loaders` option (now always enabled via Preload Optimization)
- Removed the `preload_handlers` option (now always enabled)
- Removed the `preRegisterDynamicContracts` option
- Removed the `event_decoder` option (the Rust-based decoder is now the only implementation)
- Removed `rpc_config` in favor of `rpc`, which now supports multiple URLs, a `for` mode (`sync`, `realtime`, `fallback`), and WebSocket configuration (see RPC for Realtime Indexing)
- Removed the `output` flag; generated types are always emitted to `.envio/` at the project root
HyperSync API Token Required
Indexers using HyperSync as a data source now require an ENVIO_API_TOKEN environment variable. You can obtain a free API token at envio.dev/app/api-tokens.
export ENVIO_API_TOKEN=your_token_here
Environment Variable Changes
- Removed the `UNSTABLE__TEMP_UNORDERED_HEAD_MODE` environment variable
- Removed the `UNORDERED_MULTICHAIN_MODE` environment variable
- Removed the `MAX_BATCH_SIZE` environment variable (use `full_batch_size` in config.yaml instead)
- Renamed `ENVIO_PG_PUBLIC_SCHEMA` to `ENVIO_PG_SCHEMA` (the old name is still supported until v4)
- Renamed `TUI_OFF=true` to `ENVIO_TUI=false` (the TUI is also auto-disabled in CI environments and when running under AI agents)
Generated Code Changes
- Removed the `chain` type in favor of `ChainId` (now a union type instead of a number)
- Removed the internal `ContractType` enum (allows longer contract names)
- Removed `getGeneratedByChainId` (use the `indexer` value instead)
- Lowercased entity types removed: generated code no longer exports lowercased entity types (e.g., `transfer`). Use capitalized names instead (e.g., `Transfer`)
- Entity array field values are now typed as `readonly`; update any code that directly mutates array fields
- The `S.nullable` schema type now returns `null` instead of `undefined`
CLI Behavior Changes
- `envio dev` no longer auto-resets the database. Use `envio dev --restart` (or `-r`) to clear it explicitly.
- `envio start` is now production-only; use `envio dev` for local development.
Deprecated: MockDb Testing API
The MockDb testing API has been removed. Migrate to createTestIndexer() with simulate:
-import { TestHelpers, type User } from "generated";
-const { MockDb, Greeter, Addresses } = TestHelpers;
+import { createTestIndexer, type User, TestHelpers } from "envio";
+const { Addresses } = TestHelpers;
it("A NewGreeting event creates a User entity", async (t) => {
- const mockDbInitial = MockDb.createMockDb();
+ const indexer = createTestIndexer();
const userAddress = Addresses.defaultAddress;
const greeting = "Hi there";
- const mockNewGreetingEvent = Greeter.NewGreeting.createMockEvent({
- greeting: greeting,
- user: userAddress,
- });
-
- const updatedMockDb = await Greeter.NewGreeting.processEvent({
- event: mockNewGreetingEvent,
- mockDb: mockDbInitial,
- });
+ await indexer.process({
+ chains: {
+ 137: {
+ simulate: [
+ {
+ contract: "Greeter",
+ event: "NewGreeting",
+ params: { greeting, user: userAddress },
+ },
+ ],
+ },
+ },
+ });
const expectedUserEntity: User = {
id: userAddress,
latestGreeting: greeting,
numberOfGreetings: 1,
greetings: [greeting],
};
- const actualUserEntity = updatedMockDb.entities.User.get(userAddress);
+ const actualUserEntity = await indexer.User.getOrThrow(userAddress);
t.expect(actualUserEntity).toEqual(expectedUserEntity);
});
MockDb Migration Cheat Sheet
| Old (MockDb) | New (createTestIndexer) |
|---|---|
| `MockDb.createMockDb()` | `createTestIndexer()` |
| `Contract.Event.createMockEvent({...})` | Inline in `simulate: [{ contract, event, params }]` |
| `Contract.Event.processEvent({event, mockDb})` | `indexer.process({ chains: { id: { simulate } } })` |
| `mockDb.entities.Entity.get(id)` | `await indexer.Entity.getOrThrow(id)` |
| `mockDb.entities.Entity.set({...})` | `indexer.Entity.set({...})` |
| Manual handler threading & event chaining | Automatic: pass multiple events in the `simulate` array |
Deprecated: Contract-Specific Type Exports
Generated code no longer exports contract-specific event log types (e.g., ERC20_Transfer_eventLog) or direct enum types (e.g., MyEnum). Use the new generic types instead:
| Old | New |
|---|---|
| `ERC20_Transfer_eventLog` | `EvmEvent` |
| `ERC20_Transfer_block` | `EvmEvent["block"]` |
| `MyEnum` (direct export) | `Enum` |
| `MyEntity` | `Entity` (preferred) |
Postgres Column Updates
- `raw_events.event_id`: `NUMERIC` → `BIGINT`
- `raw_events.serial`: `SERIAL` → `BIGSERIAL`
- `envio_chains.events_processed`: `INTEGER` → `BIGINT`
- `envio_checkpoints.id`: `INTEGER` → `BIGINT`
- Deprecated `envio_chains._num_batches_fetched`; it always returns 0 for backward compatibility
Fixes
- Fixed an issue where the indexer stops progressing without any error (PostgreSQL client update)
- Fixed checksum for addresses returned by RPC in lowercase
- Fixed incorrect validation of the transaction `to` field returned by RPC
- Fixed OOM error on RPC request crashing loop
- Fixed an edge case where a multichain indexer could freeze during a rollback on reorg (also backported to v2.32.10)
- Fixed external Postgres database support via `ENVIO_PG_HOST`
- Fixed the `S.nullable` schema type to be `T | null` instead of `T | undefined`
Migration Guide
Step 0: Prepare on V2 (Recommended)
Before upgrading to V3, we recommend preparing your project while still on V2:
- Upgrade to v2.32.6 and enable Preload Optimization:
# config.yaml
preload_handlers: true
- If you were using loaders, migrate them to Preload Optimization following the Migrating from Loaders guide.
- Verify your indexer works correctly with `pnpm dev` before proceeding to V3.
This step ensures a smoother migration by validating Preload Optimization works with your handlers before the V3 upgrade.
Step 1: Update Dependencies
Node.js
Update Node.js to version 22 or higher.
package.json
Update your package.json with the following changes:
{
"type": "module",
"engines": {
"node": ">=22.0.0"
},
"dependencies": {
"envio": "3.0.0-rc.0"
},
"devDependencies": {
"@types/node": "24.12.2",
"typescript": "6.0.3",
"vitest": "4.1.0"
}
}
Adding "type": "module" is required for V3. Without it, your project will fail to start due to ESM import errors.
Remove the generated package. As of v3.0.0-alpha.24, the local generated package no longer exists: types are emitted to .envio/types.d.ts (git-ignored) and wired up via a small envio-env.d.ts file at the project root. Drop the entry from package.json if you still have it:
- "optionalDependencies": {
- "generated": "./generated"
- },
Re-run envio codegen after upgrading; everything you previously imported from generated is now exported from envio.
If you use testing with Mocha (recommended: migrate to Vitest):
We recommend migrating from mocha/chai to Vitest, which offers a better testing experience with the new HyperIndex testing framework:
pnpm remove ts-mocha ts-node mocha chai @types/mocha @types/chai
pnpm add -D vitest@4.0.16
Update your package.json:
{
"scripts": {
"test": "vitest run"
},
"devDependencies": {
"vitest": "4.0.16"
}
}
Move and refactor your test files:
- Move `test/Test.ts` to `src/indexer.test.ts`
- Update imports from `mocha`/`chai` to use `vitest`:
// Before (mocha/chai)
import { it } from "mocha";
import { expect } from "chai";
// After (vitest)
import { it, expect } from "vitest";
If you prefer to keep Mocha:
Remove ts-mocha and ts-node, then install tsx:
pnpm remove ts-mocha ts-node
pnpm add -D tsx@4.21.0
Update your test script in package.json:
{
"scripts": {
"mocha": "tsc --noEmit && NODE_OPTIONS='--no-warnings --import tsx' mocha --exit test/**/*.ts"
}
}
If you use ts-node for start script:
Replace with:
{
"scripts": {
"start": "envio start"
}
}
Step 2: Update tsconfig.json
Update your tsconfig.json to support ESM:
{
/* For details: https://www.totaltypescript.com/tsconfig-cheat-sheet */
"compilerOptions": {
/* Base Options: */
"esModuleInterop": true,
"skipLibCheck": true,
"target": "es2022",
"allowJs": true,
"resolveJsonModule": true,
"moduleDetection": "force",
"isolatedModules": true,
"verbatimModuleSyntax": true,
/* Strictness */
"strict": true,
"noUncheckedIndexedAccess": true,
"noImplicitOverride": true,
/* For running Envio: */
"module": "ESNext",
"moduleResolution": "bundler",
"noEmit": true,
/* Code doesn't run in the DOM: */
"lib": ["es2022"],
"types": ["node"]
}
}
This includes additional strictness options like verbatimModuleSyntax and noUncheckedIndexedAccess. You can disable them to simplify the migration.
Step 3: Update config.yaml
Rename networks to chains:
# Before
networks:
- id: 1
contracts:
- name: MyContract
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
# After
chains:
- id: 1
contracts:
- name: MyContract
events:
- event: Transfer(address indexed from, address indexed to, uint256 value)
Update multichain mode (if applicable):
If you had unordered_multichain_mode: true, remove it; this is now the default. If you need ordered multichain behavior, explicitly set:
multichain: ordered
Rename config options:
- confirmed_block_threshold → max_reorg_depth
Remove deprecated options:
Remove the following options from your config if present:
- loaders: now always enabled via Preload Optimization
- preload_handlers: now always enabled
- preRegisterDynamicContracts: no longer needed
- unordered_multichain_mode: replaced with the multichain option
- event_decoder: the Rust-based decoder is now the only implementation
- rpc_config: replaced with rpc (see Breaking Changes)
New option for batch size:
If you were using MAX_BATCH_SIZE environment variable, use the new config option instead:
full_batch_size: 5000
Automatic Handler Registration (optional):
Optionally move your handler files to src/handlers/ and remove the explicit handler paths from config.yaml.
Step 4: Update Environment Variables
Add required environment variables:
If your indexer uses HyperSync (the default data source), you need to set up an API token:
- Get a free API token at envio.dev/app/api-tokens
- Set the environment variable:
export ENVIO_API_TOKEN=your_token_here
For local development, you can add it to a .env file:
ENVIO_API_TOKEN=your_token_here
Remove deprecated environment variables if present:
- UNSTABLE__TEMP_UNORDERED_HEAD_MODE
- UNORDERED_MULTICHAIN_MODE
- MAX_BATCH_SIZE: use full_batch_size in config.yaml instead
Step 5: Update Handler Code
Migrate to the unified indexer API:
All contract-specific handler exports (ERC20.Transfer.handler, Greeter.NewGreeting.contractRegister, etc.) have been removed. Register every handler through indexer.onEvent, indexer.contractRegister, or indexer.onBlock.
// Before
ERC20.Transfer.handler(
async ({ event, context }) => {
// ...
},
{
wildcard: true,
eventFilters: ({ chainId }) => [
{ from: ZERO_ADDRESS, to: WHITELIST[chainId] },
],
}
);
// After
indexer.onEvent(
{
contract: "ERC20",
event: "Transfer",
wildcard: true,
where: ({ chain }) => ({
params: [{ from: ZERO_ADDRESS, to: WHITELIST[chain.id] }],
}),
},
async ({ event, context }) => {
// ...
},
);
Migrate dynamic contract registration:
// Before
UniV3.PoolFactory.contractRegister(async ({ event, context }) => {
context.addPool(event.params.poolAddress);
});
// After
indexer.contractRegister(
{ contract: "UniV3", event: "PoolFactory" },
async ({ event, context }) => {
context.chain.Pool.add(event.params.poolAddress);
},
);
Migrate block handlers:
// Before
indexer.chainIds.forEach((chainId) => {
onBlock(
{ name: "EveryBlock", chain: chainId },
async ({ block, context }) => {
// ...
},
);
});
// After
indexer.onBlock(
{ name: "EveryBlock" },
async ({ block, context }) => {
// ...
},
);
For chain-specific handlers or interval/range filters, use the where callback (see Unified Handlers API).
Rename deprecated APIs:
| V2 (Deprecated) | V3 |
|---|---|
Contract.Event.handler(...) | indexer.onEvent({ contract, event, ...options }, handler) |
Contract.Event.contractRegister(...) | indexer.contractRegister({ contract, event }, handler) |
onBlock({ chain, ... }, handler) | indexer.onBlock({ name, where? }, handler) |
context.add<Contract>(addr) | context.chain.<Contract>.add(addr) |
eventFilters option | where callback returning { params: [...] } |
experimental_createEffect | createEffect |
block.chainId (in block handlers) | context.chain.id |
transaction.kind | transaction.type |
chain type | ChainId |
transaction.chainId | context.chain.id or event.chainId |
Entity.getWhere.field.eq(value) | Entity.getWhere({ field: { _eq: value } }) |
Entity.getWhere.field.gt(value) | Entity.getWhere({ field: { _gt: value } }) |
Entity.getWhere.field.lt(value) | Entity.getWhere({ field: { _lt: value } }) |
Removed APIs:
- getGeneratedByChainId: use indexer.chains[chainId] instead (see Indexer State & Config)
- getWhere API: update to the new GraphQL-style filter syntax:
// Before
const transfers = await context.Transfer.getWhere.from.eq("0x123...");
const bigTransfers = await context.Transfer.getWhere.value.gt(1000n);
// After
const transfers = await context.Transfer.getWhere({ from: { _eq: "0x123..." } });
const bigTransfers = await context.Transfer.getWhere({ value: { _gt: 1000n } });
- Lowercased entity types: use capitalized names instead:
// Before
let entity: transfer;
// After
let entity: Transfer;
CLI behavior changes:
- envio dev no longer auto-resets the database. If you relied on auto-reset, run envio dev -r (or --restart) explicitly.
- envio start is now production-only; keep using envio dev for local development.
Step 6: Test Your Migration
After making all changes, run codegen and start your indexer:
pnpm envio codegen
pnpm dev
Quick Migration Checklist
Prepare (on V2):
- Upgrade to envio@2.32.6
- Enable preload_handlers: true in config.yaml
- Migrate from loaders if applicable (guide)
- Verify indexer works with pnpm dev
Dependencies:
- Update Node.js to >=22
- Add "type": "module" to package.json (required for V3!)
- Update the envio dependency to the latest v3 release
- Remove the optionalDependencies.generated entry from package.json (alpha.24+)
- Update engines.node to >=22.0.0 in package.json
- Update tsconfig.json for ESM support
- Migrate from mocha/chai to vitest (recommended) or replace ts-mocha/ts-node with tsx
config.yaml:
- Rename networks to chains
- Rename confirmed_block_threshold to max_reorg_depth
- Replace rpc_config with rpc
- Remove unordered_multichain_mode (now the default)
- Remove the loaders and preload_handlers options
- Remove the preRegisterDynamicContracts option
- Remove the event_decoder option
- Remove the output option (types are always written to .envio/)
- If using ClickHouse, add storage: { postgres: true, clickhouse: true } (env vars are still required for the connection)
Environment Variables:
- Set ENVIO_API_TOKEN if using HyperSync (get token)
- Remove UNSTABLE__TEMP_UNORDERED_HEAD_MODE
- Remove UNORDERED_MULTICHAIN_MODE
- Remove MAX_BATCH_SIZE (use full_batch_size in config.yaml)
- Rename TUI_OFF=true to ENVIO_TUI=false if you set it
Handler Code:
- Migrate event handlers from Contract.Event.handler(...) to indexer.onEvent({ contract, event, ...options }, handler)
- Migrate dynamic contract registration from Contract.Event.contractRegister(...) to indexer.contractRegister({ contract, event }, handler)
- Migrate context.add<Contract>(addr) to context.chain.<Contract>.add(addr)
- Convert eventFilters to the new where callback returning { params: [...] }
- Migrate block handlers from per-chain onBlock loops to a single indexer.onBlock call (use where for chain-specific or interval filters)
- Use the new where.block.number._gte to override per-event start blocks if needed
- Replace experimental_createEffect with createEffect
- Replace block.chainId with context.chain.id in block handlers
- Replace transaction.kind with transaction.type
- Update usage of the chain type to ChainId
- Replace getGeneratedByChainId with indexer.chains[chainId]
- Update code expecting the Address type to be string (now `0x${string}`)
- Replace transaction.chainId with context.chain.id or event.chainId
- Replace lowercased entity type imports with capitalized versions (e.g., transfer → Transfer)
- Update getWhere calls to the new GraphQL-style filter syntax (e.g., getWhere({ field: { _eq: value } }))
- Update any S.nullable schema usage (it now returns null instead of undefined)
- Migrate from MockDb to createTestIndexer() (see MockDb Migration Cheat Sheet)
- Replace contract-specific type exports (ERC20_Transfer_eventLog) with generic types (EvmEvent)
CLI:
- If you relied on envio dev resetting the database automatically, switch to envio dev -r
- Use envio dev for local development (envio start is now production-only)
Verify:
- Run pnpm envio codegen and pnpm dev to verify
Getting Help
If you encounter any issues during migration, join our Discord community for support.
Release Notes
For detailed release notes, see:
- v3.0.0-rc.0
- v3.0.0-alpha.24
- v3.0.0-alpha.23
- v3.0.0-alpha.22
- v3.0.0-alpha.21
- v3.0.0-alpha.20
- v3.0.0-alpha.19
- v3.0.0-alpha.18
- v3.0.0-alpha.17
- v3.0.0-alpha.16
- v3.0.0-alpha.15
- v3.0.0-alpha.14
- v3.0.0-alpha.13
- v3.0.0-alpha.12
- v3.0.0-alpha.11
- v3.0.0-alpha.10
- v3.0.0-alpha.9
- v3.0.0-alpha.8
- v3.0.0-alpha.7
- v3.0.0-alpha.6
- v3.0.0-alpha.5
- v3.0.0-alpha.4
- v3.0.0-alpha.3
- v3.0.0-alpha.2
- v3.0.0-alpha.1
- v3.0.0-alpha.0
Configuration File
File: Guides/configuration-file.mdx
The config.yaml file defines your indexer's behavior, including which blockchain events to index, contract addresses, which networks to index, and various advanced indexing options. It is a crucial step in configuring your HyperIndex setup.
After any changes to your config.yaml and the schema, run:
pnpm codegen
This command generates necessary types and code for your event handlers.
Key Configuration Options
Contract Addresses
Set the address of the smart contract you're indexing.
Addresses can be provided in checksum format or in lowercase. Envio accepts both and normalizes them internally.
Single address:
address: 0xContractAddress
Multiple addresses for the same contract:
contracts:
- name: MyContract
address:
- 0xAddress1
- 0xAddress2
If using a proxy contract, always use the proxy address, not the implementation address.
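The earlier note that checksum and lowercase addresses are interchangeable comes down to case-insensitive equality. A minimal sketch (assumption: plain lowercasing; Envio's internal normalization may differ in detail — the address is borrowed from the example config later in this document):

```typescript
// Both forms below refer to the same contract; config.yaml accepts either.
const checksummed = "0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e";
const lowercase = checksummed.toLowerCase();
// After lowercasing, the two spellings compare equal.
const sameAddress = lowercase === checksummed.toLowerCase();
```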
Global definitions:
You can also avoid repeating addresses by using global contract definitions:
contracts:
- name: Greeter
abi: greeter.json
networks:
- id: ethereum-mainnet
contracts:
- name: Greeter
address: 0xProxyAddressHere
Events Selection
Define specific events to index in a human-readable format:
events:
- event: "NewGreeting(address user, string greeting)"
- event: "ClearGreeting(address user)"
By default, all events defined in the contract are indexed, but you can selectively disable them by removing them from this list.
Custom Event Names
You can assign custom names to events in config.yaml. This is handy when
two events share the same name but have different signatures, or when you want
a more descriptive name in your Envio project.
events:
- event: Assigned(address indexed recipientId, uint256 amount, address token)
- event: Assigned(address indexed recipientId, uint256 amount, address token, address sender)
name: AssignedWithSender
Field Selection
To improve indexing performance and reduce credits usage, the block and transaction fields on events contain only a subset of the fields available on the blockchain.
To access fields that are not provided by default, specify them using the field_selection option for your event:
events:
- event: "Assigned(address indexed user, uint256 amount)"
field_selection:
transaction_fields:
- transactionIndex
block_fields:
- timestamp
See all possible options in the Config File Reference, or rely on your IDE's autocomplete.
Global Field Selection
You can also specify fields globally for all events in the root of the config file:
field_selection:
transaction_fields:
- hash
- gasUsed
block_fields:
- parentHash
Try to use this option sparingly as it can cause redundant Data Source calls and increased credits usage.
Field Selection per Event is available from envio@2.11.0 and above. Please upgrade your indexer to access this feature.
Rollback on Reorg
HyperIndex automatically handles blockchain reorganizations by default. To disable or customize this behavior, set the rollback_on_reorg flag in your config.yaml:
rollback_on_reorg: true # default is true
See detailed configuration options here.
Environment Variables
Since envio@2.9.0, environment variable interpolation is supported for flexibility and security:
networks:
- id: ${ENVIO_CHAIN_ID:-ethereum-mainnet}
contracts:
- name: Greeter
address: ${ENVIO_GREETER_ADDRESS}
Run your indexer with custom environment variables:
ENVIO_CHAIN_ID=optimism ENVIO_GREETER_ADDRESS=0xYourContractAddress pnpm dev
Interpolation syntax:
- ${ENVIO_VAR}: use the value of ENVIO_VAR
- ${ENVIO_VAR:-default}: use ENVIO_VAR if set, otherwise use default
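The interpolation rules above can be sketched as a small resolver. This is illustrative only — the function name and regex are assumptions, not Envio's implementation:

```typescript
// Resolve ${VAR} and ${VAR:-default} placeholders against an env map.
function interpolate(
  template: string,
  env: Record<string, string | undefined>
): string {
  return template.replace(
    /\$\{(\w+)(?::-([^}]*))?\}/g,
    // Unmatched default groups arrive as undefined, so "" kicks in.
    (_match: string, name: string, fallback: string = "") =>
      env[name] ?? fallback
  );
}

// Falls back to the default when the variable is unset:
const id = interpolate("${ENVIO_CHAIN_ID:-ethereum-mainnet}", {});
// Uses the env value when set:
const id2 = interpolate("${ENVIO_CHAIN_ID:-ethereum-mainnet}", {
  ENVIO_CHAIN_ID: "optimism",
});
```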
For more detailed information about environment variables, see our Environment Variables Guide.
Output Directory Path
You can customize the path where the generated directory will be placed using the output option:
output: ./custom/generated/path
By default, the generated directory is placed in generated relative to the current working directory. If set, it will be a path relative to the config file location.
This is an advanced configuration option. When using a custom output directory, you'll need to manually adjust your .gitignore file and project structure to match the new configuration.
Full config file example
This example indexes events from multiple contracts across multiple networks.
name: envio-indexer
unordered_multichain_mode: true
preload_handlers: true
contracts:
- name: PoolManager
handler: src/EventHandlers.ts
events:
- event: Swap(bytes32 indexed id, address indexed sender, int128 amount0, int128 amount1, uint160 sqrtPriceX96, uint128 liquidity, int24 tick, uint24 fee)
- name: PositionManager
handler: src/EventHandlers.ts
events:
- event: Transfer(address indexed from, address indexed to, uint256 indexed id)
networks:
- id: 1
# Keep it 0 and HyperSync will automatically find the first block for your contracts
start_block: 0
contracts:
- name: PositionManager
address:
- 0xbD216513d74C8cf14cf4747E6AaA6420FF64ee9e
start_block: 18500000 # OPTIONAL: Override for contract deployed later
- name: PoolManager
address:
- "0x000000000004444c5dc75cB358380D2e3dE08A90"
- id: 10
start_block: 0
contracts:
- name: PositionManager
address:
- 0x3C3Ea4B57a46241e54610e5f022e5c45859A1017
- name: PoolManager
address:
- 0x9a13F98Cb987694C9F086b1F5eB990EeA8264Ec3
- id: 42161
start_block: 0
contracts:
- name: PositionManager
address:
- 0xd88f38f930b7952f2db2432cb002e7abbf3dd869
- name: PoolManager
address:
- 0x360e68faccca8ca495c1b759fd9eee466db9fb32
Now your configuration file is set, you're ready to start indexing with HyperIndex!
Schema File
File: Guides/schema-file.md
The schema.graphql file defines the data model for your HyperIndex indexer. Each entity type defined in this schema corresponds directly to a database table, with your event handlers responsible for creating and updating the records. HyperIndex automatically generates a GraphQL API based on these entity types, allowing easy access to the indexed data.
Scalar Types
Scalar types represent basic data types and map directly to JavaScript, TypeScript, or ReScript types.
| GraphQL Scalar | Description | JavaScript/TypeScript | ReScript |
|---|---|---|---|
ID | Unique identifier | string | string |
String | UTF-8 character sequence | string | string |
Int | Signed 32-bit integer | number | int |
Float | Signed floating-point number | number | float |
Boolean | true or false | boolean | bool |
Bytes | UTF-8 character sequence (hex prefixed 0x) | string | string |
BigInt | Signed integer (int256 in Solidity) | bigint | bigint |
BigDecimal | Arbitrary-size floating-point | BigDecimal (imported) | BigDecimal.t |
Timestamp | Timestamp with timezone | Date | Js.Date.t |
Json | JSON object (from envio@2.20) | Json | Js.Json.t |
Learn more about GraphQL scalars here.
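To make the mapping in the table concrete, here is a hand-written sketch of the TypeScript shapes a few scalars translate to (the entity type and field names are assumptions for illustration, not generated code):

```typescript
// Illustrative entity shape covering several scalar mappings from the table.
type Sample = {
  id: string;      // ID
  name: string;    // String
  count: number;   // Int
  ratio: number;   // Float
  active: boolean; // Boolean
  data: string;    // Bytes: hex string, "0x"-prefixed
  amount: bigint;  // BigInt (int256-scale values fit natively)
  at: Date;        // Timestamp
};

const sample: Sample = {
  id: "token-1",
  name: "Example",
  count: 3,
  ratio: 0.5,
  active: true,
  data: "0x00",
  amount: 10n ** 18n, // BigInt literal, safe beyond Number.MAX_SAFE_INTEGER
  at: new Date(0),
};
```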
Enum Types
Enums allow fields to accept only a predefined set of values.
Example:
enum AccountType {
ADMIN
USER
}
type User {
id: ID!
balance: Int!
accountType: AccountType!
}
Enums translate to string unions (TypeScript/JavaScript) or polymorphic variants (ReScript):
TypeScript Example:
let user = {
id: event.params.id,
balance: event.params.balance,
accountType: "USER", // enum as string
};
ReScript Example:
let user: Types.userEntity = {
id: event.params.id,
balance: event.params.balance,
accountType: #USER, // polymorphic variant
};
Field Indexing (@index)
Add an index to a field for optimized queries and loader performance:
type Token {
id: ID!
tokenId: BigInt!
collection: NftCollection!
owner: User! @index
}
All id fields and fields referenced via @derivedFrom are indexed automatically.
Generating Types
Once you've defined your schema, run this command to generate the entity types that can be accessed in your event handlers:
pnpm envio codegen
You're now ready to define powerful schemas and efficiently query your indexed data with HyperIndex!
Event Handlers
File: Guides/event-handlers.mdx
Registration
A handler is a function that receives blockchain data, processes it, and inserts it into the database. You can register handlers in the file defined in the handler field in your config.yaml file. By default this is src/EventHandlers.* file.
import { ContractName } from "generated";
ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});
const { ContractName } = require("generated");
ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});
Handlers.ContractName.EventName.handler(async ({ event, context }) => {
  // Your logic here
});
The generated module contains code and types based on config.yaml and schema.graphql files. Update it by running pnpm codegen command whenever you change these files.
Basic Example
Here's a handler example for the NewGreeting event. It belongs to the Greeter contract from our beginners Greeter Tutorial:
// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
const userId = event.params.user; // The id for the User entity
const latestGreeting = event.params.greeting; // The greeting string that was added
const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist
// Update or create a new User entity
const userEntity: User = currentUserEntity
? {
id: userId,
latestGreeting,
numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
greetings: [...currentUserEntity.greetings, latestGreeting],
}
: {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
};
context.User.set(userEntity); // Set the User entity in the DB
});
const { Greeter } = require("generated");
// Handler for the NewGreeting event
Greeter.NewGreeting.handler(async ({ event, context }) => {
const userId = event.params.user; // The id for the User entity
const latestGreeting = event.params.greeting; // The greeting string that was added
const currentUserEntity = await context.User.get(userId); // Optional user entity that may already exist
// Update or create a new User entity
const userEntity = currentUserEntity
? {
id: userId,
latestGreeting,
numberOfGreetings: currentUserEntity.numberOfGreetings + 1,
greetings: [...currentUserEntity.greetings, latestGreeting],
}
: {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
};
context.User.set(userEntity); // Set the User entity in the DB
});
open Types
// Handler for the NewGreeting event
Handlers.Greeter.NewGreeting.handler(async ({event, context}) => {
let userId = event.params.user->Address.toString // The id for the User entity
let latestGreeting = event.params.greeting // The greeting string that was added
let maybeCurrentUserEntity = await context.user.get(userId) // Optional User entity that may already exist
// Update or create a new User entity
let userEntity: Entities.User.t = switch maybeCurrentUserEntity {
| Some(existingUserEntity) => {
id: userId,
latestGreeting,
numberOfGreetings: existingUserEntity.numberOfGreetings + 1,
greetings: existingUserEntity.greetings->Belt.Array.concat([latestGreeting]),
}
| None => {
id: userId,
latestGreeting,
numberOfGreetings: 1,
greetings: [latestGreeting],
}
}
context.user.set(userEntity) // Set the User entity in the DB
})
Preload Optimization
Important! Preload optimization makes your handlers run twice.
Starting from envio@2.27 all new indexers are created with preload optimization pre-configured by default.
This optimization enables HyperIndex to efficiently preload entities used by handlers through batched database queries, while ensuring events are processed synchronously in their original order. When combined with the Effect API for external calls, this feature delivers performance improvements of multiple orders of magnitude compared to other indexing solutions.
Read more in the dedicated guides:
- How Preload Optimization Works
- Double-Run Footgun
- Effect API
- Migrating from Loaders (recommended)
Advanced Use Cases
HyperIndex provides many features to help you build more powerful and efficient indexers. You'll likely find one that fits your use case:
- Handle Factory Contracts with Dynamic Contract Registration (with nested factories support)
- Perform external calls to decide which contract address to register using Async Contract Register
- Index all ERC20 token transfers with Wildcard Indexing
- Use Topic Filtering to ignore irrelevant events
- With multiple filters for single event
- With different filters per network
- With filter by dynamically registered contract addresses (e.g. index all ERC20 transfers to/from your contract)
- Access Contract State directly from handlers
- Perform external calls from handlers by following the IPFS Integration guide
Context Object
The handler context provides methods to interact with entities stored in the database.
Retrieving Entities
Retrieve entities from the database using context.Entity.get where Entity is the name of the entity you want to retrieve, which is defined in your schema.graphql file.
await context.Entity.get(entityId);
It returns the entity object, or undefined if the entity doesn't exist.
Starting from envio@2.22.0 you can use context.Entity.getOrThrow to conveniently throw an error if the entity doesn't exist:
const pool = await context.Pool.getOrThrow(poolId);
// Will throw: Entity 'Pool' with ID '...' is expected to exist.
// Or you can pass a custom message as a second argument:
const pool = await context.Pool.getOrThrow(
poolId,
`Pool with ID ${poolId} is expected.`
);
Or use context.Entity.getOrCreate to automatically create an entity with default values if it doesn't exist:
const pool = await context.Pool.getOrCreate({
id: poolId,
totalValueLockedETH: 0n,
});
// Which is equivalent to:
let pool = await context.Pool.get(poolId);
if (!pool) {
pool = {
id: poolId,
totalValueLockedETH: 0n,
};
context.Pool.set(pool);
}
Retrieving Entities by Field
ERC20.Approval.handler(async ({ event, context }) => {
// Find all approvals for this specific owner
const currentOwnerApprovals = await context.Approval.getWhere.owner_id.eq(
event.params.owner
);
// Process all the owner's approvals efficiently
for (const approval of currentOwnerApprovals) {
// Process each approval
}
});
You can also use context.<Entity>.getWhere.<field>.gt to get all entities where the field value is greater than the given value.
Important:
- This feature requires Preload Optimization to be enabled:
  - Either by preload_handlers: true in your config.yaml file
  - Or by using Loaders (Deprecated)
- It works with any field that:
  - Is used in a relationship with the @derivedFrom directive
  - Has an @index directive
- Potential memory issues: very large getWhere queries might cause memory overflows.
- Tip: put the getWhere query at the top of the handler to make sure it's preloaded. Read more about how Preload Optimization works.
Modifying Entities
Use context.Entity.set to create or update an entity:
context.Entity.set({
id: entityId,
...otherEntityFields,
});
Both context.Entity.set and context.Entity.deleteUnsafe methods use the In-Memory Storage under the hood and don't require await in front of them.
Referencing Linked Entities
When your schema defines a field that links to another entity type, set the relationship using _id with the referenced entity's id. You are storing the ID, not the full entity object.
type A {
id: ID!
b: B!
}
type B {
id: ID!
}
context.A.set({
id: aId,
b_id: bId, // ID of the linked B entity
});
HyperIndex automatically resolves A.b based on the stored b_id when querying the API.
Deleting Entities (Unsafe)
To delete an entity:
context.Entity.deleteUnsafe(entityId);
The deleteUnsafe method is experimental and unsafe. You need to manually handle all entity references after deletion to maintain database consistency.
Updating Specific Entity Fields
Use the following approach to update specific fields in an existing entity:
const pool = await context.Pool.get(poolId);
if (pool) {
context.Pool.set({
...pool,
totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
});
}
const pool = await context.Pool.get(poolId);
if (pool) {
context.Pool.set({
...pool,
totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
});
}
let pool = await context.pool.get(poolId);
pool->Option.forEach(pool => {
context.pool.set({
...pool,
totalValueLockedETH: pool.totalValueLockedETH.plus(newDeposit),
});
});
context.log
The context object also provides a logger that you can use to log messages to the console. Compared to console.log calls, these logs will be displayed on our Envio Cloud runtime logs page.
Read more in the Logging Guide.
context.isPreload
If you need to skip the preload phase for CPU-intensive operations or to perform certain actions only once per event, you can use context.isPreload.
ERC20.Transfer.handler(async ({ event, context }) => {
// Load existing data efficiently
const [sender, receiver] = await Promise.all([
context.Account.getOrThrow(event.params.from),
context.Account.getOrThrow(event.params.to),
]);
// Skip expensive operations during preload
if (context.isPreload) {
return;
}
// CPU-intensive calculations only happen once
const complexCalculation = performExpensiveOperation(event.params.value); // Placeholder function for demonstration
// Create or update sender account
context.Account.set({
id: event.params.from,
balance: sender.balance - event.params.value,
computedValue: complexCalculation,
});
// Create or update receiver account
context.Account.set({
id: event.params.to,
balance: receiver.balance + event.params.value,
});
});
Note: While context.isPreload can be useful for bypassing double execution, it's recommended to use the Effect API for external calls instead, as it provides automatic batching and memoization benefits.
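The memoization the note above attributes to the Effect API can be sketched with a simple promise cache. This is illustrative only — the names here are assumptions, not Envio's API:

```typescript
// Duplicate keys reuse the same in-flight promise, so the external
// call underneath runs at most once per key.
const cache = new Map<string, Promise<unknown>>();

function memoized<T>(key: string, fn: () => Promise<T>): Promise<T> {
  if (!cache.has(key)) cache.set(key, fn());
  return cache.get(key) as Promise<T>;
}

let calls = 0;
const fetchMetadata = () =>
  memoized("metadata:0xabc", async () => {
    calls += 1; // stands in for an external HTTP/RPC call
    return { value: 42n };
  });

// Two concurrent lookups (e.g. the preload run and the real run) share one call:
const inFlight = Promise.all([fetchMetadata(), fetchMetadata()]);
```

Caching the promise, rather than the resolved value, is what lets concurrent callers share a single request instead of racing.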
External Calls
The Envio indexer runs on the Node.js runtime, which means you can use fetch or any library like viem to perform external calls from your handlers.
Note that with Preload Optimization all handlers run twice, but the Effect API takes advantage of this behavior to run your external calls in parallel while keeping the processed data consistent.
Check out our IPFS Integration, Accessing Contract State and Effect API guides for more information.
context.effect
Define an effect and use it in your handler with context.effect:
import { S, createEffect } from "envio";
// Define an effect that will be called from the handler.
const getMetadata = createEffect(
{
name: "getMetadata",
input: S.string,
output: {
description: S.string,
value: S.bigint,
},
rateLimit: {
calls: 5,
per: "second",
},
cache: true, // Optionally persist the results in the database
},
async ({ input }) => {
const response = await fetch(`https://api.example.com/metadata/${input}`);
const data = await response.json();
return {
description: data.description,
value: data.value,
};
}
);
ERC20.Transfer.handler(async ({ event, context }) => {
// Load metadata for the token.
// This will be executed in parallel for all events in the batch.
// The call is automatically memoized, so you don't need to worry about duplicate requests.
const sender = await context.effect(getMetadata, event.params.from);
// Process the transfer with the pre-loaded data
});
Performance Considerations
For performance optimization and best practices, refer to:
- Benchmarking
- Preload Optimization
These guides offer detailed recommendations on optimizing entity loading and indexing performance.
Block Handlers (new in v2.29)
File: Guides/block-handlers.md
Run custom logic on every block, or on a configurable block interval.
Understanding Multichain Indexing
File: Advanced/multichain-indexing.mdx
Multichain indexing allows you to monitor and process events from contracts deployed across multiple blockchain networks within a single indexer instance. This capability is essential for applications that:
- Track the same contract deployed across multiple networks
- Need to aggregate data from different chains into a unified view
- Monitor cross-chain interactions or state
How It Works
With multichain indexing, events from contracts deployed on multiple chains can be used to create and update entities defined in your schema file. Your blockchain indexer will process events from all configured networks, maintaining proper synchronization across chains.
Configuration Requirements
To implement multichain indexing, you need to:
- Populate the networks section in your config.yaml file for each chain
- Specify contracts to index from each network
- Create event handlers for the specified contracts
Real-World Example: Uniswap V4 Multichain Indexer
For a comprehensive, production-ready example of multichain indexing, we recommend exploring our Uniswap V4 Multichain Indexer. This official reference implementation:
- Indexes Uniswap V4 deployments across 10 different blockchain networks
- Powers the official v4.xyz interface with real-time data
- Demonstrates best practices for high-performance multichain indexing
- Provides a complete, production-grade implementation you can study and adapt
[Image: Uniswap V4 multichain indexer]
The Uniswap V4 indexer showcases how to effectively structure a multichain indexer for a complex DeFi protocol, handling high volumes of data across multiple networks while maintaining performance and reliability.
Config File Structure for Multichain Indexing
The config.yaml file for multichain indexing contains three key sections:
- Global contract definitions - Define contracts, ABIs, and events once
- Network-specific configurations - Specify chain IDs and starting blocks
- Contract instances - Reference global contracts with network-specific addresses
# Example structure (simplified)
contracts:
- name: ExampleContract
abi_file_path: ./abis/example-abi.json
handler: ./src/EventHandlers.js
events:
- event: ExampleEvent
networks:
- id: 1 # Ethereum Mainnet
start_block: 0
contracts:
- name: ExampleContract
address: "0x1234..."
- id: 137 # Polygon
start_block: 0
contracts:
- name: ExampleContract
address: "0x5678..."
Key Configuration Concepts
- The global contracts section defines the contract interface, ABI, handlers, and events once
- The networks section lists each blockchain network you want to index
- Each network entry references the global contract and provides the network-specific address
- This structure allows you to reuse the same handler functions and event definitions across networks
Best Practice: When developing multichain indexers, append the chain ID to entity IDs to avoid collisions. For example: user-1 for Ethereum and user-137 for Polygon.
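The ID convention above is easy to centralize in a tiny helper so every handler composes IDs the same way (the helper name is an assumption for illustration):

```typescript
// Compose a chain-scoped entity ID so the same logical entity on
// different networks never collides in the database.
function chainScopedId(chainId: number, id: string): string {
  return `${id}-${chainId}`;
}

const ethUser = chainScopedId(1, "user");       // Ethereum Mainnet
const polygonUser = chainScopedId(137, "user"); // Polygon
```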
Multichain Event Ordering
When indexing multiple chains, you have two approaches for handling event ordering:
Unordered Multichain Mode
Unordered mode is recommended for most applications.
The indexer processes events as soon as they're available from each chain, without waiting for other chains. This "Unordered Multichain Mode" provides better performance and lower latency.
- Events will still be processed in order within each individual chain
- Events across different chains may be processed out of order
- Processing happens as soon as events are emitted, reducing latency
- You avoid waiting for the slowest chain's block time
This mode is ideal for most applications, especially when:
- Operations on your entities are commutative (order doesn't matter)
- Entities from different networks never interact with each other
- Processing speed is more important than guaranteed cross-chain ordering
How to Enable Unordered Modeβ
In your config.yaml:
unordered_multichain_mode: true
networks: ...
Ordered Modeβ
Ordered mode is currently the default, but unordered mode will become the default in a future release. If you don't need strict deterministic ordering of events across all chains, we recommend using unordered mode.
If your application requires strict deterministic ordering of events across all chains, you can enable "Ordered Mode". In this mode, the indexer synchronizes event processing across all chains, ensuring that events are processed in the exact same order in every indexer run, regardless of which chain they came from.
When to Use Ordered Modeβ
Use ordered mode only when:
- The exact ordering of operations across different chains is critical to your application logic
- You need guaranteed deterministic results across all indexer runs
- You're willing to accept higher latency for cross-chain consistency
Cross-chain ordering is particularly important for applications like:
- Bridge applications: Where messages or assets must be processed on one chain before being processed on another chain
- Cross-chain governance: Where decisions made on one chain affect operations on another chain
- Multi-chain financial applications: Where the sequence of transactions across chains affects accounting or risk calculations
- Data consistency systems: Where the state must be consistent across multiple chains in a specific order
Technical Detailsβ
With ordered mode enabled:
- The indexer must wait for new blocks from every network before processing can advance
- There is increased latency between when an event is emitted and when it's processed
- Processing speed is limited by the block interval of the slowest network
- Events are guaranteed to be processed in the same order in every indexer run
Cross-Chain Ordering Preservationβ
Ordered mode ensures that the temporal relationship between events on different chains is preserved. This is achieved by:
- Global timestamp ordering: Events are ordered based on their block timestamps across all chains
- Deterministic processing: The same sequence of events will be processed in the same order every time
The primary trade-off is increased latency at the head of the chain. Since the indexer must wait for blocks from all chains to determine the correct ordering, the processing of recent events is delayed by the slowest chain's block time. For example, if Chain A has 2-second blocks and Chain B has 15-second blocks, the indexer will process events at the slower 15-second rate to maintain proper ordering.
This latency is acceptable for applications where correct cross-chain ordering is more important than real-time updates. For bridge applications in particular, this ordering preservation can be critical for security and correctness, as it ensures that deposit events on one chain are always processed before the corresponding withdrawal events on another chain.
Best Practices for Multichain Indexingβ
1. Entity ID Namespacingβ
Always namespace your entity IDs with the chain ID to prevent collisions between networks. This ensures that entities from different networks remain distinct.
2. Error Handlingβ
Implement robust error handling for network-specific issues. A failure on one chain shouldn't prevent indexing from continuing on other chains.
3. Testingβ
- Test your indexer with realistic scenarios across all networks
- Use testnet deployments for initial validation
- Verify entity updates work correctly across chains
4. Performance Considerationsβ
- Use unordered mode when appropriate for better performance
- Consider your indexing frequency based on the block times of each chain
- Monitor resource usage, as indexing multiple chains increases load
- Adding more chains does not linearly degrade performance β chains are indexed in parallel. However, handler logic that writes to shared entities across chains may introduce contention when using ordered mode.
5. Adding a New Chain to an Existing Indexerβ
To add a new chain to a running indexer:
- Add the new network entry to your config.yaml with the appropriate start_block and contract addresses
- Push the updated code to your deployment branch (for Envio Cloud), or restart locally with pnpm envio start -r
On Envio Cloud, this creates a new deployment that re-indexes all chains (including the new one). Your previous deployment continues serving queries with zero downtime until the new deployment is fully synced. See the deployment guide for details.
Locally, adding a new chain requires a restart and will re-index all chains from their respective start blocks.
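For example, following the config structure shown earlier, adding a new network is a single additional entry under networks (addresses below are placeholders; chain ID 8453 is Base):

```yaml
networks:
  - id: 1 # Ethereum Mainnet (already indexed)
    start_block: 0
    contracts:
      - name: ExampleContract
        address: "0x1234..."
  - id: 8453 # Base (newly added chain)
    start_block: 0
    contracts:
      - name: ExampleContract
        address: "0x5678..."
```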
Troubleshooting Common Issuesβ
- Different Network Speeds: If one network is significantly slower than others, consider using unordered mode to prevent bottlenecks.
- Entity Conflicts: If you see unexpected entity updates, verify that your entity IDs are properly namespaced with chain IDs.
- Memory Usage: If your indexer uses excessive memory, consider optimizing your entity structure and implementing pagination in your queries.
Next Stepsβ
- Explore our Uniswap V4 Multichain Indexer for a complete implementation
- Review performance optimization techniques for your indexer
Testingβ
File: Guides/testing.mdx
Introductionβ
Envio comes with a built-in testing library that enables developers to thoroughly validate their indexer behavior without requiring deployment or interaction with actual blockchains. This library is specifically crafted to:
- Mock database states: Create and manipulate in-memory representations of your database
- Simulate blockchain events: Generate test events that mimic real blockchain activity
- Assert event handler logic: Verify that your handlers correctly process events and update entities
- Test complete workflows: Validate the entire process from event creation to database updates
The testing library provides helper functions that integrate with any JavaScript-based testing framework (like Mocha, Jest, or others), giving you flexibility in how you structure and run your tests.
Learn by doingβ
If you prefer to explore by example, the Greeter template includes complete tests that demonstrate best practices:
- Generate the greeter template in TypeScript using the Envio CLI:
pnpx envio@3.0.0-rc.0 init template -l typescript -d greeter -t greeter -n greeter
- Run the tests:
pnpm test
- See the test/test.ts file to understand how the tests are written.
Writing testsβ
Test Library Designβ
The testing library follows key design principles that make it effective for testing HyperIndex indexers:
- Immutable database: The mock database is immutable, with each operation returning a new instance. This makes it robust and easy to test against previous states.
- Chainable operations: Operations can be chained together to build complex test scenarios.
- Realistic simulations: Mock events closely mirror real blockchain events, allowing you to test your handlers in conditions similar to production.
Typical Test Flowβ
Most tests will follow this general pattern:
- Initialize the mock database (empty or with predefined entities)
- Create a mock event with test parameters
- Process the mock event through your handler(s)
- Assert that the resulting database state matches your expectations
This flow allows you to verify that your event handlers correctly create, update, or modify entities in response to blockchain events.
Assertionsβ
The testing library works with any JavaScript assertion library. In the examples, we use Node.js's built-in assert module, but you can also use popular alternatives like chai or expect.
Common assertion patterns include:
- assert.deepEqual(expectedEntity, actualEntity) - Check that entire entities match
- assert.equal(expectedValue, actualEntity.property) - Verify specific property values
- assert.ok(updatedMockDb.entities.Entity.get(id)) - Ensure an entity exists
Troubleshootingβ
If you encounter issues with your tests, check the following:
Environment and Setupβ
- Verify your Envio version: The testing library is available in versions v0.0.26 and above.
pnpm envio -v
- Ensure you've generated testing code: Always run codegen after updating your schema or config.
pnpm codegen
- Check your imports: Make sure you're importing the correct files. In JavaScript/TypeScript:
const assert = require("assert");
const { UserEntity, TestHelpers } = require("generated");
const { MockDb, Greeter, Addresses } = TestHelpers;
In ReScript:
open RescriptMocha
open Mocha
open Belt
Common Issues and Solutionsβ
- "Cannot read properties of undefined": This usually means an entity wasn't found in the database. Verify your IDs match exactly and that the entity exists before accessing it.
- "Type mismatch": Ensure that your entity structure matches what's defined in your schema. Type issues are common when working with numeric types (like BigInt vs number).
- ReScript-specific setup: If using ReScript, remember to update your rescript.json file:
{
  "sources": [
    { "dir": "src", "subdirs": true },
    { "dir": "test", "subdirs": true }
  ],
  "bs-dependencies": ["rescript-mocha"]
}
- Debug database state: If you're having trouble with assertions, add a debug log to see the exact state of your entities:
console.log(
JSON.stringify(updatedMockDb.entities.User.get(userAddress), null, 2)
);
If you encounter any issues or have questions, please reach out to us on Discord
Navigating Hasuraβ
File: Guides/navigating-hasura.md
This page is only relevant when testing on a local machine or using a self-hosted version of Envio that uses Hasura.
Introductionβ
Hasura is a GraphQL engine that provides a web interface for interacting with your indexed blockchain data. When running HyperIndex locally, Hasura serves as your primary tool for:
- Querying indexed data via GraphQL
- Visualizing database tables and relationships
- Testing API endpoints before integration with your frontend
- Monitoring the indexing process
This guide explains how to navigate the Hasura dashboard to effectively work with your indexed data.
Accessing Hasura Consoleβ
When running HyperIndex locally, Hasura Console is automatically available at:
http://localhost:8080
You can access this URL in any web browser to open the Hasura console.
When prompted for authentication, use the password: testing
Key Dashboard Areasβ
The Hasura dashboard has several tabs, but we'll focus on the two most important ones for HyperIndex developers:
API Tabβ
The API tab lets you execute GraphQL queries and mutations on indexed data. It serves as a GraphQL playground for testing your API calls.
Featuresβ
- Explorer Panel: The left panel shows all available entities defined in your schema.graphql file
- Query Builder: The center area is where you write and execute GraphQL queries
- Results Panel: The right panel displays query results in JSON format
Available Entitiesβ
By default, you'll see:
- All entities defined in your schema.graphql file
- dynamic_contracts (for dynamically added contracts)
- The raw_events table (Note: this table is no longer populated by default to improve performance. To enable storage of raw events, add raw_events: true to your config.yaml file as described in the Raw Events Storage section)
Example Queryβ
Try a simple query to test your blockchain indexer:
query MyQuery {
User(limit: 5) {
id
latestGreeting
numberOfGreetings
}
}
Click the "Play" button to execute the query and see the results.
For more advanced GraphQL query options, see Hasura's quickstart guide.
Data Tabβ
The Data tab provides direct access to your database tables and relationships, allowing you to view the actual indexed data.
Featuresβ
- Schema Browser: View all tables in the database (left panel)
- Table Data: Examine and browse data within each table
- Relationship Viewer: See how different entities are connected
Working with Tablesβ
- Select any table from the "public" schema to view its contents
- Use the "Browse Rows" tab to see all data in that table
- Check the "Insert Row" tab to manually add data (useful for testing)
- View the "Modify" tab to see the table structure
Verifying Indexed Dataβ
To confirm your blockchain indexer is working correctly:
- Check entity tables to ensure they contain the expected data
- Look at the db_write_timestamp column values to confirm when data was last updated
- Newer timestamps indicate fresh data; older timestamps might indicate stale data from previous runs
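For example, assuming the User entity from the Greeter template, a query like the following in the API tab surfaces the most recent write (Hasura exposes db_write_timestamp as an orderable column):

```graphql
query LatestWrite {
  User(order_by: { db_write_timestamp: desc }, limit: 1) {
    id
    db_write_timestamp
  }
}
```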
Common Tasksβ
Checking Indexing Statusβ
To verify your blockchain indexer is actively processing new blocks:
- Go to the Data tab
- Select any entity table
- Check the latest db_write_timestamp values
- Monitor these values over time to ensure they're updating
(Note: the TUI is also an easy way to monitor this.)
Troubleshooting Missing Dataβ
If expected data isn't appearing:
- Check if you've enabled raw events storage (raw_events: true in config.yaml), then examine the raw_events table to confirm events were captured
- Verify your event handlers are correctly processing these events
- Examine your GraphQL queries to ensure they match your schema structure
- Check console logs for any processing errors
Resetting Indexed Dataβ
When testing, you may need to reset your database:
- Stop your indexer
- Reset your database (refer to the development guide for commands)
- Restart your indexer to begin processing from the configured start block
Best Practicesβ
- Regular Verification: Periodically check both the API and Data tabs to ensure your blockchain indexer is functioning correctly
- Query Testing: Test complex queries in the API tab before implementing them in your application
- Schema Validation: Use the Data tab to verify that relationships between entities are correctly established
- Performance Monitoring: Watch for tables that grow unusually large, which might indicate inefficient indexing
Aggregations: local vs hosted (avoid the foot-gun)β
When developing locally with Hasura, you may notice that GraphQL aggregate helpers (for example, count/sum-style aggregations) are available. On Envio Cloud, these aggregate endpoints are intentionally not exposed. Aggregations over large datasets can be very slow and unpredictable in production.
The recommended approach is to compute and store aggregates at indexing time, not at query time. In practice this means maintaining counters, sums, and other rollups in entities as part of your event handlers, and then querying those precomputed values.
Example: indexing-time aggregationβ
schema.graphql
# Singleton entity: hardcode the id and load/update it in your handlers
type GlobalState {
  id: ID! # "global-state"
  count: Int!
}

type Token {
  id: ID! # incremental number
  description: String!
}
EventHandler.ts
const globalStateId = "global-state";

NftContract.Mint.handler(async ({ event, context }) => {
  const globalState = await context.GlobalState.get(globalStateId);
  if (!globalState) {
    context.log.error("global state doesn't exist");
    return;
  }
  const incrementedTokenId = globalState.count + 1;
  context.Token.set({
    // Entity ids are strings (ID!), so convert the numeric counter
    id: incrementedTokenId.toString(),
    description: event.params.description,
  });
  context.GlobalState.set({
    ...globalState,
    count: incrementedTokenId,
  });
});
This pattern scales: you can keep per-entity counters, rolling windows (daily/hourly entities keyed by date), and top-N caches by updating entities as events arrive. Your queries then read these precomputed values directly, avoiding expensive runtime aggregations.
Exceptional casesβ
If runtime aggregate queries are a hard requirement for your use case, please reach out and we can evaluate options for your project on Envio Cloud. Contact us on Discord.
Disable Hasura for Self-Hosted Blockchain Indexersβ
Starting from envio@2.26.0 it's possible to disable Hasura integration for self-hosted blockchain indexers. To do so, set the ENVIO_HASURA environment variable to false.
Environment Variablesβ
File: Guides/environment-variables.md
Environment variables are a crucial part of configuring your Envio blockchain indexer. They allow you to manage sensitive information and configuration settings without hardcoding them in your codebase.
Naming Conventionβ
All environment variables used by Envio must be prefixed with ENVIO_. This naming convention:
- Prevents conflicts with other environment variables
- Makes it clear which variables are used by the Envio indexer
- Ensures consistency across different environments
Envio API Token (required for HyperSync)β
To ensure continued access to HyperSync, set an Envio API token in your environment.
- Use ENVIO_API_TOKEN to provide your token at runtime
- See the API Tokens guide for how to generate a token
Envio-specific environment variablesβ
The following variables are used by HyperIndex:
- ENVIO_API_TOKEN: API token for HyperSync access (required for continued access in self-hosted deployments)
- ENVIO_HASURA: Set to false to disable Hasura integration for self-hosted blockchain indexers
- MAX_BATCH_SIZE: Size of the in-memory batch before writing to the database. Default: 5000. Set to 1 to help isolate which event or data save is causing Postgres write errors.
- ENVIO_PG_PORT: Port for the Postgres service used by HyperIndex during local development
- ENVIO_PG_PASSWORD: Postgres password (self-hosted)
- ENVIO_PG_USER: Postgres username (self-hosted)
- ENVIO_PG_DATABASE: Postgres database name (self-hosted)
- ENVIO_PG_PUBLIC_SCHEMA: Postgres schema name override for the generated/public schema
Example Environment Variablesβ
Here are some commonly used environment variables:
# Envio API Token (required for continued HyperSync access)
ENVIO_API_TOKEN=your-secret-token
# Blockchain RPC URL
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
# Starting block number for indexing
ENVIO_START_BLOCK=12345678
# Coingecko API key
ENVIO_COINGECKO_API_KEY=api-key
# In-memory batch size (default 5000)
MAX_BATCH_SIZE=1
Setting Environment Variablesβ
Local Developmentβ
For local development, you can set environment variables in several ways:
- Using a .env file in your project root:
# .env
ENVIO_API_TOKEN=your-secret-token
ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
ENVIO_START_BLOCK=12345678
- Directly in your terminal:
export ENVIO_API_TOKEN=your-secret-token
export ENVIO_RPC_URL=https://arbitrum.direct.dev/your-api-key
Envio Cloudβ
When using Envio Cloud, you can configure environment variables through the Envio platform's dashboard. Remember that all variables must still be prefixed with ENVIO_.
For more information about environment variables in Envio Cloud, see the Envio Cloud documentation.
Configuration Fileβ
For use of environment variables in your configuration file, read the docs here: Configuration File.
Best Practicesβ
- Never commit sensitive values: Always use environment variables for sensitive information like API keys and database credentials
- Never commit or use private keys: Never commit or use private keys in your codebase
- Use descriptive names: Make your environment variable names clear and descriptive
- Document your variables: Keep a list of required environment variables in your project's README
- Use different values: Use different environment variables for development, staging, and production environments
- Validate required variables: Check that all required environment variables are set before starting your blockchain indexer
Troubleshootingβ
If you encounter issues with environment variables:
- Verify that all required variables are set
- Check that variables are prefixed with ENVIO_
- Ensure there are no typos in variable names
- Confirm that the values are correctly formatted
For more help, see our Troubleshooting Guide.
MCP Serverβ
File: Guides/mcp-server.md
Envio provides a Model Context Protocol (MCP) server that lets AI coding assistants search and retrieve documentation directly. This means tools like Claude Code, Cursor, and other MCP-compatible clients can access Envio docs without you needing to copy-paste context manually.
Endpointβ
https://docs.envio.dev/mcp
Available Toolsβ
The MCP server exposes two tools:
| Tool | Description |
|---|---|
| docs_search | Full-text search across all documentation. Returns matching pages with titles, URLs, and content snippets. |
| docs_fetch | Retrieves the full content of a documentation page as markdown. |
Setupβ
Claude Codeβ
claude mcp add --transport http envio-docs https://docs.envio.dev/mcp
Cursor / VS Codeβ
Add the following to your MCP configuration (.cursor/mcp.json or VS Code MCP settings):
{
"mcpServers": {
"envio-docs": {
"url": "https://docs.envio.dev/mcp"
}
}
}
Other MCP Clientsβ
Point any MCP-compatible client to the endpoint URL above using the Streamable HTTP transport.
Uniswap V4 Multichain Indexerβ
File: Examples/example-uniswap-v4.md
The following blockchain indexer example is a reference implementation and can serve as a starting point for applications with similar logic.
This official Uniswap V4 indexer is a comprehensive implementation for the Uniswap V4 protocol using Envio HyperIndex. This is the same indexer that powers the v4.xyz website, providing real-time data for the Uniswap V4 interface.
Key Featuresβ
- Multichain Support: Indexes Uniswap V4 deployments across 10 different blockchain networks in real-time
- Complete Pool Metrics: Tracks pool statistics including volume, TVL, fees, and other critical metrics
- Swap Analysis: Monitors swap events and liquidity changes with high precision
- Hook Integration: In-progress support for Uniswap V4 hooks and their events
- Production Ready: Powers the official v4.xyz interface with production-grade reliability
- Ultra-Fast Syncing: Processes massive amounts of blockchain data significantly faster than alternative blockchain indexing solutions, reducing sync times from days to minutes
Technical Overviewβ
This indexer is built using TypeScript and provides a unified GraphQL API for accessing Uniswap V4 data across all supported networks. The architecture is designed to handle high throughput and maintain consistency across different blockchain networks.
Performance Advantagesβ
The Envio-powered Uniswap V4 indexer offers extraordinary performance benefits:
- 10-100x Faster Sync Times: Leveraging Envio's HyperSync technology, this indexer can process historical blockchain data orders of magnitude faster than traditional solutions
- Real-time Updates: Maintains low latency for new blocks while efficiently managing historical data
Use Casesβ
- Power analytics dashboards and trading interfaces
- Monitor DeFi positions and protocol health
- Track historical performance of Uniswap V4 pools
- Build custom notifications and alerts
- Analyze hook interactions and their impact
Getting Startedβ
To use this indexer, you can:
- Clone the repository
- Follow the installation instructions in the README
- Run the indexer locally or deploy it to a production environment
- Access indexed data through the GraphQL API
Contributionβ
The Uniswap V4 indexer is actively maintained and welcomes contributions from the community. If you'd like to contribute or report issues, please visit the GitHub repository.
This is an official reference implementation that powers the v4.xyz website. While extensively tested in production, remember to validate the data for your specific use case. The indexer is continuously updated to support the latest Uniswap V4 features and optimizations.
Sablier Protocol Indexersβ
File: Examples/example-sablier.md
The following blockchain indexers serve as exceptional reference implementations for the Sablier protocol, showcasing professional development practices and efficient multichain data processing.
Overviewβ
Sablier is a token streaming protocol that enables real-time finance on the blockchain, allowing tokens to be streamed continuously over time. These official Sablier indexers track streaming activity across 18 different EVM-compatible chains, providing comprehensive data through a unified GraphQL API.
Professional Indexer Suiteβ
Sablier maintains three specialized indexers, each targeting a specific part of their protocol:
1. Lockup Indexerβ
Tracks the core Sablier lockup contracts, which handle the streaming of tokens with fixed durations and amounts. This indexer provides data about stream creation, cancellation, and withdrawal events. Used primarily for the vesting functionality of Sablier.
2. Flow Indexerβ
Monitors Sablier's advanced streaming functionality, allowing for dynamic flow rates and more complex streaming scenarios. This indexer captures stream modifications, batch operations, and other flow-specific events. Powers the payments side of the Sablier application.
3. Airdrops Indexerβ
Tracks Sablier's Merkle Airdrops, which enable efficient batch stream creation using cryptographic proofs. This indexer captures data about batch creations, claims, and related activities. Used for both Airstreams and Instant Airdrops functionality.
Key Featuresβ
- Comprehensive Multichain Support: Indexes data across 18 different EVM chains
- Professionally Maintained: Used in production by the Sablier team and their partners
- Extensive Test Coverage: Includes comprehensive testing to ensure data accuracy
- Optimized Performance: Implements efficient data processing techniques
- Well-Documented: Clear code structure with extensive comments
- Backward Compatibility: Carefully manages schema evolution and contract upgrades
- Cross-chain Architecture: Envio promotes efficient cross-chain indexing where all networks share the same indexer endpoint
Best Practices Showcaseβ
These blockchain indexers demonstrate several development best practices:
- Modular Code Structure: Well-organized code with clear separation of concerns
- Consistent Naming Conventions: Professional and consistent naming throughout
- Efficient Event Handling: Optimized processing of blockchain events
- Comprehensive Entity Relationships: Well-designed data model with proper relationships
- Thorough Input Validation: Robust error handling and input validation
- Detailed Changelogs: Documentation of breaking changes and migrations
- Handler/Loader Pattern: Envio indexers use an optimized pattern with loaders to pre-fetch entities and handlers to process them
Getting Startedβ
To use these indexers as a reference for your own development:
- Clone the specific repository based on your needs:
- Review the file structure and implementation patterns
- Examine the event handlers for efficient data processing techniques
- Study the schema design for effective entity modeling
For complete API documentation and usage examples, see:
These are official indexers maintained by the Sablier team and represent production-quality implementations. They serve as an excellent example of professional blockchain indexer development and are regularly updated to support the latest protocol features.
Hosted Serviceβ
File: Hosted_Service/hosted-service.md
Envio Cloud (formerly Hosted Service) is a fully managed hosting solution for your blockchain indexers, providing all the infrastructure, scaling, and monitoring needed to run production-grade indexers without operational overhead.
Envio Cloud offers multiple plans to suit different needs, from free development environments to enterprise-grade dedicated hosting. Each plan includes powerful features like static production endpoints, built-in alerts, and production-ready infrastructure.
Deployment Optionsβ
Envio provides flexibility in how you deploy and host your indexers:
-
Envio Cloud (Fully Managed): Let Envio handle everything. The following sections of this page outline Envio Cloud in more detail. This is the recommended deployment method for most users and removes the hosting overhead for your team. See below for all the features we provide, and see the Pricing & Billing page for more information on which plan suits your indexing needs.
-
Self-Hosting: Run your indexer on your own infrastructure. This requires advanced setup and infrastructure knowledge that is not unique to Envio. See the following repository for a simple Docker example to get you started. Note that this example does not cover all infrastructure-related needs; for production self-hosting, it is recommended to use at least a separate Postgres management tool. For further instructions, see the Self-Hosting Guide.
Key Featuresβ
- Git-based Deployments: Similar to Vercel, deploy your indexer by simply pushing to a designated deployment branch
- Zero Infrastructure Management: We handle all the servers, databases, and scaling for you
- Static Production Endpoints: Consistent URLs with zero-downtime deployments and instant version switching
- Built-in Monitoring: Track logs, sync status, and deployment health in real-time
- Comprehensive Alerting: Multi-channel notifications (Discord, Slack, Telegram, Email) for critical issues, performance warnings, and deployment updates
- Security Features: IP/Domain whitelisting to control access to your indexer endpoints
- GraphQL API: Access your indexed data through a performant, production-ready GraphQL endpoint
- Multichain Support: Deploy indexers that track multiple networks from a single codebase
Deployment Modelβ
Envio Cloud provides a seamless GitHub-integrated deployment workflow:
- GitHub Integration: Install the Envio Deployments GitHub App to connect your repositories
- Flexible Configuration: Support for monorepos with configurable root directories, config file locations, and deployment branches
- Automatic Deployments: Push to your deployment branch to trigger builds and deployments
- Version Management: Maintain multiple deployment versions with one-click switching and rollback capabilities
- Real-time Monitoring: Track deployment progress, logs, and sync status through the dashboard
Multiple Indexers: Deploy several indexers from a single repository using different configurations, branches, or directories.
You can view and manage your hosted indexers in the Envio Explorer.
Getting Startedβ
- Features - Learn about all available Envio Cloud features
- Deployment Guide - Step-by-step instructions for deploying your indexer
- Envio Cloud CLI - Manage and monitor your hosted indexers from the command line
- Pricing & Billing - Compare plans and pricing options
- Self-Hosting - Run your indexer on your own infrastructure
It is recommended that before deploying to Envio Cloud, the indexer is built and tested locally to ensure it runs smoothly. For a complete list of local CLI commands to develop your indexer, see the CLI Commands documentation.
Envio Cloud Featuresβ
File: Hosted_Service/hosted-service-features.md
Envio Cloud includes several production-ready features to help you manage and secure your blockchain indexer deployments.
Most features listed on this page are available for paid production plans only. The free development plan has limited features and is designed for testing and development purposes. View our pricing plans to see what's included in each plan.
Deployment Tagsβ
Organize and identify your deployments with custom key/value tags. Tags help you categorize deployments by environment, project, team, or any custom attribute that fits your workflow.
How it works:
- Add up to 5 custom tags per deployment via the deployment overview page
- Each tag consists of a key (max 20 characters) and a value (max 20 characters, automatically lowercased)
- Click "+ Add Tag" to create new tags, or click existing tags to edit or delete them
Special name Tag:
The name tag has special behavior: when set, its value is displayed directly on the deployment list, making it easy to identify deployments at a glance without navigating into each one.
Example Use Cases:
- `name: staging` or `name: production` to quickly identify deployment purpose
- `env: staging` / `env: production` to categorize by environment
- `team: frontend` to organize by team ownership
- `version: v2` to track deployment versions
Benefits:
- Quickly identify deployments in the list view
- Organize deployments across multiple projects or environments
- Add context and metadata to your deployments
- Filter and locate deployments more efficiently
IP Whitelisting
Availability: Paid plans only
Control access to your indexer by restricting requests to specific IP addresses. This security feature helps protect your data and ensures only authorized clients can query your indexer.
Benefits:
- Enhanced security for sensitive data
- Prevent unauthorized access
- Control API usage from specific sources
- Ideal for production environments with strict access requirements
Effect API Cache
Availability: Medium plans and up
Speed up your indexer deployments by caching Effect API results. When enabled, new deployments will start with preloaded effect data, eliminating the need to re-fetch external data and significantly reducing sync time.
How it works:
- Save a Cache: From any deployment, click "Save Cache" to capture the current effect data
- Configure Settings: Navigate to Settings > Cache to manage your caches
- Enable Caching: Toggle caching on and select which cache to use for new deployments
- Deploy: New deployments will automatically restore from the selected cache
Key Features:
- Quick Save: Save cache directly from the deployment page with one click
- Cache Management: View, select, and delete caches from the Cache settings page
- Automatic Restore: New deployments preload effect data from the active cache
- Download Cache: Download caches for local development, enabling faster iteration without re-fetching external data
Benefits:
- Dramatically faster deployment sync times
- Reduced external API calls during indexing
- Seamless deployment updates with preserved effect state
Learn more about the Effect API and how caching works in our Effect API documentation.
This feature is only available for blockchain indexers deployed with version 2.26.0 or higher.
Built-in Alerts
Availability: Paid plans only
Stay informed about your indexer's health and performance with our integrated alerting system. Configure multiple notification channels and choose which alerts you want to receive.
This feature is only available for blockchain indexers deployed with version 2.24.0 or higher.
Notification Channels
Configure one or multiple notification channels to receive alerts:
- Discord
- Slack
- Telegram
Zero-Downtime Deployments
Update your blockchain indexer without any service interruption using our seamless deployment system with static production endpoints.
How it works:
- Deploy new versions alongside your current deployment
- Each indexer gets a static production endpoint that remains consistent
- Use 'Promote to Production' to instantly route the static endpoint to any deployment
- All requests to your static production endpoint are automatically routed to the promoted deployment
- Maintain API availability throughout upgrades with no endpoint changes required
Key Features:
- Static Production Endpoint: Consistent URL that never changes, regardless of which deployment is active
- Instant Switching: Promote any deployment to production with zero downtime
- Rollback Capabilities: Quickly switch back to previous deployments if needed
- Seamless Updates: Your applications continue working without any configuration changes
Deployment Location Choice
Full support for cross-region deployments is in active development. If you require a deployment based in the USA, please contact us through our support channel on Discord.
Availability: Dedicated plans only
Choose your primary deployment region to optimize performance and meet compliance requirements.
Available Regions:
- USA
- EU
Benefits:
- Reduced latency for your target users
- Data residency compliance support
- Custom infrastructure configurations
- Dedicated infrastructure resources
Direct Database Access
Availability: Dedicated plans only
Access your indexed data directly through SQL queries, providing flexibility beyond the standard GraphQL endpoint.
Use Cases:
- Complex analytical queries
- Custom data exports
- Advanced reporting and dashboards
- Integration with external analytics tools
Powerful Analytics Solution
Availability: Dedicated plans only (additional cost)
A comprehensive analytics platform that automatically pipes your indexed data from PostgreSQL into ClickHouse (approximately 2 minutes behind real-time) and provides access through a hosted Metabase instance.
Technical Architecture:
- Data Pipeline: Automatic replication from PostgreSQL to ClickHouse
- Near Real-time: Data available in an analytics platform within ~2 minutes
- Frontend: Hosted Metabase instance for visualization and analysis
- Performance: ClickHouse optimized for analytical queries on large datasets
Capabilities:
- Interactive, customizable dashboards through Metabase
- Variety of visualization options (charts, graphs, tables, maps)
- Fast analytical queries on large datasets via ClickHouse
- Ad-hoc SQL queries for data exploration
- Automated alerts based on data thresholds
- Team collaboration and report sharing
- Export capabilities for further analysis
For deployment instructions and limits, see our Deployment Guide. For pricing and feature availability by plan, see our Billing & Pricing page.
Deploying Your Indexer
File: Hosted_Service/hosted-service-deployment.md
Envio Cloud provides a seamless git-based deployment workflow, similar to modern platforms like Vercel. This enables you to easily deploy, update, and manage your blockchain indexers through your normal development workflow.
Prerequisites & Important Information
Requirements
- Version Support: We strongly advise using the latest release version for improved deployment performance. Envio Cloud requires a minimum version of at least `2.21.5`. Additionally, the following versions are not supported on Envio Cloud: `2.29.x`.
- PNPM Support: the deployment must be compatible with pnpm version `10.32.0`.
- Repository Folder:
  - Package.json: a `package.json` file must be present in the root folder and support the above two requirements, with the envio version explicitly configured in the dependencies.
  - Configuration file: a HyperIndex configuration file must be present.
  The root folder and configuration file name can be set in the indexer settings.
- GitHub Repository: The repository must be no larger than `100MB`. Caching between deployments is supported for paid plans using the Effects API.
- Node Version: It is strongly recommended that the indexer is compatible with Node version 24 or higher.
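For reference, a minimal `package.json` that satisfies the pnpm and envio version requirements might look like the following (the package name and version numbers are illustrative placeholders, not recommendations):

```json
{
  "name": "my-indexer",
  "packageManager": "pnpm@10.32.0",
  "dependencies": {
    "envio": "2.27.0"
  }
}
```

The key point is that the envio version is pinned explicitly in `dependencies` rather than resolved implicitly.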
Before deploying your indexer, please be aware of the limits and policies below.
Deployment Limits
- 3 development plan indexers per organization
- 3 deployments per indexer
- Deployments can be deleted in Envio Cloud to make space for more deployments
Development Plan Fair Usage Policy
The free development plan includes automatic deletion policies to ensure fair resource allocation:
Automatic Deletion Rules:
- Hard Limits:
- Deployments that exceed 20GB of storage will be automatically deleted
- Deployments older than 30 days will be automatically deleted
- Soft Limits (whichever comes first):
- 100,000 events processed
- 5GB storage used
- No requests for 7 days
When soft limits are breached, the two-stage deletion process begins.
Two-Stage Deletion Process
Applies to development deployments that breach the soft limits.
- Grace Period (7 days) - Your indexer continues to function normally, you receive notification about the upcoming deletion
- Read-Only Access (3 days) - Indexer stops processing new data, existing data remains accessible for queries
- Full Deletion - Indexer and all data are permanently deleted
The grace period durations (7 + 3 days) are subject to change. Always monitor your deployment status and upgrade when approaching limits.
For complete pricing details and feature comparison, see our Pricing & Billing page.
Step-by-Step Deployment Instructions
Initial Setup
- Log in with GitHub: Visit the Envio App and authenticate with your GitHub account
- Select an Organization: Choose your personal account or any organization you have access to
- Install the Envio Deployments GitHub App: Grant access to the repositories you want to deploy
Configure Your Indexer
- Connect a Repo: Select the repository containing your indexer code
- Add the Indexer: Click "Add Indexer" and configure your indexer
- Configure Deployment Settings:
- Specify the config file location
- Set the root directory (important for monorepos)
- Choose the deployment branch
You can deploy multiple indexers from a single repository by configuring them with different config file paths, root directories, and/or deployment branches.
If you're working in a monorepo, ensure all your imports are contained within your indexer directory to avoid deployment issues.
Deploy Your Code
- Create a Deployment Branch: Set up the branch you specified during configuration
- Deploy via Git: Push your code to the deployment branch
- Monitor Deployment: Track the progress of your deployment in the Envio dashboard
Manage Your Deployment
- Version Management: Once deployed, you can:
- View detailed logs
- Switch between different deployed versions
- Rollback to previous versions if needed
Updating Your Deployment
After your initial deployment, you can update your indexer by pushing new commits to the deployment branch. Each push creates a new deployment version.
What happens on each push
When you push to your deployment branch, Envio Cloud will:
- Build your updated indexer code
- Start a new deployment that re-indexes from the start block
- Keep your previous deployment running and serving queries until the new one is fully synced
This means there is no downtime during updates: your existing deployment continues serving data while the new one catches up.
When re-indexing is required
A full re-index from the start block happens on every new deployment. This includes changes to:
- Event handler logic
- Schema (`schema.graphql`)
- Configuration (`config.yaml`)
- ABIs or contract addresses
Use the Effects API cache to speed up re-indexing by caching expensive external calls (like eth_call results) across deployments. This is available on paid plans.
Adding a new chain to your indexer
To add a new chain, update your config.yaml with the new network configuration and push to the deployment branch. The new deployment will index all configured chains, including the new one.
Your previous deployment continues serving data for the existing chains while the new deployment syncs.
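As a sketch, adding a chain is a matter of appending a network entry to `config.yaml`. Chain IDs, start blocks, contract names, and addresses below are illustrative placeholders, not a working configuration:

```yaml
networks:
  - id: 1 # Ethereum mainnet, already indexed
    start_block: 0
    contracts:
      - name: MyContract
        address: "0x1111111111111111111111111111111111111111"
  - id: 10 # Optimism, newly added; pushing this triggers a full re-index
    start_block: 0
    contracts:
      - name: MyContract
        address: "0x2222222222222222222222222222222222222222"
```

Pushing this change to the deployment branch creates a new deployment that indexes both networks from their start blocks.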
Rolling back to a previous version
If a new deployment introduces issues, you can switch back to a previous version from the Envio Cloud dashboard. Navigate to your indexer and select the version you want to activate.
Monitoring
Once your indexer is deployed, you can monitor its health, performance, and progress using several built-in tools including the dashboard, logs, and alerts.
For detailed information about monitoring your deployments, see our Monitoring Guide.
Continuous Deployment Best Practices and Configuration
For a robust deployment workflow, we recommend:
- Protected Branches: Set up branch protection rules for your deployment branch
- Pull Request Workflow: Instead of pushing directly to the deployment branch, use pull requests from feature branches
- CI Integration: Add tests to your CI pipeline to validate indexer functionality before merging to the deployment branch
Continuous Configuration
After deploying your indexer, you can manage its configuration through the Settings tab in the Envio Cloud dashboard:
General Tab
The General tab provides core configuration options:
- Config File Path: Update the location of your indexer's configuration file
- Deployment Branch: Change which Git branch triggers deployments
- Root Directory: Modify the root directory for your indexer (useful for monorepos)
- Delete Indexer: Permanently remove the indexer and all its deployments
Deleting an indexer is permanent and will remove all associated deployments and data. This action cannot be undone.
Environment Variables Tab
Configure environment-specific variables for your indexer:
- Add custom environment variables with the `ENVIO_` prefix
- Environment variables are securely stored and injected into your indexer at runtime
- Useful for API keys, configuration values, and other deployment-specific settings
Use environment variables for sensitive data rather than hardcoding values in your repository. Remember to prefix all variables with ENVIO_.
Plans & Billing Tab
Manage your indexer's pricing plan and billing:
- Select from available pricing plans
- Upgrade your plan to suit your needs
- View current plan features and limits
For detailed pricing information, see our Pricing & Billing page.
Alerts Tab
Configure monitoring and notification preferences:
- Set up notification channels (Discord, Slack, Telegram, Email)
- Choose which alert types to receive (Production Endpoint Down, Indexer Stopped Processing, etc.)
- Configure deployment notifications (Historical Sync Complete)
For complete alert configuration details, see our Features page.
Alert configuration is available for indexers deployed with version 2.24.0 or higher on paid production plans.
Visual Reference Guide
The following screenshots show each step of the deployment process:
Step 1: Select Organization
!Select organisation
Step 2: Install GitHub App
!Install GitHub App
Step 3: Connect a Repo
!Connect a repo
Step 4: Add the Indexer
!Add the indexer
Step 5: Configure Deployment Settings
!Configure indexer
Step 6: Create a Deployment Branch
!Create deployment branch
Step 7: Deploy via Git
!Deploy via Git
Step 8: Indexer Deployed
Once deployment completes, your indexer should be live and you should see the overview dashboard below. Full monitoring details are available in our Monitoring Guide.
!Indexer overview
Step 9: Manage Indexer Configuration
Manage indexer configurations and deployments using the sidebar navigation on the left.
!Manage indexer configuration
Related Documentation
- Features - Learn about all available Envio Cloud features
- Envio Cloud CLI - Deploy and manage indexers from the command line
- Pricing & Billing - Compare plans and see feature availability
- Overview - Introduction to Envio Cloud
- Self-Hosting - Run your indexer on your own infrastructure
Monitoring Your Blockchain Indexer
File: Hosted_Service/hosted-service-monitoring.md
Once your blockchain indexer is deployed, Envio Cloud provides several tools to help you monitor its health, performance, and progress.
Dashboard Overview
The main dashboard provides real-time visibility into your indexer's status:
Key Metrics Displayed:
- Active Deployments: Track how many deployments are currently running (e.g., 1/3 active)
- Deployment Status: See whether your indexer is actively syncing, stopped, or has encountered errors
- Recent Commits: View your deployment history with commit information and active status
- Usage Statistics: Monitor your indexing hours, storage usage, and query rate limits
- Network Progress: Real-time progress bars showing sync status for each blockchain network
- Events Processed: Track the total number of events your indexer has processed
- Historical Sync Time: See how long it took to complete the initial sync
Deployment Status Indicators
Each deployment shows clear status information:
- Syncing: Indexer is actively processing blocks and events
- Syncing Stopped: Indexer has stopped processing (may indicate an error or a breach of plan limits)
- Historical Sync Complete: Initial sync finished, indexer is processing new blocks in real-time
Error Detection and Troubleshooting
When issues occur, the dashboard displays failure information to help you quickly diagnose problems.
Failure Information Includes:
- Error Type: Clear indication of the failure (e.g., "Indexing Has Stopped")
- Error Description: Details about what went wrong (e.g., "Error during event handling")
- Next Steps: Guidance on where to find more information (error logs)
- Support Access: Direct link to Discord for assistance
Logging
Full logging support is integrated and configured by Envio via Envio Cloud.
Access detailed logs to troubleshoot issues and monitor indexer behavior:
- Real-time Logs: View live logs as your indexer processes events
- Error Logs: Quickly identify and diagnose errors in your event handlers
- Deployment Logs: Track the deployment process and startup sequence
- Filter Log Levels: Find specific log entries to debug issues
Access logs through the "Logs" button on your deployment page.
Built-in Alerts
Configure proactive monitoring through the Alerts tab to receive notifications before issues impact your users:
- Critical Alerts: Get notified when your production endpoint goes down
- Warning Alerts: Receive alerts when your indexer stops processing blocks
- Info Alerts: Stay informed about indexer restarts and error logs
- Deployment Notifications: Know when historical sync completes
For detailed alert configuration, see the Deployment Guide and our Features page.
Set up multiple notification channels (Paid Plans Only) to ensure you never miss critical alerts about your indexer's health.
Visual Reference
Dashboard Overview
!Dashboard overview
Network Progress by Chain
!Network progress bars
Example Failure Notification
When indexing stops, the dashboard clearly surfaces the issue so you can investigate and resolve it quickly.
!Indexing has stopped
Related Documentation
- Deploying Your Indexer - Complete deployment guide
- Envio Cloud CLI - Monitor deployments from the command line with `envio-cloud deployment metrics` and `envio-cloud deployment status`
- Features - Learn about all available Envio Cloud features
- Pricing & Billing - Compare plans and see feature availability
Envio Cloud CLI
File: Hosted_Service/envio-cloud-cli.md
The envio-cloud CLI is a command-line tool for interacting with Envio Cloud. It enables you to deploy, manage, and monitor your blockchain indexers directly from the terminal, making it particularly useful for CI/CD pipelines, scripting, and agentic workflows.
Installation
npm install -g envio-cloud
Or run directly without installation:
npx envio-cloud
Shell Completion
The envio-cloud CLI ships with shell completion scripts for bash, zsh, fish, and powershell. Completion includes dynamic suggestions for indexer names and commit hashes, so you can tab-complete them directly from the terminal.
Run the one-liner for your shell to install completions:
| Shell | One-liner |
|---|---|
| zsh | `echo 'source <(envio-cloud completion zsh)' >> ~/.zshrc` |
| bash | `envio-cloud completion bash > ~/.local/share/bash-completion/completions/envio-cloud` |
| fish | `envio-cloud completion fish > ~/.config/fish/completions/envio-cloud.fish` |
| powershell | `envio-cloud completion powershell >> $PROFILE` |
Restart your shell (or source your profile) for the completions to take effect. Run envio-cloud completion --help for further options.
Authentication
Browser Login
envio-cloud login
Opens browser-based authentication via envio.dev with a 30-day session duration. Tokens are automatically refreshed when expired.
Token-Based Login (CI/CD)
envio-cloud login --token ghp_YOUR_TOKEN
Or using an environment variable:
export ENVIO_GITHUB_TOKEN=ghp_YOUR_TOKEN
envio-cloud login
Required GitHub token scopes: read:org, read:user, user:email.
Session Management
envio-cloud token # Check current session
envio-cloud logout # Remove credentials
Context Management
Like kubectl namespaces, envio-cloud lets you store default values for organisation and indexer so you don't have to pass them on every command. Flags (--org, --indexer) always override stored context.
# Set defaults
envio-cloud config set-org myorg
envio-cloud config set-indexer myindexer
# View current context
envio-cloud config get-context
# Commands now use defaults automatically
envio-cloud deployment status abc1234 # org and indexer from context
envio-cloud indexer settings get # both from context
# Flags override context
envio-cloud deployment status abc1234 --org other-org
# Clear stored context
envio-cloud config clear
Context is stored at `~/.envio-cloud/context.json`. Resolution priority:
1. Explicit positional arguments
2. `--org` / `--indexer` flags
3. Stored context
4. GitHub login (organisation only)
| Command | Description |
|---|---|
| `config set-org` | Set default organisation |
| `config set-indexer` | Set default indexer |
| `config get-context` | Show current defaults and where they come from |
| `config clear` | Remove all stored defaults |
Commands
Indexer Commands
List Indexers
Lists indexers across every organisation you are a member of. Use --org to
scope to a single organisation. Requires authentication.
envio-cloud indexer list
envio-cloud indexer list --org myorg
envio-cloud indexer list --limit 10
envio-cloud indexer list -o json
| Flag | Description |
|---|---|
| `--org` | Scope to a single organisation you belong to |
| `--limit` | Limit number of results |
| `-o, --output` | Output format (json) |
Get Indexer Details
envio-cloud indexer get <indexer> [organisation]
envio-cloud indexer get hyperindex mjyoung114 -o json
envio-cloud indexer get hyperindex --org mjyoung114
Organisation can be omitted if set via context. Requires authentication: you can only view indexers in organisations you are a member of.
Add an Indexer
envio-cloud indexer add --name my-indexer --repo my-repo
envio-cloud indexer add --name my-indexer --repo my-repo --branch main --tier development
envio-cloud indexer add --name my-indexer --repo my-repo --dry-run
| Flag | Description | Default |
|---|---|---|
| `-n, --name` | Indexer name (required) | – |
| `-r, --repo` | Repository name (required) | – |
| `-b, --branch` | Deployment branch | `envio` |
| `-d, --root-dir` | Root directory | `./` |
| `-c, --config-file` | Config file path | `config.yaml` |
| `-t, --tier` | Pricing tier | `development` |
| `-a, --access-type` | Access type | `public` |
| `-e, --env-file` | Environment file | – |
| `--auto-deploy` | Enable auto-deploy | `true` |
| `--dry-run` | Preview without creating | – |
| `-y, --yes` | Skip confirmation prompts | – |
Delete an Indexer
Permanently delete an indexer and all of its deployments. Requires typing the indexer name to confirm.
envio-cloud indexer delete myindexer myorg
envio-cloud indexer delete myindexer --org myorg
envio-cloud indexer delete myindexer myorg --yes # skip confirmation for CI/CD
This action cannot be undone. All deployments, data, and configuration for the indexer will be permanently removed.
View and Modify Settings
# View current settings
envio-cloud indexer settings get myindexer myorg
# Modify settings (only specified flags are changed)
envio-cloud indexer settings set myindexer myorg --branch main
envio-cloud indexer settings set myindexer myorg --auto-deploy=false
envio-cloud indexer settings set myindexer myorg --config-file config.yaml --branch develop
| Flag (set) | Description |
|---|---|
| `--branch` | Git branch for deployments |
| `--config-file` | Path to config file |
| `--root-dir` | Root directory within the repository |
| `--auto-deploy` | Enable or disable auto-deploy on push |
| `--description` | Indexer description |
| `--access-type` | `public` or `private` |
Manage Environment Variables
Environment variables can be managed from the CLI. All keys must be prefixed with ENVIO_. Changes take effect on the next deployment.
# List variables (values masked by default)
envio-cloud indexer env list myindexer myorg
envio-cloud indexer env list myindexer myorg --show-values
# Set one or more variables
envio-cloud indexer env set myindexer myorg ENVIO_API_KEY=abc123 ENVIO_DEBUG=true
# Remove a variable
envio-cloud indexer env delete myindexer myorg ENVIO_DEBUG
# Bulk import from a .env file
envio-cloud indexer env import myindexer myorg --file .env
The .env file format is one KEY=VALUE per line. Lines starting with # are ignored.
Configure IP Whitelisting
Restrict access to your indexer's GraphQL endpoint by IP address. Supports IPv4 addresses and CIDR notation.
# View current IP whitelist configuration
envio-cloud indexer security get myindexer myorg
# Add IPs to the whitelist
envio-cloud indexer security add-ip myindexer myorg 203.0.113.50
envio-cloud indexer security add-ip myindexer myorg 10.0.0.0/8
# Enable IP whitelisting (make sure to add IPs first)
envio-cloud indexer security enable myindexer myorg
# Disable IP whitelisting
envio-cloud indexer security disable myindexer myorg
# Restrict whitelisting to production deployments only
envio-cloud indexer security set-prod-only myindexer myorg true
# Remove an IP
envio-cloud indexer security remove-ip myindexer myorg 203.0.113.50
Add your IP addresses before enabling whitelisting; otherwise you may lock yourself out. The CLI will warn you if you try to enable whitelisting with no IPs configured.
Deployment Commands
All deployment commands accept arguments as `<indexer> <commit-hash> [organisation]`. Organisation and indexer can be omitted if set via envio-cloud config.
Deployment Metrics
envio-cloud deployment metrics <indexer> <commit-hash> [organisation]
envio-cloud deployment metrics hyperindex b3ead3a mjyoung114 --watch
envio-cloud deployment metrics hyperindex b3ead3a mjyoung114 -o json
No authentication required.
| Flag | Description |
|---|---|
| `--watch` | Continuously poll for updates |
| `-o, --output` | Output format (json) |
Deployment Status
envio-cloud deployment status <indexer> <commit-hash> [organisation]
envio-cloud deployment status hyperindex b3ead3a mjyoung114 --watch-till-synced
| Flag | Description |
|---|---|
| `--watch-till-synced` | Wait until deployment is fully synced |
Deployment Info
envio-cloud deployment info <indexer> <commit-hash> [organisation]
Get Query Endpoint
Returns the GraphQL query endpoint URL for a deployment. The endpoint is computed from deployment parameters and the cluster is resolved from the deployment tier via the API. Output is a bare URL, so it composes cleanly with shell scripting.
envio-cloud deployment endpoint <indexer> <commit-hash> [organisation]
envio-cloud deployment endpoint hyperindex b3ead3a mjyoung114
envio-cloud deployment endpoint hyperindex b3ead3a mjyoung114 -o json
Use the URL directly in a curl query:
curl "$(envio-cloud deployment endpoint hyperindex b3ead3a mjyoung114)" \
-H "Content-Type: application/json" \
-d '{"query": "{ _meta { chainMetadata { chainId } } }"}'
| Flag | Description |
|---|---|
| `--cluster` | Override cluster (hyper, hypertierchicago, ip-projects, prodaws, staging) |
| `-o, --output` | Output format (json) |
The `ep` alias is also available: `envio-cloud deployment ep <indexer> <commit-hash> [organisation]`.
Promote a Deployment
Promote a deployment to the production endpoint. Requires confirmation (y/N).
envio-cloud deployment promote <indexer> <commit-hash> [organisation]
envio-cloud deployment promote myindexer abc1234 myorg --yes
Delete a Deployment
Permanently delete a deployment. Requires typing the indexer name to confirm.
envio-cloud deployment delete <indexer> <commit-hash> [organisation]
envio-cloud deployment delete myindexer abc1234 myorg --yes
This action cannot be undone. The deployment and its data will be permanently removed.
Restart a Deployment
Restart a running deployment. There is a 10-minute cooldown between restarts.
envio-cloud deployment restart <indexer> <commit-hash> [organisation]
envio-cloud deployment restart myindexer abc1234 myorg --yes
Deployment Logs
Show build or runtime logs for a deployment.
envio-cloud deployment logs <indexer> <commit-hash> [organisation]
envio-cloud deployment logs myindexer abc1234 myorg --build
envio-cloud deployment logs myindexer abc1234 myorg --level error,warn
envio-cloud deployment logs myindexer abc1234 myorg --follow
| Flag | Description |
|---|---|
| `--build` | Show build logs instead of runtime logs |
| `--level` | Filter by log level (e.g., `error,warn`) |
| `--limit` | Max number of log lines (default: 100) |
| `--follow` | Poll for new logs every 10 seconds |
Repository Commands
List Repositories
envio-cloud repos
envio-cloud repos -o json
Requires authentication.
Confirmation Prompts
Dangerous commands require confirmation before executing:
| Command | Confirmation type |
|---|---|
| `indexer delete` | Type the indexer name |
| `deployment delete` | Type the indexer name |
| `deployment promote` | y/N prompt |
| `deployment restart` | y/N prompt |
All prompts can be skipped with the --yes / -y flag for CI/CD usage.
Global Flags
| Flag | Description |
|---|---|
| `--org` | Override default organisation |
| `--indexer` | Override default indexer |
| `-q, --quiet` | Suppress informational messages |
| `-o, --output` | Output format (json) |
| `--config` | Specify config file path |
| `-h, --help` | Display command help |
| `-v, --version` | Show CLI version |
JSON Output
All commands support JSON output via the -o json flag, making the CLI easy to integrate into scripts and automation pipelines.
Success response:
{"ok": true, "data": [ ... ]}
Error response:
{"ok": false, "error": "error message"}
Example with jq:
# Get event count for a deployment
envio-cloud deployment metrics hyperindex b3ead3a mjyoung114 -o json | jq '.data[].num_events_processed'
# List all indexer IDs in an org
envio-cloud indexer list --org enviodev -o json | jq -r '.data[].indexer_id'
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | User error (invalid arguments, authentication required) |
| 2 | API or server error |
Related Documentation
- Envio Cloud Overview - Introduction to Envio Cloud
- Deploying Your Indexer - Step-by-step deployment guide via the dashboard
- Production Features - Tags, IP whitelisting, caching, and alerts
- Monitoring - Dashboard monitoring and alerts
- Envio CLI - Local development CLI reference
- npm package - Latest version and changelog
Hosted Service Billing
File: Hosted_Service/hosted-service-billing.mdx
Pricing & Billing
Envio offers flexible pricing options to meet the needs of projects at different stages of development.
Pricing Plans
Envio Cloud offers flexible pricing plans to match your project's needs, from free development environments to enterprise-grade dedicated hosting.
For the most up-to-date pricing information, detailed plan comparisons, and feature breakdowns, please visit our official Envio Pricing Page.
Available Plans:
| Plan | Price | Intended for |
|---|---|---|
| Development | Free | Testing, prototyping, and development. 30-day max lifespan, subject to fair usage limits |
| Production Small | Paid | Getting started with production deployments |
| Production Medium | Paid | Scaling your indexing operations with higher limits |
| Production Large | Paid | High-volume production workloads |
| Dedicated | Custom | Ultimate performance, isolated infrastructure, and custom SLAs |
What's included across paid plans:
- Higher event processing and storage limits (increases with each tier)
- Higher query rate limits on your GraphQL endpoint
- Effect API cache support for faster re-indexing (Medium plans and up)
- Monitoring, alerts, and deployment management
- Priority support (Dedicated plan)
The free development plan is intended for testing and development purposes only and should not be used as a production environment. Development plan deployments have a maximum life span of 30 days and Envio makes no guarantees regarding uptime, availability, or data persistence for deployments on the development plan. If you choose to use a development plan deployment in a production capacity, you do so entirely at your own risk. Envio assumes no liability or accountability for any downtime, data loss, or service interruptions that may occur on development plan deployments.
For detailed feature explanations, see our Features page. For deployment instructions, see our Deployment Guide. Not sure which option is right for your project? Book a call with our team to discuss your specific needs.
Self-Hosting Your Blockchain Indexer
File: Hosted_Service/self-hosting.md
This documentation page is actively being improved. Check back regularly for updates and additional information.
While Envio offers a fully managed cloud hosting solution via Envio Cloud, you may prefer to run your blockchain indexer on your own infrastructure. This guide covers everything you need to know about self-hosting Envio indexers.
We deeply appreciate users who choose Envio Cloud, as it directly supports our team and helps us continue developing and improving Envio's technology. If your use case allows for it, please consider the hosted option.
Why Self-Host?
Self-hosting gives you:
- Complete Control: Manage your own infrastructure and configurations
- Data Sovereignty: Keep all indexed data within your own systems
Self-hosting can be done with a variety of infrastructure, tools, and methods. The outline below is merely a starting point, not a full production-grade solution; in some cases, advanced knowledge of infrastructure, database management, and networking will be required to reach production readiness.
Prerequisites
Before self-hosting, ensure you have:
- Docker installed on your host machine
- Sufficient storage for blockchain data and the indexer database
- Adequate CPU and memory resources (requirements vary based on chains and indexing complexity)
- Required HyperSync and/or RPC endpoints
- Envio API token for HyperSync access (ENVIO_API_TOKEN), required for continued access. See API Tokens.
Getting Started
In general, if you want to self-host, you will likely use a Docker setup.
For a working example, check out the local-docker-example repository.
It contains a minimal Dockerfile and docker-compose.yaml that configure the Envio indexer together with PostgreSQL and Hasura.
Configuration Explained
The compose file in that repository sets up three main services:
- PostgreSQL Database (envio-postgres): Stores your indexed data
- Hasura GraphQL Engine (graphql-engine): Provides the GraphQL API for querying your data
- Envio Indexer (envio-indexer): The core indexing service that processes blockchain data
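To make the three-service wiring concrete, here is a heavily trimmed sketch of such a compose file. Image tags, defaults, and environment variables are illustrative assumptions — the repository's docker-compose.yaml is the source of truth:

```yaml
services:
  envio-postgres:
    image: postgres:16            # pin the version your indexer was tested against
    environment:
      POSTGRES_USER: ${ENVIO_PG_USER:-postgres}
      POSTGRES_PASSWORD: ${ENVIO_PG_PASSWORD:-testing}
    volumes:
      - db_data:/var/lib/postgresql/data   # persist indexed data across restarts

  graphql-engine:
    image: hasura/graphql-engine:v2.36.0   # illustrative tag
    depends_on:
      - envio-postgres
    ports:
      - "8080:8080"
    environment:
      HASURA_GRAPHQL_ADMIN_SECRET: ${HASURA_GRAPHQL_ADMIN_SECRET:-testing}

  envio-indexer:
    build: .                      # Dockerfile from the example repository
    depends_on:
      - envio-postgres
      - graphql-engine
    environment:
      ENVIO_API_TOKEN: ${ENVIO_API_TOKEN}  # required for HyperSync access

volumes:
  db_data:
```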
Environment Variables
The configuration uses environment variables with sensible defaults. For production, you should customize:
- Envio API token (ENVIO_API_TOKEN)
- Database credentials (ENVIO_PG_PASSWORD, ENVIO_PG_USER, etc.)
- Hasura admin secret (HASURA_GRAPHQL_ADMIN_SECRET)
- Resource limits based on your workload requirements
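For example, a minimal environment file consumed by the compose setup might look like the following; every value is a placeholder you must replace with your own:

```shell
# Placeholder values only -- substitute your own secrets before deploying.
export ENVIO_API_TOKEN="replace-with-your-envio-api-token"
export ENVIO_PG_USER="postgres"
export ENVIO_PG_PASSWORD="replace-with-a-strong-password"
export HASURA_GRAPHQL_ADMIN_SECRET="replace-with-a-strong-secret"
```

Keep this file out of version control and inject it at deploy time (for example via docker compose's env_file option).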
Getting Help
If you encounter issues with self-hosting:
- Check the Envio GitHub repository for known issues
- Join the Envio Discord community for community support
For most production use cases, we recommend using Envio Cloud to benefit from automatic scaling, monitoring, and maintenance.
Organisation Setup
File: Hosted_Service/organisation-setup.md
Use this guide to set up an organisation in Envio Cloud and grant access to your team.
Access Control
Being a member of the GitHub organisation does not automatically grant access to the organisation in the Envio Cloud UI. Each member must be explicitly added by the organisation admin. If someone attempts to visit the organisation URL (e.g., https://envio.dev/app/) without being added, they'll see a "You are not a member of this team" message.
Tutorial: OP Bridge Deposits
File: Tutorials/tutorial-op-bridge-deposits.md
Introduction
This tutorial will guide you through indexing Optimism Standard Bridge deposits in under 5 minutes using Envio HyperIndex's no-code contract import feature.
The Optimism Standard Bridge enables the movement of ETH and ERC-20 tokens between Ethereum and Optimism. We'll index bridge deposit events by extracting the DepositFinalized logs emitted by the bridge contracts on both networks.
Prerequisites
Before starting, ensure you have the following installed:
- Node.js (v22 or newer recommended)
- pnpm (recommended but not required)
- Docker Desktop (required to run the Envio indexer locally)
Note: Docker is specifically required to run your blockchain indexer locally. You can skip Docker installation if you plan only to use Envio Cloud.
Step 1: Initialize Your Indexer
1. Open your terminal in an empty directory and run:
pnpx envio@3.0.0-rc.0 init
2. Name your indexer (we'll use "optimism-bridge-indexer" in this example).
3. Choose your preferred language (TypeScript, JavaScript, or ReScript).
Step 2: Import the Optimism Bridge Contract
1. Select Contract Import → Block Explorer → Optimism
2. Enter the Optimism bridge contract address: 0x4200000000000000000000000000000000000010
3. Select the DepositFinalized event:
   - Navigate using the arrow keys (↑/↓)
   - Press spacebar to select the event

Tip: You can select multiple events to index simultaneously.
Step 3: Add the Ethereum Mainnet Bridge Contract
1. When prompted, select Add a new contract
2. Choose Block Explorer → Ethereum Mainnet
3. Enter the Ethereum Mainnet gateway contract address: 0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1
4. Select the ETHDepositInitiated event
5. When finished adding contracts, select I'm finished
Step 4: Start Your Indexer
- If you have any running indexers, stop them first:
pnpm envio stop
- Start your new indexer:
pnpm dev
This command:
- Starts the required Docker containers
- Sets up your database
- Launches the indexing process
- Opens the Hasura GraphQL interface
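Once the Hasura interface opens, you can try a query like the one below in its GraphiQL tab. The root field name DepositFinalized assumes you imported that event; the exact field names come from your generated schema, so adjust to match:

```graphql
query RecentDeposits {
  DepositFinalized(limit: 5, order_by: { id: desc }) {
    id
    from
    to
    amount
  }
}
```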
Step 5: Understanding the Generated Code
Let's examine the key files that Envio generated:
1. config.yaml
This configuration file defines:
- Networks to index (Optimism and Ethereum Mainnet)
- Starting blocks for each network
- Contract addresses and ABIs
- Events to track
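As an illustration, a trimmed-down config.yaml for this tutorial might look roughly like the sketch below. Contract names, start blocks, and exact field layout are placeholders — the generated file on disk is authoritative:

```yaml
name: optimism-bridge-indexer
networks:
  - id: 10 # Optimism
    start_block: 0 # placeholder; the generator fills in a real starting block
    contracts:
      - name: L2StandardBridge # illustrative name
        address: 0x4200000000000000000000000000000000000010
        handler: src/EventHandlers.ts
        events:
          - event: DepositFinalized(address indexed l1Token, address indexed l2Token, address indexed from, address to, uint256 amount, bytes extraData)
  - id: 1 # Ethereum Mainnet
    start_block: 0 # placeholder
    contracts:
      - name: L1StandardBridge # illustrative name
        address: 0x99C9fc46f92E8a1c0deC1b1747d010903E884bE1
        handler: src/EventHandlers.ts
        events:
          - event: ETHDepositInitiated(address indexed from, address indexed to, uint256 amount, bytes extraData)
```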
2. schema.graphql
This schema defines the data structures for our selected events:
- Entity types based on event data
- Field types matching the event parameters
- Relationships between entities (if applicable)
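For instance, an entity generated for the DepositFinalized event typically looks something like the following, with one field per event parameter; treat the names here as an assumption and check your generated file:

```graphql
type DepositFinalized {
  id: ID!
  l1Token: String!
  l2Token: String!
  from: String!
  to: String!
  amount: BigInt!
  extraData: String!
}
```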
3. src/EventHandlers.ts
This file contains the business logic for processing events:
- Functions that execute when events are detected
- Data transformation and storage logic
- Entity creation and relationship management
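The shape of that logic can be sketched in a self-contained way. The real generated file registers handlers through the generated module (for example, a contract's DepositFinalized.handler), so the types and the set callback below are local stand-ins, not the actual HyperIndex API:

```typescript
// Local model of the event payload a handler receives.
// (In generated code, amount is a bigint; a string keeps this sketch
// free of runtime/target requirements.)
type DepositFinalizedEvent = {
  params: { l1Token: string; l2Token: string; from: string; to: string; amount: string };
  transaction: { hash: string };
  logIndex: number;
};

// Local model of the stored entity, mirroring the schema fields.
type DepositFinalizedEntity = {
  id: string;
  from: string;
  to: string;
  amount: string;
};

// Mirrors the typical handler pattern: derive a unique entity id from the
// transaction hash and log index, then persist the entity via the store.
function handleDepositFinalized(
  event: DepositFinalizedEvent,
  set: (entity: DepositFinalizedEntity) => void,
): void {
  set({
    id: `${event.transaction.hash}-${event.logIndex}`,
    from: event.params.from,
    to: event.params.to,
    amount: event.params.amount,
  });
}
```

The id convention (transaction hash plus log index) guarantees uniqueness even when one transaction emits the same event several times.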