# How ChainAlert Works
ChainAlert runs a continuous monitoring pipeline that watches every block on every supported chain. When on-chain activity matches your detection rules, you get an alert — typically within seconds of the block being confirmed.
## The Pipeline
Every alert passes through four stages. Each stage runs independently and scales horizontally, so the system stays fast even as the number of detections grows.
```
$ pipeline --describe
> avg time from block confirmation to alert: <30 seconds
```
## Block Ingestion
ChainAlert runs a dedicated poller for each supported blockchain. Each poller continuously fetches new blocks and extracts every event log from them, along with transaction data when detections require it.
- **One poller per chain** — adding a new organization doesn't add RPC load. Your detections piggyback on a shared stream of block data.
- **Automatic catch-up** — if the system goes down briefly, pollers resume exactly where they left off. No blocks are skipped.
- **Rate-limit aware** — pollers adapt to each chain's block time and process blocks in controlled batches to avoid overwhelming RPC providers.
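The catch-up and batching behavior can be illustrated with a minimal sketch. This is not ChainAlert's actual implementation: `run_poller`, its callables, and the batch size are hypothetical, with plain functions standing in for real RPC calls.

```python
from typing import Callable

def run_poller(
    fetch_latest: Callable[[], int],
    fetch_block: Callable[[int], dict],
    cursor: int,
    batch_size: int = 10,
) -> tuple[int, list[dict]]:
    """Fetch every block from `cursor + 1` up to the chain head,
    in batches of at most `batch_size` blocks.

    Resuming from a persisted `cursor` is what makes catch-up
    automatic: no block between restarts is ever skipped.
    """
    latest = fetch_latest()
    blocks: list[dict] = []
    while cursor < latest:
        upper = min(cursor + batch_size, latest)
        for number in range(cursor + 1, upper + 1):
            blocks.append(fetch_block(number))
        cursor = upper  # a real poller would persist this checkpoint
    return cursor, blocks

# Usage: resume at block 5 on a chain whose head is block 25.
cursor, blocks = run_poller(lambda: 25, lambda n: {"number": n},
                            cursor=5, batch_size=10)
```

Keeping the cursor as the single piece of durable state is what makes restarts cheap: the poller only needs one number to know exactly where to resume.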
## Event Matching
Raw blockchain logs are decoded using the contract's ABI and then evaluated against every active detection rule on that chain. The matcher uses an indexed lookup to find relevant rules in constant time — regardless of how many detections are running across all organizations.
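The constant-time lookup can be sketched as a dictionary keyed by contract address and event signature hash (`topics[0]`). This is an illustrative data structure, not ChainAlert's code; the `RuleIndex` name, the rule-as-dict shape, and the `condition` callable are all assumptions.

```python
from collections import defaultdict

class RuleIndex:
    """Index detection rules by (contract, event signature hash) so
    matching a decoded log is a single dict lookup — independent of
    how many rules exist across all organizations."""

    def __init__(self) -> None:
        self._index: dict[tuple[str, str], list] = defaultdict(list)

    def add(self, contract: str, topic0: str, rule: dict) -> None:
        # Addresses are lowercased so checksummed and plain forms match.
        self._index[(contract.lower(), topic0)].append(rule)

    def match(self, log: dict) -> list:
        key = (log["address"].lower(), log["topics"][0])
        # Only rules indexed under this exact (contract, event) pair
        # are evaluated; everything else is never touched.
        return [r for r in self._index.get(key, [])
                if r["condition"](log)]
```

The point of the index is that adding more detections grows the dict, not the per-log work: each incoming log still costs one hash lookup plus evaluation of only the handful of rules registered for that exact contract and event.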
```
# incoming Transfer event on USDC contract
event: Transfer(address from, address to, uint256 value)
from: 0x3ee1...8a2f
to: 0x7bc4...1d09
value: 5,000,000 USDC
> rule "Large Transfer Monitor" matched (value > 1,000,000)
> alert dispatched to #security-alerts on Slack
```
ChainAlert ships with built-in ABI support for common events like ERC-20 Transfers, so detections work out of the box — even for contracts that haven't been verified on Etherscan. For verified contracts, the full ABI is resolved automatically.
## Balance & State Monitoring
Not all threats show up as events. Some require watching a value over time — a treasury balance draining, a contract's paused state flipping, or an approval being set dangerously high.
The state poller runs on a configurable schedule for each detection. It queries the chain directly — checking native balances, token balances, or the return value of any view function — and compares the result against your thresholds.
- **Percentage-based alerts** — "alert if the balance drops by more than 20% in an hour"
- **Absolute thresholds** — "alert if the hot wallet balance goes below 5 ETH"
- **Historical snapshots** — every reading is recorded so you can see trends in the dashboard
## Storage Slot Monitoring
Some critical state changes happen silently — without emitting any event log. A proxy's implementation slot can change via a direct `SSTORE` without producing an `Upgraded` event. ChainAlert's storage slot monitoring catches these blind spots.
The state poller reads raw EVM storage slots via `eth_getStorageAt` and compares values across polls. This complements event monitoring — events catch loud changes instantly, storage polling catches silent ones on a schedule.
- **Proxy upgrades** — monitor ERC-1967 implementation, admin, and beacon slots
- **Custom slots** — monitor any 32-byte storage slot with change detection or threshold conditions
- **Anomaly detection** — alert when a value suddenly deviates from its rolling average
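Change detection across polls amounts to a compare-and-record loop. A minimal sketch, assuming `fetch_slot` stands in for an `eth_getStorageAt` call and `prev` holds the values from the previous poll; the ERC-1967 implementation slot constant is the well-known value from that standard.

```python
# Well-known ERC-1967 implementation slot:
# keccak256("eip1967.proxy.implementation") - 1
ERC1967_IMPL_SLOT = (
    "0x360894a13ba1a3210667c828492db98dca3e2076cc3735a920a3ca505d382bbc"
)

def detect_slot_changes(prev, fetch_slot):
    """Compare each watched slot's current value against the last
    poll. Returns a (slot, old_value, new_value) tuple for every
    change and updates `prev` in place for the next cycle."""
    changes = []
    for slot, old in prev.items():
        new = fetch_slot(slot)  # stand-in for eth_getStorageAt
        if new != old:
            changes.append((slot, old, new))
            prev[slot] = new
    return changes
```

Because the comparison is on raw 32-byte values, this catches any write to the slot — including upgrades performed without emitting an `Upgraded` event.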
## Function-Call Detection
Some admin functions don't emit events. A contract's `addRollup(address)` or `setFee(uint256)` might modify critical state without producing any log. ChainAlert's function-call detection watches transaction calldata for specific 4-byte function selectors.
- **Top-level calls only** — monitors direct `tx.to` calls, not internal calls from timelocks or multisigs
- **Zero cost when unused** — transaction data is only fetched when function-call rules exist for a network
- **Parameter filtering** — optionally filter on decoded function arguments, same as event conditions
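Selector matching itself is just a prefix check on the calldata. A minimal sketch, assuming rules are keyed by lowercase `(to-address, selector)` pairs; the selector shown in the usage note is a hypothetical placeholder, not a real function's hash.

```python
def match_function_call(tx: dict, rules: dict) -> list:
    """Match a transaction's calldata against rules keyed by
    (to-address, 4-byte selector), both lowercase hex.

    Only top-level calls are visible here: an internal call made by
    a timelock or multisig never appears in the outer tx calldata.
    """
    data = tx.get("input", "0x")
    if len(data) < 10:           # "0x" + 8 hex chars = 4 bytes
        return []                # plain transfer or malformed calldata
    selector = data[:10].lower()
    key = (tx["to"].lower(), selector)
    return rules.get(key, [])

# Usage with a hypothetical selector for an admin function:
rules = {("0xdao", "0xabcdef01"): ["flag admin call"]}
```

Because the lookup is again a single dict access, adding function-call rules for more contracts does not slow down scanning a block's transactions.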
## View Function Monitoring
For values that can only be read through contract functions — like `totalSupply()`, `getReserves()`, or `getMinDelay()` — ChainAlert periodically calls the function via `eth_call` and monitors the return value for changes, threshold crossings, or anomalies.
## Smart Cooldowns
Nobody wants 200 Slack messages in 10 minutes. Every detection has a configurable cooldown period — after an alert fires, subsequent matches are suppressed until the cooldown expires. The default is 5 minutes, but you can set it to anything from 1 minute to 24 hours depending on how noisy the event is.
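The suppression logic reduces to remembering when each detection last fired. A minimal sketch, assuming a per-detection timestamp map and a caller-supplied clock; the `Cooldown` class and its method names are hypothetical.

```python
class Cooldown:
    """Suppress repeat alerts for the same detection until the
    cooldown window has elapsed since the last one that fired."""

    def __init__(self, seconds: float = 300.0) -> None:  # default: 5 min
        self.seconds = seconds
        self.last_fired: dict[str, float] = {}

    def should_fire(self, detection_id: str, now: float) -> bool:
        last = self.last_fired.get(detection_id)
        if last is not None and now - last < self.seconds:
            return False          # still inside the cooldown window
        self.last_fired[detection_id] = now
        return True
```

Note that a suppressed match does not reset the timer: alerts resume exactly when the window from the last *fired* alert expires, rather than being pushed back by every repeat match.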
## Real-Time Rule Updates
When you create, pause, or update a detection, the change takes effect immediately. There's no deployment step, no waiting for a cron job, no restart required. The worker pipeline picks up rule changes within the next block cycle.