# Releases: streamingfast/firehose-ethereum
## v2.9.3

## v2.9.2
- Fixed `substreams-tier2` not setting itself ready correctly on startup since `v2.9.0`.
- Added support for `--output=bytes` mode, which prints the chain-specific Protobuf block as bytes; the encoding of the printed byte string is determined by `--bytes-encoding` (`hex` by default). A decoding sketch follows this list.
- Added back `-o` as shorthand for `--output` in `firecore tools ...` sub-commands.
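As an illustration only (not part of the release), the sketch below shows how the output of `--output=bytes --bytes-encoding=hex` could be decoded back into the Ethereum Block message in Go. The `pbeth` import path is an assumption based on the firehose-ethereum repository layout.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"os"

	// Assumed import path for the generated sf.ethereum.type.v2 Block message.
	pbeth "github.com/streamingfast/firehose-ethereum/types/pb/sf/ethereum/type/v2"
	"google.golang.org/protobuf/proto"
)

func main() {
	// First argument: the hex string printed by the tool for one block.
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		panic(err)
	}

	// Unmarshal the chain-specific Protobuf block.
	block := &pbeth.Block{}
	if err := proto.Unmarshal(raw, block); err != nil {
		panic(err)
	}

	fmt.Printf("block #%d (hash %s)\n", block.Number, hex.EncodeToString(block.Hash))
}
```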
## v2.9.1

- Add back the `grpc.health.v1.Health` service to the `firehose` and `substreams-tier1` services (regression in 2.9.0).
- Give precedence to the tracing header `X-Cloud-Trace-Context` over `Traceparent` to prevent user systems' trace IDs from leaking past a GCP load balancer.
## v2.9.0

### Reader

- The Reader Node Manager HTTP API now accepts `POST http://localhost:10011/v1/restart<?sync=true>` to restart the underlying reader node binary sub-process. This is an alias for `/v1/reload`.
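A minimal sketch (not from the release notes) of calling this endpoint from Go. The address is the default shown in the note above; the optional `sync=true` query parameter is included as illustrated, presumably making the call wait until the restart has been handled.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Default Node Manager HTTP address from the release note; adjust for your deployment.
	resp, err := http.Post("http://localhost:10011/v1/restart?sync=true", "text/plain", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("restart: %s %s\n", resp.Status, body)
}
```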
### Tools

- Enhanced `fireeth tools print merged-blocks` with various small quality-of-life improvements:
  - Now accepts a block range instead of a single start block.
  - Passing a single block as the block range will print this single block alone.
  - The block range is now optional, defaulting to run until there are no more files to read.
  - It's possible to pass a merged-blocks file directly, with or without an optional range.
### Firehose

> **Important**
> This release will reject Firehose connections from clients that don't support GZIP or ZSTD compression. Use `--firehose-enforce-compression=false` to keep the previous behavior, then check the logs for `incoming Substreams Blocks request` entries with the value `compressed: false` to track users who are not using compressed HTTP connections.
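For client operators, here is a hedged sketch of a grpc-go dial that forces gzip compression on every request so connections are not rejected while enforcement is on. The endpoint string is a placeholder and the creation of the Firehose stub is omitted.

```go
package main

import (
	"crypto/tls"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
	"google.golang.org/grpc/encoding/gzip" // importing this package registers the gzip compressor
)

func main() {
	conn, err := grpc.Dial(
		"your-firehose-endpoint:443", // placeholder, not a real endpoint
		grpc.WithTransportCredentials(credentials.NewTLS(&tls.Config{})),
		// Compress every outgoing request with gzip so the server accepts the connection.
		grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name)),
	)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// ... create the sf.firehose.v2 Stream client from `conn` and issue Blocks requests.
}
```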
> **Important**
> This release removes the old `sf.firehose.v1` protocol (replaced by `sf.firehose.v2` in 2022; this should not affect any reasonably recent client).

- Add support for ConnectWeb Firehose requests.
- Always use gzip compression on Firehose requests for clients that support it (instead of always answering with the same compression as the request).
### Substreams

- The `substreams-tier1` app now has two new configuration flags, `substreams-tier1-active-requests-soft-limit` and `substreams-tier1-active-requests-hard-limit`, which help better load balance active requests across a pool of `tier1` instances (see the health-check sketch after this list).

  The `substreams-tier1-active-requests-soft-limit` flag limits the number of active client requests that a tier1 accepts before starting to report itself as 'unready' on the health check endpoint. A limit of 0 or less means no limit. This is useful to load balance active requests more easily across a pool of tier1 instances: when an instance reaches the soft limit, it becomes unready from the load balancer's standpoint, the load balancer removes it from the list of available instances, and new connections are routed to the remaining instances, spreading the load.

  The `substreams-tier1-active-requests-hard-limit` flag limits the number of active client requests that a tier1 accepts before rejecting incoming gRPC requests with the 'Unavailable' code and setting itself as unready. A limit of 0 or less means no limit. This is useful to prevent the tier1 from being overwhelmed by too many requests; most clients auto-reconnect on the 'Unavailable' code, so they should end up on another tier1 instance, assuming you have proper auto-scaling of the number of instances available.
- The `substreams-tier1` app now exposes a new Prometheus metric, `substreams_tier1_rejected_request_counter`, that tracks rejected requests. The counter is labelled by the gRPC/ConnectRPC return code (`ok` and `canceled` are not considered rejected requests).
- The `substreams-tier2` app now exposes a new Prometheus metric, `substreams_tier2_rejected_request_counter`, that tracks rejected requests. The counter is labelled by the gRPC/ConnectRPC return code (`ok` and `canceled` are not considered rejected requests).
- Properly accept and compress responses with `gzip` for browser HTTP clients using ConnectWeb with the `Accept-Encoding` header.
- Allow setting the subscription channel max capacity via the `SOURCE_CHAN_SIZE` env var (default: 100).
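As a companion to the soft/hard limit flags above, the following hedged sketch shows how an operator could probe a tier1's readiness through the standard `grpc.health.v1.Health` service (the same service restored in v2.9.1). The address is a placeholder and insecure credentials are used only for brevity.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// Placeholder address of a substreams-tier1 instance.
	conn, err := grpc.Dial("localhost:10016", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// An empty service name asks for the overall server status; once the
	// soft limit is reached the instance should stop reporting SERVING.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		panic(err)
	}

	fmt.Println("tier1 health status:", resp.Status)
}
```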
## v2.8.4

### Substreams

- Fix an issue preventing proper detection of gzip compression when multiple headers are set (e.g. Python gRPC client)
- Fix an issue preventing some tier2 requests on last-stage from correctly generating stores. This could lead to some missing "backfilling" jobs and a slower time to first block on reconnection.
- Fix a thread leak on cursor resolution resulting in a bad counter for active connections
- Add support for zstd encoding on the server
## v2.8.3

> **Note**
> This release will reject connections from clients that don't support GZIP compression. Use `--substreams-tier1-enforce-compression=false` to keep the previous behavior, then check the logs for `incoming Substreams Blocks request` entries with the value `compressed: false` to track users who are not using compressed HTTP connections.

- Fix broken `tools poller` command in v2.8.2
## v2.8.2

> **Warning**
> Do NOT use this version with `tools poller`: a flag issue prevents the poller from starting up. It is recommended that you upgrade to v2.8.3 ASAP.

> **Note**
> This release will reject connections from clients that don't support GZIP compression. Use `--substreams-tier1-enforce-compression=false` to keep the previous behavior, then check the logs for `incoming Substreams Blocks request` entries with the value `compressed: false` to track users who are not using compressed HTTP connections.
- Bump firehose-core to v1.6.8
- Substreams: add `--substreams-tier1-enforce-compression` to reject connections from clients that do not support GZIP compression
- Substreams performance: reduced the number of mallocs (patching some third-party libraries)
- Substreams performance: removed heavy tracing (that wasn't exposed to the client)
- Fixed the `--reader-node-line-buffer-size` flag that was not being respected in the reader-node-stdin app
- poller: add `--max-block-fetch-duration`
## v2.8.1

- The `firehose-grpc-listen-addr` and `substreams-tier1-grpc-listen-addr` flags now accept comma-separated addresses (allows listening as plaintext and snakeoil-SSL at the same time, or on specific IP addresses)
- rpc-poller: fix fetching the first block on an endpoint (was not following the cursor, failing unnecessarily on non-archive nodes)
## v2.8.0

- Add a nil safety check on the `CombinedFilter` and when looping over the `transaction_trace` receipts
- Bump `substreams` and `dmetering` to the latest version, adding the `outputModuleHash` to the metering sender.
## v2.7.5

### Substreams fixes

> **Note**
> All caches for stores using the updatePolicy `set_sum` (added in substreams v1.7.0), and modules that depend on them, will need to be deleted, since they may contain bad data.

- Fix bad data in stores using the `set_sum` policy: squashing of store segments incorrectly "summed" some values that should have been "set", if the last event for a key on this segment was a "sum"
- Fix a small bug making some requests in development mode slow to start (when starting close to the module's initialBlock with a store that doesn't start on a boundary)