Releases: sigp/lighthouse

Summer Smith

22 Apr 02:13
v7.0.0
54f7bc5

Summary

This release includes the fork epoch for Electra ⚡ on Mainnet 🎉

All mainnet users must upgrade to v7.0.0 by the time of the fork on 7 May 2025. You must also update your execution client (see compatible versions below).

This release also includes several new features, bug fixes and optimisations.

⚠️ Breaking Changes ⚠️

You can upgrade to v7.0.0 from any v5 or v6 release. If you are upgrading from v5 you should make sure to read the v6 release notes to account for breaking changes between v5 and v6.

Upgrading to Lighthouse v7.0.0 should require no manual intervention aside from updating the binary or Docker image, as there are no changes to CLI flags that will prevent the node from starting. Mainnet users must upgrade before the Electra fork. Failure to upgrade in time will require a re-sync.

Once you upgrade to Lighthouse v7.0.0, you can downgrade to v6, but only prior to the Electra fork.

See the sections below for details on other backwards-incompatible changes:

  • Deprecated CLI flags
  • Minimum supported Rust version
  • IPv6 by default
  • Gas limit enforcement

⚠️ Deprecated CLI Flags ⚠️

The following beacon node flags have been deprecated. You should remove them, but the beacon node will still start if they are provided.

  • --light-client-server

🦀 Minimum Supported Rust Version 🦀

We have updated the Minimum Supported Rust Version (MSRV) for this release from 1.80.0 to 1.83.0.

This is only relevant to users compiling Lighthouse from source.

You can update your Rust compiler using:

rustup update stable

IPv6 by Default

Lighthouse will now automatically listen on IPv6 if it detects a globally-routable address. We expect this change to have no effect for the majority of users with IPv4-only setups, while benefiting users with correctly configured IPv6 stacks.

The default IPv6 listening port has been changed from port 9090 to port 9000 (same as IPv4) to make firewalling easier. The IPv6 port can be adjusted using the flag --port6.

You can opt out of IPv6 by using the flag --listen-address 0.0.0.0 to listen only on IPv4.
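As a sketch, here is how these flags might be combined when starting a beacon node (the flag names are from this release; the network and port values are only examples):

```
# Dual-stack with a custom IPv6 port (default is now 9000, same as IPv4):
lighthouse bn --network mainnet --port 9000 --port6 9001

# Opt out of IPv6 entirely and listen only on IPv4:
lighthouse bn --network mainnet --listen-address 0.0.0.0
```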

For more information, see the Lighthouse blog.

Gas Limit Enforcement

Lighthouse BN now enforces gas limit preferences when validating execution payloads from external builders (e.g. mev-boost relays). You can configure the gas limit for all validators connected to a VC using --gas-limit, or set individual limits in the validator_definitions.yml, or using the VC HTTP API.
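As an illustrative sketch of the per-validator option (field values are placeholders; check the Lighthouse book for the authoritative schema), a gas limit entry in validator_definitions.yml looks roughly like:

```yaml
# validator_definitions.yml (fragment) -- paths and pubkey are placeholders
- enabled: true
  voting_public_key: "0x87a5..."
  type: local_keystore
  voting_keystore_path: /path/to/keystore.json
  voting_keystore_password_path: /path/to/password.txt
  gas_limit: 36000000   # this validator's gas limit preference
```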

⚡ Electra ⚡

The Electra hard fork, paired with the Prague hard fork on the execution layer – together known as Pectra – brings several new features to Ethereum.

The headline change is known as Max EB, and raises the maximum effective balance a single validator may wield from 32 ETH to 2048 ETH. Once adopted, this will allow the network to run more efficiently with a lower validator count, while retaining the same level of security. Max EB even removes some centralisation vectors from staking incentives so that solo validators are able to tap into the compounding rewards previously enjoyed exclusively by large operators.

The process of switching a validator's max effective balance is a consolidation, which transfers stake from one validator to another. Consolidations are triggered via a smart contract call, and are fully opt-in and voluntary. If you are a solo operator with a small number of validators, there is no need to consolidate, although you may choose to do so.
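To put the Max EB numbers in perspective, a quick back-of-the-envelope calculation using the two balances quoted above (32 ETH and 2048 ETH):

```python
MIN_ACTIVATION_BALANCE = 32           # ETH: balance of a standard validator
MAX_EFFECTIVE_BALANCE_ELECTRA = 2048  # ETH: cap for consolidated (compounding) validators

# How many 32 ETH validators can be merged into a single
# max-balance validator via consolidations after Electra:
validators_per_consolidation = MAX_EFFECTIVE_BALANCE_ELECTRA // MIN_ACTIVATION_BALANCE
print(validators_per_consolidation)  # 64
```

A large operator running 64 standard validators could therefore collapse them into one validator with no loss of stake, which is where the reduced validator count and efficiency gains come from.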

Information about consolidation tooling has been added to the Lighthouse book.

Bug Fixes

  • Bugfix for a regression in attestation subscription logic, resolving InsufficientPeers errors.

New Features

  • Electra fork epoch for Gnosis chain.
  • Light client server enabled by default.
  • Support for a new database backend, redb. This is still experimental and only recommended for expert users.
  • New API to add trusted peers at runtime (#7198).
  • Full Hoodi testnet support (--network hoodi).

Optimisations

  • Smaller default state cache size (32) to keep memory constrained during non-finality.
  • Smarter state cache heuristics.
  • More efficient serving of BlocksByRange/BlobsByRange during non-finality.

Update Priority

This table provides priorities for which classes of users should update particular components.

User Class    | Beacon Node | Validator Client
------------- | ----------- | ----------------
Mainnet Users | Medium      | Medium
Testnet Users | Low         | Low

See Update Priorities for more information about this table.

Compatible Execution Clients

You must update both the consensus client (Lighthouse) and execution client to be ready for Pectra.

Execution client | Pectra-ready version
---------------- | --------------------
Reth             | v1.3.12
Nethermind       | v1.31.9
Geth             | v1.15.9
Erigon           | v3.0.2
Besu             | 25.4.1

Known Issues

Due to the reduced state cache size, you may see an increase in WARN State cache missed logs. This is harmless and can be safely ignored. These state cache misses will be downgraded to DEBUG level in the next release. In a future release the pruning code will also be adjusted so that these cache misses can't be triggered during pruning (which is the source of the majority of cache misses currently).

If you are running a node with services connected to the HTTP API (e.g. Rocket Pool rewards generation, a block explorer, etc.), we recommend setting a higher value for --state-cache-size, e.g. 128.

All Changes

  • Release v7.0.0 (#7288)
  • Merge remote-tracking branch 'origin/stable' into release-v7.0.0
  • Release v7.0.0-beta.7 (#7333)
  • Update proposer_slashings and attester_slashings amounts for electra. (#7316)
  • Release v7.0.0-beta.6
  • Update withdrawals processing (spec v1.5.0-beta.6)
  • Ensure /eth/v2/beacon/pool/attestations honors committee_index (#7298)
  • Ensure light_client/updates endpoint returns spec compliant SSZ data (#7230)
  • Update crossbeam to fix cargo audit failure (#7313)
  • Gnosis Pectra fork epoch (#7296)
  • Update and cleanup Electra preset (#7303)
  • Downgrade light client errors (#7300)
  • Add pending_consolidations Beacon API endpoint (#7290)
  • Remove/document remaining Electra TODOs (#6982)
  • Clarify network limits (#7175)
  • Fix builder API electra json response (#7285)
  • Mainnet Electra fork epoch (#7275)
  • Return eth1_data early post transition (#7248)
  • Compute roots for unfinalized by_range requests with fork-choice (#7098)
  • Bump openssl to fix cargo audit failure (#7263)
  • Rust 1.86.0 lints (#7254)
  • feat: add more bootnodes for Hoodi and Sepolia (#7222)
  • Consensus spec tests beta4 (#7231)
  • Disable LevelDB snappy feature (#7235)
  • Admin add/remove peer (#7198)
  • Top-up pubkey cache on startup (#7217)
  • Release v7.0.0-beta.5 (#7210)
  • Fix xdelta3 output buffer issue (#7174)
  • Prevent duplicate effective balance processing (#7209)
  • Release v7.0.0-beta.4 (#7162)
  • Update ring to 0.17.14 to fix build compat (#7164)
  • Reject attestations to blocks prior to the split (#7084)
  • Manual compaction endpoint backport (#7104)
  • Pseudo finalization endpoint (#7103)
  • Support Hoodi testnet (#7145)
  • State cache tweaks (#7095)
  • Add block ban flag --invalid-block-roots (#7042)
  • Ensure finalized block is the correct fork variant when constructing light client updates (#7085)
  • feat: implement new beacon APIs(accessors for pending_deposits/pending_partial_withdrawals) (#7006)
  • Address cargo audit failure RUSTSEC-2024-0437 (#7114)
  • Set epochs-per-blob-prune default to 256 (#7113)
  • Change state cache size default to 32 (#7101)
  • Address cargo audit failure RUSTSEC-2025-0009 (#7086)
  • Optimise status processing (#7082)
  • Temporarily ignore cargo audit failures (#7092)
  • Use sync_tolerance_epochs flag to control the proposer prep routines (#7044)
  • Schedule Chiado testnet Electra hard fork (#7074)
  • Make ExecutionBlock::total_difficulty Optional (#7050)
  • Add --long-timeouts-multiplier CLI flag (#7047)
  • Add --disable-attesting flag to validator client (#7046)
  • Add test flag to override SYNC_TOLERANCE_EPOCHS for range sync testing (#7030)
  • Fix builder API headers (#7009)
  • Rust 1.85 lints (#7019)
  • Fix light client merkle proofs (#7007)
  • Update mergify conditions for trivial and ready-for-merge labels to satisfy if base is not stable (#6997)
  • Release v7.0.0-beta.0 (#6962)
  • Fix light client plumbing in beacon processor (#6993)
  • Ensure GET v2/validator/aggregate_attestation is backwards compatible (#6984)
  • Address cargo audit failure RUSTSEC-2025-0006 (#6972)
  • IPv6 By Default (#6808)
  • Update EF tests to spec v1.5.0-beta.2 (#6958)
  • Sync active request byrange ids logs (#6914)
  • Enable Light Client server by default (#6950)
  • Schedule Sepolia and Holesky Electra forks (#6949)
  • Update attestation rewards API for Electra (#6819)
  • Fix aggregate attestation v2 re...

Nancy

17 Apr 06:07
v7.0.0-beta.7
fd82ee2
Pre-release

DO NOT RUN THIS PRE-RELEASE ON MAINNET

Summary

This is a hotfix release for Electra-enabled test networks: Sepolia, Holesky, Hoodi and Chiado. Users on these networks should update at their earliest convenience. More information about the patched bug will be available shortly.

If you have already updated to v7.0.0-beta.6, there is no need to update.

Update Priority

This table provides priorities for which classes of users should update particular components.

User Class    | Beacon Node    | Validator Client
------------- | -------------- | ----------------
Testnet Users | High           | Low
Mainnet Users | DO NOT UPGRADE | DO NOT UPGRADE

See Update Priorities for more information about this table.

All Changes

  • Release v7.0.0-beta.7 (#7333)
  • Update proposer_slashings and attester_slashings amounts for electra. (#7316)
  • Release v7.0.0-beta.6
  • Update withdrawals processing (spec v1.5.0-beta.6)
  • Ensure /eth/v2/beacon/pool/attestations honors committee_index (#7298)
  • Ensure light_client/updates endpoint returns spec compliant SSZ data (#7230)
  • Update crossbeam to fix cargo audit failure (#7313)
  • Gnosis Pectra fork epoch (#7296)
  • Update and cleanup Electra preset (#7303)
  • Downgrade light client errors (#7300)
  • Add pending_consolidations Beacon API endpoint (#7290)
  • Remove/document remaining Electra TODOs (#6982)
  • Clarify network limits (#7175)
  • Fix builder API electra json response (#7285)
  • Mainnet Electra fork epoch (#7275)
  • Return eth1_data early post transition (#7248)
  • Compute roots for unfinalized by_range requests with fork-choice (#7098)
  • Bump openssl to fix cargo audit failure (#7263)
  • Rust 1.86.0 lints (#7254)
  • feat: add more bootnodes for Hoodi and Sepolia (#7222)
  • Consensus spec tests beta4 (#7231)
  • Disable LevelDB snappy feature (#7235)
  • Admin add/remove peer (#7198)
  • Top-up pubkey cache on startup (#7217)

Binaries

See pre-built binaries documentation.

The binaries are signed with Sigma Prime's PGP key: 15E66D941F697E28F49381F426416DC3F30674B0

Architecture | Binary                                                    | PGP Signature
------------ | --------------------------------------------------------- | -------------
x86_64       | lighthouse-v7.0.0-beta.7-x86_64-apple-darwin.tar.gz       | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.7-x86_64-unknown-linux-gnu.tar.gz  | PGP Signature
aarch64      | lighthouse-v7.0.0-beta.7-aarch64-unknown-linux-gnu.tar.gz | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.7-x86_64-windows.tar.gz            | PGP Signature

System | Option        | Resource
------ | ------------- | ---------------
Docker | v7.0.0-beta.7 | sigp/lighthouse

Photography Raptor

27 Mar 05:02
v7.0.0-beta.5
6d5a2be
Pre-release

DO NOT RUN THIS PRE-RELEASE ON MAINNET

Summary

This is a high-priority bugfix release for all Electra testnets: Sepolia, Holesky, Hoodi and Chiado. You should update to this release as soon as possible, especially if you are running a large number of validators on one of these networks.

There is no impact on mainnet, and mainnet users should remain on v6.0.1.

Breaking Changes

There are no breaking changes in this release.

Update Priority

This table provides priorities for which classes of users should update particular components.

User Class    | Beacon Node    | Validator Client
------------- | -------------- | ----------------
Testnet Users | High           | Low
Mainnet Users | DO NOT UPGRADE | DO NOT UPGRADE

See Update Priorities for more information about this table.

All Changes

  • Release v7.0.0-beta.5 (#7210)
  • Fix xdelta3 output buffer issue (#7174)
  • Prevent duplicate effective balance processing (#7209)

Binaries

See pre-built binaries documentation.

The binaries are signed with Sigma Prime's PGP key: 15E66D941F697E28F49381F426416DC3F30674B0

Architecture | Binary                                                    | PGP Signature
------------ | --------------------------------------------------------- | -------------
x86_64       | lighthouse-v7.0.0-beta.5-x86_64-apple-darwin.tar.gz       | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.5-x86_64-unknown-linux-gnu.tar.gz  | PGP Signature
aarch64      | lighthouse-v7.0.0-beta.5-aarch64-unknown-linux-gnu.tar.gz | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.5-x86_64-windows.tar.gz            | PGP Signature

System | Option        | Resource
------ | ------------- | ---------------
Docker | v7.0.0-beta.5 | sigp/lighthouse

Blue Vacuum Cleaner

20 Mar 07:59
v7.0.0-beta.4
0486802
Pre-release

DO NOT RUN THIS PRE-RELEASE ON MAINNET

This release contains a bug and we strongly recommend upgrading to v7.0.0-beta.5.

Summary

This new beta release brings support for the Hoodi testnet, and cleaned up versions of the fixes made to keep Holesky running.

It is recommended for all testnets: Sepolia, Holesky, Hoodi and Chiado (Gnosis testnet).

This release will form the basis of the upcoming stable v7.0.0 release for Pectra on mainnet, so your feedback is greatly appreciated.

Breaking Changes

There are no breaking changes introduced by this release.

Known Issues

This release contains a bug and we recommend upgrading to v7.0.0-beta.5.

Update Priority

This table provides priorities for which classes of users should update particular components.

User Class    | Beacon Node    | Validator Client
------------- | -------------- | ----------------
Testnet Users | Low            | Low
Mainnet Users | DO NOT UPGRADE | DO NOT UPGRADE

See Update Priorities for more information about this table.

All Changes

  • Release v7.0.0-beta.4 (#7162)
  • Update ring to 0.17.14 to fix build compat (#7164)
  • Reject attestations to blocks prior to the split (#7084)
  • Manual compaction endpoint backport (#7104)
  • Pseudo finalization endpoint (#7103)
  • Support Hoodi testnet (#7145)
  • State cache tweaks (#7095)
  • Add block ban flag --invalid-block-roots (#7042)
  • Ensure finalized block is the correct fork variant when constructing light client updates (#7085)
  • feat: implement new beacon APIs(accessors for pending_deposits/pending_partial_withdrawals) (#7006)
  • Address cargo audit failure RUSTSEC-2024-0437 (#7114)
  • Set epochs-per-blob-prune default to 256 (#7113)
  • Change state cache size default to 32 (#7101)
  • Address cargo audit failure RUSTSEC-2025-0009 (#7086)
  • Optimise status processing (#7082)
  • Temporarily ignore cargo audit failures (#7092)
  • Use sync_tolerance_epochs flag to control the proposer prep routines (#7044)
  • Schedule Chiado testnet Electra hard fork (#7074)
  • Make ExecutionBlock::total_difficulty Optional (#7050)
  • Add --long-timeouts-multiplier CLI flag (#7047)
  • Add --disable-attesting flag to validator client (#7046)
  • Add test flag to override SYNC_TOLERANCE_EPOCHS for range sync testing (#7030)
  • Fix builder API headers (#7009)
  • Rust 1.85 lints (#7019)
  • Fix light client merkle proofs (#7007)

Binaries

See pre-built binaries documentation.

The binaries are signed with Sigma Prime's PGP key: 15E66D941F697E28F49381F426416DC3F30674B0

Architecture | Binary                                                    | PGP Signature
------------ | --------------------------------------------------------- | -------------
x86_64       | lighthouse-v7.0.0-beta.4-x86_64-apple-darwin.tar.gz       | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.4-x86_64-unknown-linux-gnu.tar.gz  | PGP Signature
aarch64      | lighthouse-v7.0.0-beta.4-aarch64-unknown-linux-gnu.tar.gz | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.4-x86_64-windows.tar.gz            | PGP Signature

System | Option        | Resource
------ | ------------- | ---------------
Docker | v7.0.0-beta.4 | sigp/lighthouse

Memory Parasites

13 Mar 10:03
v7.0.0-beta.3
8d058e4
Pre-release

DO NOT RUN THIS PRE-RELEASE ON MAINNET

Summary

This is another hotfix for Holesky fixing a bug introduced in v7.0.0-beta.2. The bug caused nodes to get stuck after Holesky finalized again. This release is recommended over v7.0.0-beta.2 for all nodes running on Holesky.

Alternatively, users may checkpoint sync using v7.0.0-beta.0 with --state-cache-size 32, although this may become unstable if finality lapses again.

We are working on further testing and stabilising the hotfixes for the final v7.0.0 release and may release a new beta before then. Thank you for your patience.

All Changes

  • Release v7.0.0-beta.3
  • Fix descent from split check (#7105)

Binaries

See pre-built binaries documentation.

The binaries are signed with Sigma Prime's PGP key: 15E66D941F697E28F49381F426416DC3F30674B0

Architecture | Binary                                                    | PGP Signature
------------ | --------------------------------------------------------- | -------------
x86_64       | lighthouse-v7.0.0-beta.3-x86_64-apple-darwin.tar.gz       | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.3-x86_64-unknown-linux-gnu.tar.gz  | PGP Signature
aarch64      | lighthouse-v7.0.0-beta.3-aarch64-unknown-linux-gnu.tar.gz | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.3-x86_64-windows.tar.gz            | PGP Signature

System | Option        | Resource
------ | ------------- | ---------------
Docker | v7.0.0-beta.3 | sigp/lighthouse

Frankenstein's Monster

04 Mar 06:06
9bb0e13
Pre-release

⚠️ This release contains a known bug which can cause nodes to get stuck. If you are running on Holesky, v7.0.0-beta.3 is recommended instead ⚠️

💡 Mainnet users can ignore this pre-release.

Summary

This is a second hot-fix release for the unsuccessful Electra upgrade on Holesky in February 2025. It contains patches to help reduce memory usage during non-finality, and to help nodes sync.

This release is only for Holesky users. Do not use this version on mainnet, Sepolia, or any other network.

Please see our informational issue for up-to-date advice on the Holesky situation: #7040

Forced Pseudo-Finalization

This release includes an experimental feature to forcefully "finalize" a block in Lighthouse's database so that Lighthouse can prune low-quality sidechains, remove finalized states and reclaim disk space. This feature is not safe in general, and should not be used except in emergencies.

Usage of the endpoint requires an epoch, block_root and state_root to pseudo-finalize, which must be from an epoch boundary. Skipped slots are OK, but the state_root must be the state_root of the epoch boundary state.

An example command to force pseudo-finalization on Holesky is:

curl -X POST --data '{"epoch": "117400", "state_root": "0x355fa23c9704fe346362c43a8fe43fba464fe63f20853bd3a87a8f465d52b4f4", "block_root": "0x06d788e593fd2b5b6fb6dcd63dfa4766201f05d948923dff6865d823246dd3c7" }' http://localhost:5052/lighthouse/finalize

If successful you should see a lot of logs of the form Pruning head, and then after some time (possibly multiple hours if your node knows all of the 800+ Holesky heads), it will complete. Completion is indicated via the debug log DEBG Database consolidation complete (look in $datadir/beacon/logs/beacon.log), and an update to the split field in the /lighthouse/database/info API.

Compaction will likely run after forced finalization, which is what reclaims the space permanently. If it starts, you will see the log:

INFO Starting database compaction

When it completes (likely after ~1hr), you will see the log:

INFO Database compaction complete

If the finalization completes without compaction, you can trigger a manual compaction using the HTTP API:

curl -X POST http://localhost:5052/lighthouse/compaction

State Cache Tweaks

We've tweaked how old states interact with the state cache, removing several paths for state cache misses, and bad state cache interactions caused by BlocksByRange requests, sidechains and attestations to ancient blocks.

All of these optimisations are automatic and require no action from users to benefit. If you are feeling adventurous we've also found some benefit from using a new flag --state-cache-headroom 8, which prunes the state cache more aggressively (removing 8 states) when it gets full.

You may notice backtraces present in the State cache missed logs. These are harmless and do not indicate a fatal error. We added them to aid in debugging.

New HTTP endpoint lighthouse/add_peer to add trusted peer

This endpoint allows users to add a trusted peer to the peer database; the node dials the peer on every heartbeat in case it gracefully disconnects. This is useful if the node struggles to find peers on the canonical chain. It can be combined with the --disable-discovery flag to limit the peers the node dials, speeding up syncing to the right chain.

An example command to add a trusted peer:

curl -X POST -H "Content-Type: application/json" --data '{"enr": "enr:-Le4QLoE1wFHSlGcm48a9ZESb_MRLqPPu6G0vHqu4MaUcQNDHS69tsy-zkN0K6pglyzX8m24mkb-LtBcbjAYdP1uxm4BhGV0aDKQabfZdAQBcAAAAQAAAAAAAIJpZIJ2NIJpcIQ5gR6Wg2lwNpAgAUHQBwEQAAAAAAAAADR-iXNlY3AyNTZrMaEDPMSNdcL92uNIyCsS177Z6KTXlbZakQqxv3aQcWawNXeDdWRwgiMohHVkcDaCI4I"}' http://localhost:5052/lighthouse/add_peer

⚠️ Breaking Changes ⚠️

You should only upgrade to v7.0.0-beta.2 from v7.0.0-beta.{0,1} on Holesky. This release contains no breaking changes compared to v7.0.0-beta.1.

Please see the release notes for previous v7 releases for a summary of changes included in those releases:

Update Priority

This table provides priorities for which classes of users should update particular components.

User Class                  | Beacon Node  | Validator Client
--------------------------- | ------------ | ----------------
Staking Users (testnet)     | High         | High
Non-Staking Users (testnet) | High         | ---
Staking Users (mainnet)     | DON'T UPDATE | DON'T UPDATE
Non-Staking Users (mainnet) | DON'T UPDATE | ---

See Update Priorities for more information about this table.

Testnet users should update both the Lighthouse VC and BN to v7.0.0-beta.2 if using separate binaries. The execution layer client (e.g. Geth, Reth, Besu, Nethermind, Erigon) must also be updated prior to the Electra fork.

All Changes

  • Bump version to v7.0.0-beta.2 (#7073)
  • Manual compaction endpoint (#7072)
  • Add CI fixes to holesky-rescue (#7071)
  • Add http endpoint to add trusted peer (#7068)
  • Add backtrace logging. (#7063)
  • Prevent writing to state cache when migrating the database (#7067)
  • Split block root lookups between fork choice and store on BBR response (#7066)
  • Revert "Reuse milhouse subtrees to shrink inactivity_scores in memory (#7062)"
  • Reuse milhouse subtrees to shrink inactivity_scores in memory (#7062)
  • Manual finalization endpoint (#7059)
  • Change state cache size default to 32. (#7055)
  • Load block roots from fork choice where possible when serving BlocksByRange requests (#7058)
  • Optimise status processing for holesky-rescue (#7054)

Binaries

See pre-built binaries documentation.

The binaries are signed with Sigma Prime's PGP key: 15E66D941F697E28F49381F426416DC3F30674B0

Architecture | Binary                                                    | PGP Signature
------------ | --------------------------------------------------------- | -------------
x86_64       | lighthouse-v7.0.0-beta.2-x86_64-apple-darwin.tar.gz       | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.2-x86_64-unknown-linux-gnu.tar.gz  | PGP Signature
aarch64      | lighthouse-v7.0.0-beta.2-aarch64-unknown-linux-gnu.tar.gz | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.2-x86_64-windows.tar.gz            | PGP Signature

System | Option        | Resource
------ | ------------- | ---------------
Docker | v7.0.0-beta.2 | sigp/lighthouse

Hemorrhage

26 Feb 05:09
v7.0.0-beta.1
dc320e3
Pre-release

Summary

This is a hot-fix release for the unsuccessful Electra upgrade on Holesky in February 2025. It contains patches to help users avoid the invalid justified Holesky block and handle the resulting network turmoil (#7041).

This release is only for Holesky users. Do not use this version on mainnet.

Please see our informational issue for up-to-date advice on the Holesky situation: #7040

Update Priority

This table provides priorities for which classes of users should update particular components.

User Class        | Beacon Node         | Validator Client
----------------- | ------------------- | -------------------
Staking Users     | High (Holesky only) | High (Holesky only)
Non-Staking Users | High (Holesky only) | ---

Do not use this version on mainnet

See Update Priorities for more information about this table.

All Changes

  • Bump version
  • discard unused code
  • take config value
  • Bump sync-tolerance-epoch and make it a cli param
  • Add vc --disable-attesting flag
  • Ban peers with banned payloads
  • Add log
  • Blacklist invalid block root in block verification and blacklist invalid finalized epochs in sync.
  • Blacklist invalid block root in block verification and blacklist invalid finalized epochs in sync.
  • MORE
  • Allow invalidation of "valid" nodes
  • Implement invalidation API
  • Remove more liveness risks
  • lcli http-sync hacks
  • Disable liveness risk
  • Fix flag
  • Add flag to disable attestation APIs
  • Fix builder API headers (#7009)
  • Rust 1.85 lints (#7019)
  • Fix light client merkle proofs (#7007)

Binaries

See pre-built binaries documentation.

The binaries are signed with Sigma Prime's PGP key: 15E66D941F697E28F49381F426416DC3F30674B0

Architecture | Binary                                                    | PGP Signature
------------ | --------------------------------------------------------- | -------------
x86_64       | lighthouse-v7.0.0-beta.1-x86_64-apple-darwin.tar.gz       | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.1-x86_64-unknown-linux-gnu.tar.gz  | PGP Signature
aarch64      | lighthouse-v7.0.0-beta.1-aarch64-unknown-linux-gnu.tar.gz | PGP Signature
x86_64       | lighthouse-v7.0.0-beta.1-x86_64-windows.tar.gz            | PGP Signature

System | Option        | Resource
------ | ------------- | ---------------
Docker | v7.0.0-beta.1 | sigp/lighthouse

Beta-Seven

13 Feb 07:34
v7.0.0-beta.0
1888be5
Pre-release

💡 Mainnet users can ignore this pre-release.

Summary

This beta release is high-priority for users on Holesky and Sepolia. It is a required upgrade for the upcoming Electra hard forks on these testnets.

  • Electra on Holesky: Mon 24 Feb 2025 21:55:12 UTC. Slot 3710976.
  • Electra on Sepolia: Wed 5 Mar 2025 07:29:36 UTC. Slot 7118848.

If you are running Lighthouse on a testnet, you must also upgrade your execution layer client to a Prague-Electra (Pectra) compatible release.

The Electra upgrade has not yet been scheduled on mainnet, so no action is required for mainnet users (do not upgrade).

Notable changes in Lighthouse v7.0.0-beta.0 include:

  • IPv6 enabled by default when a globally-routable IPv6 address is configured.
  • Light client server enabled by default.
  • Bugfix for a regression in attestation subscription logic.
  • Support for a new database backend, redb.

⚠️ Breaking Changes ⚠️

You can upgrade to v7.0.0-beta.0 from any v5 or v6 release. If you are upgrading from v5 you should make sure to read the v6 release notes to account for breaking changes between v5 and v6.

Upgrading to Lighthouse v7.0.0-beta.0 should be automatic for all users, as there are no changes to CLI flags that will prevent the node from starting. Holesky and Sepolia users must upgrade before the Electra fork. Failure to upgrade in time will require a re-sync.

Once you upgrade to Lighthouse v7.0.0-beta.0, you can downgrade to v6, but only prior to the Electra fork.

⚠️ Deprecated CLI Flags ⚠️

The following beacon node flags have been deprecated. You should remove them, but the beacon node will still start if they are provided.

  • --light-client-server

IPv6 by Default

Lighthouse will now automatically listen on IPv6 if it detects a globally-routable address. We expect this change to have no effect for the majority of users with IPv4-only setups, while benefiting users with correctly configured IPv6 stacks.

The default IPv6 listening port has been changed from port 9090 to port 9000 (same as IPv4) to make firewalling easier. The IPv6 port can be adjusted using the flag --port6.

You can opt out of IPv6 by using the flag --listen-address 0.0.0.0 to listen only on IPv4.

For more information, see the implementation PR (#6808).

🦀 Minimum Supported Rust Version 🦀

We have updated the Minimum Supported Rust Version (MSRV) for this release from 1.80.0 to 1.83.0.

This is only relevant to users compiling Lighthouse from source.

You can update your Rust compiler using:

rustup update stable

⚡ Electra ⚡

The Electra hard fork, paired with the Prague hard fork on the execution layer – together known as Pectra – brings several new features to Ethereum.

The headline change is known as Max EB, and raises the maximum effective balance a single validator may wield from 32 ETH to 2048 ETH. Once adopted, this will allow the network to run more efficiently with a lower validator count, while retaining the same level of security. Max EB even removes some centralisation vectors from staking incentives so that solo validators are able to tap into the compounding rewards previously enjoyed exclusively by large operators.

The process of switching a validator's max effective balance is a consolidation, which transfers stake from one validator to another. Consolidations are triggered via a smart contract call, and are fully opt-in and voluntary. If you are a solo operator with a small number of validators, there is no need to consolidate, although you may choose to do so.

We expect the tooling and documentation for consolidations to mature as Electra on mainnet approaches. We have plans of our own to update the Lighthouse UI (Siren) with consolidation support, and will make an announcement when that is ready.

Light Client Server

We have enabled the Light Client Server by default 🎉. Our implementation has reached a stage of maturity and performance where we feel comfortable rolling it out by default. It should result in negligible changes to bandwidth, CPU usage, memory and disk I/O for users.

The light client protocol allows very lightweight clients and devices to interact with Ethereum, and we are excited to see how the ecosystem evolves with more widespread protocol support.

You can opt out of the light client server using --disable-light-client-server.

Attestation Subscription Fix

A fix was made to a bug in the beacon node's attestation subscription logic which could cause under-subscription and issues publishing aggregate attestations.

New Database Backends

A new database backend is available in the form of redb. This is a pure-Rust database, with nice ACID properties. We hope to modernise and optimise our database usage around redb in the coming months.

For now, redb is performing slightly worse than LevelDB for some operations, and is only recommended for expert users and tinkerers. You can opt-in using --beacon-node-backend redb. This won't make use of any existing LevelDB database, nor will it delete it, so you should delete your LevelDB database manually and then checkpoint sync if you would like to switch.
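As a sketch of the switch-over described above (the datadir path, service name and checkpoint sync URL are placeholders, not values from this release):

```
# Stop the node, remove the old LevelDB database, then re-sync with redb.
sudo systemctl stop lighthouse-bn
rm -r ~/.lighthouse/mainnet/beacon

lighthouse bn \
  --beacon-node-backend redb \
  --checkpoint-sync-url https://checkpoint-provider.example
```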

🐛 Known Issues 🐛

There are no known issues at the time of writing.

Update Priority

This table provides priorities for which classes of users should update particular components.

User Class                  | Beacon Node  | Validator Client
--------------------------- | ------------ | ----------------
Staking Users (testnet)     | High         | High
Non-Staking Users (testnet) | High         | ---
Staking Users (mainnet)     | DON'T UPDATE | DON'T UPDATE
Non-Staking Users (mainnet) | DON'T UPDATE | ---

See Update Priorities for more information about this table.

Testnet users should update both the Lighthouse VC and BN to v7.0.0-beta.0 if using separate binaries. The execution layer client (e.g. Geth, Reth, Besu, Nethermind, Erigon) must also be updated prior to the Electra fork.

All Changes

  • Release v7.0.0-beta.0 (#6962)
  • Fix light client plumbing in beacon processor (#6993)
  • Ensure GET v2/validator/aggregate_attestation is backwards compatible (#6984)
  • Address cargo audit failure RUSTSEC-2025-0006 (#6972)
  • IPv6 By Default (#6808)
  • Update EF tests to spec v1.5.0-beta.2 (#6958)
  • Sync active request byrange ids logs (#6914)
  • Enable Light Client server by default (#6950)
  • Schedule Sepolia and Holesky Electra forks (#6949)
  • Update attestation rewards API for Electra (#6819)
  • Fix aggregate attestation v2 response (#6926)
  • Remove duplicated fork_epoch and fork_version implementation (#6953)
  • Optimise and refine SingleAttestation conversion (#6934)
  • Fix fetch blobs in all-null case (#6940)
  • Keep execution payload during historical backfill when prune-payloads set to false (#6766)
  • Remove un-used batch sync error condition (#6917)
  • Remove unused metrics (#6817)
  • Reduce ForkName boilerplate in fork-context (#6933)
  • Use old geth version due to breaking changes. (#6936)
  • Fix attestation queue length metric (#6924)
  • Update metrics.rs (#6863)
  • Add individual by_range sync requests (#6497)
  • Return error if getBlobs not supported (#6911)
  • Add test to beacon node fallback feature (#6568)
  • Add check to Lockbud CI job (#6898)
  • UX Network Fixes (#6796)
  • chore: update peerDAS KZG library to 0.5.3 (#6906)
  • Migrate validator client to clap derive (#6300)
  • Use data column batch verification consistently (#6851)
  • Add builder SSZ flow (#6859)
  • Subscribe to PeerDAS topics on Fulu fork (#6849)
  • Fix subnet unsubscription time (#6890)
  • Cargo update for openssl vuln (#6901)
  • update libp2p to 0.55 (#6889)
  • update MSRV (#6896)
  • Compute columns in post-PeerDAS checkpoint sync (#6760)
  • Fix mdbook build. (#6891)
  • POST /eth/v2/beacon/pool/attestations bugfixes (#6867)
  • Cargo update without rust_eth_kzg (#6848)
  • Implement PeerDAS Fulu fork activation (#6795)
  • Make range sync chain Id sequential (#6868)
  • Underflow and Typo (#6885)
  • Increase jemalloc aarch64 page size limit (#5244) (#6831)
  • Some sync/backfill format nits (#6861)
  • Fork aware max values in rpc (#6847)
  • More gossipsub metrics (#6873)
  • Fix Redb implementation and add CI checks (#6856)
  • Detect invalid proposer signature on RPC block processing (#6519)
  • Add tests for ExecutionRequests decoding errors (#6832)
  • Update to EF tests v1.5.0-beta.1 (#6871)
  • Modularize beacon node backend (#4718)
  • Electra minor refactorings (#6839)
  • Update discv5 (#6836)
  • Avoid computing columns from EL blobs if block has already been imported (#6816)
  • Add MetaData V3 support to node/identity API (#6827)
  • Refactor mock builder (#6735)
  • Add EIP-7636 support (#6793)
  • Fix custodial peer assumption on lookup custody requests (#6815)
  • Do not send column requests if there is no blob for the block. (#6814)
  • SingleAttestation implementation (#6488)
  • Misc. dependency cleanup (#6810)
  • Remove ineffectual block RPC limits post merge (#6798)
  • Implement PeerDAS subnet decoupling (aka custody groups) (#6736)
  • Fix data columns not persisting for PeerDAS due to a getBlobs race condition (#6756)
  • Use existing peer count metrics loop to check for open_nat toggle (#6800)
  • Implement changes for EIP 7691 (#6803)
  • Execution requests with ...

Rick In A Vat In The Garage

16 Dec 06:25
v6.0.1
0d90135

Summary

This low-priority patch release fixes a few minor issues in the recent major release v6.0.0. If you have not yet upgraded to Lighthouse v6.0.0, we recommend skipping v6.0.0 and going straight to v6.0.1. These release notes provide a full description of the changes in v6.0.0 and v6.0.1, referred to collectively as v6.0.x.

Lighthouse v6.0.x includes new features and optimisations which are backwards-incompatible with Lighthouse v5.x.y.

After many months of testing, v6.0.x stabilises hierarchical state diffs, resulting in much more compact archive nodes! This long-awaited feature was also known as on-disk tree-states, and was available for pre-release testing in Lighthouse v5.1.222-exp.

Other notable changes in v6.0.x include:

  • Removal and deprecation of several old CLI flags. Some flags will need to be removed in order for Lighthouse to start, see below for a full list.
  • Improved beacon node failover and prioritisation in the validator client.
  • Support for engine_getBlobsV1 to speed up import and propagation of blobs.
  • Optimised peer discovery and long-term subnet subscription logic.
  • New commands for lighthouse validator-manager.
  • Improved light client support. Enabled via --light-client-server.
  • SSZ by default for blocks published by the VC.

Compared to v6.0.0, this release includes several bugfixes:

  • Fix an issue with attestation subnet subscriptions not being tracked correctly (#6682).
  • Fix the /lighthouse/nat API and NAT metrics (#6677).
  • Prevent sync from getting stuck on certain lookups (#6658).
  • Quality-of-life improvements for archive node users (#6669, #6668).

⚠️ Breaking Changes ⚠️

Upgrading to Lighthouse v6.0.x should be automatic for most users, but you must:

  • Remove any unsupported CLI flags (see below), and
  • Be aware of the one-way database migration and the changes to archive nodes.

Once you upgrade a beacon node to Lighthouse v6.0.x, you cannot downgrade to v5.x.y without re-syncing.

⚠️ Database Migration ⚠️

The beacon node database migration for v6.0.x is applied automatically upon upgrading. No manual action is required to upgrade.

There is no database downgrade available. We did not take this choice lightly, but in order to deliver hierarchical state diffs, a one-way database migration was simplest. If you do find yourself wanting to downgrade, re-syncing using checkpoint sync is highly recommended as it will get the node back online in just a few minutes.

For Archive Nodes

The migration enables hierarchical state diffs which necessitates the deletion of previously stored historic states. If you are running an archive node, then all historic states will be deleted upon upgrading. If you would like to continue running an archive node, you should use the --reconstruct-historic-states flag so that state reconstruction can restart from slot 0.

If you would like to change the density of diffs, you can use the new flag --hierarchy-exponents which should be applied the first time you start after upgrading. We have found that the hierarchy-exponents configuration does not greatly impact query times which tend to be dominated by cache builds and affected more by query ordering. We still recommend avoiding parallel state queries at the same slot, and making use of sequential calls where possible (e.g. in indexing services). We plan to continue optimising parallel queries and cache builds in future releases, without requiring a re-sync.

For more information on configuring the hierarchy exponents see the updated documentation on Database Configuration in the Lighthouse book.

Hierarchy Exponents          | Storage requirement | Sequential slot query | Uncached query
-----------------------------|---------------------|-----------------------|---------------
5,9,11,13,16,18,21 (default) | 418 GiB             | 250-700 ms            | up to 10 s
5,7,11 (frequent snapshots)  | 589 GiB             | 250-700 ms            | up to 6 s
0,5,7,11 (per-slot diffs)    | 1915 GiB+           | 250-700 ms            | up to 2 s
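To build intuition for how the exponents trade storage for reconstruction work, here is a rough sketch (our own illustration, not Lighthouse's actual code) of the layered-diff idea: each exponent e stores diffs aligned to multiples of 2^e slots, and reconstructing a state walks from the coarsest layer down, so smaller exponents mean more stored diffs but a shorter final replay distance.

```python
def diff_chain(slot, exponents=(5, 9, 11, 13, 16, 18, 21)):
    """Slots whose stored snapshot/diffs would be combined (coarsest
    first) to reconstruct the state at `slot`, assuming each layer
    stores diffs aligned to multiples of 2**e slots."""
    chain = []
    for e in sorted(exponents, reverse=True):
        anchor = (slot >> e) << e  # round slot down to a multiple of 2**e
        if not chain or anchor != chain[-1]:
            chain.append(anchor)
    return chain
```

With the default exponents, any state not on a multiple of 2^5 = 32 slots still requires replaying up to 31 blocks past the finest diff, whereas adding a 0 exponent (per-slot diffs, the 1915 GiB+ row) makes the chain end exactly at the requested slot.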

As part of the archive node changes, the format of the "anchor" has also changed. For an archive node the anchor will no longer be null and will instead take the value:

"anchor": {
  "anchor_slot": "0",
  "oldest_block_slot": "0",
  "oldest_block_parent": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "state_upper_limit": "0",
  "state_lower_limit": "0"
}

Don't be put off by the state_upper_limit being equal to 0: this indicates that all states with slots >= 0 are available, i.e. full state history.
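Based on the semantics described above, a monitoring script could detect full state history from the anchor like this (a sketch of our own; the field handling is an assumption drawn from the example JSON):

```python
def has_full_state_history(anchor):
    """True if the anchor (as shown above) indicates that all states
    with slots >= 0 are stored, i.e. a complete archive node."""
    return anchor is not None and int(anchor["state_upper_limit"]) == 0

# The anchor value from the release notes above:
anchor = {
    "anchor_slot": "0",
    "oldest_block_slot": "0",
    "oldest_block_parent": "0x" + "00" * 32,
    "state_upper_limit": "0",
    "state_lower_limit": "0",
}
assert has_full_state_history(anchor)
```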

NOTE: if you are upgrading from v5.1.222-exp you need to re-sync from scratch. The database upgrade will fail if attempted.

⚠️ Removed CLI Flags ⚠️

The following beacon node flags which were previously deprecated have been deleted. You must remove them from your beacon node arguments before updating to v6.0.x:

  • --self-limiter
  • --http-spec-fork
  • --http-allow-sync-stalled
  • --disable-lock-timeouts
  • --always-prefer-builder-payload
  • --progressive-balances
  • --disable-duplicate-warn-logs
  • -l (env logger)

The following validator client flags have also been deleted and must be removed before starting up:

  • --latency-measurement-service
  • --disable-run-on-all
  • --produce-block-v3

In many cases the behaviour enabled by these flags has become the default and no replacement flag is necessary. If you would like to fine-tune some aspect of Lighthouse's behaviour, the full list of CLI flags is available in the book.

⚠️ Deprecated CLI Flags ⚠️

The following beacon node flags have been deprecated. You should remove them, but the beacon node will still start if they are provided.

  • --eth1
  • --dummy-eth1

The following global (BN and VC) flags have also been deprecated:

  • --terminal-total-difficulty-override
  • --terminal-block-hash-override
  • --terminal-block-hash-epoch-override
  • --safe-slots-to-import-optimistically

⚠️ Modified CLI Flags ⚠️

The beacon node flag --purge-db will now only delete the database in interactive mode, and requires manual confirmation. If it is provided in a non-interactive context (e.g. under systemd or Docker), it will have no effect: the beacon node will start without anything being deleted.

If you wish to use the old purge-db behaviour it is available via the flag --purge-db-force which never asks for confirmation.

Network Optimisations

This release includes optimisations to Lighthouse's subnet subscription logic, updates to peer discovery, and fine-tuning of IDONTWANT and rate limiting.

Users should see similar performance with reduced bandwidth. Users running a large number of validators (1000+) on a single beacon node may notice a reduction in the number of subscribed subnets, but can opt-in to subscribing to more subnets using --subscribe-all-subnets if desired (e.g. for marginally increasing block rewards from included attestations).

Validator Client Fallback Optimisations

The beacon node fallback feature in the validator client has been refactored for greater responsiveness. Validator clients running with multiple beacon nodes will now switch more aggressively to the "healthiest" looking beacon node, where health status is determined by:

  • Sync distance (head distance from the current slot).
  • Execution layer (EL) health (whether the EL is online and not erroring).
  • Optimistic sync status (whether the EL is syncing).
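One plausible way to express that prioritisation (a simplified sketch of our own; the real health scoring in the VC is more nuanced) is to sort candidate beacon nodes by a tuple of these criteria:

```python
from dataclasses import dataclass

@dataclass
class NodeHealth:
    name: str
    sync_distance: int   # slots between the node's head and the current slot
    el_offline: bool     # EL unreachable or returning errors
    optimistic: bool     # optimistically synced (EL still verifying blocks)

def health_key(node: NodeHealth):
    # False sorts before True, so nodes with a working, fully synced EL
    # win; among those, the smallest sync distance is healthiest.
    return (node.el_offline, node.optimistic, node.sync_distance)

def pick_healthiest(nodes):
    return min(nodes, key=health_key)
```

A VC structured along these lines would re-evaluate the ordering periodically and fail over whenever a different node becomes the healthiest.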

The impact of this change should be less downtime during upgrades, and better resilience to faulty or broken beacon nodes.

Users running majority clients should be aware that in the case of a faulty majority client, the validator client may prefer the faulty chain due to it appearing healthier. The best defense against this problem is to run some (or all) validator clients without any connection to a beacon node running a majority CL client or majority EL client.

New Validator Manager Commands

The Lighthouse validator manager is the recommended way to manage validators from the CLI, without having to shut down the validator client.

In this release it has gained several new capabilities:

  • lighthouse vm import: now supports standard keystores generated by other tools like staking-deposit-cli.
  • lighthouse vm list: a new read-only command to list the validator keys that a VC has imported.
  • lighthouse vm delete: a new command to remove a key from a validator client, e.g. after exiting.

For details on these commands and available flags, see the docs: https://lighthouse-book.sigmaprime.io/validator-manager.html

Future releases will continue to expand the number of available commands, with the goal of eventually deprecating the previous lighthouse account-manager CLI.

Fetch Blobs Optimisation

This release supports the new engine_getBlobsV1 API for accelerating the import of blocks with blobs. If the API is supported by your execution node, Lighthouse will use it to load blobs from the mempool without waiting for them to arrive on gossip. Our testing indicates that this will help Ethereum scale to a higher blob count, but we need more data from real network...


Tiny Rick

02 Dec 05:57
v6.0.0
c042dc1

Summary

This low-priority major release introduces new features and optimisations which are backwards-incompatible with Lighthouse v5.x.y. If you are running Nethermind we recommend waiting for the release of Nethermind v1.30.0 before upgrading due to an incompatibility (see Known Issues).

After many months of testing, this release stabilises hierarchical state diffs, resulting in much more compact archive nodes! This long-awaited feature was also known as on-disk tree-states, and was available for pre-release testing in Lighthouse v5.1.222-exp.

Other notable changes include:

  • Removal and deprecation of several old CLI flags. Some flags will need to be removed in order for Lighthouse to start, see below for a full list.
  • Improved beacon node failover and prioritisation in the validator client.
  • Support for engine_getBlobsV1 to speed up import and propagation of blobs.
  • Optimised peer discovery and long-term subnet subscription logic.
  • New commands for lighthouse validator-manager.
  • Improved light client support. Enabled via --light-client-server.
  • SSZ by default for blocks published by the VC.

⚠️ Breaking Changes ⚠️

Upgrading to Lighthouse v6.0.0 should be automatic for most users, but you must:

  • Remove any unsupported CLI flags (see below), and
  • Be aware of the one-way database migration and the changes to archive nodes.

Once you upgrade a beacon node to Lighthouse v6.0.0, you cannot downgrade to v5.x.y without re-syncing.

⚠️ Database Migration ⚠️

The beacon node database migration for v6.0.0 is applied automatically upon upgrading. No manual action is required to upgrade.

There is no database downgrade available. We did not take this choice lightly, but in order to deliver hierarchical state diffs, a one-way database migration was simplest. If you do find yourself wanting to downgrade, re-syncing using checkpoint sync is highly recommended as it will get the node back online in just a few minutes.

For Archive Nodes

The migration enables hierarchical state diffs which necessitates the deletion of previously stored historic states. If you are running an archive node, then all historic states will be deleted upon upgrading. If you would like to continue running an archive node, you should use the --reconstruct-historic-states flag so that state reconstruction can restart from slot 0.

If you would like to change the density of diffs, you can use the new flag --hierarchy-exponents which should be applied the first time you start after upgrading. We have found that the hierarchy-exponents configuration does not greatly impact query times which tend to be dominated by cache builds and affected more by query ordering. We still recommend avoiding parallel state queries at the same slot, and making use of sequential calls where possible (e.g. in indexing services). We plan to continue optimising parallel queries and cache builds in future releases, without requiring a re-sync.

For more information on configuring the hierarchy exponents see the updated documentation on Database Configuration in the Lighthouse book.

Hierarchy Exponents          | Storage requirement | Sequential slot query | Uncached query
-----------------------------|---------------------|-----------------------|---------------
5,9,11,13,16,18,21 (default) | 418 GiB             | 250-700 ms            | up to 10 s
5,7,11 (frequent snapshots)  | 589 GiB             | 250-700 ms            | up to 6 s
0,5,7,11 (per-slot diffs)    | 1915 GiB+           | 250-700 ms            | up to 2 s

As part of the archive node changes, the format of the "anchor" has also changed. For an archive node the anchor will no longer be null and will instead take the value:

"anchor": {
  "anchor_slot": "0",
  "oldest_block_slot": "0",
  "oldest_block_parent": "0x0000000000000000000000000000000000000000000000000000000000000000",
  "state_upper_limit": "0",
  "state_lower_limit": "0"
}

Don't be put off by the state_upper_limit being equal to 0: this indicates that all states with slots >= 0 are available, i.e. full state history.

NOTE: if you are upgrading from v5.1.222-exp you need to re-sync from scratch. The database upgrade will fail if attempted.

⚠️ Removed CLI Flags ⚠️

The following beacon node flags which were previously deprecated have been deleted. You must remove them from your beacon node arguments before updating to v6.0.0:

  • --self-limiter
  • --http-spec-fork
  • --http-allow-sync-stalled
  • --disable-lock-timeouts
  • --always-prefer-builder-payload
  • --progressive-balances
  • --disable-duplicate-warn-logs
  • -l (env logger)

The following validator client flags have also been deleted and must be removed before starting up:

  • --latency-measurement-service
  • --disable-run-on-all
  • --produce-block-v3

In many cases the behaviour enabled by these flags has become the default and no replacement flag is necessary. If you would like to fine-tune some aspect of Lighthouse's behaviour, the full list of CLI flags is available in the book.

⚠️ Deprecated CLI Flags ⚠️

The following beacon node flags have been deprecated. You should remove them, but the beacon node will still start if they are provided.

  • --eth1
  • --dummy-eth1

The following global (BN and VC) flags have also been deprecated:

  • --terminal-total-difficulty-override
  • --terminal-block-hash-override
  • --terminal-block-hash-epoch-override
  • --safe-slots-to-import-optimistically

⚠️ Modified CLI Flags ⚠️

The beacon node flag --purge-db will now only delete the database in interactive mode, and requires manual confirmation. If it is provided in a non-interactive context (e.g. under systemd or Docker), it will have no effect: the beacon node will start without anything being deleted.

If you wish to use the old purge-db behaviour it is available via the flag --purge-db-force which never asks for confirmation.

Network Optimisations

This release includes optimisations to Lighthouse's subnet subscription logic, updates to peer discovery, and fine-tuning of IDONTWANT and rate limiting.

Users should see similar performance with reduced bandwidth. Users running a large number of validators (1000+) on a single beacon node may notice a reduction in the number of subscribed subnets, but can opt-in to subscribing to more subnets using --subscribe-all-subnets if desired (e.g. for marginally increasing block rewards from included attestations).

Validator Client Fallback Optimisations

The beacon node fallback feature in the validator client has been refactored for greater responsiveness. Validator clients running with multiple beacon nodes will now switch more aggressively to the "healthiest" looking beacon node, where health status is determined by:

  • Sync distance (head distance from the current slot).
  • Execution layer (EL) health (whether the EL is online and not erroring).
  • Optimistic sync status (whether the EL is syncing).

The impact of this change should be less downtime during upgrades, and better resilience to faulty or broken beacon nodes.

Users running majority clients should be aware that in the case of a faulty majority client, the validator client may prefer the faulty chain due to it appearing healthier. The best defense against this problem is to run some (or all) validator clients without any connection to a beacon node running a majority CL client or majority EL client.

New Validator Manager Commands

The Lighthouse validator manager is the recommended way to manage validators from the CLI, without having to shut down the validator client.

In this release it has gained several new capabilities:

  • lighthouse vm import: now supports standard keystores generated by other tools like staking-deposit-cli.
  • lighthouse vm list: a new read-only command to list the validator keys that a VC has imported.
  • lighthouse vm delete: a new command to remove a key from a validator client, e.g. after exiting.

For details on these commands and available flags, see the docs: https://lighthouse-book.sigmaprime.io/validator-manager.html

Future releases will continue to expand the number of available commands, with the goal of eventually deprecating the previous lighthouse account-manager CLI.

Fetch Blobs Optimisation

This release supports the new engine_getBlobsV1 API for accelerating the import of blocks with blobs. If the API is supported by your execution node, Lighthouse will use it to load blobs from the mempool without waiting for them to arrive on gossip. Our testing indicates that this will help Ethereum scale to a higher blob count, but we need more data from real networks before committing to a blob count increase.
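In outline, the flow is: ask the EL's mempool for the blobs the block commits to, and fall back to gossip only for the ones it does not have. A minimal sketch (our own simplification; `el_get_blobs` and `await_gossip` stand in for the engine API call and the gossip wait):

```python
def fetch_blobs(versioned_hashes, el_get_blobs, await_gossip):
    """Fetch blobs for a block: mempool first, gossip for any misses.

    `el_get_blobs` models engine_getBlobsV1, which returns a list the
    same length as the request with None for any blob the EL lacks.
    """
    from_el = el_get_blobs(versioned_hashes)
    return [blob if blob is not None else await_gossip(h)
            for h, blob in zip(versioned_hashes, from_el)]
```

The win comes from the common case where the EL has already seen every blob transaction in its mempool, so the block can be imported without waiting on any blob gossip at all.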

There are several new Prometheus metrics to track the hit rate:

  • beacon_blobs_from_el_received_total
  • beacon_blobs_from_el_expected_total
  • beacon_blobs_from_el_hit_total

Logs at debug level also show the operation of this new feature (grep for fetch_blobs).

At the time of writing, the following execution clients support the API:

  • Reth v1.0.7 or newer.
  • Besu v24.9.1 or newer.
  • Geth v1.14.12 or newer.

Unsupported:

  • Nethermind v1.29.x (buggy, see below).
  • Erigon.

🐛 Known Issues 🐛

Nethermind v1.29.x engine_getBlobsV1 bug

Nethermind versions v1.29.0 and v1.29.1 include ...
