Inconsistent fork-choice: Validator sometimes chooses own secondary block over others primary #6013
Comments
Could you please provide some logs from when this is happening? Best with
I checked the code and I think I know what the problem is. Block production and block import from the network run in parallel. The best weight is determined by loading it from the backend, but the backend is only updated once a block has been imported. This means that when a block imported from the network and the locally built block are processed in parallel, we can run into the issue you are describing. To solve this, we need to store the block weight information in shared state.
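A minimal sketch of the suggested fix, under stated assumptions: the names `SharedWeights`, the hash/weight types, and the API below are hypothetical illustrations, not the actual Substrate types. The idea is that every imported block's weight is published to state shared between the import pipeline and the block-production path, so block production never reads a stale value from the backend.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Hypothetical stand-ins for the real Substrate types.
type BlockHash = u64;
type BlockWeight = u32;

/// Shared, in-memory view of block weights, updated as soon as a block
/// is imported, independently of when the backend write lands.
#[derive(Clone, Default)]
struct SharedWeights(Arc<Mutex<HashMap<BlockHash, BlockWeight>>>);

impl SharedWeights {
    /// Record a block's weight at import time.
    fn insert(&self, hash: BlockHash, weight: BlockWeight) {
        self.0.lock().unwrap().insert(hash, weight);
    }

    /// Lookup that reflects all imports so far; a miss would mean
    /// falling back to the backend.
    fn get(&self, hash: &BlockHash) -> Option<BlockWeight> {
        self.0.lock().unwrap().get(hash).copied()
    }
}

fn main() {
    let weights = SharedWeights::default();

    // A block arrives from the network and is imported: its weight is
    // visible in the shared state immediately.
    weights.insert(0xA1, 3);

    // The block-production path consults the shared state first and
    // sees the up-to-date weight, even if the backend lags behind.
    assert_eq!(weights.get(&0xA1), Some(3));
    // A block that was never imported yields a miss.
    assert_eq!(weights.get(&0xB2), None);
    println!("shared-weight sketch ok");
}
```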
I'd like to try tackling this; please assign it to me.
The issue should be in this code: polkadot-sdk/substrate/client/consensus/babe/src/lib.rs, lines 1642 to 1657 at e9393a9.
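For context, the decision in that region compares cumulative chain weights (in BABE, roughly the number of primary-slot blocks on the chain). The sketch below is a hedged paraphrase of that comparison, not the verbatim Substrate code; the function name and exact tie-breaking rule are assumptions for illustration.

```rust
type BlockWeight = u32;
type BlockNumber = u64;

/// Returns true if the newly imported block should become the new best.
/// Paraphrase of the BABE fork-choice shape: a strictly heavier chain
/// wins; on a weight tie, prefer the longer chain.
fn should_be_new_best(
    total_weight: BlockWeight,     // weight of the chain ending at the new block
    last_best_weight: BlockWeight, // weight of the current best chain
    number: BlockNumber,           // height of the new block
    last_best_number: BlockNumber, // height of the current best block
) -> bool {
    if total_weight > last_best_weight {
        true
    } else if total_weight == last_best_weight {
        number > last_best_number
    } else {
        false
    }
}

fn main() {
    // Two competing blocks at the same height: the chain containing the
    // primary block is heavier, so it should win.
    assert!(should_be_new_best(4, 3, 100, 100));
    // But if `last_best_weight` was read from a backend that has not yet
    // caught up with a parallel import (the race described above), the
    // values can tie at equal height and the primary block is rejected.
    assert!(!should_be_new_best(4, 4, 100, 100));
    println!("fork-choice sketch ok");
}
```

This is why a stale backend read matters: the comparison itself is fine, but feeding it an out-of-date `last_best_weight` can make the node keep its own secondary block as best, which matches the behaviour reported here.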
Is there an existing issue?
Experiencing problems? Have you tried our Stack Exchange first?
Description of bug
I have observed an inconsistency in the fork-choice rule on our Substrate-based chain. As per the protocol, when there are two blocks for the same slot—one primary and one secondary—the validator should consider the primary block as the best block. However, this is not consistently followed.
Steps to reproduce
You can reproduce this with a substrate-node and a local chain set up with two validators. Monitor the fork-choice rule by observing which block is selected as best when validators must choose between a primary and a secondary block for the same slot. With a two-validator setup, the inconsistency typically appears within 4 or 5 epochs.