Feature: Add slot skip rates to ValidatorHistoryAccount #5
Comments
Hey @buffalu, I'd like to take this up. Can you please assign me?
@buffalu the ValidatorHistoryEntry has a size limit of 128 bytes. Looking at the leader schedule for most validators, storing it would take around 6-8 KB, which is a lot. A potential solution is to use a separate leader schedule account and store it there. Also, why do we need a slot-to-pubkey mapping? If we know the leader schedule and we have the SlotHistory sysvar, we can check whether any slot in the schedule is missing from the sysvar.
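As a minimal off-chain sketch of that check (the function name is hypothetical, and the SlotHistory sysvar is modeled here as a plain `HashSet` of confirmed slots): any slot the validator was scheduled to lead that is absent from the history was skipped.

```rust
use std::collections::HashSet;

// Hypothetical sketch: count the scheduled leader slots that do not
// appear in the recorded slot history. On-chain, `slot_history` would
// come from the SlotHistory sysvar rather than a HashSet.
fn count_skipped(leader_slots: &[u64], slot_history: &HashSet<u64>) -> u64 {
    leader_slots
        .iter()
        .filter(|slot| !slot_history.contains(slot))
        .count() as u64
}

fn main() {
    let leader_slots = [10, 14, 18, 22];
    // Slots 10 and 18 were produced; 14 and 22 were skipped.
    let history: HashSet<u64> = [10, 18].into_iter().collect();
    let skipped = count_skipped(&leader_slots, &history);
    println!("skipped {} of {}", skipped, leader_slots.len());
}
```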
This sounds reasonable to me. How are you thinking about the layout for that account? Is it HashMap<Pubkey, Vec> or something else?
I was thinking more along the lines of having an account per epoch per validator. For epoch N you create a LeaderSchedule account whose key is stored in the ValidatorHistoryEntry, and which contains the epoch number as well:

```rust
#[account(zero_copy)]
pub struct LeaderSchedule {
    pub epoch: u64,
    pub vote_identity: Pubkey,
    // The highest-stake validator currently has a ~14k-slot schedule,
    // so 20,000 entries gives a good margin.
    pub leader_schedule: [u64; 20000],
}

pub struct ValidatorHistoryEntry {
    ...
    pub leader_schedule: Pubkey,
}
```

The per-epoch cost would be ~0.0032 SOL.
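For reference, a quick size estimate for that per-validator layout (assuming an 8-byte Anchor account discriminator, which is an assumption about the deployment rather than something stated above):

```rust
fn main() {
    // Hypothetical size estimate: 8-byte Anchor discriminator
    // + epoch (u64) + vote_identity (Pubkey, 32 bytes)
    // + 20,000 u64 leader-schedule slots.
    let size: u64 = 8 + 8 + 32 + 20_000 * 8;
    println!("{size}");
}
```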
Each validator with a ValidatorHistory account is given an index, which we can use to save a lot of space when storing the leader schedule, and which also lets us use one global account rather than one per validator. It also removes the need to store 32 bytes on the validator history account. We can invert the ValidatorHistory -> LeaderSchedule mapping to be LeaderSchedule -> ValidatorHistory like this:

```rust
#[account(zero_copy)]
pub struct LeaderSchedule {
    pub epoch: u64,
    pub slot_offset: u64,
    // Each value in this array is the index of the leader
    // for the slot at i + slot_offset.
    pub leader_schedule: [u32; 432000],
}
```

Size: 432,000 * 4 bytes + 8 + 8 = 1,728,016 bytes, less than the 10 MiB account limit.

Then, in an instruction for each validator, you can loop through SlotHistory and check all slots where that validator was the leader. You'd need to upload the LeaderSchedule across multiple transactions because it's so big, and also initialize it with multiple instructions; check ReallocValidatorHistoryAccount for an example. You could also maybe reuse the same LeaderSchedule account each epoch, but that could make the timing of updates each epoch more complex.
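The slot-to-index lookup on that global account can be sketched like this (field names follow the struct above; the values in `main` are made up for illustration, and `Vec<u32>` stands in for the fixed `[u32; 432000]` array so the sketch stays small and runnable off-chain):

```rust
// Off-chain model of the global leader-schedule account described above.
struct LeaderSchedule {
    epoch: u64,
    slot_offset: u64,
    // On-chain this is [u32; 432000]; a Vec keeps the sketch small.
    leader_schedule: Vec<u32>,
}

impl LeaderSchedule {
    /// Map an absolute slot to the ValidatorHistory index of its leader,
    /// returning None for slots outside this epoch's range.
    fn leader_index(&self, slot: u64) -> Option<u32> {
        let i = slot.checked_sub(self.slot_offset)? as usize;
        self.leader_schedule.get(i).copied()
    }
}

fn main() {
    // Hypothetical epoch 600 starting at slot 600 * 432,000.
    let sched = LeaderSchedule {
        epoch: 600,
        slot_offset: 259_200_000,
        leader_schedule: vec![7, 7, 7, 7, 12, 12, 12, 12],
    };
    assert_eq!(sched.leader_index(259_200_004), Some(12));
    // Size arithmetic from the comment above:
    assert_eq!(432_000u64 * 4 + 8 + 8, 1_728_016);
}
```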
Is your feature request related to a problem? Please describe.
The ValidatorHistoryAccount should track the block skip rate of every validator. It should be updated in semi-realtime.
Describe the solution you'd like
There are a few pieces of information that need to get stored on-chain to do this.
This will be a tricky one, looking forward to seeing the implementation :)