Add storage bounds for pallet staking and clean up deprecated non paged exposure storages #6445
base: master
Conversation
Co-authored-by: Kian Paimani <[email protected]>
@@ -400,7 +401,7 @@ impl pallet_staking::Config for Runtime {
 	type DisablingStrategy = pallet_staking::UpToLimitWithReEnablingDisablingStrategy;
 	type MaxInvulnerables = ConstU32<20>;
 	type MaxRewardPagesPerValidator = ConstU32<20>;
-	type MaxValidatorsCount = ConstU32<300>;
+	type MaxValidatorsCount = MaxAuthorities;
This is a huge jump, is there a reason why the MaxAuthorities is so big in the test runtime?
Oh, right! Then it's probably better to revert the change in that case, adding also a comment: 6c9806c
Overall looks good, but there are still some comments to resolve.
substrate/frame/staking/src/lib.rs
Outdated
pub fn from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
    let old_exposures = exposure.others.len();
    let others = WeakBoundedVec::try_from(exposure.others).unwrap_or_default();
    defensive_assert!(old_exposures == others.len(), "Too many exposures for a page");
This function is not used, we can make it better or remove it.
Suggested change:
-    pub fn from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
-        let old_exposures = exposure.others.len();
-        let others = WeakBoundedVec::try_from(exposure.others).unwrap_or_default();
-        defensive_assert!(old_exposures == others.len(), "Too many exposures for a page");
+    pub fn try_from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
+        let others = WeakBoundedVec::try_from(exposure.others).map_err(|_| ())?;
Changed in 3b7bc95
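The point of the suggestion above is to replace the silent `unwrap_or_default()` (which would drop all entries on overflow) with a fallible `try_from` that propagates the error. A minimal stand-alone sketch of the same pattern, using a toy bounded vector in place of substrate's `WeakBoundedVec` (the names `Bounded` and `try_from_clipped` here are illustrative, not the pallet's actual types):

```rust
// Toy bounded vector standing in for substrate's WeakBoundedVec.
struct Bounded<const MAX: usize>(Vec<u32>);

impl<const MAX: usize> TryFrom<Vec<u32>> for Bounded<MAX> {
    type Error = ();
    fn try_from(v: Vec<u32>) -> Result<Self, ()> {
        if v.len() <= MAX { Ok(Bounded(v)) } else { Err(()) }
    }
}

// Before: unwrap_or_default() silently replaced an oversized input with an
// empty vector, losing data. After: the error is propagated to the caller.
fn try_from_clipped<const MAX: usize>(others: Vec<u32>) -> Result<Bounded<MAX>, ()> {
    Bounded::try_from(others)
}

fn main() {
    assert!(try_from_clipped::<2>(vec![1, 2]).is_ok());
    assert!(try_from_clipped::<2>(vec![1, 2, 3]).is_err()); // no silent data loss
    println!("ok");
}
```

The caller now has to decide what an oversized page means, instead of the conversion quietly discarding exposures.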
-    claimed_pages.push(page);
+    // try to add page to claimed entries
+    if claimed_pages.try_push(page).is_err() {
+        defensive!("Limit reached for maximum number of pages.");
The proof should be more precise: in what circumstances can this limit be reached, and why is it impossible in practice?
What do you mean by "the proof"?
let mut eras_stakers_keys =
    v16::ErasStakers::<T>::iter_keys().map(|(k1, _k2)| k1).collect::<Vec<_>>();
eras_stakers_keys.dedup();
for k in eras_stakers_keys {
    let mut removal_result =
        v16::ErasStakers::<T>::clear_prefix(k, u32::max_value(), None);
    while let Some(next_cursor) = removal_result.maybe_cursor {
        removal_result = v16::ErasStakers::<T>::clear_prefix(
            k,
            u32::max_value(),
            Some(&next_cursor[..]),
        );
    }
}
This seems to try to remove all keys in one go; if we don't need a multi-block migration and we are sure that is ok, then we can do:
Suggested change:
-    let mut eras_stakers_keys =
-        v16::ErasStakers::<T>::iter_keys().map(|(k1, _k2)| k1).collect::<Vec<_>>();
-    eras_stakers_keys.dedup();
-    for k in eras_stakers_keys {
-        let mut removal_result =
-            v16::ErasStakers::<T>::clear_prefix(k, u32::max_value(), None);
-        while let Some(next_cursor) = removal_result.maybe_cursor {
-            removal_result = v16::ErasStakers::<T>::clear_prefix(
-                k,
-                u32::max_value(),
-                Some(&next_cursor[..]),
-            );
-        }
-    }
+    v16::ErasStakers::<T>::clear(u32::max_value(), None);
We can be sure for Polkadot or Kusama but how can we be sure for every other chain? Maybe there are some limits I'm not aware of
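For context on what the loop under discussion does mechanically: it keeps calling `clear_prefix` with the returned cursor until no cursor comes back, which already deletes everything within the same block, exactly what the one-shot `clear` suggestion does more directly. A self-contained sketch of the cursor mechanics, with a plain `BTreeMap` standing in for the storage map (the function `clear_chunk` is a stand-in, not substrate's API):

```rust
use std::collections::BTreeMap;

// Stand-in for clear_prefix: delete up to `limit` entries and return a cursor
// (the next remaining key) when more entries are left, mirroring the
// `maybe_cursor` field of substrate's MultiRemovalResults.
fn clear_chunk(map: &mut BTreeMap<u32, u32>, limit: usize) -> Option<u32> {
    let keys: Vec<u32> = map.keys().copied().take(limit + 1).collect();
    for k in keys.iter().take(limit) {
        map.remove(k);
    }
    keys.get(limit).copied() // Some(next key) iff entries remain
}

fn main() {
    let mut map: BTreeMap<u32, u32> = (0..10).map(|k| (k, k)).collect();
    // Looping until no cursor is returned removes the whole prefix in one go,
    // which is why chunking here buys nothing over a single clear().
    while clear_chunk(&mut map, 3).is_some() {}
    assert!(map.is_empty());
    println!("ok");
}
```

The cursor only helps if the loop is suspended between blocks (a multi-block migration); draining it immediately, as the original code does, is equivalent to the unlimited `clear`.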
let mut eras_stakers_clipped_keys = v16::ErasStakersClipped::<T>::iter_keys()
    .map(|(k1, _k2)| k1)
    .collect::<Vec<_>>();
eras_stakers_clipped_keys.dedup();
for k in eras_stakers_clipped_keys {
    let mut removal_result =
        v16::ErasStakersClipped::<T>::clear_prefix(k, u32::max_value(), None);
    while let Some(next_cursor) = removal_result.maybe_cursor {
        removal_result = v16::ErasStakersClipped::<T>::clear_prefix(
            k,
            u32::max_value(),
            Some(&next_cursor[..]),
        );
    }
}
Suggested change:
-    let mut eras_stakers_clipped_keys = v16::ErasStakersClipped::<T>::iter_keys()
-        .map(|(k1, _k2)| k1)
-        .collect::<Vec<_>>();
-    eras_stakers_clipped_keys.dedup();
-    for k in eras_stakers_clipped_keys {
-        let mut removal_result =
-            v16::ErasStakersClipped::<T>::clear_prefix(k, u32::max_value(), None);
-        while let Some(next_cursor) = removal_result.maybe_cursor {
-            removal_result = v16::ErasStakersClipped::<T>::clear_prefix(
-                k,
-                u32::max_value(),
-                Some(&next_cursor[..]),
-            );
-        }
-    }
+    v16::ErasStakersClipped::<T>::clear(u32::max_value(), None);
Same here, are we assuming that this is safe?
} else {
    log!(info, "v17 applied successfully.");
}
T::DbWeight::get().reads_writes(1, 1)
We can count the number of operations as we do them.
How can we do that?
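One common approach to "counting operations as we do them" is to accumulate read/write counters while the migration runs and build the returned weight from them, instead of a fixed `reads_writes(1, 1)`. A self-contained sketch under stated assumptions (the `WeightMeter` struct and `migrate` function here are illustrative stand-ins, not the pallet's actual code):

```rust
// Toy weight meter: accumulate reads/writes as the migration performs them.
#[derive(Default)]
struct WeightMeter {
    reads: u64,
    writes: u64,
}

impl WeightMeter {
    fn read(&mut self, n: u64) { self.reads += n; }
    fn write(&mut self, n: u64) { self.writes += n; }
}

// Hypothetical migration over some entries: every storage access bumps the
// meter at the point where the access happens.
fn migrate(items: &[u32]) -> WeightMeter {
    let mut meter = WeightMeter::default();
    meter.read(1); // e.g. reading the on-chain storage version
    for _item in items {
        // each migrated entry costs one read and one write
        meter.read(1);
        meter.write(1);
    }
    meter.write(1); // writing the new storage version
    meter
}

fn main() {
    let m = migrate(&[10, 20, 30]);
    assert_eq!((m.reads, m.writes), (4, 4));
    println!("ok");
}
```

At the end the migration would return something like `T::DbWeight::get().reads_writes(meter.reads, meter.writes)`, so the reported weight tracks the actual number of storage operations performed.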
Co-authored-by: Guillaume Thiolliere <[email protected]>
This is part of #6289 and necessary for the Asset Hub migration.
Building on the observations and suggestions from #255.
Changes
- Add `MaxInvulnerables` to bound `Invulnerables`: Vec -> BoundedVec (westend).
- Add `MaxDisabledValidators` to bound `DisabledValidators`: Vec -> BoundedVec (bounded by `MaxValidatorsCount` according to the current disabling strategy).
- Remove deprecated `ErasStakers` and `ErasStakersClipped` (see "Tracker issue for cleaning up old non-paged exposure logic in staking pallet" #433).
- Use `MaxExposurePageSize` to bound the `ErasStakersPaged` mapping to exposure pages: each `ExposurePage.others` Vec is turned into a `WeakBoundedVec` to allow easy and quick changes to this bound.
- Add `MaxBondedEras` to bound `BondedEras`: Vec -> BoundedVec. Set to `BondingDuration::get() + 1` everywhere to include both endpoints of the time interval [`current_era - BondingDuration::get()`, `current_era`]. Notice that this was done manually in every test and runtime, so I wonder if there is a better way to ensure that `MaxBondedEras::get() == BondingDuration::get() + 1` everywhere.
- Add `MaxRewardPagesPerValidator` to bound `ClaimedRewards`: the Vec of pages becomes a `WeakBoundedVec` to allow easy and quick changes to this parameter.
- Remove the `MaxValidatorsCount` optional storage item and add a `MaxValidatorsCount` mandatory config parameter.
- Bound `EraRewardPoints.individual`: BTreeMap -> BoundedBTreeMap.

TO DO
Slashing storage items will be bounded in another PR:
- `UnappliedSlashes`
- `SlashingSpans`
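As a quick sanity check on the `MaxBondedEras = BondingDuration + 1` choice described in the changes above: an inclusive interval contains one more element than its length, which is where the `+ 1` comes from. A tiny stand-alone illustration (the concrete values are made up, not taken from any runtime):

```rust
fn main() {
    let bonding_duration: u32 = 28; // illustrative value only
    let current_era: u32 = 100;
    // Inclusive interval [current_era - bonding_duration, current_era].
    let eras: Vec<u32> = (current_era - bonding_duration..=current_era).collect();
    // Both endpoints are included, hence the "+ 1" in MaxBondedEras.
    assert_eq!(eras.len() as u32, bonding_duration + 1);
    println!("ok");
}
```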