Add storage bounds for pallet staking and clean up deprecated non-paged exposure storages #6445

Open · wants to merge 148 commits into master
Conversation

@re-gius (Contributor) commented Nov 11, 2024

This is part of #6289 and necessary for the Asset Hub migration.

It builds on the observations and suggestions from #255.

Changes

  • Add MaxInvulnerables to bound Invulnerables: Vec -> BoundedVec (see the sketch after this list).
    • Set to the constant 20 in the pallet (must be >= 17 for backward compatibility with the Westend runtime).
  • Add MaxDisabledValidators to bound DisabledValidators: Vec -> BoundedVec.
    • Set to the constant 100 in the pallet (it should be <= 1/3 * MaxValidatorsCount according to the current disabling strategy).
  • Remove ErasStakers and ErasStakersClipped (see #433, the tracker issue for cleaning up old non-paged exposure logic in the staking pallet).
  • Use MaxExposurePageSize to bound the exposure pages in the ErasStakersPaged mapping: each ExposurePage.others Vec is turned into a WeakBoundedVec to allow easy and quick changes to this bound.
  • Add MaxBondedEras to bound BondedEras: Vec -> BoundedVec.
    • Set to BondingDuration::get() + 1 everywhere to include both endpoints of the interval [current_era - BondingDuration::get(), current_era]. Note that this was done manually in every test and runtime, so I wonder if there is a better way to ensure that MaxBondedEras::get() == BondingDuration::get() + 1 everywhere.
  • Add MaxRewardPagesPerValidator to bound the ClaimedRewards Vec of pages -> WeakBoundedVec.
    • Set to the constant 20 in the pallet. The vector of pages is now a WeakBoundedVec to allow easy and quick changes to this parameter.
  • Replace the optional MaxValidatorsCount storage item with a mandatory MaxValidatorsCount config parameter.
    • Use it to bound EraRewardPoints.individual: BTreeMap -> BoundedBTreeMap.
    • Set to a dynamic parameter in the Westend runtime so that changing it does not require a migration.
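
For illustration, a minimal sketch of what one of these bounds looks like, assuming the usual FRAME storage syntax (this is not the exact PR diff; the snippet belongs inside the pallet's #[frame_support::pallet] module):

```rust
#[pallet::config]
pub trait Config: frame_system::Config {
	/// Maximum number of invulnerable validators.
	#[pallet::constant]
	type MaxInvulnerables: Get<u32>;
}

// Before: unbounded.
// pub type Invulnerables<T: Config> = StorageValue<_, Vec<T::AccountId>, ValueQuery>;

// After: bounded by the new config parameter.
#[pallet::storage]
pub type Invulnerables<T: Config> =
	StorageValue<_, BoundedVec<T::AccountId, T::MaxInvulnerables>, ValueQuery>;
```

As for enforcing MaxBondedEras::get() == BondingDuration::get() + 1 everywhere, one option (an assumption on my part, not something this PR does) would be to assert the equality in the pallet's integrity_test hook so every runtime is checked when its tests run.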

TO DO
Slashing storage items will be bounded in another PR.

  • UnappliedSlashes
  • SlashingSpans

@@ -400,7 +401,7 @@ impl pallet_staking::Config for Runtime {
 	type DisablingStrategy = pallet_staking::UpToLimitWithReEnablingDisablingStrategy;
 	type MaxInvulnerables = ConstU32<20>;
 	type MaxRewardPagesPerValidator = ConstU32<20>;
-	type MaxValidatorsCount = ConstU32<300>;
+	type MaxValidatorsCount = MaxAuthorities;
Contributor
This is a huge jump; is there a reason why MaxAuthorities is so big in the test runtime?

Contributor Author
Oh, right! Then it's probably better to revert the change in that case, also adding a comment: 6c9806c

@gui1117 (Contributor) left a comment
Overall looks good, but there are still some comments to resolve.

Comment on lines 763 to 766
pub fn from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
let old_exposures = exposure.others.len();
let others = WeakBoundedVec::try_from(exposure.others).unwrap_or_default();
defensive_assert!(old_exposures == others.len(), "Too many exposures for a page");
Contributor

This function is not used; we can make it better or remove it.

Suggested change:
-	pub fn from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
-		let old_exposures = exposure.others.len();
-		let others = WeakBoundedVec::try_from(exposure.others).unwrap_or_default();
-		defensive_assert!(old_exposures == others.len(), "Too many exposures for a page");
+	pub fn try_from_clipped(exposure: Exposure<AccountId, Balance>) -> Result<Self, ()> {
+		let others = WeakBoundedVec::try_from(exposure.others).map_err(|_| ())?;
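
(For context, and not from the thread: unwrap_or_default() silently yields an empty vec whenever the input exceeds the bound, while map_err(|_| ())? surfaces the overflow to the caller. A self-contained sketch of the two behaviours, using sp_runtime's WeakBoundedVec directly:)

```rust
use sp_core::ConstU32;
use sp_runtime::WeakBoundedVec;

fn demo() {
	let too_many: Vec<u8> = (0..10).collect();

	// `unwrap_or_default()` swallows the overflow and returns an *empty* vec:
	let silent: WeakBoundedVec<u8, ConstU32<4>> =
		WeakBoundedVec::try_from(too_many.clone()).unwrap_or_default();
	assert!(silent.is_empty());

	// `try_from(..)` propagates the overflow to the caller instead:
	let checked: Result<WeakBoundedVec<u8, ConstU32<4>>, _> =
		WeakBoundedVec::try_from(too_many);
	assert!(checked.is_err());
}
```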

Contributor Author

Changed in 3b7bc95

-		claimed_pages.push(page);
+		// try to add page to claimed entries
+		if claimed_pages.try_push(page).is_err() {
+			defensive!("Limit reached for maximum number of pages.");
Contributor

The proof should be more precise: in what circumstances can this limit be reached, and why is it impossible in practice?

Contributor Author

What do you mean by "the proof"?
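
(An aside, not an answer given in the thread: "the proof" presumably means the safety comment that conventionally accompanies a defensive! branch, justifying why it cannot be hit in practice. A sketch of the kind of justification being requested, with the invariant wording purely illustrative:)

```rust
// Defensive only: entries are appended to `claimed_pages` at most once per page,
// and the number of pages per (era, validator) is bounded by
// `MaxRewardPagesPerValidator` when pages are created, so `try_push` cannot
// fail here in practice.
if claimed_pages.try_push(page).is_err() {
	defensive!("Limit reached for maximum number of pages.");
}
```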

Comment on lines +75 to +88
let mut eras_stakers_keys =
v16::ErasStakers::<T>::iter_keys().map(|(k1, _k2)| k1).collect::<Vec<_>>();
eras_stakers_keys.dedup();
for k in eras_stakers_keys {
let mut removal_result =
v16::ErasStakers::<T>::clear_prefix(k, u32::max_value(), None);
while let Some(next_cursor) = removal_result.maybe_cursor {
removal_result = v16::ErasStakers::<T>::clear_prefix(
k,
u32::max_value(),
Some(&next_cursor[..]),
);
}
}
Contributor
This seems to remove all keys in one go. If we don't need a multi-block migration and we are sure that is OK, then we can do:

Suggested change:
-	let mut eras_stakers_keys =
-		v16::ErasStakers::<T>::iter_keys().map(|(k1, _k2)| k1).collect::<Vec<_>>();
-	eras_stakers_keys.dedup();
-	for k in eras_stakers_keys {
-		let mut removal_result =
-			v16::ErasStakers::<T>::clear_prefix(k, u32::max_value(), None);
-		while let Some(next_cursor) = removal_result.maybe_cursor {
-			removal_result = v16::ErasStakers::<T>::clear_prefix(
-				k,
-				u32::max_value(),
-				Some(&next_cursor[..]),
-			);
-		}
-	}
+	v16::ErasStakers::<T>::clear(u32::max_value(), None);

Contributor Author

We can be sure for Polkadot or Kusama, but how can we be sure for every other chain? Maybe there are some limits I'm not aware of.
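
(A possible middle ground, sketched as an assumption rather than something proposed in the thread: cap removals per call and persist the returned cursor, so chains with unknown amounts of data stay within block limits. REMOVAL_LIMIT and MigrationCursor are hypothetical names.)

```rust
// Sketch: bounded removal that can resume across blocks via a stored cursor.
const REMOVAL_LIMIT: u32 = 1_000; // illustrative per-call cap

let cursor = MigrationCursor::<T>::get(); // hypothetical StorageValue holding an Option<Vec<u8>>
let result = v16::ErasStakers::<T>::clear(REMOVAL_LIMIT, cursor.as_deref());
match result.maybe_cursor {
	Some(next) => MigrationCursor::<T>::put(next), // more entries remain; resume later
	None => MigrationCursor::<T>::kill(),          // everything removed
}
```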

Comment on lines +90 to +104
let mut eras_stakers_clipped_keys = v16::ErasStakersClipped::<T>::iter_keys()
.map(|(k1, _k2)| k1)
.collect::<Vec<_>>();
eras_stakers_clipped_keys.dedup();
for k in eras_stakers_clipped_keys {
let mut removal_result =
v16::ErasStakersClipped::<T>::clear_prefix(k, u32::max_value(), None);
while let Some(next_cursor) = removal_result.maybe_cursor {
removal_result = v16::ErasStakersClipped::<T>::clear_prefix(
k,
u32::max_value(),
Some(&next_cursor[..]),
);
}
}
Contributor

Suggested change:
-	let mut eras_stakers_clipped_keys = v16::ErasStakersClipped::<T>::iter_keys()
-		.map(|(k1, _k2)| k1)
-		.collect::<Vec<_>>();
-	eras_stakers_clipped_keys.dedup();
-	for k in eras_stakers_clipped_keys {
-		let mut removal_result =
-			v16::ErasStakersClipped::<T>::clear_prefix(k, u32::max_value(), None);
-		while let Some(next_cursor) = removal_result.maybe_cursor {
-			removal_result = v16::ErasStakersClipped::<T>::clear_prefix(
-				k,
-				u32::max_value(),
-				Some(&next_cursor[..]),
-			);
-		}
-	}
+	v16::ErasStakersClipped::<T>::clear(u32::max_value(), None);

Contributor Author

Same here: are we assuming that this is safe?

} else {
log!(info, "v17 applied successfully.");
}
T::DbWeight::get().reads_writes(1, 1)
Contributor

We can count the number of operations as we do them.

Contributor Author

How can we do that?
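
(Not an answer from the thread, but the usual FRAME pattern, sketched with an illustrative storage name: accumulate the weight of each storage operation as it happens and return the total, instead of a fixed reads_writes(1, 1).)

```rust
// Sketch: count operations while performing them.
let mut weight = T::DbWeight::get().reads(1); // the storage-version read

for (key, value) in v16::SomeOldStorage::<T>::drain() { // illustrative storage item
	// ... transform `value` and write it to the new storage ...
	// `drain` costs one read and one write (removal); the new entry costs one write.
	weight.saturating_accrue(T::DbWeight::get().reads_writes(1, 2));
}

weight
```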

@paritytech-workflow-stopper

All GitHub workflows were cancelled due to the failure of one of the required jobs.
Failed workflow url: https://github.com/paritytech/polkadot-sdk/actions/runs/12891905705
Failed job name: fmt

Labels: T1-FRAME (core FRAME, the framework) · T2-pallets (related to a particular pallet)
Projects: In review · Scheduled
6 participants