How to process requests evenly with RateLimiterQueue? #112
Also - this possibly should be a separate issue, but if I add items to the queue very quickly, it waits the correct amount of time before releasing the first item, but then immediately releases all the other items afterwards, when they should wait for the first item to be released plus at least the minimum delay after that.
I've tried execEvenly with RateLimiterMemory and also encountered the issue where it executes all requests after the first one simultaneously.
Hi @sam-lord, thanks for reporting this issue. First of all, you can avoid
If you use the above config with RateLimiterQueue, it should work as you expect: 1 request will be processed every 6 minutes, and all other requests are queued. It is better than Secondly,
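For reference, the "1 request will be processed every 6 minutes" behaviour mentioned above corresponds to a config along these lines (a sketch only; the exact config block was not preserved in this thread):

```javascript
// Hypothetical config matching "1 request every 6 minutes":
// a single point per 360-second window.
const opts = {
  points: 1,     // one request allowed...
  duration: 360, // ...per 360 seconds (6 minutes)
};

// Sanity check: this works out to 10 requests per hour.
const requestsPerHour = (3600 / opts.duration) * opts.points;
console.log(requestsPerHour); // 10
```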
Thanks for the response. Unfortunately we need to use points > 0 -- my use-case is a certificate management system which issues new certificates with LetsEncrypt, but there is a renewal process happening outside my control. When the renewals happen I am using the penalty method to remove a point from the rate limiter. I have been looking at the way execEvenly is calculated, and I believe it could be fixed with something like this:

```javascript
const usedPoints = this.points - res.remainingPoints;
const delay = Math.max(1, usedPoints) * this.execEvenlyMinDelayMs;
setTimeout(resolve, delay, res);
```

As the number of used points increases, the delay before resolving increases, lining up to the correct interval. Not sure I have understood the current code though, the algorithm used at the moment has gone over my head a little.
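Pulled out into a standalone function, the suggested calculation would look roughly like this (a sketch of the proposal above, not the library's actual implementation; the parameter names are assumptions):

```javascript
// Sketch of the proposed execEvenly delay: the delay before resolving grows
// with the number of points already used, spacing requests out evenly.
function proposedDelay(points, remainingPoints, execEvenlyMinDelayMs) {
  const usedPoints = points - remainingPoints;
  return Math.max(1, usedPoints) * execEvenlyMinDelayMs;
}

// With 10 points and a 1000 ms minimum delay:
console.log(proposedDelay(10, 9, 1000)); // 1000 — one point used, one interval of delay
console.log(proposedDelay(10, 5, 1000)); // 5000 — five points used, five intervals
```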
@animir I've got a fix that works for me - you might be interested. It's a new RateLimiter type that wraps RateLimiterUnion, providing a full RateLimiterAbstract class that uses two limiters under the hood: one main limiter, which uses the exact config provided, and one "interval" limiter, which uses a single point with a duration of `duration / points`:

```javascript
const {RateLimiterUnion, RateLimiterMySQL, RateLimiterMemory} = require('rate-limiter-flexible');
const RateLimiterAbstract = require('rate-limiter-flexible/lib/RateLimiterAbstract');

module.exports = class RateLimiterCustom extends RateLimiterAbstract {
    constructor(options = {}, callback) {
        super(options);
        const limiterType = (options.storeClient)
            ? RateLimiterMySQL
            : RateLimiterMemory;
        const mainLimiter = new limiterType(Object.assign({}, options, {
            keyPrefix: 'main'
        }), callback);
        const intervalLimiter = new limiterType(Object.assign({}, options, {
            keyPrefix: 'interval',
            points: 1,
            duration: Math.ceil(this.duration / this.points)
        }));
        this._unionLimiter = new RateLimiterUnion(mainLimiter, intervalLimiter);
        this._mainLimiter = mainLimiter;
    }

    async consume(key, points = 1, options = {}) {
        // Consume from the union limiter and throw on a single failure;
        // return the result from the main limiter where possible
        try {
            const {main, interval} = await this._unionLimiter.consume(key, points, options);
            main.msBeforeNext = Math.max(main.msBeforeNext, interval.msBeforeNext);
            return main;
        } catch (result) {
            if (result instanceof Error) {
                throw result;
            }
            const {main, interval} = result;
            if (main && interval) {
                main.msBeforeNext = Math.max(main.msBeforeNext, interval.msBeforeNext);
                throw main;
            } else if (main) {
                throw main;
            } else {
                throw interval;
            }
        }
    }

    penalty(key, points = 1) {
        return this._mainLimiter.penalty(key, points);
    }

    reward(key, points = 1) {
        return this._mainLimiter.reward(key, points);
    }

    get(key) {
        return this._mainLimiter.get(key);
    }

    set(key, points, secDuration) {
        return this._mainLimiter.set(key, points, secDuration);
    }

    block(key, secDuration) {
        return this._mainLimiter.block(key, secDuration);
    }

    delete(key) {
        return this._mainLimiter.delete(key);
    }
};
```
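The key line in that constructor is how the "interval" limiter's duration is derived from the main config. With an assumed main config of 10 points per hour:

```javascript
// How the "interval" limiter's window is derived (as in the constructor above),
// using an assumed main config of 10 points per hour.
const options = { points: 10, duration: 3600 };
const intervalDuration = Math.ceil(options.duration / options.points);
console.log(intervalDuration); // 360 — at most one request every 6 minutes
```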
@sam-lord It is an interesting solution 👍 Note, the Union limiter consumes from all limiters in the union on every request. I'll describe how I understand your case; please correct me if I get it wrong.
Ahh - thanks for mentioning that. I might reward limiters that didn't reject in my catch block to fix the number of points. I think you've described the use case perfectly, yes.
@sam-lord Ok, you can try to remove Union from your Custom limiter. Consume points in a sequence, not in parallel. Do you mind if I rename this issue? The current title may not reflect the nature of the problem.
Not at all, please go ahead. Thanks very much for the advice, I'll drop another note if I get all this working. You have been an amazing help.
This is all working perfectly. The code is really simple now:

```javascript
const {RateLimiterMySQL, RateLimiterMemory} = require('rate-limiter-flexible');
const RateLimiterAbstract = require('rate-limiter-flexible/lib/RateLimiterAbstract');

module.exports = class RateLimiterCustom extends RateLimiterAbstract {
    constructor(options = {}, callback) {
        super(options);
        const limiterType = (options.storeClient)
            ? RateLimiterMySQL
            : RateLimiterMemory;
        const mainLimiter = new limiterType(Object.assign({}, options, {
            keyPrefix: 'main'
        }), callback);
        const intervalLimiter = new limiterType(Object.assign({}, options, {
            keyPrefix: 'interval',
            points: 1,
            duration: Math.ceil(this.duration / this.points)
        }));
        this._mainLimiter = mainLimiter;
        this._intervalLimiter = intervalLimiter;
    }

    async consume(key, points = 1, options = {}) {
        // Consume sequentially: if the interval limiter rejects,
        // no points are taken from the main limiter
        await this._intervalLimiter.consume(key, points, options);
        return await this._mainLimiter.consume(key, points, options);
    }

    penalty(key, points = 1) {
        return this._mainLimiter.penalty(key, points);
    }

    reward(key, points = 1) {
        return this._mainLimiter.reward(key, points);
    }

    get(key) {
        return this._mainLimiter.get(key);
    }

    set(key, points, secDuration) {
        return this._mainLimiter.set(key, points, secDuration);
    }

    block(key, secDuration) {
        return this._mainLimiter.block(key, secDuration);
    }

    delete(key) {
        return this._mainLimiter.delete(key);
    }
};
```

Thanks once again for the help with this - hopefully this ends up being useful to others with this type of use-case.
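The sequential-consume pattern in `consume()` above can be illustrated without the library, using a minimal fixed-window stand-in (a sketch only; `TinyLimiter` is hypothetical and much simpler than the real `RateLimiterMemory`):

```javascript
// Minimal fixed-window limiter stand-in to illustrate the sequential
// consume pattern above (not the real RateLimiterMemory).
class TinyLimiter {
  constructor(points, durationMs, now) {
    this.points = points;
    this.durationMs = durationMs;
    this.now = now;           // injected clock for deterministic testing
    this.remaining = points;
    this.windowStart = now();
  }
  consume() {
    if (this.now() - this.windowStart >= this.durationMs) {
      this.remaining = this.points;   // new window: refill points
      this.windowStart = this.now();
    }
    if (this.remaining <= 0) throw new Error('rejected');
    this.remaining -= 1;
  }
}

// Two limiters consumed in sequence, as in RateLimiterCustom.consume:
let t = 0;
const clock = () => t;
const interval = new TinyLimiter(1, 360000, clock); // 1 request / 6 min
const main = new TinyLimiter(10, 3600000, clock);   // 10 requests / hour

function consumeSequentially() {
  interval.consume(); // rejects first, leaving main's points untouched
  main.consume();
}

consumeSequentially();     // first request passes
try {
  consumeSequentially();   // immediate second request: interval limiter rejects
} catch (e) {
  console.log(e.message);  // "rejected" — main still has 9 points left
}
t += 360000;               // 6 minutes later
consumeSequentially();     // passes again
console.log(main.remaining); // 8
```

Because the interval limiter is consumed first and throws on rejection, a burst of requests cannot drain the main limiter's points, which is what makes the `penalty`/`reward` bookkeeping on the main limiter stay accurate.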
@sam-lord You're welcome. It looks nice. There is a separate issue for that, and I created a knowledge base article with a slightly changed example: Consume points evenly.
I'm sending requests to a heavily rate-limited service but want them all to get through eventually, so I have a RateLimiterQueue with the default maximum size, wrapping a RateLimiterMySQL which is using the Sequelize backend for MySQL.
The issue is that whilst we have configured the service to have 10 points per hour, only 5 requests are being executed per hour.
I have the limiter set with the following config (as well as database specific config):
And the queue wrapping the limiter has no additional configuration. Is this a known issue?
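For context, the "10 points per hour" setup described above would correspond to something like the following (a hypothetical reconstruction; the actual config block was not preserved in this thread):

```javascript
// Hypothetical reconstruction of the reported setup: 10 points per hour.
const limiterOptions = {
  points: 10,     // 10 requests allowed...
  duration: 3600, // ...per hour (in seconds)
};
```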
EDIT: use the example from the Consume points evenly article. At the time of this edit, the execEvenly option doesn't work with RateLimiterQueue.