Consider supporting FIFO + message keys #294
Hi @fggrtech, thank you. I'd like to see features like this make it into the project too. I'll have to think a bit about how this would impact the existing queue APIs and tables. I don't think we'd want to break the simple queue structure that currently exists, and ideally we could find a way to add this functionality without adding overhead or requiring the use of the new keys. Worst case, we add a new API specifically for FIFO.
I'm looking for exactly this functionality. Does the group ID need to be a UUID, or can it be arbitrary text?
I think that is one of the difficult questions about this feature request: a message id means different things depending on the context it's used in. My personal case is a UUID, but I would assume it should be more generic, like a Kafka message key: a bytes or string field.
Hey @ChuckHend! We've run into this too. I like the approach of not breaking existing queues, so I'd like to suggest a new API specifically for FIFO. Agreeing with the above that the group ID should be arbitrary text rather than a UUID. Up for a contribution? CC: @gruebel
Hi. That's me (and not PGMQ), but I feel like we should keep the same interface for send. The main reason is that we'd like to support exchanges in the future, and exchanges should be able to push to all queue types. Currently, all queue types have the same interface to push, so there's no problem doing that. A way to achieve that would be:
If we really wanted, we could also keep the same interface for read, but this would imply an additional check (for queue type) on every read, which is a bit hard to write in a performant way.
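To make that trade-off concrete, here is a rough sketch (not existing pgmq API) of the kind of per-read dispatch being described. The function name pgmq_read_any and the is_fifo flag on pgmq.meta are made up for illustration, and read_fifo is the function proposed later in this thread:

-- Sketch only: one unified read that checks the queue type and dispatches.
-- The metadata lookup on every call is the per-read overhead mentioned above.
create or replace function pgmq_read_any(queue_name text, vt integer, qty integer)
returns setof pgmq.message_record
language plpgsql
as $$
begin
    if exists (select 1
               from pgmq.meta m
               where m.queue_name = pgmq_read_any.queue_name
                 and m.is_fifo) then              -- hypothetical is_fifo column
        return query select * from pgmq.read_fifo(queue_name, vt, qty);
    else
        return query select * from pgmq.read(queue_name, vt, qty);
    end if;
end;
$$;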
@v0idpwn, @nimrodkor - yes, this would be a subsequent read to We already have
Proposed API:

-- We create the queue in the same way. It's a regular queue.
select pgmq.create('myqueue');
-- When sending messages, we can add a special header
select pgmq.send('myqueue', '"hello"'::jsonb, '{"x-pgmq-fifo": "ordering-key"}'::jsonb);
select pgmq.send('myqueue', '"fifo"'::jsonb, '{"x-pgmq-fifo": "ordering-key"}'::jsonb);
-- We can get multiple messages with the same ordering key in a single query
select msg_id, msg, headers from pgmq.read_fifo('myqueue', 10, 10);
| msg_id | msg       | headers                           |
|--------+-----------+-----------------------------------|
| 1      | '"hello"' | '{"x-pgmq-fifo": "ordering-key"}' |
| 2      | '"fifo"'  | '{"x-pgmq-fifo": "ordering-key"}' |
select pgmq.send('myqueue', '"bye"'::jsonb, '{"x-pgmq-fifo": "ordering-key"}'::jsonb);
select pgmq.send('myqueue', '"world"'::jsonb, '{"x-pgmq-fifo": "another-ordering-key"}'::jsonb);
-- On the second select, we don't get the `"bye"` message because it belongs to the FIFO group with the `"ordering-key"` key, which still
-- has undeleted messages, even though they are not currently visible
select msg_id, msg, headers from pgmq.read_fifo('myqueue', 10, 10);
| msg_id | msg       | headers                                   |
|--------+-----------+-------------------------------------------|
| 4      | '"world"' | '{"x-pgmq-fifo": "another-ordering-key"}' |
select pg_sleep(10);
-- After the VT expires or the messages are deleted, we can read the messages that were behind them in the queue
select msg_id, msg, headers from pgmq.read_fifo('myqueue', 10, 10);
| msg_id | msg       | headers                                   |
|--------+-----------+-------------------------------------------|
| 1      | '"hello"' | '{"x-pgmq-fifo": "ordering-key"}'         |
| 2      | '"fifo"'  | '{"x-pgmq-fifo": "ordering-key"}'         |
| 3      | '"bye"'   | '{"x-pgmq-fifo": "ordering-key"}'         |
| 4      | '"world"' | '{"x-pgmq-fifo": "another-ordering-key"}' |
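For illustration only, here is a rough sketch (not part of pgmq) of the query read_fifo could run to get these semantics. It assumes pgmq's per-queue table layout (pgmq.q_myqueue with msg_id, read_ct, vt, message, and the headers column used above) and the x-pgmq-fifo header convention:

-- Sketch: skip every visible message whose ordering key still has an
-- in-flight (invisible) message, then claim the rest in msg_id order.
with blocked_groups as (
    select distinct headers->>'x-pgmq-fifo' as grp
    from pgmq.q_myqueue
    where vt > clock_timestamp()
      and headers->>'x-pgmq-fifo' is not null
),
eligible as (
    select msg_id
    from pgmq.q_myqueue
    where vt <= clock_timestamp()
      and (headers->>'x-pgmq-fifo' is null        -- plain messages pass through
           or headers->>'x-pgmq-fifo' not in (select grp from blocked_groups))
    order by msg_id
    limit 10                                      -- batch size
    for update skip locked
)
update pgmq.q_myqueue q
set vt = clock_timestamp() + interval '10 seconds',   -- new visibility timeout
    read_ct = read_ct + 1
from eligible e
where q.msg_id = e.msg_id
returning q.msg_id, q.message as msg, q.headers;

A real implementation would probably also want an index on the ordering key (or a dedicated key column) so the blocked-group scan stays cheap as queues grow.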
@fggrtech, @tazmaniax, @nimrodkor - what are your thoughts on the API proposed above? Do you think this will meet your requirements?
Important notes:
@ChuckHend Works for me.
This makes sense. Correct me if I'm wrong below: if I have a user-A (fifo-group-id = user-A), that user has 10 messages, and I'm reading them 1 by 1 because ordering matters:
That's right, @kevbook!
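To make the one-at-a-time flow concrete, a small usage sketch, assuming the proposed pgmq.read_fifo(queue, vt, qty) signature and the existing pgmq.delete(queue, msg_id):

-- Read exactly one message: for a FIFO group this is the oldest visible
-- message of an unblocked group (e.g. user-A's earliest pending message).
select * from pgmq.read_fifo('myqueue', 30, 1);
-- Process it, then delete (or archive) it. Until that happens, the rest of
-- user-A's messages stay blocked behind it.
select pgmq.delete('myqueue', 1);  -- msg_id returned by the read above
-- Repeat: the next read_fifo returns user-A's next message, in order.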
@ChuckHend, the API looks good. Will it work similarly to SQS FIFO, in that if fewer than the requested number of messages are available for a given message group ID, a read may include messages from other message group IDs in the same batch, with each group retaining FIFO order? Do you have some idea when this might be available and published to dbdev for easy deployment to Supabase? FIFO with a group ID is critical for my use case. Let me know if there is something I can do to help.
I think the intention is to make it similar to SQS, though that is still TBD. What would be your preference? Regarding dbdev, perhaps @olirice or someone from the Supabase team could chime in; I am not sure who is responsible for keeping https://database.dev/plpgsql/pgmq up to date.
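If it does end up SQS-like, then under the read_fifo sketch earlier in the thread a single batch can mix group IDs while each group keeps its own order. A hypothetical example (queue name and payloads made up for illustration):

select pgmq.create('jobs');
select pgmq.send('jobs', '"a1"'::jsonb, '{"x-pgmq-fifo": "group-a"}'::jsonb);
select pgmq.send('jobs', '"b1"'::jsonb, '{"x-pgmq-fifo": "group-b"}'::jsonb);
select pgmq.send('jobs', '"a2"'::jsonb, '{"x-pgmq-fifo": "group-a"}'::jsonb);
-- A batch of 10 returns all three (a1, b1, a2): groups are interleaved in the
-- batch, but a1 always comes back before a2.
select msg_id, msg, headers from pgmq.read_fifo('jobs', 10, 10);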
@tazmaniax if you upgrade your instance you'll find that Supabase now quietly ships pgmq. If you have previously installed pgmq via dbdev, you'll want to remove/drop that before upgrading, or the schema names will clash when you try to enable it on your new instance.
Though in this case it would probably be most reliable to copy/paste the SQL, as dbdev is still in flux.
@olirice I've just done the upgrade, and yes, pgmq is there; that's great and thanks for the heads up.
Firstly, this project is really neat.
Any thoughts on supporting FIFO queues with message key values, similar to SQS FIFO + MessageGroupId?
Riffing off your existing work, here are the prototype functions which illustrate the idea:
Thoughts?