
using delete_stream #269

Closed
ouven opened this issue May 18, 2023 · 2 comments

Comments

ouven commented May 18, 2023

Hi,

The problem

I have an aggregate containing very private data that needs to erase all its events from the event store after it has been deleted.
But I want to keep the deleted event itself, so that all projections are guaranteed to receive it and can erase the corresponding private data too.
So I did something like this:

defmodule PrivateLocation do
  # aggregate
  # ... cut things ...

  def execute(%__MODULE__{privatelocation_id: id} = me, %DeletePrivateLocation{} = _command) do
    with :ok <- existent(me),
         :ok <- undeleted(me) do
      # hard delete here, so the deleted event stays in the event stream
      :ok = My.EventStore.delete_stream("#{id}", :any_version, :hard)

      %DeletedPrivateLocation{privatelocation_id: id}
    end
  end

  # ... cut things ...
end

Expected:
When I send the delete command:

  • all the events and the stream are deleted from the event store
  • the deleted event itself is added to a newly created stream with the same id

Actual behaviour:

  • :ok is not returned from delete_stream; instead it returns {:error, :stream_not_found}
  • The stream is gone anyway
  • The deleted event was not added to a new stream

I do not fully understand this. Is there retry logic somewhere, so that a second attempt yields the error?

What pattern was intended to be used here?

Should I use an event handler that listens to the deleted event and then hard deletes the stream?
Is it guaranteed that all handlers listening to this stream (or the $all stream) will receive this last event and can erase the private data from their projections?

Or should I soft delete first and schedule a hard delete for later?

slashdotdash (Member) commented May 18, 2023

You should not directly access or modify the event store from within the aggregate's execute/2 function. Doing so will likely break the aggregate's behaviour when it attempts to append any events returned from the function, since the aggregate's event stream will either no longer exist (deleted) or be at a different version. In either case appending the returned event(s) will fail. When this happens the aggregate retries the command after fetching the latest events from its event stream, up to a limited number of failed attempts before the command dispatch returns an error.

What you can do instead is:

  • Use an event handler to hard delete the stream and then append a new DeletedPrivateLocation event to the now deleted stream (a sketch follows below).
  • Don't store private data within the aggregate's event stream, but instead store it in an external data store where it can be more easily deleted when you want to. The events would only contain a reference to the data held externally.
  • Use "crypto-shredding" to encrypt private data and then you can later delete the encryption key to prevent future decryption.

See: GDPR compliance recipe
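
A minimal sketch of the first option might look something like the handler below. The handler module name and the MyApp application module are assumptions for illustration; My.EventStore.delete_stream/3 and append_to_stream/3 are the EventStore functions used, wired into a Commanded.Event.Handler.

defmodule PrivateLocationDeletionHandler do
  # Sketch only: the module name and `application: MyApp` are assumptions.
  use Commanded.Event.Handler,
    application: MyApp,
    name: "PrivateLocationDeletionHandler"

  def handle(%DeletedPrivateLocation{privatelocation_id: id}, _metadata) do
    stream_uuid = "#{id}"

    # Hard delete permanently removes the stream and all of its events,
    # including the DeletedPrivateLocation event the aggregate just appended.
    :ok = My.EventStore.delete_stream(stream_uuid, :any_version, :hard)

    # Re-append the deletion event so a fresh stream with the same id records
    # only the fact that the location was deleted.
    event = %EventStore.EventData{
      event_type: "#{DeletedPrivateLocation}",
      data: %DeletedPrivateLocation{privatelocation_id: id},
      metadata: %{}
    }

    My.EventStore.append_to_stream(stream_uuid, :no_stream, [event])
  end
end

How to keep the handler idempotent if it is retried after the stream has already been re-created is left out of the sketch.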

ouven (Author) commented May 18, 2023

Thank you for the fast response!

The first option is the one we will go with.

We already evaluated the other two and came to the conclusion that the first is the best fit for us. We just got stuck in the details.
So again, thank you for unblocking us so quickly!

ouven closed this as completed May 18, 2023