I question why this is really needed; the S3 bucket is supposed to work behind the scenes and isn't meant to be a way to view or manage uploaded documents. The GitHub pull requests will have all the metadata, including the project name and filename.
My biggest concern here was associated media not getting erased on deletion: files becoming orphaned and unnecessary while still taking up storage.
I understand that you'd need a sizeable amount of storage-consuming media to make a cost impact, so that part isn't too concerning. My real worry is managing things if a project upload somehow goes awry. That's where my head was at on this.
I could see the benefit of a cleanup script that parses all of the projects for the URLs of their content files and thumbnails, then deletes any stored files that are not linked.
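A minimal sketch of the orphan-detection part of such a script. The URL format and key layout are assumptions, not the project's actual schema; in practice the list of bucket keys would come from `boto3`'s `list_objects_v2` and the linked URLs from parsing the project files.

```python
def find_orphans(bucket_keys, linked_urls):
    """Return keys that exist in the bucket but are not referenced by any project.

    bucket_keys: iterable of S3 object keys (e.g. from list_objects_v2)
    linked_urls: iterable of content/thumbnail URLs collected from project files
    (the ".amazonaws.com/" split assumes virtual-hosted-style S3 URLs)
    """
    linked_keys = {url.split(".amazonaws.com/", 1)[-1] for url in linked_urls}
    return [key for key in bucket_keys if key not in linked_keys]


keys = ["proj-a/img.png", "proj-b/thumb.jpg", "proj-c/stale.png"]
urls = [
    "https://bucket.s3.amazonaws.com/proj-a/img.png",
    "https://bucket.s3.amazonaws.com/proj-b/thumb.jpg",
]
print(find_orphans(keys, urls))  # ['proj-c/stale.png']
```

The actual deletion step (`delete_objects`) is deliberately left out so the script can be run in a dry-run mode first and the orphan list reviewed before anything is removed.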
Description
For each submission, create a separate folder in S3
Acceptance Criteria
A Lambda function creates the S3 folder via the AWS CLI
Mocks
Each folder would be named with a slugified version of the project name
Reference Links
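A sketch of what the slugified-folder step could look like. The slugify rules and key layout here are assumptions, not a spec; note that S3 has no real folders, so "creating a folder" just means writing objects under a shared key prefix.

```python
import re


def slugify(name):
    """Lowercase the name, collapse non-alphanumeric runs into single
    hyphens, and trim leading/trailing hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")


def submission_key(project_name, filename):
    """Build the S3 key for a file in a submission's 'folder' (prefix)."""
    return f"{slugify(project_name)}/{filename}"


print(submission_key("My Cool Project!", "thumb.png"))  # my-cool-project/thumb.png
```

Inside the Lambda, the upload itself would then be a single `put_object` call with this key, e.g. `s3.put_object(Bucket=bucket, Key=submission_key(name, fn), Body=data)`.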