
Google Cloud Storage uploader #240

Open · wants to merge 1 commit into base: master
Conversation

@mlt mlt commented Oct 13, 2018

No description provided.

@mlt force-pushed the gcs branch 3 times, most recently from a57e728 to af829d4 on October 15, 2018 18:37

@mattheworiordan (Owner) left a comment


This is an awesome contribution @mlt.
The only thing I am a little concerned about is that there are no tests. Now, I realise the S3 driver has no tests either, and I also recognise end-to-end tests are going to be hard. However, it would be nice to have some assurance that we won't end up with a regression in future.
WDYT?

@mlt (Author) commented Oct 25, 2018

Thanks! I did write it after the S3 implementation.

Before we jump into testing, I wonder how scalable this approach is. What happens if we add more XYZ uploaders? We keep adding members to store the corresponding configs in hashes, and we only allow a single uploader to be used at a time. We may say it doesn't make sense to use two uploaders at the same time (why would anybody need that?), but I wonder if we should instead have a single hash with :s3 and :gcs keys at the top level and make it possible to use all of them if someone wants to. I feel like some refactoring would be necessary, though...
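A rough sketch of that single top-level, per-provider hash idea (all names here are illustrative, not the gem's actual API):

```ruby
# Illustrative sketch only: one top-level hash keyed by provider, so
# additional uploaders (:s3, :gcs, ...) can coexist instead of each one
# adding its own config members to the base class.
module Uploaders
  # Hypothetical registry mapping provider keys to uploader callables.
  REGISTRY = {}

  def self.register(key, &uploader)
    REGISTRY[key] = uploader
  end

  # Run every configured uploader; `config` is e.g.
  # { s3: { bucket: "shots" }, gcs: { bucket: "shots", project: "p" } }
  def self.upload_all(config, file)
    config.each do |provider, opts|
      handler = REGISTRY.fetch(provider) do
        raise ArgumentError, "no uploader registered for #{provider.inspect}"
      end
      handler.call(file, opts)
    end
  end
end
```

With this shape, running one provider or several is just a matter of which keys the user puts in the hash.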

> Now I realise the S3 driver has no test

I do see a unit test for S3. I can reuse that for GCS as well.

Also, I don't think end-to-end testing is going to be hard if we are up for it. Ingress is free, and if we delete uploads, I don't think we would have to pay anything for storage. With high availability and low latency, it can be done. We would want to encrypt credentials, though.

I'm going to rebase my stuff meanwhile as S3 path reporting changes have been merged... it will look clean for GCS as well.

@mattheworiordan (Owner) commented

@mlt sorry for the slow reply.

Initial thoughts:

  1. I don't think you should over-engineer it at this stage. If we add another target bucket provider, then I think we should consider a refactor at that time. Unless, of course, you have the time and fancy doing that work. What I am saying is that I don't think it's a blocker to this useful feature.
  2. I don't believe an end-to-end test is necessary. What is necessary is a test that stubs out the GCS gem methods and makes sure they are called with the expected values. At least then the contract between this gem and that gem is tested, and we can rely on the GCS gem to keep its contract with Google working.

WDYT?
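The stub-based contract test described in point 2 might look roughly like this; a hand-rolled stub is used here to keep the sketch self-contained, where a real spec would likely stub the GCS gem's client with rspec-mocks, and the uploader function shown is illustrative, not the gem's actual code:

```ruby
# Minimal hand-rolled stub standing in for a GCS bucket object. It
# records the arguments it receives instead of performing any upload.
class StubBucket
  attr_reader :uploads

  def initialize
    @uploads = []
  end

  def create_file(local_path, remote_path)
    @uploads << [local_path, remote_path]
  end
end

# Hypothetical code under test: it only needs *a* bucket-like object,
# so the contract (which methods are called, with which values) is
# exactly what the assertion verifies, without touching the network.
def upload_screenshot(bucket, path, key_prefix)
  bucket.create_file(path, File.join(key_prefix, File.basename(path)))
end
```

Since nothing outside the stub is exercised, the test stays fast and the GCS gem is trusted to keep its own contract with Google working.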

@mlt (Author) commented Mar 14, 2019

Uhm… no worries about slow replies; it took me over 2 months to get back to it 😆
I think it passes some tests now. I used the S3 implementation as a reference, but I had to modify it heavily. I hope it tests most if not all cases. I do not see coverage reports.

@glaszig (Contributor) commented Aug 5, 2019

I wonder why this is not implemented using the callbacks. That would be much cleaner and would not clutter the base class with provider specifics. It could also be released as a separate gem.
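A minimal sketch of the callback-based design being suggested, assuming a generic after-save hook on the core class (class and method names are illustrative, not the gem's actual API):

```ruby
# Illustrative callback-based design: the core class knows nothing
# about S3 or GCS; it just runs whatever hooks were registered after
# saving. A provider-specific gem could ship only the hook.
class Screenshot
  @after_save_callbacks = []

  class << self
    attr_reader :after_save_callbacks

    # A separate gem (e.g. a GCS uploader) registers itself here.
    def after_save(&block)
      @after_save_callbacks << block
    end
  end

  def save(path)
    # ... write the screenshot to `path` here ...
    self.class.after_save_callbacks.each { |cb| cb.call(path) }
    path
  end
end
```

Under this shape an uploader gem would only need `Screenshot.after_save { |path| ... }` in its setup code, keeping provider specifics out of the base class entirely.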

3 participants