
Server side encryption configuration option #3

Open
kaapa opened this issue May 19, 2015 · 5 comments
kaapa commented May 19, 2015

Older Refile versions, using AWS SDK V1, supported configuring server-side encryption via the s3_options passed to each S3 backend/bucket.

https://github.com/refile/refile/blob/bdc1fead72747a18f7120189d860f6368dbdc81e/lib/refile/backend/s3.rb#L37

AWS SDK V2 doesn't support configuring this option on the Aws::S3::Resource object.

https://github.com/refile/refile-s3/blob/master/lib/refile/s3.rb#L40

I think being able to define the encryption per bucket is a rather essential feature. AWS SDK V2 requires this to be passed as part of the options argument to the copy_from, put, and presigned_post methods (for example server_side_encryption: 'aes256').

https://github.com/refile/refile-s3/blob/master/lib/refile/s3.rb#L56
https://github.com/refile/refile-s3/blob/master/lib/refile/s3.rb#L58
https://github.com/refile/refile-s3/blob/master/lib/refile/s3.rb#L140

http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#copy_from-instance_method
http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#put-instance_method
http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Object.html#presigned_post-instance_method
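
For illustration, the options each of those calls would need might be built like this. This is just a sketch with hypothetical bucket/key/body values; only the server_side_encryption key comes from the SDK documentation linked above:

```ruby
# Per-call arguments under AWS SDK V2: the encryption setting must be
# repeated in each operation's options hash instead of being set once on
# the Aws::S3::Resource. (Bucket, key, and body values are illustrative.)
ENCRYPTION = { server_side_encryption: "aes256" }.freeze

put_args     = { body: "file contents" }.merge(ENCRYPTION)      # Object#put
copy_args    = { copy_source: "bucket/cache/id" }.merge(ENCRYPTION) # Object#copy_from
presign_args = { key: "cache/${filename}" }.merge(ENCRYPTION)   # Bucket#presigned_post
```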

IMO these are some of the other "static" options that could be a deal breaker for some users, though they are not relevant to my use case:

  • storage_class
  • sse_customer_algorithm
  • sse_customer_key
  • sse_customer_key_md5
  • ssekms_key_id

kaapa commented May 19, 2015

Here's an example of functioning AES256 encrypted S3 backend initialization under Refile 0.5.4:

aws = {
  access_key_id: 'aws_access_key_id',
  secret_access_key: 'aws_secret_access_key',
  bucket: 'aws_bucket_name',
  s3_server_side_encryption: :aes256
}

Refile.cache = Refile::Backend::S3.new(prefix: 'cache', **aws)
Refile.store = Refile::Backend::S3.new(prefix: 'store', **aws)

@jnicklas (Contributor) commented:

That's pretty annoying. We should probably still accept these options the same way and pass them through to the relevant library calls. A PR for this would be greatly appreciated!

kaapa added a commit to kaapa/refile-s3 that referenced this issue May 24, 2015

kaapa commented May 24, 2015

Comments/advice on my approach would be appreciated before I open a PR.

A couple of notes:

  • I didn't get the test suite to pass in its original state. Tests related to non-existing files raise Aws::S3::Errors::Forbidden. This might be due to my S3 configuration, or perhaps a regression since the upgrade to AWS SDK V2?
  • Bucket#presigned_post does not accept the same options as Object#copy_from and Object#put, hence the two configuration options. This is a bit nasty, since e.g. server_side_encryption needs to be set twice.
  • Another approach would have been to collect the parameters individually from @s3_options and pass them to the three methods as per their APIs. I just felt that would add an unnecessary layer of abstraction and maintenance burden to refile-s3, while simply delegating the user-provided options relieves the library of that responsibility.
  • I have not yet commented the code, as this is just to show the direction and get feedback. I will add comments if this approach is validated.
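
The delegation approach from the notes above can be sketched as plain option-hash merging. The class and method names here are hypothetical, not the actual refile-s3 code:

```ruby
# Sketch of the delegation idea: the backend stores the user-supplied
# option hashes verbatim and merges them into each SDK call's arguments,
# so the library never needs to know which keys are valid.
class S3BackendSketch
  def initialize(object_options: {}, presigned_post_options: {})
    @object_options = object_options
    @presigned_post_options = presigned_post_options
  end

  # Arguments for Object#copy_from / Object#put; per-call values win.
  def object_call_args(per_call = {})
    @object_options.merge(per_call)
  end

  # Arguments for Bucket#presigned_post.
  def presigned_post_args(per_call = {})
    @presigned_post_options.merge(per_call)
  end
end

backend = S3BackendSketch.new(
  object_options: { server_side_encryption: "aes256" },
  presigned_post_options: { server_side_encryption: "aes256" }
)
backend.object_call_args(body: "data")
# => { server_side_encryption: "aes256", body: "data" }
```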


kaapa commented May 24, 2015

And here's a configuration example:

aws = {
  access_key_id: 'aws_access_key_id',
  secret_access_key: 'aws_secret_access_key',
  bucket: 'aws_bucket_name',
  s3_object_operation_options: {
    server_side_encryption: 'aes256'
  },
  s3_presigned_post_options: {
    server_side_encryption: 'aes256'
  }
}

Refile.cache = Refile::Backend::S3.new(prefix: 'cache', **aws)
Refile.store = Refile::Backend::S3.new(prefix: 'store', **aws)

Option                        API
s3_object_operation_options   Object#copy_from and Object#put
s3_presigned_post_options     Bucket#presigned_post

PS. After a more thorough look at the #copy_from and #put APIs, it is obvious that they differ too, but at least their encryption options are the same, which is not the case with #presigned_post.

kaapa added a commit to kaapa/refile-s3 that referenced this issue May 25, 2015

kaapa commented May 25, 2015

I got a bit annoyed with my initial implementation and took a second stab at the problem.

This second take allows a flat configuration, e.g.:

aws = {
  access_key_id: 'aws_access_key_id',
  secret_access_key: 'aws_secret_access_key',
  bucket: 'aws_bucket_name',
  server_side_encryption: 'aes256'
}

Refile.cache = Refile::Backend::S3.new(prefix: 'cache', **aws)
Refile.store = Refile::Backend::S3.new(prefix: 'store', **aws)

The downside is that S3_AVAILABLE_OPTIONS must define the valid options for S3 operations as per the API of each method used from the AWS SDK. Also, the current implementation only supports symbols as configuration keys.
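
The flat-configuration filtering might look roughly like this. This is a sketch: the contents of S3_AVAILABLE_OPTIONS and the helper name are hypothetical, not the actual implementation:

```ruby
# Sketch of the whitelist approach: S3_AVAILABLE_OPTIONS has to enumerate
# every option accepted by the wrapped SDK methods; anything else in the
# flat configuration hash is treated as backend configuration rather than
# an S3 operation option. Note the symbol-keys-only limitation.
S3_AVAILABLE_OPTIONS = [:server_side_encryption, :storage_class].freeze

def extract_s3_options(config)
  config.select { |key, _value| S3_AVAILABLE_OPTIONS.include?(key) }
end

aws = {
  access_key_id: "aws_access_key_id",
  secret_access_key: "aws_secret_access_key",
  bucket: "aws_bucket_name",
  server_side_encryption: "aes256"
}
extract_s3_options(aws)  # => { server_side_encryption: "aes256" }
```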
