Minio #1767
Conversation
def download_document_from_s3(s3_key, original_file_name):
    s3_response = s3_client().get_object(Bucket=settings.AWS_STORAGE_BUCKET_NAME, Key=s3_key)
Looks like a good place for us to flip over to proxying the new download API endpoint
Yeah. This is me picking up a PR that had sat dormant for a long time, but I definitely see this ending up as the function we switch out to read from the API. The only thing it misses at the moment is the document id, which we'd also want to send so we can check file permissions.
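For reference, the function quoted above fetches the object with get_object; the streaming pattern mentioned in this PR typically drains the response's file-like Body in fixed-size chunks rather than reading it all into memory. A minimal sketch, where iter_chunks is a hypothetical helper (not code from this PR) and an in-memory buffer stands in for the S3 body:

```python
import io

def iter_chunks(body, chunk_size=8192):
    # Yield fixed-size chunks from any file-like object, such as the
    # streaming Body returned by an S3 get_object call.
    while True:
        chunk = body.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Usage with an in-memory stand-in for an S3 body:
payload = b"x" * 20000
chunks = list(iter_chunks(io.BytesIO(payload)))
assert b"".join(chunks) == payload
```

A generator like this can be handed to a streaming HTTP response so the file is proxied to the client without buffering it whole.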
AWS_REGION=<<FROM_VAULT>>
AWS_S3_ENDPOINT_URL=http://s3:9000
AWS_ACCESS_KEY_ID=minio_username
AWS_SECRET_ACCESS_KEY=minio_password
Is the expectation here that minio will be present on the API docker-compose only? Just wrapping my head around how this works
That is correct. The API needs access to the same bucket, so we need a single instance of minio running, and it makes sense for that to live in the API's docker-compose. We do the same for redis.

That's what

networks:
  lite:
    external: true

is for: it bridges between the two docker-compose clusters (or whatever they're called).
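For context, a sketch of what the providing side of that bridge might look like. This is a hypothetical fragment, not taken from this PR; the service name `s3` matches the `AWS_S3_ENDPOINT_URL=http://s3:9000` setting quoted above, and `MINIO_ROOT_USER`/`MINIO_ROOT_PASSWORD` are minio's standard credential variables:

```yaml
# Hypothetical sketch of the API project's docker-compose, which owns
# the single minio instance on the shared "lite" network:
services:
  s3:
    image: minio/minio
    command: server /data
    environment:
      MINIO_ROOT_USER: minio_username
      MINIO_ROOT_PASSWORD: minio_password
    networks:
      - lite

networks:
  lite:
    name: lite
```

The other project then declares `lite` with `external: true`, as quoted above, so its containers can resolve `http://s3:9000`.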
    }
)

# Send LITE API the file information
case_documents, _ = post_case_documents(request, case_id, data)

if "errors" in case_documents:
    if settings.DEBUG:
Feels like a bad pattern to have settings.DEBUG in normal control flow. Is there another way? Not sure what, though, at the moment.
What is your concern?
The reason behind this was that, when I was testing this locally, I would sometimes get a generic error that made it hard to understand what was going on. When running in debug mode, it was better to just return the error that came from the API so it was clear what had happened.
I just don't like seeing a different flow based on settings.DEBUG in production code. No biggie, so approving.
Small comment about the debug; generally looks sensible to me. This seems high risk because of some of the core changes — has it been tested on a dev environment anywhere?

Deployed on dev2. Files uploaded and downloaded and all working as expected.
Force-pushed from bc076f6 to 51ef147
This removes the usage of presigned URLs so that we can use minio for local development and testing. The streaming response was already used in other parts of the application, so this also starts making that pattern consistent. This also switches the unit tests to use a mock S3 library.
This is to ensure that different environments can't change these settings and inadvertently break the tests
This is due to the way it loads Django settings at module level, which we want to override in tests
LGTM
Aim
This adds a docker container that is running minio so that we can switch to a completely isolated environment instead of having to talk to the real S3 when working locally or running tests.
LTD-4658