Make the concatener a Readable and upload to google chunk by chunk #90
Conversation
```
ctx.req.on('aborted', () => {
  log.debug('request-aborted', 'The request has been aborted!');
  concatener.unpipe(googleStorageStream);
  const error = new BadRequestError('The request has been aborted.');
  googleStorageStream.destroy(error);
  concatener.destroy(error);
});
```
From my test this doesn't seem to make a big difference. But this does no harm 🤷
```
  this.chunks.length = 0;
  callback();
}

transferContents(): Buffer {
```
I kept this one for tests, as it's still useful in that context. But we shouldn't use it in the production code.
I wonder if I should add a check for NODE_ENV === 'test'... The check obviously wouldn't catch anything when running the unit tests (NODE_ENV is 'test' there), but we'd see it when running the integration tests. Thoughts?
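Concretely, I'm imagining something like this (just a sketch; I'm assuming `transferContents` concatenates the buffered chunks, as the code above suggests):

```ts
transferContents(): Buffer {
  // Test-only helper: fail loudly if anything outside of unit tests
  // calls it. Integration tests run with a different NODE_ENV, so a
  // production code path using this would show up there.
  if (process.env.NODE_ENV !== 'test') {
    throw new Error('transferContents should only be called from tests.');
  }
  const contents = Buffer.concat(this.chunks);
  this.chunks.length = 0; // release the buffered chunks
  return contents;
}
```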
I've thought about this a bit more; it may just be easier to do the work I planned: generate a token before starting to upload, which effectively removes the need to concatenate the content. I think I'll try this next week and see if I can get it to work with a small amount of work.
Superseded by #91.
This change halves memory usage during an upload by avoiding a copy: the data is given to the Google stream chunk by chunk instead of all at once (checked using top). This also seems to recover memory better when aborting big requests (but I'll have to double-check this).
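To make the idea concrete, here's a minimal sketch (not the actual diff; the wiring is illustrative and the real concatener does more): the concatener becomes a `Transform`, which is both Writable and Readable, so each chunk can be forwarded to the Google Cloud Storage stream as soon as it arrives instead of being accumulated into one big `Buffer` first.

```ts
import { Transform, TransformCallback } from 'stream';

// Sketch: a pass-through concatener. Because a Transform is both
// Writable and Readable, it can sit between the request and the
// storage stream and forward chunks immediately, so memory use is
// bounded by the stream's highWaterMark rather than the payload size.
class Concatener extends Transform {
  _transform(
    chunk: Buffer,
    _encoding: BufferEncoding,
    callback: TransformCallback
  ) {
    callback(null, chunk); // hand the chunk downstream right away
  }
}

// Hypothetical wiring, matching the names used in the review snippet:
// const concatener = new Concatener();
// ctx.req.pipe(concatener).pipe(googleStorageStream);
```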
The first commit is #89, please don't look at it.
This might incidentally fix #83, but I haven't tried yet; I'll check before requesting review.
I already tested with a real profile end-to-end.
There seems to still be a small leak, but I'm not sure yet. That's why I'm not requesting a review yet.