
Zipping large file 3.5 GB memory issue #116

Open
Guling85 opened this issue Oct 4, 2022 · 4 comments


Guling85 commented Oct 4, 2022

I'm trying to zip a large file with streams; while reading the stream I slice the zipped output into chunks and insert them into IndexedDB. I'm getting this error:
conflux.esm.js:815 Uncaught (in promise) RangeError: Array buffer allocation failed
at ZipTransformer._callee$ (conflux.esm.js:815:31)

Can't the lib handle large zip files, or am I doing something wrong?

This is my code:

this.logger.info('zipping files', this.files);

    // Imports at the top of the file (chunkSlicer is a local helper that
    // re-chunks the zipped byte stream):
    //   import { Writer } from '@transcend-io/conflux';
    //   import { openDB } from 'idb';

    const iterator = this.files.entries();

    // Emits one zip entry per file; conflux pulls each entry's stream lazily.
    const myReadable = new ReadableStream({
      async pull(controller) {
        const { value, done } = await iterator.next();
        console.log('TEST', value);
        if (done) {
          controller.close();
        } else {
          console.log('TEST2', value);
          controller.enqueue({
            name: `/${value[1].name}`,
            stream: () => value[1].stream(),
          });
        }
      },
    });

    const appDB = await openDB<DB>('db'); // DB is a local schema type

    const writableStream = new WritableStream({
      async write(chunk) {
        // Store each zipped chunk as a row in IndexedDB
        // (transferId/index/chunkOrder are placeholder values for now).
        await appDB.add('chunks', { transferId: '1', index: 'index', chunkOrder: 1, blob: chunk });
        chunk = null; // no-op: reassigning the parameter doesn't free the buffer
      },
      close() {
        console.log('[close]');
      },
      abort(reason) {
        /* … */
      },
    });

    myReadable
      .pipeThrough(new Writer())
      .pipeThrough(chunkSlicer(640000))
      .pipeTo(writableStream);
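
For anyone reproducing this: pipeTo() returns a promise, and the "Uncaught (in promise)" in the trace suggests that promise is never awaited. A minimal sketch (assuming the surrounding method is async) that at least surfaces the failure where it can be handled:

    try {
      await myReadable
        .pipeThrough(new Writer())
        .pipeThrough(chunkSlicer(640000))
        .pipeTo(writableStream);
    } catch (err) {
      // The allocation failure rejects the pipe promise; catching it here
      // avoids the unhandled rejection (it won't fix the memory use itself).
      console.error('zip pipeline failed', err);
    }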

michaelfarrell76 (Member) commented:
@eligrey any idea looking at the code here?

eligrey (Member) commented Oct 5, 2022

Assuming that openDB is implemented correctly, I would guess that console.log('TEST', value); may be causing a memory leak.

Guling85 (Author) commented Oct 5, 2022

I have tried removing the console.log() calls and still get the same error. The error still comes from conflux.esm.js:815: Uncaught (in promise) RangeError: Array buffer allocation failed.

Guling85 (Author) commented Oct 6, 2022

I found the cause of the memory issue. It's because I'm not using fetch: I'm using a file picker and merging the File objects into a ReadableStream. When conflux tries to transform my stream, it throws this error.


const myReadable = new ReadableStream({
      async pull(controller) {
        const { value, done } = await iterator.next();
        console.log('TEST', value);
        if (done) {
          controller.close();
        } else {

          console.log('TEST2', value);

          return controller.enqueue({
            name: `/${value[1].name}`,
            stream: () => value[1].stream(),
          });
        }
      },
    });
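
A hypothetical workaround sketch (not something confirmed to fix it): read each File in fixed-size slices instead of handing conflux the Blob's native stream, so no step needs one multi-gigabyte buffer. Whether this avoids the allocation failure depends on where conflux buffers internally.

    // Hypothetical helper: stream a File as bounded Uint8Array slices.
    // file.slice() is cheap; bytes are only read when arrayBuffer() runs.
    function sliceStream(file, chunkSize = 1024 * 1024) {
      let offset = 0;
      return new ReadableStream({
        async pull(controller) {
          if (offset >= file.size) return controller.close();
          const slice = file.slice(offset, offset + chunkSize);
          offset += chunkSize;
          controller.enqueue(new Uint8Array(await slice.arrayBuffer()));
        },
      });
    }

    // Usage in the entry above: stream: () => sliceStream(value[1])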

Is there any way to zip multiple files directly from the file picker without fetching them from elsewhere?
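
For reference, a minimal sketch of that, assuming conflux's documented entry shape ({ name, lastModified, stream }) and a file input with the multiple attribute; destination is a stand-in for whatever WritableStream the zip gets piped into (the IndexedDB sink above, StreamSaver, etc.):

    import { Writer } from '@transcend-io/conflux';

    const input = document.querySelector('input[type="file"]');
    const files = [...input.files]; // File objects picked by the user

    const entries = new ReadableStream({
      pull(controller) {
        const file = files.shift();
        if (!file) return controller.close();
        controller.enqueue({
          name: `/${file.name}`,
          lastModified: new Date(file.lastModified),
          stream: () => file.stream(), // File extends Blob; no fetch needed
        });
      },
    });

    await entries.pipeThrough(new Writer()).pipeTo(destination);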
