
client.createWriteStream results zero length file #181

Open · artturimatias opened this issue Nov 6, 2019 · 29 comments

@artturimatias commented Nov 6, 2019

I'm writing a Nextcloud client application with Node.js and webdav-client.
I have no problems reading content from the Nextcloud server, but uploading files with client.createWriteStream creates zero-length files.

This is how I'm using client.createWriteStream:

fs.createReadStream('workcycles.jpg')
    .pipe(client.createWriteStream("test_file.jpg"));

I can upload files with curl or cadaver, so this is not a permission problem:

curl -u user:pass -T workcycles.jpg "https://mydummync.com/remote.php/webdav/new_file.jpg"

Am I doing something wrong here? Maybe this is related to #173?

version: 2.10.0

@perry-mitchell (Owner)

Streams are tricky. Are you waiting for it to finish? You could use something like end-of-stream to detect this easily.

Ideally you'd want to log all stream errors to see if it's failing. If you could do that, it might show why no data is being written.
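
A minimal sketch of that suggestion (writeStream is a placeholder for whatever client.createWriteStream returned):

const endOfStream = require("end-of-stream");

endOfStream(writeStream, (err) => {
    if (err) return console.error("stream failed or closed early:", err);
    console.log("stream finished");
});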

@artturimatias (Author) commented Nov 7, 2019

Thanks for the reply. The problem is that I don't get any errors, and the stream seems to end properly.
If I replace client.createWriteStream with fs.createWriteStream, the stream works and I get a new local file.

const fs = require('fs');
const path = require('path');
const eos = require('end-of-stream');

const filename = 'workcycles.jpg';
const local = fs.createReadStream(path.join('preview', filename));
const webdav = client.createWriteStream(filename); // this results in a zero-length file
// const webdav = fs.createWriteStream(filename); // this works

eos(webdav, function (err) {
    if (err) return console.log('webdav stream had an error or closed early');
    console.log('webdav stream has ended', this === webdav);
});
webdav.on('data', () => {
    console.log('webdav data');
});
webdav.on('error', () => {
    console.log('webdav error');
});

local.pipe(webdav);

Maybe I'm just not using it right. Could you provide a working example of how one should use client.createWriteStream with Nextcloud?

@perry-mitchell (Owner) commented Nov 11, 2019

@artturimatias Ok, so I tried with an example and the stream writes, but I don't get the end event at the correct moment. Here's the example I'm using:

const fs = require("fs");
const { createClient } = require("webdav");
const endOfStream = require("end-of-stream");

const client = createClient("https://somewebdav.com", {
    username: "user",
    password: "pass"
});

const readStream = fs.createReadStream("./webdav.jpg");
const writeStream = client.createWriteStream("/webdav.jpg");
readStream.pipe(writeStream);

endOfStream(writeStream, err => {
    if (err) {
        console.error(err);
        return;
    }
    console.log("Done!");
});

I couldn't run a demo account at nextcloud.com as their service wasn't working.

It seems that the event comes when the write stream finishes but the request is still running. I need to think of a way to fix that, as it's most likely been around as an issue for some time. This is related to #147. You might be right that it's related to #173, but I can't be sure.

The issue is that the stream seems to drain and emit end, but the request is still ongoing. Ideally the stream would only fire the end event once the request has finished.

@artturimatias (Author)

Thanks for the example. I tested with your code and I still got a zero-length file.
If I run with debug (DEBUG=*), I can see the options for the PUT request, which seem valid:

options { protocol: 'https:',
  maxRedirects: 21,
  maxBodyLength: 10485760,
  path: '/remote.php/webdav/vepa.png',
  method: 'PUT',
  headers:
   { Accept: 'application/json, text/plain, */*',
     'Content-Type': 'application/x-www-form-urlencoded',
     ....

@perry-mitchell (Owner) commented Nov 14, 2019

Yes, in your example the Content-Type is wrong. It should probably be application/octet-stream; I can't imagine why it's being set like that.

webdav-client doesn't set a Content-Type, so it must be getting set by some default. You could modify that area of the code to add the header Content-Type: application/octet-stream and see if that helps.
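
A sketch of that override, assuming a client version whose createWriteStream accepts an options object with headers (later comments in this thread use exactly that shape):

const writeStream = client.createWriteStream("/webdav.jpg", {
    // override the default that axios would otherwise apply
    headers: { "Content-Type": "application/octet-stream" }
});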

@artturimatias (Author)

OK, there must be something odd in our Nextcloud installation. I tried with try.nextcloud.com and the upload worked (headers did not matter; I tried with modified headers and the default ones).

Sorry that I didn't test earlier with another instance. I need to check out how our installation differs from a stock installation.

Thanks for the help so far.

Here is the full code that I used for testing:

const fs = require("fs");
const { createClient } = require("webdav");
const endOfStream = require("end-of-stream");

const client = createClient("https://dummycloud.com/remote.php/webdav/", {
	username: "user",
	password: "pass"
});

const readStream = fs.createReadStream("./osc-vepa.png");
const writeStream = client.createWriteStream("/vepa.png");
readStream.pipe(writeStream);

endOfStream(writeStream, err => {
	if (err) {
		console.error(err);
		return;
	}
	console.log("Done!");
});

@armpogart commented Jan 27, 2020

@perry-mitchell I can confirm that Content-Type: application/x-www-form-urlencoded is set by default for all non-JSON requests, and I've found out that it is simply the axios default (see the source code).

Also, changing to application/octet-stream doesn't help with my zero-length issue; I just wanted to let you know where the wrong content type is coming from. I guess this header needs to be set in your code.

Still investigating the zero-length issue.
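
A minimal reproduction of the override at the axios level (the URL and file names are placeholders):

const axios = require("axios");
const fs = require("fs");

axios.put(
    "https://example.com/remote.php/webdav/file.jpg", // placeholder URL
    fs.createReadStream("./file.jpg"),
    {
        auth: { username: "user", password: "pass" },
        // without this, axios falls back to application/x-www-form-urlencoded
        headers: { "Content-Type": "application/octet-stream" }
    }
).then(res => console.log(res.status));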

@artturimatias (Author)

I just had confirmation that our Nextcloud has a load balancer in front of it, and that probably caused the stream breakage in my case.

@perry-mitchell (Owner)

We had another issue, #147, which might have played some part in this one. I know it was mentioned earlier, but it might be worth trying again now that it's fixed, as circumstances might have changed. The fix was released in 3.1.0.

@artturimatias (Author)

Tested with 3.2.0, but no change; it still results in zero-length files in our Nextcloud. But like I said, the problem might be the load balancer in our case.

@perry-mitchell (Owner)

@artturimatias Please let me know if you discover otherwise. It’s hard to get streaming working perfectly.

@Daniel-Pek

I've noticed the createWriteStream function only works for small files but completely fails for large files (say, over 10 MB). I've tested it on my local WebDAV server, so we can rule out the problem being caused by a load balancer.

I suspect the problem might be in the way axios is called in the fetch function. It seems axios has a problem when PUTting/POSTing a stream to a remote server.

@armpogart

@Daniel-Pek I have tested with a 1 KB text file and a ~100 KB JPEG image; both result in a 0 KB file on the WebDAV server. The only WebDAV server I'm working with is a bigcommerce.com WebDAV storage account, which might be behind a load balancer, so I won't rule out the load balancer causing the problem.

Not sure how I can debug this.

@armpogart

@perry-mitchell Another observation: client.putFileContents works OK on the same server. So even if it is something related to the load balancer, only streaming the file fails; using client.putFileContents with a buffer read via fs.readFile works fine.

Let me know if/how I can further debug the problem to give you additional information. This issue is a blocker for us.
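
For reference, a minimal sketch of the buffer-based workaround described above (file names are placeholders):

const fs = require("fs");

fs.readFile("./local.jpg", (err, buffer) => {
    if (err) return console.error(err);
    // upload the whole file as a single in-memory buffer
    client.putFileContents("/remote.jpg", buffer)
        .then(() => console.log("uploaded"))
        .catch(console.error);
});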

@eric-burel commented Aug 31, 2020

Hi guys, I can reproduce the issue with a local setup of Nextcloud too. No idea what happens at this point.

Edit: I can repro on try.nextcloud too. The code is rather simple; I am not sure what I have missed:

  addFileFromReadable = async (readable, fileDefinition) => {
    const webDavClient = await this.getClient();
    const filePath = getFilePath(fileDefinition);
    const writeStream = webDavClient.createWriteStream(filePath);
    // note: pipe() returns the destination stream, not a promise,
    // so this await resolves immediately without waiting for the upload
    await readable.pipe(writeStream);
    return fileDefinition;
  };

and everything is fine if I use fs.createWriteStream instead, so the readable is not faulty.

As pointed out by other users, the workaround is to transform the readable into a buffer and then use putFileContents instead (but I guess this uses more memory, as you load the whole file into your server's RAM; see the sketch below).
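
A sketch of that buffering approach, reusing the names from the snippet above (to be run inside an async function such as addFileFromReadable):

// collect the readable into memory (RAM cost grows with the file size),
// then upload the buffer in one go
const streamToBuffer = (readable) =>
    new Promise((resolve, reject) => {
        const chunks = [];
        readable.on("data", (chunk) => chunks.push(chunk));
        readable.on("error", reject);
        readable.on("end", () => resolve(Buffer.concat(chunks)));
    });

const buffer = await streamToBuffer(readable);
await webDavClient.putFileContents(filePath, buffer);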

Edit: btw, big thank you for this lib. I've just finished replacing Meteor-Files with Nextcloud using your package; except for this small issue, everything worked very smoothly.

@bennigraf (Contributor)

Hey there,

I also stumbled over this issue. After googling around a bit, I found people having issues with a missing Content-Length header on the request [1]. I tried to "hack" that header directly into createStream.js, which did indeed solve the issue for me.

There's currently no way to set the header via this package's API, so that's not a proper fix. I don't know if or when I might find the time to submit a proper fix by providing a way to set the headers; until then, maybe this helps someone or inspires someone else to create a PR ;-). I'm using putFileContents() for now, which works well enough for me.

I'm running the request against an up-to-date Nextcloud installation, which also runs behind an nginx load balancer that I cannot control (it's hosted at uberspace.de). I don't know if that has something to do with it.

Best,
Benjamin.

[1] i.e. axios/axios#961 (comment)

@perry-mitchell (Owner)

It's good to note that you can, at least as of v4.0.0-r01, upload a file stream using putFileContents instead of createWriteStream. That being said, I'm planning to add an easy way to provide a file size value to the createWriteStream method so that this header can be set.
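
A sketch of the stream variant, with placeholder paths:

const fs = require("fs");

// as of v4 (per the comment above), putFileContents accepts a readable stream
client.putFileContents("/large-file.bin", fs.createReadStream("./large-file.bin"))
    .then(() => console.log("upload finished"))
    .catch(console.error);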

@simatec commented Jan 17, 2021

Unfortunately I have the same problem: with client.createWriteStream I only get a file with 0 bytes.

We use WebDAV in a backup plugin for the smart home project ioBroker.

Currently I work around it via client.putFileContents. This works, but because of fs.readFileSync it is very expensive in RAM.

Is there a solution to the problem in the meantime?

Here is a link to the project:
https://github.com/simatec/ioBroker.backitup/blob/007247e6c18537c54de28dc5062a23c7466e8e70/lib/scripts/65-webdav.js#L23

@Daniel-Pek

Hey guys, I actually have a workaround for this issue that you might want to give a go (sketched below):

  1. Find a location on your WebDAV server where you want to upload your files, using the built-in function getFileUploadLink
  2. Use a Node.js stream to PUT the file to that location using the URL from step 1

This worked for me even for very large files.
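
A hedged sketch of those two steps (file names are placeholders; getFileUploadLink returns a URL with the credentials embedded):

const fs = require("fs");
const https = require("https");

// step 1: ask the client for the raw upload URL
const uploadLink = client.getFileUploadLink("/big-file.bin");

// step 2: PUT the file there with a plain Node stream,
// setting Content-Length from the local file size
const { size } = fs.statSync("./big-file.bin");
const req = https.request(new URL(uploadLink), {
    method: "PUT",
    headers: { "Content-Length": size }
}, (res) => console.log("status:", res.statusCode));

fs.createReadStream("./big-file.bin").pipe(req);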

@simatec commented Jan 18, 2021

Thanks for your tip. Do you have an example of this?

@perry-mitchell (Owner)

@simatec putFileContents takes a stream, so you don't need to use fs.readFileSync.

@simatec commented Jan 20, 2021

Unfortunately, an upload is not possible without fs.readFileSync. A file with 0 bytes is created here as well.

@eric-burel commented Jan 21, 2021

Hi @Daniel-Pek, what do you mean specifically by "Use a Node.js stream to PUT the file to that location using the URL from step 1"? This is a bit complex, and a piece of code would help greatly.

I don't get why it would work better than the code that already exists there: https://github.com/perry-mitchell/webdav-client/blob/master/source/operations/createStream.ts#L23

@sokraflex commented Feb 6, 2021

Some thoughts on 4.0.0, testing against Nextcloud: uploading files > 2 GB seems to be impossible at the moment.

  • putFileContents only works for files with a maximum size of 2 GB, no matter which value maxBodyLength has
  • putFileContents does not work when passing streams (e.g. client.putFileContents('filename', fs.createReadStream('filename'))), but works using fs.readFileSync
  • createWriteStream creates zero-length files (despite listening for the finish event)

My use case:

  • 2.5 TB of files uploaded using WebDAV
  • Some large files up to 10 GB, millions of small files < 5 KB (node_modules)
  • 1 GBit/s maximum bandwidth
  • 32 GB memory

The uploads are performed in parallel; not sure yet how this library handles that.

Will sporadically continue my debugging and update this post if I have new information.

@jmadotgg commented Mar 26, 2021

> (quoting @bennigraf's comment above) I found people having issues with a missing Content-Length header on the request. I tried to "hack" that header directly into createStream.js, which did indeed solve the issue for me. […]

That seems to work fine for now:

const fs = require("fs");
const endOfStream = require("end-of-stream");

function writeStream(localFilePath, remoteFilePath) {
    // stat the file first so the Content-Length header can be set
    fs.stat(localFilePath, (err, stats) => {
        if (err) return console.error(err);
        const readStream = fs.createReadStream(localFilePath);
        const writeStream = client.createWriteStream(remoteFilePath, {
            headers: { "Content-Length": stats.size },
        });
        readStream.pipe(writeStream);

        endOfStream(writeStream, (err) => {
            if (err) return console.error(err);
            console.log("Done!");
        });
    });
}

(Why doesn't it format correctly?) - Fixed 🙂

Thanks @bennigraf!

@perry-mitchell (Owner)

The putFileContents method supports overriding the content length header, so using this together with a stream should work. It's unfortunate that some servers require this header, as streaming is inherently performed without knowing the full file size beforehand in many cases.

I'll try to set aside some more testing time regarding uploading large 2GB+ files.
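
A sketch combining the two suggestions, assuming putFileContents accepts a contentLength option as described above (paths are placeholders):

const fs = require("fs");

const localPath = "./big-file.bin";
const { size } = fs.statSync(localPath);

client.putFileContents("/big-file.bin", fs.createReadStream(localPath), {
    contentLength: size // set the Content-Length header explicitly
})
    .then(() => console.log("Done!"))
    .catch(console.error);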

@KEMBL commented Aug 9, 2022

webdav: 4.10.0; WebDAV server: Apache 2.4.39, over HTTPS, with AuthType Digest and httpsAgent: new https.Agent({ rejectUnauthorized: false }); tested in a LAN (no firewalls, IDS, etc.)

Initially I also ran into the problem with createWriteStream(...) and zero-size files. But later I realized that during a streaming file upload I was creating another WebDAVClient at the same time, and that somehow garbled the transfer of the previous one.

So, instead of creating new clients for each operation on the same folder, I cached the first one and reused it, destroying it afterwards. It works fine and I do not modify headers.

I found this when I turned on forensic logging on the httpd side and saw two PUT requests for the same file, one of them with 0 content length. I cannot say how they intertwined, but caching solved the problem.

Also, if you call stat(...) right after writeStream.on('close', ...) you will probably get a false result, because the WebDAV server needs a short time (100 ms to 1 s) to make the file available.
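
A minimal sketch of that caching approach (URL and credentials are placeholders):

const { createClient } = require("webdav");

let cachedClient = null;
function getClient() {
    // reuse one client for all operations instead of
    // constructing a new one per upload
    if (!cachedClient) {
        cachedClient = createClient("https://webdav.example.local", {
            username: "user",
            password: "pass"
        });
    }
    return cachedClient;
}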

@dinhnghia

// note: createWriteStream returns a stream synchronously, so no await is needed;
// `file` here is assumed to be an upload-middleware file object with mimetype/size
const writeStream = client.createWriteStream(
      filePath,
      {
        'headers': {
          'content-type': file.mimetype,
          'content-length': file.size
        }
      }
);
