This repository has been archived by the owner on Oct 28, 2019. It is now read-only.

upload.dataset() can't deal with large data sets #97

Open
fboylu opened this issue Mar 18, 2016 · 4 comments

Comments

@fboylu

fboylu commented Mar 18, 2016

While using upload.dataset() on a data frame, the following error is returned:

Error: AzureML returns error code:
HTTP status code : 500
Maximum request length exceeded.
Traceback:

  1. upload.dataset(labeledfeatures, ws, name = "labeledfeatures")
  2. try_fetch(url, handle = h)
  3. validate_response(r)
  4. stop(msg, call. = FALSE)

I was able to upload a smaller subset of the data frame without issues.
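
Until this is fixed, one possible workaround (a sketch only; the chunk size, the piece-naming scheme, and uploading one dataset per piece are my assumptions) is to split the data frame into pieces that stay under the request size limit and upload each piece separately:

library(AzureML)

# Assumes workspace credentials are already configured
# (e.g. in ~/.azureml/settings.json)
ws <- workspace()

# Rows per piece; tune so each serialized piece stays under the payload limit
chunk_size <- 50000
pieces <- split(labeledfeatures,
                ceiling(seq_len(nrow(labeledfeatures)) / chunk_size))

# Upload each piece as its own dataset, e.g. labeledfeatures_part01, _part02, ...
for (i in seq_along(pieces)) {
  upload.dataset(pieces[[i]], ws,
                 name = sprintf("labeledfeatures_part%02d", i))
}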

@gilbertw

We need to switch to a chunked upload to handle large payloads. See the Python SDK change for reference: Azure-Samples/Azure-MachineLearning-ClientLibrary-Python@4204e2b
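
For reference, a minimal sketch (not the package's current implementation) of what a chunked upload could look like with httr: serialize the data frame to a temporary file and stream it with chunked transfer encoding instead of building the whole request body in memory. The endpoint URL, the auth header, and whether the service honours chunked requests are assumptions here.

library(httr)

upload_dataset_chunked <- function(dataset, upload_url, auth_token) {
  # Write the data frame to a temporary TSV file rather than holding the
  # serialized body in memory.
  tmp <- tempfile(fileext = ".tsv")
  on.exit(unlink(tmp), add = TRUE)
  write.table(dataset, tmp, sep = "\t", row.names = FALSE, quote = FALSE)

  # Stream the file from disk; the Transfer-Encoding header asks curl to send
  # the body in chunks instead of as one buffered payload.
  r <- PUT(
    upload_url,                                      # hypothetical endpoint
    add_headers(
      Authorization = paste("Bearer", auth_token),   # hypothetical auth scheme
      `Transfer-Encoding` = "chunked",
      `Content-Type` = "text/plain"
    ),
    body = upload_file(tmp, type = "text/plain")
  )
  stop_for_status(r)
  content(r)
}

The real fix would presumably live inside upload.dataset() / try_fetch() so callers keep the existing interface.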

@andrie changed the title from "upload.dataset() producing error" to "upload.dataset() can't deal with large data sets" on Mar 23, 2016
@stephlocke

Any ETA available on a resolution to this issue?

@andrie
Contributor

andrie commented Jan 29, 2017

Pull requests are welcome. I don't think anybody is actively working on this issue.

@stephlocke

OK.

There are no contributor guidelines or anything like that on this repo. Before PRs can be accepted, does this need a once-over for Microsoft compliance, or is it OK as-is?

http://open.microsoft.com/
