Replies: 2 comments 2 replies
-
13 million rows really is not that much. There's no way this library is taking an hour to parse that. This library should be able to handle 13 million rows in ~seconds on commodity hardware. (Unless each row is truly huge.) This feels to me more like a general Rust help question rather than something specific to this library. I'd suggest first making sure you're compiling and running in release mode (e.g. `cargo run --release`), since unoptimized debug builds are dramatically slower.
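For illustration only, here is a minimal sketch of a streaming read, assuming the library in question is the `csv` crate and a hypothetical input file `data.csv`. The key points are processing record by record (so memory stays flat) and building with optimizations:

```rust
use std::error::Error;
use std::process;

// Stream the file one record at a time; memory use stays flat regardless of file size.
fn run(path: &str) -> Result<u64, Box<dyn Error>> {
    let mut rdr = csv::Reader::from_path(path)?;
    let mut record = csv::StringRecord::new();
    let mut count: u64 = 0;
    // read_record reuses the same allocation for every row,
    // which avoids per-row allocations in the hot loop.
    while rdr.read_record(&mut record)? {
        // ... call your processing function on `record` here ...
        count += 1;
    }
    Ok(count)
}

fn main() {
    // Build and run with optimizations: `cargo run --release`
    match run("data.csv") {
        Ok(n) => println!("processed {} records", n),
        Err(err) => {
            eprintln!("error: {}", err);
            process::exit(1);
        }
    }
}
```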
-
Running it in release mode solved the problem, thanks. Sorry, I can't share any of the data, unfortunately. Yes, the rows are massive.
-
Hello there, good day!
I have a rather large CSV (about 13 million rows) that I need to run some processing on (just a function).
I'm using this library to parse it and then process it; however, it takes up to an hour to do all the processing. Is there any way to do this faster, like creating multiple readers that work on chunks of the file in parallel at the same time? Polars is not an option for me, as I run out of memory before reaching the middle of the file (I have about 16 GB of memory in my computer).
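As an illustrative sketch of the kind of pipeline the question describes: rather than multiple readers over chunks of the file, one streaming reader can fan the per-record work out to a thread pool. This example assumes the `csv` crate, adds the `rayon` crate (not mentioned in the thread), and uses a hypothetical `process` function standing in for the real per-row work:

```rust
use std::error::Error;

use rayon::prelude::*;

// Placeholder for the per-row processing function mentioned in the question.
fn process(record: &csv::StringRecord) -> usize {
    record.len()
}

fn main() -> Result<(), Box<dyn Error>> {
    let rdr = csv::Reader::from_path("data.csv")?;

    // A single reader streams records in order; par_bridge hands each record
    // to Rayon's thread pool, so only a small buffer of rows is in memory at
    // any time (unlike loading the whole file into a DataFrame).
    let total: usize = rdr
        .into_records()
        .par_bridge()
        .map(|result| {
            let record = result.expect("malformed CSV record");
            process(&record)
        })
        .sum();

    println!("total = {}", total);
    Ok(())
}
```

Note that parsing itself stays serial here; only the per-record processing is parallelized, which is usually where the time goes once the build is optimized.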