Improving file catalogue #38
I just came up with a fix. Oops!
Wouldn't it be better to read it in N-byte parts? (for some constant N)
Yeah
But what happened before is that it would only read 256 bytes
I should have the kernel read 256 bytes per chunk
and multiple chunks per file
(Correct?)
I think so.
K, got it. So 256 chars per chunk?
For now 256 should be OK. It can be changed if needed. If you really wanted, you could read byte by byte, but it would be much slower.
I (for some reason) tried my way, and a page fault appeared!
):
I think there's a partial fix in Barteks2x's branch (dunno)
I'll look at his fork
k |
Currently files must fit into a 256-char array. Smaller files can be handled with
strTrim
but bigger files cannot be read in their entirety.
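The chunked-read approach discussed above (256 bytes per chunk, multiple chunks per file) could be sketched roughly like this. This is a hypothetical userland illustration using standard C stdio, not the kernel's actual read API; the names `read_whole_file` and `CHUNK_SIZE` are made up for the example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Per-read chunk size, as discussed in the thread. */
#define CHUNK_SIZE 256

/* Read an entire file by accumulating fixed-size chunks into a
 * growing buffer, instead of stopping at a single 256-byte read.
 * Returns a NUL-terminated buffer (caller frees), or NULL on error. */
char *read_whole_file(const char *path, size_t *out_len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    char *buf = NULL;
    size_t len = 0;
    char chunk[CHUNK_SIZE];
    size_t n;

    while ((n = fread(chunk, 1, CHUNK_SIZE, f)) > 0) {
        /* Grow the buffer by exactly the amount just read (+1 for NUL). */
        char *tmp = realloc(buf, len + n + 1);
        if (!tmp) { free(buf); fclose(f); return NULL; }
        buf = tmp;
        memcpy(buf + len, chunk, n);
        len += n;
    }
    fclose(f);
    if (buf) buf[len] = '\0';
    if (out_len) *out_len = len;
    return buf;
}
```

The key point is that the loop keeps issuing 256-byte reads until `fread` returns 0, so files larger than one chunk are read completely; reading byte by byte would work too, but as noted above it would be much slower.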