
Improving file catalogue #38

Open

plankp opened this issue Nov 19, 2015 · 15 comments

Comments

@plankp
Collaborator

plankp commented Nov 19, 2015

Currently, files must fit into a 256-char array. Smaller files can be handled with strTrim, but bigger files cannot be read in their entirety.
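For context, a minimal C sketch of the limitation described above, using standard C stdio as a stand-in for the kernel's actual file routines (the real function names aren't shown in this thread):

```c
/* Illustrative only: standard C stdio stands in for the kernel's
 * real file routines.  The whole file must fit in buf, so anything
 * past 256 bytes is silently lost. */
#include <stdio.h>

void cat_truncated(const char *path) {
    char buf[256];                            /* fixed-size buffer */
    FILE *f = fopen(path, "rb");
    if (!f)
        return;
    size_t n = fread(buf, 1, sizeof buf, f);  /* reads at most 256 bytes */
    fwrite(buf, 1, n, stdout);                /* rest of the file is never printed */
    fclose(f);
}
```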

@plankp plankp self-assigned this Nov 19, 2015
@plankp
Collaborator Author

plankp commented Nov 19, 2015

I just came up with a fix. Oops!

@plankp plankp closed this as completed in 756f0d0 Nov 19, 2015
plankp added a commit that referenced this issue Nov 19, 2015
@Barteks2x
Collaborator

Wouldn't it be better to read it in N-byte parts (for some constant N)?

@plankp plankp reopened this Nov 19, 2015
@plankp
Collaborator Author

plankp commented Nov 19, 2015

Yeah

@plankp
Collaborator Author

plankp commented Nov 19, 2015

But what happened before is that it would only read the first 256 bytes

@plankp
Collaborator Author

plankp commented Nov 19, 2015

I should have the kernel read 256 bytes per chunk

@plankp
Collaborator Author

plankp commented Nov 19, 2015

and multiple chunks per file

@plankp
Collaborator Author

plankp commented Nov 19, 2015

(Correct?)

@Barteks2x
Collaborator

I think so.
The current implementation allocates memory for the whole file. For small files that's good enough (and without multitasking there is no reason to ever cat bigger files), but when/if the OS gets real filesystem support, the current implementation will break.

@plankp
Collaborator Author

plankp commented Nov 19, 2015

OK, got it. So 256 chars per chunk?

@Barteks2x
Collaborator

For now, 256 should be OK. It can be changed if needed. If you really wanted, you could read byte by byte, but it would be much slower.
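To make the agreed approach concrete, here is a hedged sketch of chunked reading, again with standard C stdio standing in for the kernel's file API (CHUNK_SIZE and cat_chunked are illustrative assumptions, not the project's actual code):

```c
/* Hypothetical sketch of the chunked approach: read 256 bytes per
 * chunk, multiple chunks per file, until EOF.  Standard C stdio
 * stands in for the kernel's file API. */
#include <stdio.h>

#define CHUNK_SIZE 256  /* chunk size agreed on above; easy to change */

void cat_chunked(const char *path) {
    char chunk[CHUNK_SIZE];
    FILE *f = fopen(path, "rb");
    if (!f)
        return;
    size_t n;
    while ((n = fread(chunk, 1, sizeof chunk, f)) > 0)
        fwrite(chunk, 1, n, stdout);  /* handle one chunk, then fetch the next */
    fclose(f);
}
```

With CHUNK_SIZE set to 1 this degenerates into the byte-by-byte variant mentioned above, doing the same work in 256 times as many calls, which is why it would be much slower.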

@plankp plankp removed their assignment Nov 19, 2015
@plankp
Collaborator Author

plankp commented Nov 19, 2015

I tried it my own way (for some reason), and a page fault appears!

@raphydaphy
Owner

):

@plankp
Collaborator Author

plankp commented Nov 20, 2015

I think there is a partial fix in Barteks2x's branch (not sure)

@raphydaphy raphydaphy assigned Barteks2x and unassigned plankp Nov 20, 2015
@raphydaphy
Owner

I'll look at his fork

@plankp
Collaborator Author

plankp commented Nov 20, 2015

k

@plankp plankp added the ready label Nov 21, 2015
@Pvanduyse Pvanduyse modified the milestones: File System, Writer Saving Nov 25, 2015
plankp added a commit that referenced this issue Dec 2, 2015
@raphydaphy raphydaphy removed the ready label Dec 10, 2015