
Remove feed() #43

Closed
tjvr opened this issue Mar 29, 2017 · 1 comment

Comments

tjvr (Collaborator) commented Mar 29, 2017

Implementing a streaming API on top of a RegExp-based tokenizer is fraught with difficulty, as discussed in #36. It's not clear what to do when a token overlaps with a chunk boundary; should we:

  1. Require users to feed chunks that are already split on token boundaries.
  2. Buffer input until we get a regex match that doesn't extend to the end of the buffer, which introduces unfortunate restrictions on the lexer definition.
  3. Re-lex the entire input when we receive new data, and somehow notify the consumer to discard some (or all!) of the old tokens (ouch).
  4. Throw everything away and write our own RegExp implementation from scratch; then we can directly query the FSM to see if the node we're on can transition past the next character!

Only (1) seems like a workable solution. With the exception of (4) (!), the rest can be re-implemented on top of reset(). So I propose removing feed() and re-introducing remaining() (which would return buffer.slice(index)). That way Moo stays a fast lexer core, and people can extend it with hacky strategies as necessary. :-)
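
For illustration, here is a minimal sketch of the kind of hack that could live outside the core: a wrapper that re-lexes across chunk boundaries using reset() and holds back the last token of each chunk, since that token might be cut off by the boundary. The rule set and the feedChunk/endOfInput helpers are hypothetical, and this sketch doesn't use the proposed remaining(); it only relies on reset() and the lexer's iterator.

```js
const moo = require('moo')

// Hypothetical lexer definition, just for illustration.
const lexer = moo.compile({
  ws:     /[ \t]+/,
  number: /[0-9]+/,
  word:   /[a-z]+/,
  nl:     { match: /\n/, lineBreaks: true },
})

let leftover = ''

// Feed one chunk of a stream, emitting every token except the last one,
// which might straddle the chunk boundary and is held back instead.
function feedChunk(chunk, emit) {
  lexer.reset(leftover + chunk)
  const tokens = Array.from(lexer)
  const last = tokens.pop()
  leftover = last ? last.text : ''
  tokens.forEach(emit)
}

// Flush whatever is still buffered once the stream has ended.
function endOfInput(emit) {
  if (leftover) {
    lexer.reset(leftover)
    for (const token of lexer) emit(token)
    leftover = ''
  }
}
```

Holding back the final token is essentially (2) expressed in user code: it works for simple lexers but inherits the same restrictions, which is exactly why it belongs outside the core.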

tjvr (Collaborator, Author) commented Mar 29, 2017

@Hardmath123

tjvr added a commit that referenced this issue Apr 5, 2017
tjvr closed this as completed in #44 on Apr 9, 2017