Interactive reads #439
Conversation
    [eE] (DIGITS | SIGNED_DIGITS);

fragment
DOUBLE_TAIL:
    [.eE] [-0-9.eE]*;
Something like this is required to properly throw a ParseException on `1.0e0.1234`, instead of just reading `1.0` and leaving the rest on the stream.
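For illustration, the kind of behavior this buys. This is a hedged sketch only: the `Reader.read` entry point and `ParseException` class are my assumptions about the project's public reader API, not code from this PR.

```java
// Hedged illustration; Reader.read and ParseException are assumed API names
// (e.g. convex.core.lang.Reader), not code from this PR.
import convex.core.exceptions.ParseException;
import convex.core.lang.Reader;

public class DoubleTailExample {
    public static void main(String[] args) {
        try {
            // not a valid double literal; with DOUBLE_TAIL the whole run is
            // consumed and rejected, instead of reading 1.0 and stopping
            Reader.read("1.0e0.1234");
        } catch (ParseException e) {
            System.out.println("rejected as expected: " + e.getMessage());
        }
    }
}
```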
@@ -148,7 +140,7 @@ QUOTING: '\'' | '`' | '~' | '~@';

 KEYWORD:
-    ':' NAME;
+    ':' NAME?;
Allows for a parse exception to be thrown on `:` input.
    p++;
    currentCharIndex++;
    //sync(1);
These two `consume` implementations are the same as in the `Unbuffered(Char|Token)Stream` classes that they extend, except without the `sync` call at the end, which consumes more characters/tokens from the input.
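A minimal sketch of that idea for the char-stream side (not the PR's actual class; the class name is illustrative, and the token-stream override is analogous against `UnbufferedTokenStream`):

```java
// Sketch: subclass ANTLR's UnbufferedCharStream and reimplement consume() with the
// same bookkeeping as the parent, but without the trailing sync(1) that reads ahead.
import java.io.Reader;

import org.antlr.v4.runtime.IntStream;
import org.antlr.v4.runtime.UnbufferedCharStream;

public class NonSyncingCharStream extends UnbufferedCharStream {

    public NonSyncingCharStream(Reader input) {
        super(input);
    }

    @Override
    public void consume() {
        if (LA(1) == IntStream.EOF) {
            throw new IllegalStateException("cannot consume EOF");
        }

        // same invariant maintenance as UnbufferedCharStream.consume()
        lastChar = data[p];
        if (p == n - 1 && numMarkers == 0) {
            n = 0;
            p = -1; // p++ below leaves this at 0
            lastCharBufferStart = lastChar;
        }

        p++;
        currentCharIndex++;
        // sync(1);  // intentionally omitted so no extra input is pulled in
    }
}
```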
    ParseTree tree = parser.form();

    r.unread(cs.LA(1));
Even after disabling the `sync`ing in the streams with our custom `consume` implementations below, there is a single additional character that is read from the stream when a form is parsed. I'm currently unsure of the exact mechanism that causes this, but I believe it involves `consume` calls in ANTLR's LexerATNSimulator class. It's possible that there's a better way to handle this there, but just unreading from the stream here seems to work well.
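Putting the pieces together, a hedged sketch of the single-form read flow as I understand it. `NonSyncingCharStream`/`NonSyncingTokenStream` stand in for the stream subclasses with the `consume` overrides, and `ConvexLexer`/`ConvexParser` are assumed to be the grammar's generated classes; the EOF guard around `unread` is my addition for the sketch.

```java
// Hedged sketch, not the PR's exact code; class names are illustrative/assumed.
import java.io.IOException;
import java.io.PushbackReader;
import java.io.StringReader;

import org.antlr.v4.runtime.CommonTokenFactory;
import org.antlr.v4.runtime.IntStream;
import org.antlr.v4.runtime.TokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

public class ReadOneSketch {

    static ParseTree readOne(PushbackReader r) throws IOException {
        NonSyncingCharStream cs = new NonSyncingCharStream(r);
        ConvexLexer lexer = new ConvexLexer(cs);
        // ANTLR recommends a copying token factory with unbuffered char streams
        lexer.setTokenFactory(new CommonTokenFactory(true));
        TokenStream tokens = new NonSyncingTokenStream(lexer);
        ConvexParser parser = new ConvexParser(tokens);

        ParseTree tree = parser.form();   // parse exactly one form

        // one extra character has already been pulled from the reader; push it
        // back so the next readOne starts in the right place (EOF guard is mine)
        int la = cs.LA(1);
        if (la != IntStream.EOF) {
            r.unread(la);
        }
        return tree;
    }

    public static void main(String[] args) throws IOException {
        PushbackReader r = new PushbackReader(new StringReader("(+ 1 2) :foo"));
        System.out.println(readOne(r).toStringTree()); // consumes only (+ 1 2)
        System.out.println(readOne(r).toStringTree()); // then :foo on the next call
    }
}
```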
@@ -203,6 +195,6 @@ fragment
 COMMENT: ';' ~[\r\n]* ;

 TRASH
-    : ( WS | COMMENT ) -> channel(HIDDEN)
+    : ( WS | COMMENT ) -> skip
The built-in CommonTokenStream that is used for the other reads filters for tokens only from the default channel. This feature doesn't use that token stream and instead modifies UnbufferedTokenStream, which doesn't include channel filtering.
It would be possible to add that functionality back into the token stream we use, but since we are not doing anything with the hidden token channel, it should be fine to just skip these tokens instead.
This looks very good, happy to merge when you think it is ready. Are you on Discord @jjttjj, and if so what is your handle? Regarding EOF handling, I think some way to control this might be useful. I can see a couple of options:
Addresses #438
I had mainly wanted to be able to upgrade my Clojure REPL to a Convex REPL, and that has been working well for me. But there could be edge cases and implications of this change that I'm still missing. Any feedback is welcome. Below are a few things I'm unsure of:
EOF handling
One thing I'm unsure of is whether there should be better EOF handling with this feature. Should an EOF exception be thrown when a read is attempted on EOF? The following currently just throws a ParseException:
Should there be more options for EOF handling, as clojure.core/read allows?
Testing
For now the tests I added are fairly minimal, just checking for things I ran into during development. I have tried out using `readOne` in place of `read` for all the relevant test cases and verified that things work as expected. I could try to refactor the reader tests to do this in all the appropriate tests as well, or just copy several of the existing `read` tests into `readOne` versions.