Is this project active? I have some ideas #53
Hey @asterite,
Thank you for the kind words (and for creating the crystal language)! ❤️
It is mostly in life-support mode but definitely in a working state; I am still using it myself. I am a bit dissatisfied with the time the compiler takes to perform the semantic analysis pass on a large codebase, even though it still works quite fine for small to medium scale, and I did not find a way to solve that, so I stopped working on the internals for the moment.
Not right now even though the code is fully commented, but I can make something up to explain the big picture for sure.
Looking forward to that! 💡
Oh, take your time! I'm in the same situation as you. I wanted to start working on this slowly, whenever I have time, no rush at all :-)
Yup! Given that it's going to be hard to change that, I think the idea would be to avoid doing a full semantic pass.
So here's the rough idea (most of these steps probably already work like that in crystalline):
So how does the semantic analysis work? Let's start with an example. Say you have this:

```crystal
def foo(x : Int32)
  x.| # `|` is the cursor's position
end
```

Here we'll notice that `x` is an `Int32`, so we can offer `Int32`'s methods as completions. But of course this is a very simple example. What about this?

```crystal
def foo(x)
  x.|
end
```

Well, here we can't know anything about `x`. What can we do? Just assume we don't know anything about `x`. Let's say the user continued typing:

```crystal
def foo(x)
  y = x.reverse
  y.|
end
```

Now what? Well, we'll assume that `x` is any type that defines a `reverse` method. Then the autocompletions for `y` come from the return types of those `reverse` methods. Of course, if in another part of that method there's a call to some other method on `x`, that call narrows the candidates down further. Here we got lucky because both candidate `reverse` definitions make their return type easy to determine. For instance, the definition of `Array#reverse` is:

```crystal
def reverse : Array(T)
  Array(T).new(size) { |i| @buffer[size - i - 1] }
end
```

We can see that the last expression returned is an `Array(T)`. Anything more complex and we just abort and assume we don't know anything about the returned type.

This is of course less accurate than doing a full semantic analysis, but at least it can never be slow (these heuristics are very fast.) Also, we could say that crystalline works better if you type your methods (both input and output types.) That's also helpful for documentation, for readers of the code, etc. So in the end crystalline could also encourage adding more explicit types to the code.

I think the only slowness could come from running the top-level pass, which might rely on some macro runs, etc. But that slowness would only appear the first time the entire project is loaded. All subsequent model updates could happen lazily in the background (and we use the existing, slightly outdated model in the meantime.)

What do you think about this approach?
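To make that heuristic concrete, here is a minimal, self-contained sketch of the idea, assuming a hypothetical `MethodIndex` populated from parsed `def`s; the names and structure are made up for illustration and are not Crystalline's actual code:

```crystal
# Toy index mapping type names to the methods they define and, when it is cheap
# to determine, each method's return type name.
class MethodIndex
  # type name => { method name => return type name (or nil when unknown) }
  getter types = Hash(String, Hash(String, String?)).new

  def add(type : String, method : String, return_type : String? = nil)
    (types[type] ||= Hash(String, String?).new)[method] = return_type
  end

  # Types that define every method we saw called on the variable.
  def candidates_for(called_methods : Array(String)) : Array(String)
    types.keys.select do |type|
      called_methods.all? { |m| types[type].has_key?(m) }
    end
  end

  # Union of the candidates' methods: what we offer after `var.`.
  def completions(called_methods : Array(String)) : Array(String)
    candidates_for(called_methods).flat_map { |t| types[t].keys }.uniq!.sort!
  end
end

index = MethodIndex.new
index.add "String", "reverse", "String"
index.add "String", "upcase", "String"
index.add "Array(T)", "reverse", "Array(T)"
index.add "Array(T)", "size", "Int32"

# `x.reverse` was seen somewhere in the method body, so `x` could be a String
# or an Array(T); the completion list is the union of both types' methods.
puts index.completions(["reverse"]) # => ["reverse", "size", "upcase"]
```

A real implementation would of course build the index from the parsed AST and intersect candidates across every call made on the variable, but the shape of the lookup could stay about this simple.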
I think that using it would be a tremendous improvement!
I fully agree, but we must be aware that since it will be decoupled from the Crystal compiler it will need to be maintained separately, and that some diagnostics will not match the compiler's exactly.
Totally 👍.
Autocompletion is one thing, but it would also be very important to implement a (limited) form of typechecking to be able to provide code diagnostics. I think the changes can be integrated quite easily into the existing codebase, but more on that later.

Today, Crystalline is basically a Language Server: a loop waiting for notifications and messages from a client and communicating through STDIN / STDOUT. A good chunk of the Language Server Protocol Specification has been implemented. The server delegates actions to a controller instance, which performs an action depending on whatever the client requests. When a file is opened, the LSP stores it in memory, which makes it possible to provide diagnostics without having to save the file. The in-memory representation gets updated as the user makes changes in the editor. Crystalline also tries to determine which files are part of the workspace.

So with that said, I will try to focus on the analysis part now. Here are some (tiny) additions (monkey patches) I made to the compiler:
Here is the method used to prepare the analysis on a workspace or a file. It is called at startup, whenever a file is saved or updated, and when hovering something. The compilation output is then reused when answering subsequent requests.
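As a rough sketch of what such a cached entry point could look like (all names here are hypothetical, and the compiler result is replaced by a plain string placeholder):

```crystal
require "set"

# Illustrative sketch only, not the actual Crystalline code: cache the expensive
# compilation result per entry file and recompute it only when the client
# reported a change or a save for that file.
class AnalysisCache(T)
  def initialize
    @results = Hash(String, T).new
    @dirty = Set(String).new
  end

  # Called when the editor notifies a change or a save.
  def mark_dirty(path : String)
    @dirty << path
  end

  # Returns the cached result, or recomputes it via the given block.
  def result_for(path : String)
    if @dirty.includes?(path) || !@results.has_key?(path)
      @results[path] = yield path # run the (expensive) analysis
      @dirty.delete(path)
    end
    @results[path]
  end
end

cache = AnalysisCache(String).new
cache.result_for("src/main.cr") { |p| "analysis of #{p}" } # computed
cache.result_for("src/main.cr") { |p| "analysis of #{p}" } # served from cache
```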
For hover, completion, and go-to-definition requests, additional processing is done: visitors crawl the AST and format the information sent back to the client.

Now what? So the good thing is that I think we can reuse most of the existing code.
I can make a dedicated branch and refactor the code to follow the logic you posted. Roughly the changes would be to:
And then fill the gaps later on. I'll try to work on this before the end of the week, if it's fine with you, to provide the branch and the groundwork 😄. Then we could iterate on the rest, but I'm afraid I would not be very helpful on writing the parser/typer since you have way more experience with this and it has been ages (12+ years) since I've worked on something similar. It's been an hour since I started writing this reply and I'm running out of time for now, but please do tell me if you need anything from my end or if something is not clear.
Warning: a lot of text ahead. But before you start reading it: please don't start working on any of this yet! 🙏 The reasons are:
I pushed a branch here: https://github.com/asterite/crystalline/tree/lightweight There's nothing much in that branch yet. It's just me experimenting a bit with the code, exploring some ideas. More on this later.
That's great to hear! 🙌
I've been thinking about this and I realized that we could actually reuse most of the semantic phase. That said, if we aren't using the same algorithm as the compiler but a weaker one, where types might not be fully correct or fully known, then reporting an error might sometimes be misleading. So another idea could be to have this lightweight mode just be an option you can choose in the settings. Something like "Type-checking mode":
That way we don't have to throw away all the wonderful work you've done, which is useful for small and medium projects, and even in large projects people could choose the heavy mode if they use autocompletion sparsely and when they do they'd want to know they get the right thing. Another idea: time how long it takes for the full semantic analysis. If we notice it's above some threshold we automatically switch to lightweight mode. Maybe that could be another mode ("Let Crystalline decide the mode based on the size of the project".)
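A sketch of how that setting and the automatic fallback might look; the mode names, the threshold, and the timing approach are all assumptions for illustration, not an existing Crystalline option:

```crystal
# Hypothetical configuration sketch: let the user pick a type-checking mode,
# with an "auto" mode that falls back to lightweight analysis when the full
# semantic pass is slower than some threshold.
enum TypeCheckingMode
  Heavy       # full compiler semantic pass (accurate, can be slow)
  Lightweight # heuristic analysis (fast, possibly incomplete)
  Auto        # let Crystalline decide based on measured timings
end

FULL_PASS_THRESHOLD = 2.seconds

def effective_mode(configured : TypeCheckingMode, last_full_pass : Time::Span?) : TypeCheckingMode
  return configured unless configured.auto?
  if last_full_pass && last_full_pass > FULL_PASS_THRESHOLD
    TypeCheckingMode::Lightweight
  else
    TypeCheckingMode::Heavy
  end
end

# Example: time one full pass, then decide.
elapsed = Time.measure { sleep 10.milliseconds } # stand-in for running the full semantic pass
puts effective_mode(TypeCheckingMode::Auto, elapsed) # => Heavy (fast enough here)
```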
What do you think about extracting this into a shard? When I had the idea of implementing a different LSP for Crystal, just the thought of implementing the protocol made me lazy about doing it, because it's work unrelated to the main logic of providing autocompletion, etc. There's also https://github.com/jemc/crystal-lsp, so there's always some duplicate effort. An LSP shard would help in this way:
If that's fine with you, I could extract such a shard, maybe put it in https://github.com/crystal-community, and then send a PR to use that in crystalline. BTW, I noticed your LSP protocol implementation has excellent comments that helped me understand some things without having to read about LSP. Thank you! 💙 (That's why I think the LSP shard should be based on your code.)
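For what it's worth, the base protocol such a shard would have to own is small but fiddly: each JSON-RPC message is framed by a `Content-Length` header, a blank line, and then the JSON body. A minimal reader sketch, not taken from either codebase:

```crystal
require "json"

# Minimal sketch of the LSP base-protocol framing: each message is
# "Content-Length: <n>\r\n" (+ optional headers), a blank line, then <n> bytes of JSON.
def read_lsp_message(io : IO) : JSON::Any?
  content_length = nil

  # Read headers until the blank line that separates them from the body.
  while line = io.gets(chomp: true)
    break if line.empty?
    if line.starts_with?("Content-Length:")
      content_length = line.split(":", 2)[1].strip.to_i
    end
  end

  return nil unless content_length

  body = Bytes.new(content_length)
  io.read_fully(body)
  JSON.parse(String.new(body))
end

# Example with an in-memory IO standing in for STDIN:
payload = %({"jsonrpc":"2.0","method":"initialized","params":{}})
io = IO::Memory.new("Content-Length: #{payload.bytesize}\r\n\r\n#{payload}")
message = read_lsp_message(io)
puts message.try(&.["method"]) # => initialized
```

Writing a response is the mirror image: serialize the JSON-RPC reply, prepend a `Content-Length: <bytesize>` header and a blank line, and flush.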
I saw this! I also saw the override in
This is great too!
I have some ideas about this which I'll detail later.
So it seems that one thing I was proposing was already implemented 🙌
This is excellent. I had no idea how to do this. Maybe launching a separate Thread isn't ideal (I think Thread is kind of a private API), but by this I mean that Crystal should provide a safe way to do it; it's just not there yet.
It seems so!
Oh, please don't (for the reasons I mentioned in the beginning.)
What do you think if we start by creating a branch where we can push things together and experiment? No need to use my branch's code as I removed a lot of your code and I'd actually want to keep the two modes. In that branch we can start by:
Then we can iterate on the rest, as you say (but not yet on the things you said.) I always like to do things in small steps, and these were my ideas:
What do you think? In the meantime I'll continue experimenting in my branch with the ideas above. So, again, please don't start working on anything 🙏 (or, if you want, you can create a branch for this and make me a contributor 😊, unless you prefer that I work on my existing branch and I can make you a contributor there; either is fine with me)
Actually I'm thinking that some of this can be done in completely separate steps
If it's okay with you I can work on those two things. I think once we have those in place it'll be easier to work on the lightweight semantic analysis. I'll open separate issues for those to keep the discussion focused, and also to be able to later link PRs to those issues. For the second point in particular I'll post some snippets where autocompletion (or everything else) should work but currently doesn't.
Adding a "lightweight" mode seems like the best solution, I'm fully aligned with the plan 👍.
Let's do that (plus this is exactly what I had in mind). I'll create a dedicated branch.
I'll discuss these topics directly in the issues you created. Thanks again ❤️!
I still haven't read the whole discussion, but I'd prefer that, when the type of `x` is unpredictable, we not return all methods or the methods of `Object`; instead, we could return only one item. Otherwise, it would be no different.
I strongly disagree (even though I don't think this is the right time to debate about this change anyway). In the example
It will, since in
Any update on this? Could the current progress be summarized here or on the forum?
As far as I know, the plan is clearly expressed in the comments above in this thread. This shard has no affiliation with the Crystal forum or the Crystal team; I am the only maintainer, and I clearly stated that I am focusing on other things right now. I believe that @asterite also has other concerns for the moment. There are sub-issues linked to this one that track the progress and that will get updated if we resume the work. On a side note, I understand that some people may express some impatience/frustration, but I would rather not be pinged when there is nothing meaningful to say. Please also note that:
I care about Crystal, so I care about the progress of Crystal's LSP support, because it is very important for newcomers (I don't need it myself), which is why I asked. But I sense some grumpiness in the answer above. Small communities attract fewer contributions, and missing features mean fewer people use them: a vicious circle.
I also care a lot about Crystal, and I think I have done my share of the work by providing quite a bunch of shards to the community.
I fail to see where the grumpiness is. I stated some facts that I felt were important to mention in this context.
Sorry for using the wrong word for what I meant. I just thought you might not want anyone asking about the progress; all of this work is for other people, and if someone wants these features they can always contribute to them. Anyway, I got my answer: the features in this thread will not be implemented soon. Thanks.
From a usability standpoint, I would make it default to some kind of dynamic heavy/light mode based on the performance of the system in use. I've had to mess around a lot with both pyright and jedi on the Python side of things, and I'm not happy: pyright has more functionality, but I can't use it at work because the code base is so large that it just falls on its face, so I have to use jedi. I think many of us also like to develop remotely on a home server with only 1/10 the performance of our main rig, which can mean that we want the same medium-sized project to use heavy mode when on a beefy setup and light mode when using a potato. It would be nice if this was abstracted away from new users and available for hard-coding for those who look for it. Just my 2 cents.
In the case of something like:

```crystal
def foo(x)
  x.|
end
```

how useful/useless would it be to consider other variables in the same class with the same name? I find that they typically have the same type, but that is anecdotal. However, if there's enough info in the surrounding code, maybe it could help. Also, would a way to pass "hints" be of value? For example:

```crystal
def foo(x) # crystalline-hint: x : Int32 | String | Nil
  x.|
end
```

That would tell crystalline that, if it can't figure out what `x` is, it should treat it as `Int32 | String | Nil`. As I'm neither an expert nor even a very well-informed observer, these are just ideas, and I'm equally excited to learn why they are terrible ideas as I would be if they were good ones.
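If that hint idea were ever explored, the comments could be recovered with a simple scan before falling back to the heuristics. A toy sketch of such parsing; the `crystalline-hint` convention is only the commenter's proposal here, not an existing feature:

```crystal
# Toy parser for the proposed hint comments, e.g.
#   def foo(x) # crystalline-hint: x : Int32 | String | Nil
# Returns a mapping of variable name => declared type expression.
HINT_PATTERN = /#\s*crystalline-hint:\s*(\w+)\s*:\s*(.+)$/

def extract_hints(source : String) : Hash(String, String)
  hints = Hash(String, String).new
  source.each_line do |line|
    if match = line.match(HINT_PATTERN)
      hints[match[1]] = match[2].strip
    end
  end
  hints
end

source = <<-CRYSTAL
  def foo(x) # crystalline-hint: x : Int32 | String | Nil
    x.size
  end
  CRYSTAL

puts extract_hints(source) # => {"x" => "Int32 | String | Nil"}
```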
Is it possible to adopt some of the ideas from rust-analyzer, in particular the extracted https://github.com/salsa-rs/salsa library? It is what they use for on-demand, incremental updates.
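For context, salsa's core idea is demand-driven, memoized queries that are invalidated by input revisions. A toy illustration of that idea in Crystal (not salsa's API, and not tied to any existing Crystalline code):

```crystal
# Toy revision-based memoization: every input change bumps a global revision;
# a query result is reused only if it was computed at the current revision.
class QueryDatabase
  def initialize
    @revision = 0
    @inputs = Hash(String, String).new
    @memo = Hash(String, {Int32, String}).new # query key => {revision, value}
  end

  def set_input(key : String, value : String)
    @inputs[key] = value
    @revision += 1 # invalidates every memoized query
  end

  def input(key : String) : String
    @inputs[key]
  end

  def query(key : String) : String
    if memo = @memo[key]?
      revision, value = memo
      return value if revision == @revision
    end
    value = yield self
    @memo[key] = {@revision, value}
    value
  end
end

db = QueryDatabase.new
db.set_input("file.cr", "def foo; 1; end")
first  = db.query("line_count") { |d| d.input("file.cr").lines.size.to_s } # computed
cached = db.query("line_count") { |d| d.input("file.cr").lines.size.to_s } # reused, same revision
puts({first, cached}) # => {"1", "1"}
```

A real system would track per-query dependencies instead of invalidating everything on any change, which is exactly the bookkeeping salsa takes care of.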
True. I was taking a look at Crystal, and seeing that editor support is very limited and that there is no official LSP keeps me from really using it at all, as it's such an integral feature for me.
Hi!
First of all, thank you so much for this project!
I have some ideas of how to provide autocompletion that wouldn't take that much time, although it might not be 100% accurate. I thought about creating a separate project but maybe it's better if I contribute to this one.
But before sharing my ideas, I'd like to understand a few things:
After that, I could share some ideas I have... if they are very different from how things are working now :-)
Thank you!