feat(diagnostic): integrate annotate-snippets crate #91
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master      #91      +/-   ##
==========================================
+ Coverage   78.86%   81.67%   +2.80%
==========================================
  Files         141      141
  Lines        8754     9021     +267
==========================================
+ Hits         6904     7368     +464
+ Misses       1850     1653     -197
```

Continue to review the full report at Codecov.
Hey @legendiguess! Thanks for adding the builders!
The only thing currently missing for me is tests that verify the snippets are actually created correctly. I've seen a lot of test screenshots float by on Discord; maybe you can turn them into actual regression tests?
I would like to see tests that, given an error, check whether the output matches what you would expect. You can use the snapshot crate to create tests like that. We use it a lot throughout the codebase.
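A minimal sketch of what such a snapshot regression test could look like, assuming the comment refers to the `insta` crate (a common Rust snapshot-testing library). The `render_diagnostics` helper is a hypothetical stand-in for whatever function in the codebase compiles a test program and returns the rendered diagnostic text:

```rust
#[cfg(test)]
mod tests {
    // Hypothetical stand-in: the real implementation would invoke the
    // compiler on `source` and format its diagnostics via the
    // annotate-snippets integration.
    fn render_diagnostics(source: &str) -> String {
        format!("error: dummy diagnostic for a {}-byte input", source.len())
    }

    #[test]
    fn duplicate_definition_error() {
        let output = render_diagnostics("fn main() {}\nfn main() {}\n");
        // On the first run, `cargo insta review` records this output as
        // the expected snapshot; later runs fail if the rendered
        // diagnostic ever changes.
        insta::assert_snapshot!(output);
    }
}
```

This way each of the test programs becomes a regression test: any accidental change to the diagnostic formatting shows up as a snapshot diff during review.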
Nice @legendiguess!
I added some comments here and there, but the main thing I'm missing is tests for all the different types of errors. I know you already created test programs for each of them; it would be great to add those as tests!
Overall, a very good PR. You improved test coverage a lot, which is important to verify that your code works! 👍
Some comments for next time:
- You put a lot of work into a single PR. As a result, it can seem that there is a lot "wrong" or "to fix" after a review, which can be a bit demotivating. It also makes the PR harder for us to review, because there is so much to look at.
- There is still a lot of code duplication. When writing code, try to find the right balance between duplication and readability. Sometimes you can write something very tersely, but then it is hard to understand what is going on. Other times there is a lot of duplication, which requires changes in many different places; if someone forgets one of them, it can result in errors. Both extremes can be dangerous, so try to find a good balance 😄 (see the sketch after this list).
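As a generic illustration of that balance (not code from this PR): the duplicated version below repeats the same construction logic at every call site, while the factored version keeps one small helper without hiding what happens.

```rust
// Generic illustration, not the PR's actual code.
struct Diagnostic {
    severity: &'static str,
    message: String,
}

// Duplicated: if the shape of `Diagnostic` ever changes, every such
// constructor must be edited, and it is easy to miss one.
fn undefined_variable(name: &str) -> Diagnostic {
    Diagnostic { severity: "error", message: format!("undefined variable `{name}`") }
}

fn undefined_function(name: &str) -> Diagnostic {
    Diagnostic { severity: "error", message: format!("undefined function `{name}`") }
}

// Factored: one helper owns the shared shape, and call sites like
// `error(format!("undefined variable `{name}`"))` stay short and
// readable. Pushing further (macros, trait tricks) could remove even
// more lines but would make the code harder to follow.
fn error(message: String) -> Diagnostic {
    Diagnostic { severity: "error", message }
}
```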
I also think smaller PRs will help you learn more, because you go through the cycle of programming -> review -> learn -> improve -> review -> finish more often. The tips you receive during each review will be fewer, making it easier to absorb and act on them.
Just needs a squash before this can be merged. I will do that.
Cleaned up history and rebased on …
Colors in Mun terminal output can be configured via the `MUN_TERMINAL_COLOR` environment variable. Possible values: `enable` (enable colors), `auto` (the default; automatically decide whether colors can be turned on), and `disable` (disable colors). The environment variable can be overridden with the `--color` command-line option.
Co-Authored-By: Bas Zalmstra <[email protected]>
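A minimal sketch of the resolution order the commit message describes (`--color` wins over `MUN_TERMINAL_COLOR`, which defaults to `auto`). The names and the `is_tty` parameter are illustrative assumptions, not the actual Mun implementation:

```rust
use std::env;

#[derive(Clone, Copy, PartialEq, Debug)]
enum ColorChoice {
    Enable,
    Auto,
    Disable,
}

fn parse_choice(value: &str) -> Option<ColorChoice> {
    match value {
        "enable" => Some(ColorChoice::Enable),
        "auto" => Some(ColorChoice::Auto),
        "disable" => Some(ColorChoice::Disable),
        _ => None, // unknown values fall through to the default
    }
}

// `cli_color` is the value of the `--color` option, if given; it takes
// precedence over the environment variable, which defaults to "auto".
fn resolve_color(cli_color: Option<&str>) -> ColorChoice {
    cli_color
        .and_then(parse_choice)
        .or_else(|| env::var("MUN_TERMINAL_COLOR").ok().as_deref().and_then(parse_choice))
        .unwrap_or(ColorChoice::Auto)
}

// "auto" only enables colors when stdout is a terminal; how `is_tty`
// is determined (e.g. via the `atty` crate) is left out of this sketch.
fn should_use_color(choice: ColorChoice, is_tty: bool) -> bool {
    match choice {
        ColorChoice::Enable => true,
        ColorChoice::Disable => false,
        ColorChoice::Auto => is_tty,
    }
}
```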
Removed changes to submodule hashes from the respective commits, because this PR should really not be touching those. This was probably committed erroneously as a result of failing to …