
Improvements (suggestions) #48

Open
danielsoner opened this issue Nov 2, 2018 · 3 comments

Hi,
I've been working with Nemo for a while and really enjoy it. Here is some feedback:

  • JavaScript configuration over JSON — it would make it easier to do things dynamically and avoid repetition across JSON config files
  • The textEquals assertion shows only one of the actual and expected strings, not both, so failures are harder to debug
  • When I run tests in parallel by "file" and one test has it.only, I would like only that one test to run. Currently it runs all files
  • When running tests in parallel, a report file is created for each test file. It would be easier to open a single report file with all results
  • I use a lot of async/await, and when a test fails I usually can't see in the stack trace where it happened, which makes debugging much harder. I typically see this:
    [screenshot of a truncated stack trace]
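The JS-over-JSON point can be sketched as follows. This is a hypothetical config.js, assuming a plain CommonJS module is loaded as config — the property names here are illustrative, not Nemo's actual schema:

```javascript
// Hypothetical config.js: deriving two browser profiles from shared
// defaults and an environment variable — something a static JSON file
// cannot express without copy-pasting the same values.
const baseUrl = process.env.TEST_BASE_URL || 'https://example.com';

// Shared browser defaults, reused across profiles instead of duplicated.
const browserDefaults = {
  timeout: 10000,
  window: { width: 1280, height: 800 }
};

const config = {
  chrome: { ...browserDefaults, browserName: 'chrome', baseUrl },
  firefox: { ...browserDefaults, browserName: 'firefox', baseUrl }
};

module.exports = config;
```

Changing the shared defaults or the environment variable updates every profile at once, which is the repetition the JSON format forces you to maintain by hand.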

ohpyupi commented Nov 2, 2018

+1 to JavaScript config over JSON

grawk (Member) commented Nov 8, 2018

Regarding JS config: please see #49. Feel free to install 4.9.0-alpha.1 and let me know if you see any issues.

grawk (Member) commented Nov 8, 2018

Taking each of these in turn:

  • JavaScript configuration over JSON, making it easier to do things dynamically and avoid repetition across JSON config files
  • The textEquals assertion shows only one of the actual and expected strings, so failures are harder to debug
    • textEquals is a WebDriver method, so this would probably have to be resolved upstream in the Selenium project
  • When running tests in parallel by "file" and one test has it.only, only that one test should run; currently all files run
    • It isn't clear how to resolve this, since the parallel processes are separate mocha instances. I recommend simply removing the -F option when you want to run a single test; that will work.
  • When running tests in parallel, a report file is created for each test file; a single report file with all results would be easier
    • This could potentially be resolved, but it would be a significant effort. It makes sense to file a new issue to discuss/track it.
  • Async/await failures often don't show in the stack trace where the error happened (see above for the screen grab)
    • This could potentially be resolved; investigation is needed. It makes sense to file a new issue to discuss/track it.
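On the async/await point, the usual root cause is that a rejected promise loses the awaiting frames in older Node/V8 versions, so the stack shows only the innermost function; newer V8 (Node 12+, where --async-stack-traces is on by default) keeps the awaiting frames. A minimal sketch of the failure mode — the function names are illustrative, not Nemo APIs:

```javascript
// Illustrative repro: an error thrown deep inside a chain of awaited calls.
// Without async stack traces, err.stack typically shows only clickLogin;
// with them, loginFlow and run appear too, pointing at the failing test line.
async function clickLogin() {
  throw new Error('element not found');
}

async function loginFlow() {
  await clickLogin();
}

async function run() {
  try {
    await loginFlow();
  } catch (err) {
    // In a real test runner this is where the reporter prints err.stack.
    return err.message;
  }
}
```

If upgrading Node isn't an option, wrapping awaits in try/catch near the test body and rethrowing with added context is a workable stopgap.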
