# Debug frontend tests
## Table of contents

- [Introduction](#introduction)
- [Find tests](#find-tests)
- [Isolate tests](#isolate-tests)
- [Verbose mode](#verbose-mode)
- [Downloading the combined-tests.spec.js file](#downloading-the-combined-testsspecjs-file)
- [Reproduce test execution order](#reproduce-test-execution-order)
- [Find stack elements](#find-stack-elements)
- [Run tests repeatedly on CI](#run-tests-repeatedly-on-ci)

## Introduction

Debugging frontend tests can be difficult because the stack traces are much less helpful than in the backend. There are two reasons for this:

- Because so much of our frontend code is asynchronous, the stack trace will often just point to the code that executes a queue of jobs, when what you usually want to know is which code submitted the job.
- Jasmine, our testing framework, does not support showing accurate line numbers in stack traces. We looked into fixing this in #9648 but abandoned the idea once it was clear that Jasmine lacked the functionality we needed.

Here's an example frontend test stack trace:

```text
Chrome Headless 83.0.4103.116 (Mac OS 10.15.5) Assets Backend API Service on dev mode should handle rejection when fetching a file fails FAILED
	Error: Automatic conversion to Blob is not supported for response type.
	    at _toBlob (core/templates/combined-tests.spec.js:92825:11)
	    at _maybeConvertBody (core/templates/combined-tests.spec.js:92870:20)
	    at TestRequest../node_modules/@angular/common/fesm5/http/testing.js.TestRequest.flush (core/templates/combined-tests.spec.js:92745:16)
	    at UserContext.<anonymous> (core/templates/combined-tests.spec.js:61955:17)
	    at ZoneDelegate../node_modules/zone.js/dist/zone.js.ZoneDelegate.invoke (core/templates/pages/about-page/about-page.module.js:237519:30)
	    at ProxyZoneSpec../node_modules/zone.js/dist/proxy.js.ProxyZoneSpec.onInvoke (core/templates/combined-tests.spec.js:427781:43)
	    at ZoneDelegate../node_modules/zone.js/dist/zone.js.ZoneDelegate.invoke (core/templates/pages/about-page/about-page.module.js:237518:36)
	    at Zone../node_modules/zone.js/dist/zone.js.Zone.run (core/templates/pages/about-page/about-page.module.js:237276:47)
	    at runInTestZone (core/templates/combined-tests.spec.js:427336:38)
	    at UserContext.<anonymous> (core/templates/combined-tests.spec.js:427351:24)
Chrome Headless 83.0.4103.116 (Mac OS 10.15.5): Executed 1 of 2277 (1 FAILED) (0 secs / 0.046 secs)
```

The zone.js stack elements relate to the zone.js package that handles our asynchronous operations. Notice that the line numbers are unreasonably large. This is because, to run our tests, we load all our testing code and dependencies into combined-tests.spec.js; the line numbers refer to the resulting "mega-file," which doesn't actually exist on the file system.

Below, we'll discuss some strategies you can use to overcome these difficulties. We suggest you follow a workflow like this:

1. Find the code for the failing test.
2. Run the test in isolation.
3. Debug by printing out debugging information, investigating the code referenced by the stack trace, and/or running the tests many times on CI.

## Find tests

You can use test names to find test code files, but you can't just search for the whole name; you have to understand how test names are constructed. For example, here's the kind of test structure that produced the test name "Assets Backend API Service on dev mode should handle rejection when fetching a file fails" above:

```js
describe('Assets Backend API Service', () => {
  describe('on dev mode', () => {
    it('should handle rejection when fetching a file fails', () => {
      ...
    });
  });
});
```

The strings from the describe and it functions get concatenated with space (` `) delimiters to create the test name. The name of the it block usually starts with should, so we recommend searching the codebase for the portion of the test name from should to the end (e.g. "should handle rejection when fetching a file fails" in this case).

## Isolate tests

Once you have identified which test is failing, you often want to run that test in isolation next. This lets you get results quickly as you try other debugging strategies. To do this, pass the --specs_to_run flag to the frontend test script; it takes a comma-delimited list of frontend test files and runs only those tests.
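For example, running a single spec file might look like this (the file name below is illustrative; substitute the spec file for your failing test):

```
python -m scripts.run_frontend_tests --specs_to_run=assets-backend-api.service.spec.ts
```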

An alternative is to change the it() of your test to fit(). You can do this for multiple tests to mark them all for running. Then when you run the frontend tests, only the tests marked with fit() will run. Note that fdescribe() works the same way for describe() blocks.
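For example, focusing the failing test from the stack trace above would look like this:

```js
describe('Assets Backend API Service', () => {
  describe('on dev mode', () => {
    // fit() marks this test as focused: when any focused tests exist,
    // Jasmine runs only those tests and skips the rest.
    fit('should handle rejection when fetching a file fails', () => {
      // ... test body unchanged ...
    });
  });
});
```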

Once you've specified which tests you want to run, you should make sure you can still reproduce the bug you are investigating. Some bugs are caused by how tests interact with each other, and in these cases you won't be able to reproduce the bug by running the failing test in isolation. To see how this might happen, consider the following test code:

```js
describe('oppia', () => {
  var test = 2;

  it('should do something', () => {
    test = 3;
  });

  it('should do something else', () => {
    expect(test).toBe(2);
  });
});
```

These tests will fail if "should do something" runs before "should do something else", but not if the tests run in the reverse order. Our frontend tests run in a non-deterministic order, so these kinds of problems generally appear as intermittent failures: the tests sometimes pass and sometimes fail, even though the underlying code is the same.

The tests should work correctly regardless of what order they run in. If you find tests whose results depend on the order in which they run, those tests are incorrect, and you should fix them just like you would fix a failing test.
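One common fix is to reset any shared state in a beforeEach() block, which Jasmine runs before every test. Here's a minimal sketch of how the example above could be repaired:

```js
describe('oppia', () => {
  var test;

  // Jasmine runs this before every it() block, so each test starts
  // from the same state regardless of execution order.
  beforeEach(() => {
    test = 2;
  });

  it('should do something', () => {
    test = 3;
  });

  it('should do something else', () => {
    expect(test).toBe(2);
  });
});
```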

## Verbose mode

By default, we suppress the tests' log output to the terminal when running the frontend tests. This keeps the test output clean so it's easy to find which tests failed. However, this output can be useful for debugging: if you add console.log() or console.error() statements to your tests, their output will be suppressed too. To view it, pass --verbose when you run the frontend tests:

Python:

```
python -m scripts.run_frontend_tests --verbose
```

Docker:

```
make run_tests.frontend PYTHON_ARGS="--verbose"
```

Only use --verbose when you are running a few tests in isolation. Otherwise, you will be swamped with way too many log messages.

Note that depending on your situation, console.log() or console.error() might be preferable:

- console.log() statements will cause the linter to fail, so they are a great choice for local debugging: if you forget to remove them before pushing your changes, the linter will remind you.
- console.error() statements do not cause a lint failure, so they work well when you are pushing code that contains debugging statements to a PR to let the tests run in the CI environment.
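For example, a temporary debugging statement that you could safely push to a PR might look like this (the logged variable is purely illustrative):

```js
it('should handle rejection when fetching a file fails', () => {
  // Temporary debugging output. console.error() passes the linter,
  // so this can run on CI; remember to remove it before merging.
  console.error('Fetching file: ' + filename);  // 'filename' is illustrative.
  // ... rest of the test ...
});
```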

## Downloading the combined-tests.spec.js file

By default, the combined-tests.spec.js file is deleted after the frontend tests run, since it is a large file that combines all the frontend test files. However, because the stack traces for frontend test failures report line numbers with respect to combined-tests.spec.js, it can be useful to keep the file so you can open it in a text editor and look up those lines. To do this, pass the --download_combined_frontend_spec_file flag:

Python:

```
python -m scripts.run_frontend_tests --download_combined_frontend_spec_file
```

Docker:

```
make run_tests.frontend PYTHON_ARGS="--download_combined_frontend_spec_file"
```

The combined-tests.spec.js file will be downloaded to the karma_coverage_reports directory.

## Reproduce test execution order

Frontend tests are executed in a random order. This is done to ensure that the tests do not depend on each other. However, it also makes order-dependent failures hard to reproduce. To re-run the tests in the same order, use the seed value that is printed when the tests run. For example, here is some sample output from the frontend tests:

```text
Running test in production environment
Building third party libs at third_party/generated/
Minifying and creating sourcemap for third_party/generated/js/third_party.js
Seed for Frontend Test Execution Order 640
```

In this case, the seed value is 640. To run the tests in the same order as above, change the value of jasmineSeed in the karma.conf.ts file to 640 and then run the frontend tests.
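For reference, the karma-jasmine adapter reads the seed from Karma's client options. A minimal sketch of the relevant configuration is below; the exact structure of Oppia's karma.conf.ts may differ, but the jasmineSeed value feeds into a setting like this:

```ts
// Sketch only: pinning the Jasmine seed in a Karma config.
module.exports = function(config) {
  config.set({
    client: {
      jasmine: {
        random: true,  // Keep the random ordering...
        seed: '640'    // ...but seed it so the order is reproducible.
      }
    }
  });
};
```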

## Find stack elements

While you can't use the line numbers in a stack trace to find the associated code, you can sometimes use the function or class names. For example, in the stack trace above we could try searching the codebase for functions named _maybeConvertBody. The success of this technique depends on how frequently we define functions with the name you search for. For example, we define lots of constructor() functions, so if you search for constructor(), you'll have a hard time figuring out which function is the one appearing in the stack trace. You will have better luck searching for a function whose name is less common.
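For example, you could search for the function name with git grep. Note that git grep only searches tracked files; since the @angular frames in the trace show that _maybeConvertBody comes from a dependency, you would need plain grep over node_modules to find its definition:

```
# Search the Oppia codebase for the function name.
git grep -n '_maybeConvertBody'

# For functions defined in dependencies, search node_modules directly.
grep -rn '_maybeConvertBody' node_modules/@angular
```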

## Run tests repeatedly on CI

Especially when debugging frontend tests that only fail intermittently, you may want to set your tests to run many times on CI. For example, you might add some debugging code and then rerun your frontend tests on CI until the failure occurs. You can update the frontend test CI workflow to enable this. In .github/workflows/frontend_tests.yml, change the step that runs the tests to:

```yaml
run: for run in {1..N}; do {{the normal command to run the tests}}; done
```

Here N is the number of times you want the tests to run.

For example, to run the tests 20 times, you could do this:

```yaml
run: for run in {1..20}; do PYTHONIOENCODING=utf-8 python -m scripts.run_frontend_tests --run_minified_tests --skip_install --check_coverage; done
```

To limit the load on oppia/oppia CI runners, please only run tests repeatedly on PRs opened against branches of your own fork. For example, you will probably want to open a PR that merges your feature branch into your fork's develop branch, not the oppia/oppia develop branch.