Testing
- unit test (`*.unit.test.ts`): testing a unit of behaviour (not units of code)
- integration test (`*.test.ts`): testing multiple units of behaviour and how they work together
- smoke test (`*.smoke.test.ts`): testing a usage scenario
- test plan: what to test manually because it is not covered by the above types of tests
- Testing against external systems? Preference is (in order): live system, fake, stub, mock
- Test the outcome, not the implementation (i.e. if I were to refactor all private code, no tests should break)
- All code has a test at one of the levels outlined in the Terminology section
- Experiments have tests (you do not need to test all permutations of experiments being on/off, though)
Note: Unit tests are those in files with the extension `.unit.test.ts`.
- Make sure you have compiled all code (done automatically when using incremental building)
- Ensure you have disabled breaking on 'Uncaught Exceptions' when running the unit tests
- For the linters and formatters tests to pass, you will need to have the corresponding Python libraries installed locally
- Run the tests via the `Unit Tests` launch option.

You can also run them from the command line (after compiling):

```shell
npm run test:unittests                               # runs all unit tests
npm run test:unittests -- --grep='<NAME-OF-SUITE>'   # runs a single suite
```
To run only a specific test suite for unit tests, alter the `launch.json` file in the "Debug Unit Tests" section by setting the `grep` field:

```json
"args": [
    "--timeout=60000",
    "--grep", "<suite name>"
],
```

This will only run the suite with the tests you care about during a test run (be sure to set the debugger to run the `Debug Unit Tests` launcher).
Functional tests are those in files with the extension `.functional.test.ts`. These tests are similar to system tests in scope, but are run like unit tests.
You can run functional tests in a similar way to unit tests:

- via the `Functional Tests` launch option, or
- on the command line via `npm run test:functional`
Note: System tests are those in files with the extension `.test*.ts` which are neither `.functional.test.ts` nor `.unit.test.ts`.
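The naming convention above can be illustrated with a small classifier. This is a sketch for illustration only; the actual test runners select files through their own glob patterns, not this function.

```typescript
// Sketch: classify a test file by the naming convention described above.
function classifyTestFile(file: string): "unit" | "functional" | "system" | "not a test" {
    if (file.endsWith(".unit.test.ts")) {
        return "unit";
    }
    if (file.endsWith(".functional.test.ts")) {
        return "functional";
    }
    // Any other file matching .test*.ts is treated as a system test.
    if (/\.test.*\.ts$/.test(file)) {
        return "system";
    }
    return "not a test";
}

console.log(classifyTestFile("common.unit.test.ts"));         // unit
console.log(classifyTestFile("debugger.functional.test.ts")); // functional
console.log(classifyTestFile("extension.sort.test.ts"));      // system
```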
- Make sure you have compiled all code (done automatically when using incremental building)
- Ensure you have disabled breaking on 'Uncaught Exceptions' when running the unit tests
- For the linters and formatters tests to pass, you will need to have the corresponding Python libraries installed locally, using the `./requirements.txt` and `build/test-requirements.txt` files
- Run the tests via `npm run` or the debugger launch options (you can "Start Without Debugging")
- Note that you will be running tests under the default Python interpreter for the system
You can also run the tests from the command line (after compiling):

```shell
npm run testSingleWorkspace   # will launch the VS Code UI
npm run testMultiWorkspace    # will launch the VS Code UI
```
If you want to change which tests are run or which version of Python is used, you can do this by setting environment variables. The same variables work when running from the command line or launching from within VSCode, though the mechanism used to specify them changes a little.
- Setting `CI_PYTHON_PATH` lets you change the version of Python the tests are executed with
- Setting `VSC_PYTHON_CI_TEST_GREP` lets you filter the tests by name
CI_PYTHON_PATH
In some tests a Python executable is actually run. The default executable is `python` (for now). Unless you've run the tests inside a virtual environment, this will almost always mean Python 2 is used, which probably isn't what you want.
By setting the `CI_PYTHON_PATH` environment variable you can control the exact Python executable that gets used. If the executable you specify isn't on `$PATH`, be sure to use an absolute path. This is also the mechanism for testing against other versions of Python.
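The lookup described above can be sketched as follows. The variable name comes from this page, but the fallback logic is an assumption about how a test harness might resolve the interpreter; `resolveTestPython` is a hypothetical helper, not code from the extension.

```typescript
// Sketch: prefer CI_PYTHON_PATH when set, otherwise fall back to the
// bare "python" command found on $PATH (the documented default).
function resolveTestPython(env: Record<string, string | undefined>): string {
    const fromEnv = env.CI_PYTHON_PATH;
    return fromEnv && fromEnv.trim() !== "" ? fromEnv : "python";
}

console.log(resolveTestPython({}));                                      // python
console.log(resolveTestPython({ CI_PYTHON_PATH: "/usr/bin/python3" }));  // /usr/bin/python3
```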
VSC_PYTHON_CI_TEST_GREP
This environment variable lets you provide a regular expression which is matched against suite and test names to decide which tests run. By default all tests are run.

For example, to run only the tests in the `Sorting` suite (from `src/test/format/extension.sort.test.ts`), you would set the value to `Sorting`. To run the `ProcessService` and `ProcessService Observable` tests which relate to `stderr` handling, you might use the value `ProcessService.*stderr`. Be sure to escape any regular-expression metacharacters that appear in your suite name.
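A rough sketch of how such a pattern selects tests, assuming the value is compiled to a `RegExp` and matched against the combined suite-and-test title (which is how Mocha's `--grep` option behaves); the titles here are made up for illustration:

```typescript
// Full titles in the form "<suite name> <test name>".
const titles: string[] = [
    "Sorting should sort imports",
    "ProcessService exec should capture stderr",
    "ProcessService Observable should stream stderr",
    "ProcessService exec should capture stdout",
];

// The VSC_PYTHON_CI_TEST_GREP value, treated as a regular expression.
const grep = new RegExp("ProcessService.*stderr");

const selected = titles.filter((title) => grep.test(title));
console.log(selected);
// Both ProcessService stderr titles match; the stdout test and the
// Sorting suite are filtered out.
```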
In some rare cases in the "system" tests the `VSC_PYTHON_CI_TEST_GREP` environment variable is ignored. If that happens, you will need to temporarily modify the `const grep =` line in `src/test/index.ts`.
Launching from VS Code
In order to set environment variables when launching the tests from VS Code, you should edit the `launch.json` file. For example, you can add the following to the appropriate configuration to change the interpreter used during testing:

```json
"env": {
    "CI_PYTHON_PATH": "/absolute/path/to/interpreter/of/choice/python"
}
```
On the command line
The mechanism to set environment variables on the command line will vary based on your system; however, most systems support a syntax like the following for setting a single variable for a subprocess:

```shell
VSC_PYTHON_CI_TEST_GREP=Sorting npm run testSingleWorkspace
```
The extension has a number of scripts in `./pythonFiles`. Tests for these scripts are found in `./pythonFiles/tests`. To run those tests:

```shell
python2.7 pythonFiles/tests/run_all.py
python3 -m pythonFiles.tests
```

By default, functional tests are included. To exclude them:

```shell
python3 -m pythonFiles.tests --no-functional
```

To run only the functional tests:

```shell
python3 -m pythonFiles.tests --functional
```
Clone the repo into any directory, open that directory in VS Code, and use the `Extension` launch option within VS Code.
The easiest way to debug the Python Debugger (in our opinion) is to clone this git repo directory into your extensions directory. From there, use the `Extension + Debugger` launch option.
Once we code freeze for a release, we need to verify that everything is working appropriately. While automated tests are wonderful and help prevent regressions, manually verifying also helps in cases where a test might not be thorough enough or testing is simply too difficult to automate.
We use VS Code's release testing procedure during their endgame. This entails:

- Writing test plan items (TPIs) for large features (with the TPI having the `testplan-item` label and the issue it is for having the `on-testplan` label)
- Verifying any bugs fixed in this release
- Verifying simple features added in this release (with the `verification-needed` label)

What this means is that all of the issues included in the milestone should have a `bug`, `verification-needed`, or `on-testplan` label, with an accompanying TPI when closed.