Testing organization #228
As for Docker - you can test some of the drivers locally (not all of them) if you like. This should drastically reduce storage usage (and make running tests faster). It's fine most of the time; Travis will catch a bug if it is database specific. I agree that using
You can achieve the same thing right now using
It's working this way for a reason. You can still enable a test only on some of the drivers (in case it is driver-specific functionality), but you have to do it explicitly. Most of the time a feature can be implemented the same way on each of the database engines. The current approach 'forces' the implementation of a feature onto all supported drivers, which makes the feature supported by every driver, not only the specific one actually used by the person implementing it. Without it, after some time we would have some features working only on specific DB engines (while all of them could be supported), or different implementations depending on the database driver (and a higher probability of bugs). PS: In the future the tests will also check that generated code doesn't require syncing the schema before using it for the first time. But this won't be implemented until typeorm schema sync issues are resolved.
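For illustration, such a check could generate the entities, open a connection with `synchronize` disabled, and assert that TypeORM's schema builder reports no pending queries. A rough sketch - the paths and env vars are placeholders, and the schema-builder call is internal API that may differ between TypeORM versions:

```ts
import { createConnection } from "typeorm";

// Hypothetical check: fails if the generated entities would still
// require a schema sync against the DB they were generated from.
async function assertNoPendingSync(): Promise<void> {
  const connection = await createConnection({
    type: "postgres",                    // placeholder - use the driver under test
    url: process.env.TEST_DB_URL,        // placeholder credential env var
    entities: ["output/entities/*.ts"],  // placeholder path to generated code
    synchronize: false,
  });
  try {
    // log() computes the SQL that synchronize would run, without running it.
    const pending = await connection.driver.createSchemaBuilder().log();
    if (pending.upQueries.length > 0) {
      throw new Error(
        `Generated entities are out of sync: ${pending.upQueries.length} pending queries`
      );
    }
  } finally {
    await connection.close();
  }
}
```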
The "reference" folder is supposed to contain the exact expected generated code, thus enabling this visual check.
True. However, it might be worth checking how the generator would handle such unsupported features.
The job of the generator is to take an existing DB and turn it into entities, not to turn entities into SQL statements. That said, I see your point about version differences...

In the setup above, this could be solved by an extra ts file in the config and schema folders that exports a function. The main testing script would pass it the TypeORM version, DB driver version and DB server version, and it would return a Promise telling the script whether to skip the current test. If absent, the test would be assumed to be unskippable with all drivers, regardless of any version constraints - the typical case.

Come to think of it, with such a file in place, each .tomg-config.json driver can be ignored, forcing tests to be engine and version agnostic by default. If a test by design produces slightly different output depending on a version and/or driver, a new config and reference can be created, each with the appropriate version constraints. If a certain schema is only supported on certain DB server versions, a ts file named after the schema will be required to clarify those version constraints, or else the schema will be assumed supported for all DB server versions.
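For illustration, such a file might export something like this (the file name, argument shape and return contract are all hypothetical):

```ts
// config/<schema-name>.ts - hypothetical naming convention

export interface VersionInfo {
  typeormVersion: string; // e.g. "0.2.18"
  driverName: string;     // e.g. "postgres"
  driverVersion: string;  // version of the Node driver package
  serverVersion: string;  // version reported by the DB server
}

// Resolves to true if the current test should be skipped under these versions.
export async function shouldSkip(info: VersionInfo): Promise<boolean> {
  // Example constraint: this schema uses a feature added in PostgreSQL 10.
  if (info.driverName === "postgres") {
    const major = parseInt(info.serverVersion.split(".")[0], 10);
    return major < 10;
  }
  return false;
}
```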
With the above tweaks, that too would be addressed. The config would specify the schema name, but no driver name, meaning a test will by default require the schema on all drivers. A contributor contributing a new schema would be forced to think about whether a new schema is even required for the test they're making, or whether editing all existing references and/or adding a backwards compatible option would be preferable. Though I guess the process can be streamlined by changing the schema folder even further, to
The entities to be synced can also serve as a form of driver independent reference (in contrast to the actual "reference" folder that is driver dependent by design).
In the clone of the repo I have, running
Sorry for the delay. Weird error message. It is working correctly on my machine (so it might be OS related). After some consideration: using the described test organization (raw SQL queries) might be a better approach for the future, especially once we are able to generate models defined by entity schemas. Like every approach, it has its ups and downs.
I've more than once been bitten in the ass by the way tests are done when doing PRs, and the fact that I can't really run them locally (because I don't have enough HDD space for a Docker VM, container and images...) doesn't help either.
I think it would be cleaner if the tests were reorganized in a way that makes it easier to alter them and add new cases, without relying on parsing files with testing utilities.
To this end, I propose the following.
Have a folder structure like the sketch below.
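For illustration, the layout could look roughly like this (names are hypothetical, pieced together from the folders discussed in this thread):

```
test/
├── config/
│   ├── <schema-name>.tomg-config.json   # generator config; driver field optional
│   └── <schema-name>.ts                 # optional version/driver skip check
├── schema/
│   └── <schema-name>.sql                # raw SQL that creates the test schema
└── reference/
    └── <schema-name>/
        └── <driver>/                    # expected generated entities, per driver
            └── Entity.ts
```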
The testing file would run each configuration and compare the output with the contents of the "reference" folder for that same combination. The config would be augmented with credentials from env variables, so that one doesn't have to alter all configs when things change. If the config specifies a driver, only that driver would be produced and expected in the reference folder. Otherwise, all enabled drivers for which there is a reference would run. If a reference is missing, there could be a warning, I guess.
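A minimal sketch of that loop, assuming a hypothetical `generateForConfig` helper that invokes the generator and returns the output directory (all names and env vars here are illustrative):

```ts
import * as fs from "fs";
import * as path from "path";

// Hypothetical helper: runs the generator against the live DB for one
// config + driver and resolves with the directory holding the output.
declare function generateForConfig(config: any, driver: string): Promise<string>;

async function runCase(configDir: string, referenceDir: string, name: string) {
  const config = JSON.parse(
    fs.readFileSync(path.join(configDir, `${name}.json`), "utf8")
  );
  // Credentials come from the environment, not from the checked-in config.
  config.user = process.env.TEST_DB_USER;         // placeholder env vars
  config.password = process.env.TEST_DB_PASSWORD;

  // A pinned driver restricts the test; otherwise every driver that has a
  // reference folder for this schema gets exercised.
  const drivers: string[] = config.driver
    ? [config.driver]
    : fs.readdirSync(path.join(referenceDir, name));

  for (const driver of drivers) {
    const expectedDir = path.join(referenceDir, name, driver);
    if (!fs.existsSync(expectedDir)) {
      console.warn(`No reference for ${name}/${driver}, skipping.`);
      continue;
    }
    const actualDir = await generateForConfig(config, driver);
    for (const file of fs.readdirSync(expectedDir)) {
      const expected = fs.readFileSync(path.join(expectedDir, file), "utf8");
      const actual = fs.readFileSync(path.join(actualDir, file), "utf8");
      if (expected !== actual) {
        throw new Error(`Mismatch in ${name}/${driver}/${file}`);
      }
    }
  }
}
```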
To minimize whitespace-created differences, "prettier" could be applied to both the references and the output.
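For example, normalizing both sides before the comparison (awaiting `format` keeps this working with both prettier v2, where it is synchronous, and v3, where it returns a Promise):

```ts
import * as prettier from "prettier";

// Format the reference file and the generated file the same way,
// so the diff only shows real differences, not whitespace churn.
async function normalize(source: string): Promise<string> {
  return await prettier.format(source, { parser: "typescript" });
}
```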
Creating the schemas in CI could happen when files are added/updated in the schemas folder (Travis has envs that help with that...), at which point the respective schema would be dropped if it exists and recreated. I guess the data files could then be cached for the next build (or the container saved as a separate layer... not sure what's more optimal for Docker and travis-ci...).
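Detecting which schema files changed could, for instance, lean on Travis's `TRAVIS_COMMIT_RANGE` env var (a sketch; the schema folder path is a placeholder):

```ts
import { execSync } from "child_process";

// List schema files touched by the commits in this build, so only those
// schemas get dropped and recreated.
function changedSchemas(): string[] {
  const range = process.env.TRAVIS_COMMIT_RANGE;
  if (!range) return []; // not on Travis, or the first build of a branch
  const out = execSync(`git diff --name-only ${range}`, { encoding: "utf8" });
  return out
    .split("\n")
    .filter((file) => file.startsWith("test/schema/")); // placeholder path
}
```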
The docker container would merely contain the installed DBs and set envs for credentials. When running locally, one would set envs for the DBs they have in an env file, similarly to now.
The key difference is that to run without a container, one can just run the schemas from the schema folder in their SQL editor (or perhaps via a dedicated "one time only" script that takes the env into account). When exploring a scenario not covered by existing schemas, a new one can be created containing a minimal schema showing the issue. Equally importantly, if there is no reference for a config and driver combo, that should not be considered a test failure, allowing contributors to only contribute tests for drivers they're actually working with.
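That one-time script could be as simple as feeding each .sql file to a connection built from the env (a sketch using typeorm; the driver, env var and path are placeholders):

```ts
import * as fs from "fs";
import * as path from "path";
import { createConnection } from "typeorm";

// One-time local setup: execute every schema file against the DB
// configured through the environment.
async function loadSchemas(): Promise<void> {
  const connection = await createConnection({
    type: "postgres",             // placeholder driver
    url: process.env.TEST_DB_URL, // placeholder credential env var
  });
  try {
    const schemaDir = "test/schema"; // placeholder path
    for (const file of fs.readdirSync(schemaDir)) {
      if (!file.endsWith(".sql")) continue;
      const sql = fs.readFileSync(path.join(schemaDir, file), "utf8");
      // Note: some drivers require splitting multi-statement files
      // into individual statements before executing them.
      await connection.query(sql);
    }
  } finally {
    await connection.close();
  }
}

loadSchemas().catch((err) => {
  console.error(err);
  process.exit(1);
});
```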