Systematic test units for fit_transform() #2206
It might be that it is less dramatic than I portray it to be. In the coverage html file for …
Thanks for opening up a separate issue and detailing the underlying problem. Coverage is difficult with a modular framework, but this really should have been captured by the tests since it is core functionality. I have limited time currently (most of my time is spent answering issues and reviewing PRs) but might shift the balance in the future.
Hi, I would be interested in picking this up, if that is okay? I have experience with the pytest framework, which should help with this issue.
@PipaFlores @MaartenGr Started doing some investigations on what needs to be done. Running the command (for visibility: `pytest -vv tests/ --cov=bertopic --cov-report term-missing`) to see lines within the codebase that haven't been tested (see image below), we see that for the most part `_bertopic.py` is well tested. That being said, it is evident that quite a few lines don't have an equivalent test, primarily due to if/else logic that is in some cases nested.

If we focus on the `fit_transform()` method, notably lines 431-437, 446, 470-471, 479-488, 496, and 510, those areas aren't covered by an explicit test, but they can be covered through mocking and patching with realistic values (see here for a high-level introduction to mocking functions in pytest) and, for instance, testing that the correct arguments are passed in; a sketch of this approach follows below.

What I am proposing as success criteria is to improve the code coverage of `_bertopic.py` only, notably within the `fit_transform()` method. Is this a reasonable statement and within the spirit of the original issue? While I would like to say that we could use the PR for this issue to improve the coverage of the other files, I am aware of scope creep, so I will save that for another PR. Let me know if these success criteria are reasonable and, if so, I will put together an implementation 😊
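To make the proposal concrete, here is a minimal sketch of the mocking-and-patching idea, assuming `fit_transform()` calls an internal `_reduce_dimensionality` step with the document embeddings as its first argument; the method name, corpus, and assertions are illustrative and would need checking against the actual code in `_bertopic.py`:

```python
from unittest.mock import patch

import numpy as np
from bertopic import BERTopic


def test_fit_transform_reduces_dimensionality():
    docs = [f"a short document about topic {i % 3}" for i in range(50)]
    model = BERTopic()

    # Wrap the internal reduction step so the real pipeline still runs,
    # while recording how fit_transform() invoked it.
    with patch.object(
        model, "_reduce_dimensionality", wraps=model._reduce_dimensionality
    ) as mock_reduce:
        topics, probs = model.fit_transform(docs)

    mock_reduce.assert_called_once()
    # The first positional argument should be the document embeddings.
    embeddings = mock_reduce.call_args.args[0]
    assert isinstance(embeddings, np.ndarray)
    assert len(topics) == len(docs)
```

The same wrap-and-assert pattern could be repeated for the other internal steps behind the uncovered branches, swapping in realistic mocked return values where running the real component is too slow.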
Feature request
As identified in PR #2191, the current test units do not cover the process of fitting a model. In other words, the suite is not testing the implementation of `fit_transform()`. Consequently, different current, and future, features that are performed at the `fit_transform()` level are not tested in any systematic way. We realized this when debugging topic reduction by declaring an `nr_topics` for the model before fitting. However, this issue might involve all the core features, and most of the optional ones.

Currently, in conftest.py the tests define the models and `fit` them for further testing in the other units:
them for further testing in the other units.https://github.com/MaartenGr/BERTopic/blob/c3ec85dec8eeac704b30812dfed4ac8cd7d13561/tests/conftest.py#L50C1-L55C17
As such, some improvement is required for the tests to cover the `fit_transform()` method, the core of the library.

Motivation
This is required to systematically test the internal consistency of all features and the overall work pipeline.
Your contribution
I can't tackle this issue yet due to limited availability, since I will need to familiarize myself more with the pytest framework first. I will come back in the future to tackle it, but I leave the issue open as a reminder, and in case someone else is up for the challenge.