High memory usage - Suspected cleanup not done per testcase #569
Hi @gridhead. Each widget added via `qtbot.addWidget` is closed and deleted at the end of the test, so this growth is not expected behaviour. At work we have many test suites with over 3000 tests, with many of the tests (more than 50%) using `qtbot`, and we do not see anything like this there.
Ah, I do not claim it to be -- apologies if it sounded that way. I could use your assistance in understanding how I can lower it. From what I can tell, my fixture named `runner` does not seem to release its memory once each testcase is done.
It did to me, but no worries -- not offended at all, just pointing that out because chasing that direction would not be useful. I can show where this is handled in pytest-qt in case it helps. The cleanup is invoked from the plugin here:

pytest-qt/src/pytestqt/plugin.py Lines 198 to 205 in 691c7fb
pytest-qt/src/pytestqt/qtbot.py Lines 743 to 756 in 691c7fb
We track those widgets when `qtbot.addWidget` is called:

pytest-qt/src/pytestqt/qtbot.py Lines 734 to 740 in 691c7fb
My guess is that something in the application itself is keeping references to those widgets alive after each test finishes.
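As a rough mental model of the lifecycle those snippets describe (this is not pytest-qt's actual code; `WidgetTracker` and `FakeWidget` are invented stand-ins for illustration), the track-then-teardown pattern can be sketched in plain Python:

```python
import gc
import weakref


class FakeWidget:
    """Stand-in for a Qt widget; records that close() was called."""
    def __init__(self) -> None:
        self.closed = False

    def close(self) -> None:
        self.closed = True


class WidgetTracker:
    """Mimics how a qtbot-style helper tracks widgets added during a test."""
    def __init__(self) -> None:
        self._widgets: list[FakeWidget] = []

    def add_widget(self, widget: FakeWidget) -> None:
        # Corresponds to qtbot.addWidget(): remember the widget for teardown.
        self._widgets.append(widget)

    def teardown(self) -> None:
        # Corresponds to the end-of-test cleanup: close every tracked widget,
        # then drop the references so they can be garbage-collected.
        for widget in self._widgets:
            widget.close()
        self._widgets.clear()


tracker = WidgetTracker()
widget = FakeWidget()
tracker.add_widget(widget)
probe = weakref.ref(widget)

tracker.teardown()
del widget          # the test's own reference goes away when the test ends
gc.collect()
print(probe() is None)  # True: nothing keeps the widget alive anymore
```

If memory still grows under this scheme, the implication is that some *other* strong reference (outside the tracker and the test) is keeping the widgets alive.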
The snippets help us understand the deeper workings of pytest-qt, thanks.

We have tried some fixes, but it has mostly been shotgun debugging, as we were very confused about why things were not working as intended.

Try 1 - We tried to remove the destructor of the `MainWindow` class.

Try 2 - We tried to move the fixture named `runner` to a different location.

Try 3 - We tried to modify the fixture named `runner` like so:

```python
@pytest.fixture
def runner(qtbot):
    testwind = MainWindow()
    qtbot.addWidget(testwind)
    yield testwind
    testwind.close()
    testwind.deleteLater()
    del testwind
```

Try 4 - We tried commenting out the code that creates temporary data (to check if that was the cause of this problem) and the static resources (ref. https://github.com/gridhead/gi-loadouts/tree/test/gi_loadouts/face/rsrc) converted from QResource files (to confirm whether those lingered in memory after the context), but either they were imported anyway or this action is unrelated to the actual problem.

Try 5 - We tried to observe the memory consumption per module using a memory profiler.
Try 6 - We tried renaming the fixture named `runner` to something else.

Observation - We did observe that the memory kept climbing with each testcase regardless of these changes.
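For Try 5-style measurements, one option that needs no external tooling is to snapshot Python-level allocations around a block of code with the standard-library `tracemalloc` module. This is a sketch (the `MemWatch` name is made up here), and it only sees Python-side allocations, not memory held by Qt's C++ objects:

```python
import tracemalloc


class MemWatch:
    """Snapshot Python-level allocations around a block of code."""

    def __enter__(self):
        tracemalloc.start()
        self.before, _ = tracemalloc.get_traced_memory()
        return self

    def __exit__(self, *exc):
        # `kept` is memory still allocated when the block exits;
        # `peak` is the high-water mark while tracing was active.
        self.kept, self.peak = tracemalloc.get_traced_memory()
        self.kept -= self.before
        tracemalloc.stop()
        return False


holder = []
with MemWatch() as mw:
    holder.extend([object() for _ in range(50_000)])

print(f"kept {mw.kept} bytes, peak {mw.peak} bytes")
```

Wrapping each test (or fixture) body like this can show which tests keep memory after they finish, which is more targeted than watching the process total in a task manager.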
I did some more digging by trying to run the tests on Windows, and I quickly discovered that the memory usage climbs there in just the same way.

[Screenshot: Task manager]

[Screenshot: Task switcher]
One thing you can do as a sanity check is to edit your local pytest-qt installation and raise an exception at the point where the widgets are closed and deleted:

pytest-qt/src/pytestqt/qtbot.py Lines 754 to 755 in 691c7fb

This should blow up and demonstrate that the cleanup code really does run for each test.
Also note that `deleteLater` only schedules the deletion: the widget is actually destroyed once the event loop processes pending events, which pytest-qt does as part of its teardown.
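Besides editing pytest-qt locally, the "does the object actually die?" question can also be answered from the test suite itself with a `weakref` probe. A minimal sketch, using a plain object in place of the real `MainWindow` (the names here are invented):

```python
import gc
import weakref


class MainWindowStandIn:
    """Plain object standing in for the real MainWindow widget."""


def run_one_test() -> weakref.ref:
    """Simulate one testcase: create the 'window', use it, drop it."""
    window = MainWindowStandIn()
    probe = weakref.ref(window)
    # ... the test body would exercise `window` here ...
    return probe  # `window` goes out of scope when the function returns


probe = run_one_test()
gc.collect()
if probe() is not None:
    # A live referent means *something* still holds the window;
    # gc.get_referrers(probe()) can reveal what that something is.
    print("window leaked")
else:
    print("window collected")
```

Putting such a probe in the fixture's teardown would flag the first test after which the window survives, instead of waiting for the cumulative 4 GiB symptom.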
I gave it a try and you're right: the exception does blow up, so the cleanup code is being executed for each testcase.

I am considering grouping my tests in classes and restricting the scope of the fixture, but I have limited confidence on whether this would end up working (like https://stackoverflow.com/a/62176555). Would you know if testing `QMainWindow` subclasses could be the problem here?

N.B. A friend of mine, @sdglitched, suggested that maybe the modal dialogs are what keep the resources around.
I doubt it, our test suites at work use QMainWindow classes without problems.
Perhaps... ideally modal dialogs should be mocked in tests given they block the test flow, see https://pytest-qt.readthedocs.io/en/latest/note_dialogs.html.
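Following that advice, a blocking dialog is usually replaced with an `unittest.mock` patch so the test never opens a real window and can script the user's answer. A sketch with made-up names (`ConfirmBox` stands in for something like `QMessageBox`, `delete_loadout` for a piece of application logic):

```python
from unittest import mock


class ConfirmBox:
    """Stand-in for a modal dialog such as QMessageBox."""
    def exec(self) -> int:
        raise RuntimeError("a real modal dialog would block the test here")


def delete_loadout(confirmed_code: int = 1) -> str:
    """App logic that pops a confirmation dialog before acting."""
    answer = ConfirmBox().exec()
    return "deleted" if answer == confirmed_code else "kept"


# Patch exec() so the 'dialog' answers immediately instead of blocking.
with mock.patch.object(ConfirmBox, "exec", return_value=1):
    print(delete_loadout())  # deleted
with mock.patch.object(ConfirmBox, "exec", return_value=0):
    print(delete_loadout())  # kept
```

In a pytest suite the same patch is typically applied through the `monkeypatch` fixture, which undoes itself after each test.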
The grouping of tests with classes did not work, contrary to what I previously thought. Interestingly enough, the memory leak does not show up (or at least not in a distinctive manner) in an actual instance of the running application, mostly because there is only one instance of the `MainWindow` class alive at a time there.
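One way to make the per-test growth visible directly is to count live instances of the window class between testcases using the `gc` module. A sketch, with `Window` as a dummy stand-in for `MainWindow`:

```python
import gc


class Window:
    """Dummy stand-in for the application's MainWindow."""


def live_instances(cls) -> int:
    """Count objects of exactly the given class that the GC can still see."""
    return sum(1 for obj in gc.get_objects() if type(obj) is cls)


windows = [Window() for _ in range(3)]   # three "tests" left their windows behind
print(live_instances(Window))            # 3

windows.clear()
gc.collect()
print(live_instances(Window))            # 0
```

If this count rises by one per testcase in the real suite, each surviving instance can then be fed to `gc.get_referrers` to find the holder.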
I was reading more into memory leaks and why some objects would outlive their utility, and the two things that stuck out to me are: one, where signals/slots are not properly disconnected, and the other, where lambda functions used as slots keep references to the objects they capture.

Take this snippet, for example.

```python
def initialize_events(self) -> None:
    """
    Initialize events on the user interface

    :return:
    """
    self.head_scan.clicked.connect(self.show_output_window)
    self.head_char_elem.currentTextChanged.connect(self.handle_elem_data)
    self.head_char_name.currentTextChanged.connect(self.handle_char_data)
    self.head_char_levl.currentTextChanged.connect(self.handle_char_data)
    self.head_char_name.currentTextChanged.connect(self.format_weapon_by_char_change)
    self.weap_area_type.currentTextChanged.connect(self.convey_weapon_type_change)
    self.weap_area_name.currentTextChanged.connect(self.convey_weapon_name_change)
    self.weap_area_levl.currentTextChanged.connect(self.convey_weapon_levl_change)
    self.weap_area_refn.currentTextChanged.connect(self.convey_refinement_change)
    for item in [
        (self.arti_fwol_levl, self.arti_fwol_type, self.arti_fwol_rare, self.arti_fwol_name_main, self.arti_fwol_data_main, "fwol", self.arti_fwol_type_name, self.arti_fwol_head_area, self.arti_fwol_head_icon),
        (self.arti_pmod_levl, self.arti_pmod_type, self.arti_pmod_rare, self.arti_pmod_name_main, self.arti_pmod_data_main, "pmod", self.arti_pmod_type_name, self.arti_pmod_head_area, self.arti_pmod_head_icon),
        (self.arti_sdoe_levl, self.arti_sdoe_type, self.arti_sdoe_rare, self.arti_sdoe_name_main, self.arti_sdoe_data_main, "sdoe", self.arti_sdoe_type_name, self.arti_sdoe_head_area, self.arti_sdoe_head_icon),
        (self.arti_gboe_levl, self.arti_gboe_type, self.arti_gboe_rare, self.arti_gboe_name_main, self.arti_gboe_data_main, "gboe", self.arti_gboe_type_name, self.arti_gboe_head_area, self.arti_gboe_head_icon),
        (self.arti_ccol_levl, self.arti_ccol_type, self.arti_ccol_rare, self.arti_ccol_name_main, self.arti_ccol_data_main, "ccol", self.arti_ccol_type_name, self.arti_ccol_head_area, self.arti_ccol_head_icon),
    ]:
        item[1].currentTextChanged.connect(lambda _, a_type=item[1], a_rare=item[2], a_name=item[6], a_back=item[8], a_id=item[5]: self.change_rarity_backdrop_by_changing_type(a_type, a_rare, a_name, a_back, a_id))
        item[1].currentTextChanged.connect(lambda _, a_type=item[1], a_id=item[5]: self.change_artifact_team_by_changing_type(a_type, a_id))
        item[1].currentTextChanged.connect(lambda _, a_type=item[1], a_id=item[5]: self.remove_artifact(a_type, a_id))
        item[0].currentTextChanged.connect(lambda _, a_levl=item[0], a_type=item[1], a_rare=item[2], a_name=item[3], a_data=item[4], a_id=item[5]: self.change_data_by_changing_level_or_stat(a_levl, a_type, a_rare, a_name, a_data, a_id))
        item[3].currentTextChanged.connect(lambda _, a_levl=item[0], a_type=item[1], a_rare=item[2], a_name=item[3], a_data=item[4], a_id=item[5]: self.change_data_by_changing_level_or_stat(a_levl, a_type, a_rare, a_name, a_data, a_id))
        item[2].currentTextChanged.connect(lambda _, a_rare=item[2], a_name=item[3], a_id=item[5]: self.change_artifact_substats_by_changing_rarity_or_mainstat(a_rare, a_name, a_id))
        item[3].currentTextChanged.connect(lambda _, a_rare=item[2], a_name=item[3], a_id=item[5]: self.change_artifact_substats_by_changing_rarity_or_mainstat(a_rare, a_name, a_id))
        item[2].currentTextChanged.connect(lambda _, a_levl=item[0], a_back=item[7], a_rare=item[2]: self.change_levels_backdrop_by_changing_rarity(a_levl, a_back, a_rare))
    for part in ["fwol", "pmod", "sdoe", "gboe", "ccol"]:
        for alfa in ["a", "b", "c", "d"]:
            drop, text = getattr(self, f"arti_{part}_name_{alfa}"), getattr(self, f"arti_{part}_data_{alfa}")
            drop.currentTextChanged.connect(lambda _, a_drop=drop, a_text=text: self.render_lineedit_readonly_when_none(a_drop, a_text))
            text.textChanged.connect(self.validate_lineedit_userdata)
    for part in ["fwol", "pmod", "sdoe", "gboe", "ccol"]:
        getattr(self, f"arti_{part}_scan").clicked.connect(lambda _, a_part=part: self.show_scan_dialog(a_part))
        getattr(self, f"arti_{part}_load").clicked.connect(lambda _, a_part=part: self.arti_load(a_part))
        getattr(self, f"arti_{part}_save").clicked.connect(lambda _, a_part=part: self.arti_save(a_part))
        getattr(self, f"arti_{part}_wipe").clicked.connect(lambda _, a_part=part: self.wipe_artifact(a_part))
    self.head_load.clicked.connect(self.team_load)
    self.head_save.clicked.connect(self.team_save)
    self.head_wipe.clicked.connect(self.wipe_team)
    self.weap_head_load.clicked.connect(self.weap_load)
    self.weap_head_save.clicked.connect(self.weap_save)
    self.char_head_lumi.clicked.connect(lambda _, a_char=CharName.lumine: self.select_char_from_dropdown(a_char))
    self.char_head_aeth.clicked.connect(lambda _, a_char=CharName.aether: self.select_char_from_dropdown(a_char))
    self.side_head.clicked.connect(lambda _, a_link=__homepage__: self.open_link(a_link))
    self.side_tckt.clicked.connect(lambda _, a_link=__issutckt__: self.open_link(a_link))
    self.side_cash.clicked.connect(lambda _, a_link=__donation__: self.open_link(a_link))
    self.side_info.clicked.connect(self.show_info_dialog)
    self.side_lcns.clicked.connect(self.show_lcns_dialog)
```

I thought that these lambda connections might be what keeps the objects around after the tests are done. Also, please feel free to let me know if this is the wrong direction for pursuing the memory persistence problem.
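Those connections are a reasonable thing to check. As far as I understand, in PyQt a connected lambda is held strongly for as long as the connection object lives (unlike bound methods, which are tracked weakly), so a lambda's default arguments do pin their captured objects until the sender is destroyed or the slot is disconnected. A plain-Python sketch of that mechanism (the `Signal` class here is invented, not Qt's):

```python
import gc
import weakref


class Signal:
    """Minimal stand-in for a Qt signal: it just stores its slots."""
    def __init__(self) -> None:
        self._slots = []

    def connect(self, slot) -> None:
        self._slots.append(slot)

    def disconnect_all(self) -> None:
        self._slots.clear()


class Payload:
    """Some object captured by a lambda slot."""


signal = Signal()
payload = Payload()
probe = weakref.ref(payload)

# The lambda's default argument stores a strong reference to `payload`.
signal.connect(lambda value=None, captured=payload: captured)

del payload
gc.collect()
print(probe() is not None)   # True: the stored lambda keeps it alive

signal.disconnect_all()      # dropping the connection releases the capture
gc.collect()
print(probe() is None)       # True: nothing references the payload now
```

In Qt itself the connections die with the sender, which is one more reason the `deleteLater` scheduled by pytest-qt's teardown actually has to run.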
For testing our application, Loadouts for Genshin Impact, we see a very high memory usage of up to 4.0 GiB, even though an actual run of the application takes somewhere in the neighbourhood of 200 MiB. The test run starts from a meagre 100 MiB, and with the run of each testcase associated with pytest-qt it adds around 100 MiB of memory which does not get offloaded until the end of the entire test run. We currently have around 842 tests, out of which around half are associated with pytest-qt. Ideally, if the cleanups were done per testcase, we would expect the run to stay well under 500 MiB of memory usage.

Here's what the `conftest` module looks like where our fixture is located.