Description
Based on #5741, it has been found that the tests that check the correct functioning of the agent in the testing module need to be modified:
As can be seen in #5689 (comment) and #5741, this change is needed because even when an error occurred while validating the states, and that error appeared in the logs, the test did not fail.
Currently, the test follows the documented steps for installation, registration, stopping, and so on, but it has been observed that changes are essential to ensure that each step is fully verified. For example, the agent installation process runs the following tests:
test_installation: Installs and checks directories.
test_status: Verifies the service status.
Since the agent should not be started at this stage, the installation test should be limited to verifying the installation itself. Besides checking directories, it should also confirm that the installed package is correct, covering aspects such as version, revision, and metadata.
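As a rough sketch of what a stricter installation check could look like, the helper below compares the metadata of the installed package (as it might be parsed from a package manager query) against the values expected for the build under test. The function name and the metadata keys are illustrative assumptions, not the actual test framework's API:

```python
# Hypothetical sketch: verify installed-package metadata (version,
# revision, ...) instead of only checking that directories exist.
# check_package_metadata is an illustrative name, not a real helper.

def check_package_metadata(actual: dict, expected: dict) -> list:
    """Return a list of mismatch messages; an empty list means the package matches."""
    mismatches = []
    for key, want in expected.items():
        got = actual.get(key)
        if got != want:
            mismatches.append(f"{key}: expected {want!r}, got {got!r}")
    return mismatches

# Example: metadata as it might be parsed from a package manager
# (e.g. the output of a dpkg or rpm query); values are made up.
actual = {"version": "4.8.0", "revision": "1", "architecture": "amd64"}
expected = {"version": "4.8.0", "revision": "2"}
print(check_package_metadata(actual, expected))  # → ["revision: expected '2', got '1'"]
```

A test would then assert that the mismatch list is empty, so any wrong version or revision fails the installation step directly rather than surfacing only in the logs.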
The current status check in the test is too broad. It should be more specific, verifying that the status matches the one expected in each scenario rather than merely being any valid status.
Expected behavior
The status check should be modified to improve accuracy in determining whether the test passes or fails based on the expected status. If an attempt is made to start the service and a failure is the expected status (for example, when starting the agent without registering a manager), the test should expect that failure and treat it as a valid outcome. Conversely, if the agent is expected to start successfully but fails, the test should fail accordingly, which does not always happen with the current implementation.
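The behavior described above can be sketched as a scenario table in which the expected status is part of each test case, so an observed failure passes when the scenario predicts it and fails otherwise. The function and scenario names below are illustrative assumptions, not the existing test code:

```python
# Hypothetical sketch: scenario-specific status assertions. A "failed"
# service status is a passing result when the scenario expects it
# (e.g. starting an agent with no manager registered).

def assert_status(actual_status: str, expected_status: str) -> None:
    # Fail unless the observed status matches the scenario's expectation;
    # "failed" is valid precisely when the scenario expects "failed".
    assert actual_status == expected_status, (
        f"expected service status {expected_status!r}, got {actual_status!r}"
    )

scenarios = [
    # (scenario name, observed status, expected status) -- made-up values
    ("start_without_manager", "failed", "failed"),    # failure is the valid outcome
    ("start_after_registration", "active", "active"), # success is required here
]

for name, observed, expected in scenarios:
    assert_status(observed, expected)
print("all scenarios passed")
```

With this shape, an agent that starts "successfully" in a scenario that expects a failure also fails the test, instead of being accepted as merely a valid status.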
By improving the granularity of the tests, we can ensure that each action, such as installation or service startup, is independently and accurately verified, thereby increasing the overall robustness of the testing process.