fix: Inconsistent and flaky unit-tests #2892
Conversation
Note: Reviews paused. Use the following commands to manage reviews:
Walkthrough
The recent updates encompass a variety of enhancements across the codebase, focusing on refining data handling, improving code organization, and enhancing test reliability. Key areas of improvement include the management of favicon data, the centralization of server startup logic in tests, adjustments in HTTP request handling, and the removal of concurrent execution in limiter tests. These changes collectively aim to streamline operations, boost readability, and ensure more robust testing practices.
Changes
Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?
Tips
Chat
There are 3 ways to chat with CodeRabbit:
Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.
CodeRabbit Commands (invoked as PR comments)
Additionally, you can add a CodeRabbit Configuration File (
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files selected for processing (5)
- app_test.go (2 hunks)
- client_test.go (69 hunks)
- listen_test.go (5 hunks)
- middleware/logger/logger_test.go (1 hunks)
- middleware/proxy/proxy_test.go (34 hunks)
Additional comments: 11
listen_test.go (4)
- 49-49: Introduced a channel `errs` to capture errors in the goroutine. This is a good practice for handling errors in asynchronous operations, ensuring that errors are not lost and can be asserted in the main test goroutine.
- 52-55: Increased the timeout duration from 250 milliseconds to 5 seconds in the context creation for graceful shutdown. This change likely aims to reduce flakiness in tests by allowing more time for operations to complete, especially in environments with variable performance. However, ensure that this does not significantly increase the overall test suite execution time.
- 72-73: Modified the test cases to reflect the updated timeout values. This is a necessary change following the increase in the context timeout to ensure consistency in test expectations. It's good to see that the test cases are updated to align with the new configuration.
- 89-91: Removed the assertion for error in the goroutine and added it after receiving the error from the channel. This change improves the reliability of error handling in tests by ensuring that errors are properly captured and asserted in the main test goroutine. It's a good practice to centralize error assertions after asynchronous operations.
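To make the pattern concrete, here is a minimal, framework-agnostic sketch of it (using the standard library's net/http instead of Fiber's API, with placeholder timeouts): the goroutine forwards its error through a buffered channel, and the assertion happens in the main test goroutine, where `t.Fatal`/`require` are safe to call.

```go
package listen_test

import (
	"context"
	"errors"
	"net"
	"net/http"
	"testing"
	"time"
)

func TestGracefulShutdownSketch(t *testing.T) {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		t.Fatalf("listen: %v", err)
	}

	srv := &http.Server{Handler: http.NewServeMux()}
	errs := make(chan error, 1) // buffered so the goroutine never blocks

	go func() {
		errs <- srv.Serve(ln) // do not assert inside the goroutine
	}()

	// Generous timeout to reduce flakiness on slow CI machines.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		t.Fatalf("shutdown: %v", err)
	}

	// Assert the goroutine's result here, in the test goroutine.
	if err := <-errs; !errors.Is(err, http.ErrServerClosed) {
		t.Fatalf("unexpected serve error: %v", err)
	}
}
```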
middleware/proxy/proxy_test.go (3)
- 43-43: The integration of the `startServer` function within various test functions is consistent and improves code organization and readability by centralizing server startup logic. Also applies to: 131-131, 160-160, 214-214, 536-536
- 19-30: The changes in this PR, including the introduction of the `startServer` function and the removal of `t.Parallel()` and `require.NoError()` usage (as inferred from the PR objectives and the absence of these in the provided code), align with the goals of improving test reliability and code consistency. These adjustments contribute to making the tests more deterministic and easier to debug.
- 24-26: The use of `panic` in the `startServer` function to handle errors is a clear way to indicate server startup failures in tests. However, ensure that this approach is consistent with the overall testing strategy of the project. Consider whether alternative error handling mechanisms might be more appropriate in some contexts to facilitate debugging.
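For reference, a helper along these lines could look like the sketch below. It is a hedged reconstruction, not the PR's exact code: the Fiber import path/version and the use of fasthttp's in-memory listener are assumptions. The key point is that it panics on failure, because testify assertions are not safe inside goroutines.

```go
package proxy_test

import (
	"github.com/gofiber/fiber/v2"
	"github.com/valyala/fasthttp/fasthttputil"
)

// startServer starts the given app on an in-memory listener in a background
// goroutine and panics if the server fails to start, so the failure is not
// silently swallowed inside the goroutine.
func startServer(app *fiber.App) *fasthttputil.InmemoryListener {
	ln := fasthttputil.NewInmemoryListener()
	go func() {
		if err := app.Listener(ln); err != nil {
			panic(err)
		}
	}()
	return ln
}
```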
client_test.go (2)
- 46-53: > 📝 NOTE
This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [40-57]
The modification in `Test_Client_Invalid_URL` to include the `startServer` function before making requests is a good practice to ensure the server is ready. This change aligns with the PR objectives to improve test reliability.
- 46-53: > 📝 NOTE
This review was outside the diff hunks and was mapped to the diff hunk with the greatest overlap. Original lines [40-57]
While the provided code does not explicitly show the removal of `t.Parallel()` and `require.NoError()`, it's important to ensure that their removal across the test suite aligns with the PR objectives to prevent race conditions and improve test reliability. Removing `t.Parallel()` helps avoid tests affecting each other by running concurrently, and replacing `require.NoError()` with a more controlled error handling mechanism can prevent abrupt test terminations that might mask underlying issues. Ensure that the removal of `t.Parallel()` and `require.NoError()` is consistently applied across the test suite and that it does not introduce any unintended side effects.
app_test.go (2)
- 824-830: The changes in `Test_App_ShutdownWithTimeout` introduce error handling for `app.Listener(ln)` and use panic in case of an error, which aligns with the PR objectives to improve error handling in tests.
- 874-877: The changes in `Test_App_ShutdownWithContext` introduce error handling for `app.Listener(ln)` and use panic in case of an error, which aligns with the PR objectives to improve error handling in tests.
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (3)
- client_test.go (69 hunks)
- listen_test.go (4 hunks)
- middleware/proxy/proxy_test.go (34 hunks)
Files skipped from review as they are similar to previous changes (3)
- client_test.go
- listen_test.go
- middleware/proxy/proxy_test.go
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- listen_test.go (4 hunks)
Additional comments: 5
listen_test.go (5)
- 50-50: Introduced a channel `errs` to capture errors from the goroutine. This is a good practice for error handling in asynchronous operations, ensuring that errors are not lost and can be asserted in the main test goroutine.
- 53-53: Increased the timeout duration from 250 milliseconds to 2 seconds in the context creation. This change likely aims to reduce flakiness in tests by allowing more time for operations to complete, especially in environments with variable performance. However, it's important to balance this with the overall test suite execution time.
- 67-79: Implemented a server readiness check before proceeding with the test cases. This is a significant improvement, ensuring that the server is ready to handle requests before any tests are executed. It reduces the likelihood of flaky tests due to timing issues with server startup.
- 88-88: Updated the test cases to reflect the new timeout values. This change is consistent with the increased timeout duration set earlier in the context. It's crucial that test cases are aligned with the expected behavior and configurations of the system under test.
- 128-130: Removed the assertion for error in the goroutine and added it after receiving the error from the channel. This change correctly moves the error assertion to the main test goroutine, where it can be properly handled and reported by the testing framework. It's a good practice to centralize error handling in asynchronous test setups.
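A readiness check of the kind described here can be as simple as polling the address until a TCP dial succeeds. The sketch below uses assumed names and timeouts rather than the exact code from listen_test.go.

```go
package listen_test

import (
	"fmt"
	"net"
	"time"
)

// waitForServer blocks until the server at addr accepts a TCP connection or
// the timeout elapses, so tests only start sending requests once the server
// is actually listening.
func waitForServer(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 100*time.Millisecond)
		if err == nil {
			return conn.Close() // reachable: the server is ready
		}
		time.Sleep(50 * time.Millisecond) // brief backoff before retrying
	}
	return fmt.Errorf("server at %s not ready within %s", addr, timeout)
}
```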
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files selected for processing (1)
- internal/storage/memory/memory_test.go (1 hunks)
Additional comments: 1
internal/storage/memory/memory_test.go (1)
- 86-86: The change from "john" to "john-delete" in the `Test_Storage_Memory_Delete` function is a minor but positive change for clarity and specificity in test cases. This helps in distinguishing the delete operation's test case from others that use a similar key. However, it's important to ensure that this key change does not affect other tests or the overall test suite's logic.
…issues pointed by nilaway
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files selected for processing (6)
- client.go (2 hunks)
- listen.go (2 hunks)
- middleware/logger/default_logger.go (1 hunks)
- middleware/logger/template_chain.go (1 hunks)
- middleware/proxy/proxy.go (1 hunks)
- middleware/proxy/proxy_test.go (9 hunks)
Files skipped from review as they are similar to previous changes (1)
- middleware/proxy/proxy_test.go
Additional comments: 10
middleware/logger/template_chain.go (4)
- 20-22: Adding a nil check for `templateB` before proceeding with the function logic is a good practice to ensure that the function does not operate on invalid data. This enhances the robustness of the code by preventing potential nil pointer dereferences.
- 25-27: The nil check for `startTagB` is correctly implemented. It ensures that the function does not proceed with a nil `startTagB`, which could lead to runtime errors. This is a good error handling practice.
- 30-32: Similar to the previous checks, adding a nil check for `endTagB` is a prudent measure. It prevents the function from executing further and potentially encountering runtime errors due to a nil `endTagB`.
- 35-37: The nil check for `paramSeparatorB` follows the same pattern as the previous checks and is a good practice. It ensures the function's robustness by preventing operations on a nil `paramSeparatorB`.
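The guard pattern these comments describe looks roughly like the following sketch; the function name, signature, and error messages are placeholders, not the actual template_chain.go code.

```go
package logger

import "errors"

// buildChainSketch validates each byte-slice template part before any of them
// is used, returning early instead of risking a nil dereference later on.
func buildChainSketch(templateB, startTagB, endTagB, paramSeparatorB []byte) error {
	if templateB == nil {
		return errors.New("templateB is nil")
	}
	if startTagB == nil {
		return errors.New("startTagB is nil")
	}
	if endTagB == nil {
		return errors.New("endTagB is nil")
	}
	if paramSeparatorB == nil {
		return errors.New("paramSeparatorB is nil")
	}
	// ...build the logger template chain using the validated inputs...
	return nil
}
```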
middleware/logger/default_logger.go (2)
- 159-162: Adding a check for a nil `io.Writer` at the beginning of the `writeLog` function is a crucial safety measure. It prevents the function from attempting to write to a nil writer, which would result in a runtime panic. This change enhances the reliability of the logging mechanism.
- 164-167: The updated error handling logic in the `writeLog` function, which attempts to write the error message to the original writer (`w`) and falls back to `os.Stderr` if that fails, is a thoughtful improvement. It ensures that error messages are not lost silently and are directed to a fallback output if necessary. This change improves the observability and debuggability of the application.
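In sketch form, that behaviour looks like the following (names and message formatting are assumptions, not the middleware's exact code): skip nil writers, try to report a write failure on the original writer, and only then fall back to stderr.

```go
package logger

import (
	"fmt"
	"io"
	"os"
)

// writeLogSketch mirrors the described fallback: a nil writer is skipped, and
// a failed write is reported on the original writer first, then on stderr.
func writeLogSketch(w io.Writer, msg []byte) {
	if w == nil {
		return // nothing to write to; avoids a nil-pointer panic
	}
	if _, err := w.Write(msg); err != nil {
		if _, err2 := w.Write([]byte(err.Error())); err2 != nil {
			fmt.Fprintln(os.Stderr, err2) // last resort so the error is not lost
		}
	}
}
```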
middleware/proxy/proxy.go (1)
- 203-205: Introducing a nil check for the `uri` parameter in the `getScheme` function is a good defensive programming practice. It prevents the function from proceeding with a nil `uri`, which could lead to runtime errors. This change enhances the function's robustness and reliability.
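As a hedged illustration only (the real `getScheme` in the proxy middleware may take the URI differently and contain additional logic; the fasthttp type here is an assumption):

```go
package proxy

import "github.com/valyala/fasthttp"

// getSchemeSketch returns the request scheme, guarding against a nil URI so a
// malformed request cannot trigger a nil-pointer dereference.
func getSchemeSketch(uri *fasthttp.URI) string {
	if uri == nil {
		return ""
	}
	return string(uri.Scheme())
}
```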
listen.go (2)
- 271-274: Adding error checking before using the listener in the `createListener` function is a critical improvement. It ensures that the function does not proceed with an invalid listener, which could lead to runtime errors. This change enhances the reliability of the listener creation process.
- 382-382: Modifying the initialization of `pidSlice` to use `make` with an initial capacity of 0 is a subtle but good practice. It explicitly initializes `pidSlice` as an empty slice, which is more idiomatic and clear than relying on a nil slice. This change improves code readability and maintainability.
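Both points can be illustrated in a few lines. The names below are illustrative; the real `createListener` does considerably more.

```go
package fiber

import (
	"net"
	"os"
	"strconv"
)

// createListenerSketch shows the two habits noted above: check the error
// before using the listener, and initialize pidSlice as an explicitly empty
// slice rather than leaving it nil.
func createListenerSketch(addr string) (net.Listener, []string, error) {
	ln, err := net.Listen("tcp", addr)
	if err != nil {
		return nil, nil, err // never touch ln when err != nil
	}

	pidSlice := make([]string, 0)
	pidSlice = append(pidSlice, strconv.Itoa(os.Getpid()))

	return ln, pidSlice, nil
}
```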
client.go (1)
- 607-624: The addition of nil checks for `a.formFiles`, `ff`, and `ff.Content` before proceeding with operations on these objects is a good practice to avoid nil dereference issues. This change enhances the robustness of the code by ensuring that operations are only performed on non-nil objects, thus preventing potential runtime panics. This aligns well with the PR's objective of enhancing code reliability.
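A rough sketch of such a guarded loop is shown below; `FormFile` is a hypothetical stand-in for the client's actual form-file type, and the multipart field name is made up.

```go
package client

import "mime/multipart"

// FormFile is a hypothetical stand-in for the client's form-file type.
type FormFile struct {
	Name    string
	Content []byte
}

// addFormFilesSketch skips a nil slice, nil entries, and entries with nil
// content so that no dereference can panic at runtime.
func addFormFilesSketch(w *multipart.Writer, formFiles []*FormFile) error {
	if formFiles == nil {
		return nil
	}
	for _, ff := range formFiles {
		if ff == nil || ff.Content == nil {
			continue
		}
		part, err := w.CreateFormFile("file", ff.Name)
		if err != nil {
			return err
		}
		if _, err := part.Write(ff.Content); err != nil {
			return err
		}
	}
	return nil
}
```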
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- client_test.go (22 hunks)
Files skipped from review as they are similar to previous changes (1)
- client_test.go
Review Status
Actionable comments generated: 1
Configuration used: CodeRabbit UI
Files selected for processing (3)
- internal/memory/memory_test.go (1 hunks)
- middleware/logger/logger_test.go (2 hunks)
- middleware/proxy/proxy_test.go (10 hunks)
Files skipped from review as they are similar to previous changes (1)
- middleware/proxy/proxy_test.go
Additional comments: 5
internal/memory/memory_test.go (5)
- 20-20: Adding a comment here improves the readability and understanding of the test's purpose, which is a good practice.
- 27-27: Switching from `require.Equal` to `require.Nil` for checking nil values is appropriate when validating that a key does not exist in the store. This change makes the intention of the test clearer and is a good improvement.
- 46-47: Adding new test cases for setting keys with values and resetting all keys is a positive change, as it increases the coverage and robustness of the tests. It's good to see that different scenarios are being considered.
- 51-54: The adjustment of sleep time for the expiration test is not directly visible in the provided code snippet. However, ensuring that the sleep time accurately reflects the expiration time of keys is crucial for the reliability of this test. If the sleep time has been adjusted elsewhere in the code, it's important to verify that it's sufficient for the expiration to take effect.
Verification successful
The sleep time of 1100 milliseconds in the expiration test is appropriate, as it exceeds the expiration time of 1 second (1000 milliseconds) set for the keys. This should allow sufficient time for the keys to expire before they are accessed again, indicating that the sleep time adjustment is correctly implemented for the purpose of the test.
- 46-47: Modifying key names for clarity is a good practice, especially in a test environment where readability and understandability are crucial. Using descriptive key names like "john-reset" and "doe-reset" helps to clearly distinguish between different test scenarios.
Scripts Executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Verify that the adjusted sleep time is appropriate for the expiration test.
grep -C3 "time.Sleep" internal/memory/memory_test.go
Length of output: 222
Script:
#!/bin/bash
# Find the expiration time set for keys in the test to compare with the sleep time.
grep -B2 "store.Set" internal/memory/memory_test.go
Length of output: 317
Script:
#!/bin/bash
# Find the value assigned to `exp` that represents the expiration time for the keys in the test.
grep -B5 "exp" internal/memory/memory_test.go
Length of output: 466
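Putting the verified numbers together, the timing relationship looks like the sketch below; `ttlStore` is a hypothetical interface standing in for the memory store, and the real package's method signatures may differ.

```go
package memory_test

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// ttlStore is a hypothetical stand-in for internal/memory's store.
type ttlStore interface {
	Set(key string, val any, ttl time.Duration)
	Get(key string) any
}

// checkExpirationSketch stores a key with a 1-second TTL and sleeps 1100 ms,
// strictly longer than the TTL, so the entry is guaranteed to be gone.
func checkExpirationSketch(t *testing.T, store ttlStore) {
	exp := 1 * time.Second
	store.Set("john", "doe", exp)

	time.Sleep(1100 * time.Millisecond) // 1100 ms > 1000 ms expiration

	require.Nil(t, store.Get("john"))
}
```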
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (1)
- internal/memory/memory_test.go (1 hunks)
Files skipped from review as they are similar to previous changes (1)
- internal/memory/memory_test.go
Review Status
Actionable comments generated: 2
Configuration used: CodeRabbit UI
Files selected for processing (3)
- client_test.go (22 hunks)
- middleware/favicon/favicon.go (2 hunks)
- middleware/proxy/proxy_test.go (10 hunks)
Files skipped from review as they are similar to previous changes (2)
- client_test.go
- middleware/proxy/proxy_test.go
Additional comments: 1
middleware/favicon/favicon.go (1)
- 139-143: Serving the cached favicon with appropriate headers is correctly implemented. Setting the `Content-Length`, `Content-Type`, and `Cache-Control` headers is essential for efficient and correct delivery of static assets like favicons. This implementation ensures that browsers can cache the favicon, reducing unnecessary requests. No issues or improvements are identified in this segment.
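A hedged sketch of such a handler segment is shown below: a Fiber v2-style context and hard-coded header values are assumptions for illustration, and the length is passed in precomputed in line with the commit "Do not calculate favicon len on each handler call".

```go
package favicon

import "github.com/gofiber/fiber/v2"

// serveFaviconSketch sends a cached favicon with the three headers the review
// mentions so browsers can cache it and skip repeat requests.
func serveFaviconSketch(c *fiber.Ctx, icon []byte, iconLen string) error {
	c.Set(fiber.HeaderContentLength, iconLen) // precomputed length, not len(icon) per call
	c.Set(fiber.HeaderContentType, "image/x-icon")
	c.Set(fiber.HeaderCacheControl, "public, max-age=31536000")
	return c.Status(fiber.StatusOK).Send(icon)
}
```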
Review Status
Actionable comments generated: 0
Configuration used: CodeRabbit UI
Files selected for processing (2)
- middleware/favicon/favicon.go (2 hunks)
- middleware/proxy/proxy_test.go (10 hunks)
Files skipped from review as they are similar to previous changes (2)
- middleware/favicon/favicon.go
- middleware/proxy/proxy_test.go
…and parallel benchmarks for Session middleware
# Conflicts:
#	client/client_test.go
#	middleware/proxy/proxy_test.go
Let's solve
* Fixes for some of the failing tests
* Add readiness check to serverStart()
* Use net/http client for tests listen test
* Use different key for this test
* Run Proxy Middleware tests in parallel. Add nil checks for potential issues pointed by nilaway
* Enable parallel client tests
* Do not run timing sensitive tests in parallel
* Remove TODO
* Revert Test_Proxy_DoTimeout_Timeout, and remove t.Parallel() for it
* Do not calculate favicon len on each handler call
* Revert logic change
* Increase timeout of SaveFile tests
* Do not run time sensitive tests in parallel
* The Agent can't be run in parallel
* Do not run time sensitive tests in parallel
* Fixes based on uber/nilaway
* Revert change to Client test
* Run parallel
* Update client_test.go
* Update client_test.go
* Update cache_test.go
* Update cookiejar_test.go
* Remove parallel for test using timeouts
* Remove t.Parallel() from logger middleware tests
* Do not use testify.require in a goroutine
* Fix import, and update golangci-lint
* Remove changes to template_chain.go
* Run more tests in parallel
* Add more parallel tests
* Add more parallel tests
* SetLogger can't run in parallel
* Run more tests in parallel, fix issue with goroutine in limiter middleware
* Update internal/storage/memory, add more benchmarks
* Increase sleep for csrf test by 100 milliseconds. Implement asserted and parallel benchmarks for Session middleware
* Add 100 milliseconds to sleep during test
* Revert name change
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
* fix: Inconsistent and flaky unit-tests
---------
Co-authored-by: M. Efe Çetin <[email protected]>
Co-authored-by: René <[email protected]>
Description
- Do not use `testify.require` inside goroutines created during tests
- Update `internal/storage/memory` to match https://github.com/gofiber/storage/tree/main/memory
- Add benchmarks for `internal/storage/memory`
- Add asserted and parallel benchmarks for `middleware/session`
Fixes #2891 #2872
Type of change
Please delete options that are not relevant.
Summary by CodeRabbit