Investigation profit calculation test #364
Conversation
This reverts commit 11e98cf.
Ran tests for different configurations; here's the summary. "with approx" means that the
The observations from #364 (comment):
So apparently there are two problems:
all test cases pass with 4b356e9 (works with approx)
As discussed, please look out for a much bigger issue than a few percent difference. In the bug report in #329, the relevant line is:
Local tests pass, but on the drone they don't :/ Will address this later; moving on to #364 (comment)
Changing it while also suggesting that the date fetching function could fail does not entirely make sense, because if the network is not a fork, we just return
I think those statements contradict each other. Caching on the frontend expects that it's running in a consistent environment. In the case of tests, where unexpected changes can be quickly introduced, the cache should be invalidated. So I would invalidate the cache before each execution of the cached function (that's a lot of work) and after the network reset/warp.
Bare-result-wise, the cache is already being cleared according to the suggestion: #364 (comment) Regarding the other topics: what would be your take on the suggestions here?
@@ -0,0 +1,7 @@
import { Memoized } from 'memoizee';

export default (cachedFunctions: Memoized<any>[]) => {
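A possible completion of that helper is sketched below. This is only a sketch: the `Memoized` interface here is a minimal local stand-in rather than memoizee's real typing, and it assumes (as the memoizee docs describe) that a memoized function exposes a `clear()` method dropping all of its cached results.

```typescript
// Local stand-in for memoizee's Memoized type, kept minimal for this sketch:
// the real memoized functions expose a `clear()` method that drops all
// cached results for that function.
interface Memoized<T> {
    (...args: any[]): T;
    clear(): void;
}

// Clear every passed cached function, e.g. before each test case or after
// a network reset/warp, so stale results cannot leak between runs.
const clearCaches = (cachedFunctions: Memoized<any>[]): void => {
    cachedFunctions.forEach(cachedFunction => cachedFunction.clear());
};
```

In the PR the function is the module's default export; the test setup would call it with the list of memoized functions used by the frontend code.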
As suggested, regarding clearing the cache.
Long-term, I think we should enable the drone cache, not only because it will decrease test execution time, but also because we might reach the Alchemy quota or need a paid account otherwise. So please create an issue for devops. For the short term, I need to understand what exactly is failing. Therefore, I would suggest refactoring this whole logic of forked time based on the actual restrictions we have: e.g. do not predict
So I've gone ahead and looked into how to invalidate the cache. Sadly, I did not find a function that just wipes every single cached item in the library. Alternatively, I could go stub the relevant function. So for experimental purposes I just manually wrote the functionality that disables caching for tests; see the diff. Assuming there is no killer mistake and the introduced changes actually disable caching successfully, the result is that the cache is not the reason for the differing results in the runs with/without cache.
Also tried this one out, and it did not yield any successful change in the results either, as of d9934f7.
Does this https://stackoverflow.com/a/47058957 not do the trick?
During debugging I noticed that the Hardhat logs differ depending on whether there's a cache prepared. The difference is at least the presence of the error on the chain that originates from the dss-chainlog:
Any immediate idea about what it might hint at? (It would save me time researching.)
This reverts commit f7b08cf.
Ok. The non-cached Hardhat setup is now capable of actually running the tests successfully. The thought process:
Conclusion / hypothesis of what could be the reason: the blockchain fork was actually super slow, the tests did not wait for its response and failed, the next test was started and the request to reset the chain was sent; thus the test runner and the Hardhat fork were not "synchronized", and the fork did not behave as expected because it was not yet done with the previously sent requests. Additional side notes:
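The hypothesis above suggests an ordering fix: never let the runner send a chain reset while an earlier fork request is still in flight. A minimal sketch of such serialization follows; the `enqueueForkRequest` name and the promise-chain approach are illustrative assumptions, not the project's actual code.

```typescript
// Illustrative sketch: funnel all requests to the fork through a single
// promise chain so that a chain-reset request cannot overtake an
// in-flight request from a previous test.
let pendingForkWork: Promise<unknown> = Promise.resolve();

const enqueueForkRequest = <T>(request: () => Promise<T>): Promise<T> => {
    // Start the new request only once the previous one has settled.
    const result = pendingForkWork.then(request, request);
    // Keep the chain alive even if this request rejects.
    pendingForkWork = result.catch(() => undefined);
    return result;
};
```

With something like this, a `hardhat_reset` enqueued by the next test would only be sent after the previous test's requests have settled, which is exactly the synchronization the hypothesis says was missing.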
Closing this for the moment, as we are not actively working on it.
Closes #232
As #232 (comment) requested, adding the test that fails/
Fixes that were added:
- approximateUnitPrice: based it on the next step (price drop) when the current step is close to the end.
- calculateTransactionCollateralOutcome: changed it so that it does not directly compare the values but instead subtracts them and compares the difference to a close-to-zero value. In other words, changed the line that duplicates the (owe > tab) logic from potentialOutcomeTotalPrice > auction.debtDAI to debtDai - potentialOutcomeTotalPrice < 0.00000000000001
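The described comparison change can be sketched as follows. This is illustrative only: the `outcomeCoversDebt` name is made up for the example, while the epsilon matches the value quoted above.

```typescript
// Close-to-zero tolerance from the fix described above.
const EPSILON = 0.00000000000001;

// Instead of directly duplicating the `owe > tab` comparison as
// `potentialOutcomeTotalPrice > auction.debtDAI`, subtract the values and
// compare the difference against a close-to-zero epsilon, so tiny
// floating-point discrepancies do not flip the result.
const outcomeCoversDebt = (potentialOutcomeTotalPrice: number, debtDai: number): boolean =>
    debtDai - potentialOutcomeTotalPrice < EPSILON;
```

A strict `>` comparison fails when the two numbers differ only by floating-point noise; the subtract-and-compare form treats such values as equal.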