
Allow CI servers to specify if they want to know about all changes (totals + per step) or only totals #52

Open
dmonllao opened this issue Feb 20, 2015 · 1 comment

@dmonllao (Contributor) commented:

Today @rajeshtaneja and I have been discussing this tool and why the results we are getting are not satisfactory. There are multiple issues we should look at; this issue is about the first of them:

  • The per-step results are not reliable; considering that we send requests in a random order, IMO we should increase the per-step thresholds or just disable them. I think we should choose between testing our real-site mock's performance as a whole or testing each page's performance. A real site receives requests from concurrent users in a random order, and we try to mimic this random human behaviour by configuring JMeter to act the same way, but that means the conditions under which a user reaches each step differ from run to run, so the per-step results vary a lot.

If we had to choose, I think we are more interested in whole-site performance, to see how major core changes affect the general site performance. We could introduce a setting to select whether we want a comparison of the whole site's performance or a comparison of each step's performance, changing JMeter settings and threshold values accordingly. But given what we have been seeing and Moodle's random behaviours (caches, LASTACCESS_UPDATE_SECS...), I think it would be hard to get consistent results per step. So I would vote for, at least initially, increasing the per-step threshold values and adding an extra param to report::calculate_big_differences() to ignore per-step results, so we can rely on CI servers warning us about changes affecting whole-system performance; a rough sketch of that param is below. That is what this issue's title refers to.
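A minimal sketch of how that extra parameter could look. This is hypothetical only: the real report class in this tool has more context (run objects, per-variable thresholds, ...), and the array structure and the $ignoreperstep flag name are illustrative, not the actual signature:

```php
<?php
// Hypothetical sketch: skip the per-step comparison when the caller
// (e.g. the CI server) only cares about whole-run totals.
class report {

    /** @var float Allowed increase for run totals (5%). */
    const TOTAL_THRESHOLD = 0.05;

    /** @var float Allowed increase per step; higher because steps are noisy (50%). */
    const STEP_THRESHOLD = 0.50;

    /**
     * Returns descriptions of the values that grew beyond the thresholds.
     *
     * @param array $before ['totals' => [var => value], 'steps' => [step => [var => value]]]
     * @param array $after  Same structure as $before.
     * @param bool $ignoreperstep When true, only the run totals are compared.
     * @return string[]
     */
    public static function calculate_big_differences(array $before, array $after, $ignoreperstep = false) {
        $breaches = array();

        // Totals are always compared.
        foreach ($after['totals'] as $var => $value) {
            if ($value > $before['totals'][$var] * (1 + self::TOTAL_THRESHOLD)) {
                $breaches[] = "Total $var increased beyond the threshold";
            }
        }

        if ($ignoreperstep) {
            // The CI server only wants regressions that affect the whole run.
            return $breaches;
        }

        foreach ($after['steps'] as $step => $vars) {
            foreach ($vars as $var => $value) {
                if ($value > $before['steps'][$step][$var] * (1 + self::STEP_THRESHOLD)) {
                    $breaches[] = "Step $step: $var increased beyond the threshold";
                }
            }
        }

        return $breaches;
    }
}
```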

I'm currently testing an alternative: instead of using the MDL_PERFTOFOOT var + MDL_PERF_TEST (the one we created to catch data from redirects), remove all the scattered references to MDL_PERF_TEST and keep a single simple echo $performanceinfo['txt'] at the end of the shutdown function, guarded by MDL_PERF_TEST, in the same place where we write to error_log. MDL_PERF_TEST is only set by this tool's tests (as collateral damage we also remove this testing framework's references from the codebase). @danpoltawski can probably comment on this as he has experience working with Apache logs, but as far as I can see this solution would work exactly like reading from the Apache logs, with less trouble. Using this alternative we would catch everything, including redirections; a rough sketch of the idea is below. @danpoltawski, do you see any problem with this approach? I will paste all this in #48.
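A rough sketch of the idea, assuming a plain register_shutdown_function() hook; in Moodle the shutdown handling and get_performance_info() live in core, so the function name and exact placement here are illustrative, not the actual patch:

```php
<?php
// Illustrative only: print the performance summary at the very end of every
// response when the performance-testing define is set, so JMeter can read it
// from any request, including redirects where the normal footer never renders.
function perftest_append_performance_info() {
    if (!defined('MDL_PERF_TEST') || !MDL_PERF_TEST) {
        return;
    }

    // get_performance_info() builds the same structure used for the footer
    // and for the error_log output (db queries, memory usage, server load, ...).
    $performanceinfo = get_performance_info();

    // Plain-text summary appended to the response body.
    echo $performanceinfo['txt'];
}

// Runs after the script finishes, i.e. the same stage where we already
// write the performance info to error_log.
register_shutdown_function('perftest_append_performance_info');
```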

  • From what Rajesh told me, the CI performance server is randomly reporting some misses (404 HTTP code) on some runs. This is not expected and breaks the reliability of all the results: even though it is just one request, the results would be affected.

Probably the machine runs out of resources. We need to tune the web server and the database properly to ensure that we don't stress the machine; we are not doing stress tests, just performance tests, but the more users we use the more stable the results we can get.

@rajeshtaneja commented:

Thanks for looking at this, David.

I think it would be nice to first disable the per-step thresholds and see if we get consistent results. If that works, then we can add a setting to run both kinds of checks: whole-site performance and a comparison of each step's performance.

As for #48, I was thinking of saving that information in an external file, but that might not be helpful as it would need to be collated with the JMeter run. So your approach seems reasonable.

Let me know if I can be of any help.
