Collecting download count from GitHub broken #1040
It'd be great if you provided a PR - I can test... except that perhaps this could only be tested on the beta version of backdropcms.org, so we would need someone who has access to patching beta - @jenlampton or @bugfolder ? |
It can be tested by anyone who has our GitHub key, which we'd be willing to provide to trusted community members for testing purposes ;) DM me and I'll send it next time I'm at my computer :)
|
In fact, to get download counts, there is no need to use a GitHub key. I believe this is now publicly available with the GET request @indigoxela posted above. However, how would we test the PR on the backdropcms.org site? I'm a bit unclear about this. |
An odd finding here: some of the numbers displayed on B-org seem correct (rules, backup_migrate, imce...). Others, for projects that have existed for a while, are completely wrong (leaflet). That suggests that this odd and dated function borg_project_metrics_get_downloads() might possibly still work, and the problem's somewhere else... Stats ARE broken. But possibly not because of borg_project_metrics_get_downloads(). Maybe there's something wrong with these newer nodes on B-org, or maybe there are just too many projects to still handle them that way (without batches, just in loops, it seems). |
Some more comparison of actual numbers and numbers displayed on B-org... It feels sort of random. Some numbers are correct. The majority are not. The newer a project is, the more wrong the numbers. Projects created in the last few years don't display any download counts at all. My conclusion: it's the number of projects to loop over. Some timeout or memory exhaustion might strike. Or possibly GitHub limits the API calls. Hard to know "from outside". |
That sounds plausible! Perhaps we could use a queue (Backdrop's Queue API) for this, instead of trying to load everything and loop over it all at once and risk a timeout. GitHub says there is a maximum rate of 5,000 requests per hour. It's hard to know how many nodes b-org currently has for projects, themes and layouts; I bet there are more than 5,000. Perhaps a cleanup of old nodes is needed, in addition to starting to use a queue. All of this is really hard to test without full access to the b-org site. We need a person with full access to be on board. |
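For what it's worth, the current rate-limit state can be checked without credentials via GitHub's public /rate_limit endpoint (that request itself doesn't count against the limit). A minimal sketch; the user-agent string is just a placeholder:

```php
// Quick check of the current GitHub API rate limit.
$ch = curl_init('https://api.github.com/rate_limit');
curl_setopt_array($ch, array(
  CURLOPT_RETURNTRANSFER => TRUE,
  // GitHub rejects requests without a User-Agent header.
  CURLOPT_USERAGENT => 'backdropcms.org metrics check',
  CURLOPT_HTTPHEADER => array('Accept: application/vnd.github.v3+json'),
));
$data = json_decode(curl_exec($ch));
curl_close($ch);
// "limit" is 60/hour unauthenticated and 5000/hour with a token.
print $data->rate->remaining . ' of ' . $data->rate->limit . " requests left\n";
```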
@bugfolder is the guy we need for this. I think also, |
Available to test and deploy. I think anyone can build a local instance of b.org for testing, but (a) it's a lot trickier now that CiviCRM is part of b.org, and (b) there are some things you can't handshake with GH from a local dev site (TMK). |
Also, a local instance will not have the possibly thousands of nodes pointing at projects. |
Sure they would. That would be the primary point of building a local version of backdropcms.org (as opposed to a new vanilla installation of B), to have all the nodes, etc. You use the sanitized dbs for backdrop and CiviCRM; the former contains the nodes. |
This is a good idea anyway, but I don't think we are hitting the timeout (yet). There are 1.2k projects in backdrop-contrib, so we can't have more than that many project nodes on b.org (contrib projects include those without releases).
Maybe we should update the README with some information about how to set up a local site? I'll create a separate issue for that :)
We should document these (maybe also in the README), as you may be right, but I'm not sure exactly what they are. AFAIK getting information FROM GitHub should work as expected, but pushing things TO GitHub (like the zipped project) is only possible from b.org. We should do some testing to confirm. |
Yes, but perhaps there is more than one node per project? Perhaps one per release? |
Those are project release nodes (a different type). I don't know if they get usage data separately? They might! edit: we have 2974 project release nodes, so still not more than the limit. |
That brings us back to the initial question: why is this broken? It's not working. That's a fact. And it seems to have been broken for quite a while now. What I can see is that this count fetch happens once a day for all projects in one loop. And that seems questionable to me, given the growing number of projects. That there are "only" 2974 project release nodes does not mean only 2974 requests will be made to the GH API. There are several fetch jobs, and even one job can cause several requests to the GH API if there are many releases (pages). Here the fetch happens (via cron, all at once): backdropcms.org/www/modules/custom/borg_project_metrics/borg_project_metrics.module Line 119 in 6299472
This is the actual fetch: backdropcms.org/www/modules/custom/borg_project_metrics/borg_project_metrics.module Line 43 in 6299472
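For readers without the module open, here is a minimal sketch of what such a fetch against the public GitHub releases API looks like. This is not the module's actual code; the function name, pagination cap, and user-agent string are placeholders:

```php
/**
 * Sum the download counts of all release assets for one repository.
 * Illustrative sketch only; pagination and error handling simplified.
 */
function example_github_download_count($owner, $repo) {
  $total = 0;
  for ($page = 1; $page <= 10; $page++) {
    $url = "https://api.github.com/repos/$owner/$repo/releases?per_page=100&page=$page";
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
      CURLOPT_RETURNTRANSFER => TRUE,
      CURLOPT_USERAGENT => 'backdropcms.org metrics',
      CURLOPT_HTTPHEADER => array('Accept: application/vnd.github.v3+json'),
    ));
    $body = curl_exec($ch);
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    if ($code != 200) {
      // Repo moved, deleted, or a network problem; real code needs to
      // distinguish these cases.
      break;
    }
    $releases = json_decode($body);
    if (empty($releases)) {
      break;
    }
    foreach ($releases as $release) {
      foreach ($release->assets as $asset) {
        $total += $asset->download_count;
      }
    }
    // Each page of releases is one additional API request.
    if (count($releases) < 100) {
      break;
    }
  }
  return $total;
}
```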
|
I feel the culprit is a timeout - but as @indigoxela said, it's hard to tell unless someone can install a local version using the current database and config. |
I've started putting together instructions for setting up a local instance with the current (sanitized) db and config. Should have something in the next few days, will ping here when it's in place. |
I'm thinking that testing whether this is a timeout or memory issue will be tricky locally, as this may depend on server settings. I know so little about Apache and servers - do they typically create some sort of error log when a process times out or runs out of memory? If so, has anyone checked the logs for b-org? |
Timeout and memory issues are in PHP, which has good logging. There are a bunch of PHP errors in the logs on the live site (which should be addressed), but nothing about timeout or memory limits. :/ The PHP configurations can be seen at https://backdropcms.org/admin/reports/status/php (if you have access) but we should really add a |
For those wanting to set up a local instance, I've created a PR on #1041 with instructions that you can try out. If you find anything unclear or non-working, post a comment on the issue and I'll update the PR. DM me on Zulip if you need credentials for the sanitized db site. |
Meanwhile, I installed our GH token credentials on my local, enabled Devel module, and ran this code to get downloads for a single module:
Result was "0". Which seems a bit low. Stepping into the debugger, after the curl call and separating out the header and body, I get this result for the body:
The value of
I'm just starting to read up, so if this triggers any feedback or suggestions for changes, please post. |
This URL is wrong. The correct one should be |
But I think the problem is that you are calling the function |
Oops closed by accident |
Ah.
returns 10245, which is the correct number. So that, at least, works. |
Can you check |
Again, it seems this has been missed: the download count seems correct for some projects, but is wrong for most. The project nodes for the loop get collected without sorting in the db query, so I'd assume the ordering is as-is from the database (nid). |
This returns 818. |
Yes! So this means that:
So, if the problem is a timeout, perhaps we can try handling this with batch calls. Right now this is all handled in one page request. Or, if the problem is memory, then by using BackdropQueue and breaking it down into several cron calls. |
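A minimal sketch of what the BackdropQueue route could look like, assuming Backdrop's hook_cron_queue_info() pattern (carried over from Drupal 7). The content type and field names below are placeholders, not the real module's:

```php
/**
 * Implements hook_cron().
 *
 * Instead of fetching download counts for every project in one loop,
 * enqueue the node IDs and let a queue worker process them a bit at a
 * time on each cron run. (A real implementation would only re-queue
 * each node once per day.)
 */
function borg_project_metrics_cron() {
  $queue = BackdropQueue::get('borg_project_metrics_downloads');
  $result = db_query("SELECT nid FROM {node} WHERE type IN (:types) AND status = 1", array(
    // Placeholder content type names.
    ':types' => array('project_module', 'project_theme', 'project_layout'),
  ));
  foreach ($result as $row) {
    $queue->createItem($row->nid);
  }
}

/**
 * Implements hook_cron_queue_info().
 */
function borg_project_metrics_cron_queue_info() {
  return array(
    'borg_project_metrics_downloads' => array(
      'worker callback' => 'borg_project_metrics_downloads_worker',
      // Spend at most 60 seconds per cron run on this queue.
      'time' => 60,
    ),
  );
}

/**
 * Queue worker: update the download count for a single project node.
 */
function borg_project_metrics_downloads_worker($nid) {
  $node = node_load($nid);
  // borg_project_metrics_get_downloads() is the existing fetch function;
  // its exact parameters are assumed here, as are the field names.
  $count = borg_project_metrics_get_downloads($node->field_project_github[LANGUAGE_NONE][0]['value']);
  if ($count > 0) {
    $node->field_download_count[LANGUAGE_NONE][0]['value'] = $count;
    node_save($node);
  }
}
```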
I still think it's a big improvement over what we have now. Maybe it's okay that we check for the first download of a project more often? |
And here are some possible reasons. I installed the patch, then put a debugger breakpoint on the totaling line in
And then took a look around each time it stopped, for the first 10 or so projects. First project was
So this means that this project will show up every single time, since its project node is never modified. And the loop itself is inefficient. The next several projects I stepped through were more of the same, with a few more "Moved Permanently" or "Not found" responses, before the loop finally reached a project whose count did get updated. After letting cron finish running once, I see 48 project nodes that got updated, from which I infer that there are 152 nodes like the above clogging the system, since they get called every time because their changed date is never updated. Running cron again bumps the number of updated projects to 82. I infer that the patch basically works, in that repeated cron runs would eventually get all the projects updated, but we should modify the code somehow so that projects that return 0 for downloads aren't repeatedly updated. Perhaps just update their changed date anyway without actually changing anything. |
Quoth @jenlampton
(That appeared while I was typing up the results of my analysis above.) I agree, the patch is a big improvement. It seems, though, that as long as we're fixing this, we could fix it in such a way as to eliminate the clogging projects from each run. If, sometime in the future, the number of clogging projects got to be over 200, we'd be back in this situation again. |
I thought we weren't actually hitting a rate limit? If not, why don't we just batch and do all the projects every day, rather than only 200? I think it's reasonable to use the updated time for the sort (so we start with those that were updated longest ago) but we should still go through all of them every day and the 200 count won't be an issue. |
My understanding from reading @indigoxela's PR is that it will do them all every day in hourly batches of 200, always starting with those that were updated longest ago. But it looks like the first 150 or so in each batch of 200 are the same oldsters who never get updated. For example, two successive runs of the cron job return an array with So I think that @indigoxela's approach in the PR would accomplish what you requested; it just needs to update the nodes that returned 0 (for whatever reason) so they won't get checked again for 24 hours. |
Maybe we also need another issue to remove those project nodes (like https://backdropcms.org/project/paywall_braintree) that correspond to nonexistent GH projects. Perhaps have the metrics cron job post a watchdog report for nonexistent projects as it encounters them to make them visible to administrators. |
It sounds like the batch information might not be updating properly if we're catching the same items again before we've gone through the whole COUNT once. Here's an example of what the batch update could look like:
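(The snippet originally posted here didn't survive in this copy of the thread; below is a rough reconstruction of an update-hook-style batch over project nodes, in the same spirit, with placeholder function, type, and variable names.)

```php
/**
 * Example of batching the download-count refresh, modeled on an update
 * hook batch using $sandbox. Names are placeholders.
 */
function borg_project_metrics_update_1000(&$sandbox) {
  if (!isset($sandbox['progress'])) {
    $sandbox['progress'] = 0;
    $sandbox['max'] = db_query("SELECT COUNT(nid) FROM {node} WHERE type = :type AND status = 1", array(':type' => 'project_module'))->fetchField();
    $sandbox['current_nid'] = 0;
  }
  // Process 50 nodes per pass.
  $nids = db_query_range("SELECT nid FROM {node} WHERE type = :type AND status = 1 AND nid > :nid ORDER BY nid ASC", 0, 50, array(
    ':type' => 'project_module',
    ':nid' => $sandbox['current_nid'],
  ))->fetchCol();
  foreach ($nids as $nid) {
    $node = node_load($nid);
    // ... fetch and store the download count here ...
    $sandbox['progress']++;
    $sandbox['current_nid'] = $nid;
  }
  $sandbox['#finished'] = empty($sandbox['max']) ? 1 : ($sandbox['progress'] / $sandbox['max']);
}
```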
Note: this code is completely untested and intended only to indicate intent :) |
@bugfolder that's an interesting finding. So we have zombie nodes that show up on B-org, but their repo moved or got deleted? We could catch that by verifying that curl comes back with a successful HTTP 200. The logic problem with zero downloads needs consideration, too. @jenlampton it looks like you're coming up with a totally different approach, did I get that right? Does this make my attempt (PR) obsolete? |
Updated my PR to consider some of the concerns:
- Consider the HTTP return code when getting the download count
- Skip the loose check for $num, as borg_project_metrics_get_downloads() always returns a number, and 0 is a valid one
However, I've no idea how to handle the "zombie project nodes". That's not something the metrics module can do. But now they should get updated too, so they don't keep sticking in the update queue, and watchdog warns about them. Again: test cautiously, as I couldn't. |
My code example was based off an update hook batch, so not directly applicable. I didn't see any batching in the current PR, so I expect that happens elsewhere in the module? I haven't looked at it yet, just thinking "out loud" about how to solve the problems mentioned, since it sounded like they were batch-related :)
|
We should probably solve the zombie nodes problem in another issue. Agreed, they aren't really part of the project count solution.
|
It may be totally nonsense, but something odd came to my mind... Cron runs every 5 minutes on B-org? Doesn't that mean we have to get our stuff done within 5 minutes, before the next job starts? I'm no expert with Backdrop cron, though. If it were so, could the reported problem with broken stats possibly have started when cron was switched to 5 minutes? |
Do we know how long it takes currently? I have some long-running cron tasks on other sites and it's not really an issue if another cron is skipped, but if it skips 5 or more, that can start to be an issue.
|
Currently watchdog is reporting "Cron run completed" on the 5's (e.g., 10:20, 10:25, ...), which suggests that it's taking less than a minute. I observed in |
@bugfolder more interesting findings... 😉 Can you double-check that you're inspecting the right hour? That the "Attempting to re-run..." message doesn't show up at all in the logs is good news, though. I'm curious what testing the latest changes in my PR will show - hoping for some improvement. 😉 |
Code certainly looks good, and I'll test it as soon as I can shake loose some time (and the right kind of time). Hoping soon. |
I've done some testing locally (running cron manually, then resetting Also, as intended, problems getting downloads are now reported in watchdog, identifying the problematic project. After 3 cron runs, there were 19 watchdog reports. Some examples:
And so forth and so on. At this point, it appears that we've fixed the immediate problem, and with 200 checks 23 times per day (4,600 projects a day), we can handle a very large number of future projects. So I'd advocate that we merge this into the site repo and then check back after a day. And we'll have the exceptions/zombies represented in watchdog, so the separate issue will have some grist to work with. |
That So it might be that |
So maybe if the HTTP return code was 0, one should not update the downloads number to zero, in hopes that the next attempt will get through to the server. Perhaps detect that situation and return a total of -1, and then, in the update loop, if the total is -1, don't update the downloads number for the node, but still save the node to update its changed date? |
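A tiny sketch of that proposal (placeholder variable and field names, not the PR's actual code):

```php
// Inside the cron update loop; $node is the project node being updated
// and $repo its GitHub repository (placeholder names).
foreach ($project_nodes as $node) {
  $repo = $node->field_project_github[LANGUAGE_NONE][0]['value'];
  // -1 signals "couldn't reach GitHub at all"; 0 is a legitimate count of zero.
  $num = borg_project_metrics_get_downloads($repo);
  if ($num >= 0) {
    $node->field_download_count[LANGUAGE_NONE][0]['value'] = $num;
  }
  // Saving either way bumps the node's changed timestamp, so it drops to
  // the back of the "least recently updated" ordering until tomorrow.
  node_save($node);
}
```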
That sounds great.
|
That's a good point. We wouldn't want the count to get reset to 0 in that case... 🤔 |
PR updated (yet another time: test cautiously). In case of network problems that aren't in any way related to the repo, it now skips over the node, to pick it up again in the next run. |
Yes, I think so. If it only shows up once, it can be ignored, but if it's happening every hour, something needs looking into. |
Sorry for the long delay; I was traveling. Now back. I've tested out the latest PR on my local site. Although it generated a slew of "Network problem while fetching..." reports in watchdog, it successfully updated 80-odd modules, and a second cron call updated another 80 or so. Most importantly, there was no smoke or flames. So I am in favor of going ahead and merging this PR and deploying it. We can see if the "network problem" afflicts the real server, or was peculiar to my local installation, and meanwhile, lots of modules that had never been getting updated will be brought up to date. |
Let's do it! thank you everyone for helping get this resolved. |
Issue #1040: Fix broken download count fetched from Github
Great teamwork here!!! 🙌🏼 🎉 ❤️ I am currently on holidays, so trying to keep up with the chat and issue queues as best as I can while the girls are sleeping at night... Can we please add some inline comments for the sake of people reviewing the code in the future, so that certain things don't seem "magic"? I went through this thread, and here's a few of the comments that I think should be captured as inline comments:
The bits in the code that seem appropriate for these comments are the following:

```php
$max_per_cron = 200;
```

```php
$result = db_query_range("SELECT nid FROM {node} WHERE type IN (:types) AND status = 1 AND changed < :one_day_ago ORDER BY changed ASC", 0, $max_per_cron, array(
```

```php
), WATCHDOG_WARNING);
```

```php
$total = -1;
break;
```

...and/or here:

```php
// The value "-1" indicates some sort of network problem.
if ($num != -1) {
```

Also, this was not apparent to me straight away:

```php
// Prevent overlap with weekly jobs.
if (date('G') != 23) {
```

...I was left wondering "Why? How? ...and what weekly jobs?". Sorry, I can't provide a PR at the moment - most likely after a couple of weeks(?) ...unless someone beats me to it. |
EDIT: the problem is not what I thought initially. More findings in the comments.
The most plausible reason is some timeout problem.
Previous (wrong assumption): This is the wrong header:
https://github.com/backdrop-ops/backdropcms.org/blob/main/www/modules/custom/borg_project_metrics/borg_project_metrics.module#L53
Actually, the API now requires something like:
Note the header:
Accept: application/vnd.github.v3+json
And that CURLOPT_USERAGENT in that function seems like an odd decision, too. 😉
And I don't get the Authorization header (all repos are public, so all repo info is, too).