BUG: Action Fails #425
Same problem
|
same issue |
I also have the same problem. For me, it has been occurring for the past few days. |
It has been successful for a few days now. |
Mine has not worked once since 11 Mar 2023.
|
I've been getting this error for about a month now:
Same as the users above. |
I'm getting this error like all the users above.
|
Having the same issue for weeks too! |
@ddok2, @eby8zevin, @Fanduzi, @moncheeta, @yanskun, @UnixBear, @willnaoosmith |
Done! [...]
Preparing metadata (pyproject.toml): finished with status 'done'
ERROR: Ignored the following versions that require a different python version: 1.21.2 Requires-Python >=3.7,<3.11; 1.21.3 Requires-Python >=3.7,<3.11; 1.21.4 Requires-Python >=3.7,<3.11; 1.21.5 Requires-Python >=3.7,<3.11; 1.21.6 Requires-Python >=3.7,<3.11
ERROR: Could not find a version that satisfies the requirement opencv-python==4.2.0.34 (from versions: 3.4.0.14, 3.4.10.37, 3.4.11.39, 3.4.11.41, 3.4.11.43, 3.4.11.45, 3.4.13.47, 3.4.15.55, 3.4.16.57, 3.4.16.59, 3.4.17.61, 3.4.17.63, 3.4.18.65, 4.3.0.38, 4.4.0.40, 4.4.0.42, 4.4.0.44, 4.4.0.46, 4.5.1.48, 4.5.3.56, 4.5.4.58, 4.5.4.60, 4.5.5.62, 4.5.5.64, 4.6.0.66, 4.7.0.68, 4.7.0.72)
ERROR: No matching distribution found for opencv-python==4.2.0.34
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
Warning: Docker build failed with exit code 1, back off 2.58 seconds before retry.
/usr/bin/docker build -t 6c0442:7c019a37c0d94605a8feced2f4c32637 -f "/home/runner/work/_actions/anmol098/waka-readme-stats/V3/Dockerfile" "/home/runner/work/_actions/anmol098/waka-readme-stats/V3"
[...]
Error: Docker build failed with exit code 1
##[debug]System.InvalidOperationException: Docker build failed with exit code 1
##[debug] at GitHub.Runner.Worker.ActionManager.BuildActionContainerAsync(IExecutionContext executionContext, Object data)
##[debug] at GitHub.Runner.Worker.JobExtensionRunner.RunAsync()
##[debug] at GitHub.Runner.Worker.StepsRunner.RunStepAsync(IStep step, CancellationToken jobCancellationToken)
##[debug]Finishing: Build anmol098/waka-readme-stats@V3
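The pip failure above happens because `opencv-python==4.2.0.34` predates Python 3.11 support, so no installable version matches the pin in the Docker image; the resolver even lists the versions that would work. A minimal, hypothetical `requirements.txt` change following the resolver's own hints (this is not the project's actual fix):

```
# requirements.txt — hypothetical sketch, not the project's actual pin:
# the stale pin below has no wheels for Python 3.11
# opencv-python==4.2.0.34
opencv-python==4.7.0.72   # newest version the resolver lists as installable
```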
[...] Looks like it's a simple dependency issue. [EDIT] Here's the log for the master version, which gives 'Async is never awaited' error: [...]
Traceback (most recent call last):
File "/waka-readme-stats/main.py", line 221, in <module>
run(main())
File "/usr/local/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/waka-readme-stats/main.py", line 208, in main
stats = await get_stats()
^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/main.py", line 156, in get_stats
yearly_data, commit_data = await calculate_commit_data(repositories)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/yearly_commit_calculator.py", line 38, in calculate_commit_data
await update_data_with_commit_stats(repo, yearly_data, date_data)
File "/waka-readme-stats/yearly_commit_calculator.py", line 64, in update_data_with_commit_stats
commit_data = await DM.get_remote_graphql("repo_commit_list", owner=owner, name=repo_details["name"], branch=branch["name"], id=GHM.USER.node_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 293, in get_remote_graphql
res = await DownloadManager._fetch_graphql_paginated(query, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 267, in _fetch_graphql_paginated
initial_query_response = await DownloadManager._fetch_graphql_query(query, **kwargs, pagination="first: 100")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 229, in _fetch_graphql_query
return await DownloadManager._fetch_graphql_query(query, retries_count - 1, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 229, in _fetch_graphql_query
return await DownloadManager._fetch_graphql_query(query, retries_count - 1, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 229, in _fetch_graphql_query
return await DownloadManager._fetch_graphql_query(query, retries_count - 1, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[Previous line repeated 7 more times]
File "/waka-readme-stats/manager_download.py", line 231, in _fetch_graphql_query
raise Exception(f"Query '{query}' failed to run by returning code of {res.status_code}: {res.json()}")
Exception: Query 'repo_commit_list' failed to run by returning code of 502: {'data': None, 'errors': [{'message': 'Something went wrong while executing your query. This may be the result of a timeout, or it could be a GitHub bug. Please include `7028:69F2:240738D:49FE50C:642310BC` when reporting this issue.'}]}
sys:1: RuntimeWarning: coroutine 'AsyncClient.get' was never awaited
##[debug]Docker Action run completed with exit code 1
##[debug]Finishing: Generate Waka Stats
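The trace shows `manager_download.py:229` retrying the query recursively a fixed number of times before raising. A hypothetical sketch of the same idea as a loop with exponential backoff between attempts (the names `fetch_with_retry` and `fetch` are mine, not the project's):

```python
import asyncio

async def fetch_with_retry(fetch, query, retries=10, base_delay=1.0):
    """Retry a GraphQL query with exponential backoff.
    `fetch` is assumed to be an async callable returning (status_code, payload)."""
    delay = base_delay
    status = None
    for _ in range(retries):
        status, payload = await fetch(query)
        if status == 200:
            return payload
        # Back off before retrying, doubling the delay each time,
        # instead of hammering the API immediately as plain recursion does.
        await asyncio.sleep(delay)
        delay *= 2
    raise Exception(f"Query '{query}' still failing after {retries} attempts (last status {status})")
```

Backoff alone won't cure a hard 502 on an oversized query, but it helps when the failure is a transient timeout, which is what GitHub's error message suggests.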
[...] And, here's my YAML code: [...]
runs-on: ubuntu-latest
[...]
- name: Generate Waka Stats
  uses: anmol098/waka-readme-stats@master
  with:
    WAKATIME_API_KEY: ${{ secrets.WAKATIME_API_KEY }}
    GH_TOKEN: ${{ secrets.GH_TOKEN }}
    SHOW_PROJECTS: "False"
    SHOW_LOC_CHART: "False"
    SHOW_PROFILE_VIEWS: "False"
    SHOW_LANGUAGE_PER_REPO: "False"
    SHOW_COMMIT: "False"
    SHOW_DAYS_OF_WEEK: "False"
    SHOW_TIMEZONE: "False"
    SHOW_UPDATED_DATE: "False"
    SHOW_LINES_OF_CODE: "True"
    LOCALE: "en"
[...] |
I am facing the same error too. My YAML is quite different |
Hi! I have the same error and I don't know how to fix it. |
In my case, time-out errors happened when fetching the commit history of a large repo of mine.
You guys can give it a try: example |
@doctormin I used all of your flag settings, and it worked. Thank you |
@doctormin Thank you for the advice; however, it hasn't worked for me. I noticed that the error during the run is due to the local language we have defined in the YML file. Here is my YML file |
@iamgojoof6eyes same here. Error: File "/waka-readme-stats/main.py", line 156, in get_stats
yearly_data, commit_data = await calculate_commit_data(repositories)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/yearly_commit_calculator.py", line 38, in calculate_commit_data
await update_data_with_commit_stats(repo, yearly_data, date_data)
File "/waka-readme-stats/yearly_commit_calculator.py", line 64, in update_data_with_commit_stats
commit_data = await DM.get_remote_graphql("repo_commit_list", owner=owner, name=repo_details["name"], branch=branch["name"], id=GHM.USER.node_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 293, in get_remote_graphql
res = await DownloadManager._fetch_graphql_paginated(query, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 267, in _fetch_graphql_paginated
initial_query_response = await DownloadManager._fetch_graphql_query(query, **kwargs, pagination="first: 100")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 229, in _fetch_graphql_query
return await DownloadManager._fetch_graphql_query(query, retries_count - 1, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 229, in _fetch_graphql_query
return await DownloadManager._fetch_graphql_query(query, retries_count - 1, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/waka-readme-stats/manager_download.py", line 229, in _fetch_graphql_query
return await DownloadManager._fetch_graphql_query(query, retries_count - 1, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[Previous line repeated 7 more times]
File "/waka-readme-stats/manager_download.py", line 231, in _fetch_graphql_query
raise Exception(f"Query '{query}' failed to run by returning code of {res.status_code}: {res.json()}")
Exception: Query 'repo_commit_list' failed to run by returning code of 502: {'data': None, 'errors': [{'message': 'Something went wrong while executing your query. This may be the result of a timeout, or it could be a GitHub bug. Please include `4481:3E96:1CE9FF:1DCD42:642D4A29` when reporting this issue.'}]} Maybe it's because my profile readme has almost 2.7K commits? [EDIT]: Same error message with Here's my YML file |
Your error is quite different from my error. |
@iamgojoof6eyes Please, share your error message. |
@pseusys I did debug mode as you said. That's how I got this log: LOG here's my YAML code:
|
@ddok2 So, this behavior is not expected.
Unfortunately I can't run the queries for you because the repository in question is private and so I with my GitHub token don't have access to it. |
The error reappears when I run this locally, and it gives me this error: {
"errors": [
{
"type": "FORBIDDEN",
"path": [
"user",
"repositories",
"nodes",
3
],
"extensions": {
"saml_failure": false
},
"locations": [
{
"line": 5,
"column": 13
}
],
"message": "`LiteLDev` forbids access via a personal access token (classic). Please use a GitHub App, OAuth App, or a personal access token with fine-grained permissions."
},
{
"type": "FORBIDDEN",
"path": [
"user",
"repositories",
"nodes",
18
],
"extensions": {
"saml_failure": false
},
"locations": [
{
"line": 5,
"column": 13
}
],
"message": "`LiteLDev` forbids access via a personal access token (classic). Please use a GitHub App, OAuth App, or a personal access token with fine-grained permissions."
}
]
} and the value of these resps is [
{
"primaryLanguage": {
"name": "TypeScript"
},
"name": "fuck-cors",
"owner": {
"login": "lgc2333"
},
"isPrivate": false
},
null, // HERE
{
"primaryLanguage": {
"name": "Python"
},
"name": "nonebot_template_plugin",
"owner": {
"login": "Ikaros-521"
},
"isPrivate": false
},
] |
@lgc2333 Ok, so here we can see that this organisation has prohibited access to its repositories to the users with old personal access tokens (classic). |
@doctormin Thanks for your sharing, your YML file works for me. |
I can't use fine-grained access tokens in this action
https://github.com/lgc2333/lgc2333/actions/runs/4713861175/jobs/8359827916 |
@pseusys The error came from this line of code |
@pseusys, I still get the error on [this pr: #449]. I modified the code as below and it works fine. (just add ...
for branch in branch_data["data"]["repository"]["refs"]["nodes"]:
commit_data = await DM.get_remote_graphql("repo_commit_list", owner=owner, name=repo_details["name"], branch=branch["name"], id=GHM.USER.node_id)
for commit in [c for c in commit_data["data"]["repository"]["ref"]["target"]["history"]["nodes"] if c is not None]:
date = search(r"\d+-\d+-\d+", commit["committedDate"]).group()
... Is it okay for me to commit to this PR (#449)? |
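For readers hitting the same crash, the fix above boils down to skipping the `None` entries that GitHub's GraphQL API can return inside `history.nodes`. A standalone sketch of that guard (the helper name `safe_commit_dates` is mine, not the project's):

```python
from re import search

def safe_commit_dates(history_nodes):
    """Extract YYYY-MM-DD dates from commit nodes, skipping None entries.
    GitHub's GraphQL API can return None in `history.nodes`, which would
    otherwise crash on commit["committedDate"]."""
    dates = []
    for commit in (c for c in history_nodes if c is not None):
        match = search(r"\d+-\d+-\d+", commit["committedDate"])
        if match is not None:
            dates.append(match.group())
    return dates
```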
@ddok2, could you please inspect how it happens that |
@pseusys, I agree with your comment. I'll inspect the issue again. |
This issue was a GitHub problem. |
Yea, this is failing for me across the board, as I previously described, with the following stack-trace:
Seems like this is due to rate limit issues? I never had this happen until 11 Mar 2023. Is there something I can do to avoid the rate limit? |
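On the rate-limit question: GitHub's API responses carry `x-ratelimit-remaining` and `x-ratelimit-reset` (Unix epoch) headers, so a client could in principle wait out the window instead of failing. A small sketch (the helper name is hypothetical):

```python
from datetime import datetime, timezone

def seconds_until_reset(headers, now=None):
    """Given GitHub rate-limit response headers, return how many seconds
    to wait before retrying: 0 if quota remains, otherwise the time left
    until the reset timestamp."""
    remaining = int(headers.get("x-ratelimit-remaining", "1"))
    if remaining > 0:
        return 0.0
    reset = int(headers["x-ratelimit-reset"])
    if now is None:
        now = datetime.now(timezone.utc).timestamp()
    return max(0.0, reset - now)
```

Note, though, that the 502 in the traces above is a server-side timeout, not a rate-limit response; a true rate-limit hit would come back as a 403/429 with `x-ratelimit-remaining: 0`.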
I think the problem is that once the repository has enough commits, it takes too long for GitHub to process the data, so it returns an error. Mine has 2,720 commits as of now. If anyone tries my suggested fix, let us know! |
I think we should file an issue to GitHub support, shouldn't we? |
Yup |
Tried running the code after cleaning the repository commit history using this: git checkout --orphan latest_branch
git add -A
git commit -am "init"
git branch -D master
git branch -m master
git push -f origin master and… it didn't work. One thing I did notice is that the code is fetching some repositories that aren't mine (I have 58, it's fetching 64). Is there a way/option to make it fetch only repositories on my personal profile? It looks like it's fetching a repository I contributed to, but since it's private, I can't tell which one it is. |
@willnaoosmith, there's no such option yet, but we are always open to ideas and/or proposals! |
I am facing the same issue
Reading the issue discussion, I understand this is a problem with GitHub API and not this workflow. Am I right? |
I opened a ticket with GitHub for this issue to see what suggestions they might have. I received the following response, which likely confirms what was already known:
Hoping this helps? It sounds like the only solution is to create some form of elaborate query mechanism that reduces batch size and increases delays between batches until it finds a sweet spot. |
Yeah, honestly, I can't come up with a really good solution. Since we are speaking about an action, that runs on GitHub servers as well, increasing execution time might lead to some rate limits of GitHub-hosted action time showing up. I don't think there is a way of using GraphQL queries in a more efficient way than we do now (however, if you do know such a way, we all would be grateful for any ideas). |
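One way to express the "reduce batch size until it finds a sweet spot" idea from the comment above is an adaptive page size: halve the GraphQL pagination size after a failed (e.g. 502) page, and grow it slowly again after successes. A hypothetical sketch, not code from the project:

```python
def next_page_size(current, failed, minimum=10, maximum=100):
    """Adapt the GraphQL page size: halve on failure (down to `minimum`),
    creep back up by 10 on success (up to `maximum`)."""
    if failed:
        return max(minimum, current // 2)
    return min(maximum, current + 10)
```

The caller would re-issue the failed page with the smaller size, so a repo that times out at `first: 100` might still succeed at `first: 50` or `first: 25`, at the cost of more requests.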
@pseusys thanks for responding with your insights. I'm not at all experienced in GraphQL, so I probably am not able to contribute much other than ideas, but I do like you caching idea. It sounds like a decent way of working with it. Further, that could probably be set up to start caching on the first run, retain the cache when it errors out, then use it for the second run, until it can complete without errors. Would be cool if on rate limit error, the action could be rescheduled for an hour or two later to run again, but if not, it would just take a few days to get caught up. |
Well, I don't think we are really able to reschedule actions, only delay within GitHub action execution time limit (which is like 5 hours max I guess). Yes, and in case of an error, we can indeed, proceed with the results we have obtained, cache them and maybe display a warning badge at the bottom of the readme, like this: Warning The action wasn't able to retrieve all user data during latest run due to GitHub internal API limitations. We hope that the info gathered in the next run will be more complete! And during the next run we can use cache for the repos/PRs/commits/branches that were not updated since. |
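The caching idea sketched in this comment could look roughly like this: keep the last successful result for any repo whose fetch failed in the current run, and persist the merge for the next run. All names here are hypothetical, not the action's actual code:

```python
import json
from pathlib import Path

def merge_with_cache(cache_path, fresh, failed_keys):
    """Merge this run's results with the cached ones from the last run.
    `fresh` maps repo name -> data (or None when the fetch failed);
    repos listed in `failed_keys` keep their cached value."""
    path = Path(cache_path)
    cached = json.loads(path.read_text()) if path.exists() else {}
    merged = dict(cached)
    for key, value in fresh.items():
        if key not in failed_keys and value is not None:
            merged[key] = value  # fresh data wins when the fetch succeeded
    path.write_text(json.dumps(merged))
    return merged
```

On a run with partial failures, the readme would then be rendered from `merged`, with the warning badge described above signalling that some entries are stale.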
Describe the bug
The action has been failing for a month...
GitHub repository link