Extract “Tangled circles” #35
Hi, I have resolved the problem.
Sorry for the late reply. Glad it's resolved. Please feel free to open a new issue if you have any questions :)
Hi, I'm sorry to bother you again. I encountered the following error while running hifiasm, and I'm not sure what it means. It's also worth noting that the memory usage can reach up to 600GB during the run.
Looks like an OOM kill, unless the job hit some time limit. Could you try resuming the run by reusing the bin files? If you have other samples to run, or runs killed before the log could say "bin files have been written", please try the meta_dev branch (commit f98f1ad, r74) instead, which tries to fix the high peak RSS issue and is otherwise identical to r73. I will merge r74 into master and update bioconda this week.
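A minimal sketch of both suggestions, assuming the usual hifiasm_meta command line (`-o` output prefix, `-t` threads, reads as positional arguments); the repository URL, file names, and thread count are placeholders to adapt to your setup:

```sh
# Resume: rerunning with the same -o prefix in the same working directory
# lets hifiasm_meta reuse the existing *.bin files instead of redoing
# error correction and overlapping (only possible if the bin files were written).
hifiasm_meta -t 32 -o sample sample.hifi.fq.gz 2> sample.resume.log

# Alternatively, build and run the meta_dev branch (commit f98f1ad, r74) from source.
git clone https://github.com/xfengnefx/hifiasm-meta.git
cd hifiasm-meta
git checkout meta_dev
make
./hifiasm_meta -t 32 -o sample ../sample.hifi.fq.gz 2> ../sample.r74.log
```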
Hi, has the latest r74 version been released? Does this version address the high peak RSS issue and help reduce memory usage? Thank you!
It is now merged into master.
Hi, I saw that you released an update, and I immediately installed it using conda. Unfortunately, I encountered the following error. Could you please help me understand what might be the cause?
Are you using bin files from the OOM-killed run above? From the log, I guess not? If it is indeed a new run: could you try simply rerunning the failed job with everything unchanged? I remember, from a very long time ago, seeing a segfault around this stage of a run that disappeared upon rerun; I failed to reproduce it afterwards, so whatever it was has remained unfixed... If a rerun does not resolve this segfault, I might need to roll HEAD back to r73. I wanted to say "share the data if it can be shared and I will troubleshoot" as usual, but I do not have access to HPC clusters right now. Sorry.
Hello, I am still encountering the above error when rerunning on the HPC. The data is all of the Colorectum data in PRJNA748109.
Hello. I'm sorry to bother you again. What should I do about the following log message? I still obtained assembly results despite it.
Sorry about the vague error. This run actually finished; both the assembly and the circle-finding results should have been produced. The error came from the built-in MAG binning, which failed to find any bins: either because the sample was simple and there was nothing to bin with, or because the assembly was fragmented and there was nothing to bin with. I should have included handling of this case in the latest patch, but forgot to... Is this the PRJNA748109 data that segfaulted, or a different sample?
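As a quick check that the assembly and circle outputs were indeed written despite the binning error, a hedged sketch; the prefix `sample` stands in for whatever was passed to `-o`, and it assumes the primary contigs are in `sample.p_ctg.gfa`:

```sh
# List the graph outputs written by this run.
ls -lh sample*.gfa

# Convert the primary contig graph to FASTA via standard GFA S-line extraction.
awk '/^S/ {print ">"$2; print $3}' sample.p_ctg.gfa > sample.p_ctg.fa
```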
Thank you for clarifying my confusion. The segmentation fault occurred during the run of the Colorectum data in PRJNA748109.
So did PRJNA748109 somehow manage to have a run without the segfault and get to the "checkpoint: post-assembly" part shown in the log posted above? Or did this sample always trigger the segfault while your other samples were assembled? Thanks for letting me know; I will remember to test on PRJNA748109 when I have access to servers. Sorry for having no actual fix at the moment.
Hello, I'm not sure what the reason is, but when I switch to another server, it runs successfully. However, on some servers, including clusters, it hits a segmentation fault. They were all installed via conda.
I see, thanks so much for the report. I will remember this when testing.
Hi, I am using hifiasm-meta for metagenome assembly, and the assembled results are divided into three types. However, I am encountering some issues, as mentioned above: I am unfamiliar with the software and don't know how to extract and reassemble the "tangled circles." I would greatly appreciate any help you can provide.
Best wishes
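For reference, a minimal sketch of pulling circular contigs out of the primary contig graph as a starting point, assuming hifiasm-meta's contig naming in which circular contig names end in `c` (linear ones in `l`); the prefix `asm` is a placeholder, and this does not by itself separate clean circles from tangled ones:

```sh
# Keep GFA segments whose names end in "c" (circular, under the assumed naming)
# and write them as FASTA for inspection or reassembly.
awk '/^S/ && $2 ~ /c$/ {print ">"$2; print $3}' asm.p_ctg.gfa > asm.circles.fa
```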