Could you please share an official torch or numpy implementation for GNN reranking? #379
Comments
Hi @jifeiluo, could you provide more details about the CUDA problem?
I guess the code is a little out of date. For example, when I compile it with torch 2.0, one of the errors I receive is as follows. When compiling with Python 3.7 and torch 1.4, a different problem appears. My CUDA version is above 11. Could you test this code with the newest torch version on Ubuntu, and specify the required gcc version more precisely? Thanks for your response, and I'm looking forward to your CPU version.
I have the same problems.
@jifeiluo @liguopeng0923 Hello, I followed the PyTorch CUDA extension tutorial to write the code. You can update it to the corresponding version by checking the new tutorial.
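For readers hitting the same compilation errors, a typical starting point is the `setup.py` used by PyTorch's C++/CUDA extension tutorial. The sketch below is hypothetical: the module name `gnn_rerank_cuda` and source file names are placeholders, not this repository's actual layout, and the compile flags are only a common fix for newer compilers.

```python
# Hypothetical setup.py sketch for rebuilding a CUDA extension against a
# newer PyTorch. Module and file names are placeholders, not the
# repository's actual layout.
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

setup(
    name="gnn_rerank_cuda",
    ext_modules=[
        CUDAExtension(
            name="gnn_rerank_cuda",
            sources=["gnn_rerank.cpp", "gnn_rerank_kernel.cu"],
            # newer toolchains often require an explicit C++ standard
            extra_compile_args={"cxx": ["-std=c++17"],
                                "nvcc": ["-std=c++17"]},
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)
```

Built with `python setup.py install` (or `pip install .`); mismatches between the CUDA toolkit, gcc, and the torch build are the usual source of the errors reported above.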
@Xuanmeng-Zhang @layumi Thanks for your response. What about the CPU version you mentioned in the paper? Can you share it? Since I have little experience with C++ extensions, I still hope you can share a torch- or numpy-based version first so we can quickly analyse the GNN's contribution; that would help us much more than modifying the code ourselves according to the new tutorial. And if possible, could you keep the code up to date, since you are more familiar with the PyTorch CUDA extension? Thanks!
Me too. Actually, the GPU version is not suitable for large datasets, but unsupervised learning needs large numbers of images.
Hello, are you still there? @layumi, I made some rough adjustments to the code today, and it now depends only on numpy and torch. But there are some details I'm not quite sure about, such as the aggregation function. Can you help verify them and determine the specific parameters? Thanks!
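For reference, the kind of numpy-only re-ranking being discussed can be sketched roughly as follows. This is not the paper's official implementation: the aggregation rule (weighted averaging over a top-k similarity graph) and the parameters `k`, `steps`, and `alpha` are illustrative assumptions.

```python
import numpy as np

def gnn_rerank(features, k=5, steps=2, alpha=0.5):
    """Sketch of graph-based re-ranking: build a k-NN graph from cosine
    similarity, propagate features over it, and return the refined
    similarity matrix. Parameters are illustrative, not official."""
    # L2-normalize so dot products are cosine similarities
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = x @ x.T
    n = sim.shape[0]
    # adjacency: keep each node's top-k neighbours (excluding itself),
    # clipping negative similarities to zero
    adj = np.zeros_like(sim)
    idx = np.argsort(-sim, axis=1)[:, 1:k + 1]
    rows = np.arange(n)[:, None]
    adj[rows, idx] = np.maximum(sim[rows, idx], 0.0)
    # row-normalize so propagation is a weighted average of neighbours
    adj = adj / (adj.sum(axis=1, keepdims=True) + 1e-12)
    # feature propagation: mix each node with its aggregated neighbours
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * (adj @ x)
        x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return x @ x.T  # refined similarity used for re-ranking
```

Re-ranking then simply sorts the gallery by the refined similarity row of each query instead of the raw cosine similarity.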
Hi Jifei, what is the mAP before and after re-ranking?
@jifeiluo @Xuanmeng-Zhang What should I do when I have a very large dataset? (The query and reference sets each contain about one hundred thousand images.)
For a baseline mAP of 88.01, k-reciprocal re-ranking achieves an mAP of 94.77, while the code above gives 94.54. Is that normal? Could you please check the code? @Xuanmeng-Zhang @layumi
Maybe you can use the CPU version.
Yes, the implementation seems to have no problem. You can adjust the number of propagation iterations to see how the performance changes.
All right, thanks for your response! |
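As a toy illustration of why the number of propagation iterations matters: each step of averaging over the graph spreads information one hop further, while too many steps over-smooth everything toward similar values. The graph and features below are made up for demonstration and are unrelated to the repository's actual code.

```python
import numpy as np

# toy 3-node chain graph with self-loops, rows normalized to sum to 1
adj = np.array([[0.5, 0.5, 0.0],
                [1/3, 1/3, 1/3],
                [0.0, 0.5, 0.5]])
x = np.array([[1.0], [0.0], [0.0]])  # one-hot feature on node 0

for step in range(1, 4):
    x = adj @ x  # one propagation step: average over neighbours
    print(step, x.ravel())
```

After one step only node 0's direct neighbour sees any signal; after three steps the signal has reached the far end of the chain, which is why tuning the iteration count trades locality against over-smoothing.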
My suggestion concerns the project Understanding Image Retrieval Re-Ranking: A Graph Neural Network Perspective. The current CUDA version is not easy to verify quickly due to unexpected compilation problems. Beyond the GPU acceleration, the GNN algorithm itself is worth further research. Can you share an official torch or numpy implementation? Thanks.