Inference over Facenet makes different results on Jetson Nano and dGPU #41
Comments
Hi, 500 and 1000 are values of what? Kindly elaborate.
@shubham-shahh I think these are the embeddings. @IsraelLencina I am also trying to reproduce this setup on a dGPU and am facing several issues.
Yes, they're embeddings. I've seen that the change happens after the inference.
Hi again. I've seen in the NOTES section that the .uff and .engine files are GPU-specific, so I regenerated everything from step 3 (as described in the NOTES section), but the "GPU embeddings" still have larger values than the "Jetson embeddings".
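One way to check how far apart the two devices' outputs actually are is to dump the embedding for the same input image on each machine and compare them numerically. This is only a sketch; the file names and embedding values below are hypothetical stand-ins for whatever the repo's pipeline produces:

```python
# Sketch: compare embeddings produced for the SAME input image on two devices.
# The arrays below are synthetic placeholders; in practice you would load
# embeddings saved with np.save() on the Jetson and on the dGPU machine.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Synthetic stand-ins for saved embeddings:
jetson_emb = np.array([0.10, 0.20, 0.30])
dgpu_emb = np.array([0.10, 0.20, 0.31])

sim = cosine_similarity(jetson_emb, dgpu_emb)
print(f"cosine similarity: {sim:.4f}")
print(f"norm ratio (dGPU / Jetson): {np.linalg.norm(dgpu_emb) / np.linalg.norm(jetson_emb):.4f}")
```

If the cosine similarity is close to 1.0 but the norm ratio is large, the two devices agree on the direction of the embedding and differ only in scale, which would point at a missing or differing normalization step rather than a broken engine.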
Are you referring to the master branch or the develop branch?
Always master.
Hi, I'm using this repo on a laptop with TensorRT (the TensorRT 20.03 Docker image) and also on a Jetson. The code works without problems on the Jetson: the detections are OK and the values are in the expected range.
But when I run this repo on my own laptop, it returns values between 500 and 1000. Do you know why that is?
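Facenet embeddings are commonly compared after L2 normalization, so raw values in the hundreds on one machine could mean that a normalization step is being skipped or applied differently in that pipeline. That is only an assumption about the cause; the sketch below just shows what the normalization step looks like:

```python
# Sketch: L2-normalize a raw embedding to unit length.
# The raw values are synthetic, chosen to mimic the reported 500-1000 range;
# whether the repo's dGPU path actually skips this step is an assumption.
import numpy as np

def l2_normalize(embedding: np.ndarray) -> np.ndarray:
    """Scale an embedding to unit length so magnitudes are comparable across devices."""
    norm = np.linalg.norm(embedding)
    return embedding / norm if norm > 0 else embedding

raw = np.array([512.0, 730.0, 990.0])  # synthetic values in the reported range
unit = l2_normalize(raw)
print(f"norm before: {np.linalg.norm(raw):.1f}, norm after: {np.linalg.norm(unit):.1f}")
```

After this step, distances between embeddings depend only on direction, so two pipelines that differ by a scale factor would produce matching results.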