
Deepstream #30

Open
hirwa145 opened this issue Feb 1, 2021 · 95 comments · Fixed by #36

Comments

@hirwa145 commented Feb 1, 2021

Is there a way I can deploy this using NVIDIA DeepStream, or create a DeepStream app from it?

@shubham-shahh (Collaborator)

Check this; the dlib model and FaceNet have similar outputs:
https://forums.developer.nvidia.com/t/face-recognition-using-dlib-with-jetson-nano/167172

@hirwa145 (Author) commented Feb 2, 2021

So it is possible?

@nwesem (Owner) commented Feb 5, 2021

@hirwa145 this was asked before, and I think it is a great idea. Let me know if you are interested in working on it.

@shubham-shahh (Collaborator) commented Feb 5, 2021

@nwesem I am working on a similar project, but first I'm trying to use dlib's model for face recognition with DeepStream.

@hirwa145 (Author)

Have you succeeded with it?

@shubham-shahh (Collaborator)

Currently I am working on https://github.com/riotu-lab/deepstream-facenet, because running this repo with DeepStream would also require us to configure MTCNN for DeepStream.

@shubham-shahh (Collaborator)

Hey, I managed to get FaceNet working with DeepStream, but it's a Python implementation. Shall I put up a pull request? @nwesem

@hirwa145 (Author) commented Feb 15, 2021

@shubham-shahh yes, please upload it. I am very interested to see how you managed to make it work with DeepStream.

@shubham-shahh (Collaborator)

I have a Python edition; currently I am working on a C++ version as well. Shall I PR the Python version?

@hirwa145 (Author)

Yes, you can PR the Python version.

@hirwa145 (Author)

@shubham-shahh have you opened the pull request for the Python version?

@shubham-shahh (Collaborator)

Not yet; @nwesem should create a new branch for the Python version. Meanwhile, I'll commit the project to my account.

@hirwa145 (Author)

Yeah, that is also okay. You can commit it to your account while we wait for him to open a new branch.

@shubham-shahh (Collaborator)

> Yeah, that is also okay. You can commit it to your account while we wait for him to open a new branch.

I'll complete it by this weekend.

@hirwa145 (Author)

Alright, thank you.

@nwesem (Owner) commented Feb 17, 2021

Hey @shubham-shahh, cool! Thanks for contributing to this project! I am looking forward to seeing your implementation. I added you as a collaborator on the project; please add your code to a branch called python-develop and I will check it out and merge it if everything works as expected. Currently I am pretty busy with work, but I will take the time to test your implementation ASAP.
If you can't create a new branch, just push it to your forked repo and send me a pull request.

@shubham-shahh (Collaborator)

> Hey @shubham-shahh, cool! Thanks for contributing to this project! I am looking forward to seeing your implementation. I added you as a collaborator on the project; please add your code to a branch called python-develop and I will check it out and merge it if everything works as expected. Currently I am pretty busy with work, but I will take the time to test your implementation ASAP.

Hey, I hope you are doing fine. Thanks for the branch; I'll submit my work by this weekend.

@shubham-shahh (Collaborator) commented Feb 20, 2021

Hey @nwesem, I updated the develop branch of my fork with the Python implementation. @hirwa145, please test it and let me know if there are any issues. Once it is tested, I'll put up a pull request on your develop branch, and it would be great if we keep them in separate branches. Here

@hirwa145 (Author)

Hello @shubham-shahh, great job on the DeepStream implementation. So far I have tried the Python implementation and it works perfectly, thank you. I am currently testing the C++ implementation, even though testing the sample app is taking long and I am not sure why. But I have a question: is there a way to implement recognition where faces in video files are compared to saved images in a folder, and the name of the person is displayed instead of "person"? Thank you.

@shubham-shahh (Collaborator) commented Feb 23, 2021

> Hello @shubham-shahh, great job on the DeepStream implementation. So far I have tried the Python implementation and it works perfectly, thank you. I am currently testing the C++ implementation, even though testing the sample app is taking long and I am not sure why. But I have a question: is there a way to implement recognition where faces in video files are compared to saved images in a folder, and the name of the person is displayed instead of "person"? Thank you.

Hello, did you follow the guide step by step? Was there any issue at any step? Thanks for testing. About your question: I will implement that functionality this weekend if possible, but if you want to implement it yourself, I can tell you the approach to follow.
Thanks again.

@hirwa145 (Author)

For the Python implementation, everything works as expected. But for the C++ implementation, I have to first run the test sample, and it seems to get stuck here:
NvMMLiteOpen : Block : BlockType = 261 NVMEDIA: Reading vendor.tegra.display-size : status: 6 NvMMLiteBlockCreate : Block : BlockType = 261
Do you know why?

@shubham-shahh (Collaborator) commented Feb 23, 2021

> For the Python implementation, everything works as expected. But for the C++ implementation, I have to first run the test sample, and it seems to get stuck here:
> NvMMLiteOpen : Block : BlockType = 261 NVMEDIA: Reading vendor.tegra.display-size : status: 6 NvMMLiteBlockCreate : Block : BlockType = 261
> Do you know why?

Is this the sample app, or the one I included in the repo? What's your video source, .mp4 or .h264?

@hirwa145 (Author)

sample_720p.mp4, as mentioned in your instructions.

@shubham-shahh (Collaborator)

> sample_720p.mp4, as mentioned in your instructions.

And is it the sample app that is already present, or are you using the repo app?

@hirwa145 (Author)

Yes, it has been available since I installed DeepStream. Must it be tested on that sample, or can I test on another .mp4 file?

@shubham-shahh (Collaborator)

> Yes, it has been available since I installed DeepStream. Must it be tested on that sample, or can I test on another .mp4 file?

The one that is already present works with .h264 files only; the one in the repo works with .mp4 files and RTSP streams.

@hirwa145 (Author)

So that means when testing the C++ implementation, I should run ./deepstream-infer-tensor-meta-app -t infer /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 instead of ./deepstream-infer-tensor-meta-app -t infer /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4?

@shubham-shahh (Collaborator)

> So that means when testing the C++ implementation, I should run ./deepstream-infer-tensor-meta-app -t infer /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264 instead of ./deepstream-infer-tensor-meta-app -t infer /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.mp4?

Right.

@hirwa145 (Author)

Perfect. Now I am able to run the C++ implementation successfully, and it detects faces as it should. It sometimes misses faces that are out of focus (probably due to the resolution/quality of the video). I also noticed that when testing on an .mp4 or a longer video there is no quit option, so I have to wait for the video to finish before I can quit the app.

@hirwa145 (Author)

Also, if it is possible, could you show me how to implement the function that displays the name of the face's owner, please? Thank you.

@shubham-shahh (Collaborator)

> What is the next step after getting the embedding?

Did you create the pickle file with the list of embeddings?
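For anyone following along, building that pickle file can be sketched as below. `get_embedding` is a hypothetical stand-in for whatever runs FaceNet on a cropped face; here it returns deterministic random vectors so the sketch is self-contained and runnable.

```python
import pickle
import numpy as np

# Hypothetical stand-in for the FaceNet forward pass: in the real pipeline
# this would load the image, crop the face, and run the model; here it just
# returns a deterministic 128-d vector keyed on the path.
def get_embedding(image_path):
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    return rng.normal(size=128).astype(np.float32)

# Build a name -> embedding dictionary from one reference photo per person.
known_faces = {}
for name, image_path in [("obama", "faces/obama.jpg"),
                         ("ben_affleck", "faces/ben_affleck.jpg")]:
    emb = get_embedding(image_path)
    known_faces[name] = emb / np.linalg.norm(emb)  # L2-normalise

with open("embeddings.pkl", "wb") as f:
    pickle.dump(known_faces, f)
```

The file paths and names above are illustrative; the only requirement is that the same normalisation is applied later at comparison time.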

@hirwa145 (Author)

Yes, I finished it.

@hirwa145 (Author) commented Apr 15, 2021

Do I have to run the test_facenet_trt.py script with the specified location of a pickle file?

@shubham-shahh (Collaborator)

> Yes, I finished it.

Now all you have to do is compare the embeddings.
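The comparison itself can be as simple as a nearest-neighbour search over the pickled embeddings with a distance cutoff. A minimal sketch (the 1.0 threshold is illustrative, not a tuned value, and the 2-d vectors stand in for FaceNet's 128-d embeddings):

```python
import numpy as np

def match_face(embedding, known_faces, threshold=1.0):
    """Return (name, distance) of the closest known face, or (None, distance)
    if even the best match is farther away than the threshold."""
    best_name, best_dist = None, float("inf")
    for name, known in known_faces.items():
        dist = float(np.linalg.norm(embedding - known))
        if dist < best_dist:
            best_name, best_dist = name, dist
    if best_dist < threshold:
        return best_name, best_dist
    return None, best_dist

# Toy example with 2-d "embeddings" in place of real FaceNet vectors.
known = {"obama": np.array([1.0, 0.0]), "elton": np.array([0.0, 1.0])}
name, dist = match_face(np.array([0.9, 0.1]), known)
```

In the DeepStream pipeline this would run once per detected face, on the tensor-meta output of the FaceNet sgie.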

@hirwa145 (Author)

The problem is, how do I do that?

@hirwa145 (Author)

> Do I have to run the test_facenet_trt.py script with the specified location of a pickle file?

Do I use this?

@shubham-shahh (Collaborator)

> Do I have to run the test_facenet_trt.py script with the specified location of a pickle file?
>
> Do I use this?

Not necessary, as it uses MTCNN for the first stage.

@shubham-shahh (Collaborator)

> The problem is, how do I do that?

This tutorial covers that.

@hirwa145 (Author)

Mhm, can it be applied the same way to the DeepStream FaceNet app?

@shubham-shahh (Collaborator)

> Mhm, can it be applied the same way to the DeepStream FaceNet app?

Yes, the embeddings part.

@hirwa145 (Author)

And what about the comparing part?

@shubham-shahh (Collaborator)

> And what about the comparing part?

It briefly explains the comparing part as well.

@hirwa145 (Author)

It works only with the Python implementation. Is there a way to make it work with the C++ implementation?

@hirwa145 (Author)

In the Python implementation, which part of the code outputs those vectors for the extracted face features?

@shubham-shahh (Collaborator)

> It works only with the Python implementation. Is there a way to make it work with the C++ implementation?

The logic will remain the same.

@shubham-shahh (Collaborator) commented Apr 21, 2021

> In the Python implementation, which part of the code outputs those vectors for the extracted face features?

this

@hirwa145 (Author)

I know that file is the Python code responsible for all the FaceNet actions. I wanted to know which line in that file outputs/produces those vectors (embeddings). That would be very helpful.

@shubham-shahh (Collaborator)

> I know that file is the Python code responsible for all the FaceNet actions. I wanted to know which line in that file outputs/produces those vectors (embeddings). That would be very helpful.

Hi, the link mentioned above is the permalink to that line.

@hirwa145 (Author) commented Apr 21, 2021

How do I compute the average mean and average std for the embeddings?
For example, I calculated the vector distance between two photos of Obama and got an average of 0.4587...
And when I compare the photo of Obama with photos of Elton John or Ben Affleck, I get an average of 1.562...

How do I calculate the avg mean and avg std from this info?
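One way to read those numbers: collect a set of distances for same-person pairs and a set for different-person pairs, then take the mean and standard deviation of each set with NumPy. The distances below are made-up values in the spirit of the 0.45 / 1.56 averages mentioned above:

```python
import numpy as np

# Assumed example distances (not real measurements): same-person pairs
# cluster around 0.45, different-person pairs around 1.55.
same_person = np.array([0.42, 0.46, 0.49, 0.44])
diff_person = np.array([1.48, 1.56, 1.62, 1.51])

same_mean, same_std = same_person.mean(), same_person.std()
diff_mean, diff_std = diff_person.mean(), diff_person.std()

# A crude decision threshold: the midpoint between the two means.
threshold = (same_mean + diff_mean) / 2
```

Any distance below the threshold is then treated as the same person; with more pairs you could instead place the cutoff a few standard deviations above `same_mean`.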

@hirwa145 (Author)

@shubham-shahh I managed to predict the names of the people in the video correctly, but the names are displayed in the terminal, not around the bbox. How can I achieve that?

@shubham-shahh (Collaborator)

> @shubham-shahh I managed to predict the names of the people in the video correctly, but the names are displayed in the terminal, not around the bbox. How can I achieve that?

Hi, you need to update DeepStream's bbox function.

@hirwa145 (Author)

You mean in nvdsparsebbox_Yolo.cpp?

@shubham-shahh (Collaborator)

> You mean in nvdsparsebbox_Yolo.cpp?

No; if I am not mistaken, that is for the bboxes from the pgie, and at the pgie stage we don't yet have the name of the person.

@hirwa145 (Author)

So how do I change the bbox function? I used the Python implementation.

@shubham-shahh (Collaborator)

> So how do I change the bbox function? I used the Python implementation.

One approach I would use is to draw on the stream after the sgie gives you the name: with the help of OpenCV, you can draw the box and the name of the person.

@hirwa145 (Author)

Which means I have to write a new code block for this?

@shubham-shahh (Collaborator)

> Which means I have to write a new code block for this?

Depends on the approach.

@hirwa145 (Author) commented Apr 26, 2021

Okay, now everything is working fine. But one more question: how can I calculate the value of net-scale-factor, please? I want to fine-tune the probability. And the offset value?

@shubham-shahh (Collaborator)

> Okay, now everything is working fine. But one more question: how can I calculate the value of net-scale-factor, please? I want to fine-tune the probability. And the offset value?

I am not sure about that; you can find info on the DeepStream forums.
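For what it's worth, NVIDIA's nvinfer documentation describes the preprocessing as y = net-scale-factor × (x − offset) applied per pixel, so the two values follow from the input range the model was trained on. For a model that expects inputs in [-1, 1] from 8-bit pixels, a common choice works out as:

```python
# nvinfer preprocessing (per the DeepStream nvinfer docs):
#   y = net-scale-factor * (x - offset)
# To map 8-bit pixels [0, 255] onto [-1, 1]:
offset = 127.5
net_scale_factor = 1.0 / 127.5  # ≈ 0.0078431

# Check the mapping at the extremes of the pixel range:
low = net_scale_factor * (0 - offset)     # -1.0
high = net_scale_factor * (255 - offset)  # 1.0
```

These would go into the [property] section of the nvinfer config as net-scale-factor=0.0078431372 and offsets=127.5;127.5;127.5, but the right values depend on how the FaceNet model was trained, so treat the above as an assumption to verify against the model's own preprocessing.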
