
How to generate points for landmark detection for my source image #3

Closed

gopi1989 opened this issue May 12, 2015 · 26 comments

@gopi1989

Hi Patrick,

I built your code successfully and tested the following:

  1. Landmark detection - I got the output, but it takes more than an hour.
  2. How do I generate the points for my source image?
@songminglong

You should download a labeled facial landmark database and use it with landmark_detection to train a model. You also need to prepare a mean shape file and the correct file format for the training data.
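
For reference, here is a minimal sketch of a loader for the ibug-style ".pts" annotation format used by databases such as 300-W/LFPW. It assumes the usual three-line header and closing brace; it is only an illustration, not the library's own loader.

```cpp
// Illustrative sketch: read one ibug-style ".pts" landmark file
// (version / n_points header, one "x y" pair per line between "{" and "}").
#include <fstream>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>
#include <opencv2/core/core.hpp>

std::vector<cv::Vec2f> read_pts_landmarks(const std::string& filename)
{
    std::ifstream file(filename);
    if (!file) {
        throw std::runtime_error("Could not open landmark file: " + filename);
    }
    std::string line;
    std::getline(file, line); // "version: 1"
    std::getline(file, line); // "n_points: 68"
    std::getline(file, line); // "{"

    std::vector<cv::Vec2f> landmarks;
    while (std::getline(file, line) && line != "}") {
        std::istringstream iss(line);
        float x, y;
        if (iss >> x >> y) {
            landmarks.emplace_back(x, y);
        }
    }
    return landmarks;
}
```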

@gopi1989
Author

Hi Pat,
When I run the landmark executable, it takes more than 1.5 hours to save the output image. Why does it take so long to save the output?
Will you please share the link to the landmarks database? Also, please be more specific about the mean shape file.

@songminglong

http://ibug.doc.ic.ac.uk/resources/300-W/
You can compute the mean shape file from the training database; there are many ways to do it.

@Anand0793

I tested the landmark executable; it saves out.png and landmark_regressor_ibug_68lms.txt, but it takes more than one hour.
Can you tell me why it takes more than one hour to save out.png?

@patrikhuber
Owner

Hi gopi, hi Anand,

Are you using the newest version from master (0.3.0)? I fixed the training time two weeks ago (https://github.com/patrikhuber/superviseddescent/releases) and you shouldn't have issues. If you still do, can you please tell me the exact commit hash and system you're using, and also try how long it takes if you compile with -fopenmp (gcc/clang) or /openmp (VS)?
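
As a quick sanity check (not part of the examples), a tiny program like the following, built with the same flags as the project, tells you whether OpenMP was actually enabled:

```cpp
// Sanity check: was this translation unit compiled with OpenMP support?
#include <iostream>
#ifdef _OPENMP
#include <omp.h>
#endif

int main()
{
#ifdef _OPENMP
    std::cout << "OpenMP enabled, max threads: " << omp_get_max_threads() << '\n';
#else
    std::cout << "OpenMP NOT enabled (compiled without -fopenmp / /openmp)\n";
#endif
    return 0;
}
```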

Note that the landmark_detection example trains the model each time before detecting on one image. That's of course not necessary: the model can be trained once, stored, and then run on multiple images in milliseconds. I also hope I'll finally manage to add our pretrained model to the library in a couple of weeks!

The mean can easily be calculated by just taking the mean landmarks of a few dozen training examples. I can add the code if you want.
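
In the meantime, here is a minimal sketch of that computation. It assumes each training shape is stored as a 1 x 2L row cv::Mat [x_0, ..., x_{L-1}, y_0, ..., y_{L-1}]; the exact layout used in the examples may differ.

```cpp
// Sketch: average a set of 1 x 2L landmark row vectors into a mean shape.
#include <vector>
#include <opencv2/core/core.hpp>

cv::Mat compute_mean_shape(const std::vector<cv::Mat>& shapes)
{
    CV_Assert(!shapes.empty());
    cv::Mat accumulator = cv::Mat::zeros(1, shapes.front().cols, CV_32FC1);
    for (const auto& shape : shapes) {
        cv::Mat shape_float;
        shape.convertTo(shape_float, CV_32FC1); // ensure a common type
        accumulator += shape_float;
    }
    return accumulator / static_cast<float>(shapes.size()); // 1 x 2L mean shape
}
```

Depending on how the shapes are normalised, you may want to align them to a common reference frame before averaging.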

Let me know how it goes, happy to help more.

@gopi1989
Author

Hey Pat,
I tried SDM v0.3.0 (OpenCV + OpenMP). I am again getting the same computation time, around 45 minutes.

@Anand0793

Hi Patrick,

I built your latest code from https://github.com/patrikhuber/superviseddescent/releases,
commit ee101ba3404a3e8c81203eac97f22e7a1580d0cc.
To build this code I used the following system configuration:

Ubuntu 14.04 LTS 64 bit
g++ 4.8.2
NVIDIA Graphics Card + CUDA 6.5
OpenCV 3.0 + OpenMP + Boost 1.57 + Eigen 3
RAM 8GB
Intel® Core™ i7-4790 CPU @ 3.60GHz × 8
GeForce GT 640/PCIe/SSE2

Still, it takes more than 45 minutes to see the output. You also stated that once the model is trained, we will get the output in milliseconds for the rest of the images.
Where are the model files stored, and how do I use the pre-trained model files? Every time I run the code it first trains, prints the residuals, and then shows the output.

Pat, I am also a little confused; will you please clarify my doubts?
1. What is the accuracy difference between Elador's code and yours?

Elador link - https://github.com/elador/FeatureDetection
Superviseddescent link - https://github.com/patrikhuber/superviseddescent

2. What features have you added in your code compared to the Human Sensing (IntraFace) code?
SDM - http://www.humansensing.cs.cmu.edu/intraface/citations.html
How much faster is your code compared to the Human Sensing code?

@patrikhuber
Owner

Hi gopi, hi Anand,

Thanks for your replies and thank you Anand for all the details. I cannot reproduce the 45min training time at this point, which is a bit weird. I'll continue trying. Just to be sure - did you really compile superviseddescent with -fopenmp?

To answer your questions:

  1. I do not know. But we'll upload results soon that show the accuracy of this implementation.
  2. IntraFace (CMU) does not provide source code, only a library interface. It's not open source, thus we can't know exactly, but the methods are quite similar.

It seems like you are both more interested in just running a landmark detection, and not in training your own model. Is that correct? If yes, then bear with me for a few more days; our full model and easy-to-run example will be available soon :-)

@gopi1989
Author

  1. Yes Patrick, you are right! I will wait a few more days for your full model.
  2. Please let me know: are you also working on tracking and pose estimation?

Thanks,
Gopi

@scadavid

Hi Patrik,

I am also waiting for this, thanks!

Steven

@andyhx

andyhx commented May 29, 2015

I have no idea why it takes me about 12 hours to train the data before out.png and landmark_regressor_ibug_68lms come out. I am using VS2013, OpenCV 2.4.8, Boost 1.55.0, and Eigen 3 in Debug mode. Why does it take so long, while Anand0793 can train it so fast? Where did I make a mistake? Can anyone help me? Thanks.

@Anand0793

Hi andyhx,

I tried on Ubuntu 14.04 LTS 64-bit, OpenCV 3.0 + OpenMP + Boost 1.57 + Eigen 3, g++ 4.8.2.
On Ubuntu it takes more than an hour to train the data; then it saves out.png and landmark_regressor_ibug_68lms.txt. I raised the issue on GitHub and Patrik replied to compile with -fopenmp (gcc/clang); I compiled with -fopenmp and it still takes more than an hour.
Patrik then replied that he'll add their pre-trained model to the library in a couple of weeks.

Can you please try with OpenMP and let me know the status? I hope you can get it running.

@patrikhuber
Owner

I think there might be an issue with Eigen's parallelisation of PartialPivLU on Linux. I had a quick try yesterday and it didn't look like it was using more than one core. I'll investigate further once I've completed what I'm currently working on.

@andyhx Anyway, running Eigen in Debug mode is not a good idea when inverting such huge matrices.
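
For reference, a small sketch to check how many threads Eigen will use for its parallelised kernels; Eigen::nbThreads() is part of Eigen/Core and only reports more than one thread in an OpenMP-enabled build.

```cpp
// Check how many threads Eigen's internal parallelisation will use.
#include <iostream>
#include <Eigen/Core>

int main()
{
    // Returns 1 unless the binary was built with OpenMP enabled.
    std::cout << "Eigen threads: " << Eigen::nbThreads() << '\n';
    // Eigen::setNbThreads(4); // optionally pin the thread count
    return 0;
}
```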

@andyhx

andyhx commented Jun 1, 2015

@patrikhuber @Anand0793 Yes, thank you so much. Now, with VS 2013 in Release mode and using OpenMP, it is really fast: about 7 minutes is enough to finish training and prediction. I can't believe there is so much difference between Release and Debug mode. Thanks again!

@songminglong

How many training samples did you use in the training step? @andyhx

@andyhx

andyhx commented Jun 1, 2015

@shangguanxiaohu Just the images that the demo provided; I will try more. How about you?

@andyhx

andyhx commented Jun 1, 2015

@shangguanxiaohu What is your QQ account? We can discuss more over it.

@songminglong

1247768069 @andyhx

@patrikhuber
Owner

Hi all,

@gopi1989 @Anand0793 @shangguanxiaohu @scadavid @andyhx

I've pushed a lot of changes to the devel branch and the RCR landmark detection model is ready to use.

To use it, compile rcr_detect from the examples/ directory and run it like:

rcr_detect -m path_to/face_landmarks_model_rcr_22.bin -f path_to_opencv/haarcascade_frontalface_alt2.xml -i your_image.png

A pre-trained model is available under examples/data/face_landmarks_model_rcr_22.bin. (I'll add one with more landmarks at a later time).

I will merge these changes into the master branch in the next few weeks, update the documentation, fix a few problems and so on, but the stuff in the devel branch is ready to try out.

The detection takes around 30ms per image at the moment.

Let me know how it goes.

@gopi1989
Author

gopi1989 commented Jun 4, 2015

Yes, thanks for your help. I will update with the processing time in a couple of hours.

Regards,
Gopi

@Anand0793

Hi Patrik,

I built the superviseddescent code from the devel branch: https://github.com/patrikhuber/superviseddescent/tree/devel

I ran the rcr_detect executable and it works. Detection is good for 2D frontal faces and takes milliseconds per image.
Then I ran rcr_track; it also works, but I'm not sure whether it is doing detection or tracking.

@patrikhuber
Owner

Hi Anand,

Thanks for the feedback! Cool that it worked. Yes, the tracking is not very smart at the moment - it'll be improved over the next few weeks.

@andyhx

andyhx commented Jun 7, 2015

@patrikhuber Hi, when I train on the roughly 800 pictures that ibug provides, it crashes. I am running your RCR program on a server with 16 GB of RAM and a 2.9 GHz CPU. It causes memory exceptions in both your previous landmark detection program and the current RCR program. Do you know why? Has anyone else had the same problem?

@patrikhuber
Owner

Hi @andyhx,

With how many landmarks were you training? It should definitely work with 20 to 40 landmarks. How much RAM was actually free on the server?

You can find more info and a link to the wiki with memory requirements and a few notes in #2 (at the bottom).

@andyhx

andyhx commented Jun 12, 2015

Hi @patrikhuber,
Thanks for answering my question. I am training 22 landmarks using your RCR program, which I built myself; there is about 12 GB of RAM free on the server. I am training on the LFPW pictures in lfpw\ibug_lfpw_trainset. Would you please send me a version of the executable built on the Windows platform, so I can test whether I made a mistake in building the program or whether there is some other problem?

@patrikhuber
Owner

A bit late, but anyway: I fixed the memory issues a while ago in 07e4b89, so the whole training doesn't use more than 0.5 to 2GB of RAM now. I'll update the wiki.

For the long training time of examples/landmark_detection, I opened #14.
