
Inference in C++ #24

Open
dattanand opened this issue May 6, 2019 · 1 comment

Comments

@dattanand

Hi,
I want to load the trained model and run inference in C++, and I am trying to figure out how to do this. What I have figured out so far is that I need to build TF with GPU support and generate a C++ library to link against. What I don't know is: how do I make sure the loaded graph will actually run on the GPU? (It does when run in Python.) I am also unsure about the custom ops and whether I need to do anything with them when running inference in C++. This might sound a little irrelevant, but if you have any pointers, please share them.

Thanks

@yulequan
Owner

yulequan commented May 9, 2019

Hi, TF has a C++ interface, and the TF operations are written in CUDA. So one way might be to write the testing code in C++ with the TF C++ API and call the CUDA code. I have no experience with this; it is only my guess.
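
For anyone landing here, below is a minimal, untested sketch of that approach using the TF 1.x C++ API. The file names (`frozen_model.pb`, `custom_ops.so`), tensor names (`input:0`, `output:0`), and the input shape are all placeholders, not taken from this repo. If TF itself was built with GPU support (e.g. `bazel build --config=cuda //tensorflow:libtensorflow_cc.so`), a graph that runs on GPU in Python should be placed on GPU in C++ as well; setting `log_device_placement` prints the device chosen for every op so you can verify.

```cpp
// Minimal sketch: run a frozen TF 1.x graph from C++ on GPU.
// All paths, tensor names, and shapes below are placeholders.
#include <memory>
#include <vector>

#include "tensorflow/c/c_api.h"                  // TF_LoadLibrary (custom ops)
#include "tensorflow/core/framework/graph.pb.h"  // GraphDef
#include "tensorflow/core/platform/env.h"        // ReadBinaryProto
#include "tensorflow/core/public/session.h"      // Session, SessionOptions

int main() {
  // Load the custom-op shared library first so its ops and kernels are
  // registered before the graph is created (path is a placeholder).
  TF_Status* status = TF_NewStatus();
  TF_Library* lib = TF_LoadLibrary("custom_ops.so", status);
  if (TF_GetCode(status) != TF_OK) {
    // Handle the error: .so not found, or its ops failed to register.
  }

  // log_device_placement makes TF print the device assigned to each op,
  // which is the easiest way to confirm the graph actually runs on GPU.
  tensorflow::SessionOptions options;
  options.config.set_log_device_placement(true);
  options.config.mutable_gpu_options()->set_allow_growth(true);

  std::unique_ptr<tensorflow::Session> session(tensorflow::NewSession(options));

  // Load the frozen graph and build the session graph from it.
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "frozen_model.pb", &graph_def));
  TF_CHECK_OK(session->Create(graph_def));

  // Feed a dummy input and fetch the output; replace the shape and the
  // tensor names with the real model's input/output tensors.
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 64, 64, 64, 1}));
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input:0", input}}, {"output:0"}, {}, &outputs));

  TF_DeleteStatus(status);
  return 0;
}
```

Custom ops are the one extra step versus plain inference: on the Python side they get registered by `tf.load_op_library`, and the C++ side needs the equivalent `TF_LoadLibrary` call (or linking the op library directly) before `Session::Create`, otherwise loading the graph fails with an "Op type not registered" error.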
