Hi,
I want to load the trained model and run inference in C++, and I am trying to figure out how to do this. What I have figured out so far is that I need to build TF with GPU support and generate a C++ library to link against. What I don't know is: how do I make sure the loaded graph will actually run on the GPU? (It does when run in Python.) I am also unsure about the custom ops and whether I need to do anything with them when running inference in C++. This might sound a little irrelevant, but if you have any pointers, please share.
Thanks
Hi, TF has a C++ interface, and the custom TF ops here are written in CUDA. So one way may be to write the testing code in C++ with the TF C++ API and call the CUDA code. I have no experience with this; it is only my guess.
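To expand on that guess: below is a minimal, untested sketch of what the C++ side could look like, assuming the model was exported as a SavedModel. The model path, the tensor names `input_points:0` / `predictions:0`, the input shape, and the custom-op library path are all placeholders you would replace with your own. Setting `log_device_placement` makes TF print the device each op runs on, which answers the "is it actually on the GPU?" question, and loading the compiled custom-op `.so` before creating the session lets its `REGISTER_OP` / `REGISTER_KERNEL_BUILDER` static initializers register the ops, just like `tf.load_op_library` does in Python.

```cpp
#include <dlfcn.h>
#include <iostream>
#include <vector>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  // Load the compiled custom-op library first so its ops/kernels are
  // registered before the graph is loaded. (TF_LoadLibrary from the C API
  // is an alternative that also reports registration errors.)
  void* handle = dlopen("/path/to/tf_custom_ops.so", RTLD_NOW | RTLD_GLOBAL);
  if (handle == nullptr) {
    std::cerr << "dlopen failed: " << dlerror() << std::endl;
    return 1;
  }

  tensorflow::SessionOptions session_options;
  // Print the device each op is placed on -- verifies GPU execution.
  session_options.config.set_log_device_placement(true);
  // Fall back to CPU for ops that have no GPU kernel instead of failing.
  session_options.config.set_allow_soft_placement(true);

  tensorflow::SavedModelBundle bundle;
  tensorflow::Status status = tensorflow::LoadSavedModel(
      session_options, tensorflow::RunOptions(), "/path/to/saved_model",
      {tensorflow::kSavedModelTagServe}, &bundle);
  if (!status.ok()) {
    std::cerr << status.ToString() << std::endl;
    return 1;
  }

  // Hypothetical input: one batch of 1024 xyz points. Check the real
  // names and shapes with `saved_model_cli show --dir ... --all`.
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 1024, 3}));

  std::vector<tensorflow::Tensor> outputs;
  status = bundle.session->Run({{"input_points:0", input}},
                               {"predictions:0"}, {}, &outputs);
  if (!status.ok()) {
    std::cerr << status.ToString() << std::endl;
    return 1;
  }
  std::cout << "output shape: " << outputs[0].shape().DebugString()
            << std::endl;
  return 0;
}
```

If the checkpoint was saved with `tf.train.Saver` rather than as a SavedModel, you would first freeze it (e.g. with Python's `freeze_graph`) and then load the resulting `GraphDef` via `ReadBinaryProto` + `Session::Create` instead; the device-placement and custom-op loading parts stay the same.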