Pr/387 #465

Merged: 8 commits, Apr 17, 2024
4 changes: 4 additions & 0 deletions src/c++/examples/simple_grpc_infer_client.cc
@@ -229,6 +229,10 @@ main(int argc, char** argv)
use_cached_channel),
err);

if (verbose) {
std::cout << "There are " << client->GetNumCachedChannels()
<< " cached channels" << std::endl;
}
// Create the data for the two input tensors. Initialize the first
// to unique integers and the second to all ones.
std::vector<int32_t> input0_data(16);
11 changes: 10 additions & 1 deletion src/c++/library/grpc_client.cc
@@ -94,7 +94,7 @@ GetStub(
"TRITON_CLIENT_GRPC_CHANNEL_MAX_SHARE_COUNT", "6"));
const auto& channel_itr = grpc_channel_stub_map_.find(url);
  // Reuse cached channel if the channel is found in the map and
- // used_cached_channel flag is true
+ // use_cached_channel flag is true
if ((channel_itr != grpc_channel_stub_map_.end()) && use_cached_channel) {
// check if NewStub should be created
const auto& shared_count = std::get<0>(channel_itr->second);
@@ -136,6 +136,8 @@ GetStub(
std::shared_ptr<inference::GRPCInferenceService::Stub> stub =
inference::GRPCInferenceService::NewStub(channel);

// If `use_cached_channel` is true, create no new channels even if there
// are no cached channels.
if (use_cached_channel) {
Collaborator:
I wouldn't expect testing changes to change production code. Is this needed?

Contributor Author:
This change is coming from here: #387
I do not know what our process is for PRs without testing, so I made a branch from the external PR, added testing, and made sure the commit shows the original contributor's GitHub handle.

// Replace if channel / stub have been in the map
if (channel_itr != grpc_channel_stub_map_.end()) {
@@ -1706,6 +1708,13 @@ InferenceServerGrpcClient::~InferenceServerGrpcClient()
StopStream();
}

size_t
InferenceServerGrpcClient::GetNumCachedChannels() const
{
std::lock_guard<std::mutex> lock(grpc_channel_stub_map_mtx_);
return grpc_channel_stub_map_.size();
}

//==============================================================================

}} // namespace triton::client
3 changes: 3 additions & 0 deletions src/c++/library/grpc_client.h
@@ -600,6 +600,9 @@ class InferenceServerGrpcClient : public InferenceServerClient {
const std::vector<const InferRequestedOutput*>& outputs =
std::vector<const InferRequestedOutput*>());

// Number of Cached Channels
size_t GetNumCachedChannels() const;

private:
InferenceServerGrpcClient(
const std::string& url, bool verbose, bool use_ssl,