
Pr/387 #465

Merged
merged 8 commits into main from pr/387 on Apr 17, 2024

Conversation

debermudez (Contributor) commented on Feb 1, 2024:

Adding testing to verify new use_cached_channel behavior.
To be merged after #387
Server test: triton-inference-server/server#7123
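
For context, a minimal sketch of the behavior under test, assuming the Create() overload quoted in the review diff below; the server URL and error handling are illustrative only, not part of this PR:

#include <memory>
#include "grpc_client.h"

namespace tc = triton::client;

int main()
{
  std::unique_ptr<tc::InferenceServerGrpcClient> client_a;
  std::unique_ptr<tc::InferenceServerGrpcClient> client_b;

  // When use_cached_channel is true, clients created against the same URL
  // share one underlying gRPC channel; passing false asks Create() to open
  // a fresh channel instead of reusing the cached one.
  const bool use_cached_channel = false;
  auto err = tc::InferenceServerGrpcClient::Create(
      &client_a, "localhost:8001", /*verbose=*/false, /*use_ssl=*/false,
      tc::SslOptions(), tc::KeepAliveOptions(), use_cached_channel);
  if (!err.IsOk()) {
    return 1;
  }
  err = tc::InferenceServerGrpcClient::Create(
      &client_b, "localhost:8001", /*verbose=*/false, /*use_ssl=*/false,
      tc::SslOptions(), tc::KeepAliveOptions(), use_cached_channel);
  return err.IsOk() ? 0 : 1;
}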

grpc_channel_stub_map_.insert(
std::make_pair(url, std::make_tuple(1, channel, stub)));

if (use_cached_channel) {
Collaborator:
I wouldn't expect testing changes to change production code. Is this needed?

debermudez (Contributor, Author):

This change is coming from here: #387
I do not know what our process is for PRs without testing, so I made a branch from the external PR, added testing, and made sure the commit shows the original contributor's GitHub handle.
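
For readers following the thread, a rough sketch of the caching pattern the quoted snippet participates in; the map layout and the meaning of the tuple's first element are inferred from the snippet and are assumptions, not the exact production code:

#include <map>
#include <memory>
#include <string>
#include <tuple>

// Illustrative stand-ins for the real gRPC channel and generated stub types.
struct Channel {};
struct Stub {};

// Assumed cache shape: url -> (use_count, channel, stub).
static std::map<
    std::string,
    std::tuple<size_t, std::shared_ptr<Channel>, std::shared_ptr<Stub>>>
    grpc_channel_stub_map_;

std::shared_ptr<Channel>
GetChannel(const std::string& url, bool use_cached_channel)
{
  if (use_cached_channel) {
    auto it = grpc_channel_stub_map_.find(url);
    if (it != grpc_channel_stub_map_.end()) {
      std::get<0>(it->second)++;  // one more client sharing this channel
      return std::get<1>(it->second);
    }
  }
  // No cached entry (or caching declined): open a new channel and, as in
  // the quoted snippet, record it with a use count of 1. std::map::insert
  // leaves any existing entry for the same url untouched.
  auto channel = std::make_shared<Channel>();
  auto stub = std::make_shared<Stub>();
  grpc_channel_stub_map_.insert(
      std::make_pair(url, std::make_tuple(1, channel, stub)));
  return channel;
}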

@debermudez debermudez marked this pull request as ready for review February 2, 2024 19:07
const tc::KeepAliveOptions& keepalive_options = tc::KeepAliveOptions();
bool use_cached_channel = false;
auto err = tc::InferenceServerGrpcClient::Create(
    &this->client_, url, verbose, use_ssl, ssl_options, keepalive_options,
    use_cached_channel);
Member:

Is there a way we could re-use the ClientTest?

@@ -1615,6 +1709,58 @@ TEST_F(GRPCTraceTest, GRPCClearTraceSettings)
<< std::endl;
}

TEST_F(GRPCClientTest, InferMultiNoUseCachedChannel)
Member:

Is there a reason we're using InferMulti instead of regular Infer calls?
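
For reference, a rough sketch of the shape such a test might take, assuming the GTest fixture quoted above; the URL and assertions are placeholders, not the merged test body:

TEST_F(GRPCClientTest, InferMultiNoUseCachedChannel)
{
  // Create a client that opts out of channel caching, then exercise an
  // inference call to confirm requests still succeed on the fresh channel.
  std::unique_ptr<tc::InferenceServerGrpcClient> client;
  tc::Error err = tc::InferenceServerGrpcClient::Create(
      &client, "localhost:8001", /*verbose=*/false, /*use_ssl=*/false,
      tc::SslOptions(), tc::KeepAliveOptions(),
      /*use_cached_channel=*/false);
  ASSERT_TRUE(err.IsOk()) << err.Message();
  // ... build inputs, outputs, and options as in the other InferMulti
  // tests, then issue the request and assert on the results.
}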

debermudez (Contributor, Author):
Cannot approve since I am considered the original author. All of this looks good to me.

@jbkyang-nvi jbkyang-nvi merged commit fe94403 into main Apr 17, 2024
3 checks passed
@jbkyang-nvi jbkyang-nvi deleted the pr/387 branch April 17, 2024 17:16
jbkyang-nvi added a commit to triton-inference-server/server that referenced this pull request Apr 17, 2024
debermudez added a commit that referenced this pull request Apr 18, 2024
Adding testing to verify new use_cached_channel behavior.
To be merged after #387
Server test: triton-inference-server/server#7123
---------
Co-authored-by: Markus Hennerbichler <[email protected]>
Co-authored-by: Katherine Yang <[email protected]>
Co-authored-by: Katherine Yang <[email protected]>
mc-nv pushed a commit that referenced this pull request Apr 18, 2024