Best Practice for Serverless Hot Chocolate #5768
-
I'm using Hot Chocolate subgraphs on serverless Google Cloud Run/Knative, which lets me build my servers as Docker containers, upload them, and have them start up only when there is a request to serve. This works well from a pricing standpoint, but my cold-start performance leaves something to be desired. According to my GCP metrics, my cold start times are almost uniformly around 4 seconds (p50 3.87 s, p99 4.05 s), which means incoming user requests sit for 4 seconds before execution even starts. I know there are infrastructure changes I could make to mitigate this, but I am wondering if there is anything within Hot Chocolate that I could change, or might not have thought of, that could accelerate the start-up process. Thanks in advance!
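For reference, the infrastructure-level mitigations mentioned here usually mean keeping at least one warm instance or giving the container extra CPU during startup. A minimal sketch with the `gcloud` CLI (the service name `my-graphql-service` and region are placeholders for your own deployment):

```shell
# Keep one instance warm so requests rarely hit a cold start.
# Note: min-instances > 0 changes the pricing model, since the
# warm instance is billed even while idle.
gcloud run services update my-graphql-service \
  --region=us-central1 \
  --min-instances=1

# Alternatively (or additionally), enable startup CPU boost so the
# .NET host and Hot Chocolate schema build complete faster on cold start.
gcloud run services update my-graphql-service \
  --region=us-central1 \
  --cpu-boost
```

Both options trade a little cost for latency; `--min-instances` avoids most cold starts entirely, while `--cpu-boost` only shortens them.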
Replies: 1 comment 2 replies
-
The best way to mitigate this at the moment is, as you said, at the infrastructure level. We are aware of this and will work on improving the startup time. The problem is that starting the server the way we do it at the moment is quite expensive, but together with our plans for AOT, this will improve drastically.
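Until AOT support lands, some of the standard .NET publish settings can already shave startup time by reducing JIT work. A sketch of `.csproj` properties to experiment with (these are general .NET options, not Hot Chocolate-specific, and full trimming/Native AOT may not yet be compatible with Hot Chocolate's runtime schema building):

```xml
<PropertyGroup>
  <!-- Precompile IL to native code at publish time so less JIT
       compilation happens on cold start. Increases image size. -->
  <PublishReadyToRun>true</PublishReadyToRun>

  <!-- Skip the background tiered recompilation for startup paths. -->
  <TieredCompilationQuickStartupOnly>true</TieredCompilationQuickStartupOnly>
</PropertyGroup>
```

Measuring with and without `PublishReadyToRun` on the actual Cloud Run image is worthwhile, since the larger image can slightly increase container pull time while reducing in-process startup.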