Optimum neuron LLM inference cache builder #85
Workflow file: inference_cache_llm.yml
Triggered on: schedule
Matrix: Create optimum-neuron inference cache
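For context, a scheduled matrix workflow of this shape typically looks like the sketch below. This is not the real inference_cache_llm.yml: the cron expression, runner labels, build script, and model list beyond mistral/mixtral are assumptions made for illustration.

```yaml
# Hypothetical sketch of a scheduled matrix workflow building optimum-neuron
# inference caches; schedule, runner labels, and the build command are assumed.
name: Optimum neuron LLM inference cache builder

on:
  schedule:
    - cron: "0 4 * * *"   # assumed nightly schedule
  workflow_dispatch:       # allow manual re-runs after a failed build

jobs:
  cache:
    name: Create optimum-neuron inference cache
    runs-on: [self-hosted, aws-inf2-48xlarge]   # assumed self-hosted runner labels
    strategy:
      fail-fast: false      # keep building other models if one fails
      matrix:
        model: [mistral, mixtral]
    steps:
      - uses: actions/checkout@v4
      - name: Build inference cache for ${{ matrix.model }}
        # hypothetical build script; the actual command lives in the repository
        run: python tools/build_inference_cache.py --model ${{ matrix.model }}
```

With `fail-fast: false`, the mistral job exiting with code 1 does not cancel the mixtral job, which matches the two independent error annotations listed below.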
Annotations
2 errors
Create optimum-neuron inference cache (mistral)
Process completed with exit code 1.
Create optimum-neuron inference cache (mixtral)
The self-hosted runner: aws-inf2-48xlarge-use1-public-80-pvzhg-runner-f4lmb lost communication with the server. Verify the machine is running and has a healthy network connection. Anything in your workflow that terminates the runner process, starves it for CPU/Memory, or blocks its network access can cause this error.