Optimum neuron LLM inference cache builder #85

Triggered via schedule: November 23, 2024, 00:24
Status: Failure
Total duration: 3h 49m 14s
Matrix: Create optimum-neuron inference cache
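The matrix fans out the "Create optimum-neuron inference cache" job over model configurations; mistral and mixtral are visible in the annotations below. The following is a minimal sketch of what such a workflow definition might look like, not the actual file: the cron expression, runner labels, checkout step, and build command are assumptions.

```yaml
# Hypothetical sketch of the matrix workflow behind this run.
name: Optimum neuron LLM inference cache builder

on:
  schedule:
    - cron: "0 0 * * *"        # assumed nightly trigger; the real cron is not shown in the run summary

jobs:
  cache:
    name: Create optimum-neuron inference cache (${{ matrix.config }})
    runs-on: [self-hosted, inf2]   # assumed labels for the aws-inf2 self-hosted runners
    strategy:
      fail-fast: false             # one failing config should not cancel the others
      matrix:
        config: [mistral, mixtral] # configurations visible in this run's annotations
    steps:
      - uses: actions/checkout@v4
      - name: Build inference cache
        run: |
          # assumed entry point; the real cache-building command is not part of this page
          python tools/cache_builder.py --config ${{ matrix.config }}
```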

Annotations: 2 errors
Create optimum-neuron inference cache (mistral)
Process completed with exit code 1.
Create optimum-neuron inference cache (mixtral)
The self-hosted runner: aws-inf2-48xlarge-use1-public-80-pvzhg-runner-f4lmb lost communication with the server. Verify the machine is running and has a healthy network connection. Anything in your workflow that terminates the runner process, starves it for CPU/Memory, or blocks its network access can cause this error.
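The mixtral failure is the generic runner-disconnect error: per the message, anything that terminates the runner process, starves it of CPU or memory, or cuts its network access can cause it. A hedged sketch of one possible guard, using GitHub Actions' built-in timeout-minutes plus a resource-logging step; the 180-minute bound, step names, and build command are assumptions, not part of the original workflow.

```yaml
# Hypothetical mitigation sketch: bound the job duration and record resource
# pressure so a starved or disconnected runner is easier to diagnose.
jobs:
  cache:
    runs-on: [self-hosted, inf2]
    timeout-minutes: 180            # assumed cap, below the 3h 49m this run took
    steps:
      - name: Log available resources
        run: free -h && df -h && nproc
      - name: Build inference cache
        run: python tools/cache_builder.py --config ${{ matrix.config }}   # assumed command
```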