[8.x] [ML] Trained Model: Fix start deployment with ML autoscaling and 0 active nodes (#201256) (#201748)

# Backport

This will backport the following commits from `main` to `8.x`:

- [[ML] Trained Model: Fix start deployment with ML autoscaling and 0 active nodes (#201256)](#201256)

### Questions?

Please refer to the [Backport tool documentation](https://github.com/sqren/backport)

## Summary

During my testing I used the current user, which had all the required privileges, and failed to notice that after switching to the internal `kibana_system` user, that user lacked the `manage_autoscaling` privilege required for the `GET /_autoscaling/policy` API.
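As a rough illustration (not the actual Kibana source), the sketch below shows why the privilege matters: probing the autoscaling policy API fails with a 403 for a user without `manage_autoscaling`, and a naive `catch` turns that into `false`. The `isMlAutoscalingEnabled` helper name and the use of a client scoped to the current user are assumptions for this sketch.

```ts
import type { Client } from '@elastic/elasticsearch';

// Hypothetical helper, for illustration only: probes the autoscaling API.
// GET /_autoscaling/policy requires the `manage_autoscaling` cluster
// privilege; the internal kibana_system user does not hold it, so the
// request has to go out with the current user's credentials instead.
async function isMlAutoscalingEnabled(scopedClient: Client): Promise<boolean> {
  try {
    await scopedClient.transport.request({
      method: 'GET',
      path: '/_autoscaling/policy',
    });
    return true;
  } catch (e) {
    // When the call ran as kibana_system, this branch was always taken
    // (403), so the flag came back permanently false.
    return false;
  }
}
```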
As a result, the `isMlAutoscalingEnabled` flag, which the Start Deployment modal relies on, was always set to `false`. This caused a bug in scenarios with zero active ML nodes, where falling back to deriving the available processors from the ML limits was not possible.
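A minimal sketch of that fallback, under stated assumptions: `getAvailableProcessors`, `MlLimits`, and its fields are hypothetical names, not the real Kibana types. With no active ML nodes the processor count can only be derived from the cluster's ML limits, and that derivation is only meaningful when autoscaling is known to be available.

```ts
// Hypothetical shape of the ML limits; the real Kibana types differ.
interface MlLimits {
  totalMlProcessors?: number;
  maxSingleNodeProcessors?: number;
}

function getAvailableProcessors(
  activeMlNodeProcessors: number[],
  isMlAutoscalingEnabled: boolean,
  limits: MlLimits
): number {
  // With running ML nodes, just sum the processors they report.
  if (activeMlNodeProcessors.length > 0) {
    return activeMlNodeProcessors.reduce((sum, p) => sum + p, 0);
  }
  // With zero active nodes, the limits are the only source, and they are
  // only usable when autoscaling can actually bring nodes up. A flag
  // stuck at `false` left the modal with no processor count at all.
  if (isMlAutoscalingEnabled) {
    return limits.totalMlProcessors ?? limits.maxSingleNodeProcessors ?? 0;
  }
  return 0;
}
```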
You can check the created deployment; it correctly identifies ML autoscaling:

<img width="670" alt="image" src="https://github.com/user-attachments/assets/ff1f835e-2b90-4b73-bea8-a49da8846fbd">

This also fixes restoring the vCPU levels from the API deployment params.

### Checklist

Check that the PR satisfies the following conditions.

- [x] [Unit or functional tests](https://www.elastic.co/guide/en/kibana/master/development-tests.html) were updated or added to match the most common scenarios

Co-authored-by: Dima Arnautov <[email protected]>