This is a meta issue for UX work related to predicting behavior of time series data in OpenSearch Dashboards.
The solution, based on this feature proposal, covers creating and managing forecasting jobs from a data source and consuming forecasting results visualized on a graph.
Forecasting will show the user where their metrics are headed so that they can prepare accordingly (e.g. disk space, server instance upgrades, inventory planning).
Is your feature request related to a problem? Please describe.
DevOps engineers and SREs can leverage the AI/ML capabilities of OpenSearch to identify anomalies in their log data, but the insights gained are retroactive by definition. Users want to make sure that their applications, service tiers, or services are up and fully functional, so they would like to be proactively notified if their area of responsibility is going to have problems.
With Forecasting, a user can configure a job that takes into account the granularity and amount of the data and trains a machine learning model to generate predictive datapoints for a desired period of time in the future. The forecasting results are visualized on a graph.
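To make this concrete, below is a minimal TypeScript sketch of what such a forecasting job configuration might carry; all field names and shapes here are illustrative assumptions for discussion, not the actual plugin API.

```ts
// Hypothetical shape of a forecasting job configuration; every field name
// here is an illustrative assumption, not the actual plugin API.
interface ForecastJobConfig {
  name: string;
  indexPattern: string;   // data source to read the time series from
  timestampField: string; // field that orders the series
  metricField: string;    // value to forecast
  interval: { value: number; unit: 'm' | 'h' | 'd' }; // data granularity
  horizon: number;        // how many intervals to predict into the future
}

const example: ForecastJobConfig = {
  name: 'disk-usage-forecast',
  indexPattern: 'metrics-*',
  timestampField: '@timestamp',
  metricField: 'disk.used_pct',
  interval: { value: 10, unit: 'm' },
  horizon: 24, // predict 24 ten-minute intervals (4 hours) ahead
};
```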
Describe the solution you'd like
The following UX proposal describes the user workflow for setting up and managing a forecast, and the user experience for viewing forecasting results. The recommendation includes the use of OUI components to provide consistency and ease of use for both users and developers.
Forecast lifecycle
A forecast (forecasting job) can exist in one of the following states:
Inactive - a forecast that hasn’t been started yet
Inactive: stopped - a forecast stopped by the user after running
Awaiting data to initialize forecast - a forecast is attempting to start but there is not enough data for the model to start initializing
Awaiting data to restart forecast - a forecast is attempting to restart after running but there is not enough data
Initializing test - a forecast is building a model to run a test
Initializing forecast - a forecast is building a model to start running continuously
Test complete - a forecast generated a test result and stopped
Running - a forecast that is running continuously
Test initialization failed - the model for a test could not be initialized
Forecast initialization failed - the model for a continuous forecast could not be initialized
Forecast failed - a running forecast stopped because of an error
Some state transitions happen automatically (for example, going from “Initializing test” to “Test complete”), while others are triggered by the user (for a forecast to enter “Initializing forecast”, the user starts the forecast).
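To make the lifecycle concrete, here is a minimal TypeScript sketch of the states above and a possible transition map. The state identifiers mirror the list, but which transitions are legal, and which are automatic versus user-triggered, is an assumption for discussion, not the plugin's actual implementation.

```ts
// A minimal sketch of the lifecycle, assuming these state identifiers and
// transitions; the plugin's actual implementation may differ.
type ForecastState =
  | 'inactive'               // not started yet
  | 'inactive_stopped'       // stopped by the user after running
  | 'awaiting_data_init'     // not enough data to start initializing
  | 'awaiting_data_restart'  // not enough data to restart after running
  | 'init_test'              // building a model to run a test
  | 'init_forecast'          // building a model to run continuously
  | 'test_complete'          // generated a test result and stopped
  | 'running'                // running continuously
  | 'test_init_failed'
  | 'forecast_init_failed'
  | 'forecast_failed';

// 'auto' transitions happen without user action; 'user' transitions require
// an explicit user action (e.g. starting the forecast).
const transitions: Array<{ from: ForecastState; to: ForecastState; trigger: 'auto' | 'user' }> = [
  { from: 'inactive', to: 'init_test', trigger: 'user' },
  { from: 'inactive', to: 'init_forecast', trigger: 'user' },
  { from: 'inactive', to: 'awaiting_data_init', trigger: 'auto' },
  { from: 'init_test', to: 'test_complete', trigger: 'auto' },
  { from: 'init_test', to: 'test_init_failed', trigger: 'auto' },
  { from: 'init_forecast', to: 'running', trigger: 'auto' },
  { from: 'init_forecast', to: 'forecast_init_failed', trigger: 'auto' },
  { from: 'running', to: 'inactive_stopped', trigger: 'user' },
  { from: 'running', to: 'forecast_failed', trigger: 'auto' },
  { from: 'inactive_stopped', to: 'init_forecast', trigger: 'user' },
  { from: 'inactive_stopped', to: 'awaiting_data_restart', trigger: 'auto' },
];

// Guard used by the UI to decide which actions to offer for a given state.
function canTransition(from: ForecastState, to: ForecastState): boolean {
  return transitions.some((t) => t.from === from && t.to === to);
}
```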
User flows