diff --git a/serverless/images/copy-connection-details.png b/serverless/images/copy-connection-details.png new file mode 100644 index 00000000..f989c835 Binary files /dev/null and b/serverless/images/copy-connection-details.png differ diff --git a/serverless/images/create-an-api-key.png b/serverless/images/create-an-api-key.png new file mode 100644 index 00000000..8d9898c9 Binary files /dev/null and b/serverless/images/create-an-api-key.png differ diff --git a/serverless/images/get-started-create-an-index.png b/serverless/images/get-started-create-an-index.png new file mode 100644 index 00000000..fa46a1dd Binary files /dev/null and b/serverless/images/get-started-create-an-index.png differ diff --git a/serverless/images/getting-started-page.png b/serverless/images/getting-started-page.png new file mode 100644 index 00000000..7589d727 Binary files /dev/null and b/serverless/images/getting-started-page.png differ diff --git a/serverless/index-serverless-elasticsearch.asciidoc b/serverless/index-serverless-elasticsearch.asciidoc index b1066d4e..ef38b36a 100644 --- a/serverless/index-serverless-elasticsearch.asciidoc +++ b/serverless/index-serverless-elasticsearch.asciidoc @@ -1,6 +1,5 @@ [[what-is-elasticsearch-serverless]] -== {es} - +== {es-serverless} ++++ {es} ++++ @@ -9,6 +8,8 @@ include::./pages/what-is-elasticsearch-serverless.asciidoc[leveloffset=+2] include::./pages/get-started.asciidoc[leveloffset=+2] +include::./pages/connecting-to-es-endpoint.asciidoc[leveloffset=+2] + include::./pages/clients.asciidoc[leveloffset=+2] include::./pages/clients-go-getting-started.asciidoc[leveloffset=+3] include::./pages/clients-java-getting-started.asciidoc[leveloffset=+3] @@ -39,10 +40,6 @@ include::./pages/search-your-data-semantic-search.asciidoc[leveloffset=+3] include::./pages/search-your-data-semantic-search-elser.asciidoc[leveloffset=+4] include::./pages/explore-your-data.asciidoc[leveloffset=+2] 
-include::./pages/explore-your-data-the-aggregations-api.asciidoc[leveloffset=+3] -include::./pages/explore-your-data-discover-your-data.asciidoc[leveloffset=+3] -include::./pages/explore-your-data-visualize-your-data.asciidoc[leveloffset=+3] -include::./pages/explore-your-data-alerting.asciidoc[leveloffset=+3] include::./pages/search-playground.asciidoc[leveloffset=+2] diff --git a/serverless/index.asciidoc b/serverless/index.asciidoc index 54cdacb1..4aeab024 100644 --- a/serverless/index.asciidoc +++ b/serverless/index.asciidoc @@ -21,8 +21,3 @@ include::./index-serverless-elasticsearch.asciidoc[] include::{observability-serverless}/index.asciidoc[] include::{security-serverless}/index.asciidoc[] include::./index-serverless-project-settings.asciidoc[] - - -// Hidden pages -include::./pages/explore-your-data-visualize-your-data-create-dashboards.asciidoc[leveloffset=+1] -include::./pages/explore-your-data-visualize-your-data-create-visualizations.asciidoc[leveloffset=+1] diff --git a/serverless/pages/connecting-to-es-endpoint.asciidoc b/serverless/pages/connecting-to-es-endpoint.asciidoc new file mode 100644 index 00000000..02069ee8 --- /dev/null +++ b/serverless/pages/connecting-to-es-endpoint.asciidoc @@ -0,0 +1,91 @@ +[[elasticsearch-connecting-to-es-serverless-endpoint]] += Connecting to your Elasticsearch Serverless endpoint + +[TIP] +==== +This page assumes you have already <>. +==== + +Learn how to securely connect to your Elasticsearch Serverless instance. + +To connect to your Elasticsearch instance from your applications, client libraries, or tools like `curl`, you'll need two key pieces of information: an API key and your endpoint URL. This guide shows you how to get these connection details and verify they work. + +[discrete] +[[elasticsearch-get-started-create-api-key]] +== Create a new API key + +Create an API key to authenticate your requests to the {es} APIs. You'll need an API key for all API requests and client connections. 
+
+To create a new API key:
+
+. On the **Getting Started** page, scroll to **Add an API Key** and select **New**. You can also search for *API keys* in the https://www.elastic.co/guide/en/kibana/current/kibana-concepts-analysts.html#_finding_your_apps_and_objects[global search field].
++
+image::images/create-an-api-key.png[Create an API key.]
+. In **Create API Key**, enter a name for your key and (optionally) set an expiration date.
+. (Optional) Under **Control Security privileges**, you can set specific access permissions for this API key. By default, it has full access to all APIs.
+. (Optional) The **Add metadata** section allows you to add custom key-value pairs to help identify and organize your API keys.
+. Select **Create API Key** to finish.
+
+After creation, you'll see your API key displayed as an encoded string.
+Store this encoded API key securely. It is displayed only once and cannot be retrieved later.
+You will use this encoded API key when sending API requests.
+
+[NOTE]
+====
+You can't recover or retrieve a lost API key. Instead, you must delete the key and create a new one.
+====
+
+[discrete]
+[[elasticsearch-get-started-endpoint]]
+== Get your {es} endpoint URL
+
+The endpoint URL is the address for your {es} instance.
+You'll use this URL together with your API key to make requests to the {es} APIs.
+To find the endpoint URL:
+
+. On the **Getting Started** page, scroll to the **Copy your connection details** section, and find the **Elasticsearch endpoint** field.
+. Copy the URL for the Elasticsearch endpoint.
+
+image::images/copy-connection-details.png[Copy your Elasticsearch endpoint.]
+
+[discrete]
+[[elasticsearch-get-started-test-connection]]
+== Test connection
+
+Use https://curl.se[`curl`] to verify your connection to {es}.
+
+`curl` needs your Elasticsearch endpoint URL and your encoded API key.
+In your terminal, assign these values to the `ES_URL` and `API_KEY` environment variables.
+ +For example: + +[source,bash] +---- +export ES_URL="https://dda7de7f1d264286a8fc9741c7741690.es.us-east-1.aws.elastic.cloud:443" +export API_KEY="ZFZRbF9Jb0JDMEoxaVhoR2pSa3Q6dExwdmJSaldRTHFXWEp4TFFlR19Hdw==" +---- + +Then run the following command to test your connection: + +[source,bash] +---- +curl "${ES_URL}" \ + -H "Authorization: ApiKey ${API_KEY}" \ + -H "Content-Type: application/json" +---- + +You should receive a response similar to the following: + +[source,json] +---- +{ + "name" : "serverless", + "cluster_name" : "dda7de7f1d264286a8fc9741c7741690", + "cluster_uuid" : "ws0IbTBUQfigmYAVMztkZQ", + "version" : { ... }, + "tagline" : "You Know, for Search" +} +---- + +Now you're ready to start adding data to your {es-serverless} project. \ No newline at end of file diff --git a/serverless/pages/developer-tools-troubleshooting.asciidoc b/serverless/pages/developer-tools-troubleshooting.asciidoc index 92b5f3dc..65fcff95 100644 --- a/serverless/pages/developer-tools-troubleshooting.asciidoc +++ b/serverless/pages/developer-tools-troubleshooting.asciidoc @@ -163,7 +163,7 @@ GET /my-index-000001/_count } ---- -If the field is aggregatable, you can use <> +If the field is aggregatable, you can use {ref}/search-aggregations.html[aggregations] to check the field's values. For `keyword` fields, you can use a `terms` aggregation to retrieve the field's most common values: diff --git a/serverless/pages/explore-your-data-alerting.asciidoc b/serverless/pages/explore-your-data-alerting.asciidoc deleted file mode 100644 index 12d2d230..00000000 --- a/serverless/pages/explore-your-data-alerting.asciidoc +++ /dev/null @@ -1,159 +0,0 @@ -[[elasticsearch-explore-your-data-alerting]] -= Manage alerting rules - -// :description: Define when to generate alerts and notifications with alerting rules. 
-// :keywords: serverless, elasticsearch, alerting, how-to - -++++ -Alerts -++++ - -preview:[] - -In **{alerts-app}** or **{project-settings} → {manage-app} → {rules-app}** you can: - -* Create and edit rules -* Manage rules including enabling/disabling, muting/unmuting, and deleting -* Drill down to rule details -* Configure rule settings - -[role="screenshot"] -image::images/rules-ui.png[Example rule listing in {rules-ui}] - -For an overview of alerting concepts, go to <>. - -//// -/* ## Required permissions - -Access to rules is granted based on your {alert-features} privileges. */ -//// - -//// -/* MISSING LINK: -For more information, go to missing linkSecuritys. */ -//// - -[discrete] -[[elasticsearch-explore-your-data-alerting-create-and-edit-rules]] -== Create and edit rules - -When you click the **Create rule** button, it launches a flyout that guides you through selecting a rule type and configuring its conditions and actions. - -[role="screenshot"] -image::images/alerting-overview.png[{rules-ui} app] - -The rule types available in an {es-serverless} project are: - -* {kibana-ref}/rule-type-es-query.html[{es} query] -* {kibana-ref}/rule-type-index-threshold.html[Index threshold] -* {kibana-ref}/geo-alerting.html[Tracking containement] -* {ref}/transform-alerts.html[Transform health] - -After a rule is created, you can open the action menu (…) and select **Edit rule** to re-open the flyout and change the rule properties. - -You can also manage rules as resources with the https://registry.terraform.io/providers/elastic/elasticstack/latest[Elasticstack provider] for Terraform. -For more details, refer to the https://registry.terraform.io/providers/elastic/elasticstack/latest/docs/resources/kibana_alerting_rule[elasticstack_kibana_alerting_rule] resource. - -// For details on what types of rules are available and how to configure them, refer to [Rule types]({kibana-ref}/rule-types.html). 
- -// missing link - -[discrete] -[[elasticsearch-explore-your-data-alerting-snooze-and-disable-rules]] -== Snooze and disable rules - -The rule listing enables you to quickly snooze, disable, enable, or delete individual rules. -For example, you can change the state of a rule: - -[role="screenshot"] -image::images/rule-enable-disable.png[Use the rule status dropdown to enable or disable an individual rule] - -When you snooze a rule, the rule checks continue to run on a schedule but the alert will not trigger any actions. -You can snooze for a specified period of time, indefinitely, or schedule single or recurring downtimes: - -[role="screenshot"] -image::images/rule-snooze-panel.png[Snooze notifications for a rule] - -When a rule is in a snoozed state, you can cancel or change the duration of this state. - -[discrete] -[[elasticsearch-explore-your-data-alerting-import-and-export-rules]] -== Import and export rules - -To import and export rules, use <>. - -//// -/* -TBD: Do stack monitoring rules exist in serverless? -Stack monitoring rules are automatically created for you and therefore cannot be managed in **Saved Objects**. -*/ -//// - -Rules are disabled on export. You are prompted to re-enable the rule on successful import. - -[role="screenshot"] -image::images/rules-imported-banner.png[Rules import banner] - -[discrete] -[[elasticsearch-explore-your-data-alerting-view-rule-details]] -== View rule details - -You can determine the health of a rule by looking at its **Last response**. -A rule can have one of the following responses: - -`failed`:: -The rule ran with errors. - -`succeeded`:: -The rule ran without errors. - -`warning`:: -The rule ran with some non-critical errors. - -Click the rule name to access a rule details page: - -[role="screenshot"] -image::images/rule-details-alerts-active.png[Rule details page with multiple alerts] - -In this example, the rule detects when a site serves more than a threshold number of bytes in a 24 hour period. 
Four sites are above the threshold. These are called alerts - occurrences of the condition being detected - and the alert name, status, time of detection, and duration of the condition are shown in this view. Alerts come and go from the list depending on whether the rule conditions are met. - -When an alert is created, it generates actions. If the conditions that caused the alert persist, the actions run again according to the rule notification settings. There are three common alert statuses: - -`active`:: -The conditions for the rule are met and actions should be generated according to the notification settings. - -`flapping`:: -The alert is switching repeatedly between active and recovered states. - -`recovered`:: -The conditions for the rule are no longer met and recovery actions should be generated. - -.Flapping alerts -[NOTE] -==== -The `flapping` state is possible only if you have enabled alert flapping detection in **{rules-ui}** → **Settings**. A look back window and threshold are used to determine whether alerts are flapping. For example, you can specify that the alert must change status at least 6 times in the last 10 runs. If the rule has actions that run when the alert status changes, those actions are suppressed while the alert is flapping. -==== - -If there are rule actions that failed to run successfully, you can see the details on the **History** tab. -In the **Message** column, click the warning or expand icon or click the number in the **Errored actions** column to open the **Errored Actions** panel. - -// - -//// -/* -TBD: Is this setting still feasible in serverless? 
-In this example, the action failed because the `xpack.actions.email.domain_allowlist` setting was updated and the action's email recipient is no longer included in the allowlist: - -![Rule history page with alerts that have errored actions](../images/rule-details-errored-actions.png) -*/ -//// - -// If an alert was affected by a maintenance window, its identifier appears in the **Maintenance windows** column. - -You can suppress future actions for a specific alert by turning on the **Mute** toggle. -If a muted alert no longer meets the rule conditions, it stays in the list to avoid generating actions if the conditions recur. -You can also disable a rule, which stops it from running checks and clears any alerts it was tracking. -You may want to disable rules that are not currently needed to reduce the load on your cluster. - -[role="screenshot"] -image::images/rule-details-disabling.png[Use the disable toggle to turn off rule checks and clear alerts tracked] diff --git a/serverless/pages/explore-your-data-discover-your-data.asciidoc b/serverless/pages/explore-your-data-discover-your-data.asciidoc deleted file mode 100644 index cd86194b..00000000 --- a/serverless/pages/explore-your-data-discover-your-data.asciidoc +++ /dev/null @@ -1,200 +0,0 @@ -[[elasticsearch-explore-your-data-discover-your-data]] -= Discover your data - -// :description: Learn how to use Discover to gain insights into your data. -// :keywords: serverless, elasticsearch, discover data, how to - -preview:[] - -With **Discover**, you can quickly search and filter your data, get information -about the structure of the fields, and display your findings in a visualization. -You can also customize and save your searches and place them on a dashboard. 
- -[discrete] -[[elasticsearch-explore-your-data-discover-your-data-explore-and-query-your-data]] -== Explore and query your data - -This tutorial shows you how to use **Discover** to search large amounts of -data and understand what’s going on at any given time. This tutorial uses the book sample data set from the <>. - -You’ll learn to: - -* **Select** data for your exploration, set a time range for that data, -search it with the {kib} Query Language, and filter the results. -* **Explore** the details of your data, view individual documents, and create tables -that summarize the contents of the data. -* **Present** your findings in a visualization. - -At the end of this tutorial, you’ll be ready to start exploring with your own -data in **Discover**. - -[discrete] -[[elasticsearch-explore-your-data-discover-your-data-find-your-data]] -== Find your data - -Tell {kib} where to find the data you want to explore, and then specify the time range in which to view that data. - -. Once the book sample data has been ingested, navigate to **Explore → Discover** and click **Create data view**. -. Give your data view a name. -+ -[role="screenshot"] -image::images/create-data-view.png[Create a data view] -+ -. Start typing in the **Index pattern** field, and the names of indices, data streams, and aliases that match your input will be displayed. -+ -** To match multiple sources, use a wildcard (*), for example, `b*` and any indices starting with the letter `b` display. -** To match multiple sources, enter their names separated by a comma. Do not include a space after the comma. For example `books,magazines` would match two indices: `books` and `magazines`. -** To exclude a source, use a minus sign (-), for example `-books`. -. In the **Timestamp** field dropdown, and then select `release_date`. -+ -** If you don't set a time field, you can't use global time filters on your dashboards. 
Leaving the time field unset might be useful if you have multiple time fields and want to create dashboards that combine visualizations based on different timestamps. -** If your index doesn't have time-based data, choose **I don't want to use the time filter**. -. Click **Show advanced settings** to: -+ -** Display hidden and system indices. -** Specify your own data view name. For example, enter your {es} index alias name. -. Click **Save data view to {kib}**. -. Adjust the time range to view data for the **Last 40 years** to view all your book data. -+ -[role="screenshot"] -image::images/book-data.png[Your book data displayed] - -[discrete] -[[explore-fields-in-your-data]] -== Explore the fields in your data - -**Discover** includes a table that shows all the documents that match your search. By default, the document table includes a column for the time field and a column that lists all other fields in the document. You’ll modify the document table to display your fields of interest. - -. In the sidebar, enter `au` in the search field to find the `author` field. -. In the **Available fields** list, click `author` to view its most popular values. -+ -**Discover** shows the top 10 values and the number of records used to calculate those values. -+ -. Click image:images/icons/plusInCircleFilled.svg[Add] to toggle the field into the document table. You can also drag the field from the **Available fields** list into the document table. - -[discrete] -[[elasticsearch-explore-your-data-discover-your-data-add-a-field-to-your-data-source]] -== Add a field to your {data-source} - -What happens if you forgot to define an important value as a separate field? Or, what if you -want to combine two fields and treat them as one? This is where {ref}/runtime.html[runtime fields] come into play. -You can add a runtime field to your {data-source} from inside of **Discover**, -and then use that field for analysis and visualizations, -the same way you do with other fields. - -. 
In the sidebar, click **Add a field**. -. In the **Create field** form, enter `hello` for the name. -. Turn on **Set value**. -. Define the script using the Painless scripting language. Runtime fields require an `emit()`. -+ -[source,ts] ----- -emit("Hello World!"); ----- -. Click **Save**. -. In the sidebar, search for the **hello** field, and then add it to the document table. -. Create a second field named `authorabbrev` that combines the authors last name and first initial. -+ -[source,ts] ----- -String str = doc['author.keyword'].value; -char ch1 = str.charAt(0); -emit(doc['author.keyword'].value + ", " + ch1); ----- -. Add `authorabbrev` to the document table. - -[role="screenshot"] -image::images/add-fields.png[How the fields you just created should display] - -[discrete] -[[search-in-discover]] -== Search your data - -One of the unique capabilities of **Discover** is the ability to combine free text search with filtering based on structured data. To search all fields, enter a simple string in the query bar. - -To search particular fields and build more complex queries, use the {kib} Query language. As you type, KQL prompts you with the fields you can search and the operators you can use to build a structured query. - -Search the book data to find out which books have more than 500 pages: - -. Enter `p`, and then select **page_count**. -. Select **>** for greater than and enter **500**, then click the refresh button or press the Enter key to see which books have more than 500 pages. - -[discrete] -[[filter-in-discover]] -== Filter your data - -Whereas the query defines the set of documents you are interested in, -filters enable you to zero in on subsets of those documents. -You can filter results to include or exclude specific fields, filter for a value in a range, -and more. - -Exclude documents where the author is not Terry Pratchett: - -. Click image:images/icons/plusInCircleFilled.svg[Add] next to the query bar. -. 
In the **Add filter** pop-up, set the field to **author**, the operator to **is not**, and the value to **Terry Pratchett**. -. Click **Add filter**. -. Continue your exploration by adding more filters. -. To remove a filter, click the close icon (x) next to its name in the filter bar. - -[discrete] -[[look-inside-a-document]] -== Look inside a document - -Dive into an individual document to view its fields and the documents that occurred before and after it. - -. In the document table, click the expand icon image:images/icons/expand.svg[View details] to show document details. -. Scan through the fields and their values. If you find a field of interest, hover your mouse over the **Actions** column for filters and other options. -. To create a view of the document that you can bookmark and share, click **Single document**. -. To view documents that occurred before or after the event you are looking at, click **Surrounding documents**. - -[discrete] -[[save-your-search]] -== Save your search for later use - -Save your search so you can use it later to generate a CSV report, create visualizations and Dashboards. Saving a search saves the query text, filters, and current view of **Discover**, including the columns selected in the document table, the sort order, and the {data-source}. - -. In the upper right toolbar, click **Save**. -. Give your search a title. -. Optionally store tags and the time range with the search. -. Click **Save**. - -[discrete] -[[elasticsearch-explore-your-data-discover-your-data-visualize-your-findings]] -== Visualize your findings - -If a field can be {ref}/search-aggregations.html[aggregated], you can quickly visualize it from **Discover**. - -. In the sidebar, find and then click `release_date`. -. In the popup, click **Visualize**. -+ -[NOTE] -==== -{kib} creates a visualization best suited for this field. -==== -+ -. From the **Available fields** list, drag and drop `page_count` onto the workspace. -. 
Save your visualization for use on a dashboard. - -For geographical point fields, if you click **Visualize**, your data appears in a map. - -[discrete] -[[share-your-findings]] -== Share your findings - -To share your findings with a larger audience, click **Share** in the upper right toolbar. - -[discrete] -[[alert-from-Discover]] -== Generate alerts - -From **Discover**, you can create a rule to periodically check when data goes above or below a certain threshold within a given time interval. - -. Ensure that your data view, -query, and filters fetch the data for which you want an alert. -. In the toolbar, click **Alerts → Create search threshold rule**. -+ -The **Create rule** form is pre-filled with the latest query sent to {es}. -. Configure your {es} query and select a connector type. -. Click **Save**. - -For more about this and other rules provided in {alert-features}, go to <>. diff --git a/serverless/pages/explore-your-data-the-aggregations-api.asciidoc b/serverless/pages/explore-your-data-the-aggregations-api.asciidoc deleted file mode 100644 index 7bd287c8..00000000 --- a/serverless/pages/explore-your-data-the-aggregations-api.asciidoc +++ /dev/null @@ -1,439 +0,0 @@ -[[elasticsearch-explore-your-data-aggregations]] -= Aggregations - -// :description: Aggregate and summarize your {es} data. -// :keywords: serverless, elasticsearch, aggregations, reference - -preview:[] - -An aggregation summarizes your data as metrics, statistics, or other analytics. -Aggregations help you answer questions like: - -* What's the average load time for my website? -* Who are my most valuable customers based on transaction volume? -* What would be considered a large file on my network? -* How many products are in each product category? - -{es} organizes aggregations into three categories: - -* {ref}/search-aggregations-metrics.html[Metric] aggregations that calculate metrics, -such as a sum or an average, from field values. 
Note that -{ref}/search-aggregations-metrics-scripted-metric-aggregation.html[scripted metric aggregations] -are not available in {es-serverless}. -* {ref}/search-aggregations-bucket.html[Bucket] aggregations that -group documents into buckets, also called bins, based on field values, ranges, -or other criteria. -* {ref}/search-aggregations-pipeline.html[Pipeline] aggregations that take input from -other aggregations instead of documents or fields. - -[discrete] -[[elasticsearch-explore-your-data-aggregations-run-an-aggregation]] -== Run an aggregation - -You can run aggregations as part of a search by specifying the search API's `aggs` parameter. The -following search runs a {ref}/search-aggregations-bucket-terms-aggregation.html[terms aggregation] on -`my-field`: - -[source,bash] ----- -curl "${ES_URL}/my-index/_search?pretty" \ --H "Authorization: ApiKey ${API_KEY}" \ --H "Content-Type: application/json" \ --d' -{ - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - } - } - } -} -' ----- - -// TEST[setup:my_index] - -// TEST[s/my-field/http.request.method/] - -Aggregation results are in the response's `aggregations` object: - -// TESTRESPONSE[s/"took": 78/"took": "$body.took"/] - -// TESTRESPONSE[s/\.\.\.$/"took": "$body.took", "timed_out": false, "_shards": "$body._shards", /] - -// TESTRESPONSE[s/"hits": \[\.\.\.\]/"hits": "$body.hits.hits"/] - -// TESTRESPONSE[s/"buckets": \[\]/"buckets":\[\{"key":"get","doc_count":5\}\]/] - -[source,json] ----- -{ - "took": 78, - "timed_out": false, - "_shards": {...}, - "hits": {...}, - "aggregations": { - "my-agg-name": { <1> - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [...] - } - } -} ----- - -<1> Results for the `my-agg-name` aggregation. 
- -[discrete] -[[elasticsearch-explore-your-data-aggregations-change-an-aggregations-scope]] -== Change an aggregation's scope - -Use the `query` parameter to limit the documents on which an aggregation runs: - -[source,bash] ----- -curl "${ES_URL}/my-index/_search?pretty" \ --H "Authorization: ApiKey ${API_KEY}" \ --H "Content-Type: application/json" \ --d' -{ - "query": { - "range": { - "@timestamp": { - "gte": "now-1d/d", - "lt": "now/d" - } - } - }, - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - } - } - } -} -' ----- - -// TEST[setup:my_index] - -// TEST[s/my-field/http.request.method/] - -[discrete] -[[elasticsearch-explore-your-data-aggregations-return-only-aggregation-results]] -== Return only aggregation results - -By default, searches containing an aggregation return both search hits and -aggregation results. To return only aggregation results, set `size` to `0`: - -[source,bash] ----- -curl "${ES_URL}/my-index/_search?pretty" \ --H "Authorization: ApiKey ${API_KEY}" \ --H "Content-Type: application/json" \ --d' -{ - "size": 0, - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - } - } - } -} -' ----- - -// TEST[setup:my_index] - -// TEST[s/my-field/http.request.method/] - -[discrete] -[[elasticsearch-explore-your-data-aggregations-run-multiple-aggregations]] -== Run multiple aggregations - -You can specify multiple aggregations in the same request: - -[source,bash] ----- -curl "${ES_URL}/my-index/_search?pretty" \ --H "Authorization: ApiKey ${API_KEY}" \ --H "Content-Type: application/json" \ --d' -{ - "aggs": { - "my-first-agg-name": { - "terms": { - "field": "my-field" - } - }, - "my-second-agg-name": { - "avg": { - "field": "my-other-field" - } - } - } -} -' ----- - -// TEST[setup:my_index] - -// TEST[s/my-field/http.request.method/] - -// TEST[s/my-other-field/http.response.bytes/] - -[discrete] -[[elasticsearch-explore-your-data-aggregations-run-sub-aggregations]] -== Run sub-aggregations - -Bucket aggregations 
support bucket or metric sub-aggregations. For example, a -terms aggregation with an {ref}/search-aggregations-metrics-avg-aggregation.html[avg] -sub-aggregation calculates an average value for each bucket of documents. There -is no level or depth limit for nesting sub-aggregations. - -[source,bash] ----- -curl "${ES_URL}/my-index/_search?pretty" \ --H "Authorization: ApiKey ${API_KEY}" \ --H "Content-Type: application/json" \ --d' -{ - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - }, - "aggs": { - "my-sub-agg-name": { - "avg": { - "field": "my-other-field" - } - } - } - } - } -} -' ----- - -// TEST[setup:my_index] - -// TEST[s/_search/_search?size=0/] - -// TEST[s/my-field/http.request.method/] - -// TEST[s/my-other-field/http.response.bytes/] - -The response nests sub-aggregation results under their parent aggregation: - -// TESTRESPONSE[s/\.\.\./"took": "$body.took", "timed_out": false, "_shards": "$body._shards", "hits": "$body.hits",/] - -// TESTRESPONSE[s/"key": "foo"/"key": "get"/] - -// TESTRESPONSE[s/"value": 75.0/"value": $body.aggregations.my-agg-name.buckets.0.my-sub-agg-name.value/] - -[source,json] ----- -{ - ... - "aggregations": { - "my-agg-name": { <1> - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [ - { - "key": "foo", - "doc_count": 5, - "my-sub-agg-name": { <2> - "value": 75.0 - } - } - ] - } - } -} ----- - -<1> Results for the parent aggregation, `my-agg-name`. - -<2> Results for `my-agg-name`'s sub-aggregation, `my-sub-agg-name`. 
- -[discrete] -[[elasticsearch-explore-your-data-aggregations-add-custom-metadata]] -== Add custom metadata - -Use the `meta` object to associate custom metadata with an aggregation: - -[source,bash] ----- -curl "${ES_URL}/my-index/_search?pretty" \ --H "Authorization: ApiKey ${API_KEY}" \ --H "Content-Type: application/json" \ --d' -{ - "aggs": { - "my-agg-name": { - "terms": { - "field": "my-field" - }, - "meta": { - "my-metadata-field": "foo" - } - } - } -} -' ----- - -// TEST[setup:my_index] - -// TEST[s/_search/_search?size=0/] - -The response returns the `meta` object in place: - -[source,json] ----- -{ - ... - "aggregations": { - "my-agg-name": { - "meta": { - "my-metadata-field": "foo" - }, - "doc_count_error_upper_bound": 0, - "sum_other_doc_count": 0, - "buckets": [] - } - } -} ----- - -// TESTRESPONSE[s/\.\.\./"took": "$body.took", "timed_out": false, "_shards": "$body._shards", "hits": "$body.hits",/] - -[discrete] -[[elasticsearch-explore-your-data-aggregations-return-the-aggregation-type]] -== Return the aggregation type - -By default, aggregation results include the aggregation's name but not its type. -To return the aggregation type, use the `typed_keys` query parameter. - -[source,bash] ----- -curl "${ES_URL}/my-index/_search?typed_keys&pretty" \ --H "Authorization: ApiKey ${API_KEY}" \ --H "Content-Type: application/json" \ --d' -{ - "aggs": { - "my-agg-name": { - "histogram": { - "field": "my-field", - "interval": 1000 - } - } - } -} -' - ----- - -// TEST[setup:my_index] - -// TEST[s/typed_keys/typed_keys&size=0/] - -// TEST[s/my-field/http.response.bytes/] - -The response returns the aggregation type as a prefix to the aggregation's name. - -[IMPORTANT] -==== -Some aggregations return a different aggregation type from the -type in the request. 
For example, the terms, {ref}/search-aggregations-bucket-significantterms-aggregation.html[significant terms], -and {ref}/search-aggregations-metrics-percentile-aggregation.html[percentiles] -aggregations return different aggregations types depending on the data type of -the aggregated field. -==== - -// TESTRESPONSE[s/\.\.\./"took": "$body.took", "timed_out": false, "_shards": "$body._shards", "hits": "$body.hits",/] - -// TESTRESPONSE[s/"buckets": \[\]/"buckets":\[\{"key":1070000.0,"doc_count":5\}\]/] - -[source,json] ----- -{ - ... - "aggregations": { - "histogram#my-agg-name": { <1> - "buckets": [] - } - } -} ----- - -<1> The aggregation type, `histogram`, followed by a `#` separator and the aggregation's name, `my-agg-name`. - -[discrete] -[[elasticsearch-explore-your-data-aggregations-use-scripts-in-an-aggregation]] -== Use scripts in an aggregation - -When a field doesn't exactly match the aggregation you need, you -should aggregate on a {ref}/runtime.html[runtime field]: - -[source,bash] ----- -curl "${ES_URL}/my-index/_search?pretty" \ --H "Authorization: ApiKey ${API_KEY}" \ --H "Content-Type: application/json" \ --d' -{ - "size": 0, - "runtime_mappings": { - "message.length": { - "type": "long", - "script": "emit(doc[\u0027message.keyword\u0027].value.length())" - } - }, - "aggs": { - "message_length": { - "histogram": { - "interval": 10, - "field": "message.length" - } - } - } -} -' ----- - -Scripts calculate field values dynamically, which adds a little -overhead to the aggregation. In addition to the time spent calculating, -some aggregations like {ref}/search-aggregations-bucket-terms-aggregation.html[`terms`] -and {ref}/search-aggregations-bucket-filters-aggregation.html[`filters`] can't use -some of their optimizations with runtime fields. In total, performance costs -for using a runtime field varies from aggregation to aggregation. 
- -[discrete] -[[elasticsearch-explore-your-data-aggregations-aggregation-caches]] -== Aggregation caches - -For faster responses, {es} caches the results of frequently run aggregations in -the {ref}/shard-request-cache.html[shard request cache]. To get cached results, use the -same {ref}/search-shard-routing.html#shard-and-node-preference[`preference` string] for each search. If you -don't need search hits, <> to avoid -filling the cache. - -{es} routes searches with the same preference string to the same shards. If the -shards' data doesn't change between searches, the shards return cached -aggregation results. - -[discrete] -[[elasticsearch-explore-your-data-aggregations-limits-for-long-values]] -== Limits for `long` values - -When running aggregations, {es} uses {ref}/number.html[`double`] values to hold and -represent numeric data. As a result, aggregations on `long` numbers -greater than 2^53 are approximate. diff --git a/serverless/pages/explore-your-data-visualize-your-data-create-dashboards.asciidoc b/serverless/pages/explore-your-data-visualize-your-data-create-dashboards.asciidoc deleted file mode 100644 index ad5bebd4..00000000 --- a/serverless/pages/explore-your-data-visualize-your-data-create-dashboards.asciidoc +++ /dev/null @@ -1,95 +0,0 @@ -[role="exclude",id="elasticsearch-explore-your-data-dashboards"] -= Create dashboards - -// :description: Create dashboards to visualize and monitor your {es} data. -// :keywords: serverless, elasticsearch, dashboards, how to - -preview:[] - -Learn the most common way to create a dashboard from your own data. The tutorial will use sample data from the perspective of an analyst looking at website logs, but this type of dashboard works on any type of data. - -[discrete] -[[open-the-dashboard]] -== Open the dashboard - -Begin with an empty dashboard, or open an existing dashboard. - -. Open the main menu, then click **Dashboard**. -. 
On the **Dashboards** page, choose one of the following options: - -* To start with an empty dashboard, click **Create dashboard**. -+ -When you create a dashboard, you are automatically in edit mode and can make changes. -* To open an existing dashboard, click the dashboard **Title** you want to open. -+ -When you open an existing dashboard, you are in view mode. To make changes, click **Edit** in the toolbar. - -[discrete] -[[elasticsearch-explore-your-data-dashboards-add-data-and-create-a-dashboard]] -== Add data and create a dashboard - -Add the sample web logs data, and create and set up the dashboard. - -. On the **Dashboard** page, click **Add some sample data**. -. Click **Other sample data sets**. -. On the **Sample web logs** card, click **Add data**. - -Create the dashboard where you'll display the visualization panels. - -. Open the main menu, then click **Dashboard**. -. Click **[Logs] Web Traffic**. - -By default some visualization panels have been created for you using the sample data. Go to <> to learn about the different visualizations. - -[role="screenshot"] -image::images/dashboard-example.png[dashboard with default visualizations using sample data] - -[discrete] -[[elasticsearch-explore-your-data-dashboards-reset-the-dashboard]] -== Reset the dashboard - -To remove any changes you've made, reset the dashboard to the last saved changes. - -. In the toolbar, click **Reset**. -. Click **Reset dashboard**. - -[discrete] -[[elasticsearch-explore-your-data-dashboards-save-dashboards]] -== Save dashboards - -When you've finished making changes to the dashboard, save it. - -. In the toolbar, click **Save**. -. To exit **Edit** mode, click **Switch to view mode**. - -[discrete] -[[elasticsearch-explore-your-data-dashboards-add-dashboard-settings]] -== Add dashboard settings - -When creating a new dashboard you can add the title, tags, design options, and more to the dashboard. - -. In the toolbar, click **Settings**. -. 
On the **Dashboard settings** flyout, enter the **Title** and an optional **Description**. -. Add any applicable **Tags**. -. Specify the following settings: - -* **Store time with dashboard** — Saves the specified time filter. -* **Use margins between panels** — Adds a margin of space between each panel. -* **Show panel titles** — Displays the titles in the panel headers. -* **Sync color palettes across panels** — Applies the same color palette to all panels on the dashboard. -* **Sync cursor across panels** — When you hover your cursor over a panel, the cursor on all other related dashboard charts automatically appears. -* **Sync tooltips across panels** — When you hover your cursor over a panel, the tooltips on all other related dashboard charts automatically appears. - -. Click **Apply**. - -[discrete] -[[elasticsearch-explore-your-data-dashboards-share-dashboards]] -== Share dashboards - -To share the dashboard with a larger audience, click **Share** in the toolbar. For detailed information about the sharing options, refer to {kibana-ref}/reporting-getting-started.html[Reporting]. - -[discrete] -[[elasticsearch-explore-your-data-dashboards-export-dashboards]] -== Export dashboards - -To automate {kib}, you can export dashboards as JSON using the {kibana-ref}/saved-objects-api-export.html[Export objects API]. It is important to export dashboards with all necessary references. diff --git a/serverless/pages/explore-your-data-visualize-your-data-create-visualizations.asciidoc b/serverless/pages/explore-your-data-visualize-your-data-create-visualizations.asciidoc deleted file mode 100644 index ad1506ac..00000000 --- a/serverless/pages/explore-your-data-visualize-your-data-create-visualizations.asciidoc +++ /dev/null @@ -1,384 +0,0 @@ -[role="exclude",id="elasticsearch-explore-your-data-visualizations"] -= Create visualizations - -// :description: Create charts, graphs, maps, and more from your {es} data. 
-// :keywords: serverless, elasticsearch, visualize, how to - -preview:[] - -Learn how to create some visualization panels to add to your dashboard. -This tutorial uses the same web logs sample data from <>. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-open-the-visualization-editor-and-get-familiar-with-the-data]] -== Open the visualization editor and get familiar with the data - -Once you have loaded the web logs sample data into your dashboard lets open the visualization editor, to ensure the correct fields appear. - -. On the dashboard, click **Create visualization**. -. Make sure the **{kib} Sample Data Logs** {data-source} appears. - -To create the visualizations in this tutorial, you'll use the following fields: - -* **Records** -* **timestamp** -* **bytes** -* **clientip** -* **referer.keyword** - -To see the most frequent values in a field, hover over the field name, then click _i_. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-create-your-first-visualization]] -== Create your first visualization - -Pick a field you want to analyze, such as **clientip**. To analyze only the **clientip** field, use the **Metric** visualization to display the field as a number. - -The only number function that you can use with **clientip** is **Unique count**, also referred to as cardinality, which approximates the number of unique values. - -. Open the **Visualization type** dropdown, then select **Metric**. -. From the **Available fields** list, drag **clientip** to the workspace or layer pane. -+ -In the layer pane, **Unique count of clientip** appears because the editor automatically applies the **Unique count** function to the **clientip** field. **Unique count** is the only numeric function that works with IP addresses. -. In the layer pane, click **Unique count of clientip**. -+ -a. In the **Name** field, enter `Unique visitors`. -+ -b. Click **Close**. -. Click **Save and return**. 
-+ -**[No Title]** appears in the visualization panel header. Since the visualization has its own `Unique visitors` label, you do not need to add a panel title. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-view-a-metric-over-time]] -== View a metric over time - -There are two shortcuts you can use to view metrics over time. -When you drag a numeric field to the workspace, the visualization editor adds the default -time field from the {data-source}. When you use the **Date histogram** function, you can -replace the time field by dragging the field to the workspace. - -To visualize the **bytes** field over time: - -. On the dashboard, click **Create visualization**. -. From the **Available fields** list, drag **bytes** to the workspace. -+ -The visualization editor creates a bar chart with the **timestamp** and **Median of bytes** fields. -. To zoom in on the data, click and drag your cursor across the bars. - -To emphasize the change in **Median of bytes** over time, change the visualization type to **Line** with one of the following options: - -* In the **Suggestions**, click the line chart. -* In the editor toolbar, open the **Visualization type** dropdown, then select **Line**. - -To increase the minimum time interval: - -. In the layer pane, click **timestamp**. -. Change the **Minimum interval** to **1d**, then click **Close**. -+ -You can increase and decrease the minimum interval, but you are unable to decrease the interval below the configured **Advanced Settings**. - -To save space on the dashboard, hide the axis labels. - -. Open the **Left axis** menu, then select **None** from the **Axis title** dropdown. -. Open the **Bottom axis** menu, then select **None** from the **Axis title** dropdown. -. Click **Save and return** - -Since you removed the axis labels, add a panel title: - -. Open the panel menu, then select **Panel settings**. -. In the **Title** field, enter `Median of bytes`, then click **Apply**. 
- -[discrete] -[[elasticsearch-explore-your-data-visualizations-view-the-top-values-of-a-field]] -== View the top values of a field - -Create a visualization that displays the most frequent values of **request.keyword** on your website, ranked by the unique visitors. To create the visualization, use **Top values of request.keyword** ranked by **Unique count of clientip**, instead of being ranked by **Count of records**. - -The **Top values** function ranks the unique values of a field by another function. -The values are the most frequent when ranked by a **Count** function, and the largest when ranked by the **Sum** function. - -. On the dashboard, click **Create visualization**. -. From the **Available fields** list, drag **clientip** to the **Vertical axis** field in the layer pane. -+ -The visualization editor automatically applies the **Unique count** function. If you drag **clientip** to the workspace, the editor adds the field to the incorrect axis. -. Drag **request.keyword** to the workspace. -+ -When you drag a text or IP address field to the workspace, the editor adds the **Top values** function ranked by **Count of records** to show the most frequent values. - -The chart labels are unable to display because the **request.keyword** field contains long text fields. You could use one of the **Suggestions**, but the suggestions also have issues with long text. The best way to display long text fields is with the **Table** visualization. - -. Open the **Visualization type** dropdown, then select **Table**. -. In the layer pane, click **Top 5 values of request.keyword**. -+ -a. In the **Number of values** field, enter `10`. -+ -b. In the **Name** field, enter `Page URL`. -+ -c. Click **Close**. -. Click **Save and return**. -+ -Since the table columns are labeled, you do not need to add a panel title. 
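The top-values ranking built above in Lens can also be requested directly through the `_search` API, using a `terms` aggregation ordered by a `cardinality` sub-aggregation. This is an untested sketch that assumes the sample web logs are stored in the `kibana_sample_data_logs` index and that the `ES_URL` and `API_KEY` environment variables hold your endpoint URL and encoded API key:

[source,bash]
----
curl "${ES_URL}/kibana_sample_data_logs/_search?pretty" \
-H "Authorization: ApiKey ${API_KEY}" \
-H "Content-Type: application/json" \
-d'
{
  "size": 0,
  "aggs": {
    "top_pages": {
      "terms": {
        "field": "request.keyword",
        "size": 10,
        "order": { "unique_visitors": "desc" }
      },
      "aggs": {
        "unique_visitors": {
          "cardinality": { "field": "clientip" }
        }
      }
    }
  }
}
'
----

`cardinality` approximates the unique count of `clientip`, and the `order` clause ranks the ten most requested pages by that value instead of by document count.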
- -[discrete] -[[elasticsearch-explore-your-data-visualizations-compare-a-subset-of-documents-to-all-documents]] -== Compare a subset of documents to all documents - -Create a proportional visualization that helps you determine if your users transfer more bytes from documents under 10KB versus documents over 10KB. - -. On the dashboard, click **Create visualization**. -. From the **Available fields** list, drag **bytes** to the **Vertical axis** field in the layer pane. -. In the layer pane, click **Median of bytes**. -. Click the **Sum** quick function, then click **Close**. -. From the **Available fields** list, drag **bytes** to the **Break down by** field in the layer pane. - -To select documents based on the number range of a field, use the **Intervals** function. -When the ranges are non numeric, or the query requires multiple clauses, you could use the **Filters** function. - -Specify the file size ranges: - -. In the layer pane, click **bytes**. -. Click **Create custom ranges**, enter the following in the **Ranges** field, then press Return: - -* **Ranges** — `0` -> `10240` -* **Label** — `Below 10KB` - -. Click **Add range**, enter the following, then press Return: - -* **Ranges** — `10240` -> `+∞` -* **Label** — `Above 10KB` - -. From the **Value format** dropdown, select **Bytes (1024)**, then click **Close**. - -To display the values as a percentage of the sum of all values, use the **Pie** chart. - -. Open the **Visualization Type** dropdown, then select **Pie**. -. Click **Save and return**. - -Add a panel title: - -. Open the panel menu, then select **Panel settings**. -. In the **Title** field, enter `Sum of bytes from large requests`, then click **Apply**. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-view-the-distribution-of-a-number-field]] -== View the distribution of a number field - -The distribution of a number can help you find patterns. 
For example, you can analyze the website traffic per hour to find the best time for routine maintenance. - -. On the dashboard, click **Create visualization**. -. From the **Available fields** list, drag **bytes** to **Vertical axis** field in the layer pane. -. In the layer pane, click **Median of bytes**. -+ -a. Click the **Sum** quick function. -+ -b. In the **Name** field, enter `Transferred bytes`. -+ -c. From the **Value format** dropdown, select **Bytes (1024)**, then click **Close**. -. From the **Available fields** list, drag **hour_of_day** to **Horizontal axis** field in the layer pane. -. In the layer pane, click **hour_of_day**, then slide the **Intervals granularity** slider until the horizontal axis displays hourly intervals. -. Click **Save and return**. - -Add a panel title: - -. Open the panel menu, then select **Panel settings**. -. In the **Title** field, enter `Website traffic`, then click **Apply**. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-create-a-multi-level-chart]] -== Create a multi-level chart - -**Table** and **Proportion** visualizations support multiple functions. For example, to create visualizations that break down the data by website traffic sources and user geography, apply the **Filters** and **Top values** functions. - -. On the dashboard, click **Create visualization**. -. Open the **Visualization type** dropdown, then select **Treemap**. -. From the **Available fields** list, drag **Records** to the **Metric** field in the layer pane. -. In the layer pane, click **Add or drag-and-drop a field** for **Group by**. - -Create a filter for each website traffic source: - -. Click **Filters**. -. Click **All records**, enter the following in the query bar, then press Return: - -* **KQL** — `referer : **facebook.com**` -* **Label** — `Facebook` - -. Click **Add a filter**, enter the following in the query bar, then press Return: - -* **KQL** — `referer : **twitter.com**` -* **Label** — `Twitter` - -. 
Click **Add a filter**, enter the following in the query bar, then press Return: - -* **KQL** — `NOT referer : **twitter.com** OR NOT referer: **facebook.com**` -* **Label** — `Other` - -. Click **Close**. - -Add the user geography grouping: - -. From the **Available fields** list, drag **geo.srcdest** to the workspace. -. To change the **Group by** order, drag **Top 3 values of geo.srcdest** in the layer pane so that appears first. - -Remove the documents that do not match the filter criteria: - -. In the layer pane, click **Top 3 values of geo.srcdest**. -. Click **Advanced**, deselect **Group other values as "Other"**, then click **Close**. -. Click **Save and return**. - -Add a panel title: - -. Open the panel menu, then select **Panel settings**. -. In the **Title** field, enter `Page views by location and referrer`, then click **Apply**. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-visualization-panels]] -== Visualization panels - -Visualization panels are how you display visualizations of your data and what make Kibana such a useful tool. Panels are designed to build interactive dashboards. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-create-and-add-panels]] -=== Create and add panels - -Create new panels, which can be accessed from the dashboard toolbar or the **Visualize Library**, or add panels that are saved in the **Visualize Library**, or search results from <>. - -Panels added to the **Visualize Library** are available to all dashboards. - -To create panels from the dashboard: - -. From the main menu, click **Dashboard** and select **[Logs] Web Traffic**. -. Click **Edit** then click **Create visualization**. -. From the **Available fields** drag and drop the data you want to visualize. -. Click **Save and return**. -. Click **Save** to add the new panel to your dashboard. - -To create panels from the **Visualize Library**: - -. From the main menu, click **Visualize Library**. -. 
Click **Create visualization**, then select an editor from the options. -. Click **Save** once you have created your new visualization. -. In the modal, enter a **Title**, **Description**, and decide if you want to save the new panel to an existing dashboard, a new dashboard, or to the **Visualize Library**. -. Save the panel. - -To add existing panels from the **Visualize Library**: - -. From the main menu, click **Dashboard** and select **[Logs] Web Traffic**. -. Click **Edit** then in the dashboard toolbar, click **Add from library**. -. Click the panel you want to add to the dashboard, then click _X_. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-save-panels]] -=== Save panels - -Consider where you want to save and add the panel in {kib}. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-save-to-the-visualize-library]] -==== Save to the Visualize Library - -To use the panel on other dashboards, save the panel to the **Visualize Library**. When panels are saved in the **Visualize Library**, image:images/icons/folderCheck.svg[Visualize Library] appears in the panel header. - -If you created the panel from the dashboard: - -. Open the panel menu and click **More → Save to library**. -. Enter the **Title** and click **Save**. - -If you created the panel from the **Visualize Library**: - -. In the editor, click **Save**. -. Under **Save visualization** enter a **Title**, **Description**, and decide if you want to save the new panel to an existing dashboard, a new dashboard, or to the **Visualize Library**. -. Click **Save and go to Dashboard**. -. Click **Save**. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-save-to-the-dashboard]] -==== Save to the dashboard - -Return to the dashboard and add the panel without specifying the save options or adding the panel to the **Visualize Library**. - -If you created the panel from the dashboard: - -. In the editor, click **Save and return**. -. Click **Save**. 
- -If you created the panel from the **Visualize Library**: - -. Click **Save**. -. Under **Save visualization** enter a **Title**, **Description**, and decide if you want to save the new panel to an existing dashboard, a new dashboard, or to the **Visualize Library**. -. Click **Save and go to Dashboard**. -. Click **Save**. - -To add unsaved panels to the **Visualize Library**: - -. Open the panel menu, then select **More → Save to library**. -. Enter the panel title, then click **Save**. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-arrange-panels]] -=== Arrange panels - -Compare the data in your panels side-by-side, organize panels by priority, resize the panels so they all appear on the dashboard without scrolling down, and more. - -In the toolbar, click **Edit**, then use the following options: - -* To move, click and hold the panel header, then drag to the new location. -* To resize, click the resize control, then drag to the new dimensions. -* To maximize to fullscreen, open the panel menu, then click **More → Maximize panel**. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-add-text-panels]] -=== Add text panels - -Add **Text** panels to your dashboard that display important information, instructions, and more. You create **Text** panels using https://github.github.com/gfm/[GitHub-flavored Markdown] text. - -. On the dashboard, click **Edit**. -. Click **Add panel** and select **image:images/icons/visText.svg[Create new text] Text**. -. Check the rendered text, then click **Save and return**. -. To save the new text panel to your dashboard click **Save**. - -[discrete] -[[elasticsearch-explore-your-data-visualizations-add-image-panels]] -=== Add image panels - -To personalize your dashboards, add your own logos and graphics with the **Image** panel. You can upload images from your computer, or add images from an external link. - -. On the dashboard, click **Edit**. -. 
Click **Add panel** and select **image:images/icons/image.svg[Add image] Image**. -. Use the editor to add an image. -. Click **Save**. -. To save the new image panel to your dashboard click **Save**. - -To manage your uploaded image files, open the main menu, then click **Management → Files**. - -[WARNING] -==== -When you export a dashboard, the uploaded image files are not exported. -When importing a dashboard with an image panel, and the image file is unavailable, the image panel displays a `not found` warning. Such panels have to be fixed manually by re-uploading the image using the panel's image editor. -==== - -[discrete] -[[edit-panels]] -=== Edit panels - -To make changes to the panel, use the panel menu options. - -. In the toolbar, click **Edit**. -. Open the panel menu, then use the following options: - -* **Edit Lens** — Opens **Lens** so you can make changes to the visualization. -* **Edit visualization** — Opens the editor so you can make changes to the panel. -* **Edit map** — Opens the editor so you can make changes to the map panel. -+ -The above options display in accordance to the type of visualization the panel is made up of. -* **Edit Lens** — Opens aggregation-based visualizations in **Lens**. -* **Clone panel** — Opens a copy of the panel on your dashboard. -* **Panel settings** — Opens the **Panel settings** window to change the **title**, **description**, and **time range**. -* **More → Inspect** — Opens an editor so you can view the data and the requests that collect that data. -* **More → Explore data in Discover** — Opens that panels data in **Discover**. -* **More → Save to library** — Saves the panel to the **Visualize Library**. -* **More → Maximize panel** — Maximizes the panel to full screen. -* **More → Download as CSV** — Downloads the data as a CSV file. -* **More → Replace panel** — Opens the **Visualize Library** so you can select a new panel to replace the existing panel. 
-* **More → Copy to dashboard** — Copy the panel to a different dashboard. -* **More → Delete from dashboard** — Removes the panel from the dashboard. diff --git a/serverless/pages/explore-your-data-visualize-your-data.asciidoc b/serverless/pages/explore-your-data-visualize-your-data.asciidoc deleted file mode 100644 index 5c7a3396..00000000 --- a/serverless/pages/explore-your-data-visualize-your-data.asciidoc +++ /dev/null @@ -1,38 +0,0 @@ -[[elasticsearch-explore-your-data-visualize-your-data]] -= Visualize your data - -// :description: Build dynamic dashboards and visualizations for your {es} data. -// :keywords: serverless, elasticsearch, visualize, how to - -preview:[] - -The best way to understand your data is to visualize it. - -Elastic provides a wide range of pre-built dashboards for visualizing data from a variety of sources. -These dashboards are loaded automatically when you install https://www.elastic.co/docs/current/integrations[Elastic integrations]. - -You can also create new dashboards and visualizations based on your data views to get a full picture of your data. - -In your {es-serverless} project, go to **Dashboards** to see existing dashboards or create your own. - -Notice you can filter the list of dashboards: - -* Use the text search field to filter by name or description. -* Use the **Tags** menu to filter by tag. To create a new tag or edit existing tags, click **Manage tags**. -* Click a dashboard's tags to toggle filtering for each tag. - -[discrete] -[[elasticsearch-explore-your-data-visualize-your-data-create-new-dashboards]] -== Create new dashboards - -To create a new dashboard, click **Create dashboard** and begin adding visualizations. -You can create charts, graphs, maps, tables, and other types of visualizations from your data, or you can add visualizations from the library. - -You can also add other types of panels — such as filters, links, and text — and add controls like time sliders. 
- -For more information about creating dashboards, refer to the {kibana-ref}/dashboard.html[{kib} documentation]. - -[NOTE] -==== -The {kib} documentation is written for {kib} users, but the steps for serverless are very similar. -==== diff --git a/serverless/pages/explore-your-data.asciidoc b/serverless/pages/explore-your-data.asciidoc index e58339a9..a9b8229b 100644 --- a/serverless/pages/explore-your-data.asciidoc +++ b/serverless/pages/explore-your-data.asciidoc @@ -6,9 +6,43 @@ preview:[] -In addition to search, {es3} offers several options for analyzing and visualizing your data. +In addition to search, {es-serverless} offers several options for analyzing and visualizing your data. -* <>: Use the {es-serverless} REST API to summarize your data as metrics, statistics, or other analytics. -* <>: Use the **Discover** UI to filter your data or learn about its structure. -* <>: Build dynamic dashboards that visualize your data as charts, gauges, graphs, maps, and more. -* <>: Create rules that trigger notifications based on your data. +[NOTE] +==== +These features are available on all Elastic deployment types: self-managed clusters, Elastic Cloud Hosted deployments, and {es-serverless} projects. +They are documented in the {es} and {kib} core documentation. +==== + +[discrete] +== Data analysis + +{ref}/search-aggregations.html[Aggregations]:: +Use aggregations in your https://www.elastic.co/docs/api/doc/elasticsearch-serverless/operation/operation-search#operation-search-body-application-json-aggregations[`_search` API] requests to summarize your data as metrics, statistics, or other analytics. + +{kibana-ref}/discover.html[Discover]:: +Use the **Discover** UI to quickly search and filter your data, get information about the structure of the fields, and display your findings in a visualization. ++ +🔍 Find **Discover** in your {es-serverless} project's UI under *Analyze / Discover*. 
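To illustrate the aggregations option, a minimal `terms` aggregation request looks like the following. This is a sketch that assumes an index named `my-index` with a keyword field `my-field`, and that the `ES_URL` and `API_KEY` environment variables contain your endpoint URL and an encoded API key:

[source,bash]
----
curl "${ES_URL}/my-index/_search?pretty" \
-H "Authorization: ApiKey ${API_KEY}" \
-H "Content-Type: application/json" \
-d'
{
  "size": 0,
  "aggs": {
    "my-agg-name": {
      "terms": {
        "field": "my-field"
      }
    }
  }
}
'
----

Setting `size` to `0` returns only the aggregation results, without search hits.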
+
+
+[discrete]
+== Visualization
+
+{kibana-ref}/dashboard.html[Dashboards]::
+Build dynamic dashboards that visualize your data as charts, graphs, maps, and more.
++
+🔍 Find **Dashboards** in your {es-serverless} project's UI under *Analyze / Dashboard*.
+
+{kibana-ref}/maps.html[Maps]::
+Visualize your geospatial data on a map.
++
+🔍 Find **Maps** in your {es-serverless} project's UI under *Other tools / Maps*.
+
+[discrete]
+== Monitoring
+
+{kibana-ref}/alerting-getting-started.html[Rules]::
+Create rules that trigger notifications when certain conditions are met in your data.
++
+🔍 Find **Rules** in your {es-serverless} project's UI under *Project settings > Alerts and insights > Rules*.
\ No newline at end of file
diff --git a/serverless/pages/files.asciidoc b/serverless/pages/files.asciidoc
index 3e693187..c0716e8b 100644
--- a/serverless/pages/files.asciidoc
+++ b/serverless/pages/files.asciidoc
@@ -8,7 +8,7 @@ preview:[]
 
 This content applies to: {es-badge} {obs-badge} {sec-badge}
 
-Several {serverless-full} features let you upload files. For example, you can add files to <> or upload a logo to an **Image** panel in a <>.
+Several {serverless-full} features let you upload files. For example, you can add files to <> or upload a logo to an **Image** panel in a {kibana-ref}/dashboard.html[Dashboard].
 
 You can access these uploaded files in **{project-settings} → {manage-app} → {files-app}**.
diff --git a/serverless/pages/get-started.asciidoc b/serverless/pages/get-started.asciidoc
index 5d8a1807..8acd015c 100644
--- a/serverless/pages/get-started.asciidoc
+++ b/serverless/pages/get-started.asciidoc
@@ -6,246 +6,99 @@
 
 preview:[]
 
-Follow along to set up your {es-serverless} project and get started with some sample documents.
-Then, choose how to continue with your own data.
+On this page, you will learn how to:
+
+- <>.
+
+- Get started with {es}:
+  * <>:
+  Follow the step-by-step tutorial provided in the UI to create an index and ingest data.
+ + * <>: + Use the Getting Started page's instructions to ingest data and perform your first search. + + * <>: + If you're already familiar with {es}, retrieve your connection details, select an ingest method that suits your needs, and start searching. [discrete] [[elasticsearch-get-started-create-project]] -== Create project +== Create an {es-serverless} project -Use your {ecloud} account to create a fully-managed {es-serverless} project: +Use your {ecloud} account to create a fully-managed {es} project: . Navigate to {ess-console}[cloud.elastic.co] and create a new account or log in to your existing account. . Within **Serverless Projects**, choose **Create project**. -. Choose the {es-serverless} project type. +. Choose the {es} project type. . Select a **configuration** for your project, based on your use case. + -** **General purpose**. For general search use cases across various data types. -** **Optimized for Vectors**. For search use cases using vectors and near real-time retrieval. +** **General purpose**: For general search use cases across various data types. +** **Optimized for Vectors**: For search use cases using vectors and near real-time retrieval. . Provide a name for the project and optionally edit the project settings, such as the cloud platform <>. Select **Create project** to continue. . Once the project is ready, select **Continue**. -You should now see **Get started with {es-serverless}**, and you're ready to continue. - -include::../partials/minimum-vcus-detail.asciidoc[] - -[discrete] -[[elasticsearch-get-started-create-api-key]] -== Create API key - -Create an API key, which will enable you to access the {es-serverless} API to ingest and search data. - -. On the **Getting Started** page, scroll to **Add an API Key** and select **New**. -. In **Create API Key**, enter a name for your key and (optionally) set an expiration date. -. (Optional) Under **Control Security privileges**, you can set specific access permissions for this API key. 
By default, it has full access to all APIs. -. (Optional) The **Add metadata** section allows you to add custom key-value pairs to help identify and organize your API keys. -. Select **Create API Key** to finish. - -After creation, you'll see your API key displayed as an encoded string. -Store this encoded API key securely. It is displayed only once and cannot be retrieved later. -You will use this encoded API key when sending API requests. - -[NOTE] +[TIP] ==== -You can't recover or retrieve a lost API key. Instead, you must delete the key and create a new one. +Learn how billing works for your project in <>. ==== -[discrete] -[[elasticsearch-get-started-copy-url]] -== Copy URL - -Next, copy the URL of your API endpoint. -You'll send all {es-serverless} API requests to this URL. - -. On the **Getting Started** page, scroll to **Copy your connection details** section, and find the **{es} endpoint** field. -. Copy the URL for the {es} endpoint. - -Store this value along with your `encoded` API key. -You'll use both values in the next step. +Now your project is ready to start creating indices, adding data, and performing searches. You can choose one of the following options to proceed. [discrete] -[[elasticsearch-get-started-test-connection]] -== Test connection - -We'll use the `curl` command to test your connection and make additional API requests. -(See https://everything.curl.dev/get[Install curl] if you need to install this program.) - -`curl` will need access to your {es} endpoint and `encoded` API key. -Within your terminal, assign these values to the `ES_URL` and `API_KEY` environment variables. 
- -For example: - -[source,bash] ----- -export ES_URL="https://dda7de7f1d264286a8fc9741c7741690.es.us-east-1.aws.elastic.cloud:443" -export API_KEY="ZFZRbF9Jb0JDMEoxaVhoR2pSa3Q6dExwdmJSaldRTHFXWEp4TFFlR19Hdw==" ----- - -Then run the following command to test your connection: - -[source,bash] ----- -curl "${ES_URL}" \ - -H "Authorization: ApiKey ${API_KEY}" \ - -H "Content-Type: application/json" ----- +[[elasticsearch-follow-guided-index-flow]] +== Option 1: Follow the guided index flow -You should receive a response similar to the following: +Once your project is set up, you'll be directed to a page where you can create your first index. +An index is where documents are stored and organized, making it possible to search and retrieve data. -[source,json] ----- -{ - "name" : "serverless", - "cluster_name" : "dda7de7f1d264286a8fc9741c7741690", - "cluster_uuid" : "ws0IbTBUQfigmYAVMztkZQ", - "version" : { ... }, - "tagline" : "You Know, for Search" -} ----- - -Now you're ready to ingest and search some sample documents. +. Enter a name for your index. +. Click *Create my index*. You can also create the index by clicking on *Code* to view and run code through the command line. ++ +image::images/get-started-create-an-index.png[Create an index.] -[discrete] -[[elasticsearch-get-started-ingest-data]] -== Ingest data +. You’ll be directed to the *Index Management* page. Here, copy and save the following: +- Elasticsearch URL +- API key [NOTE] ==== -This example uses {es} APIs to ingest data. If you'd prefer to upload a file using the UI, refer to <>. +You won’t be able to view this API key again. If needed, refer to <> to generate a new one. ==== -To ingest data, you must create an index and store some documents. -This process is also called "indexing". - -You can index multiple documents using a single `POST` request to the `_bulk` API endpoint. -The request body specifies the documents to store and the indices in which to store them. 
- -{es} will automatically create the index and map each document value to one of its data types. -Include the `?pretty` option to receive a human-readable response. - -Run the following command to index some sample documents into the `books` index: - -[source,bash] ----- -curl -X POST "${ES_URL}/_bulk?pretty" \ - -H "Authorization: ApiKey ${API_KEY}" \ - -H "Content-Type: application/json" \ - -d ' -{ "index" : { "_index" : "books" } } -{"name": "Snow Crash", "author": "Neal Stephenson", "release_date": "1992-06-01", "page_count": 470} -{ "index" : { "_index" : "books" } } -{"name": "Revelation Space", "author": "Alastair Reynolds", "release_date": "2000-03-15", "page_count": 585} -{ "index" : { "_index" : "books" } } -{"name": "1984", "author": "George Orwell", "release_date": "1985-06-01", "page_count": 328} -{ "index" : { "_index" : "books" } } -{"name": "Fahrenheit 451", "author": "Ray Bradbury", "release_date": "1953-10-15", "page_count": 227} -{ "index" : { "_index" : "books" } } -{"name": "Brave New World", "author": "Aldous Huxley", "release_date": "1932-06-01", "page_count": 268} -{ "index" : { "_index" : "books" } } -{"name": "The Handmaids Tale", "author": "Margaret Atwood", "release_date": "1985-06-01", "page_count": 311} -' ----- - -You should receive a response indicating there were no errors: - -[source,json] ----- -{ - "errors" : false, - "took" : 1260, - "items" : [ ... ] -} ----- +The UI provides ready-to-use code examples for ingesting data via the REST API. +Choose your preferred tool for making these requests: -[discrete] -[[elasticsearch-get-started-search-data]] -== Search data - -To search, send a `POST` request to the `_search` endpoint, specifying the index to search. -Use the {es} query DSL to construct your request body. 
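Before looking at the raw request, it may help to see how a query DSL body is assembled. A minimal Python sketch (the `query_string_body` helper is a hypothetical illustration, not part of the documented steps; only the body shape comes from this guide):

```python
import json

def query_string_body(text):
    # Query DSL body: a `query_string` query that matches `text`
    # across all fields of the indexed documents.
    return {"query": {"query_string": {"query": text}}}

# The serialized JSON is the payload curl sends via its -d flag.
print(json.dumps(query_string_body("snow")))
```

The same shape generalizes to other query types: the top-level `query` key wraps exactly one query clause.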
- -Run the following command to search the `books` index for documents containing `snow`: - -[source,bash] ----- -curl -X POST "${ES_URL}/books/_search?pretty" \ - -H "Authorization: ApiKey ${API_KEY}" \ - -H "Content-Type: application/json" \ - -d ' -{ - "query": { - "query_string": { - "query": "snow" - } - } -} -' ----- - -You should receive a response with the results: - -[source,json] ----- -{ - "took" : 24, - "timed_out" : false, - "_shards" : { - "total" : 1, - "successful" : 1, - "skipped" : 0, - "failed" : 0 - }, - "hits" : { - "total" : { - "value" : 1, - "relation" : "eq" - }, - "max_score" : 1.5904956, - "hits" : [ - { - "_index" : "books", - "_id" : "Z3hf_IoBONQ5TXnpLdlY", - "_score" : 1.5904956, - "_source" : { - "name" : "Snow Crash", - "author" : "Neal Stephenson", - "release_date" : "1992-06-01", - "page_count" : 470 - } - } - ] - } -} ----- +* <> in your project's UI +* Python +* JavaScript +* cURL [discrete] -[[elasticsearch-get-started-continue-on-your-own]] -== Continue on your own +[[elasticsearch-follow-in-product-getting-started]] +== Option 2: Follow the Getting Started guide -Congratulations! -You've set up an {es-serverless} project, and you've ingested and searched some sample data. -Now you're ready to continue on your own. +To get started using the in-product tutorial, navigate to the *Getting Started* page and follow the on-screen steps. + +image::images/getting-started-page.png[Getting Started page.] [discrete] -[[elasticsearch-get-started-explore]] -=== Explore +[[elasticsearch-explore-on-your-own]] +== Option 3: Explore on your own -Want to explore the sample documents or your own data? +If you're already familiar with Elasticsearch, you can jump right into setting up a connection and ingesting data as per your needs. -By creating a data view, you can explore data using several UI tools, such as Discover or Dashboards. Or, use {es} aggregations to explore your data using the API. Find more information in <>. +. Retrieve your <>. +. 
Ingest your data. {es} provides several methods for ingesting data: +* <> +* <> +* <> +* <> +* <> +* https://github.com/elastic/crawler[Elastic Open Web Crawler] [discrete] -[[elasticsearch-get-started-build]] -=== Build - -Ready to build your own solution? - -To learn more about sending and syncing data to {es-serverless}, or the search API and its query DSL, check <> and <>. +[[elasticsearch-next-steps]] +== Next steps -//// -/* -- -- -*/ -//// +* Once you've added data to your {es-serverless} project, you can use {kibana-ref}/playground.html[Playground] to test and tweak {es} queries and chat with your data using GenAI. +* You can also try our hands-on {ref}/quickstart.html#quickstart-list[quick start tutorials] in the core {es} documentation. \ No newline at end of file diff --git a/serverless/pages/ingest-your-data.asciidoc index 88809992..7a906530 100644 --- a/serverless/pages/ingest-your-data.asciidoc +++ b/serverless/pages/ingest-your-data.asciidoc @@ -24,6 +24,8 @@ This data can be updated, but the value of the content remains relatively consta Use connector clients to sync data from a range of popular data sources to {es}. You can also send data directly to {es} from your application using the API. +[discrete] +[[elasticsearch-ingest-time-series-data]] **Time series (timestamped) data** Time series, or timestamped data, describes data that changes frequently and "flows" over time, such as stock quotes, system metrics, and network traffic data. diff --git a/serverless/pages/knn-search.asciidoc index 65d97774..ba2ade2c 100644 --- a/serverless/pages/knn-search.asciidoc +++ b/serverless/pages/knn-search.asciidoc @@ -361,7 +361,7 @@ each score in the sum. In the example above, the scores will be calculated as score = 0.9 * match_score + 0.1 * knn_score ---- -The `knn` option can also be used with <>.
+The `knn` option can also be used with {ref}/search-aggregations.html[aggregations]. In general, {es} computes aggregations over all documents that match the search. So for approximate kNN search, aggregations are calculated on the top `k` nearest documents. If the search also includes a `query`, then aggregations are diff --git a/serverless/pages/manage-billing-monitor-usage.asciidoc b/serverless/pages/manage-billing-monitor-usage.asciidoc index b0add3f6..21828b43 100644 --- a/serverless/pages/manage-billing-monitor-usage.asciidoc +++ b/serverless/pages/manage-billing-monitor-usage.asciidoc @@ -6,23 +6,19 @@ preview:[] -To find more details about your account usage: +To get more details about your account usage: . Navigate to https://cloud.elastic.co/[cloud.elastic.co] and log in to your {ecloud} account. -. Go to the user icon on the header bar and select **Billing**. +. In the header bar, click the user icon, then select **Billing**. -On the **Usage** page you can: +On the **Usage** tab of the **Billing** page, you can: -* Monitor the usage for the current month, including total hourly rate and month-to-date usage +* Monitor usage for the current month, including month-to-date usage * Check the usage breakdown for a selected time range +* View usage totals by product [IMPORTANT] ==== The usage breakdown information is an estimate. To get the exact amount you owe for a given month, check your invoices in the <>. ==== -.{es} minimum runtime VCUs -[IMPORTANT] -==== -When you create an {es-serverless} project, a minimum number of VCUs are always allocated to your project to maintain basic ingest and search capabilities. These VCUs incur a minimum cost even with no active usage. Learn more about https://www.elastic.co/pricing/serverless-search#what-are-the-minimum-compute-resource-vcus-on-elasticsearch-serverless[minimum VCUs on {es-serverless}]. 
-==== diff --git a/serverless/pages/manage-your-project.asciidoc b/serverless/pages/manage-your-project.asciidoc index dc404899..a0b410b6 100644 --- a/serverless/pages/manage-your-project.asciidoc +++ b/serverless/pages/manage-your-project.asciidoc @@ -40,17 +40,20 @@ The total volume of search-ready data is the sum of the following: Each project type offers different settings that let you adjust the performance and volume of search-ready data, as well as the features available in your projects. +[discrete] +[[elasticsearch-manage-project-search-power-settings]] |=== | Setting | Description | Available in | **Search Power** -a| Search Power affects search speed by controlling the number of VCUs (Virtual Compute Units) allocated to search-ready data in the project. Additional VCUs provide more compute resources and result in performance gains. +a| Search Power controls the speed of searches against your data. With Search Power, you can improve search performance by adding more resources for querying, or you can reduce provisioned resources to cut costs. +Choose from three Search Power settings: -The **Cost-efficient** Search Power setting limits the available cache size, and generates cost savings by reducing search performance. +**On-demand:** Autoscales based on data and search load, with a lower minimum baseline for resource use. This flexibility results in more variable query latency and reduced maximum throughput. -The **Balanced** Search Power setting ensures that there is sufficient cache for all search-ready data, in order to respond quickly to queries. +**Performant:** Delivers consistently low latency and autoscales to accommodate moderately high query throughput. -The **Performance** Search Power setting provides more computing resources in addition to the searchable data cache, in order to respond quickly to higher query volumes and more complex queries. 
+**High-throughput:** Optimized for high-throughput scenarios, autoscaling to maintain query latency even at very high query volumes. | {es-badge} | **Search Boost Window** diff --git a/serverless/pages/maps.asciidoc b/serverless/pages/maps.asciidoc index 1da4049f..d289dfd1 100644 --- a/serverless/pages/maps.asciidoc +++ b/serverless/pages/maps.asciidoc @@ -65,7 +65,7 @@ Check out {kibana-ref}/import-geospatial-data.html[Import geospatial data]. Viewing data from different angles provides better insights. Dimensions that are obscured in one visualization might be illuminated in another. -Add your map to a <> and view your geospatial data alongside bar charts, pie charts, tag clouds, and more. +Add your map to a {kibana-ref}/dashboard.html[Dashboard] and view your geospatial data alongside bar charts, pie charts, tag clouds, and more. This choropleth map shows the density of non-emergency service requests in San Diego by council district. The map is embedded in a dashboard, so users can better understand when services are requested and gain insight into the top requested services. diff --git a/serverless/pages/pricing.asciidoc b/serverless/pages/pricing.asciidoc index 51f0ed85..d5d30289 100644 --- a/serverless/pages/pricing.asciidoc +++ b/serverless/pages/pricing.asciidoc @@ -6,51 +6,48 @@ preview:[] -{es} is priced based on the consumption of the underlying -infrastructure used to support your use case, with the performance -characteristics you need. We measure by Virtual Compute Units (VCUs), which is a -slice of RAM, CPU and local disk for caching. The number of VCUs required will -depend on the amount and the rate of data sent to {es} and retained, -and the number of searches and latency you require for searches. In addition, if -you required {ml} for inference or NLP tasks, those VCUs are also -metered and billed. 
+{es} is priced based on consumption of the underlying +infrastructure that supports your use case, with the performance +characteristics you need. Measurements are in Virtual Compute Units (VCUs). +Each VCU represents a fraction of RAM, CPU, and local disk for caching. -include::../partials/minimum-vcus-detail.asciidoc[] +The number of VCUs you need is determined by: + +* Volume and ingestion rate of your data +* Data retention requirements +* Search query volume +* Search Power setting +* Machine learning usage [discrete] [[elasticsearch-billing-information-about-the-vcu-types-search-ingest-and-ml]] -== Information about the VCU types (Search, Ingest, and ML) +== VCU types: Indexing, Search, and ML -There are three VCU types in {es}: +{es} uses three VCU types: -* **Indexing** — The VCUs used to index the incoming documents to be -stored in {es}. -* **Search** — The VCUs used to return search results with the latency and -Queries per Second (QPS) you require. -* **Machine Learning** — The VCUs used to perform inference, NLP tasks, and other ML activities. +* **Indexing:** The VCUs used to index incoming documents. +* **Search:** The VCUs used to return search results, with the latency and +queries per second (QPS) you require. +* **Machine learning:** The VCUs used to perform inference, NLP tasks, and other ML activities. [discrete] [[elasticsearch-billing-information-about-the-search-ai-lake-dimension-gb]] -== Information about the Search AI Lake dimension (GB) +== Data storage and billing + -For {es}, the Search AI Lake is where data is stored and retained. This is -charged in GBs for the size of data at rest. Depending on the enrichment, -vectorization and other activities during ingest, this size may be different -from the original size of the source data. +{es-serverless} projects store data in the <>. You are charged per GB of stored data at rest.
Note that if you perform operations at ingest such as vectorization or enrichment, the size of your stored data will differ from the size of the original source data. [discrete] [[elasticsearch-billing-managing-elasticsearch-costs]] == Managing {es} costs -You can control costs in a number of ways. Firstly there is the amount of -data that is retained. {es} will ensure that the most recent data is -cached, allowing for fast retrieval. Reducing the amount of data means fewer -Search VCUs may be required. If you need lower latency, then more Search VCUs -can be added by adjusting the Search Power. A further refinement is for data streams that can be used to store -time series data. For that type of data, you can further define the number of -days of data you want cacheable, which will affect the number of Search VCUs and -therefore the cost. Note that {es-serverless} maintains and bills for -https://www.elastic.co/pricing/serverless-search#what-are-the-minimum-compute-resource-vcus-on-elasticsearch-serverless[minimum compute resource Ingest and Search VCUs]. - -For detailed {es-serverless} project rates, check the -https://www.elastic.co/pricing/serverless-search[{es-serverless} pricing page]. +You can control costs by using a lower Search Power setting or reducing the amount +of retained data. + +* **Search Power setting:** <> controls the speed of searches against your data. With Search Power, you can +improve search performance by adding more resources for querying, or you can reduce provisioned +resources to cut costs. +* **Time series data retention:** By limiting the number of days of <> that are available for caching, +you can reduce the number of search VCUs required. + +For detailed {es-serverless} project rates, see the https://www.elastic.co/pricing/serverless-search[{es-serverless} pricing page]. 
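To make the consumption-based model above concrete, here is a toy bill calculation. The per-VCU-hour rates in this sketch are invented placeholders, not Elastic's actual prices; consult the pricing page linked above for real rates.

```python
# Hypothetical per-VCU-hour rates, for illustration only.
RATES = {"indexing": 0.14, "search": 0.14, "ml": 0.28}

def vcu_cost(vcu_hours, rates=RATES):
    # Total cost: VCU-hours consumed per type, multiplied by that type's rate.
    return sum(rates[kind] * hours for kind, hours in vcu_hours.items())

# E.g. a month with 100 indexing VCU-hours and 500 search VCU-hours:
print(round(vcu_cost({"indexing": 100.0, "search": 500.0, "ml": 0.0}), 2))  # 84.0
```

Lowering the Search Power setting or shortening time series retention reduces the search VCU-hours term in this sum, which is exactly how the cost controls described above take effect.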
diff --git a/serverless/pages/serverless-differences.asciidoc index e4a060b0..6ee6b812 100644 --- a/serverless/pages/serverless-differences.asciidoc +++ b/serverless/pages/serverless-differences.asciidoc @@ -22,7 +22,7 @@ complexity by optimizing your cluster performance for you. Data stream lifecycle is an optimized lifecycle tool that lets you focus on the most common lifecycle management needs, without unnecessary hardware-centric concepts like data tiers. + -* **Watcher** is not available, in favor of **<>**. +* **Watcher** is not available, in favor of **Kibana Alerts**. + Kibana Alerts allows rich integrations across use cases like APM, metrics, security, and uptime. Prepackaged rule types simplify setup and hide the details of complex, domain-specific detections, while providing a consistent interface across Kibana. diff --git a/serverless/pages/what-is-elasticsearch-serverless.asciidoc index 3eb6b4c9..58fbb60d 100644 --- a/serverless/pages/what-is-elasticsearch-serverless.asciidoc +++ b/serverless/pages/what-is-elasticsearch-serverless.asciidoc @@ -1,32 +1,62 @@ -//// -To be rewritten/refined -//// +// ℹ️ THIS CONTENT IS RENDERED ON THE index-serverless-elasticsearch.asciidoc PAGE +// Use the id <> to link to this page // :description: Build search solutions and applications with {es-serverless}. // :keywords: serverless, elasticsearch, overview preview:[] -{es} allows you to build custom applications. Whether you have structured or unstructured text, numerical data, or geospatial data, {es} can efficiently store and index it in a way that supports fast searches. - -.Understanding {es-serverless} -[IMPORTANT] +[TIP] ==== -Refer to <> and <> for important details, including features and limitations specific to {es-serverless}.
+If you haven't used {es} before, first learn the basics in the https://www.elastic.co/guide/en/elasticsearch/reference/current/elasticsearch-intro.html[core {es} documentation]. ==== -[discrete] -== Get started -* <>: Create your first {es} project. -* <>: Learn how to get your data into {es}. +{es} is one of the three available project types on <>. + +This project type enables you to use the core functionality of {es}: searching, indexing, storing, and analyzing data of all shapes and sizes. + +When using {es} on {serverless-full} you don’t need to worry about managing the infrastructure that keeps {es} distributed and available: nodes, shards, and replicas. These resources are completely automated on the serverless platform, which is designed to scale up and down with your workload. + +This automation allows you to focus on building your search applications and solutions. [discrete] -== How to +[[elasticsearch-overview-get-started]] +== Get started -* <>: Build your queries to perform and combine many types of searches. -* <>: Search, filter your data, and display your findings. -* <>: Create rules to detect complex conditions and trigger alerts. -* <>: Send requests with Console and profile queries with Search Profiler. -* <>: Manage user access, billing, and check performance metrics. +[cols="2"] +|=== +| 🚀 +a| [.card-title]#<># + +Get started by creating your first {es} project on serverless. +| 🔌 +a| [.card-title]#<># + +Learn how to connect your applications to your {es-serverless} endpoint. + +// TODO add coming link to new page about connecting to your serverless endpoint +// <> + +| ⤵️ +a| [.card-title]#<># + +Learn how to get your data into {es} and start building your search application. + +| 🛝 +a| [.card-title]#https://www.elastic.co/guide/en/kibana/master/playground.html[*Try Playground →*]# + +After you've added some data, use Playground to test out queries and combine {es} with the power of Generative AI in your applications. 
+|=== + +[discrete] +[[elasticsearch-overview-learn-more]] +== Learn more + +[cols="2"] +|=== +| ❓ +a| [.card-title]#<># + +Understand the differences between {es} on {serverless-full} and other deployment types. + +| 🧾 +a| [.card-title]#<># + +Learn about the billing model for {es} on {serverless-full}. +|===