diff --git a/docs/shipping/Code/dotnet.md b/docs/shipping/Code/dotnet.md
index c9b6a194..03a387fa 100644
--- a/docs/shipping/Code/dotnet.md
+++ b/docs/shipping/Code/dotnet.md
@@ -1197,14 +1197,14 @@ The following example uses a basic [Minimal API with ASP.NET Core](https://learn

### Create and launch an HTTP Server

-To begin, set up an environment in a new directory called `dotnet-simple`. Within that directory, execute following command:
+1. Set up an environment in a new directory called `dotnet-simple`. Within that directory, execute the following command:

-```
+```bash
dotnet new web
```

-In the same directory, replace the content of Program.cs with the following code:
+2. In the same directory, replace the content of `Program.cs` with the following code:

-```
+```csharp
using System.Globalization;

using Microsoft.AspNetCore.Mvc;

@@ -1239,7 +1239,7 @@ app.Run();

```

-In the Properties subdirectory, replace the content of launchSettings.json with the following:
+3. In the `Properties` subdirectory, replace the content of `launchSettings.json` with the following:

```
{
@@ -1259,7 +1259,7 @@

```

-Build and run the application with the following command, then open http://localhost:8080/rolldice in your web browser to ensure it is working.
+4. Build and run the application with the following command, then open http://localhost:8080/rolldice in your web browser to ensure it is working.

```
dotnet build
@@ -1267,67 +1267,73 @@ dotnet run
```

### Instrumentation
-Next we’ll install the instrumentation [NuGet packages from OpenTelemetry](https://www.nuget.org/profiles/OpenTelemetry) that will generate the telemetry, and set them up.
+Next, we'll configure the OpenTelemetry logging exporter to send logs to Logz.io via the OTLP listener.
+
+This configuration is designed to send logs to your Logz.io account via the OpenTelemetry Protocol (OTLP) listener.
You need to specify your Logz.io token and configure the listener endpoint to match the correct region. By default, the endpoint is `https://otlp-listener.logz.io/v1/logs`, but it should be adjusted based on your region. You can find more details on the regional configurations in the [Hosting Regions Documentation](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions). + + 1. Add the packages - ``` - dotnet add package OpenTelemetry.Extensions.Hosting - dotnet add package OpenTelemetry.Instrumentation.AspNetCore - dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol - ``` - -2. Setup the OpenTelemetry code - - In Program.cs, replace the following lines: - - ``` - var builder = WebApplication.CreateBuilder(args); - var app = builder.Build(); - ``` - With: - ``` - using OpenTelemetry; - using OpenTelemetry.Logs; - using OpenTelemetry.Resources; - using OpenTelemetry.Exporter; - - var builder = WebApplication.CreateBuilder(args); - - const string serviceName = "roll-dice"; - const string logzioEndpoint = "https://otlp-listener.logz.io/v1/logs"; - const string logzioToken = ""; - - builder.Logging.AddOpenTelemetry(options => - { - options - .SetResourceBuilder( - ResourceBuilder.CreateDefault() - .AddService(serviceName)) - .AddOtlpExporter(otlpOptions => - { - otlpOptions.Endpoint = new Uri(logzioEndpoint); - otlpOptions.Headers = $"Authorization=Bearer {logzioToken}, user-agent=logzio-dotnet-logs"; - otlpOptions.Protocol = OtlpExportProtocol.HttpProtobuf; - }); - }); - - var app = builder.Build(); - ``` + + ```bash + dotnet add package OpenTelemetry.Extensions.Hosting + dotnet add package OpenTelemetry.Instrumentation.AspNetCore + dotnet add package OpenTelemetry.Exporter.OpenTelemetryProtocol + ``` + +2. 
Set up the OpenTelemetry code in `Program.cs` by replacing the following lines:
+
+    ```csharp
+    var builder = WebApplication.CreateBuilder(args);
+    var app = builder.Build();
+    ```
+
+    With:
+
+
+    ```csharp
+    using OpenTelemetry;
+    using OpenTelemetry.Logs;
+    using OpenTelemetry.Resources;
+    using OpenTelemetry.Exporter;
+    var builder = WebApplication.CreateBuilder(args);
+    const string serviceName = "roll-dice";
+    const string logzioEndpoint = "https://otlp-listener.logz.io/v1/logs";
+    const string logzioToken = "";
+    builder.Logging.AddOpenTelemetry(options =>
+    {
+        options
+            .SetResourceBuilder(
+                ResourceBuilder.CreateDefault()
+                    .AddService(serviceName))
+            .AddOtlpExporter(otlpOptions =>
+            {
+                otlpOptions.Endpoint = new Uri(logzioEndpoint);
+                otlpOptions.Headers = $"Authorization=Bearer {logzioToken}, user-agent=logzio-dotnet-logs";
+                otlpOptions.Protocol = OtlpExportProtocol.HttpProtobuf;
+            });
+    });
+    var app = builder.Build();
+    ```
+
+    {@include: ../../_include/log-shipping/log-shipping-token.md}
+    Update the `listener.logz.io` part of `https://otlp-listener.logz.io/v1/logs` with the URL for [your hosting region](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region).
+

3. Run your **application** once again:

-    ```
-    dotnet run
-    ```
-    Note the output from the dotnet run.
+    ```bash
+    dotnet run
+    ```

4. From another terminal, send a request using curl:

-    ```
-    curl localhost:8080/rolldice
-    ```
+    ```bash
+    curl localhost:8080/rolldice
+    ```
+
5. After about 30 sec, stop the server process.

-At this point, you should see log output from the server and client on your Logzio account.
+At this point, you should see log output from the server and client on your Logz.io account.
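Whichever SDK you use, it can help to sanity-check your token and regional endpoint before instrumenting the app, by POSTing one hand-built log record to the listener. A minimal Python sketch (the payload follows the standard OTLP/HTTP JSON encoding; the token value and `manual-check` scope name are placeholders to replace):

```python
import json
import time
import urllib.request

# Assumptions: substitute your own log-shipping token and adjust the
# host for your hosting region before running.
LOGZIO_TOKEN = "YOUR-LOG-SHIPPING-TOKEN"
ENDPOINT = "https://otlp-listener.logz.io/v1/logs"


def build_otlp_log_payload(service_name, message):
    """Build a single log record in the OTLP/HTTP JSON encoding."""
    return {
        "resourceLogs": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": service_name}}
            ]},
            "scopeLogs": [{
                "scope": {"name": "manual-check"},
                "logRecords": [{
                    "timeUnixNano": str(time.time_ns()),
                    "severityText": "INFO",
                    "body": {"stringValue": message},
                }],
            }],
        }]
    }


def send(payload):
    """POST the payload to the listener; a 2xx status means the token works."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {LOGZIO_TOKEN}",
        },
    )
    with urllib.request.urlopen(req) as resp:  # network call
        return resp.status


if __name__ == "__main__":
    print(send(build_otlp_log_payload("roll-dice", "sanity check")))
```

If the request succeeds, the record should appear in your Logz.io account shortly, which confirms the token and endpoint independently of any SDK configuration.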
diff --git a/docs/shipping/Code/go.md b/docs/shipping/Code/go.md
index 0eb27d35..e27dd84b 100644
--- a/docs/shipping/Code/go.md
+++ b/docs/shipping/Code/go.md
@@ -21,6 +21,12 @@ If your code is running inside Kubernetes the best practice will be to use our [

## Logs

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+
+
:::note
[Project's GitHub repo](https://github.com/logzio/logzio-go/)
:::

@@ -383,6 +389,240 @@ Install the pre-built dashboard to enhance the observability of your metrics.

{@include: ../../_include/metric-shipping/generic-dashboard.html}

+
+
+
+### Prerequisites
+
+Ensure that you have the following installed locally:
+
+* Go 1.21 or newer
+
+### Example Application
+
+The following example uses a basic net/http application in Go. This guide will help you set up the environment, create the application, and configure it to send logs to Logz.io using OpenTelemetry.
+
+
+
+### Create and launch an HTTP Server
+
+1. Create a new directory for your Go project and initialize the Go module:
+
+    ```bash
+    mkdir otel-getting-started
+    cd otel-getting-started
+    go mod init otel-getting-started
+    ```
+
+2. Confirm that your Go toolchain meets the 1.21 prerequisite:
+
+    ```bash
+    go version
+    ```
+
+3. 
Create a file named `main.go` and add the following code to set up a simple HTTP server:
+
+    ```go
+    package main
+
+    import (
+        "io"
+        "log/slog"
+        "math/rand"
+        "net/http"
+        "strconv"
+        "strings"
+    )
+
+    var logger = slog.Default()
+
+    func rolldice(w http.ResponseWriter, r *http.Request) {
+        ctx := r.Context()
+
+        // Extract the player's name from the request path, e.g. /rolldice/alice
+        path := r.URL.Path
+        segments := strings.Split(path, "/")
+        playerName := "Anonymous" // Default name if not specified
+
+        if len(segments) > 2 && segments[2] != "" {
+            playerName = segments[2]
+        }
+
+        roll := 1 + rand.Intn(6)
+
+        if playerName == "Anonymous" {
+            logger.InfoContext(ctx, "Anonymous player is rolling the dice", "result", roll)
+        } else {
+            logger.InfoContext(ctx, playerName+" is rolling the dice", "result", roll)
+        }
+
+        resp := strconv.Itoa(roll) + "\n"
+        if _, err := io.WriteString(w, resp); err != nil {
+            logger.ErrorContext(ctx, "Write failed", "error", err)
+        }
+    }
+
+    func main() {
+        // Register both the bare path and the /rolldice/{player} form.
+        http.HandleFunc("/rolldice", rolldice)
+        http.HandleFunc("/rolldice/", rolldice)
+        http.ListenAndServe(":8080", nil)
+    }
+    ```
+
+4. Run the application:
+
+    ```bash
+    go run main.go
+    ```
+
+Open http://localhost:8080/rolldice in your web browser to ensure it is working.
+
+
+### Instrumentation
+
+Next, we'll configure the OpenTelemetry logging exporter to send logs to Logz.io via the OTLP listener.
+
+This configuration is designed to send logs to your Logz.io account via the OpenTelemetry Protocol (OTLP) listener. You need to specify your Logz.io token and configure the listener endpoint to match the correct region. By default, the endpoint is `https://otlp-listener.logz.io/v1/logs`, but it should be adjusted based on your region.
You can find more details on the regional configurations in the [Hosting Regions Documentation](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions).
+
+
+
+1. Install OpenTelemetry dependencies:
+
+    ```bash
+    go get go.opentelemetry.io/otel
+    go get go.opentelemetry.io/otel/log
+    go get go.opentelemetry.io/otel/sdk/log
+    go get go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp
+    go get go.opentelemetry.io/otel/exporters/stdout/stdoutlog
+    ```
+
+2. Create a new file named `otel.go` and add the following code to set up OpenTelemetry logging:
+
+
+    ```go
+    // Copyright The OpenTelemetry Authors
+    // SPDX-License-Identifier: Apache-2.0
+
+    package main
+
+    import (
+        "context"
+        "errors"
+        "fmt"
+
+        "go.opentelemetry.io/otel"
+        "go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
+        "go.opentelemetry.io/otel/exporters/stdout/stdoutlog"
+        "go.opentelemetry.io/otel/log/global"
+        "go.opentelemetry.io/otel/propagation"
+        "go.opentelemetry.io/otel/sdk/log"
+    )
+
+    // setupOTelSDK bootstraps the OpenTelemetry pipeline.
+    // If it does not return an error, make sure to call shutdown for proper cleanup.
+    func setupOTelSDK(ctx context.Context) (shutdown func(context.Context) error, err error) {
+        var shutdownFuncs []func(context.Context) error
+
+        // shutdown calls cleanup functions registered via shutdownFuncs.
+        // The errors from the calls are joined.
+        // Each registered cleanup will be invoked once.
+        shutdown = func(ctx context.Context) error {
+            var err error
+            for _, fn := range shutdownFuncs {
+                err = errors.Join(err, fn(ctx))
+            }
+            shutdownFuncs = nil
+            return err
+        }
+
+        // handleErr calls shutdown for cleanup and makes sure that all errors are returned.
+        handleErr := func(inErr error) {
+            err = errors.Join(inErr, shutdown(ctx))
+        }
+
+        // Set up propagator.
+        prop := newPropagator()
+        otel.SetTextMapPropagator(prop)
+
+        // Set up logger provider.
+        loggerProvider, err := newLoggerProvider()
+        if err != nil {
+            handleErr(err)
+            return
+        }
+        shutdownFuncs = append(shutdownFuncs, loggerProvider.Shutdown)
+        global.SetLoggerProvider(loggerProvider)
+
+        return
+    }
+
+    func newPropagator() propagation.TextMapPropagator {
+        return propagation.NewCompositeTextMapPropagator(
+            propagation.TraceContext{},
+            propagation.Baggage{},
+        )
+    }
+
+    func newLoggerProvider() (*log.LoggerProvider, error) {
+        // Create stdout log exporter
+        stdoutExporter, err := stdoutlog.New(stdoutlog.WithPrettyPrint())
+        if err != nil {
+            return nil, fmt.Errorf("failed to create stdout exporter: %w", err)
+        }
+
+        // Create OTLP HTTP log exporter for Logz.io
+        httpExporter, err := otlploghttp.New(context.Background(),
+            otlploghttp.WithEndpoint("otlp-listener.logz.io"),
+            otlploghttp.WithHeaders(map[string]string{
+                "Authorization": "Bearer ",
+            }),
+            otlploghttp.WithURLPath("/v1/logs"),
+        )
+        if err != nil {
+            return nil, fmt.Errorf("failed to create OTLP HTTP exporter: %w", err)
+        }
+
+        // Create a logger provider with both exporters
+        loggerProvider := log.NewLoggerProvider(
+            log.WithProcessor(log.NewBatchProcessor(stdoutExporter)), // For stdout
+            log.WithProcessor(log.NewBatchProcessor(httpExporter)),   // For HTTP export
+        )
+
+        return loggerProvider, nil
+    }
+    ```
+
+
+    {@include: ../../_include/log-shipping/log-shipping-token.md}
+    Update the `listener.logz.io` part of `https://otlp-listener.logz.io/v1/logs` with the URL for [your hosting region](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region).
+
+
+3. Run your **application** once again:
+
+    ```bash
+    go run .
+    ```
+
+4. From another terminal, send a request using curl:
+
+    ```bash
+    curl localhost:8080/rolldice
+    ```
+5. After about 30 sec, stop the server process.
+
+At this point, you should see log output from the server and client on your Logz.io account.
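The `setupOTelSDK` helper above registers each component's shutdown hook and joins their errors rather than aborting at the first failure, so every exporter still gets a chance to flush. The same pattern in a short, illustrative Python sketch (the names here are hypothetical, not part of any SDK):

```python
class ShutdownRegistry:
    """Collects cleanup callbacks and runs each exactly once on shutdown,
    gathering every error instead of stopping at the first failure."""

    def __init__(self):
        self._funcs = []

    def register(self, fn):
        self._funcs.append(fn)

    def shutdown(self):
        errors = []
        for fn in self._funcs:
            try:
                fn()
            except Exception as exc:  # keep going so later hooks still run
                errors.append(exc)
        self._funcs = []  # a second shutdown call is a no-op
        return errors


def demo():
    calls = []
    reg = ShutdownRegistry()

    def flaky():
        raise RuntimeError("exporter offline")

    reg.register(lambda: calls.append("flush stdout exporter"))
    reg.register(flaky)
    reg.register(lambda: calls.append("flush OTLP exporter"))

    errors = reg.shutdown()
    # Both healthy hooks ran even though the middle one failed.
    return calls, [str(e) for e in errors]
```

This is why one unreachable endpoint does not block the rest of the cleanup: each registered exporter is flushed, and all failures are reported together at the end.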
+
+
+
+
+
## Traces

diff --git a/docs/shipping/Code/node-js.md b/docs/shipping/Code/node-js.md
index 613d1d61..8a138b90 100644
--- a/docs/shipping/Code/node-js.md
+++ b/docs/shipping/Code/node-js.md
@@ -360,8 +360,10 @@ logger.log(obj);
```

+
+
## Metrics

These examples use the [OpenTelemetry JS SDK](https://github.com/open-telemetry/opentelemetry-js) and are based on the [OpenTelemetry exporter collector proto](https://github.com/open-telemetry/opentelemetry-js/tree/main/packages/opentelemetry-exporter-collector-proto).

diff --git a/docs/shipping/Code/python.md b/docs/shipping/Code/python.md
index c077f8f8..e0edbccb 100644
--- a/docs/shipping/Code/python.md
+++ b/docs/shipping/Code/python.md
@@ -17,6 +17,12 @@ drop_filter: []

## Logs

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+
+
+
:::note
[Project's GitHub repo](https://github.com/logzio/logzio-python-handler/)
:::

@@ -311,13 +317,204 @@ logger.addFilter(TruncationLoggerFilter())

The default limit is 32,700, but you can adjust this value as required.

+
+
+
+### Prerequisites
+
+Ensure that you have the following installed locally:
+
+* Python 3.7 or newer
+* pip (Python package installer)
+
+### Example Application
+
+The following example uses a basic Flask application.
+
+
+### Create and launch an HTTP Server
+
+1. Set up an environment in a new directory called `otel-getting-started`:
+
+    ```bash
+    mkdir otel-getting-started
+    cd otel-getting-started
+    ```
+
+2. Create and activate a virtual environment:
+
+    ```bash
+    python3 -m venv venv
+    source venv/bin/activate
+    ```
+
+3. Install Flask and OpenTelemetry dependencies:
+
+    ```bash
+    pip install flask opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
+    ```
+
+4. 
Create a Flask application in a file named `app.py` and add the following code:
+
+    ```python
+    from flask import Flask
+    import random
+    import logging
+
+    # Basic Flask application setup
+    app = Flask(__name__)
+
+    # Set up basic logging to console
+    logging.basicConfig(level=logging.INFO)
+    logger = logging.getLogger("app")
+
+    @app.route("/rolldice", methods=["GET"])
+    @app.route("/rolldice/<player>", methods=["GET"])
+    def handle_roll_dice(player=None):
+        result = roll_dice()
+
+        if player:
+            logger.info(f"{player} is rolling the dice: {result}")
+        else:
+            logger.info(f"Anonymous player is rolling the dice: {result}")
+
+        return str(result)
+
+    def roll_dice():
+        return random.randint(1, 6)
+
+    if __name__ == "__main__":
+        app.run(host="0.0.0.0", port=8080)
+    ```
+
+
+5. Run the application:
+
+    ```bash
+    python app.py
+    ```
+
+Open http://localhost:8080/rolldice in your web browser to ensure it is working.
+
+
+### Instrumentation
+
+Next, we'll configure the OpenTelemetry logging exporter to send logs to Logz.io via the OTLP listener.
+
+This configuration is designed to send logs to your Logz.io account via the OpenTelemetry Protocol (OTLP) listener. You need to specify your Logz.io token and configure the listener endpoint to match the correct region. By default, the endpoint is `https://otlp-listener.logz.io/v1/logs`, but it should be adjusted based on your region. You can find more details on the regional configurations in the [Hosting Regions Documentation](https://docs.logz.io/docs/user-guide/admin/hosting-regions/account-region/#available-regions).
+
+:::note
+Ensure that you include the `user-agent` header in the format: `"user-agent=logzio-python-logs-otlp"`.
+:::
+
+1. Install OpenTelemetry dependencies:
+
+    ```bash
+    pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp
+    ```
+
+2. 
Update the Flask application to include OpenTelemetry:
+
+    Modify the existing `app.py` file to include OpenTelemetry logging:
+
+    ```python
+    from flask import Flask
+    import random
+    import logging
+
+    from opentelemetry._logs import set_logger_provider
+    from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
+    from opentelemetry.sdk.resources import Resource
+    from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
+    from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
+
+    # Configuration
+    service_name = "roll-dice"
+    logzio_endpoint = "https://otlp-listener.logz.io/v1/logs"  # Update this to match your region if needed
+    logzio_token = "<>"
+
+    # Set up OpenTelemetry resources
+    resource = Resource.create({"service.name": service_name})
+
+    # Set up Logger Provider and OTLP Log Exporter (HTTP)
+    logger_provider = LoggerProvider(resource=resource)
+    set_logger_provider(logger_provider)
+    log_exporter = OTLPLogExporter(
+        endpoint=logzio_endpoint,
+        headers={
+            "Authorization": f"Bearer {logzio_token}",
+            "user-agent": "logzio-python-logs-otlp"
+        }
+    )
+    logger_provider.add_log_record_processor(BatchLogRecordProcessor(log_exporter))
+
+    # Set up a specific logger for the application
+    logger = logging.getLogger("app")
+    logger.setLevel(logging.INFO)
+
+    # Attach OTLP handler to the specific logger
+    otlp_handler = LoggingHandler(logger_provider=logger_provider)
+    logger.addHandler(otlp_handler)
+
+    # Attach a StreamHandler to log to the console
+    console_handler = logging.StreamHandler()
+    console_handler.setLevel(logging.INFO)
+    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    console_handler.setFormatter(formatter)
+    logger.addHandler(console_handler)
+
+    # Flask application setup
+    app = Flask(__name__)
+
+    @app.route("/rolldice", methods=["GET"])
+    @app.route("/rolldice/<player>", methods=["GET"])
+    def handle_roll_dice(player=None):
+        result = roll_dice()
+
+        if player:
logger.info(f"{player} is rolling the dice: {result}") + else: + logger.info(f"Anonymous player is rolling the dice: {result}") + + return str(result) + + def roll_dice(): + return random.randint(1, 6) + + if __name__ == "__main__": + app.run(host="0.0.0.0", port=8080) + + ``` + + {@include: ../../_include/log-shipping/log-shipping-token.md} + + +3. Run your **application** once again: + + ```bash + python app.py + ``` + +4. From another terminal, send a request using curl: + + ```bash + curl localhost:8080/rolldice + ``` +5. After about 30 sec, stop the server process. + +At this point, you should see log output from the server and client on your Logz.io account. + + + + ## Metrics Send custom metrics to Logz.io from your Python application. This example uses [OpenTelemetry Python SDK](https://github.com/open-telemetry/opentelemetry-python-contrib) and the [OpenTelemetry remote write exporter](https://pypi.org/project/opentelemetry-exporter-prometheus-remote-write/). -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; + diff --git a/docs/user-guide/admin/give-aws-access-with-iam-roles.md b/docs/user-guide/admin/give-aws-access-with-iam-roles.md index c6cc691c..f058b78f 100644 --- a/docs/user-guide/admin/give-aws-access-with-iam-roles.md +++ b/docs/user-guide/admin/give-aws-access-with-iam-roles.md @@ -60,27 +60,29 @@ To do this, add the following to your IAM policy: Note that the ListBucket permission is set to the entire bucket and the GetObject permission ends with a /* suffix, so we can get files in subdirectories. ::: -### Create a Logz.io-AWS connector +### Create a Logz.io-AWS Connector for Archive Setup -In your Logz.io app, go to the **Integration hub** and select the relevant AWS resource. +1. In your Logz.io app, go to the **Integration hub** and select the relevant AWS resource. -Inside the integration, click **+ Add a bucket** and select the option to **Authenticate with a role** +2. 
Inside the integration, click **+ Add a bucket** and select the option to **Authenticate with a role**.

-![Connect Logz.io to an AWS resource](https://dytvr9ot2sszz.cloudfront.net/logz-docs/log-shipping/s3-bucket-id-dec.png)
+3. Copy and paste the **Account ID** and the **External ID** into your text editor. Keep this information available so you can use it in AWS.

-Copy and paste the **Account ID** and the **External ID** in your text editor.
+4. Fill in the form to create a new connector:
+   - Enter the **S3 bucket name**.
+   - Enter the **Prefix** where your logs are stored, if applicable.

-Fill in the form to create a new connector.
+5. Click **Get the role policy**.
+   - Review the role policy to confirm the required permissions.
+   - Paste the policy into your text editor.

-Enter the **S3 bucket name** and, if needed,
-the **Prefix** where your logs are stored.
+6. Follow the role creation process using the information from the role policy.

-Click **Get the role policy**.
-You can review the role policy to confirm the permissions that will be needed.
-Paste the policy in your text editor.
+7. Once the role is created, paste the resulting **Role ARN** into the Archive setup in Logz.io.
+

### Create the policy in AWS

Navigate to [IAM policies](https://us-east-1.console.aws.amazon.com/iam/home#/policies) and click **Create policy**.

diff --git a/docs/user-guide/app360/service-list.md b/docs/user-guide/app360/service-list.md
index f6354393..ba094ff4 100644
--- a/docs/user-guide/app360/service-list.md
+++ b/docs/user-guide/app360/service-list.md
@@ -63,17 +63,17 @@ Clicking on one of the services or clicking on drill down opens a dashboard with

You can change the time frame and add additional filters, including comparing the data to a previous period or choose an environment, nodes, and pods. Clicking the refresh button will manually update the data.
-![service deeper](https://dytvr9ot2sszz.cloudfront.net/logz-docs/services/services-service-drilldown-mar18.png) +![service deeper](https://dytvr9ot2sszz.cloudfront.net/logz-docs/services/services-service-drilldown-aug26.png) Hovering over the graphs provides additional info for the time point you've chosen: * The **Request rate** graph shows the number of requests made per minute * The **Latency** graph provides a milliseconds count of how long it takes for data to travel in your environment -* The **Errors** graph analyzes the percentage of errors that occurred -* The **HTTP status code** graph measures the distribution of various HTTP status codes +* The **Error Ratio** graph analyzes the percentage of errors that occurred +* The **Status code** graph measures the distribution of various HTTP status codes -![graphs](https://dytvr9ot2sszz.cloudfront.net/logz-docs/services/service-drilldown-graphs-mar27.png) + ### Operations overview @@ -87,7 +87,7 @@ This table includes all of the operations running inside the chosen service with Use the search bar to find a specific operation or the arrows at the bottom of the table to navigate the operations. -![operations view](https://dytvr9ot2sszz.cloudfront.net/logz-docs/services/astronomy-operations-table.png) +![operations view](https://dytvr9ot2sszz.cloudfront.net/logz-docs/services/astronomy-operations-table-aug26.png) ### (Single) Operation overview @@ -174,6 +174,12 @@ Once your anomaly detector is up and running, you'll see an indicator in the lis ![no anomaly](https://dytvr9ot2sszz.cloudfront.net/logz-docs/services/no-anomaly.png) +## AI Assistant + +Click the **AI Assistant** button to activate the [Observability IQ Assistant](https://docs.logz.io/docs/user-guide/observability/assistantiq), an AI-powered, chat-based interface that lets you engage in a dynamic conversation with your data. 
Use one of the pre-configured prompts or type your own question to get real-time insights about your metrics, anomalies, trends, and the overall health of your environment. + + +![AI App360](https://dytvr9ot2sszz.cloudfront.net/logz-docs/services/aikapp360.gif) diff --git a/docs/user-guide/explore/new-explore.md b/docs/user-guide/explore/new-explore.md index e02080b1..f225c625 100644 --- a/docs/user-guide/explore/new-explore.md +++ b/docs/user-guide/explore/new-explore.md @@ -9,7 +9,7 @@ slug: /user-guide/new-explore/ Explore provides a unified dashboard for monitoring your data, offering a quick and efficient way to identify and debug issues. Designed for investigating and analyzing large data volumes, Explore allows you to use filters, queries, and searches to pinpoint and delve into problems effortlessly. -![Explore dashboard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/explore-dashboard-aug6.png) +![Explore dashboard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/explore-aug21.png) ### Simple Search / Advanced (Lucene) @@ -20,11 +20,13 @@ Click on the dropdown menu to switch between Simple Search and Advanced Search, Build your query by selecting fields, parameters, and conditions. To add a value that doesn't appear in your logs, type its name and click on the + sign. You can also add free text to your search, which will convert it into a Lucene query. -![Smart Search gif](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/simple-search-aug6.gif) + * **Advanced (Lucene)**: Use advanced text querying for log searches. You can search for free text by typing the text string you want to find; for example, error will return all words containing this string, and using quotation marks, "error", will return only the specific word you're searching for. 
-![Lucene Search gif](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/advanced-search-aug6.gif) + + +![Choose Search Method](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/choose-search-aug21.png) ### Filters @@ -36,7 +38,7 @@ All visible fields appear on the left side, including exceptions (if any) and sp You can pin up to three custom fields by hovering over them and clicking the star icon. -explore-fields +explore-fields ### Graph View @@ -45,7 +47,7 @@ Visualize trends over time and group data based on your investigations. Hover ov You can enlarge or reduce the size of the graph by clicking the arrow button at the top right. -graph-view +graph-view ### Exceptions @@ -64,16 +66,16 @@ To select a custom time frame, click the time element and choose the period rele ### Observability IQ Assistant -Click the ✨ Observability IQ button to activate Observability IQ Assistant, an AI-powered, chat-based interface that lets you engage in a dynamic conversation with your data. Use one of the pre-configured prompts or type your own question to get real-time insights about your metrics, anomalies, trends, and the overall health of your environment. +Click the **AI Assistant** button to activate [Observability IQ Assistant](/docs/user-guide/observability/assistantiq), an AI-powered, chat-based interface that lets you engage in a dynamic conversation with your data. Use one of the pre-configured prompts or type your own question to get real-time insights about your metrics, anomalies, trends, and the overall health of your environment. -![Observability IQ Assistant](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/iq-aug6.gif) +![Observability IQ Assistant](https://dytvr9ot2sszz.cloudfront.net/logz-docs/explore-dashboard/assistant-aug21.gif) ### Group By The default graph view is set to group by all fields, and you can choose specific fields to focus on from the dropdown menu. 
-smart-search-groupby +smart-search-groupby @@ -82,7 +84,7 @@ The default graph view is set to group by all fields, and you can choose specifi Click the 1L button to change the table view. Selecting **1 Line** provides a compact view, **2 Lines** displays two lines from the logs, and **Expanded** offers a full log view, presenting all relevant data for easier viewing. -expand-view + ### Create Alert, Copy Link, Export CSV @@ -92,7 +94,7 @@ The ⋮ menu offers additional options for Explore, including: * **Copy Link**: Generates a URL with your current view, which you can share with team members. You need to be logged in to Logz.io to view it * **Export CSV**: Exports up to 50,000 logs to a CSV file, including the timestamp and log message -side-menu +side-menu ### Logs Table @@ -102,4 +104,4 @@ Expand each log to view additional details, see the log in JSON format, and add In the top right corner, choose to view a single log in a new window, view surrounding logs for context, and share the URL of the specific log you're viewing. -smart-search \ No newline at end of file +smart-search \ No newline at end of file diff --git a/docs/user-guide/integrations/notification-endpoints/ms-teams.md b/docs/user-guide/integrations/notification-endpoints/ms-teams.md index bf8c9b21..1291d3df 100644 --- a/docs/user-guide/integrations/notification-endpoints/ms-teams.md +++ b/docs/user-guide/integrations/notification-endpoints/ms-teams.md @@ -55,25 +55,40 @@ To use this example in your own endpoint, copy the payload. 
Note that double-braces (`{{ }}`) indicate placeholders that are replaced with the alert's details when the notification is sent.

```
{
-    "type": "message",
-    "attachments": [
-        {
-            "contentType": "application/vnd.microsoft.card.adaptive",
-            "contentUrl": null,
-            "content": {
-                "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
-                "type": "AdaptiveCard",
-                "version": "1.2",
-                "body": [
-                    {
-                        "type": "TextBlock",
-                        "text": "Submitted response:"
-                    }
-                ]
-            }
-        }
-    ]
+    "type": "message",
+    "attachments": [
+        {
+            "contentType": "application/vnd.microsoft.card.adaptive",
+            "contentUrl": null,
+            "content": {
+                "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
+                "type": "AdaptiveCard",
+                "version": "1.2",
+                "body": [
+                    {
+                        "type": "TextBlock",
+                        "text": "title: {{alert_severity}}: {{alert_title}}"
+                    },
+                    {
+                        "type": "TextBlock",
+                        "text": "summary: {{alert_description}}"
+                    },
+                    {
+                        "type": "TextBlock",
+                        "text": "text: {{alert_samples}}"
+                    }
+                ],
+                "actions": [
+                    {
+                        "type": "Action.OpenUrl",
+                        "title": "View in OpenSearch Dashboards",
+                        "url": "{{alert_app_url}}#/view-triggered-alert?from={{alert_timeframe_start_epoch_millis}}&to={{alert_timeframe_end_epoch_millis}}&definitionId={{alert_definition_id}}&switchToAccountId={{account_id}}"
+                    }
+                ]
+            }
+        }
+    ]
}
```

diff --git a/docs/user-guide/k8s-360/kubernetes-360-pre.md b/docs/user-guide/k8s-360/kubernetes-360-pre.md
index 0644b41d..f8d415f2 100644
--- a/docs/user-guide/k8s-360/kubernetes-360-pre.md
+++ b/docs/user-guide/k8s-360/kubernetes-360-pre.md
@@ -11,9 +11,7 @@ slug: /user-guide/k8s-360/kubernetes-360-pre

Kubernetes 360 application provides an overview of your Kubernetes data, providing a quick overview of your current deployments, pods, and more useful information regarding your environment.
-![Main dashboard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/k360-main.png) - - +![Main dashboard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/k360-aug22.png) diff --git a/docs/user-guide/k8s-360/overview.md b/docs/user-guide/k8s-360/overview.md index 0c493cfe..7d375a30 100644 --- a/docs/user-guide/k8s-360/overview.md +++ b/docs/user-guide/k8s-360/overview.md @@ -14,7 +14,7 @@ Kubernetes 360 lets R&D and engineering teams monitor and troubleshoot applicati The platform utilizes Kubernetes' numerous advantages for R&D and dev teams, allowing you to monitor application SLOs in a simple, efficient, and actionable manner. Kubernetes 360 offers flexibility and visibility while providing service discovery, balancing load, and allowing developer autonomy and business agility. -![Main dashboard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/k360-jul-overview-.png) +![Main dashboard](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/k360-aug22.png) To activate your Kubernetes 360 dashboard, connect your Kubernetes data quickly and easily through Logz.io's **[Telemetry Collector](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup)**. @@ -25,21 +25,21 @@ Once everything is up and running, you can use your Kubernetes 360 application. -## Kubernetes 360 overview + +![deployments card](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/focus-on-nodes.png) -You can dive deeper into each card by clicking on it and opening the **[Quick view](#quick-view)** menu. +You can dive deeper into each card by clicking on it and opening the **[Quick view](#quick-view)** menu.--> -## Customize your application +## Kubernetes 360 overview -You can change and adjust Kubernetes 360 application to match your monitoring and troubleshooting needs. To help you get started, we'll break down the different options, how you can access them, and how they can help you and your team. 
+You can customize Kubernetes 360 to suit your monitoring and troubleshooting needs. To help you get started, we'll break down the different options, how you can access them, and how they can help you and your team.

-![Dashboard breakdown](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/k360-jul-overview-numbers-.png)
+

@@ -48,13 +48,22 @@ You can change and adjust Kubernetes 360 application to match your monitoring an

 First, choose the environment you'd like to view. Environments with many users, teams, or projects use a namespace to bundle relevant clusters and nodes. This filter allows you to focus on all elements inside a specific namespace.

-Next, choose whether to view the environment's clusters, nodes, or both. Each dropdown menu includes all clusters and nodes in the chosen Kubernetes account, and you can use the search bar to find and add nodes to your view easily.
+Next, choose whether to view the environment's clusters, namespaces, or deployments. Each dropdown menu includes all the clusters, namespaces, and deployments in the chosen Kubernetes account, and you can use the search bar to easily find and add items to your view.
+
+![Filters](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/k360-filters-aug22.png)
+
+

Observability IQ Assistant

+
+Click the **AI Assistant** button to activate [Observability IQ Assistant](/docs/user-guide/observability/assistantiq), an AI-powered, chat-based interface that lets you engage in a dynamic conversation with your data. Use one of the pre-configured prompts or type your own question to get real-time insights about your metrics, anomalies, trends, and the overall health of your environment.
+
+![IQ](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/iq-aug22.png)

View

You can switch your view to filter by the following resources: **Node**, **Pod**, **Deployment**, **Daemonset**, **Statefulset**, or **Job**.

-![switch view](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/filter-view-jul-.png)
+![switch view](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/k360-filter-aug22.png)

 In addition, you can switch between the **Map** and **List** views, according to your monitoring needs. Note that the Pod view can only be seen as a list.

@@ -91,21 +100,21 @@ By default, Kubernetes 360 provides an overview of your current environment. Use

 Clicking on one of the cards or rows opens the quick view menu. This menu provides additional information about each element, allowing you to investigate and understand what’s happening inside your Kubernetes environment.

-For each available view - Deployment, Pod, Node, Daemonset, Statefulset, and Job - you can access the quick view to gain more information, such as:
+For each available view - Deployment, Pod, Node, Daemonset, Statefulset, and Job - you can access the quick view to gain more information, such as:

 * **Cluster** - The cluster associated with the chosen view.
 * **Namespace** - The unique namespace.
 * **Status** - Indicates whether that condition is applicable, with possible values **True**, **False**, or **Unknown**.
-* **CPU** - Amount of CPU used. If the CPU is not capped, you'll see an indicator stating **no limit**.
+* **CPU** - Amount of CPU used. You'll see an indicator stating **no limit** if the CPU is not capped.
 * **Memory** - An average calculation of how much memory is in use.
 * **Uptime** - The duration of how long the chosen view has been running.
 * **Security risks** - The number of potential security risks.

-And more.
+And of course, you can activate Observability IQ Assistant to open the AI-powered, chat-based interface and query your data further.
-![Pod upper menu Overview](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/pod-upper-overview-sep.png)
+![Pod upper menu Overview](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/k360-inner-aug22.png)

-Each view lets you dive deeper into the data by using the links at the top of the quick view. Click on **See Metrics**, **See Traces**, or **See Logs** to navigate directly to the relevant view.
+Click on **See Metrics**, **See Traces**, or **See Logs** to navigate to each dashboard's relevant view.

 ### Quick view tabs

@@ -113,7 +122,7 @@ Each view lets you dive deeper into the data by using the links at the top of th

 To enrich your existing and newly sent data, use the [Telemetry Collector](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup) to configure and send data quickly.
 :::

-Each quick view includes several tabs that provide additional information you can act on. For each tab, you can change the time frame chosen by clicking on the date bar at the top.
+Each quick view includes several tabs that provide additional information you can act on. You can choose the time frame for each tab by clicking on the date bar at the top.

Pods tab

@@ -123,13 +132,13 @@ The Pods tab provides a list of all pods related to this node. The table include

Logs tab

-In the Logs tab you can view the time, log level, and message for each log line. You can search for specific logs using the search bar, which supports free text and Lucene queries.
+The Logs tab shows each log line's time, log level, and message. The search bar supports free text and Lucene queries so that you can search for specific logs.

 ![Pod menu Overview](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/pod-quick-view-sep.png)
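For example, these are hypothetical Lucene queries you might type into the search bar; the field names are illustrative and depend on the fields your logs actually carry:

```
log_level:ERROR
message:"connection refused" AND kubernetes.pod_name:checkout*
```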

Metrics tab

-The **Metrics** tab presents useful data in graph form. These graphs provides a view of Replicas Over Time, CPU Usage (cores) per pod, Memory Usage Per Pod, CPU Usage, Requests and Limits (Cores), Memory Usage, Requests and Limits, and Received & Transmitted Bytes.
+The **Metrics** tab presents useful data in graph form. These graphs provide a view of Replicas Over Time, CPU Usage (cores) per pod, Memory Usage Per Pod, CPU Usage, Requests and Limits (Cores), Memory Usage, Requests and Limits, and Received & Transmitted Bytes.

 ![Stateful menu Overview](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/metrics-quick-view.png)

@@ -139,14 +148,15 @@ The **Traces** tab includes all of the spans in this deployment, including the f

 * Time
 * Trace ID
-* The Service related to the span
-* Which Operation ran
-* The Duration of the run, represented in milliseconds
+* The service related to the span
+* Which operation ran
+* The duration of the run, represented in milliseconds
 * Status code indicating whether a specific HTTP request has been successfully completed

 ![Quick menu Overview](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/deployment-quick-view-sep.png)

-

Trace quickview

+
+

Trace quick view

Click on one of the **Trace ID** items to open the Trace quick view. This view includes additional data such as the Trace ID, group name, a timestamp marking the beginning of the monitored trace, and the originating service.

@@ -158,16 +168,24 @@ The link icon next to each operation and service opens the service overview for

 ![Trace quick view](https://dytvr9ot2sszz.cloudfront.net/logz-docs/k360/trace-view-k8s360-apr24.gif)

+

YAML tab

+
+You can view each node's YAML configuration, allowing easier troubleshooting and configuration verification.
+
+Open the node you want to investigate and click the **YAML** tab. With direct access to YAML files, you can quickly understand and audit the underlying settings of your Kubernetes deployments, ensuring configurations align with operational requirements and best practices.
+
+
-### Investigate through quick view
-
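For reference, the YAML shown in this tab corresponds to the resource definition your cluster itself reports. A hypothetical `kubectl` invocation that returns the same document (run against your cluster, not Logz.io; the resource names are placeholders):

```
# Print a node's full YAML definition (replace the name with one of yours)
kubectl get node worker-node-1 -o yaml

# The same pattern works for deployments, daemonsets, statefulsets, and jobs
kubectl get deployment checkout -n shop -o yaml
```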

See Metrics

+## Investigate through quick view + +

See Metrics

You can easily investigate the different issues you might encounter. Each quick view menu contains the **See Metrics** button, allowing you to view the relevant information in a Grafana application. This can provide a focused overview of the chosen element, allowing you to quickly pinpoint what happened and when it started.

-

See Logs

+

See Logs

Node and pod views include the **See Logs** button, which opens an OpenSearch Dashboards screen with the relevant query to display the log information.

@@ -178,25 +196,25 @@ Click on **Add filter** at the top of the screen. The fields vary according to y

 Next, choose the operator. For example, you can select **exists** to view all related logs.

-

Open Livetail

+

Open Livetail

Node and pod views include the **Open Livetail** button, which opens Logz.io's Live Tail filtered by the selected Kubernetes host. Live Tail shows your logs as they arrive in Logz.io, allowing you to view and troubleshoot in real time.

-

Open Traces

+

Open Traces

The Deployment view includes the **See Traces** button, which opens Jaeger with the relevant data for a deeper dive. Gain a system-wide view of your distributed architecture, detect failed or high-latency requests, and quickly drill into end-to-end call sequences of selected requests of intercommunicating microservices.

 ## Track Deployment Data

-You can enrich your Kubernetes 360 graphs by adding an indication of recent deployments, helping you determine if a deployment has increased response times for end-users, altered your application's memory/CPU footprint, or introduced any other performance-related changes.
+You can enrich your Kubernetes 360 graphs by adding an indication of recent deployments. This will help you determine whether a deployment has increased end-user response times, altered your application's memory/CPU footprint, or introduced any other performance-related changes.

 To enable deployment tracking, run the [**Telemetry Collector**](https://app.logz.io/#/dashboard/integrations/collectors?tags=Quick%20Setup) on your Kubernetes clusters. You can also activate this process **manually** by installing [Logz.io Kubernetes events Helm chart](https://app.logz.io/#/dashboard/integrations/Kubernetes:~:text=user%20guide.-,Send%20your%20deploy%20events%20logs,-This%20integration%20sends).

 Once enabled, the graphs will include a deployment marker, marked by a dotted vertical line.

-You can view additional deployment data by clicking on the line. This data includes the deployment time, the associated service and environment, and a quick link to view the commit in your logs.
+Clicking on the line allows you to view additional deployment data. This data includes the deployment time, the associated service and environment, and a quick link to view the commit in your logs.

 Click **Go to commit** to access and view your own code related to this deployment, allowing you to probe deeper into the relevant data.
@@ -209,25 +227,6 @@ To activate the **Go to Commit** button, go to **your app or service** and add t
-
-
-
-
-
-
-

 ## Additional information

 ### Calculating Log error rate
diff --git a/docs/user-guide/observability/faq.md b/docs/user-guide/observability/faq.md
index bdb69a91..a3296c2a 100644
--- a/docs/user-guide/observability/faq.md
+++ b/docs/user-guide/observability/faq.md
@@ -51,7 +51,7 @@ The model is hosted within the same region in which your Logz.io data is hosted.

 ### Can account admins see my queries and chat history?

-No. Account admins or any other users within your organization cannot view or access any queries or chat history from the Observability IQ Assistant. Logz.io does not retain your query or chat history and is deleted after the session ends.
+No. Account admins or any other users within your organization cannot view or access any queries or chat history from the Observability IQ Assistant.

 ### Do you use my data to train the AI model?