docs: broken link and cleanup
JStickler committed Jun 25, 2024
1 parent 7ca1916 commit 8918d07
Showing 4 changed files with 53 additions and 43 deletions.
18 changes: 5 additions & 13 deletions docs/sources/operations/storage/table-manager/_index.md
@@ -16,6 +16,7 @@ table - also called periodic table - contains the data for a specific time
range.

This design brings two main benefits:

1. **Schema config changes**: each table is bound to a schema config and
version, so that changes can be introduced over time and multiple schema
configs can coexist
@@ -37,7 +38,6 @@ The Table Manager supports the following backends:
- **Chunk store**
  - Filesystem (primarily used for local environments)


Loki does support the following backends for both index and chunk storage, but they are deprecated and will be removed in a future release:

- [Amazon DynamoDB](https://aws.amazon.com/dynamodb)
@@ -52,7 +52,6 @@ For detailed information on configuring the Table Manager, refer to the
[`table_manager`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#table_manager)
section in the Loki configuration document.


## Tables and schema config

A periodic table stores the index or chunk data relative to a specific period
@@ -70,14 +69,13 @@ This allows having multiple non-overlapping schema configs over time, in
order to perform schema version upgrades or change storage settings (including
changing the storage type).

![periodic_tables](./table-manager-periodic-tables.png)
{{< figure alt="periodic tables" align="center" src="./table-manager-periodic-tables.png" >}}

The write path hits the table whose period the log entry timestamp falls into
(usually the last table, except for short periods close to the end of a table
and the beginning of the next one), while the read path hits the tables
containing data for the query time range.


### Schema config example

For example, the following `schema_config` defines two configurations: the first
@@ -107,7 +105,6 @@ schema_config:
period: 168h
```
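
As a general sketch (not the exact example from this page), a two-entry `schema_config` of this kind could look roughly like the following; the dates, stores, and schema versions are illustrative:

```yaml
schema_config:
  configs:
    # First configuration, active from this date onward.
    - from: 2019-01-01
      store: boltdb
      object_store: filesystem
      schema: v9
      index:
        prefix: index_
        period: 168h
    # Second configuration, taking over at a later date with a newer schema.
    - from: 2019-04-15
      store: boltdb
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 168h
```
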
### Table creation
The Table Manager creates new tables slightly ahead of their start period, in
@@ -118,7 +115,6 @@ The `creation_grace_period` property - in the
[`table_manager`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#table_manager)
configuration block - defines how long before a table should be created.
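
For illustration, a minimal sketch of that setting (the `10m` value is just an example, not a recommendation):

```yaml
table_manager:
  # Create the next periodic table 10 minutes before its period starts.
  creation_grace_period: 10m
```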


## Retention

The retention - managed by the Table Manager - is disabled by default, due to
@@ -143,7 +139,7 @@ is deleted, the Table Manager keeps the last tables alive using this formula:
number_of_tables_to_keep = floor(retention_period / table_period) + 1
```
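
For example, with a 168h table period and a retention period of 336h, the Table Manager keeps `floor(336 / 168) + 1 = 3` tables. A minimal sketch of enabling retention (the values are illustrative; the retention period must be a multiple of the table period):

```yaml
table_manager:
  retention_deletes_enabled: true
  # Must be a multiple of the index/chunk table period (168h in this sketch).
  retention_period: 336h
```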

![retention](./table-manager-retention.png)
{{< figure alt="retention" align="center" src="./table-manager-retention.png" >}}

{{% admonition type="note" %}}
It's important to note that - due to the internal implementation - the table
@@ -155,16 +151,16 @@ For detailed information on configuring the retention, refer to the
[Loki Storage Retention]({{< relref "../retention" >}})
documentation.


## Active / inactive tables

A table can be active or inactive.

A table is considered **active** if the current time is within the range:

- Table start period - [`creation_grace_period`](https://grafana.com/docs/loki/<LOKI_VERSION>/configure/#table_manager)
- Table end period + max chunk age (hardcoded to `12h`)
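
For example, assuming a weekly (168h) table covering Monday 00:00 UTC to the following Monday 00:00 UTC and a 10m `creation_grace_period`, that table is active from Sunday 23:50 UTC until 12:00 UTC on the Monday its period ends.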

![active_vs_inactive_tables](./table-manager-active-vs-inactive-tables.png)
{{< figure alt="active_vs_inactive_tables" align="center" src="./table-manager-active-vs-inactive-tables.png" >}}

Currently, the difference between an active and inactive table **only applies
to the DynamoDB storage** settings: capacity mode (on-demand or provisioned),
@@ -177,7 +173,6 @@ read/write capacity units and autoscaling.
| Write capacity unit | `provisioned_write_throughput` | `inactive_write_throughput` |
| Autoscaling | Enabled (if configured) | Always disabled |


## DynamoDB Provisioning

When configuring DynamoDB with the Table Manager, the default [on-demand
@@ -201,21 +196,18 @@ ensure that the primary index key is set to `h` (string) and the sort key is set
to `r` (binary). The "period" attribute in the configuration YAML should be set
to `0`.


## Table Manager deployment mode

The Table Manager can be executed in two ways:

1. Implicitly executed when Loki runs in monolithic mode (single process)
1. Explicitly executed when Loki runs in microservices mode


### Monolithic mode

When Loki runs in [monolithic mode]({{< relref "../../../get-started/deployment-modes" >}}),
the Table Manager is also started as a component of the entire stack.


### Microservices mode

When Loki runs in [microservices mode]({{< relref "../../../get-started/deployment-modes" >}}),
21 changes: 12 additions & 9 deletions docs/sources/send-data/promtail/cloud/ec2/_index.md
@@ -49,7 +49,9 @@ aws ec2 authorize-security-group-ingress --group-id sg-02c489bbdeffdca1d --proto
aws ec2 authorize-security-group-ingress --group-id sg-02c489bbdeffdca1d --protocol tcp --port 3100 --cidr 0.0.0.0/0
```

> You don't need to open those ports to all IPs as shown above; you can use your own IP range.
{{< admonition type="note" >}}
You don't need to open those ports to all IPs as shown above; you can use your own IP range.
{{< /admonition >}}

We're going to create an [Amazon Linux 2][Amazon Linux 2] instance, as it's one of the most popular, but feel free to use the AMI of your choice.

@@ -65,7 +67,9 @@ To make it more interesting later let's tag (`Name=promtail-demo`) our instance:
aws ec2 create-tags --resources i-041b0be05c2d5cfad --tags Key=Name,Value=promtail-demo
```

> Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type—you can quickly identify a specific resource based on the tags that you've assigned to it. You'll see later, Promtail can transform those tags into [Loki labels][labels].
{{< admonition type="note" >}}
Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. This is useful when you have many resources of the same type—you can quickly identify a specific resource based on the tags that you've assigned to it. You'll see later, Promtail can transform those tags into [Loki labels][labels].
{{< /admonition >}}

Finally let's grab the public DNS of our instance:

@@ -150,7 +154,7 @@ Finally the [`relabeling_configs`][relabel] section has three purposes:

2. Choosing where Promtail should find log files to tail; in our example, we want to include all log files that exist in `/var/log` using the glob `/var/log/**.log`. If you need to use multiple globs, you can simply add another job in your `scrape_configs`.

3. Ensuring discovered targets are only for the machine Promtail currently runs on. This is achieved by adding the label `__host__` using the incoming metadata `__meta_ec2_private_dns_name`. If it doesn't match the current `HOSTNAME` environment variable, the target will be dropped.
If `__meta_ec2_private_dns_name` doesn't match your instance's hostname (on an EC2 Windows instance, for example, where it is the IP address rather than the hostname), you can hardcode the hostname at this stage, or check whether any of the instance's tags contains the hostname (`__meta_ec2_tag_<tagkey>: each tag value of the instance`); see the sketch below.
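
A rough sketch of this relabeling (the region, job name, and tag-derived label are illustrative, not the exact configuration from this guide):

```yaml
scrape_configs:
  - job_name: ec2-logs
    ec2_sd_configs:
      - region: us-east-2
    relabel_configs:
      # Turn the EC2 Name tag into a Loki label.
      - source_labels: [__meta_ec2_tag_Name]
        target_label: name
      # Tail every .log file under /var/log.
      - action: replace
        replacement: /var/log/**.log
        target_label: __path__
      # Keep only targets belonging to this machine: __host__ is compared
      # against the HOSTNAME environment variable.
      - source_labels: [__meta_ec2_private_dns_name]
        target_label: __host__
```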

Alright, we should be ready to fire up Promtail. We're going to run it using the flag `--dry-run`: this is perfect for making sure everything is configured correctly, especially when you're still playing around with the configuration. Don't worry, in this mode Promtail won't send any logs and won't remember any file positions.
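
For instance, something like the following (the binary name and config path are assumptions based on the earlier steps):

```bash
./promtail-linux-amd64 -config.file=./ec2-promtail.yaml --dry-run
```
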
@@ -175,7 +179,7 @@ open http://ec2-13-59-62-37.us-east-2.compute.amazonaws.com:3100/

For example, the page below is the service discovery page. It shows all discovered targets, with their respective available labels and, if a target was dropped, the reason why.

![discovery page page][discovery page]
{{< figure alt="Service discovery page" align="center" src="./promtail-ec2-discovery.png" >}}

This page is really useful for understanding which labels are available to forward with the `relabeling` configuration, but also why Promtail is not scraping your target.

@@ -232,7 +236,7 @@ Jul 08 15:48:57 ip-172-31-45-69.us-east-2.compute.internal promtail-linux-amd64[

You can now verify in Grafana that Loki has correctly received your instance logs by using the [LogQL]({{< relref "../../../../query" >}}) query `{zone="us-east-2"}`.

![Grafana Loki logs][ec2 logs]
{{< figure alt="Grafana Loki logs" align="center" src="./promtail-ec2-logs.png" >}}

## Sending systemd logs

@@ -255,7 +259,9 @@ We will edit our previous config (`vi ec2-promtail.yaml`) and add the following

Note that you can use [relabeling][relabeling] to convert systemd labels to match what you want. Finally, make sure that the path of journald logs is correct; it might be different on some systems.
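
For reference, a journal scrape job typically looks something like this sketch (the path, `max_age`, and labels are illustrative):

```yaml
scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h
      path: /var/log/journal
      labels:
        job: systemd-journal
    relabel_configs:
      # Expose the systemd unit as a Loki label.
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
```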

> You can download the final config example from our [GitHub repository][final config].
{{< admonition type="note" >}}
You can download the final config example from our [GitHub repository][final config].
{{< /admonition >}}

That's it. Save the config and you can `reboot` the machine (or simply restart the service with `systemctl restart promtail.service`).

@@ -276,14 +282,11 @@ Let's head back to Grafana and verify that your Promtail logs are available in G
[prometheus scrape config]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
[ec2_sd_config]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config
[role]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles.html
[discovery page]: ./promtail-ec2-discovery.png "Service discovery"
[relabel]: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config
[systemd]: https://www.freedesktop.org/software/systemd/man/systemd.service.html
[logql]: ../../../query
[ec2 logs]: ./promtail-ec2-logs.png "Grafana Loki logs"
[config gist]: https://gist.github.com/cyriltovena/d0881cc717757db951b642be48c01445
[labels]: https://grafana.com/blog/2020/04/21/how-labels-in-loki-can-make-log-queries-faster-and-easier/
[troubleshooting loki]: ../../../getting-started/troubleshooting#troubleshooting-targets
[live tailing]: https://grafana.com/docs/grafana/latest/features/datasources/loki/#live-tailing
[systemd]: ../../../installation/helm#run-promtail-with-systemd-journal-support
[journald]: https://www.freedesktop.org/software/systemd/man/systemd-journald.service.html
29 changes: 18 additions & 11 deletions docs/sources/send-data/promtail/cloud/ecs/_index.md
@@ -36,7 +36,9 @@ aws ecs create-cluster --cluster-name ecs-firelens-cluster

We will also need an [IAM Role to run containers][ecs iam] with, so let's create a new one and authorize [ECS][ECS] to assume this role.

> You might already have this `ecsTaskExecutionRole` role in your AWS account; if that's the case, you can skip this step.
{{< admonition type="note" >}}
You might already have this `ecsTaskExecutionRole` role in your AWS account; if that's the case, you can skip this step.
{{< /admonition >}}

```bash
curl https://raw.githubusercontent.com/grafana/loki/main/docs/sources/send-data/promtail/cloud/ecs/ecs-role.json > ecs-role.json
@@ -81,7 +83,9 @@ Amazon [Firelens][Firelens] is a log router (usually `fluentd` or `fluentbit`) y

In this example we will use [fluentbit][fluentbit] with the [fluentbit output plugin][fluentbit loki] installed, but if you prefer [fluentd][fluentd], make sure to check the [fluentd output plugin][fluentd loki] documentation.

> We recommend using [fluentbit][fluentbit], as it consumes fewer resources than [fluentd][fluentd].
{{< admonition type="note" >}}
We recommend using [fluentbit][fluentbit], as it consumes fewer resources than [fluentd][fluentd].
{{< /admonition >}}

Our [task definition][task] will be made of two containers: the [Firelens][Firelens] log router to send logs to Loki (`log_router`), and a sample application to generate logs (`sample-app`).

@@ -117,7 +121,9 @@ curl https://raw.githubusercontent.com/grafana/loki/main/docs/sources/send-data/

The `log_router` container image is the [Fluent bit Loki docker image][fluentbit loki image], which contains the Loki plugin pre-installed. As you can see, the `firelensConfiguration` type is set to `fluentbit`, and we've also added `options` to enable ECS log metadata. This will be useful when querying your logs with Loki LogQL label matchers.

> The `logConfiguration` is mostly there for debugging the fluent-bit container, but feel free to remove that part when you're done testing and configuring.
{{< admonition type="note" >}}
The `logConfiguration` is mostly there for debugging the fluent-bit container, but feel free to remove that part when you're done testing and configuring.
{{< /admonition >}}

```json
{
@@ -169,7 +175,9 @@ All `options` of the `logConfiguration` will be automatically translated into [f
This `OUTPUT` config will forward logs to [GrafanaCloud][GrafanaCloud] Loki; to learn more about those options, make sure to read the [fluentbit output plugin][fluentbit loki] documentation.
We've kept some interesting and useful labels such as `container_name`, `ecs_task_definition`, `source` and `ecs_cluster`, but you can statically add more via the `Labels` option.
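
As a sketch of what such a generated `[OUTPUT]` stanza could look like (the URL and label keys are illustrative, not the exact values from this guide):

```
[OUTPUT]
    Name       grafana-loki
    Match      *
    Url        https://<userid>:<apikey>@logs-prod-us-central1.grafana.net/loki/api/v1/push
    Labels     {job="firelens"}
    LabelKeys  container_name,ecs_task_definition,source,ecs_cluster
    LineFormat key_value
```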

> If you want to run multiple containers in your task, each of them needs a `logConfiguration` section; this gives you the opportunity to add different labels depending on the container.
{{< admonition type="note" >}}
If you want to run multiple containers in your task, each of them needs a `logConfiguration` section; this gives you the opportunity to add different labels depending on the container.
{{< /admonition >}}

```json
{
@@ -187,7 +195,7 @@ We've kept some interesting and useful labels such as `container_name`, `ecs_tas
}
```

Finally, you need to replace the `executionRoleArn` with the [ARN][arn] of the role we created in the [first section](#Setting-up-the-ECS-cluster).
Finally, you need to replace the `executionRoleArn` with the [ARN][arn] of the role we created in the [first section](#setting-up-the-ecs-cluster).

Once you've finished editing the task definition, we can run the command below to create the task:

@@ -209,15 +217,17 @@ aws ecs create-service --cluster ecs-firelens-cluster \
--network-configuration "awsvpcConfiguration={subnets=[subnet-306ca97d],securityGroups=[sg-02c489bbdeffdca1d],assignPublicIp=ENABLED}"
```

> Make sure public IP assignment (`assignPublicIp`) is enabled; otherwise ECS won't be able to connect to the internet and you won't be able to pull external docker images.
{{< admonition type="note" >}}
Make sure public IP assignment (`assignPublicIp`) is enabled; otherwise ECS won't be able to connect to the internet and you won't be able to pull external docker images.
{{< /admonition >}}

You can now access the ECS console and you should see your task running. Now let's open Grafana and use Explore with the Loki data source to explore our task logs. Enter the query `{job="firelens"}` and you should see our `sample-app` logs showing up, as shown below:

![grafana logs firelens][grafana logs firelens]
{{< figure alt="grafana logs firelens" align="center" src="./ecs-grafana.png" >}}

Using the `Log Labels` dropdown you should be able to discover your workload via the ECS metadata, which is also visible if you expand a log line.

That's it! Make sure to check out LogQL to learn more about Loki's powerful query language.
That's it. Make sure to check out [LogQL][logql] to learn more about Loki's powerful query language.

[create an vpc]: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-subnets-commands-example.html
[ECS]: https://aws.amazon.com/ecs/
@@ -239,7 +249,4 @@ That's it ! Make sure to checkout LogQL to learn more about Loki powerful query
[logql]: https://grafana.com/docs/loki/<LOKI_VERSION>/logql/
[alpine]: https://hub.docker.com/_/alpine
[fluentbit output]: https://fluentbit.io/documentation/0.14/output/
[routing]: https://fluentbit.io/documentation/0.13/getting_started/routing.html
[grafanacloud account]: https://grafana.com/login
[grafana logs firelens]: ./ecs-grafana.png
[logql]: ../../../query