Add topic and app for new otel collector #727

Merged
merged 1 commit into main from mhughes-DENA-1126-new-collector-bits on Dec 12, 2024

Conversation

matthewhughes-uw (Contributor) commented:

The new tail sampler will:

  • Read from the current `otel.otlp_spans` topic
  • Produce to the new `otel.otlp_sampled_spans` topic

Later, tempo will also be configured to read from the new topic instead of the old one, but we'll leave it alone for now.

Ticket: DENA-1126
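
The collector configuration itself is outside this PR, but the topology it describes can be sketched with the upstream Kafka receiver/exporter and `tail_sampling` processor from opentelemetry-collector-contrib. The sampling policy below is a placeholder, not taken from this change; brokers and auth settings are omitted:

```yaml
receivers:
  kafka:
    topic: otel.otlp_spans          # current topic the sampler reads from

processors:
  tail_sampling:
    policies:
      - name: example-probabilistic  # placeholder policy, not from this PR
        type: probabilistic
        probabilistic:
          sampling_percentage: 10

exporters:
  kafka:
    topic: otel.otlp_sampled_spans  # new topic created in this PR

service:
  pipelines:
    traces:
      receivers: [kafka]
      processors: [tail_sampling]
      exporters: [kafka]
```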

@matthewhughes-uw matthewhughes-uw requested a review from a team as a code owner December 12, 2024 14:23

linear bot commented Dec 12, 2024


uw-infra commented Dec 12, 2024

Terraform run output for

Cluster: dev-aws
Module: otel/kafka-bitnami
Path: dev-aws/otel
Commit ID: a7dd55fcf8a0ca3799004a4627f6a0c957a46caf
✅ Run Status: Ok, Run Summary: Plan: 3 to add, 0 to change, 0 to destroy.
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kafka_topic.otlp_sampled_spans will be created
  + resource "kafka_topic" "otlp_sampled_spans" {
      + config             = {
          + "cleanup.policy"    = "delete"
          + "compression.type"  = "zstd"
          + "max.message.bytes" = "134217728"
          + "retention.bytes"   = "5368709120"
          + "retention.ms"      = "43200000"
          + "segment.bytes"     = "262144000"
          + "segment.ms"        = "10800000"
        }
      + id                 = (known after apply)
      + name               = "otel.otlp_sampled_spans"
      + partitions         = 200
      + replication_factor = 3
    }

  # module.otel_sampling_collector.kafka_acl.producer_acl["otel.otlp_sampled_spans"] will be created
  + resource "kafka_acl" "producer_acl" {
      + acl_host                     = "*"
      + acl_operation                = "Write"
      + acl_permission_type          = "Allow"
      + acl_principal                = "User:CN=otel/sampling-collector"
      + id                           = (known after apply)
      + resource_name                = "otel.otlp_sampled_spans"
      + resource_pattern_type_filter = "Literal"
      + resource_type                = "Topic"
    }

  # module.otel_sampling_collector.kafka_acl.topic_acl["otel.otlp_spans"] will be created
  + resource "kafka_acl" "topic_acl" {
      + acl_host                     = "*"
      + acl_operation                = "Read"
      + acl_permission_type          = "Allow"
      + acl_principal                = "User:CN=otel/sampling-collector"
      + id                           = (known after apply)
      + resource_name                = "otel.otlp_spans"
      + resource_pattern_type_filter = "Literal"
      + resource_type                = "Topic"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

To manually trigger plan again please post @terraform-applier plan dev-aws/otel as comment.
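
The module source isn't shown in the run output, but HCL along these lines would produce the plan above. This is a reconstruction from the plan, using the mongey/kafka provider's `kafka_topic` and `kafka_acl` resources, not the actual repo code; the real module presumably parameterises the principal and topic names:

```hcl
# Reconstructed sketch of the resources in the plan above.
resource "kafka_topic" "otlp_sampled_spans" {
  name               = "otel.otlp_sampled_spans"
  partitions         = 200
  replication_factor = 3

  config = {
    "cleanup.policy"    = "delete"
    "compression.type"  = "zstd"
    "max.message.bytes" = "134217728"
    "retention.bytes"   = "5368709120"
    "retention.ms"      = "43200000"
    "segment.bytes"     = "262144000"
    "segment.ms"        = "10800000"
  }
}

resource "kafka_acl" "producer_acl" {
  acl_principal                = "User:CN=otel/sampling-collector"
  acl_host                     = "*"
  acl_operation                = "Write"
  acl_permission_type          = "Allow"
  resource_name                = "otel.otlp_sampled_spans"
  resource_type                = "Topic"
  resource_pattern_type_filter = "Literal"
}
```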


uw-infra commented Dec 12, 2024

Terraform run output for

Cluster: dev-aws
Module: otel/kafka-bitnami
Path: dev-aws/otel
Commit ID: e66ffd676f12f98c09b3c52ed4843dcfc7fd80d6
✅ Run Status: Ok, Run Summary: Plan: 3 to add, 0 to change, 0 to destroy.
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kafka_topic.otlp_sampled_spans will be created
  + resource "kafka_topic" "otlp_sampled_spans" {
      + config             = {
          + "cleanup.policy"    = "delete"
          + "compression.type"  = "zstd"
          + "max.message.bytes" = "134217728"
          + "retention.bytes"   = "5368709120"
          + "retention.ms"      = "43200000"
          + "segment.bytes"     = "262144000"
          + "segment.ms"        = "10800000"
        }
      + id                 = (known after apply)
      + name               = "otel.otlp_sampled_spans"
      + partitions         = 200
      + replication_factor = 3
    }

  # module.otel_tail_sampling_collector.kafka_acl.producer_acl["otel.otlp_sampled_spans"] will be created
  + resource "kafka_acl" "producer_acl" {
      + acl_host                     = "*"
      + acl_operation                = "Write"
      + acl_permission_type          = "Allow"
      + acl_principal                = "User:CN=otel/tail-sampling-collector"
      + id                           = (known after apply)
      + resource_name                = "otel.otlp_sampled_spans"
      + resource_pattern_type_filter = "Literal"
      + resource_type                = "Topic"
    }

  # module.otel_tail_sampling_collector.kafka_acl.topic_acl["otel.otlp_spans"] will be created
  + resource "kafka_acl" "topic_acl" {
      + acl_host                     = "*"
      + acl_operation                = "Read"
      + acl_permission_type          = "Allow"
      + acl_principal                = "User:CN=otel/tail-sampling-collector"
      + id                           = (known after apply)
      + resource_name                = "otel.otlp_spans"
      + resource_pattern_type_filter = "Literal"
      + resource_type                = "Topic"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

To manually trigger plan again please post @terraform-applier plan dev-aws/otel as comment.

@matthewhughes-uw matthewhughes-uw force-pushed the mhughes-DENA-1126-new-collector-bits branch from e66ffd6 to 12aa980 Compare December 12, 2024 15:25

uw-infra commented Dec 12, 2024

Terraform run output for

Cluster: dev-aws
Module: otel/kafka-bitnami
Path: dev-aws/otel
Commit ID: 12aa98065dc4a05f01857cbc37c4761e9c22e0e6
✅ Run Status: Ok, Run Summary: Plan: 3 to add, 0 to change, 0 to destroy.
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kafka_topic.otlp_sampled_spans will be created
  + resource "kafka_topic" "otlp_sampled_spans" {
      + config             = {
          + "cleanup.policy"    = "delete"
          + "compression.type"  = "zstd"
          + "max.message.bytes" = "134217728"
          + "retention.bytes"   = "5368709120"
          + "retention.ms"      = "43200000"
          + "segment.bytes"     = "262144000"
          + "segment.ms"        = "10800000"
        }
      + id                 = (known after apply)
      + name               = "otel.otlp_sampled_spans"
      + partitions         = 200
      + replication_factor = 3
    }

  # module.otel_tail_sampling_collector.kafka_acl.producer_acl["otel.otlp_sampled_spans"] will be created
  + resource "kafka_acl" "producer_acl" {
      + acl_host                     = "*"
      + acl_operation                = "Write"
      + acl_permission_type          = "Allow"
      + acl_principal                = "User:CN=otel/tail-sampling-collector"
      + id                           = (known after apply)
      + resource_name                = "otel.otlp_sampled_spans"
      + resource_pattern_type_filter = "Literal"
      + resource_type                = "Topic"
    }

  # module.otel_tail_sampling_collector.kafka_acl.topic_acl["otel.otlp_spans"] will be created
  + resource "kafka_acl" "topic_acl" {
      + acl_host                     = "*"
      + acl_operation                = "Read"
      + acl_permission_type          = "Allow"
      + acl_principal                = "User:CN=otel/tail-sampling-collector"
      + id                           = (known after apply)
      + resource_name                = "otel.otlp_spans"
      + resource_pattern_type_filter = "Literal"
      + resource_type                = "Topic"
    }

Plan: 3 to add, 0 to change, 0 to destroy.

To manually trigger plan again please post @terraform-applier plan dev-aws/otel as comment.

@matthewhughes-uw matthewhughes-uw merged commit 94b5f07 into main Dec 12, 2024
2 checks passed
@matthewhughes-uw matthewhughes-uw deleted the mhughes-DENA-1126-new-collector-bits branch December 12, 2024 15:29
@uw-infra

Terraform run output for

Cluster: dev-aws
Module: otel/kafka-bitnami
Path: dev-aws/otel
Commit ID: 94b5f07e8f51316ffcebe0d2ba54c6b7e0f51af7
✅ Run Status: Ok, Run Summary: Apply complete! Resources: 3 added, 0 changed, 0 destroyed
Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # kafka_topic.otlp_sampled_spans will be created
  + resource "kafka_topic" "otlp_sampled_spans" {
      + config             = {
          + "cleanup.policy"    = "delete"
          + "compression.type"  = "zstd"
          + "max.message.bytes" = "134217728"
          + "retention.bytes"   = "5368709120"
          + "retention.ms"      = "43200000"
          + "segment.bytes"     = "262144000"
          + "segment.ms"        = "10800000"
        }
      + id                 = (known after apply)
      + name               = "otel.otlp_sampled_spans"
      + partitions         = 200
      + replication_factor = 3
    }

  # module.otel_tail_sampling_collector.kafka_acl.producer_acl["otel.otlp_sampled_spans"] will be created
  + resource "kafka_acl" "producer_acl" {
      + acl_host                     = "*"
      + acl_operation                = "Write"
      + acl_permission_type          = "Allow"
      + acl_principal                = "User:CN=otel/tail-sampling-collector"
      + id                           = (known after apply)
      + resource_name                = "otel.otlp_sampled_spans"
      + resource_pattern_type_filter = "Literal"
      + resource_type                = "Topic"
    }

  # module.otel_tail_sampling_collector.kafka_acl.topic_acl["otel.otlp_spans"] will be created
  + resource "kafka_acl" "topic_acl" {
      + acl_host                     = "*"
      + acl_operation                = "Read"
      + acl_permission_type          = "Allow"
      + acl_principal                = "User:CN=otel/tail-sampling-collector"
      + id                           = (known after apply)
      + resource_name                = "otel.otlp_spans"
      + resource_pattern_type_filter = "Literal"
      + resource_type                = "Topic"
    }

Plan: 3 to add, 0 to change, 0 to destroy.
kafka_topic.otlp_sampled_spans: Creating...
module.otel_tail_sampling_collector.kafka_acl.topic_acl["otel.otlp_spans"]: Creating...
module.otel_tail_sampling_collector.kafka_acl.topic_acl["otel.otlp_spans"]: Creation complete after 1s [id=User:CN=otel/tail-sampling-collector|*|Read|Allow|Topic|otel.otlp_spans|Literal]
kafka_topic.otlp_sampled_spans: Creation complete after 2s [id=otel.otlp_sampled_spans]
module.otel_tail_sampling_collector.kafka_acl.producer_acl["otel.otlp_sampled_spans"]: Creating...
module.otel_tail_sampling_collector.kafka_acl.producer_acl["otel.otlp_sampled_spans"]: Creation complete after 0s [id=User:CN=otel/tail-sampling-collector|*|Write|Allow|Topic|otel.otlp_sampled_spans|Literal]

Warning: Argument is deprecated

  with provider["registry.terraform.io/mongey/kafka"],
  on __env.tf line 14, in provider "kafka":
  14: provider "kafka" {

This parameter is now deprecated and will be removed in a later release,
please use `client_cert` instead.

(and 2 more similar warnings elsewhere)

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

To manually trigger plan again please post @terraform-applier plan dev-aws/otel as comment.
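
For readers decoding the raw topic config values in the plans above, they map onto round binary/time units. A quick arithmetic check (the unit names are mine, the numbers are from the plan output):

```python
# Convert the raw kafka_topic config values from the plan into familiar units.
MiB = 1024 ** 2
GiB = 1024 ** 3
HOUR_MS = 60 * 60 * 1000

assert 134217728 == 128 * MiB    # max.message.bytes: 128 MiB
assert 5368709120 == 5 * GiB     # retention.bytes:   5 GiB
assert 43200000 == 12 * HOUR_MS  # retention.ms:      12 hours
assert 262144000 == 250 * MiB    # segment.bytes:     250 MiB
assert 10800000 == 3 * HOUR_MS   # segment.ms:        3 hours
print("all conversions check out")
```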
