Update connectors.rst
- Also trim trailing whitespace
afausti committed Aug 28, 2024
1 parent 4fc7e71 commit b99d633
Showing 3 changed files with 15 additions and 10 deletions.
6 changes: 3 additions & 3 deletions docs/developer-guide/broker-migration.rst
@@ -35,7 +35,7 @@ To migrate your Kafka brokers to a new storage class, you need to specify the st
rebalance: false
This configuration creates a new ``KafkaNodePool`` resource for the brokers using the new storage class.
Sync the new ``KafkaNodePool`` resource in Argo CD.

At this point, your data will still reside on the old brokers, and the new ones will be empty.
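As a sketch, the Helm values driving this phase might look like the following; only the ``brokerStorage`` option names come from this guide, while the nesting and the storage class name are assumptions to verify against your Sasquatch values file:

```yaml
# Hypothetical values sketch for the migration phase. The option names
# (enabled, migration.enabled, migration.rebalance) appear in this guide;
# storageClassName and the surrounding structure are placeholders.
brokerStorage:
  enabled: false                 # old brokers still hold the data
  storageClassName: new-storage-class
  migration:
    enabled: true                # creates the new KafkaNodePool
    rebalance: false             # rebalancing is triggered in a later step
```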
@@ -68,9 +68,9 @@ You can check the state of the rebalance by inspecting the ``KafkaRebalance`` re
Finally, once the rebalancing state is ready, set ``brokerStorage.enabled: true`` and ``brokerStorage.migration.enabled: false`` and ``brokerStorage.migration.rebalance: false``.
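To inspect the rebalancing state, a command along these lines can be used; the resource name ``broker-migration`` and the ``sasquatch`` namespace are placeholders for this sketch, not names confirmed by this guide:

```shell
# Minimal sketch: build the command that shows the KafkaRebalance
# resource and its status conditions. The resource name and namespace
# are placeholders; adjust them for your environment.
NAMESPACE="sasquatch"
REBALANCE="broker-migration"
CMD="kubectl get kafkarebalance ${REBALANCE} -n ${NAMESPACE} -o yaml"
# Print the command rather than running it, so the sketch works without
# a cluster; run the printed command to see the rebalance status.
echo "${CMD}"
```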

Note that the PVCs of the old brokers need to be deleted manually, as they are orphan resources in Sasquatch.

Also, keep in mind that Strimzi will assign new broker IDs to the newly created brokers.
Ensure that you update the broker IDs wherever they are used, such as in the Kafka external listener configuration.
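For instance, a Strimzi external listener with per-broker overrides references broker IDs explicitly; a hedged sketch of the kind of stanza that needs updating (the IDs, port, listener type, and hostnames here are all placeholders, not values from this guide):

```yaml
# Hypothetical sketch of a Kafka external listener with per-broker
# overrides. After the migration, the broker IDs (3, 4, 5 below) must
# match the IDs Strimzi assigned to the new brokers; hostnames are
# placeholders.
listeners:
  - name: external
    port: 9094
    type: loadbalancer
    tls: true
    configuration:
      brokers:
        - broker: 3
          advertisedHost: broker-3.example.com
        - broker: 4
          advertisedHost: broker-4.example.com
        - broker: 5
          advertisedHost: broker-5.example.com
```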


13 changes: 9 additions & 4 deletions docs/developer-guide/connectors.rst
@@ -6,7 +6,7 @@ Managing InfluxDB Sink connectors


An InfluxDB Sink connector consumes data from Kafka and writes to InfluxDB.
Sasquatch uses the Telegraf `Kafka consumer input`_ plugin and the `InfluxDB v1 output`_ plugin, implemented in the `telegraf-kafka-consumer subchart`_.

Configuration
=============
@@ -31,6 +31,12 @@ Here's what the connector configuration for writing data from the ``lsst.example
[ "band", "instrument" ]
replicaCount: 1
The following sections cover the most important configuration options using the ``lsst.example.skyFluxMetric`` metric as an example.

See the `telegraf-kafka-consumer subchart`_ for the configuration options and default values.

See the :ref:`avro` section to learn more about the ``lsst.example.skyFluxMetric`` example in Sasquatch.
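Putting these pieces together, a connector configuration might be sketched as below. Only ``replicaCount`` and the ``[ "band", "instrument" ]`` tag list appear verbatim in this guide; the connector name and the remaining key names are assumptions to check against the subchart's values and defaults:

```yaml
# Hypothetical sketch of an InfluxDB Sink connector configuration for
# the lsst.example.skyFluxMetric metric. Key names other than
# replicaCount and the tag list are assumptions; verify them against
# the telegraf-kafka-consumer subchart.
kafkaConsumers:
  example:
    enabled: true
    topicRegexps: |
      [ "lsst.example.skyFluxMetric" ]
    tags: |
      [ "band", "instrument" ]
    replicaCount: 1
```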

Selecting Kafka topics
----------------------

@@ -80,7 +86,6 @@ For example, you might query the ``lsst.example.skyFluxMetric`` metric and group

See `InfluxDB schema design and data layout`_ for more insights on how to design tags.



Deployment and scaling
@@ -94,7 +99,7 @@ To scale a connector horizontally, increase the ``kafkaConsumers.<connector name
.. note::

   Scaling the connector horizontally only works if the Kafka topic has multiple partitions.
   The number of topic partitions must be a multiple of the number of connector replicas.
   For example, if your topic was created with 8 partitions, you can scale the connector to 1, 2, 4, or 8 replicas.
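The valid replica counts in the note above can be sketched with a small shell loop; the partition count is hard-coded to 8, matching the example:

```shell
# Valid replica counts for a connector are the divisors of the topic's
# partition count, so every replica consumes the same number of
# partitions.
PARTITIONS=8
VALID=""
for r in $(seq 1 "${PARTITIONS}"); do
    if [ $((PARTITIONS % r)) -eq 0 ]; then
        VALID="${VALID}${r} "
    fi
done
echo "valid replica counts: ${VALID% }"   # valid replica counts: 1 2 4 8
```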

Operations
@@ -108,7 +113,7 @@ To list the connectors deployed in a Sasquatch environment, run:
To view the logs of a connector or multiple connectors run:

.. code:: bash

   kubectl logs sasquatch-telegraf-<connector-name> -n sasquatch
   kubectl logs -l app=sasquatch-telegraf-kafka-consumer --tail=5 -n sasquatch
6 changes: 3 additions & 3 deletions docs/developer-guide/strimzi-updates.rst
@@ -5,8 +5,8 @@
Strimzi upgrades
################

It is recommended that you perform incremental upgrades of the Strimzi operator as soon as new versions become available.
In Phalanx, Dependabot will detect a new version of Strimzi.
Once you merge the dependabot PR into the ``main`` branch, you can sync the Strimzi app in Argo CD.

This operation will upgrade the operator to the latest version and will trigger a Kafka rollout in the namespaces watched by Strimzi.
@@ -23,7 +23,7 @@ See :ref:`kafka-upgrades` for instructions on upgrading Kafka.
Kafka upgrades
==============

Each Strimzi release supports a range of Kafka versions.
It is recommended that you always use the latest version of Kafka that is supported by the operator.

Sasquatch deploys Kafka in KRaft mode.
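For example, the Kafka version bump lands in the ``Kafka`` custom resource; a sketch with placeholder version numbers (pick the latest Kafka version supported by the installed Strimzi release; in KRaft mode, ``metadataVersion`` takes the place of the ZooKeeper-era inter-broker protocol settings):

```yaml
# Hypothetical sketch of the version fields in a Strimzi Kafka resource.
# The version numbers below are placeholders, not recommendations.
spec:
  kafka:
    version: 3.7.0
    metadataVersion: 3.7-IV4
```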
