Release 08.06.2021
* Fixes and improvements.
* Translations updated.
DataUI VCS Robot committed Jun 8, 2021
1 parent 2fd6871 commit 81fa9f9
Showing 81 changed files with 2,783 additions and 2,004 deletions.
10 changes: 10 additions & 0 deletions en/_includes/marketplace/image.md
@@ -0,0 +1,10 @@
Images of products placed in the Marketplace must meet the [requirements](../../marketplace/operations/create-image.md#requirements).

If you don't have a VM image, create one:

* [Use Packer](../../solutions/infrastructure-management/packer-quickstart). The image is automatically uploaded to {{ compute-name }}.<br>Recommendations for creating an image:
* As a base image, use an image from the [public catalog](../../compute/operations/images-with-pre-installed-software/get-list) in {{ yandex-cloud }}.
* See [examples of packer recipes](https://github.com/yandex-cloud/examples/tree/master/jenkins-packer/packer).
* [Automate](../../solutions/infrastructure-management/jenkins) the image build with Jenkins.
* Use other tools that are convenient for you. In this case, you need to [upload](../../compute/operations/image-create/upload.md) the image to {{ compute-name }} yourself. Supported image formats: QCOW2, VMDK, and VHD.

13 changes: 13 additions & 0 deletions en/_includes/marketplace/product-version.md
@@ -0,0 +1,13 @@
Upload the first version of your product and assign it a service plan.

1. In the **Name** field, enter the name of the product's version.
1. Upload the product logo in SVG format.
1. In the **Image** field, click **Add image**. In the window that opens, find your folder and select the desired image.
1. Select the operating system type: Linux or Windows.
1. In the **Plan** list, select the service plan that you created.
1. If you want to use this version by default, enable **Major version**.
1. Add a **Version description**.
1. Set the minimum hardware requirements for your product.
1. Click **Preview** to see what your product's public page will look like.
1. Click **Ready**.

@@ -1,25 +1,16 @@
Add information about your product to be displayed in the Marketplace public catalog.

1. Log in to the Marketplace partner interface.
1. Open the **Products** section.

1. Click **Add product**.

1. Fill out the form with information about your product:

1. Specify the product name in Russian and English.

1. Specify the vendor name in Russian and English.

1. Select a product category.

1. Upload the product logo in SVG format.

1. Under **Product description**, add the following in Russian and English:

* A brief description of your product: what problems it addresses, its key characteristics, features, and advantages over its counterparts. Be specific and avoid advertising cliches.

* Step-by-step instructions on how to get started with your product. Specify what to pay attention to when deploying your product and what difficulties users may face.

* Your contact details so that users can reach you if they have questions or something goes wrong while using the product.

{% cut "Example" %}
@@ -31,7 +22,7 @@
#### Deployment instructions {#ubuntu-deploy}

1.&nbsp;Open Ubuntu 20.04 LTS in the Yandex.Cloud marketplace.<br>
2.&nbsp;Click **Run in console** and [create](https://cloud.yandex.ru/docs/compute/operations/vm-create/create-linux-vm) a VM.

#### Support information {#ubuntu-support}

@@ -41,14 +32,13 @@

**Yandex.Cloud**

Yandex.Cloud technical support responds to requests 24 hours a day, 7 days a week. The response time depends on your service plan. [Learn more](https://cloud.yandex.ru/docs/support/overview).

{% endcut %}

1. Provide a list of examples in Russian and English of how your product can be used. Give links to available use cases (if any).

{% cut "Example" %}

* Development and testing of web services.
* Prototyping new service components.
* Administration of a cluster of VMs or databases.
@@ -58,5 +48,5 @@

1. Attach links to the product or developer website.

1. Click **Next** to proceed to uploading the first version of your product.

4 changes: 4 additions & 0 deletions en/_includes/marketplace/registration.md
@@ -0,0 +1,4 @@
Open the [Marketplace](https://cloud.yandex.com/marketplace) page and fill out a request to add your product to the Marketplace. In the request, specify the [billing account](../../billing/concepts/billing-account.md) of the business that will publish the product. By submitting the request, you accept the [Offer](https://yandex.ru/legal/marketplace_offer/) for Software Product Access on the Marketplace.

Once {{ yandex-cloud }} verifies your request, the specified billing account gets Yandex Cloud Marketplace partner status and access to the [partner interface](https://partners.cloud.yandex.ru/). You'll receive an email notification when that happens.

6 changes: 6 additions & 0 deletions en/_includes/marketplace/tariff-note.md
@@ -0,0 +1,6 @@
{% note warning %}

{{ yandex-cloud }} charges a 20% commission on the cost of using an image.

{% endnote %}

File renamed without changes.
147 changes: 147 additions & 0 deletions en/_includes/mdb/mkf/kafka-settings.md
@@ -0,0 +1,147 @@
* **Auto create topics enable** {{ tag-con }} {#settings-auto-create-topics}

Manages automatic creation of topics.

Automatic topic creation is disabled by default (`false`).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_auto.create.topics.enable).

* **Compression type** {{ tag-all }} {#settings-compression-type}

In the management console, this setting corresponds to **Compression type**.

The codec used for message compression:

| Management console | CLI | {{ TF }} and API | Description |
| ------------------ | -------------- | ------------------------------- | ------------------------------------------------ |
| `Uncompressed` | `uncompressed` | `COMPRESSION_TYPE_UNCOMPRESSED` | Compression is disabled |
| `Zstd` | `zstd` | `COMPRESSION_TYPE_ZSTD` | The [zstd](https://facebook.github.io/zstd/) codec |
| `Lz4` | `lz4` | `COMPRESSION_TYPE_LZ4` | The [lz4](https://lz4.github.io/lz4/) codec |
| `Snappy` | `snappy` | `COMPRESSION_TYPE_SNAPPY` | The [snappy](https://github.com/google/snappy) codec |
| `Gzip` | `gzip` | `COMPRESSION_TYPE_GZIP` | The [gzip](https://www.gzip.org) codec |
| `Producer` | `producer` | `COMPRESSION_TYPE_PRODUCER` | The codec is set by the [producer](../../../managed-kafka/concepts/producers-consumers.md) |

By default, the compression codec is set by the producer (`COMPRESSION_TYPE_PRODUCER`); a producer-side sketch is given after this list of settings.

This is a global cluster-level setting. It can be overridden on the [topic level](#settings-topic-compression-type).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_compression.type).

* **Log flush interval messages** {{ tag-all }} {#settings-log-flush-interval-messages}

In the management console, this setting corresponds to **Flush messages**.

The number of topic messages that can be kept in memory before these messages are flushed to disk. For example, if the value is `1`, messages are flushed to disk as soon as each one is received; if it's `5`, messages are flushed to disk in groups of five.

The minimum value is `1`, the maximum value is `9223372036854775807` (default).

This is a global cluster-level setting. It can be overridden on the [topic level](#settings-topic-flush-messages).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#flush.messages).

* **Log flush interval ms** {{ tag-all }} {#settings-log-flush-interval-ms}

The maximum time in milliseconds that a message in the topic is kept in memory before being flushed to disk. If the value is not set, the [Log flush scheduler interval ms](#settings-log-flush-scheduler-interval-ms) setting is used instead.

The maximum value is `9223372036854775807`.

This is a global cluster-level setting. It can be overridden on the [topic level](#settings-topic-flush-ms).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#flush.ms).

* **Log flush scheduler interval ms** {{ tag-all }} {#settings-log-flush-scheduler-interval-ms}

The interval (in milliseconds) at which the log flusher checks whether there are any logs that need to be flushed to disk.

The maximum value is `9223372036854775807` (default).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_log.flush.scheduler.interval.ms).

* **Log preallocate** {{ tag-con }} {{ tag-cli }} {{ tag-tf }} {#settings-log-preallocate}

Determines whether to preallocate log file space.

By default, the space for log files is allocated as needed (`false`).

This is a global cluster-level setting. It can be overridden on the [topic level](#settings-topic-preallocate).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_log.preallocate).

* **Log retention bytes** {{ tag-all }} {#settings-log-retention-bytes}

In the management console, this setting corresponds to **Retention, bytes**.

The maximum size a partition can grow to, in bytes. When the partition reaches this size, {{ KF }} starts deleting old log segments. The setting applies if the [log cleanup policy](#settings-topic-cleanup-policy) in effect is `Delete`.

The minimum and default value is `-1` (log size is unlimited), the maximum value is `9223372036854775807`.

Use this setting if you need to control the log size due to limited disk space.

This is a global cluster-level setting. It can be overridden on the [topic level](#settings-topic-retention-bytes).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_log.retention.bytes).

See also the [Log retention ms](#settings-log-retention-ms) setting.

* **Log retention hours** {{ tag-all }} {#settings-log-retention-hours}

Time (in hours) for {{ KF }} to keep a log segment file. This setting applies if the [log cleanup policy](#settings-topic-cleanup-policy) in effect is `Delete`: after the specified time, the segment file is deleted.

The maximum value is `168` (default).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_log.retention.hours).

* **Log retention minutes** {{ tag-all }} {#settings-log-retention-minutes}

Time (in minutes) for {{ KF }} to keep a log segment file. This setting applies if the [log cleanup policy](#settings-topic-cleanup-policy) in effect is `Delete`: after the specified time, the segment file is deleted.

The maximum value is `2147483647`. If no value is set, [Log retention hours](#settings-log-retention-hours) is used.

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_log.retention.minutes).

* **Log retention ms** {{ tag-all }} {#settings-log-retention-ms}

In the management console, this setting corresponds to **Retention, ms**.

Time (in milliseconds) for {{ KF }} to keep a log segment file. This setting applies if the [log cleanup policy](#settings-topic-cleanup-policy) in effect is `Delete`: after the specified time, the segment file is deleted.

The minimum value is `-1` (logs are stored without any time limits), the maximum value is `9223372036854775807`. If no value is set, [Log retention minutes](#settings-log-retention-minutes) is used.

{% note warning %}

If both **Log retention bytes** and **Log retention ms** are set to `-1`, the log grows indefinitely, and the cluster may quickly run out of storage space.

{% endnote %}

This is a global cluster-level setting. It can be overridden on the [topic level](#settings-topic-log-retention-ms).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#retention.ms).

See also the [Log retention bytes](#settings-log-retention-bytes) setting.

* **Log segment bytes** {{ tag-con }} {{ tag-cli }} {{ tag-tf }} {#settings-log-segment-bytes}

The maximum size of a log segment file, in bytes. When a segment reaches this size, a new one is created.

The minimum value is `14`, maximum value is `2147483647`, default value is `1073741824` (1 GB).

This is a global cluster-level setting. It can be overridden on the [topic level](#settings-topic-segment-bytes).

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_log.segment.bytes).

* **Socket receive buffer bytes** {{ tag-con }} {#settings-socket-receive-buffer-bytes}

The buffer size for the receiving socket (in bytes).

The minimum and default value is `-1` (use OS settings), the maximum value is `2147483647`.

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_socket.receive.buffer.bytes).

* **Socket send buffer bytes** {{ tag-con }} {#settings-socket-send-buffer-bytes}

The buffer size for the send socket (in bytes).

The minimum and default value is `-1` (use OS settings), the maximum value is `2147483647`.

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#brokerconfigs_socket.send.buffer.bytes).
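
Most of the settings above map to Apache Kafka configuration keys of the same name (the links point to the corresponding entries in the {{ KF }} documentation). In particular, when **Compression type** is left at its default `Producer` value, the codec is chosen by each producer. Below is a minimal, hypothetical sketch of that producer-side choice using the `confluent-kafka` Python client; the broker address, topic name, and payload are placeholders, and a real cluster connection typically also needs TLS and authentication options.

```python
from confluent_kafka import Producer

# Placeholder broker address; production clusters also need TLS/SASL settings.
producer = Producer({
    "bootstrap.servers": "broker.example.net:9092",
    # Explicitly choose a codec; with the cluster-level `Producer` default,
    # messages are stored with whatever codec the producer used.
    "compression.type": "gzip",
})

# Placeholder topic name and payload.
producer.produce("events", key="user-1", value="example payload")
producer.flush()
```

If the cluster-level **Compression type** is set to a specific codec instead, the broker stores batches with that codec, re-compressing them if the producer used a different one.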

131 changes: 131 additions & 0 deletions en/_includes/mdb/mkf/topic-settings.md
@@ -0,0 +1,131 @@
* **Cleanup policy** {{ tag-con }} {{ tag-cli }} {{ tag-api }} {#settings-topic-cleanup-policy}

In the management console, this setting corresponds to **Cleanup policy**.

Retention policy to use on old log messages:
* `Delete` (`CLEANUP_POLICY_DELETE` for {{ TF }} and the API): Delete log segments when either their retention time or log size limit is reached.
* `Compact` (`CLEANUP_POLICY_COMPACT` for {{ TF }} and the API): Compress messages in the log.
* `CompactAndDelete` (`CLEANUP_POLICY_COMPACT_AND_DELETE` for {{ TF }} and the API): Both compact and delete log segments.

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#cleanup.policy).

* **Compression type** {{ tag-all }} {#settings-topic-compression-type}

In the management console, this setting corresponds to **Compression type**.

See the [Compression type](#settings-compression-type) cluster-level setting.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_compression.type).

* **Delete delay, ms** {{ tag-all }} {#settings-topic-file-delete-delay}

In the management console, this setting corresponds to **Delete delay, ms**.

The time to wait before deleting a file from the filesystem.

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#file.delete.delay.ms).

* **Delete retention** {{ tag-con }} {#settings-delete-retention}

Time in milliseconds to retain delete tombstone markers for log compacted topics. This setting applies only if the [log cleanup policy](#settings-topic-cleanup-policy) is set to `Compact` or `CompactAndDelete`.

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#delete.retention.ms).

* **Flush messages** {{ tag-all }} {#settings-topic-flush-messages}

In the management console, this setting corresponds to **Flush messages**.

See the [Log flush interval messages](#settings-log-flush-interval-messages) cluster-level setting.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_flush.messages).

* **Flush, ms** {{ tag-all }} {#settings-topic-flush-ms}

In the management console, this setting corresponds to **Flush, ms**.

See the [Log flush interval ms](#settings-log-flush-interval-ms) cluster-level setting.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_flush.ms).

* **Maximum batch size** {{ tag-all }} {#settings-topic-max-message-bytes}

In the management console, this setting corresponds to **Maximum batch size**.

The maximum size of a batch of messages that {{ KF }} allows the producer to write to the topic or the consumer to read from it (in bytes, after compression, if enabled).

The minimum value is `0`, the default value is `1048588` (1 MB).

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_max.message.bytes).

* **Min compaction lag, ms** {{ tag-all }} {#settings-topic-max-compaction-lag-ms}

In the management console, this setting corresponds to **Minimum compaction lag, ms**.

The minimum time a message will remain uncompacted in the log.

For more information, see the [documentation for {{ KF }}](https://kafka.apache.org/documentation/#min.compaction.lag.ms).

* **Minimum number of in-sync replicas** {{ tag-con }} {{ tag-cli }} {{ tag-api }} {#settings-topic-min-insync-replicas}

In the management console, this setting corresponds to **Minimum number of in-sync replicas**.

The minimum number of replicas that must acknowledge a write before the message is considered successfully written to the topic. Use this setting if the producer waits too long for write acknowledgment from all the broker hosts in the cluster (see the topic creation sketch after this list).

The minimum value depends on the number of [broker hosts](../../../managed-kafka/concepts/brokers.md):
* For clusters with a single broker host: `1`.
* For clusters with at least two broker hosts: `2`.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_min.insync.replicas).

* **Num partitions** {{ tag-all }} {#settings-topic-num-partitions}

In the management console, this setting corresponds to **Number of partitions**.

The number of log partitions for the topic.

The minimum and default value is `1`.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#brokerconfigs_num.partitions).

* **Preallocate** {{ tag-cli }} {{ tag-tf }} {#settings-topic-preallocate}

See the [Log preallocate](#settings-log-preallocate) cluster-level setting.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_preallocate).

* **Replication factor** {{ tag-all }} {#settings-topic-replication-factor}

In the management console, this setting corresponds to **Replication factor**.

The number of copies (replicas) of the topic data stored in the cluster.

The minimum value depends on the number of [broker hosts](../../../managed-kafka/concepts/brokers.md):
* For clusters with a single broker host: `1`.
* For clusters with at least two broker hosts: `3`.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#streamsconfigs_replication.factor).

* **Retention, bytes** {{ tag-all }} {#settings-topic-retention-bytes}

In the management console, this setting corresponds to **Retention, bytes**.

See the [Log retention bytes](#settings-log-retention-bytes) cluster-level setting.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_retention.bytes).

* **Retention, ms** {{ tag-all }} {#settings-topic-log-retention-ms}

In the management console, this setting corresponds to **Retention, ms**.

See the [Log retention ms](#settings-log-retention-ms) cluster-level setting.

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_retention.ms).

* **Segment bytes** {{ tag-cli }} {{ tag-tf }} {#settings-topic-segment-bytes}

Segment size for log files, in bytes.

The minimum value is `14`, the default value is `1073741824` (1 GB).

For more information, see the [{{ KF }} documentation](https://kafka.apache.org/documentation/#topicconfigs_segment.bytes).
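
The topic-level settings above correspond to standard Apache Kafka topic configuration keys. As an illustration only (in the managed service, topics are normally managed through the management console, CLI, {{ TF }}, or API), here is a hypothetical sketch of creating a topic with several of these overrides using the `confluent-kafka` Python admin client; the broker address, topic name, and values are placeholders, and a real cluster connection typically also needs TLS and authentication options.

```python
from confluent_kafka.admin import AdminClient, NewTopic

# Placeholder broker address; production clusters also need TLS/SASL settings.
admin = AdminClient({"bootstrap.servers": "broker.example.net:9092"})

topic = NewTopic(
    "events",                      # placeholder topic name
    num_partitions=3,              # Num partitions
    replication_factor=3,          # Replication factor
    config={
        "cleanup.policy": "delete",        # Cleanup policy
        "retention.ms": "604800000",       # Retention, ms (7 days)
        "retention.bytes": "10737418240",  # Retention, bytes (10 GB)
        "min.insync.replicas": "2",        # Minimum number of in-sync replicas
    },
)

# create_topics() returns a dict of futures keyed by topic name.
for name, future in admin.create_topics([topic]).items():
    future.result()  # raises an exception if the topic could not be created
```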
