Releases: IBMStreams/streamsx.kafka
Kafka Toolkit v3.0.4
What's new and what has changed
This release is a bugfix release.
Fixed issues in this release:
- #203 KafkaConsumer: assign output attributes via index rather than attribute name
- #206 Make main composites of samples public. This allows using the samples with the streamsx Python package.
- #208 KafkaProducer: message or key attribute with underline causes error at context checker. All previous versions back to 1.0.0 are affected by this issue.
- New sample: KafkaAvroSample
For all changes, please see the CHANGELOG
You find the SPL documentation at https://ibmstreams.github.io/streamsx.kafka/docs/user/SPLDoc/.
Kafka Toolkit v3.0.3
What's new in this release and what has changed
This release is a bugfix release.
Fixed issues:
- #198 - The "nConsecutiveRuntimeExc" variable never reaches 50 when exceptions occur
You find the SPL documentation at https://ibmstreams.github.io/streamsx.kafka/docs/user/SPLDoc/.
Kafka Toolkit v3.0.2
What's new in this release and what has changed
This release is a bugfix release.
Fixed issues:
You find the SPL documentation at https://ibmstreams.github.io/streamsx.kafka/docs/user/SPLDoc/.
Kafka Toolkit v3.0.1
What's new in this release and what has changed
This release is a bugfix release.
Fixed issues:
- #196 - KafkaProducer: Consistent region reset can trigger additional reset
You find the SPL documentation at https://ibmstreams.github.io/streamsx.kafka/docs/user/SPLDoc/.
Kafka Toolkit v3.0.0
What's new in this release and what has changed
- The included Kafka client has been upgraded from version 2.2.1 to 2.3.1.
- The schema of the output port of the `KafkaProducer` operator supports optional types for the error description.
- When in autonomous region, the optional input port of the `KafkaConsumer` operator can be used to change the topic subscription, not only the partition assignment. To create JSON for changing topic subscriptions, the toolkit contains new SPL functions. For consistent region, this function is not available.
- The `guaranteeOrdering` parameter now enables the idempotent producer when set to `true`, which allows a higher throughput by allowing more in-flight requests per connection (requires Kafka server version 0.11 or higher).
- The `KafkaConsumer` operator now enables and benefits from group management when the user does not specify a group identifier, i.e. when the operator generates a group identifier.
- Checkpoint reset of the `KafkaConsumer` is optimized in consistent region when the consumer operator is the only group member.
- The `KafkaConsumer` operator can be configured as a static consumer group member (requires Kafka server version 2.3 or higher). See also the Static Consumer Group Membership chapter in the KafkaConsumer's documentation.
- The `KafkaConsumer` operator now uses `read_committed` as the default `isolation.level` configuration unless the user has specified a different value. In `read_committed` mode, the consumer will read only those transactional messages which have been successfully committed. Messages of aborted transactions are now skipped. The consumer will continue to read non-transactional messages as before. This new default setting is incompatible with Kafka 0.10.2.
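The `guaranteeOrdering` change can be illustrated with a minimal producer invocation. This is a sketch only; the topic name, stream name, and properties file path are assumptions:

```
// Sketch: guaranteeOrdering=true now also enables the idempotent
// producer (requires Kafka server 0.11 or higher).
() as MessageSink = KafkaProducer(Messages) {
    param
        topic: "myTopic";                          // assumed topic name
        guaranteeOrdering: true;
        propertiesFile: "etc/producer.properties"; // assumed connection config
}
```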
Deprecated features
The use of the input control port has been deprecated when the KafkaConsumer is used in a consistent region.
Changes in behaviour
KafkaConsumer operator
When the user does not specify a group identifier and does not specify partitions to consume (via the `partition` parameter), the `KafkaConsumer` now subscribes to the given topics and benefits from group management; for example, it automatically gets new partitions assigned when the number of partitions of the subscribed topics changes. In previous versions, the `KafkaConsumer` operator self-assigned all available partitions of the topics at startup, and the partition assignment of the operator never changed at runtime.
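An invocation that falls under the new behaviour can be sketched as follows; the topic name, output schema, and properties file path are assumptions. With neither a group identifier nor the `partition` parameter, the operator now subscribes and takes part in group management instead of self-assigning all partitions:

```
// Sketch: no group identifier and no 'partition' parameter, so the
// operator generates a group identifier and uses group management.
stream<rstring key, rstring message> Messages = KafkaConsumer() {
    param
        topic: "myTopic";                          // assumed topic name
        propertiesFile: "etc/consumer.properties"; // assumed connection config
}
```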
Incompatible changes
- The toolkit requires at minimum Streams version 4.3.
- The `guaranteeOrdering` parameter of the `KafkaProducer` operator is incompatible with Kafka version 0.10.x when set to `true`. The work-around for Kafka 0.10.x is given in the `guaranteeOrdering` parameter description.
- When the `KafkaConsumer` operator is configured with an input port, the `topic`, `pattern`, `partition`, and `startPosition` parameters used to be ignored in previous versions. Now an SPL compiler error is raised when one of these parameters is used together with the input port.
- The default `isolation.level` configuration of the `KafkaConsumer` operator is incompatible with Kafka broker version 0.10.x. When connecting with Kafka 0.10.x, `isolation.level=read_uncommitted` must be used for the consumer configuration.
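For example, when connecting to a 0.10.x broker, the new default can be overridden in the consumer's properties file; the file location shown is an assumption:

```
# etc/consumer.properties (assumed location)
# Kafka 0.10.x brokers do not support read_committed, so override
# the toolkit's new default:
isolation.level=read_uncommitted
```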
The SPL documentation can be found at https://ibmstreams.github.io/streamsx.kafka/docs/user/SPLDoc/
Kafka Toolkit v1.9.5
What's new in this toolkit release
This is a bugfix release that fixes a critical bug for users for whom an upgrade to at least toolkit version 2.0.1 is not an option.
This toolkit release fixes the following issue:
- #171 Resetting from checkpoint will fail when sequence id is >1000
You find the SPL documentation online at https://ibmstreams.github.io/streamsx.kafka/doc/v1.9.5/spldoc/html/.
Kafka Toolkit v2.2.1
What's new in this toolkit release
This release is a bugfix release for the following bugs:
- #179 - KafkaProducer: Lost output tuples on FinalMarker reception
You find the SPL documentation online at https://ibmstreams.github.io/streamsx.kafka/docs/user/SPLDoc/
Kafka Toolkit v2.2.0
What's new in this toolkit release
Changes and enhancements
- The `KafkaProducer` operator can be configured with an optional output port. The function of this port can be configured with the `outputErrorsOnly` parameter.
- The `KafkaProducer` operator has the following new custom metrics: `nFailedTuples`, `nPendingTuples`, and `nQueueFullPause`.
- Changed recovery strategy for the `KafkaProducer` operator when used outside a consistent region: In previous versions the operator used to abort the PE on a Kafka error, with loss of all buffered producer records. Now the input tuples are kept until all their producer records are acknowledged. On a retriable Kafka error, the internal producer instance is re-created, and the failed records of the tuples are retried. Records that fail two send attempts are treated as finally failed.
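A sketch of a producer with the new optional output port; the stream names and the parameter value shown are assumptions, and the available output schema options are described in the operator documentation:

```
// Sketch: with outputErrorsOnly=true, only tuples whose producer
// records finally failed are submitted to the output port.
(stream<In> FailedTuples) as MessageSink = KafkaProducer(In) {
    param
        topic: "myTopic"; // assumed topic name
        outputErrorsOnly: true;
}
```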
You find the SPL documentation online at https://ibmstreams.github.io/streamsx.kafka/docs/user/SPLDoc/
Kafka Toolkit v2.1.0
What's new in this toolkit release
Changes and enhancements
- This toolkit version has also been tested with Kafka 2.3.
- #169 New optional operator parameter `sslDebug`. For debugging SSL issues, see also the toolkit documentation.
- #167 Changed default values for the following consumer and producer configurations:
  - `client.dns.lookup = use_all_dns_ips`
  - `reconnect.backoff.max.ms = 10000` (Kafka's default is 1000)
  - `reconnect.backoff.ms = 250` (Kafka's default is 50)
  - `retry.backoff.ms = 500` (Kafka's default is 100)
- Changed exception handling for the KafkaProducer when not used in a consistent region: #163 (comment)
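Applications that depend on Kafka's stock timings can restore them by setting the properties explicitly in the operator's properties file, which takes precedence over the toolkit defaults listed above:

```
# Restore Kafka's stock defaults, overriding the toolkit's new values
reconnect.backoff.max.ms=1000
reconnect.backoff.ms=50
retry.backoff.ms=100
```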
Bugs fixed in this release
- #163 KafkaProducer's exception handling makes the operator lose tuples when in CR
- #164 on reset() the KafkaProducerOperator should instantiate a new producer instance
- #166 Resource leak in KafkaProducer when reset to initial state in a CR
You find the SPL documentation online at https://ibmstreams.github.io/streamsx.kafka/docs/user/SPLDoc/
Kafka Toolkit v2.0.1
What's new in this toolkit release
This toolkit release fixes the following issue:
- #171 Resetting from checkpoint will fail when sequence id is >1000