*Original: [https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#whats-new-part](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#whats-new-part)*

This section covers the changes made from version 2.2 to version 2.3. Also see [What's New in Spring Integration for Apache Kafka (version 3.2)](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#new-in-sik).

### **2.1.1. Tips, Tricks and Examples**
*****
A new chapter of tips, tricks and examples has been added.

### **2.1.2. Kafka Client Version**
*****
This version requires kafka-clients 2.3.0 or higher.

### **2.1.3. Class/Package Changes**
*****
TopicPartitionInitialOffset is deprecated; use TopicPartitionOffset instead.

### **2.1.4. Configuration Changes**
*****
Starting with version 2.3.4, the missingTopicsFatal container property defaults to false. When it was true, the application failed to start if the broker was down, and many users were affected by this change. Given that Kafka is a high-availability platform, we did not anticipate that starting an application with no active brokers would be a common use case.

### **2.1.5. Producer and Consumer Factory Changes**
*****
The DefaultKafkaProducerFactory can now be configured to create a producer per thread. You can also provide Supplier<Serializer> instances in the constructor as an alternative to either configured classes (which require no-arg constructors), or constructing with Serializer instances, which are then shared between all Producers. See Using DefaultKafkaProducerFactory for more information. The same option is available with Supplier<Deserializer> instances in DefaultKafkaConsumerFactory. See Using KafkaMessageListenerContainer for more information.

### **2.1.6. Listener Container Changes**
*****
Previously, error handlers received ListenerExecutionFailedException (with the actual listener exception as the cause) when the listener was invoked using a listener adapter (such as @KafkaListener s). Exceptions thrown by native GenericMessageListener s were passed to the error handler unchanged. Now a ListenerExecutionFailedException is always the argument (with the actual listener exception as the cause), which provides access to the container's group.id property.
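A minimal sketch of what this change enables: because the argument is now always a ListenerExecutionFailedException, a custom error handler can read the container's group.id and unwrap the real cause. The class and topic names here are illustrative, not from the reference documentation.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.ErrorHandler;
import org.springframework.kafka.listener.ListenerExecutionFailedException;

// Illustrative error handler: the container now always wraps listener
// exceptions, so the group.id and the original cause are both accessible.
public class GroupAwareErrorHandler implements ErrorHandler {

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> record) {
        if (thrownException instanceof ListenerExecutionFailedException) {
            ListenerExecutionFailedException lefe =
                    (ListenerExecutionFailedException) thrownException;
            System.err.println("Listener in group " + lefe.getGroupId()
                    + " failed at offset " + record.offset()
                    + ": " + lefe.getCause());
        }
    }
}
```

The handler would be set on the container (or container factory) via setErrorHandler, as before; only the exception type passed to it has been unified.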
Because the listener container has its own mechanism for committing offsets, it prefers the Kafka ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG to be false. It now sets it to false automatically unless specifically set in the consumer factory or the container's consumer property overrides.

The ackOnError property is now false by default. See Seek To Current Container Error Handlers for more information.

It is now possible to obtain the consumer's group.id property in the listener method. See Obtaining the Consumer group.id for more information.

The container has a new property recordInterceptor allowing records to be inspected or modified before invoking the listener. A CompositeRecordInterceptor is also provided in case you need to invoke multiple interceptors. See Message Listener Containers for more information.

The ConsumerSeekAware has new methods allowing you to perform seeks relative to the beginning, end, or current position and to seek to the first offset greater than or equal to a time stamp. A convenience class AbstractConsumerSeekAware is now provided to simplify seeking. See Seeking to a Specific Offset for more information.

The ContainerProperties provides an idleBetweenPolls option to let the main loop in the listener container sleep between KafkaConsumer.poll() calls. See its JavaDocs and Using KafkaMessageListenerContainer for more information.

When using AckMode.MANUAL (or MANUAL_IMMEDIATE) you can now cause a redelivery by calling nack on the Acknowledgment. See Committing Offsets for more information.

Listener performance can now be monitored using Micrometer Timer s. See Monitoring Listener Performance for more information.

The containers now publish additional consumer lifecycle events relating to startup. See Application Events for more information.

Transactional batch listeners can now support zombie fencing. See Transactions for more information.
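The manual nack() redelivery mentioned above can be sketched as follows, assuming a container factory configured with AckMode.MANUAL; the topic, group, and processing logic are illustrative.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;

// Illustrative listener: on failure, nack() discards remaining fetched
// records and re-seeks so the failed record is redelivered.
public class OrderListener {

    @KafkaListener(topics = "orders", groupId = "order-group")
    public void listen(ConsumerRecord<String, String> record, Acknowledgment ack) {
        try {
            process(record.value());
            ack.acknowledge();   // commit the offset as usual
        }
        catch (RuntimeException ex) {
            ack.nack(1000);      // sleep 1s, then redeliver this record
        }
    }

    private void process(String value) {
        // business logic (placeholder)
    }
}
```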
The listener container factory can now be configured with a ContainerCustomizer to further configure each container after it has been created and configured. See Container factory for more information.

### **2.1.7. ErrorHandler Changes**
*****
The SeekToCurrentErrorHandler now treats certain exceptions as fatal and disables retry for those, invoking the recoverer on first failure.

The SeekToCurrentErrorHandler and SeekToCurrentBatchErrorHandler can now be configured to apply a BackOff (thread sleep) between delivery attempts. Starting with version 2.3.2, recovered records' offsets will be committed when the error handler returns after recovering a failed record. See Seek To Current Container Error Handlers for more information.

The DeadLetterPublishingRecoverer, when used in conjunction with an ErrorHandlingDeserializer2, now sets the payload of the message sent to the dead-letter topic to the original value that could not be deserialized. Previously, it was null and user code needed to extract the DeserializationException from the message headers. See Publishing Dead-letter Records for more information.

### **2.1.8. TopicBuilder**
*****
A new class TopicBuilder is provided for more convenient creation of NewTopic @Bean s. See Configuring Topics for more information.

### **2.1.9. Kafka Streams Changes**
*****
You can now perform additional configuration of the StreamsBuilderFactoryBean created by @EnableKafkaStreams. See [Streams Configuration](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#streams-config) for more information.

A RecoveringDeserializationExceptionHandler is now provided, which allows records with deserialization errors to be recovered. It can be used in conjunction with a DeadLetterPublishingRecoverer to send these records to a dead-letter topic. See [Recovery from Deserialization Exceptions](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#streams-deser-recovery) for more information.

A HeaderEnricher transformer is provided, using SpEL to generate the header values. See [Header Enricher](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#streams-header-enricher) for more information.

A MessagingTransformer is provided. This allows a Kafka Streams topology to interact with Spring Messaging components, such as a Spring Integration flow. See [MessagingTransformer and calling a Spring Integration flow from a KStream](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#streams-integration) for more information.

### **2.1.10. JSON Component Changes**
*****
Now all the JSON-aware components are configured by default with a Jackson ObjectMapper produced by JacksonUtils.enhancedObjectMapper(). The JsonDeserializer now provides TypeReference-based constructors for better handling of target generic container types. Also a JacksonMimeTypeModule has been introduced for serialization of org.springframework.util.MimeType to plain string. See its JavaDocs and Serialization, Deserialization, and Message Conversion for more information.

A ByteArrayJsonMessageConverter has been provided as well as a new super class for all Json converters, JsonMessageConverter. Also, a StringOrBytesSerializer is now available; it can serialize byte[], Bytes and String values in ProducerRecord s. See Spring Messaging Message Conversion for more information.

The JsonSerializer, JsonDeserializer and JsonSerde now have fluent APIs to make programmatic configuration simpler. See the javadocs, Serialization, Deserialization, and Message Conversion, and Streams JSON Serialization and Deserialization for more information.

### **2.1.11. ReplyingKafkaTemplate**
*****
When a reply times out, the future is completed exceptionally with a KafkaReplyTimeoutException instead of a KafkaException. In addition, an overloaded sendAndReceive method is now provided that allows specifying the reply timeout on a per-message basis.

### **2.1.12. AggregatingReplyingKafkaTemplate**
*****
Extends the ReplyingKafkaTemplate by aggregating replies from multiple receivers. See [Aggregating Multiple Replies](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#aggregating-request-reply) for more information.

### **2.1.13. Transaction Changes**
*****
You can now override the producer factory's transactionIdPrefix on the KafkaTemplate and KafkaTransactionManager. See [transactionIdPrefix](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#transaction-id-prefix) for more information.

### **2.1.14. New Delegating Serializer/Deserializer**
*****
A delegating serializer and deserializer are provided, utilizing a header to enable producing and consuming records with multiple key/value types. See [Delegating Serializer and Deserializer](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#delegating-serialization) for more information.

### **2.1.15. New Retrying Deserializer**
*****
A RetryingDeserializer is provided, to retry deserialization when transient errors such as network problems occur. See [Retrying Deserializer](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#retrying-deserialization) for more information.

### **2.1.16. New function for recovering from deserializing errors**
*****
ErrorHandlingDeserializer2 now uses a POJO (FailedDeserializationInfo) for passing all the contextual information around a deserialization error. This enables the code to access extra information that was missing in the old BiFunction<byte[], Headers, T> failedDeserializationFunction.

### **2.1.17. EmbeddedKafkaBroker Changes**
*****
You can now override the default broker list property name in the annotation. See [@EmbeddedKafka Annotation or EmbeddedKafkaBroker Bean](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#kafka-testing-embeddedkafka-annotation) for more information.

### **2.1.18. ReplyingKafkaTemplate Changes**
*****
You can now customize the header names used for correlation, the reply topic, and the reply partition. See [Using ReplyingKafkaTemplate](https://docs.spring.io/spring-kafka/docs/2.3.4.RELEASE/reference/html/#replying-template) for more information.

### **2.1.19. Header Mapper Changes**
*****
The DefaultKafkaHeaderMapper no longer encodes simple string-valued headers as JSON.
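As an illustration of the ReplyingKafkaTemplate changes above (section 2.1.11), the following sketch uses the overloaded sendAndReceive with a per-message reply timeout. It assumes a ReplyingKafkaTemplate<String, String, String> bean is already configured; the topic name is illustrative.

```java
import java.time.Duration;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;

// Illustrative request/reply client using the per-message reply timeout.
public class RequestReplyClient {

    private final ReplyingKafkaTemplate<String, String, String> template;

    public RequestReplyClient(ReplyingKafkaTemplate<String, String, String> template) {
        this.template = template;
    }

    public String request(String payload) throws Exception {
        ProducerRecord<String, String> record = new ProducerRecord<>("requests", payload);
        // The Duration overload overrides the template's default reply timeout
        // for this one message; on timeout the future completes exceptionally
        // with KafkaReplyTimeoutException rather than a plain KafkaException.
        RequestReplyFuture<String, String, String> future =
                template.sendAndReceive(record, Duration.ofSeconds(5));
        return future.get().value();
    }
}
```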