From a672b3116abc8bd4c168c9001eba87014f695cc8 Mon Sep 17 00:00:00 2001 From: Georgi Donev <32678835+g-donev@users.noreply.github.com> Date: Mon, 11 May 2026 13:54:21 +0300 Subject: [PATCH 1/5] Adding What's New --- docs/modules/ROOT/pages/whats-new.adoc | 90 ++++++++++++++------------ 1 file changed, 49 insertions(+), 41 deletions(-) diff --git a/docs/modules/ROOT/pages/whats-new.adoc b/docs/modules/ROOT/pages/whats-new.adoc index 653a0ec4d..975f2378e 100644 --- a/docs/modules/ROOT/pages/whats-new.adoc +++ b/docs/modules/ROOT/pages/whats-new.adoc @@ -1,75 +1,83 @@ = What's New in Hazelcast Platform -:description: Here are the highlights of what's new and improved in Hazelcast Platform 5.5. -[[whats-new]] - -NOTE: The What's New page for Hazelcast Platform {version} will be available when this version is released. For the last major release, see below. +:description: Here are the highlights of what's new and improved in Hazelcast Platform 5.7, including Management Center 5.11 and Platform Operator for Kubernetes 5.17 updates. +:page-aliases: whats-new.adoc {description} -== Get instant answers with new Hazelcast Ask AI +Hazelcast Platform 5.7 strengthens the operational foundations that mission-critical systems rely on when real-time data has to be both fast and correct. The release advances the Jet streaming engine and the CP Subsystem, adds support for Java 25, brings dynamic diagnostic logging to general availability in Management Center 5.11, and extends Kubernetes-native operations through Platform Operator 5.17. + +== Production-hardened streaming microservices -On every docs page you can now click the *Ask AI* button in the bottom right and get instant answers to all your questions about Hazelcast Platform, and our tools and clients. 
Ask AI is powered by the entire suite of Hazelcast documentation, including the latest docs from docs.hazelcast.com, the various API docs microsites, the latest official blogs, and a bunch of code samples and support knowledgebase articles. +A growing number of Hazelcast customers are replacing sprawls of request-response microservices and separate data grids with chains of isolated, stateful Jet jobs running on a single cluster. Platform 5.7 makes that pattern operationally resilient at enterprise scale, with four specific improvements to the Jet streaming engine: -image:Ask_AI_JDK.png[Ask AI example] +* *User Code Namespaces* give each pipeline stage classloader isolation. Different stages can use different library versions, and any one stage can be hot-swapped without disturbing the others. +* *Stage-to-stage handoff* through `MapJournal` and `IMap` now exposes first-class backpressure metrics and explicit signals when events are at risk of being overwritten. Inter-stage event loss becomes visible rather than silent. +* *Per-processor state counts* are exposed, so operators can see memory pressure building before it causes a failure. +* *Automatic job recovery* through routine cluster changes, including member replacement and rolling upgrades, removes manual operator intervention. Jobs configured with processing guarantees can be stopped, upgraded and resumed without data loss. -Give it a try now - for more information, see xref:ask-ai.adoc[]. +Together, these changes mean a failure in one stage does not cascade into the others. Each stage can fail, restart, or be upgraded independently, giving teams the isolation of microservices with the operational surface of a single platform. -== New Vector Collection for building semantic search (BETA) -[.enterprise]*Enterprise* +For more information, see xref:pipelines:overview.adoc[Building Data Pipelines] and xref:clusters:user-code-namespaces.adoc[User Code Namespaces]. 
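The per-stage isolation described above rests on User Code Namespaces configured on the members. As a hedged sketch of what that member configuration can look like in YAML (the namespace names, resource ids, and paths are illustrative, and the exact schema should be verified against the User Code Namespaces reference):

```yaml
hazelcast:
  user-code-namespaces:
    enabled: true
    # One namespace per pipeline stage, each carrying its own library version,
    # so the JAR in one namespace can be swapped without touching the other stage.
    stage-enrichment:                 # illustrative namespace name
      - id: enrichment-lib            # illustrative resource id
        resource-type: JAR
        url: "file:///opt/hazelcast/namespaces/enrichment-2.1.0.jar"
    stage-scoring:
      - id: scoring-lib
        resource-type: JAR
        url: "file:///opt/hazelcast/namespaces/scoring-1.4.2.jar"
```

Each data structure or job is then associated with one of these namespaces, so classloading for one stage never leaks into another.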
-With the introduction of a new data structure specifically designed for vector search, Hazelcast Platform 5.5 adds functionality to help you build and deliver vector search capabilities, and improve the quality and accuracy of search results. Indexes are based on the JVector library, which implements a DiskANN algorithm for similarity search and provides exceptional performance. +== CP Leader Auto Step-Down -A Hazelcast vector database engine is a specialized type of database, which is optimized for storing, searching, and managing vector embeddings and additional metadata. You can include values in the metadata to provide filtering, additional processing, or analysis of vectors. The primary object for interacting with vector storage is a Vector Collection. A Vector Collection holds information about the vectors and associated metadata (user values). Use with Jet Job placement control (see below) to dedicate resources for vector embedding and optimizing search performance. +[.enterprise]*Enterprise* -image:data-structures:vector-search-components.png[High-level overview of vector search components] +In a distributed cluster, every strongly consistent operation pays the latency cost of reaching the CP leader, wherever the cluster has placed it. A leader in the wrong data center means every transaction across every service pays that latency tax. -For more info on vector collections, see xref:data-structures:vector-search-overview.adoc[Data Structure Design]. +CP Leader Auto Step-Down in Platform 5.7 lets operators declare that specific members must never hold CP group leadership. Any such member elected as leader immediately triggers a new election, keeping leadership close to the workloads it serves. The feature is implemented without modifying the Raft protocol itself: leadership changes flow through the standard consensus mechanism, which preserves the CP Subsystem's linearizable guarantees and no-data-loss property throughout.
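Concretely, the CP Subsystem already exposes a per-member leadership bias via `CPSubsystemConfig#setCPMemberPriority`; a keep-leadership-away policy is the kind of setting that sits alongside it. A hedged sketch in YAML, assuming the key matching that setter (verify the exact key, and the dedicated step-down flag, against the 5.7 CP Subsystem configuration reference):

```yaml
hazelcast:
  cp-subsystem:
    cp-member-count: 5
    # A lower priority makes this member less likely to hold CP group leadership.
    # Key name assumed from CPSubsystemConfig#setCPMemberPriority; check your version.
    cp-member-priority: -1
```

Setting the lower priority on members in the remote data center keeps leadership co-located with the latency-sensitive workloads.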
-== Distribute your workload with Jet Job placement control +The CP member list in Management Center 5.11 clearly identifies flagged members, so operators can see at a glance which members are excluded from leadership. -Compute Isolation drives greater efficiency and performance by moving the compute closer to the data to improve performance and reduce network overheads. With 5.5 your Jet processing jobs can now be distributed across a defined subset of the cluster, which means you can distribute your workload to meet your business and regulatory requirements. You can configure Jet processing jobs so that they run on lite members only, allowing you to split your computational and storage requirements without the need to configure each job separately. You control the members to use for your Jet job processing on a job-by-job basis. +NOTE: CP Leader Auto Step-Down requires the `ADVANCED_CP` feature to be enabled in the license key. -For more info, including how to use the 'JobBuilder' API, see xref:pipelines:job-placement-control.adoc[]. +For more information, see xref:cp-subsystem:cp-subsystem.adoc[CP Subsystem]. -== Multi-member routing for Java clients +== Java 25 support -Geographically dispersed or 'stretched' clusters are not as resilient as standard HA clusters, and also suffer from lower throughput, longer recovery and longer operations. With the release of Hazelcast Platform 5.5, you can use multi-member routing to provide greater performance and stability for Java client applications connecting to geographically dispersed clusters. This means much faster transaction re-routing in the event of an outage. You can implement strong consistency on a stretch cluster and reduce connection overheads and improve overall performance. You can easily enable client multi-member routing using your existing client network configuration and get these benefits, all with zero downtime. 
+Platform 5.7 adds support for Java 25, the latest Java LTS release, with no regressions against the prior LTS baseline across Oracle JDK, OpenJDK and the other JVM vendors Hazelcast supports. Benchmarks show no performance degradation, and in some workloads a measurable improvement. Management Center 5.11 is validated for Java 25 as well. Previously supported Java versions continue to work with Platform 5.7, so adopting this release does not force a JVM upgrade. -Before 5.5, Hazelcast clients could only have one of two types of connection to a cluster: single member (also known as unisocket) or all member (also known as smart). With a unisocket connection the client only connected to a single cluster member, which meant all communication was relayed via the connected cluster member and could therefore suffer from performance issues. With an all member connection the client connects to every member in the cluster and is partition-aware, which means that communications are sent directly to the cluster member holding the required data. If you're using partition groups to isolate data to specific cluster members, there was previously no way for the client to have visibility into this configuration, so it could not connect as efficiently as possible, or adapt to failovers. For example, the CP Subsystem uses a group leader - clients would not be aware of this and there would be an extra internal relay stage rather than sending directly to the group leader alone. +== .NET and C++ client performance -image:ROOT:client-routing.png[Hazelcast Cluster Routing diagram] +The Hazelcast .NET and C++ clients deliver significant performance improvements in 5.7. For detailed benchmark comparisons, see the client release notes. -For more info, see xref:clients:java.adoc#client-cluster-routing-modes[Client cluster routing modes].
+== Hazelcast Platform Release Notes -== Feast feature store integration -Release 5.5 includes integration of Feast (**Fea**ture **St**ore), an open-source feature store that can be used on your existing infrastructure to manage and serve ML features to real-time models. A feature store is a central repository where features can be stored and processed for reuse or sharing. When integrated with Hazelcast, you can benefit from an online store that supports materializing feature values in a running Hazelcast cluster. +For detailed release notes for Enterprise Edition and Community Edition that include new features and enhancements, breaking changes, deprecations and other fixes, see xref:release-notes:releases.adoc[Release Notes]. -Feast can help to decouple ML from your data infrastructure. This can be useful in a number of ways; for example, to allow you to move between model types, such as training models to serving models, or from one data infrastructure system to another. With Feast you use the following in your ML models: +To evaluate Hazelcast Enterprise Edition features, you can https://hazelcast.com/trial-request/?utm_source=docs-website[request a trial license key]. -* Historical data to support predictions that allow scaling to improve model performance -* Real-time data to support data-driven insights -* Pre-computed features that can be served online +== Management Center 5.11 -For more info, see xref:integrate:integrate-with-feast.adoc[]. +Management Center 5.11 focuses on operational reliability, cloud-native deployment support, and security hardening: -== Dynamic configuration using REST API -[.enterprise]*Enterprise* +* *Dynamic diagnostic logging is now generally available.* Operators can enable, configure and disable diagnostics at runtime, on any member, without a cluster restart - removing what used to be one of the most painful steps in diagnosing live production issues. 
+* *CP Leader Auto Step-Down* is visible in the CP member list, so operators can see which members are excluded from leadership. +* *Automatic reconnection* when a cluster comes back online removes the manual step previously required after planned or unplanned restarts. +* *JVM version details* are exposed in the Members table, making mixed-version deployments and Java rollout verification straightforward. +* *Idempotent `mc-conf` security commands* support GitOps patterns where the same configuration script is applied repeatedly. +* *Additional Near Cache invalidation metrics* are exposed through the Prometheus exporter. +* *Security hardening:* an authorization-bypass issue found during penetration testing has been resolved, the MC server version can be suppressed from HTTP response headers, and routine dependency security updates have been applied. -Hazelcast now provides a REST API that allows you to access your data structures and cluster using HTTP/HTTPS protocols. You can interact with the API using various tools and platforms such as cURL, REST clients (like Postman and Insomnia), or programming languages with HTTP request libraries or built-in support. The new REST API comes with an integrated Swagger UI for learning about the API and trying out API calls. +For more information, see the https://docs.hazelcast.com/management-center/5.11/release-notes/releases[Management Center 5.11 Release Notes]. -For more info, including tutorials for Java and Docker, see xref:maintain-cluster:enterprise-rest-api.adoc[]. +== Hazelcast Platform Operator for Kubernetes 5.17 -== Release Notes +Operator 5.17 extends CRD coverage and closes several gaps in Kubernetes-managed deployments. -For detailed release notes that include new features and enhancements, breaking changes, deprecations and other fixes, see 5.5 Release Notes. 
+Configuration surface improvements: -To evaluate Hazelcast {enterprise-product-name} features, you can https://hazelcast.com/trial-request/?utm_source=docs-website[request a trial license key]. -To install Hazelcast {enterprise-product-name}, see xref:getting-started:install-hazelcast.adoc[]. +* Custom `log4j2` configuration can now be supplied via the Hazelcast CR, bringing Operator-managed deployments in line with the logging customization available in standalone installations. +* The Management Center Prometheus exporter is now configurable through the Operator rather than requiring separate MC configuration. +* `envFrom` is supported, allowing environment variables to be injected from ConfigMaps or Secrets using the standard Kubernetes pattern. +* Near Cache `serialize-keys` is now exposed in the CRD. -== Hazelcast Command Line Client (CLC) +Deployment pattern improvements: -Support added for CPMap data structures, including `cpmap` commands and advanced script functions. +* Additional JARs can be deployed via the sidecar agent, extending the module deployment capability introduced in 5.16. +* Kubernetes labels on Operator-managed resources can now be modified after initial deployment. +* Management Center instance names are now namespace-qualified, which prevents name collisions when instances are deployed across multiple namespaces. -For detailed release notes that include new features and fixes, see xref:clc:ROOT:release-notes-5.4.0.adoc[Hazelcast CLC 5.4.0]. +IMPORTANT: Operator 5.17 fixes a TLS certificate rotation issue that has been present since Operator 5.15, where clients continued to connect with expired or invalid certificates after rotation. Customers running TLS on Operator 5.15 or 5.16 should prioritize this upgrade. In addition, configuration changes that should have triggered rolling updates but were silently ignored in earlier versions will now be applied correctly, so any pending changes will be applied on upgrade.
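The `envFrom` support mentioned above follows the core Kubernetes `EnvFromSource` shape. A sketch of a Hazelcast custom resource injecting environment variables from a ConfigMap and a Secret; the `spec.envFrom` placement and the resource names here are assumptions to be confirmed against the 5.17 CRD reference:

```yaml
apiVersion: hazelcast.com/v1alpha1
kind: Hazelcast
metadata:
  name: hazelcast
spec:
  clusterSize: 3
  envFrom:                       # assumed field placement; mirrors the standard pod-spec pattern
    - configMapRef:
        name: hazelcast-env      # illustrative ConfigMap name
    - secretRef:
        name: hazelcast-secrets  # illustrative Secret name
```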
-To get started with Hazelcast CLC, see xref:clc:ROOT:install-clc.adoc[Installing the Hazelcast CLC]. +For more information, see the https://docs.hazelcast.com/operator/5.17/release-notes[Operator 5.17 Release Notes]. From 1e8e5dcd526451194d64bc0158c5c539a86be11f Mon Sep 17 00:00:00 2001 From: Georgi Donev <32678835+g-donev@users.noreply.github.com> Date: Mon, 11 May 2026 16:11:49 +0300 Subject: [PATCH 2/5] Fix antora reference issue --- docs/modules/ROOT/pages/whats-new.adoc | 1 - 1 file changed, 1 deletion(-) diff --git a/docs/modules/ROOT/pages/whats-new.adoc b/docs/modules/ROOT/pages/whats-new.adoc index 975f2378e..1d9da96ed 100644 --- a/docs/modules/ROOT/pages/whats-new.adoc +++ b/docs/modules/ROOT/pages/whats-new.adoc @@ -1,6 +1,5 @@ = What's New in Hazelcast Platform :description: Here are the highlights of what's new and improved in Hazelcast Platform 5.7, including Management Center 5.11 and Platform Operator for Kubernetes 5.17 updates. -:page-aliases: whats-new.adoc {description} From bd6a9d55698ce73b17acda52e281f44b6f61d0a8 Mon Sep 17 00:00:00 2001 From: Georgi Donev <32678835+g-donev@users.noreply.github.com> Date: Mon, 11 May 2026 13:51:02 +0300 Subject: [PATCH 3/5] Adding 5.7.0 release notes (#2182) - **Prepare release 5.7.0** - **Adding 5.7.0 Enterprise release notes** - **Adding 5.7.0 community release notes** --- .../release-notes/pages/community.adoc | 124 +++++++++++++++-- .../release-notes/pages/enterprise.adoc | 127 +++++++++++++++++- 2 files changed, 234 insertions(+), 17 deletions(-) diff --git a/docs/modules/release-notes/pages/community.adoc b/docs/modules/release-notes/pages/community.adoc index 9d852f53e..45e091311 100644 --- a/docs/modules/release-notes/pages/community.adoc +++ b/docs/modules/release-notes/pages/community.adoc @@ -1,7 +1,6 @@ = Community Edition Release Notes :description: These release notes list any new features, enhancements, fixes, security issues and breaking changes that were made for Hazelcast Platform
{open-source-product-name}. - -IMPORTANT: This is a snapshot release - release notes for this release are not available yet. +:page-aliases: release-notes:5-7-0-community.adoc {description} @@ -9,34 +8,139 @@ NOTE: Hazelcast Platform {open-source-product-name} is available in major and mi For help downloading Hazelcast {open-source-product-name}, see xref:getting-started:install-hazelcast.adoc[]. -== x.y.z +== 5.7.0 + +**Release date**: 2026-05-12 -**Release date**: +This release also includes the {open-source-product-name} fixes and security updates that were delivered in the {enterprise-product-name} 5.6.1 maintenance release. -* <> * <> * <> * <> * <> * <> - -=== New features +* <> +* <> === Breaking changes -=== Known issues +* *Decoupled `hazelcast-spring` from Spring Boot autoconfiguration*: Removed the direct dependency on Spring Boot from `hazelcast-spring` to resolve a dependency cycle that prevented Spring Boot from upgrading to newer Hazelcast versions. As part of this change, Spring Boot autoconfiguration for exposing Hazelcast components has been moved to dedicated modules: `hazelcast-spring-boot3` and `hazelcast-spring-boot4`. Users relying on Spring Boot autoconfiguration must explicitly add the appropriate module to their dependencies to retain existing behavior. === Enhancements +* *Java 25 support*: Hazelcast Platform 5.7 supports Java 21 and Java 25. Adding Java 25 enables customers to adopt the latest JDK with full compatibility, ensuring alignment with evolving JVM features and allowing teams to standardize on newer runtimes without impacting cluster stability. ++ +IMPORTANT: When running Hazelcast on Java 25, the `SecurityManager` is not functional due to ecosystem changes in the JDK. Deployments that rely on `SecurityManager`-based security policies must run on Java 21. ++ +NOTE: Docker users who want to run Hazelcast on Java 21 instead of the default Java 25 must pull a JDK-tagged image. 
For example, `hazelcast/hazelcast:latest-jdk21` will resolve to 5.7 built with JDK 21. + +* *Dynamic Diagnostic Logs are now Generally Available*: First introduced as BETA in 5.6, Dynamic Diagnostic Logs are now GA. Operators can enable and adjust diagnostic logging at runtime without restarts, significantly improving on-demand troubleshooting and reducing the need for disruptive configuration changes in production environments. + +* *Migrated Kinesis connector to AWS SDK v2*: Replaced usage of the deprecated AWS SDK v1, which reached end of support, with the v2 SDK to eliminate deprecation warnings and ensure long-term compatibility and support; no user action is required unless relying on SDK v1-specific behavior. + +* *Improved User Code Namespace support for Jet IMap sinks using EntryProcessors*: Enhanced the behavior of `Sinks.mapWithMerging`, `mapWithUpdate`, and `mapWithEntryProcessor` to correctly resolve classes from the job's User Code Namespace (UCN) during deserialization. The job namespace is now checked first, with a fallback to the IMap namespace if needed, improving compatibility with custom job resources. + +* *Improved Jet backpressure metrics*: Resolved an issue where input queue sizes were reported incorrectly under backpressure, even when queues were full and jobs were stalled. Queue sizes are now updated during metric collection, providing better visibility into where backpressure occurs. + +* *Improved handling of concurrent Event Journal readers*: Resolved an issue where multiple readers of the same Event Journal, especially with different filters, could block each other, causing increased latency, misleading warnings, and occasional event loss. Readers are now unparked in a scheduled manner, ensuring independent progress, more accurate diagnostics, and more reliable event delivery. 
+ +* *Introduced extensible transformation mechanism for Pipeline API*: Added a new `using(...)` extension mechanism that enables type-safe, fluent extension of Pipeline transformations. This allows both built-in and user-defined extensions to add custom transformation methods across stream, batch, and keyed stages. + +* *Added persistence support for Jet job namespaces*: Namespaces used by Jet jobs are now persisted to Hot Restart storage during job startup and when updated by a running job. This ensures that namespaces survive lossless restarts, eliminating the need to re-add them. + +* *Added `INITIAL_SNAPSHOT_REQUIRED` to prevent data loss on job restarts*: Introduced a new job configuration option that forces Jet jobs with stateful sources (such as map journal or Kafka) to complete an initial snapshot before processing any data. This ensures that sources depending on initialization-time state do not reinitialize inconsistently after a restart, which could otherwise lead to data loss despite `exactly_once` or `at_least_once` guarantees. When enabled, the job only transitions to `RUNNING` after the initial snapshot completes successfully. + +* *Added immediate state eviction option for keyed stateful stream stages*: Introduced a new `deleteStatePredicate` parameter to the `mapStateful`, `flatMapStateful`, and `filterStateful` methods for keyed stream stages. This predicate allows the state associated with a key to be removed immediately after processing an event when the condition evaluates to true. When triggered, the state is deleted without invoking the `onEvictFn` callback, giving developers finer control over state lifecycle and memory usage in stateful stream processing. + +* *Added state size metric for `mapStateful` processors*: Introduced a new `totalStates` metric that reports the number of states maintained by each `mapStateful` processor. 
This helps users identify jobs where state growth may lead to excessive memory usage or potential out-of-memory conditions. + +* *Added MapFlush sink API for snapshot-driven persistence*: Introduced `EnterpriseSinks.mapFlushSink` to enable Jet pipelines to flush an IMap to its configured MapStore during snapshot phases. This is particularly useful for deduplication and other scenarios requiring consistent commit semantics to the backing store. + +* *Improved Map and Cache Event Journal cleanup to prevent memory leaks*: Enhanced the cleanup process for event journals to prevent unbounded heap growth on backup replicas. Cleanup now runs on both primary and backup replicas before add operations. A new cluster property, `hazelcast.journal.cleanup.threshold`, was also introduced to trigger cleanup when the journal's remaining capacity drops below a configured threshold. + === Fixes +* *Fixed failures and inconsistent results in `IMap.containsValue()` with TTL*: Resolved an issue where entry expiration during iteration could interfere with map traversal, causing operation failures and missed values. The fix ensures stable iteration by checking expiration instead of modifying entries during traversal, resulting in reliable results. + +* *Reduced CPU spike on Near Cache with TTL*: Resolved a performance issue affecting Near Caches when TTL or max-idle was configured. Previously, the expiration task scanned all cache entries every 5 seconds, leading to excessive CPU usage for large caches with long TTLs. The fix introduces a time-bound expiration cycle that limits CPU consumption per run and resumes from the last examined entry in the next cycle if needed. This preserves the existing guarantee that expired entries are never returned, while significantly reducing unnecessary CPU load. 
+ +* *Fixed missing Near Cache invalidation on TTL expiration for Java client*: Resolved an issue where entries expired via TTL in map/cache did not trigger invalidation events for the Java client Near Cache, leading to stale data. + +* *Fixed incorrect handling of `serializeKeys` in dynamically added Near Cache configurations*: Resolved an issue where `serializeKeys=true` was ignored and stored as `false`. During rolling upgrades, re-adding such configurations may fail due to this mismatch; in these cases, use `serializeKeys=false` to match the existing state. + +* *Fixed validation for duplicate advanced network endpoint configurations (XML)*: Resolved an issue where XML configuration allowed multiple declarations of advanced network endpoint sections (such as `member-server`, `client-server`, REST, and memcache socket endpoint configs), even though only a single declaration is supported. + +* *Fixed non-functional operation timeout metrics*: Resolved an issue where the `operation.callTimeoutCount` and `operation.operationTimeoutCount` metrics were listed in the documentation but never incremented, making them ineffective. The fix restores the increment logic for `operation.callTimeoutCount`, ensuring it correctly tracks invocation call timeouts. The `operation.operationTimeoutCount` metric has been deprecated since its functionality is already covered by `InvocationMonitor` timeout metrics and will be removed in a future major release. + +* *Fixed initialization failure handling and dependency issues in `GenericMapStore`*: Improved error handling during initialization to ensure failures are correctly propagated, and removed an unnecessary `slf4j` classpath requirement. Additionally, the `columns` property now automatically trims leading and trailing whitespace. + +* *Fixed endpoint resolution for nonstandard AWS regions*: Resolved an issue where incorrect AWS domains were used for regions such as China and ISO partitions. 
The fix ensures endpoints are resolved using the correct domain based on the region, improving compatibility across AWS environments. + +* *Added shared `HttpClient` cache for `RestClient`*: Introduced a static cache to enable reuse of `HttpClient` instances across multiple `RestClient` instances, reducing overhead from repeated client creation. + +* *Improved resilience of Jet jobs during member shutdown or topology changes*: Resolved an issue where Jet jobs could fail permanently if certain topology-related exceptions occurred during job startup, such as `HazelcastInstanceNotActiveException`, when a member stopped or restarted. Previously, these exceptions caused the job to enter a failed state without recovery. The fix ensures that such conditions trigger a job restart instead, improving reliability during cluster changes. + +* *Fixed race condition during Jet job initialization and termination with restart*: Resolved a race condition between job initialization and a termination request with restart that could cause an `AssertionError` (mode is null). This could occur when a termination request was received while the job was still starting, leading to inconsistent internal state and job failure. The fix ensures that termination requests received during initialization are properly handled by aborting initialization and scheduling the restart through the standard job restart flow, preventing unexpected job failures and improving reliability during graceful member shutdown or scaling scenarios. + +* *Updated Jet job cancellation handling for JDK 23+ compatibility*: Resolved an issue where changes in JDK 23 caused `Job.join()` and `isUserCancelled()` to lose the root cause `CancellationByUserException`. + +* *Improved detection of event loss in Event Journal sources*: Event Journal sources for maps and caches now mark entries when events have been lost due to journal overlap (such as slow consumers). 
A new `isAfterLostEvents()` indicator allows Jet pipelines to detect and handle such cases, improving observability without adding overhead or additional events. + +* *Fixed database errors and potential data loss after cluster network recovery*: Resolved an issue where reconnecting nodes could cause duplicate database write operations when using a write-behind MapStore. During a split-brain recovery, the system could keep old background tasks running alongside new ones, leading to database conflicts and lost updates. The fix ensures that old tasks are properly stopped during the merge process. This guarantees that only a single process writes to the database, ensuring reliable data persistence and preventing database errors. + +* *Improved validation for null client cluster name configuration*: Resolved an issue where setting a `null` cluster name via `ClientConfig#setClusterName(null)` resulted in a `NullPointerException` during client connection with an unclear error message. The fix introduces fail-fast validation that throws an `IllegalArgumentException` when a `null` cluster name is configured, helping users identify the configuration problem earlier and with a clearer error message. + +* *Fixed premature database writes for the first entry in write-behind maps*: Resolved an issue where the first item added to a write-behind map could be flushed to the external data store before the configured write delay had expired. This rare behavior occurred when the internal queue initialization and the initial data insertion happened within the exact same millisecond, causing a timing collision. The fix ensures that the configured write delay is strictly respected for all entries, preventing unexpected early database writes and maintaining consistent write-behind behavior. + +* *Fixed cluster data migration failures if TRACE logging was enabled*: Resolved an issue where the cluster could get stuck while moving data between nodes. 
This occurred because an internal error could be triggered when the system attempted to write diagnostic logs at the exact same time as other background tasks were completing. The fix ensures that the internal logging mechanism safely handles simultaneous operations, allowing data migrations to complete smoothly and reliably. + === Deprecations +* *`operation.operationTimeoutCount` metric*: Deprecated because its functionality is already covered by `InvocationMonitor` timeout metrics. It will be removed in a future major release. + === Security -* **Security Fix for CVE-XXX-YYYY:** Resolved https://nvd.nist.gov/vuln/detail/CVE-XXX-YYYY[CVE-XXX-YYYY] +* *Enforced permission checks for IMap projection and aggregation operations*: New `IMap.project()` and `IMap.aggregate()` permissions have been introduced and these operations are rejected if the corresponding permissions are not granted. *Action required*: users with fine-grained security must update their configurations to explicitly include these permissions where needed, otherwise these operations will fail. -=== Contributors +* *Added configurable class restrictions for Zero Config Compact Serialization (ZCCS)*: Introduced optional allowlist/blocklist controls using `JavaSerializationFilterConfig` to restrict which classes can be used with ZCCS, mitigating risks from unsafe deserialization. Users are encouraged to configure restrictions to improve security ahead of stricter defaults in a future major release. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-33870[CVE-2026-33870] and https://nvd.nist.gov/vuln/detail/CVE-2026-33871[CVE-2026-33871] in Netty*: Fixed vulnerabilities by upgrading the Netty dependency. 
+ +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-22740[CVE-2026-22740], https://nvd.nist.gov/vuln/detail/CVE-2026-34483[CVE-2026-34483], https://nvd.nist.gov/vuln/detail/CVE-2026-34486[CVE-2026-34486], https://nvd.nist.gov/vuln/detail/CVE-2026-34487[CVE-2026-34487], and https://nvd.nist.gov/vuln/detail/CVE-2026-40973[CVE-2026-40973] in Spring Boot*: Fixed vulnerabilities by upgrading the Spring Boot dependency. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-34478[CVE-2026-34478], https://nvd.nist.gov/vuln/detail/CVE-2026-34480[CVE-2026-34480], and https://nvd.nist.gov/vuln/detail/CVE-2026-34481[CVE-2026-34481] in Apache Log4j*: Addressed potential security risks related to improper request handling and input processing. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-42198[CVE-2026-42198] in PostgreSQL JDBC driver*: Fixed a vulnerability by upgrading the `pgjdbc` dependency. + +* *Enforced stricter checks on classes in SQL*: Resolved an issue where restrictions on classnames used in SQL mappings and types were not checked in some cases. +* *Enforced checks on classes in `JsonUtil` deserialization*: Some classes (unlikely to be used) are no longer allowed to be deserialized using `JsonUtil`. Customers using the `com.hazelcast.jet.json.JsonUtil` class are recommended to review the Javadoc for more secure alternative methods. + +* *Enforced checks on classes used in client protocol*: In some situations it was possible to instantiate arbitrary classes in client protocol error conditions. Strict filtering has been implemented to prevent this issue. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2025-33042[CVE-2025-33042] in Avro*: Fixed a vulnerability by upgrading the Avro and Parquet dependencies. + +* *Security Advisory regarding Elasticsearch 7*: https://nvd.nist.gov/vuln/detail/CVE-2025-66566[CVE-2025-66566] has been identified in Elasticsearch 7. 
As Elasticsearch 7 is currently End of Life (EOL), no upstream fix is available from the vendor. We strongly recommend that users evaluate the security risks associated with this CVE. Support for Elasticsearch 7 will be removed in a future version of Hazelcast. + +* *Fixed unrestricted attribute access during query lookups*: Resolved an issue where query lookups could access attributes that should not be exposed during query evaluation. The fix adds configurable restrictions on which attributes can be accessed, improving security and giving users more control over query behavior. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2025-12183[CVE-2025-12183] in LZ4 Java*: Fixed a vulnerability by upgrading the LZ4 Java dependency. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2025-59419[CVE-2025-59419] in Netty*: Fixed a vulnerability by upgrading the Netty dependency. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-22731[CVE-2026-22731], https://nvd.nist.gov/vuln/detail/CVE-2026-22733[CVE-2026-22733], https://nvd.nist.gov/vuln/detail/CVE-2026-22737[CVE-2026-22737], and https://nvd.nist.gov/vuln/detail/CVE-2026-22735[CVE-2026-22735] in Spring Boot*: Fixed vulnerabilities by upgrading the Spring Boot dependency. + +=== Known issues + +* *Deeply nested JSON objects on JDK 25*: When running on JDK 25, extremely deeply nested JSON objects (approaching the ~1000 nesting depth limit) may trigger a `StackOverflowError`, causing the operation to fail. This is due to changes in the JDK and not caused by Hazelcast code changes in this release. As a workaround, increase the JVM stack size using the `-Xss` startup flag. 
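To illustrate the stack-size workaround, the recursion below is a stdlib-only stand-in for deeply recursive JSON parsing; the depth and the 64 MB stack size are illustrative assumptions for this sketch, not values taken from Hazelcast:

```java
public class BigStackDemo {
    // Illustrative stand-in for deeply recursive parsing: one stack frame per nesting level.
    static int depth(int n) {
        return n == 0 ? 0 : 1 + depth(n - 1);
    }

    // Runs the recursion on a thread created with an explicit stack size --
    // the per-thread equivalent of starting the JVM with a larger -Xss value.
    public static int computeOnBigStack(int levels) throws InterruptedException {
        final int[] result = new int[1];
        Thread worker = new Thread(null, () -> result[0] = depth(levels),
                "big-stack", 64L * 1024 * 1024); // 64 MB stack for this thread only
        worker.start();
        worker.join();
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        // A depth this large typically overflows a default-sized thread stack,
        // but completes on the larger stack.
        System.out.println(computeOnBigStack(100_000));
    }
}
```

The same effect can be applied process-wide with a startup flag such as `java -Xss64m`.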
+
+* *`@ExposeHazelcastObjects` with dependent Hazelcast configuration or instance beans*: When running an application with Spring integration and `@ExposeHazelcastObjects` (explicit or implicit via autoconfiguration), beans with Hazelcast `Config` and/or `HazelcastInstance` cannot rely on dependencies such as classes marked as `@ConfigurationProperties`. The system incorrectly triggers early initialization of those beans before `@ConfigurationProperties` are injected.
+
+=== Contributors
+
+We would like to thank the contributors from our open source community
+who worked on this release:
+
+* https://github.com/Connor-Roche-Fidelis[Connor-Roche-Fidelis]
+* https://github.com/Haricshore[Haricshore]
+* https://github.com/snicoll[snicoll]
diff --git a/docs/modules/release-notes/pages/enterprise.adoc b/docs/modules/release-notes/pages/enterprise.adoc
index 6ceff5757..fa3943df5 100644
--- a/docs/modules/release-notes/pages/enterprise.adoc
+++ b/docs/modules/release-notes/pages/enterprise.adoc
@@ -1,17 +1,15 @@
= Enterprise Edition Release Notes
:description: These release notes list any new features, enhancements, fixes, security issues and breaking changes that were made for Hazelcast Platform {enterprise-product-name}.
:page-enterprise: true
-:page-aliases:
-
-IMPORTANT: This is a snapshot release - release notes for this release are not available yet.
+:page-aliases: release-notes:5-7-0.adoc

{description}

For help downloading Hazelcast {enterprise-product-name}, see xref:getting-started:install-enterprise.adoc[] or https://hazelcast.com/trial-request/?utm_source=docs-website[request a trial license key].
-== x.y.z +== 5.7.0 -**Release date**: +**Release date**: 2026-05-12 * <> * <> @@ -19,19 +17,134 @@ For help downloading Hazelcast {enterprise-product-name}, see xref:getting-start * <> * <> * <> +* <> +* <> === New features +* *CP Leader Auto Step-Down*: CP members can now be configured to automatically step down (abdicate) from leadership if elected, ensuring CP group leaders remain on preferred, low-latency "leader-capable" members. This reduces the risk of performance degradation and instability caused by high-latency leaders while preserving Raft fault-tolerance requirements during normal operations. The change is observable via logs and metrics. + +For more details on new features, see xref:ROOT:whats-new.adoc[What's new]. + === Breaking changes -=== Known issues +* *Decoupled `hazelcast-spring` from Spring Boot autoconfiguration*: Removed the direct dependency on Spring Boot from `hazelcast-spring` to resolve a dependency cycle that prevented Spring Boot from upgrading to newer Hazelcast versions. As part of this change, Spring Boot autoconfiguration for exposing Hazelcast components has been moved to dedicated modules: `hazelcast-spring-boot3` and `hazelcast-spring-boot4`. Users relying on Spring Boot autoconfiguration must explicitly add the appropriate module to their dependencies to retain existing behavior. === Enhancements +* *Java 25 support*: Hazelcast Platform 5.7 supports Java 21 and Java 25. Adding Java 25 enables customers to adopt the latest JDK with full compatibility, ensuring alignment with evolving JVM features and allowing teams to standardize on newer runtimes without impacting cluster stability. ++ +IMPORTANT: When running Hazelcast on Java 25, the `SecurityManager` is not functional due to ecosystem changes in the JDK. ++ +NOTE: Docker users who want to run Hazelcast on Java 21 instead of the default Java 25 must pull a JDK-tagged image. For example, `hazelcast/hazelcast:latest-jdk21` will resolve to 5.7 built with JDK 21. 
++
+NOTE: The Hazelcast Platform Operator defaults to Java 25 unless the deployment is pinned to a specific Java version.
+
+* *Dynamic Diagnostic Logs are now Generally Available*: First introduced as BETA in 5.6, Dynamic Diagnostic Logs are now GA. Operators can enable and adjust diagnostic logging at runtime without restarts, significantly improving on-demand troubleshooting and reducing the need for disruptive configuration changes in production environments.
+
+* *Improved visibility for WAN Replication queue overflow and dropped events*: Enhanced observability for WAN Replication scenarios where queues become full and events are dropped. Member log messages for "queue full" conditions now include the affected Map/Cache name and the corresponding WAN Publisher, making it easier to identify the source of dropped events. Management Center surfaces alerts on the WAN Replication page to indicate when queues are full and entries are dropped, including the relevant queue and Map/Cache details.
+
+* *Improved Native OutOfMemoryError (OOME) diagnostics*: Native OOME messages from `PoolingMemoryManager` now include detailed fragmentation and memory distribution statistics, such as maximum page fragmentation, thread memory imbalance, unusable free memory, and total allocated (including external) memory. This makes it easier to identify whether allocation failures are caused by fragmentation, uneven thread-local memory usage, or actual memory exhaustion.
+
+* *Improved Native OutOfMemoryError handling for HD IMap with JSON keys and values*: Resolved an issue where a Native OOME during insertion into an HD IMap using `HazelcastJsonValue` keys and values could lead to inconsistent behavior, such as incorrect exceptions being reported or native memory not being fully released. The fix ensures that the Native OOME is handled and reported correctly, prevents unintended double-free scenarios, and keeps native memory accounting and system behavior consistent.
+ +* *Added Jackson 3 support alongside Jackson 2*: Introduced Jackson 3.x to enable support for records with JSON annotations, while retaining Jackson 2.x on the classpath for compatibility with existing dependencies. + +* *Migrated Kinesis connector to AWS SDK v2*: Replaced usage of the deprecated AWS SDK v1, which reached end of support, with the v2 SDK to eliminate deprecation warnings and ensure long-term compatibility and support; no user action is required unless relying on SDK v1-specific behavior. + +* *Improved User Code Namespace support for Jet IMap sinks using EntryProcessors*: Enhanced the behavior of `Sinks.mapWithMerging`, `mapWithUpdate`, and `mapWithEntryProcessor` to correctly resolve classes from the job's User Code Namespace (UCN) during deserialization. The job namespace is now checked first, with a fallback to the IMap namespace if needed, improving compatibility with custom job resources. + +* *Improved Jet backpressure metrics*: Resolved an issue where input queue sizes were reported incorrectly under backpressure, even when queues were full and jobs were stalled. Queue sizes are now updated during metric collection, providing better visibility into where backpressure occurs. + +* *Improved handling of concurrent Event Journal readers*: Resolved an issue where multiple readers of the same Event Journal, especially with different filters, could block each other, causing increased latency, misleading warnings, and occasional event loss. Readers are now unparked in a scheduled manner, ensuring independent progress, more accurate diagnostics, and more reliable event delivery. + +* *Introduced extensible transformation mechanism for Pipeline API*: Added a new `using(...)` extension mechanism that enables type-safe, fluent extension of Pipeline transformations. This allows both built-in and user-defined extensions to add custom transformation methods across stream, batch, and keyed stages. 
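The shape of this mechanism can be sketched in plain Java; the `Stage`, `UpperCaseExtension`, and method names below are illustrative assumptions to show the pattern, not the actual Pipeline API:

```java
import java.util.function.Function;

// A minimal stage that can hand itself to a caller-supplied extension.
final class Stage<T> {
    final T value;
    Stage(T value) { this.value = value; }

    // The extension hook: adapt this stage into a wrapper that
    // contributes extra fluent transformation methods.
    <E> E using(Function<Stage<T>, E> extensionFactory) {
        return extensionFactory.apply(this);
    }
}

// A user-defined extension adding a custom transformation method.
final class UpperCaseExtension {
    private final Stage<String> stage;
    UpperCaseExtension(Stage<String> stage) { this.stage = stage; }

    Stage<String> upperCase() {
        return new Stage<>(stage.value.toUpperCase());
    }
}

public class UsingPatternDemo {
    public static void main(String[] args) {
        Stage<String> result = new Stage<>("hazelcast")
                .using(UpperCaseExtension::new) // type-safe handoff to the extension
                .upperCase();                   // method contributed by the extension
        System.out.println(result.value);
    }
}
```

The design keeps the core stage types closed while letting extensions surface their own fluent vocabulary, which is the essence of the type-safe `using(...)` hook described above.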
+ +* *Added persistence support for Jet job namespaces*: Namespaces used by Jet jobs are now persisted to Hot Restart storage during job startup and when updated by a running job. This ensures that namespaces survive lossless restarts, eliminating the need to re-add them. + +* *Added `INITIAL_SNAPSHOT_REQUIRED` to prevent data loss on job restarts*: Introduced a new job configuration option that forces Jet jobs with stateful sources (such as map journal or Kafka) to complete an initial snapshot before processing any data. This ensures that sources depending on initialization-time state do not reinitialize inconsistently after a restart, which could otherwise lead to data loss despite `exactly_once` or `at_least_once` guarantees. When enabled, the job only transitions to `RUNNING` after the initial snapshot completes successfully. + +* *Added immediate state eviction option for keyed stateful stream stages*: Introduced a new `deleteStatePredicate` parameter to the `mapStateful`, `flatMapStateful`, and `filterStateful` methods for keyed stream stages. This predicate allows the state associated with a key to be removed immediately after processing an event when the condition evaluates to true. When triggered, the state is deleted without invoking the `onEvictFn` callback, giving developers finer control over state lifecycle and memory usage in stateful stream processing. + +* *Added state size metric for `mapStateful` processors*: Introduced a new `totalStates` metric that reports the number of states maintained by each `mapStateful` processor. This helps users identify jobs where state growth may lead to excessive memory usage or potential out-of-memory conditions. + +* *Added MapFlush sink API for snapshot-driven persistence*: Introduced `EnterpriseSinks.mapFlushSink` to enable Jet pipelines to flush an IMap to its configured MapStore during snapshot phases. 
This is particularly useful for deduplication and other scenarios requiring consistent commit semantics to the backing store. + === Fixes +* *Fixed failures and inconsistent results in `IMap.containsValue()` for maps in HD Memory with TTL*: Resolved an issue where entry expiration during iteration could interfere with map traversal, causing operation failures and missed values. The fix ensures stable iteration by checking expiration instead of modifying entries during traversal, resulting in reliable results. + +* *Reduced CPU spike on Near Cache with HD and TTL*: Resolved a performance issue affecting Near Caches using on-heap in-memory format when TTL or max-idle was configured. Previously, the expiration task scanned all cache entries every 5 seconds, leading to excessive CPU usage for large caches with long TTLs. The fix introduces a time-bound expiration cycle that limits CPU consumption per run and resumes from the last examined entry in the next cycle if needed. This preserves the existing guarantee that expired entries are never returned, while significantly reducing unnecessary CPU load. + +* *Fixed missing Near Cache invalidation on TTL expiration for Java client*: Resolved an issue where entries expired via TTL in map/cache did not trigger invalidation events for the Java client Near Cache, leading to stale data. + +* *Fixed incorrect handling of `serializeKeys` in dynamically added Near Cache configurations*: Resolved an issue where `serializeKeys=true` was ignored and stored as `false`. During rolling upgrades, re-adding such configurations may fail due to this mismatch; in these cases, use `serializeKeys=false` to match the existing state. 
+ +* *Fixed validation for duplicate advanced network endpoint configurations (XML)*: Resolved an issue where XML configuration allowed multiple declarations of advanced network endpoint sections (such as `member-server`, `client-server`, REST, and memcache socket endpoint configs), even though only a single declaration is supported. + +* *Fixed non-functional operation timeout metrics*: Resolved an issue where the `operation.callTimeoutCount` and `operation.operationTimeoutCount` metrics were listed in the documentation but never incremented, making them ineffective. The fix restores the increment logic for `operation.callTimeoutCount`, ensuring it correctly tracks invocation call timeouts. The `operation.operationTimeoutCount` metric has been deprecated since its functionality is already covered by `InvocationMonitor` timeout metrics and will be removed in a future major release. + +* *Fixed initialization failure handling and dependency issues in `GenericMapStore`*: Improved error handling during initialization to ensure failures are correctly propagated, and removed an unnecessary `slf4j` classpath requirement. Additionally, the `columns` property now automatically trims leading and trailing whitespace. + +* *Fixed endpoint resolution for nonstandard AWS regions*: Resolved an issue where incorrect AWS domains were used for regions such as China and ISO partitions. The fix ensures endpoints are resolved using the correct domain based on the region, improving compatibility across AWS environments. + +* *Added shared `HttpClient` cache for `RestClient`*: Introduced a static cache to enable reuse of `HttpClient` instances across multiple `RestClient` instances, reducing overhead from repeated client creation. 
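The reuse idea can be illustrated with the JDK's own `java.net.http.HttpClient`; this is a sketch of the pattern, not Hazelcast's internal `RestClient` code, and the class and method names here are assumptions:

```java
import java.net.http.HttpClient;
import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HttpClientCache {
    // One shared HttpClient per connect-timeout setting, created lazily.
    private static final Map<Duration, HttpClient> CACHE = new ConcurrentHashMap<>();

    public static HttpClient forTimeout(Duration connectTimeout) {
        return CACHE.computeIfAbsent(connectTimeout,
                timeout -> HttpClient.newBuilder().connectTimeout(timeout).build());
    }

    public static void main(String[] args) {
        HttpClient first = forTimeout(Duration.ofSeconds(5));
        HttpClient second = forTimeout(Duration.ofSeconds(5));
        // Same configuration resolves to the same cached instance, avoiding rebuild cost.
        System.out.println(first == second);
    }
}
```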
+ +* *Fixed log format inconsistency when Enterprise REST is enabled*: Resolved an issue where enabling the Enterprise REST endpoint caused Hazelcast to emit log entries in two conflicting formats — the default Hazelcast format and the Spring-based format introduced by the REST layer. The `hazelcast.logging.type` setting now takes precedence over Spring Boot's logging system. When Hazelcast runs in embedded mode in the same JVM as Spring Boot with the REST API enabled, Spring Boot's logging system is automatically aligned with `hazelcast.logging.type`. To keep a different Spring Boot logging system, set `org.springframework.boot.logging.LoggingSystem` explicitly. + +* *Fixed CP Raft log loading failure caused by incomplete chunked snapshots*: Resolved an issue where partially written snapshot chunks could be treated as valid Raft log files, leading to errors during state reload. The fix ensures incomplete files are ignored and logged, and previous log files are only deleted once a complete snapshot is successfully persisted, improving recovery reliability with no user action required. + +* *Fixed failure during CP snapshot restore when upgrading from 5.5 to 5.6*: Resolved an issue where restoring Raft snapshots created in 5.5 could fail in 5.6 due to a missing conversion step in the snapshot processing path, causing repeated errors and blocking upgrades. + +* *Improved resilience of Jet jobs during member shutdown or topology changes*: Resolved an issue where Jet jobs could fail permanently if certain topology-related exceptions occurred during job startup, such as `HazelcastInstanceNotActiveException`, when a member stopped or restarted. Previously, these exceptions caused the job to enter a failed state without recovery. The fix ensures that such conditions trigger a job restart instead, improving reliability during cluster changes. 
+ +* *Fixed race condition during Jet job initialization and termination with restart*: Resolved a race condition between job initialization and a termination request with restart that could cause an `AssertionError` (mode is null). This could occur when a termination request was received while the job was still starting, leading to inconsistent internal state and job failure. The fix ensures that termination requests received during initialization are properly handled by aborting initialization and scheduling the restart through the standard job restart flow, preventing unexpected job failures and improving reliability during graceful member shutdown or scaling scenarios. + +* *Updated Jet job cancellation handling for JDK 23+ compatibility*: Resolved an issue where changes in JDK 23 caused `Job.join()` and `isUserCancelled()` to lose the root cause `CancellationByUserException`. + +* *Improved detection of event loss in Event Journal sources*: Event Journal sources for maps and caches now mark entries when events have been lost due to journal overlap (such as slow consumers). A new `isAfterLostEvents()` indicator allows Jet pipelines to detect and handle such cases, improving observability without adding overhead or additional events. + === Deprecations +* *Tiered Storage*: Tiered Storage is deprecated and will be removed in Hazelcast Platform 6.0. + +* *`operation.operationTimeoutCount` metric*: Deprecated because its functionality is already covered by `InvocationMonitor` timeout metrics. It will be removed in Hazelcast Platform 6.0. + === Security -* **Security Fix for CVE-XXX-YYYY:** Resolved https://nvd.nist.gov/vuln/detail/CVE-XXX-YYYY[CVE-XXX-YYYY]. +* *Improved permission checks for IMap ServiceFactory used in Jet pipelines*: Resolved an issue where some mutating IMap operations executed through the ServiceFactory did not enforce the required permissions. 
As a result, Jet jobs submitted by users without sufficient permissions may now fail when performing IMap operations that require permissions beyond read access. + +* *Fixed missing permission checks for `EntryProcessor` execution through `ExecutorService`*: Resolved an issue where `EntryProcessor` operations submitted via `submitToKey`, `submitToKeys`, or `executeOnKeys` from an `ExecutorService` callable only validated the `remove` permission, while the required `put` permission was not enforced. + +* *Enforced permission checks for IMap projection and aggregation operations*: New `IMap.project()` and `IMap.aggregate()` permissions have been introduced and these operations are rejected if the corresponding permissions are not granted. *Action required*: users with fine-grained security must update their configurations to explicitly include these permissions where needed, otherwise these operations will fail. + +* *Added configurable class restrictions for Zero Config Compact Serialization (ZCCS)*: Introduced optional allowlist/blocklist controls using `JavaSerializationFilterConfig` to restrict which classes can be used with ZCCS, mitigating risks from unsafe deserialization. Users are encouraged to configure restrictions to improve security ahead of stricter defaults in a future major release. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-33870[CVE-2026-33870] and https://nvd.nist.gov/vuln/detail/CVE-2026-33871[CVE-2026-33871] in Netty*: Fixed vulnerabilities by upgrading the Netty dependency. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-22740[CVE-2026-22740], https://nvd.nist.gov/vuln/detail/CVE-2026-34483[CVE-2026-34483], https://nvd.nist.gov/vuln/detail/CVE-2026-34486[CVE-2026-34486], https://nvd.nist.gov/vuln/detail/CVE-2026-34487[CVE-2026-34487], and https://nvd.nist.gov/vuln/detail/CVE-2026-40973[CVE-2026-40973] in Spring Boot*: Fixed vulnerabilities by upgrading the Spring Boot dependency. 
+ +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-34478[CVE-2026-34478], https://nvd.nist.gov/vuln/detail/CVE-2026-34480[CVE-2026-34480], and https://nvd.nist.gov/vuln/detail/CVE-2026-34481[CVE-2026-34481] in Apache Log4j*: Addressed potential security risks related to improper request handling and input processing. + +* *Resolved https://nvd.nist.gov/vuln/detail/CVE-2026-42198[CVE-2026-42198] in PostgreSQL JDBC driver*: Fixed a vulnerability by upgrading the `pgjdbc` dependency. + +* *Enforced stricter checks on classes in SQL*: Resolved an issue where restrictions on classnames used in SQL mappings and types were not checked in some cases. + +* *Enforced checks on classes in `JsonUtil` deserialization*: Some classes (unlikely to be used) are no longer allowed to be deserialized using `JsonUtil`. Customers using the `com.hazelcast.jet.json.JsonUtil` class are recommended to review the Javadoc for more secure alternative methods. + +* *Enforced checks on classes used in client protocol*: In some situations it was possible to instantiate arbitrary classes in client protocol error conditions. Strict filtering has been implemented to prevent this issue. + +=== Known issues + +* *Deeply nested JSON objects on JDK 25*: When running on JDK 25, extremely deeply nested JSON objects (approaching the ~1000 nesting depth limit) may trigger a `StackOverflowError`, causing the operation to fail. This is due to changes in the JDK and not caused by Hazelcast code changes in this release. As a workaround, increase the JVM stack size using the `-Xss` startup flag. + +* *`@ExposeHazelcastObjects` with dependent Hazelcast configuration or instance beans*: When running an application with Spring integration and `@ExposeHazelcastObjects` (explicit or implicit via autoconfiguration), beans with Hazelcast `Config` and/or `HazelcastInstance` cannot rely on dependencies such as classes marked as `@ConfigurationProperties`. 
The system incorrectly triggers early initialization of those beans before `@ConfigurationProperties` are injected. + +=== Contributors + +We would like to thank the contributors from our open source community +who worked on this release: + +* https://github.com/Connor-Roche-Fidelis[Connor-Roche-Fidelis] +* https://github.com/Haricshore[Haricshore] +* https://github.com/snicoll[snicoll] From f6e855963f1d79f87a8dc762ee1bfc7a4e1b099b Mon Sep 17 00:00:00 2001 From: Olga Shultseva <150694869+shultseva@users.noreply.github.com> Date: Mon, 11 May 2026 13:40:31 +0100 Subject: [PATCH 4/5] requireSnapshotBeforeProcessing (#2152) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit https://github.com/hazelcast/hazelcast-mono/pull/5956 https://hazelcast.atlassian.net/browse/CTT-996 --------- Co-authored-by: Krzysztof Jamróz <79092062+k-jamroz@users.noreply.github.com> --- docs/modules/fault-tolerance/pages/fault-tolerance.adoc | 9 +++++++++ docs/modules/integrate/pages/kafka-connector.adoc | 5 +++++ docs/modules/integrate/pages/map-connector.adoc | 6 ++++++ docs/modules/pipelines/pages/configuring-jobs.adoc | 5 +++++ 4 files changed, 25 insertions(+) diff --git a/docs/modules/fault-tolerance/pages/fault-tolerance.adoc b/docs/modules/fault-tolerance/pages/fault-tolerance.adoc index 5bd4e57e6..cfc527020 100644 --- a/docs/modules/fault-tolerance/pages/fault-tolerance.adoc +++ b/docs/modules/fault-tolerance/pages/fault-tolerance.adoc @@ -125,6 +125,15 @@ state accordingly. If the computation job stops and restarts, this state will be restored from the snapshot and then the source will replay `x1` and `x2`. The processor will think it got two new items. +=== Initial Snapshot + +Some stateful sources—such as map journal and Kafka—determine their initial reading position at the moment the job starts. 
+If a job restarts due to a failure before completing its first snapshot, this initial state may be reinitialized, in some cases to a different reading position.
+This can lead to data loss (for example, skipping records that were already produced but not yet observed by the job's source).
+To prevent this, enable `setRequireSnapshotBeforeProcessing(true)`
+in the xref:pipelines:configuring-jobs.adoc#_job_configuration_options[job configuration].
+
+
== Data Safety

=== In-Memory Snapshot Storage
diff --git a/docs/modules/integrate/pages/kafka-connector.adoc b/docs/modules/integrate/pages/kafka-connector.adoc
index 42054b799..4bbe19a37 100644
--- a/docs/modules/integrate/pages/kafka-connector.adoc
+++ b/docs/modules/integrate/pages/kafka-connector.adoc
@@ -81,6 +81,11 @@ Those offsets are used only when the job is started for the first time after sub
Afterwards, the regular fault tolerance mechanism described above is used.
This option is not supported when processing guarantees are disabled.

+The Kafka source is stateful: it remembers the last processed offset.
+To prevent data loss when using `at-least-once` or `exactly-once` job guarantees with a Kafka consumer configured to start from the `latest` offset,
+enable `setRequireSnapshotBeforeProcessing` in the xref:pipelines:configuring-jobs.adoc#_job_configuration_options[job configuration].
+For additional details, see xref:hazelcast:fault-tolerance:fault-tolerance.adoc#_fault_tolerance[Fault Tolerance].
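Enabling the guard from code might look like the following sketch (it assumes a running Hazelcast instance `hz` and an existing `pipeline` with a stateful source, and is shown for illustration rather than as a complete program):

```java
// Require a successful initial snapshot before any data is processed,
// so a restart cannot silently reinitialize the source's starting position.
JobConfig jobConfig = new JobConfig()
        .setProcessingGuarantee(ProcessingGuarantee.EXACTLY_ONCE)
        .setRequireSnapshotBeforeProcessing(true);
hz.getJet().newJob(pipeline, jobConfig);
```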
+

== Transactional Guarantees

As a sink, the Kafka connector provides exactly-once guarantees at the cost of using
diff --git a/docs/modules/integrate/pages/map-connector.adoc b/docs/modules/integrate/pages/map-connector.adoc
index 228af7775..028a65fed 100644
--- a/docs/modules/integrate/pages/map-connector.adoc
+++ b/docs/modules/integrate/pages/map-connector.adoc
@@ -117,6 +117,12 @@ For example, if there are many updates to just one key, with the
default partition count of `271` and journal size of `100,000` the journal
only has space for `370` events per partitions.

+The map journal source is stateful: it remembers the last processed offset.
+To prevent data loss with `at-least-once` or `exactly-once` job guarantees when using a map journal source set to `START_FROM_CURRENT`,
+enable `setRequireSnapshotBeforeProcessing` in the xref:pipelines:configuring-jobs.adoc#_job_configuration_options[job configuration].
+For additional details, see xref:hazelcast:fault-tolerance:fault-tolerance.adoc#_fault_tolerance[Fault Tolerance].
+
+
For a tutorial, see the xref:pipelines:stream-imap.adoc[].

== Map as a Sink
diff --git a/docs/modules/pipelines/pages/configuring-jobs.adoc b/docs/modules/pipelines/pages/configuring-jobs.adoc
index b11310aba..2aa6afcca 100644
--- a/docs/modules/pipelines/pages/configuring-jobs.adoc
+++ b/docs/modules/pipelines/pages/configuring-jobs.adoc
@@ -28,6 +28,11 @@ a|Depends on the source or sink.
|positive `INT`
|10000

+|requireSnapshotBeforeProcessing
+|When enabled, the job performs an initial snapshot before processing any data and transitions to `RUNNING` only after the snapshot completes successfully.
+|`boolean`
+|false
+
|autoScaling
|Enable jobs to scale automatically when new members are added to the cluster or existing members are removed from the cluster.
|`boolean`
From 77de14d7374186deb0254e653b391e7ed4f2496e Mon Sep 17 00:00:00 2001
From: Georgi Donev <32678835+g-donev@users.noreply.github.com>
Date: Tue, 12 May 2026 11:26:47 +0300
Subject: [PATCH 5/5] Adding Java 25 warnings

---
 docs/modules/ROOT/pages/whats-new.adoc | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/docs/modules/ROOT/pages/whats-new.adoc b/docs/modules/ROOT/pages/whats-new.adoc
index 1d9da96ed..8975cbb0d 100644
--- a/docs/modules/ROOT/pages/whats-new.adoc
+++ b/docs/modules/ROOT/pages/whats-new.adoc
@@ -34,7 +34,13 @@ For more information, see xref:cp-subsystem:cp-subsystem.adoc[CP Subsystem].

== Java 25 support

-Platform 5.7 adds support for Java 25, the latest Java LTS release, with no regressions against the prior LTS baseline across Oracle JDK, OpenJDK and the other JVM vendors Hazelcast supports. Benchmarks show no performance degradation, and in some workloads a measurable improvement. Management Center 5.11 is validated for Java 25 as well. Previously supported Java versions continue to work with Platform 5.7, so adopting this release does not force a JVM upgrade.
+Platform 5.7 adds support for Java 25, the latest Java LTS release, with no regressions against the prior LTS baseline across Oracle JDK, OpenJDK and the other JVM vendors Hazelcast supports. Benchmarks show no performance degradation, and in some workloads a measurable improvement. Management Center 5.11 is validated for Java 25 as well. The previously supported Java versions 17 and 21 continue to work with Platform 5.7, so adopting this release does not force a JVM upgrade.
+
+IMPORTANT: When running Hazelcast on Java 25, the `SecurityManager` is not functional due to ecosystem changes in the JDK.
+
+NOTE: Docker users who want to run Hazelcast on Java 21 instead of the default Java 25 must pull a JDK-tagged image. For example, `hazelcast/hazelcast:latest-jdk21` will resolve to 5.7 built with JDK 21.
+
+NOTE: The Hazelcast Platform Operator defaults to Java 25 unless the deployment is pinned to a specific Java version.

== .NET and C++ client performance