This topic includes new content added in version {page-component-version}. For a complete list of all product updates, see the Redpanda release notes. See also:
- redpanda-cloud:get-started:whats-new-cloud.adoc
- Redpanda Cloud vs Self-Managed feature compatibility
Cloud Topics are now available, making it possible to use durable cloud storage (S3, ADLS, GCS) as the primary backing store instead of local disk, eliminating over 90% of cross-AZ replication costs. This makes them ideal for latency-tolerant, high-throughput workloads such as observability streams, analytics pipelines, and AI/ML training data feeds, where cross-AZ networking charges are the dominant cost driver.
You can use Cloud Topics exclusively in Redpanda Streaming clusters, or combine them with traditional Tiered Storage and local storage topics on a shared cluster that also supports low-latency workloads.
Cloud Topics require Tiered Storage and an Enterprise license. For setup instructions and limitations, see develop:manage-topics/cloud-topics.adoc.
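As a quick illustration, a cloud-backed topic can be created by setting the storage mode at creation time. This is a sketch: the topic name is illustrative, and the cluster must already meet the Tiered Storage and license requirements above.

```bash
# Create a topic backed by cloud object storage rather than local disk
# (topic name is illustrative; see the setup guide for full instructions)
rpk topic create observability-logs \
  --topic-config redpanda.storage.mode=cloud
```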
Redpanda {page-component-version} introduces group-based access control (GBAC), which extends OIDC authentication to support group-based permissions. In addition to assigning roles or ACLs to individual users, you can assign them to OIDC groups. Users inherit permissions from all groups reported by their identity provider (IdP) in the OIDC token claims.
GBAC supports two authorization patterns:
- Assign a group as a member of an RBAC role so that all users in the group inherit the role's ACLs.
- Create ACLs directly with a `Group:<name>` principal.
Group membership is managed entirely by your IdP. Redpanda reads group information from the OIDC token at authentication time, and group-based authorization works across the Kafka API, Schema Registry, and HTTP Proxy.
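For example, the second pattern (an ACL bound directly to a group principal) might look like the following. The group and topic names are illustrative; the flags follow rpk's ACL syntax.

```bash
# Grant read access on topic "orders" to every user whose IdP token
# reports membership in the "analysts" group (names are illustrative)
rpk security acl create \
  --allow-principal "Group:analysts" \
  --operation read \
  --topic orders
```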
Redpanda’s cryptographic module has been upgraded from FIPS 140-2 to FIPS 140-3 validation. Additionally, Redpanda now provides a FIPS-specific Docker image (`docker.redpanda.com/redpandadata/redpanda:<version>-fips`) for amd64 and arm64 architectures, with the required OpenSSL FIPS module pre-configured.
[NOTE]
====
If you are upgrading with FIPS mode enabled, ensure all SASL/SCRAM user passwords are at least 14 characters before upgrading. FIPS 140-3 enforces stricter HMAC key size requirements.
====
See manage:security/fips-compliance.adoc for configuration details.
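The 14-character minimum above amounts to a simple length check you could run against your user inventory before upgrading. The helper below is a hypothetical sketch, not a Redpanda tool:

```shell
# Sketch of a pre-upgrade check: FIPS 140-3 HMAC key-size rules require
# SASL/SCRAM passwords of at least 14 characters.
check_fips_password() {
  if [ "${#1}" -ge 14 ]; then
    echo "ok"
  else
    echo "too short: ${#1} chars (need 14)"
  fi
}

check_fips_password "hunter2"                  # fails the check
check_fips_password "a-much-longer-passphrase" # passes the check
```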
Redpanda now supports additional JSON Schema patterns when translating to Iceberg tables:
- `$ref` support: Internal references using `$ref` (for example, `"$ref": "#/definitions/myType"`) are resolved from schema resources declared in the same document. External references are not yet supported.
- Map type from `additionalProperties`: `additionalProperties` objects that contain subschemas now translate to Iceberg `map<string, T>`.
- `oneOf` nullable pattern: The `oneOf` keyword is now supported for the standard nullable pattern if exactly one branch is `{"type": "null"}` and the other is a non-null schema.
See Specify Iceberg Schema for JSON types mapping and updated requirements.
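An illustrative schema combining all three newly supported patterns (field names are examples): under the rules above, `tags` resolves through an internal `$ref` to an `additionalProperties` object and would translate to an Iceberg `map<string, string>`, while `note` matches the `oneOf` nullable pattern and would become a nullable string.

```json
{
  "type": "object",
  "definitions": {
    "labels": {
      "type": "object",
      "additionalProperties": { "type": "string" }
    }
  },
  "properties": {
    "tags": { "$ref": "#/definitions/labels" },
    "note": { "oneOf": [ { "type": "null" }, { "type": "string" } ] }
  }
}
```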
Leader Pinning now supports the `ordered_racks` configuration value, which lets you specify preferred racks in priority order. Unlike `racks`, which distributes leaders uniformly across all listed racks, `ordered_racks` places leaders in the highest-priority available rack and fails over to subsequent racks only when higher-priority racks become unavailable.
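A sketch of the priority-ordered form, assuming leader pinning is configured through the `default_leaders_preference` cluster property (the rack names are illustrative):

```bash
# Prefer leaders in us-east-1a; fail over to us-east-1b, then us-east-1c,
# only when a higher-priority rack is unavailable
rpk cluster config set default_leaders_preference \
  "ordered_racks:us-east-1a,us-east-1b,us-east-1c"
```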
Redpanda now supports throughput quotas based on authenticated user principals. Unlike client-based quotas (which rely on self-declared `client-id` values), user-based quotas enforce limits using verified identities from SASL, mTLS, or OIDC authentication.
You can set quotas for individual users, default users, or fine-grained user/client combinations. See manage:cluster-maintenance/about-throughput-quotas.adoc for conceptual details, and Set user-based quotas to get started.
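As an illustration, a per-user produce limit might be set with `rpk cluster quotas`. This is a sketch: the user name and byte rate are examples, and the exact entity syntax for user quotas may differ from what is shown here.

```bash
# Limit the authenticated user "alice" to ~1 MiB/s produce throughput
# (entity name and value are illustrative)
rpk cluster quotas alter \
  --name user=alice \
  --add producer_byte_rate=1048576
```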
Remote Read Replica topics on AWS can be deployed in a different region from the origin cluster’s S3 bucket. This enables cross-region disaster recovery and data locality scenarios while maintaining the read-only replication model.
To create cross-region Remote Read Replica topics, configure dynamic upstreams that point to the origin cluster’s S3 bucket location. Redpanda manages the number of concurrent dynamic upstreams based on your `cloud_storage_url_style` setting (`virtual_host` or `path` style).
See Remote Read Replicas for setup instructions and configuration details.
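For reference, a read replica topic is created against the origin cluster's bucket via the `redpanda.remote.readreplica` topic property; the names below are illustrative, and cross-region use additionally requires the dynamic upstream configuration described above.

```bash
# Create a read-only replica topic sourced from the origin cluster's
# S3 bucket (topic and bucket names are illustrative)
rpk topic create logs-replica \
  --topic-config redpanda.remote.readreplica=origin-cluster-bucket
```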
When continuous partition balancing is enabled, Redpanda can automatically decommission brokers that remain unavailable for a configured duration. The `partition_autobalancing_node_autodecommission_timeout_sec` property triggers permanent broker removal, unlike `partition_autobalancing_node_availability_timeout_sec`, which only moves partitions temporarily.
Key characteristics:
- Disabled by default
- Requires `partition_autobalancing_mode` set to `continuous`
- Permanently removes the node from the cluster (the node cannot rejoin automatically)
- Processes one decommission at a time to maintain cluster stability
- Manual intervention required if decommission stalls
See manage:cluster-maintenance/continuous-data-balancing.adoc for configuration details.
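Enabling the feature amounts to setting two cluster properties; the 30-minute timeout below is purely illustrative:

```bash
# Auto-decommission requires continuous balancing mode
rpk cluster config set partition_autobalancing_mode continuous

# Permanently remove brokers unavailable for more than 30 minutes
# (value is illustrative; disabled by default)
rpk cluster config set \
  partition_autobalancing_node_autodecommission_timeout_sec 1800
```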
Storage mode:
- `default_redpanda_storage_mode`: Set the default storage mode for new topics (`local`, `tiered`, `cloud`, or `unset`)
- `redpanda.storage.mode`: Set the storage mode for an individual topic, superseding the legacy `redpanda.remote.read` and `redpanda.remote.write` properties
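For example, the cluster default and a per-topic override might be set as follows (topic name and mode values are illustrative):

```bash
# Make tiered storage the default for all new topics
rpk cluster config set default_redpanda_storage_mode tiered

# Override the storage mode for one existing topic
# (supersedes redpanda.remote.read and redpanda.remote.write)
rpk topic alter-config my-topic --set redpanda.storage.mode=cloud
```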
Cloud Topics:
[NOTE]
====
Cloud Topics requires an Enterprise license. For more information, contact Redpanda sales.
====
- `cloud_topics_allow_materialization_failure`: Enable recovery from missing L0 extent objects
- `cloud_topics_compaction_interval_ms`: Interval for background compaction
- `cloud_topics_compaction_key_map_memory`: Maximum memory per shard for compaction key-offset maps
- `cloud_topics_compaction_max_object_size`: Maximum size for L1 objects produced by compaction
- `cloud_topics_epoch_service_max_same_epoch_duration`: Maximum duration a node can use the same epoch
- `cloud_topics_fetch_debounce_enabled`: Enable fetch debouncing
- `cloud_topics_gc_health_check_interval`: L0 garbage collector health check interval
- `cloud_topics_l1_indexing_interval`: Byte interval for index entries in long-term storage objects
- `cloud_topics_long_term_file_deletion_delay`: Delay before deleting stale long-term files
- `cloud_topics_long_term_flush_interval`: Interval for flushing long-term storage metadata to object storage
- `cloud_topics_metastore_lsm_apply_timeout_ms`: Timeout for applying replicated writes to the LSM database
- `cloud_topics_metastore_replication_timeout_ms`: Timeout for L1 metastore Raft replication
- `cloud_topics_num_metastore_partitions`: Number of partitions for the metastore topic
- `cloud_topics_parallel_fetch_enabled`: Enable parallel fetching
- `cloud_topics_preregistered_object_ttl`: Time-to-live for pre-registered L1 objects
- `cloud_topics_produce_no_pid_concurrency`: Concurrent Raft requests for producers without a producer ID
- `cloud_topics_produce_write_inflight_limit`: Maximum in-flight write requests per shard
- `cloud_topics_reconciliation_max_interval`: Maximum reconciliation interval for adaptive scheduling
- `cloud_topics_reconciliation_max_object_size`: Maximum size for L1 objects produced by the reconciler
- `cloud_topics_reconciliation_min_interval`: Minimum reconciliation interval for adaptive scheduling
- `cloud_topics_reconciliation_parallelism`: Maximum concurrent objects built by reconciliation per shard
- `cloud_topics_reconciliation_slowdown_blend`: Blend factor for slowing down reconciliation
- `cloud_topics_reconciliation_speedup_blend`: Blend factor for speeding up reconciliation
- `cloud_topics_reconciliation_target_fill_ratio`: Target fill ratio for L1 objects
- `cloud_topics_upload_part_size`: Part size for multipart uploads
- `cloud_topics_epoch_service_epoch_increment_interval`: Interval for cluster epoch incrementation
- `cloud_topics_epoch_service_local_epoch_cache_duration`: Cache duration for local epoch data
- `cloud_topics_long_term_garbage_collection_interval`: Interval for long-term storage garbage collection
- `cloud_topics_produce_batching_size_threshold`: Object size threshold that triggers upload
- `cloud_topics_produce_cardinality_threshold`: Partition cardinality threshold that triggers upload
- `cloud_topics_produce_upload_interval`: Time interval that triggers upload
- `cloud_topics_reconciliation_interval`: Interval for moving data from short-term to long-term storage
- `cloud_topics_short_term_gc_backoff_interval`: Backoff interval for short-term storage garbage collection
- `cloud_topics_short_term_gc_interval`: Interval for short-term storage garbage collection
- `cloud_topics_short_term_gc_minimum_object_age`: Minimum age for objects to be eligible for short-term garbage collection
Object storage:
- `cloud_storage_gc_max_segments_per_run`: Maximum number of log segments to delete from object storage during each housekeeping run
- `cloud_storage_prefetch_segments_max`: Maximum number of small segments to prefetch during sequential reads
Authentication:
- `nested_group_behavior`: Control how Redpanda handles nested groups extracted from authentication tokens
- `oidc_group_claim_path`: JSON path to extract groups from the JWT payload
- `schema_registry_enable_qualified_subjects`: Enable parsing of qualified subject syntax in Schema Registry
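To illustrate `oidc_group_claim_path`: for a decoded token payload shaped like the example below, a claim path of `realm_access.groups` would select the two group names. The payload shape and claim names are illustrative, not a required IdP format.

```json
{
  "sub": "alice",
  "iss": "https://idp.example.com",
  "realm_access": {
    "groups": ["analysts", "platform-eng"]
  }
}
```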
Other:
- `delete_topic_enable`: Enable or disable topic deletion via the Kafka DeleteTopics API
- `internal_rpc_request_timeout_ms`: Default timeout for internal RPC requests between nodes
- `log_compaction_max_priority_wait_ms`: Maximum time a priority partition (such as `__consumer_offsets`) waits before preempting regular compaction
- `partition_autobalancing_node_autodecommission_timeout_sec`: Duration a node must be unavailable before Redpanda automatically decommissions it
- `log_compaction_tx_batch_removal_enabled`: Changed from `false` to `true`.
- `tls_v1_2_cipher_suites`: Changed from OpenSSL cipher names to IANA cipher names.
The following deprecated configuration properties have been removed in v26.1.1. If you have any of these in your configuration files, update them according to the guidance below.
RPC timeout properties:
Replace with `internal_rpc_request_timeout_ms`.

- `alter_topic_cfg_timeout_ms`
- `create_topic_timeout_ms`
- `metadata_status_wait_timeout_ms`
- `node_management_operation_timeout_ms`
- `recovery_append_timeout_ms`
- `rm_sync_timeout_ms`
- `tm_sync_timeout_ms`
- `wait_for_leader_timeout_ms`
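Migrating amounts to removing the old per-operation keys from your configuration and setting the single consolidated timeout, for example (the value is illustrative):

```bash
# The removed per-operation RPC timeouts are consolidated into one property
rpk cluster config set internal_rpc_request_timeout_ms 10000
```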
Client throughput quota properties:
Use `rpk cluster quotas` to manage client throughput limits.

- `kafka_admin_topic_api_rate`
- `kafka_client_group_byte_rate_quota`
- `kafka_client_group_fetch_byte_rate_quota`
- `target_fetch_quota_byte_rate`
- `target_quota_byte_rate`
Quota balancer properties:
Use broker-wide throughput limit properties.
- `kafka_quota_balancer_min_shard_throughput_bps`
- `kafka_quota_balancer_min_shard_throughput_ratio`
- `kafka_quota_balancer_node_period`
- `kafka_quota_balancer_window`
- `kafka_throughput_throttling_v2`
Timestamp alert properties:
- `log_message_timestamp_alert_after_ms`: Use `log_message_timestamp_after_max_ms`
- `log_message_timestamp_alert_before_ms`: Use `log_message_timestamp_before_max_ms`
Other removed properties:
No replacement needed. These properties were deprecated placeholders that Redpanda already silently ignored; if they remain in your configuration files after the upgrade, they continue to have no effect.
- `cloud_storage_disable_metadata_consistency_checks`
- `cloud_storage_reconciliation_ms`
- `coproc_max_batch_size`
- `coproc_max_inflight_bytes`
- `coproc_max_ingest_bytes`
- `coproc_offset_flush_interval_ms`
- `datalake_disk_space_monitor_interval`
- `enable_admin_api`
- `enable_coproc`
- `find_coordinator_timeout_ms`
- `full_raft_configuration_recovery_pattern`
- `id_allocator_replication`
- `kafka_memory_batch_size_estimate_for_fetch`
- `log_compaction_adjacent_merge_self_compaction_count`
- `max_version`
- `min_version`
- `raft_max_concurrent_append_requests_per_follower`
- `raft_recovery_default_read_size`
- `rm_violation_recovery_policy`
- `schema_registry_protobuf_renderer_v2`
- `seed_server_meta_topic_partitions`
- `seq_table_min_size`
- `tm_violation_recovery_policy`
- `transaction_coordinator_replication`
- `tx_registry_log_capacity`
- `tx_registry_sync_timeout_ms`
- `use_scheduling_groups`