
[opt](config) add max_bucket_num_per_partition config to limit bucket number#61576

Open
gavinchou wants to merge 1 commit into apache:master from gavinchou:gavin-limit-max-num-bucket

Conversation

@gavinchou
Contributor

Proposed changes

Add a new FE config max_bucket_num_per_partition to limit the maximum number of buckets when creating a table or adding a partition.

Changes:

  1. Add max_bucket_num_per_partition config in Config.java, defaulting to autobucket_max_buckets (128) for consistency.
  2. Add bucket number validation in DistributionDescriptor.validate() for CREATE TABLE scenario.
  3. Add bucket number validation in InternalCatalog.addPartition() for ALTER TABLE ADD PARTITION scenario.
  4. Add unit tests for the new validation logic.
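
The declaration in Config.java is not shown in this description; as a hypothetical sketch, following the `@ConfField` convention Doris FE configs use, it could look like the following (annotation attributes and placement may differ in the actual PR):

```java
// Maximum number of buckets allowed per partition when the bucket number
// is specified by the user; 0 or a negative value disables the check.
@ConfField(mutable = true, masterOnly = true)
public static int max_bucket_num_per_partition = 128; // same default as autobucket_max_buckets
```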

Behavior:

  • For user-specified buckets (CREATE TABLE / ALTER TABLE ADD PARTITION): if the bucket number exceeds this limit, the operation is rejected with a helpful error message.
  • For the auto-bucket feature (Dynamic Partition): the bucket number is automatically capped by autobucket_max_buckets; this behavior is unchanged.
  • Set the config to 0 or a negative value to disable the limit.
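
The behavior above can be summarized in a minimal runnable sketch. This is not the PR's actual code: `BucketLimitSketch.checkBucketNum` is a hypothetical stand-in for the checks added in DistributionDescriptor.validate() and InternalCatalog.addPartition(), and `maxBucketNumPerPartition` stands in for the FE config field.

```java
public class BucketLimitSketch {
    // stand-in for Config.max_bucket_num_per_partition
    static int maxBucketNumPerPartition = 128; // default follows autobucket_max_buckets

    static void checkBucketNum(int bucketNum) {
        if (maxBucketNumPerPartition <= 0) {
            return; // 0 or a negative value disables the limit
        }
        if (bucketNum > maxBucketNumPerPartition) {
            throw new IllegalArgumentException(String.format(
                    "Number of buckets (%d) exceeds the maximum allowed value (%d). "
                            + "Modify the FE config 'max_bucket_num_per_partition' "
                            + "to adjust this limit.",
                    bucketNum, maxBucketNumPerPartition));
        }
    }

    public static void main(String[] args) {
        checkBucketNum(100); // within the limit: accepted
        try {
            checkBucketNum(200); // over the limit: rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
        maxBucketNumPerPartition = 0; // disable the limit
        checkBucketNum(200); // now accepted
    }
}
```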

Error message example:

Number of buckets (200) exceeds the maximum allowed value (128). 
Generally, a large number of buckets is not needed. 
If you have a specific use case requiring more buckets, 
please review your schema design or modify the FE config 
'max_bucket_num_per_partition' to adjust this limit.

Test plan

  • Unit tests added in DistributionDescriptorTest.java
  • Verify CREATE TABLE with bucket number exceeding limit is rejected
  • Verify ALTER TABLE ADD PARTITION with bucket number exceeding limit is rejected
  • Verify auto-bucket is not affected by this limit
  • Verify setting config to 0 disables the limit
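
The test plan's cases can be sketched as plain assertions; this is a hypothetical stand-in for the real JUnit tests in DistributionDescriptorTest.java, with `allowed` restating the validation rule under test.

```java
public class BucketLimitTestSketch {
    // the rule: a bucket number is allowed when the limit is disabled (<= 0)
    // or when it does not exceed the limit
    static boolean allowed(int bucketNum, int limit) {
        return limit <= 0 || bucketNum <= limit;
    }

    public static void main(String[] args) {
        // bucket number over the limit: rejected
        if (allowed(200, 128)) throw new AssertionError("200 > 128 must be rejected");
        // exactly at the limit: accepted
        if (!allowed(128, 128)) throw new AssertionError("128 == 128 must be accepted");
        // config set to 0 disables the limit
        if (!allowed(200, 0)) throw new AssertionError("limit 0 must disable the check");
        // a negative value also disables the limit
        if (!allowed(10_000, -1)) throw new AssertionError("negative limit must disable the check");
        System.out.println("all cases pass");
    }
}
```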

@Thearas
Contributor

Thearas commented Mar 20, 2026

Thank you for your contribution to Apache Doris.
Don't know what should be done next? See How to process your PR.

Please clearly describe your PR:

  1. What problem was fixed (ideally including the specific error message) and how it was fixed.
  2. Which behaviors were modified: what the previous behavior was, what it is now, why it was modified, and what impacts there might be.
  3. What features were added and why.
  4. Which code was refactored and why.
  5. Which functions were optimized and what the difference is before and after the optimization.

@gavinchou changed the title from "[feat](config) add max_bucket_num_per_partition config to limit bucket number" to "[opt](config) add max_bucket_num_per_partition config to limit bucket number" on Mar 20, 2026
@gavinchou
Contributor Author

run buildall

@gavinchou force-pushed the gavin-limit-max-num-bucket branch from 673630a to aba8238 on March 20, 2026 at 10:42
@gavinchou
Contributor Author

run compile
