Cloud4U Object Storage is a fully Amazon S3-compatible object storage platform, so most of its restrictions are inherited from the S3 standard. Two kinds of constraints apply: quotas are organizational limits that can be raised on request to technical support, while limits (restrictions and requirements) are technical limitations imposed by the platform architecture and cannot be changed.
1.1 Bucket Restrictions
- A bucket belongs to the account (user) that created it.
- Bucket ownership cannot be transferred to another account.
- The bucket name must be globally unique across the entire cluster.
1.2 Bucket Naming Requirements
- Length: 3 to 63 characters
- Allowed characters: lowercase Latin letters (a-z), digits (0-9), hyphens (-), periods (.), and underscores (_)
- Underscores are not recommended
- Must start and end with a letter or digit
- Must not contain two or more consecutive periods
- Must not contain a hyphen adjacent to a period
- Must not be in IP address format (e.g., 192.168.1.1)
- Must not end with a hyphen or period
- Maximum name length: 63 UTF-8 characters (measured in characters, not bytes)
Recommendation:
Avoid periods (.) in bucket names: with virtual-hosted-style HTTPS addressing the bucket name becomes part of the hostname, and periods can break wildcard TLS certificate matching (the same issue exists in AWS). Underscores should also be avoided.
Examples
Valid and recommended:
- my-bucket-name
- mybucket1
- my-bucket-name-2022
Valid, but with caveats:
- my_bucket_name (underscore)
- my.example.bucket (periods)
Invalid:
- MyExampleBucket (uppercase letters)
- my-bucket- (ends with a hyphen)
- 192.168.5.4 (IP address format)
- xn--something (xn-- prefix)
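The naming rules above can be checked before a CreateBucket call. The following is an illustrative sketch, not part of any SDK; the `xn--` check is based on the invalid examples listed above:

```python
import re

# 3-63 chars: lowercase letters, digits, hyphens, periods, underscores;
# must start and end with a letter or digit.
_BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$")
_IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against the rules listed above."""
    if not _BUCKET_RE.match(name):
        return False          # wrong length, characters, or start/end
    if ".." in name:
        return False          # two or more consecutive periods
    if ".-" in name or "-." in name:
        return False          # hyphen adjacent to a period
    if _IP_RE.match(name):
        return False          # IP address format
    if name.startswith("xn--"):
        return False          # reserved prefix (see invalid examples)
    return True

print(is_valid_bucket_name("my-bucket-name"))   # True
print(is_valid_bucket_name("MyExampleBucket"))  # False
```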
1.3 Bucket Limits
| Parameter | Value |
|---|---|
| Name uniqueness | Unique across the entire cluster |
| Default buckets per user | 100 |
| Maximum single object size | 5 TB |
| Multipart Upload requirement | Mandatory for objects > 100 MB |
- An empty bucket can be deleted. After deletion, the name becomes available for reuse.
2. Object Naming Requirements
- Object names are case-sensitive.
- Maximum object name length (including prefixes): 1024 bytes UTF-8
Examples:
- Development/Projects.xls
- photos/myphoto.jpg
Tip:
Use meaningful names with prefixes (e.g., project/date/purpose).
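Because the object-name limit is measured in UTF-8 bytes, keys containing multi-byte characters hit it sooner than their character count suggests. A quick pre-upload check (illustrative helper, not part of any SDK):

```python
MAX_KEY_BYTES = 1024  # maximum object name length, including prefixes

def key_fits(key: str) -> bool:
    """Return True if the object key fits in 1024 UTF-8 bytes."""
    return len(key.encode("utf-8")) <= MAX_KEY_BYTES

print(key_fits("photos/myphoto.jpg"))  # True
```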
3. Limits
| Parameter | Value |
|---|---|
| Maximum object size | 5 TB |
| Maximum data size per single PUT request | 5 GB |
| Minimum Multipart part size (except last) | 5 MB |
| Maximum number of Multipart parts | 10,000 |
| Maximum buckets per user | 100 |
| Maximum bucket policy size | 20 KB |
| Maximum custom metadata size (x-amz-meta-*) | 2 KB per object (total name + value) |
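The limits in the table interact when planning a Multipart Upload: the part size must be at least 5 MB, the part count at most 10,000, and each part is a single PUT of at most 5 GB. A sketch of choosing a part size under these constraints (illustrative, not an SDK function):

```python
MIB = 1024 * 1024
MIN_PART = 5 * MIB                   # minimum part size (except the last part)
MAX_PARTS = 10_000                   # maximum number of Multipart parts
MAX_PUT = 5 * 1024 * MIB             # maximum data per single PUT request (5 GB)
MAX_OBJECT = 5 * 1024 * 1024 * MIB   # maximum object size (5 TB)

def choose_part_size(object_size: int) -> int:
    """Pick the smallest part size that keeps the upload within the limits."""
    if object_size > MAX_OBJECT:
        raise ValueError("object exceeds the 5 TB maximum")
    # Smallest part size that stays under the 10,000-part ceiling.
    part = max(MIN_PART, -(-object_size // MAX_PARTS))  # ceiling division
    assert part <= MAX_PUT
    return part

# A 5 TB object needs parts of roughly 525 MiB to fit in 10,000 parts.
five_tb = 5 * 1024 * 1024 * MIB
print(choose_part_size(five_tb) // MIB)  # 524
```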
4. Group/User Quotas
Quotas can be set for:
- Storage volume (KiB)
- Number of objects
- Peak request rate
- Upload/download speed (KiB/min)
When exceeded:
- 403 — storage quota exceeded
- 503 — rate limit exceeded
Recommendation:
For high workloads, distribute data across multiple buckets and use prefixes. See the detailed article for more information.
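When a quota or rate limit returns 503, the restriction lasts for a fixed window, so clients should retry with exponential backoff rather than immediately. A generic sketch (no SDK dependency; `ThrottledError` is a hypothetical stand-in for your SDK's 503 exception):

```python
import random
import time

class ThrottledError(Exception):
    """Placeholder for an SDK-specific 503 / rate-limit exception."""

def call_with_backoff(operation, max_attempts=5, base_delay=1.0):
    """Retry `operation` on ThrottledError, sleeping with exponential
    backoff plus jitter between attempts."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ThrottledError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```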
5. API Request Restrictions (Rate Limiting)
| Restriction Type | Threshold | Consequence |
|---|---|---|
| S3 requests (all types) | Dynamic threshold exceeded for 5 consecutive 1-minute intervals | Bucket enters restricted state for 1 minute (HTTP 503) |
| Metadata DB queries | 800,000 requests per 5 minutes per bucket | Restricted state for 5 minutes (HTTP 503) |
| Batch Delete (DeleteObjects) | Maximum 1,000 objects per request | — |
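Since DeleteObjects accepts at most 1,000 keys per request, larger deletions must be split into batches. A minimal chunking sketch; the commented-out call assumes a boto3-style S3-compatible client:

```python
def batches(keys, size=1000):
    """Split a key list into DeleteObjects-sized batches (max 1,000 keys)."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

# Usage with an S3 client (boto3 shown as an assumed S3-compatible SDK):
# for batch in batches(all_keys):
#     s3.delete_objects(Bucket="my-bucket",
#                       Delete={"Objects": [{"Key": k} for k in batch]})
```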
Common Errors and Recommendations
| Error | Cause / Solution |
|---|---|
| Invalid bucket name | Name does not comply with rules — check length, characters, and format |
| 503 Service Unavailable / SlowDown | Dynamic rate limiting triggered — distribute load across buckets and retry with backoff |
| Large object upload fails | For objects larger than 100 MB — use Multipart Upload |
| Mass deletion is slow | Use Batch Delete (up to 1,000 objects) instead of thousands of individual DELETE requests |