Dynamic batch size
During large-scale data imports - such as loading customer orders, product details, and pricing data - the process can take several hours and consume significant system memory. This may lead to memory exhaustion, application crashes, or degraded performance.
Moreover, memory usage is often uneven, with spikes of high consumption followed by low-usage periods. To address these challenges, you can implement a dynamic batch size strategy.
This approach continuously adjusts the batch size based on the available memory for the current thread. As a result, it helps to achieve the following:
- Optimize memory utilization
- Prevent memory overload
- Ensure a more stable and efficient import process
By dynamically adapting to the system’s capacity, this method improves reliability and makes large imports more predictable and resource-friendly.
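The sketch below illustrates one way such a strategy can be implemented. It is a minimal Python example, assuming the `psutil` package for reading available memory; the threshold values and the `import_orders` function are hypothetical placeholders, not part of the product's actual API.

```python
# Minimal sketch of RAM-aware dynamic batching (assumes psutil is installed).
# Thresholds and import_orders() are hypothetical, for illustration only.
import psutil

MIN_BATCH = 100           # lower bound so progress never stalls
MAX_BATCH = 10_000        # upper bound to cap per-batch work
TARGET_FREE_RATIO = 0.25  # keep at least 25% of RAM free

def next_batch_size(current_size: int) -> int:
    """Grow or shrink the batch size based on currently available memory."""
    mem = psutil.virtual_memory()
    free_ratio = mem.available / mem.total
    if free_ratio < TARGET_FREE_RATIO:
        # Memory pressure: halve the batch to avoid exhaustion.
        return max(MIN_BATCH, current_size // 2)
    # Plenty of headroom: grow gradually for better throughput.
    return min(MAX_BATCH, int(current_size * 1.5))

def import_orders(batch):
    """Placeholder for the actual persistence logic."""
    pass

def import_in_batches(records):
    """Consume records in dynamically sized batches."""
    batch_size = MIN_BATCH
    buffer = []
    for record in records:
        buffer.append(record)
        if len(buffer) >= batch_size:
            import_orders(buffer)
            buffer.clear()
            batch_size = next_batch_size(batch_size)
    if buffer:
        import_orders(buffer)
```

In this sketch, the batch grows while memory headroom is comfortable and shrinks as soon as free memory drops below the target ratio, which smooths out the consumption spikes described above.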
For more information, see RAM-aware batch processing and Integrate RAM-aware batch processing.