
Machine Learning Forecasting in PLAIO


This article describes PLAIO’s machine learning forecasting methodology and explains how the model produces forecasts across the portfolio. The aim is to give planners a clear understanding of how machine learning (ML) forecasting behaves, what information it uses, and which characteristics distinguish it from statistical forecasting.

PLAIO’s ML method uses a model trained on historical data from all products in the portfolio. By learning shared patterns across SKUs it can generate consistent forecasts even when individual items have limited history or irregular demand.

How ML Forecasting Works in PLAIO

PLAIO’s ML Forecast model is designed to learn from the entire dataset at once rather than focusing on a single SKU at a time. This allows it to recognize portfolio-level patterns and apply them to individual items. Historical demand is the foundation of the model, but it also incorporates product attributes such as category, grouping, or other metadata, along with seasonality and trend behaviour. From this combined dataset, the model identifies relationships between products, repeated seasonal cycles, changes in demand levels, and other structural patterns.

Because the model is trained across the whole dataset rather than SKU by SKU, newly launched or low-history items can benefit from patterns learned from more established SKUs. As a result, the model is able to generate stable initial forecasts and avoid extreme behaviour caused by limited data points. When the underlying demand patterns shift in the historical data, the model adapts during retraining, learning updated relationships without requiring manual model adjustments.
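PLAIO does not publish the exact model type or data schema in this article, so the sketch below only illustrates the general idea of a global model: demand history for every SKU is pooled into one table with calendar, lag, and attribute features, and a single model is fit on all of it. The file name, column names, feature choices, and the use of LightGBM are assumptions for illustration, not PLAIO's actual implementation.

```python
import pandas as pd
import lightgbm as lgb

# Hypothetical input: one row per SKU per month, with attribute columns.
# Column names (sku, month, demand, category, product_group) are illustrative.
history = pd.read_csv("demand_history.csv", parse_dates=["month"])
history = history.sort_values(["sku", "month"])

# Calendar features let the model learn seasonal cycles shared across SKUs.
history["month_of_year"] = history["month"].dt.month

# Lag and rolling features describe each SKU's recent level and trend.
history["lag_1"] = history.groupby("sku")["demand"].shift(1)
history["rolling_3m"] = history.groupby("sku")["demand"].transform(
    lambda s: s.shift(1).rolling(3).mean()
)

# Product attributes are kept as categorical features, so items that behave
# alike (same category or grouping) can share learned patterns.
for col in ["sku", "category", "product_group"]:
    history[col] = history[col].astype("category")

feature_cols = ["month_of_year", "lag_1", "rolling_3m", "sku", "category", "product_group"]
train = history.dropna(subset=["lag_1", "rolling_3m"])

# One global model for the whole portfolio; retraining is a single refit on
# refreshed history rather than many per-SKU re-estimations, and a newly
# launched SKU inherits patterns learned from established items via its
# category, grouping, and seasonality features.
model = lgb.LGBMRegressor(n_estimators=500, learning_rate=0.05)
model.fit(train[feature_cols], train["demand"])
```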

The overall output is a forecast line that behaves consistently across SKUs, reflects shared seasonal and trend patterns, and maintains stability even when individual items do not have long or regular demand histories.

Key Differences Between ML Forecasting and Statistical Forecasting

ML forecasting differs from traditional forecasting approaches mainly in how it uses information. Traditional methods focus solely on each SKU’s own history, which limits their ability to detect patterns when data is sparse or irregular, while ML forecasting learns from the entire portfolio at once.

The ML approach also improves model consistency. Because a single global model is used for all products, forecast behaviour is more uniform across the entire portfolio. Seasonal patterns, trend interpretation, and sensitivity to recent changes follow the same logic for all SKUs, whereas history-based per-SKU methods often behave differently depending on data length and parameter choices. The global model structure further allows PLAIO to retrain forecasts more efficiently, since updates occur in one model rather than many separate ones.

ML forecasting also adapts more readily when demand patterns change. If market dynamics shift or structural changes occur in the data, the model incorporates this information during retraining without requiring individual SKU-level adjustments. This makes the forecasting system easier to maintain and more responsive to the behaviour actually observed in the portfolio.

Demand Planner module in PLAIO

In the Demand Planner module, the machine learning forecast is displayed as the DemandML series type alongside Market Forecast, Benchmark, and Customer Order.

Series Type        Description
Customer Order     Actual committed orders from customers
Market Forecast    Manual forecast input
DemandML           Machine learning predictions based on historical sales, patterns, and trends
Benchmark          Statistical benchmark to validate ML performance

This allows planners to compare different forecast perspectives, observe how the ML model responds to changes in historical demand, and understand where manual market insight may provide additional context.

Understanding Forecast Performance

PLAIO evaluates forecast behaviour using two complementary metrics: Error and Bias. Error represents the magnitude of deviations between forecasts and actuals, while Bias indicates whether forecasts tend to be consistently above or below actual demand. Together, these metrics provide a clearer picture of how the ML forecast behaves across time and across products.
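The article does not spell out the exact formulas, so the sketch below uses one common convention as an assumption: mean absolute deviation for Error and mean signed deviation for Bias. The numbers and definitions are illustrative only.

```python
import numpy as np

def forecast_error(actual, forecast):
    """Magnitude of deviation; here taken as mean absolute error (an assumed convention)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(forecast - actual))

def forecast_bias(actual, forecast):
    """Signed deviation; positive values mean the forecast tends to sit above actual demand."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(forecast - actual)

# A forecast that is fairly close in magnitude but consistently high:
actual = [100, 120, 110, 130]
forecast = [110, 125, 120, 135]
print(forecast_error(actual, forecast))  # 7.5  -> moderate Error
print(forecast_bias(actual, forecast))   # +7.5 -> systematic over-forecasting (Bias)
```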

Performance is typically reviewed through comparisons between the ML forecast, the market forecast, and a simple benchmark. These comparisons help identify whether a product is inherently more difficult to forecast, whether the model is capturing the relevant patterns, or whether manual input from markets contains insight not reflected in historical data. Over time, Error and Bias trends help planners understand how forecast behaviour evolves through rolling planning cycles.
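As a rough illustration of that comparison, the sketch below uses made-up evaluation rows to check, per product, whether the ML forecast or the benchmark has the lower average error. Column names and numbers are illustrative only.

```python
import pandas as pd

# Made-up evaluation data: one row per product per period with actuals and forecasts.
rows = pd.DataFrame({
    "product":   ["A", "A", "A", "B", "B", "B"],
    "actual":    [100, 120, 110,  40,  90,  15],
    "demand_ml": [105, 118, 112,  70,  60,  45],
    "benchmark": [110, 110, 110,  50,  50,  50],
})

rows["ml_abs_err"] = (rows["demand_ml"] - rows["actual"]).abs()
rows["bench_abs_err"] = (rows["benchmark"] - rows["actual"]).abs()

summary = rows.groupby("product")[["ml_abs_err", "bench_abs_err"]].mean()
summary["ml_beats_benchmark"] = summary["ml_abs_err"] < summary["bench_abs_err"]
print(summary)
# Product A: the ML forecast is clearly ahead of the benchmark, suggesting the
# model captures its pattern. Product B: demand is erratic and both forecasts
# miss badly, a sign the item is hard to forecast and manual market input may
# add context the history does not contain.
```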

More details on forecast performance are available in the article Forecasting Approach & Performance Evaluation.
