Forecasting 4.0 with Probabilistic Forecasts
A little over one year ago, we unveiled quantile grids as our 3.0 forecasting technology. More than ever, Lokad remains committed to delivering the best forecasts that technology can produce, and today, our 4th generation of forecasting technology, namely our probabilistic forecasting engine, is live and available in production for all clients. This new engine is a complete rewrite of our forecasting technology stack, and it addresses many long-standing challenges we had been facing.
True probabilities
The future is uncertain no matter how good the forecasting technology. Back in 2012, when Lokad first ventured into the depths of quantile forecasting, we quickly realized that uncertainty should not be dismissed, as the classic forecasting approach does, but embraced instead. Simply put, supply chain costs are concentrated at the statistical extremes: it’s the surprisingly high demand that causes stock-outs, and the surprisingly low demand that causes dead inventory. In between, the supply chain tends to operate quite smoothly.
With quantile grids, Lokad was delivering a much more fine-grained vision of possible future outcomes. However, as the name suggests, our quantile grids were built on top of our quantile forecasts (multiple layers of quantiles, actually). These quantile grids proved tremendously useful over the last year, but while our forecasting engine was producing probabilities, internally nearly none of its logic worked directly with probabilities. The probabilities we computed were a byproduct of a quantile forecasting system.
Because of these quantile roots, our forecasting engine 3.0 had multiple subtle limitations. While most of these limitations were too subtle to be noticed by clients, they did not go unnoticed by Lokad’s R&D team. We therefore decided to reboot our entire forecasting technology from a truly native probabilistic perspective; this was the start of the forecasting engine 4.0.
Lead time forecasting
Lead times are frequently assumed to be a given. However, while past lead times are known, future lead times can only be estimated. For years, Lokad had underestimated the challenge of accurately forecasting future lead times. Lead times are subtle: most statistical patterns that impact demand, such as seasonality (and the Chinese New Year in particular), also impact lead times.
In our forecasting engine 4.0, lead times have become first-class citizens with their own lead time forecasting mode, backed by dedicated built-in forecasting models. Naturally, since our engine is a probabilistic forecasting engine, lead time forecasts are expressed as probability distributions over an uncertain duration.
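To make this concrete, here is a minimal sketch (in Python, purely illustrative and not the engine’s actual representation or API) of what a probabilistic lead time forecast amounts to: a discrete distribution of probabilities over possible lead times expressed in days.

```python
# Illustrative sketch only: a probabilistic lead time forecast represented
# as a discrete distribution over lead times expressed in days.
lead_time_dist = {
    5: 0.10,   # 10% chance the order arrives in 5 days
    6: 0.25,
    7: 0.35,
    8: 0.20,
    9: 0.08,
    10: 0.02,
}

assert abs(sum(lead_time_dist.values()) - 1.0) < 1e-9

# Expected lead time, for reference; the full distribution carries far more
# information than this single number.
mean_lead_time = sum(days * p for days, p in lead_time_dist.items())
print(mean_lead_time)  # 6.97
```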
Integrated demand forecasting
Lead times vary, and yet our forecasting engine 3.0 was stuck with fixed lead times. The classic safety stock analysis assumes that the lead time follows a normal distribution, while nearly all the measurements we have ever carried out indicate that lead times are clearly not normally distributed. Our experiments routinely showed that a fixed lead time was better than a flawed lead time model, but being stuck with static lead times was still not the satisfying solution we were looking for.
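For reference, the textbook calculation being criticized reduces both demand and lead time to a mean and a standard deviation, and assumes normality throughout. A sketch of that classic formula (not part of our engine) is given below.

```python
from math import sqrt

def classic_reorder_point(mu_d, sigma_d, mu_l, sigma_l, z):
    """Textbook reorder point: demand per day and lead time are both
    assumed normally distributed and independent."""
    mean_lead_demand = mu_d * mu_l
    std_lead_demand = sqrt(mu_l * sigma_d ** 2 + mu_d ** 2 * sigma_l ** 2)
    return mean_lead_demand + z * std_lead_demand

# E.g. 3 units/day on average, 7-day average lead time, z ~ 2.05 for a 98% service level.
print(classic_reorder_point(mu_d=3, sigma_d=2, mu_l=7, sigma_l=1.5, z=2.05))
```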
The forecasting engine 4.0 introduces the concept of integrated demand forecasting, where “integrated” means integrated over the lead time. The engine takes a full distribution of lead time probabilities and produces the corresponding probabilistic demand forecast. In practice, the lead time distribution is itself computed by the forecasting engine, as seen above. Integrated demand forecasting finally brings a satisfying answer to the challenge of dealing with varying lead times.
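Conceptually (and only as an illustration; this is not how the engine is implemented), integrated demand can be viewed as a mixture: for each possible lead time, the daily demand distribution is convolved with itself that many times, and the result is weighted by the probability of that lead time. A minimal sketch assuming independent, identically distributed daily demand:

```python
from collections import defaultdict

def convolve(dist_a, dist_b):
    # Distribution of the sum of two independent discrete demands.
    out = defaultdict(float)
    for a, pa in dist_a.items():
        for b, pb in dist_b.items():
            out[a + b] += pa * pb
    return dict(out)

def integrated_demand(daily_demand, lead_time_dist):
    # Mixture, over all possible lead times, of total demand during the lead time.
    result = defaultdict(float)
    cumulative = {0: 1.0}  # total demand over a 0-day horizon
    for day in range(1, max(lead_time_dist) + 1):
        cumulative = convolve(cumulative, daily_demand)
        weight = lead_time_dist.get(day, 0.0)
        if weight:
            for demand, p in cumulative.items():
                result[demand] += weight * p
    return dict(result)

# Hypothetical inputs: probability of selling 0, 1 or 2 units on any given day,
# and a lead time distribution like the one sketched earlier.
daily_demand = {0: 0.6, 1: 0.3, 2: 0.1}
lead_time_dist = {5: 0.10, 6: 0.25, 7: 0.35, 8: 0.20, 9: 0.08, 10: 0.02}
demand_over_lead_time = integrated_demand(daily_demand, lead_time_dist)
print(sum(demand_over_lead_time.values()))  # ~1.0, i.e. a proper distribution
```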
New products forecasting
Forecasting demand for new products is plain hard. Since forecasting here obviously cannot rely on any sales history, the forecasting engine has to rely on other data known about the product prior to its launch. Our forecasting engine 3.0 already had a tags framework geared precisely towards this use case. However, tags unfortunately did not carry as much information as we would have liked, and some accuracy was left on the table.
With 4.0, this challenge is revisited with the introduction of categories and hierarchies. Categories and hierarchies are both more expressive and more structured than tags, and they convey a lot more information. The forecasting engine 4.0 takes full advantage of this richer data framework to deliver more accurate forecasts, with new-product forecasting being the most acute use case.
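As a rough illustration (the field names below are hypothetical, not the engine’s actual input schema), the difference is the one between a flat bag of tags and structured, named attributes plus an explicit hierarchy:

```python
# Engine 3.0 style: a flat, unordered bag of tags.
tags = {"men", "shoes", "leather", "brown"}

# Engine 4.0 style: named categories plus an explicit hierarchy, which tell the
# engine how a brand-new product relates to products that do have a history.
new_product = {
    "categories": {"gender": "men", "material": "leather", "color": "brown"},
    "hierarchy": ["footwear", "shoes", "derby shoes"],  # from broad to narrow
}
```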
Stock-outs and promotions
The intent of the forecasting engine is to forecast future demand. However, our knowledge of past demand is typically imperfect: only past sales are really known. Sales typically tend to be a reasonable approximation of demand, but they come with multiple biases, the most common being stock-outs and promotions. Our engine 3.0 already had a few heuristics to deal with these biases, and quantile forecasts are intrinsically more robust than (classic) average forecasts. Yet, once again, the situation was not entirely satisfying for us.
The engine 4.0 introduces the notion of biased demand, which can be either censored or inflated. When the demand for a given product on a given day is marked as censored, we are telling the forecasting engine that the demand should have been higher, and that the true demand for that day remains unknown. The engine leverages this information to refine the forecasts, even when the history is full of events which have distorted the demand signal.
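A minimal sketch of the idea (the column names are hypothetical, not the engine’s actual flags): the days on which a stock-out truncated the sales are flagged as censored, so the observed sales are treated as a lower bound on demand rather than as the demand itself.

```python
# Each record: (date, units_sold, stock_out_observed)
sales_history = [
    ("2016-03-01", 4, False),
    ("2016-03-02", 1, True),   # ran out mid-day: true demand was at least 1
    ("2016-03-03", 0, True),   # out of stock all day: true demand unknown
    ("2016-03-04", 5, False),
]

# Demand is marked as censored wherever a stock-out distorted the sales signal.
demand_records = [
    {"date": date, "quantity": qty, "censored": stocked_out}
    for date, qty, stocked_out in sales_history
]
```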
Ultra-sparse demand
While quantile forecasts are vastly superior to classic average or median forecasts when it comes to estimating the probabilities of rare events, quantiles begin to show their limits when it comes to super-rare events. For example, our quantile models struggled to properly deal with items sold only once or twice a year, and with service levels higher than 98%.
Native probabilistic models, as implemented in our engine 4.0, are much better behaved when it comes to ultra-sparse demand and “rare” events in general. These models could have been implemented within a quantile forecasting framework (a probabilistic forecast can easily be turned into a quantile forecast), but our engine 3.0 did not have the infrastructure to support them, so they were implemented in the engine 4.0 instead.
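The parenthetical remark above is easy to make concrete: once a full probability distribution over demand is available, any quantile can simply be read off its cumulative distribution. A minimal sketch:

```python
def quantile(demand_dist, tau):
    """Smallest demand level whose cumulative probability reaches tau."""
    cumulative = 0.0
    for demand in sorted(demand_dist):
        cumulative += demand_dist[demand]
        if cumulative >= tau:
            return demand
    return max(demand_dist)

# E.g. the demand level matching a 98% service level, for a hypothetical
# distribution of an item that sells only a handful of units per year.
demand_dist = {0: 0.55, 1: 0.25, 2: 0.12, 3: 0.05, 4: 0.02, 5: 0.01}
print(quantile(demand_dist, 0.98))  # -> 4
```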
Blended into Envision
Versions 2.0 and 3.0 of our forecasting engine came with a web user interface. At first glance, this seemed like the easy approach. However, the user interface actually dismissed the factor that represents the true challenge of using (any) forecasting engine: keeping complete control over the data fed into it. Indeed, garbage in, garbage out remains an all too frequent problem.
The engine 4.0 is interfaced from within Envision, our domain-specific language geared towards quantitative optimization for commerce. Calling the forecasting engine takes a series of data arguments provided by an Envision script. This approach requires a bit more upfront effort; however, the productivity benefits kick in rapidly, as soon as adjustments need to be made to the input data.
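As an illustration of the workflow (in Python rather than Envision, with hypothetical names standing in for the actual engine call), the point is that the data handed to the engine is prepared by a script, so fixing the inputs is just a matter of editing that script:

```python
def load_sales_history():
    # In practice, the sales records would come from flat files or a database.
    return [
        {"id": "A1", "date": "2016-03-01", "quantity": 4},
        {"id": "A1", "date": "2016-03-02", "quantity": -2},  # data-entry glitch
    ]

sales = load_sales_history()

# Scripted control over the inputs: drop the glitch before the data ever
# reaches the forecasting engine.
clean_sales = [line for line in sales if line["quantity"] >= 0]

# Hypothetical stand-in for the actual engine invocation made from Envision.
# forecasts = forecasting_engine.integrated_demand(clean_sales, lead_times)
```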
The release of our forecasting engine 4.0 is only the first part of a series of important improvements that have been brought to Lokad over the last few weeks. Stay tuned for more.