
Inside Forecasting: What Truly Defines Strong Forecasting Applications

Kilian Schneider, HMS Analytical Software GmbH

published on March 24, 2026

Why time series forecasting is rarely just a modeling problem in practice

Time series forecasting is often treated as a clearly defined machine learning task: data is collected and prepared, models are trained, forecasts are generated, and performance is evaluated using metrics.

In real-world forecasting projects, however, forecasts are rarely viewed as isolated model outputs. They serve as a foundation for business-critical decisions and are therefore subject to close scrutiny and discussion.

A key tension typically arises from two factors:

  • Forecasts are required across multiple countries and business units, each with its own requirements and data landscape.
  • New forecasting solutions are compared to existing tools, often using the same evaluation metric.

As a result, technical model quality is only one part of the overall picture. What ultimately matters is whether forecasts are accepted within the organization and can be effectively used in decision-making.

The following insights are based on project experience from the development and implementation of forecasting solutions in regulated enterprise environments at HMS Analytical Software GmbH.

Management Summary: What truly defines strong forecasting applications

Successful forecasting applications are not defined solely by high accuracy, but by characteristics that enable their use in operational environments.

Three aspects stand out in particular:

  1. Acceptance is driven by transparency and explainability, especially when forecasts deviate from existing plans.
  2. Local data sources are business-critical and must be integrated into a generalized modeling framework.
    Example: In project contexts, it became clear that countries and business units often maintain additional data sources that are not included in global datasets. Typical examples include regional pricing campaigns, country-specific sales structures, local marketing initiatives, or manually maintained sales assumptions that are not reflected in global reporting.
  3. Forecast quality should not be assessed based on a single metric but must be validated and plausibility-checked.

In addition, a fundamental decision must be made: should forecasts be generated using a fully independent, generalized approach, or should they be tailored to specific business requirements? This decision directly impacts both acceptance and governance of the solution.

Across HMS projects, it became evident that technical forecast quality alone is rarely sufficient if organizational integration and comparability with existing processes are not addressed.

Technical Deep Dive: How forecasting projects are implemented in enterprise environments

Platforms and technologies used in forecasting projects

The projects analyzed were implemented across various platforms, including:

Platforms

  • Palantir Foundry
  • AWS
  • Snowflake
  • Databricks, including MLflow

Infrastructure and orchestration

  • CloudFormation
  • Lambda
  • Glue Jobs
  • Step Functions
  • Bash scripts

Implementation and modeling

  • Python
  • PySpark
  • Jupyter
  • Optuna

Forecasting methods

  • ARIMA
  • Prophet
  • Random Forest
  • XGBoost
  • LightGBM
  • CatBoost

This combination reflects typical HMS project requirements. Forecasting is not only about model development but about embedding forecasting into a reproducible end-to-end process within existing data platforms.

The range of methods highlights that forecasting in practice often combines classical time series techniques with machine learning models, depending on data availability, target architecture, and scalability requirements.
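To make this concrete, here is a minimal sketch of what such a combination can look like: a classical ARIMA model and a gradient-boosted LightGBM model fitted to the same series. It is illustrative rather than project code; the synthetic data, the ARIMA order, and the lag-feature setup are assumptions made for the example.

```python
# Minimal sketch: classical vs. ML forecasting on one series.
# Assumes pandas, numpy, statsmodels, and lightgbm are installed;
# the data and model settings are illustrative, not project values.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from lightgbm import LGBMRegressor

# Synthetic monthly series standing in for a global sales history
rng = np.random.default_rng(42)
y = pd.Series(
    100 + 0.5 * np.arange(120)
    + 10 * np.sin(np.arange(120) * 2 * np.pi / 12)
    + rng.normal(0, 2, 120),
    index=pd.date_range("2015-01-01", periods=120, freq="MS"),
)
train, test = y.iloc[:-12], y.iloc[-12:]

# Classical approach: ARIMA fitted directly on the series
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=12)

# ML approach: gradient boosting on lag features
def make_features(series: pd.Series, n_lags: int = 12) -> pd.DataFrame:
    df = pd.DataFrame({f"lag_{i}": series.shift(i) for i in range(1, n_lags + 1)})
    df["y"] = series
    return df.dropna()

feats = make_features(train)
model = LGBMRegressor(n_estimators=200).fit(feats.drop(columns="y"), feats["y"])

# Recursive multi-step forecast: each prediction becomes a lag for the next step
history = list(train.iloc[-12:])
ml_fc = []
for _ in range(12):
    x = pd.DataFrame([history[::-1]], columns=[f"lag_{i}" for i in range(1, 13)])
    pred = model.predict(x)[0]
    ml_fc.append(pred)
    history = history[1:] + [pred]

print(pd.DataFrame({"arima": arima_fc.values, "lgbm": ml_fc}, index=test.index))
```

The recursive loop reflects a common design choice for multi-step ML forecasts: each prediction is fed back as a lag feature, which keeps the feature pipeline identical between training and inference.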

Why explainable AI is a key driver of acceptance in forecasting

A central learning from project experience is: Understanding drives acceptance.

Forecasting systems are not automatically accepted just because they are mathematically or statistically sound. When new applications are compared to existing processes or tools, the need for explanation becomes critical.

In practice, it is recommended to make solutions tangible early on through:

  • explainability components
  • visualizations
  • simulation capabilities that let users interact with forecasts and influence them

Explainable AI in time series forecasting is therefore not an optional add-on but a prerequisite for integrating forecasts into business decision-making.
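As an illustration, the sketch below shows one way an explainability component might surface per-forecast feature attributions, here using the shap package with a tree-based model. The features, data, and model are hypothetical stand-ins, not components of a specific HMS solution.

```python
# Minimal sketch: attributing a single forecast to its input features with SHAP.
# Assumes lightgbm and shap are installed; feature names are illustrative.
import numpy as np
import pandas as pd
import shap
from lightgbm import LGBMRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "lag_1": rng.normal(100, 10, 500),        # last observed value
    "lag_12": rng.normal(100, 10, 500),       # same month last year
    "promo_active": rng.integers(0, 2, 500),  # local campaign flag
})
y = 0.6 * X["lag_1"] + 0.3 * X["lag_12"] + 8 * X["promo_active"] + rng.normal(0, 2, 500)

model = LGBMRegressor(n_estimators=100).fit(X, y)

# Per-forecast attributions: how much each feature shifted this prediction
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X.iloc[[0]])
for name, value in zip(X.columns, contributions[0]):
    print(f"{name:>12}: {value:+.2f}")
```

Attributions of this kind let business users see, for instance, how strongly a local promotion flag shifted a particular forecast, which is precisely the transparency that supports acceptance.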

From an HMS perspective, explainability is a core element of product acceptance in forecasting systems.

Explaining forecast differences: Why deviations are not necessarily model errors

A recurring pattern in practice is the comparison of new forecasting solutions with existing tools, often based on identical evaluation metrics.

From a business perspective, success is typically measured by whether the new solution performs at least as well as the existing one.

At the same time, a key principle applies: Never rely on a single evaluation metric.

From a data science perspective, a single metric cannot fully capture forecast quality. Forecasts should also be validated against actual values, for example through sample-based visualizations, and assessed across multiple metrics.
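A minimal sketch of what multi-metric assessment can look like in code, assuming actuals and forecasts are aligned pandas Series (the metric selection is illustrative):

```python
# Minimal sketch: several error metrics side by side instead of a single number.
import numpy as np
import pandas as pd

def evaluate(actual: pd.Series, forecast: pd.Series) -> dict:
    error = forecast - actual
    return {
        "MAE": error.abs().mean(),                            # average absolute error
        "RMSE": np.sqrt((error ** 2).mean()),                 # penalizes large misses
        "MAPE %": (error.abs() / actual.abs()).mean() * 100,  # scale-free, unstable near zero
        "Bias": error.mean(),                                 # systematic over- or under-forecasting
    }

actual = pd.Series([102, 98, 110, 95])
print(evaluate(actual, pd.Series([100, 100, 100, 100])))
```

Read together, such metrics expose patterns, for example a systematic bias, that any single number hides.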

Differences between a new forecasting solution and an existing tool do not automatically indicate poor modeling.

Similarly, deviations between forecasts and actual values can have multiple causes that no single metric can reveal.

In project experience, such differences often result from structural variations between tools, such as:

  • different modeling approaches and mechanisms
  • different data foundations, global versus local
  • different data preprocessing steps

Explaining forecast differences therefore requires clearly defining the comparison context: against an existing tool, against actual values, or across different data scenarios.

Forecast revisions: Why forecasting without feedback rarely works

Another key learning is that forecasting systems only become effective in the long term when feedback processes are in place.

Successful applications rely on continuous input from users and the business. It is essential to define early:

  • when feedback is provided
  • how feedback is incorporated
  • how responsibilities are distributed between central analytics functions and local business units

Experience also shows that business adoption often increases when the business takes ownership of the solution, whether financially or organizationally.

Forecast revisions are therefore not an exception but an expected outcome of real-world usage.

Business story vs. independent forecast: The central governance question

One of the most critical decisions concerns the nature of the forecast itself:

Should the forecasting system produce independent predictions, or should it incorporate business expectations?

In project contexts, this was identified as a key tension:

  • integration of business expectations
  • independent predictive results
  • or a hybrid approach

Clarifying this question is essential for acceptance. Without it, there is no clear definition of what constitutes a “correct” forecast.

This decision has proven to be a fundamental lever in HMS projects, shaping both model logic and business adoption.

Why local data sources often determine forecasting success

Forecasting projects typically start with global datasets such as sales figures. At the same time, local entities often maintain additional data sources.

In practice, the business expects these local inputs to be incorporated into forecasts, for example as predictive indicators.

A robust forecasting solution must therefore be capable of integrating a wide range of local data sources into a generalized modeling framework.
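One common integration pattern, sketched below under hypothetical table and column names, is to left-join optional local sources onto the global panel so that entities without the extra data still flow through the same pipeline:

```python
# Minimal sketch: folding an optional local data source into a shared feature
# pipeline. Table names, columns, and values are hypothetical.
import pandas as pd

global_sales = pd.DataFrame({
    "country": ["DE", "DE", "FR", "FR"],
    "month": pd.to_datetime(["2025-01-01", "2025-02-01"] * 2),
    "units": [120, 130, 80, 85],
})

# Local source maintained by only one country organization
local_promos_de = pd.DataFrame({
    "country": ["DE"],
    "month": pd.to_datetime(["2025-02-01"]),
    "promo_discount": [0.15],
})

# Left join keeps the global panel intact; missing local data becomes a
# neutral default instead of breaking the shared model.
panel = global_sales.merge(local_promos_de, on=["country", "month"], how="left")
panel["promo_discount"] = panel["promo_discount"].fillna(0.0)
print(panel)
```

The left join plus a neutral default keeps the framework generalized: countries that supply additional indicators benefit from them, while all others continue to run unchanged.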

System perspective: Operations, scalability, and stability

Why unclear objectives lead to misaligned expectations

Forecasting projects can only be executed effectively if a clear target vision is defined. In one project, the desired target state was initially unclear, both in terms of automation and standardization and in terms of the underlying data foundation.

In forecasting, defining the objective goes beyond technical requirements. It includes clarifying what constitutes a “good” forecast:

  • Is the goal to optimize a specific metric?
  • Is the intention to replace or complement an existing tool?
  • Or is transparency of forecast behavior the primary objective?

Project experience shows that aligning expectations with internal stakeholders is essential before model performance can be meaningfully evaluated.

Why data access is more than a technical detail

A recurring challenge in projects was uncertainty about data availability and delays caused by access and permission issues.

From a business perspective, expectations are clear: relevant data must be complete, up-to-date, and reliable, especially when forecasts inform decisions.

This can be summarized as follows: Without data, there are no results.

From an implementation perspective, this means:

  • From the business side: ensuring availability, accessibility, and data quality
  • From the implementation side: making data issues visible early and addressing delays transparently

Forecasting systems therefore depend as much on organizational data readiness as on model quality.

Why result presentation can matter more than infrastructure

In a proof-of-concept context, a clear recommendation emerged: focus more on visualizations and result presentation, and less on infrastructure.

This highlights an important insight: forecasting projects are not evaluated solely on technical sophistication, but on whether results are understandable and actionable for stakeholders.

FAQ

What does explainable AI mean in time series forecasting?

It refers to presenting forecasts in a way that users can understand, for example through explanations, visualizations, or interactive elements.

Why is explainability critical for acceptance?

Because forecasts are often compared with existing solutions, and deviations must be understood to build trust.

How can forecast differences be explained?

They often result from differences in models, data sources, or preprocessing. They should always be evaluated in context using multiple metrics and actual values.

Why should forecasts not be evaluated using a single metric?

Because no single metric fully captures forecast quality. Additional validation methods are required.

What causes forecast revisions?

Primarily user feedback and iterative improvements based on real-world usage.

What role do local data sources play?

They are often critical, as local entities hold additional data that improves forecast relevance.

Why is the decision between business expectations and independent forecasts important?

Because it defines how forecast accuracy and relevance are judged.

Which technologies are typically used?

Python, PySpark, AWS, Snowflake, Databricks, and Palantir Foundry.

Which methods are commonly applied?

ARIMA, Prophet, Random Forest, and boosting models such as XGBoost, LightGBM, and CatBoost.

Why is a clear target definition essential?

Because unclear objectives lead to misaligned expectations and hinder project success.

Conclusion

Strong forecasting applications are not defined by a single algorithm but by the ability to make forecasts understandable, comparable, and operationally usable.

Project experience shows that acceptance is driven by understanding, forecast quality cannot be reduced to a single metric, and local data sources are often critical for real-world usability.

HMS therefore positions forecasting not as a purely modeling task, but as a combination of data integration, modeling, validation, and business adoption.


Kilian Schneider
Senior Data Scientist
