
Time series forecasting is often treated as a clearly defined machine learning task: data is collected and prepared, models are trained, forecasts are generated, and performance is evaluated using metrics.
In real-world forecasting projects, however, forecasts are rarely viewed as isolated model outputs. They serve as a foundation for business-critical decisions and are therefore subject to close scrutiny and discussion.
A key tension typically arises from two factors: on the technical side, forecasting is treated as a well-defined modeling task, while on the business side, forecasts serve as inputs to critical decisions and are scrutinized accordingly.
As a result, technical model quality is only one part of the overall picture. What ultimately matters is whether forecasts are accepted within the organization and can be effectively used in decision-making.
The following insights are based on project experience from the development and implementation of forecasting solutions in regulated enterprise environments at HMS Analytical Software GmbH.
Successful forecasting applications are not defined solely by high accuracy, but by characteristics that enable their use in operational environments.
Three aspects stand out in particular: forecasts must be understandable, comparable with existing processes and tools, and operationally usable.
In addition, a fundamental decision must be made: should forecasts be generated using a fully independent, generalized approach, or should they be tailored to specific business requirements? This decision directly impacts both acceptance and governance of the solution.
Across HMS projects, it became evident that technical forecast quality alone is rarely sufficient if organizational integration and comparability with existing processes are not addressed.
The projects analyzed were implemented across various platforms and technology stacks:
Platforms, infrastructure, and orchestration: AWS, Snowflake, Databricks, and Palantir Foundry
Implementation and modeling: Python and PySpark
Forecasting methods: ARIMA, Prophet, Random Forest, and gradient boosting models such as XGBoost, LightGBM, and CatBoost
This combination reflects typical HMS project requirements: forecasting is not only about model development but about embedding it into a reproducible end-to-end process within existing data platforms.
The range of methods highlights that forecasting in practice often combines classical time series techniques with machine learning models, depending on data availability, target architecture, and scalability requirements.
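As an illustration of this combination, the following minimal sketch fits a classical ARIMA model and a gradient boosting model with lag features to the same synthetic series; the data, model orders, and hyperparameters are placeholders, not project settings.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from xgboost import XGBRegressor

# Hypothetical univariate monthly series; real projects would load platform data.
rng = np.random.default_rng(42)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
y = pd.Series(100 + np.arange(48) * 2 + rng.normal(0, 5, 48), index=idx)
train, test = y[:-6], y[-6:]

# Classical approach: ARIMA on the raw series.
arima_fc = ARIMA(train, order=(1, 1, 1)).fit().forecast(steps=6)

# ML approach: gradient boosting on lag features.
def make_lags(series, n_lags=12):
    df = pd.DataFrame(
        {f"lag_{k}": series.shift(k) for k in range(1, n_lags + 1)}
    ).dropna()
    return df, series.loc[df.index]

X_train, y_train = make_lags(train)
model = XGBRegressor(n_estimators=200, max_depth=3).fit(X_train, y_train)

# Recursive multi-step forecast: feed each prediction back as the newest lag.
history = train.copy()
ml_fc = []
for step in range(6):
    lags = history.iloc[-12:][::-1].to_numpy().reshape(1, -1)
    pred = float(model.predict(pd.DataFrame(lags, columns=X_train.columns))[0])
    ml_fc.append(pred)
    history = pd.concat([history, pd.Series([pred], index=[test.index[step]])])

print(pd.DataFrame({"arima": arima_fc, "xgb": ml_fc, "actual": test}))
```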
A central learning from project experience is: Understanding drives acceptance.
Forecasting systems are not automatically accepted just because they are mathematically or statistically sound. When new applications are compared to existing processes or tools, the need for explanation becomes critical.
In practice, it is recommended to make solutions tangible early on, for example through early prototypes, sample visualizations of forecasts against actual values, and explanations of how models arrive at their results.
Explainable AI in time series forecasting is therefore not an optional add-on but a prerequisite for integrating forecasts into business decision-making.
From an HMS perspective, explainability is a core element of product acceptance in forecasting systems.
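One common way to make such explanations concrete is feature attribution. The sketch below is a simplified illustration rather than a project implementation: it uses SHAP values on a gradient boosting forecaster with hypothetical features (lags, a promotion flag, a local order indicator) to show which inputs drive an individual forecast.

```python
import numpy as np
import pandas as pd
import shap
from xgboost import XGBRegressor

# Hypothetical feature table: lagged demand plus two local indicators.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "lag_1": rng.normal(100, 10, 200),
    "lag_12": rng.normal(100, 10, 200),
    "promo_flag": rng.integers(0, 2, 200),
    "local_orders": rng.normal(50, 5, 200),
})
y = 0.6 * X["lag_1"] + 0.3 * X["lag_12"] + 8 * X["promo_flag"] + rng.normal(0, 3, 200)

model = XGBRegressor(n_estimators=100, max_depth=3).fit(X, y)

# SHAP values attribute each forecast to its input features,
# giving business users a per-prediction explanation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name:>12}: {value:+.2f}")
```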
A recurring pattern in practice is the comparison of new forecasting solutions with existing tools, often based on identical evaluation metrics.
From a business perspective, success is typically measured by whether the new solution performs at least as well as the existing one.
At the same time, a key principle applies: Never rely on a single evaluation metric.
From a data science perspective, a single metric cannot fully capture forecast quality. Forecasts should also be validated against actual values, for example through sample-based visualizations, and assessed across multiple metrics.
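A minimal sketch of such a multi-metric evaluation, with synthetic actuals and forecasts standing in for real project data:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def evaluate(actual, forecast):
    """Report several complementary error metrics instead of a single one."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    err = forecast - actual
    return {
        "MAE": np.mean(np.abs(err)),
        "RMSE": np.sqrt(np.mean(err ** 2)),
        "MAPE_%": np.mean(np.abs(err / actual)) * 100,  # assumes no zero actuals
        "Bias": np.mean(err),  # systematic over- or under-forecasting
    }

# Hypothetical actuals and forecasts for a sample-based check.
idx = pd.date_range("2024-01-01", periods=12, freq="MS")
actual = pd.Series([100, 105, 98, 110, 120, 115, 118, 125, 122, 130, 128, 135], index=idx)
forecast = actual * (1 + np.random.default_rng(1).normal(0, 0.05, 12))

print(evaluate(actual, forecast))

# Sample-based visualization: plotting forecasts against actuals often
# reveals issues (level shifts, lag, bias) that aggregate metrics hide.
actual.plot(label="actual")
forecast.plot(label="forecast", linestyle="--")
plt.legend(); plt.tight_layout(); plt.show()
```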
Differences between a new forecasting solution and an existing tool do not automatically indicate poor modeling.
Similarly, deviations between forecasts and actual values can have multiple causes beyond a single metric.
In project experience, such differences often result from structural variations between tools, such as different underlying models, different data sources, or different preprocessing and aggregation logic.
Explaining forecast differences therefore requires clearly defining the comparison context: against an existing tool, against actual values, or across different data scenarios.
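The sketch below makes these comparison contexts explicit, using hypothetical column names for the actuals, the new solution, and the legacy tool:

```python
import pandas as pd

# Hypothetical frame with actuals plus forecasts from the new solution
# and the existing tool, aligned on the same periods and granularity.
df = pd.DataFrame({
    "actual":    [100, 110, 120, 115],
    "new_fc":    [102, 108, 123, 113],
    "legacy_fc": [ 97, 112, 117, 120],
})

mae = lambda a, f: (df[f] - df[a]).abs().mean()

# Context 1: each tool against actual values.
print("new vs actual:   ", mae("actual", "new_fc"))
print("legacy vs actual:", mae("actual", "legacy_fc"))

# Context 2: new solution against the legacy tool. A gap here does not
# imply poor modeling; it may reflect different data or preprocessing.
print("new vs legacy:   ", mae("legacy_fc", "new_fc"))
```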
Another key learning is that forecasting systems only become effective in the long term when feedback processes are in place.
Successful applications rely on continuous input from users and the business. It is essential to define early who provides feedback, through which channels it is collected, and how forecast revisions are incorporated into the solution.
Experience also shows that business adoption often increases when the business takes ownership of the solution, whether financially or organizationally.
Forecast revisions are therefore not an exception but an expected outcome of real-world usage.
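One way to operationalize this, shown here as an illustrative sketch rather than a project implementation, is to record business revisions alongside the original system forecast so both remain traceable:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ForecastRecord:
    """Keeps the system forecast and any business revision side by side."""
    period: str
    system_value: float
    revised_value: float | None = None
    revised_by: str | None = None
    comment: str | None = None
    revised_at: datetime | None = None

    def revise(self, value: float, user: str, comment: str) -> None:
        # Revisions never overwrite the system forecast; both are retained
        # so later evaluation can compare model output and business input.
        self.revised_value = value
        self.revised_by = user
        self.comment = comment
        self.revised_at = datetime.now()

    @property
    def effective(self) -> float:
        return self.revised_value if self.revised_value is not None else self.system_value

rec = ForecastRecord(period="2025-01", system_value=120.0)
rec.revise(135.0, user="sales_planner", comment="Known promotion not in model")
print(rec.effective, rec.comment)
```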
One of the most critical decisions concerns the nature of the forecast itself:
Should the forecasting system produce independent predictions, or should it incorporate business expectations?
In project contexts, this was identified as a key tension: fully independent forecasts maximize objectivity but may deviate from business plans, while forecasts tailored to business expectations are easier to adopt but risk losing their independent, corrective value.
Clarifying this question is essential for acceptance. Without it, there is no clear definition of what constitutes a “correct” forecast.
This decision has proven to be a fundamental lever in HMS projects, shaping both model logic and business adoption.
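One possible middle ground, shown here only as an illustration, is an explicit, governed blend of the independent model forecast and the business plan; the weighting itself then becomes a visible governance decision:

```python
import pandas as pd

def blend(independent: pd.Series, business: pd.Series, weight: float) -> pd.Series:
    """Weighted blend of an independent model forecast with business expectations.

    weight = 1.0 keeps the forecast fully independent;
    weight = 0.0 reproduces the business plan exactly.
    """
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must be between 0 and 1")
    return weight * independent + (1.0 - weight) * business

# Hypothetical model forecast and business plan for three periods.
model_fc = pd.Series([100.0, 105.0, 110.0])
plan = pd.Series([110.0, 112.0, 115.0])
print(blend(model_fc, plan, weight=0.7))
```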
Forecasting projects typically start with global datasets such as sales figures. At the same time, local entities often maintain additional data sources.
In practice, the business expects these local inputs to be incorporated into forecasts, for example as predictive indicators.
A robust forecasting solution must therefore be capable of integrating a wide range of local data sources into a generalized modeling framework.
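The following sketch illustrates one such integration pattern with hypothetical tables: local indicators are joined onto the global sales backbone as optional features, so a single generalized model can still serve entities with and without local data.

```python
import pandas as pd

# Hypothetical global sales history, one row per entity and month.
sales = pd.DataFrame({
    "entity": ["DE", "DE", "FR", "FR"],
    "month": pd.to_datetime(["2025-01-01", "2025-02-01"] * 2),
    "sales": [100, 110, 80, 85],
})

# Local entities maintain their own leading indicators, e.g. order intake.
local = pd.DataFrame({
    "entity": ["DE", "DE", "FR"],
    "month": pd.to_datetime(["2025-01-01", "2025-02-01", "2025-01-01"]),
    "order_intake": [40, 45, 30],
})

# A left join keeps the global backbone intact; missing local data is
# tolerated so one generalized model can serve all entities.
features = sales.merge(local, on=["entity", "month"], how="left")
features["order_intake"] = features["order_intake"].fillna(features["order_intake"].median())
print(features)
```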
Forecasting projects can only be executed effectively if a clear target vision is defined. In one project, the desired target state was initially unclear, both with regard to automation and standardization and with regard to the underlying data foundation.
In forecasting, defining the objective goes beyond technical requirements. It includes clarifying what constitutes a “good” forecast:
Is the goal to optimize a specific metric?
Is the intention to replace or complement an existing tool?
Or is transparency of forecast behavior the primary objective?
Project experience shows that aligning expectations with internal stakeholders is essential before model performance can be meaningfully evaluated.
A recurring challenge in projects was uncertainty about data availability and delays caused by access and permission issues.
From a business perspective, expectations are clear: relevant data must be complete, up-to-date, and reliable, especially when forecasts inform decisions.
This can be summarized as follows: Without data, there are no results.
From an implementation perspective, this means clarifying data access and permissions early, validating completeness and timeliness before modeling begins, and treating data readiness as a deliverable in its own right.
Forecasting systems therefore depend as much on organizational data readiness as on model quality.
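A minimal sketch of automated readiness checks, with illustrative thresholds, that could run before any training job:

```python
import pandas as pd

def check_readiness(df: pd.DataFrame, date_col: str, max_lag_days: int = 35,
                    max_missing_ratio: float = 0.05) -> list[str]:
    """Basic completeness and freshness checks to run before training."""
    issues = []
    missing = df.isna().mean().max()
    if missing > max_missing_ratio:
        issues.append(f"missing ratio {missing:.1%} exceeds {max_missing_ratio:.0%}")
    lag_days = (pd.Timestamp.now() - df[date_col].max()).days
    if lag_days > max_lag_days:
        issues.append(f"latest data is {lag_days} days old (limit {max_lag_days})")
    return issues

# Hypothetical input table with a gap in the sales column.
df = pd.DataFrame({
    "date": pd.date_range("2025-01-01", periods=10, freq="D"),
    "sales": [1, 2, None, 4, 5, 6, 7, 8, 9, 10],
})
problems = check_readiness(df, "date")
print(problems or "data ready")
```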
In a proof-of-concept context, a clear recommendation emerged: focus more on visualizations and result presentation, and less on infrastructure.
This highlights an important insight: forecasting projects are not evaluated solely on technical sophistication, but on whether results are understandable and actionable for stakeholders.
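As an illustration, the following sketch produces a simple stakeholder-facing plot with an uncertainty band; the series and band widths are synthetic placeholders:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical history and forecast on a monthly grid.
idx = pd.date_range("2024-01-01", periods=18, freq="MS")
history = pd.Series(100 + np.arange(12) * 2.0, index=idx[:12])
forecast = pd.Series(124 + np.arange(6) * 2.0, index=idx[12:])
band = 5 + np.arange(6) * 1.5  # widening uncertainty, illustrative values

fig, ax = plt.subplots(figsize=(8, 3))
history.plot(ax=ax, label="history")
forecast.plot(ax=ax, label="forecast", linestyle="--")
ax.fill_between(forecast.index, forecast - band, forecast + band,
                alpha=0.2, label="uncertainty band")
ax.set_title("Forecast with uncertainty band")
ax.legend(); fig.tight_layout(); plt.show()
```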
Frequently asked questions

What does explainability mean in time series forecasting?
It refers to presenting forecasts in a way that users can understand, for example through explanations, visualizations, or interactive elements.

Why is explainability so important for acceptance?
Because forecasts are often compared with existing solutions, and deviations must be understood to build trust.

What causes deviations between a new forecasting solution and an existing tool?
They often result from differences in models, data sources, or preprocessing. They should always be evaluated in context using multiple metrics and actual values.

Why should forecast quality never be judged by a single metric?
Because no single metric fully captures forecast quality. Additional validation methods are required.

What keeps a forecasting system effective in the long term?
Primarily user feedback and iterative improvements based on real-world usage.

How important are local data sources?
They are often critical, as local entities hold additional data that improves forecast relevance.

Why does the decision between independent and business-tailored forecasts matter?
Because it defines how forecast accuracy and relevance are judged.

Which platforms and technologies were used in the analyzed projects?
Python, PySpark, AWS, Snowflake, Databricks, and Palantir Foundry.

Which forecasting methods were applied?
ARIMA, Prophet, Random Forest, and boosting models such as XGBoost, LightGBM, and CatBoost.

Why is a clear target vision essential?
Because unclear objectives lead to misaligned expectations and hinder project success.
Strong forecasting applications are not defined by a single algorithm but by the ability to make forecasts understandable, comparable, and operationally usable.
Project experience shows that acceptance is driven by understanding, forecast quality cannot be reduced to a single metric, and local data sources are often critical for real-world usability.
HMS therefore positions forecasting not as a purely modeling task, but as a combination of data integration, modeling, validation, and business adoption.
