Several Metrics to Evaluate Your Forecast Model
Forecasting is the process of predicting future values based on historical time series data.
There are many applications where forecasting matters, for example company revenue, stock prices, and weather prediction.
It is important to evaluate a forecast model, and the following metrics are commonly used:
Mean Absolute Error (MAE)
MAE measures the average magnitude of the errors in the forecast, where a lower MAE indicates better performance. We calculate MAE by taking the absolute difference between the forecast and the actual data, then averaging those differences over the forecast horizon.
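As a minimal sketch, here is how MAE can be computed with NumPy. The actual and forecast arrays are made-up values for illustration only.

```python
import numpy as np

# Made-up actual and forecast values for illustration.
actual = np.array([100.0, 102.0, 98.0, 105.0, 110.0])
forecast = np.array([98.0, 103.0, 100.0, 104.0, 108.0])

# MAE: average absolute difference over the forecast horizon.
mae = np.mean(np.abs(forecast - actual))
print(f"MAE: {mae:.2f}")
```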
Mean Absolute Percentage Error (MAPE)
MAPE measures the difference between the forecast and the actual data as a percentage, where a lower MAPE means better performance. It is calculated by dividing the absolute difference between the forecast and the actual data by the actual data, then averaging those percentages over the forecast horizon.
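A minimal sketch of the MAPE calculation with NumPy, reusing the same made-up data. Note that MAPE is undefined when the actual data contains zeros, because the calculation divides by the actual values.

```python
import numpy as np

# Made-up actual and forecast values for illustration.
actual = np.array([100.0, 102.0, 98.0, 105.0, 110.0])
forecast = np.array([98.0, 103.0, 100.0, 104.0, 108.0])

# MAPE: average absolute error relative to the actual value, in percent.
mape = np.mean(np.abs((forecast - actual) / actual)) * 100
print(f"MAPE: {mape:.2f}%")
```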
Root Mean Squared Error (RMSE)
The RMSE is measured by taking the square root of the average of the squared differences between the forecast and the actual data over the forecast horizon. A lower RMSE means better performance.
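A sketch of the RMSE calculation, again with made-up data. Because the errors are squared before averaging, RMSE penalizes large errors more heavily than MAE does.

```python
import numpy as np

# Made-up actual and forecast values for illustration.
actual = np.array([100.0, 102.0, 98.0, 105.0, 110.0])
forecast = np.array([98.0, 103.0, 100.0, 104.0, 108.0])

# RMSE: square root of the mean squared error over the forecast horizon.
rmse = np.sqrt(np.mean((forecast - actual) ** 2))
print(f"RMSE: {rmse:.2f}")
```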
Theil's U-Statistic
Theil's U measures the ratio of the RMSE of the forecast to the RMSE of a naive forecast (e.g., a forecast that uses the previous period's value as the prediction for the next period). A value of 0 means a perfect forecast, a value of 1 means the model performs no better than the naive forecast, and values above 1 mean the model performs worse than the naive forecast.
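A sketch of Theil's U with the same made-up data. Since the naive forecast (the previous actual value) is undefined for the first period, both RMSEs here are computed from the second period onward.

```python
import numpy as np

# Made-up actual and forecast values for illustration.
actual = np.array([100.0, 102.0, 98.0, 105.0, 110.0])
forecast = np.array([98.0, 103.0, 100.0, 104.0, 108.0])

# Naive forecast: the previous actual value predicts the next period.
naive = actual[:-1]

# Both RMSEs are aligned to periods 2..n, where the naive forecast exists.
rmse_model = np.sqrt(np.mean((forecast[1:] - actual[1:]) ** 2))
rmse_naive = np.sqrt(np.mean((naive - actual[1:]) ** 2))

theils_u = rmse_model / rmse_naive
print(f"Theil's U: {theils_u:.2f}")  # below 1 beats the naive forecast
```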
There are many other metrics you can use beyond these.
Each metric has strengths and weaknesses, so it is good practice to evaluate your model with more than one.
That is all for today! Please comment if you want to know something else from the Machine Learning and Python domains!