Prioritize
Once you have completed this section, you should be able to select the appropriate visuals and statistical metrics for assessing forecast skill, depending on the forecast type.
Read
In this lesson, we learned how to assess the skill of a forecast, depending on the forecast type. The assessment relies heavily on the question at hand and on the consequences of a bad forecast versus a good one; in other words, it is user-centric. Before you continue to the assignment, make sure you feel comfortable selecting the appropriate assessment for a given forecast type. Below, you will find one final note on strictly proper scores and a flowchart to help guide your decision making.
Strictly Proper Scores
If you choose not to go with cost minimization, there is a plethora of other error and skill metrics to choose from. Many of them, however, reward issuing forecasts that don't reflect the true probability distribution. For example, the MAE of a probability forecast improves if you forecast a 1 whenever the probability is greater than 0.5 and a 0 whenever it is 0.5 or less. Thus, destroying the available information about forecast uncertainty actually improves the score; MAE is not rewarding the right aspects of skill in this setting.
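To see this concretely, here is a minimal simulation sketch, assuming NumPy and binary events whose true probabilities are known; all variable names are illustrative, not from the lesson. It compares the MAE of honest probability forecasts against the same forecasts rounded to 0 or 1:

```python
import numpy as np

rng = np.random.default_rng(42)

n = 100_000
true_probs = rng.uniform(0.0, 1.0, n)       # underlying event probabilities (assumed known here)
outcomes = rng.binomial(1, true_probs)      # observed binary events (0 or 1)

honest = true_probs                          # forecast the true probability
rounded = (true_probs > 0.5).astype(float)   # hedge: forecast only 0 or 1

mae_honest = np.mean(np.abs(outcomes - honest))
mae_rounded = np.mean(np.abs(outcomes - rounded))

print(f"MAE of honest probability forecasts: {mae_honest:.3f}")   # ~0.333
print(f"MAE of forecasts rounded to 0/1:     {mae_rounded:.3f}")  # ~0.250
```

The rounded forecasts score noticeably better on MAE, even though they carry no information about forecast uncertainty.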
In contrast, the Brier score and cost minimization reward issuing your best estimate of the event probability as the forecast. Error and skill metrics that reward issuing a correct probability forecast are called 'strictly proper'.
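A quick way to convince yourself that the Brier score is strictly proper is to scan its expected value over candidate forecasts for a fixed event probability. The sketch below, again assuming NumPy and an arbitrary choice of p, shows that the minimum lands exactly at the true probability:

```python
import numpy as np

p = 0.3                          # true probability of the event occurring
f = np.linspace(0.0, 1.0, 101)   # candidate probability forecasts

# Expected Brier score of forecast f: the event occurs (outcome 1) with
# probability p, and does not occur (outcome 0) with probability 1 - p.
expected_brier = p * (f - 1.0) ** 2 + (1.0 - p) * f ** 2

best_forecast = f[np.argmin(expected_brier)]
print(f"Forecast minimizing expected Brier score: {best_forecast:.2f}")  # 0.30
```

Any forecast other than the true probability yields a strictly worse expected Brier score, which is precisely the 'strictly proper' property.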
Flowchart
I will end this lesson with a flowchart that should help guide your forecast assessment. Keep in mind that there are many more methods for assessing the skill of a forecast; you are not limited to those presented here.