The fallacy of Mean/Variance indicators.

I’d like to present a lesson by professor Ruggero Bertelli on deterministic statistical indicators and how they compare to the classic indicators of mean and variance.

LET’S SEE IT IN ACTION

Suppose an investment earned 10% in the first year, lost 10% the following year, earned another 10% in the third year, and lost 10% in the fourth. What would the return on the investment be at the end of the four years?

THINK ABOUT YOUR ANSWER BEFORE YOU KEEP READING

If your answer was zero, you may have forgotten that returns do not compound linearly: a loss is not cancelled by a gain of the same percentage. If I lose 50%, I have to earn 100% just to return to the initial value; if I lose 10%, I have to earn about 11% to break even. At the four-year mark, then, I would find myself with 1.1 × 0.9 × 1.1 × 0.9 = 0.9801 of my initial capital, a loss of roughly 2%.
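The arithmetic above can be checked in a few lines of Python (the return series and the loss values are just the figures from the example):

```python
# Compound the example's yearly returns: +10%, -10%, +10%, -10%.
returns = [0.10, -0.10, 0.10, -0.10]
growth = 1.0
for r in returns:
    growth *= 1 + r
print(f"Final capital: {growth:.4f} of initial")  # 0.9801, about a 2% loss

# Return needed to break even after a loss: 1 / (1 - loss) - 1
for loss in (0.10, 0.50):
    needed = 1 / (1 - loss) - 1
    print(f"After a {loss:.0%} loss you need {needed:.2%} to recover")
# a 10% loss needs about 11.11%; a 50% loss needs exactly 100%
```

The asymmetry comes from the division: the deeper the loss, the larger the denominator shrinks, so the recovery return grows faster than the loss itself.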


Now let’s talk about Standard Deviation from a “non-standard” point of view.

The standard deviation measures how data disperse around a position index (such as the mean); it is one of the few statistical indicators able to quantify fluctuation around the mean. In finance, especially in Italy, it is increasingly used to assess the risk of a financial instrument, on the reasoning that the higher the standard deviation, the higher the risk the investor runs. This association is very rough and misleading: the standard deviation is not an indicator of risk but of uncertainty. When it is very high, estimates about a financial instrument are not very reliable; when it is low, those estimates can be considered more accurate. First introduced by Pearson, the standard deviation is nothing but the square root of the variance (see Wikipedia for the mathematical formula). The main issue with the standard deviation, from a financial point of view, is not so much the
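As a minimal illustration of "square root of the variance" (the two return series below are invented for the example), note how two series with the same mean can have very different standard deviations:

```python
import statistics

# Two hypothetical yearly return series, both averaging 2% per year.
steady   = [0.02, 0.02, 0.02, 0.02]    # no dispersion around the mean
volatile = [0.20, -0.16, 0.30, -0.26]  # same mean, wide swings

for name, series in (("steady", steady), ("volatile", volatile)):
    mean = statistics.mean(series)
    # population standard deviation = sqrt(population variance)
    sd = statistics.pstdev(series)
    print(f"{name:8s} mean={mean:.2%}  std dev={sd:.2%}")
```

The mean alone cannot distinguish the two series; the standard deviation tells us how much confidence to place in that 2% as an estimate of what a single year might look like.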
