Abstract:
Prediction of events is a challenge in many different disciplines, from meteorology to finance: the more difficult this task is, the more complex the system. Nevertheless, even according to this restricted definition, a general consensus on what the correct indicator of complexity should be has not yet been reached. In particular, such a characterization is still lacking for systems whose time evolution is influenced by factors which are not under control and appear as random parameters or random noise. We show in this paper how to find the correct indicators of complexity in the information theory context. The crucial point is that the answer is twofold, depending on whether the random parameters are measurable or not. The content of this apparently trivial observation has often been ignored in the literature, leading to paradoxical results. Predictability is obviously larger when the random parameters are measurable; nevertheless, even in the opposite case, predictability improves when the unknown random parameters are time correlated.