That scientific modelling should take uncertainty into account is by now widely accepted. To that end, most scholars use probabilities to describe this uncertainty. However, it has been convincingly argued that there are cases in which this is not defensible. If these probabilities are to be used for high-risk decision making under uncertainty, as is required in medical diagnosis and self-driving cars, for example, this problem cannot be neglected. To better understand the problem, consider a sequence of binary data, consisting of zeros and ones, obtained for example through physical experiments. If one associates a probability with this data sequence, this implies that the sequence should adhere to probabilistic laws, the convergence of relative frequencies being a prime example. However, there are plenty of binary data sequences for which the relative frequency of ones does not converge at all. Fortunately, these kinds of limitations can be addressed by using probability intervals instead of precise probabilities, which then provide, for example, lower and upper bounds on non-stabilising frequencies. The aim of this proposal is to develop a methodology that can do this in general contexts. The first part consists in further developing notions of imprecise randomness that associate appropriate intervals with infinite sequences of data. The second part consists in developing efficient statistical methods for learning these intervals.
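To make the non-convergence phenomenon concrete, the following sketch constructs such a sequence: alternating blocks of ones and zeros with doubling lengths keep the running frequency oscillating forever, so that no single probability fits, while an interval still bounds the long-run behaviour. The construction and the specific bounds (roughly 1/3 and 2/3) are a standard illustrative choice, not taken from the proposal itself.

```python
def oscillating_bits(n_blocks):
    """Blocks of 1s then 0s with lengths 2, 4, 8, ... (doubling)."""
    bits = []
    for k in range(n_blocks):
        bits.extend([1 - k % 2] * 2 ** (k + 1))  # even-indexed blocks: all ones
    return bits

def running_frequency(bits):
    """Relative frequency of ones after each prefix of the sequence."""
    freqs, ones = [], 0
    for n, b in enumerate(bits, start=1):
        ones += b
        freqs.append(ones / n)
    return freqs

freqs = running_frequency(oscillating_bits(16))

# Frequencies at the end of each block: instead of settling, they keep
# alternating, approaching 2/3 after ones-blocks and 1/3 after zeros-blocks.
ends = [freqs[2 ** (k + 2) - 3] for k in range(16)]
print([round(f, 3) for f in ends[-4:]])  # → [0.667, 0.333, 0.667, 0.333]

lower, upper = min(ends[-4:]), max(ends[-4:])
print(f"long-run frequency trapped in [{lower:.3f}, {upper:.3f}]")
```

The relative frequency thus has no limit, yet its limit inferior and limit superior (here 1/3 and 2/3) are well defined, and it is exactly this kind of pair of bounds that a probability interval can capture.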