Yes. This is actually correct, so I’ve clarified what I meant in the context of the article, which was aimed at a lay audience and at the fundamentals of statistical research methods. But it’s worth the correction and clarification for the more capable reader, since the original phrasing was otherwise quite lazy. Thank you.

To be meta-mathematically precise, both the existence of a solution and its uniqueness are separately provable things. An existence proof demonstrates that the problem has at least one solution; a uniqueness proof demonstrates that a given solution is the only one.

The same is true, of course, in sets or in a “solution space” (e.g. a feasible region in something like a metric space). Had a hypothesis claimed that 2,4,8,1,1,1,1 was *the* next item in the sequence, the researcher would have failed to construct a valid null hypothesis (e.g. “that 2,4,8,1,1,1,1 is not *the* solution to the sequence 2,4,8,…”), because multiple solutions exist. Due to the existence of these other candidate solutions, the null hypothesis has not been disproven, so we cannot accept the alternative that 2,4,8,1,1,1,1 is *the only* solution. It is ‘a’ solution, but since the continuation of the sequence is not unique, we cannot accept it as *the* solution.
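To make the non-uniqueness concrete, here is a minimal sketch (plain Python, no assumptions beyond the standard library; the rule names are mine) of two perfectly valid generating rules that both produce 2, 4, 8 but disagree on the next term, so no single continuation can be *the* unique one:

```python
def powers_of_two(n):
    # Rule A: a(n) = 2**n  ->  2, 4, 8, 16, ...
    return 2 ** n

def quadratic_rule(n):
    # Rule B: a(n) = n**2 - n + 2, the unique quadratic through
    # (1, 2), (2, 4), (3, 8)  ->  2, 4, 8, 14, ...
    return n * n - n + 2

# Both rules reproduce the observed prefix exactly...
assert [powers_of_two(n) for n in (1, 2, 3)] == [2, 4, 8]
assert [quadratic_rule(n) for n in (1, 2, 3)] == [2, 4, 8]

# ...but diverge on the fourth term.
print(powers_of_two(4), quadratic_rule(4))  # 16 14
```

Any finite prefix admits infinitely many such rules (a polynomial can be fitted through any finite set of points), which is exactly why “16 is *the* next term” cannot survive a uniqueness proof.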

In any event, statistically, the EU referendum was a coin-flipping, Monte Carlo-style control problem, not automatically a sequencing one. Identifying a sequence from a series of samples is a good exercise, mind, especially using L-norms and Fourier analysis to more accurately approximate sampled data and to test whether a sample set is truly random. As the required precision increases, both Taylor and Maclaurin approximations can become inaccurate (unless the problem and its solution space are scaled down, say by segmenting them into more fine-grained ‘unit’ elements), which is why the study of L-norms and approximation is an important exercise for most applied mathematicians.
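On the Taylor/Maclaurin point, a small sketch (standard library only; the function names are my own) of how a truncated Maclaurin series for sin degrades far from the expansion point, and how rescaling the input into a small ‘unit’ interval restores accuracy with the very same series:

```python
import math

def maclaurin_sin(x, terms=8):
    # Truncated Maclaurin series: sin(x) ~ sum (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def sin_rescaled(x, terms=8):
    # Range-reduce x into [-pi/2, pi/2] first (one way of 'scaling down'
    # the problem), then apply the same truncated series.
    x = math.fmod(x, 2 * math.pi)
    if x > math.pi:
        x -= 2 * math.pi
    elif x < -math.pi:
        x += 2 * math.pi
    if x > math.pi / 2:
        x = math.pi - x        # sin(x) = sin(pi - x)
    elif x < -math.pi / 2:
        x = -math.pi - x       # sin(x) = sin(-pi - x)
    return maclaurin_sin(x, terms)

# Far from 0 the raw truncated series is wildly wrong;
# the rescaled version is accurate to near machine precision.
print(abs(maclaurin_sin(10) - math.sin(10)))  # error in the hundreds
print(abs(sin_rescaled(10) - math.sin(10)))   # tiny
```

The same truncation error term shrinks dramatically once the argument is mapped into the small interval, which is the intuition behind segmenting a solution space into finer unit elements.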