Chaos is frequently the concern of those who are worried
about a data set somehow "on the edge" of stability. Consider, for
example, a sample of size n from the Gaussian N(0,1). Then, we know that if
we compute the sample mean X̄ and variance S², then √n X̄/S has a
t distribution with n − 1 degrees of freedom. For n = 2, this is the
Cauchy distribution, which has neither expectation nor variance but is
symmetrical about zero. For n → ∞, this is N(0,1). Let us consider what
happens when we sample from the Cauchy t(1), adding on observations
one at a time and computing the sequential sample mean X̄_N = (1/N) Σᵢ xᵢ:
(a) Give plots of X̄_N for N going from 1 to 5000.
(b) What happens if you throw away the 10% smallest
observations and the 10% largest before computing X̄_N? (Show this for N = 10,
50, 100, and 5000.) This "trimming" is generally an easy way to
minimize the effects of long-tailed distributions.
(c) The Cauchy does not occur very often in nature
(although it is easy to design a hardwired version: just divide one
N(0,1) signal by another N(0,1) signal; but that would not be a good
design). Much more realistically, carry out (a) but for ν = 3.
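The three parts above can be sketched as a short simulation. The following is one possible approach, not the book's own solution: it uses NumPy's `standard_t` sampler (a t(1) draw is a standard Cauchy draw), a running-mean helper for part (a), and a simple symmetric-trimming helper for part (b); the function names and the fixed seed are choices made here for illustration. Plotting (e.g. with matplotlib) is left out to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the run is reproducible

def sequential_means(samples):
    """Running mean X_bar_N for N = 1, 2, ..., len(samples)."""
    return np.cumsum(samples) / np.arange(1, len(samples) + 1)

def trimmed_mean(samples, frac=0.10):
    """Mean after discarding the smallest and largest `frac` of the data."""
    x = np.sort(samples)
    k = int(frac * len(x))
    return x[k:len(x) - k].mean() if len(x) > 2 * k else x.mean()

# (a) Sequential sample means of Cauchy = t(1) draws, N = 1..5000.
#     These wander: the law of large numbers does not apply,
#     since the Cauchy has no expectation.
cauchy = rng.standard_t(df=1, size=5000)
means_t1 = sequential_means(cauchy)

# (b) 10% trimmed means at the requested sample sizes.
for N in (10, 50, 100, 5000):
    print(N, trimmed_mean(cauchy[:N]))

# (c) Same as (a) but for t(3), which does have a finite mean
#     and variance, so the running mean settles down near zero.
t3 = rng.standard_t(df=3, size=5000)
means_t3 = sequential_means(t3)
```

Trimming works here because the trimmed mean depends only on the central 80% of the ordered sample, so the occasional enormous Cauchy observation is simply discarded rather than dragging the average away.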