A crisis of confidence (intervals)
Lately I’ve taken to exploring some of the aggregated event statistics that we
have on file at Flight Data Services — and as a side project, I was using my
knowledge of statistics to develop a fairly straightforward anomaly detection
algorithm. My first approach was to compute the daily average of a particular
key point value (in this case, Acceleration Normal at Touchdown), and then
compute the mean and an arbitrary confidence interval (say, 99.5%). Data that
fell outside of this interval would then be marked as an anomaly and would
warrant further investigation. This is a pretty rudimentary statistical
approach to anomaly detection, but one that’s commonly applied in this context
— the only problem is that it’s wrong.
What does a confidence interval actually measure?
In this case, I’m going to be talking about the Gaussian distribution, but this applies to other distributions too. In basic inferential statistics, we try to use data generated from a sample of a population to make conclusions about the population as a whole. That is, we are implicitly making the assumption that the population’s data is generated by some underlying probability distribution. The problem is, the parameters of this distribution (i.e. in the Gaussian case, the mean and standard deviation) are unknown — we can only estimate those parameters by calculating statistics from samples of the population.
These parameters (i.e. mean and std. dev) are easy to calculate from sample data, but the samples are not necessarily representative of the underlying population — so these estimates have an inherent amount of uncertainty in them. One other thing that’s useful to note is that because of a neat statistical property known as the law of large numbers, we can be more confident that a large sample better represents the underlying population than a small one. So bigger is better — at least in statistics.
A confidence interval, then, is a mathematical concept that expresses the uncertainty in a particular estimate of the population’s distribution. Phrased another way, the upper 95% confidence bound is the point at which 97.5% of the “weight”^{1} of the probability density function lies below it (and of course, the opposite is true for the lower bound). If you haven’t seen this before, you might be thinking “why 97.5%?” Well, that’s because a confidence interval is two-sided — so $100\% - 95\% = 5\%$, and $\frac{5\%}{2} = 2.5\%$. Of course, then $100\% - 2.5\% = 97.5\%$.
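As a quick illustration of that arithmetic (a sketch using SciPy, not part of the original analysis), the two-sided bounds of a 95% interval sit at the 2.5th and 97.5th percentiles of the Gaussian:

```python
import scipy.stats as st

# a two-sided 95% interval leaves 2.5% in each tail, so the
# bounds sit at the 2.5th and 97.5th percentiles of the Gaussian
upper = st.norm.ppf(0.975)
lower = st.norm.ppf(0.025)
print(lower, upper)  # roughly -1.96 and 1.96
```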
What was I doing wrong?
In short, I was applying confidence intervals incorrectly because a confidence interval expresses the range of expected values of a sample mean. It’s a subtle detail (and one that is often missed, myself included!), but an important one. A 95% confidence interval suggests that if you sampled the population 100 times and recomputed the interval each time, the interval would contain the true population mean roughly 95 of those 100 times — and, loosely speaking, from a Bayesian perspective you can be 95% sure that the population mean lies within the 95% confidence interval. Because a confidence interval is about the expected range of values of an average, basing an anomaly detection algorithm on this kind of interval will lead to lots of false positives. If you think of each new flight as a new sample of one (as opposed to an average of several), you should hopefully be able to see why a confidence interval is too small — a single observation varies far more than an average of many observations does!
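To put a number on those false positives, here is a small simulation of my own (synthetic data, not the flight-data values): build a 95% confidence interval for the mean from a sample of 100 points, then check how often fresh single observations fall outside it.

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(42)

# a sample of 100 observations from a standard normal population
sample = rng.standard_normal(100)
se = sample.std(ddof=1) / np.sqrt(len(sample))  # std. error of the mean
z = st.norm.ppf(0.975)
lo, hi = sample.mean() - z * se, sample.mean() + z * se

# treat each new observation as a "sample of one"
new_points = rng.standard_normal(10_000)
flagged = np.mean((new_points < lo) | (new_points > hi))
print(flagged)  # the vast majority of perfectly ordinary points get flagged
```

The interval is only about ±0.2 standard deviations wide, so well over half of entirely ordinary observations land outside it.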
When to use it?
You should use a confidence interval if you’re attempting to express your confidence in a possible range for the expected values of a particular random variable. For example, if I wanted to estimate the average speed of cars driving past the office during the home-time rush every day (hint: it’s not fast), then I would record a large number of measurements and take the average — let’s say, 10 kilometres per hour. A confidence interval (in this case, let’s say a 95% confidence interval of ±2km/h) is simply a measure of my expectation of the range of values for the average speed — that is, 95% of the time, I would expect the average speed of a sample of similar cars to be between 8km/h and 12km/h.
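Sketching that example in code (the speed data here is synthetic, invented for illustration):

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(0)

# hypothetical speed measurements (km/h) during the home-time rush
speeds = rng.normal(loc=10, scale=3, size=200)

mean = speeds.mean()
se = speeds.std(ddof=1) / np.sqrt(len(speeds))

# 95% confidence interval for the *average* speed
lo, hi = st.norm.interval(0.95, loc=mean, scale=se)
print(round(lo, 1), round(hi, 1))
```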
What about the tolerance interval?
After a little bit of reading up on other anomaly detection techniques, I stumbled upon a couple of online resources that discussed some lesser-known intervals that have a slightly stronger statistical justification for building anomaly detection thresholds — and so produce much more sensible results. In this case I’m going to be talking about the tolerance interval — an interval that works as a sort of “set-and-forget” quantity that is estimated once and then can be reused for future observations (this differs from something like the prediction interval, which should technically be recomputed each time).
Instead of estimating the range of possible values that contains the mean of a population, tolerance intervals attempt to estimate the range of values that the entire population resides in. This is a conceptual departure from the usual confidence interval, because one must also provide a tolerance threshold as well as a confidence level. That is, a tolerance interval answers the question “what range will contain 99% of the population, with a 95% confidence?”. That essentially means that, 95% of the time, 99% of data points will lie within the tolerance interval.
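One way to see the conceptual difference is a back-of-envelope comparison (my own, using the same approximation as the code later in the post): the half-width of a confidence interval for the mean shrinks as the sample grows, while the tolerance factor settles near a fixed number of standard deviations.

```python
import numpy as np
import scipy.stats as st

for n in (30, 300, 3000):
    # half-width of a 95% CI for the mean, in units of the std. dev.
    ci = st.norm.ppf(0.975) / np.sqrt(n)

    # approximate tolerance factor for 99% coverage, 95% confidence
    v = n - 1
    z = st.norm.ppf(0.005)
    chi = st.chi2.ppf(0.05, v)
    tol = np.sqrt(v * (1 + 1 / n) * z ** 2 / chi)

    print(n, round(ci, 3), round(tol, 3))
```

The CI half-width drops from roughly 0.36 to 0.04 standard deviations as $n$ grows, while the tolerance factor stays around 2.6–3.4 — it bounds the population, not the mean.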
How is it defined?
This is where things start to get fun. As mentioned above, we need to provide two parameters to calculate the tolerance interval — and that’s because we now need to consider two probability distributions; the Gaussian (as before), and the Chi-squared distribution as well. In short, the Chi-squared distribution with $\nu$ degrees of freedom describes the probability distribution of a sum of squares of $\nu$ independent standard normal variables (here, $\nu = n - 1$, where $n$ is the number of data points in a sample). The tolerance interval $t_i$ can be approximated as follows, where $z_{\frac{1 - p}{2}}$ is the critical value of the Gaussian distribution taken at the proportion $p$ of the data that the interval should cover, $\gamma$ is the confidence level, and $\chi^2_{1 - \gamma,\, \nu}$ is the corresponding critical value of the Chi-squared distribution:

$$t_i \approx \sqrt{\frac{\nu \left(1 + \frac{1}{n}\right) z_{\frac{1 - p}{2}}^2}{\chi^2_{1 - \gamma,\, \nu}}}$$
An example
In Python, an approximation to the tolerance interval can be computed using just a few lines of code;
import numpy as np
import scipy.stats as st

def tolerance(prop, conf, n):
    # proportion `prop` of the population, confidence `conf`, sample size `n`
    v = n - 1
    z = st.norm.ppf((1 - prop) / 2)
    chi = st.chi2.ppf(1 - conf, v)
    return np.sqrt((v * (1 + 1 / n) * (z ** 2)) / chi)
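As a quick sanity check (my own numbers, not from the original post), the factor for the 1,200-point example that follows comes out at roughly 2.7 standard deviations — a little wider than the 2.58 you would get from a naive 99% Gaussian quantile:

```python
import numpy as np
import scipy.stats as st

def tolerance(prop, conf, n):
    v = n - 1
    z = st.norm.ppf((1 - prop) / 2)
    chi = st.chi2.ppf(1 - conf, v)
    return np.sqrt((v * (1 + 1 / n) * (z ** 2)) / chi)

# factor for 99% coverage at 95% confidence from 1,200 points
ti = tolerance(0.99, 0.95, 1200)
print(round(ti, 2))  # roughly 2.67
```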
And of course, applying this tolerance interval to 1,200 randomly-generated numbers yields the following bounds (note how few outliers there are!).
This figure was generated quite easily using the following Python code;
import numpy as np
import matplotlib.pyplot as plt

# generate synthetic data
n = 1200
X = np.linspace(1, n, n)
Y = np.random.randn(n)

# compute tolerance interval (99% coverage, 95% confidence)
ti = tolerance(0.99, 0.95, n)

# denormalise interval using the sample standard deviation
d = Y.std() * ti

# compute actual bounds on the real line
lb = Y.mean() - d
ub = Y.mean() + d

# plot!
plt.scatter(X, Y, alpha=0.5)
plt.axhline(lb)
plt.axhline(ub)
plt.show()
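To back up the “note how few outliers” claim, you can count them directly (a quick self-contained check of my own, with fresh synthetic data and the same approximation):

```python
import numpy as np
import scipy.stats as st

rng = np.random.default_rng(1)
n = 1200
Y = rng.standard_normal(n)

# tolerance factor for 99% coverage at 95% confidence
v = n - 1
z = st.norm.ppf(0.005)
chi = st.chi2.ppf(0.05, v)
ti = np.sqrt(v * (1 + 1 / n) * z ** 2 / chi)

lb = Y.mean() - Y.std() * ti
ub = Y.mean() + Y.std() * ti
outliers = int(((Y < lb) | (Y > ub)).sum())
print(outliers)  # typically a handful out of 1,200
```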
In summary
Tolerance intervals are a very powerful statistical tool for creating “set-and-forget” thresholds for performing rudimentary anomaly detection on certain types of numerical data. There are some great resources on these statistical concepts^{2}^{3}^{4} (and I’ve only scraped the surface), so if my explanation falls a little bit short, I’d check those out too.
References and Footnotes

I know that’s the wrong word (sorry, mathematicians) ↩