When we make a measurement, we can never be sure that it is 100% "correct". All devices, ranging from a simple ruler to the experiments at CERN, have a limit in their ability to determine a physical quantity.
Error analysis attempts to quantify this in a meaningful way, allowing us to ask meaningful questions about the quality of the data.
It should be noted that in Physics the words "error" and "uncertainty" are used almost interchangeably. This is not ideal, but get used to it!
We need to identify the following types of errors: random errors, systematic errors, and outright mistakes.
Clearly you shouldn't be making any mistakes in your lab ("human error" is not an acceptable source of error!)
Fig. 1: The range of a ball-bearing launched from a spring-loaded projectile launcher. Repetition of the experiment under the same conditions yields different results; therefore there is an error associated with this quantity.
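The spread seen in Fig. 1 can be quantified with the mean and standard error of the repeated measurements. A minimal sketch in Python, using made-up range values purely for illustration:

```python
import statistics

# Hypothetical repeated measurements of the ball-bearing range (in metres);
# these values are illustrative, not real data from the experiment.
ranges = [2.31, 2.45, 2.38, 2.29, 2.42, 2.36]

mean = statistics.mean(ranges)
# Standard error of the mean = sample standard deviation / sqrt(N)
std_err = statistics.stdev(ranges) / len(ranges) ** 0.5

print(f"range = {mean:.2f} ± {std_err:.2f} m")
```

Quoting the result as mean ± standard error captures the scatter between repeated launches in a single number.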
These are two terms that have very different meanings in experimental physics, and we need to be able to distinguish between an accurate measurement and a precise measurement. An accurate measurement is one in which the results of the experiment agree with the 'accepted' value. Note that this only applies to experiments where that is the goal, such as measuring the speed of light. A precise measurement is one with a small random error, i.e. one that we can quote to a large number of significant figures.
Fig. 2: Diagrams illustrating the differences between accuracy and precision
On the standard error page, we dealt with calculating a statistical error from multiple measurements. However, what do we do if we want to know the random error in an individual measurement, such as from a ruler or a multimeter?
There are two very simple rules to follow, which hold in most cases.
If your measuring device has an analogue scale (e.g. rulers, vernier scales, etc.) then the error is ± half the smallest increment.
For example, take the ruler in figure 1: the error on this instrument is ±0.5 mm (using the metric scale).
Fig. 1: An image of a ruler.
If your measuring device has a digital scale (e.g. multimeters) then the error is ± the smallest increment displayed, i.e. ±1 in the last digit.
For example, take the image of the multimeter shown in figure 2, the error in the measurement would be ±0.001 V.
Fig. 2: An image of a multimeter.
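The two reading-error rules above are easy to encode. A minimal sketch (the function name and the example values are illustrative):

```python
def reading_error(smallest_increment, scale="analogue"):
    """Reading error for a single measurement.

    Analogue scales (rulers, vernier scales): ± half the smallest increment.
    Digital scales (multimeters): ± one unit in the last displayed digit.
    """
    if scale == "analogue":
        return smallest_increment / 2
    return smallest_increment

# Ruler with 1 mm divisions:
print(reading_error(1.0))               # 0.5, i.e. ±0.5 mm
# Multimeter displaying to 0.001 V:
print(reading_error(0.001, "digital"))  # 0.001, i.e. ±0.001 V
```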
In Physics, we often combine multiple variables into a single quantity. The question is, how do we calculate the error in this new quantity from the errors in the initial variables?
There are two main approaches to solving this: the functional approach and the calculus approximation.
The functional approach simply states that if we have a function of the form Z = f(A), where f() is a general function, then it follows that the error in Z is

αZ = f(A + αA) − f(A)   (equation 1)
In most circumstances we assume that f(A) is symmetric about its mean. However, this is not always true and sometimes we have different error bars in the positive and negative directions.
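As a sketch of the functional approach, the helper below evaluates f at A ± αA and returns the positive and negative errors separately, which may differ for a non-linear f. The function name and the example Z = A² are illustrative, not taken from the text:

```python
def functional_errors(f, A, alpha_A):
    """Propagate the error in A through Z = f(A) using the functional
    approach: evaluate f at A ± alpha_A and compare with f(A).
    Returns (positive error, negative error)."""
    Z = f(A)
    return f(A + alpha_A) - Z, Z - f(A - alpha_A)

# Illustrative example: Z = A**2 with A = 10 ± 1
plus, minus = functional_errors(lambda a: a**2, 10.0, 1.0)
print(plus, minus)  # 21.0 19.0 -- asymmetric because f is non-linear
```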
From the functional approach, described above, we can make a calculus-based approximation for the error. This is done by considering a Taylor series expansion of equation 1. Assuming αA is small relative to the mean of A, we can deduce equation 2:

f(A + αA) ≈ f(A) + αA (df/dA)   (equation 2)
If we recall that Z = f(A), then equation 3 follows from a substitution of equation 2 into equation 1:

αZ = αA |dZ/dA|   (equation 3)
It should be stressed that this equation is only valid for small errors. However, in most cases this is valid and so the calculus approximation is a good way to propagate errors.
Another important point to note is that the calculus approach is only useful when the derivative dZ/dA can be evaluated and gives a manageable expression. For less convenient functions, such as Z = arcsin(A), the functional approach may be easier to apply.
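As an illustration, the snippet below compares the calculus approximation against the functional approach for Z = arcsin(A); the chosen values of A and αA are arbitrary:

```python
import math

A, alpha_A = 0.5, 0.01

# Calculus approximation: alpha_Z ≈ |dZ/dA| * alpha_A,
# with d(arcsin A)/dA = 1 / sqrt(1 - A**2)
alpha_Z_calc = alpha_A / math.sqrt(1 - A**2)

# Functional approach for comparison
alpha_Z_func = math.asin(A + alpha_A) - math.asin(A)

print(alpha_Z_calc, alpha_Z_func)
```

For a small error like this the two methods agree closely; the gap grows as αA becomes large, which is where the small-error assumption behind equation 3 breaks down.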
Systematic errors are a constant bias introduced into all of your results. Unlike random errors, which can be reduced by repeated measurements, systematic errors are much more difficult to combat and cannot be detected by statistical means. They cause the measured quantity to be shifted away from the 'true' value.
When you design an experiment, you should design it in a way so as to minimise systematic errors. For example, when measuring electric fields you might surround the experiment with a conductor to keep out unwanted fields. You should also calibrate your instruments, that is use them to measure a known quantity. This will help tell you the magnitude of any remaining systematic errors.
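A calibration check of this kind might be sketched as follows; all numbers and variable names here are hypothetical:

```python
# Measure a known standard, estimate the systematic offset, and
# correct subsequent readings. Values are made up for illustration.
known_value = 1.000          # e.g. a 1.000 V reference source
instrument_reading = 1.023   # what the instrument actually displays

offset = instrument_reading - known_value  # estimated systematic error

raw_data = [2.115, 2.098, 2.131]
corrected = [x - offset for x in raw_data]
print(corrected)
```

Note that this only removes a constant offset; more complicated systematic effects (e.g. a scale error) would need a different correction model.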
One famous unchecked systematic error towards the end of the 20th century was the incorrect grinding of the Hubble Space Telescope's main mirror (shown in figure 1), resulting in an unacceptable level of spherical aberration. The mirror was perfectly ground, but to the wrong curve. This led to the sacrifice of an instrument in order to fit corrective optics into the telescope on the first Shuttle maintenance mission (STS-61) to the orbiting observatory.
Importantly, this episode illustrates that although some simple systematic errors can be corrected after the data have been collected, others need new data and hardware to compensate for failings in experimental design.
Fig. 1: The Hubble main mirror at the start of its grinding in 1979.