Report writing guidance
· Section 1: Page limits, presentation and format, and Lab Report templates
· Section 2: Report Structure
· Section 3: Written style
· Section 4: Referencing
· Section 5: Equations, symbols and units
· Section 6: Figures, tables and graphs
· Section 7: Errors
What are errors?
When we make a measurement, we can never be sure that it is 100% "correct". All devices, ranging from a simple ruler to the accelerators at CERN, have a limit in their ability to determine a physical quantity.
Error analysis attempts to quantify this uncertainty in a meaningful way and lets us interrogate the quality of the data. We can ask:
· Do the results agree with theory?
· Are they reproducible?
· Has a new phenomenon or effect been observed?
NB: In physics, the words "error" and "uncertainty" are often used interchangeably. They are distinctly different and ideally should not be conflated, but it is something to get used to!
· In reality, "error" refers to the difference between the quoted result and the accepted/true value being measured, while "uncertainty" refers to the range of values your result could take given the capabilities of your equipment. See more on this here.

Accuracy and Precision
Accuracy and precision are two terms that are often used incorrectly. In experimental physics they have very different meanings, and it is important to be able to distinguish between them.
· An accurate measurement is one in which the results of the experiment are in agreement with an "accepted" value. This can only apply to experiments where there is a known value to compare to – measuring the speed of light, for example.
o NB: As experiments become more advanced at higher levels, there are fewer known values to compare to, so this is no longer the case!
· A precise measurement is one where the spread of the measurements is small, centring strongly around one value, so that it can be quoted to a large number of decimal places.
Observe the diagrams in figure 1 to see how accuracy and precision look in experimental data.
Fig. 1: Diagrams illustrating the differences between accuracy and precision
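The distinction can also be seen numerically: an accurate-but-imprecise data set is centred on the true value with a large spread, while a precise-but-inaccurate one is tightly clustered away from it. A minimal sketch, assuming Python with numpy (the values and sample sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the sketch is reproducible
true_value = 10.0               # the 'accepted' value being measured

# Accurate but imprecise: centred on the true value, with a large spread.
accurate_imprecise = rng.normal(loc=true_value, scale=1.0, size=1000)

# Precise but inaccurate: small spread, but centred away from the true value.
precise_inaccurate = rng.normal(loc=true_value + 1.0, scale=0.1, size=1000)

print(f"accurate/imprecise: mean = {accurate_imprecise.mean():.2f}, "
      f"spread = {accurate_imprecise.std():.2f}")
print(f"precise/inaccurate: mean = {precise_inaccurate.mean():.2f}, "
      f"spread = {precise_inaccurate.std():.2f}")
```

The mean tells you about accuracy (how close you are to the accepted value); the spread tells you about precision.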

Types of Errors
In order to properly analyse our data, we need to identify the following types of errors:
· Systematic errors – these influence the accuracy of a result.
· Random errors – these influence the precision of a result.
· Mistakes – bad data points.
It should be noted that "human error" is not an acceptable source of error in physics and data analysis! These are what you would refer to as "mistakes", which are generally not reported in technical documents and should be avoided altogether.
Fig. 2: The range of a ball bearing launched from a spring-loaded projectile launcher. Repeating the experiment under the same conditions yields different results, so there is an error associated with this quantity.
Systematic Errors
Introduction
Systematic errors are a constant bias introduced into all your results, arising from the method you use to carry out your experiment. Unlike random errors, which can be reduced by repeated measurements, systematic errors are much more difficult to combat and cannot be detected by statistical means. They cause the measured quantity to be shifted away from the 'true' value.
Dealing with systematic errors
When you design an experiment, you should design it in a way that minimises systematic errors. This often includes measuring and accounting for background effects. For example, when measuring electric fields, you might surround the experiment with a conductor to keep out unwanted fields. You should also calibrate your instruments, that is, use them to measure a known quantity; this will help tell you the magnitude of any remaining systematic errors.
Another way to account for a systematic error is to measure a quantity as a function of something else (e.g. current as a function of voltage), plot that relationship, and derive the final result from the gradient of the plot. The individual values may be affected by the systematic error, but the relationship between the quantities will still be accurate.
A famous systematic error
One famous unchecked systematic error towards the end of the 20th century was the incorrect grinding of the Hubble Space Telescope's main mirror (shown in figure 3). The mirror was perfectly ground, but to the wrong curve, resulting in an unacceptable level of spherical aberration. This led to the sacrifice of another instrument in order to fit corrective optics into the telescope on the first Shuttle maintenance mission (STS-61) to the orbiting observatory.
Importantly, this episode illustrates that, although some simple systematic errors can be corrected after the data has been collected, others need new data and hardware to compensate for failings in experimental design.
Fig. 3: The Hubble main mirror at the start of its grinding in 1979.
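The gradient method can be sketched numerically: adding a constant offset (a systematic error) to every reading shifts the fitted intercept but leaves the gradient untouched. A minimal sketch, assuming Python with numpy and a hypothetical Ohm's-law measurement (the 220 Ω resistance and 0.5 V offset are invented for illustration):

```python
import numpy as np

# Hypothetical Ohm's-law experiment: V = I * R with a 'true' R of 220 ohms.
current = np.linspace(0.01, 0.10, 10)   # A
true_voltage = 220.0 * current          # V, ideal readings

# A miscalibrated voltmeter adds a constant +0.5 V offset to every
# reading: a systematic error.
measured_voltage = true_voltage + 0.5

# Fit a straight line V = m*I + c to the biased data.
m, c = np.polyfit(current, measured_voltage, 1)

print(f"gradient (resistance) = {m:.1f} ohm")  # 220.0, unaffected by the offset
print(f"intercept             = {c:.2f} V")    # 0.50, absorbs the systematic error
```

The constant bias is absorbed entirely by the intercept, so the resistance derived from the gradient is unaffected.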
Random Errors – Measurement Uncertainty
On the standard error page, we dealt with calculating a statistical error from multiple measurements. Minimising random errors is a primary concern of experimental physics: they are harder to control than mistakes and manifest as the scattering of repeated measurements over a range (as seen in figure 2). More random error leads to lower precision.
· NB: Repeated measurements are not always the best solution. Recommending repeat measurements should be justified, and averaging poor-quality data will not necessarily give a good-quality result.
When taking individual measurements, such as from a ruler or a multimeter, the random error is quantified by the uncertainty on each measurement. There are two simple rules to follow, which in most cases will be true.
· IMPORTANT: Above all else, use your best judgement about the uncertainty of your measurements. If a digital reading is fluctuating wildly or an analogue scale is hard to read, make sure your recorded error reflects that.
Analogue Devices
If your measuring device has an analogue scale in the form of tick marks (e.g. rulers, vernier scales, etc.) then the error is ± half of the smallest increment.
For example, for the ruler in figure 4, the error on this instrument is ±0.5 mm (using the metric scale).
Fig. 4: An image of the metric side of a ruler.
Digital Devices
If your measuring device has a digital readout (e.g. multimeters) then the error is ± 1 unit of the lowest significant figure.
For example, for the multimeter shown in figure 5, the error in the measurement would be ±0.001 V.
Fig. 5: An image of a multimeter.
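The two rules amount to simple arithmetic, sketched here as hypothetical helper functions (Python assumed; the function names are invented for illustration):

```python
def analogue_uncertainty(smallest_increment):
    """Rule of thumb for tick-mark scales: +/- half the smallest increment."""
    return smallest_increment / 2.0

def digital_uncertainty(last_digit):
    """Rule of thumb for digital readouts: +/- one unit of the last digit."""
    return last_digit

# A ruler with 1 mm tick marks:
print(analogue_uncertainty(1.0))   # 0.5 (mm)
# A multimeter reading to 0.001 V:
print(digital_uncertainty(0.001))  # 0.001 (V)
```

Remember that these are defaults, not laws: if a reading fluctuates or a scale is hard to read, quote a larger uncertainty that reflects what you actually observed.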
Error Analysis
In physics, we often combine multiple measured variables into a single stated result. The question is: how do we calculate the error in this new quantity from the errors in the initial variables? The answer is error propagation – a systematic way of carrying the errors through your calculations. There are two main approaches: the functional approach and the calculus approximation.
The following applies to single-variable functions only. For information on multivariable functions, see Measurements and their Uncertainties by I. G. Hughes and T. P. A. Hase (OUP, 2010).
Functional Approach
This approach states that the error on the final result Z is the function's value including the error minus the function's value without it. In other words, if we have a function of the form Z = f(A), where f is a general function, then it follows that
α_Z = f(Ā + α_A) − f(Ā),   (1)
where
· α_Z is the error in Z,
· Ā is the mean of A,
· α_A is the standard error of A.
In most circumstances we assume that the error on A is symmetric about the mean. However, this is not always true, and sometimes the error bars in the positive and negative directions are different sizes.
Calculus Approximation
From the functional approach, we can make a calculus-based approximation for the error. This is done by considering a Taylor series expansion of equation 1. Assuming α_A is small relative to the mean Ā, we can deduce
f(Ā + α_A) ≈ f(Ā) + (dZ/dA)·α_A,   (2)
where the derivative is evaluated at Ā. Recalling that Z = f(A), substituting equation 2 into equation 1 gives
α_Z ≈ |dZ/dA|·α_A.   (3)
It should be stressed that equation 3 is only valid for small errors. In most cases this condition holds, so the calculus approximation is a good way to propagate errors. Another important point is that the calculus approach is only useful when the derivative is solvable. With non-trivial functions such as Z = arcsin(A), the functional approach would be more useful.
There are useful lookup tables in Hughes and Hase (2010) for simple functions and their associated errors using the calculus-based approach.
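The two approaches can be compared numerically. A minimal sketch in Python for the hypothetical case Z = A², with A = 3.00 ± 0.05 (values invented for illustration); for small errors the two results agree closely:

```python
# Compare the functional approach and the calculus approximation
# for Z = f(A) = A**2, with a hypothetical measurement A = 3.00 +/- 0.05.
A_bar = 3.00     # mean of A
alpha_A = 0.05   # standard error of A

def f(a):
    return a**2

# Functional approach: alpha_Z = f(A_bar + alpha_A) - f(A_bar)
alpha_Z_functional = f(A_bar + alpha_A) - f(A_bar)

# Calculus approximation: alpha_Z = |dZ/dA| * alpha_A, with dZ/dA = 2A
alpha_Z_calculus = abs(2 * A_bar) * alpha_A

print(f"functional: {alpha_Z_functional:.4f}")  # 0.3025
print(f"calculus:   {alpha_Z_calculus:.4f}")    # 0.3000
```

The small discrepancy (the α_A² term dropped in the Taylor expansion) shrinks as the relative error gets smaller, which is why the calculus approximation is usually adequate.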