Systematic error

From Wikipedia, the free encyclopedia


Systematic errors are biases in measurement that lead to measured values being consistently too high or too low. All measurements are prone to systematic error. A systematic error is any biasing effect, in the environment, the methods of observation or the instruments used, that introduces error into an experiment and always affects the results in the same direction. For example, distance measured by radar will be in error if the slight slowing of the waves in air is not accounted for, and the oscillation frequency of a pendulum will be in error if slight movement of its support is not accounted for. Incorrect zeroing of an instrument, leading to a zero error, is an example of systematic error in instrumentation, as is a clock running fast or slow. See also biased sample, observational error, and errors and residuals in statistics.
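The zero-error example can be illustrated with a short numerical sketch: a constant offset shifts every reading in the same direction, while random error merely scatters the readings about the true value. The offset and noise figures below are invented for illustration only.

    import random

    true_value = 10.0   # the quantity actually being measured (arbitrary units)
    zero_error = 0.3    # constant offset from incorrect zeroing (assumed value)
    noise_sd = 0.05     # spread of the random error (assumed value)

    # Every reading is shifted the same way by the zero error,
    # while the random error differs from reading to reading.
    readings = [true_value + zero_error + random.gauss(0, noise_sd)
                for _ in range(5)]
    print(readings)     # all readings cluster around 10.3, not 10.0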

Constant systematic errors are very difficult to deal with, because their effects are observable only if they can be removed. Such errors cannot be removed by repeating measurements or by averaging large numbers of results. A common way to remove a systematic error is to measure a known quantity or process, i.e. to calibrate the instrument. Another is to repeat the measurement with more accurate equipment.
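A minimal sketch of this idea, assuming the systematic error is a constant additive offset: averaging many readings reduces the random error but leaves the bias untouched, whereas measuring a standard of known value estimates the offset, which can then be subtracted from later readings. All numbers below are illustrative.

    import random

    def measure(true_value, offset=0.3, noise_sd=0.05):
        # Simulated instrument with a constant systematic offset plus random noise.
        return true_value + offset + random.gauss(0, noise_sd)

    # Averaging reduces the random error but not the systematic offset.
    readings = [measure(10.0) for _ in range(1000)]
    print(sum(readings) / len(readings))           # about 10.3, still biased

    # Calibration: measure a standard whose value is known independently.
    known_standard = 5.0
    cal_readings = [measure(known_standard) for _ in range(100)]
    estimated_offset = sum(cal_readings) / len(cal_readings) - known_standard

    # Correct a subsequent measurement by subtracting the estimated offset.
    corrected = measure(10.0) - estimated_offset   # about 10.0
    print(corrected)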

Contents

  • 1 Drift
  • 2 Notes
  • 3 References
  • 4 See also

Drift

Systematic errors that change during an experiment (drift) are easier to detect: the measurements show a trend with time rather than varying randomly about a mean.

Drift is evident when a measurement of a constant quantity is repeated several times and the readings move steadily in one direction during the experiment, for example if each reading is higher than the last, as can happen when an instrument warms up during the experiment. If the measured quantity is variable, drift can be detected by checking the zero reading during the experiment as well as at the start (the zero reading is itself a measurement of a constant quantity). If the zero reading is consistently above or below zero, a systematic error is present. If it cannot be eliminated, for instance by resetting the instrument immediately before the experiment, it must be allowed for by subtracting its (possibly time-varying) value from the readings and by taking it into account when assessing the accuracy of the measurement, as in the sketch below.
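One way to allow for a drifting zero reading is to record it at intervals, fit a trend to it, and subtract the interpolated zero offset from each measurement. The sketch below assumes the drift is approximately linear; the times and readings are invented for illustration.

    import numpy as np

    # Zero readings taken at known times during the experiment (illustrative data).
    zero_times = np.array([0.0, 10.0, 20.0, 30.0])       # minutes
    zero_readings = np.array([0.00, 0.02, 0.05, 0.07])   # instrument units

    # Fit a straight line to the zero readings (assumed linear drift).
    slope, intercept = np.polyfit(zero_times, zero_readings, 1)

    # Measurements of the quantity of interest, with the times they were taken.
    meas_times = np.array([5.0, 15.0, 25.0])
    raw_measurements = np.array([3.12, 3.15, 3.17])

    # Subtract the interpolated (time-varying) zero offset from each reading.
    corrected = raw_measurements - (slope * meas_times + intercept)
    print(corrected)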

If no pattern is evident in a series of repeated measurements, fixed systematic errors can be found only by checking the measurements, either by measuring a known quantity or by comparing the readings with those made using a different apparatus known to be more accurate. For example, suppose that timing a pendulum several times with a stopwatch gives readings randomly distributed about the mean. A systematic error is nevertheless present if the stopwatch, when checked against the 'speaking clock' of the telephone system, is found to be running fast or slow; the pendulum timings then need to be corrected according to how fast or slow the stopwatch was running. Measuring instruments such as ammeters and voltmeters need to be checked periodically against known standards.
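The correction for a stopwatch found to run fast or slow is a simple rate factor. In the sketch below the stopwatch is assumed to lose 2 seconds per hour against the reference clock; all figures are illustrative.

    # Stopwatch checked against a reference clock (illustrative figures).
    reference_interval = 3600.0   # seconds elapsed on the reference clock
    stopwatch_interval = 3598.0   # seconds shown by the stopwatch over the same interval

    # The stopwatch runs slow, so its readings understate the true time.
    rate_factor = reference_interval / stopwatch_interval

    # Correct pendulum timings recorded with the stopwatch.
    raw_timings = [2.004, 2.001, 2.006]                 # seconds per oscillation, as read
    corrected_timings = [t * rate_factor for t in raw_timings]
    print(corrected_timings)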

Systematic errors can also be detected by measuring already-known quantities. For example, a spectrometer fitted with a diffraction grating may be checked by using it to measure the wavelengths of the D-lines of the sodium emission spectrum, which are at 589.0 and 589.6 nm. The measurements can be used to determine the number of lines per millimetre of the grating, which can then be used to measure the wavelength of any other spectral line.
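The relation underlying this check is the grating equation, d sin(theta) = m * lambda. A short sketch of the procedure, using the known sodium D-line wavelength to calibrate the grating spacing and then using that spacing for an unknown line; the diffraction angles below are invented for illustration.

    import math

    # Grating equation: d * sin(theta) = m * wavelength  (first order: m = 1)
    m = 1
    known_wavelength = 589.0e-9                  # sodium D-line, metres
    measured_angle_known = math.radians(20.5)    # illustrative diffraction angle

    # Calibration: determine the grating spacing d from the known line.
    d = m * known_wavelength / math.sin(measured_angle_known)
    lines_per_mm = 1e-3 / d
    print(f"{lines_per_mm:.0f} lines per millimetre")

    # Use the calibrated spacing to find an unknown wavelength from its angle.
    measured_angle_unknown = math.radians(23.0)  # illustrative
    unknown_wavelength = d * math.sin(measured_angle_unknown) / m
    print(f"{unknown_wavelength * 1e9:.1f} nm")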