Approximation error

From Wikipedia, the free encyclopedia

In the mathematical field of numerical analysis, the approximation error in some data is the discrepancy between an exact value and some approximation to it. An approximation error can occur because

  1. the measurement of the data is not precise (due to the instruments), or
  2. approximations are used instead of the real data (e.g., 3.14 instead of π).

The numerical stability of an algorithm in numerical analysis indicates how the error is propagated by the algorithm.
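
As a brief illustration (a sketch added here, not taken from the article), the following Python snippet shows how an unstable formulation can propagate rounding error: sqrt(x + 1) - sqrt(x) is computed both naively and in an algebraically equivalent, more stable form. The variable names and the chosen value of x are assumptions made only for this example.

    import math

    x = 1.0e12

    # Naive form: subtracting two nearly equal square roots cancels most
    # significant digits, so the rounding error is greatly amplified.
    naive = math.sqrt(x + 1) - math.sqrt(x)

    # Algebraically equivalent rewriting that avoids the cancellation.
    stable = 1.0 / (math.sqrt(x + 1) + math.sqrt(x))

    print(naive)   # accurate to only a few significant digits
    print(stable)  # close to the true value, about 5.0e-7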

Contents

  • 1 Overview
  • 2 Definitions
  • 3 See also
  • 4 References

Overview

One commonly distinguishes between the relative error and the absolute error. The absolute error is the magnitude of the difference between the exact value and the approximation. The relative error is the absolute error divided by the magnitude of the exact value. The percent error is the relative error expressed as a percentage.

As an example, if the exact value is 50 and the approximation is 49.9, then the absolute error is 0.1 and the relative error is 0.002. The relative error is often used to compare approximations of numbers of widely differing size; for example, approximating the number 1,000 with an absolute error of 3 is in most applications much worse than approximating the number 1,000,000 with an absolute error of 3: in the first case the relative error is 0.003 and in the second it is only 0.000003.
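
As a minimal sketch (an illustration added here, not part of the article), the following Python function computes the absolute, relative, and percent errors for the examples above; the function name errors is an assumption for this example only.

    import math

    def errors(exact, approx):
        """Return the (absolute, relative, percent) error of approx with respect to exact."""
        absolute = abs(approx - exact)
        relative = absolute / abs(exact)        # defined only for exact != 0
        return absolute, relative, relative * 100

    # Example from the text: exact value 50, approximation 49.9.
    print(errors(50, 49.9))             # approximately (0.1, 0.002, 0.2)

    # Same absolute error of 3 at widely differing magnitudes.
    print(errors(1_000, 997))           # relative error 0.003
    print(errors(1_000_000, 999_997))   # relative error 0.000003

    # Using 3.14 in place of pi, as in the introduction.
    print(errors(math.pi, 3.14))        # relative error about 0.0005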

Definitions

Given some value a and its approximation b, the absolute error is

\epsilon = |b - a|\,

where the vertical bars denote the absolute value. If a ≠ 0, the relative error is

\eta = \frac{|b-a|}{|a|},

and the percent error is

\delta = \frac{|b-a|}{|a|}\times{}100\%.

These definitions can be extended to the case when a and b are n-dimensional vectors, by replacing the absolute value with a 2-norm.[1]
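
To illustrate the vector case, here is a hedged sketch using NumPy, whose numpy.linalg.norm computes the 2-norm of a vector by default; the function name vector_errors and the sample vectors are assumptions made for the example.

    import numpy as np

    def vector_errors(a, b):
        """Absolute and relative error of the approximation b to the vector a, using the 2-norm."""
        absolute = np.linalg.norm(b - a)         # epsilon = ||b - a||_2
        relative = absolute / np.linalg.norm(a)  # eta = ||b - a||_2 / ||a||_2, requires a != 0
        return absolute, relative

    a = np.array([1.0, 2.0, 2.0])   # exact vector, ||a||_2 = 3
    b = np.array([1.1, 2.0, 2.0])   # approximation
    print(vector_errors(a, b))      # approximately (0.1, 0.0333)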