unbiased
(adjective)
impartial or without prejudice
Examples of unbiased in the following topics:
-
Introduction to inference for other estimators
- We make another important assumption about each point estimate encountered in this section: the estimate is unbiased.
- A point estimate is unbiased if the sampling distribution of the estimate is centered at the parameter it estimates.
- That is, an unbiased estimate does not systematically over- or underestimate the parameter.
- The sample mean is an example of an unbiased point estimate, as are each of the examples we introduce in this section.
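The centering claim above can be checked with a small simulation, sketched here in Python; the population, seed, and sample sizes are all made up for illustration:

```python
import random
import statistics

# Hypothetical setup: a population with a known mean, from which we
# repeatedly draw samples and record each sample mean.
random.seed(42)
mu = 10.0
population = [random.gauss(mu, 2.0) for _ in range(100_000)]
true_mean = statistics.mean(population)

sample_means = []
for _ in range(2_000):
    sample = random.sample(population, 30)
    sample_means.append(statistics.mean(sample))

# The sampling distribution of the sample mean centers on the
# population mean, consistent with the estimate being unbiased.
center = statistics.mean(sample_means)
```

With these settings, `center` lands very close to `true_mean`, even though any single sample mean may miss in either direction.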
-
Samples
- An unbiased (representative) sample is a set of objects chosen from a complete sample using a selection process that does not depend on the properties of the objects.
- For example, an unbiased sample of Australian men taller than 2 meters might consist of a randomly sampled subset of 1% of Australian males taller than 2 meters.
- However, one chosen from the electoral register might not be unbiased since, for example, males aged under 18 will not be on the electoral register.
- In an astronomical context, an unbiased sample might consist of that fraction of a complete sample for which data are available, provided the data availability is not biased by individual source properties.
-
Random Samples
- An unbiased random selection of individuals is important so that in the long run, the sample represents the population.
- A simple random sample is an unbiased surveying technique.
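A simple random sample can be sketched with the standard library's `random.sample`, which selects individuals without regard to their properties; the population labels here are hypothetical:

```python
import random

# Hypothetical population of 100 individuals, labeled 1..100.
random.seed(1)
population = list(range(1, 101))

# Each subset of size 10 is equally likely, so the selection does not
# depend on any property of the individuals.
sample = random.sample(population, 10)
```

Because every individual has the same chance of selection, in the long run such samples represent the population.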
-
Confidence intervals for nearly normal point estimates
- This same logic generalizes to any unbiased point estimate that is nearly normal.
- A confidence interval based on an unbiased and nearly normal point estimate is: point estimate ± z⋆ × SE, where z⋆ corresponds to the chosen confidence level and SE is the standard error of the estimate.
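As a sketch, assuming the usual form point estimate ± z⋆ × SE, such an interval can be computed as follows; the estimate, standard error, and z⋆ value below are illustrative:

```python
def normal_ci(point_estimate, se, z_star=1.96):
    """Confidence interval for an unbiased, nearly normal estimate.

    z_star = 1.96 corresponds to a 95% confidence level.
    """
    margin = z_star * se
    return point_estimate - margin, point_estimate + margin

# Made-up numbers: an estimate of 5.0 with standard error 0.5.
lo, hi = normal_ci(point_estimate=5.0, se=0.5)
print(round(lo, 2), round(hi, 2))  # 4.02 5.98
```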
-
Least-Squares Regression
- It is considered optimal in the class of linear unbiased estimators when the errors are homoscedastic and serially uncorrelated.
- Under these conditions, the method of OLS provides minimum-variance, mean-unbiased estimation when the errors have finite variances.
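A minimal sketch of ordinary least squares for a simple linear regression, using the closed-form slope and intercept formulas; the data points are made up:

```python
# Made-up data roughly following y = 2x.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# slope = S_xy / S_xx; under homoscedastic, serially uncorrelated
# errors these are the minimum-variance linear unbiased estimates.
s_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
s_xx = sum((x - x_bar) ** 2 for x in xs)
slope = s_xy / s_xx
intercept = y_bar - slope * x_bar
```

For these points the fitted line is close to y = 2x, as expected.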
-
Characteristics of Estimators
- Scale 2, by contrast, gives unbiased estimates of your weight.
- Therefore the sample mean is an unbiased estimate of μ.
- If N is used in the formula for s², then the estimates tend to be too low and therefore biased. The formula with N − 1 in the denominator gives an unbiased estimate of the population variance. Note that N − 1 is the degrees of freedom.
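The N versus N − 1 point can be verified by simulation: dividing the sum of squared deviations by N underestimates the population variance on average, while dividing by N − 1 centers on it. The population parameters and seed below are arbitrary:

```python
import random
import statistics

random.seed(0)
sigma2 = 4.0  # true population variance (std dev 2.0)
n = 5

biased, unbiased = [], []
for _ in range(20_000):
    sample = [random.gauss(0.0, 2.0) for _ in range(n)]
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased.append(ss / n)          # divides by N: tends to run low
    unbiased.append(ss / (n - 1))  # divides by N - 1: centers on sigma2

mean_biased = statistics.mean(biased)
mean_unbiased = statistics.mean(unbiased)
```

Here `mean_biased` comes out near sigma2 × (n − 1)/n = 3.2, while `mean_unbiased` comes out near 4.0.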
-
Hypothesis testing for nearly normal point estimates
- Verify conditions to ensure the standard error estimate is reasonable and the point estimate is nearly normal and unbiased.
- This point estimate is nearly normal and is an unbiased estimate of the actual difference in death rates.
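As a sketch, the test statistic for a nearly normal, unbiased point estimate takes the form Z = (point estimate − null value) / SE; all numbers here are illustrative:

```python
def z_statistic(point_estimate, null_value, se):
    """Standardized test statistic for a nearly normal estimate."""
    return (point_estimate - null_value) / se

# Made-up numbers: an observed difference of 0.03 against a null
# value of 0, with standard error 0.012.
z = z_statistic(point_estimate=0.03, null_value=0.0, se=0.012)
```

A |Z| this large (2.5) would correspond to a small two-sided p-value under the null hypothesis.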
-
Calculations for the t-Test: Two Samples
- $S_{x_1 x_2}$ is an estimator of the common standard deviation of the two samples: it is defined in this way so that its square is an unbiased estimator of the common variance whether or not the population means are the same.
- Here $s^2$ is the unbiased estimator of the variance of the two samples, $n_i$ = number of participants in group $i$, $i = 1$ or $2$.
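A sketch of the pooled two-sample calculation, assuming equal population variances: the pooled variance divides by n1 + n2 − 2 degrees of freedom so that it is unbiased whether or not the means differ. The group data are made up:

```python
import math

def pooled_t(a, b):
    """Two-sample t statistic using the pooled standard deviation."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    ss1 = sum((x - m1) ** 2 for x in a)
    ss2 = sum((x - m2) ** 2 for x in b)
    # Pooled variance: sum of squared deviations over n1 + n2 - 2
    # degrees of freedom, an unbiased estimate of the common variance.
    sp2 = (ss1 + ss2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m1 - m2) / se

# Made-up measurements for two groups.
group1 = [4.2, 5.1, 3.8, 4.9, 5.3]
group2 = [3.1, 3.9, 2.8, 3.5]
t = pooled_t(group1, group2)
```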
-
Estimating the Accuracy of an Average
- It can also be viewed as the standard deviation of the error in the sample mean relative to the true mean, since the sample mean is an unbiased estimator.
- For a value that is sampled with an unbiased normally distributed error, the graph depicts the proportion of samples that would fall between 0, 1, 2, and 3 standard deviations above and below the actual value.
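The standard error of the sample mean can be computed directly from a sample; note that `statistics.stdev` already uses the unbiased N − 1 variance. The data below are made up:

```python
import math
import statistics

# Hypothetical sample of 8 measurements.
data = [12.0, 15.0, 11.0, 14.0, 13.0, 16.0, 12.0, 15.0]
n = len(data)

s = statistics.stdev(data)   # sample std dev (N - 1 in denominator)
se = s / math.sqrt(n)        # standard error of the sample mean
```

Since the sample mean is unbiased, `se` measures the typical size of its error relative to the true mean.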
-
The Correction Factor
- If the expected value exists, this procedure estimates the true expected value in an unbiased manner and has the property of minimizing the sum of the squares of the residuals (the sum of the squared differences between the observations and the estimate).
- A formula is typically considered good in this context if it is an unbiased estimator—that is, if the expected value of the estimate (the average value it would give over an arbitrarily large number of separate samples) can be shown to equal the true value of the desired parameter.
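The residual-minimizing property mentioned above can be checked numerically: the sample mean yields a smaller sum of squared residuals than any other candidate value. The data and candidates here are illustrative:

```python
# Made-up observations.
data = [2.0, 4.0, 7.0, 9.0]
mean = sum(data) / len(data)  # 5.5

def ssr(c):
    """Sum of squared residuals of the data around candidate value c."""
    return sum((x - c) ** 2 for x in data)

# The mean beats every other candidate center.
candidates = [4.0, 5.0, 6.0, 7.0]
results = {c: ssr(c) for c in candidates}
```

Every entry in `results` exceeds `ssr(mean)`, as the least-squares property predicts.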