### Type I error

A **type I error**, also known as an **error of the first kind**, occurs when the null hypothesis (*H*_{0}) is true, but is rejected. It is **asserting something that is absent**, a **false hit**. A type I error may be compared with a so-called *false positive* (a result that indicates that a given condition is present when it actually is not present) in tests where a single condition is tested for. Type I errors are philosophically a focus of skepticism and Occam's razor. A type I error occurs when we believe a falsehood.^{[1]} In terms of folk tales, an investigator may be "crying wolf" without a wolf in sight (raising a false alarm) (*H*_{0}: no wolf).
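As a sketch (not from the original text), a small Monte Carlo simulation illustrates the point: when the null hypothesis is true, a test run at the 5% level still rejects it about 5% of the time. The sample size, number of trials, and use of a known-variance z-test are arbitrary choices for illustration.

```python
import math
import random

random.seed(0)

alpha = 0.05
z_crit = 1.96              # two-sided critical value for alpha = 0.05
n, trials = 30, 10_000
rejections = 0

for _ in range(trials):
    # H0 is true by construction: the data really come from N(0, 1).
    sample = [random.gauss(0, 1) for _ in range(n)]
    # z-statistic for H0: mean = 0, with known sigma = 1
    z = (sum(sample) / n) * math.sqrt(n)
    if abs(z) > z_crit:
        rejections += 1    # a type I error: H0 is true, yet rejected

rate = rejections / trials
print(rate)                # close to alpha = 0.05
```

The observed rejection rate hovers around the nominal significance level, which is exactly what "the size of the test" quantifies.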
The rate of the type I error is called the *size* of the test and is denoted by the Greek letter α (alpha). It usually equals the significance level of the test. In the case of a simple null hypothesis, α is the probability of a type I error. If the null hypothesis is composite, α is the maximum (supremum) of the possible probabilities of a type I error.

#### False positive error

A **false positive error**, commonly called a "**false alarm**", is a result that indicates a given condition has been fulfilled when it actually has not been fulfilled. In the case of "crying wolf", the condition tested for was "is there a wolf near the herd?"; the actual result was that there had not been a wolf near the herd. The shepherd wrongly indicated there was one, by calling "Wolf, wolf!".

A false positive error is a type I error where the test checks a single condition and yields an affirmative or negative decision, usually designated as "true or false".

### Type II error

A **type II error**, also known as an **error of the second kind**, occurs when the null hypothesis is false, but it is erroneously accepted as true. It is **failing to see what is present**, a **miss**. A type II error may be compared with a so-called *false negative* (where an actual 'hit' was disregarded by the test and seen as a 'miss') in a test checking for a single condition with a definitive result of true or false. A type II error is committed when we fail to believe a truth.^{[1]} In terms of folk tales, an investigator may fail to see the wolf ("failing to raise an alarm"; see Aesop's story of The Boy Who Cried Wolf). Again, *H*_{0}: no wolf.
The rate of the type II error is denoted by the Greek letter β (beta) and is related to the power of a test (which equals 1 − β).
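A companion sketch (again an illustration, not from the original text) estimates β and the power by simulating a case where the null hypothesis is false. The true mean of 0.5 and the sample size are made-up values chosen so that the test misses a fair fraction of the time.

```python
import math
import random

random.seed(0)

z_crit = 1.96              # two-sided test at significance level 0.05
n, trials = 30, 10_000
true_mean = 0.5            # H0 (mean = 0) is false: a real effect exists
misses = 0

for _ in range(trials):
    sample = [random.gauss(true_mean, 1) for _ in range(n)]
    z = (sum(sample) / n) * math.sqrt(n)   # test statistic under H0: mean = 0
    if abs(z) <= z_crit:
        misses += 1        # type II error: H0 is false, but not rejected

beta = misses / trials     # estimated type II error rate
power = 1 - beta           # probability of correctly rejecting H0
print(beta, power)
```

Increasing the sample size or the true effect size drives β down and the power up, which is why power calculations are done before collecting data.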

What we actually call type I or type II error depends directly on the null hypothesis. Negation of the null hypothesis causes type I and type II errors to switch roles.

The goal of the test is to determine if the null hypothesis can be rejected. A statistical test can either reject (prove false) or fail to reject (fail to prove false) a null hypothesis, but never prove it true (i.e., failing to reject a null hypothesis does not prove it true).

#### False negative error

A **false negative error** is where a test result indicates that a condition failed, while it actually succeeded. A common example is a guilty prisoner being freed from jail. The condition "*Is the prisoner guilty?*" actually had a positive result (yes, he is guilty), but the test failed to realize this and wrongly decided the prisoner was not guilty.

A false negative error is a type II error occurring in a test where a single condition is checked for and the result can be either positive or negative.

### Example

As it is conjectured that adding fluoride to toothpaste protects against cavities, the null hypothesis of no effect is tested. When the null hypothesis is true (i.e., there is indeed no effect), but the data give rise to rejection of this hypothesis, falsely suggesting that adding fluoride is effective against cavities, a type I error has occurred.

A type II error occurs when the null hypothesis is false (i.e., adding fluoride is actually effective against cavities), but the data are such that the null hypothesis cannot be rejected, failing to prove the existing effect.
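The fluoride example can be made concrete with a simulated trial. Everything below is hypothetical: the cavity rates, the group size, and the choice of a two-proportion z-test are assumptions for illustration, not data from the original text.

```python
import math
import random

random.seed(1)

# Hypothetical cavity rates: fluoride truly helps, so H0 (no effect) is false.
p_control, p_fluoride = 0.30, 0.20
n = 2_000                  # participants per group (made-up trial size)

cav_control = sum(random.random() < p_control for _ in range(n))
cav_fluoride = sum(random.random() < p_fluoride for _ in range(n))

# Two-proportion z-test of H0: both groups share the same cavity rate.
p1, p2 = cav_control / n, cav_fluoride / n
p_pool = (cav_control + cav_fluoride) / (2 * n)
se = math.sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p1 - p2) / se
reject = abs(z) > 1.96     # alpha = 0.05

print(z, reject)           # failing to reject here would be a type II error
```

With a real effect and a large sample, the test almost always rejects the null hypothesis; shrinking `n` makes type II errors increasingly likely.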

In colloquial usage type I error can be thought of as "convicting an innocent person" and type II error "letting a guilty person go free".

Tabularised relations between truth/falseness of the null hypothesis and outcomes of the test:

| | Null hypothesis (H_{0}) is true | Null hypothesis (H_{0}) is false |
|---|---|---|
| Reject null hypothesis | Type I error (false positive) | Correct outcome (true positive) |
| Fail to reject null hypothesis | Correct outcome (true negative) | Type II error (false negative) |

### Understanding Type I and Type II errors

From the Bayesian point of view, a type I error is one that looks at information that should not substantially change one's prior estimate of probability, but does. A type II error is one that looks at information which should change one's estimate, but does not. (The null hypothesis is not quite the same thing as one's prior estimate; it is, rather, one's *pro forma* prior estimate.)
Hypothesis testing is the art of testing whether a variation between two sample distributions can be explained by chance or not. In many practical applications type I errors are considered more serious than type II errors, so care is usually focused on minimizing their occurrence. If the probability of a type I error is 1%, then there is a 1% chance that the observed variation would be declared real when it is in fact due to chance alone. This probability is called the *level of significance*, denoted by the Greek letter α (alpha). While 1% might be an acceptable level of significance for one application, a different application can require a very different level. For example, the standard goal of six sigma is to achieve precision to 4.5 standard deviations above or below the mean, meaning that only 3.4 parts per million are allowed to be deficient in a normally distributed process.
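The 3.4 parts-per-million figure can be reproduced from the one-sided tail of a standard normal distribution beyond 4.5 standard deviations (the effective limit in the six sigma convention, which allows a 1.5 sigma shift in the mean). A stdlib-only Python check:

```python
import math

# One-sided tail probability of a standard normal beyond 4.5 sigma,
# computed from the complementary error function (no SciPy required):
#   P(Z > z) = 0.5 * erfc(z / sqrt(2))
tail = 0.5 * math.erfc(4.5 / math.sqrt(2))
ppm = tail * 1e6           # convert probability to parts per million
print(round(ppm, 1))       # about 3.4 defective parts per million
```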