Let’s assume that FDA researchers are reviewing a drug for possible approval. They are examining the results and data from clinical trials. Of course there are some side effects, but only in a small percentage of the sample. Let’s look at a model of their decision-making process in the context of hypothesis testing. Let’s assume:
Ho: X = Xo ‘drug is harmful’
Ha: X = Xa ‘drug is safe’
If the decision makers make the mistake of releasing a harmful drug, the consequences would be easily identifiable and could be visibly traced to their decision. They would want to avoid this at all costs. In essence, they want to avoid making a type I error.
Type I error = rejecting a true Ho, i.e., releasing a harmful drug
From my previous discussion on hypothesis testing, we know that to decrease the probability of committing a type I error, we choose a lower alpha level. In a t-test, if we are really afraid of making a type I error, we might set alpha (the significance level) at .05, .01, or, to go overboard, .001.
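As a rough illustration (not the FDA’s actual procedure), here is a minimal Python sketch using simulated trial data: a one-sample t-test is run against a hypothetical “harmful” baseline, and the same p-value is compared against progressively stricter alpha levels. The baseline, effect size, and sample size are all made up for the example.

```python
# Minimal sketch: the stricter the alpha, the harder it is to reject Ho.
# All numbers here (baseline, effect size, sample size) are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

harmful_baseline = 0.0                                   # mean "benefit" score assumed under Ho ('drug is harmful')
trial_sample = rng.normal(loc=0.4, scale=1.0, size=50)   # simulated clinical-trial outcomes

# One-sided test: Ha is that the true mean exceeds the harmful baseline.
t_stat, p_value = stats.ttest_1samp(trial_sample, popmean=harmful_baseline,
                                    alternative="greater")

for alpha in (0.05, 0.01, 0.001):
    decision = "reject Ho (release the drug)" if p_value < alpha else "fail to reject Ho (withhold it)"
    print(f"alpha = {alpha:<5}  p = {p_value:.4f}  ->  {decision}")
```

Depending on how strong the evidence happens to be, the very same data can clear the .05 bar yet fail the .001 bar, which is exactly the lever the decision makers are pulling.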
As previously discussed, setting alpha lower and lower increases beta, the probability of committing a type II error. Recall that a type II error is failing to reject (i.e., accepting) a false Ho. In this case, that is equivalent to falsely concluding that the ‘drug is harmful’ when it could actually be released and improve the lives of millions.
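The trade-off can be made concrete with a quick calculation. The sketch below assumes, purely for illustration, a one-sided z-test with a fixed true effect size and sample size, and computes beta at several alpha levels; as alpha shrinks, beta grows.

```python
# Minimal sketch of the alpha/beta trade-off for a one-sided z-test.
# The effect size and sample size are made-up illustration values.
from scipy.stats import norm

effect_size = 0.3    # assumed true standardized effect (the drug really is safe)
n = 50               # assumed sample size

for alpha in (0.10, 0.05, 0.01, 0.001):
    z_crit = norm.ppf(1 - alpha)                      # cutoff the test statistic must clear to reject Ho
    beta = norm.cdf(z_crit - effect_size * n ** 0.5)  # P(fail to reject Ho | Ha is true)
    print(f"alpha = {alpha:<5}  beta = {beta:.3f}  power = {1 - beta:.3f}")
```

With these illustrative numbers, pushing alpha from .05 down to .001 roughly triples beta: the price of guarding against releasing a harmful drug is a much higher chance of withholding a safe one.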
Type II error bias = over-precaution: setting alpha so low, or setting the standard of proof so high, that Ho is almost never rejected, biasing the decision in such a way that the probability of a type II error (beta) is greatly increased.
As a result of type II error bias, many life-improving drugs never make it to the market.