Statistical power of KH and SH tests?

Peter Werner pgwerner at SFSU.EDU
Sat May 15 03:36:31 CDT 2004


> My understanding is that beta is 1 minus alpha.

No, that is *definitely not* the definition of beta. Alpha is the
probability of making a Type I error (rejecting a true null
hypothesis) and beta is the probability of making a Type II error
(failing to reject a false null hypothesis). It's definitely *not* the
case that if you have, say, a 5% probability of making a Type I error,
you therefore have a 95% chance of making a Type II error!
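
To see that concretely, here is a quick simulation (a hypothetical
sketch in Python; the effect size, sample size, and alpha below are
arbitrary choices, not figures from this discussion):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.05
    n = 20            # per-group sample size (arbitrary)
    true_diff = 0.5   # true difference between the group means (arbitrary)

    # Repeatedly draw data in which the null hypothesis is false and
    # count how often the t-test fails to reject it; that proportion
    # estimates beta.
    trials = 10_000
    failures = 0
    for _ in range(trials):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(true_diff, 1.0, n)
        _, p = stats.ttest_ind(x, y)
        if p >= alpha:
            failures += 1

    print("estimated beta:", failures / trials)   # roughly 0.66, not 0.95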

Alpha and beta are related in the sense that, for a fixed sample size,
raising alpha lowers beta (and vice versa), but not necessarily in a
1-minus-x relationship. Beta can, however, be lowered while keeping
the same alpha level by raising the sample size (the reverse is also
true: alpha can be lowered while keeping the same beta level when the
sample size is raised). Ideally, you want both alpha and beta to be
low.
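
As a rough illustration of the sample-size point (again a hypothetical
sketch; the effect size and alpha are arbitrary), beta for a simple
one-sample z-test can be computed directly and watched shrink as n
grows:

    from scipy import stats

    alpha = 0.05
    effect = 0.4   # true mean shift in standard-deviation units (arbitrary)

    for n in (10, 25, 50, 100, 200):
        z_crit = stats.norm.ppf(1 - alpha / 2)   # two-sided critical value
        shift = effect * n ** 0.5                # mean of the test statistic under H1
        # beta = probability the statistic lands inside the acceptance region
        beta = stats.norm.cdf(z_crit - shift) - stats.norm.cdf(-z_crit - shift)
        print(f"n={n:4d}  beta={beta:.3f}  power={1 - beta:.3f}")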

Statistical power is defined as 1 minus beta, though in practice the
term "statistical power" is often used in a looser, less exact sense
to refer to the utility and robustness of a given statistical test.
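
Power-analysis tools turn the same relationship around and solve for
the sample size needed to reach a target power (a sketch assuming the
statsmodels package is available; the effect size and power target
here are arbitrary):

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Sample size per group needed to detect a standardized effect of 0.5
    # with alpha = 0.05 and power = 0.8 (i.e. beta = 0.2), two-sided test.
    n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(round(n_per_group))   # roughly 64 per group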

> Not being able to reject a null hypothesis NEVER means you must accept
> it.
> You simply can't reject it.

Never? Even if a test has sufficient statistical power, that is, low
beta? That's not my understanding. If beta is high, then failure to
reject the null hypothesis (because p is higher than alpha) has
precisely the meaning you've stated: you clearly can't reject the null
hypothesis, but you can't accept it either; hence, acceptance or
rejection of the null hypothesis remains uncertain. However, if beta
is sufficiently low, then when you fail to reject the null hypothesis,
you can actually go so far as to accept it, since the chance of a
Type II error is sufficiently low.
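
A simulation along the same lines as before shows why a non-rejection
carries weight only when beta is low (hypothetical numbers again): with
a large sample, a difference of the stated size would almost always be
detected, so failing to detect it means something; with a small sample
it means very little.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha = 0.05
    true_diff = 0.5   # smallest difference we care about (arbitrary)

    for n in (20, 200):   # low-power vs. high-power designs
        trials = 5_000
        misses = sum(
            stats.ttest_ind(rng.normal(0.0, 1.0, n),
                            rng.normal(true_diff, 1.0, n))[1] >= alpha
            for _ in range(trials)
        )
        beta = misses / trials
        strength = "weak" if beta > 0.2 else "strong"
        print(f"n={n:3d}  beta={beta:.2f}  "
              f"non-rejection is {strength} evidence for the null")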

Peter



