A fast refresher on the two most common parametric hypothesis tests. Use it as a cheat sheet when you’re juggling A/B experiments, manufacturing QC, or any situation where you need to compare means.
Know the population standard deviation (σ), or have a very large sample (n ≥ 30)? → lean z-test. σ unknown and your sample modest (n < 30), or you need extra robustness? → use a t-test.

| Test | Statistic | Use when |
|---|---|---|
| One-sample z-test | z = (\bar{x} - μ_0) / (σ / √n) | σ known, or n ≥ 30 and sampling from an (approximately) normal population. |
| One-sample t-test | t = (\bar{x} - μ_0) / (s / √n), df = n − 1 | σ unknown, any n (works best for n < 30). Uses the sample std dev s. |
| Two-sample t-test (Welch) | t = (\bar{x}_1 - \bar{x}_2) / √(s_1^2/n_1 + s_2^2/n_2) | Comparing two independent means, variances not assumed equal. df via Welch–Satterthwaite. |
| Pooled two-sample t-test | t = (\bar{x}_1 - \bar{x}_2) / (s_p √(1/n_1 + 1/n_2)) | Only when you can assume equal variances and similar sample sizes. |
| Paired t-test | t = \bar{d} / (s_d / √n) | Same subjects measured twice (before/after). Work on the differences d; df = n − 1. |
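As a quick sanity check, every statistic in the table can be computed with nothing beyond Python's standard library. This is a minimal sketch; the function names are my own, not from any stats package:

```python
from math import sqrt
from statistics import mean, stdev  # stdev = sample std dev (n - 1 denominator)

def one_sample_z(x, mu0, sigma):
    """z = (x̄ − μ0) / (σ/√n); requires a known population σ."""
    return (mean(x) - mu0) / (sigma / sqrt(len(x)))

def one_sample_t(x, mu0):
    """t = (x̄ − μ0) / (s/√n); returns (t, df) with df = n − 1."""
    n = len(x)
    return (mean(x) - mu0) / (stdev(x) / sqrt(n)), n - 1

def welch_t(x1, x2):
    """Welch two-sample t; df via the Welch–Satterthwaite approximation."""
    n1, n2 = len(x1), len(x2)
    v1, v2 = stdev(x1) ** 2, stdev(x2) ** 2
    se2 = v1 / n1 + v2 / n2
    t = (mean(x1) - mean(x2)) / sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

def paired_t(before, after):
    """Paired t: t = d̄ / (s_d/√n) on the per-subject differences."""
    d = [a - b for a, b in zip(after, before)]
    n = len(d)
    return mean(d) / (stdev(d) / sqrt(n)), n - 1
```

For real analyses you would reach for a library (e.g. `scipy.stats.ttest_ind` with `equal_var=False` for Welch), but seeing the formulas spelled out makes the table easier to remember.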
Once you compute z or t, compare it with the relevant critical value (or obtain a p-value). For t-tests, the degrees of freedom drive the shape of the distribution: smaller df means fatter tails, and therefore larger critical values. Always state your null hypothesis, your significance level (α), and whether your test is one- or two-tailed.
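To make the "smaller df → larger critical value" point concrete, here is a toy lookup built on two-sided 5% critical values from a standard t table. `T_CRIT_05` and `reject_null` are hypothetical names for illustration; real code should compute exact thresholds with a stats library (e.g. `scipy.stats.t.ppf`):

```python
# Two-sided critical values at α = 0.05 from a standard t table;
# the inf entry is the normal (z) value, 1.960.
T_CRIT_05 = {1: 12.706, 5: 2.571, 10: 2.228, 30: 2.042, float("inf"): 1.960}

def reject_null(stat, df=float("inf"), table=T_CRIT_05):
    """Reject H0 when |stat| exceeds the tabulated two-sided 5% critical value.

    df is rounded DOWN to the nearest tabulated entry, which is conservative:
    smaller df means fatter tails and a larger threshold.
    """
    crit = table[max(k for k in table if k <= df)]
    return abs(stat) > crit
```

For example, a t statistic of 2.5 is not significant at df = 5 (2.5 < 2.571) but is significant at df = 30 (2.5 > 2.042): the same statistic, judged against a threshold that shrinks as df grows.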
Need a calculator? Let me know and I’ll add an interactive widget here.