t-test vs z-test

A fast refresher on the two most common parametric hypothesis tests. Use it as a cheat sheet when you’re juggling A/B experiments, manufacturing QC, or any situation where you need to compare means.

Start with the question

  1. Do you know the population standard deviation (σ) or have a very large sample (n ≥ 30)? → lean z-test.
  2. Is σ unknown and your sample is modest (n < 30) or you need extra robustness? → use a t-test.
  3. Matched pairs / before-and-after? → paired t-test.
  4. Two independent samples with unknown, possibly unequal variance? → two-sample t-test (Welch’s variant).
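The four questions above can be sketched as a tiny helper. This is a rough heuristic only, and the function name and arguments are mine, not a standard API:

```python
# Hypothetical decision helper mirroring the checklist above.
def choose_test(sigma_known: bool, n: int, paired: bool = False,
                two_sample: bool = False) -> str:
    """Suggest a test following the checklist (a heuristic, not a rule)."""
    if paired:
        return "paired t-test"           # matched pairs / before-and-after
    if two_sample:
        return "Welch two-sample t-test" # independent samples, unequal variance OK
    if sigma_known or n >= 30:
        return "one-sample z-test"       # σ known or large sample
    return "one-sample t-test"           # σ unknown, modest n
```

For example, `choose_test(sigma_known=False, n=12)` returns `"one-sample t-test"`.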

Core formulas

One-sample z-test: z = (x̄ − μ₀) / (σ / √n). Use when σ is known, or n ≥ 30 and you're sampling from an (approximately) normal population.

One-sample t-test: t = (x̄ − μ₀) / (s / √n), df = n − 1. Use when σ is unknown; works for any n and is essential for n < 30. Uses the sample standard deviation s.

Two-sample t-test (Welch): t = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂). Compares two independent means without assuming equal variances; df via the Welch–Satterthwaite equation.

Pooled two-sample t-test: t = (x̄₁ − x̄₂) / (s_p √(1/n₁ + 1/n₂)). Only when you can assume equal variances and have similar sample sizes.

Paired t-test: t = d̄ / (s_d / √n), df = n − 1. Same subjects measured twice (before/after); work on the differences d.
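
The one-sample and Welch formulas above translate directly into code. A minimal sketch using only the standard library (function names are mine):

```python
import math
from statistics import mean, stdev

def one_sample_t(x, mu0):
    """t = (x̄ − μ0) / (s / √n); returns (t, df) with df = n − 1."""
    n = len(x)
    t = (mean(x) - mu0) / (stdev(x) / math.sqrt(n))
    return t, n - 1

def welch_t(x1, x2):
    """t = (x̄1 − x̄2) / √(s1²/n1 + s2²/n2); df via Welch–Satterthwaite."""
    n1, n2 = len(x1), len(x2)
    v1, v2 = stdev(x1) ** 2, stdev(x2) ** 2
    se2 = v1 / n1 + v2 / n2
    t = (mean(x1) - mean(x2)) / math.sqrt(se2)
    # Welch–Satterthwaite degrees of freedom (generally non-integer)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df
```

Sanity check: if the sample mean equals μ₀, or the two samples are identical, the statistic is 0 by construction.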

Decision checklist

z-test

  1. σ is known (or n ≥ 30, so s is a reliable stand-in for σ).
  2. Data are roughly normal, or n is large enough for the CLT to kick in.
  3. Observations are independent.

t-test

  1. σ is unknown; estimate it with the sample standard deviation s.
  2. Small samples (n < 30) are fine if the data are roughly normal.
  3. Pick the right variant: one-sample, paired, pooled, or Welch.

Interpreting results

Once you compute z or t, compare it with the relevant critical value (or obtain a p-value). For t-tests, the degrees of freedom drive the shape of the distribution: smaller df means fatter tails, which means larger critical values (e.g. the two-sided 5% cutoff is about 2.571 at df = 5 versus 1.96 for z). Always state:

  1. The hypotheses (H₀, H₁) and whether the test is one- or two-sided.
  2. The test statistic, the df (for t-tests), and the p-value.
  3. The significance level α, chosen before looking at the data.
  4. An effect size or confidence interval, not just "significant / not significant".
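
For the z case, the p-value comes straight from the standard normal CDF, which the standard library's `math.erf` gives you. A minimal sketch (function name is mine; t-test p-values additionally need the t distribution, e.g. via `scipy.stats.t`):

```python
import math

def z_p_value(z: float) -> float:
    """Two-sided p-value for a z statistic via the standard normal CDF."""
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # P(Z <= |z|)
    return 2.0 * (1.0 - phi)
```

As expected, `z_p_value(1.96)` lands very close to 0.05, the usual two-sided 5% cutoff.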

When the assumptions break

  1. Strong non-normality with small n: reach for a nonparametric test (Mann–Whitney U for two independent samples, Wilcoxon signed-rank for paired data) or a bootstrap.
  2. Unequal variances: prefer Welch's t-test over the pooled version by default.
  3. Dependent observations (time series, clustered data): neither test is valid as-is; model the dependence instead.
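
When normality is doubtful, a percentile bootstrap is one assumption-light fallback for a confidence interval on the mean. A minimal sketch using only the standard library (function name and defaults are mine):

```python
import random
from statistics import mean

def bootstrap_mean_ci(x, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean: resample with replacement,
    then take the alpha/2 and 1 − alpha/2 quantiles of the resampled means."""
    rng = random.Random(seed)  # seeded for reproducibility
    boots = sorted(mean(rng.choices(x, k=len(x))) for _ in range(n_boot))
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

The interval should comfortably contain the sample mean; its width reflects the spread of the data rather than a normality assumption.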

Need a calculator? Let me know and I’ll add an interactive widget here.