# What is the *t*-test null hypothesis?

## Student’s t-test



Written and fact-checked by The Editors of Encyclopaedia Britannica

Last Updated: Apr 29, 2023

**Student’s t-test**, in statistics, a method of testing hypotheses about the mean of a small sample drawn from a normally distributed population when the population standard deviation is unknown.

In 1908 William Sealy Gosset, an Englishman publishing under the pseudonym Student, developed the *t*-test and *t* distribution. (Gosset worked at the Guinness brewery in Dublin and found that existing statistical techniques using large samples were not useful for the small sample sizes that he encountered in his work.) The *t* distribution is a family of curves in which the number of degrees of freedom (the number of independent observations in the sample minus one) specifies a particular curve. As the sample size (and thus the degrees of freedom) increases, the *t* distribution approaches the bell shape of the standard normal distribution. In practice, for tests involving the mean of a sample of size greater than 30, the normal distribution is usually applied.

It is usual first to formulate a null hypothesis, which states that there is no effective difference between the observed sample mean and the hypothesized or stated population mean—i.e., that any measured difference is due only to chance. In an agricultural study, for example, the null hypothesis could be that an application of fertilizer has had no effect on crop yield, and an experiment would be performed to test whether it has increased the harvest. In general, a *t*-test may be either two-sided (also termed two-tailed), stating simply that the means are not equivalent, or one-sided, specifying whether the observed mean is larger or smaller than the hypothesized mean. The test statistic *t* is then calculated. If the observed *t*-statistic is more extreme than the critical value determined by the appropriate reference distribution, the null hypothesis is rejected. The appropriate reference distribution for the *t*-statistic is the *t* distribution. The critical value depends on the significance level of the test (the probability of erroneously rejecting the null hypothesis).

For example, suppose a researcher wishes to test the hypothesis that a sample of size *n* = 25 with mean *x* = 79 and standard deviation *s* = 10 was drawn at random from a population with mean μ = 75 and unknown standard deviation. Using the formula for the *t*-statistic, the calculated *t* equals 2. For a two-sided test at a common level of significance α = 0.05, the critical values from the *t* distribution on 24 degrees of freedom are −2.064 and 2.064. The calculated *t* does not exceed these values, hence the null hypothesis cannot be rejected with 95 percent confidence. (The confidence level is 1 − α.)
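The arithmetic of this worked example can be checked with a short script (a minimal sketch; the ±2.064 critical value is quoted from the text rather than computed):

```python
from math import sqrt

# Worked example from the text: n = 25, sample mean 79, s = 10,
# hypothesized population mean 75, two-sided test at alpha = 0.05.
n, xbar, s, mu0 = 25, 79.0, 10.0, 75.0

t_stat = (xbar - mu0) / (s / sqrt(n))  # standard error = 10/5 = 2, so t = 2.0

# Two-sided critical value on 24 degrees of freedom, as quoted in the text.
crit = 2.064
reject = abs(t_stat) > crit  # False: the null hypothesis cannot be rejected

print(t_stat, reject)  # 2.0 False
```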

A second application of the *t* distribution tests the hypothesis that two independent random samples have the same mean. The *t* distribution can also be used to construct confidence intervals for the true mean of a population (the first application) or for the difference between two sample means (the second application). *See also* interval estimation.

This article was most recently revised and updated by Erik Gregersen.

## Two-sample *t*-test

The purpose of the two-sample *t*-test is to compare the means of two independent samples. These can be obtained either by random sampling from two populations (an observational design) or by random allocation to two treatment groups (an experimental design). Note that generalizing from an experiment to a wider population assumes the experimental units are representative of that population, which is seldom strictly true in practice.

The *t*-statistic is estimated as the difference between the two sample means, minus the difference between the true population means, divided by the estimated standard error of the difference between the sample means. For a null hypothesis of no difference, the difference between the true population means is zero. The standard error of the difference is usually estimated from the weighted variance about the means of the samples being compared.

For large sample sizes the *t*-distribution becomes equivalent to a ‘standard’ normal distribution with a mean of zero and a standard deviation of one. If the standard normal distribution is used to obtain critical values, the test is sometimes known as the **z-test** or occasionally the **d-test**. The *t*-distribution diverges from the normal distribution for small samples as it allows for the random error in estimating the variance.

There are two versions of the two-sample *t*-test. The standard version, which we give first, assumes that the two population variances are equal.

### The equal-variance *t*-test

#### The general formula

#### Algebraically speaking —

*t* = (D − δ) / s_{D}

where:

- *t* is the *t*-statistic; under the null hypothesis *t* is a random quantile of the *t*-distribution with (n_{1} + n_{2} − 2) degrees of freedom,
- D is the observed difference between the means of the two samples,
- δ is the difference between the true population means,
- s_{D} is the estimated standard error of the difference between the means, calculated from the pooled (weighted) variance of the two samples,
- n_{1} and n_{2} are the number of observations in each sample,
- v_{1} and v_{2} are the two sample variances.

#### For equal sample sizes

Where both samples have the same number of observations, the variance of the difference simplifies to (v_{1}+v_{2})/n. Hence:

*t* = (D − δ) / √[(v_{1} + v_{2}) / n]

where:

- *t* is the estimated *t*-statistic; under the null hypothesis it is a random quantile of the *t*-distribution with 2(n − 1) degrees of freedom,
- v_{1} and v_{2} are the two sample variances,
- n is the number of observations in each sample,
- all other variables are as above.

#### For large sample sizes

If the sample sizes are unequal, but large enough that (n − 1) ≅ n, the variance of the difference simplifies to (v_{1}/n_{2}) + (v_{2}/n_{1}). Hence:

*t* = (D − δ) / √[(v_{1}/n_{2}) + (v_{2}/n_{1})]

where:

- *t* is the estimated *t*-statistic; under the null hypothesis it is a random quantile of the *t*-distribution with (n_{1} + n_{2} − 2) degrees of freedom,
- v_{1} and v_{2} are the two sample variances,
- all other variables are as above.

Where the sample size and variance are expected to be very similar, the variance of the difference between observations is about double the variance of the observations themselves.
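The general formula above can be sketched in code using only the standard library (the function name `pooled_t` and the sample data are illustrative, not from the source):

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(sample1, sample2, delta=0.0):
    """Equal-variance two-sample t-statistic and its degrees of freedom.

    delta is the hypothesized difference between the population means
    (zero under the usual null hypothesis of no difference).
    """
    n1, n2 = len(sample1), len(sample2)
    v1, v2 = variance(sample1), variance(sample2)
    # Pooled (weighted) variance about the means of the two samples
    v_pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    s_d = sqrt(v_pooled * (1 / n1 + 1 / n2))  # SE of the difference
    d = mean(sample1) - mean(sample2)
    return (d - delta) / s_d, n1 + n2 - 2

t_stat, df = pooled_t([1, 2, 3], [2, 3, 4])  # small illustrative samples
```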

### The unequal-variance *t*-test

If population variances cannot be assumed equal (following for example an *F*-ratio test), then we cannot use the standard *t*-test.

The first thing to try in this situation is a transformation of the data. If the variance is proportional to the mean, you may well find that a logarithmic transformation resolves the non-equality of variances. If transformation does not help, there are two further options:

- Try a different type of statistical test, for example a randomisation test or a non-parametric test.
- Use the unequal-variance *t*-test (also known as Welch’s approximate *t*-test).

#### Algebraically speaking —

*t*′ = [(x̄_{1} − x̄_{2}) − (μ_{1} − μ_{2})] / √[(v_{1}/n_{1}) + (v_{2}/n_{2})]

where:

- *t*′ is the unequal-variance *t*-statistic, for which critical values are determined as below,
- μ_{1} − μ_{2} is the difference between your population means,
- x̄_{1} − x̄_{2} is the difference between your sample means,
- v_{1} and v_{2} are the sample variances,
- n_{1} and n_{2} are the number of observations in each sample.

#### Corrected degrees of freedom

The estimated *t*′ statistic can be tested against the standard *t*-distribution, but with reduced degrees of freedom. The appropriate number of degrees of freedom is given by the equation below:

#### Algebraically speaking —

df(*t*′) = [(v_{1}/n_{1}) + (v_{2}/n_{2})]² / {[(v_{1}/n_{1})² / (n_{1} − 1)] + [(v_{2}/n_{2})² / (n_{2} − 1)]}

where:

- df(*t*′) is the number of degrees of freedom for the unequal-variance *t*-test,
- all other variables are defined as above.

We recommend this method, as it enables you to determine the precise *P*-value for your test, provided your software package includes a probability calculator.
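The unequal-variance statistic and its corrected degrees of freedom can be sketched with the standard library (function name and data are illustrative assumptions):

```python
from math import sqrt
from statistics import mean, variance

def welch_t(sample1, sample2):
    """Unequal-variance (Welch's) t-statistic and its corrected df."""
    n1, n2 = len(sample1), len(sample2)
    v1, v2 = variance(sample1), variance(sample2)
    se_sq = v1 / n1 + v2 / n2  # squared SE of the difference
    t_prime = (mean(sample1) - mean(sample2)) / sqrt(se_sq)
    # Corrected (Welch-Satterthwaite) degrees of freedom
    df = se_sq ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t_prime, df

# With very unequal variances the corrected df falls well below n1 + n2 - 2.
t_prime, df = welch_t([1, 2, 3, 4], [10, 20, 30])
```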

#### Corrected critical value

The estimated *t*′ statistic can also be tested against a different critical value, calculated as a weighted average of the critical values of *t* based on the respective degrees of freedom of the two samples. The formula below shows how this works for a one-tailed test:

#### Algebraically speaking —

*t*′_{crit} = (w_{1}t_{1} + w_{2}t_{2}) / (w_{1} + w_{2})

where:

- w_{i} = v_{i}/n_{i},
- t_{i} is the one-tailed critical value of *t* for n_{i} − 1 degrees of freedom.

### The weighted *t*-test

Use of the above formulae gives equal weight to each observation. But if your sampling or experimental unit is a cluster, then the percentages or means may be based on different sample sizes. In that situation, those based on a larger sample size should carry more weight. This is achieved by using the formulae below to calculate weighted means and variances for each group.

#### Algebraically speaking —

Weighted mean (x̄_{w}) = Σ(m_{i}x̄_{i}) / Σm_{i}

Weighted variance (s²_{w}) = [Σ(m_{i}x̄_{i}²) − n m̄ x̄_{w}²] / (n − 1)

where:

- x̄_{i} is the ith cluster mean,
- m_{i} is the number of units (= the weight) in each cluster,
- m̄ is the average cluster size (Σm_{i}/n),
- n is the number of clusters,
- x̄_{w} is the weighted mean.

The weighted means and variances are then used in place of the unweighted estimates in the appropriate formula above.

### Confidence interval of the difference between means

The 95% normal approximation confidence interval for the difference between the means is readily obtained by multiplying the standard error of the difference by *t*:

#### Algebraically speaking —

95% CI (difference) = D̄ ± *t* s_{D}

where:

- D̄ is the mean difference,
- *t* is the (1 − α/2) quantile of the *t*-distribution with n_{1} + n_{2} − 2 degrees of freedom, with α = 0.05,
- n_{1} and n_{2} are the number of observations in each sample,
- s_{D} is the standard error of the difference.

Notice this interval assumes that the estimates of D̄ and s_{D} are unrelated — in other words, that the differences are homoscedastic.
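For two independent samples, this interval can be sketched using SciPy for the *t* quantile (the function name `mean_diff_ci` and the data are illustrative assumptions):

```python
from math import sqrt
from statistics import mean, variance
from scipy.stats import t as t_dist

def mean_diff_ci(sample1, sample2, alpha=0.05):
    """Normal-approximation CI for the difference between two means,
    using the pooled SE and the t quantile on n1 + n2 - 2 df."""
    n1, n2 = len(sample1), len(sample2)
    v_pooled = ((n1 - 1) * variance(sample1)
                + (n2 - 1) * variance(sample2)) / (n1 + n2 - 2)
    s_d = sqrt(v_pooled * (1 / n1 + 1 / n2))  # SE of the difference
    d = mean(sample1) - mean(sample2)
    q = t_dist.ppf(1 - alpha / 2, n1 + n2 - 2)  # (1 - alpha/2) quantile
    return d - q * s_d, d + q * s_d

low, high = mean_diff_ci([1, 2, 3], [2, 3, 4])  # interval spans zero
```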

### Assumptions

- The means are of measurement variables.
  - Ranked or coded categorical observations, or variables derived from such data, should not be analysed using this test. With such data you should be asking whether the mean is an appropriate measure of location; often the median would be a better choice. Replicated proportions can be analysed with the *t*-test providing they are appropriately transformed, for example using the arcsine square root transformation.
- Sampling (or allocation) is random and observations are independent.
  - Observations in a time series should generally not be used as replicates, as such observations are not independent.
- The samples are drawn from normally distributed populations.
  - This assumption is often relaxed under certain circumstances:
    - For large samples (above 300 observations) the means are close to normal, irrespective of how the observations are distributed.
    - For moderate samples (30-300 observations) the means should approximate a normal distribution. However, if the distributions are skewed, it is always preferable to apply a normalizing transformation.
    - For small samples (3-30 observations) distributions should be checked using QQ or rankit plots. Where possible a normalizing transformation should be applied, although the efficacy of such a transformation may be difficult to assess with small data sets.
- Sample variances are homogeneous (that is, they represent the same population).
  - Sample variances should be tested for homogeneity. Where sample variances differ, transformations should be tried in an attempt to homogenize them. Achieving homogeneity should take precedence over achieving normality. If homogeneity cannot be achieved, use the ‘approximate’ unequal-variance *t*-test instead.
- The model is additive.
  - This assumption is required for the above assumptions to hold.

## The t-Test

A *t*-test (also known as Student’s *t*-test) is a tool for evaluating the means of one or two populations using hypothesis testing. A t-test may be used to evaluate whether a single group differs from a known value (a one-sample t-test), whether two groups differ from each other (an independent two-sample t-test), or whether there is a significant difference in paired measurements (a paired, or dependent samples t-test).

## How are *t*-tests used?

First, you define the hypothesis you are going to test and specify an acceptable risk of drawing a faulty conclusion. For example, when comparing two populations, you might hypothesize that their means are the same, and you decide on an acceptable probability of concluding that a difference exists when that is not true. Next, you calculate a test statistic from your data and compare it to a theoretical value from a *t-*distribution. Depending on the outcome, you either reject or fail to reject your null hypothesis.

## What if I have more than two groups?

You cannot use a *t*-test; use a multiple comparison method instead. Examples are analysis of variance (ANOVA), Tukey-Kramer pairwise comparison, Dunnett’s comparison to a control, and analysis of means (ANOM).

## *t*-Test assumptions

While *t*-tests are relatively robust to deviations from assumptions, *t*-tests do assume that:

- The data are continuous.
- The sample data have been randomly sampled from a population.
- There is homogeneity of variance (i.e., the variability of the data in each group is similar).
- The distribution is approximately normal.

For two-sample *t*-tests, we must have independent samples. If the samples are not independent, then a paired *t*-test may be appropriate.
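A paired *t*-test is equivalent to a one-sample *t*-test on the within-pair differences, which the following SciPy sketch demonstrates (the data are invented for illustration):

```python
from scipy import stats

# Hypothetical paired measurements on the same five units
before = [12.1, 11.4, 13.0, 12.7, 11.9]
after = [12.8, 11.9, 13.4, 13.1, 12.0]

# Paired (dependent samples) t-test
t_paired, p_paired = stats.ttest_rel(after, before)

# The same test as a one-sample t-test on the differences
diffs = [a - b for a, b in zip(after, before)]
t_diff, p_diff = stats.ttest_1samp(diffs, popmean=0)
# t_paired equals t_diff, and p_paired equals p_diff
```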

## Types of *t*-tests

There are three *t*-tests to compare means: a one-sample *t*-test, a two-sample *t*-test and a paired *t*-test. The table below summarizes the characteristics of each and provides guidance on how to choose the correct test. Visit the individual pages for each type of *t*-test for examples along with details on assumptions and calculations.

| Test | What it compares | Samples required |
| --- | --- | --- |
| One-sample *t*-test | The mean of a single group against a known or hypothesized value | One |
| Independent two-sample *t*-test | The means of two independent groups | Two |
| Paired *t*-test | The mean difference between paired measurements against zero | Two (paired) |

The table above shows only the *t*-tests for population means. Another common *t*-test is for correlation coefficients. You use this *t*-test to decide whether the correlation coefficient is significantly different from zero.

## One-tailed vs. two-tailed tests

When you define the hypothesis, you also define whether you have a one-tailed or a two-tailed test. You should make this decision before collecting your data or doing any calculations. You make this decision for all three of the *t*-tests for means.

To explain, let’s use the one-sample *t*-test. Suppose we have a random sample of protein bars, and the label for the bars advertises 20 grams of protein per bar. The null hypothesis is that the unknown population mean is 20. Suppose we simply want to know if the data shows we have a different population mean. In this situation, our hypotheses are:

$ \mathrm{H_o}: \mu = 20 $

$ \mathrm{H_a}: \mu \neq 20 $

Here, we have a two-tailed test. We will use the data to see if the sample average differs sufficiently from 20 – either higher or lower – to conclude that the unknown population mean is different from 20.

Suppose instead that we want to know whether the advertising on the label is correct. Does the data support the idea that the unknown population mean is at least 20? Or not? In this situation, our hypotheses are:

$ \mathrm{H_o}: \mu \geq 20 $

$ \mathrm{H_a}: \mu < 20 $

Here, we have a one-tailed test. We will use the data to see if the sample average is sufficiently less than 20 to reject the hypothesis that the unknown population mean is 20 or higher.

See the “tails for hypothesis tests” section on the *t*-distribution page for images that illustrate the concepts of one-tailed and two-tailed tests.
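The protein-bar scenario above can be sketched with SciPy's one-sample test, which exposes the tail choice through its `alternative` parameter (the sample values are invented for illustration):

```python
from scipy import stats

# Hypothetical sample of protein bars (grams of protein); label claims 20 g.
bars = [19.2, 20.1, 18.7, 19.5, 20.3, 18.9, 19.8, 19.0]

# Two-tailed: H0 mu = 20 vs Ha mu != 20
t_two, p_two = stats.ttest_1samp(bars, popmean=20)

# One-tailed: H0 mu >= 20 vs Ha mu < 20
t_one, p_one = stats.ttest_1samp(bars, popmean=20, alternative='less')

# The t statistic is identical; only the p-value changes. Because this
# sample mean falls below 20, the one-tailed p-value is half the two-tailed.
```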

## How to perform a *t*-test

For all of the *t*-tests involving means, you perform the same steps in analysis:

- Define your null ($ \mathrm{H_o} $) and alternative ($ \mathrm{H_a} $) hypotheses before collecting your data.
- Decide on the alpha value (or α value). This involves determining the risk you are willing to take of drawing the wrong conclusion. For example, suppose you set α=0.05 when comparing two independent groups. Here, you have decided on a 5% risk of concluding the unknown population means are different when they are not.
- Check the data for errors.
- Check the assumptions for the test.
- Perform the test and draw your conclusion. All *t*-tests for means involve calculating a test statistic. You compare the test statistic to a theoretical value from the *t*-distribution. The theoretical value involves both the α value and the degrees of freedom for your data. For more detail, visit the pages for one-sample *t*-test, two-sample *t*-test and paired *t*-test.