Compare your sample mean to a known or hypothesized value. Results include t-statistic, p-value, Cohen's d, and 95% CI in APA format.
A one-sample t-test is a parametric statistical test used to determine whether the mean of a single sample differs significantly from a known or hypothesized population value. Unlike the independent-samples and paired-samples t-tests, which compare two sets of scores against each other, the one-sample t-test compares one group against a fixed reference point. This makes it one of the simplest yet most useful tools in inferential statistics, frequently used in quality control, clinical research, psychology, and education.
Use a one-sample t-test whenever you have a single group of continuous measurements and want to test whether the group's average is statistically different from a specific value. Common scenarios include comparing a class's exam scores to a national average, checking whether a production batch meets a target specification, and testing whether reaction times differ from an established population norm.
A teacher wants to know if her class of 10 students performed differently from the national average of 80 on a standardized math exam. She records the following scores:
Sample Data (n = 10)
72, 85, 91, 68, 77, 83, 95, 88, 74, 79
M = 81.20, SD = 8.72, Test Value = 80
Results
t(9) = 0.44, p = .674, d = 0.14, 95% CI [-5.03, 7.43]
The class mean did not differ significantly from the national average of 80. The effect size was negligible (Cohen's d = 0.14), suggesting no meaningful departure from the population benchmark.
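If you want to cross-check these numbers yourself, here is a minimal Python sketch using SciPy's ttest_1samp (assuming numpy and scipy are available; the variable names are illustrative):

```python
import numpy as np
from scipy import stats

scores = np.array([72, 85, 91, 68, 77, 83, 95, 88, 74, 79])
test_value = 80

# Two-sided one-sample t-test against the hypothesized value
t_stat, p_value = stats.ttest_1samp(scores, popmean=test_value)

print(f"M = {scores.mean():.2f}, SD = {scores.std(ddof=1):.2f}")
print(f"t({len(scores) - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
# Expected output: M = 81.20, SD = 8.72
#                  t(9) = 0.44, p = 0.674
```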
Before interpreting your results, verify that these assumptions are reasonably met:
1. Continuous Dependent Variable
The variable being measured must be on an interval or ratio scale (e.g., weight, temperature, test score). Ordinal or categorical data require non-parametric alternatives such as the Wilcoxon signed-rank test.
2. Independence of Observations
Each data point must be independent of the others. This means no repeated measures on the same participant and no clustering. If your observations are correlated, consider a paired-samples t-test or mixed-effects model instead.
3. Approximate Normality
The data should be approximately normally distributed. For sample sizes above 30, the Central Limit Theorem ensures the sampling distribution of the mean is normal regardless of the population shape. For smaller samples, check normality using the Shapiro-Wilk test or a Q-Q plot. Moderate deviations are tolerable because the t-test is fairly robust.
4. No Significant Outliers
Extreme outliers can distort the sample mean and inflate or deflate the t-statistic. Use box plots or z-scores to screen for outliers before running the test. If outliers are present, consider trimming, winsorizing, or using a robust alternative. A quick code sketch covering the normality and outlier checks follows this list.
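The sketch below illustrates assumptions 3 and 4 in Python (a Shapiro-Wilk test plus a simple z-score screen; the 3-SD cutoff is a common convention, not a hard rule):

```python
import numpy as np
from scipy import stats

scores = np.array([72, 85, 91, 68, 77, 83, 95, 88, 74, 79])

# Assumption 3 -- approximate normality: Shapiro-Wilk (p > .05 suggests no detectable departure)
w_stat, shapiro_p = stats.shapiro(scores)
print(f"Shapiro-Wilk: W = {w_stat:.3f}, p = {shapiro_p:.3f}")

# Assumption 4 -- outlier screening: flag scores more than 3 SDs from the mean
z = (scores - scores.mean()) / scores.std(ddof=1)
print("Potential outliers:", scores[np.abs(z) > 3])
```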
According to APA 7th edition guidelines, report the sample descriptives, t-statistic, degrees of freedom, p-value, effect size, and confidence interval. Here is a template:
APA Reporting Template
A one-sample t-test was conducted to compare the sample mean (M = 81.20, SD = 8.72) to the test value of 80.00. The result was not statistically significant, t(9) = 0.44, p = .674, d = 0.14, 95% CI [-5.03, 7.43].
Significant Result Example
A one-sample t-test indicated that participants' reaction times (M = 342.50, SD = 28.10) were significantly faster than the population norm of 375 ms, t(39) = -7.31, p < .001, d = 1.16, 95% CI [-41.49, -23.51].
Note: Report t-values to two decimal places, p-values to three decimal places (use p < .001 when below .001), and always include a measure of effect size such as Cohen's d.
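If you report many tests, a small helper can apply these rounding rules consistently. The function below is a hypothetical sketch (not part of StatMate), shown only to make the formatting rules concrete:

```python
def apa_one_sample(t, df, p, d, ci_low, ci_high):
    """Format one-sample t-test results using the APA rounding rules above."""
    p_text = "p < .001" if p < 0.001 else f"p = {p:.3f}".replace("0.", ".")
    return (f"t({df}) = {t:.2f}, {p_text}, d = {d:.2f}, "
            f"95% CI [{ci_low:.2f}, {ci_high:.2f}]")

print(apa_one_sample(t=0.44, df=9, p=0.674, d=0.14, ci_low=-5.03, ci_high=7.43))
# t(9) = 0.44, p = .674, d = 0.14, 95% CI [-5.03, 7.43]
```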
In a one-sample t-test, Cohen's d is calculated as the absolute difference between the sample mean and the test value, divided by the sample standard deviation. It quantifies how many standard deviations the sample mean lies from the hypothesized value, making it independent of sample size.
| Cohen's d | Interpretation | Practical Meaning |
|---|---|---|
| < 0.2 | Negligible | Sample mean is very close to the test value |
| 0.2 | Small | Difference detectable only with precise measurement |
| 0.5 | Medium | Noticeable difference in practical terms |
| 0.8+ | Large | Substantial departure from the test value |
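As a quick illustration, the sketch below recomputes Cohen's d for the class example, dividing the mean difference by the sample standard deviation (ddof = 1):

```python
import numpy as np

scores = np.array([72, 85, 91, 68, 77, 83, 95, 88, 74, 79])
test_value = 80

# d = |sample mean - test value| / sample standard deviation
d = abs(scores.mean() - test_value) / scores.std(ddof=1)
print(f"d = {d:.2f}")  # d = 0.14 -> negligible by the table above
```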
| Situation | Recommended Test |
|---|---|
| Compare one sample mean to a known value | One-sample t-test |
| Compare means from two independent groups | Independent samples t-test |
| Compare pre/post means (same subjects) | Paired samples t-test |
| Non-normal data, one sample vs. value | Wilcoxon signed-rank test |
| Compare proportions to a known value | One-sample z-test for proportions |
StatMate's one-sample t-test calculations have been validated against R (t.test function) and SPSS output. We use the jstat library for the Student's t probability distribution and compute degrees of freedom as n - 1. The 95% confidence interval is constructed around the mean difference using the critical t-value for the appropriate df. All results match R output to at least 4 decimal places.
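If you want to verify the interval construction by hand, the sketch below mirrors the same steps in Python with SciPy. It illustrates the general procedure (df = n - 1, critical t, CI around the mean difference) rather than StatMate's actual implementation:

```python
import numpy as np
from scipy import stats

scores = np.array([72, 85, 91, 68, 77, 83, 95, 88, 74, 79])
test_value = 80

n = len(scores)
df = n - 1                              # degrees of freedom
mean_diff = scores.mean() - test_value  # difference from the test value
se = scores.std(ddof=1) / np.sqrt(n)    # standard error of the mean
t_crit = stats.t.ppf(0.975, df)         # critical t for a two-sided 95% CI

print(f"95% CI [{mean_diff - t_crit * se:.2f}, {mean_diff + t_crit * se:.2f}]")
# 95% CI [-5.03, 7.43]
```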