Compare a sample mean against a known or hypothesized value. Results include the t statistic, p value, Cohen's d, and 95% confidence interval, formatted in APA style.
A one-sample t-test is a parametric statistical test used to determine whether the mean of a single sample differs significantly from a known or hypothesized population value. Unlike the independent-samples and paired-samples t-tests, which compare two sets of scores against each other, the one-sample t-test compares one group against a fixed reference point. This makes it one of the simplest and most widely used tools in inferential statistics, appearing frequently in quality control, clinical research, psychology, and education.
Use a one-sample t-test whenever you have a single group of continuous measurements and want to test whether the group's average differs statistically from a specific value. Common scenarios include checking whether a production batch meets a target specification, whether patients' scores depart from a clinical norm, or whether a class performed differently from a national benchmark, as in the worked example below.
A teacher wants to know if her class of 10 students performed differently from the national average of 80 on a standardized math exam. She records the following scores:
Sample Data (n = 10)
72, 85, 91, 68, 77, 83, 95, 88, 74, 79
M = 81.20, SD = 8.72, Test Value = 80
Results
t(9) = 0.44, p = .674, d = 0.14, 95% CI [-5.03, 7.43]
The class mean did not differ significantly from the national average of 80. The effect size was negligible (Cohen's d = 0.14), suggesting no meaningful departure from the population benchmark.
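To reproduce these figures, the sketch below applies the underlying formula, t = (M - μ0) / (s / √n), to the scores above in plain TypeScript with no libraries. The critical value 2.262 is t at the .975 quantile with 9 degrees of freedom, taken from a standard t table; the p-value itself requires a Student's t CDF (for example jStat.studentt.cdf), so it is only noted in a comment.

```typescript
// Reproduce the worked example: one-sample t-test of the class scores against 80.
const scores = [72, 85, 91, 68, 77, 83, 95, 88, 74, 79];
const testValue = 80;

const n = scores.length;                                  // 10
const mean = scores.reduce((a, b) => a + b, 0) / n;       // 81.20
const ss = scores.reduce((a, x) => a + (x - mean) ** 2, 0);
const sd = Math.sqrt(ss / (n - 1));                       // sample SD, ~8.72
const se = sd / Math.sqrt(n);                             // standard error of the mean

const t = (mean - testValue) / se;                        // ~0.44
const d = Math.abs(mean - testValue) / sd;                // Cohen's d, ~0.14
const df = n - 1;                                         // 9

// 95% CI around the mean difference; 2.262 = critical t(.975, df = 9).
// For the two-tailed p (~.674), evaluate a t CDF at |t| with df = 9.
const tCrit = 2.262;
const ci = [mean - testValue - tCrit * se, mean - testValue + tCrit * se]; // ~[-5.03, 7.43]

console.log({ mean: mean.toFixed(2), sd: sd.toFixed(2), t: t.toFixed(2), df, d: d.toFixed(2), ci });
```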
Before interpreting your results, verify that these assumptions are reasonably met:
1. Continuous Dependent Variable
The variable being measured must be on an interval or ratio scale (e.g., weight, temperature, test score). Ordinal or categorical data require non-parametric alternatives such as the Wilcoxon signed-rank test.
2. Independence of Observations
Each data point must be independent of the others. This means no repeated measures on the same participant and no clustering. If your observations are correlated, consider a paired-samples t-test or mixed-effects model instead.
3. Approximate Normality
The data should be approximately normally distributed. For sample sizes above 30, the Central Limit Theorem ensures the sampling distribution of the mean is normal regardless of the population shape. For smaller samples, check normality using the Shapiro-Wilk test or a Q-Q plot. Moderate deviations are tolerable because the t-test is fairly robust.
4. No Significant Outliers
Extreme outliers can distort the sample mean and inflate or deflate the t-statistic. Use box plots or z-scores to screen for outliers before running the test. If outliers are present, consider trimming, winsorizing, or using a robust alternative. A quick screening sketch covering this check and the normality check follows this list.
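As a practical illustration of checks 3 and 4, here is a small, library-free TypeScript sketch that flags potential outliers with z-scores and prints plotting positions for a manual Q-Q comparison. The |z| > 3 cutoff is a common rule of thumb rather than a fixed standard, and a formal Shapiro-Wilk test would still be run in R or SPSS.

```typescript
// Screening sketch for assumptions 3 (normality) and 4 (outliers).
const data = [72, 85, 91, 68, 77, 83, 95, 88, 74, 79];

const n = data.length;
const mean = data.reduce((a, b) => a + b, 0) / n;
const sd = Math.sqrt(data.reduce((a, x) => a + (x - mean) ** 2, 0) / (n - 1));

// Assumption 4: z-scores beyond +/-3 are worth inspecting
// (with only n = 10, even |z| > 2 may deserve a closer look).
const outliers = data.filter((x) => Math.abs((x - mean) / sd) > 3);
console.log("potential outliers:", outliers); // [] for this sample

// Assumption 3: sorted values with plotting positions (i + 0.5) / n;
// plot these against normal quantiles for a Q-Q check.
const sorted = [...data].sort((a, b) => a - b);
sorted.forEach((x, i) => console.log(((i + 0.5) / n).toFixed(2), x));
```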
According to APA 7th edition guidelines, report the sample descriptives, t-statistic, degrees of freedom, p-value, effect size, and confidence interval. Here is a template:
APA Reporting Template
A one-sample t-test was conducted to compare the sample mean (M = 81.20, SD = 8.72) to the test value of 80.00. The result was not statistically significant, t(9) = 0.44, p = .674, d = 0.14, 95% CI [-5.03, 7.43].
Significant Result Example
A one-sample t-test indicated that participants' reaction times (M = 342.50, SD = 28.10) were significantly faster than the population norm of 375 ms, t(39) = -7.31, p < .001, d = 1.16, 95% CI [-41.49, -23.51].
Note: Report t-values to two decimal places, p-values to three decimal places (use p < .001 when below .001), and always include a measure of effect size such as Cohen's d.
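Those rounding rules can be captured in a small helper. The sketch below is illustrative only; formatAPA and its signature are our own and not part of StatMate or any statistics library.

```typescript
// Format a one-sample t-test result following the APA rounding conventions above.
function formatAPA(t: number, df: number, p: number, d: number): string {
  // p to three decimals with the leading zero dropped; "p < .001" below that threshold.
  const pText = p < 0.001 ? "p < .001" : `p = ${p.toFixed(3).replace(/^0/, "")}`;
  return `t(${df}) = ${t.toFixed(2)}, ${pText}, d = ${d.toFixed(2)}`;
}

console.log(formatAPA(0.44, 9, 0.674, 0.14));       // "t(9) = 0.44, p = .674, d = 0.14"
console.log(formatAPA(-7.31, 39, 0.0000003, 1.16)); // "t(39) = -7.31, p < .001, d = 1.16"
```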
In a one-sample t-test, Cohen's d is calculated as the absolute difference between the sample mean and the test value, divided by the sample standard deviation. It quantifies how many standard deviations the sample mean lies from the hypothesized value, making it independent of sample size.
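As a minimal sketch of that definition, using the class example (the function name is ours, not StatMate's):

```typescript
// Cohen's d for a one-sample t-test: |M - mu0| / SD.
function cohensD(mean: number, testValue: number, sd: number): number {
  return Math.abs(mean - testValue) / sd;
}

console.log(cohensD(81.2, 80, 8.72).toFixed(2)); // "0.14"
```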
| Cohen's d | Interpretation | Practical Meaning |
|---|---|---|
| < 0.2 | Negligible | Sample mean is very close to the test value |
| 0.2 | Small | Difference detectable only with precise measurement |
| 0.5 | Medium | Noticeable difference in practical terms |
| 0.8+ | Large | Substantial departure from the test value |
| Situation | Recommended Test |
|---|---|
| Compare one sample mean to a known value | One-sample t-test |
| Compare means from two independent groups | Independent samples t-test |
| Compare pre/post means (same subjects) | Paired samples t-test |
| Non-normal data, one sample vs. value | Wilcoxon signed-rank test |
| Compare proportions to a known value | One-sample z-test for proportions |
StatMate's one-sample t-test calculations have been validated against R (t.test function) and SPSS output. We use the jstat library for the Student's t probability distribution and compute degrees of freedom as n - 1. The 95% confidence interval is constructed around the mean difference using the critical t-value for the appropriate df. All results match R output to at least 4 decimal places.
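For readers who want to mirror that pipeline, the sketch below assumes the jstat npm package and its documented jStat.studentt.cdf and jStat.studentt.inv functions; the oneSampleTTest wrapper is illustrative and not StatMate's actual source. Cross-check the output against R's t.test, as noted above.

```typescript
// One-sample t-test using jStat for the Student's t distribution (assumed API).
const { jStat } = require("jstat");

function oneSampleTTest(data: number[], testValue: number) {
  const n = data.length;
  const df = n - 1;                                  // degrees of freedom
  const mean = data.reduce((a, b) => a + b, 0) / n;
  const sd = Math.sqrt(data.reduce((a, x) => a + (x - mean) ** 2, 0) / df);
  const se = sd / Math.sqrt(n);

  const t = (mean - testValue) / se;
  const p = 2 * (1 - jStat.studentt.cdf(Math.abs(t), df));   // two-tailed p
  const tCrit = jStat.studentt.inv(0.975, df);               // critical t for 95% CI
  const ci = [mean - testValue - tCrit * se, mean - testValue + tCrit * se];

  return { t, df, p, d: Math.abs(mean - testValue) / sd, ci };
}

// Equivalent R call for validation: t.test(c(72, 85, 91, 68, 77, 83, 95, 88, 74, 79), mu = 80)
console.log(oneSampleTTest([72, 85, 91, 68, 77, 83, 95, 88, 74, 79], 80));
```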
Related Calculators

| Calculator | Purpose |
|---|---|
| t-test | Compare the means of two groups |
| ANOVA | Compare the means of three or more groups |
| Chi-square test | Test association between categorical variables |
| Correlation | Measure the strength of a relationship |
| Descriptive statistics | Summarize data |
| Sample size | Power analysis and sample planning |
| Mann-Whitney U | Non-parametric comparison of two groups |
| Wilcoxon test | Non-parametric paired-samples test |
| Regression | Model the relationship between X and Y |
| Multiple regression | Multiple predictor variables |
| Cronbach's alpha | Scale reliability |
| Logistic regression | Predict binary outcomes |
| Factor analysis | Explore latent factor structure |
| Kruskal-Wallis | Non-parametric comparison of three or more groups |
| Repeated measures | Within-subjects ANOVA |
| Two-way ANOVA | Analysis of factorial designs |
| Friedman test | Non-parametric repeated measures |
| Fisher's exact test | Exact test for 2×2 tables |
| McNemar's test | Test for paired nominal data |