Examine the main effects and interaction of two independent variables on a continuous outcome variable. Results include F statistics, partial η², and an interaction plot.
Two-way ANOVA, also known as factorial ANOVA, is a statistical method that examines the simultaneous effects of two independent categorical variables (factors) on a continuous dependent variable. Unlike one-way ANOVA, which tests a single factor, two-way ANOVA tests three distinct hypotheses: the main effect of Factor A, the main effect of Factor B, and the interaction between Factor A and Factor B. This makes it one of the most widely used analytical tools in experimental research across psychology, medicine, education, and the social sciences.
Use a two-way ANOVA when your study design includes two independent categorical factors, each with two or more levels, and a single continuous dependent variable measured on an interval or ratio scale. Common scenarios include experimental designs examining the combined effects of treatment type and demographic group, dose and delivery method, or any two grouping variables measured simultaneously.
| Feature | One-Way ANOVA | Two-Way ANOVA |
|---|---|---|
| Factors | 1 | 2 |
| Tests | 1 main effect | 2 main effects + 1 interaction |
| Interaction | Not applicable | Tested |
| Effect size | η² | Partial η² |
| Design complexity | Simple | Factorial (A × B) |
A researcher tests the effects of study method (Method A vs. Method B) and test difficulty (Easy vs. Hard) on exam scores. Five students are randomly assigned to each of the four cells.
Results
Study method: F(1, 16) = 52.27, p < .001, η²p = .77
Difficulty: F(1, 16) = 36.82, p < .001, η²p = .70
Interaction: F(1, 16) = 0.33, p = .576, η²p = .02
Both main effects are significant, but the interaction is not, meaning the advantage of Method A over Method B is consistent across difficulty levels.
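To make the mechanics concrete, the balanced-design sums of squares behind a table like this can be computed by hand. The sketch below uses a hypothetical 2 × 2 dataset (not the actual scores behind the results above) with NumPy/SciPy, rather than StatMate's own implementation:

```python
import numpy as np
from scipy import stats

# Hypothetical balanced 2x2 data, 5 scores per cell; layout: [method][difficulty][replicate]
cells = np.array([
    [[85, 86, 84, 85, 85],   # Method A, Easy
     [75, 76, 74, 75, 75]],  # Method A, Hard
    [[78, 79, 77, 78, 78],   # Method B, Easy
     [68, 69, 67, 68, 68]],  # Method B, Hard
], dtype=float)

a, b, n = cells.shape
gm = cells.mean()                      # grand mean
cell_means = cells.mean(axis=2)        # mean of each of the a*b cells
a_means = cells.mean(axis=(1, 2))      # marginal means of Factor A
b_means = cells.mean(axis=(0, 2))      # marginal means of Factor B

# Balanced-design sums of squares
ss_a = n * b * np.sum((a_means - gm) ** 2)
ss_b = n * a * np.sum((b_means - gm) ** 2)
ss_ab = n * np.sum((cell_means - a_means[:, None] - b_means[None, :] + gm) ** 2)
ss_err = np.sum((cells - cell_means[:, :, None]) ** 2)

# Degrees of freedom: dfA = a-1, dfB = b-1, dfAB = (a-1)(b-1), dferror = N - ab
df_a, df_b = a - 1, b - 1
df_ab = df_a * df_b
df_err = a * b * n - a * b

mse = ss_err / df_err
f_a, f_b, f_ab = ss_a / df_a / mse, ss_b / df_b / mse, ss_ab / df_ab / mse

for name, f, df1 in [("method", f_a, df_a), ("difficulty", f_b, df_b),
                     ("interaction", f_ab, df_ab)]:
    p = stats.f.sf(f, df1, df_err)     # upper-tail F probability
    print(f"{name}: F({df1}, {df_err}) = {f:.2f}, p = {p:.4f}")
```

With these invented scores the interaction term comes out exactly zero, because the Method A advantage is identical (7 points) at both difficulty levels; that is the additive pattern described above.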
Before interpreting your results, verify these four assumptions:
1. Normality
The dependent variable should be approximately normally distributed within each cell of the design. Assess with Shapiro-Wilk tests or Q-Q plots. ANOVA is robust to moderate violations when cell sizes are equal and reasonably large.
2. Homogeneity of Variance
Variances should be approximately equal across all cells. Use Levene's test to check. When group sizes are unequal and variances differ, results may be unreliable.
3. Independence of Observations
Each observation must be independent. Random assignment to cells ensures independence. If observations are nested or repeated, use mixed-effects models instead.
4. Interval or Ratio Data
The dependent variable must be continuous (interval or ratio scale). For ordinal or categorical outcomes, consider non-parametric alternatives such as the Aligned Rank Transform.
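The first two checks are easy to script. This is a minimal sketch using SciPy's `shapiro` and `levene` on hypothetical cell data (any real check would use your own cells):

```python
from scipy import stats

# Hypothetical scores for the four cells of a 2x2 design
cells = {
    ("A", "easy"): [85, 86, 84, 85, 85],
    ("A", "hard"): [75, 76, 74, 75, 75],
    ("B", "easy"): [78, 79, 77, 78, 78],
    ("B", "hard"): [68, 69, 67, 68, 68],
}

# 1. Normality: Shapiro-Wilk within each cell
for key, scores in cells.items():
    w, p = stats.shapiro(scores)
    print(f"{key}: W = {w:.3f}, p = {p:.3f}")

# 2. Homogeneity of variance: Levene's test across all cells
stat, p = stats.levene(*cells.values())
print(f"Levene: W = {stat:.3f}, p = {p:.3f}")
```

A non-significant Levene result (p > .05) is consistent with equal variances; with very small cells, as here, both tests have little power, so plots are a useful complement.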
The interaction is arguably the most informative part of a two-way ANOVA. A significant interaction means the effect of one factor depends on the level of the other factor. When the interaction is significant, main effects should be interpreted cautiously because average differences across one factor may mask opposite patterns at different levels of the other factor. In such cases, report simple main effects (the effect of Factor A at each level of Factor B, and vice versa) rather than overall main effects.
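One common way to probe an interaction is to test the effect of one factor separately at each level of the other. The sketch below uses Welch t-tests on hypothetical cell data for simplicity; a full simple-main-effects analysis would typically use the pooled error term from the overall ANOVA instead:

```python
from scipy import stats

# Hypothetical 2x2 cell data (method x difficulty)
cells = {
    ("A", "easy"): [85, 86, 84, 85, 85],
    ("A", "hard"): [75, 76, 74, 75, 75],
    ("B", "easy"): [78, 79, 77, 78, 78],
    ("B", "hard"): [68, 69, 67, 68, 68],
}

# Simple main effect of method at each level of difficulty (Welch t-test)
for level in ("easy", "hard"):
    t, p = stats.ttest_ind(cells[("A", level)], cells[("B", level)],
                           equal_var=False)
    print(f"Method A vs. B at {level}: t = {t:.2f}, p = {p:.4f}")
```

When you run several such tests, apply a multiplicity correction (e.g. Bonferroni) to the resulting p-values.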
Report each effect (Factor A, Factor B, and the interaction) separately, including F-statistic, degrees of freedom, p-value, and partial eta-squared:
Example Report
A 2 × 2 between-subjects ANOVA was conducted. There was a significant main effect of study method, F(1, 16) = 52.27, p < .001, η²p = .77. The main effect of difficulty was also significant, F(1, 16) = 36.82, p < .001, η²p = .70. The interaction between study method and difficulty was not significant, F(1, 16) = 0.33, p = .576, η²p = .02.
Note: Always report partial η² (not regular η²) for factorial designs. Italicize F, p, and η². Report degrees of freedom for both the effect and the residual.
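If a source reports only F and its degrees of freedom, partial η² can be recovered as SS_effect / (SS_effect + SS_error) = (F · df_effect) / (F · df_effect + df_error). Applying this to the worked example above reproduces the reported values:

```python
# Partial eta-squared from an F statistic and its degrees of freedom:
# eta_p^2 = (F * df_effect) / (F * df_effect + df_error)
def partial_eta_squared(f, df_effect, df_error):
    return (f * df_effect) / (f * df_effect + df_error)

# F values from the worked example above, each with df = (1, 16)
print(round(partial_eta_squared(52.27, 1, 16), 2))  # study method -> ~.77
print(round(partial_eta_squared(36.82, 1, 16), 2))  # difficulty  -> ~.70
print(round(partial_eta_squared(0.33, 1, 16), 2))   # interaction -> ~.02
```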
StatMate's two-way ANOVA calculations have been validated against R's aov() and SPSS GLM output. The implementation uses balanced-formula sums of squares with the jstat library for the F-distribution. All F-statistics, p-values, and partial eta-squared values match R and SPSS output. Degrees of freedom use standard formulas: dfA = a − 1, dfB = b − 1, dfAB = (a − 1)(b − 1), dferror = N − ab.
t-Test
Compare the means of two groups
ANOVA
Compare the means of three or more groups
Chi-Square Test
Test association between categorical variables
Correlation
Measure the strength of a relationship
Descriptive Statistics
Summarize your data
Sample Size
Power analysis and sampling planning
One-Sample t-Test
Compare against a known value
Mann-Whitney U
Non-parametric two-group comparison
Wilcoxon Test
Non-parametric paired test
Regression
Model the X-Y relationship
Multiple Regression
Multiple predictor variables
Cronbach's Alpha
Scale reliability
Logistic Regression
Predict binary outcomes
Factor Analysis
Explore latent factor structure
Kruskal-Wallis
Non-parametric comparison of three or more groups
Repeated Measures
Within-subjects ANOVA
Friedman Test
Non-parametric repeated measures
Fisher's Exact Test
Exact test for 2×2 tables
McNemar's Test
Test for paired nominal data
Paste from Excel/spreadsheets, or drop a CSV file
Enter your data and click "Calculate"
Or click "Load Sample Data" to try it out