APA Reporting · 15 min read · 2026-03-26

How to Report Linear Regression in APA 7th Edition — R², Beta, F-Test & Examples

Complete guide to reporting simple and multiple linear regression in APA 7th edition. R-squared, standardized beta coefficients, F-test, assumption checks, and copy-paste APA templates.

Why Proper Regression Reporting Matters

Regression analysis is one of the most versatile and widely used methods in quantitative research. Whether you are predicting exam scores from study hours, modeling the joint influence of several workplace factors on job satisfaction, or testing whether a clinical intervention predicts symptom reduction after controlling for demographics, regression provides the framework. Yet reporting regression results in APA format trips up researchers more than almost any other statistical procedure.

The difficulty stems from the layered nature of regression output. Unlike a t-test, which produces a single test statistic and effect size, regression produces an overall model fit statistic, individual predictor statistics, standardized and unstandardized coefficients, confidence intervals, and assumption diagnostics. Omitting any of these elements invites revision requests from journal reviewers. This guide walks through the APA 7th edition format for both simple and multiple regression with concrete numerical examples you can use as templates.

The APA Format for Regression Results

Simple Linear Regression Template

Every simple linear regression reported in APA should include:

  • R-squared (R²): the proportion of variance explained
  • F-statistic: the overall model test, with degrees of freedom
  • Exact p value: to three decimal places or p < .001
  • Unstandardized coefficient (B): the slope, with standard error
  • Standardized coefficient (beta): for cross-study comparisons
  • 95% confidence interval for B

The general template:

A simple linear regression was conducted to examine whether [predictor] predicted [outcome]. The model was statistically significant, R² = .XX, F(1, N-2) = X.XX, p = .XXX. [Predictor] significantly predicted [outcome], B = X.XX, SE = X.XX, beta = .XX, t(df) = X.XX, p = .XXX, 95% CI [X.XX, X.XX].
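As a sketch of where these numbers come from, the quantities in the template can be computed from scratch with numpy and scipy. The function name `simple_regression_report` is illustrative, not a library API:

```python
import numpy as np
from scipy import stats

def simple_regression_report(x, y):
    """Compute the APA reporting quantities for a one-predictor model."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    sxx = np.sum((x - x.mean()) ** 2)
    sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b = sxy / sxx                          # unstandardized slope B
    a = y.mean() - b * x.mean()            # intercept
    resid = y - (a + b * x)
    sse = np.sum(resid ** 2)
    sst = np.sum((y - y.mean()) ** 2)
    r2 = 1 - sse / sst                     # R-squared
    se_b = np.sqrt(sse / (n - 2) / sxx)    # standard error of B
    t = b / se_b
    p = 2 * stats.t.sf(abs(t), df=n - 2)   # two-tailed p value
    f = t ** 2                             # with one predictor, F = t^2
    beta = b * x.std(ddof=1) / y.std(ddof=1)  # standardized coefficient
    return dict(B=b, SE=se_b, beta=beta, t=t, p=p, F=f, R2=r2, df=(1, n - 2))
```

Note that with a single predictor the overall F-test and the coefficient's t-test are the same test (F = t²), which is why the template reports both with the same p value.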

Multiple Regression Template

Multiple regression adds two requirements:

  • Adjusted R²: corrects for the number of predictors
  • Coefficients table: when two or more predictors are present

A multiple regression analysis was conducted to examine whether [predictor 1], [predictor 2], and [predictor 3] predicted [outcome]. The overall model was statistically significant, R² = .XX, adjusted R² = .XX, F(k, N-k-1) = X.XX, p < .001.

Step-by-Step Example: Predicting GPA from Study Hours

Research Scenario

An educational psychologist investigates whether weekly study hours predict semester GPA among 120 university students.

Reporting the Overall Model

A simple linear regression was conducted to examine whether weekly study hours predicted semester GPA. The results indicated that the overall model was statistically significant, R² = .34, F(1, 118) = 60.73, p < .001. Weekly study hours accounted for 34% of the variance in semester GPA.

Reporting the Coefficient

Weekly study hours significantly predicted semester GPA, B = 0.08, SE = 0.01, beta = .58, t(118) = 7.79, p < .001, 95% CI [0.06, 0.10]. For each additional hour of weekly study, semester GPA increased by an average of 0.08 points.

Breaking Down the Components

| Component | Value | Explanation |
|-----------|-------|-------------|
| R² | .34 | 34% of variance explained; no leading zero |
| F | 60.73 | Overall model F-statistic, two decimal places |
| df | 1, 118 | Regression df (predictors) and residual df (N - k - 1) |
| p | < .001 | Exact p value, or < .001 for very small values |
| B | 0.08 | Unstandardized slope in original units |
| SE | 0.01 | Standard error of the slope |
| beta | .58 | Standardized coefficient (no leading zero) |
| t | 7.79 | t-statistic for the coefficient |
| 95% CI | [0.06, 0.10] | Confidence interval for B |

Interpreting R-Squared

R² represents the proportion of variance in the outcome explained by the predictor(s). In the example above, R² = .34 means study hours explain 34% of the variation in GPA.

Guidelines for Interpreting R²

Cohen (1988) provided benchmarks for the behavioral sciences:

| R² | f² | Interpretation |
|-----|-----|----------------|
| .02 | .02 | Small effect |
| .13 | .15 | Medium effect |
| .26 | .35 | Large effect |

These benchmarks are field-specific. In economics, R² = .10 might be noteworthy; in psychophysics, R² = .80 might be expected. Always interpret in context.

R² vs. Adjusted R²

For simple regression with one predictor, R² suffices. For multiple regression, always report both R² and adjusted R².

Adjusted R² penalizes for adding predictors that do not meaningfully improve fit. If you add a predictor and adjusted R² decreases, that predictor is not contributing beyond what is explained by existing predictors.

The model explained 52% of the variance in exam scores (R² = .52, adjusted R² = .51).

The difference between .52 and .51 is small, indicating that all three predictors contribute meaningfully.
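The adjustment itself is a one-line formula: adjusted R² = 1 - (1 - R²)(n - 1)/(n - k - 1), where n is the sample size and k the number of predictors. A minimal helper (the function name is illustrative):

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - k - 1),
    where n = sample size and k = number of predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

Plugging in the example values (R² = .52, n = 150, k = 3) reproduces the adjusted R² of .51 reported above.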

Beta Coefficients: Standardized vs. Unstandardized

Unstandardized Coefficients (B)

B tells you the predicted change in the outcome for a one-unit change in the predictor, in the original measurement units:

For every additional hour of weekly study, GPA increased by 0.08 points (B = 0.08).

This is the most practically interpretable coefficient when the units are meaningful (hours, dollars, years).

Standardized Coefficients (beta)

beta expresses the predicted change in standard deviation units, enabling direct comparison across predictors measured on different scales:

Study hours had a stronger relative contribution (beta = .38) than attendance rate (beta = .22).

Use beta when comparing the relative importance of predictors with different units.
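The conversion between the two is mechanical: beta = B × SD(x) / SD(y). A small sketch (function name illustrative):

```python
import numpy as np

def standardize_coefficient(b, x, y):
    """Convert an unstandardized slope B into a standardized beta:
    beta = B * SD(x) / SD(y), using sample standard deviations."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return b * x.std(ddof=1) / y.std(ddof=1)
```

This is why beta is unit-free: multiplying by SD(x)/SD(y) cancels the measurement units of both variables.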

Reporting Both in APA Format

APA recommends including both whenever possible:

Study hours significantly predicted exam scores, B = 1.92, SE = 0.31, beta = .38, t(146) = 6.19, p < .001, 95% CI [1.31, 2.53].

Multiple Regression: Full Worked Example

Research Scenario

A researcher investigates whether study hours, class attendance rate, and prior GPA predict final exam scores among 150 students.

Overall Model

A multiple regression analysis was conducted to examine whether study hours, class attendance rate, and prior GPA predicted final exam scores. The overall model was statistically significant, R² = .52, adjusted R² = .51, F(3, 146) = 52.78, p < .001. Together, the three predictors accounted for 52% of the variance in final exam scores.

Coefficients Table

| Predictor | B | SE | beta | t | p | 95% CI |
|-----------|-------|------|------|------|--------|---------------|
| (Intercept) | 12.45 | 5.32 | -- | 2.34 | .021 | [1.94, 22.96] |
| Study hours | 1.92 | 0.31 | .38 | 6.19 | < .001 | [1.31, 2.53] |
| Attendance rate | 0.28 | 0.08 | .22 | 3.50 | < .001 | [0.12, 0.44] |
| Prior GPA | 8.74 | 1.85 | .29 | 4.72 | < .001 | [5.08, 12.40] |

Note. R² = .52, adjusted R² = .51, F(3, 146) = 52.78, p < .001.

Individual Predictor Write-Up

Study hours was the strongest predictor of final exam scores, B = 1.92, SE = 0.31, beta = .38, t(146) = 6.19, p < .001, 95% CI [1.31, 2.53]. Prior GPA also significantly predicted exam scores, B = 8.74, SE = 1.85, beta = .29, t(146) = 4.72, p < .001, 95% CI [5.08, 12.40]. Class attendance rate made a smaller but statistically significant contribution, B = 0.28, SE = 0.08, beta = .22, t(146) = 3.50, p < .001, 95% CI [0.12, 0.44].
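For readers who want to see where a coefficients table like this comes from, here is a minimal OLS sketch in numpy that returns the overall-model statistics (coefficients, standard errors, R², adjusted R², and F). The function name and return format are illustrative, not a library API:

```python
import numpy as np

def multiple_regression(X, y):
    """OLS fit returning the overall-model statistics used in the write-up.
    X: (n, k) predictor matrix without intercept; y: (n,) outcome."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n, k = X.shape
    Xd = np.column_stack([np.ones(n), X])          # add intercept column
    coef, *_ = np.linalg.lstsq(Xd, y, rcond=None)  # [intercept, b1, ..., bk]
    resid = y - Xd @ coef
    sse = resid @ resid
    sst = np.sum((y - y.mean()) ** 2)
    r2 = 1 - sse / sst
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
    f = (r2 / k) / ((1 - r2) / (n - k - 1))        # overall F(k, n - k - 1)
    mse = sse / (n - k - 1)
    cov = mse * np.linalg.inv(Xd.T @ Xd)           # coefficient covariance
    se = np.sqrt(np.diag(cov))                     # standard errors
    return coef, se, r2, adj_r2, f
```

Each standard error here feeds the corresponding t-statistic (t = B / SE) and confidence interval in the table.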

Hierarchical (R² Change) Reporting

When building models in blocks, report Delta R² to show additional variance explained:

In Step 1, study hours was entered and explained 28% of variance, R² = .28, F(1, 148) = 57.56, p < .001. In Step 2, attendance rate and prior GPA were added. The model explained an additional 24% of variance, Delta R² = .24, F-change(2, 146) = 36.52, p < .001.
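The F-change statistic behind this write-up compares nested models directly from their R² values: F(m, n - k - 1) = ((R²_full - R²_reduced) / m) / ((1 - R²_full) / (n - k - 1)), where m is the number of predictors added in the step and k is the total predictors in the full model. A small helper (function name illustrative):

```python
def f_change(r2_reduced, r2_full, n, k_full, m):
    """F-change test for a hierarchical regression step.
    m = number of predictors added in this step;
    k_full = total predictors in the full model.
    Returns the F-change value and its (df1, df2)."""
    df2 = n - k_full - 1
    f = ((r2_full - r2_reduced) / m) / ((1 - r2_full) / df2)
    return f, (m, df2)
```

Applying it to the example (R² = .28 to .52, n = 150, two predictors added) reproduces the F-change of roughly 36.5 on (2, 146) df; the small discrepancy from 36.52 comes from using rounded R² values.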

Reporting Regression Assumptions

APA 7th edition expects at least a brief mention of assumption checks. Here are the five key assumptions with reporting templates.

1. Linearity

Scatter plots of each predictor against the outcome showed approximately linear relationships.

2. Independence of Residuals

The Durbin-Watson statistic was 1.92, indicating no substantial autocorrelation among residuals.

3. Homoscedasticity

A plot of standardized residuals against predicted values showed a random scatter pattern, suggesting the homoscedasticity assumption was met.

4. Normality of Residuals

A Q-Q plot of standardized residuals indicated an approximately normal distribution. The Shapiro-Wilk test on residuals was not significant (W = 0.99, p = .312).

5. Multicollinearity (Multiple Regression Only)

VIF values for all predictors ranged from 1.12 to 2.34, well below the threshold of 10, indicating no multicollinearity concerns.
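VIF is defined per predictor as 1 / (1 - R²_j), where R²_j comes from regressing predictor j on all remaining predictors. A from-scratch sketch in numpy (function name illustrative):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of predictor matrix X:
    VIF_j = 1 / (1 - R^2_j), where R^2_j is from regressing column j
    on all the other columns (with an intercept)."""
    X = np.asarray(X, float)
    n, k = X.shape
    out = []
    for j in range(k):
        yj = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, yj, rcond=None)
        resid = yj - others @ coef
        r2_j = 1 - resid @ resid / np.sum((yj - yj.mean()) ** 2)
        out.append(1 / (1 - r2_j))
    return np.array(out)
```

Uncorrelated predictors give VIF = 1 (the minimum); highly collinear predictors push VIF toward infinity, which is why a threshold like 10 flags trouble.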

When Assumptions Are Violated

Report the violation and corrective action:

Visual inspection of the residual plot indicated potential heteroscedasticity. Heteroscedasticity-consistent standard errors (HC3) were used for all coefficient tests. The pattern of significant predictors remained unchanged with robust standard errors.

| Violation | Corrective Strategy |
|-----------|---------------------|
| Non-linearity | Log or polynomial transformation |
| Autocorrelation (Durbin-Watson far from 2) | Generalized least squares |
| Heteroscedasticity | Robust (HC3) standard errors or weighted least squares |
| Non-normal residuals | Bootstrapped confidence intervals or transformations |
| Multicollinearity (VIF > 10) | Remove or combine correlated predictors; ridge regression |
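To make the HC3 correction concrete, here is a minimal sketch of the sandwich estimator behind heteroscedasticity-consistent standard errors: (X'X)⁻¹ X' diag(eᵢ² / (1 - hᵢᵢ)²) X (X'X)⁻¹, where eᵢ are the OLS residuals and hᵢᵢ the leverage values. The function name is illustrative; in practice most analysts use a library option (e.g., a robust covariance setting in their stats package) rather than hand-rolling this:

```python
import numpy as np

def hc3_se(X, y):
    """HC3 heteroscedasticity-consistent standard errors for OLS.
    X: (n, k) predictors without intercept.
    Returns (coefficients, robust SEs) for [intercept, b1, ..., bk]."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(y)
    Xd = np.column_stack([np.ones(n), X])
    XtX_inv = np.linalg.inv(Xd.T @ Xd)
    coef = XtX_inv @ Xd.T @ y
    e = y - Xd @ coef                          # residuals
    h = np.sum((Xd @ XtX_inv) * Xd, axis=1)    # leverage values h_ii
    w = e ** 2 / (1 - h) ** 2                  # HC3 weights
    meat = Xd.T @ (Xd * w[:, None])
    cov = XtX_inv @ meat @ XtX_inv             # sandwich covariance
    return coef, np.sqrt(np.diag(cov))
```

The point estimates are unchanged; only the standard errors (and hence t-statistics, p values, and CIs) are adjusted.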

Model Comparison in APA Format

When comparing nested models (e.g., adding predictors in blocks), report the F-change test:

Model 1 included demographic controls and explained 15% of variance (R² = .15). Model 2 added the intervention variable, increasing explained variance by 12% (Delta R² = .12), F-change(1, 96) = 18.34, p < .001. The intervention variable was a significant predictor above and beyond demographic controls, B = 4.52, SE = 1.06, beta = .35, t(96) = 4.28, p < .001, 95% CI [2.43, 6.61].

When comparing non-nested models, report AIC or BIC values:

Model A (AIC = 456.2) provided a better fit than Model B (AIC = 472.8), with a difference of 16.6 favoring Model A.

Confidence Intervals for Regression Coefficients

APA 7th edition emphasizes confidence intervals because they convey both the point estimate and its precision.

  • CI does not contain zero: The coefficient is statistically significant.
  • CI contains zero: The coefficient is not significant.
  • Narrow CI: Precise estimate (large sample, low variability).
  • Wide CI: Imprecise estimate (needs larger sample or better measurement).

Study hours significantly predicted exam scores, B = 2.34, 95% CI [0.89, 3.79], t(118) = 3.21, p = .002. The confidence interval indicates that the true population slope lies between 0.89 and 3.79 points per additional hour.
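The interval itself is computed as B ± t_crit × SE, with the critical t value taken from the t distribution on the residual degrees of freedom. A small helper using scipy (function name illustrative):

```python
from scipy import stats

def coefficient_ci(b, se, df, level=0.95):
    """Confidence interval for a regression coefficient:
    B +/- t_crit * SE, with t_crit from the t distribution
    on the residual degrees of freedom."""
    t_crit = stats.t.ppf(1 - (1 - level) / 2, df)
    return b - t_crit * se, b + t_crit * se
```

With B = 2.34, SE = 2.34 / 3.21, and df = 118, this reproduces the [0.89, 3.79] interval in the example.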

Regression Table Formatting

When to Use a Table

Use inline text for simple regression with one predictor. Use a table for two or more predictors. Tables become essential with three or more predictors.

APA Table Format

| Variable | B | SE | beta | t | p | 95% CI |
|----------|-------|------|------|------|--------|---------------|
| (Intercept) | 12.45 | 5.32 | -- | 2.34 | .021 | [1.94, 22.96] |
| Study hours | 1.92 | 0.31 | .38 | 6.19 | < .001 | [1.31, 2.53] |
| Attendance rate | 0.28 | 0.08 | .22 | 3.50 | < .001 | [0.12, 0.44] |
| Prior GPA | 8.74 | 1.85 | .29 | 4.72 | < .001 | [5.08, 12.40] |

Note. R² = .52, adjusted R² = .51, F(3, 146) = 52.78, p < .001.

Table Formatting Guidelines

  • Horizontal rules above and below the header row and at the bottom only (no vertical lines)
  • Italicize all statistical symbols in column headers
  • Place the intercept first, then predictors in order of theoretical importance
  • Report B and SE to two decimal places; beta to two decimal places without a leading zero
  • Include a table number and descriptive title

Non-Significant Regression Results

Report the same components regardless of significance:

A simple linear regression indicated that sleep duration did not significantly predict exam scores, R² = .02, F(1, 118) = 2.41, p = .123. Sleep duration was not a significant predictor, B = 0.95, SE = 0.61, beta = .14, t(118) = 1.55, p = .123, 95% CI [-0.26, 2.16].

For multiple regression with mixed significance:

The overall model was statistically significant, R² = .38, F(3, 96) = 19.62, p < .001. Study hours (B = 2.14, p < .001) and attendance (B = 0.34, p = .008) were significant predictors, while sleep duration was not (B = 0.42, p = .318).

Common Mistakes to Avoid

1. Omitting Adjusted R² in Multiple Regression

Adjusted R² is essential for evaluating model fit with multiple predictors. It penalizes for unnecessary variables and provides a more honest estimate of explained variance.

2. Confusing B and Beta

B (unstandardized) preserves original units for practical interpretation. beta (standardized) enables cross-predictor comparison. Always label which you are reporting and include both when possible.

3. Reporting p = .000

Never write p = .000. Always report as p < .001. The probability is never exactly zero.

4. Missing Confidence Intervals

APA 7th edition strongly recommends 95% CIs for all regression coefficients. CIs convey precision that p-values alone cannot.

5. Forgetting the Intercept

The intercept should appear in your coefficients table, even if it is not the focus of interpretation.

6. Ignoring Assumption Checks

Always mention that you verified key assumptions. At minimum, report VIF values for multicollinearity and note that residual plots were examined.

7. Interpreting B Without Holding Other Predictors Constant

In multiple regression, each B represents the unique effect of that predictor while controlling for all others. Do not interpret it as a simple bivariate relationship.

8. Using Stepwise Regression Without Justification

Automated stepwise methods capitalize on chance and produce unstable models. If you use stepwise selection, justify it and report cross-validation results.

APA Regression Checklist

Before submitting your manuscript, verify:

  • Overall model R² (and adjusted R² for multiple regression)
  • F-statistic with correct degrees of freedom
  • Exact p value for the overall model
  • Coefficients table with B, SE, beta, t, p, and 95% CI for each predictor
  • Intercept row in the coefficients table
  • Clear labels distinguishing unstandardized (B) and standardized (beta) coefficients
  • All statistical symbols italicized
  • Assumption checks mentioned (at minimum VIF and residual plots)
  • If hierarchical: Delta R² and F-change for each step

Frequently Asked Questions

What is the difference between R-squared and adjusted R-squared?

R² shows the proportion of variance explained by all predictors. Adjusted R² corrects for the number of predictors, penalizing unnecessary variables. Use adjusted R² when comparing models with different numbers of predictors. In simple regression, the two values are nearly identical.

How do I interpret a negative regression coefficient?

A negative B means that as the predictor increases by one unit, the outcome decreases by B units, holding other predictors constant. The sign indicates direction, not strength. For example, B = -1.47 for exercise hours predicting stress means each additional hour of exercise is associated with a 1.47-point decrease in stress.

Should I report standardized or unstandardized coefficients?

APA recommends both. Unstandardized coefficients (B) preserve measurement units for practical interpretation. Standardized coefficients (beta) allow comparison of relative predictor importance across different scales.

What does a VIF greater than 10 mean?

A VIF above 10 indicates severe multicollinearity. Two or more predictors are highly correlated, inflating standard errors and making individual coefficient tests unreliable. Consider removing or combining correlated predictors, or using ridge regression.

Can I use regression with categorical predictors?

Yes, by creating dummy variables (0/1 coding). A variable with k categories requires k - 1 dummy variables. The reference category is represented when all dummies equal zero.
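A minimal sketch of k - 1 dummy coding in plain Python (function name illustrative; dataframe libraries offer built-in equivalents):

```python
def dummy_code(values, reference):
    """Encode a categorical variable as k - 1 dummy (0/1) columns.
    The reference category is represented by a row of all zeros."""
    # preserve first-appearance order, drop the reference level
    levels = [v for v in dict.fromkeys(values) if v != reference]
    rows = [[1 if v == level else 0 for level in levels] for v in values]
    return rows, levels
```

Each dummy's coefficient is then interpreted as the difference between that category and the reference category, holding other predictors constant.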

How do I report regression results when the model is not significant?

Report the same statistics: F, degrees of freedom, p, and R². Example: "The model was not statistically significant, F(2, 97) = 1.45, p = .240, R² = .03."

What is the difference between simple and multiple regression?

Simple regression uses one predictor. Multiple regression uses two or more predictors simultaneously, accounting for shared and unique variance. Multiple regression requires adjusted R² and typically a coefficients table.

Should I report confidence intervals for regression coefficients?

Yes. APA 7th edition recommends 95% CIs for all regression coefficients. A CI that excludes zero confirms statistical significance. CIs also help readers assess practical significance by showing the range of plausible effect sizes.

Using StatMate for APA-Formatted Regression Results

Formatting regression output correctly is tedious and error-prone, especially with multiple predictors. StatMate's simple regression and multiple regression calculators automate the process.

Enter your data and StatMate computes R², adjusted R², the F-statistic, individual coefficients with standard errors, standardized betas, t-statistics, p-values, and confidence intervals, all formatted in APA 7th edition style. The results are ready to copy directly into your manuscript.

By letting StatMate handle the calculations and formatting, you eliminate common errors like swapped coefficients, missing confidence intervals, or incorrect degrees of freedom.

Open the Regression Calculator | Open the Multiple Regression Calculator
