Overview

Multivariate analysis of variance (MANOVA) and repeated measures analysis of variance (rm-ANOVA) are closely related generalizations of analysis of variance (ANOVA) for dealing with multiple regions of interest [4]. They represent general linear models with the regional values as multiple dependent variables, which are considered jointly within the same model.

MANOVA and rm-ANOVA minimize the residual variance that cannot be explained by the factors included in the model. They provide an analytical separation of the components of the total variance while also accounting for the covariance among the within-subject variables, which requires a complete and balanced data structure with groups of equal size. Results are presented as ANOVA tables with the appropriate F-tests for the significance of the main effects of within-subject and between-subject factors (including covariates), as well as of their interactions (see example below).

MANOVA assumes that the vector of within-subject dependent variables has a multivariate normal distribution within each group. rm-ANOVA assumes homogeneous variance across all variables and groups, and additionally that the variances of the differences between all pairs of within-subject conditions are equal (sphericity). It is typically more sensitive than MANOVA in detecting within-subject differences, and in case of sphericity violation the p-values can be adjusted with the Greenhouse-Geisser or Huynh-Feldt procedures. As in univariate ANOVA, post-hoc comparisons with appropriate corrections for multiple comparisons are available to examine individual differences between the groups or regions that showed a significant main effect or interaction.

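As an illustration of how such a model can be set up in R outside of PMOD, the following minimal sketch fits a multivariate linear model to a hypothetical wide-format data frame regional with one column per brain region and a between-subject factor Group; the data frame, region names and group labels are assumptions made only for this example and do not correspond to a PMOD script. The Anova function of the car package then reports the MANOVA test statistics together with the univariate rm-ANOVA table.

library(car)

## Hypothetical wide-format data: one row per subject, one column per region,
## plus a between-subject factor Group (values simulated for illustration only)
set.seed(1)
regional <- data.frame(
  Group     = factor(rep(c("Control", "Patient"), each = 10)),
  Frontal   = rnorm(20, mean = 1.2, sd = 0.1),
  Temporal  = rnorm(20, mean = 1.0, sd = 0.1),
  Occipital = rnorm(20, mean = 0.9, sd = 0.1)
)

## Multivariate linear model: the regional values form the dependent vector
fit <- lm(cbind(Frontal, Temporal, Occipital) ~ Group, data = regional)

## Within-subject design: a single factor Region with one level per region
idata <- data.frame(Region = factor(c("Frontal", "Temporal", "Occipital")))

## MANOVA statistics plus the rm-ANOVA table with sphericity corrections
res <- Anova(fit, idata = idata, idesign = ~Region)
summary(res, multivariate = TRUE, univariate = TRUE)

The univariate part of the output contains the Mauchly test for sphericity and the Greenhouse-Geisser and Huynh-Feldt corrected p-values, while the multivariate part corresponds to the MANOVA tests (Pillai, Wilks, Hotelling-Lawley, Roy).
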
The corrections for multiple comparisons available in (M)ANOVA are listed below (both procedures are illustrated by the R sketch after the list):

1.The Holm correction, described by Holm [2], is a modified version of the Bonferroni correction: the smallest p-value is tested against the strictest threshold, and the requirement is then relaxed step-wise for each subsequent test according to the number of remaining tests. Like the Bonferroni correction, it does not take possible correlations among the regional values into account. The Holm correction controls the family-wise error rate (FWER) without assuming independence of the tests. Compared with the plain Bonferroni correction the tests are more powerful (smaller adjusted p-values) while FWER control is maintained; compared with FDR control it remains more conservative. The FWER is the probability of obtaining at least one false positive test, and is kept below 5%.

2.Alternatively, the false discovery rate (FDR) can be controlled with the procedure described by Benjamini and Hochberg [3]. The FDR correction controls the expected proportion of false positives among the tests declared significant rather than the FWER, and is therefore less conservative than the Holm correction.
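
As a minimal illustration of the two procedures (not PMOD's own implementation), both corrections are available in base R through p.adjust; the vector of raw p-values below is purely hypothetical.

## Hypothetical raw p-values, e.g. from post-hoc pairwise comparisons
p.raw <- c(0.001, 0.012, 0.030, 0.045, 0.200)

## Holm step-down correction: controls the family-wise error rate (FWER)
p.adjust(p.raw, method = "holm")

## Benjamini-Hochberg procedure: controls the false discovery rate (FDR)
p.adjust(p.raw, method = "BH")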