Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

Published on Apr 17, 2018 in Frontiers in Psychology (Impact Factor: 2.129)
DOI: 10.3389/fpsyg.2018.00294
Andreas M. Brandmaier (MPG: Max Planck Society), Estimated H-index: 14
Timo von Oertzen (MPG: Max Planck Society), Estimated H-index: 19
+ 2 authors, including Christopher Hertzog (MPG: Max Planck Society), Estimated H-index: 61
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance – effective curve reliability (ECR) – by scaling slope variance against effective error, which is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study’s sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs.
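The relationship between effective error and ECR described in the abstract can be illustrated with a minimal sketch. The snippet below assumes the simplified case of zero intercept variance and zero intercept-slope covariance, in which the effective error of the slope reduces to the classic expression of residual variance divided by the sum of squared deviations of the measurement occasions from their mean; the paper's general ECR additionally accounts for arbitrary intercept variance and intercept-slope covariance, which this sketch omits. The function names are illustrative, not from the paper.

```python
def effective_error_slope(residual_var, occasions):
    """Effective error of the latent slope under the simplifying
    assumptions above: residual (measurement error) variance divided by
    the sum of squared deviations of the occasions from their mean.
    A larger temporal spread of occasions lowers effective error."""
    t_bar = sum(occasions) / len(occasions)
    sst = sum((t - t_bar) ** 2 for t in occasions)
    return residual_var / sst

def effective_curve_reliability(slope_var, residual_var, occasions):
    """ECR-style index: true slope variance scaled against slope
    variance plus effective error, interpretable like a reliability
    coefficient bounded between 0 and 1."""
    eff_err = effective_error_slope(residual_var, occasions)
    return slope_var / (slope_var + eff_err)

# Example design: 5 equidistant annual occasions, residual variance 50,
# true slope variance 5.
occasions = [0, 1, 2, 3, 4]
print(effective_error_slope(50, occasions))            # 50 / 10 = 5.0
print(effective_curve_reliability(5, 50, occasions))   # 5 / (5 + 5) = 0.5
```

Under these assumptions, doubling the study's temporal spread (e.g., occasions at 0, 2, 4, 6, 8) quadruples the sum of squares, cutting effective error to a quarter and raising the reliability index accordingly.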
  • References (42)
  • Citations (5)
Cited By (5)
#1 Michaela C. DeBolt (UC Davis: University of California, Davis), H-Index: 1
#2 Mijke Rhe (UC Davis: University of California, Davis), H-Index: 21
Last: Lisa M. Oakes (UC Davis: University of California, Davis), H-Index: 30
view all 3 authors...
#1 Øystein Sørensen (University of Oslo), H-Index: 5
Last: Anders M. Fjell, H-Index: 59
view all 10 authors...
Analyzing data from multiple neuroimaging studies has great potential for increasing statistical power, enabling the detection of effects of smaller magnitude than would be possible when analyzing each study separately, and allowing systematic investigation of between-study differences. Restrictions due to privacy or proprietary data, as well as more practical concerns, can make it hard to share neuroimaging datasets, such that analyzing all data in a common location might be impractical...
#1 Andreas M. Brandmaier (MPG: Max Planck Society), H-Index: 14
#2 Paolo Ghisletta, H-Index: 28
Last: Timo von Oertzen (MPG: Max Planck Society), H-Index: 19
view all 3 authors...
Longitudinal data collection is a time-consuming and cost-intensive part of developmental research. Wu et al. (2016) discussed planned missing (PM) designs that are similar in efficiency to complete designs but require fewer observations per person. The authors reported optimal PM designs for linear latent growth curve models based on extensive Monte Carlo simulations. They called for further formal investigation of how much the proposed PM mechanisms influence study design ef...
#1 Julian David Karch (LEI: Leiden University), H-Index: 3
#2 Elisa Filevich (Humboldt University of Berlin), H-Index: 10
Last: Simone Kühn (MPG: Max Planck Society), H-Index: 42
view all 10 authors...
Adequate reliability of measurement is a precondition for investigating individual differences and age-related changes in brain structure. One approach to improving reliability is to identify and control for variables that are predictive of within-person variance. To this end, we applied both classical statistical methods and machine-learning-inspired approaches to structural magnetic resonance imaging (sMRI) data of six participants aged 24–31 years gathered at 40–50 occasions distribute...
#1 Jacquelyn K. Mallette (ECU: East Carolina University), H-Index: 2
#2 Ted G. Futris (UGA: University of Georgia), H-Index: 10
Last: Geoffrey L. Brown (UGA: University of Georgia), H-Index: 15
view all 4 authors...
#1 Elliot M. Tucker-Drob (University of Texas at Austin), H-Index: 31
#2 Andreas M. Brandmaier (MPG: Max Planck Society), H-Index: 14
Last: Ulman Lindenberger (MPG: Max Planck Society), H-Index: 81
view all 3 authors...
6 Citations
#1 Andreas M. Brandmaier (MPG: Max Planck Society), H-Index: 14
#2 Elisabeth Wenger (MPG: Max Planck Society), H-Index: 13
Last: Ulman Lindenberger (MPG: Max Planck Society), H-Index: 81
view all 6 authors...
3 Citations
#1 Gizem Hülür (UZH: University of Zurich), H-Index: 9
#2 Sherry L. Willis (UW: University of Washington), H-Index: 47
Last: Denis Gerstorf (Humboldt University of Berlin), H-Index: 32
view all 5 authors...
A growing body of research has examined whether people's judgments of their own memory functioning accurately reflect their memory performance cross-sectionally and over time. Relatively less is known about whether these judgments are specifically based on memory performance or instead reflect general cognitive change. The aim of the present study was to examine longitudinal associations of subjective memory with performance on tests of episodic memory and a wide range of other cognitive tests, includin...
1 Citation