I am a Ph.D. candidate in Quantitative Methods, Measurement, and Statistics at the University of California, Merced. I firmly believe that scientific discovery is a dynamic process driven by the continuous updating of scientific knowledge through experiential learning. This perspective aligns closely with the philosophical paradigm of Bayesian inference. My program of research centers on the development, evaluation, and application of Bayesian methodology for theory construction and statistical modeling in the psychological sciences. My expertise lies in Bayesian inference, structural equation modeling, mixture modeling, and longitudinal data analysis. My projects address methodological issues surrounding model estimation, model evaluation, hypothesis testing, and missing data. I have authored or co-authored articles in methodological journals, including Structural Equation Modeling: A Multidisciplinary Journal, Applied Psychological Measurement, and Psychometrika. In addition, I am dedicated to disseminating quantitative methods to applied researchers and have published tutorials on statistical software such as R, JASP, and jamovi.
May 2026 (Expected): Ph.D. Candidate in Quantitative Methods, Measurement, and Statistics. Advisors: Dr. Sarah Depaoli & Dr. Fan Jia
July 2021: M.Sc. in Methodology and Statistics (Cum Laude). Thesis: All models are uncertain, but averaging is useful: Bayesian multi-model inference in structural equation models with bridge sampling. Supervisor: Prof. Dr. Eric-Jan Wagenmakers (University of Amsterdam)
August 2019: B.A. in Psychology (Highest Honors)
In this contributed discussion, we point out avenues for expanding on the ideas of Frühwirth-Schnatter et al. (2024). The methods proposed by the authors implicitly assume that the population of interest is homogeneous; that is, the estimated factor structure is assumed to apply uniformly across the entire population. This assumption often needs to be relaxed because populations are inherently heterogeneous. One approach to accounting for population heterogeneity, which captures differences between qualitatively distinct subpopulations (termed latent classes), is the mixture modeling framework. With this in mind, we suggest that incorporating the mixture modeling framework to induce sparsity across different latent classes is a promising direction.
Bayesian piecewise growth mixture models (PGMMs) are a powerful statistical tool, grounded in the Bayesian framework, for modeling nonlinear, phasic developmental trajectories of heterogeneous subpopulations over time. Although Bayesian PGMMs can benefit school psychology research, their empirical applications within the field remain limited. This article introduces Bayesian PGMMs, addresses three key methodological considerations (i.e., class separation, class enumeration, and prior sensitivity), and provides practical guidance for their implementation. By analyzing a dataset from the Early Childhood Longitudinal Study-Kindergarten Cohort, we illustrate the application of Bayesian PGMMs to model piecewise growth trajectories of mathematics achievement across latent classes. We underscore the importance of considering both statistical criteria and substantive theories when making analytic decisions. Additionally, we discuss the importance of transparent reporting of results and provide caveats for researchers in the field to promote wider adoption of Bayesian PGMMs.
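The class-specific trajectories in a piecewise growth mixture model can be sketched with a simple linear-linear spline: growth follows one slope up to a knot and a different slope afterward. The following Python sketch (all parameter values and the two-class setup are hypothetical, chosen only to illustrate the functional form) computes such mean trajectories:

```python
import numpy as np

def piecewise_mean(t, b0, b1, b2, knot):
    """Mean trajectory of a linear-linear piecewise growth model.

    b0: intercept, b1: slope before the knot,
    b2: change in slope after the knot (all values hypothetical).
    """
    return b0 + b1 * t + b2 * np.maximum(0.0, t - knot)

# Illustrative time axis (e.g., semesters of measurement)
t = np.arange(0, 7)

# Two hypothetical latent classes with distinct growth phases
fast = piecewise_mean(t, b0=20.0, b1=8.0, b2=-3.0, knot=3.0)
slow = piecewise_mean(t, b0=15.0, b1=4.0, b2=-1.0, knot=3.0)
print(fast)  # slope flattens from 8 to 5 after the knot
```

In a full PGMM, each latent class has its own growth parameters (and possibly its own knot), and class membership is estimated jointly with the trajectories.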
The Bayesian piecewise growth model (PGM) is a useful class of models for analyzing nonlinear change processes that consist of distinct growth phases. In applications of Bayesian PGMs, it is important to accurately capture growth trajectories and carefully consider knot placements. The presence of missing data is another challenge researchers commonly encounter. To address these issues, researchers can use model fit and selection indices to detect misspecified Bayesian PGMs, while attending to the potential impact of missing data on model evaluation. Here we conducted a simulation study to examine the impact of model misspecification and missing data on the performance of Bayesian model fit and selection indices (PPP-value, BCFI, BTLI, BRMSEA, BIC, and DIC), with an additional focus on prior sensitivity. Results indicated that (a) increasing the degree of model misspecification and the amount of missing data degraded the indices' ability to detect misfit, and (b) different prior specifications had a negligible impact on model assessment. We provide practical guidelines to help researchers implement Bayesian PGMs effectively.
Bayesian estimation has become increasingly popular for piecewise growth models because it can aid in accurately modeling nonlinear change over time. Recently, new Bayesian approximate fit indices (BRMSEA, BCFI, and BTLI) have been introduced as tools for detecting model (mis)fit. We compare these indices to the posterior predictive p-value (PPP), and also examine the Bayesian information criterion (BIC) and the deviance information criterion (DIC), to identify optimal methods for detecting model misspecification in piecewise growth models. Findings indicated that the Bayesian approximate fit indices are not as reliable as the PPP for detecting misspecification; however, they appear to be viable model selection tools rather than measures of fit. We conclude with recommendations on when researchers should use each index in practice.
The importance of longitudinal research to the social and behavioral sciences is difficult to overstate. A large body of research implements longitudinal designs and analyzes dynamic, repeated measures data to answer a broad range of research questions surrounding growth or change over time. A key decision in longitudinal research is the choice of an appropriate statistical model. Christian Geiser's new book—Longitudinal Structural Equation Modeling with Mplus: A Latent State-Trait Perspective—focuses on the implementation of longitudinal statistical models with latent variables and their application in Mplus. The book specifically introduces and discusses longitudinal structural equation models in the context of latent state-trait (LST) theory. By couching LST theory in a longitudinal structural equation modeling framework, the book describes how researchers can model changes in individual states and traits under situational influences. This feature makes the book a distinctive resource among longitudinal modeling texts.
Diagnostic classification models (DCMs) have been used to classify examinees into groups based on whether they possess each of a set of latent traits. In addition to traditional item-based scoring approaches, examinees may be scored on their completion of a series of small, similar tasks; such scores are usually treated as count variables. To model count scores, this study proposes a new class of DCMs that uses the negative binomial distribution at its core. We explain the proposed model framework and demonstrate its use through an operational example. Simulation studies were conducted to evaluate the performance of the proposed model and compare it with the Poisson-based DCM.
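The core idea of count-based classification can be sketched quickly: each latent class implies a different count distribution, and Bayes' rule turns an observed count into posterior class probabilities. The Python sketch below (a simplified two-class illustration with hypothetical means, mixing proportions, and a common dispersion, not the full DCM of the paper) uses a mean-dispersion parameterization of the negative binomial:

```python
from scipy.stats import nbinom

def nb_pmf(k, mu, r):
    """Negative binomial pmf parameterized by mean mu and dispersion r."""
    p = r / (r + mu)
    return nbinom.pmf(k, r, p)

# Hypothetical two-class setup: non-masters complete fewer tasks
pi = [0.5, 0.5]    # prior class proportions (assumed)
mu = [2.0, 10.0]   # class-specific expected counts (assumed)
r = 5.0            # common dispersion (assumed)

def class_posterior(k):
    """Posterior probability of each latent class given count k."""
    joint = [pi_c * nb_pmf(k, mu_c, r) for pi_c, mu_c in zip(pi, mu)]
    total = sum(joint)
    return [j / total for j in joint]

post = class_posterior(8)
print([round(p, 3) for p in post])  # a count of 8 favors the high-mean class
```

The negative binomial's extra dispersion parameter is what distinguishes this approach from a Poisson-based DCM, which forces the variance to equal the mean.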
How can we model the form of change in an outcome as time passes? This question matters to researchers who examine developmental, longitudinal, or consecutive measurements across multiple occasions. Latent growth curve modeling (LGCM) is a statistical technique designed to answer it. This tutorial guides readers from the concept of LGCM to the interpretation of results, with intuitive examples.
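A linear LGCM can be sketched through its mixed-effects equivalent: each person gets their own intercept and slope, drawn around population means. The tutorial itself works in SEM software, but the sketch below (simulated data with hypothetical parameter values, fit in Python via statsmodels for illustration) shows the idea:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate linear growth data: 100 subjects, 5 occasions (values hypothetical)
n_subj, n_time = 100, 5
time = np.tile(np.arange(n_time), n_subj)
subj = np.repeat(np.arange(n_subj), n_time)
u0 = rng.normal(0, 1.0, n_subj)   # person-specific intercept deviations
u1 = rng.normal(0, 0.3, n_subj)   # person-specific slope deviations
y = (10 + u0[subj]) + (2 + u1[subj]) * time + rng.normal(0, 0.5, time.size)
df = pd.DataFrame({"y": y, "time": time, "subj": subj})

# A linear LGCM corresponds to a mixed model with random intercepts
# and random slopes over time
model = smf.mixedlm("y ~ time", df, groups=df["subj"], re_formula="~time")
result = model.fit()
print(result.params["Intercept"], result.params["time"])  # near 10 and 2
```

The fixed effects recover the average starting point and rate of change, while the random-effect variances capture individual differences in growth, which is exactly the substantive payoff of LGCM.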
Researchers often have questions about the inter-relationships between observed variables (indicators) and latent variables (factors). The Multiple Indicators and Multiple Causes (MIMIC) model is well suited to such questions. This tutorial introduces the idea behind MIMIC models and, with a simple example, explains how to fit and interpret them.
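The structure of a MIMIC model can be made concrete by generating data from it: observed causes feed into a latent factor, and the factor in turn drives several indicators. The Python sketch below (all coefficients and loadings are hypothetical; the tutorial itself fits the model in dedicated software) simulates that structure:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Observed causes of the latent factor (coefficients hypothetical)
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
eta = 0.6 * x1 + 0.4 * x2 + rng.normal(0, 1.0, n)   # latent factor

# Multiple indicators load on the single latent factor
loadings = [1.0, 0.8, 0.7]
ys = [lam * eta + rng.normal(0, 0.5, n) for lam in loadings]

# Indicators correlate because they share the latent factor
print(round(np.corrcoef(ys[0], ys[1])[0, 1], 2))
```

Fitting a MIMIC model reverses this process: from the observed causes and indicators, it estimates the regression paths into the factor and the loadings out of it.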
This tutorial illustrates how to follow the When-to-Worry-and-How-to-Avoid-the-Misuse-of-Bayesian-Statistics (WAMBS) Checklist in JASP using JAGS. Among many analytic techniques, we focus on regression analysis and explain the 10 checklist points for a thorough application of Bayesian analysis. After the tutorial, we expect readers to be able to use the WAMBS Checklist to sensibly apply Bayesian statistics to substantive research questions.
This tutorial illustrates how to interpret more advanced output and set different prior specifications when performing Bayesian regression analyses in JASP. We explain the various options in the control panel and introduce concepts such as Bayesian model averaging, posterior model probability, prior model probability, inclusion Bayes factor, and posterior exclusion probability. After the tutorial, we expect readers to understand Bayesian regression in depth and to be able to apply it to substantive research questions.
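The model-averaging quantities named above follow from a small amount of arithmetic once each candidate model's marginal likelihood is in hand. The Python sketch below (the marginal likelihood values are hypothetical, chosen only to make the computation concrete) computes posterior model probabilities and an inclusion Bayes factor for one predictor:

```python
import numpy as np

# Four hypothetical regression models over predictors x1 and x2
models = ["null", "x1", "x2", "x1+x2"]
log_ml = np.array([-120.0, -112.0, -118.0, -111.0])  # hypothetical log marginal likelihoods
prior = np.full(4, 0.25)  # equal prior model probabilities

# Posterior model probabilities (normalize prior-weighted marginal likelihoods)
w = np.exp(log_ml - log_ml.max()) * prior
post = w / w.sum()

# Inclusion probability for x1: sum over models that contain x1
incl_x1 = post[1] + post[3]

# Inclusion Bayes factor: posterior inclusion odds / prior inclusion odds
incl_bf = (incl_x1 / (1 - incl_x1)) / (0.5 / 0.5)
print(dict(zip(models, post.round(3))), round(incl_bf, 2))
```

The posterior exclusion probability is simply 1 minus the inclusion probability, and model-averaged coefficients weight each model's estimate by its posterior model probability.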
This tutorial illustrates how to perform Bayesian analyses in JASP with informative priors using JAGS. Among many analytic options, we focus on regression analysis and explain the effects of different prior specifications on regression coefficients. We also present a Shiny app designed to help users define prior distributions, using the example in this tutorial. After the tutorial, we expect readers to understand how to incorporate prior knowledge when conducting Bayesian regression analysis to answer substantive research questions.
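The effect of a prior on a regression coefficient can be sketched with a conjugate normal update, a common approximation for illustrating prior sensitivity: the posterior mean is a precision-weighted average of the prior mean and the data estimate. All numbers in the Python sketch below are hypothetical:

```python
import numpy as np

def posterior_coef(prior_mean, prior_sd, est, se):
    """Conjugate normal update for a regression coefficient.

    Combines a Normal(prior_mean, prior_sd) prior with a likelihood
    summarized by an estimate `est` and standard error `se`.
    """
    w_prior, w_data = 1 / prior_sd**2, 1 / se**2
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, np.sqrt(post_var)

# Hypothetical data estimate: beta_hat = 0.50 with SE = 0.10
# A diffuse prior barely moves the estimate...
print(posterior_coef(0.0, 10.0, 0.50, 0.10))
# ...while an informative prior centered at 0 shrinks it halfway toward 0
print(posterior_coef(0.0, 0.10, 0.50, 0.10))
```

Comparing the two calls makes the prior-sensitivity message of the tutorial concrete: the more precise the prior relative to the data, the more it pulls the posterior toward the prior mean.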
This tutorial illustrates how to perform Bayesian analyses in JASP with default priors for beginners. We cover basic procedures for Bayesian statistics and explain how to interpret the core results. For each analysis, a brief comparison between Bayesian and frequentist statistics is presented. After the tutorial, we expect readers to be able to perform correlation analysis, multiple linear regression, t-tests, and one-way analysis of variance, all from a Bayesian perspective, and to understand the logic of Bayesian statistics.
This tutorial introduces the fundamentals of JASP for beginners. We guide you from installation to the interpretation of results, through data loading and data management. After the tutorial, we expect readers to be able to perform correlation, multiple linear regression, t-tests, and one-way analysis of variance, and to draw conclusions from JASP output.
This tutorial explains how to interpret more advanced output and set different prior specifications when conducting Bayesian regression analyses in jamovi. We guide you through the various options in the options panel and introduce concepts including Bayesian model averaging, prior model probability, posterior model probability, inclusion Bayes factor, and posterior exclusion probability. After the tutorial, we expect readers to understand Bayesian regression in depth and to be able to apply it to substantive research questions.
This tutorial explains how to conduct Bayesian analyses in jamovi with default priors for beginners. With step-by-step illustrations, we perform and interpret the core results of correlation analysis, multiple linear regression, t-tests, and one-way analysis of variance, all from a Bayesian perspective. To enhance understanding, a brief comparison of the Bayesian and frequentist approaches is provided for each analysis. After the tutorial, we expect readers to be able to perform basic Bayesian analyses and to distinguish the Bayesian approach from the frequentist one.
This tutorial introduces the basics of jamovi for beginners. Starting from jamovi installation, we explain the structure of the jamovi interface, how to load a dataset, and how to explore and visualize data. Readers will also learn how to perform statistical analyses such as correlation analysis, multiple linear regression, t-tests, and one-way analysis of variance, all from a frequentist viewpoint. Given jamovi's integration with R, one section is designed to help readers make the best use of both tools.
This tutorial covers the basics of R for beginners. Our detailed instructions start with the foundations, including installing R and RStudio, the structure of the RStudio interface, and loading data. Next, we introduce basic functions for data exploration and visualization. We also illustrate how to perform statistical analyses such as correlation analysis, multiple linear regression, t-tests, and one-way analysis of variance (ANOVA), with easy and intuitive explanations.
This GitHub repository contains the code behind the online statistics tutorials at https://www.rensvandeschoot.com/tutorials/.
This Zenodo community contains citable online statistics tutorials located at https://www.rensvandeschoot.com/tutorials/.