
Hi, I'm Ihnwhi

Ihnwhi Heo

Ph.D. Candidate at University of California, Merced

I am a Ph.D. candidate in Quantitative Methods, Measurement, and Statistics at the University of California, Merced. My interests touch on Bayesian inference, structural equation modeling, longitudinal modeling, and missing data analysis. I am also interested in such topics as hypothesis testing and metascience. Under the mentorship of Dr. Sarah Depaoli and Dr. Fan Jia, my goal is to develop and evaluate Bayesian methods for researchers.

Education

Ph.D. Student in Quantitative Methods, Measurement, and Statistics
M.Sc. in Methodology and Statistics (Cum Laude)
B.A. in Psychology (Highest Honors)

Articles

Performance of model fit and selection indices for Bayesian piecewise growth modeling with missing data

The Bayesian piecewise growth model (PGM) is a useful class of models for analyzing nonlinear change processes that consist of distinct growth phases. In applications of Bayesian PGMs, it is important to accurately capture growth trajectories and carefully consider knot placements. The presence of missing data is another challenge researchers commonly encounter. To address these issues, one can use model fit and selection indices to detect misspecified Bayesian PGMs while giving care to the potential impact of missing data on model evaluation. Here we conducted a simulation study to examine the impact of model misspecification and missing data on the performance of Bayesian model fit and selection indices (PPP-value, BCFI, BTLI, BRMSEA, BIC, and DIC), with an additional focus on prior sensitivity. Results indicated that (a) increasing the degree of model misspecification and amount of missing data worsened the performance of the indices in detecting misfit, and (b) different prior specifications had negligible impact on model assessment. We provide practical guidelines for researchers to facilitate effective implementation of Bayesian PGMs.
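A two-phase piecewise trajectory with a knot — the structure these models capture — can be sketched with a small simulation (all parameter values below are hypothetical, not from the study):

```python
import numpy as np

def piecewise_trajectory(t, intercept, slope1, slope2, knot):
    """Two-phase linear growth: slope1 before the knot, slope2 after it."""
    t = np.asarray(t, dtype=float)
    return (intercept
            + slope1 * np.minimum(t, knot)          # growth up to the knot
            + slope2 * np.maximum(t - knot, 0.0))   # growth after the knot

# Hypothetical example: growth slows after occasion 3
times = np.arange(7)
y = piecewise_trajectory(times, intercept=10.0, slope1=2.0, slope2=0.5, knot=3)
print(y)
```

Misplacing the knot (e.g., fixing it at occasion 2 when the true change point is occasion 3) is one form of the misspecification that fit and selection indices are meant to detect.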

Detecting model misspecification in Bayesian piecewise growth models

Bayesian estimation has become increasingly popular with piecewise growth models because it can aid in accurately modeling nonlinear change over time. Recently, new Bayesian approximate fit indices (BRMSEA, BCFI, and BTLI) have been introduced as tools for detecting model (mis)fit. We compare these indices to the posterior predictive p-value (PPP), and also examine the Bayesian information criterion (BIC) and the deviance information criterion (DIC), to identify optimal methods for detecting model misspecification in piecewise growth models. Findings indicated that the Bayesian approximate fit indices are not as reliable as the PPP for detecting misspecification. However, these indices appear to be viable model selection tools rather than measures of fit. We conclude with recommendations regarding when researchers should use each index in practice.

Review of Longitudinal structural equation modeling with Mplus: A latent state-trait perspective

The importance of longitudinal research to the social and behavioral sciences is difficult to overstate. A large body of research implements longitudinal designs and analyzes dynamic, repeated measures data to answer a broad range of research questions surrounding growth or change over time. A key decision involved in longitudinal research is the choice of appropriate statistical models. Christian Geiser’s new book—Longitudinal Structural Equation Modeling with Mplus: A Latent State-Trait Perspective—focuses on the implementation of longitudinal statistical models with latent variables and their application in Mplus. The book specifically introduces and discusses longitudinal structural equation models in the context of latent state-trait (LST) theory. By couching LST theory within a longitudinal structural equation modeling framework, this book describes how researchers can model change in individual states or traits under situational influences. Such a feature makes the book a distinctive resource that stands out in comparison with other longitudinal modeling books.

Applying negative binomial distribution in diagnostic classification models for analyzing count data

Diagnostic classification models (DCMs) have been used to classify examinees into groups based on their possession status of a set of latent traits. In addition to traditional item-based scoring approaches, examinees may be scored based on their completion of a series of small and similar tasks. Those scores are usually considered as count variables. To model count scores, this study proposes a new class of DCMs that uses the negative binomial distribution at its core. We explained the proposed model framework and demonstrated its use through an operational example. Simulation studies were conducted to evaluate the performance of the proposed model and compare it with the Poisson-based DCM.
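The motivation for a negative binomial core can be seen in a quick comparison with the Poisson distribution: the Poisson forces the variance of counts to equal the mean, while the negative binomial allows overdispersed count scores. A minimal sketch with hypothetical parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Poisson counts: variance equals the mean (here, 4)
pois = rng.poisson(lam=4.0, size=100_000)

# Negative binomial counts with the same mean but larger variance
# (NumPy parameterization: mean = n * (1 - p) / p = 2 * (2/3) / (1/3) = 4,
#  variance = n * (1 - p) / p**2 = 12)
negbin = rng.negative_binomial(n=2, p=1/3, size=100_000)

print(pois.mean(), pois.var())     # both close to 4
print(negbin.mean(), negbin.var()) # mean close to 4, variance close to 12
```

When observed count scores are more dispersed than their mean, as is common with tallies of repeated small tasks, the extra variance parameter is what the Poisson-based model lacks.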

Tutorials

Latent Growth Curve Modeling (LGCM) in JASP
Koch, Heo, & van Kesteren 2022

How can we model the form of change in an outcome as time passes? This question is important to researchers who examine developmental, longitudinal, or repeated measurements across multiple occasions. Latent Growth Curve Modeling (LGCM) is a statistical technique designed for exactly this purpose. This tutorial guides readers from the concept of LGCM to interpreting results with intuitive examples.
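The individual trajectories that LGCM models can be mimicked with a small simulation: each person gets a latent intercept and a latent slope, and observed scores scatter around that personal line. All numbers below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, occasions = 200, 5
times = np.arange(occasions)

# Latent growth factors: person-specific intercepts and slopes
intercepts = rng.normal(50.0, 5.0, size=n)  # hypothetical mean/SD
slopes = rng.normal(2.0, 0.8, size=n)       # average growth of 2 per occasion

# Observed repeated measures = intercept + slope * time + residual
y = (intercepts[:, None]
     + slopes[:, None] * times[None, :]
     + rng.normal(0.0, 2.0, size=(n, occasions)))

print(y.mean(axis=0))  # mean trajectory rises by roughly 2 per occasion
```

Fitting an LGCM to data like `y` recovers the means and variances of the intercept and slope factors, which is what describes the form of change over time.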

Multiple Indicators Multiple Causes (MIMIC) model in JASP
Koch, Heo, & van Kesteren 2022

Researchers often have questions about the inter-relationships between observed variables (indicators) and latent variables (factors). The Multiple Indicators Multiple Causes (MIMIC) model is one way to answer such questions. This tutorial introduces the idea of MIMIC models and, with a simple example, explains how to fit and interpret them.
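The data structure a MIMIC model describes — observed causes feeding a latent factor that in turn drives several indicators — can be sketched with a small simulation (all paths and loadings below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Observed causes (covariates) predicting the latent factor
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
factor = 0.6 * x1 + 0.4 * x2 + rng.normal(scale=0.5, size=n)

# Three observed indicators loading on the latent factor
loadings = np.array([1.0, 0.8, 0.7])
indicators = (factor[:, None] * loadings[None, :]
              + rng.normal(scale=0.3, size=(n, 3)))
```

Because the covariates affect the indicators only through the latent factor, each indicator correlates with `x1` and `x2` — which is the pattern a fitted MIMIC model decomposes into cause paths and factor loadings.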

WAMBS Checklist in JASP (using JAGS)
Heo & van de Schoot 2020

This tutorial illustrates how to follow the When-to-Worry-and-How-to-Avoid-the-Misuse-of-Bayesian-Statistics (WAMBS) Checklist in JASP using JAGS. Among many analytic techniques, we focus on regression analysis and explain the ten points for a thorough application of Bayesian analysis. After the tutorial, we expect readers can refer to the WAMBS Checklist to sensibly apply Bayesian statistics to answer substantive research questions.

Advanced Bayesian regression in JASP
Heo & van de Schoot 2020

This tutorial illustrates how to interpret the more advanced output and set different prior specifications when performing Bayesian regression analyses in JASP. We explain various options in the control panel and introduce such concepts as Bayesian model averaging, posterior model probability, prior model probability, inclusion Bayes factor, and posterior exclusion probability. After the tutorial, we expect readers can develop a deep understanding of Bayesian regression and perform it to answer substantive research questions.

JASP for Bayesian analyses with informative priors (using JAGS)
Heo & van de Schoot 2020

This tutorial illustrates how to perform Bayesian analyses in JASP with informative priors using JAGS. Among many analytic options, we focus on regression analysis and explain the effects of different prior specifications on regression coefficients. We also present a Shiny app designed to help users define prior distributions using the example in this tutorial. After the tutorial, we expect readers can understand how to incorporate prior knowledge when conducting Bayesian regression analysis to answer substantive research questions.
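The general effect of an informative prior on a regression coefficient can be illustrated outside JASP with the conjugate normal case (residual SD assumed known; data and prior settings below are hypothetical): a tight prior centered at zero pulls the estimate toward zero, while a vague prior leaves it near the least-squares value.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.normal(size=n)
beta_true, sigma = 1.5, 1.0
y = beta_true * x + rng.normal(scale=sigma, size=n)

def posterior_mean_slope(x, y, prior_mean, prior_sd, sigma=1.0):
    """Conjugate normal posterior mean for a single regression slope
    (no intercept, residual SD known)."""
    precision = x @ x / sigma**2 + 1.0 / prior_sd**2
    return (x @ y / sigma**2 + prior_mean / prior_sd**2) / precision

vague = posterior_mean_slope(x, y, prior_mean=0.0, prior_sd=100.0)
informative = posterior_mean_slope(x, y, prior_mean=0.0, prior_sd=0.1)
# The tight prior at 0 shrinks the estimate toward 0 relative to the vague prior
print(vague, informative)
```

This is the shrinkage behavior the tutorial demonstrates interactively: the stronger the prior relative to the data, the more the posterior estimate moves toward the prior mean.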

JASP for Bayesian analyses with default priors
Heo, Veen, & van de Schoot 2020

This tutorial illustrates how beginners can perform Bayesian analyses in JASP with default priors. We cover basic procedures for Bayesian statistics and explain how to interpret core results. For each analytic option, a brief comparison between Bayesian and frequentist statistics is presented. After the tutorial, we expect readers can perform correlation analysis, multiple linear regression, t-test, and one-way analysis of variance, all from a Bayesian perspective, and understand the logic of Bayesian statistics.

JASP for beginners
Heo, Veen, & van de Schoot 2020

This tutorial introduces the fundamentals of JASP for beginners. We guide you from installation through data loading and data management to interpreting results. After the tutorial, we expect readers can easily perform correlation, multiple linear regression, t-test, and one-way analysis of variance and draw conclusions from the output in JASP.

Advanced Bayesian regression in jamovi
Heo & van de Schoot 2020

This tutorial explains how to interpret the more advanced output and set different prior specifications when conducting Bayesian regression analyses in jamovi. We guide you through various options in the options panel and introduce concepts including Bayesian model averaging, prior model probability, posterior model probability, inclusion Bayes factor, and posterior exclusion probability. After the tutorial, we expect readers can develop a deep understanding of Bayesian regression and perform it to answer substantive research questions.

jamovi for Bayesian analyses with default priors
Heo & van de Schoot 2020

This tutorial explains how beginners can conduct Bayesian analyses in jamovi with default priors. With step-by-step illustrations, we perform and interpret core results of correlation analysis, multiple linear regression, t-test, and one-way analysis of variance, all from a Bayesian perspective. To enhance readers’ understanding, a brief comparison between the Bayesian and frequentist approaches is provided for each analytic option. After the tutorial, we expect readers can perform basic Bayesian analyses and distinguish the Bayesian approach from the frequentist one.

jamovi for beginners
Heo & van de Schoot 2020

This tutorial introduces the basics of jamovi for beginners. Starting from jamovi installation, we explain the screen structure of jamovi, how to load a dataset, and how to explore and visualize data. Readers will further learn ways to perform such statistical analyses as correlation analysis, multiple linear regression, t-test, and one-way analysis of variance, all from a frequentist viewpoint. Given that jamovi integrates closely with R, one section helps readers make the best use of both jamovi and R.

R for beginners
Heo, Veen, & van de Schoot 2020

This tutorial provides the basics of R for beginners. Our detailed instruction starts with the foundations, including installing R and RStudio, the structure of the R screen, and loading data. Next, we introduce basic functions for data exploration and data visualization. We also illustrate how to conduct statistical analyses such as correlation analysis, multiple linear regression, t-test, and one-way analysis of variance (ANOVA) with easy and intuitive explanations.

Online statistics tutorials - GitHub
Contributor 2020 - Present

This GitHub repository contains the code behind the online statistics tutorials at https://www.rensvandeschoot.com/tutorials/.

Online statistics tutorials - Zenodo
Contributor 2020 - Present

This Zenodo community contains citable versions of the online statistics tutorials at https://www.rensvandeschoot.com/tutorials/.