Title: | Bayesian Estimation of the Reduced Reparameterized Unified Model with Gibbs Sampling |
---|---|
Description: | Implementation of Gibbs sampling algorithm for Bayesian Estimation of the Reduced Reparameterized Unified Model ('rrum'), described by Culpepper and Hudson (2017) <doi: 10.1177/0146621617707511>. |
Authors: | Steven Andrew Culpepper [aut, cph] , Aaron Hudson [aut, cph] , James Joseph Balamuta [aut, cph, cre] |
Maintainer: | James Joseph Balamuta <[email protected]> |
License: | GPL (>= 2) |
Version: | 0.2.1 |
Built: | 2024-11-10 04:29:31 UTC |
Source: | https://github.com/tmsalab/rrum |
Obtains samples from the posterior distribution of the reduced Reparameterized Unified Model (rRUM).
rrum(
  Y, Q, chain_length = 10000L,
  as = 1, bs = 1, ag = 1, bg = 1,
  delta0 = rep(1, 2^ncol(Q))
)
Y | A matrix with N rows and J columns, where N represents the number of individuals and J the number of items; entries are the binary item responses. |
Q | A matrix with J rows and K columns, where K represents the number of attributes; entry (j, k) is 1 if attribute k is required to answer item j and 0 otherwise. |
chain_length | A numeric indicating the number of iterations of the Gibbs sampler. Default is 10000. |
as | A numeric, first shape parameter of the Beta prior on pistar. |
bs | A numeric, second shape parameter of the Beta prior on pistar. |
ag | A numeric, first shape parameter of the Beta prior on rstar. |
bg | A numeric, second shape parameter of the Beta prior on rstar. |
delta0 | A vector of concentration parameters for the Dirichlet prior on the latent class probabilities pi; its length is 2^K, one entry per attribute class. |
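As a sketch of how the prior hyperparameters fit together (the toy Q matrix below is hypothetical; the shape of delta0 follows from the default `rep(1, 2^ncol(Q))` in the usage line):

```r
# Toy Q matrix (hypothetical): 3 items by K = 2 attributes, used only to
# illustrate the shape of the default Dirichlet prior.
Q <- rbind(c(1, 0),
           c(0, 1),
           c(1, 1))
K <- ncol(Q)

# The default delta0 = rep(1, 2^ncol(Q)) places a flat Dirichlet prior over
# the 2^K attribute classes; as = bs = ag = bg = 1 likewise give flat
# Beta(1, 1) priors on pistar and rstar.
delta0 <- rep(1, 2^ncol(Q))
length(delta0)  # one concentration parameter per attribute class
```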
A list that contains:

PISTAR | A matrix where each column represents one draw from the posterior distribution of pistar. |
RSTAR | A J by K by chain_length array, where J represents the number of items and K represents the number of attributes. Each slice represents one draw from the posterior distribution of rstar. |
PI | A matrix where each column represents one draw from the posterior distribution of pi. |
ALPHA | An N by K by chain_length array, where N represents the number of individuals and K represents the number of attributes. Each slice represents one draw from the posterior distribution of alpha. |
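The shapes above can be illustrated with mock draws (all dimensions below are hypothetical stand-ins; with real output these arrays come from the list returned by rrum()):

```r
set.seed(42)

# Mock posterior draws with the documented shapes: J = 3 items, K = 2
# attributes, N = 5 individuals, and 100 retained iterations.
J <- 3; K <- 2; N <- 5; iters <- 100
RSTAR <- array(runif(J * K * iters), dim = c(J, K, iters))
ALPHA <- array(rbinom(N * K * iters, 1, 0.5), dim = c(N, K, iters))

# Averaging over the iteration (third) dimension yields posterior means:
rstar.mean <- apply(RSTAR, c(1, 2), mean)  # J x K matrix
alpha.prob <- apply(ALPHA, c(1, 2), mean)  # N x K mastery probabilities
```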
Steven Andrew Culpepper, Aaron Hudson, and James Joseph Balamuta
Culpepper, S. A. & Hudson, A. (In Press). An improved strategy for Bayesian estimation of the reduced reparameterized unified model. Applied Psychological Measurement. doi: 10.1177/0146621617707511
Hudson, A., Culpepper, S. A., & Douglas, J. (2016, July). Bayesian estimation of the generalized NIDA model with Gibbs sampling. Paper presented at the annual International Meeting of the Psychometric Society, Asheville, North Carolina.
# Set seed for reproducibility
set.seed(217)

## Define Simulation Parameters
N = 1000  # Number of Individuals
J = 6     # Number of Items
K = 2     # Number of Attributes

# Matrix where rows represent attribute classes
As = attribute_classes(K)

# Latent class probabilities
pis = c(.1, .2, .3, .4)

# Q matrix
Q = rbind(c(1, 0),
          c(0, 1),
          c(1, 0),
          c(0, 1),
          c(1, 1),
          c(1, 1))

# The probabilities of answering each item correctly for individuals
# who do not lack any required attribute
pistar = rep(.9, J)

# Penalties for failing to have each of the required attributes
rstar = .5 * Q

# Randomized alpha profiles, sampled from the 2^K attribute classes
alpha = As[sample(1:(2 ^ K), N, replace = TRUE, pis), ]

# Simulate data
rrum_items = simcdm::sim_rrum_items(Q, rstar, pistar, alpha)

## Not run:
# Note: This portion of the code is computationally intensive.

# Recover simulation parameters with the Gibbs sampler
Gibbs.out = rrum(rrum_items, Q)

# Iterations to be discarded from the chain as burn-in
burnin = 1:5000

# Calculate summaries of the posterior distributions
rstar.mean  = with(Gibbs.out, apply(RSTAR[, , -burnin], c(1, 2), mean))
pistar.mean = with(Gibbs.out, apply(PISTAR[, -burnin], 1, mean))
pis.mean    = with(Gibbs.out, apply(PI[, -burnin], 1, mean))
## End(Not run)
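Beyond posterior means, it can help to check chain stability before settling on a burn-in. A minimal sketch, using a mock PISTAR chain in place of real sampler output (the mock values are hypothetical; substitute the PISTAR matrix returned by rrum()):

```r
set.seed(1)

# Mock a PISTAR chain: J = 6 items by chain_length = 10000 iterations,
# standing in for Gibbs.out$PISTAR from a real run.
chain_length <- 10000; J <- 6
PISTAR <- matrix(runif(J * chain_length, 0.8, 1.0), J, chain_length)

# Discard the first half as burn-in, then track the running mean of the
# retained draws for item 1; a flat trajectory suggests the chain has
# stabilized for this parameter.
burnin <- 1:5000
draws <- PISTAR[1, -burnin]
running.mean <- cumsum(draws) / seq_along(draws)
```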