### 4.5 Latent-class linear model

Mathematical representation

$y_{i|c} = \mathbf{x}_i'\beta_c + \varepsilon_{i|c}, \qquad \varepsilon_{i|c} \sim \mathrm{N}\!\left(0, \frac{1}{\tau_c}\right), \qquad c = 0,1,\dots,C-1$ (4.6)
• the model is estimated using $N$ observations and involves $C$ classes (counting starts at zero)
• $y_i$ is the value of the dependent variable for observation $i$
• $\mathbf{x}_i$ is a $K \times 1$ vector that stores the values of the $K$ independent variables for observation $i$
• $\beta_c$ is a $K \times 1$ vector of parameters for class $c$
• $\tau_c$ is the precision of the error term for class $c$: $\sigma_{\varepsilon,c}^2 = \frac{1}{\tau_c}$
• each observation, $i$, belongs to class $c$ with prior probability (before seeing the data $\{y_i, \mathbf{x}_i\}$) $\pi_{i,c}$. BayES supports two types of models:
1. unconditional prior class membership probabilities, in which case:

$\pi_{i,c} = \pi_c \quad \forall\, i,c$

With this specification $\pi \equiv \left[\pi_0 \;\; \pi_1 \;\; \cdots \;\; \pi_{C-1}\right]'$ is a vector of parameters to be estimated, with $\pi_c > 0$ $\forall\, c$ and $\sum_{c=0}^{C-1} \pi_c = 1$.

2. conditional prior class membership probabilities, in which case:

$\pi_{i,c} = \dfrac{e^{\mathbf{z}_i'\delta_c}}{\sum_{\ell=0}^{C-1} e^{\mathbf{z}_i'\delta_\ell}} \quad \forall\, i,c$

where:

• $\mathbf{z}_i$ is an $L \times 1$ vector that stores the values of the $L$ determinants of class membership for observation $i$
• $\delta \equiv \left[\delta_1' \;\; \delta_2' \;\; \cdots \;\; \delta_{C-1}'\right]'$ is an $L\cdot(C-1) \times 1$ vector of parameters to be estimated

In this specification, class-membership probabilities are determined by a multinomial Logit model, where, for identification purposes, $\delta_0$ is normalized to an $L \times 1$ vector of zeros.
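As an illustration of this multinomial-Logit specification (this is generic numerical code, not BayES syntax, and the parameter values are hypothetical), the probabilities $\pi_{i,c}$ for one observation can be computed as a softmax over the linear indices $\mathbf{z}_i'\delta_c$, with $\delta_0$ fixed at zero:

```python
import numpy as np

def class_probabilities(z_i, delta):
    """Compute pi_{i,c} for one observation via multinomial logit.

    z_i   : (L,) vector of class-membership determinants
    delta : (C-1, L) matrix of parameters for classes 1..C-1;
            delta_0 is normalized to zeros for identification
    """
    # Linear indices, with the class-0 index fixed at zero
    idx = np.concatenate(([0.0], delta @ z_i))
    # Softmax; subtracting the max improves numerical stability
    idx -= idx.max()
    e = np.exp(idx)
    return e / e.sum()

# Hypothetical example: L = 2 determinants, C = 3 classes
z_i = np.array([1.0, 0.5])          # e.g. a constant and one covariate
delta = np.array([[0.2, -1.0],
                  [0.4,  0.3]])
pi_i = class_probabilities(z_i, delta)
```

Because class 0's index is pinned at zero, setting all remaining $\delta_c$ to zero recovers equal probabilities $1/C$ for every class.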

Priors

| Parameter | Probability density function | Default hyperparameters |
|---|---|---|
| **Common to both model types** | | |
| $\beta_c$ | $p(\beta_c) = \frac{\lvert\mathbf{P}_c\rvert^{1/2}}{(2\pi)^{K/2}} \exp\left\{ -\frac{1}{2}\left(\beta_c - \mathbf{m}_c\right)' \mathbf{P}_c \left(\beta_c - \mathbf{m}_c\right) \right\}$ | $\mathbf{m}_c = \mathbf{0}_K$, $\mathbf{P}_c = 0.001 \cdot \mathbf{I}_K$ |
| $\tau_c$ | $p(\tau_c) = \frac{b_{\tau_c}^{a_{\tau_c}}}{\Gamma(a_{\tau_c})} \tau_c^{a_{\tau_c}-1} e^{-\tau_c b_{\tau_c}}$ | $a_{\tau_c} = 0.001$, $b_{\tau_c} = 0.001$ |
| **Model with unconditional class-membership probabilities** | | |
| $\pi$ | $p(\pi) = \frac{1}{B(\mathbf{a})} \prod_{c=0}^{C-1} \pi_c^{a_c - 1}$ | $a_0 = a_1 = \cdots = a_{C-1} = 1$ |
| **Model with conditional class-membership probabilities** | | |
| $\delta$ | $p(\delta) = \frac{\lvert\mathbf{P}_\delta\rvert^{1/2}}{(2\pi)^{L(C-1)/2}} \exp\left\{ -\frac{1}{2}\left(\delta - \mathbf{m}_\delta\right)' \mathbf{P}_\delta \left(\delta - \mathbf{m}_\delta\right) \right\}$ | $\mathbf{m}_\delta = \mathbf{0}_{L(C-1)}$, $\mathbf{P}_\delta = 0.001 \cdot \mathbf{I}_{L(C-1)}$ |
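To get a feel for how diffuse the default hyperparameters are, one can sample from the three priors directly (an illustrative NumPy sketch, not BayES code; the dimensions `K` and `C` are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
K, C = 3, 2  # illustrative dimensions

# Prior on beta_c: N(m_c, P_c^{-1}) with m_c = 0_K and P_c = 0.001*I_K,
# which implies a prior variance of 1000 on every coefficient
beta_c = rng.multivariate_normal(np.zeros(K),
                                 np.linalg.inv(0.001 * np.eye(K)))

# Prior on tau_c: Gamma with shape a = 0.001 and rate b = 0.001;
# NumPy parameterizes the gamma by shape and *scale* = 1/rate
tau_c = rng.gamma(shape=0.001, scale=1.0 / 0.001)

# Prior on pi (unconditional model): Dirichlet with all concentration
# parameters equal to 1, i.e. uniform on the probability simplex
pi = rng.dirichlet(np.ones(C))
```

With these defaults the normal and gamma priors are very flat, so the posteriors are dominated by the data; the Dirichlet prior treats every class allocation as equally likely a priori.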

Syntax

`[<model name> = ] lm_lc( y ~ x1 x2 ... xK [ | z1 z2 ... zL ] [, <options> ] );`

where:

• y is the dependent variable name, as it appears in the dataset used for estimation
• x1 x2 ... xK is a list of the $K$ independent variable names, as they appear in the dataset used for estimation; when a constant term is to be included in the model, it must be requested explicitly
• z1 z2 ... zL is a list of the $L$ variable names that enter the specification of the class-membership probabilities (determinants of class membership), as they appear in the dataset used for estimation; when a constant term is to be included, it must be requested explicitly; this list is optional: when it is provided, the conditional latent-class model is estimated, and when it is not, the unconditional model is estimated

 If the dataset used for estimation has been previously declared as a panel dataset (typically, by a call to the set_pd() function) then the model estimated is the one documented in the following section. Each group in that model is restricted to belong to the same class for the entire period for which it is observed.

The optional arguments for the latent-class linear model are:⁵

| Option | Description |
|---|---|
| **Gibbs parameters** | |
| "chains" | number of chains to run in parallel (positive integer); the default value is 1 |
| "burnin" | number of burn-in draws per chain (positive integer); the default value is 10000 |
| "draws" | number of retained draws per chain (positive integer); the default value is 20000 |
| "thin" | value of the thinning parameter (positive integer); the default value is 1 |
| "seed" | value of the seed for the random-number generator (positive integer); the default value is 42 |
| **Model specification** | |
| "classes" | specification of the number of classes to be used in the model (positive integer); the default value is 2 |
| **Hyperparameters, common to both model types** | |
| "m" | mean vector of the prior for each $\beta_c$ ($K \times 1$ vector); the default value is $\mathbf{0}_K$ |
| "P" | precision matrix of the prior for each $\beta_c$ ($K \times K$ symmetric and positive-definite matrix); the default value is $0.001 \cdot \mathbf{I}_K$ |
| "mj" | mean vector of the prior for $\beta_j$, $j=0,1,\dots,C-1$ ($K \times 1$ vector); this mean overwrites the generic mean ("m") for class $j$ only |
| "Pj" | precision matrix of the prior for $\beta_j$, $j=0,1,\dots,C-1$ ($K \times K$ symmetric and positive-definite matrix); this precision matrix overwrites the generic precision matrix ("P") for class $j$ only |
| "a_tau" | shape parameter of the prior for each $\tau_c$ (positive number); the default value is 0.001 |
| "b_tau" | rate parameter of the prior for each $\tau_c$ (positive number); the default value is 0.001 |
| "a_tauj" | shape parameter of the prior for $\tau_j$, $j=0,1,\dots,C-1$ (positive number); this shape parameter overwrites the generic shape parameter ("a_tau") for class $j$ only |
| "b_tauj" | rate parameter of the prior for $\tau_j$, $j=0,1,\dots,C-1$ (positive number); this rate parameter overwrites the generic rate parameter ("b_tau") for class $j$ only |
| **Model with unconditional class-membership probabilities** | |
| "a" | vector of concentration parameters for the Dirichlet prior on $\pi$ ($C \times 1$ vector with positive entries); the default value is a $C \times 1$ vector of ones |
| **Model with conditional class-membership probabilities** | |
| "m_delta" | mean vector of the prior for $\delta$ ($L(C-1) \times 1$ vector); the default value is $\mathbf{0}_{L(C-1)}$ |
| "P_delta" | precision matrix of the prior for $\delta$ ($L(C-1) \times L(C-1)$ symmetric and positive-definite matrix); the default value is $0.001 \cdot \mathbf{I}_{L(C-1)}$ |
| **Dataset and log-marginal likelihood** | |
| "dataset" | the id value of the dataset that will be used for estimation; the default value is the first dataset in memory (in alphabetical order) |
| "logML_CJ" | boolean indicating whether the Chib (1995)/Chib & Jeliazkov (2001) approximation to the log-marginal likelihood should be calculated (true\|false); the default value is false |

Reported Parameters

| Parameter | Name in output | Description |
|---|---|---|
| **Common to both model types** | | |
| $\beta_c$ | variable_name | vector of parameters associated with the independent variables for class $c$ |
| $\tau_c$ | tau | precision parameter of the error term for class $c$, $\varepsilon_{i\mid c}$ |
| $\sigma_{\varepsilon,c}$ | sigma_e | standard deviation of the error term for class $c$: $\sigma_{\varepsilon,c} = 1/\tau_c^{1/2}$ |
| **Model with unconditional class-membership probabilities** | | |
| $\pi_c$ | pi | prior class-membership probability for class $c$ |
| **Model with conditional class-membership probabilities** | | |
| $\delta_c$ | variable_name | vector of parameters associated with the determinants of class membership for class $c$; for identification purposes, these parameters for class 0 are normalized to zero |

Stored values and post-estimation analysis
If a left-hand-side id value is provided when a latent-class linear model is created, then the following results are saved in the model item and are accessible via the ‘.’ operator:

| Stored value | Description |
|---|---|
| Samples | a matrix containing the draws from the posterior of $\beta_c$ and $\tau_c$ for $c=0,1,\dots,C-1$, and, depending on the estimated model, $\pi$ or $\delta$ |
| cj$x1, ..., cj$xK | vectors containing the draws from the posterior of the parameters associated with variables x1, ..., xK, for $j=0,1,\dots,C-1$ (the names of these vectors are the names of the variables that were included in the right-hand side of the model, prepended by 'c', the class index and a dollar sign; in this way 'cj' can be used to distinguish among parameters across different classes) |
| cj$tau | vectors containing the draws from the posterior of $\tau_j$, for $j=0,1,\dots,C-1$ ('tau' is prepended by 'c', the class index and the dollar sign; in this way 'cj' can be used to distinguish among precision parameters in different classes) |
| pi_j | vectors containing the draws from the posterior of $\pi_j$, for $j=0,1,\dots,C-1$ (these vectors are available only after the estimation of the model with unconditional class-membership probabilities) |
| pi_j$z1, ..., pi_j$zL | vectors containing the draws from the posterior of the parameters associated with variables z1, ..., zL, for $j=1,\dots,C-1$⁶ (the names of these vectors are the names of the variables that were included in the z list, in the right-hand side of the model, prepended by 'pi_j' and the dollar sign; in this way 'pi_j' can be used to distinguish among parameters associated with variables with different roles in the model, for example the same variable appearing in both x and z lists, as well as among parameters associated with a variable in the z list, but corresponding to different classes; these vectors are available only after the estimation of the model with conditional class-membership probabilities) |
| logML | the Lewis & Raftery (1997) approximation of the log-marginal likelihood |
| logML_CJ | the Chib (1995)/Chib & Jeliazkov (2001) approximation to the log-marginal likelihood; this is available only if the model was estimated with the "logML_CJ"=true option |
| pi_i | $N \times C$ matrix that stores the expected values of the posterior class-membership probabilities for each observation and for each of the $C$ classes |
| nchains | the number of chains that were used to estimate the model |
| nburnin | the number of burn-in draws per chain that were used when estimating the model |
| ndraws | the total number of retained draws from the posterior ($=$ chains $\cdot$ draws) |
| nthin | value of the thinning parameter that was used when estimating the model |
| nseed | value of the seed for the random-number generator that was used when estimating the model |
| nclasses | number of classes used during the estimation of the model |

Additionally, the following functions are available for post-estimation analysis (see section B.14):

• diagnostics()
• test()
• pmp()
• store()
• mfx()

The latent-class linear model uses the store() function to associate the estimates of the posterior class-membership probabilities (pi_i) with specific observations and store their values in the dataset used for estimation. The generic syntax for a statement involving the store() function after estimation of a latent-class linear model is:

`store( pi_i, <new variable name prefix> [, "model"=<model name>] );`

This statement will generate $C$ additional variables in the dataset used for estimation of the model, with names constructed by appending the class index ($0,1,\dots,C-1$) to the prefix provided as the second argument to store().
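The posterior class-membership probabilities stored in pi_i follow from Bayes' rule: at each draw, the prior probability $\pi_{i,c}$ is reweighted by the likelihood of $y_i$ under each class, and the results are averaged over the retained draws. The reweighting at a single draw can be sketched as follows (illustrative NumPy code with hypothetical parameter values, not BayES syntax):

```python
import numpy as np

def posterior_membership(y_i, x_i, beta, tau, prior_pi):
    """Posterior class-membership probabilities for one observation
    at a single posterior draw, via Bayes' rule over the C classes.

    beta     : (C, K) class-specific coefficient vectors
    tau      : (C,)   class-specific error precisions
    prior_pi : (C,)   prior class-membership probabilities pi_{i,c}
    """
    mean = beta @ x_i                      # x_i' beta_c for each class
    # Normal density of y_i under each class: N(x_i' beta_c, 1/tau_c)
    dens = np.sqrt(tau / (2 * np.pi)) * np.exp(-0.5 * tau * (y_i - mean) ** 2)
    w = prior_pi * dens                    # prior times likelihood
    return w / w.sum()                     # normalize over classes

# Hypothetical draw with C = 2 classes and K = 2 regressors
beta = np.array([[1.0, 0.5],
                 [3.0, -0.2]])
tau = np.array([4.0, 1.0])
p = posterior_membership(y_i=1.6, x_i=np.array([1.0, 1.0]),
                         beta=beta, tau=tau, prior_pi=np.array([0.5, 0.5]))
```

In this example the observation sits close to the first class's regression line, so most of the posterior membership probability is assigned to class 0.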
The latent-class linear model with conditional class-membership probabilities uses the mfx() function to calculate and report the marginal effects of the variables in the z list on the prior class-membership probabilities that come from the multinomial-Logit part of the model. The generic syntax for a statement involving the mfx() function after estimation of a latent-class linear model with conditional class-membership probabilities is:

`mfx( ["type"=1] [, "point"=<point of calculation>] [, "model"=<model name>] );`

See the general documentation of the mfx() function (section B.14) for details on the optional arguments.

Examples

Example 1

```
myData = import("$BayESHOME/Datasets/dataset2.csv");
myData.constant = 1;

lm_lc(y ~ constant x1 x2);
```

Example 2

```
myData = import("$BayESHOME/Datasets/dataset2.csv");
myData.constant = 1;

myModel = lm_lc(y ~ constant x1 x2,
    "m0"=[2;0.6;0.3], "P" = 10*eye(3,3),
    "burnin"=10000, "draws"=30000, "thin"=2, "chains"=2, "classes"=2,
    "logML_CJ" = true, "dataset"=myData);

diagnostics("model"=myModel);

plot([myModel.c0$x1, myModel.c1$x1],
    "title"="beta2 for the two classes");
plot([myModel.pi_0, myModel.pi_1],
    "title"="Prior class-membership probabilities");
```

Example 3

```
myData = import("$BayESHOME/Datasets/dataset2.csv");
myData.constant = 1;

myModel = lm_lc(y ~ constant x1 x2 | constant x3,
    "m0"=[2;0.6;0.3], "P" = 10*eye(3,3),
    "burnin"=10000, "draws"=30000, "thin"=2, "chains"=2, "classes"=2,
    "logML_CJ" = true, "dataset"=myData);

diagnostics("model"=myModel);
mfx("model"=myModel);

plot([myModel.c0$x1, myModel.c1$x1],
    "title"="beta2 for the two classes");
plot(myModel.pi_1$x3,
    "title"="delta2 for class 1");
```

⁵ Optional arguments are always given in option-value pairs (e.g., "chains"=3).

⁶ Indexing starts at 1 because the parameters of the multinomial-Logit part of the model associated with class 0 are normalized to zero for identification purposes.