Consider this model: \[ x_i = a x_0 + e_i, \quad i=1, \dots, 3 \] and \(x_0=e_0\). All terms \(e_0, \dots, e_3\) are independent and \(N(0,1)\) distributed. Let \(e=(e_0, \dots, e_3)\) and \(x=(x_0, \dots, x_3)\). Isolating the error terms gives \[ e = L_1 x \] where \(L_1\) has the form
```
## {{ 1, 0, 0, 0},
##  {-a, 1, 0, 0},
##  {-a, 0, 1, 0},
##  {-a, 0, 0, 1}}
```
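As a quick sanity check, here is a minimal numeric sketch in base R (the value \(a = 0.5\) and the names `L1`, `e`, `x` are chosen only for illustration) that builds this matrix and confirms that \(e = L_1 x\) reproduces the error terms:

```r
set.seed(1)
a <- 0.5
L1 <- diag(4)
L1[2:4, 1] <- -a                    # rows for x1, x2, x3: subtract a * x0
e <- rnorm(4)                       # e = (e0, e1, e2, e3), each N(0, 1)
x <- c(e[1], a * e[1] + e[2:4])     # x0 = e0;  xi = a * x0 + ei, i = 1, 2, 3
all.equal(as.numeric(L1 %*% x), e)  # TRUE: applying L1 to x recovers e
```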
Since \(e = L_1 x\), we have \(\mathbf{Var}(e)=L_1 \mathbf{Var}(x) L_1'\). The error terms have variance \(1\), so \(\mathbf{Var}(e)=I\), and the covariance matrix is \(V_1=\mathbf{Var}(x) = L_1^{-1} (L_1^{-1})'\) while the concentration matrix (the inverse covariance matrix) is \(K_1=L_1' L_1\):
\[\begin{align} K_1 &= \left( \begin{array}{cccc} 3 a ^{2} + 1 & - a & - a & - a \\ - a & 1 & 0 & 0 \\ - a & 0 & 1 & 0 \\ - a & 0 & 0 & 1 \end{array} \right) \\ V_1 &= \left( \begin{array}{cccc} 1 & a & a & a \\ a & a ^{2} + 1 & a ^{2} & a ^{2} \\ a & a ^{2} & a ^{2} + 1 & a ^{2} \\ a & a ^{2} & a ^{2} & a ^{2} + 1 \end{array} \right) \end{align}\]
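These formulas can be spot-checked numerically; the sketch below, again using the hypothetical value \(a = 0.5\), computes \(K_1 = L_1' L_1\) and verifies that its inverse equals \(L_1^{-1}(L_1^{-1})'\):

```r
a  <- 0.5
L1 <- diag(4); L1[2:4, 1] <- -a
K1 <- t(L1) %*% L1                          # concentration matrix K1 = L1' L1
V1 <- solve(K1)                             # covariance matrix    V1 = K1^{-1}
K1[1, 1]                                    # 3 * a^2 + 1 = 1.75
V1[2, 3]                                    # a^2 = 0.25
all.equal(V1, solve(L1) %*% t(solve(L1)))   # TRUE: V1 = L1^{-1} (L1^{-1})'
```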
A slightly more elaborate version uses distinct coefficients \(a_1, a_2, a_3\) and error variances \(\mathbf{Var}(e_0)=w_1\) and \(\mathbf{Var}(e_i)=w_2\) for \(i=1,2,3\). Writing \(W=\mathbf{Var}(e)\), the same argument gives \(V_2 = L_2^{-1} W (L_2^{-1})'\) and \(K_2 = L_2' W^{-1} L_2\), where \(L_2\) and \(W\) are
```
## {{ 1,  0, 0, 0},
##  {-a1, 1, 0, 0},
##  {-a2, 0, 1, 0},
##  {-a3, 0, 0, 1}}
## {{w1, 0,  0,  0},
##  { 0, w2, 0,  0},
##  { 0, 0,  w2, 0},
##  { 0, 0,  0,  w2}}
```
\[\begin{align} K_2 &= \left( \begin{array}{cccc} \frac{1}{w_{1}} + \frac{a_{1} ^{2}}{w_{2}} + \frac{a_{2} ^{2}}{w_{2}} + \frac{a_{3} ^{2}}{w_{2}} & \frac{ - a_{1}}{w_{2}} & \frac{ - a_{2}}{w_{2}} & \frac{ - a_{3}}{w_{2}} \\ \frac{ - a_{1}}{w_{2}} & \frac{1}{w_{2}} & 0 & 0 \\ \frac{ - a_{2}}{w_{2}} & 0 & \frac{1}{w_{2}} & 0 \\ \frac{ - a_{3}}{w_{2}} & 0 & 0 & \frac{1}{w_{2}} \end{array} \right) \\ V_2 &= \left( \begin{array}{cccc} w_{1} & w_{1} a_{1} & w_{1} a_{2} & w_{1} a_{3} \\ a_{1} w_{1} & w_{1} a_{1} ^{2} + w_{2} & a_{1} w_{1} a_{2} & a_{1} w_{1} a_{3} \\ a_{2} w_{1} & a_{2} w_{1} a_{1} & w_{1} a_{2} ^{2} + w_{2} & a_{2} w_{1} a_{3} \\ a_{3} w_{1} & a_{3} w_{1} a_{1} & a_{3} w_{1} a_{2} & w_{1} a_{3} ^{2} + w_{2} \end{array} \right) \end{align}\]
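The same kind of numeric check works for the weighted case; the values \(a_1 = 0.2\), \(a_2 = 0.3\), \(a_3 = 0.4\), \(w_1 = 1\), \(w_2 = 2\) below are arbitrary choices for illustration:

```r
a  <- c(0.2, 0.3, 0.4)                  # (a1, a2, a3)
w1 <- 1; w2 <- 2
L2 <- diag(4); L2[2:4, 1] <- -a
W  <- diag(c(w1, w2, w2, w2))           # Var(e): Var(e0) = w1, Var(ei) = w2
K2 <- t(L2) %*% solve(W) %*% L2         # K2 = L2' W^{-1} L2
V2 <- solve(L2) %*% W %*% t(solve(L2))  # V2 = L2^{-1} W (L2^{-1})'
all.equal(V2, solve(K2))                # TRUE: V2 and K2 are inverses
K2[1, 1]                                # 1/w1 + (a1^2 + a2^2 + a3^2)/w2 = 1.145
V2[2, 2]                                # w1 * a1^2 + w2 = 2.04
```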