library(multinma)
options(mc.cores = parallel::detectCores())
This vignette describes the analysis of data on the mean off-time reduction in patients given dopamine agonists as adjunct therapy in Parkinson’s disease, in a network of 7 trials of 4 active drugs plus placebo (Dias et al. 2011). The data are available in this package as parkinsons:
head(parkinsons)
#> studyn trtn y se n diff se_diff
#> 1 1 1 -1.22 0.504 54 NA 0.504
#> 2 1 3 -1.53 0.439 95 -0.31 0.668
#> 3 2 1 -0.70 0.282 172 NA 0.282
#> 4 2 2 -2.40 0.258 173 -1.70 0.382
#> 5 3 1 -0.30 0.505 76 NA 0.505
#> 6 3 2 -2.60 0.510 71 -2.30 0.718
We consider analysing these data in three separate ways:

1. using the arm-based data (means y and corresponding standard errors se);
2. using the contrast-based data (mean differences diff and corresponding standard errors se_diff);
3. using a combination of the two, where some studies contribute arm-based data and others contribute contrast-based data.

Note: In this case, with Normal likelihoods for both arms and contrasts, we will see that the three analyses give identical results. In general, unless the arm-based likelihood is Normal, results from a model using a contrast-based likelihood will not exactly match those from a model using an arm-based likelihood, since the contrast-based Normal likelihood is only an approximation. Similarity of results depends on the suitability of the Normal approximation, which may not always be appropriate, e.g. with a small number of events or a small sample size for a binary outcome. The use of an arm-based likelihood (sometimes called an “exact” likelihood) is therefore preferable wherever possible.
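To make the relationship between the two data formats concrete, the following short check (not part of the original vignette) shows how the contrast-based summaries for a two-arm study are obtained from the arm-based ones: the mean difference is the difference in arm means, and its standard error combines the two arm standard errors, assuming independent arms.
# For study 1, reproduce diff and se_diff from the arm-level summaries
study1 <- subset(parkinsons, studyn == 1)
study1$y[2] - study1$y[1]                 # -1.53 - (-1.22) = -0.31, matching diff
sqrt(study1$se[2]^2 + study1$se[1]^2)     # sqrt(0.439^2 + 0.504^2) = 0.668, matching se_diff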
We begin with an analysis of the arm-based data - means and standard errors.
We have arm-level continuous data giving the mean off-time reduction (y
) and standard error (se
) in each arm. We use the function set_agd_arm()
to set up the network.
arm_net <- set_agd_arm(parkinsons,
                       study = studyn,
                       trt = trtn,
                       y = y,
                       se = se,
                       sample_size = n)
arm_net
#> A network with 7 AgD studies (arm-based).
#>
#> ------------------------------------------------------- AgD studies (arm-based) ----
#> Study Treatment arms
#> 1 2: 1 | 3
#> 2 2: 1 | 2
#> 3 3: 4 | 1 | 2
#> 4 2: 4 | 3
#> 5 2: 4 | 3
#> 6 2: 4 | 5
#> 7 2: 4 | 5
#>
#> Outcome type: continuous
#> ------------------------------------------------------------------------------------
#> Total number of treatments: 5
#> Total number of studies: 7
#> Reference treatment is: 4
#> Network is connected
We let treatment 4 be set by default as the network reference treatment, since this results in considerably improved sampling efficiency over choosing treatment 1 as the network reference. The sample_size
argument is optional, but enables the nodes to be weighted by sample size in the network plot.
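If we wanted to choose the network reference treatment ourselves rather than accept the default, the set-up functions take a trt_ref argument. A minimal sketch (not run, and assuming the trt_ref argument of set_agd_arm()):
# Not run
set_agd_arm(parkinsons,
            study = studyn,
            trt = trtn,
            y = y,
            se = se,
            sample_size = n,
            trt_ref = 4)   # explicitly request treatment 4 as the reference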
Plot the network structure.
plot(arm_net, weight_edges = TRUE, weight_nodes = TRUE)
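Since the network plot is a ggplot object, it can be customised by adding the usual ggplot2 layers. A small sketch, assuming the ggplot2 package is available:
# Not run
library(ggplot2)
plot(arm_net, weight_edges = TRUE, weight_nodes = TRUE) +
  theme(legend.position = "bottom", legend.box = "vertical")  # move the legend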
We fit both fixed effect (FE) and random effects (RE) models.
First, we fit a fixed effect model using the nma()
function with trt_effects = "fixed"
. We use \(\mathrm{N}(0, 100^2)\) prior distributions for the treatment effects \(d_k\) and study-specific intercepts \(\mu_j\). We can examine the range of parameter values implied by these prior distributions with the summary()
method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
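These intervals are simply Normal quantiles, which we can verify directly:
# The reported 95% interval is the 2.5% and 97.5% quantiles of N(0, 100^2)
qnorm(c(0.025, 0.975), mean = 0, sd = 100)
#> [1] -195.9964  195.9964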
The model is fitted using the nma()
function.
arm_fit_FE <- nma(arm_net,
                  trt_effects = "fixed",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100))
#> Note: Setting "4" as the network reference treatment.
Basic parameter summaries are given by the print()
method:
arm_fit_FE
#> A fixed effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> d[1] 0.53 0.01 0.47 -0.43 0.23 0.53 0.85 1.46 1477 1
#> d[2] -1.28 0.01 0.52 -2.31 -1.62 -1.27 -0.93 -0.29 1575 1
#> d[3] 0.04 0.01 0.33 -0.62 -0.19 0.03 0.26 0.71 1745 1
#> d[5] -0.31 0.00 0.20 -0.71 -0.45 -0.31 -0.18 0.10 2338 1
#> lp__ -6.67 0.06 2.34 -12.06 -8.05 -6.38 -4.97 -3.11 1683 1
#>
#> Samples were drawn using NUTS(diag_e) at Thu Feb 24 09:03:41 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
By default, summaries of the study-specific intercepts \(\mu_j\) are hidden, but could be examined by changing the pars
argument:
# Not run
print(arm_fit_FE, pars = c("d", "mu"))
The prior and posterior distributions can be compared visually using the plot_prior_posterior()
function:
plot_prior_posterior(arm_fit_FE)
We now fit a random effects model using the nma()
function with trt_effects = "random"
. Again, we use \(\mathrm{N}(0, 100^2)\) prior distributions for the treatment effects \(d_k\) and study-specific intercepts \(\mu_j\), and we additionally use a \(\textrm{half-N}(5^2)\) prior for the heterogeneity standard deviation \(\tau\). We can examine the range of parameter values implied by these prior distributions with the summary()
method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
summary(half_normal(scale = 5))
#> A half-Normal prior distribution: location = 0, scale = 5.
#> 50% of the prior density lies between 0 and 3.37.
#> 95% of the prior density lies between 0 and 9.8.
Fitting the RE model:
arm_fit_RE <- nma(arm_net,
                  seed = 379394727,
                  trt_effects = "random",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100),
                  prior_het = half_normal(scale = 5),
                  adapt_delta = 0.99)
#> Note: Setting "4" as the network reference treatment.
#> Warning: There were 7 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
We do see a small number of divergent transition errors, which cannot simply be removed by increasing the value of the adapt_delta
argument (by default set to 0.95
for RE models). To diagnose, we use the pairs()
method to investigate where in the posterior distribution these divergences are happening (indicated by red crosses):
pairs(arm_fit_RE, pars = c("mu[4]", "d[3]", "delta[4: 3]", "tau"))
The divergent transitions occur in the upper tail of the heterogeneity standard deviation. In this case, with only a small number of studies, there is not very much information to estimate the heterogeneity standard deviation and the prior distribution may be too heavy-tailed. We could consider a more informative prior distribution for the heterogeneity variance to aid estimation.
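For example, the model could be refitted with a tighter heterogeneity prior. The half-Normal(0.5^2) scale below is only an illustrative choice, not one made in the original analysis:
# Not run
nma(arm_net,
    trt_effects = "random",
    prior_intercept = normal(scale = 100),
    prior_trt = normal(scale = 100),
    prior_het = half_normal(scale = 0.5),  # more informative heterogeneity prior
    adapt_delta = 0.99)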
Basic parameter summaries are given by the print()
method:
arm_fit_RE
#> A random effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> d[1] 0.54 0.02 0.63 -0.66 0.16 0.54 0.92 1.82 1296 1.00
#> d[2] -1.31 0.02 0.72 -2.71 -1.72 -1.31 -0.89 0.05 1060 1.00
#> d[3] 0.04 0.01 0.48 -0.88 -0.22 0.04 0.31 0.99 1586 1.00
#> d[5] -0.29 0.01 0.42 -1.10 -0.50 -0.30 -0.09 0.61 1644 1.00
#> lp__ -12.84 0.10 3.68 -20.97 -15.14 -12.53 -10.31 -6.58 1282 1.00
#> tau 0.40 0.02 0.42 0.01 0.13 0.28 0.51 1.52 454 1.01
#>
#> Samples were drawn using NUTS(diag_e) at Thu Feb 24 09:04:00 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
By default, summaries of the study-specific intercepts \(\mu_j\) and study-specific relative effects \(\delta_{jk}\) are hidden, but could be examined by changing the pars
argument:
# Not run
print(arm_fit_RE, pars = c("d", "mu", "delta"))
The prior and posterior distributions can be compared visually using the plot_prior_posterior()
function:
plot_prior_posterior(arm_fit_RE)
Model fit can be checked using the dic()
function:
(arm_dic_FE <- dic(arm_fit_FE))
#> Residual deviance: 13.3 (on 15 data points)
#> pD: 11
#> DIC: 24.3
(arm_dic_RE <- dic(arm_fit_RE))
#> Residual deviance: 13.6 (on 15 data points)
#> pD: 12.5
#> DIC: 26.1
Both models fit the data well, having posterior mean residual deviance close to the number of data points. The DIC is similar between models, so we choose the FE model based on parsimony.
We can also examine the residual deviance contributions with the corresponding plot()
method.
plot(arm_dic_FE)
plot(arm_dic_RE)
For comparison with Dias et al. (2011), we can produce relative effects against placebo using the relative_effects()
function with trt_ref = 1
:
(arm_releff_FE <- relative_effects(arm_fit_FE, trt_ref = 1))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.53 0.47 -1.46 -0.85 -0.53 -0.23 0.43 1479 2017 1
#> d[2] -1.81 0.33 -2.48 -2.03 -1.82 -1.59 -1.16 6446 2784 1
#> d[3] -0.49 0.49 -1.44 -0.82 -0.49 -0.17 0.47 2595 2729 1
#> d[5] -0.84 0.52 -1.86 -1.18 -0.84 -0.51 0.17 1688 2499 1
plot(arm_releff_FE, ref_line = 0)
(arm_releff_RE <- relative_effects(arm_fit_RE, trt_ref = 1))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.54 0.63 -1.82 -0.92 -0.54 -0.16 0.66 1430 1453 1
#> d[2] -1.85 0.56 -2.97 -2.14 -1.84 -1.55 -0.84 2736 1775 1
#> d[3] -0.50 0.66 -1.78 -0.88 -0.49 -0.12 0.80 2068 1386 1
#> d[5] -0.83 0.75 -2.33 -1.29 -0.82 -0.39 0.65 1525 1504 1
plot(arm_releff_RE, ref_line = 0)
Following Dias et al. (2011), we produce absolute predictions of the mean off-time reduction on each treatment assuming a Normal distribution for the outcomes on treatment 1 (placebo) with mean \(-0.73\) and precision \(21\). We use the predict()
method, where the baseline
argument takes a distr()
distribution object with which we specify the corresponding Normal distribution, and we specify trt_ref = 1
to indicate that the baseline distribution corresponds to treatment 1. (Strictly speaking, type = "response"
is unnecessary here, since the identity link function was used.)
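Note that distr(qnorm, ...) is parameterised by the standard deviation, so the stated precision of 21 is converted as sd = 1/sqrt(precision), which is what sd = 21^-0.5 supplies:
# Standard deviation corresponding to a precision of 21
1 / sqrt(21)
#> [1] 0.2182179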
arm_pred_FE <- predict(arm_fit_FE,
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       trt_ref = 1)
arm_pred_FE
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.26 0.52 -2.29 -1.60 -1.27 -0.92 -0.22 1668 2279 1
#> pred[1] -0.73 0.22 -1.16 -0.88 -0.73 -0.58 -0.29 3856 3973 1
#> pred[2] -2.54 0.40 -3.32 -2.81 -2.54 -2.27 -1.74 5022 3305 1
#> pred[3] -1.22 0.54 -2.28 -1.58 -1.23 -0.86 -0.15 2955 2728 1
#> pred[5] -1.57 0.56 -2.68 -1.94 -1.58 -1.20 -0.47 1823 2733 1
plot(arm_pred_FE)
arm_pred_RE <- predict(arm_fit_RE,
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       trt_ref = 1)
arm_pred_RE
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.27 0.67 -2.67 -1.67 -1.27 -0.86 0.03 1564 1580 1
#> pred[1] -0.73 0.22 -1.16 -0.87 -0.73 -0.58 -0.30 4018 3891 1
#> pred[2] -2.58 0.61 -3.79 -2.91 -2.56 -2.23 -1.49 3012 1890 1
#> pred[3] -1.23 0.70 -2.61 -1.63 -1.22 -0.81 0.14 2249 1601 1
#> pred[5] -1.56 0.78 -3.13 -2.02 -1.54 -1.09 -0.02 1608 1641 1
plot(arm_pred_RE)
If the baseline
argument is omitted, predictions of mean off-time reduction will be produced for every study in the network based on their estimated baseline response \(\mu_j\):
arm_pred_FE_studies <- predict(arm_fit_FE, type = "response")
arm_pred_FE_studies
#> ---------------------------------------------------------------------- Study: 1 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[1: 4] -1.64 0.46 -2.55 -1.96 -1.64 -1.32 -0.73 1870 2626 1
#> pred[1: 1] -1.11 0.42 -1.94 -1.40 -1.10 -0.83 -0.31 4354 3568 1
#> pred[1: 2] -2.92 0.51 -3.94 -3.26 -2.92 -2.57 -1.96 3960 3322 1
#> pred[1: 3] -1.60 0.39 -2.36 -1.88 -1.61 -1.33 -0.83 3820 3319 1
#> pred[1: 5] -1.95 0.51 -2.93 -2.30 -1.95 -1.60 -0.98 1986 2855 1
#>
#> ---------------------------------------------------------------------- Study: 2 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[2: 4] -1.17 0.50 -2.14 -1.50 -1.18 -0.84 -0.16 1576 2170 1
#> pred[2: 1] -0.64 0.26 -1.15 -0.81 -0.64 -0.46 -0.12 5358 3495 1
#> pred[2: 2] -2.45 0.25 -2.93 -2.62 -2.46 -2.28 -1.97 4630 3216 1
#> pred[2: 3] -1.13 0.52 -2.13 -1.48 -1.13 -0.78 -0.11 2481 2867 1
#> pred[2: 5] -1.48 0.54 -2.55 -1.84 -1.49 -1.11 -0.41 1745 2142 1
#>
#> ---------------------------------------------------------------------- Study: 3 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[3: 4] -1.12 0.42 -1.94 -1.40 -1.11 -0.83 -0.31 1835 2394 1
#> pred[3: 1] -0.58 0.36 -1.31 -0.82 -0.59 -0.34 0.12 3894 2832 1
#> pred[3: 2] -2.40 0.38 -3.15 -2.65 -2.40 -2.14 -1.65 4744 3014 1
#> pred[3: 3] -1.08 0.47 -2.01 -1.40 -1.09 -0.77 -0.16 2999 2698 1
#> pred[3: 5] -1.43 0.46 -2.35 -1.74 -1.41 -1.11 -0.54 2203 2251 1
#>
#> ---------------------------------------------------------------------- Study: 4 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4: 4] -0.39 0.30 -0.97 -0.59 -0.39 -0.18 0.19 2031 2687 1
#> pred[4: 1] 0.14 0.49 -0.83 -0.19 0.16 0.47 1.11 2347 2418 1
#> pred[4: 2] -1.67 0.54 -2.76 -2.03 -1.67 -1.29 -0.64 2351 2469 1
#> pred[4: 3] -0.35 0.25 -0.83 -0.52 -0.35 -0.17 0.13 4937 3151 1
#> pred[4: 5] -0.70 0.36 -1.41 -0.94 -0.71 -0.45 0.02 2310 3040 1
#>
#> ---------------------------------------------------------------------- Study: 5 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[5: 4] -0.56 0.34 -1.22 -0.79 -0.55 -0.32 0.10 2590 2916 1
#> pred[5: 1] -0.02 0.53 -1.06 -0.38 -0.01 0.33 1.01 2437 2656 1
#> pred[5: 2] -1.84 0.57 -2.97 -2.22 -1.83 -1.45 -0.74 2409 2731 1
#> pred[5: 3] -0.52 0.30 -1.11 -0.72 -0.52 -0.32 0.05 5880 3418 1
#> pred[5: 5] -0.87 0.40 -1.65 -1.14 -0.86 -0.59 -0.08 2493 2906 1
#>
#> ---------------------------------------------------------------------- Study: 6 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[6: 4] -2.19 0.17 -2.53 -2.30 -2.19 -2.08 -1.86 2970 2637 1
#> pred[6: 1] -1.66 0.50 -2.65 -1.99 -1.66 -1.32 -0.66 1602 2385 1
#> pred[6: 2] -3.48 0.54 -4.55 -3.84 -3.46 -3.11 -2.44 1768 2686 1
#> pred[6: 3] -2.16 0.37 -2.88 -2.42 -2.15 -1.91 -1.42 2151 2758 1
#> pred[6: 5] -2.50 0.17 -2.83 -2.61 -2.51 -2.39 -2.17 3865 3011 1
#>
#> ---------------------------------------------------------------------- Study: 7 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[7: 4] -1.79 0.17 -2.13 -1.91 -1.80 -1.68 -1.45 3303 3093 1
#> pred[7: 1] -1.26 0.51 -2.27 -1.59 -1.26 -0.92 -0.27 1677 2204 1
#> pred[7: 2] -3.08 0.55 -4.18 -3.44 -3.06 -2.70 -2.03 1868 2685 1
#> pred[7: 3] -1.76 0.38 -2.48 -2.02 -1.75 -1.50 -1.02 2057 2802 1
#> pred[7: 5] -2.10 0.20 -2.49 -2.24 -2.11 -1.97 -1.71 4567 2997 1
plot(arm_pred_FE_studies)
We can also produce treatment rankings, rank probabilities, and cumulative rank probabilities.
(arm_ranks <- posterior_ranks(arm_fit_FE))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> rank[4] 3.52 0.69 2 3 3 4 5 1779 NA 1
#> rank[1] 4.66 0.76 2 5 5 5 5 2351 NA 1
#> rank[2] 1.05 0.27 1 1 1 1 2 2409 2415 1
#> rank[3] 3.50 0.92 2 3 4 4 5 2782 NA 1
#> rank[5] 2.27 0.66 1 2 2 2 4 2186 2414 1
plot(arm_ranks)
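These rankings treat lower values as better, which is appropriate here since a greater (more negative) off-time reduction is the preferred outcome. The direction could be stated explicitly; a sketch, assuming the lower_better argument of posterior_ranks():
# Not run
posterior_ranks(arm_fit_FE, lower_better = TRUE)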
(arm_rankprobs <- posterior_rank_probs(arm_fit_FE))
#> p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4] 0.00 0.03 0.49 0.39 0.08
#> d[1] 0.00 0.04 0.06 0.11 0.79
#> d[2] 0.96 0.03 0.01 0.00 0.00
#> d[3] 0.00 0.17 0.27 0.44 0.12
#> d[5] 0.04 0.72 0.17 0.06 0.01
plot(arm_rankprobs)
(arm_cumrankprobs <- posterior_rank_probs(arm_fit_FE, cumulative = TRUE))
#> p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4] 0.00 0.03 0.53 0.92 1
#> d[1] 0.00 0.04 0.10 0.21 1
#> d[2] 0.96 0.99 1.00 1.00 1
#> d[3] 0.00 0.17 0.44 0.88 1
#> d[5] 0.04 0.76 0.93 0.99 1
plot(arm_cumrankprobs)
We now perform an analysis using the contrast-based data (mean differences and standard errors).
With contrast-level data giving the mean difference in off-time reduction (diff
) and standard error (se_diff
), we use the function set_agd_contrast()
to set up the network.
contr_net <- set_agd_contrast(parkinsons,
                              study = studyn,
                              trt = trtn,
                              y = diff,
                              se = se_diff,
                              sample_size = n)
contr_net
#> A network with 7 AgD studies (contrast-based).
#>
#> -------------------------------------------------- AgD studies (contrast-based) ----
#> Study Treatment arms
#> 1 2: 1 | 3
#> 2 2: 1 | 2
#> 3 3: 4 | 1 | 2
#> 4 2: 4 | 3
#> 5 2: 4 | 3
#> 6 2: 4 | 5
#> 7 2: 4 | 5
#>
#> Outcome type: continuous
#> ------------------------------------------------------------------------------------
#> Total number of treatments: 5
#> Total number of studies: 7
#> Reference treatment is: 4
#> Network is connected
The sample_size
argument is optional, but enables the nodes to be weighted by sample size in the network plot.
Plot the network structure.
plot(contr_net, weight_edges = TRUE, weight_nodes = TRUE)
We fit both fixed effect (FE) and random effects (RE) models.
First, we fit a fixed effect model using the nma()
function with trt_effects = "fixed"
. We use \(\mathrm{N}(0, 100^2)\) prior distributions for the treatment effects \(d_k\). We can examine the range of parameter values implied by these prior distributions with the summary()
method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
The model is fitted using the nma()
function.
contr_fit_FE <- nma(contr_net,
                    trt_effects = "fixed",
                    prior_trt = normal(scale = 100))
#> Note: Setting "4" as the network reference treatment.
Basic parameter summaries are given by the print()
method:
contr_fit_FE
#> A fixed effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> d[1] 0.52 0.01 0.48 -0.40 0.19 0.51 0.84 1.51 2000 1
#> d[2] -1.29 0.01 0.53 -2.32 -1.64 -1.29 -0.93 -0.25 1974 1
#> d[3] 0.05 0.01 0.32 -0.57 -0.16 0.04 0.26 0.68 3086 1
#> d[5] -0.30 0.00 0.21 -0.71 -0.45 -0.30 -0.16 0.13 3346 1
#> lp__ -3.16 0.03 1.45 -6.77 -3.89 -2.84 -2.09 -1.37 1727 1
#>
#> Samples were drawn using NUTS(diag_e) at Thu Feb 24 09:04:25 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
The prior and posterior distributions can be compared visually using the plot_prior_posterior()
function:
plot_prior_posterior(contr_fit_FE)
We now fit a random effects model using the nma()
function with trt_effects = "random"
. Again, we use \(\mathrm{N}(0, 100^2)\) prior distributions for the treatment effects \(d_k\), and we additionally use a \(\textrm{half-N}(5^2)\) prior for the heterogeneity standard deviation \(\tau\). We can examine the range of parameter values implied by these prior distributions with the summary()
method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
summary(half_normal(scale = 5))
#> A half-Normal prior distribution: location = 0, scale = 5.
#> 50% of the prior density lies between 0 and 3.37.
#> 95% of the prior density lies between 0 and 9.8.
Fitting the RE model:
contr_fit_RE <- nma(contr_net,
                    seed = 1150676438,
                    trt_effects = "random",
                    prior_trt = normal(scale = 100),
                    prior_het = half_normal(scale = 5),
                    adapt_delta = 0.99)
#> Note: Setting "4" as the network reference treatment.
#> Warning: There were 1 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
We do see a small number of divergent transition errors, which cannot simply be removed by increasing the value of the adapt_delta
argument (by default set to 0.95
for RE models). To diagnose, we use the pairs()
method to investigate where in the posterior distribution these divergences are happening (indicated by red crosses):
pairs(contr_fit_RE, pars = c("d[3]", "delta[4: 4 vs. 3]", "tau"))
The divergent transitions occur in the upper tail of the heterogeneity standard deviation. In this case, with only a small number of studies, there is not very much information to estimate the heterogeneity standard deviation and the prior distribution may be too heavy-tailed. We could consider a more informative prior distribution for the heterogeneity variance to aid estimation.
Basic parameter summaries are given by the print()
method:
contr_fit_RE
#> A random effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> d[1] 0.52 0.01 0.62 -0.70 0.15 0.52 0.89 1.68 2013 1.00
#> d[2] -1.33 0.02 0.69 -2.67 -1.73 -1.33 -0.91 -0.05 2089 1.00
#> d[3] 0.05 0.01 0.48 -0.81 -0.22 0.04 0.31 0.97 1571 1.00
#> d[5] -0.32 0.01 0.44 -1.24 -0.52 -0.32 -0.11 0.50 1255 1.00
#> lp__ -8.25 0.07 2.82 -14.57 -10.01 -7.96 -6.22 -3.47 1448 1.00
#> tau 0.39 0.02 0.42 0.01 0.12 0.28 0.51 1.45 612 1.01
#>
#> Samples were drawn using NUTS(diag_e) at Thu Feb 24 09:04:41 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
By default, summaries of the study-specific relative effects \(\delta_{jk}\) are hidden, but could be examined by changing the pars
argument:
# Not run
print(contr_fit_RE, pars = c("d", "delta"))
The prior and posterior distributions can be compared visually using the plot_prior_posterior()
function:
plot_prior_posterior(contr_fit_RE)
Model fit can be checked using the dic()
function:
(contr_dic_FE <- dic(contr_fit_FE))
#> Residual deviance: 6.3 (on 8 data points)
#> pD: 4
#> DIC: 10.4
(contr_dic_RE <- dic(contr_fit_RE))
#> Residual deviance: 6.5 (on 8 data points)
#> pD: 5.3
#> DIC: 11.9
Both models fit the data well, having posterior mean residual deviance close to the number of data points. The DIC is similar between models, so we choose the FE model based on parsimony.
We can also examine the residual deviance contributions with the corresponding plot()
method.
plot(contr_dic_FE)
plot(contr_dic_RE)
For comparison with Dias et al. (2011), we can produce relative effects against placebo using the relative_effects()
function with trt_ref = 1
:
(contr_releff_FE <- relative_effects(contr_fit_FE, trt_ref = 1))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.52 0.48 -1.51 -0.84 -0.51 -0.19 0.40 2019 2062 1
#> d[2] -1.81 0.33 -2.48 -2.03 -1.81 -1.59 -1.16 4870 3094 1
#> d[3] -0.47 0.49 -1.44 -0.80 -0.47 -0.15 0.48 2970 2794 1
#> d[5] -0.82 0.52 -1.91 -1.17 -0.81 -0.46 0.18 2245 2449 1
plot(contr_releff_FE, ref_line = 0)
(contr_releff_RE <- relative_effects(contr_fit_RE, trt_ref = 1))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.52 0.62 -1.68 -0.89 -0.52 -0.15 0.70 2151 1332 1
#> d[2] -1.85 0.50 -2.86 -2.14 -1.83 -1.56 -0.88 3197 2440 1
#> d[3] -0.46 0.62 -1.61 -0.85 -0.47 -0.08 0.74 3504 2704 1
#> d[5] -0.84 0.76 -2.33 -1.26 -0.82 -0.40 0.53 1902 1223 1
plot(contr_releff_RE, ref_line = 0)
Following Dias et al. (2011), we produce absolute predictions of the mean off-time reduction on each treatment assuming a Normal distribution for the outcomes on treatment 1 (placebo) with mean \(-0.73\) and precision \(21\). We use the predict()
method, where the baseline
argument takes a distr()
distribution object with which we specify the corresponding Normal distribution, and we specify trt_ref = 1
to indicate that the baseline distribution corresponds to treatment 1. (Strictly speaking, type = "response"
is unnecessary here, since the identity link function was used.)
contr_pred_FE <- predict(contr_fit_FE,
                         baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                         type = "response",
                         trt_ref = 1)
contr_pred_FE
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.25 0.53 -2.31 -1.61 -1.24 -0.89 -0.25 2213 2703 1
#> pred[1] -0.73 0.22 -1.16 -0.88 -0.73 -0.58 -0.30 3967 3603 1
#> pred[2] -2.54 0.40 -3.34 -2.81 -2.55 -2.27 -1.76 4663 3450 1
#> pred[3] -1.20 0.54 -2.25 -1.56 -1.19 -0.83 -0.18 2995 2847 1
#> pred[5] -1.55 0.57 -2.71 -1.94 -1.54 -1.15 -0.48 2392 2725 1
plot(contr_pred_FE)
contr_pred_RE <- predict(contr_fit_RE,
                         baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                         type = "response",
                         trt_ref = 1)
contr_pred_RE
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.25 0.66 -2.50 -1.64 -1.25 -0.85 0.00 2306 1508 1
#> pred[1] -0.73 0.22 -1.16 -0.88 -0.73 -0.59 -0.30 3676 3583 1
#> pred[2] -2.58 0.54 -3.66 -2.90 -2.58 -2.25 -1.53 3317 2464 1
#> pred[3] -1.20 0.66 -2.43 -1.62 -1.20 -0.79 0.08 3507 2857 1
#> pred[5] -1.57 0.79 -3.11 -2.01 -1.54 -1.11 -0.16 1997 1459 1
plot(contr_pred_RE)
If the baseline
argument is omitted, an error will be raised, as there are no study baselines estimated in this network.
# Not run
predict(contr_fit_FE, type = "response")
We can also produce treatment rankings, rank probabilities, and cumulative rank probabilities.
(contr_ranks <- posterior_ranks(contr_fit_FE))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> rank[4] 3.49 0.72 2 3 3 4 5 2350 NA 1
#> rank[1] 4.65 0.75 2 5 5 5 5 2543 NA 1
#> rank[2] 1.06 0.30 1 1 1 1 2 2273 2301 1
#> rank[3] 3.53 0.91 2 3 4 4 5 3744 NA 1
#> rank[5] 2.26 0.66 1 2 2 2 4 2854 2912 1
plot(contr_ranks)
(contr_rankprobs <- posterior_rank_probs(contr_fit_FE))
#> p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4] 0.00 0.05 0.49 0.37 0.08
#> d[1] 0.00 0.04 0.06 0.12 0.79
#> d[2] 0.95 0.04 0.01 0.00 0.00
#> d[3] 0.00 0.16 0.26 0.45 0.12
#> d[5] 0.04 0.72 0.18 0.05 0.01
plot(contr_rankprobs)
(contr_cumrankprobs <- posterior_rank_probs(contr_fit_FE, cumulative = TRUE))
#> p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4] 0.00 0.05 0.54 0.92 1
#> d[1] 0.00 0.04 0.09 0.21 1
#> d[2] 0.95 0.99 1.00 1.00 1
#> d[3] 0.00 0.16 0.43 0.88 1
#> d[5] 0.04 0.76 0.94 0.99 1
plot(contr_cumrankprobs)
We now perform an analysis where some studies contribute arm-based data, and others contribute contrast-based data. Replicating Dias et al. (2011), we consider arm-based data from studies 1-3, and contrast-based data from studies 4-7.
studies <- parkinsons$studyn
(parkinsons_arm <- parkinsons[studies %in% 1:3, ])
#> studyn trtn y se n diff se_diff
#> 1 1 1 -1.22 0.504 54 NA 0.504
#> 2 1 3 -1.53 0.439 95 -0.31 0.668
#> 3 2 1 -0.70 0.282 172 NA 0.282
#> 4 2 2 -2.40 0.258 173 -1.70 0.382
#> 5 3 1 -0.30 0.505 76 NA 0.505
#> 6 3 2 -2.60 0.510 71 -2.30 0.718
#> 7 3 4 -1.20 0.478 81 -0.90 0.695
(parkinsons_contr <- parkinsons[studies %in% 4:7, ])
#> studyn trtn y se n diff se_diff
#> 8 4 3 -0.24 0.265 128 NA 0.265
#> 9 4 4 -0.59 0.354 72 -0.35 0.442
#> 10 5 3 -0.73 0.335 80 NA 0.335
#> 11 5 4 -0.18 0.442 46 0.55 0.555
#> 12 6 4 -2.20 0.197 137 NA 0.197
#> 13 6 5 -2.50 0.190 131 -0.30 0.274
#> 14 7 4 -1.80 0.200 154 NA 0.200
#> 15 7 5 -2.10 0.250 143 -0.30 0.320
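The same split could equivalently be done with dplyr. A small sketch, assuming the dplyr package is installed:
# Not run
library(dplyr)
parkinsons_arm   <- filter(parkinsons, studyn %in% 1:3)  # arm-based studies
parkinsons_contr <- filter(parkinsons, studyn %in% 4:7)  # contrast-based studies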
We use the functions set_agd_arm()
and set_agd_contrast()
to set up the respective data sources within the network, and then combine together with combine_network()
.
mix_arm_net <- set_agd_arm(parkinsons_arm,
                           study = studyn,
                           trt = trtn,
                           y = y,
                           se = se,
                           sample_size = n)
mix_contr_net <- set_agd_contrast(parkinsons_contr,
                                  study = studyn,
                                  trt = trtn,
                                  y = diff,
                                  se = se_diff,
                                  sample_size = n)
mix_net <- combine_network(mix_arm_net, mix_contr_net)
mix_net
#> A network with 3 AgD studies (arm-based), and 4 AgD studies (contrast-based).
#>
#> ------------------------------------------------------- AgD studies (arm-based) ----
#> Study Treatment arms
#> 1 2: 1 | 3
#> 2 2: 1 | 2
#> 3 3: 4 | 1 | 2
#>
#> Outcome type: continuous
#> -------------------------------------------------- AgD studies (contrast-based) ----
#> Study Treatment arms
#> 4 2: 4 | 3
#> 5 2: 4 | 3
#> 6 2: 4 | 5
#> 7 2: 4 | 5
#>
#> Outcome type: continuous
#> ------------------------------------------------------------------------------------
#> Total number of treatments: 5
#> Total number of studies: 7
#> Reference treatment is: 4
#> Network is connected
The sample_size
argument is optional, but enables the nodes to be weighted by sample size in the network plot.
Plot the network structure.
plot(mix_net, weight_edges = TRUE, weight_nodes = TRUE)
We fit both fixed effect (FE) and random effects (RE) models.
First, we fit a fixed effect model using the nma()
function with trt_effects = "fixed"
. We use \(\mathrm{N}(0, 100^2)\) prior distributions for the treatment effects \(d_k\) and study-specific intercepts \(\mu_j\). We can examine the range of parameter values implied by these prior distributions with the summary()
method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
The model is fitted using the nma()
function.
mix_fit_FE <- nma(mix_net,
                  trt_effects = "fixed",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100))
#> Note: Setting "4" as the network reference treatment.
Basic parameter summaries are given by the print()
method:
mix_fit_FE
#> A fixed effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> d[1] 0.55 0.01 0.49 -0.43 0.22 0.56 0.88 1.48 1323 1
#> d[2] -1.26 0.01 0.53 -2.35 -1.61 -1.25 -0.90 -0.21 1404 1
#> d[3] 0.06 0.01 0.32 -0.57 -0.15 0.06 0.28 0.69 2800 1
#> d[5] -0.30 0.00 0.21 -0.70 -0.44 -0.30 -0.16 0.12 3363 1
#> lp__ -4.65 0.04 1.85 -9.24 -5.72 -4.31 -3.28 -2.01 1729 1
#>
#> Samples were drawn using NUTS(diag_e) at Thu Feb 24 09:05:01 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
By default, summaries of the study-specific intercepts \(\mu_j\) are hidden, but could be examined by changing the pars
argument:
# Not run
print(mix_fit_FE, pars = c("d", "mu"))
The prior and posterior distributions can be compared visually using the plot_prior_posterior()
function:
plot_prior_posterior(mix_fit_FE)
We now fit a random effects model using the nma()
function with trt_effects = "random"
. Again, we use \(\mathrm{N}(0, 100^2)\) prior distributions for the treatment effects \(d_k\) and study-specific intercepts \(\mu_j\), and we additionally use a \(\textrm{half-N}(5^2)\) prior for the heterogeneity standard deviation \(\tau\). We can examine the range of parameter values implied by these prior distributions with the summary()
method:
summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
summary(half_normal(scale = 5))
#> A half-Normal prior distribution: location = 0, scale = 5.
#> 50% of the prior density lies between 0 and 3.37.
#> 95% of the prior density lies between 0 and 9.8.
Fitting the RE model:
mix_fit_RE <- nma(mix_net,
                  seed = 437219664,
                  trt_effects = "random",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100),
                  prior_het = half_normal(scale = 5),
                  adapt_delta = 0.99)
#> Note: Setting "4" as the network reference treatment.
#> Warning: There were 4 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
We do see a small number of divergent transition errors, which cannot simply be removed by increasing the value of the adapt_delta
argument (by default set to 0.95
for RE models). To diagnose, we use the pairs()
method to investigate where in the posterior distribution these divergences are happening (indicated by red crosses):
pairs(mix_fit_RE, pars = c("d[3]", "delta[4: 4 vs. 3]", "tau"))
The divergent transitions occur in the upper tail of the heterogeneity standard deviation. In this case, with only a small number of studies, there is not very much information to estimate the heterogeneity standard deviation and the prior distribution may be too heavy-tailed. We could consider a more informative prior distribution for the heterogeneity variance to aid estimation.
Basic parameter summaries are given by the print()
method:
mix_fit_RE
#> A random effects NMA with a normal likelihood (identity link).
#> Inference for Stan model: normal.
#> 4 chains, each with iter=2000; warmup=1000; thin=1;
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#>
#> mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
#> d[1] 0.52 0.01 0.60 -0.64 0.16 0.53 0.91 1.65 1811 1
#> d[2] -1.32 0.02 0.68 -2.65 -1.72 -1.31 -0.90 -0.03 1814 1
#> d[3] 0.04 0.01 0.45 -0.86 -0.23 0.04 0.30 0.97 2400 1
#> d[5] -0.30 0.01 0.40 -1.10 -0.51 -0.30 -0.09 0.52 2419 1
#> lp__ -10.75 0.09 3.22 -17.80 -12.78 -10.49 -8.52 -5.20 1200 1
#> tau 0.38 0.01 0.36 0.01 0.13 0.28 0.50 1.37 726 1
#>
#> Samples were drawn using NUTS(diag_e) at Thu Feb 24 09:05:19 2022.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at
#> convergence, Rhat=1).
By default, summaries of the study-specific intercepts \(\mu_j\) and study-specific relative effects \(\delta_{jk}\) are hidden, but could be examined by changing the pars
argument:
# Not run
print(mix_fit_RE, pars = c("d", "mu", "delta"))
The prior and posterior distributions can be compared visually using the plot_prior_posterior()
function:
plot_prior_posterior(mix_fit_RE)
Model fit can be checked using the dic()
function:
(mix_dic_FE <- dic(mix_fit_FE))
#> Residual deviance: 9.3 (on 11 data points)
#> pD: 7
#> DIC: 16.3
(mix_dic_RE <- dic(mix_fit_RE))
#> Residual deviance: 9.7 (on 11 data points)
#> pD: 8.5
#> DIC: 18.1
Both models fit the data well, having posterior mean residual deviance close to the number of data points. The DIC is similar between models, so we choose the FE model based on parsimony.
We can also examine the residual deviance contributions with the corresponding plot()
method.
plot(mix_dic_FE)
plot(mix_dic_RE)
For comparison with Dias et al. (2011), we can produce relative effects against placebo using the relative_effects()
function with trt_ref = 1
:
(mix_releff_FE <- relative_effects(mix_fit_FE, trt_ref = 1))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.55 0.49 -1.48 -0.88 -0.56 -0.22 0.43 1330 1757 1
#> d[2] -1.81 0.33 -2.46 -2.03 -1.81 -1.59 -1.18 6047 3261 1
#> d[3] -0.49 0.49 -1.42 -0.83 -0.50 -0.16 0.50 1927 2606 1
#> d[5] -0.85 0.53 -1.87 -1.20 -0.85 -0.49 0.20 1448 2051 1
plot(mix_releff_FE, ref_line = 0)
(mix_releff_RE <- relative_effects(mix_fit_RE, trt_ref = 1))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d[4] -0.52 0.60 -1.65 -0.91 -0.53 -0.16 0.64 1975 2138 1
#> d[2] -1.85 0.50 -2.90 -2.13 -1.83 -1.55 -0.91 3954 2494 1
#> d[3] -0.48 0.61 -1.64 -0.88 -0.49 -0.10 0.77 2988 2757 1
#> d[5] -0.82 0.71 -2.20 -1.26 -0.82 -0.38 0.58 2131 2199 1
plot(mix_releff_RE, ref_line = 0)
Following Dias et al. (2011), we produce absolute predictions of the mean off-time reduction on each treatment assuming a Normal distribution for the outcomes on treatment 1 (placebo) with mean \(-0.73\) and precision \(21\). We use the predict()
method, where the baseline
argument takes a distr()
distribution object with which we specify the corresponding Normal distribution, and we specify trt_ref = 1
to indicate that the baseline distribution corresponds to treatment 1. (Strictly speaking, type = "response"
is unnecessary here, since the identity link function was used.)
mix_pred_FE <- predict(mix_fit_FE,
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       trt_ref = 1)
mix_pred_FE
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.27 0.54 -2.30 -1.63 -1.28 -0.90 -0.21 1480 2095 1
#> pred[1] -0.72 0.22 -1.14 -0.87 -0.72 -0.58 -0.30 3607 3552 1
#> pred[2] -2.53 0.40 -3.32 -2.80 -2.54 -2.25 -1.77 5241 3645 1
#> pred[3] -1.21 0.54 -2.22 -1.58 -1.23 -0.85 -0.11 2114 2686 1
#> pred[5] -1.57 0.57 -2.67 -1.95 -1.58 -1.19 -0.44 1579 2131 1
plot(mix_pred_FE)
mix_pred_RE <- predict(mix_fit_RE,
                       baseline = distr(qnorm, mean = -0.73, sd = 21^-0.5),
                       type = "response",
                       trt_ref = 1)
mix_pred_RE
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[4] -1.25 0.64 -2.47 -1.66 -1.26 -0.86 0.00 2083 2105 1
#> pred[1] -0.73 0.22 -1.17 -0.88 -0.73 -0.58 -0.31 3923 3771 1
#> pred[2] -2.57 0.55 -3.68 -2.90 -2.56 -2.23 -1.51 3957 2682 1
#> pred[3] -1.21 0.65 -2.47 -1.64 -1.23 -0.80 0.07 3036 2852 1
#> pred[5] -1.55 0.74 -2.99 -2.00 -1.56 -1.09 -0.09 2225 2308 1
plot(mix_pred_RE)
If the baseline
argument is omitted, predictions of mean off-time reduction will be produced for every arm-based study in the network based on their estimated baseline response \(\mu_j\):
mix_pred_FE_studies <- predict(mix_fit_FE, type = "response")
mix_pred_FE_studies
#> ---------------------------------------------------------------------- Study: 1 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[1: 4] -1.67 0.46 -2.55 -1.98 -1.67 -1.36 -0.76 1974 2508 1
#> pred[1: 1] -1.12 0.44 -1.96 -1.42 -1.13 -0.83 -0.27 3172 2930 1
#> pred[1: 2] -2.93 0.52 -3.93 -3.30 -2.92 -2.57 -1.93 3386 3056 1
#> pred[1: 3] -1.61 0.39 -2.37 -1.88 -1.61 -1.34 -0.82 3087 3093 1
#> pred[1: 5] -1.97 0.50 -2.94 -2.31 -1.96 -1.63 -0.97 2123 2785 1
#>
#> ---------------------------------------------------------------------- Study: 2 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[2: 4] -1.19 0.53 -2.23 -1.53 -1.20 -0.85 -0.10 1293 1660 1
#> pred[2: 1] -0.64 0.26 -1.15 -0.81 -0.64 -0.46 -0.13 5058 3712 1
#> pred[2: 2] -2.45 0.24 -2.94 -2.61 -2.45 -2.28 -1.97 4582 3189 1
#> pred[2: 3] -1.13 0.53 -2.17 -1.47 -1.14 -0.77 -0.05 1832 2263 1
#> pred[2: 5] -1.48 0.56 -2.58 -1.86 -1.49 -1.12 -0.37 1385 1786 1
#>
#> ---------------------------------------------------------------------- Study: 3 ----
#>
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> pred[3: 4] -1.12 0.43 -1.95 -1.42 -1.13 -0.84 -0.27 1653 2305 1
#> pred[3: 1] -0.58 0.36 -1.28 -0.82 -0.57 -0.32 0.13 3972 3130 1
#> pred[3: 2] -2.39 0.37 -3.13 -2.63 -2.39 -2.13 -1.66 3584 3164 1
#> pred[3: 3] -1.06 0.47 -1.98 -1.38 -1.07 -0.75 -0.11 2628 2966 1
#> pred[3: 5] -1.42 0.47 -2.33 -1.74 -1.43 -1.10 -0.52 1817 2406 1
plot(mix_pred_FE_studies)
We can also produce treatment rankings, rank probabilities, and cumulative rank probabilities.
(mix_ranks <- posterior_ranks(mix_fit_FE))
#> mean sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> rank[4] 3.48 0.71 2 3 3 4 5 2247 NA 1
#> rank[1] 4.65 0.76 2 5 5 5 5 1785 NA 1
#> rank[2] 1.06 0.29 1 1 1 1 2 2193 2226 1
#> rank[3] 3.55 0.90 2 3 4 4 5 3618 NA 1
#> rank[5] 2.26 0.65 1 2 2 2 4 2233 2716 1
plot(mix_ranks)
(mix_rankprobs <- posterior_rank_probs(mix_fit_FE))
#> p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4] 0.00 0.05 0.50 0.37 0.08
#> d[1] 0.00 0.04 0.06 0.11 0.79
#> d[2] 0.95 0.04 0.01 0.00 0.00
#> d[3] 0.00 0.15 0.26 0.47 0.12
#> d[5] 0.04 0.73 0.17 0.05 0.01
plot(mix_rankprobs)
(mix_cumrankprobs <- posterior_rank_probs(mix_fit_FE, cumulative = TRUE))
#> p_rank[1] p_rank[2] p_rank[3] p_rank[4] p_rank[5]
#> d[4] 0.00 0.05 0.55 0.92 1
#> d[1] 0.00 0.04 0.10 0.21 1
#> d[2] 0.95 0.99 1.00 1.00 1
#> d[3] 0.00 0.15 0.41 0.88 1
#> d[5] 0.04 0.77 0.94 0.99 1
plot(mix_cumrankprobs)