The BayesFactor package has a number of prior settings that should produce consistent Bayes factors across its analysis functions. In this document, the Bayes factors are checked for consistency.
The independent samples \(t\) test and ANOVA functions should provide the same answers with the default prior settings.
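The chunks below assume the BayesFactor package is attached; the seed call is an addition for reproducibility and is not the seed behind the outputs shown here.
# Setup (a minimal sketch): attach BayesFactor and fix the RNG seed
# The seed value is arbitrary; the outputs shown below came from a different random draw
library(BayesFactor)
set.seed(0)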
# Create data
x <- rnorm(20)
x[1:10] = x[1:10] + .2
grp = factor(rep(1:2,each=10))
dat = data.frame(x=x,grp=grp)
t.test(x ~ grp, data=dat)
##
## Welch Two Sample t-test
##
## data: x by grp
## t = 0.5, df = 17, p-value = 0.6
## alternative hypothesis: true difference in means between group 1 and group 2 is not equal to 0
## 95 percent confidence interval:
## -0.793 1.255
## sample estimates:
## mean in group 1 mean in group 2
## 0.411 0.180
If the prior settings are consistent, then all three of these numbers should be the same.
as.vector(ttestBF(formula = x ~ grp, data=dat))
## Alt., r=0.707
## 0.431
as.vector(anovaBF(x~grp, data=dat))
## grp
## 0.431
as.vector(generalTestBF(x~grp, data=dat))
## grp
## 0.431
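As an extra sanity check (a sketch reusing the data above; the object names are my own), the three values can be compared directly:
# Sketch: collect the three Bayes factors and confirm they agree numerically
bfT       <- as.vector(ttestBF(formula = x ~ grp, data = dat))
bfAnova   <- as.vector(anovaBF(x ~ grp, data = dat))
bfGeneral <- as.vector(generalTestBF(x ~ grp, data = dat))
# tolerance loosened slightly in case the functions use different numerical routines
all.equal(unname(bfT), unname(bfAnova), tolerance = 1e-3)
all.equal(unname(bfT), unname(bfGeneral), tolerance = 1e-3)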
In a paired design with an additive random factor and a fixed effect with two levels, the Bayes factors should be the same regardless of whether we treat the fixed factor as a factor or as a dummy-coded covariate.
# create some data
id = rnorm(10)
eff = c(-1,1)*1
effCross = outer(id,eff,'+')+rnorm(length(id)*2)
dat = data.frame(x=as.vector(effCross),id=factor(1:10), grp=factor(rep(1:2,each=length(id))))
dat$forReg = as.numeric(dat$grp)-1.5
idOnly = lmBF(x~id, data=dat, whichRandom="id")
summary(aov(x~grp+Error(id/grp),data=dat))
##
## Error: id
## Df Sum Sq Mean Sq F value Pr(>F)
## Residuals 9 49.1 5.46
##
## Error: id:grp
## Df Sum Sq Mean Sq F value Pr(>F)
## grp 1 25.3 25.33 17.1 0.0025 **
## Residuals 9 13.3 1.48
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
If the prior settings are consistent, these two numbers should be almost the same (within MC estimation error).
as.vector(lmBF(x ~ grp+id, data=dat, whichRandom="id")/idOnly)
## grp + id
## 23.1
as.vector(lmBF(x ~ forReg+id, data=dat, whichRandom="id")/idOnly)
## forReg + id
## 23.2
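A quick way to see how close the two estimates are (a sketch reusing the model objects above; within MC error the difference should be near zero on the log scale):
# Sketch: re-fit both models and compare the two Bayes factors on the log scale
bfFactor    <- lmBF(x ~ grp + id, data = dat, whichRandom = "id") / idOnly
bfCovariate <- lmBF(x ~ forReg + id, data = dat, whichRandom = "id") / idOnly
log(as.vector(bfFactor)) - log(as.vector(bfCovariate))  # should be close to 0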
Given the effect size \(\hat{\delta}=t/\sqrt{N_{eff}}\), where the effective sample size \(N_{eff}\) is the sample size in the one-sample case and \[ N_{eff} = \frac{N_1N_2}{N_1+N_2} \] in the two-sample case, the one-sample and two-sample Bayes factors should be the same for the same observed effect size, apart from the difference in degrees of freedom, which affects the shape of the noncentral \(t\) likelihood. For a given \(t\), this degrees-of-freedom difference should shrink as \(N_{eff}\rightarrow\infty\).
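For instance, with \(t=3\) and \(N_1=N_2=500\), the values used in the code below, \[ N_{eff} = \frac{500\times 500}{500+500} = 250, \qquad \hat{\delta} = \frac{3}{\sqrt{250}} \approx 0.19, \] which matches the sample mean in the one-sample output.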
# create some data
tstat = 3
NTwoSample = 500
effSampleSize = (NTwoSample^2)/(2*NTwoSample)
effSize = tstat/sqrt(effSampleSize)
# One sample
x0 = rnorm(effSampleSize)
x0 = (x0 - mean(x0))/sd(x0) + effSize
t.test(x0)
##
## One Sample t-test
##
## data: x0
## t = 3, df = 249, p-value = 0.003
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.0652 0.3143
## sample estimates:
## mean of x
## 0.19
# Two sample
x1 = rnorm(NTwoSample)
x1 = (x1 - mean(x1))/sd(x1)
x2 = x1 + effSize
t.test(x2,x1)
##
## Welch Two Sample t-test
##
## data: x2 and x1
## t = 3, df = 998, p-value = 0.003
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 0.0656 0.3138
## sample estimates:
## mean of x mean of y
## 1.90e-01 4.98e-18
These (log) Bayes factors should be approximately the same.
log(as.vector(ttestBF(x0)))
## Alt., r=0.707
## 1.72
log(as.vector(ttestBF(x=x1,y=x2)))
## Alt., r=0.707
## 1.77
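As a rough check that the degrees-of-freedom difference shrinks as \(N_{eff}\) grows, one could repeat the comparison at several effective sample sizes. The helper function below is an assumption written for this sketch, not part of BayesFactor.
# Sketch (assumption): one- vs two-sample log Bayes factor gap for a fixed t
# at several effective sample sizes; the gap should shrink as N_eff grows
logBFGap <- function(tstat, nEff) {
  x0 <- rnorm(nEff)
  x0 <- (x0 - mean(x0)) / sd(x0) + tstat / sqrt(nEff)  # one-sample data with exact t
  x1 <- rnorm(2 * nEff)                                # two groups of size 2 * nEff each
  x1 <- (x1 - mean(x1)) / sd(x1)
  x2 <- x1 + tstat / sqrt(nEff)
  log(as.vector(ttestBF(x0))) - log(as.vector(ttestBF(x = x1, y = x2)))
}
sapply(c(50, 250, 1000), function(n) logBFGap(tstat = 3, nEff = n))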
A paired sample \(t\) test and a linear mixed effects model should broadly agree. The two are based on different models: the paired \(t\) test has the participant effects subtracted out, while the linear mixed effects model places a prior on the participant effects. Nevertheless, we would expect them to lead to the same conclusions.
These two Bayes factors should lead to similar conclusions.
# using the data previously defined
t.test(x~grp,data=dat,paired=TRUE)
##
## Paired t-test
##
## data: x by grp
## t = -4, df = 9, p-value = 0.003
## alternative hypothesis: true mean difference is not equal to 0
## 95 percent confidence interval:
## -3.48 -1.02
## sample estimates:
## mean difference
## -2.25
as.vector(lmBF(x ~ grp+id, data=dat, whichRandom="id")/idOnly)
## grp + id
## 23.1
as.vector(ttestBF(x=dat$x[dat$grp==1],y=dat$x[dat$grp==2],paired=TRUE))
## Alt., r=0.707
## 18.9
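As a rough numerical summary (a sketch using the objects defined above; the object names are my own), both Bayes factors can be converted to posterior probabilities of the alternative, assuming equal prior odds, an assumption made here purely for illustration:
# Sketch: posterior probability of the alternative under prior odds of 1;
# the mixed-model and paired-t Bayes factors differ somewhat but imply the same conclusion
bfMixed  <- unname(as.vector(lmBF(x ~ grp + id, data = dat, whichRandom = "id") / idOnly))
bfPaired <- unname(as.vector(ttestBF(x = dat$x[dat$grp == 1], y = dat$x[dat$grp == 2], paired = TRUE)))
c(mixed = bfMixed / (1 + bfMixed), paired = bfPaired / (1 + bfPaired))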
This document was compiled with version 0.9.12-4.4 of BayesFactor (R version 4.2.1 (2022-06-23 ucrt) on x86_64-w64-mingw32).