is an unconditional distribution associated with θ. In contrast to NHST, θ is not assumed to be random; we are merely nescient² of its value. In other words, probability is conceptualised as a state of subjective belief or state of knowledge (as opposed to objective “pure” probability as an intrinsic characteristic of θ).
The posterior distribution is approximated by a powerful class of algorithms known as Markov chain Monte Carlo (MCMC) methods (named in analogy to the randomness of casino games). MCMC generates a large representative sample of parameter values from the posterior distribution which, in principle, makes it possible to approximate the posterior to an arbitrarily high degree of accuracy (as n → ∞).
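To make this concrete, the following minimal sketch implements the random-walk Metropolis algorithm, one member of the MCMC family, in Python; the one-dimensional toy target, step size, and chain length are illustrative assumptions, not the model used in the present analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_posterior(theta):
    # Toy stand-in for the (unnormalised) log-posterior p(theta | data):
    # a standard normal density, log p = -theta^2 / 2 + const.
    return -0.5 * theta ** 2

def metropolis(n_samples, step=1.0, theta0=0.0):
    """Random-walk Metropolis: propose a move, then accept or reject it."""
    chain = np.empty(n_samples)
    theta, logp = theta0, log_posterior(theta0)
    for i in range(n_samples):
        proposal = theta + rng.normal(scale=step)  # symmetric proposal
        logp_prop = log_posterior(proposal)
        # Accept with probability min(1, p(proposal) / p(current)).
        if np.log(rng.uniform()) < logp_prop - logp:
            theta, logp = proposal, logp_prop
        chain[i] = theta  # a rejected proposal repeats the current value
    return chain

chain = metropolis(50_000)
print(chain.mean(), chain.std())  # approx. 0 and 1 for this toy target
```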
The MCMC sample (or chain) contains a large number (e.g., > 1,000) of combinations of the parameter values of interest. Our model of perceptual judgments contains the following parameters: ⟨μ₁, μ₂, σ₁, σ₂, ν⟩. In other words, the MCMC algorithm randomly samples a very large number of combinations of θ from the posterior distribution. This representative sample of θ values is subsequently utilised to estimate various characteristics of the posterior (Gustafsson, Montelius, Starck, & Ljungberg, 2017), e.g., its mean, mode, and standard deviation. The sample of parameter values obtained in this way can then be plotted as a histogram in order to visualise its distributional properties, and a prespecified highest density interval (HDI) can be superimposed on the histogram.
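Continuing the sketch above, the snippet below estimates such summary statistics from a chain and superimposes a 95% HDI on its histogram. The `hdi` helper (the shortest interval containing the requested probability mass) and the 95% level are illustrative assumptions, and a stand-in sample is drawn so the snippet also runs on its own.

```python
import numpy as np
import matplotlib.pyplot as plt

def hdi(samples, mass=0.95):
    """Shortest interval containing `mass` of the sampled values."""
    s = np.sort(samples)
    n_in = int(np.floor(mass * len(s)))
    widths = s[n_in:] - s[: len(s) - n_in]
    start = int(np.argmin(widths))
    return s[start], s[start + n_in]

# `chain` is the MCMC sample from the sketch above; a stand-in sample
# is drawn here so this snippet is self-contained.
chain = np.random.default_rng(2).normal(size=50_000)

print("mean:", chain.mean(), " sd:", chain.std(ddof=1))
low, high = hdi(chain)
print("95% HDI:", (low, high))

# Histogram of the posterior sample with the HDI superimposed.
plt.hist(chain, bins=100, density=True)
plt.axvspan(low, high, alpha=0.2, label="95% HDI")
plt.xlabel("parameter value")
plt.legend()
plt.show()
```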
Relatively recent advances in technology make these computationally demanding methods feasible. The combination of powerful microprocessors and sophisticated computational algorithms allows researchers to perform extremely powerful Bayesian statistical analyses that would have been very expensive only 15 years ago and virtually impossible circa 25 years ago. The statistical “Bayesian revolution” is relevant for many scientific disciplines (Beaumont & Rannala, 2004; Brooks, 2003; Gregory, 2001; Shultz, 2007) and for the scientific method in general. This Kuhnian paradigm shift (Kuhn, 1970) goes hand in hand with Moore's law (Moore, 1965), the exponential progress of information technologies (Kurzweil, 2005; cf. Goertzel, 2007), and the associated ephemeralization³ (Heylighen, 2008). For the current
Bayesian analysis, the parameter space Θ is a five-dimensional space that embeds
the joint distribution of all possible combinations of parameter values (Kruschke,
2014). Hence, exact parameter values can be approximated by sampling large numbers of values from the posterior distribution: the larger the number of random samples, the more accurate the estimate. A longer MCMC chain (i.e., a larger sample) provides a more accurate representation (i.e., a better estimate or higher resolution) of the posterior distribution of the parameter values, given the empirical data.
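This 1/√n behaviour can be illustrated with a small numerical experiment (a hypothetical setup in which independent draws stand in for an MCMC sample of a single parameter): repeating the estimation many times shows how the spread of the posterior-mean estimate shrinks as the chain grows.

```python
import numpy as np

rng = np.random.default_rng(3)

# Spread of the posterior-mean estimate across 200 repeated "analyses",
# comparing a short chain with a long one (independent normal draws
# stand in for an MCMC sample here).
for n in (500, 50_000):
    estimates = [rng.normal(size=n).mean() for _ in range(200)]
    print(f"n = {n:>6}: SD of repeated estimates = {np.std(estimates):.4f}")

# The spread shrinks roughly as 1/sqrt(n): 100 times more samples
# buys about a tenfold gain in precision.
```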
For instance, if the number of MCMC samples were relatively small and the analysis were repeated, the resulting values would differ noticeably and, on visual inspection, the
² The term “nescient” is a composite lexeme derived from the Latin ne “not” + scire “to know” (cf. “science”). It is not synonymous with “ignorant” because ignorance has a different semantic meaning (“to ignore” is very different from “not knowing”).
³ A concept popularised by Buckminster Fuller which is frequently cited as an argument against Malthusianism.