If there are two dependent positive real variables $x_1$ and $x_2$, and only $x_1$ is known, what is the probability that $x_2$ is larger versus smaller than $x_1$? There is no uniquely correct answer according to "frequentist" and "subjective Bayesian" definitions of probability. Here we derive the answer given the "objective Bayesian" definition developed by Jeffreys, Cox, and Jaynes. We declare the standard distance metric in one dimension, $|a - b|$, and the uniform prior distribution, as axioms. If neither variable is known, $P(x_2 > x_1) = P(x_2 < x_1) = 1/2$. This appears obvious, since the state spaces $x_2 > x_1$ and $x_2 < x_1$ have equal size. However, if $x_1$ is known and $x_2$ unknown, there are infinitely more numbers in the space $x_2 > x_1$ than $x_2 < x_1$. Despite this asymmetry, we prove $P(x_2 > x_1 \mid x_1) = P(x_2 < x_1 \mid x_1) = 1/2$, so that $x_1$ is the median of $p(x_2 \mid x_1)$, and $x_1$ is statistically independent of the ratio $x_2 / x_1$. We present three proofs that apply to all members of a set of distributions. Each member is distinguished by the form of dependence between variables implicit within a statistical model (gamma, Gaussian, etc.), but all exhibit two symmetries in the joint distribution $p(x_1, x_2)$ that are required in the absence of prior information: exchangeability of variables, and non-informative priors over the marginal distributions $p(x_1)$ and $p(x_2)$. We relate our conclusion to physical models of prediction and intelligence, where the known 'sample' could be the present internal energy within a sensor, and the unknown the energy in its external sensory cause or future motor effect.
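As an illustrative sketch, not taken from the article's three proofs: one model with the stated symmetries is the exchangeable exponential model with Jeffreys prior on its rate, $x_1, x_2 \mid \theta \sim \mathrm{Exp}(\theta)$ with $p(\theta) \propto 1/\theta$, which yields the scale-invariant marginal prior $p(x_i) \propto 1/x_i$ and the predictive density $p(x_2 \mid x_1) = x_1/(x_1 + x_2)^2$. Integrating from $x_1$ to infinity gives $P(x_2 > x_1 \mid x_1) = 1/2$, and the ratio $r = x_2/x_1$ has density $1/(1+r)^2$ regardless of $x_1$. The short simulation below (model and variable names are this sketch's, not the article's) checks both claims numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

def predictive_draws(x1, n=1_000_000):
    """Draw x2 from the predictive p(x2 | x1) under an exponential model
    with Jeffreys prior p(theta) ∝ 1/theta (illustrative choice)."""
    # Posterior of the rate given one observation x1: theta | x1 ~ Exp(rate=x1).
    theta = rng.exponential(scale=1.0 / x1, size=n)
    # Predictive draw of the unknown variable: x2 | theta ~ Exp(rate=theta).
    return rng.exponential(scale=1.0 / theta)

for x1 in (0.1, 1.0, 10.0):
    x2 = predictive_draws(x1)
    print(f"x1 = {x1:5.1f}   P(x2 > x1 | x1) ≈ {np.mean(x2 > x1):.3f}   "
          f"median(x2 / x1) ≈ {np.median(x2 / x1):.3f}")

# Expected output: probability ≈ 0.5 and median ratio ≈ 1 for every x1,
# illustrating that x1 is the median of p(x2 | x1) and that the ratio
# x2/x1 is distributed independently of x1.
```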