With the usual notation, we have a parameter $\theta$ and a vector of observations $x = (x_1, \dots, x_n)$ taking values in a sample space $\mathcal{X}$. Let me try to make sense of their definition of a Bayes factor, at least when we have simple null and alternative hypotheses: $H_0 : \theta = \theta_0$ and $H_1 : \theta = \theta_1$.

Define $P_i = P_{\theta_i}$, for $i = 0, 1$, and suppose that both are dominated by a $\sigma$-finite measure $\mu$, with Radon–Nikodym derivatives $f_i = dP_i/d\mu$. Then, we have the usual result for the Bayes factor
$$
B_{01}(x) = \frac{f_0(x)}{f_1(x)} \, .
$$
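For simple hypotheses the Bayes factor is just this likelihood ratio, which is easy to check numerically. A minimal sketch, assuming (hypothetically) Gaussian models $P_i = N(\theta_i, 1)$ with $\theta_0 = 0$ and $\theta_1 = 1$:

```python
from scipy.stats import norm

# Hypothetical simple hypotheses: X ~ N(theta, 1), H0: theta = 0, H1: theta = 1.
# f_i is the density of P_i with respect to Lebesgue measure mu.
def bayes_factor_01(x, theta0=0.0, theta1=1.0):
    """B_01(x) = f_0(x) / f_1(x), the likelihood-ratio form of the Bayes factor."""
    return norm.pdf(x, loc=theta0) / norm.pdf(x, loc=theta1)

print(bayes_factor_01(0.0))  # e^{1/2} ~ 1.65: an observation at 0 favours H0
print(bayes_factor_01(1.0))  # e^{-1/2} ~ 0.61: an observation at 1 favours H1
```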
Now, if we suppose that $P_0$ and $P_1$ are equivalent (each absolutely continuous with respect to the other), using the chain rule for Radon–Nikodym derivatives we have
$$
\frac{dP_0}{dP_1} = \frac{dP_0/d\mu}{dP_1/d\mu} = \frac{f_0}{f_1} = B_{01} \quad P_1\text{-a.s.}
$$
Since we have
$$
\int_{\mathcal{X}} \frac{dP_0}{dP_1} \, dP_1 = P_0(\mathcal{X}) = 1 \, ,
$$
the Bayes factor satisfies
$$
\mathrm{E}_1\!\left[ B_{01}(X) \right] = \int_{\mathcal{X}} B_{01}(x) \, dP_1(x) = 1 \, .
$$
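This expectation-one property can be checked by Monte Carlo. A sketch, again assuming the hypothetical equivalent pair $P_0 = N(0,1)$ and $P_1 = N(1,1)$ (equivalent because they share full support):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical equivalent pair: P0 = N(0, 1), P1 = N(1, 1) (same support).
x = rng.normal(loc=1.0, size=200_000)               # draws from P1
b01 = norm.pdf(x, loc=0.0) / norm.pdf(x, loc=1.0)   # B_01 = dP0/dP1 at each draw

# E_1[B_01] = 1, up to Monte Carlo error.
print(b01.mean())
```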
It seems that they take this *property* as their definition of a Bayes factor. Actually, they do more: when $P_0$ and $P_1$ are not equivalent, using the Lebesgue decomposition we can write
$$
P_0 = P_0^{\mathrm{ac}} + P_0^{\perp} \, ,
$$
where $P_0^{\mathrm{ac}}$ is dominated by $P_1$, and $P_0^{\perp}$ and $P_1$ are mutually singular, so that
$$
\int_{\mathcal{X}} \frac{dP_0^{\mathrm{ac}}}{dP_1} \, dP_1 = P_0^{\mathrm{ac}}(\mathcal{X}) \, ,
$$
and then, since $P_0^{\mathrm{ac}}(\mathcal{X}) = 1 - P_0^{\perp}(\mathcal{X}) \leq 1$, we have
$$
\int_{\mathcal{X}} \frac{dP_0^{\mathrm{ac}}}{dP_1} \, dP_1 \leq 1 \, .
$$
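A toy non-equivalent pair makes the inequality strict. Assuming (hypothetically) $P_1 = \mathrm{Uniform}(0,1)$ and $P_0 = \mathrm{Uniform}(0,2)$: the absolutely continuous part of $P_0$ has density $1/2$ with respect to $P_1$ on $(0,1)$, the $P_0$-mass on $[1,2)$ is the singular part, and the integral comes out to $1/2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical non-equivalent pair: P1 = Uniform(0, 1), P0 = Uniform(0, 2).
# Lebesgue decomposition of P0 w.r.t. P1: dP0_ac/dP1 = 1/2 on (0, 1);
# the P0-mass on [1, 2) is the part singular w.r.t. P1.
def dP0_ac_over_dP1(x):
    return np.where((x >= 0) & (x < 1), 0.5, 0.0)

x = rng.uniform(0.0, 1.0, size=100_000)   # draws from P1
print(dP0_ac_over_dP1(x).mean())          # 0.5 <= 1: the inequality is strict
```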
Now, as it seems to me, they make an analogy with the former case and come to their definition of a Bayes factor as any nonnegative measurable function $B$ that satisfies
$$
\int_{\mathcal{X}} B \, dP_1 \leq 1 \, .
$$
I can’t see clearly the advantage of this extended definition with the inequality sign. Also, I don’t understand how all of this extends to cases where we have composite hypotheses.
