
In [[Bayesian inference]], the '''Bernstein–von Mises theorem''' provides the basis for the important result that, under suitable regularity conditions, the [[posterior distribution]] for the unknown quantities in a problem becomes effectively independent of the [[prior distribution]] (provided the prior obeys [[Cromwell's rule]]) once the amount of information supplied by a sample of data is large enough. 
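This washing-out of the prior can be seen in a minimal sketch using the conjugate Beta-Bernoulli model (the prior parameters below are hypothetical choices for illustration): with a Beta(''a'', ''b'') prior and ''k'' successes in ''n'' Bernoulli trials, the posterior is Beta(''a''&nbsp;+&nbsp;''k'', ''b''&nbsp;+&nbsp;''n''&nbsp;&minus;&nbsp;''k''), so two very different priors give nearly the same posterior mean for large ''n''.

```python
# Beta-Bernoulli model: with a Beta(a, b) prior and k successes in n
# Bernoulli trials, the posterior is Beta(a + k, b + n - k),
# whose mean is (a + k) / (a + b + n).
def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

# Two very different (hypothetical) priors:
flat_prior = (1, 1)      # uniform Beta(1, 1)
skewed_prior = (30, 2)   # strongly favours high success probabilities

# Small sample: the priors disagree noticeably.
k, n = 7, 10
small_gap = abs(posterior_mean(*flat_prior, k, n)
                - posterior_mean(*skewed_prior, k, n))

# Large sample with the same success frequency: the gap nearly vanishes.
k, n = 7000, 10000
large_gap = abs(posterior_mean(*flat_prior, k, n)
                - posterior_mean(*skewed_prior, k, n))

print(small_gap, large_gap)  # the second gap is orders of magnitude smaller
```

This only illustrates the phenomenon for one conjugate family; the theorem itself makes the stronger asymptotic-normality statement under regularity conditions.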

The theorem is named after [[Richard von Mises]] and [[S. N. Bernstein]], even though the first proper proof was given by [[Joseph Leo Doob]] in 1949 for random variables with a finite [[probability space]]. Later, [[Lucien Le Cam]], his PhD student [[Lorraine Schwartz]], [[David A. Freedman (statistician)|David A. Freedman]] and [[Persi Diaconis]] extended the proof under more general assumptions. A remarkable result was found by Freedman in 1965: the Bernstein–von Mises theorem can fail [[Almost surely|almost surely]] if the random variable takes values in a countably infinite [[probability space]]. 

The statistician [[A. W. F. Edwards]] has remarked, "It is sometimes said, in defence of the Bayesian concept, that the choice of prior distribution is unimportant in practice, because it hardly influences the posterior distribution at all when there are moderate amounts of data. The less said about this 'defence' the better."<ref>{{cite book | last = Edwards | first = A.W.F. | title = Likelihood}}</ref>
* Le Cam, Lucien (1986). ''Asymptotic Methods in Statistical Decision Theory'', Springer. ISBN 0387963073 (pages 336 and 618&ndash;621).
* Schwartz, Lorraine (1965). "On Bayes procedures". ''Z. Wahrscheinlichkeitstheorie'', 4, pp. 10&ndash;26.
[[Category:Bayesian inference]]
[[Category:Statistical theorems]]
{{statistics-stub}}