Misplaced Pages

Bernoulli distribution: Difference between revisions

Article snapshot taken from[REDACTED] under the Creative Commons Attribution-ShareAlike license.
Revision as of 21:56, 12 September 2020 by Miaumee (talk | contribs; extended confirmed user, 765 edits): "Minor C/E fixes. + inline refs. − 'Mathworld' entry in External Links (inline-cited already). + 'Continuous Bernoulli distribution' to See Also." Tags: Reverted, Visual edit

Revision as of 15:34, 21 September 2020 by JayBeeEll (talk | contribs; extended confirmed user, new page reviewer, 28,266 edits): "Undid revision 978097866 by Miaumee (talk). Per User talk:Miaumee, this is apparently the preferred response to poor editing." Tag: Undo

Revision as of 15:34, 21 September 2020

probability distribution modeling a coin toss which need not be fair

Bernoulli
Parameters: <math>0 \le p \le 1</math>, <math>q = 1 - p</math>
Support: <math>k \in \{0, 1\}</math>
PMF: <math>\begin{cases} q = 1 - p & \text{if } k = 0 \\ p & \text{if } k = 1 \end{cases}</math>
CDF: <math>\begin{cases} 0 & \text{if } k < 0 \\ 1 - p & \text{if } 0 \le k < 1 \\ 1 & \text{if } k \ge 1 \end{cases}</math>
Mean: <math>p</math>
Median: <math>\begin{cases} 0 & \text{if } p < 1/2 \\ [0, 1] & \text{if } p = 1/2 \\ 1 & \text{if } p > 1/2 \end{cases}</math>
Mode: <math>\begin{cases} 0 & \text{if } p < 1/2 \\ 0, 1 & \text{if } p = 1/2 \\ 1 & \text{if } p > 1/2 \end{cases}</math>
Variance: <math>p(1 - p) = pq</math>
Skewness: <math>\frac{q - p}{\sqrt{pq}}</math>
Excess kurtosis: <math>\frac{1 - 6pq}{pq}</math>
Entropy: <math>-q \ln q - p \ln p</math>
MGF: <math>q + pe^t</math>
CF: <math>q + pe^{it}</math>
PGF: <math>q + pz</math>
Fisher information: <math>\frac{1}{pq}</math>

In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability <math>p</math> and the value 0 with probability <math>q = 1 - p</math>. Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are boolean-valued: a single bit whose value is success/yes/true/one with probability ''p'' and failure/no/false/zero with probability ''q''. It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails" (or vice versa), respectively, and ''p'' would be the probability of the coin landing on heads or tails, respectively. In particular, unfair coins would have <math>p \neq 1/2.</math>

The Bernoulli distribution is a special case of the binomial distribution where a single trial is conducted (so n would be 1 for such a binomial distribution). It is also a special case of the two-point distribution, for which the possible outcomes need not be 0 and 1.

Properties

If <math>X</math> is a random variable with this distribution, then:

:<math>\Pr(X=1) = p = 1 - \Pr(X=0) = 1 - q.</math>

The probability mass function <math>f</math> of this distribution, over possible outcomes ''k'', is

:<math>f(k;p) = \begin{cases} p & \text{if } k = 1, \\ q = 1 - p & \text{if } k = 0. \end{cases}</math>

This can also be expressed as

:<math>f(k;p) = p^k (1-p)^{1-k} \quad \text{for } k \in \{0, 1\}</math>

or as

:<math>f(k;p) = pk + (1-p)(1-k) \quad \text{for } k \in \{0, 1\}.</math>
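As a quick sanity check, the three equivalent PMF expressions above can be compared numerically. A minimal Python sketch (the function names are illustrative, not from any library):

```python
def bernoulli_pmf(k, p):
    """Piecewise form: p if k = 1, q = 1 - p if k = 0."""
    if k == 1:
        return p
    if k == 0:
        return 1 - p
    raise ValueError("k must be 0 or 1")

def bernoulli_pmf_power(k, p):
    """Closed form p^k (1 - p)^(1 - k), valid for k in {0, 1}."""
    return p ** k * (1 - p) ** (1 - k)

def bernoulli_pmf_linear(k, p):
    """Linear form p*k + (1 - p)*(1 - k), valid for k in {0, 1}."""
    return p * k + (1 - p) * (1 - k)

# All three expressions agree on the support {0, 1} for any p in [0, 1]:
for p in (0.0, 0.25, 0.5, 1.0):
    for k in (0, 1):
        assert bernoulli_pmf(k, p) == bernoulli_pmf_power(k, p) == bernoulli_pmf_linear(k, p)
```

The power and linear forms are only meaningful on the support {0, 1}; outside it they return values that are not probabilities of this distribution.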

The Bernoulli distribution is a special case of the binomial distribution with <math>n = 1.</math>

The kurtosis goes to infinity for high and low values of <math>p,</math> but for <math>p = 1/2</math> the two-point distributions, including the Bernoulli distribution, have a lower excess kurtosis than any other probability distribution, namely −2.

The Bernoulli distributions for <math>0 \le p \le 1</math> form an exponential family.

The maximum likelihood estimator of <math>p</math> based on a random sample is the sample mean.
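The last property can be illustrated with a small simulation: the sample mean of simulated Bernoulli trials recovers the true <math>p</math>. A minimal sketch (the helper name is illustrative):

```python
import random

def bernoulli_mle(sample):
    """Maximum likelihood estimate of p for Bernoulli data: the sample mean."""
    return sum(sample) / len(sample)

random.seed(42)
p_true = 0.25
# simulate 100,000 Bernoulli(0.25) trials
sample = [1 if random.random() < p_true else 0 for _ in range(100_000)]
p_hat = bernoulli_mle(sample)   # close to 0.25 for a sample this large
```

The standard error of the estimate is <math>\sqrt{pq/n}</math>, so with 100,000 trials the estimate is typically within a few thousandths of the true value.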

Mean

The expected value of a Bernoulli random variable <math>X</math> is

:<math>\operatorname{E}(X) = p.</math>

This is due to the fact that for a Bernoulli distributed random variable <math>X</math> with <math>\Pr(X=1)=p</math> and <math>\Pr(X=0)=q</math> we find

:<math>\operatorname{E}[X] = \Pr(X=1)\cdot 1 + \Pr(X=0)\cdot 0 = p \cdot 1 + q \cdot 0 = p.</math>

Variance

The variance of a Bernoulli distributed <math>X</math> is

:<math>\operatorname{Var}[X] = pq = p(1-p).</math>

We first find

:<math>\operatorname{E}[X^2] = \Pr(X=1)\cdot 1^2 + \Pr(X=0)\cdot 0^2 = p \cdot 1^2 + q \cdot 0^2 = p.</math>

From this it follows that

:<math>\operatorname{Var}[X] = \operatorname{E}[X^2] - \operatorname{E}[X]^2 = p - p^2 = p(1-p) = pq.</math>
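The mean and variance derivations above are short enough to mirror directly in code; a minimal sketch (the function name is illustrative):

```python
def bernoulli_moments(p):
    """Return (E[X], E[X^2], Var[X]) computed from the two-point PMF."""
    q = 1 - p
    e_x = 1 * p + 0 * q            # E[X]   = p
    e_x2 = 1**2 * p + 0**2 * q     # E[X^2] = p
    var = e_x2 - e_x**2            # p - p^2 = p(1 - p) = pq
    return e_x, e_x2, var

mean, second_moment, variance = bernoulli_moments(0.3)
# mean and second_moment are both 0.3; variance is approximately 0.21
```

Because <math>X^2 = X</math> on the support {0, 1}, the first and second raw moments coincide, which is what makes the variance formula collapse to <math>p(1-p)</math>.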

Skewness

The skewness is <math>\frac{q-p}{\sqrt{pq}} = \frac{1-2p}{\sqrt{pq}}</math>. When we take the standardized Bernoulli distributed random variable <math>\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}</math>, we find that this random variable attains <math>\frac{q}{\sqrt{pq}}</math> with probability <math>p</math> and attains <math>-\frac{p}{\sqrt{pq}}</math> with probability <math>q</math>. Thus we get

:<math>\begin{align}
\gamma_1 &= \operatorname{E}\left[\left(\frac{X - \operatorname{E}[X]}{\sqrt{\operatorname{Var}[X]}}\right)^3\right] \\
&= p \cdot \left(\frac{q}{\sqrt{pq}}\right)^3 + q \cdot \left(-\frac{p}{\sqrt{pq}}\right)^3 \\
&= \frac{1}{\sqrt{pq}^3}\left(pq^3 - qp^3\right) \\
&= \frac{pq}{\sqrt{pq}^3}(q-p) \\
&= \frac{q-p}{\sqrt{pq}}
\end{align}</math>
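The two-point computation above can be checked numerically against the closed form, assuming <math>0 < p < 1</math> so the standard deviation is nonzero (the function name is illustrative):

```python
from math import sqrt

def bernoulli_skewness(p):
    """E[((X - E[X]) / sd)^3] computed over the two-point support, 0 < p < 1."""
    q = 1 - p
    sd = sqrt(p * q)
    # standardized values: q/sd with probability p, -p/sd with probability q
    return p * (q / sd) ** 3 + q * (-p / sd) ** 3

# agrees with the closed form (q - p)/sqrt(pq) = (1 - 2p)/sqrt(pq)
p = 0.2
closed_form = (1 - 2 * p) / sqrt(p * (1 - p))
```

For <math>p = 1/2</math> the distribution is symmetric and the skewness is 0, as both expressions confirm.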

Higher moments and cumulants

The central moment of order <math>k</math> is given by

:<math>\mu_k = (1-p)(-p)^k + p(1-p)^k.</math>

The first six central moments are

:<math>\begin{align}
\mu_1 &= 0, \\
\mu_2 &= p(1-p), \\
\mu_3 &= p(1-p)(1-2p), \\
\mu_4 &= p(1-p)(1-3p(1-p)), \\
\mu_5 &= p(1-p)(1-2p)(1-2p(1-p)), \\
\mu_6 &= p(1-p)(1-5p(1-p)(1-p(1-p))).
\end{align}</math>

The higher central moments can be expressed more compactly in terms of <math>\mu_2</math> and <math>\mu_3</math>:

:<math>\begin{align}
\mu_4 &= \mu_2 (1-3\mu_2), \\
\mu_5 &= \mu_3 (1-2\mu_2), \\
\mu_6 &= \mu_2 (1-5\mu_2(1-\mu_2)).
\end{align}</math>

The first six cumulants are

:<math>\begin{align}
\kappa_1 &= 0, \\
\kappa_2 &= \mu_2, \\
\kappa_3 &= \mu_3, \\
\kappa_4 &= \mu_2 (1-6\mu_2), \\
\kappa_5 &= \mu_3 (1-12\mu_2), \\
\kappa_6 &= \mu_2 (1-30\mu_2(1-4\mu_2)).
\end{align}</math>
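The closed form for <math>\mu_k</math> is just the two-point expectation of <math>(X-p)^k</math>, so it can be evaluated directly and checked against the compact identities above; a minimal sketch (the function name is illustrative):

```python
def central_moment(k, p):
    """mu_k = (1 - p)(-p)^k + p(1 - p)^k, i.e. E[(X - p)^k] over {0, 1}."""
    q = 1 - p
    return q * (-p) ** k + p * q ** k

p = 0.3
mu2 = central_moment(2, p)   # equals p(1 - p)
mu3 = central_moment(3, p)   # equals p(1 - p)(1 - 2p)
mu4 = central_moment(4, p)   # satisfies mu_4 = mu_2 (1 - 3 mu_2)
```

Expanding the two terms for <math>k = 2</math> gives <math>qp^2 + pq^2 = pq(p+q) = pq</math>, recovering the variance as a special case.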

Related distributions

If <math>X_1, \dots, X_n</math> are independent, identically distributed (i.i.d.) random variables, all Bernoulli trials with success probability ''p'', then their sum is distributed according to a binomial distribution with parameters ''n'' and ''p'':

:<math>\sum_{k=1}^n X_k \sim \operatorname{B}(n,p)</math> (binomial distribution).

The Bernoulli distribution is simply <math>\operatorname{B}(1,p)</math>, also written as <math>\mathrm{Bernoulli}(p).</math>
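The sum-of-Bernoullis relation can be illustrated by convolving <math>n</math> Bernoulli PMFs and comparing the result with the binomial formula; a sketch (the function names are illustrative):

```python
from math import comb

def binomial_pmf(k, n, p):
    """C(n, k) p^k (1 - p)^(n - k)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def sum_of_bernoullis_pmf(n, p):
    """PMF of X_1 + ... + X_n, built by convolving n Bernoulli(p) PMFs."""
    dist = [1.0]                      # PMF of the empty sum: point mass at 0
    for _ in range(n):
        nxt = [0.0] * (len(dist) + 1)
        for k, prob in enumerate(dist):
            nxt[k] += prob * (1 - p)  # this trial contributes 0
            nxt[k + 1] += prob * p    # this trial contributes 1
        dist = nxt
    return dist

conv = sum_of_bernoullis_pmf(5, 0.3)  # matches Binomial(5, 0.3) at every k
```

Each convolution step multiplies the probability generating function by <math>q + pz</math>, which is why the result is exactly the binomial PMF.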

See also

References

  1. James Victor Uspensky: Introduction to Mathematical Probability, McGraw-Hill, New York 1937, page 45.
  2. Bertsekas, Dimitri P. (2002). Introduction to Probability. Tsitsiklis, John N. Belmont, Mass.: Athena Scientific. ISBN 188652940X. OCLC 51441829.
  3. McCullagh, Peter; Nelder, John (1989). Generalized Linear Models, Second Edition. Boca Raton: Chapman and Hall/CRC. Section 4.2.2. ISBN 0-412-31760-5.

Further reading

  • Johnson, N. L.; Kotz, S.; Kemp, A. (1993). Univariate Discrete Distributions (2nd ed.). Wiley. ISBN 0-471-54897-9.
  • Peatman, John G. (1963). Introduction to Applied Statistics. New York: Harper & Row. pp. 162–171.

External links

  • Media related to Bernoulli distribution at Wikimedia Commons
  • "Binomial distribution", Encyclopedia of Mathematics, EMS Press
  • Interactive graphic:
