Normal distribution - Wikipedia, the free encyclopedia





Normal distribution

From Wikipedia, the free encyclopedia

This article is about the univariate normal distribution. For normally distributed vectors, see Multivariate normal distribution.

[Infobox figures: probability density function plot (the red curve is the /standard normal distribution/) and cumulative distribution function plot.]

Notation: \mathcal{N}(\mu,\,\sigma^2)
Parameters: /μ/ ∈ *R* (mean, location); /σ/^2 > 0 (variance, squared scale)
Support: /x/ ∈ *R*
pdf: \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}
CDF: \frac12\left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]
Quantile: \mu + \sigma\sqrt{2}\,\operatorname{erf}^{-1}(2F-1)
Mean: /μ/
Median: /μ/
Mode: /μ/
Variance: \sigma^2
Skewness: 0
Ex. kurtosis: 0
Entropy: \frac12 \ln(2\pi e\,\sigma^2)
MGF: \exp\{\mu t + \frac{1}{2}\sigma^2 t^2\}
CF: \exp\{i\mu t - \frac{1}{2}\sigma^2 t^2\}
Fisher information: \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 1/(2\sigma^4) \end{pmatrix}

In probability theory, the *normal* (or *Gaussian*) *distribution* is a very commonly occurring continuous probability distribution: a function that tells the probability that any real observation will fall between any two real limits, as the curve approaches zero on either side. Normal distributions are extremely important in statistics and are often used in the natural and social sciences for real-valued random variables whose distributions are not known.^[1] ^[2]

The normal distribution is immensely useful because of the central limit theorem, which states that, under mild conditions, the mean of many random variables independently drawn from the same distribution is distributed approximately normally, irrespective of the form of the original distribution: physical quantities that are expected to be the sum of many independent processes (such as measurement errors) often have a distribution very close to the normal. Moreover, many results and methods (such as propagation of uncertainty and least squares parameter fitting) can be derived analytically in explicit form when the relevant variables are normally distributed.

The Gaussian distribution is sometimes informally called the *bell curve*. However, many other distributions are bell-shaped (such as the Cauchy, Student's /t/, and logistic distributions). The terms *Gaussian function* and *Gaussian bell curve* are also ambiguous because they sometimes refer to multiples of the normal distribution that cannot be directly interpreted in terms of probabilities.

The probability density of a normal distribution is

f(x, \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}} e^{ -\frac{(x-\mu)^2}{2\sigma^2} }

The parameter /μ/ in this definition is the /mean/ or /expectation/ of the distribution (and also its median and mode). The parameter /σ/ is its standard deviation; its variance is therefore /σ/^2. A random variable with a Gaussian distribution is said to be *normally distributed* and is called a *normal deviate*.

If /μ/ = 0 and /σ/ = 1, the distribution is called the *standard normal distribution* or the *unit normal distribution*, and a random variable with that distribution is a *standard normal deviate*.

The normal distribution is the only absolutely continuous distribution all of whose cumulants beyond the first two (i.e., other than the mean and variance) are zero. It is also the continuous distribution with the maximum entropy for a given mean and variance.^[3] ^[4]

The normal distribution is a subclass of the elliptical distributions. The normal distribution is symmetric about its mean, and is non-zero over the entire real line. As such it may not be a suitable model for variables that are inherently positive or strongly skewed, such as the weight of a person or the price of a share. Such variables may be better described by other distributions, such as the log-normal distribution or the Pareto distribution.

The value of the normal distribution is practically zero when the value /x/ lies more than a few standard deviations away from the mean. Therefore, it may not be an appropriate model when one expects a significant fraction of outliers (values that lie many standard deviations away from the mean), and least squares and other statistical inference methods that are optimal for normally distributed variables often become highly unreliable when applied to such data. In those cases, a more heavy-tailed distribution should be assumed and the appropriate robust statistical inference methods applied.


The Gaussian distribution belongs to the family of stable distributions, which are the attractors of sums of independent, identically distributed distributions whether or not the mean or variance is finite. Except for the Gaussian, which is a limiting case, all stable distributions have heavy tails and infinite variance.

Contents

* 1 Definition
  o 1.1 Standard normal distribution
  o 1.2 General normal distribution
  o 1.3 Notation
  o 1.4 Alternative parameterizations
* 2 Properties
  o 2.1 Symmetries and derivatives
  o 2.2 Moments
  o 2.3 Fourier transform and characteristic function
  o 2.4 Moment and cumulant generating functions
* 3 Cumulative distribution function
  o 3.1 Standard deviation and tolerance intervals
  o 3.2 Quantile function
* 4 Zero-variance limit
* 5 The central limit theorem
* 6 Operations on normal deviates
  o 6.1 Infinite divisibility and Cramér's theorem
  o 6.2 Bernstein's theorem
* 7 Other properties
* 8 Related distributions
  o 8.1 Operations on a single random variable
  o 8.2 Combination of two independent random variables
  o 8.3 Combination of two or more independent random variables
  o 8.4 Operations on the density function
  o 8.5 Extensions
* 9 Normality tests
* 10 Estimation of parameters
* 11 Bayesian analysis of the normal distribution
  o 11.1 The sum of two quadratics
    + 11.1.1 Scalar form
    + 11.1.2 Vector form
  o 11.2 The sum of differences from the mean
  o 11.3 With known variance
  o 11.4 With known mean
  o 11.5 With unknown mean and unknown variance
* 12 Occurrence
  o 12.1 Exact normality
  o 12.2 Approximate normality
  o 12.3 Assumed normality
  o 12.4 Produced normality
* 13 Generating values from normal distribution
* 14 Numerical approximations for the normal CDF
* 15 History
  o 15.1 Development
  o 15.2 Naming
* 16 See also
* 17 Notes
* 18 Citations
* 19 References
* 20 External links

Definition

Standard normal distribution

The simplest case of a normal distribution is known as the /standard normal distribution/. This is a special case where μ=0 and σ=1, and it is described by this probability density function:

\phi(x) = \frac{e^{-\frac{1}{2} x^2}}{\sqrt{2\pi}}

The factor 1/\sqrt{2\pi} in this expression ensures that the total area under the curve /ϕ/(/x/) is equal to one.^[5] The 1/2 in the exponent ensures that the distribution has unit variance (and therefore also unit standard deviation). This function is symmetric around /x/=0, where it attains its maximum value 1/\sqrt{2\pi}, and has inflection points at +1 and −1.
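As a quick numerical illustration of these properties, the sketch below (plain Python, standard library only) evaluates the standard normal density and approximates the total area under the curve with a midpoint Riemann sum.

```python
import math

def phi(x):
    """Standard normal probability density function."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

# Maximum value, attained at x = 0, is 1/sqrt(2*pi)
assert abs(phi(0) - 1 / math.sqrt(2 * math.pi)) < 1e-15

# Symmetry around x = 0
assert abs(phi(1.3) - phi(-1.3)) < 1e-15

# Total area under the curve is 1 (midpoint Riemann sum over [-10, 10];
# the tail mass beyond 10 standard deviations is negligible)
n, a, b = 200_000, -10.0, 10.0
h = (b - a) / n
area = sum(phi(a + (i + 0.5) * h) for i in range(n)) * h
assert abs(area - 1.0) < 1e-7
```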

Authors may differ also on which normal distribution should be called the "standard" one. Gauss himself defined the standard normal as having variance /σ/^2 = 1/2, that is

\phi(x) = \frac{e^{-x^2}}{\sqrt\pi}

Stigler^[6] goes even further, defining the standard normal with variance /σ/^2 = 1/(2/π/):

\phi(x) = e^{-\pi x^2}

General normal distribution

Any normal distribution is a version of the standard normal distribution whose domain has been stretched by a factor /σ/ (the standard deviation) and then translated by /μ/ (the mean value):

f(x, \mu, \sigma) = \frac{1}{\sigma} \phi\left(\frac{x-\mu}{\sigma}\right).

The probability density must be scaled by 1/\sigma so that the integral is still 1.

If /Z/ is a standard normal deviate, then /X/ = /Zσ/ + /μ/ will have a normal distribution with expected value /μ/ and standard deviation /σ/. Conversely, if /X/ is a general normal deviate, then /Z/ = (/X/ − /μ/)//σ/ will have a standard normal distribution.
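The standardization identity can be illustrated with Python's statistics.NormalDist (available in the standard library since Python 3.8); the particular values of μ, σ and x below are arbitrary choices for illustration:

```python
from statistics import NormalDist

mu, sigma = 5.0, 2.0
X = NormalDist(mu, sigma)   # general normal deviate X = Z*sigma + mu
Z = NormalDist(0, 1)        # standard normal deviate

# The CDFs agree once the argument is standardized: P(X <= x) = P(Z <= (x-mu)/sigma)
x = 7.3
z = (x - mu) / sigma
assert abs(X.cdf(x) - Z.cdf(z)) < 1e-12
```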

Every normal distribution is the exponential of a quadratic function:

f(x) = e^{a x^2 + b x + c}

where /a/ is negative and /c/ is b^2/(4a)+\ln(-a/\pi)/2. In this form, the mean value /μ/ is −/b//(2/a/), and the variance /σ/^2 is −1/(2/a/). For the standard normal distribution, /a/ is −1/2, /b/ is zero, and /c/ is −\ln(2\pi)/2.
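A short numerical sketch of this correspondence between the coefficients (/a/, /b/, /c/) and the parameters (/μ/, /σ/^2); the particular values of /a/ and /b/ below are arbitrary choices for illustration:

```python
import math

a, b = -0.125, 0.5                          # any a < 0 and real b work
c = b * b / (4 * a) + math.log(-a / math.pi) / 2

mu = -b / (2 * a)                           # mean implied by the quadratic
var = -1 / (2 * a)                          # variance implied by the quadratic

def f(x):
    """Exponential-of-quadratic form."""
    return math.exp(a * x * x + b * x + c)

def normal_pdf(x, mu, var):
    """Normal density written directly in terms of mean and variance."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# The two forms agree pointwise
for x in (-3.0, 0.0, 1.7, 6.0):
    assert abs(f(x) - normal_pdf(x, mu, var)) < 1e-12
```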

Notation

The standard Gaussian distribution (with zero mean and unit variance) is often denoted with the Greek letter /ϕ/ (phi).^[7] The alternative form of the Greek phi letter, /φ/, is also used quite often.

The normal distribution is also often denoted by /N/(/μ/, /σ/^2 ).^[8] Thus when a random variable /X/ is distributed normally with mean /μ/ and variance /σ/^2, we write

X\ \sim\ \mathcal{N}(\mu,\,\sigma^2).

Alternative parameterizations

Some authors advocate using the precision /τ/ as the parameter defining the width of the distribution, instead of the deviation /σ/ or the variance /σ/^2. The precision is normally defined as the reciprocal of the variance, 1//σ/^2.^[9] The formula for the distribution then becomes

f(x) = \sqrt{\frac{\tau}{2\pi}}\, e^{\frac{-\tau(x-\mu)^2}{2}}.

This choice is claimed to have advantages in numerical computations when /σ/ is very close to zero, and to simplify formulas in some contexts, such as in the Bayesian inference of variables with multivariate normal distribution.

Occasionally, the precision /τ/ is defined as 1//σ/, the reciprocal of the standard deviation; so that

f(x) = \frac{\tau}{\sqrt{2\pi}}\, e^{\frac{-\tau^2(x-\mu)^2}{2}}.
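Both parameterizations describe the same density. A minimal Python check, taking /τ/ = 1//σ/^2 as in the first formula (the values of /μ/ and /σ/ are arbitrary):

```python
import math

def pdf_sigma(x, mu, sigma):
    """Normal density parameterized by the standard deviation sigma."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def pdf_tau(x, mu, tau):
    """Normal density parameterized by the precision tau = 1/sigma^2."""
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-tau * (x - mu) ** 2 / 2)

mu, sigma = 1.0, 0.5
tau = 1 / sigma ** 2
for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(pdf_sigma(x, mu, sigma) - pdf_tau(x, mu, tau)) < 1e-12
```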

According to Stigler, this formulation is advantageous because of a much simpler and easier-to-remember formula, the fact that the pdf has unit height at zero, and simple approximate formulas for the quantiles of the distribution.

Properties

Symmetries and derivatives

The normal distribution /f/(/x/), with any mean /μ/ and any positive deviation /σ/, has the following properties:

* It is symmetric around the point /x/ = /μ/, which is at the same time the mode, the median and the mean of the distribution.^[10]
* It is unimodal: its first derivative is positive for /x/ < /μ/, negative for /x/ > /μ/, and zero only at /x/ = /μ/.
* Its density has two inflection points (where the second derivative of /f/ is zero and changes sign), located one standard deviation away from the mean, namely at /x/ = /μ/ − /σ/ and /x/ = /μ/ + /σ/.^[10]
* Its density is log-concave.^[10]
* Its density is infinitely differentiable, indeed supersmooth of order 2.^[11]
* Its second derivative /f/′′(/x/) is equal to its derivative with respect to its variance /σ/^2.

Furthermore, the density /ϕ/ of the standard normal distribution (with /μ/ = 0 and /σ/ = 1) also has the following properties:

* Its first derivative /ϕ/′(/x/) is −/xϕ/(/x/).
* Its second derivative /ϕ/′′(/x/) is (/x/^2 − 1)/ϕ/(/x/).
* More generally, its /n/-th derivative /ϕ/^(/n/)(/x/) is (−1)^/n/ /H_n/(/x/)/ϕ/(/x/), where /H_n/ is the Hermite polynomial of order /n/.^[12]
* It satisfies the differential equation

\sigma^2 f'(x) + f(x)(x - \mu) = 0, \qquad f(0) = \frac{e^{-\mu^2/(2\sigma^2)}}{\sqrt{2\pi}\,\sigma}

or

f'(x) + \tau f(x)(x - \mu) = 0, \qquad f(0) = \frac{\sqrt{\tau}\, e^{-\mu^2\tau/2}}{\sqrt{2\pi}}.
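The two derivative identities for /ϕ/ can be checked against finite-difference approximations; the sketch below uses central differences with an arbitrary small step:

```python
import math

def phi(x):
    """Standard normal probability density function."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

h = 1e-5
for x in (-2.0, -0.5, 0.0, 1.0, 2.5):
    # Central-difference approximations to the first and second derivative
    d1 = (phi(x + h) - phi(x - h)) / (2 * h)
    d2 = (phi(x + h) - 2 * phi(x) + phi(x - h)) / (h * h)
    # phi'(x) = -x * phi(x)
    assert abs(d1 - (-x * phi(x))) < 1e-8
    # phi''(x) = (x^2 - 1) * phi(x)
    assert abs(d2 - (x * x - 1) * phi(x)) < 1e-5
```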

Moments

See also: List of integrals of Gaussian functions

The plain and absolute moments of a variable /X/ are the expected values of /X^p/ and |/X/|^/p/, respectively. If the expected value /μ/ of /X/ is zero, these parameters are called /central moments/. Usually we are interested only in moments with integer order /p/.

If /X/ has a normal distribution, these moments exist and are finite for any /p/ whose real part is greater than −1. For any non-negative integer /p/, the plain central moments are

\mathrm{E}\left[X^p\right] = \begin{cases} 0 & \text{if }p\text{ is odd,} \\ \sigma^p\,(p-1)!! & \text{if }p\text{ is even.} \end{cases}

Here /n/!! denotes the double factorial, that is, the product of every number from /n/ to 1 that has the same parity as /n/.

The central absolute moments coincide with plain moments for all even orders, but are nonzero for odd orders. For any non-negative integer /p/,

\operatorname{E}\left[|X|^p\right] = \sigma^p\,(p-1)!! \cdot \left.\begin{cases} \sqrt{\frac{2}{\pi}} & \text{if }p\text{ is odd} \\ 1 & \text{if }p\text{ is even} \end{cases}\right\} = \sigma^p \cdot \frac{2^{\frac{p}{2}}\Gamma\left(\frac{p+1}{2}\right)}{\sqrt{\pi}}

The last formula is valid also for any non-integer /p/ > −1. When the mean /μ/ is not zero, the plain and absolute moments can be expressed in terms of confluent hypergeometric functions {}_1F_1 and /U/.^[/citation needed/]

\operatorname{E}\left[X^p\right] = \sigma^p \cdot (-i\sqrt{2}\sgn\mu)^p \; U\left(-\frac{1}{2}p,\, \frac{1}{2},\, -\frac{1}{2}(\mu/\sigma)^2\right),

\operatorname{E}\left[|X|^p\right] = \sigma^p \cdot 2^{\frac p 2} \frac{\Gamma\left(\frac{1+p}{2}\right)}{\sqrt\pi}\; {}_1F_1\left(-\frac{1}{2}p,\, \frac{1}{2},\, -\frac{1}{2}(\mu/\sigma)^2\right).

These expressions remain valid even if /p/ is not integer. See also generalized Hermite polynomials.

Order  Non-central moment                                                Central moment
1      /μ/                                                               0
2      /μ/^2 + /σ/^2                                                     /σ/^2
3      /μ/^3 + 3/μσ/^2                                                   0
4      /μ/^4 + 6/μ/^2/σ/^2 + 3/σ/^4                                      3/σ/^4
5      /μ/^5 + 10/μ/^3/σ/^2 + 15/μσ/^4                                   0
6      /μ/^6 + 15/μ/^4/σ/^2 + 45/μ/^2/σ/^4 + 15/σ/^6                     15/σ/^6
7      /μ/^7 + 21/μ/^5/σ/^2 + 105/μ/^3/σ/^4 + 105/μσ/^6                  0
8      /μ/^8 + 28/μ/^6/σ/^2 + 210/μ/^4/σ/^4 + 420/μ/^2/σ/^6 + 105/σ/^8   105/σ/^8
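The even/odd pattern of the central moments can be verified by numerically integrating x^p against the density; the midpoint rule over ±10σ below is a rough illustration rather than a production quadrature:

```python
import math

def double_factorial(n):
    """n!! = n * (n-2) * (n-4) * ... down to 1 or 2."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def central_moment(p, sigma, n=100_000, half_width=10.0):
    """E[(X - mu)^p] for X ~ N(mu, sigma^2), by midpoint integration over ±half_width*sigma."""
    h = 2 * half_width * sigma / n
    total = 0.0
    for i in range(n):
        x = -half_width * sigma + (i + 0.5) * h
        pdf = math.exp(-x * x / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
        total += x ** p * pdf
    return total * h

sigma = 1.5
for p in range(1, 7):
    # 0 for odd p, sigma^p * (p-1)!! for even p
    expected = 0.0 if p % 2 else sigma ** p * double_factorial(p - 1)
    assert abs(central_moment(p, sigma) - expected) < 1e-3
```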

Fourier transform and characteristic function

The Fourier transform of a normal distribution /f/ with mean /μ/ and deviation /σ/ is^[13]

\hat\phi(t) = \int_{-\infty}^\infty\! f(x)e^{itx}\, dx = e^{i\mu t} e^{-\frac12 (\sigma t)^2}

where /i/ is the imaginary unit. If the mean /μ/ is zero, the first factor is 1, and the Fourier transform is also a normal distribution on the frequency domain, with mean 0 and standard deviation 1//σ/. In particular, the standard normal distribution /ϕ/ (with /μ/=0 and /σ/=1) is an eigenfunction of the Fourier transform.

In probability theory, the Fourier transform of the probability distribution of a real-valued random variable /X/ is called the characteristic function of that variable, and can be defined as the expected value of /e/^/itX/, as a function of the real variable /t/ (the frequency parameter of the Fourier transform). This definition can be analytically extended to a complex-valued parameter /t/.^[14]

Moment and cumulant generating functions

The moment generating function of a real random variable /X/ is the expected value of /e^tX/, as a function of the real parameter /t/. For a normal distribution with mean /μ/ and deviation /σ/, the moment generating function exists and is equal to

M(t) = \hat\phi(-it) = e^{\mu t} e^{\frac12 \sigma^2 t^2}

The cumulant generating function is the logarithm of the moment generating function, namely

g(t) = \ln M(t) = \mu t + \frac{1}{2} \sigma^2 t^2

Since this is a quadratic polynomial in /t/, only the first two cumulants are nonzero, namely the mean /μ/ and the variance /σ/^2.

Cumulative distribution function

The cumulative distribution function (CDF) of the standard normal distribution, usually denoted with the capital Greek letter \Phi (phi), is the integral

\Phi(x)\; = \;\frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-t^2/2} \, dt

In statistics one often uses the related error function, or erf(/x/), defined as the probability of a random variable with normal distribution of mean 0 and variance 1/2 falling in the range [−/x/, /x/]; that is

\operatorname{erf}(x)\; =\; \frac{1}{\sqrt{\pi}} \int_{-x}^x e^{-t^2} \, dt

These integrals cannot be expressed in terms of elementary functions, and are often said to be special functions. They are closely related, namely


\Phi(x)\; =\; \frac12\left[1 + \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right]

For a generic normal distribution /f/ with mean /μ/ and deviation /σ/, the cumulative distribution function is

F(x)\;=\;\Phi\left(\frac{x-\mu}{\sigma}\right)\;=\; \frac12\left[1 + \operatorname{erf}\left(\frac{x-\mu}{\sigma\sqrt{2}}\right)\right]

The complement of the standard normal CDF, Q(x) = 1 - \Phi(x), is often called the Q-function, especially in engineering texts.^[15] ^[16] It gives the probability that the value of a standard normal random variable /X/ will exceed /x/. Other definitions of the /Q/-function, all of which are simple transformations of \Phi, are also used occasionally.^[17]

The graph of the standard normal CDF \Phi has 2-fold rotational symmetry around the point (0,1/2); that is, \Phi(-x) = 1 - \Phi(x). Its antiderivative (indefinite integral) is \int \Phi(x)\, dx = x\Phi(x) + \phi(x).

* The cumulative distribution function (CDF) of the standard normal distribution can be expanded by integration by parts into a series:

\Phi(x)\; =\; 0.5 + \frac{1}{\sqrt{2\pi}}\cdot e^{-x^2/2}\left[x+\frac{x^3}{3}+\frac{x^5}{3\cdot 5}+\cdots+\frac{x^{2n+1}}{3\cdot 5\cdot 7\cdots (2n+1)}\right]

Example of a Pascal function to calculate the CDF (sum of the first 100 elements):

function CDF(x: extended): extended;
var
  value, sum: extended;
  i: integer;
begin
  sum := x;
  value := x;
  for i := 1 to 100 do
  begin
    value := value * x * x / (2 * i + 1);
    sum := sum + value;
  end;
  result := 0.5 + (sum / sqrt(2 * pi)) * exp(-(x * x) / 2);
end;
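A Python rendering of the same series, checked against the closed form \Phi(x) = \frac12[1 + \operatorname{erf}(x/\sqrt{2})] using the standard library's math.erf:

```python
import math

def cdf_series(x, terms=100):
    """Standard normal CDF via the series expansion above (sum of `terms` elements)."""
    value = x
    total = x
    for i in range(1, terms + 1):
        value *= x * x / (2 * i + 1)   # next term x^(2i+1) / (1*3*5*...*(2i+1))
        total += value
    return 0.5 + (total / math.sqrt(2 * math.pi)) * math.exp(-x * x / 2)

def cdf_erf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# The two agree to high accuracy for moderate x
for x in (-3.0, -1.0, 0.0, 0.5, 2.0, 3.0):
    assert abs(cdf_series(x) - cdf_erf(x)) < 1e-12
```

For large |x| the series needs more terms to converge; the 100-term truncation mirrors the Pascal example above.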

Standard deviation and tolerance intervals

Main article: Tolerance interval

[Figure: standard deviation diagram.] Dark blue is less than one standard deviation away from the mean. For the normal distribution, this accounts for 68.2% of the set, while two standard deviations from the mean (medium and dark blue) account for 95.4%, and three standard deviations (light, medium, and dark blue) account for 99.7%.


About 68% of values drawn from a normal distribution are within one standard deviation /σ/ away from the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. This fact is known as the 68-95-99.7 (empirical) rule, or the /3-sigma rule/.

More precisely, the probability that a normal deviate lies in the range between /μ/ − /nσ/ and /μ/ + /nσ/ is given by

F(\mu+n\sigma) - F(\mu-n\sigma) = \Phi(n)-\Phi(-n) = \mathrm{erf}\left(\frac{n}{\sqrt{2}}\right),

To 12 decimal places, the values for /n/ = 1, 2, …, 6 are:^[18]

/n/  /F/(/μ/+/nσ/) − /F/(/μ/−/nσ/)  i.e. 1 minus …   or 1 in …       OEIS
1    0.682689492137                  0.317310507863   3.15148718753   A178647
2    0.954499736104                  0.045500263896   21.9778945080   A110894
3    0.997300203937                  0.002699796063   370.398347345
4    0.999936657516                  0.000063342484   15787.1927673
5    0.999999426697                  0.000000573303   1744277.89362
6    0.999999998027                  0.000000001973   506797345.897
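Since F(\mu+n\sigma) - F(\mu-n\sigma) = \operatorname{erf}(n/\sqrt{2}), the first rows of the table can be reproduced directly:

```python
import math

def coverage(n):
    """P(|X - mu| < n*sigma) for a normal deviate X: erf(n / sqrt(2))."""
    return math.erf(n / math.sqrt(2))

# The 68-95-99.7 rule, to the table's precision
assert abs(coverage(1) - 0.682689492137) < 1e-12
assert abs(coverage(2) - 0.954499736104) < 1e-12
assert abs(coverage(3) - 0.997300203937) < 1e-12
```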

Quantile function

The quantile function of a distribution is the inverse of the cumulative distribution function. The quantile function of the standard normal distribution is called the probit function, and can be expressed in terms of the inverse error function:

\Phi^{-1}(p)\; =\; \sqrt2\;\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1).

For a normal random variable with mean /μ/ and variance /σ/^2, the quantile function is

F^{-1}(p) = \mu + \sigma\Phi^{-1}(p) = \mu + \sigma\sqrt2\,\operatorname{erf}^{-1}(2p - 1), \quad p\in(0,1).

The quantile \Phi^{-1}(p) of the standard normal distribution is commonly denoted as /z_p/. These values are used in hypothesis testing, construction of confidence intervals and Q-Q plots. A normal random variable /X/ will exceed /μ/ + /σz_p/ with probability 1−/p/, and will lie outside the interval /μ/ ± /σz_p/ with probability 2(1−/p/). In particular, the quantile /z/_0.975 is 1.96; therefore a normal random variable will lie outside the interval /μ/ ± 1.96/σ/ in only 5% of cases.
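In Python, statistics.NormalDist.inv_cdf (standard library, Python 3.8+) plays the role of the quantile function F^{-1}; a sketch recovering /z/_0.975 ≈ 1.96, with arbitrary illustrative values of /μ/, /σ/ and /p/:

```python
from statistics import NormalDist

Z = NormalDist()                        # standard normal

# z_p = Phi^{-1}(p); the 97.5% quantile is the familiar 1.96
z_975 = Z.inv_cdf(0.975)
assert abs(z_975 - 1.959963984540) < 1e-9

# General quantile: F^{-1}(p) = mu + sigma * Phi^{-1}(p)
mu, sigma = 10.0, 3.0
X = NormalDist(mu, sigma)
p = 0.9
assert abs(X.inv_cdf(p) - (mu + sigma * Z.inv_cdf(p))) < 1e-9
```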

Page 12: Normal Distribution - Wikipedia, The Free Encyclopedia

The following table gives the multiple /n/ of /σ/ such that /X/ will lie in the range /μ/ ± /nσ/ with a specified probability /p/. These values are useful to determine tolerance intervals </wiki/Tolerance_interval> for sample averages </wiki/Sample_mean_and_sample_covariance#Sample_mean> and other statistical estimators </wiki/Estimator> with normal (or asymptotically </wiki/Asymptotic> normal) distributions:^[19] <#cite_note-19>

/p/ = /F/(/μ/ + /nσ/) − /F/(/μ/ − /nσ/)   /n/              /p/          /n/
0.80    1.281551565545                    0.999            3.290526731492
0.90    1.644853626951                    0.9999           3.890591886413
0.95    1.959963984540                    0.99999          4.417173413469
0.98    2.326347874041                    0.999999         4.891638475699
0.99    2.575829303549                    0.9999999        5.326723886384
0.995   2.807033768344                    0.99999999       5.730728868236
0.998   3.090232306168                    0.999999999      6.109410204869
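Each multiple /n/ in the table is the quantile Φ^−1 ((1 + /p/)/2), since the two-sided event /μ/ ± /nσ/ leaves probability (1 − /p/)/2 in each tail. A sketch reproducing a few table entries with `statistics.NormalDist` (the helper name `two_sided_multiple` is illustrative):

```python
from statistics import NormalDist

def two_sided_multiple(p: float) -> float:
    """Return n such that P(mu - n*sigma < X < mu + n*sigma) = p."""
    return NormalDist().inv_cdf((1 + p) / 2)

for p in (0.80, 0.95, 0.999):
    print(p, two_sided_multiple(p))
```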

Zero-variance limit

In the limit </wiki/Limit_(mathematics)> when /σ/ tends to zero, the probability density /f/(/x/) eventually tends to zero at any /x/ ≠ /μ/, but grows without limit if /x/ = /μ/, while its integral remains equal to 1. Therefore, the normal distribution cannot be defined as an ordinary function </wiki/Function_(mathematics)> when /σ/ = 0.

However, one can define the normal distribution with zero variance as a generalized function </wiki/Generalized_function>; specifically, as Dirac's "delta function" </wiki/Dirac_delta_function> /δ/ translated by the mean /μ/, that is /f/(/x/) = /δ/(/x/−/μ/). Its CDF is then the Heaviside step function </wiki/Heaviside_step_function> translated by the mean /μ/, namely

F(x) = \begin{cases} 0 & \text{if }x < \mu \\ 1 & \text{if }x \geq \mu \end{cases}

The central limit theorem

[Animation </wiki/File:De_moivre-laplace.gif>: As the number of discrete events increases, the function begins to resemble a normal distribution.]
[Figure </wiki/File:Dice_sum_central_limit_theorem.svg>: Comparison of probability density functions /p/(/k/) for the sum of /n/ fair 6-sided dice, showing their convergence to a normal distribution with increasing /n/, in accordance with the central limit theorem. In the bottom-right graph, smoothed profiles of the previous graphs are rescaled, superimposed and compared with a normal distribution (black curve).]
Main article: Central limit theorem </wiki/Central_limit_theorem>

The central limit theorem states that under certain (fairly common) conditions, the sum of many random variables will have an approximately normal distribution. More specifically, if /X/_1 , …, /X_n / are independent and identically distributed </wiki/Independent_and_identically_distributed> random variables with the same arbitrary distribution, zero mean, and variance /σ/^2 , and /Z/ is their mean scaled by \sqrt{n},


Z = \sqrt{n}\left(\frac{1}{n}\sum_{i=1}^n X_i\right)

Then, as /n/ increases, the probability distribution of /Z/ will tend to the normal distribution with zero mean and variance /σ/^2 .
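The convergence can be watched by simulation. A sketch, assuming fair six-sided dice centered to zero mean (variance 35/12): the scaled mean /Z/ should have sample mean near 0, sample variance near /σ/^2 , and about 68.3% of its values within one /σ/ of zero. The trial counts and tolerances are arbitrary Monte Carlo choices:

```python
import random
from statistics import fmean, pvariance

random.seed(1)
n, trials = 30, 20_000
sigma2 = 35 / 12  # variance of a fair die; subtracting 3.5 gives zero mean

zs = []
for _ in range(trials):
    rolls = [random.randint(1, 6) - 3.5 for _ in range(n)]  # zero-mean X_i
    zs.append(n ** 0.5 * fmean(rolls))  # Z = sqrt(n) * sample mean

# By the CLT, Z is approximately N(0, sigma2); check the 68.3% rule
inside = sum(abs(z) < sigma2 ** 0.5 for z in zs) / trials
print(fmean(zs), pvariance(zs), inside)
```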

The theorem can be extended to variables /X_i / that are not independent and/or not identically distributed if certain constraints are placed on the degree of dependence and the moments of the distributions.

Many test statistics </wiki/Test_statistic>, scores </wiki/Score_(statistics)>, and estimators </wiki/Estimator> encountered in practice contain sums of certain random variables in them, and even more estimators can be represented as sums of random variables through the use of influence functions </wiki/Influence_function_(statistics)>. The central limit theorem implies that those statistical parameters will have asymptotically normal distributions.

The central limit theorem also implies that certain distributions can be approximated by the normal distribution, for example:

 * The binomial distribution </wiki/Binomial_distribution> /B/(/n/, /p/) is approximately normal </wiki/De_Moivre%E2%80%93Laplace_theorem> with mean /np/ and variance /np/(1−/p/) for large /n/ and for /p/ not too close to zero or one.
 * The Poisson distribution </wiki/Poisson_distribution> with parameter /λ/ is approximately normal with mean /λ/ and variance /λ/, for large values of /λ/.^[20] <#cite_note-20>
 * The chi-squared distribution </wiki/Chi-squared_distribution> /χ/^2 (/k/) is approximately normal with mean /k/ and variance 2/k/, for large /k/.
 * The Student's t-distribution </wiki/Student%27s_t-distribution> /t/(/ν/) is approximately normal with mean 0 and variance 1 when /ν/ is large.

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and on the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.
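The binomial case gives a concrete, exactly computable illustration of this tail behaviour. A sketch comparing the exact /B/(100, 0.5) probabilities with the normal density of mean /np/ and variance /np/(1 − /p/), at the center and in a tail (the cutoffs in the comments are just what this particular computation produces):

```python
import math

n, p = 100, 0.5
mu, var = n * p, n * p * (1 - p)

def binom_pmf(k: int) -> float:
    """Exact binomial probability P(X = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x: float) -> float:
    """Density of N(np, np(1-p)) at x."""
    return math.exp(-(x - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Near the mean the absolute error is tiny ...
err_center = abs(binom_pmf(50) - normal_pdf(50))
# ... but the *relative* error in the tail is large
rel_tail = abs(binom_pmf(75) - normal_pdf(75)) / binom_pmf(75)
print(err_center, rel_tail)
```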

A general upper bound for the approximation error in the central limit theorem is given by the Berry–Esseen theorem </wiki/Berry%E2%80%93Esseen_theorem>; improvements of the approximation are given by the Edgeworth expansions </wiki/Edgeworth_expansion>.

Operations on normal deviates

The family of normal distributions is closed under linear transformations: if /X/ is normally distributed with mean /μ/ and standard deviation /σ/, then the variable /Y/ = /aX/ + /b/, for any real numbers /a/ and /b/, is also normally distributed, with mean /aμ/ + /b/ and standard deviation /|a|σ/.

Also, if /X/_1 and /X/_2 are two independent </wiki/Independence_(probability_theory)> normal random variables, with means /μ/_1 , /μ/_2 and standard deviations /σ/_1 , /σ/_2 , then their sum /X/_1 + /X/_2 will also be normally distributed,^[proof]


</wiki/Sum_of_normally_distributed_random_variables> with mean /μ/_1 + /μ/_2 and variance \sigma_1^2 + \sigma_2^2.

In particular, if /X/ and /Y/ are independent normal deviates with zero mean and variance /σ/^2 , then /X/ + /Y/ and /X/ − /Y/ are also independent and normally distributed, with zero mean and variance 2/σ/^2 . This is a special case of the polarization identity </wiki/Polarization_identity>.^[21] <#cite_note-21>
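These closure rules can be sketched with `statistics.NormalDist`, whose `+` and `*` operators implement exactly the linear-transformation and independent-sum rules above (two `NormalDist` operands are treated as independent):

```python
from statistics import NormalDist

X = NormalDist(mu=2, sigma=3)

# Y = aX + b is normal with mean a*mu + b and standard deviation |a|*sigma
Y = -2 * X + 5
assert (Y.mean, Y.stdev) == (1, 6)

# Sum of independent normals: means add, variances add
X1, X2 = NormalDist(1, 3), NormalDist(2, 4)
S = X1 + X2
assert S.mean == 3 and S.stdev == 5  # sqrt(3^2 + 4^2) = 5
```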

Also, if /X/_1 , /X/_2 are two independent normal deviates with mean /μ/ and deviation /σ/, and /a/, /b/ are arbitrary real numbers, then the variable

X_3 = \frac{aX_1 + bX_2 - (a+b)\mu}{\sqrt{a^2+b^2}} + \mu

is also normally distributed with mean /μ/ and deviation /σ/. It follows that the normal distribution is stable </wiki/Stable_distribution> (with exponent /α/ = 2).

More generally, any linear combination </wiki/Linear_combination> of independent normal deviates is a normal deviate.

Infinite divisibility and Cramér's theorem

For any positive integer /n/, any normal distribution with mean /μ/ and variance /σ/^2 is the distribution of the sum of /n/ independent normal deviates, each with mean /μ/n/ and variance /σ/^2/n/. This property is called infinite divisibility </wiki/Infinite_divisibility_(probability)>.^[22] <#cite_note-22>

Conversely, if /X/_1 and /X/_2 are independent random variables and their sum /X/_1 + /X/_2 has a normal distribution, then both /X/_1 and /X/_2 must be normal deviates.^[23] <#cite_note-23>

This result is known as *Cramér's decomposition theorem </wiki/Cram%C3%A9r%27s_theorem>*, and is equivalent to saying that the convolution </wiki/Convolution> of two distributions is normal if and only if both are normal. Cramér's theorem implies that a linear combination of independent non-Gaussian variables will never have an exactly normal distribution, although it may approach it arbitrarily closely.^[24] <#cite_note-Bryc_1995_35-24>

Bernstein's theorem

Bernstein's theorem states that if /X/ and /Y/ are independent and /X/ + /Y/ and /X/ − /Y/ are also independent, then both /X/ and /Y/ must necessarily have normal distributions.^[25] <#cite_note-LK-25> ^[26] <#cite_note-26>

More generally, if /X/_1 , …, /X_n / are independent random variables, then two distinct linear combinations ∑/a_k X_k / and ∑/b_k X_k / will be independent if and only if all /X_k /'s are normal and ∑/a_k b_k σ/^2 _/k/ = 0, where /σ/^2 _/k/ denotes the variance of /X_k /.^[25] <#cite_note-LK-25>

Page 15: Normal Distribution - Wikipedia, The Free Encyclopedia

Other properties

 1. If the characteristic function /φ_X / of some random variable /X/ is of the form /φ_X /(/t/) = /e/^/Q/(/t/) , where /Q/(/t/) is a polynomial </wiki/Polynomial>, then the *Marcinkiewicz theorem* (named after Józef Marcinkiewicz </wiki/J%C3%B3zef_Marcinkiewicz>) asserts that /Q/ can be at most a quadratic polynomial, and therefore /X/ is a normal random variable.^[24] <#cite_note-Bryc_1995_35-24> The consequence of this result is that the normal distribution is the only distribution with a finite number (two) of non-zero cumulants </wiki/Cumulant>.
 2. If /X/ and /Y/ are jointly normal </wiki/Multivariate_normal_distribution> and uncorrelated </wiki/Uncorrelated>, then they are independent </wiki/Independence_(probability_theory)>. The requirement that /X/ and /Y/ should be /jointly/ normal is essential; without it the property does not hold.^[27] <#cite_note-27> ^[28] <#cite_note-28> ^[proof] </wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent> For non-normal random variables uncorrelatedness does not imply independence.
 3. The Kullback–Leibler divergence </wiki/Kullback%E2%80%93Leibler_divergence> of one normal distribution /X/_1 ∼ /N/(/μ/_1 , /σ/^2 _1 ) from another /X/_2 ∼ /N/(/μ/_2 , /σ/^2 _2 ) is given by:^[29] <#cite_note-29>

D_\mathrm{KL}( X_1 \,\|\, X_2 ) = \frac{(\mu_1 - \mu_2)^2}{2\sigma_2^2} \,+\, \frac12\left(\, \frac{\sigma_1^2}{\sigma_2^2} - 1 - \ln\frac{\sigma_1^2}{\sigma_2^2} \,\right)\ .

The Hellinger distance </wiki/Hellinger_distance> between the same distributions is equal to

H^2(X_1,X_2) = 1 \,-\, \sqrt{\frac{2\sigma_1\sigma_2}{\sigma_1^2+\sigma_2^2}} \; e^{-\frac{1}{4}\frac{(\mu_1-\mu_2)^2}{\sigma_1^2+\sigma_2^2}}\ .
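Both formulas are short enough to transcribe directly; the sketch below does so (the helper names `kl_divergence` and `hellinger_sq` are illustrative, not a library API) and checks two structural facts: both quantities vanish when the distributions coincide, and KL is asymmetric while Hellinger is symmetric:

```python
import math

def kl_divergence(m1, s1, m2, s2):
    """D_KL(N(m1, s1^2) || N(m2, s2^2)), per the formula above."""
    return ((m1 - m2) ** 2 / (2 * s2 ** 2)
            + 0.5 * (s1 ** 2 / s2 ** 2 - 1 - math.log(s1 ** 2 / s2 ** 2)))

def hellinger_sq(m1, s1, m2, s2):
    """Squared Hellinger distance between the same two normals."""
    return 1 - math.sqrt(2 * s1 * s2 / (s1 ** 2 + s2 ** 2)) * math.exp(
        -0.25 * (m1 - m2) ** 2 / (s1 ** 2 + s2 ** 2))

# Both vanish iff the distributions coincide
assert kl_divergence(0, 1, 0, 1) == 0 and hellinger_sq(0, 1, 0, 1) == 0
# KL is asymmetric in its arguments; Hellinger is symmetric
assert kl_divergence(0, 1, 1, 2) != kl_divergence(1, 2, 0, 1)
assert hellinger_sq(0, 1, 1, 2) == hellinger_sq(1, 2, 0, 1)
```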

 4. The Fisher information matrix </wiki/Fisher_information_matrix> for a normal distribution is diagonal and takes the form

\mathcal I = \begin{pmatrix} \frac{1}{\sigma^2} & 0 \\ 0 & \frac{1}{2\sigma^4} \end{pmatrix}

 5. The normal distributions belong to an exponential family </wiki/Exponential_family> with natural parameters \scriptstyle\theta_1=\frac{\mu}{\sigma^2} and \scriptstyle\theta_2=\frac{-1}{2\sigma^2}, and natural statistics /x/ and /x/^2 . The dual, expectation parameters for the normal distribution are /η/_1 = /μ/ and /η/_2 = /μ/^2 + /σ/^2 .
 6. The conjugate prior </wiki/Conjugate_prior> of the mean of a normal distribution is another normal distribution.^[30] <#cite_note-30> Specifically, if /x/_1 , …, /x_n / are iid /N/(/μ/, /σ/^2 ) and the prior is /μ/ ~ /N/(/μ/_0 , /σ/^2 _0 ), then the posterior distribution for the estimator of /μ/ will be

\mu \mid x_1,\ldots,x_n\ \sim\ \mathcal{N}\left( \frac{\frac{\sigma^2}{n}\mu_0 + \sigma_0^2\bar{x}}{\frac{\sigma^2}{n}+\sigma_0^2},\ \left( \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{\!-1} \right)
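The conjugate-prior update above is easy to turn into a small routine. A sketch (the function name `posterior` and the sample numbers are illustrative) that also checks the "precisions add" reading of the posterior variance:

```python
from statistics import fmean

def posterior(data, sigma2, mu0, sigma0_sq):
    """Posterior (mean, variance) for mu, per the formula above."""
    n = len(data)
    xbar = fmean(data)
    w = sigma2 / n  # variance of the sample mean
    mu_post = (w * mu0 + sigma0_sq * xbar) / (w + sigma0_sq)
    var_post = 1 / (n / sigma2 + 1 / sigma0_sq)  # precisions add
    return mu_post, var_post

data = [4.1, 5.3, 4.8, 5.0]
mu_post, var_post = posterior(data, sigma2=1.0, mu0=0.0, sigma0_sq=100.0)

# Posterior precision = data precision n/sigma^2 + prior precision
assert abs(1 / var_post - (4 / 1.0 + 1 / 100.0)) < 1e-12
# The posterior mean lies between the prior mean and the sample mean
assert 0.0 < mu_post < fmean(data)
```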

 7. Of all probability distributions over the reals with mean /μ/ and variance /σ/^2 , the normal distribution /N/(/μ/, /σ/^2 ) is the one with the maximum entropy </wiki/Maximum_entropy_probability_distribution>.^[31] <#cite_note-31>
 8. The family of normal distributions forms a manifold </wiki/Manifold> with constant curvature </wiki/Constant_curvature> −1. The same family is flat </wiki/Flat_manifold> with respect to the (±1)-connections ∇^(/e/) and ∇^(/m/) .^[32] <#cite_note-32>

Related distributions

Operations on a single random variable

If /X/ is distributed normally with mean /μ/ and variance /σ/^2 , then

 * The exponential of /X/ is distributed log-normally </wiki/Log-normal_distribution>: /e^X / ~ ln(/N/(/μ/, /σ/^2 )).
 * The absolute value of /X/ has a folded normal distribution </wiki/Folded_normal_distribution>: |/X/| ~ /N_f /(/μ/, /σ/^2 ). If /μ/ = 0 this is known as the half-normal distribution </wiki/Half-normal_distribution>.
 * The square of /X///σ/ has the noncentral chi-squared distribution </wiki/Noncentral_chi-squared_distribution> with one degree of freedom: /X/^2 //σ/^2 ~ /χ/^2 _1 (/μ/^2 //σ/^2 ). If /μ/ = 0, the distribution is called simply chi-squared </wiki/Chi-squared_distribution>.
 * The distribution of the variable /X/ restricted to an interval [/a/, /b/] is called the truncated normal distribution </wiki/Truncated_normal_distribution>.
 * (/X/ − /μ/)^−2 has a Lévy distribution </wiki/L%C3%A9vy_distribution> with location 0 and scale /σ/^−2 .

Combination of two independent random variables

If /X/_1 and /X/_2 are two independent standard normal random variables with mean 0 and variance 1, then

 * Their sum and difference is distributed normally with mean zero and variance two: /X/_1 ± /X/_2 ∼ /N/(0, 2).
 * Their product /Z/ = /X/_1 ·/X/_2 follows the "product-normal" distribution^[33] <#cite_note-33> with density function /f_Z /(/z/) = /π/^−1 /K/_0 (|/z/|), where /K/_0 is the modified Bessel function of the second kind </wiki/Macdonald_function>. This distribution is symmetric around zero, unbounded at /z/ = 0, and has the characteristic function </wiki/Characteristic_function_(probability_theory)> /φ_Z /(/t/) = (1 + /t/^2 )^−1/2 .
 * Their ratio follows the standard Cauchy distribution </wiki/Cauchy_distribution>: /X/_1 ÷ /X/_2 ∼ Cauchy(0, 1).
 * Their Euclidean norm \scriptstyle\sqrt{X_1^2\,+\,X_2^2} has the Rayleigh distribution </wiki/Rayleigh_distribution>.
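Two of these facts are easy to spot-check by Monte Carlo. A sketch with a fixed seed (the sample size and tolerances are arbitrary simulation choices; the Rayleigh mean used, σ√(π/2), is the standard Rayleigh(1) mean):

```python
import math
import random
from statistics import fmean, pvariance

random.seed(42)
N = 50_000
x1 = [random.gauss(0, 1) for _ in range(N)]
x2 = [random.gauss(0, 1) for _ in range(N)]

# X1 + X2 ~ N(0, 2): sample mean near 0, sample variance near 2
s = [a + b for a, b in zip(x1, x2)]
print(fmean(s), pvariance(s))

# sqrt(X1^2 + X2^2) ~ Rayleigh(1), whose mean is sqrt(pi/2)
r = [math.hypot(a, b) for a, b in zip(x1, x2)]
print(fmean(r), math.sqrt(math.pi / 2))
```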


Combination of two or more independent random variables

 * If /X/_1 , /X/_2 , …, /X_n / are independent standard normal random variables, then the sum of their squares has the chi-squared distribution </wiki/Chi-squared_distribution> with /n/ degrees of freedom

X_1^2 + \cdots + X_n^2\ \sim\ \chi_n^2.

 * If /X/_1 , /X/_2 , …, /X_n / are independent normally distributed random variables with means /μ/ and variances /σ/^2 , then their sample mean </wiki/Sample_mean> is independent from the sample standard deviation </wiki/Standard_deviation>,^[34] <#cite_note-34> which can be demonstrated using Basu's theorem </wiki/Basu%27s_theorem> or Cochran's theorem </wiki/Cochran%27s_theorem>.^[35] <#cite_note-35> The ratio of these two quantities will have the Student's t-distribution </wiki/Student%27s_t-distribution> with /n/ − 1 degrees of freedom:

t = \frac{\overline X - \mu}{S/\sqrt{n}} = \frac{\frac{1}{n}(X_1+\cdots+X_n) - \mu}{\sqrt{\frac{1}{n(n-1)}\left[(X_1-\overline X)^2+\cdots+(X_n-\overline X)^2\right]}} \ \sim\ t_{n-1}.

 * If /X/_1 , …, /X_n /, /Y/_1 , …, /Y_m / are independent standard normal random variables, then the ratio of their normalized sums of squares will have the F-distribution </wiki/F-distribution> with (/n/, /m/) degrees of freedom:^[36] <#cite_note-36>

F = \frac{\left(X_1^2+X_2^2+\cdots+X_n^2\right)/n}{\left(Y_1^2+Y_2^2+\cdots+Y_m^2\right)/m}\ \sim\ F_{n,\,m}.

Operations on the density function

The split normal distribution </wiki/Split_normal_distribution> is most directly defined in terms of joining scaled sections of the density functions of different normal distributions and rescaling the density to integrate to one. The truncated normal distribution </wiki/Truncated_normal_distribution> results from rescaling a section of a single density function.

Extensions

The notion of normal distribution, being one of the most important distributions in probability theory, has been extended far beyond the standard framework of the univariate (that is, one-dimensional) case (Case 1). All these extensions are also called /normal/ or /Gaussian/ laws, so a certain ambiguity in names exists.

 * The multivariate normal distribution </wiki/Multivariate_normal_distribution> describes the Gaussian law in the /k/-dimensional Euclidean space </wiki/Euclidean_space>. A


vector /X/ ∈ *R*^/k/ is multivariate-normally distributed if any linear combination of its components ∑^/k/_/j/=1 /a_j X_j / has a (univariate) normal distribution. The variance of /X/ is a /k×k/ symmetric positive-definite matrix /V/. The multivariate normal distribution is a special case of the elliptical distributions </wiki/Elliptical_distribution>. As such, its iso-density loci in the /k/ = 2 case are ellipses </wiki/Ellipse> and in the case of arbitrary /k/ are ellipsoids </wiki/Ellipsoid>.
 * Rectified Gaussian distribution </wiki/Rectified_Gaussian_distribution> a rectified version of normal distribution with all the negative elements reset to 0
 * Complex normal distribution </wiki/Complex_normal_distribution> deals with the complex normal vectors. A complex vector /X/ ∈ *C*^/k/ is said to be normal if both its real and imaginary components jointly possess a 2/k/-dimensional multivariate normal distribution. The variance-covariance structure of /X/ is described by two matrices: the /variance/ matrix Γ, and the /relation/ matrix /C/.
 * Matrix normal distribution </wiki/Matrix_normal_distribution> describes the case of normally distributed matrices.
 * Gaussian processes </wiki/Gaussian_process> are the normally distributed stochastic processes </wiki/Stochastic_process>. These can be viewed as elements of some infinite-dimensional Hilbert space </wiki/Hilbert_space> /H/, and thus are the analogues of multivariate normal vectors for the case /k/ = ∞. A random element /h/ ∈ /H/ is said to be normal if for any constant /a/ ∈ /H/ the scalar product </wiki/Scalar_product> (/a/, /h/) has a (univariate) normal distribution. The variance structure of such a Gaussian random element can be described in terms of the linear /covariance operator K: H → H/. Several Gaussian processes became popular enough to have their own names:
   o Brownian motion </wiki/Wiener_process>,
   o Brownian bridge </wiki/Brownian_bridge>,
   o Ornstein–Uhlenbeck process </wiki/Ornstein%E2%80%93Uhlenbeck_process>.
 * Gaussian q-distribution </wiki/Gaussian_q-distribution> is an abstract mathematical construction that represents a "q-analogue </wiki/Q-analogue>" of the normal distribution.
 * the q-Gaussian </wiki/Q-Gaussian> is an analogue of the Gaussian distribution, in the sense that it maximises the Tsallis entropy </wiki/Tsallis_entropy>, and is one type of Tsallis distribution </wiki/Tsallis_distribution>. Note that this distribution is different from the Gaussian q-distribution </wiki/Gaussian_q-distribution> above.

One of the main practical uses of the Gaussian law is to model the empirical distributions of many different random variables encountered in practice. In such a case a possible extension would be a richer family of distributions, having more than two parameters and therefore being able to fit the empirical distribution more accurately. Examples of such extensions are:

 * Pearson distribution </wiki/Pearson_distribution>: a four-parameter family of probability distributions that extend the normal law to include different skewness and kurtosis values.

Normality tests

Main article: Normality test </wiki/Normality_test>


Normality tests assess the likelihood that the given data set {/x/_1 , …, /x_n /} comes from a normal distribution. Typically the null hypothesis </wiki/Null_hypothesis> /H/_0 is that the observations are distributed normally with unspecified mean /μ/ and variance /σ/^2 , versus the alternative /H_a / that the distribution is arbitrary. Many tests (over 40) have been devised for this problem; the more prominent of them are outlined below:

 * *"Visual" tests* are more intuitively appealing but subjective at the same time, as they rely on informal human judgement to accept or reject the null hypothesis.
   o Q-Q plot </wiki/Q-Q_plot> is a plot of the sorted values from the data set against the expected values of the corresponding quantiles from the standard normal distribution. That is, it's a plot of points of the form (Φ^−1 (/p_k /), /x/_(/k/) ), where the plotting points /p_k / are equal to /p_k / = (/k/ − /α/)/(/n/ + 1 − 2/α/) and /α/ is an adjustment constant, which can be anything between 0 and 1. If the null hypothesis is true, the plotted points should approximately lie on a straight line.
   o P-P plot </wiki/P-P_plot> is similar to the Q-Q plot, but used much less frequently. This method consists of plotting the points (Φ(/z/_(/k/) ), /p_k /), where \scriptstyle z_{(k)} = (x_{(k)}-\hat\mu)/\hat\sigma. For normally distributed data this plot should lie on a 45° line between (0, 0) and (1, 1).
   o Shapiro–Wilk test </wiki/Shapiro%E2%80%93Wilk_test> employs the fact that the line in the Q-Q plot has the slope of /σ/. The test compares the least squares estimate of that slope with the value of the sample variance, and rejects the null hypothesis if these two quantities differ significantly.
   o Normal probability plot </wiki/Normal_probability_plot> (rankit </wiki/Rankit> plot)
 * *Moment tests*:
   o D'Agostino's K-squared test </wiki/D%27Agostino%27s_K-squared_test>
   o Jarque–Bera test </wiki/Jarque%E2%80%93Bera_test>
 * *Empirical distribution function tests*:
   o Lilliefors test </wiki/Lilliefors_test> (an adaptation of the Kolmogorov–Smirnov test </wiki/Kolmogorov%E2%80%93Smirnov_test>)
   o Anderson–Darling test </wiki/Anderson%E2%80%93Darling_test>

Estimation of parameters

See also: Standard error of the mean </wiki/Standard_error_of_the_mean>, Standard deviation § Estimation </wiki/Standard_deviation#Estimation>, Variance § Estimation </wiki/Variance#Estimation> and Maximum likelihood § Continuous distribution, continuous parameter space </wiki/Maximum_likelihood#Continuous_distribution.2C_continuous_parameter_space>

It is often the case that we don't know the parameters of the normal distribution, but instead want to estimate </wiki/Estimation_theory> them. That is, having a sample (/x/_1 , …, /x_n /) from a normal /N/(/μ/, /σ/^2 ) population we would like to learn the approximate values of the parameters /μ/ and /σ/^2 . The standard approach to this problem is the maximum likelihood </wiki/Maximum_likelihood> method, which requires maximization of the /log-likelihood function/:

\ln\mathcal{L}(\mu,\sigma^2) = \sum_{i=1}^n \ln f(x_i;\,\mu,\sigma^2) = -\frac{n}{2}\ln(2\pi) - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2.

Taking derivatives with respect to /μ/ and /σ/^2 and solving the resulting system of first order conditions yields the /maximum likelihood estimates/:

\hat{\mu} = \overline{x} \equiv \frac{1}{n}\sum_{i=1}^n x_i, \qquad \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (x_i - \overline{x})^2.
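A sketch on made-up data (the numbers are illustrative): the two MLE formulas coincide with the standard library's `fmean` and `pvariance` (the "population" variance, which divides by /n/):

```python
from statistics import fmean, pvariance

data = [2.3, 1.9, 2.8, 2.1, 2.4, 2.6]
n = len(data)

# Maximum likelihood estimates from the formulas above
mu_hat = sum(data) / n
sigma2_hat = sum((x - mu_hat) ** 2 for x in data) / n

# They match the stdlib's sample mean and population variance
assert abs(mu_hat - fmean(data)) < 1e-12
assert abs(sigma2_hat - pvariance(data)) < 1e-12
```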

Estimator \scriptstyle\hat\mu is called the /sample mean </wiki/Sample_mean>/, since it is the arithmetic mean of all observations. The statistic \scriptstyle\overline{x} is complete </wiki/Complete_statistic> and sufficient </wiki/Sufficient_statistic> for /μ/, and therefore by the Lehmann–Scheffé theorem </wiki/Lehmann%E2%80%93Scheff%C3%A9_theorem>, \scriptstyle\hat\mu is the uniformly minimum variance unbiased </wiki/Uniformly_minimum_variance_unbiased> (UMVU) estimator.^[37] <#cite_note-Kri127-37> In finite samples it is distributed normally:

\hat\mu \ \sim\ \mathcal{N}(\mu,\,\sigma^2\!/n).

The variance of this estimator is equal to the /μμ/-element of the inverse Fisher information matrix </wiki/Fisher_information_matrix> \scriptstyle\mathcal{I}^{-1}. This implies that the estimator is finite-sample efficient </wiki/Efficient_estimator>. Of practical importance is the fact that the standard error </wiki/Standard_error_(statistics)> of \scriptstyle\hat\mu is proportional to \scriptstyle1/\sqrt{n}; that is, if one wishes to decrease the standard error by a factor of 10, one must increase the number of points in the sample by a factor of 100. This fact is widely used in determining sample sizes for opinion polls and the number of trials in Monte Carlo simulations </wiki/Monte_Carlo_simulation>.

From the standpoint of the asymptotic theory </wiki/Asymptotic_theory_(statistics)>, \scriptstyle\hat\mu is consistent </wiki/Consistent_estimator>; that is, it converges in probability </wiki/Convergence_in_probability> to /μ/ as /n/ → ∞. The estimator is also asymptotically normal </wiki/Asymptotic_normality>, which is a simple corollary of the fact that it is normal in finite samples:

\sqrt{n}(\hat\mu-\mu) \ \xrightarrow{d}\ \mathcal{N}(0,\,\sigma^2).

The estimator \scriptstyle\hat\sigma^2 is called the /sample variance </wiki/Sample_variance>/, since it is the variance of the sample (/x/_1 , …, /x_n /). In practice, another estimator is often used instead of \scriptstyle\hat\sigma^2. This other estimator is denoted /s/^2 , and is also called the /sample variance/, which represents a certain ambiguity in terminology; its square root /s/ is called the /sample standard deviation/. The estimator /s/^2 differs from \scriptstyle\hat\sigma^2 by having (/n/ − 1) instead of /n/ in the denominator (the so-called Bessel's correction </wiki/Bessel%27s_correction>):

s^2 = \frac{n}{n-1}\,\hat\sigma^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2.

The difference between /s/^2 and \scriptstyle\hat\sigma^2 becomes negligibly small for large /n'/s. In finite samples, however, the motivation behind the use of /s/^2 is that it is an unbiased estimator


</wiki/Unbiased_estimator> of the underlying parameter /σ/^2 , whereas \scriptstyle\hat\sigma^2 is biased. Also, by the Lehmann–Scheffé theorem the estimator /s/^2 is uniformly minimum variance unbiased (UMVU),^[37] <#cite_note-Kri127-37> which makes it the "best" estimator among all unbiased ones. However it can be shown that the biased estimator \scriptstyle\hat\sigma^2 is "better" than /s/^2 in terms of the mean squared error </wiki/Mean_squared_error> (MSE) criterion. In finite samples both /s/^2 and \scriptstyle\hat\sigma^2 have scaled chi-squared distribution </wiki/Chi-squared_distribution> with (/n/ − 1) degrees of freedom:

s^2 \ \sim\ \frac{\sigma^2}{n-1} \cdot \chi^2_{n-1}, \qquad \hat\sigma^2 \ \sim\ \frac{\sigma^2}{n} \cdot \chi^2_{n-1}\ .

The first of these expressions shows that the variance of /s/^2 is equal to 2/σ/^4 /(/n/−1), which is slightly greater than the /σσ/-element of the inverse Fisher information matrix \scriptstyle\mathcal{I}^{-1}. Thus, /s/^2 is not an efficient estimator for /σ/^2 ; moreover, since /s/^2 is UMVU, we can conclude that the finite-sample efficient estimator for /σ/^2 does not exist.
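Bessel's correction maps directly onto the two variance functions in Python's standard library, which makes the relation /s/^2 = /n//(/n/ − 1) · \scriptstyle\hat\sigma^2 easy to verify on made-up data (the numbers are illustrative):

```python
from statistics import pvariance, variance

data = [2.3, 1.9, 2.8, 2.1, 2.4, 2.6]
n = len(data)

sigma2_hat = pvariance(data)  # MLE: divides by n (biased)
s2 = variance(data)           # sample variance: divides by n - 1 (unbiased)

# Bessel's correction: s^2 = n/(n-1) * sigma_hat^2
assert abs(s2 - n / (n - 1) * sigma2_hat) < 1e-12
```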

Applying the asymptotic theory, both estimators /s/^2 and \scriptstyle\hat\sigma^2 are consistent; that is, they converge in probability to /σ/^2 as the sample size /n/ → ∞. The two estimators are also both asymptotically normal:

\sqrt{n}(\hat\sigma^2 - \sigma^2) \simeq \sqrt{n}(s^2-\sigma^2)\ \xrightarrow{d}\ \mathcal{N}(0,\,2\sigma^4).

In particular, both estimators are asymptotically efficient for /σ/^2 .

By Cochran's theorem </wiki/Cochran%27s_theorem>, for normal distributions the sample mean \scriptstyle\hat\mu and the sample variance /s/^2 are independent </wiki/Independence_(probability_theory)>, which means there can be no gain in considering their joint distribution </wiki/Joint_distribution>. There is also a converse theorem: if in a sample the sample mean and sample variance are independent, then the sample must have come from the normal distribution. The independence between \scriptstyle\hat\mu and /s/ can be employed to construct the so-called /t-statistic/:

t = \frac{\hat\mu-\mu}{s/\sqrt{n}} = \frac{\overline{x}-\mu}{\sqrt{\frac{1}{n(n-1)}\sum(x_i-\overline{x})^2}}\ \sim\ t_{n-1}

This quantity /t/ has the Student's t-distribution </wiki/Student%27s_t-distribution> with (/n/ − 1) degrees of freedom, and it is an ancillary statistic </wiki/Ancillary_statistic> (independent of the value of the parameters). Inverting the distribution of this /t/-statistic will allow us to construct the confidence interval </wiki/Confidence_interval> for /μ/;^[38] <#cite_note-38> similarly, inverting the /χ/^2 distribution of the statistic /s/^2 will give us the confidence interval for /σ/^2 :^[39] <#cite_note-39>

\begin{align} & \mu \in \left[\, \hat\mu + t_{n-1,\alpha/2}\, \frac{1}{\sqrt{n}}s,\ \ \hat\mu + t_{n-1,1-\alpha/2}\,\frac{1}{\sqrt{n}}s \,\right] \approx \left[\, \hat\mu - |z_{\alpha/2}|\frac{1}{\sqrt n}s,\ \ \hat\mu + |z_{\alpha/2}|\frac{1}{\sqrt n}s \,\right], \\ & \sigma^2 \in \left[\, \frac{(n-1)s^2}{\chi^2_{n-1,1-\alpha/2}},\ \ \frac{(n-1)s^2}{\chi^2_{n-1,\alpha/2}} \,\right] \approx \left[\, s^2 - |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2,\ \ s^2 + |z_{\alpha/2}|\frac{\sqrt{2}}{\sqrt{n}}s^2 \,\right], \end{align}

where /t_k,p / and /χ/^2 _/k,p/ are the /p/^th quantiles </wiki/Quantile> of the /t/- and /χ/^2 -distributions respectively. These confidence intervals are of the /level/ 1 − /α/, meaning that the true values /μ/ and /σ/^2 fall outside of these intervals with probability /α/. In practice people usually take /α/ = 5%, resulting in the 95% confidence intervals. The approximate formulas in the display above were derived from the asymptotic distributions of \scriptstyle\hat\mu and /s/^2 . The approximate formulas become valid for large values of /n/, and are more convenient for the manual calculation since the standard normal quantiles /z_α/2 / do not depend on /n/. In particular, the most popular value of /α/ = 5% results in |/z/_0.025 | = 1.96 </wiki/1.96>.
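The large-/n/ approximate intervals are a short computation. A sketch on made-up data (illustrative numbers only; note this uses the /z/-approximation with 1.96, so for a sample this small the exact /t/-based interval would be somewhat wider):

```python
from statistics import fmean, stdev

data = [2.3, 1.9, 2.8, 2.1, 2.4, 2.6]
n, z = len(data), 1.96  # |z_0.025| for an approximate 95% interval

mu_hat, s = fmean(data), stdev(data)
half = z * s / n ** 0.5
ci_mu = (mu_hat - half, mu_hat + half)  # approx CI for mu

s2 = s ** 2
half_var = z * (2 ** 0.5 / n ** 0.5) * s2
ci_var = (s2 - half_var, s2 + half_var)  # approx CI for sigma^2

print(ci_mu, ci_var)
```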

Bayesian analysis of the normal distribution

Bayesian analysis of normally distributed data is complicated by the many different possibilities that may be considered:

 * Either the mean, or the variance, or neither, may be considered a fixed quantity.
 * When the variance is unknown, analysis may be done directly in terms of the variance, or in terms of the precision </wiki/Precision_(statistics)>, the reciprocal of the variance. The reason for expressing the formulas in terms of precision is that the analysis of most cases is simplified.
 * Both univariate and multivariate </wiki/Multivariate_normal_distribution> cases need to be considered.
 * Either conjugate </wiki/Conjugate_prior> or improper </wiki/Improper_prior> prior distributions </wiki/Prior_distribution> may be placed on the unknown variables.
 * An additional set of cases occurs in Bayesian linear regression </wiki/Bayesian_linear_regression>, where in the basic model the data is assumed to be normally distributed, and normal priors are placed on the regression coefficients </wiki/Regression_coefficient>. The resulting analysis is similar to the basic cases of independent identically distributed </wiki/Independent_identically_distributed> data, but more complex.

The formulas for the non-linear-regression cases are summarized in the conjugate prior </wiki/Conjugate_prior> article.

The sum of two quadratics

Scalar form

The following auxiliary formula is useful for simplifying the posterior </wiki/Posterior_distribution> update equations, which otherwise become fairly tedious.

a(x-y)^2 + b(x-z)^2 = (a + b)\left(x - \frac{ay+bz}{a+b}\right)^2 + \frac{ab}{a+b}(y-z)^2

This equation rewrites the sum of two quadratics in /x/ by expanding the squares, grouping the terms in /x/, and completing the square </wiki/Completing_the_square>. Note the following about the complex constant factors attached to some of the terms:

1. The factor \frac{ay+bz}{a+b} has the form of a weighted average of /y/ and /z/.
2. \frac{ab}{a+b} = \frac{1}{\frac{1}{a}+\frac{1}{b}} = (a^{-1} + b^{-1})^{-1}. This shows that this factor can be thought of as resulting from a situation where the reciprocals of quantities /a/ and /b/ add directly, so to combine /a/ and /b/ themselves, it's necessary to reciprocate, add, and reciprocate the result again to get back into the original units. This is exactly the sort of operation performed by the harmonic mean, so it is not surprising that \frac{ab}{a+b} is one-half the harmonic mean of /a/ and /b/.
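The scalar identity is easy to check numerically; the sketch below (plain Python, with illustrative names) evaluates both sides for a few arbitrary values:

```python
import math

def lhs(a, b, x, y, z):
    # left side: a(x - y)^2 + b(x - z)^2
    return a * (x - y) ** 2 + b * (x - z) ** 2

def rhs(a, b, x, y, z):
    # right side: (a + b)(x - (ay + bz)/(a + b))^2 + ab/(a + b) * (y - z)^2
    c = (a * y + b * z) / (a + b)
    return (a + b) * (x - c) ** 2 + a * b / (a + b) * (y - z) ** 2

for args in [(1.0, 2.0, 0.3, -1.1, 4.2), (0.5, 7.0, 2.0, 0.0, 1.0)]:
    assert math.isclose(lhs(*args), rhs(*args))
```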

Vector form

A similar formula can be written for the sum of two vector quadratics: If *x*, *y*, *z* are vectors of length /k/, and *A* and *B* are symmetric, invertible matrices of size k\times k, then

(\mathbf{y}-\mathbf{x})'\mathbf{A}(\mathbf{y}-\mathbf{x}) + (\mathbf{x}-\mathbf{z})'\mathbf{B}(\mathbf{x}-\mathbf{z}) = (\mathbf{x} - \mathbf{c})'(\mathbf{A}+\mathbf{B})(\mathbf{x} - \mathbf{c}) + (\mathbf{y} - \mathbf{z})'(\mathbf{A}^{-1} + \mathbf{B}^{-1})^{-1}(\mathbf{y} - \mathbf{z})

where

\mathbf{c} = (\mathbf{A} + \mathbf{B})^{-1}(\mathbf{A}\mathbf{y} + \mathbf{B}\mathbf{z})

Note that the form *x*′ *A* *x* is called a quadratic form and is a scalar:

\mathbf{x}'\mathbf{A}\mathbf{x} = \sum_{i,j} a_{ij} x_i x_j

In other words, it sums up all possible combinations of products of pairs of elements from *x*, with a separate coefficient for each. In addition, since x_i x_j = x_j x_i, only the sum a_{ij} + a_{ji} matters for any off-diagonal elements of *A*, and there is no loss of generality in assuming that *A* is symmetric. Furthermore, if *A* is symmetric, then the form \mathbf{x}'\mathbf{A}\mathbf{y} = \mathbf{y}'\mathbf{A}\mathbf{x}.
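As a sanity check of the vector identity, the sketch below verifies it for a pair of symmetric, invertible 2×2 matrices using hand-rolled linear algebra (the helper names are ours, not from any library):

```python
def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_inv(M):
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def quad(v, M, w):  # the quadratic form v' M w
    return dot(v, mat_vec(M, w))

def vsub(u, v):
    return [u[0] - v[0], u[1] - v[1]]

A = [[2.0, 0.5], [0.5, 1.0]]    # symmetric, invertible
B = [[3.0, -0.2], [-0.2, 4.0]]  # symmetric, invertible
x, y, z = [0.7, -1.3], [2.0, 0.1], [-0.5, 0.9]

lhs = quad(vsub(y, x), A, vsub(y, x)) + quad(vsub(x, z), B, vsub(x, z))
Ay_Bz = [p + q for p, q in zip(mat_vec(A, y), mat_vec(B, z))]
c = mat_vec(mat_inv(mat_add(A, B)), Ay_Bz)      # c = (A + B)^{-1}(Ay + Bz)
cap = mat_inv(mat_add(mat_inv(A), mat_inv(B)))  # (A^{-1} + B^{-1})^{-1}
rhs = quad(vsub(x, c), mat_add(A, B), vsub(x, c)) + quad(vsub(y, z), cap, vsub(y, z))
assert abs(lhs - rhs) < 1e-9
```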

The sum of differences from the mean

Another useful formula is as follows:

\sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} - \mu)^2

where \bar{x} = \frac{1}{n}\sum_{i=1}^n x_i.
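This decomposition can likewise be confirmed numerically; a minimal check, with arbitrary sample values:

```python
data = [1.2, 3.4, -0.7, 2.2, 5.0]
mu = 1.5                      # an arbitrary reference point
n = len(data)
xbar = sum(data) / n
lhs = sum((x - mu) ** 2 for x in data)
rhs = sum((x - xbar) ** 2 for x in data) + n * (xbar - mu) ** 2
assert abs(lhs - rhs) < 1e-9
```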

With known variance

For a set of i.i.d. normally distributed data points *X* of size /n/ where each individual point /x/ follows x \sim \mathcal{N}(\mu, \sigma^2) with known variance σ^2, the conjugate prior distribution is also normally distributed.

This can be shown more easily by rewriting the variance as the precision, i.e. using τ = 1/σ^2. Then if x \sim \mathcal{N}(\mu, \tau) and \mu \sim \mathcal{N}(\mu_0, \tau_0), where the second argument of \mathcal{N} here denotes the precision rather than the variance, we proceed as follows.

First, the likelihood function is (using the formula above for the sum of differences from the mean):

\begin{align} p(\mathbf{X}|\mu,\tau) &= \prod_{i=1}^n \sqrt{\frac{\tau}{2\pi}} \exp\left(-\frac{1}{2}\tau(x_i-\mu)^2\right) \\ &= \left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}} \exp\left(-\frac{1}{2}\tau \sum_{i=1}^n (x_i-\mu)^2\right) \\ &= \left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right]. \end{align}

Then, we proceed as follows:

\begin{align} p(\mu|\mathbf{X}) &\propto p(\mathbf{X}|\mu) p(\mu) \\ &= \left(\frac{\tau}{2\pi}\right)^{\frac{n}{2}} \exp\left[-\frac{1}{2}\tau \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \sqrt{\frac{\tau_0}{2\pi}} \exp\left(-\frac{1}{2}\tau_0(\mu-\mu_0)^2\right) \\ &\propto \exp\left(-\frac{1}{2}\left(\tau\left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right) + \tau_0(\mu-\mu_0)^2\right)\right) \\ &\propto \exp\left(-\frac{1}{2} \left(n\tau(\bar{x}-\mu)^2 + \tau_0(\mu-\mu_0)^2 \right)\right) \\ &= \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2 - \frac{1}{2}\,\dfrac{n\tau\tau_0}{n\tau+\tau_0}(\bar{x} - \mu_0)^2\right) \\ &\propto \exp\left(-\frac{1}{2}(n\tau + \tau_0)\left(\mu - \dfrac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}\right)^2\right) \end{align}

In the above derivation, we used the formula above for the sum of two quadratics and eliminated all constant factors not involving /μ/. The result is the kernel of a normal distribution, with mean \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0} and precision n\tau + \tau_0, i.e.

p(\mu|\mathbf{X}) \sim \mathcal{N}\left(\frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0}, n\tau + \tau_0\right)

This can be written as a set of Bayesian update equations for the posterior parameters in terms of the prior parameters:


\begin{align} \tau_0' &= \tau_0 + n\tau \\ \mu_0' &= \frac{n\tau \bar{x} + \tau_0\mu_0}{n\tau + \tau_0} \\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{align}
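A minimal sketch of these precision-form update equations in Python (function and variable names are illustrative, not from any particular library):

```python
def posterior_known_variance(data, mu0, tau0, tau):
    """Posterior (mean, precision) of mu for N(mu, 1/tau) data with known
    data precision tau and a N(mu0, 1/tau0) prior on mu."""
    n = len(data)
    xbar = sum(data) / n
    tau_post = tau0 + n * tau                           # precisions add
    mu_post = (n * tau * xbar + tau0 * mu0) / tau_post  # precision-weighted average
    return mu_post, tau_post

mu_post, tau_post = posterior_known_variance([2.1, 1.9, 2.4], mu0=0.0, tau0=1.0, tau=4.0)
assert abs(tau_post - 13.0) < 1e-9   # tau0' = 1 + 3*4
```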

That is, to combine /n/ data points with total precision of /n/τ (or equivalently, total variance of σ^2//n/) and mean of values \bar{x}, derive a new total precision simply by adding the total precision of the data to the prior total precision, and form a new mean through a /precision-weighted average/, i.e. a weighted average of the data mean and the prior mean, each weighted by the associated total precision. This makes logical sense if the precision is thought of as indicating the certainty of the observations: In the distribution of the posterior mean, each of the input components is weighted by its certainty, and the certainty of this distribution is the sum of the individual certainties. (For the intuition of this, compare the expression "the whole is (or is not) greater than the sum of its parts". In addition, consider that the knowledge of the posterior comes from a combination of the knowledge of the prior and likelihood, so it makes sense that we are more certain of it than of either of its components.)

The above formulas reveal why it is more convenient to do Bayesian analysis of conjugate priors for the normal distribution in terms of the precision. The posterior precision is simply the sum of the prior and likelihood precisions, and the posterior mean is computed through a precision-weighted average, as described above. The same formulas can be written in terms of variance by reciprocating all the precisions, yielding the more ugly formulas

\begin{align} {\sigma^2_0}' &= \frac{1}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\ \mu_0' &= \frac{\frac{n\bar{x}}{\sigma^2} + \frac{\mu_0}{\sigma_0^2}}{\frac{n}{\sigma^2} + \frac{1}{\sigma_0^2}} \\ \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \end{align}

With known mean

For a set of i.i.d. normally distributed data points *X* of size /n/ where each individual point /x/ follows x \sim \mathcal{N}(\mu, \sigma^2) with known mean μ, the conjugate prior of the variance has an inverse gamma distribution or a scaled inverse chi-squared distribution. The two are equivalent except for having different parameterizations. Although the inverse gamma is more commonly used, we use the scaled inverse chi-squared for the sake of convenience. The prior for σ^2 is as follows:

p(\sigma^2|\nu_0,\sigma_0^2) = \frac{(\sigma_0^2\frac{\nu_0}{2})^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2} \right)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \propto \frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}}

The likelihood function from above, written in terms of the variance, is:

\begin{align} p(\mathbf{X}|\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i-\mu)^2\right] \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}} \exp\left[-\frac{S}{2\sigma^2}\right] \end{align}

where

S = \sum_{i=1}^n (x_i-\mu)^2.

Then:

\begin{align} p(\sigma^2|\mathbf{X}) &\propto p(\mathbf{X}|\sigma^2) p(\sigma^2) \\ &= \left(\frac{1}{2\pi\sigma^2}\right)^{\frac{n}{2}} \exp\left[-\frac{S}{2\sigma^2}\right] \frac{(\sigma_0^2\frac{\nu_0}{2})^{\frac{\nu_0}{2}}}{\Gamma\left(\frac{\nu_0}{2} \right)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \\ &\propto \left(\frac{1}{\sigma^2}\right)^{\frac{n}{2}} \frac{1}{(\sigma^2)^{1+\frac{\nu_0}{2}}} \exp\left[-\frac{S}{2\sigma^2} + \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right] \\ &= \frac{1}{(\sigma^2)^{1+\frac{\nu_0+n}{2}}} \exp\left[-\frac{\nu_0 \sigma_0^2 + S}{2\sigma^2}\right] \end{align}

The above is also a scaled inverse chi-squared distribution where

\begin{align} \nu_0' &= \nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2 \end{align}

or equivalently

\begin{align} \nu_0' &= \nu_0 + n \\ {\sigma_0^2}' &= \frac{\nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\mu)^2}{\nu_0+n} \end{align}

Reparameterizing in terms of an inverse gamma distribution, the result is:

\begin{align} \alpha' &= \alpha + \frac{n}{2} \\ \beta' &= \beta + \frac{\sum_{i=1}^n (x_i-\mu)^2}{2} \end{align}
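In the inverse-gamma parameterization, the update is a one-liner; a minimal sketch (the function name is ours):

```python
def posterior_known_mean(data, mu, alpha, beta):
    """Inverse-gamma posterior (alpha', beta') for sigma^2, given
    N(mu, sigma^2) data with known mean mu and an IG(alpha, beta) prior."""
    n = len(data)
    s = sum((x - mu) ** 2 for x in data)  # sum of squared deviations from mu
    return alpha + n / 2, beta + s / 2

alpha_post, beta_post = posterior_known_mean([1.0, 3.0, 2.0], mu=2.0, alpha=2.0, beta=1.0)
assert abs(alpha_post - 3.5) < 1e-12 and abs(beta_post - 2.0) < 1e-12
```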

With unknown mean and unknown variance

For a set of i.i.d. normally distributed data points *X* of size /n/ where each individual point /x/ follows x \sim \mathcal{N}(\mu, \sigma^2) with unknown mean μ and unknown variance σ^2, a combined (multivariate) conjugate prior is placed over the mean and variance, consisting of a normal-inverse-gamma distribution. Logically, this originates as follows:

1. From the analysis of the case with unknown mean but known variance, we see that the update equations involve sufficient statistics computed from the data consisting of the mean of the data points and the total variance of the data points, computed in turn from the known variance divided by the number of data points.
2. From the analysis of the case with unknown variance but known mean, we see that the update equations involve sufficient statistics over the data consisting of the number of data points and the sum of squared deviations.
3. Keep in mind that the posterior update values serve as the prior distribution when further data is handled. Thus, we should logically think of our priors in terms of the sufficient statistics just described, with the same semantics kept in mind as much as possible.
4. To handle the case where both mean and variance are unknown, we could place independent priors over the mean and variance, with fixed estimates of the average mean, total variance, number of data points used to compute the variance prior, and sum of squared deviations. Note however that in reality, the total variance of the mean depends on the unknown variance, and the sum of squared deviations that goes into the variance prior (appears to) depend on the unknown mean. In practice, the latter dependence is relatively unimportant: shifting the actual mean shifts the generated points by an equal amount, and on average the squared deviations will remain the same. This is not the case, however, with the total variance of the mean: as the unknown variance increases, the total variance of the mean will increase proportionately, and we would like to capture this dependence.
5. This suggests that we create a /conditional prior/ of the mean on the unknown variance, with a hyperparameter specifying the mean of the pseudo-observations associated with the prior, and another parameter specifying the number of pseudo-observations. This number serves as a scaling parameter on the variance, making it possible to control the overall variance of the mean relative to the actual variance parameter. The prior for the variance also has two hyperparameters, one specifying the sum of squared deviations of the pseudo-observations associated with the prior, and another specifying once again the number of pseudo-observations. Note that each of the priors has a hyperparameter specifying the number of pseudo-observations, and in each case this controls the relative variance of that prior. These are given as two separate hyperparameters so that the variance (aka the confidence) of the two priors can be controlled separately.
6. This leads immediately to the normal-inverse-gamma distribution, which is the product of the two distributions just defined, with conjugate priors used (an inverse gamma distribution over the variance, and a normal distribution over the mean, /conditional/ on the variance) and with the same four parameters just defined.

The priors are normally defined as follows:

\begin{align} p(\mu|\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) \\ p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) \end{align}

The update equations can be derived, and look as follows:

\begin{align} \bar{x} &= \frac{1}{n}\sum_{i=1}^n x_i \\ \mu_0' &= \frac{n_0\mu_0 + n\bar{x}}{n_0 + n} \\ n_0' &= n_0 + n \\ \nu_0' &= \nu_0 + n \\ \nu_0'{\sigma_0^2}' &= \nu_0 \sigma_0^2 + \sum_{i=1}^n (x_i-\bar{x})^2 + \frac{n_0 n}{n_0 + n}(\mu_0 - \bar{x})^2 \end{align}

The respective numbers of pseudo-observations add the number of actual observations to them. The new mean hyperparameter is once again a weighted average, this time weighted by the relative numbers of observations. Finally, the update for \nu_0'{\sigma_0^2}' is similar to the case with known mean, but in this case the sum of squared deviations is taken with respect to the observed data mean rather than the true mean, and as a result a new "interaction term" needs to be added to take care of the additional error source stemming from the deviation between prior and data mean.
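A minimal sketch of these four update equations (the function name, and the choice to return {\sigma_0^2}' already divided through by \nu_0', are ours):

```python
def nig_update(data, mu0, n0, nu0, sigma0sq):
    """Normal-inverse-gamma hyperparameter update for unknown mean and
    variance; returns (mu0', n0', nu0', sigma0sq'), with sigma0sq'
    already divided by nu0', matching the equations above."""
    n = len(data)
    xbar = sum(data) / n
    ss = sum((x - xbar) ** 2 for x in data)          # deviations from data mean
    n0_p, nu0_p = n0 + n, nu0 + n                    # pseudo-observation counts grow
    mu0_p = (n0 * mu0 + n * xbar) / (n0 + n)         # weighted average of means
    interaction = n0 * n / (n0 + n) * (mu0 - xbar) ** 2
    sigma0sq_p = (nu0 * sigma0sq + ss + interaction) / nu0_p
    return mu0_p, n0_p, nu0_p, sigma0sq_p

post = nig_update([1.0, 2.0, 3.0], mu0=0.0, n0=1.0, nu0=1.0, sigma0sq=1.0)
# posterior hyperparameters: mu0' = 1.5, n0' = 4, nu0' = 4, sigma0sq' = 1.5
```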

Proof:

The prior distributions are

\begin{align} p(\mu|\sigma^2; \mu_0, n_0) &\sim \mathcal{N}(\mu_0,\sigma^2/n_0) = \frac{1}{\sqrt{2\pi\frac{\sigma^2}{n_0}}} \exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ &\propto (\sigma^2)^{-1/2} \exp\left(-\frac{n_0}{2\sigma^2}(\mu-\mu_0)^2\right) \\ p(\sigma^2; \nu_0,\sigma_0^2) &\sim I\chi^2(\nu_0,\sigma_0^2) = IG(\nu_0/2, \nu_0\sigma_0^2/2) \\ &= \frac{(\sigma_0^2\nu_0/2)^{\nu_0/2}}{\Gamma(\nu_0/2)}~\frac{\exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right]}{(\sigma^2)^{1+\nu_0/2}} \\ &\propto (\sigma^2)^{-(1+\nu_0/2)} \exp\left[ \frac{-\nu_0 \sigma_0^2}{2 \sigma^2}\right] \end{align}

Therefore, the joint prior is

\begin{align} p(\mu,\sigma^2; \mu_0, n_0, \nu_0,\sigma_0^2) &= p(\mu|\sigma^2; \mu_0, n_0)\,p(\sigma^2; \nu_0,\sigma_0^2) \\ &\propto (\sigma^2)^{-(\nu_0+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + n_0(\mu-\mu_0)^2\right)\right] \end{align}

The likelihood function from the section above with known variance is:

\begin{align} p(\mathbf{X}|\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i -\mu)^2\right)\right] \end{align}

Writing it in terms of variance rather than precision, we get:

\begin{align} p(\mathbf{X}|\mu,\sigma^2) &= \left(\frac{1}{2\pi\sigma^2}\right)^{n/2} \exp\left[-\frac{1}{2\sigma^2} \left(\sum_{i=1}^n(x_i-\bar{x})^2 + n(\bar{x} -\mu)^2\right)\right] \\ &\propto (\sigma^2)^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x} -\mu)^2\right)\right] \end{align}

where S = \sum_{i=1}^n(x_i-\bar{x})^2.

Therefore, the posterior is (dropping the hyperparameters as conditioning factors):


\begin{align} p(\mu,\sigma^2|\mathbf{X}) & \propto p(\mu,\sigma^2) \, p(\mathbf{X}|\mu,\sigma^2) \\ & \propto (\sigma^2)^{-(\nu_0+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + n_0(\mu-\mu_0)^2\right)\right] (\sigma^2)^{-n/2} \exp\left[-\frac{1}{2\sigma^2} \left(S + n(\bar{x} -\mu)^2\right)\right] \\ &= (\sigma^2)^{-(\nu_0+n+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + n_0(\mu-\mu_0)^2 + n(\bar{x} -\mu)^2\right)\right] \\ &= (\sigma^2)^{-(\nu_0+n+3)/2} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2 + (n_0+n)\left(\mu-\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}\right)^2\right)\right] \\ & \propto (\sigma^2)^{-1/2} \exp\left[-\frac{n_0+n}{2\sigma^2}\left(\mu-\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}\right)^2\right] \\ & \quad\times (\sigma^2)^{-(\nu_0/2+n/2+1)} \exp\left[-\frac{1}{2\sigma^2}\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right] \\ & = \mathcal{N}_{\mu|\sigma^2}\left(\frac{n_0\mu_0 + n\bar{x}}{n_0 + n}, \frac{\sigma^2}{n_0+n}\right) \cdot {\rm IG}_{\sigma^2}\left(\frac12(\nu_0+n), \frac12\left(\nu_0\sigma_0^2 + S + \frac{n_0 n}{n_0+n}(\mu_0-\bar{x})^2\right)\right). \end{align}

In other words, the posterior distribution has the form of a product of a normal distribution over /p/(μ|σ^2) times an inverse gamma distribution over /p/(σ^2), with parameters that are the same as the update equations above.

Occurrence

The occurrence of normal distribution in practical problems can be loosely classified into four categories:

1. Exactly normal distributions;
2. Approximately normal laws, for example when such approximation is justified by the central limit theorem;
3. Distributions modeled as normal – the normal distribution being the distribution with maximum entropy for a given mean and variance;
4. Regression problems – the normal distribution being found after systematic effects have been modeled sufficiently well.

Exact normality

The ground state of a quantum harmonic oscillator has the Gaussian distribution.

Certain quantities in physics are distributed normally, as was first demonstrated by James Clerk Maxwell. Examples of such quantities are:

* Velocities of the molecules in the ideal gas. More generally, velocities of the particles in any system in thermodynamic equilibrium will have normal distribution, due to the maximum entropy principle.
* Probability density function of a ground state in a quantum harmonic oscillator.
* The position of a particle that experiences diffusion. If initially the particle is located at a specific point (that is, its probability distribution is the Dirac delta function), then after time /t/ its location is described by a normal distribution with variance /t/, which satisfies the diffusion equation \frac{\partial}{\partial t} f(x,t) = \frac12 \frac{\partial^2}{\partial x^2} f(x,t). If the initial location is given by a certain density function /g/(/x/), then the density at time /t/ is the convolution of /g/ and the normal PDF.

Approximate normality

/Approximately/ normal distributions occur in many situations, as explained by the central limit theorem. When the outcome is produced by many small effects acting /additively and independently/, its distribution will be close to normal. The normal approximation will not be valid if the effects act multiplicatively (instead of additively), or if there is a single external influence that has a considerably larger magnitude than the rest of the effects.

* In counting problems, where the central limit theorem includes a discrete-to-continuum approximation and where infinitely divisible and decomposable distributions are involved, such as
  o Binomial random variables, associated with binary response variables;
  o Poisson random variables, associated with rare events;
* Thermal light has a Bose–Einstein distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.

Assumed normality

Histogram of sepal widths for /Iris versicolor/ from Fisher's Iris flower data set, with superimposed best-fitting normal distribution.

I can only recognize the occurrence of the normal curve – the Laplacian curve of errors – as a very abnormal phenomenon. It is roughly approximated to in certain distributions; for this reason, and on account of its beautiful simplicity, we may, perhaps, use it as a first approximation, particularly in theoretical investigations.

—Pearson (1901)

There are statistical methods to empirically test that assumption; see the above Normality tests section.

* In biology, the /logarithm/ of various variables tends to have a normal distribution, that is, they tend to have a log-normal distribution (after separation on male/female subpopulations), with examples including:
  o Measures of size of living tissue (length, height, skin area, weight);^[40]
  o The /length/ of /inert/ appendages (hair, claws, nails, teeth) of biological specimens, /in the direction of growth/; presumably the thickness of tree bark also falls under this category;
  o Certain physiological measurements, such as blood pressure of adult humans.
* In finance, in particular the Black–Scholes model, changes in the /logarithm/ of exchange rates, price indices, and stock market indices are assumed normal (these variables behave like compound interest, not like simple interest, and so are multiplicative). Some mathematicians such as Benoît Mandelbrot have argued that log-Levy distributions, which possess heavy tails, would be a more appropriate model, in particular for the analysis of stock market crashes.
* Measurement errors in physical experiments are often modeled by a normal distribution. This use of a normal distribution does not imply that one is assuming the measurement errors are normally distributed; rather, using the normal distribution produces the most conservative predictions possible given only knowledge about the mean and variance of the errors.^[41]

Fitted cumulative normal distribution to October rainfalls, see distribution fitting.

* In standardized testing, results can be made to have a normal distribution by either selecting the number and difficulty of questions (as in the IQ test) or transforming the raw test scores into "output" scores by fitting them to the normal distribution. For example, the SAT's traditional range of 200–800 is based on a normal distribution with a mean of 500 and a standard deviation of 100.
* Many scores are derived from the normal distribution, including percentile ranks ("percentiles" or "quantiles"), normal curve equivalents, stanines, z-scores, and T-scores. Additionally, some behavioral statistical procedures assume that scores are normally distributed; for example, t-tests and ANOVAs. Bell curve grading assigns relative grades based on a normal distribution of scores.
* In hydrology the distribution of long duration river discharge or rainfall, e.g. monthly and yearly totals, is often thought to be practically normal according to the central limit theorem.^[42] The blue picture illustrates an example of fitting the normal distribution to ranked October rainfalls showing the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.

Produced normality

In regression analysis, lack of normality in residuals simply indicates that the model postulated is inadequate in accounting for the tendency in the data and needs to be augmented; in other words, normality in residuals can always be achieved given a properly constructed model.

Generating values from normal distribution

The bean machine, a device invented by Francis Galton, can be called the first generator of normal random variables. This machine consists of a vertical board with interleaved rows of pins. Small balls are dropped from the top and then bounce randomly left or right as they hit the pins. The balls are collected into bins at the bottom and settle down into a pattern resembling the Gaussian curve.

In computer simulations, especially in applications of the Monte-Carlo method, it is often desirable to generate values that are normally distributed. The algorithms listed below all generate the standard normal deviates, since a /N/(/μ/, /σ/^2) can be generated as /X/ = /μ/ + /σZ/, where /Z/ is standard normal. All these algorithms rely on the availability of a random number generator /U/ capable of producing uniform random variates.

* The most straightforward method is based on the probability integral transform property: if /U/ is distributed uniformly on (0,1), then Φ^−1(/U/) will have the standard normal distribution. The drawback of this method is that it relies on calculation of the probit function Φ^−1, which cannot be done analytically. Some approximate methods are described in Hart (1968) and in the erf article. Wichura^[43] gives a fast algorithm for computing this function to 16 decimal places, which is used by R to compute random variates of the normal distribution.
* An easy-to-program approximate approach, which relies on the central limit theorem, is as follows: generate 12 uniform /U/(0,1) deviates, add them all up, and subtract 6 – the resulting random variable will have approximately standard normal distribution. In truth, the distribution will be Irwin–Hall, which is a 12-section eleventh-order polynomial approximation to the normal distribution. This random deviate will have a limited range of (−6, 6).^[44]
* The Box–Muller method uses two independent random numbers /U/ and /V/ distributed uniformly on (0,1). Then the two random variables /X/ and /Y/

X = \sqrt{- 2 \ln U} \, \cos(2 \pi V) , \qquad Y = \sqrt{- 2 \ln U} \, \sin(2 \pi V) .

will both have the standard normal distribution, and will be independent. This formulation arises because for a bivariate normal random vector (/X/, /Y/) the squared norm /X/^2 + /Y/^2 will have the chi-squared distribution with two degrees of freedom, which is an easily generated exponential random variable corresponding to the quantity −2ln(/U/) in these equations; and the angle is distributed uniformly around the circle, chosen by the random variable /V/.
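A minimal Box–Muller sketch in Python, assuming nothing beyond the standard library; the moment check at the end is only a rough plausibility test:

```python
import math
import random

def box_muller(rng=random):
    """Draw one pair of independent standard normal deviates."""
    u = rng.random()
    while u == 0.0:          # guard: log(0) is undefined
        u = rng.random()
    v = rng.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)

random.seed(0)
samples = [z for _ in range(50_000) for z in box_muller()]
mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples)
assert abs(mean) < 0.02 and abs(var - 1.0) < 0.05  # rough moment check
```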

* The Marsaglia polar method is a modification of the Box–Muller method algorithm, which does not require computation of the functions sin() and cos(). In this method /U/ and /V/ are drawn from the uniform (−1,1) distribution, and then /S/ = /U/^2 + /V/^2 is computed. If /S/ is greater or equal to one then the method starts over, otherwise two quantities

X = U\sqrt{\frac{-2\ln S}{S}}, \qquad Y = V\sqrt{\frac{-2\ln S}{S}}

are returned. Again, /X/ and /Y/ will be independent and standard normally distributed.
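The polar variant can be sketched the same way; the rejection loop below restarts whenever the point falls outside the unit disk:

```python
import math
import random

def marsaglia_polar(rng=random):
    """Draw one pair of standard normal deviates without trig calls."""
    while True:
        u = rng.uniform(-1.0, 1.0)
        v = rng.uniform(-1.0, 1.0)
        s = u * u + v * v
        if 0.0 < s < 1.0:                      # reject points outside the unit disk
            f = math.sqrt(-2.0 * math.log(s) / s)
            return u * f, v * f

random.seed(1)
samples = [z for _ in range(50_000) for z in marsaglia_polar()]
mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples)
assert abs(mean) < 0.02 and abs(var - 1.0) < 0.05  # rough moment check
```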

* The Ratio method^[45] is a rejection method. The algorithm proceeds as follows:
  o Generate two independent uniform deviates /U/ and /V/;
  o Compute /X/ = √(8/e) (/V/ − 0.5)//U/;
  o Optional: if /X/^2 ≤ 5 − 4/e/^{1/4} /U/ then accept /X/ and terminate the algorithm;
  o Optional: if /X/^2 ≥ 4/e/^{−1.35}//U/ + 1.4 then reject /X/ and start over from step 1;
  o If /X/^2 ≤ −4 ln /U/ then accept /X/; otherwise start over the algorithm.
* The ziggurat algorithm^[46] is faster than the Box–Muller transform and still exact. In about 97% of all cases it uses only two random numbers, one random integer and one random uniform, one multiplication and an if-test. Only in 3% of the cases, where the combination of those two falls outside the "core of the ziggurat" (a kind of rejection sampling using logarithms), do exponentials and more uniform random numbers have to be employed.
* There is also some investigation^[47] into the connection between the fast Hadamard transform and the normal distribution, since the transform employs just addition and subtraction and by the central limit theorem random numbers from almost any distribution will be transformed into the normal distribution. In this regard a series of Hadamard transforms can be combined with random permutations to turn arbitrary data sets into normally distributed data.
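The ratio-of-uniforms steps above translate almost line-for-line into code; this sketch includes both optional squeeze tests, with the constants taken from the list above:

```python
import math
import random

SQRT_8_OVER_E = math.sqrt(8.0 / math.e)

def ratio_method(rng=random):
    """Ratio-of-uniforms draw of one standard normal deviate, following
    the steps listed above (both optional squeeze tests included)."""
    while True:
        u = rng.random()
        if u == 0.0:
            continue
        v = rng.random()
        x = SQRT_8_OVER_E * (v - 0.5) / u
        x2 = x * x
        if x2 <= 5.0 - 4.0 * math.exp(0.25) * u:
            return x                              # quick accept
        if x2 >= 4.0 * math.exp(-1.35) / u + 1.4:
            continue                              # quick reject
        if x2 <= -4.0 * math.log(u):
            return x                              # exact accept test

random.seed(2)
samples = [ratio_method() for _ in range(50_000)]
mean = sum(samples) / len(samples)
var = sum(z * z for z in samples) / len(samples)
assert abs(mean) < 0.03 and abs(var - 1.0) < 0.05  # rough moment check
```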


Numerical approximations for the normal CDF

The standard normal CDF is widely used in scientific and statistical computing. The values Φ(/x/) may be approximated very accurately by a variety of methods, such as numerical integration, Taylor series, asymptotic series and continued fractions. Different approximations are used depending on the desired level of accuracy.

* Zelen & Severo (1964) give the approximation for Φ(/x/) for /x/ > 0 with the absolute error |/ε/(/x/)| < 7.5·10^−8 (algorithm 26.2.17 <http://www.math.sfu.ca/~cbm/aands/page_932.htm>):

\Phi(x) = 1 - \phi(x)\left(b_1t + b_2t^2 + b_3t^3 + b_4t^4 + b_5t^5\right) + \varepsilon(x), \qquad t = \frac{1}{1+b_0x},

where /ϕ/(/x/) is the standard normal PDF, and /b/_0 = 0.2316419, /b/_1 = 0.319381530, /b/_2 = −0.356563782, /b/_3 = 1.781477937, /b/_4 = −1.821255978, /b/_5 = 1.330274429.

* Hart (1968 <#CITEREFHart1968>) lists almost a hundred rational function </wiki/Rational_function> approximations for the erfc() function. His algorithms vary in the degree of complexity and the resulting precision, with maximum absolute precision of 24 digits. An algorithm by West (2009 <#CITEREFWest2009>) combines Hart's algorithm 5666 with a continued fraction </wiki/Continued_fraction> approximation in the tail to provide a fast computation algorithm with a 16-digit precision.
* Cody (1969 <#CITEREFCody1969>), after recalling that the Hart68 solution is not suited for /erf/, gives a solution for both /erf/ and /erfc/, with maximal relative error bound, via Rational Chebyshev Approximation </wiki/Rational_function>.
* Marsaglia (2004 <#CITEREFMarsaglia2004>) suggested a simple algorithm^[nb 1] <#cite_note-48> based on the Taylor series expansion

\Phi(x) = \frac12 + \phi(x)\left( x + \frac{x^3}{3} + \frac{x^5}{3\cdot5} + \frac{x^7}{3\cdot5\cdot7} + \frac{x^9}{3\cdot5\cdot7\cdot9} + \cdots \right)

for calculating Φ(/x/) with arbitrary precision. The drawback of this algorithm is its comparatively slow calculation time (for example it takes over 300 iterations to calculate the function with 16 digits of precision when /x/ = 10).

* The GNU Scientific Library </wiki/GNU_Scientific_Library> calculates values of the standard normal CDF using Hart's algorithms and approximations with Chebyshev polynomials </wiki/Chebyshev_polynomial>.
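The two closed-form approximations above can be sketched in Python; the coefficients come from the Zelen & Severo formula and the series from Marsaglia's expansion, while the function names and tolerance are illustrative choices made here:

```python
import math

# Zelen & Severo (1964) coefficients, as listed above.
B0, B1, B2, B3, B4, B5 = (0.2316419, 0.319381530, -0.356563782,
                          1.781477937, -1.821255978, 1.330274429)

def phi(x):
    """Standard normal PDF."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi_zelen_severo(x):
    """Polynomial approximation of the standard normal CDF for x > 0
    (absolute error below 7.5e-8); symmetry handles x < 0."""
    if x < 0.0:
        return 1.0 - Phi_zelen_severo(-x)
    t = 1.0 / (1.0 + B0 * x)
    poly = t * (B1 + t * (B2 + t * (B3 + t * (B4 + t * B5))))
    return 1.0 - phi(x) * poly

def Phi_marsaglia(x, tol=1e-15):
    """Marsaglia's Taylor-series evaluation:
    Phi(x) = 1/2 + phi(x) * (x + x^3/3 + x^5/(3*5) + ...)."""
    term, total, n = x, x, 0
    while abs(term) > tol:
        n += 1
        term *= x * x / (2 * n + 1)
        total += term
    return 0.5 + phi(x) * total
```

Checked against `math.erf`, both reproduce Φ(1) ≈ 0.8413447 to their stated accuracy; as noted above, the series becomes slow for large |/x/|.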

History


Development

Some authors^[48] <#cite_note-49> ^[49] <#cite_note-50> attribute the credit for the discovery of the normal distribution to de Moivre </wiki/Abraham_de_Moivre>, who in 1738^[nb 2] <#cite_note-51> published in the second edition of his "/The Doctrine of Chances </wiki/The_Doctrine_of_Chances>/" the study of the coefficients in the binomial expansion </wiki/Binomial_expansion> of (/a/ + /b/)^/n/. De Moivre proved that the middle term in this expansion has the approximate magnitude of \scriptstyle 2/\sqrt{2\pi n}, and that "If /m/ or ½/n/ be a Quantity infinitely great, then the Logarithm of the Ratio, which a Term distant from the middle by the Interval /ℓ/, has to the middle Term, is \scriptstyle -\frac{2\ell\ell}{n}."^[50] <#cite_note-52> Although this theorem can be interpreted as the first obscure expression for the normal probability law, Stigler </wiki/Stephen_Stigler> points out that de Moivre himself did not interpret his results as anything more than the approximate rule for the binomial coefficients, and in particular de Moivre lacked the concept of the probability density function.^[51] <#cite_note-53>

</wiki/File:Carl_Friedrich_Gauss.jpg>Carl Friedrich Gauss </wiki/Carl_Friedrich_Gauss> discovered the normal distribution in 1809 as a way to rationalize the method of least squares </wiki/Method_of_least_squares>.

In 1809 Gauss </wiki/Carl_Friedrich_Gauss> published his monograph "/Theoria motus corporum coelestium in sectionibus conicis solem ambientium/" where among other things he introduces several important statistical concepts, such as the method of least squares </wiki/Method_of_least_squares>, the method of maximum likelihood </wiki/Method_of_maximum_likelihood>, and the /normal distribution/. Gauss used /M/, /M/′, /M/′′, … to denote the measurements of some unknown quantity /V/, and sought the "most probable" estimator: the one that maximizes the probability /φ/(/M/−/V/) · /φ/(/M′/−/V/) · /φ/(/M′′/−/V/) · … of obtaining the observed experimental results. In his notation /φΔ/ is the probability law of the measurement errors of magnitude /Δ/. Not knowing what the function /φ/ is, Gauss requires that his method should reduce to the well-known answer: the arithmetic mean of the measured values.^[nb 3] <#cite_note-54> Starting from these principles, Gauss demonstrates that the only law that rationalizes the choice of arithmetic mean as an estimator of the location parameter is the normal law of errors:^[52] <#cite_note-55>

\varphi\mathit{\Delta} = \frac{h}{\surd\pi}\, e^{-\mathrm{hh}\Delta\Delta},

where /h/ is "the measure of the precision of the observations". Using this normal law as a generic model for errors in the experiments, Gauss formulates what is now known as the non-linear weighted least squares (NWLS) method.^[53] <#cite_note-56>

</wiki/File:Pierre-Simon_Laplace.jpg>Marquis de Laplace </wiki/Pierre-Simon_Laplace> proved the central limit theorem </wiki/Central_limit_theorem> in 1810, consolidating the importance of the normal distribution in statistics.

Although Gauss was the first to suggest the normal distribution law, Laplace </wiki/Pierre_Simon_de_Laplace> made significant contributions.^[nb 4] <#cite_note-57> It was Laplace who first posed the problem of aggregating several observations in 1774,^[54] <#cite_note-58> although his own solution led to the Laplacian distribution </wiki/Laplacian_distribution>. It was Laplace who first calculated the value of the integral ∫ /e/^−/t/² /dt/ = √/π/ </wiki/Gaussian_integral> in 1782, providing the normalization constant for the normal distribution.^[55] <#cite_note-59> Finally, it was Laplace who in 1810 proved and presented to the Academy the fundamental central limit theorem, which emphasized the theoretical importance of the normal distribution.^[56] <#cite_note-60>

It is of interest to note that in 1809 an American mathematician Adrain </wiki/Robert_Adrain> published two derivations of the normal probability law, simultaneously and independently from Gauss.^[57] <#cite_note-61> His works remained largely unnoticed by the scientific community, until in 1871 they were "rediscovered" by Abbe </wiki/Cleveland_Abbe>.^[58] <#cite_note-62>

In the middle of the 19th century Maxwell </wiki/James_Clerk_Maxwell> demonstrated that the normal distribution is not just a convenient mathematical tool, but may also occur in natural phenomena:^[59] <#cite_note-63> "The number of particles whose velocity, resolved in a certain direction, lies between /x/ and /x/ + /dx/ is

\mathrm{N}\; \frac{1}{\alpha\;\sqrt\pi}\; e^{-\frac{x^2}{\alpha^2}}dx

Naming

Since its introduction, the normal distribution has been known by many different names: the law of error, the law of facility of errors, Laplace's second law, Gaussian law, etc. Gauss himself apparently coined the term with reference to the "normal equations" involved in its applications, with normal having its technical meaning of orthogonal rather than "usual".^[60] <#cite_note-64> However, by the end of the 19th century some authors^[nb 5] <#cite_note-65> had started using the name /normal distribution/, where the word "normal" was used as an adjective – the term now being seen as a reflection of the fact that this distribution was seen as typical, common – and thus "normal". Peirce (one of those authors) once defined "normal" thus: "...the 'normal' is not the average (or any other kind of mean) of what actually occurs, but of what /would/, in the long run, occur under certain circumstances."^[61] <#cite_note-66> Around the turn of the 20th century Pearson </wiki/Karl_Pearson> popularized the term /normal/ as a designation for this distribution.^[62] <#cite_note-67>

Many years ago I called the Laplace–Gaussian curve the /normal/ curve, which name, while it avoids an international question of priority, has the disadvantage of leading people to believe that all other distributions of frequency are in one sense or another 'abnormal'.

—Pearson (1920 <#CITEREFPearson1920>)

Also, it was Pearson who first wrote the distribution in terms of the standard deviation /σ/ as in modern notation. Soon after this, in year 1915, Fisher </wiki/Ronald_Fisher> added the location parameter to the formula for normal distribution, expressing it in the way it is written nowadays:

df = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-m)^2}{2\sigma^2}}dx

The term "standard normal", which denotes the normal distribution with zero mean and unit variance, came into general use around the 1950s, appearing in the popular textbooks by P. G. Hoel (1947) "/Introduction to mathematical statistics/" and A. M. Mood (1950) "/Introduction to the theory of statistics/".^[63] <#cite_note-68>

When the name is used, the "Gaussian distribution" was named after </wiki/List_of_topics_named_after_Carl_Friedrich_Gauss> Carl Friedrich Gauss </wiki/Carl_Friedrich_Gauss>, who introduced the distribution in 1809 as a way of rationalizing the method of least squares </wiki/Method_of_least_squares> as outlined above. Among English speakers, both "normal distribution" and "Gaussian distribution" are in common use, with different terms preferred by different communities.

See also

Statistics portal </wiki/Portal:Statistics>

* Behrens–Fisher problem </wiki/Behrens%E2%80%93Fisher_problem> – the long-standing problem of testing whether two normal samples with different variances have the same means
* Bhattacharyya distance </wiki/Bhattacharyya_distance> – method used to separate mixtures of normal distributions
* Erdős–Kac theorem </wiki/Erd%C5%91s%E2%80%93Kac_theorem> – on the occurrence of the normal distribution in number theory </wiki/Number_theory>
* Gaussian blur </wiki/Gaussian_blur> – convolution </wiki/Convolution>, which uses the normal distribution as a kernel
* Sum of normally distributed random variables </wiki/Sum_of_normally_distributed_random_variables>
* Normally distributed and uncorrelated does not imply independent </wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent>
* Tweedie distribution </wiki/Tweedie_distribution> – the normal distribution is a member of the family of Tweedie exponential dispersion models </wiki/Exponential_dispersion_model>
* Z-test </wiki/Z-test> – using the normal distribution
* Rayleigh distribution </wiki/Rayleigh_distribution>

Notes

1. *Jump up ^ <#cite_ref-48>* For example, this algorithm is given in the article Bc programming language </wiki/Bc_programming_language#A_translated_C_function>.
2. *Jump up ^ <#cite_ref-51>* De Moivre first published his findings in 1733, in a pamphlet "Approximatio ad Summam Terminorum Binomii (/a + b/)^/n/ in Seriem Expansi" that was designated for private circulation only. But it was not until the year 1738 that he made his results publicly available. The original pamphlet was reprinted several times, see for example Walker (1985 <#CITEREFWalker1985>).


3. *Jump up ^ <#cite_ref-54>* "It has been customary certainly to regard as an axiom the hypothesis that if any quantity has been determined by several direct observations, made under the same circumstances and with equal care, the arithmetical mean of the observed values affords the most probable value, if not rigorously, yet very nearly at least, so that it is always most safe to adhere to it." — Gauss (1809 <#CITEREFGauss1809>, section 177)
4. *Jump up ^ <#cite_ref-57>* "My custom of terming the curve the Gauss–Laplacian or /normal/ curve saves us from proportioning the merit of discovery between the two great astronomer mathematicians." quote from Pearson (1905 <#CITEREFPearson1905>, p. 189)
5. *Jump up ^ <#cite_ref-65>* Besides those specifically referenced here, such use is encountered in the works of Peirce </wiki/Charles_Sanders_Peirce>, Galton </wiki/Francis_Galton> (Galton (1889 <#CITEREFGalton1889>, chapter V)) and Lexis </wiki/Wilhelm_Lexis> (Lexis (1878 <#CITEREFLexis1878>), Rohrbasser & Véron (2003 <#CITEREFRohrbasserV.C3.A9ron2003>)) c. 1875.^[/citation needed </wiki/Wikipedia:Citation_needed>/]

Citations

1. *Jump up ^ <#cite_ref-1>* /Normal Distribution/ <http://www.encyclopedia.com/topic/Normal_Distribution.aspx#3>, Gale Encyclopedia of Psychology
2. *Jump up ^ <#cite_ref-2>* Casella & Berger (2001 <#CITEREFCasellaBerger2001>, p. 102)
3. *Jump up ^ <#cite_ref-3>* Cover, Thomas M.; Thomas, Joy A. (2006). /Elements of Information Theory/. John Wiley and Sons. p. 254.
4. *Jump up ^ <#cite_ref-4>* Park, Sung Y.; Bera, Anil K. (2009). "Maximum Entropy Autoregressive Conditional Heteroskedasticity Model" <http://www.wise.xmu.edu.cn/Master/Download/..%5C..%5CUploadFiles%5Cpaper-masterdownload%5C2009519932327055475115776.pdf>. /Journal of Econometrics/ (Elsevier) *150* (2): 219–230. doi:10.1016/j.jeconom.2008.12.014. Retrieved 2011-06-02.
5. *Jump up ^ <#cite_ref-5>* For the proof see Gaussian integral </wiki/Gaussian_integral>
6. *Jump up ^ <#cite_ref-6>* Stigler (1982 <#CITEREFStigler1982>)
7. *Jump up ^ <#cite_ref-7>* Halperin, Hartley & Hoel (1965 <#CITEREFHalperinHartleyHoel1965>, item 7)
8. *Jump up ^ <#cite_ref-8>* McPherson (1990 <#CITEREFMcPherson1990>, p. 110)
9. *Jump up ^ <#cite_ref-9>* Bernardo & Smith (2000 <#CITEREFBernardoSmith2000>, p. 121)
10. ^ Jump up to: ^/*a*/ <#cite_ref-PR2.1.4_10-0> ^/*b*/ <#cite_ref-PR2.1.4_10-1> ^/*c*/ <#cite_ref-PR2.1.4_10-2> Patel & Read (1996 <#CITEREFPatelRead1996>, [2.1.4])
11. *Jump up ^ <#cite_ref-11>* Fan (1991 <#CITEREFFan1991>, p. 1258)
12. *Jump up ^ <#cite_ref-12>* Patel & Read (1996 <#CITEREFPatelRead1996>, [2.1.8])
13. *Jump up ^ <#cite_ref-13>* Bryc (1995 <#CITEREFBryc1995>, p. 23)
14. *Jump up ^ <#cite_ref-14>* Bryc (1995 <#CITEREFBryc1995>, p. 24)
15. *Jump up ^ <#cite_ref-15>* Scott, Clayton; Nowak, Robert (August 7, 2003). "The Q-function" <http://cnx.org/content/m11537/1.2/>. /Connexions/.
16. *Jump up ^ <#cite_ref-16>* Barak, Ohad (April 6, 2006). "Q Function and Error Function" <http://www.eng.tau.ac.il/~jo/academic/Q.pdf>. Tel Aviv University.
17. *Jump up ^ <#cite_ref-17>* Weisstein, Eric W. </wiki/Eric_W._Weisstein>, "Normal Distribution Function" <http://mathworld.wolfram.com/NormalDistributionFunction.html>, /MathWorld </wiki/MathWorld>/.
18. *Jump up ^ <#cite_ref-18>* WolframAlpha.com <http://www.wolframalpha.com/input/?i=Table%5B{N(Erf(n/Sqrt(2)),+12),+N(1-Erf(n/Sqrt(2)),+12),+N(1/(1-Erf(n/Sqrt(2))),+12)},+{n,1,6}%5D>
19. *Jump up ^ <#cite_ref-19>* part 1 <http://www.wolframalpha.com/input/?i=Table%5BSqrt%282%29*InverseErf%28x%29%2C+{x%2C+N%28{8%2F10%2C+9%2F10%2C+19%2F20%2C+49%2F50%2C+99%2F100%2C+995%2F1000%2C+998%2F1000}%2C+13%29}%5D>, part 2 <http://www.wolframalpha.com/input/?i=Table%5B%7BN(1-10%5E(-x),9),N(Sqrt(2)*InverseErf(1-10%5E(-x)),13)%7D,%7Bx,3,9%7D%5D>
20. *Jump up ^ <#cite_ref-20>* Normal Approximation to Poisson(λ) Distribution, http://www.stat.ucla.edu/ <http://www.stat.ucla.edu/~dinov/courses_students.dir/Applets.dir/NormalApprox2PoissonApplet.html>
21. *Jump up ^ <#cite_ref-21>* Bryc (1995 <#CITEREFBryc1995>, p. 27)
22. *Jump up ^ <#cite_ref-22>* Patel & Read (1996 <#CITEREFPatelRead1996>, [2.3.6])
23. *Jump up ^ <#cite_ref-23>* Galambos & Simonelli (2004 <#CITEREFGalambosSimonelli2004>, Theorem 3.5)
24. ^ Jump up to: ^/*a*/ <#cite_ref-Bryc_1995_35_24-0> ^/*b*/ <#cite_ref-Bryc_1995_35_24-1> Bryc (1995 <#CITEREFBryc1995>, p. 35)
25. ^ Jump up to: ^/*a*/ <#cite_ref-LK_25-0> ^/*b*/ <#cite_ref-LK_25-1> Lukacs & King (1954 <#CITEREFLukacsKing1954>)
26. *Jump up ^ <#cite_ref-26>* Quine, M.P. (1993) "On three characterisations of the normal distribution" <http://www.math.uni.wroc.pl/~pms/publicationsArticle.php?nr=14.2&nrA=8&ppB=257&ppE=263>, /Probability and Mathematical Statistics/, 14 (2), 257–263
27. *Jump up ^ <#cite_ref-27>* UIUC, Lecture 21. /The Multivariate Normal Distribution/ <http://www.math.uiuc.edu/~r-ash/Stat/StatLec21-25.pdf>, 21.6: "Individually Gaussian Versus Jointly Gaussian".
28. *Jump up ^ <#cite_ref-28>* Edward L. Melnick and Aaron Tenenbein, "Misspecifications of the Normal Distribution", /The American Statistician </wiki/The_American_Statistician>/, volume 36, number 4, November 1982, pages 372–373
29. *Jump up ^ <#cite_ref-29>* http://www.allisons.org/ll/MML/KL/Normal/
30. *Jump up ^ <#cite_ref-30>* Jordan, Michael I. (February 8, 2010). "Stat260: Bayesian Modeling and Inference: The Conjugate Prior for the Normal Distribution" <http://www.cs.berkeley.edu/~jordan/courses/260-spring10/lectures/lecture5.pdf>.
31. *Jump up ^ <#cite_ref-31>* Cover & Thomas (2006 <#CITEREFCoverThomas2006>, p. 254)
32. *Jump up ^ <#cite_ref-32>* Amari & Nagaoka (2000 <#CITEREFAmariNagaoka2000>)
33. *Jump up ^ <#cite_ref-33>* /Normal Product Distribution/ <http://mathworld.wolfram.com/NormalProductDistribution.html>, MathWorld
34. *Jump up ^ <#cite_ref-34>* Eugene Lukacs (1942). "A Characterization of the Normal Distribution" <http://www.jstor.org/stable/2236166%7C.>. /The Annals of Mathematical Statistics/ *13* (1): 91–93. doi:10.1214/aoms/1177731647.
35. *Jump up ^ <#cite_ref-35>* D. Basu and R. G. Laha (1954). "On Some Characterizations of the Normal Distribution" <http://www.jstor.org/stable/25048183%7C.>. /Sankhyā </wiki/Sankhya_(journal)>/ *13* (4): 359–362.
36. *Jump up ^ <#cite_ref-36>* Lehmann, E. L. (1997). /Testing Statistical Hypotheses/ (2nd ed.). Springer. p. 199. ISBN 0-387-94919-4.
37. ^ Jump up to: ^/*a*/ <#cite_ref-Kri127_37-0> ^/*b*/ <#cite_ref-Kri127_37-1> Krishnamoorthy (2006 <#CITEREFKrishnamoorthy2006>, p. 127)
38. *Jump up ^ <#cite_ref-38>* Krishnamoorthy (2006 <#CITEREFKrishnamoorthy2006>, p. 130)
39. *Jump up ^ <#cite_ref-39>* Krishnamoorthy (2006 <#CITEREFKrishnamoorthy2006>, p. 133)
40. *Jump up ^ <#cite_ref-40>* Huxley (1932 <#CITEREFHuxley1932>)
41. *Jump up ^ <#cite_ref-41>* Jaynes, Edwin T. (2003). /Probability Theory: The Logic of Science/. Cambridge University Press. pp. 592–593.
42. *Jump up ^ <#cite_ref-42>* Oosterbaan, Roland J. (1994). "Chapter 6: Frequency and Regression Analysis of Hydrologic Data" <http://www.waterlog.info/pdf/freqtxt.pdf>. In Ritzema, Henk P. /Drainage Principles and Applications, Publication 16/ (second revised ed.). Wageningen, The Netherlands: International Institute for Land Reclamation and Improvement (ILRI). pp. 175–224. ISBN 90-70754-33-9.
43. *Jump up ^ <#cite_ref-43>* Wichura, Michael J. (1988). "Algorithm AS241: The Percentage Points of the Normal Distribution". /Applied Statistics/ (Blackwell Publishing) *37* (3): 477–484. doi:10.2307/2347330. JSTOR 2347330.
44. *Jump up ^ <#cite_ref-44>* Johnson, Kotz & Balakrishnan (1995 <#CITEREFJohnsonKotzBalakrishnan1995>, Equation (26.48))
45. *Jump up ^ <#cite_ref-45>* Kinderman & Monahan (1977 <#CITEREFKindermanMonahan1977>)
46. *Jump up ^ <#cite_ref-46>* Marsaglia & Tsang (2000 <#CITEREFMarsagliaTsang2000>)
47. *Jump up ^ <#cite_ref-47>* Wallace (1996 <#CITEREFWallace1996>)
48. *Jump up ^ <#cite_ref-49>* Johnson, Kotz & Balakrishnan (1994 <#CITEREFJohnsonKotzBalakrishnan1994>, p. 85)
49. *Jump up ^ <#cite_ref-50>* Le Cam & Lo Yang (2000 <#CITEREFLe_CamLo_Yang2000>, p. 74)
50. *Jump up ^ <#cite_ref-52>* De Moivre, Abraham (1733), Corollary I – see Walker (1985 <#CITEREFWalker1985>, p. 77)
51. *Jump up ^ <#cite_ref-53>* Stigler (1986 <#CITEREFStigler1986>, p. 76)
52. *Jump up ^ <#cite_ref-55>* Gauss (1809 <#CITEREFGauss1809>, section 177)
53. *Jump up ^ <#cite_ref-56>* Gauss (1809 <#CITEREFGauss1809>, section 179)
54. *Jump up ^ <#cite_ref-58>* Laplace (1774 <#CITEREFLaplace1774>, Problem III)
55. *Jump up ^ <#cite_ref-59>* Pearson (1905 <#CITEREFPearson1905>, p. 189)
56. *Jump up ^ <#cite_ref-60>* Stigler (1986 <#CITEREFStigler1986>, p. 144)
57. *Jump up ^ <#cite_ref-61>* Stigler (1978 <#CITEREFStigler1978>, p. 243)
58. *Jump up ^ <#cite_ref-62>* Stigler (1978 <#CITEREFStigler1978>, p. 244)
59. *Jump up ^ <#cite_ref-63>* Maxwell (1860 <#CITEREFMaxwell1860>, p. 23)
60. *Jump up ^ <#cite_ref-64>* Jaynes, Edwin J.; /Probability Theory: The Logic of Science/, Ch 7 <http://www-biba.inrialpes.fr/Jaynes/cc07s.pdf>
61. *Jump up ^ <#cite_ref-66>* Peirce, Charles S. (c. 1909 MS), /Collected Papers </wiki/Charles_Sanders_Peirce_bibliography#CP>/ v. 6, paragraph 327
62. *Jump up ^ <#cite_ref-67>* Kruskal & Stigler (1997 <#CITEREFKruskalStigler1997>)
63. *Jump up ^ <#cite_ref-68>* "Earliest uses… (entry STANDARD NORMAL CURVE)" <http://jeff560.tripod.com/s.html>.

References

* A�drich, John; Mi��er, Jeff. "Ear�iest Uses of Symbo�s in Probabi�ity and Statistics" <http://jeff560.tripod.com/stat.htm�>. * A�drich, John; Mi��er, Jeff. "Ear�iest Known Uses of Some of the Words of Mathematics" <http://jeff560.tripod.com/mathword.htm�>. In particu�ar, the entries for "be��-shaped and be�� curve" <http://jeff560.tripod.com/b.htm�>, "norma� (distribution)" <http://jeff560.tripod.com/n.htm�>, "Gaussian" <http://jeff560.tripod.com/g.htm�>, and "Error, �aw of error, theory of errors, etc." <http://jeff560.tripod.com/e.htm�>. * Amari, Shun-ichi; Nagaoka, Hiroshi (2000). /Methods of Information Geometry/. Oxford University Press. ISBN </wiki/Internationa�_Standard_Book_Number> 0-8218-0531-2 </wiki/Specia�:BookSources/0-8218-0531-2>. * Bernardo, José M.; Smith, Adrian F. M. (2000). /Bayesian Theory/. Wi�ey. ISBN </wiki/Internationa�_Standard_Book_Number> 0-471-49464-X </wiki/Specia�:BookSources/0-471-49464-X>. * Bryc, W�odzimierz (1995). /The Norma� Distribution: Characterizations with App�ications/. Springer-Ver�ag. ISBN </wiki/Internationa�_Standard_Book_Number> 0-387-97990-5 </wiki/Specia�:BookSources/0-387-97990-5>. * Case��a, George; Berger, Roger L. (2001). /Statistica� Inference/ (2nd ed.). Duxbury. ISBN </wiki/Internationa�_Standard_Book_Number> 0-534-24312-6 </wiki/Specia�:BookSources/0-534-24312-6>. * Cody, Wi��iam J. (1969). "Rationa� Chebyshev Approximations for the Error Function </wiki/Error_function#cite_note-5>". /Mathematics of Computation/ *23* (107): 631–638. doi </wiki/Digita�_object_identifier>:10.1090/S0025-5718-1969-0247736-4 <http://dx.doi.org/10.1090%2FS0025-5718-1969-0247736-4>. * Cover, Thomas M.; Thomas, Joy A. (2006). /E�ements of Information Theory/. John Wi�ey and Sons. * de Moivre, Abraham </wiki/Abraham_de_Moivre> (1738). /The Doctrine of Chances </wiki/The_Doctrine_of_Chances>/. ISBN </wiki/Internationa�_Standard_Book_Number> 0-8218-2103-2 </wiki/Specia�:BookSources/0-8218-2103-2>. * Fan, Jianqing (1991). 
"On the optima� rates of convergence for nonparametric deconvo�ution prob�ems". /The Anna�s of Statistics/ *19* (3): 1257–1272. doi </wiki/Digita�_object_identifier>:10.1214/aos/1176348248 <http://dx.doi.org/10.1214%2Faos%2F1176348248>. JSTOR </wiki/JSTOR> 2241949 <//www.jstor.org/stab�e/2241949>. * Ga�ton, Francis (1889). /Natura� Inheritance/ <http://ga�ton.org/books/natura�-inheritance/pdf/ga�ton-nat-inh-1up-c�ean.pdf>. London, UK: Richard C�ay and Sons. * Ga�ambos, Janos; Simone��i, Ita�o (2004). /Products of Random Variab�es: App�ications to Prob�ems of Physics and to Arithmetica� Functions/. Marce� Dekker, Inc. ISBN </wiki/Internationa�_Standard_Book_Number> 0-8247-5402-6 </wiki/Specia�:BookSources/0-8247-5402-6>.

Page 42: Normal Distribution - Wikipedia, The Free Encyclopedia

* Gauss, Caro�o Friderico </wiki/Car�_Friedrich_Gauss> (1809). /Theoria motvs corporvm coe�estivm in sectionibvs conicis So�em ambientivm/ [/Theory of the Motion of the Heaven�y Bodies Moving about the Sun in Conic Sections/] (in Latin). Eng�ish trans�ation <http://books.goog�e.com/books?id=1TIAAAAAQAAJ>. * Gou�d, Stephen Jay </wiki/Stephen_Jay_Gou�d> (1981). /The Mismeasure of Man </wiki/The_Mismeasure_of_Man>/ (first ed.). W. W. Norton. ISBN </wiki/Internationa�_Standard_Book_Number> 0-393-01489-4 </wiki/Specia�:BookSources/0-393-01489-4>. * Ha�perin, Max; Hart�ey, Herman O.; Hoe�, Pau� G. (1965). "Recommended Standards for Statistica� Symbo�s and Notation. COPSS Committee on Symbo�s and Notation". /The American Statistician/ *19* (3): 12–14. doi </wiki/Digita�_object_identifier>:10.2307/2681417 <http://dx.doi.org/10.2307%2F2681417>. JSTOR </wiki/JSTOR> 2681417 <//www.jstor.org/stab�e/2681417>. * Hart, John F.; et a�. (1968). /Computer Approximations/. New York, NY: John Wi�ey & Sons, Inc. ISBN </wiki/Internationa�_Standard_Book_Number> 0-88275-642-7 </wiki/Specia�:BookSources/0-88275-642-7>. * Hazewinke�, Michie�, ed. (2001), "Norma� Distribution" <http://www.encyc�opediaofmath.org/index.php?tit�e=p/n067460>, /Encyc�opedia of Mathematics </wiki/Encyc�opedia_of_Mathematics>/, Springer </wiki/Springer_Science%2BBusiness_Media>, ISBN </wiki/Internationa�_Standard_Book_Number> 978-1-55608-010-4 </wiki/Specia�:BookSources/978-1-55608-010-4> * Herrnstein, Richard J.; Murray, Char�es </wiki/Char�es_Murray_(author)> (1994). /The Be�� Curve </wiki/The_Be��_Curve>: Inte��igence and C�ass Structure in American Life/. Free Press </wiki/Free_Press_(pub�isher)>. ISBN </wiki/Internationa�_Standard_Book_Number> 0-02-914673-9 </wiki/Specia�:BookSources/0-02-914673-9>. * Hux�ey, Ju�ian S. (1932). /Prob�ems of Re�ative Growth/. London. ISBN </wiki/Internationa�_Standard_Book_Number> 0-486-61114-0 </wiki/Specia�:BookSources/0-486-61114-0>. 
OCLC </wiki/OCLC> 476909537 <//www.wor�dcat.org/oc�c/476909537>. * Johnson, Norman L.; Kotz, Samue�; Ba�akrishnan, Narayanaswamy (1994). /Continuous Univariate Distributions, Vo�ume 1/. Wi�ey. ISBN </wiki/Internationa�_Standard_Book_Number> 0-471-58495-9 </wiki/Specia�:BookSources/0-471-58495-9>. * Johnson, Norman L.; Kotz, Samue�; Ba�akrishnan, Narayanaswamy (1995). /Continuous Univariate Distributions, Vo�ume 2/. Wi�ey. ISBN </wiki/Internationa�_Standard_Book_Number> 0-471-58494-0 </wiki/Specia�:BookSources/0-471-58494-0>. * Kinderman, A�bert J.; Monahan, John F. (1977). "Computer Generation of Random Variab�es Using the Ratio of Uniform Deviates". /ACM Transactions on Mathematica� Software/ *3* (3): 257–260. doi </wiki/Digita�_object_identifier>:10.1145/355744.355750 <http://dx.doi.org/10.1145%2F355744.355750>. * Krishnamoorthy, Ka�imuthu (2006). /Handbook of Statistica� Distributions with App�ications/. Chapman & Ha��/CRC. ISBN </wiki/Internationa�_Standard_Book_Number> 1-58488-635-8 </wiki/Specia�:BookSources/1-58488-635-8>. * Kruska�, Wi��iam H.; Stig�er, Stephen M. (1997). Spencer, Bruce D., ed. /Normative Termino�ogy: 'Norma�' in Statistics and E�sewhere/. Statistics and Pub�ic Po�icy. Oxford University Press. ISBN </wiki/Internationa�_Standard_Book_Number> 0-19-852341-6 </wiki/Specia�:BookSources/0-19-852341-6>. * Lap�ace, Pierre-Simon de </wiki/Pierre-Simon_Lap�ace> (1774). "Mémoire sur �a probabi�ité des causes par �es événements" <http://ga��ica.bnf.fr/ark:/12148/bpt6k77596b/f32>. /Mémoires de

Page 43: Normal Distribution - Wikipedia, The Free Encyclopedia

�'Académie roya�e des Sciences de Paris (Savants étrangers), tome 6/: 621–656. Trans�ated by Stephen M. Stig�er in /Statistica� Science/ *1* (3), 1986: JSTOR </wiki/JSTOR> 2245476 <http://www.jstor.org/stab�e/2245476>. * Lap�ace, Pierre-Simon (1812). /Théorie ana�ytique des probabi�ités/ [/Ana�ytica� theory of probabi�ities </wiki/Ana�ytica�_theory_of_probabi�ities>/]. * Le Cam, Lucien; Lo Yang, Grace (2000). /Asymptotics in Statistics: Some Basic Concepts/ (second ed.). Springer. ISBN </wiki/Internationa�_Standard_Book_Number> 0-387-95036-2 </wiki/Specia�:BookSources/0-387-95036-2>. * Lexis, Wi�he�m (1878). "Sur �a durée norma�e de �a vie humaine et sur �a théorie de �a stabi�ité des rapports statistiques". /Anna�es de démographie internationa�e/ (Paris) *II*: 447–462. * Lukacs, Eugene; King, Edgar P. (1954). "A Property of Norma� Distribution". /The Anna�s of Mathematica� Statistics/ *25* (2): 389–394. doi </wiki/Digita�_object_identifier>:10.1214/aoms/1177728796 <http://dx.doi.org/10.1214%2Faoms%2F1177728796>. JSTOR </wiki/JSTOR> 2236741 <//www.jstor.org/stab�e/2236741>. * McPherson, G�en (1990). /Statistics in Scientific Investigation: Its Basis, App�ication and Interpretation/. Springer-Ver�ag. ISBN </wiki/Internationa�_Standard_Book_Number> 0-387-97137-8 </wiki/Specia�:BookSources/0-387-97137-8>. * Marsag�ia, George </wiki/George_Marsag�ia>; Tsang, Wai Wan (2000). "The Ziggurat Method for Generating Random Variab�es" <http://www.jstatsoft.org/v05/i08/paper>. /Journa� of Statistica� Software/ *5* (8). * Wa��ace, C. S. </wiki/Chris_Wa��ace_(computer_scientist)> (1996). "Fast pseudo-random generators for norma� and exponentia� variates". /ACM Transactions on Mathematica� Software/ *22* (1): 119–127. doi </wiki/Digita�_object_identifier>:10.1145/225545.225554 <http://dx.doi.org/10.1145%2F225545.225554>. * Marsag�ia, George (2004). "Eva�uating the Norma� Distribution" <http://www.jstatsoft.org/v11/i05/paper>. /Journa� of Statistica� Software/ *11* (4). 
* Maxwe��, James C�erk </wiki/James_C�erk_Maxwe��> (1860). "V. I��ustrations of the dynamica� theory of gases. — Part I: On the motions and co��isions of perfect�y e�astic spheres". /Phi�osophica� Magazine, series 4/ *19* (124): 19–32. doi </wiki/Digita�_object_identifier>:10.1080/14786446008642818 <http://dx.doi.org/10.1080%2F14786446008642818>. * Pate�, Jagdish K.; Read, Campbe�� B. (1996). /Handbook of the Norma� Distribution/ (2nd ed.). CRC Press. ISBN </wiki/Internationa�_Standard_Book_Number> 0-8247-9342-0 </wiki/Specia�:BookSources/0-8247-9342-0>. * Pearson, Kar� </wiki/Kar�_Pearson> (1905). "'Das Feh�ergesetz und seine Vera��gemeinerungen durch Fechner und Pearson'. A rejoinder". /Biometrika/ *4* (1): 169–212. JSTOR </wiki/JSTOR> 2331536 <//www.jstor.org/stab�e/2331536>. * Pearson, Kar� (1920). "Notes on the History of Corre�ation". /Biometrika/ *13* (1): 25–45. doi </wiki/Digita�_object_identifier>:10.1093/biomet/13.1.25 <http://dx.doi.org/10.1093%2Fbiomet%2F13.1.25>. JSTOR </wiki/JSTOR> 2331722 <//www.jstor.org/stab�e/2331722>. * Rohrbasser, Jean-Marc; Véron, Jacques (2003). "Wi�he�m Lexis: The Norma� Length of Life as an Expression of the "Nature of Things"" <http://www.persee.fr/web/revues/home/prescript/artic�e/pop_1634-2941_2003_num_58_3_18444>. /Popu�ation/ *58* (3): 303–322. doi

Page 44: Normal Distribution - Wikipedia, The Free Encyclopedia

doi:10.3917/pope.303.0303.

* Stigler, Stephen M. (1978). "Mathematical Statistics in the Early States". The Annals of Statistics 6 (2): 239–265. doi:10.1214/aos/1176344123. JSTOR 2958876.
* Stigler, Stephen M. (1982). "A Modest Proposal: A New Standard for the Normal". The American Statistician 36 (2): 137–138. doi:10.2307/2684031. JSTOR 2684031.
* Stigler, Stephen M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900. Harvard University Press. ISBN 0-674-40340-1.
* Stigler, Stephen M. (1999). Statistics on the Table. Harvard University Press. ISBN 0-674-83601-4.
* Walker, Helen M. (1985). "De Moivre on the Law of Normal Probability" <http://www.york.ac.uk/depts/maths/histstat/demoivre.pdf>. In Smith, David Eugene. A Source Book in Mathematics. Dover. ISBN 0-486-64690-4.
* Weisstein, Eric W. "Normal Distribution" <http://mathworld.wolfram.com/NormalDistribution.html>. MathWorld.
* West, Graeme (2009). "Better Approximations to Cumulative Normal Functions" <http://www.wilmott.com/pdfs/090721_west.pdf>. Wilmott Magazine: 70–76.
* Zelen, Marvin; Severo, Norman C. (1964). "Probability Functions (chapter 26)" <http://www.math.sfu.ca/~cbm/aands/page_931.htm>. In Abramowitz, Milton; Stegun, Irene A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards. New York, NY: Dover. ISBN 0-486-61272-4.
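The Zelen & Severo chapter cited above is the source of the well-known polynomial approximation 26.2.17 to the standard normal CDF. A minimal sketch in Python (the function name is mine; the constants are the published ones, with absolute error below 7.5×10⁻⁸):

```python
import math

# Zelen & Severo's formula 26.2.17 from the Abramowitz & Stegun
# Handbook cited above; absolute error is below 7.5e-8.
def norm_cdf_approx(x: float) -> float:
    """Standard normal CDF Phi(x) via A&S formula 26.2.17."""
    if x < 0.0:
        return 1.0 - norm_cdf_approx(-x)      # symmetry: Phi(-x) = 1 - Phi(x)
    t = 1.0 / (1.0 + 0.2316419 * x)
    # Horner evaluation of b1*t + b2*t^2 + b3*t^3 + b4*t^4 + b5*t^5
    poly = t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
               + t * (-1.821255978 + t * 1.330274429))))
    pdf = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return 1.0 - pdf * poly
```

For comparison, the exact value is `0.5 * (1 + math.erf(x / math.sqrt(2)))`; the two agree to about seven decimal places across the real line.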

External links

Wikimedia Commons has media related to Normal distribution <//commons.wikimedia.org/wiki/Category:Normal_distribution>.

* Hazewinkel, Michiel, ed. (2001), "Normal distribution" <http://www.encyclopediaofmath.org/index.php?title=p/n067460>, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
* Normal Distribution Video Tutorial Part 1-2 <https://www.youtube.com/watch?v=kB_kYUbS_ig> on YouTube
* An 8-foot-tall (2.4 m) Probability Machine (named Sir Francis) comparing stock market returns to the randomness of the beans dropping through the quincunx pattern <https://www.youtube.com/watch?v=AUSKTk9ENzg> on YouTube. Link originating from Index Funds Advisors <http://www.ifa.com/>
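The quincunx in the last link illustrates the de Moivre–Laplace theorem: each bean bounces left or right at each row of pegs, so its final bin is Binomial(n, 1/2)-distributed, and the histogram of many beans approaches the normal curve. A toy simulation sketch (the parameter values are illustrative assumptions, not taken from the video):

```python
import random

# Toy quincunx ("Galton board") simulation: each bean's final bin index
# equals its number of rightward bounces, a Binomial(rows, 1/2) draw.
def galton_board(rows: int = 12, beans: int = 10_000, seed: int = 1) -> list:
    rng = random.Random(seed)                 # fixed seed for reproducibility
    bins = [0] * (rows + 1)
    for _ in range(beans):
        rights = sum(1 for _ in range(rows) if rng.random() < 0.5)
        bins[rights] += 1                     # bin index = rightward bounces
    return bins

counts = galton_board()
```

With enough beans the counts peak in the central bins and fall off symmetrically, the bell shape the machine makes visible.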

Retrieved from "http://en.wikipedia.org/w/index.php?title=Normal_distribution&oldid=625468853"

Categories:

* Continuous distributions * Conjugate prior distributions * Distributions with conjugate priors * Normal distribution * Exponential family distributions * Stable distributions * Probability distributions



* This page was last modified on 14 September 2014, at 02:35. * Text is available under the Creative Commons Attribution-ShareAlike License <//en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License>; additional terms may apply. By using this site, you agree to the Terms of Use <//wikimediafoundation.org/wiki/Terms_of_Use> and Privacy Policy <//wikimediafoundation.org/wiki/Privacy_policy>. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. <//www.wikimediafoundation.org/>, a non-profit organization.
