Kernel density estimation

Kernel density estimation of 100 normally distributed random numbers using different smoothing bandwidths.

In statistics, kernel density estimation (or the Parzen window method, named after Emanuel Parzen) is a way of estimating the probability density function of a random variable. As an illustration, given some data from a sample of a population, kernel density estimation makes it possible to extrapolate the data to the entire population.

Definition

If <math>x_1, x_2, \dots, x_N \sim f</math> is an IID sample of a random variable, then the kernel density approximation of its probability density function is

:<math>\widehat{f}_h(x) = \frac{1}{Nh} \sum_{i=1}^N K\left(\frac{x - x_i}{h}\right)</math>

where <math>K</math> is some kernel and <math>h</math> is the bandwidth (smoothing parameter). Quite often <math>K</math> is taken to be a Gaussian function with mean zero and variance <math>\sigma^2</math>:

:<math>K(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-x^2/2\sigma^2}.</math>
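
As a concrete illustration of the definition, the following is a minimal Python sketch of the estimator above with a standard Gaussian kernel; the function names and the NumPy dependency are illustrative choices, not part of the article:

    import numpy as np

    def gaussian_kernel(u):
        # Standard Gaussian kernel: mean zero, variance one.
        return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

    def kde(x, sample, h):
        # f_hat_h(x) = (1 / (N*h)) * sum_i K((x - x_i) / h)
        sample = np.asarray(sample, dtype=float)
        return gaussian_kernel((x - sample) / h).sum() / (sample.size * h)

    # Example: density estimate at x = 0 for a small sample, bandwidth h = 0.5
    print(kde(0.0, [-1.2, -0.5, 0.1, 0.4, 1.3], h=0.5))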

Intuition

Although less smooth density estimators, such as the histogram, can be made asymptotically consistent, they are often either discontinuous or converge at slower rates than the kernel density estimator. Rather than grouping observations together in bins, the kernel density estimator can be thought of as placing a small "bump" at each observation, with the shape of the bumps determined by the kernel function. The estimator is a "sum of bumps" and is clearly smoother as a result (see the image below).

Six Gaussians (red) and their sum (blue). The Parzen window density estimate f(x) is obtained by dividing this sum by 6, the number of Gaussians. The variance of the Gaussians was set to 0.5. Note that where the points are denser the density estimate will have higher values.
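
A figure of this kind can be reproduced with a short Python sketch like the one below; the six data points here are invented for illustration, since the original figure's values are not given in the text:

    import numpy as np
    import matplotlib.pyplot as plt

    points = np.array([-2.1, -1.3, -0.4, 1.9, 5.1, 6.2])  # hypothetical observations
    var = 0.5                              # variance of each Gaussian bump
    xs = np.linspace(-5.0, 10.0, 500)

    # One Gaussian "bump" centered at each observation; shape (500, 6).
    bumps = np.exp(-(xs[:, None] - points) ** 2 / (2.0 * var)) \
            / np.sqrt(2.0 * np.pi * var)

    plt.plot(xs, bumps / len(points), "r--")  # individual bumps, scaled by 1/6
    plt.plot(xs, bumps.mean(axis=1), "b")     # their sum / 6: the density estimate
    plt.show()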

Properties

Let <math>R(f, \hat{f}(x))</math> be the <math>L_2</math> risk function for <math>f</math>. Under weak assumptions on <math>f</math> and <math>K</math>,

:<math>R(f, \hat{f}(x)) \approx \frac{1}{4}\sigma_K^4 h^4 \int (f''(x))^2\,dx + \frac{\int K^2(x)\,dx}{nh}</math> where <math>\sigma_K^2 = \int x^2 K(x)\,dx</math>.

By minimizing the theoretical risk function over <math>h</math> (a short derivation sketch follows the constants below), it can be shown that the optimal bandwidth is

:<math>h^* = \frac{c_1^{-2/5} c_2^{1/5} c_3^{-1/5}}{n^{1/5}}</math>

where

:<math>c_1 = \int x^2 K(x)\,dx</math>
:<math>c_2 = \int K(x)^2\,dx</math>
:<math>c_3 = \int (f''(x))^2\,dx</math>
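
To see where this comes from, differentiate the approximate risk above with respect to <math>h</math> and set the derivative to zero, writing <math>\sigma_K^2 = c_1</math> (this sketch fills in the minimization step the text alludes to):

:<math>\frac{d}{dh}\left[\frac{1}{4} c_1^2 c_3 h^4 + \frac{c_2}{nh}\right] = c_1^2 c_3 h^3 - \frac{c_2}{nh^2} = 0 \quad\Longrightarrow\quad h^5 = \frac{c_2}{c_1^2 c_3 n},</math>

which is exactly <math>h^* = c_1^{-2/5} c_2^{1/5} c_3^{-1/5} n^{-1/5}</math>.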

When the optimal bandwidth is chosen, the risk function satisfies <math>R(f, \hat{f}(x)) \approx \frac{c_4}{n^{4/5}}</math> for some constant <math>c_4 > 0</math>. It can be shown that, under weak assumptions, no nonparametric estimator can converge at a faster rate than the kernel estimator. Note that the <math>n^{-4/5}</math> rate is slower than the typical <math>n^{-1}</math> convergence rate of parametric methods.
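
For a numerical illustration, here is a Python sketch that evaluates <math>h^*</math> for a standard Gaussian kernel under the additional assumption (not made in the article) that the unknown density <math>f</math> is itself normal, so that <math>c_3</math> has a closed form:

    import numpy as np

    def optimal_bandwidth(n, sigma):
        # h* = c1^(-2/5) * c2^(1/5) * c3^(-1/5) / n^(1/5), standard Gaussian kernel,
        # assuming f = N(mu, sigma^2) so that c3 is available in closed form.
        c1 = 1.0                                      # int x^2 K(x) dx (variance of K)
        c2 = 1.0 / (2.0 * np.sqrt(np.pi))             # int K(x)^2 dx
        c3 = 3.0 / (8.0 * np.sqrt(np.pi) * sigma**5)  # int (f''(x))^2 dx for normal f
        return c1 ** (-2 / 5) * c2 ** (1 / 5) * c3 ** (-1 / 5) / n ** (1 / 5)

    # Recovers the familiar 1.06 * sigma * n^(-1/5) rule of thumb:
    print(optimal_bandwidth(100, 1.0))  # approx 0.42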

Statistical implementation

  • In Stata, it is implemented through kdensity; for example, histogram x, kdensity.
  • In R, it is implemented through the density function.

References

  • Parzen, E. (1962). "On estimation of a probability density function and mode". Annals of Mathematical Statistics 33, pp. 1065–1076.
  • Duda, R. and Hart, P. (1973). Pattern Classification and Scene Analysis. John Wiley & Sons. ISBN 0-471-22361-1.
  • Wasserman, L. (2005). All of Statistics: A Concise Course in Statistical Inference. Springer Texts in Statistics.
