
Eigenvector


In linear algebra, the eigenvectors (from the German eigen meaning "own") of a linear operator are non-zero vectors which, when operated on by the operator, result in a scalar multiple of themselves. The scalar is then called the eigenvalue associated with the eigenvector.

In applied mathematics and physics the eigenvectors of a matrix or a differential operator often have important physical significance. In classical mechanics the eigenvectors of the governing equations typically correspond to natural modes of vibration in a body, and the eigenvalues to their frequencies. In quantum mechanics, operators correspond to observable variables, eigenvectors are also called eigenstates, and the eigenvalues of an operator represent those values of the corresponding variable that have non-zero probability of being measured.

Definition

Formally, we define eigenvectors and eigenvalues as follows: If A : V -> V is a linear operator on some vector space V, v is a non-zero vector in V and λ is a scalar (possibly zero) such that

Av = \lambda v

then we say that v is an eigenvector of the operator A, and its associated eigenvalue is λ. Note that if v is an eigenvector with eigenvalue λ, then any non-zero multiple of v is also an eigenvector with eigenvalue λ. In fact, all the eigenvectors with associated eigenvalue λ, together with 0, form a subspace of V, the eigenspace for the eigenvalue λ.

Examples

For linear transformations of the two-dimensional space R² we can discern the following special cases (several of them are checked numerically in the short sketch after the list):

  • translation: no eigenvectors. Example: the map v ↦ v + a with a ≠ 0.
  • rotation: no real eigenvectors (complex eigenvalue–eigenvector pairs exist). Example: rotation by 90°, with matrix [[0, −1], [1, 0]] (rows listed).
  • reflection: eigenvectors are perpendicular and parallel to the line of symmetry, with eigenvalues −1 and 1 respectively. Example: reflection in the x-axis, with matrix [[1, 0], [0, −1]].
  • uniform scaling: all non-zero vectors are eigenvectors, and the eigenvalue is the scale factor. Example: Av = cv for every vector v.
  • projection onto a line: vectors on the line are eigenvectors with eigenvalue 1, and vectors perpendicular to the line are eigenvectors with eigenvalue 0. Example: projection onto the y-axis, with matrix [[0, 0], [0, 1]].
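
As a small numerical aside (not part of the original article), the following Python sketch uses NumPy to compute the eigenvalues of the rotation, reflection, and projection matrices listed above.

    import numpy as np

    # Matrices from the 2-D examples above.
    rotation   = np.array([[0.0, -1.0],   # rotation by 90 degrees
                           [1.0,  0.0]])
    reflection = np.array([[1.0,  0.0],   # reflection in the x-axis
                           [0.0, -1.0]])
    projection = np.array([[0.0,  0.0],   # projection onto the y-axis
                           [0.0,  1.0]])

    print(np.linalg.eigvals(rotation))    # [0.+1.j 0.-1.j]  -- no real eigenvalues
    print(np.linalg.eigvals(reflection))  # [ 1. -1.]
    print(np.linalg.eigvals(projection))  # [0. 1.]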

Identifying eigenvectors

For example, consider the matrix

A = \begin{bmatrix} 0 & 1 & -1 \\ 1 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix}

which represents a linear operator R³ → R³. One can check that

A \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 2 \\ 2 \\ -2 \end{bmatrix} = 2 \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}

and therefore 2 is an eigenvalue of A and we have found a corresponding eigenvector.
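
As a quick numerical check (an aside not in the original article, assuming NumPy is available), the sketch below multiplies A by the vector (1, 1, −1) and confirms that the result is twice that vector.

    import numpy as np

    A = np.array([[ 0, 1, -1],
                  [ 1, 1,  0],
                  [-1, 0,  1]])
    v = np.array([1, 1, -1])

    print(A @ v)                       # [ 2  2 -2]
    print(np.allclose(A @ v, 2 * v))   # True: v is an eigenvector with eigenvalue 2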

The characteristic polynomial

An important tool for describing eigenvalues of square matrices is the characteristic polynomial: saying that λ is an eigenvalue of A is equivalent to stating that the system of linear equations (A − λ id_V)v = 0 (where id_V is the identity matrix) has a non-zero solution v (namely an eigenvector), and so it is equivalent to the determinant det(A − λ id_V) being zero. The function p(λ) = det(A − λ id_V) is a polynomial in λ, since determinants are defined as sums of products. This is the characteristic polynomial of A; its zeros are precisely the eigenvalues of A. If A is an n-by-n matrix, then its characteristic polynomial has degree n, and A can therefore have at most n eigenvalues.

Returning to the example above, if we wanted to compute all of A's eigenvalues, we could determine the characteristic polynomial first:

p(x) = \det(A - xI) = \begin{vmatrix} -x & 1 & -1 \\ 1 & 1-x & 0 \\ -1 & 0 & 1-x \end{vmatrix} = -x^3 + 2x^2 + x - 2 = -(x-2)(x-1)(x+1)

and we see that the eigenvalues of A are 2, 1 and -1.
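
To make this concrete, here is a small NumPy sketch (an aside, not in the original article) that recovers the characteristic polynomial and its roots for the example matrix; note that np.poly returns the coefficients of the monic polynomial det(xI − A), which differs from the expansion above by an overall sign.

    import numpy as np

    A = np.array([[ 0, 1, -1],
                  [ 1, 1,  0],
                  [-1, 0,  1]])

    coeffs = np.poly(A)       # det(xI - A) = x^3 - 2x^2 - x + 2  ->  [ 1. -2. -1.  2.]
    print(coeffs)
    print(np.roots(coeffs))   # eigenvalues 2, 1 and -1 (possibly in another order)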

The Cayley-Hamilton theorem states that every square matrix satisfies its own characteristic polynomial, that is p(A)=0.
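
As an illustrative check (not in the original text), substituting the example matrix into its monic characteristic polynomial x³ − 2x² − x + 2 does give the zero matrix:

    import numpy as np

    A = np.array([[ 0, 1, -1],
                  [ 1, 1,  0],
                  [-1, 0,  1]])

    # Cayley-Hamilton: p(A) = A^3 - 2 A^2 - A + 2 I should vanish.
    p_of_A = (np.linalg.matrix_power(A, 3)
              - 2 * np.linalg.matrix_power(A, 2)
              - A
              + 2 * np.eye(3))
    print(np.allclose(p_of_A, 0))   # True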

(In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Faster and more numerically stable methods are available, for instance the QR algorithm.)
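
In code, the practical route is simply to call a library eigenvalue solver; a minimal sketch, assuming NumPy (whose eigvals delegates to LAPACK routines built on QR-type iterations rather than on the characteristic polynomial):

    import numpy as np

    A = np.array([[ 0, 1, -1],
                  [ 1, 1,  0],
                  [-1, 0,  1]])

    print(np.linalg.eigvals(A))   # the eigenvalues 2, 1 and -1 (order may vary)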

Complex eigenvectors

Note that if A is a real matrix, the characteristic polynomial will have real coefficients, but not all of its roots are necessarily real. The complex eigenvalues are each associated with complex eigenvectors.

In general, if v1, ..., vm are eigenvectors with different eigenvalues λ1, ..., λm, then the vectors v1, ..., vm are necessarily linearly independent.

The spectral theorem for symmetric matrices states that, if A is a real symmetric n-by-n matrix, then all its eigenvalues are real, and there exist n linearly independent eigenvectors for A which are mutually orthogonal.
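
A minimal sketch of this, assuming NumPy: np.linalg.eigh is specialised to symmetric (Hermitian) matrices and returns real eigenvalues together with orthonormal eigenvectors.

    import numpy as np

    A = np.array([[ 0., 1., -1.],
                  [ 1., 1.,  0.],
                  [-1., 0.,  1.]])

    w, Q = np.linalg.eigh(A)                 # eigenvalues ascending, eigenvectors as columns of Q
    print(w)                                 # [-1.  1.  2.]
    print(np.allclose(Q.T @ Q, np.eye(3)))   # True: the eigenvectors are mutually orthogonal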

Our example matrix from above is symmetric, and three mutually orthogonal eigenvectors of A are

v_1 = \begin{bmatrix} 1 \\ 1 \\ -1 \end{bmatrix}, \quad v_2 = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}, \quad v_3 = \begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix}.

These three vectors form a basis of R³. With respect to this basis, the linear map represented by A takes a particularly simple form: every vector x in R³ can be written uniquely as

x = x_1 v_1 + x_2 v_2 + x_3 v_3

and then we have

Ax = 2 x_1 v_1 + x_2 v_2 - x_3 v_3.
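
The following sketch (an aside, assuming NumPy) verifies that v_1, v_2, v_3 are mutually orthogonal eigenvectors of A and that, for an arbitrary vector expressed in this basis, applying A just rescales each coordinate by the corresponding eigenvalue.

    import numpy as np

    A  = np.array([[ 0., 1., -1.],
                   [ 1., 1.,  0.],
                   [-1., 0.,  1.]])
    v1 = np.array([1.,  1., -1.])
    v2 = np.array([0.,  1.,  1.])
    v3 = np.array([2., -1.,  1.])

    # Eigenvector and orthogonality checks.
    print(np.allclose(A @ v1, 2 * v1), np.allclose(A @ v2, v2), np.allclose(A @ v3, -v3))
    print(v1 @ v2, v1 @ v3, v2 @ v3)         # 0.0 0.0 0.0

    # Expand an arbitrary x in the eigenbasis and compare A x with 2*x1*v1 + x2*v2 - x3*v3.
    x = np.array([3., -1., 4.])
    B = np.column_stack([v1, v2, v3])
    x1, x2, x3 = np.linalg.solve(B, x)       # coordinates of x in the basis v1, v2, v3
    print(np.allclose(A @ x, 2*x1*v1 + x2*v2 - x3*v3))   # True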

Decomposition theorem

An n by n matrix has n linearly independent real eigenvectors if and only if it can be decomposed into the form

A = U \Lambda U^{-1}

where the columns of U are the n linearly independent eigenvectors and Λ is a diagonal matrix with the corresponding eigenvalues on the diagonal. If A is symmetric, then U can be chosen orthogonal, and if A is Hermitian, then U can be chosen unitary. Such a matrix U does not always exist; for example

A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}

has only a single one-dimensional eigenspace (spanned by (1, 0)), so no basis of eigenvectors exists. In such a case, the singular value decomposition can be used instead.
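
A small sketch, assuming NumPy, of both cases: the example matrix A diagonalises as A = UΛU⁻¹, while the 2-by-2 matrix above does not have two independent eigenvectors.

    import numpy as np

    A = np.array([[ 0., 1., -1.],
                  [ 1., 1.,  0.],
                  [-1., 0.,  1.]])

    w, U = np.linalg.eig(A)                  # columns of U are eigenvectors
    Lam  = np.diag(w)
    print(np.allclose(U @ Lam @ np.linalg.inv(U), A))   # True: A = U Lam U^{-1}

    # The defective matrix [[1, 1], [0, 1]] has eigenvalue 1 twice but only one
    # independent eigenvector, so no such decomposition exists.
    B = np.array([[1., 1.],
                  [0., 1.]])
    wB, UB = np.linalg.eig(B)
    print(wB)   # [1. 1.]
    print(UB)   # both columns are (numerically) parallel to [1, 0]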

Infinite-dimensional spaces

The concept of eigenvectors can be extended to linear operators acting on infinite-dimensional Hilbert spaces or Banach spaces.

There are operators on Banach spaces which have no eigenvectors at all. For example, take the bilateral shift S on the Hilbert space ℓ²(Z): if S had an eigenvector v with eigenvalue λ, the components of v would have to form a two-sided geometric sequence (|v_n| proportional to |λ|^{−n}), and no such non-zero sequence is square-summable over all of Z, so no eigenvector exists. However, any bounded linear operator on a Banach space V does have a non-empty spectrum. The spectrum σ(A) of the operator A : V → V is defined as

\sigma(A) = \{ \lambda \in \mathbf{C} \mid A - \lambda \, \mathrm{id}_V \text{ is not invertible} \}.

Then σ(A) is a compact, non-empty set of complex numbers. When A is a compact operator (and in particular when A is an operator between finite-dimensional spaces as above), every non-zero element of σ(A) is an eigenvalue of A; in the finite-dimensional case the spectrum coincides exactly with the set of eigenvalues.

The spectrum of an operator is an important property in functional analysis.
