
Adjugate matrix: Difference between revisions

Article snapshot taken from [REDACTED] under the Creative Commons Attribution-ShareAlike license.
Previous revision, 01:18, 9 September 2020, by Miaumee (talk | contribs): "General revision throughout the page. Improved inline citations. Rephrased sentences to prevent flow disruption. Wikilinked 'invertible matrices'. Minor punctuation fixes. Broken down lengthy sentences. Put non-inline-cited books in Bibliography section." Tags: Reverted, Visual edit.

Current revision, 15:44, 21 September 2020, by JayBeeEll (talk | contribs): "Undid revision 977469291 by Miaumee (talk). Per User talk:Miaumee, this is apparently the preferred response to poor editing." Tag: Undo.

Revision as of 15:44, 21 September 2020

In linear algebra, the adjugate, classical adjoint, or adjunct of a square matrix is the transpose of its cofactor matrix.

The adjugate has sometimes been called the "adjoint", but today the "adjoint" of a matrix normally refers to its corresponding adjoint operator, which is its conjugate transpose.

Definition

The adjugate of A is the transpose of the cofactor matrix C of A,

\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T}.

In more detail, suppose R is a commutative ring and A is an n × n matrix with entries from R. The (i,j)-minor of A, denoted Mij, is the determinant of the (n − 1) × (n − 1) matrix that results from deleting row i and column j of A. The cofactor matrix of A is the n × n matrix C whose (i, j) entry is the (i, j) cofactor of A, which is the (i, j)-minor times a sign factor:

\mathbf{C} = \left((-1)^{i+j} \mathbf{M}_{ij}\right)_{1 \le i, j \le n}.

The adjugate of A is the transpose of C, that is, the n×n matrix whose (i,j) entry is the (j,i) cofactor of A,

\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \left((-1)^{i+j} \mathbf{M}_{ji}\right)_{1 \le i, j \le n}.

The adjugate is defined as it is so that the product of A with its adjugate yields a diagonal matrix whose diagonal entries are the determinant det(A). That is,

\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A}) \mathbf{A} = \det(\mathbf{A})\, \mathbf{I},

where I is the n×n identity matrix. This is a consequence of the Laplace expansion of the determinant.

The above formula implies one of the fundamental results in matrix algebra, that A is invertible if and only if det(A) is an invertible element of R. When this holds, the equation above yields

\operatorname{adj}(\mathbf{A}) = \det(\mathbf{A})\, \mathbf{A}^{-1}, \qquad \mathbf{A}^{-1} = \det(\mathbf{A})^{-1} \operatorname{adj}(\mathbf{A}).
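
The definition translates directly into code. Below is a short Python sketch (the helper name adjugate and the test matrix are chosen here for illustration) that builds the cofactor matrix from the (i, j)-minors with NumPy and checks the identity A adj(A) = det(A) I:

    import numpy as np

    def adjugate(A):
        """Adjugate as the transpose of the cofactor matrix, per the definition above."""
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        C = np.empty((n, n))                        # cofactor matrix
        for i in range(n):
            for j in range(n):
                minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
        return C.T                                  # adj(A) = C^T

    A = np.array([[-3.0, 2, -5], [-1, 0, -2], [3, -4, 1]])   # the 3x3 example used later in the article
    print(np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3)))   # True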

Examples

1 × 1 generic matrix

The adjugate of any non-zero 1×1 matrix (complex scalar) is \mathbf{I} = (1). By convention, adj(0) = 0.

2 × 2 generic matrix

The adjugate of the 2×2 matrix

\mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}

is

\operatorname{adj}(\mathbf{A}) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.

By direct computation,

\mathbf{A} \operatorname{adj}(\mathbf{A}) = \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} = (\det \mathbf{A})\, \mathbf{I}.

In this case, it is also true that det(adj(A)) = det(A) and hence that adj(adj(A)) = A.
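
The generic 2 × 2 formula can be reproduced symbolically; the following sketch assumes SymPy and its Matrix.adjugate method:

    import sympy as sp

    a, b, c, d = sp.symbols('a b c d')
    A = sp.Matrix([[a, b], [c, d]])
    print(A.adjugate())                    # Matrix([[d, -b], [-c, a]])
    print(A * A.adjugate())                # (a*d - b*c) times the identity
    print(A.adjugate().adjugate())         # recovers A, matching adj(adj(A)) = A in the 2x2 case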

3 × 3 generic matrix

Consider a 3×3 matrix

\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.

Its cofactor matrix is

\mathbf{C} = \begin{pmatrix}
+\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} &
-\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} &
+\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} \\[1ex]
-\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} &
+\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} &
-\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} \\[1ex]
+\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} &
-\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} &
+\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}
\end{pmatrix},

where

\begin{vmatrix} a_{im} & a_{in} \\ a_{jm} & a_{jn} \end{vmatrix} = \det \begin{pmatrix} a_{im} & a_{in} \\ a_{jm} & a_{jn} \end{pmatrix}.

Its adjugate is the transpose of its cofactor matrix,

\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \begin{pmatrix}
+\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} &
-\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} &
+\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} \\[1ex]
-\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} &
+\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} &
-\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} \\[1ex]
+\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} &
-\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} &
+\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix}
\end{pmatrix}.

3 × 3 numeric matrix

As a specific example, we have

\operatorname{adj} \begin{pmatrix} -3 & 2 & -5 \\ -1 & 0 & -2 \\ 3 & -4 & 1 \end{pmatrix} = \begin{pmatrix} -8 & 18 & -4 \\ -5 & 12 & -1 \\ 4 & -6 & 2 \end{pmatrix}.

It is easy to check the adjugate is the inverse times the determinant, −6.

The −1 in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of A. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix A,

\begin{pmatrix} -3 & -5 \\ -1 & -2 \end{pmatrix}.

The (3,2) cofactor is a sign times the determinant of this submatrix:

(-1)^{3+2} \det \begin{pmatrix} -3 & -5 \\ -1 & -2 \end{pmatrix} = -\bigl((-3)(-2) - (-5)(-1)\bigr) = -1,

and this is the (2,3) entry of the adjugate.
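
For this numeric example, a brief NumPy check (using the matrix above; adj(A) is recovered as det(A) A^{-1} since A is invertible) reproduces both the full adjugate and the (2,3) entry:

    import numpy as np

    A = np.array([[-3.0, 2, -5], [-1, 0, -2], [3, -4, 1]])
    det_A = np.linalg.det(A)                      # -6 (up to rounding)
    adj_A = det_A * np.linalg.inv(A)              # valid because A is invertible
    print(np.round(adj_A))                        # [[-8 18 -4], [-5 12 -1], [4 -6 2]]

    # (2,3) entry of adj(A) = (3,2) cofactor of A: delete row 3 and column 2, then apply the sign
    sub = np.delete(np.delete(A, 2, axis=0), 1, axis=1)
    print((-1) ** (3 + 2) * np.linalg.det(sub))   # -1 (up to rounding)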

Properties

For any n × n matrix A, elementary computations show that adjugates enjoy the following properties.

  • adj(0) = 0 and adj(I) = I, where 0 and I are the zero and identity matrices, respectively.
  • adj(cA) = c^{n−1} adj(A) for any scalar c.
  • adj(A^T) = adj(A)^T.
  • det(adj(A)) = (det A)^{n−1}.
  • If A is invertible, then adj(A) = (det A) A^{−1}. It follows that:
    • adj(A) is invertible with inverse (det A)^{−1} A.
    • adj(A^{−1}) = adj(A)^{−1}.
  • adj(A) is entrywise polynomial in A. In particular, over the real or complex numbers, the adjugate is a smooth function of the entries of A.

Over the complex numbers,

  • \operatorname{adj}(\bar{\mathbf{A}}) = \overline{\operatorname{adj}(\mathbf{A})}, where the bar denotes complex conjugation.
  • \operatorname{adj}(\mathbf{A}^*) = \operatorname{adj}(\mathbf{A})^*, where the asterisk denotes conjugate transpose.

Suppose that B is another n × n matrix. Then

\operatorname{adj}(\mathbf{AB}) = \operatorname{adj}(\mathbf{B}) \operatorname{adj}(\mathbf{A}).

This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices A and B,

\operatorname{adj}(\mathbf{B}) \operatorname{adj}(\mathbf{A}) = (\det \mathbf{B}) \mathbf{B}^{-1} (\det \mathbf{A}) \mathbf{A}^{-1} = (\det \mathbf{AB}) (\mathbf{AB})^{-1} = \operatorname{adj}(\mathbf{AB}).

Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of A or B is not invertible.
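
A quick exact check of the multiplicativity formula, using SymPy's Matrix.adjugate on integer matrices chosen here for illustration (the second check uses a singular factor):

    import sympy as sp

    A = sp.Matrix([[1, 2, 0], [3, -1, 4], [0, 5, 2]])
    B = sp.Matrix([[2, 0, 1], [1, 1, 0], [-3, 2, 2]])
    print((A * B).adjugate() == B.adjugate() * A.adjugate())     # True

    S = sp.Matrix([[1, 2, 3], [2, 4, 6], [0, 1, 1]])             # singular: det(S) = 0
    print((S * B).adjugate() == B.adjugate() * S.adjugate())     # True even though S is not invertible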

A corollary of the previous formula is that, for any non-negative integer k,

\operatorname{adj}(\mathbf{A}^k) = \operatorname{adj}(\mathbf{A})^k.

If A is invertible, then the above formula also holds for negative k.

From the identity

(\mathbf{A} + \mathbf{B}) \operatorname{adj}(\mathbf{A} + \mathbf{B}) \mathbf{B} = \det(\mathbf{A} + \mathbf{B})\, \mathbf{B} = \mathbf{B} \operatorname{adj}(\mathbf{A} + \mathbf{B}) (\mathbf{A} + \mathbf{B}),

we deduce

\mathbf{A} \operatorname{adj}(\mathbf{A} + \mathbf{B}) \mathbf{B} = \mathbf{B} \operatorname{adj}(\mathbf{A} + \mathbf{B}) \mathbf{A}.

Suppose that A commutes with B. Multiplying the identity AB = BA on the left and right by adj(A) proves that

\det(\mathbf{A}) \operatorname{adj}(\mathbf{A}) \mathbf{B} = \det(\mathbf{A})\, \mathbf{B} \operatorname{adj}(\mathbf{A}).

If A is invertible, this implies that adj(A) also commutes with B. Over the real or complex numbers, continuity implies that adj(A) commutes with B even when A is not invertible.

Finally, there is a more general proof than the second proof, which only requires that an n × n matrix have entries over a field with at least 2n + 1 elements (e.g. a 5 × 5 matrix over the integers mod 11). det(A + tI) is a polynomial in t of degree at most n, so it has at most n roots. Note that the ij-th entry of adj((A + tI)B) is a polynomial of degree at most n, and likewise for adj(A + tI) adj(B). These two polynomials at the ij-th entry agree on at least n + 1 points, since there are at least n + 1 elements of the field where A + tI is invertible, and the identity has been proven for invertible matrices. Polynomials of degree at most n that agree on n + 1 points must be identical (subtracting them gives a polynomial of degree at most n with n + 1 roots, which forces the difference to be identically zero). As the two polynomials are identical, they take the same value for every value of t. In particular, they take the same value at t = 0.

Using the above properties and other elementary computations, it is straightforward to show that if A has one of the following properties, then adj A does as well:

  • Upper triangular,
  • Lower triangular,
  • Diagonal,
  • Orthogonal,
  • Unitary,
  • Symmetric,
  • Hermitian,
  • Skew-symmetric,
  • Skew-Hermitian,
  • Normal.

If A is invertible, then, as noted above, there is a formula for adj(A) in terms of the determinant and inverse of A. When A is not invertible, the adjugate satisfies different but closely related formulas.

  • If rk(A) ≤ n − 2, then adj(A) = 0.
  • If rk(A) = n − 1, then rk(adj(A)) = 1. (Some minor is non-zero, so adj(A) is non-zero and hence has rank at least one; the identity adj(A) A = 0 implies that the dimension of the nullspace of adj(A) is at least n − 1, so its rank is at most one.) It follows that adj(A) = αxy^T, where α is a scalar and x and y are vectors such that Ax = 0 and A^T y = 0.
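
A small SymPy sketch (the rank-2 matrix S below is chosen here for illustration) illustrates the rank-one structure just described:

    import sympy as sp

    S = sp.Matrix([[1, 2, 3], [2, 4, 6], [0, 1, 1]])   # rank 2 = n - 1, det(S) = 0
    adjS = S.adjugate()
    print(adjS.rank())                                  # 1
    print(S * adjS == sp.zeros(3, 3))                   # True, since S adj(S) = det(S) I = 0

    x = S.nullspace()[0]                                # S x = 0
    y = S.T.nullspace()[0]                              # S^T y = 0
    alpha = adjS[0, 0] / (x[0] * y[0])                  # recover the scalar from one nonzero entry
    print(adjS == alpha * x * y.T)                      # True: adj(S) = alpha x y^T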

Column substitution and Cramer's rule

See also: Cramer's rule

Partition A into column vectors:

\mathbf{A} = (\mathbf{a}_1\ \cdots\ \mathbf{a}_n).

Let b be a column vector of size n. Fix 1 ≤ i ≤ n and consider the matrix formed by replacing column i of A by b:

(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\ \stackrel{\text{def}}{=}\ \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \cdots & \mathbf{a}_n \end{pmatrix}.

Laplace expand the determinant of this matrix along column i. The result is entry i of the product adj(A)b. Collecting these determinants for the different possible i yields an equality of column vectors

\left(\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\right)_{i=1}^n = \operatorname{adj}(\mathbf{A})\, \mathbf{b}.

This formula has the following concrete consequence. Consider the linear system of equations

\mathbf{A} \mathbf{x} = \mathbf{b}.

Assume that A is non-singular. Multiplying this system on the left by adj(A) and dividing by the determinant yields

\mathbf{x} = \frac{\operatorname{adj}(\mathbf{A})\, \mathbf{b}}{\det \mathbf{A}}.

Applying the previous formula to this situation yields Cramer's rule,

x_i = \frac{\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})}{\det \mathbf{A}},

where x_i is the i-th entry of x.
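
The column-substitution formula and Cramer's rule can be checked side by side; the following SymPy sketch uses an invertible matrix chosen here for illustration:

    import sympy as sp

    A = sp.Matrix([[2, 1, -1], [1, 3, 2], [1, 0, 1]])
    b = sp.Matrix([1, 4, 2])

    x = A.adjugate() * b / A.det()          # x = adj(A) b / det(A)
    print(x.T)                              # matches A.solve(b)

    # Cramer's rule: x_i = det(A with column i replaced by b) / det(A)
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b
        print(Ai.det() / A.det(), end=' ')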

Characteristic polynomial

Let the characteristic polynomial of A be

p(s) = \det(s\mathbf{I} - \mathbf{A}) = \sum_{i=0}^n p_i s^i \in R[s].

The first divided difference of p is a symmetric polynomial of degree n − 1,

\Delta p(s,t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1} s^j t^k \in R[s,t].

Multiply sI − A by its adjugate. Since p(A) = 0 by the Cayley–Hamilton theorem, some elementary manipulations reveal

\operatorname{adj}(s\mathbf{I} - \mathbf{A}) = \Delta p(s\mathbf{I}, \mathbf{A}).

In particular, the resolvent of A is defined to be

R(z; \mathbf{A}) = (z\mathbf{I} - \mathbf{A})^{-1},

and by the above formula, this is equal to

R(z; \mathbf{A}) = \frac{\Delta p(z\mathbf{I}, \mathbf{A})}{p(z)}.
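
The identity adj(sI − A) = Δp(sI, A) can be verified symbolically for a small example; the sketch below (SymPy, with a 2 × 2 matrix chosen for illustration) expands the divided difference term by term:

    import sympy as sp

    s, t = sp.symbols('s t')
    A = sp.Matrix([[1, 2], [3, 4]])
    n = A.shape[0]

    p = (s * sp.eye(n) - A).det()                    # characteristic polynomial p(s)
    dp = sp.cancel((p - p.subs(s, t)) / (s - t))     # first divided difference Delta p(s, t)

    # Delta p(sI, A): replace s^j t^k by s^j A^k in each monomial p_{j+k+1} s^j t^k
    rhs = sp.zeros(n, n)
    for (j, k), c in sp.Poly(dp, s, t).terms():
        rhs += c * s**j * A**k

    lhs = (s * sp.eye(n) - A).adjugate()
    print((lhs - rhs).applyfunc(sp.expand))          # zero matrix, so adj(sI - A) = Delta p(sI, A)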

Jacobi's formula

Main article: Jacobi's formula

The adjugate also appears in Jacobi's formula for the derivative of the determinant. If A(t) is continuously differentiable, then

\frac{d(\det \mathbf{A})}{dt}(t) = \operatorname{tr}\left(\operatorname{adj}(\mathbf{A}(t))\, \mathbf{A}'(t)\right).

It follows that the total derivative of the determinant is the transpose of the adjugate:

d(\det \mathbf{A})_{\mathbf{A}_0} = \operatorname{adj}(\mathbf{A}_0)^\mathsf{T}.
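
Jacobi's formula admits a direct symbolic check; the following SymPy sketch uses a one-parameter matrix family chosen here for illustration:

    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[sp.cos(t), t], [t**2, sp.exp(t)]])   # a differentiable family A(t)
    lhs = sp.diff(A.det(), t)                            # d/dt det A(t)
    rhs = (A.adjugate() * A.diff(t)).trace()             # tr(adj(A(t)) A'(t))
    print(sp.simplify(lhs - rhs))                        # 0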

Cayley–Hamilton formula

Main article: Cayley–Hamilton theorem

Let p_\mathbf{A}(t) be the characteristic polynomial of A. The Cayley–Hamilton theorem states that

p_\mathbf{A}(\mathbf{A}) = \mathbf{0}.

Separating the constant term and multiplying the equation by adj(A) gives an expression for the adjugate that depends only on A and the coefficients of p_\mathbf{A}(t). These coefficients can be explicitly represented in terms of traces of powers of A using complete exponential Bell polynomials. The resulting formula is

\operatorname{adj}(\mathbf{A}) = \sum_{s=0}^{n-1} \mathbf{A}^s \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{\ell=1}^{n-1} \frac{(-1)^{k_\ell + 1}}{\ell^{k_\ell} k_\ell!} \operatorname{tr}(\mathbf{A}^\ell)^{k_\ell},

where n is the dimension of A, and the sum is taken over s and all sequences of k_\ell ≥ 0 satisfying the linear Diophantine equation

s + \sum_{\ell=1}^{n-1} \ell k_\ell = n - 1.

For the 2×2 case, this gives

\operatorname{adj}(\mathbf{A}) = \mathbf{I}_2 (\operatorname{tr} \mathbf{A}) - \mathbf{A}.

For the 3×3 case, this gives

\operatorname{adj}(\mathbf{A}) = \tfrac{1}{2} \mathbf{I}_3 \left((\operatorname{tr} \mathbf{A})^2 - \operatorname{tr} \mathbf{A}^2\right) - \mathbf{A} (\operatorname{tr} \mathbf{A}) + \mathbf{A}^2.

For the 4×4 case, this gives

\operatorname{adj}(\mathbf{A}) = \tfrac{1}{6} \mathbf{I}_4 \left((\operatorname{tr} \mathbf{A})^3 - 3 \operatorname{tr} \mathbf{A} \operatorname{tr} \mathbf{A}^2 + 2 \operatorname{tr} \mathbf{A}^3\right) - \tfrac{1}{2} \mathbf{A} \left((\operatorname{tr} \mathbf{A})^2 - \operatorname{tr} \mathbf{A}^2\right) + \mathbf{A}^2 (\operatorname{tr} \mathbf{A}) - \mathbf{A}^3.

The same formula follows directly from the terminating step of the Faddeev–LeVerrier algorithm, which efficiently determines the characteristic polynomial of A.
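
For instance, the 3 × 3 trace formula above can be confirmed symbolically; a minimal SymPy sketch with a generic symbolic matrix:

    import sympy as sp

    A = sp.Matrix(3, 3, sp.symbols('a11 a12 a13 a21 a22 a23 a31 a32 a33'))
    I3 = sp.eye(3)
    tr1, tr2 = A.trace(), (A**2).trace()

    rhs = sp.Rational(1, 2) * I3 * (tr1**2 - tr2) - A * tr1 + A**2   # the 3x3 formula above
    print((rhs - A.adjugate()).applyfunc(sp.expand))                 # zero matrix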

Relation to exterior algebras

The adjugate can be viewed in abstract terms using exterior algebras. Let V be an n-dimensional vector space. The exterior product defines a bilinear pairing

V \times \wedge^{n-1} V \to \wedge^n V.

Abstractly, \wedge^n V is isomorphic to R, and under any such isomorphism the exterior product is a perfect pairing. Therefore, it yields an isomorphism

\phi \colon V\ \xrightarrow{\ \cong\ }\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V).

Explicitly, this pairing sends v ∈ V to \phi_{\mathbf{v}}, where

\phi_{\mathbf{v}}(\alpha) = \mathbf{v} \wedge \alpha.

Suppose that T : V → V is a linear transformation. Pullback by the (n − 1)st exterior power of T induces a morphism of Hom spaces. The adjugate of T is the composite

V\ \xrightarrow{\ \phi\ }\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)\ \xrightarrow{\ (\wedge^{n-1} T)^*\ }\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)\ \xrightarrow{\ \phi^{-1}\ }\ V.

If V = R^n is endowed with its coordinate basis e_1, ..., e_n, and if the matrix of T in this basis is A, then the adjugate of T is the adjugate of A. To see why, give \wedge^{n-1} \mathbf{R}^n the basis

\{\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n\}_{k=1}^n.

Fix a basis vector e_i of R^n. The image of e_i under \phi is determined by where it sends basis vectors:

\phi_{\mathbf{e}_i}(\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n) = \begin{cases} (-1)^{i-1}\, \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n, & \text{if } k = i, \\ 0 & \text{otherwise.} \end{cases}

On basis vectors, the (n − 1)st exterior power of T is

\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_j \wedge \dots \wedge \mathbf{e}_n \mapsto \sum_{k=1}^n (\det A_{jk})\, \mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n.

Each of these terms maps to zero under \phi_{\mathbf{e}_i} except the k = i term. Therefore, the pullback of \phi_{\mathbf{e}_i} is the linear transformation for which

\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_j \wedge \dots \wedge \mathbf{e}_n \mapsto (-1)^{i-1} (\det A_{ji})\, \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n,

that is, it equals

\sum_{j=1}^n (-1)^{i+j} (\det A_{ji})\, \phi_{\mathbf{e}_j}.

Applying the inverse of \phi shows that the adjugate of T is the linear transformation for which

\mathbf{e}_i \mapsto \sum_{j=1}^n (-1)^{i+j} (\det A_{ji})\, \mathbf{e}_j.

Consequently, its matrix representation is the adjugate of A.

If V is endowed with an inner product and a volume form, then the map φ can be decomposed further. In this case, φ can be understood as the composite of the Hodge star operator and dualization. Specifically, if ω is the volume form, then it, together with the inner product, determines an isomorphism

\omega^\vee \colon \wedge^n V \to \mathbf{R}.

This induces an isomorphism

\operatorname{Hom}(\wedge^{n-1} \mathbf{R}^n, \wedge^n \mathbf{R}^n) \cong \wedge^{n-1} (\mathbf{R}^n)^\vee.

A vector v in \mathbf{R}^n corresponds to the linear functional

(\alpha \mapsto \omega^\vee(\mathbf{v} \wedge \alpha)) \in \wedge^{n-1} (\mathbf{R}^n)^\vee.

By the definition of the Hodge star operator, this linear functional is dual to *v. That is, ω^∨ ∘ φ equals v ↦ *v.

Higher adjugates

Let A be an n × n matrix, and fix r ≥ 0. The r-th higher adjugate of A is a \binom{n}{r} \times \binom{n}{r} matrix, denoted adj_r A, whose entries are indexed by size-r subsets I and J of {1, ..., n}. Let I^c and J^c denote the complements of I and J, respectively. Also let \mathbf{A}_{I^c, J^c} denote the submatrix of A containing those rows and columns whose indices are in I^c and J^c, respectively. Then the (I, J) entry of adj_r A is

(-1)^{\sigma(I) + \sigma(J)} \det \mathbf{A}_{J^c, I^c},

where σ(I) and σ(J) are the sum of the elements of I and J, respectively.

Basic properties of higher adjugates include:

  • adj_0(A) = det A.
  • adj_1(A) = adj A.
  • adj_n(A) = 1.
  • adj_r(BA) = adj_r(A) adj_r(B).
  • \operatorname{adj}_r(\mathbf{A})\, C_r(\mathbf{A}) = C_r(\mathbf{A})\, \operatorname{adj}_r(\mathbf{A}) = (\det \mathbf{A})\, I_{\binom{n}{r}}, where C_r(A) denotes the r-th compound matrix.

Higher adjugates may be defined in abstract algebraic terms in a similar fashion to the usual adjugate, substituting \wedge^r V and \wedge^{n-r} V for V and \wedge^{n-1} V, respectively.
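
The combinatorial definition can be implemented directly; the following SymPy sketch (helper name higher_adjugate and test matrices chosen here for illustration) checks the first two properties and the multiplicativity property for r = 2:

    import sympy as sp
    from itertools import combinations

    def higher_adjugate(A, r):
        # (I, J) entry: (-1)^(sigma(I)+sigma(J)) * det A_{J^c, I^c}; 0-based index sums have the same parity
        n = A.shape[0]
        subsets = list(combinations(range(n), r))
        M = sp.zeros(len(subsets), len(subsets))
        for p, I in enumerate(subsets):
            for q, J in enumerate(subsets):
                Ic = [k for k in range(n) if k not in I]
                Jc = [k for k in range(n) if k not in J]
                sign = (-1) ** (sum(I) + sum(J))
                M[p, q] = sign * A.extract(Jc, Ic).det()
        return M

    A = sp.Matrix([[1, 2, 0], [3, -1, 4], [0, 5, 2]])
    B = sp.Matrix([[2, 0, 1], [1, 1, 0], [-3, 2, 2]])
    print(higher_adjugate(A, 0))                            # [[det A]], i.e. adj_0(A) = det A
    print(higher_adjugate(A, 1) == A.adjugate())            # True: adj_1(A) = adj(A)
    print(higher_adjugate(B * A, 2) ==
          higher_adjugate(A, 2) * higher_adjugate(B, 2))    # True: adj_r(BA) = adj_r(A) adj_r(B)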

Iterated adjugates

Iteratively taking the adjugate of an invertible matrix A k times yields

\overbrace{\operatorname{adj} \dotsm \operatorname{adj}}^k(\mathbf{A}) = \det(\mathbf{A})^{\frac{(n-1)^k - (-1)^k}{n}} \mathbf{A}^{(-1)^k},
\det\left(\overbrace{\operatorname{adj} \dotsm \operatorname{adj}}^k(\mathbf{A})\right) = \det(\mathbf{A})^{(n-1)^k}.

For example,

\operatorname{adj}(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A})^{n-2} \mathbf{A},
\det(\operatorname{adj}(\operatorname{adj}(\mathbf{A}))) = \det(\mathbf{A})^{(n-1)^2}.
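
These identities are easy to confirm for a concrete invertible matrix; a minimal SymPy sketch (test matrix chosen for illustration):

    import sympy as sp

    A = sp.Matrix([[1, 2, 0], [3, -1, 4], [0, 5, 2]])   # n = 3, det(A) = -34
    n, d = 3, A.det()

    print(A.adjugate().adjugate() == d**(n - 2) * A)             # True: adj(adj(A)) = det(A)^(n-2) A
    print(A.adjugate().adjugate().det() == d**((n - 1)**2))      # True: det(adj(adj(A))) = det(A)^((n-1)^2)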

See also

References

  1. Gantmacher, F. R. (1960). The Theory of Matrices. Vol. 1. New York: Chelsea. pp. 76–89. ISBN 0-8218-1376-5.
  2. Strang, Gilbert (1988). "Section 4.4: Applications of determinants". Linear Algebra and its Applications (3rd ed.). Harcourt Brace Jovanovich. pp. 231–232. ISBN 0-15-551005-3.
  3. Householder, Alston S. (2006). The Theory of Matrices in Numerical Analysis. Dover Books on Mathematics. pp. 166–168. ISBN 0-486-44972-6.
  • Roger A. Horn and Charles R. Johnson (2013), Matrix Analysis, Second Edition. Cambridge University Press, ISBN 978-0-521-54823-6
  • Roger A. Horn and Charles R. Johnson (1991), Topics in Matrix Analysis. Cambridge University Press, ISBN 978-0-521-46713-1

External links

  • Compute Adjugate matrix up to order 8
  • "adjugate of { { a, b, c }, { d, e, f }, { g, h, i } }" at Wolfram Alpha: http://www.wolframalpha.com/input/?i=adjugate+of+{+{+a%2C+b%2C+c+}%2C+{+d%2C+e%2C+f+}%2C+{+g%2C+h%2C+i+}+}