I just learned about a nice trick to show that \det(A B) = \det A \cdot \det B for matrices A, B \in K^{n \times n} from my colleague Francesco Sica, who attributed it to F. Catanese.

Assume it is known that there is, up to scale, only one alternating n-linear form K^{n \times n} \to K, i.e. only one form that is n-linear and alternating in the columns (equivalently, that \dim_K \bigwedge^n K^n = 1). Then one can proceed as follows. Given A \in K^{n \times n}, consider the map f_A : K^{n \times n} \to K, B \mapsto \det(A B). Since the j-th column of A B is A applied to the j-th column of B, the map f_A is n-linear and alternating in the columns of B, whence there exists some \lambda \in K such that f_A = \lambda \cdot \det. Evaluating f_A at the identity matrix I gives \lambda = \lambda \det(I) = f_A(I) = \det(A I) = \det A. Evaluating f_A at an arbitrary B then gives \det(A B) = f_A(B) = \lambda \det B = \det A \cdot \det B.
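To see the two key properties of f_A in action, here is a small numerical sanity check; this is only a sketch of mine (assuming NumPy, with randomly chosen test matrices), not part of the argument above: it checks that f_A vanishes when two columns of B coincide, that scaling one column of B scales f_A accordingly, and that evaluation at the identity recovers \lambda = \det A.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

def f_A(B):
    # the map B |-> det(A B) considered in the proof
    return np.linalg.det(A @ B)

# alternating: f_A vanishes (up to rounding) when two columns of B agree
B = rng.standard_normal((n, n))
B[:, 1] = B[:, 0]
assert abs(f_A(B)) < 1e-9

# homogeneity in a single column (part of n-linearity):
# scaling one column of B scales f_A by the same factor
B = rng.standard_normal((n, n))
C = B.copy()
C[:, 2] *= 3.0
assert np.isclose(f_A(C), 3.0 * f_A(B))

# evaluating at the identity recovers the scalar lambda = det A
assert np.isclose(f_A(np.eye(n)), np.linalg.det(A))

# and indeed f_A(B) = det A * det B
assert np.isclose(f_A(B), np.linalg.det(A) * np.linalg.det(B))
```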

Of course, using a trick similar to the first lemma here, it suffices to prove this for K = \C in order to obtain it over any commutative ring with unity: one only has to know that the determinant is a polynomial with integer coefficients in the matrix entries (for example, via the Leibniz formula), so that \det(A B) = \det A \cdot \det B is a polynomial identity with integer coefficients.
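For illustration, here is a minimal sketch of the Leibniz formula in Python (the function names are mine); it exhibits the determinant as a polynomial with coefficients \pm 1 in the matrix entries, requiring no division, so it makes sense over any commutative ring with unity.

```python
from itertools import permutations
from math import prod

def sign(perm):
    # sign of a permutation via its inversion count
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    # Leibniz formula: det A = sum over permutations sigma of
    # sign(sigma) * prod_i A[i][sigma(i)] -- a polynomial with
    # integer coefficients in the entries of A
    n = len(A)
    return sum(sign(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# example: evaluates over the integers, no division needed
assert det_leibniz([[1, 2], [3, 4]]) == -2
```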

Comments.

Jens wrote on September 19, 2011:

Hi! This proof looked too familiar to me... and indeed: Quebbemann (WS 99/00) used the same proof in his Lineare Algebra. :)

Felix Fontein wrote on September 19, 2011:

And again I regret that I didn't start studying in Oldenburg two years earlier... :)