
This time, I want to share an observation on the Jordan decomposition, which is the main tool needed to show the existence of the Jordan normal form. Let me begin by introducing a more general notion, and then show that the Jordan decomposition satisfies a kind of universal property.

Let V be a vector space over a field K and \varphi : V \to V a linear map. We say that a subspace W \subseteq V is \varphi-invariant if \varphi(W) \subseteq W. Another way to interpret this is to consider the K-algebra A = K[\varphi]; then V is an A-module and the A-submodules of V are exactly the \varphi-invariant subspaces of V.
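As a quick sanity check of the definition, \varphi-invariance of a concrete subspace can be tested by a rank computation: with \varphi given by a matrix A and W the column space of a matrix B, one has \varphi(W) \subseteq W iff appending the columns of AB to B does not increase the rank. A minimal numerical sketch (the example matrix and the function name `is_invariant` are my own illustration, not from the text):

```python
import numpy as np

# phi is given by the matrix A; a subspace W = col(B) is phi-invariant iff
# adding the columns of A @ B to B does not increase the rank.
def is_invariant(A, B):
    return np.linalg.matrix_rank(np.hstack([B, A @ B])) == np.linalg.matrix_rank(B)

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])             # a single 2x2 Jordan block
e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])

print(is_invariant(A, e1))   # True:  A e1 = 2 e1 stays in span(e1)
print(is_invariant(A, e2))   # False: A e2 = e1 + 2 e2 leaves span(e2)
```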

Definition.

An A-decomposition of V is a decomposition V = \bigoplus_{i \in I} V_i into A-submodules such that, for every A-submodule W of V, one has W = \bigoplus_{i \in I} (V_i \cap W).

Clearly, there always exists a trivial A-decomposition of V, namely V itself. One can define a partial order on the set of A-decompositions:

Definition.

Let V = \bigoplus_{i \in I} V_i and V = \bigoplus_{j \in J} W_j be two A-decompositions. We say that \bigoplus_{i \in I} V_i \le \bigoplus_{j \in J} W_j if, for every i \in I, there exists a j \in J with V_i \subseteq W_j.

Clearly, the trivial A-decomposition is the maximum with respect to this order. One can ask whether a minimal A-decomposition exists. In case it exists, it has a nice property:

Lemma.

Assume that V = \bigoplus_{i \in I} V_i is a minimal A-decomposition. Let V = \bigoplus_{j \in J} W_j be another A-decomposition. Then, for every j \in J, there exists a subset I_j \subseteq I with W_j = \bigoplus_{i \in I_j} V_i.

Proof.

Define I_j := \{ i \in I \mid V_i \subseteq W_j \}. Then I = \bigcup_{j \in J} I_j is a disjoint union: disjointness holds since V_i \neq \{ 0 \} and W_j \cap W_{j'} = \{ 0 \} for j \neq j'; and every i lies in some I_j, for otherwise replacing V_i by its non-trivial pieces V_i \cap W_j would yield an A-decomposition strictly below the minimal one (one checks that this refinement is again an A-decomposition). Now, V_i \subseteq W_j for every i \in I_j and the V_i form a direct sum, whence \bigoplus_{i \in I_j} V_i \subseteq W_j for all j.

Now assume that \bigoplus_{i \in I_j} V_i \subsetneqq W_j for some j; let w \in W_j \setminus \bigoplus_{i \in I_j} V_i. As V = \bigoplus_{i \in I} V_i, we can write w = \sum_{i \in I} v_i with v_i \in V_i. Moreover, write w = \sum_{t \in J} w_t with w_t \in W_t. Clearly, we must have w_t = \sum_{i \in I_t} v_i for every t \in J. As w \in W_j, we have w_t = 0 for all t \neq j, whence v_i = 0 for all i \not\in I_j. But this implies w = \sum_{i \in I_j} v_i \in \bigoplus_{i \in I_j} V_i, a contradiction.

Now one can ask when such a minimal decomposition exists, and whether it can be computed. An important case where both hold is when V is a finite-dimensional vector space over an algebraically closed field K; for example, K = \C and V = \C^n.

Definition.

Let \lambda \in K. The generalized eigenspace of \varphi with respect to \lambda is

\GEig(\varphi, \lambda) := \{ v \in V \mid \exists n \in \N : (\varphi - \lambda \id)^n v = 0 \}.

In case \dim V = n < \infty, one has \GEig(\varphi, \lambda) = \ker (\varphi - \lambda \id)^n. Hence, generalized eigenspaces can be computed efficiently. Moreover, we have \Eig(\varphi, \lambda) \subseteq \GEig(\varphi, \lambda), and a simple argument shows that either both are trivial or both are non-trivial. Hence, the \lambda \in K with \GEig(\varphi, \lambda) \neq \{ 0 \} are exactly the zeroes of the characteristic polynomial of \varphi.
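The formula \GEig(\varphi, \lambda) = \ker(\varphi - \lambda \id)^n makes the dimension of a generalized eigenspace a rank computation: it is n minus the rank of (A - \lambda I)^n. A small sketch (the example matrix is mine, chosen for illustration):

```python
import numpy as np

def geig_dim(A, lam):
    # dim GEig(A, lam) = dim ker (A - lam*I)^n  with n = dim V
    n = A.shape[0]
    M = np.linalg.matrix_power(A - lam * np.eye(n), n)
    return n - np.linalg.matrix_rank(M)

# 3x3 example: a Jordan block of size 2 for eigenvalue 3, one of size 1 for 5
A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

print(geig_dim(A, 3.0))  # 2: ker(A - 3I) is 1-dimensional, ker(A - 3I)^3 is 2-dimensional
print(geig_dim(A, 5.0))  # 1
print(geig_dim(A, 0.0))  # 0: 0 is not an eigenvalue
```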

Now note that \varphi(\GEig(\varphi, \lambda)) \subseteq \GEig(\varphi, \lambda). Hence, \GEig(\varphi, \lambda) is \varphi-invariant. We now have three lemmas:

Lemma.

Let \lambda_1, \dots, \lambda_t be t pairwise different eigenvalues of \varphi. Then the sum \sum_{i=1}^t \GEig(\varphi, \lambda_i) is direct.

Proof.

Let v_i \in \GEig(\varphi, \lambda_i) with \sum_{i=1}^t v_i = 0. We have to show that v_i = 0 for all i. Assume that not all v_i are zero, and that the relation is chosen minimal with respect to the number of nonzero v_i.

Let j \in \{ 1, \dots, t \} with v_j \neq 0, and choose n \in \N with (\varphi - \lambda_j \id)^n v_j = 0. Set \psi := (\varphi - \lambda_j \id)^n; as \psi(\GEig(\varphi, \lambda_i)) \subseteq \GEig(\varphi, \lambda_i), the identity \sum_{i=1}^t \psi(v_i) = 0 is a second relation, and it has fewer nonzero summands since \psi(v_j) = 0. By minimality, we must have \psi(v_i) = 0 for all i.

We will show that (\varphi - \lambda_i \id)|_{\GEig(\varphi, \lambda)} is injective whenever \lambda \neq \lambda_i. In particular, \varphi - \lambda_j \id is injective on \GEig(\varphi, \lambda_i) for every i \neq j, which together with \psi(v_i) = 0 gives v_i = 0 for all i \neq j; the original relation then forces v_j = 0 as well, a contradiction.

Let v \in \GEig(\varphi, \lambda) with (\varphi - \lambda_i \id) v = 0. Assume that v \neq 0 and let n \in \N be maximal with w := (\varphi - \lambda \id)^n v \neq 0; such an n exists as v \in \GEig(\varphi, \lambda). Then (\varphi - \lambda \id) w = 0, i.e. \varphi(w) = \lambda w, and

(\varphi - \lambda_i) w = (\varphi - \lambda_i) (\varphi - \lambda)^n v = (\varphi - \lambda)^n (\varphi - \lambda_i) v = (\varphi - \lambda)^n 0 = 0,

whence we get \lambda_i w = \varphi(w) = \lambda w, which, as \lambda \neq \lambda_i, is only possible for w = 0, a contradiction. Hence, we must have v = 0, i.e. \varphi - \lambda_i \id is injective on \GEig(\varphi, \lambda).

Lemma.

Assume that \dim V < \infty and let \lambda \in K. Then there exists a \varphi-invariant subspace W \subseteq V such that V = W \oplus \GEig(\varphi, \lambda).

Proof.

Set \psi := \varphi - \lambda \id. Consider the chains

\{ 0 \} \subseteq \ker \psi \subseteq \ker \psi^2 \subseteq \ker \psi^3 \subseteq \dots

and

V \supseteq \image \psi \supseteq \image \psi^2 \supseteq \image \psi^3 \supseteq \dots

As \dim V < \infty, there exists an s \in \N with \image \psi^s = \image \psi^{s+1}. Now one easily shows \image \psi^s = \image \psi^{s+i} for all i \in \N. By the Dimension Formula, we have

\dim \ker \psi^{s+i} = \dim V - \dim \image \psi^{s+i} = \dim V - \dim \image \psi^s = \dim \ker \psi^s

for all i \in \N, whence \ker \psi^{s+i} = \ker \psi^s for all i \in \N. But then \GEig(\varphi, \lambda) = \ker \psi^s and \dim \GEig(\varphi, \lambda) + \dim \image \psi^s = \dim V.

Set W := \image \psi^s and let w \in W, i.e. let v \in V with \psi^s(v) = w. Then

\varphi(w) = \varphi (\varphi - \lambda \id)^s v = (\varphi - \lambda \id)^s \varphi(v) \in \image \psi^s,

whence W is \varphi-invariant. As \dim \GEig(\varphi, \lambda) + \dim W = \dim V, it suffices to show that W \cap \GEig(\varphi, \lambda) = \{ 0 \}. Now \image \psi^{s+1} = \image \psi^s, whence for every w \in W there exists some v \in \image \psi^s = W with \psi(v) = w. But this means that \psi|_W is surjective and hence, as \dim W < \infty, injective, i.e. \ker (\psi|_W) = \{ 0 \}. But then \ker \psi^s \cap W = \ker (\psi^s|_W) = \ker ((\psi|_W)^s) = \{ 0 \}.
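The stabilization argument in this proof is effective: one can raise \psi to powers until the rank stops dropping, and then read off bases of \ker \psi^s = \GEig(\varphi, \lambda) and of the complement W = \image \psi^s. A numerical sketch under these assumptions (the function name `fitting_decomposition` and the example matrix are my own):

```python
import numpy as np

# For psi = A - lam*I the chains of kernels and images stabilize at some s,
# and then V = ker psi^s (+) im psi^s, with both summands psi-invariant.
def fitting_decomposition(A, lam):
    n = A.shape[0]
    psi = A - lam * np.eye(n)
    P = np.eye(n)                        # P will hold psi^s
    while np.linalg.matrix_rank(psi @ P) < np.linalg.matrix_rank(P):
        P = psi @ P
    r = np.linalg.matrix_rank(P)
    U, _, Vt = np.linalg.svd(P)
    ker = Vt[r:, :].T                    # basis of ker psi^s = GEig(A, lam)
    im = U[:, :r]                        # basis of im psi^s
    return ker, im

A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
ker, im = fitting_decomposition(A, 3.0)
print(ker.shape[1], im.shape[1])                    # 2 1
print(np.linalg.matrix_rank(np.hstack([ker, im])))  # 3: the sum is direct
```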

Lemma.

Assume that \dim V < \infty and that the characteristic polynomial \chi_\varphi of \varphi splits into linear factors. Let \lambda_1, \dots, \lambda_t be all eigenvalues of \varphi. Then V = \bigoplus_{i=1}^t \GEig(\varphi, \lambda_i).

Proof.

We proceed by induction on \dim V. For \dim V = 0 this is clear. Hence, assume \dim V \ge 1 and let \lambda be an eigenvalue of \varphi. By the previous lemma, we can choose a \varphi-invariant subspace W with V = \GEig(\varphi, \lambda) \oplus W. We have \dim \GEig(\varphi, \lambda) \ge \dim \Eig(\varphi, \lambda) \ge 1, whence \dim W < \dim V. Now

\chi_\varphi = \chi_{\varphi|_W} \cdot \chi_{\varphi|_{\GEig(\varphi, \lambda)}},

whence the characteristic polynomial of \varphi|_W splits into linear factors as well.

Let \lambda'_1, \dots, \lambda'_s be the eigenvalues of \varphi|_W. Then, by induction, we have W = \bigoplus_{i=1}^s \GEig(\varphi|_W, \lambda_i'). Now \GEig(\varphi|_W, \lambda_i') = W \cap \GEig(\varphi, \lambda_i'), whence W \subseteq \bigoplus_{i=1}^s \GEig(\varphi, \lambda_i').

Finally, note that \lambda \neq \lambda'_i for all i, as \lambda = \lambda'_i would contradict W \cap \GEig(\varphi, \lambda) = \{ 0 \}. Therefore, V = \GEig(\varphi, \lambda) \oplus \bigoplus_{i=1}^s \GEig(\varphi, \lambda'_i). Moreover, we must have \{ \lambda_1, \dots, \lambda_t \} = \{ \lambda, \lambda'_1, \dots, \lambda'_s \}: the inclusion "\supseteq" holds since the generalized eigenspaces of \lambda and the \lambda'_i are non-zero, whence these are eigenvalues of \varphi; the inclusion "\subseteq" holds since \GEig(\varphi, \lambda_i) \neq \{ 0 \} for every eigenvalue \lambda_i, which by the first lemma is impossible unless \lambda_i \in \{ \lambda, \lambda'_1, \dots, \lambda'_s \}.
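On a concrete matrix, this lemma says that the dimensions of the generalized eigenspaces add up to \dim V. A sketch checking this for a sample matrix with known eigenvalues (the matrix is my own illustration):

```python
import numpy as np

# For the matrix below, chi(x) = (x-3)^2 (x-5)(x-7); as chi splits, the
# generalized eigenspaces must fill up all of V = K^4.
A = np.array([[3.0, 1.0, 0.0, 0.0],
              [0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 5.0, 0.0],
              [0.0, 0.0, 0.0, 7.0]])
n = A.shape[0]

def geig_dim(lam):
    # dim GEig(A, lam) = n - rank (A - lam*I)^n
    M = np.linalg.matrix_power(A - lam * np.eye(n), n)
    return n - np.linalg.matrix_rank(M)

dims = {lam: geig_dim(lam) for lam in (3.0, 5.0, 7.0)}
print(dims)                 # {3.0: 2, 5.0: 1, 7.0: 1}
print(sum(dims.values()))   # 4 = dim V
```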

Therefore, we get:

Corollary (Jordan Decomposition).

Let K be algebraically closed and assume that \dim V < \infty. Then, for every endomorphism \varphi of V, there exist \lambda_1, \dots, \lambda_t \in K such that

V = \bigoplus_{i=1}^t \GEig(\varphi, \lambda_i)

is a K[\varphi]-decomposition.

Proof.

We have to show that this yields a K[\varphi]-decomposition. For that, let W be a \varphi-invariant subspace of V. Consider \varphi|_W; this is an endomorphism of W whose set of eigenvalues is a subset of the set of eigenvalues of \varphi. Hence, by the previous lemma applied to \varphi|_W, we have

W = \bigoplus_{i=1}^t \GEig(\varphi|_W, \lambda_i) = \bigoplus_{i=1}^t (\GEig(\varphi, \lambda_i) \cap W),

which is what we had to show.
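On a concrete example, the corollary can be checked numerically: for a \varphi-invariant W, the dimensions of the intersections W \cap \GEig(\varphi, \lambda_i) must add up to \dim W. The sketch below uses \dim(U \cap U') = \dim U + \dim U' - \dim(U + U') for column spaces (the matrix, the subspace W, and all function names are illustrative):

```python
import numpy as np

A = np.array([[3.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
n = A.shape[0]

def inter_dim(B, C):
    # dim of the intersection of two column spaces via
    # dim(U ∩ U') = dim U + dim U' - dim(U + U')
    rk = np.linalg.matrix_rank
    return int(rk(B) + rk(C) - rk(np.hstack([B, C])))

def geig_basis(lam):
    # basis of GEig(A, lam) = ker (A - lam*I)^n, read off from the SVD
    M = np.linalg.matrix_power(A - lam * np.eye(n), n)
    _, _, Vt = np.linalg.svd(M)
    return Vt[np.linalg.matrix_rank(M):, :].T

# W = span(e1, e3) is phi-invariant (A e1 = 3 e1, A e3 = 5 e3)
W = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])

total = sum(inter_dim(W, geig_basis(lam)) for lam in (3.0, 5.0))
print(total)   # 2 = dim W: W splits along the generalized eigenspaces
```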

We can now prove our main result, namely that the generalized eigenspace decomposition is exactly the minimal K[\varphi]-decomposition of V:

Theorem.

Let K be algebraically closed and \dim V < \infty. Then the minimal K[\varphi]-decomposition of V is given by

V = \bigoplus_{\lambda \in K} \GEig(\varphi, \lambda).

Note that we do not need that K is algebraically closed, but only that \chi_\varphi splits over K.

Proof.

Let V = \bigoplus_{i=1}^n V_i be a K[\varphi]-decomposition; we have to show that \bigoplus_{\lambda \in K} \GEig(\varphi, \lambda) \le \bigoplus_{i=1}^n V_i. Fix \lambda \in K and set W := \GEig(\varphi, \lambda); as \bigoplus_{i=1}^n V_i is a K[\varphi]-decomposition, we have W = \bigoplus_{i=1}^n (V_i \cap W). Hence it suffices to show that at most one of the intersections V_i \cap W is non-trivial, since then W \subseteq V_i for that i. Assume, for a contradiction, that there exist 1 \le i < j \le n with V_i \cap W \neq \{ 0 \} \neq V_j \cap W.

First assume that we can find eigenvectors v \in V_i \cap W and w \in V_j \cap W; we will show below that this is always possible. Then \varphi(v + w) = \lambda (v + w), whence v + w is an eigenvector as well. But then W' := \langle v + w \rangle is a \varphi-invariant subspace of V with W' \subseteq V_i \oplus V_j, while W' \cap V_k = \{ 0 \} for every k, whence \bigoplus_{k=1}^n (W' \cap V_k) = \{ 0 \} \subsetneqq W', contradicting the assumption that \bigoplus_{i=1}^n V_i is a K[\varphi]-decomposition.

It remains to show that W \cap V_i contains an eigenvector. As V_i is \varphi-invariant, we can consider \psi := \varphi|_{V_i}. Now \GEig(\psi, \lambda) = W \cap V_i \neq \{ 0 \}, whence, by the remark above, \Eig(\psi, \lambda) \neq \{ 0 \}. Hence, there exists some v \in W \cap V_i, v \neq 0, with \varphi(v) = \psi(v) = \lambda v.
