Assume you have a linear system of equations $A x = b$, where $A \in \mathbb{Z}^{n \times n}$ and $b \in \mathbb{Z}^n$. Assume that $\det A \neq 0$, and that we know that a solution $x \in \mathbb{Z}^n$ exists. One question is: how can we efficiently compute $x$? Clearly, any algorithm solving linear systems over the integers or rationals will do; for example, one could use the algorithms from the Integer Matrix Library by Z. Chen, C. Fletcher and A. Storjohann. That library will find any solution $x \in \mathbb{Q}^n$, and it does not require that $A$ is invertible (over the rationals) or that $A$ is square. But for our purposes, using such a general solver is overkill.
Note that the material below is well-known among experts.
Let $p$ be any prime not dividing $\det A$. Then $A$ modulo $p$ is invertible, and modulo $p$, the system $A x = b$ has a unique solution. Moreover, for any integer $k \ge 1$, the system has a unique solution modulo $p^k$: this is true since $A$ is also invertible modulo $p^k$ – for that, it suffices to check that the determinant of $A$ is a unit modulo $p^k$, which it is since it is coprime to $p$. Moreover, if $x$ is a solution to $A x = b$ over the integers, then $x$ modulo $p^k$ is the unique solution of $A x = b$ modulo $p^k$. Hence, if we choose $k$ such that $p^k$ bounds twice the absolute value of all coefficients of the solution $x$, we can recover the solution to $A x = b$ over the integers from the solution to $A x = b$ modulo $p^k$, by choosing the unique preimages in $(-p^k/2, p^k/2]$.
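For illustration, here is a small Python sketch of this recovery step (the function name `balanced_lift` is made up for this example):

```python
# A minimal sketch: recover the signed integer coordinates of x from their
# residues modulo p^k, assuming p^k bounds twice the absolute value of every
# coordinate.  Residues are given in {0, ..., p^k - 1}.

def balanced_lift(residues, modulus):
    """Map each residue to its unique preimage in (-modulus/2, modulus/2]."""
    return [r - modulus if r > modulus // 2 else r for r in residues]

# Example: x = (3, -5) reduced modulo 7^2 = 49 gives the residues (3, 44).
print(balanced_lift([3, 44], 49))  # [3, -5]
```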
This opens the question of how to solve $A x = b$ modulo $p^k$. For that, a Hensel-like lifting technique can be used. (In fact, this follows from Bourbaki's generalization of Hensel's lemma, since the Jacobian of the map $x \mapsto A x - b$ equals $A$.) Assume that we have an $x \in \mathbb{Z}^n$ which satisfies $A x \equiv b \pmod{p^e}$. We want to find $\hat{x} \in \mathbb{Z}^n$ with $A \hat{x} \equiv b \pmod{p^{e+1}}$. Write $\hat{x} = x + p^e y$ with $y \in \mathbb{Z}^n$. As $A \hat{x} - b = (A x - b) + p^e A y$, and as $A x - b$ is divisible by $p^e$, we obtain the linear system $A y \equiv -\frac{A x - b}{p^e} \pmod{p}$. Hence, it suffices to solve $k$ linear systems over the prime field $\mathbb{F}_p$ to solve $A x \equiv b$ modulo $p^k$.
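To make one lifting step concrete, here is a small Python sketch with made-up numbers (the helper names are mine; the tiny $2 \times 2$ system over $\mathbb{F}_p$ is solved with the explicit inverse formula only because the example is so small):

```python
# One lifting step on a made-up example: A = [[3, 1], [1, 2]], b = (12, -1),
# p = 7.  We start from x = (5, 4), which solves A x = b modulo 7, and lift
# it to a solution modulo 7^2.

p, e = 7, 1
A = [[3, 1], [1, 2]]
b = [12, -1]
x = [5, 4]                              # A x = (19, 13) = b (mod 7)

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def solve_2x2_mod_p(M, rhs, p):
    """Solve M y = rhs over F_p via the explicit 2x2 inverse."""
    det = (M[0][0] * M[1][1] - M[0][1] * M[1][0]) % p
    det_inv = pow(det, -1, p)           # det must be a unit modulo p
    adj = [[M[1][1], -M[0][1]], [-M[1][0], M[0][0]]]
    return [det_inv * sum(adj[i][j] * rhs[j] for j in range(2)) % p for i in range(2)]

c = [u - v for u, v in zip(mat_vec(A, x), b)]       # A x - b, divisible by p^e
rhs = [(-ci // p**e) % p for ci in c]               # -(A x - b) / p^e modulo p
y = solve_2x2_mod_p(A, rhs, p)
x_lifted = [xi + p**e * yi for xi, yi in zip(x, y)]
assert all((u - v) % p**(e + 1) == 0 for u, v in zip(mat_vec(A, x_lifted), b))
print(x_lifted)                         # [5, 46]; indeed A*(5, 46) = b (mod 49)
```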
This yields the following algorithm:
1. Choose `p := 2`.
2. Solve `A x = b` modulo `p`.
3. If a unique solution exists:
    1. Set `e := 1` and lift `x` to the integers, with coordinates in $(-p/2, p/2]$.
    2. Compute `c := A*x - b`.
    3. If `c = 0`, return `x`.
    4. Solve `A y = -c/p^e` modulo `p`.
    5. Set `x := x + y*p^e` and `e := e + 1`.
    6. Adjust `x` modulo `p^e` such that all coefficients are in $(-p^e/2, p^e/2]$.
    7. Go back to Step 3.2.
4. Else:
    1. Choose the next prime `p` and go back to Step 2.
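Here is a self-contained Python sketch of the whole procedure. It is meant as an illustration only, under the assumptions made above ($A$ square with $\det A \neq 0$, and $A x = b$ having an integral solution); the helper names are mine, and the mod-$p$ solver is a naive Gaussian elimination.

```python
# A sketch of the algorithm above (illustration only; the helper names are
# mine).  Assumes A is a square integer matrix with det(A) != 0 and that
# A x = b has an integral solution.

def solve_mod_p(A, b, p):
    """Gaussian elimination over F_p.  Returns the unique solution, or None
    if A is singular modulo p (i.e. no unique solution exists)."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [b[i] % p] for i in range(n)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None                      # det(A) = 0 modulo p
        M[col], M[pivot] = M[pivot], M[col]
        inv = pow(M[col][col], -1, p)
        M[col] = [(inv * v) % p for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[col][j]) % p for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

def next_prime(p):
    """Smallest prime larger than p (trial division; fine for small p)."""
    q = p + 1
    while any(q % d == 0 for d in range(2, int(q**0.5) + 1)):
        q += 1
    return q

def balanced(v, m):
    """Unique preimage of v modulo m in (-m/2, m/2]."""
    v %= m
    return v - m if v > m // 2 else v

def mat_vec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def solve_integer_system(A, b):
    # Steps 1-3.1: find a prime p with det(A) != 0 modulo p, solve, lift.
    p = 2
    while (x := solve_mod_p(A, b, p)) is None:
        p = next_prime(p)
    x = [balanced(xi, p) for xi in x]
    e = 1
    while True:
        c = [u - v for u, v in zip(mat_vec(A, x), b)]     # Step 3.2: c := A*x - b
        if all(ci == 0 for ci in c):
            return x                                      # Step 3.3
        rhs = [(-ci // p**e) % p for ci in c]             # Step 3.4: -c/p^e mod p
        y = solve_mod_p(A, rhs, p)
        x = [xi + yi * p**e for xi, yi in zip(x, y)]      # Step 3.5
        e += 1
        x = [balanced(xi, p**e) for xi in x]              # Step 3.6

# Example with a known integral solution x = (5, -3):
print(solve_integer_system([[3, 1], [1, 2]], [12, -1]))
```

Note that the balanced reduction in Step 3.6 is what makes the termination test in Step 3.3 work: once $p^e$ exceeds twice the largest coefficient of the integral solution, the balanced representative is that solution itself.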
The only subprogram we need is a linear system solver for systems $A y = c$ with square $A$ over a finite field, which returns information on the number of solutions. (Note that $\det A$ is not divisible by $p$ if and only if there is a unique solution.) If more information is known about the matrix $A$, for example if its determinant has already been computed, this information can be used as well.
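Such a subprogram could, for example, look as follows (a sketch with made-up names, not tuned for efficiency; it reports the number of solutions of a square system over $\mathbb{F}_p$ by computing ranks):

```python
# Sketch of the subprogram: count the solutions of a square system A y = c
# over F_p by Gauss-Jordan elimination on the augmented matrix.

def count_solutions_mod_p(A, c, p):
    """Return the number of solutions of A y = c over F_p (0, 1, or a power of p)."""
    n = len(A)
    M = [[A[i][j] % p for j in range(n)] + [c[i] % p] for i in range(n)]
    rank = 0
    for col in range(n + 1):                 # include the augmented column
        pivot = next((r for r in range(rank, n) if M[r][col]), None)
        if pivot is None:
            continue
        if col == n:
            return 0                         # pivot in the last column: inconsistent
        M[rank], M[pivot] = M[pivot], M[rank]
        inv = pow(M[rank][col], -1, p)
        M[rank] = [(inv * v) % p for v in M[rank]]
        for r in range(n):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(M[r][j] - f * M[rank][j]) % p for j in range(n + 1)]
        rank += 1
    return p ** (n - rank)                   # unique solution iff rank = n

print(count_solutions_mod_p([[1, 1], [1, 1]], [0, 0], 2))  # 2 (A singular mod 2)
print(count_solutions_mod_p([[1, 1], [1, 1]], [0, 1], 2))  # 0 (inconsistent)
print(count_solutions_mod_p([[1, 0], [0, 1]], [1, 1], 2))  # 1 (det not divisible by 2)
```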
Let us analyze the running time of this algorithm. Denote by $p$ the smallest prime not dividing $\det A$, and by $T(n, p)$ the time the linear system solver over $\mathbb{F}_p$ needs to solve a system of size $n \times n$. Let $H_A$ (resp. $H_b$, resp. $H_x$) denote the largest absolute value of a coefficient of $A$ (resp. $b$, resp. $x$).
Clearly, the number of iterations is in $O(\log_p H_x) = O\!\left(\frac{\log H_x}{\log p}\right)$. In each iteration, one linear system over $\mathbb{F}_p$ of size $n \times n$ has to be solved, and $A x - b$ has to be evaluated. The former takes $T(n, p)$ operations, and the latter involves $n^2$ multiplications and additions of integers of size $\log H_A$ and $\log H_x$, and $n$ subtractions of integers of size $\log(n H_A H_x)$ and $\log H_b$. For simplicity, assume that $\log H_b \in O(\log(n H_A H_x))$. Finally, to compute $x + y \cdot p^e$, we need $n$ multiplications of integers of size $\log p$ and $e \log p$, and $n$ additions which can be neglected. Clearly, these $n$ multiplications can also be neglected, since the evaluation of $A x - b$ already is slower.
Let $M(t)$ denote the time a multiplication of two numbers of size $t$ needs. Then inside the main loop, we need
$$T(n, p) + O\!\bigl(n^2 \, M(\log H_A + \log H_x)\bigr)$$
time units, and the main loop altogether needs
$$O\!\left(\frac{\log H_x}{\log p}\Bigl(T(n, p) + n^2 \, M(\log H_A + \log H_x)\Bigr)\right)$$
time units. Finding $p$ needs
$$O\!\left(\frac{\log |\det A|}{\log \log |\det A|} \, T(n, p)\right)$$
time units (using the Prime Number Theorem).
Assuming that we use a naive Gaussian algorithm as well as naive multiplication, i.e. $T(n, p) \in O(n^3 \, M(\log p))$ and $M(t) \in O(t^2)$, we obtain a total running time of
$$O\!\left(\frac{\log |\det A|}{\log \log |\det A|} \, n^3 (\log p)^2 + \frac{\log H_x}{\log p} \Bigl(n^3 (\log p)^2 + n^2 (\log H_A + \log H_x)^2\Bigr)\right).$$
Using fast multiplication, i.e. $M(t) \in O(t \log t \log \log t)$ (using FFT methods), and fast linear system solving, i.e. $T(n, p) \in O(n^{\omega} \, M(\log p))$, where $\omega < 2.376$ is a feasible exponent for matrix multiplication, we obtain a total running time of
$$O\!\left(\frac{\log |\det A|}{\log \log |\det A|} \, n^{\omega} M(\log p) + \frac{\log H_x}{\log p} \Bigl(n^{\omega} M(\log p) + n^2 \, M(\log H_A + \log H_x)\Bigr)\right).$$
Now let us try to eliminate $p$ from this expression. Clearly, in the second part, we can use that $\log p \ge \log 2$. To eliminate $p$ from the first part, we need to find an upper bound on $p$. For that, let us first stick to $p(m)$, the smallest prime not dividing the nonzero integer $m$. (Letting $A$ be the $1 \times 1$-matrix $(m)$ yields $p = p(m)$; in general, $p = p(\det A)$ using this notation.) Now $m$ is divisible by $\prod_{q < p(m),\, q \text{ prime}} q$, whence for $m \neq 0$ we have $\prod_{q < p(m),\, q \text{ prime}} q \le |m|$. Note that for integral $x$, $\prod_{q \le x,\, q \text{ prime}} q = e^{\theta(x)}$, where $\theta$ denotes the Chebyshev function. Using known bounds on $\theta$ (for every $\varepsilon \in (0, 1)$ there is an $x_0(\varepsilon)$ with $\theta(x) \ge (1 - \varepsilon) x$ for all $x \ge x_0(\varepsilon)$), we get
$$|m| \ \ge\ \prod_{q < p(m),\, q \text{ prime}} q \ =\ e^{\theta(p(m) - 1)} \ \ge\ e^{(1 - \varepsilon)(p(m) - 1)} \qquad \text{whenever } p(m) - 1 \ge x_0(\varepsilon).$$
Therefore, $p(m) \le 1 + \frac{\log |m|}{1 - \varepsilon}$ becomes true for $p(m) \ge 1 + x_0(\varepsilon)$, for every $\varepsilon \in (0, 1)$. This shows that eventually $p(m) \in O(\log |m|)$ holds for $|m| \to \infty$, yielding $p \in O(\log |\det A|)$ and, thus, $\log p \in O(\log \log |\det A|)$. Using the Leibniz formula, $|\det A| \le n! \, H_A^n$, whence $\log |\det A| \in O(n \log(n H_A))$.
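As an aside, $p(m)$ itself is easy to compute by trial over the primes $2, 3, 5, \dots$; the following small sketch (with a made-up function name) does just that. The search for $p$ in the algorithm above works the same way, except that divisibility of $\det A$ by a candidate prime is detected by the failure of the $\mathbb{F}_p$ solver rather than by dividing a known determinant.

```python
# Illustration: p(m), the smallest prime not dividing the nonzero integer m.

def smallest_prime_not_dividing(m):
    m = abs(m)
    q = 2
    while True:
        if m % q != 0:
            return q
        q += 1
        while any(q % d == 0 for d in range(2, int(q**0.5) + 1)):
            q += 1                      # advance to the next prime

print(smallest_prime_not_dividing(30))  # 30 = 2*3*5, so p(30) = 7
print(smallest_prime_not_dividing(17))  # 2
```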
Finally, we can use some linear algebra to bound $H_x$ in terms of $n$, $H_A$ and $H_b$. First note that $\operatorname{adj}(A) \cdot A = \det(A) \cdot I_n$, where $I_n$ denotes the identity matrix and $\operatorname{adj}(A)$ denotes the adjugate matrix of $A$. As $x = \frac{1}{\det A} \operatorname{adj}(A) \, b$ and $|\det A| \ge 1$, we see that it suffices to bound the coefficients of $\operatorname{adj}(A) \, b$. Now the coefficients of $\operatorname{adj}(A)$ are determinants of $(n-1) \times (n-1)$ matrices with coefficients coming from $A$, whence their absolute values are bounded by $(n-1)! \, H_A^{n-1}$. Therefore,
$$H_x \ \le\ n \cdot (n-1)! \, H_A^{n-1} H_b \ =\ n! \, H_A^{n-1} H_b, \qquad \text{i.e.} \qquad \log H_x \in O\bigl(n \log(n H_A) + \log H_b\bigr),$$
when assuming that $H_A, H_b \ge 2$.
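As a quick sanity check, the following sketch verifies this bound on a small made-up example, computing the exact rational solution via Cramer's rule:

```python
# Check H_x <= n! * H_A^(n-1) * H_b on a small made-up example.

from fractions import Fraction
from itertools import permutations
from math import factorial, prod

def det(M):
    """Determinant via the Leibniz formula (fine for tiny matrices)."""
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inversions * prod(M[i][perm[i]] for i in range(n))
    return total

A = [[3, 1, 0], [1, 2, -4], [0, 5, 2]]
b = [12, -1, 7]
n, H_A, H_b = 3, 5, 12

# Cramer's rule: x_i = det(A with column i replaced by b) / det(A).
d = det(A)
x = [Fraction(det([[b[r] if c == i else A[r][c] for c in range(n)] for r in range(n)]), d)
     for i in range(n)]
H_x = max(abs(xi) for xi in x)
assert H_x <= factorial(n) * H_A ** (n - 1) * H_b
print(x, H_x)   # the solution need not be integral; the bound holds regardless
```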
This can be combined into the following theorem:
Assuming that $\det A \neq 0$ and that $A x = b$ has a solution $x \in \mathbb{Z}^n$, the above algorithm needs
$$O\!\bigl(n^3 \, L \log L + n^2 \, L^3\bigr), \qquad \text{where } L := n \log(n H_A) + \log H_b,$$
time units to compute the unique solution $x$ of $A x = b$, using naive arithmetic in $\mathbb{Z}$ and $\mathbb{F}_p$ and naive Gaussian elimination to solve linear systems over $\mathbb{F}_p$. Using fast linear algebra and fast multiplication, we only need
$$O\!\bigl(n^{\omega} \, L^{1 + \varepsilon} + n^2 \, L^{2 + \varepsilon}\bigr)$$
time units for any $\varepsilon > 0$.
Comments.