In this article, I want to discuss three questions, which turn out to be closely related. The first question is,
“Given a lattice $\Lambda \subseteq \mathbb{Z}^n$, how do I find a basis of this lattice?”
(Note that this question is far from being well-posed.) The second question is,
“If $G$ is a finite abelian group and $g_1, \dots, g_n \in G$, how do I compute the structure of $\langle g_1, \dots, g_n \rangle$, the subgroup generated by these elements?”
The third question comes up in the description of algorithms, for example the baby-step giant-step algorithm by D. Shanks, an optimization of it by D. Terr, and more general algorithms like the Buchmann-Schmidt algorithm. These algorithms can be described in terms of the first question, which makes them easier to understand.
We begin by sketching the relation between lattices and finite abelian groups. If $G$ is a finite abelian group and $g_1, \dots, g_n \in G$ are some elements, for example a set of generators, consider the map
$$\varphi : \mathbb{Z}^n \to G, \qquad (a_1, \dots, a_n) \mapsto \prod_{i=1}^n g_i^{a_i}.$$
This turns out to be a group homomorphism onto $\langle g_1, \dots, g_n \rangle$. The kernel of $\varphi$, which we will denote by $\Lambda$, is called the relation lattice. This is in fact a lattice in $\mathbb{Z}^n$ of volume $|\langle g_1, \dots, g_n \rangle|$;
note that by the Homomorphism Theorem, $\mathbb{Z}^n/\Lambda \cong \langle g_1, \dots, g_n \rangle$. On the other hand, if $\Lambda \subseteq \mathbb{Z}^n$ is a full-rank lattice, then $\mathbb{Z}^n/\Lambda$ is a finite abelian group of $\det \Lambda$ elements. Moreover, if the residue class of $e_i$, the vector consisting of zeroes except a one at the $i$-th position, in $\mathbb{Z}^n/\Lambda$ is denoted by $g_i$, then $\mathbb{Z}^n/\Lambda = \langle g_1, \dots, g_n \rangle$. Therefore, lattices in $\mathbb{Z}^n$ and finite abelian groups with $n$ generators are essentially the same thing.
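For instance, take the additive group $G = \mathbb{Z}/6\mathbb{Z}$ with $g_1 = 2$ and $g_2 = 3$; written additively, $\varphi(a_1, a_2) = 2 a_1 + 3 a_2 \bmod 6$, and the relation lattice $\Lambda = \ker \varphi$ has basis $(3, 0), (0, 2)$ and volume $3 \cdot 2 = 6 = |G|$.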
How is this related to group structure computations? Recall the Smith normal form: it allows one to convert a basis of a lattice $\Lambda \subseteq \mathbb{Z}^n$ into another basis such that, if one applies a suitable invertible linear transformation $T \in \operatorname{GL}_n(\mathbb{Z})$ to $\mathbb{Z}^n$, this basis is sent to $d_1 e_1, \dots, d_n e_n$ with $d_1, \dots, d_n \in \mathbb{N}$ such that $d_i$ divides $d_{i+1}$ for $1 \le i < n$; then, $\mathbb{Z}^n/\Lambda \cong \mathbb{Z}/d_1\mathbb{Z} \times \cdots \times \mathbb{Z}/d_n\mathbb{Z}$. Hence, computing the structure of a finite abelian group generated by $g_1, \dots, g_n$ can be split up into two parts: (a) computation of a basis of the relation lattice $\Lambda$, and (b) computation of a Smith normal form of this basis. There exist a lot of algorithms for the computation of Smith normal forms; usually, the bottleneck is finding a basis of $\Lambda$.
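To make part (b) concrete, here is a minimal Python sketch, not an optimized implementation: it diagonalizes an integer matrix whose rows form a basis of the relation lattice by elementary row and column operations, then enforces the divisibility chain on the diagonal.

```python
from math import gcd

def group_structure(rows):
    """Given the rows of a basis matrix of a full-rank lattice in Z^n,
    return the Smith normal form diagonal d_1 | d_2 | ... | d_n, so
    that Z^n / Lambda is isomorphic to Z/d_1 x ... x Z/d_n."""
    M = [list(r) for r in rows]
    m, n = len(M), len(M[0])
    t = 0
    while t < min(m, n):
        # move an entry of smallest nonzero absolute value to (t, t)
        nz = [(abs(M[i][j]), i, j)
              for i in range(t, m) for j in range(t, n) if M[i][j]]
        if not nz:
            break
        _, i, j = min(nz)
        M[t], M[i] = M[i], M[t]
        for row in M:
            row[t], row[j] = row[j], row[t]
        p = M[t][t]
        clean = True
        for i in range(t + 1, m):      # clear column t below the pivot
            q = M[i][t] // p
            M[i] = [a - q * b for a, b in zip(M[i], M[t])]
            clean = clean and M[i][t] == 0
        for j in range(t + 1, n):      # clear row t right of the pivot
            q = M[t][j] // p
            for i in range(m):
                M[i][j] -= q * M[i][t]
            clean = clean and M[t][j] == 0
        if clean:                      # otherwise retry with a smaller pivot
            t += 1
    d = [abs(M[k][k]) for k in range(min(m, n))]
    for i in range(len(d)):            # enforce d_i | d_{i+1}
        for j in range(i + 1, len(d)):
            g = gcd(d[i], d[j])
            if g:
                d[i], d[j] = g, d[i] * d[j] // g
    return d

print(group_structure([[3, 0], [0, 2]]))  # [1, 6]
```

For the toy example above this returns `[1, 6]`, i.e. $\mathbb{Z}^2/\Lambda \cong \mathbb{Z}/6\mathbb{Z}$, as expected.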
Often, the process of finding a lattice amounts to determining the relation lattice of a finite abelian group, or something similar. One often has a way to test whether $v \in \Lambda$, by computing a unique representation of the residue class $v + \Lambda$; then, one tries to find two ways of writing the same residue class with different representatives: if $v + \Lambda = w + \Lambda$ but $v \neq w$, then $v - w$ is a non-trivial element of $\Lambda$. If one sets $G := \mathbb{Z}^n/\Lambda$, then this means that one seeks pairs $(v, g)$, $(w, h)$, where $g = v + \Lambda$ and $h = w + \Lambda$, such that $g = h$ and $v \neq w$.
More generally, assume that $X$ is a finite set and that $f : \mathbb{Z}^n \to X$ is a map such that $f(v) = f(w)$ whenever $v - w \in \Lambda$. Our above example, $X = G$ and $f(v) = v + \Lambda$, satisfies this. Moreover, assume that $f$ is “mostly injective”, i.e. that $f(v) = f(w)$ implies that one can find an element $\lambda$ such that $\lambda \in \Lambda$ and $\lambda$ is close to $v - w$, with very little effort. Then one can work with pairs $(v, f(v))$ with $v \in \mathbb{Z}^n$, and try to find two such pairs $(v, f(v))$, $(w, f(w))$ with $f(v) = f(w)$; this then gives rise to an element of $\Lambda$ near to $v - w$.
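As a toy illustration of this collision principle (a sketch of mine, with a made-up map $f$ for the group $\mathbb{Z}/6\mathbb{Z}$ from above), one can store the pairs $(v, f(v))$ in a hash table keyed by $f(v)$:

```python
from itertools import product

def find_relation(vectors, f):
    """Feed in vectors v one by one; return v - w for the first two
    vectors v != w with f(v) == f(w).  If f is constant on residue
    classes modulo the relation lattice, v - w is a non-trivial
    lattice element (or close to one, for a "mostly injective" f)."""
    seen = {}  # maps f(v) to the first v that produced it
    for v in vectors:
        key = f(v)
        if key in seen and seen[key] != v:
            return tuple(a - b for a, b in zip(v, seen[key]))
        seen.setdefault(key, v)
    return None

# toy usage: G = Z/6 with g_1 = 2, g_2 = 3, so f(v) = 2 v_1 + 3 v_2 mod 6
f = lambda v: (2 * v[0] + 3 * v[1]) % 6
print(find_relation(product(range(4), repeat=2), f))  # (0, 2): 3*2 = 0 in G
```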
Which brings us to the subject of explaining algorithms. Consider, for example, Shanks' baby-step giant-step algorithm. You are given a group element $g$ of finite order together with a bound $B$ with $\operatorname{ord}(g) \le B$. Then, for the algorithm, one computes the baby steps $g^0, g^1, \dots, g^{n-1}$ with $n := \lceil \sqrt{B} \rceil$, as well as the giant steps $g^n, g^{2n}, \dots, g^{n \cdot n}$, and compares the latter elements with the first $n$ elements. Any match will result in a multiple of the order of $g$. But why is this the case? One can of course try to prove this; it is actually not very hard, in fact one just needs Fermat's Little Theorem as well as long division. But one can do better, by visualizing the algorithm in a way which makes the solution look obvious, and which allows even people who have no background in formal mathematics or computer science to immediately realize that the algorithm produces the correct result.
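Before coming to the visualization, here is the procedure as a minimal Python sketch (my own transcription; the parameters `mul` and `one`, which pass the group operation and the neutral element as a black box, are my own convention and not part of Shanks' description):

```python
from math import isqrt

def bsgs_multiple_of_order(g, B, mul, one):
    """Baby-step giant-step: given ord(g) <= B, return a multiple of
    the order of g (the order itself as soon as ord(g) exceeds the
    step width n)."""
    n = isqrt(B - 1) + 1            # n = ceil(sqrt(B))
    baby = {}                       # maps g^j to j, for j = 0, ..., n-1
    x = one
    for j in range(n):
        baby.setdefault(x, j)
        x = mul(x, g)
    y = x                           # x = g^n is the giant step
    for i in range(1, n + 1):
        if y in baby:               # g^(i*n) = g^j, so i*n - j is a
            return i * n - baby[y]  # nonzero element of the relation lattice
        y = mul(y, x)
    raise ValueError("ord(g) > B")

p = 101
print(bsgs_multiple_of_order(2, p, lambda a, b: a * b % p, 1))  # 100
```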
Namely, you might have guessed it, the order of $g$ generates the relation lattice $\Lambda = \{ v \in \mathbb{Z} \mid g^v = 1 \}$. Hence, finding the order is equivalent to finding the smallest positive element of $\Lambda$. For this correspondence, we use the pairs $(v, g^v)$ with $v \in \mathbb{Z}$ as above. Two pairs $(v, g^v)$, $(w, g^w)$ with $g^v = g^w$ and $v \neq w$ give a non-zero multiple $v - w$ of the order of $g$. Now, the algorithm can be interpreted as follows: with $\mathcal{B} = \{0, 1, \dots, n-1\}$ the set of baby-step exponents, one translates $-\mathcal{B}$ by $n$, $2n$, $3n, \dots$, and checks if a lattice element is contained in any of the translates $in - \mathcal{B} = \{in - n + 1, \dots, in\}$. If one visualizes this, one immediately sees that this method will eventually find the smallest positive element of the lattice. The first figure depicts the lattice $\mathbb{Z}$ (gray dots) with its sublattice $\Lambda$ (black dots), with the set $\mathcal{B}$ drawn in for $n = 5$:
The next figure depicts the translates $in - \mathcal{B}$ of $-\mathcal{B}$ by $in$, $i = 1, 2, 3, \dots$:
One directly sees that the translates $in - \mathcal{B}$, $i \ge 1$, cover all positive integers, and that every positive integer is contained in exactly one translate. Moreover, one sees that if every translate contains at most one lattice element, then the first translate which contains a lattice element uniquely determines the smallest positive element of $\Lambda$. Note that in case $B$ is exactly the order of $g$, one can minimize the total number of group operations by choosing $n = \lceil \sqrt{B} \rceil$.
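The covering claim can also be replayed numerically; the following little check (an illustration only, with made-up values for $n$ and the order) confirms that exactly one translate contains the order:

```python
n, order = 11, 100   # hypothetical step width and order of g
tiles = [range(i * n - n + 1, i * n + 1) for i in range(1, n + 1)]
print([i for i, t in enumerate(tiles, start=1) if order in t])  # [10]
```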
Now let us consider Terr's modification of the baby-step giant-step algorithm; there, the situation is a bit more complicated. In Terr's algorithm, the bound $B$ from above constantly changes, starting with $1$. Written as pseudo-code, the algorithm looks like this (a Python sketch follows the list):
1. Let $x := g$ and $y := g$, and let $i := 1$ and $\mathcal{L} := (1)$, the list of baby steps $g^0, \dots, g^{i-1}$.
2. If $y = g^j$ for some $g^j$ in $\mathcal{L}$, return $\frac{i(i+1)}{2} - j$.
3. Set $i := i + 1$, and append $x$ (which equals $g^{i-1}$) to $\mathcal{L}$.
4. Compute $x := x \cdot g$ and $y := y \cdot x$; now $x = g^i$ and $y = g^{i(i+1)/2}$.
5. Go back to Step 2.
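In Python, with the same black-box `mul`/`one` convention as in the sketch above, this might look as follows:

```python
def terr_order(g, mul, one):
    """Terr's variant: find ord(g) without an a-priori bound, assuming
    g has finite order.  At stage i, `baby` holds g^0, ..., g^(i-1)
    and y = g^(T_i) with T_i = i*(i+1)/2."""
    baby = {one: 0}            # maps g^j to j
    x, y, i = g, g, 1          # x = g^i, y = g^(T_i)
    while True:
        if y in baby:          # g^(T_i) = g^j: the order is T_i - j
            return i * (i + 1) // 2 - baby[y]
        baby.setdefault(x, i)  # append g^i to the baby steps
        i += 1
        x = mul(x, g)          # x = g^i
        y = mul(y, x)          # T_i = T_{i-1} + i

p = 101
print(terr_order(2, lambda a, b: a * b % p, 1))  # 100
```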
There is no obvious reason why this should work. Note that the test whether $y = g^j$ for some $g^j$ in $\mathcal{L}$ means that one translates $-\mathcal{B}_i$, where $\mathcal{B}_i = \{0, 1, \dots, i-1\}$, by the exponent of $y$, which turns out to be the $i$-th triangular number $\frac{i(i+1)}{2}$. One immediately gets the idea if one draws the first few translates:
This is another tiling of the positive integers, where every positive integer is contained in exactly one translate. This tiling has the property that the first time $y = g^j$ occurs for some $j$ is exactly when the order of $g$ is found, and one no longer has to fix a bound $B$ beforehand. The asymptotic complexity of this method is the same as that of the previous method in case $B \approx \operatorname{ord}(g)$, but in case $B$ is chosen the wrong way, the first algorithm will perform worse than the second. Another way to visualize the second algorithm is to depict the set $\mathcal{B}_i$ together with its translate $\frac{i(i+1)}{2} - \mathcal{B}_i$, where $\mathcal{B}_i = \{0, 1, \dots, i-1\}$ as above, for $i = 1, 2, 3, \dots$: