
  
Lattice Basis Reduction Algorithms

Although both Minkowski reduction and Korkin-Zolotarev reduction provide frameworks for studying lattice basis reduction, computing a reduced lattice basis (in either sense) is in general a difficult problem. Currently, there are no known polynomial-time algorithms for finding either a Minkowski or a Korkin-Zolotarev reduced basis for a given lattice L. If such an algorithm existed, then we would be able to find the Euclidean-norm shortest nonzero vector in L in polynomial time by computing the reduced basis, for which $\bold{b_1}$ is the desired vector. Thus, any polynomial-time lattice basis reduction algorithm we use cannot be expected to satisfy the strict conditions of Minkowski or Korkin-Zolotarev reduction.

Techniques for finding relatively small vectors in a lattice have been known for some time (see [22] for example); it was not until recently, however, that a fast algorithm was known which was guaranteed to produce lattice bases with relatively short vectors. In [27] Lenstra, Lenstra and Lovász described a polynomial-time algorithm for transforming a given lattice basis $B=(\bold{b_1},\ldots{},\bold{b_n})$ of lattice L into an LLL-reduced lattice basis $B' = (\bold{b_1'},\ldots{},\bold{b_n'})$. A basis B' is LLL-reduced if it has the following two properties:

\begin{displaymath}
\vert\mu_{i,j}\vert \leq \tfrac{1}{2}, \quad 1 \leq j < i \leq n, \qquad (1.1)
\end{displaymath}

\begin{displaymath}
\Vert\bold{b_i^*} + \mu_{i,i-1}\bold{b_{i-1}^*}\Vert^2 \geq y \Vert\bold{b_{i-1}^*}\Vert^2, \quad 1 < i \leq n, \qquad (1.2)
\end{displaymath}
where the parameter $y \in (\tfrac{1}{4},1)$, $\bold{b_i^*} = \bold{b_i'} -
\sum_{j=1}^{i-1} \mu_{i,j}\bold{b_j^*}$, and $\mu_{i,j} = \langle\bold{b_i'},\bold{b_j^*}\rangle / \langle\bold{b_j^*},\bold{b_j^*}\rangle$ for $1 \leq j < i \leq n$. (That is, $B^*=(\bold{b_1^*},\ldots{},\bold{b_n^*})$ is the Gram-Schmidt orthogonalized basis generated from B'.) Notice that LLL reduction is similar to, but weaker than, Korkin-Zolotarev reduction. The Lenstra-Lenstra-Lovász basis reduction algorithm converts lattice basis B into B' by performing two types of transformations. In a size-reduction step, LLL finds the largest j such that there exists an i > j for which $\mu_{i,j}$ violates Equation 1.1. By performing the transformation

\begin{displaymath}\bold{b_i} \longleftarrow \bold{b_i} - \left\lceil \mu_{i,j} \right\rfloor
\bold{b_j},
\end{displaymath}

where $\lceil \cdot \rfloor$ denotes the nearest-integer function, we ensure that the new value of $\mu_{i,j}$ satisfies $\vert\mu_{i,j}\vert \leq \tfrac{1}{2}$. In the exchange step, LLL searches for the smallest value of i such that $\bold{b_{i-1}}$ and $\bold{b_i}$ fail to satisfy Equation 1.2. Here LLL swaps the two vectors ( $\bold{b_{i-1}} \longleftarrow \bold{b_i}$ and $\bold{b_i} \longleftarrow
\bold{b_{i-1}}$) to force compliance with the second LLL-reduced condition. The LLL basis reduction algorithm alternately performs size-reduction and exchange steps until both Equations 1.1 and 1.2 are satisfied, at which point the algorithm halts. (An illustrative sketch of this procedure is given below, following the bounds.) LLL-reduced lattice bases satisfy a number of bounds. In particular, if y is the LLL parameter chosen above, then the first vector $\bold{b_1}$ of the reduced basis satisfies:

\begin{displaymath}\Vert\bold{b_1}\Vert \leq \left(\frac{4}{4y-1}\right)^{n-1} \Vert\bold{x}\Vert,
\quad\text{for all } \bold{x} \in L, \bold{x} \neq \bold{0}.
\end{displaymath}

For example, for $y = \tfrac{3}{4}$ (the value used in [27]), we have

\begin{displaymath}\Vert\bold{b_1}\Vert \leq 2^{n-1} \Vert\bold{x}\Vert, \quad\text{for all } \bold{x} \in L,
\bold{x} \neq \bold{0}.
\end{displaymath}
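To make the two reduction conditions and the alternation of size-reduction and exchange steps concrete, the following small sketch implements a naive version of the procedure in Python with exact rational arithmetic. It is only an illustration of Equations 1.1 and 1.2 with parameter y; it is not the implementation discussed in this thesis, it recomputes the Gram-Schmidt data from scratch after every change, and it applies size reductions in a simpler order than the description above.

\begin{verbatim}
# Illustrative sketch only: a naive LLL-style reduction using exact
# rational arithmetic.  Not the implementation used in this thesis.
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    # Return (Bstar, mu) with b_i^* = b_i - sum_{j<i} mu[i][j] * b_j^*;
    # assumes the rows of B are linearly independent.
    n = len(B)
    Bstar = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in B[i]]
        for j in range(i):
            mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
            v = [vk - mu[i][j] * bk for vk, bk in zip(v, Bstar[j])]
        Bstar.append(v)
    return Bstar, mu

def lll_reduce(B, y=Fraction(3, 4)):
    # Alternate size-reduction and exchange steps until both
    # conditions (1.1) and (1.2) hold for the parameter y.
    B = [[Fraction(x) for x in row] for row in B]
    n = len(B)
    while True:
        Bstar, mu = gram_schmidt(B)
        # Size reduction: enforce |mu[i][j]| <= 1/2 (condition 1.1).
        for i in range(1, n):
            for j in range(i - 1, -1, -1):
                if abs(mu[i][j]) > Fraction(1, 2):
                    q = round(mu[i][j])          # nearest integer
                    B[i] = [bi - q * bj for bi, bj in zip(B[i], B[j])]
                    Bstar, mu = gram_schmidt(B)  # refresh mu values
        # Exchange: swap the first adjacent pair violating (1.2); note
        # ||b_i^* + mu[i][i-1] b_{i-1}^*||^2
        #     = ||b_i^*||^2 + mu[i][i-1]^2 ||b_{i-1}^*||^2.
        for i in range(1, n):
            lhs = dot(Bstar[i], Bstar[i]) \
                + mu[i][i - 1] ** 2 * dot(Bstar[i - 1], Bstar[i - 1])
            if lhs < y * dot(Bstar[i - 1], Bstar[i - 1]):
                B[i - 1], B[i] = B[i], B[i - 1]
                break
        else:
            return B  # both conditions hold: B is LLL-reduced
\end{verbatim}

For example, with the basis $B = ((1,1,1), (-1,0,2), (3,5,6))$, the call lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]) returns a basis of the same integer lattice for which both conditions hold with $y = \tfrac{3}{4}$.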

These bounds show that the length of $\bold{b_1}$ is at most an exponential multiple of the length of the shortest nonzero vector in the lattice. (Similar bounds exist for the other vectors in the reduced basis.) In practice, the LLL algorithm usually performs much better than this exponential bound suggests, although example lattice bases are known which cause the LLL algorithm to exhibit its worst-case behavior.

Most of the work on lattice basis reduction algorithms since the introduction of LLL has focused on improving and extending the technique. For example, the version of LLL described above requires rational arithmetic (the $\mu_{i,j}$ variables in particular must be stored as fractions); multiprecision floating-point numbers are usually used to reduce the computational requirements, but may introduce error into the calculations. One method of reducing the multiprecision requirements is described in [33]. Similarly, [32] showed how to modify the LLL algorithm so that the set of input vectors may be linearly dependent. Hierarchies of LLL-type algorithms have also been investigated [34], stretching from LLL reduction at one end of the spectrum to Korkin-Zolotarev reduction at the other. However, little effort has been devoted to algorithms not derived from or similar to LLL.

This thesis examines an approach to lattice basis reduction whose structure differs from that of the LLL algorithm and related methods. The method was originally suggested by M. Seysen [38] to simultaneously produce a reduced basis and a reduced dual basis for a lattice L. Where the LLL algorithm concentrates on local optimizations to produce a reduced basis, the Seysen approach considers the entire basis when choosing which optimization to perform. (Recall that each LLL size-reduction step adjusts only a single coefficient $\mu_{i,j}$, and that only adjacent vectors $\bold{b_{i-1}}$ and $\bold{b_i}$ may be exchanged.)

Chapter 2 describes the first phase of the research, in which Seysen's basis reduction algorithm and several variants were implemented and examined. Theoretical and empirical analyses of the fixed components of Seysen's algorithm are given. For those parts of the algorithm which are permitted to vary, we examine some of the possible variations and look at the effect of these changes on the performance of the algorithm. Possible extensions of the algorithm are also discussed.

The motivation behind our study of Seysen's lattice basis reduction algorithm is presented in Chapter 3. It is known that certain subset sum problems may be solved by finding the shortest nonzero vector in a particular lattice, where the lattice is constructed from the specific subset sum problem instance (a typical construction is sketched below). The best methods previously known for reducing subset sum problem lattices [26,33] combine the LLL algorithm with other heuristics, and are not very successful for n > 25, where n is the size of the subset sum problem to be solved. Chapter 3 details experiments which used Seysen's algorithm in combination with the LLL algorithm and other heuristics to solve a much greater range of subset sum problems.
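For concreteness, one well-known construction of this kind is the subset sum lattice of Lagarias and Odlyzko; the following sketch is only illustrative, and the lattices actually reduced in Chapter 3 may differ in detail. Given weights $a_1,\ldots{},a_n$, a target sum $s$, and a suitably large positive integer $N$, consider the lattice generated by the rows

\begin{displaymath}
\begin{array}{rcl}
\bold{b_1} & = & (1, 0, \ldots{}, 0, N a_1) \\
\bold{b_2} & = & (0, 1, \ldots{}, 0, N a_2) \\
 & \vdots & \\
\bold{b_n} & = & (0, 0, \ldots{}, 1, N a_n) \\
\bold{b_{n+1}} & = & (0, 0, \ldots{}, 0, N s).
\end{array}
\end{displaymath}

If $x \in \{0,1\}^n$ satisfies $\sum_{i=1}^{n} x_i a_i = s$, then $\sum_{i=1}^{n} x_i \bold{b_i} - \bold{b_{n+1}} = (x_1,\ldots{},x_n,0)$ is a lattice vector of Euclidean length at most $\sqrt{n}$, so a sufficiently strong basis reduction algorithm can hope to expose the solution as a short vector in the reduced basis.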