
# Lattice Basis Reduction Algorithms

Although both Minkowski reduction and Korkin-Zolotarev reduction provide
frameworks for studying lattice basis reduction, computing a reduced
lattice basis (in either sense) is in general a difficult problem.
Currently, there are no known polynomial-time algorithms for finding
either a Minkowski or a Korkin-Zolotarev reduced basis for a given
lattice *L*. If such an algorithm existed, then we would be able to
find the Euclidean-norm shortest nonzero vector in *L* in polynomial
time by finding the reduced basis, for which the first basis vector
$b_1$ is the desired vector. Thus, any polynomial-time lattice basis
reduction algorithm we
use will not be able to satisfy the strict conditions of Minkowski or
Korkin-Zolotarev reduction.
Techniques for finding relatively small vectors in a lattice have been
known for some time (see [22] for example); it was not until
recently, though, that a fast algorithm was known which was guaranteed
to produce lattice bases with relatively short vectors. In
[27] Lenstra, Lenstra and Lovász described a polynomial-time
algorithm for transforming a given lattice basis
$B = [b_1, b_2, \ldots, b_n]$ of lattice *L* into an *LLL-reduced*
lattice basis $B' = [b'_1, b'_2, \ldots, b'_n]$.
A basis *B*' is
LLL-reduced if it has the following two properties:

$$ |\mu_{i,j}| \le \frac{1}{2}, \qquad 1 \le j < i \le n, \tag{1.1} $$

$$ \left\| b'^*_i + \mu_{i,i-1}\, b'^*_{i-1} \right\|^2 \ge y \left\| b'^*_{i-1} \right\|^2, \qquad 1 < i \le n, \tag{1.2} $$

where the parameter $y$ satisfies $\frac{1}{4} < y < 1$ and
$\mu_{i,j} = \langle b'_i, b'^*_j \rangle / \langle b'^*_j, b'^*_j \rangle$.
(That is, $B'^* = [b'^*_1, \ldots, b'^*_n]$ is a Gram-Schmidt orthogonalized
basis generated from *B*'.) Notice that LLL reduction is similar to but
weaker than Korkin-Zolotarev reduction.
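The two LLL conditions can be checked mechanically from the Gram-Schmidt data. The following is a minimal sketch in exact rational arithmetic (the function names and data layout are ours, not the thesis's), assuming a basis of linearly independent integer row vectors:

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(basis):
    """Gram-Schmidt orthogonalization in exact rational arithmetic.
    Returns (bstar, mu) with mu[i][j] = <b_i, b*_j> / <b*_j, b*_j>."""
    n = len(basis)
    bstar = []
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        v = [Fraction(x) for x in basis[i]]
        for j in range(i):
            mu[i][j] = dot(basis[i], bstar[j]) / dot(bstar[j], bstar[j])
            v = [vk - mu[i][j] * bk for vk, bk in zip(v, bstar[j])]
        bstar.append(v)
    return bstar, mu

def is_lll_reduced(basis, y=Fraction(3, 4)):
    """Test the two LLL conditions: the size-reduction bound
    |mu_ij| <= 1/2 (Equation 1.1) and the exchange condition on
    consecutive Gram-Schmidt vectors (Equation 1.2)."""
    bstar, mu = gram_schmidt(basis)
    n = len(basis)
    for i in range(n):
        for j in range(i):
            if abs(mu[i][j]) > Fraction(1, 2):          # Equation 1.1
                return False
    for i in range(1, n):
        # ||b*_i + mu_{i,i-1} b*_{i-1}||^2, expanded using orthogonality
        lhs = dot(bstar[i], bstar[i]) + mu[i][i - 1] ** 2 * dot(bstar[i - 1], bstar[i - 1])
        if lhs < y * dot(bstar[i - 1], bstar[i - 1]):   # Equation 1.2
            return False
    return True
```

Exact `Fraction` arithmetic avoids the floating-point error issues discussed later in this section, at the cost of larger intermediate numbers.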
The Lenstra-Lenstra-Lovász basis reduction algorithm
converts lattice basis *B* into *B*' by performing two types of
transformations. In a *size-reduction* step, LLL finds the
largest *j* such that there exists an *i* > *j* for which $\mu_{i,j}$
violates Equation 1.1. By performing the transformation

$$ b'_i \leftarrow b'_i - \lceil \mu_{i,j} \rfloor \, b'_j, $$

where $\lceil \cdot \rfloor$ denotes the nearest-integer function, we
find that the new value of $\mu_{i,j}$ is
$\mu_{i,j} - \lceil \mu_{i,j} \rfloor$, whose absolute value is at most
$\frac{1}{2}$.
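A single size-reduction transformation is a one-line update; the sketch below (names are ours) assumes the Gram-Schmidt coefficient `mu_ij` has already been computed elsewhere:

```python
def size_reduce_step(basis, i, j, mu_ij):
    """One size-reduction step: b_i <- b_i - round(mu_ij) * b_j.
    Afterwards the Gram-Schmidt coefficient for (i, j) becomes
    mu_ij - round(mu_ij), whose absolute value is at most 1/2."""
    r = round(mu_ij)              # the nearest-integer function
    basis[i] = [bi - r * bj for bi, bj in zip(basis[i], basis[j])]
```

Note that Python's built-in `round` resolves exact ties (coefficients of the form k + 1/2) toward the even integer; either choice of tie-breaking satisfies the bound of 1/2.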
In the *exchange* step, LLL searches for the smallest value of *i* such
that $b'_{i-1}$ and $b'_i$ fail to satisfy Equation 1.2. Here LLL swaps
the two vectors ($b'_{i-1}$ and $b'_i$) to force compliance with the
second LLL-reduced condition. The LLL basis reduction algorithm
alternately performs size-reduction and exchange steps until both
Equations 1.1 and 1.2 are satisfied, at which point the algorithm
halts.
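The alternation of size-reduction and exchange steps can be sketched as follows. This is a self-contained illustration in exact rational arithmetic, not the thesis's implementation; it recomputes the full Gram-Schmidt orthogonalization after every change for clarity, whereas practical implementations update that data incrementally:

```python
from fractions import Fraction

def lll(basis, y=Fraction(3, 4)):
    """Sketch of LLL reduction by alternating size-reduction and
    exchange steps (structure and names are ours)."""
    B = [[Fraction(x) for x in row] for row in basis]
    n = len(B)

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def gso():
        bstar = []
        mu = [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = list(B[i])
            for j in range(i):
                mu[i][j] = dot(B[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vk - mu[i][j] * bk for vk, bk in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        bstar, mu = gso()
        # size-reduction: enforce |mu_kj| <= 1/2 for all j < k
        for j in range(k - 1, -1, -1):
            r = round(mu[k][j])          # nearest-integer function
            if r != 0:
                B[k] = [bk - r * bj for bk, bj in zip(B[k], B[j])]
                bstar, mu = gso()
        # exchange step: check Equation 1.2 for the pair (k-1, k)
        lhs = dot(bstar[k], bstar[k]) + mu[k][k - 1] ** 2 * dot(bstar[k - 1], bstar[k - 1])
        if lhs >= y * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]
            k = max(k - 1, 1)
    return [[int(x) for x in row] for row in B]
```

For instance, `lll([[15, 0], [11, 1]])` yields a basis of the same lattice whose first vector is much shorter than either input vector.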
LLL-reduced lattice bases satisfy a number of bounds. In particular, if
*y* is the global LLL constant, then the first vector in the reduced
lattice basis $b'_1$ satisfies:

$$ \|b'_1\|^2 \le \left( \frac{4}{4y - 1} \right)^{n-1} \|x\|^2 $$

for every nonzero lattice vector $x \in L$. In particular, for
$y = \frac{3}{4}$ (the value used in [27]),
we have

$$ \|b'_1\|^2 \le 2^{\,n-1} \|x\|^2. $$

Thus the length of $b'_1$ is at most an exponential multiple of
the length of the shortest nonzero vector in the lattice. (Similar
bounds exist for the other vectors in the reduced basis.) In practice,
the LLL algorithm usually performs much better than this exponential
bound, although example lattice bases are known which cause the LLL
algorithm to exhibit worst-case performance.
Most of the work on lattice basis reduction algorithms since the
introduction of LLL has focused on improving and extending the
technique. For example, the version of LLL described above requires
rational arithmetic (the $\mu_{i,j}$ variables in particular must be
stored as fractions); multiprecision floating-point numbers are usually
used to reduce the computation requirements, but may introduce error
into the calculations. One method of reducing the multiprecision
requirements is described in [33]. Similarly, [32] showed
how to modify the LLL algorithm so that the set of input vectors can be
linearly dependent. Hierarchies of LLL-type algorithms have been
investigated [34], stretching from LLL-reduction at one end of
the spectrum to Korkin-Zolotarev reduction at the other. However,
little effort has been expended on looking at algorithms not derived
from or similar to LLL.
This thesis examines an approach to lattice basis reduction whose
structure differs from that of the LLL algorithm and related methods. This
method was originally suggested by M. Seysen [38] to simultaneously
produce a reduced basis and a reduced dual basis for some lattice *L*.
Where the LLL algorithm concentrates on local optimizations to produce a
reduced lattice, the Seysen approach considers the entire basis for
methods of optimization. (Recall that the LLL *size-reduction*
steps only consider
for the smallest possible *j*, and that
only adjacent vectors
and
may be *
exchanged*.)
Chapter 2 describes the first phase of the research,
in which Seysen's basis reduction algorithm and multiple variants were
implemented and examined. Theoretical and empirical analyses of the
fixed components of Seysen's algorithm are given. For those parts of
the algorithm which are permitted to vary, we examine some of the
possible variations, and look at the effect of these changes on the
performance of the algorithm. Possible extensions of the
algorithm are also discussed.
The motivation behind our study of Seysen's lattice basis reduction
algorithm is presented in Chapter 3. It is known
that certain *subset sum problems* may be solved by finding the
shortest nonzero vector in a particular lattice (the lattice is
generated based on the specific construction of the subset sum problem).
The best methods previously known for reducing subset sum problem
lattices [26,33] involve the LLL algorithm and some other
heuristics, and are not very successful for *n* > 25 (*n* is the
size of the subset sum problem to be solved).
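As an illustration of how such a lattice is generated, the sketch below builds a common (n+1)-dimensional basis from the weights and target of a subset sum instance. This is a standard textbook construction given here for concreteness; the function name is ours, and the precise lattice used in the experiments of Chapter 3 may differ:

```python
def subset_sum_lattice(weights, target):
    """Build an (n+1)-column basis (rows are lattice vectors) for a
    subset sum instance: row i is the i-th unit vector with the
    weight a_i appended, and the last row carries only the target.
    If x is a 0/1 solution with sum_i x_i * a_i = target, then
    sum_i x_i * b_i - b_{n+1} = (x_1, ..., x_n, 0), a very short
    lattice vector that a reduction algorithm may uncover."""
    n = len(weights)
    basis = []
    for i, a in enumerate(weights):
        row = [0] * (n + 1)
        row[i] = 1              # unit-vector part records x_i
        row[n] = a              # weight column enforces the sum
        basis.append(row)
    basis.append([0] * n + [target])
    return basis
```

In practice the weight column is often multiplied by a large scaling constant so that any lattice vector with a nonzero last coordinate is penalized and cannot be shortest.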
Chapter 3 details experiments which used Seysen's
algorithm in combination with the LLL algorithm and other heuristics to
solve a much greater range of subset sum problems.

*Brian A. LaMacchia*

*1999-10-30*