  
The Seysen Basis Reduction Algorithm

In 1990, Martin Seysen proposed a new method for performing lattice basis reduction [38]. Seysen's basis reduction algorithm (or just Seysen's algorithm) differs from the LLL algorithm and its variants in that it considers all vectors in the lattice simultaneously, and performs operations on those vectors which will reduce the lattice according to some measure. Recall that the LLL algorithm works locally on the lattice it is reducing; LLL will only perform an operation on two vectors which are adjacent to each other in the ordered lattice basis. Seysen was motivated to create a new method for basis reduction by a desire to find a better way to simultaneously reduce a lattice and its reciprocal (or dual) lattice. If lattice L is defined by basis vectors $\bold{b_1},\ldots{},\bold{b_n}$, then the dual lattice L* of L is defined by basis vectors $\bold{b_1^*},\ldots{},\bold{b_n^*}$, where

\begin{displaymath}\begin{split}
(\bold{b_i},\bold{b_i^*}) & = 1, \\
(\bold{b_i},\bold{b_j^*}) & = 0, \quad\text{for } i \neq j.
\end{split}
\end{displaymath} (2.1)

Now consider what happens in the dual lattice when we perform a row move on $\bold{b_i}$ and $\bold{b_j}$ in L. (A row move is any operation which adds a constant multiple of one lattice basis vector to another basis vector.) Let $\bold{b_j'} = \bold{b_j} + \lambda\bold{b_i}$, where $\lambda \in {\Bbb Z}$. We consider what changes must occur in the dual lattice basis vectors $\bold{b_1^*},\ldots{},\bold{b_n^*}$, since Equation 2.1 must hold at all times. For $k \neq i$, we find that:
\begin{align*}\bold{{b_k^*}'} & = \bold{b_k^*}, \\
\intertext{since}
(\bold{b_j'},\bold{b_k^*}) & = (\bold{b_j},\bold{b_k^*}) + \lambda(\bold{b_i},\bold{b_k^*}), \\
& = 0.
\end{align*}
For k = i, however, this is not the case:
\begin{align*}\bold{{b_i^*}'} & = \bold{b_i^*} - \lambda\bold{b_j^*}, \\
\intertext{since}
(\bold{b_j'},\bold{{b_i^*}'}) & = (\bold{b_j},\bold{b_i^*}) - \lambda(\bold{b_j},\bold{b_j^*})
+ \lambda(\bold{b_i},\bold{b_i^*}) - \lambda^2(\bold{b_i},\bold{b_j^*}), \\
& = 0 - \lambda + \lambda - 0, \\
& = 0.
\end{align*}
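This bookkeeping can be checked numerically. Below is a minimal Python sketch using a hypothetical 2-dimensional basis and a hypothetical value of $\lambda$ (neither is from the text), with exact rational arithmetic so the duality conditions hold exactly:

```python
from fractions import Fraction

# Columns of B are the basis vectors of a hypothetical 2-D lattice L.
B = [[2, 1],
     [0, 3]]

def inv2(M):
    """Inverse of a 2x2 matrix over the rationals."""
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def transpose(M):
    return [list(r) for r in zip(*M)]

def dual(M):
    # Columns of (M^{-1})^T are the dual basis vectors b_i^*,
    # since B^T B* = I is exactly the condition (b_i, b_j^*) = delta_{ij}.
    return transpose(inv2(M))

lam = 5
Bstar = dual(B)

# Row move in L: b_2' = b_2 + lam * b_1 (second column of B).
for r in range(2):
    B[r][1] += lam * B[r][0]

# Predicted dual update from the derivation: b_1^*' = b_1^* - lam * b_2^*.
for r in range(2):
    Bstar[r][0] -= lam * Bstar[r][1]

# The predicted dual basis matches the true dual of the updated B.
assert Bstar == dual(B)
```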
Thus, when we add $\lambda\bold{b_i}$ to $\bold{b_j}$ in the basis of lattice L, we must subtract $\lambda\bold{b_j^*}$ from $\bold{b_i^*}$ in the basis of L*. It is easy to see that if lattice L is reduced with the LLL algorithm, the resulting reduced basis may have a dual basis in which some vector is quite large, as no attempt is ever made to consider the size of the dual basis when row moves are being performed. Seysen's algorithm attempts to choose row moves that reduce both the lattice basis and its dual.

We now outline the basic operation of Seysen's algorithm. Let L and L* be a lattice and its dual which we wish to reduce simultaneously. Let A and A* be the quadratic forms associated with L and L*, respectively:
\begin{alignat*}{4}
A & = & & \left[ a_{i,j} \right]_{1 \leq i,j \leq n} & & = & & \left[ (\bold{b_i},\bold{b_j}) \right]_{1 \leq i,j \leq n}, \\
A^* & = & & \left[ a_{i,j}^* \right]_{1 \leq i,j \leq n} & & = & & \left[ (\bold{b_i^*},\bold{b_j^*}) \right]_{1 \leq i,j \leq n}.
\end{alignat*}
The element $a_{i,j}$ of matrix A is the inner product of the basis vectors $\bold{b_i}$ and $\bold{b_j}$ of lattice L. Notice that A and A* are symmetric and are inverses of each other. If L is the lattice defined by basis B, then any other basis B' of L may be written as:

\begin{displaymath}B' = B \;T,
\end{displaymath}

where $T \in SL_n({\Bbb Z})$ (i.e. T is an $n \times n$ integer matrix with determinant 1). The quadratic form A' associated with B' may similarly be derived from A:

\begin{displaymath}A' = T^t \;A \;T.
\end{displaymath}
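This relation can be checked directly. The following small Python sketch uses a hypothetical basis B and a hypothetical unimodular transform T (determinant 1), verifying that the Gram matrix of B' = BT equals $T^t A T$:

```python
# Columns of B are hypothetical basis vectors; T is a hypothetical
# unimodular transform: det T = 1*(-1) - (-2)*1 = 1.
B = [[2, 1],
     [0, 3]]
T = [[1, -2],
     [1, -1]]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(r) for r in zip(*X)]

def gram(M):
    """Quadratic form A = M^t M, i.e. a_{i,j} = (b_i, b_j)."""
    return matmul(transpose(M), M)

Bp = matmul(B, T)                                             # B' = B T
assert gram(Bp) == matmul(matmul(transpose(T), gram(B)), T)   # A' = T^t A T
```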

For any quadratic form A, define the Seysen measure S(A) as follows:

 \begin{displaymath}S(A) = \sum_{i=1}^n a_{i,i} a_{i,i}^* = \sum_{i=1}^n \Vert\bold{b_i}\Vert^2
\Vert\bold{b_i^*}\Vert^2.
\end{displaymath} (2.2)
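As an illustration of computing $S(A)$, here is a small Python sketch with a hypothetical 2-dimensional basis, using exact rationals. Note that each diagonal term $a_{i,i} a_{i,i}^* \geq 1$, so $S(A) \geq n$, with equality for an orthogonal basis:

```python
from fractions import Fraction

# Hypothetical basis vectors of L as the columns of B.
B = [[2, 1],
     [0, 3]]

def gram(M):
    """Quadratic form A = [(b_i, b_j)] of the columns of M."""
    n = len(M)
    return [[sum(M[k][i] * M[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(M):
    """Inverse of a 2x2 matrix over the rationals (A* = A^{-1})."""
    a, b = M[0]
    c, d = M[1]
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

def seysen_measure(A):
    """S(A) = sum of a_{i,i} * a_{i,i}^* over the diagonal."""
    Astar = inv2(A)
    return sum(A[i][i] * Astar[i][i] for i in range(len(A)))

A = gram(B)                  # [[4, 2], [2, 10]]
print(seysen_measure(A))     # prints 20/9, slightly above the lower bound n = 2
```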

A basis B is then S-reduced if:

 \begin{displaymath}S(A) \leq S(T^t \;A \;T), \quad\text{for all } T \in SL_n({\Bbb Z}).
\end{displaymath} (2.3)

We suspect that it is computationally difficult to find the optimal transformation matrix T for a given basis B. Consider, however, the class of transformation matrices $T_{i,j}^\lambda $ defined by:
\begin{align*}T_{i,j}^\lambda & = I_n + \lambda U_{i,j}, \quad\text{where $i \neq j$ and} \\
\left[ U_{i,j} \right]_{k,l} & =
\begin{cases}
1 & \text{if $k = i$ and $l = j$}, \\
0 & \text{if $k \neq i$ or $l \neq j$}.
\end{cases}
\end{align*}
(The matrix $U_{i,j}$ has exactly one nonzero entry. Matrix $T_{i,j}^\lambda $ has diagonal entries of 1 and exactly one nonzero off-diagonal entry.) Right-multiplying B by any $T_{i,j}^\lambda $ simply adds $\lambda $ times the $i^{\rm th}$ column of B to the $j^{\rm th}$ column of B. If the columns of B are the basis vectors $\bold{b_i}$, then $B \;T_{i,j}^\lambda$ is simply the transformed basis $B' = (\bold{b_1},\bold{b_2},\ldots{},\bold{b_{j-1}},\bold{b_j}+\lambda\bold{b_i},\ldots{},\bold{b_n})$. Since it is easy to perform calculations with $T_{i,j}^\lambda $ transformations, we focus our attention on products of one or more $T_{i,j}^\lambda $ transformation matrices. It can be shown that every $T \in SL_n({\Bbb Z})$ may be written as such a product:

\begin{displaymath}SL_n({\Bbb Z}) = \left\{\ T \; \colon \; T = \prod_k
T_{i_k,j_k}^{\lambda_k}, 1 \leq k < \infty \right\}.\end{displaymath}
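For concreteness, with n = 3, i = 1, and j = 2, the elementary transformation reads:

\begin{displaymath}T_{1,2}^\lambda =
\begin{pmatrix}
1 & \lambda & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix},
\qquad
B \; T_{1,2}^\lambda = (\bold{b_1},\; \bold{b_2} + \lambda\bold{b_1},\; \bold{b_3}).
\end{displaymath}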

We call a quadratic form S2-reduced if

\begin{displaymath}S(A) \leq S(T_{j,i}^\lambda \;A \;T_{i,j}^\lambda),
\quad\text{for } 1 \leq i,j \leq n, \text{ for } \lambda \in {\Bbb Z}.\end{displaymath}

Seysen suggests the following algorithm for S2-reducing a quadratic form:


while (A is not S2-reduced) do
    choose i, j such that $\exists\ \lambda \in {\Bbb Z}$ with
\begin{displaymath}S\left(T_{j,i}^\lambda \;A \;T_{i,j}^\lambda\right) < S(A)\end{displaymath}
    let
\begin{displaymath}\lambda = \left\lceil \frac{1}{2}\left(\frac{a_{i,j}^*}{a_{j,j}^*} -
\frac{a_{i,j}}{a_{i,i}}\right) \right\rfloor \end{displaymath}
    let
\begin{displaymath}A = T_{j,i}^\lambda \;A \;T_{i,j}^\lambda \end{displaymath}

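A runnable sketch of this loop is given below, assuming an integer input form and exact rational arithmetic. The function names and the greedy pair-sweep order are our choices; the pseudocode above leaves the selection of i and j open, and ties in the nearest-integer rounding are broken upward here as one possible convention:

```python
from fractions import Fraction
from math import floor

def inverse(A):
    """Gauss-Jordan inverse over the rationals (A* = A^{-1})."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(1 if i == j else 0) for j in range(n)] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def seysen_measure(A, Astar):
    return sum(A[i][i] * Astar[i][i] for i in range(len(A)))

def nearest_int(x):
    """Nearest integer to x; ties rounded up (one possible convention)."""
    return floor(x + Fraction(1, 2))

def apply_move(A, Astar, i, j, lam):
    """A <- T_{j,i}^lam A T_{i,j}^lam, plus the matching update of A* = A^{-1}."""
    n = len(A)
    for l in range(n):                 # left multiply: row j += lam * row i
        A[j][l] += lam * A[i][l]
    for k in range(n):                 # right multiply: col j += lam * col i
        A[k][j] += lam * A[k][i]
    for l in range(n):                 # dual update: row i -= lam * row j
        Astar[i][l] -= lam * Astar[j][l]
    for k in range(n):                 # dual update: col i -= lam * col j
        Astar[k][i] -= lam * Astar[k][j]

def s2_reduce(A):
    """Greedy S2-reduction of an integer quadratic form (a sketch, not
    Seysen's exact implementation)."""
    A = [[Fraction(x) for x in row] for row in A]
    Astar = inverse(A)
    improved = True
    while improved:
        improved = False
        for i in range(len(A)):
            for j in range(len(A)):
                if i == j:
                    continue
                lam = nearest_int(Fraction(1, 2) *
                                  (Astar[i][j] / Astar[j][j] - A[i][j] / A[i][i]))
                if lam == 0:
                    continue
                s_before = seysen_measure(A, Astar)
                apply_move(A, Astar, i, j, lam)
                if seysen_measure(A, Astar) < s_before:
                    improved = True
                else:
                    apply_move(A, Astar, i, j, -lam)   # undo: no improvement
    return A
```

For example, `s2_reduce([[4, 22], [22, 130]])` returns `[[4, 2], [2, 10]]`, the quadratic form of a basis with much shorter vectors; each accepted move strictly decreases S(A), and since S takes values in a discrete set for integer forms, the loop terminates in that setting.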
In the choice of $\lambda$ above, $\lceil \cdot \rfloor$ denotes the nearest-integer function. This procedure for S2-reducing a quadratic form is Seysen's basis reduction algorithm.

To date, little has been proven concerning the performance of Seysen's algorithm. There are no known bounds on the Seysen measure of an S2-reduced basis (although bounds have been proven for S-reduced lattice bases), nor on the length of the shortest nonzero vector in the basis. The running time of Seysen's algorithm is clearly bounded if the lattice basis consists only of integer vectors, but it is not known whether the algorithm even terminates for basis vectors with real coefficients. However, preliminary experiments performed by Seysen on lattices of dimension $n \leq 30$ suggest that this technique may be faster than the LLL algorithm and yield bases with shorter vectors.

Based on these observations, a comprehensive investigation of theoretical and practical aspects of Seysen's algorithm was undertaken. This chapter details the results of our study of Seysen's basis reduction algorithm. Section 2.1 below discusses the theoretical underpinnings of Seysen's algorithm. Empirical tests performed with various versions of the Seysen algorithm are detailed in Section 2.2. Section 2.3 mentions some modifications which may be made to Seysen's algorithm when the performance of the basic version breaks down. Finally, Section 2.4 discusses possible extensions to Seysen's algorithm.

 
Brian A. LaMacchia
1999-10-30