
2000 IEEE Midwest Symposium on Circuits and Systems,
Lansing, Michigan, USA, August, 2000.

Mathematical Unification of Dynamic-Element-Matching Methods for Spectral Shaping of Hardware-Mismatch Errors

Jeffrey O. Coleman

Naval Research Laboratory
Radar Division
Washington, DC


Multibit delta-sigma conversion requires an internal DAC so extraordinarily accurate that signal processing to move DAC hardware-mismatch error outside the signal band appears necessary. Here the error-shaping DACs reported previously are shown mathematically to be special cases of a general architecture convenient for analysis and simulation.

1 Introduction

Multibit $\Delta\Sigma$ conversion (both D/A and A/D) requires a DAC with exceptional in-band precision. Several recent dynamic-element-matching DACs achieve this by dynamically enabling subsets of a bank of simple one-bit subconverters so that their summed outputs exhibit suppressed hardware-mismatch error in the desired band. The recently presented [1] error-shaping DAC architecture of Fig. 1 (top) is shown here to generalize those of both Schreier [2] and Galton [3]. The latter then become efficiently realized special cases of a general form convenient for analysis and simulation.

The general system, at the top of Fig. 1, is simple. If the allowable set of DAC input vectors $\mathbf{v}(n)$ is $\cal V$, the set of permissible $x$ values is precisely $\mathbf{r}^T{\cal V}$, where for convenience $\mathbf{r}^T$ denotes the first row of $\mathbf{A}$. If $\mathbf{r}^T=(1,\ldots,1)$ and ${\cal V} = \{0,1\}^N$, for example, $x \in \mathbf{r}^T{\cal V} = \{0,\ldots,N\}$ results, creating an $(N+1)$-level DAC system from $N$ one-bit DACs.
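As a concrete check of this input-range claim, the sketch below enumerates $\mathbf{r}^T{\cal V}$ for $\mathbf{r}^T=(1,\ldots,1)$ and ${\cal V}=\{0,1\}^N$; the size $N=4$ is an arbitrary choice for illustration.

```python
import itertools

# Enumerate the permissible system inputs x = r^T v for the example in the
# text: r = (1, ..., 1) and v ranging over V = {0,1}^N (N one-bit DACs).
N = 4  # illustrative size; the text leaves N general
r = [1] * N
x_values = sorted({sum(ri * vi for ri, vi in zip(r, v))
                   for v in itertools.product([0, 1], repeat=N)})
print(x_values)  # [0, 1, 2, 3, 4]: an (N+1)-level DAC from N one-bit DACs
```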

The potential to suppress hardware errors is inherent in the structure. Model the outputs of the $N$ internal DACs as vector $(\mathbf{P}\star \mathbf{v})(t) +\mathbf{e}(t)$, where nominally diagonal pulse matrix $\mathbf{P}(t)$ and error vector $\mathbf{e}(t)$ represent the dynamic (input-related) and static characteristics of the DACs respectively and where the mixed-time convolution of $\mathbf{P}(t)$ and DAC input vector $\mathbf{v}(n)$ is given by $(\mathbf{P}\star \mathbf{v})(t)\stackrel{\Delta}{=}T \sum_k
\mathbf{P}(t-kT)\,\mathbf{v}(k)$. Using $\mathbf{w}^{\rm T}\stackrel{\Delta}{=}( x,\,
\mathbf{s}^{\rm T})$ for convenience and suppressing time dependence for brevity, the system output is $y = \mathbf{r}^T ( \mathbf{P} \star \mathbf{v} + \mathbf{e} ) = ( \mathbf{r}^T \mathbf{P} \mathbf{A}^{-1} ) \star \mathbf{w} + \mathbf{r}^T \mathbf{e}$, comprising a term linear in $x$, a term linear in $\mathbf{s}$, and an independent term $\mathbf{r}^T \mathbf{e}$. We seek in the spectral band of interest to have very little power in scalar $\mathbf{r}^T \mathbf{e}(t)$ and vector $\mathbf{s}(n)$, in the latter case to make irrelevant any deviations of matrix pulse $\mathbf{P}(t)$ from its nominal characteristic. This gives the system its robustness to hardware errors.
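A static-gain toy model makes the decomposition concrete. Here, purely for illustration, the pulse matrix $\mathbf{P}$ is replaced by a constant diagonal gain matrix with $N=2$ and made-up mismatch values; the output then splits into a term in $x$ and a mismatch term driven only by $s$.

```python
import numpy as np

# Static-gain toy model of the output decomposition: N = 2, r^T = (1, 1),
# A = [[1, 1], [1, -1]], and per-DAC gains g_i = 1 + delta_i standing in
# for the pulse matrix P (the delta_i values are made up for illustration).
A = np.array([[1.0, 1.0], [1.0, -1.0]])
Ainv = np.linalg.inv(A)
g = np.array([1.03, 0.95])            # mismatched unit-DAC gains (assumed)

x, s = 1.0, -1.0                      # one sample of w = (x, s)
v = Ainv @ np.array([x, s])           # DAC input vector v = A^{-1} w
y = g @ v                             # summed DAC outputs

# The output splits into a term in x and a mismatch term driven only by s:
# y = mean(g) * x + ((g1 - g2)/2) * s
assert np.isclose(y, g.mean() * x + (g[0] - g[1]) / 2 * s)
print(y)
```

With matched gains the $s$ term vanishes, which is why suppressing in-band power in $\mathbf{s}$ suppresses in-band mismatch error.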

Figure 1: Generalized error-shaping DAC (top), and with two possible switching-vector quantization loops (middle, bottom).

Generating switching-vector sequence $\mathbf{s}(n)$ to have a shaped spectrum is complicated by requiring at each $n$ that it be chosen from an allowable set, dependent on input $x(n)$, that will keep $\mathbf{v}$ in the input range of the DACs. To do this, the middle and bottom systems of Fig. 1 repeat the top system in streamlined notation with one new aspect:

Generate $\mathbf{s}$ with an inputless $\Delta\Sigma$-style loop: Derive the quantizer input by filtering either the quantizer output $\mathbf{s}$ or the quantization error.
This loop and its topology already specialize the system. With either loop shown, loop-filter LF largely determines the spectrum of the switching vector, and the quantizer ${\cal Q}_{\mathbf{s}}$ must quantize the loop-filter output such that $\mathbf{v}\in{\cal V}$. The next section shows that under reasonable assumptions $\mathbf{s}$ must lie in a subset of a particular lattice with an $x$-dependent offset. The specializations leading to the Galton and Schreier special cases are explored subsequently.
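The sketch below is a toy scalar stand-in for such a loop, not either published architecture: it assumes a first-order loop filter $H(z)=z^{-1}$ in error-feedback form and quantizes to the $x$-dependent allowed sets of the three-level Fig. 2 example, yielding a high-pass-shaped switching signal.

```python
import numpy as np

# Toy scalar stand-in for the inputless loop (assumed first-order loop
# filter H(z) = z^-1; not either published architecture): s(n) is quantized
# to the nearest member of an x-dependent allowed set (the Fig. 2 sets),
# and the quantization error is fed back.
allowed = {0: [0], 1: [-1, 1], 2: [-2, 0, 2], 3: [-1, 1], 4: [0]}
rng = np.random.default_rng(0)
x = rng.integers(0, 5, size=4096)             # system-input sequence

e, s = 0.0, np.empty(len(x))
for n in range(len(x)):
    u = -e                                    # quantizer input: filtered error
    s[n] = min(allowed[int(x[n])], key=lambda c: abs(c - u))
    e = s[n] - u                              # quantization error, fed back

# s(n) = e(n) - e(n-1), so the switching signal is first-order high-pass.
S = np.abs(np.fft.rfft(s)) ** 2
low, high = S[1:len(S) // 8].sum(), S[-len(S) // 8:].sum()
print(low / high)                             # far below 1: shaped spectrum
```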

There are other general mismatch-shaping architectures. The system of Hernandez [4] was unique in modeling circuit constraints (for example, KCL) on hardware-mismatch errors by confining the latter to a subspace. Whether or how this idea might be incorporated into the present system, which is more general in other respects, remains unexplored. Scholnik and Coleman [5,6] incorporate the system of [1] into a mismatch-shaped sigma-delta D/A system and thereby generalize on the present generalization.

The term ``the DAC'' below refers to the entire system, and ``each DAC'' (``DACs'') refers to one (all) subconverter(s). ``DAC input'' refers to the subconverters, with ``system input'' used for $x$.

2 Switching-Vector Quantization

Figure 2: Galton's analysis step for two three-level DACs.

A simple example, based on Galton's system, will clarify the goal of the derivation to follow. Suppose a Galton switching block [3], an analysis step with

\begin{displaymath}
\mathbf{A} = \left( \begin{array}{rr} 1 & 1 \\ 1 & -1 \end{array} \right),
\end{displaymath}
is constructed to produce three-level outputs. Figure 2 shows how signal input $x\in\{0,1,2,3,4\}$ results in DAC inputs $v_i\in\{0,1,2\}$ according to a switching signal $s$ whose range is dependent on $x$. In Galton's system, quantization of $s$ to the even or odd integers, according as $x$ is even or odd, first ensures that the DACs receive integer inputs, and the result is then hard limited in magnitude in an $x$-dependent way. Galton's odd integers, of course, are just the even integers offset by one, and the even integers are just the integers scaled up (by two). Both notions generalize below.
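These $x$-dependent ranges can be enumerated directly. The small check below reads $v_1=(x+s)/2$ and $v_2=(x-s)/2$ off Fig. 2 and lists, for each $x$, the $s$ values that keep both DAC inputs in $\{0,1,2\}$.

```python
# Enumerate the x-dependent switching ranges of Fig. 2: the DAC inputs are
# v1 = (x + s)/2 and v2 = (x - s)/2, each required to lie in {0, 1, 2}.
allowed = {x: [s for s in range(-4, 5)
               if (x + s) % 2 == 0
               and 0 <= (x + s) // 2 <= 2
               and 0 <= (x - s) // 2 <= 2]
           for x in range(5)}
print(allowed)
# s matches the parity of x, with an x-dependent magnitude limit.
```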

Begin by removing the limiting step, that is, extend the set of allowable DAC input vectors $\mathbf{v}$ to an offset lattice. For unit DAC elements, the extension is from $\{0,1\}^N$ to $\mathbb{Z}^N$. This is analogous to requiring the $(x,s)$ pair in Fig. 2 to be on the lattice that is the obvious extension of the nine points shown. More generally, let us require that $\mathbf{A}^{-1} \mathbf{w}=\mathbf{v}
\in \mathbf{\Gamma}\mathbb{Z}^N+\mathbf{\gamma}$, where $\mathbf{\gamma}$ is an arbitrary offset vector and where nonsingular lattice-basis matrix $\mathbf{\Gamma}$ is technically arbitrary, though in practice it appears to be always diagonal.

First determine the set of allowable $x$ values by splitting off the first row of $\mathbf{A}$ with $\mathbf{A}^T = \left( \mathbf{r}\:\:\: \mathbf{R}
\right)$ so that $\mathbf{w}=\mathbf{A}\mathbf{v}$ becomes $x=\mathbf{r}^T\mathbf{v}$ and $\mathbf{s}=\mathbf{R}^T\mathbf{v}$. This yields requirement

\begin{displaymath}
x \in \mathbb{X}+ \mathbf{r}^T \mathbf{\gamma}, \qquad \mathbb{X} \stackrel{\Delta}{=} \mathbf{r}^T \mathbf{\Gamma} \mathbb{Z}^N.
\end{displaymath} (1)

For Galton's system, $\mathbf{\Gamma}=\mathbf{I}$, $\mathbf{\gamma}=0$, and $\mathbf{r}^T=(1,\ldots,1)$, so the requirement is then just $x\in\mathbb{Z}$. Some authors use the odd integers for DAC inputs, so that $\mathbf{\Gamma}=2\mathbf{I}$ and $\mathbf{\gamma}^T=(1,\ldots,1)$, and the unsurprising result $x \in 2\mathbb{Z}+ N$ emerges, limiting $x$ to either the evens or the odds, depending on $N$.
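A quick numerical check of the odd-integer case, scanning only a finite window of $\mathbb{Z}^N$ (with $N=3$ chosen arbitrarily):

```python
import itertools

# Odd-integer DAC elements: Gamma = 2I, gamma = (1,...,1), r = (1,...,1),
# so v = 2z + 1 elementwise and x = r^T v.  Scan a finite window of Z^N
# (N = 3 chosen arbitrarily) and confirm x lies in 2Z + N.
N = 3
xs = sorted({sum(2 * zi + 1 for zi in z)
             for z in itertools.product(range(-2, 3), repeat=N)})
assert all((x - N) % 2 == 0 for x in xs)   # every x lies in 2Z + N
print(xs[:5], "...", xs[-1])
```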

Fixing $x$ in Fig. 2 restricts $s$ to the evens or odds. To what $x$-dependent set is vector $\mathbf{s}$ restricted in general? Note that (1) also applies in the case [5] of vector $\mathbf{x}$ and matrix $\mathbf{r}$, with $\mathbf{x}\in\mathbb{R}^M$ and $M<N$; the parallel arguments below are phrased for that general case. The mathematical argument is simpler but uses less-simple concepts; the engineering argument is the reverse.

Mathematical argument:
The map $\mathbb{Z}^N
\stackrel{\mathbf{r}^T\mathbf{\Gamma}}{\longrightarrow} \mathbb{X}$ is a lattice homomorphism with a sublattice kernel $\mathbf{D}\mathbb{Z}^N$ for some integer matrix $\mathbf{D}$. This homomorphism is onto, so factor group $\mathbb{Z}^N/\mathbf{D}\mathbb{Z}^N$ is isomorphic to $\mathbb{X}$. Lattice $\mathbb{X}=\mathbf{C}\mathbb{Z}^M$ for some generator matrix $\mathbf{C}$, so if the columns of integer matrix $\mathbf{B}$ map to the corresponding columns of $\mathbf{C}$, then $\mathbf{B}
\mathbf{m}\longrightarrow\mathbf{C}\mathbf{m}$, where $\mathbf{m}\in\mathbb{Z}^M$. Sublattice $\mathbf{B}\mathbb{Z}^M$ (of $\mathbb{Z}^N$) then comprises coset representatives $[\mathbb{Z}^N/\mathbf{D}\mathbb{Z}^N]$. Fixing $\mathbf{x}=\mathbf{C}\mathbf{m}+\mathbf{r}^T\mathbf{\gamma}$ then implies $\mathbf{s}\in\mathbf{R}^T\left(\mathbf{\Gamma}(\mathbf{B}
\mathbf{m}+\mathbf{D}\mathbb{Z}^N)+\mathbf{\gamma}\right)$, an $\mathbf{x}$-dependent shift of offset sublattice $\mathbf{R}^T(\mathbf{\Gamma}\mathbf{D}\mathbb{Z}^N+\mathbf{\gamma})$. In the Galton example, this sublattice was the evens.
Engineering argument:
Prerequisites: (1) a lattice is a discrete set of vectors closed under addition, (2) a lattice in $\mathbb{R}^N$ requires no more than $N$ basis vectors (with integer coefficients), and (3) integer basis vectors yield a sublattice of $\mathbb{Z}^N$.

Lattice $\mathbb{X}= \mathbf{r}^T \mathbf{\Gamma} \mathbb{Z}^N$ has dimension $M$ only and so can be expressed as $\mathbb{X}=\mathbf{C}\mathbb{Z}^M$ for some square matrix $\mathbf{C}$. By definition of $\mathbb{X}$, for every $\mathbf{c}\in\mathbb{X}$ there is a vector $\mathbf{b}\in\mathbb{Z}^N$ with $\mathbf{r}^T\mathbf{\Gamma}\mathbf{b}=\mathbf{c}$, so there is an $N \times M$ integer matrix $\mathbf{B}$ with $\mathbf{C} =
\mathbf{r}^T\mathbf{\Gamma}\mathbf{B}$. Since $\mathbf{r}^T\mathbf{\Gamma}\mathbf{d}=\mathbf{0}$ is just $\mathbf{d}\perp\{\mbox{columns of }\mathbf{\Gamma}^T\mathbf{r}\}$, the set of such $\mathbf{d}\in\mathbb{Z}^N$ is closed under addition, is a sublattice of $\mathbb{Z}^N$, and is just $\mathbf{D}\mathbb{Z}^N$ for some integer matrix $\mathbf{D}$.

Suppose $ \mathbf{w}= \mathbf{A} \left[ \mathbf{\Gamma} \left( \mathbf{B} \mathbf{m} + \mathbf{D}\mathbf{z} \right) +\mathbf{\gamma} \right]$, with $\mathbf{m}\in\mathbb{Z}^M$, $\mathbf{z}\in\mathbb{Z}^N$. Then $ \mathbf{v}= \mathbf{\Gamma} \left( \mathbf{B} \mathbf{m} + \mathbf{D}\mathbf{z} \right) +\mathbf{\gamma}$ and (since $\mathbf{x} = \mathbf{r}^T \mathbf{v} = \mathbf{r}^T \mathbf{\Gamma} \mathbf{B} \mathbf{m} + \mathbf{r}^T \mathbf{\Gamma}\mathbf{D}\mathbf{z} +\mathbf{r}^T\mathbf{\gamma}$, where $\mathbf{r}^T \mathbf{\Gamma} \mathbf{B} = \mathbf{C}$ and $\mathbf{r}^T \mathbf{\Gamma} \mathbf{D} = \mathbf{0}$)

\begin{displaymath}
\begin{array}{rcl}
\mathbf{x} & = & \mathbf{C} \mathbf{m} + \mathbf{r}^T\mathbf{\gamma} \\
\mathbf{s} & = & \mathbf{R}^T \left[ \mathbf{\Gamma} \left( \mathbf{B} \mathbf{m} + \mathbf{D}\mathbf{z} \right) + \mathbf{\gamma} \right].
\end{array}
\end{displaymath} (2)

Each value of parameter $\mathbf{m}$ in (2) gives one allowable value for $\mathbf{x}$ and a class of allowable vectors $\mathbf{s}$, with the within-class choice determined by $\mathbf{z}\in\mathbb{Z}^N$.
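Parametrization (2) can be exercised directly for the two-DAC example ($\mathbf{\Gamma}=\mathbf{I}$, $\mathbf{\gamma}=0$, $\mathbf{r}^T=(1,1)$, $\mathbf{R}^T=(1,-1)$); the choices of $\mathbf{B}$ and $\mathbf{D}$ below are one valid selection, not uniquely determined.

```python
import numpy as np

# Parametrization (2) for the two-DAC example: Gamma = I, gamma = 0,
# r^T = (1, 1), R^T = (1, -1).  B = (1, 0)^T gives C = r^T B = 1, and the
# columns of D span the kernel of r^T on Z^2 (one valid choice, not unique).
B = np.array([[1], [0]])
D = np.array([[1, 0], [-1, 0]])
R_T = np.array([[1, -1]])

m = 3                                     # fixes x = C m = 3
ss = sorted({int((R_T @ (B @ np.array([m]) + D @ np.array([z1, 0])))[0])
             for z1 in range(-3, 4)})
print(ss)                                 # s ranges over an x-dependent coset
assert all((s - m) % 2 == 0 for s in ss)  # here s lies in m + 2Z, as in Fig. 2
```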

Galton's two-step quantization/limiting of scalar $s$ in Fig. 2 exactly assigns to $s$ the nearest element of the subset determined by fixing $x$. But this nearest-neighbor aspect disappears when Galton's system is viewed multidimensionally ($N>2$). Further, taking quantization/limiting as separate steps does not appear to generalize cleanly to multiple dimensions and arbitrary $\mathbf{A}$.

There are other approaches. Section 3.2 shows how rotation of the decision space can sometimes make nearest-neighbor decisions straightforward, as in Schreier's system. In [6], $x$ and $\mathbf{s}$ are quantized jointly in a nearest-neighbor sense with a particular noneuclidean metric.

3 Special Cases of the General System

Galton's case is developed first. A different quantization then leads through incremental specialization to a previous example [1] of the general system and to Schreier's system.

3.1 Galton's System

Galton's extraordinarily hardware-efficient system [3] can be derived from the middle system of Fig. 1. First use

\begin{displaymath}
\mathbf{A} = \left( \begin{array}{rr} 1 & 1 \\ 1 & -1 \end{array} \right)
\end{displaymath} (3)

to implement the analysis with a hardware ``switching block'' controlling two DACs chosen to be unit DAC elements accepting inputs in $\{0,1\}$. This creates an error-shaped DAC accepting inputs in $\{0,1,2\}$. Two such three-level DACs and another switching block can then be similarly combined to form a five-level DAC. Ultimately a recursive tree structure results with $2^M$ unit DAC elements controlled by $2^M-1$ shaped switching signals to create an output with $2^M+1$ levels. This entire tree is equivalent to a larger error-shaped DAC of the same type controlling $2^M$ unit DAC elements with a $2^M\times 2^M$ synthesis matrix $\mathbf{A}$ for which a recursion relation is derived next.

To derive Galton's $\mathbf{A}$ matrix, suppose for convenience that his intermediate signal variables $\{x_{k,r}: r=1,\ldots,2^{M-k}\}$ and switching signals $\{s_{k,r}: r=1,\ldots,2^{M-k}\}$ at a given level $k$ in the tree are arranged into signal and switching vectors $\mathbf{x}_{k} \stackrel{\Delta}{=}( x_{k,1}, x_{k,2}, \ldots)^T$ and $\mathbf{s}_{k}
\stackrel{\Delta}{=}( s_{k,1}, s_{k,2}, \ldots )^T$. Let system input $x=\mathbf{x}_{M}$ and analysis-step output $\mathbf{v}=\mathbf{x}_0$. Signal vector $\mathbf{x}_k$ is then related to the input signal $x$ and the various switching vectors by some matrix transformation

\begin{displaymath}
\left( \begin{array}{c} x \\ \mathbf{s}_M \\ \vdots \\ \mathbf{s}_{k+1} \end{array} \right) =\mathbf{B}_k \mathbf{x}_k.
\end{displaymath} (4)

Trivially, $\mathbf{B}_M=1$. For $k<M$, an expansion of (4) mirroring a bank of switching blocks yields a recursion relation for $\mathbf{B}_k$,

\begin{displaymath}
\mathbf{B}_{k-1} = \left( \begin{array}{c} \mathbf{B}_{k} \left[ \mathbf{I}_{2^{M-k}} \otimes ( 1 \;\;\; 1 ) \right] \\ \mathbf{I}_{2^{M-k}} \otimes ( 1 \;\; -1 ) \end{array} \right),
\end{displaymath}

where Kronecker product $\otimes$ replicates the single-block analysis across the bank.

The simple $M=1$ case yields (3), and its inverse gives Galton's switching-block equations. And for his $M=3$ example,

\begin{displaymath}
\mathbf{A} = \left( \begin{array}{rrrrrrrr}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 \\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & -1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & -1
\end{array} \right).
\end{displaymath}

The rows are orthogonal but the columns are not, so the fact that the inverse is not the transpose is no surprise:

\begin{displaymath}
\mathbf{A}^{-1} = \frac{1}{8} \left( \begin{array}{rrrrrrrr}
1 & 1 & 2 & 0 & 4 & 0 & 0 & 0 \\
1 & 1 & 2 & 0 & -4 & 0 & 0 & 0 \\
1 & 1 & -2 & 0 & 0 & 4 & 0 & 0 \\
1 & 1 & -2 & 0 & 0 & -4 & 0 & 0 \\
1 & -1 & 0 & 2 & 0 & 0 & 4 & 0 \\
1 & -1 & 0 & 2 & 0 & 0 & -4 & 0 \\
1 & -1 & 0 & -2 & 0 & 0 & 0 & 4 \\
1 & -1 & 0 & -2 & 0 & 0 & 0 & -4
\end{array} \right).
\end{displaymath}
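The recursion and the $M=3$ example can be checked numerically; the sketch below (the helper name `galton_A` is made up) builds $\mathbf{B}_0=\mathbf{A}$ level by level and confirms the row-orthogonality and inverse structure noted above.

```python
import numpy as np

# Build B_0 = A by the level-by-level recursion: stack B_k applied to
# pairwise sums (the x outputs of a bank of switching blocks) on top of the
# pairwise differences (the s outputs), starting from B_M = (1).
def galton_A(M):
    B = np.array([[1.0]])
    for k in range(M, 0, -1):
        n = 2 ** (M - k)                      # switching blocks at level k
        sums = np.kron(np.eye(n), [[1.0, 1.0]])
        diffs = np.kron(np.eye(n), [[1.0, -1.0]])
        B = np.vstack([B @ sums, diffs])
    return B

A = galton_A(3)
row_norms = (A * A).sum(axis=1)
assert np.allclose(A @ A.T, np.diag(row_norms))           # rows orthogonal
Ainv = np.linalg.inv(A)
assert np.allclose(Ainv, A.T @ np.diag(1.0 / row_norms))  # inverse structure
print(A.astype(int))
```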

3.2 An Alternative to Quantizer ${\cal Q}_s$

Generating the loop-filter input using $\mathbf{s}=\mathbf{R}^T\mathbf{v}$, as in Fig. 3, brings the analysis step into the loop and allows the quantization to be moved to the DAC inputs. Now quantizer ${\cal
Q}_{\mathbf{v}}$ can directly place the necessary limitations on $\mathbf{v}$ without the complication of an intervening matrix transformation. However, ${\cal
Q}_{\mathbf{v}}$ must now constrain $\mathbf{r}^T\mathbf{v}=\mathbf{r}^T\mathbf{u}$ to ensure that it does not disturb the relation between $x$ and $\mathbf{v}$. Because $x$ can be obtained from $\mathbf{u}$, it is no longer strictly needed by the quantizer directly. Now specialize:

Nearest-neighbor quantization: Choose $\mathbf{v}$ acceptably to the DACs and such that $\Vert\mathbf{v}-\mathbf{u}\Vert$ is minimized under the constraint $\mathbf{r}^T\mathbf{v}=\mathbf{r}^T\mathbf{u}$.
This decision rule is especially simple if one extra condition is met. Suppose that under the $\mathbf{r}^T\mathbf{v}=\mathbf{r}^T\mathbf{u}$ constraint $\Vert\mathbf{v}\Vert$ is independent of the choice of $\mathbf{v}$. In that case $\Vert\mathbf{v}-\mathbf{u}\Vert^2 = \Vert\mathbf{v}\Vert^2
-2\mathbf{u}^T\mathbf{v} +\Vert\mathbf{u}\Vert^2$ is affected only in the cross term by the choice of $\mathbf{v}$, so choosing $\mathbf{v}$ to maximize $\mathbf{u}^T\mathbf{v}$ (under the constraint) minimizes $\Vert\mathbf{v}-\mathbf{u}\Vert$.

Output the sum of the outputs of binary DACs: Use $\mathbf{r}^T=(1,\ldots,1)$ for synthesis with nominally identical DACs with two-valued outputs.
Suppose that constraint $\mathbf{r}^T\mathbf{v}=x$ fixes $\Vert\mathbf{v}\Vert^2$ by requiring $\mathbf{v}$ to contain $n_{\ell}$ and $n_s$ elements equal to its larger and smaller subconverter output values $\beta_{\ell}$ and $\beta_s$ respectively. Aligning these with the $n_{\ell}$ and $n_s$ largest and smallest elements of $\mathbf{u}$ maximizes $\mathbf{u}^T\mathbf{v}$. In this system [1], the nearest-neighbor rule reduces to identifying the $n_{\ell}$ largest elements of $\mathbf{u}$.
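A sketch of this rule for unit DACs with $\beta_s=0$ and $\beta_\ell=1$ follows; the helper name `quantize` and the sample $\mathbf{u}$ are made up for illustration, and a brute-force search confirms the nearest-neighbor property.

```python
import itertools
import numpy as np

# Nearest-neighbor quantizer for unit DACs with outputs in {0, 1} and
# r = (1,...,1): the constraint r^T v = x fixes the number of ones in v,
# and the nearest legal v to u puts those ones at the x largest elements of u.
def quantize(u, x):
    v = np.zeros_like(u)
    if x > 0:
        v[np.argsort(u)[-x:]] = 1.0   # ones at the x largest elements of u
    return v

u = np.array([0.9, -0.2, 0.4, 0.1])   # assumed sample quantizer input
v = quantize(u, 2)

# Brute-force check that no other legal v is closer to u.
best = min((np.array(c) for c in itertools.product([0.0, 1.0], repeat=4)
            if sum(c) == 2),
           key=lambda c: np.linalg.norm(c - u))
assert np.allclose(v, best)
print(v)
```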

3.3 Schreier's System

Figure 3: The two lower systems of Fig. 1 with pre-analysis quantization replaced with post-analysis quantization.

Figure 4: Incorporation of analysis-matrix orthogonality (top), a scalar (spherically symmetric) loop-filter (middle), and simplifications into the bottom system of Fig. 3.

The specialization that will next move us towards Schreier's system also simplified the spectral analysis in [1].

Orthogonal analysis matrix. Given $\mathbf{r}$ as above, choose $\mathbf{R}$ so that $\frac{1}{\sqrt{N}}\mathbf{A}$ is orthogonal.
Analysis-matrix orthogonality leads to [1]

\begin{displaymath}
y(t) = (p \star x)(t) + \frac{1}{N} (\mathbf{r}^T \mathbf{Q} \mathbf{R} \star \mathbf{s})(t) + \mathbf{r}^T \mathbf{e}(t),
\end{displaymath} (5)

where $\mathbf{Q}(t) \stackrel{\Delta}{=}\mathbf{P}(t) - p(t)\mathbf{I}$ with scalar pulse $p(t)$ defined as the entire system's $\mathbf{s}=0$ unit-sample response. Matrix $\mathbf{Q}(t)$ captures the DACs' output-level mismatch, static timing errors, frequency-response differences, and crosstalk. Matrix $\frac{1}{\sqrt{N}}\mathbf{A}$ is orthogonal, so $\mathbf{A}^{-1} =\frac{1}{N} \mathbf{A}^T =\frac{1}{N} \left(\mathbf{r}\:\:\:
\mathbf{R} \right)$.
Error-feedback loop. Use an inputless error-feedback sigma-delta loop to generate $\mathbf{s}$.
This specializes the lower diagram in Fig. 3 to the top diagram of Fig. 4. Unpredictable hardware impulse-response error matrix $\mathbf{Q}(t)$ determines $\mathbf{r}^T \mathbf{Q}(t) \mathbf{R}$ and hence the vector direction(s) along which the spectral content of $\mathbf{s}(t)$ in (5) must be suppressed, so the system shown uses...
spherically symmetric loop filtering. Let the loop-filter transfer matrix be a scalar transfer function times an identity matrix.
This allows the loop filter to commute with its neighbor, resulting in the middle system of Fig. 4. Some computational savings result from precomputing $\mathbf{R}\mathbf{R}^T$, but since $\mathbf{R}$ is not square, the dimensionality of the loop filter also increases. Better, eliminate $\mathbf{R}$ entirely by using $N\mathbf{I}=\mathbf{A}^T\mathbf{A}=\mathbf{r}\mathbf{r}^T+\mathbf{R}\mathbf{R}^T$ in the analysis block in the form $\mathbf{R}\mathbf{R}^T=N\mathbf{I}-\mathbf{r}\mathbf{r}^T$, yielding the far more efficient bottom system of Fig. 4. All orthogonal analysis matrices with the desired first column lead to the same system!
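The identity is easy to confirm numerically; a $4\times4$ Hadamard matrix, one convenient choice of $\mathbf{A}$ with the required first row (assumed here purely for illustration), serves as the analysis matrix.

```python
import numpy as np

# Check the identity used above: if (1/sqrt(N)) A is orthogonal with
# A^T = (r  R), then R R^T = N I - r r^T, so the loop can avoid forming R.
A = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]], dtype=float)   # 4x4 Hadamard example
N = A.shape[0]
assert np.allclose(A.T @ A, N * np.eye(N))    # (1/sqrt(N)) A is orthogonal

r = A.T[:, :1]                                # first column of A^T
R = A.T[:, 1:]                                # remaining columns
assert np.allclose(R @ R.T, N * np.eye(N) - r @ r.T)
print("identity verified")
```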

Structurally, the loop now resembles that of the bottom system of Fig. 1, except that the upper path has been added, replacing the component of the quantizer input along unit vector $\frac{1}{\sqrt{N}}\mathbf{r}$ by input $x/\sqrt{N}$. The output of the loop filter along that direction is now completely irrelevant, lowering the dimensionality of the loop.

Figure 5: The modifications required of the bottom system of Fig. 4 to produce Schreier's system.

In fact, the only role ever played by the $\frac{1}{\sqrt{N}}\mathbf{r}$ component of the quantizer input was to provide the quantizer with $\mathbf{r}^T\mathbf{u}=x$, to which it can fix the number of high output values of the DACs to enforce the quantization constraint. But $x$ could just as well be provided to the quantization algorithm as side information. The $\frac{1}{\sqrt{N}}\mathbf{r}$ component of the quantizer input would then be irrelevant, as it can only shift every component of $\mathbf{u}$ equally and so cannot affect the rank ordering used in the quantization. This side-information approach is taken by Schreier. Figure 5 is the bottom system of Fig. 4 redrawn with annotations showing the changes that produce Schreier's system exactly. Side information is provided to the quantizer, and other changes affect the signals in the loop only along the irrelevant $\frac{1}{\sqrt{N}}\mathbf{r}$ direction. These convenient changes shift the $\frac{1}{\sqrt{N}}\mathbf{r}$ common-mode component of various signals, particularly inside the loop filter, but produce input-output behavior identical to the bottom Fig. 4 system. This behavior can be analyzed using the system of Fig. 1 and the appropriate matrix $\mathbf{A}$.

4 Summary

A single system based on vector signals, matrix transformations, and DAC elements with arbitrary impulse responses has been shown here to be mathematically equivalent in various special cases to three published error-shaping DAC architectures [1,2,3]. The fact that these architectures are realizations of this unified system suggests that the latter could yield additional distinct realizations, possibly improved ones. Meanwhile, it provides a common framework for simulation and analysis.


References

[1] J. O. Coleman and D. P. Scholnik, ``Vector switching generalizes D/A noise shaping,'' in Proc. 1999 Midwest Symp. on Circuits and Systems (MWSCAS '99), Las Cruces, NM, Aug. 1999.

[2] R. Schreier and B. Zhang, ``Noise-shaped multibit D/A convertor employing unit elements,'' Electronics Letters, vol. 31, no. 20, pp. 1712-1713, Sept. 1995.

[3] I. Galton, ``Spectral shaping of circuit errors in digital-to-analog converters,'' IEEE Trans. Circuits and Systems II, vol. 44, no. 10, pp. 808-817, Oct. 1997.

[4] L. Hernández, ``A model of mismatch-shaping D/A conversion for linearized DAC architectures,'' IEEE Trans. Circuits and Systems I, vol. 45, no. 10, pp. 453-459, Oct. 1998.

[5] D. P. Scholnik and J. O. Coleman, ``Vector delta-sigma modulation with integral shaping of hardware-mismatch errors,'' in Proc. 2000 IEEE Int'l Symp. on Circuits and Systems (ISCAS 2000), Geneva, Switzerland, May 2000.

[6] D. P. Scholnik and J. O. Coleman, ``Joint shaping of quantization and hardware-mismatch errors in a multibit delta-sigma DAC,'' in Proc. 2000 Midwest Symp. on Circuits and Systems (MWSCAS 2000), Lansing, MI, Aug. 2000.


This work was supported by the AMRFC program (ONR 31) of the Office of Naval Research.