2000 IEEE Midwest Symposium on Circuits and Systems,
Lansing, Michigan, USA, August, 2000.
Jeffrey O. Coleman
Naval Research Laboratory
Multibit conversion (both D/A and A/D) requires a DAC with exceptional in-band precision. Several recent dynamic-element-matching DACs achieve this by dynamically enabling subsets of a bank of simple one-bit subconverters so that their summed outputs exhibit suppressed hardware-mismatch error in the desired band. The recently presented error-shaping DAC architecture of Fig. 1 (top) is shown here to generalize those of both Schreier and Galton. The latter then become efficiently realized special cases of a general form convenient for analysis and simulation.
The general system, at the top of Fig. 1, is simple. If the allowable set of DAC input vectors is , the set of permissible values is precisely , where for convenience denotes the first row of . If and , for example, results, creating an level DAC system from one-bit DACs.
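As a concrete sketch of this level counting, assume a bank of one-bit unit-element DACs whose individual outputs are 0 and 1 (an assumed convention; the paper's subconverter output values are general). The allowable input vectors are then the binary on/off patterns, and the summed output takes one more level than the number of subconverters:

```python
from itertools import product

# Sketch: N one-bit subconverters with outputs in {0, 1} (an assumed
# unit-element convention; the actual output values are general).
N = 4

# Allowable DAC input vectors: all 2**N on/off patterns.
allowable = list(product([0, 1], repeat=N))

# The system output is the sum of the subconverter outputs, so the
# set of permissible values has N + 1 levels: 0, 1, ..., N.
levels = sorted({sum(v) for v in allowable})
print(levels)  # [0, 1, 2, 3, 4]
```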
The potential to suppress hardware errors is inherent in the structure. Model the outputs of the internal DACs as vector , where nominally diagonal pulse matrix and error vector represent the dynamic (input-related) and static characteristics of the DACs respectively and where the mixed-time convolution of and DAC input vector is given by . Using for convenience and suppressing time dependence for brevity, the system output is , comprising a term linear in , a term linear in and an independent term . We seek in the spectral band of interest to have very little power in scalar and vector , in the latter case to make irrelevant any deviations of matrix pulse from its nominal characteristic. This gives the system its robustness to hardware errors.
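A minimal numerical sketch of this three-term decomposition, using a memoryless (static) stand-in for the matrix of DAC impulse responses; the names `P_nominal`, `dP`, and `e`, and the mismatch magnitudes, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4

# Static, memoryless sketch of the error model (the paper convolves a
# matrix of impulse responses with the input; here P is a single gain
# matrix for illustration).
P_nominal = np.eye(N)                                 # ideal unit elements
dP = 0.01 * rng.standard_normal((N, N)) * np.eye(N)   # diagonal gain mismatch
e = 0.005 * rng.standard_normal(N)                    # static offset errors

x = np.array([1, 0, 1, 1])           # one switching vector

v = (P_nominal + dP) @ x + e         # subconverter outputs
y = v.sum()                          # system output

# Decomposition: ideal term + mismatch term (linear in x) + offset term.
ideal = (P_nominal @ x).sum()
mismatch = (dP @ x).sum()
offset = e.sum()
assert np.isclose(y, ideal + mismatch + offset)
```

Spectrally shaping the switching vector suppresses the in-band power of the mismatch term without requiring the hardware errors themselves to be small.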
Generating switching-vector sequence to have a shaped spectrum is complicated by requiring at each that it be chosen from an allowable set, dependent on input , that will keep in the input range of the DACs. To do this, the middle and bottom systems of Fig. 1 repeat the top system in streamlined notation with one new aspect:
Generate with an inputless -style loop: Derive the quantizer input by filtering either the quantizer output or the quantization error. This loop and its topology already specialize the system. With either loop shown, loop-filter LF largely determines the spectrum of the switching vector, and the quantizer must quantize the loop-filter output such that . The next section shows that under reasonable assumptions must lie in a subset of a particular lattice with an -dependent offset. The specializations leading to the Galton and Schreier special cases are explored subsequently.
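A scalar sketch of an inputless error-feedback loop of this kind (first order here for brevity; the paper's loop is vector-valued with a general loop filter LF, and the dither is an added assumption to avoid a degenerate all-zero limit cycle):

```python
import numpy as np

# Sketch: inputless first-order error-feedback loop, scalar for clarity.
rng = np.random.default_rng(1)
n_samples = 4096
e_prev = 0.0
out = []
for _ in range(n_samples):
    w = -e_prev                  # loop filter: feed back negated error
    d = rng.uniform(-0.5, 0.5)   # dither breaks limit cycles (assumption)
    q = round(w + d)             # integer quantizer
    e_prev = q - w               # quantization (plus dither) error
    out.append(q)

# The output equals the first difference of the error sequence, so its
# running sum telescopes and stays bounded: first-order high-pass shaping.
s = np.cumsum(out)
print(np.abs(s).max())  # bounded by 1
```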
There are other general mismatch-shaping architectures. The system of Hernandez  was unique in modeling circuit constraints (for example, KCL) on hardware-mismatch errors by confining the latter to a subspace. Whether or how this idea might be incorporated into the present system, which is more general in other respects, remains unexplored. Scholnik and Coleman [5,6] incorporate the system of  into a mismatch-shaped sigma-delta D/A system, thereby extending the present generalization.
The term ``the DAC'' below refers to the entire system, and ``each DAC'' (``DACs'') refers to one (all) subconverter(s). ``DAC input'' refers to the subconverters, with ``system input'' used for .
A simple example, based on Galton's system, will clarify the goal of the derivation to follow. Suppose a Galton switching block , an analysis step with
Begin by removing the limiting step, that is, extend the set of allowable DAC input vectors to an offset lattice. For unit DAC elements, the extension is from to . This is analogous to requiring the pair in Fig. 2 to be on the lattice that is the obvious extension of the nine points shown. More generally, let us require that , where is an arbitrary offset vector and where nonsingular lattice-basis matrix is technically arbitrary, though in practice it appears to be always diagonal.
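A small sketch of the offset-lattice membership test implied here, with an illustrative diagonal basis matrix and offset vector (both chosen arbitrarily for the example):

```python
import numpy as np

# Sketch of the offset-lattice membership test x in c + M * Z^n, with an
# illustrative diagonal basis M and offset c (both assumptions here).
M = np.diag([2.0, 2.0, 2.0])     # lattice basis (diagonal in practice)
c = np.array([1.0, 0.0, 1.0])    # offset vector

def on_lattice(x, M=M, c=c, tol=1e-9):
    """True iff M^{-1}(x - c) is an integer vector."""
    z = np.linalg.solve(M, x - c)
    return np.allclose(z, np.round(z), atol=tol)

print(on_lattice(np.array([3.0, 4.0, 1.0])))   # True:  z = (1, 2, 0)
print(on_lattice(np.array([2.0, 4.0, 1.0])))   # False: off-lattice
```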
First determine the set of allowable values by splitting off the first row of with . This yields
Fixing in Fig. 2 restricts to the evens or odds. To what -dependent set is vector restricted in general? Since (1) also applies for the case  of vector and matrix , here (with ). The mathematical (engineering) version of these parallel arguments is simpler (less simple) but uses less-simple (simpler) concepts.
Lattice has dimension only and so can be expressed as for some square matrix . By definition of , for every there is a vector with , so there is an integer matrix with . Since is just , the set of such is closed under addition, is a sublattice of , and is just for some integer matrix .
Galton's two-step quantization/limiting of scalar in Fig. 2 exactly assigns to the nearest element of the subset determined by fixing . But this nearest-neighbor aspect disappears when Galton's system is viewed multidimensionally (). Further, taking quantization/limiting as separate steps does not appear to generalize cleanly to multiple dimensions and arbitrary .
There are other approaches. Section 3.2 shows how rotation of the decision space can sometimes make nearest-neighbor decisions straightforward, as in Schreier's system. In , and are quantized jointly in a nearest-neighbor sense with a particular noneuclidean metric.
Galton's case is developed first. A different quantization then leads through incremental specialization to a previous example  of the general system and to Schreier's system.
Galton's extraordinarily hardware-efficient system  can be derived from the middle system of Fig. 1. First use

To derive Galton's matrix, suppose for convenience that his intermediate signal variables at a given level in the tree are arranged into signal and switching vectors . Let system input and analysis-step output

Signal vector is then related to the input signal and the various switching vectors by some matrix transformation

The simple case yields (3), and its inverse gives Galton's switching-block equations. And for his example,
Generating the loop-filter input using , as in Fig. 3, brings the analysis step into the loop and allows the quantization to be moved to the DAC inputs. Now quantizer can directly place the necessary limitations on without the complication of an intervening matrix transformation. However, must now constrain to ensure that it does not disturb the relation between and . Because can be obtained from , it is no longer strictly needed by the quantizer directly. Now specialize:
Nearest-neighbor quantization: Choose acceptably to the DACs and such that is minimized under the constraint . This decision rule is especially simple if one extra condition is met. Suppose that under the constraint is independent of the choice of . In that case is affected only in the cross term by the choice of , so choosing to maximize (under the constraint) minimizes .
Output the sum of the outputs of binary DACs: Use for synthesis with nominally identical DACs with two-valued outputs. Suppose that constraint fixes by requiring to contain and elements equal to its larger and smaller subconverter output values and respectively. Aligning these with the and largest and smallest elements of maximizes . In this system , the nearest-neighbor rule reduces to identifying the largest elements of .
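Under these assumptions the quantizer reduces to a top-k selection, which can be sketched as follows (unit-element outputs 0 and 1 assumed, as above; the function name is illustrative):

```python
import numpy as np

# Sketch: with unit-element DACs whose outputs are 0 and 1 (an assumed
# convention), the constraint fixes how many elements are "high", and the
# nearest-neighbor rule picks those with the largest loop-filter outputs.
def quantize_top_k(u, k):
    """Return a 0/1 switching vector with exactly k ones, placed at the
    k largest entries of the quantizer input u."""
    x = np.zeros(len(u), dtype=int)
    x[np.argsort(u)[-k:]] = 1        # indices of the k largest entries
    return x

u = np.array([0.2, -1.1, 0.9, 0.4])
print(quantize_top_k(u, 2))          # selects elements 2 and 3 -> [0 0 1 1]
```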
The specialization that will next move us towards Schreier's system also simplified the spectral analysis in .
Orthogonal analysis matrix. Given as above, choose so that is orthogonal. Analysis-matrix orthogonality leads to
Error-feedback loop. Use an inputless error-feedback sigma-delta loop to generate . This specializes the lower diagram in Fig. 3 to the top diagram of Fig. 4. Unpredictable hardware impulse-response error matrix determines and hence the vector direction(s) along which the spectral content of in (5) must be suppressed, so the system shown uses...
spherically symmetric loop filtering. Let the loop-filter transfer matrix be a scalar transfer function times an identity matrix. This allows the loop filter to commute with its neighbor, resulting in the middle system of Fig. 4. Some computational savings result from precomputing , but since is not square, the dimensionality of the loop filter also increases. Better, eliminate entirely by using in the analysis block in the form of , yielding the far more efficient bottom system of Fig. 4. All orthogonal analysis matrices with the desired first column lead to the same system!
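The commuting property used here can be checked numerically: a loop filter that is a scalar transfer function times the identity acts identically on every component and therefore commutes with any constant matrix transformation of the vector signal. A sketch with a one-pole filter and an arbitrary stand-in matrix (both assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))      # arbitrary constant matrix (stand-in
                                     # for the analysis transformation)
x = rng.standard_normal((100, 3))    # vector signal, one row per sample

def filt(v, a=0.5):
    """Elementwise first-order IIR: y[n] = v[n] + a*y[n-1], per component.
    A scalar-times-identity transfer matrix acts exactly this way."""
    y = np.zeros_like(v)
    for n in range(len(v)):
        y[n] = v[n] + (a * y[n - 1] if n else 0)
    return y

# Filtering then transforming equals transforming then filtering.
lhs = filt(x) @ A.T
rhs = filt(x @ A.T)
print(np.allclose(lhs, rhs))  # True
```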
Structurally, the loop now resembles that of the bottom system of Fig. 1, except that the upper path has been added, replacing the component of the quantizer input along unit vector by input . The output of the loop filter along that direction is now completely irrelevant, lowering the dimensionality of the loop.
In fact, the only relevance the component of the quantizer input ever had was in providing the latter with , to which it can fix the number of high output values of the DACs to enforce the quantization constraint. But could just as well be provided to the quantization algorithm as side information. The component of the quantizer input would then be irrelevant, as it can only shift every component of equally and so cannot affect the rank ordering used in the quantization. This side-information approach is taken by Schreier. Figure 5 is the bottom system of Fig. 4 redrawn with annotations showing the changes that produce Schreier's system exactly. Side information is provided to the quantizer, and other changes affect the signals in the loop only along the irrelevant direction. These convenient changes shift the common-mode component of various signals, particularly inside the loop filter, but produce input-output behavior identical to that of the bottom system of Fig. 4. This behavior can be analyzed using the system of Fig. 1 and the appropriate matrix .
A single system based on vector signals, matrix transformations, and DAC elements with arbitrary impulse responses has been shown here to be mathematically equivalent in various special cases to three published error-shaping DAC architectures [1,2,3]. The fact that these architectures are realizations of this unified system suggests that the latter could result in even more distinct realizations, possibly improved ones. Meanwhile, it provides a common framework for simulation and analysis.