

EUSIPCO 2000, The European Signal Processing Conference,
Tampere, Finland, September 5-8, 2000.
(preprint)





The Power Spectrum of a Generalization of Full-Response CPM

Jeffrey O. Coleman1
jeffc@alum.mit.edu

Naval Research Laboratory
Radar Division
Washington, DC

Abstract:

The power spectrum is derived for a generalization of full-response continuous phase modulation (CPM). The derivation is simpler than those previously published for CPM, the result is cleaner, and the more-general signal class may enable improvement on CPM at negligible cost.

1 Introduction

Data signal $\sum_k q_k(t-kT)$ has a simply expressed power spectrum when uncorrelated data are used to choose waveforms $\{q_k(t)\}$ from a finite waveform alphabet. This result is well known for the common case $q_k(t)=a_k \, p(t)$. Presented in the preliminaries below is the extension from a finite waveform alphabet to an arbitrary finite-dimensional waveform alphabet. For either form, the signal is constructed by shifting consecutive waveform symbols to place them sequentially in time. Figure 1 shows such a symbol with two time-axis ``attachment points.'' In effect, the construction places the first attachment point of a given symbol on the second attachment point of the previous symbol, continues the process symbol by symbol, and then takes the sum.
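The shift-and-sum construction can be sketched numerically (an illustrative example, not the paper's code; the two-waveform alphabet and grid spacing are assumptions):

```python
import numpy as np

# Hypothetical example: build s(t) = sum_k q_k(t - kT) on a grid with fs
# samples per symbol interval, each q_k drawn i.i.d. from a two-waveform
# alphabet supported on one interval [0, T).
rng = np.random.default_rng(0)
fs = 8                                    # samples per interval
t = np.arange(fs) / fs                    # one interval, normalized T = 1
alphabet = [np.sin(np.pi * t), np.sin(2 * np.pi * t)]

num_symbols = 5
choices = rng.integers(0, len(alphabet), size=num_symbols)
s = np.zeros(num_symbols * fs)
for k, c in enumerate(choices):
    s[k * fs:(k + 1) * fs] += alphabet[c]   # place q_k starting at kT, sum
```

Because the alphabet here has support limited to one interval, each length-$T$ slice of the result is exactly the chosen symbol waveform.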

Now suppose an arbitrary complex waveform is cut into segments of duration $T$ as in fig. 2(left). Each segment can be shifted to a standard time, rotated in the complex plane until the end-point angles are symmetric about the real axis, and amplitude scaled until the end points have reciprocal magnitudes. Each transformed segment trajectory takes the form shown in fig. 2(right). If this decomposition is reversed with segments restricted to a finite waveform alphabet, a generalization of the fig. 1 construction results. The attachment points remain temporally separated by $T$ but now also have reciprocally related complex amplitudes. To place one attachment point over another, the second waveform must be both time-shifted and scaled in complex amplitude.
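The rotate-and-scale normalization admits a compact form: multiplying a segment with end points $z_0$ and $z_1$ by $c = 1/\sqrt{z_0 z_1}$ makes the new end points' product equal to 1, i.e., reciprocal magnitudes with angles symmetric about the real axis. A minimal sketch (illustrative values, not the paper's code):

```python
import numpy as np

# Normalize a complex segment trajectory so its end points form a
# reciprocal pair: scale the whole segment by c = 1/sqrt(z0*z1), so that
# (c*z0)*(c*z1) = 1 -- reciprocal magnitudes, angles symmetric about the
# real axis (up to the branch of the square root).
rng = np.random.default_rng(4)
segment = rng.standard_normal(16) + 1j * rng.standard_normal(16)
z0, z1 = segment[0], segment[-1]
c = 1 / np.sqrt(z0 * z1)        # complex rotate-and-scale factor
normalized = c * segment
```

The factor $c$ is the reciprocal of a geometric mean of the end points, foreshadowing the geometric-mean encoding noted in the summary.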

Figure 1: Waveform with time-axis attachment points.
\begin{figure}\centering
\input{figfigs/time.tex}
\end{figure}

Figure 2: Complex-plane trajectories of (left) a time-segmented complex waveform and (right) one segment shifted and normalized to reciprocal attachment points.
\begin{figure}\centering
\input{figfigs/segment.tex}
\end{figure}


Table 1: Modulation index $h$ and frequency pulse $g(t)$ determine the $M$-ary full-response CPM waveform alphabet.
full-response: ${\rm support}\{q_k(t)\}=\mbox{}$ an interval of length $T$
phase modulation: $\vert q_k(t)\vert = 1$ (constant envelope)
end points: $\Delta_k=e^{j{\theta_k}/2}$ (data determined)
net phase change: ${\theta_k} \in \pi h \times\{\ldots,-3,-1,1,3,\ldots\}$ ($M$ choices)
instantaneous frequency: $d\angle q_k({t})/dt =\theta_k\, 2g(t)$ (same shape, all symbols)
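As a concrete sketch of the Table 1 alphabet (illustrative code; the rectangular frequency pulse $g(t)=1/(2T)$, whose integral is $1/2$, is an assumption made here so the net phase change over one interval is exactly $\theta_k$):

```python
import numpy as np

# One M-ary full-response CPM segment per Table 1: constant envelope,
# instantaneous frequency theta_k * 2 g(t), end points e^{-j theta/2}
# and e^{+j theta/2}. The rectangular g is for illustration only.
T, h, M = 1.0, 0.5, 4
fs = 64
g = np.full(fs, 1 / (2 * T))                    # frequency pulse, integral 1/2
theta = np.pi * h * np.arange(-(M - 1), M, 2)   # pi*h*{... -3 -1 1 3 ...}

def segment(th):
    # integrate d angle(q)/dt = th * 2 g(t) for the phase, then center it
    # so the end-point angles are symmetric about the real axis
    phase = np.cumsum(th * 2 * g) * (T / fs)
    return np.exp(1j * (phase - th / 2))

q = segment(theta[-1])
```

The final sample sits at $\Delta_k = e^{j\theta_k/2}$, the data-determined end point of Table 1.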


The next section develops the power spectrum of such a signal when waveform symbols (segments in the above) are chosen in i.i.d. fashion from a finite waveform alphabet. The derivation does not actually require waveform segments to be of finite length nor to go through their attachment points. So the commonly known special case of full-response continuous phase modulation (CPM), whose defining characteristics are listed in Table 1, is actually rather restrictive. Indeed, of these characteristics, only finite support is required by the typical implementation structure used to realize a coherent CPM system. This suggests one might optimize signal characteristics within the larger class.

CPM's constant envelope is important in radar and communication systems requiring maximum efficiency of the transmitter power amplifier. A preliminary, CPM-specific version of the present derivation was presented in [1]. That one and this one both are more elegant and less restrictive than the most-similar previous result in the literature [2], which was limited to integer $Mh$.

2 Preliminaries

Begin with a mixed continuous- and discrete-time matrix-vector convolution that generalizes PAM signaling. Notations $\vect{v}(n)$ and $\vect{v}_n$ are equivalent.
\begin{defn}
Function $(\mat{w}\convmix\vect{v})$, a convolution of discrete-time
vector signal $\vect{v}(n)$\ with continuous-time matrix system-response
kernel $\mat{w}(t)$, is defined by
\end{defn}

\begin{eqnarray*}
(\mat{w}\convmix\vect{v})(t) & \defeq & T \sum_k
\mat{w}(t-kT)\,\vect{v}(k)\\
(\vect{v}\convmix\mat{w})(t) & \defeq & T \sum_k \vect{v}(k)\,\mat{w}(t-kT).
\end{eqnarray*}



The usual properties (e.g. associativity) hold by analogy to continuous-time convolution of $\mat{w}(t)$ with an impulse train with areas $\{T\vect{v}(n)\}$.
\begin{defn}
The {\em adjoint} $\mat{M}^\prime$\ of (possibly matrix- or
vector-valued) $\mat{M}$\ is given by
$\mat{M}^\prime(t)=\mat{M}^\herm(-t)$\ or
$\mat{M}^\prime(n)=\mat{M}^\herm(-n)$.
\end{defn}
The Hermitian (conjugate) transpose is denoted by $(\cdot)^\herm$. (If $h(n)$ has $z$ transform $H(z)$, then the $z$ transform of $h^\prime(n)$ is the paraconjugate of $H(z)$ from filter-bank theory.) The appendix develops related second-order statistics leading to:
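The adjoint's transform-domain behavior, the discrete counterpart of the paraconjugate, is easy to check numerically (a sketch on a circular grid, not from the paper):

```python
import numpy as np

# Adjoint of a scalar sequence: h'[n] = conj(h[-n]). On an N-point
# circular grid this is index reversal mod N; its DFT is then the
# elementwise conjugate of the DFT of h.
rng = np.random.default_rng(5)
N = 32
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)
h_adj = np.conj(h[(-np.arange(N)) % N])   # h'[n] = conj(h[-n mod N])
```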
\begin{prop}
For random process $\vect{v}$\ and impulse response $\mat{w}$,
convolution $\vect{u}=\mat{w}\convmix\vect{v}$\ implies
$\mat{R}_{\vect{u}} = \mat{w} \convcon \mat{R}_{\vect{v}} \convmix
\mat{w}^\prime$.
\end{prop}
Any combination of discrete- or continuous-time $\mat{w}$ and $\vect{v}$ is allowed (the convolution may be continuous, discrete, or mixed), as is scalar or vector $\vect{v}$ and scalar, vector, or matrix $\mat{w}$. When the convolution is mixed, cyclostationarity is averaged out. (See the Appendix.) Vector and matrix dimensions must be compatible. The autocorrelations are scalar- or matrix-valued according to the dimensions of $\mat{w}$ and $\vect{v}$.

In 1974 Prabhu and Rowe [3] used a mixed-convolution version to derive the spectra of communication signals, but no connection was mentioned to the familiar fact that it generalized: An LTI filter operating on a random process scales its power spectrum by the squared magnitude of the transfer function. Indeed, Fourier transformation yields
\begin{prop}
The convolution $\vect{u}(t)=(\mat{w}\convmix\vect{v})(t)$\ of process
$\vect{v}$\ with kernel $\mat{w}$\ has power spectrum
\begin{displaymath}
\mat{S}_{\vect{u}}(f) = \mat{W}(f)\, T\mat{S}_{\vect{v}}(Tf)\, \mat{W}^\herm(f).
\end{displaymath}\end{prop}
The change of variable from normalized to unnormalized frequency in $\mat{S}_{\vect{v}}(\cdot)$ gives this mixed-convolution version a slightly different look from the others. The scalar version applies to the introduction's $T\sum_k a_k q(t-kT)$ form. The extension to $\sum_k q_k(t-kT)$ with $q_k(t)$ drawn from a finite-dimensional space is covered by the vector version of the proposition with $q_k(t)=T\mat{w}(t)\vect{v}_k$, with row vector $T\mat{w}(t)$ containing the waveform basis and the products with coefficient vectors $\{\vect{v}_k\}$ forming the symbol waveforms.
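The mixed-convolution transform pair underlying the proposition can be checked exactly on a periodic grid (a sketch with hypothetical values, not the paper's code): with fs samples per interval, $(\mat{w}\convmix\vect{v})$ becomes a circular shift-and-sum whose DFT is $T\,W[m]\,V[m \bmod N]$, the discrete image of $\mat{W}(f)\,T\mat{V}(fT)$ with its periodic second factor.

```python
import numpy as np

# Discrete check: u(t) = T sum_k w(t - kT) v(k) on a circular grid of
# N symbols, fs samples per symbol, transforms to
# U[m] = T * W[m] * V[m mod N]  (V periodic, mirroring V(fT)).
rng = np.random.default_rng(1)
T, fs, N = 0.5, 8, 16
L = N * fs
w = rng.standard_normal(L)          # kernel sampled on the full grid
v = rng.standard_normal(N)          # discrete symbol sequence

u = np.zeros(L)
for k in range(N):
    u += T * np.roll(w, k * fs) * v[k]   # periodic shift-and-sum

U, W, V = np.fft.fft(u), np.fft.fft(w), np.fft.fft(v)
U_pred = T * W * V[np.arange(L) % N]
```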

Figure 3: Form each segment as a linear combination of basis vectors.
\begin{figure}\centering
\input{figfigs/segnew.tex}
\end{figure}

3 The Derivation

The derivation now specializes to attachment-point sequencing in the complex plane. Figure 3 repeats fig. 2(right) but with waveform segments and associated attachment points constructed from row vectors $T\vect{q}^\prime(t)$ and $\vect{\Delta}^T$ of basis functions and (second) attachment points respectively. Real i.i.d. coefficient vectors $\vect{v}_k$ are drawn from an arbitrary finite or infinite alphabet and serve as transmitted data:
\begin{displaymath}
s(t)=\sum_k T\vect{q}^\prime(t-kT)\vect{v}_k \alpha_k.
\end{displaymath} (1)

The constructed sequence $\{\alpha_k\}$ connects attachment points in the complex plane. Using $(\unit\alpha)(n)\defeq\unit_n\alpha_n$, this signal can be written $s=\vect{q}^\prime\convmix (\unit \alpha)$, so its power spectrum is
\begin{displaymath}
\mat{S}_s(f) = \vect{Q}^\herm(f)\, T\mat{S}_{(\unit\alpha)}(Tf)\,
\vect{Q}(f).
\end{displaymath} (2)

Deriving matrix $\mat{S}_{(\unit\alpha)}(Tf)$ or $\mat{R}_{(\unit\alpha)}(n)$ is the challenge.

The key is the attachment-point placement recursion. Suppose the scaling of segment $k-1$ has mapped point ``1'' in fig. 3 to $\alpha_{k-1}$. The second attachment point of segment $k-1$ is then at $\alpha_{k-1}(\vect{\Delta}^T\unit_{k-1})$, and the ``1'' of segment $k$ must map to $\alpha_{k-1}(\vect{\Delta}^T\unit_{k-1})(\vect{\Delta}^T\vect{v}_k)$, yielding

\begin{displaymath}
\alpha_{k} = \vect{v}_k^T\vect{\Delta} \vect{\Delta}^T\unit_{k-1} \, \alpha_{k-1}.
\end{displaymath}
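In the CPM special case, where the data vectors are selector (standard-basis) vectors and $\vect{\Delta}$ lists the per-choice end points $e^{j\theta/2}$, this recursion reduces to $\alpha_k = e^{j(\theta_k+\theta_{k-1})/2}\,\alpha_{k-1}$, an encoding of the geometric mean of the end points. A small simulation (illustrative, not the paper's code) confirms the points stay on the unit circle:

```python
import numpy as np

# Attachment-point recursion, CPM case:
# alpha_k = (v_k^T Delta)(Delta^T v_{k-1}) alpha_{k-1}, with v_k a random
# selector vector. All alpha_k stay on the unit circle since |Delta_i| = 1.
rng = np.random.default_rng(2)
h, M = 0.5, 4
theta = np.pi * h * np.arange(-(M - 1), M, 2)
Delta = np.exp(1j * theta / 2)

choices = rng.integers(0, M, size=50)
alpha = [1.0 + 0j]
for k in range(1, len(choices)):
    alpha.append(Delta[choices[k]] * Delta[choices[k - 1]] * alpha[-1])
alpha = np.asarray(alpha)
```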

For the $n>0$ case, $\mat{R}_{(\unit\alpha)}(n) =
\expect{\vect{v}_k \alpha_k \alpha_{k-n}^\ast \unit^T_{k-n}}$ or

\begin{displaymath}
\mat{R}_{(\unit\alpha)}(n) =
\bigexp{
\vect{v}_k
\frac{\alpha_k}{\alpha_{k-n}}\vert\alpha_{k-n}\vert^2 \unit^T_{k-n}
}.
\end{displaymath}

Using the recursion relation $n$ times to obtain the ratio,

\begin{eqnarray*}
&&\mat{R}_{(\unit\alpha)}(n) =\mbox{}\\
&&\bigexp{
\vect{v}_k \vect{v}_k^T\vect{\Delta}
\left(
\prod_{m=1}^{n-1}
\vect{\Delta}^T\unit_{k-m}\unit^T_{k-m}\vect{\Delta}
\right)
\vect{\Delta}^T
\unit_{k-n}
\unit^T_{k-n}
\vert\alpha_{k-n}\vert^2
}.
\end{eqnarray*}



Conditioned on $\vert\alpha_{k-n}\vert^2$, the products $\unit_{k-m}
\unit_{k-m}^T$ are independent for $m\leq n$ with only the $m=n$ product actually dependent on $\vert\alpha_{k-n}\vert^2$. So with $\mat{P}\defeq
\expect{\vect{v}_k \vect{v}_k^T}$,

\begin{displaymath}
\mat{R}_{(\unit\alpha)}(n) =
\left(\mat{P}\vect{\Delta}\vect{\Delta}^T\right)^{n}
\bigexp{
\vert\alpha_{k-n}\vert^2\,
\bigexp{\unit_{k-n} \unit^T_{k-n} \given \vert\alpha_{k-n}\vert^2}
}.
\end{displaymath}

The spectrum concept requires a wide-sense stationary signal, implying for independent data $\{\vect{v}_k\}$ that $\vert\alpha_k\vert^2$ is constant and therefore that the inner expectation's conditioning can be removed.

\begin{eqnarray*}
\mat{R}_{(\unit\alpha)}(n)
& = &
\left(\mat{P}\vect{\Delta}\vect{\Delta}^T\right)^{n}
\mat{P}\, \vert\alpha_k\vert^2\\
& = &
\mat{P}\vect{\Delta}
\left(\vect{\Delta}^T\mat{P}\vect{\Delta}
\right)^{n-1}
\vect{\Delta}^T\mat{P}\, \vert\alpha_k\vert^2.
\end{eqnarray*}



Attachment points off the unit circle clearly serve no immediate purpose, but their inclusion may aid future extension.

Using $\mat{R}_{(\unit\alpha)}(0) = \expect{\vect{v}_k \alpha_k
\alpha_k^\ast \unit^T_k} = \mat{P} \vert\alpha_k\vert^2$ then, along with the automatic $\mat{R}_{(\unit\alpha)}=\mat{R}^\prime_{(\unit\alpha)}$ symmetry,

\begin{displaymath}
\frac{1}{\vert\alpha_k\vert^2}
\mat{R}_{(\unit\alpha)}(n) =
2\,\mbox{H.S.}\!\left\{
\frac{1}{2}\,\mat{P}\,\delta_n
+
\mat{P}
\vect{\Delta}
\vect{\Delta}^T
\mat{P}\,
\gamma^{n-1}1_{[1,\infty)}(n)
\right\}
\end{displaymath} (3)

for all $n$, where $\gamma\defeq\vect{\Delta}^T\mat{P}\vect{\Delta}$ and where indicator function $1_{[1,\infty)}(n)$ is unity when $n\in[1,\infty)$ and zero otherwise. The Hermitian-symmetric component of a matrix sequence is denoted $\mbox{H.S.}\{\mat{A}_n\}
\defeq \frac{1}{2} (\mat{A}_n+\mat{A}^\herm_{-n})$.
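The closed form $\mat{R}_{(\unit\alpha)}(n) = \mat{P}\vect{\Delta}\,\gamma^{n-1}\vect{\Delta}^T\mat{P}\,\vert\alpha_k\vert^2$ for $n\geq 1$ can be spot-checked by Monte Carlo (a sketch under illustrative assumptions: binary equiprobable selector data, so $\mat{P}=\mat{I}/M$, and $\vert\alpha_k\vert=1$):

```python
import numpy as np

# Monte-Carlo check of R(n) = P Delta gamma^{n-1} Delta^T P |alpha|^2 for
# the recursion alpha_k = (v_k^T Delta)(Delta^T v_{k-1}) alpha_{k-1},
# with v_k an equiprobable selector vector and |alpha_k| = 1.
rng = np.random.default_rng(3)
M, h = 2, 0.25
theta = np.pi * h * np.array([-1.0, 1.0])
Delta = np.exp(1j * theta / 2)
P = np.eye(M) / M
gamma = Delta @ P @ Delta               # scalar Delta^T P Delta

K = 200_000
idx = rng.integers(0, M, size=K)
V = np.eye(M)[idx]                      # v_k as rows
steps = Delta[idx[1:]] * Delta[idx[:-1]]
alpha = np.cumprod(np.concatenate(([1.0 + 0j], steps)))

n = 2
c = alpha[n:] * np.conj(alpha[:-n])     # alpha_k alpha*_{k-n}
est = (V[n:] * c[:, None]).T @ V[:-n] / (K - n)  # sample E{v_k alpha_k alpha*_{k-n} v_{k-n}^T}
pred = gamma ** (n - 1) * (P @ np.outer(Delta, Delta) @ P)
```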

Taking the Fourier transform of $v_n\defeq\gamma^{n-1}1_{[1,\infty)}(n)$ with care (to get a key special case right), let $h_n=\delta_{n+1}-\gamma\delta_n$, so that $v_n\convolve h_n=\delta_n$ and $V(\nu )H(\nu )=1$. When $\vert\gamma\vert<1$, $H(\nu )\neq 0$ for all $\nu $, so $V(\nu
)=\frac{1}{H(\nu )}=\frac{1}{e^{j2\pi \nu }-\gamma}$. If $\vert\gamma\vert=1$ however, $\gamma=e^{j2\pi\nu_{\gamma}}$ for some frequency $\nu_{\gamma}$ for which $H(\nu_{\gamma})=0$, and any additional component $\beta\sum_k\delta(\nu -\nu_{\gamma}- k)$ must be discovered some other way:

\begin{displaymath}
\beta=
\lim_{N\rightarrow\infty}
\frac{1}{2N+1}
\sum_{n=-N}^N
v_n
e^{-j2\pi\nu_{\gamma} n}
=
\frac{e^{-j2\pi\nu_{\gamma}}}{2}.
\end{displaymath}

So,

\begin{displaymath}
V(\nu )=
\left\{\begin{array}{ll}
\displaystyle\frac{1}{e^{j2\pi \nu }-\gamma}
& \mbox{if $\vert\gamma\vert<1$}\\[2ex]
\displaystyle\frac{1}{e^{j2\pi \nu }-\gamma}
+\frac{e^{-j2\pi\nu_{\gamma}}}{2}\sum_k\delta(\nu -\nu_{\gamma}-k)
& \mbox{if $\gamma=e^{j2\pi\nu_{\gamma}}$.}
\end{array}\right.
\end{displaymath}
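The $\vert\gamma\vert<1$ branch is just a geometric series; a quick numeric confirmation (illustrative values):

```python
import numpy as np

# Check that V(nu) = sum_{n>=1} gamma^(n-1) e^{-j 2 pi nu n} matches the
# closed form 1/(e^{j 2 pi nu} - gamma) when |gamma| < 1.
gamma = 0.6 * np.exp(0.8j)
nu = np.linspace(0, 1, 7, endpoint=False)
n = np.arange(1, 500)                     # truncation; error ~ |gamma|^500
V_sum = np.array([np.sum(gamma ** (n - 1) * np.exp(-2j * np.pi * f * n))
                  for f in nu])
V_closed = 1 / (np.exp(2j * np.pi * nu) - gamma)
```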

The Fourier transform of (3) can now be written2 $\mat{S}_{(\unit\alpha)}(\nu) =
\vert\alpha_k\vert^2\mbox{Re}\!\left\{ \mat{P} + 2 \mat{P} \vect{\Delta}
\vect{\Delta}^T \mat{P}\, V(\nu) \right\} $, and (2) then yields the power spectrum of signal $s(t)$ in (1). Scale key quantities by a symmetric square root of real covariance matrix $\mat{P}$:

\begin{eqnarray*}
\vect{\Psi} &\defeq & \mat{P}^{1/2}\vect{\Delta}\\
\vect{G}(f) & \defeq & \mat{P}^{1/2}\vect{Q}(f).
\end{eqnarray*}



Now $\gamma = \vect{\Psi}^T\vect{\Psi}$, so if $\vert\vect{\Psi}^T\vect{\Psi}\vert<1$,

\begin{displaymath}
\frac{\mat{S}_s(f)}{\vert\alpha_k\vert^2} =
\vect{G}^\herm\!(f)\,
T\,\mbox{Re}\!\left\{
\mat{I}
+
\frac{2}{e^{j2\pi fT}-\vect{\Psi}^T\vect{\Psi}}
\vect{\Psi}
\vect{\Psi}^T
\right\}\,
\vect{G}(f).
\end{displaymath}

If $\vect{\Psi}^T\vect{\Psi}=e^{j2\pi\nu_\gamma}$ however, this spectral-line term must be added on the right:

\begin{displaymath}
\vect{G}^\herm\!(f)\,
T\,\mbox{Re}\!\left\{
\vect{\Psi}\vect{\Psi}^T
e^{-j2\pi\nu_\gamma}
\sum_k\delta(fT-\nu_\gamma-k)
\right\}\vect{G}(f).
\end{displaymath}

The usual CPM signal that would be transmitted on a channel has $\vert\vect{\Psi}^T\vect{\Psi}\vert<1$ and has the continuous spectral form given first above. The additional spectral-line term typically arises when a conventional CPM signal is passed through a bandpass nonlinearity, as might be used in certain types of synchronization systems for carrier and symbol timing.

4 Summary

This paper's complex data signal comprised complex waveform symbols chosen independently with arbitrary probabilities from a finite-dimensional function space and sequentially attached through time shifting and complex scaling of Markov character. One example is full-response CPM, elsewhere represented with Markov encoding of the signaling-interval endpoints. Encoding the geometric mean of the endpoints instead is fundamentally responsible for this simpler result, the first in a clean vector/matrix form. The final spectral expression is a simple function of two vectors, one the basis for the waveform-symbol space and one the basis for corresponding Markov changes in complex amplitude.

Bibliography

[1]
J. O. Coleman,
``An Even Simpler Derivation of the Power Spectrum of Full-Response CPM,''
in Proc. 1999 Int'l Conf. on Signal and Image Processing, Nassau, Bahamas, Oct. 1999.

[2]
Ezio Biglieri and Monica Visintin,
``A simple derivation of the power spectrum of full-response CPM and some of its properties,''
IEEE Transactions on Communications, vol. 38, no. 3, pp. 267-269, Mar. 1990.

[3]
V. K. Prabhu and H. E. Rowe,
``Spectra of digital phase modulation by matrix methods,''
Bell System Technical Journal, vol. 53, no. 5, pp. 899-935, May 1974.

Appendix: Mathematical Foundations

In this appendix, $\vect{u}(t)$ and $\vect{v}(n)$, subscripted or not, are continuous-time and discrete-time random processes respectively, the latter with $n$ implicitly referring to time $nT$. These mixed continuous- and discrete-time results parallel and sometimes depend on familiar results in both continuous time and discrete time. As usual, double-subscripted crosscorrelations become single-subscripted autocorrelations when the two processes are one. The simple proofs of the first two propositions are omitted. Fourier-pair notations used:

\begin{displaymath}
\begin{array}{ccc}
\mbox{auto/crosscorrelation} && \mbox{power/cross spectrum}\\
\mat{R}_{\rm something}(\tau) & \longleftrightarrow & S_{\rm something}(f)\\
\mat{R}_{\rm something}(n) & \longleftrightarrow & S_{\rm something}(\nu)
\end{array}\end{displaymath}


\begin{prop}
For any convolution type, $(\mat{x}\convcon\mat{y})^\prime
= \mat{y}^\prime\convcon\mat{x}^\prime$.
\end{prop}

\begin{prop}
Let $\mat{u}(t)\leftrightarrow\mat{U}(f)$\ and
$\mat{v}(n)\leftrightarrow\mat{V}(\nu)$. Then
\begin{eqnarray*}
(\mat{v}\convmix\mat{u})(t) & \longleftrightarrow & T\mat{V}(fT)\,\mat{U}(f)\\
(\mat{u}\convmix\mat{v})(t) & \longleftrightarrow & \mat{U}(f)\,T\mat{V}(fT).
\end{eqnarray*}\end{prop}

\begin{defn}
Crosscorrelation of a continuous-time process and a discrete-time
process is given by $\mat{R}_{\vect{u}\vect{v}}(\tau)\defeq
\expect{\vect{u}(nT+\tau)\, \vect{v}^\herm(n)}$. When this expectation
does not depend on $n$, processes $\vect{u}(t)$\ and $\vect{v}(n)$\ are
said to be {\em crosscorrelation stationary}.
\end{defn}

\begin{prop}
Crosscorrelation stationary $\vect{u}(t)$\ and $\vect{v}(n)$\ imply
$\mat{R}_{\vect{u}\vect{v}}^\prime(\tau) =
\mat{R}_{\vect{v}\vect{u}}(\tau)$.
\end{prop}

\begin{proof}
Because $\mat{R}_{\vect{u}\vect{v}}^\prime(\tau) =
\left(\expect{\vect{u}(nT-\tau) \, \vect{v}^\herm(n)}\right)^\herm =
\expect{\vect{v}(n) \, \vect{u}^\herm(nT-\tau)} =
\expect{\vect{v}(n) \, \vect{u}^\prime(\tau-nT)}$.
\end{proof}

\begin{prop}
If $\vect{u}(t)=(\mat{w}\convmix\vect{v}_1)(t)$\ with $\vect{v}_1(n)$\ and
$\vect{v}_2(n)$\ jointly stationary, then
$\mat{R}_{\vect{u}\vect{v}_2}(\tau) =
(\mat{w}\convmix\mat{R}_{\vect{v}_1\vect{v}_2})(\tau)$\ and
$\mat{R}_{\vect{v}_2\vect{u}}(\tau) =
(\mat{R}_{\vect{v}_2\vect{v}_1}\convmix\mat{w}^\prime)(\tau)$.
\end{prop}

\begin{proof}$\displaystyle
\mat{R}_{\vect{u}\vect{v}_2}(\tau) =
T\sum_k
\mat{w}(\tau-kT)\,\mat{R}_{\vect{v}_1\vect{v}_2}(k) =
(\mat{w}\convmix\mat{R}_{\vect{v}_1\vect{v}_2})(\tau)$. Then
$\mat{R}_{\vect{v}_2\vect{u}} =
\mat{R}_{\vect{u}\vect{v}_2}^\prime =
(\mat{w}\convmix\mat{R}_{\vect{v}_1\vect{v}_2})^\prime =
\mat{R}_{\vect{v}_2\vect{v}_1} \convmix \mat{w}^\prime$.
\end{proof}

Those definitions and results were completely parallel to the familiar ones, but these next few involve a minor, natural extension to average out the cyclostationarity.


\begin{defn}
Average-crosscorrelation function
$\mat{R}_{\vect{u}_1\vect{u}_2}(\tau)$\ is defined below for any
interval $\cal I$ of length $T$, provided the
processes are jointly (2nd-order) cyclostationary with period dividing
$T$.
\end{defn}
It does not matter what interval $\cal I$ is chosen, because decomposition $\rho=kT+r$ provides a one-to-one map $\rho\mapsto r$ from $\cal I$ onto $[0,T)$ with $dr/d\rho=1$ almost everywhere. Consequently, the right side of definition

\begin{displaymath}
\mat{R}_{\vect{u}_1\vect{u}_2}(\tau) =
\int_{\cal I}
\expect{\vect{u}_1(t-\rho) \, \vect{u}_2^\prime(\tau-t+\rho)\given\rho}
\,\frac{d\rho}{T}
\end{displaymath}

becomes

\begin{displaymath}
\int_{[0,T)}
\expect{\vect{u}_1(t-kT-r) \, \vect{u}_2^\prime(\tau-t+kT+r)\given r}
\,\frac{dr}{T}.
\end{displaymath}

Cyclostationarity guarantees that the expectation is invariant to the shift by $kT$, so this becomes $\mat{R}_{\vect{u}_1\vect{u}_2}(\tau) =
\expect{\vect{u}_1(t-r) \, \vect{u}_2^\prime(\tau-t+r)}$, where independent random variable $r$ is uniformly distributed on $[0,T)$. This is so no matter which interval ${\cal I}$ is chosen, so this autocorrelation is well defined.
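This averaging can be illustrated numerically (a sketch, not the paper's code): for a PAM-style cyclostationary process on a grid, the lag correlation depends on the sampling phase within the symbol interval, yet its average over any full period of phases is the same.

```python
import numpy as np

# u[n] = sum_k w[n - k*fs] x_k with i.i.d. x is cyclostationary with
# period fs: the lag-l correlation depends on the phase p = n mod fs,
# but averaging p over any window of fs consecutive phases (any choice
# of interval I) gives the same number.
rng = np.random.default_rng(6)
fs, K = 4, 100_000
w = np.array([1.0, 0.7, 0.3, 0.1])          # pulse, one symbol long
x = rng.standard_normal(K)
up = np.kron(x, np.r_[1, np.zeros(fs - 1)]) # impulse train of symbols
u = np.convolve(up, w)[:K * fs]

l = 1
prod = u[l:] * u[:-l]
phase = np.arange(l, K * fs) % fs           # phase of the leading sample
c = np.array([prod[phase == p].mean() for p in range(fs)])
avg1 = c.mean()                             # average over phases 0..fs-1
avg2 = np.roll(c, 1).mean()                 # any cyclic relabeling: same
```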


\begin{prop}
When $\vect{u}_1$\ and $\vect{u}_2$\ are crosscorrelation stationary,
$\mat{R}_{\vect{u}_1\vect{u}_2}^\prime(\tau) =
\mat{R}_{\vect{u}_2\vect{u}_1}(\tau)$.
\end{prop}

\begin{proof}
The argument is straightforward: apply the adjoint to the defining
integral and substitute $\eta=\rho-\tau$, which is again uniformly
distributed on an interval of width $T$.
\end{proof}


\begin{prop}
If $\vect{u}_1(t)=(\mat{w}_1\convmix\vect{v}_1)(t)$, then
$\mat{R}_{\vect{u}_1\vect{u}_2}(\tau) =
(\mat{w}_1\convcon\mat{R}_{\vect{v}_1\vect{u}_2})(\tau)$\ and
$\mat{R}_{\vect{u}_2\vect{u}_1}(\tau) =
(\mat{R}_{\vect{u}_2\vect{v}_1}\convcon\mat{w}_1^\prime)(\tau)$.
\end{prop}

\begin{proof}
Suppose $\vect{u}_1(t)=(\mat{w}_1\convmix\vect{v}_1)(t)$. Then
substituting the convolution into the average-crosscorrelation
definition and exchanging the expectation with the sum and integral
gives $\mat{R}_{\vect{u}_1\vect{u}_2}(\tau) =
(\mat{w}_1\convcon\mat{R}_{\vect{v}_1\vect{u}_2})(\tau)$. The second
identity follows from the conjugate symmetry of the crosscorrelations
involved.
\end{proof}



Footnotes

1. Support: Office of Naval Research (ONR) program in Operations Research and the ONR Base program at the Naval Research Laboratory.

2. If symmetric-matrix sequence $\mat{A}_n$ Fourier transforms to ${\cal A}(f)$, the transform of $\mbox{H.S.}\{\mat{A}_n\}$ is $\frac{1}{2}[{\cal A}(f) + {\cal A}^\herm(f)]=\mbox{Re}\{{\cal A}(f)\}$.