Multidelay block frequency domain adaptive filter

The multidelay block frequency domain adaptive filter (MDF) algorithm is a block-based frequency-domain implementation of the (normalised) least mean squares (LMS) algorithm.


Introduction

The MDF algorithm is based on the fact that convolutions can be computed efficiently in the frequency domain (thanks to the fast Fourier transform). However, it differs from the fast LMS (FLMS) algorithm in that the block size it uses may be smaller than the filter length. If the two are equal, MDF reduces to the FLMS algorithm.

The advantages of MDF over the (N)LMS algorithm are:

  • Lower algorithmic complexity
  • Partial de-correlation of the input (which may lead to faster convergence)

Variable definitions

Let $N$ be the length of the processing blocks, $K$ be the number of blocks, and $\mathbf{F}$ denote the $2N \times 2N$ Fourier transform matrix. The variables are defined as:

$$\underline{\mathbf{e}}(\ell) = \mathbf{F}\left[\mathbf{0}_{1\times N},\, e(\ell N), \dots, e(\ell N + N - 1)\right]^T$$

$$\underline{\mathbf{x}}_k(\ell) = \operatorname{diag}\left\{\mathbf{F}\left[x((\ell-k-1)N), \dots, x((\ell-k+1)N-1)\right]^T\right\}$$

$$\underline{\mathbf{X}}(\ell) = \left[\underline{\mathbf{x}}_0(\ell), \underline{\mathbf{x}}_1(\ell), \dots, \underline{\mathbf{x}}_{K-1}(\ell)\right]$$

$$\underline{\mathbf{d}}(\ell) = \mathbf{F}\left[\mathbf{0}_{1\times N},\, d(\ell N), \dots, d(\ell N + N - 1)\right]^T$$
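These definitions can be sketched with NumPy, where `np.fft.fft` plays the role of $\mathbf{F}$. The function names below are illustrative, not part of any standard API, and the indexing assumes enough input history is available (i.e. $(\ell - k - 1)N \geq 0$):

```python
import numpy as np

def freq_error_vector(e, ell, N):
    """e_(ell): FFT of N zeros followed by the error block at time ell*N."""
    block = np.concatenate([np.zeros(N), e[ell * N : ell * N + N]])
    return np.fft.fft(block)  # length-2N frequency-domain vector

def freq_input_block(x, ell, k, N):
    """x_k(ell): FFT of the 2N input samples x((ell-k-1)N) .. x((ell-k+1)N - 1).

    Stored as a plain vector; multiplying by the diag{...} matrix of the
    definition is then just element-wise multiplication."""
    start = (ell - k - 1) * N
    return np.fft.fft(x[start : start + 2 * N])
```

Stacking `freq_input_block` results for $k = 0, \dots, K-1$ as rows of a $(K, 2N)$ array gives a compact stand-in for $\underline{\mathbf{X}}(\ell)$.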

With normalisation matrices $\mathbf{G}_1$ and $\mathbf{G}_2$:

$$\mathbf{G}_1 = \mathbf{F}\begin{bmatrix}\mathbf{0}_{N\times N} & \mathbf{0}_{N\times N}\\ \mathbf{0}_{N\times N} & \mathbf{I}_{N\times N}\end{bmatrix}\mathbf{F}^H$$

$$\tilde{\mathbf{G}}_2 = \mathbf{F}\begin{bmatrix}\mathbf{I}_{N\times N} & \mathbf{0}_{N\times N}\\ \mathbf{0}_{N\times N} & \mathbf{0}_{N\times N}\end{bmatrix}\mathbf{F}^H$$

$$\mathbf{G}_2 = \operatorname{diag}\left\{\tilde{\mathbf{G}}_2, \tilde{\mathbf{G}}_2, \dots, \tilde{\mathbf{G}}_2\right\}$$

In practice, when multiplying a column vector $\mathbf{x}$ by $\mathbf{G}_1$, we take the inverse FFT of $\mathbf{x}$, set the first $N$ values of the result to zero, and then take the FFT again. This removes the effects of the circular convolution.
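This trick can be written as a few lines of NumPy; the function name is illustrative, and the vector is assumed to have even length $2N$:

```python
import numpy as np

def apply_G1(v):
    """Multiply a length-2N frequency-domain vector by G1 = F [0 0; 0 I] F^H.

    Equivalent to: inverse FFT, zero the first N time-domain samples
    (discarding the circular-convolution wrap-around), then FFT back."""
    t = np.fft.ifft(v)
    N = len(v) // 2
    t[:N] = 0.0
    return np.fft.fft(t)
```

Note that $\mathbf{G}_1$ is a projection, so applying it twice gives the same result as applying it once.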

Algorithm description

For each block, the MDF algorithm is computed as:

$$\underline{\hat{\mathbf{y}}}(\ell) = \mathbf{G}_1 \underline{\mathbf{X}}(\ell)\, \underline{\hat{\mathbf{h}}}(\ell-1)$$

$$\underline{\mathbf{e}}(\ell) = \underline{\mathbf{d}}(\ell) - \underline{\hat{\mathbf{y}}}(\ell)$$

$$\mathbf{\Phi}_{\mathbf{xx}}(\ell) = \underline{\mathbf{X}}^H(\ell)\, \underline{\mathbf{X}}(\ell)$$

$$\underline{\hat{\mathbf{h}}}(\ell) = \underline{\hat{\mathbf{h}}}(\ell-1) + \mu\, \mathbf{G}_2\, \mathbf{\Phi}_{\mathbf{xx}}^{-1}(\ell)\, \underline{\mathbf{X}}^H(\ell)\, \underline{\mathbf{e}}(\ell)$$

It is worth noting that, while the algorithm is more easily expressed in matrix form, the actual implementation requires no matrix multiplications. For instance, the normalisation matrix computation $\mathbf{\Phi}_{\mathbf{xx}}(\ell) = \underline{\mathbf{X}}^H(\ell)\,\underline{\mathbf{X}}(\ell)$ reduces to an element-wise vector multiplication because $\underline{\mathbf{X}}(\ell)$ is block-diagonal. The same holds for the other matrix multiplications.
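As an illustration, one full per-block iteration can be sketched in NumPy using only FFTs and element-wise operations. This is a hypothetical system-identification setup, not reference code: the values of $\mu$, $N$ and $K$ are arbitrary, and the per-bin input power summed over partitions is used as a practical stand-in for the exact $\mathbf{\Phi}_{\mathbf{xx}}$ normalisation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, mu = 64, 4, 0.5          # block size, number of blocks, step size
L = N * K                       # total modelled filter length

# Hypothetical unknown system to identify from input x and observation d
h_true = rng.standard_normal(L) * np.exp(-np.arange(L) / 20.0)
x = rng.standard_normal(N * 200)
d = np.convolve(x, h_true)[: len(x)]

H = np.zeros((K, 2 * N), dtype=complex)  # partitions of h^, frequency domain
X = np.zeros((K, 2 * N), dtype=complex)  # rows are the diagonals of x_k(ell)
errs = []

for ell in range(1, len(x) // N):
    # Shift partitions by one block and insert the newest 2N input samples
    X = np.roll(X, 1, axis=0)
    X[0] = np.fft.fft(x[(ell - 1) * N : (ell + 1) * N])

    # y^ = G1 X h^ : per-bin multiply-accumulate, then the G1 constraint
    y = np.fft.ifft(np.sum(X * H, axis=0))
    y[:N] = 0.0                            # discard circular-convolution part

    # e = d - y^, kept in the zero-padded layout of the definitions above
    e_time = np.zeros(2 * N)
    e_time[N:] = d[ell * N : (ell + 1) * N] - y[N:].real
    E = np.fft.fft(e_time)
    errs.append(float(np.sum(e_time[N:] ** 2)))

    # The normalisation reduces to element-wise work: per-bin input power
    phi = np.sum(np.abs(X) ** 2, axis=0) + 1e-8

    # Update each partition, applying the G2 (gradient) constraint
    for k in range(K):
        g = np.fft.ifft(np.conj(X[k]) * E / phi)
        g[N:] = 0.0                        # keep only the first N taps
        H[k] += mu * np.fft.fft(g)
```

Because each $\underline{\mathbf{x}}_k(\ell)$ is diagonal, every matrix product above collapses to a per-frequency-bin multiplication, which is what keeps the per-sample cost low.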
