In 1990, Yamaguchi et al. used max pooling in TDNNs to realize a speaker-independent isolated word recognition system.
The Time Delay Neural Network, like other neural networks, operates with multiple interconnected layers of perceptrons and is implemented as a feedforward neural network. All neurons (at each layer) of a TDNN receive inputs from the outputs of neurons at the layer below, but with two differences: each unit receives input from a contextual window of outputs of the layer below rather than from a single time step, and the weights of this window are shared across time, making the network time-shift invariant.
In the case of a speech signal, inputs are spectral coefficients over time.
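The layer structure described above can be sketched as a one-dimensional scan over time with a shared weight window. This is a minimal NumPy sketch; the window size, number of units, and tanh nonlinearity are illustrative assumptions, not the original architecture:

```python
import numpy as np

def tdnn_layer(x, w, b):
    """One time-delay layer: each output frame sees a short window of
    input frames, and the same weights are reused at every time step
    (weight sharing across time).

    x: (T, F)     input frames (e.g. F spectral coefficients per frame)
    w: (K, F, H)  shared weights for a context window of K frames -> H units
    b: (H,)       bias
    returns: (T - K + 1, H) activations
    """
    T, F = x.shape
    K, _, H = w.shape
    out = np.empty((T - K + 1, H))
    for t in range(T - K + 1):
        window = x[t:t + K]  # K consecutive input frames
        # sum over the window and feature axes, then apply nonlinearity
        out[t] = np.tanh(np.tensordot(window, w, axes=2) + b)
    return out

# toy example: 10 frames of 16 spectral coefficients, context of 3 frames
rng = np.random.default_rng(0)
x = rng.standard_normal((10, 16))
w = rng.standard_normal((3, 16, 8)) * 0.1
b = np.zeros(8)
h = tdnn_layer(x, w, b)
print(h.shape)  # (8, 8)
```

Because the same `w` is applied at every position, a feature detector learned at one time step fires identically at any other.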
To learn critical acoustic-phonetic features (for example formant transitions, bursts, and frication) without first requiring precise localization, the TDNN is trained in a time-shift-invariant manner. Time-shift invariance is achieved through weight sharing across time during training: time-shifted copies of the TDNN are made over the input range (from left to right in Fig.1). Backpropagation is then performed from an overall classification target vector (in the TDNN diagram, three phoneme class targets, /b/, /d/, /g/, are shown in the output layer), producing gradients that will generally differ for each time-shifted copy. Since the time-shifted networks are merely copies, however, the position dependence is removed by weight sharing: in this example, the gradients from all time-shifted copies are averaged before each weight update. In speech, time-shift-invariant training was shown to learn weight matrices that are independent of the precise positioning of the input. The learned weight matrices were also shown to detect acoustic-phonetic features known to be important for human speech perception, such as formant transitions and bursts. TDNNs could also be combined or grown by way of pre-training.
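The gradient-averaging step can be sketched as follows. This is an assumed minimal setup with a single shared linear layer and squared error, not the original training code: each time-shifted "copy" of the network contributes its own gradient, and the shared weights are updated once with the average.

```python
import numpy as np

rng = np.random.default_rng(1)
T, F, H, K = 8, 4, 3, 3            # frames, features, hidden units, window
x = rng.standard_normal((T, F))    # input sequence
w = rng.standard_normal((K * F, H)) * 0.1  # shared weights
target = rng.standard_normal(H)    # overall classification target (toy)

shifts = T - K + 1
grads = []
for t in range(shifts):            # one "copy" of the network per shift
    window = x[t:t + K].reshape(-1)         # flattened context window
    y = window @ w                          # linear unit for simplicity
    err = y - target                        # squared-error gradient at output
    grads.append(np.outer(window, err))     # dL/dw for this shifted copy

# weight sharing: average the per-copy gradients, then update once,
# so the learned weights cannot depend on input position
w -= 0.1 * np.mean(grads, axis=0)
```

Averaging (or summing) the per-position gradients into one update is exactly what makes the shared weights position-independent.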
The precise architecture of a TDNN (time delays, number of layers) is mostly determined by the designer, depending on the classification problem and the most useful context sizes. The delays or context windows are chosen specifically for each application. Work has also been done to create adaptable time-delay TDNNs in which this manual tuning is eliminated.
TDNN-based phoneme recognizers compared favourably in early comparisons with HMM-based phone models. Modern deep TDNN architectures include many more hidden layers and sub-sample or pool connections over broader contexts at higher layers; they achieve up to 50% word-error reduction over GMM-based acoustic models. While the successive layers of a TDNN are intended to learn features of increasing context width, they still model local contexts. When longer-distance relationships and pattern sequences have to be processed, learning states and state sequences becomes important, and TDNNs can be combined with other modelling techniques.
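How context width grows across sub-sampled layers can be illustrated with a small helper. The layer offsets below are illustrative, loosely in the style of the sub-sampled architectures of Peddinti et al., not an exact published configuration:

```python
def context_width(offsets_per_layer):
    """Total input context seen by one top-level output frame, when each
    layer splices frames at the given relative offsets (dense offsets at
    the bottom, sparser ones higher up)."""
    lo = hi = 0
    for offsets in offsets_per_layer:
        lo += min(offsets)
        hi += max(offsets)
    return hi - lo + 1

# dense window at the bottom, increasingly sparse splicing above
layers = [range(-2, 3), (-1, 2), (-3, 3), (-7, 2)]
print(context_width(layers))  # 23
```

Even though each layer only splices a handful of frames, the offsets compound, so the top layer covers a much broader acoustic context than any single layer.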
TDNNs, introduced in 1989, were first used to solve problems in speech recognition and initially focused on shift-invariant phoneme recognition. Speech lends itself well to TDNNs, as spoken sounds are rarely of uniform length and precise segmentation is difficult or impossible. By scanning a sound over past and future context, the TDNN is able to construct a model for the key elements of that sound in a time-shift-invariant manner. This is particularly useful when sounds are smeared out through reverberation. Large phonetic TDNNs can be constructed modularly by pre-training and combining smaller networks.
Large-vocabulary speech recognition requires recognizing sequences of phonemes that make up words, subject to the constraints of a large pronunciation vocabulary. TDNNs can be integrated into large-vocabulary speech recognizers by introducing state transitions and search between the phonemes that make up a word. The resulting Multi-State Time-Delay Neural Network (MS-TDNN) can be trained discriminatively from the word level, thereby optimizing the entire arrangement toward word recognition instead of phoneme classification.
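The state-transition-and-search idea can be sketched as a simple dynamic-programming alignment of a word's phoneme sequence to per-frame TDNN scores. The scores and the scoring function here are illustrative assumptions, not the MS-TDNN training criterion:

```python
import numpy as np

def align_word(frame_scores, phoneme_seq):
    """Best monotonic alignment of a word's phonemes to the frames.

    frame_scores: (T, P) per-frame log-scores for P phoneme classes
                  (e.g. produced by a TDNN)
    phoneme_seq:  phoneme indices of the word, in order
    Returns the best total log-score of traversing the phonemes in order.
    """
    T = frame_scores.shape[0]
    S = len(phoneme_seq)
    dp = np.full((T, S), -np.inf)
    dp[0, 0] = frame_scores[0, phoneme_seq[0]]
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]                               # stay in phoneme
            advance = dp[t - 1, s - 1] if s > 0 else -np.inf  # next phoneme
            dp[t, s] = max(stay, advance) + frame_scores[t, phoneme_seq[s]]
    return dp[T - 1, S - 1]

# toy word of two phonemes over four frames (made-up scores)
scores = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
print(align_word(scores, [0, 1]))  # 4.0
```

Training through such an alignment lets errors at the word level drive the updates of the phoneme-level network, which is the point of word-level discriminative training.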
Two-dimensional variants of the TDNN were proposed for speaker independence. Here, shift invariance is applied along the frequency axis as well as the time axis, in order to learn hidden features that are independent of precise location in time and in frequency (the latter varying due to speaker variability).
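Two-dimensional shift invariance can be sketched by scanning one shared weight patch over both axes of a spectrogram. The patch size and nonlinearity below are assumptions for illustration:

```python
import numpy as np

def tdnn2d_layer(spec, w):
    """Scan one shared weight patch over both time and frequency, so the
    learned detector fires regardless of where in time or frequency the
    pattern occurs.

    spec: (T, F) spectrogram (time frames x frequency bins)
    w:    (Kt, Kf) shared patch weights
    """
    T, F = spec.shape
    Kt, Kf = w.shape
    out = np.empty((T - Kt + 1, F - Kf + 1))
    for t in range(T - Kt + 1):
        for f in range(F - Kf + 1):
            out[t, f] = np.tanh(np.sum(spec[t:t + Kt, f:f + Kf] * w))
    return out

spec = np.random.default_rng(3).standard_normal((20, 12))
w = np.ones((3, 3)) / 9.0  # toy patch: a local average detector
print(tdnn2d_layer(spec, w).shape)  # (18, 10)
```

Sharing the patch over the frequency axis is what absorbs speaker-dependent frequency shifts, just as sharing over time absorbs imprecise segmentation.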
One of the persistent problems in speech recognition is recognizing speech corrupted by echo and reverberation (as is the case in large rooms and with distant microphones). Reverberation can be viewed as corrupting speech with delayed versions of itself. In general, however, it is difficult to de-reverberate a signal, as the impulse response function (and thus the convolutional noise experienced by the signal) is not known for an arbitrary space. The TDNN was shown to recognize speech robustly despite different levels of reverberation.
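The "delayed versions of itself" view corresponds directly to convolving the signal with a room impulse response, as in this sketch (the impulse response values are made up for illustration; a real room response is unknown and far denser):

```python
import numpy as np

rng = np.random.default_rng(2)
clean = rng.standard_normal(1000)           # stand-in for clean speech samples

# toy room impulse response: a direct path plus two attenuated reflections
impulse = np.zeros(200)
impulse[0] = 1.0                            # direct path
impulse[60] = 0.5                           # early reflection
impulse[150] = 0.25                         # later reflection

# reverberant signal = clean signal plus delayed, scaled copies of itself
reverberant = np.convolve(clean, impulse)
print(reverberant.shape)  # (1199,)
```

De-reverberation would require inverting this convolution, which is hard precisely because `impulse` is unknown for an arbitrary room; a shift-invariant recognizer sidesteps the inversion.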
TDNNs were also successfully used in early demonstrations of audio-visual speech, where the sounds of speech are complemented by visually reading lip movement. Here, TDNN-based recognizers used visual and acoustic features jointly to achieve improved recognition accuracy, particularly in the presence of noise, where complementary information from an alternate modality could be fused nicely in a neural net.
Video has a temporal dimension that makes the TDNN an ideal solution for analysing motion patterns. Examples of this analysis include vehicle detection and pedestrian recognition. When examining videos, subsequent images are fed into the TDNN as input, each image being the next frame in the video. The strength of the TDNN comes from its ability to examine objects shifted forward and backward in time and thereby identify objects that remain detectable as time is altered. If an object can be recognized in this manner, an application can anticipate finding that object in the future and take an optimal action.
Two-dimensional TDNNs were later applied to other image-recognition tasks under the name of "Convolutional Neural Networks", where shift-invariant training is applied to the x/y axes of an image.
Alexander Waibel, Toshiyuki Hanazawa, Geoffrey Hinton, Kiyohiro Shikano, Kevin J. Lang, Phoneme Recognition Using Time-Delay Neural Networks, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 37, No. 3, pp. 328–339, March 1989.
Alexander Waibel, Phoneme Recognition Using Time-Delay Neural Networks, SP87-100, Meeting of the Institute of Electronics, Information and Communication Engineers (IEICE), December 1987, Tokyo, Japan. http://www.inf.ufrgs.br/~engel/data/media/file/cmp121/waibel89_TDNN.pdf
John B. Hampshire and Alexander Waibel, Connectionist Architectures for Multi-Speaker Phoneme Recognition Archived 2016-04-11 at the Wayback Machine, Advances in Neural Information Processing Systems, 1990, Morgan Kaufmann. http://papers.nips.cc/paper/213-connectionist-architectures-for-multi-speaker-phoneme-recognition.pdf
Stefan Jaeger, Stefan Manke, Juergen Reichert, Alexander Waibel, Online handwriting recognition: the NPen++ recognizer, International Journal on Document Analysis and Recognition, Vol. 3, Issue 3, March 2001. https://www.researchgate.net/profile/Stefan_Jaeger/publication/220163530_Online_handwriting_recognition_the_NPen_recognizer_Int_J_Doc_Anal_Recognit_3169-180/links/0c96051af3e6133ed0000000.pdf
Fukushima, Kunihiko (1980). "Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position" (PDF). Biological Cybernetics. 36 (4): 193–202. doi:10.1007/BF00344251. PMID 7370364. S2CID 206775608. Archived (PDF) from the original on 3 June 2014. Retrieved 16 November 2013. https://www.cs.princeton.edu/courses/archive/spr08/cos598B/Readings/Fukushima1980.pdf
Fukushima, Kunihiko; Miyake, Sei (1982-01-01). "Neocognitron: A new algorithm for pattern recognition tolerant of deformations and shifts in position". Pattern Recognition. 15 (6): 455–469. Bibcode:1982PatRe..15..455F. doi:10.1016/0031-3203(82)90024-3. ISSN 0031-3203. https://www.sciencedirect.com/science/article/abs/pii/0031320382900243
LeCun, Yann; Boser, Bernhard; Denker, John; Henderson, Donnie; Howard, R.; Hubbard, Wayne; Jackel, Lawrence (1989). "Handwritten Digit Recognition with a Back-Propagation Network". Advances in Neural Information Processing Systems. 2. Morgan-Kaufmann. https://proceedings.neurips.cc/paper/1989/hash/53c3bce66e43be4f209556518c2fcb54-Abstract.html
Yamaguchi, Kouichi; Sakamoto, Kenji; Akabane, Toshio; Fujimoto, Yoshiji (November 1990). A Neural Network for Speaker-Independent Isolated Word Recognition. First International Conference on Spoken Language Processing (ICSLP 90). Kobe, Japan. Archived from the original on 2021-03-07. Retrieved 2019-09-04. https://web.archive.org/web/20210307233750/https://www.isca-speech.org/archive/icslp_1990/i90_1077.html
Alexander Waibel, Hidefumi Sawai, Kiyohiro Shikano, Modularity and Scaling in Large Phonemic Neural Networks, IEEE Transactions on Acoustics, Speech, and Signal Processing, December 1989. https://ieeexplore.ieee.org/abstract/document/45535/
Christian Koehler and Joachim K. Anlauf, An adaptable time-delay neural-network algorithm for image sequence analysis, IEEE Transactions on Neural Networks 10.6 (1999): 1531-1536 https://web.archive.org/web/20190904162647/https://pdfs.semanticscholar.org/9a0a/08e4d9a4cea6fa035555f2ee54bdae673614.pdf
Vijayaditya Peddinti, Daniel Povey, Sanjeev Khudanpur, A time delay neural network architecture for efficient modeling of long temporal contexts, Proceedings of Interspeech 2015 https://web.archive.org/web/20180306041537/https://pdfs.semanticscholar.org/ced2/11de5412580885279090f44968a428f1710b.pdf
David Snyder, Daniel Garcia-Romero, Daniel Povey, A Time-Delay Deep Neural Network-Based Universal Background Models for Speaker Recognition, Proceedings of ASRU 2015. http://danielpovey.com/files/2015_asru_tdnn_ubm.pdf
Patrick Haffner, Alexander Waibel, Multi-State Time Delay Neural Networks for Continuous Speech Recognition Archived 2016-04-11 at the Wayback Machine, Advances in Neural Information Processing Systems, 1992, Morgan Kaufmann. http://papers.nips.cc/paper/580-multi-state-time-delay-networks-for-continuous-speech-recognition.pdf
Christoph Bregler, Hermann Hild, Stefan Manke, Alexander Waibel, Improving Connected Letter Recognition by Lipreading, IEEE Proceedings International Conference on Acoustics, Speech, and Signal Processing, Minneapolis, 1993. http://isl.anthropomatik.kit.edu/cmu-kit/downloads/Improving_Connected_Letter_Recognition_by_Lipreading.pdf
Christian Woehler and Joachim K. Anlauf, Real-time object recognition on image sequences with the adaptable time delay neural network algorithm — applications for autonomous vehicles, Image and Vision Computing 19.9 (2001): 593–618. https://www.sciencedirect.com/science/article/pii/S0262885601000403
"Time Series and Dynamic Systems - MATLAB & Simulink". mathworks.com. Retrieved 21 June 2016. https://www.mathworks.com/help/deeplearning/time-series-and-dynamic-systems.html
Vijayaditya Peddinti, Guoguo Chen, Vimal Manohar, Tom Ko, Daniel Povey, Sanjeev Khudanpur, JHU ASpIRE system: Robust LVCSR with TDNNs i-vector Adaptation and RNN-LMs, Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop, 2015. http://danielpovey.com/files/2015_asru_aspire.pdf