14th International Congress of Phonetic Sciences (ICPhS-14)

San Francisco, CA, USA
August 1-7, 1999

Human and Machine Recognition of Nasal Consonants in Noise

Abeer Alwan (1), Jeff Lo (2), Qifeng Zhu (1)

(1) Department of Electrical Engineering, UCLA, Los Angeles, CA, USA
(2) NEC Electronics, Santa Clara, CA, USA

The nasal consonants /m, n/ are often confused in the presence of background noise. In addition, these consonants are difficult for machines to recognize reliably. In this study, the perception of the place of articulation for nasal consonants in adverse conditions is examined through a series of perceptual experiments. The experiments examined the effects of additive white Gaussian noise (AWGN) and additive speech-shaped noise on nasal place perception in CV syllables. Results show a strong vowel-context effect. For example, it appears that the role of the formant transitions is more critical than that of the murmur in signaling place for /Ca/ and /Cu/ syllables, while both the murmur and the formant transitions appear to be important in signaling place for /Ci/ syllables. A Hidden Markov Model (HMM)-based automatic speech recognition (ASR) system was then constructed to identify the nasals at various signal-to-noise ratios. Modifications to a standard ASR system were made that were inspired by the results of the perceptual experiments. The modifications allowed a greater focus on the formant transitions, significantly improving recognition performance in noise.


Bibliographic reference. Alwan, Abeer / Lo, Jeff / Zhu, Qifeng (1999): "Human and machine recognition of nasal consonants in noise", in ICPhS-14, 167-170.