CLCG Colloquia

A perception study of tone and sentence intonation of Chinese aphasic patients

Abstract


Jie Liang & Vincent J. van Heuven

Phonetics Laboratory, Universiteit Leiden Centre for Linguistics, The Netherlands

INTRODUCTION. We present an experimental investigation of the perception of lexical tone and sentence intonation (question vs. statement) by 14 Chinese aphasic patients with unilateral damage in the left hemisphere (LH). Investigating the breakdown of lexical tone and sentence intonation perception in acquired language aphasia can provide valuable insight into the nature of normal speech perception, since in Chinese pitch is used both at the lexical and at the sentence level. Packard (1986) demonstrated that left-damaged nonfluent aphasic speakers of Chinese experience both a tonal and an intonation production deficit. Gandour (1988, 1992, 1997) showed that lexical tone production was defective in Thai (a five-tone language) while the production of intonation remained intact; the results from our study of Chinese aphasic speakers agree with Gandour’s. Obviously, the crucial question now is whether or not intonation perception is impaired in LH-damaged patients. The literature has not yet provided a perception experiment on aphasic patients that was specifically designed to separate tone from intonation perception. The present study aims to fill that gap. Since we focus on the dissociation between lexical tones and intonation in the perception of aphasics, our experiment is divided into two main parts: lexical identification (tones and vowels) and postlexical identification (sentence intonation).

METHODS. We set up a three-factor experiment:

(i) Type of Chinese listeners: 14 Beijing non-fluent aphasic listeners with damage mainly in the LH frontal and parietal lobes, 30 Beijing listeners (the control group), 8 Nantong listeners (speaking a seven-tone dialect of Chinese), 13 Changsha listeners (speaking a six-tone dialect) and 10 Uygur listeners (speakers of a non-tone language).

(iia) Type of stimuli for the lexical part: (a) seven words were chosen representing the seven-vowel system of the Beijing dialect /pa1, ti1, tʂu1, ɚ2, kɤ1, po1, y2/, and (b) four words to cover the four-tone system /ma1, ma2, ma3, ma4/.

(iib) Type of stimuli for the postlexical (intonation) part: a two-dimensional stimulus continuum was generated (through PSOLA resynthesis) from one utterance (produced by a Beijing speaker) by systematically combining seven overall pitch levels with seven different boundary tones. The basic utterance was /kai1 sun1 jiŋ1 kʰai1 fei1 tɕi1/, ‘let Sun Ying fly a flying machine’, which contains high level tones only (Tone 1) in order to reduce lexical tonal influence, and which can be produced as either a statement or a question depending on the intonation only.
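The two-dimensional continuum amounts to a full factorial crossing of the two pitch factors. A minimal sketch of that enumeration (the index ranges and names are illustrative, not taken from the study):

```python
from itertools import product

# Seven overall pitch levels crossed with seven boundary tones,
# each indexed 0..6 here for illustration. Fully crossing the two
# factors yields 49 resynthesized versions of the base utterance.
PITCH_LEVELS = range(7)      # overall register of the utterance
BOUNDARY_TONES = range(7)    # final rise/fall at the utterance end

stimuli = [(p, b) for p, b in product(PITCH_LEVELS, BOUNDARY_TONES)]
assert len(stimuli) == 49    # the full two-dimensional continuum
```

Each pair then corresponds to one resynthesized stimulus presented to the listeners.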

(iii) Time pressure. Stimuli were presented under two task conditions: accurate (only accuracy required) and speeded (both accuracy and speed required).

RESULTS. A three-way ANOVA with (i) type of listener, (ii) linguistic task (tone vs. vowel vs. intonation identification), and (iii) time pressure (speed vs. accuracy) as fixed effects was carried out on the responses. It shows that, overall, percent correct differed significantly (p < 0.001) across the types of listener, with a strong interaction between listener type and linguistic level. In tone identification, Uygur (64%) and aphasic listeners (81%) had poorer identification scores than the other three types: Beijing (99%), Nantong (98%) and Changsha (99%). This indicates that the aphasics experienced the same difficulty in tone identification as the Uygur listeners (whose mother tongue is a non-tone language). In vowel identification, however, a much smaller difference was found among the listener types (even smaller when time pressure was absent): aphasic (96%), Beijing (99%), Nantong (98%), Changsha (99%) and Uygur (99%). These findings indicate that our aphasics clearly have impaired tones, while at the same time they have some difficulty in speeded identification of their vowels. As for the identification of intonation patterns, the aphasic and Uygur listeners based their decisions primarily on the boundary tone (as evidenced by more complete cross-overs and steeper psychometric functions), whereas the tone-language groups attached more weight to the overall pitch level of the utterance. Taking reaction time (RT) into consideration, in correct tone identification both aphasic (1987 ms) and Uygur listeners (1659 ms) were much slower than the tone-language groups (1246, 1346 and 1200 ms), while in vowel identification the aphasics showed a significant lag under time pressure (1617 vs. 1208, 1233, 1152 and 1082 ms). Interestingly, in the identification of intonation patterns, Uygur (1383 ms) and aphasic (1474 ms) listeners were significantly faster than the tone-language groups (1561, 1595 and 1615 ms), and the difference among the types of listeners is much smaller than for tone identification.
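The group comparisons above reduce to two summary statistics per listener group and task: percent correct over all trials, and mean RT over correct trials only. A minimal sketch of that aggregation over hypothetical trial records (the field layout and toy values are illustrative, not the study's data):

```python
from statistics import mean

# Each trial: (listener group, task, response correct?, RT in ms).
# Toy data only, to show the shape of the aggregation.
trials = [
    ("aphasic", "tone", True, 1987), ("aphasic", "tone", False, 2100),
    ("beijing", "tone", True, 1246), ("beijing", "tone", True, 1300),
]

def summarize(trials, group, task):
    rows = [t for t in trials if t[0] == group and t[1] == task]
    pct_correct = 100 * sum(t[2] for t in rows) / len(rows)
    rt_correct = mean(t[3] for t in rows if t[2])  # RT over correct trials only
    return pct_correct, rt_correct
```

For example, `summarize(trials, "aphasic", "tone")` gives 50% correct with a mean correct-trial RT of 1987 ms on this toy data.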

CONCLUSIONS. We conclude that the aphasics had both tones and vowels impaired while keeping the intonation contrast intact. These findings support our earlier findings in a production study of tones and intonation (Liang & Van Heuven, forthcoming) showing that tonal impairment can be independent of intonation. Also, we found that the aphasics had better vowel than tone identification. For the Beijing (control) listeners, however, the effect was reversed, and the difference between tone and vowel identification was much smaller. Additionally, the RTs for tone identification were longer than those for vowel identification for the aphasics, while the opposite held for the control group. Again, these findings support our earlier study showing that tonal impairment can be independent of vowel impairment, although tones are realized on vowels.

Our perception experiments with aphasic patients provide evidence for separate representation of lexical tones and sentence intonation. Our data suggest that lexical tone and sentence intonation are separate functions with separate locations or different mechanisms in the brain. Therefore our results conflict with Packard’s (1986) production results. Furthermore, our study provides perceptual evidence for separate tonal and segmental impairment.

If these findings are replicated with larger groups of patients, then we must assume that the phonological or phonetic components of the grammar are more diffusely represented within the language-dominant hemisphere than heretofore believed. Such a structure would be consistent with the non-linear representation of vowel features, lexical tone, and sentence intonation in autosegmental phonology, in which tones and segments are represented on separate tiers and lexical tones and sentence intonation belong to different hierarchical levels in the linguistic structure.
