
prof. dr. M.B. (Martijn) Wieling, MSc

Professor by special appointment of Low Saxon / Groningen Language and Culture | Associate Professor Information Science

E-mail: m.b.wieling@rug.nl / wieling@gmail.com

[N.B. All projects, including smaller ones, can be found at http://www.martijnwieling.nl]

VIVIS project (2/2016 - 1/2017; €50,000):
Speech recognition for congenitally blind speakers

Previous scientific research (Ménard et al., 2009) has shown that French congenitally blind (i.e. blind since birth, CB) speakers pronounce vowels acoustically less distinctly than non-blind (NB) French speakers. French CB speakers appear to use their lips less prominently than French NB speakers (Ménard et al., 2013). This suggests that visual input during the phase in which speech is acquired has a clear impact on pronunciation and the underlying articulatory (tongue and lip) movements.

The research of Ménard and her colleagues (2009, 2013), however, was limited to studying isolated vowels, and it is currently unknown if and how the pronunciations of the CB and NB speakers differ in running speech. Even though CB speakers can be easily understood by human listeners, a structurally more limited vowel space may negatively impact word recognition by automatic speech recognition (ASR) software.
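
To illustrate how the compactness of a vowel space can be quantified from acoustic data, the sketch below computes the area of the convex hull spanned by mean first and second formant (F1/F2) values. The formant values and vowel labels are invented placeholders for illustration only, not data from this project.

    # Illustrative sketch (not project code): quantify a speaker's acoustic
    # vowel space as the area of the convex hull around mean F1/F2 values.
    # The formant values below are invented placeholders.
    import numpy as np
    from scipy.spatial import ConvexHull

    # Mean F1/F2 (Hz) per vowel for a hypothetical speaker
    formants = {
        "i": (300, 2300),
        "e": (450, 2000),
        "a": (750, 1300),
        "o": (480, 950),
        "u": (320, 800),
    }

    points = np.array(list(formants.values()))
    hull = ConvexHull(points)          # convex hull in the F1/F2 plane
    vowel_space_area = hull.volume     # for 2-D input, .volume is the area

    print(f"Vowel space area: {vowel_space_area:.0f} Hz^2")
    # A smaller area for one group of speakers (e.g. CB vs. NB) would indicate
    # acoustically less distinct vowels.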

Given that the frequency of use of ASR software is increasing, it is important to investigate whether the results of Ménard and her colleagues for French speakers generalize to running speech in another language (Dutch), and to assess whether these potential structural differences between the CB and NB speakers affect ASR negatively. If that is the case, CB speakers may be trained to use their lips more prominently during speech, or specific ASR systems may be developed for CB speakers.

Besides collecting acoustic data, we (Pauline Veenstra and I) will also collect articulatory data (i.e. the movement of tongue and lips) using an articulography device in this project. This project is funded by the Vereniging van Instellingen voor mensen met een Visuele beperking (VIVIS).

NWO Veni project (9/2013 - 8/2017; €250,000):
Improving speech learning models and English pronunciation with articulography

People learning a second language generally have a clearly noticeable accent. According to current speech learning models, such as Flege’s (1995) Speech Learning Model or Best’s Perceptual Assimilation Model (Best, 1995), these accents can be attributed to the similarity of the sounds or sound contrasts in the native and non-native languages.

Instead of determining the impact of the native language on second language pronunciation via the acoustics, I propose to focus on the movements of the speech articulators (i.e. tongue and lips) responsible for the production of speech. As the articulatory perspective offers a more precise view than acoustics alone, this will be informative for speech learning models. To obtain the articulatory movements I will use an electromagnetic articulography device, which measures the three-dimensional trajectories of several sensors attached to the tongue and lips during speech.
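
To give an impression of what such articulatory data look like, the sketch below represents a single sensor (e.g. the tongue tip) as a time series of 3-D positions, low-pass filters it, and computes its vertical displacement range. The sampling rate, cut-off frequency, and synthetic trajectory are assumptions for illustration, not properties of the actual device output.

    # Illustrative sketch (assumed data layout, not the actual EMA format):
    # one sensor's trajectory as an (n_samples, 3) array of x/y/z positions.
    import numpy as np
    from scipy.signal import butter, filtfilt

    fs = 200.0                               # assumed sampling rate in Hz
    t = np.arange(0, 2.0, 1.0 / fs)          # two seconds of data
    # Synthetic tongue-tip trajectory: small oscillation along the vertical (z) axis
    tongue_tip = np.column_stack([
        np.zeros_like(t),                    # x: front-back (mm)
        np.zeros_like(t),                    # y: left-right (mm)
        5.0 * np.sin(2 * np.pi * 3.0 * t),   # z: up-down (mm)
    ])

    # Low-pass filter to suppress measurement noise (20 Hz cut-off, assumed)
    b, a = butter(4, 20.0 / (fs / 2.0), btype="low")
    smoothed = filtfilt(b, a, tongue_tip, axis=0)

    vertical_range = smoothed[:, 2].max() - smoothed[:, 2].min()
    print(f"Vertical tongue-tip displacement: {vertical_range:.1f} mm")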

In order to compare articulatory patterns and relate these patterns to predictions of current speech learning models, I will obtain and analyze articulatory data for native English speakers, as well as two groups of non-native English speakers (i.e. Dutch and German).

Given that I will collect articulatory data, a more practical aim of this project is to use on-screen visual feedback of the speech articulators (which are not always readily observable during face-to-face communication) to improve the English pronunciation of non-native speakers. The dynamics of pronunciation learning, in particular, are relevant for speech learning models. In three intervention studies, I will assess the effectiveness of this approach by visualizing the articulator movements of a reference native English speaker during speech together with (a) the participant’s own articulator movements, (b) the articulator movements of a typical non-native speaker with the same native language, or (c) the approximate articulator movements of the participant (obtained by converting the speech signal).

NWO Rubicon project (9/2012 - 8/2013; €59,000):
Investigating language variation physically

In this research proposal, we suggest a new level at which dialectal variation should be investigated. We propose to focus on the articulatory characteristics underlying the speech signal, to obtain a more detailed view of linguistic variation than traditional phonetic analysis provides. A novel method, electromagnetic articulography (EMA), will allow us to measure the trajectories of several points in and near the mouth in three-dimensional space. As there is only limited experience in analyzing these trajectories, we shall investigate the suitability of generalized additive models (GAMs). GAMs allow the user to model non-linear relations in any number of dimensions, and may be used here to predict the position of a point (such as the tongue tip) over time in space, while also taking covariates into account. We will obtain and analyze EMA data in two experiments. In the first experiment we assess whether GAMs applied to EMA data are able to distinguish northern from southern dialects on the basis of well-known pronunciation differences between the two regions. In the second experiment, we attempt to investigate dialectal variation more broadly. The results of both experiments will inform us about the feasibility of using EMA data to investigate language variation physically.
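
As a rough illustration of the kind of model referred to here, the sketch below fits a generalized additive model to synthetic tongue-tip height as a smooth function of time, with dialect group as a covariate, using the pyGAM library. The data, smoother choices, and library are illustrative assumptions, not the analysis actually carried out in the project, which may well use different software.

    # Illustrative GAM sketch (synthetic data; pyGAM library assumed available):
    # model tongue-tip height as a smooth function of time, with dialect group
    # (0 = northern, 1 = southern) as an additional factor covariate.
    import numpy as np
    from pygam import LinearGAM, s, f

    rng = np.random.default_rng(0)
    n = 400
    time = rng.uniform(0, 1, n)                    # normalized time in the word
    group = rng.integers(0, 2, n)                  # dialect group indicator
    # Synthetic height: a smooth curve over time plus a group offset and noise
    height = np.sin(2 * np.pi * time) + 0.5 * group + rng.normal(0, 0.2, n)

    X = np.column_stack([time, group])
    gam = LinearGAM(s(0) + f(1)).fit(X, height)    # smooth of time + group factor

    # Predicted trajectories for both groups over the time course
    grid = np.linspace(0, 1, 100)
    for g in (0, 1):
        pred = gam.predict(np.column_stack([grid, np.full_like(grid, g)]))
        print(f"group {g}: predicted height at t≈0.5 is {pred[50]:.2f}")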

Last modified: 19 September 2018, 12:17 a.m.

Contact information

Oude Kijk in 't Jatstraat 26
9712 EK Groningen
The Netherlands

Office

Room: 1311.0434