Authors:
van der Stelt, Candace M., cav122@pitt.edu, University of Pittsburgh
Wallace, Sarah E., Sarah.wallace@pitt.edu, University of Pittsburgh
Madden, Elizabeth, Elizabeth.Madden@cci.fsu.edu, Florida State University
Dickey, Michael Walsh, mdickey@pitt.edu, University of Pittsburgh
Keywords: post-stroke alexia, silent reading comprehension, assessment
Abstract:
Introduction: Alexia diagnoses are based on error patterns in reading aloud words that vary in their lexical features: imageability, orthographic consistency, and frequency. These patterns have identified four alexia subtypes: phonological, deep, surface, and global [1]. Individuals with either phonological or deep alexia have difficulty reading aloud low-imageability words (truth) compared to high-imageability words (horse) [2]. In contrast, individuals with surface alexia have difficulty reading aloud inconsistent words (dove) compared to consistent words (peace) [3]. Importantly, alexia manifests in difficulty with both oral and silent reading [4]. However, no known studies to date have investigated how lexical features impact silent reading comprehension. We aim to investigate the impact of imageability, consistency, and frequency on silent reading comprehension in people with alexia (PWA), both as a group and within each alexia subtype.
Methods: Thirty-seven PWA following left-hemisphere stroke completed written synonym judgements [5] on word-pairs that varied in imageability [6], frequency [7], and consistency [8]. An item-level logistic mixed-effects model fit to the whole group tested for main effects and interactions of these lexical variables. Next, we ran parallel models on subgroups of participants based on alexia subtype [9]: phonological (n=23), deep (n=4), and global (n=5).
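A minimal sketch of the item-level analysis described above, assuming a long-format table with one row per participant-by-word-pair trial. The file and column names (synonym_judgements.csv, accuracy, imageability, frequency, consistency, participant, item, subtype) are hypothetical, and statsmodels' Bayesian mixed GLM is used here only as an approximation of a logistic mixed-effects model; the original analysis may have been fit with different software.

import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical long-format data: one row per participant x word-pair,
# with a binary accuracy outcome and the three lexical predictors.
df = pd.read_csv("synonym_judgements.csv")

# Fixed effects: main effects, two-way interactions, and the three-way
# interaction of imageability, frequency, and consistency.
formula = "accuracy ~ imageability * frequency * consistency"

# Crossed random intercepts for participants and word-pair items.
vc_formulas = {
    "participant": "0 + C(participant)",
    "item": "0 + C(item)",
}

model = BinomialBayesMixedGLM.from_formula(formula, vc_formulas, df)
result = model.fit_vb()  # variational Bayes estimate of the logistic GLMM
print(result.summary())

# Parallel subgroup models: refit the same specification on each subtype.
for subtype in ["phonological", "deep", "global"]:
    sub = df[df["subtype"] == subtype]
    sub_model = BinomialBayesMixedGLM.from_formula(formula, vc_formulas, sub)
    print(subtype, sub_model.fit_vb().summary())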
Results: Whole-group: High-imageability and high-frequency word-pairs were more accurate than low-imageability and low-frequency word-pairs (both p<.001). Additionally, there was a significant three-way interaction: within low-imageability word-pairs, consistency interacted with frequency such that inconsistent word-pairs were more accurate than consistent word-pairs when they were also low-frequency (p=.03).
Subgroups: The advantage for high-imageability and high-frequency word-pairs was maintained in the phonological (p<.001 and p=.03, respectively) and global (both p=.02) subgroups. In global alexia, there was also a frequency-by-imageability interaction (p=.004): low-imageability word-pairs were less accurate, and even more so when they were also low-frequency.
Discussion: The whole-group analysis demonstrates that words with weak semantic representations are more difficult, and that phonological skills fail to compensate for these word types (i.e., the three-way interaction). For phonological and global alexia, the imageability effect (high > low) in this silent reading task mirrors performance in reading aloud [2]. The deep alexia subgroup was small and showed wide variability in accuracy, which we believe explains the absence of an imageability effect in that subgroup. Future research with this dataset will investigate how lexical features relate to reading comprehension of texts.
References