Undergraduate Research Projects
Below is a list of research projects completed by undergraduate students in the Departmental Honors and Research Experience Programs.
Abstract: Though audiologists and audiology graduate students typically earn undergraduate degrees in communication sciences and disorders, audiology-centered topics are often de-emphasized in the undergraduate curriculum. The purpose of this study is to describe the current status of undergraduate audiology education in the United States, including the availability and content of audiology-related clinical observation and practicum experiences. This descriptive study confirms a distinct bias toward speech-language pathology (SLP) programming in terms of undergraduate clinical experiences.
Abstract: This study investigates differences between the grammatical language performance of children with Developmental Language Disorder (DLD) in conversation and narrative language samples. Language samples were collected from five kindergarten-aged children with language deficits. Using the Systematic Analysis of Language Transcripts (SALT) software, these samples were coded and measured for Mean Length of Utterance (MLU) in morphemes, total number of utterances, and Percent Grammatical Utterances (PGU). Findings indicate that the participants produced more total utterances and more grammatical utterances in conversation than in a narrative retell. Additionally, participants used longer utterances in narrative samples. Findings suggest that including additional language sampling measures, such as narrative retell, can reveal grammatical weaknesses that might not surface in conversational speech.
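Once utterances have been coded, the two sample measures named above reduce to simple arithmetic. The sketch below is plain Python over a made-up mini-sample (it is not SALT, which computes these from full transcripts); it only illustrates how MLU in morphemes and PGU are defined.

```python
# Hypothetical mini-sample: each utterance is (morpheme_count, is_grammatical).
# Values are invented for illustration; SALT derives them from coded transcripts.

def mlu_morphemes(utterances):
    """Mean Length of Utterance in morphemes: total morphemes / utterances."""
    return sum(m for m, _ in utterances) / len(utterances)

def pgu(utterances):
    """Percent Grammatical Utterances: share of utterances coded grammatical."""
    grammatical = sum(1 for _, ok in utterances if ok)
    return 100.0 * grammatical / len(utterances)

sample = [(4, True), (6, False), (3, True), (7, True), (5, False)]
print(mlu_morphemes(sample))  # 5.0
print(pgu(sample))            # 60.0
```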
Abstract: The purpose of this study was to explore how effective communication interactions change over time for individuals diagnosed with aphasia, and how that may be impacted by participation in conversation group therapy. The original goal of this research was to compare language samples obtained from two individuals during conversation group therapy. Select samples were transcribed and coded using an adapted version of Damico’s Discourse Analysis. Due to the retrospective nature of the study, several unforeseen issues were encountered. Thus, the purpose of the research shifted towards the construction of a prospective study. Knowledge gained from the complications of the retrospective study will likely impact the design of future research on this topic.
Abstract: The purpose of this research was to examine cognitive resource allocation during a dual-task paradigm. We were interested in our participants' P300 responses, which provide a way to measure cognitive spare capacity. Eight young adult participants with normal hearing completed two tasks: first memorizing a digit sequence (1, 3, or 5 digits), then listening to speech in noise (spoken words). There were three levels of difficulty: easy (10 dB signal-to-noise ratio, or SNR), medium (0 dB SNR), and difficult (-5 dB SNR). Participants were then required to report the speech first, followed by the digit sequence. We measured listening effort using electroencephalography (EEG), which records electrical activity from different groups of neurons via electrodes placed on the scalp. Our results indicated a quadratic pattern: the P300 response was higher at sH (signal high) and sL (signal low), and low at sM (signal medium). The 95% confidence intervals for sL and sM did not overlap at D1 (digit load one), nor at D5 (digit load five), indicating statistically significant differences between those conditions. Our results were consistent with previous research (Wu et al.) and showed that when the task became too difficult, our participants either disengaged or gave up.
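The non-overlap comparison described above can be illustrated with a small sketch. This is plain Python with invented P300 amplitude values and a simple normal approximation to the 95% interval; it is not the study's actual statistical procedure.

```python
import statistics

def ci95(values):
    """Approximate 95% confidence interval for the mean (normal approximation)."""
    mean = statistics.fmean(values)
    se = statistics.stdev(values) / len(values) ** 0.5
    return mean - 1.96 * se, mean + 1.96 * se

def intervals_overlap(a, b):
    """True if intervals a and b share any points."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical P300 amplitudes (microvolts) for two SNR conditions.
low_snr = [4.1, 4.3, 3.9, 4.5, 4.2, 4.0, 4.4, 4.2]
med_snr = [2.0, 2.2, 1.9, 2.4, 2.1, 2.3, 1.8, 2.1]
print(intervals_overlap(ci95(low_snr), ci95(med_snr)))  # False
```

Non-overlapping 95% intervals are a conservative indicator of a significant difference between condition means; overlapping intervals, by contrast, do not by themselves rule out a significant difference.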
Abstract: The purpose of this study is to examine the use of decontextualized language by mothers of children with fragile X syndrome across three contexts. Decontextualized language involves communicating about the future or past, entities that are not present, abstract ideas, or pretend talk. Videotaped mother-child interactions in three contexts—reading a book, engaging in free play, and making a snack together—were transcribed and coded for use of three types of decontextualized language. We are interested in whether context plays a role in the quantity of decontextualized language and, furthermore, which type of decontextualized language (narrative, explanation, or pretend) is used more often. Decontextualized language, particularly narrative and explanation, has been shown to be a predictor of vocabulary and syntax in young children.
Abstract: The purpose of this study is to examine how speech and the articulators are affected by amyotrophic lateral sclerosis (ALS), a neurodegenerative disease, by analyzing acoustic and kinematic markers for target speech sounds. These target speech sounds occur within fifteen minimal word pairs, which elicited the acoustic and kinematic markers for analysis and displayed the underlying reduced contrast. The study included thirteen patients diagnosed with ALS and ten control speakers for comparison. Each participant wore sensors on the lips, tongue, and jaw while reading the minimal word pairs aloud. The readings were recorded to obtain five trials for each minimal word pair and to calculate Intelligible Speech Rate. Acoustic markers were measured by extracting specified acoustic features, including first and second formant frequencies, vowel duration, band-passed acoustic energy, burst amplitude, rise time, noise duration, and second and third formant steady-state duration. Kinematic markers were measured by computing dissimilarity for displacement and velocity from the tongue dorsum, tongue tip, lower lip, and central jaw sensors. These measurements were then analyzed using a LASSO regression with Intelligible Speech Rate and the different signals as the dependent variable. The regression also provided correlates for both acoustic and kinematic measures. The results of this analysis will be used to compare speech and articulation between ALS patients and control speakers, and to determine the kinematic contributions to the measured acoustic features. These findings will provide clinicians who work with ALS patients with useful information for assessment, treatment, and tracking the progression of the disease.
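LASSO regression is well suited to this kind of problem because its L1 penalty shrinks the coefficients of uninformative predictors exactly to zero, leaving only the acoustic and kinematic markers that carry signal. As an illustration only (not the study's analysis pipeline), here is a minimal coordinate-descent LASSO in plain Python, applied to toy data:

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator at the core of LASSO coordinate descent."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Minimal LASSO via coordinate descent (illustrative sketch only)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j's contribution removed.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

# Toy data: y depends only on the first feature, so the second coefficient
# is driven exactly to zero by the L1 penalty.
X = [[1, 1], [2, -1], [3, 1], [4, -1]]
y = [2, 4, 6, 8]
beta = lasso_cd(X, y, lam=1.0)
print([round(b, 3) for b in beta])  # [1.967, 0.0]
```

Note the informative coefficient is also shrunk slightly below the true value of 2; this bias is the price LASSO pays for exact-zero selection.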
Dr. Brady and Dr. Warren have been collecting data from families with at least one child with Fragile X Syndrome since 2004 to analyze parenting strategies and how they affect language development in adolescents with Fragile X Syndrome. Fragile X Syndrome is a genetic disorder that causes developmental delays and other cognitive impairments. Forty-seven mothers of a child with Fragile X Syndrome were interviewed by a researcher from the lab. These semistructured interviews (SSIs) were recorded, transcribed, and coded. Qualitative data were then drawn from the SSI language samples in order to analyze development in children with Fragile X Syndrome as well as their daily living setup and family dynamics. This project is an ongoing longitudinal study.
Fragile X syndrome (FXS) is caused by an increased number of CGG (cytosine-guanine-guanine) trinucleotide repeats in the Fragile X Mental Retardation 1 (FMR1) gene. The FMR1 gene is responsible for producing the fragile X mental retardation protein (FMRP), which is essential for normal cognitive development. FXS is the most common inherited cause of intellectual disability and autism. In addition, most males and many females with FXS have language impairments. The hypothesis of this study is that individuals with FXS may not understand the illocutionary force of an indirect request and will be less likely to comply with an indirect request than with a direct request.
Decline in Fragile X Syndrome, Autism Spectrum Disorder, and Down Syndrome
Narrative fiction creates a simulative experience of social interactions (Mar & Oatley, 2008). Some texts, like narrative fiction, are more social than nonfiction narrative texts; these two types of texts occupy opposite ends of a spectrum of "text socialness," which can be quantified through a variety of metrics. Previous work has examined the difference between fiction and nonfiction with respect to text socialness (Bradshaw & Davidson, 2019). My aim, however, is to narrow the comparison to a more closely matched set of texts. My research shows statistically significant differences between the two types of books on all four of our socialness measures. My work also captures a statistically significant difference between fiction and nonfiction 4th- and 5th-grade texts on the measure of syntactic simplicity.
The purpose of this research is to determine how speech articulation is affected by rate manipulation in individuals with ALS. ALS is a progressive, neurodegenerative disease that disrupts the transmission of central motor commands to muscle fibers, which leads to slow, impaired speech. In this project, I analyzed the acoustic and kinematic data recorded during slow, regular, and fast speech in seven individuals with varying severities of bulbar motor involvement in ALS and eight healthy controls, using the Praat and MATLAB software programs. Based on the analyses, a variety of acoustic and kinematic features that characterized different aspects of articulatory impairments in ALS were derived from various speech sounds (i.e., vowels, diphthongs, fricatives, stops). These features were compared between the ALS and healthy control groups and across different rate conditions using the R Project for Statistical Computing software. The results showed differential effects of rate manipulation on the acoustic and kinematic features of different speech sounds in individuals with ALS, which provided useful information to help tailor rate manipulation therapy for prolonging speech communication in these individuals.
This systematic review and meta-analysis was designed to evaluate the effectiveness of interventions that use responsivity strategies for improving prelinguistic and language outcomes in children with autism spectrum disorder. Responsivity intervention strategies are designed to support the development of turn-taking conversations by setting up the environment to increase communication, following the child's lead, and using natural reinforcement for communicative attempts. Forty-seven reports met criteria for inclusion as randomized controlled trials. We identified a significant, positive mean effect size for interventions that include responsivity strategies increasing prelinguistic and language skills in children with ASD compared with a randomly assigned control condition. We also conducted moderator analyses and identified wide variation in how caregivers were trained to implement the intervention strategies.
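A pooled mean effect size of this kind is conventionally an inverse-variance weighted average of the per-study effects. The sketch below uses hypothetical effect sizes and variances and fixed-effect weighting for simplicity (published meta-analyses of heterogeneous interventions more often use random-effects models):

```python
# Hypothetical per-study (effect_size, variance) pairs, e.g. Hedges' g.
# Inverse-variance weighting gives precise studies more influence.
studies = [(0.45, 0.02), (0.30, 0.05), (0.60, 0.01)]

weights = [1.0 / v for _, v in studies]
pooled = sum(w * g for (g, _), w in zip(studies, weights)) / sum(weights)
se_pooled = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled effect

print(round(pooled, 3))     # 0.521
print(round(se_pooled, 3))  # 0.077
```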
How does the brain utilize auditory feedback during speech? Several theories propose that speech production relies on an internal forward model that compares sensory feedback with expected speech output, or a motor efference copy, before further processing. If feedback aligns with the predicted efference copy, activation within the auditory cortex decreases, as shown in the speaking-induced suppression (SIS) response to self-generated speech. This raises the question: what aspects of speech modulate the SIS response? Little research exists on the relationship between the SIS response and auditory bone conduction, a process in which speech sounds vibrate the skull and directly stimulate the cochlea. This project aims to investigate the neurological effects of manipulating auditory feedback via air- and bone-conduction masking with electroencephalography (EEG). Findings will not only indicate how healthy individuals use feedback to adjust and process speech as it unfolds; results may also be contrasted with those of individuals with deficits in speech perception, as a potential method of clinical assessment.
Gabrielle Rosenwald - The Relationship Between Working Memory Tasks and Hearing
Abstract: This research is a survey that will be sent to families of AAC users regarding their knowledge of and interest in telepractice. A 2011 ASHA study revealed a large gap in the usage of telepractice. This survey intends to gather information on families' perceptions of telepractice, how they use their AAC devices, and what benefits they would hope to gain from participating in telepractice. The survey will be distributed through national organizations, social media groups, and word of mouth. It will be sent out in January 2020, and results will be examined immediately.
Abstract: The purpose of this project was to determine whether there is a relationship between standardized test measures of language and vocabulary and language sample measures from a narrative retell sample in children with developmental language disorders (DLD). Ten kindergarten-age (5- to 6-year-old) children with DLD completed standardized tests (of morphosyntax, narrative comprehension and production, word relationship identification, semantic knowledge, and parent-reported semantic ability) and narrative retell language samples. Language samples were analyzed using the Systematic Analysis of Language Transcripts (SALT) software. There was some overlap between tests of narrative ability and word relationships and language sample measures at the sentence level. However, not all measures hypothesized to be related were correlated. This research confirmed previous findings of moderate correlations between some standardized test measures and language sample measures.