Undergraduate Student Research



Research Projects

SPLH majors can get involved in research through two main options: Directed Study and the Departmental Honors Program.

Directed Study is flexible, with no GPA requirement, and is typically completed in a single semester through direct coordination with a faculty mentor.

Departmental Honors is a multi-semester research program for students with a 3.5 GPA in the major. Students complete 6–8 credit hours of research (e.g., SPLH 498), develop an independent project, and present at a research symposium. The program uses a centralized matching process: each semester, students receive an email listing available mentors. Application dates are generally April for Fall and November for Spring. Accepted students will be matched with a mentor and given enrollment instructions.

To apply for Spring 2026, submit your materials to Dr. Panying Rong by Wednesday, November 19th. Space is limited, so students should register for a backup class until research placement is confirmed. 

Questions? Contact the Undergraduate Research Coordinator, Dr. Panying Rong (prong@ku.edu).

View full program details and application instructions


Independent Study Form

Students enrolling in an undergraduate independent study must complete the Independent Study Form before beginning the course. Once completed, the form should be emailed to splh@ku.edu and the course instructor.

Undergraduate Independent Study Form


Take a look below at recent SPLH student projects to get inspired and see what's possible! There are also other opportunities you can explore with SPLH, such as study abroad or leadership.

Spring 2025

Abstract: While much reading research focuses on comprehension, this study examined the production of written words by analyzing the characteristics of spelling errors. Rather than looking at accuracy, we asked which word characteristics make a word difficult to spell, which gave us insights into dyslexia and related language disorders. Results showed that commonly misspelled words were higher in frequency of occurrence than words in general. The probability of a phoneme given a grapheme was significantly higher in misspelled words, while the onset/rime-based probability of a grapheme given a phoneme was lower. Spelling proficiency ratings were significantly lower for misspelled words, which were shorter in length, with fewer phonemes and syllables. These findings suggest that spellers rely on statistically likely phoneme-grapheme mappings, even when they are incorrect. We hope that future research will examine the characteristics of the errors themselves to determine whether misspellings contain more consistent phoneme-grapheme mappings, like the common substitution of ‘ur’ for ‘your’, or whether the errors are random.
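
For readers curious what a measure like "the probability of a phoneme given a grapheme" looks like in practice, here is a minimal Python sketch. The grapheme-phoneme pairs and counts are invented for illustration; they are not the study's data.

```python
from collections import Counter

# Hypothetical counts of aligned (grapheme, phoneme) pairs; the real study
# would estimate these from a large corpus of English spellings.
pairs = Counter({
    ("c", "k"): 80,   # as in "cat"
    ("c", "s"): 20,   # as in "city"
    ("k", "k"): 60,   # as in "kit"
    ("ph", "f"): 15,  # as in "phone"
})

def p_phoneme_given_grapheme(phoneme, grapheme):
    """Estimate P(phoneme | grapheme) from co-occurrence counts."""
    total = sum(n for (g, _), n in pairs.items() if g == grapheme)
    return pairs[(grapheme, phoneme)] / total if total else 0.0

print(p_phoneme_given_grapheme("k", "c"))  # 0.8: 'c' maps to /k/ most often
```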

Abstract: Priming is a well-studied semantic memory phenomenon. While recent studies (e.g., Mace & Unlu, 2020) have demonstrated the priming effects of semantic concepts on autobiographical memory retrieval, this study explores the opposite: can autobiographical memories serve as primes for semantic concepts? To examine this, a multilayer network model was constructed, consisting of semantic associates from the Small World of Words project (Layer 1) and autobiographical associates obtained via mTurk (Layer 2). In computer simulations, activation spread across the network from semantic associates or from autobiographical associates to 30 semantic cues. In Simulation 1, the semantic cues were activated less by the autobiographical associates than by the semantic associates, but in all cases the semantic cue was activated. In Simulation 2, where 50% of semantic links were removed, 14 of 30 semantic associates failed to activate the semantic cue, while only 1 of 30 autobiographical associates failed to activate the semantic cue. These simulations suggest that autobiographical memories may provide an alternative pathway to retrieve semantic concepts, which may have implications for individuals with semantic memory impairments (e.g., aphasia, dementia).
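
As a rough illustration of the kind of simulation described above, the following Python sketch spreads activation over a tiny two-layer word network built with the networkx library. The nodes, links, step count, and decay parameter are all invented for illustration and do not reproduce the study's actual model.

```python
import networkx as nx

# Toy two-layer network: semantic associates (layer 1) and an
# autobiographical associate (layer 2) both connect to the word "dog".
G = nx.Graph()
G.add_edges_from([
    ("dog", "cat"), ("dog", "bone"),        # layer 1: semantic links
    ("my first puppy", "dog"),              # layer 2: autobiographical link
    ("my first puppy", "backyard"),
])

def spread_activation(G, source, cue, steps=3, decay=0.5):
    """Spread activation from `source`; return total activation reaching `cue`."""
    act = {n: 0.0 for n in G}
    act[source] = 1.0
    total = 0.0
    for _ in range(steps):
        nxt = {n: 0.0 for n in G}
        for n, a in act.items():
            if a == 0.0:
                continue
            share = decay * a / G.degree(n)  # split activation among neighbors
            for nb in G.neighbors(n):
                nxt[nb] += share
        act = nxt
        total += act[cue]                    # accumulate what reaches the cue
    return total

# An autobiographical memory indirectly activates a semantic associate.
print(spread_activation(G, "my first puppy", "cat"))
```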


Abstract: During conversations we not only listen to the sounds that are spoken, we also look at lip and jaw movements to help us understand what was said. To examine how “lip-reading” information is integrated with spoken information, we used the mathematical tools of network science to examine the lip-reading errors made by participants watching videos of silently spoken words. In the network of errors we created, nodes represented the cue words that were silently spoken and the incorrect responses given by participants. Directed links indicated which cue word produced which erroneous responses. Among other things, our analyses examined which phonemes (spoken sounds) were especially difficult to perceive visually (because they “look” like many other phonemes when silently spoken). The results of this study could improve the teaching of lip-reading strategies for hard-of-hearing individuals, and may help engineers improve automatic transcription (e.g., in Zoom or YouTube) by better integrating visual and spoken information via the camera and microphone on your computer.
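
The error network described above can be illustrated in a few lines of Python with networkx; the cue-response pairs below are hypothetical, not the study's data.

```python
import networkx as nx

# Hypothetical lip-reading errors: (cue word, erroneous response).
errors = [("bat", "mat"), ("bat", "pat"), ("mat", "bat"), ("fan", "van")]

D = nx.DiGraph()
D.add_edges_from(errors)

# A high in-degree means a word was often given as a (wrong) response,
# suggesting it "looks like" many other words when silently spoken.
print(sorted(D.in_degree, key=lambda kv: kv[1], reverse=True))
```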


Abstract: The purpose of this study is to identify rhythmic disturbances in the speech of individuals with Parkinson’s Disease and to determine whether rate manipulation affects articulatory and rhythmic patterns in this population. Parkinson’s Disease (PD) is a progressive neurological disorder primarily caused by the loss of neurons in the basal ganglia, which play a large role in regulating movement. Degeneration of this structure can cause motor symptoms and can bring about hypokinetic dysarthria, with symptoms such as monotone pitch, reduced volume, and imprecise articulation. For this project, nine individuals with Parkinson’s Disease and 10 healthy speakers were recorded saying eight linguistically diverse sentences at three different rates: fast, slow, and regular. From these recordings, Praat was used to extract timestamps for the onsets and offsets of sentences, words, and phonemes, as well as the voice onset times of stops, and MATLAB was used to examine the speakers’ kinematic features. Rhythmic modulation was then analyzed to produce the results. In both the acoustic and kinematic data, there was a trend of increased modulation depth at theta (syllable-level) and beta-gamma (sub-syllabic) rhythms for most of the articulators and acoustic bands. With respect to rate manipulation, slow speaking rates showed a trend toward enhanced modulation depth at both theta and beta-gamma rhythms across different frequency bands, including 100-300 Hz (related to vocal fold activity) and 3000-8000 Hz (related to consonant articulation). These results provide information about how individuals with Parkinson’s Disease may mitigate some rhythmic speech disturbances, specifically at the theta and beta-gamma rhythms, by using a slow speech rate.
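
For readers unfamiliar with modulation analysis, here is a simplified Python sketch of how a modulation depth might be computed from a recording's amplitude envelope. The band edges (4-8 Hz for theta, 15-45 Hz for beta-gamma), the file name, and the depth measure itself are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert

# Assumed mono recording of one sentence; the file name is a placeholder.
fs, x = wavfile.read("sentence.wav")
x = x.astype(float) / np.max(np.abs(x))

# Amplitude envelope via the Hilbert transform, then its modulation spectrum.
env = np.abs(hilbert(x))
env -= env.mean()
spec = np.abs(np.fft.rfft(env)) / len(env)
freqs = np.fft.rfftfreq(len(env), d=1 / fs)

def modulation_depth(lo, hi):
    """Mean envelope-spectrum magnitude in a modulation band (Hz)."""
    band = (freqs >= lo) & (freqs < hi)
    return spec[band].mean()

print("theta, 4-8 Hz (syllable-level):", modulation_depth(4, 8))
print("beta-gamma, 15-45 Hz (sub-syllabic):", modulation_depth(15, 45))
```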


Fall 2025

Abstract: Understanding how speech sounds develop in bilingual children can contribute to better identification of speech sound disorder and to more effective treatment strategies for this population. The development of shared and unshared speech sounds has been compared across many language pairs, such as English and Spanish, but there is minimal research on English-Vietnamese bilinguals. In my research, I examine the production accuracy of shared and unshared speech sounds in English-Vietnamese bilingual children. I plan to test bilingual English-Vietnamese-speaking children ages 3 to 6, administering the Goldman-Fristoe Test of Articulation and the Vietnamese Articulation Test II to each participant. I expect a higher rate of accuracy for shared than for unshared sounds in English-Vietnamese bilingual children. The outcomes of this project can provide resources that help caregivers recognize patterns of speech sound acquisition in bilingual English-Vietnamese-speaking children, as well as provide the speech and language field with research on shared and unshared speech sounds in bilingualism.

Abstract: Listening effort is the attention and resources that an individual devotes to hearing and understanding speech. When an individual encounters an adverse listening environment, such as a crowded restaurant or loud shopping mall, they need to use more of these cognitive resources to understand what a conversation partner is trying to say, especially if they have hearing loss. In this study, we are investigating neural markers of increased listening effort and listening fatigue. To do so, each participant has two separate sessions in the Speech Perception, Cognition, and Hearing Lab, each session on a different day. Session One measures the participant’s hearing and cognitive functioning, while Session Two uses an EEG cap to investigate how adverse listening environments affect neural markers of effort and fatigue. The results of this study can help improve hearing technology, such as hearing aids or cochlear implants, to be able to better adjust to loud listening environments in the future.


Abstract: Research has suggested that second language (L2) learners can improve their ability to distinguish speech sounds through distributional learning, which can occur through exposure to a bimodal distribution of two phonemes across an acoustic continuum. This skill is important in speech-language pathology and linguistics because it has implications for what kind of exposure is most effective in children's language learning. In the present study, native and non-native (accented) speakers were compared to evaluate the impact of accent on the speech perception of monolingual children. Participants were two monolingual English speakers between 6 and 9 years of age, who were tested on their ability to distinguish two newly learned phonetic categories. The study included two language conditions: a bimodal condition in which participants were exposed to two native speakers and a unimodal condition in which participants were exposed to one native speaker and one accented speaker. Based on the literature, it was predicted that the presence of an accented speaker would not aid children's distributional learning of sound categories, but would also not be detrimental. We found that both children learned the difference between the newly learned phonemes; however, the participant exposed to two native speakers was able to distinguish them better. In both conditions, participants struggled with a phonemic grey area, where the two new phonemes began to blend in the middle steps of the 5-step phonemic continuum that was created and tested. These results are not conclusive, and no statistical analysis was performed due to the small sample size. This study is ongoing; however, these results have implications for research on distributional learning and bilingual speech perception.
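
As background on distributional learning, the following Python sketch shows how exposure distributions over a phonetic continuum are typically constructed: a bimodal distribution implies two sound categories, a unimodal distribution implies one. The step counts and token frequencies are illustrative, not the stimuli used in this study.

```python
import numpy as np

# Exposure frequencies across an 8-step phonetic continuum (illustrative).
steps = np.arange(1, 9)
bimodal  = np.array([4, 8, 4, 2, 2, 4, 8, 4])   # two peaks -> two phonemes
unimodal = np.array([2, 4, 6, 8, 8, 6, 4, 2])   # one peak  -> one phoneme

rng = np.random.default_rng(0)
# Sample a training sequence of tokens for the bimodal condition.
tokens = rng.choice(steps, size=64, p=bimodal / bimodal.sum())
print(np.bincount(tokens, minlength=9)[1:])      # counts per continuum step
```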

Spring 2024


Elizabeth Biegger: How Text Characteristics Impact Comprehension on the Gray Oral Reading Test, Fifth Edition in Autistic Children

Molly Sullivan: Autistic Children's Attention to Social Information's Impact on Theory of Mind

Sofie Young: An Analysis of Social Communication Components and Its Related Hierarchy in Autistic Children 

Fall 2024

Abstract: During conversations we not only listen to the sounds that are spoken, we also look at lip and jaw movements to help us understand what was said. To examine how “lip-reading” information is integrated with spoken information, we used the mathematical tools of network science to examine the lip-reading errors made by participants watching videos of silently spoken words. In the network of errors we created, nodes represented the cue word that was silently spoken, and the incorrect responses given by participants. Directed links indicated which cue word produced which erroneous responses. Among other things, our analyses examined which phonemes (spoken sounds) were especially difficult to perceive visually (because they “look” like many other phonemes when silently spoken). The results of this study could improve the teaching of lip-reading strategies for hard-of-hearing individuals, and may help engineers improve automatic transcription (e.g., in Zoom or YouTube) by better integrating visual and spoken information via the camera and microphone on your computer.

Abstract: Verbal fluency tests (e.g., Bennett & Verney, 2018; Gollan et al., 2002) are a type of standardized test that can be used across bilingual populations to test the activation of one, each, and/or both languages. One methodological task that appears in verbal fluency tests is the phonological expressive task, in which participants are given a phoneme that appears in either or both of their languages and are asked to name as many words as possible that begin with that phoneme. In addition, phonemes can be ranked by the number of times (i.e., frequency) that the speech sound or sign appears in the respective language’s alphabet. In my study proposal, I plan to use phonological expressive tasks to measure the fluency of hearing bimodal English-American Sign Language (ASL) bilinguals. The within-subjects design will include two language conditions, English (participants’ first language, or L1) and ASL (participants’ second language, or L2). Each language condition will record responses on the phonological expressive task for the language condition’s high letter frequency and the other language’s low or high letter frequencies (refer to Figure 1). Based on the literature, it is predicted that low and high letter frequencies in ASL will have minimal impact on the number of correct responses in the English language condition, whereas the letter frequencies in English will impact the number of correct responses in the ASL language condition.

Abstract: Research has suggested that second language (L2) learners can improve their ability to distinguish speech sounds through distributional learning, which can occur through exposure to a bimodal distribution of two phonemes across an acoustic continuum (Escudero, Benders, & Wanrooij, 2011). In my study proposal, I plan to test accented conditions and their impact on speech perception. To do this, I will test monolingual English-speaking 6- to 9-year-olds on their ability to distinguish two newly learned phonetic categories. The study will include two language conditions: a bimodal condition in which participants are exposed to two native speakers and a unimodal condition in which participants are exposed to one native speaker and one accented speaker. Based on the literature, it is predicted that the presence of an accented speaker will not aid children’s distributional learning of sound categories, but will also not be detrimental.

Abstract: The purpose of this study is to explore the relationship between mothers’ choice of book during shared reading with their infant and the child’s engagement and attention within a book-sharing intervention. Book sharing is a specific approach to reading with very young children that targets language-rich interaction. Salley et al. (2022) reported a strong, positive parent response to Ready Set Share a Book!, a parent-education-based book-sharing intervention for parents of infants and toddlers. The current study is a secondary analysis of video data from nine mother-child dyads in the seventh and eighth weeks of one-on-one parent coaching sessions from the Ready Set Share a Book! intervention. Parent-child shared reading sessions were coded for child attention, following Ruff and Lawson (1990). Child attention was compared across two conditions: (1) the book was assigned and (2) the book was chosen by the mother. It was hypothesized that children would spend a larger percentage of book-sharing time actively engaged, or in “focused purposeful” attention, when their mother was given the option to select a familiar book than when a book was assigned. The study did not find a statistically significant difference between infant attention states when mothers were given the option to choose a book (week 8) versus when they were assigned one (week 7). This result suggests that parent book choice should continue to be examined to clarify the effect, or lack thereof, of book choice versus assignment in future book-sharing intervention designs.

Spring 2024

Abstract: This study explores how book type affects infant engagement during shared reading. It compares infants' attention states while reading books featuring real faces versus animated illustrations. Analysis reveals that infants exhibit higher focused attention with books containing saturated colors and real/live pictures. The findings emphasize the importance of book characteristics in shaping infants' engagement during shared reading. Further research is needed to understand the underlying mechanisms and the interactions between book type and caregiver reading strategies.

Abstract: This research aims to track the progression of speech production and perception in individuals with neurodegenerative diseases such as amyotrophic lateral sclerosis (ALS) and Parkinson’s. The study uses various digital tools to detect speech elements that may be difficult to hear by ear alone, with the goal of supporting early intervention and progression detection. The first phase of this project was to review previous research articles related to the current study. The second phase examined the perception side: the speech analysis software Praat was used to transcribe speech samples from various participants, tracking a variety of speech elements, such as pauses within and between sentences. Finally, the transcription results were uploaded into a statistical programming environment (RStudio) and results were compared between sessions.
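
As a simplified illustration of automatic pause detection (the project itself used Praat and RStudio), the following Python sketch finds pauses from a frame-by-frame loudness envelope. The file name, frame size, silence threshold, and minimum pause duration are all assumptions for illustration.

```python
import numpy as np
from scipy.io import wavfile

# Placeholder file name; assumes a mono recording.
fs, x = wavfile.read("speech_sample.wav")
x = x.astype(float) / np.max(np.abs(x))

frame = int(0.025 * fs)                      # 25 ms analysis frames
rms = np.array([np.sqrt(np.mean(x[i:i + frame] ** 2))
                for i in range(0, len(x) - frame, frame)])

silent = rms < 0.05                          # assumed silence threshold
min_frames = int(0.150 / 0.025)              # pauses must last >= 150 ms

pauses, run = [], 0
for s in silent:
    if s:
        run += 1
    else:
        if run >= min_frames:
            pauses.append(run * 0.025)       # pause duration in seconds
        run = 0
if run >= min_frames:
    pauses.append(run * 0.025)

print(len(pauses), "pauses; durations (s):", [round(p, 2) for p in pauses])
```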

Fall 2023

Abstract: The purpose of this research project is to determine the feasibility of an alternative eye-tracking method that integrates surface electrodes (EOG, or electro-oculography) with augmented reality (AR) glasses. This system has the potential to serve as a method of assistive communication for individuals with severe motor impairments. Current eye-tracking methods use infrared cameras to record eye movement by reflecting light off the cornea, which presents challenges for individuals with droopy eyelids or involuntary head movements, or who wear prescription glasses. In contrast, the proposed system addresses these limitations by capturing eye movements directly from the electrodes and translating them onto a wearable display that moves with the user. We began with calibration tasks that decoded a user's gaze as it followed a moving target, similar to those used in camera-based eye tracking. However, results indicate that calibration tasks used in camera-based eye tracking do not translate well to this interface. A tracking task that more specifically elicits the EOG signal using saccadic eye movements will be explored next.
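
As a rough sketch of how eye movements can be decoded from an EOG channel, the Python example below detects a simulated saccade with a simple velocity threshold. The sampling rate, signal amplitudes, and threshold rule are assumptions for illustration; the project's actual decoding pipeline is more involved.

```python
import numpy as np

fs = 250                                     # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)

# Simulated horizontal EOG: a 200 microvolt step at t = 1 s is one saccade.
eog = np.where(t > 1.0, 200e-6, 0.0)
eog = eog + np.random.default_rng(1).normal(0, 5e-6, t.size)  # electrode noise

# Velocity-threshold detection: saccades produce brief, large derivatives.
velocity = np.gradient(eog, 1 / fs)
threshold = 5 * np.std(velocity[: fs // 2])  # calibrate on a quiet baseline
onsets = np.flatnonzero(np.abs(velocity) > threshold)

if onsets.size:
    print("first saccade detected at", round(onsets[0] / fs, 3), "s")
else:
    print("no saccade detected")
```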

Abstract: This research project aims to find prelinguistic predictors of communication outcomes in children with autism. I examined a small portion of the data to compare results from the scripted Communication Complexity Scale (CCS) and the Parent-Child Free Play (PCFP). I proposed the following research question: are there differences in participants' CCS scores across the two assessment contexts? The data consisted of Optimal scores, the average of the top three scores, and Typical scores, the average of the middle 50% of scores. We found that most participants scored higher on the scripted CCS assessment than on the parent-child free play for both Optimal and Typical scores. Optimal scores in the two contexts were not significantly different (p > 0.05), while Typical scores in the scripted CCS assessment were significantly higher than Typical scores in the PCFP (W(18) = 22, p < 0.05). Overall, as expected, CCS scores were consistently higher than PCFP scores, and Optimal scores were higher than Typical scores. Once a larger sample size is obtained, future research will look at why one score is more affected by the change in assessment context and will compare the CCS and PCFP to other assessments being conducted for the study.
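
The paired comparison reported above (W(18) = 22, p < 0.05) is consistent with a Wilcoxon signed-rank test. Here is a minimal sketch of such an analysis in Python using scipy; the paired scores are invented for illustration, not the study's data.

```python
from scipy.stats import wilcoxon

# Hypothetical paired scores: each child's Typical score in the scripted
# CCS versus in parent-child free play (illustration only).
ccs_typical  = [3.1, 2.8, 3.4, 2.9, 3.6, 3.0, 2.7, 3.3]
pcfp_typical = [2.4, 2.6, 2.9, 2.5, 3.1, 2.6, 2.3, 2.8]

stat, p = wilcoxon(ccs_typical, pcfp_typical)
print(f"W = {stat}, p = {p:.3f}")
```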

Abstract: Visual supports are an evidence-based practice that helps autistic individuals complete tasks and follow directions. However, the use and effectiveness of visual supports in spoken and written language interventions is unknown. The current study evaluated whether visual supports were mentioned, and how they were used, in spoken and written language comprehension interventions from an existing systematic review. The study also assessed whether participant characteristics (i.e., age, intellectual disability) and/or intervention characteristics (i.e., study design, treatment type, interventionist type) predicted whether a visual support was used in an intervention. The indication (i.e., used, suggested, or not used), category (i.e., boundary, cue, or schedule), and format (i.e., picture, word, map, organization system, script, drawing, or physical alteration) of visual supports were coded for 61 articles from the existing systematic review. Thirty-six articles indicated that a visual support was used; however, the category and format, as well as the number of visual supports used in each intervention, were often not specified. Participant characteristics (i.e., age, intellectual disability) did not significantly predict visual support use, while study design and interventionist type did. Insufficient reporting limited our ability to determine whether visual supports were a key component of the effectiveness of spoken and written language comprehension interventions.

Abstract: This study aimed to investigate subclinical speech markers to establish objective measures for early ALS diagnosis and for tracking disease progression. ALS leads to the weakening and atrophy of the muscles involved in speech production; the muscles responsible for controlling airflow during speech, such as the diaphragm and intercostal muscles, can be affected, making it more challenging for individuals to coordinate the precise movements required for speech. "Subclinical" refers to indicators of a disorder that are not prominent enough to produce noticeable symptoms meeting the criteria for a clinical diagnosis; the symptoms studied here fall below the threshold of observable speech changes but may still involve detectable changes at the laboratory level. Two speech characteristics, pausing and rhythm, were studied by comparing parsed speech samples from healthy speakers and speakers with ALS.