A Neurobiological Case for Early Sign Language Education for the Deaf and Hard of Hearing
Published in Grey Matters Journal, GSU Chapter, Spring 2025, featured on pgs. 12-17
Original Article Link: https://bpb-us-w2.wpmucdn.com/...
Could you say everything you’re thinking without making a sound? Through signed languages, the Deaf and Hard of Hearing (D/HH) can. While a child’s first language is most often developed passively, by listening to the speech of primary caregivers, that option is not available to D/HH children. In the absence of sufficient auditory input, alternative modalities of language learning are essential, such as hearing assistive devices and signed languages. With proper language education, D/HH children can go on to match the success of their hearing peers, but this is often not their reality. A look back through history shows us how a movement to eradicate signed languages disadvantaged the Deaf community for nearly two centuries. Its effects still linger, with speech-only language education remaining the primary communication model for D/HH children today. Emerging neuroscience research, however, suggests that early sign language education is critical to the long-term success of a Deaf or Hard of Hearing person, and that language access and speech access are not the same.
Historically, D/HH education was rooted in residential schools that taught through signed languages, with some lipreading and speech materials integrated into the curriculum [13]. It wasn’t until the mid-1800s that the Deaf community encountered its greatest tragedy: the oralist movement. During this time, fears surrounding ethnic and linguistic diversity were growing internationally due to rising immigration rates. D/HH communities and their unique signed languages were not immune to the effects of these spreading fears. Around the 1860s, international educators began campaigning for the eradication of signed languages and the use of speech-only education for the Deaf, known as oralism. Proponents of oralism claimed that sign language use impaired spoken language learning and worsened discrimination against the D/HH. After the Milan Conference, an international educators’ conference held in the late 1800s, an international ban on sign language education was put in place. This skyrocketed the popularity of the oralist movement, and despite protests from Deaf educators, signing programs were quickly replaced with speech-only education. Some oralist supporters, such as American inventor Alexander Graham Bell, advocated expanding sign language bans to include bans on intermarriage among Deaf people, with the aim of eradicating deafness [13]. Roughly a century later, sign language began to find its way back into Deaf education through the work of Deaf activists and allies [5]. But the damage was already done.
Modern approaches to early language education for the D/HH are often still founded in the school of thought that sign language use impairs spoken language learning. Oralist traditions remain alive and well, and their results do not go unnoticed. D/HH children consistently show educational delays compared to their hearing peers, likely due to incomplete early language access [12]. These effects continue into adulthood. It is estimated that roughly 70% of D/HH adults have language deprivation syndrome, a combination of deficits in cognition, behavior, memory, and communication caused by impaired early access to language. The syndrome is virtually non-existent in hearing populations, highlighting the need for further investigation into the shortcomings of early language education for the D/HH [8].
Pervasive misconceptions about signed languages and the true capacity of hearing technologies may contribute to present gaps in D/HH language education. Historically, signed languages have been viewed as incomplete, primitive forms of communication compared to spoken languages. Modern research, however, has shown that signed languages are equal in complexity to their spoken counterparts, with comparable linguistic structures such as unique grammar, syntax, and idioms [11]. The use of sign language alone, when taught by a fluent user, can provide a Deaf or Hard of Hearing child with complete access to language at the level necessary for development, even in the absence of spoken language education. The same is not true of speech-only education.
Despite its apparent disadvantages, oralist education continues to be a popular choice due to the availability of assistive hearing devices such as hearing aids or cochlear implants. Parents and medical practitioners alike often work under the assumption that these devices invite D/HH children into the same auditory world that hearing children have from infancy. Understandably, this assumption carries with it the expectation that such auditory input should be sufficient to acquire language by hearing alone. But while the capacities of modern hearing technologies are substantial, they do not restore “natural hearing” or full speech perception [3]. The reality is that the efficacy of hearing devices varies greatly from one D/HH individual to another, often offering enhanced access to sound but incomplete access to the clear, distinguishable speech necessary for language development. This is, in part, due to the mechanisms by which these devices amplify sound.
According to Sohoglu (2019), while a brain that has access to complete auditory input from birth naturally develops to selectively distinguish between sounds, hearing devices do not offer that same ability. This sound selectivity is what allows you to focus on the voice of a friend while conversing at a loud party, separating their voice from the clinking of dishes, the music playing, and other people talking. But for the hearing aid or cochlear implant user, sounds are amplified according to frequency, without the selective processing that distinguishes speech from nondescript background sound [2]. Hearing aids, for example, make real-time adjustments across sound frequencies, dampening or amplifying sounds as they change in volume. This process closely mirrors the natural mechanisms of fully functioning inner ear structures. Unfortunately, the technology is not without its limitations. The mechanisms hearing devices use to make these real-time adjustments inadvertently alter some aspects of the incoming sound that the brain needs to perform sound selectivity. Ultimately, this significantly reduces the user’s capacity to selectively process speech [2]. Understanding this shortcoming in the available technology provides insight into how sound access may not always translate to speech understanding. These considerations call into question commonly used speech-only approaches in D/HH language learning, and further support the exploration of early sign language education.
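For readers curious about what frequency- and level-dependent amplification looks like in practice, the sketch below models a hearing aid’s processing as a crude multi-band compressor. It is a toy illustration only: the band edges, threshold, ratio, and gain values are assumptions chosen for demonstration, not parameters taken from the article or from any real device.

```python
import numpy as np

def simple_wdrc(signal, sample_rate,
                bands=((100, 1000), (1000, 4000), (4000, 8000)),
                threshold_db=-30.0, ratio=3.0, gain_db=20.0):
    """Toy multi-band wide dynamic range compression (WDRC).

    Splits the signal into frequency bands and applies level-dependent
    gain to each: quiet bands receive the full gain, louder bands
    progressively less, squeezing everything into a narrower loudness
    range. All parameter values here are illustrative assumptions;
    threshold_db is expressed relative to full scale.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    out = np.zeros_like(spectrum)

    for low, high in bands:
        mask = (freqs >= low) & (freqs < high)
        band = np.fft.irfft(np.where(mask, spectrum, 0), n=len(signal))

        # Estimate the band's loudness (RMS level in dB).
        level_db = 20 * np.log10(np.sqrt(np.mean(band ** 2)) + 1e-12)

        # Above the threshold, gain shrinks with the compression ratio;
        # below it, the full gain applies. This level-dependent gain is
        # what reshapes the loudness differences a healthy ear preserves.
        over = max(level_db - threshold_db, 0.0)
        applied_gain_db = gain_db - over * (1 - 1 / ratio)

        out += np.where(mask, spectrum, 0) * 10 ** (applied_gain_db / 20)

    return np.fft.irfft(out, n=len(signal))
```

Notice that this toy compressor applies the same gain rule to every band regardless of what the band contains; nothing in it identifies a voice or suppresses competing noise, which loosely mirrors the point above that amplified sound is not the same as selectively processed speech.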
While the complexities of signed language learning, with or without spoken language access, are under-researched, the neurological mechanisms of early spoken language acquisition are better understood [14]. The first few years of life, known as the “critical language period”, have been identified as necessary for language acquisition and for the development of the brain’s language processing centers. Some research further suggests that language perception may begin in utero during late gestational development [6, 14, 16]. While there is some variation among different aspects of language learning, the majority of first language acquisition is thought to be completed by the age of two. The remaining linguistic foundations of a first language and the maturation of related neural structures are estimated to continue developing until around the age of five [6]. Not all language learning is created equal, however: an important distinction is made between learning a first language and learning a second. The former is subject to the critical language period and plays a key part in the development of neurobiological structures related to language processing and cognition. The latter is not crucial to brain development and can occur at any point in life. Insufficient first language access during the critical language period is at the core of the development of language deprivation syndrome in D/HH adults.
A variety of circumstances common to D/HH children during infancy and early childhood present primary barriers to success during the critical language period. Firstly, there is often an initial delay in the diagnosis of hearing loss due to the unique difficulties of audiological testing in younger populations. Secondly, hearing technologies offered as an intervention can be ineffective for young children, who may find hearing devices uncomfortable and are likely to remove them often. At a young age, hearing device users also can’t effectively provide feedback about the quality of sound their devices deliver. They may not be able to communicate whether sounds are heard at comfortable volumes and provide quality speech understanding the way adult users can. As a result, a young hearing device user may not actually receive the level of sound and speech access that carers and practitioners estimate. Lastly, the overwhelming majority of D/HH children, around 90%, are born to hearing, non-signing parents. Hearing parents are more likely to expose the child only to spoken language than to take a dual-language learning approach [11]. Even when early access to sign language is provided, the learning process is not without its flaws. Most often, hearing parents are not fluent in the relevant sign language themselves. Without supplemental language instruction from fluent signers, the child may experience delayed exposure to complete linguistic structures, such as complex grammar and syntax. Even in the presence of these challenges, however, early sign language education is highly beneficial for D/HH and hearing children alike. Critical evaluation of modern oralist approaches in D/HH education is essential to providing early intervention for language deprivation.
Antiquated ideas that signing impairs spoken language learning may be at the core of modern speech-only education methods. Ongoing findings in neuroscience suggest these ideas are likely unfounded. Clinically, speech production, language production, and language comprehension are three distinct processes. Speech production is the physical production of sounds and words that may or may not carry meaning. Language production is the ability to express ideas, wants, and needs, while language comprehension is the ability to understand the expression of those things by others [1]. Indeed, these processes are separated clinically because they are performed by different areas of the brain.
Language production originates in a part of the brain called Broca’s area. This area is active both when thinking about what to say and during the physical act of speaking. In spoken language, Broca’s area signals the muscles involved in the production of speech, such as those in the lips, tongue, and throat, to verbally express desired concepts [4, 9]. In signed languages, just as in spoken languages, Broca’s area is active when thinking about what will be said. The muscles signaled during language expression, however, are different: in signed languages, the muscles involved in manual language production, such as those of the hands and arms, are recruited in place of the muscles associated with speech. Evaluation of these neural pathways shows that the use of a spoken language versus a signed language differs only in the process of speech production, not in the process of language production [4, 9]. This tells us two things: firstly, that signed language is a sufficient modality to form and express one’s thoughts; and secondly, that the comparable activation of brain pathways across language types makes it unlikely that learning one would impede learning the other, as has been previously suggested. Rather, it’s arguable that these processes complement each other. Dual activation of similar neurological pathways by using both language approaches may strengthen each pathway and enhance its efficiency.
The process of using language in daily life doesn’t stop at generating and expressing ideas, however. Language comprehension, the ability to understand ideas expressed by others, is crucial to social interaction and educational growth [16]. It involves a distinct structure of the brain called Wernicke’s area. Neural imaging has found this region to be active when receptively processing language, both spoken and signed. Importantly, Wernicke’s area is only active in response to words or signs with linguistic meaning attached to them; the same level of neural activity is not seen in response to non-specific sounds or random manual movements [16]. This area of the brain is generally regarded as essential to language processing, irrespective of whether the language is spoken or signed.
Wernicke’s area, Broca’s area, and other regions of the brain that play smaller roles in language production and comprehension are among those primarily developed during the critical language period. Understanding how their development differs for D/HH children with speech-only education versus those who also receive early sign language education is necessary to make a case for its inclusion. Functional brain imaging studies have shown notable differences between Deaf adults who learned sign language during the critical language period and those who learned it later in life. In the brains of those with early access to sign language, the regions most active when viewing signed sentences were Wernicke’s area and other nearby language processing regions [16]. In later learners, however, the regions most active while viewing the same content were those responsible for processing general visual stimuli, with no involvement in language processing. When linguistic input must first pass through the brain’s visual centers, language comprehension may be delayed for these later learners. This alternative activation of visual processing centers in later learners was found regardless of whether the Deaf individual had also received early spoken language education; the results were specific to the presence or absence of early sign language education.
Such findings raise the question of whether hearing capabilities or early sign language learning are truly responsible for how language is processed. To evaluate this, the same testing was performed on hearing individuals in similar groupings: those who learned sign language during the critical language period, and those who learned it later in life. In both groups, Wernicke’s area and similar language processing regions were the most active when viewing signed sentences. Hearing individuals who learned sign language later in life still showed activation of language processing centers when viewing someone signing, as opposed to the activation of visual processing centers seen in Deaf late learners. This strongly suggests that speech-only education for D/HH children during the critical language period may not be sufficient, leading to underdevelopment of language processing regions in the brain that cannot be resolved by later introduction to complete language. In the absence of proper development in these areas, the brain of a D/HH individual is forced to rely on less efficient mechanisms for language comprehension [16]. This lasting impact of early language deprivation is avoidable when the proper education is provided.
Deaf and Hard of Hearing children deserve equal access to the expressive world of language that most hearing children enjoy. Prioritizing early sign language education, with or without spoken language, can reliably provide that access, while speech-only education often cannot. Signed languages have a rich history and their own forms of song and poetry, and they promote creative physical expression of one’s thoughts. Adding sign language to the early education of D/HH children brings these clear benefits, while spoken-language-only learning may dampen a child’s ability to express themselves and understand others. Why risk it?