Language diversity and the visual communication continuum (a thread) 

I chose this topic because my system has had a knack for non-English languages since we were about 14, starting with Mandarin in high school, moving on to basic Japanese and Hawaiian in college, and learning extremely rudimentary Arabic and Korean on our own time. For some reason, I didn’t expect to learn so much about linguistic theory in a certification program...

I assumed that subjects like morphology, semantics, and phonology would be reserved for a bachelor's or graduate program. I've found that the theory not only improves my signing but also helps me identify the fundamental elements of other languages we've been learning, so I'm excited to share a bit about two language systems that are new to me.

Communication between Deaf and hearing people can be mapped along a continuum of visual communication, one that encompasses numerous language systems.

Language systems are not languages, but methods that supplement or enable the understanding of a language (or languages). In Deaf-hearing communication, these systems are variants of signing, speaking, fingerspelling, or combinations thereof. Both English and ASL have places on the visual communication continuum, but not at opposite ends. Instead, systems are organized by whether they more closely resemble English (the Rochester method, for example, is strictly fingerspelling, with no signs or spoken words) or are more visual-gestural (pantomime). For the purpose of this reflection, I'm going to speak briefly about cued speech and simultaneous communication.

I first learned about cued speech (CS) from Sarah Katz's op-ed in The New York Times (2019). Katz's parents had started her out in ASL as a child, but when she began speaking English words in ASL order, they shifted her communication from signed to cued.

This system was developed in 1966 by R. Orin Cornett, a hearing professor at Gallaudet, and is meant to enhance deaf students' lipreading and improve their English grammar. CS supplements the user's spoken English with handshapes to represent consonants and locations to represent vowels. Importantly, these handshapes and locations don't exist as lexical representations in ASL – they are exclusive to CS.

It’s difficult to convey how CS works without a visual, and Katz’s article alone hadn’t given me much insight. Fortunately, our class illustrated the subject with numerous videos. An especially striking example was that of a family in Colorado, whose son was, at the time, the only deaf CS user in his entire state (Denver7, 2014).

We learned that in the 2009–10 annual survey, only 0.4% of the deaf children included were learning to cue, compared with the 21.9% who were learning ASL (Gallaudet Research Institute, 2011). Despite the ease of both use and acquisition for hearing people – it can take at most 48 hours to learn to cue, and as little as four months to become “fluent” – the low prevalence raises questions about what justifies the use of this system when other, more common systems and languages exist.

And now, simultaneous communication 

Ever since I saw a peer using simultaneous communication (also known as simcom or SC), I’ve been fascinated by the challenge of using two languages at once. Recall that ASL is not English – its grammatical structure descends from French Sign Language – so when the two are used in tandem, a person cannot accurately produce one language without compromising the other.

Ideally, I would sign

PRO-1 MAKE TEA WILL, PRO-2 WANT?

while saying “I’m going to make tea, do you want some?”

Notice that the English statement can (and often does) contain more words than the signed statement has signs. Unfortunately, to produce this ideal simcom statement, I would have to alter the rate at which I produce my signs and words. Alternatively, I could voice the gloss, saying aloud, “I make tea will, you want?” while signing. In this case, the signed message remains intact, but the English one is unnatural.

In a review of studies on simcom and its impact on speech, Schiavetti et al. (2004) note an important distinction between intelligibility and naturalness. While English speech was found to be slowed and less natural when combined with ASL, the overall message was not necessarily compromised. How vital is it that the English statement remain fully natural (“I’m going to make tea, do you want some?” rather than “I’m going to make tea, you want?”) to a receiver who is fluent in English? How vital is it that the ASL statement remain intact to a Deaf receiver?

Learning about language diversity in Deaf-hearing communication has changed my perspective more drastically than any other topic in my course – I already had a strong social justice foundation, but my knowledge of systems was really lacking. Learning about tools like cued speech and simcom required me to think about how I treat the people who use them, the people who have chosen any of them for their child, and the people who have strong opinions about those choices.

I hope that I can maintain a non-judgmental, professional approach to users (and opponents) as I become more involved in the Deaf/HoH and DeafBlind communities.

References 

Denver7 – The Denver Channel. (2014, May 2). Mother chooses Cued Speech for deaf son [Video]. YouTube. youtube.com/watch?v=anmHWAmwlk

Gallaudet Research Institute. (2011). Regional and national summary report of data from the 2009–10 annual survey of deaf and hard of hearing children and youth. research.gallaudet.edu/Demogra

Katz, S. (2019, November 7). Is there a right way to be deaf? The New York Times. nytimes.com/2019/11/07/opinion

Schiavetti, N., Whitehead, R. L., & Metz, D. E. (2004). The effects of simultaneous communication on production and perception of speech. Journal of Deaf Studies and Deaf Education, 9(3). doi.org/10.1093/deafed/enh031

@Gemma this thread was fascinating, thank you for sharing!
