Sunday, September 27, 2015

Giving Depth Behind the Phrase "Why the Face?"

Before I started taking American Sign Language (ASL) classes, I didn’t realize how integral nonverbal cues, facial expressions, and gestures are when verbal communication is absent. I use them sometimes without even realizing it. When I’m at a party with friends and one of us is engaged in a conversation we don’t want to intrude on, we try to make eye contact without alerting the others. Once eye contact is made, we throw a "thumbs up" with an eyebrow raise to ask if they're OK, or we point to the door to tell them we want to go. Sometimes at Kimball, I beg a friend to grab me a cookie because I’m too lazy to walk across the dining hall. If my friend forgets how many I asked for, they wave until they get my attention, then signal with their fingers to ask whether I wanted one, two, three, etc. If someone mentions that they love the Red Sox, I shoot them a disgusted look that gets the point across without any words. If someone mentions getting Chipotle, I smile really big and clap my hands, the universal signal for YES! LET’S GET A BURRITO BOWL. The list goes on. 

ASL interpretation of Miley Cyrus' "Party in the USA"
Now, as a potential Deaf Studies minor, I know that American Sign Language is nothing without all the facial expressions and common gestures that make up even the simplest of sentences. The difference between “Do you want to get coffee?” and “You want coffee.” lies solely in a raise of the eyebrows. The amount of time spent holding a sign can differentiate between a noun and a verb. It is so interesting to think about how these seemingly unnoticeable things become so important in everyday communication. It led me to wonder: Do Deaf people and hearing people share, or even recognize, common nonverbal cues? 

In an experimental study, Moving Faces: Categorization of Dynamic Facial Expressions in American Sign Language by Deaf and Hearing Participants, psychologists Grossman and Kegl tested Deaf signers and hearing non-signers on a series of communicative facial expressions commonly used in ASL grammar. What set this study apart from its predecessors is that instead of using static images of facial expressions, the experimenters used dynamic videos of facial expressions in motion, as if they were taken right out of a real-world conversation. The participants were divided into hearing subjects with little to no experience or knowledge of ASL and Deaf subjects who use ASL as their primary means of communication. They were asked to watch a sample of ASL sentences with the signs cut out, showing only the facial movements. They then labeled each expression with one of six categories: neutral, angry, surprise, quizzical, y/n question, and wh-question. After choosing a category, they rated their confidence in the selection on an adjacent 5-point scale on the response sheet. 

A Saturday Night Live skit satirizing NYC Mayor Bloomberg's ASL translator
The results were very surprising. Once the data were compiled and analyzed, “the most striking accuracy result is that the deaf group exhibited lower accuracy score than the hearing cohort” (Grossman & Kegl, 2007). While the hearing group accurately identified more facial dynamics, their confidence levels were significantly lower than the Deaf group’s confidence scores. One explanation for the low confidence scores is a lack of habituation to communicative facial cues in English dialogue. Put simply, we don't even realize that facial and physical behavior plays a large role in our verbal communication. 

You might be wondering, What does this have to do with what we learned in Social Psychology? This study examines a small sliver of the large and complicated “cake” that is social perception, or how we form impressions of and make inferences about other people. I hope this experiment opens the eyes of verbal communicators and highlights the importance of nonverbal communication. What we hear is just a fraction of what the other person is trying to convey. Growing up, we learn to decode others’ nonverbal behaviors, and in turn we use those experiences to begin to encode and emit our own. While we may not have a complex language of signs and gestures, hearing people use culturally relevant emblems to get our points across in the most concise ways possible. 

So next time your mom tells you to look at her when she's talking to you, do it. There is only so much we can understand from just listening. Remember, there's always more than meets the ear. 

Submitted by Bridgette Dagher


Reference
Grossman, R. B., & Kegl, J. (2007). Moving faces: Categorization of dynamic facial expressions in American Sign Language by Deaf and hearing participants. Journal of Nonverbal Behavior, 31(1), 23-38. doi:10.1007/s10919-006-0022-2

