Exploring the Risks of AI in Language and Communication
In a progressive digital workplace, my team created a unique bot for the messaging platform Slack. Anyone who uses Slack is likely familiar with bots that set reminders or schedule meetings, but we aimed to build one that would promote inclusive language. Even within a diverse group, the lack of non-verbal cues in text-based chat can hinder effective communication, a fact I discovered during an experiment with our “Guys Bot.”
The concept was straightforward: if someone typed "guys" in a message, the bot would automatically reply, suggesting alternatives like "friends" or "teammates" for a more inclusive tone. The experiment sparked a lively discussion among colleagues about the best replacement for "guys" when addressing mixed-gender groups, and several options quickly gained popularity.
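For the curious, here is a minimal sketch of how a bot like ours could be wired up using Slack's Bolt framework for Python. It is an illustration under assumptions, not our original code: the regex, the list of suggestions, and the Socket Mode setup are my choices for this example, and the real Guys Bot differed in its details.

```python
# Minimal sketch of a "Guys Bot" using Slack's Bolt framework for Python.
# The pattern, suggestions, and tokens below are illustrative assumptions,
# not the implementation described in this post.
import os
import re

from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

# Match "guys" as a whole word, case-insensitively, anywhere in a message.
GUYS_PATTERN = re.compile(r"\bguys\b", re.IGNORECASE)

# "friends" and "teammates" come from the post; "everyone" is an added example.
SUGGESTIONS = ["friends", "teammates", "everyone"]


@app.message(GUYS_PATTERN)
def suggest_alternatives(message, say):
    """Reply in a thread with more inclusive alternatives."""
    say(
        text='Consider a more inclusive alternative to "guys": '
        + ", ".join(SUGGESTIONS),
        thread_ts=message["ts"],  # keep the nudge in a thread, not the channel
    )


if __name__ == "__main__":
    # Socket Mode avoids exposing a public HTTP endpoint.
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```

Replying in a thread rather than in the main channel keeps the nudge gentle, which matters when the whole point of the bot is tone.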
One colleague noted, "I prefer 'folks.' It feels modern and is frequently used in various groups," drawing a flurry of supportive emoji reactions. Others chimed in with their preferences, including one person who suggested spelling it "folx" for an edgier appeal. The conversation was lighthearted, but for me, the term "folks" carried a heavy personal history that made me uncomfortable.
Growing up near Chicago, I had a close friend whose brother was involved with the Latin Kings, a well-known gang. His involvement made him a target for rival gangs, particularly the Folks, which included various groups like the Gangster Disciples. Tragically, he was attacked by members of the Folks, leaving him with a severe injury that required extensive surgery.
The term "folks" brings back memories of my teenage years spent learning to identify gang signs and the dangers associated with them. The pitchfork sign, a symbol of the Folks, represented a serious threat, while the Latin Kings had their own distinct signs.
After reflecting on my feelings about the word "folks," I decided to share my story with the team. While I did not expect immediate changes in language usage, I felt it was important to highlight how a seemingly innocent term can evoke varied meanings.
After a couple of days, a colleague replied, expressing appreciation for my perspective. My aim was to illustrate how differently language can be interpreted, even within a group of educated and diverse individuals. However varied our backgrounds, misinterpretations can easily occur.
Throughout my life, I have engaged deeply with language. I learned basic sign language to communicate with deaf classmates, served as an Arabic linguist in the military, and now work in technology focusing on natural language processing. My experience in artificial intelligence has made me increasingly aware of how machines are reshaping communication in ways we may not fully understand.
Words are symbols meant to convey meaning, and while AI technologies are approaching near-human language capabilities, they often fail to capture the full spectrum of human communication. This challenge is magnified when we consider the need for inclusive and neutral language, which is subjective and varies based on individual experiences.
We are asking AI to enforce inclusive language on a massive scale, but this can lead to oversimplification and homogenization. As AI systems suggest language options, we may unknowingly sacrifice individuality and nuance for convenience, allowing machines to dictate acceptable speech.
The narrative surrounding Timnit Gebru, a former Google researcher, sheds light on these issues. Her co-author, Emily Bender, pointed out that AI struggles to reflect the language of underrepresented groups and evolving social norms.
MIT Technology Review noted, “An AI model trained on vast swaths of the internet won’t be attuned to the nuances of this vocabulary.” This limitation results in homogenized language that reflects the voices of privileged communities, neglecting those with less online presence.
Three critical observations arise from this discussion. First, while it is important to eliminate hateful language, what counts as acceptable language is perpetually contested. The power to decide how we speak should not rest with a select few; it should reflect diverse societal perspectives.
Second, societal language norms evolve, and AI's reliance on scale may stifle smaller movements or nuanced changes in language. Without substantial representation, emerging voices can be drowned out by dominant narratives.
Finally, as Bender pointed out, marginalized groups often remain unrepresented in AI language models, which risks excluding vital cultural perspectives.
In our daily interactions, we communicate in complex ways, employing shorthand, tone, and humor that AI cannot fully capture. This was evident in a recent conversation with Dr. Chris Tucker, who raised an intriguing question about AI's ability to understand sign language.
Sign language encompasses more than mere gestures; it is rich with cultural significance and emotional depth. Reducing it to simplistic symbols would be a disservice to its complexity.
I questioned why we allow AI to dilute our communication when we would reject similar treatment of sign language. The reality is that we should resist AI's encroachment on the uniqueness of our interactions. The goal should be to celebrate and preserve the richness of individual communication, demanding transparency and representation in AI development.
Currently, we risk allowing algorithms to erode our distinctiveness, much like Pac-Man consuming dots on a screen. It's crucial to advocate for AI systems that uphold the individuality and authenticity of human expression.