Is AI Stealing Your Accent? How to Protect Your Cultural Identity in the Age of LLMs

In an era where Large Language Models (LLMs) are becoming the primary editors of our thoughts, we face a silent crisis: the flattening of global culture.

In our project ‘AI as a Way to Enhance Evidence-Informed Foreign Language Learning’ (AI2improveFLL), we believe that technology should empower individuals, not erase their heritage. As AI increasingly becomes the primary bridge for global communication, we must confront a serious challenge: how to use these tools without losing the unique cultural voices that make our world diverse.

1) Preserving the Mosaic: Protecting Linguistic Diversity from AI Flattening

Recent research published in 2025 warns of a ‘shrinking landscape’ of linguistic diversity. The study found that while AI can help rewrite text, it tends to homogenize writing styles, amplifying dominant patterns while suppressing the unique markers that make our personal and cultural voices stand out [1]. When an EFL learner uses AI to “improve” an essay, the tool often views regionalisms or unique metaphors as “errors” to be corrected. By stripping away local idioms and unique sentence structures, AI produces “sanitized” English—a generic, algorithmically “safe” style that replaces the student’s vibrant cultural background with a robotic, uniform voice that lacks personality and heritage [2]. You can see a specific example of this “shrinking linguistic diversity” in the following table:

| Feature | Student’s Original Voice | AI-Sanitized Version |
| --- | --- | --- |
| The Text | “In my town, you never just walk past a shop. You pop your head in, say ‘Kolay gelsin,’ and share a tea with the owner while the world rushes by.” | “In my hometown, it is common to visit local shops. You can greet the shopkeeper politely and have a drink while observing the busy street.” |
| Language Style | Warm and Communal: Feels like a conversation with a friend. | Cold and Clinical: Feels like a report written by a stranger. |

When we allow AI to “fix” our sentences, we aren’t just correcting grammar; we are often erasing our identity. As we’ve seen, a simple phrase like “Kolay gelsin” carries centuries of Turkish hospitality and respect for labor. When an algorithm swaps it for a “polite greeting,” the soul of the story disappears. The student’s writing feels like a warm conversation with a friend, while the AI’s version feels like a cold report written by a stranger. Have you ever felt that AI changed the meaning of what you wanted to say? Tell us in the comments.

2) The Western "Default" and Algorithmic Bias

Most Large Language Models are built using data that heavily favors North American and Western European norms [3]. This creates an invisible “default” setting for what is considered professional, logical, or even polite. In our project, we recognize that this bias creates an uneven playing field, pressuring learners to adopt Western ways of thinking and linear argumentative styles just to be deemed “competent” by the software [4].

Beyond writing style, this bias results in unfair penalization of non-native speakers. Stanford researchers found that AI detectors are significantly biased against non-native English writers. While these detectors were nearly perfect at evaluating essays by U.S.-born students, they misclassified over 61% of TOEFL essays written by non-native students as AI-generated. This happens because detectors rely on “perplexity” metrics, which score how predictable a text’s word choices are: the simpler, more direct vocabulary typical of EFL learners reads as highly predictable and is therefore flagged as “robotic.” We advocate for an inclusive AI ecosystem that recognizes and respects these different rhetorical traditions as valid, rather than flagging them as fraudulent [5].
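To make the “perplexity” idea concrete, here is a minimal sketch in Python. The per-token log-probabilities are invented for illustration (a real detector would get them from a language model): predictable word choices yield high token probabilities, and therefore a low perplexity score, which detectors then misread as a machine signature.

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-probability per token.
    Low perplexity means the text is highly predictable to the model."""
    avg_neg_logprob = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_neg_logprob)

# Invented log-probabilities for illustration only:
# plainer, more direct wording -> higher token probabilities -> LOW perplexity
predictable = [-0.2, -0.3, -0.25, -0.2]   # e.g. direct EFL prose
surprising  = [-2.1, -1.8, -2.4, -1.9]    # e.g. idiomatic, varied prose

print(perplexity(predictable) < perplexity(surprising))  # True
```

The inequality is the whole problem: a detector that equates “low perplexity” with “machine-written” will systematically flag the clearer prose that learners are explicitly taught to produce.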

3) The Subtle Shift: Unconscious Americanization

EFL learners are increasingly picking up “Americanized” English without ever setting foot in the United States. This goes far beyond choosing “color” over “colour.” It involves adopting American humor, specific social cadences, and even political perspectives embedded in the AI’s responses [6]. We want students to develop critical AI literacy so they can recognize these influences. Our teacher training empowers EFL teachers to advocate for inclusive AI—teaching them how to validate World Englishes and diverse rhetorical traditions as equal and valid. The goal is to help our students use AI tools effectively without inadvertently trading their own cultural “accent”—in both thought and word—for a digital mimicry of a Silicon Valley persona.

4) Erasure of the Student’s Local Voice

When AI “fixes” a student’s work, it often deletes the very things that make the writing authentic. A story rooted in community-focused values might be rewritten to sound more individualistic, or a description of local history might be smoothed over because the AI lacks the specific context to understand its importance [7]. This erasure is a form of digital colonization. If we rely too heavily on AI for expression, the true cost is the loss of the student’s authentic self and the unique perspective they bring to the global conversation.

5) The Teacher as a Cultural Mentor

In the age of AI, the teacher’s primary value has shifted. In our project AI2improveFLL, we believe educators are not just grammar checkers; they are also cultural mentors. Their role is to act as the guardians of authenticity. Instead of simply teaching how to prompt for speed, they must help students identify when an AI is “over-editing” their identity or changing their intended meaning [8]. We aim to create bi-cultural learners who are masters of the technology but remain firmly rooted in their native perspectives. Here are some ways to do this:

Identity-Preserving Prompting: Explicitly telling the AI to “maintain a regional tone” or “respect my specific cultural values.” For example, teachers should show their students how to write a prompt such as:

I am writing a blog post about my life in Istanbul. I have included specific Turkish idioms and a conversational rhythm that reflects my culture. Please check my grammar for clarity, but do not remove my local expressions (like 'Kolay gelsin') and do not change my sentence structure to sound like a Western textbook. Maintain my warm, communal tone.

| Without Identity-Preserving Prompt | With Identity-Preserving Prompt |
| --- | --- |
| “I visited the local bakery. I greeted the baker politely and we had a conversation about the neighborhood while drinking tea.” | “I stepped into the bakery and called out a warm ‘Kolay gelsin’ to the owner. We shared a tea and caught up on the neighborhood’s news—because here, a simple loaf of bread always comes with a story.” |
| Why: The AI “sanitized” the experience into a generic report. | Why: The AI respected the specific cultural ritual and kept the heart of the story intact. |

This example shows that the user is the boss, not the AI. By adding these specific instructions, the writer prevents “Linguistic Homogenization” and ensures their voice remains in the “Outer” or “Expanding” circles of Kachru’s model of World Englishes rather than being forced into the “Inner Circle” standard. Check the end of the blog for our culture-preserving and language-sensitive prompting guide!
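The prompt above can be packaged so students reuse it with any draft. The sketch below is a hypothetical helper (the function name, parameters, and wording are our own, not part of any specific AI tool) that wraps a student’s text in identity-preserving instructions before it is sent to an assistant:

```python
def build_identity_preserving_prompt(draft, region, protected_idioms):
    """Wrap a student draft in explicit instructions asking the AI to fix
    grammar while leaving cultural markers untouched.
    Hypothetical helper; adapt the wording to your own tool."""
    idiom_list = ", ".join(f"'{i}'" for i in protected_idioms)
    return (
        f"I am writing about my life in {region}. "
        f"Please check my grammar for clarity, but do not remove my local "
        f"expressions ({idiom_list}), do not change my sentence structure "
        f"to sound like a Western textbook, and maintain my warm, "
        f"communal tone.\n\nDraft:\n{draft}"
    )

prompt = build_identity_preserving_prompt(
    "You pop your head in, say 'Kolay gelsin,' and share a tea.",
    region="Istanbul",
    protected_idioms=["Kolay gelsin"],
)
print(prompt)
```

The design choice matters: the constraints travel with every request, so a student cannot forget to assert them on a tired Friday afternoon.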

Human-in-the-Loop: Ensuring the student always has the final authority to reject AI suggestions that feel “foreign” or “wrong” to their personal identity. It’s about teaching students that the AI is merely a consultant, not an editor-in-chief.

| Stage | Version of the Text | Why it Matters |
| --- | --- | --- |
| Original Student Draft | “My mother always says the table is the heart of the home. When we eat, there is no such thing as a guest; everyone is misafir, and they are treated like kings.” | Raw Identity: Contains personal rhythm and the specific cultural concept of misafir. |
| AI Suggestion (Rejected) | “My mother believes that the dining table is central to our household. We treat all visitors with great hospitality and high regard.” | Sanitized Output: Correct but “cold.” It replaces cultural depth with clinical words (like “hospitality”). |
| Student’s Final Choice | “My mother says the table is our home’s heart. In our culture, a guest is a misafir—a gift from God—and we treat them like kings.” | Authority in Action: The student kept the AI’s flow but “vetoed” the loss of heritage, re-inserting the cultural meaning. |

The table above illustrates that the goal of using AI shouldn’t be to reach a “perfect” English standard, but to use the tool to explore options while retaining the final veto. The student’s choice to add “a gift from God” shows they are the ones in charge of the cultural “synergy,” not the algorithm.
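A classroom can even semi-automate the “veto” step. The following sketch is a hypothetical helper (our own illustration, not an existing tool) that compares a draft with an AI suggestion and lists which protected cultural terms the rewrite dropped, so the student knows exactly what to re-insert:

```python
def dropped_cultural_terms(original, ai_suggestion, protected_terms):
    """Return the protected terms present in the student's original text
    but missing from the AI's rewrite, so the student can veto the
    suggestion or re-insert them. Case-insensitive substring check."""
    return [
        term for term in protected_terms
        if term.lower() in original.lower()
        and term.lower() not in ai_suggestion.lower()
    ]

original = ("When we eat, there is no such thing as a guest; "
            "everyone is misafir, and they are treated like kings.")
suggestion = "We treat all visitors with great hospitality and high regard."

print(dropped_cultural_terms(original, suggestion, ["misafir"]))  # ['misafir']
```

A non-empty result is not an error to fix but a discussion prompt: did the AI’s version lose something the student wants to keep?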

Linguistic Pride: Encouraging the use of World Englishes as valid and powerful forms of communication in a globalized world. Building linguistic pride means shifting the narrative from what students lack to what they uniquely possess.

| Step | Action | Practical Example |
| --- | --- | --- |
| 1. Identify the “Un-translatable” | Ask the student to pick one word or phrase from their native language that has no perfect English match. | The student chooses “Eyvallah.” |
| 2. Contrast with AI | Ask AI to “translate and formalize” a sentence using that word. | AI Version: “I accept your decision and I am grateful for your help.” |
| 3. Reclaim with Pride | The student writes a version that keeps the word but gives the “global” reader enough context to understand its weight. | Student Version: “I just looked at him, placed my hand on my heart, and said, ‘Eyvallah.’ It’s more than a thanks; it’s a way of saying I accept whatever comes from a friend with grace.” |

When a student chooses to keep a term like ‘Eyvallah’ despite an AI’s suggestion to “correct” it, they move from being a passive learner to a cultural expert. This reframes linguistic difference not as a failure to meet a standard, but as a high-value contribution that enriches the English language with concepts it cannot express on its own. Ultimately, this creates a more powerful form of communicative impact; while AI produces “safe” and generic text, the student’s authentic voice builds a genuine human connection that no algorithm can replicate.

Reclaiming the Human Narrative

The risk of digital colonization is not just a theory; it is a global policy concern. UNESCO (2024) warns that the unregulated use of generative AI in education could lead to the loss of local knowledge and the erosion of cultural diversity. They emphasize that AI should never replace the human-in-the-loop, but rather serve to enhance ‘human agency’ [9].

The rise of AI does not have to result in a world where everyone speaks and thinks the same way. In our project, our mission is to transform AI from a tool of conformity into a tool of empowerment. By fostering a critical understanding of algorithmic bias and reclaiming the role of the teacher as a cultural guide, we ensure that the global English of the future is as diverse as the people who speak it.

In a world where algorithms try to make us all sound the same, standing out is an act of rebellion. In our training course we don’t just teach you how to use AI; we teach you how to master it without losing yourself in the process!

Join us at ai2improve.eu as we work to keep our digital future inclusive, diverse, and authentically human!

[Infographic: “Keep Your Voice: The Student’s Guide to Identity-Preserving AI Prompts.” Six sections in two columns. Left: “Define Your Cultural Region” (specify your city or country); “Protect Local Idioms” (keep culturally specific expressions); “Maintain Social Norms” (ask AI to reflect communal values). Right: “Prioritize Clarity Over Erasure” (improve clarity without rewriting identity); “Preserve Your Natural Rhythm” (keep original sentence structure and pacing); “Reject ‘Standardized’ English” (avoid generic textbook language).]

References:

  1. Sourati, Z., Karimi-Malekabadi, F., Ozcan, M., McDaniel, C., Ziabari, A., Trager, J., Tak, A., Chen, M., Morstatter, F., & Dehghani, M. (2025). The shrinking landscape of linguistic diversity in the age of large language models.
  2. Jesudas, A. (2025). Artificial intelligence and the standardization of global English: A sociolinguistic inquiry. ResearchGate.
  3. Peters, U., & Chin-Yee, B. (2025). Generalization bias in large language model summarization of scientific research. Royal Society Open Science, 12(4), Article 241776. https://doi.org/10.1098/rsos.241776
  4. British Council. (2025). The future of English: Global perspectives in the age of AI.
  5. Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Stanford HAI. https://hai.stanford.edu/news/ai-detectors-biased-against-non-native-english-writers
  6. Baker, W., & Baynham, M. (2023). Culture and identity in English language teaching. Cambridge University Press.
  7. Zaman, K., & Lee, S. (2024). Negotiating professional identities in the AI-mediated classroom. Journal of English for Academic Purposes.
  8. Hockly, N. (2023). Artificial intelligence in English language teaching: The good, the bad and the ugly. ELT Journal, 77(3), 350-355.
  9. UNESCO. (2024). Guidance for generative AI in education and research.
