When Machine-generated Mistranslation on Social Media Becomes Misinformation

Risks to Users, Corporate Responsibility, and Legal Implications

Authors

  • Khetam Al Sharou Dublin City University

Keywords:

Machine translation, social media, misinformation, language rights, user experience, corporate responsibility

Abstract

Machine-generated mistranslations on social media can result in misinformation, with potentially major consequences for users, especially marginalised communities. As Machine Translation (MT) is increasingly used to access online content, its errors often go unnoticed by users who lack knowledge of the source language. MT inaccuracies can distort meaning, contribute to misinformation, and reinforce digital inequality. Social media has become a main source of information, and the unchecked use of machine-generated content introduces vulnerabilities, especially in politically and culturally sensitive contexts. Through real-world case studies and empirical analysis, this work shows how mistranslations can distort meaning and cause misinformation. It highlights the ethical responsibility of tech companies and service providers to ensure accuracy and transparency while mitigating the risks that arise when MT errors lead to real-world harm. It further assesses how regulatory frameworks, including the EU's Digital Services Act and similar instruments, can help address these challenges. This work advocates for responsible MT integration, equitable information access, and stronger corporate and regulatory accountability in combating MT-driven misinformation.

Published

12.01.2026

How to Cite

Al Sharou, K. (2026). When Machine-generated Mistranslation on Social Media Becomes Misinformation: Risks to Users, Corporate Responsibility, and Legal Implications. Language and Law / Linguagem e Direito, 12(1). Retrieved from https://ojs.letras.up.pt/index.php/LLLD/article/view/14817