When Machine-generated Mistranslation on Social Media Becomes Misinformation
Risks to Users, Corporate Responsibility, and Legal Implications
Keywords:
machine translation, social media, misinformation, language rights, user experience, corporate responsibility

Abstract
Machine-generated mistranslations on social media can result in misinformation, with potentially serious consequences for users, especially marginalised communities. As Machine Translation (MT) is increasingly used to access online content, its errors often go unnoticed by users who lack knowledge of the source language. MT inaccuracies can distort meaning, contribute to misinformation, and reinforce digital inequality. Social media has become a primary source of information, and the unchecked use of machine-generated content introduces vulnerabilities, especially in politically and culturally sensitive contexts. Through real-world case studies and empirical analysis, this work shows how mistranslations can distort meaning and cause misinformation. It highlights the ethical responsibility of tech companies and service providers to ensure accuracy and transparency while mitigating the risks that arise when MT errors lead to real-world harm. It further assesses how regulatory frameworks, including the EU’s Digital Services Act and similar instruments, can help address these challenges. This work advocates for responsible MT integration, equitable information access, and stronger corporate and regulatory accountability in combating MT-driven misinformation.
License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

