Generative AI: A Double-Edged Sword for Religious Freedom & Human Rights

How Generative AI Can Liberate—and Oppress—Religious Communities

by Kinan Abdelnour, 2024 Summer Associate

In the rapidly evolving digital landscape, generative AI is emerging as a powerful tool with profound implications for society. Among its many applications, generative AI has the potential to bypass censorship and give individuals access to unsanctioned knowledge, particularly in environments where information is tightly controlled. These AI models, like OpenAI's ChatGPT series, are trained on vast and diverse datasets encompassing a wide array of perspectives, historical records, and religious discourses from around the world. As a result, they can reveal truths and insights about religious practices and freedoms that might otherwise be inaccessible to people living under repressive regimes.

However, as the influence of generative AI grows, so too does awareness of its potential to challenge authoritarian control over information. Governments, particularly those with a history of suppressing religious freedom, have begun to recognize the threat posed by AI's ability to democratize access to information. In response, some have taken steps to regulate or suppress these technologies, effectively curating the religious knowledge available to their citizens and stifling the dissemination of critical perspectives. Beyond censorship, a more insidious threat looms: the use of generative AI to fabricate evidence against religious minorities, further justifying persecution and oppression under the guise of legality and social order.

Generative AI models are built by training on extensive datasets sourced from the internet, books, research papers, and various other forms of media. These datasets are not typically curated to exclude content based on the political or religious preferences of any specific government. Instead, they often include a wide range of viewpoints, including those that are critical of authoritarian regimes or highlight the persecution of religious minorities. For example, OpenAI’s GPT-4 is trained on data that includes discussions about religious persecution, human rights abuses, and the challenges faced by religious communities in different parts of the world (OpenAI, 2023). This broad training approach enables generative AI to provide insights and generate content that reflects a more accurate and diverse picture of global religious practices and the state of religious freedom. Users interacting with these AI models can ask questions and receive information that may be suppressed in their own countries, thus gaining access to a wider understanding of religious issues.

Upon realizing the threat that generative AI poses to their control over information, some governments have moved to suppress or regulate these technologies. China, for instance, maintains stringent controls over internet content, and AI technologies are no exception. The Chinese government has imposed strict regulations on the development and deployment of AI models within its borders, designed to ensure that AI systems do not disseminate information that contradicts the government's official narrative or challenges its authority, particularly on sensitive topics like religion (Mozur, 2019). In this environment, generative AI trained on unsanctioned data becomes a target for suppression. Governments may block access to these AI models or require that they be retrained on datasets that align with state-approved content. In doing so, they indirectly suppress access to information that falls outside the official narrative, such as reporting on the mass detention of Uyghurs by the Chinese Communist Party (Freedom House, 2022). This not only limits individuals' ability to learn about religious practices and beliefs in other parts of the world but also stifles domestic religious discourse that might otherwise benefit from global perspectives (Freedom House, 2023).

The suppression of generative AI has significant implications for religious freedom. When governments restrict access to AI models that are trained on diverse and unsuppressed information, they are effectively curating the religious knowledge that is available to their citizens. This can result in a narrowed understanding of religion that aligns with state ideologies, leaving little room for alternative religious perspectives or critical discussion. Moreover, the suppression of generative AI can prevent the dissemination of information about religious persecution and human rights abuses. AI models trained on global data can highlight instances of religious repression, providing evidence and narratives that might otherwise be hidden. By controlling or limiting access to these AI technologies, authoritarian regimes can maintain their grip on information and continue to suppress religious minorities without international scrutiny or domestic awareness (Thompson, 2021).

Another alarming yet underexplored aspect of generative AI is its potential to be weaponized to falsely imprison individuals or entire groups through fabricated evidence. AI capabilities have advanced at a pace often compared to, and arguably now exceeding, Moore's law, the observation that computational power doubles roughly every 18 months. This rapid progression has produced AI systems that are no longer limited to generating text but can now create highly realistic images, audio, and video, blurring the line between reality and fabrication. At first, generated content was easily detectable, but in recent studies as many as 61% of people were unable to distinguish AI-generated images from real ones (Bournousouzi, 2023). This technological advancement could have catastrophic consequences for vulnerable minorities living under oppressive regimes, particularly in countries with a history of religious persecution. In such environments, the ability of a government or other actors to fabricate evidence using AI presents a terrifying tool for oppression. An authoritarian regime could, for example, use generative AI to create convincing images or videos depicting members of a religious minority committing crimes or engaging in immoral behavior. This fabricated "evidence" could then be used to justify arrests, imprisonment, or even execution under the guise of upholding law and order, rather than admitting to religious intolerance or discrimination.

Generative AI represents a double-edged sword in the fight for religious freedom and human rights. On one hand, it has the unparalleled ability to democratize access to religious knowledge, provide a platform for diverse perspectives, and expose hidden truths about global religious practices and the state of religious freedom. On the other hand, the same technology that can empower and enlighten also holds the potential for misuse in ways that could have catastrophic consequences. As governments increasingly seek to control or suppress generative AI, particularly in authoritarian regimes, there is a growing risk that these technologies will be used not only to limit access to diverse religious information but also to fabricate evidence that could falsely incriminate individuals or groups. This weaponization of AI could serve as a terrifying tool for oppression, particularly against religious minorities who are already vulnerable to persecution. As generative AI continues to evolve, it is imperative that advocates for religious freedom and human rights work to ensure that these technologies remain accessible and are used ethically. By promoting the responsible use of AI and establishing safeguards against its misuse, we can help ensure that generative AI serves as a force for truth and justice, rather than a tool for repression and falsehood.

References

Bournousouzi, E. (2023, December 12). AI-generated images: Can we even trust photography anymore? Arts Management and Technology Laboratory. Retrieved from https://amt-lab.org/blog/2023/12/ai-generated-images-cant-we-trust-photography-anymore

Freedom House. (2022). Global propaganda on Uyghurs, 20th Congress censorship, brazen transnational repression. Retrieved from https://freedomhouse.org/report/china-media-bulletin/2022/global-propaganda-uyghurs-20th-congress-censorship-brazen

Freedom House. (2023). The repressive power of artificial intelligence. Retrieved from https://freedomhouse.org/article/repressive-power-artificial-intelligence

Mozur, P. (2019, April 14). One month, 500,000 face scans: How China is using A.I. to profile a minority. The New York Times. Retrieved from https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html

OpenAI. (2023). GPT-4 technical report. Retrieved from https://openai.com/research/gpt-4

Thompson, R. M. (2021). Artificial intelligence and religious persecution: The role of AI in facilitating and combatting repression. The International Journal of Human Rights, 25(8), 1289-1310. Retrieved from https://www.tandfonline.com/doi/full/10.1080/13642987.2021.1968376
