Investigating authoritarian nation-state censorship of generative AI products, specifically the use of large language models (LLMs) to promulgate misinformation and suppress discourse
This fellowship investigates the use of generative AI technologies in China and Russia as vehicles for information censorship. The work involves identifying ethical methods of research access to these tools, assessing the extent of censorship with regard to adherence to country-specific laws, and potentially exploring circumvention methods.
As LLMs gain market share from search engines as a source of truth, censorship on these platforms is of particular interest given the algorithmic limitations of the technology. The fellow seeks to investigate the intersection of political pressure from nation-state actors and the implementation of censorship in these generative systems.