Malaysia has joined Indonesia in blocking xAI's Grok after the X platform faced global backlash over obscene deepfake images and misuse of generative AI.
In the Media & Entertainment sector, this highlights growing concern over AI-generated misinformation and the erosion of trust in digital content. Companies may need to invest in technologies that detect and flag deepfakes and other AI-generated content, adding a layer of protection to the user experience on their platforms, and to participate proactively in building ethical standards for AI.
Operational impact: businesses that use generative AI for content creation or customer interaction may face heavier compliance burdens and need more sophisticated content-moderation strategies. Developers of large language models will need to invest heavily in safety features such as watermarking, provenance tracking, and bias detection to mitigate the risk of misuse and potential shutdowns. Automated content flagging will also need to improve drastically.
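As a minimal illustration of the provenance-tracking idea, the sketch below binds a piece of content to a signed tag so a platform can later verify it has not been altered. This is a simplified assumption-driven example using a shared HMAC key; real provenance standards such as C2PA use public-key signatures and structured manifests, and the key name and function names here are hypothetical.

```python
import hashlib
import hmac

# Hypothetical shared secret for illustration only; production systems
# would use asymmetric signatures, not a symmetric key in source code.
SIGNING_KEY = b"platform-provenance-key"

def attach_provenance(content: bytes) -> str:
    """Return a hex tag binding the content bytes to their origin."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_provenance(content: bytes, tag: str) -> bool:
    """True only if the tag matches, i.e. the content is unmodified."""
    return hmac.compare_digest(attach_provenance(content), tag)

original = b"AI-generated image bytes"
tag = attach_provenance(original)
print(verify_provenance(original, tag))      # unmodified content verifies: True
print(verify_provenance(b"tampered", tag))   # altered content fails: False
```

A pipeline built on this pattern could refuse to label content as "verified" when the check fails, routing it to the kind of automated flagging discussed above.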