EU Investigates Musk’s X Over Grok’s Sexualized Images Amid Public Outcry

- Pro21st - January 26, 2026
A Downing Street spokesperson added: “We won’t hesitate to go further to protect children online and strengthen the law as needed.” Photo: Reuters

Major Investigation into X’s AI Chatbot: What You Need to Know

The landscape of social media is evolving rapidly, especially with the introduction of artificial intelligence (AI) tools. Recently, Elon Musk’s platform, X, has come under scrutiny from both the European Union (EU) and the UK regulator Ofcom over its AI chatbot, Grok. The probe comes at a critical moment, as the potential for misuse of AI technologies is sparking debate worldwide.

The European Commission announced that it would investigate whether X adequately assessed the risks posed by Grok’s features, particularly in light of disturbing reports of manipulated and sexualized images circulating on the platform. Just weeks before this investigation, Ofcom had launched its own inquiry, citing concerns about Grok generating inappropriate deepfake images.

To add to this, certain countries, including Indonesia, the Philippines, and Malaysia, temporarily blocked access to Grok, emphasizing the urgent need to safeguard users, especially children, from this type of harmful content. The EU has made it clear that non-consensual deepfakes are not only a violation of privacy but also a serious legal issue that needs immediate attention and correction.

Grok’s developer, xAI, has stated that it implemented additional safety measures, including restricting image-editing capabilities related to revealing clothing. However, the EU has indicated that these changes may not go far enough, noting that compliance with the Digital Services Act (DSA) is not simply a matter of making adjustments after the fact but requires comprehensive risk assessments before new features launch.

With regulators around the world echoing the call for stricter measures, this situation brings to light the critical need for robust laws governing AI technologies. As discussions continue, it’s essential to consider how these technologies affect not just users but society as a whole. A balanced approach could pave the way for innovation without compromising safety and ethics.

Stakeholders in tech, government, and society must engage in constructive dialogue to create frameworks that ensure the responsible use of AI. As we navigate these complexities, platforms like Pro21st can provide valuable insights and guidance on best practices for a safer online environment. Stay informed, stay engaged, and let’s work towards a more secure digital future together.

At Pro21st, we believe in sharing updates that matter.
Stay connected for more real conversations, fresh insights, and 21st-century perspectives.
