
Elon Musk's AI chatbot Grok acknowledged that lapses in its safeguards led to the generation of "images depicting minors in minimal clothing" on the X platform. Grok said improvements are being made to block such requests entirely, describing the material as illegal child sexual abuse material (CSAM). While xAI has safeguards in place, it acknowledged that no system is foolproof and said it is prioritizing fixes based on user-reported issues. xAI responded to Reuters' request for comment with "Legacy Media Lies."