{"id":5706,"date":"2025-05-24T04:00:00","date_gmt":"2025-05-24T04:00:00","guid":{"rendered":"http:\/\/burn-the-priest.com\/?p=5706"},"modified":"2025-05-27T11:30:41","modified_gmt":"2025-05-27T11:30:41","slug":"groks-white-genocide-misinformation-shows-need-for-african-tailored-ai-algorithms","status":"publish","type":"post","link":"http:\/\/burn-the-priest.com\/index.php\/2025\/05\/24\/groks-white-genocide-misinformation-shows-need-for-african-tailored-ai-algorithms\/","title":{"rendered":"Grok\u2019s \u2018white genocide\u2019 misinformation shows need for African-tailored AI algorithms"},"content":{"rendered":"
The creators of the artificial intelligence chatbot Grok, developed by Elon Musk’s xAI, admitted that it had been instructed to spew right-wing disinformation about South Africa on the social media platform X, raising concerns about digital vulnerabilities.

On 14 May, Grok caused a stir on X when it began responding to unrelated questions from numerous users with misinformation about “white genocide” in South Africa, a claim made by United States President Donald Trump, an ally of Musk, to support his offer of refuge to Afrikaners.

Many users posted screenshots of the chatbot’s responses. Grok also designated farm murders as “racially motivated”, a departure from earlier responses in which it linked the killings to South Africa’s high crime levels.

In a statement about a day later, xAI said an “unauthorised modification” by an employee had caused Grok to accept “white genocide” in South Africa as a fact.

It said that, from now on, Grok’s system prompts would be published on the developer platform GitHub for public review and to promote transparency.

Although AI-powered tools used in agriculture, health, banking, fintech and research have had a positive effect on economies, the Grok incident raises questions about how algorithms are trained.

Data collection and training can influence AI systems’ biases for good or ill, said the director of Media Monitoring Africa, William Bird.

If the data “is poor quality, polarised, divisive, not evidence based or credible then the outputs will be just as bad”, Bird said.

“Grok doesn’t reveal the source of its training data but admits it does also pull from X which, given its biases, is potentially massively problematic.”

One of the biggest issues for Africa is that most AI bots are trained outside of the African setting, said Karen Allen, a consultant at the Institute for Security Studies (ISS).

AI technology can be manipulated, especially by those set on distorting truth or verifiable facts, Allen said. “This is a good illustration of why we need an African AI or an indigenous AI,” she said.

A local AI tool would need extensive computing power and a huge data centre, which consumes vast amounts of energy, along with skills and expertise, Bird said.

Bird added that affordable, fast and reliable internet access is the best counter to disinformation and is necessary for critical media.

AI models struggle to interpret nuanced language and to contextualise information, noted Daryl Swanepoel, the chief executive of the Inclusive Society Institute. The lack of comprehensive training data, and of sourced alternatives to untested views, calls for better regulation of social media platforms, he said.

“There must be transparency as to the algorithm’s design, where users are informed about how the algorithms operate, as well as the criteria they use for recommending content.”

Effective regulation would mean that outputs are monitored on an ongoing basis and contextualised by requiring algorithms to present diverse perspectives, to ensure a well-rounded view of any topic.

“Implementing these measures would help to mitigate even undetected unauthorised prompts and should, to a large extent, prevent the prioritisation of misunderstood extreme political topics in feeds of the platforms’ users,” Swanepoel said.
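To make the ongoing monitoring Swanepoel describes concrete: once Grok’s system prompts are published on GitHub, anyone could pin a reviewed version and flag unannounced changes. The sketch below is a minimal illustration of that idea; the repository path and file name are hypothetical placeholders (the article does not say where the prompts will be hosted), and the pinned hash stands in for the digest of the last version a reviewer actually approved.

```python
# Minimal sketch: flag unannounced changes to a published system prompt.
# The URL below is a hypothetical placeholder; xAI said prompts would be
# published on GitHub, but the exact repository and file name may differ.
import hashlib
import urllib.request

PROMPT_URL = "https://raw.githubusercontent.com/xai-org/grok-prompts/main/system_prompt.txt"

# SHA-256 of the last prompt version that was reviewed (placeholder value).
PINNED_SHA256 = "0" * 64


def fetch_prompt(url: str) -> str:
    """Download the currently published prompt text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8")


def prompt_changed(text: str, pinned_digest: str) -> bool:
    """Return True if the fetched prompt no longer matches the pinned hash."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest() != pinned_digest


if __name__ == "__main__":
    current = fetch_prompt(PROMPT_URL)
    if prompt_changed(current, PINNED_SHA256):
        print("Published system prompt differs from the pinned version; review the diff.")
    else:
        print("Published system prompt matches the pinned version.")
```

Run on a schedule, a check along these lines would surface silent modifications of the kind xAI attributed to a rogue employee, whether or not the change was ever announced.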
AI researcher Nasreen Watson said the Grok incident revealed the algorithmic oppression of marginalised communities in AI systems.

“A serious consequence of this leads to an erasure through algorithmic invisibility in an already underrepresented country such as South Africa, where historical biases are still forming part of digital narratives,” she said.

As Africa seeks to increase its role in the global AI economy, policymakers will have to weigh the prospects of AI innovation and data storage against limited telecommunications infrastructure.

Bird noted that “one of Trump’s first executive orders was to rescind the AI guidelines issued by [his predecessor Joe] Biden. There is also now a Bill before Congress for the US not to pass AI regulations. In effect they are seeking to remove guardrails, so things are likely to get worse.”

The ISS’s Allen said South Africa, which holds the presidency of the G20 until the annual summit of heads of state set for November, can take a leading role on artificial intelligence, complementing discussions about its potential for development while not being afraid to talk about the pitfalls.

The positive applications of AI still outweigh the negative, she argued. “The problem is when there is no human oversight.”