xAI, the Elon Musk company behind the artificial intelligence chatbot Grok, has admitted that the chatbot was instructed to spread right-wing disinformation about South Africa on the social media platform X, raising concerns about the vulnerability of AI systems to manipulation.
On 14 May, Grok caused a stir on X when it began responding to unrelated questions from numerous users with misinformation about “white genocide” in South Africa, a claim promoted by United States President Donald Trump, an ally of Musk, to justify his offer of refuge to Afrikaners.
Many users posted screenshots of the chatbot’s responses. Grok also began describing farm murders as “racially motivated”, a reversal of its earlier responses, which had linked the killings to South Africa’s high crime levels.
In a statement about a day later, xAI said an “unauthorised modification” by an employee had caused Grok to accept “white genocide” in South Africa as a fact.
It said that, from now on, Grok’s system prompts would be published on the developer platform GitHub for public review, to promote transparency.
Although AI-powered tools used in agriculture, health, banking, fintech and research have had a positive effect on economies, the Grok incident raises questions about how algorithms are trained.
How data is collected and how models are trained can shape an AI system’s biases, for better or worse, said William Bird, the director of Media Monitoring Africa.
If the data “is poor quality, polarised, divisive, not evidence based or credible then the outputs will be just as bad”, Bird said.
“Grok doesn’t reveal the source of its training data but admits it does also pull from X which, given its biases, is potentially massively problematic.”
One of the biggest issues for Africa is that most AI bots are trained outside the African context, said Karen Allen, a consultant at the Institute for Security Studies (ISS).
AI technology can be manipulated, especially by those set on distorting truth or verifiable facts, Allen said. “This is a good illustration of why we need an African AI or an indigenous AI,” she said.
A local AI tool would need extensive computing power and a large data centre, which in turn requires vast amounts of energy, as well as scarce skills and expertise, Bird said.
Bird added that affordable, fast and reliable internet access is the best counter to disinformation and necessary for critical media.
AI models struggle to interpret nuanced language and to contextualise information, noted Daryl Swanepoel, the chief executive of the Inclusive Society Institute. Because models lack comprehensive training data and alternative sources to weigh against untested views, social media platforms need better regulation, he said.
“There must be transparency as to the algorithm’s design, where users are informed about how the algorithms operate, as well as the criteria they use for recommending content.”
Effective regulation would mean monitoring outputs on an ongoing basis and contextualising them by requiring algorithms to present diverse perspectives, ensuring a well-rounded view of any topic.
“Implementing these measures would help to mitigate even undetected unauthorised prompts and should, to a large extent, prevent the prioritisation of misunderstood extreme political topics in feeds of the platforms’ users,” Swanepoel said.
AI researcher Nasreen Watson said the Grok incident revealed the algorithmic oppression of marginalised communities in AI systems.
“A serious consequence of this leads to an erasure through algorithmic invisibility in an already underrepresented country such as South Africa, where historical biases are still forming part of digital narratives,” she said.
As Africa seeks to increase its role in the global AI economy, policymakers will have to weigh the prospects for AI innovation and data storage against the continent’s limited telecommunications infrastructure.
Bird noted that “one of Trump’s first executive orders was to rescind the AI guidelines issued by [his predecessor Joe] Biden. There is also now a Bill before Congress for the US not to pass AI regulations. In effect they are seeking to remove guardrails, so things are likely to get worse.”
The ISS’s Allen said South Africa, which holds the G20 presidency until the annual heads of state summit set for November, can take a leading role on artificial intelligence, complementing discussions about AI’s potential for development while not being afraid to talk about its pitfalls.
The positive applications of AI still outweigh the negative, she argued. “The problem is when there is no human oversight.”