AI Hallucinations Infect Government Policy Papers
The South African Department of Home Affairs (DHA) has suspended two senior officials after discovering that a revised white paper on citizenship, immigration, and refugee protection contained AI-generated fake references. The term "hallucinations", used to describe erroneous or fictitious outputs from large language models, appeared in the department's official statement.
What Happened
The DHA's revised white paper included a reference list whose entries were never actually cited in the body of the text. Upon review, the references were determined to be AI fabrications. The department suspended the Chief Director of the citizenship and immigration unit on Thursday, and the director involved in drafting the document is to be suspended at the start of the following week.
The Fallout
The DHA acknowledged the "embarrassment caused" and is taking several steps:
- Two independent law firms have been appointed: one to manage the disciplinary process, another to review all policy documents produced since November 30, 2022—the date ChatGPT was released to the public.
- The department will design and implement AI checks and declarations as part of its internal approval processes moving forward.
- The reference list has been withdrawn, though the department maintains the policy's content itself is accurate and reflects the government's position after cross-departmental collaboration and public consultation.
Not an Isolated Incident
Just a week earlier, the Department of Communications and Digital Technologies (DCDT) was forced to withdraw its own draft National AI Policy after fictitious sources and citations were found. Minister Solly Malatsi stated, "The most plausible explanation is that AI-generated citations were included without proper verification. This should not have happened."
What This Means for Developers
This is a textbook case of why you should never trust AI outputs without verification. The DHA incident mirrors what happens in codebases when developers blindly copy-paste AI-generated code snippets without understanding them. The consequences here are disciplinary action and public embarrassment, but the principle applies everywhere: AI is a tool, not a replacement for human judgment.
The DHA's response—implementing AI checks and declarations—is essentially adding a validation layer. In software terms, think of it as adding a linting step that flags AI-generated content for human review. It's a pragmatic approach that acknowledges AI's usefulness while enforcing accountability.
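In code, that kind of gate might look like the minimal sketch below. It assumes a hypothetical convention where documents carry a simple header block with an AI-use declaration and a reviewer sign-off; the field names ("AI-Assisted", "Reviewed-By") are illustrative, not a real standard.

```python
# Sketch of an approval-gate check: block any document that either lacks an
# AI-use declaration or declares AI assistance without a named human reviewer.
# The header format and field names here are hypothetical.

def approval_gate(document: str) -> list[str]:
    """Return a list of problems that should block approval (empty = pass)."""
    problems = []
    header = document.split("\n\n", 1)[0]  # first block is the header
    ai_assisted = "AI-Assisted: yes" in header
    if ai_assisted and "Reviewed-By:" not in header:
        problems.append("AI-assisted document has no named human reviewer")
    if "AI-Assisted:" not in header:
        problems.append("Missing AI-use declaration")
    return problems

doc = "Title: Draft policy\nAI-Assisted: yes\n\nBody text..."
print(approval_gate(doc))  # flags the missing reviewer
```

The point is not the specific fields but the pattern: the check runs automatically on every document, and a human has to clear its findings before anything ships.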
Lessons for Your Work
- Always verify AI-generated references, citations, or code. Treat them as starting points, not final answers.
- If you're using AI to help write documentation or policy, ensure you have a human-in-the-loop review process.
- Consider adding automated checks to detect AI hallucinations. Emerging tools can flag potential fabrications, and even simple scripted checks catch a lot.
- The DHA's review of documents back to ChatGPT's release date is a smart move. You might want to audit any AI-assisted work you've done since similar tools became available.
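The DHA's specific failure mode, a reference list whose entries never appear in the body, is cheap to detect mechanically. The sketch below assumes numeric citations like "[3]" and a reference list of "[3] Author, Title…" lines; a real document would need a parser matched to its actual citation style.

```python
import re

def uncited_references(body: str, references: list[str]) -> list[str]:
    """Return reference-list entries whose numbers are never cited in the body."""
    cited = set(re.findall(r"\[(\d+)\]", body))
    orphans = []
    for ref in references:
        m = re.match(r"\[(\d+)\]", ref)
        if m and m.group(1) not in cited:
            orphans.append(ref)  # listed but never cited: a red flag
    return orphans

body = "Migration policy has shifted [1]. Refugee protection is governed by treaty [2]."
refs = ["[1] Real source", "[2] Another real source", "[3] Suspicious uncited source"]
print(uncited_references(body, refs))  # → ['[3] Suspicious uncited source']
```

An orphaned entry is not proof of fabrication, but it is exactly the signal that should trigger a manual check of whether the source exists at all.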
The Bigger Picture
This isn't just about government bureaucracy. It's a real-world example of the risks of AI adoption without proper safeguards. As AI tools become more integrated into development workflows, incidents like this will become more common. The question isn't whether to use AI, but how to use it responsibly.
The DHA's statement says it best: "It is a transformative but disruptive technology that is changing how organisations operate across the private and public sectors. We must now adapt to keep up." Adapting means building verification systems, not blindly trusting outputs.
Next Steps
If you're using AI tools in your development process:
- Establish a clear policy for AI-generated content—what's acceptable, what requires review.
- Implement automated checks for common AI errors like fake references or code that doesn't compile.
- Train your team to spot hallucinations. A fake citation in a comment is bad; a fake API call in production is worse.
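One of the cheapest checks from the list above is catching AI-generated Python that does not even parse. A sketch: run every snippet through `ast.parse` before it reaches review. This catches syntax-level garbage only; it cannot catch invented APIs, which still need a human reviewer or a test run.

```python
import ast

def parses(snippet: str) -> bool:
    """True if the snippet is syntactically valid Python."""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b)\n    return a + b\n"  # deliberately missing the colon
print(parses(good), parses(bad))  # True False
```

Wiring a check like this into CI means a hallucinated snippet fails the build instead of failing in production.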
The DHA's suspensions are a warning: AI errors can have real consequences. Don't let your codebase be the next headline.