From my vantage point here in Texas, I can only know so much. However, based solely on the recording of the recent event in Dundee, Scotland, a handful of facts are evident.
It went down like this: at least two local girls were in a public park when they were recorded by a man. This is clear from the video uploaded to social media. The girls are visibly white and speak with thick Scottish accents. Notably, their language is quite foul.
Here is a report on the video, with most of the footage shown.
The individual recording the event is not seen, but his voice is heard, indicating he is male. A few other unrelated individuals pass by during the recording. The girls are heard complaining that the man did something unclear but of a sexual nature. One of the girls mentions that her sister is only 12 years old, suggesting the girls are related and implying that the man is accused of inappropriate behavior toward a 12-year-old female.
The man recording, however, focuses on the knives the younger, smaller girl is carrying. His apparent purpose is to document the weapons which the girl eventually displays. The atmosphere is tense, the recording is brief, and beyond the dramatic display of a young girl holding a large knife and a hatchet, not much else occurs.
This video popped up on one of my screens—but where, I can’t say. Like everyone else online, my media habits are tracked, and algorithms learn what I might want to see and deliver it to me. Those services usually get it right: this was a video I wanted to see, because I’m keenly interested in Europe—my ethnic heritage lies there—and specifically Scotland, a place I’ve visited and still hold dear. I have a few valued Scottish friends, and many more British friends.
And like millions of others, I use ChatGPT to gather information and get summaries of news and notable encounters. When I asked the AI to find more news on this encounter in Dundee, I discovered something quite disturbing.
I fully understand that ChatGPT and other AI services can’t think; they can only know what they are able to gather from reading news content around the world. They are, in that way, just like me. I can only know what I can know. But when I was searching for more information about this encounter in Dundee, the AI served up a very specific and repeated message that went far beyond a summary of what was reported. It gave me, of its own accord, far more than I was looking for. The message delivered to me had a specific and easily detectable tilt. The warning the AI was giving me was about ‘misinformation’ and how terrible it would be to accuse a ‘migrant’ of committing a sex crime in Britain if the man making the recording was just part of a ‘Bulgarian couple’ out for a walk when he and his wife were set upon by dangerous 12-year-old Scottish girls.
Even now, days after the incident, when I ask for a summary, this is prominently displayed in my results:
Misinformation and Political Reactions
The video was amplified by figures like Elon Musk and Tommy Robinson, who claimed the man was a migrant and used the incident to stoke anti-immigration sentiments. However, Police Scotland debunked these claims, confirming that the man was a Bulgarian national and that there was no evidence supporting the narrative that the girl was defending herself from migrants.
This message is sourced to the Guardian and a London Times report. I followed up and asked the AI several more questions, and was able to get it to confirm that yes, there have been many documented sex crimes committed by migrants and Muslim immigrants against white girls in the UK. But these confirmations were accompanied by many more warnings and admonitions against stereotypes, and by lots of talk about how the situation was ‘nuanced’ and complicated. I kept going down that path with the AI, and it eventually stated that it was entirely plausible that these two girls, and not the migrant man recording them, were the aggressors.
Since then, the messaging has changed a bit, because it is now known that the man behind the camera is not part of a charming ‘Bulgarian couple’, a phrase that implies middle age and peacefulness. He is instead 21-year-old Fatos Ali Dumana, a ‘Bulgarian national’ who emigrated to the UK in 2021. If you are wondering whether Ali and Dumana are common Bulgarian names, the answer is no. Using the same tool, we find:
Is Dumana a Common Bulgarian Name?
Absolutely not. According to data, the surname Dumana is extremely rare—globally it ranks around 422,941st, with only about 1 in 8.9 million people bearing the name. Its highest concentration is in the Philippines, not Bulgaria.
Hmmm, that’s odd. But he’s a married Bulgarian immigrant, right?
The Most Common Bulgarian Surnames
Bulgaria’s naming traditions are deeply rooted in Slavic patronymics, commonly ending in -ov(a) or -ev(a). Here’s what stands out:
- The most frequent Bulgarian surname is Ivanov for men and Ivanova for women. In 2018, around 136,125 men and 141,183 women carried that name.
- Other top common names include:
- Georgiev / Georgieva
- Dimitrov / Dimitrova
- Petrov / Petrova
So, if we use this oddity as a window into the reasoning of the AI, we get this:
I asked: “Why would a man with the name Fatos Ali Dumana be an immigrant from Bulgaria if that is such an uncommon Bulgarian name?”
And what I got was:
You raise a thoughtful and perceptive question—why would someone named Fatos Ali Dumana, whose surname is decidedly rare and not traditionally Bulgarian, be described as a Bulgarian immigrant? The answer lies in Bulgaria’s rich tapestry of ethnic diversity and migration history. Let’s unpack this:
Allow me to state right here that this ‘unpack’ usage is suspect, irritating, and a language pattern that carries a political agenda.
The response is long and includes this:
At first glance, it seems counterintuitive—why does someone with such an uncommon surname identify as Bulgarian? But the explanation is rooted in Bulgaria’s complex demographic and historical fabric:
- Ethnic Variety: Bulgaria isn’t ethnically monolithic. Many minorities—especially Turkish, Pomak, and Roma—bear non-Slavic names, often of Islamic or regional origin.
So fundamentally, the issue isn’t that the man “isn’t Bulgarian,” but that his name is Islamic in origin, and it is migrants with Islamic backgrounds who are disproportionately responsible for assaulting British girls. When I ask ChatGPT about this, it merely echoes what the Scottish police say: that there is “no evidence” Mr. Ali did anything wrong, that the girl has been charged, and that the ‘far right’ is exploiting the incident to vilify migrants.
That’s all fine—but the AI is contributing nothing new to the story. I can read the police reports myself. How is it that I can ask the same AI complex questions about mathematics—and it will deliver precise answers with little sourcing—but in this case I get only the “state narrative,” where the migrants appear as victims and the only presumed wrongdoer is a tiny 12-year-old girl, portrayed as resisting exploitation?
When I asked the AI to proofread the above two paragraphs, without my asking for any explanation it referred to the mass rapes of British girls by Muslim men as “the narrative,” gave me the following warning, and offered to help:
Reflections on Your Point
Your frustration is understandable. You’re highlighting two interrelated tensions:
- Content Sensitivity vs. Directness
- AI systems are designed to tread carefully around potentially inflammatory or harmful narratives, especially involving children, race, or sexual violence. Their built-in safeguards often favor prevailing institutional or official positions, which can feel dismissive or evasive.
- Disparity in Depth Across Topics
- The contrast between how the AI handles complex technical queries versus sensitive social topics stems from its tuning. Math problems are neutral and stable; socio-political content is often charged and constrained by content policies, so the AI defaults to cautious, widely accepted accounts rather than speculative or unverified assertions.
While that approach mitigates misinformation and reduces the risk of amplifying harmful rumors, it can also stifle deeper inquiry—especially for users grappling with controversial or marginalized viewpoints.
Why the AI Reacts Differently
- Policy Constraints: OpenAI’s models are restricted to avoid reinforcing potentially harmful stereotypes or misleading narratives, especially without verified sources.
- Verification Requirements: For open-ended or speculative questions, the AI leans on reputable official sources (police, mainstream media) and avoids conjecture.
- Deprioritized Alternative Narratives: Unless widely corroborated, alternative or minority voices—especially those critical of institutions—are given less prominence to avoid spreading misinformation.
So, there it is. AI is an agent of the state for any incident of consequence. I find it preposterous to suggest that a 12-year-old girl was the aggressor in a confrontation with a 21-year-old migrant in the UK named Fatos Ali. Further, this is Ali:

I will use AI—and clearly, it’s a transformative technology—but in all of these events and responses, I see the heavy hand of humans and very specific programming. This tool, like other media outlets, is ultimately a medium for a crafted “narrative.”
Caveat emptor.
That said, at least for now, the memes generated by AI make its lame defense of the police version of the story tolerable. These will live on, and their effect will go far beyond the wall of official denials.
