Kurdistan’s education gamble: The hidden risks of AI

The launch of the My School (Qutabxanakam) project, featuring the AI assistant Zana and highlighted as a key initiative at the Third Kurdistan Educational Forum, marks a bold step towards the future of education in the Kurdistan Region. Its vision is to modernize classrooms and provide our students with cutting-edge tools. This ambition is commendable, and as a young Kurd who has watched our homeland strive for progress, I share in the excitement for a better future.

My own journey has taken me to the heart of this technological revolution. As a lead AI engineer with a degree in artificial intelligence from the United Kingdom, I have centered my work and research on the most critical challenges in this field: safety, reliability, and the responsible use of these powerful models. Pursuing research in collaboration with institutions such as the University of Toronto, the University of Málaga, and De Montfort University, I have seen firsthand the incredible potential of AI. But this insider’s perspective has also shown me its profound risks. It is therefore out of a deep commitment both to AI’s responsible development and to Kurdistan’s future that I feel a duty to urge caution.

In our pursuit of progress, we must not be blinded by the promise of technology. Before we commit the minds of our next generation to this path, we must critically examine the risks. To rush the adoption of artificial intelligence into our classrooms is a reckless gamble. It is a symptom of technological solutionism - the blind faith that machines can fix what human effort has not. In a land still healing, where the soil of education remains fragile, this is not progress. It would trade the hard, necessary work of building critical minds for the cheap illusion of modernization. To surrender our children’s intellect to algorithms designed elsewhere is not a leap forward. It is an abdication of our most fundamental responsibility.

The neurological imperative: Building minds, not dependencies

Childhood is a critical, non-repeatable window for the brain's development. The brain’s very structure is shaped by experience - a principle known as neuroplasticity. The prefrontal cortex, which governs critical thinking, is not fully formed in youth and requires rigorous cognitive workouts to mature. For a generation of Kurdish youth who must be equipped to build a nation, developing this mental fortitude is a national security interest.

AI, in its current form, is a direct opponent to this process. It is an engine of cognitive offloading - the act of outsourcing our thinking to technology, a phenomenon first documented by Sparrow, Liu, and Wegner in their research on the “Google Effect.” By providing instant answers, it systematically eliminates the “desirable difficulties” that educational psychologists like Bjork have shown are essential for building strong, long-term memories. We risk raising a generation adept at querying a machine but neurologically unpractised in the art of independent reasoning. For a people whose survival has depended on ingenuity, enabling such cognitive dependency is an unthinkable regression.

The foundational flaws of the tool itself

The technology itself is deeply flawed. The world's leading AI labs - OpenAI, Google, and Meta - are in a constant struggle with the failures inherent in their models. These are not minor bugs; they are fundamental problems:

Reliability and hallucinations: These models are designed to generate plausible text, not to state objective truth. They frequently hallucinate - inventing facts with complete confidence. A 2024 study from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) found that even specialized legal AI models hallucinated in more than one out of every six queries. Imagine a student asking for help with a complex physics problem. The AI might generate a proof that appears logical at every step but contains a fundamental flaw. Because the final answer is presented with confidence, the student, who lacks the expertise to spot the error, learns a completely incorrect scientific concept. The AI's confidence becomes a barrier to true learning.

Algorithmic bias: The models are trained on data scraped from the internet, complete with all of its embedded biases. This has led to real-world harm, from AI-powered hiring tools that systematically discriminate against female candidates to chatbots that have produced racist and harmful content.

To hand over the education of our children to a technology that even its own creators cannot make consistently reliable or ethical is an act of profound irresponsibility.

The homogenization of thought

A truly intelligent child understands that the number 4 is not just a fact, but a concept accessible through multiple pathways: 1+3, 5-1, 2x2. This ability to see a concept from many perspectives - cognitive flexibility - is the hallmark of creative thought. It creates innovators, not just technicians.

Current AI models, however, are built on probabilistic pattern-matching rather than genuine reasoning. They are designed to find the most common, statistically likely pattern. When asked for help, an AI will almost always provide the single most standard method. It doesn't encourage the beautiful, messy, and divergent thinking that leads to true understanding.
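For readers who want to see the mechanics, here is a deliberately simplified sketch in Python. The candidate answers, their probabilities, and the greedy_answer helper are all invented for illustration; they are not drawn from Zana or any real model. They show only how standard “greedy” decoding surfaces the single most probable response:

```python
# A toy illustration only: the probabilities and names below are invented,
# not taken from Zana or any real system.

# Hypothetical distribution a model might assign to answers for "make 4".
candidates = {
    "2 + 2": 0.55,  # the statistically dominant pattern in the training data
    "1 + 3": 0.25,
    "5 - 1": 0.15,
    "2 x 2": 0.05,
}

def greedy_answer(distribution: dict[str, float]) -> str:
    """Greedy decoding: always return the single most probable candidate."""
    return max(distribution, key=distribution.get)

# Every student who asks receives the same answer, every time.
for _ in range(3):
    print(greedy_answer(candidates))  # prints "2 + 2" three times
```

The point of the sketch is not the arithmetic but the selection rule: when the most probable answer is the only answer offered, the other pathways to 4 simply never appear.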

By consistently exposing every child to the same optimized approach, we risk creating a monoculture of thought. We would be actively training them out of the very habits of mind that allow for independent judgment. This erases the opportunity for what is known as “productive failure,” the vital process of learning through trial and error. We would raise a generation in which everyone knows the one “correct” way to get to 4, but no one is capable of discovering a new path to 5.

The infrastructure and equity mirage: A digital caste system

Proponents will point to the Kurdistan Regional Government’s digital transformation strategies as evidence of readiness. This argument dangerously overlooks a more fundamental crisis: the profound digital literacy gap. The problem is not just access to a device, but the knowledge of how to use it for learning.

According to a 2022 UNICEF report, a staggering 59.2 percent of youth in Iraq lack the digital skills needed for employment. Deploying an AI tool into this environment is like handing a complex scientific instrument to someone who has never been taught to read its measurements.

This will create a new, more insidious digital caste system based on knowledge. A privileged elite of children with digitally literate parents will be taught how to critically engage with AI - how to question its answers, check its sources, and use it as a creative partner. The vast majority, however, will be left to navigate this powerful, flawed technology alone. Without guidance, they will be passive recipients of its biases and hallucinations. AI will not be an equalizer. It will become a privilege multiplier, cementing and amplifying the very inequalities we must overcome.

The pedagogical crisis: De-skilling teachers, not empowering them

Kurdistan’s most valuable educational asset is its teachers. Yet they are already under-supported. The notion that we can simply layer a complex technology like AI onto this system and expect positive outcomes is a fantasy.

The far more likely outcome is the de-skilling of the teaching profession. As the World Economic Forum's “Shaping the Future of Learning” report warns, while AI can free educators from routine tasks, it must serve to enhance, not replace, the role of the teacher. Without a proper strategy, teachers' professional judgment will be subordinated to the opaque recommendations of a machine. The priority must be a profound investment in foundational teacher training and a robust, culturally relevant curriculum. To divert precious resources towards AI licenses before these fundamental needs are met is a catastrophic inversion of priorities.

The cultural erosion: A threat to Kurdish identity

For a people who have fought for centuries to preserve their language and history, the uncritical adoption of AI poses a new, insidious danger: digital colonialism.

While these models can and will be fine-tuned on a variety of Kurdish curricula and texts, such fine-tuning is merely a thin layer applied over a vast, pre-existing foundation. The model's digital DNA, its core logic, has been shaped by a global corpus of data that is overwhelmingly Western in its language, values, and historical perspective.

This creates a hidden vulnerability. Consider a child in a history class studying the life of a highly respected Kurdish national hero. They turn to Zana for help. The AI, lacking the deep, lived context of our struggle, might analyze the hero's actions through a detached, Western lens that defines a freedom fighter as a militant or a separatist. It could then generate a perfectly reasoned, logical-sounding argument concluding that this hero was, in fact, a negative figure. The AI is contextually blind. It cannot grasp the nuances of why a figure who fought against state oppression is a symbol of hope and resilience for our people.

While this may seem like a theoretical or extreme example, it only takes a handful of such encounters for the damage to take root. The result is not education; it is a form of cultural re-engineering. We would be providing our children with a tool that can, with algorithmic authority, dismantle their own national identity. It would sow doubt about the foundational stories of our people, not with malice, but with the cold, contextless logic of a machine that does not share our memory or our soul. This is a far more insidious threat than simple censorship. It is the erosion of identity from within, backed by the illusion of objective, intelligent reasoning.

A call for a sovereign pedagogy

To critique this specific application of AI is not to reject the technology itself. AI is a tool of immense power, but its true value is unlocked when it is wielded by an expert, not when it is used to teach a student. An expert can use AI to automate repetitive tasks and accelerate complex work because they possess the critical judgment to verify the output, correct its flaws, and guide it towards a useful outcome. A student does not have this expertise.

Therefore, our path to modernity should be strategic and sovereign. Let us use AI where it can truly serve us: behind the scenes. Let us build a strong educational foundation by using AI for data collection, logistical analysis, and making data-informed decisions about resource allocation. Let us use it to empower our administrators and support our teachers, not to replace them.

Once that strong foundation is built, and once we have a generation of students with strong foundational literacy, then we can begin to integrate AI into the classroom as a tool for experts-in-training.

Kurdistan does not need to prove its modernity by hastily adopting every new technological trend. True progress lies in the wisdom to build a system that is resilient and equitable. The path forward is not through a premature shortcut, but through the more meaningful work of building from the ground up. Let us invest in our teachers, focus on critical thinking, and use technology as a selective tool, not a wholesale replacement for human intellect. To do so is not to reject the future; it is to claim the right to build a Kurdish future on our own terms.

Nivar Hangaw is an AI expert specializing in foundational and theoretical AI and its applications in medical imaging. He holds a BSc in Artificial Intelligence and was awarded the Google Prize for Best AI Implementation at the University of Cambridge. He currently collaborates with the University of Toronto on AI safety research and, in partnership with De Montfort University and the University of Málaga, is at the forefront of developing self-aware and self-correcting neural networks.

The views expressed in this article are those of the author and do not necessarily reflect the position of Rudaw.