Monday, May 4, 2026

 

Learning in a Brave New World of AI

Opportunities and Threats

Technology will not replace great teachers, but technology in the hands of great teachers can be transformational.

— George Couros


 

Abstract

Artificial Intelligence — embodied in machines that learn from experience rather than fixed rules — has become the most consequential technology of our era, reshaping all facets of life today. With inputs from the AI assistant Claude, this article explains, in plain language, what AI is and how it works, before turning to its central concern: the teaching-learning process in schools. AI's promise for education is real and substantial — personalized tutoring that addresses Bloom's long-intractable two-sigma problem, instant adaptive feedback, relief for overburdened teachers, and the democratization of quality instruction across economic and geographical divides. But the risks are equally real: the erosion of academic integrity, the atrophy of independent thinking when cognitive effort is routinely outsourced, and the impoverishment of the human community that school uniquely provides. The article concludes that AI literacy must become a core curricular competence, and that the technology's value in education ultimately depends on preserving the teacher's irreplaceable role — as mentor, moral model, and the one intelligence that really matters.


The Thinking Machine

Something extraordinary has happened in our lifetime. Machines have begun to do things we once considered the exclusive property of the human mind — composing poetry, doing math, diagnosing cancer, holding conversations, teaching children. The technology behind this upheaval is called Artificial Intelligence. Here is an attempt to explain it honestly, in plain language, from the ground up.

What Is Artificial Intelligence?

Begin with a simple observation: when a child sees a cat for the first time, she doesn't need to be given a textbook definition. She watches. She notices the fur, the ears, the particular way it moves. The next time she sees a cat — any cat, of any color — she recognizes it immediately. She has learned from experience, without being explicitly programmed.

Artificial Intelligence, at its most essential, is the attempt to give machines this same ability — to learn from experience, to recognize patterns, to make judgements, and to act usefully in situations they have not been explicitly programmed to handle.

The term itself was coined in 1956, when the American mathematician John McCarthy gathered a group of visionaries at Dartmouth College and proposed that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." It was an audacious wager. Seven decades on, the bet is paying off — in ways that astonish even its makers.


Intelligence, Artificial and Otherwise

Human intelligence is not a single thing. It encompasses memory and reasoning, language and creativity, perception and emotion, intuition and the painstaking logic of a chess grandmaster. When we speak of AI, we are not — at least not yet — talking about a machine that possesses all of these simultaneously. What we have today is what researchers call narrow AI: systems that are extraordinarily capable within a defined domain but helpless outside it. The AI that defeats world champions at chess cannot recognize a cat. The AI that transcribes speech cannot drive a car.

What has changed dramatically in the last decade is the breadth and fluency of what narrow AI can do. The most advanced systems today — the large language models and multimodal AI that power tools like ChatGPT, Gemini, and Claude — can carry on sophisticated conversations, write code, analyze images, summarize legal documents, and explain quantum physics, all within the same conversation. This is still narrow AI, but its narrowness has become very wide indeed.

"Artificial Intelligence is not magic. It is statistics applied at breathtaking scale, trained on the accumulated text of human civilization."

The goal of general AI — a machine that reasons across all domains as flexibly as a human being — remains an open research frontier. Some researchers believe it is decades away. Others think it may never arrive in quite the form we imagine. But the debate is no longer merely philosophical: it is engineering.

How Is AI Actually Built?

To understand modern AI, you need to know three things: data, neural networks, and training. These three ideas, taken together, explain almost everything that has happened in the field since roughly 2012.

The Neural Network: A Brain Made of Numbers

The human brain contains roughly 86 billion neurons. Each neuron is connected to thousands of others. When you think — when you recognize a face, recall a name, decide to stand up — electrical signals travel through these connections in patterns. The strength of the connections changes with experience. That is, in a rough biological sense, what memory and learning are.


An artificial neural network mimics this architecture, very loosely. It consists of layers of mathematical "nodes" — the artificial analogue of neurons — connected by numerical weights. Input arrives at the first layer (an image, a sentence, a sound recording). Each node in that layer processes the input and passes a transformed signal to the next layer. This continues through many layers — sometimes hundreds — until the final layer produces an output: "this is a cat," or "the answer to your question is," or "the next word in this sentence is likely."

The "intelligence" of the network lives entirely in those numerical weights — the strengths of the connections between nodes. The key question is: how do you set them to the right values?
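For readers who like to see the idea in concrete form, here is a toy sketch in Python of how a signal flows through layers of weighted nodes. It is an illustration only, not how production systems are written, and all the numbers and names in it are invented for the example; in a real network the weights come from training, not from hand-picking.

```python
import math

def forward(inputs, layers):
    """Pass an input through successive layers of weighted nodes.

    Each layer is a list of nodes; each node is a (weights, bias) pair.
    A node's output is a squashed weighted sum of the previous layer's
    outputs -- the "signal" it passes onward to the next layer.
    """
    signal = inputs
    for layer in layers:
        signal = [
            1 / (1 + math.exp(-(sum(w * x for w, x in zip(weights, signal)) + bias)))
            for weights, bias in layer
        ]
    return signal

# A tiny illustrative network: 2 inputs -> 2 hidden nodes -> 1 output node.
hidden = [([0.5, -0.6], 0.1), ([0.8, 0.2], -0.3)]
output = [([1.2, -0.7], 0.0)]
print(forward([1.0, 0.0], [hidden, output]))  # a single number between 0 and 1
```

Everything the network "knows" is in those weight values: change them and the same input produces a different answer. Training is simply the process of finding good values.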

Training: Learning by Getting Things Wrong

You set the weights through training, and training works by making mistakes and correcting them — millions or billions of times.

Imagine you are teaching the network to identify cats. You show it a photograph of a cat and ask it to classify the image. At first, the weights are random, and the network guesses wildly — "this is a bicycle." You tell it this is wrong. A mathematical procedure called backpropagation traces the error back through all the layers and nudges every weight very slightly in a direction that would have produced a better answer. Then you show the network another image. And another. And another — until you have shown it millions of images of cats and non-cats. After enough iterations, the weights settle into values that let the network identify cats it has never seen before, with remarkable accuracy.

An analogy that helps: Think of training a neural network like sculpting with water erosion. You pour water (data) over rock (the network). Each trickle carves the rock very slightly. After an enormous number of trickles — millions, billions — the rock has taken on a shape that efficiently channels water in the right direction. Nobody sculpted it deliberately. The final form emerged from the accumulated pressure of the data itself.
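The error-correction loop described above can be shown in miniature. The sketch below, a deliberately simplified example with invented numbers, "trains" a single weight to discover the relationship y = 2x purely by making guesses and nudging the weight against each error, which is the same principle backpropagation applies across billions of weights at once.

```python
# Learning a single weight by repeated error correction.
# The hidden rule in the data is y = 2 * x; the "network" is one weight w.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0                      # start from a blank guess
learning_rate = 0.05
for step in range(200):      # many small corrective nudges
    for x, y_true in data:
        y_pred = w * x
        error = y_pred - y_true
        w -= learning_rate * error * x   # nudge w in the direction that shrinks the error

print(round(w, 3))  # converges to 2.0 -- the rule was never programmed in
```

No one told the program that the answer was 2; the value emerged from the accumulated pressure of the data, exactly as in the water-erosion analogy.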

Deep Learning: Going Deeper

The breakthrough that changed everything was the realization that networks with many layers — deep neural networks — could learn hierarchical features automatically. A deep network looking at a photograph does not need to be told what edges, textures, shapes, or faces are. It discovers these concepts itself, layer by layer, from raw pixels alone. This idea, called deep learning, was championed by researchers Geoffrey Hinton*, Yann LeCun, and Yoshua Bengio (who shared the Turing Award — computing's Nobel Prize — in 2018), and it is the engine behind virtually every AI achievement of the past decade.

[*Hinton was also the joint recipient of the 2024 Nobel Prize in Physics, signaling the acceptance of AI as a branch of Physics]





Language Models: Teaching Machines to Read and Write

The most consequential application of deep learning in recent years is the large language model (LLM). These are neural networks trained not on images but on text — vast libraries of books, websites, academic papers, and conversations. The training task seems almost too simple: predict the next word. Given "The cat sat on the," predict "mat." Do this billions of times across hundreds of billions of words, and something remarkable happens. The network, in learning to predict text, is forced to develop internal representations of grammar, facts, reasoning, analogy, and context. It absorbs a compressed model of much of what humanity has written down.
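The "predict the next word" task can be demonstrated at toy scale with nothing more than counting. The sketch below, a hedged illustration with a made-up two-sentence corpus, predicts the most frequent next word; a real LLM replaces these raw counts with a deep network that generalizes far beyond its exact training text, but the statistical spirit is the same.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ran .".split()

# Count, for every word, which words follow it in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" most often in the corpus
```

Scale the corpus from eleven words to hundreds of billions, and the counting table to a neural network with hundreds of billions of weights, and you have the recipe behind modern LLMs.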


The architecture that made this possible is called the Transformer, introduced by Google researchers in 2017. Its key innovation — the "attention mechanism" — allows the model to consider the entire context of a sentence or paragraph at once, rather than reading word by word. This is why modern LLMs can maintain coherent conversations across long exchanges and grasp subtle references and nuances.
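The attention mechanism itself reduces to a short computation. The following is a simplified sketch of scaled dot-product attention, the core operation of the Transformer, using three invented two-number "word vectors"; real models do this with thousands of dimensions and many attention heads in parallel.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention over a toy sequence.

    The query scores every position (the dot products), the scores
    become weights via softmax, and the output blends the values by
    those weights: the model "attends" most to positions whose keys
    resemble the query.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]

# Three toy word vectors; the query resembles the first key, so the
# output leans most heavily toward the first value.
keys = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
print(attention([1.0, 0.0], keys, values))
```

Because every position scores every other position at once, the model can relate a pronoun at the end of a paragraph to a name at its beginning, which is precisely the long-range coherence the paragraph above describes.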

The result — when you scale these models to hundreds of billions of parameters and train them on much of the written record of human civilization — is a system that can discuss philosophy, write poetry, debug code, explain medical symptoms, and tutor a child in mathematics. Not because it understands in the way a human does, but because it has distilled, in its enormous weight matrices, the statistical shape of human knowledge and expression.



AI and the World: A Revolution in Human Affairs

No technology since electricity has the potential to touch every domain of human activity simultaneously. AI is already doing this. Let us take a rapid survey.

Medicine and Health

AI systems can now read medical scans — X-rays, MRIs, retinal photographs — with an accuracy that matches or exceeds specialist physicians. Google's DeepMind built a system that predicted the three-dimensional structure of nearly every known protein — a problem that had stumped biochemists for fifty years — and released the results freely to science. AI is accelerating drug discovery, personalizing cancer treatment, predicting sepsis in hospital patients hours before symptoms appear, and helping radiologists catch tumors they might otherwise miss. In low-resource settings with few doctors, AI-assisted diagnostics may prove to be one of the most life-saving technologies ever deployed.

Science and Research

AI is becoming a co-author of scientific discovery. It designs experiments, sifts through petabytes of astronomical data, models climate systems, and generates hypotheses for human researchers to test. In materials science, AI has proposed thousands of new compounds with potentially useful properties. In mathematics, AI systems have collaborated with human mathematicians to find new proofs. The pace of scientific literature has outrun any human's ability to read it; AI can synthesize thousands of papers and surface connections that would otherwise remain buried.

Work and the Economy

This is where AI's arrival feels most disruptive and most personal. Automation has always replaced physical labor — the loom, the tractor, the assembly robot. AI is different because it encroaches on cognitive labor: the work of lawyers, accountants, writers, programmers, customer service agents, financial analysts. Tasks that once required years of professional training can now be done, in draft form, in seconds.

The economic consequences are real and unequal. Some jobs will disappear. Many will be transformed. New jobs — AI trainers, prompt engineers, AI auditors, human-AI collaboration specialists — will emerge. History suggests that technological revolutions ultimately create more work than they destroy, but that is cold comfort to those whose skills suddenly become redundant. The challenge of managing this transition — through retraining, social support, and new regulatory frameworks — is one of the defining policy challenges of our era.

Creativity and Culture

AI can now generate images from text descriptions, compose music in the style of any composer, write screenplays, and produce video. This raises profound questions about authorship, originality, and the nature of creativity itself. When an AI produces a painting, who owns it? When a film studio uses AI to generate backgrounds, what becomes of the artist who used to paint them? These are not hypothetical dilemmas — they are live legal battles in courts around the world. At the same time, AI is giving ordinary people creative superpowers they never had: the amateur musician who can now orchestrate her melody, the writer who can draft ten versions of a paragraph and choose the best, the designer who can iterate visually in real time.

Governance, Ethics, and Risk

AI inherits the biases of its training data. A hiring algorithm trained on historical data may learn to discriminate. A facial recognition system trained mostly on lighter-skinned faces may fail dangerously on darker-skinned ones. Deepfake technology can make a political leader appear to say anything. Autonomous weapons raise questions about accountability in warfare that existing law is not equipped to answer.

There are also longer-term risks that serious researchers take seriously: as AI systems become more capable, ensuring they remain aligned with human values — that they do what we actually want, not just what we literally instruct — becomes both more important and harder. The field of "AI safety" exists precisely to grapple with these questions before the systems become too powerful to easily correct.

 

AI in the Classroom: Teaching, Learning, and the School

Perhaps no arena is simultaneously more promising and more fraught with risk than education. The school is where society reproduces itself — where the young are inducted into knowledge, values, skills, and the habits of mind needed for a life well-lived. AI's arrival in education is not a minor administrative convenience. It is a structural challenge to what school is for.

The Traditional Model and Its Limits

For over a century, mass schooling has operated on a factory model: one teacher, thirty students, one curriculum, one pace. The teacher delivers a lesson; the students receive it. Some thrive. Many struggle. A few are bored rigid because they understood the concept five minutes in. The constraints are real — teachers are human beings with limited time and energy — but the mismatch between what the model delivers and what children need is profound.

Benjamin Bloom, the American educational psychologist, demonstrated this in a celebrated 1984 study. Students who received one-to-one tutoring performed two standard deviations better than those in conventional classrooms — the difference between an average student and one at the 98th percentile. Bloom called this the "two-sigma problem" because society could not afford one private tutor for every child. AI may be about to change that arithmetic.


The AI Tutor: Personalized Learning at Scale

The most immediately transformative application of AI in school education is the intelligent tutoring system — an AI that works with a student one-on-one, adapting to their pace, identifying their misconceptions, and explaining concepts in multiple ways until the student genuinely understands.

These systems have existed in rudimentary form since the 1970s. What is new is their fluency. Modern AI tutors can converse naturally, adjust their explanations based on a student's responses, ask Socratic questions that prompt thinking rather than simply delivering answers, detect frustration or confusion from the pattern of a student's responses, and maintain a persistent model of what the student knows and doesn't know. Khan Academy's Khanmigo and several other platforms are already deploying these capabilities with real students.

Adaptive Pacing

AI systems can identify exactly where a student's understanding breaks down and address that specific gap — not the generic "chapter 5 difficulty" but this student's particular confusion about this concept at this moment.

Infinite Patience

An AI tutor can explain the same concept forty different ways without fatigue, irritation, or judgement. Students who feel embarrassed asking a teacher to repeat themselves will ask a machine as many times as they need.

Immediate Feedback

In traditional schooling, students submit work and wait days for feedback. AI provides instant, specific, actionable feedback — the kind that cognitive science tells us is far more effective for learning.

Learning Data

AI systems generate granular data on student progress — not a single exam score but a continuous record of what is known, what is emerging, and what needs work. Teachers can use this data to intervene precisely.

The Teacher's New Role

It must be said clearly: AI does not replace teachers. It changes what teachers do. The teacher's role has always contained elements that no machine can perform — mentorship, moral guidance, the modelling of intellectual passion, the recognition of a child's fragile self-esteem, the sense of belonging that a good classroom community creates. These are irreducibly human. What AI can take off a teacher's plate is the most mechanical, least creative, and most time-consuming part of their work.

Lesson planning that once took hours can be done in minutes, with the teacher's role shifting from creation to curation and critique. Grading routine exercises — spelling, arithmetic, short-answer comprehension — can be automated, freeing teachers to spend their limited attention on the work that genuinely requires it: the essay that reflects a struggling child's tentative first encounter with abstract thought, the math proof that shows a student reaching beyond what they were taught. AI handles the routine; the teacher handles the human.

An important caveat: The quality of AI-assisted teaching depends entirely on the teacher's capacity to use these tools wisely. A bad teacher with AI is still a bad teacher. A great teacher with AI becomes something closer to a great teacher with a superb support staff. Investment in teacher development is, if anything, more important in the AI era, not less.

Democratizing Access: The Equity Argument

One of the most compelling promises of AI in education is its potential to reduce the enormous inequalities in educational opportunity that currently exist. The child in a rural village with a single overworked teacher and no library now has, in principle, access to the same intelligent tutoring system as a child at an elite private school. The student whose parents cannot afford after-school coaching can get patient, adaptive support from an AI at no cost. The child who speaks a minority language can be tutored in that language.

This remains, for now, more aspiration than reality — access to devices and reliable internet remains deeply unequal, especially in the developing world. But the direction of AI's economics is encouraging: as computing costs fall and connectivity spreads, the marginal cost of providing an excellent AI tutor approaches zero. No previous educational technology has had this property.

Dangers and Distortions: The Risks for School Education

Honesty requires that we also look at what could go wrong. And much can indeed go wrong.

The Cheating Problem

The most immediately visible challenge in schools is academic dishonesty. When a student can instruct an AI to write their essay in seconds, what is the point of assigning essays? Educators are right to be disturbed. The essay is not just a product — it is a process. The struggle to organize one's thoughts, to find the right word, to discard the weak argument — these are where learning happens. When AI performs that struggle on behalf of the student, the product exists but the learning does not.

The response cannot simply be prohibition — AI is too ubiquitous to ban, and children who learn to use it thoughtfully will be better prepared for a world saturated with it. The response must be pedagogical: design assessments that cannot be faked — in-person discussions, oral examinations, iterative projects done in class under observation, portfolios of process rather than just product. The examination system as a whole will need to evolve.

The Thinking Problem

There is a subtler and deeper risk: that students who outsource their thinking to AI will never develop the cognitive muscles that sustained, difficult intellectual work builds. Reading a hard book is frustrating. Working through a mathematical proof is exhausting. Sitting with a complex ethical question until something clarifies is uncomfortable. These discomforts are not obstacles to learning — they are the learning. If AI provides the comfortable path around every cognitive difficulty, the mind may remain permanently weak. This is the educational equivalent of never walking because cars exist.

The Surveillance Risk

AI-powered educational platforms generate torrents of data about children — their errors, their learning speeds, their emotional states, their interests. In the wrong hands or with inadequate regulation, this data can be exploited commercially, used to categorize children in ways that follow them for life, or handed to governments with authoritarian inclinations. The surveillance architecture of an AI-powered school could be, if carelessly designed, profoundly hostile to the freedom and privacy that healthy development requires.

The Human Connection Risk

School is not only a place where children learn mathematics and history. It is where they learn to be with each other — to navigate disagreement, to build friendship, to develop empathy, to exist in community. An over-reliance on AI-mediated learning, where each child sits alone with their device, could impoverish this social dimension of education in ways whose consequences we may not fully appreciate until a generation has passed.

"The goal of education is not a well-filled bucket but a well-lit mind. AI can help pour in knowledge. Only human community can light the fire."

Teaching Critical Thinking About AI Itself

Perhaps the most important thing schools can now do is teach children to think critically about AI itself. AI systems make mistakes — sometimes subtly, sometimes spectacularly. They can produce fluent, confident, and entirely false information. They reflect biases in their training data. They optimize for the appearance of helpfulness rather than truth. A child who cannot evaluate whether an AI's output is reliable is not educated; they are merely a sophisticated consumer of machine-generated text.


Media literacy in the twenty-first century must include AI literacy: understanding what these systems are, how they work (in broad outline), what they are good at, where they fail, and how to use them as tools without becoming dependent on them as authorities.

Looking Forward: The Shape of Things to Come

It is worth stepping back and asking: what do we actually want from our schools? What is education for? The answers have always included knowledge transmission, skill formation, socialization, character development, and the cultivation of the capacity to think, question, and imagine. AI is relevant to some of these goals. It is irrelevant to others. And the danger is that, in the excitement of the technology, we allow the things AI can automate to crowd out the things only humans can do.

The school of the future — the school worth building — uses AI to free teachers from drudgery and to ensure that no child is left to flounder alone with a concept they haven't grasped. It uses AI to bring the world's knowledge to a child in a village as readily as to a child in a city. It uses AI to identify the struggling students before they fall irretrievably behind. But it insists, as fiercely as ever, on the irreplaceable importance of the teacher as mentor and model; on the classroom as community; on the essay, the debate, and the experiment as disciplines that build minds, not just fill them.

What AI cannot do is want anything for a child. It cannot look at a twelve-year-old who is bright but bored and stubborn, and recognize — as a gifted teacher can — that the stubbornness is not obstruction but unfulfilled ambition waiting for the right challenge. It cannot sit with that child after class and say: I think you are capable of something no rubric has yet measured.

That recognition — that patient, particular, irreducibly human act of seeing a child whole — is what education has always been, at its best. The technology changes. That does not.



 

 

1 comment:

Anonymous said...

Anna, this is Bina. Thoroughly enjoyed this blog. The last paragraph is elegant!

Only one comment: you wrote, "No technology since electricity has the potential to touch every domain of human activity simultaneously." I wonder if computers beat electricity in this description. After all, computers arrived well after electricity and hit almost every domain of human activity.