There are just so many, many bad things being written about AI and the education world. So many unserious bits of advice being taken seriously. So many people who appear to be intelligent and well-educated who are harboring fantasy-based ideas about what AI is and what it does.
We need to keep talking about them, because right now we are living through a moment in which the emperor has not just new clothes, but a new horse, a new castle, and a new inclination to make everyone share his sartorial choices (and winter is coming). But the fantasy is so huge, the invisible baloney stacked so high, that people are concluding that they must just be losing their minds.

So let's look at this one, with the inspiring title "The AI Tsunami Is Here: Reinventing Education for the Age of AI." Published at Educause ("the Voice of the Higher Education Technology Community"), this monstrosity lists six authors even though it's a seven-minute read. Two authors--Tanya Gamby and Rachel Koblic--are mucky mucks at Matter and Space, a Manchester, NH company that promises "human-centered learning for the age of AI." Furthermore, "By combining cutting-edge AI with a holistic focus on personal growth, we’re creating an entirely new way for your people to learn, evolve, and thrive." The emperor may have a new thesaurus, too. Matter and Space is central to this article.
Also authoring this article we get David Kil, entrepreneur and data scientist; Paul LeBlanc, president of Southern New Hampshire University, a university big in the online learning biz, also based in Manchester, and the board chair at Matter and Space; and George Siemens, a figure in the Massive Open Online Course (MOOC) world (the one that was going to replace regular universities but then, you know, didn't) and a co-founder and Chief Science Officer of Matter and Space.
So what did this sextet of luminaries come up with? Well, the pitch is for "interactionalism—a human-centered approach to learning that fosters collaboration, creativity, adaptability, feedback, and well-being."
The authors yadda yadda their way through "AI will do big things and we are on the cusp of huge changes, etc." before launching into their actual pitch for changing the model. Higher education, they argue, still uses a "broadcast-era model." Instructor delivers, students receive, exams assess. Feedback is sparse. They arguably have a point, but we are in familiar ed reform territory here-- present a problem, and then, rather than searching for the best solution, start insisting that whatever you're promoting is a solution.
They want to beat up the old model a bunch. The downlink aka delivery of stuff is one size fits all and broad. The uplink aka assessment is narrow. The feedback loop is narrower still. This was designed "for an industrial economy that prized efficiency and standardization over curiosity, adaptability, and genuine thinking." What about the "vision of truly personalized learning"? Welll.... You can't realistically talk about personalized learning if you aren't going to balance it with recognition that learning, particularly for younger humans, is a social activity. And they're going to head further into the weeds:
The world in which this system was built no longer exists. Knowledge is everywhere, and it's instantly accessible. Memorization as a primary skill makes little sense when any fact is a click away. Modern work demands collaboration, adaptability, and the ability to navigate uncertainty—skills developed in interaction, not isolation. And now AI has entered the room—not simply as a tool for automating tasks, but as a co-creator: asking questions, raising objections, and refining ideas. It is already better than most of us at delivering content. Which forces us to ask: If AI can do that part, what should we be doing?
That is a lot of stuff to get wrong in just one paragraph.
"We don't need to know stuff because we can look it up on the internet" is one of the dumbest ideas to come out of the internet era. You cannot have thoughts regarding things you know nothing about. The notion here is that somehow some historically illiterate shmoe with an internet connection could be as functionally great a historian as David McCullough.
"Modern work demands..." a bunch of social skills which will be hard to develop sitting in front of a computer screen--but I have a sick feeling they have a "solution" for that. And sure enough--instead of a social process involving other humans, you can get the "social" element from AI as a "co-creator." AI creates nothing. It can ask questions, but it can't raise meaningful objections and it can't refine ideas because it does not think. And this next line--
"It is already better than most of us at delivering content." How? First of all, it's not a great sign that these folks are using "content" instead of "information" or "learning." Content is the mulch of the internet, the fodder used to fill click-hungry eyeball-collecting ad-clogged websites. Content is not meant to engage or inform or launch an inquiry for greater understanding; it's just bulk meant to take up space and keep things moving, roughage for the internet's bowels. Second, AI delivers content along the same "broadcast-era model" the authors were disparaging mere paragraphs ago. And finally, AI can't even deliver "content" that is reliably accurate. AI's closest human analog is not a scholar, but a bullshit artist--and one that doesn't know anything about the topic at hand.
So we are not off to a great start here. And we have yet to define "interactionalism," which we are assured is "more than a teaching method" but "a set of principles for designing the skills and knowledge learners need—and the mechanisms by which they acquire them—in a world where human and machine intelligence work together." What does "designing knowledge" even mean?
Well, here come the three pillars of interactionalism. Buckle up:
Dialogical learning. Learners and AI agents engage in two-way conversational exchanges. There are no one-way lectures. Every presentation invites questions; every explanation invites challenges. Learners' questions inform the assessment of competence just as much as their answers. Feedback is continuous, as it is in the workplace.
I'm going to skip over the glib assumption that feedback is continuous in the workplace. Instead, I want to know why a computer makes a better partner for the Socratic method than an actual human, whose knowledge of the topic being dialogically learninated might produce some more useful and pointed questions than one can expect from a chatbot.
Interactive skill building. As AI takes over more routine tasks, uniquely human skills—such as questioning, adapting models to context, and exercising judgment—become central. These are practiced continuously and in conversation with AI tools long before students face similar exercises in the real world.
What do you mean "long before students face similar exercises in the real world"? Are you seriously suggesting we bubble up some young humans and have them practice humaning with an empty stochastic parrot rather than with other humans? Do you imagine that young humans--including very young humans-- do not practice "questioning, adapting models to context, and exercising judgment" on a regular daily basis? Have you met some young humans?
Meta-human skills. Beyond subject mastery, students develop metacognition (thinking about their thinking) and meta-emotional skills (managing their emotions), as well as the ability to design and refine AI agents. Proficiency in these skills enables learners to shift from being passive users to active shapers of their digital collaborators.
A bicycle, because a vest has no sleeves. Meta-cognition, emotional management, and refining AI "agents" do not become related skills just because you put those words in the same sentence. Is the suggestion here that learning to be human has utility because it helps you help the AI better fake being a human? Because that would be some seriously backward, twisted shit that confuses who is supposed to be serving whom.
So those are the pillars. But this new approach requires a new kind of curriculum that is "dynamic, learner-adaptive, and co-created." It's going to have the following features:
Dynamic, adaptive content. The curriculum is a living entity, updated in response to new discoveries, industry changes, and students' needs. It is modular in design and can be easily revised.
Yes, this again. Fully adaptive course content has always been out of reach because it costs money, but if AI ever becomes anything less than grossly expensive, maybe the chatbot will do it instead. Of course, someone will have to check every last bit for accuracy so that students aren't learning, say, an entirely made-up bibliography.
Co-creation of learning pathways. Students collaborate with instructors to set goals and choose content. Peer-to-peer design, shared decision-making, and ongoing negotiation over scope and depth are the norm.
We see a pattern developing here. Not the worst idea in the world (at least on the college level, where students know enough to reasonably "choose content"), but what reason is there to believe that involving AI would make this work any better than just using human beings?
Multiple perspectives and sources. Moving beyond single textbooks or single voices, learners explore diverse viewpoints, open resources, real-world data, and contributions from experts across fields.
Again, why is AI needed to pursue these goals that have been commonplace for the last sixty years?
Formative, responsive assessments. Evaluation is integrated into the learning process through self-assessment, peer review, and authentic tasks that reflect real-world applications.
In the K-12 world, we were all being trained to do this stuff in the 90s. Without AI.
Cultivation of self-directed learning. Students learn to chart their own learning journeys, gradually assuming more responsibility for outcomes while building skills for lifelong learning.
See also: open schools of the 1960s.
For instructors, this shift is profound. They move from being content deliverers to facilitators, mentors, and curators of learning communities.
Good lord in heaven. Is there anyone in education who has not heard a discussion of the relative merits of "the sage on the stage" versus "the guide on the side"? But the authors promise that classrooms will focus on "what humans do best: discussion, debate, simulation and collaboration." Students will shut their laptops and work together "on challenging applications of their learning, supported by peers and guided by faculty who know them not just as learners, but as people."
These promises have been made and remade, debated and implemented for decades. What do these folks think they have that somehow makes this "profound" shift possible?
AI-- an enabler of scale!
"Intelligent agents" will provide personalized, support, feedback and intervention at scale.
The most revealing form of assessment—a probing, ten-minute conversation—can now be conducted by dialogic agents for hundreds of students, surfacing the depth (or shallowness) of understanding in ways multiple-choice tests never could.
No. I mean, wise choice, comparing chatbots to the worst form of assessment known to humans, but still-- no. The dialogic agent can assess whether the student has strung together a highly probable string of words that falls within the parameters of the strings of words in its training bank (including whatever biases are included in its "training"). It certainly can't probe.
And even if it could, how would this help the human instructor better know the students as learners or people? What is lost when the AI reduces a ten-minute "conversation" to a 30-second summary?
And how the hell are students supposed to feel about being required to get their grade by chatting with a bot? What would they learn beyond how to talk to the bots to get the best assessment? Why should any student make a good faith attempt to speak about their learning when no responsible human is making a good faith attempt to listen to them?
The goal, they declare, is to move education from content acquisition to the "cultivation of thinking, problem-solving, self-reflection and human traits that cannot be automated," capabilities that enhance not just employability but well-being. Like these are bold new goals for education, and not ones people have been chasing repeatedly for more than half a century. And then one last declaration:
AI doesn't diminish this mission—it sharpens it. The future of teaching and learning is not about keeping up with machines, but about using them to become more deeply and distinctively human.
How does AI sharpen the mission? Seven minutes later we still don't have an answer, because there isn't one. The secret of better, deeper humaning is not getting young humans to spend more time with simulated imitation humans.
It's fitting that a co-founder of Matter and Space is a veteran of the MOOC bubble, a "brilliant" idea that was going to get education to everyone with relatively low overhead costs. MOOCs failed hard, quickly. They turned out to be, as Derek Newton wrote at Forbes, mainly "marketing tools and revenue sources for 'certificate' sellers." Post-mortems of MOOCs focused on the stunningly low completion and retention rates, and many analysts blamed that on the fact that MOOCs were free. I think it's just as likely that the problem was that MOOC students were isolated, sitting and watching videos on a screen and completing work on their own. Education is a social process. If nobody cares if you show up or try, why should you show up or try?
An AI study buddy does not solve that problem. In education, AI still only solves one problem--"How can I increase revenue by simultaneously lowering personnel costs and increasing the number of customers served?"
The authors of this piece have, on one level, described an educational approach that is sound (and popular for decades). What they have not done is to make a compelling case for why automated edu-bots are the best way to pursue their educational vision-- they haven't even made a case for why edu-bots would be an okay way to pursue it. Wrapping a whole lot of argle bargle and edu-fluff language around the same old idea-- we'll put your kid on a computer with a bot-- does not make it a good idea, and you are not crazy for thinking it isn't.