Eighty-eight percent of students now use AI tools for school, up from 53% a year ago. AI-related misconduct cases have jumped nearly 400%, from 1.6 to 7.5 per 1,000 students since 2022. New York City banned ChatGPT on school networks in January 2023, citing threats to critical thinking -- then reversed it four months later. Denver blocked it in January 2026 over content concerns. Meanwhile, 94% of AI-generated student work goes undetected, and the American Federation of Teachers just opened a $23 million National Academy for AI Instruction funded by Microsoft, OpenAI, and Anthropic.

1. It Makes Kids Dumber (NYC DOE, Critical Thinking Researchers)

AI makes students faster. It also makes them dumber at the things that matter.

New York City's education department had a clear rationale before it caved. The DOE's original January 2023 ban cited AI's "negative impacts on student learning" and warned it "does not build critical-thinking and problem-solving skills." The ban lasted four months before the city reversed course, but the concern it raised has only gotten sharper.

The cognitive trade-off research backs the DOE up. Studies show students who use AI answer 48% more problems correctly but score 17% lower on conceptual understanding tests. The gain is procedural -- students get answers faster. The loss is conceptual -- they understand less about why those answers are right. Harvard's Gazette asked in November 2025 whether AI is "dulling our minds," and Stanford's Graduate School of Education published a major piece on the erosion of critical thinking in the age of AI.

And the cheating numbers are hard to ignore. Sixty-eight percent of teachers use AI detection tools, but only 54% feel they are effective at spotting AI-generated content. Faculty rate their own AI plagiarism policies as just 28% effective. The University of Reading found that 94% of AI-generated work still slips through. When the detection infrastructure fails this badly, the argument for drawing a brighter line gets stronger.

2. Actually, This Is the Best Tutor Kids Have Ever Had (Sal Khan, AFT, Harvard Researchers)

Banning AI in schools is like banning calculators in 1975.

Sal Khan has staked his entire organization on the opposite bet. The Khan Academy founder built Khanmigo, an AI-powered tutoring system designed to provide personalized instruction at scale. His argument: the world has never had a tool that can adapt to individual student pace, provide immediate feedback, and target specific weaknesses -- and banning it from classrooms locks out the students who can't afford private tutors.

The Harvard data is the strongest evidence he has. A randomized controlled trial of 194 undergraduate physics students, published in Nature's Scientific Reports in June 2025, found that students learned significantly more in less time with AI tutors than with traditional in-class active learning led by experienced instructors. A broader systematic review found 15-35% performance gains with AI-supported systems. Sixty percent of educators have used AI in the classroom, and 55% report improved learning outcomes.

The teachers' unions are on board -- with conditions. The AFT launched its National Academy for AI Instruction with $23 million from Microsoft, OpenAI, and Anthropic, offering free training to 1.8 million members. The key: educators, not tech companies, design and lead the trainings, and unions own the intellectual property. Ethan Mollick at Wharton published a framework for seven pedagogical approaches to AI -- tutor, coach, mentor, teammate, tool, simulator, and student -- arguing the question isn't whether to use AI, but how.

3. Bans Punish Kids Who Are Already Behind (Stanford, EdTrust)

AI detection flags non-native English speakers 12 times more often than native speakers. That's not an integrity tool -- it's a discrimination machine.

Wealthy kids just use it at home. Nearly half of lower-income families lack reliable home internet. Wealthier schools have robust digital infrastructure; lower-income schools struggle with outdated hardware and no AI access at all. When schools ban AI, students with resources use it at home anyway. The U.S. Department of Education's Office of Educational Technology warned that "algorithmic bias could diminish equity at scale with unintended discrimination."

Detection systems compound the equity problem. AI plagiarism detectors flag non-native English speakers at a 61.2% false positive rate, compared to 5.1% for native speakers. That's a twelvefold disparity hitting exactly the students who already face the most barriers. Charter school AI cheating rates are measured at 24.1% versus 6.4% at private schools -- a gap that likely reflects access and detection bias more than actual dishonesty. Stanford's Center for Racial Justice and EdTrust have both published analyses warning that AI in education could widen racial disparities.

4. The Rest of the World Integrates AI (Denmark, UK, France)

Banning is an American impulse. The countries that integrated AI into schools are already seeing results.

Denmark skipped the ban entirely and went straight to integration. Five Danish high schools launched a two-year project embedding ChatGPT into the curriculum, with teachers treating it as both a problem-solving tool and a learning aid. The approach reflects a national philosophy: have open conversations about AI with students rather than pretending they won't use it.

The UK built the most comprehensive framework. The Department for Education released its first official AI guidance in June 2025, requiring every school and Multi-Academy Trust to ratify an AI policy by the end of the 2025-26 academic year. Ofsted won't judge AI use in isolation but will incorporate it fully into inspection frameworks by 2029. The mandate: "AI must augment teaching, not replace it."

France took the middle path. Sciences Po banned student AI use for assignments but carved out an exception for supervised educational use. The penalty for violations can include expulsion. At the national level, France is developing a sovereign AI tool for teachers and creating a dedicated AI course for secondary students. The idea is to support, not determine, a kid's future.

Where This Lands

The ban-or-integrate debate has a pattern now: every school district that banned AI reversed it or is ignoring its own rule. NYC lasted four months. LA evolved. Denver is the latest to try. Meanwhile, the research is splitting: AI tutoring produces measurable learning gains in controlled studies, but students who lean on it understand less about what they're learning. The equity dimension cuts both ways -- AI could be the great equalizer in under-resourced schools, or it could widen the gap if detection tools keep flagging non-native speakers at twelve times the rate of everyone else. Where this lands depends on whether schools treat AI as a technology problem to be banned or an educational reality to be designed around.

