Asking Students to Use AI Responsibly When the Adults Aren't

Kids can't be expected to learn what adults haven't learned, or can't.


Teach the children how to use AI responsibly, educators are being told.

Meanwhile, this summer the Grok chatbot spewed antisemitic hatred and sexually explicit material on the X social media platform; shortly afterward, it was contracted for use by the US Department of Defense. Stanford researchers found that large language models (LLMs) can respond in harmful ways to people seeking mental health support. McDonald’s AI recruiting tool was easily hacked, exposing applicants’ personal information.

Despite these and other stories, proponents are encouraging K-12 schools to embrace AI. Boosters urge educators to use AI tools and to teach children how to use them. Much of the rationale rests on a sense of AI’s inevitability. Many teens report using tools such as ChatGPT to help with homework and to cheat; I saw it in my high school classes last year. A common refrain is that since students are already using AI, teachers must show them how to use it, but “responsibly.”

How can children be expected to use AI responsibly, though, when the adults are not? When tech giants, the federal government, and blue-chip corporations are using it in reckless ways? As Wired editor Brian Barrett said of the Trump Administration’s haphazard application of AI to eliminate programs and jobs: “AI agents are still in the early stages; they’re not nearly cut out for this. They may not ever be. It’s like asking a toddler to operate heavy machinery.”

Some adults, including those in charge of schools, appear uneducated about AI even as they push it. Linda McMahon, the US Education Secretary, was captured on video expressing her desire for kids as young as kindergartners to learn “A1,” referring to artificial intelligence by the name of a popular steak sauce. It’s like asking toddlers to operate heavy machinery without understanding the machinery, or its heaviness.

K-12 district leaders should try to stay on top of the rapidly changing technology landscape (an enormous task) and institute policies to protect children from the harms of new and untested tech. Unfortunately, sensible imperatives like these can be warped by panicky messaging about the future from political and corporate elites. Headlines blare that AI will lead to mass layoffs, a possible recession, and slim pickings in the job market for the next generation.

[Embedded post: “AI is Already Disempowering Workers Through Hype” (Workers and students are being prepped for less power in the workplace)]

Belief in AI’s inevitability has some districts plunging into a multiplicity of uses across grade levels and disciplines. It has pushed the American Federation of Teachers to accept funding from Microsoft, OpenAI, and Anthropic for a $23 million training center that encourages union members to use those firms’ products and get them “integrated into classrooms across the United States.” This follows the “googlization of education” model that has made districts across the country dependent on the platforms of Google and other companies.

The responsible question for schools to ask is, “What does the research say about the best ways to teach and learn?” AI mania has many asking instead, “How can AI help teaching and learning?”

Tough recent lessons about adopting unproven educational means and methods appear to have been lost. Districts nationwide have been reversing course after sidelining phonics in favor of a popular reading program that many parents now view as pedagogical malpractice. Schools and states are busy instituting cell phone bans after spending billions on classroom devices whose educational benefits remain unclear. Nevertheless, with AI, the shiny new thing once again mesmerizes.


The adults in Washington, D.C. have failed to regulate AI in any comprehensive way. A provision in the new tax cut and spending bill would have kept states from passing their own AI regulations for a decade; fortunately, it was struck before passage. Schools shouldn’t follow the lead of politicians willing to let citizens and consumers, including children, be guinea pigs for underregulated products. They can instead slow down and take a risk-focused approach.

Computer science classes can teach about AI’s capabilities, limits, and potential. Media literacy and social media safety programs can teach, in age-appropriate ways, how AI can enable predation, perpetuate bias, and spread disinformation. Students should understand AI’s numerous social, economic, health, and environmental costs. Paying lip service to these big ethical issues while hastily integrating AI sends kids a strong message: the adults don’t really care, and neither should you.

Schools can and should wait for more research on AI and its effects on learning. Recent studies suggest that some tools may harm key skills. In an MIT study, participants who used ChatGPT to help write an essay showed lower brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Some lost motivation. A business school study found that frequent users of AI tools had weaker critical thinking skills, with younger participants most affected. In another study, users became dependent on ChatGPT and engaged in less metacognitive thinking.

Students may be able to learn to use AI responsibly — once the adults do. If we can.