Executive Summary
Used well, AI can strengthen student social and emotional learning by helping schools do three things that
are already hard at scale: give students timely reflection and coaching, help educators notice patterns
earlier, and reduce the planning burden of high-quality SEL instruction. Used badly, the same technologies
can turn SEL into surveillance, over-automate sensitive judgments, or create false confidence in tools that
have not actually been shown to improve student well-being. The practical conclusion is that AI is most
defensible in K-12 when it augments adult relationships, not when it substitutes for them.
The evidence base is promising but uneven. Traditional schoolwide SEL programs have a strong research
record. Digital SEL interventions also show positive effects, especially for social-emotional skills, but the
strongest modern school-based evidence for generative AI is still thin and mixed. A 2026 meta-analysis of
digital SEL interventions found positive effects on social-emotional skills and behavior, but effects on affect
and attitudes weakened after publication-bias correction. A 2026 school-based randomized trial of
generative AI for self-regulated learning found modest motivational benefits but no clear gains in effort or
domain learning over a control condition. Meanwhile, research on affective pedagogical agents and
educational chatbots suggests small motivational benefits and improved satisfaction, but does not provide a
reliable basis for high-stakes SEL decisions.
For U.S. districts, the highest-value near-term use cases are teacher-facing curriculum design, structured
student reflection or coaching inside bounded classroom tasks, low-stakes student check-ins, and analytics
that help adults coordinate support. The weakest and riskiest use cases are automated emotion recognition
from faces or voices, always-on monitoring, opaque student “risk scores,” and any student-facing agent that
blurs the line between tutor, friend, and therapist. Those categories carry the greatest risks of bias, privacy
intrusion, cultural mismatch, discrimination, and over-reliance.
The market is ahead of the evidence. Most reviewed products are AI-enabled workflows, not rigorously
validated SEL interventions. Vendor claims often emphasize safety, efficiency, or usage metrics, while public
evidence on subgroup performance, false positives, and long-term student outcomes is limited. Districts
should therefore treat procurement as a governance decision first and a software decision second, using
U.S. Department of Education guidance, the National Institute of Standards and Technology AI risk
management framework, and clear human-in-the-loop protocols as the baseline.