Artificial Intelligence–Based Student Activity Monitoring for Suicide Risk

Considerations for K–12 Schools, Caregivers, Government, and Technology Developers

Published Dec 5, 2023

by Lynsay Ayer, Benjamin Boudreaux, Jessica Welburn Paige, Pierrce Holmes, Tara Laila Blagg, Sapna J. Mendon-Plasek


Download eBook for Free

Format: PDF file, 0.5 MB


Purchase Print Copy

Format: Paperback, 86 pages; Price: $19.00

Research Questions

  1. What has existing research found about how accurately AI-based suicide risk monitoring identifies youth who are at risk for suicide?
  2. How is AI-based suicide risk monitoring being used in K–12 schools to detect and prevent youth suicide risk and self-harm?
  3. What is the perceived impact of these programs on students, what are their potential risks, and how can benefits be realized while mitigating risks?
  4. What are the best practices and recommendations for schools, caregivers, technology developers, and government seeking to use these technologies in K–12 schools while preventing potential harms?

In response to the widespread youth mental health crisis, some kindergarten-through-12th-grade (K–12) schools have begun employing artificial intelligence (AI)–based tools to help identify students at risk for suicide and self-harm. For many schools, adopting AI and other educational technology to help address student mental health needs was a natural next step during the transition to remote education. However, there is limited understanding of how such programs work, how schools implement them, and how they may benefit or harm students and their families.

To assist policymakers, school districts, school leaders, and others in making decisions regarding the use of these tools, the authors address these knowledge gaps by providing a preliminary examination of how AI-based suicide risk monitoring programs are implemented in K–12 schools, how stakeholders perceive the effects that the programs are having on students, and the potential benefits and risks of such tools. Using this analysis, the authors also offer recommendations for school and district leaders; state, federal, and local policymakers; and technology developers to consider as they move forward in maximizing the intended benefits and mitigating the possible risks of AI-based suicide risk monitoring programs.

Key Findings

  • Interviews with school staff, education technology company representatives, healthcare professionals, and advocacy group members suggest that AI-based suicide risk monitoring tools can help identify kindergarten-through-12th-grade (K–12) students who are at risk for suicide and provide reassurance for school staff and parents.
  • Prior research shows that AI-based suicide risk prediction algorithms—and, by extension, student activity monitoring in schools—can compromise student privacy and perpetuate existing inequalities.
  • There is a need for data to show how accurately AI-based algorithms can detect a student's risk of suicide and whether the use of these tools improves student mental health.
  • K–12 schools and their broader communities are often not sufficiently resourced to respond to youth mental health challenges, even with the use of AI-based suicide risk monitoring.
  • Key community members—including pediatric providers, mental health counselors, and caregivers—play important roles in the implementation of these tools, but they might be unaware of how K–12 schools use the tools to detect student suicide risk.


Recommendations

  • School districts should engage with their communities for feedback on the implementation of AI-based suicide risk monitoring.
  • School districts should clearly notify caregivers and students about AI-based suicide risk monitoring and clarify opt-out procedures.
  • School districts should establish effective and consistent processes for responding to AI alerts and track student outcomes from those alerts.
  • School districts should engage with students to help them understand mental health issues.
  • School districts should review and update antidiscrimination policies to consider the implementation of AI-based technologies and their potential biases against protected classes.
  • Policymakers should fund evidence-based mental health supports in schools and communities, including the use of technology.
  • Policymakers should refine government approaches and standards for privacy, equity, and oversight of suicide risk monitoring systems.
  • Technology developers should continue participation in school engagement activities to integrate feedback into their programs.
  • Technology developers should share data to allow for evaluation of the impact of AI-based monitoring software on student outcomes and develop best practices for its implementation.

Research conducted by RAND Education and Labor

Funding for this research was provided by gifts from RAND supporters and income from operations. The research was undertaken by RAND Education and Labor.

This report is part of the RAND research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited; linking directly to this product page is encouraged. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial purposes. For information on reprint and reuse permissions, please visit

RAND is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.