Artificial Intelligence–Based Student Activity Monitoring for Suicide Risk
Considerations for K–12 Schools, Caregivers, Government, and Technology Developers
Published Dec 5, 2023

Some schools have begun employing artificial intelligence (AI)–based tools to help identify students at risk for suicide and self-harm. The authors provide a preliminary examination of how these programs are implemented, how stakeholders perceive the effects of the programs, and their potential benefits and risks. The authors then offer recommendations for school leaders, policymakers, and technology developers to consider.
Research Questions

- What has existing research found about how accurately AI-based suicide risk monitoring identifies youth who are at risk for suicide?
- How is AI-based suicide risk monitoring being used in K–12 schools to detect and prevent youth suicide risk and self-harm?
- What is the perceived impact of these programs on students, what are their potential risks, and how can benefits be realized while mitigating risks?
- What are best practices and recommendations for schools, caregivers, technology developers, and governments seeking to use these technologies in K–12 schools while preventing potential harms?
In response to the widespread youth mental health crisis, some kindergarten-through-12th-grade (K–12) schools have begun employing artificial intelligence (AI)–based tools to help identify students at risk for suicide and self-harm. The adoption of AI and other types of educational technology to help address student mental health needs was a natural next step for many schools during the transition to remote education. However, there is limited understanding of how such programs work, how they are implemented by schools, and how they may benefit or harm students and their families.
To assist policymakers, school districts, school leaders, and others in making decisions regarding the use of these tools, the authors address these knowledge gaps by providing a preliminary examination of how AI-based suicide risk monitoring programs are implemented in K–12 schools, how stakeholders perceive the effects that the programs are having on students, and the potential benefits and risks of such tools. Using this analysis, the authors also offer recommendations for school and district leaders; state, federal, and local policymakers; and technology developers to consider as they seek to maximize the intended benefits and mitigate the possible risks of AI-based suicide risk monitoring programs.
Key Findings

- Interviews with school staff, education technology company representatives, health care professionals, and advocacy group members suggest that AI-based suicide risk monitoring tools can help identify kindergarten-through-12th-grade (K–12) students who are at risk for suicide and provide reassurance for school staff and parents.
- Prior research shows that AI-based suicide risk prediction algorithms—and, by extension, student activity monitoring in schools—can compromise student privacy and perpetuate existing inequalities.
- There is a need for data to show how accurately AI-based algorithms can detect a student's risk of suicide and whether the use of these tools improves student mental health.
- K–12 schools and their broader communities are often not sufficiently resourced to respond to youth mental health challenges, even with the use of AI-based suicide risk monitoring.
- Key community members—including pediatric providers, mental health counselors, and caregivers—play important roles in the implementation of these tools, but they might be unaware of how they are used by K–12 schools to detect student suicide risk.
Recommendations

- School districts should engage with their communities for feedback on the implementation of AI-based suicide risk monitoring.
- School districts should clearly notify caregivers and students about AI-based suicide risk monitoring and clarify opt-out procedures.
- School districts should establish effective and consistent processes for responding to AI alerts and track student outcomes from those alerts.
- School districts should engage with students to help them understand mental health issues.
- School districts should review and update antidiscrimination policies to consider the implementation of AI-based technologies and their potential biases against protected classes.
- Policymakers should fund evidence-based mental health supports in schools and communities, including the use of technology.
- Policymakers should refine government approaches and standards for privacy, equity, and oversight of suicide risk monitoring systems.
- Technology developers should continue participating in school engagement activities and integrate the resulting feedback into their programs.
- Technology developers should share data to allow for evaluation of the impact of AI-based monitoring software on student outcomes and develop best practices for its implementation.