Wednesday, May 13, 2026

60% of U.S. teens have tried AI chatbots, 11.4% use them almost daily

 

As AI chatbots become increasingly part of daily life for American teens, a new national study documents widespread exposure to harm. While many teens use them for school, entertainment and support, researchers warn they may also expose youth to harmful content, encourage risky behavior and blur the line between human and AI relationships. The youngest teens in the study, especially 13-year-olds, were among the most exposed.


The peer-reviewed study, by Florida Atlantic University and the University of Wisconsin-Eau Claire, provides one of the first large-scale looks at how adolescents are using – and being influenced by – rapidly evolving AI chatbots. Researchers examined how often and why teens use these tools, as well as the risks involved, including exposure to unsafe content and whether chatbots may be encouraging problematic behaviors.

They surveyed 3,466 teens – 13- to 17-year-olds – nationwide, analyzing usage patterns across demographic groups including gender, race, age and sexual orientation. Researchers also assessed exposure to 13 types of harmful or unsafe interactions, from problematic content to concerning behavioral suggestions, to better understand the risks teens may face and which groups could be more vulnerable.

Results of the study, published in the Journal of Adolescence, reveal that conversational AI (CAI) chatbot use is widespread among U.S. teens, with 60.2% reporting they have used one at least once or twice, and about 1 in 20 saying they use them daily. Male teens were significantly more likely than females to report use, and white, African American and multiracial youth reported higher usage rates than Hispanic youth, while no meaningful differences emerged by age or sexual orientation.

Among teens who had used CAI chatbots, entertainment was by far the most common motivation, cited by 85% of users. Many also turned to these tools for more personal reasons, including advice or guidance (65.6%), friendship (60.1%) and even emotional or mental health support (49.2%).

More than one-third reported using chatbots for romantic companionship. Male youth were consistently more likely than female youth to report each of these motivations, and some differences also appeared across race and sexual orientation, particularly in the use of chatbots for emotional support and relationships. The researchers note that CAI chatbots can offer real value to young people, with prior research documenting benefits including educational support, creative exploration, mental health assistance and companionship for those who feel isolated.

At the same time, a substantial share of teens reported troubling interactions. Nearly one-third said a chatbot had asked for personal information that made them uncomfortable, while others described feeling monitored, being drawn into inappropriate conversations or being pressured to reveal secrets.

About 23% said they felt manipulated or pressured by a chatbot, and 17% reported that a chatbot shared false information about them. Notably, between 13% and 19% said chatbots had encouraged behaviors with real-world consequences, including unethical or illegal actions, risky activities and even self-harm or suicidal thoughts.

These negative experiences were not evenly distributed, and the youngest teens in the sample were among the most exposed. Thirteen-year-olds reported higher rates than older age groups across multiple harm categories, including being asked for personal information that made them uncomfortable, being pressured to reveal secrets, and being encouraged toward unethical, illegal or risky behavior, as well as self-harm and suicidal thoughts.

“Conversational AI is not inherently dangerous, but it is not yet consistently safe for young people,” said Sameer Hinduja, Ph.D., senior author, a professor in the School of Criminology and Criminal Justice within FAU’s College of Social Work and Criminal Justice, co-director of the Cyberbullying Research Center, and a faculty associate at the Berkman Klein Center at Harvard University. “These systems engage, respond and even affirm users in highly personalized ways, which can make their influence especially powerful. For adolescents – who are still developing critical thinking skills and a sense of identity – that can create a situation where they’re more likely to trust, internalize or act on what the chatbot is saying without fully questioning it.”

Findings also show that male youth were more likely to report many of the harms, as were heterosexual youth, a pattern researchers note is counterintuitive given prior work showing higher online risk exposure among LGBTQ+ youth and one that warrants further study. White youth generally reported higher exposure to a range of negative interactions compared with other racial groups.

Overall, nearly half of the teens surveyed – 47.1% – reported experiencing at least one of the 13 risks examined in the study, underscoring the dual nature of CAI chatbots as both widely used tools and potential sources of harm for a significant portion of youth.

The results show that adoption is moving faster than the broader response, as teens increasingly turn to these tools for advice, emotional support and companionship.

“These findings make a strong case for prioritizing youth safety in how conversational AI is built and deployed,” said Hinduja. “When nearly half of young users report experiencing harm, it signals that existing safeguards are falling short. We’re not just talking about isolated incidents. We are seeing patterns that affect a meaningful number of young users, and that is what makes a coordinated response across families, schools and companies so important.”

The researchers also note that AI responses perceived as empathetic or human-like may carry particular weight for adolescent users.

“Adults need to stay engaged and curious about how teens are interacting with AI, creating space for open, judgment-free conversations about both the benefits and the risks,” Hinduja said. “At the same time, we need stronger AI literacy education in schools, content filtering and mental health response protocols designed into these platforms from the start, reliable age verification, and regular independent audits to confirm that safety measures are working as intended. AI is here to stay, so our responsibility is to make sure young people are equipped and protected as they navigate it.”

Monday, May 11, 2026

No clear evidence that the school smartphone ban policy reduced screentime

This paper is the first to examine the causal effects of school smartphone bans on the mental health of youth in the US. Time series data show that the mental health of youth has been declining for the past decade. Several researchers argue that easy access to social media and other internet sites provided by smartphones is to blame.

To provide causal evidence of the effects of these bans, the author relies on synthetic difference-in-differences models and the National Survey of Children’s Health (NSCH) from 2016 to 2024. Currently, there are data for only one state with two post-ban periods and two states with one post-ban period, which makes the results preliminary evidence only.
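Synthetic difference-in-differences augments the basic difference-in-differences contrast with data-driven weights on control states and pre-ban years. As a minimal sketch of the plain DiD contrast it builds on, the following uses hypothetical screentime numbers, not NSCH values:

```python
# Plain difference-in-differences contrast underlying synthetic DiD.
# All numbers below are hypothetical toy values, not NSCH estimates.

def mean(xs):
    return sum(xs) / len(xs)

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Change in the treated state minus change in the comparison states."""
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Toy average daily screentime (hours) across survey waves.
effect = did_estimate(
    treated_pre=[4.0, 4.2, 3.8],   # ban state, pre-ban waves
    treated_post=[3.9, 4.1, 3.7],  # ban state, post-ban waves
    control_pre=[4.1, 4.0, 3.9],   # comparison states, pre
    control_post=[4.0, 3.9, 3.8],  # comparison states, post
)
```

A null result like the one reported here corresponds to an `effect` estimate near zero with a confidence interval spanning zero; the synthetic variant additionally reweights the comparison states and pre-period years so their trend better matches the treated state's pre-ban trend.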

The outcome variables are screentime and measures of psychological wellbeing. 

Overall, these early results provide no clear evidence that school smartphone bans reduced screentime or improved psychological wellbeing.

Future studies with additional years of data, when they are available, are needed to increase power and to estimate the longer-term effects of school bans on youth mental health.

Strategic Manipulation of University Grading Systems

 When do university grades permit informative comparisons across courses, and how does transcript adjustment affect student and instructor incentives? A raw grade mixes student performance with course-specific conditions, so grade-only comparisons fail whenever course effects are large enough to reverse ability rankings at grade cutoffs. 

This study shows that full transcripts can recover comparable student signals through what we call eigengrades: course-adjusted reports that use common or externally anchored grading standards and enrollment overlap to identify centered student effects. In the scalar additive benchmark, row-mean, affinity-spectral, and graph-Laplacian methods recover the same object. Eigengrades are, therefore, not a separate source of identification; they are a representation of fixed-effect adjustment. 
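In the scalar additive benchmark (grade = student effect + course effect + noise), the row-mean adjustment can be sketched as follows. The transcript, student names and grades are hypothetical toy data, and full enrollment overlap is assumed for simplicity:

```python
# Row-mean fixed-effect adjustment in the additive benchmark:
#   grade[s][c] = student_effect[s] + course_effect[c] + noise.
# Course effects are identified (up to centering) from column means;
# centered student effects then fall out of row means of adjusted grades.
# All numbers are illustrative, not taken from the paper.

def eigengrades(grades):
    """grades: dict student -> dict course -> grade (full overlap assumed)."""
    students = list(grades)
    courses = list(next(iter(grades.values())))
    grand = sum(g for row in grades.values() for g in row.values()) / (
        len(students) * len(courses)
    )
    # Centered course effect: column mean minus grand mean.
    course_eff = {
        c: sum(grades[s][c] for s in students) / len(students) - grand
        for c in courses
    }
    # Centered student effect: row mean of course-adjusted grades, minus grand mean.
    student_eff = {
        s: sum(grades[s][c] - course_eff[c] for c in courses) / len(courses) - grand
        for s in students
    }
    return course_eff, student_eff

# Toy transcript: the "easy" course grades about one point above the "hard" one.
toy = {
    "ann": {"easy": 3.9, "hard": 2.9},
    "bob": {"easy": 3.4, "hard": 2.4},
}
course_eff, student_eff = eigengrades(toy)
```

On exact additive toy data like this, the adjustment recovers the planted course and student effects; the paper's equivalence claim is that the affinity-spectral and graph-Laplacian constructions return this same centered fixed-effect object, not a separate source of identification.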

The framework also clarifies limits: ordinary letter grades with unanchored course-specific cutoffs do not separate course difficulty from grading standards, and multidimensional transcripts identify a skill-match subspace rather than a unique universal ranking unless the institution specifies a benchmark. 

Finally, exact difficulty adjustment removes the direct report-mediated incentive to choose easier courses and eliminates a competitive enrollment channel behind grade inflation, while leaving other strategic and governance margins intact.

Lessons from the First Statewide Mandate on School Start Times

This study examines the impact of California’s Senate Bill 328 (SB 328), the first statewide mandate requiring later school start times for middle and high schools, on adolescent sleep, mental health, and academic outcomes. 

The authors find that SB 328 increased by 13% the share of students sleeping at least 8 hours per night, the minimum recommended by the CDC for this age group.

Average mental health effects are imprecisely estimated, but boys show significant reductions in sadness, hopelessness, and suicidal ideation, and Hispanic students, who experienced the largest sleep-timing shifts, show parallel reductions in difficulty concentrating; together these patterns are consistent with a dose-response relationship between sleep improvement and mental well-being. 

Math and English scores in grade 8 improved by approximately 0.08–0.10 standard deviations, with the largest gains among Hispanic and economically disadvantaged students. 

A within-state analysis using teachers’ commute arrival times as a proxy for pre-policy school start times corroborates these findings, and shows academic gains accumulating over 2023–2025 alongside a suggestive decline in high school dropout rates. 

The absence of effects on chronic absenteeism rules out an attendance-driven mechanism, pointing instead to the direct cognitive benefits of aligning school schedules with adolescents’ biological rhythms.

Gender Gaps in Education and Declining Marriage Rates

 Over the past half-century, U.S. four-year colleges have shifted from enrolling mostly men to enrolling mostly women, while the economic position of non-college men has weakened markedly. This study examines how these changes correspond with the evolving structure of marriage markets across cohorts and places. 

As college men have become increasingly scarce, college women have maintained stable marriage rates by marrying high-earning non-college men. This shift—combined with the broader economic decline of non-college men—has sharply reduced the pool of economically stable partners available to non-college women: the share of non-college men who earn above the national median and are not married to college women has fallen by more than 50%. 

Cross-area evidence shows that education gaps in marriage are smaller where non-college men face lower rates of joblessness and incarceration. 

Taken together, the evidence suggests that deteriorating outcomes for men have primarily undermined the marriage prospects of non-college women.

Saturday, May 9, 2026

Smartphone app + personal coaching improves college student mental health

 A study of more than 6,200 university students, including some at WashU, found that a smartphone app combined with personal coaching via text messages can be an effective intervention against depression, anxiety, and eating disorders.

For the students in the study — all of whom were identified through college-wide screening as being at high risk for or as having a mental health condition — the digital approach proved more effective than a referral to campus counseling services, the typical next step for students who show signs of mental health struggles. 

Compared with students who received a referral, those who were offered the app reported fewer symptoms of mental health problems in follow-up testing six weeks, six months, and two years later. They were also more likely to be free of any mental health disorders.

“Universities like WashU already have excellent mental health services, but not all students will take the steps to make an appointment,” said Denise Wilfley, the Scott Rudolph University Professor and a professor of psychological and brain sciences. “We were able to offer students an effective resource that they could download on their phones right then and there.”

Wilfley was the senior author of the study published in Nature Human Behaviour. Ellen Fitzsimmons-Craft, an associate professor of psychological and brain sciences and an associate professor of psychiatry, was a co-first author. Michelle Newman of Penn State and Daniel Eisenberg of UCLA were also co-authors.

The app is designed to deliver a digital version of cognitive behavioral therapy (CBT), a well-established therapeutic approach that aims to identify and change the negative thought and behavior patterns that can drive anxiety, depression, and eating disorders.

Responding to prompts, users completed interactive modules where they received psychoeducational content and engaged in exercises to help them learn and practice the content. The coaches could then review their progress and provide personalized responses and feedback. “The coaches help students implement the things they're learning through the mobile app,” Fitzsimmons-Craft said. “They provide feedback on progress and get students thinking about what they’re doing to achieve positive change.”

The app’s accessibility turned out to be a major advantage. Nearly 75% of students randomly chosen to receive the app used it at least once. In contrast, only 30% of students who received a referral to campus mental health services reported receiving any mental health treatment in the following six months. The accessibility advantage of the app was evident for all student groups, including those from disadvantaged backgrounds and those who generally face greater barriers to care. “Having something right on their phone made a big difference for students,” Wilfley said.

Campus-based counseling services, including those offered at WashU, are still an invaluable resource for students, Wilfley said. “We're not using digital tools to replace counseling services,” she said. “We’re developing a way to make evidence-based intervention available to as many students as possible. We’re removing barriers to care.”

Unlike some other digital mental health platforms, the app used in the study doesn’t run on artificial intelligence. That’s an important distinction, because generative AI-based therapy remains largely untested and carries certain risks, including the possibility of misinformation and harmful advice. In November 2025, the American Psychological Association recommended against the use of generative AI chatbots and wellness apps as a replacement for standard mental health care.

Artificial intelligence could still be an important tool for addressing mental health concerns on campus. Leading a team that includes Wilfley, Fitzsimmons-Craft is the principal investigator on a five-year, $3.7 million grant from the National Institutes of Health (NIH) to develop a self-guided, chatbot-based digital intervention designed to help students with eating disorders. The chatbot uses carefully controlled rules-based AI, not generative AI.

Student mental health should be a top concern for campuses around the country, Fitzsimmons-Craft said. In the current study, nearly half of the 39,194 students who completed initial screening were identified as either having or being at high risk for depression, anxiety, or an eating disorder. In addition to the physical and emotional toll, such conditions can make it difficult or impossible for students to succeed academically, she said.

“Many students wait until they reach a crisis point to reach out to the counseling center,” Fitzsimmons-Craft added. “By pairing screening with immediate access to the app, students have an opportunity to take a more proactive approach to their mental health.”

Wilfley, Fitzsimmons-Craft, and colleagues are now working to make the app available to all students who are struggling with mental health. “Sometimes evidence-based research can be locked away for many years before it reaches the public,” Wilfley said. “Digital interventions should be available to everybody who needs it. The fact that this study started with large-scale screening on college campuses shows the potential for reaching large populations.”

Given the prevalence of mental health disorders on campuses across the country, it would make sense for colleges and universities to screen all incoming freshmen for anxiety, depression, and eating disorders, Wilfley said.  

This work demonstrates that the combination of population-based mental health screening and digital interventions can not only reduce psychiatric symptoms and improve quality of life but also prevent psychiatric disorders. “This approach can simultaneously reduce the prevalence of mental disorders, expand equitable access to care, and improve affected individuals’ symptoms,” Wilfley said.

“WashU already has a program that promotes awareness about alcohol use disorders, which, of course, is an extremely important issue,” Wilfley said. “But universities could also take a more proactive approach to mental health.”

Wednesday, May 6, 2026

How are teachers reckoning with AI in schools?


Artificial intelligence has swept into American schools, and more is sure to come. This year, both Google and Microsoft — the two biggest companies at the forefront of the AI boom — announced major investments in AI training for teachers. 

But what do teachers think of this transformation of their work?

Katie Davis, a University of Washington professor in the Information School and co-director of the Center for Digital Youth, studies how technology affects young people’s learning and development. Davis has also been teaching for over two decades — first as an elementary school teacher and now as a professor — so she’s acutely aware of how earlier technological revolutions in teaching have not always played out as hoped.

Davis and a UW-led team of researchers interviewed 22 teachers in Aurora Public Schools in Colorado — a district that’s investing heavily in AI through systems like Google’s Gemini and MagicSchool, an AI tool that helps teachers plan. Overall, teachers were ambivalent about the technology. They liked that it could reduce workload, especially for rote tasks, but worried that it could erode the social aspects of teaching.

The team presented its research April 15 at the Association for Computing Machinery Conference on Human Factors in Computing Systems in Barcelona.

UW News talked with Davis about the study and how ostensibly democratizing technologies can widen disparities in schools. 

Why did you want to study AI adoption by schools?

Katie Davis: At least since the introduction of the radio, every new technological invention has been hyped for how it will change teaching and learning. Computers are the prototypical example. They were pushed into schools only to start collecting dust, because they didn’t really change anything. We saw it with massive open online courses, too. Ten or 15 years ago, these courses were supposed to transform education and put colleges and universities out of business. But that hasn’t happened.

Often the hype centers on closing educational inequities. But these new technologies actually tend to aggravate existing inequities. The schools serving the most affluent students have the resources to think carefully about how to incorporate technologies into their curriculum so that they’re supporting student learning goals and outcomes, whereas more under-resourced schools don’t have the resources or the time to do that kind of work. So they end up incorporating technologies in ways that don’t necessarily help students learn; instead, they make things more efficient or keep track of students.

When AI started being intensely pushed into schools, I thought here we go again. AI is here and it’s not going anywhere, so I would love for us to understand how it’s being taken up in schools and, ideally, to prevent this recurring pattern.

What did you hear from teachers about AI?

KD: Teachers expressed a deep ambivalence toward AI. It wasn’t as if any one teacher said it’s all great or it’s all terrible. I think the single strongest driver for teachers to use AI was to prevent burnout. Teachers are being asked to do more and more — not just teach, but care for students’ entire emotional, cognitive and academic lives. It really weighs on them. So a lot of them talked about turning to AI to be a thought partner, to help them brainstorm lesson ideas, create assessments and differentiate lessons for different learners.

Another really big benefit for this particular school district was multilingual support. The district serves students who speak more than 160 languages. One teacher we spoke with said she had four main languages represented in her classroom but she only spoke English, so she was turning to AI to help her translate materials for her students and for their families so that she could communicate with them. 

I think it’s really important to note that this district is going all in on AI. They’re encouraging teachers to use it and providing professional development, and teachers are talking among themselves and sharing ideas. This kind of institutional support and more informal teacher conversations are also encouraging teachers to use AI and explore how they might incorporate it into their teaching practice.

AI is often presented as a democratizing technology, but a Financial Times story recently showed that higher wage earners are using AI more than lower wage earners in the same industry — possibly increasing disparities. Are you seeing anything like that playing out in education?

KD: The way that manifests in education is in the kinds of support that students have access to. It’s more likely that better-resourced schools are also going to provide some form of AI literacy instruction — to really engage students in thoughtful reflection about what AI is, how it may or may not be useful for their learning, and to actually get them to think about these issues in a deep way. Whereas in under-resourced schools, the easiest thing to do is to just block AI. That’s not going to prevent students from using it, but they will end up using it in a communication vacuum, without any adult guidance. You can see how that would create disparities in how well students can use it.

I was really interested in the finding that teachers are concerned that students will know they’re using AI.

KD: That is one of the most interesting findings for me. Teachers are definitely aware that if their students think they’ve used AI, students and their parents will feel that their teachers are cheating them out of a proper education. Teachers are very worried about both students and their more AI-resistant colleagues seeing them that way. I don’t think this is unique to teachers — I feel it in university jobs, too. Many people have this perception that using AI is cheating or taking the easy way out. 

But there’s another layer: Teachers are personally worried about their own authentic voice and professional identity. They’re asking, “If I am using AI, at what point am I no longer a teacher? Where’s that line between using AI as a thought partner to augment my professional practice versus it now replacing my professional practice?” 

What are ways schools might amplify the positive parts of using AI while mitigating some of these negative effects?

KD: One of the first things is to bring AI out of the shadows and talk about it. Since we published this piece, I’ve been engaging with groups of teachers around the country in professional development experiences around AI, and they really enjoy having a community of practice. They feel that those spaces don’t necessarily exist in their schools. It’s like there’s this vacuum of communication — students don’t talk about it because they’re implicitly getting the message that it’s not OK to use it, and it’s the same with teachers.

Professional development is also very important. But a lot of professional development for teachers is just one-off PowerPoint presentations. It doesn’t really connect to whatever is going on in the classroom. Professional development needs to be done in a sustained way that meaningfully connects AI to teachers’ immediate classroom experiences.

School leaders need to be able to communicate AI policies, so that teachers are aware of them and understand how they apply in their specific schools. If you take Washington state as an example, the Office of Superintendent of Public Instruction has a really great blueprint and guidance for using AI. But my sense is that not many teachers are aware of it, or even if they are, there hasn’t been any concerted effort to say, “OK, this is what that means in our school.” We need to be working at many levels to make sure that AI is integrated into education well. 

Is there anything you want to add?

KD: Something I hold very dear as a teacher is that teaching is relational. Kids don’t learn in isolation. The CEO of Khan Academy gave a TED Talk saying the ideal vision is for every kid on the planet to have their own personal AI tutor and for every teacher to have their own personal AI teaching assistant. Maybe that would be great, but I worry that this push toward AI will erode the relationships between teachers and students. Teaching and learning are social processes. It’s not just about putting information into a student’s brain. Students learn through dialogue, through participation in cultural practices. To remove that element of learning really concerns me.