The advent of generative AI has elicited waves of frustration and worry across academia for all the reasons one might expect: Early studies are showing that artificial intelligence tools can dilute critical thinking and undermine problem-solving skills. And there are many reports that students are using chatbots to cheat on assignments.
But how do students feel about AI? And how is it affecting their relationships with peers, instructors and their coursework?
I am part of a group of University of Pittsburgh researchers with a shared interest in AI and undergraduate education. While there is a growing body of research exploring how generative AI is affecting higher education, there is one group that we worry is underrepresented in this literature, yet perhaps uniquely qualified to talk about the issue: our students.
Our team ran a series of focus groups with 95 students across our campuses in the spring of 2025 and found that whether students and faculty are actively using AI or not, it is having significant interpersonal and emotional effects on learning and trust in the classroom. While AI products such as ChatGPT, Gemini or Claude are, of course, affecting how students learn, their emergence is also changing students’ relationships with their professors and with one another.
‘It’s not going to judge you’
Most of our focus group participants had used AI in an academic setting – when faced with a time crunch, when they perceived something to be “busy work,” or when they were “stuck” and worried that they couldn’t complete a task on their own. We found that most students don’t start a project using AI, but many are willing to turn to it at some point.
Many students described positive experiences using AI to help them study or answer questions, or give them feedback on papers. Some even described using AI instead of a professor, tutor or teaching assistant. Others found a chatbot less intimidating than attending office hours where professors might be “demeaning.” In the words of one interviewee: “With ChatGPT you can ask as many questions as you want and it’s not going to judge you.”
But by using it, you may be judged. While some were excited about using AI, many students voiced mild feelings of guilt or shame about their AI use, whether over environmental or ethical concerns or a worry about coming across as lazy. Some even expressed a feeling of helplessness, or a sense of inevitability, regarding AI in their futures.
Anxiety, distrust and avoidance
While many students expressed a sense that faculty members are, as one participant put it, “very anti-ChatGPT,” they also lamented the fact that the rules around acceptable AI use were not sufficiently clear. As one urban planning major put it: “I feel uncertain of what the expectations are,” with her peer chiming in, “We’re not on the same page with students and teachers or even individually. No one really is.”
Students also described feelings of distrust and frustration toward peers they saw as overly reliant on AI. Some talked about asking classmates for help, only to find that they “just used ChatGPT” and hadn’t learned the material. Others pointed to group projects, where AI use was described as “a giant red flag” that made them “think less” of their peers.
These experiences feel unfair and uncomfortable for students. They can report their classmates for academic integrity violations – and enter yet another zone in which distrust mounts – or they can try to work with them, sometimes with resentment. “It ends up being more work for me,” a political science major said, “because it’s not only me doing my work by myself, it’s me double checking yours.”
We observed distrust in both student-to-teacher and student-to-student relationships. Learners shared fears of being left behind if other students in their classes used chatbots to get better grades. This resulted in emotional distance and wariness among students. Indeed, our findings echo other reports indicating that the mere possibility a student might have used a generative AI tool is now undercutting trust across the classroom. Students are as anxious about baseless accusations of AI use as they are about being caught using it.
Students described feeling anxious, confused and distrustful, and sometimes even avoiding peers or learning interactions. As educators, this worries us. We know that academic engagement – a key marker of student success – comes not only from studying the course material, but also from positive engagement with classmates and instructors alike.
AI is affecting relationships
Indeed, research has shown that faculty-student relationships are an important indicator of student success. Peer-to-peer relationships are essential too. If students are sidestepping important mentoring relationships with professors or meaningful learning experiences with peers due to discomfort over ambiguous or shifting norms around the use of AI technology, institutions of higher education could imagine alternative pathways for connection. Residential campuses could double down on in-person courses and connections; faculty could be incentivized to encourage students to visit during office hours. Faculty-led research, mentoring and campus events where faculty and students mix in an informal fashion could also make a difference.
We hope our research can also flip the script and disrupt the trope that students who use AI are simply “cheaters.” Instead, it tells a more complex story of students being thrust into a reality they didn’t ask for, with few clear guidelines and little control.
As generative AI continues to pervade everyday life, and institutions of higher education continue to search for solutions, our focus groups underscore the importance of listening to students and considering novel ways to help them feel more comfortable connecting with peers and faculty. Understanding these evolving interpersonal dynamics matters because how we relate to technology increasingly shapes how we relate to one another. Given our experiences in dialogue with them, it is clear that students are more than ready to talk about this issue and its impact on their futures.
Acknowledgment: Thank you to the full team from the University of Pittsburgh Oakland, Greensburg, Bradford and Johnstown campuses, including Annette Vee, Patrick Manning, Jessica FitzPatrick, Jessica Ghilani, Catherine Kula, Patty Wharton-Michael, Jialei Jiang, Sean DiLeonardi, Birney Young, Mark DiMauro, Jeff Aziz, and Gayle Rogers.

The post “University students feel ‘anxious, confused and distrustful’ about AI in the classroom and among their peers” by Elise Silva, Director of Policy Research at the Institute for Cyber Law, Policy, and Security, University of Pittsburgh was published on 07/16/2025 by theconversation.com