Students push back against gaze tracking and emotion AI in universities


CO-EDP, VisionRI | Updated: 18-02-2026 10:24 IST | Created: 18-02-2026 10:24 IST

Artificial intelligence (AI) systems that promise to detect when students lose focus, feel confused, or disengage have started entering university classrooms. But new research suggests that instead of improving learning, these AI monitoring tools may be triggering anxiety, discomfort, and resistance among students who feel watched rather than supported.

The findings are from a study titled "AI Sensing and Intervention in Higher Education: Student Perceptions of Learning Impacts, Affective Responses, and Ethical Priorities," presented at the 2026 CHI Conference on Human Factors in Computing Systems. The research reveals that students strongly oppose camera-based attention and emotion tracking, ranking autonomy and privacy above promised learning gains.

Students reject AI monitoring, even when it promises better learning

The research team designed six simulated classroom scenarios to test student reactions. These scenarios varied across three dimensions: whether AI sensing was used at all, whether the sensing relied on gaze tracking or facial emotion detection, and whether the resulting intervention came from an automated system or a human teacher.
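
The paper's exact materials are not reproduced here, but one plausible reading of that design is a three-level sensing factor (no sensing, gaze tracking, emotion detection) crossed with two intervention sources, which yields exactly six conditions. The short Python sketch below simply enumerates that assumed crossing; the condition labels are illustrative, not the study's own.

```python
from itertools import product

# Assumed reconstruction of the 3 x 2 design described above: a sensing factor
# (none, gaze tracking, facial emotion detection) crossed with an intervention
# source (automated system hint, teacher alert). Labels are illustrative.
SENSING = ["none", "gaze_tracking", "emotion_detection"]
INTERVENTION = ["system_hint", "teacher_alert"]

scenarios = [{"sensing": s, "intervention": i} for s, i in product(SENSING, INTERVENTION)]

for idx, sc in enumerate(scenarios, start=1):
    print(f"Scenario {idx}: sensing={sc['sensing']}, intervention={sc['intervention']}")

assert len(scenarios) == 6  # matches the six simulated classroom scenarios
```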

Students responded significantly more negatively when they were told that AI sensing was in use. Their belief that the system would help personalize learning, improve efficiency, or boost grades dropped sharply once monitoring was disclosed. At the same time, reported feelings of anxiety, discomfort, distraction, and fear of making mistakes increased.

In scenarios without sensing, students were relatively open to automated hints and feedback. They saw potential for improved efficiency and tailored assistance. However, once those same interventions were described as being based on camera-driven attention or emotion detection, perceptions shifted from neutral or mildly positive to clearly negative.

Notably, the type of sensing made little difference. Students reacted just as negatively to gaze-based attention tracking as they did to facial emotion recognition. The assumption that attention detection might feel less intrusive than emotion analysis was not supported by the data. Both forms of biometric monitoring triggered similar levels of discomfort.

Qualitative responses revealed why. Many participants described the experience of constant monitoring as stressful and distracting. Rather than focusing on their learning tasks, they expected to worry about how they appeared on camera or whether their natural expressions would be misinterpreted. Some feared that ordinary emotional fluctuations or atypical behaviors would be flagged as signs of struggle.

Students also raised concerns about being pressured to perform attentiveness. The idea of having eye movements tracked created a sense that they would need to appear constantly focused, even though real learning often includes moments of reflection, distraction, or confusion. For some, the concept felt invasive enough that they imagined avoiding classes altogether.

The findings challenge the common assumption that more data leads to better learning outcomes. While AI sensing systems are marketed as tools to improve personalization and responsiveness, this study suggests that the monitoring mechanism itself may undermine students' comfort and trust, weakening any potential educational gains.

Automated hints preferred over teacher alerts, but only under student control

The second major finding focuses on how interventions are delivered. Across nearly all conditions, students preferred system-generated hints over teacher-initiated assistance. Automated prompts were seen as less socially awkward and more respectful of personal space.

When no sensing was involved, system-generated hints were viewed as helpful for personalization and efficiency. Students reported lower anxiety and discomfort compared to scenarios in which teachers were automatically alerted to provide assistance. Many participants expressed a desire to manage their own help-seeking behavior rather than being singled out by an AI system in front of peers.

Teacher-mediated interventions raised social concerns. Students worried about being publicly identified as struggling, drawing unwanted attention, or experiencing embarrassment if a tutor repeatedly approached them based on algorithmic signals. In classroom environments where peer perception matters, the social cost of automated teacher alerts appeared high.

However, this preference for automated hints was conditional. Once students learned that hints were triggered by gaze or facial monitoring, even system-generated assistance became less appealing. Anxiety and discomfort levels rose, and perceived learning benefits declined. The same feature that seemed useful in a non-monitoring context was judged more harshly when tied to AI sensing.

This suggests that acceptance of AI support tools hinges less on the presence of automation and more on the underlying data collection practices. Students may welcome intelligent tutoring features if they feel voluntary and non-intrusive. When these features depend on real-time biometric monitoring, support quickly erodes.

Participants repeatedly emphasized the importance of control. They wanted the option to decide when to receive hints and whether teachers should be notified. Some suggested that systems could prompt them privately first, allowing them to choose whether further assistance was necessary. The desire for agency emerged as a central theme across responses.

The research highlights a broader tension in AI-enhanced education. Tools designed to increase efficiency and responsiveness may conflict with students' expectations of independence and self-directed learning. In higher education settings, where learners are treated as adults, involuntary monitoring and intervention can feel patronizing rather than supportive.

Autonomy and privacy outweigh promised learning benefits

Apart from emotional reactions and learning perceptions, the study examined how students prioritize ethical concerns. Participants rated and ranked six core ethical principles: privacy, autonomy, fairness, accuracy, transparency, and learning beneficence.

Autonomy and privacy clearly emerged as the top priorities. Students expressed strong concern about not being asked for consent before AI deployment and not being given the option to opt out. The idea of hidden AI use or mandatory monitoring generated high levels of discomfort. For many participants, the right to choose whether and how AI systems operate in their learning environment was foundational.

Privacy concerns extended beyond data breaches. Students were particularly uneasy about their emotional states or learning struggles being shared with teachers or peers. Social privacy, not just data security, played a critical role. Being flagged as confused or disengaged in front of others was seen as potentially stigmatizing.

Accuracy and transparency were important but ranked lower than autonomy and privacy. Students were concerned about systems misinterpreting their behavior or emotions, especially given natural variations in expression, neurodiversity, or cultural differences. However, understanding the technical workings of algorithms was less urgent than knowing how their data would be used and having control over participation.

Fairness received the lowest ranking overall. While algorithmic bias in education has been widely documented, students in this study appeared more focused on avoiding intrusive monitoring altogether than on ensuring equal treatment within the system. Some participants did raise concerns about neurodivergent students, cultural bias in facial recognition, and inclusivity, but these issues did not dominate the rankings.

Notably, when forced to weigh trade-offs, students prioritized autonomy and privacy over potential learning benefits. The possibility that AI might help them receive faster or more personalized support did not outweigh concerns about loss of control and surveillance.

The study suggests that unless systems are designed to respect agency and protect privacy, students may reject them regardless of their technical sophistication.

Implications for AI in education

The authors argue that AI in education must move beyond technical optimization and embed ethical and human-centered principles at its core.

First, intrusive sensing should be minimized. Alternatives to camera-based gaze and emotion detection, such as analyzing interaction patterns within learning platforms, may feel less invasive. If sensing is used, it should ideally operate on personal devices with clear opt-in mechanisms rather than as a blanket classroom requirement.
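
As a rough illustration of what that platform-side alternative could look like, the sketch below flags possible struggle from idle time and repeated wrong answers inside a learning platform, and only when the student has explicitly opted in. The thresholds, field names, and opt-in flag are all assumptions made for the example, not features of any system evaluated in the study.

```python
import time
from dataclasses import dataclass, field

@dataclass
class InteractionLog:
    opted_in: bool = False                      # sensing runs only after explicit opt-in
    last_action_ts: float = field(default_factory=time.time)
    consecutive_wrong_answers: int = 0

def maybe_flag_struggle(log: InteractionLog,
                        idle_threshold_s: float = 300.0,
                        wrong_answer_threshold: int = 3) -> bool:
    """Return True only if the student opted in and their interaction pattern
    (long idle time or repeated wrong answers) suggests they may need help."""
    if not log.opted_in:
        return False
    idle_time = time.time() - log.last_action_ts
    return (idle_time > idle_threshold_s
            or log.consecutive_wrong_answers >= wrong_answer_threshold)
```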

Second, interventions should be lightweight, customizable, and student-triggered. Automated hints can be effective if they preserve learner control. Allowing students to accept, dismiss, or request assistance reduces the sense of surveillance.
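
A minimal sketch of that kind of student-controlled flow, assuming a simple UI callback, might look like the following; the function and enum names are invented for illustration rather than taken from the study.

```python
from enum import Enum
from typing import Callable, Optional

class HintChoice(Enum):
    ACCEPT = "accept"
    DISMISS = "dismiss"
    ASK_LATER = "ask_later"

def offer_hint(hint_text: str, ask_student: Callable[[str], HintChoice]) -> Optional[str]:
    """Offer a private, dismissible hint; the student decides whether to see it."""
    choice = ask_student("A hint is available. Show it?")
    if choice is HintChoice.ACCEPT:
        return hint_text      # shown only because the student explicitly asked
    return None               # dismissed or deferred: nothing is pushed to the teacher
```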

Third, teacher-mediated interventions need redesign. Instead of automatically notifying instructors when algorithms detect struggle, systems could first inform students privately and seek consent before escalating. Aggregated class-level insights might also help instructors adjust teaching without singling out individuals.
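
A hedged sketch of such an escalation path follows, with invented helper callables standing in for the consent prompt and the teacher notification, plus a class-level aggregate that avoids naming individuals.

```python
from typing import Callable, Dict

def escalate_with_consent(student_id: str, topic: str,
                          ask_consent: Callable[[str], bool],
                          notify_teacher: Callable[[str, str], None]) -> bool:
    """Notify the instructor only after the student privately agrees."""
    prompt = (f"The system thinks you might want help with {topic}. "
              "Share this with your instructor?")
    if ask_consent(prompt):
        notify_teacher(student_id, topic)
        return True
    return False

def class_level_summary(struggle_counts: Dict[str, int]) -> Dict[str, float]:
    """Report per-topic struggle as shares of the class, with no student names."""
    total = sum(struggle_counts.values()) or 1
    return {topic: count / total for topic, count in struggle_counts.items()}
```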

Fourth, consent must be dynamic rather than one-time. Students' comfort with AI may vary depending on context, task difficulty, or personal circumstances. Systems that allow ongoing choice can reinforce trust.
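
One way to read "dynamic consent" in code terms, again as an assumption rather than the authors' design, is a consent record that can be granted or revoked per purpose at any time and is checked before every action.

```python
class DynamicConsent:
    """Consent that can be granted, narrowed, or revoked at any time."""

    def __init__(self) -> None:
        self._granted: set = set()        # e.g. {"interaction_logging"}

    def grant(self, purpose: str) -> None:
        self._granted.add(purpose)

    def revoke(self, purpose: str) -> None:
        self._granted.discard(purpose)

    def allows(self, purpose: str) -> bool:
        # Checked before every sensing or intervention step, so a change
        # of mind takes effect immediately rather than at the next sign-up.
        return purpose in self._granted
```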

The study also highlights the link between emotional experience and learning. Positive emotions corresponded with more favorable perceptions of learning impact. When anxiety and discomfort rose, belief in educational benefit declined. AI systems that trigger stress may therefore undermine their own pedagogical goals.
