AI skills gap emerges between students and educators
Universities worldwide are now grappling with how to integrate AI technologies into classrooms while maintaining academic integrity, ensuring fair access to digital tools, and preparing students with the skills needed for an AI-driven future.
A new study examines this evolving challenge. Titled "Exploring Student and Educator Challenges in AI Competency Development: A Comparative Analysis" and published in the journal Multimodal Technologies and Interaction, the research compares how students and educators engage with AI technologies and the obstacles each group faces in developing essential AI-related skills.
Students and educators show different patterns of AI engagement
Students generally report more frequent use of AI technologies in their academic work. Many rely on AI-powered systems for tasks such as information retrieval, language assistance, drafting written content, and exploring new topics.
This high level of adoption reflects the increasing availability of generative AI tools that can produce summaries, explanations, and creative content on demand. For students facing tight deadlines and heavy academic workloads, these tools often become convenient companions in the learning process. The survey results suggest that students tend to view AI systems primarily as practical tools for improving efficiency and supporting problem solving.
Educators, however, display a more cautious approach to AI adoption. While many instructors acknowledge the potential benefits of AI-powered educational technologies, they also express concerns about how these tools may affect academic integrity, learning quality, and assessment practices. The rapid rise of generative AI systems capable of producing essays, reports, and programming code has intensified debates about plagiarism and originality in academic work.
Another factor influencing educators' attitudes is the lack of clear institutional guidelines governing AI use in educational settings. Many respondents report uncertainty about how students should be allowed to use AI tools during coursework or examinations. Without consistent policies, educators often struggle to determine whether the use of AI constitutes legitimate assistance or academic misconduct.
The study also identifies differences across academic disciplines. Participants in technical fields such as computer science, engineering, and data science report higher levels of familiarity with AI technologies and greater confidence in using them. In contrast, educators and students in humanities and social science disciplines often report lower levels of experience with AI systems and greater uncertainty about their potential role in teaching and research.
These disciplinary differences highlight an emerging divide in AI literacy across higher education. Students and educators in technology-oriented fields are more likely to integrate AI tools into their daily academic activities, while those in other disciplines may have fewer opportunities to develop relevant skills.
Institutional support and training gaps slow AI competency development
The study highlights structural barriers that hinder the development of AI competencies within universities. One of the most frequently reported challenges is the lack of formal training opportunities for both educators and students.
Many participants indicate that their institutions have not yet introduced comprehensive programs to teach AI literacy or responsible AI use. As a result, individuals often learn about AI tools independently through experimentation rather than through structured educational programs. This informal learning process can create gaps in understanding, particularly regarding ethical considerations, data privacy, and the limitations of AI-generated information.
Professional development opportunities for educators are particularly limited. Teachers and lecturers often report that they lack access to training programs that would help them understand how AI technologies work and how they might be integrated into teaching strategies. Without adequate support, educators may feel unprepared to guide students in the responsible use of AI systems.
The study also emphasizes the role of infrastructure in shaping AI competency development. Access to reliable digital infrastructure, computing resources, and advanced software tools varies widely across institutions and regions. In well-resourced universities, students may have access to advanced computing platforms, AI laboratories, and specialized courses. In other institutions, particularly those located in developing regions, such resources may be scarce.
These disparities raise concerns about the potential emergence of new forms of digital inequality. Students studying in technologically advanced environments may gain valuable AI skills that enhance their future career prospects. Meanwhile, those in institutions with limited technological resources may struggle to develop comparable competencies.
The researchers note that addressing these inequalities will require coordinated efforts from governments, universities, and international organizations. Investments in digital infrastructure, training programs, and educational resources will be essential for ensuring that AI education opportunities are distributed more equitably.
Ethical awareness and responsible AI use in education
One major concern is the potential misuse of AI tools for academic misconduct. Generative AI systems can produce written content that closely resembles human-generated work, making it more difficult for instructors to detect plagiarism. This challenge has led many educators to reconsider traditional assessment methods and explore alternative approaches that emphasize critical thinking, creativity, and personal engagement.
The study suggests that AI technologies can also support ethical and inclusive learning when used appropriately. For example, AI-powered language tools can assist students who are studying in a second language by helping them improve writing clarity and comprehension. Similarly, adaptive learning systems can personalize educational experiences to meet the needs of diverse learners.
To maximize these benefits while minimizing risks, the researchers argue that AI competency frameworks should go beyond technical training. Educational programs should also emphasize critical thinking, ethical reasoning, and an understanding of how AI systems generate and interpret data.
International guidelines, such as those developed by UNESCO, provide a useful foundation for designing such frameworks. These guidelines outline key competencies related to AI literacy, including understanding the social impact of artificial intelligence, recognizing biases in algorithms, and applying ethical principles when using AI tools.
The study calls for collaboration in developing these frameworks. Policymakers, educators, and students must work together to design policies and educational strategies that reflect real-world classroom experiences. Policies created without input from those directly involved in teaching and learning may fail to address practical challenges.
- FIRST PUBLISHED IN: Devdiscourse