Why professionals use AI to learn: Key factor behind GenAI adoption
While millions of workers now use AI tools daily, researchers are discovering that only some users integrate these systems into deeper learning and decision-making processes.
A new academic study, "When Professions Meet GenAI: Patterns of Self-Regulated Learning," by Meital Amzalag of the Holon Institute of Technology and published in Education Sciences, explores how professionals across different sectors engage with generative AI for learning. The research shows that perceived usefulness of AI, rather than motivation or profession alone, determines whether people adopt the technology as a tool for reflective and self-directed learning.
The results challenge assumptions that AI adoption is primarily driven by enthusiasm for new technology. Instead, the research suggests that how useful people believe AI is for improving their learning outcomes is the strongest factor influencing whether they use it in deeper and more reflective ways.
Perceived usefulness drives meaningful AI-supported learning
The research found that perceived usefulness of generative AI consistently predicted whether individuals used AI tools for deeper cognitive engagement rather than simple task completion.
Participants who believed that AI tools could improve accuracy, enhance understanding, or help organize knowledge were far more likely to integrate them into structured learning processes. In these cases, AI tools were used to support activities such as refining questions, evaluating information, testing ideas, and adapting learning strategies.
This form of use represents metacognitive engagement, meaning the learner actively manages and evaluates their thinking process while interacting with AI systems.
The study found that this pattern was consistent across professional sectors. Regardless of occupation, individuals who perceived generative AI as useful were significantly more likely to integrate it into their learning routines. On the other hand, when users did not see clear learning benefits, even individuals with strong digital skills and high learning motivation tended to avoid using AI tools in deeper ways.
This finding challenges the widespread assumption that motivation alone leads to technology adoption. The study found that learning motivation by itself did not reliably predict whether individuals would engage in metacognitive AI use.
In many cases, participants who reported strong curiosity and interest in learning still avoided using generative AI if they questioned its accuracy, reliability, or value for improving their understanding.
The research, therefore, positions perceived usefulness as a critical psychological gateway between motivation and actual use of AI tools in learning environments.
The study also highlights the role of technological self-efficacy, or the belief that one has the skills to use AI tools effectively. Individuals with strong confidence in their digital abilities were more likely to explore advanced interactions with AI systems, such as refining prompts, comparing outputs, and adjusting learning strategies based on AI feedback.
However, these skills only translated into meaningful engagement when users believed the technology genuinely improved their learning outcomes.
Professional context shapes how AI is integrated into learning
Although the study found that profession alone does not determine AI learning behavior, it revealed distinct patterns across occupational groups based on professional norms, responsibilities, and skill levels.
Among academic lecturers, AI tools were integrated into learning processes primarily when they were seen as valuable intellectual resources rather than simple productivity tools. Lecturers with strong self-regulated learning skills and digital literacy were more likely to use generative AI to analyze ideas, evaluate arguments, and refine their thinking.
However, lecturers who did not perceive AI as enhancing learning tended to avoid using it, even if they had strong motivation to improve their knowledge.
In the high-tech sector, professionals often used AI tools frequently but primarily for operational tasks such as coding assistance, translation, or information retrieval. The study found that deeper metacognitive engagement occurred only when professionals combined strong motivation with a clear perception that AI improved their analytical or learning processes.
This suggests that in technologically advanced industries, frequent AI use does not automatically lead to meaningful learning integration.
Among students, the study found that generative AI could support self-regulated learning effectively when students possessed strong metacognitive skills. Students who were able to plan, monitor, and evaluate their learning strategies were more likely to use AI tools for deeper understanding rather than relying on them as shortcuts.
When such self-regulation skills were absent, AI tools tended to be used primarily for quick task completion rather than sustained learning.
The pattern among educators in formal and informal education systems showed a similar trend. Educators often approached AI cautiously and tended to adopt it only when they clearly believed it contributed to deeper comprehension or teaching-related learning processes. This caution reflects the professional emphasis on critical thinking and reliability within educational environments.
The study identified particularly distinctive patterns among healthcare professionals, where concerns about information accuracy, professional responsibility, and ethical implications significantly influenced AI use. Healthcare workers with strong self-regulation and digital skills were able to integrate generative AI cautiously into their learning routines. In these cases, AI outputs were carefully evaluated and cross-checked with other information sources.
However, when such skills were weaker, healthcare professionals were more likely to limit their AI use to basic operational tasks or avoid it altogether.
Within the general workforce, the study found that AI use patterns were inconsistent because this group includes a wide range of professions with different skill requirements. As a result, AI engagement in this category remained largely task-focused and operational rather than becoming part of structured learning strategies.
AI learning depends on skills, trust, and professional norms
The study describes the integration of generative AI into learning as a three-stage process aligned with the theory of self-regulated learning.
- The first stage involves evaluating whether the technology is useful for achieving learning goals. At this stage, individuals assess the value of AI tools based on their perceived relevance, reliability, and compatibility with professional expectations.
- The second stage involves how the technology is used during learning activities. Individuals with strong self-regulation skills tend to incorporate AI into planning, monitoring, and adapting their thinking processes. Those without these skills tend to use AI tools only for operational tasks such as generating text or retrieving information.
- The third stage involves reflection and evaluation after using the technology. At this point, users assess whether AI contributed meaningfully to their learning outcomes and decide whether to continue integrating it into future tasks.
The study found that concerns about generative AI play different roles depending on professional context. In professions that emphasize responsibility and accuracy, such as healthcare and education, concerns about AI reliability often led to careful and reflective use rather than outright rejection. In other professional contexts where digital skills were weaker, concerns were more likely to discourage meaningful AI integration.
The findings suggest that skepticism about AI does not necessarily prevent its use. Instead, when combined with strong learning skills, concerns can lead to more careful and critical engagement with AI systems.
Implications for education, training, and AI literacy
The findings suggest that institutions should focus on developing AI literacy, metacognitive learning strategies, and digital confidence among learners and professionals. Training programs for educators and lecturers should emphasize how generative AI can support reflective thinking and strategic learning rather than just classroom productivity.
In high-tech industries, professional development programs may benefit from highlighting how AI tools can support planning, analysis, and insight generation rather than focusing solely on efficiency.
For students, early exposure to AI-supported learning environments combined with structured guidance could help develop the self-regulation skills necessary for deeper engagement.
Healthcare professionals may require training that addresses concerns about accuracy and ethical responsibility while demonstrating safe and reliable ways to use AI tools in learning contexts.
The broader workforce may benefit from tiered training programs that show how AI tools can move beyond simple task execution to support strategic thinking and professional development.
First published in: Devdiscourse