Adaptive AI system brings personalization and accessibility to mainstream LMS
Classrooms are rapidly integrating artificial intelligence (AI), but most learning management systems (LMSs) still operate as static content repositories rather than intelligent learning environments. A new study argues that this gap can be closed without replacing existing platforms. Instead of building experimental systems from scratch, the researchers demonstrate how a mainstream LMS can be transformed into an adaptive, inclusive ecosystem powered by generative AI and learning analytics.
Their study, titled "AI for All: Adaptive, Accessible, and Inclusive Learning Experiences in the Age of Intelligent LMSs" and published in the journal Information, introduces the PREPARE project, a large-scale framework that embeds generative AI, multimodal content production, adaptive personalization, and privacy-aware analytics directly into Moodle, one of the world's most widely used learning management systems.
From static LMS to intelligent learning ecosystem
Traditional LMS platforms have long been criticized for delivering uniform content to diverse learners. They prioritize administrative control and content storage over personalization and inclusion. The PREPARE framework challenges this model by introducing an AI-driven architecture that transforms a single textbook into a complete, adaptive course experience.
Under the hood, the system runs an end-to-end generative AI pipeline. In the current deployment, instructors upload a single authoritative PDF textbook. The system parses and segments the document into structured chapters and sub-units. It then uses large language models in a retrieval-grounded configuration to automatically generate chapter summaries, structured notes, slide decks, quiz questions, narrated videos, podcast-style audio lessons, and chatbot-ready knowledge bases.
This retrieval-augmented generation design ensures that AI outputs remain anchored to the source textbook. Retrieved text fragments are embedded into prompts before response generation, reducing hallucination risk and maintaining alignment with curricular objectives. Each generated artifact is linked to version control metadata, including content hashes and prompt identifiers, ensuring traceability and reproducibility.
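In rough outline, the flow can be sketched as follows. This is a toy illustration, not the project's code: the `embed` function here is a simple term-frequency stand-in for the dense neural embeddings the system would actually use (the paper does not name a specific model), and the prompt template is invented for the example. The content-hash step mirrors the traceability metadata described above.

```python
import hashlib
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy term-frequency "embedding"; the real pipeline would use
    # dense neural embeddings, which the paper does not specify.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank textbook fragments by similarity to the question.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

def grounded_prompt(question: str, chunks: list[str]) -> dict:
    # Embed retrieved fragments into the prompt before generation,
    # so the model stays anchored to the source textbook.
    fragments = retrieve(question, chunks)
    prompt = (
        "Answer ONLY from the textbook excerpts below.\n\n"
        + "\n---\n".join(fragments)
        + f"\n\nQuestion: {question}"
    )
    # Version-control metadata: a content hash ties the generated
    # artifact back to the exact prompt and source fragments.
    digest = hashlib.sha256(prompt.encode()).hexdigest()
    return {"prompt": prompt, "content_hash": digest}
```

Because the hash is computed over the fully assembled prompt, regenerating an artifact from unchanged sources yields the same identifier, which is what makes the workflow reproducible.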
The automation does not eliminate human oversight. Every AI-generated artifact undergoes structured validation before publication. Instructors review summaries for conceptual accuracy, slides for pedagogical coherence, quiz items for clarity and correctness, and multimedia captions for accessibility compliance. Resources that fail validation are corrected or regenerated. This human-in-the-loop model positions AI as an augmentation tool rather than an autonomous author.
The technical scale of deployment is substantial. The reported system-level validation describes approximately 12 full courses, each with around 13 chapters and five sub-units per chapter. This results in roughly 780 structured learning sub-units and several thousand deployable multimodal artifacts. The workflow incorporates quota-aware orchestration to manage external AI service limits, enabling staged generation without disrupting course publication.
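The quota-aware orchestration can be pictured as a simple staged scheduler: pending generation tasks are split into batches that fit under an external API's daily limit, and the remainder is deferred rather than failing the run. The function below is a minimal sketch under that assumption; the actual quotas and orchestration logic are not published.

```python
from collections import deque

def staged_generation(tasks: list, daily_quota: int) -> list[list]:
    """Split pending generation tasks into per-day batches that respect
    an external AI service quota. A simplified stand-in for the paper's
    quota-aware orchestration; real limits and retry logic are not given."""
    if daily_quota < 1:
        raise ValueError("daily_quota must be at least 1")
    queue = deque(tasks)
    schedule = []
    while queue:
        # Take at most daily_quota tasks for this stage; defer the rest.
        batch = [queue.popleft() for _ in range(min(daily_quota, len(queue)))]
        schedule.append(batch)
    return schedule
```

With roughly 780 sub-units each spawning several artifacts, a scheduler like this lets generation proceed in stages while already-published course content remains untouched.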
Hybrid learner modeling and non-prescriptive personalization
PREPARE integrates adaptive personalization through a hybrid learner profiling approach. At the beginning of a course, students complete the Index of Learning Styles questionnaire based on the Felder–Silverman Learning Style Model. This provides an initial signal across four dimensions: active–reflective, sensing–intuitive, visual–verbal, and sequential–global.
However, the researchers explicitly acknowledge the limitations of fixed learning style models. Rather than treating these dimensions as stable traits, the system uses them as low-weight initialization signals. Continuous behavioral data from Moodle logs then refine the learner profile. Resource views, quiz attempts, navigation paths, forum activity, and chatbot interactions are mapped to learning style dimensions using established behavior-based profiling methods.
Profiles are recalculated periodically, either after a two-week rolling window or once a threshold of meaningful interactions is reached. Recent interactions receive slightly higher weighting, allowing the system to capture evolving learning strategies. Discrepancies between self-reported and behavior-derived signals are treated as dynamic changes rather than errors.
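The blending described above can be sketched as a weighted average: the questionnaire supplies a low-weight prior, and behavior-derived signals are added with an exponential recency decay so newer interactions count more. The dimension names follow the Felder–Silverman model, but the event-to-signal mapping, weights, and decay factor here are illustrative assumptions, not the project's published parameters.

```python
def update_profile(ils_init: dict, events: list,
                   init_weight: float = 0.2, decay: float = 0.9) -> dict:
    """Blend a low-weight ILS questionnaire prior with behavior-derived
    signals. `events` is a list of (dimension, signal) pairs, oldest
    first, with signals in [-1, 1] (e.g. visual_verbal: -1 = verbal,
    +1 = visual). Mapping and weights are illustrative assumptions."""
    profile = {}
    for dim, prior in ils_init.items():
        weights, signals = [init_weight], [prior]
        evs = [s for d, s in events if d == dim]
        n = len(evs)
        for i, s in enumerate(evs):
            # Exponential recency decay: the newest event gets weight 1.0.
            weights.append(decay ** (n - 1 - i))
            signals.append(s)
        profile[dim] = sum(w * s for w, s in zip(weights, signals)) / sum(weights)
    return profile
```

Because the questionnaire prior carries only a small fixed weight, a run of contrary behavioral evidence quickly pulls the profile away from the self-reported value, which matches the paper's treatment of such discrepancies as dynamic change rather than error.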
The system does not hide content or restrict modalities. Instead, it prioritizes entry points, highlights recommended resources, adjusts interface ordering, and modifies chatbot response style. Learners retain full access to all materials, preserving autonomy and agency.
To avoid reinforcing narrow usage patterns, PREPARE incorporates a diversity mechanism. After repeated engagement with a single modality, the system promotes alternative formats. Accessibility settings, such as caption requirements or audio preferences, function as hard constraints, while learning style signals operate as soft cues. This layered logic ensures that personalization enhances flexibility without creating new barriers.
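This layered logic (hard accessibility constraints filtering first, soft learning-style scores ordering second, plus a diversity boost for under-used modalities) can be sketched as a two-stage ranker. The field names and the boost value below are illustrative, not the project's actual schema.

```python
def rank_resources(resources: list[dict], style_pref: dict,
                   accessibility: dict, recent_modalities: set) -> list[dict]:
    """Order (never hide) resources. Accessibility settings are hard
    constraints; learning-style preferences are soft cues; modalities
    the learner has not used recently get a diversity boost.
    Field names and weights are illustrative assumptions."""
    # Stage 1: hard constraint, e.g. captions required for video content.
    eligible = [
        r for r in resources
        if not accessibility.get("captions_required") or r.get("captioned", True)
    ]
    # Stage 2: soft scoring plus diversity promotion.
    def score(r: dict) -> float:
        s = style_pref.get(r["modality"], 0.0)
        if r["modality"] not in recent_modalities:
            s += 0.5  # promote formats the learner has been neglecting
        return s
    return sorted(eligible, key=score, reverse=True)
```

Note that filtering only removes resources that violate an accessibility requirement; everything else stays in the list, merely reordered, which preserves the full-access principle described above.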
The AI-powered chatbot exemplifies this adaptive design. Embedded directly within Moodle, the chatbot operates under a supported-only response policy. It retrieves semantically similar textbook fragments using dense vector embeddings and similarity thresholds before generating responses. If retrieval fails to meet defined criteria, the chatbot requests clarification or indicates insufficient coverage rather than speculating. Interaction logs are captured for analytics while maintaining pseudonymization.
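The supported-only policy amounts to gating generation on retrieval confidence: answer only when the best-matching fragment clears a similarity threshold, otherwise decline and ask for clarification. The sketch below uses a word-overlap score as a toy stand-in for the dense-embedding similarity the paper describes; the threshold value is an assumption.

```python
def answer_policy(question: str, chunks: list[str],
                  threshold: float = 0.2) -> dict:
    """Supported-only response policy: generate an answer only when
    retrieval clears a similarity threshold. The overlap score is a toy
    stand-in for dense vector similarity; 0.2 is an assumed threshold."""
    def overlap(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0
    best = max(chunks, key=lambda c: overlap(question, c), default=None)
    if best is None or overlap(question, best) < threshold:
        # Decline rather than speculate: flag insufficient coverage.
        return {"status": "insufficient_coverage",
                "reply": "I couldn't find this in the course textbook. "
                         "Could you rephrase or narrow the question?"}
    return {"status": "supported", "evidence": best}
```

Only the second branch would proceed to answer generation; the evidence fragment is what gets embedded into the prompt, keeping the chatbot anchored to the textbook.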
Accessibility, evaluation, and ethical governance
Intelligent LMS design must prioritize inclusion from the outset. PREPARE aligns with Universal Design for Learning principles by offering multimodal access to every chapter. Text summaries are paired with slide decks, narrated videos, podcast versions, quizzes, and augmented reality experiences.
Augmented reality modules, developed alongside subject matter experts, provide interactive 3D models and simulations that complement textual explanations. While AR content currently requires manual creation, the integration workflow ensures consistent linking within Moodle. For learners who struggle with abstract concepts, experiential AR elements offer alternative pathways to understanding.
The evaluation framework is structured in two layers. The paper reports system-level validation, confirming structural completeness, deployment consistency, logging reliability, and adaptive feature functionality. A classroom-based learner evaluation is scheduled in a secondary education setting, involving students from lower and upper secondary levels.
The planned study will assess engagement intensity, interaction diversity across modalities, perceived personalization accuracy, accessibility benefits, trust in AI-assisted learning, and overall usability. Instruments include validated questionnaire constructs adapted to the PREPARE context, combined with Moodle log analytics. Participation requires parental consent and student assent, and inclusion criteria are predefined to ensure methodological robustness.
Ethical and regulatory considerations are integrated into the architecture. Behavioral data are pseudonymized at collection, and identifiable mappings are stored separately with restricted access. Data retention is limited to three years for audit and publication purposes. Chatbot prompts exclude personal identifiers, and input filters detect common sensitive data patterns. All AI processing remains anchored to instructional content rather than personal student information.
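Pseudonymization at collection can be sketched with a keyed hash: analytics logs store only an opaque pseudonym, while the pseudonym-to-identity mapping lives in a separate, access-restricted store. The code below is a minimal illustration under those assumptions; the project's actual key management and storage separation are not detailed in the paper.

```python
import hashlib
import hmac

def pseudonymize(student_id: str, key: bytes) -> str:
    """Keyed hash applied at collection time, so raw identifiers never
    enter the analytics pipeline. A sketch, not the project's scheme."""
    return hmac.new(key, student_id.encode(), hashlib.sha256).hexdigest()

# Illustrative only: a real deployment would hold the key in a vault and
# the mapping in a separately access-controlled database.
KEY = b"restricted-access-key"
identity_mapping: dict[str, str] = {}

def log_event(student_id: str, event: str, analytics_log: list) -> None:
    pseud = pseudonymize(student_id, KEY)
    identity_mapping[pseud] = student_id   # stored apart from analytics data
    analytics_log.append({"user": pseud, "event": event})
```

Keeping the mapping separate means the analytics side alone cannot re-identify a student, while auditors with access to both stores still can, which is the property the three-year retention window relies on.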
The researchers also position PREPARE within emerging regulatory frameworks, including GDPR and the European Union AI Act. By combining transparency, human validation, data minimization, and non-restrictive adaptation, the framework anticipates compliance requirements for high-risk educational AI systems.
Generative AI, as the study points out, should function as an enabling infrastructure rather than a replacement for teachers. Automation reduces repetitive authoring tasks, allowing educators to focus on validation, refinement, and pedagogical strategy. The architecture is intentionally model-agnostic to ensure portability across institutions and future AI systems.
Looking ahead, the authors propose extending personalization beyond learning style signals to incorporate knowledge progression, mastery-based difficulty adjustment, and motivational indicators. They also highlight potential exploration of privacy-preserving techniques such as federated learning and automated ethical auditing tools.
First published in: Devdiscourse