New AI literacy scale exposes gaps between what people think they know and what they actually know

CO-EDP, VisionRI | Updated: 28-10-2025 17:16 IST | Created: 28-10-2025 17:16 IST

A team of European researchers has unveiled a novel tool designed to measure what people truly know about artificial intelligence, beyond perceptions or self-assessed confidence.

The study, titled "The Scale of Artificial Intelligence Literacy for All (SAIL4ALL): Assessing Knowledge of Artificial Intelligence in All Adult Populations," was published in Humanities and Social Sciences Communications by Nature Portfolio. The authors introduce a validated psychometric instrument that aims to provide a global benchmark for assessing factual AI literacy among adults, marking a major step forward in evidence-based AI education and policy.

Measuring what people really know about AI

The SAIL4ALL study arises from a growing need to distinguish between people's perceived understanding of AI and their actual knowledge. The researchers argue that while AI literacy has become a popular theme in digital education and workforce training, most available tools rely heavily on self-reporting and fail to capture factual comprehension. To address this, the team designed and validated SAIL4ALL, a 56-item scale that objectively measures what adults know about AI's nature, capabilities, functioning, and ethical implications.

The study's authors collected and analyzed data from three separate adult samples in the United Kingdom. They rigorously tested the internal structure of the instrument using confirmatory factor analysis and examined how responses differed by gender, education, and AI-related attitudes. The results confirmed that SAIL4ALL successfully identifies four distinct yet related domains of AI knowledge: "What is AI?", "What can AI do?", "How does AI work?", and "How should AI be used?"
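
For readers curious what this kind of validation looks like in practice, the sketch below fits a four-factor confirmatory model with the Python package semopy. The item names (q1 through q12) and their assignment to factors are hypothetical placeholders rather than the published SAIL4ALL key, and the paper does not prescribe this particular tooling.

```python
# Rough CFA sketch with semopy (pip install semopy).
# Item names q1..q12 and their factor assignments are hypothetical
# placeholders; the real SAIL4ALL key maps 56 items to four domains.
import pandas as pd
import semopy

# Lavaan-style description: four correlated knowledge factors.
MODEL_DESC = """
WhatIsAI  =~ q1 + q2 + q3
WhatCanDo =~ q4 + q5 + q6
HowWorks  =~ q7 + q8 + q9
HowUsed   =~ q10 + q11 + q12
"""

data = pd.read_csv("responses.csv")  # one row per respondent, one column per item
model = semopy.Model(MODEL_DESC)
model.fit(data)

print(model.inspect())           # factor loadings and factor covariances
print(semopy.calc_stats(model))  # global fit indices such as CFI and RMSEA
```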

Each domain captures a specific dimension of literacy. The "What is AI?" section distinguishes between conceptual understanding and basic definitional awareness, while "What can AI do?" explores knowledge of current capabilities and applications. "How does AI work?" addresses the underlying mechanisms of AI, and "How should AI be used?" assesses ethical and normative reasoning regarding responsible deployment.

The scale is available in two response formats, a true/false version and a five-point Likert version, allowing researchers and educators to choose between precision and depth. Both formats demonstrated strong reliability, though the authors caution that responses should not be collapsed into a single total score, given the multidimensional nature of AI literacy.
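
In practice, the authors' advice against a single total score implies scoring each domain separately, roughly as in the pandas sketch below; the item-to-domain mapping shown is an illustrative assumption, not the published scoring key.

```python
# Sketch: score SAIL4ALL-style responses per domain, never as one total.
# The item-to-domain mapping below is illustrative, not the published key.
import pandas as pd

SUBSCALES = {
    "what_is_ai":     ["q1", "q2", "q3"],
    "what_can_ai_do": ["q4", "q5", "q6"],
    "how_ai_works":   ["q7", "q8", "q9"],
    "how_to_use_ai":  ["q10", "q11", "q12"],
}

def score_subscales(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one mean score per domain per respondent.

    Works for either format, 0/1-coded true/false answers or 1-5
    Likert ratings, provided reverse-keyed items are recoded first.
    """
    return pd.DataFrame(
        {domain: responses[items].mean(axis=1) for domain, items in SUBSCALES.items()}
    )

scores = score_subscales(pd.read_csv("responses.csv"))
print(scores.describe())
```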

Testing knowledge across gender and education

The study assesses how AI knowledge differs across demographic and educational groups. Using measurement invariance analysis, the researchers found that the scale performs consistently across gender and educational levels, ensuring its fairness and broad applicability. However, they observed meaningful variations in the results: men tended to score higher on certain dimensions, and participants with higher education levels demonstrated stronger overall literacy in understanding AI concepts and ethics.
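
Measurement invariance itself is tested with multi-group factor models, which are beyond a short snippet; the sketch below shows only the simpler follow-up step of comparing group means on a scored subscale, with hypothetical column names.

```python
# Sketch: follow-up comparison of group means on one hypothetical
# subscale score; the paper's measurement invariance analysis itself
# requires multi-group factor models and is not reproduced here.
import pandas as pd
from scipy.stats import ttest_ind

df = pd.read_csv("scored_data.csv")  # hypothetical file of subscale scores

men = df.loc[df["gender"] == "male", "how_ai_works"]
women = df.loc[df["gender"] == "female", "how_ai_works"]

t, p = ttest_ind(men, women, equal_var=False)  # Welch's t-test
print(f"mean(men)={men.mean():.2f}  mean(women)={women.mean():.2f}  "
      f"t={t:.2f}  p={p:.3f}")
```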

These differences, the authors note, underscore the importance of targeted educational interventions rather than generalized AI awareness campaigns. They also highlight how misconceptions about AI persist even among highly educated individuals, particularly concerning the mechanisms behind machine learning and neural networks.

The study's results further show that knowledge of AI correlates strongly with acceptance of and affinity for AI technologies, and negatively with fear of AI. This pattern, observed most clearly in the five-point response version, supports the argument that informed understanding reduces anxiety and increases openness toward emerging technologies. By providing a reliable way to measure such correlations, SAIL4ALL can serve as a foundation for future behavioral and psychological research on how people interact with intelligent systems.
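
A correlation check of the kind reported could look like the sketch below; the attitude column names are hypothetical stand-ins for whichever acceptance, affinity, and fear measures accompany the scale.

```python
# Sketch: correlate a domain knowledge score with attitude measures.
# The attitude column names are hypothetical stand-ins for whatever
# acceptance, affinity, and fear scales a study administers alongside.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("scored_data.csv")

for attitude in ["ai_acceptance", "ai_affinity", "ai_fear"]:
    r, p = pearsonr(df["what_is_ai"], df[attitude])
    print(f"what_is_ai vs {attitude}: r={r:+.2f} (p={p:.3f})")
```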

Implications for policy, education, and society

In addition to academic validation, the researchers highlight that SAIL4ALL has practical applications in shaping AI literacy policies, training programs, and public communication strategies. The tool offers a standard reference for educators, institutions, and governments seeking to evaluate the impact of AI education initiatives or workforce reskilling programs.

The study stresses that developing AI literacy is now a social necessity, as artificial intelligence increasingly influences employment, governance, and everyday decision-making. By focusing on factual knowledge rather than self-assessment, the scale enables a more accurate picture of what adults actually understand about algorithms, data bias, and ethical challenges. This clarity can inform curriculum design, media literacy programs, and even regulatory frameworks concerned with responsible AI adoption.

The authors also outline the limitations of their research. The validation samples were drawn entirely from the United Kingdom, meaning that cross-cultural testing will be needed to confirm whether the instrument performs equally well in other contexts. Additionally, while the scale demonstrates strong internal consistency, future studies should explore its discriminant validity against other established measures of digital competence and science literacy.

Despite these limitations, SAIL4ALL represents one of the first comprehensive attempts to produce an objective, multidimensional, and empirically tested measure of AI literacy suitable for general adult populations. It bridges a critical gap between educational theory and empirical assessment, setting a foundation for evidence-based evaluation of how societies understand and adapt to AI.

First published in: Devdiscourse