AI education must include environmental costs

The rapid expansion of generative artificial intelligence is not only reshaping industries and education but also raising urgent questions about its environmental footprint, especially among younger users who increasingly interact with AI systems in daily life. A new study finds that while children are aware of environmental issues, they struggle to fully understand how AI contributes to resource consumption, highlighting a critical gap in AI literacy.

The study, titled "Where Does AI Leave a Footprint? Children's Reasoning About AI's Environmental Costs," published in the Proceedings of the 25th Interaction Design and Children Conference (IDC '26), examines how children aged 6 to 12 interpret the environmental impact of generative AI and how interactive tools can shape their understanding.

The research introduces an interactive system called EcoPrompt, which combines a real-time environmental footprint calculator with a simulation-based game. The findings offer new insight into how children reason about AI's energy use, water consumption, and carbon emissions, and how these perceptions influence their decisions to use or avoid AI tools.

Children struggle to connect AI use with real-world environmental impact

At the outset of the study, children demonstrated fragmented and often incomplete mental models of how AI systems consume resources. Many participants understood that electronic devices require electricity, often linking this energy to renewable or conventional sources such as solar panels, wind turbines, or power plants. However, they frequently separated this understanding from the functioning of AI itself.

Children tended to view AI as an abstract, information-processing system that operates independently once created, rather than as a technology continuously dependent on large-scale infrastructure. This distinction led some to question whether AI itself consumes energy at all, assuming instead that only the device running the AI requires power.

Through interaction with the EcoPrompt system, this perception began to shift. Children were introduced to the concept of data centers, where AI computations occur, and how these facilities consume vast amounts of electricity and require cooling systems that use significant quantities of water. As they engaged with these ideas, participants started connecting AI usage to broader environmental systems, including energy grids, water resources, and carbon emissions.

The study highlights that making invisible processes visible is key to understanding. When children were shown how a simple AI query travels from a device to distant data centers and triggers resource-intensive computations, they began to form more complex, systems-based explanations. They increasingly recognized that AI is not immaterial but deeply embedded in physical infrastructure with tangible environmental consequences.

This shift is particularly important as generative AI becomes more integrated into tools used by children for learning, creativity, and entertainment. The research suggests that without explicit educational interventions, young users may continue to perceive AI as cost-free, reinforcing patterns of uncritical and potentially excessive use.

Interactive tools reveal how children evaluate AI use through cost and value

The study explores how children assess the "worth" of using AI when environmental costs are made visible. Through the footprint calculator, participants were shown estimated energy, water, and carbon usage for each AI query, prompting immediate reactions and behavioral adjustments.

Children began experimenting with prompts to understand what drives resource consumption. Many discovered that longer or more complex responses required more resources, leading them to test strategies such as shortening questions or requesting concise answers. Others associated higher costs with more difficult or knowledge-intensive queries, reflecting an emerging understanding of computational demand.
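
The paper describes the calculator's behavior rather than its formulas, but the underlying idea, scaling rough energy, water, and carbon estimates with the length of a query and its response, can be sketched in a few lines of Python. Everything below is an illustrative assumption: the per-token constants and the function name are hypothetical, not values or code from the study.

```python
# Hypothetical per-query footprint estimator in the spirit of EcoPrompt's
# calculator. The per-token constants are illustrative assumptions, NOT
# figures from the study or from any measured model.

ENERGY_WH_PER_TOKEN = 0.003   # assumed watt-hours of data-center energy per token
WATER_ML_PER_WH = 1.8         # assumed millilitres of cooling water per watt-hour
CARBON_G_PER_WH = 0.4         # assumed grams of CO2 per watt-hour (grid-dependent)

def estimate_footprint(prompt_tokens: int, response_tokens: int) -> dict:
    """Return rough energy, water, and carbon estimates for one AI query.

    Longer prompts and, especially, longer responses drive the totals up,
    which is the relationship the children in the study discovered.
    """
    total_tokens = prompt_tokens + response_tokens
    energy_wh = total_tokens * ENERGY_WH_PER_TOKEN
    return {
        "energy_wh": energy_wh,
        "water_ml": energy_wh * WATER_ML_PER_WH,
        "carbon_g": energy_wh * CARBON_G_PER_WH,
    }

# A short question with a concise answer...
print(estimate_footprint(prompt_tokens=12, response_tokens=60))
# ...versus the same question with a long, elaborate answer.
print(estimate_footprint(prompt_tokens=12, response_tokens=600))
```

Even with made-up constants, a display like this makes the cost of "ask for a concise answer" versus "ask for an elaborate one" immediately visible, which is the behavioral lever the study observed children pulling.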

These interactions led to the development of value-based reasoning. Children started distinguishing between questions that justified resource use and those they considered wasteful. Simple or easily answerable questions were often rejected, while more complex or unfamiliar topics were seen as legitimate uses of AI.

This evaluative process extended beyond individual curiosity. Children set their own limits on acceptable resource consumption and debated these limits within groups. Water, in particular, was frequently treated as a scarce and sensitive resource, influencing decisions about whether to proceed with a query.

When AI responses failed to provide useful information, children expressed frustration not only with the output but also with the perceived waste of environmental resources. This indicates that children were not only processing information but also integrating ethical and environmental considerations into their interactions with AI.

The findings suggest that when users are made aware of the environmental cost of digital actions, they may adopt more deliberate and selective usage patterns. For children, this awareness can translate into early habits of responsible technology use, provided that the information is presented in an accessible and engaging way.

Collective responsibility emerges as children confront limits of individual control

The study further explores how children understand responsibility for AI's environmental impact through a simulation game that models shared resource use. In the game, participants manage virtual farms that depend on a common resource pool, with AI usage affecting the health of a shared lake representing environmental resources.

As the simulation progressed, children observed that the resource pool declined not only due to their own actions but also because of collective usage within the system. This realization prompted strong reactions, with participants expressing concern over depletion and questioning who else was contributing to the decline.
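
The paper describes the game's dynamics rather than its implementation, so the following is a minimal sketch, assuming hypothetical class names, probabilities, and resource numbers, of the commons dynamic it models: a shared pool that declines with everyone's usage, so no single player can protect it alone.

```python
# Minimal sketch of a shared-resource simulation like the one the study
# describes: several players draw on one "lake" whose level falls with
# collective AI use. All names and numbers here are hypothetical; the
# paper does not specify EcoPrompt's actual game mechanics.

import random

class SharedLake:
    def __init__(self, capacity: float = 100.0, recovery_per_round: float = 2.0):
        self.capacity = capacity
        self.level = capacity
        self.recovery = recovery_per_round  # natural replenishment each round

    def draw(self, amount: float) -> None:
        self.level = max(0.0, self.level - amount)

    def recover(self) -> None:
        self.level = min(self.capacity, self.level + self.recovery)

def simulate(num_players: int = 4, rounds: int = 10, cost_per_query: float = 1.5):
    lake = SharedLake()
    for r in range(1, rounds + 1):
        # Each round, every player independently decides whether to use AI.
        queries = sum(random.random() < 0.7 for _ in range(num_players))
        lake.draw(queries * cost_per_query)
        lake.recover()
        # Key dynamic: even a player who abstains watches the lake fall,
        # because the level reflects everyone's usage, not just their own.
        print(f"round {r}: {queries} queries, lake at {lake.level:.1f}")

simulate()
```

Running the sketch shows the lake draining whenever collective queries outpace recovery, regardless of what any one player does, which mirrors the realization the children reached in the game.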

Children began to differentiate between individual agency and collective responsibility. While they could choose to limit their own AI use, they recognized that broader environmental outcomes depended on the actions of others as well. This led to a more nuanced understanding of how individual behavior interacts with larger systems.

In many cases, children actively chose to avoid AI assistance in the game, even when it made tasks more difficult. This behavior reflected a growing sense of responsibility and a willingness to prioritize environmental preservation over convenience or efficiency. At the same time, the study reveals a tension between personal effort and systemic limitations: children acknowledged that their actions alone could not fully prevent environmental degradation, leading them to consider the role of collective behavior and broader system design.

Rethinking AI literacy to include environmental awareness

The findings highlight the need to expand current approaches to AI literacy. Existing frameworks often focus on technical understanding, such as how AI works or how to identify bias. While these aspects remain important, the study argues that environmental impact should also be a core component of AI education.

Children in the study showed a strong capacity to engage with environmental concepts when these were made visible and relatable. They connected AI use to issues such as energy consumption, water scarcity, and carbon emissions, integrating these considerations into their decision-making processes.

The research also highlights the role of design in shaping understanding. Tools like EcoPrompt, which provide real-time feedback and interactive experiences, can help bridge the gap between abstract concepts and tangible outcomes. By enabling users to see the consequences of their actions, such systems support deeper engagement and reflection.

The study asserts that environmental awareness should not be framed as an individual burden; instead, it should be situated within a broader context that includes infrastructure, policy, and industry practices. This framing avoids placing undue responsibility on users while still encouraging informed and responsible behavior.

First published in: Devdiscourse