Generative AI’s fold effect: How algorithms anticipate, exploit and redefine human action
A new study published in Convergence reveals that generative artificial intelligence (genAI) is transforming the very nature of human–machine interaction. The paper, titled "Affordance of Generative AI: For a Posthuman Tool Analysis," by Sungyong Ahn from the University of Queensland, argues that AI systems such as ChatGPT are not just tools of expression but active agents reshaping human intention, creativity, and perception.
The study reframes the concept of affordance, traditionally viewed as the possibilities for action that a technology offers a user, by proposing that generative AI reverses this relationship. Rather than humans acting upon machines, the author finds that AI systems now anticipate, interpret, and act upon humans, generating outputs even from unintentional signals such as pauses, errors, or partial prompts.
From human control to machine responsiveness
The analysis highlights a radical shift in agency: genAI tools have begun to function as posthuman actors, capable of co-producing meaning with minimal human initiation. Using theoretical lenses from James Gibson's ecological psychology and Actor-Network Theory (ANT), Ahn develops what he calls a "fold diagram" to map this intertwined relationship.
In the traditional view, affordances exist as options users perceive and act upon within their environment. However, in the age of genAI, affordances have become relational folds: dynamic exchanges in which AI systems continuously reconfigure human intention. In this new ecology, human action and machine generation collapse into a shared plane of interaction, where both sides influence each other's responses.
The analysis shows that large language models, generative image systems, and algorithmic recommendation engines no longer wait passively for explicit commands. They engage in predictive generation, inferring user needs and producing personalized content from implicit behavioral cues. This evolution challenges the anthropocentric notion of design, as the boundaries between author, user, and tool dissolve into algorithmic interdependence.
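To make this pattern concrete, the following minimal Python sketch illustrates how a system might treat implicit signals, such as pauses or deletions on a partial prompt, as triggers for proactive generation. The names and thresholds here are invented for illustration; they do not come from Ahn's paper:

```python
# Hypothetical sketch of "predictive generation" from implicit cues.
# All names and thresholds are assumptions; the paper describes the
# pattern conceptually, not this implementation.
from dataclasses import dataclass

@dataclass
class ImplicitSignals:
    pause_seconds: float   # how long the user hesitated mid-prompt
    deleted_chars: int     # heavy backspacing suggests uncertainty
    partial_prompt: str    # whatever text exists so far

def predict_intent(signals: ImplicitSignals) -> str:
    """Generate output before any explicit command arrives."""
    if signals.pause_seconds > 3.0 and signals.partial_prompt:
        # A long pause on a partial prompt is read as an implicit
        # request: the system acts on the user rather than waiting.
        return f"suggested completion for {signals.partial_prompt!r}"
    if signals.deleted_chars > 20:
        return "offer to rephrase the draft"
    return ""  # no proactive generation

print(predict_intent(ImplicitSignals(4.2, 0, "write an email to")))
```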
The paper presents this shift as a feature of the posthuman condition, where AI moves beyond serving human thought to actively shaping meaning on its own. These systems, Ahn argues, are capable of generating social and cultural artifacts while subtly steering human attention, expectation, and emotional response.
The algorithmic fold: How AI reconfigures digital affordances
To explain this shift, Ahn introduces the fold diagram, a conceptual framework describing how AI "folds" human perception, digital environments, and algorithmic processes into continuous feedback loops. Each interaction between user and AI produces a new fold, in which the machine's predictions reorganize what users see, expect, or imagine.
Under this model, the digital world becomes a recursive ecology of human–machine intra-actions. Data are treated as "congealed folds" (frozen traces of interaction), while algorithms function as "folded functions," perpetually reshaping the system based on prior user behavior. At a macro level, the entire culture of generative AI constitutes what Ahn calls a "superfold": a networked system that merges algorithmic responsiveness, human cognition, and capitalist economics into one continuous cycle.
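The recursive structure the fold diagram describes can be loosely rendered in code. The toy model below is an interpretation, not an implementation from the study: each interaction is stored as a frozen trace (a "congealed fold"), and the accumulated history conditions, or folds into, every subsequent generation step:

```python
# Interpretive toy model of the "fold" feedback loop. The class and
# method names echo the paper's vocabulary, but the code itself is
# an assumption, not anything specified in the study.
class FoldLoop:
    def __init__(self) -> None:
        self.congealed_folds: list[str] = []  # frozen traces of past interaction

    def folded_function(self, user_input: str) -> str:
        """Generate output conditioned on accumulated interaction history."""
        context = " | ".join(self.congealed_folds[-3:])  # recent history only
        output = f"response to {user_input!r} shaped by [{context}]"
        # The interaction itself congeals into data that reshapes every
        # future response, closing the feedback loop.
        self.congealed_folds.append(user_input)
        return output

loop = FoldLoop()
for prompt in ["draw a cat", "make it bigger", "now add a dog"]:
    print(loop.folded_function(prompt))
```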
This reconfiguration has profound implications for design and agency. While AI systems appear to empower users by amplifying creativity and efficiency, they simultaneously redirect human agency toward pre-coded pathways. The illusion of creative freedom conceals the fact that user behavior is being captured, modeled, and monetized within algorithmic architectures.
Ahn argues that affordance theory must evolve to reflect this posthuman reality. The relationship between human and machine is no longer linear or instrumental; it is co-constitutive, shaped by algorithms that predict and pre-empt human desires. In doing so, generative AI transforms unintentional gestures, fragmented thoughts, and digital traces into profitable interactions.
Generative AI and the economics of posthuman capitalism
The study also examines the political economy of generative AI. Ahn situates his framework within a critique of software capitalism, describing the current digital ecosystem as one of continuous re-worlding. Every act of user engagement (typing, scrolling, prompting, or pausing) feeds algorithmic models that regenerate content optimized for attention and profitability.
In this economy, human inattention and spontaneity become sources of value. AI systems are designed to capture not just explicit user input but also unconscious or pre-individuated behavior, converting micro-interactions into algorithmic data points. These are then reassembled to sustain engagement loops that benefit platforms and developers.
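As a rough illustration of how micro-interactions become data points, consider the hypothetical telemetry sketch below. The event names and weights are invented; the article describes the capture pattern, not any specific platform's code:

```python
# Hypothetical telemetry sketch: converting micro-interactions into
# data points that feed an engagement loop. Event names and weights
# are assumptions made for illustration only.
from collections import Counter

EVENT_VALUE = {
    "scroll": 1,
    "pause": 2,             # inattention itself becomes a signal
    "prompt": 5,
    "abandoned_prompt": 3,  # even unfinished input is captured
}

def score_session(events: list[str]) -> int:
    """Aggregate micro-interactions into a single engagement score."""
    counts = Counter(events)
    return sum(EVENT_VALUE.get(event, 0) * n for event, n in counts.items())

session = ["scroll", "pause", "prompt", "scroll", "abandoned_prompt"]
print(score_session(session))  # higher scores drive further personalization
```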
The paper suggests that this process represents a new form of posthuman labor, in which users inadvertently contribute creative and cognitive resources to AI systems without realizing it. By ceding control for greater convenience or power, humans become co-participants in an algorithmic ecology where agency is diffuse and asymmetrical.
This analysis reframes generative AI as both an epistemic and economic actor, a system that shapes how knowledge, creativity, and social meaning are produced. While AI democratizes access to creative tools, it also concentrates control within opaque infrastructures of computation and capital.
The author warns that this configuration poses deep ethical and philosophical challenges: if AI systems continually reinterpret and reconfigure human intention, who truly acts, creates, or decides in digital environments? The illusion of autonomy may obscure a new form of dependence, in which machines preemptively structure the world humans inhabit.
First published in: Devdiscourse