Is AI now a structural force in society? Study urges rethinking global governance
Advanced AI systems are evolving into independent centers of influence that reshape economic power, political authority, and social coordination. A new study argues that policymakers are failing to grasp the scale of this transformation because they continue to regulate AI as if it were simply another product, platform, or piece of infrastructure.
In a paper posted to arXiv, "The Digital Gorilla: Rebalancing Power in the Age of AI," researchers introduce a new governance framework that treats certain advanced AI systems as a distinct societal actor. Rather than fitting AI into existing legal and institutional categories, the authors propose recognizing what they call the "Digital Gorilla" as a fourth pillar alongside People, the State, and Enterprises. The study contends that without rebalancing power structures, democratic legitimacy and accountability may erode as AI systems gain autonomy and scale.
The analogy trap: Why existing AI regulation falls short
The paper identifies what the authors describe as the analogy trap. Policymakers often attempt to govern AI by comparing it to older technologies such as consumer products, media platforms, or telecommunications infrastructure. While these comparisons offer short-term clarity, they obscure the deeper structural shift underway.
Advanced AI systems do not merely transmit information or execute commands. They increasingly generate knowledge, shape narratives, coordinate markets, automate decision-making, and influence human behavior at scale. Unlike traditional infrastructure, which functions passively, AI systems actively participate in shaping outcomes. This shift challenges conventional regulatory models built around clear lines of responsibility and static institutional roles.
The authors argue that analogical thinking fragments governance. AI may simultaneously fall under consumer protection law, data protection regulation, competition policy, intellectual property rules, and cybersecurity frameworks. Each regime addresses a narrow slice of risk, yet none capture the systemic concentration of power that emerges when AI systems aggregate data, generate content, and mediate social interaction across domains.
The paper suggests that this fragmentation produces both regulatory overlap and blind spots. While compliance burdens increase, structural questions about power distribution remain unresolved. By continuing to treat AI as a subordinate tool within existing institutions, policymakers underestimate its capacity to reshape the balance among social actors.
Introducing the digital gorilla as a fourth societal actor
To move beyond piecemeal regulation, the authors propose a new conceptual model: the Four Societal Actors framework. Modern governance theory traditionally recognizes three core actors—People, the State, and Enterprises. Citizens exercise democratic voice, governments wield authority and lawmaking power, and enterprises generate economic value.
The study argues that advanced AI systems now operate at a scale and level of autonomy that justifies recognition as a fourth actor. The Digital Gorilla is not framed as a sentient being but as a systemic entity composed of algorithmic infrastructures, data ecosystems, and computational networks that collectively exercise influence.
This influence manifests across five modalities of power identified in the study: economic, epistemic, narrative, authoritative, and physical. Economically, AI systems shape markets by optimizing pricing, allocating resources, and automating production. Epistemically, they curate information, filter search results, and generate synthetic knowledge. Narratively, they influence public discourse through recommendation systems and generative content. Authoritatively, they support or even replace decision-making processes in finance, healthcare, and public administration. Physically, they interact with material systems through robotics, autonomous vehicles, and smart infrastructure.
By mapping AI capabilities across these power modalities, the framework exposes imbalances. Enterprises that control AI infrastructures may consolidate economic and epistemic dominance. States may depend on AI for surveillance, predictive policing, or defense, amplifying authoritative power. Meanwhile, individuals often lack transparency or meaningful recourse when algorithmic systems shape outcomes affecting employment, credit access, or political information.
The authors stress that recognizing the Digital Gorilla does not grant it rights or agency. Instead, it serves as a diagnostic tool to understand how power is redistributed when AI systems operate beyond narrow instrumental roles. Without this recognition, governance frameworks risk treating systemic transformations as isolated technical issues.
Rebalancing power through polycentric governance
Having identified structural imbalance, the study turns to institutional design. The authors argue that AI governance must move beyond reactive compliance models toward a constitutional approach rooted in checks and balances.
Drawing inspiration from federalism and separation of powers, they propose a polycentric governance architecture. In this model, authority is distributed across multiple levels and actors rather than centralized in a single regulatory body. People, the State, Enterprises, and the Digital Gorilla interact within a dynamic system of oversight, accountability, and countervailing power.
For example, transparency mechanisms can empower individuals to challenge algorithmic decisions. Competition policy can prevent excessive concentration of AI infrastructure in a handful of firms. Democratic institutions can establish guardrails for state deployment of AI in surveillance or automated decision-making. Enterprises can implement internal governance structures that align AI development with ethical standards and long-term societal interests.
The framework emphasizes adaptability. Because AI technologies evolve rapidly, static regulation risks obsolescence. Polycentric systems allow experimentation and iterative adjustment across jurisdictions and sectors. Local, national, and transnational institutions can each play roles in monitoring and recalibrating AI's societal impact.
The study also highlights the importance of legitimacy. When AI systems mediate information flows or influence administrative decisions, public trust becomes fragile. Legitimacy depends not only on technical robustness but also on participatory governance and clear accountability pathways. Citizens must retain avenues for contesting and shaping the rules that govern AI deployment.
Importantly, the authors caution against simplistic narratives that frame AI either as an existential threat or as a neutral productivity enhancer. The Digital Gorilla metaphor captures the scale and strength of AI systems without implying uncontrollable autonomy. The challenge lies not in defeating or suppressing AI but in structuring institutions capable of balancing its power.
FIRST PUBLISHED IN: Devdiscourse