Ethical AI needs human dispositions, not one-size-fits-all codes


CO-EDP, VisionRI | Updated: 10-02-2026 12:53 IST | Created: 10-02-2026 12:53 IST

Efforts to govern artificial intelligence (AI) have largely focused on preventing harm and enforcing compliance, but experts warn that ethics extends beyond regulation. In many everyday situations, AI systems must navigate moral choices that depend on individual values rather than legal mandates.

That challenge is examined in "Individual ethics and dispositions in the digital world," a study published in AI & Society, which proposes a computational model for embedding personal moral preferences into AI systems. The research argues that ethical AI must account for how individuals differ in judgment, even when operating under the same legal framework.

The research arrives amid intensifying regulatory efforts, including the European Union's AI Act, which prioritizes safety, accountability, and legal compliance. While such frameworks establish non-negotiable boundaries for AI behavior, the authors argue that regulation alone cannot address the ethical diversity of real human decision-making. Instead, they propose a formal model that enables AI systems to adapt to the moral preferences of individual users while remaining within legal and ethical constraints.

Why universal AI ethics fall short in daily life

Much of the public debate on AI ethics has focused on extreme scenarios, such as autonomous weapons or life-and-death decisions made by self-driving cars. However, the authors shift attention to a quieter but more pervasive reality: AI systems increasingly act as digital partners in mundane, morally charged situations that shape daily life.

These situations include deciding whether to prioritize ethical products over cheaper alternatives, whether to save energy at the cost of personal comfort, or how to mediate fairness in public services. In such contexts, ethical decisions are rarely binary or universal. They depend on personal values, social norms, lived experience, and situational context.

The study challenges the assumption that embedding a fixed ethical code into AI systems is sufficient. Instead, it positions ethics as something that emerges through interaction. According to the authors, individuals do not simply apply abstract moral principles. They act based on tendencies shaped by experience, social environment, and context. Capturing this reality requires a move away from rigid rule-based ethics toward what the study describes as "soft ethics."

Soft ethics operates beyond legal compliance without violating it. It addresses what individuals believe should be done when the law permits multiple choices. The authors emphasize that soft ethics does not replace hard ethics, such as data protection or safety regulations. Rather, it fills the ethical space that law intentionally leaves open.

Modeling moral behavior through dispositions

The study discusses the concept of moral dispositions, drawn from philosophical theories of dispositional properties. A disposition reflects how an individual is inclined to act when certain conditions are met, without guaranteeing that the action will always occur. Courage, generosity, and fairness are classic examples of dispositions. They describe tendencies, not fixed outcomes.

The authors apply this framework to ethics in digital environments. Instead of treating moral preferences as explicit rules, they model them as dispositions that may manifest depending on context. This approach acknowledges that people can behave inconsistently without losing their moral identity. A person may be generally helpful yet fail to act generously in a stressful situation. The disposition remains, even if it does not manifest.

To operationalize this idea, the researchers propose a structured method for eliciting individual moral dispositions using scenario-based questionnaires. Participants are presented with everyday ethical dilemmas and asked to choose a course of action. Crucially, they also justify their choice using four evaluative dimensions: consequences for others, consequences for oneself, alignment with social norms, and alignment with personal experience.

These dimensions are not treated as moral truths but as descriptive parameters. Each response generates a moral signature that reflects how strongly an individual weighs each consideration in a given context. Over time, consistent patterns emerge, allowing the system to infer underlying dispositions.
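
In computational terms, a moral signature can be pictured as a small weighted record, one per response. The Python sketch below is illustrative only: the field names, the 0-to-1 weights, and the simple averaging rule are assumptions made for exposition, not the study's formal definitions.

    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class MoralSignature:
        # How strongly one response weighed each evaluative dimension (0 to 1).
        others: float         # consequences for others
        self_interest: float  # consequences for oneself
        norms: float          # alignment with social norms
        experience: float     # alignment with personal experience

    def infer_disposition(responses: list[MoralSignature]) -> MoralSignature:
        # Average repeated responses so that consistent patterns surface
        # as an inferred disposition for this kind of context.
        return MoralSignature(
            others=mean(r.others for r in responses),
            self_interest=mean(r.self_interest for r in responses),
            norms=mean(r.norms for r in responses),
            experience=mean(r.experience for r in responses),
        )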

The study stresses that this process is descriptive rather than prescriptive. The goal is not to judge whether a person's ethics are right or wrong, but to understand how they tend to act when faced with moral choices.

From questionnaires to ethical action in AI systems

The research presents a formal computational model that translates human responses into machine-readable ethical profiles. These profiles allow AI systems to recognize which actions best align with a user's moral tendencies in new situations.

The model links three elements: a description of the world setting, a set of possible actions, and a preferred action that reflects the user's disposition in that context. When a similar situation arises, the AI system can compare the new context to previously encountered ones and select the action most consistent with the user's ethical profile.
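
A minimal reading of that loop might look like the following sketch. The data structures are assumed for illustration: the paper does not prescribe a numeric context encoding or a squared-distance similarity measure, and any richer case-based or learned matcher could take their place.

    from dataclasses import dataclass

    @dataclass
    class Episode:
        context: dict[str, float]  # numeric description of the world setting
        options: list[str]         # actions that were possible
        preferred: str             # action the user chose in that setting

    def similarity(a: dict[str, float], b: dict[str, float]) -> float:
        # Negative squared distance: larger means more alike.
        keys = set(a) | set(b)
        return -sum((a.get(k, 0.0) - b.get(k, 0.0)) ** 2 for k in keys)

    def choose(profile: list[Episode], context: dict[str, float],
               options: list[str]) -> str | None:
        # Reuse the preference from the most similar past episode
        # whose preferred action is actually available here.
        for episode in sorted(profile,
                              key=lambda e: similarity(e.context, context),
                              reverse=True):
            if episode.preferred in options:
                return episode.preferred
        return None  # no disposition applies; a fallback strategy decides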

Importantly, the authors do not assume perfect rationality or consistency. They explicitly reject traditional decision-theory assumptions that often fail to reflect human behavior. Instead, the model accommodates context sensitivity, competing influences, and even contradictions.

The study also allows for multiple ways of resolving uncertainty when no exact match exists between a new situation and past dispositions. AI designers or users may choose conservative approaches, random selection, or similarity-based reasoning depending on the application. This flexibility is presented as a strength rather than a weakness.
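
Those options map naturally onto pluggable fallback strategies. A hedged sketch, assuming the first listed option serves as the conservative default; the similarity-based route would amount to reusing the nearest past episode from the lookup above, however weak the match:

    import random

    def resolve(options: list[str], strategy: str = "conservative") -> str:
        # Fallback when no past episode matches the new situation.
        if strategy == "conservative":
            return options[0]  # assumes options[0] is the safe default
        if strategy == "random":
            return random.choice(options)
        raise ValueError(f"unknown strategy: {strategy!r}")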

Balancing personalization and ethical safeguards

While personalization is often presented as an unquestioned good in AI design, the authors take a more cautious stance. They acknowledge that personal preferences can be biased, harmful, or in conflict with social norms.

For this reason, the model explicitly operates within hard ethical boundaries. Legal requirements and fundamental rights remain non-negotiable. Preferences that violate these constraints cannot be enacted, even if they are accurately captured. The authors argue that eliciting such preferences is still valuable, as it provides insight into the complexity of human moral reasoning.
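
One way to read this architecture is as a filter stage ahead of any preference matching. In the sketch below, the violates check is a stand-in for legal and safety tests, not something the paper specifies: hard ethics prunes first, and soft ethics only ranks what survives.

    from typing import Callable

    def permissible(options: list[str],
                    violates: Callable[[str], bool]) -> list[str]:
        # Drop options that break non-negotiable rules; personal
        # dispositions are only applied to the remaining actions.
        return [o for o in options if not violates(o)]

On this reading, even an accurately captured preference that fails the check is never enacted, though it can still be recorded and studied.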

Rather than framing ethical profiling as mere personalization, the study casts it as a tool to reclaim user agency from opaque algorithmic decision-making. By allowing individuals to express and refine their moral preferences, the model aims to rebalance power between users and digital systems.

This approach also addresses long-standing concerns about value alignment in AI. Rather than assuming a single set of shared human values, the model accepts moral pluralism as a reality. Ethical AI, in this view, is not about enforcing consensus but about managing diversity responsibly.

Implications for policy, design, and AI governance

For the general public, the research challenges simplistic narratives about AI ethics. It reframes ethical AI not as a problem to be solved once and for all, but as an ongoing negotiation between individuals, technology, and institutions.

For policymakers, it highlights the limits of regulation in shaping ethical behavior. Laws can prohibit harm, but they cannot dictate how individuals prioritize fairness, generosity, or responsibility in ambiguous situations. Systems that ignore this gap risk alienating users and eroding trust.

For AI developers, the study offers a concrete path toward human-centered design. By embedding ethical dispositions rather than fixed rules, systems can become more adaptive, transparent, and respectful of user autonomy. The formal nature of the model also makes it compatible with existing software engineering practices.
