Hidden risks of AI in national security operations
Artificial intelligence (AI) is rapidly transforming the way intelligence agencies detect, assess, and prevent threats, but a new study argues that ethical safeguards have not kept pace with the scale and depth of these technologies. The research, authored by Ross Bellaby of the University of Sheffield, introduces a structured framework to evaluate when and how AI-driven intelligence practices can be justified without undermining fundamental rights.
The study, titled "Managing the ethical risks of AI in intelligence: a multi-layered framework" and published in AI & Society, presents a context-sensitive model that links the level of harm caused by AI systems to the severity and credibility of the threats they are designed to counter. It argues that the central issue is no longer whether AI in intelligence causes harm, but how such harm can be measured, constrained, and ethically justified in real-world operations.
A framework to balance harm and threat in AI-driven intelligence
The multi-layered "harm–threat" framework evaluates ethical permissibility by aligning the degree of harm inflicted on individuals with the seriousness of the threat being addressed. Rather than relying on static ethical principles such as fairness or transparency, it offers a dynamic model tailored to intelligence operations, where decisions are often made under uncertainty and time pressure.
Under the hood, the model assesses three key dimensions: the severity of harm to privacy and autonomy, the proximity and seriousness of the threat, and the strength of the evidence supporting that threat. Ethical justification increases only when these factors align. High levels of intrusion, such as deep surveillance or predictive profiling, require strong, imminent threats supported by credible evidence.
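To illustrate the alignment idea in rough terms, the following is a minimal sketch, not taken from the paper: it assumes each of the three dimensions can be placed on a simple ordinal scale, and treats an intrusive measure as defensible only when both the threat's seriousness and the supporting evidence reach at least the level of harm the measure would inflict. The names Harm, Threat, Evidence and the specific tiers are illustrative assumptions.

```python
from enum import IntEnum

class Harm(IntEnum):
    MINIMAL = 1          # e.g. analysis of openly published material
    MODERATE = 2
    HIGH = 3             # e.g. deep surveillance, predictive profiling

class Threat(IntEnum):
    REMOTE = 1
    CREDIBLE = 2
    IMMINENT_SEVERE = 3

class Evidence(IntEnum):
    WEAK = 1
    CORROBORATED = 2
    STRONG = 3

def ethically_permissible(harm: Harm, threat: Threat, evidence: Evidence) -> bool:
    """Illustrative rule: a measure is treated as justifiable only when the
    seriousness of the threat AND the strength of the evidence both reach at
    least the level of harm the measure would cause."""
    return threat >= harm and evidence >= harm
```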
The framework departs from traditional AI ethics approaches that rely heavily on abstract principles or regulatory compliance. Instead, it embeds ethical reasoning directly into the intelligence cycle, which includes data collection, processing, analysis, and implementation. This integration allows practitioners to evaluate ethical risks at each stage of operation, rather than treating ethics as an external oversight mechanism.
The study highlights that intelligence work is often justified as a form of preventive self-defense. However, Bellaby warns that this justification cannot be used to legitimize unrestricted surveillance or disproportionate harm. Ethical limits must remain in place, particularly when core human rights such as privacy and autonomy are at stake.
The model also introduces thresholds that distinguish between minimal, moderate, and high levels of harm. For instance, analyzing publicly available data with minimal transformation may be considered low harm, while large-scale profiling that infers sensitive personal attributes without consent constitutes high harm. These thresholds are paired with corresponding threat levels, ensuring that intrusive measures are only justified in cases involving serious and well-supported risks.
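Continuing the sketch above, the article's own examples can be mapped onto those hypothetical tiers; the specific tier assignments below are assumptions for illustration, not the paper's own coding.

```python
# Analyzing publicly available data with minimal transformation: low harm,
# so a credible, corroborated threat is enough under the sketch's rule.
print(ethically_permissible(Harm.MINIMAL, Threat.CREDIBLE, Evidence.CORROBORATED))  # True

# Large-scale profiling that infers sensitive attributes without consent,
# pursued against a remote, weakly evidenced threat: not justified.
print(ethically_permissible(Harm.HIGH, Threat.REMOTE, Evidence.WEAK))               # False

# The same high-harm profiling clears the threshold only when the threat is
# imminent and severe and the evidence supporting it is strong.
print(ethically_permissible(Harm.HIGH, Threat.IMMINENT_SEVERE, Evidence.STRONG))    # True
```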
Privacy and autonomy emerge as central risks in AI intelligence systems
The study identifies privacy and autonomy as the two most critical areas affected by AI integration in intelligence. While AI systems can improve efficiency by processing large datasets, they also enable deeper and more persistent intrusions into individuals' lives.
Privacy risks are amplified by the ability of AI to infer sensitive information from seemingly unrelated data points. Even when data is publicly available, combining multiple datasets can reveal intimate details about individuals, including their social networks, political views, and personal behaviors. The research emphasizes that such inferences can occur without the individual's knowledge or consent, raising significant ethical concerns.
The study conceptualizes privacy as a layered structure, ranging from superficial public information to deeply personal data such as health, beliefs, and private communications. As AI systems move closer to the core of this structure, the level of harm increases significantly. The ability to infer intimate details without explicit disclosure represents one of the most profound challenges posed by modern AI systems.
Autonomy, defined as the ability of individuals to make independent decisions free from manipulation or coercion, is also at risk. The research highlights how pervasive surveillance can lead to self-censorship, as individuals alter their behavior due to the perception of being monitored. This effect is particularly pronounced in political and social contexts, where surveillance can suppress free expression and participation.
AI-driven profiling further undermines autonomy by assigning individuals to categories based on inferred characteristics such as risk level, intent, or identity. These classifications can influence how individuals are treated by authorities, often without their knowledge or ability to challenge the decision. The study warns that such systems can reinforce existing inequalities and create feedback loops that disproportionately target marginalized groups.
From data collection to predictive policing, risks escalate across the intelligence cycle
The research provides a detailed analysis of how ethical risks evolve across the intelligence cycle, highlighting how each stage introduces distinct challenges.
During the data collection phase, AI enables the aggregation of vast amounts of information from diverse sources, including social media, biometric systems, and public records. While low-level collection may involve publicly available data, higher levels of intrusion occur when systems collect sensitive information without consent or awareness. Mass data collection, particularly when combined with profiling, is identified as one of the most ethically problematic practices.
In the processing stage, AI systems transform raw data into structured information through classification, filtering, and pattern recognition. This stage introduces risks related to bias and misclassification, especially when systems rely on proxy variables such as location, language, or social connections. The study notes that such processes can embed assumptions and distort how individuals are perceived within the system.
The analysis phase raises further concerns, particularly when AI systems are used to predict behavior or assess risk. Predictive models can identify patterns in data, but they also risk reinforcing historical biases and generating false positives. The research highlights the danger of "automation bias," where human operators rely too heavily on algorithmic outputs without sufficient critical evaluation.
Implementation, the last stage, involves acting on the insights generated by AI systems. This includes surveillance, targeting, or intervention measures. The study warns that errors at earlier stages can have significant consequences at this stage, leading to unjustified actions such as wrongful surveillance or discrimination.
The research provides real-world examples of these risks, including predictive policing systems that disproportionately target certain communities and surveillance programs that rely on inferred data to identify potential threats. These cases illustrate how AI can amplify existing inequalities and create new forms of harm if not properly regulated.
First published in: Devdiscourse