This paper considers terrorism as a potentially ‘intelligent’ threat, one capable of abusing the critical infrastructures of societies and the related methods of knowledge production. Accordingly, it sees critical infrastructures as attractive media for terrorist influence. The paper describes the contrast between the logic of providing security and certainty for critical infrastructures and the threat of terrorism, which is evolving in terms of its systemic capacities and intelligence. The way security is provided within critical infrastructures and the way intelligent terrorism could operate appear to diverge from each other, thereby creating vulnerability. The paper seeks to enhance the conceptual understanding of this question by describing and closing the gap created by this intellectual separation. In doing so, the article sheds light on a conceptual dimension of (in)security that has gone unnoticed at the interface between critical infrastructures and terrorism. It outlines the aforementioned dilemma and provides conceptual understanding that makes it easier to grasp and communicate further. The paper shows that the intellectual separation has weakened the possibilities for theoretically understanding and practically recognising terrorism as a phenomenon that is becoming systemically more conscious, more intelligent and potentially increasingly capable of a form of violence that exploits the basic structures of societies and the related knowledge methods for its own purposes. In conclusion, the paper stresses the importance of profoundly critical tools for figuring out how terrorism could operate through the very means utilised in providing security, even though such tools are often perceived as undesirable or even counter-productive.
Security has been and remains an ever-present concern in societies. The way states, institutions and organisations provide security also affects how it is perceived. Perceived insecurity can be extensive, even when many indicators show that life is more peaceful and secure than before. The interpretation frameworks through which security is grasped therefore play a key role. States, institutions and organisations not only produce information about their operations and operating environment, but also create frameworks of interpretation that determine how reliable, functional and strategically up-to-date these operators are perceived to be. This is a particularly important theme for systems that play a special role in the functioning and continuity of societies. It therefore deserves critical conceptual research.
Critical infrastructures (CIs) are the cornerstone of societies’ day-to-day functioning, safety and security (
Considering CIs as socio-technical systems, significant amounts of material and intellectual capital are tied to them, not to mention the professional capital needed to keep the systems safe to operate. This material and intellectual capital, accumulated and cultivated over time, follows well-established tracks and well-polished ways of focusing on the essentials. It has enabled CIs to become more capable so that they can better serve changing needs. Moreover, it incorporates a knowledge structure that formulates the guidelines for development and frames how risks and threats should be managed, thereby assuring people of the reliability of CIs. Performing as expected, CIs become a set of self-evident structures upon which daily operations can be designed (
It is interesting to consider CIs as systems in which technological assemblies are intertwined with the social processes of knowledge production. This paper grasps this through the concept of the black box, introduced by Latour (
Infrastructure systems can quite obviously be considered black boxes, as they are taken for granted and their internal mechanisms are not assumed to be understood by outsiders (
However, trust in CIs requires that they can show that they are aware of their operational security environment and that they can manage risks. It must be demonstrated in an evidence-based manner that CIs are in competent hands—the reliability of operations must be credibly reported (
In this context, terrorism constitutes a particularly interesting security threat for CIs in its capacity for creativity, learning and intelligence
Although a great deal has been learned from the 9/11 attacks, CIs, as increasingly complex bundles of operations extending into the daily functioning of societies, continue to constitute a platform of opportunity for terrorism (
Moreover, what is undoubtedly left unburied in black boxes is our understanding of the interconnections and dependencies within CIs. Not only is our ability to manage complex systems limited, but the ways in which security threats are examined and uncertainty is reduced are also formulated and developed further in silos (
When terrorism is grasped as an active threat that proceeds intelligently in terms of its purpose and objectives, it appears capable of becoming aware of the vulnerabilities of CIs and adjusting its tactics to evade terrorism prevention measures (
This article seeks to enhance the conceptual understanding by analysing this intellectual separation through the lens of black box theorisation (
The security of CIs requires not only the effective assessment and management of threats and risks and proven reliability,
Increased connections and the formation of dependencies within CIs are essential features that reflect the level of modern societies’ development. However, the critical elements of the systems need to be protected, and external interference with the processes must be prevented—in terms of
In modern societies, CIs consist of sheltered systems of this type, together with connections and dependencies between them. Due to such complexity, even if an individual function or process can be sheltered and controlled, this capacity is unavailable at the system level; security, reliability and the related knowledge within separate systems do not translate into a broader system-level capacity (
As socio-technological systems, CIs rely on a concept of rationality that is linked to technical and engineering expertise, with their characteristic action ideals (
Technical rationality recognises errors and imperfection as part of activity, so errors made and deficiencies identified can be turned into an ability to close gaps in security, learn from failures and strengthen management capacity (
Another key characteristic of technical rationality is its tendency to interpret reality as being composed of technical challenges calling for technical responses—since a black box is filled with technical content, engineering expertise represents the most advanced interpretation of the security of CIs (
However, it is worth asking whether these expert hands are tied to established habits of thinking (
The unambiguous definition of the concept of terrorism has proved to be exceptionally challenging (
Although CIs have been targets of terrorist attacks in recent history (on statistical analysis of this theme, see e.g.
In the light of technical rationality, this would imply that even devastating terrorist attacks have been interpreted in the framework of the security of systems, and eventually, dissolved into black boxes’ increased ability to recover from any incidents and to take necessary lessons from such experiences. It is possible to explain the attacks as external acts based on the sudden use of violence which, despite their unpredictability, have succeeded in remaining within the limits of security estimates, but which can be circumscribed even better in the future by learning from them. The acts are interpreted as having hit an area that can be explained in terms of limited resources, meaning that the identified gap can be closed through additional investments without the need for fundamental change in the scheme or its underpinning logic.
The acts have failed to call into question the form of expertise that provides for security and certainty and evolves around the related issues. Instead, the acts are grasped as isolated cases that mainly indicate the intellectual relevance and insufficient resources of the systems, but not fundamental deficiencies in the general framework. Accounted for in this way, terrorism operates in a manner appropriate to technical rationality: although the attacks constitute a real, unpredictable and serious security threat in the short term, the longer-term capacity and reliability of CIs are not questioned. A terrorist attack comes as a surprise, reveals vulnerability, indicates a weak link and exposes the limitations of knowledge, but it is eventually interpreted in a way that can be managed and effectively addressed by investing in stronger systems and broader up-to-date coverage.
Confidence in the logic of security creation is not undermined because the general framework appears fully capable in terms of making sense of attacks, providing for their prevention in the future, and thus not requiring deeper reflection or scrutiny. In other words, terrorist attacks fail to abuse the trust and confidence in the structures on which people rely (
In view of the above, one may ask what forms of terrorism would call into question the framework utilised in making sense of terrorist acts. To begin with, it can be said that creating and maintaining trust and certainty—even their ability to be taken for granted and blackboxed—are at the core of the security and credibility of CIs. Undermining trust and certainty requires systematic abuse of the systems, a deliberate effort to change the operational logic of the system qualitatively. This would call for sustained action to integrate the threat into the very methods and means upon which the maintenance of operational security has been established. Terrorism would sneak in through the very operating methods that are perceived as a critical, continuously updated, effective, benevolent, field-tested and confirmed apparatus for identifying and solving problems. Terrorism could infiltrate a black box, settle in its inner world, and exploit it for its own purposes.
Such terrorism would cause systemic damage to society by affecting the various stages of designing, building, operating and managing protected and sheltered CI systems. The consequences would not be limited to isolated shocks or cascaded disturbances. Instead, they would reveal an innate vulnerability in how CIs are becoming increasingly complex, more critical and hence more and more capable of causing harm (
Such terrorism targeted at CIs would be less about a new tool in the terrorists’ toolbox, and more about a different logic of impact in which the vulnerability relates to the intellectual foundations of certainty and security, not to the interruption or destruction of individual functions. When security and the ability to manage threats become thoroughly challenged, a shadow of doubt is also cast over systems operating with a similar logic. The essence and enormous size of black boxes would be revealed. It would be difficult for operators dependent on CIs to make informed decisions on the use of systems because the credibility of the information available would have been undermined. In this sense, the real impact of terrorism consists of the fact that the target group interprets the acts in ways that question the knowledge structure, and the methods and means for strengthening it, that have, in general, been sealed in a black box.
This paper discusses the contrast between the logic of providing for the security and certainty of CIs and the intelligent development of terrorism as a major security threat to societies. The intellectual separation described above has weakened the possibilities for theoretically understanding and practically recognising terrorism as a phenomenon that is becoming systemically more conscious, more intelligent, and potentially increasingly capable of a form of violence that abuses the basic structures of societies and the related methods of knowledge production. The paper contributes to the research on terrorism by stressing the importance of profoundly critical tools for figuring out how terrorism could operate through the very means utilised in providing security, even though such tools are often perceived as undesirable or even counter-productive.
This article makes it clear that the threat of intelligent terrorism relates to perceptions of the capabilities in knowledge and control. Identifying threats, managing them and hardening the targets are essential functions that provide the desired certainty. Seeing the logic of producing certainty as blackboxing, this article shows how it can become a source of vulnerability and systematic blindness. Perhaps, therefore, the ‘known’ is less important than the means utilised in coming to know it. Perhaps the systems that determine who is recognised as possessing such knowledge, the positions granted to them and the privileges attached to putting that knowledge into action deserve critical analysis. This paper concludes that the main practical challenge is to become aware of which threats are not being seen because of systematic blindfolds and blind spots. As can be seen, the formation of blind spots is linked to what is commonly regarded as strength. However, threats beyond the field of vision will not cease to exist, even if the focus is increasingly on what is already in sight. The potential of the CI context to prepare for the coming of intelligent terrorism lies in this realisation. The ways in which terrorism can infiltrate black boxes, and the ways in which preparedness can be developed accordingly, are both interesting areas for further research.
This work was supported by the Academy of Finland under Grant 315074.
Not applicable.
No potential conflict of interest was reported by the author. The author read and agreed to the published version of the manuscript.
For instance, what are the unknown knowns, ‘knowledge that is not known because it is not supposed to be known’ described by
In this article, intelligence is defined in line with the Merriam-Webster dictionary as ‘the ability to learn or understand or to deal with new or trying situations’ and as ‘the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria’. In this sense, intelligence is not a normative or ethical statement.
Measuring the security of CIs is an extraordinarily complex task. However, various measures for assessing it have been developed, such as approved CI protection plans, audits of CI protection status, structural and budgetary changes, and exercises (