Introduction

Security has been, and continues to be, an ever-present concern in societies. The way states, institutions and organisations provide security also affects how it is perceived. Perceived insecurity can be extensive even when many indicators show that life is more peaceful and secure than before. The interpretation frameworks through which security is grasped therefore play a key role. States, institutions and organisations not only produce information about their operations and operating environment but also create frameworks of interpretation that determine how reliable, functional and strategically up-to-date these operators are perceived to be. This theme is particularly important for systems that play a special role in the functioning and continuity of societies, and it therefore deserves critical conceptual research.

Critical infrastructures (CIs) are the cornerstone of societies’ day-to-day functioning, safety and security (Bennett, 2018; Żaboklicka, 2020). CIs include, but are not limited to, sectors such as energy, water, transportation, information and communications technology, nuclear, financial services and government facilities.1 As Hellström (2007) explains, ‘[c]ritical infrastructures are “critical”, not because they are important in general, but because they are strategically connected in such a way that they focus society’s total vulnerability to a few particular points in the system.’ In this sense, CIs have the potential to cause adverse effects that go deep into societies’ functional ability (Graham, 2010). They have become a security problem (Collier and Lakoff, 2007).

Considering CIs as socio-technical systems, significant amounts of material and intellectual capital are tied to them, not to mention the professional capital needed to keep the systems safe to operate. This material and intellectual capital, accumulated and cultivated over time, has its well-established tracks and well-polished ways of focusing on the essentials. It has enabled CIs to become more capable so that they can better serve changing needs. Moreover, it incorporates a knowledge structure that formulates the guidelines for development and frames how risks and threats should be managed, thereby assuring people of the reliability of CIs. Performing as expected, CIs become a set of self-evident structures upon which daily operations can be designed (Graham, 2010).

It is interesting to consider CIs as systems in which technological assemblies are intertwined with the social processes of knowledge production. This paper grasps this through the concept of the black box, introduced by Latour (1987, 1994, 1999). According to this theorisation, as a system's components multiply and their management becomes increasingly complex, and hence more difficult to understand, it is reasonable to describe the system as a black box of which only the inputs and outputs need to be known. The historical bifurcation points of the systems, unselected development paths, controversial choices, conflicting mindsets, the differing interests of actors and so on are locked in a box, so that only its main function remains visible (Latour, 1987, 1999; see also Shindell, 2020). Blackboxing 'obfuscates actions and inner workings due to stable and reliable functioning', as De Rosa (2018) explains.

Infrastructure systems can quite obviously be considered black boxes, as they are taken for granted and their internal mechanisms are not assumed to be understood by outsiders (Graham, 2010; Latour, 1994). Only their failure, when, for example, they become 'transformed into weapons of mass disruption, destruction and often death' (Graham, 2005), may reveal something about the inner workings of the black boxes; in ordinary times, when they operate as expected, the underlying assumptions of their operations and security remain blackboxed, faded out of critical sight.

However, trust in CIs requires that they can show that they are aware of their operational security environment and that they can manage risks. It must be demonstrated in an evidence-based manner that CIs are in competent hands: the reliability of operations must be credibly reported (Bennett, 2018). But what is classified as evidence, and how conclusions about the reliability of CIs are drawn, stays out of reach (Aven and Guikema, 2011). What also remains hidden is the process in which non-knowledge is dressed in the form of expert knowledge.2 This is essential, since knowledge and non-knowledge are both constitutive in decision-making processes (Beck, 2002; Daase and Kessler, 2007). In line with Rydin et al. (2018), it can be argued that the question is how knowledge-claims are constructed in response to the uncertainty of the security environment. In fact, from outside a black box it is impossible to see beyond the knowledge-claims presented from inside the box, or to assess their limitations and validity. That is, one need only trust CIs' reliability and attend to inputs and outputs.

In this context, terrorism constitutes a particularly interesting security threat for CIs in its capacity for creativity, learning and intelligence3 (Gill et al., 2013). How the development potential of terrorism is interpreted, that is, what is perceived as being possible for it, affects the interests of terrorism research, counter-terrorism practices, and risk assessment work regarding the vital functions of societies. New types of terrorist acts keep pushing the boundaries of what is seen as possible, but that sight tends to come only after the fact. It can be said that the events of 9/11 caused long-term impacts precisely because they undermined the then-established ideas and assumptions of terrorism prevention: something happened that had been thought impossible, had not been properly thought of, or had been deemed beyond comprehension (The 9/11 Commission Report, 2004).

Although a great deal has been learned from 9/11, CIs, as increasingly complex bundles of operations that extend into the daily functioning of societies, continue to constitute a platform of opportunity for terrorism (Collier and Lakoff, 2007; Coward, 2009). Things that remain outside the established interpretative framework can become reality in new and drastic ways, which may further shake the closely guarded contents of black boxes in addition to causing extensive and deep adverse impacts in other ways (Botha, 2020). In this regard, harnessing such systems, which have over time developed into increasingly complex bundles of critical societal functions and interdependencies (Cedergren et al., 2018; Hempel et al., 2018; Peerenboom and Fisher, 2010; Wang et al., 2013), as a weapon for terrorist purposes would make 9/11 look like a short prelude.

Moreover, what undoubtedly remains buried in black boxes is our understanding of the interconnections and dependencies within CIs. Not only is our ability to manage complex systems limited, but the ways in which security threats are examined and uncertainty is reduced are also formulated and developed further in silos (Aradau and van Munster, 2011). Our vulnerability to terrorism thus reflects the very nature of our systems and the way they have been organised (Logan et al., 2019). It is therefore important to understand the dynamics of such a threat, and the provision of security in the context of CIs, in more diverse and critical ways.

Aim and method

When terrorism is grasped as an active threat that proceeds intelligently in terms of its purpose and objectives, it appears capable of becoming aware of the vulnerabilities of CIs and of adjusting its tactics to avoid terrorism prevention measures (Hausken, 2017; Quijano et al., 2018). In this sense, the way security is provided within CIs and the way intelligent terrorism could operate form an interesting tension for closer examination. The two are separating from each other in a way that creates vulnerability, though it is not well understood how.

This article seeks to enhance conceptual understanding by analysing this intellectual separation through the lens of black box theorisation (Latour, 1987, 1994). By so doing, the article sheds light on a conceptual dimension of (in)security that has gone unnoticed at the interface between CIs and terrorism (Ranstorp and Normark, 2009). It aims to outline the aforementioned dilemma and provide a conceptual understanding that makes it easier to grasp and communicate further. This is conducted in the following way: First, the rationality of creating security within the context of CIs is outlined (i.e., the logic of blackboxing). After that, terrorism targeted at CIs as a method capable of opening black boxes is scrutinised. Then, the potential next level of terrorism that could shake the content hidden in the box is discussed. Finally, conclusions are drawn with the intention of opening new paths of thinking in unforeseen ways that are, where terrorism is concerned, relevant and insightful.

Results

Critical infrastructure and the logic of creating security

The security of CIs requires not only the effective assessment and management of threats and risks and proven reliability,4 but also public trust in that reliability; the dependence of everyday activities on CI systems indicates and requires trust and certainty (Critical Five, 2014). Measuring the security of CIs is an extraordinarily complex task, although various measures for assessing it have been developed, such as approved CI protection plans, audits of CI protection status, structural and budgetary changes, and exercises (Piekarski and Wojtasik, 2022). The interpretation frameworks through which certainty assessments are made and trust maintained reflect the way threats are perceived, prepared for and managed.

Increased connections and the formation of dependencies within CIs are essential features that reflect the level of modern societies' development. However, the critical elements of the systems need to be protected, and external interference with the processes must be prevented (through target hardening, for instance) to ensure the continuity of the respective operations, including in exceptional times (Bennett, 2018; Botha, 2020; Kahan, 2017; Wang et al., 2013; Wiśniewski, 2016). Such a solidification of protection is a way of making systems more robust and less susceptible to interference and harm (Aradau, 2010). Concurrently, however, it is a method for blackboxing CI systems in a way that supports their main functional objectives and draws attention to their essential aspects. That is, the systems become encapsulated not only in a technical sense but also in terms of knowledge production (Moore et al., 2019). But what does this encapsulation mean more specifically?

In modern societies, CIs consist of sheltered systems of this type, together with the connections and dependencies between them. Due to such complexity, even if an individual function or process can be sheltered and controlled, this capacity is unavailable at the system level; security, reliability and the related knowledge within separate systems do not translate into a broader system-level capacity (Hempel et al., 2018). Despite this, CIs must deliver on their promise of certainty and prove their ability to provide for security. But how does the mechanism behind this 'providing for security' work? In other words, how is blackboxing conducted so that CIs present themselves as systems performing their functions in a foolproof manner?

As socio-technical systems, CIs rely on a concept of rationality that is linked to technical and engineering expertise, with its characteristic action ideals (Delanty and Harris, 2021; Mitcham, 2015; see also Schön, 1983). As technical rationality aims to produce order and control capacity through data classification and calculation (Gunderson et al., 2018), the provision for security builds on concepts of risks and threats identified from previous events, as well as their management mechanisms, criticality and vulnerability analyses, and scenario models, among others (Chen et al., 2019; Reniers and Audenaert, 2014; Wang et al., 2013; Zio, 2016). Technical rationality brings efficiency to interpreting reality and to turning what has been learned from previous events into better capability. It seldom expresses its selectivity, but instead presents itself as the only reasonable indicator of reality and provider of solutions, as is typical for black boxes.
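The calculative core of this rationality can be illustrated schematically (the formulation below is a conventional expression from quantitative risk analysis, not a formula drawn from the works cited above). Aggregate risk is typically expressed over a set of identified threat scenarios as

R = \sum_{i=1}^{n} p_i c_i,

where p_i denotes the estimated likelihood of threat scenario i, c_i the magnitude of its consequences, and n the number of scenarios in the set; criticality and vulnerability analyses then rank assets and functions by their contribution to R. For the present argument, the salient point is that the scenario set itself is constructed from previous events: a threat that never enters the n scenarios never enters the calculation at all.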

Technical rationality recognises errors and imperfection as part of activity, so errors made and deficiencies identified can be turned into an ability to close gaps in security, learn from failures and strengthen management capacity (Gómez, 2015). The internal logic of technical rationality determines what stands out as an error, an imperfect performance or a gap in security. Within that logic, the very idea that the logic itself could be defective or even harmful appears absurd; it cannot be scrutinised analytically from within. Systems thus learn in a way that represents a stronger capacity to address an increasingly diverse range of security threats and an enhanced capability to manage them effectively. This builds trust in the capability of expertise to face uncertainty (Bieder, 2017; Runciman, 2006). In this way, technical rationality can ensure that CIs are always the best possible ones given the situation at hand, and that they are capable of evolving towards the next best possible condition.

Another key characteristic of technical rationality is its tendency to interpret reality as being composed of technical challenges calling for technical responses: since a black box is filled with technical content, engineering expertise represents the most advanced interpretation of the security of CIs (Gunderson et al., 2018). It defines the most essential problems in the security environment, formulates them, and opens the vista to solutions deemed adequate, justified and sufficient (Hubbard, 2009). It is legitimised as the correct, commonly approved method of knowledge (Aradau and van Munster, 2011), and as it becomes increasingly sophisticated and strengthens its position, it increases the distance between professionals and laypeople such as the ordinary users of infrastructure services (Gómez, 2015). Laypeople are expected to trust and believe that the security of the systems is in expert hands (Rodríguez, 2015).

However, it is worth asking whether these expert hands are tied to established habits of thinking (Aven and Guikema, 2011), and have hence become an instrument for solving, ever more precisely and subtly, only the problems tailored to those habits. That is, on what basis can the capacity to design, operate and manage well-defined technical problems be extended to providing security for CIs on a larger scale? Such a question usually remains unasked, as the basic assumptions of technical rationality become standardised and move beyond criticism, allowing argumentation and operation to be built upon them (Mitcham, 2015). Paraphrasing Goldman (2018), it can be said that technical rationality gives the impression of creating security in a value-neutral, evidence-based manner, which is why it elicits trust and undermines sceptical arguments. However, why technical rationality is considered competent to determine the CIs' security environment, and what knowledge is excluded or ignored by its interpretation framework, remains unexplored. Such taken-for-granted gaps constitute potential security hazards. It is therefore important to approach them as possible means and mediums that intelligent terrorism could weaponise.

Terrorism targeted at critical infrastructures

Defining the concept of terrorism unambiguously has proved exceptionally challenging (Feyyaz, 2019), and attempts to define it precisely may also have hampered a comprehensive grip on the phenomenon (Ramsay, 2015). Despite the lack of consensus, the general features of terrorism include the desire to cause feelings of terror and insecurity, through violence or the threat of violence, within a group larger than the actual target of the attacks themselves (Horgan, 2005; Richards, 2014). According to Richards (2014), 'terrorism is best conceptualised as a particular method of political violence', which can be read to mean that the significance of terrorism cannot be reduced to the immediate impacts of a limited attack; rather, the real impacts arise from the fundamental values, structures and assumptions that terrorism threatens (e.g., Enders and Sandler, 2012). In other words, the core of the dynamics of terrorism consists of how terrorist activity is interpreted in the target society, and how these interpretations affect people's sense of security. By attacking a piece of critical infrastructure, as Bennett (2018) argues, terrorists may disrupt the standard of living and cause significant physical, psychological and financial damage. Another question, however, is how an attack would shake the joints of the black boxes and undermine trust in CIs and their management in general.

Although CIs have been targets of terrorist attacks in recent history (for statistical analysis of this theme, see e.g. Miller, 2016), the extent to which these attacks have succeeded in causing long-term, large-scale impacts is highly questionable (Abrahams and Gottfried, 2014; see also Muro, 2019). Roughly speaking, CI systems have so far proved their ability to return to normal operation quickly after attacks, and communities have regained their trust in the security of the systems. It seems that the attacks have generally failed to communicate in ways that would profoundly undermine people's trust in the technical systems and structures that underpin and maintain their daily lives. The acts have neither disrupted the social fabric of societies by instilling distrust between people nor caused them to question the basic assumptions on which everyday activities are built (Maras, 2013; see also Muro, 2019). In other words, people have sustained their sense that experts and administrations are capable of identifying and managing the respective threats as well as organising effective responses to attacks (cf. Van Der Does et al., 2019). The knowledge strategy for maintaining the continuity of a CI and enhancing its security ex post facto has convinced people of its capability and proved its reliability quite well (Hoffman and Shelby, 2017).

In the light of technical rationality, this would imply that even devastating terrorist attacks have been interpreted within the framework of the security of systems and, eventually, dissolved into the black boxes' increased ability to recover from incidents and to take the necessary lessons from such experiences. It is possible to explain the attacks as external acts based on the sudden use of violence which, despite their unpredictability, have succeeded in remaining within the limits of security estimates, but which can be circumscribed even better in the future by learning from them. The acts are interpreted as having hit an area that can be explained in terms of limited resources, meaning that the identified gap can be closed through additional investment without any need for fundamental change in the scheme or its underpinning logic.

The acts have failed to call into question the form of expertise that provides for security and certainty and revolves around the related issues. Instead, the acts are grasped as isolated cases that mainly indicate the intellectual relevance of the systems and their insufficient resources, not fundamental deficiencies in the general framework. Thus accounted for, terrorism behaves appropriately for technical rationality: although attacks constitute a real, unpredictable and serious security threat in the short term, the longer-term capacity and reliability of CIs are not questioned. A terrorist attack comes as a surprise, reveals a vulnerability, indicates a weak link and exposes the limitations of knowledge, but it is eventually interpreted as something that can be managed and effectively addressed by investing in stronger systems and broader, up-to-date coverage.

Confidence in the logic of security creation is not undermined, because the general framework appears fully capable of making sense of attacks and of providing for their future prevention, and thus seems to require no deeper reflection or scrutiny. In other words, terrorist attacks fail to abuse the trust and confidence in the structures on which people rely (BaMaung et al., 2018). However, the fact that the black boxes' internal logic has so far withstood them does not imply that it is free from vulnerabilities. To figure out this inherent vulnerability, it is necessary to delineate a trend in the development of terrorism that casts light on the need for alternative interpretation frameworks.

The potential next level of terrorism

In view of the above, one may ask what forms of terrorism would call into question the framework utilised in making sense of terrorist acts. To begin with, it can be said that creating and maintaining trust and certainty, even their capacity to be taken for granted and blackboxed, lie at the core of the security and credibility of CIs. Undermining trust and certainty requires systematic abuse of the systems: a deliberate effort to change the operational logic of a system qualitatively. This would call for sustained action to integrate the threat into the very methods and means upon which the maintenance of operational security has been established. Terrorism would sneak in through the very operating methods that are perceived as a critical, continuously updated, effective, benevolent, field-tested and confirmed apparatus for identifying and solving problems. Terrorism could infiltrate a black box, settle into its inner world, and exploit it for its own purposes.

Such terrorism would cause systemic damage to society by affecting the various stages of designing, building, operating and managing protected and sheltered CI systems. The consequences would not be limited to isolated shocks or cascading disturbances. Instead, they would reveal an innate vulnerability in how CIs are becoming increasingly complex, more critical and hence more and more capable of causing harm (Hellström, 2007). This would call into question the assumptions about the method applied in accumulating the knowledge, understanding and expertise on which the security of CIs relies. Such an act would be reflected in the mechanisms through which people and societal functions have become, and continuously are, dependent on CI systems. Well-intentioned investments in system reliability would be questioned in a way that undermines the framework considered legitimate. It would destroy social trust by creating anticipatory fear of terrorism: as Godefroidt and Langer (2018) showed, fearing terrorism destroys social trust, and the threat of terrorism does not even have to be real.

Such terrorism targeted at CIs would be less about a new tool in the terrorists' toolbox and more about a different logic of impact, in which the vulnerability relates to the intellectual foundations of certainty and security rather than to the interruption or destruction of individual functions. When security and the ability to manage threats become thoroughly challenged, a shadow of doubt is also cast over systems operating with a similar logic. The essence and enormous size of the black boxes would be revealed. It would be difficult for operators dependent on CIs to make informed decisions about the use of the systems, because the credibility of the available information would have been undermined. In this sense, the real impact of terrorism consists of the target group interpreting the acts in ways that call into question the knowledge structure, and the methods and means for strengthening it, that have in general been sealed in a black box.

Conclusions

This paper discusses the contrast between the logic of providing security and certainty for CIs and the intelligent development of terrorism as a major security threat to societies. The intellectual separation described above has weakened the possibilities for theoretically understanding and practically recognising terrorism as a phenomenon that is becoming systemically more conscious, more intelligent, and potentially increasingly capable of a form of violence that abuses the basic structures of societies and the related methods of knowledge production. The paper contributes to terrorism research by stressing the importance of profoundly critical tools, often perceived as undesirable or even counter-productive, for figuring out how the very means utilised in providing for security can themselves become a mechanism of vulnerability.

This article makes it clear that the threat of intelligent terrorism relates to perceptions of capabilities in knowledge and control. Identifying threats, managing them and hardening targets are essential functions that provide the desired certainty. By seeing the logic of producing certainty as blackboxing, this article shows how that logic can become a source of vulnerability and systematic blindness. Perhaps, therefore, the 'known' is less important than the means utilised in coming to know it. Perhaps the systems by which the possessors of that knowledge are identified, the positions granted to them, and the privileges attached to putting that knowledge into action deserve critical analysis. This paper concludes that the main practical challenge is to become aware of which threats are not being seen because of systematic blindfolds and blind spots. As shown above, the formation of blind spots is linked to what is commonly seen as strength. However, threats beyond the field of vision will not cease to exist, even if the focus is increasingly on what is already in sight. The potential, within the context of CIs, to prepare for the coming of intelligent terrorism lies in this realisation. The ways in which terrorism can infiltrate the black boxes, and the ways in which preparedness can be developed accordingly, are both interesting areas for further research.