
Center for War Studies conference

Technology & Digitalization as Forces of Change

Held on 3–4 October 2019 at SDU, the Center for War Studies' annual conference brought together experts and perspectives on how technological change in the fields of communications and robotics is impacting the existing international order. Olivier Schmitt, then head of the Center for War Studies (CWS), stated in his opening remarks that examining this subject is only possible by including the insights of the multiple research disciplines represented within CWS and at the conference. Amelie Theussen and Bugge Thorbjørn Daniel from CWS, the organizers of the conference, then set the stage, explaining that the purpose of the conference was to allow participants the time to ask questions and to discuss together how technology and digitalization will impact our future.

The first panel of the conference, titled The Changing Role of the Media, was chaired by CWS's own Chiara de Franco, and opened with Ekatherina Zhukova of the University of Copenhagen, whose research focuses on "Image Substitutes and Visual Fake History", especially the representation of the Ukrainian famine of 1932-1933 on social media. Using data collected from Instagram under the tag #holodomor (the Ukrainian name for the famine), Zhukova has been able to identify the use of falsified images and descriptions to spread certain narratives. Such images substitute for historical truth, enabled by the lack of codes of ethics on image manipulation, institutional neglect of the prosecution of history falsification, and the visual clutter that overwhelms efforts to identify and remove such images. It is not "fake news" we should fear, warned Zhukova, but "fake history", which is slow and long-lived.

A different approach was presented by Matthew Mahmoudi of the University of Cambridge, whose presentation on the "Techno-Politics of Human Rights" argued for the use of social media data to spot human rights violations. The volume of data available on social media is a double-edged sword: it provides an abundance of information, but it also poses problems of management and verification. According to Mahmoudi, a variety of forensic tools are employed to spot such violations and to ensure transparency and accountability. Social media can thus be a force for good, but it can also lead to fetishization, as it remains tied to elitism and to the structural inequities embedded within the same frameworks. "Having a voice does not mean that you will be heard", Mahmoudi warned.

The utility of social media as spaces for accountability, but also as echo chambers, was the topic of the talk by Rune Ottosen of Oslo Metropolitan University. Ottosen discussed social media in times of crisis, beginning with the dark net as a motivating factor in the preparation of terrorist attacks and as a space for the reinforcement and spread of dangerous rhetoric. While such platforms serve as fertile ground for terrorist groups, whether for recruitment or for gaining attention, other spaces like Twitter work to create situational awareness and bring attention to the crisis at hand. At the same time, certain authorities are able to reach a wider audience through social media such as Twitter, establishing these platforms as means of effective, two-way communication. Social media, Ottosen concluded, is what users make of it, and he recommended that crisis authorities acknowledge its importance and make better use of its features and audience.

Finally, the first panel ended with Kevin Williams of Swansea University, who offered a different view of reporting and media, one focused on the history of war reporting and on how war correspondents have reflected on, interpreted, and integrated technological and social change into their work. The advent of social media has instilled a feeling of insecurity in war correspondents, who are now subjected to continuous scrutiny; yet "the death of the war correspondent" has, according to Williams, always dominated the discourse surrounding the profession. "New technologies", said Williams, "always create premonitions of demise", and the profession of the war correspondent has of course changed with new technologies and social media, but historically correspondents have adjusted to social and technological change. It is therefore vital to emphasize continuity rather than change.

In the second panel, chaired by Kerstin Carlson from CWS, the focus was on the threat of cyberattacks, the regulation of cyberspace and its fit with existing legal norms, and the representation of these issues in popular culture. Stefan Soesanto from ETH Zurich started the panel by offering his perspective and experiences from the world of think tanks on developments in cyber security. Soesanto argued that Obama's strategies for making cyberspace more regulated had many good intentions, but they did not work. Although the US has highly sophisticated cyber capabilities, actors such as Iran and North Korea have also been active in this domain. The presentation ended with the point that cyberspace is largely an offensive, persistent environment, where defence is only possible in the moment. In that sense, said Soesanto, "cyber operations is about messing with everything there is in an adversarial space and there is an overlap between cyber operations and electronic warfare."

 

The question of what has been done to make cyberspace more 'structured' was the focus of Tomáš Minárik from the National Cyber and Information Security Agency of the Czech Republic. In his presentation, he outlined how the 2007 cyber-attacks on Estonia were important for establishing some of the most important legal manuals in use today. Although it is not a codification, the Tallinn Manual functions as a source of advice on how to understand the legal implications of cyberspace. Minárik stated that the manual has had an impact on the international community; France, for example, has recently produced its own interpretation of the legal framework for cyber operations based on the Tallinn Manual. Finally, Minárik noted that an online toolkit has been developed, which is available for lawyers to use when needed.

 

Next up was Dominique Routhier from SDU, who approached cybernetics from a philosophical angle. Starting from the conceptual etymology of cybernetics, Routhier explained that cybernetics is fundamentally about feedback systems. He drew on one of the leading theorists of cybernetics, Norbert Wiener, whose wartime work on fire-control computation for anti-aircraft guns targeting enemy airplanes led him to realize that the notion of the feedback system could become the model for a cybernetic understanding of the universe. The idea is that humans are no different from animals or machines, because all are governed by complex behaviour guided by feedback systems. Routhier closed his presentation by arguing that art, and its representation in popular culture, acts as a gateway for foresight and further creativity in understanding the grey zone and lack of boundaries between technology and humanity.
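
Wiener's core insight can be illustrated with a minimal sketch (an assumed toy example, not drawn from the conference or from Wiener's actual fire-control mathematics): a negative-feedback loop that repeatedly measures the error between a goal and the current state and feeds a correction back into the system.

```python
# Illustrative sketch only: a minimal negative-feedback loop, the core idea
# of cybernetics. The controller measures the error between the target and
# the current state and feeds a proportional correction back into the system.

def feedback_loop(target, state, gain=0.5, steps=20):
    """Drive `state` toward `target` via proportional negative feedback."""
    for _ in range(steps):
        error = target - state   # measure the deviation from the goal
        state += gain * error    # feed a corrective fraction back in
    return state

# Wherever the system starts, the feedback drives it toward the target.
assert abs(feedback_loop(target=10.0, state=0.0) - 10.0) < 0.01
```

The same loop governs a thermostat, an anti-aircraft predictor, or an animal reaching for an object, which is precisely the equivalence between humans, animals, and machines that Routhier's presentation pointed to.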

 

The second panel was closed by Jan Lemnitzer from CWS, who asked to what extent history can teach us about norm creation, and how such lessons might apply in cyberspace. Lemnitzer noted that progress in developing such norms has so far been limited. The same was true in the mid-19th century, and Lemnitzer pointed to the outlawing of the naval bombardment of cities and of the use of privateers as cases worth examining further. Naming and shaming proved to be a driver of norm creation, as the bombardment of Greytown in 1854 and the Spanish attack on Valparaíso in 1866 show. So, what can history teach us about norm creation? Creativity pays off; naming and shaming should not be underestimated, as it has a cumulative effect over time; public attention should be used to accuse the perpetrator of violating a specific norm; and one should be prepared for the moment of codification.

 

The third panel of the conference was dedicated to the Role of Drones and was chaired by Vincent Keating of CWS. The panel opened with CWS's own James Rogers and his talk on "Distant and Deniable Drones": distant because drones can go farther and hit targets with pin-point precision, and deniable because a number of actors, both state and non-state, are acquiring systems very similar in capability and appearance. Terrorist networks and rebel groups are able to produce their own drones, as they possess the molds and can acquire the materials from commercial suppliers to build and modify drones for their intended purposes, and they can also sell them to other actors, further accentuating their deniability. Deniability makes it difficult, if not impossible, to identify the source of an attack, reducing the chances of accountability and the justification for retaliation, and making it very hard to take any concerted action against alleged sources of attack. That, according to Rogers, is the "looming threat" of drones.

Adoption and development of drone capabilities by states paint a slightly different picture, according to Dominika Kunertova of CWS. Drones, especially after the 9/11 attacks, have been effectively employed in operations and have thus become a widely debated issue, raising a number of ethical and legitimacy questions. Kunertova has produced a report on the defense market, specifically the proliferation of military drones in Europe, and has observed that while European states are said to lag behind the US in developing and adopting military drone technology, the demand for more and better drones is increasing. States procure drones because of operational needs in the face of security threats, but also to emulate the practices of other militaries, especially the US. At the same time, they face platform and adoption challenges, such as organizational and financial constraints, material obstacles, and technological demands. "The more sophisticated the technology, the higher the platform and adoption challenges", said Kunertova.

A new way of thinking about drone formations, such as swarms, was presented by Maaike Verbruggen of the Vrije Universiteit Brussel. Swarms are configurations of multiple systems that share a collective goal while interacting individually; they have the benefits of being robust (meaning that if one individual system is taken out, the rest can still operate), adaptable, and scalable. They can be controlled as a whole or in subsets, they have no individual objectives, only collective ones, and they distribute their tasks internally. There is a plethora of envisioned uses for swarms, including overwhelming enemy air defenses, aerial dogfights, and Anti-Access / Area Denial operations. As of now, there are a number of fundamental obstacles to their use, mainly the inability to control swarms post-launch, bandwidth problems and asynchronicity, their high cost and low accessibility, and important questions about ethics and compliance with international law.
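
The robustness property Verbruggen described can be sketched with a toy model (an assumed illustration, not any system discussed at the conference): tasks are distributed internally across the swarm, and when one agent is lost, its share is simply re-spread among the survivors so the collective goal is still met.

```python
# Illustrative toy model only: swarm robustness as task redistribution.
# Agents have no individual objectives; they share one pool of tasks,
# and losing an agent triggers a re-plan over the survivors.

def assign(tasks, agents):
    """Distribute tasks round-robin across the available agents."""
    plan = {agent: [] for agent in agents}
    for i, task in enumerate(tasks):
        plan[agents[i % len(agents)]].append(task)
    return plan

tasks = [f"sector-{n}" for n in range(6)]
agents = ["drone-a", "drone-b", "drone-c"]
plan = assign(tasks, agents)
assert all(len(v) == 2 for v in plan.values())   # load is spread evenly

# One agent is lost; the swarm re-plans and still covers every sector.
agents.remove("drone-b")
plan = assign(tasks, agents)
assert {t for ts in plan.values() for t in ts} == set(tasks)
```

The sketch also hints at the control problem raised in the talk: once the plan is computed and launched, redirecting individual agents from outside the collective is exactly what current swarm designs make difficult.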

The last presentation of the panel was offered by Caroline Kennedy-Pipe of Loughborough University. "If we think about the future, we are going to be inhabiting a disorderly world", said Kennedy-Pipe. Global warming and climate change will change security irrevocably, and the future looks like one of state fragmentation and sub-state war, with technology playing a big role in war. The importance of technological innovation has grown since the Industrial Revolution, as capitalist states and capitalism itself rely on technological innovation to keep going. It is therefore no coincidence that states themselves began sponsoring technological innovation, as militaries became interested in how technology could enable the state to fight more effectively. A major shift of recent years, though, is that societies are now more distant from war, and that distance is reflected both in the search for technological developments that provide even greater distance by enabling war to be waged remotely, and in the narrative of a more humane and targeted elimination of enemies compared to the available alternatives.

The fourth panel of the conference focused on Artificial Intelligence & Automation and was chaired by CWS's Amelie Theussen. The panel opened with Kenneth Payne of King's College London and the strategic implications of AI weapons. AI is inherently a technology about decision-making in war and is inescapably linked with strategy. As we are in the very early days of fully autonomous weapon systems, much is uncertain about their function, an uncertainty that raises a number of ethical and legal questions, such as questions of proportionality, distinction between civilians and combatants, and complexity, but also of the more humane conduct of warfare. The biggest concerns regarding autonomous weapons, said Payne, are the chances of conflict initiation and escalation, which are inherently strategic questions. Payne offered his strategic points of concern: the pressure for more automation, the difficulty of regulating such weapons, the possible disruption of the international "balance of power" due to vast asymmetry between countries with and without such capabilities, the redefinition of the character and functions of armies, and the shift in the offense-defense balance.

Next, Frederik Rosén of SDU raised questions of legality, in particular the rule of precaution in International Humanitarian Law. Mentioned in the 1899 and 1907 Hague Conventions and codified in Additional Protocol I of 1977, the rule of precaution is a cornerstone of International Humanitarian Law. It stipulates that combatants must do everything feasible to attack neither civilians nor civilian objects, and it obliges combatants to take precautions to minimize any incidental loss of civilian lives, injury to civilians, and damage to civilian objects. Feasibility is contextual, as it rests on the circumstances of each situation, including the technology available to the combatants. New technologies are reshaping how feasibility is perceived, and combatants are expected to take the cost of technology into consideration, thus having to weigh the value of machinery against the value of human lives. The disruptive effect of new technology and autonomy, Rosén concluded, will be information overload, which will make considerations of precaution more and more difficult.

William Merrin of Swansea University drew attention to robotic warfare and the replacement of humans in warfare with robots, while noting that this is only one instantiation of AI and its potential uses; the bigger revolution is in AI technology itself. Battlefield weapon systems are only one part of what can be transformed by the rise of big data analytics, the automation of certain forms of information and cyber warfare, and ultimately command and control. We need to look more broadly at other developments in big data, Merrin said, especially because of the long-term dream of perfect informational battlefield awareness. The dream of the automated battlefield started with the Vietnam War and turned into the dream of perfect battlefield information control, demonstrated in its early stages in the Gulf War through rapid developments in the internet, information technology, and the theoretical reconceptualization of war. According to Merrin, the depth and complexity of the battlefield will ultimately take humans out of command and control, turning it into an entirely automated process in order to overcome issues of speed, response time, and information overload.

The panel continued with Andreas Immanuel Graae of CWS, with his presentation titled "Swarming Sensations, Autonomous Bees, and the Politics of the Swarm". Drawing on the "Hated in the Nation" episode of the Black Mirror series, the presentation highlighted the ethics of new technologies, issues of anonymity and accountability, the impact of social media, the threat of cyber terrorism and the vulnerability of critical infrastructure to hijacking by malicious actors, issues of state surveillance and privacy, and the anticipation of how technological developments like autonomous drones can be used or misused for coercive purposes. Graae directed the focus to the entangled history of insects and machines in the case of swarming. Swarming is a hybridization of machines and insects, which raises a crucial question of human versus non-human agency. Is swarming merely an extension of humans, or is it disrupting the human sensorium through un-human, insect-like configurations? A question not easily answered, according to Graae, who emphasized that the future politics of the swarm will be based on radically unhuman and dispersed systems of mass distribution and big data.

The panel closed with Matthijs Michiel Maas of the University of Copenhagen, who talked about "normal accidents" in military AI systems. The presentation focused on the accidents that can happen at the intersection of AI technology and military organizations. Such accidents are endemic, can occur across different technologies, and can emerge from the complexity and opaqueness of AI systems as well as from human or technological failures. In some cases, military pressure drives engineers and designers to forgo safety concerns for the sake of rapid deployment, while in others accidents occur simply because of adversarial input such as hacking and spoofing. Additionally, an operational advantage of AI systems, their speed and response time, hinders proper and timely control, thus risking unintentional escalation. What we need to do, according to Maas, is understand that normal accidents will occur and try to minimize their chances of occurring through proper regulation and a focus on hedging against the factors that heighten those chances.

The final panel, chaired by CWS's own Trine Flockhart, looked back at the changes addressed in the previous panels and their consequences for the modern state system, focusing on the role and nature of the state, sovereignty, and international institutions. David Galbreath from the University of Bath started the panel by arguing that technology has agency external to human influence; this is not necessarily a problem but a condition, and one that could potentially also be a solution. Galbreath took a broad perspective on what makes up the will in Clausewitz's notion of war as a clash of wills, drawing on Ruth Miller's understanding of the power of technology, not as specific objects but as agents. Miller's point is that accounts of war often take a very human-centric perspective, whereas the First World War, for example, could also be seen from other vantage points, such as a war of technologies. The impact on state sovereignty is that the state's hold on power is being challenged by other forms of agency. Power, agency, and responsibility conflict with the notion that we can control these networked forces of change, the so-called network fallacy.

Hereafter Sten Schaumburg-Müller from SDU gave his perspective on how the influence of social media challenges traditional understandings of what the state has influence over. For Schaumburg-Müller it was notable that Facebook has community standards and protection from governmental interference in removing content. The question of who or what decides what we watch and read on social media is a fundamental democratic one, and their enormous reach gives social media companies huge influence. For Schaumburg-Müller, these new media are signalling that they are separate from governments, in that their own rules determine what content ought or ought not to be on their platforms.

Karen Lund Petersen, from the University of Copenhagen, offered a different perspective, raising two major concerns. The first was the role of cyber-nationalism and internet sovereignty, and how states are trying to control the actions of private actors, thereby going beyond the classical liberal understanding of the division between the state and the private sector. The second was the rise of norm building in the private sector and how it confronts our understanding of the state's agency and power. From this view, private businesses have been 'responsibilized' in the name of national security and are expected to act as an extended arm of national policy. This is the case not just in cyber security but also in counter-terrorism. Petersen argued that the state and private tech companies are moving in different directions.

As the last presentation of the panel and of the conference, Stéphane Taillat, from the French military academy Saint-Cyr, emphasized how the French Army has integrated new digital technologies at the tactical level and, more generally, what this means for the conduct of warfare in the 21st century. Taillat's main argument was that the digitalization of the armed forces is shifting the balance between state and non-state actors in the conduct of war, and reshaping the relationship between the state and war, with technology as the main driver. Taillat ended his presentation by arguing that the future operational environment will be a permanently contested battlefield; the French Army is accordingly in the process of augmenting its fighting capabilities through further digitalization to keep pace with that environment.

The conference concluded with Amelie Theussen, who outlined the important themes and questions: What is the relationship between continuity and change? Is continuity a process of change? What is the link connecting technology and the human? Is technology a human tool, or does it have its own agency? Is human control being replaced by AI? Will we reach a stage where technology creates its own logic, which we might be incapable of grasping? Or is technology rather a reflection of our own logic, and inherently of our flaws? What is the potential of technology for disruptive change? Theussen stated in her final remarks that "the jury is still out on just how big and manageable the changes caused by technology and digitalization will be." But the conference was a chance to examine these important questions, which sparked further curiosity and reflection among the participants and the audience.

 

 

 

 

