Internet of Things - Atlantic Council
https://www.atlanticcouncil.org/issue/internet-of-things/

Accelerating digitalization and innovation in Latin America and the Caribbean
https://www.atlanticcouncil.org/in-depth-research-reports/report/accelerating-digitalization-and-innovation-in-latin-america-and-the-caribbean/
June 16, 2023

To sustain the ongoing recovery against short-term headwinds and boost inclusive, productive, and sustainable development in the long term, governments cannot, and should not, act alone. The private sector can improve infrastructure, foster skills, and promote adoption to help the region transform its digital potential into development gains.

This is the second installment of the Unlocking Economic Development in Latin America and the Caribbean report, which explores five vital opportunities for the private sector to drive socioeconomic progress in LAC, with sixteen corresponding recommendations private firms can consider as they take steps to support the region.

How does the private sector perceive Latin America and the Caribbean (LAC)? What opportunities do firms find most exciting? And what precisely can companies do to seize on these opportunities and support the region’s journey toward recovery and sustainable development? To answer these questions, the Atlantic Council collaborated with the Inter-American Development Bank (IDB) to glean insights from its robust network of private-sector partners. Through surveys and in-depth interviews, this report identified five vital opportunities for the private sector to drive socioeconomic progress in LAC, with sixteen corresponding recommendations private firms can consider as they take steps to support the region.

Accelerating digitalization and innovation

When asked about areas where they see themselves making an important social impact, 47 percent of surveyed services firms selected “digital transformation,” making it the second most impactful area only after “economic growth and job creation” (as shown below in Figure 7). Indeed, the private sector can unlock the three enablers (infrastructure, skills, and adoption), thus helping the region translate its digital friendliness into better digital outcomes. In particular, firms in the services industries (financial, telecommunications, and information technology) consider digital transformation a vital part of their responsibility and contribution to society.

Figure 7: Areas where surveyed firms see themselves making an important social impact
SOURCE: Atlantic Council survey 2022

Recommendations for the private sector

The private sector is well positioned to help LAC economies, governments, and citizens make the most of the region’s digital-innovation potential. As employers, service providers, consumers, partners, and investors, companies can leverage an ecosystem approach to enhance digital infrastructure, skills, and adoption within and across countries, delivering better digital outcomes conducive to economic inclusion and competitiveness.

  1. Improving digital infrastructure: Firms can help strengthen digital connectivity in LAC, both operationally (as information and communication technology (ICT) product and service providers) and financially (as investors).
  2. Fostering skills: Employers and employees should stay innovative and competitive in an increasingly digitized economy through upskilling, reskilling, and workforce-development programs.
  3. Promoting adoption: Multinational corporations (MNCs) can accelerate digital development by undertaking internal digital transformation and spurring adoption among suppliers and other businesses within their entrepreneurial ecosystems.

About the author

The Adrienne Arsht Latin America Center broadens understanding of regional transformations and delivers constructive, results-oriented solutions to inform how the public and private sectors can advance hemispheric prosperity.

The 5×5—The Internet of Things and national security
https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-internet-of-things-and-national-security/
September 28, 2022

Five experts from various backgrounds assess the national security challenges posed by IoT and discuss potential solutions.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

The connection of mundane household gadgets, industrial machinery, lifesaving healthcare technologies, vehicles, and more to the Internet has made modern society more convenient and efficient. IoT devices worldwide number over 13 billion, a figure estimated to balloon to over 29 billion by 2030. For all its benefits, the resultant web of connected devices, collectively known as the Internet of Things (IoT), has exposed everyday users, as well as entire economic sectors, to cybersecurity threats. For example, criminal groups have exploited IoT product insecurities to infect hundreds of thousands of devices around the world with malware and enlist them in distributed denial-of-service attacks.

Inadequate cybersecurity across the IoT ecosystem is inherently a US national security issue due to IoT’s ubiquity, integration across all areas of life, and potential to put an incredible number of individuals’ data and physical safety at risk. We brought together five experts from various backgrounds to assess the national security challenges posed by IoT and discuss potential solutions.

#1 What isn’t the Internet of Things (IoT)?

Irina Brass, associate professor in regulation, innovation, and public policy, Department of Science, Technology, Engineering, and Public Policy (STEaPP), University College London:

“IoT is not just our everyday physical devices embedded with sensing (data capture) or actuation capabilities, like a smart lightbulb or a thermostat. ‘Smart’ devices are just the endpoint of a much more complex ‘infrastructure of interconnected entities, people, systems and information resources together with services, which processes and reacts to information from the physical world and virtual world’ (ISO/IEC 20924: 2021). This consensus-based definition, agreed in an international standard, is particularly telling of the highly dynamic and pervasive nature of IoT ecosystems which capture, transfer, analyze data, and take actions on our behalf. While IoT ecosystems are functional, poor device security specifications and practices in these highly dynamic environments create infrastructures that are not always secure, transparent, or trustworthy.”

Katerina Megas, program manager, Cybersecurity for the Internet of Things (IoT) Program, National Institute of Standards and Technology (NIST):

“Likely very little, which would explain why the US National Cyber Director, Chris Inglis, at a NIST public workshop [on August 17, 2022] referred to the ‘Internet of Everything.’ IoT is the product of the worlds of information technology (IT) and operational technology (OT) converging. The IoT is a system of interconnected components including devices that sense, actuate, collect/analyze/process data, and are connected to the Internet either directly or through some intermediary system. While a shrinking number of systems still fall outside of this definition, what we used to think of as traditional OT systems based on PLC architectures with no connectivity to the Internet are, in fact, more and more connected to the Internet and meet the above definition of IoT systems.”

Bruce Schneier, fellow, Berkman-Klein Center for Internet and Society, Harvard University; adjunct lecturer in public policy, Harvard Kennedy School:

“Ha! A salami sandwich is not the Internet of Things. A sense of comradeship towards your friends is not the Internet of Things. I am not the Internet of Things. Neither are you. The Internet of Things is the connected totality of computers that are not generally interacted with using traditional keyboards and screens. They’re ‘things’ first and computers second: cars, refrigerators, drones, thermostats, pacemakers.”

Justin Sherman, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab):

“There is no single definition of IoT, and how to scope IoT is a key policy and technical question. Regardless, basically every definition of IoT rightfully excludes the core underpinnings of the global Internet itself—internet service provider (ISP) networks that bring online connectivity to people’s homes and offices, submarine cables that haul internet traffic between continents, and so on.”

Sarah Zatko, chief scientist, Cyber ITL:

“IoT is not modern or state of the art. The hardware on the outside may look sleek and shiny, but under the hood there is old software built with out-of-date compilers running on old chip architectures. MIPS, a reduced instruction set computer (RISC) architecture, was used in the largest portion of the IoT products that we have tested.”

#2 Why should national security policymakers care about the cybersecurity of IoT products?

Brass: “Many IoT devices currently on the market have known security vulnerabilities, such as default passwords and unclear software update policies. Users are typically unaware of these vulnerabilities, purchase IoT devices, set and forget them. These practices do not occur just at the consumer level, although there are many examples of how insecure and unsafe our ‘smart homes’ have become. They take place in critical sectors of strategic national importance such as our healthcare system. For instance, the Internet of Medical Things (IoMT) is known to be especially vulnerable to cyberattacks, data leaks, and ransomware because a lot of IoMT devices, such as IV pumps, have known security vulnerabilities but continue to be purchased and remain in constant use for a long time, with limited user awareness of their potential exposure to serious compromise.”

Megas: “I think the combination of the nature and ubiquity of IoT technology are the perfect storm. IoT has taken existing concerns and put them on steroids by increasing both the attack surface and also impacts, if you think of risk as the product of likelihood (IoT is everywhere) and impact (automated interactions with the physical world). In traditional IT systems, a compromised system could produce faulty data to the end user, however, typically there was always a human in the loop that would take (or prevent) action on the physical world based on this data. With the actuating capabilities we are seeing in most IoT and the associated level of automation (which will only increase as IoT systems incorporate AI), the impact of a compromised IoT system is likely going to be higher. As more computing devices are put on the Internet, they become available for botnets to be installed, which can result in significant national economic damage as in the case of Mirai. Lastly, because this technology is so ubiquitous, the vast amount of data collected—from proprietary information from a factory to video footage from a recreational drone to sound sensors collected from around a smart city—can both be accessed through a breach, shared, and used by other nations without anyone’s knowledge, even without a cybersecurity failure.”

Schneier: “Because the security of the IoT affects the security of the nation. It’s all one big network, and everything is connected.”

Sherman: “IoT products are used in a number of critical sectors, ranging from healthcare to energy, and hacks of those products could be financially costly and disrupt those sectors’ operations. There are even IoT devices that can produce physical effects, like small internet-linked machines hooked into manufacturing lines, and hackers could exploit vulnerabilities in those devices to cause real-world damage. In general, securing IoT products is also part of securing the overall internet ecosystem: IoT devices plug into many other internet systems and increasingly constitute a greater percentage of all internet devices used in the world.”

Zatko: “IoT is ubiquitous. Even when a ‘smart’ device is not necessary, at this point it is often difficult or impossible to find a ‘dumb’ one. Their presence often punches holes in network environment security, so they are common access points for attacks.”

#3 What kinds of threats are there to the cybersecurity of IoT devices that differ from information technology (IT) or other forms of operational technology (OT)?

Brass: “The kinds of vulnerabilities per se might not differ—ultimately, you still have devices running software that can be exploited by malicious actors. What differs is the scale and, in some cases, the severity of the outcome. IoT ecosystems are highly interconnected. Compromising a single device is often sufficient to gain the foothold necessary to exploit other devices in the system and even the entire system. The transnational dimension of IoT cybersecurity should also not be neglected. The 2016 Mirai attack showed how compromised IoT devices with poor security specifications (default passwords), located around the world, can be very easily exploited to target internet infrastructure in different jurisdictions.”

Megas: “I am not sure whether there are different threats for IoT, OT, and IT systems. They are converging more and more, so it is not meaningful to try to create artificial lines of distinction. This might be one of those instances where I say the dreaded phrase ‘it depends.’  It is possible that there are some loosely coupled IoT systems in which the components that are IoT devices do not sit behind more security capable components, but are more directly accessing the Internet (and therefore more directly accessible by threat actors). This could mean that vulnerabilities in these IoT systems are more easily exploitable and thus easier targets. Also, the nature of IoT systems that can interact with the physical world could affect the motivations of threat actors. The focus on many risks to traditional IT systems is around the data and its potential theft, but attacks on IoT can impact the real world. For instance, modifying the sensors at a water treatment plant can throw off readings and lead the system to incorrectly adjust how much fluoride is added to the water.”

Schneier: “The IoT is where security meets safety. Insecure spreadsheets can compromise your data. Insecure IoT devices can compromise your life.”

Sherman: “Typically, IoT devices use less energy, have less memory, and have much less computing power than traditional IT devices such as laptops, or even smartphones. This can make it more difficult to integrate traditional IT cybersecurity features and processes into IoT devices. To boot, manufacturers often produce IoT devices and products with terrible security—installing default, universal passwords and other bad features on the manufacturing line that end up undermining their cybersecurity once deployed. In part, this happens because smaller manufacturers are essentially pumping IoT devices off the manufacturing line.”

Zatko: “Users often forget to consider IoT devices when they think about their computing environment’s safety, but even if they did, IoT devices are not always able to be patched. Sometimes software bugs in IoT operating systems are hard-coded or otherwise inaccessible, as opposed to purely software products, where changes are much easier to effect. This makes getting the software as safe as possible from the get-go particularly important.”

#4 What is the greatest challenge to improving the security of the IoT ecosystem?

Brass: “These days, we very often focus on behavioral change—what can individual users or organizations do to improve their cyber hygiene and general cybersecurity practices? While this is an important step in securing the IoT, it is not sufficient because it places the burden on a large, non-homogenous, distributed set of users. Let us turn the problem around to its origin. Then, the greatest challenge becomes how to ensure that IoT devices and systems produced and sold all over the world have baseline security specifications, that manufacturers have responsible lifecycle care for their products, and that distributors and retailers do not compromise on device security in favor of lower priced items. This is not an easy challenge, but it is not impossible either.”

Megas: “There is a role for everyone in the IoT ecosystem. Setting aside the few organizations developing their own IoT systems for their own use, the majority of IoT technologies are purchased or acquired. One of the challenges that I see is educating everyone that there are two critical roles in supporting cybersecurity of the IoT ecosystem: those of the producers of the IoT products and those of the customers, both enterprise and consumers. While this dynamic is not new between producer and buyers, the relationships in IoT lack maturity. While producers need to build securable products that meet the needs and expectations of their customers, the customers are responsible for securing the product that operates in the customer environment. Identifying cybersecurity baselines for IoT products is a start in defining the cybersecurity capabilities producers should build into a product to meet the needs and expectations of their customers. However, one size does not fit all. A baseline is a good start for minimal cybersecurity, but we want to encourage tailoring baselines commensurate to the risk for those products whose use carries higher risk.

“Beyond the IoT product manufacturer’s role, there are network-based approaches that can contribute to better cybersecurity (such as using device intent signaling), that might be implemented by other ecosystem members. Vendors of IoT can ensure that their customers recognize the importance of cybersecurity. Enterprises should consider using risk management frameworks, such as the NIST Cybersecurity Framework, to manage their risks that arise out of the use of IoT technology. Formalizing and promoting recognition of the role in product organizations for a Chief Product Security Officer (CPSO) is also critical. Given that most C-suites and boards are starting to recognize the importance of the CISO towards securing their organizations’ operations, we need to also promote the visibility of the CPSO responsible for ensuring that the products that companies sell have the appropriate cybersecurity features that meet the companies’ strategic brand positioning and other factors.”

Schneier: “Economics. The buyers and sellers of the products don’t care, and no one wants to regulate the industry.”

Sherman: “As with many cybersecurity issues, the greatest challenge is getting companies that have been grossly underinvesting in security to do more, while also producing government regulations and guidance that are technically sound, roughly compatible with regulations and guidance in other countries, and that do not raise the barrier too much so as to cut out small players—though, if we want better security, some barrier-raising is necessary. It is a very boring answer, but there has been a lot of great work done already on IoT security by the National Institute of Standards and Technology, other governments, various industry groups, etc. The central challenge is better coordinating those efforts, fixing bad market incentives, and appropriately filling in the gaps.”

Zatko: “There are so many vendors, and many of them are not capable of producing secure products from scratch. It is currently too hard for even a well-meaning vendor to do the ‘right’ thing.”

#5 How can the United States and its allies promote security across the IoT ecosystem when a large portion of devices are manufactured outside their jurisdictions?

Brass: “Achieving an international baseline of responsible IoT security requires political and diplomatic will to adopt and align legislation that promotes the security of internet-connected devices and infrastructures. The good news is that we are seeing policy change in this direction in several jurisdictions, such as the IoT Cybersecurity Improvement Act in the United States, the Product Security and Telecommunications Infrastructure Bill in the United Kingdom, and several cybersecurity certification and labelling schemes such as CLS in Singapore. As IoT cybersecurity becomes a priority for several governments, the United States and its allies can be the driving force behind international cooperation and convergence towards an agreed set of responsible IoT security practices that underpin legislative initiatives around the world.”

Megas: “Continuing to share lessons learned with others. Educating customers, both consumers as well as enterprise customers, on the importance of seeking out products that support minimum cybersecurity.”

Schneier: “Regulation. It is the same way we handle security and safety with any other product. You are not allowed to sell poisoned baby food or pajamas that catch on fire, even if those products are manufactured outside of the United States.”

Sherman: “US allies and partners are already doing important work on IoT cybersecurity—from security efforts led by the UK government to an emerging IoT labeling scheme in Singapore. The United States can work and collaborate with these other countries to help drive security progress on devices made and sold all around the world. Others have argued that the United States should exert regulatory leverage over whichever US-based companies it can to push progress internationally, too, such as with Nathaniel Kim, Trey Herr, and Bruce Schneier’s “reversing the cascade” idea.”

Zatko: “By open sourcing security-forward tools and secure operating systems for common architectures like MIPS and ARM, the United States could make it easier for vendors to make secure products. Vendors do not intentionally make bad, insecure products—they do it because making secure products is currently too difficult and thus too expensive. However, they often use open-source operating systems, tool kits, and libraries for the base of their products, and securing those resources will do a great deal to improve the whole security stance.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Security in the billions: Toward a multinational strategy to better secure the IoT ecosystem
https://www.atlanticcouncil.org/in-depth-research-reports/report/security-in-the-billions/
September 26, 2022

The explosion of Internet of Things (IoT) devices and services worldwide has amplified a range of cybersecurity risks to individuals’ data, company networks, critical infrastructure, and the internet ecosystem writ large. In light of this systemic risk, this report offers a multinational strategy to enhance the security of the IoT ecosystem. It provides a framework for a clearer understanding of the IoT security landscape and its needs, looks to reduce fragmentation between policy approaches, and seeks to better situate technical and process guidance into cybersecurity policy.

Executive summary

The proliferation of Internet of Things (IoT) devices and services worldwide has contributed to an explosion in data processing and interconnectivity. Simultaneously, this interconnection and resulting interdependence have amplified a range of cybersecurity risks to individuals’ data, company networks, critical infrastructure, and the internet ecosystem writ large. Governments, companies, and civil society have proposed and implemented a range of IoT cybersecurity initiatives to meet this challenge, ranging from introducing voluntary standards and best practices to mandating the use of cybersecurity certifications and labels. However, issues like fragmentation among and between approaches, complex certification schemes, and placing the burden on buyers have left much to be desired in bolstering IoT cybersecurity. Ugly knock-on effects to states, the private sector, and users bring risks to individual privacy, physical safety, other parts of the internet ecosystem, and broader economic and national security.

In light of this systemic risk, this report offers a multinational strategy to enhance the security of the IoT ecosystem. It provides a framework for a clearer understanding of the IoT security landscape and its needs—one that focuses on the entire IoT product lifecycle, looks to reduce fragmentation between policy approaches, and seeks to better situate technical and process guidance into cybersecurity policy. Principally, it analyzes and uses as case studies the United States, United Kingdom (UK), Australia, and Singapore, due to combinations of their IoT security maturity, overall cybersecurity capacity, and general influence on the global IoT and internet security conversation. It additionally examines three industry verticals (smart homes, networking and telecommunications, and consumer healthcare), which cover different products and serve as a useful proxy for understanding the broader IoT market because of their market size, their consumer reach, and their varying levels of security maturity.

This report looks to existing security initiatives as much as possible—both to leverage existing work and to avoid counterproductively suggesting an entirely new approach to IoT security—while recommending changes and introducing more cohesion and coordination to regulatory approaches to IoT cybersecurity. It walks through the current state of risk in the ecosystem, analyzes challenges with the current policy model, and describes a synthesized IoT security framework. The report then lays out nine recommendations for government and industry actors to enhance IoT security, broken into three recommendation sets: setting a baseline of minimally acceptable security (or “Tier 1”), incentivizing above the baseline (or “Tier 2” and above), and pursuing international alignment on standards and implementation across the entire IoT product lifecycle (from design to sunsetting). It also includes implementation guidance for the United States, Australia, UK, and Singapore, providing a clearer roadmap for countries to operationalize the recommendations in their specific jurisdictions—and push towards a stronger, more cohesive multinational approach to securing the IoT worldwide.

Implementation plans by country

Introduction

The billions of Internet of Things (IoT) products used worldwide have contributed to an explosion in data processing and the connection of individuals, buildings, vehicles, and physical machines to the global internet. Work-from-home policies and the need for contact tracing during the COVID-19 pandemic have furthered societal dependence on IoT products. All this interconnection and interdependence have amplified a range of cybersecurity risks to individuals’ data, company networks, critical infrastructure, and the internet ecosystem writ large.

Securing IoT products is inherently critical because IoT products increasingly touch all facets of modern life. Citizens have IoT wearables on their bodies and IoT products in their cars, gathering data on their heartbeats, footsteps, and Global Positioning System (GPS) locations. People also have IoT smart products in their homes—speakers awake to every private conversation, internet-connected door locks, devices that control atmospheric systems, and cameras to monitor young children and pets. Hospitals even use IoT products to control medicine dosages to patients. The ever-growing reliance on IoT products increasingly and inescapably ties users to network and telecommunications systems, including the cloud. IoT insecurity, given this degree of interconnection, poses risks to individual privacy, individual safety, and national security.

The IoT explosion is also poised to impact the security of the internet ecosystem writ large. More IoT products deploy each year, meaning IoT products constitute a significant percentage of devices linked to the global internet. For example, IoT Analytics, a market research firm, estimates that IoT products surpassed traditional internet-connected devices in 2019 and projects that the ratio will be around three to one by 2025.1 At that scale, poorly secured products (for instance, those with easy-to-guess passwords or with known and unfixed security flaws) can enable attackers to gain footholds in corporate or otherwise sensitive environments and steal data or cause disruption. For instance, hackers could exploit security problems in IoT cameras to break into a building—digitally or physically.2 Hackers can break into IoT devices at scale to launch distributed denial of service (DDoS) attacks that bring down internet services for hundreds of thousands or even millions of consumers.

In response to these cybersecurity risks, governments, private companies, industry organizations, and civil society groups have developed a myriad of national and industry frameworks to improve IoT security, each addressing considerations in the product design, development, sale and setup, maintenance, and sunsetting phases. These numerous control sets and frameworks, however, are a hodgepodge across and within jurisdictions. Within jurisdictions, some governments are charging ahead with detailed IoT security guidance while others have made little substantive headway or have ambiguous policy goals that confuse and impede industry progress. Between jurisdictions, fragmented requirements have chilled efforts by even some of the most security-concerned vendors to act. Consumers, meanwhile, must grapple with IoT product insecurity, bad security outcomes, and ugly knock-on effects to others in their communities and networks—exacerbated by a lack of security information from vendors. Poor outcomes for users, a lack of cross-national harmonization, and gaps between government and industry efforts impede better security in the IoT ecosystem.

Yet, progress is possible. The number of countries and industry actors that have acknowledged one standard alone—European Norm (EN) 303 645, from the European Telecommunication Standards Institute (ETSI)—as a consensus approach demonstrates how some baseline security guidance can help drive real, coordinated change.

This report presents a consolidated approach to IoT cybersecurity to reconcile existing national approaches, balance the interest of public and private sectors, and ensure that a product recognized as secure in one jurisdiction will be recognized as secure in others. The framework is not prescriptive to the level of individual controls; rather, it seeks to address the structural priorities of approaches taken by industry coalitions and governments in the United States, United Kingdom (UK), Singapore, and Australia. We focus on these countries because of the maturity of their IoT cybersecurity approaches, their mature cyber policy processes, their historical influence on cybersecurity policy in other countries, and the strong precedent for cooperation across all four.

In considering the effects of this consolidated approach, the report also focuses on three verticals: smart homes, networking and telecommunications, and consumer healthcare. These three provide ready critical IoT product use cases, differentiate in the kinds of technology and products available, and serve as useful proxies for understanding the broader IoT market because of their market size, consumer reach, and varying levels of security maturity.

This report draws on research of IoT security best practices, standards, laws, and regulations; conversations with industry stakeholders and policymakers; and convenings with members of the IoT security community. In principle and wherever possible in practice, the report relies on existing approaches, seeking to create as little new information or guidance as practicable to ease implementation. The first section below describes the state of risk in the IoT ecosystem, including challenges with the current model, insecurity across three IoT industry segments, and a brief history of IoT security efforts and control sets across the United States, UK, Australia, and Singapore as well as industry-led efforts. The second section synthesizes these disparate control sets, mapped against every phase of the IoT product lifecycle. The third (and final) section presents a consolidated approach to IoT security across these four countries and the relevant industry partners—with nine recommendations to address gaps in existing IoT security approaches, disincentivize further fragmentation in standard setting or enforcement, and rationalize the balance between public and private sector security interests. These recommendations come with implementation guidance specific to each of the four countries.

While this report describes some key components of an IoT labeling approach, it deliberately does not prescribe a particular label design. The report leaves open many questions that require more work, including “who” sets label design, “how” companies should pair physical and digital labels, and to “what” extent companies and/or governments should harmonize labels across jurisdictions.

There is an overriding public interest in secure IoT products, and industry players—including source manufacturers, integrators/vendors, and retailers—must be responsive to this interest. The highly disharmonized state of IoT security regulations, however, pulls against that public interest. Moreover, a further doubling down on the current national approaches threatens to worsen the problem. What little compromise in national autonomy this or another consolidated approach might require must be weighed against a more coherent and enforceable scheme where such a scheme produces meaningful security gains for users. To comprehend this need, one should begin by understanding the state of affairs.

The current state of IoT risk

The current IoT ecosystem is rife with insecurity. Companies routinely design and develop IoT products with poor cybersecurity practices, including weak default passwords,3 weak encryption,4 limited security update mechanisms,5 and minimal data security processes on devices themselves. Governments, consumers, and other companies then purchase these products and deploy them, often without adequately evaluating or understanding the cybersecurity risk they are assuming. For example, while the US government has worked to develop IoT security considerations for products purchased for federal use, private companies routinely buy and deploy insecure IoT products because there is no mandatory IoT security baseline in the United States.6

Compromising IoT products is often remarkably easy. IoT products have less computing power, smaller batteries, and smaller amounts of memory than traditional information technology devices like laptops or even smartphones. This makes traditional security software (and its computing and power demands) often impractical in—or less immediately transferrable to—IoT systems. Many IoT botnets (networks of devices infected by malware), such as Mirai and Bashlite, capitalize on this insecurity by seeking to weaponize known vulnerabilities or brute-force access to an IoT product using predefined lists of common passwords. Such passwords may include “123456” or even just “password”.7
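To make concrete why such defaults matter, the short sketch below shows how a provisioning or audit step could reject credentials that appear on a Mirai-style list of common defaults. It is a hypothetical Python illustration, not drawn from any vendor's code; the word list and function name are invented for this example.

```python
# A minimal, illustrative sketch (not taken from the report): a provisioning or
# audit step that rejects credentials appearing on a Mirai-style weak-default list.
# The word list and function name are hypothetical examples, not a real API.

WEAK_DEFAULTS = {"password", "123456", "admin", "root", "12345", "default"}

def credential_is_acceptable(username: str, password: str) -> bool:
    """Return False if the credential pair matches common factory defaults."""
    if password.lower() in WEAK_DEFAULTS:
        return False
    if username.lower() == password.lower():   # e.g., admin/admin
        return False
    return len(password) >= 12                 # arbitrary illustrative length floor

if __name__ == "__main__":
    print(credential_is_acceptable("admin", "password"))      # False: on the weak list
    print(credential_is_acceptable("admin", "Xk9p2Lq7rTwz"))  # True
```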

While these errors seem trivial, they quickly lead to material harm. In late 2016, for example, Mirai infected almost 65,000 IoT devices around the world in its first 20 hours, peaking at 600,000 compromised devices.8 The operators of the Mirai botnet subsequently launched a series of DDoS attacks, including against Dyn, a US-based Domain Name System (DNS) provider and registrar.9 By taking advantage of security problems in IoT devices, the individuals behind the botnet rendered major websites like PayPal, Twitter, Reddit, GitHub, Amazon, Netflix, and Spotify entirely unavailable to parts of the United States.10

Criminals infect IoT products with malware that may use the compromised device to execute DDoS attacks, mine for cryptocurrencies on behalf of the attacker, or hold the device hostage pending a ransom paid to the attackers. In 2018, cybercriminals compromised over 200,000 routers in a cryptojacking campaign. They used the computing power of the compromised routers to mine cryptocurrency.11 States also turn to compromising IoT products to create covert infrastructure. A May 2022 report by security firm Nisos revealed that the Russian Federal Security Service (FSB) employed a botnet made up of compromised IoT products to fuel social media manipulation operations.12

On top of using IoT devices for larger malware operations, hackers can break into IoT products to spy on people’s everyday lives. They could see adjustments made to a smart thermostat, questions asked to a smart speaker, and workouts logged on fitness wearables. This kind of spying can be a threat to individuals’ privacy and physical safety. In the context of intimate partner violence, abusive individuals may control access to or illicitly access IoT products to spy on and exert control over people, raising serious stalking and physical safety risks.13 There are also threats that come from strangers. Trend Micro, in a 2019 report, noted that hackers with access to compromised internet-connected cameras sold subscriptions that allowed others to view the illicitly accessed video streams online. The price of the stream depended on what the camera was looking at, with bedrooms, massage parlors, warehouses, and payments desks at retail shops among the priciest and most sought-after.14 These products can also be launch points from which attackers conduct further malicious activities. Brazilian fraudsters, for instance, are known to use access to compromised routers to change the compromised devices’ DNS settings to redirect victims to phishing pages for major websites, such as banks and retailers.15

IoT products, industry segments, and their insecurity

The IoT, on its face, may appear to be a simple concept, but scoping it and understanding the number of systems the IoT touches is more complex. For example, some devices like routers could be “part of” or “separate from” the IoT. There are also questions about whether, and how, the IoT includes the networks, devices, and products touching it—IoT sensors, for example, link to outside cloud services to process data, connect to a company’s network to enable administrative oversight and control, and connect to the public internet to communicate with application programming interfaces (APIs). For government and industry policies to be effective, scopes must clearly define the products and services they do and do not include.

For instance, EN 303 645 guidance—ETSI’s key standard document for IoT security—defines a “consumer IoT device” as a “network-connected (and network-connectable) device that has relationships to associated services and are used by the consumer typically in the home or as electronic wearables.”16 The US National Institute of Standards and Technology (NIST), meanwhile, defines the IoT in NIST SP 1800-16C as “user or industrial devices that are connected to the internet” including “sensors, controllers, and household appliances.”17 This report focuses primarily on the IoT products themselves, and in part the services directly dependent on IoT products or on which IoT products directly depend (e.g., a cloud software program for managing an IoT device network). 

The IoT constitutes a massive technology ecosystem with clusters of IoT product design and deployment models, each of which present differentiated cybersecurity risks. Several key examples of industry IoT product segments and some of their security challenges are detailed here, based on their wide deployment, impact on consumers, and touchpoints into other parts of the digital world, whether home Wi-Fi networks or hospital medical systems. 

  • Smart Homes: Numerous companies sell IoT products to serve as thermostats, doorbell cameras, window locks, speakers, and other components of so-called smart homes. Apple offers HomeKit integration, a software framework for configuring, communicating with, and controlling smart home appliances.18 Resideo offers a number of smart home-style products, for both consumer environments—such as thermostats, humidifiers, security systems, and programmable light switch timers—as well as professional environments—such as UV treatment systems and fire and burglary alarms.19 Philips sells smart lighting products, and Wink sells smart doorbells.20 On the software side, companies like Tuya offer IoT management services to automatically control robotic vacuums, smart cameras, smart locks, and other IoT products in the home.21 Google and Amazon both manufacture and sell smart home IoT products, from home security products to smart speakers.22 The cybersecurity risks here include spying on individuals in their homes, using IoT products in the home and workplace to break into other systems (e.g., someone’s work laptop on their home Wi-Fi), and harnessing numerous compromised smart products to create a botnet and launch DDoS attacks.23
  • Networking and Telecommunications Gear: Traditional internet and telecommunications companies, which supply the devices and some of the infrastructure that fundamentally underpins the internet, are moving more into IoT services and devices. Cisco offers Industrial Wireless solutions that include wireless backhaul, private cellular connectivity, and embedded networking for industrial IoT products.24 Extreme Networks offers a Defender Adapter service to provide in-line security for vulnerable wired devices.25 Arista offers a Cognitive Campus service that includes IoT edge connectivity, real-time telemetry, and Spline platforms for connection reliability.26 The cybersecurity risks here include spying on traffic going across networks, using networking and telecommunications entry points to break into other systems, and degrading or disrupting the flow of network data altogether. 
  • Consumer Health Products: Companies are offering IoT products and services to support the provision of healthcare and medicine. Philips sells fetal and maternal monitors, MR compatible monitors, patient-worn monitors, and other IoT products to monitor vitals.27 Medtronic sells glucose monitoring and heart monitoring products.28 Honeywell Life Sciences offers embedded products and safety solutions for hospitals.29 Dexcom offers a glucose monitoring smart wearable, and ResMed offers a phone-connected product for sleep apnea.30 The cybersecurity risks here include stealing highly sensitive medical data and manipulating device data or disrupting product operations in ways that physically threaten human life. 

Numerous companies, from telecommunications gear manufacturers to medical equipment suppliers, have a stake in security debates about IoT products. Many industries do as well, from home security to industrial manufacturing, and many of their products and services overlap and integrate. Yet, similarities between sector products and their cybersecurity risks do not change the fact that widespread IoT insecurity merits meaningful improvement.

Policy challenges to addressing IoT risk

The UK, Singapore, United States, and Australia provide a set of case studies for government approaches to IoT security—due to the maturity of their IoT cybersecurity approaches, the maturity of their overall cyber policy processes, their historical influence on cybersecurity policy in other countries, and the strong precedent for cooperation across all four. There is also fragmentation within the countries’ frameworks, where different parts of a country or different government agencies pursue different IoT security policies and processes. The US, for instance, has the Federal Communications Commission (FCC) focused on communications standards for IoT products and the Federal Trade Commission (FTC) focused on the marketing practices of IoT vendors, but has no agency in charge of enforcing IoT security requirements in design. 

At least three key themes stand out across these countries. First, state approaches to IoT security have generally moved from voluntary best practices towards direct intervention. Second, state approaches have predominantly manifested in consumer labeling programs and minimum baseline security legislation. And third, states have made the need for international, agreed-upon standards a key design principle of their IoT security efforts, though as yet without sufficient uptake or success.31

UK: Mandatory minimum security standards 

The UK was an early innovator in holistic responses to IoT insecurity. Its Department for Digital, Culture, Media & Sport (DCMS)—which works on digital economy and some broadband and Internet issues—published a Secure by Design report in March 2018, setting out how it aims to “work with industry to address the challenges of insecure consumer IoT.”32 As a result of its report, in October 2018, DCMS, along with the UK National Cyber Security Centre (NCSC) and industry partners, published the “Code of Practice for Consumer Internet of Things (IoT) Security,” consisting of “thirteen outcomes-focused guidelines that are considered good practice in IoT security.”33 It aims, as one NCSC official described it, to identify impactful, updatable measures to which a broad coalition could agree34—captured in the principles below.

Figure 1: Thirteen Principles of Consumer IoT Security

SOURCE: UK Department for Digital, Culture, Media & Sport.

The UK was not alone in this endeavor, working in tandem as a member of ETSI to launch ETSI Technical Specification 303 645, the first “globally-applicable industry standard on internet-connected consumer devices.”35 In June 2020, this Technical Specification became formalized as a European standard (EN 303 645), and now serves as a common underlying source for many countries’ initiatives. 

Despite the initial promise of the Code of Practice, the DCMS found low industry uptake for the guidance and decided to pursue a legislative route. After multiple consultation rounds, the resulting Product Security and Telecommunications Infrastructure (PSTI) Bill was introduced in November 2021, empowering the Secretary of State for DCMS “to specify by regulations security requirements.”36 The new law would require “manufacturers, importers, and distributors to ensure that minimum security requirements are met in relation to consumer connectable products that are available to consumers.”37 Noncompliant firms could face fines up to £10 million or 4 percent of worldwide revenue, and a new regulator—to be delegated following the law’s enactment—would also have the ability to enforce recalls or outright product bans.38 The bill is currently in the Report stage with the House of Lords and would require compliance within twelve months of enactment. 

By empowering the DCMS minister to specify security requirements instead of codifying them, the PSTI Bill allows the mandatory baseline requirements to respond to changing circumstances. The current principles outlined by DCMS focus on the “top three” elements of the UK Code of Practice/ETSI EN 303 645: banning default passwords, requiring a vulnerability disclosure process for products, and transparency for consumers on the duration that products will receive security updates. The UK’s NCSC views these three measures as having outsize importance, and “will make the most fundamental difference to the vulnerability of consumer connectable products in the UK, are proportionate given the threats, and universally applicable to devices within scope.”39 Cognizant that good security must require organizational action, not just device-level changes at the point of design and manufacture, a DCMS official has highlighted the additional appeal of the framework in allowing requirements placed on economic actors, not just devices. Indeed, two of the three requirements involve organizational changes or activity. The UK’s framework allows for the introduction of secondary legislation to build on this baseline over time. 
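To make the “top three” concrete, the sketch below models them as a simple product self-check a manufacturer or auditor might run before sale. The field and function names are invented for illustration and do not correspond to any official PSTI schema or tooling.

```python
# Illustrative only: a rough self-check against the three baseline requirements the
# UK approach prioritizes (no universal default passwords, a vulnerability disclosure
# process, and a declared security-update period). Field and function names are
# hypothetical, not an official schema.

from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class ProductDeclaration:
    uses_universal_default_password: bool
    vulnerability_disclosure_url: Optional[str]
    security_update_end_date: Optional[date]  # the support period communicated to consumers

def psti_baseline_gaps(decl: ProductDeclaration) -> List[str]:
    """Return the unmet baseline requirements; an empty list means all three are met."""
    gaps = []
    if decl.uses_universal_default_password:
        gaps.append("ban on universal default passwords")
    if not decl.vulnerability_disclosure_url:
        gaps.append("published vulnerability disclosure process")
    if decl.security_update_end_date is None:
        gaps.append("declared security update support period")
    return gaps

print(psti_baseline_gaps(ProductDeclaration(True, None, None)))
```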

Singapore: IoT product labeling 

In October 2020, Singapore’s Cyber Security Agency (CSA) launched the Cybersecurity Labelling Scheme (CLS), a labeling program for internet-connected devices that describes the level of security included in their design. The CLS aims to help consumers “easily assess the level of security offered and make informed choices in purchasing a device.”40 It also aims to let product manufacturers signal the cybersecurity features of their products—as a senior CSA official put it, “to create the demand” and then “to provide a natural incentive to provide more secure and trusted devices.”

The CLS has four levels of additive and progressively demanding security provision tiers (Figure 2). In the first two levels, developers self-certify, and the CSA can audit compliance. In the third and fourth levels, independent laboratories certified by the nongovernmental International Organization for Standardization (ISO) validate products. At the bottom end, products must have security updates and no universal default passwords, while manufacturers must adhere to secure-by-design principles, such as processes and policies for protecting personal data, securely storing security parameters, and conducting threat risk assessments. At the higher end, authorized labs conduct penetration tests against the product and its communications. Labels are valid as long as developers support the product with security updates, for up to a three-year period.

Figure 2: Singapore’s CLS Four Security Provisions Tiers

SOURCE: Cybersecurity Agency of Singapore.

While the program’s terminology slightly differs, the CLS embraces the same principles as ETSI EN 303 645, doing so in a manner that “groups the clauses and spreads them out across four ranked levels.”41 And while the program’s higher-tier labels incentivize the adoption of stronger security measures, the Singapore Standards Council concedes that the first-tier labeling requirements “will suffice in staving off [sic] large percentage of attacks encountered on the internet today.”42 Finally, Singapore’s CLS shows how a voluntary labeling scheme can work to gradually dial up requirements for products as the market matures. For example, while the CLS is voluntary for most products, new internet routers sold in Singapore must meet the security requirements for the Level 1 label. This “voluntary-mandatory” split can keep evolving over time, both for different product categories as well as specific security measures. 

Interviewees at CSA said vendors have reacted positively to the labeling program (e.g., citing the onboarding of major vendors like Google and Asus). As of July 2022, there were 174 certified products, a total that has more than tripled since the start of 2022, and includes diverse items such as smart lights, video doorbells, locks, appliances, routers, and home hubs.43 Despite these positive signs, it is too soon to tell if the CLS program will be a success, and Singapore must continue to monitor the label’s appeal for consumers and firms as well as its broader security impact. 

US: State initiatives & government procurement 

In the United States, initial action on consumer IoT insecurity began at the state level. The nation’s first IoT security law went into effect in January 2020 with California’s requirement that manufacturers of smart products sold in the state “equip the device with a reasonable security feature or features.” The law explicitly takes aim at universal default passwords, stating that a reasonable security feature could mean “the preprogrammed password is unique to each device manufactured,” or “the device contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.”44 California’s law—enforced by state attorneys—does not include a private right of action, nor does it put any duties on retailers to ensure that products they sell meet the law’s requirements.
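The sketch below illustrates, in hypothetical Python, what the “unique preprogrammed password” option could look like at the factory provisioning stage. It is an assumption-laden example rather than a description of how any manufacturer actually complies.

```python
# A minimal sketch (not from the report) of one way a manufacturer might satisfy the
# "unique preprogrammed password" option: generate a random per-device credential at
# provisioning time. Function and field names are hypothetical; a real pipeline would
# store only a salted hash in firmware and print the plaintext on the device label.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_device(serial_number: str) -> dict:
    """Create a unique factory credential record for a single device."""
    factory_password = "".join(secrets.choice(ALPHABET) for _ in range(16))
    return {
        "serial": serial_number,
        "factory_password": factory_password,  # unique per unit, never reused across devices
        "force_change_on_first_login": True,    # the law's alternative path: new credential at setup
    }

print(provision_device("SN-000123"))
```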

Oregon joined California with its House Bill (HB) 2395, which has much of the same text (e.g., the same definition of “reasonable security feature,” the same enforcement mechanisms) but limits its scope to only consumer IoT products (“used primarily for personal, family or household purposes”).45 While the two laws may compel companies to adopt better security in all states, it appears that no cases have been brought forward under either law, even though insecure products are doubtlessly still sold in these states.

The United States passed the IoT Cybersecurity Improvement Act into law in December 2020.46 It requires NIST to develop cybersecurity standards and guidelines for federally owned IoT products, consistent with NIST’s understanding of “examples of possible security vulnerabilities” and management of those vulnerabilities.47 48 Thus, the law seeks to strengthen the security of IoT products procured by the government and intends to influence the private sector’s IoT cybersecurity practices through the federal government’s procurement power.49 The 2020 act also shifts the burden of compliance from product vendors to federal agencies,50 prohibiting them “[from] procuring or obtain[ing] IoT devices” that an agency’s chief information officer deems out of compliance with NIST’s standards.51 Finally, the act requires NIST to review and revise its standards at least every five years to ensure that recommendations are current, allowing for technical flexibility.52 NIST is empowered to suggest whatever finding it wants, with only vague guidance to consider “secure development” and other high-level cybersecurity items. Figure 3 offers an overview of the act’s recommendations. 

Figure 3: Overview of the IoT Cybersecurity Improvement Act of 2020

SOURCE: Liv Rowley for the Atlantic Council.

On May 12, 2021, the Biden administration issued Executive Order (EO) 14028, “Improving the Nation’s Cybersecurity.” The executive order directed NIST, in consultation with the FTC, to develop cybersecurity criteria for an IoT product labeling program aimed at educating consumers about IoT products’ security capabilities.53 It also tasked NIST with examining how to incentivize IoT manufacturers to get on board with such a program. On February 4, 2022, NIST released its recommended criteria for a consumer IoT labeling scheme.54 However, NIST has been clear that its aim is to describe the ideal components of a labeling scheme, rather than implement this scheme itself.55 While EO 14028 may feel a little toothless at the moment, it effectively outlines specific federal cybersecurity goals. Moreover, it demonstrates a will to move beyond federal procurement power as the sole method for influencing the private sector. 

Australia: Starting with voluntary best practices 

In August 2020, the Australian Department of Home Affairs (DHA) released a voluntary “Code of Practice: Securing the Internet of Things for Consumers” as part of its 2020 cybersecurity strategy. This code of practice highlighted the thirteen principles outlined in ETSI EN 303 645. 

Australia’s voluntary code of practice did not prove to be a panacea. In March 2021, the Australian government published six months of research on the results of its Code of Practice, saying firms “found it difficult to implement voluntary, principles-based guidance,” and many had still not implemented basic security guidelines like a vulnerability disclosure reporting process.56 As such, the Australian government appears intent on conducting more direct regulation of its consumer IoT market. In a request for comments that concluded in fall 2021, the DHA solicited public opinion on both a proposed consumer labeling program and a minimum security standards regime.57

For the minimum security standards approach, the government proposes to base its requirements on ETSI EN 303 645 and is considering either mandating all 13 guidelines or choosing to focus on just the top three (no default passwords, the existence of vulnerability disclosure programs, and the provision of security updates). The potential regulator within the Australian government is yet to be determined, but it would be empowered to issue fines and other penalties for those who fail to comply. 

The potential labeling approaches consider two scenarios. A voluntary “star rating label,” similar to Singapore’s CLS program, would be based on an existing international standard, such as ETSI EN 303 645, and would involve some component of self-certification and testing within the framework of Australian consumer law’s protection against fraudulent claims. Alternatively, a mandatory “expiry date label” would indicate the period over which the product will receive critical security updates. This second option received a higher recommendation from the government. Minimum security standards could complement either of these approaches. 

Industry: Certification models and security standards 

Companies have also advanced numerous security approaches. Common industry approaches to IoT security include secure endpoints and stringent encryption requirements for third-party applications, hardware-based security, and the formalization of vulnerability and software communications protocols. The industry verticals for smart homes, networking and telecommunications, and consumer healthcare (recognizing there is overlap and integration between these verticals) see varying implementations of these measures. 

For example, the ioXt Alliance, which is composed of dozens of product manufacturers and vendors as well as major software companies, offers self-certified and third-party-validated certification for IoT products. Its five compliance tests cover everything from Android to smart speaker device profiles, measured against eight principles: no universal default passwords, secured interfaces, proven cryptography, security by default, verified software, automatic security updates, vulnerability reporting program, and security expiration date.58 The overall certification process has five steps: 

  1. Join the ioXt Alliance and register for certification; 
  2. Select one of the five base profiles for testing, and then opt to self-certify or use one of the ioXt’s approved laboratories (currently, Bureau Veritas, SGS Brightsight, DEKRA, NCC Group, NowSecure, Onward Security, or Bishop Fox59); 
  3. Upload production information and test results to the ioXt portal; 
  4. ioXt reviews the submissions and approves or rejects certification—with approved submitters receiving “the ioXt SmartCert” for their product; and 
  5. “Stay certified with ongoing verification and insights,” like IoT regulatory updates through the Alliance.60

The Alliance’s membership includes companies like IBM, Google, Facebook, Silicon Labs, Logitech, Honeywell, Avast, Asus, Motorola, and Lenovo; other associations like the Consumer Technology Association (CTA) and the Internet Infrastructure Coalition; and non-industry organizations like Consumer Reports. Even the UK’s DCMS is an Alliance member.61 While the membership roster certainly does not cover every IoT product manufacturer or vendor in the United States (where many of its members are based), it does have global representation. The Alliance also certified 245 percent more products in 2021 than in 2020, and its membership grew 63 percent over the same period.62 

The IoT Security Foundation, a global nonprofit representing many appliance manufacturers, recommends a framework composed of a few hundred security standards for organizations—spanning management governance, engineering, secure networks and applications, and supply chain.63 Its members include smaller product manufacturers as well as larger companies like Honeywell, Huawei, and Arm, plus many more nongovernmental organizations, like academic institutions, than the ioXt Alliance.64 The framework has three different audiences: (1) managers, (2) developers and engineers, and logistics and manufacturing staff, and (3) supply chain managers.65 While its membership is not as large as that of the ioXt Alliance, the IoT Security Foundation does have global representation as well, such as the University of Southampton, Huawei, the University of Oxford Department of Computer Science, and Eurofins Digital Testing in France.66

The Open Web Application Security Project® (OWASP) is an open-source community effort that provides IoT security standards tailored to three threat models: attacks only against software, attacks only against hardware, and situations where compromise must be avoided at all costs (e.g., medical products and connected vehicles, or products handling highly sensitive data).67 OWASP then specifies several dozen security standards based on these threat models, such as separate standards for bootloaders, operating system configurations, and Linux-based systems.68 OWASP is a nonprofit foundation with over 250 local chapters worldwide and tens of thousands of members, and it runs training conferences and other events to bring together experts from industry, academia, and civil society focused on software development and security.69 Its capacity to drive change on IoT security is considerably different from that of the previous two coalitions; for instance, the OWASP community cannot marshal the marketing and lobbying power held by members of the ioXt Alliance or the IoT Security Foundation. However, OWASP draws on its tens of thousands of members around the world and leverages different forms of engagement than the other coalitions. The IoT Security Foundation, for instance, does not run events at the same scale as OWASP. 

The GSM Association, an industry group for mobile network operators, has hundreds of industry members, from Amazon to Coinbase to Audi, and publishes numerous guidance documents for IoT security.70 For example, its security considerations range from password policies that protect against hard-coded or default passwords (CLP12_6.11.1.5) to a process for decommissioning endpoint devices (CLP13_8.10.1).71

The CTA, a standards and trade organization with over 1,000 company members, runs an IoT Working Group that supports consumer IoT development. Included in those efforts is educating consumers about IoT security best practices and improving the security of IoT products.72 The CTA has multiple labeling schemes under development around IoT products, focused on consumer-facing product security descriptions managed through an accreditation system.73 The CTA, in fact, submitted a position paper to NIST in 2021 that described its vision for a cybersecurity labeling system for software and IoT devices—noting that labels should reflect the consensus industry standards, avoid marketplace fragmentation, and look to risk assessment as much as specific security capabilities, among others.74 It also has global reach, with Cisco, Google, Panasonic, Samsung, Walmart, Alibaba, Nvidia, and ADT among its members.75

The Connectivity Standards Alliance (CSA), which develops and certifies IoT technology standards, has a number of documents and efforts focused on security. For example, the CSA website contains numerous developer resources on IoT security, from security and privacy guidance on the CSA-developed IP-based protocol Matter to documentation around Zigbee, the low-latency communication specification.76 The CSA’s product security working group is underway, developing security standards for IoT devices and exploring security options around labeling; the CSA has also recently started an IoT privacy effort. Both of these endeavors focus on consumer-facing security considerations (meanwhile, other CSA efforts focus on less consumer-facing aspects of IoT product security). The CSA has nearly 300 participant companies and dozens of sponsors around the world, and it also has hundreds of corporate adopters—ranging from large retailers like Amazon to device and component developers like Arm, Silicon Labs, Schneider Electric, LG, Huawei, and Google.77

Individual companies have also provided their own guidance, such as Google’s Cloud IoT Core “device security” guidelines,78 Microsoft’s Edge Secured-core criteria,79 and Arm’s Platform Security Architecture for the IoT.80 Each emphasizes different threat models and targets different stakeholders in the IoT process, from product engineers to those in management at product manufacturers. 

While beneficial, these approaches in the aggregate present a fragmented industry approach to IoT security. Governments looking to industry standards as a reference point find numerous, very different options; for instance, while the ioXt Alliance’s security approach emphasizes testing against specific device profiles, the OWASP approach emphasizes different kinds of threat models that could, hypothetically, apply across device profiles. There are also implementation differences: the ioXt Alliance points to independent, third-party testing and evaluation, whereas OWASP offers a list of standards that organizations can pair to a particular threat model. Some (like the ioXt Alliance) create new, IoT security-specific approaches, while others (like Arm) offer rough replicas of their overall cybersecurity guidance, with some tailoring to IoT.

Summarizing challenges 

The current government approaches towards IoT security present many challenges—and have many gaps and shortfalls. This matters across the United States, Singapore, Australia, the UK, and many other governments, because industry has failed to appropriately invest in IoT security, leaving governments to step in. Simultaneously, some states are leading aggressively on securing IoT while others appear willing, on a structural level, to cede that leadership to industry (or to not act at all). Australia, for example, has put forward an IoT security framework but has long delayed the publication of specific guidance. 

Industry organizations have pursued a range of IoT security approaches across labeling, certification, minimum standards, and best practices. This guidance also varies across industry verticals; for instance, embedded IoT healthcare devices face many more regulatory security requirements than smart speakers. All these initiatives represent a substantial effort and reflect years of work from individuals in the security community—yet challenges (Table 1) around enforcement and implementation leave room for greater cohesion to tie security actions to particular parts of the product lifecycle. 

On the private sector side, ambiguous requirements and policy goals,81 diverging processes and regulatory requirements across jurisdictions, and duplicative certification schemes all hinder private-sector efforts to boost IoT security. And on the user side, individuals are grappling with little to no information with which to select more secure products, bad security outcomes and insecurity, and knock-on effects from IoT insecurity that harm others across society and the wider internet. 

Table 1: Challenges with Current IoT Security Models

SOURCE: Justin Sherman for the Atlantic Council.

State IoT security challenges 

State IoT security policies are fragmented across jurisdictions. While the United States, UK, Singapore, and Australia (as well as the EU bloc) have generally moved from a voluntary best practices approach toward a mandatory approach, the states’ policies do not necessarily integrate well with one another. Each country has different specific cybersecurity best practices and places different levels of regulatory requirements on companies. This state-to-state fragmentation makes it more difficult for governments to agree on IoT security goals and operationalize IoT security cooperation—impeding a multinational approach to systemic risk. 

Further, when states work to increase cooperation, there is a question of selectivity and exclusion: the ten countries with the most infected devices in the 2016 Mirai botnet were primarily in South America and Southeast Asia. Meanwhile, most high-resourced countries principally focus on IoT security collaboration with one another (e.g., UK-Singapore IoT security collaboration), not on building IoT security capacity in lower-resourced countries.82 The latter does happen—for example, Singapore and the Netherlands have engaged the nonprofit, multistakeholder Global Forum on Cyber Expertise on global IoT security issues. Nevertheless, collaboration remains primarily among higher-resourced and higher-capacity states.83

Thus, one set of countries debates solutions while excluding a bevy of impacted stakeholders from the discussion. In doing so, higher-resourced countries may miss important points about their IoT frameworks’ applicability. Notably, cultural contexts greatly matter alongside technical considerations when weighing country adoption, and IoT product reliability may be just as important, if not more so, than cybersecurity per se in a development context.84 In fact, for many countries, increased reliance on information and communication technologies without proper reliability can very well yield suboptimal development outcomes.85 For example, while other governments (e.g., Singapore, Australia) reference the UK’s IoT security recommendations, some of the UK standards may require too much investment for lower-resourced states and focus more on security than on reliability per se. 

Furthermore, regulatory approaches within countries may still be fragmented and leave gaps. For example, in the United States the FCC regulates IoT products’ network connectivity, and the FTC regulates the marketing practices of IoT products.86 The FCC has broad authority to regulate product manufacturers and sellers. On the flip side, the FTC’s authority mainly concerns consumer protection to ensure IoT product sellers are not being deceptive.87 However, this still leaves gaps, such as failing to incentivize security requirements at the device manufacturing stage and leaving national laws to govern IoT cybersecurity only for federal agencies, while the private sector is guided mostly by standards and voluntary guidelines.88 In Australia, to give another example, the state’s “privacy, consumer, and corporations laws were not originally intended to address cybersecurity,” leaving the national government trying to make do with a patchwork of laws.89 Country-internal fragmentation, in total, leaves policy and regulatory gaps in promoting IoT security, forces the government to grapple with an ill-formed patchwork of authorities and procedures, and raises costs and increases confusion for businesses and users—especially when different labels are in play. 

Private sector IoT security challenges 

Many IoT security approaches in practice have ambiguous requirements and policy goals that make it difficult for the private sector to both understand and implement the government’s vision—and difficult for the state to require or incentivize the private sector to change. Take government procurement requirements, whose aim can be unclear. One aim could be the use of procurement to directly secure specific products, such as by requiring the military to only buy IoT products with a higher cybersecurity bar. Another possibility is using procurement to signal best practices to industry, such as requiring compliance with NIST’s cybersecurity framework, which is mandatory for US federal agencies and which more than 30 percent of US organizations have voluntarily adopted.90 And another possibility is not just signaling best practices but incentivizing companies broadly, even those not doing federal contracting, to increase their own product security. As one standards body expert put it, “if the government only buys products meeting certain standards, that sets a bar for the private sector.”91

While the security approach may be similar or identical in each case, there are different policy goals in play that may not be articulated (even if they are not mutually exclusive). If most IoT vendors are not government contractors, the use of federal procurement requirements to secure the broader ecosystem may fail.58 Danielle Kriz, the senior director of global policy at Palo Alto Networks, argues that government procurement on its own is “not enough to result in full-scale IoT security.”92 Using procurement to signal to the broader market could also produce product fragmentation: “If you make the standards too robust,” argues David Hoffman, a Duke University professor of cybersecurity policy, “then you create a situation where there is a profit incentive for contractors to sell two different products: one for government and one for the private sector.”93 Further, if introducing a procurement requirement is meant to signal a coming wave of incentives around that set of security requirements, governments should note that—so industry can begin to get on board. 

Differences in cybersecurity and IoT security processes, levels of maturity, and regulatory requirements across jurisdictions likewise complicate the private sector’s implementation of IoT security approaches. When a country’s internal approach to IoT security is fragmented, it becomes harder to coordinate with the private sector as well as other countries—because there is no clear and cohesive national approach. Companies, for their part, often find themselves caught between multiple competing, if not contradictory, IoT cybersecurity regimes. This increases industry confusion about IoT security best practices (particularly for businesses with less institutionalized cybersecurity capacity) and may force IoT manufacturers and vendors to tailor-make products to meet specific, varied regulatory requirements (discussed in the next section). Disjointed IoT security standards also raise the costs of government interaction for companies, especially for smaller players with less budget and in-house governmental relations capacity. Vendors and manufacturers that have more money and resources could therefore have an even more outsized ability to influence the security conversation. 

For industry, certification schemes also introduce many challenges. The current IoT security certification approach emphasizes independent, third-party product certification, which is time-consuming and costly (sometimes in the tens of thousands of dollars) and may be outright prohibitive for smaller manufacturers and vendors. This approach often excludes lower-cost approaches that could work simultaneously, like self-certification to a lower bar of standards. Certification schemes also vary in form (some binary, some tiered, some descriptive); there is no unified approach for companies to implement and understand. For example, Singapore’s CLS has four progressively demanding security level provision tiers (see Figure 2): security baseline requirements (Tier 1), lifecycle requirements (Tier 2), software binary analysis (Tier 3), and penetration testing (Tier 4).94 Others, however, such as many industry certification schemes, are binary, either certifying a product as “secure” under their definition or not certifying it at all.

User IoT security challenges 

The current approach also presents challenges for users. An Ipsos MORI survey in Australia, Canada, France, Japan, the UK, and the United States found that consumers overwhelmingly think that “connected device manufacturers should comply with legal privacy and security standards” (88 percent), “manufacturers should only produce connected devices that protect privacy and security” (81 percent), and “retailers should ensure the connected devices they sell have good privacy and security standards” (80 percent).95 A majority of those that own connected devices (63 percent) “think they are creepy.”96 Despite these findings, by and large, users continue to purchase insecure IoT products. 

Currently, manufacturers and vendors provide users with little to no information with which to select more secure products. Where labeling and/or certification schemes do exist, they assume that buyers have a fair knowledge of IoT security and will make purchasing decisions based on that knowledge. This assumption is faulty, as all countries surveyed in this report are far from sufficiently educating the public on cybersecurity issues. And in the context of a corporate buyer of IoT products, there is no guarantee that organizations purchasing IoT products have deep, in-house capacity around IoT cybersecurity practices, either. 

The current approach also leaves users, and the IoT ecosystem in general, with bad security outcomes and insecurity. Many manufacturers and vendors underinvest in cybersecurity and might not even have any kind of robust cybersecurity processes in place in their organization. This manifests itself in IoT products riddled with bad security practices, like default passwords and weak encryption, which leave products, users, and connected systems vulnerable to data theft and much worse. Merely encouraging organizations to adopt voluntary standards (that some organizations may not even know about) does not widely improve IoT security outcomes, either. Further, the labeling and certification schemes that do exist in some jurisdictions are often expensive—and if manufacturers and vendors choose not to absorb the costs themselves, then they will charge consumers higher prices for IoT products. 

Even if companies wanted to invest and buyers had all this knowledge, the current approach would still negatively impact users, the broader internet ecosystem, and other involved individuals. Given the “paradox of choice,” where increasing the number of options available to someone can make it harder to reach a decision, providing users with many different labels and certifications may do the same. The lack of a unified labeling scheme also makes it difficult for consumers to compare labels (binary versus tiered versus descriptive), and the lack of a single global IoT cybersecurity certification means buyers may not even be able to compare IoT security attestations at all. Moreover, there is little indication that introducing labeling and/or certification would necessarily cause a buyer to look anywhere beyond the price tag. And in the narrow cases where manufacturers or vendors provide labels and certification information to buyers, many users only see that information when the product is already unpacked and undergoing setup in their home or work environment. Overall, current IoT security approaches still place a heavy security burden on individuals, rather than systematically mandating and incentivizing product manufacturers and vendors to consider and build in security from the outset. As one DCMS official described it, labels may be attractive because they can avoid the bureaucracy of legislation—yet they still expect consumers to move the security needle. 

Addressing these challenges should not devolve into championing one national approach over another. The need for harmonization in specific controls is real, and this need extends to control philosophies and enforcement schemes. The section below synthesizes these previous approaches into a single framework based on the general lifecycle of IoT products as a basis for a path forward. 

Creating a synthesized framework 

There is no shortage of IoT security frameworks. As noted in the last section, government agencies, private companies, industry organizations, and civil society groups around the world have developed and published a range of IoT security policy frameworks, design best practices, and security certification schemes. This represents a substantial body of work on IoT security, yet there is more to be done—and its sheer range creates complexity despite industry calls for coherence and presents a meaningful obstacle to international coordination. 

Rather than address each of the four jurisdictions of interest (US, UK, Australia, Singapore) in isolation, this section presents a consolidated framework that brings together existing security regulations, standards, and guidance from all four countries. 

The framework’s first goal is to reduce fragmentation between policy approaches by highlighting their contributions and limitations. Operating in multiple jurisdictions with different IoT security regimes can drive up product development and legal compliance costs, disincentivize companies from investing in security or widely selling their products, and even create scenarios where companies must tailor-make IoT products to sell in different countries. Reducing fragmentation addresses these cost issues. It also empowers IoT product users, by giving companies and individuals a clearer set of tradeoffs and information rather than numerous, different stamps of security approval from different places. Lastly, reducing fragmentation helps policymakers forge cooperation internationally and cover the entire IoT security landscape at home. 

The framework’s second goal is to better situate technical and process guidance into cybersecurity policy. As previously discussed, some government requirements and guidance on IoT security lack detail and have ambiguous policy goals, which impede the private sector’s progress on better implementing IoT product security. Integrating technical and process details into government policy can help the private sector, especially companies with limited cybersecurity knowledge and capacity, operationalize higher-level IoT security objectives. It would also help governments identify flaws in their own IoT security approaches; for example, an overemphasis on certifications’ policy value has come at the expense of looking at the certification process—which for many organizations is a time-consuming, costly endeavor. 

Table 2 presents a synthesized IoT cybersecurity framework—mapping at what stages of the IoT lifecycle various IoT security actions and policies could be applied. This leads to a discussion in this section of how existing government IoT security approaches have enforced, incentivized, or guided these measures. It then leads to the recommendations section, which discusses ways in which governments can better select from these security action options and appropriately enforce, incentivize, or guide them to achieve better cybersecurity across the IoT ecosystem. 

Overwhelmingly, this framework highlights that the IoT security approaches in the countries studied focus on the design, development, and sale and setup phases of the IoT lifecycle, with significant gaps in security actions and policies for the maintenance and sunsetting phases of an IoT product’s lifespan. 

Table 2: Synthesized IoT Security Framework

SOURCE: Liv Rowley and Justin Sherman for the Atlantic Council.

Cybersecurity decisions at each lifecycle phase help determine a product’s ultimate security (Figure 4). 

Design decisions frame how IoT products are ultimately architected, and they can include or exclude certain cybersecurity considerations from the outset. Security action and policy options at this level include following voluntary and/or mandatory technical standards, following voluntary and/or mandatory best practices, and employing best practice security design principles. 

Development decisions begin to put those design ideas into practice, and they impact how higher-level ideas and principles are operationally employed into the creation of products. They also present an opportunity for IoT product manufacturers to tailor additional security requirements based on their product’s risk profile—for instance, adding in extra controls on top of voluntary, minimum best practices for products used in safety-sensitive or critical infrastructure settings. 

Sale and setup decisions focus on IoT products going on the shelf and getting configured in their use environment, and they impact the cybersecurity of those products when first activated. Security action and policy options at this level include implementing vulnerability disclosure policies and processes, implementing mechanisms for regularly updating software, employing labeling schemes, and getting products security-certified. 
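As an illustration of one such option, the mechanism for regularly updating software, the sketch below verifies that an update image carries a valid vendor signature before it is applied. It is a minimal sketch assuming an Ed25519 vendor key and a device-specific flashing step; none of the frameworks discussed here mandate this particular design.

```python
"""Minimal sketch of a secure-update check for an IoT product, assuming the vendor
signs firmware images with an Ed25519 key whose public half is baked into the device
at manufacture. Hypothetical illustration only; the frameworks discussed in this
report describe outcomes, not implementations."""

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def verify_and_apply(firmware: bytes, signature: bytes, vendor_pubkey: Ed25519PublicKey) -> bool:
    """Install the update only if the vendor's signature over the image is valid."""
    try:
        vendor_pubkey.verify(signature, firmware)
    except InvalidSignature:
        return False  # refuse unsigned or tampered images
    # A device-specific apply_update(firmware) step would flash the verified image here.
    return True


if __name__ == "__main__":
    # Simulate the vendor side purely for demonstration.
    vendor_key = Ed25519PrivateKey.generate()
    image = b"firmware v1.2.3"
    good_sig = vendor_key.sign(image)

    pubkey = vendor_key.public_key()
    print(verify_and_apply(image, good_sig, pubkey))         # True: valid signature
    print(verify_and_apply(image + b"!", good_sig, pubkey))  # False: tampered image
```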

Maintenance decisions focus on IoT products that have already been configured and deployed, and they impact the security of those products for the rest of their lifetime. The security action and policy options at this level include maintaining vulnerability disclosure policies and processes, issuing regular security updates, updating labeling schemes in line with software security updates and disclosed vulnerabilities, and updating certifications in line with software updates and disclosed vulnerabilities. 

And finally, sunsetting decisions pertain to the end of a product lifecycle—such as when a vendor stops providing security updates—and how product vendors and users should communicate about, prepare for, and navigate the process of retiring an IoT product.97 

Figure 4: Overview of Government and Industry Frameworks

SOURCE: Liv Rowley and Justin Sherman for the Atlantic Council.

When applied to the United States, the UK, Australia, and Singapore, the framework shows that most country IoT security approaches concentrate on the earlier parts of the IoT product lifecycle. The design, development, and sale and setup phases are heavily covered. In the United States, existing NIST publications that provide guidance on security-by-design (like NIST SP 800-160) are applicable to IoT.98 The UK’s PSTI Bill, introduced in November 2021 and not yet passed, would require “manufacturers, importers, and distributors to ensure that minimum security requirements are met in relation to consumer connectable products that are available to consumers.”99 The provisions leverage recommendations in the UK Code of Practice/ETSI EN 303 645: banning default passwords, requiring vulnerability disclosure processes for products, and providing transparency for consumers on the duration that products will receive security updates.100 Nonetheless, there are still gaps; the UK PSTI Bill focuses more on design, development, and sale and setup.101

Design and development guidance often overlap in the four countries. The Australian government’s Code of Practice on securing the IoT for consumers uses the 13 principles laid out by the UK and ETSI, including not using default passwords, implementing a vulnerability disclosure policy, and keeping software updated and secure.102 The provisions around not using default passwords, validating input data, and securely storing credentials are articulated in the abstract at the design phase and put into practice during the development phase. 

The United States, the UK, Australia, and Singapore also have significant guidance and/or requirements at the product sale and setup phase. For the ETSI guidance—which underpins guidelines in the UK, Australia, and Singapore—the implementation of a vulnerability disclosure policy comes into play during sale and setup. Singapore’s CLS has four levels against which companies can certify products, from baseline requirements, certified based on developer self-declaration, to comprehensive penetration testing conducted by ISO-accredited independent laboratories.103 And in the United States, the IoT Cybersecurity Improvement Act of 2020 requires NIST to publish “standards and guidance” around IoT product purchasing and shifts the compliance burden from vendors onto federal purchasers.104 Moreover, federal agencies must consider such factors as secure development, identity management, and patching when looking at buying an IoT product and then prove that said product satisfies NIST’s guidance.105 E.O. 14028 directs federal agencies to implement secure software verification processes and directs NIST, the FTC, and other agencies to identify “IoT cybersecurity criteria for a consumer labeling program.”106

Regulations enforced by the FTC and FCC likewise focus on IoT product labeling when consumers look to purchase and deploy products (in the FTC’s case) and IoT network design (in the FCC’s case). This is not to say the US security approach entirely neglects the maintenance and sunsetting phases; NIST’s first IoT publication (NISTIR 8259)107 includes a category for “post-market” security considerations as well as general recommendations for establishing communication channels for product updates and customer feedback. A subsequent update to the document (NISTIR 8259A) contains recommendations for security update features.108  

All four government approaches focus less on the maintenance phase of the IoT product lifecycle. The UK’s IoT security approach has gaps in providing manufacturers, vendors, and users with maintenance guidance (e.g., once the security update plan is in place and communicated, how will it be continuously followed?) and sunsetting guidance (e.g., if the company stops providing security updates, how should it inform users, and what options might users have for replacing devices?). While there is some minimal guidance here—for instance, the UK DCMS Code of Practice includes a provision to make the installation and maintenance of products easy—it hardly provides anything substantively useful for manufacturers, vendors, or buyers. The same therefore goes for Australia, which follows the UK’s guidance. Singapore does provide detailed guidance on the maintenance phase at Tiers 2, 3, and 4 of its certification scheme. 

Each approach has significant gaps at the sunsetting phase. The United States lacks sunsetting guidance in its IoT security approaches, and regulatory enforcement does not focus on sunsetting (e.g., the FTC focuses on how products are marketed to consumers, not how products are retired). Singapore’s labeling scheme provides little guidance on notifying users when security updates are terminated and products reach the end of their life, at which point they pose new and greater security risks. The UK’s IoT security approach also lacks sunsetting guidance, such as what happens if a company stops providing security updates as recommended by the DCMS. This means users, and society writ large, may have some protections against IoT insecurity at the earlier phases of the IoT product lifecycle, such as when companies are designing IoT products sold to the government or used in relation to critical infrastructure, or when vendors are advertising regulated products on the shelf. Yet businesses, individuals, and other entities using IoT products long past their supported lifespan are exposing themselves to insecurity, possibly without even knowing it, and without government policies and security approaches that protect users against the termination of security updates, outdated labels, and other security problems. 

It is also important to note that requirements may, in the future, speak to areas outside the device lifecycle as well, concentrating more on an IoT manufacturer’s organizational structure or developer training. NIST notes this in its June 2022 initial public draft of NISTIR 8425, titled “Profile of the IoT Core Baseline for Consumer IoT Products.”109 Developer activities, as outlined in NISTIR 8425, may include Documentation, Information & Query Reception, Information Dissemination, and Education & Awareness.110 Some industry IoT security frameworks include non-device requirements as well. For instance, the IoT Security Foundation’s framework mandates the existence of certain roles at a company (for example, 2.4.3.1 mandates “There is a person or role, accountable to the Board, who takes ownership of and is responsible for product, service and business level security, and mandates and monitors the security policy”); or specific actions to be included in a company’s security policy (for example, “As part of the Security Policy, provide a dedicated security email address and/or secure online page for Vulnerability Disclosure communications”).111 Such standards, which apply to elements outside the scope of the device lifecycle itself, are critical to fostering a stronger security environment overall and should be kept in view as IoT security requirements mature. 

Toward a consolidated approach

The framework above underscores how some governments and industry actors are making progress in pushing for greater IoT security—but there is a long road ahead to improving cybersecurity in the IoT ecosystem. Some governments and many industry actors are still underinvesting in IoT security. Despite their stated concerns, consumers continue to purchase insecure products, so market demand alone has not pushed product manufacturers and vendors to deliver meaningful transparency or improvements in user security outcomes. Without the predictability of common security standards that impose pressure on all manufacturers and vendors, proactive firms have little incentive to produce secure products, and there are few penalties for laggards. 

Overcoming widespread risks 

Promisingly, the past few years have seen a flurry of activity on IoT security from governments, industry groups, and consumer advocates. The attitude among those interviewed for this report was generally optimism about the direction of travel, with concern over the pace of the trip. Singapore is nearly two years into a voluntary, four-level labeling scheme that is gradually being made mandatory by product type, as it already is for internet routers. Australia appears poised to pursue a labeling approach that mirrors Singapore’s four levels (“graded shield”) or a simpler indicator showing the timeframe over which security updates will be provided (“icon expiry”). The UK has rejected the concept of labels and is instead on the cusp of passing legislation that empowers regulators to set basic cybersecurity requirements for all smart devices, a baseline that can be ratcheted up over time. In the United States, two states have implemented their own minimum security requirements, federal agencies must purchase products with more robust security, and NIST recently recommended a binary label akin to approaches in Germany and Finland. Consensus standards, enforcement measures, and international cooperation across these four jurisdictions are feasible but not yet close. Nevertheless, there still are threats to progress: 

  • Risk #1: Regulations, standards, and norms diverge between jurisdictions. Despite today’s promising signs, as more jurisdictions take on the problem of IoT insecurity, there is a risk that regulatory divergence worsens into an “every market for itself” approach, where duplicative requirements and confusing enforcement schemes burden IoT vendors, who must either work to support multiple sets of standards or elect to focus on a small set of jurisdictions. 
  • Risk #2: Cybersecurity labels fail to demonstrate value to both manufacturers and consumers. One interviewee summed up the attitude toward cybersecurity labels with an analogy to Churchill’s famous quote about democracy: “the worst option, except for all the others.” Labels are an increasingly popular approach in national IoT security efforts. Despite a clearly articulated demand for greater security by consumers, some observers are doubtful that consumers can or will make informed cybersecurity decisions even with the benefit of an indicator on the box or the webpage. Others question whether it is correct to task consumers with making such security decisions for themselves, comparing insecure IoT products to an unsafe lightbulb: you do not compare lightbulb brands to see which one is least likely to explode. Like other market signals, cybersecurity labels can suffer from a collective action problem, only arising if both sides of a transaction value them. 
  • Risk #3: Product security requirements become watered down as they approach broader adoption. Particularly in the United States, legislation often becomes less potent as it approaches the federal level. Industry resistance was sufficient to kill prior versions of the IoT Cybersecurity Improvement Act and to cut some provisions from the version finally passed in 2020.112 Given federal law’s preemptive power, consumer IoT security legislation could counteract more ambitious measures at the state level. This dynamic may also occur internationally if jurisdictions are driven to the lowest common denominator in pursuit of consensus. 
  • Risk #4: Guidelines become too rigid, locking in outdated security practices. As Brian Russell and Drew van Duren describe, “The greatest challenge in the security industry is finding methods today of defending against tomorrow’s attacks given that many products and systems are expected to operate years or decades into the future.”113 Legislation must define processes and outcomes rather than codifying specific security measures that might soon become irrelevant. 
  • Risk #5: The drive for improved consumer IoT security fails to have an impact on product manufacturers in jurisdictions without strong IoT security laws. The national initiatives surveyed in this report focus primarily on efforts to effect change by imposing requirements on products sold in each one’s jurisdiction, as opposed to trying to shape what happens where products tend to be manufactured, given the challenge of extraterritorial enforcement. Interventions must consider the full range of actors who can put pressure further up the supply chain, with retailers, in particular, having the potential to play an influential role. 

The shape of a consolidated approach 

What might a better IoT future look like? One description is: “a world in which every IoT ecosystem stakeholder[’s] choices and actions contribute to overall security of IoT where consumers and benefactors are simply secured by default.”114 It could mean raising the baseline level of security to the point where consumers trust IoT products and services as something more than a roll of the dice.

Crucially, this world must reflect different economic incentives for manufacturers, consumers, and attackers. Policy change is necessary to help shape and channel these incentives. When assessing any proposal, one should consider its ability to advance the following outcomes: 

  • Eliminate the most glaring insecurities in consumer IoT products, thus increasing the level of effort and sophistication required for attackers to compromise them. 
  • Promote harmonization across jurisdictions, avoiding needless divergence and duplication, thereby reducing friction for manufacturer uptake. 
  • Sharpen incentives for manufacturers to exceed the minimum baseline of security practices. 
  • Increase consumer awareness of the risks from insecure products and increase interest in security as a feasible and accessible buying criterion. 
  • Provide real impact on user security outcomes in the near term while maintaining flexibility to incorporate new controls through consensus measures as technology evolves. 

To drive the above outcomes and closer alignment in policy across these four states, the team proposes a multi-tiered IoT product labeling and certification scheme with basic, easily understandable labels for consumers (Figure 5). This multi-tiered scheme would ensure that minimum security standards are met, give consumers easily digestible ways of understanding the security of a product, and allow manufacturers that invest in higher security to advertise it understandably.

Figure 5: Overview of IoT Security Tiers

SOURCE: Patrick Mitchell for the Atlantic Council.

Tier 1: Minimum Baseline Features. The first tier should be a set of mandatory, baseline, self-attested IoT security standards created by governments in consultation with industry. For each country, the government agency leading this effort should ideally be the organization already in charge of cybersecurity standards, and if there is not one, governments should select an organization with a high degree of transparency, technical competence and capacity, and a track record of working with industry and civil society. The recommended baseline security standards should be rooted in widely agreed upon desirable security outcomes, for instance, the core principles outlined in ETSI EN 303 645—such as eliminating default passwords, mandating a vulnerability reporting contact, and facilitating secure updates for software. Once governments set this tier, manufacturers should apply to the agency administering the program and self-attest that they meet these standards. The agency should then provide qualifying products with a label indicating that they have met these baseline requirements, and the manufacturer and product vendor (if different than the manufacturer) should include this label and information about it in the product description. Random audits can assess compliance without the need for a time-consuming and expensive certification process. Examples of national programs in this tier include the UK’s PSTI Bill, Singapore’s CLS Tier 1 requirement for routers, and California and Oregon’s IoT security laws. 
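As a rough sketch of what a self-attested Tier 1 submission could look like in machine-readable form, the example below checks a hypothetical manufacturer filing against a baseline loosely modeled on the top ETSI EN 303 645 provisions. The field names, schema, and validation step are assumptions for illustration; the administering agency would define the real format.

```python
"""Hypothetical Tier 1 self-attestation record and a minimal completeness check.
Field names are illustrative; an administering agency would define the real schema."""

from datetime import date

# Baseline criteria assumed for illustration, loosely following ETSI EN 303 645's
# top provisions: no universal default passwords, a disclosure contact, and a
# stated security-update commitment.
REQUIRED_BASELINE = {
    "no_universal_default_passwords",
    "vulnerability_disclosure_contact",
    "security_update_commitment_end",
}


def validate_attestation(attestation: dict) -> list[str]:
    """Return a list of problems; an empty list means the submission is complete."""
    problems = [field for field in REQUIRED_BASELINE if not attestation.get(field)]
    end = attestation.get("security_update_commitment_end")
    if isinstance(end, date) and end <= date.today():
        problems.append("security update commitment already expired")
    return problems


if __name__ == "__main__":
    submission = {
        "product": "Example Smart Plug",  # hypothetical product
        "no_universal_default_passwords": True,
        "vulnerability_disclosure_contact": "security@example-vendor.invalid",
        "security_update_commitment_end": date(2027, 1, 1),
    }
    print(validate_attestation(submission))  # [] -> eligible for the baseline label
```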

Tier 2: Enhanced Security Features. Building off the first tier of mandatory, baseline, self-attested IoT security standards, governments should then work with industry to set a second tier of security standards—higher, voluntary, and independently tested. The standards to qualify for this second tier should likewise look to the Tier 1 baseline as a starting point, with a particular focus on ensuring products communicate securely and protect consumers’ personal data, inspired by security outcomes that may be drawn from ETSI EN 303 645. Qualifying products will receive a label indicating that they have both met the Tier 1 baseline requirements and the Tier 2 requirements, and the product description should include information about this label. To encourage the uptake of the second tier, securing a label should be a relatively cheap and quick process. Given that some jurisdictions may see more value in a scheme with more than two tiers, national regulators should be able to subdivide this tier into different levels of security. Examples of existing programs that would fall within this tier include Levels 3 and 4 of Singapore’s CLS, Finland’s Cybersecurity Label, Germany’s BSI IT Security Label, and the binary label recommended by NIST in the United States. 

Special Standards for Safety Critical Products. Industry-specific regulators should remain in charge of setting the highest bar of security standards for IoT products that present an imminent threat to human life if compromised. For most smart devices, consumers do not bear the brunt of the consequences if their device is vulnerable to an attacker. This dynamic shifts dramatically when the connected device is an automobile or pacemaker and the consequences become potentially lethal. In these instances, however, consumers still lack the expertise to assess risk. These industries tend to already have specific regulators focused on product safety: for example, the FDA certifies medical devices, and the National Highway Traffic Safety Administration (NHTSA) is charged with enforcing motor vehicle safety standards. In this context, an internet connection is merely another feature that introduces new risks to product safety. These regulators should look to standards bodies such as ETSI and NIST as a starting point for guidance on cybersecurity, but the ultimate requirements for these safety-critical applications must extend to the particular security needs of the industry—which are likely even more stringent than the second tier discussed above. Products that fall into this category need not be certified with a label. Instead, if they fail to meet the regulator’s minimum standards, they should not be approved (or should be recalled if they are already on the market). 

What does the label look like? 

A label for IoT security should consist of a standardized table or graphical description of security features, attached physically to a product box and digitally affixed to product descriptions online. The digital description of an IoT product’s security features is especially important, and—given the constantly changing security landscape—keeping digital labels up-to-date is often easier than doing so for physical labels. Ideally, the standardized-format description of product security features should be mapped to a set of standard IoT security criteria—such as a checklist of product compliance with some NIST security best practices, or a checklist of product compliance with ETSI requirements for IoT security (e.g., does this product use universal default passwords, does it have a security update function in place). Labels, intended for audiences ranging from consumers to enterprise purchasers, should use clear, easily understandable language to describe product security features, rather than referencing specific standards numbers or using highly technical verbiage (such as describing a specific encryption algorithm). 
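To illustrate the plain-language point, the short sketch below maps hypothetical machine-readable security criteria to the kind of consumer-facing statements a physical or digital label might carry. The criteria names and wording are invented for demonstration and are not drawn from any existing scheme.

```python
"""Illustrative mapping from machine-readable security criteria to plain-language
label text. The criteria and phrasing here are assumptions for demonstration."""

# Hypothetical criteria keys mapped to consumer-friendly label statements.
PLAIN_LANGUAGE = {
    "no_universal_default_passwords": "Does not ship with a shared default password",
    "security_updates_until": "Receives security updates until {value}",
    "vulnerability_reporting": "Has a published way to report security problems",
}


def render_label(criteria: dict) -> list[str]:
    """Turn criteria into label lines, skipping anything the product does not meet."""
    lines = []
    for key, template in PLAIN_LANGUAGE.items():
        value = criteria.get(key)
        if value:
            lines.append(template.format(value=value))
    return lines


if __name__ == "__main__":
    print(render_label({
        "no_universal_default_passwords": True,
        "security_updates_until": "January 2027",
        "vulnerability_reporting": True,
    }))
```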

Related to the label, governments should consider cooperating and coordinating with industry to ensure data on labels is easily accessible—to regulators, researchers, and the public generally. One idea is creating a central repository of manufacturer and vendor label information, perhaps maintained by a country’s cybersecurity standards organization or a standards development organization (SDO), into which vendors and manufacturers can upload independently tested and/or self-certified label information about IoT product security. It may be advantageous to develop a single form containing information of interest to multiple major jurisdictions, inspired by the “Common App” form which allows individuals to fill out one form to apply to multiple US-based universities. This would allow regulators and others to access information on company compliance and broader IoT product security trends in a single place and in a single, accessible format; it also potentially streamlines compliance efforts by IoT vendors, allowing them to file security information about their products in one place that is applicable in multiple jurisdictions. Another idea is having companies make this information available from their systems through a standard API—such that all the information is not stored in one single place, and the government does not have to maintain a central repository of IoT security label data, but that individuals can query manufacturer and vendor APIs to get label information. 
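A minimal sketch of the API idea follows, assuming the repository exposed a simple read endpoint keyed by product identifier. The URL, route, and response fields are hypothetical; no such registry exists today.

```python
"""Hypothetical client for a central IoT label repository. The URL, route, and
response fields are invented for illustration; no such registry currently exists."""

import json
from urllib.request import urlopen

REGISTRY = "https://iot-labels.example.invalid/api/v1"  # placeholder, not a real service


def fetch_label(product_id: str) -> dict:
    """Fetch a product's published label record from the (hypothetical) registry."""
    with urlopen(f"{REGISTRY}/products/{product_id}/label") as response:
        return json.load(response)


# Example usage (would fail today because the registry is fictional):
#   label = fetch_label("example-smart-plug-v2")
#   print(label.get("tier"), label.get("security_updates_until"))
```

Whether the data sits in one repository or behind per-vendor APIs, the design goal is the same: label information that regulators, researchers, and buyers can retrieve and compare in a consistent format.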

A note on ambitions 

At their simplest, today’s approaches reflect two different philosophies about where governments should focus their efforts: (1) targeting the “low hanging fruit” of higher impact/lower effort measures with mandatory requirements, or (2) setting an optional higher bar and trying to get consumers and industry to care about it. The former arguably views security improvements as a rising tide that fills in the lowest lying areas first, while the latter arguably views it as a distant target that focuses our gaze, even if not everything hits the bullseye. While both strategies have their merits, they need not be mutually exclusive. We cannot content ourselves with merely getting rid of the worst shortcomings. Similarly, the choice for consumers should not be between one class of products that have poor security and another with world-class security. 

It would be counterproductive to suggest that these countries should scrap their national approaches in favor of a new consensus program. Given how recent these efforts are—if they have even yet been implemented—it is still too soon to tell how each country’s approach will fare. A degree of national-level experimentation can help determine what does and does not work. Further, as one interviewee noted, while standards may harmonize internationally, enforcement occurs locally. Many jurisdictions have lined up behind the same set of guidelines in ETSI EN 303 645, with some others pursuing slightly differing approaches that nonetheless seek the same outcomes that the ETSI documentation aims to achieve. But the measures chosen to encourage (or compel) industry to generate products with better security must reflect the jurisdiction’s regulatory and consumer cultures. The silver bullet is not necessarily a new global label, new methods of enforcement, or new standards for IoT products. Instead, the world needs a better way of bringing together these efforts and ensuring they continue to avoid contradiction and duplication. 

Recommendations

This section lays out nine recommendations for government and industry actors to enhance IoT security, broken into three recommendation sets: setting the baseline of minimally acceptable security, incentivizing above the baseline, and pursuing international alignment on standards and implementation across the entire IoT product lifecycle. While many of these recommendations apply generally to those interested in promoting a more secure IoT ecosystem, the report also aims to identify specific actors and the steps they can take to bring about this multi-tier structure for IoT security (Figure 6). Moreover, these recommendations also aim to address the risks and uncertainties described in the prior section. 

Importantly, this report deliberately does not prescribe a particular label design, such as a table or graph. Moreover, it does not prescribe how companies should pair physical and digital labels, nor to what extent companies and/or governments should harmonize specific label designs and digital characteristics across jurisdictions. These areas deserve more work, and the optimal approaches remain unclear at this stage. 

Figure 6: Overview of Actors and Actions to Improve IoT Security

SOURCE: Patrick Mitchell for the Atlantic Council.

Recommendation Set: Establish the Baseline of Minimally Acceptable Security (Tier 1): Currently, many governments lack baseline security standards for IoT products, and for some of those that do have such standards enacted, companies must go through a time- and cost-intensive process of independent testing and certification. This substantially raises the barrier to adopting what should be easily achievable and cybersecurity-bolstering baseline standards. By setting this minimum baseline, making it low-cost for companies to comply with, selecting criteria that greatly increase cybersecurity (like no universal default passwords115 and having security updates), and making it mandatory, governments can ensure IoT products within a country have the most basic and critical security measures in place. In some jurisdictions, enforcement might look like a law that requires every IoT manufacturer to implement the government-set IoT security baseline standards; in other countries, enforcement might look like a consumer regulatory agency creating a new rule within its existing authorities. 

IoT products are currently so insecure that hacking them is relatively trivial. The insecurities these products have are so glaring and egregious that even relatively unskilled hackers can get into the game and claim their slice of the pie. Implementing mandatory minimum security standards would have an impact on the state of IoT security by plugging those widely known and easy-to-find holes, which raises the cost of knowledge, time, and resources required to compromise IoT products. In other words, this would help push small fry hackers out of the scene, and the more sophisticated hackers would have to invest energy into developing ways to target more secure products. 

To illustrate this point, the Global Cyber Alliance’s October 2021 report “IoT Policy and Attack Report” provides a glimpse into just how effective some of these minimum security measures can be.116 Using a “honeyfarm” (a large network of IoT device honeypots), the Global Cyber Alliance was able to measure the number of attacks against different classes of IoT products and determine whether the number of successful attacks against the target changed, given the implementation of different security standards. For instance, the report found that of over 7,000 malicious login attempts, attackers were able to log in, and thereby compromise a device, in only 79 instances. Those 79 instances all involved devices that used default passwords. 
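A quick back-of-the-envelope calculation on those figures underscores the point; the report gives the attempt count only as “over 7,000,” so the percentage below is approximate.

```python
# Back-of-the-envelope check on the Global Cyber Alliance figures cited above.
attempts = 7_000   # "over 7,000" malicious login attempts observed
compromises = 79   # successful logins, all against default-password devices
rate = compromises / attempts
print(f"~{rate:.1%} of attempts succeeded, every one against a default password")
# -> roughly 1.1%; removing default passwords would have blocked all 79 compromises
```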

This section describes two recommendations that aim to influence two critical groups of actors in implementing this baseline: product manufacturers and retailers. 

Recommendation 1: Governments should implement regulatory measures to enforce a mandatory baseline on manufacturers selling in their markets (Figure 7). Initially, governments should conduct outreach to encourage compliance and spread awareness among manufacturers about the security requirements. Inevitably, some companies will not implement the Tier 1 security baseline within the required window or in the required way. This could be the result of many factors, including a lack of awareness about the rule (e.g., for smaller IoT manufacturers), foot-dragging, and limited capacity to quickly implement the self-attested label and certification. Governments should therefore develop mechanisms to publicize the new, required security baseline at Tier 1 and encourage companies to implement it within the specified window. Beyond general public education campaigns, for example, this could include a country’s key standards agency holding sessions with industry to explain new requirements and answer any questions that may arise—well before the requirements go into effect. 

Next, governments should set up random audit mechanisms to ensure firms’ claims are accurate and issue penalties as needed. Some companies may self-attest to a security baseline and then take action that deviates from that attestation (e.g., implementing security updates and then ceasing security updates). Other companies may falsely self-attest to the security baseline altogether. If a product has been falsely attested to and does not meet the minimum security standards, the government should begin by issuing a compliance notice to its manufacturer. The compliance notice (or prompt for change) should outline all corrective actions and set a clear deadline for when these actions must be complete. Should a manufacturer continue to produce a noncompliant product with a falsely advertised security label, the government’s relevant enforcement agency should issue a stop notice that orders the manufacturer to cease selling the product until made compliant. The agency’s stop notice (sent to the company and published publicly) should also demand the recall of the noncompliant product. The agency should also consider additional actions depending on its authorities and typical enforcement processes against other companies domestically, such as fines. In line with other contexts in which companies may hold liability, governments carrying out enforcement should weigh whether a reasonable effort was made to attest in good faith, among other factors. 

Figure 7: Setting the Baseline of Minimally Acceptable Security (Recommendation 1)

SOURCE: Patrick Mitchell for the Atlantic Council.

Recommendation 2: Governments should follow the “reversing the cascade” philosophy, where instead of trying to influence manufacturers based abroad, governments put pressure on domestic suppliers and retailers—who may, in turn, put their own pressure on manufacturers to improve security (Figure 8). It is not just governments that make policy decisions that impact product manufacturers. There is considerable power in the terms and conditions for selling through major marketplaces and retailers like Amazon, Walmart, and Target. Many IoT security efforts encounter issues when they try to levy penalties on manufacturers, as many manufacturers are based outside the enforcing government’s jurisdiction and may have little incentive to comply with security requirements. Vendors, however, fall within a government’s jurisdiction, making enforcement actions more feasible. There are also fewer major IoT vendors than there are IoT product manufacturers, allowing efforts to be more concentrated. In the US, political leaders and regulatory agencies, such as cybersecurity officials in the Department of Commerce and regulators at the FTC, should call upon major retailers to more proactively police the sale of consumer IoT products that lack basic security features; these retailers currently sell products like smart thermostats, smart speakers, and baby monitors that have poor security practices and use default passwords. If engagement does not bring about change, retailers could be held accountable through new laws that penalize them for the sale of noncompliant products. That said, targeting noncompliant smart products that have long sat on store shelves may achieve higher security across products more quickly, without creating barriers to entry for small manufacturers. It is also possible that the FTC could pursue action against specific retailers under its “unfair or deceptive acts or practices” authority.99, 117

As the world’s largest online retailer, Amazon, for example, could have an outsized impact by expanding its “Restricted Products Policy” to bar unsafe smart devices. When contacted by security researchers about a particularly vulnerable wireless camera (promoted as “Amazon’s Choice”), the firm removed the preferred listing and responded, “We require all products offered in our store to comply with applicable laws and regulations and have developed industry-leading tools to prevent unsafe or non-compliant products from being listed in our stores.”118 In this vein, in its Examples of Prohibited Listings in the Electronics category, Amazon should explicitly prohibit smart home products that fail to meet the Tier 1 requirements. The US government can apply pressure on online retailers (not just Amazon) to do so, such as through public messaging campaigns and convenings with company executives through organizations like NIST. If this fails to stem the presence of insecure products on the site, another measure could be requiring firms to receive approval before listing consumer IoT products—as they must for categories including jewelry, DVDs, and “Made in Italy” items—or just a subset of high-priority items like children’s connected toys. This approval could be as simple as submitting a form that attests that the firm does not use universal default passwords and lists a vulnerability reporting point of contact. Amazon’s application form for selling streaming media players could serve as a template. Even without specific laws that force its hand, this policy would be in line with Amazon’s stated goal of allowing customers to buy with confidence on its platform. 

Figure 8: Setting the Baseline of Minimally Acceptable Security (Recommendation 2)

SOURCE: Patrick Mitchell for the Atlantic Council.

Recommendation Set: Incentivize Above the Baseline (Tier 2): Ensuring that all smart devices meet basic security requirements is valuable, but insufficient relative to the present risk in the IoT ecosystem. Some buyers may wish to achieve security at a higher level, and even more likely, some governments may wish to require manufacturers to adopt security standards above the first-tier baseline. Some manufacturers may also pursue a higher level of security as a differentiator. This section outlines four recommendations that will strengthen the development of this higher tier: setting the higher tier, mandating a more stringent degree of security for government-procured smart devices, expanding label recognition between states, and moving towards a consensus certification and labeling program. These actions will grow demand for secure products, increase consumer awareness, and decrease friction for firms that must otherwise navigate multiple certification regimes. 

Recommendation 3: Governments should support the creation of a voluntary, higher tier of security requirements, indicated via labeling programs in their markets (Figure 9). The objective of this tier is to encourage firms to adopt more advanced security features and design practices in their products. As with the first tier, governments should select the specific security provisions for this tier using outcomes-based approaches, perhaps looking to ETSI EN 303 645 for inspiration. Other provisions, such as those from OWASP and ioXt, can supplement such approaches. Unlike the first tier, in which companies self-attest to meeting standards, in this tier companies should have their products evaluated and their status certified by a third-party testing lab. These approved labs should be accredited under ISO/IEC 17025, the internationally accepted standard for testing and calibration laboratories, to ensure consistent application of device security testing procedures. Since product certification at this tier is voluntary, manufacturers will likely wish to advertise their products’ enhanced security features. Any device that passes the test, and is therefore shown to meet the Tier 2 requirements, will receive an accompanying Tier 2 label. These labeling schemes can be “binary,” indicating the presence or lack of desired security features (e.g., Finland and Germany’s programs), or multi-level, allowing manufacturers to pursue the certification that meets the desired “grade” of security for their product (e.g., Singapore’s CLS). After issuance, random audits should ensure that devices remain in compliance with the provisions of their label. If a product has received a label but no longer meets its requirements, the government should decertify the product. Depending on the jurisdiction, the government may also pursue legal action against those who willfully make false claims about their product’s security features. 

The existence of the second tier will aid in raising the security of IoT products above the minimum standards set in Tier 1. As the multi-tier model evolves over time, governments can also migrate effective standards from the second tier over to the first tier. Further, using outcomes-based approaches such as ETSI EN 303 645 as inspiration for these security requirements will ensure continued momentum around many agreed-upon basic security principles, while the employment of public-private cooperation ensures that standards are actionable. To drive the uptake of labeling programs, governments should engage with industry and the public to spread awareness of the programs’ benefits, and they may also consider defraying start-up costs, such as waiving registration fees and subsidizing testing expenses. 

Much like ETSI could serve as a guiding foundation for establishing a set of baseline security requirements for Tier 1, the industry security efforts underway by the CTA and the CSA, among others, could become a foundation for establishing a higher bar of IoT product security paired with a consumer-facing IoT labeling scheme. 

Figure 9: Incentivizing Above the Baseline (Recommendation 3)

SOURCE: Patrick Mitchell for the Atlantic Council.

Recommendation 4: Governments should include Tier 2 requirements as part of government procurement contracts (Figure 10). Technology manufacturers and vendors strongly benefit from government contracts, and the inclusion of cybersecurity standards in government procurement requirements can be one mechanism to incentivize large and small manufacturers to adopt them. The cost-benefit calculus is simple for those companies: if they do not meet the specified cybersecurity requirements, they do not qualify for government contracts. Governments should therefore include Tier 2 (or higher) security standards in their procurement requirements such that any IoT manufacturer or vendor that wishes to do business with them must invest in a higher level of security beyond the Tier 1 baseline. 

The United States provides a recent case study in this approach with its IoT Cybersecurity Improvement Act of 2020, which requires federal agencies to abide by NIST cybersecurity guidelines when procuring IoT products. Thus, companies will not be able to sell their IoT products and services to the US federal government without complying with NIST cybersecurity guidelines. Procurement requirements in the UK, Singapore, and Australia, especially in the defense apparatuses, can similarly provide a mechanism by which the government can incentivize the adoption of a higher tier of cybersecurity practices. Since it tends to be too unwieldy for companies to produce multiple lines of the same product—one suitable for the government’s requirements and a separate less secure model—the entire market would benefit. This measure would not only incentivize companies to act but would also mean that IoT products used by governments will themselves have a higher bar of security. In turn, procurement is a mechanism by which to better protect government systems and, likely, citizen data against cyber risks as well. 

Figure 10: Incentivizing Above the Baseline (Recommendation 4)

SOURCE: Patrick Mitchell for the Atlantic Council.

Recommendation 5: In the short term, governments should reach agreements to mutually recognize each other’s labels. As different national IoT labeling schemes proliferate around the world, it will be important to reduce the burden on manufacturers from duplicative testing and certification requirements. In October 2021, Singapore and Finland agreed to mutually recognize each other’s labels for IoT products, hoping that this agreement will also spur more international collaboration. Through this agreement, companies that receive Finland’s Cybersecurity Label for a product are immediately eligible for Singapore’s Level 3 label, and vice versa. Though Germany is not one of the countries this report focuses on, its voluntary cybersecurity labeling program went live in January 2022, and it is reportedly in discussions with other countries to further expand mutual recognition. Given ETSI EN 303 645’s role as the backbone of multiple national frameworks, these agreements would likely be relatively simple to establish, recognizing that some agreements might focus on recognizing specific requirements while others might focus on recognizing equivalency—when similar outcomes are achieved with slightly different requirements. Major technology firms that care about improving the security of smart devices can apply for certification, even if it does not immediately benefit them, thus adding to the credibility of labeling programs. Countries with labeling programs already underway should study their impact, consider stakeholder feedback, adjust their schemes as needed, and share lessons learned with other countries interested in adopting this approach. Some of this analysis should focus on how to balance the need to maintain high standards with reducing the administrative burden on firms going through the certification process. Major IoT vendors have noted how onerous it is to submit their products to multiple IoT security certification processes; for smaller firms, it can only be more difficult. Solutions like a “Common Application form”—inspired by the innovation that allows individuals to apply to multiple US-based universities by filling out one document—could help address this problem, as can regularly reviewing program-specific requirements and dropping ones that do not add value. 

Recommendation 6: Over the longer term, governments should compare the results of their national labeling programs and move towards a single global model for communicating the security characteristics of an IoT product. As regulators in each of the four countries gather performance data on the impact of their approaches, they should work to adopt the attributes of the certification scheme(s) that show the most promise. Labels are already moving well past static forms with the inclusion of commonly accepted machine-readable formats, and more dynamic data sources, like SBOMs, might also be contemplated. Most fundamentally, any future consensus model to communicate the security characteristics of an IoT product (not its packaging) should include basic, easily understandable information affixed to the product, as well as more detailed and dynamic information found online. 

Oftentimes, currently issued IoT security labels and certifications fail to articulate exactly what certification means and how users should understand security. Further, by the time many consumers read an IoT product label, the product is already unboxed and undergoing setup in their home. These shortcomings impede buyers’ ability to understand IoT product labels and certifications—thus undermining their effectiveness. As part of this multi-tier framework, government and industry should ensure that, at their respective tiers, labels issued for IoT products have basic, easily understandable information affixed to the product itself. They should also ensure the same information is available online, supplemented with other details that manufacturers and vendors can more easily update over time. Instead of communicating in the highly technical language used by experts, governments and industry should look to their relevant communicators for help employing the clearest language possible: for instance, printing, for Tier 1, “No default passwords” on a box with a check mark next to it. Doing so will empower buyers to easily make decisions about the security and privacy of a product through easy-to-understand labels. 
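
To make the two-layer idea concrete, the sketch below models a hypothetical label record with a small set of plain-language fields intended for the physical product and a richer, updateable record intended to live online. The field names, URL, and structure are assumptions made for illustration; no government or standards body referenced in this report has specified such a format.

```python
from typing import Any, Dict

def build_label_record(product_name: str, tier: int, support_until: str,
                       report_contact: str, sbom_url: str) -> Dict[str, Any]:
    """Assemble a hypothetical two-layer IoT security label record.

    'on_package' holds short, plain-language claims meant to be printed on the
    product itself; 'online' holds richer details the manufacturer can update
    over time at a URL (or QR code) referenced by the physical label.
    """
    return {
        "on_package": {
            "No default passwords": True,            # Tier 1 claim, phrased for buyers
            "Security updates until": support_until,
            "Security tier": tier,
        },
        "online": {
            "product": product_name,
            "vulnerability_report_contact": report_contact,
            "sbom": sbom_url,                        # dynamic data source noted above
            "certification_status": "self-attested" if tier == 1 else "third-party tested",
        },
    }

# Example usage with placeholder values.
record = build_label_record(
    product_name="ExampleCam 2000",
    tier=1,
    support_until="2026-12-31",
    report_contact="security@example.com",
    sbom_url="https://example.com/examplecam-2000/sbom.json",
)
print(record["on_package"])
```

The split between the two dictionaries reflects the design choice described above: keep the printed label simple and stable, and let the online record carry the detail that changes over a product's life.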

Recommendation Set: Pursue international alignment on standards and implementation that cover entire IoT product lifecycle: Coherence between jurisdictions on enforcement mechanisms is important, but consistency in the principles of good security practice that form their foundation is even more critical. Given that security is a moving target, regulators must also be able to adapt as capabilities and threats shift. This section describes three recommendations that are key to these objectives: maintaining consensus on standards and scope, introducing regular reviews to keep IoT security programs up-to-date with technological change, and ensuring that all phases of the IoT lifecycle are appropriately addressed. 

Recommendation 7: Governments should pursue outcomes-based approaches to consumer IoT security rooted in agreed-upon basic security principles and maintain similar definitions for products considered “in-scope.” Efforts to secure consumer IoT should be rooted in widely recognized desirable security outcomes, though countries may find benefits in slightly different standards to achieve those outcomes. This focus on outcomes is already evident in the approaches taken by leading standards bodies: NIST notes that its “baseline product criteria for consumer IoT products are expressed as outcomes rather than as specific statements as to how they would be achieved,”119 while ETSI says that its “provisions are primarily outcome-focused, rather than prescriptive, giving organizations the flexibility to innovate and implement security solutions appropriate for their products.”120 ETSI EN 303 645 already underpins national efforts in the United Kingdom, Singapore, Australia, Finland, Germany, India, Vietnam, and elsewhere, which goes a long way toward ensuring a degree of uniformity in this space. As these countries have implemented national programs, they have supplemented the main ETSI EN 303 645 provisions with additional principles from other bodies, such as Singapore’s Infocomm Media Development Authority (IMDA) and Germany’s Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, or BSI). While some variation among requirements is perhaps inevitable, it risks becoming onerous for IoT vendors as additional provisions proliferate across jurisdictions. This highlights the importance of encouraging countries to strive for similar outcomes, not just similar standards. Other IoT security frameworks may be referenced to bolster specific aspects of IoT security that are outside the scope of guidance found in standards such as ETSI EN 303 645, particularly those that extend beyond the device hardware and into the product’s related software and apps. For instance, the App Defense Alliance has a framework that may be useful to reference while developing apps that are paired with physical IoT products. 

Similarly, governments must remain aligned on the products they consider “in-scope” for their IoT security efforts. ETSI EN 303 645, for example, covers “consumer IoT products that are connected to network infrastructure (such as the Internet or home network) and their interactions with associated services,” and provides a non-exhaustive list of examples that includes: 

“Connected children’s toys and baby monitors; connected smoke detectors, door locks and window sensors; IoT gateways, base stations and hubs to which multiple products connect; smart cameras, TVs and speakers; wearable health trackers; connected home automation and alarm systems, especially their gateways and hubs; connected appliances, such as washing machines and fridges; and smart home assistants.”121 

Governments should consider how far to draw the line on the systems, devices, and services to which IoT products connect—thinking about IoT cloud applications and other services that might fall under the scope of security baseline enforcement. For instance, the language in the UK’s PSTI Bill—as written—excludes many IoT products from the scope of an IoT device, thus limiting the potential benefits of a mandated security baseline. As a starting point, governments should consider enforcing the baseline on all IoT products as well as on the systems and services on which IoT products depend to function. For example, if the failure of an IoT cloud application would stop an IoT product from functioning, governments should consider including that application in the scope of a default password mandate. Governments should delegate this task to the relevant cybersecurity standards agency and then embed the recommended definitional scope in legislation, regulation, and other requirements. 

Recommendation 8: Governments and industry should review and, if necessary, update their respective tiers of standards every two years. Technology changes quickly, and future efforts must ensure that security guidance keeps up with the evolving threat landscape. Further, there is a question of “moving goalposts”—once a government, for example, has success in requiring industry to meet the Tier 1 baselines, it should aim to raise the baseline even further through additional updates. Nonetheless, while standards can provide more specific guidance for organizations, governments should also consider mapping those evolving standards to a set of broader, desired security outcomes. Governments and industry should then revisit their respective tiers on this two-year cycle, initiating update processes ahead of that timeline such that the final updated guidance is ready for release at, or ahead of, the end of each two-year interval. Updating requirements every year with appropriate government, industry, and civil society consultation may require too much time and too many resources needed elsewhere, but without regular updates (e.g., every two years), IoT cybersecurity standards will quickly become outdated. On the international stage, standards bodies, including ETSI and ISO, should continue to adapt guidelines as technological circumstances change and new information becomes available. This process should discard standards that are outdated, ineffective, or that contradict or undermine newer security guidance; modify existing standards based on new technologies and risks; and consider adding new standards to each tier given the current rate of progress. To implement these changes into regulation, the UK’s approach of empowering the DCMS secretary to define baseline security requirements—rather than “hard coding” them into legal text—provides an excellent model for replication. Law is extremely slow to change. However, if the appropriate agency or agencies receive the power to produce regulations and modify enforcement mechanisms within a stated scope of authority—and with appropriate government, industry, and civil society consultation—the result would be more regularly updated, and thus more relevant and useful, IoT security requirements. 

Recommendation 9: Governments should develop additional guidance around the sunsetting phase of the IoT product lifecycle. As illustrated in this report, many existing IoT security frameworks heavily skew towards the design, development, sale and setup, and maintenance phases of the lifecycle. Across best practice guidance, technical standards, and labeling and certification schemes, there is comparatively little IoT security focus on what happens when products are no longer receiving software security updates or must otherwise reach their end of life—and what manufacturers, vendors, and/or buyers should do to prepare for and handle that eventuality. This is a considerable oversight in existing IoT security approaches. It also risks replicating a problem seen before with more conventional parts of the internet ecosystem, such as organizations needing to use old products and systems long after it is reasonably secure to do so (e.g., those running Windows 95). Governments should therefore develop additional guidance around the sunsetting phase through their respective technical standard-setting organizations. Producing this sunsetting guidance will take time and should not necessarily hold up the development and deployment of the minimum baseline tier of IoT security certification, but it is essential for addressing all parts of the IoT product lifecycle in a security approach. 

These recommendations provide a sensible starting point to address the economic incentive issues that sustain consumer IoT’s insecurity while promoting the core policy objectives of eliminating the most glaring vulnerabilities, harmonizing requirements across jurisdictions, encouraging greater prioritization of security by manufacturers, increasing consumer awareness, and making an overdue impact without further delay. Implemented and updated continuously, they would help drive towards a world in which IoT product manufacturers build in better security from the start—referencing many of the same sets of baseline security standards, broadly consensus-based and harmonized across jurisdictions—and every other actor in the supply chain follows: manufacturers and vendors displaying understandable cybersecurity labels on products, retailers enforcing security requirements on those manufacturers and vendors, buyers looking to labels and other security guidance, and regulators ensuring that IoT security is better implemented across the entire device lifecycle.

Measuring success

As with many cybersecurity issues, simple quantification of the problem is challenging. The discovery of a single vulnerability—whether in the product itself or in commonly used software packages—can mean that millions of IoT products are suddenly at risk. But better methods to quantify IoT security risk are needed, both to understand the nature of the problem and to measure the success of policy interventions and security standards. Several data points may prove helpful in enhancing understanding of the overall threat ecosystem facing IoT products. 

  • Information on the number of in-scope products: One widely cited study from Transforma Insights, a market research firm, estimates that the number of active IoT products will grow to 24.1 billion by 2030, up from 7.6 billion in 2019, expanding on average 11 percent per year.122 (A quick arithmetic check of this growth rate appears in the sketch after this list.)
  • Information on attacks: After coming online, on average, an IoT product is probed within five minutes by tools that scan the web for vulnerable products, and many are targeted by exploits within 24 hours. Attacks on a simulated smart home, constructed by the UK consumer group called “Which?”, reached 12,000 in a single week.123 Kaspersky, a cybersecurity firm, maintains a network of “honeypot” devices to learn more about attacks, and measured 1.5 billion IoT attacks over the first half of 2021, up from 640 million over the same period a year prior.124 Defining an “attack” can be another tricky question, with some definitions including activities that range from a relatively benign probe by a popular scanner tool to an all-out compromise of the device. It would perhaps be most fruitful to focus efforts on activities that hint at active malicious activity, such as brute-forcing attempts or attempts to employ remote code execution exploits. 
  • Information on product insecurities: Unit 42, a team of threat intelligence researchers at Palo Alto Networks, estimates that 57 percent of smart devices are susceptible to medium- or high-severity attacks, while 98 percent lack encryption in their communications, putting confidential personal information at risk.125 Default manufacturer passwords, often the same for thousands of devices, provide some of the simplest entry points in the compromise of a device. In 2017, researchers at Positive Technologies found that five login/password combinations—support/support, admin/admin, admin/0000, user/user and root/12345—granted access to 10 percent of internet-connected devices.126
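
The first data point above lends itself to a quick arithmetic check. The sketch below back-solves the compound annual growth rate implied by growth from 7.6 billion active products in 2019 to a projected 24.1 billion in 2030, confirming that it is roughly 11 percent per year. The figures are those cited above; the calculation itself is illustrative only.

```python
def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by growing from start_value to end_value."""
    return (end_value / start_value) ** (1 / years) - 1

# Figures cited above: 7.6 billion active IoT products in 2019, 24.1 billion projected for 2030.
rate = implied_cagr(7.6, 24.1, 2030 - 2019)
print(f"Implied growth rate: {rate:.1%} per year")   # ~11.1%

# Projecting forward with the rounded 11 percent figure lands close to the 2030 estimate.
projected_2030 = 7.6 * (1.11 ** 11)
print(f"7.6B growing 11%/yr through 2030: {projected_2030:.1f}B")  # ~24.0B
```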

Measuring the impact of labels, standards, and legislation is harder still. In the UK, DCMS published a cost-benefit analysis in parallel with the filing of the PSTI Bill, one of the more admirable efforts to quantify this risk and the potential benefits of intervention. But as the NCSC notes, analyzing the cost of intrusions specific to connected consumer products is very difficult today, as the user does not necessarily notice the attack, and the line between what is and is not an attack may be blurry from an outside observer’s perspective.127 Better methods to measure the impacts of policy interventions must continue to be the subject of research. An initial—and non-exhaustive—list of these metrics may include the following (a sketch showing how a few of them might be computed from a product registry follows the list): 

  • Percent/number of products that meet various levels of security (as defined by ETSI/NIST/other frameworks). 
  • Percent of products using default passwords. 
  • Number of products infected with Mirai and other IoT malware. 
  • Percent of products sold whose company has a vulnerability reporting contact. 
  • Average response time / patch release time for critical vulnerabilities by product. 
  • Percent/number of unpatchable products in operation. 
  • Percent/number of products no longer receiving security updates in operation. 
  • Percent of customers who say they use product security as a key buying criterion. 
  • Percent of customers who say they trust the security of their IoT products. 
  • Number of IoT product vulnerabilities with high CVSS scores publicly disclosed (the assumption being at first a deluge of reporting as researchers start to focus on these products, and with time the number of found vulnerabilities decreasing). 
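
As a rough illustration of how a regulator or researcher might compute a few of these metrics, the sketch below aggregates a handful of hypothetical product-registry records. The record fields and sample data are invented for this example; real measurement would depend on whatever data a labeling or enforcement program actually collects.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ProductRecord:
    """Hypothetical registry entry for one consumer IoT product model."""
    name: str
    uses_default_password: bool
    has_vuln_report_contact: bool
    still_receives_updates: bool
    days_to_patch_critical: Optional[int]  # None if no critical vulnerability reported yet

def pct(flags: List[bool]) -> float:
    """Percentage of True values in a list of booleans."""
    return 100 * sum(flags) / len(flags) if flags else 0.0

def summarize(registry: List[ProductRecord]) -> dict:
    """Compute a few of the example metrics listed above from a product registry."""
    patch_times = [r.days_to_patch_critical for r in registry
                   if r.days_to_patch_critical is not None]
    return {
        "pct_default_passwords": pct([r.uses_default_password for r in registry]),
        "pct_with_vuln_contact": pct([r.has_vuln_report_contact for r in registry]),
        "pct_no_longer_updated": pct([not r.still_receives_updates for r in registry]),
        "avg_days_to_patch_critical": sum(patch_times) / len(patch_times) if patch_times else None,
    }

# Invented sample data for illustration only.
registry = [
    ProductRecord("smart-plug-a", True, False, True, None),
    ProductRecord("camera-b", False, True, True, 14),
    ProductRecord("doorbell-c", False, True, False, 60),
    ProductRecord("router-d", True, False, False, None),
]
print(summarize(registry))
```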

What’s next for labeling 

Throughout the conversations with government and industry players, one point of worldwide consensus shines through: there is a solid appetite to adopt some form of labeling scheme for consumer IoT devices. The benefits of such a scheme are plentiful. Collecting information on product security and making that information public offers exciting possibilities. Access to such information empowers purchasers and supports researchers and auditors in doing their work. IoT vendors have also recognized the benefits of labels from a marketing perspective, allowing them to use product security as a clearly articulated, understandable differentiator. 

While the interest in labeling is there, the logistics are still lacking. A slew of details needs ironing out, and getting them right is important for the IoT; as such, labeling merits dedicated future study. Plenty of questions exist around label design: How should it look? What information should it communicate? Beyond that are the bigger questions of how the system itself should work: Who could issue labels? What information would be needed to award a label? Where would that information be kept and stored, and how could it be accessed? Many details need workable answers—and there are many proposed ideas to sort through—before a labeling scheme can roll out on a global scale. 

Conclusion

Inadequate security for consumer IoT products is just one of many difficult emerging technology issues that require global coordination among public and private sector actors. A range of parallel efforts exists to address wide-ranging digital challenges, such as protecting the privacy of personal data, addressing anti-competitive behavior by tech giants, and countering online misinformation. The steady march of technology means that poorly designed interventions risk irrelevance. Worse, they can produce unintended consequences that leave the IoT more vulnerable to the very harms they are meant to prevent. 

Despite the perennially crowded global to-do list, reducing the threats from insecure consumer IoT products is overdue, attainable, and worthy of the world’s attention. This report likely gives short shrift to the many benefits of consumer IoT, but fully realizing its potential requires addressing its worst failings. These deficiencies—rooted not merely in technology but, more so, in economic incentives—mean that the IoT demands better policy intervention. A litany of proposals has at last turned into momentum behind some reasonable, consensus measures. As one interviewee said, “we cannot let the perfect become the enemy of the good.” 

From botnets that menace internet infrastructure to universal default passwords that allow hackers to invade user privacy, the impact on consumers is real, with risks that multiply in tandem with the number of connected devices. As Nathaniel Kim, Bruce Schneier, and Trey Herr contend, “these attacks are all the byproducts of connecting computing tech to everything, and then connecting everything to the Internet.”128 Unlike traditional appliances, which tend to degrade over predictable timescales and stop working individually, “computers fail differently.”129 They all work fine until, one day, the discovery of a vulnerability means finding a fix for every product of that particular model at once. As more and more things continue to become computers, they will increasingly fail like computers. The world needs processes, norms, and global standards fit for this new reality. 

Appendix I

Country-specific implementation plans 

This section discusses tangible, high-impact next steps that the UK, Singapore, Australia, and the United States can each take to bring about the global multi-tier system for IoT security detailed in our recommendations. 

As noted earlier, this research seeks to capitalize on existing momentum, whether international or intranational. There are multiple viable paths for governments that are consistent with our vision to (1) rid the world of IoT’s most glaring vulnerabilities and (2) harmonize international efforts to make it easier for firms to manufacture and sell products with even stronger security features. This implementation plan aims to nudge their approaches towards greater consistency, as opposed to calling for dramatic about-faces. 

UK 

Tier 1. Set the Baseline of Minimally Acceptable Security: 

Of the four countries examined in this report, the UK is closest to creating a mandatory baseline for a broad range of IoT products sold in its market. The PSTI Bill, currently advancing in the House of Lords, will set minimum security requirements for manufacturers and couple them with potent enforcement mechanisms. By empowering the DCMS secretary to set these guidelines, this baseline can keep pace with technological change without the need to constantly rewrite legislation. The UK government should take the following actions: 

  • Pass the legislation. The most obvious and immediate next step is for parliament to enact the PSTI Bill. Thus far, the proposed law has made its way through the legislative process with its core provisions intact. While it does not address everything on the wish list of security advocates, it is an ambitious effort that lawmakers should approve. The House of Lords has recommended a sensible amendment that will also protect security researchers conducting legitimate vulnerability research from intimidation and lawsuits by manufacturers.130 Given that the countdown for firms to comply with the new law begins one year after the bill receives Royal Assent—and that it has already been nearly nine months since its filing—consideration of further amendments should take into account the additional time they will add to the process. 
  • Identify a regulator. While the DCMS will define the cybersecurity provisions that manufacturers must abide by, it will not be the agency that enforces them. At the time of publication, the UK government had not publicly named the regulator responsible for enforcing the baseline product requirements. In its 2021 consultation, the DCMS sought recommendations on agencies well-positioned to serve in this role. Multiple respondents highlighted Trading Standards as a natural fit given its consumer protection role under Schedule 5 of the Consumer Rights Act 2015. Another was Ofcom, the UK’s communications regulator.131 The DCMS has also consulted with the Office for Product Safety and Standards in the Department for Business, Energy, and Industrial Strategy, another consumer product safety regulator.132 This report does not have a specific recommendation as to the best-positioned agency to assume this role, but the government should announce this decision and begin to build out the key elements of its enforcement capacity. 

Tier 2. Incentivize Above the Baseline: 

Unlike the other three countries profiled in this report, the UK government has for now explicitly rejected the approach of device labeling, choosing to initially focus the bulk of its efforts on setting the first tier of a mandatory baseline. Despite the challenges with cybersecurity labels, the team views them as the best option for encouraging manufacturers to invest in greater security as well as providing consumers with accessible information. In partnership with NCSC, the DCMS should: 

  • Provide “forward guidance” on provisions that it aims to mandate next. Like a good central bank, the DCMS should provide predictability in its intended future actions while remaining flexible to change in the face of new information. While the UK plans to begin with the so-called “top three” measures in its initial list of mandatory requirements, one of the key design principles of its approach is the ability to gradually ratchet up the baseline with new provisions. Through public announcements and meetings with industry, DCMS can telegraph where regulation is headed and allow security-minded firms to bring their products into compliance before the measures become mandatory. For starters, the DCMS should look to the World Economic Forum (WEF) statement that highlights two additional ETSI principles as the logical next steps: ensure that products communicate securely and safeguard personal data.133 Other impactful measures could include a guideline requiring manufacturers to provide security updates for a minimum period consistent with the average length of time consumers use a product, which can vary by product category. The DCMS could go even further by publishing the planned effective dates of new security requirements years in advance. These provisions can change as cybersecurity threats and commercial considerations change. 
  • Study the impact of cybersecurity labels in other markets and be prepared to reevaluate if they achieve results. Thus far, research on cybersecurity labeling for smart devices remains largely limited to surveys about consumers’ hypothetical willingness to pay more for products that carry an indicator of greater security. Now that several countries have introduced labeling programs, “real world” data on their performance should begin to emerge, both as it relates to changing consumer behavior and to addressing the downstream ills of insecure devices. If it becomes apparent that one or more of these labeling approaches is achieving success—or gaining traction as an international standard—the UK government should remain open to adopting it in its market. 

Singapore 

Tier 1. Set the Baseline of Minimally Acceptable Security:  

While Singapore’s CLS for consumer IoT is largely voluntary, it provides the regulatory infrastructure for a program that gradually expands to establish a baseline level of security for all devices. Internet routers sold in its market already must meet the provisions of the CLS Level 1 label, which map directly to the UK’s “top three” requirements that will be enforced under its proposed PSTI Bill. In consultation with IMDA and other partners, the CSA should: 

  • Make the CLS Level 1 label mandatory for more product categories. Internet routers have been a wise starting point: they have an outsize presence in today’s botnets and can have security knock-on effects that threaten consumers’ other smart home devices. Perhaps unsurprisingly, routers now account for over half of the CLS labels issued.134 The CSA should consider the next highest-priority product categories that will need to meet these minimum security measures, incorporating criteria like the (lack of) maturity of the category’s cybersecurity features and the privacy risk to individuals if products are compromised. IP cameras, connected baby toys, and smart locks are strong candidates. 
  • Add to the security provisions required as part of the CLS Level 1 label, especially those related to secure development practices. CLS includes 76 security provisions, with roughly half required by one or more of its levels, while the others are merely recommended. Level 1 currently has 13 required provisions. Level 2, which primarily concerns product lifecycle and secure development practices, has 17 required provisions—eight drawn from ETSI EN 303 645 and nine from the IMDA’s IoT Cyber Security Guide. Over time, the CSA should aim to collapse the most impactful Level 2 requirements into Level 1, while removing those not seen as value-added. Alternatively, the CSA could keep the same provisions in each CLS level and gradually require that devices meet the second level. Since both CLS Levels 1 and 2 rely on manufacturer self-attestation, these changes should not require any operational changes in administering the program. 

Tier 2. Incentivize Above the Baseline: 

CLS has seen dramatic growth since the beginning of 2022, with the number of labels issued tripling during that timeframe. But the gains are not evenly distributed: of the 176 labels issued by CSA as of July 2022, 148 are at the Level 1 designation, an additional 16 are at Level 2, and 10 are for Level 4.135 As mentioned earlier, many of the recipients of labels are internet routers, where the Level 1 label is mandatory. A key selling point of its multi-tier system is the ability to provide manufacturers with a reason to go above and beyond the bare minimum. To this end, CSA should: 

  • Conduct a review of the program’s effectiveness in addressing the core problems associated with IoT insecurity and publish the findings. As the country with the most mature cybersecurity labeling program, Singapore is in a unique position to gather information on the successes and challenges of this regulatory approach. How have consumers adapted their purchasing behavior since its launch? Has the number of insecure devices sold in Singapore decreased? What have been the challenges for firms? Have there been impacts beyond Singapore’s borders? This review could also help improve the structure of the program. For example, it might review the fitness of the CLS tier structure. The inclusion of more levels makes sense if it adds to the range of choice for consumers and manufacturers to select the appropriate certification level that meets their needs. If no one selects it—currently the case for CLS Level 3—it is possible to simplify the scheme. The report’s “Measuring Success” section includes some example metrics that could help gauge a topic that is notoriously difficult to quantify. The results will be helpful for Singapore, but just as critically, for the large number of countries and industry bodies that are experimenting with cybersecurity labels for IoT products. 
  • Pursue an agreement with Germany for mutual recognition of cybersecurity labels. Finland and Singapore’s agreement shows that binary and multi-tier labeling approaches need not conflict. Germany, which launched its own binary label in January 2022, should also pursue mutual recognition with Singapore and Finland. All three countries draw largely from the same list of ETSI EN 303 645 security provisions. Partnering with a market of Germany’s size would add significant momentum to Singapore’s approach to securing IoT, while reducing the burden of duplicate testing and certification for firms. This approach should be pursued with any country that adopts an IoT labeling program found to be largely compatible with the existing Singaporean program. 
  • Consider measures to encourage broader adoption of the labeling scheme. Anecdotal evidence suggests that many security-minded firms have been eager to participate in the program, but the CSA should continue to search for ways to increase its attractiveness. While the program will eventually need to generate revenue to cover its costs, the CSA could extend the moratorium on application fees, or even subsidize testing for devices at higher levels of security. 

Australia 

Tier 1. Set the Baseline of Minimally Acceptable Security: 

Since the conclusion of its Call for Views in August 2021, Australia’s DHA has been relatively quiet in public on its path forward for the regulation of consumer IoT. Whatever its ultimate action, it is evident that Australia aims to take a more hands-on approach than its past voluntary measures. To establish this minimum baseline, the DHA should: 

  • Select a regulatory approach for mandating basic security requirements for devices sold in its market. Australia has multiple approaches at its disposal and should continue to study the benefits and drawbacks of programs in the UK, Singapore, and elsewhere. The options it is most seriously considering are either a mirror image of the UK’s minimum security standards or a four-level “graded shield” that appears very similar to Singapore’s CLS. Australia’s voluntary Code of Practice, which aligns with ETSI EN 303 645, should provide a strong foundation that will have prepared Australian businesses for more stringent enforcement. 
  • If pursuing a minimum security standard, align its approach with the PSTI Bill’s planned enforcement measures. At a minimum, these measures should include the “top three,” banning universal default passwords and mandating vulnerability reporting contacts and transparency on security updates. Preferably, it would also include additional provisions on securing personal data, encrypted communications, and minimum acceptable support periods for security updates. Currently, Australian Consumer Law does not require firms to adhere to any principles meant to reduce cyber risk, “only that they cannot make misleading or deceptive representations about the cyber security of their products.”136 This baseline could be achieved either through a new law, modeled on the UK’s PSTI Bill, or an expansion of Australia’s existing Consumer Law to incorporate protections against the most basic flaws in cybersecurity in its definition of “acceptable quality” and “fit for purpose.”137
  • If pursuing a multi-level labeling approach, follow a strategy of gradual mandates by product category. Given that it seems most drawn to a multi-tier label mirroring CLS, the clearest path for Australia is to follow Singapore’s strategy and gradually mandate a tier 1 label by product type, beginning with high-priority items like internet routers. The labeling scheme should include a broad definition of in-scope products, drawing from ETSI’s definition of smart devices. In addition to expanding mandates by product category, DHA can also raise the baseline over time by advancing along the other “axis” of incorporating more security provisions from higher security levels into its base tier. 

Tier 2. Incentivize Above the Baseline: 

The approach for incentivizing action to instill even greater security measures in its smart device market is highly related to Australia’s method for enforcing its baseline. As DHA notes, these measures need not be mutually exclusive. To promote a higher tier of security, it should: 

  • Select a cybersecurity labeling approach. A study conducted by the Behavioral Economics Team of the Australian Government compared the effects of multiple label options on consumers, finding that “participants were more likely to choose a device with a cyber security label than one without a label, by 13–19 percentage points.”138 While the graded shield was most impactful, it found that “expiry labels were still effective” and “a high security level or long expiry date increased the likelihood of choosing a device.”139 Each of these options appears likely to have its own benefits and drawbacks, but it is time to choose one and move forward with it. 
  • If pursuing an expiry-date label, study its effect and publish the findings. If it follows through on this proposal, Australia would be the first country to introduce a label that indicates the length of time manufacturers will provide security updates to a product. Studying this approach can help answer several questions about the impact of cybersecurity labels, particularly around the sunsetting phase. For example, are consumers incentivized to purchase, at a discount, devices that are about to go “off warranty”? As stated earlier, there is nothing wrong with national-level experimentation, as it can be beneficial in formulating new approaches that may be suitable for broader adoption. 
  • If pursuing a “graded shield” label, agree to mutual recognition with Singapore and other participating countries. The four-level labeling scheme that Australia appears likely to pursue bears many similarities to Singapore’s CLS. In this case, the two countries should aim to bring their programs into close harmony, including the definitions of in-scope devices, the security provisions included in each tier, and the processes for self-attestation and third-party testing. Over time, the DHA should work with the CSA to ensure that the programs evolve together consistently. Australia should then join the Singapore–Finland mutual recognition agreement, as well as any proposed agreement with Germany. 

United States 

Tier 1. Set the Baseline of Minimally Acceptable Security: 

In comparison to other jurisdictions, the United States has preferred a less interventionist approach. There are two main exceptions: the two states that have enacted legislation to impose minimum security standards on IoT products, and the IoT Cybersecurity Improvement Act of 2020, which requires federal agencies to procure only devices that meet NIST security guidelines. In this context, the team recommends: 

  • States should pass and enforce their own IoT security laws. California and Oregon led the way but should expand their laws to focus on more specific guidance for organizations and manufacturers less versed in cybersecurity, rather than just focusing on concepts like “reasonable security.” Ideally, they will do so in a way that does not lock in specific security measures into legal text but instead points toward another regulatory mechanism that more easily updates standards, such as the UK’s approach of empowering an agency to maintain these standards, or points them to guidelines set for federal government agencies by NIST. More states should follow in their footsteps, putting forth IoT security laws that incorporate the standards outlined by the US government, as well as considering standards established by others around the world. The states that have implemented these laws should also study their impact. It is not apparent that any enforcement actions have yet occurred, which indicates one of two possible scenarios: all devices sold in their markets are now compliant, or enforcement has been insufficient. The latter seems more likely than the former. 
  • The federal government should adopt the binary labeling approach proposed by NIST. In NIST’s February 2022 publication “Recommended Criteria for Cybersecurity Labeling for Consumer Internet of Things (IoT) Products,” the organization recommends pursuing a binary labeling approach.140 In this scenario, there would be a single label stating that a product has met baseline security standards. Implementing the binary label would be a first step towards goals such as defining minimum security standards, creating and implementing a labeling program, and starting to broadcast to consumers what they should be looking for when purchasing IoT products. Among other details, this will require identifying an owner for the program, and the FTC would be the strongest candidate. 

Tier 2. Incentivize Above the Baseline: 

President Biden’s 2021 Executive Order 14028 (Improving the Nation’s Cybersecurity) directed NIST to design a labeling program for IoT devices, which should also serve as a mechanism to encourage the adoption of security measures that exceed the minimum baseline. The program’s ultimate owner should: 

  • Provide incentives for industry to obtain labels. The US may look to Singapore and other countries that have adopted labeling programs to see how companies have been encouraged to participate and to reach for higher tiers. Fee waivers for label applications may be a good way of incentivizing participation during the first few years of the program. Industry would likely react positively to some form of compensation for the third-party testing required to earn a higher label. 
  • Provide liability protection for firms that pursue the higher, Tier 2 security standards. Experts have indicated that many players in industry would be incentivized to adopt higher security standards in exchange for liability protections. There are various types of liability protections that may be considered here, and this report leaves that determination up to the regulatory body. Such liability protection may take the form of a law passed by Congress outlining these protections or, alternatively, a publicly articulated approach by the FTC. 

Authors and acknowledgements

Patrick Mitchell is a consultant with the Atlantic Council’s Cyber Statecraft Initiative. He recently graduated from the Master in Public Policy program at Harvard University’s John F. Kennedy School of Government, where he studied issues at the intersection of emerging technology and global affairs, including a second-year thesis on international efforts to improve IoT security. Prior to this, he interned with the UN Secretary-General’s Office and worked for several years as a consultant with Accenture, where he supported federal, state, and local government agencies on projects related to technology strategy and digital transformation. He also holds a B.S. in Management from Boston College.

Justin Sherman is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative, where his work focuses on the geopolitics, governance, and security of the global Internet. He is also a senior fellow at Duke University’s Sanford School of Public Policy and a contributor at WIRED Magazine.

Liv Rowley is a former assistant director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). Prior to joining the Atlantic Council, Liv worked as a threat intelligence analyst in both the US and Europe. Much of her research has focused on threats originating from the cybercriminal underground as well as the Latin American cybercriminal space. Liv holds a BA in International Relations from Tufts University. She is based in Barcelona, Spain.

Acknowledgments: The authors thank Kat Megas, James Deacon, and Rob Spiger for their comments on earlier drafts of this document and Trey Herr and Bruce Schneier for support. Thanks to Nancy Messieh for her support with data visualization. The authors also thank all the participants, who shall remain anonymous, in multiple Chatham House Rule discussions about the issues.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

 

1    Knud Lasse Lueth, “State of the IoT 2020: 12 Billion IoT Connections, Surpassing Non-IoT for the First Time,” IoT-Analytics.com, November 19, 2020, https://iot-analytics.com/state-of-the-iot-2020-12-billion-iot-connections-surpassing-non-iot-for-the-first-time/.
2    Keumars Afifi-Sabet, “Critical Supply Chain Flaw Exposes IoT Cameras to Cyber Attack,” IT Pro, June 16, 2021, https://www.itpro.com/security/vulnerability/359899/critical-supply-chain-flaw-exposes-iot-cameras-to-cyber-attack.
3    “Consumer IoT Security Quick Guide: No Universal Default Passwords,” IoT Security Foundation, 2020, https://www.iotsecurityfoundation.org/wp-content/uploads/2020/08/IoTSF-Passwords-QG_FINAL.pdf.
4    Max Eddy, “Majority of IoT Traffic on Corporate Networks Is Insecure, Report Finds,” PCMag, February 26, 2020, https://www.pcmag.com/news/majority-of-iot-traffic-on-corporate-networks-is-insecure-report-finds.
5    Xu Zou, “IoT Devices Are Hard to Patch: Here’s Why—and How to Deal with Security,” TechBeacon, accessed August 17, 2022, https://techbeacon.com/security/iot-devices-are-hard-patch-heres-why-how-deal-security.
6    Gareth Corfield, “Research Finds Consumer-grade IoT Devices Showing up … On Corporate Networks,” The Register, October 21, 2021, https://www.theregister.com/2021/10/21/iot_devices_corporate_networks_security_warning/.
7    Graham Cluley, “These 60 Dumb Passwords Can Hijack over 500,000 IoT Devices into the Mirai Botnet,” Graham Cluley, October 10, 2016, https://grahamcluley.com/mirai-botnet-password/.
8    Manos Antonakakis et al., “Understanding the Mirai Botnet,” USENIX 26, August 2017, https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-antonakakis.pdf, 1093, 1098.
9    Antonakakis et al., “Understanding the Mirai Botnet,” 1105.
10    Antonakakis et al., “Understanding the Mirai Botnet,” 1105.
11    “Over 200,000 MikroTik Routers Compromised in Cryptojacking Campaign,” Trend Micro, August 03, 2018, https://www.trendmicro.com/vinfo/in/security/news/cybercrime-and-digital-threats/over-200-000-mikrotik-routers-compromised-in-cryptojacking-campaign.
12    “Fronton: A Botnet for Creation, Command, and Control of Coordinated Inauthentic Behavior,” Nisos (blog), May 19, 2022, https://www.nisos.com/blog/fronton-botnet-report/.
13    Donna Lu, “How Abusers Are Exploiting Smart Home Devices,” Vice, October 17, 2019,  https://www.vice.com/en/article/d3akpk/smart-home-technology-stalking-harassment.
14    Stephen Hilt et al., “The Internet of Things in the Cybercrime Underground,” Trend Micro, September 10, 2019, https://documents.trendmicro.com/assets/white_papers/wp-the-internet-of-things-in-the-cybercrime-underground.pdf.
15    Pascal Geenens, “IoT Hackers Trick Brazilian Bank Customers into Providing Sensitive Information,” Radware (blog), August 10, 2018, https://blog.radware.com/security/2018/08/iot-hackers-trick-brazilian-bank-customers/.
16    ETSI EN 303 645 – “Cyber Security for Consumer Internet of Things: Baseline Requirements,” European Telecommunications Standards Institute (ETSI), (Sophia Antipolis Cedex, France: June 2020), 10, https://www.etsi.org/deliver/etsi_en/303600_303699/303645/02.01.00_30/en_303645v020100v.pdf
17    “Internet of Things (IoT),” National Institute of Standards and Technology (NIST), accessed August 17, 2022, https://csrc.nist.gov/glossary/term/internet_of_things_IoT; Mehwish Akram, et al., “NIST Special Publication 1800-16: Securing Web Transactions,” NIST, June 2020,  https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1800-16.pdf
18    Apple Developer, “Developing apps and accessories for the home,” Apple, accessed August 25, 2022, https://developer.apple.com/apple-home/
19    “All Smart Home Products,” Resideo, accessed August 25, 2022, https://www.resideo.com/us/en/products/; “Resideo Pro,” Residio, accessed August 25, 2022, https://www.resideo.com/us/en/pro/
20    “Philips Hue, Smart Home Lighting Made Brilliant,” Philips, accessed August 25, 2022, https://www.philips-hue.com/en-sg; “Ring Video Doorbell,” Wink, accessed August 25, 2022, https://www.wink.com/products/ring-video-doorbell/.  
21    “Device Management,” Tuya, accessed August 25, 2022, https://www.tuya.com/product/device-management/device-management
22    Google Nest Help, ”Explore what you can do with Google Nest or Home devices,” Google, accessed August 25, 2022, https://support.google.com/googlenest/answer/7130274?hl=en; “Alexa Guard Plus,” Amazon, accessed August 25, 2022, https://www.amazon.com/b?ie=UTF8&node=18021383011
23    Amazon Web Services, “Security challenges and focus areas,” Amazon, accessed August 25, 2022, https://docs.aws.amazon.com/whitepapers/latest/securing-iot-with-aws/security-challenges-and-focus-areas.html; Dave McMillen, “Internet of Threats: IoT Botnets Drive Surge in Network Attacks,” Security Intelligence, April 22, 2021, https://securityintelligence.com/posts/internet-of-threats-iot-botnets-network-attacks/
24    “Outdoor and Industrial Wireless,” Cisco, accessed August 25, 2022, https://www.cisco.com/c/en/us/products/wireless/outdoor-wireless/index.html
25    “Defender Adapter,” Extreme Networks (data sheet), accessed August 25, 2022, https://cloud.kapostcontent.net/pub/679cf2be-16da-4b6c-91ed-7d504b47a5f1/defender-adapter-data-sheet
26    “Cognitive Campus Workspaces,” Arista, accessed August 25, 2022, https://www.arista.com/en/solutions/cognitive-campus.
27    “Maternal and Fetal Monitoring Systems,” Philips, accessed August 25, 2022, https://www.usa.philips.com/healthcare/solutions/mother-and-child-care/fetal-maternal-monitoring;   “Expression MR400,” Philips, accessed August 25, 2022, https://www.usa.philips.com/healthcare/product/HC866185/expression-mr400-mr-patient-monitor;  “Wearable Patient Monitoring Systems,” Philips, accessed August 25, 2022, https://www.usa.philips.com/healthcare/solutions/patient-monitoring/patient-worn-monitoring.
28    “Guardian Connect Continuous Glucose Monitoring,” Medtronic, accessed August 25, 2022, https://www.medtronicdiabetes.com/products/guardian-connect-continuous-glucose-monitoring-system
29    “Healthcare Sensing,” Honeywell, accessed August 25, 2022, https://sps.honeywell.com/us/en/products/advanced-sensing-technologies/healthcare-sensing
30    “Choose Your Country or Region,” Dexcom, accessed August 25, 2022, https://www.dexcom.com/global; “Sleep Apnea – Causes, Symptoms and Treatment,” Resmed, accessed August 25, 2022, https://www.resmed.com/en-us/sleep-apnea/.
31    Patrick Mitchell, “International Cooperation to Secure the Consumer Internet of Things,” (Cambridge: Harvard Kennedy School, April 5, 2022), 14. 
32    “Code of Practice for Consumer IoT Security,” United Kingdom Department for Digital, Culture, Media & Sport (DCMS) 2018, https://www.gov.uk/government/publications/code-of-practice-for-consumer-iot-security/code-of-practice-for-consumer-iot-security.
33    DCMS, “Code of Practice.”
34    PAE interview, United Kingdom National Cyber Security Centre (NCSC), Spring 2022.
35    “ETSI Releases World-leading Consumer IoT Security Standard,” news release, European Telecommunications Standards Institute (ETSI), Sophia Antipolis, June 30, 2020, https://www.etsi.org/newsroom/press-releases/1789-2020-06-etsi-releases-world-leading-consumer-iot-security-standard.
36    “The Product Security and Telecommunications Infrastructure (PSTI) Bill – Product security Factsheet,” United Kingdom Department for Digital, Culture, Media & Sport (DCMS), 2021, https://www.gov.uk/guidance/the-product-security-and-telecommunications-infrastructure-psti-bill-product-security-factsheet; “Product Security and Telecommunications Infrastructure Bill Explanatory Notes,” UK Parliament, accessed August 17, 2022, https://publications.parliament.uk/pa/bills/cbill/58-02/0199/en/210199en.pdf.
37    DCMS, “PSTI Product Fact Sheet.”
38    James Coker, “UK Introduces New Cybersecurity Legislation for IoT Devices,” Info Security, November 24, 2021, https://www.infosecurity-magazine.com/news/uk-cybersecurity-legislation-iot/.
39    “Regulation of Consumer Connectable Product Cyber Security,” RPC-DCMS-4353(2), United Kingdom Department for Digital, Culture, Media & Sport (DCMS), 2021, https://bills.parliament.uk/publications/43916/documents/1025.
40    Cybersecurity Labelling Scheme (CLS) Updates, Singapore Cyber Security Agency (CSA), 2021, https://www.csa.gov.sg/Programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme/updates.
41    Singapore Standards Council, “Technical Reference 91 – Cybersecurity Labelling for Consumer IoT,” Enterprise Singapore, 2021, https://www.singaporestandardseshop.sg/Product/SSPdtDetail/41f0e637-22d6-4d05-9de3-c92a53341fe5
42    Singapore Standards Council, “Technical Reference 91 – Cybersecurity Labelling.” 
43    Cybersecurity Labelling Scheme (CLS) Product List, Cyber Security Agency (CSA), 2022, https://www.csa.gov.sg/Programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme/product-list.
44    Senate Bill No. 327, Chapter 886, California Legislative Information, 2018, https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB327.
45    House Bill 2395, Chapter 193, Oregon State Legislature, 2019, https://olis.oregonlegislature.gov/liz/2019R1/Measures/Overview/HB2395. 
46    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020).
47    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020) at §4(a)(1),
48    Deborah George. “New Federal Law Alert: The Internet of Things (IoT) Cybersecurity Improvement Act of 2020 – IoT Security for Federal Government-Owned Device,” National Law Review, December 10, 2020, https://www.natlawreview.com/article/new-federal-law-alert-internet-things-iot-cybersecurity-improvement-act-2020-iot.
49    H.R. 1668 Rep. No. 116-501, Part I (2020), (Proclaiming the purpose of the IoT Cybersecurity Improvement Act of 2020 bill as “to leverage Federal Government procurement power to encourage increased cybersecurity for Internet of Things devices…”), https://www.congress.gov/bill/116th-congress/house-bill/1668/text/rh.
50    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020) at §4(a)(1) & (2)(B)(i)-(iv).
51    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020) at §4(a)(1) & (2)(B)(i)-(iv).
52    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020) at §4(c)(1)(A)-(B).
53    President Biden,“Executive Order 14028 on Improving the Nation’s Cybersecurity,” The White House, May 12, 2021, https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/.
54    “IoT Product Criteria,” National Institute of Standards and Technology (NIST), May 24, 2022, https://www.nist.gov/itl/executive-order-14028-improving-nations-cybersecurity/iot-product-criteria. 
55    “NIST Developed an IoT Label. How Do We Get It onto Shelves?” New America, March 1, 2022. https://www.youtube.com/watch?v=ZwDFb3DEkMw.
56    “Voluntary Code of Practice: Securing the Internet of Things for Consumers,” Australian Department of Home Affairs (DHA), [updated March 22, 2022], https://www.homeaffairs.gov.au/reports-and-publications/submissions-and-discussion-papers/code-of-practice. 
57    “Strengthening Australia’s cyber security regulations and incentives,” Australian Department of Home Affairs (DHA), [updated March 22, 2022], https://www.homeaffairs.gov.au/reports-and-publications/submissions-and-discussion-papers/cyber-security-regulations-incentives.
58    “Get ioXt Certified,” ioXt, accessed August 17, 2022, https://www.ioxtalliance.org/get-ioxt-certified.
59    “Authorized Labs,” ioXt, accessed August 17, 2022, https://www.ioxtalliance.org/authorized-labs.
60    “Certifying Your Product,” ioXt, accessed August 17, 2022, https://www.ioxtalliance.org/certifying-your-device 
61    “The Global Standard for IoT Security,” ioXt, accessed August 17, 2022, https://www.ioxtalliance.org.
62    “ioXt Alliance Closes Record Year of Membership Growth and Certifications,” Businesswire, January 19, 2022, https://www.businesswire.com/news/home/20220119005139/en/ioXt-Alliance-Closes-Record-Year-of-Membership-Growth-and-Certifications.
63    “IoT Security Assurance Framework,” IoT Security Foundation, November 2021, https://www.iotsecurityfoundation.org/wp-content/uploads/2021/11/IoTSF-IoT-Security-Assurance-Framework-Release-3.0-Nov-2021-1.pdf.
64    “IoT Security Foundation Members,” IoT Security Foundation, accessed August 17, 2022, https://www.iotsecurityfoundation.org/our-members/.
65    IoT Security Foundation, “IoT Security Assurance Framework.”
66    IoT Security Foundation, “IoT Security Foundation Members”; “Eurofins Digital Testing Your Trusted Partner in Quality,” Eurofins, accessed August 17, 2022, https://www.eurofins-digitaltesting.com
67    “OWASP IoT Security Verification Standard,” Open Web Application Security Project® (OWASP), accessed August 17, 2022, https://owasp.org/www-project-iot-security-verification-standard/; “IoT Security Verification Standard (ISVS),” GitHub, accessed August 17, 2022, https://github.com/OWASP/IoT-Security-Verification-Standard-ISVS.
68    “IoT Security Verification Standard (ISVS),” GitHub, accessed August 17, 2022, https://github.com/OWASP/IoT-Security-Verification-Standard-ISVS.
69    “About the OWASP Foundation,” Open Web Application Security Project® (OWASP), accessed August 17, 2022, https://owasp.org/about/.
70    GSM Association, “GSMA IoT Security Guidelines and Assessment,” Groupe Speciale Mobile, or GSMA, accessed August, 4, 2022, https://www.gsma.com/iot/iot-security/iot-security-guidelines/.
71    GSM Association, “IoT Security Assessment Checklist,” Groupe Speciale Mobile, or GSMA, accessed August 4, 2022, https://www.gsma.com/iot/iot-security-assessment/
73    CTA, “IoT Working Group,” Consumer Technology Association. 
74    CTA, “Cybersecurity Labeling, Conformity Assessment and Self-Attestation (CTA),” Consumer Technology Association, accessed September 22, 2022, https://www.nist.gov/system/files/documents/2021/09/03/CTA%20Position%20Paper%20on%20Cybersecurity%20Label%20Considerations%20Final.pdf.
75    CTA, “Member Directory,” Consumer Technology Association, accessed September 22, 2022, https://members.cta.tech/cta-member-directory?_ga=2.13576244.208474513.1663814734-503620203.1663814734&reload=timezone.
76    Connectivity Standards Alliance, accessed September 22, 2022, https://csa-iot.org/.
77    CSA, “Community, The Power of Membership,” Connectivity Standards Alliance, accessed September 22, 2022, https://csa-iot.org/members/.
78    “Device security,” Google Cloud, accessed August 17, 2022, https://cloud.google.com/iot/docs/concepts/device-security.
79    “Azure Certified Device – Edge Secured-core,” Microsoft, August 11, 2022, https://docs.microsoft.com/en-us/azure/certification/program-requirements-edge-secured-core?pivots=platform-linux.
80    “Architecture Security Features,” Arm, accessed August 17, 2022, https://developer.arm.com/architectures/architecture-security-features/platform-security.
81    To the reader: For instance, the ioXt Alliance has clear requirements and is clear about its desired means of improving IoT cybersecurity—“multi-stakeholder, international, harmonized, and standardized security and privacy requirements, product compliance programs, and public transparency of those requirements and programs”—but is not clear about its policy goals beyond general references to improving IoT cybersecurity, see: https://www.ioxtalliance.org/about-ioxt
82    Mitchell, “International Cooperation to Secure the Consumer Internet of Things,” 21. 
83    “International IoT Security Initiative,” Global Forum on Cyber Expertise (GFCE), accessed April 6, 2022, https://thegfce.org/initiatives/international-iot-security-initiative/
84    Harnessing the Internet of Things for Global Development, (Geneva: International Telecommunication Union, 2015), 7, https://www.itu.int/en/action/broadband/Documents/Harnessing-IoT-Global-Development.pdf.
85    Robert Morgus, Securing Digital Dividends: Mainstreaming Cybersecurity in International Development (Washington, D.C.: New America, April 2018), 38, https://www.newamerica.org/cybersecurity-initiative/reports/securing-digital-dividends/.
86    Nima Agah, “Segmenting Networks and De-segmenting Laws: Synthesizing Domestic Internet of Things Cybersecurity Regulation,” (Durham, NC: Duke University School of Law, 2022), 8–12.
87    Agah, “Segmenting Networks and De-segmenting Laws,” 8–12.
88    Efrat Daskal, “Establishing standards for IoT devices: Recent examples,” Diplo (blog), December 16, 2020, https://www.diplomacy.edu/blog/establishing-standards-for-iot-devices-recent-examples/.
89    DHA, “Strengthening Australia’s Cyber Security Regulations and Incentives.”
90    US National Institute of Standards and Technology, “Cybersecurity “Rosetta Stone” Celebrates Two Years of Success,” National Institute of Standards and Technology, accessed September 22, 2022, https://www.nist.gov/news-events/news/2016/02/cybersecurity-rosetta-stone-celebrates-two-years-success.
91    US National Institute of Standards and Technology, “Cybersecurity “Rosetta Stone” Celebrates Two Years of Success.”
92    Danielle Kriz, “Governments Must Promote Network-Level IoT Security at Scale,” Palo Alto Networks, December 8, 2021, https://www.paloaltonetworks.com/blog/2021/12/network-level-iot-security/.
93    David Hoffman, Interview with report author, April 6, 2022.
94    Cyber Security Agency, “CSA | Cybersecurity Labelling Scheme – For Manufacturers,” Accessed September 22, 2022, https://www.csa.gov.sg/Programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme/for-manufacturers.
95    The Trust Opportunity: Exploring Consumer Attitudes to the Internet of Things, Internet Society and Consumers International, May 1, 2019, https://www.internetsociety.org/resources/doc/2019/trust-opportunity-exploring-consumer-attitudes-to-iot/.
96    Internet Society and Consumers International, The Trust Opportunity.
97    To the reader, it is important to note that users of IoT products must also play a role in ensuring device security. For instance, it is not enough for vendors to make patches; consumers must be sure to apply said patches.
98    Ron Ross, Michael McEvilley, and Janet Oren, “Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems,” National Institute of Standards and Technology, March 21, 2018, https://doi.org/10.6028/NIST.SP.800-160v1.
99    Department for Digital, Culture, Media & Sport, “The Product Security and Telecommunications Infrastructure (PSTI) Bill – product security factsheet.”
100    Department for Digital, Culture, Media & Sport, “The Product Security and Telecommunications Infrastructure (PSTI) Bill – product security factsheet.”
101    ETSI, “Cyber; Cyber Security for Consumer Internet of Things: Baseline Requirements,” European Telecommunications Standards Institute, accessed September 22, 2022, https://www.etsi.org/deliver/etsi_en/303600_303699/303645/02.01.01_60/en_303645v020101p.pdf. 
102    “Code of Practice: Securing the Internet of Things for Consumers,” the Australian Government, accessed September 22, 2022, https://www.homeaffairs.gov.au/reports-and-pubs/files/code-of-practice.pdf.
103    CSA Singapore, “Cybersecurity Labelling Scheme (CLS),” Cyber Security Agency Singapore, accessed September 22, 2022, https://www.csa.gov.sg/Programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme/about-cls.
104    IoT Cybersecurity Improvement Act of 2020, H.R.1668, 116th Cong. (2020). https://www.congress.gov/bill/116th-congress/house-bill/1668.
105    IoT Cybersecurity Improvement Act of 2020
106    The White House Briefing Room, “Executive Order on Improving the Nation’s Cybersecurity,” The White House, accessed September 22, 2022, https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/
107    US National Institute of Standards and Technology. Foundational Cybersecurity Activities for IoT Device Manufacturers. NISTIR 8259. Michael Fagan et al. Gaithersburg: National Institute of Standards and Technology, May 2020. https://csrc.nist.gov/publications/detail/nistir/8259/final. 
108    US National Institute of Standards and Technology. Foundational Cybersecurity Activities for IoT Device Manufacturers. NISTIR 8259A. Michael Fagan et al. Gaithersburg: National Institute of Standards and Technology, May 2020. https://csrc.nist.gov/publications/detail/nistir/8259a/final.
109    Michael Fagan et al., “Profile of the IoT Core Baseline for Consumer IoT Products,” National Institute of Standards and Technology (NIST), June 17, 2022, https://csrc.nist.gov/publications/detail/nistir/8425/draft.
110    Michael Fagan et al., “Profile of the IoT Core Baseline for Consumer IoT Products.”
111    IoT Security Foundation, “IoT Security Assurance Framework.”
112    Robert Lemos, “New IoT Security Bill: Third Time’s the Charm?” Dark Reading, March 2019. https://www.darkreading.com/iot/new-iot-security-bill-third-time-s-the-charm-.
113    Brian Russell and Drew van Duren. Practical Internet of Things Security – Second Edition, Packt Publishing, (Birmingham, UK: 2018).
114    Eustace Asanghanwa, “Solving IoT device security at scale through standards,” Microsoft (blog), September 21, 2020, https://techcommunity.microsoft.com/t5/internet-of-things-blog/solving-iot-device-security-at-scale-through-standards/ba-p/1686066.
115    To the reader, this is not to say that organizations should always use passwords as the go-to authentication mechanism in the future—but that if organizations are doing so now, they should not use universal default ones.
116    GCA Internet Integrity Papers: IoT Policy and Attack Report, Global Cyber Alliance (GCA), October 2021, https://www.globalcyberalliance.org/wp-content/uploads/IoT-Policy-and-Attack-Report_FINAL.pdf.
118    Andrew Laughlin, “How a smart home could be at risk from hackers,” Which?, 2021, https://www.which.co.uk/news/article/how-the-smart-home-could-be-at-risk-from-hackers-akeR18s9eBHU.
119    “Labeling for Consumer Internet of Things (IoT) Products,” National Institute of Standards and Technology (NIST), February 2022. https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.02042022-2.pdf. 
120    ETSI EN 303 645 – “Cyber Security for Consumer Internet of Things: Baseline Requirements.”
121    ETSI EN 303 645 – “Cyber Security for Consumer Internet of Things: Baseline Requirements.”
122    “Global IoT market to grow to 24.1 billion devices in 2030, generating $1.5 trillion annual revenue,” Transforma Insights, May 19, 2020, https://transformainsights.com/news/iot-market-24-billion-usd15-trillion-revenue-2030. 
123    “How a smart home could be at risk from hackers,” Which?, July 2, 2021, https://www.which.co.uk/news/article/how-the-smart-home-could-be-at-risk-from-hackers-akeR18s9eBHU.
124    “Kaspersky Detects 1.5B IoT Cyberattacks This Year,” PYMNTS, September 3, 2021, https://www.pymnts.com/news/security-and-risk/2021/kaspersky-detects-iot-cyberattacks-double-last-year/.
125    “2020 Unit 42 IoT Threat Report,” Unit 42, March 10, 2020, https://unit42.paloaltonetworks.com/iot-threat-report-2020/.
126    Catalin Cimpanu, “15% of All IoT Device Owners Don’t Change Default Passwords,” BleepingComputer, June 19, 2017, https://www.bleepingcomputer.com/news/security/15-percent-of-all-iot-device-owners-dont-change-default-passwords/.
127    DCMS, “Regulation of Consumer Connectable Product Cyber Security,” RPC-DCMS-4353(2).
128    Nathaniel Kim, Trey Herr, and Bruce Schneier, The Reverse Cascade: Enforcing Security on the Global IoT Supply Chain, The Atlantic Council, June 2020, https://www.atlanticcouncil.org/in-depth-research-reports/report/the-reverse-cascade-enforcing-security-on-the-global-iot-supply-chain/
129    Bruce Schneier, “Security in a World of Physically Capable Computers,” Schneier on Security (blog), October 12, 2018, https://www.schneier.com/blog/archives/2018/10/security_in_a_w.html.
130    Alex Scroxton, “Lords Move to Protect Cyber Researchers from Prosecution,” Computer Weekly, June 2022, https://www.computerweekly.com/news/252521716/Lords-move-to-protect-cyber-researchers-from-prosecution.
131    “Government Response to the Regulatory Proposals for Consumer Internet of Things (IoT) Security Consultation.” United Kingdom Department for Digital, Culture, Media & Sport (DCMS), February 2020, https://www.gov.uk/government/consultations/consultation-on-regulatory-proposals-on-consumer-iot-security/outcome/government-response-to-the-regulatory-proposals-for-consumer-internet-of-things-iot-security-consultation
132    “Proposals for Regulating Consumer Smart Product Cyber Security – Call for Views,” United Kingdom Department for Digital, Culture, Media & Sport (DCMS), October 2020, https://www.gov.uk/government/publications/proposals-for-regulating-consumer-smart-product-cyber-security-call-for-views/proposals-for-regulating-consumer-smart-product-cyber-security-call-for-views.  
133    “IoT security: How We Are Keeping Consumers Safe from Cyber Threats,” World Economic Forum, February 2022, https://www.weforum.org/impact/iot-security-keeping-consumers-safe/.
134    CSA, Cybersecurity Labelling Scheme (CLS) Product List.
135    CSA, Cybersecurity Labelling Scheme (CLS) Product List. 
136    DHA, “Strengthening Australia’s Cyber Security Regulations and Incentives.”
137    DHA, “Strengthening Australia’s Cyber Security Regulations and Incentives.”
138    “Stay Smart: Helping Consumers Choose Cyber Secure Smart Devices,” Behavioural Economics Team of the Australian Government (BETA), March 2022, https://behaviouraleconomics.pmc.gov.au/sites/default/files/projects/beta-report-cyber-security-labels.pdf.
139    BETA, “Stay Smart: Helping Consumers Choose.”
140    NIST, “Recommended Criteria for Cybersecurity Labeling.”

The post Security in the billions: Toward a multinational strategy to better secure the IoT ecosystem appeared first on Atlantic Council.

The 5×5—Reflections on trusting trust: Securing software supply chains https://www.atlanticcouncil.org/content-series/the-5x5/the-55reflections-on-trusting-trust-securing-software-supply-chains/ Thu, 12 May 2022 04:01:00 +0000 https://www.atlanticcouncil.org/?p=522510 Five experts discuss the implications of insecure software supply chains and realistic paths to securing them. 

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Nearly every bit of technology and infrastructure that enables modern society to function runs on software. Lines of code—in some cases millions of them—underpin systems ranging from smart electronic kettles to fifth-generation fighter jets. A significant portion of software is cobbled together with and dependent on many pieces of open-source, non-proprietary code that, by and large, is built and maintained by volunteer software engineers for whom security may not be a top priority. As such, vulnerabilities abound in widely used systems, and securing the entirety of the increasingly complex software supply chain is no easy feat. 

Notable examples like the Sunburst/SolarWinds cyber-espionage campaign shed light on how adversaries are increasingly exploiting vulnerabilities in the software supply chain to compromise critically important public- and private-sector systems, and yet software supply chain security remains an underdeveloped aspect of public policy with meaningful gains starting to be made only more recently. On May 12, 2021, US President Joe Biden signed Executive Order (EO) 14028 on Improving the Nation’s Cybersecurity, a portion of which addressed the need to enhance software supply chain security. On May 5, 2022, in accordance with the EO, the National Institute of Standards and Technology (NIST) released its updated Cybersecurity Guidance for Supply Chain Risk Management.  

Between 2010 and 2021, according to the Atlantic Council’s Breaking Trust dataset, at least forty-two attacks or vulnerability disclosures involved open-source projects and repositories. Public policy still has significant room to address how both government and software developers improve the security of software supply chains, especially the wider health of the open-source ecosystem. 

We brought together five experts with a range of perspectives to discuss the implications of insecure software supply chains and realistic paths to securing them. 

#1 How does the security of software supply chains impact national security? 

Jennifer Fernick, SVP & global head of research, NCC Group; member of the Governing Board, Open Source Security Foundation (OpenSSF)

“Software is the core infrastructure of almost every aspect of contemporary life. A security vulnerability in any aspect of a system or network that underpins government, intelligence, defense operations, financial transaction networks, energy companies, telecommunications, food and pharmaceutical supply chains, or other public- or private-sector critical infrastructure can have devastating consequences in the physical world. The now-popular notion of securing the ‘software supply chain’ ultimately reflects a growing apprehension that the security risks of computing systems are many layers deeper, more invisible, and more interdependent and impactful than they initially seemed.” 

Amélie Koran, senior fellow, Cyber Statecraft Initiative; director of external technology partnerships, Electronic Arts, Inc.

“Security does not always mean reliability. It is an outcome in most cases, such as resiliency is, but for most, the typical CIA triad of confidentiality, integrity, and availability drives most assessments of basic system and software security. While we can develop a very secure national security system—from defense to utility and other infrastructure—the additional security components sometimes hurt reliability or resiliency, as they must perform a set of steps or lack inclusion of typical convenience features that non-critical or national security systems tend to avoid, as they may just be bad security choices. 

In this case, the selection of components for ensuring a secure system, including the construction, building, and testing, as well as the operation of those utilized for national security purposes, often becomes very oblique and even harder to manage. For many years those building these systems eschewed open-source software due to the fear that unexpected or unpredictable inclusions in the code base would make systems less secure. Over time, however, they have realized that, in most cases, the transparency of such software sources results in more rapid fixes, and thus it closes the windows of opportunity for potential attackers versus closed-source or even more bespoke systems that do not come under as much scrutiny. In other words, some of the most trusted national security solutions may actually be the most insecure for lack of iterative tests and tests of rigor by others. A test is only as good as the test developer, and once you cut those creators and testers down, fewer issues tend to get caught and resolved.” 

John Speed Meyers, security data scientist, Chainguard

“It is not too much of a stretch to say that the functioning of most digital systems—including those of western militaries, governments, and societies—has become deeply reliant on a hard-to-understand and hard-to-secure software supply chain. It is like we built a digital Manhattan on a foundation of quicksand and swamps.” 

Wendy Nather, senior fellow, Cyber Statecraft Initiative; head of advisory CISOs, Cisco

“No matter how they are compromised or by whom, software supply chains have an outsized network effect on the security and stability of everything from utilities to emergency response, transportation, healthcare, aviation, and public safety. As everything digitally transforms, the attack surface grows in subtle, remotely accessible ways, and it potentially affects even those populations without access to technology.” 

Stewart Scott, assistant director, Cyber Statecraft Initiative

“Software is eating the world, or so I am told, so securing any system or application relevant to national security is critical—everything from computer systems on fighter jets to government-adjacent email accounts. Securing one’s own systems is challenging enough, but modern software services are a patchwork of in-house programs, purchased products, imported libraries, cloud applications, open-source components, and even copy-and-pasted code. Security for software supply chains extends the challenge to using others’ code securely, identifying what products are developed and maintained securely, and even figuring out what dependencies exist. It is incredibly complicated and difficult for security everywhere, and national security especially, as incidents like the Sunburst campaign and log4j illustrate.” 

#2 What are the challenges to building more secure software supply chains? Do developers, intermediaries, consumers know what they need to do? 

Fernick: “Vulnerabilities are cheap to create but expensive to find—even the best programmers in the world regularly write code with security vulnerabilities, and even the best security tools on earth will fail to find all of these vulnerabilities without the time-intensive intervention of human experts. Yet, even theoretically perfectly secure code would more likely than not depend on another piece of software that is full of exploitable vulnerabilities, will run on an insecure operating system on top of flawed hardware, and be deployed through a build pipeline that can be compromised by attackers. Security is very hard, and yet I feel like as an industry we push too much of this responsibility downstream to other developers and users who are not equipped to face it. Instead, we need to ‘shift left’ and assume that vulnerabilities are present, but find ways of reducing them at scale or detecting and remediating them early, through improved programming languages and frameworks, scalable vulnerability-finding tools, and other systematic investments in improving the ecosystem as a whole.” 

Koran: “Complexity is by far one of the greatest challenges to securing software and digital supply chains. Most tools and systems that are available to organizations provide a patchwork of awareness of the overall risk, and require a high level of competence in the minutiae of the software or system in order to generate a plan of action, short of a basic “update this code with something deemed more secure.” One of the major issues in all of this is also consumer based. In most cases, it takes extra effort to track back the health of code or source components that may be utilized to build systems. These may be stitched together, but, once together, could increase the insecurity or risk of operations in certain configurations.  

Put it this way: while the suggestion of a software bill of materials (SBOM) may tell you what is in the box, it does not tell you if the ingredients are good for you. It is only one portion of a chain of decision-support processes that are necessary to build safer and more secure software supply chains. Imagine standing in the cereal aisle at a grocery store, and while you can look at the ingredients between a toasted whole wheat cereal and “sugar bombs,” what appetite are you satisfying chasing one over the other? Technically, the base components of the grains and such within them may be very close to one another, but if there is more of one bad item (e.g., preservatives or sugar), while it solves the same task of giving you a breakfast, the satisfaction on consumption may be different. The same goes for software development. Pick a quick and dirty “all-in-one” suite that solves problems quickly, but may be opaque as to what is in it and how it was built, or make an artisan selection of bespoke code—you will have a lot of potential work ahead from choosing a turn-key opportunity provided by the former over the latter.” 
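
Koran's ingredients-list analogy maps directly onto what a software bill of materials actually contains. The sketch below is a hypothetical, hand-written inventory, loosely modeled on the component lists in formats such as CycloneDX and SPDX and expressed here as a plain Python structure; the product, component names, versions, and suppliers are all illustrative assumptions.

```python
# Hypothetical SBOM-style component inventory, loosely modeled on the component
# lists in formats such as CycloneDX or SPDX. Names, versions, and suppliers are
# illustrative assumptions; a real SBOM would be generated by tooling.
sbom = {
    "product": "example-smart-hub-firmware",  # hypothetical product
    "version": "1.4.2",
    "components": [
        {"name": "openssl", "version": "1.1.1k", "supplier": "OpenSSL Project"},
        {"name": "busybox", "version": "1.31.1", "supplier": "BusyBox Project"},
        {"name": "json-parser-lib", "version": "0.9.0", "supplier": "unknown"},  # hypothetical component
    ],
}

# The SBOM says what is "in the box"...
for component in sbom["components"]:
    print(f'{component["name"]} {component["version"]} (supplier: {component["supplier"]})')

# ...but judging whether an ingredient is "good for you" is a separate step,
# e.g., checking each component against vulnerability or end-of-life data.
# That data lives outside the SBOM itself and is not shown here.
```

The inventory names what is in the box; judging whether any ingredient is good for you requires a separate step, such as checking each component against vulnerability or end-of-life data, which is exactly the chain of decision-support processes described above.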

Meyers: “There are many. If you are reading this article on a computer of some sort, ask yourself why you trust the chain of software that allows you to read this article. If you have no answer, try to find a little solace in the knowledge that most experts do not have a great answer either. And nobody really knows what they need to do, although software supply chain integrity frameworks like SLSA (Supply chain Levels for Software Artifacts, pronounced “salsa”) are a good start.” 

Nather: “Software is organic and dynamic; it changes faster than humans can follow, as it is the result of human contributions on a worldwide scale. Software challenges cannot be compared directly to a hardware supply chain or a manufacturing line because of these additional complexities. Only with specific expertise can developers and intermediaries know what they need to do, and many developers do not come from the traditional educational pipelines any more. The answer is NOT to rely on consumers to have this expertise to make their own market-driven choices; it is to ensure that security is baked into the software standards, protocols, tools, and automation at scale.” 

Scott: “Writ large, the challenge is identifying and managing an incredible number of rapidly changing relationships that range everywhere from import lines to massive government acquisition contracts. There is a huge range of understanding about and capability to address software supply chain security. If developers, maintainers, vendors, CIOs, and consumers aren’t all on the same page about how to improve supply chain security—let alone provided with sufficient resources to get things moving—there’s going to be progress in some places and awkward lapses elsewhere. For example, GitHub is moving towards universal multi-factor authentication, which will help secure massive amounts of open-source components, but many entities will not even know the degree to which they are relying on and/or contributing to that code, especially at higher organizational levels.” 

#3 How would you describe the state of public-private sector collaboration on securing software supply chains? Compared to where it could be? 

Fernick: “I am optimistic and encouraged by the proactive and collaborative engagement between senior US government officials and the Open-Source Security Foundation (OpenSSF), a cross-industry effort to improve the security of the open-source ecosystem, which I helped to establish with colleagues across the industry in early 2020. The January 2022 White House meeting on Software Security brought together a powerful alliance of public- and private-sector organizations, including OpenSSF, to discuss initiatives to: (1) prevent open-source software security vulnerabilities, (2) improve coordinated vulnerability disclosure, and (3) reduce vulnerability remediation times. In mid-May, we will be returning to Washington, DC, with a bold mobilization plan for exactly that: several carefully defined initiatives that will together help radically improve the security of the world’s most critical open-source software.” 

Koran: “As much as somebody would want to rehash “I’m from the government, I’m here to help” as an opening line, developers and other technicians tend not to like bureaucracy meddling in their creations and orbits, especially if it means more under-resourced work for them to perform. While compliance is not the best carrot to achieve results either, a hybrid model of standards could be accepted as a method to align, much like NIST’s Special Publication series, which both public and private sectors use to get to a reasonable level of assurance. Similar Commerce Department/NIST-driven guidance could work if it understands scale and complexity but also operating methods, such as making recommendations that support DevOps and can be better integrated. Input for this guidance should incorporate not only broad industry input and support, but also address needs from small- and medium-sized businesses as well as enterprises. Most regulations and guidance are selectively chosen due to the lack of timely, affordable, and manageable actions to comply. In short, “keep it simple, stupid” (KISS) should be the name of the game to get better than average uptake, and note that all of this should also be iterated upon.” 

Meyers: “Nascent, especially when it comes to the security of the open-source software supply chain. Anecdotally, when I worked at In-Q-Tel, a strategic investor for the US intelligence community, many intelligence community staff used to look at me cross-eyed when my colleagues and I would suggest that they should devote their time and resources to open-source software supply chain security. Log4j and the recent White House meeting on open-source software security suggest, however, that the times are changing.” 

Nather: “There have been excellent ad hoc responses to specific events, but we need to create a more repeatable process that includes not just the ‘biggest and loudest’ private sector companies, but also the ones below the Security Poverty Line, which are equally likely to be victimized by software supply chain attacks. For example, the Blackbaud ransomware incident at last count affected over a thousand organizations, many of them nonprofits who provide critical services. Beyond response, we need to create a way for every organization to understand its supply chain risk. SBOMs are a start in this direction, but it is an after-the-fact report at this point, not a demonstration of secure software development practices. Make no mistake: this is not a chain; it is a vast web in which we are all somewhere in the middle.” 

Scott: “There is building momentum in the federal government around software supply chains, and fora like the Cybersecurity and Infrastructure Security Agency’s (CISA) Joint Cyber Defense Collaborative are a good start at bringing industry to the table. Parts of the private sector, too, have recently started pouring a lot of resources into the issue, especially open source, but it is patchwork. Some companies are piling millions of dollars onto the issue, and some are not ready to take seriously how much it affects their security. It is also unfortunate that a lot of that momentum seems to be a response to recent incidents—I would love to see more proactive security collaboration capitalize on a well-intentioned reaction to compromise. In that vein, it would help to expand the scope of existing public-private partnerships to include industry consumers outside of the usual IT vendors and, regarding open source, the nonprofits, maintainers, repositories, and package managers responsible for a lot of the actual code in question. Continued formalization of those ventures would be great too—supply chain and open-source security need to be ongoing discussions among all stakeholders in the cyber policy world.” 


#4 How can the United States and European Union most effectively contribute to the security of open-source software?  

Fernick: “Coordination among major stakeholders is key, as open-source software is a vast, complex global ecosystem. Our success at securing the supply chain hinges upon having a singular place where representatives from government, the private sector, and open-source software projects alike come together to work on and make impact-prioritized, coordinated investments in things like security audits of critical software, improving vulnerability disclosure and remediation, and coordinating vendor-neutral emergency response teams to support open-source software maintainers in times of security crisis. Piecemeal initiatives and investments cannot comprehensively solve a lifecycle problem like securing the software supply chain—attackers will simply exploit the weakest remaining link.” 

Koran: “Coordinate. A number of international organizations have attempted to bite off various pieces of the open-source, software, and digital product security pie. Organizations may be looking at anything from vulnerability disclosure treatments to secure coding practices, but many of them attempt to reinvent their own wheel for their influence areas. Because of that, there is no single or coordinated voice or guidance as to what to do. The community, private sector, and individuals either choose to strike out on their own or pick and choose those pieces of guidance, frameworks, or regulations that may be best suited to them, or, in the worst case, may be the minimum bar in order for their work to comply. While the question asks how to “contribute,” that does not always mean having to directly provide technical contributions, like code, infrastructure, or other bits and bobs; it can mean doing what governments do best, which is to get the right people talking to one another to share information at various levels. That is where they need to start—get on the same page and the right heads sharing knowledge and experiences.” 

Meyers: “While I welcome transatlantic cooperation on open-source software security, each government probably needs to examine its own software supply chain security practices before there can be an open-source software Atlantic Charter.” 

Nather: “Ensure alignment in the standards, practices, regulations, etc. that are being generated. Like everything else in cyber, open-source software development is cross-border, and world-wide coordination is required to ensure security is effective and aligned with the motivations of the open-source project contributors and maintainers. Tracking the dependencies in open-source software and identifying those ‘linchpin’ components that cross a certain threshold of impact would be a good start, as would a coordinated effort to fill resource gaps for software projects that are under-resourced or abandoned.” 

Scott: “Governments can start by recognizing open source as infrastructure—it is everywhere, comprising 70 to 90 percent of many codebases, and it supports a huge part of the innovation and functionality in the digital economy. Ideally, that kind of framing will lead to regular, proactive investment from government alongside industry and help keep a transatlantic approach from getting bogged down by different approaches towards licensing and privacy. Open-source security is much more about how responsibly consumers are using (and tracking their use of) components, contributing to them, and supporting the ecosystem. An infrastructure approach should also help move away from the simple narrative that its maintainers are not getting paid enough—sometimes that is true, and sometimes maintainers have immense corporate support or even work for premier IT companies. Usage, tooling, and self-knowledge are all hugely important and point to a much broader solution set than throwing money towards developers and hoping they can ‘fix everything.’” 

#5 What single proposal or idea to better secure software supply chains would you like to see Congress pass in the next year? 

Fernick: “In the legislation that it passes, I would like to see Congress work to incentivize companies to work collaboratively with good-faith security researchers who choose to responsibly report security vulnerabilities that they find in software products to the affected vendors, without fear of retribution or legal risk on behalf of the researcher. Many companies still lack the maturity to see good-faith security vulnerability reports for what they are—a free gift, and an actionable opportunity to improve their own products to help protect their customers, and the world, from threat actors.” 

Koran: “There needs to be focus. Most of the directives that have come since the recent change in administration have been merely addressing the federal government rather than the larger sphere. While that is an admirable focus, it is very top down and scoped very small. If Congress is to engage, it needs to be a wider and more comprehensive action where, possibly driven by some federal stewardship, the onus really lives within the community—among developers, private organizations, and individuals. This also should not be a “thou shalt” type of direction, but a way to structure guidance, oversight, and engagement. This could possibly be either addressing which government agency or commission has the lead for certain areas, or establishing something new to subsume a number of critical roles. This is not just going to address a technical compliance issue, but also governance and sustainment, possibly by providing grant-making capabilities and interfaces to the private sector, academia, and international partners. It will also need to be funded. Good ideas without a budget are just good ideas, not actions that can be relied upon for an outcome.” 

Meyers: “The creation of an open-source software security center within the federal government, perhaps within the Department of Homeland Security, seems like a promising first step. This center could help assess and improve the security of the open-source software that the federal government and critical infrastructure relies upon and even contribute to the security of the overall open-source software ecosystem.” 

Nather: “Securing the software supply chain goes beyond just creating secure software, as the Executive Order points out in calling for basic controls such as multi-factor authentication, monitoring and alerting to secure the development, distribution and production environments as well. The underlying problem is still assessing risk and impact, and prompting those conversations among suppliers and consumers. As this is all currently being piloted by the Biden administration, the next logical step would be for Congress to put it in a statutory framework to ensure effective oversight.” 

Scott: “I would love to see Congress stand up offices in government dedicated to open-source security and sustainability. An official office in CISA, even a small one, would provide a clear place for industry and maintainers to turn to and interface with, and the Critical Technology Security Centers would be great outputs to channel grantmaking. Having that formal infrastructure in place would make it much easier to involve developers and maintainers in open-source policy discussions in which they have not been included much yet. It would also help put to rest any lingering misconceptions that open source is inherently less secure than proprietary code or that open source is something that should—or even could—be avoided. Digital infrastructure everywhere depends in large part on open source, so the challenge is not securing or fixing that community or ecosystem—it is figuring out proactive, leveraged investments that government and industry can regularly make in its security.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Scowcroft Center for Strategy and Security. He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Reflections on trusting trust: Securing software supply chains appeared first on Atlantic Council.

Nia quoted in Bloomberg on the implications of Elon Musk’s overtaking of Twitter for activists and companies https://www.atlanticcouncil.org/insight-impact/in-the-news/nia-quoted-in-bloomberg-on-the-implications-of-elon-musks-overtaking-of-twitter-for-activists-and-companies/ Sun, 08 May 2022 18:14:00 +0000 https://www.atlanticcouncil.org/?p=523801 The post Nia quoted in Bloomberg on the implications of Elon Musk’s overtaking of Twitter for activists and companies appeared first on Atlantic Council.

Nia quoted in the Washington Post on long-term implications of Elon Musk’s control over Twitter https://www.atlanticcouncil.org/insight-impact/in-the-news/nia-quoted-in-the-washington-post-on-long-term-implications-of-elon-musks-control-over-twitter/ Thu, 05 May 2022 13:37:00 +0000 https://www.atlanticcouncil.org/?p=520878 The post Nia quoted in the Washington Post on long-term implications of Elon Musk’s control over Twitter appeared first on Atlantic Council.

Dagres quoted in Voice of America Persian News Network on human rights and internet freedom in Iran https://www.atlanticcouncil.org/insight-impact/in-the-news/dagres-quoted-in-voice-of-america-persian-news-network-on-human-rights-and-internet-freedom-in-iran/ Sat, 29 Jan 2022 14:33:00 +0000 https://www.atlanticcouncil.org/?p=485111 The post Dagres quoted in Voice of America Persian News Network on human rights and internet freedom in Iran appeared first on Atlantic Council.

Cybersecurity of Space-Based Assets and Why this is Important https://www.atlanticcouncil.org/insight-impact/in-the-news/cybersecurity-of-space-based-assets-and-why-this-is-important/ Mon, 12 Jul 2021 22:13:50 +0000 https://www.atlanticcouncil.org/?p=414045 On a recent joint Georgetown and Atlantic Council masters' class, GeoTech Director Dr. David Bray shared his insights on the seminar's question: "Cybersecurity of Space-Based Assets and Why This Is Important." This masters' class also featured GeoTech Fellows and experts Dr. William Jeffrey, Chuck Brooks, and Dr. Divya Chander.

Space is quickly becoming the new frontier to be explored by national governments and private sector actors. In the process, the different parties are preparing themselves for an environment with the same competition and collaboration that are typical on Earth, which will require new regulations and international norms and will create novel opportunities for industry and innovation, from transportation and satellite communications to data sharing, artificial intelligence, and national security.

On a recent joint Georgetown and Atlantic Council masters’ class, GeoTech Director Dr. David Bray shared his insights on the seminar’s question: “Cybersecurity of Space-Based Assets and Why This Is Important.” This masters’ class also featured GeoTech Fellows and experts Dr. William Jeffrey, Chuck Brooks, and Dr. Divya Chander.


The post Cybersecurity of Space-Based Assets and Why this is Important appeared first on Atlantic Council.

Atlantic Council releases landmark recommendations on the geopolitical impacts of new technologies https://www.atlanticcouncil.org/news/press-releases/atlantic-council-releases-landmark-recommendations-on-the-geopolitical-impacts-of-new-technologies/ Wed, 26 May 2021 13:00:00 +0000 https://www.atlanticcouncil.org/?p=395593 Findings based on bipartisan study groups of U.S. government officials and senior figures in tech industry.

Findings based on study groups of US government officials and senior figures in tech industry

WASHINGTON, DC – MAY 26, 2021 – The Atlantic Council’s bipartisan Commission on the Geopolitical Impacts of New Technologies and Data today released a landmark report proposing recommendations for the US government and like-minded allies on global technology and data development policy.

The report’s recommendations are designed to maintain US and allied leadership in science and technology; ensure the trustworthiness and resilience of physical and IT supply chains, infrastructures, and the digital economy at large; improve global health protection; assure commercial space operations for public benefit; and create a digitally fluent and resilient workforce.

The report was developed over months of intensive study and debate by an esteemed panel of commissioners composed of senior representatives from Congress, academia, and industry, as well as former officials from recent administrations. Sens. Mark Warner (D-VA) and Rob Portman (R-OH) and Reps. Suzan DelBene (D-WA) and Michael McCaul (R-TX) served as honorary co-chairs of the commission. John Goodman, Chief Executive Officer of Accenture Federal Services, and Teresa Carlson, President and Chief Growth Officer of Splunk, served as co-chairs. The commission was housed within the Atlantic Council’s GeoTech Center, which was launched in 2020 to champion positive paths forward to ensure new technologies and data empower people, prosperity, and peace.

Today’s report comes amid the “GeoTech Decade,” in which new technologies and data capabilities will have an outsized impact on geopolitics, economics, and global governance. However, no nation or international organization has created the appropriate governance structures needed to grapple with the complex and destabilizing dynamics of emerging technologies. As a result, new approaches are required for developing and deploying critical technologies, cultivating human capital, rebuilding trust in domestic and global governance, and establishing norms for international cooperation.

Key recommendations from the report include:

  • Global science and technology leadership: Develop a National & Economic Security Technology Strategy
  • Secure data and communications: Strengthen the National Cyber Strategy Implementation Plan and accelerate the operationalization of quantum information science technologies
  • Enhanced trust and confidence in the digital economy: Demonstrate AI improvements to the delivery of public and private services
  • Assured supply chains and system resiliency: Broaden federal oversight of supply chain assurance
  • Continuous global health protection and global wellness: Launch a global pandemic surveillance and warning system
  • Assured space operations for public benefit: Harden the security of commercial space industry facilities and space assets
  • Future of work: Create the workforce for the GeoTech Decade and ensure equitable access to opportunity

“The work of the bipartisan GeoTech Commission was 14 months in the making, representing the consensus of public and private sector leaders on practical steps forward for Congress, the White House, private industry, academia, and like-minded nations,” said Dr. David Bray, director of the Atlantic Council’s GeoTech Center. “The sophisticated, but potentially fragile, data and tech systems that now connect people and nations mean we must incorporate resiliency as a necessary foundational pillar of modern life. It is imperative that we promote strategic initiatives that employ data and tech to amplify the ingenuity of people, diversity of talent, strength of democratic values, innovation of companies, and reach of global partnerships.”

“The U.S. stands at a crossroads. New technologies and ready access to data offer exciting opportunities to tackle the world’s greatest challenges. Yet there are also risks that threaten to undermine peace and prosperity in unanticipated ways,” said John Goodman, CEO of Accenture Federal Services. “For the US and its partners to remain economically competitive and protect national security, we must work together to build trust in the digital fabric of the GeoTech decade. We must act now to invest in these new technologies, to develop and expand our skilled workforce, and to establish norms to ensure that technology emerges as a powerful force for good.”

“The GeoTech Decade impacts all countries, people, communities, and businesses from global safety to security and more,” said Teresa Carlson, President and Chief Growth Officer at Splunk. “The recommendations in this independent report are necessary for innovation in the years to come. With bi-partisan buy-in from U.S. Congress and top industry leaders, executing on these seven areas will help us combine the data and technologies required for success in this new age.”

“We’re in the midst of a titanic technological shift, from IT modernization to artificial intelligence, as organizations from all industries look to harness the power of data to solve complex challenges,” said Max Peterson, Vice President, Worldwide Public Sector, Amazon Web Services. “Advanced computing systems, faster and higher-bandwidth communications networks, and increasingly sophisticated technologies are digitizing the information around us and transforming the way we live, learn, and do business. Together, government and the private sector should work to ensure we grasp the innovative opportunities before us in ways that promote security, trust, and inclusion.”

Commission on the Geopolitical Impacts of New Technologies and Data

Co-Chairs:
John Goodman, Chief Executive Officer, Accenture Federal Services
Teresa Carlson, President and Chief Growth Officer, Splunk

Honorary Co-Chairs:
Sen. Mark R. Warner (D-VA)
Sen. Rob Portman (R-OH)
Rep. Suzan DelBene (D-WA)
Rep. Michael T. McCaul (R-TX)

Commissioners:
Max R. Peterson II, Vice President, Worldwide Public Sector, Amazon Web Services
Paul Daugherty, Chief Executive – Technology & Chief Technology Officer, Accenture
Maurice Sonnenberg, Guggenheim Securities
Michael Chertoff, Former U.S. Secretary of Homeland Security
Michael J. Rogers, Former Chairman of the U.S. House Permanent Select Committee on Intelligence
Pascal Marmier, Head, Economy of Trust Foundation, SICPA
Ramayya Krishnan, PhD, Director, Block Center for Technology and Society, Carnegie Mellon University
Dr. Shirley Ann Jackson, President, Rensselaer Polytechnic Institute
Susan M. Gordon, Former Principal Deputy Director of National Intelligence
Vint Cerf, Internet Pioneer & “Father of the Internet”
Zia Khan, PhD, Vice President for Innovation, The Rockefeller Foundation
Anthony Scriffignano, PhD, Senior Vice President, Chief Data Scientist at Dun & Bradstreet Corporation
Frances F. Townsend, Executive Vice President, Activision Blizzard
Admiral James Stavridis, USN, Ret.

Executive Director:
David Bray, PhD, Director, GeoTech Center, The Atlantic Council

The Commission on the Geopolitical Impacts of New Technologies and Data was made possible by support from Accenture Federal Services and Amazon Web Services. The report’s full findings and recommendations can be found here.

For media inquiries, please contact press@atlanticcouncil.org.

The post Atlantic Council releases landmark recommendations on the geopolitical impacts of new technologies appeared first on Atlantic Council.

]]>
Conclusion, appendices, and acknowledgements https://www.atlanticcouncil.org/content-series/geotech-commission/conclusion-appendices-acknowledgements/ Tue, 25 May 2021 22:58:50 +0000 https://www.atlanticcouncil.org/?p=393961 An in depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.

The post Conclusion, appendices, and acknowledgements appeared first on Atlantic Council.

]]>

Report of the Commission on the Geopolitical Impacts of New Technologies and Data

Conclusion, appendices, and acknowledgements


Conclusion

The increasing capabilities and availability of data and new technologies change how nations remain competitive and secure. In the coming GeoTech Decade, data and technology will have a disproportionate impact on geopolitics, global competition, and global opportunities for collaboration as new capabilities may eliminate a technical advantage or may enable new processes superior to current methods. The United States and like-minded nations must be able to adapt and demonstrate effective governance, at faster speeds, in employing data and new technologies to promote a more secure, free, and prosperous world.

In 1945, Vannevar Bush, director of the Office of Scientific Research and Development, transmitted a report, Science – the Endless Frontier, with the goal of answering a few key questions asked by then-President Franklin D. Roosevelt in November 1944. In the report, Bush elaborated:

  • “With particular reference to the war of science against disease, what can be done now to organize a program for continuing in the future the work which has been done in medicine and related sciences?
  • “What can the Government do now and in the future to aid research activities by public and private organizations?
  • “Can an effective program be proposed for discovering and developing scientific talent in American youth so that the continuing future of scientific research in this country may be assured on a level comparable to what has been done during the war?”

Among its recommendations, the 1945 report called for the creation of the National Research Foundation. Bush concluded, noting the importance of action by Congress:

  • “Legislation is necessary. It should be drafted with great care. Early action is imperative, however, if this nation is to meet the challenge of science and fully utilize the potentialities of science. On the wisdom with which we bring science to bear against the problems of the coming years depends in large measure our future as a nation.”

Now, almost seventy-six years later, the GeoTech Commission similarly seeks to promote freedom and security through initiatives that employ data and new technologies to amplify the ingenuity of people, diversity of talent, strength of democratic values, innovation of companies, and the reach of global partnerships.

There are several areas where data and technology can help, or hinder, the achievement of these goals:

  • Communications and networking, data science, cloud computing
  • Artificial intelligence, distributed sensors, edge computing, the Internet of Things
  • Biotechnologies, precision medicine, genomic technologies
  • Space technologies, undersea technologies
  • Autonomous systems, robotics, decentralized energy methods
  • Quantum information science, nanotechnology, new materials for extreme environments, advanced microelectronics

To maintain national and economic security and competitiveness in the global economy, the United States and its allies must continue to be preeminent in these key areas, and must achieve trustworthy and assured performance of the digital economy and its infrastructure. The GeoTech Commission provided recommendations in the following seven areas where the United States and like-minded nations must succeed:

  • Global science and technology leadership
  • Secure data and communications
  • Enhanced trust and confidence in the digital economy
  • Assured supply chains and system resiliency
  • Continuous global health protection and global wellness
  • Assured space operations for public benefit
  • Future of work

The report’s recommendations embody several ideals. First, work to ensure the benefits of new technologies reach all sectors of society. Second, define protocols and standards for permissible ways to develop and use technologies and data, consistent with the norms of the United States and like-minded nations. Third, guide technology cooperation and sharing with nondemocratic nations based on respecting democratic values.

Just as Vannevar Bush urged in 1945, the United States must create new ways to develop and employ future critical and emerging technologies at speed, cultivate the needed human capital, and establish norms for international cooperation with nations. Such creation requires important action by Congress and the new administration to ensure that the United States has the wisdom with which to apply science to the challenges and opportunities of the coming years. If enacted, the report’s recommendations will enable the United States and like-minded nations to employ data capabilities and new technologies intentionally to promote a freer, more secure, and more prosperous world.

Appendices

Biographies of the GeoTech Commission co-chairs and commissioners

Co-chairs

John Goodman, Chief Executive Officer, Accenture Federal Services

John Goodman is the Chief Executive of Accenture Federal Services (AFS), which serves clients across all sectors of the US federal government – defense, intelligence, public safety, health, and civilian. Since joining Accenture in 1998, he has held a variety of leadership roles – including managing director of Accenture’s Defense & Intelligence portfolio, head of Management Consulting for the global Public Service Operating Group, and most recently Chief Operating Officer of AFS. John began his career at Accenture as a Member of the Communications & High Technology practice.

Prior to joining Accenture, John served for five years in the federal government as Deputy Under Secretary of Defense (Industrial Affairs & Installations), Deputy Assistant Secretary of Defense (Industrial Affairs), and a member of the staff of the National Economic Council, the White House office responsible for coordination of economic policy. He previously served on the Harvard Business School faculty.

John is co-chair of the Atlantic Council’s GeoTech Commission and member of the boards of both the Atlantic Council and the Northern Virginia Technology Council, as well as a member of the Council on Foreign Relations. He is a member, and the immediate past chair, of the Executive Committee of the Professional Services Council, a former member of the Executive Committee of AFCEA, and the former chairman of the Defense Business Board. John was named Executive of the Year by the Greater Washington Government Contractors in 2018; a Wash100 inductee in 2018, 2019, 2020 and 2021; and a Fed100 Award winner in 2015. He has been awarded the Office of the Secretary of Defense Medal for Exceptional Public Service, the Department of Defense Medal for Distinguished Public Service, and the Department of Defense Medal for Outstanding Public Service.

John received his Bachelor of Arts, summa cum laude, from Middlebury College and his Master of Arts and Ph.D. from Harvard University.

Teresa Carlson, President and Chief Growth Officer, Splunk

As President and Chief Growth Officer at Splunk, Teresa Carlson leads the company’s efforts to align and drive its ongoing business transformations across Splunk’s go-to-market segments. Most recently, Carlson served as Vice President, Worldwide Public Sector and Industries, for Amazon Web Services (AWS). After she founded AWS’s Worldwide Public Sector in 2010, Carlson’s role eventually expanded to include financial services, energy services, telecommunications, and aerospace and services industry business units.

Carlson has also been a strong advocate for empowering women in the technology field. That passion led to the creation of “We Power Tech,” AWS’s diversity and inclusion initiative, which aims to ensure underrepresented groups – including women – are reflected throughout all AWS outreach efforts. Carlson dedicates time to philanthropic and leadership roles in support of the global community. Prior to joining AWS in 2010, Carlson led sales, marketing and business development organizations at Microsoft, Keyfile/Lexign and NovaCare. Carlson holds a B.A. and M.S. from Western Kentucky University.

Honorary co-chairs

Mark R. Warner, U.S. Senator from Virginia

Senator Warner was elected to the U.S. Senate in November 2008 and reelected to a third term in November 2020. He serves on the Senate Finance, Banking, Budget, and Rules Committees as well as the Select Committee on Intelligence, where he is the Chairman. During his time in the Senate, Senator Warner has established himself as a bipartisan leader who has worked with Republicans and Democrats alike to cut red tape, increase government performance and accountability, and promote private sector innovation and job creation. Senator Warner has been recognized as a national leader in fighting for our military men and women and veterans, and in working to find bipartisan, balanced solutions to address our country’s debt and deficit.

From 2002 to 2006, he served as Governor of Virginia. When he left office in 2006, Virginia was ranked as the best state for business, the best managed state, and the best state in which to receive a public education.

The first in his family to graduate from college, Mark Warner spent 20 years as a successful technology and business leader in Virginia before entering public office. An early investor in the cellular telephone business, he co-founded the company that became Nextel and invested in hundreds of start-up technology companies that created tens of thousands of jobs.

Senator Warner and his wife Lisa Collis live in Alexandria, Virginia. They have three daughters.

Rob Portman, U.S. Senator for Ohio

Rob Portman is a United States Senator from the state of Ohio, a position he has held since he was first elected in 2010. Portman previously served as a U.S. Representative, the 14th United States Trade Representative, and the 35th Director of the Office of Management and Budget (OMB). In 1993, Portman won a special election to represent Ohio’s 2nd congressional district in the U.S. House of Representatives and served six terms before President George W. Bush appointed him as U.S. Trade Representative in May 2005. Portman currently serves as the Ranking Member on the Senate Homeland Security and Governmental Affairs Committee, as well as on the Senate Finance and Foreign Relations Committees. He was born and raised in Cincinnati, where he still lives today with his wife Jane. Together they have three children: Jed, Will, and Sally.

Suzan DelBene, U.S. Congresswoman Representing Washington’s 1st District

Congresswoman Suzan DelBene represents Washington’s 1st Congressional District, which spans from northeast King County to the Canadian border and includes parts of King, Snohomish, Skagit, and Whatcom counties. First sworn into the House of Representatives in November 2012, Suzan brings a unique voice to the nation’s capital with more than two decades of experience as a successful technology entrepreneur and business leader. Suzan takes on a wide range of challenges both in Congress and in the 1st District and is a leader on issues of technology, health care, trade, taxes, environmental conservation, and agriculture.

Suzan currently serves as the Vice Chair on the House Ways and Means Committee, which is at the forefront of debate on a fairer tax code, health care reform, trade deals, and lasting retirement security. She serves on the Select Revenue Measures and Trade Subcommittees. Suzan also serves as Chair of the forward-thinking New Democrat Coalition, which is one of the largest ideological coalitions in the House, and is co-chair of the Women’s High Tech Caucus, Internet of Things Caucus, and Dairy Caucus. She is also a member of the Pro-Choice Caucus.

Over more than two decades as an executive and entrepreneur, she helped to start drugstore.com as Vice President of Marketing and Store Development, and served as CEO and President of Nimble Technology, a business software company based on technology developed at the University of Washington. Suzan also spent 12 years at Microsoft, most recently as corporate vice president of the company’s mobile communications business.

Before being elected to Congress, Suzan served as Director of the Washington State Department of Revenue. During her tenure, she proposed reforms to cut red tape for small businesses. She also enacted an innovative tax amnesty program that generated $345 million to help close the state’s budget gap while easing the burden on small businesses.

Suzan and her husband, Kurt DelBene, have two children, Becca and Zach, and a dog named Reily.

Michael T. McCaul, U.S. Congressman Representing Texas’ 10th District

Congressman Michael T. McCaul is currently serving his ninth term representing Texas’ 10th District in the United States Congress. The 10th Congressional District of Texas stretches from the city of Austin to the Houston suburbs and includes Austin, Bastrop, Colorado, Fayette, Harris, Lee, Travis, Washington and Waller Counties.

At the start of the 116th Congress, Congressman McCaul became the Republican Leader of the Foreign Affairs Committee. This committee considers legislation that impacts the diplomatic community, which includes the Department of State, the Agency for International Development (USAID), the Peace Corps, the United Nations, and the enforcement of the Arms Export Control Act. In his capacity as the committee’s Republican Leader, McCaul is committed to promoting America’s leadership on the global stage. In his view, it is essential that the United States bolster international engagement with its allies, counter the aggressive policies of its adversaries, and advance the common interests of nations in defense of stability and democracy around the globe. He will continue to use his national security expertise to counter threats facing the United States, especially the growing threat from nation-state actors such as China, Iran, Russia, and North Korea.

Prior to Congress, Michael McCaul served as Chief of Counter Terrorism and National Security in the U.S. Attorney’s office, Western District of Texas, and led the Joint Terrorism Task Force charged with detecting, deterring, and preventing terrorist activity. McCaul also served as Texas Deputy Attorney General under current U.S. Senator John Cornyn, and served as a federal prosecutor in the Department of Justice’s Public Integrity Section in Washington, DC.

A fourth-generation Texan, Congressman McCaul earned a B.A. in Business and History from Trinity University and holds a J.D. from St. Mary’s University School of Law. In 2009 Congressman McCaul was honored with St. Mary’s Distinguished Graduate award. He is also a graduate of the Senior Executive Fellows Program of the School of Government, Harvard University. Congressman McCaul is married to his wife, Linda. They are proud parents of five children: Caroline, Jewell, and the triplets Lauren, Michael, and Avery.

Commissioners

Max R. Peterson II, Vice President, Worldwide Public Sector, Amazon Web Services

Max Peterson is Vice President for Amazon Web Services’ (AWS) Worldwide Public Sector. In this role, Max supports public sector organizations as they leverage the unique advantages of commercial cloud to drive innovation among government, educational institutions, health care institutions, and nonprofits around the world.

A public sector industry veteran with thirty years of experience, he has an extensive background in developing relationships with public sector customers. He has previously worked with Dell Inc. as Vice President and General Manager for Dell Federal Civilian and Intelligence Agencies, as well as CDWG and Commerce One.

Max earned both a Bachelor’s Degree in Finance and a Master of Business Administration in Management Information Systems from the University of Maryland.

Paul Daugherty, Accenture Chief Executive – Technology and Chief Technology Officer

Paul Daugherty is Accenture’s Group Chief Executive – Technology & Chief Technology Officer. He leads all aspects of Accenture’s technology business. Paul is also responsible for Accenture’s technology strategy, driving innovation through R&D in Accenture Labs and leveraging emerging technologies to bring the newest innovations to clients globally. He recently launched Accenture’s Cloud First initiative to further scale the company’s market-leading cloud business and is responsible for incubating new businesses such as blockchain, extended reality and quantum computing. He founded and oversees Accenture Ventures, which is focused on strategic equity investments and open innovation to accelerate growth. Paul is responsible for managing Accenture’s alliances, partnerships and senior-level relationships with leading and emerging technology companies, and he leads Accenture’s Global CIO Council and annual CIO and Innovation Forum. He is a member of Accenture’s Global Management Committee.

Maurice Sonnenberg, Guggenheim Securities

Maurice Sonnenberg has served as an outside advisor to five presidential administrations in the areas of international trade, finance, international relations, intelligence, and foreign election monitoring. In 1994 and 1995, he served as a member of the US Commission on Protecting and Reducing Government Secrecy, and from 1996 as the Senior Advisor to the US Commission on the Roles and Capabilities of the US Intelligence Community. He was a member of the President’s Foreign Intelligence Advisory Board under President Bill Clinton for eight years. In 2002, he was a member of the Task Force on Terrorist Financing for the Council on Foreign Relations. From 2007 to 2010, he served on the Department of Homeland Security Advisory Council, and from 2008 to 2015 on the Panel Advisory Board for the Secretary of the Navy. From 2012 to 2014, he served as co-chairman of the National Commission for the Review of the Research and Development Programs of the Intelligence Community. He has also served as an official US observer at elections in Latin America, including multiple elections in El Salvador, Guatemala, Nicaragua, and Mexico. Sonnenberg has worked at the investment banking firms Donaldson, Lufkin & Jenrette, Bear Stearns, and J.P. Morgan, and at the law firms Hunton & Williams and Manatt, Phelps & Phillips. Currently, he is with Guggenheim Securities as Senior International Advisor. He is also a Senior Advisor to the Advanced Metallurgical Group, N.V.

Michael Chertoff, Former U.S. Secretary of Homeland Security

Michael Chertoff is the Executive Chairman and Co-Founder of The Chertoff Group. From 2005 to 2009, he served as Secretary of the U.S. Department of Homeland Security. Earlier in his career, Mr. Chertoff served as a federal judge on the U.S. Court of Appeals for the Third Circuit and head of the U.S. Department of Justice’s Criminal Division. He is the Chairman of the Board of Directors of BAE Systems, Inc., the U.S.-based subsidiary of BAE Systems plc. In 2018, he was named the chairman of the Board of Trustees for Freedom House. He currently serves on the board of directors of Noblis and Edgewood Networks. In the last five years, Mr. Chertoff co-chaired the Global Commission on the Stability of Cyberspace and also co-chairs the Transatlantic Commission on Election Integrity. Chertoff is a magna cum laude graduate of Harvard College and Harvard Law School.

Michael J. Rogers, Former Chairman of the U.S. House Permanent Select Committee on Intelligence

Mike Rogers is a former member of Congress, where he represented Michigan’s Eighth Congressional District for seven terms. While in the U.S. House of Representatives, he chaired the powerful House Permanent Select Committee on Intelligence (HPSCI), authorizing and overseeing a budget of $70 billion that funded the nation’s seventeen intelligence agencies. Mr. Rogers built a legacy as a bipartisan leader on cybersecurity, counterterrorism, intelligence, and national security policy. Mr. Rogers worked with two presidents, congressional leadership, and countless foreign leaders, diplomats, and intelligence professionals. Before joining Congress, he served as an officer in the US Army and as a Special Agent with the FBI. He is currently investing in and helping build companies that are developing solutions for healthcare, energy efficiency, and communications challenges. He also serves as a regular national security commentator on CNN and hosted the channel’s documentary-style original series Declassified. Mr. Rogers is a regular public speaker on global affairs, cybersecurity, and leadership. He is married to Kristi Rogers and has two children.

Pascal Marmier, Head, Economy of Trust Foundation, SICPA

Pascal Marmier is head of SICPA’s Economy of Trust Foundation. Most recently, Marmier held several positions in the United States within Swiss Re, a global reinsurer, focusing on digital strategy and innovation management. Previously, he spent twenty years as a Swiss diplomat as one of the early leaders of the Swissnex network, a private–public partnership dedicated to facilitating collaboration with Swiss universities, startups, and corporations in all fields related to science, technology, and innovation. After spending a decade establishing key partnerships and activities in Boston, Marmier moved to China to establish the Swissnex platform in the region. He holds law degrees from the University of Lausanne and Boston University, as well as an MBA from the MIT Sloan School of Management.

Ramayya Krishnan, PhD, Director, Block Center for Technology and Society, Carnegie Mellon University

Ramayya Krishnan is the W. W. Cooper and Ruth F. Cooper Professor of Management Science and Information Systems at Carnegie Mellon University. He is Dean of the H. John Heinz III College of Information Systems and Public Policy and directs the Block Center for Technology and Society at the university. His scholarly contributions have focused on mathematical modeling of organizational decision making, the design of data-driven decision support systems, and statistical models of consumer behavior in digital environments. He advises governments, businesses, and development banks on digital transformation technology and its consequences.

Dr. Shirley Ann Jackson, President, Rensselaer Polytechnic Institute

The Honorable Shirley Ann Jackson, Ph.D., has served as the 18th president of Rensselaer Polytechnic Institute since 1999. A theoretical physicist described by Time Magazine as “perhaps the ultimate role model for women in science,” Dr. Jackson has held senior leadership positions in academia, government, industry, and research. She is the recipient of many national and international awards, including the National Medal of Science, the United States’ highest honor for achievement in science and engineering. Dr. Jackson served as Co-Chair of the United States President’s Intelligence Advisory Board from 2014 to 2017 and as a member of the President’s Council of Advisors on Science and Technology from 2009 to 2014. Before taking the helm at Rensselaer, she was Chairman of the U.S. Nuclear Regulatory Commission from 1995 to 1999. She serves on the boards of major corporations that include FedEx and PSEG, where she is Lead Director.

Dr. Jackson holds an S.B. in Physics, and a Ph.D. in Theoretical Elementary Particle Physics, both from MIT.

Susan M. Gordon, Former Principal Deputy Director of National Intelligence

The Honorable Susan (Sue) M. Gordon served as Principal Deputy Director of National Intelligence from August 2017 until August 2019. In her more than three decades of experience in the Intelligence Community, Ms. Gordon served in a variety of leadership roles spanning numerous intelligence organizations and disciplines, including serving as the Deputy Director of the National Geospatial-Intelligence Agency (NGA) from 2015 to 2017. In this role, she drove NGA’s transformation to meet the challenges of a 21st century intelligence agency. Since leaving government service, Ms. Gordon serves on a variety of public and private boards, is a fellow at Duke and Harvard Universities, and consults with a variety of companies on technology (including cyber and space), strategy, and leadership, focusing on shared responsibility for national and global security.

Vint Cerf

Vinton G. Cerf is vice president and Chief Internet Evangelist for Google. Cerf is the codesigner of the TCP/IP protocols and the architecture of the Internet. He has served in executive positions at the Internet Corporation for Assigned Names and Numbers, the Internet Society, MCI, the Corporation for National Research Initiatives, and the Defense Advanced Research Projects Agency. A former Stanford Professor and member of the National Science Board, he is also the past president of the Association for Computing Machinery and serves in advisory capacities at the National Institute of Standards and Technology, the Department of Energy, and the National Aeronautics and Space Administration. Cerf is a recipient of numerous awards for his work, including the US Presidential Medal of Freedom, US National Medal of Technology, the Queen Elizabeth Prize for Engineering, the Prince of Asturias Award, the Tunisian National Medal of Science, the Japan Prize, the Charles Stark Draper Prize, the ACM Turing Award, the Legion d’Honneur, the Franklin Medal, Foreign Member of the British Royal Society and Swedish Academy of Engineering, and twenty-nine honorary degrees. He is a member of the Worshipful Company of Information Technologists and the Worshipful Company of Stationers.

Zia Khan, PhD, Vice President for Innovation, The Rockefeller Foundation

As Senior Vice President for Innovation, Zia Khan oversees the Rockefeller Foundation’s approach to developing solutions that can have a transformative impact on people’s lives through the use of convenings, data and technology, and strategic partnerships. He writes and speaks frequently on leadership, strategy, and innovation. Khan has served on the World Economic Forum Advisory Council for Social Innovation and the US National Advisory Board for Impact Investing. He leads a range of the Rockefeller Foundation’s work in applying data science for social impact and ensuring artificial intelligence contributes to an inclusive and equitable future.

Prior to joining the Rockefeller Foundation, Khan was a management consultant advising leaders in technology, mobility, and private equity sectors. He worked with Jon Katzenbach on research related to leadership, strategy, and organizational performance, leading to their book, Leading Outside the Lines.

Zia holds a BS from Cornell University and MS and PhD from Stanford University.

Anthony Scriffignano, PhD, Senior Vice President, Chief Data Scientist, Dun & Bradstreet Corporation

Anthony Scriffignano, PhD is Senior Vice President, Chief Data Scientist at Dun & Bradstreet Corporation. He is an internationally recognized data scientist with experience spanning over forty years in multiple industries and enterprise domains. Scriffignano has extensive background in advanced anomaly detection, computational linguistics and advanced inferential algorithms, leveraging that background as primary inventor on multiple patents worldwide. Scriffignano was recognized as the U.S. Chief Data Officer of the Year 2018 by the CDO Club, the world’s largest community of C-suite digital and data leaders. He is also a member of the OECD Network of Experts on AI working group on implementing Trustworthy AI, focused on benefiting people and the planet. He has briefed the US National Security Telecommunications Advisory Committee and contributed to three separate reports to the president, on Big Data Analytics, Emerging Technologies Strategic Vision, and Internet and Communications Resilience. Additionally, Scriffignano provided expert advice on private sector data officers to a group of state Chief Data Officers and the White House Office of Science and Technology Policy. Scriffignano serves on various advisory committees in government, private sector, and academia. Most recently, he has been called upon to provide insight on data science implications in the context of a highly disrupted datasphere and the implications of the global pandemic. He is considered an expert on emerging trends in advanced analytics, the “Big Data” explosion, artificial intelligence, multilingual challenges in business identity and malfeasance in commercial and public-sector contexts.

Frances F. Townsend, Executive Vice President, Activision Blizzard

Frances Fragos Townsend is the Executive Vice President of Corporate Affairs, Chief Compliance Officer and Corporate Secretary at Activision Blizzard. Prior to that, she was Vice Chairman, General Counsel and Chief Administration Officer at MacAndrews & Forbes, Inc. In her 10 years there, she focused internally on financial, legal and personnel issues, as well as international, compliance and business development across MacAndrews’ portfolio companies. Prior to that, she was a corporate partner with the law firm of Baker Botts, LLP. From 2004 to 2008, Ms. Townsend served as Assistant to President George W. Bush for Homeland Security and Counterterrorism and chaired the Homeland Security Council. She also served as Deputy National Security Advisor for Combatting Terrorism from 2003 to 2004. Ms. Townsend spent 13 years at the US Department of Justice under the administrations of President George H. W. Bush, President Bill Clinton and President George W. Bush. She has received numerous awards for her public service accomplishments. Ms. Townsend is a Director on the Board of two public companies: Chubb and Freeport McMoRan. She previously served on the Boards at Scientific Games, SciPlay, SIGA and Western Union. She is an on-air senior national security analyst for CBS News. Ms. Townsend previously served on the Director of National Intelligence’s Senior Advisory Group, the Central Intelligence Agency’s (CIA) External Advisory Board and the US President’s Intelligence Advisory Board. Ms. Townsend is a trustee on the Board of the New York City Police Foundation, the Intrepid Sea, Air & Space Museum, the McCain Institute, the Center for Strategic and International Studies (CSIS) and the Atlantic Council. She also serves on the Board at the Council on Foreign Relations, on the Executive Committee of the Trilateral Commission and the Board of the International Republican Institute. She is a member of the Aspen Strategy Group.

Admiral James Stavridis, USN, Ret.

Admiral James Stavridis is an Operating Executive of The Carlyle Group and Chair of the Board of Counselors of McLarty Global Associates, following five years as the 12th Dean of The Fletcher School of Law and Diplomacy at Tufts University. He also serves as the Chairman of the Board of the Rockefeller Foundation. A retired four-star officer in the U.S. Navy, he led the North Atlantic Treaty Organization (NATO) Alliance in global operations from 2009 to 2013 as Supreme Allied Commander with responsibility for Afghanistan, Libya, the Balkans, Syria, counter piracy and cyber security. He also served as Commander of U.S. Southern Command, with responsibility for all military operations in Latin America from 2006 to 2009. He earned more than 50 medals, including 28 from foreign nations in his 37-year military career. Admiral Stavridis earned a PhD in international relations and has published 10 books and hundreds of articles in leading journals around the world, including the recent novel “2034: A Novel of the Next World War,” which was a New York Times bestseller. His 2012 TED Talk on global security has close to one million views. Admiral Stavridis is a monthly columnist for TIME Magazine and Chief International Security Analyst for NBC News.

Biographies of supporting Atlantic Council staff

Dr. David A. Bray, Director, GeoTech Center, Atlantic Council

Dr. David A. Bray has served in a variety of leadership roles in turbulent environments, including bioterrorism preparedness and response from 2000 to 2005, time on the ground in Afghanistan in 2009, serving as a non-partisan Senior National Intelligence Service Executive directing a bipartisan National Commission for the Review of the Research and Development Programs of the US Intelligence Community, and providing leadership as a non-partisan federal agency Senior Executive where he led a team that received the global CIO 100 Award twice in 2015 and 2017. He is an Eisenhower Fellow, Marshall Memorial Fellow, and Senior Fellow with the Institute for Human & Machine Cognition. Business Insider named him one of the top “24 Americans Who Are Changing the World” and the World Economic Forum named him a Young Global Leader. Over his career, he has advised six different start-ups, led an interagency team spanning sixteen different agencies that received the National Intelligence Meritorious Unit Citation, and received the Joint Civilian Service Commendation Award, the National Intelligence Exceptional Achievement Medal, Arthur S. Flemming Award, as well as the Roger W. Jones Award for Executive Leadership. He is the author of more than forty academic publications, was invited to give the AI World Society Distinguished Lecture to the United Nations in 2019, and was named by HMG Strategy as one of the Global “Executives Who Matter” in 2020.

Dr. Peter Brooks, Consultant, GeoTech Center, Atlantic Council

Peter Brooks is a senior researcher and national security analyst at the Institute for Defense Analyses, a federally funded research and development center. For more than three decades, he has contributed to the understanding of critical national security issues for a wide range of government agencies. His broad expertise includes intelligence analysis, advanced technologies and applications, and joint force analyses, experimentation, strategy, and cost assessments.

Stephanie Wander, Deputy Director, GeoTech Center, Atlantic Council

Stephanie Wander is a technology and innovation strategist with a successful track record of launching large-scale projects to solve global grand challenges. Ms. Wander’s approaches integrate innovation best practices and mindsets, including design thinking, behavior change strategies, foresight techniques, and expert and public crowdsourcing.

Previously, Ms. Wander was a lecturer at the University of Southern California Suzanne Dworak-Peck School of Social Work where she taught graduate social work professionals in design, innovation, and disruptive technology.

Rose Butchart, Senior Adviser, National Security Initiatives, GeoTech Center, Atlantic Council

Rose Butchart is the senior adviser for National Security Initiatives at the Atlantic Council’s GeoTech Center.

As a program manager for the Department of Defense’s National Security Innovation Network, she managed, designed, and scaled a variety of programs, including a technology transfer and transition (T3) program designed to bring breakthrough Department of Defense lab technology to market and to the warfighter. She also managed a workshop series to tackle some of the military’s intractable problems and a fellowship that placed active-duty military and Department of Defense civilians at technology start-ups.

Claudia Vaughn Zittle, Program Assistant, Atlantic Council GeoTech Center

Claudia Vaughn Zittle was a program assistant with the Atlantic Council’s GeoTech Center. In this role, she managed a wide range of projects at the intersection of emerging technologies and dynamic geopolitical landscapes. She also conducted research and provided written analysis for publication on Atlantic Council platforms.

Originally from the Washington, DC, area, she received her BA in International Relations from Cornell College. She is continuing her education at American University’s School of International Service, where she studies International Relations with a concentration in US Foreign Policy and National Security.

Claire Branley, Program Assistant, Atlantic Council GeoTech Center

Claire Branley joined the Atlantic Council’s Geotech Center after graduating from the University of Washington with a BS in Public Health and Global Health. She was a research assistant in the Moussavi-Harami Lab, uncovering gene therapies for inherited heart disease. She is deeply passionate about the prevention of disease and has assisted several maternal and child health research projects and volunteered in farm-to-food pantry initiatives to decrease food insecurity in the Seattle area. Her interests include chronic disease burden, global food security, and promoting interdisciplinary solutions.

Biographies of the key contributors to the GeoTech Commission Report

Research and writing on misinformation

Dr. Pablo Breuer, Nonresident Senior Fellow, GeoTech Center, Atlantic Council

Dr. Pablo Breuer is an information/cyber warfare expert and a twenty-two-year veteran of the US Navy with tours including the National Security Agency, US Cyber Command, and United States Special Operations Command. He is a cofounder of the Cognitive Security Collaborative and coauthor of the Adversarial Misinformation and Influence Tactics and Techniques (AMITT) framework.

Dr. Robert Leonhard, National Security Analysis, Johns Hopkins University Applied Physics Laboratory

Robert Leonhard is on the principal professional staff as an analyst in the National Security Analysis Department of Johns Hopkins University’s Applied Physics Laboratory (JHU/APL). His main areas of focus are irregular warfare, nuclear deterrence, and game design. Prior to joining JHU/APL, he earned a PhD in American History from West Virginia University, a Master of Military Arts and Sciences from the US Army, an MS in International Relations from Troy State University, and a BS in European History from Columbus University. He is a retired Army infantry officer and planner. He is the author of The Art of Maneuver (Presidio Press, 1991), Fighting by Minutes: Time and the Art of War (Praeger, 1994), The Principles of War for the Information Age (Presidio Press, 1998), Little Green Men: a primer in Russian Unconventional Warfare, Ukraine 2013-2014 (JHUAPL, 2016), and The Defense of Battle Position Duffer: Cyber-Enabled Maneuver in Multi-Domain Battle (JHUAPL, 2016). He may be contacted at Robert.Leonhard@jhuapl.edu.

John Renda, Program Manager, Army Special Operations, Johns Hopkins University Applied Physics Laboratory

Col. John Renda, USA (Ret), is a program manager for Army Special Operations at the Johns Hopkins University’s Applied Physics Laboratory. He graduated from Tulane University with a degree in Political Science and International Relations, and earned an MS in National Security from the US Naval War College. He served as a career Psychological Operations officer in US Army Special Operations. His key assignments included 75th Ranger Regiment Information Operations Officer, 1st Psychological Operations Battalion Commander, United States Special Operations Command (USSOCOM) Director J39 National Capital Region, and National Security Council Staff, Director for Strategic Communication. He may be contacted at john.renda@jhuapl.edu.

Dr. Sara-Jayne Terp, Nonresident Senior Fellow, GeoTech Center, Atlantic Council

Sara-Jayne Terp builds frameworks to improve how autonomous systems, algorithms, and human communities work together. At Threet Consulting, she creates processes and technologies to support community-led disinformation defense. She is an Atlantic Council Senior Fellow, CogSecCollab lead, and chair at CAMLIS and Defcon AI Village. Her background includes intelligence systems, crowdsourced data gathering, autonomous systems (e.g., human-machine teaming), data strategy, data ethics, policy, nation state development, and crisis response.

Appendix B

Stewart Scott, Assistant Director, GeoTech Center, Atlantic Council

Stewart Scott is an assistant director with the Atlantic Council’s GeoTech Center, where he conducts research and provides written analysis for publication on Atlantic Council platforms and works on joint projects with other centers in the Atlantic Council. He earned his AB, along with a minor in Computer Science, at the School of Public and International Affairs at Princeton University.

We would also like to thank the following members of the Atlantic Council’s Cyber Statecraft Initiative for their contributions to Appendix B: Trey Herr, Simon Handler, Madison Lockett, Will Loomis, Emma Schroeder, and Tianjiu Zuo.

Appendix C and writings on global health

Dr. Divya Chander, Nonresident Senior Fellow, GeoTech Center, Atlantic Council

Dr. Chander is a physician and neuroscientist who trained at Harvard, University of California San Diego, University of California San Francisco, and the Salk Institute. She has been on the Anesthesiology Faculty at Stanford University since 2008 and Neuromedicine Faculty at Singularity University since 2010. Her postdoctoral training in optogenetic technology was conducted in the laboratories of Karl Deisseroth and Luis de Lecea at Stanford University, where she used light-activated ion channels inserted in DNA to study sleep and consciousness switches in brains. She is currently working on applications of neural wearable devices to crossover consumer and medical markets.

Appendix D

Inkoo Kang, Research Consultant, GeoTech Center, Atlantic Council

US Air Force 2nd Lt. Inkoo Kang is a research consultant for the Atlantic Council’s GeoTech Center. At the Atlantic Council, he conducts research and provides written analyses on the increasingly important role of outer space for social, economic, and military operations. His main interest is how emerging technologies are merging military, diplomatic, humanitarian, and economic challenges, and how the military must adapt to such threats.

Appendix E

Borja Prado, Research Assistant, GeoTech Center, Atlantic Council

Borja Prado holds an MS in Foreign Service (MSFS) from Georgetown University, where he concentrated in Global Politics and Security, focusing on the impact of disruptive technologies on governments, businesses, and societies.

He aims to apply his research experience, language skills, and strong background in technology and global affairs to help governments, businesses, and societies succeed in this increasingly uncertain era.

Acknowledgements

We would like to thank the following members of the Commission Co-Chair teams for their assistance, expertise, and technical review of the report:

  • Stoney Burke, Head of Federal Affairs and Public Policy, Amazon Web Services
  • Ira Entis, Managing Director, Growth and Strategy Lead, Accenture Federal Services
  • Geoffrey Kahn, Managing Director, Government Relations, Accenture
  • Pamela Merritt, Managing Director, Federal Marketing and Communications, Accenture Federal Services
  • Davis Pace, Professional Staff Member, House Foreign Affairs Committee
  • Sean Sweeney, Manager, Government Relations, Accenture
  • Clayton Swope, Senior Manager, National Security Public Policy, Amazon Web Services
  • Carolyn Vigil, Senior Customer Engagement Manager, Amazon Web Services

We would like to acknowledge the following individuals for their review and commentary on relevant sections of the report: Laura Bate, Natalie Barrett, Pablo Breuer, Mark Brunner, Mung Chiang, Kevin Clark, Donald Codling, Carol Dumaine, Ryan G. Faith, Melissa Flagg, James F. Geurts, Jasper Gilardi, Bob Gourley, Bob Greenberg, Simon Handler, Henry Hertzfeld, Robert Hoffman, Erich James Hösli, Diane M. Janosek, William Jeffrey, Charles Jennings, Declan Kirrane, John J. Klein, Sandra J. Laney, John Logsdon, Robert Lucas, Lauren Maffeo, Jerry Mechling, Ivan Medynskyi, Ben King, Ben Murphy and the team at Reaching the Future Faster LLC, James Olds, Nikhil Raghuveera, Matthew Rose, Benjamin Schatz, Emma Schroeder, Jeremy Spaulding, Keith Strier, Daniella Taveau, Trent Teyema, Bill Valdez, and Tiffany Vora.

We also would like to express sincere appreciation to individuals both internal and external to the Atlantic Council for help in preparing this report for final publication. Their professional and dedicated efforts were essential to this work.

Lastly, we want to thank all the GeoTech Fellows and GeoTech Action Council members, each of whom embodies the spirit of the new Center as we look to the future ahead: Be bold. Be brave. Be benevolent.

The post Conclusion, appendices, and acknowledgements appeared first on Atlantic Council.

]]>
Future of work https://www.atlanticcouncil.org/content-series/geotech-commission/chapter-7/ Tue, 25 May 2021 22:58:26 +0000 https://www.atlanticcouncil.org/?p=392395 An in depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.

The post Future of work appeared first on Atlantic Council.

]]>

Report of the Commission on the Geopolitical Impacts of New Technologies and Data

Chapter 7. Future of work


While this report has focused on the technological changes that will impact geopolitics over the next decade, the recommendations contained within will be meaningless if the United States and allied nations ignore the most important ingredient in the success or failure of all endeavors: people. Developing a digitally fluent and resilient workforce that can meet the challenges of the GeoTech Decade will require private and public sectors to pursue several approaches. These include a broadened view of technical competencies and how they are acquired, improved alignment of skills and job requirements, incentives for employer-based training, and data collection to help assess the effectiveness of these investments and their effects on workers. Ensuring that people, especially people from underrepresented communities, are not left behind by the advance of technology—and that societies have the skilled workforces they need to innovate and prosper—will determine whether the GeoTech Decade lives up to its ambition.

From artificial intelligence (AI) to quantum computing, and for applications ranging from augmented reality to smart cities and communities,1 the technologies that will shape the GeoTech Decade require specialized investments in the US workforce.2 Shifting from the “findings and recommendations” format of the previous chapters, this closing chapter discusses key areas needing greater focus and investment from businesses, governments, educational institutions, and stakeholder organizations, as follows.

Create the workforce for the GeoTech Decade

Recognize the diverse competencies that characterize skilled technical workers

Diverse competencies include academic credentials, technical competencies in an industry, and technical competencies in a specific occupation, plus “soft skills” that make for reliable and collegial employees.3 Job descriptions should consider the value of all sources of relevant experience and ability.


Communicate the breadth of pathways into skilled technical work

Given the current focus on a college degree being a prerequisite to desirable, skilled technical jobs, the workforce should be better informed about the variety of skilled technical occupations, the different ways of acquiring credentials (e.g., college certificates, professional certifications, professional licenses, and digital badges), and how such credentials allow more points of entry into desired occupations.

Strengthen skilled technical training and education

Secondary school: Career and technical education (CTE) programs4 enable the acquisition of STEM education combined with work experience that teaches technical skills relevant to specific professions. CTE programs can be enhanced through active participation and guidance provided by representatives from local businesses. This could help ensure that the skills training is better matched with employer needs and requirements. The P-TECH program, now operating schools in eleven US states, Australia, Morocco, and Taiwan, is another model for building regional workforces with the needed technical skills and for providing underserved youths with opportunities for gaining relevant technical skills.5

Post-secondary school: There are 936 public community colleges in the United States,6 representing a nationwide resource for improving the technical skills of the current and future workforce. According to a Community College Research Center analysis, “6.7 million students were enrolled at community colleges in fall 2017, and nearly 10 million students enrolled at a community college at some point during the 2017-18 academic year. Yet, the overall percent of community college enrollees in 2014 that completed a college degree at a four-year institution within six years is 17 percent.”7 Increasing this completion rate through financial incentives and investments could increase the number and qualifications of the technically skilled workforce in the United States.

Non-college credentials: The value to the worker and the employer of non-college degree certification programs—apprenticeships, certifications, certificate programs—could be improved by better linking them to established, defined technical workforce competencies. Improved standards and data on the effectiveness of these credentials will help workers and employers determine the value of these credentials and enable more informed choices for skills training.

Alternative sources of skilled workers: A recent study8 examined the prevailing practice of requiring a four-year college degree as a prerequisite for skilled jobs. The analysis identified large populations of workers with suitable skills but without a college degree. Of these, twenty-nine million have skills that would enable them to transition to an occupation with a significantly higher wage. These results suggest that job descriptions should be carefully specified so as to reach the largest qualified talent pool.

Better align employer-based training with needs

Business incentives: Incentives for employers to invest in improving workforce technical skills should help a company remain competitive. The investments would align the employer’s needs for technically skilled workers with the training and education that is offered. One approach could be based on tax incentives for increasing investment in workforce skill development to raise productivity.9

Technology development and training: Workforce organizations can help employers and workers communicate about needed technical skills and about the mechanisms and policies used to manage these requirements. To accelerate identifying and acquiring the technical skills the workforce will need, technology development programs could also create training programs for the skills associated with using a new technology in a product. This can shorten the gap between technology development and worker training.

Acquire and analyze human capital development and management data

Human capital development and management data should address projections of the supply and demand for workers according to categories of technical skills, results of the search and hiring process, and how well the employer’s needs were satisfied. The data also should inform how well the training policies provided equitable access to skills training across the workforce.

These data should enable analyses of the expected value of different options for skills education and training for workers, the return on the investment of workforce training for businesses, and options for adjusting workforce training policies.

Foster lifelong learning

The pace at which advanced technology is changing the workplace and the skills needed to maintain a competitive economy makes lifelong learning imperative. Individuals should be able to guide their training and education throughout their working years.

Accomplishing this on a national scale will require carefully crafted incentives that motivate individuals to embrace this approach. Important elements may include information on the value of continuing education programs and the job opportunities they enable, funding mechanisms that lower the cost to the individual, and strategies developed with businesses that specify how continued learning enhances an individual’s work prospects.

To guide individual choices, new tools can facilitate gathering and synthesizing the complex array of information on skills, occupations, training opportunities, and assessments of their value. The tools can also help the individual identify and secure funding from available sources, and help government funding sources be applied efficiently to this long-term challenge.

Equitable access to opportunity

The United States needs to ensure equitable access to opportunity during the GeoTech Decade. From access to affordable broadband to digital literacy, governments and the private sector need to make significant investments and work together to reduce barriers to full participation in the economy.

Access to affordable, high-speed Internet and devices to use it

Ensuring that all people can participate in the GeoTech Decade requires a commitment to equitable access to affordable, high-speed Internet. Millions do not have high-speed broadband, particularly in rural areas.10 What is more, many with access to high-speed broadband are still unable to afford the high cost of Internet and the devices needed to access it.11 Lack of access and affordability perpetuates systemic inequities. 

While Congress has made significant investments in broadband since the onset of the COVID-19 pandemic, more remains to be done. The Emergency Broadband Benefit Program has helped low-income households afford broadband during the pandemic.

Acquiring digital literacy

Digital literacy, the ability to find, evaluate, utilize, and create information using digital technology, is becoming an essential skill for every individual. Digital literacy is an important element in eliminating a digital divide among nations and within a society. It complements affordable, high-speed Internet access by enabling people to develop and communicate local content, to communicate their issues and concerns, and to help others understand the context in which these issues occur.

1    Smart Cities and Communities Act of 2019, H.R. 2636 — 116th Congress (2019-2020), accessed March 26, 2021, https://www.congress.gov/116/bills/hr2636/BILLS-116hr2636ih.pdf
2    National Academies of Sciences, Engineering, and Medicine, Building America’s Skilled Technical Workforce (Washington, DC: National Academies Press, 2017) accessed April 16, 2021, http://nap.edu/23472; Mark Warner, “Part II. Investing in Workers,” Medium, February 8, 2021, accessed April 16, 2021, https://senmarkwarner.medium.com/ii-investing-in-workers-e7e9a09ff24c
3    National Academies of Sciences, Engineering, and Medicine, Building America’s Skilled Technical Workforce.
4    Bri Stauffer, “What Is Career & Technical Education (CTE)?” Applied Educational Systems, February 4, 2020, accessed April 16, 2021, https://www.aeseducation.com/blog/career-technical-education-cte
5    “What is P-TECH all about?” website homepage accessed April 16, 2021, https://www.ptech.org/
6    “Number of community colleges in the United States in 2021, by type,” Statista, accessed April 16, 2021, https://www.statista.com/statistics/421266/community-colleges-in-the-us/
7    “Community College FAQs,” Community College Research Center, Teachers College, Columbia University, accessed April 16, 2021, https://ccrc.tc.columbia.edu/Community-College-FAQs.html
8    Peter Q. Blair et al., “Searching for STARs: Work Experience as a Job Market Signal for Workers without Bachelor’s Degrees,” National Bureau of Economic Research, March 2020, accessed April 16, 2021, https://www.nber.org/papers/w26844
9    Warner, “Part II. Investing in Workers.”
10    Federal Communications Commission, 2020 Broadband Deployment Report, April 24, 2020, accessed April 16, 2021, https://docs.fcc.gov/public/attachments/FCC-20-50A1.pdf
11    Tom Wheeler, 5 steps to get the internet to all Americans COVID-19 and the importance of universal broadband, Brookings Institution, May 27, 2020, accessed April 16, 2021, https://www.brookings.edu/research/5-steps-to-get-the-internet-to-all-americans/

Assured space operations for public benefit https://www.atlanticcouncil.org/content-series/geotech-commission/chapter-6/ Tue, 25 May 2021 22:58:12 +0000 https://www.atlanticcouncil.org/?p=392392 An in depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.


Report of the Commission on the Geopolitical
Impacts of New Technologies and Data

Chapter 6. Assured space operations for public benefit

The growing commercial space industry enables ready access to advanced space capabilities for a broader group of actors. To maintain trusted, secure, and technically superior space operations, the United States must ensure it is a leading provider of needed space services and innovation in launch, on-board servicing, remote sensing, communications, and ground infrastructures. A robust commercial space industry not only enhances the resilience of the US national security space system by increasing space industrial base capacity, workforce, and responsiveness, but also further advances a dynamic innovative environment that can bolster US competitiveness across existing industries, while facilitating the development of new ones.

The fast-growing critical dependence on space for national security, the global economy, and public-benefit interests makes assured space operations essential for ensuring a more free, secure, and prosperous world.

As smaller satellites become more capable, large constellations of government and commercial platforms could increase space mission assurance and deterrence by “eliminating mission critical, single-node vulnerabilities and distributing space operations across hosts, orbits, spectrum, and geography.”1 Advances in commercial space also enable exploring our planet’s oceans, monitoring for climate change-related risks, and mapping of other parts of our solar system.

Finding 6: The US commercial space industry can increase its role in supporting national security.

The National Space Strategy2 includes four areas of emphasis: resilience, deterrence, foundational capabilities, and more conducive domestic and international environments. It envisions improved leverage of, and support for, the US commercial industry. The Defense Space Strategy Summary3 highlights that the rapidly growing commercial space industry is introducing new capabilities as well as new threats to US space operations. A main effort in this strategy is to cooperate with industry and other actors to leverage their capabilities.

“Space Policy Directive-2—Streamlining Regulations on Commercial Use of Space” provides support for the US commercial space industry.4 In support of the executive branch’s overall policy to promote economic growth, protect national security, and encourage US leadership in space commerce, the directive requires reviews of launch and reentry licensing for commercial space flight, the Land Remote Sensing Policy Act of 1992, the Department of Commerce’s organization of its regulation of commercial space flight activities, radio frequency spectrum, and export licensing regulations.5

The Government Accountability Office’s (GAO’s) report on the Department of Defense’s (DoD’s) use of commercial satellites6 describes several potential benefits, including more responsive delivery of capabilities to space and increased deterrence and resilience owing to the larger number and wider distribution of commercial satellite constellations.

Finding 6.1: Large constellations of small satellites are being developed.

The development of small satellites enables the proliferation of very large constellations of satellites. For example, several companies are currently planning constellations of communications satellites comprising an aggregate deployment of several thousand satellites in low Earth orbit (LEO). In total, the communications capacities could exceed tens of terabytes. This enables low-latency, high-bandwidth communications to any region, bringing valuable educational opportunities to underserved populations, and supporting new data-intensive communications in advanced countries.7 Small Earth observation satellites are being deployed in constellations of hundreds of platforms by several companies. These can produce global coverage with revisit intervals ranging from minutes to hours. Several types of sensors are being deployed including electro-optical, synthetic aperture radar, and radio signal collection.8 Companies in the United States, Europe, Russia, and China are actively pursuing these new capabilities.9

The ability to image any area, and to communicate with any area, will become commercially available to any individual, group, or government. Coupled with access to cloud computing and big data analytics, innovations will occur in many fields, e.g., precise, real-time weather and soil condition data for farmers to increase yield, ship tracking to aid logistics, indicators of disease spread to inform a pandemic observation network, and the like.

Large constellations may also contribute to deterrence. The larger number of platforms operating in conjunction with major military satellites may make the entire constellation more resilient.

The commercial space industry is also developing satellite servicing capabilities. These help extend the operating life of each satellite, though the ability to maneuver close to another satellite may be perceived by adversaries as a threat.

Finding 6.2: There is increasing focus on cybersecurity for commercial space systems.

The “Space Policy Directive-5”10 specifies that the US policy for managing risks11 to the growth and prosperity of the commercial space economy is to rely on “executive departments and agencies to foster practices within Government space operations and across the commercial space industry that protect space assets and their supporting infrastructure from cyber threats and ensure continuity of operations.” Several cybersecurity principles provide the foundation for these efforts, though the directive expects space system owners and operators to be responsible for implementing cybersecurity practices, and it does not address enforcement actions. No timeline for the development of regulations is provided.

Finding 6.3: The UN Outer Space Treaty (OST) requires interpretation to determine when emerging commercial space platforms become targets.

The growth in the commercial satellite industry will lead to lower-cost satellites with advanced sensors, communications, on-board computation, and security capabilities. Over time, small satellites operated in large constellations could become more useful for military purposes.

A key determinant in the application of the UN OST to the question of whether the military can use commercial satellites is “whether the commercial satellite is actively making a contribution to military action.”12 For example, if the military is using a commercial communications satellite to relay its messages, the UN OST does not view the communications satellite as a military target. Full consideration of the treatment of dual-use commercial satellites is not settled and will evolve as more nations participate in the commercial space industry.13 Yet, because nations like China and Russia already target (terrestrial) commercial networks as part of their computer network exploitation campaigns, it stands to reason that they will not necessarily recognize a distinction between commercial and military satellite targets.

Finding 6.4: The development of constellations of small satellites beneficial to the military may require government support.

Commercially viable capabilities in small satellites are advancing, but they may not yet be sufficient for some military needs. For example, the resolution of a commercial electro-optical sensor designed for surveilling traffic may be inadequate for target identification, though it may be useful for tracking troop movements. A balanced policy would have the government focus on the more exquisite capabilities that only it can provide, while relying on the commercial sector to meet other requirements. The government can also signal to the markets that it supports these constellations and their capabilities by purchasing commercial data and services, thereby helping to ensure a strong commercial industrial base.

Finding 6.5: Government support for commercial space activities can be strengthened.

The growth of the commercial space industry occurring in several major countries14 requires a review of US commercial space policy15 as the roles of government and commercial industry change in key areas. The National Aeronautics and Space Administration (NASA) is establishing a wholly commercial capability to land humans on the moon (from lunar orbit), in contrast with the prior approach of government control of human spaceflight.16 There are efforts to consolidate and streamline the regulatory framework and organizations for US commercial space capabilities.17 To support greater innovation and bolster US commercial space industries, recently proposed legislation identified ways to make the commercial space licensing process simpler, more timely, and more transparent.18 These efforts attempt to balance commercial interests against the government’s need to ensure the commercial space capabilities meet national security and foreign policy requirements. Such balancing may be less important as sensitive imagery becomes more available from foreign companies. To address urgent new requirements—e.g., on-orbit servicing of a space force, or continuous global observation in support of climate study, agriculture, and ocean systems—the government may require new policies to support increasing reliance on commercial space industries and new commercial space capabilities.

Approach 6: Accelerate the development and deployment of dual-use commercial satellites, including applications to Earth and space exploration.

The United States should use the emerging commercial space industry, and large constellations of small satellites, to enhance the resilience of national security space missions. This will require a deliberate strategy to guide commercial system developments, and this must be balanced with benefits that accrue to the public. The United States should, with its allies, examine how to interpret current treaties when considering the new commercial space capabilities. The United States, its allies, and private industry should implement global Earth and space observation capabilities.

Recommendation 6: Foster the development of commercial space technologies; develop a cross-agency strategy and approach to space that can enhance national security space operations and improve agriculture, ocean exploration, and climate change activities; and align civilian and military operations, as well as international treaties, to support these uses.

Recommendation 6.1: Ensure federal investments in the commercial space industry deliver public benefits.

Congress should pass legislation that directs the Office of Science and Technology Policy (OSTP) to lead an interagency initiative that develops an economic impact assessment of existing and future government investments in the US commercial space industry, as well as a public-private investment strategy for technology innovations and operating efficiencies that will ensure subsequent benefit to the public interest. Such benefits should contribute to global access to open data sets—via a space-based Internet, space-based cloud storage and computing—of Earth observation, global health, humanitarian applications, and other areas; it should also include suitable sharing of government-funded data collections among other government programs. A cross-agency group including the National Aeronautics and Space Administration (NASA), the National Geospatial-Intelligence Agency (NGA), the Defense Advanced Research Projects Agency (DARPA), relevant federal departments, private industry, and allied nations should develop the plans and partnerships for global Earth and space observation in support of environmental security.

Recommendation 6.2: Foster commercial space technologies of strategic importance and protect these from foreign acquisition.

Congress should direct a cross-agency group including NASA and the Department of Defense to conduct a joint review19 of dual-use commercial space technologies and capabilities that are of strategic importance to national security space missions. The scope includes communications, on-orbit storage and computing, large constellations of small platforms, sensing, space situational awareness, satellite protection, launch, and on-orbit servicing. Congress should direct a streamlined licensing process and simplify regulations where appropriate. Such dual-use technologies should be reviewed for protection from foreign acquisition under the expanded authorities of the Committee on Foreign Investment in the United States (CFIUS)20 and by the Senate Select Committee on Intelligence and the House Permanent Select Committee on Intelligence. The broadened role delineated by the Foreign Investment Risk Review Modernization Act of 2018 (FIRRMA) enables CFIUS to review noncontrolling foreign investments in critical technologies and critical infrastructure in the US space industrial base. Congress should direct an assessment of how the FIRRMA reforms have been applied and the resulting effect.

Recommendation 6.3: Harden the security of commercial space industry facilities and space assets.

The administration should designate the commercial space industry as a critical infrastructure sector and develop a sector-specific plan for its protection. The Department of Commerce should be assigned as the Sector-Specific Agency and should work with international standards-setting groups to harden select commercial space capabilities, e.g., protect communications against cyber threats.

The cybersecurity of both military and commercial spacecraft is a growing concern. Threat actors are devoting more attention to attacking both the software/IT supply chain as well as vulnerabilities in the cyber defenses on spacecraft. Large commercial mega-constellations of small satellites are performing an increasing range of business and communications functions, yet do not necessarily conform to high cybersecurity standards. The US government does not have standards for the design of cyber-secure commercial satellites, though it is introducing self-certification programs for commercial satellite providers.

The administration should extend the National Institute of Standards and Technology (NIST) cybersecurity maturity standards, guidelines, and best practices to the space domain, covering the space, link, ground, and user segments. The cyber-resilient design principles should consider the following: “Intrusion detection and prevention leveraging signatures and machine learning to detect and block cyber intrusions onboard spacecraft; a supply chain risk management (SCRM) program to protect against malware inserted in parts and modules; software assurance methods within the software supply chain to reduce the likelihood of cyber weaknesses in flight software and firmware; logging onboard the spacecraft to verify legitimate operations and aid in forensic investigations after anomalies; root-of-trust to protect software and firmware integrity; a tamper-proof means to restore the spacecraft to a known good cyber-safe mode; and lightweight cryptographic solutions for use in small satellites.”21
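To make one of these principles concrete, the following minimal Python sketch illustrates onboard intrusion detection that combines a known-bad command signature list with a simple statistical anomaly check on command arguments (a stand-in for the machine-learning element the principle describes). The command names, thresholds, and telemetry values are hypothetical and illustrative only, not drawn from any flight system or standard.

import statistics

# Hypothetical blocklist of known-bad command signatures (illustrative only).
KNOWN_BAD_SIGNATURES = {"FORCE_SAFE_MODE_BYPASS", "DUMP_KEYSTORE"}

def is_suspicious(command, arg_value, history, z_threshold=3.0):
    """Flag a command if it matches a known-bad signature or its argument
    deviates strongly from recent history (a simple statistical anomaly check)."""
    if command in KNOWN_BAD_SIGNATURES:
        return True
    if len(history) >= 5:
        mean = statistics.mean(history)
        spread = statistics.pstdev(history)
        if spread > 0 and abs(arg_value - mean) / spread > z_threshold:
            return True
    return False

# Example: a thruster-burn duration far outside the recent range is flagged.
recent_burns = [2.0, 2.1, 1.9, 2.2, 2.0]
print(is_suspicious("THRUSTER_BURN", 45.0, recent_burns))  # True: anomalous argument
print(is_suspicious("THRUSTER_BURN", 2.1, recent_burns))   # False: within normal range

In practice, such checks would run onboard alongside the logging, root-of-trust, and cyber-safe-mode recovery mechanisms listed above, rather than in isolation.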

Recommendation 6.4: Establish the conformance of emerging commercial space constellations to multinational agreements.

The United States should lead a conference to assess future developments in the commercial space industry with respect to the UN OST, the Artemis Accords,22 and other international agreements that may be constructed. The objective is to clarify the acceptable use of commercial space assets as these become of greater use in supporting militaries.

Commercial capabilities may, over time, provide essential portions of space-based surveillance, reconnaissance, communications, refueling, data storage and processing, and maintenance. As new military space capabilities become possible, there is an increased risk that these will be interpreted as “making an effective contribution to military action” and thereby become legitimate targets. These capabilities may include imaging satellites, communications satellites, space networks, satellite maintenance vehicles, launch vehicles, and so forth. A key area to clarify is the legal and technical assessment of what qualifies as “making an effective contribution to military action” involving space technology.23

Recommendation 6.5: Develop space technologies for mega-constellations of satellites that can monitor the entire planet pervasively and persistently at high resolution and communicate the information in near-real time.

The administration should develop autonomous space operations technologies for large-scale constellations. This program, led by the DoD, NASA, and other elements of the national security space enterprise, would use AI technologies to minimize or eliminate human requirements for satellite control, information collection, and information analysis; and increase the speed of the information-to-decision loop.

The administration should encourage commercial space companies to develop cost-effective technologies that increase the survivability of commercial satellites as the operating regions become more crowded or contested. This may enable commercial satellites to operate in a greater variety of conditions, thereby providing expanded value to the United States.

The administration should develop and conduct Challenge Prize funding opportunities for autonomous satellite operations, both for applications where a single, highly capable satellite autonomously manages its own complex taskings and for satellites that work as part of a large collection of similarly autonomous platforms.

The administration should use the model of the NASA Tipping Point solicitation to develop the capability to continuously monitor the world’s oceans—in particular, using space-based sensors—for the impact of climate change and other issues of global importance. This program would be jointly managed by NASA, NSF, and DARPA with collaborations from the European Union (EU) and other participants. This multiyear initiative would help establish a global, real-time Earth oceans observation network and the supporting autonomous control, communications, and data analytics capabilities. In addition to space technologies, this program could also support the development of surface and underwater vehicles to perform this function. The Department of State should address the treaty implications of large numbers of remotely-piloted and autonomous surface and underwater vehicles and develop new international agreements where needed.

1    John J. Klein, The Influence of Commercial Space Capabilities on Deterrence, Center for a New American Security, March 25, 2019, accessed March 26, 2021, https://www.cnas.org/publications/reports/the-influence-of-commercial-space-capabilities-on-deterrence; US Deputy Secretary of Defense Robert Work’s speech to the Satellite Industries Association, March 7, 2016, accessed March 26, 2021, https://www.defense.gov/Newsroom/Speeches/Speech/Article/696289/satellite-industries-association/; Government Accountability Office, Military Space Systems: DoD’s Use of Commercial Satellites to Host Defense Payloads Would Benefit from Centralizing Data, July 2018, GAO-18-493, accessed March 26, 2021, https://www.gao.gov/products/gao-18-493
2    White House, “An America First National Space Strategy,” accessed March 26, 2021, https://aerospace.csis.org/wp-content/uploads/2018/09/Trump-National-Space-Strategy.pdf
3    Department of Defense, Defense Space Strategy Summary, June 2020, accessed March 26, 2021, https://media.defense.gov/2020/Jun/17/2002317391/-1/-1/1/2020_DEFENSE_SPACE_STRATEGY_SUMMARY.PDF
4    Executive Office of the President, “Streamlining Regulations on Commercial Use of Space,” Federal Register, Space Policy Directive-2 of May 24, 2018, accessed March 26, 2021, https://www.federalregister.gov/documents/2018/05/30/2018-11769/streamlining-regulations-on-commercial-use-of-space
5    Ibid.
6    Government Accountability Office, Military Space Systems, 4
7    Matthew A. Hallex and Travis S. Cottom, “Proliferated Commercial Satellite Constellations, Implications for National Security,” Joint Forces Quarterly 97 (2nd Quarter 2020), accessed March 26, 2021, https://ndupress.ndu.edu/Portals/68/Documents/jfq/jfq-97/jfq-97_20-29_Hallex-Cottom.pdf?ver=2020-03-31-130614-940
8    Ibid.
9    Ibid.
10    White House, Memorandum on Space Policy Directive-5—Cybersecurity Principles for Space Systems, presidential memoranda, September 4, 2020, accessed March 26, 2021, https://trumpwhitehouse.archives.gov/presidential-actions/memorandum-space-policy-directive-5-cybersecurity-principles-space-systems/
11    Office of the Director of National Intelligence, Annual Threat Assessment of the US Intelligence Community, April 9, 2021, accessed April 16, 2021, https://www.dni.gov/files/ODNI/documents/assessments/ATA-2021-Unclassified-Report.pdf; Todd Harrison, Space Threat Assessment 2021, Center for Strategic and International Studies, March 31, 2021, accessed April 16, 2021, https://www.csis.org/analysis/space-threat-assessment-2021
12    “Practice Relating to Rule 10. Civilian Objects’ Loss of Protection from Attack,” ICRC IHL Database, Customary IHL, accessed March 26, 2021, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v2_rul_rule10
13    P.J. Blount, “Targeting in Outer Space: Legal Aspects of Operational Military Actions in Space,” Harvard National Security Journal Features, accessed March 26, 2021, https://harvardnsj.org/wp-content/uploads/sites/13/2012/11/Targeting-in-Outer-Space-Blount-Final.pdf; Yun Zhao, Space Commercialization and the Development of Space Law, Oxford University Press, July 30, 2018, accessed March 26, 2021, https://oxfordre.com/planetaryscience/view/10.1093/acrefore/9780190647926.001.0001/acrefore-9780190647926-e-42
14    Congressional Research Service, Commercial Space: Federal Regulation, Oversight, and Utilization, updated November 29, 2018, accessed March 26, 2021, https://fas.org/sgp/crs/space/R45416.pdf
15    American Space Commerce Free Enterprise Act of 2019, H.R. 2809 — 116th Congress (2019-2020), accessed March 26, 2021, https://www.congress.gov/bill/115th-congress/house-bill/2809
16    Congressional Research Service, Artemis: NASA’s Program to Return Humans to the Moon, updated January 8, 2021, accessed March 26, 2021, https://fas.org/sgp/crs/space/IF11643.pdf
17    Jeff Foust, “Commerce Department seeks big funding boost for Office of Space Commerce,” SpaceNews, February 16, 2020, accessed March 26, 2021, https://spacenews.com/commerce-department-seeks-big-funding-boost-for-office-of-space-commerce/
18    In the 115th Congress (2017-2018), the American Space Commerce Free Enterprise Act (H.R. 2809) and the Space Frontier Act of 2018 (S. 3277) include provisions to streamline the licensing process.
19    National Aeronautics and Space Administration, “Memorandum of Understanding Between the National Aeronautics and Space Administration and the United States Space Force,” September 2020, https://www.nasa.gov/sites/default/files/atoms/files/nasa_ussf_mou_21_sep_20.pdf
20    Congressional Research Service, The Committee on Foreign Investment in the United States (CFIUS), updated February 14, 2020, accessed March 26, 2021, https://fas.org/sgp/crs/natsec/RL33388.pdf
21    Brandon Bailey et al., Defending Spacecraft in the Cyber Domain, Aerospace Corporation, November 2019, accessed March 26, 2021, https://aerospace.org/sites/default/files/2019-11/Bailey_DefendingSpacecraft_11052019.pdf
22    National Aeronautics and Space Administration, The Artemis Accords: Principles for Cooperation in the Civil Exploration and Use of the Moon, Mars, Comets, and Asteroids for Peaceful Purposes, accessed March 26, 2021, https://www.nasa.gov/specials/artemis-accords/img/Artemis-Accords-signed-13Oct2020.pdf
23    Dr. Cassandra Steer, Why Outer Space Matters for National and International Security, Center for Ethics and the Rule of Law, University of Pennsylvania, January 8, 2020, accessed March 26, 2021, https://www.law.upenn.edu/live/files/10053-why-outer-space-matters-for-national-and; Jackson Nyamuya Maogoto and Steven Freeland, “Space Weaponization and the United Nations Charter Regime on Force: A Thick Legal Fog or a Receding Mist?” International Lawyer 41 (4) (Winter 2007): 1091–1119, http://www.jstor.org/stable/40707832, accessed March 26, 2021, https://www.law.upenn.edu/live/files/7860-maogoto-and-freelandspace-weaponization.pdf; Blount, “Targeting”; Theresa Hitchens and Colin Clark, “Commercial Satellites: Will They Be Military Targets?” Breaking Defense, July 16, 2019, accessed March 26, 2021, https://breakingdefense.com/2019/07/commercial-satellites-will-they-be-military-targets/

Continuous global health protection and global wellness https://www.atlanticcouncil.org/content-series/geotech-commission/chapter-5/ Tue, 25 May 2021 22:57:54 +0000 https://www.atlanticcouncil.org/?p=392390 An in depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.


Report of the Commission on the Geopolitical
Impacts of New Technologies and Data

Chapter 5. Continuous global health protection and global wellness

The COVID-19 pandemic has disrupted health and economic security, both directly and indirectly, for most of the planet. Inherent to this disruption are three systemic problems: (i) global and national leaders acted slowly to detect and contain the spread of the virus, (ii) global health organizations reacted slowly to contain the spread of the virus, and (iii) a mixture of factors caused the delayed response including late recognition of the threat and where it was circulating, slow incorporation of science and data into decision making, poor political will, and inconsistent messaging to citizens regarding the nature of the threat and precautions to take. The origin and spread of the coronavirus that causes COVID-19 also depended on a number of codependent factors—human encroachment on animal habitats, globalization and an interconnected world, and a global economy that ignored insufficient sanitation and public health standards. But, most importantly, it depended on a failure of adequate monitoring, data sharing, and early warning and mitigation systems.

Viruses and other pathogens know no borders, nor do they discriminate by race or class. Though nations may adopt their own strategies to enhance resilience and future planning, a more global approach to this interconnected system will be essential to keep all humans safe. Continuous global health protection builds upon a foundation of secure data and communications, rapid sharing of biological threat data across the globe, enhanced trust and confidence in the digital economy, and assured supply chains.

Finding 5: There is a need for a continuous biological surveillance, detection, and prevention capability.

The design of a pandemic surveillance, detection, and prevention system would require a multipronged approach, comprising global monitoring, early detection, rapid warning, and capable mitigation and prevention strategies. The system would perform the following main functions: biothreat agent recognition, mobilization of defenses, containing the spread of the biothreat agent, administration of therapeutic treatment, and the ability to recognize new pathogens and form specific neutralizing responses.

Much of the integrative assessments performed by the system would need to rely on a network capable of receiving data from multiple, decentralized information sources, and converting that information into indicators that can be aggregated and evaluated to support decision making at the individual, local community, and population level.1 A global detection and response system could enable greater resilience and prevention, and decrease the potential that new outbreaks of pathogens lead to global pandemics.2

Early detection would require the funding of a global, interconnected system that relies on partnerships among national governments and regional partners. Where there are gaps in collecting and sharing preferred data, e.g., when a nation or region does not participate, alternative indicators would need to be developed.4

Access to novel, authenticated data sources is a decisive factor in whether pandemic warning systems succeed or fail. As seen at the start of the COVID-19 pandemic, relying solely on government-provided information led to a delay in identifying the unusual pneumonia-like illness in Wuhan, China, and ultimately in releasing the genetic sequence of the virus.5 It cost lives, delayed warnings and the ability of others to detect the circulating virus, delayed containment and mitigation strategies (e.g., vaccine and therapeutic development), and enabled the virus to spread globally via human vectors.6

Authenticated data sources from different decentralized sources and edge devices could include both traditional (e.g., positive viral tests, hospitalization rates, excess death rates) and nontraditional sources of health information (e.g., passive monitoring of environment, wastewater, satellite data, human migration trends, market signals) that can be overlaid, combined, and aggregated to understand current public health conditions and to have predictive value.
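One way to picture how such heterogeneous signals could be overlaid and aggregated is a weighted composite indicator. The Python sketch below is purely illustrative: the signal names, baselines, and weights are assumptions, not a validated epidemiological model.

# Illustrative composite indicator built from traditional and nontraditional signals.
# All signal names, baselines, and weights are hypothetical.

def normalize(value, baseline):
    """Express a signal as its fractional rise above baseline, floored at zero."""
    return max(0.0, (value - baseline) / baseline)

def composite_risk(signals, baselines, weights):
    """Weighted sum of normalized signals; higher values mean more unusual activity."""
    return sum(weights[name] * normalize(signals[name], baselines[name])
               for name in signals)

signals   = {"positive_tests": 480, "wastewater_viral_load": 2.6, "excess_deaths": 55}
baselines = {"positive_tests": 300, "wastewater_viral_load": 1.0, "excess_deaths": 50}
weights   = {"positive_tests": 0.5, "wastewater_viral_load": 0.3, "excess_deaths": 0.2}

print(round(composite_risk(signals, baselines, weights), 2))  # 0.8 in this toy example

A real system would calibrate baselines and weights per region and report uncertainty alongside any single score rather than a point estimate.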

Finding 5.2: An elevated capacity on the global stage is required.

The components of global capacity in a pandemic include the ability to quickly identify and sequence novel pathogens; to quickly share that information with the world; to rapidly ramp-up testing; to develop and approve targeted vaccines and therapeutics; to have medical supply chain, manufacturing, and distribution capabilities in place; to have sufficient capital health equipment, medical consumables, and healthcare personnel in place; and to provide access to healthcare and reliable health information to all those in need.

These specific functions for creating a comprehensive global alert and response system and coordinating actions, as well as supporting localized capacity strengthening,7 were made part of the World Health Organization’s (WHO’s) updated 2005 International Health Regulations (IHR)8 and its pandemic preparedness plan.9 “To help countries review and, if necessary, strengthen their ability to detect, assess, and respond to public health events, WHO develops guidelines, technical materials, and training and fosters networks for sharing expertise and best practices. WHO’s help supports countries in meeting their commitments under the IHR to build capacity for all kinds of public health events.”10

To achieve the fullest potential of these approaches, there need to be investments on a global scale to support expanded detection, mitigation, and capacity-building strategies. These efforts should be conducted through public, private, and government partnerships based on mutual agreements to share data and report issues early. These should be multinational collaborations that would be able to overcome the limiting factors discussed in the next section. In developing these approaches, a priority is to strengthen transparency and accountability within the United Nations (UN) system, including at the WHO.11

Finding 5.3: There are several limiting factors.

There often is a lack of trust among groups, institutions, and governments. Governments do not always trust other governments; countries do not always trust global health bodies; nationally, states do not always trust each other or the federal government; and individuals do not always trust governments or health entities or officials. This lack of trust is well-documented. According to the 2020 Edelman Trust Barometer,12 “no institution is seen as both competent and ethical,” an opinion that includes government, business, nongovernmental organizations (NGOs), and the media. In the statistical model Edelman provides, government is widely seen as the most unethical, and the least competent, institution of the four. According to the International Development Association of the World Bank Group, half of the global population does not trust government institutions.13 Similarly, both individual citizens and countries may lack trust in national and global health bodies.

Health institutions are concerned about sharing data on health outbreaks too early, as doing so could make them look underinformed or appear to be “crying wolf” before the true measure of an outbreak is known.14 Governments may be incentivized to withhold information on outbreaks to maintain an appearance of strength and, ultimately, to control medical supplies to keep their own people safe. Withholding immediate access to information can severely affect outcomes, such as the spread of the virus, allowing it to gain a foothold in countries that are unaware of the threat. It also prevents the type of global and interdisciplinary cross-collaboration that has been so effective at advancing science, research and development (R&D), and progress toward solutions.

The cost of developing and operating a global pandemic surveillance, detection, and warning and response system must be borne by all nations in an equitable manner. A recent study15 estimates “[t]his cost includes the cumulative cost of failed vaccine candidates through the research and development process. … [P]rogressing at least one vaccine through to the end of phase 2a for each of the 11 epidemic infectious diseases would cost a minimum of $2.8–3.7 billion ($1.2 billion–$8.4 billion range).” According to a 2002 study, the cost of developing a vaccine—from research and discovery to product registration—is estimated to be between $200 million and $500 million per vaccine.16 Due to the high costs of developing vaccines and current therapeutics, developing an equitable funding model will rely on new research to make vaccines less expensive to develop, new technologies to conduct wide-area detection of signatures of biological activity, and new techniques for inexpensive diagnostic testing worldwide. The supply chains, manufacturing capabilities, vaccines, and therapeutics must be developed in such a manner that all nations are protected by such a global pandemic prevention system. The concern extends beyond vaccines which have been developed. Some diseases, like Zika, for which no vaccines exist, continue to be studied; and parasites, such as those that cause malaria, may become more widespread due to global climate change.

There are many types and sources of data that need to be identified in order to effectively predict or fight an epidemic. One is vector tracking. It is difficult to track zoonotic vectors that lead to viral spread. It is estimated that wild animals, in particular mammals, harbor an estimated forty thousand unknown viruses, a quarter of which could potentially jump to humans;17 it is also estimated that 75 percent of all emerging pathogens in the last decade have come from a zoonotic event.18 Further, it is complicated to surveil and track pathogen genesis, evolution, and global spread. Understanding of the science of viruses, other pathogens, and their mutation and evolution is incomplete, and research continues on new ways to monitor and spot outbreaks.

Insufficient public health infrastructures: A 2017 study conducted by the World Bank and the WHO points out that half of the global population does not have access19 to necessary health services, and that one hundred million people are pushed into extreme poverty because of health expenses.20

Approach 5: Develop a global pandemic surveillance, detection, and response system based on data sensing and integration via trusted networks.

Three important elements of this global system are the early detection and warning system, the rapid response and recovery system, and the elevated capacity building system.

Recommendation 5: Field and test new approaches that enable the world to accelerate the detection of biothreat agents, to universalize treatment methods, and to engage in mass remediation through multiple global means.

Recommendation 5.1: Develop a global early warning system comprised of pandemic surveillance systems coupled with an early warning strategy.

Congress should request the Centers for Disease Control and Prevention (CDC), National Institutes of Health (NIH), United States Agency for International Development (USAID), United States Department of Agriculture (USDA), and other associated agencies to jointly develop an initial demonstration of this system in collaboration with the WHO, private institutions, and partner nations. The foundation is a surveillance system comprised of both active and passive monitoring of multiple environments and biomes—space, atmosphere, water, soil, animal reservoirs. Fundamental to the pandemic surveillance strategy are (i) training local personnel to conduct routine testing and genomic surveillance where spillovers occur and to regularly report incidences of novel illnesses, and (ii) increased genetic testing to track pathogens and to delineate what is emerging from the natural environment versus being weaponized. Funding contributions and expert participation from other nations should be obtained.

Early detection would be enhanced by increasing the ability to identify and aggregate known data signals, identifying novel data signals, and enabling the combination of these signals into meaningful public health insights. This requires data to be labeled in such a way that it is globally recognized, named, and usable. Detection and monitoring also depend on developing distributed networks upon which those secured signals can arrive, inform local testing and response activities, and eventually be aggregated, while protecting personal data privacy, so that insights can be extracted. Finally, once preliminary flags or warning indicators cross a defined threshold, the warning or alarm could be sent throughout the distributed network, rather than relying on a single entity or body to release the relevant information.

Key development principles include:

  1. First determine a sufficient and obtainable set of data that the surveillance system should collect, and develop the local and regional capabilities to collect these data;
  2. Support a global, decentralized network that can authenticate data sources, and enable validated data-sharing amongst validated data producers;
  3. Enable cybersecure data aggregation and analysis capabilities while preserving personal data based on the terms specified in Recommendation 3.1 in this report;
  4. Empower a surveillance strategy commensurate with civil liberties and privacy protections;
  5. Facilitate a surveillance strategy comprised of both active and passive monitoring of multiple environments and biomes (space, atmosphere, water, soil);
  6. Facilitate a surveillance strategy comprised of monitoring of traditional health and nontraditional data sources [e.g., excess death rates, viral genome sequences, Internet searches, geographic information systems (GIS), market trends]; and
  7. Form distributed networks for global early warning system alerts.
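As a minimal sketch of the flow described above, the following Python example authenticates a producer’s report with a shared-key HMAC (a simplified stand-in for principle 2) and broadcasts an alert across peer nodes once a risk score crosses a threshold. The node names, key, and threshold are invented for illustration; a production network would use per-producer keys or public-key infrastructure rather than a single shared secret.

import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # Placeholder; a real network would use per-producer keys or PKI.
ALERT_THRESHOLD = 0.75    # Hypothetical composite-risk threshold.
PEER_NODES = ["region-a", "region-b", "who-liaison"]  # Hypothetical peer nodes.

def sign_report(report):
    """Attach an HMAC signature so downstream nodes can verify the producer."""
    payload = json.dumps(report, sort_keys=True).encode()
    report["sig"] = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return report

def verify_report(report):
    """Recompute the HMAC over everything except the signature and compare."""
    body = {k: v for k, v in report.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return "sig" in report and hmac.compare_digest(report["sig"], expected)

def maybe_alert(report):
    """Broadcast to all peers when an authenticated report crosses the threshold."""
    if verify_report(report) and report["risk_score"] >= ALERT_THRESHOLD:
        return ["ALERT to {}: {}".format(peer, report["location"]) for peer in PEER_NODES]
    return []

report = sign_report({"location": "district-12", "risk_score": 0.82})
print(maybe_alert(report))  # One alert message per peer node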

Recommendation 5.2: Reestablish and realign existing pandemic monitoring programs.

The administration should provide R&D funding to current pandemic monitoring and response networks as part of the effort to build a system for continuous global health protection. The primary actions to consider include: reinstate the USAID PREDICT program21 for tracking global zoonotic disease, provide additional funding to the EcoHealth Alliance22, and utilize networks to combine data being accumulated through parallel observation networks—e.g., the Strategic Advisory Group of Experts on Immunization (SAGE),23 the National Ecological Observatory Network (NEON),24 Collective and Augmented Intelligence Against COVID-19 (CAIAC),25 and the Epidemic Intelligence from Open Sources (EIOS).26

Recommendation 5.3: Emphasize privacy protections in pandemic surveillance systems.

The administration should support initiatives that emphasize privacy protections in pandemic surveillance systems. These initiatives should be managed by NIST and NSF in collaboration with the Department of Health and Human Services’ Office of the National Coordinator for Health Information Technology and the lead science institutions in partner nations. The mitigation strategies will (i) identify infected individuals early through robust and frequent testing with a globally recommended strategy; (ii) deploy contact-tracing strategies (commensurate with civil liberties); (iii) deliver consistent health messaging for disease prevention, spread, and treatment by coordinating centralized information and data reporting with local, on-the-ground, trusted community leaders; and (iv) provide consistent public health guidance for settings such as air travel, cruises, sporting events, schools, restaurants, and stores.

Recommendation 5.4: Increase resilience in medical supply chains.

The administration should fund R&D of cellular- and molecular-based manufacturing technologies27 that enhance supply chain assurance.28 Both cellular and molecular manufacturing are specific instances of synthetic biology. In some cases, they can be rapidly deployed by setting up the conditions for production and then substituting in the genetic sequences of interest to ramp up production quickly. This simplifies supply chains and shortens production lead times, can increase capacity, and creates flexible supply chains by producing candidates that are thermostable.

Some of the more forward-looking technologies for bio-sensing, vaccine development, and therapeutics are amenable to this kind of manufacturing and stockpiling. The goal is to develop redundancy at a regional level (components/ingredients; manufacturing), adopt more rigorous methods for validation of authenticity, and support multiregional distribution chains.

Recommendation 5.5: Develop capacity building for vaccine and therapeutics discovery, development, and distribution.

The administration should establish PPPs to improve pandemic protection capacity building. There are three efforts: (i) biomanufacturing and synthetic biology innovations will create therapeutic discovery systems and speed vaccine discovery; (ii) vaccine discovery, development, and distribution coalitions like the Coalition for Epidemic Preparedness Innovations (CEPI) will enable equitable distribution; and (iii) information monitoring and distribution regarding consumables, capital equipment supplies, hospital resources, and healthcare workers will support public and organizational activities during a crisis.

Recommendation 5.6: Develop rapid responses to unknown pathogens, and supporting data collection networks.

NIH should develop and lead a program for the automated development of treatments for unknown pathogens. The goal is to universalize treatment methods; for example, by employing automated methods to select bacteriophages at massive scale as a countermeasure to bacteria, or by employing antibody-producing E. coli or cell-free synthetic biology as a countermeasure to viruses. Advanced computational methods, such as computational modeling of the 3D molecules of novel pathogens and AI-based selection of potential treatments, can help automate and speed up this process. New technologies that can reduce the time required for the regulatory approval process, i.e., the time required for human clinical trials, should be researched—for example, in silico testing or artificial organ testing.29

NIH should create a consortium of universities and biotechnology companies to develop rapid, wide-area distribution of vaccines. This program should consider approaches that distribute vaccines through conventional supply channels, and methods to make vaccines that are survivable and transportable in any environment. Treatments in addition to vaccines should be incorporated in this effort.

NSF should create a digital infrastructure that can connect diverse, independent observation networks, databases, and computers—including emerging biosensors and autonomous sequencers deployed in water systems, air filtration systems, and other public infrastructure—to integrate their data for analysis and modeling, with protocols for activating rapid analysis of new pathogens, including new strains of extant pathogens, so that ongoing vaccine efficacy can be evaluated.
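As an illustration of the kind of automated trigger such an infrastructure could run, the Python sketch below flags a newly sequenced sample as potentially novel when its k-mer overlap with known reference sequences falls below a cutoff. The sequences, k-mer length, and cutoff are invented for illustration; operational genomic surveillance relies on far more sophisticated alignment and phylogenetic pipelines.

def kmers(seq, k=5):
    """Set of all length-k substrings (k-mers) of a nucleotide sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def max_similarity(sample, references, k=5):
    """Best Jaccard similarity between the sample and any reference, over k-mer sets."""
    sample_kmers = kmers(sample, k)
    best = 0.0
    for ref in references:
        ref_kmers = kmers(ref, k)
        union = sample_kmers | ref_kmers
        if union:
            best = max(best, len(sample_kmers & ref_kmers) / len(union))
    return best

NOVELTY_CUTOFF = 0.5  # Hypothetical: below this overlap, flag the sample for rapid analysis.

references = ["ATGGCTAGCTAGGCTTACGATCG", "ATGGCTAGCTAGGCTTACGTTCG"]  # Invented reference sequences
sample     = "TTTCCGGAACCTTGGAACCGGTT"                                # Invented new sample

if max_similarity(sample, references) < NOVELTY_CUTOFF:
    print("Sample flagged as potentially novel: activate rapid analysis protocol")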


The post Continuous global health protection and global wellness appeared first on Atlantic Council.

]]>
Assured supply chains and system resiliency https://www.atlanticcouncil.org/content-series/geotech-commission/chapter-4/ Tue, 25 May 2021 22:57:38 +0000 https://www.atlanticcouncil.org/?p=392385 An in-depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.

The post Assured supply chains and system resiliency appeared first on Atlantic Council.

]]>

Report of the Commission on the Geopolitical
Impacts of New Technologies and Data

Chapter 4. Assured supply chains and system resiliency

Both physical and digital supply chain vulnerabilities can have cascading effects on the global economy and national security. Two critical examples include:

  • US dependence on foreign production of the main components used in generic drugs. Trade disputes and economic crises can stop the flow of medicines and affect the health and economic welfare of tens of millions of individuals in the United States and other countries.1
  • US dependence on foreign-produced semiconductors for military and commercial products. As the manufacturing and assembly of key components shifts to markets in East Asia, particularly China,2 the United States is susceptible to sudden interruptions in supplies and deliberate efforts to degrade the integrity of the products.

The interconnected global networks of manufacturing, transportation,3 and distribution contain many instances where supply chain problems can have magnified effects. Protecting against these diverse risks requires understanding which types of goods and sectors of the economy are critical. It also requires assessing the state and characteristics of supplies, trade networks and policies, inventory reserves, and the ability to substitute products or processing facilities. Assuring the performance of physical and software/IT supply chains is essential for a functioning, prosperous society and for national and economic security.

Finding 4: Resilient, trusted supply chains require defense, diversification, and reinvention.

One of the goals of the United States’ National Strategy for Global Supply Chain Security4 is to “foster a resilient supply chain.” As part of its strategic approach, the national strategy works to prepare for, withstand, and recover from threats and disruptions. “Executive Order 13806 of July 21, 2017: Assessing and Strengthening the Manufacturing and Defense Industrial Base and Supply Chain Resiliency of the United States“5 states that “a healthy manufacturing and defense industrial base and resilient supply chains are essential to the economic strength and national security of the United States” and requires a report detailing the current state of supply chains that are essential for national security. The Interagency Task Force report6 in response to the executive order recommends decreasing the fragility and single points of failure of supply chains and diversifying away from dependencies on politically unstable countries.

It is difficult to know the full range of potential threats and disruptions for a given supply chain. For multitiered supply chains, the primary suppliers may not have information on each of the suppliers at the third or fourth tier and will not have accurate or up-to-date information on the trustworthiness of the sources of components, e.g., circuit board component suppliers. The multiplying, dynamic effects of supply chain disturbances are often not deterministic. In cases of deliberate sabotage of a resource, there may not be observable indicators, as with the insertion of hidden back doors in software. Resilient supply chains address a portion of these uncertainties through risk-reduction strategies and greater supply chain transparency.
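
To make the transparency problem concrete, the following minimal sketch (not part of the commission's methodology) models a small, hypothetical multitier supply network as a graph and flags nodes whose loss would disconnect the network, along with sole-sourced inputs. It assumes the third-party Python library networkx, and every supplier name is invented for illustration.

```python
# Minimal sketch: modeling a hypothetical multitier supply network as a graph
# and flagging potential single points of failure. Supplier names and tiers
# are invented and illustrative only.
import networkx as nx

# Directed edges point from supplier to customer (lower tiers feed the OEM).
edges = [
    ("board_component_vendor", "circuit_board_maker"),
    ("rare_earth_processor", "magnet_supplier"),
    ("magnet_supplier", "motor_assembler"),
    ("circuit_board_maker", "motor_assembler"),
    ("motor_assembler", "oem"),
    ("circuit_board_maker", "oem"),
]
supply_net = nx.DiGraph(edges)

# Articulation points of the undirected view: nodes whose removal disconnects
# the network, a simple proxy for single points of failure in a multitier chain.
single_points = nx.articulation_points(supply_net.to_undirected())
print("Potential single points of failure:", sorted(single_points))

# Sole-sourced inputs are another quick transparency check.
for node in supply_net.nodes:
    suppliers = list(supply_net.predecessors(node))
    if len(suppliers) == 1:
        print(f"{node} is sole-sourced from {suppliers[0]}")
```

A real assessment would populate such a graph from supplier declarations and trade data; the point here is only that even basic graph analysis exposes dependencies that primary suppliers may not see in their own records.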

For some supply chains, resilience may be attained by increasing defenses through greater trade enforcement and strengthening key segments. For some supply chains, diversifying the sources and manufacturing locations, in partnership with allies, is an effective strategy. Adversaries are creating strategic vulnerabilities and weaknesses in US supply chains; a key area is the design and manufacture of advanced electronics. To address this growing risk, the strategy exemplified in the Defense Advanced Research Projects Agency’s (DARPA’s) Electronics Resurgence Initiative7 involves developing new technologies for alternative materials, designs, and production processes.

Finding 4.1: Critical supply chains are pervasive and challenging to defend.

Presidential Policy Directive 21 (PPD-21), “Critical Infrastructure Security and Resilience,” defines critical infrastructure to be those “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.”8 There are sixteen critical infrastructure sectors. The Sector-Specific Plans discuss critical infrastructure resilience and include the supply chains in the risk management or risk mitigation section of some sector plans.

Supply chain attacks can be hard to detect and defend against. The Department of Defense’s (DoD’s) report, Department of Defense Strategy for Operating in Cyberspace,9 highlights the critical issue of supply chain vulnerabilities and the risks of US reliance on foreign suppliers. The range of supply chain attack opportunities is large—including design, manufacturing, servicing, distribution, and disposal segments of the supply chain—and challenging to detect.

Appendix B discusses the cyberattack on FireEye, involving the theft of its penetration testing toolkit, and the breadth of a comprehensive cyber espionage campaign centered on SolarWinds’ Orion network monitoring software. More than eighteen thousand commercial and government targets, including Intel, Microsoft, California state hospitals,10 the National Nuclear Security Administration,11 and dozens12 of federal, state, and local government agencies, downloaded compromised updates; the attackers’ goal was to extract valuable intelligence while remaining undetected.

Finding 4.2: A broadened view of stockpiles increases resiliency.

Creating additional supplies or increasing production capacity contributes to building stockpiles in a supply network. Adding more production capacity in the United States, or encouraging allies to undertake similar actions, is the focus of recent legislative efforts.

The Coronavirus Aid, Relief, and Economic Security Act (CARES Act; P.L. 116-136) strengthened reporting requirements to delineate the domestic versus foreign production of finished drug products and active pharmaceutical ingredients. While the CARES Act requires the National Academies of Sciences, Engineering, and Medicine to evaluate the US medical product supply chain, options for increasing the security and resilience of this supply chain are still under consideration.13

The William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 202114 includes provisions to enhance the security of the semiconductor supply chain. It incentivizes investment in facilities and equipment in the United States for semiconductor fabrication, assembly, testing, advanced packaging, or R&D. It strengthens the United States’ capacity to develop and produce cutting-edge semiconductors domestically through federal funding, promotes greater global transparency around subsidies to identify unfair or opaque forms of support that distort global supply chains, and provides funding support to “foreign government partners to participate in a consortium in order to promote consistency in policies related to microelectronics, greater transparency in microelectronic supply chains, and greater alignment in policies toward non-market economies.”15

“Executive Order 13817 of December 20, 2017: A Federal Strategy to Ensure Secure and Reliable Supplies of Critical Minerals” defines “critical mineral” to be “(i) a non-fuel mineral or mineral material essential to the economic and national security of the United States, (ii) the supply chain of which is vulnerable to disruption, and (iii) that serves an essential function in the manufacturing of a product, the absence of which would have significant consequences for our economy or our national security.”16 Based on country production and import reliance, thirty-five minerals were deemed critical minerals. For some of these critical minerals,17 increased domestic production is possible,18 through the policies in the executive order intended to decrease the time to obtain mining permits.

The DoD is working to ensure reliable supplies of rare earth minerals by increasing domestic production and processing capabilities.19 The department has taken steps to increase stockpiles, reduce reliance on Chinese sources, partner with private industry to increase production of rare earth magnets, and accelerate the development of new rare earth mineral processing technologies, and is seeking to increase funding for domestic production of rare earth minerals for munitions and missiles. To increase domestic production of rare earth minerals, mining-reform legislation is needed. The current mine-permitting process takes approximately ten years, when timelines of two to three years may be possible. Cooperative agreements with like-minded countries may also increase the supply available to the United States. South Africa, Canada, Australia, Brazil, India, Malaysia, and Malawi have rare earth minerals; China, Russia, and the United States hold 82.6 percent of the world’s production and reserves.20

Finding 4.3: By creating new materials and new design and manufacturing technologies, the United States can eliminate critical dependencies on foreign sources.

The DARPA Electronics Resurgence Initiative21 is in the fourth year of a long-term, $1.5 billion effort to reinvent defense electronics both to improve performance and to respond to foreign efforts to shift innovation in electronics away from the United States. The program currently includes applications of the new materials, chip designs, chip manufacturing technologies, and new methods for increasing security in a variety of defense systems. At present, the United States imports 80 percent of its rare earth elements directly from China.

The DARPA Electronics Resurgence Initiative supports the goals of the “Executive Order 13953 of September 30, 2020: Addressing the Threat to the Domestic Supply Chain From Reliance on Critical Minerals From Foreign Adversaries and Supporting the Domestic Mining and Processing Industries.” The transformation of microelectronics is DoD’s top modernization priority. A critical, fundamental risk is the US dependence on foreign semiconductor chip manufacturing, dominated by microelectronics fabrication plants in vulnerable Taiwan and South Korea.

Approach 4: Develop supply chain resilience strategies for a broadened set of critical resources, conduct assessments with allies.

The United States must establish criteria for determining which supply chains are critical and develop supply chain assurance strategies based on knowledge of the current supply network and the creation of alternative pathways, processes, and materials.

Such strategies must incorporate:

  1. A supplier nation’s trade and export policies and the effects of sudden changes,
  2. A nation’s near-monopoly of a key resource,
  3. Alternate supply lines available to the United States,
  4. Baseline capacities and resources, and
  5. The ability to reestablish commercial operations in locations having lower risk.22

For information systems and networks, the United States should develop and test cybersecurity resilience strategies and performance standards for increased cybersecurity in systems that support supply chains for critical resources.

Recommendation 4: Conduct regularized assessments in the United States and in allied countries to determine critical supply chain resilience and trust, implement risk-based assurance measures. Establish coordinated cybersecurity acquisition across government networks and create more experts.

Recommendation 4.1: Implement a framework that identifies and establishes global data collection on critical resources.

“Executive Order 14017 of February 24, 2021: America’s Supply Chains” directs a review of critical supply chain vulnerabilities affecting both government and private-sector procurement. This review will address the changing nature of critical supply chains as “manufacturing and other needed capacities of the United States modernize to meet future needs.”23 It will examine dependence on foreign suppliers, measures of resilience, and a range of sectors including energy, semiconductors, key electronics and related technologies, telecommunications infrastructure, and key raw materials. Strategies to increase critical supply chain resilience include “a combination of increased domestic production, strategic stockpiles sized to meet our needs, cracking down on anti-competitive practices that threaten supply chains, implementing smart plans to surge capacity in a time of crisis, and working closely with allies.”24 After this initial review, the administration plans to ask Congress to enact a mandatory quadrennial critical supply chain review to institute this process permanently.

To conduct this critical supply chain review, the administration should develop a set of criteria for determining resources that are critical to the nation with respect to public health, national security, economic security, and technological competitiveness. These criteria should encompass critical resources beyond high-technology products, to include IT and computer systems and infrastructures, and lower technology products that are important for high-technology competitiveness, e.g., steel, auto parts, and other portions of US manufacturing industries. These criteria should be developed by the White House Office of Science and Technology Policy (OSTP) in coordination with relevant executive branch agencies and departments and with the active participation of private industry. Because critical resources are dynamic in nature and are constantly evolving, this should be a recurring, ongoing initiative.

The administration should use existing fora for international outreach to foster data collection and information sharing for assessments of critical resources and critical supply chains. It should also identify where US funding will strengthen supply chain assurance in partner countries, particularly those with a strong rule of law and a commitment to intellectual property protection. The assessments must address where key resources (e.g., pharmaceuticals,25 agricultural products26) are manufactured and sourced, and how this impacts the robustness of US supply chains, the ability to manufacture the key resources in the United States, and other issues concerning supply chain threats and vulnerabilities. The United States-Mexico-Canada Agreement (USMCA) in its “Rules of Origin” chapter provides a model for agreements with like-minded countries.27 The United States Trade Representative would develop trade agreements that help strengthen supply chains.

Recommendation 4.2: Fund and broaden federal oversight of supply chain assurance to include all critical resources.

Congress should establish an annual reporting requirement that assesses the supply chain assurance for all critical resources, to be assigned to the Department of Homeland Security (DHS) with support from the Office of Management and Budget (OMB). The Cybersecurity and Infrastructure Security Agency (CISA) will contribute assessments of the cybersecurity of the supply chains included in the annual report. This report should determine priorities for supply chains deemed critical to US national and economic security and national health. Congress should require that federal budget requests affecting critical supply chains are based on these priorities.

The administration should develop an approach to address risk management for supply chains beyond those already associated with information technology and computer systems. The administration should extend the work by NIST to model critical assets and components for information systems,28 to critical resources as described here. This effort will delineate the data—for both physical supply chains and software/IT supply chains—required to perform supply chain assurance assessments.
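
As one illustration of the kind of data such a model would require, the sketch below computes a simple weighted criticality score for hypothetical components. The factors, weights, and scores are invented for illustration; they are not taken from NISTIR 8179 or any official methodology.

```python
# Minimal sketch: a risk-weighted criticality score for supply chain
# components. Factors, weights, and ratings are hypothetical placeholders
# meant only to illustrate the data an assurance assessment would need.
FACTOR_WEIGHTS = {
    "mission_impact_if_lost": 0.4,    # consequence of losing the component
    "single_source_dependence": 0.3,  # how concentrated the supplier base is
    "foreign_control_exposure": 0.2,  # exposure to untrusted jurisdictions
    "substitution_difficulty": 0.1,   # effort to qualify an alternative
}

def criticality_score(component: dict[str, float]) -> float:
    """Weighted sum of factor ratings, each scored 0 (low) to 1 (high)."""
    return sum(weight * component.get(factor, 0.0)
               for factor, weight in FACTOR_WEIGHTS.items())

components = {
    "advanced_logic_chip": {
        "mission_impact_if_lost": 0.9,
        "single_source_dependence": 0.8,
        "foreign_control_exposure": 0.7,
        "substitution_difficulty": 0.9,
    },
    "commodity_fastener": {
        "mission_impact_if_lost": 0.2,
        "single_source_dependence": 0.1,
        "foreign_control_exposure": 0.3,
        "substitution_difficulty": 0.1,
    },
}

for name, factors in sorted(components.items(),
                            key=lambda kv: criticality_score(kv[1]),
                            reverse=True):
    print(f"{name}: criticality {criticality_score(factors):.2f}")
```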

Recommendation 4.3: For the United States, the administration must develop a geopolitical deterrence strategy that addresses critical digital resources and digital supply chain assurance.

State-based cyber-enabled threats to the integrity of global supply chains—impacting both physical (as seen in disruption to global logistics and manufacturing activity in the wake of the NotPetya ransomware attack29) and digital (as illustrated in the wake of the SolarWinds compromise) supply chains—increasingly represent costly and high-impact challenges. The national cyber director, as part of the National Cyber Strategy, should develop a geopolitical deterrence strategy that enables the US government to leverage all tools of US power—from diplomacy, to sanctions, cyber, and military activity—to exercise deterrence. The administration should evaluate the potential for (i) continuous evaluation of digital supply chains to enable prompt detection of malicious activity targeting these supply chains, and (ii) prompt detection, combined with improved supply chain resilience and timely actions in response to the detected activity, to decrease the likelihood of cyberattacks. Continuous evaluation of supply chains for critical digital resources30 would be coordinated and managed by CISA as part of its role in managing federal cybersecurity risk.
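
One concrete building block of such continuous evaluation is verifying that delivered software artifacts match digests recorded at build time, for example in a software bill of materials. The sketch below illustrates the idea; the file name, directory, and digest are hypothetical placeholders, not references to any real product.

```python
# Minimal sketch: one building block of continuous digital supply chain
# evaluation -- verifying software artifacts against a known-good manifest.
# The file name, directory, and digest below are hypothetical placeholders.
import hashlib
from pathlib import Path

# Known-good SHA-256 digests, e.g., published by the vendor or recorded at
# build time in a software bill of materials (SBOM).
TRUSTED_MANIFEST = {
    "network_monitor_update.pkg": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",  # placeholder digest
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path: Path) -> bool:
    """Flag any artifact that is unknown or does not match its recorded digest."""
    expected = TRUSTED_MANIFEST.get(path.name)
    if expected is None:
        print(f"ALERT: {path.name} is not in the trusted manifest")
        return False
    if sha256_of(path) != expected:
        print(f"ALERT: {path.name} does not match its recorded digest")
        return False
    return True

if __name__ == "__main__":
    for artifact in Path("downloads").glob("*.pkg"):
        verify(artifact)
```

Run continuously across update channels, checks of this kind support prompt detection; they do not by themselves establish provenance, which is why the report pairs them with resilience measures and timely response.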

Recommendation 4.4: Conduct regular physical and software/IT supply chain assessments in the United States and with allies, focused on intersecting vulnerabilities with cascading consequences.

The administration should establish with allies and partner nations a test program for supply chains and reporting on supply chains’ status and test results. This reporting would address the readiness status of both public and private sector supply chains, and the results of exercises that test the preparedness, adequacy, and resiliency of supply chains against a range of conditions and scenarios, much like stress tests for the financial sector.

  • Because most supply chain data are held by private companies, a key issue is whether the private sector will provide enough data about its supply chains, or can be incentivized to do so. Questions to address include: What is the minimal information needed to calculate these performance measures, and will the resulting tests provide useful results across the situations of interest? Will the private sector share these data, given its competitive position? What is the best estimate of the metrics, subject to data-availability constraints? The tests must therefore show that these estimates can be developed with acceptable access to private data, or a narrower set of criteria must be defined to test against.

Due to the many factors bearing on cybersecurity resilience, including the growing threat of sophisticated cyberattacks by major adversaries, the administration should develop software/IT supply chain resilience risk assessments that incorporate the effects of new standards and tools to measure cyber vulnerabilities, improved information sharing (including intelligence information on nation state-supported cyberattacks and ransomware denial of service attacks), designs for improvements that protect against systemic vulnerabilities, and new technologies such as cloud-based services.

1    Congressional Research Service, COVID-19: China Medical Supply Chains and Broader Trade Issues, updated December 23, 2020, accessed March 26, 2021, https://crsreports.congress.gov/product/pdf/R/R46304
2    Department of Defense, Fiscal Year 2020: Industrial Capabilities: Report to Congress, January 2021, accessed March 26, 2021, https://media.defense.gov/2021/Jan/14/2002565311/-1/-1/0/FY20-INDUSTRIAL-CAPABILITIES-REPORT.PDF
3    Vivian Yee, “Ship Is Freed After a Costly Lesson in the Vulnerabilities of Sea Trade,” New York Times, March 29, 2021, accessed April 3, 2021, https://www.nytimes.com/2021/03/29/world/middleeast/suez-canal-ever-given.html
4    “National Strategy for Global Supply Chain Security,” Department of Homeland Security, last published July 13, 2017, accessed March 26, 2021, https://www.dhs.gov/national-strategy-global-supply-chain-security
5    Executive Order 13806 of July 21, 2017: Assessing and Strengthening the Manufacturing and Defense Industrial Base and Supply Chain Resiliency of the United States,” Federal Register 82 (142) (July 26, 2017), accessed March 26, 2021, https://www.govinfo.gov/content/pkg/FR-2017-07-26/pdf/2017-15860.pdf
6    Department of Defense, Assessing and Strengthening the Manufacturing and Defense Industrial Base and Supply Chain Resiliency of the United States, Report to President Donald J. Trump by the Interagency Task Force in Fulfillment of Executive Order 13806, September 2018, accessed March 26, 2021, https://media.defense.gov/2018/Oct/05/2002048904/-1/-1/1/ASSESSING-AND-STRENGTHENING-THE-MANUFACTURING-AND%20DEFENSE-INDUSTRIAL-BASE-AND-SUPPLY-CHAIN-RESILIENCY.PDF
7    “DARPA Electronics Resurgence Initiative,” DARPA, last updated April 2, 2020, accessed March 26, 2021, https://www.darpa.mil/work-with-us/electronics-resurgence-initiative
8    White House, “Presidential Policy Directive – Critical Infrastructure Security and Resilience,” February 12, 2013, accessed March 26, 2021, https://obamawhitehouse.archives.gov/the-press-office/2013/02/12/presidential-policy-directive-critical-infrastructure-security-and-resil
9    Department of Defense, “Department of Defense Strategy for Operating in Cyberspace,” July 2011, accessed March 26, 2021, https://csrc.nist.gov/CSRC/media/Projects/ISPAB/documents/DOD-Strategy-for-Operating-in-Cyberspace.pdf
10    Laura Hautala, “SolarWinds hackers accessed DHS acting secretary’s emails: What you need to know,” c|net, March 29, 2021, accessed April 16, 2021, https://www.cnet.com/news/solarwinds-hackers-accessed-dhs-acting-secretarys-emails-what-you-need-to-know/
11    Natasha Bertrand and Eric Wolff, “Nuclear weapons agency breached amid massive cyber onslaught,” Politico, December 17, 2020, accessed March 26, 2021, https://www.politico.com/news/2020/12/17/nuclear-agency-hacked-officials-inform-congress-447855
12    Raphael Satter, “U.S. cyber agency says SolarWinds hackers are ‘impacting’ state, local governments,” Reuters, December 23, 2020, accessed March 26, 2021, https://www.reuters.com/article/us-global-cyber-usa-idUSKBN28Y09L
13    Congressional Research Service, FDA’s Role in the Medical Product Supply Chain and Considerations During COVID-19, September 1, 2020, accessed March 26, 2021, https://crsreports.congress.gov/product/pdf/R/R46507
14    Samuel K. Moore, “U.S. Takes Strategic Step to Onshore Electronics Manufacturing,” IEEE Spectrum, January 6, 2021, “The semiconductor strategy and investment portion of the William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021 began as separate bills in the House of Representatives and the Senate. In the Senate, it was called the American Foundries Act of 2020, and was introduced in July and called for $15 billion for state-of-the-art construction or modernization and $5 billion in R&D spending, including $2 billion for the Defense Advanced Research Projects Agency’s Electronics Resurgence Initiative. In the House, the Creating Helpful Incentives to Produce Semiconductors (CHIPS) for America Act, was introduced in the 116th Congress by Senators John Cornyn (R-TX) and Mark Warner (D-VA), and Representatives Michael McCaul (R-TX) and Doris Matsui (D-CA), and offered similar levels of R&D,” accessed April 16, 2021, https://spectrum.ieee.org/tech-talk/semiconductors/processors/us-takes-strategic-step-to-onshore-electronics-manufacturing
15    US Sen. Mark R. Warner (D-VA), “Bipartisan, Bicameral Bill Will Help Bring Production of Semiconductors, Critical to National Security, Back to U.S.,” press release, June 10, 2020, accessed March 26, 2021, https://www.warner.senate.gov/public/index.cfm/2020/6/bipartisan-bicameral-bill-will-help-bring-production-of-semiconductors-critical-to-national-security-back-to-u-s
16    “Executive Order 13817 of December 20, 2017: A Federal Strategy To Ensure Secure and Reliable Supplies of Critical Minerals,” Federal Register, December 20, 2017, accessed March 26, 2021, https://www.federalregister.gov/documents/2017/12/26/2017-27899/a-federal-strategy-to-ensure-secure-and-reliable-supplies-of-critical-minerals
17    germanium, graphite (natural), hafnium, helium, indium, lithium, magnesium, manganese, niobium, platinum group metals, potash, the rare earth elements group, rhenium, rubidium, scandium, strontium, tantalum, tellurium, tin, titanium, tungsten, uranium, vanadium, and zirconium
18    National Strategic and Critical Minerals Production Act, H.R. 2531 — 116th Congress (2019-2020), accessed March 26, 2021, https://www.congress.gov/bill/116th-congress/house-bill/2531. The bill aims to increase the domestic supply of critical minerals
19    Department of Defense, DOD Announces Rare Earth Element Awards to Strengthen Domestic Industrial Base, press release, November 17, 2020, accessed March 26, 2021, https://www.defense.gov/Newsroom/Releases/Release/Article/2418542/dod-announces-rare-earth-element-awards-to-strengthen-domestic-industrial-base/
20    Marc Humphries, Rare Earth Elements: The Global Supply Chain, Congressional Research Service, December 16, 2013, accessed March 26, 2021, https://fas.org/sgp/crs/natsec/R41347.pdf
21    “DARPA Electronics Resurgence Initiative,” DARPA
22    Congressional Research Service, COVID-19: China Medical Supply Chains and Broader Trade Issues, R46304, April 6, 2020, updated December 23, 2020, accessed March 26, 2021, https://crsreports.congress.gov/product/pdf/R/R46304
23    “Executive Order on America’s Supply Chains,” White House, February 24, 2021, accessed March 26, 2021, https://www.whitehouse.gov/briefing-room/presidential-actions/2021/02/24/executive-order-on-americas-supply-chains/; “Executive Order 14017 of February 24, 2021, America’s Supply Chains,” Federal Register, March 1, 2021, https://www.federalregister.gov/documents/2021/03/01/2021-04280/americas-supply-chains
24    “The Biden Plan to Rebuild U.S. Supply Chains and Ensure the U.S. Does Not Face Future Shortages of Critical Equipment,” accessed March 26, 2021, https://joebiden.com/supplychains
25    OECD and European Union Intellectual Property Office, Trade in Counterfeit Pharmaceutical Products, (Paris: OECD Publishing, 2020), accessed March 26, 2021, https://doi.org/10.1787/a7c7e054-en; Agnes Shanley, “Focusing on the Last Link,” PharmaTech, September 2, 2018, accessed March 26, 2021, https://www.pharmtech.com/view/focusing-last-link; Eurohealth, Quarterly of the European Observatory on Health Systems and Policies 24 (3) (2018), accessed March 26, 2021, https://www.euro.who.int/__data/assets/pdf_file/0011/382682/eurohealth-vol24-no3-2018-eng.pdf?ua=1
26    Clara Frezal and Grégoire Garsous, “New digital technologies to tackle trade in illegal pesticides,” OECD Trade and Environment Working Papers 2020/02, OECD Publishing, accessed March 26, 2021, https://doi.org/10.1787/9383b310-en
27    “Agreement between the United States of America, the United Mexican States, and Canada 7/1/20 Text,” Office of the United States Trade Representative, accessed March 26, 2021, https://ustr.gov/trade-agreements/free-trade-agreements/united-states-mexico-canada-agreement/agreement-between/
28    “NISTIR 8179, Criticality Analysis Process Model: Helping Organizations Decide Which Assets Need to Be Secured First,” National Institute of Standards and Technology, April 11, 2018, accessed March 26, 2021, https://csrc.nist.gov/News/2018/NISTIR-8179-Criticality-Analysis-Process-Model
29    Andy Greenberg, “The Untold Story of NotPetya, the most Devasting Cyberattack in History,” Wired, August 22, 2018, accessed March 26, 2021, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/
30    A key enabler of continuous evaluation comprises software configuration databases which will permit visibility and traceability of software/IT supply chains. These require development.

The post Assured supply chains and system resiliency appeared first on Atlantic Council.

]]>
Enhanced trust and confidence in the digital economy https://www.atlanticcouncil.org/content-series/geotech-commission/chapter-3/ Tue, 25 May 2021 22:57:24 +0000 https://www.atlanticcouncil.org/?p=392381 An in-depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.

The post Enhanced trust and confidence in the digital economy appeared first on Atlantic Council.

]]>

Report of the Commission on the Geopolitical
Impacts of New Technologies and Data

Chapter 3. Enhanced trust and confidence in the digital economy

Enhanced trust and confidence in the digital economy is founded upon personal privacy, data security, accountability for performance and adherence to standards, transparency of the internal decision-making algorithms, and regulations and governance for digital products and services. Trust and confidence in the digital economy is diminished by practices that do not protect privacy or secure data, and by a lack of legal and organizational governance to advance and enforce accountability.1 Data breaches, malware embedded in downloaded apps, unfiltered mis- and disinformation, and the lack of governance models to effectively address these harms all contribute to the degradation of social and civic trust. This degradation undermines economic and civic confidence, is costly,2 constrains the growth of the digital economy,3 and has destabilizing effects on society, governments, and markets. Trust and confidence in the digital economy is essential for open societies to function, and for resilience against cascading effects of local, regional, or national economic, security, or health instabilities.

Finding 3: To enhance trust and confidence in artificial intelligence and other digital capabilities, technologies must objectively meet the public’s needs for privacy, security, transparency, and accountability.

The growth of digital economies is changing how trust is valued by institutions, businesses, and the public.4 The traditional view of trust is expressed in terms of the security of a business transaction. The increase in cyberattacks, identity theft, social media disinformation campaigns, and the use of autonomous decision-making software introduces new factors that affect trust. Trust in a firm’s reputation and ethical practices, in privacy protection, and in how personal data are used depends on technology, business practices, and the public’s perception of how well these components of trust are protected.

Not everyone has the same perception of what is trustworthy. However, reaping the benefits of the digital economy requires a high level of trust among users. Therefore, government and industry should work to enhance the transparency and accountability of digital systems to improve trustworthiness. Challenges include the following: (i) views on personal privacy protection are context-dependent, vary by culture or location, and may be formalized in different terms across nations, regions, and states; and (ii) as automated decision-making algorithms proliferate, new applications reveal trust weaknesses regarding implicit bias, unethical use of personal data, and lack of identity protection.

Trustworthiness needs to be prioritized and empirically demonstrated in the evolving market. Building trust involves educating all participants on the fundamental value of trust in the digital economy and ensuring digital systems reflect individual and societal conceptions of trust. There must be national and international standards for judging how well technologies and systems protect trust. Professional organizations that audit for trust in the digital economy will strengthen accountability.

As European Union (EU) member nations work to conform national rules and laws to the General Data Protection Regulation (GDPR), the European Commission notes that these steps may strengthen trust relationships. Other nations propose that a global framework for cross-border Internet policies may be able to protect data security and privacy while still allowing national laws and regulations as a part of the approach if certain trust relationships are maintained. For both approaches, a set of rules or principles provides the foundation for trust.

The GDPR6 establishes regulations for data security and privacy that apply to any organization that collects or uses data related to people in the EU. The entire data chain is covered by the GDPR, including data collection, processing, storing, and managing.

The GDPR comprises principles that govern data protection and accountability for those who process data. There are technical measures for data security, and organizational design principles for data protection. Data privacy is expressed in terms of privacy rights, including the rights to be informed, of access, to rectification, to erasure, to restrict processing, to data portability, and to object. There are also rights in relation to automated decision-making and profiling. The governance mechanism centers on Data Protection Authorities that work to align each EU member nation’s approach to data security and privacy to conform with the GDPR. These Data Protection Authorities have enforcement powers and the ability to levy fines when a GDPR rule is violated.

Data privacy protection is vulnerable to advanced data analytics that can infer personally identifiable information by joining loosely related data sources. As a result, the growing use of current machine learning methods applied to large, multi-source data sets highlights potential limitations in the GDPR where such computational methods can infer data originally made private. The development of new data science capabilities may require research on new privacy-preserving technologies for nations to remain compliant with the GDPR. With increasing amounts of personal medical and genetic information being held in data repositories, this need is urgent.
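
The sketch below illustrates, with invented records, how joining a supposedly de-identified data set to an auxiliary data set on shared quasi-identifiers (ZIP code, birth year, sex) can re-link names to sensitive attributes. It uses the pandas library and is illustrative only.

```python
# Minimal sketch: re-identification by joining loosely related data sources.
# All records are hypothetical and illustrative only.
import pandas as pd

# A "de-identified" health data set: direct identifiers removed,
# quasi-identifiers retained.
health = pd.DataFrame({
    "zip": ["20001", "20001", "20002"],
    "birth_year": [1960, 1985, 1972],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A public or commercially available data set with names and the same
# quasi-identifiers (e.g., a voter roll or marketing list).
public = pd.DataFrame({
    "name": ["A. Example", "B. Example"],
    "zip": ["20001", "20002"],
    "birth_year": [1960, 1972],
    "sex": ["F", "F"],
})

# Joining on the shared quasi-identifiers links names to diagnoses whenever a
# combination of zip, birth year, and sex is unique in both data sets.
reidentified = public.merge(health, on=["zip", "birth_year", "sex"], how="inner")
print(reidentified[["name", "diagnosis"]])
```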

Finding 3.3: Evolving US data privacy approaches consider outcome-based methods, versus prescriptive methods.

The development of data privacy laws in the United States is an evolving patchwork, with more than one hundred and fifty state data privacy laws proposed in 2019.8 There is no overall federal data privacy law.

One instance of federal legislation for data privacy proposed in the 117th Congress9 includes the following key privacy features, which are viewed as outcome-based.10

  • Transparent communication of the privacy and data use policy
  • Affirmative opt-in and opt-out consent
  • Preemption, in which the proposed statute would preempt most state laws with limited exceptions for data breaches, and other limited situations
  • A right to action, enforced at the federal or state level, to address alleged violations
  • Independent audit of the effectiveness and appropriateness of the privacy policy for each entity providing data services

Several bills11 introduced in the 116th Congress addressed a subset of the above features or focused on COVID-19 contact tracing, health status, and identifiers. In addition, several bills introduced in the 116th Congress addressed disclosure of how data are used or monetized by social media companies and would enhance the accessibility and portability of a user’s data across devices.12

The National Institute of Standards and Technology (NIST) Privacy Framework describes a risk- and outcomes-based approach to establishing privacy protection practices in an organization. Organizations can vary the technologies and design of the privacy protection aimed at satisfying performance outcomes. This may be advantageous when the technologies and applications are changing at a fast pace, e.g., artificial intelligence (AI) and the Internet of Things (IoT).13

While there are several federal data privacy laws specific to certain industries or groups, e.g., the Health Insurance Portability and Accountability Act (HIPAA),14 the eventual form and scope of US data protection laws will depend on policy and legal considerations. A key decision concerns the model for data protection laws. The EU GDPR model is prescriptive; GDPR compliance involves demonstrating that the procedural rules were followed. An alternate model for data protection laws is outcome-based, which allows flexibility in how to achieve data protection.15

A choice between prescriptive versus outcome-based approaches must assess their relative costs and benefits and how the two approaches can work together. The proposed bills in the 116th Congress identify a robust set of data privacy features while promoting flexibility and innovation in their implementation; the GDPR model has greater worldwide traction, creating opportunities for harmonized regulatory treatment.

Finding 3.4: New information technologies compel automated compliance testing.

New information technologies and advanced data capabilities challenge current methods of compliance and enforcement. The variety of new ways to collect, process, and analyze data is increasing at a fast rate, while compliance often is determined on a case-by-case basis by regulatory and legal experts. To keep pace, automated testing for compliance with data privacy regulations is necessary.
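
As a minimal illustration of what automated compliance testing could look like, the sketch below runs rule-driven checks over a hypothetical data export. The rules and field names are invented and do not encode any particular regulation; a real system would translate obligations from the applicable law into such machine-checkable rules.

```python
# Minimal sketch: a rule-driven compliance check run automatically against a
# data export. Rules and field names are hypothetical placeholders.
from dataclasses import dataclass

DISALLOWED_FIELDS = {"ssn", "full_name", "precise_location"}
REQUIRED_FIELDS = {"consent_timestamp", "purpose"}

@dataclass
class Violation:
    record_id: str
    problem: str

def check_export(records: list[dict]) -> list[Violation]:
    """Return a violation for every record that breaks a rule."""
    violations = []
    for record in records:
        rid = str(record.get("id", "<unknown>"))
        for field in DISALLOWED_FIELDS & record.keys():
            violations.append(Violation(rid, f"disallowed field '{field}' present"))
        for field in REQUIRED_FIELDS - record.keys():
            violations.append(Violation(rid, f"required field '{field}' missing"))
    return violations

if __name__ == "__main__":
    sample = [
        {"id": 1, "purpose": "billing", "consent_timestamp": "2021-03-01T12:00:00Z"},
        {"id": 2, "purpose": "analytics", "ssn": "000-00-0000"},
    ]
    for v in check_export(sample):
        print(f"record {v.record_id}: {v.problem}")
```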

Table 3 portrays some of the challenges and solutions for achieving automated compliance testing. This research agenda identifies the following key developments: standards, new privacy-preserving technologies, and automated methods to establish compliance.

Table 3. Big Data Value Association Strategic Research and Innovation Agenda

  • Challenge: A general, easy-to-use, and enforceable data protection approach. Solution: Guidelines, standards, law, and codes of conduct.
  • Challenge: Maintaining robust data privacy with utility guarantees. Solution: Multiparty computation, federated learning approaches, and distributed ledger technologies.
  • Challenge: Risk-based approaches calibrating data controllers’ obligations. Solution: Automated compliance and risk assessment tools.
  • Challenge: Combining different techniques for end-to-end data protection. Solution: Integration of approaches, toolboxes, overviews, and repositories of privacy-preserving technologies.

Source: Timan and Mann 201916

Privacy-preserving technologies are an active research area, and include the following:17 secure multiparty computation, (fully) homomorphic encryption, trusted execution environments, differential privacy, and zero-knowledge proofs.

The value of privacy-preserving technologies involves trade-offs between privacy and utility—how useful is the resulting data—both of which are context dependent.18 Affecting these trade-offs are the technical methods, the technical definitions of privacy, and the specifications of the privacy laws. The technical methods (e.g., anonymization, sanitization, and encryption) operate on data in different ways. The technical definition of privacy varies by application and the user’s perceptions of risk versus the benefit of making personal data available. Privacy laws vary across nations, challenging the uniform application of technical methods. For both professionals and members of the public, making trade-offs between privacy and utility remains challenging. This is partially due to the absence of definitions of and standards for measuring privacy and the social benefits obtained from making data available for use by others.
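
To illustrate the privacy-utility trade-off with one of the techniques named above, the sketch below applies the Laplace mechanism of differential privacy to a single count query: smaller values of the privacy parameter epsilon give stronger privacy guarantees but larger average error. The data and parameters are illustrative only.

```python
# Minimal sketch: the Laplace mechanism of differential privacy, illustrating
# the privacy-utility trade-off. Smaller epsilon means stronger privacy but
# noisier answers. The count and parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def noisy_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to sensitivity/epsilon."""
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

true_count = 1_000  # e.g., number of patients with a given condition

for epsilon in (0.1, 1.0, 10.0):
    errors = [abs(noisy_count(true_count, epsilon) - true_count) for _ in range(1_000)]
    print(f"epsilon={epsilon:>4}: mean absolute error {np.mean(errors):.1f}")
```

How much error an analyst can tolerate, and how much disclosure risk a data subject will accept, are exactly the context-dependent judgments described above; the mechanism only makes the trade-off explicit and tunable.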

Finding 3.5: Trust and confidence in digital capabilities requires businesses and governments to focus on the responsible use of technology.

Increasing trust and confidence in emerging technologies, such as AI, requires a recognition by both businesses and governments that they have an obligation to use technology responsibly, ensuring that technology has a positive impact on society, especially with regard to equality and inclusion.19 Developing and innovating responsibly means ensuring that (i) ethical frameworks and policies exist to guide organizations during all aspects of a product’s development and deployment, (ii) fairness in design is emphasized from the outset, and (iii) questions around the manner in which technologies will be used are given the same rigorous examination as technical issues. As technological capabilities evolve and become more deeply intertwined in all aspects of society, businesses and governments must put ethics at the center of everything they do.

Approach 3: Build in trust-enabling technologies, measure performance against standards, conduct independent compliance audits.

The digital economy relies on achieving a high level of trust and confidence on a continuing basis as technologies evolve. Trust and confidence-enabling technologies must be developed and built into the components of the digital economy infrastructure; a detailed understanding of the trade-offs between privacy versus utility is an essential foundation. Such technologies must be paired with similar civic norms, practices, and rules designed to enhance confidence in the digital economy. To assure businesses that they remain compliant with data protection regulations as they modernize their practices, automated compliance testing, accompanied by standards of performance, is needed. To establish transparency for automated decision-making algorithms, standards for the measurable performance, i.e., the output results, are necessary. Independent assessments of the compliance testing and algorithmic transparency by professional auditing organizations could enhance trust among all participants in the digital economy and aid accountability and governance; such methods should be explored. However, mechanisms for compliance testing and auditing by regulators are also necessary.20

Recommendation 3: Develop international standards and best practices for a trusted digital economy that accommodate national rules and regulations, streamline the process of independently assessing adherence to these standards.

Recommendation 3.1: Develop a US data privacy standard.

Congress should create a national data privacy standard that embodies the following principles: (i) appropriate use of data: this defines the intended purpose for the collected data, the scope of what can be collected, the needed security, and the entities that are covered by the principle; (ii) nondiscriminatory use: the collected data cannot be used to discriminate against protected classes; (iii) informed participation: the individuals must receive the privacy policies in a transparent manner before data are collected, and provide affirmative express consent, including the ability to revoke consent and require destruction of the data or the movement of the data as directed by the individual (i.e., portability); (iv) public reporting: covered entities must periodically report on the data collected, retained, and destroyed, and the groups of individuals from whom the data were collected; (v) independent audit: the performance of covered entities with respect to the data privacy standard must be annually audited by an independent auditing organization, with parallel mechanisms to accommodate auditing and review by regulatory agencies; (vi) enforcement: federal and state enforcement organizations are given the authority to pursue violations of the laws for data privacy protection; (vii) preemption: this would preempt state privacy laws that are inconsistent with the proposed national standard; and (viii) consumer protection laws: the privacy standard would not interfere with consumer protection laws on issues apart from data privacy.

The data privacy standard should recognize gradations in the sensitivity of personal data—some personal data are treated more strictly than others. Affirmative express consent should be structured based on the types of data and how they will be used.

Congress should work to develop a national data privacy standard that can achieve global interoperability and should request an analysis of emerging privacy standards and issues that limit this achievement. Congress also should use the proposed national data privacy standard to inform the development of transparent national consumer data privacy laws that preserve individuals’ control of their personal data and facilitate the development of trusted networks and applications.

The results should establish federal data privacy standards for personal data, establish standards for content moderation by information providers, and regulate platform providers’ ability to conduct experiments or surveys with users and user data without prior consent.

Recommendation 3.2: Develop privacy-preserving technologies for the digital economy and demonstrate in a full-scale test their conformance with the General Data Protection Regulation.

The administration should direct NIST to establish and test privacy-preserving technologies that enable a risk- and outcomes-based approach to trust in the digital economy. The test should evaluate, at scale, conformance with relevant GDPR rules, conformance with existing US laws governing data privacy, and robustness with respect to innovations and advances in information technologies and data capabilities, especially those based on AI, machine learning, and the IoT. This work should include the development of technical definitions of privacy and application-specific measures of the utility of analyses that are based on privacy-protected data. The tests should include end user evaluations.

The administration should establish a near-term program that demonstrates privacy-preserving technologies to aid the trusted collection and sharing of data for the purpose of improving individuals’ access to healthcare during large-scale biological events. This program should be jointly managed by NIST, the Department of Health and Human Services (HHS), the National Institutes of Health (NIH), and the National Science Foundation (NSF). This program will monitor system performance to inform the development of standards for the ethical use of the shared data and how data governance will be formulated.

Recommendation 3.3: Create measurement methods and standards for evaluating trust in the digital economy.

The administration should direct the National Institute of Standards and Technology (NIST) to establish methods for evaluating users’ trust in the digital economy given the increasing use of AI, big data analytics, and automated decision-making algorithms. This work builds on the Commission on Enhancing National Cybersecurity’s Report on Securing and Growing the Digital Economy21 and the National Strategy for Trusted Identities in Cyberspace.22 One assessment framework example23 describes measures of: “(i) user trust in the digital environment, e.g., data privacy, security, private sector efforts to control the spread of misinformation, and private sector adherence to cybersecurity best practices; (ii) the user experience, i.e., the effort needed to interact with the digital environment; (iii) user attitudes, e.g., how trusted are government and business leaders; and (iv) user behavior, i.e., how much do users interact with the digital environment.”24

The administration should create a coalition to develop international standards for achieving trust in the digital economy. The coalition should include representatives from NIST, the Federal Trade Commission (FTC), private industry, Federally Funded Research and Development Centers (FFRDCs), University Affiliated Research Centers (UARCs), and international standards organizations. The United States and like-minded nations and partners should develop national assessments of trust in the digital economy using these standards.

Recommendation 3.4: Empower an organization to audit trust in the digital economy.

Congress should establish or empower an organization to audit the efficacy of measures designed to ensure trust in the digital economy and assess conformance to current and future standards designed to enhance and maintain such trust. Independent third parties or the Government Accountability Office (GAO) are examples of where such auditing organizations could be housed. 

As part of this process, the auditing organization could provide recommendations to Congress on legislation that would enhance existing trust measures, develop new trust measures, and create trust performance standards. The auditing organization should also provide a mechanism through which the public and industry can raise topics and concerns for attention and, for cases where assessments or audits were done, include an ombudsman function for assessment appeals, identification of new information, or adjudication of concerns in a manner distinct from political influence.

The administration should work to establish a similar auditing program with EU members of the International Organization of Supreme Audit Institutions.

Recommendation 3.5: Assess standards relating to the trustworthiness of digital infrastructure.

Congress should direct an assessment by the National Academies of Sciences, Engineering, and Medicine of the current national and international standards relating to the trustworthiness of digital infrastructure to support the digital economy. “Trustworthiness of an information system is defined as the degree to which an information system (including the information technology components that are used to build the system) can be expected to preserve the confidentiality, integrity, and availability of the information being processed, stored, or transmitted by the system across the full range of threats.”25

Due to the increasing complexity of the digital infrastructure, the assessment should also review design standards for complex systems-of-systems from the perspective of trustworthiness. The overall assessment focuses on systems that support the digital economy. The study should assess the sufficiency of existing standards to guide improvements in trustworthiness, identify where new standards are needed, and recommend the data collection and testing methods that would enable ongoing assessments.

Recommendation 3.6: Educate the public on trustworthy digital information.

Congress should establish a grant program led by the National Science Foundation (NSF) for the purpose of developing a curriculum on trustworthiness of information—distinct from the trustworthiness of information systems—in the digital age. This curriculum should be created by a consortium headed by a university or coalition of universities. The program should be administered by select universities, with the participation of US information providers. The goal should be to educate the public on how to assess the credibility, truthfulness, and authenticity of information, and to develop tools that students and members of the public can use and benefit from on a regular basis.

Recommendation 3.7: Conduct demonstration projects involving artificial intelligence to improve delivery of public- and private-sector services at local, state, and federal levels.

Congress should authorize and appropriate funds for AI demonstration projects that improve the delivery of public services.26 The overall program would be managed by one of the National Laboratories or by a newly created FFRDC with the mission to leverage technology to improve the delivery of public services. These testbed projects would be supported by local and state grants, cross-cutting federal government efforts, and public-private partnerships (PPPs) to employ AI to improve healthcare, workforce training, food production and distribution, and other areas. The overarching goals are to increase public trust in, understanding of, and confidence in AI; to learn how to use AI in ways that reduce inequality and enhance, rather than replace, human work; and to improve access, affordability, and availability of such services. At local, state, and federal levels, individual government agencies will gain long-term benefits by acquiring the necessary data infrastructure to employ AI to improve the delivery of public services.

Recommendation 3.8: Produce a framework for assessing ethical, social, trust, and governance considerations associated with specific current and future use cases for AI.

The administration should request the National Academy of Sciences to produce a framework for assessing ethical, social, trust, and governance considerations associated with specific current and future use cases for AI solutions. The framework should identify where new federal standards and rules are needed. This guidance should be developed with the participation of relevant executive branch departments and agencies, and in consultation with private industry, academia, members of the public, and government and industry representatives from foreign partners.

1    Amon, “Toward a New Economy of Trust.”
2    World Economic Forum, “Why trust in the digital economy is under threat,” accessed March 26, 2021, http://reports.weforum.org/digital-transformation/building-trust-in-the-digital-economy/, citing an estimate by McAfee that the costs associated with cybersecurity incidents approximated $575 billion in 2014; Accenture, Securing the Digital Economy: Reinventing the Internet for Trust, 16, accessed March 26, 2021, https://www.accenture.com/us-en/insights/cybersecurity/_acnmedia/Thought-Leadership-Assets/PDF/Accenture-Securing-the-Digital-Economy-Reinventing-the-Internet-for-Trust.pdf#zoom=50. Cites five-year loss of foregone revenue from 2019 to 2023 to be $5.2 trillion, calculated using a sample of 4,700 global public companies.
3    Congressional Research Service, Digital Trade and U.S. Trade Policy, 11, May 21, 2019, accessed March 26, 2021, https://crsreports.congress.gov/product/pdf/R/R44565; Alan B Davidson, “The Commerce Department’s Digital Economy Agenda,” Department of Commerce, November 9, 2015, accessed March 26, 2016, https://2014-2017.commerce.gov/news/blog/2015/11/commerce-departments-digital-economy-agenda.html. Davidson identifies four pillars: promoting a free and open Internet worldwide; promoting trust online; ensuring access for workers, families, and companies; and promoting innovation.
4    Frank Dickson, “The Five Elements of the Future of Trust,” IDC, April 22, 2020, accessed March 26, 2021, https://blogs.idc.com/2020/04/22/the-five-elements-of-the-future-of-trust/.
5    “Communication from the Commission to the European Parliament and the Council. Data protection rules as a trust-enabler in the EU and beyond – taking stock,” COM/2019/374 final, European Union, July 24, 2019, accessed March 26, 2021, https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2019:374:FIN.
6    “General Data Protection Regulation,” Intersoft Consulting, https://gdpr-info.eu/.
7    T. Timan and Z.Á. Mann, eds., Data protection in the era of artificial intelligence. Trends, existing solutions and recommendations for privacy-preserving technologies, Big Data Value Association, October 2019, accessed March 26, 2021, https://www.bdva.eu/sites/default/files/Data%20protection%20in%20the%20era%20of%20big%20data%20for%20artificial%20intelligence_BDVA_FINAL.pdf.
8    “2019 Consumer Data Privacy Legislation,” National Conference of State Legislatures, January 3, 2020, accessed March 26, 2021, https://www.ncsl.org/research/telecommunications-and-information-technology/consumer-data-privacy.aspx.
9    “Information Transparency and Personal Data Control Act,” fact sheet, accessed March 26, 2021, https://delbene.house.gov/uploadedfiles/delbene_consumer_data_privacy_bill_fact_sheet.pdf; Information Transparency & Personal Data Control Act, H.R. 2013 — 116th Congress (2019-2020), accessed April 2, 2021, https://delbene.house.gov/uploadedfiles/delbene_privacy_bill_final.pdf.
10    “Developing the Administration’s Approach to Consumer Privacy,” Federal Register, September 26, 2018, accessed March 26, 2021, https://www.federalregister.gov/documents/2018/09/26/2018-20941/developing-the-administrations-approach-to-consumer-privacy; Alan Charles Raul and Christopher Fonzone, “The Trump Administration’s Approach to Data Privacy, and Next Steps,” Sidley Austin LLP, October 2, 2018, accessed March 26, 2021, https://datamatters.sidley.com/the-trump-administrations-approach-to-data-privacy-and-next-steps.
11    Setting an American Framework to Ensure Data Access, Transparency, and Accountability (SAFE DATA Act), S.4626 — 116th Congress (2019-2020), https://www.congress.gov/116/bills/s4626/BILLS-116s4626is.pdf; Online Privacy Act of 2019 , H.R. 4978 — 116th Congress (2019-2020), https://www.congress.gov/bill/116th-congress/house-bill/4978/text; COVID-19 Consumer Data Protection Act of 2020, S. 3663 — 116th Congress (2019-2020), https://www.congress.gov/bill/116th-congress/senate-bill/3663.
12    Designing Accounting Safeguards to Help Broaden Oversight and Regulations on Data Act, S. 1951 — 116th Congress (2019-2020), accessed March 26, 2021, https://www.congress.gov/bill/116th-congress/senate-bill/1951. The informal reference, DASHBOARD Act, is found in articles about this bill; Public Health Emergency Privacy Act, S. 3749 — 116th Congress (2019-2020), accessed March 26, 2021, https://www.congress.gov/bill/116th-congress/senate-bill/3749. This has been reintroduced in the 117th Congress. Mark R. Warner, Warner, Blumenthal, Eshoo, Schakowsky & DelBene Introduce the Public Health Emergency Privacy Act, press release, January 28, 2021, https://www.warner.senate.gov/public/index.cfm/2021/1/warner-blumenthal-eshoo-schakowsky-delbene-introduce-the-public-health-emergency-privacy-act; Augmenting Compatibility and Competition by Enabling Service Switching (ACCESS) Act of 2019, S. 2658 — 116th Congress (2019-2020), accessed March 26, 2021, https://www.congress.gov/bill/116th-congress/senate-bill/2658.
13    National Institute of Standards and Technology, “NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management, Version 1.0,” January 16 2020, accessed March 26, 2021, https://www.nist.gov/system/files/documents/2020/01/16/NIST%20Privacy%20Framework_V1.0.pdf.
14    Congressional Research Service, Data Protection Law: An Overview, March 25, 2019, accessed March 26, 2021, https://fas.org/sgp/crs/misc/R45631.pdf.
15    Ibid., 56.
16    Timan and Mann, Data protection.
17    Big Data UN Global Working Group, UN Handbook on Privacy-Preserving Computation Techniques, accessed March 26, 2021, https://marketplace.officialstatistics.org/privacy-preserving-techniques-handbook.
18    Daniel Bachlechner, Karolina La Fors, and Alan M. Sears, “The Role of Privacy-Preserving Technologies in the Age of Big Data,” proceedings of the 13th Pre-ICIS Workshop on Information Security and Privacy, San Francisco, December 13, 2018, accessed March 26, 2021, https://www.albany.edu/wisp/papers/WISP2018_paper_11.pdf; Felix T. Wu, “Defining Privacy and Utility in Data Sets,” University of Colorado Law Review 84 (2013), accessed March 26, 2021, http://lawreview.colorado.edu/wp-content/uploads/2013/11/13.-Wu_710_s.pdf.
19    Kirsten Martin, Katie Shilton, and Jeffrey Smith, “Business and the Ethical Implications of Technology: Introduction to the Symposium,” Journal of Business Ethics 160, 307–317 (2019), accessed April 16, 2021, https://doi.org/10.1007/s10551-019-04213-9
20    Nicholas Confessore, “Audit Approved of Facebook Policies, Even After Cambridge Analytica Leak,” New York Times, April 19, 2018, accessed March 26, 2021, https://www.nytimes.com/2018/04/19/technology/facebook-audit-cambridge-analytica.html.
21    Commission on Enhancing National Cybersecurity, Report on Securing and Growing the Digital Economy, December 1, 2016, accessed March 26, 2021, https://www.nist.gov/system/files/documents/2016/12/02/cybersecurity-commission-report-final-post.pdf.
22    White House, “National Strategy for Trusted Identities in Cyberspace, Enhancing Online Choice, Efficiency, Security, and Privacy,” April 2011, accessed March 26, 2021, https://obamawhitehouse.archives.gov/sites/default/files/rss_viewer/NSTICstrategy_041511.pdf.
23    Bhaskar Chakravorti, Ajay Bhalla, and Ravi Shankar Chaturvedi, “How Digital Trust Varies Around the World,” Harvard Business Review, February 25, 2021, accessed April 16, 2021, https://hbr.org/2021/02/how-digital-trust-varies-around-the-world#:~:text=To%20that%20end%2C%20in%20partnership,user%20experience%3B%20the%20extent%20to.
24    Appendix A provides several references on the topics of trust and countering digital misinformation.
25    National Institute of Standards and Technology, Security and Privacy Controls for Information Systems and Organizations, Special Publication 800-53, Revision 5, September 2020, accessed April 16, 2021, https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r5.pdf.
26    A potential source for the types of initiatives of interest is the OECD Network of Experts on AI (ONE AI). This group provides policy, technical and business expert input to inform OECD analysis and recommendations. “OECD Network of Experts on AI (ONE AI),” OECD.AI, accessed March 26, 2021, https://www.oecd.ai/network-of-experts.

The post Enhanced trust and confidence in the digital economy appeared first on Atlantic Council.

Secure data and communications https://www.atlanticcouncil.org/content-series/geotech-commission/chapter-2/ Tue, 25 May 2021 22:57:10 +0000 https://www.atlanticcouncil.org/?p=392379 An in depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.

The post Secure data and communications appeared first on Atlantic Council.


Report of the Commission on the Geopolitical Impacts of New Technologies and Data

Chapter 2. Secure data and communications


This chapter addresses secure data and communications in two timeframes. Part A discusses current cybersecurity concerns and includes recommendations for improving US cybersecurity against an expanding range of vulnerabilities. Part B focuses on quantum information science (QIS) and recommends steps for ensuring the United States, along with its allies and partners, remains a leader in the development and operationalization of QIS technologies.


Part A: Current cybersecurity concerns

Secure data and communications are fundamental to the United States’ digital infrastructure and to attaining the full benefits of the global digital economy. Through the use of standards, risk assessments, monitoring, and technologies, the US government enables the public and private sectors to secure systems, data, and communications.

As the digital economy connects more public and private sector processes, effective cybersecurity for the US government faces several challenges: (i) the US government, through regulation, can influence but not assure the cybersecurity preparedness of the private sector; (ii) the cybersecurity workforce needed to secure US government and private sector networks is so large that the private sector must fill the greater share, yet some small and medium-sized companies cannot afford a dedicated cybersecurity workforce; and (iii) US government agencies and laws for ensuring cybersecurity are not fully adapted to the evolving characteristics of cyberattacks. These limitations lead to more attack vectors, missed early warning indicators, and lower cybersecurity preparedness. To maintain secure data and communications, the United States must overcome these limitations while staying ahead of adversaries’ exploitation of US network and endpoint vulnerabilities.

Finding 2A: Expanding cybersecurity vulnerabilities require partnerships between the public and private sectors.

Cybersecurity vulnerabilities are increasing in scope and effect: greater connectivity yields more vectors for attacks, interdependent networks produce cascading effects, data breaches and records exposed are increasing,1 and disjointed governance limits awareness and speed of action.

Cyberattackers leverage the interdependent parts of digital infrastructure to create complex attacks for the purposes of “coercion, sabotage, espionage, or extortion.”2 The greater number of connected devices can give attackers new, less defended points of access to systems and networks; for example, attackers could access the network controller devices in an electrical power network.3 Software supply chains also present new cyberattack vulnerabilities when companies fail to employ industry-best security practices.

  • In the recent SolarWinds Orion software supply chain attack, malware was inserted into a trusted software update, which led to significant breaches of government and private networks as the update was downloaded by as many as eighteen thousand SolarWinds customers (including other software and IT vendors). Such exploits of software/IT supply chains require knowledge of software configurations and dependencies. If a software vendor in the supply chain is vulnerable, then its software updates become vectors for diffusing malware.4
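One baseline supply-chain practice the SolarWinds case underscores is verifying a downloaded update against a vendor-published digest before installation. The minimal sketch below illustrates that check; the file name and digest value are placeholders, and hash pinning alone would not have stopped a compromise of the vendor’s own signed build, so this is one layer of hygiene rather than a complete defense.

```python
# Hedged sketch: verify a downloaded software update against a vendor-published
# SHA-256 digest before installing it. File name and digest are placeholders.
import hashlib
import hmac
from pathlib import Path

PUBLISHED_DIGEST = "0" * 64  # placeholder; obtain out-of-band from the vendor

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path: Path) -> bool:
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(sha256_of(path), PUBLISHED_DIGEST)

if __name__ == "__main__":
    update = Path("vendor_update.pkg")  # hypothetical artifact
    ok = update.exists() and verify_update(update)
    print("integrity verified" if ok else "update rejected or missing")
```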

Interdependencies among networks, including between digital infrastructures and physical systems or people, are a growing type of vulnerability. Three cases illustrate such interdependencies. In a cyber risk assessment of election infrastructure, the Cybersecurity and Infrastructure Security Agency (CISA) found that “Disinformation campaigns conducted in concert with cyberattacks on election infrastructure can amplify disruptions of electoral processes and public distrust of election results.”5 Ransomware attacks have cost institutions money, caused inconvenience, and disrupted healthcare delivery at some hospitals.6 And an adversary could hold one of the US critical infrastructure sectors7 hostage to preempt US military or diplomatic responses.

Data are as important as the networks, and are the foundation for new capabilities to monitor the climate, global health, agriculture, and cyberspace. Large data collections are essential for new applications of AI and innovations in medicine and education. The data infrastructure, including the systems where data are stored and analyzed and the networks that communicate the results, is itself a target for cyberattacks.

Advanced cyberattacks take advantage of the limited information sharing between government cybersecurity experts and private industry, and the limited collection of cyberattack indicator information on private systems. Cyberattackers can spend weeks or months carefully probing the target systems, unnoticed.

Federal and private sector organizations lack sufficient insight into system operations, acquired software dependencies, and vendor practices. Also lacking is an effective system of liability and incentives to promote software supply chain security.

Finding 2A.1: Private sector infrastructure critical for economic or national security needs strengthened cybersecurity.

Private sector enterprises and small businesses can be a vector for significant attacks on critical infrastructure, yet cannot readily access or benefit from US government cybersecurity expertise. According to Securing Cyber Assets, Addressing Urgent Cyber Threats to Critical Infrastructure:8

“[M]any outstanding federal capabilities play crucial roles in cyber defense and resilience today. However, their effectiveness is constrained in the following ways:

  • Private sector knowledge of these [federal cybersecurity] capabilities and incentives to use them is limited.
  • Access [to federal cybersecurity capabilities] is hindered by multiple legal and administrative constraints.
  • Government capabilities are scattered across a wide swath of agencies, departments, and their sub-units—a complicated labyrinth comparatively few can effectively navigate.
  • Classification of essential threat information can delay and hinder coordinated response.”

The following sources of cyber information and resources, along with improved coordination with the federal government, can address these needs: (i) Government sharing of critical information about cyberthreats, capabilities, and early attack indicators. This information can help private companies focus their cyberdefense resources and be more agile in doing so. (ii) A national cyber strategy that incorporates the private sector as an integral participant. This requires clarifying the laws governing the ability of the US government to direct the cybersecurity actions of private sector entities, including obligatory information sharing from certain private sector entities. (iii) For software/IT supply chains that support critical economic or national security infrastructure, US government provided risk information on vendors and components flowing into the software/IT supply chain, based on comprehensive and up-to-date collection of supply chain data and analysis of supply chain risks. Private industry can use this information to inform their risk assessments. (iv) US government incentives that assist private industry to grow the cybersecurity workforce needed to make the private sector more secure.

Finding 2A.2: Obtaining the needed cybersecurity workforce and expertise requires participation by the public and the private sector.

“Executive Order 13870 of May 2, 2019: America’s Cybersecurity Workforce,”9 establishes national requirements to expand both the federal cybersecurity workforce and the cybersecurity workforce for state, territorial, local, and tribal governments, academia, private sector stakeholders, and others. There are five hundred and twenty-one thousand unfilled cybersecurity jobs in the United States, of which thirty-seven thousand are in the federal government.10

The EO supports workforce mobility between the public and private sector for cybersecurity workers, and directs departments to share recruitment strategies and tools across these sectors. A starting point, for both sectors, is the Workforce Framework for Cybersecurity [National Initiative for Cybersecurity Education (NICE) Framework].11 This defines categories and specialty areas, knowledge, tasks, skills, abilities, and work roles. It can be used by public and private sector employers to better match candidates with sets of needed skills.

To close the workforce gap in nonfederal positions, a flexible approach, consistent with the NICE Framework, may be effective.12 The strategy is to develop new career models that are better matched to the pool of candidates, aligned with the NICE Framework where possible, and using employee development programs and financial incentives to grow workforce skills.

Finding 2A.3: Cybersecurity governance, which must enable timely protective actions, has not matched the speed of the cyber threat environment.

The National Institute of Standards and Technology (NIST) Cybersecurity Framework comprises five functions: Identify, Protect, Detect, Respond, and Recover.13 In each function, timely action is essential for effective cybersecurity. Yet, defensive cybersecurity posture is systemically outpaced by offensive actors.

  • Patching quickly is imperative. A FireEye study14 reports the average time between disclosure and patch availability was approximately nine days. Other reports15 have found longer times to patch—up to thirty-eight days on average—and some of the most notorious cyber incidents exploited vulnerabilities for which patches had been available for months.16
  • Organizational adjustments and implementation of best practices must be rapid to keep up with developing threats. Yet, at the federal level, many agencies have been unable to adopt NIST-recommended best practices for ICT supply chain risk management for years.17
  • Timely and rapid detection and response is necessary to forestall damage and the risk of cascading effects. This capability relies on a system of indicators and warnings, and, at times, comprehensive situational awareness that allows one to monitor cyber events closely and deploy defensive tools with precision. Still, the most sophisticated incursions can remain undetected for months.18
  • Timely recovery depends on having built resilience into the digital infrastructure, and in having efficient decision making. Long-running attacks, however, can take more than a year to fully recover from.19
  • All core cybersecurity functions depend on efficient information sharing between and within the public and private sectors. Yet, industry still complains about their incident response being hampered by liability concerns20 and information sharing challenges.21

Approach 2A: Establish comprehensive situational awareness of cybersecurity risks in systems that are critical for national and economic security.

The foundation of an effective cybersecurity strategy is comprehensive situational awareness of the state of the critical infrastructure for economic and national security. This is built upon the continuous collection of key indicators, prioritization of risk, the ability to assess key points in the software/IT supply chain, standards to inform best practices, and assessments of the actual levels of cyberdefense and resilience.

To achieve such comprehensive situational awareness requires that the public and private sectors must develop a partnership that ensures sufficient information is monitored and exchanged; that the authorities for taking action, when needed, are established in law; and that sufficient cybersecurity training and knowledge is available across the private sector to help strengthen the cybersecurity of this sector.

Recommendation 2A: The United States should update and renew the National Cyber Strategy’s Implementation Plan with a focus on streamlining how public and private sector entities monitor their digital environments.

The administration should establish a process to incorporate both regular and ad hoc updates into the National Cyber Strategy so that the strategy remains current and evolves to meet future cybersecurity threats and challenges.

Recommendation 2A.1: Review, update, and reestablish the Implementation Plan for the National Cyber Strategy.

The administration should establish a process to incorporate both regular and ad hoc updates into the National Cyber Strategy so that the strategy remains current and evolves to meet future cybersecurity threats and challenges.22 The strategy should retain focus on streamlining how public and private sector entities continuously monitor their digital environments to include outlining the appropriate roles, responsibilities, and governance. In addition to a single national cyber coordinator23 that was established in the FY 2021 National Defense Authorization Act (NDAA), the strategy should consider the following components: uniform rules and increased compliance with standards for cybersecurity practices across all government activities (with exceptions for national security activities); skilled cybersecurity officers either in, or embedded in, organizations; and a national educational program to improve individuals’ cybersecurity habits.

Recommendation 2A.2: Establish effective and coordinated continuous monitoring for software and hardware used by the federal government.

As part of COVID-19 pandemic relief, the American Rescue Plan Act of 2021 (Public Law No: 117-2, March 11, 2021)24 includes $1.65 billion for cybersecurity capabilities, readiness, and resilience. This increases the Technology Modernization Fund and helps CISA and the General Services Administration (GSA) complete modernization projects at federal agencies. Additional funds for CISA could bolster cybersecurity across federal civilian agency networks and support pilot programs for shared security and cloud computing services.

The acquisition strategies to achieve cybersecurity resilience should reflect the unique cybersecurity requirements and the need for specialized expertise in operations and networks supporting Title 5 (Government Organization and Employees), Title 10 (Armed Forces), Title 34 (Crime Control and Law Enforcement), and Title 50 (War and National Defense) of the US Code. The acquisition strategies should strengthen compliance with standards for continuous monitoring of cybersecurity performance.

The federal government should seek to achieve continuous cybersecurity monitoring of the hardware and software systems that support US government functions, including critical supply chains and network infrastructure. The approach should ensure coordination across all relevant elements of the federal government. Attributes to monitor include external network traffic, internal network behavior, vulnerability exposure, asset tracking, security posture, vendor compliance, product compliance, and product updates. There are four contributing activities to fully realize a cybersecurity posture informed by continuous monitoring: (i) assess the trustworthiness of software and hardware employed by the US government based on inherent vulnerabilities and risks due to the network position, permissions, and supply chain considerations; (ii) further empower the Department of Homeland Security (DHS) to perform these assessments by strengthening the ties among US government agency chief information officers (CIOs) and DHS for the various government networks; (iii) make these hardware and software risk assessments available to local and state governments to inform their endeavors; and (iv) leverage these assessments to support the private sector, especially small- to mid-sized businesses that do not have the capacity to fully assess their own supply chains yet would benefit from knowing what software is trustworthy. The risk assessments developed by the US government could also be shared with like-minded partners that are seeking to do the same regarding the hardware and software they employ to achieve assured supply chains and trusted digital environments.
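As a rough illustration of how the attributes listed above might be tracked per asset, the sketch below defines a simple posture record an agency inventory system could maintain. The field names and the thirty-day patch-age threshold are assumptions for the example, not drawn from any federal standard.

```python
# Hedged sketch: a minimal per-asset posture record covering the monitoring
# attributes named above. Field names and thresholds are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AssetPosture:
    asset_id: str
    vendor_compliant: bool
    product_compliant: bool
    open_vulnerabilities: int
    last_update_applied: datetime
    external_traffic_anomalies: int = 0
    internal_traffic_anomalies: int = 0

    def needs_attention(self, max_patch_age_days: int = 30) -> bool:
        """Flag the asset if any monitored attribute is out of tolerance."""
        age_days = (datetime.now(timezone.utc) - self.last_update_applied).days
        return (not self.vendor_compliant
                or not self.product_compliant
                or self.open_vulnerabilities > 0
                or self.external_traffic_anomalies + self.internal_traffic_anomalies > 0
                or age_days > max_patch_age_days)

# Example: a hypothetical asset with a stale update would be flagged.
router = AssetPosture("edge-router-01", True, True, 0,
                      datetime(2021, 4, 1, tzinfo=timezone.utc))
print(router.needs_attention())
```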

There are several lines of effort, described further in Appendix B.

Recommendation 2A.3: Increase compliance with continuous monitoring that is part of the National Institute of Standards and Technology security control guidance.

The administration should require GAO to review the efficacy of agency-specific practices regarding the continuous monitoring portion of its security control guidance. NIST controls dedicated to continuous monitoring for agencies25 are required for all three priority levels of the federal agency information systems.26 OMB memoranda as far back as 201127 discuss continuous monitoring superseding periodic reviews. While NIST has long recommended the practice, agencies have failed to implement it: in 2019, only about three-quarters had done so,28 marking little improvement over several years. The most recent GAO report29 indicates that general compliance with fundamental risk management practices has turned worse.

To achieve increased compliance, CISA should be empowered to assist lagging agencies in conforming with NIST guidelines and best practices mandated by the Federal Information Security Modernization Act (FISMA).30 This would support a more responsive and uniform implementation of security methods—monitoring, security updates, approaches such as stress tests, assessing vendor security maturity, and certificate transparency. New data disclosure policies must be developed to enable the mapping, visualization, and testing of the software/IT supply chain networks.31

A more specific understanding of continuous monitoring practices is needed to guide implementation, because the types of continuous monitoring discussed most often overlap. First is the continuous monitoring of vendor compliance with certification regimes: the Federal Risk and Authorization Management Program (FedRAMP), the Department of Defense (DoD) information networks approved products list (DoDIN APL), the new Cybersecurity Maturity Model Certification (CMMC), and others. Each aspires toward continuous assessment of compliance, but all are still organized around monthly, yearly, or three-year review periods. Truly continuous monitoring would bring more rigor and regularity to reviewing changes made to deployed software, a potentially devastating attack vector for adversaries, as well as changes in vendor security practices and context.

NIST guidelines refer to continuous monitoring of security control efficacy, asset exposure, threat vulnerability, configuration compliance, and other quasi-technical metrics. Between 79 percent and 83 percent of Chief Financial Officers Act of 1990 (CFO Act) federal agencies,32 and between 58 percent and 63 percent of non-CFO Act agencies, fulfill these requirements. This type of continuous monitoring is determined by agency policy, leading to varying standards for how often to perform checks, what to check, and what satisfactory levels are.33 A program at CISA, the Continuous Diagnostics and Mitigation (CDM) program, is supposed to integrate these activities. It has met systemic implementation difficulties, however,34 and Homeland Security Secretary Alejandro Mayorkas has sought a review of the CDM program, along with CISA’s EINSTEIN program, which monitors inbound and outbound traffic on federal networks.35 The CDM program also must overcome great variation among the networks and products to be checked; there is little agreement on what to check or how often, and the quality of implementation is not well known.

Finally, there is the continuous monitoring of actual network behavior. This would include mandating the maintenance of standardized access logs, auditing of those logs, monitoring inbound and outbound traffic, and all the related detailed measurements. More transparency is needed in how much such monitoring occurs within government networks, though CISA’s EINSTEIN program does the work of monitoring traffic in and out of federal civilian agencies.
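A minimal sketch of the kind of access-log auditing described above follows. The log format (a CSV export with timestamp, user, and destination columns) and the allow-list are assumptions for illustration, not a prescribed federal logging standard.

```python
# Hedged sketch: audit standardized access logs for outbound connections to
# destinations outside an allow-list. Log format and allow-list are assumed.
import csv
import os

ALLOWED_DESTINATIONS = {"updates.example.gov", "mail.example.gov"}  # hypothetical

def audit_access_log(path):
    """Return log rows whose destination is not on the allow-list."""
    findings = []
    with open(path, newline="") as f:
        # Assumed columns: timestamp, user, destination
        for row in csv.DictReader(f):
            if row["destination"] not in ALLOWED_DESTINATIONS:
                findings.append(row)
    return findings

if __name__ == "__main__":
    log_path = "access_log.csv"  # hypothetical log export
    if os.path.exists(log_path):
        for finding in audit_access_log(log_path):
            print(f"{finding['timestamp']} {finding['user']} -> {finding['destination']}")
```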

Recommendation 2A.4: Ensure cybersecurity best practices, expertise, and assurance testing are widely available to industry and government entities.

The administration should provide the private sector technical information on threats on a regular basis, to bolster cybersecurity. The private sector outreach would be linked to the existing Information Sharing and Analysis Centers (ISACs) for US critical infrastructure entities and the Information Sharing and Analysis Organizations (ISAOs) to ensure monitoring of both supply chain risks and cybersecurity performance for vital US private sector companies of all sizes.

The US national security domain requires independent certification of adherence to a set of multinational standards.36 One approach could be to expand CMMC to all of government instead of just DoD. While the program is still facing implementation challenges,37 it could provide useful information on general cybersecurity maturity to industry and government alike, with benefits beyond the specific vendor products. Because DoD is only just beginning to implement CMMC, as a first step the administration should conduct a feasibility assessment for an across-government approach. To improve and streamline cybersecurity requirements, the administration should assess how a government-wide implementation of CMMC would overlap with FedRAMP or any other cybersecurity requirements, and how the broadened implementation of CMMC could improve general industry cyber hygiene.

To implement cybersecurity capabilities and practices, private sector companies must acquire cleared personnel, spaces, and IT equipment. The administration should consider accelerating any necessary prerequisite steps.

Part B: Quantum information sciences

The United States, the European Union (EU), China, Russia, the United Kingdom, Canada, and other nations are expanding their investments in QIS, with national and regional QIS strategies and programs.38 Recent demonstrations of quantum computers increase concerns that aspects of the technical foundation of the United States’ digital security may be vulnerable in the foreseeable future.39 Quantum communication and quantum key distribution (QKD) methods,40 though, can enhance the security of the digital infrastructure. These methods may contribute to data and communications security against untrusted and corrupted hardware and also protect against the ability to make inferences about sensitive data based on access to multiple data sources containing nonsensitive data.41
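As a rough illustration of the QKD idea, the sketch below simulates only the sifting step of a BB84-style exchange on an ideal, noiseless channel with no eavesdropper: both parties keep the bit positions where their randomly chosen bases happen to agree. It is a toy model, not a protocol implementation, and all parameters are illustrative.

```python
# Minimal BB84-style sketch (illustrative only, not a secure implementation).
# Alice encodes random bits in random bases; Bob measures in random bases;
# they publicly compare bases and keep only matching positions (the sifted key).
import secrets

def random_bits(n):
    return [secrets.randbelow(2) for _ in range(n)]

def bb84_sift(n=256):
    alice_bits = random_bits(n)    # raw key material
    alice_bases = random_bits(n)   # 0 = rectilinear, 1 = diagonal
    bob_bases = random_bits(n)

    # On an ideal channel with no eavesdropper, Bob's result matches Alice's
    # bit whenever the bases agree; otherwise the outcome is random.
    bob_results = [
        bit if a_basis == b_basis else secrets.randbelow(2)
        for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
    ]

    sifted_alice = [b for b, a, bb in zip(alice_bits, alice_bases, bob_bases) if a == bb]
    sifted_bob = [r for r, a, bb in zip(bob_results, alice_bases, bob_bases) if a == bb]
    return sifted_alice, sifted_bob

if __name__ == "__main__":
    key_a, key_b = bb84_sift()
    assert key_a == key_b  # holds only for the ideal, noiseless, attack-free case
    print(f"Sifted key length: {len(key_a)} of 256 raw bits")
```

In a real deployment, the parties would additionally sacrifice a sample of sifted bits to estimate the error rate (and hence detect eavesdropping) before error correction and privacy amplification.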

Finding 2B: Long-term quantum information science priorities include international collaboration, which is limited by national and regional funding and data-sharing policies.

A primary element of leadership in QIS is the ability to set key standards for QIS applications. This relies on developing and deploying devices that operationalize QIS, and on collaborating with many nations and partners. While collaboration is identified as a national priority in the US national strategy for QIS, it should be extended beyond basic science and technology (S&T) activities.

Finding 2B.1: The US strategy for quantum information science emphasizes US efforts and benefits.

The National Strategic Overview for Quantum Information Science42 provides a strategic approach for achieving US leadership in QIS and its applications to national and economic security. The six policy areas are as follows:

  • Choosing a science-first approach to QIS: Strengthen the research foundation and the collaboration across disciplines. Use Grand Challenge problems as a strategic mechanism to coordinate and focus efforts.
  • Creating a future quantum-smart workforce: Foster a QIS-skilled workforce through investments in industry, academia, and government laboratories that increase the scope of QIS research, development, and education.
  • Deepening engagement with the quantum industry: Increase coordination among the federal government, industry, and academia to enhance awareness of needs, issues, and opportunities.
  • Providing critical infrastructure: Encourage necessary investments, create and provide access to QIS infrastructure, and establish testbeds.
  • Maintaining national security and economic growth: Maintain awareness of the security benefits and risks of QIS capabilities.
  • Advancing international cooperation: Seek opportunities for international cooperation to benefit the US talent pool and raise awareness about other QIS developments.

The US strategy for QIS recognizes the sensitivities of this research, which can both enable new scientific and economic applications, and create new methods for attacking sensitive data and communications. This strategy supports international collaboration in QIS both to advance the basic research and its applications, and to ensure the United States maintains its leadership and competitiveness in QIS.43

The US strategy for QIS supports international efforts in three ways: It reviews international research to maintain awareness of new results and directions, selects partnerships that will give the United States access to top-quality researchers and facilities, and shares certain public data from QIS research to help the development of standards for future QIS applications.

In addition to the US strategy for QIS, the National Quantum Initiative Act “authorized $1.2 billion in federal research and development (R&D) spending over five years, established the National Quantum Coordination Office, and called for the creation of new QIS research institutes and consortia around the country.”44 Also, the National Science Foundation (NSF) recently established three quantum research centers45 and added the opportunity for limited supplemental funding requests to support international collaboration on basic research topics.46

Congressional hearings on “Industries of the Future” discussed the importance of QIS and establishing US leadership in QIS.47 One effort by the United States to establish international cooperation in QIS is the agreement between the United States and Japan to cooperate on quantum research through activities including “collaborating in venues such as workshops, seminars, and conferences to discuss and recognize the progress of research in QIST, which in turn will lead to the identification of overlapping interests and opportunities for future scientific cooperation.”48

Finding 2B.2: China is pursuing quantum information science as a strategic technology.

Quantum communications and computing are among the strategic technologies highlighted in China’s 14th Five-Year Plan (2021-2025). China aims to be a global leader in innovation, using large demonstration projects to advance its science and technology (S&T), and to build human capital for strategic technology areas. This includes major initiatives in quantum research and development (R&D), demonstrations of QKD and quantum computing, and a major new National Laboratory for Quantum Information Sciences.49 China is able to advance in quantum R&D in part due to the close coordination among the government, universities, and industry, which aids both the advancement of the science and the building of a skilled workforce.50

Finding 2B.3: EU’s science and technology strategy focuses on EU participation.

The EU’s S&T program includes three components that address QIS and other technology areas: (i) Horizon Europe, which has a seven-year budget of €95.5 billion for 2021-2027, within which the Digital, Industry and Space area is funded at €15.5 billion;51 (ii) Digital Europe Programme, funded at €7.5 billion;52 and (iii) Space Programme, with proposed funding of €13.2 billion.53 The European Commission is soliciting proposals for quantum communications infrastructure, which will be funded by these initiatives. The objective is to enable the EU to be an independent provider of quantum technologies needed to build a quantum communications infrastructure.54

Horizon 2020, the predecessor to Horizon Europe, involved US researchers in only 1.5 percent of its projects.55 In comparison, EU researchers participate at a much greater level across all NSF and National Institutes of Health (NIH) active grants.56 This asymmetry in participation is due to EU rules that require participants in Horizon 2020 projects to sign grant agreements. For US institutions, this raises issues concerning “governing law and jurisdiction, intellectual property treatment, joint and several liability57 and indemnification, access to data and implications for export control, and auditing requirements.”58

Finding 2B.4: Funding policies constrain collaboration.

One issue of concern in the Horizon Europe initiative rules governing participation is the determination of financial contribution by the United States and “third countries” as defined in Article 12 of Horizon Europe—the Framework Programme for Research and Innovation.59 The calculated cost of association with the Horizon Europe initiative is based on the relative size of a country’s gross domestic product (GDP) compared with EU GDP. For example, the European Commission has proposed making the UK pay a proportion of the 2021-2027 research budget based on its share of EU GDP, which currently stands at 18 percent. For the United States, this corresponding value is 137 percent, yielding a required contribution of $131.4 billion.
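A back-of-the-envelope reading of that GDP-proportional rule, using the €95.5 billion Horizon Europe budget cited above, is sketched below. The euro-to-dollar conversion is a placeholder assumption, so the result only approximates the dollar figure given in the text.

```python
# Hedged back-of-the-envelope sketch of the GDP-proportional contribution rule
# described above. Budget and GDP share come from the text; the exchange rate
# and rounding are illustrative assumptions.
HORIZON_BUDGET_EUR_BN = 95.5   # 2021-2027 Horizon Europe budget (from the text)
US_GDP_SHARE_OF_EU = 1.37      # US GDP relative to EU GDP, i.e., 137 percent (from the text)
EUR_TO_USD = 1.00              # placeholder exchange rate; adjust as needed

contribution_usd_bn = HORIZON_BUDGET_EUR_BN * US_GDP_SHARE_OF_EU * EUR_TO_USD
print(f"Implied US contribution: ~${contribution_usd_bn:.1f} billion")  # ~130.8 at parity
```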

The regulations establishing Horizon Europe contain other potential issues for US participation. These include Article 36, which gives the European Commission rights regarding transfer and licensing, and Article 49, which gives certain EU entities the right to carry out investigations and inspections.

Approach 2B: Coordinate with allies and partners to build human capital for quantum information science and overcome limitations imposed by national and regional funding and data-sharing policies.

In the ongoing competitive R&D of QIS, key determinants of success are the size, skill, and collaboration of the technology workforce spanning a number of disciplines, including those in the fields of science, technology, engineering, mathematics (STEM), and manufacturing. The United States recognizes that it “must work with international partners, even while advancing domestic investments and research strategies.”60

Recommendation 2B: With allies and partners, the United States should develop priority global initiatives that employ transformative quantum information science and catalyze the development of human capital and infrastructure for these and other next-generation quantum information science applications.

Recommendation 2B.1: Establish, with other nations, a common set of demonstration milestones for quantum data and communications security.

The administration should extend the technological development portfolio of national investments in QIS to incorporate a common set of milestones with allies. The members of the National Science and Technology Council (NSTC) Subcommittee on Quantum Information Science should develop such milestones in coordination with representatives from collaborating nations. These are to be consonant with plans by the United States and like-minded nations to develop testbeds, demonstrations, standards, and a quantum-skilled workforce. The milestones will inform the practical applications for use with near-, mid-, and long-term levels of quantum information capabilities. The EU’s Horizon Europe initiative is a potential opportunity for such collaboration. The United States should also establish data sharing agreements with other nations for QIS results pertaining to shared economic and national security interests.

Recommendation 2B.2: Create a program of quantum information science research and development focused on emerging issues for digital economies.

The administration should continuously evaluate QIS progress and technologies through the White House Office of Science and Technology Policy (OSTP) and the National Academies of Sciences, Engineering, and Medicine; this could be accomplished by creating a standing committee, as the National Academies have done for other long-lived areas. This will identify new technology directions, review QIS policies, and revisit priorities and partnerships. The evaluations should focus on entirely new quantum capabilities that can benefit digital economies, e.g., privacy and advances in biotechnology and data capabilities, open sharing of data while maintaining data privacy, principles for systems to be quantum-secure by design, digital supply chain security for both hardware and software, evolution of Internet protocols, network modernization, and other topics.

Recommendation 2B.3: Establish a program to accelerate the operationalization of quantum information science technologies.

Recognizing the need for broad and significant investment in quantum applications to focus and accelerate progress, Congress and the administration should establish a program, led by the Defense Advanced Research Projects Agency (DARPA), to accelerate the operationalization of continually evolving hybrid (classical and quantum) computing architectures. This program will mature prototype demonstrations of quantum computing, communication, sensing, and metrology technologies to yield fieldable capabilities. The program also should include elements that seek to develop a quantum-skilled workforce in the private and public sectors. Several models for such a program are seen in DARPA’s long history of rapidly growing and maturing advanced technology fields, e.g., Grand Challenges for autonomous vehicles, Have Blue for stealth technologies, and AI Next for artificial intelligence.

Recommendation 2B.4: Establish leading roles for the United States in setting international standards for data and communications security as quantum information science evolves.

Building on the results obtained from NDAA FY 2021, SEC. 9414, Study on Chinese Policies and Influence in the Development of International Standards for Emerging Technologies,61 the administration should take steps to bolster the development of standards for QIS technology development and applications.62 This will drive toward a strategy for achieving a leadership role in international quantum standards setting, sharing sensitive security-related advances with allies, responding to China’s efforts to influence international standards,63 and catalyzing private sector investments in quantum technologies. NIST is currently developing quantum resilient encryption standards for the United States.64 The administration should direct NIST to broaden the scope of its work to develop standards for QIS technology development and applications.65

The administration should develop DoD and Intelligence Community policy guidance to govern the sharing of QIS findings and capabilities with allies and partners. This guidance should be developed with representation from the Department of Commerce’s National Telecommunications and Information Administration (NTIA) and NSF to balance security concerns with the benefits of collaboration; address government and private industry information, both classified and proprietary; and also should include categories of information that the United States is interested in receiving from allies and partners.

Recommendation 2B.5: Establish a national QIS research, development, and testing infrastructure; fund quantum demonstration programs.

The administration should establish a national QIS research, development, and testing infrastructure. This will comprise research centers focused on quantum computing, quantum communications, quantum sensing, and evaluation of QIS (including QIS-secure) applications; a national computational infrastructure to support this initiative; engineering testbeds; programs to build a skilled QIS workforce; and participation by private industry (for example, the Quantum Economic Development Consortium66) to advance the development of a national QIS infrastructure and create fielded capabilities. In support of the National Quantum Coordinating Office, an interagency group led by the Department of Energy, NIST, and DARPA should oversee this infrastructure initiative, coordinating federal programs and guiding private industry’s participation.

The administration should develop demonstration programs that show, in operational settings, national security implications of near-term quantum platforms. Some examples include the following:

  • Quantum communications: There are two areas of interest: (i) understanding vulnerabilities of various public key cryptographic systems to future quantum computing systems, an effort currently underway at NIST in the development of quantum resilient encryption standards, and (ii) use of QKD in large-scale demonstrations relevant to commercial and security applications, including space communications. QKD provides an approach to post-quantum communications security that is based on quantum phenomena, not algorithmic complexity.
  • Quantum computing: Using small quantum computers in networked clusters or in hybrid architectures with classical computers.
  • Quantum networks: The use of quantum networks for long-range quantum communications.
  • Quantum sensing: Using quantum mechanics phenomena and devices for high-sensitivity and precision applications in sensing and communication, life sciences, and other fields.

The administration, through the National Quantum Coordinating Office, should establish funded competitions to improve the exchange of intellectual property and foster a common understanding across the government, industry, academic communities, and foreign institutions working on QIS.67

1    Joseph Johnson, “Annual number of data breaches and exposed records in the United States from 2005 to 2020,” Statista, March 3, 2021, accessed April 16, 2021, https://www.statista.com/statistics/273550/data-breaches-recorded-in-the-united-states-by-number-of-breaches-and-records-exposed/; Joseph Johnson, “Number of data breaches in the United States from 2013 to 2019, by industry,” Statista, March 9, 2021, https://www.statista.com/statistics/273572/number-of-data-breaches-in-the-united-states-by-business/.
2    U.S. Cyberspace Solarium Commission, United States of America Cyberspace Solarium Commission Report, March 2020, accessed March 26, 2021, https://www.solarium.gov/report.
3    Mission Support Center, “Cyber Threat and Vulnerability Analysis of the U.S. Electric Sector,” Mission Support Center Analysis Report, Idaho National Laboratory, August 2016, accessed March 26, 2021, https://www.energy.gov/sites/prod/files/2017/01/f34/Cyber%20Threat%20and%20Vulnerability%20Analysis%20of%20the%20U.S.%20Electric%20Sector.pdf.
4    Ken Thompson, “Reflections on Trusting Trust,” Communications of the ACM, Volume 27 (8) (August 1984): 761-763, accessed March 26, 2021, https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf.
5    Cybersecurity and Infrastructure Security Agency, “Election Infrastructure Cyber Risk Assessment,” Critical Infrastructure Security and Resilience Note, July 28, 2020, accessed March 26, 2021, https://www.cisa.gov/sites/default/files/publications/cisa-election-infrastructure-cyber-risk-assessment_508.pdf.
6    Internet Crime Complaint Center, Internet Crime Report 2020, Federal Bureau of Investigation, accessed March 26, 2021, https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf.
7    White House, President Barack Obama, “Presidential Policy Directive – Critical Infrastructure Security and Resilience, PPD-21,” February 12, 2013, accessed March 26, 2021, https://obamawhitehouse.archives.gov/the-press-office/2013/02/12/presidential-policy-directive-critical-infrastructure-security-and-resil.
8    The President’s National Infrastructure Advisory Council, Securing Cyber Assets: Addressing Urgent Cyber Threats to Critical Infrastructure, August 2017, accessed March 26, 2021, https://www.cisa.gov/sites/default/files/publications/niac-securing-cyber-assets-final-report-508.pdf.
9    “Executive Order 13870 of May 2, 2019: America’s Cybersecurity Workforce,” Federal Register, accessed March 26, 2021, https://www.federalregister.gov/documents/2019/05/09/2019-09750/americas-cybersecurity-workforce.
10    “Cybersecurity Supply/Demand Heat Map,” Cyberseek.org, accessed March 26, 2021, https://www.cyberseek.org/heatmap.html.
11    National Initiative for Cybersecurity Careers and Studies, “Workforce Framework for Cybersecurity (NICE Framework),” Cybersecurity and Infrastructure Security Agency, accessed March 26, 2021, https://niccs.cisa.gov/workforce-development/cyber-security-workforce-framework.
12    Aspen Institute, Principles for Growing and Sustaining the Nation’s Cybersecurity Workforce, November 2018, accessed March 26, 2021, https://www.aspeninstitute.org/wp-content/uploads/2018/11/Aspen-Cybersecurity-Group-Principles-for-Growing-and-Sustaining-the-Nations-Cybersecurity-Workforce-1.pdf.
13    “Cybersecurity Framework,” National Institute of Standards and Technology, accessed March 26, 2021, https://www.nist.gov/cyberframework/online-learning/five-functions.

Global science and technology leadership https://www.atlanticcouncil.org/content-series/geotech-commission/chapter-1/ Tue, 25 May 2021 22:56:57 +0000 https://www.atlanticcouncil.org/?p=392374 An in depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.


Report of the Commission on the Geopolitical
Impacts of New Technologies and Data

Chapter 1. Global science and technology leadership


The United States and like-minded nations, as well as private sector organizations, must continue to invest in and develop the multilateral mechanisms, academic and industrial capabilities, and human capital needed for continued leadership in key science and technology (S&T) areas. Such leadership is essential for national and economic security and for ensuring that technology is developed and deployed with democratic values and standards in mind. The global development of advanced technologies requires the United States to pursue, as strategic goals and in collaboration with allies and partners, leadership in select areas.1

Six broad areas of S&T are critical to national and economic security, as follows:2

  • Communications and networking, data science, and cloud computing: collectively provide the foundation for secure transmission of data for both the public and private sector and enable robust economies of ideas, resources, and talent. This critical area supports all aspects of a healthy digital economy domestically and internationally.
  • Artificial intelligence (AI), distributed sensors, edge computing, and the Internet of Things (IoT): add new capabilities for understanding changes in the world for both physical and digital environments and enhance human governance in key, defined areas.
  • Biotechnologies, precision medicine, and genomic technologies: collectively provide the foundation to heal and promote healthy individuals and communities, as well as to improve the performance of agricultural systems with regard to the reduction of atmospheric greenhouse gases, and to develop a system for early warning of emerging natural and human-produced risks such as outbreaks, bioterrorism, and environmental shocks.
  • Space technologies, undersea technologies, and new materials for extreme environments: collectively provide for commercial companies and nations around the world to deploy mega-constellations of satellites, or fleets of autonomous ocean platforms, with advanced, persistent surveillance and communications capabilities to monitor the planet, including its oceans and environment, for emerging risks.3
  • Autonomous systems, robotics, and decentralized energy methods: collectively provide the foundation to do work in dangerous or hazardous environments without risk to human lives, while at the same time augmenting human teams, potentially prompting long-term dislocations in national workforces, and requiring additional workforce talent for new technology areas.
  • Quantum information science (QIS), nanotechnology, and advanced microelectronics: collectively provide the foundation for solving classes of computational problems, next-generation manufacturing, new ways to monitor the trustworthiness of digital and physical supply chains, as well as potentially presenting new challenges to communications security that underpin effective governance and robust economies.

Participation by industry, academia, government labs, and US allies and partners will help ensure a fast pace of discovery and innovation. Achieving global S&T leadership also requires protecting intellectual property and proprietary information, and guiding technology sharing with other nations based on their adherence to shared standards and values for security and privacy.

Technology sharing with non-allied nations poses strategic risks. For example, sharing advanced findings and applications of AI may benefit one nation at the expense of another—AI-based image understanding algorithms could enhance remote sensing of military activities by commercial satellites. In other cases, new capabilities may benefit all nations, for example, better disease-testing technology.

Finding 1: The US National Strategy for Critical and Emerging Technologies requires an implementation plan to guide both domestic and international coordination to achieve global science and technology leadership.

The National Strategy for Critical and Emerging Technologies supports US national and economic security by promoting the National Security Innovation Base and by protecting the United States’ technological advantage. Priority actions include developing the S&T workforce, establishing technology norms and standards that reflect democratic values and interests, ensuring research and development (R&D) funding for priority areas, building strong partnerships with the private sector and with like-minded nations, and protecting the security of the technologies, their development, and how they are shared.4 A detailed implementation plan, coordinated across the US government, is needed.5

Finding 1.1: Achieving and sustaining technology leadership must be a long-term national priority.

To achieve the long-term goals of technology leadership in key areas, a close and continuing interaction between S&T development and national security policy is essential.

The National Strategy for Critical and Emerging Technologies must be accompanied by long-term S&T goals resulting in demonstrations of significant import, and detailed programmatic plans for achieving these goals. The breadth of these technologies and their interdependencies require that progress should be shared with allies and partners and involve public-private partnerships (PPPs) among government research centers, private industry, and academia. This approach can catalyze human capital development and accelerate innovation.

Finding 1.2: Private sector research and development exceeds that of the government in some areas that are important for national and economic security, underscoring the need for greater coordination.

The annual growth rate of government spending on domestic R&D for 2000-2017 places the United States sixth, at 4.3 percent, behind the European Union (EU), Germany, India, South Korea, and China (17.3 percent).6 The US government funds the largest share of basic research, while US industry funds the largest share of both applied research and development.7


Among the more important critical and emerging technologies are AI, quantum, cyber, digital infrastructure, and health/medical technologies, all areas in which private industry is growing. To strengthen US technology leadership, the United States must increase government R&D funding in critical areas and coordinate government and private industry R&D strategies.

Finding 1.3: Recent proposed legislation addresses policies for guiding permissible technology development and use.

Several countries are developing legislation to strengthen ethical practices underpinning data collection for AI algorithms, protect data privacy, and govern data rights.8

“Executive Order 13960 of December 3, 2020: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” establishes a set of principles governing the development and use of AI.9

A small sampling from recent, proposed US legislation includes the following ideas:

  • Require assessments of the impacts of automated decision-making systems, including AI systems. These assessments would evaluate their accuracy, bias, discrimination, privacy, and security.10
  • Recommend approaches that promote the development and use of AI “while protecting civil liberties, civil rights, and economic and national security.”11
  • Reinforce government regulations for protecting the privacy rights of individuals in terms of how data are collected, protected, used, and shared.
  • Establish standards governing the responsible use of data and emerging technologies that include prohibitions on the use of personal data and emerging technologies in a manner that discriminates based on protected classes.

The European Commission established a High-Level Expert Group on Artificial Intelligence that published Ethics Guidelines for Trustworthy AI in April 2019. These guidelines address human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity, nondiscrimination and fairness, societal and environmental well-being, and accountability.12

The newness of the technologies and their continuing evolution challenge the creation of internationally accepted, harmonized, and tested rules. In areas such as data privacy, harmonization of standards will require raising US standards. In other areas of Internet and technology governance, the United States must have a leadership role in determining international standards and rules.

Finding 1.4: Models for gaining technological leadership encourage innovation, focus on challenges concerning security or economic growth, organize governance, and draw from the global talent pool.

A recent analysis, Innovation Policies in the United States,13 discusses how these policies have changed over time, citing five models: “(i) Connected, challenge model, driven by societal challenges during World War II, where innovations are rapidly turned into capabilities, (ii) Basic science-focused, disconnected, decentralized model—the linear model during the Cold War, (iii) ‘Right-left’ translation model wherein the desired technologies motivate the basic science, (iv) Spanning the ‘valley of death’ model in which government initiatives helped bridge from basic research to the use of the innovations by industry, (v) Connected model in which societal needs connect innovation with the production of desired products.” The analysis concludes that “basic research must be complemented with additional institutional elements that reach much further down the innovation pipeline to development and later innovation stages.”

Proposed legislation introduced in the 116th Congress concerning AI research focused on convening “technical experts across academia, government, and industry to develop a detailed plan for how the United States can build, deploy, govern, and sustain a national AI research cloud.”14 Another model for research collaboration was included in proposed legislation which would “organize a coordinated national strategy for developing AI, establish and support collaborative ventures or consortia with public or private sector entities, and accelerate the responsible delivery of AI applications from government agencies, academia, and the private sector.”15 Both of these bills became law in Division E of the National Defense Authorization Act (NDAA): the Artificial Intelligence Initiative Act (Sections 5101-5105 of P.L.116-283) and the National AI Research Resource Task Force Act (Section 5106 of P.L.116-283).

The United States is a founding member of the Global Partnership on Artificial Intelligence (GPAI). “In collaboration with partners and international organizations, GPAI will bring together leading experts from industry, civil society, governments, and academia to collaborate across four Working Group themes: 1) Responsible AI; 2) Data Governance; 3) The Future of Work; and 4) Innovation & Commercialization,” according to a joint statement from the GPAI’s founding members.16

The US model for funding R&D allows for multiple, independent lines of inquiry (in QIS, for example17), but some coordination of international collaboration could help ensure that a diversity of approaches is fostered.

Approach 1: Focus the innovative work and talent on long-term capability demonstrations, while emphasizing democratic values.

The United States and like-minded nations must be successful in each of the critical technology areas, or risk a vulnerability affecting national security. Success includes investing in innovative work and talent linked to long-term capability demonstrations. A focused approach sets concrete capability goals, constructs and funds fast-paced programs, and undergoes regular review. Talent from many nations and groups will make essential contributions. In contrast with nondemocratic nations, the United States and its allies and partners possess democratic values that can empower this work.

Recommendation 1: Establish priorities, investments, standards, and rules for technology dissemination; develop them across government, private industry, and academia, and with allies and partners

Recommendation 1.1: Develop a National and Economic Security Technology Strategy.

To ensure the United States and its allies remain at the forefront of strategic S&T areas, the administration should develop a National and Economic Security Technology Strategy. The administration should create long-term S&T goals informed by assessments of foreign capabilities and plans. The National and Economic Security Technology Strategy should complement the National Security Strategy and draw upon the National Strategy for Critical and Emerging Technologies and other sources. The strategy should establish a long-term plan to direct government activities, incentivize private sector investments, enhance human capital, and develop capabilities in S&T that protect US national and economic security. The US Congress should conduct annual reviews of the milestone progress and budgets for these strategic S&T areas.

The strategy should also articulate a plan to establish a strategic technology ecosystem, including public-private partnerships, academia, industry, nonprofits, and others to accelerate technological development, support experimentation and pilot projects, and facilitate the application of new technologies to national and global challenges. Possible models include the Enduring Security Framework established by the National Security Agency (NSA), sector-specific consortia that include industry and academia, innovation labs that mature technology targeted at specific sectors, national laboratories developing large-scale test and evaluation infrastructure for advanced technology development, and focusing the National Science Foundation to address S&T.18 The strategy should articulate ways to leverage not just the US workforce, but also the global talent base, while seeking to grow and retain existing highly skilled technical talent in the United States. The strategy should outline an approach that ensures the results of the strategic technology ecosystem provide the greatest public benefit possible from government investments.

The strategy should specifically address the following technology areas, with the strategic S&T goal for each area in italics:

  1. Communications and networking, data science, and cloud computing: provide the foundation for trustworthy digital infrastructures.
  2. Artificial intelligence (AI), distributed sensors, edge computing, and the Internet of Things (IoT): testable, tunable, and trusted AI algorithms that are robust to limited, sparse, or corrupted data and require significantly less data, power, and time compared with today.
  3. Biotechnologies, precision medicine, and genomic technologies: field a global system for fast, automated detection, diagnoses, and discovery of treatments for emerging pathogens, bioterrorism, and other environmental shocks to the planet.
  4. Space technologies, undersea technologies, and new materials for extreme environments: monitor the entire planet pervasively and persistently, at high resolution and communicate the information in near-real time.
  5. Autonomous systems, robotics, and decentralized energy methods: develop coordinated protocols for testing modular systems and methods and for evaluating emergent behaviors.
  6. Quantum information science (QIS), nanotechnology, and advanced microelectronics: establish a national QIS infrastructure comprising research, development, computational, and testing programs, facilities, and skilled personnel; accelerate the operationalization of QIS technologies.

Recommendation 1.2: Establish a Global GeoTech Alliance and Executive Council.

To ensure coordination between the US government and private sector on key S&T issues, the administration should create a Global GeoTech Alliance and Executive Council comprised of US private sector representatives and government representatives from the National Security Council, the Intelligence Community, the Department of Defense (DoD), the Department of State, the Treasury Department, the Department of Commerce, and the Office of the United States Trade Representative. This group—the Global GeoTech Alliance and Executive Council—would advise on issues arising from emerging technologies and data capabilities, technology cooperation, and technology standard-setting efforts, such as those raised in this report, and could provide the existing President’s Intelligence Advisory Board with augmented membership and a honed focus on GeoTech issues of concern across sectors globally.

Recommendation 1.3: Strengthen international collaboration on science and technology.

The administration should develop a strategy and a new multilateral mechanism among like-minded and democratic countries to coordinate technology policy, standards, and development. This strategy should seek to coordinate strategic S&T goals and milestones for collaborations with US allies and partner nations and develop agreements for sharing information, data, and research results. The strategy should also establish a framework for facilitating technical and programmatic information exchanges, with the goal of identifying opportunities for collaboration on specific S&T projects.

The administration should also increase participation by the United States in the GPAI.19 The William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021 directs the United States to establish several national AI programs and organizations to “ensure continued US leadership in artificial intelligence and to lead the world in the development and use of trustworthy artificial intelligence systems in the public and private sectors.”20 This requires the United States to take a more active role in the GPAI—in GPAI leadership activities, in the multi-stakeholder experts group on AI strategy development, and in the formulation and execution of the research agenda that supports that group’s work. Interfacing with the EU in support of the new seven-year Horizon Europe S&T initiative is another potential avenue for collaboration.

Recommendation 1.4: Conduct annual reviews on how nations use technology—with a focus on privacy, civil liberties, and human rights; use the findings to guide international cooperation.

The administration should conduct an annual review that assesses the extent to which other nations use or develop S&T in ways that infringe upon the privacy, civil liberties, and human rights of their citizens, and undermine global peace and security. The results of the reviews should be used to help the United States prioritize cooperative efforts and facilitate coordination on S&T activities with other nations whose application of technology promotes peace, protects human rights, upholds the rule of law, and benefits global society. There is a recent proposal, for example, by the European Commission for a joint US-EU trade council.21 This could be one of the focal points of this approach.

Recommendation 1.5: Develop risk assessments of the potential for technology applications to violate civil rights or human rights, or to undermine security.

The administration should develop risk assessments22 for technology applications to determine their potential to violate human rights and civil liberties or to undermine security. The assessments also should identify ways to lessen the identified risks. The administration should develop an interagency process, involving the Department of Commerce, the DoD, the Department of State, the Office of the Director of National Intelligence, the Office of Science and Technology Policy, the National Institute of Standards and Technology, and the attorney general,23 to carry out these risk assessments. The processes, criteria, and metrics should be open, transparent, and consistent with relevant US trade, export, and import control laws.


Recommendation 1.6: Establish national-scale training and education programs to foster continuing technological leadership.

The administration should establish national-scale training and education programs to foster continuing technological leadership and to gain the strategic competitive advantage of being able to put advanced technologies to work quickly. The Department of Labor should establish a program that speeds up the matching of people to needed skills and rapidly trains individuals and companies in how to employ advanced technology capabilities. Current training methods cannot handle the fast-changing needs and numbers of students, and new mixtures of methods will evolve.24 To help society participate in deciding how new technologies are developed and used, the administration should establish a national-scale educational program to inform the public about the benefits, risks, and brittleness of critical and emerging technologies.

1    Democracy Technology Partnership Act, S. 604 — 117th Congress (2021-2022), 1st Session, accessed March 19, 2021, https://www.warner.senate.gov/public/_cache/files/c/9/c9502023-85b4-4f7d-90db-9045237da704/18C2CE128388C4EC06C87EE8E4CEFB76.democracy-technology-partnership-act-bill-text.pdf.
2    President’s Council of Advisors on Science and Technology, Recommendations for Strengthening American Leadership in Industries of the Future. A Report to the President of the United States of America, June 2020, https://science.osti.gov/-/media/_/pdf/about/pcast/202006/PCAST_June_2020_Report.pdf?la=en&hash=019A4F17C79FDEE5005C51D3D6CAC81FB31E3ABC; White House, “National Strategy for Critical and Emerging Technologies,” October 2020, accessed March 19, 2021, https://sesecuritycenter.org/national-strategy-for-critical-and-emerging-technologies/.
3    National Aeronautics and Space Administration, “Space Technology Grand Challenges,” December 2, 2010, accessed March 24, 2021, https://www.nasa.gov/pdf/503466main_space_tech_grand_challenges_12_02_10.pdf.
4    White House, “National Strategy,” 7-9.
5    US Government Accountability Office, DoD Critical Technologies: Plans for Communicating, Assessing, and Overseeing Protection Efforts Should Be Completed, GAO-21-158, January 2021, accessed April 16, 2021, https://www.gao.gov/assets/gao-21-158.pdf.
6    National Science Foundation, “The State of U.S. Science and Engineering 2020,” January 2020, accessed March 24, 2021, https://ncses.nsf.gov/pubs/nsb20201/global-r-d.
7    Congressional Research Service, “U.S. Research and Development Funding and Performance: Fact Sheet,” updated January 24, 2020, accessed March 26, 2021, https://fas.org/sgp/crs/misc/R44307.pdf; the National Academies defines federal S&T as essentially comprising funding categories 6.1 and 6.2. R&D is described as being more focused on application and development. Generally, government-funded S&T is dominated by academia and R&D is dominated by industry funding. For government-focused missions (e.g., NASA or DoD), the government funds industry directly for their R&D (either through contracts or independent R&D that is an allowable cost in contracts). This amount of R&D is still less than nongovernment industry R&D.
8    Law Library of the Library of Congress, Regulation of Artificial Intelligence in Selected Jurisdictions, January 2019, accessed March 26, 2021, https://www.loc.gov/law/help/artificial-intelligence/regulation-artificial-intelligence.pdf.
9    “Executive Order 13960 of December 3, 2020: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” Federal Register, accessed March 26, 2021, https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government.
10    Algorithmic Accountability Act of 2019, S. 1108 — 116th Congress (2019-2020), 1st Session, accessed March 26, 2021, https://www.wyden.senate.gov/imo/media/doc/Algorithmic%20Accountability%20Act%20of%202019%20Bill%20Text.pdf.
11    AI in Government Act of 2020, H.R. 2575 — 116th Congress (2019-2020), accessed April 16, 2021, https://www.congress.gov/bill/116th-congress/house-bill/2575/text.
12    European Commission, “On Artificial Intelligence – A European approach to excellence and trust,” White Paper, Brussels, 19.2.2020, COM(2020) 65 final, accessed March 26, 2021, https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.
13    Bhavya Lal, “Innovation Policies in the United States,” Science and Technology Policy Institute, Institute for Defense Analyses, Washington, DC, accessed March 26, 2021, https://gsdm.u-tokyo.ac.jp/file/170208_S2P2_Lal.pdf.
14    US Sen. Rob Portman (R-OH), Portman, Heinrich Propose National Strategy For Artificial Intelligence; Call For $2.2 Billion Investment In Education, Research & Development, press release, May 21, 2019, https://www.portman.senate.gov/newsroom/press-releases/portman-heinrich-propose-national-strategy-artificial-intelligence-call-22.
15    US Sens. Martin Heinrich (D-NM), Rob Portman (R-OH), and Brian Schatz (D-HI), in the 116th Congress sponsored the Artificial Intelligence Initiative Act (AI-IA), S. 1558, introduced in the Senate on May 21, 2019. Artificial Intelligence Initiative Act of 2019, S. 1558 — 116th Congress (2019-2020), https://www.congress.gov/bill/116th-congress/senate-bill/1558.
16    Department of State, “Joint Statement From Founding Members of the Global Partnership on Artificial Intelligence,” June 15, 2020, accessed March 26, 2021, https://www.state.gov/joint-statement-from-founding-members-of-the-global-partnership-on-artificial-intelligence/.
17    Subcommittee on Quantum Information Science under the Committee on Science of the National Science & Technology Council, National Strategic Overview for Quantum Information Science, September 2018, accessed March 26, 2021, https://www.quantum.gov/wp-content/uploads/2020/10/2018_NSTC_National_Strategic_Overview_QIS.pdf.
18    Endless Frontier Act, H.R. 6978 / S. 3832 — 116th Congress (2019-2020), https://www.aip.org/fyi/federal-science-bill-tracker/116th/endless-frontier-act, introduced in the 116th Congress.
19    “The Global Partnership on Artificial Intelligence,” website homepage accessed on March 26, 2021, https://www.gpai.ai/.
20    William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, 116th Congress (2019-2020), Public Law No. 116-283, https://www.congress.gov/bill/116th-congress/house-bill/6395.
21    European Commission, EU-US: A new transatlantic agenda for global change, press release, December 2, 2020, Brussels, accessed March 26, 2021, https://ec.europa.eu/commission/presscorner/detail/en/ip_20_2279.
22    Asena Baykal and Thorsten Benner, Risky Business, Rethinking Research Cooperation and Exchange with Non-Democracies, Strategies for Foundations, Universities, Civil Society Organizations, and Think Tanks, Global Public Policy Institute, October 2020, accessed March 26, 2021, https://www.gppi.net/media/GPPi_Baykal_Benner_2020_Risky_Business_final.pdf.
23    Bureau of Industry and Security, “Scope of Export Administration Regulations, Part 734,” Department of Commerce, accessed March 26, 2021, https://www.bis.doc.gov/index.php/documents/regulations-docs/2382-part-734-scope-of-the-export-administration-regulations-1/file.
24    Lee Rainie and Janna Anderson, “The Future of Jobs and Jobs Training,” Pew Research Center, May 3, 2017, accessed March 26, 2021, https://www.pewresearch.org/internet/2017/05/03/the-future-of-jobs-and-jobs-training/.

Report of the Commission on the Geopolitical Impacts of New Technologies and Data https://www.atlanticcouncil.org/content-series/geotech-commission/exec-summary/ Tue, 25 May 2021 22:56:20 +0000 https://www.atlanticcouncil.org/?p=392365 An in depth report produced by the Commission on the Geopolitical Impacts of New Technologies, making recommendations to maintain economic and national security and new approaches to develop and deploy critical technologies.


Report of the Commission on the Geopolitical
Impacts of New Technologies and Data

Conclusion, appendices, and acknowledgements


Executive summary

The advancing speed, scale, and sophistication of new technologies and data capabilities that aid or disrupt our interconnected world are unprecedented. While generations have relied consistently on technologies and tools to improve societies, we now are in an era where new technologies and data reshape societies and geopolitics in novel and even unanticipated ways. As a result, governments, industries, and other stakeholders must work together to remain economically competitive, sustain social welfare and public safety, protect human rights and democratic processes, and preserve global peace and stability.

Emerging technologies also promise new abilities to make our increasingly fragile global society more resilient. To sustain this progress, nations must invest in research, expand their digital infrastructures, and increase digital literacy so that their people can compete and flourish in this new era. Yet, at the same time, no nation or international organization has developed governance structures able to keep pace with the complex and destabilizing dynamics of these emerging technologies. Governments, especially democratic governments, must work to build and sustain trust in the algorithms, infrastructures, and systems that underpin society. The world must now work to understand how technology and data interact with society and how to implement solutions that address these challenges and grasp these opportunities. Maintaining both economic and national security and resiliency requires new ways to develop and deploy critical and emerging technologies, cultivate the needed human capital, build trust in the digital fabric with which our world will be woven, and establish norms for international cooperation.

The Commission on the Geopolitical Impacts of New Technologies and Data (GeoTech Commission) was established by the Atlantic Council in response to these challenges and seeks to develop recommendations to achieve these strategic goals. Specifically, the GeoTech Commission examined how the United States, along with other nations and global stakeholders, can maintain science and technology (S&T) leadership, ensure the trustworthiness and resiliency of physical and software/informational technology (IT) supply chains and infrastructures, and improve global health protection and wellness. The GeoTech Commission identified key recommendations and practical steps forward for the US Congress, the presidential administration, executive branch agencies, private industry, academia, and like-minded nations.

The GeoTech Decade

Data capabilities and new technologies increasingly exacerbate social inequality and impact geopolitics, global competition, and global opportunities for collaboration. The coming decade—the “GeoTech Decade”—must address the sophisticated but potentially fragile systems that now connect people and nations, and incorporate resiliency as a necessary foundational pillar of modern life. Additionally, the speed with which machines can make sense of large datasets, combined with the reach of worldwide communications networks, means that any event can escalate and cascade quickly across regions and borders—with the potential to further entrench economic inequities, widen disparities in access to adequate healthcare, and hasten exploitation of the natural environment. The coming years also will present new avenues for criminals and terrorists to do harm; for authoritarian nations to monitor, control, and oppress their people; and for diplomatic disputes to escalate to armed conflict not just on land, sea, and in the air, but also in space and cyberspace.

Domestically and internationally, the United States must promote strategic initiatives that employ data and new technologies to amplify the ingenuity of people, diversity of talent, strength of democratic values, innovation of companies, and the reach of global partnerships.

Geopolitical impacts of new technologies and data collections

Critical technologies that will shape the GeoTech Decade—and in which the United States and its allies must maintain global S&T leadership—can be grouped into six areas. All technologies in these categories will have broad—and interdependent—effects on people and the way they live and work, on global safety and security, and on the health of people and our planet.

  • Technologies that enable a digital economy: communications and networking, data science, and cloud computing: collectively provide the foundation for secure transmission of data for both the public and private sector and establish robust economies of ideas, resources, and talent.
  • Technologies for intelligent systems: artificial intelligence, distributed sensors, edge computing, and the Internet of Things: add new capabilities for understanding changes in the world in both physical and digital environments. The resulting data may supplement human intelligence, social engagements, and other sources of insight and analysis. In select, defined areas, intelligent systems may enhance human governance of complex systems or decisions.
  • Technologies for global health and wellness: biotechnologies, precision medicine, and genomic technologies: help create new fields of research, development, and practical solutions that promote healthy individuals and communities. Nations and health care organizations can use advances in genomics, or more broadly omics,1 to provide sentinel surveillance capabilities with respect to natural or weaponized pathogens. Sentinel surveillance2 can provide early detection, data about how a new element is appearing and growing, and information to guide our response.
  • Technologies that enlarge where people, enterprises, and governments operate: space technologies, undersea technologies: commercial companies and nations around the world are deploying mega-constellations of satellites, or fleets of autonomous ocean platforms, with advanced, persistent surveillance and communications capabilities. 
    Large-scale Earth observation data is important for monitoring the world’s atmosphere, oceans, and climate as a foundation for understanding evolving health and environmental risks and increasing the economic efficiencies in transportation, agriculture, and supply chain robustness.
  • Technologies that augment human work: autonomous systems, robotics, and decentralized energy methods: collectively provide the foundation to do work in dangerous or hazardous environments without risk to human lives, while at the same time augmenting human teams, potentially prompting long-term dislocations in national workforces, and requiring additional workforce talent for new technology areas.
  • Foundational technologies: quantum information science (QIS), nanotechnology, new materials for extreme environments, and advanced microelectronics: collectively provide the foundation for solving classes of computational problems, catalyzing next-generation manufacturing, setting standards, creating new ways to monitor the trustworthiness of digital and physical supply chains, as well as potentially presenting new challenges and opportunities to communications security that underpin effective governance and robust economies.

In addition to the technology itself, countries and organizations must learn to harness and protect the human element—by recruiting and upskilling workers with the skill sets needed today and training the next generation with the right knowledge for tomorrow. There is great global competition for digitally skilled workers, and some countries and companies invest heavily to develop or recruit this talent. When like-minded nations collaborate in S&T areas, their combined talent can produce greater benefits than would otherwise be possible. This requires governments to ensure their entire populations gain the needed digital literacy skills and have the means and opportunities to participate in the global digital economy. Making the whole greater than the sum of its parts underscores the global need for international collaboration.

The broad range of important S&T areas requires several forms of collaboration. In multiple key areas, such as QIS and advanced microelectronics, several nations already have significant government investments underway, and current results span a growing number of application areas. Collaborating on research and coordinating national investments among like-minded nations could benefit all participants. Fast-evolving technical capabilities, such as commercial space or autonomous systems, are supporting global industries that are developing and fielding new products. Effective collaboration relies on a broad ecosystem of domestic and foreign partners, including private sector entities. Collaboration will be limited in certain areas, for example, areas where, due to security considerations, the United States will develop capabilities in a self-reliant manner.

Summary of recommendations

To maintain national and economic security and competitiveness in the global economy, the United States and its allies must

  • Continue to be preeminent in key technology areas,
  • Take measures to ensure the trustworthiness and sustainability of the digital economy, the analog economy, and their infrastructures.

The GeoTech Commission provides recommendations in the following six areas for achieving these strategic objectives. A seventh area, the Future of Work, discusses ways to ensure the workforce acquires the skills needed for the digital economy, and that there is equitable access to opportunity.

To ensure that the United States and its allies remain the world leaders in S&T, the federal government, working with industry and stakeholders, should establish a set of prioritized strategic S&T objectives and align those objectives with specific timeframes. Additionally, the United States should establish a technology partnership among like-minded and democratic countries to coordinate actions around those objectives. The president and the US Congress should increase annual federal funding for research and development activities to secure US global leadership in critical new industries and technologies, with priorities determined for the largest impact challenges and gaps. To help people across the United States adapt to the realities of the future, the US government should establish programs to fund reskilling activities for workers displaced by changes brought about by the GeoTech Decade, seek new technologies and increase funding in support of efforts to close the broadband gap, and develop programs to improve the digital literacy of all Americans.

To strengthen cybersecurity, the administration should update the implementation plan for the National Cyber Strategy. The strategy should streamline how public and private sector entities monitor the security of their digital environments; encourage new networking, computing, and software designs that strengthen cyber defense; and raise priorities and activities for the cybersecurity of operational technology—the hardware and software that keeps equipment running—to match those of information technology.

In order to maintain the credibility of government and private industry, as well as to ensure prosperity, security, and stability in the coming data-driven epoch, the US government should establish new frameworks for data that incorporate security, accountability, auditability, transparency, and ethics. This means enacting measures that strengthen data privacy and security, establish transparency and ethics principles in how the government and private sector use data about people, and provide guidance on auditing how such data may be used.

To ensure that the United States remains attuned to threats and weaknesses in supply chains and critical systems that power its future, the US government should develop a federal mechanism to assess and prioritize the importance of specific supply chains and systems to the nation, considering physical as well as software/IT supply chains and systems. The government should develop procedures and allocate resources to achieve sufficient resiliency, based on these priorities, for supply chains and critical systems to ensure the economic and national security of the United States.

In order to protect the American people and environment from future threats, the US government should develop a global early warning system comprised of pandemic surveillance systems coupled with an early warning strategy, as well as a similar system aimed at providing early indicators of global environmental threats which could significantly impact the safety, security, and wellness of the nation.

The US government should foster the growth of the commercial US space industrial base and leverage the increasing capabilities of large commercial satellite constellations. This could increase space mission assurance and deterrence by eliminating mission critical, single-node vulnerabilities and distributing space operations across hosts, orbits, spectrum, and geography.

Table: Priority recommendations


1. Global science and technology leadership
  1.1 Develop a National and Economic Security Technology Strategy
  1.2 Establish Global GeoTech Alliance and Executive Council
  1.6 Establish national-scale training and education programs to foster continuing technological leadership
2. Secure data and communications
  2A.1 Review, update, and reestablish the implementation plan for the National Cyber Strategy
  2A.2 Establish effective and coordinated continuous monitoring for software and hardware used by the federal government
  2A.4 Ensure cybersecurity best practices, expertise, and assurance testing are widely available to industry and government entities
  2B.1 Establish, with other nations, a common set of demonstration milestones for quantum data and communications security
  2B.3 Establish a program to accelerate the operationalization of quantum information science technologies
  2B.4 Establish leading roles for the United States in setting international standards for data and communications security as quantum information science evolves
3. Enhanced trust and confidence in the digital economy
  3.1 Develop a US data privacy standard
  3.4 Empower an organization to audit trust in the digital economy
  3.5 Assess standards relating to the trustworthiness of digital infrastructure
  3.6 Educate public on trustworthy digital information
4. Assured supply chains and system resiliency
  4.2 Fund and broaden federal oversight of supply chain assurance to include all critical resources
  4.3 For the United States, the administration must develop a geopolitical deterrence strategy that addresses critical digital resources and digital supply chain assurance
  4.4 Conduct regular physical and software/IT supply chain assessments in the United States and with allies, focused on intersecting vulnerabilities with cascading consequences
5. Continuous global health protection and global wellness
  5.1 Develop a global early warning system comprised of pandemic surveillance systems coupled with an early warning strategy
  5.4 Increase resilience in medical supply chains
  5.5 Develop capacity building for vaccine and therapeutics discovery, development, and distribution
6. Assured space operations for public benefit
  6.2 Foster commercial space technologies of strategic importance and protect these from foreign acquisition
  6.3 Harden the security of commercial space industry facilities and space assets
7. Create the workforce for the GeoTech Decade, and equitable access to opportunity

Note: This table contains a subset of the full collection of recommendations.
Numbers refer to the recommendation sequence as discussed in the main chapters of the report.

Table of contents

Overview: Inflection points

Accelerating global connectedness—of people, supply chains, networks, economies, the environment, and other foundations of society—is changing how nations work together and compete. For example, the global spread of science and technology (S&T) knowledge has lessened the United States’ strategic advantage based on advanced technology. The global movement of people allows biological threats to spread worldwide, outpacing the world’s ability to respond. In the digital economy, the economic, governmental, and political parts of society are interconnected, with the potential for cybersecurity threats experienced in one context to reverberate in others.

This interconnectedness can lead to inflection points wherein current assumptions and practices are no longer valid or effective. Sources of strength or advantage can diminish. New vulnerabilities can be discovered, e.g., in global supply chains for hardware and software, and exploited. New approaches to protecting national interests in this globally connected world will rely, in many situations, on the cooperation and collaboration of like-minded nations to increase mutual knowledge and awareness. Without this focus, the detrimental aspects of globally connected systems and infrastructures will grow larger and become more urgent.

Each of the following areas is experiencing rapid change and each is critical for ensuring a secure and peaceful world. This overview discusses, for each chapter, the key issues, the opportunities and risks, and a characterization of what must be solved.

Chapter 1: Global science and technology leadership

The United States, with like-minded nations and partners, must collectively maintain continued leadership in key S&T areas to ensure national and economic security, and that technology is developed and deployed with democratic values and standards in mind. The United States must pursue, as strategic goals, establishing priorities, investments, standards, and rules for technology dissemination, developed across government, private industry, academia, and in collaboration with allies and partners. Collaboration among like-minded nations and partners is essential to the attainment of global S&T leadership.

Chapter 2: Secure data and communications

Sophisticated attacks on the software/information technology (IT) supply chains have led to significant breaches in the security of government and private networks, requiring a new strategy for cybersecurity. This centers on updating and renewing the National Cyber Strategy Implementation Plan with a focus on streamlining how public and private sector entities monitor their digital environments and exchange information about current threats. Beyond these current challenges, advances in quantum information science (QIS) lay the foundation for future approaches to securing data and communications, to include new ways to monitor the trustworthiness of digital and physical supply chains. With allies and partners, the United States should develop priority global initiatives that employ transformative QIS.

Chapter 3: Enhanced trust and confidence in the global digital economy

Diminished trust and confidence in the global digital economy can constrain growth;3 have destabilizing effects on society, governments, and markets; and lessen resilience against cascading effects of local, regional, or national economic, security, or health instabilities. Trust and confidence are diminished by practices that do not protect privacy or secure data, and by a lack of legal and organizational governance to advance and enforce accountability.4 Automation and artificial intelligence (AI), essential for digital economies, pose challenges for how open societies can organize themselves to amplify the strengths of both while minimizing their weaknesses and vulnerabilities. The United States should develop international standards and best practices for a trusted digital economy and should promote adherence to these standards.

Chapter 4: Assured supply chains and system resiliency

Both physical and digital supply chain vulnerabilities can have amplifying effects on the global economy and national security. Protecting against these diverse risks requires understanding which types of goods and sectors of the economy are critical, and how to construct supply chains that are inherently more adaptable, resilient, and automated. This requires assessing the state and characteristics of supplies, trade networks and policies, inventory reserves, and the ability to substitute products or processing facilities. The United States should conduct regular assessments in the United States and in allied countries to determine critical supply chain resilience and trust, implement risk-based assurance measures, establish coordinated cybersecurity acquisition across government networks, and create more experts. A critical resource is semiconductor chip manufacturing, for which the vulnerability of foreign suppliers and the long lead time and cost of new production facilities require the United States to invest in assured supply of semiconductor chips.

Chapter 5: Continuous global health protection and global wellness

Inherent to the disruption caused by the COVID-19 pandemic are three systemic problems: (i) global leaders acted slowly to contain the spread of the virus, (ii) global health organizations reacted slowly to contain the spread of the virus, and (iii) a mixture of factors caused the delayed response, including late recognition of the threat, slow incorporation of science and data into decision making, poor political will, and inconsistent messaging to citizens regarding the nature of the threat and what precautions to take. Though nations may adopt their own strategies to enhance resilience and future planning, a more global approach to this interconnected system will be essential. The United States and its allies should lead the effort to field and test new approaches that enable the world to accelerate the detection of biothreat agents, universalize treatment methods, and deploy mass remediation, through multiple global means. This is needed not only for recovering from the COVID-19 pandemic and future outbreaks, but also for human-developed pathogens.

Chapter 6: Assured space operations for public benefit

The world is transforming from space assets being dominated almost entirely by government to being largely dominated by the private sector.5 To maintain trusted, secure, and technically superior space operations, the United States must ensure it is a leading provider of needed space services and innovation in launch, on-board servicing, remote sensing, communications, and ground infrastructures. A robust commercial space industry not only enhances the resilience of the US national security space system by increasing space industrial base capacity, workforce, and responsiveness, but also advances a dynamic innovative environment that can bolster US competitiveness across existing industries, while facilitating the development of new ones. The United States should foster the development of commercial space technologies that can enhance national security space operations and improve agriculture, ocean exploration, and climate change activities, as well as align civilian and military operations and international treaties to support these uses.

Chapter 7: Future of work

People will power the GeoTech Decade, even as technology and data capabilities transform how people live, work, and operate as societies around the world. Successful societies will be those that find ways to augment human strengths with approaches to technology and data that are uplifting, while also working to minimize biases and other shortcomings of both humans and machines. Developing a digitally resilient workforce that can meet these challenges will require private and public sectors to take an all-of-the-above approach, embracing everything from traditional educational pathways to nontraditional avenues that include employer-led apprenticeships and mid-career upskilling. Ensuring that people are not left behind by the advance of technology—and that societies have the workforces they need to innovate and prosper—will determine whether the GeoTech Decade achieves its full promise of improving security and peace.

Appendices

The remainder of the report includes the following appendices that discuss the technical foundations and potential solutions for several important challenges:

Table: Summary of the GeoTech Commission’s findings and recommendations


1. Global science and technology leadership
Finding: The US National Strategy for Critical and Emerging Technologies requires an implementation plan to guide both domestic and international coordination to achieve global science and technology leadership.
Recommendation: Establish priorities, investments, standards, and rules for technology dissemination; develop across government, private industry, academia, and with allies and partners.

2. Secure data and communications
Finding: Expanding cybersecurity vulnerabilities require partnerships between the public and private sectors.
Recommendation: The United States should update and renew the National Cyber Strategy’s Implementation Plan with a focus on streamlining how public and private sector entities monitor their digital environments.
Finding: Long-term quantum information science priorities include international collaboration, which is limited by national and regional funding and data sharing policies.
Recommendation: With allies and partners, the United States should develop priority global initiatives that employ transformative quantum information science and catalyze the development of human capital and infrastructure for these and other next-generation quantum information science applications.

3. Enhanced trust and confidence in the digital economy
Finding: To enhance trust and confidence in artificial intelligence and other digital capabilities, technologies must objectively meet the public’s needs for privacy, security, transparency, and accountability.
Recommendation: Develop international standards and best practices for a trusted digital economy that accommodate national rules and regulations, streamline the process of independently assessing adherence to these standards.

4. Assured supply chains and system resiliency
Finding: Resilient, trusted supply chains require defense, diversification, and reinvention.
Recommendation: Conduct regularized assessments in the United States and in allied countries to determine critical supply chain resilience and trust, implement risk-based assurance measures. Establish coordinated cybersecurity acquisition across government networks and create more experts.

5. Continuous global health protection and global wellness
Finding: There is a need for a continuous biological surveillance, detection, and prevention capability.
Recommendation: Field and test new approaches that enable the world to accelerate the detection of biothreat agents, to universalize treatment methods, and to engage in mass remediation, through multiple global means.

6. Assured space operations for public benefit
Finding: The US commercial space industry can increase its role in supporting national security.
Recommendation: Foster the development of commercial space technologies and develop a cross-agency strategy and approach to space that can enhance national security space operations and improve agriculture, ocean exploration, and climate change activities; align both civilian and military operations, and international treaties to support these uses.

7. Future of work
Create the workforce for the GeoTech Decade, and equitable access to opportunity.

Table: List of all recommendations of the Commission in abridged form


Recommendations are grouped into four columns: Strategy; Governance & leadership; Capabilities; and International allies.

1. Global science and technology leadership
Strategy: 1.1 Develop National and Economic Security Technology Strategy
Governance & leadership: 1.2 Establish Global GeoTech Alliance
Capabilities: 1.4 Review nations’ use of technology with focus on privacy, civil liberties, rights; 1.5 Assess risks of technology applications ability to violate rights
International allies: 1.3 Strengthen S&T collaboration; 1.6 Establish training, education programs to foster technology leadership

2. Secure data and communications
Strategy: 2A.1 Strengthen National Cyber Strategy Implementation Plan; 2B.2 Conduct QIS R&D focused on digital economy issues
Governance & leadership: 2A.3 Bolster compliance with NIST guidance for continuous monitoring; 2A.4 Ensure cybersecurity expertise, testing are widely available
Capabilities: 2A.2 Coordinate gov’t H/W, S/W monitoring; 2B.3 Accelerate QIS technologies operationalization; 2B.5 Establish national QIS infrastructure
International allies: 2B.1 Establish shared quantum data and communications security milestones; 2B.4 Set international data/communications standards

3. Enhanced trust and confidence in the digital economy
Strategy: 3.5 Assess digital infrastructure trustworthiness standards; 3.6 Educate public on trustworthy digital information
Governance & leadership: 3.1 Develop a US data privacy standard; 3.4 Empower an organization to audit trust in the digital economy
Capabilities: 3.3 Create measures and standards for digital economy trust; 3.7 Demonstrate AI improvements to delivery of public- and private-sector services
International allies: 3.2 Develop privacy-preserving technologies for the digital economy; 3.8 Produce AI ethical, social, trust, and governance assessment framework

4. Assured supply chains and system resiliency
Strategy: 4.3 Develop a geopolitical cyber deterrence strategy for critical digital resources
Governance & leadership: 4.2 Broaden federal oversight of supply chain assurance
Capabilities: 4.1 Identify and collect critical resource data
International allies: 4.4 Assess physical and software/IT supply chain with allies

5. Continuous global health protection and global wellness
Strategy: 5.1 Launch a global pandemic surveillance and warning system
Governance & leadership: 5.2 Reestablish extant pandemic monitoring; 5.3 Prioritize privacy protections in pandemic surveillance
Capabilities: 5.5 Develop vaccine, therapeutics capacity for discovery, development, distribution; 5.6 Develop rapid responses to unknown pathogens
International allies: 5.4 Increase medical supply chain with allies

6. Assured space operations for public benefit
Strategy: 6.1 Foster public benefits via federal space investments
Governance & leadership: 6.3 Harden security of commercial space industry facilities and space assets
Capabilities: 6.2 Foster and protect strategic space tech; 6.5 Develop technologies for mega-constellation monitoring satellites
International allies: 6.4 Establish conformance of commercial space systems to multinational agreements

7. Future of work
Create the workforce for the GeoTech Decade, and equitable access to opportunity.


1    Omics technologies are primarily aimed at the universal detection of genes (genomics), mRNA (transcriptomics), proteins (proteomics), and metabolites (metabolomics) in a specific biological sample.
2    A sentinel surveillance system is used to obtain data about a particular disease that cannot be obtained through a passive system such as summarizing standard public health reports. Data collected in a well-designed sentinel system can be used to signal trends, identify outbreaks, and monitor disease burden, providing a rapid, economical alternative to other surveillance methods. Source: “Immunization Analysis and Insights,” World Health Organization, accessed March 19, 2021, https://www.who.int/teams/immunization-vaccines-and-biologicals/immunization-analysis-and-insights/surveillance/surveillance-for-vpds.
3    Congressional Research Service, Digital Trade and U.S. Trade Policy, May 21, 2019, 11, accessed March 19, 2021, https://crsreports.congress.gov/product/pdf/R/R44565; in 2015, the Department of Commerce launched a Digital Economy Agenda, Alan B. Davidson, “The Commerce Department’s Digital Economy Agenda,” November 9, 2015, accessed March 19, 2021, https://2014-2017.commerce.gov/news/blog/2015/11/commerce-departments-digital-economy-agenda.html. This identifies four pillars: promoting a free and open Internet worldwide; promoting trust online; ensuring access for workers, families, and companies; and promoting innovation.
4    Philippe Amon, “Toward a New Economy of Trust” in Revitalizing the Spirit of Bretton Woods: 50 Perspectives on the Future of the Global Economic System (Washington, DC: Bretton Woods Committee), July 2019, accessed March 19, 2021, https://www.brettonwoods.org/BW75/compendium-release.
5    Simonetta Di Pippo, “Space Technology and the Implementation of the 2030 Agenda,” UN Chronicle 55 (4) (January 2019): 61-63, accessed April 16, 2021, https://www.un.org/en/chronicle/article/space-technology-and-implementation-2030-agenda; Matt Weinzierl and Mehak Sarang, “The Commercial Space Age Is Here,” Harvard Business Review, February 12, 2021, accessed April 16, 2021, https://hbr.org/2021/02/the-commercial-space-age-is-here; Matt Weinzierl, “Space, the Final Economic Frontier,” Journal of Economic Perspectives 32 (2) (Spring 2018): 173-192, accessed April 16, 2021, https://www.hbs.edu/ris/Publication%20Files/jep.32.2.173_Space,%20the%20Final%20Economic%20Frontier_413bf24d-42e6-4cea-8cc5-a0d2f6fc6a70.pdf; KPMG, 30 Voices on 2030: The future of space: Communal, commercial, contested, May 2020, accessed April 16, 2021, https://assets.kpmg/content/dam/kpmg/au/pdf/2020/30-voices-on-2030-future-of-space.pdf.

The post Report of the Commission on the Geopolitical Impacts of New Technologies and Data appeared first on Atlantic Council.

]]>
Launching 26 May 2021: Report of the Commission on the Geopolitical Impacts of New Technologies and Data https://www.atlanticcouncil.org/content-series/geotech-commission/launch-page/ Mon, 24 May 2021 17:42:40 +0000 https://www.atlanticcouncil.org/?p=394483 On the early morning of 26 May 2021, the Report of the Commission on the Geopolitical Impacts of New Technologies and Data will launch both its interactive report website and downloadable report. Above are two links to our planned events that day.

The post Launching 26 May 2021: Report of the Commission on the Geopolitical Impacts of New Technologies and Data appeared first on Atlantic Council.

]]>

 

Report of the Commission on the Geopolitical Impacts of New Technologies and Data

Conclusion, appendices, and acknowledgements

On the early morning of 26 May 2021, the Report of the Commission on the Geopolitical Impacts of New Technologies and Data will launch both its interactive report website and downloadable report. Above are two links to our planned events that day.

The post Launching 26 May 2021: Report of the Commission on the Geopolitical Impacts of New Technologies and Data appeared first on Atlantic Council.

]]>
Computing to win: Addressing the policy blind spot that threatens national AI ambitions https://www.atlanticcouncil.org/blogs/new-atlanticist/computing-to-win-addressing-the-policy-blind-spot-that-threatens-national-ai-ambitions/ Thu, 29 Apr 2021 20:52:59 +0000 https://www.atlanticcouncil.org/?p=384100 Policymakers must make specialized hardware and software a core component of their strategic planning in order to fully realize the economic windfall of AI.

The post Computing to win: Addressing the policy blind spot that threatens national AI ambitions appeared first on Atlantic Council.

]]>
Artificial intelligence (AI) is causing significant structural changes to global competition and economic growth. AI may generate trillions of dollars in new value over the next decade, but this value will not be easily captured or evenly distributed across nations. Much of it will depend on how governments invest in the underlying computational infrastructure that makes AI possible.

Yet early signs point to a blind spot—a lack of understanding, measurement, and planning. It comes in the form of “compute divides” that throttle innovation across academia, startups, and industry. Policymakers must make AI compute a core component of their strategic planning in order to fully realize the anticipated economic windfall.

What is AI compute?

AI compute refers to a specialized stack of hardware and software optimized for AI applications or workloads. This “computer” can be located and accessed in different ways (public clouds, private data centers, individual workstations, etc.) and leveraged to solve complex problems across domains from astrophysics to e-commerce to autonomous vehicles.

Conventional information technology (IT) infrastructure is now widely available as a utility through public cloud service providers. This idea of “infrastructure as a service” includes computing for AI, so the public cloud is duly credited with democratizing access. At the same time, it has created a state of complacency that AI compute will be there when we need it. AI, however, is not the same as IT.

Today, when nations plan for AI, they gloss over AI compute, with policies focused almost exclusively on data and algorithms. No government leader can answer three fundamental questions: How much domestic AI compute capacity do we have? How does this compare to other nations? And do we have enough capacity to support our national AI ambitions? This lack of uniform data, definitions, and benchmarks leaves government leaders (and their scientific advisors) unequipped to formulate a comprehensive plan for AI compute investments.

Recognizing this blind spot, the Organisation for Economic Cooperation and Development (OECD) recently established a new task force to tackle the issue. “There’s nothing that helps our member countries assess what [AI compute] they need and what they have, and so some of them are making large but not necessarily well-informed investments” in AI compute, Karine Perset, head of the OECD AI Policy Observatory, told VentureBeat in January.

Measuring domestic AI compute capacity is a complex task compounded by a paucity of good data. While there are widely accepted standards for measuring the performance of individual AI systems, they use highly technical metrics and don’t apply well to nations as a whole. There is also the nagging question of capacity versus effective use. What if a nation acquires sufficient AI compute capacity but lacks the skills and ecosystem to effectively use it? In this case, more public investment may not lead to more public benefit.

Nonetheless, more than sixty national AI strategies were formulated and published from June 2017 to December 2020. These plans average sixty-five pages and coalesce around a common set of topics including transforming legacy industries, expanding opportunities for AI education, advancing public-sector AI adoption, and promoting responsible or trustworthy AI principles. Many national plans are aspirational and not detailed enough to be operational.

In conducting a survey of forty national AI plans, and analyzing the scope and depth of content around AI compute, we found that on average national AI plans had 23,202 words while sections on AI compute averaged only 269 words. Put simply, 98.8 percent of the content of national AI strategies focuses on vision, and only 1.2 percent covers the infrastructure needed to execute this vision.
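
For readers checking the arithmetic, the 1.2 percent figure follows directly from the word counts reported above:

    269 / 23,202 ≈ 0.0116, i.e., roughly 1.2 percent of the average plan’s text.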

Engine for economic growth

This would be the equivalent of a national transportation strategy that devotes less than 2 percent of its recommendations to roads, bridges, and highways. Just like transportation investments, AI compute capacity will be a crucial driver of the future wealth of nations. And yet, national AI plans generally exclude detailed recommendations on this topic. There are exceptions, of course. France, Norway, India, and Singapore had particularly comprehensive sections on AI infrastructure, some with detailed recommendations for AI compute requirements, performance, and system size.

Exponential growth in computational power has led to astonishing improvements in AI capabilities, but it also raises questions of inequality in compute distribution. Fairness, inclusion, and ethics are now center stage in AI policy discussions—and these apply to compute, even though it is often ignored or relegated to side workshops at major conferences. Unequal access to computational power as well as the environmental cost of training computationally complex models require greater attention.

Some of the more popular AI models are large neural networks that run on state-of-the-art machines. For example, AlphaGo Zero and GPT-3 require millions of dollars in AI compute. This has led to AI research being dominated and shaped by a few actors mostly affiliated with big tech companies or elite universities. Governments may have to step up and reduce the compute divide by developing “national research clouds.” Initiatives in Europe and China are underway to develop indigenous high-performance computing (HPC) technologies and related computing supply chains that reduce their dependence on foreign sources and promote technology sovereignty.

Governments in the United States, Europe, China, and Japan have been making substantial investments in the “exascale race,” a contest to develop supercomputers capable of one billion billion (10^18) calculations per second. The rationale behind investments in HPC is that the benefits are now going beyond scientific publications and prestige, as computing capabilities become a necessary instrument for scientific discovery and innovation. The machines are the engine for economic prosperity.

What can policymakers do?

Understanding compute requirements is a non-trivial measurement challenge. At a micro level, it is important to scientifically understand the relative contribution of AI compute to driving AI progress on, for example, natural-language understanding, computer vision, and drug discovery. At a macro level, nations and companies need to assess compute requirements using a data-driven approach that accounts for future industrial pathways. Wouldn’t it be eye-opening to benchmark how much compute power is being used at the company or national level and to help evaluate future compute needs?

In 2019, Stanford University organized a workshop with over one hundred interdisciplinary experts to better understand opportunities and challenges related to measurement in AI policy. The participants unanimously agreed that “growth in computational power is leading to measurable improvements in AI capabilities, while also raising issues of the efficiency of AI models and the inherent inequality of compute distribution.” Getting measurement right will be a stepping stone to addressing this blind spot in national AI policymaking. Going forward, we recommend that governments pivot to develop a national AI compute plan with detailed sections on capacity, effectiveness, and resilience.

  • Capacity for AI compute exists along a continuum. This includes public cloud, sovereign cloud (government-owned or controlled), private cloud, enterprise data centers, AI labs/Centers of Excellence, workstations, and “edge clusters” (small AI supercomputers situated outside a data center). Understanding the current state of AI compute capacity across this continuum will inform public priorities and investments. Partnerships with public cloud service providers expand access to AI compute and should be viewed as complementary to (not in competition with) investments in domestic AI compute capacity. This hybrid approach will help close compute divides. Over the longer term, capacity planning should consider the shift from data center to edge computing. For example, self-driving trucks will be powered by on-board AI supercomputers, and it may be possible to run AI workloads on fleets of self-driving trucks, leveraging them as a “virtual” AI supercomputer when they are parked and garaged overnight. (A toy inventory along this continuum is sketched after this list.)
  • Effectiveness is achieved through a multitude of initiatives, from STEM education to open data policies. Examples include National Youth AI Challenges, industry or government hackathons, mid-career re-training programs, boot camps, and professional certification courses. Diversity across all these programs is key to promoting inclusiveness and creating a foundation for trustworthy AI.
  • Resilience means nations should have an AI compute continuity plan, and possibly a strategic reserve, to ensure that mission-critical public AI research and governance functions can persist. It’s also important to focus on the carbon impact of national AI infrastructure, given that large-scale computing exacts an increasingly heavy toll on the environment and hence the broader economy. An optimal strategy would be a hybrid, multi-cloud model that blends public, private, and government-owned or controlled (sovereign) infrastructure, with accountability to encourage “Green AI.”
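
To make the capacity and resilience points concrete, below is a minimal, purely illustrative sketch (in Python) of the kind of national AI-compute inventory such a plan implies. The categories follow the continuum described above; every petaFLOPS figure and the “domestically controlled” flag are hypothetical assumptions for illustration, not measurements of any country.

    # Illustrative only: a toy national AI-compute inventory along the continuum
    # described above. All capacity figures are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class ComputeSegment:
        name: str
        petaflops: float               # assumed aggregate AI compute, in petaFLOPS
        domestically_controlled: bool  # assumed ownership/control status

    inventory = [
        ComputeSegment("public cloud (foreign-owned regions)", 800.0, False),
        ComputeSegment("sovereign cloud", 150.0, True),
        ComputeSegment("private cloud / enterprise data centers", 300.0, True),
        ComputeSegment("AI labs / centers of excellence", 120.0, True),
        ComputeSegment("workstations and edge clusters", 60.0, True),
    ]

    total = sum(s.petaflops for s in inventory)
    domestic = sum(s.petaflops for s in inventory if s.domestically_controlled)

    print(f"Total estimated AI compute: {total:,.0f} petaFLOPS")
    print(f"Domestically controlled share: {domestic / total:.0%}")

Even a toy inventory like this makes the measurement questions explicit: what counts as a segment, how capacity is expressed, and how much of it a government could actually rely on in a crisis.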

The completeness of a national AI strategy forecasts that nation’s ability to compete in the digital global economy. Few national AI strategies, however, reflect a robust understanding of domestic AI compute capacity, how to use it effectively, and how to structure it in a resilient manner. Nations need to take action by measuring and planning for the computational infrastructure needed to advance their AI ambitions. The future of their economies is at stake.

Saurabh Mishra is an economist and former researcher at Stanford University’s Institute for Human-Centered Artificial Intelligence.

Keith Strier is the chair of the AI Compute Taskforce at the Organization for Economic Cooperation and Development, and vice president for Worldwide AI Initiatives at the NVIDIA corporation.

Further reading

The post Computing to win: Addressing the policy blind spot that threatens national AI ambitions appeared first on Atlantic Council.

]]>
Event recap | Data science and social entrepreneurship https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-coordinating-data-privacy-public-interest/ Wed, 24 Mar 2021 19:04:00 +0000 https://www.atlanticcouncil.org/?p=370517 An episode of the GeoTech Hour featuring data scientists and entrepreneurs who discuss how to employ tech for good.

The post Event recap | Data science and social entrepreneurship appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On this episode of the weekly GeoTech Hour, the GeoTech Center is returning to the fourth episode of the Data Salon Series, hosted in partnership with Accenture. This episode focuses on the challenges and opportunities of employing data for social good, and how entrepreneurship can fill a unique gap to ensure sound business practices and ethics concerning how data is used.

Around the world, scores of individuals and organizations work to create a better reality for their communities, their nations, and the world. Yet, with so many players in the field, it is often difficult to coordinate between different streams of public, private, and nongovernmental data seeking to combat overlapping problems. During this episode, panelists discuss their efforts and outline methods to connect data with the organizations that need it without exposing the personal information of anyone involved.

Featuring


Valeria Budinich

Scholar-in-Residence, Legatum Center
MIT’s Sloan School of Management

Derry Goberdhansingh
CEO
Harper Paige

Bevon Moore
CEO
Elevate U

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council


The post Event recap | Data science and social entrepreneurship appeared first on Atlantic Council.

]]>
Event recap | Coordinating data privacy and the public interest https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-coordinating-data-privacy/ Wed, 17 Mar 2021 19:48:00 +0000 https://www.atlanticcouncil.org/?p=363793 Data usage and the employment of data trusts have maximized individual privacy and private sector benefits. Both the government and the private sector are working towards developing strategies that emphasize individual privacy more than ever before, as the public continues to express greater interest in protecting their data. However, few institutions have landed upon successful solutions in practice that can protect user privacy while allowing for the high levels of analysis they have come to expect. As our digital landscape continues to evolve, panelists in this episode of the GeoTech Hour discuss intentional policy and design choices that could allow for greater data ownership within people-centered structures.

The post Event recap | Coordinating data privacy and the public interest appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

On this episode of the weekly GeoTech Hour, the GeoTech Center is returning to the third episode of the Data Salon Series, hosted in partnership with Accenture. This episode focuses on data usage and employing data trusts to maximize individual privacy and private sector benefits.

The panelists discuss how governments and the private sector alike are working to develop strategies that emphasize individual privacy more than ever before, as the public continues to express greater interest in protecting their data. However, few institutions have landed upon successful solutions in practice that can protect user privacy while allowing for the high levels of analysis (including machine or AI-enabled learning) they have come to expect. As our digital landscape continues to evolve, it is time to consider what intentional policy and design choices could allow for greater data ownership within people-centered structures.

This recording will be available here, on the Atlantic Council’s YouTube channel, or on the GeoTech Center’s Twitter.

Featuring

Dr. Divya Chander, MD, PhD
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Krista Pawley
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council


The post Event recap | Coordinating data privacy and the public interest appeared first on Atlantic Council.

]]>
Event recap | Artificial intelligence, the internet, and the future of data: Where will we be in 2045? https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-artificial-intelligence-2045/ Wed, 10 Mar 2021 21:02:00 +0000 https://www.atlanticcouncil.org/?p=362871 An episode of the GeoTech Hour where panelists look towards the future of artificial intelligence, discussing the GeoTech Decade ahead and beyond to 2045.

The post Event recap | Artificial intelligence, the internet, and the future of data: Where will we be in 2045? appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

This special edition of the GeoTech Hour pulls from a keynote address originally provided as an AI World Society Distinguished Lecture at the United Nations Headquarters on United Nations Charter Day June 26th, 2019 by Dr. David Bray, the current inaugural director for the GeoTech Center.

The address looks towards 2045: rapid technological change, global questions of governance, and the future of human co-existence. Made relevant even more by the events of 2020, this video will set the stage for a second special GeoTech Hour segment celebrating our first anniversary on March 11, 12:00 – 1:00 p.m.

Featuring

David Bray, PhD
Director, GeoTech Center
Atlantic Council


The post Event recap | Artificial intelligence, the internet, and the future of data: Where will we be in 2045? appeared first on Atlantic Council.

]]>
Event recap | Synthetic data, privacy, and the future of trust https://www.atlanticcouncil.org/blogs/geotech-cues/synthetic-data-privacy-trust/ Wed, 24 Feb 2021 20:29:18 +0000 https://www.atlanticcouncil.org/?p=357495 A live GeoTech Hour where panelists discussed artificial intelligence and how to address the legal and ethical privacy concerns associated with synthetic data.

The post Event recap | Synthetic data, privacy, and the future of trust appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour series here.

Event description

Over the last decade, the business of data has disrupted nearly every business category with its promise of technological, industrial, and human advancement. Data continues to captivate our interest as entrepreneurs, executives, and policymakers for its potential to democratize the next wave of productivity with artificial intelligence and machine-to-machine advancements. To advance this wave of productivity, new models of data have been invented, such as synthetic data.

As its name suggests, synthetic data is completely artificial and offers the promise of both usefulness and privacy. Artificial intelligence that is trained on real-life information often contains a baked-in bias: algorithmic decision-making in fields such as criminal justice and credit scoring shows evidence of racial discrimination. The promise of synthetic data allows organizations and governments to overcome geographical, resource, and political barriers. It can be applied to solving some of the world’s biggest problems, from international medical research and fairness in lending to reducing fraud and money laundering. Gartner estimates that by 2022 over 25 percent of training data for AI will be synthetically generated. It is already being used in healthcare, banking, crime detection, manufacturing, telecom, retail, and several other fast-moving industries to accelerate learning.

However, its usefulness hinges on privacy: anybody utilizing synthetic data should be able to reach the same statistical conclusions as they would from the true data, without being able to identify any individual’s contribution.
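
To illustrate the underlying idea, here is a minimal sketch, assuming a toy two-column numeric dataset and Python’s NumPy library: a simple statistical model is fitted to the “real” records, and only records sampled from that model are released. Production synthetic-data systems use far more sophisticated generators (and formal privacy guarantees such as differential privacy); this sketch only shows the principle that aggregate statistics survive while individual records are never shared.

    # Minimal illustration of the synthetic-data principle: release samples from a
    # model fitted to real data, never the real records themselves.
    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Stand-in for sensitive "real" data, e.g., (income, age) pairs.
    real = rng.multivariate_normal(
        mean=[52_000, 41],
        cov=[[9_000_000, 12_000], [12_000, 90]],
        size=5_000,
    )

    # Fit a simple statistical model to the real data...
    mean_est = real.mean(axis=0)
    cov_est = np.cov(real, rowvar=False)

    # ...and publish only synthetic records drawn from that model.
    synthetic = rng.multivariate_normal(mean_est, cov_est, size=5_000)

    # An analyst working from the synthetic release should reach similar
    # statistical conclusions without ever seeing a real individual's record.
    print("real means:     ", np.round(real.mean(axis=0), 1))
    print("synthetic means:", np.round(synthetic.mean(axis=0), 1))
    print("real corr:      ", round(float(np.corrcoef(real, rowvar=False)[0, 1]), 3))
    print("synthetic corr: ", round(float(np.corrcoef(synthetic, rowvar=False)[0, 1]), 3))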

On this episode of the GeoTech Hour, which took place on Wednesday, February 24, at 12:00 p.m. ET, experts discussed how, if the privacy thresholds can be legally and ethically addressed, synthetic data could be the best way to safely unlock the potential of the data economy.

Featuring

Jacqueline Musiitwa
Research Associate, China, Law & Development Project
University of Oxford

Krista Pawley
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Michael Capps
CEO
Diveplane

Steven Tiell
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Stuart Brotman
Howard Distinguished Endowed Professor of Media Management and Law, University of Tennessee, Knoxville; International Advisory Council member, APCO Worldwide

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council


The post Event recap | Synthetic data, privacy, and the future of trust appeared first on Atlantic Council.

]]>
Event recap | The geopolitics of emerging tech during the pandemic https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-geopolitics-of-emerging-tech-pandemic/ Wed, 10 Feb 2021 18:36:00 +0000 https://www.atlanticcouncil.org/?p=352680 A GeoTech Hour with panelists sharing insights on lessons learned, ongoing challenges, and requisite next steps to be taken when considering the intersection of geopolitics, modern technologies, and the COVID-19 pandemic.

The post Event recap | The geopolitics of emerging tech during the pandemic appeared first on Atlantic Council.

]]>

Find the full GeoTech Hour Series here.

Event description

Corporations and governments alike continue to struggle with technology policy, especially under the strain of a global pandemic that struck at a moment when the internet was mature enough to alleviate many of COVID’s harms even as it faced novel geopolitical challenges to its design and use. This turbulence has not only tested every facet of the digital world and its ability to handle crises but also significantly altered international relations. Many countries are at a crossroads when it comes to emerging technologies and security during a global pandemic.

On this episode of the GeoTech Hour on Wednesday, February 10, experts shared insights on lessons learned, ongoing challenges, and requisite next steps that can be taken when considering the intersection of geopolitics, modern technologies, and the COVID-19 pandemic.

To take a closer look at previous work on transatlantic tech relationships, check out the recap of the partially public, partially private virtual event  held by the Embassy of Finland and the Atlantic Council GeoTech Center in December.

Featuring

Divya Chander, PhD MD
Nonresident Senior Fellow, GeoTech Center
Atlantic Council

Charina Chou, PhD
Global Policy Lead for Emerging Technologies
Google

Andrea Little Limbago, PhD
Vice President, Research and Analysis 
Interos Inc.  

Antti Niemela
Head of Section for Sustainable Growth and Commerce
Embassy of Finland in Washington, D.C.  

Hosted by

David Bray, PhD
Director, GeoTech Center
Atlantic Council


The post Event recap | The geopolitics of emerging tech during the pandemic appeared first on Atlantic Council.

]]>
Cole’s “Burn In” named to best sci-fi of 2020 list by Polygon https://www.atlanticcouncil.org/insight-impact/in-the-news/coles-burn-in-named-to-best-sci-fi-of-2020-list-by-polyon/ Sun, 10 Jan 2021 15:20:50 +0000 https://www.atlanticcouncil.org/?p=338521 Polygon, a video game publication, named Burn In, co-authored by Forward Defense non-resident senior fellow August Cole, to its list of top sci-fi and fantasy books of 2020. Burn In is a thriller at the intersection of robotics, artificial intelligence, and security.

The post Cole’s “Burn In” named to best sci-fi of 2020 list by Polygon appeared first on Atlantic Council.

]]>
Polygon, a video game publication, named Burn In, co-authored by Forward Defense non-resident senior fellow August Cole, to its list of top sci-fi and fantasy books of 2020. Burn In is a thriller at the intersection of robotics, artificial intelligence, and security.

Singer and Cole come from the policy and think tank worlds, and look at not only the potential threats that our current technological lives bring, but how the growing white nationalist movement seems poised to take advantage of those problems.

Andrew Liptak, Polygon
Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The post Cole’s “Burn In” named to best sci-fi of 2020 list by Polygon appeared first on Atlantic Council.

]]>
Huawei’s push in Russia exploits Kremlin fears of Western technology https://www.atlanticcouncil.org/blogs/new-atlanticist/huaweis-push-in-russia-exploits-kremlin-fears-of-western-technology/ Wed, 18 Nov 2020 14:55:19 +0000 https://www.atlanticcouncil.org/?p=321842 With Moscow yearning for an alternative to Western technology and the United States on a campaign to throw Huawei out of Europe and East Asia, the Chinese telecom giant sensed opportunity in Russia.

The post Huawei’s push in Russia exploits Kremlin fears of Western technology appeared first on Atlantic Council.

]]>
Chinese telecommunications firm Huawei, a leading developer and vendor of 5G technology, has been the target of a years-long US campaign to see the company’s products banned in partner countries worldwide. Huawei has pushed back the US efforts but also made forays in another direction: Russia. Capitalizing on the Kremlin’s fears of Western technology and its desires to reduce its dependence on the United States and the European Union, Huawei has set its eyes on the Russian market.

The Kremlin has long espoused fears of Western technology, encapsulated by Russian President Vladimir Putin’s 2014 comment that the internet was a “CIA project.” This public skepticism is born of both economic and security factors, including the Kremlin’s desire to boost Russian domestic technology firms and crack down on internet use by domestic political opposition.

But even if most analysts only discuss the Kremlin’s worries as pretext for domestic censorship, Moscow evinces legitimate concern over the risks of using technology from abroad, especially the United States and the European Union. These foreign governments could conduct espionage and cyber operations through the technological infrastructure sold to Russians. These concerns have garnered serious responses. The Russian government has pushed a domestic operating system replacement in place of Microsoft Windows; Putin signed a law mandating state approval for software on computers sold domestically; and the Russian parliament passed a so-called domestic internet law, which aims to have Russian firms uproot some foreign internet infrastructure to replace it with domestic technology. These and other moves to bring the Russian technology sector under tighter government control are driven in large part by desires to block foreign technology.

With Moscow yearning for an alternative to Western technology and the United States on a campaign to throw Huawei out of Europe and East Asia, the Chinese telecom giant sensed opportunity in Russia. In June 2019, The Moscow Times reported that MTS, Russia’s largest mobile network operator, had signed a deal with Huawei on 5G technology. The agreement was finalized after Chinese President Xi Jinping and Putin met in Moscow that month; it also took place a month after the US Department of Commerce added Huawei to the Entity List, imposing significant restrictions on US firms’ interactions with the company (from selling products to engaging in the same technical standards bodies). Just after MTS and Huawei inked their agreement, Putin stated, “There are unceremonious attempts at pushing Huawei away from the global markets.” Always eager to frame the world in Cold War-esque terms, he then added, “some call it the first technological war of the new digital era.”


Following those remarks, in September 2019, the Russian government supported Huawei competing in a series of commercial 5G trials with Swedish telecom vendor Ericsson. In March 2020, Nikkei reported that Huawei had announced a partnership with Russia’s largest (and US-sanctioned) bank, Sberbank, which is also now moving into cloud computing. Huawei’s inroads in Russia deepened from there: in late August, Russian Foreign Minister Sergei Lavrov said Russia was ready to cooperate with China and Huawei on 5G technology. Later that month, Ren Zhengfei, Huawei’s founder, allegedly said in an address, “After the United States included us in the Entity List, we transferred our investment in the United States to Russia, increased Russian investment, expanded the Russian scientist team, and increased the salary of Russian scientists.”

Even more recently, on November 10, the Russian daily paper Vedomosti reported that Huawei participated in an event with numerous Russian technology companies and government agencies, an event which included discussion of Huawei’s ability to support Russia’s digital economy. Huawei also promoted a strategy for the Eurasian region to encourage technological development—again, a clear play to Kremlin visions of a pan-Eurasian digital platform and a means to capitalize on Russian fears of digital dependence on the United States and the EU.

Of course, Russia risks creating a similar dependence on Huawei to the one it fears with Western technology. Russia’s relationship with China is quite different than Beijing’s with the United States, but presumably the espionage and network interference that worry analysts in the United States could also apply to Russian adoption of 5G technology from Huawei; it’s possible that the Chinese government could leverage Huawei’s infrastructural influence to spy on Russia. One analyst at the Russian International Affairs Council in Moscow has in fact argued this very point: he wrote (in addition to suggesting that Chinese technology is “no different” than Western options) that Russian overdependence on Huawei technology could itself pose security and economic risks for the country. It is also quite possible, though, that Russian decisionmakers are making a “lesser of two evils” calculus.

The situation offers clear implications for the United States and the EU: as the Kremlin reduces its digital dependence on the West, opportunities to seize market share will emerge for firms incorporated elsewhere, including those in China. These firms could, in turn, leverage the Russian market to support expansion elsewhere, an opportunity that may exist in other countries with similar fears of Western technology. The Trump administration tried to convince other countries to ban Huawei’s 5G technology with mixed success, but on balance, ever more governments are sidelining Huawei 5G technology—not official bans, but pushing against its use—as a result of myriad political factors (not just American diplomacy).

This will undoubtedly hurt Huawei’s global market share as the firm’s representatives find it much more difficult, if not effectively impossible, to market their 5G equipment to certain countries. Huawei’s ability to expand into the Russian market—a lucrative one if Huawei executives deepen relationships with Russian state officials and power brokers that control said industry—is an important factor in evaluating the company’s global 5G competitiveness. It will also help determine what kinds of data the company might be able to collect from 5G networks outside of China’s borders, because Huawei having equipment in other countries’ telecommunications infrastructure presents an opportunity to at the very least collect metadata.

While Huawei is making gains in Russia, this does not necessarily correlate to broad Russia-China technology cooperation across the board; the countries’ technological engagements in coming years must be assessed in the context of broader political and economic dynamics. But at least on this issue, the verdict is clear: Huawei’s newfound success in selling its 5G technology in Russia is inextricably linked with the Kremlin’s fears of that same technology coming from other parts of the world. US and EU policymakers should thus continue to monitor how anti-Western technology sentiment is impacting 5G adoption worldwide, while also recognizing that these dynamics of distrust might play out elsewhere. In an era of great uncertainty around supply chain security, with many countries instituting checks or raising barriers to protect against perceived security threats, limit domestic political mobilization, and/or preference domestic firms, this is hardly the last time a Chinese technology company may turn to markets outside the United States and the EU to prop up its aspirations elsewhere—and capitalize on distrustful narratives and beliefs to do so.

Justin Sherman (@jshermcyber) is a fellow at the Atlantic Council’s Cyber Statecraft Initiative.

Further reading:

The post Huawei’s push in Russia exploits Kremlin fears of Western technology appeared first on Atlantic Council.

]]>
A presidential agenda for the GeoTech Decade that uplifts people, prosperity, and peace https://www.atlanticcouncil.org/blogs/new-atlanticist/a-presidential-agenda-for-the-geotech-decade-that-uplifts-people-prosperity-and-peace/ Mon, 09 Nov 2020 14:54:21 +0000 https://www.atlanticcouncil.org/?p=318582 For both the good of the United States and the world, it will be his duty to ensure the United States emerges from the challenges of the COVID-19 pandemic, the deep economic recession, and its polarized society, renewed, revitalized, and rebuilt. Leveraging new technologies will be an integral part of that mission.

The post A presidential agenda for the GeoTech Decade that uplifts people, prosperity, and peace appeared first on Atlantic Council.

]]>

When former Vice President Joe Biden is sworn into office in January 2021, he will face a world in a state of fear and uncertainty, plagued by a deep lack of trust in society. For both the good of the United States and the world, it will be his duty to ensure the United States emerges from the challenges of the COVID-19 pandemic, the deep economic recession, and its polarized society, renewed, revitalized, and rebuilt. Leveraging new technologies will be an integral part of that mission.

Data and new technologies are changing societies around the world. Companies and individuals have gained access to technological and advanced data capabilities that, thirty to forty years ago, were available only to the national security apparatuses of large nation-states. This access is super-empowering private-sector entities and some individuals with unprecedented reach and ability.

Not everyone is being empowered equally. Some parts of society lack the infrastructure (e.g., broadband internet), the digital literacy skills (e.g., education), or the opportunities (e.g., startup ecosystems and ties to venture capital) needed to flourish in this new era. We call this new decade the “GeoTech Decade”: a decade in which data and tech will have disproportionate impacts on geopolitics, global competition, and global opportunities for collaboration.

Amid the GeoTech Decade, governments cannot continue to operate as they have in the past, as they no longer have a monopoly on exquisite technologies relative to the private sector. Often it is the private sector that is developing better technologies and government now must play “catch-up”—which means new ways of working and performing the business of civil society must be implemented. Newly empowered citizens want more from their government and governments need to find ways to involve such citizens in the participatory business of government. Without this participation, citizens in open societies will feel disempowered, disenchanted, and dismayed at what appear to be crucial decisions involving data and tech that impact communities, made with little to no public involvement.

Policymakers must be able to adapt and demonstrate effective governance at a speed faster than any time before in the history of the world. We must do this in an era where the private sector and members of the public will increasingly need to use emerging technologies such as artificial intelligence, commercial space satellites, next-generation biosensors, and more—and achieve outcomes heretofore provided solely by governments. We also must find ways to recognize that questions of data and technology are not solely about privacy and intellectual property protections, but also are key questions of individual and community identities, personhood, inclusion, diversity, and tolerance among open societies. Advances in tech and data have produced a world where traditional government activities no longer need to be performed by governments alone; where private sector entities must think beyond just their own individual profits, to include community responsibilities and obligations; and where a public remains mistrustful of both the government and the private sector.

To complicate things further, some transnational entities now have outsized influence, especially when it comes to digital capabilities and datasets. This represents a new era for both the business of civil societies and the global business of diplomacy. Such transnational private-sector entities need to recognize that functions that previously were performed solely by government now require their partnership to continue successfully; such responsibilities cannot be ignored lest the functions of societies fall apart or attempts at regulation take the place of stewardship on both the national and global stage.

From a foreign policy perspective, in the midst of growing tensions, success for the United States will depend on cooperation with allies and partners—to include both nation-states and transnational private sector entities—on issues like artificial intelligence, the future of space, and data trusts. This includes embracing India, the world’s largest democracy, as well as other nations around the world seeking a future where data and technology empower open societies. To that end, the United States must work to assemble a coalition of “digital democracies and more” that seeks neither surveillance states nor surveillance capitalism. Such a coalition would work across borders and sectors to build a world where we employ data and tech for greater digital empathy, diversity, and shared humanity across nations and economic sectors.

Lastly, the people of the United States must work to empower everyone, including those who have not yet seen the benefits of new technologies or data as visibly as others. We must first identify and define what “Tech for Good” means in relation to the ideals of what the United States can and must be. Then, the United States must fund and support pragmatic initiatives that demonstrate how tech can unite instead of divide.

As we work towards that future, the Atlantic Council GeoTech Center will continue to promote the non-partisan ideas of “tech, data, people, prosperity, and peace,” and lead a path to find consensus, across communities and like-minded nations, on what we mean when we say #GoodTechChoices.

David Bray is director of the Atlantic Council’s GeoTech Center and director of the GeoTech Commission.

Manning in Foreign Policy: The US finally has a Sputnik moment with China https://www.atlanticcouncil.org/insight-impact/in-the-news/manning-in-foreign-policy-the-us-finally-has-a-sputnik-moment-with-china/ Thu, 29 Oct 2020 17:56:02 +0000 https://www.atlanticcouncil.org/?p=316230

Trouble underway: Seven perspectives on maritime cybersecurity https://www.atlanticcouncil.org/blogs/new-atlanticist/trouble-underway-seven-perspectives-on-maritime-cybersecurity/ Tue, 13 Oct 2020 14:24:40 +0000 https://www.atlanticcouncil.org/?p=308238 With greater than 90 percent of all global trade tonnage transported by sea and vital global energy networks, maritime infrastructure has never been more essential and yet also more at risk.


With more than 90 percent of all global trade tonnage transported by sea, alongside vital global energy networks, maritime infrastructure has never been more essential and yet also more at risk. In just the last two weeks, there have been several high-profile attacks on the maritime industry, with both the fourth-largest global shipping company and the International Maritime Organization (IMO) targeted.

To dive deeper on this topic, we asked seven experts—including several who spoke at a recent Scowcroft Center for Strategy and Security event on maritime cybersecurity—about these threats and how policymakers can help protect against them:

What are the most vulnerable aspects of our maritime infrastructure? What makes them such attractive targets?

“When compared to commercial IT, the technologies used within the maritime sector illustrate the difficulties new sectors face in adapting to the Internet of Everything (IoE). Like many other sectors, the maritime sector used to develop stand-alone software and hardware, inherently “limiting” the risks to internal threats. The new IoE paradigm, however, proves that it is challenging to securely design, develop, and operate a fully connected environment. Current GPS, ECDIS, and AIS systems have demonstrated various vulnerabilities in the last couple of years. So in order for the maritime environment to develop and operate in a secure fashion, it will be essential to have an overall view of the supply chain, from third-party manufacturer to the people operating and maintaining the equipment. This view should further evolve over the lifetime of the equipment, with updates, upgrades, and training.

“In its current state, the maritime industry is a prime target due to the many moving parts of ports and vessels, the increasing attack surface (e.g., adding connectivity to devices that had never been thought to be connected), the current lack of security and privacy by design, as well as the inadequacy of cyber-security training. Furthermore, with the industry quickly bridging the gap between IT and Operational Technology (OT), we may soon see widespread vulnerabilities impacting the maritime sector as a whole.”

Dr. Xavier Bellekens, Lecturer and Chancellor’s Fellow, Institute for Signals, Sensors, and Communications, University of Strathclyde

From a government standpoint, what can the US government do to incentivize the maritime industry to invest more in cybersecurity?

“I believe that the most impactful things the US government can do to incentivize maritime industry investments in cybersecurity are:

  • Promote robust, real-time, maritime-specific cyber threat and incident information sharing between maritime industry stakeholders, and between those stakeholders and the US government (and vice versa), when appropriate.
  • Share cybersecurity threat intelligence with cleared maritime industry stakeholders.

“I believe that these two measures are critically important as, currently, maritime industry executives have limited information about cybersecurity threats that other companies have experienced. Only by sharing cybersecurity threat and incident information widely with and between maritime companies can their senior executives gain a clear appreciation of the collective threats and potential financial and national security impacts of failing to adequately invest in IT and OT infrastructure improvements and other cybersecurity enhancement measures. Having this complete cybersecurity threat picture is key to making corporate cost-benefit decisions on increased investments in cybersecurity, and to ensuring that those investments achieve the best possible cybersecurity protections.”

Cameron Naron, Director, Office of Maritime Security, Maritime Administration, US Department of Transportation

What kind of players exist in the maritime industry and what role should they play in driving improved cybersecurity outcomes?

“The challenges in driving improvement in cybersecurity programs within the global maritime industry result from the many links in the marine transportation system and the personnel at each of these links. With enhanced technology, the interconnectivity—while improving the efficiency of the system itself—also presents multiple nodes which provide opportunities for cyberattacks. Looking at the system as a whole and starting at the most basic level, the vessel and its systems, interconnected within the ship and interfaced with shore management, is the basic building block. Key links to and from the vessel include shore management (ship owner, operator, or charterer), government agencies requiring electronic reporting of vessel information, third-party contractors including classification societies, vendors, technical service providers, and port and terminal authorities. Simply put, in an ideal world, the entire logistics chain is interconnected and provides stakeholders real-time information essential to scheduling and decision making. Integrating cybersecurity programs at each interface is critical as is also the education of personnel at each interface. In such an integrated system, the cybersecurity programs are only as good as the weakest link, making it critical that all links in the logistics chain collaborate in establishing robust programs, properly training personnel and maintaining the operational efficiency necessary for all parts to work as one.”

Ms. Kathy Metcalf, President and Chief Executive Officer, Chamber of Shipping of America

Cyber-attacks on maritime infrastructure can be especially alarming because of potential compounding effects. What lessons can be taken from other sectors to help better protect maritime infrastructure from systemic threats?

“Three opportunities for maritime to build on the cybersecurity lessons learned by others jump out. First, from the energy sector, how to monitor and alert on malicious system behaviors in technology without a great deal of computing head room left for big commercial IT security applications. Second, from the US financial sector, the importance of regular and realistic joint exercises to build confidence in the collaborative links between stakeholders and raise awareness of channels for cascade failure between them. Third, from the telecommunications sector, how some companies have approached repeated adversarial events as an issue of resilience—building flexibility, capacity to adapt, and deep system expertise as a means of operating through failure rather than endlessly seeking to prevent it.”

Trey Herr, Director, Cyber Statecraft Initiative, Scowcroft Center for Strategy and Security, Atlantic Council

What was your biggest takeaway from the Atlantic Council panel conversation? How does it align with what you see as the biggest threat to maritime cybersecurity that needs to be tackled?

“Sustaining a safe, secure, and resilient marine transportation system is foundational to our economic and national security. When we consider evolving risks in the cyber domain, the maritime sector is on par with other more widely recognized sectors, like finance and energy, in terms of the potential for significant consequences. As we have seen from recent incidents, the maritime industry’s growing dependence on continuous network connectivity and converging layers of information and operational technology make it inherently vulnerable to cyber threats. 

“The first step for the maritime industry is to recognize that cyber risk management is not an administrative function that can be left solely to company IT professionals, but rather a strategic and operational imperative that must be managed at the C-suite level. We also need to recognize that cyber security is a team sport; no single public or private entity has the capabilities, authorities, resources, and partnerships to do it alone, so information sharing and collaboration are essential to managing this risk.”

Captain Jason P. Tama, Commander, Sector New York; Captain of the Port of New York and New Jersey, United States Coast Guard

How does cyber insecurity in civilian maritime infrastructure impact military readiness and capabilities? Why should the cybersecurity of our commercial fleets be a priority for the US government and the Department of Defense (DoD)?

“While cyber insecurity in civilian maritime infrastructure has not yet been a hindrance to force projection, it could be in the future, given the right set of circumstances. In the past, we have operated under the assumption of an uncontested homeland and uncontested passage. However, exploring the asymmetric level of effort required for successful cyber-attacks juxtaposed against the damage they may cause, has forced a re-evaluation of whether our infrastructure and routes will remain uncontested in the future. Because the Army relies on the civilian maritime industry to move equipment, when US forces need to be sent overseas quickly, minor delays throughout our civilian critical infrastructure could have a ripple effect on the deployment timeline. The cybersecurity of commercial fleets should be a priority for the US government and DoD because disruptions or delays to military deployments could jeopardize our ability to maintain stability and to support our allies and partners.”

Dr. Erica Mitchell, Critical Infrastructure/Key Resources Research Group Leader, Army Cyber Institute, West Point; Assistant Professor in the Electrical Engineering and Computer Science Department, West Point

How can we help better enable and operationalize the maritime industry to ensure that cybersecurity is not only understood, but also prioritized?

“First, to understand and prioritize cybersecurity, persistent visibility into organizations’ own networks, assets, and critical third-party integration must be achieved. This is the spectrum of attack surfaces that requires the same continual monitoring and awareness that we have practiced for centuries at sea: inspections of cargo holds and machinery spaces, watertight enclosures and hatches, and material conditions throughout the vessel to ensure seaworthiness. An understanding of network architecture, what is connected, when it connects, and who may be required to connect is an imperative. Real-time knowledge of business, vessel, and marine terminal networks and technologies presents the greatest power of information to empower stakeholders because what belongs and what doesn’t belong is discoverable and tangible in the present, allowing actions to be taken early, instead of after a breach.  Observable behaviors of how systems react to detectable adversarial activities and breach attempts is convincing and defensible evidence from which to understand then prioritize the risk through informed decisions. This is largely missing—inconsistent at best—across the maritime industry, with some exceptions. Without persistent monitoring in a rapidly advancing digital ecosystem, decisions will be farther behind the curve and based on scanty information.       

“Second, cybersecurity leadership is necessary in the board room to ensure leadership is informed, that all the appropriate considerations are included in strategic planning and governance, and that cybersecurity actions taken are translated to a business language for all leadership and stakeholders to understand. In operating ships and marine terminals where cyber-physical systems integrate with IT, leaders must create and implement unified strategies for how the fleet or facilities will be protected; to support the vessel masters, crews, and employees through the creation of sensible plans to respond and recover, and to maintain safe operations. This is no different from how responsible maritime companies develop strategies to understand and manage other forms of somewhat tangible risk, such as geopolitical, climate change, ballast water, and even obsolescent technology replacement. As an example, many operational and safety checks are required to be performed and logged for a vessel preparing to sail or arrive in port. Very little in the form of pre-departure or arrival cybersecurity checks are provided to the vessel as tested and validated from ashore. This type of assurance and safety due diligence can be organized and led by a maritime Chief Information Security Officer (CISO). At the present, very few maritime companies are staffed with a CISO, with some exceptions. So how can we sail into the digital future without the dedicated leadership and the processes to trust-but-verify?   

“Third, industry would benefit from discreet information sharing exchanges from which stakeholders may meet in private to discuss not only cybersecurity threat information, but also strategy and best practices, and to meet with government representatives as needed. As the deployment of OT monitoring software solutions by vendors increases, we must understand industry’s experiences with the performance of these technologies, the value of the output data, and new unintended security vulnerabilities. These lessons learned should be shared so industry can advance through digitalization together, vice operate in a vacuum. Lastly, as businesses interface with shareholder and government entities in the sharing of cybersecurity information, organizations need the right blend of industry and cyber leadership expertise to represent their equities ahead of regulation.

“We are always thinking ahead in maritime—monitoring through watchkeeping, anticipating, scanning, plotting navigation fixes, inspecting, analyzing trends, and preparing—because the sea is unforgiving, and the duty of care is neither optional nor negotiable. Until now, cyber has run counter to every best practice we have learned and practiced—react, wait for the bad news, then scramble (with some exceptions). Instead, turn the constraints of limited resources, talent, and low priority into advantages and strategy by simplifying the cybersecurity problem through continuous monitoring, dedicated cybersecurity leadership, and discreet collaboration.”

Captain Alex Soukhanov, Managing Director & Master Mariner, Moran Cyber

The 5×5—Cybersecurity and the 117th Congress https://www.atlanticcouncil.org/content-series/the-5x5/cybersecurity-and-the-117th-congress/ Wed, 07 Oct 2020 13:20:31 +0000 https://www.atlanticcouncil.org/?p=301783 Approximately eighty congressional committees and subcommittees claim jurisdiction over at least some dimension of cybersecurity policy. As the agenda for the coming years is only getting more crowded, Congress must improve its agility in order to pass meaningful cybersecurity legislation effectively.

This article is part of the monthly 5×5 series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

How many Congressional committees does it take to oversee cybersecurity? Apparently, dozens.

Approximately eighty congressional committees and subcommittees claim jurisdiction over at least some dimension of cybersecurity policy. The topics range from privacy rights to Internet of Things (IoT) safety to defense technologies and everything in between. With many committees and subcommittees overseeing these dimensions of cybersecurity, and Congress’s quickly filling agenda, bills that could protect Americans from cyberattacks may face long waits before being passed. Congress has its hands full and as the agenda for the coming years is only getting more crowded, it must improve its agility in order to pass meaningful cybersecurity legislation.

Cyber Statecraft Initiative experts go 5×5 to assess how Congress should govern over cybersecurity.

#1 What has been Congress’s greatest legislative win on cybersecurity in the past decade?

David Bray, director, GeoTech Center:

“Congress’s 2018 National Defense Authorization Act which included authorization to conduct background investigations for up to 80 percent of Department of Defense personnel. This was finalized in 2019 with a presidential executive order starting the transfer from what had been the Office of Personnel Management/National Background Investigations Bureau to the new Defense Counterintelligence and Security Agency (DCSA) out of what had been the Defense Security Service. In addition to overseeing 95 percent of all government clearances, DCSA provides oversight to approximately ten thousand cleared companies under the National Industrial Security Program, ensuring that the US government information they are entrusted with and the critical technologies they develop are properly protected.”

Ryan Ellis, assistant professor, Department of Communication Studies, Northeastern University:

“That’s a tough one. There have been bright spots here and there, but the biggest win is possibly a story of inaction. Despite cycles of attention and real political pressure, Congress has not (yet!) passed legislation that seriously undermines the deployment of strong encryption. Legislation mandating backdoors has not come to pass. This is good news. Mandating backdoors would be a disaster for security and privacy.”

Meg King, strategic and national security advisor to the CEO & President; Director of the Science and Technology Innovation Program, The Wilson Center:

“I have to pick two. First, authorizing the Cybersecurity and Infrastructure Security Agency (CISA) at the Department of Homeland Security made the organization a household name. Too few understood the role of CISA’s predecessor (the National Protection and Programs Directorate or NPPD), and the rebrand focused energy and resources on clearer missions. Congress also significantly increased its funding. Second is the United States-Mexico-Canada Agreement (USMCA). Most would be surprised to learn that ratification of the USMCA by Congress earlier this year was a legislative win for cybersecurity. With its adoption, USMCA became one of the first trade agreements in the world to include commitments on cybersecurity policy through its new Digital Trade chapter in which the Parties agreed to principles for cybersecurity policy consistent with America’s NIST Cybersecurity Framework––a widely adopted benchmark for effective management of cyber risks.”

Ronald A. Marks III, president, ZPN National Security and Cyber Strategies; former Central Intelligence Agency and Capitol Hill official:

“Establishing CISA. For the first time you have a non-intelligence, non-military, non-judicial/legal organization that can be an equal with other players in the US government and represent a straight-forward US government ‘buttonhole’ to the private sector.”

Heather West, former head of public policy for the Americas, Mozilla:

“One win is the application of sanctions on countries that have been involved in targeted cybersecurity offensives, especially when it comes to election interference. While cybersecurity in general is an incredibly broad issue to attempt to address, the use of cyberattacks by nation-states and directed towards the United States is much more clear-cut. And while there was potential for partisan infighting over these sanctions, the successful passage of sanctions was a demonstration that Congress is prioritizing protecting Americans and American infrastructure from these actors.”

#2 Where are lawmakers most glaringly falling short on cybersecurity?

Bray: “Very few have done it, or have staffers who have done it, themselves. It has become a partisan issue and one in which there is more theater versus discussion about how to change this from being a series of band-aids after band-aids on the issues to a holistic rethink of new strategies to get at the real root of the issues––for example, a significant number of services on the Internet were not designed for an era of scarce computation cycle or memory storage compared to today’s technologies, and thus, in some cases, were not built with security or attribution as the paramount goal. While not the model for what the United States should do, recently China has proposed New IP demonstrating their interest in technologies that would protect against digital abuse, yet also would take away privacy and free speech. Ideally in 2021, the United States would work with like-minded nations to offer a counter proposal to New IP that embodies the values that open societies have while also striving to find better ways to permit openness, improving security, and protecting against digital abuse.”

Ellis: “It is, I am afraid, hard to pick on just one! But, as many others have pointed out, thinking specifically here of Jason Healey’s piece in Lawfare earlier this summer, one of the biggest failures is the continued comparative lack of funding and attention for ‘defense.’ While the Department of Defense continues to invest in its ‘offensive’ cyber capabilities, spending on defense—meaning support for creating and maintaining secure and resilient software, networks, and devices—lags. Defending forward is all well and good (and maybe even a necessity), but failing to pair these investments with comparably serious efforts to fund the Department of Homeland Security (and others) to fulfill their cybersecurity mission is a failure.”

King: “Because of the nature of the threat, cybersecurity committee jurisdiction is extremely broad. At the same time, many possible solutions require governing in a non-traditional way: like co-locating public with private sectors to analyze and respond to threats faster together, relying on tools and information shared by both. That’s a legal mess that isn’t easily fixed by any legislature. Meanwhile, it is extremely difficult to write laws that won’t be quickly overtaken by rapid technology advances. Where most lawmakers tend to fall short on cybersecurity is finding time to keep current on the evolving threat landscape and understanding what technical and policy solutions might evolve to bridge gaps. In the absence of any consistent technology research or training resources, the Wilson Center operates the Congressional Tech Labs to help.”

Marks: “I would say two areas: 1) Cyber Budget/program consolidation through a National Cyber Director for the Executive (working on it) and the same consolidation on interests in the Legislative budget and oversight process; and 2) not getting the right/enough staff talent on the Hill to support the legislators with cutting edge understanding and ideas.”

West: “Despite Congress increasingly understanding the importance of strengthening our cybersecurity stance as a nation, it seems that we’re mostly looking to the products directly used by the government or sold by the largest companies—and open source projects and consumer products are not seeing the same progress. I hope that lawmakers take the time to learn about the huge swaths of internet infrastructure that are open source—and that usually don’t have the same resources as larger companies. Indeed, many bedrock technologies and standards of the internet are not adequately resourced or secured, despite relying on this underlying infrastructure. Incentivizing companies to secure their products should be done in parallel with work to provide resources to secure widely used open source projects and creating secure open standards.”

#3 Has cybersecurity become a partisan issue in Congress? Should it?

Bray: “Things digital and IT have become partisan issues in Congress with regards to how government operates, yes. When a major private-sector event happens, there also seems to be a lot of visible concern, even if little changes happen as a result. This politicization probably has origins back to the initial stumbles associated with the launch of Healthcare.gov and the subsequent political division over not just the associated policy but also the digital platform. That said, I’m not sure the right lessons were learned from Healthcare.gov or other situations; the right lesson was that the government should not require all intended features to be available by a specific date, or else they’re going to preclude agile development and force waterfall development, which is risky. The private sector does prolonged periods of open betas and phased launches for any major endeavor. I’m not sure if the budgetary cycles of the US government align with doing agile efforts with cybersecurity baked-in to every part of the development process either within government or with industrial base partners.”

Ellis: “Yes and yes. Cybersecurity is partisan for a good reason: decisions about cybersecurity necessarily require difficult trade-offs between often competing goals. There are fundamental and real differences about how best to balance, for example, security, freedom of speech, privacy, and economic efficiency. At a basic level, decisions about security necessarily prioritize certain values (and users and uses) over others. These are fundamentally political questions—they can’t help but be partisan.”

King: “Compared to other policy challenges, cybersecurity largely enjoys bipartisan collaboration. But there’s still a disconnect, especially when it comes to securing elections. According to Pew Research Center, 87 percent of Democrats believe a hostile power will tamper with US elections compared with 66 percent of Republicans. Although political campaigns are by definition partisan, protecting them from foreign influence on the cyber front should not be. As CISA Director Chris Krebs said at a recent Wilson Center event, the United States must fully integrate ‘the Zero Trust concept, where you just assume the network front to back is adversary territory.’”

Marks: “Yes, it has to be. Civil liberties questions abound with information control and information use by public and private sector entities. Oversight and debate are crucial. And it’s Congress’s job to debate/control the ‘power of the purse’ guiding spending.”

West: “There are very few issues that benefit from a partisan approach in Congress—so I am very happy to see largely bipartisan efforts around cybersecurity. I hope that cybersecurity doesn’t become more polarized, despite the current state of politics. Recent intelligence reports have made it clear that major efforts are targeting both parties, so this should remain a bipartisan issue.”


#4 Is a consolidated umbrella committee appropriate for managing cybersecurity or should responsibility for cybersecurity’s many topics be layered onto existing authorities?

Bray: “Before consolidating at the committee level, perhaps we should first consider that it doesn’t make sense for the executive branch to have each agency and department doing their own cybersecurity—this wouldn’t happen across different divisions in the private sector. Instead perhaps we need three specific executive branch designees responsible for cybersecurity: 1) one sole designee for cybersecurity elements associated with national security activities, 2) one sole designee for cybersecurity elements associated with justice, law enforcement, critical infrastructure, and public safety activities, and 3) one sole designee for cybersecurity elements associated with all other civilian activities. If we combine these designees into these three groups to be responsible for executive branch actions relating to cloud services, privacy protections, and cybersecurity combined, we’ll get economies of scale. Then we can consider potential Congressional committee-level mergers to match these executive branch designees.”

Ellis: “In terms of Congress, the existing structure of committees should, or at least could, work—the issue here is a lack of political will, not structural or bureaucratic impediments. Prioritizing the CISA—beefing up their budget and capacity—can and should be a priority. Inside the executive branch, we don’t have to reinvent the wheel. Restoring the cybersecurity coordinator within the National Security Council should be a priority. These changes would provide significant upside without having to create a new consolidated committee.”

King: “It is unlikely—for both political and logistical reasons—that cybersecurity issues will ever be contained in single umbrella committees in each chamber. But jurisdiction could be streamlined and the number of committees reduced, which would make oversight more efficient and effective.”

Marks: “The Cyberspace Solarium Commission wants designated oversight committees, but it’s unlikely to happen. Leadership designating the House and Senate Budget Committees with special ‘cyber interest’ might be the next best centralizing alternative––setting limits/guidance on budget and program support issues related to cyber.”

West: “A hybrid approach would be best suited, giving each committee a space to address relevant cybersecurity issues—which is not to say that there aren’t potential ‘umbrella’ bills that should be considered. Cybersecurity provisions can be added to any number of existing laws and authorities, and every government agency should be continually working on their cybersecurity stance and the cybersecurity stance of the companies and entities that they oversee or regulate. That means that there should be somewhere in each committee—potentially a subcommittee—that looks at cybersecurity issues, whether that’s in health, military, or internal administration. We can’t ‘win’ cybersecurity in a single push, but we can make concrete and incremental changes that improve cybersecurity for everyone across sectors and committee jurisdictions.”

#5 What hasn’t Congress tried, or what is it doing that it could do better, to attract more cybersecurity expertise?

Bray: “Launch a Cybersecurity Reserves force that could work full-time jobs in the private sector and spend a specific number of days a month in support of the activities of the US government. Also recognize that cybersecurity is not just about ensuring software, hardware, and networking technologies do what they are intended to do and are not exploited or abused; it is also about social engineering, misinformation and disinformation, and other human element attacks where, even if the machines operate as intended, the humans find novel ways of using machines to trick other humans to do unhelpful or harmful behaviors. This includes bots that spread misinformation, steal digital identities, or polarize the US public in ways that divide us as a nation. Our information environments, which support commercial and essential services, have become both digital and data banks to be robbed and battlegrounds for conflicts.”

Ellis: “A first step—which would have positive impacts beyond cybersecurity—would be to restore and reinvigorate the Office of Technology Assessment (OTA). Long-since defunded and left to rot, OTA provided expert advice to Members of Congress. Deciding how to balance competing aims, as noted above, is a political question. But in order to reasonably assess and weight these competing aims, technical insight and expertise is a necessity. A revitalized OTA would be an important first step (but only a first step) in making sure that political debates are informed by sound science.”

King: “We expect a lot from our legislators: to win elections in a hyper-partisan environment and then not only fix systemic problems that no one person can change alone, but have the expertise to do so. And even if Congress had all the cybersecurity talent in the world at its fingertips, it simply is not possible to address all priorities equally. Especially during overlapping crises like a pandemic and a crumbling economy.

“Non-profit organizations including my own provide a variety of training programs and fellowships to offer Congress trusted access to cybersecurity expertise when needed. The most successful ones put technology directly into the hands of legislators and their staffs—including the upcoming Hack the Capitol—so that they gain experience actually using these tools and understand how the laws they write might be implemented.  

“Perhaps the COVID-19 experience and its impact on the future of work might just be the disruption Congress needed to think differently about how it hires and attracts cybersecurity talent. In many cases, Congress (like many other institutions) relies on an outdated system that prioritizes experience in a professional setting, with an advanced degree. Some of the best software engineers don’t fit into neat molds—and you probably don’t want them to. Looking beyond the usual path and recruiting experts with deep technical expertise who are intrigued by the prospect of public service—even on a remote basis—will be critical.”

Marks: “This is a war of attrition as the Hill rarely moves fast on issues not demanding—in their minds—immediate attention. The older staff and Members of Congress will age out and the newer ones coming in will have greater understanding. As for a program, Congress needs to reinforce its own cyber oversight with talent and commensurate pay the same way they advocate for STEM in the private sector. In the meantime, perhaps a boost in contracting expertise through the Government Accountability Office and Congressional Research Service might ameliorate some of the problem.”

West: “It’s a well-known problem that Congress and cybersecurity experts often speak different languages, have different contexts, and thrive in different cultures. Staff and Members of Congress understand the urgency of cybersecurity issues––even if they’re still working to figure out how to address it. I hope to see Congress continue to bring expertise into all levels of staff and to work in partnership with external experts to understand how to concretely and positively impact the way that we approach securing American infrastructure, from consumer products to sensitive government systems. Congress and staff need to understand that security experts really do work within a different context than Congress—and to work to bridge that divide from both sides.”

Simon Handler is the assistant director of the Atlantic Council’s Cyber Statecraft Initiative under the Scowcroft Center for Strategy and Security, focused on the nexus of geopolitics and international security with cyberspace. He is a former special assistant in the United States Senate. Follow him on Twitter @SimonPHandler

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

How to counter worsening cyber-security threats: The international strategy of the Dutch government https://www.atlanticcouncil.org/commentary/article/how-to-counter-worsening-cyber-security-threats-the-international-strategy-of-the-dutch-government/ Thu, 01 Oct 2020 20:12:13 +0000 https://www.atlanticcouncil.org/?p=302412 Cyber-security reports show that threats are getting worse, not decreasing. International cooperation is vital to consolidate the rules of the road in cyberspace, further their implementation by increasing the chance of exposing and stopping those who break the rules, and assist other countries in upgrading their resilience.


The piece below originally appeared in the Netherlands Atlantic Association’s magazine Atlantisch Perspectief.

Over the past decades, our reliance on the internet and everything that passes over it has grown exponentially to a point where living without it is hardly imaginable. Even those people who are not active online are indirectly dependent on many goods and services that could no longer be rendered without the government or private sector relying on Information and Communications Technology (ICT). We experience this as progress and have recognized the enormous opportunities and increase in quality of life that have come with it. The discourse that evolved with the emergence of the internet held a promise of increased business opportunities, transparency, and fundamental freedoms (of speech, assembly, etc.) that could now be exercised in an unfettered way like never before.

Waves of democratization, like the Arab Spring, were spurred or amplified by social media. However, with those opportunities and increased interconnectedness came threats and vulnerabilities. And online interdependence has increasingly exposed us to malicious actors with bad intentions. This has sparked an intense international debate and necessitated efforts to fight these evils. Every single part of our government deals with the threats and opportunities that come with the use of ICT, ranging from the Dutch Defense Ministry adding cyber capabilities to its inventory, to the Dutch Ministry of Home Affairs making sure our elections are conducted securely and without foreign interference, to the Dutch Justice Ministry protecting our critical infrastructure and fighting cybercrime. It is now integral to everything the government undertakes, and responses to all these challenges and opportunities must be aligned in a whole-of-government approach, which requires regular updates.

For the Dutch government, cyber diplomacy is a necessity, not a luxury. For an open service-oriented economy, a safe and secure internet is key. Many of the transatlantic internet cables land in the Netherlands, making Amsterdam one of the largest hubs for internet traffic in the world. But most importantly, the Netherlands is one of the few countries in the world that has promotion of the international rules-based order enshrined in our constitution (article ninety). Extending this mission into cyberspace is therefore regarded as a core task of the Ministry of Foreign Affairs (MFA).

The MFA is involved in many of these efforts, particularly where the work of other departments has an international dimension. But it also has defined its own mission, which is derived from the overall foreign and security policy. Obviously, the work of the development-cooperation side and the international-trade side of the house also has a digital dimension, but for the sake of this article, I will focus on cyber diplomacy in the security context of our work. We have a three-pronged approach to promoting a safe, free, and open internet globally.


1. Develop and consolidate the rules of the road in cyberspace

Although to some the internet may seem a lawless place where state actors and their proxies, criminals, and terrorists can perform harmful acts and go unpunished, that is not the case. Our laws, and in particular international law, apply in full in cyberspace. In the opinion of the Netherlands and a growing group of like-minded countries, the notions contained in the United Nations (UN) Charter and Human Rights Conventions are applicable equally online and offline. Your privacy, rights of assembly, and free speech, all are protected by the same laws and treaties. However, discussion amongst experts is still ongoing as to how some international rules exactly apply. For example: what constitutes an armed attack in cyberspace, and when is self-defense warranted? Over the past decades, such discussions have taken place under the aegis of the UN. The most recent set of eleven norms for state behavior was agreed in 2015. Of course, this is a domain that is still very much under development, both in terms of technology and in terms of policy. So, it is only natural that the discussion continues.

Norms of state behavior are currently being further elaborated along two separate UN tracks. And with the development of new technologies, we need to keep developing and strengthening those rules of the road in cyberspace to make sure that use of ICTs and other technologies is safe and secure, and that the fundamental rights of each individual are respected. Also, a number of Confidence Building Measures like transparency and responsible behavior in case of cross-border cyberattacks have been developed within the OSCE. The construct of rules, norms, and principles for state behavior is being elaborated and strengthened all the time. Nevertheless, we see an increase in irresponsible and malicious behavior by many, ranging from theft of intellectual property to benefit national industries, to crippling attacks on critical infrastructure, to interfering in electoral processes. The Global Commission on the Stability of Cyber Space (GCSC), an independent non-governmental organization that the Netherlands MFA initiated, issued an authoritative report on these matters in November 2019.

It stated unequivocally that the public core of the internet as well as critical infrastructure should be off-limits to any tampering and that the latest developments concerning electoral infrastructure and medical infrastructure in the face of the COVID-19 crisis gave particular reason for concern and grounds for their protection. The suggestions from the GCSC have been referenced by many nations during the negotiations at the UN in New York. The GCSC had an important attribute which is vital to the entire debate on shaping the rules of the road in cyberspace: a multi-stakeholder composition. Government representatives, scientists, practitioners, politicians, civil society, tech companies, think-tankers, even hackers: Every single part of our society should be involved in shaping this space for the future. The internet is not like a public road that is owned by the government, which can decide on its own what the maximum speed should be. The internet is owned by us all, and it is still growing as we speak. Every owner should feel responsibility, and every owner should take part in shaping its future. This should remain the organizing principle of future exchanges on our digital future, and I was delighted to see that UN Secretary-General António Guterres has embraced this principle in his recent Roadmap for Digital Cooperation.

2. Hold those who break the rules accountable

Although we all seem to broadly agree on the norms and rules of the road in cyberspace, and on what is and is not acceptable, malicious behavior is on the rise. A recent example is the abuse of the COVID-19 crisis for cyber operations. There is thus a clear need to get better at catching those who break the rules. And this is not easy; actions on the internet are easily disguised, and it is increasingly difficult to ascertain who may be behind an action. So it is of critical importance that we work together with EU partners and like-minded nations across the world, first of all, on the forensic details of cyberattacks. We share information between allies and partners, just as with information on common crime. And then we coordinate on calling out malign practices. Whereas the decision to attribute a cyber operation to another state will always be a sovereign decision by any government, the way to communicate such a move, and the option to impose consequences, will gain meaning and impact when coordinated between nations.

Many individual nations have an attribution framework for these purposes, but on Dutch initiative, the European Union (EU) has also developed a cyber-diplomacy toolbox. The most recent addition to that is an EU cyber-sanctions regime, which allows the EU at twenty-seven to impose sanctions on individuals and entities that are found guilty of malicious cyber operations. In order to be able to agree on such a sanction by unanimity within the EU, we need to be able to convince each member state of the facts. For this purpose we draw up evidence packs, which have to be unclassified, because the person or entity targeted by a sanction should have recourse to the European Court of Justice. To facilitate the assembly of such evidence packs, we need to build alliances between the public and private sectors, with the involvement of other stakeholders. We have increasingly realized that many different players hold one or more pieces of the incredibly complex puzzle that is called cyber: police, national cyber-security centers, security services, Computer Emergency Readiness Teams (CERTs), but also private security firms, social media platforms, universities, Interpol/Europol, and so on. We have to get much better at forging alliances between these players to be able to see the full extent of what goes on on the dark side of the internet. And this could help us with getting better at exposing different kinds of interference, including disinformation campaigns.

The Dutch are engaged in diverse diplomatic efforts to promote adherence to international rules and norms. For example, the Netherlands Ministry of Foreign Affairs has initiated the Freedom Online Coalition (FOC). The FOC is a partnership of now thirty-two governments, working to advance and secure internet freedom. Coalition members work closely together to coordinate their diplomatic efforts and engage with civil society and the private sector to support Internet freedom—free expression, association, assembly, and privacy online—worldwide. Similarly, the Dutch MFA supports non-governmental organizations in this field like the Digital Defenders Partnership and Access Now. We seek to serve, guide, and influence decision-makers across sectors through human-rights-focused thought leadership and innovative, evidence-based policy analysis, as well as through events like RightsCon, an annual meeting and a movement that connects and empowers civil society and mobilizes a global community to collaborate on the most pressing issues at the intersection of human rights and technology.

Minister Stef Blok visited the cybersecurity company Fox-IT in Delft. His visit focused on cyber diplomacy and the international nature of cyber attacks. Photo via Fox-IT, Sicco van Grieken.

3. Enable all nations to protect themselves

Defense against cyber operations starts with resilience. Resilience means not only being able to withstand an attack, but also being able to continue functioning through an attack. This is vital as today the functioning of our society is increasingly dependent on a functioning internet. Full protection of a country’s critical infrastructure is therefore key. Definitions of critical infrastructure vary from country to country and keep changing and growing all the time. Banks, electrical grids, telecoms, etc. are regarded as critical in every country. But for the Netherlands for instance, where about 35 percent of its territory lies below sea level, the waterworks that keep the sea out are computer operated and are without a doubt critical infrastructure. However, until recently, our medical infrastructure was not regarded as critical in the same way other sectors were. With the COVID-19-related attacks and the realization that hospitals were critical to the continuity of our society, that definition is now under review.

It is very important that countries that are less well-equipped get assistance to achieve the same level of protection. This should not be seen as charity; it is in everyone’s interest. Some major cyber incidents of the past few years have shown that most damage done is actually collateral damage well beyond the initial target, or intentionally random and global in its effect. Think of the WannaCry cyberattack that affected up to 300,000 computers in 150 countries, or NotPetya, the most devastating cyberattack in history, which crippled ports, paralyzed entire corporations, and froze government agencies around the world. In that sense we could compare the internet to the earth’s atmosphere; damaging effects from a cyberattack do not stop at our borders, just as climate change knows no boundaries. This capacity building is first of all a matter of building technical resilience, for example by setting up Cyber Emergency Response Teams.

But it could also entail assistance with drafting legislation that ensures internet safety and security, and at the same time respect for human rights. The overall aim should be to make sure that everyone can reap the benefits of new technologies and enjoy a free, safe, and secure internet. And just as important: empower all states to take part in the global debate about our common digital future as equal partners, including the private sector and civil society in these countries. To support this effort, the Netherlands Ministry of Foreign Affairs initiated the Global Forum for Cyber Expertise (GFCE) in 2015. The GFCE is a multi-stakeholder community of more than 115 members and partners from all regions of the world, aimed at strengthening cyber capacity and expertise globally. As a global platform comprising governments, international organizations, non-governmental organizations, civil society, private companies, the technical community, and academia, the GFCE builds global cyber capacity. This Dutch government initiative has now matured into the world’s strongest independent Capacity Building platform.

4. Conclusion

The Dutch government has climbed a steep learning curve over the past years and is still learning every day. Cyber-security reports show that threats are getting worse, not decreasing. International cooperation is vital to consolidate the rules of the road in cyberspace, further their implementation by increasing the chance of exposing and stopping those who break the rules, and assist other countries in upgrading their resilience. With new technological developments like the Internet of Things, the attack surface will increase, and with it, the responsibilities of all involved to protect us from malicious actors. The Dutch Ministry of Foreign Affairs aims to continue to play a leading role in cyber diplomacy both from The Hague and through our dedicated cyber diplomats in selected embassies across the world.

Timo S. Koster was, until September 1, 2020, Ambassador-at-Large for Security Policy and Cyber at the Dutch Ministry of Foreign Affairs. From 2012 to 2018, he was Director for Defense Policy and Capabilities at NATO Headquarters in Brussels.

Further reading:

The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

The post How to counter worsening cyber-security threats: The international strategy of the Dutch government appeared first on Atlantic Council.

Five big questions as America votes: Cybersecurity https://www.atlanticcouncil.org/blogs/new-atlanticist/five-big-questions-as-america-votes-cybersecurity/ Thu, 24 Sep 2020 14:57:51 +0000 https://www.atlanticcouncil.org/?p=298249 With the next US presidential election looming, the next administration will face no shortage of substantive cyber policy issues. US adversaries such as China and Russia continue to undermine and fracture the free and open internet, while the technology ecosystem has been altered by the rapid adoption of cloud computing, placing immense power and responsibility in the hands of few technology giants, such as Amazon and Microsoft.

The post Five big questions as America votes: Cybersecurity appeared first on Atlantic Council.


As part of the Atlantic Council’s Elections 2020 programming, the New Atlanticist will feature a series of pieces looking at the major questions facing the United States around the world as Americans head to the polls.

With the next US presidential election in less than two months, the next administration will face no shortage of substantive cyber policy issues. US adversaries such as China and Russia continue to undermine and fracture the free and open internet. The technology ecosystem has been altered by the rapid adoption of cloud computing, placing immense power and responsibility in the hands of a few technology giants, such as Amazon and Microsoft. The effects of the coronavirus pandemic have forced millions of Americans to rely on remote technologies to work and study from home. These trends, along with an increasing number of Internet of Things (IoT) connected devices, are pushing new concerns over privacy and security to the forefront of policy discussions.

Below are the five major questions facing the United States on cybersecurity as the US elections approach, answered by five Atlantic Council experts:

Since the last Bush administration, US attention in cyberspace has largely focused on four adversaries—Russia, China, North Korea (DPRK), and Iran. Who will be the biggest challenge for the United States in the next decade?

“Proliferation of offensive cyber capabilities. Where states like Russia or DPRK develop and share capabilities or intelligence on US and allied targets with non-state groups, either direct proxies, independents, or criminal groups, the threat environment facing the United States becomes decidedly more complex.” – Trey Herr, director, Cyber Statecraft Initiative

 “Russia. At this point, it is more difficult to point out a geopolitical conflict involving Russia that doesn’t have a cyber component instead of one that does. We also need to watch out for China’s advancements in artificial intelligence research and military applications. The real challenge, however, is that the current cyber landscape is such that both the United States and its adversaries are stuck in a prisoner’s dilemma where the individual incentives for surprise attack, preemption, and exploitation of vulnerabilities leave cyberspace collectively insecure for everyone. Challenges from adversaries, old and new, will continue as long as this dynamic persists and as the attack surface increases.” – Jenny Jun, fellow, Cyber Statecraft Initiative; PhD candidate, Columbia University’s Department of Political Science

“China will be the largest challenge not just to the United States, but to all of the ‘rule of law’ countries over the next decade. Aggressive intellectual property theft and the influence of its market power, combined with the exportation of an authoritarian governance model as a viable alternative to the Western system, will be the defining issue of the decade.” – Jeff Moss, nonresident senior fellow, Cyber Statecraft Initiative; founder, Black Hat and DEF CON security conferences

“Size is not as crucial a concern; we need to be most worried about criminal groups and similar actors working with nation-states to disrupt the West politically. Particularly concerning is the ability of these groups to use nation-state-grade cyber capabilities in operations, with the support of nation-states.” – Gregory Rattray, senior fellow, Cyber Statecraft Initiative; partner/co-founder, Next Peak LLC

“These four states will all continue to pose different kinds of threat, in addition to many new players who have obtained or built offensive cyber capabilities in recent years. The biggest challenge for the United States in the next decade will be China; but the challenge is not to prevent China from assuming a more influential role in cyberspace—that cat is out of the bag—but for the United States to find cybersecurity solutions that work in a multipolar and interdependent world. The alternatives of withdrawal from or overmatching in cyberspace do not appear sustainable.” – James Shires, fellow, Cyber Statecraft Initiative; Assistant professor, Cybersecurity Governance, Institute of Security and Global Affairs, University of Leiden

The Cybersecurity and Infrastructure Security Agency (CISA) was established in 2018 to improve cybersecurity across government. What other organizational changes at the federal and/or state levels should be made to best protect Americans from damaging cyberattacks?

Herr: “The United States must revamp its cloud adoption and security regulatory processes. The incumbents, programs like FedRAMP and the DoD Cloud Computing Security Requirements Guide, are slow, emphasize manual processes over automation, and skew towards prescribing design choices instead of security outcomes and performance. Change at the Federal level would be a boon for more secure adoption of cloud computing by states and allies, many of whom are searching for a more flexible and cloud-friendly model.”

Jun: “At this time, instead of making more organizational changes, I think it’s more important to empower existing positions and strengthen regulations for specific policy goals. For example, if the goal is to make businesses exercise more due diligence when manufacturing various products in the supply chain, making acquisition choices, and handling customer data, this can be achieved through congressional legislation and empowering regulatory agencies such as the Securities and Exchange Commission (SEC) for its enforcement by sector. If the goal is to make individuals and enterprises less susceptible to ransomware attacks, federal insurance programs can be established to pay for victim recovery, baking backup and resiliency best practices into premium prices, instead of leaving insurers to pay the ransom itself or simply asking victims not to pay.”

Moss: “Our future will depend on reliable, safe, and secure technology gluing society together. A clearly articulated industrial policy for technology that prioritizes resiliency and security would provide intention and direction to future decisions. The creation of a National Supply Chain Safety and Transparency Agency could act as a coordinator for industry and government risk evaluation, best practices, and policy advice. This would help governments understand the risks before procuring a technology, and companies would be able to address their concerns.”

Rattray: “Beyond CISA, the government needs to establish joint operating capabilities and collaboration with the private sector, through analysis and resilience centers like the FS-ISAC. These should very definitely include enabling state governments through the increase of response capabilities, following the federal level in working directly with the providers of critical national functions.”

Shires: “Greater privacy and data protection will be crucial. Although it would not directly prevent cyberattacks, good privacy and data protection design and regulation would prevent some of the most serious consequences of intrusions and breaches for the general public. Of course, finding a good solution to information sharing on cyberattacks is also key, whether incentive-based or regulatory, and at state and federal levels. Finally, keeping to a single and clear definition of critical infrastructure, enabling states and federal government to focus resources effectively, would definitely help.”

What will be the most influential force on the shape of the internet that the next administration will face?

Herr: “Apathy. The internet’s continued universality and technical integrity are in question, as much because of deliberate Chinese and Russian efforts to drive fragmentation as because of benign neglect from the United States and key allies. The internet’s original design features, which pushed intelligence (and thus much of decision-making power) to the edge of the networks, have been eroded in favor of ever more capable intermediaries. The arguments against a single internetwork are being made far more frequently than those in favor.”

Jun: “Increased attack surface from 5G adoption coupled with IoT devices. More dependencies and integration can make it easier to create cascading effects where exploitation of one feature could affect the entire network. As a political scientist, one hope is that perhaps the vulnerabilities will be so great and intertwined such that it will create mutual hostage situations and ironically contribute to stability.”

Moss: “Great power competition and decoupling economies will fundamentally change the nature of the internet over the next decade, and the next four to eight years will be critical. There has been a lot of speculation about the use of cyber retaliation and how infrastructure providers would alter their networks to limit the impacts, and now we are witnessing the emergence of what I call ‘App Diplomacy,’ with India banning popular Chinese social media apps such as TikTok. A new capability in the diplomatic toolbelt.”

Rattray: “The internet and cyberspace generally continue to evolve extremely rapidly. I would have the next administration focus particularly on artificial intelligence, the protection of data used to form algorithms, and the reliance of government, economic, and social functions on the use of artificial intelligence.”

Shires: “There are two. The first influence is its own current trajectory. If current ‘clean’ measures are taken to their extremes, the United States itself will reshape the internet into at least two halves (and, maybe, depending on which way Europe goes, three). The second influence will be the continuing surge in internet access worldwide, especially in the Global South.”

How will the novel coronavirus pandemic impact the next administration’s approach to digital privacy?

Herr: “Unclear. Contact tracing through smartphone and wearable apps has not featured as prominently in the United States as in other countries, and the EU remains a patchwork of different designs and applications. The move to work from home has forced users to consider more frequently where their data lives and how it can be accessed, which may improve support for a privacy authority or more coherent federal privacy protections.”

Jun: “This is a classic commitment problem, for both governments and big tech. Whoever is collecting vast amounts of personal data to a centralized system has a hard time credibly promising that it will use insights from that data or the capacity to collect it for one purpose but not the other. But this problem is not unlike other promises that governments had to make in the past, such as a promise to use military power only against foreign adversaries but not against its own citizens or domestic rivals. While no silver bullet, mechanisms such as tying hands, allowing oneself to be sued, delegating authority, and embedding these mechanisms in institutions so that they are hard to change—stuff democratic governments have known for centuries—can partially mitigate this commitment problem. But voluntarily giving up this power will take an extraordinary event, probably not without a bottom-up demand from consumers.”

Moss: “COVID-19 has forced everyone to embrace online everything, and with that comes a growing awareness that their expectation of privacy is being violated or traded away with only illusory consent. In the absence of a constitutional right to privacy, the next administration could address this by enacting legislation that would create a minimum for privacy expectations, providing clarity to both people and the market, ultimately enabling competition instead of privacy law arbitrage. I know. I’m an optimist.”

Rattray: “The pandemic requires the next administration to both enhance digital privacy and deal with the increased sharing of private information. New, ever more relevant medical information must be shared digitally and must be protected. Simultaneously, remote work increases the necessity of the digital environment, and the amount of private information handled digitally. Finding the right approach to privacy is going to become a fundamental enabler.”

Shires: “It will create some contradictory pressures. For example, the pandemic has forced many personal and professional lives online for the foreseeable future, suggesting investment in digital privacy for online connective services should be a priority; but also generating more of a market for these services and greater opportunities for data-based advertising, threatening privacy. Another example is healthcare data: protecting the privacy of healthcare data should be more important, but finding a balance that also enables the sharing of data when necessary for contact tracing, and avoiding a shadow surveillance economy for doing so, will be difficult.”

The IoT is growing by the day, increasingly merging the digital and physical worlds and, in turn, broadening the cyberattack surface. What immediate steps can the next administration take to achieve quick wins for a more secure IoT ecosystem?

Herr: “Empower the Federal Trade Commission to enforce a baseline standard of secure design and manufacture for IoT devices. There are numerous proposals of what such a standard might look like and various labeling schemes to support their enforcement but the root of impact is a clear and minimally ambiguous enforcement scheme.”

Jun: “Change incentives of manufacturers and distributors to sell more secure products, like we have done for areas such as food safety and financial products. Don’t rely on the consumers to make security choices, and don’t regard this as a purely technical problem beyond the policy realm. Leverage existing recommendations for manufacturers like NISTIR 8259, create certification mechanisms, and ensure compliance. Coordinate such efforts with like-minded countries to gradually establish global market standards.”

Moss: “While I don’t believe there are any quick wins to be had in IoT, there are some steps that are long overdue, like requiring an update mechanism to fix critical bugs, imposing manufacturer liability for defective products, and requiring transparency on what is inside the device, possibly through a Software Bill of Materials (SBOM). Finally, legislating clear product labeling laws like we have for food would provide transparency on important questions: How many years will the product be supported? Does the product require you to reveal personal information to activate it after purchase? Will this product be used to track my movements or spy on me?” (A minimal SBOM sketch follows the responses below.)

Rattray: “In general, I am a skeptic of the notion regarding quick wins, especially when it comes to fundamental changes in the digital ecosystem, like the rise of IoT. The next administration should focus on providing support to those who provide IoT products and services to make sure that they are easily secured and robust, while also increasing the general capacity of individual organizations in the nation to respond to digital disruption.”

Shires: “First, develop consistent standards for IoT security (although this isn’t quick, it should be started immediately). Second, find a way to effectively leverage the cybersecurity community to publicize and patch bugs in IoT systems. Third, for consumer IoT it is privacy and data protection. There will always be successful IoT attacks, and the harder it is to, for example, exploit access to smart speakers or pivot from a dumb device to something juicier, the more we can use them with confidence.”
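Moss’s point about a Software Bill of Materials is easier to picture with a concrete artifact. The sketch below is purely illustrative and assumes a hypothetical smart-thermostat firmware with made-up component names, versions, and licenses; it assembles a minimal SBOM document loosely following the CycloneDX JSON layout. A real SBOM would be generated by build tooling and published, and ideally signed, by the manufacturer.

```python
import json

# Hypothetical components bundled into a consumer IoT firmware image.
# Names, versions, and licenses are illustrative, not real product data.
components = [
    {"type": "library", "name": "embedded-tls", "version": "2.4.1",
     "licenses": [{"license": {"id": "MIT"}}]},
    {"type": "library", "name": "tiny-http-server", "version": "0.9.7",
     "licenses": [{"license": {"id": "Apache-2.0"}}]},
    {"type": "operating-system", "name": "rtos-kernel", "version": "5.1.0",
     "licenses": [{"license": {"id": "GPL-2.0-only"}}]},
]

# A minimal document loosely following the CycloneDX JSON layout:
# metadata about the shipped product plus its third-party components.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "metadata": {"component": {"type": "firmware",
                               "name": "smart-thermostat-fw",
                               "version": "3.2.0"}},
    "components": components,
}

print(json.dumps(sbom, indent=2))
```

Published alongside each firmware release, a document like this would let a buyer or regulator answer the labeling questions Moss raises, such as which components are inside the device and which known vulnerabilities affect them, without reverse engineering the product.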

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

See more from CSI’s 5X5 series:

The post Five big questions as America votes: Cybersecurity appeared first on Atlantic Council.

Trade flows in the age of automation https://www.atlanticcouncil.org/in-depth-research-reports/report/trade-flows-in-the-age-of-automation/ Fri, 18 Sep 2020 13:00:48 +0000 https://www.atlanticcouncil.org/?p=298135 Innovative digital technologies will alter global value chains (GVCs) in the decade following COVID-19. As new technology re-shapes the nature of services trade, entire value chains will be disrupted. With trade in services growing 60 percent faster than that of goods, it is clear that the impact of new digital technologies will be widespread.

The post Trade flows in the age of automation appeared first on Atlantic Council.


Trade Flows in the Age of Automation

Innovative digital technologies are poised to change the nature of services trade within global value chains in the decade following COVID-19. With trade in services growing 60 percent faster than that of goods, the impact of new digital technologies will be widespread.

As digital technologies increase the intensity of services in value chains across a broad range of industries, from automotive manufacturing to financial services, the primary sources of comparative advantage going forward will be innovation, intellectual property, and specialized skills.

This report analyzes four digital innovations that have the potential to disrupt global value chains: the Internet of Things (IoT); blockchain; artificial intelligence (AI); and advanced manufacturing. While advances in information and communication technology have largely worked in concert in the past to support the expansion and fragmentation of global value chains, the impact of these innovative digital technologies is unlikely to be uniform. The report evaluates these new technologies in the context of the financial, pharmaceutical, and automotive industries to arrive at three key takeaways. First, the private sector’s competitiveness will increasingly depend on an ability to collect, store, organize, and analyze data. Second, regional value chains will emerge as global trade orients more around regional poles. And third, big tech firms could disrupt global value chains, although it appears just as likely that they will collaborate with existing firms in established markets.

The World Trade Organization (WTO) forecasts that services will comprise one-third of all trade by 2040, up from 21 percent today and just 9 percent in 1970.

As global value chains become more knowledge- and services-intensive, advanced economies should be prime beneficiaries overall. The UK and the United States should be the leaders in adopting digital technologies and establishing global rules and standards to maximize their benefit. The report calls for strong multilateral cooperation to fully realize the potential of these digital technologies within global value chains.

Related experts

The post Trade flows in the age of automation appeared first on Atlantic Council.

Event recap | Data salon episode 4: Data science and social entrepreneurship https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-data-salon-episode-4/ Thu, 17 Sep 2020 13:24:00 +0000 https://www.atlanticcouncil.org/?p=304091 On Thursday, September 17, 2020, the GeoTech Center hosted the fourth episode of the Data Salon Series in partnership with Accenture. The panel featured Ms. Valeria Budinich, Scholar-in-Residence at the Legatum Center in MIT's Sloan School of Management; Mr. Derry Goberdhansingh, CEO of Harper Paige, and Mr. Bevon Moore, CEO of Elevate U.

The post Event recap | Data salon episode 4: Data science and social entrepreneurship appeared first on Atlantic Council.


View the full series here.

Event description

On Thursday, September 17, 2020, the GeoTech Center hosted the fourth episode of the Data Salon Series in partnership with Accenture. The panel featured Ms. Valeria Budinich, Scholar-in-Residence at the Legatum Center in MIT’s Sloan School of Management; Mr. Derry Goberdhansingh, CEO of Harper Paige, and Mr. Bevon Moore, CEO of Elevate U. GeoTech Center Director Dr. David Bray moderated the panel, incorporating questions from a wide range of voices in the data science community.

Around the world, scores of individuals and organizations work to create a better reality for their communities, their nations, and the world. Yet, with so many players in the field, it is often difficult to coordinate between different streams of public, private, and nongovernmental data seeking to combat overlapping problems. The panelists discussed their efforts and outlined methods to connect data with the organizations that need it without exposing the personal information of anyone involved. The speakers touched on personal experiences in the field of social entrepreneurship, including both their successes and the challenges they have learned from.

Watch the whole video above to hear more about how data is revolutionizing the social entrepreneurship space.

Previous episode

Event Recap

Aug 19, 2020

Event recap | Data salon episode 3: Coordinating data privacy and the public interest

By Henry Westerman

On Wednesday, July 30, the GeoTech Center hosted the third episode of the Data Salon series in partnership with Accenture. The virtual event was hosted by Dr. Divya Chander, Chair of Neuroscience at Singularity University, and Ms. Krista Pawley, Principal and Culture and Reputation Architect at Imperative Impact, in conversation with audience members from across the data and innovation space.


The post Event recap | Data salon episode 4: Data science and social entrepreneurship appeared first on Atlantic Council.

The ‘Digital Ocean’ as a model for innovation in the perfect storm https://www.atlanticcouncil.org/blogs/new-atlanticist/the-digital-ocean-as-a-model-for-innovation-in-the-perfect-storm/ Wed, 19 Aug 2020 13:08:09 +0000 https://www.atlanticcouncil.org/?p=289081 By capitalizing on opportunities such as the ‘Digital Ocean’ NATO can help provide solutions to the megatrends that will define this century, while fulfilling its core mission of providing security to its nearly one billion citizens.

The post The ‘Digital Ocean’ as a model for innovation in the perfect storm appeared first on Atlantic Council.


Megatrends that shape the current century point to a future in which our security and environment are inextricably linked. Population changes, exponential technological growth, climate change, and scarcity of resources, both natural and financial, set against rising global insecurity create a situation in which our international institutions must innovate or risk becoming irrelevant and financially unsustainable.

NATO is positioned to lead in this era of risk and innovation. By capitalizing on opportunities such as the ‘Digital Ocean’ NATO can help provide solutions to the megatrends that will define this century, while fulfilling its core mission of providing security to its nearly one billion citizens.

The perfect storm of challenges

The asymmetric demographic change we are already witnessing will shrink the Western working-age population markedly. By mid-century, every fourth person in Europe will be over sixty-five years old, and by 2080 there will be fewer than two working-age persons for each elderly person in Europe. We know now, decades in advance, that there will be fewer taxpayers. A falling working-age population brings shrinking budgets, while the need for resources and workers in numerous sectors keeps increasing.


Climate change will impact virtually every aspect of society. More frequent severe weather events will bring high economic costs—heat waves, forest fires, storms, and floods that will test the health of all species. Food and water shortages will accelerate tensions in already fragile regions. Currently, we’re in a hunger crisis, with 260 million people predicted to be without enough food by the end of the year. Rising temperatures and extreme events will further harm livestock and diminish crop yields, driving starvation. Around 1.5 billion people live in river basins where the demand for fresh water exceeds the natural recharge level, and by 2030 the global water demand will exceed current sustainable supplies by 40 percent. Although 70 percent of the earth is covered with water, only 2.5 percent of it is freshwater. Ironically, while our freshwater reserves are falling, our oceans are rising as ice sheets and glaciers melt into the sea and eat away at our coastlines, displacing millions of people. Water scarcity, extreme weather events, rising water levels, food shortages, and other climate change calamities will make parts of our world simply unlivable, causing migration pressure and increasing the risk of further cultural tensions and conflicts.

Rapidly developing technologies have already changed the ways people communicate and do business, governments serve their citizens, and societies protect their interests. Exponential growth in technology, computing power, and artificial intelligence has been and will continue to be felt by every state and every sector.

These megatrends have emerged amid an increasingly complex security environment in which a resurgent Russia aggressively promotes misinformation and engages in provocative behavior to create instability around the globe. Further, China is in a race to become the global leader in technology, trade, and both digital and physical infrastructure. This security situation is of course further exacerbated by the global pandemic currently ravaging our health care systems and economies.

Granted, the challenges are many, but so too are the opportunities to launch a new era of institutional and technological innovation unrivaled in history.

How NATO can harness the ‘Digital Ocean’ revolution

NATO is well-positioned to lead this new era of innovation. With its thirty Allies and over one trillion dollars in annual defense expenditures dedicated to the collective defense of their nearly one billion citizens, NATO is the largest and most powerful military alliance on the planet. The Alliance is uniquely situated at this nexus of security and environment. 

Perhaps the best illustration of this nexus is NATO’s maritime domain. The seas remain essential for global trade, with 90 percent of the world’s trade conducted by sea. Additional trade routes are opening in the Arctic due to climate change, exposing NATO’s northern flank to Russian and Chinese fleets. Furthermore, the global digital economy runs on cables on the ocean floor. It is the sea that connects us all, powers the global economy, and is primed for innovation.

NATO could lead this innovation by bringing together key stakeholders across government, academia, and industry to create a ‘digital ocean’ and exploit enormous swaths of data with artificial intelligence-enhanced tools to predict weather patterns, get early warning of emerging changes and risks, ensure the free flow of trade, and keep a close eye on migration patterns and a potential adversary’s ships and submarines. And it could be done in a sustainable, carbon-neutral manner by leveraging the “Blue Tech” revolution currently underway. Innovators across Europe and North America continue to design and build a diverse array of maritime surface and subsurface drones. Many of these maritime drones are propelled by wind, wave, and solar energy and carry sensors that can collect data critical to unlocking the yet untapped potential of the ocean.

If NATO Allies could stitch these drones together in a secure digital network, they could essentially create an ‘Internet of Things’ for the ocean, a ‘digital ocean’ spanning from seafloor to satellite and stretching across millions of square miles. It is clear no single nation could undertake such an effort on its own, nor would it achieve the synergistic network effects an alliance like NATO offers when such an effort is undertaken in a coherent manner. There are significant fiscal benefits as well, as maritime drones greatly enhance the capabilities of ships, submarines, and other platforms at a fraction of the cost. These savings would be magnified by the fact that the digital ocean would be powered by free and sustainable energy sources like wind, wave, and solar. The digital ocean will drive the ocean economy, which is now worth $2.5 trillion a year. It has the potential to bring in new solutions and to use the tech change megatrend for the benefit of all—to create a more sustainable planet as well as a robust economic driver through applications such as offshore wind, sustainable aquaculture, and carbon sequestration through growing food crops like seaweed.

Charting a new course

This type of sustainable innovation needs to occur across all domains and in all our international institutions if we are to meet the challenges of the coming decades. Population trends, technology shifts, climate change, and scarcity of resources make an already complex security situation that much more daunting. But we are not powerless in the face of the ‘Perfect Storm’ when megatrends collide; we have the ability to invent our own future.

Why act now? Because the technological shift is making it possible to gather data from every part of the ocean and to interpolate from what is gathered. We must take bold, innovative action now and chart a new course to ensure the health of our planet and the security of our citizens for generations to come.

Keit Pentus-Rosimannus is a Member of the Estonian Parliament and vice-chairwoman of the Reform Party, the largest party in parliament. She was the Minister of Foreign Affairs for Estonia from 2014 to 2015, having previously been the Minister of the Environment from 2011 to 2014.

Michael D. Brasseur is the co-founder and first Director of NATO’s Maritime Unmanned Systems Innovation and Coordination Cell (MUSIC^2).  He is a former captain of two US Navy warships and has sailed the world’s oceans partnering with friends and Allies. The views presented here are his own, and not that of NATO or the US Navy.

Further reading:

The post The ‘Digital Ocean’ as a model for innovation in the perfect storm appeared first on Atlantic Council.

Preparing for the post-pandemic tech environment: Dr. David Bray https://www.atlanticcouncil.org/insight-impact/in-the-news/preparing-for-the-post-pandemic-tech-environment-dr-david-bray/ Tue, 11 Aug 2020 10:30:24 +0000 https://www.atlanticcouncil.org/?p=286479 Dr. David Bray, the GeoTech Center's director, envisions a future where proper preparedness and a people-centered perspective will allow humanity to harness the incredible innovations anticipated in the coming years for good. On an episode of the "Futurized" podcast with Trond Undheim, PhD, David discussed this vision, and how we at the GeoTech Center are working to make it a reality.

The post Preparing for the post-pandemic tech environment: Dr. David Bray appeared first on Atlantic Council.


At the moment, few around the world can imagine a more disruptive event than the COVID-19 pandemic. After all, the pandemic has shaken up societies in all parts of the planet, calling into question the very nature of our interconnected 21st-century society. However, at the GeoTech Center, we recognize that COVID-19 will only be the beginning of a wave of revolutionary change that will be brought on by the next stage of technological innovation. If we as a society fail to properly prepare for what is coming, we stand to suffer even worse consequences than what COVID-19 continues to bring.

Dr. David Bray, the GeoTech Center’s director, envisions a different future, where proper preparedness and a people-centered perspective will allow humanity to harness the incredible innovations anticipated in the coming years for good. On an episode of the “Futurized” podcast with Trond Undheim, PhD, David discussed this vision, and how we at the GeoTech Center are working to make it a reality.

“The takeaway is that the world now needs to ensure that new technologies not only contribute to innovation but also simultaneously empower people, increase prosperity and secure peace. One way we talked about is to develop data trusts to secure that exchanges of data are mutually beneficial and provide ethical and governance support.”

Listen to the entire episode of “Futurized” to hear more of Dr. Bray and Dr. Undheim’s analysis of the post-pandemic world to come.

The post Preparing for the post-pandemic tech environment: Dr. David Bray appeared first on Atlantic Council.

“Instrumenting the planet:” Dr. David Bray discusses Internet of Things with AIPCA https://www.atlanticcouncil.org/insight-impact/in-the-news/instrumenting-the-planet-dr-david-bray-discusses-internet-of-things-with-aipca/ Tue, 28 Jul 2020 01:52:27 +0000 https://www.atlanticcouncil.org/?p=281030 With so many devices providing so much data on individuals all across the world, Dr. Bray emphasized the need for new, intentionally designed infrastructure for managing and accounting for the data produced by the IoT in a discussion with Jeff May of AICPA. In line with the GeoTech Center's mission, Dr. Bray asserted that the goal of any new data and IoT management regimen should be to propel society towards a greater level of transparency and participation, empowering individuals with data rather than enveloping them in a surveillance state.

The post “Instrumenting the planet:” Dr. David Bray discusses Internet of Things with AIPCA appeared first on Atlantic Council.


The public may consider the Internet of Things–referring to the many sensors and other internet-connected devices that can enhance our daily lives–a future technological promise, but the GeoTech Center recognizes that IoT has been a growing part of our lives for decades. Though most might not realize it, in 2013 the number of networked devices for the first time equaled the human population. Within two years, that number of networked devices had doubled.

These sensors, along with traditional digital systems, are producing unfathomable amounts of data: an estimated 4 zettabytes, or 4 billion terabytes, were stored in databases around the world in 2013, a number expected to increase to 175 zettabytes by 2025. For context, that amount of data is roughly twice the size of every conversation of any kind that humans have ever had as a species.
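As a quick sanity check on those figures (a back-of-the-envelope sketch using decimal SI prefixes, which is an assumption about how the estimates were expressed), 1 zettabyte equals one billion terabytes, so 4 zettabytes is indeed 4 billion terabytes, and the 175 zettabytes projected for 2025 would be roughly a forty-four-fold increase over 2013:

```python
# Decimal (SI) storage prefixes: 1 ZB = 10**21 bytes, 1 TB = 10**12 bytes.
ZB = 10**21
TB = 10**12

stored_2013_zb = 4       # estimated data stored worldwide in 2013
projected_2025_zb = 175  # projected for 2025

print(f"2013: {stored_2013_zb * ZB / TB:,.0f} TB")  # 4,000,000,000 TB
print(f"Growth 2013 -> 2025: {projected_2025_zb / stored_2013_zb:.1f}x")  # 43.8x
```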

With so many devices providing so much data on individuals all across the world, Dr. Bray emphasized the need for new, intentionally designed infrastructure for managing and accounting for the data produced by the IoT in a discussion with Jeff May of AICPA. In line with the GeoTech Center’s mission, Dr. Bray asserted that the goal of any new data and IoT management regimen should be to propel society towards greater transparency and participation, empowering individuals with data rather than enveloping them in a surveillance state.

Check out the video for more of Dr. Bray’s ideas for IoT and data management.

The post “Instrumenting the planet:” Dr. David Bray discusses Internet of Things with AIPCA appeared first on Atlantic Council.

Breaking trust: Shades of crisis across an insecure software supply chain https://www.atlanticcouncil.org/in-depth-research-reports/report/breaking-trust-shades-of-crisis-across-an-insecure-software-supply-chain/ Mon, 27 Jul 2020 03:44:00 +0000 https://www.atlanticcouncil.org/?p=275631 Software supply chain security remains an under-appreciated domain of national security policymaking. Working to improve the security of software supporting private sector enterprise as well as sensitive Defense and Intelligence organizations requires a more coherent policy response, developed together with industry and open source communities.

The post Breaking trust: Shades of crisis across an insecure software supply chain appeared first on Atlantic Council.


After a particularly exhausting day at work in February 2017, Liv wraps up her project and prepares to head home. Managing the power grid for a third of the country is high-stakes work and tiring at the best of times. Packing up her bag, she goes to turn off her computer monitor and notices an update waiting patiently on her screen: “Flash Player might be out-of-date. The version of this plug-in on your computer might not include the latest security updates.” Liv clicks ‘Yes’ to begin the update and hurriedly steps out of her cubicle. As she moves quietly down the hall, her laptop fan whirs as it visits specific URLs before downloading a file called “install_flash_player.exe,” and, covertly, the Trojan.Karagany.B backdoor.

Liv has no reason to suspect that this software update is different from any other but it allows attackers to quickly install additional tools on her device. Leveraging passwords and usernames stolen through an earlier phishing campaign against Liv’s firm, the intruders move quickly across the entire company’s network and proceed to take screenshots of sensitive windows and capture images of the company’s grid operation control panels. What might have seemed like a harmless software update is actually part of a multiphase campaign that could have allowed attackers to stop the flow of electricity to thousands of businesses and homes in the United States.

This malware isn’t fictional. From 2015 to 2017, an extensive campaign called Dragonfly 2.0 saw “Trojanized” software updates alongside phishing emails and watering hole attacks used to gain access to the networks of more than twenty energy sector firms in the United States and in Europe. In an alarming echo of the 2015 attacks on Ukraine’s energy grid, the attackers obtained operational control of several firms’ networks, giving them the capability to sabotage the energy access of thousands of US users. Using compromised third-party software, attackers gained a foothold in operating systems over the course of the campaign. 

Liv wasn’t being careless. Updating software regularly is considered best practice. Yet impersonating updates from trusted third-party vendors gave the Dragonfly attackers access to major firms in the energy sector. The software supply chain presents a significant source of risk for organizations ranging from critical infrastructure companies to government security agencies, but the state of security in this supply chain doesn’t match up to the risk. There are opportunities for the policy community and industry to work together to address the problem.

Executive summary

Society has a software problem. Since Ada Lovelace wrote the first computer program for an early mechanical computing device in the 1840s, software has spread to every corner of human experience. Our watches now have Internet connections, combat aircraft come with more code than computer operating systems, and every organization from the Internal Revenue Service to an Etsy storefront relies on software to serve its customers. No longer confined merely to computers, embedded software now controls the operation of complex power generators, medical hardware, the behavior of automotive brake pedals, and planetary-scale datasets. As one commentator put it, “software is eating the world.”

With software come security flaws and a long tail of updates from vendors and maintainers. Unlike a physical system that is little modified once it has left the factory, software is subject to continual revision through updates and patches. This makes the supply chain for code long and subject to myriad flaws, both unintentional and malicious. The private sector’s aggregated risk from software supply chain compromises continues to grow. Ever more feature-rich software is finding its way into a widening array of consumer products and enterprise services, enlarging the potential attack surface. Organizations increasingly outsource IT management and services to cloud computing and managed service providers (MSPs), raising the likelihood that a given firm will be impacted by an attack targeting one of these providers, like the successful penetration of eleven Saudi MSPs in 2018. A similar kind of concentration is present in software development, where firms can buy pre-built code from third parties for complex or widely encountered tasks. Treck, a US company, builds software to allow Internet of Things (IoT) devices to communicate over the Internet. In 2020, it was informed of nineteen critical vulnerabilities in its products. These vulnerabilities in one company’s software impacted products from nearly a dozen other manufacturers, like Intel and Caterpillar, potentially affecting hundreds of millions of devices.

The public sector, particularly defense organizations, assumes even greater risk. A generation of Western defense systems, led by those in the United States, benefit from the advantages of Commercial Off-the-Shelf (COTS) procurement. Under a COTS model, defense organizations look to buy and repurpose or build from available commercial components to reduce cost, limit technological lag, and improve development speed. For the United States, COTS software has underpinned a generation of Department of Defense (DoD) systems, leveraging everything from miniaturized low-cost GPS receivers to high-bandwidth satellite data links with unmanned aerial vehicles (UAVs), along with a growing dependence on open-source software (OSS) in logistics and maintenance systems. A flaw in widely used software could undermine the DoD’s ability to interpret and work with large quantities of sensor data. Such an attack could appear innocuous and go undetected for months, like a 2017 incident in which malicious code was substituted for legitimate packages in the Python Package Index (PyPI).

Software supply chain security remains an underappreciated domain of national security policymaking. The debate over 5G and telecommunications security, for example, has focused largely on hardware manufacturing and deployment, yet it is software in these devices that determines data’s confidentiality and path through the global Internet. The push for open architecture in telecommunications, the Open Radio Access Network (ORAN) model, and industry trends toward network function virtualization mean even more hardware functionality will shift to software—making software supply chain security a critical competitive dimension. Exclusive focus on hardware security has resulted in missed opportunities for policy makers. Continued inaction to secure software supply chains risks compromising important intelligence, defense, and public policy programs and will undermine the long-term innovative potential of already faltering US technology dominance.

Software supply chain attacks are popular and impactful, and they are used to great effect by states, especially China and Russia. High-profile attacks like NotPetya have forced policy makers to confront the importance of software supply chains, but only episodically and without leading to long-term security improvements. Improved technical security measures could raise the cost of attacks, but the United States and its allies must respond to systemic threats and counter the efforts of states to undermine trust in software. The cost of these attacks is as much an issue for states in the European Union (EU) and Asia as it is for the United States. This report’s first trend discusses patterns of state attacks against the software supply chain to motivate several recommendations on new alliance models and operational collaboration.

The security of the software supply chain matters as much as the codebase first delivered to a customer, but it receives comparably less attention and formal treatment in policy than secure development practices or vulnerability discovery. This report evaluates 115 software supply chain attacks and vulnerability disclosures collected from public reporting covering the past ten years. All of these incidents are based on public blogs, write-ups, and media articles, and so do not include private disclosures of attacks that never made it into the public domain. As such, even these trends likely undercount the frequency of attacks on certain high-value targets and are biased away from regions throughout the Global South that receive less focus from the cybersecurity industry. This report identifies five trends in software supply chain attacks over the past decade and offers three clusters of recommendations to address, mitigate, and counter them. The five trends identified are:

Deep Impact from State Actors: States have targeted software supply chains with great effect; their attacks alone constitute almost a quarter of this report’s dataset.1 The majority of cases surveyed here resulted, or could have resulted, in remote code execution. We found that states were more likely to hijack updates, a comparatively sophisticated distribution vector. Several of these cases, like the Dragonfly 2.0 attacks on energy sector targets, are representative of longer campaigns rather than single events. While there are examples from Egyptian, Indian, Iranian, North Korean, and Vietnamese actors, Russia and China were far and away the most frequent. Examples: CCleaner, NotPetya, Kingslayer, SimDisk, and ShadowPad.

Hijacking Updates: These attacks were generally carried out by states or extremely capable actors. Updates that were signed either by stolen or forged certificates carried malware to targets. The advanced malware often contained components allowing it to spread further from the infected machine either along networks or in hardware. These attacks were more likely to encrypt data, target physical systems, or extract information, and, generally, were far more sophisticated than app store components. Examples: Flame, Stuxnet, CCleaner 1 and 2, NotPetya, Adobe pwdum7v71, Webmin, and PlugX.

Undermining Code Signing: The technique relies on public key cryptography and a certificate system to ensure the integrity of updates and the identity of their authors. Overcoming its protections is a critical step in any supply chain attack, enabling anything from simple alterations of open-source code to complex nation-state espionage campaigns. It is the technical process that fosters trust in software and is the central protection against hijacked updates. Examples: ShadowHammer, Naid/McRAT, and BlackEnergy 3.

Open-Source Compromise: These incidents saw attackers either modify open-source code by gaining account access or post their own packages with names similar to commonly used ones. The malicious code they spread usually stole victims’ data and occasionally tried to target payment information. The actors were usually criminals, and their attacks were generally quickly discovered. Examples: Cdorked/Darkleech, RubyGems Backdoor, HackTask, Colourama, JavaScript 2018 Backdoor, and PyPI repository attack.

App Store Attacks: These attacks used the Google Play Store, Apple’s App Store, and other third-party app distributors to spread malware to mobile devices. Usually, the apps were designed by attackers to appear legitimate, though some were legitimate apps that they managed to compromise. The malicious apps tended to run adware, steal payment information, and extract data sent to a server operated by the attackers. Most perpetrators were criminals, though some state-backed tracking also occurred. Examples: Sandworm’s Android attack, ExpensiveWall, BankBot, Gooligan, and XcodeGhost.

These trends show that software supply chain attacks are popular and impactful. First, they exploit natural seams between organizations and abuse relationships where users expect to find trustworthy code; targeting the supply chain for code can magnify the value of a breach and sow distrust in widely used open-source projects. Second, these attacks can drive compromise deep into an organization’s technology stack, undermining development and administrative tools, code-signing, and device firmware. And, third, software supply chain attacks have strategic utility for state actors and have been used to great effect, especially by Russian and Chinese groups. This trend is likely to continue and should motivate action from US policy makers.
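To make the code-signing trend above concrete, the sketch below shows the basic check that hijacked updates try to defeat: before installing an update, the client verifies that the update’s bytes were signed by the vendor’s private key. This is a minimal illustration rather than any vendor’s actual update framework; it uses the open-source cryptography package, and a freshly generated RSA key stands in for a vendor certificate. Real update systems add certificate chains, revocation, and rollback protection.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the vendor's signing key; in practice the private half lives in
# an HSM and only the public key ships with the client.
vendor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
vendor_public_key = vendor_key.public_key()

update = b"install_flash_player.exe contents (legitimate build)"
signature = vendor_key.sign(update, padding.PKCS1v15(), hashes.SHA256())

def verify_update(blob: bytes, sig: bytes) -> bool:
    """Return True only if the update bytes match the vendor's signature."""
    try:
        vendor_public_key.verify(sig, blob, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

print(verify_update(update, signature))                            # True: untampered
print(verify_update(update + b" + Trojan.Karagany.B", signature))  # False: modified
```

The trends above show why attackers invest in stealing or forging the certificates behind this check: once a signing key or certificate is compromised, the verification succeeds for malicious code as well, which is why the report treats code signing as the central protection against hijacked updates.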

The implication for national security policymakers and the cybersecurity community across the US and allies is that change is necessary to raise the cost, and lower the impact, of software supply chain attacks. New efforts should use existing security controls and make them accessible and low cost for developers. Policymakers should drive new resources and support to open source projects and enable better software supply chain security. Civil society and the cybersecurity standards community hold great potential to help sustain these changes once initiated, especially for open source. The US and allies need to pursue new approaches to joint activity across the Atlantic and Pacific to counter state threats and facilitate more effective long-term investigations of criminal actors. Traditional alliance structures may be insufficient to address threats to major industry players and software supply chains relied on by intelligence and defense departments and agencies. 

This report offers three clusters of recommendations to policy makers and industry to address the insecurity of the software supply chain. First, improve the baseline. The lynchpin of any effort to improve the security of software supply chains broadly will be what impacts the largest number of codebases, not what improves one codebase the most. Perhaps the most useful thing the policy community can do is offer support for widely compatible standards and tools to reduce the burden of secure software supply chain management on developers and project owners. Best practices are not effective in isolation, and below a certain threshold it is difficult to profile which vendors and project owners enforce good security practices. The recommendations in this report focus on bringing the best of public and private sector supply chain security tools into the public domain, aligning these tools with widely supported standards, and calling out key stakeholders to help share both how to adopt and assess against them.

Second, better protect open source. Open-source code forms the basis of most enterprise systems and networks. Even large proprietary projects, like the Windows operating system, are built on top of huge quantities of open-source code. The security of open-source projects, and the apparent ease with which attackers can introduce insecure code, is a continuing concern. The fluidity with which anyone can commit code to an open-source project is at once a core strength and glaring weakness. The policy community must support efforts to secure open-source projects, or it will watch a critical and innovative ecosystem wither.
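One low-cost control in this direction, suggested by the open-source compromise trend above, is screening requested dependency names for typosquats before they are installed. The sketch below is illustrative only: the allowlist is a hypothetical set of packages an organization relies on, and the similarity threshold is arbitrary. It flags names that are close to, but not exactly, a known package, such as the Colourama lookalike of colorama noted in the trend examples.

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of popular, trusted package names an organization uses.
POPULAR_PACKAGES = {"requests", "numpy", "colorama", "urllib3", "django"}

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_typosquats(requested, threshold=0.85):
    """Return (requested, lookalike) pairs close to, but not exactly, a known package."""
    suspicious = []
    for name in requested:
        if name in POPULAR_PACKAGES:
            continue
        for known in POPULAR_PACKAGES:
            if similarity(name.lower(), known) >= threshold:
                suspicious.append((name, known))
    return suspicious

# "colourama" mimicked "colorama" in a real PyPI incident; "reqeusts" is a made-up near-miss.
print(flag_typosquats(["colourama", "reqeusts", "numpy"]))
```

A check like this is no substitute for securing the projects themselves, but it illustrates the kind of tooling that policy support for open-source security could make a default part of package installation.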

Third, counter systemic threats. Trust is the critical coin of the realm, and the United States must work with international partners to protect against deliberate efforts to undermine software supply chains. Efforts by states to impersonate software vendors undermine defenders’ ability to patch flaws in code and improve the security of software through the entirety of its lifecycle. This lifecycle is critical to sustaining the national security advantages granted by current and near-future technologies like sensor fusion networks, autonomous supply chains, and “smart” devices. Failure to protect the ability to trust software could cripple the benefits gained from it. These recommendations zero in on systemic threats to trust and highlight new institutional responses from the US and allies to mitigate the activities of states like Russia and China. Transoceanic responses are needed to protect the software supply chain, and the US can extend existing partnerships to support more effective joint attribution and collaboration with allies.

Compromising the software supply chain enables attackers to deliver malicious code in the guise of trusted programs. This can be a terribly effective technique both to spread an attack widely and to target well-secured systems. These risks are particularly acute for the national security community, whose ability to churn through huge quantities of surveillance data, run complex weapon systems, and support modern logistics systems is dependent on software, most of it developed outside of government. The solution is neither panic nor a moonshot, but rather a renewed focus on software supply chain security practices, new investment from public and private sectors, and revisions to public policy that emphasize raising the lowest common denominator of security behavior while countering the most impactful attacks. For more on this project and the dataset behind it, please visit us on the web here.

1. Introduction

The state of security in the software supply chain is inadequate and, in some critical respects, getting worse. The policy community must refocus on this topic amidst competing national security priorities and do more to incentivize and support private sector security. Failure to do so creates new and systemic national security risks for the United States and its allies. Attackers capitalizing on vulnerable software supply chains are able to compromise trusted software and important cybersecurity protections to impact large numbers of critical systems. 

Significant economic output depends on the security of software every day—the sudden shift to remote work is exemplary of this dependence. Without reliable video conferencing, email, and file sharing, much of the world’s knowledge work would slow to a crawl. Even intangible capacities, like our capacity to innovate, rely in large part on digital tools to collaborate, coordinate, and revise. All of these tasks are dependent on software. Critical defense systems are equally dependent on software. Digitized logistics processes, the ability to work with massive quantities of data, semiautonomous sensors, and munitions all depend on a chain of digital logic embedded in software programs. Part of the F-35’s own software supply chain, the Autonomous Logistics Information System (ALIS), is plagued with vulnerabilities and testing gaps, potentially compromising the entire weapons platform. The supply chain for these physical products is a recognized priority, but the software enabling these systems must be as well.

Software supply chain attacks are not an esoteric or isolated tactic, as this report and its associated dataset show; they are popular, impactful, and have been used to great effect by state actors. In this dataset, we have collected 115 instances, going back a decade, of publicly reported attacks on the software supply chain or disclosures of high-impact vulnerabilities likely to be exploited in such attacks.2 Each instance of an attack or vulnerability was coded with incident name, date, victim, and likely source, along with its basic technical characteristics. Figure 1 shows the distribution of attacks and disclosures in the dataset over the examined period.

A software supply chain vulnerability is any software vulnerability that can evolve into an attack, if exploited. In our dataset, we include vulnerabilities that would enable the injection or distribution of malicious code (the means of access) rather than what would be found in the code’s payload (the harmful effect). Figure 2 shows the distribution of attacks versus disclosures in the dataset across impacted codebases.

These incidents reveal a stunning diversity—there are as many potential target types in the supply chain as there are types of software. Certain categories have consistently been in the crosshairs for years now: software updates, firmware, and third-party apps. In this dataset, others are being targeted more today than in the past. For instance, as smartphones become ubiquitous, mobile apps, both attacker-made and attacker-infiltrated, have become increasingly popular targets—as have open-source libraries on which much software relies.

Figure 3 uses an abstracted model to show how the 115 attacks and disclosures in our dataset are distributed across a notional software supply chain. There is no good way to concisely represent software development, and this graphic is not intended to capture all of the intricacies of the process. This representation of a waterfall-style model matches much of the software captured in the study. Agile development methodologies like software development/IT operations (DevOps) and development/security/operations (DevSecOps) are important but are no magic solution for software supply chain security issues. These Agile approaches and the philosophy of continuous integration create new opportunities to quickly propagate risk through a codebase.

Figure 3. Software supply chain life cycle

While attacks occur throughout this supply chain model, the figure highlights their concerning concentration against app stores, update processes, and OSS projects, and around code signing. Incidents within these types often shared attack vectors, codebases, and distribution vectors, and were carried out by similar actors with similar goals. Figure 3 underlines just how many different opportunities exist to undermine a software supply chain.

The purpose of this dataset is to compile a variety of software supply chain attacks and discovered vulnerabilities, and to catalogue the different characteristics of each incident. While the dataset is not exhaustive, we took care to use consistent definitions and typologies throughout. We defined attacks as occurring when attackers access and edit software somewhere in a software development supply chain, compromising a target farther down the chain by inserting their own malicious code. Disclosures that came alongside an ongoing attack were coded as an attack. Attacks do not require any public disclosure of a vulnerability, only public notice of the attack itself. A software supply chain vulnerability is any software vulnerability that, if exploited, would constitute such an attack. Because this definition could sweep in a wide variety of critical vulnerabilities, we limited its scope to those that would enable injection of malicious code, not merely compound the effects of an attack’s payload.

Coding the Dataset
Date: Best estimated start date of the attack. When no start date is identifiable, discovery date is used instead.

Name: Name(s) of the attack/incident. Multiple names are included where possible.

Software Supply Chain Attack: A software supply chain attack occurs when an attacker accesses and edits software in the complex software development supply chain to compromise a target farther down the chain by inserting their own malicious code.

Software Supply Chain Vulnerability: A software supply chain vulnerability is any software vulnerability that could evolve into an attack, if exploited (but which has not been used in an attack).

Affected Code: The code modified by attackers, or the code that had a vulnerability in it.

Code Owner: Who owned that code, or, if open source, the repository name.

Codebase: Categories describing the codebase, product, or service modified by attackers. First party refers to the same author as the creator of the machine; third party is code written by another author whose source code is not publicly available; and open source is code that is publicly viewable.

Attack Vector: How the attacker was able to edit the affected code without detection.

Distribution Vector: How the attacker was able to distribute the modified code.
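
To make the coding scheme concrete, the sketch below shows how a single dataset entry might be represented programmatically. It is a minimal Python illustration with hypothetical field values, not the project's actual data format or tooling.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SupplyChainIncident:
    """One row of the dataset, using the fields described above."""
    date: str                  # best estimated start (or discovery) date
    name: str                  # incident name(s)
    is_attack: bool            # True = attack, False = vulnerability disclosure
    affected_code: str         # code modified by attackers, or found vulnerable
    code_owner: str            # owner, or repository name if open source
    codebase: str              # "first party" | "third party" | "open source"
    attack_vectors: List[str] = field(default_factory=list)        # how the code was edited
    distribution_vectors: List[str] = field(default_factory=list)  # how the modified code spread

# Hypothetical example entry (all values illustrative only).
example = SupplyChainIncident(
    date="2017-09",
    name="Example: trojanized installer",
    is_attack=True,
    affected_code="Vendor installer package",
    code_owner="ExampleSoft",
    codebase="third party",
    attack_vectors=["compromised developer account"],
    distribution_vectors=["hijacked update"],
)
```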

All incidents were collected via manual online search for media articles, industry reporting, and researcher write-ups of incidents, using information in the public domain. As a result, the dataset is a lower bound on the true population of attacks: it undercounts incidents that have never been made public and does not include vulnerability disclosures that were not explicitly and publicly reported, e.g., those documented only in patch notes (though mining patch notes is an opportunity for future work). To analyze trends in these attacks and disclosures, we established five key measures to better understand the different elements of supply chain compromise: attack vector, distribution vector, affected codebase, impact, and supply chain potential. None of these five measures is exclusive: a single entry in the database can have multiple insertion points or methods, distribution vectors, and so on. Information on an entry’s date, responsible actor, downstream target, links to relevant articles, and a paragraph of technical summary was also included. More detailed descriptions of the dataset’s fields can be found in the codebook online.

Although supply chain security has become a hot-button issue for both private industry and policy makers in recent years, the problem is not being addressed holistically, and software has largely taken a back seat to 5G in public debate. Clear and critical policy gaps exist in securing our software supply chains. Users are increasingly vulnerable, whether as a result of malicious apps in app stores, forged digital certificates, or vulnerable open-source code frameworks.

Moreover, this issue is not limited to the United States. State attackers have regularly employed software supply chain intrusions to attack the United States and its allies. Governments around the world are increasingly dependent on open-source and commercial off-the-shelf software for systems relevant to national security, and are exposed to vulnerabilities similar to those faced by the United States. Attacks such as PhantomLance and KingSlayer demonstrate that an attack on systems in one country can easily spread to other states through overlapping supply chains and shared software suppliers. The global nature of software supply chains and the international character of threats make it imperative for the United States to work with partners in the private sector and allies around the world to address this security challenge.

This report offers a series of recommendations that call for a coalition response to the systemic threats posed by software supply chain attacks and vulnerabilities. Working within existing alliance structures while broadening cooperation will allow the United States and its allies to limit the threat actors seeking to undermine trust in the global technology ecosystem, while adapting current alliance networks to the changing threat landscape. Shifting global cybersecurity norms, increasing information sharing to support operational collaboration, and reshaping the focus of interagency equities are all crucial for addressing the serious national security implications of attacks against the software supply chain.

There are opportunities for improvement and a closing window in which to seize them. The remainder of this report discusses five trends in the attacks and disclosures surveyed, including the damaging use of software supply chain attacks by a handful of major adversaries of the United States, efforts to undermine code-signing processes and hijack software updates, and the popularity of attacks on open-source projects and app stores. Following this is a summary of key takeaways from the dataset and specific policy recommendations to drive improved baseline security across software supply chains, better protect open-source software, and counter systemic threats to these supply chains. 

2. Attacks on the software supply chain

A software supply chain attack occurs when an attacker accesses and modifies software in the complex software development supply chain to compromise a target farther down the chain by inserting their own malicious code. These inserts can be used to further modify code by obtaining system permissions or to directly deliver a malicious payload. Modern software products contain a vast number of dependencies on other code, so tracking down which vulnerabilities compromise which products is a nontrivial organizational and technical feat. There are efforts underway to make tracking these dependencies, and discovering them in the event of an incident, more straightforward. The most widely recognized is the Software Bill of Materials (SBOM) multi-stakeholder initiative coordinated by the US Department of Commerce, which remains in development.
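
As a rough illustration of the dependency-tracking problem the SBOM effort is meant to address, the hedged Python sketch below enumerates the packages installed in the current environment along with their declared requirements. It is a minimal sketch of a dependency inventory, not the NTIA SBOM format or any official tooling, and it only scratches the surface of transitive dependencies.

```python
from importlib import metadata

def dependency_inventory():
    """Build a crude, SBOM-like view: each installed package and its declared dependencies."""
    inventory = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"] or "UNKNOWN"
        requires = dist.requires or []   # declared dependency strings, if any
        inventory[name] = requires
    return inventory

if __name__ == "__main__":
    # Print a simple summary; even small environments fan out quickly.
    for package, deps in sorted(dependency_inventory().items()):
        print(f"{package}: {len(deps)} declared dependencies")
```

Even this shallow listing makes the point: each declared dependency has its own dependencies, and a vendor that cannot enumerate the full tree cannot know which upstream compromise affects its product.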

Software supply chain attacks take advantage of established channels of system verification to gain privileged access to systems and to compromise large networks. They undermine foundational tenets of trust in software development. Even with the increased exposure gained by these high-profile attacks, we are only just beginning to understand how wide-reaching the impact can be.  

The code that attackers and researchers target has changed over time within this dataset. Figure 4 shows the distribution of codebases targeted by these incidents, while Figure 5 shows that distribution over time. Mobile apps have grown more frequent in the dataset, especially those created by attackers and marketed as legitimate. For instance, in 2017, the Android app Lovely Wallpaper hid malware under the guise of providing phone background images. The malware would gain device permissions and charge users’ accounts for “premium” services they had not signed up for. It, and at least fifty other apps hiding the same payload, infected as many as 4.2 million devices, and successors continued to infiltrate the Google Play Store after the original offenders were removed. 

Third-party firmware is also an increasingly popular target for attackers and researchers alike—a particularly troublesome development given the inherent difficulty of patching firmware and firmware’s ability to make privileged edits while evading most anti-malware detection. The Equation Group demonstrated the potency of these attacks with its GrayFish attack on hard drive firmware, which allowed it to record system data in a nearly inaccessible portion of machine memory for remote extraction at a later point. Removing the data cache was nearly impossible short of destroying the physical system, and even detecting the infection was generally infeasible.

Proprietary firmware, too, is more frequently discovered to be insecure. In August 2019, McAfee researchers uncovered a vulnerability in Avaya firmware for the 9600 series desk phone, which could have been present at 90 percent of Fortune 500 companies. The bug resulted from the inclusion of unmaintained open-source code and would have allowed attackers to crash a system, run code with root access, and record or listen in on the phone’s audio.

From these general patterns across the 115 incidents in this dataset, we observed five more specific trends:

  1. Deep Impact: State actors target the software supply chain and do so to great effect.
  2. Abusing Trust: Compromising code signing is a deeply impactful tactic for subverting software supply chains, as it can be used to undermine other related security schemes, including program attestation.
  3. Breaking the Chain: Hijacked updates are a common and impactful means of compromising supply chains, and they recurred throughout the decade despite being a well-recognized attack vector.
  4. Poisoning the Well: Attacks on OSS were popular, and unnervingly simple in many cases.
  5. Downloading Trouble: App stores represent a poorly addressed source of risk to mobile device users, as they remain popular despite years of evidence of security lapses.

2.1 Deep impact: States and software supply chain attacks

States have used software supply chain attacks to great effect. Hijacked updates have routinely delivered the most crippling state-backed attacks, thanks in part to a continued failure to secure the code-signing process. And while concerns about the real-world ramifications of attacks on firmware, IoT devices, and industrial systems are warranted, these are far from novel threats. Stuxnet and other incidents have had physical impacts as early as 2012. Several of these incidents, like NotPetya and the Equifax data breach in 2017, impacted millions of users, showcasing the immense potential scale of software supply chain attacks and their strategic utility for states. For Russia, these attacks have meant access to foreign critical infrastructure, while for China, they have facilitated a massive and multifaceted espionage effort. This section discusses trends in known state software supply chain attacks supported by publicly reported attribution, focused on four actors: Russia, China, Iran, and North Korea. The data in this report also include incidents linked to Egypt, India, the United States, and Vietnam, for a total of 27 distinct attacks.

Russia

Russian actors were responsible for the 2017 NotPetya attack, one of the most destructive software supply chain attacks to date. Four of the five attacks attributed to Russia in our dataset involved initial insertion of malicious code into a third-party app, made possible by gaining access to an account with editing permissions. After inserting a malicious payload, Russia relied on diverse means to distribute this code, including hijacked updates, a worm component, phishing, and a hardware component. Notably, every attack involved multiple vectors for distribution.

Interestingly, three of the five attacks involved downstream targets in the energy sector. For instance, the 2015 Dragonfly 2.0 attack relied on a variety of vectors—including spear-phishing, watering-hole-style attacks, and Trojanized software updates—to obtain network credentials from targets in the US, Swiss, and Turkish energy sectors. Launched by the Russian APT Energetic Bear, the attack gave attackers operational control of interfaces that power company engineers use to send commands to equipment like circuit breakers, and with it the ability to stop the flow of electricity into homes and businesses in the United States. Russian attackers similarly compromised Ukraine’s power grid in 2015, interrupting service to 225,000 customers in a complex infrastructure attack that involved spear phishing, credential stealing, malware insertion, distributed denial-of-service (DDoS) attacks on call centers, and firmware attacks. Russia has repeatedly engaged in software supply chain attacks against Ukrainian entities or programs since 2015, in line with its practice of testing cyber war tactics in Ukraine.

China

Of the state actors featured in the dataset, China has conducted the most software supply chain attacks (eleven) and demonstrated the greatest level of consistency in attack and distribution methods. The earliest case attributed to Chinese actors was in 2011, suggesting that China began targeting software supply chains earlier than other state actors. Most Chinese attacks relied on a third-party application for their initial insertion point; in the majority of cases, the affected code was found in an update server. Chinese attacks were notably consistent in the method of distribution: eight of eleven cases relied on hijacked updates to distribute malicious code, while several cases relied on supply chain service providers. For instance, in the 2017 Kingslayer attack, Chinese attackers (likely APT31) inserted malicious code, under a valid signature, into a Windows IT administration application; the code could spread when users updated or downloaded the application. The attack compromised a huge list of higher-education institutions, military organizations, governments, banks, IT and telecom providers, and other enterprises, as the malware installed a secondary package that could upload and download files, execute programs, and run arbitrary shell commands. With respect to impact, Chinese attacks tended to be of vast scale, gaining access to the personal data of millions of users or impacting hundreds of companies.

Chinese software supply chain attacks are aimed more at corporate entities; eight attacks had companies and dependent users as their downstream targets. Given that all Chinese attacks resulted (or could have resulted) in data extraction, this data is consistent with continuing US concerns about Chinese intellectual property theft and economic espionage. The 2020 GoldenSpy campaign notably targeted a multinational tech vendor servicing Western defense sectors, which, while operating in China, was required to install tax software embedded with sophisticated malware. We found that the greatest number of attacks occurred in 2017. The timing may have been influenced by the conflicting, but often hostile, moves taken against China by US President Donald J. Trump’s administration in its first year in office.

North Korea

The dataset features one software supply chain attack likely linked to North Korea—the 2013 compromise of two file-sharing services’ auto-update features in order to launch a DDoS attack on South Korean government websites. The attack was launched by the cyber adversary responsible for the “DarkSeoul” attack on South Korean banking and media in March 2013 and is consistent with other attacks by the group in the past. This group has previously launched cyberattacks on days of historical significance in the United States and South Korea. The attack featured in the dataset occurred on June 25, the anniversary of North Korea’s invasion of South Korea in 1950, which marked the start of the Korean War. In early 2013, North Korea also conducted an underground nuclear test and, in response to tightening sanctions by the United Nations in March, cut off communications with South Korea, threatened to launch a preemptive nuclear strike against the United States and South Korea, and pledged to restart its Yongbyon nuclear plant to provide material for its weapons program. The high-profile nature of the software attack on June 25 and the extensive damage it caused may accordingly be seen in the context of North Korea’s escalation of military provocations at the time.

Technical elements of the attack are also indicative of methods attributed to the adversary and to North Korea more generally. The attack targeted a third-party app and was distributed through a hijacked software update. The dropped malware also allowed for remote code execution and established a botnet to carry out a DDoS attack, consistent with North Korea’s history of launching DDoS attacks (e.g., by North Korean APT Hidden Cobra). The adversary’s targeting of IP addresses that serve as DNS name servers demonstrates careful research into the target in order to maximize damage, an approach also seen in the March 2013 “DarkSeoul” attack.

Iran

The dataset features one software supply chain attack weakly attributed to Iran—the 2020 Kwampirs malware campaign targeting companies in the industrial control systems sector, especially the energy industry. The attack initially targeted supply chain software providers through exploitation of default passwords. The attackers distributed the Kwampirs Remote Access Trojan (RAT) through supply chain software vendors who would install infected devices on the target’s network. The attack resulted, or likely resulted, in backdoor access and data extraction. The Kwampirs RAT was previously deployed in 2018 by a group called Orangeworm in similar attacks against supply chain software companies in the healthcare sector and other entities feeding into the healthcare supply chain. While these attacks and the current campaign have not been directly attributed to Iran, the Kwampirs malware was found to contain numerous similarities to the Shamoon data-wiping malware used by Iranian-linked APT33 and also employed in multiple attacks against the energy sector. FireEye, a California-based cybersecurity firm, noted that the targeting of organizations involved in energy and petrochemicals reflects a common interest and continuing thread among suspected Iranian threat groups. Previous reports suggest Iranian actors have targeted energy sector organizations headquartered in the United States, Saudi Arabia, and South Korea in an attempt to gain insight into regional rivals and potential competitors.

Other state attacks

Other state actors have also engaged in software supply chain attacks; this dataset features incidents attributed to the United States, India, Egypt, and Vietnam. Stuxnet—widely attributed to the United States and Israel—was one of the earliest examples to use stolen certificates and potentially one of a handful to leverage a hardware vector for compromise. Two campaigns (PassFreely and EquationDrug & GrayFish) were linked to the US-attributed Equation Group. In 2013, PassFreely enabled bypass of the authentication process of Oracle Database servers and access to SWIFT (Society for Worldwide Interbank Financial Telecommunication) money transfer authentication. In 2015, the EquationDrug & GrayFish programs used versions of nls_933w.dll to communicate with a C&C server to flash malicious copies of firmware onto a device’s hard disk drive, infecting 500 devices. Both attacks involved a supply chain service provider as a distribution vector.

The United States is generally distinguished from several of the other states highlighted here by the formulation and operation of due process constraints on how attacks like these are developed, targeted, and deployed. One former senior US national security official reflected of Stuxnet, “If a government were going to do something like this, a responsible government, then it would have to go through a bureaucracy, a clearance process… It just says lawyers all over it.” The operational constraints imposed by democratic accountability and the trend toward what Peter Berkowitz labeled the “lawyering of war” offer some meaningful distance between the United States and several of the other states on this list, though those distinctions blur where legal protections are ignored or interpreted beyond the bounds of accepted logic.

The attacks attributed to the other three states notably all involved attacker applications uploaded to a proprietary app store, leading to data extraction and often remote code execution. A January 2020 attack involving three malicious apps on the Google Play Store was linked to APT SideWinder (previously attributed to India). The attack exploited a serious zero-day vulnerability through which the CallCam, Camero, and File CryptManager apps, once downloaded, could extract extensive machine data, including screenshots, and send them to a C&C server. The Egyptian government is believed to have been involved in a hacking campaign against Egyptian human rights activists, in which attackers crafted applications that used OAuth phishing, generating fake requests for access to Google, Yahoo, Hotmail, and Outlook accounts before stealing emails. The attack also led to 5,000 downloads of malicious mobile apps from the Google Play Store that would enable attackers to view call log information. Finally, attackers connected to the Vietnam-linked APT32 (OceanLotus) used a variety of techniques to sneak malware into various browser cleanup apps on the Google Play Store; the attack sent device specs to a C&C server, allowing the attackers to download custom-designed malware onto target devices, likely for espionage. Three attacks from 2011 and 2012 (Duqu, Flame, and the Adobe code signing hack) have not been attributed to a particular state, but likely involved state actors based on the highly sophisticated and targeted nature of the attacks. These attacks relied on a variety of attack and distribution vectors, and were some of the most destructive attacks in the dataset.

As discussed in the introduction, this report’s findings constitute a lower bound on the total population of supply chain attacks and disclosures. The dataset’s contents are shaped by the limitations of publicly available research on software supply chain security—both in the scope and geographical focus of research efforts. Accordingly, the dataset focuses on incidents involving state actors traditionally covered in cybersecurity research, namely China, Russia, North Korea, and Iran, while understating those from countries in the developing world and Global South.

2.2 Abusing trust: Code signing

Code-signing issues were among the most prolific attack vectors in our analysis, with many attacks stemming from self-signed certificates, broken signing systems, and poorly secured account access, as Figure 6 shows. A significant portion of this dataset deals with code signing and how attackers bypass its protections. These incidents occur both in the design and development stages and at deployment, through hijacked updates and compromised deployment servers, a subset discussed in detail in the next section.

Code signing is crucial to analyzing software supply chain attacks because, used correctly, it ensures the integrity of code and the identity of its author. Software supply chain attacks rely on the attacker’s ability to edit code, to pass it off as safe, and to abuse trust in the code’s author in a software environment full of third-party dependencies (so many that vendors are usually unaware of the full extent of their dependencies). If code signing were unassailable, there would be far fewer software supply chain attacks, as applications could assert without concern that code was unmodified and came straight from the authentic source.

But there are issues with code signing, so there are software supply chain attacks, and understanding the mechanism is crucial. Code signing is an application of public key cryptography and the trusted certificate system to ensure code integrity and source identity. When code is sent to an external consumer, it usually comes with a signature and a certificate. The signature is a cryptographic hash of the code itself that is encrypted with a private key. The consumer uses the public key associated with the code issuer to decrypt the hash and then hashes their copy of the code. If their hash matches the decrypted hash, they know their copy of the code is the same as the code that was hashed and encrypted by the author. The certificate is a combination of information that generally includes the name of the certificate authority (CA), the name of the software issuer, a creation and expiration date, and the public key associated with the issuer’s private key. It provides the CA’s assurance to the consumer that the issuer of the code holds the private key that is cryptographically connected to the listed public key. The software issuer alone possesses the private key. The system’s cryptography is similar to that of the certificate system behind TLS/SSL encryption (HTTPS connections), but distinct in its application. (Read more on the difference between a code-signing certificate and an SSL certificate here.)
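
The Python sketch below, using the third-party cryptography library, walks through the sign-and-verify round trip described above: the author hashes and signs the code with a private key, and the consumer verifies the downloaded copy with the corresponding public key. It is a simplified illustration of the mechanism only, not any vendor's actual signing infrastructure, and it omits the certificate chain entirely.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Author side: generate a key pair and sign the code artifact.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

code = b"print('hello from a signed release')"  # stand-in for a release artifact
signature = private_key.sign(
    code,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Consumer side: verify the received copy against the publisher's public key.
def is_authentic(artifact: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(
            sig,
            artifact,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

print(is_authentic(code, signature))                   # True: unmodified copy verifies
print(is_authentic(code + b" #tampered", signature))   # False: any edit breaks the signature
```

Note that if an attacker steals the private key, or alters the code before it is signed, this verification still succeeds; that is precisely the failure mode many of the incidents in this dataset exploit.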

There are several ways attackers can bypass these systems. The above description is a simplification. There is a chain of certificate authority dependencies: certificates can expire; there are a variety of algorithms used to sign code, some good and others bad; authors can fail to secure their private keys; vetting certificate requesters is challenging; and more. These complexities add vulnerabilities to the system. Attackers can modify code before it is signed by the issuer, meaning the code would be legitimately signed and still malicious. Attackers can compromise weak cryptography, which would allow them to forge a digital signature without needing to steal private keys. Attackers can even steal or buy private keys, allowing them to validly sign malware themselves. Sometimes organizations simply leak the private key by accident. Finally, some parts of computer architecture, like certain firmware drivers, were not designed with security in mind and do not even use code signing. All of these options are reflected in the dataset’s attack vector variable. 

For the purpose of this dataset, the difference between stealing private keys and certificates is negligible. The private key is the part of a certificate that is kept secret by the software issuer, and the part that is stolen when the phrase “stolen/purchased certificate” is used. The two phrases are interchangeable in most literature as well. A more technical breakdown can be found here, but an attacker obtaining a legitimate certificate means that they got its associated private key.  

Attacks that compromise the code signing system are extremely potent—they get a victim’s system to run malicious code from an apparently legitimate source with significant authority to modify system files. As such, they have been on the rise recently and are a fundamental dimension of software supply chain attacks. (Read more about stolen certificate trends here and here).  

Code-signing vulnerabilities are troubling because, if used successfully, they let attackers impersonate any trusted program and bypass some modern security tools. They also sow uncertainty about the authenticity of the very same patches and updates software developers use to fix security holes in their code. In January 2020, the United States’ National Security Agency (NSA) broke with precedent and disclosed CVE-2020-0601 to Microsoft. The bug is lodged in the Windows cryptographic libraries and allows attackers to forge the digital certificates that attest to an object’s identity. Developers sign their code to authenticate it as their own. This gives users a way to identify known and authentic code from that which is unknown and possibly malicious. Browsing online to a bank? There is a digital certificate at the other end of the connection which validates the bank’s website is what it purports to be. Running a software program from a company like Adobe or Microsoft on your computer? The operating system can compare that software’s digital signature with the developers’ to determine whether it is authentic or a malicious derivative. Code signing is an important process for establishing trust in the software supply chain.  

The abuse of code signing is also not a recent phenomenon. In 2012, Adobe publicly warned about an attack on its systems that allowed attackers to create malicious files digitally signed with a valid Adobe certificate. A malicious actor had compromised an internal Adobe build server, giving it access to internal code-signing infrastructure and the ability to create malware indistinguishable from legitimate Adobe software. One of the earliest examples of using code signing to disguise malware as legitimate software, the Adobe compromise begat a trend. Attacks using stolen or altered code signatures are littered throughout this paper’s dataset.  

In addition to stealing or forging certificates, attackers can also buy them from vendors online. These certificates could have been obtained through system access, but they can also come from mistakes (certificate authorities, or CAs, can be tricked into giving attackers legitimate certificates), insiders (employees with access to keys can sell them), or resellers (intermediate certificate issuers who require far less due diligence). The marketplaces are limited by the number of legitimate certificates issued, but they have been growing. The more expensive the certificate, the more trusted it is. The cost of a certificate is anywhere up to approximately $1,800 for an Extended Validation certificate issued by Symantec in 2016. CAs can revoke certificates that have been compromised and issue new ones, but the systems of reporting, intermediaries, and automatic revocation checks are slow to close security gaps. Moreover, without purchasing them off the black market and risking being defrauded, it is hard to know when certificates have actually been compromised until they are used in an attack. (Read more about certificate marketplaces here and here). 

2.3 Breaking the chain: Hijacked updates

This section explores trends in where attackers targeted the software supply chain, focusing on the use of hijacked software updates. Across both attacks and vulnerabilities, supply chain service providers, third-party app stores, and hijacked updates are popular distribution vectors. Supply chain service providers are instances where attackers insert their code into software used by a vendor who passes it on to customers, for example, HVAC climate control services, firmware providers, or maintenance and service firms. These providers enable malicious code to spread by connecting malware to targets via intermediary services: from 5G providers to creators of intermediary software running PDF viewers to the authors of mobile apps, ICS systems, and car media player programs. Figure 7 shows the change in distribution vector for attacks over time.

Software updates are a major trust mechanism in the software supply chain, and they are a popular vector for distributing attacks that rely on compromised or stolen code-signing tools. Hijacking update processes is a key component of the more complex and harmful software supply chain attacks. The majority of hijacked updates in this report’s dataset were used to spread malicious code to third-party applications as new versions of existing software, or to spread malware inserted into open-source libraries through updated dependencies. One example is GOM Player, a free media application. On January 2, 2014, suspicious activity was detected in the reactor control room of the Monju fast breeder reactor facility in Tsuruga, Japan. A routine software update for the free media player GOM Player, a popular substitute for Windows Media Player, especially in parts of Asia, had been compromised to include malware, which gave the attacker access and the capability to exfiltrate information. In the case of the Monju reactor facility, this information was more than 40,000 internal emails.

The Monju attack demonstrated the pernicious ease of a compromised update. Attackers were able to capitalize on the trust users placed in updates. Compromise required little more than the press of a button on a seemingly legitimate update to spread the attack. Despite likely not being an intentional target, the Monju reactor facility fell victim to an attack on a widely popular but simple media player—underlining how the tangled nature of software supply chains can easily bring innocuous code to high-consequence targets. Figure 8 illustrates the frequency of different attack and distribution vector combinations in our dataset.

Hijacking updates generally requires accessing a certificate or developer account, in contrast to app store and open-source attacks, which can rely on unsigned attacker-made software. CCleaner is a great example. In September 2017, Cisco Talos researchers disclosed that the computer cleanup tool had been compromised for several months. Attackers had gained access to a Piriform developer account and used it to obtain a valid certificate to update the CCleaner installer to include malicious code that sent system information back to a command and control (C&C) server. The attack initially infected more than two million computers and downloaded a second, stealthier payload onto selected systems at nearly two dozen technology companies.

Across the data we collected, hijacked updates were most commonly used to target third-party applications like CCleaner. While attackers targeting CCleaner penetrated its developers’ networks and stole their signing certificates, there are other examples of brute force abuse. The Flame malware, discovered in 2012, leveraged an MD5 hash collision to forge semi-valid code signatures. Code-signing certificates are also available on underground markets. The most trusted, secure certificates go for as little as $1,600—a drop in the bucket for nation-state attackers, and not prohibitively expensive for criminal enterprises. Figure 9 shows this distribution vector by targeted codebase relationship changing over time across this report’s dataset. Targeting of third-party code remains consistent, while first-party OS/applications and open-source software show a more episodic pattern.

Figure 9. Changes in targeted code by distribution vector over the years

Hijacked updates rely on compromising build servers and code distribution tools. Rarely do attacks involve brute force assaults against cryptographic mechanisms. The implication is that many of the same organizational cybersecurity practices used to protect enterprise networks and ensure limited access to sensitive systems can be applied here, if more rigorously, to address malicious updates. The update channel is a crucial one for defenders to distribute patches to users. Failure to protect this linkage could have dangerous ripple effects if users delay applying, or even begin to mistrust, these updates.
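
One concrete baseline control is for update clients to verify a downloaded package against a pinned digest or signature before installing it. The minimal Python sketch below checks a downloaded update against a SHA-256 value published out of band; the file path and expected digest are hypothetical placeholders, not drawn from any real product.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded update file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values: in practice the digest would come from a signed manifest
# or vendor advisory retrieved over a separate, trusted channel.
EXPECTED_DIGEST = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
UPDATE_PATH = "downloads/product-update-1.2.3.bin"

def safe_to_install(path: str) -> bool:
    return sha256_of(path) == EXPECTED_DIGEST

if __name__ == "__main__":
    print("install" if safe_to_install(UPDATE_PATH) else "reject: digest mismatch")
```

A digest check only helps if the expected value arrives through a channel the attacker does not control, so in practice it should be paired with signature verification of the kind sketched in the code-signing section above.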

2.4 Poisoning the well: Open-source software

As software continues to spread at an unprecedented pace, developers are under pressure to create new products and services ever faster and at lower cost. Open-source software is a crucial layer in the software ecosystem that needs more effective protection and is under-addressed especially as many vulnerabilities are hidden under layer after layer of dependencies. Looking across the software supply chain, there are a variety of types of codebase interleaving in a complex web of dependencies. Even larger-scale proprietary projects like the Windows operating system integrate large amounts of open-source code. How these projects are built and managed can shed light on the feasibility of widely adopting best practices for things like code integrity and long-term updates. 

There are two significant, and distinct, cultures of software development: open-source and proprietary. Both are shaped by the principles of their communities and the legal status of their products. Proprietary software is code owned by a single individual or organization. Some of the most well-known examples of proprietary code are Microsoft Windows, Adobe Flash Player, Skype, and Apple’s iOS. Apple’s iOS is developed in house, a product of the Cupertino giant’s design, market research, and myriad development teams. Thus, the vast majority of software produced for iOS is controlled, updated, licensed, and sold by Apple without outside development.

In contrast, an open-source community is defined as “an interacting, self-governing group involved in creating innovation with members contributing toward a shared goal of developing free/libre innovation.” Open-source developers voluntarily collaborate to develop software that is valuable to them or their organization. Open source has become the bedrock of technological innovations like cloud computing, software-as-a-service, next generation databases, mobile devices, and a consumer-focused Internet. Figure 10 aggregates this evolving attack surface by looking at just attacks and vulnerabilities targeting proprietary versus OSS codebases.

Most open-source projects do not follow strict organizational structures, instead relying on self-organization and a collaborative approach to drive software innovation and development. Open-source projects “give away” part of their intellectual property in order to create and benefit from a larger marketplace of ideas. Open source has given rise to a complex social web, with some supportive of limited and general copyrights even for code which is never sold, and others objecting to anything less than free software. Richard Stallman, founder of the GNU project and co-founder of the League for Programming Freedom, is famously quoted as saying “free software is a political movement; open source is a development model.” Stallman believes that, like speech, people should be free to code whatever and whenever they want. 

The left-pad affair
Despite the destructive potential of software supply chain attacks, little has been done to impose a cost on even the most basic, and some disruptions happen purely by accident. An npm publisher, Azer, removed 273 packages from the online JavaScript repository, including left-pad, a simple but commonly relied-upon program that pads a string with zeros on its left. So many programs relied on Azer’s left-pad, or programs that relied on left-pad, or programs that relied on programs relying on left-pad (ad nauseam) that hundreds of failures per minute occurred before npm republished the package two hours later, leading some users to claim that Azer “broke the Internet.”
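
For a sense of how small the removed package was, a functional equivalent of left-pad fits in a few lines; the sketch below is illustrative Python rather than the original JavaScript.

```python
def left_pad(text: str, width: int, fill: str = "0") -> str:
    """Pad `text` on the left with `fill` characters until it is `width` characters long."""
    if len(text) >= width:
        return text
    return fill * (width - len(text)) + text

print(left_pad("42", 5))  # "00042"
```

That thousands of builds could break when so small a utility disappeared is the point: the risk lies in the dependency graph, not in the complexity of the code itself.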

There are two major types of OSS. Project- or community-based OSS is the most common example: a distributed community of developers who continuously update and improve a codebase.

Ruby is a classic example of open-source development. It is a community-based open-source codebase created in 1995 by Yukihiro “Matz” Matsumoto with a focus on “simplicity and productivity.” Twitter, Hulu, Shopify, and Groupon are just a few well-known sites built with Ruby. Individuals can manage their own packages and dependencies on a day-to-day basis to ensure their quality. Using a collection of package, version, and gem managers, as well as the web framework Ruby on Rails, the Ruby codebase is diverse and constantly growing. Attacks on Ruby feature in four different incidents in the dataset, including a March 2019 attack on the “strong_password” gem, which inserted a backdoor into code used to evaluate the strength of passwords on websites. The gem was downloaded more than 500 times before a single developer auditing the code noticed the change.

The second, less intuitive type of open-source project is commercial open-source software (COSS). The major difference between the two is that COSS has an owner with full copyright, patents, and trademarks despite its development by a broader community. Up until its purchase by IBM in 2019, Red Hat was the largest COSS entity in the world. The company runs and operates an eponymous distribution (version) of the Linux operating system. Linux has existed since 1991 and is found in everything from cars and home appliances to supercomputers. Red Hat maintains profitability by giving away its OSS, but charging customers for support, maintenance, and installation.   

While proprietary code is owned by a single entity and generally sold for profit, it can include open-source elements, either directly or as part of a network of software around the product. macOS is famously based in large part on the FreeBSD (Berkeley Software Distribution) version of Unix, and Windows 10 now includes a Microsoft-developed version of the full Linux kernel.

The open-source development model is extremely flexible and can achieve all sorts of software functionality through a multitude of different development communities. The Apache web server, developed and maintained by the Apache Software Foundation, is the most widely used free web server in the world, running 67 percent of all web servers. OpenSSL is a software toolkit that implements the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols. The project is maintained by designated management and technical committees, and all development is meant to follow a set of established bylaws. OpenSSL does emphasize security bug reporting. Large-scale vulnerabilities in OpenSSL have been disclosed in the past—most notably, Heartbleed. Although Heartbleed was a serious vulnerability that gave malicious individuals access to hundreds of computers, it did have one positive side effect—OpenSSL received more than $300,000 in funding from major tech companies to hire new full-time developers. After years of operating while severely underfunded and understaffed, this kind of funding was an important first step for the open-source project. Figure 11 illustrates the distribution of targeted code types, including open-source software, by different types of attackers.

Attacks on OSS have grown more frequent in recent years. In February 2020, two accounts uploaded more than 700 packages to the Ruby repository and used typosquatting to achieve more than 100,000 downloads of their malware, which redirected Bitcoin payments to attacker-controlled wallets. Many of these attacks remain viable against users for weeks or months after the software is patched because open-source projects frequently patch without notifying users. Repositories and hubs can do more to help, providing easy-to-use tools for developers to notify users of changes and updates and shortening the time between when a vulnerability is fixed and when users are notified.
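
Repositories and downstream consumers can apply simple screens for the typosquatting pattern described above. The hedged Python sketch below flags newly submitted package names that closely imitate a list of popular names, using the standard library's difflib; the package names are hypothetical examples, not drawn from the dataset.

```python
import difflib

# Hypothetical allowlist of popular, legitimate package names.
POPULAR_PACKAGES = ["rest-client", "nokogiri", "rails", "requests", "left-pad"]

def possible_typosquats(new_name, known, cutoff=0.85):
    """Return known package names that a new name closely imitates but does not exactly match."""
    matches = difflib.get_close_matches(new_name, known, n=3, cutoff=cutoff)
    return [m for m in matches if m != new_name]

for candidate in ["rest-clientt", "nokogiri", "reqeusts"]:
    hits = possible_typosquats(candidate, POPULAR_PACKAGES)
    if hits:
        # Exact matches are filtered out above, so only lookalike names are flagged.
        print(f"{candidate!r} looks like a typosquat of {hits}")
```

A similarity screen like this is only a first filter; it cannot catch a backdoor slipped into a legitimately named package, which is why notification and auditing tooling matters as much as name vetting.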

2.5 Downloading trouble: App hubs/stores under attack

Many mobile apps distributed through app hubs/stores serve as a means of two-factor authentication. Still more provide a window to massively popular platforms like Facebook, Instagram, and Snapchat, which are sometimes only accessible from mobile devices. As a consolidated venue, app stores simplify the user’s search for software that maximizes the value of their devices. However, as high-traffic download centers that connect end users to third-party developers, app stores have become a popular way to attack the software supply chain. Figure 12 shows the number of incidents we observed for different app stores.

Improving the security of software available through major app hubs like the Google Play Store and Apple’s App Store must be a priority for developers and the software industry. The vulnerability of these hubs has long been recognized. It is an increasing source of risk to the national security enterprise, as relatively innocuous services like Strava can be used to compromise the location, health, and welfare of military personnel. The app store model continues to weather a storm of attacks due to the difficulty of vetting the third-party products at its heart. Malware obfuscation techniques have continued to evolve, and the volume of apps that need screening has also increased.

App store attacks generally unfold in one of three ways. Attackers can build their own apps, designed to appear legitimate, perhaps providing wallpapers, tutorial videos, or games. Hidden in those applications, which might function as advertised, is malicious software. Sometimes when attackers create their own apps, they try to impersonate legitimate ones either through typosquatting or by posing as updates (the difference being that the latter will not provide functionality to the user). One example is the ExpensiveWall attack, named after one of several apps that attackers uploaded to the Google Play Store, Lovely Wallpaper, which provided background images for mobile users. The malicious software in the apps evaded Google Play’s screening system using encryption and other obfuscation techniques. The malware, once downloaded, charged user accounts and sent fraudulent premium SMS messages, driving revenue to the attackers. It could also be easily modified to extract sensitive data and even microphone recordings from victims’ machines. The malware was eventually found in more than 50 apps cumulatively downloaded between 1 and 4.2 million times before they were removed from the store (installed apps remained on devices until users opted to remove them).

Attackers can also repackage apps, meaning they take a legitimate app, add their own malicious code, and then bundle it as a complete item for download, usually on third-party sites. In one case, attackers repackaged Android versions of Pokémon Go to include the DroidJack malware, which could access almost all of an infected device’s data and send it to an attacker C&C server, simply by asking for extra app permissions. The attackers placed the malware-infested app on third-party stores to take advantage of the game’s staggered release schedule, preying on users looking for early access to the game.

There is also publicly available evidence that groups compromise the software used to build apps, also known as software development kits (SDKs), allowing them to inject malware into legitimate apps as they are created. Whatever the model, malicious actors make an effort to obfuscate their malware to evade detection by app store curators. Compromising development tools used to build apps for those stores provides tremendous scale in a software supply chain attack. One example is the XcodeGhost malware, first detected early in the fall of 2015. Xcode is a development environment used exclusively to create iOS and OS X apps. Typically, the tool is downloaded directly from the App Store, but it is available elsewhere on the web outside Apple’s review. A version of Xcode found on Baidu Yunpan, a Chinese file-sharing service, came embedded with malicious logic that modified apps created in the development environment. Each app created with XcodeGhost relayed information about a customer’s device once the app had been downloaded, such as time, location, name of device, and network type. It also allowed attackers to phish credentials, open URLs, and access a device’s clipboard, which can often contain password and payment information. This version of Xcode was later dubbed XcodeGhost. 

Multiple apps created with XcodeGhost were accepted into Apple’s App Store, including malicious versions of WeChat, WinZip, and China Unicom Mobile Office, eventually impacting more than 500 million users. XcodeGhost aptly demonstrates the significance of app stores as a distribution method. While Apple’s App Store is generally more rigorously policed, we recorded four successful attacks targeting it compared with more than a dozen against the Google Play Store and other third-party distributors. These venues are attractive targets, as the sheer scale of XcodeGhost’s blast radius demonstrates.

Recommendations

Software supply chain attacks remain popular, impactful, and are being used to great effect by states. The sustained growth of software supply chain attacks is caused at a technical level by continued failure to secure code integrity. Attackers continue to find ways to access accounts and bypass code signing, app stores struggle to verify the innocuity of all their software, developers embed insecure design choices at the lowest level of computing, and vendors have difficulty fully grasping the scope of their software dependencies and reliance on supply chain service providers. These are complex technical challenges with neither easy nor immediate solutions, and they are compounded by the lapse in policy progress toward securing a supply chain that has grown critical to industry and national security.

The most disconcerting trend of this entire dataset is the consistency with which these attacks occur against sensitive portions of the supply chain—this is not a new problem. A 2010 report from Carnegie Mellon University’s Software Engineering Institute profiled the DoD’s concern that “security vulnerabilities could be inserted into software.” 

Progress to improve the security of these supply chains has been halting for a multitude of reasons. Open-source projects continue to play a central role in enabling new software products and services, and fill critical gaps in the security architecture of the Internet. Despite this, efforts to better resource and secure these projects, like the Linux Foundation’s Core Infrastructure Initiative, remain too few and under-resourced relative to the problem.  Standards and existing security tools are too often applied with a “check-once-at-one-point-in-time” mindset. Scanning for vulnerabilities and working to remediate them is a constant process. Auditing the trust of an update channel, verifying the integrity of new code in a production environment, and assessing risks from third parties and open-source packages must be ongoing activities. 

Policy efforts that proliferate practices, protocols, standards, and codes of conduct that cannot be automated, or that are not straightforward to implement at compile or commit time, are insufficient to the problem. Software developers thrive on tools and dynamic means to implement controls on their code. Some public sector agencies have begun to better acknowledge this. The NSA recently open sourced a well-featured reverse engineering tool called Ghidra alongside a host of small programs to ingest and analyze large security datasets. This is far from the norm, however, and more can be done both to tie security standards efforts to automation and tooling and to push for more DoD, Department of Homeland Security (DHS), and NSA tools to be open sourced and made publicly useful.

Many of these recommendations focus on the role of DHS’ Cybersecurity and Infrastructure Security Agency (CISA) and could be improved through CISA’s collaboration with the NSA’s still new Cybersecurity Directorate. While there are flashes of sibling rivalry between the two, together they would leverage a significantly deeper pool of technical expertise and security acumen than either could alone. Failure to collaborate would impact not only efforts to improve open-source security but also the proposed improvements to baseline software supply chain security and efforts to counter systemic threats. In many cases, we advocate for a CISA or NSA role in particular efforts, but the two are largely substitutable.

The United States and its allies cannot afford drive-by reforms. Sustained improvement is necessary. The power of the software supply chain is how it enables rapid and often low-cost change in the functionality of complex systems. This same rapidity can be the undoing of users and organizations. Policy makers should focus on supporting industry efforts to improve the supply chain security baseline and collaborate to reduce the impact of state attacks. Risk is the name of the game—not bought away completely or mitigated through whiz-bang technology, but managed deliberately, thoughtfully, and consistently. In this, the report owes an intellectual debt to numerous prior efforts, particularly three reports: the New York Cyber Task Force’s Building a Defensible Cyberspace, MITRE’s Deliver Uncompromised, and the Carnegie Endowment Cyber Policy Initiative’s ICT Supply Chain Integrity: Principles for Governmental and Corporate Policies.

Software will continue to “eat” the world and these recommendations are meant to support that—enabling more effective risk management and improved security practices broadly to raise the baseline cost of software supply chain attacks and reduce the impact of the most consequential:

3.1 Improve the baseline

Software supply chain attacks and disclosures that exploit vulnerabilities across the lifecycle of software in this dataset are too cheap relative to the harm they can impose. For example, while software updates are the critical channel for bringing updates and patches to software users, they have been hijacked consistently over the past decade. The problem is generally not developers brewing their own cryptographic protections or poorly implementing accepted protocols, but their failure to rigorously implement basic protections for networks, systems, and build servers.

This cluster of recommendations is intended to reduce the complexity and burden of implementing secure software supply chain practices across organizations and software projects of all sizes. The central pillar is a new “Life Cycle Security Overlay” for Special Publication (SP) 800-53 developed by the National Institute of Standards and Technology (NIST), which integrates other identified industry best practices. SP 800-53 is one of the most complete bodies of security controls and has global visibility through the Cybersecurity Framework. SP 800-53 is not perfect; it locks in an impact-centric model of auditing and assurance and often substitutes compliance for risk management, but it is a tool in hand and far more useful today than an as-yet undeveloped specification many years out.

Raising the cost of software supply chain attacks should center on providing the whole of industry, from SAP and Microsoft down to a three-person LiDAR startup, with easy-to-use tools and well-defined reference implementations for major cloud and IT vendors that make rigorous security as low-effort and cheap as possible. Efforts to build an assessable maturity model, identify reference implementations for this Overlay, and build open-source tooling to support these implementations all build from the Overlay itself. These would provide resources to developers as well as a framework to measure software supply chain security performance in federal contracting, opening the possibility that such measures could trickle down into the private sector and be enforced within segments of the technology marketplace. These steps also support implementation of recommendations made by others on improving operational coordination and developing security metrics necessary to “build solutions that scale at least cost for greater security.” Finally, this section calls for an organization to maintain the value of these tools over time, collecting, curating, and caring for the most useful long into the future.

FIRST

1. Life cycle security overlay [NIST and Industry]: Develop a software supply chain security Overlay to NIST SP 800-53, wrapping in controls from existing families, the new supply chain family in 800-53 rev5, and best practices collected in the Secure Software Development Framework (SSDF) and related industry and open-source publications like the BSA Framework for Secure Software.3 This recommendation builds on the strong network and expertise of NIST and follows on previous recommendations to anchor technical security obligations in standard-setting organizations.

a. Sector-specific agencies implement overlay [NIST and SSA Working Groups]: The NIST Overlay team should support appropriate sector-specific agencies in setting up implementation working groups with industry partners focused on using this Overlay in their own development and in contracting with third parties. NIST should feed requests for more specific controls or guidance into an 18-month revision cycle, producing additional guidance or changes to the Overlay as needed, for example for industrial control systems in the energy sector.4

2. Bring the overlay to the cloud [Industry, Especially Cloud Service Providers]: Many software developers rely in whole or in part on cloud vendors to host, distribute, and maintain their codebases. Industry can assert moral leadership on software supply chain security issues and realize practical financial advantages by offering public reference implementations of the Overlay in their services, lowering the complexity of secure life cycle practices for customers. Major cloud providers, namely Amazon, SAP, Microsoft, Google, Dell, and IBM, should build on existing industry organizations and collaboration to lead joint development of these reference implementations and make them freely available on government and industry partner websites.

a. Integrate and grow the SBOM [NIST and the National Telecommunications and Information Administration]: NIST and NTIA should integrate the draft Software Bill of Materials (SBOM) standard developed by the NTIA multi-stakeholder process into this reference implementation material. NTIA should continue to evangelize the role and utility of software transparency, leading a standing multi-stakeholder working group on SBOM (a simplified sketch of what an SBOM records follows below).
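
A simplified sketch of what an SBOM records: for each component shipped in a build, a name, version, and package URL that downstream users can match against vulnerability advisories. The structure loosely follows the CycloneDX JSON format, and the component list is invented for illustration; consult the published specification for the authoritative schema.

# Hypothetical example: emit a minimal, CycloneDX-style SBOM for a build.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "requests", "version": "2.31.0",
         "purl": "pkg:pypi/requests@2.31.0"},
        {"type": "library", "name": "openssl", "version": "3.0.13",
         "purl": "pkg:generic/openssl@3.0.13"},
    ],
}

with open("example-sbom.json", "w") as f:
    json.dump(sbom, f, indent=2)  # downstream tools can match this against advisories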

3. Release the tools! [DoD and Industry]: Companies and open-source projects, including the cloud providers above, should commit to releasing tooling that helps implement the Overlay. These industry efforts should be joined in parallel by the Office of the DoD Chief Information Officer, the Defense Digital Service, and the Defense Information Systems Agency, as well as any other significant software development efforts within the federal government. An encouraging sign of the DoD’s commitment to this policy would be a new instruction from the secretary of defense modifying and updating DODI 8500.01 accordingly.

NEXT

4. Harmonize the overlay [NIST, DHS CISA, and ENISA]: NIST, DHS CISA, and ENISA should work with industry partners and the international standards community to map this Overlay to appropriate ISO controls and the EU Cybersecurity Certification Framework to harmonize standards across the transatlantic community.  

5. Recognize software is part of 5G [State Department and NSC]: The State Department’s and the NSC’s efforts to shape a like-minded coalition for the United States on 5G security issues have produced variable results but are driving strong industry, and some international, pressure for a more open telecommunications technology marketplace. Much of that push is a result of industry trends toward virtualizing complex hardware functions in software. Starting with the EU, the State Department and NSC should work with civil society and industry to make software development and supply chain security principles a core part of the 5G coalition-building process.

OVER THE HORIZON

6. Measure overlay maturity [NSA and Industry]: NSA’s Cybersecurity Directorate should work with an appropriate industry consortium to develop a software supply chain security maturity model based on this control Overlay and make it publicly available.

a. Tie overlay to CMMC [DoD]: The DoD should integrate this supply chain maturity model as part of its Cybersecurity Maturity Model Certification (CMMC) program and establish a level of performance required for prime contractors. The DoD should further implement these performance measures as new contracting requirements for information technology procurement. 

b. Assess overlay performance in IT contracting [General Services Administration]: GSA should establish similar performance measures against this maturity model and implement them as part of evaluating new federal information technology contracts.

3.2 Better protect open source

Among the most popular distribution vectors for software supply chain attacks in this report’s dataset were open-source packages and libraries. These are rarely the most consequential attacks, but they often require only trivial effort to exploit, a concerning trend given the wide dependence on open-source code in commercial and national security applications. Continuing efforts by the White House to incorporate open-source software as a means of sharing code across different agencies and with the technical public raise the stakes of securing open-source software development.

This cluster of recommendations aims to support more effective and consistent security practices across open-source projects and in the governance of repositories and code registries. Policy makers should not endeavor to “fix” the open-source community. There is no one open-source community, and effective change comes from resources, tools, education, and time—not from trying to upend cultures.

These recommendations provide additional resources for security in the open-source community, including grant funding, public sector support and policy evangelism for best practices, and industry incentives to build supply chain security tooling into hubs and repositories.5 Executed together, these changes should improve the health of open-source software, use federal funding to support private sector leverage over attackers, and help raise the bar for secure supply chain practices at important points of industry and community concentration. These policies should also help improve the stability of open-source security efforts and the long-term viability of other recommendations in this report by supporting channels between the public and private sectors. They should also help account for the rapidly growing use of containers in cloud service deployments, including registries and hubs for container and other cloud images.

FIRST

7. Pay up [US Congress and DHS CISA]: Open-source software constitutes core infrastructure for major technology systems and critical software pipelines. The absence of US public support to secure these products, at a cost point far below what is spent annually on other domains of infrastructure security, is a striking lapse. The US Congress should appropriate suitable funds, no less than $25 million annually, and unambiguous grant-making authority to DHS CISA to support baseline security improvements in open-source packages through a combination of spot grants of up to $500,000, administered in conjunction with the US Computer Emergency Readiness Team (US-CERT), and an open rolling grant application process.

8. Stand and deliver [DHS CISA]: In line with the identification of open-source projects as core infrastructure, DHS CISA should create a small (six- to eight-person) open-source security evangelism and support organization. This group should help administer the funds in the previous recommendation, drive collaboration with US allies and others in the private sector to support priority open-source packages, and act as community liaison and security evangelist for the open-source community across the federal government. The first project for such a group should be building a dependency graph of as much of the open-source ecosystem as feasible, identifying priority projects for investment and support (a toy sketch of this exercise follows below).
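
A toy sketch of the dependency-graph exercise described above: given direct dependencies for a handful of invented projects, count how many projects transitively depend on each package; the most-depended-upon packages are natural priorities for investment. A real effort would pull registry metadata (PyPI, npm, crates.io, and so on) and handle version resolution, which this sketch ignores.

# Hypothetical, hard-coded dependency data; a real graph would come from registry metadata.
from collections import defaultdict

direct_deps = {
    "app-a": {"lib-http", "lib-json"},
    "app-b": {"lib-http"},
    "lib-http": {"lib-tls"},
    "lib-json": set(),
    "lib-tls": set(),
}


def transitive_deps(project):
    """All packages a project depends on, directly or indirectly."""
    seen, stack = set(), list(direct_deps.get(project, ()))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(direct_deps.get(dep, ()))
    return seen


dependent_counts = defaultdict(int)
for project in direct_deps:
    for dep in transitive_deps(project):
        dependent_counts[dep] += 1

# Here lib-tls is depended on by app-a, app-b, and lib-http, so it ranks first.
for package, count in sorted(dependent_counts.items(), key=lambda kv: -kv[1]):
    print(package, count)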

NEXT

9. Curate and maintain tooling [CMU’s SEI and Linux Foundation]: The DoD should create a new nonprofit entity in the public domain supported by Carnegie Mellon University’s Software Engineering Institute (SEI) and the Linux Foundation’s staff and networks. SEI should support the organization as a principal contractor funded via an indefinite delivery/indefinite quantity contract from the DoD, at an amount not less than $10 million annually. The Linux Foundation should manage a priority list of software tools and open-source projects to invest in and support. This entity should support the long-term health of the software supply chain ecosystem by maintaining, improving, and integrating software tools released as part of the Overlay effort, making them freely available with expert integration assistance and other appropriate resources.

10. Transatlantic infrastructure initiative [DHS CISA and the State Department]: Software security is not a single-jurisdiction issue. DHS CISA and the State Department’s Office of the Cyber Coordinator should work with US allies in Europe to establish a Transatlantic Infrastructure Initiative (TII) modeled on the DHS open-source security funding program. Working with ENISA and cooperative partners in the region, the TII could establish a consensus collective security mechanism to support the security of critical open-source packages and help validate the global significance of effective trust in the supply chains for this software.

OVER THE HORIZON

11. Bring lawyers, guns, and money [US Congress/Federal Trade Commission]: The US Congress should extend final goods assembler liability to operators of major open-source repositories, container managers, and app stores. These entities play a critical security governance role in administering large collections of open-source code, including packages, libraries, containers, and images. Governance of a repository like GitHub or an app hub like the Play Store should include enforcing baseline life cycle security practices in line with the NIST Overlay and providing resources for developers to securely sign and distribute their software and alert users to updates. This recommendation would create a limited private right of action against entities controlling app stores, hubs, and repositories above a certain size, to be determined. The right would provide victims of attacks caused by code that failed to meet these baseline security practices a means to pursue damages against repository and hub owners.6 Damages should be capped at $100,000 per instance, and covered entities should include, at minimum, GitHub, Bitbucket, GitLab, and SourceForge, as well as those organizations legally responsible for maintaining container registries and associated hubs, including Docker, OpenShift, Rancher, and Kubernetes.

3.3 Counter systemic threats

While there is much progress to be made in providing better tools, incentives, and resources to secure software supply chains, the United States and its allies can also take positive action to call out and counter systemic supply chain threats. This final cluster of recommendations works to broaden existing US partnerships to better reflect a consensus coalition on cybersecurity issues, widen the basis of cooperation to include regular operational collaboration as well as political coordination on law enforcement and intelligence issues, and provide better information to the public on software supply chain risks. Many of these recommendations build on existing relationships and informal partnerships with allies and the private sector. 

The abuse of software supply chains undermines trust in the technology ecosystem. Where new or exotic techniques are identified to attack the supply chain, the United States and allies should work to interdict malicious actors and blunt the consequences of these attacks. This section addresses potential actions from shifts in alliance composition to reshaping the focus of interagency equities. The goal of these recommendations is to provide a more concrete basis for statecraft to counter the highest-consequence software supply chain attacks and improve the quality of information available to decision makers and the public on systemic software supply chain threats and the activities of major state actors. 

FIRST

12. Know your enemy [Office of the Director of National Intelligence]: Acting on behalf of the director of national intelligence, the assistant director of the Supply Chain and Cyber Directorate and the national intelligence officer for cyber, with appropriate members of the Intelligence Community and interagency, should produce a study on the nexus between state adversary groups and criminal organizations in any major software supply chain security attack from the past decade. This study should be shared in its entirety with key US allies, including the United Kingdom, Japan, Australia, the Netherlands, Poland, and Estonia, and should include a limited public component released within six months of the study’s conclusion.

13. Trust busters [UK National Cyber Security Centre, DHS CISA, NSA, and DoJ]: Building on existing collaboration between CISA, the NSA, and the UK’s NCSC, an expanded international group should work to share information on systemic software supply chain attacks and disclosures, cooperate on investigations against responsible groups, and define policies for collective attribution of state actors. NCSC, in particular, offers well-established relationships with telecommunications providers and a strong base of talent to collaborate on such a joint effort.

a. Broaden the tent [DHS CISA and NSA]: DHS CISA and the NSA should work to include a broader array of US allies in regular security collaboration, investigation, and joint attribution on software supply chain security events, and to routinize that cooperation. These allies should include the same states as in recommendation twelve above.

14. Work for IT [US and allied governments]: The root of most software supply chain security protections is strong cryptography. Efforts by the United States and other governments to incentivize adoption of weak or compromised cryptographic standards work at cross-purposes with widespread industry practitioner and policy maker consensus. These efforts also present systemic threats to an already porous software supply chain. The legacy of US attempts to block the sale of “strong cryptography” is recurring attacks on weaker encryption standards and widespread access to previously controlled technologies. The United States and its allies should work in support of strong and accessible cryptographic standards and tools for developers.

NEXT

15. Know thyself [Cybersecurity Tech Accord or Charter of Trust]: Industry, working through the Cybersecurity Tech Accord or the Charter of Trust, should annually survey and publish public instances of software supply chain attacks that undermine trust in software updates, code integrity, or distribution channels like public open-source repositories. Each group has an opportunity to assert meaningful leadership on software supply chain security issues with such an effort. These attacks are the evidentiary basis for motivating new investment in supply chain security. Transparency around their frequency and cost is an important input to public debate.

16. SBOM squeeze [State Department Cyber Coordinator and Department of Commerce NTIA]: The Departments of Commerce and State should collaborate to further internationalize the SBOM effort. Commerce has worked effectively to drive bottom-up engagement, while State should support it with a top-down advocacy effort. The transparency associated with SBOMs will help surface vulnerabilities and weaknesses, supporting broader US alliance efforts on cybersecurity. State and Commerce should focus on Germany, the Netherlands, the United Kingdom, Japan, South Korea, and Australia, leading with the German Federal Office for Information Security (BSI) and the Japanese Ministry of Economy, Trade and Industry (METI) to start.

OVER THE HORIZON

17. Security for a common good [NSA, NCSC, and other Vulnerabilities Equities Process participants]: The NSA and the United Kingdom’s NCSC should encourage more frequent and fuller disclosure of known vulnerabilities, or attractive primitives, in key mechanisms of trust for the software supply chain, particularly code signing and means to bypass firmware protections and hardware roots of trust. The process of determining whether to disclose or keep secret an impactful software vulnerability is inherently a tradeoff between the equities of different government agencies as well as divergent conceptions of the public good. This recommendation aims to further tip the scales in favor of disclosure where the subject vulnerability is principally useful for impersonating legitimate software updates and developer-signed code—the kind whose use, and potential theft or rediscovery, risks further corroding a critical linkage of trust between users and code maintainers.

Conclusion

Supply chain security risk is growing and increasingly manifesting as harm to software users. There are many steps between the codebase attackers first compromise and their final targets, so the distribution vectors of attack—how attacks ripple across the supply chain—are just as varied as the first point of impact, though the two are often connected. Popular methods of attack include taking advantage of automated updates, compromising software development applications, and sneaking into mobile app stores. Even the act of infiltrating the supply chain with malware is intricate, involving stealing or forging code-signing certificates, breaking into developer accounts, or unearthing hardcoded default passwords.

Software supply chain attacks are first and foremost about variety—a variety of attackers ranging from undergraduate students to the world’s most sophisticated state offensive cyber groups, of targets that range from uranium enrichment centrifuges to mobile video games, and of impacts that can result in multi-billion-dollar losses, rampant data interception, or absolutely nothing. The supply chains underlying final products grow longer and less linear over time. In this interconnected software environment, successful attacks migrate away from the final targets that harden their own vulnerabilities and toward the weakest links in those chains. The soft spots that software supply chain attacks target remain minimally protected because of the technical challenges of recognizing the full scope of a product’s code dependencies and the policy challenges of coordinating disclosure and patching. 

A consistent pattern of attacks and disclosures targets software supply chains. Despite this, these supply chains remain poorly secured, and policy maker attention on supply chain issues is distracted by the 5G debate. This ignores a critical national security risk posed by insecure software supply chains, namely the accumulated harm to private sector firms impacted by untrustworthy code and a generation of defense systems reliant on commercial software. Software supply chain attacks are popular, they are impactful, and they have been used to great effect by major US adversaries. This report surveyed a decade of software supply chain attacks and disclosures—115 incidents in all—to develop a picture of the problem and identify five trends: software supply chain attacks are used to big effect, break the chain, abuse trust, poison the well, and download trouble. Building on these trends, the report recommends the policy community improve the baseline of software security for all organizations, better protect open source, and counter high-consequence supply chain threats.

It would be a grave mistake to equate software supply chain attacks to a new weapon system in an opponent’s arsenal—they are a manifestation of opportunity as much as intent, attacking secure targets by compromising weaknesses in connected neighbors and vendors. Existing gaps in best practices, and poor adoption of these best practices, have granted these software supply chain attacks unnerving sustainability. There are even signs that the most fruitful software supply chain targets in firmware and at the heart of major cloud service providers have yet to peak in popularity. 

The implication of this for the technology industry and cybersecurity policymaking community is a crisis in waiting. For the national security establishment, attacks on the software supply chain threaten a generation of technology acquisitions and undermine the COTS model of development. As the recommendations of this report bear out, change is necessary and feasible but it will require concentrated purpose and clarity of outcome at a time when both are in short supply. This report finds evidence that the past decade has seen software supply chain attacks become only more common and effective. Without action, the next decade may be worse.

Read more

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Much like this dataset as a whole, the survey of state attacks is biased toward publicly reported incidents. Many such attacks and some disclosures have never reached the public domain and thus are not captured here.
2    All of the figures and charts in this report reflect our  dataset, which is a consistent but not random sample of the total population of software supply chain attacks and vulnerabilities going back a decade. We draw no conclusions as to the frequency of attacks or vulnerabilities for this population, only trends within this dataset, and can make no claims of statistical significance as a result.
3    The Council on Foreign Relations has similarly highlighted the need for affected vendors to receive “specific, targeted threats and technical indicators,” and for US policy makers to “facilitate more actionable cyber-threat information sharing, including informing vendors when intelligence agencies find vulnerabilities in supply chains or products,” in order for vendors to appropriately defend their supply chains (Improving Supply-Chain Policy for U.S. Government Procurement of Technology).
4    The Commission on Enhancing National Cybersecurity, in its Report on Securing and Growing the Digital Economy, similarly urged NIST to conduct research on supply chain risk focused on organizational interdependencies, recommending that it “identify methods that assess the nature and extent of organizational interdependencies, quantify the risks of such interdependencies, and support private-sector measurement against standards of performance.” Various organizations, including the New York Cyber Task Force and Northrop Grumman, have highlighted the importance of ensuring that private sector entities implement NIST standards and voluntary practices, for instance, by making them more accessible for all stakeholders. (Raising the Bar on Cybersecurity and Acquisition).
5    The Center for Strategic and International Studies’ Cyber Policy Task Force has similarly recommended that the Trump administration pursue ways to “support open-source software vulnerability research programs, through DHS or perhaps the National Science Foundation.” (From Awareness to Action—A Cybersecurity Agenda for the 45th President).
6    The New York Cyber Task Force has similarly recommended that “Congress could make it easier to hold software companies liable for products with known, unpatched vulnerabilities and no mature process to identify and fix them.”

The post Breaking trust: Shades of crisis across an insecure software supply chain appeared first on Atlantic Council.

Emerging technologies: new challenges to global stability https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/emerging-technologies-new-challenges-to-global-stability/ Tue, 07 Jul 2020 14:07:18 +0000 https://www.atlanticcouncil.org/?p=262464 The world may be fast approaching the perfect storm, with the intersection of two major global trends. At a moment of historic transition, when the post-WWII and post-Cold War international order is eroding amid competing visions of world order and renewed geopolitical rivalries, the world is also in the early stages of an unprecedented technological transformation

The post Emerging technologies: new challenges to global stability appeared first on Atlantic Council.

The world may be fast approaching the perfect storm, with the intersection of two major global trends. At a moment of historic transition, when the post-WWII and post-Cold War international order is eroding amid competing visions of world order and renewed geopolitical rivalries, the world is also in the early stages of an unprecedented technological transformation. It promises to be a period of exponential change, the second—and far more disruptive—chapter of the digital revolution that began with the Internet in the 1990s. Historically, technology usually races ahead of institutions, rules, and norms. The extraordinary magnitude of change at a time of global institutional fraying and disorder, however, portends a particularly dangerous gap in global governance impacting economies, societies, and the future of war.

Substantially more technology-driven change will take place during the coming two decades than in the first ICT (information and communications technology)-based revolution, with profound social, economic, and geopolitical ramifications. This new wave is a convergence of technologies, a digital synergy of artificial intelligence (AI), big data (the cloud), robotics, biotech/biosciences, three-dimensional (3D) printing, advanced manufacturing, new materials, fifth-generation (5G) networks powering the Internet of Things (IoT), nanoengineering and nanomanufacturing, and, over the horizon, quantum computing. It is a still-thickening merger of the digital and physical economies (called “online-to-offline,” or O2O), transforming business models, transport, healthcare, finance, manufacturing, agriculture, warfare, and the very nature of work itself.

As a practical matter, as these technologies are deployed over the coming decades, they will bring about accelerating economic and geopolitical change beginning in the 2020s. For example, using AI powered by superfast 5G technology (up to one hundred times faster than the current 4G), the Internet of Things (IoT) will monitor and manage farms, factories, and smart cities. ICT-connected sensors will warn of factory equipment needing maintenance; monitor energy use in buildings; give farmers real-time information on soil conditions; maintain and operate driverless vehicles; optimize energy-grid performance; and remotely monitor and diagnose individuals’ health, while gene editing promises to engineer the demise of malaria-carrying mosquitos and perhaps edit out hereditary DNA mutations to eliminate horrific diseases. In the national security realm, AI, 5G, and the IoT portend radical changes in missions from logistics and inventory management to surveillance and reconnaissance, with air and undersea drones of all sizes and with autonomous capabilities.

The full text of the paper is split across the various articles linked below. Readers can browse in any order.

Issue Brief

Jul 7, 2020

I. The emerging tech revolution

By Robert A. Manning

Technological advancements in fields ranging from AI to biotech are already rapidly changing existing economic, social and geopolitical arrangements. How well nations are able to innovate and adapt will play a large role in determining their standing in the decades ahead.

Issue Brief

Jul 7, 2020

Will AI and robots kill jobs?

By Robert A. Manning

New technologies are being rolled out across the world at a pace that outstrips our ability to comprehend their implications. Concerns over the death of jobs may be overblown, but the need to understand and mitigate the risks presented by emerging technologies remains.

Issue Brief

Jul 7, 2020

National security impact

By Robert A. Manning

Technological change throughout history shaped and reshaped the strategy, tactics, and the character of war. Today’s emerging technologies have the potential to revolutionize warfighting, while also posing new challenges to strategic stability across increasingly contested global commons—air, sea, cyber, and space.

The post Emerging technologies: new challenges to global stability appeared first on Atlantic Council.

Research and development still key to competitiveness: But for whom? https://www.atlanticcouncil.org/blogs/geotech-cues/research-and-development-still-key-to-competitiveness-but-for-whom/ Fri, 03 Jul 2020 05:14:55 +0000 https://www.atlanticcouncil.org/?p=273742 The decade ahead must be spurred on by a new “Sputnik moment” for the United States to inspire new focus on research and development funding and initiatives to bolster the STEM workforce, while understanding the changing market dynamics connecting funding, innovation, and competitive advantage for open societies.

The post Research and development still key to competitiveness: But for whom? appeared first on Atlantic Council.


“The best way to predict the future is to invent it.”

Quote ascribed to Alan Kay

As the 4th of July nears and the United States prepares to celebrate Independence Day, it is worth reflecting that the year 2020 and the decade ahead must be spurred on by a new “Sputnik moment” for the country, this time inspired by the swift growth of China’s technological capability over the past decade and its overt, all-out push to surpass the United States as a technological and geopolitical leader. At present, there is too much national posturing about China and too little competition with it. No sense of urgency has yet rippled through the country, as one did in the late 1950s.

The United States’ innovative edge has been ebbing for several years now. In total yearly research and development (R&D) spending for 2018 (the most recently tallied year), the United States still narrowly led China at $582 billion to $554 billion. However, there are vast differences in growth: China’s R&D spending has been growing by about 17% annually since 2000, while the United States’ annual R&D growth has been stuck at about 4%. US federal government R&D funding has shrunk steadily both as a percentage of GDP and in relation to private sector R&D, which now accounts for nearly 70% of all R&D spending. Notably, only 17% of all US R&D spending is for basic research.
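
As a rough check on what those growth rates imply, the back-of-the-envelope projection below compounds the 2018 totals at the quoted average rates. It is illustrative only: actual growth varies year to year, and the underlying estimates depend on the PPP adjustments noted at the end of this article.

# Illustrative projection only, assuming constant growth at the rates quoted above.
us, china = 582.0, 554.0  # total R&D spending in 2018, billions of dollars
year = 2018
while us >= china:
    us, china = us * 1.04, china * 1.17
    year += 1
print(f"At these rates, China's total passes the US total by about {year}: "
      f"US ${us:.0f}B vs China ${china:.0f}B")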

However, in a world connected by the Internet, can national spending on R&D still translate into national competitive advantage? For societies that are closed or more restrictive of digital information flows, the answer might be yes, because intellectual property is less likely to be lost and outside actors are less able to emulate innovations. For open societies, though, it is unclear, especially when companies operate transnationally and when global venture capitalists can simply buy innovative firms and their intellectual property.

Further, the type of research nations fund, as well as the quantity, is notable. Outside of the Defense Advanced Research Projects Agency (DARPA), the long-term mindset that has historically been key to US innovation—from the Manhattan Project to computer chips, the Internet, and GPS—is less common.  Some US companies are still haunted by the ghosts of historical R&D efforts that produced great innovation, like Xerox’s Palo Alto Research Center, which helped develop the modern graphical user interface, mouse-based computer interactions, object-oriented programming, laser printing, Ethernet networking, and large-scale integration for semiconductors but did not profitably commercialize its inventions.

Industry is mostly focused on the development side of R&D, leaving the basic research required for developing novel products to others, either in academia or government research labs. With US federal government R&D spending on basic research at only 32% of total federal R&D spending, breakthroughs in fields other than those already being commercialized may occur elsewhere in the world, or US researchers may be lured outside of the country if foreign basic research creates novel fields of study that remain unfunded within the United States.

All told, it should be no surprise that at the center of US-China competition is a battle over technology. The world is in the midst of multiple technological revolutions – digital synergy of the Internet of Things, the gathering of massive amounts of data, advances in machine learning, developments in additive manufacturing and 3-D printing, production of new composite materials, commercialization of space and small satellites, and further advances in bioengineering and personalized medicine. These revolutions will increasingly define geopolitical standing. The question is whether these advances will define national strength or something different.

For decades, tech innovation has been the secret sauce in US economic prosperity and global predominance. Much of it—including the Manhattan Project, Project Corona and satellite reconnaissance, the semiconductors that spawned Silicon Valley, TCP/IP and the Internet, and GPS and geolocation services—grew from federal R&D funding for basic research.

However, all such game-changing technologies took years and multiple “learning through experimentation” trials—otherwise known as learning from failure—and were the result of basic research funding and the strategic patience of US federal R&D spending. Now only 17% of US R&D spending is in basic research, and 40% of that is federally funded. Federal R&D spending has fallen to just over 0.6% of GDP, the lowest level since the Sputnik era, and far below the peak period it inspired, which reached above 1.8%. The good news is that private sector R&D is booming, reaching nearly 70% of total US spending in 2018. Still, though, the bulk of private R&D spending (78%) is on applied development instead of basic research—in other words, aimed at commercial success in a two- to three-year timeframe. Business, necessarily focused on the bottom line, fears that basic research breakthroughs might be taken advantage of by competitors.

Pre-competitive basic research is not necessarily efficient. It often takes many years to yield results—five years in the case of the Manhattan Project, urgent as it was. Federal and university R&D is best positioned to support basic research. A US penchant for instant gratification, or more specifically Congressional impatience, has tilted federal R&D spending toward funding the “doable.” Though spending on basic research is at an all-time high, as a portion of GDP it has fallen to levels unseen since 1962 (notably, only about 5.5% of Chinese R&D spending went to basic research in 2018). Outside of DARPA, which has been a key driver of US innovation in the past, it has become more difficult for scientists and engineers working on non-military research with abstract concepts, often called “blue sky research,” to obtain funding.

Part of this might also be a consequence of the Internet itself. Specifically, the Internet now allows rapid dissemination of ideas and insights globally. This includes scientific papers as well as direct emails, chats, and videos with researchers. Ideas are no longer as geographically bounded as in the past.

Similarly, the Internet has accelerated the global diffusion of new technologies. It took more than 25 years for 50% of US households to adopt the washing machine. Personal computers took about 18 years to be adopted by 50% of households. Cell phones took about 10 years. Tablet devices took about 6 years. On one hand, this means commercialization, if successful, can go global much faster than ever before. On the other hand, with the rapid diffusion of new technologies, innovators in other countries can remix any new device or development and come up with something better fast. This means any long-term R&D effort risks being out of date by the time it is done. The lifespan of the competitive advantage afforded to a nation or company by investing in blue sky research seems to be getting shorter and shorter.

Market dynamics also apply. For the last decade, it has become too expensive to commercially research and develop new semiconductors in the United States, partly because of the United States’ high standard of living and the competitiveness of the US dollar globally. There are cheaper places to do the R&D, often subsidized by governments hoping to attract research and create jobs in their own nations.

The United States faces hard questions regarding its desire for open and free markets, but the reality is that such markets may encourage the offshoring of research away from the United States, either for expense reasons or because research institutions cannot attract foreign researchers to the United States if immigration costs and policy prevent it. 

Lastly, in open societies, it is unclear whether national spending translates into competitive advantage. Employees who develop breakthrough ideas can simply be offered higher salaries or better stock options to work somewhere else. Yes, intellectual property laws exist, but with the right lab setup and funding, star researchers can take their ideas one level higher, working around any strong intellectual property protections. This also may explain why companies focus so much more on applied development than on basic research. However, there is growing bipartisan understanding of these issues, and pending legislation to provide a range of incentives for US firms to reshore R&D and production domestically offers new promise.

Policy implications

To maintain independence as a technology leader and regain the competitive momentum that has made the United States a prolific innovator, policymakers need to ask whether they are playing the right game for the 21st century: that of increasing competitiveness through R&D spending. From a strategic perspective, while R&D spending does still provide advantages to certain players, it is less clear how those advantages benefit open nations in an era of globally connected societies and companies.

If the United States is handicapping itself and its companies in its R&D strategy, then policy makers must consider changing how they play the game.

Action points

  • Identify new approaches to incentivizing and funding the innovation that creates jobs, increases profits, and uplifts people and communities.
  • Identify new ways of linking basic research done in the United States to “first mover” advantages created by applied development. Potential action includes funding to lure overseas R&D back to the United States, particularly in certain industries like semiconductor fabrication and telecoms hardware manufacturing.
  • Develop research alliances across nations that also value open societies, creating a political bloc for R&D funding with shared values, focus, and professionalism for its research community members.
  • Incentivize US students at the high school and college levels to pursue science and engineering degrees.

During the Cold War, the United States scouted for, recruited, and supported talent across the nation – including from both rural areas and inner cities. Now, in an era of budget cuts and historical sequestration, that mandate seems adrift. A growing number of science and engineering PhDs are foreign students, led by China and India – now approaching half of STEM PhDs. To a degree, this is simply a product of the numbers, as both China and India have approximately four times as many people as the United States. The United States needs to identify and attract talent both internally and from other nations if it is to address the raw population gap and secure a more innovative future for the United States and all nations that stand for openness, freedom, and choice.

It is worth reflecting on the second part of the Alan Kay quote that began this paper:

The best way to predict the future is to invent it. Really smart people with reasonable funding can do just about anything that doesn’t violate too many of Newton’s Laws!

Are the US government and industry putting sufficient funding into the R&D that will invent the future?

Note: the figures on R&D spending used for this article generally reflect 2018 spending, the most recent completely available year, and are inflation adjusted for current PPP USD. Other estimates for funding can vary significantly due to different definitions in spending and different base years, but generally reflect the same trends.

The post Research and development still key to competitiveness: But for whom? appeared first on Atlantic Council.

Can technology help build a shock-resistant planet? https://www.atlanticcouncil.org/commentary/event-recap/event-recap-can-technology-help-build-a-shock-resistant-planet/ Thu, 25 Jun 2020 17:47:31 +0000 https://www.atlanticcouncil.org/?p=271296 On June 17, 2020, Columbia University’s Earth Institute gathered a panel to discuss ideas for building a shock-resistant planet as part of the Sustain What series.

The post Can technology help build a shock-resistant planet? appeared first on Atlantic Council.

On June 17, 2020, Columbia University’s Earth Institute gathered a panel to discuss ideas for building a shock-resistant planet as part of the Sustain What series. Dr. Andrew Revkin moderated the event, which included panelists Dr. David Bray, Director of the Atlantic Council’s GeoTech Center; Dr. Divya Chander, physician and professor of neuroscience at Singularity University; Robert S. Chen, Director of the Center for International Earth Science Information Network (CIESIN) at Columbia University; Sandra Baptista, Senior Research Scientist at CIESIN; and Katindi Sivi Njonjo, a futurist at Nairobi-based Foresight for Development.

The COVID-19 pandemic has exposed how vulnerable we are to catastrophic disruptions, whether they be human-made crises or natural disasters. As the world struggles to emerge from the pandemic, many are looking to effectively predict and prevent the next crisis. Unfortunately, the panel noted that the data pointing to the next global disaster are already out there in the world. We just have yet to find them. Despite the enormous amounts of data produced by internet-enabled sensors around the planet, the individuals, institutions, and governments that gather this information still struggle to identify and address threats quickly enough. 

A planetary immune system

The panel envisioned a “global immune system” that would collate data from around the world into a massive database accessible from anywhere by anyone. The system would operate like the human immune system, making use of various sources of internal data to identify threats and then routing messages along a nervous system to delegate appropriate responses.

With the data necessary for such a project mostly available today, the global immune system would simply have to connect existing networks, correct taxonomic inconsistencies, and distribute information to authorized analysts. The technology and expertise to operate such an apparatus are already in development. Panelists identified how tools such as blockchain and artificial intelligence will prove integral to the construction and operation of the global immune system. As Dr. Bray pointed out in his article “We can build an immune system for the planet,” connecting humans or neural networks with data could identify pandemic agents before they spread.
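
A toy illustration, not drawn from the event, of what the “identify threats” step might look like computationally: flag any region whose latest case count jumps well above its own recent baseline. The data, threshold, and function names are invented, and a real system would face far harder data-quality and governance problems than this sketch suggests.

# Hypothetical anomaly flagging over shared regional case counts.
from statistics import mean, stdev


def flag_anomalies(daily_counts, window=14, threshold=3.0):
    """Return regions whose latest count exceeds mean + threshold * stdev of the recent window."""
    flagged = []
    for region, counts in daily_counts.items():
        history, latest = counts[-(window + 1):-1], counts[-1]
        if len(history) < 2:
            continue
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and latest > mu + threshold * sigma:
            flagged.append(region)
    return flagged


# Region "B" spikes from a flat baseline and is flagged; region "A" is not.
print(flag_anomalies({
    "A": [10, 11, 9, 10, 12, 10, 11, 10, 9, 10, 11, 10, 12, 10, 11],
    "B": [5, 6, 5, 5, 6, 5, 4, 5, 6, 5, 5, 6, 5, 5, 40],
}))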

The panel explained that the major component preventing such a system’s creation is not infrastructure, technology, or gathering data. Rather, the lack of open data sharing between people, communities, and nations around the world is currently the biggest obstacle. Although establishing a global immune system would help identify and combat threats more quickly, convincing governments, companies, and individuals to share data that they are already collecting might prove too difficult a challenge.

Balancing protection and trust

The panel acknowledged the potential dangers of sharing massive quantities of biometric data from around the world. The opportunity for foreign actors to gain insight into other countries’ populations might be too great a risk. For some, the mass collection of data, even if for a noble end, might seem too much like an authoritarian nightmare. Many might reasonably fear that the implementation of a global immune system would enable a more autocratic and oppressive use of technology.

The panel also pointed out how the system’s solutions to global public health challenges might still fail to serve some communities. In particular, populations with limited access to the internet, or with cultural norms encouraging wariness of outsiders and government, might be poorly situated to contribute to or benefit from a global immune system designed by Western technologists. Of course, without truly global coverage and participation in the system, the panel’s vision could not succeed. Thus, the panel emphasized that taking on a local character and including diverse developers and input are integral to the project’s development, allowing each community to contribute to and make use of the system in the manner most helpful for its own people.

A participatory approach to crisis prevention 

The panel advocated for deliberately designing the global immune system and its operation with a people-centered, participatory approach, allowing for individual ownership of data, its analysis, and the solutions that emerge. The panel noted how, especially for health-related data, trust in terms of access and use is crucial. An effort led by a single government or industry, the panel agreed, could never be as successful in earning trust as one built by a coalition of governments, institutions, companies, non-profits, and ordinary citizens. Mechanisms like a rotating jury of global peers working to prevent improper use of the system’s data could also reinforce trust.

Ultimately, though, it is essential that individuals have the opportunity both to choose whether to submit their own data and to play a part in identifying and resolving potential crises if they wish to. With a decentralized model for gathering and analyzing data, no single entity has sole ownership of the system, creating greater trust among its participants. If people share data willingly, the participatory data economy can differentiate itself from a surveillance state, allowing individuals to sacrifice their own privacy with the knowledge that they and their neighbors will use it to protect each other. 

Actionable steps towards planetary immunity 

In order to prevent the next major global crisis, individuals, governments, and organizations must: 

  • Collaborate to assemble global data into a database that all can access and search for anomalies; 
  • Develop analyst expertise or artificial intelligence to detect potential threats based on this data and identify solutions; 
  • Design the system with the rights and experiences of all humans in mind (different communities have different ways of interacting with data, and distinct expectations of how their data will be used and by whom); and 
  • Develop a system of data ownership in which individuals willingly share data with the knowledge that any authenticated user can access it.

Henry Westerman is an intern with the Atlantic Council’s GeoTech Center and a rising senior at Georgetown University’s School of Foreign Service. His course of study is in Science, Technology, and International Affairs, with a concentration in Security, focusing on the intersection of science and geopolitics, particularly relating to advanced digital infrastructure and outer space development. Previously, Henry has interned at the Library of Congress and the Department of State’s Office of Science and Technology Cooperation. He also works at Georgetown’s writing center, providing free editing and consultations and serves as the historian for Georgetown’s student association. 

The post Can technology help build a shock-resistant planet? appeared first on Atlantic Council.

Event recap | Data salon episode 2: Could better technology protect privacy when a crisis requires enhanced knowledge? https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-data-salon-episode-2/ Wed, 17 Jun 2020 22:11:00 +0000 https://www.atlanticcouncil.org/?p=269942 On Wednesday, May 27, 2020, the Atlantic Council’s GeoTech Center and Accenture hosted Dr. Jennifer King, Director of Consumer Privacy at the Center for Internet and Society at Stanford Law School, and Ms. Jana Gooth, legal policy advisor to MEP Alexandra Geese, for the inaugural episode of the jointly presented Data Salon Series. The event was co-hosted by Mr. Steven Tiel, Senior Principle, Responsible Innovation at Accenture and Dr. David Bray, Inaugural Director, GeoTech Center at the Atlantic Council.

The post Event recap | Data salon episode 2: Could better technology protect privacy when a crisis requires enhanced knowledge? appeared first on Atlantic Council.


On Wednesday, 17 June 2020, the Atlantic Council’s GeoTech Center and Accenture held the second episode of the jointly presented Data Salon Series, featuring a presentation from Mr. Davi Ottenheimer, Vice President of Trust and Digital Ethics at Inrupt, that prompted animated discussion among participants about the nature of privacy, consent, and responsibility. The event focused on how our understanding of privacy and its preservation affects our ability to temporarily compromise it in the interest of addressing crises. These issues are particularly relevant to the ongoing pandemic, and their intersections with other topics—integrating different cultural priorities and expectations of privacy, ensuring data is truly representative of a diverse population, and examining the nuanced relationships between privacy, knowledge, and power—are especially timely.

Presentation

Mr. Ottenheimer’s presentation focused on the history of privacy, other cultures’ understanding of the concept, and how expectations of privacy vary during crises and between technologies. He began by noting how some societies are willing to sacrifice a degree of privacy when there is a threat to an individual, while others are more focused on threats to the community. A brief survey administered prior to the presentation corroborated that by demonstrating that the audience’s priorities aligned more with individual interests (see figure 1). Mr. Ottenheimer also introduced a framework for understanding privacy as a concept in tension with knowledge, where losing privacy results in gained knowledge. The relationship is nuanced though—there are contexts where marginalized groups are denied privacy in which they could pursue knowledge, and where the knowledge gained from degraded privacy only benefits some groups.

Next, trust and distrust were posed as separate axes rather than different extremes of a single continuum. Mr. Ottenheimer described lack of trust as ambivalence, not suspicion, while the presence of distrust creates a need to question, to develop independence, and to monitor. Survey results also helped motivate this distinction, as audience members indicated a belief that the experience of losing privacy would increase popular concern for privacy the most (see figure 2). Applying these principles to technology use amidst crisis, Mr. Ottenheimer proposed two ways to make our treatment of privacy consent-based during disruptions that require compromise. First, crisis-optimized privacy must contain a disabling function—some way for users to turn off the increased access granted to institutions addressing the crisis at their discretion. Second, a reset must be possible, allowing for reversion to a prior information state in order to correct false information or undo a compromise in privacy while still preventing manipulation by malicious actors.
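
A minimal sketch, assuming nothing beyond the two properties named above: a user-held sharing grant that can be switched off at any time (the disabling function) and a saved prior state that can be restored (the reset). The class and field names are illustrative, not drawn from any system presented at the event.

# Hypothetical consent record with a user-controlled off switch and a reset to a prior state.
from dataclasses import dataclass, field, replace


@dataclass(frozen=True)
class SharingState:
    crisis_access_enabled: bool = False
    shared_fields: tuple = ()


@dataclass
class ConsentRecord:
    current: SharingState = field(default_factory=SharingState)
    baseline: SharingState = field(default_factory=SharingState)

    def grant_crisis_access(self, fields_to_share):
        """Temporarily widen access, remembering the state to revert to."""
        self.baseline = self.current
        self.current = SharingState(True, tuple(fields_to_share))

    def disable(self):
        """The user's off switch for the widened access."""
        self.current = replace(self.current, crisis_access_enabled=False)

    def reset(self):
        """Revert to the pre-crisis information state."""
        self.current = self.baseline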

Throughout his presentation, Mr. Ottenheimer pointed to examples from the past to ground abstract theories and historical beliefs about privacy: the development of chimneys, the abuses of dictatorships past and present, the plight of Blacks in the slave-owning South, police states at home and abroad, colonial plantations, 1850s Austria, and more. Tying them together was a deep examination of the relationship between privacy and power, leading Mr. Ottenheimer to end on the somber observation that some surveys report 50% of cryptographers are uncomfortable answering questions about ethics. Accordingly, there may be a troubling disconnect between engineers’ experiences and comfort and the ethical ramifications of the things they create.

Discussion

Many in the audience used Mr. Ottenheimer’s presentation as a prompt for addressing the gaps between existing laws and broader notions of privacy. One discussant asked about the lack of legal frameworks addressing any responsibility of industry to preserve privacy, which Mr. Ottenheimer hoped could be addressed by the design choices of technology companies themselves. Ideally, privacy-minded engineers would build consent and transparency into software, giving customers control over when companies are granted full access to their data, say for service needs. Another audience member raised the challenges of “informed” consent, which presumes an often-unrealistic level of understanding on the part of consumers, particularly as more of the world comes online. Some noted the opportunities presented by a clean slate for technology and policy development in some parts of the world, and others were optimistic about the potential market growth and innovation such a clean slate could provide.

Towards the event’s end, several discussants raised concerns echoing a parallel Zoom chat conversation—that focusing on privacy and informed consent through a technological lens misses critical ideas about ethical norms. They were wary of treating these as problems with purely technical solutions. Some wanted to refocus on the fiduciary duties of care that could be applied to information providers, on preserving trust between users and industry, and on incentivizing or enforcing responsible corporate citizenship. The discussion ended on the note that untangling technological solutions from broader ones might not be so clear-cut, though: many of the frameworks that could empower those providing data to own its use and their own consent have both technical and policy elements.

We look forward to the next episode in the Data Salon Series on Wednesday, 29 July, 2020 at 11:30 a.m. EDT, in which Ms. Joy Bonaguro, Chief Data Officer for the State of California, will present on her state’s response to COVID-19, and we hope to see you there.

Previous episode

Event Recap

May 27, 2020

Event recap | Data salon episode 1: Notice, consent, and disclosure in times of crisis

By Stewart Scott

On Wednesday, May 27, 2020, the Atlantic Council’s GeoTech Center and Accenture hosted Dr. Jennifer King, Director of Consumer Privacy at the Center for Internet and Society at Stanford Law School, and Ms. Jana Gooth, legal policy advisor to MEP Alexandra Geese, for the inaugural episode of the jointly presented Data Salon Series. The event was co-hosted by Mr. Steven Tiel, Senior Principal, Responsible Innovation at Accenture and Dr. David Bray, Inaugural Director, GeoTech Center at the Atlantic Council.

The post Event recap | Data salon episode 2: Could better technology protect privacy when a crisis requires enhanced knowledge? appeared first on Atlantic Council.

Event recap | New approaches to trust in manufacturing https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-new-approaches-to-trust-in-manufacturing/ Tue, 16 Jun 2020 17:30:56 +0000 https://www.atlanticcouncil.org/?p=266940 On Tuesday, June 16, 2020, the Atlantic Council’s GeoTech Center and Nanotronics hosted Dr. Joseph Bonivel Jr., Subject Matter Expert at the US Department of Defense, Mr. Donald Codling, President of Codling Group International, Dr. Andrea Little Limbago, Vice President of Research and Analysis at Interos Inc., and Ms. Roberta Stempfley, Director of the CERT Division at Carnegie Mellon University, for a virtual private round table about the future of trust in manufacturing. The event was co-hosted by Mr. Matthew Putman, Cofounder and CEO of Nanotronics, and Dr. David Bray, Director of the GeoTech Center at the Atlantic Council.

The post Event recap | New approaches to trust in manufacturing appeared first on Atlantic Council.

The current state of supply chains

The COVID-19 pandemic is forcing governments, companies, and institutions to reevaluate their supply chains. Before the pandemic, most companies assembled their supply chains to reduce manufacturing costs. Some also considered the security of the products being made, especially regarding recent developments in 5G and software supply chains: if a company cannot guarantee that all the components it relies on are secure, it cannot ensure a secure product. As the pandemic and other crises disrupt markets and lines of trade, however, industry focus has broadened to include reliability as well. It has become clear that companies need to trust and understand all links in their supply chains well enough to prepare for local and global crises and ensure continued service.

The panelists pointed out that, for many products, the full process of production is too complex to understand at a granular level. Some pointed to the Department of Defense, which relies on hundreds of thousands of distinct suppliers, as an example. Cohost Matthew Putman highlighted that, to meet this challenge, organizations like Nanotronics are trying to build mechanisms that establish trust between suppliers and their partners so that, even oceans away, companies can be confident in the security and availability of the components they receive. Supply chain management demands in-depth knowledge of all suppliers and components involved. The panel explained that government policy and technology can work in tandem to help overcome challenges to building trust.

Obstacles to trust

Several audience members acknowledged that, in most industries, companies have an incomplete understanding of the full extent of their supply chains. In some cases, obtaining that knowledge entirely seems impossible, as the complex products involved incorporate too many distinct partners and dependencies for a single organization to track. The panel elaborated that such a challenge creates the potential for disruption down the line. If organizations do not or cannot know the origin of their components, let alone the specific political, social, or environmental context they were produced in, it is impossible to fully trust the integrity or availability of the products they assemble. Additionally, those same gaps in knowledge prevent nations from fully preparing for disruptions like pandemics, natural disasters, and the intervention of other governments—preparing for shortages is difficult without the ability to anticipate which sectors and products will be affected by what disturbances.

Accordingly, the panelists noted that many companies—especially those contracting with the US government—are interested in “reshoring,” the practice of relocating their supply chains either within the United States or to other allied and partner nations. Companies use reshoring to improve trust in their supply chains through proximity, or a shared set of trading partners, regulations, values, and so on.

There are several incentives to reshore, but reshoring nonetheless reduces the comparative advantages that globalization provides. With uncertainty and mistrust pushing more and more companies to consider reorganizing their supply chains, whole industries may see reduced innovation and increased costs if reshoring takes hold. Without regulatory and technological solutions to build trust in supply chains, the future of today’s globalized economy is uncertain.

Regulations for trust

The panel explained that, in the field of supply chain management and especially within the government sector, it pays to have an attitude somewhere between wary and paranoid, and, through intentional regulation, governments can encourage such a mindset. Panelists highlighted the United Kingdom’s recent proposal of a ten-country 5G pact with allied nations to channel investments to trusted companies and enable a shared security framework for 5G regulations. Such a political strategy would create a pool of trusted suppliers for companies seeking to invest in 5G technology and provide alternatives to less-reliable partners.

The panel also noted that law enforcement has a role to play through its enforcement of intellectual property rights and similar laws both within and between nations. International coordination will become increasingly important as standards of trust rise. Policies like the proposed 5G pact can reduce the likelihood of vulnerabilities embedded in technologies, but the legal frameworks that identify violators and enforce penalties against them are still lacking. Panelists noted that while many countries partner with each other to coordinate such enforcement, there is still room for improvement in a larger multilateral arrangement. Initiatives like the State Department’s Clean Path Plan, which seeks to establish a secure communication infrastructure between US embassies and the United States government, will help incentivize other nations’ adherence to secure hardware and software through the threat of sanctions.

The role of technology

Though policy can set guidelines for determining trustworthiness, discussants observed that many companies still need a way to verify their suppliers. New certification technologies can enable that verification. In the past, certification systems like car safety ratings have pushed companies toward stronger consumer protections and resulted in industry-wide improvements. With increased demand for security and trust, especially in software and IoT devices, industry can look forward to new innovations in both creating and verifying product security. The panel proposed a system of certification that stamps components before allowing them to progress up the supply chain. Technologies like blockchain could ensure that all components carry their production information to each stage along the chain in a transparent and verifiable form. Industry can also use AI to review the ample production data of components, assessing and flagging potential threats at the final-product level. Together, the two technologies could enable third-party approval of products shown to meet certain requirements, allowing companies to select components that meet institutional or government standards.
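
To make this concrete, the short sketch below shows one minimal way a “stamped” component record could be chained from stage to stage so that tampering with earlier production information becomes detectable. It is an illustration only: the record fields, stage names, and functions are hypothetical and are not drawn from any system discussed at the event.

```python
import hashlib
import json

def stamp(record: dict, previous_stamp: str) -> str:
    """Hash a component record together with the previous stage's stamp."""
    payload = json.dumps(record, sort_keys=True) + previous_stamp
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Hypothetical production records added at each stage of a supply chain.
stages = [
    {"stage": "component supplier", "part": "radio chipset", "lot": "A-1138"},
    {"stage": "ODM assembly", "firmware": "v2.4", "factory": "Plant 7"},
    {"stage": "brand manufacturer", "model": "router-x", "qa": "passed"},
]

chain = []
prev = ""  # genesis: no previous stamp
for record in stages:
    prev = stamp(record, prev)
    chain.append({"record": record, "stamp": prev})

# A distributor or third-party auditor can recompute the chain; any edit to an
# earlier record changes every later stamp, making the tampering visible.
recomputed = ""
for link in chain:
    recomputed = stamp(link["record"], recomputed)
    assert recomputed == link["stamp"]
print("provenance chain verified")
```

A production system would add digital signatures and a shared ledger, but the chaining step above is the core of what lets each stage carry production information forward in a verifiable form.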

The way forward

Envisioning the future of supply chain trust isn’t a question of discovering a panacea technology or policy—rather, it is one of how multiple systems can collectively ensure safe products. Companies and governments alike face difficult challenges: the distinctions between trustworthy, lax, and downright deceptive suppliers are blurry. As a result, solutions must draw on both technology and policy while still considering new, unconventional thinking. Panelists and audience members agreed that ensuring technology and policy support the principles companies and governments value will be essential to creating trusted global supply chains in the future.

Henry Westerman joins the GeoTech Center as a Project Assistant, having served as an intern with the team this past summer. He is a member of the Class of 2021 in the Georgetown University School of Foreign Service studying Science, Technology, and International Affairs with a concentration in Security. Henry has previously interned with the Library of Congress Digital Strategy team, and at the Department of State Office of Science and Technology Cooperation. Henry’s primary academic interests include geospatial analysis, emerging technologies, and digital sensemaking; he also dabbles in Spanish and Philosophy. 

The post Event recap | New approaches to trust in manufacturing appeared first on Atlantic Council.

The reverse cascade: Enforcing security on the global IoT supply chain https://www.atlanticcouncil.org/in-depth-research-reports/report/the-reverse-cascade-enforcing-security-on-the-global-iot-supply-chain/ Mon, 15 Jun 2020 04:01:00 +0000 https://www.atlanticcouncil.org/?p=264688 The Internet of Things (IoT) refers to the increasing convergence of the physical and digital worlds and it affects us all. Hundreds of “things” are being connected to the Internet and each other, with more than fifty billion devices expected to be connected by 2030. Many IoT devices are manufactured abroad at low cost with little consideration for security. How can we secure these devices, especially those manufactured outside the United States?

The post The reverse cascade: Enforcing security on the global IoT supply chain appeared first on Atlantic Council.

1. Overview

The Internet of Things (IoT) refers to the increasing convergence of the physical and digital worlds. Hundreds of “things” are being connected to the Internet and each other, with more than fifty billion devices expected to be connected by 2030. 1 These devices vary from Internet-connected power-generation equipment to wearable health trackers and smart home appliances, and generally offer some combination of new functionality, greater convenience, or cost savings to users.

Along with these benefits, IoT also comes with serious risks, with impacts ranging from individual consumer safety to national security. IoT gives computers the ability to directly affect the physical world: toys, small and large appliances, home thermostats, medical devices, cars, traffic signals, and power plants. This transfers the traditional computer risks to these devices. Cybersecurity is now a relevant concern for even the most mundane household objects—smart electric kettles can be set to explode, while compromised smart toys might eavesdrop on private conversations. 2 Hacked thermostats can cause property damage. Hacked power generators can cause blackouts. Hacked cars, traffic signals, and medical devices can result in death. IoT devices taken over en masse can be used for distributed denial-of-service (DDoS) attacks, paralyzing critical Internet resources and corporate websites with a flood of Internet traffic. In April 2020, a security firm observed a botnet spreading a strain of Linux malware known as “Kaiji” that uses SSH brute-force techniques to target IoT devices. 3 Examples such as these suggest that attempts by both criminals and governments to exploit vulnerabilities in insecure IoT devices will only increase. The result of these insecurities is an emerging national security threat likely only to grow without substantial countering action. 4

These attacks are all the byproducts of connecting computing tech to everything, and then connecting everything to the Internet. They are made substantially more frequent and impactful by the poor state of security practice across many segments of IoT manufacturing and design. While the IoT needs reliable security throughout its ecosystem, the unsecure devices that make up the billions of nodes within that ecosystem are a significant part of the problem. Many vendors bring insecure or poorly configured products to market in response to competitive pressures and lack of clear secure-development standards. A variety of policies and best practices have been proposed, but all remain voluntary and have failed to stem the tide of insecure IoT. Cheeky Twitter feeds such as @InternetofShit offer endless one-liners about Wi-Fi-connected toasters, refrigerators, and adult toys, but the real downside is a diffuse, but growing, risk to public safety and the security of data. 

Problem: Many IoT devices are manufactured abroad, and many of these products are extremely low cost with little consideration made for security. 

The economics of IoT favor low-cost products. Unlike computers and smartphones, IoT products are not developed with security as a priority. They are often designed under contract for the company whose brand is on the finished product, and the design teams are assembled only for the design phase, disbanding rather than staying together through the product’s lifecycle.

The United States has limited means to enforce its standards in foreign jurisdictions, like China, where the bulk of IoT products are manufactured. There is nothing inherently untrustworthy or insecure about foreign manufacturing; individual firms and product lines are much more fruitful levels of analysis for distinguishing good security practices from bad. Importantly, however, the United States has few tools to enforce its security standards on manufacturers located abroad. Thus, companies with poor security practices outside the United States create a challenge for established regulatory tools. Policymakers would benefit from more coherent and detailed IoT security standards, but what’s urgently needed is a mechanism to enforce these standards abroad. A coherent set of standards and associated enforcement action against manufacturers throughout global IoT supply chains could well “lift all boats” and address IoT insecurities, which can impact the United States even when the devices themselves are located abroad.

This paper proposes to apply regulatory pressure to domestic technology distributors to drive adoption of security standards throughout their supply chains. This reverse cascade pushes standards back to foreign manufacturers by preventing the domestic sale or distribution of products that don’t adhere to them. The reverse cascade’s effectiveness is amplified where these supply chains are unusually concentrated in a single firm or a small handful of firms. This approach addresses US regulators’ limited influence in foreign jurisdictions and removes the need to monitor hundreds, if not thousands, of overseas manufacturers directly.

This attempt to squeeze an upstream participant in a supply chain is not unprecedented. In the 1990s, Canadian civil-society organizations successfully used pressure on US home-goods companies like Sears and Home Depot to enforce a set of public standards for logging practice and conservation on Canadian logging firms. 5 Much more recently, the US Defense Department’s Cybersecurity Maturity Model Certification (CMMC) program adopted a requirement for prime vendors—large firms with many subsidiary suppliers—to be responsible for the adoption of good supply-chain security practices by their suppliers. 6 In the CMMC model, rather than force the DoD to map complex supply chains two or three steps removed from the end product, prime vendors are leveraged to enforce standards directly on their supply chains. 

This paper will

  • briefly summarize previous approaches to IoT security;
  • outline the challenge of enforcing domestic standards on a globalized supply chain;
  • develop and apply the reverse cascade to the case of Wi-Fi home routers; and
  • make specific recommendations for the United States and the EU.

2. The Challenge of International Enforcement

Intensive manufacturing and technical industries have experienced broad globalization. Cars and trucks, as much as sophisticated medical devices or home Wi-Fi routers, are manufactured with components from a kaleidoscope of foreign countries. This section discusses the challenge of enforcing domestic standards for security and safety on foreign-based manufacturers, building on comparable examples in the automotive and medical-device industries. 

While there is no shortage of proposed security and privacy standards, none has moved beyond voluntary best practices, and all lack enforcement requirements. In a recent example from March 2019, Senator Mark Warner and Representative Robin Kelly in the US Congress introduced the Internet of Things Cybersecurity Improvement Act (S.734 and H.R. 1668). While it would certainly be a step in the right direction, the bill is limited in only addressing federal government procurement and use of IoT devices, leaving IoT purchases by millions of US consumers largely unprotected. Around the same time, California enacted its own IoT security law (S.B. 327), which had its own enforcement complications, including ambiguity—the California law requires connected devices to have “a reasonable security feature,” without much guidance as to what those security features should include, beyond the devices having unique default passwords or requiring users to set their own passwords. 7

The EU has been actively engaged with these issues. In early 2019, the European Telecommunications Standards Institute (ETSI) launched the world’s first regional industry standard on Internet-connected consumer devices. 8 The standard was built on the United Kingdom’s Code of Practice for Consumer IoT Security, which outlined recommended best practices for manufacturers of consumer IoT devices and associated services. 9

As of this writing, the United States does not yet have a formal enforceable standard for IoT security. However, that could change soon, with institutions in the EU setting a strong example, the International Organization for Standardization (ISO) gradually publishing standards for data security, cryptography, and IoT interoperability, and the National Institute of Standards and Technology (NIST) working on establishing a “Core Baseline” of security capabilities in IoT devices. 10 Yet even if a formal security standard were to be adopted within the next few years, the reality of a globalized supply chain for consumer IoT products will pose a serious challenge for enforcement. This challenge is especially relevant and significant for the IoT, because most basic components and products are engineered abroad, outside of the regulatory jurisdiction of the United States.

It is worth noting that automobiles and medical devices differ markedly from Internet-connected devices in the economics driving consumer and product incentives. Namely, cars and medical devices are both perceived as expensive and potentially dangerous—and with such high costs involved, the economics of security for these industries are quite different from those for a connected home appliance or toy. People buying smart speakers simply do not consider safety as much as they do when buying a new car. Due to the lack of demand signal for security from the consumer, smart-speaker makers do not prioritize security, either. 11 

Despite this important difference, these examples can still reveal useful insights. The manner in which these other industries hold suppliers to account for minimum standards of design and manufacturing can help inform an enforcement scheme for consumer IoT security.

Automotive Industry

More than six decades after the first recorded traffic death in the United States, and about fifty years after the first stop sign was installed in Detroit, Congress passed the 1966 National Traffic and Motor Vehicle Safety Act. The bill, a response to rising highway deaths and growing calls for vehicle-safety laws, established the National Highway Traffic Safety Administration (NHTSA) to improve passenger survivability and vehicular safety. 12

While the law enables NHTSA to develop safety standards and track vehicle crashes, it devolves responsibility for certifying that automakers are meeting these standards to the companies themselves. Under this scheme, companies test their own vehicles and move them to market having self-certified to the NHTSA safety standards. 13 NHTSA then verifies this self-certification by independently auditing the safety performance of newly released vehicles, and fining manufacturers up to $6,000 per violation when their products fail to pass. 14

Just as the NHTSA does not approve or certify motor vehicles for standards compliance itself, it also does not directly enforce standards on suppliers outside of the United States. Instead, the agency offers a set of best practices (based largely on the US Consumer Product Safety Commission’s Handbook for Manufacturing Safer Consumer Products) for companies like Ford and General Motors to minimize regulatory risk from endangering life and safety, including selecting a responsible overseas business partner, inspecting foreign manufacturing facilities, and instituting quality-control measures throughout the distribution process in the United States. 15

As such, the NHTSA’s model of standards enforcement serves as an encouraging example that enforcement need not take on a purely adversarial nature. In this case, the regulatory body employs a strategy of cooperation and deterrence: working with the auto manufacturers to help them with compliance, while setting up mechanisms that discourage cutting corners in the safety-check and quality-assurance processes. The result of this approach is a system that achieves good safety outcomes for automobile drivers—the fatality rate per one hundred million vehicle miles traveled has consistently declined since 1975. 16

Medical-Devices Industry

The US Food and Drug Administration (FDA) was authorized by Congress to enforce the Federal Food, Drug, and Cosmetic (FD&C) Act in 1938, with authority over medical devices following in May 1976. 17 The FDA mandates that a specific class of medical devices be subject to a premarket approval (PMA) process to evaluate and approve their safety and effectiveness, and also requires post-market surveillance (PMS) by medical-device makers to track and monitor their products for malfunction once they are being used by consumers. 18 A mix of direct inspection and self-reporting, the PMS process can result in safety notifications, warning letters, and recalls when issues are found in the products.

With more than one third of the medical devices in the United States being imported, international enforcement is a major part of the FDA’s work. 19 To tackle the challenges of transparency and accountability in globalized supply chains, the Office of Regulatory Affairs plays a key role in enforcing the FD&C Act through international inspections. Any “drug, medical device, biological, and food products manufactured in foreign countries and intended for U.S. distribution” are subject to inspection for compliance with standards. 20 While it does not directly examine components in a medical device, the FDA evaluates the evidence provided by the manufacturer, including third-party attestations by testing labs. At the same time, the FDA also directly performs inspections in manufacturing facilities, including those abroad, to check their quality systems and ensure they use approved manufacturing practices. 

The FDA takes a more hands-on approach compared to NHTSA’s self-certification scheme for automobile safety. The agency itself often inspects the manufacturing process of each product, and awards certifications of compliance. In the event that a violation is found through these inspections, the FDA has a variety of tools at its disposal—ranging from warning letters and injunctions to criminal prosecution and heavy fines—to ensure that unsafe and unlawful products are removed from the market. 21

While this allows for a far more comprehensive inspection process that catches potentially unsafe products before they go on the market, it also forces medical-device manufacturers to confront lengthy product-review periods stretching many months. Such prolonged review periods may be more acceptable for medical devices, whose consequences for failure are far higher than those of a compromised smart refrigerator, but they demonstrate some consumer appetite for delaying products from market to be evaluated for security and safety. The FDA’s approach also suggests that even in a complex international supply chain where lives are at stake, effective security standards can be designed, adopted, and enforced without crippling industry. The FDA further offers a model of exhaustive technical evaluation that could be formalized and shifted to third-party auditors. Finally, the NHTSA and FDA together demonstrate that demand from consumers for safe and secure products in a marketplace helps push product manufacturers toward standards compliance, while also reinforcing the authority and efficacy of regulators’ enforcement power.

FDA in Focus
The FDA faces significant challenges in the coming decades as the food, pharmaceuticals, and medical-devices industries continue to grow. The FDA has had to increasingly rely on third-party testing labs and assessors in order to carry out regulatory evaluation and enforcement. Because third-party testing is always paid for by the manufacturers, perverse incentives and conflicts of interest may arise without adequate oversight. This problem is compounded by the fact that—unlike matters more firmly grounded in the laws of physics and chemistry, like safety from electrical faults or proper sterilization—cybersecurity standards for secure design and implementation have been less consistent over time and are more subject to context, including the specific risk tolerance of individuals and organizations. This makes it more difficult for the FDA to develop and enforce a set of test criteria that is objective, repeatable, observable, and verifiable without regular attention and updating. 

Consumer IoT

A central US regulator for consumer IoT devices is the Federal Trade Commission (FTC), which has been involved in policing electronic commerce and privacy since 1990. 22 The Consumer Product Safety Commission has an active agenda in this area as well. This paper focuses on the FTC because of its history of public enforcement actions against unsafe and insecure products. As the Internet of Things grew ubiquitous, so did the FTC’s interest in IoT as a domain of consumer protection. The FTC was one of the first regulators on the IoT scene, hosting a workshop in November 2013 to discuss security and privacy risks, and later publishing recommended best practices for IoT companies. 23 The FTC’s work across a number of consumer IoT security cases has been complicated by the challenge of international enforcement—IoT product manufacturers based abroad are not legally compelled to respond to FTC actions against them. 24

Nowhere is this better highlighted than the FTC’s recently settled case against D-Link Corporation. In January 2017, the FTC issued an official complaint against the Taiwanese IoT manufacturer and its US subsidiary D-Link Systems for failing “to take reasonable steps to secure the routers and Internet-protocol cameras they designed for, marketed, and sold to United States consumers.” 25 Contrary to D-Link’s promises to consumers that its products were protected by “advanced network security,” the FTC found that the company had failed to test its products for “well-known and easy-to-fix security flaws” before selling them to consumers. Among other security vulnerabilities, D-Link’s products used hard-coded passwords that consumers could not change, and stored user credentials in plaintext, rather than encrypted and secret from attackers. 26
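
To make the plaintext-credential finding concrete, the sketch below contrasts that practice with the kind of salted, iterated hashing that has long been standard. It is a generic illustration using Python’s standard library, not a reconstruction of D-Link’s code or of the FTC’s technical exhibits.

```python
import hashlib
import hmac
import os
from typing import Optional

# What the FTC faulted: credentials kept in a form an attacker can read directly.
insecure_store = {"admin": "letmein123"}  # plaintext password at rest

# A long-standard alternative: store only a salted, iterated hash (PBKDF2 here).
def hash_password(password: str, salt: Optional[bytes] = None) -> tuple[bytes, bytes]:
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("letmein123")
secure_store = {"admin": (salt, digest)}  # no recoverable password at rest

assert verify_password("letmein123", *secure_store["admin"])
assert not verify_password("wrong-guess", *secure_store["admin"])
```

The point is simply that a stolen copy of the second store does not reveal the password, whereas a stolen copy of the first does, which is why the FTC characterized the flaw as well known and easy to fix.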

The FTC ultimately settled the case in 2019, only after the parent company (D-Link Corporation) managed to extricate itself by separating from its US-based subsidiary, leaving the FTC to deal only with California-based D-Link Systems. The FTC forced the US-based firm to discontinue certain practices that left consumers vulnerable to security and privacy risks. The FTC settled another case on a data-security breach last year, underlining its newfound focus on the security of consumer technology. LightYear Dealer Technologies (“DealerBuilt”), an Iowa company that sells data-management software to auto dealerships nationwide, was held to account after a “hacker gained access to the unencrypted personal information of about 12.5 million consumers stored by DealerBuilt customers.” 27 Notably, the FTC was able to point to the Standards for Safeguarding Customer Information Rule (16 C.F.R. Part 314) of the Gramm-Leach-Bliley Act, which required DealerBuilt to develop and maintain a comprehensive information-security program to protect customer information from this kind of breach. The DealerBuilt case also shows that the FTC does not have to wield rulemaking authority to cause companies to correct problematic practices—it can instead leverage existing standards to support its regulatory actions against companies. The reverse-cascade proposal builds on this precedent and suggests a way to leverage a coherent set of external security standards to drive change in IoT design and manufacturing.   

Together with the DealerBuilt case, the FTC’s D-Link action helped communicate the FTC’s focus on consumer security. But the D-Link settlement also underlines the limitations of any domestic regulator in trying to shift the incentives and behavior of a foreign company to adopt acceptable security practices. Even with the additional leverage afforded by the existence of a US subsidiary, the FTC was unable to hold the parent company accountable for its lack of care in the security of its products. This is because the FTC—or any US government agency, for that matter—does not have legal authority over companies based abroad. Getting this right is critical—consumer IoT cybersecurity has impacted, and will continue to impact, people and the physical world, causing harm and potentially death. The consequences of IoT insecurity are not confined to users alone. The D-Link case highlights the need for a policy tool that would enable domestic regulators to bring pressure on foreign-based companies, especially in the case of IoT, where the bulk of manufacturing happens outside the United States.

3. The Reverse Cascade

Enter the reverse cascade. This paper proposes a policy tool premised on strategic upward pressure applied to information and communications technology (ICT) product supply chains, using domestic distributors as a point of leverage to enforce standards on foreign-based manufacturers. This section develops a detailed case study of how this reverse cascade would apply to home Wi-Fi routers. 

The FTC and other domestic regulators should recognize and exploit the fact that while supply chains are global, they often terminate with a domestic distributor. The reverse cascade starts with applying regulatory pressure on the distributor to sell products that adhere to a specified set of design and manufacturing standards. In a competitive market like home routers, where multiple vendors compete in the same product segments, a small number of compliant vendors could threaten others’ market access through distributors in the same jurisdiction. This creates subsequent pressure from vendors up their own supply chain for hardware components and software. 

Home Wi-Fi Routers and IoT Security

The reverse cascade’s essential components are a regulator’s jurisdiction over a domestic distributor and a source of security standards. The home Wi-Fi router is a particularly useful example because its security impacts the security of all other devices connected to it. Home routers are also representative of other consumer IoT products; routers are mostly inexpensive consumer-electronics products largely built offshore by a plethora of foreign manufacturers.

Routers can be understood as the gateway between a home’s local network and the broader Internet. Routers are responsible for ensuring data is “routed” correctly from sender to destination, and can manage a variety of network-maintenance and security functions, such as running firewalls to block malicious or sensitive content, hosting virtual private networks (VPNs), and limiting the network’s bandwidth to particular sites or at high-demand times of day. 28 As a network switch, a router also helps computers communicate with each other within the same network, acting as a communications hub for computers and other IoT devices. 29

The router’s role as Internet gateway and network hub makes it an important and useful case study on security risks in the Internet of Things. Absent an independent cellular connection, all connected devices in the home talk to the router, sending sensor and user data, as well as receiving the manufacturer’s software updates. It is, therefore, not surprising that the router has been a frequent target for security breaches and exploitation. In May 2018, the Federal Bureau of Investigation (FBI) found “hundreds of thousands of routers” had been compromised by Russian hackers to “collect user information or shut down network traffic.” 30 Another investigation less than a year later found that up to one hundred and thirty thousand Asus routers contained a software-security flaw that could enable massive identity theft. 31

A poorly secured router leaves every connected device on its network vulnerable to an attack. Router manufacturers have a responsibility to implement basic secure design and manufacturing standards and to mitigate known vulnerabilities. The D-Link case makes the US government’s position clear—these manufacturers will be held accountable for reasonable security processes and practices, or else will be held liable for unfair or deceptive practices. The FTC’s public settlement with D-Link goes so far as to attach a relevant international standard for the security of industrial automation-and-control systems (IEC 62443-4-1) as an exhibit of these reasonable processes. So, how to drive enforcement on foreign manufacturers?

Securing the Router
What does it mean for a Wi-Fi router to be secure? One way to assess a Wi-Fi router’s security is by thinking in terms of its components: hardware, software, and firmware.
Hardware: All routers have some kind of microprocessor to enable the device to blink and route data over antennas and cables, as well as a radio for wireless signals. These are usually combined on printed circuit boards that physically support the chips, as well as connect the chips to other components and a power supply. Hardware vulnerabilities can involve unnecessarily easy access to the microprocessor and radio, unused ports left open for surreptitious malicious physical connections, or the use of components without adequate security safeguards. Recommendations to avoid hardware vulnerabilities include limiting the number of physical external ports, and integrating security hardware directly on the microprocessor to validate all of the hardware attached to the router, inside and out, on startup. 32
Software: Software has eaten the world, and routers right along with it. Long a “set it and forget it” kind of device, routers were something most people rarely interacted with after initial setup. In recent years, however, routers have become more sophisticated as users look to them for additional security and network-management functionality. Manufacturers have moved to include small applications to collect data and shape network behavior, even on low-cost routers. Some of these applications are open-source projects, but most are developed by router manufacturers or an expanding network of third-party developers. The principles of secure software development fit well here; developers should be securing user credentials and sensitive data with widely used cryptographic protocols, and ensuring users can receive signed updates to prevent unauthorized changes. 33
Firmware: Firmware is software built for a specific hardware component to permit interaction with the user and higher-level applications. Essentially, firmware is what lets a hardware component communicate with the software running on top of it. Firmware for routers is typically written by the router manufacturers, who take code that is widely available from open-source projects such as DD-WRT (https://dd-wrt.com/) and customize it for their products, including adding or modifying security functionality. Unfortunately, router manufacturers consistently fail to properly secure their firmware. A recent study by the Cyber Independent Testing Lab (CITL), which examined thousands of firmware versions issued by some of the most highly rated router brands—including Asus, D-Link, Linksys, and Netgear—revealed poor security across the board and, worse, little meaningful improvement across versions spanning the last fifteen years. Astoundingly, firmware updates issued by manufacturers were, on average, more likely to weaken security. 34 “There is no consistent security industry practice. It’s very haphazard, and any features that we found appeared to be there by accident,” Sarah Zatko, chief scientist at CITL, explained to the author. “There’s just no evidence that any of the vendors we looked at [in the study] prioritize security in that way.” 35 A short illustrative sketch of the signed-update checking mentioned above follows this sidebar.
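
The sidebar’s reference to signed updates can be made concrete with a short sketch. It is illustrative only: it uses the third-party Python cryptography package and Ed25519 signatures, and the key handling and function names are simplified stand-ins rather than any router vendor’s actual update mechanism.

```python
# Illustrative only: signing and verifying a firmware image with Ed25519.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Vendor side: sign the firmware image with a private key kept offline.
vendor_key = Ed25519PrivateKey.generate()
firmware_image = b"hypothetical firmware image bytes"
signature = vendor_key.sign(firmware_image)

# Device side: the router ships with only the vendor's public key baked in.
vendor_public_key = vendor_key.public_key()

def install_update(image: bytes, sig: bytes) -> bool:
    """Flash the update only if the signature verifies against the vendor key."""
    try:
        vendor_public_key.verify(sig, image)
    except InvalidSignature:
        return False  # reject tampered or unofficial firmware
    # ... write the verified image to flash here ...
    return True

assert install_update(firmware_image, signature)
assert not install_update(firmware_image + b"-tampered", signature)
```

Because only the vendor holds the signing key, a device performing this check will refuse images modified in transit or produced by an unauthorized party, which is the property the sidebar’s recommendation is after.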

Home Wi-Fi Router Supply Chains

Similar to many high-tech industries, the home router industry’s supply chain is large and complex, with a web of connections across vendors along the chain. 36 Figure 1 serves as a simplification of those supply chains, capturing the basic roles and types of companies involved.

There are four key stages in a typical Wi-Fi router supply chain, before reaching the consumer, that matter for security: component suppliers; original design manufacturers (ODMs); router manufacturers; and distributors. The flow from initial components to the finished product is often referred to as going “downstream,” and the opposite direction “upstream.”

  1. Component suppliers: The vendors of hardware, software, and firmware components for the router (e.g., Broadcom, which manufactures radio chips and antenna).
  2. ODMs: ODMs design and mass produce hardware that product manufacturers can purchase as private or white-label products. The product manufacturers then sell the products under their own names. 37 An ODM can be seen as the final assembler of various components before the finished product is sent to the router manufacturer for branding and marketing. Foxconn (which acquired Belkin and its Linksys portfolio in 2018) also provides ODM services to router manufacturers. 38
  3. Router manufacturers: These are the companies whose names are on routers and manage their sales to distributors. Popular brands include Netgear, Belkin, Linksys, Asus, Huawei, TP-Link, and D-Link. Of these, only Netgear and Belkin (which acquired Linksys in 2013) are based in the United States; the rest are in China or Taiwan.
  4. Distributors: While some router manufacturers sell their products directly to consumers through their own brand stores (e.g., Google sells its Nest Wi-Fi routers through its online store), most products reach consumers through third-party retailers like Best Buy or Amazon, or consumer Internet service providers like Comcast, which buys and rents routers from companies like Arris and Cisco. 39
ODMs are different from OEMs (original equipment manufacturers). OEMs offer technical expertise and mass-production services to other companies that bring their own designs. For example, Apple as the product manufacturer can bring its iPhone design to Foxconn the OEM to manufacture according to Apple’s specifications. By contrast, ODMs design and manufacture products themselves. While Foxconn was made famous because Apple uses it as an OEM for most of its smartphone manufacturing, many Wi-Fi router companies also go to Foxconn for its ODM services. Foxconn has also grown its in-house router business significantly through the acquisition of Belkin in September 2018. 40

The Reverse Cascade in a Wi-Fi Router Supply Chain

The reverse cascade begins with domestic distributors. Assume a product whose manufacturer is based in a foreign country, but with a local distributor. 41 A regulator, like the FTC, identifies an adequate IoT security standard for manufacturers and vendors. This could be an international standard, a NIST publication like the recent NIST Internal Report 8259 defining an IoT design security baseline, or even something integrated into a law enacted by Congress. 42 This standard serves as the baseline the FTC can point to for distributors of relevant products. A distributor caught selling an IoT device whose design or manufacture fails to meet this standard would be subject to an enforcement action under the FTC’s authority to challenge deceptive or unfair trade practices.

Threat of action from the FTC, and resultant penalties, creates a strong incentive for distributors to look upstream and evaluate their potential vendors according to the IoT security standard; for example, Best Buy could demand that all routers placed on its shelves follow the new standard. This would put pressure on router manufacturers to certify adherence to those steps they could control and pressure their ODMs and component manufacturers on the remainder. 

Continuing the example, assume Best Buy pressures Netgear, threatening to move to a competing manufacturer unless Netgear brings its Nighthawk router into compliance with the IoT security standard. Netgear can account for some of the router’s software and functionality, such as eliminating the use of default passwords, but must turn upstream in the supply chain for more. Netgear might levy new requirements on contracts with a component manufacturer like Broadcom (a chipset builder) and ODMs like Foxconn to follow the relevant design and manufacturing principles of the IoT security standard. Where vendors refuse, Netgear looks to alternatives. The FTC’s initial action on the US distributor drives a cascade of actions up the supply chain, helping to overcome legal and geographic boundaries to influence behavior globally. 
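
As a purely illustrative toy model of that flow-down, using the hypothetical Best Buy, Netgear, Broadcom, and Foxconn example above rather than any real compliance data, the sketch below treats a product as compliant only if the vendor attests to the standard and every upstream supplier does as well. That transitive condition is why pressure applied at the distributor propagates all the way back to component makers.

```python
from dataclasses import dataclass, field

@dataclass
class Supplier:
    name: str
    attests_to_standard: bool
    upstream: list["Supplier"] = field(default_factory=list)

def compliant(vendor: Supplier) -> bool:
    """A product is only as compliant as its least compliant upstream supplier."""
    return vendor.attests_to_standard and all(compliant(s) for s in vendor.upstream)

# Hypothetical supply chain mirroring the example in the text.
broadcom = Supplier("Broadcom (chipset)", attests_to_standard=True)
foxconn = Supplier("Foxconn (ODM)", attests_to_standard=False)
netgear = Supplier("Netgear (router manufacturer)", attests_to_standard=True,
                   upstream=[broadcom, foxconn])

# The distributor's check fails until the ODM complies or is replaced,
# which is exactly the incentive the reverse cascade is meant to create.
print(compliant(netgear))           # False
foxconn.attests_to_standard = True
print(compliant(netgear))           # True
```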

When there are only a few firms concentrated at a single step in the upstream supply chain, pressure from the distributor can be passed up to greater effect and, potentially, speed. One such point is within the component-manufacturing phase. Chipset manufacturers integrate the components of a computer, including the central processing unit, memory, and storage into a single board. The majority of home Wi-Fi routers use chipsets manufactured by just a handful of companies: Broadcom, Qualcomm Atheros, Marvell, and Annapurna Labs. 43 Simplifying things for US regulators, Broadcom and Qualcomm are both headquartered in the United States, and would thus be directly subject to an applicable new IoT security standard.

Another promising pocket of concentration exists at the ODM step. Brian Knopf, a security researcher who worked as director of application security for Linksys and Belkin, observed that just a handful of ODMs in the world are responsible for supplying Wi-Fi router companies. He explained this at a DEF CON talk.

“If you start looking for vulnerabilities, and you find, ‘Hey—Linksys has a vulnerability,’ the question is, is it really a Linksys vulnerability, or is it Edimax, Arcadyan, Sercomm, or any number of the ODMs used by tons of vendors, like Asus, Netgear, D-Link—they’re using a lot of the same ODMs… What you’ll find is, if you put the pressure on the right place, we can get things fixed a lot easier.” 44

The prospect of holding distributors to account for the security of their products is not far-fetched. Major third-party retailers, such as Target and Best Buy, already require vendors to comply with relevant safety and quality standards. 45 These same two firms have also advocated for testable IoT standards that would enable businesses to “make consistent representations to customers regarding the security and privacy attributes of the IoT devices they offer.” 46 Among third-party retailers, Best Buy, Walmart, and Amazon collectively account for a significant majority of consumer electronics sales in the country. 47 In the United States, there is also a relative paucity of home internet service providers (ISPs)—further limiting the number of firms that independently source routers and need to enforce a new IoT security standard. Less than a dozen broadband providers, including such names as Comcast, Charter, AT&T, Verizon, and CenturyLink, serve all connected US households, and the top-three providers own more than half the market. 48

Part of the FTC’s prospects for success lies in enforcing this standard across all major router distributors in the United States at the same time. Ideally, this action could be taken in concert with EU regulators in the digital single market. The FTC’s or other domestic regulators’ use of an international security standard would only make this easier. While a monumental task of political coordination, such transatlantic alignment would benefit from the IoT’s rising popularity as a topic in security policy. Cross-national action would help minimize the risk of noncompliant manufacturers simply hopping across a border and continuing to sell their wares online.

Figure 1. Simplified illustration of Wi-Fi router supply chain

4. Recommendations

IoT security is a pressing national security issue, as these devices increasingly permeate homes and lives. The home Wi-Fi router is a good example of the IoT security challenge, and helps to illustrate the reverse cascade in action. Implementing this approach requires a handful of steps in the policy community and industry.

Clarity on Enforcement: While the FTC has successfully leveraged its authority to police unfair or deceptive trade practices to go after firms with poor security practices, this is a slow process requiring demonstration of harm. The Senate Commerce Committee should make a small, but important, change to Section 5(a) of the FTC Act, adding “unsafe acts or practices” to the current statute’s provision for “unfair or deceptive acts or practices.” Together with the DealerBuilt, LabMD, and D-Link precedents, this should clarify FTC’s enforcement authority on cybersecurity issues, and allow for action prior to the imposition of harm where practices are demonstrably unsafe in a lab environment or based on expert consensus.

Pick a Baseline: The linchpin of the reverse cascade for IoT is an international, or at the least broadly recognized, set of standards for the secure design and manufacturing of IoT devices. These standards will need to encompass a variety of different product types and manufacturing stages. To avoid excessive fragmentation, it would be desirable for this recognized baseline, or framework, to permit the inclusion and relative cross-compatibility of specific standards. The earlier portion of this paper suggested several candidates, but additional endorsement by US and EU cybersecurity agencies would help elevate and focus attention on one. The Cybersecurity and Infrastructure Security Agency of the US Department of Homeland Security, together with NIST and the EU Agency for Cybersecurity (ENISA), play important, if somewhat differing, roles in their respective cybersecurity policy apparatuses. Agreement from these agencies that a single IoT security standard was their focus, and an adequate guide for secure design and manufacturing, would support efforts such as the reverse cascade to bring pressure on non-expert distributors and IoT firms alike. The Cyberspace Solarium Commission’s proposal for a National Cybersecurity Certification and Labeling Authority (NCCLA) would fit well with this recommendation. A future NCCLA would be the logical entity to pick up and endorse such an international standard, as well as to take on responsibility for supporting its continued development over time.

Create a Label for Good Security Practices: There are frequent debates about how to better leverage the consumer marketplace to reward good security practices. A label for adherence to security standards under the baseline mentioned would be a useful foundation for this proposal and related efforts to improve consumer decision-making about secure products and services. A recent survey by a cybersecurity firm found that nearly three quarters of consumers expected their IoT devices to be secured by the manufacturers, with 87 percent believing that it is the manufacturers’ responsibility to do so. 49 A future NCCLA, or an existing agency like NIST, could create a simple labeling scheme for the selected international standard—creating a second source of pressure on distributors and, thereby, manufacturers. Properly labeled products could help mobilize consumers against insecure alternatives, filling the gap while FTC enforcement actions work to conclusion against non-compliant manufacturers. Rather than require complex evaluation and auditing, the use of a single standard would allow standardized technical assessment of new products to assign a suitable label per this scheme. This would avoid unnecessary demand on specialized technical skillsets, and permit the existing healthy market of consulting and compliance firms to support audits in line with this label. 

74% of consumers expect their IoT devices to be secured by the manufacturers

“Survey: Consumer IoT Customers Expect Manufacturers to Embed Security in Devices,” Karamba Security (blog), December 8, 2019, https://www.karambasecurity.com/blog/2019-12-08-consumer-iot-survey.

Align Standards and Collaborate with Allies: To prevent manufacturers or distributors from jurisdiction hopping, the United States and key allies in the EU should make it a priority to align on an appropriate international security baseline and coordinate enforcement actions. This is not an inconsiderable challenge, since the EU organizes its efforts to coordinate national activities on consumer safety and competition differently than the United States. A good starting point would be for the FTC to collaborate with the EU’s Directorate General for Competition Policy and other national government agencies as appropriate, to drive an IoT security-enforcement working group. 50 It will take time to converge these and other agencies’ theories of action, especially moving in advance of demonstrated harm to the public. The earlier this coordination starts, the better. 

Conclusion

For many years, experts both in and out of government have been calling for a set of standards to hold manufacturers accountable for poor software and hardware security. The rising pace of IoT adoption and continued insecurity of many widely accessible devices sets the stage for regulatory action of one kind or another soon. For many of these devices, manufacturers and key portions of the supply chain are based outside of the United States, presenting a challenge of enforcement in foreign jurisdictions.

This paper presents the reverse cascade as a means to address this foreign-enforcement problem, encouraging regulators to leverage downstream distributors to ensure standards compliance by upstream foreign IoT manufacturers. Growing consumer awareness and demand for better security in smart devices, as well as internationally harmonized standards, will further aid enforcement efforts and help improve the security of IoT devices. This is about more than just keeping thousands of home routers safe from hacking—addressing foreign enforcement of security standards is an essential hurdle that governments must clear in order to ensure that digital transformation continues to provide benefits without compromising consumer product safety or national security. 

About the Authors

Nathaniel Kim is a recent graduate of the Harvard Kennedy School. He aspires to help shape policies for better cyber safety and digital governance, and has written on the security and safety challenges of the Internet of Things as part of his work at the Organisation for Economic Co-operation and Development’s Digital Economy Division as well as the Belfer Center for Science and International Affairs. He has previously worked as a researcher at Harvard Business School and as a consultant at the Economist Intelligence Unit. Nathaniel is an incoming Technology Law & Policy Scholar at Georgetown Law, and holds an MPP from HKS and a BS in Brain & Cognitive Sciences from the Massachusetts Institute of Technology.

Dr. Trey Herr is the Director of the Cyber Statecraft Initiative under the Scowcroft Center for Strategy and Security at the Atlantic Council. His team works on the role of the technology industry in geopolitics, cyber conflict, the security of the internet, cyber safety, and growing a more capable cybersecurity policy workforce. Previously, he was a Senior Security Strategist with Microsoft handling cloud computing and supply chain security policy as well as a fellow with the Belfer Cybersecurity Project at Harvard Kennedy School and a non-resident fellow with the Hoover Institution at Stanford University. He holds a PhD in Political Science and BS in Musical Theatre and Political Science. 

Bruce Schneier is an internationally renowned security technologist, called a “security guru” by the Economist. He is the New York Times best-selling author of 14 books — including Click Here to Kill Everybody — as well as hundreds of articles, essays, and academic papers. His influential newsletter Crypto-Gram and blog Schneier on Security are read by over 250,000 people. Schneier is a fellow at the Berkman-Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    “Number of Internet of Things (IoT) Connected Devices Worldwide 2030,” Statista, February 19, 2020, https://www.statista.com/statistics/802690/worldwide-connected-devices-by-access-technology/.
2    “International Product Safety Week 2018 (Conference),” European Commission, November 12, 2018, https://ec.europa.eu/info/events/international-product-safety-week-2018-2018-nov-12-0_en.
3    David Bisson, “New ‘Kaiji’ Linux Malware Targeting IoT Devices,” Security Intelligence, May 6, 2020, https://securityintelligence.com/news/new-kaiji-linux-malware-targeting-iot-devices/.
4    Justin Sherman and Deb Crawford, “Securing America’s Connected Infrastructure Can’t Wait,” War on the Rocks, December 4, 2018, https://warontherocks.com/2018/12/securing-americas-connected-infrastructure-cant-wait/.
5    Benjamin Cashore, Graeme Auld, and Deanna Newsom, Governing Through Markets: Forest Certification and the Emergence of Non-State Authority (New Haven, CT: Yale University Press, 2004), https://www.jstor.org/stable/j.ctt1npqtr; Trey Herr, “Cyber Insurance and Private Governance: The Enforcement Power of Markets,” Regulation & Governance, July 3, 2019, https://www.onlinelibrary.wiley.com/doi/abs/10.1111/rego.12266.
6    “Cybersecurity Maturity Model Certification (CMMC),” Office of the Under Secretary of Defense for Acquisition and Sustainment, March 18, 2020, https://www.acq.osd.mil/cmmc/docs/CMMC_ModelMain_V1.02_20200318.pdf.
7    Hannah-Beth Jackson, Information Privacy: Connected Devices, 327 California Senate Bill § (2018). https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB327.
8    Sophia Antipolis, “ETSI Releases First Globally Applicable Standard for Consumer IoT Security,” ETSI, February 19, 2019, https://www.etsi.org/newsroom/press-releases/1549-2019-02-etsi-releases-first-globally-applicable-standard-for-consumer-iot-security.
9    “Code of Practice for Consumer IoT Security,” Department for Digital, Culture, Media & Sport, October 14, 2018, https://www.gov.uk/government/publications/code-of-practice-for-consumer-iot-security.
10    “ISO/IEC JTC 1/SC 27 — Information Security, Cybersecurity and Privacy Protection,” International Organization for Standardization, accessed May 16, 2020, https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/committee/04/53/45306.html; “ISO/IEC JTC 1/SC 41 – Internet of Things and Related Technologies,” International Organization for Standardization, accessed May 16, 2020, https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/committee/64/83/6483279.html; Michael Fagan, et al., “Recommendations for IoT Device Manufacturers: Foundational Activities and Core Device Cybersecurity Capability Baseline (2nd Draft),” National Institute of Standards and Technology, January 7, 2020, https://csrc.nist.gov/publications/detail/nistir/8259/final.
11    For a more in-depth discussion of the economic considerations in cybersecurity, see: Tyler Moore and Ross Anderson, “Economics and Internet Security: A Survey of Recent Analytical, Empirical, and Behavioral Research,” Harvard Computer Science Group Technical Report, 2011.
12    Bill Canis and Richard K. Lattanzio, “U.S. and EU Motor Vehicle Standards: Issues for Transatlantic Trade Negotiations,” Congressional Research Service, February 18, 2014, https://www.hsdl.org/?abstract&did=751039.
13    Motor Vehicle Safety: Certification of Compliance, Pub. L. No. 89–563, § 30115, 49 U.S. Code (1996), https://www.govinfo.gov/content/pkg/USCODE-2009-title49/html/USCODE-2009-title49-subtitleVI-partA-chap301-subchapII-sec30115.htm.
14    “Recommended Best Practices for Importers of Motor Vehicles and Motor Vehicle Equipment,” National Highway Traffic Safety Administration, accessed December 18, 2019, https://one.nhtsa.gov/Laws-%26-Regulations/Recommended-Best-Practices-for-Importers-of-Motor-Vehicles-and-Motor-Vehicle-Equipment.
15    Ibid.
16    “2018 Fatal Motor Vehicle Crashes: Overview,” National Highway Traffic Safety Administration, October 22, 2019, https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/812826.
17    “PMA Historical Background,” US Food and Drug Administration, November 3, 2018, http://www.fda.gov/medical-devices/premarket-approval-pma/pma-historical-background.
18    “Premarket Approval (PMA),” US Food and Drug Administration, July 9, 2019, http://www.fda.gov/medical-devices/premarket-submissions/premarket-approval-pma; “Postmarket Requirements (Devices),” US Food and Drug Administration, December 1, 2018, http://www.fda.gov/medical-devices/device-advice-comprehensive-regulatory-assistance/postmarket-requirements-devices.
19    “FDA Globalization,” US Food and Drug Administration Office of the Commissioner, November 27, 2019, http://www.fda.gov/international-programs/fda-globalization.
20    “Foreign Inspections Overview,” US Food and Drug Administration Office of Regulatory Affairs, December 14, 2018, http://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/foreign-inspections/foreign-inspections-overview.
21    “Types of FDA Enforcement Actions,” US Food and Drug Administration Center for Veterinary Medicine, November 3, 2018, http://www.fda.gov/animal-veterinary/resources-you/types-fda-enforcement-actions.
22    Chris Jay Hoofnagle, Federal Trade Commission Privacy Law and Policy (Cambridge, UK: Cambridge University Press, 2016).
23    “FTC Report on Internet of Things Urges Companies to Adopt Best Practices to Address Consumer Privacy and Security Risks,” Federal Trade Commission, January 27, 2015, https://www.ftc.gov/news-events/press-releases/2015/01/ftc-report-internet-things-urges-companies-adopt-best-practices.
24    Regulatory action is not a hard and fast requirement for positive change in the marketplace for IoT; indeed, the threat of costly and potentially disruptive regulatory action could serve as incentive to change enough. This paper focuses on the role of the FTC in the proposed regulatory scheme to work through the content of this proposal to address foreign-manufactured and insecure products. 
25    “D-Link,” Federal Trade Commission, July 2, 2019, https://www.ftc.gov/enforcement/cases-proceedings/132-3157/d-link.
26    Leslie Fair, “D-Link Settlement: Internet of Things Depends on Secure Software Development,” Federal Trade Commission, July 2, 2019, https://www.ftc.gov/news-events/blogs/business-blog/2019/07/d-link-settlement-internet-things-depends-secure-software.
27    “Auto Dealer Software Provider Settles FTC Data Security Allegations,” Federal Trade Commission, June 12, 2019, https://www.ftc.gov/news-events/press-releases/2019/06/auto-dealer-software-provider-settles-ftc-data-security.
28    Lauren Hockenson, “This Is How a Router Really Works,” Mashable, February 4, 2013, https://mashable.com/2013/02/04/router-faq/.
29    Jason Fitzpatrick, “Understanding Routers, Switches, and Network Hardware,” How-To Geek, July 5, 2017, https://www.howtogeek.com/99001/htg-explains-routers-and-switches/.
30    “FBI Warns Russians Hacked Hundreds of Thousands of Routers,” Reuters, May 29, 2018, https://www.cnbc.com/2018/05/29/fbi-warns-russians-hacked-hundreds-of-thousands-of-routers.html.
31    Thomas Brewster, “FBI Warned of Fraudster’s Paradise: Up To 130,000 Hacked Asus Routers on Sale For A Few Dollars,” Forbes, February 28, 2020, https://www.forbes.com/sites/thomasbrewster/2020/02/28/fbi-warned-of-fraudsters-paradise-up-to-130000-hacked-asus-routers-on-sale-for-a-few-dollars/.
32    “Mapping Security & Privacy in the Internet of Things,” Copper Horse, September 24, 2018, https://iotsecuritymapping.uk/code-of-practice-guideline-no-6/.
33    “Secure by Design,” Department for Digital, Culture, Media & Sport, March 7, 2018, https://www.gov.uk/government/collections/secure-by-design.
34    “Binary Hardening in IoT Products,” Cyber Independent Testing Lab, August 26, 2019, https://cyber-itl.org/2019/08/26/iot-data-writeup.html.
35    Sarah Zatko, chief scientist, Cyber Independent Testing Laboratory, interview by Nathaniel Kim, January 24, 2020.
36    Kate Crawford and Vladan Joler created a fascinating mapping of the components and supply chain for an Amazon Echo. Crawford and Joler, “Anatomy of an AI System,” 2018, https://anatomyof.ai/.
37    Kai Huang, “OEM vs ODM: Difference between OEM and ODM: OEM and ODM,” China Sourcelink, October 21, 2018, https://cnsourcelink.com/2018/06/04/oem-vs-odm/.
38    Jacob Kastrenakes, “Foxconn Buys Belkin, Linksys, and Wemo,” Verge, March 26, 2018, https://www.theverge.com/2018/3/26/17166272/foxconn-buys-belkin-fit-linksys-wemo.
39    “Overview of Xfinity Gateways,” Xfinity, September 10, 2012, https://www.xfinity.com/support/articles/broadband-gateways-userguides.
40    While Foxconn was made famous because Apple uses it as an OEM for most of its smartphone manufacturing, many Wi-Fi router companies also go to Foxconn for its ODM services. Foxconn has also grown its in-house router business significantly through the acquisition of Belkin in September 2018.
41     For the purposes of the reverse cascade, the relevant cases are when the router manufacturer sells poorly secured products through a third-party retailer or broadband provider. If the manufacturer sells poorly secured products directly to the customer through its own brand stores, the FTC would be able bring a case directly against that manufacturer on the grounds of unfair practices since the firm would be failing to take reasonable steps to secure products according to the security standard.
42    Fagan, et al., “Recommendations for IoT Device Manufacturers: Foundational Activities and Core Device Cybersecurity Capability Baseline (2nd Draft).”
43    “Understanding Router Chipsets: Broadcom vs. Atheros vs. Marvell,” FlashRouters Networking & VPN Blog (blog), January 22, 2018, https://blog.flashrouters.com/2018/01/22/understanding-router-chipsets/.
44    “DEF CON 23 – IoT Village – Brian Knopf – Yes You Can Walk on Water,” 2015, YouTube video, https://www.youtube.com/watch?v=aTirAI-B-dI.
45    “Product Safety and Quality Assurance Tools and Processes,” Target Corporate, accessed February 13, 2020, http://corporate.target.com/corporate-responsibility/responsible-sourcing/product-safety-quality-assurance/product-safety-and-quality-assurance-tools-and-pro.
46    “Fiscal Year 2019 Corporate Responsibility & Sustainability Report,” Best Buy, accessed February 13, 2020, https://corporate.bestbuy.com/wp-content/uploads/2019/06/FY19-full-report-FINAL-1.pdf.
47    Consolidated data on router sales by retailer are difficult to find. However, a couple sources seem to indicate that Best Buy, Walmart, Amazon, and Target are the leading consumer electronics retailers, which could serve as proxy data for home router sales. “Best Buy: The Largest Consumer Electronics Retailer,” Market Realist, https://marketrealist.com/2015/01/best-buy-largest-consumer-electronics-retailer/; “Share of Consumer Electronics Units,” Seeking Alpha, https://static.seekingalpha.com/uploads/2012/4/9/saupload_Share-of-Consumer-Electronics-Units.png.
48    “Market Share of Three Largest U.S. Broadband Providers 2006-2013,” Statista, accessed February 12, 2020, https://www.statista.com/statistics/256424/market-share-of-three-largest-us-broadband-providers/;S. O’Dea, “Number of Broadband Internet Subscribers in the United States from 2011 to 2019, by Cable Provider,” Statista, March 10, 2020, https://www.statista.com/statistics/217348/us-broadband-internet-susbcribers-by-cable-provider; S. O’Dea, “Charter U.S. Broadband Internet Subscribers 2009-2018,” Statista, February 27, 2020, https://www.statista.com/statistics/292366/charter-internet-broadband-subscribers/.
49    “Survey: Consumer IoT Customers Expect Manufacturers to Embed Security in Devices,” Karamba Security (blog), December 8, 2019, https://www.karambasecurity.com/blog/2019-12-08-consumer-iot-survey.
50    Statista. “IoT Market Size by Country in Europe 2014 and 2020,” November 28, 2016. https://www.statista.com/statistics/686435/internet-of-things-iot-market-size-in-europe-by-country/.

The post The reverse cascade: Enforcing security on the global IoT supply chain appeared first on Atlantic Council.

Artificial intelligence principles: Avoiding catastrophe https://www.atlanticcouncil.org/commentary/artificial-intelligence-principles-avoiding-catastrophe/ Fri, 05 Jun 2020

This publication is part of the AI & China: Smart Partnerships for Global Challenges project, focused on data and AI. The Atlantic Council GeoTech Center produces events, pioneers efforts, and promotes educational activities on the Future of Work, Data, Trust, Space, and Health to inform leaders across sectors. We do this by:

  • Identifying public and private sector choices affecting the use of new technologies and data that explicitly benefit people, prosperity, and peace.
  • Recommending positive paths forward to help markets and societies adapt in light of technology- and data-induced changes.
  • Determining priorities for future investment and cooperation between public and private sector entities seeking to develop new technologies and data initiatives specifically for global benefit.

@acgeotech: Championing positive paths forward that nations, economies, and societies can pursue to ensure new technologies and data empower people, prosperity, and peace.

AI – A Potential Game Changer

An urgent challenge for the coming decade is to forge a global consensus on how to operationalize widely shared ethical principles, standards, and norms to govern the development and use of artificial intelligence. This should be the predicate for all other AI issues. AI is becoming a ubiquitous driver of economic growth and is reshaping the legal, medical, educational, financial, and military sectors, yet there is no consensus on the rules, standards, or operating principles governing its use, either nationally or globally. As is often the case, technology is racing ahead of efforts to control it. The rich literature on complex systems and their inherent tendency to fail should provide a sense of urgency that has so far not been apparent in the private sector, government, or the US Congress. Unless a broad global consensus emerges on core principles and standards, and on how to make them operational, there is a significant risk of a dangerous race to the bottom, with potentially catastrophic consequences as AI applications become more widespread. G-20 governments have shown little urgency about transforming broadly accepted principles into functional global governance, and the two leading tech powers, the United States and China, appear to be moving toward heightened techno-nationalism rather than seeking to shape global standards for the safe, secure, and beneficial deployment of AI.

AI, which is fundamentally data plus algorithms, is usefully viewed as an enabler and synthesizer of an interconnected suite of technologies spanning big data, the Internet of Things (IoT), robotics, and biotech, and it promises to be a game-changer in domains from economics to the automation of the battlefield. AI is more an enabling force, like electricity, than a thing in itself; like apps today, it is already being applied across most industries and services, and the central issue is how the data is employed. Think of AI as a new platform: the future will be everything plus AI. By 2030, algorithms combined with data from 5G-connected IoT devices will sit inside every imaginable app and will be pervasive in robots, reshaping industries from healthcare and education to manufacturing, finance, and transportation, as well as military organizations. AI is already starting to be incorporated into military management, logistics, and target acquisition. Yet there is a dangerous governance deficit: few principles, norms, or standards guide the growing applications of artificial intelligence.

As Kai-Fu Lee writes, AI is evolving beyond the one-dimensional tasks of so-called narrow AI toward general AI. The former refers to single tasks such as facial recognition or language translation; the latter, AI that can operate across a range of tasks, learning and reasoning without supervision or external input, is still in its early days of development. Two-thirds of private-sector investment in AI goes to machine learning, and in particular deep learning, which uses layered neural networks loosely modeled on the human brain to extract patterns from vast amounts of data. Most famously, AlphaGo beat the world champion at Go, a highly complex board game with an astronomical number of possible moves: trained on data from thousands of Go matches, it was able to induce the best possible moves and outmaneuver its opponent. AI is thus demonstrating a growing capability to learn autonomously, extrapolating from the data fed into the algorithm.

But algorithms have their limits, too. In particular, with regard to the prospect of autonomous systems, AI lacks an understanding of context, culture, emotions, and meaning: can it tell whether someone is pointing a real gun or a toy pistol, or what their intent is in doing so? Some leading neurologists are skeptical of machine intelligence altogether, arguing that intelligence requires consciousness; emotions, memories, and culture are parts of human intelligence that machines cannot replicate. There is also a growing body of evidence that AI can be hacked (for example, misdirecting an autonomous car) or spoofed with false images (for example, when identifying targets). A further problem for many applications is that we do not know exactly how AI learns what it knows, that is, how the process that produced a given decision or conclusion actually worked. That makes systems difficult to test and evaluate, and makes it hard to know why one was wrong or malfunctioned, and it will only become more difficult as deep learning grows more sophisticated. Yet explainability, not least the need to know why an AI system failed, is a cardinal principle of any AI safety and accountability regime: how, for instance, can liability be determined if we do not understand whose fault a failure is? At the same time, the bar to entry is relatively low, as a culture of transparency among AI researchers has spawned wide access to state-of-the-art tools. Leading researchers at top tech firms such as Google publish their latest algorithms, and open-source frameworks such as TensorFlow let anyone download neural-network software, complete with tutorials showing how to build their own algorithms, techniques that could also be deployed in the future of warfare, as Paul Scharre writes in his book Army of None. This obviously raises the stakes for common ethics and operating principles.
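
To make that low barrier to entry concrete, here is a minimal sketch, assuming TensorFlow 2.x and NumPy are installed, of how a few lines of freely available open-source tooling train a small neural network; the toy dataset and the network architecture are invented for illustration and are not drawn from the original article.

```python
# A minimal sketch of how open-source tooling lets anyone train a small
# neural network: the low barrier to entry described above. Illustrative only.
import numpy as np
import tensorflow as tf

# Toy dataset: classify points by whether x + y > 1 (invented for this example).
rng = np.random.default_rng(0)
X = rng.random((1000, 2)).astype("float32")
y = (X.sum(axis=1) > 1.0).astype("float32")

# A small feed-forward network built entirely from off-the-shelf layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.evaluate(X, y, verbose=0))  # [loss, accuracy] on the toy data
```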

Preventing the Coming Storm

The urgency of developing a global consensus on ethics and operating principles for the uses of, and restrictions on, AI starts from our knowledge that complex systems, whether supercomputers, robots, or Boeing 737 jets, with their many moving parts and interacting subsystems, are inherently prone to fail, sometimes catastrophically. Because the failure of a complex system may have multiple sources, sometimes small failures cascading into larger ones, it can take repeated incidents to fully understand the causes. The problem of building in safety is compounded by the fact that, as AI gets smarter, it becomes increasingly difficult to discern why it reached its conclusions.

The downside risks of depending solely on imperfect AI, absent the human factor in decision-making, have already begun to reveal themselves. Research on facial recognition, for example, has shown bias against certain ethnic groups, apparently due to the preponderance of white faces in the underlying databases. Similarly, as AI is employed in decision-making roles such as screening job applicants or determining parole, bias becomes more likely absent a human to provide context, cultural perspective, and judgment. Can AI discern character or personality traits missing from a resume that might lead an employer to be more or less likely to hire an applicant? Can it accurately detect how a prisoner has changed, for better or worse, while incarcerated in order to recommend parole?

More ominously, there have been incidents in which semi-autonomous missile systems hit the wrong targets, as when the USS Vincennes erroneously shot down an Iranian civilian airliner in 1988, or when US Aegis missiles mistakenly struck a US target. Waking up to discover that fully autonomous weapons have started an escalating war, as an adversary's autonomous weapons retaliate, is a scenario that could become possible in the coming decade. Clearly, the risk of AI systems going awry is significant, particularly given the wide range of potential scenarios. How do you assess liability for failure? What safety standards are required to ensure accountability? And how do we ensure transparency in failure, meaning human understanding of “how and why AI makes a specific decision,” as a Chinese white paper put it?

Such core concerns have spurred proactive efforts by a wide array of stakeholders, from the private sector to prominent international technologists and scientists, to create a governance structure for artificial intelligence. The world's biggest tech firms, including Google and Microsoft, are among the vanguard in seeking to create “ethical AI” guidelines. Google's AI principles list now-familiar items such as accountability, safety, and a commitment to ensure that AI systems are unbiased. Microsoft has adopted similar principles, and both are members of the Partnership on AI, a multi-stakeholder organization of some eighty partners, including leading tech firms, NGOs, and research institutes.

Of course, private-sector activism is largely driven by the imperative to sort out liability, accountability, and fairness issues in developing and deploying AI for profit. At the same time, over the past several years there has been no dearth of efforts by government agencies and commissions, quasi-official expert bodies, technologists, and scientists to define AI ethics, principles, and standards. The EU, which has been a leader in tech standard setting, is a good example. In April 2019, the European Commission released its Ethics Guidelines for Trustworthy Artificial Intelligence, put together by a high-level expert group on AI. The document spelled out a set of “fundamental rights” from which “trustworthy AI” should be derived; those rights consist of the broadly shared democratic norms that underpin European institutions. From these norms, spelled out as rights, the EU's expert group derived seven guidelines, which hold that AI systems should:

  • Be subject to human agency and oversight; 
  • Be technically robust and safe; 
  • Ensure privacy and allow for good data governance; 
  • Be transparent (for example, AI systems ought to inform people that they are interacting with an artificial system rather than another person); 
  • Enable diversity, non-discrimination, and fairness; 
  • Work in the service of societal and environmental well-being; 
  • Be accountable (“In applications affecting fundamental rights, including safety-critical applications, AI systems should be able to be independently audited” by external parties). 

The Global Quest for Principles

The expert group's work suggests that, as it has done with data regulation, Europe will likely play a large role in shaping global standards for AI. The EU's General Data Protection Regulation (GDPR), which went into effect in 2018, is already forcing companies to prove compliance with EU rules on data protection. Companies that seek to do business within Europe must comply in order to avoid steep fines and retain access to European markets.

In this vein, the GDPR is on its way to becoming a de facto global standard, though there is a risk that AI standards, like data regulation and the internet itself, become splintered by competing norms. Other governments have modeled legislation or regulation on the GDPR, including Japan, which harmonized its data-privacy regulations with European standards, and the state of California, which passed the California Consumer Privacy Act (CCPA), a law that came into force in 2020; there is further debate over whether to align the CCPA even more closely with the GDPR. In both cases, governments have been motivated in part by the prospect of an “adequacy determination” under the GDPR, meaning that the EU would allow firms to transfer data from Europe to the home country (or state, in California's case).

Europe's AI ethics guidelines could be an important first step toward translating principles into regulatory standards and norms. The GDPR itself does not mention AI, even though some of its provisions capture AI principles and have already affected AI-related business operations in Europe. Critics argue that the GDPR's data-privacy and transparency provisions are overly broad, difficult to enforce, and costly for firms wishing to develop and use AI applications. For example, the GDPR requires firms to give individuals the right to a human review of a decision made by an automated (AI) system, which raises a firm's costs (one of AI's great advantages is its ability to process massive amounts of data swiftly). These critics argue that, without reform, the GDPR will depress AI-related investment within Europe and shift even more of it to China and the United States.

Other governments and multilateral institutions have crafted AI ethics guidelines that are similar to the EU’s, the OECD being among the most recent and notable. In May 2019, it released five “complementary values-based principles” for responsible AI. These assert that AI should: 

  • drive “inclusive growth, sustainable development and well-being”; 
  • respect the rule of law; 
  • be transparent; 
  • be robust and secure; 
  • have its designers and owners held accountable for its impacts. 

In June 2019, the G20 adopted its own set of principles, drawn entirely from the OECD's. In the United States, efforts to build a consensus remain piecemeal. One prominent effort is the 2017 Asilomar Principles, a set of twenty-three principles covering research, ethics, and values, including safety, failure transparency, human control, and avoiding the development of fully autonomous weapons; the document was endorsed by more than 2,500 AI and robotics researchers, engineers, prominent technologists, and scientists, including Elon Musk and the late Stephen Hawking. Similarly, the Institute of Electrical and Electronics Engineers (IEEE), a leading professional body with nearly half a million members globally, created a Global Initiative on Ethics of Autonomous and Intelligent Systems in 2016 to ensure that those designing and developing AI prioritize ethical standards, and the US Department of Defense's Defense Innovation Board issued a similar set of ethical principles for AI in 2019. China's 2018 White Paper on Artificial Intelligence Standardization has a similar tone and thrust, arguing that “relevant [AI] standards remain in a blank state” and that “China should strengthen international cooperation and promote the formulation of a set of universal regulatory principles and to ensure the safety of AI technology.” The white paper goes on to outline key principles that largely overlap with the Western ones discussed above: AI should benefit human welfare; safety is a prerequisite for sustainable technology; a clear system of liability should hold AI developers accountable; transparency requires understanding of how and why AI makes a specific decision; and privacy must be clearly defined. The white paper, issued by the China Electronics Standardization Institute, is semi-official, and similar views are echoed by a number of Chinese institutes. At a minimum, the United States, the EU, Japan, and others should test China's sincerity by actively pursuing negotiations to operationalize these ethics and principles in binding agreements.

While there are different points of emphasis, and the devil, as always, is in the details, there appears to be substantial common ground among the four major international statements on AI governance issued since 2017 by the United States, the EU, the OECD, and China (in its 2018 White Paper on Artificial Intelligence Standardization). Based on a review of all four, the following set of issues captures the core shared ethics and principles:

  • Human agency and benefit: Research and deployment of AI should augment human well-being and autonomy; keep humans in oversight to choose how and whether to delegate decisions to AI systems; and be sustainable, environmentally friendly, and compatible with human values and dignity;
  • Safety and responsibility: AI systems should be technically robust, based on agreed standards, and verifiably safe, including resilience to attack, security, reliability, and reproducibility;
  • Transparency in failure: If an AI system fails, causes harm, or otherwise malfunctions, it should be explainable why and how the AI made its decision, i.e., algorithmic accountability (illustrated in the sketch after this list);
  • Avoiding arms races: An arms race in lethal autonomous weapons should be avoided, and decisions on the lethal use of force should be human in origin;
  • Periodic review: Ethics and principles should be periodically reviewed to reflect new technological developments, particularly in general, deep-learning AI.
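
To illustrate what “transparency in failure” might require in practice, the sketch below logs each automated decision with its inputs, model version, and rationale so that a later failure can be audited; the record fields, helper function, and parole example are hypothetical and are not drawn from any of the frameworks cited above.

```python
# Minimal sketch of an audit record supporting "transparency in failure":
# every automated decision is stored with enough context to reconstruct
# why the system decided what it did. Field names are illustrative only.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_name: str                     # which model produced the decision
    model_version: str                  # exact version, for reproducibility
    inputs: dict                        # the features the model actually saw
    output: str                         # the decision or score returned
    rationale: dict = field(default_factory=dict)  # e.g., per-feature weights
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to an append-only log that auditors can replay later."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

# Example: a hypothetical parole-recommendation model explains its output.
log_decision(DecisionRecord(
    model_name="parole-recommender",
    model_version="2.3.1",
    inputs={"age": 34, "prior_offenses": 1, "program_completion": True},
    output="recommend_review_by_human",
    rationale={"program_completion": 0.42, "prior_offenses": -0.18},
))
```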

Translating such principles into operational social, economic, legal, and national-security policies will be a daunting task. These ethical issues already confront business and government decision-makers, yet neither has demonstrated comprehensive policy decisions on implementing them, suggesting that establishing governance is likely to be an incremental, trial-and-error process.

Deciding standards and liability for autonomous vehicles, data privacy, and algorithmic accountability will be complex and difficult to achieve, and as AI becomes smarter, the ability of humans to understand how it reaches decisions is likely to diminish. Even though AI may be a top arena of US-China tech competition, the risks of catastrophic failure and the shared desire for global markets should make such norms necessary for governments and industry alike. The need for human responsibility and accountability for AI decisions, and the downside risks of unsafe AI and of the opacity that prevents understanding failure, are shared dangers. One recent example of the United States, China, and other competitors cooperating is the creation of standards and technical protocols for 5G, itself a fierce arena of US-China competition: a coalition of global private-sector telecom and IT firms known as the 3rd Generation Partnership Project (3GPP), working in collaboration with the International Telecommunication Union (ITU), a key UN standard-setting institution, has so far successfully agreed on a host of technical standards for 5G technology.

While some in the United States complained of Chinese assertiveness in pursuing standards that tend to favor Chinese technology, Beijing played by the rules and, like other stakeholders (albeit more aggressively), sought to shape global standards. US firms had the same opportunity to push for their preferred guidelines; they simply have not matched Chinese efforts. The point is that markets are global, and all stakeholders, competitors or not, have an interest in tech standards that reflect this. Given the enormous stakes, the United States and China, “strategic competitors” or not, have a mutual interest in getting AI governance right and adopting safe, secure, and accountable rules for AI applications. This should be an area of public-private partnership, with US and Chinese Big Tech having much at stake. With AI at the center of US-China tech competition, whether common global ethical principles, norms, and standards can be adopted, or whether zero-sum competition leads to a dangerous race to the bottom, is a question with huge and potentially catastrophic consequences. To the degree that the United States and China, as the two leading AI powers, can reach accord in a bilateral dialogue, the outcome would likely shape parallel efforts to achieve consensus in international standard-setting institutions. After all, the G-20 has already embraced the OECD AI principles.

Action Points

  • The US needs a presidential commission of engineers, technologists, the private sector, and Congress to recommend national policies on control of data and on AI regulatory standards and ethics, building on NIST recommendations. This is a sine qua non for reinforcing American leadership;
  • A potential next step could be a G-20 mandate to negotiate norms, standards, and ethical principles for the uses of, and restrictions on, AI applications, along with a new international mechanism to codify and monitor them;
  • Launch US-EU-China talks on AI governance as a key building block. Whatever consensus is achievable among the tech giants would create a powerful basis for global standards;
  • Create a standing international regulatory body on AI standards and ethics under UN auspices, with a UN Security Council mandate: an International AI Commission (IAIC). It should have a standards function like the ITU's, but with an arbitration function similar to the WTO dispute mechanism, and it should have an advisory board of engineers, technologists, tech firms, and legal experts. 

This publication by Robert A. Manning is part of the Atlantic Council's ongoing effort to establish forums, enable discussion of the opportunities and challenges of modern technologies, and evaluate their implications for society and international relations, work championed by the newly established GeoTech Center. To help lay the groundwork for the Center's launch in March 2020, the Atlantic Council's Foresight, Strategy, and Risks Initiative was awarded a Rockefeller Foundation grant to evaluate China's role as a global citizen and the country's use of AI as a development tool. The commissioned work focused on China's data and AI efforts around the world and included the publication of reports and the organization of conferences in Europe, China, India, and Africa. At these gatherings, international participants evaluated how AI and the collection of data will influence their societies and how countries can collaborate successfully on emerging technologies, with a special emphasis on the People's Republic of China in an ever-changing world. Other articles, as well as summaries of the discussions in Paris, Brussels, Berlin, Beijing, Shanghai, with India, and with Africa, have been published.

The Foresight, Strategy, and Risks Initiative (FSR) provides actionable foresight and innovative strategies to a global community of policymakers, business leaders, and citizens. 

The post Artificial intelligence principles: Avoiding catastrophe appeared first on Atlantic Council.

How tech, data and geopolitics impact food https://www.atlanticcouncil.org/insight-impact/in-the-news/video-how-tech-data-and-geopolitics-impact-food/ Thu, 28 May 2020

On May 28, 2020, Ms. Daniella Taveau, Dr. Molly Jahn, and Dr. David Bray, director of the Atlantic Council's GeoTech Center, discussed how tech, data, and geopolitics impact food. The conversation focused on how vulnerabilities in the global food system have existed for some time and how the COVID-19 pandemic has exacerbated them, amplifying existing instabilities, inequities, and insecurities, and will continue to do so unless action is taken to address problems in the food system.

The panel examined how, as the world's population grows and as weather- and climate-related risks combine with human activities ranging from geopolitics to internal conflicts within countries, the global food system faces increasing risk of major disruption. At the same time, several important economies appear to be headed toward isolationism and many more are applying restrictive trade barriers, actions that will likely introduce even more impediments and vulnerabilities into the fragile global food system.

The post How tech, data and geopolitics impact food appeared first on Atlantic Council.

Event recap | Data salon episode 1: Notice, consent, and disclosure in times of crisis https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-data-salon-episode-1/ Wed, 27 May 2020

On Wednesday, May 27, 2020, the Atlantic Council’s GeoTech Center and Accenture hosted Dr. Jennifer King, Director of Consumer Privacy at the Center for Internet and Society at Stanford Law School, and Ms. Jana Gooth, legal policy advisor to MEP Alexandra Geese, for the inaugural episode of the jointly presented Data Salon Series, which will host private roundtables preceded by publicly recorded presentations concerning data policymaking and governance.

The event was co-hosted by Mr. Steven Tiell, Senior Principal for Responsible Innovation at Accenture, and Dr. David Bray, Director of the GeoTech Center at the Atlantic Council.


Presentation

Dr. King's presentation began by identifying a source of frustration shared by data scientists and policymakers: the consent and privacy agreements we have all mindlessly clicked through act as a barrier to informed consumer decisions and serve as little more than a liability shield for companies. Further complicating the dilemma is the system's lack of scalability: there are simply too many terms-of-service agreements, updating too often, for consumers to meaningfully process the information. It is a process made by and for lawyers that has failed to adjust to consumer concerns about data usage, and Dr. King argued that a paradigm shift is required, not just tinkering within the existing framework.

Dr. King went on to describe several approaches, gathered from research and forums, for beginning to address that policy gap. The first pushed for the development of personal user agents: software that helps users aggregate and manage their relationship with their data and its privacy, much like a password manager. Others hoped to expand policymakers' knowledge base, especially concerning human-computer interaction, through data-visualization tools. A third approach considered data technology in terms of public spaces, given its impending ubiquity in the form of IoT, facial recognition, smart cities, and so on: how can we accommodate people in public places who do not want to be recorded? Similarly, a fourth approach emphasized the importance of proactive efforts to include the interests of marginalized, vulnerable communities not traditionally considered in the tech design process.
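
A rough, hypothetical sketch of the “personal user agent” idea follows: preferences are set once and incoming data requests are evaluated against them, much as a password manager centralizes credentials. The class, field names, and policy rules below are invented for illustration and are not taken from Dr. King's remarks.

```python
# Illustrative sketch of a "personal user agent" that answers consent
# requests on the user's behalf, based on preferences set once up front.
from dataclasses import dataclass

@dataclass
class PrivacyPreferences:
    share_location: bool = False
    share_purchase_history: bool = False
    allow_profiling: bool = False
    allow_anonymized_research: bool = True

class PersonalUserAgent:
    def __init__(self, prefs: PrivacyPreferences):
        self.prefs = prefs

    def evaluate_request(self, service: str, data_type: str, purpose: str) -> bool:
        """Decide whether a service's data request matches the user's preferences."""
        if purpose == "research" and self.prefs.allow_anonymized_research:
            return True
        if data_type == "location":
            return self.prefs.share_location
        if data_type == "purchase_history":
            return self.prefs.share_purchase_history
        if purpose == "profiling":
            return self.prefs.allow_profiling
        return False  # default-deny anything the user has not opted into

agent = PersonalUserAgent(PrivacyPreferences(share_location=False))
print(agent.evaluate_request("map_app", "location", "advertising"))  # False
print(agent.evaluate_request("health_study", "steps", "research"))   # True
```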

From a more regulatory perspective, data trusts were identified as a possible way to shift data ownership away from the individual and to the community level, particularly for genetic data. In addition, some sought to incentivize companies to use data responsibly rather than relying on prohibitions and penalties. Many subscribed to the concept of algorithmic explainability: the notion that people providing data should be able to understand exactly what happens to it, who controls it, and what decisions it ultimately guides. Finally, some hoped to legislate limits on widespread public surveillance and to develop, through an independent body, a system of metrics for the harms associated with certain uses of data. Dr. King ended her presentation with an appeal to reconsider the Fair Information Practice Principles as a framework that constrains the ethical dynamics that must be considered in legislating data.

In her follow-up, Ms. Gooth provided context from the EU perspective that aligned with Dr. King's main points: the EU's GDPR and data-privacy directive still operate within the traditional notice-and-consent framework, with no provisions regarding design. The closest thing to design-focused policy has been a proposed requirement for web browsers to default to the highest privacy settings, though that legislation has been stuck for years.


Discussion

While the Data Salon format usually holds a portion of the discussion under the non-attribution Chatham House rule, our audience participants unanimously and graciously allowed for the unabridged recording of the entire event to be made publicly available. First, participants inquired about the possibility of using different groups or regulatory bodies to produce applications that would regulate an individual’s privacy settings for them, and about the possibility of requiring companies to track data use in the same way they keep records of financial flows (which is already required under the GDPR). One recurrent issue was the lack of appropriately skilled manpower to enforce future or extant regulations.

There were also concerns about the future of data: how likeness rights and associated laws will change in response to inferred information, how post-COVID data legislation will deal with regulations slackened during the pandemic, what will be done once most machines have collected enough data to carry out their tasks without further training, and how legislation will handle the myriad edge cases that abound in a technology-infused world. What rights, for example, does a pedestrian have when an automated car uses their presence as data, or a shopper meandering through a mall?

Underlying many of the questions raised in the discussion was the premise of design. One participant asked whether the business model behind data could be changed so that companies move from avoiding penalties to pursuing some positive good, perhaps through a new understanding of fiduciary responsibility. Another considered how the lens of antitrust law might be applied, given that data collection often requires a stated end, limiting its use in yet-unknown studies. That consideration tapped back into discussions of data trusts, a construct that would pool data indefinitely and do away with the purpose-limitation framework so common today. Others considered whether more granular consent might result, say for collection, for specific inferences, and for degrees of generalizability from those inferences. Another participant looked toward the narrow-banking model, which would let individuals choose whether their data could be put to use, presumably in return for some fee akin to interest on a savings account, or simply be stored safely and left untouched.

Ultimately, the discussants' constant frustration was the sheer diversity of places from which data is gathered and in which it is put to use. Generalizable policy is difficult in that environment, particularly when trying to imagine a paradigm shift. Both speakers concluded on a similar note, though: successful policies on notice, consent, and privacy will require cooperation among industry, government, and consumers, and trust must be built among the three, particularly between regulators and industry and between consumers and the products they use.

The post Event recap | Data salon episode 1: Notice, consent, and disclosure in times of crisis appeared first on Atlantic Council.

Event recap | Mobilizing industry to encourage multi-sector solutions to global concerns https://www.atlanticcouncil.org/blogs/geotech-cues/going-deeper-mobilizing-industry-to-encourage-multi-sector-solutions-to-global-concerns/ Thu, 21 May 2020

On May 21, 2020, Daryl Haegley, Yusuf Abdul-Qadir, Melissa Flagg, Lee McKnight, Mary Collins, Lin Wells, and Divya Chander shared their perspectives in a live video discussion titled “Mobilizing industry to encourage multi-sector solutions to address emergent global concerns” and moderated by David Bray, Director of the Atlantic Council’s GeoTech Center. The discussion focused on new ways of addressing the COVID-19 pandemic and application of multi-sector industry solutions to current and potential future pandemics.

The panelists focused on how nations and industries need better, more timely approaches to future outbreaks and potential additional waves of COVID-19, as well as to IoT-based risks and cyber-related concerns, disruptions to supply chains and autonomous systems in cities or factories, and better monitoring for manufactured biological or chemical threats. Continuous efforts are needed to increase the resilience of physical systems, cyber infrastructure, and people-centered communities in ways that respect and preserve privacy and sensitive information and that empower people with choice.

We need ways of advancing shared solutions, with distributed action and data sharing around a shared context, that do not involve centralized control or centralized data repositories. Early examples already exist. Imagine trying to describe to someone in the 1920s the importance of smoke detectors linked to the fire department and automatic sprinkler systems to put out the fire. We can do the same thing for public-health resilience, for IoT and cyber-related infrastructure, and for open societies, in ways that do not require centralized data collection.

The post Event recap | Mobilizing industry to encourage multi-sector solutions to global concerns appeared first on Atlantic Council.

Seven perspectives on securing the global IoT supply chain https://www.atlanticcouncil.org/blogs/new-atlanticist/seven-perspectives-on-securing-on-the-global-iot-supply-chain/ Tue, 19 May 2020

The Internet of Things (IoT) refers to the increasing convergence of the physical and digital worlds. “Things” of all kinds are being connected to the internet and to each other, with more than fifty billion devices expected to be connected by 2030. These devices range from internet-connected power-generation equipment to wearable health trackers and smart home appliances, and they generally offer some combination of new functionality, greater convenience, or cost savings to users.

As with any new technology, IoT's benefits come with serious risks, whose impacts range from individual consumer safety to national security. Cybersecurity is now a relevant concern for even the most mundane household objects. Many IoT devices are manufactured abroad, and many of these are extremely low cost, with little consideration given to security. There is nothing inherently untrustworthy or insecure about foreign manufacturing; individual firms and product lines are much more fruitful levels of analysis for distinguishing good security practices from bad. Importantly, however, the United States has limited means to enforce its standards in foreign jurisdictions, such as China, where the bulk of IoT products are manufactured.

We asked IoT experts seven questions about securing the global IoT supply chain:

Question 1: What kinds of harm can IoT really bring to users? Others?

Bruce Schneier, adjunct lecturer in Public Policy, Harvard University’s John F. Kennedy School of Government:

“The IoT gives computers the ability to directly affect the physical world: toys, small and large appliances, home thermostats, medical devices, cars, traffic signals, power plants. This transfers the traditional computer risks to these devices. Hacked thermostats can cause property damage. Hacked power generators can cause blackouts. Hacked cars, traffic signals, and medical devices can result in death. To date, most of these vulnerabilities have been demonstrated by researchers. But we have seen examples by both criminals and governments, and there is no reason to expect the trends to suddenly reverse.”

Question 2: Nearly everything is made at least in part by a foreign manufacturer. What makes IoT devices special?

Josh Corman, founder, I Am the Cavalry (dot org); former director, Cyber Statecraft Initiative:

“When compared to their Enterprise IT counterparts, IoT devices often prove quite challenging to securely design, develop, and operate. Available “best practices” for cybersecurity carry heavy biases and assumptions across at least six dimensions: consequences of failure, adversaries, device composition, economics, operational context, and time scales. Where smaller, cheaper devices may lack adequate processing power, margins, and the benefit of layered defenses and security teams, they may encounter elevated risks to safety, face a wider swath of accidents and adversaries, and for longer lifecycles than is sound. This framework of six differences for IoT is explored in more detail by “I Am The Cavalry.”

“Further, many of the nascent IoT supply chains lack the mature, traceable, auditable processes required for higher assurance dependence—leaving them more prone to avoidable harms (accidental or otherwise).”

Question 3: Who has the greatest potential (or power) to play a positive role in IoT security (markets, governments, or international organizations)?

Nate Kim, MPP candidate, Harvard University’s John F. Kennedy School of Government:

“One of the biggest factors underlying the problem of IoT security is economics: IoT suppliers and manufacturers haven’t been building security into their products because it’s cheaper and because consumers haven’t been demanding it. To me, this suggests that the market alone cannot deliver significant improvements for IoT security—we need interventions that change the cost calculus for IoT manufacturers or amplify the demand signal from consumers (e.g. increasing consumer awareness to alleviate information asymmetry concerns). At the end of the day, these interventions will have to be administered by the government.

“The government therefore has the greatest potential to make a difference in IoT security because of its authority to pass and enforce policies that can increase the cost of bad security to IoT suppliers. These policies can include strict security standards for connected devices, or liability schemes that hold manufacturers accountable for harms resulting from poor security in their products. If executed well, such policies can allow consumers to gain all the benefits of using connected products without also putting them at risk of serious harms that arise from an unsecure Internet of Things.

“International organizations will have an important role to play as well, especially in the context of increasingly globalized markets and supply chains of the 21st century. The connectedness of the internet makes IoT security very much a global problem. Even if the United States manages to enforce strong security in its national IoT ecosystem, vulnerabilities in IoT systems outside the United States can still be exploited to pose threats against US consumers and critical infrastructure. IoT products are also largely made outside of the United States, which means that the United States must collaborate with other governments in enforcing the rules and standards of digital security. Multinational cooperation through international platforms will be essential to improving IoT security everywhere.”

Question 4: Labeling has been discussed as a way to enforce best practices and inform consumers about the products they buy across a range of different fields. What makes labeling most tricky?

Robert Morgus, director, research & analysis, US Cyberspace Solarium Commission:

“The purpose of labeling is to provide the consumer or purchaser with better information on the product they are purchasing. This presents two specific hurdles in the context of cybersecurity and information technology devices. The first challenge lies in identifying the most relevant security information to present on a label. Is information about the security features of the product itself relevant (i.e., that each item sold has a unique default password)? Or should we be more concerned about the process by which the product was developed (i.e., its codebase and whether the constructor adhered to good practices in secure coding)?

“Both are likely relevant to the consumer but are difficult to present in a clear and coherent way, leading to the second major challenge: presenting the label information in a manner that is meaningful to the consumer. If the goal of labeling is to enable consumers to demand better security through their purchases, the information presented on labels must be easily understandable for the purchaser. Simply listing the sources of code—components, development frameworks, libraries, and so on—is unlikely to be actionable information for most. Building a productive labeling schema will require expert and consumer input to help design symbols and shortcuts for average consumers in the form of security scores or consumer-facing certifications with tiers.”
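
As a purely illustrative sketch of the kind of machine-readable label data described above, the example below combines product-level security features with process-level attestations and rolls them up into a simple consumer-facing tier; the field names, product, and scoring rule are hypothetical and do not reflect any existing labeling scheme.

```python
# Hypothetical IoT security label: product features, process attestations,
# and a simple consumer-facing tier derived from them. Illustrative only.
label = {
    "product": "Example Home Router X100",        # hypothetical product
    "security_features": {
        "unique_default_password": True,
        "automatic_updates": True,
        "update_support_until": "2026-12-31",
    },
    "process_attestations": {
        "secure_coding_practices": True,
        "software_bill_of_materials": False,       # SBOM not published
    },
}

def consumer_tier(lbl: dict) -> str:
    """Collapse the detailed fields into a simple tier a shopper can compare."""
    checks = list(lbl["security_features"].values()) + \
             list(lbl["process_attestations"].values())
    passed = sum(1 for c in checks if c is True)
    total = sum(1 for c in checks if isinstance(c, bool))
    ratio = passed / total
    return "Gold" if ratio == 1.0 else "Silver" if ratio >= 0.6 else "Basic"

print(consumer_tier(label))  # "Silver" for this hypothetical device
```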

Question 5: What would Congress need to do for the Federal Trade Commission to be able to enforce security standards on a distributor like Amazon?

Jessica Rich, distinguished fellow, Institute for Tech Law and Policy, Georgetown Law; former director, Bureau of Consumer Protection, Federal Trade Commission:

“Under current law, distributors can be liable for selling products in ways that violate the FTC Act’s prohibition against “unfair or deceptive” practices. For example, in several cases against home shopping network QVC, the FTC charged the company with making false claims about products that had been manufactured by other companies. The FTC also has held catalog companies liable for their role in disseminating false claims about products sold on behalf of others. Further, Amazon itself has settled FTC charges that it made false claims that products sold on its website were “bamboo” when they were in fact made of rayon.

“However, absent some sort of agency relationship between a manufacturer and a distributor, the liability of a distributor generally depends on the role it plays in committing a law violation. In the above cases, the distributors themselves engaged in conduct that allegedly violated the law—for example, formulating or disseminating their own misleading claims or marketing strategy—and the FTC was required to develop extensive proof of such conduct. In other words, the law does not generally allow the FTC to hold an independent distributor strictly liable for selling a faulty product, or for failing to undertake the screening of products for defects or safety. Adding to the challenge, the Communications Decency Act (CDA), passed in the 1990s and interpreted expansively, confers some immunity on “interactive computer services” (arguably Amazon, Google, and Facebook) that merely repeat the speech of others.

“For these reasons, if the goal is to hold a distributor like Amazon responsible for the security of any products it sells—automatically, and without regard to the role it plays in creating or promoting the product—Congress would need to pass a law specifically creating such liability and amending or superseding portions of the CDA. The political and practical obstacles to doing so would be significant.”

Question 6: Between the European Union and the United States, who has set a better example on how to enforce IoT security standards on foreign manufacturers?

Beau Woods, cyber safety innovation fellow, Cyber Statecraft Initiative; founder and CEO, Stratigos Security:

“There are not a lot of laws around IoT, much less enforcement. The United Kingdom’s plan is to restrict sale and import of devices without their top three, but they haven’t yet put that into action. I remember Germany banned a doll named My Friend Cayla that had security issues. California’s IoT law is in force, though I don’t know if there have been any enforcement actions around it.

“Regarding the broader set of operational technology devices, I’d say the Food and Drug Administration leads and no one else is close.”

Question 7: What’s the most important effort on IoT security we’ve never heard of?

Benedikt Abendroth, senior security program manager, Azure Sphere, Microsoft:

“One challenge is the notion that not every connected device should be protected with the highest levels of security. Even a very mundane device, such as a child’s toy, a household appliance, or even a connected cactus watering sensor can pose risks when they are compromised over the internet. A toy can spy or deceive, a household device can be destroyed, and a watering sensor can launch a denial-of-service attack. Taking that into account, second-best solutions are not enough. In 2017, Microsoft introduced a new standard for IoT security in the white paper “The Seven Properties of Highly Secured Devices,” which demonstrates that it is possible to engineer all connected devices, even those that are price-sensitive, to be trustworthy even in the face of determined attackers.”

Trey Herr, PhD, is director of the Atlantic Council’s Cyber Statecraft Initiative under the Scowcroft Center for Strategy and Security.

Further reading:

The post Seven perspectives on securing the global IoT supply chain appeared first on Atlantic Council.

Why COVID-19 requires both industry and nations to transform how they collaborate https://www.atlanticcouncil.org/insight-impact/in-the-news/video-why-covid-19-requires-both-industry-and-nations-to-transform-how-they-collaborate/ Fri, 01 May 2020 13:00:31 +0000 https://www.atlanticcouncil.org/?p=252156 On May 1, 2020, Vala Afshar, Ray Wang, and David Bray, shared perspectives in a live video discussion titled "Why COVID-19 requires industry and nations to transform how they collaborate" about the transformative nature of the pandemic and post-COVID-19 era ahead.

On May 1, 2020, Vala Afshar, Ray Wang, and David Bray shared perspectives in a live video discussion titled “Why COVID-19 requires industry and nations to transform how they collaborate” about the transformative nature of the pandemic and the post-COVID-19 era ahead.

This live discussion included what CEOs of companies could do to adapt their organizations and lead in the post-COVID-19 era. The discussion also highlighted the efforts of the Atlantic Council GeoTech Center in championing positive paths forward that nations, economies, and societies can pursue to ensure new technologies and data empower people, prosperity, and peace, a mission focus that remains especially salient amid the current pandemic response and recovery.

The post Why COVID-19 requires both industry and nations to transform how they collaborate appeared first on Atlantic Council.

Design choices of Central Bank Digital Currencies will transform digital payments and geopolitics https://www.atlanticcouncil.org/blogs/geotech-cues/design-choices-of-central-bank-digital-currencies-will-transform-digital-payments-and-geopolitics/ Fri, 24 Apr 2020 01:30:23 +0000 https://atlanticcouncil.org/?p=247588 In this analysis, the Atlantic Council GeoTech Center examines the geopolitical implications of Central Bank Digital Currencies (CBDCs) and calls for the United States to lead on setting standards for CBDC and financial technology.

In this analysis, the Atlantic Council GeoTech Center examines the geopolitical implications of Central Bank Digital Currencies (CBDCs) and calls for the United States to lead on setting standards for CBDC and financial technology.

CBDCs will be used by countries to create new monetary systems that will be leveraged in great power competition to form economic alliances, avoid sanctions, and, depending on design, potentially surveil the transactions associated with users of the currencies. World leaders from both the public and private sectors must understand CBDCs because, depending on the technology design choices, CBDCs will:

  • Revolutionize the way money is exchanged and payments are made
  • Redefine the relationship between the state, private sector financial institutions, and technology companies
  • Give countries the ability to design independent payments systems that operate separate from the US dollar-based order

Introduction

The Stone, Bronze, and Iron ages each reflect the tools used to craft civilization. In the current digital age, money is being transformed to reflect the most valued asset, data. No longer is money rooted in printed cash that has exchanged thousands of hands; instead, its digitization has changed the way people transact and institutions conduct business. What began as the first electronic payment offered by Western Union in the 1870s has grown into sprawling digital payments ecosystems. The latest innovation is entirely new non-state types of money predicated on cryptography. Just like fiat currency, these new forms of money are able to serve as a unit of account, store of value, and medium of exchange. The payment evolution’s only logical end is the digitization of the state-backed currency created by the central bank: the central bank digital currency (CBDC).

This analysis consists of two parts. Part I explores how CBDCs can improve payments, the underlying factors in a country that are necessary for a CBDC to be effective, and the payment values countries must decide upon. Part II assesses the geopolitical implications of CBDCs. CBDCs challenge the existing order of global payments. They allow countries to create monetary systems that operate independently of the existing US dollar-based order. Furthermore, CBDCs will be a primary tool used by major powers in economic diplomacy to form alliances, subvert sanctions, and collect payment data at a granularity not possible through fiat currency.

This report concludes with a call for the United States to reassert its role as a guiding voice, and set standards for CBDC and financial technology. CBDC development is driven by a host of nations, each determining their own set of values and some looking to export these values. Global economic stability and prosperity hinges upon shared working norms. In 1944, the United States ushered in an era of unprecedented monetary stability when it established the US dollar as the global reserve currency. American leadership is needed once again.


Part I. Central Bank Digital Currency and its Opportunities

An Overview

Central bank digital currency is a new form of digital currency offered by a central bank to replace fiat cash. Like cryptocurrencies and other private payment platforms, a CBDC gives users the ability to make digital payments without the use of physical cash. CBDCs would be a liability of the state (unlike private payment platforms which are managed by private companies and cryptocurrencies which are managed by protocols). Governments in advanced economies are exploring CBDCs as a means to curb the growth of private payment providers and cryptocurrencies, which they see as a competitive risk to central bank-issued cash. Governments in emerging markets are interested in CBDCs as a means to promote financial inclusion and more effective digital payments systems.

CBDC, while novel in its creation, utilizes existing technology that underpins private payment providers and cryptocurrency.

  1. Digital Ledger: CBDCs are based on a digital ledger, or database, which keeps track of CBDC ownership and transactions by users who have accounts on the ledger. The ledger will likely be centralized in the form of a core ledger managed by the issuing central bank. On this core ledger, the central bank would issue CBDCs and process transactions. The core ledger could have some decentralized features based on distributed ledger technology, in which maintenance or processing of transactions could be performed by a group of entities rather than a single body. Distributed ledger technology is unlikely to be fully embraced due to technical limitations, but certain characteristics of it are being explored by central banks such as programmability, ledger management, and use of cryptography. The government faces a number of costs and benefits based on the design choices and so must choose what it believes best serves the purposes of the payments system (e.g. resilience, bandwidth, scalability, transaction speed).
  2. Account-Based System or Digital Tokens: An account-based system or digital tokens determines how users are represented and make transactions in the payments system. In an account-based model, transactions are recorded in the database and referenced to individual identities. For this to be done en masse, a digital identity system is needed for every user. Alternatively, a digital token-based system utilizes public-private key pairs and digital signatures to sign a message and make a transaction. A token-based system offers universal access since a user identity is not required, also allowing for high levels of privacy. The challenge, however, is that users must remember their private key, or they lose access to their funds. Furthermore, it is difficult to create an effective anti-money laundering and know your customer framework on a tokenized system. (A minimal token-signing sketch follows this list.)
  3. Digital Wallet: A digital wallet is a system that securely stores payment information for users to make transactions. Account-based and digital token systems both require some form of digital wallet that gives users the ability to securely store money. Digital wallets could either be offered by the government or a private entity.
  4. Wholesale or Retail Interlinkages and Application Program Interfaces (APIs): CBDCs can be wholesale, retail, or both. A wholesale CBDC is designed specifically for financial institutions. A retail CBDC, on the other hand, is issued directly to the general public. Depending on the type of CBDC being implemented, linkages must be developed across platforms. Wholesale CBDCs will require linkages with financial institutions, securities and foreign exchange platforms, and other financial market systems. A retail CBDC will require linkages to consumer payment applications, foreign exchange systems, and digital wallets. APIs will be necessary for these linkages, connecting users to the core ledger and to one another to create the overall payment network.
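
To make the token-based model in item 2 above concrete, here is a minimal, illustrative sketch in Python (using the widely available `cryptography` package) of how a token-style transfer might be validated on a centralized core ledger with public-private key pairs and digital signatures. It is not any central bank's actual design: the names (`CoreLedger`, `Transfer`, `issue`, `apply`) are hypothetical, amounts are in an arbitrary smallest unit, and a real system would add anti-money-laundering checks, replay protection, and durable storage.

```python
# Illustrative sketch only: a toy centralized "core ledger" that validates
# token-style CBDC transfers signed with ed25519 key pairs. All names are
# hypothetical; this is not any central bank's design.
from dataclasses import dataclass

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


@dataclass
class Transfer:
    sender: bytes     # sender's raw public key acts as the token "address"
    recipient: bytes  # recipient's raw public key
    amount: int       # smallest currency unit (e.g., cents)
    signature: bytes  # sender's signature over message()

    def message(self) -> bytes:
        return self.sender + self.recipient + self.amount.to_bytes(8, "big")


class CoreLedger:
    """Centralized ledger keyed by public key rather than a verified identity."""

    def __init__(self) -> None:
        self.balances: dict[bytes, int] = {}

    def issue(self, holder: bytes, amount: int) -> None:
        # In practice only the issuing central bank would do this.
        self.balances[holder] = self.balances.get(holder, 0) + amount

    def apply(self, t: Transfer) -> bool:
        # 1. Verify the digital signature against the sender's public key.
        try:
            Ed25519PublicKey.from_public_bytes(t.sender).verify(t.signature, t.message())
        except InvalidSignature:
            return False
        # 2. Check funds, then debit the sender and credit the recipient.
        if self.balances.get(t.sender, 0) < t.amount:
            return False
        self.balances[t.sender] -= t.amount
        self.balances[t.recipient] = self.balances.get(t.recipient, 0) + t.amount
        return True


def raw(public_key: Ed25519PublicKey) -> bytes:
    """Serialize a public key to its 32 raw bytes."""
    return public_key.public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw
    )


# Usage: the central bank issues digital cash to Alice, who then pays Bob.
alice_sk, bob_sk = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
alice, bob = raw(alice_sk.public_key()), raw(bob_sk.public_key())

ledger = CoreLedger()
ledger.issue(alice, 10_000)                        # 100.00 issued to Alice
payment = Transfer(alice, bob, 2_500, signature=b"")
payment.signature = alice_sk.sign(payment.message())

assert ledger.apply(payment)
print(ledger.balances[alice], ledger.balances[bob])  # 7500 2500
```

Because balances are keyed only by a public key, the sketch also shows why a token model offers universal access and privacy by default, and why anti-money-laundering and know-your-customer controls are harder to bolt on than in an account-based design.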

The underlying technology of CBDCs (digital ledgers, account-based systems, digital tokens, digital wallets, and APIs) has existed for decades. Recent advances in financial technology and cryptography, however, have allowed for the creation of digital payment ecosystems that are resilient, protect privacy, avoid double-counting, and quickly process large volumes of transactions. Furthermore, broad adoption of financial technology by companies and individuals, as well as the accessibility of smartphones, makes CBDC a possible replacement to fiat currency.

Based on this technology, CBDC offers several key advantages to traditional fiat currency.

  1. More consumer protection and security than that offered by private payment providers: Central banks are heavily incentivized to ensure a robust payments system and protect consumer data from private firms. Private payment providers, on the other hand, do not weigh social costs the way central banks do. Consequently, private payment providers are not incentivized to invest in security and resiliency measures to the same degree as central banks. Private companies also have strong incentives to collect and capitalize on the “data exhaust” produced by CBDC flows.
  2. Promote financial inclusion through inclusive mobile money: Cash may be difficult to obtain for underpopulated and rural communities due to lack of bank branches and mechanisms to safely distribute cash. This is likely to worsen as more people adopt digital forms of money. Unbanked people also may be unable to access private digital payments systems. CBDCs would provide individuals access to a digital payments system without a private bank account.
  3. Reduce costs associated with issuing and managing cash: Issuing and managing cash is expensive, with costs including printing, distribution, and replacement. For example, the costs of issuing and managing cash in the euro area are 0.5% of GDP.
  4. Enhance monetary policy and more effectively manage money supply: Interest-bearing CBDCs would increase the economy’s response to interest rate changes. CBDCs could also be used to charge negative interest rates in the case of an economic crisis.
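
As a simple numerical illustration of item 4 (the figures are assumed, not a policy recommendation), an interest-bearing CBDC would let a central bank apply its policy rate, including a negative one, directly to holdings:

```python
# Assumed figures only: a -0.5% annual rate applied directly to a CBDC balance.
balance = 1_000.00
policy_rate = -0.005
print(round(balance * (1 + policy_rate), 2))  # 995.0
```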

Determining Whether a CBDC is Possible

Central banks have choices to make when determining whether to create a CBDC. The first step is deciding whether a CBDC is an effective tool given the economic circumstances of the country and the level of technology accessible to institutions and citizens. Countries need certain technical and infrastructure capabilities if CBDCs are to be a worthwhile project.

  1. Comprehensive Digital Infrastructure: CBDC requires digital infrastructure; it is an ineffective solution in cash-based economies with poor internet connectivity, low smartphone penetration, and general inability to access technology. For CBDCs to be realized, the lion’s share of the populace across all demographics needs access to digital infrastructure. Additionally, CBDCs have technological requirements that must be met.
  2. A Well-Functioning Central Bank: CBDCs give central banks new responsibilities that add considerable costs and risks. CBDCs require central banks to take on a number of new operations such as interfacing with customers, maintaining technology, being responsible for anti-money laundering, avoiding human errors, etc. Additionally, the central bank may need to intervene on behalf of the banking sector if CBDCs replace the use of retail bank accounts (banking-sector disintermediation). Lastly, retail CBDCs would cause a central bank’s balance sheet to grow significantly.
  3. Effective Governance: CBDCs will involve new forms of digital payments and change the role of central banks and private financial institutions. New policies and regulations will be needed for: consumer protection, financial stability, anti-money laundering and know your customer frameworks, cybersecurity, etc. Government bodies, therefore, will need to be capable of taking on this new challenge to promote a financial and payments system that benefits the country.
  4. Financial Technology Knowledge by the Central Bank and Government: CBDC development requires technical expertise. While the creation of digital notes, user interfaces, digital wallets, and payments infrastructure can be outsourced to government contractors, members of the central bank and government officials will need to understand the implications and risks of the technical design. By understanding the design, the central bank and government can put in place appropriate policies and regulations to manage the payments infrastructure and create a system that promotes economic prosperity.

Political Will and the Values of the CBDC-based Payments System

Central banks have a number of design choices. These include the architecture of the system, type of ledger, account-based vs. token, and linkages with other institutions. Design choices will be predicated on the constraints listed above, and just as importantly, on political will.

Politics will be central to the success or failure of CBDC development. Political decisions will need to be made on the values of the payments system to be adopted by users—a CBDC will be the primary factor in the allocation of resources after all. Political decision-making will revolve around the following payments system values:

  • How/when CBDCs and digital currencies are used in daily transactions
  • Privacy
  • Consumer protection
  • Accessibility and convenience
  • Role of the central bank and other financial institutions

The CBDC will be designed accordingly based on the values agreed upon by political actors. Once implemented, the design will have far-reaching implications on domestic and international institutions.


Part II. Central Bank Digital Currency and its Geopolitical Implications

US Dollar as King

In July 1944, 730 delegates representing 44 states convened in Bretton Woods, New Hampshire to create a new international monetary system led by the United States. This system, in which the US dollar (pegged to gold) was made the international reserve currency, would promote global stability and economic growth.

The Bretton Woods system eventually ended in 1971 when the US dollar was de-pegged from gold, but its legacy remains. The US dollar today remains the global reserve currency, serving as a unit of account, store of value, and medium of exchange. Over 60% of all foreign central bank reserves are denominated in dollars, and over half of cross-border trade is denominated in US dollars. As a result, most international transactions are cleared by US-based banks.

The Federal Reserve is the epicenter of payment settlement—performed by correspondent banks which transfer money between accounts held at the Fed. Additionally, the US maintains significant control over the Society for Worldwide Interbank Financial Telecommunication (SWIFT), the cross-border messaging system necessary for banks to make transactions. Global transactions rarely happen without interfacing with the US dollar.

Overall, the current global payments system is based on American values. Many payments are conducted through the United States banking system of the Fed and private financial institutions (commercial, investment, and correspondent banks). United States laws and policies that regulate the operations of financial institutions therefore have far-reaching implications on the transactions performed outside the country.

The Wall Street Journal showcases this by describing a transaction between a Canadian lumber company and a French purchaser: “A Canadian lumber company sells boards to a French buyer. The buyer’s bank in France and the seller’s bank in Canada settle the payment, in dollars, via “correspondent banks” that have accounts at the Fed. The money is transferred seamlessly between the banks’ Fed accounts because their status as correspondent banks means they are seen as safe counterparties. The use of these accounts, the U.S. says, means every transaction technically touches U.S. soil, giving it legal jurisdiction.”
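
A minimal sketch of the flow in that example may help make the point concrete (the account names, balances, and amount below are invented): because both correspondents hold reserve accounts at the Fed, the cross-border payment ultimately settles as a debit and a credit on the Fed's books, which is what gives the United States its claimed jurisdiction.

```python
# Hypothetical correspondent-banking settlement: both correspondents hold
# reserve accounts at the Fed, so a USD payment settles as bookkeeping there.
fed_accounts = {"canadian_correspondent": 500_000, "french_correspondent": 500_000}

def settle_usd(payer: str, payee: str, amount: int) -> None:
    """Debit the payer's and credit the payee's reserve account at the Fed."""
    fed_accounts[payer] -= amount
    fed_accounts[payee] += amount

# The French buyer's bank pays the Canadian lumber company's bank $100,000.
settle_usd("french_correspondent", "canadian_correspondent", 100_000)
print(fed_accounts)
# {'canadian_correspondent': 600000, 'french_correspondent': 400000}
```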

The United States has leveraged this hegemony and weaponized the US dollar in recent decades, using it as a means to impose sanctions with greater frequency. Specifically, the United States has restricted the use of US dollars or forced SWIFT to prevent cross-country interbank transactions from sanctioned parties; other parties have limited power in working around US-imposed sanctions.

[Original figure: example of SWIFT]

A New Payments Ecosystem Independent from the US Dollar

The US dollar is unlikely to be replaced as the global reserve currency in the immediate future. CBDCs, however, give countries the ability to operate outside the US dollar-led system by serving as a means to create independent payment mechanisms that link financial institutions together without the need for correspondent banks and SWIFT. This allows countries to:

  1. Establish their own independent values for a monetary system
  2. Export payment system values independent from the current global system and make financial transfers to sanctioned and rogue parties without international oversight
  3. Collect transaction data, also known as “digital exhaust”, at the individual level

China and the Creation of an Alternate Payment Mechanism

China’s pursuit of a CBDC has largely been a response to projects like Bitcoin and Facebook’s Libra, which risk establishing payment values different from those of the Chinese Communist Party. While China does not look to upend the US dollar-based system or replace the US dollar as the global reserve currency, it is in the country’s best interests to have a payment process that does not rely on the US dollar. The geopolitical implications of a Chinese CBDC, therefore, are a critical issue.

China has made extensive efforts to develop a CBDC and has completed the “top-level” design. Recently, the Agricultural Bank of China issued a test app for mobile phones and the People’s Bank of China began a pilot program to introduce a digital currency in four cities. While the full design is yet to be revealed, the Chinese CBDC as of now is meant to only partially replace current cash in circulation. The CBDC would be issued to commercial banks that would then re-distribute the CBDC to the retail market; it is not meant to threaten the retail banking sector.

The immediate effects of a Chinese CBDC on global payments will be minimal, but the long-term ramifications cannot be overstated. The Chinese CBDC fits within a greater context of the country’s efforts to create an independent payments system based on its Cross-Border Inter-Bank Payments System (CIPS). CIPS provides clearing and settlement services for cross-border RMB transactions, and is expected to eventually feature an independent messaging system that does not utilize SWIFT. Furthermore, Chinese payment applications like Alipay and WeChat Pay allow for direct payment transfers between Chinese citizens. Together, a Chinese CBDC, CIPS, and China’s robust digital payments ecosystem would result in a digital ecosystem that facilitates rapid and efficient payments across the country.

This payments ecosystem, when developed, can be exported to incorporate other countries. Alipay and WeChat Pay are already making significant inroads in Southeast Asia. Given transaction volume, Southeast Asian central banks will find it advantageous to join CIPS for more efficient financial transfers. Belt and Road Initiative countries will find similar benefits as they take Chinese loans and look to improve their financial infrastructure. Africa’s payments in RMB increased by 123% from 2016 to 2019—it is no surprise then that the Chinese government is exploring how to integrate a CBDC with Belt and Road Initiative countries. When fully realized, the Chinese system would allow for cross-border transactions between institutions and individuals without going through correspondent banks. While China may not seek to take on the responsibility of leading the global payments system, a CBDC is a critical piece for the country to solidify an independent economic ecosystem.

Western Europe and the Effort to Determine the Implications of CBDCs

The European Union, befitting its recent role as the global voice on technology regulation, is exploring how CBDCs affect the digital payments landscape and the design features needed to fit existing monetary values. European central banks have not determined whether to launch a CBDC yet. Instead, they are taking the time to understand its opportunities and challenges, particularly regarding CBDC design and the implications on privacy, financial inclusion, private payment providers, payment needs, and financial stability.

Many of the recent efforts were necessitated by the announcement of Libra, which prompted serious questions regarding the role of private companies in the digital payments space. The development of CBDCs by other countries is another sign that European banks must respond. Christine Lagarde, as IMF Managing Director, made the case for digital currencies, “based on new and evolving requirements for money, as well as essential public policy objectives. My message is that while the case for digital currency is not universal, we should investigate it further, seriously, carefully, and creatively.”

Initially, Sweden was the only country in the European Union considering a CBDC. More recently, France, Germany, and the European Central Bank have also stepped into the fray. France began tests this year to explore how a CBDC could complete interbank settlement, identify additional advantages of a CBDC, and understand the impact on financial stability. The Bank of France is soliciting applications to experiment with the use of a digital euro. The experiment’s objectives are:

  • “To show how conventional use cases for central bank money can be achieved through a CBDC based on different technologies”
  • “Identify the benefits of introducing a CBDC for the current ecosystem and understand how a CBDC might foster financial innovation”
  • “Conduct a detailed analysis of the potential effects of introducing a CBDC on financial stability, monetary policy and the regulatory environment”

While Germany has not made as significant of a step as France, the Bundesbank is also exploring the possibility of a CBDC. The Association of German Banks has advocated for the creation of a digital euro, calling on policymakers “to achieve a social consensus on how programmable digital money can be integrated into the existing financial system.”

The European Central Bank, as one would expect given the work of the German and French central banks, is assessing the potential for a CBDC. It has formed a working group consisting of the Bank of England, the Bank of Japan, the Sveriges Riksbank (Sweden), the Swiss National Bank, and the Bank for International Settlements. The European Central Bank makes the case for CBDCs as a means to address a “fragmented point-of-sale and online payments infrastructure.” The Bank looks to fulfill five objectives:

  • Full pan-European reach and a seamless customer experience
  • Convenience and cost efficiency
  • Safety and security
  • European identity and governance
  • Global acceptance in the long run

The ECB hopes that the private industry can offer a solution to these objectives, but is open to creating a CBDC if the need arises.

Further west of the European Union, the United Kingdom released a report examining the opportunities, challenges, and design for a British CBDC. The Bank of England showcased a possible CBDC model to “illustrate the key issues as a basis for further discussion and exploration of the opportunities and challenges that CBDC could pose for payments, the Bank’s objectives for monetary and financial stability, and the wider economy.” While the Bank has not yet determined whether to launch a CBDC, they look to define the key principles that a CBDC should address for monetary and financial stability.

The European CBDC effort is a means to improve the region’s financial system, particularly in the wake of Facebook’s Libra and other private company offerings. The European project was created to economically tie together a number of countries with a history of military conflict. Today, these countries transact with one another heavily and many are interconnected through the euro. The economic alliances already in play necessitate the exploration of CBDCs as a payment innovation that “would be good for the [European] financial center and its integration into the world financial system.” As François Villeroy de Galhau, Governor of the Bank of France, notes, “we as central banks must and want to take up this call for innovation at a time when private initiatives—especially payments between financial players—and technologies are accelerating, and public and political demand is increasing. Other countries have paved the way; it is now up to us to play our part, both ambitiously and methodically.”


The CBDC Horse Race

A Bank for International Settlements survey of 63 central banks found that 70% are engaged in CBDC work. Some are simply researching its opportunities, while others have or will create a digital currency. Emerging markets are leading the charge, seeing it as a means to improve payments infrastructure and promote financial inclusion as they digitize their economies. The implications are clear: CBDCs will play a critical role in shaping the future of digital payments.

Enter the digital currency horse race. The effects of CBDCs are magnified when linked to other countries. These connections create a fuller digital ecosystem that facilitates more efficient cross-border payments between individuals and institutions. The financial centers of the world see emerging markets up for grabs in the competitive expansion of cross-border digital payments networks. CBDCs serve as a weapon in economic competition. The major powers of the world have no wish to see a CBDC and digital payments design imposed upon them by another country.

Mu Changchun, deputy director at the People’s Bank of China says, “in the future, the process of digital currency issuance will be the way of horse racing, the leader will win the entire market; who is more efficient, who can better serve the public, who it will survive in the future; if a front-runner takes the lead in taking action, the technology they use will be adopted by other parties.”

Similarly, François Villeroy de Galhau, Governor of the Bank of France notes that “I see a certain interest to move quickly on the issue of at least a wholesale [CBDC] to be the first issuer at the international level and thus derive the benefits reserved for a reference [CBDC].”

German Finance Minister Olaf Scholz is most explicit: “such a payment system would be good for the [European] financial center and its integration into the world financial system…We should not leave the field to China, Russia, the U.S. or any private providers.”

CBDCs serve as an opportunity for governments to establish and export norms on the role of currency in society. These norms could set global standards for payments systems. As discussed in Part I, norms include: how/when CBDCs and digital currencies are used in daily transactions, privacy, consumer protection, accessibility/convenience, and the role of the central bank and other financial institutions. In their effort to export payments system values, advanced economies offer emerging markets and other players several opportunities:

  1. Financial Ecosystem: Advanced economies can provide a large financial ecosystem for efficient domestic and cross-country transactions between individuals and institutions. This is particularly useful for countries engaging in significant amounts of trade and remittances, or for countries with a lack of trust in domestic institutions. An example would be Belt and Road Initiative countries participating in a payments network led by China.
  2. Technical Support: The IMF finds in its study that “various central banks surveyed plan to outsource CBDC development.” Whether central banks turn to private sector contractors for support or not, launching a CBDC requires technical expertise, digital infrastructure, and financial technology knowledge. Advanced economies can help emerging markets achieve technological leaps in their payments system. Specifically, they can offer support to emerging markets either in the form of letting other countries adopt an interoperable payments system managed by the advanced economy’s central bank and/or providing technical assistance to countries in the development process of a digital payments system.
  3. Circumvent Sanctions: Advanced economies can establish cross-border payment processes with countries under international sanctions or terrorist groups to bypass the US dollar and SWIFT. The European Union, for example, could establish a CBDC payment link to Iran and circumvent United States sanctions. Advanced economies could also purchase digital currencies from sanctioned countries. Venezuela, for example, issued the petro so investors could “do an end run around sanctions imposed against their country by the Trump administration.”

CBDC and the New Currency: Data

In addition to serving as a means for establishing payment alliances and values, the exportation of CBDCs gives governments an even more powerful currency: data. While CBDCs would limit international oversight, central banks will have the ability to track CBDC payments. Transaction data would be centrally housed within the core ledger, giving governments the ability to monitor the economy and determine interventions. Privacy, however, is at risk. The central bank would be able to identify individual users and payment flows, making the anonymity currently offered by cash obsolete. Governments would be able to track payment flows domestically and internationally. As a result, governments could more effectively repress opposition groups or tacitly support the funding of terrorist organizations.

Furthermore, individuals and financial institutions of countries who have adopted another country’s CBDC-based payments system could see their financial transactions being tracked and collected. CBDCs would allow for an unprecedented level of intrusion by a foreign central bank, which would have visibility into transactions well beyond what is possible with the US dollar-based system. Similar to privacy concerns around private payment providers, CBDCs pose the heightened risk that a government is privy to payments information. As a result, the technical design of the ledger and CBDC system will need to be heavily examined to ensure a level of privacy that guarantees national sovereignty.


Missing in Action: The United States and the Need for a Global Voice

Unlike Bretton Woods, no clear voice has emerged to define the standards for CBDCs. The United States thus far has no plans to develop a CBDC. In a letter to Congress, United States Federal Reserve Chairman Jerome Powell writes that “the characteristics that make the development of central bank digital currency more immediately compelling for some countries differ from those of the US.” Given the robust payments landscape in the United States, Powell’s reasoning makes sense. The claim that the United States does not need a CBDC at the moment is justifiable.

Missing in this assessment, however, is the need for a global voice on CBDC standards. In 1944, the United States helped usher in financial and economic stability by establishing a system of norms and values that the world could coalesce around. United States reticence this time around has created a vacuum in which countries are exploring CBDC design without a shared set of principles. Countries are establishing their own settlement systems and implementing laws and regulations for private sector payments to operate and innovate around. Furthermore, the vacuum in financial technology leadership is evident in the fact that the private sector is playing an increasingly important role in promoting payments system values to which governments are then forced to respond after the fact. A key example is recent government responses to Libra.

Whether the United States decides to launch a CBDC or not, it must provide global leadership and support the development of a common set of values for other countries and private sector companies to operate around. The United States made some forays through its response to Libra, culminating in a request for Facebook to halt development. A stronger and more explicit stance is needed. Given the country’s role in global payments, as well as the fact that it houses a substantial portion of private sector innovation, the United States government (specifically Congress, the Federal Reserve, the Securities and Exchange Commission, and the Department of Treasury) should look to establish principles for others to embrace. Specifically, the United States must take on a leadership role to determine common norms around the following values:

  • The Role of Central Banks: wholesale vs. retail CBDCs and the role of central banks when it comes to allowing and restricting certain types of payments (e.g. terrorism)
  • CBDC Technology: the technical design for CBDCs, the core ledger, APIs, payment functionality, programmable money, cryptography, an account-based vs. token-based system, and interoperability across central banks
  • Consumer Protection: privacy and cybersecurity standards for CBDC-based payments systems (both public and private sector) to maintain resiliency, the extent to which central banks and digital payment providers can track payments and identify users, and public disclosure requirements for transparency and reporting
  • Financial Stability and Integrity: CBDC effects on the banking system, commercial bank deposits, and the implementation of monetary policy
  • Private Sector Innovation: the role of private sector in the overall payments ecosystem, their protocols, and integration with other providers
  • Financial inclusion: methods to ensure that CBDCs serve the most marginalized of communities without further entrenching social and economic inequality

Conclusion

CBDCs are a new frontier for payments infrastructure and central banks. Their geopolitical ramifications should not be underestimated. CBDCs pave the path for circumventing sanctions, forming new alliances, and establishing norms around identity, privacy, innovation, and cybersecurity. The nascent technology, however, is being explored by many countries without a guiding voice on how to ensure prosperity for its end user: the individual. The United States has historically served as the leading nation on global payments, guiding both policy and innovation. The country must once again step into the limelight and take on a leading role.

Given the geopolitical implications and current US involvement, world leaders in the private and public sectors will need to:

  • Explicitly define international values for a digital payments ecosystem that protects users and ensures financial stability
  • Establish cross-border collaboration across central banks and government agencies on CBDC design, and bring in other countries currently not exploring CBDCs
  • Craft clear policies and regulations that delineate the role of private sector and public sector innovation in payments systems
  • Explore public-private partnerships to coordinate innovation and ensure that advancements support central bank efforts to manage the monetary system
  • Raise the voices of historically marginalized, underserved, and unbanked communities to ensure that new payment designs promote financial inclusion

The development of CBDCs tasks nations and companies to reimagine money in a more equitable and efficient model. Clear voices will be needed to offer common values for the world to coalesce around as the payments system is remade. Global prosperity depends on it.

The Atlantic Council GeoTech Center would like to thank Nikhil Raghuveera for serving as both Guest Author and lead author for this report. Nikhil is a dual-degree MBA/MPA student at The Wharton School and the Harvard Kennedy School with a background in economic consulting, nonprofit consulting, cryptocurrency, and venture capital, who will graduate in May 2020.

Further reading:

The post Design choices of Central Bank Digital Currencies will transform digital payments and geopolitics appeared first on Atlantic Council.

The role of tech, data, and leadership in pandemic geopolitics and recovery post-COVID https://www.atlanticcouncil.org/insight-impact/in-the-news/video-recap-the-role-of-tech-data-and-leadership-in-pandemic-geopolitics-and-recovery-post-covid/ Wed, 22 Apr 2020 13:00:19 +0000 https://atlanticcouncil.org/?p=247624 On April 22, 2020, Vint Cerf, Sue Gordon, Melissa Flagg, and Terry Halvorsen participated in a Webit virtual panel titled "Pandemic geopolitics and recovery post-COVID," moderated by David Bray, the Director of the Atlantic Council's GeoTech Center, on the role of tech, data, and leadership in the global response to and recovery from COVID-19.

On April 22, 2020, Vint Cerf, Sue Gordon, Melissa Flagg, and Terry Halvorsen participated in a Webit virtual panel titled “Pandemic geopolitics and recovery post-COVID,” moderated by David Bray, the Director of the Atlantic Council’s GeoTech Center, on the role of tech, data, and leadership in the global response to and recovery from COVID-19.

They discussed what tech innovators and world leaders can do regarding the long-term global recovery, with great insights from an august panel of experts who both amplify and inform the mission of the GeoTech Center amid this period of global turbulence.

To build a better world, we need education plus empathy, leading to engagement.

The post The role of tech, data, and leadership in pandemic geopolitics and recovery post-COVID appeared first on Atlantic Council.

Event recap | Strategic standards now, so people can return to work soon https://www.atlanticcouncil.org/blogs/geotech-cues/video-recap-strategic-standards-now-so-people-can-return-to-work-soon/ Mon, 20 Apr 2020 13:15:32 +0000 https://atlanticcouncil.org/?p=246723 On April 20, 2020, Dame Wendy Hall, Declan Kiranne, Jay Williams, Daniella Taveau, and John Ackerly shared perspectives on "Strategic standards now, so people can return to work soon" as part of a live video discussion moderated by Dr. David Bray, Atlantic Council GeoTech Center Director.

On April 20, 2020, Dame Wendy Hall, Declan Kiranne, Jay Williams, Daniella Taveau, and John Ackerly shared perspectives on “Strategic standards now, so people can return to work soon” as part of a live video discussion moderated by Dr. David Bray, Atlantic Council GeoTech Center Director.

They discussed the strategic standards required to properly identify, authenticate, and certify a workforce ready to return to work in a post-coronavirus workplace. The panelists also discussed:

  • That if we think we will be “returning to work” the way we used to work, we are probably not recognizing how COVID-19 changes everything; they also noted the lack of good data to inform these local decisions and how “data trusts” might help solve this.
  • The challenges of feeding the world during COVID-19: there is enough food, it is just not distributed well amid the COVID-19 breakdown; South Africa currently has 50% of its people in need of food.
  • How data and tech initiatives can use differential privacy and encryption in a way that preserves privacy and lets us find ways to a new normal in which we will know when it is safe to travel, congregate again, and start rebuilding our communities

Rebuilding from COVID-19 requires data-driven decisions, global partnerships, and intentional values baked-in to the tech solutions we employ.

The post Event recap | Strategic standards now, so people can return to work soon appeared first on Atlantic Council.

COVID-19 might accelerate or change previous AI adoption strategies https://www.atlanticcouncil.org/blogs/geotech-cues/covid-19-might-accelerate-or-change-previous-ai-adoption-strategies/ Fri, 17 Apr 2020 23:15:03 +0000 https://atlanticcouncil.org/?p=245239 The COVID-19 pandemic may be a force that changes how the economies of the United States, Canada, and members of the European Union all use AI. Because of serious supply chain problems in Asia, many firms will restructure complex supplier networks or make them more redundant. This could mean building up their use of AI and machine learning in home-nation plants.

The COVID-19 pandemic may be a force that changes how the economies of the United States, Canada, and members of the European Union all use AI. Because of serious supply chain problems in Asia, many firms will restructure complex supplier networks or make them more redundant. In many cases, this will mean building up their use of AI and machine learning in home-nation plants.  

Another change is an acceleration of online retailers’ use of AI. With the virus and extensive confinement to homes, online spending has spiked. To cope with the rise in customers, online businesses will likely turn to cloud computing, AI and machine learning to handle a greater volume of orders and manage supply chains.

The current macroeconomic situation due to COVID-19 includes substantial reductions in GDP growth and trillion-dollar increases in debt. The pandemic could also prompt a serious push to improve firms’ technological sophistication. With growth slowing and interest rates low, economic statistics suggest a slow-growth economy could follow the COVID-19 pandemic. To avoid prolonged slow growth, governments might employ an industrial policy to increase the adoption of AI and related technologies. The object of such targeted spending would be to revive the country’s manufacturing base and reskill employees.

Examples of Specific Changes

Redesigning supply chains is a major place for a pivot from current trends due to COVID-19. Volkswagen has 31 of its 122 plants in China. VW plans to employ AI in its digital factories to achieve a 30 percent productivity gain over the next 6-7 years. These plans may no longer include a quick AI rollout in China. VW is more likely to cut its dependence on Chinese subsystems. As a result, it is more likely to deploy most of its AI-based Volkswagen Industrial Cloud in Germany and Western Europe. This shift away from China would reshape its supplier ecosystem.

If, as a result of COVID-19, VW shifts production of more components to its “mother plant” in Wolfsburg, it could also accelerate the adoption of 3D printing. This would mean VW would construct suppliers’ products at nearby assembly plants rather than shipping them from thousands of miles away. This would require production systems that use AI. Ford, GM, and other automakers would very likely follow suit.

These trends also might reinforce a broad effort to implement AI-based digital ecosystems where humans operate machinery and manage operations. This would require additional digital skills among factory workers, especially where they need to work with machine learning models and AI. It might also result in a technology transformation of less capital-intensive parts suppliers, where tools including AI and machine learning would increase the suppliers’ importance in producing electric cars. This would change the economics of auto production and put pressure on competitors to ramp up their own digitization plans.

Application of AI to COVID-19 has been hindered both by the lack of historical precedent and by the absence of quality datasets necessary to train AI implementations. The one area of promise so far has been in the use of AI to identify potential connections and promising interventions or treatments by wading through the large number of publications on COVID-19, SARS, and MERS. If this proves successful, one might expect future medical and health researchers to apply similar techniques to cancer research and other related issues.
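
As a rough sketch of the kind of literature mining described above (illustrative only: the abstracts are invented, scikit-learn must be installed, and real efforts work over far larger corpora with far more capable models), ranking publications against a query by TF-IDF similarity looks like this:

```python
# Illustrative sketch: rank a few invented abstracts against a query using
# TF-IDF cosine similarity; real literature-mining efforts are far larger.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "Protease inhibitors showed activity against SARS coronavirus replication in vitro.",
    "Hospital logistics and staffing challenges during the 2015 MERS outbreak.",
    "Repurposed antiviral drugs reduce viral load in early COVID-19 infection.",
]
query = "candidate antiviral treatments for coronavirus infection"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(abstracts + [query])  # last row is the query
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Print abstracts from most to least relevant to the query.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {abstracts[idx]}")
```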

For online retailers, scaling up processing and supply chains to deal with the surge in online customers will also rely upon AI to analyze sales and logistics. It will also rely upon AI to evaluate issues in supply chains, particularly where there are shortages in supplies of products or produce.

An additional shift might transform entire industries where some firms have followed a strategy of becoming more technologically sophisticated than their competitors. In the biopharma industry, there is a push to employ AI and machine learning to create new drugs in record time. Moderna, a startup outside Boston, has developed a proposed COVID-19 vaccine in 42 days, the first to enter clinical trials. It has benefited from its knowledge of DNA and messenger RNA to create a vaccine that disrupts the virus’s reproduction in the body. COVID-19 might push other pharmaceutical firms to move more rapidly to create digital plants that rely upon AI-based analysis of genomic information; several large pharmas, like Sanofi and Eli Lilly, are already moving in this direction.

Larger and Longer-Term Shifts

An even larger and longer-term shift might also occur because of COVID-19. Technology-aware firms have learned from Tesla’s success that aggressively pioneering the adoption of technologies like AI can provide them with an opportunity to open a performance gap with their competitors. Tesla has gained a reputation for its rapid adoption of AI and machine learning. This has enhanced Tesla’s digital technology and analytics capabilities. Other firms plan to emulate Tesla’s efforts to create a wider competitive gap between their achievements and competitors that have waited too long to become digital. This opportunity might present itself due to COVID-19. It could offer such firms a chance to embark upon a “survival of the fittest” struggle in a few large, capital intensive sectors of the economy. If they succeed and emerge as winners, they might achieve a dominant position in the auto or pharmaceutical industries. This might severely diminish their competitors’ chances to survive.

COVID-19 might also create a dramatic change in technology policy in the governments of the United States, Canada, and members of the European Union. This might occur as part of a larger pivot to promote higher productivity growth in the economy. A unique national initiative might include an effort by governments to invest in the creation and commercialization of new AI-based technologies. This might be spurred by COVID-19 because the pandemic is likely to have such a dramatic impact on GDP and productivity. To be successful, such an initiative would devote programs, funds, and partnerships to helping industries such as autos develop autonomous vehicles or to speeding the commercialization of AI and quantum computing.

A driving force behind such a major policy shift might be economists’ recognition that COVID-19 has crippled the economy more than expected. With low interest rates likely to continue for three or more years, GDP growth is expected to be low, around 2 percent based on current bond market data. This implies that productivity growth will be around 2 percent or below. With the addition of trillions of dollars of debt to the economy to fight COVID-19, the recovery could be handicapped by higher debt and low growth. If governments promote more innovative technologies, such as AI, the economy might experience a productivity resurgence to 4 percent or more, bringing GDP growth to 4 or 5 percent. This would speed the recovery and very likely promote faster GDP growth.

Such a new generation of public policies to promote technological innovation would mark a real pivot in the economic policy of Western nations. The need to recover from COVID-19 might bring about such a shift.

This analysis was authored by Robert B. Cohen, a Guest Author with the Atlantic Council GeoTech Center as well as Senior Fellow with the Economic Strategy Institute, and by Dr. David Bray, Director for the Atlantic Council’s GeoTech Center.

More analyses by the GeoTech Center include:

The post COVID-19 might accelerate or change previous AI adoption strategies appeared first on Atlantic Council.

Can tech companies embody global values and ride to the rescue? https://www.atlanticcouncil.org/blogs/geotech-cues/can-tech-companies-embody-global-values-and-ride-to-the-rescue/ Fri, 17 Apr 2020 03:00:55 +0000 https://atlanticcouncil.org/?p=244759 Tech companies have the resources to help us better manage this pandemic. If you are a leader of tech company today, what could be of greater benefit to your company than a healthy population, a revitalized global economy, and new international capabilities to fight this crisis, and the next one?

The global coronavirus crisis presents new challenges not just of scope, but of speed. In a pandemic, each day matters, and each minute. And who better to address the need for speed in defeating the coronavirus and accelerating economic recovery than the regional technology hubs around the U.S., including Silicon Valley, Austin, Portland, Boston, Raleigh-Durham, and more? The tech industry not only understands rapid change and disruption, it thrives on it.

Technology companies are starting to step up. Apple and Google announced development of a new “contact tracing” service using cell phones; Facebook and Intel have created multi-million-dollar funds to support COVID-related tech innovation, and Twitter’s Jack Dorsey has pledged $1 billion, almost a quarter of his personal wealth; Facebook, Google, and YouTube are posting public health information related to “coronavirus” searches; Tesla is making ventilators. All encouraging, but not enough, and it is important to be ever mindful of unintended ripple effects tied to these initiatives. There are values baked into each of these initiatives, whether tech companies intend them or not, and it is important to be mindful of these values and what they mean both for the short-term response to and the longer-term recovery from this pandemic.

In advocating that tech companies – and, perhaps more importantly, technology communities – work globally to advance tech programs and a global data strategy to inform the coronavirus response and recovery, this rallying call is not asking tech companies to do anything that conflicts with the Western world’s heritage of personal liberty and free markets. Certain personal data collection techniques employed by Asian countries—especially China—are indeed too draconian to be used widely. Yet that does not mean public-private partnerships should do nothing to standardize and strengthen national tech programs designed to defeat COVID and accelerate an intelligent and informed economic recovery.

Until a COVID-19 vaccine is available, the global economy and every nation in it will struggle to regain solid economic footing. There is no binary “On/Off” switch: the reopening of factories, restaurants, sports stadiums, industries, and regions will happen slowly and sporadically – informed by data about the local population and about whether it is possible to be re-infected with the coronavirus, or to spread it, even after recovering from an initial infection. Since every viral pandemic in modern history has had at least one second wave, there will likely be re-closures as well, long after this first wave has passed. The only way to dial economic activity up and down is to distribute data on a scale, at a speed, and with a precision never achieved globally. As should be clear by now, this needs to be done globally and in ways consistent with values of personal liberty and freedom. Big Tech can choose to advance both these efforts and these values if it does so intentionally.

Banks also have an important role to play. The U.S. Federal Reserve and Europe’s central banks have crucial data, the power to command, and presses that print money. Big Tech needs to partner with national, state, and local governments – some of the smartest infectious disease experts in the world work in these governments and have been operating on the frontlines throughout this pandemic. Big Tech also needs to partner with non-profits that can help serve as a bridge between these different groups and as a “Jiminy Cricket” of sorts, asking tech companies to think through the intended and unintended ripple effects of today’s actions on the future ahead. As noted, in our rush for solutions it is essential to be mindful of the values tied to our initiatives.

Tech companies may need to consider a modern “Golden Rule” during this time of crisis: whatever actions you take now, make sure you will be okay with them five years from now too. Actions we take today will shape the future ahead. Tech companies both in the U.S. and globally must show leadership and work collaboratively across sectors and nations to build innovative global information services similar to those that have been so successful in Asia, but in ways consistent with the values of the Western world.

For tech to play a key role in getting us through this crisis, citizens around the world also must insist that both governments and tech companies provide data transparency, so we can have credible information with which to drive valuable new data services. We need to advocate—from wherever we are in the global economy—for new tech that makes it easier to work at home and stay properly socially distanced. We need to convince elected representatives at local, state, and national levels to put innovative tech to work in all infrastructure stimulus plans. For both tech companies and governments, we need to encourage both sectors to include diverse stakeholders in efforts to rebuild and recover from COVID-19.

Skeptics of the “tech for good” mantra might say that all tech companies care about is making money. While this is debatable, even if it is so, tech industry leaders are quite good at following their self-interest—and smart enough to see it. Tech companies have the resources to help us better manage this pandemic. If you are a leader of a tech company today, what could be of greater benefit to your company than a healthy population, a revitalized global economy, and new international capabilities to fight this crisis, and the next one?

The post Can tech companies embody global values and ride to the rescue? appeared first on Atlantic Council.

]]>
Event recap | What technologies, investments, and policy actions could help us rebuild from COVID-19 on a global scale https://www.atlanticcouncil.org/commentary/event-recap/video-recap-what-technologies-investments-and-policy-actions-could-help-us-rebuild-from-covid-19-on-a-global-scale/ Thu, 16 Apr 2020 13:00:00 +0000 https://atlanticcouncil.org/?p=245269 On April 16, 2020 - Dr. David Brin and Dr. Kathryn Newcomer shared perspectives on "What technologies, investments, and policy actions could help us rebuild from COVID-19 on a global scale" as part of a live video discussion moderated by Dr. David Bray, Atlantic Council GeoTech Center Director.

The post Event recap | What technologies, investments, and policy actions could help us rebuild from COVID-19 on a global scale appeared first on Atlantic Council.

]]>

On April 16, 2020, Dr. David Brin and Dr. Kathryn Newcomer shared perspectives on What technologies, investments, and policy actions could help us rebuild from COVID-19 on a global scale as part of a live video discussion moderated by Dr. David Bray, Atlantic Council GeoTech Center Director.

They discussed which technologies and investments show the greatest promise for the rebuilding and recovery from COVID-19, and what policy actions would help us better rebuild locally, nationally, and globally. They also considered the role of transparency, in both the public and private sectors, in supporting good governance of the rebuilding and recovery efforts. In addition, the discussion highlighted the role of countering polarizing misinformation as well as preserving individual privacy during the COVID-19 response and recovery.

The post Event recap | What technologies, investments, and policy actions could help us rebuild from COVID-19 on a global scale appeared first on Atlantic Council.

]]>
Event recap | Why data trusts could help us better respond and rebuild from COVID-19 globally https://www.atlanticcouncil.org/blogs/geotech-cues/event-recap-why-data-trusts-could-help-us-better-respond-and-rebuild-from-covid-19-globally/ Thu, 16 Apr 2020 01:00:10 +0000 https://atlanticcouncil.org/?p=244810 On April 15, 2020, Lord Tim Clement-Jones and Dame Wendy Hall shared their perspectives in a live video discussion titled "Why data trusts could help us better respond and rebuild from COVID-19 globally" and moderated by David Bray, PhD, Atlantic Council GeoTech Center Director on the role of Data Trusts in the global response to and recovery from COVID-19.

The post Event recap | Why data trusts could help us better respond and rebuild from COVID-19 globally appeared first on Atlantic Council.

]]>

On April 15, 2020, Lord Tim Clement-Jones and Dame Wendy Hall shared their perspectives on the role of data trusts in the global response to and recovery from COVID-19 in a live video discussion titled “Why data trusts could help us better respond and rebuild from COVID-19 globally,” moderated by David Bray, PhD, Atlantic Council GeoTech Center Director.

The hour-long discussion asked key questions: What are data trusts? What roles can Data Trusts play in the global response to COVID-19? What can the United States learn from the United Kingdom’s activities involving data trusts and AI? Most importantly, Lord Clement-Jones, Dame Hall, and Dr. Bray presented actionable steps that Google, Apple, or any other major tech company or coalition could enact to move forward with a data trust initiative to help the world respond to and recover from COVID-19.

The post Event recap | Why data trusts could help us better respond and rebuild from COVID-19 globally appeared first on Atlantic Council.

]]>
5G’s geopolitics solvable by improving routing protocols against modern threats https://www.atlanticcouncil.org/blogs/geotech-cues/5gs-geopolitics-solvable-by-improving-routing-protocols-vs-modern-threats/ Thu, 09 Apr 2020 13:15:00 +0000 https://www.atlanticcouncil.org/?p=243110 Having performed a deeper dive over the last few months into the issues surrounding 5G, the GeoTech Center proposes to world policymakers that the geopolitical tensions associated with 5G, as well as other geopolitical cybersecurity-related concerns, can be solved by improving routing protocols against modern threats.

The post 5G’s geopolitics solvable by improving routing protocols against modern threats appeared first on Atlantic Council.

]]>
Much fear, uncertainty, and doubt have been cast on the 5th Generation of International Mobile Telecommunications standards (5G), which has become a geopolitical point of contention between China and the United States. 5G standards themselves still have to be finalized internationally, making it even more difficult to discern market reality vs. market positioning vs. market hype.

Amid all this controversy, the Atlantic Council’s GeoTech Center is charged both with advancing the values of the Council when it comes to benefiting people, prosperity, and peace globally and with working to understand the different perspectives of nation-state actors globally.

As such, having performed a deeper dive over the last few months into the issues surrounding 5G, the GeoTech Center proposes to world policymakers that the geopolitical tensions associated with 5G, as well as other geopolitical cybersecurity-related concerns, can be solved by improving routing protocols against modern threats. Such an endeavor would require a commitment by multiple parties to advance the state-of-the-art in terms of research and development now, with an eye to future benefits in three to five years.

Though the United States recently published a “National Strategy to Secure 5G”, the proposals seem to dance around the heart of the issue: namely, that any nation, organization, or individual needs to develop a way of dynamically evolving trust based on whatever criteria that nation, organization, or individual sets for its Internet and telecommunications experience. Consequently, this proposal cuts across both “Line of Effort 2: Assess Risks to & Identify Core Security Principles of 5G Infrastructure” and “Line of Effort 4: Promote Responsible Global Development and Deployment of 5G” of the strategy, with an eye to making whatever framework emerges mutually beneficial globally as well.

Consistent with how the Defense Advanced Research Projects Agency uses the “Heilmeier questions” to provide clarity into any proposed endeavor, this policy proposal will follow the same format.

H1: What are you trying to do?

Motivate world policymakers and industry leaders to develop and demonstrate a governance protocol by which an individual communications network device can evolve one or more trustworthy communication pathways in a heterogeneous communications environment amid potentially deceptive and disruptive nodes.

How: Instead of relying on a network of trust associated with advertised routes, each node will decide for itself which nodes to trust for the next hop to reach a destination. This includes querying next hops for evidence of their trustworthiness, using at least three methods discussed in detail below.

H2: How is it done today, and what are the limits of current practice?

International Mobile Telecommunications: The 4th Generation (4G) of these standards has known vulnerabilities, and while 5G resolves some of these, some legacy vulnerabilities have been shown to remain. Known 5G vulnerabilities include “TRacking via Paging mEssage DistributiOn” attacks, which can verify in fewer than ten calls whether a victim device is present in a geographical cell and can eventually obtain a victim’s International Mobile Subscriber Identity.

[Figure: GeoTech Center chart describing what 5G is]

TCP/IP: the suite of protocols that dictate how information should be packaged, sent, received, and routed to its destination. TCP/IP includes the Transmission Control Protocol, which ensures reliable transmission of information across Internet-connected networks, including checking packets for errors and submitting requests for re-transmission if any are found. TCP/IP also includes the Internet Protocol, which tells packets of information where to go and how to get there. This allows any network device to forward a packet to another device that is one or more hops closer to the packet’s recipient. Cumulatively, TCP/IP comprises four abstracted layers:

  • Application Layer = software-mediated presentation and interactions.
  • Transport Layer = ensures proper transmission of data.
  • Internet Layer = describes how packets are to be delivered.
  • Network Access Layer = builds packets.

Internet-based routing includes the Border Gateway Protocol (BGP): Unfortunately, BGP lacks cryptographic verification that the Autonomous Systems (AS) providing routing information are who they claim to be or that the information they provide on behalf of other ASes can be trusted. Secure BGP and related approaches attempt to overcome these vulnerabilities, yet so far they have proven economically difficult to roll out at scale. Even then, like BGP, Secure BGP itself has limits on the growth of the routing table.
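To make that gap concrete, here is a minimal, illustrative Python sketch of origin validation. It is not actual BGP or RPKI machinery: the authorized-origins table, the announcement format, and the AS numbers are invented for illustration. It simply shows that plain BGP accepts whatever origin a neighbor claims, while a validation step (conceptually what Secure BGP and RPKI aim to add) can mark an announcement as valid, invalid, or unknown.

```python
# Toy illustration only: real BGP speakers exchange binary UPDATE messages, and
# RPKI validation uses signed Route Origin Authorizations (ROAs); the table and
# announcement format below are invented for this sketch.

# Hypothetical table of prefixes and the AS numbers authorized to originate them.
AUTHORIZED_ORIGINS = {
    "203.0.113.0/24": {64500},
    "198.51.100.0/24": {64501, 64502},
}

def validate_origin(prefix: str, origin_as: int) -> str:
    """Return 'valid', 'invalid', or 'unknown' for an announced (prefix, origin AS) pair."""
    allowed = AUTHORIZED_ORIGINS.get(prefix)
    if allowed is None:
        return "unknown"            # no authorization data covers this prefix
    return "valid" if origin_as in allowed else "invalid"

# Plain BGP would accept whatever origin the neighbor claims:
hijack = {"prefix": "203.0.113.0/24", "origin_as": 64666}       # attacker-originated
print(validate_origin(hijack["prefix"], hijack["origin_as"]))   # -> "invalid"

legit = {"prefix": "203.0.113.0/24", "origin_as": 64500}
print(validate_origin(legit["prefix"], legit["origin_as"]))     # -> "valid"
```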


H3: What is new in your approach and why do you think it will be successful?

No originating node will assume that hops trusted by a prospective next hop can automatically be trusted by the originating node as well. Each node determines trust dynamically and independently, which is possible for mobile 5G because of the sparse density of hops from a mobile network device to a trusted core:

  • Assume any network device probably has < 2^8 possible first-hop base stations in range and that a trusted core can be reached within 5 hops, if not sooner.
  • 2nd-, 3rd-, and 4th-hops from a base station towards either a legacy 4G or 5G core still probably involve < 2^5 potential nodes each, especially if the protocol can define certain paths as restricted by an inbound query from a mobile device. Later we can extrapolate what widespread peer-to-peer 5G comms might mean.
  • 2^8 + 2^13 + 2^18 + 2^23 = 8,659,200 nodes as the maximum number of nodes needing to be queried; probably much less. If we multiply this number × 16,384 bytes of data per node ≈ 141.9 GB maximum memory needed; currently a USB 3.0 256 GB stick ≈ $39 USD. (The sketch after this list reproduces this arithmetic.)
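The arithmetic above is small enough to check directly. The sketch below reproduces the node-count and memory estimates; the 2^8 first-hop bound, the 2^5 per-hop fanout, and the 16,384 bytes per node are the illustrative figures used in this proposal, not measurements.

```python
# Reproduce the back-of-the-envelope sizing above.
first_hop_nodes = 2 ** 8        # assumed upper bound on base stations in range
fanout_per_hop = 2 ** 5         # assumed upper bound on candidate nodes per later hop
hops_to_core = 4                # first hop plus up to three more before a trusted core

node_counts = [first_hop_nodes * (fanout_per_hop ** i) for i in range(hops_to_core)]
total_nodes = sum(node_counts)  # 2^8 + 2^13 + 2^18 + 2^23

bytes_per_node = 16_384         # assumed per-node record size
total_bytes = total_nodes * bytes_per_node

print(node_counts)                      # [256, 8192, 262144, 8388608]
print(f"{total_nodes:,} nodes")         # 8,659,200 nodes
print(f"{total_bytes / 1e9:.1f} GB")    # ~141.9 GB
```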

256 GB of NAND flash memory simply was not available for most of the history of the Internet and mobile communications; now it is available cheaply and will continue getting cheaper as data centers drive demand. NAND stores data in arrays of memory cells made using floating-gate transistors, and 3D NAND stacking adds a vertical dimension to the NAND memory architecture.

[Figure: GeoTech Center chart describing advances in memory and processing power]

At the same time, 5G should reduce latency and increase bandwidth, so sending out exploratory packets is now possible in densely connected environments in ways that were not possible with 2G or 3G. On-board computing is also able to do more than was possible in the past; a palm-sized device can now deliver 20 teraflops using x86 architectures at low energy or via solar power (100 teraflops in early 2020).


Pausing for policymakers, what does all this mean?

Simply put, TCP/IP required trust relationships between senders and receivers to be dis-intermediated because, for most of the Internet’s history, a sender could not store in-memory knowledge of all the nodes needed to reach a receiver. Mobile 5G presents a sparse density of hops from a mobile network device to a trusted core, making it possible to assess trustworthiness at scale.

Regardless of 5G, 4G, or any other mobile telecommunications standard, the era in which on-system memory limits prevented storing the necessary information about potential nodes from which to evolve trust is over.

It is now possible for industry partners to develop a new approach to routing information, incorporating elements of multi-factor authentication across nodes:

  • Something you know: what a node can tell another.
  • Something you have: what a node can present via encrypted signatures about hardware or software of a node.
  • Something you are: what a potential route is intrinsically.

World policymakers should care about resolving the geopolitical tensions associated with 5G as this impacts individual consumers and world markets. If consumers or markets are concerned that 5G technologies are being used surreptitiously for intelligence purposes without their consent, that will erode trust in open societies and free markets, as also noted by fellow Atlantic Council colleague Kirsten Fontenrose in a recent interview.

Evolving trust dynamically will be required for the next decade ahead

Industry partners can develop a light-weight telecommunications governance protocol that will evolve trust dynamically, by each originating node and for a specific duration of time, using multiple approaches, including the following three examples:

1 – via one or more Tells – something a prospective next hop can provide as data, explicitly or implicitly, to the originating node in response to a query, involving the following three steps:

Step 1. Originating Node queries Prospective Next Hop with a [time- or context-specific] query.

Step 2. Prospective Next Hop can either choose not to answer or answer with a [data response] to the Originating Node. This [data response] could be:

* a time-specific, cryptographic response to the challenge phrase, based on the Prospective Next Hop’s own [in-memory software] or programmable [read-only memory],
* some other response that shows the node even understands the challenge phrase query, etc.

Step 3. Originating Node compares the data response with condition states pre-loaded on the Node by software-defined [flash memory].
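A minimal sketch of this Tells exchange follows, assuming a pre-provisioned shared secret and HMAC over a time bucket as the time-specific response function; the proposal does not prescribe a particular primitive, so every name and parameter here is illustrative. The originating node issues a challenge, the prospective next hop answers, and the originating node compares the answer against what its own pre-loaded material predicts.

```python
import hmac, hashlib, time

# Illustrative only: the proposal does not fix a primitive; HMAC-SHA256 over a
# time-bucketed challenge stands in for "a time-specific, cryptographic response."

def tell_response(shared_secret: bytes, challenge: bytes, time_bucket: int) -> bytes:
    """What a prospective next hop returns when queried (Step 2)."""
    msg = challenge + time_bucket.to_bytes(8, "big")
    return hmac.new(shared_secret, msg, hashlib.sha256).digest()

def originating_node_check(pre_loaded_secret: bytes, challenge: bytes,
                           response: bytes, window_seconds: int = 30) -> bool:
    """Step 3: compare the response with the condition states the node can derive
    from its own pre-loaded material, tolerating adjacent time buckets."""
    bucket = int(time.time()) // window_seconds
    for candidate in (bucket - 1, bucket, bucket + 1):
        expected = tell_response(pre_loaded_secret, challenge, candidate)
        if hmac.compare_digest(expected, response):
            return True
    return False

# Example exchange (both sides hold the same pre-provisioned secret in this sketch):
secret = b"pre-provisioned-demo-secret"
challenge = b"context-specific-query-42"                             # Step 1
answer = tell_response(secret, challenge, int(time.time()) // 30)    # Step 2
print(originating_node_check(secret, challenge, answer))             # Step 3 -> True
```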

2 – via Encryption – something a next hop can cryptographically affirm to the originating node about its hardware or software layers, including chipsets, involving the following three steps:

Step 1. Originating Node queries Prospective Next Hop with a [set] query about its hardware or software.

Step 2. Prospective Next Hop can either choose not to answer or answer with a [data response] to the Originating Node. This [data response] could be:

* time-specific, cryptographic signature of the name and version of important [software] running on the node,
* what [hardware] it possesses,
* what additional encryption modes it can activate, etc.

Step 3. Originating Node compares the data response with condition states pre-loaded on the Node by software-defined [flash memory].

3 – via Routing – something an individual node already knows in advance by other means, such as a software update provided out-of-band to the originating node, about the next hop as a prospective route. This third option of evolving trust dynamically would involve the following two steps:

Step 1. Operator pre-loads coordinated, trusted routes into the memory of the Originating Node.

Step 2. As a result, the Originating Node already has in-memory knowledge about either [using] or [avoiding] a Prospective Next Hop, stored on the Node in software-defined [flash memory].
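Taken together, the three methods can feed a single next-hop decision. The sketch below is a simplified illustration built on invented data structures and scoring rules: each prospective next hop is marked by whether its tell checked out, whether its hardware/software attestation verified, and whether it appears on an operator-pre-loaded trusted or blocked list, and the originating node then prefers the hop with the most corroborating factors. The minimum-factor threshold is an assumption; an operator could require all three factors for sensitive traffic.

```python
from __future__ import annotations
from dataclasses import dataclass

# Illustrative data model; field names and scoring weights are invented for this sketch.

@dataclass
class HopEvidence:
    node_id: str
    tell_verified: bool          # Method 1: challenge-response checked out
    attestation_verified: bool   # Method 2: signed hardware/software claims verified
    preloaded_status: str        # Method 3: "trusted", "blocked", or "unknown" (out-of-band)

def trust_score(ev: HopEvidence) -> int:
    """Combine the three factors; a blocked hop is never usable."""
    if ev.preloaded_status == "blocked":
        return -1
    score = 0
    score += 1 if ev.tell_verified else 0
    score += 1 if ev.attestation_verified else 0
    score += 1 if ev.preloaded_status == "trusted" else 0
    return score

def choose_next_hop(candidates: list[HopEvidence], minimum: int = 2) -> str | None:
    """Pick the highest-scoring hop that clears the (assumed) minimum number of factors."""
    usable = [c for c in candidates if trust_score(c) >= minimum]
    if not usable:
        return None
    return max(usable, key=trust_score).node_id

candidates = [
    HopEvidence("base-station-a", tell_verified=True,  attestation_verified=False, preloaded_status="unknown"),
    HopEvidence("base-station-b", tell_verified=True,  attestation_verified=True,  preloaded_status="trusted"),
    HopEvidence("base-station-c", tell_verified=False, attestation_verified=True,  preloaded_status="blocked"),
]
print(choose_next_hop(candidates))   # -> "base-station-b"
```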

H4: Who cares? If you are successful, what difference will it make?

By pursuing this approach to resolving the geopolitical tensions associated with 5G and evolving trust dynamically, the cumulative result would be to combine and scale:

(1) Zero-trust networking, currently limited by its requirement that network controls be established internally in advance and by its inability to address external autonomous systems, with

(2) Software-Defined Networking, to overcome the current technological and economic burdens of Secure BGP by incorporating such capabilities into future revisions of 5G standards.

[Figure: GeoTech Center chart describing a new approach to resolving 5G’s geopolitical tensions]

Such an endeavor needs to be treated as an urgent research and development effort, because there remain some demonstration capabilities necessary to ensure the governance protocol is sufficiently light-weight.

  • The new protocol must prove that it is not too bandwidth intensive to implement.
  • The protocol must also demonstrate that it does not reveal too much when attempting to discover and determine which network devices are more trustworthy vs. others.
  • Furthermore, the protocol must prevent untrustworthy devices from learning too much about the originating “node-zero” as it attempts to evolve the ad-hoc communications pathway(s).
  • Finally, the protocol must prevent less benevolent actors from reverse-engineering the multi-factor Tells, Encryption, and Routing approaches used to determine trustworthiness, or from presenting a weakness akin to a “pass the hash” attack.

Pursuing this endeavor now is important because, as noted, both 4G and 5G have known vulnerabilities, including hijacking and man-in-the-middle attacks; this approach would remove much of that risk.

Secure communications are at risk unless a coalition of industry partners adopts a zero-trust approach and helps commercialize it into mass-produced consumer mobile and Internet of Things devices. Without software-defined, zero-trust networking, significant risks remain for “still on copper” routes using TCP/IP through other unverified nodes.

The post 5G’s geopolitics solvable by improving routing protocols against modern threats appeared first on Atlantic Council.

]]>
We can build an immune system for the planet https://www.atlanticcouncil.org/blogs/geotech-cues/we-can-build-an-immune-system-for-the-planet/ Mon, 06 Apr 2020 10:00:15 +0000 https://atlanticcouncil.org/?p=241409 Our approaches for pathogen detection and antigen development are too slow. Using high-speed computers, biosensors, and the Internet, we can universalize and automate the process such that we can automatically sense an abnormal pathogen and immediately start synthesizing in a computer’s memory techniques to mitigate it. Once an abnormal pathogen is detected, we can automate the antigen development to have a solution ready much faster for possible use than conventional means. Together, we can build an auto-immune system for the planet.

The post We can build an immune system for the planet appeared first on Atlantic Council.

]]>
Back in 2013, and again in 2016, a proposal informed by the anthrax events of 2001, West Nile Virus in 2002, and both Severe Acute Respiratory Syndrome (SARS) and an outbreak of monkeypox in the United States in 2003 was shared with the Defense Advanced Research Projects Agency (DARPA). The proposal was seen at the time as a safeguard against a “low probability, high consequence” event – a natural or human-caused pandemic.

The solution was a series of proposals centered on the concept of building an “immune system for the planet” that could detect a novel pathogen in the air, water, or soil of the Earth and rapidly sequence its DNA or RNA. Once the pathogen was sequenced, high-performance computers would strive to identify the three-dimensional protein surfaces of the virus or bacterium and then search through an index of known molecular therapies that might be able to neutralize it.

At a minimum, such an immune system for the planet would overcome the limits of waiting for nation-states themselves to alert the international community of outbreaks within their borders.

A second rationale was also associated with this proposal, namely: exponential changes in technology create pressures for representative democracies, republics, and other forms of deliberative government to keep up – both at home and abroad.

In an era in which precision medicine will be possible, so too will be precision poison, tailored and delivered at a distance. As proposed both in 2013 and again in 2016, this will become a national security issue if we do not figure out how to better use technology to do the work of deliberative governance at the speed needed to keep up with threats associated with pandemics.

Building an immune system for the planet now

While the pitches to DARPA in 2013 and 2016 were not funded at the time, such a solution remains viable and is now potentially even more feasible to start in the next two to five years given advances in computing, biosensors, and our understanding of microbiology.

The premise of such a solution centers on the recognition that, per the 2001 anthrax events, SARS, and H1N1, the biggest threat posed by biological agents is the protracted time window it takes to characterize them, develop treatments, and perform remediation. We certainly are seeing this again with the current COVID-19 pandemic.

Exponentially reducing the time it takes to mitigate a biothreat agent will save lives, property, and national economies. To do this, we need to:

  1. Automate detection, by embedding electronic sensors into living organisms and developing algorithms that take humans “out of the loop” in characterizing a biothreat agent
  2. Universalize treatment methods, by employing automated methods to massively select bacteriophages vs. bacteria or antibody-producing E. coli vs. viruses
  3. Accelerate mass remediation, either via rain or the drinking water supply, with chemicals to time-limit the therapy

If these three steps are completed, the result would be a globally distributed artificial “immune system”. The chart below, from the 2013 proposal, details what an immune system for the planet would do.

[Figure: GeoTech Center chart from 2013 of what an immune system for the planet would do]
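Step one, automated detection, hinges on flagging genetic material that does not match anything already catalogued. As a very rough illustration of that idea, the snippet below marks a sequenced sample as abnormal when too few of its short subsequences (k-mers) appear in a reference set of known pathogens; a real pipeline would align reads against curated reference genomes, and the sequences, k-mer length, and threshold here are invented for the sketch.

```python
# Toy novelty check: a real system would align sequencing reads against curated
# reference genomes; short k-mers and made-up sequences stand in for that here.

def kmers(sequence: str, k: int = 8) -> set[str]:
    """All overlapping length-k subsequences of a DNA string."""
    return {sequence[i:i + k] for i in range(len(sequence) - k + 1)}

def looks_novel(sample: str, references: list[str], k: int = 8,
                known_fraction_threshold: float = 0.5) -> bool:
    """Flag the sample if less than `known_fraction_threshold` of its k-mers
    appear in any reference sequence (the threshold is an illustrative assumption)."""
    reference_kmers = set()
    for ref in references:
        reference_kmers |= kmers(ref, k)
    sample_kmers = kmers(sample, k)
    if not sample_kmers:
        return False
    known = len(sample_kmers & reference_kmers) / len(sample_kmers)
    return known < known_fraction_threshold

known_pathogens = ["ATGGCCTTAGGCTTACCGATCGGA", "TTGACCGGTTAACCGGATCCGTAA"]
sample = "GGCATTCGATTACAGGCTTAACGT"            # shares little with the references
print(looks_novel(sample, known_pathogens))    # -> True in this toy example
```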

Other creative solutions that the next decade may need

In parallel to the pitches to DARPA in 2013 and 2016 were additional ideas that now, in the era of COVID-19, might be worth re-examining for the future ahead:

1. Herd Monitoring for the Internet – One such idea addressed whether the growing number of “Internet of Things” (IoT) and other devices online would exponentially challenge our already strained approaches to cybersecurity through the sheer volume of devices online. The question posed in 2016 was whether a “public health model” – focusing on monitoring abnormal behaviors of IoT devices, without revealing identities or specific actions tied to an individual, to protect privacy – would be superior at rapidly detecting, containing, and mitigating threats.

The purpose behind the timing was to adopt such a public health approach to the IoT before mass exploits became severe for societies around the world. Even in 2016 it was clear that IoT devices, with security models based on industrial controls, might have even worse security than Internet-based, TCP/IP endeavors, creating a compelling urgency to act. Moreover, with an eye to the future, a more automated, public health approach to the “herd immunity” of IoT devices might pave the way for the technologies and approaches needed to do the same in an era in which precision medicine, and thus the risk of precision poison, becomes available.
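As a rough illustration of what such a public-health-style monitor could look like, the sketch below flags devices whose outbound traffic deviates sharply from the herd baseline while reporting only a salted hash of each device identifier rather than the identity itself. The traffic figures, salt, and deviation threshold are invented assumptions, not a production design.

```python
import hashlib
import statistics

# Sketch of "herd" monitoring: flag abnormal device behavior without reporting identities.
# Traffic numbers, salt, and the deviation threshold are illustrative assumptions.

SALT = b"rotating-deployment-salt"

def pseudonym(device_id: str) -> str:
    """Report a salted hash instead of the raw device identifier."""
    return hashlib.sha256(SALT + device_id.encode()).hexdigest()[:12]

def flag_outliers(traffic_mb: dict[str, float], threshold: float = 3.5) -> list[str]:
    """Return pseudonyms of devices whose traffic deviates from the herd median
    by more than `threshold` median absolute deviations."""
    values = list(traffic_mb.values())
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values) or 1.0
    return [
        pseudonym(device)
        for device, value in traffic_mb.items()
        if abs(value - median) / mad > threshold
    ]

hourly_outbound_mb = {
    "thermostat-17": 1.2, "camera-03": 2.4, "lightbulb-88": 0.9,
    "camera-04": 2.2, "doorlock-02": 0.7, "camera-05": 950.0,  # likely compromised
}
print(flag_outliers(hourly_outbound_mb))   # one pseudonymized outlier
```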

2. Your Own Digital Privacy Agent – A second such idea, pitched both in 2013 and again in 2016, was whether technological steps could be taken to empower consumers to decide when, where, and in what context their data should be shared with data requestors. By developing an open source agent or mobile app, consumers could choose to use it as their trusted online broker when interfacing with other websites, mobile apps, or online services requesting their data.

The issue at the time was that individuals are no longer in control of their privacy. End User License Agreements (EULAs) associated with apps and programs are usually too long for most people to read and parse. The concern in 2013 and 2016 was that trust in democratic institutions would wane if open societies did not come up with a solution that empowered individuals to make choices with regard to their personal data. The Internet of Things would only complicate this, as would the volumes of data associated with precision medicine.

3. Mechanism to Privatize and Mask DNA – A third and final idea was to pursue mechanisms to privatize DNA, recognizing that in the very near future individuals might need additional safeguards to mask and keep secret their DNA. This stemmed in part from a projection of advances in precision medicine and biometrics that ultimately would mean a future where anyone could collect and sequence another person’s DNA.

As a result, knowing a person’s DNA might reveal their health risks or allow the equivalent of biological misinformation to be spread by others; for example, copies of someone’s DNA could be placed at crime scenes where they had never been present. Back in 2013 and 2016, the ability to design DNA from scratch to tailor retroviruses was still maturing, yet new biological techniques have since accelerated this capability. It will only be a matter of time before tailored retroviruses can be made, or before simply publishing a famous person’s DNA to the Internet reveals things they may not want publicly known about their health.

A call to action

COVID-19 is a case of a low-probability, high-consequence event, a pandemic, finally happening. The pandemic will transform how our world operates and have ripple effects both on the development of new technologies and on new ways of operating as societies in the aftermath.

Our approaches for pathogen detection and antigen development are too slow. Using high-speed computers, biosensors, and the Internet, we can universalize and automate the process for pathogen detection and antigen development, such that we can automatically sense an abnormal pathogen and immediately start synthesizing in a computer’s memory techniques to mitigate it. Once an abnormal pathogen is detected, we can automate the antigen development (e.g., phages, E. coli that eat other E. coli, and more) to have a solution ready for possible use much faster than conventional means. We can build an auto-immune system for the planet.

Such an endeavor is possible given advances in machine learning and computational power. Such a concerted effort is needed given the continuing risk of future global pandemics. The question is: will world leaders choose to invest in the future by beginning today the technological and geopolitical conversations needed to make this happen?

The post We can build an immune system for the planet appeared first on Atlantic Council.

]]>
Event recap | How can tech help with crisis responses? https://www.atlanticcouncil.org/commentary/event-recap/how-can-tech-help-with-crisis-responses/ Thu, 26 Mar 2020 13:00:16 +0000 https://atlanticcouncil.org/?p=236246 On March 26, 2020, Mr. Matthew Putman, CEO Nanotronics and David Bray, Atlantic Council GeoTech Center Director participated in a live video discussion hosted by Michael Krigsman, Founder of CxOTalk title "How Can Tech Help With Crisis Responses?"

The post Event recap | How can tech help with crisis responses? appeared first on Atlantic Council.

]]>

On March 26, 2020, Mr. Matthew Putman, CEO of Nanotronics, and David Bray, Atlantic Council GeoTech Center Director, participated in a live video discussion hosted by Michael Krigsman, Founder of CxOTalk, titled “How Can Tech Help With Crisis Responses?”

In this video, Matthew and David discuss the importance of innovation as a way of getting through crises, such as the current COVID-19 pandemic. They both highlight what private sector entrepreneurs can do in collaboration with public sector leaders. The discussion highlights the broad role of advances in additive manufacturing and 3D printing, data and AI, as well as distributed production and computing. The video concludes with the recognition that COVID-19 is catalyzing a shift that was already under way, away from centralized production toward distributed production that uses connected digital and cognitive means to mirror the output of centralized production while being more resilient as a result.

The post Event recap | How can tech help with crisis responses? appeared first on Atlantic Council.

]]>
Atlantic Council launches GeoTech Center and Commission https://www.atlanticcouncil.org/news/press-releases/atlantic-council-launches-geotech-center-and-commission/ Wed, 11 Mar 2020 13:00:00 +0000 https://atlanticcouncil.org/?p=229742 The Atlantic Council today launched the GeoTech Center, whose role will be to provide greater understanding of emerging technologies and to develop strategies and policies to ensure the use of “technology for good” among individuals, societies and the international community.

The post Atlantic Council launches GeoTech Center and Commission appeared first on Atlantic Council.

]]>
President and CEO Frederick Kempe also announced high-level Atlantic Council GeoTech Commission; taps top technologist David Bray as founding director

WASHINGTON, DC—The Atlantic Council today launched the GeoTech Center, whose role will be to provide greater understanding of emerging technologies and to develop strategies and policies to ensure the use of “technology for good” among individuals, societies and the international community. The Center will focus on the impact of data and machine learning, personalized medicine, additive manufacturing, nanotechnology, green energy, commercialization of space, robotics, synthetic biology, and other new technologies on the horizon. The center is being launched with support from founding partners Accenture, SICPA, The Rockefeller Foundation, and Carnegie Mellon University, as well as corporate supporter Amazon Web Services (AWS). 

“We are living in a time of rapid change, where new technologies like AI, blockchain, cloud computing, quantum computing and extended reality are changing how people work and live.”

John Goodman, chief executive officer of Accenture Federal Services

Atlantic Council President and CEO Frederick Kempe also announced the launch of the GeoTech Commission, a high-level group of experts that will study the trajectories of new technologies and report on policy actions to ensure the most beneficial outcomes for people, prosperity, and peace. The commission’s honorary co-chairs will be a bipartisan group of four leading members of the United States Congress: Senator Mark Warner (D-VA), Senator Rob Portman (R-OH), Rep. Suzan DelBene (D-WA), and Rep. Michael McCaul (R-TX). The commission’s report, which will be released later this year, will be timed to inform policy makers, legislators, and the general public in conjunction with US elections in November.

The GeoTech Center will be directed by David Bray, a leading technology strategist and public servant. Bray’s rich background ranges from private sector experience as a strategist for startups to US government positions that have included work at the Centers for Disease Control leading tech responses to the September 11 and anthrax attacks, as well as SARS. He also served as a senior advisor to the Department of Defense in Afghanistan and as executive director for the National Commission for the Review of Research and Development Programs of the US intelligence community. He most recently served as executive director of the People-Centered Internet coalition.


“The next generation of technological change will have a more dramatic impact on how individuals live and work, how economies operate and how nations interact than any that has come before it,” said Kempe. “We hope that our new GeoTech Center, benefitting from the rich business and government experience of David Bray, will help us more successfully navigate a disruptive period ahead. Our aim will be to work with the Atlantic Council’s global community so that we can better tap the breathtaking potential of these new technologies to do good while at the same time recognizing the need to manage the inevitable downsides of such revolutionary change.”

Said Bray, “New technologies and data are tools. It is upon the choices we make, both as individuals and as communities, that ensure that they are used as a force for good in the world—and through the efforts of the new center and commission, we plan to help define what measurable good outcomes include. In a time of global turbulence and polarization, our mission is more important than ever. The GeoTech Team will champion positive paths for new tech and data that benefit people, prosperity, and peace globally.”

The GeoTech Commission will be comprised of a select group of prominent leaders from the private sector, academia, and government. Working together, the commissioners will explore how societies and markets characterize new technologies and data choices and their benefits to people, prosperity, and peace. The commission will also study how overall adoption of beneficial new technologies and data initiatives can be accelerated regionally, nationally, and globally. Aside from its congressional honorary chairs, its co-chairs will be John Goodman, Chief Executive Officer of Accenture Federal Services, and Teresa Carlson, Vice President of Worldwide Public Sector at AWS.

“We are living in a time of rapid change, where new technologies like AI, blockchain, cloud computing, quantum computing, and extended reality are changing how people work and live. We see public and private sector leaders focused on harnessing the impact of these technologies for their customers, their organizations, and their workforce,” said John Goodman, Chief Executive Officer of Accenture Federal Services. “Together, these innovations have broader implications on national security, economic competitiveness, and the vitality of society. Bringing together leaders across the public and private sectors to go beyond understanding these changes and working together to chart a path forward to make a positive impact in the world is why Accenture is proud to be a founding partner of the GeoTech Center.”

“We are excited to work with the GeoTech Commission on these important topics,” said Shannon Kellogg, Vice President of AWS Public Policy for the Americas. “Cloud technology plays a key role in driving digital transformation, and allows people, organizations, and governments to be more efficient and agile in accomplishing their missions and reinventing citizen services and experiences.”

The GeoTech Center will benefit as well from a rich stable of fellows with a wide spectrum of expertise across different fields, including former Australian Prime Minister Malcolm Turnbull, the prominent British computer scientist Dame Wendy Hall, and Lord Tim Clement-Jones, digital economy spokesman of the United Kingdom’s Liberal Party. The GeoTech Fellows will help the center identify public and private sector choices affecting the use of new technologies and data and will recommend positive paths forward to help markets and societies adapt in light of technology and data-induced changes. They will also work closely with the Center to determine priorities for future investment and cooperation between public and private sector entities seeking to develop new technologies and data initiatives specifically for global benefit.

To learn more about the GeoTech Center and Commission, please visit https://gtc.atlanticcouncil.org and follow its efforts on Twitter at @ACGeoTech. For media inquiries, please contact press@atlanticcouncil.org.


The post Atlantic Council launches GeoTech Center and Commission appeared first on Atlantic Council.

]]>
Hruby joins the Global Startup Movement to discuss Africa-India digital transformation parallels https://www.atlanticcouncil.org/insight-impact/in-the-news/hruby-joins-the-global-startup-movement-to-discuss-africa-india-digital-transformation-parallels/ Wed, 05 Feb 2020 15:52:00 +0000 https://www.atlanticcouncil.org/?p=219730 The post Hruby joins the Global Startup Movement to discuss Africa-India digital transformation parallels appeared first on Atlantic Council.

]]>

The post Hruby joins the Global Startup Movement to discuss Africa-India digital transformation parallels appeared first on Atlantic Council.

]]>
US-China collaboration on the Internet of Things safety: What next? https://www.atlanticcouncil.org/in-depth-research-reports/report/us-china-collaboration-on-the-internet-of-things-safety-what-next/ Mon, 16 Dec 2019 05:01:00 +0000 https://www.atlanticcouncil.org/?p=205122 While the Internet of Things offers a range of humanitarian, commercial, and national security benefits, its pervasive nature has many concerned over its impacts on safety and security in society. In a new report by the Atlantic Council’s Scowcroft Center for Strategy and Security, Karl Rauscher notes that the world’s two largest powers are at a crossroads with regard to their level and scope of cooperation in continued IoT advances. United States–China Collaboration on the Internet of Things Safety: What’s Next? analyzes possibilities for the United States and China to work together to establish consensus policies and standards to make their societies safer and provide a model for the world.

The post US-China collaboration on the Internet of Things safety: What next? appeared first on Atlantic Council.

]]>
The Internet of Things describes a future world with pervasive connectivity. While IoT offers a range of humanitarian, commercial, and national security benefits, its pervasive nature has many concerned over its impacts on safety and security in society. A great disservice is done when national security, commercial, and humanitarian interests are conflated. In a new report by the Atlantic Council’s Scowcroft Center for Strategy and Security, Karl Rauscher notes that the world’s two largest powers are at a crossroads with regard to their level and scope of cooperation in continued IoT advances. United States–China Collaboration on the Internet of Things Safety: What’s Next? analyzes possibilities for the United States and China to work together to establish consensus policies and standards to make their societies safer and provide a model for the world.

The post US-China collaboration on the Internet of Things safety: What next? appeared first on Atlantic Council.

]]>
Aviation cybersecurity: Scoping the challenge https://www.atlanticcouncil.org/in-depth-research-reports/report/aviation-cybersecurity-scoping-the-challenge-report/ Wed, 11 Dec 2019 05:01:00 +0000 https://www.atlanticcouncil.org/?p=204003 The digital attack surface the aviation sector presents to its adversaries continues to grow in such a way that both managing risk and gaining insight on it remain difficult. With emerging technologies like machine learning and fifth-generation (5G) telecommunications seeing wider adoption—alongside electric vertical takeoff and landing (eVTOL), autonomous aircraft, and increased use of space—aviation-cybersecurity risk management is on the cusp of becoming more complex.

The post Aviation cybersecurity: Scoping the challenge appeared first on Atlantic Council.

]]>
Table of Contents

Foreword

In the past decade, the aviation industry has reaped the benefits of digitization. With the aircraft efficiency gains and enhancements to the passenger experience catalyzed by new technologies, we have to acknowledge the corresponding new risks, including social and technical vulnerabilities never before addressed. In 2017, the Atlantic Council released its ground-breaking report, Aviation Cybersecurity—Finding Lift, Minimizing Drag. The report raised awareness on the state of cybersecurity in the aviation industry, sparking public dialogue on the intersection of cybersecurity and aviation. This created a foundation for the aviation community to convene around protecting the traveling public. Since then, it has become evident that anticipating, identifying, and mitigating cyberspace vulnerabilities in aviation will require the buy-in of all stakeholders in this ecosystem.

Two years on, Thales is honored to continue its support for the Atlantic Council and this crucial initiative that aims to map perspectives on cybersecurity across this diverse industry and highlight the growing need for collaboration across stakeholders. Ultimately, there is no silver bullet for aviation cybersecurity, and confronting cyber risk in aviation will require a global approach, working across safety, security, cybersecurity, and enterprise IT. This report and the accompanying global survey developed by the Atlantic Council will increase our holistic understanding of aviation cyber risk and drive meaningful engagement across the aviation community.

This effort to broaden the community of stakeholders examining cybersecurity in aviation will increase our collective security, safety, and resilience. When it comes to the trust of travelers, we are all only as strong as those most vulnerable among us. It is only through mutual understanding and collaboration that we can continue to challenge one another, grow, and improve. I applaud the Atlantic Council for embracing this topic and am proud Thales has the chance to support this work.

Sincerely,

Alan Pellegrini

CEO, Thales North America
Board Director, Atlantic Council

Executive Summary

The objective of this report is to capture and understand the diversity of perspectives on aviation cybersecurity. The range of opinions and perspectives became apparent in the 2017 report, Aviation Cybersecurity—Finding Lift, Minimizing Drag. In that report, perspectives ranged from a belief that there was no aviation-cybersecurity challenge, because “it wasn’t possible to hack” aviation systems, to the belief that there is significant, systemic risk in aviation.

The 2017 report called out the complexity of the global aviation-cybersecurity challenge and focused on the leadership role of the International Civil Aviation Organization (ICAO) as critical to drive coordinated, strategic change. ICAO took a positive step toward asserting its leadership in October 2019, when the 40th Session of the ICAO General Assembly adopted Assembly Resolution A40-10 Addressing Cybersecurity in Civil Aviation and urged states to implement the Aviation Cybersecurity Strategy, laying out both a vision and strategic goals. The significance of this development for bringing coherence to global aviation cybersecurity cannot be overstated.

This report builds on the challenges raised two years ago to explore how these diverse perspectives have changed in the intervening time. The digital attack surface the aviation sector presents to its adversaries continues to grow in such a way that both managing risk and gaining insight on it remain difficult. With emerging technologies like machine learning and fifth-generation (5G) telecommunications seeing wider adoption—alongside electric vertical takeoff and landing (eVTOL), autonomous aircraft, and increased use of space—aviation-cybersecurity risk management is on the cusp of becoming more complex.

This report leverages a global survey of 244 respondents (who completed it in whole or in part), together with targeted interviews and several expert workshops, to explore the diverse challenges of aviation cybersecurity. Although there are multiple initiatives on the topic, management of aviation-cybersecurity risk remains challenging. The first set of challenges involves weaving aviation cybersecurity into flight safety, security, and enterprise information technology (IT), all of which have well-established governance and accountability frameworks. The second set of challenges orbits the relationship between aviation-sector suppliers and customers regarding cybersecurity, with many finding it difficult to incorporate best practices into purchases, as well as to develop consensus on adequate cybersecurity risk management and transparency.

Managing aviation cybersecurity requires making thoughtful choices from a clear and well-informed understanding of risk. Here, despite ample challenges, there are some glimmers of hope. But, on topics such as information sharing, it was clear that respondents thought there was much more to be done. Additionally, there is a clear desire for increased objectivity regarding aviation-cybersecurity risk, whether through independent assessment or agreement among aviation-sector stakeholders. There was strong agreement that good-faith researchers were a positive thing for the aviation industry, but perspectives on guidance, legal clarity, and ease of vulnerability disclosure all remain unclear or difficult to navigate.

Through both its designs and its training practices, the aviation sector rigorously works to anticipate, mitigate, and objectively investigate failure, but incorporating cybersecurity into this culture remains a challenge. There is very little operational training (for pilots, air-traffic controllers, etc.) to either recognize or manage aviation-cybersecurity incidents. And, although aviation operations are inherently resilient, disruptive attacks at scale will prove challenging to manage. Additionally, attacks against data integrity, “second-generation” attacks, undermine the ability of aviation operators to conduct safe operations. Working through these issues will require an increased effort to understand cybersecurity aspects of everything from normal operations and procedures to post-accident and incident management.

There has been increased focus on, and increased efforts toward, aviation-cybersecurity regulations and standards, but the survey conducted for this report found deep concern about their effectiveness, clarity, and communication. What was clear is that a majority of contributors thought that aviation cybersecurity should be led globally. As national, regional, and organizational efforts are under way to improve aviation cybersecurity, there is a growing risk of adding complexity across the landscape of regulations and best practices. All regions deserve the tools to improve, and any new body of standards must be harmonized across complex global supply and operations chains. 

“Improving aviation cybersecurity is a journey, and every stakeholder must be able to make the trip if global, systemic risk is to be reduced.”

Improving aviation cybersecurity is a journey, and every stakeholder must be able to make the trip if global, systemic risk is to be reduced. ICAO promotes this from a capacity-building perspective with a tagline of “No Country Left Behind.” As global aviation-cybersecurity efforts ramp up, adopting a tagline of “No Vulnerability Left Behind” is a fitting example of how focus must be applied if the sector is to remain safe, secure, and resilient.

1: Top-Line Actions

This report recommends the following steps for the aviation ecosystem:

1.1 Global standards for a global industry

With publication of the ICAO Cybersecurity Strategy, there is now a vision for how aviation cybersecurity can advance globally.

“ICAO’s vision for global cybersecurity is that the civil aviation sector is resilient to cyber-attacks and remains safe and trusted globally, whilst continuing to innovate and grow.”

To coherently gain insight, understand and manage aviation-cybersecurity risk, and bring swift, globally aligned, and effective change, all aviation stakeholders—including states, international bodies, regulators, manufacturers, and service providers—are strongly encouraged to act in unison, and to support the new ICAO Cybersecurity Strategy, as called for in ICAO Assembly Resolution A40-10 Addressing Cybersecurity in Civil Aviation. 1

1.2 Increasing transparency and trust

Trust in aviation cybersecurity will only come with increased transparency. Limited or ineffectual information sharing is leading to opacity of risk among stakeholders, and arguably obfuscates the scale of the aviation-cybersecurity challenge and the way forward. Actions to improve this fall into two key areas: contracts and system design.

1.2.1 Contracts
All contracts between aviation stakeholders must include cybersecurity considerations, such as through-life risk management, vulnerability management, and data sharing. These must be clearly and transparently agreed upon, to ensure that all stakeholders are able to make informed decisions about cybersecurity risk.

1.2.2 System design
Aviation-system design must be approached from the perspective of not only securing systems, but also increasing cybersecurity risk transparency and objectivity, for manufacturer and customer alike. All stakeholders must, therefore, be able to access and analyze their cybersecurity-relevant data. 

Additionally, efforts must be taken to reduce the rapidly expanding digital attack surface of the aviation sector, with a default of designing for simplicity, security, and resiliency.

1.3 Building bridges

The scale and complexity of the cybersecurity challenges facing the industry mean diverse stakeholders must be encouraged to support and learn from each other. There are three key areas.

1.3.1 Diverse stakeholders
Because of the scale, nature, and variety of the aviation sector, a number of diverse stakeholder groups can productively collaborate to help understand and manage risk. Creating a rich and positive dialogue with stakeholders ranging from other sectors to cybersecurity researchers will accelerate understanding of the challenge, as well as of potential solutions.

1.3.2 Regulations and standards
ICAO, states, and standards bodies must be supported in the creation of informed and balanced aviation-cybersecurity regulations, through input from diverse stakeholders, as a collaborative and structured effort to promote global coherency.

1.3.3 Safety, security, enterprise cybersecurity, and aviation cybersecurity
Where aviation cybersecurity crosses the traditional elements of aviation security, safety, and enterprise IT, efforts must be made to break down silos and create a shared vision of risk.

1.4 Information sharing

Cybersecurity information sharing must be approached in the same way as information sharing on the topic of flight safety. Moving to a “learn once, share widely” model will promote rapid visibility, mitigation, and management of risk across the entire sector. Blockers of information sharing on aviation cybersecurity must be critically assessed, and standards must promote the sharing of cybersecurity-relevant information in a timely and responsive manner that gets defenders ahead of vulnerabilities and adversaries.

1.5 Communications

Aviation cybersecurity is a critical and complex topic that is still little discussed outside the sector, leading to risks of misperception and inaccuracy. Increasing external dialogue on the topic and helping create informed positions will go a considerable way toward increasing understanding and trust across multiple stakeholders.

1.6 People

The global scale of the aviation-cybersecurity challenge means that it now touches every single element of the sector. Already, the sector does not have enough cybersecurity staff, and this shortage will only become more acute as initiatives and efforts increase. Global, sector-wide, coordinated efforts must be made to increase the cybersecurity skills of those already in the sector, as well as to create pathways and incentives for those wanting to embark on an aviation-cybersecurity career.

1.7 Passenger privacy and cybersecurity

How the aviation sector protects passenger privacy and cybersecurity must be a proactive and transparent dialogue. Starting discussions now on the topic of passenger privacy and security will also make it easier to develop appropriate supporting frameworks, reduce noncompliance risks, and scale technology such as biometrics.

Airplanes at Seattle-Tacoma Airport.

2: Introduction and Overview

2.1 The aim of the survey and report

Like many other sectors, aviation is digitized, connected, and potentially vulnerable to malicious cyber adversaries and activities. Because it is a global, interconnected, and interdependent sector, any disruption can quickly ripple out to have international impacts, cause significant financial and reputational damage, and potentially compromise safety. The digital attack surface of the aviation sector has never been larger than it is today, as more and more digitized and connected services are developed for sound reasons such as efficiency and passenger service. Understanding how to manage and protect this burgeoning attack surface, while building in resiliency, is arguably the most pressing security challenge facing the aviation sector.

This report uses the results of a survey, workshops, and interviews with those involved in aviation cybersecurity, and explores the risks, challenges, opportunities, and suggested actions for a resilient and cyber-secure global aviation sector.

To do this, voices were explored from across the sector: aircraft and airport operations, manufacturers, air-traffic control, maintenance, repair and overhaul, security, the supply chain, regulators, government, and those that support from outside the sector, such as the cybersecurity research community. All of these stakeholders have valuable perspectives, but nobody has ever been able to engage so deeply on the topic, capture their voices, understand their perspectives, and learn from them, until now.

The 2017 report, Finding Lift, Minimizing Drag, highlighted that the diversity of perspectives on the nature and severity of the cybersecurity challenge facing the aviation sector was potentially holding back tangible progress. Some stakeholders proffered that there was very little cybersecurity risk in aviation, while others said that it was a critical, complex, and little understood challenge, and that only once a cyberattack took place would there be tangible progress. 2 With the increased focus on global aviation cybersecurity, ranging from the new ICAO Cybersecurity Strategy to new security standards both in Europe and the United States, alongside increasing adversary efforts to target the aviation sector, the coming years will be challenging ones.

The topics discussed within this report and its findings are valuable not just for the aviation sector, but all complex, digitized, connected industries. It is not a lack of available technology that affects how people address the cyber challenge, but rather the level and maturity with which they perceive these challenges. The complexity and rapid pace of digital evolution are now the norm, and can no longer be used as a reason for the difficulty of defending that which is critical. Collectively moving forward, gaining focus, and developing clear intent to manage aviation-cybersecurity risks will require partnerships across diverse perspectives and stakeholders; this will allow the sector to quickly and collaboratively improve.

3: Scope

Aviation cybersecurity is a topic that straddles many silos; therefore, defining its scope is essential. For the purposes of this report, aviation cybersecurity is defined as cybersecurity pertaining to aviation operations.

This may seem a simple melding of cybersecurity and aviation, but simplicity must be the key. Across the sector, the focus is very much on maintaining safe and secure aviation operations. This encompasses airliners, future urban air mobility (UAM) vehicles, commercial space travel (which must transit through “legacy” airspace), and everything that supports aviation operations, ranging from ground assets to space-based communications and positioning, navigation, and timing (PNT). To allow for this breadth and for future developments, cybersecurity in aviation must also align with this scope.

METHODOLOGY

The purpose of this report is to gain insight on aviation-cybersecurity perspectives across a wide demographic. With such a challenging topic, this was approached in a number of ways. First, focus areas were developed from the 2017 report, Finding Lift, Minimizing Drag, as well as from interviews with those involved in aviation cybersecurity and observations from across the sector. From these topic areas, questions were developed for a survey that was distributed across the sector. The 244 respondents who participated in the survey (in whole or in part) spanned the breadth of the aviation industry, with occupational backgrounds including: aircraft operations; airports; air-traffic management; aviation services; maintenance, repair, and overhaul; original equipment manufacturing; and cybersecurity research. Responses from the survey were then triangulated in a series of workshops and interviews that explored and amplified the gathered perspectives.

4: Vision

The 40th Session of the ICAO General Assembly adopted its first Cybersecurity Strategy relating to aviation in October 2019, stating the following vision.

“ICAO’s vision for global cybersecurity is that the civil aviation sector is resilient to cyber-attacks and remains safe and trusted globally, whilst continuing to innovate and grow.” 3 

This vision, the first for ICAO, highlights the key challenges facing the sector. The importance of resilience sits alongside the need to maintain safety and trust, while still embracing growth and innovation. This report strongly supports such a vision, as it brings global coherence to both the challenge and direction of travel.

5: The Aviation-Cybersecurity Landscape

It is fair to say that the aviation sector has now fully embraced digitized, connected technologies; this is most evident in the evolution of eEnabled aircraft. The nature of that evolution is laid out by the US Federal Aviation Administration (FAA).

“New aircraft designs use advanced technology for the main aircraft backbone connecting flight-critical avionics as well as passenger information and entertainment systems in a manner that makes the aircraft an airborne interconnected network.” 4

It describes the internal aircraft network as follows.

“The architecture of this airborne network may allow read and/or write access to and/or from external systems and networks, such as wireless airline operations and maintenance systems, satellite communications, email, the internet, etc. Onboard wired and wireless devices may also have access to portions of the aircraft’s digital data buses (DDB) that provide flight critical functions.”

It also goes on to highlight some of the myriad risks.

“Connected aircraft have the capability to reprogram flight critical avionics components wirelessly and via various data transfer mechanisms. This capability alone, or coupled with passenger connectivity on the aircraft network, may result in cybersecurity vulnerabilities from intentional or unintentional corruption of data and/or systems critical to the safety and continued airworthiness of the airplane.”

As much as the FAA has laid out formal wording, an eEnabled aircraft can be more simply summarized as a flying data center that continually travels around the globe, with connected safety-critical systems, multiple connections over wired and wireless bearers, and multiple service suppliers both while on the ground and while airborne. It’s easy to see why the cybersecurity of such a platform is as critical as it is challenging.
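
To make the idea of such a segmented, connected platform concrete, the following is a minimal illustrative sketch of default-deny flow control between onboard network domains. The domain names and allow-list are hypothetical and are not drawn from the FAA text above or from any manufacturer's architecture; real aircraft enforce separation in certified avionics systems, not application code.

```python
# Illustrative only: hypothetical domain names and flow rules, not an FAA or
# OEM architecture. Real aircraft enforce separation in certified systems.

ALLOWED_FLOWS = {
    # (source domain, destination domain): True if the flow may be permitted
    ("aircraft_control", "airline_information"): True,    # e.g., health/maintenance data out
    ("airline_information", "aircraft_control"): False,   # no write path back to flight-critical systems
    ("passenger_entertainment", "airline_information"): False,
    ("passenger_entertainment", "aircraft_control"): False,
}

def flow_permitted(source: str, destination: str) -> bool:
    """Default-deny check: any flow not explicitly allowed is refused."""
    return ALLOWED_FLOWS.get((source, destination), False)

if __name__ == "__main__":
    for (src, dst) in ALLOWED_FLOWS:
        print(f"{src} -> {dst}: {'permit' if flow_permitted(src, dst) else 'deny'}")
```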

Increased digitization of air-traffic management (ATM) and information systems is also continuing at pace, as the sector seeks to increase airspace capacity and throughput. At the cutting edge of this is the ICAO Trust Framework project, which aims to securely and digitally connect aviation assets and units around the globe to facilitate information sharing that will be used for multiple purposes, including real-time traffic management. 5 Alongside increasing traffic density and variety from platforms such as UAM and unmanned aerial vehicles (UAV), digitization is enabling greater situational awareness and reduced separations based on trajectory, not just height or location.

Airports are also increasingly connected and digitized, with many of these services having remote or wireless connections. These range from access-control systems to airside systems such as maintenance equipment and tugs, as well as high-speed wireless links between the aircraft and the docking gate. 6 All of these digitized services exist against a backdrop of complex airport management and accountability, making it difficult to holistically define and defend such an attack surface.

Many of these services—spanning aircraft, ATM, and airport—increasingly rely on space-based assets for their operations, ranging from data transfer and communications to PNT. As legacy and analog capabilities are phased out in favor of space-based capabilities, their cybersecurity and resiliency must increasingly be scrutinized. One contributor described the cybersecurity posture of some space assets as stuck in the 1980s.

The use of 5G networks is also expected to grow rapidly across the aviation sector. The 5G market in aviation is estimated at $500 million in 2021, with projected growth to $3.9 billion by 2026. 7 5G will likely become a ubiquitous means of communications across every aspect of the aviation sector, with advantages based on size (connectivity at “chip level”), low-power requirements, and flexibility. 8 But, 5G has several cybersecurity challenges, with the European NIS Cooperation Group asserting that “5G will increase the overall attack surface and the number of potential entry points for attackers,” alongside the challenge of third-party-supplier risk management. 9
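
As a quick sanity check on what those market estimates imply, the compound annual growth rate can be computed directly from the two cited figures; the short sketch below does only that arithmetic.

```python
# Implied compound annual growth rate (CAGR) from the cited market estimates.
market_2021 = 0.5   # USD billions (cited estimate for 2021)
market_2026 = 3.9   # USD billions (cited projection for 2026)
years = 2026 - 2021

cagr = (market_2026 / market_2021) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 51% per year
```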

Overall, the challenge of understanding risk across interdependent and complex digitized aviation systems, with an extensive supply chain, is only increasing. Other sectors have seen the scale of disruption and the costs that a single vulnerability and “wormable” exploit can cause. Given the criticality of the sector, combined with disruptions that could scale rapidly, there remains much to do to understand the aviation-cybersecurity landscape.

5.1 AVIATION-CYBERSECURITY PROGRESS

Against this background of challenges, there has been increasing dialogue and action on aviation cybersecurity across the entire sector. In 2017, Finding Lift, Minimizing Drag proposed that, at a global level, it would “take leadership from the top down to improve governance and accountability in the global aviation ecosystem.” 10 The publication of the first Aviation Cybersecurity Strategy by ICAO in October 2019 was a critical first stage in building global coherency, and has gone a significant way to signpost direction.

Additionally, the publication of the European Strategic Coordination Platform Strategy for Cybersecurity in Aviation is a significant step forward at a regional level, and sits alongside national efforts such as the UK Aviation Cybersecurity Strategy. 11

From an aviation-cybersecurity-standards perspective, there has been significant activity by both the European Aviation Safety Agency (EASA) and the US FAA. By the close of 2019, aircraft, aviation systems, engines, and other products will only be able to achieve airworthiness certification by complying with the recently updated DO-326 and ED-202. 12 These updated standards are considerably more detailed and comprehensive in their approach to the management of cybersecurity risk.

Additionally, a new initiative of the US Department of Homeland Security (DHS), in partnership with the US Air Force (USAF), will increase scrutiny of aircraft cybersecurity. 13 Following the publication of the US National Strategy for Aviation Security and the creation of the Aviation Cybersecurity Initiative (ACI), chaired jointly by DHS’s Cybersecurity and Infrastructure Security Agency, US Department of Defense (DoD), and the US Department of Transportation (DoT), the new initiative includes conducting vulnerability assessments of aircraft as a means to better understand and mitigate risk. 14

Gaining insight into risk is a fundamental requirement for the aviation sector. For example, if a potential flight-safety issue were raised, the safety management system (SMS) would be systematic and proactive in terms of managing that risk; arguably, the management of cybersecurity risk in aviation should be no different. 15 Therefore, it is heartening to see organizations such as Boeing now providing guidance on how security researchers can submit potential cybersecurity vulnerabilities. 16 Other regional bodies—such as the European Centre for Cybersecurity in Aviation (ECCSA, part of EASA), as well as the ICAO Cybersecurity Strategy—highlight and promote researcher engagement and, in the case of ECCSA, are willing to receive potential cybersecurity vulnerabilities that relate to any vendor in the aviation sector. 17 Additionally, in August 2019, the first-ever Aviation Village was held at the DEF CON hacker conference, which focused on building bridges and trusted partnerships between the aviation sector and good-faith researchers. 18

All of these developments suggest that the building of such relationships may have turned the corner, and there is hope of increased cooperation between the research community and the aviation sector.

5.2 CHALLENGES

Cyberattacks against aviation organizations appear to be increasing. 19 Although there is much industry focus on traditional information-technology (IT) systems for threats such as ransomware and theft of personally identifiable information (PII) or intellectual property, attacks on airport systems—like those that targeted flight-information displays at Odessa International Airport—are examples of adversarial evolution. 20 Additionally, the increased sophistication and scale of spoofing of Global Positioning System (GPS) signals, seen recently in the maritime domain, indicate how adversary techniques are rapidly evolving.

The cybersecurity and resiliency of Automatic Dependent Surveillance–Broadcast (ADS–B) have been discussed for many years. As a surveillance technology that uses GPS and position broadcasts to assist with situational awareness and separation, it is quickly becoming a cornerstone of the ATM system. But, challenges remain. Outages caused by either signal interruptions or spoofing could rapidly cause operational impacts. For example, in 2019, a short period of system errors across some ADS–B units caused about four hundred flights to be cancelled. 21
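
One simple way to reason about ADS–B signal interruptions and spoofing is to apply plausibility checks to a reported track, flagging consecutive position reports that would imply physically impossible ground speeds. The sketch below is a deliberately simplified illustration of that idea, not an operational ATM validation algorithm; the speed ceiling and sample track are hypothetical, and real systems cross-check ADS–B against independent surveillance sources such as radar and multilateration.

```python
# Simplified illustration of an ADS-B track plausibility check; not an
# operational ATM algorithm. Position reports are (timestamp_s, lat, lon).
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_SPEED_MS = 350.0  # assumed ceiling for airliner ground speed, ~680 knots

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000.0
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * r * asin(sqrt(a))

def implausible_jumps(track):
    """Return consecutive report pairs implying speeds above the ceiling."""
    suspicious = []
    for (t1, lat1, lon1), (t2, lat2, lon2) in zip(track, track[1:]):
        dt = max(t2 - t1, 1e-6)
        speed = haversine_m(lat1, lon1, lat2, lon2) / dt
        if speed > MAX_PLAUSIBLE_SPEED_MS:
            suspicious.append(((t1, t2), speed))
    return suspicious

track = [(0, 51.47, -0.45), (10, 51.48, -0.43), (20, 52.50, -0.43)]  # last hop: ~113 km in 10 s
print(implausible_jumps(track))
```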

Even these examples arguably belie the fragility of the situation. Combining the current levels of connectivity with increasingly technically capable adversaries, one can expect attempted widescale, disruptive future attacks against aviation operations. The first generation of these attacks will likely impact confidentiality of data or availability of systems. Such an attack against aviation systems with multiple backups and a workforce that trains for system failure will potentially still disrupt capacity or rate of operations, but likely not cause critical impacts. More concerning second-generation attacks against data integrity would be significantly more challenging to both identify and address. Adversary behavior in other sectors has indicated that adversaries dedicate themselves to learning about the systems they plan to attack; the aviation sector is no different. Even in a sector where humans are seen as the last link in the flight-safety chain, a compromise of the integrity of the information on which they rely to make safe decisions would cause significant challenges.

Arguably the most critical risk to the aviation sector—terrorism—was previously not considered through a cybersecurity lens, because kinetic effects were simpler to carry out, so long as the threat actor gained physical access. But, as physical security hardens and wireless connectivity increases throughout a multitude of aviation systems, there is a growing risk that aviation-cybersecurity vulnerabilities may become a credible vector for terrorist actors—either as an enabler of physical attacks or as an end goal in themselves. With this increased risk, international focus on the cybersecurity aspects of UNSCR 2341 and the protection of critical infrastructure against terrorist attacks has been increasing. 22 Dialogues between Interpol, the United Nations (UN), ICAO, and national bodies to counter terrorist activity across both the cyber and physical domains will likely become even more tightly woven. 23

An airplane taking off at sunset.

6: Report Findings and Analysis

With the breadth, variety, and overlap of the cybersecurity challenges facing the aviation sector, it can be difficult to structure these challenges in a clear manner. Therefore, the report findings have been structured to flow from managing aviation-cybersecurity risk, to gaining insight into that risk, to managing potential aviation-cybersecurity incidents, and, finally, to the regulations, standards, and best practices that can manage and mitigate these risks.

6.1 MANAGING AVIATION-CYBERSECURITY RISK

Managing risk is a key challenge in cybersecurity. In safety, the aviation industry has, for many years, been focused on driving risk as low as reasonably practicable (ALARP). This has been achieved through the development of a strong safety culture, objective oversight, and rapid and robust information sharing. As a result, accident and incident rates have seen historic lows.

The digitized and connected aviation ecosystem includes such a high number of diverse actors, services, devices, and data that it is very difficult to map out a comprehensive view. This increasing attack surface and complexity have made managing aviation-cybersecurity risk a strategic challenge.

To manage risk, it is first necessary to identify and understand that risk. What became clear from the survey results, workshops, and interviews is that, for aviation cybersecurity, identifying and understanding risk remain critical challenges. A clear majority of respondents disagreed or strongly disagreed with the statement that it isn’t possible to hack aviation systems. The new reality is that aviation systems are likely to face vulnerabilities and challenges similar to those of other sectors. 

One respondent, discussing their perspective on the main blockers to improving aviation cybersecurity, stated, “everyone believes that it is not vulnerable.” Overall, respondents clearly disagreed with the statement that it is “easy to objectively assess aviation security risk.” This means that considerable effort is now required to improve the understanding of risk.

It is heartening that most respondents involved in aviation operations reported that their organizations had a cyber strategy in place to appropriately manage aviation-cybersecurity risk, but work remains to get this figure to 100 percent. Further discussion also highlighted the challenge of reconciling an enterprise cybersecurity strategy with one that also incorporates aviation-cybersecurity risk. With operations and enterprise connectivity now increasingly difficult to separate, it is important to ensure that cyber strategies and accountability appropriately consider both aviation operations and the enterprise.

The challenge of having enough appropriately trained cybersecurity staff to manage cybersecurity risk is keenly felt across many sectors. From the results of the survey, the aviation sector faces the same challenge, but even more so, due to the need to develop a workforce with expertise in both aviation and cybersecurity. As momentum builds in generating aviation-cybersecurity capabilities, the challenge of finding and developing an aviation-cybersecurity-aware workforce will become more acute, and the sector will need to compete with others for talent.

With cybersecurity risk being subjective, it is crucial to consider what stakeholders perceive as an adequate baseline of cybersecurity risk management and transparency within products and services. Ultimately, transparency between supplier and customer promotes informed decision-making between both parties about the cybersecurity requirements and the cybersecurity status of the product or service. The question that explored this challenge asked respondents if they felt that cybersecurity requirements were transparent and agreed upon in aviation contracts; the response was a resounding no.

Following this, many respondents also disagreed with the statement that it “was easy to incorporate best practices into the procurement of aviation-related hardware, software, and services.” There may be a number of reasons for this—and perhaps more clarity is needed on what exactly constitutes “best practices”—but this fundamentally points to the question of how much cybersecurity should be “built-in” versus “built-on” in the aviation sector. If aviation service providers are struggling to understand system-cybersecurity requirements and best practices, a key requirement must be defining an adequate, minimum baseline of cybersecurity for the design of the product and service.

The previous two questions demonstrate that many respondents saw challenges in the cybersecurity dialogue between supplier and customer across the aviation sector, and this extended throughout the product lifecycle. With the lengthy lifecycle of many products in the aviation sector, and the potential for multiple ownership changes over the years, focus on “through-life” cybersecurity management will be critical if the sector is to adequately manage cybersecurity risk through the second- and third-hand markets until end of life and disposal.

Across the aviation sector, the amount of cybersecurity-relevant data being produced is expanding at an exponential rate. The ability to access and analyze such data, to gain significant insight and identify potential issues, is essential to managing risk. Between suppliers and customers, it is critical to understand how such data are provided and at what cost. From the survey results and discussions, respondents reported that there are blockers and potentially additional costs to accessing such data. 

Overall, the results, comments, and discussions show both suppliers and customers across the aviation sector have challenges in understanding and managing aviation-cybersecurity risk. None of these challenges is insurmountable, but they require increased dialogue within organizations managing aviation-cybersecurity risk, and between customers and suppliers of products, software, and services across the aviation sector. The more that cybersecurity is considered, discussed, and explored, the easier it will be to visualize—and, therefore, manage—the risk.

Passenger privacy and cybersecurity 

With the rapid expansion of connected digital services available to passengers across their journey—ranging from biometric security to airport and aircraft services—passenger privacy and security are increasingly sensitive and critical topics. On the whole, respondents disagreed that current privacy and security protections were adequate. As evidenced in other sectors, a proactive approach and transparent dialogue with passengers on these topics create informed positions and increased trust.

6.2 GAINING INSIGHT INTO AVIATION-CYBERSECURITY RISK

For aviation organizations to manage risk, they need to be able to gain insight into and understanding of potential vulnerabilities, as well as to understand the threat. Complex platforms, systems, hardware, software, and multiple service providers, alongside traditional enterprise structures and complex governance spanning security and flight safety, can make developing such insight challenging.

With a comprehensive understanding of risk, management of that risk becomes considerably easier. From the responses received, it is clear that considerably more can and must be done to improve understanding of risk across the aviation sector. 

The first survey question explored whether there was sufficient aviation-cybersecurity dialogue across stakeholders, and it became clear that respondents did not think this was the case. There is robust dialogue on the topic of safety globally, and across a multitude of stakeholders—for example, ICAO Regional Aviation Safety Groups and InfoShare events. This dialogue assists in the identification of potential risks and their mitigations. On aviation cybersecurity, it is essential that the sector generates, mirrors, and enshrines the same level of dialogue. This will not be easy; within flight safety, the focus is very much on finding the root cause and sharing it without blame. Across much of the cybersecurity landscape, there arguably remains a stigma about discussing cybersecurity vulnerabilities and challenges that go beyond managing sensitive vulnerabilities. The aviation sector must actively work to improve and mature the current culture.

The use of independent, objective assessments to determine the safety of aviation products and services is well established. It is clear from the survey findings that this approach is equally desired when it comes to assessing and gaining insight into cybersecurity risk in the aviation sector. The use of Coordinated Vulnerability Programs (CVP) has proven highly successful in discovering previously unknown vulnerabilities. It is encouraging to see around 50 percent of respondents say that their organization has a CVP in place, but much work remains. One respondent reported that, from their perspective, “a lack of a coordinated vulnerability disclosure culture among the private and public organizations involved in civil aviation” was the main blocker to improving aviation cybersecurity.

When raising the topic of bug-bounty programs (in which an organization motivates individuals to report potential cyber issues and vulnerabilities with the offer of financial rewards), it is fair to say that their use is not widespread across the aviation industry. Bug bounty programs, as an element of a mature cybersecurity strategy, have considerable benefits, as demonstrated by the multiple vulnerabilities found by programs such as Hack the Air Force and Hack the Pentagon. 24

Within the ICAO cybersecurity strategy, “states are encouraged to set up appropriate mechanisms for cooperation with good faith security research—research activity carried out in an environment designed to avoid affecting the safety, security and continuity of civil aviation.” 25 This change should help drive positive and productive engagements between the aviation industry and security researchers globally. From the results of the survey and workshops, respondents thought that such cooperation is a positive development for the aviation sector. Conversely, many respondents did not agree that sufficient advice and guidance were available for good-faith researchers who want to research aviation cybersecurity in a safe manner, or that they had adequate and well-understood legal protections in place. The perceived difficulty that good-faith cybersecurity researchers face when contacting companies within the aviation sector also contrasts with the results that point to organizations firmly welcoming such approaches. If the aviation sector can create and promote clearer and easier processes for researchers to work with them, it is obvious that there is great benefit to be had for both stakeholder groups. These new processes also have the potential to create increasingly positive interactions between good-faith researchers and the aviation industry.
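
One practical, low-cost step that can make it easier for good-faith researchers to reach an aviation organization is publishing a machine-readable point of contact such as a security.txt file (RFC 9116), served at /.well-known/security.txt. The sketch below generates a minimal example; the contact address and policy URL are hypothetical placeholders, and nothing here implies any particular aviation organization does this today.

```python
# Sketch: generate a minimal RFC 9116 security.txt. All values below are
# hypothetical placeholders, not real contacts or policies.
from datetime import datetime, timedelta, timezone

def build_security_txt(contact: str, policy_url: str, valid_days: int = 365) -> str:
    # Contact and Expires are the required fields; Policy and languages are optional.
    expires = (datetime.now(timezone.utc) + timedelta(days=valid_days)).strftime("%Y-%m-%dT%H:%M:%SZ")
    lines = [
        f"Contact: {contact}",
        f"Expires: {expires}",
        f"Policy: {policy_url}",
        "Preferred-Languages: en",
    ]
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    # Hypothetical example values; serve the result at /.well-known/security.txt
    print(build_security_txt("mailto:security@example-airline.test",
                             "https://example-airline.test/vulnerability-disclosure-policy"))
```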

6.3 AVIATION-CYBERSECURITY INCIDENT MANAGEMENT

Irrespective of the effort put into preventing accidents or incidents, the aviation industry fully understands that accidents and incidents still occur. Years of hard-won experience and development of best practices have resulted in globally agreed-upon rules and regulations that ensure robust and objective investigation, with the goal of never suffering the same accident or incident twice.

To deal with flight-safety incidents, there is a clear and well-understood process. With digitized and connected systems now underpinning operational safety, understanding how to prepare for, identify, manage, and learn from aviation-cybersecurity incidents will be critical.

To the initial question of whether they thought their organization was well prepared for aviation-security incidents, respondents felt their organizations were, on the whole, prepared. However, the subsequent questions highlighted some challenging nuances between the management of safety and security incidents and that of aviation-cybersecurity incidents.

With increasing awareness of the topic, it would be difficult to find an organization that didn’t have a degree of cybersecurity-awareness training. The aviation sector is no different, with the majority of respondents saying that all staff received such training. But, when taking that question forward to explore whether their organization had an appropriate cybersecurity culture in place, considerably fewer respondents thought that was the case. Historically, flight-safety and security culture has achieved considerable results for the aviation sector. With the importance of a cybersecurity culture clearly stated in the new ICAO Cybersecurity Strategy, this area will need considerable attention. As part of this effort, developing a clear understanding of aviation-cybersecurity culture, and its interplay with flight-safety culture and security culture, will be critical.

With aviation cybersecurity straddling both safety and security, a consistent solution to its governance and accountability has yet to be developed. This challenge is brought into stark focus during the management of an aviation-cybersecurity incident. Currently, many aviation organizations split cybersecurity-incident responsibilities between networks (traditional enterprise) run by the chief information security officer (CISO), and products (aircraft, operations, etc.) run by the safety team. Contributors strongly reinforced that, even with current levels of connectivity, such a division of responsibilities is arguably unrealistic. One contributor explained that the “safety committee may own the plane, but they have no cybersecurity expertise.” It was suggested that a more robust approach that better aligns responsibilities to the reality of the networks would require considerable cultural change. With the majority of survey respondents believing that an incident response would be led by a joint team, there is much room to improve and adjust existing organizational processes. Another contributor described the “ongoing separation of safety and cybersecurity within the industry” as the main blocker to improving aviation cybersecurity and resilience. Ultimately, there is much work to be done to develop the governance and processes around such organizational structures, but the benefits could be considerable.

The human operator has long been seen as the critical link in the flight-safety chain, able to recognize and prevent flight-safety incidents. With connected, digitized technology now underpinning safety-critical systems, there is a risk of adversaries undermining that critical safety break. The two questions exploring whether operational staff were trained to both recognize and manage a potential aviation-cybersecurity incident did not give clear answers, potentially because such a situation has yet to occur. Research by EASA carried out in 2016 used a flight simulator to assess the potential safety impacts of cyberattacks on aircrew. 26 The results demonstrated that it was challenging for the crews to recognize such attacks, but, if standard flight-operation practices were followed, safety issues could be mitigated. Efforts must be made to expand this research to provide practical advice that can be woven into role-based training for aviation operators.

Rigorous training is a cornerstone of developing aviation operators who can deal with whatever is thrown at them, in order to maintain safe operations. From responses to the question on the conduct of exercises relating to aviation-cybersecurity incidents, it is clear that such exercises are not yet common. There is hope that this situation will improve soon, as preparing for aviation-cybersecurity incidents does not just train operators, but also helps build organizational understanding and maturity in dealing with such incidents.

The aviation sector, with its objective of never suffering the same accident twice, has a rigorous and objective incident-investigation methodology that explores both the root technical causes and the organizational, systemic causes. With increased digitization of all systems within the aviation sector, the complexity of data, ownership, and governance now presents a significant challenge to investigating potential cybersecurity aspects of accidents and incidents. Results clearly indicate that respondents do not believe that adequate cybersecurity-relevant data are captured, protected, and available for analysis. Additionally, as other sectors and advanced threat-actor techniques have shown, simply capturing such data is not enough: to frustrate and disrupt cybersecurity investigations, threat actors will compromise the integrity and availability of the data and security logs relevant to investigation and remediation. Data must therefore be rigorously protected from interference and remain accessible for analysis.
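
One widely used technique for making captured security and maintenance logs tamper evident is to chain each record to its predecessor with a keyed hash, so that any later edit, reordering, or deletion fails verification. The sketch below illustrates the idea in the abstract; it is not drawn from any aviation standard, and a production system would also require secure key management and off-platform replication.

```python
# Illustrative HMAC hash-chain for tamper-evident logging; not an aviation
# standard. Key management and replication are out of scope here.
import hmac, hashlib

def append_record(chain, record: bytes, key: bytes):
    """Append a record whose tag covers both the record and the previous tag."""
    prev_tag = chain[-1][1] if chain else b"\x00" * 32
    tag = hmac.new(key, prev_tag + record, hashlib.sha256).digest()
    chain.append((record, tag))

def verify_chain(chain, key: bytes) -> bool:
    """Recompute every tag; any edited, reordered, or dropped record fails."""
    prev_tag = b"\x00" * 32
    for record, tag in chain:
        expected = hmac.new(key, prev_tag + record, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        prev_tag = tag
    return True

key = b"demo-key-only"          # hypothetical; use a protected key in practice
log = []
append_record(log, b"2019-06-10T12:00Z maintenance laptop connected", key)
append_record(log, b"2019-06-10T12:03Z avionics data-load started", key)
print(verify_chain(log, key))   # True
log[0] = (b"2019-06-10T12:00Z nothing to see here", log[0][1])
print(verify_chain(log, key))   # False after tampering
```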

Aviation cybersecurity and communications

As part of normal business—and especially during any cybersecurity incident—effective and clear communication is essential to help manage and mitigate loss. For many stakeholders, aviation cybersecurity has been a challenging topic to discuss with external stakeholders, such as the media.

It was suggested that, on the topic of aviation cybersecurity, the media “have struggled to find enough best practice examples and so have generally not been able to write about the issue with any purpose.” If the dialogue on aviation cybersecurity is to be balanced and informed, the aviation industry must be open to discussing ongoing efforts and realistic challenges.

Finally, how organizations manage communications around any incident or accident is crucial. Currently, respondents do not think that the aviation sector effectively communicates about aviation cybersecurity with external stakeholders. With the added pressure of managing an aviation-cybersecurity incident in mind, it is clear that there is much to be done to increase effective communications and build understanding across stakeholders and the media.

6.4 REGULATIONS, STANDARDS, AND BEST PRACTICES

With the cybersecurity challenges the aviation industry faces, regulations, standards, and the development of best practices are, and will continue to be, a cornerstone for systemically understanding and reducing risk at a global scale. This section explores contributor perspectives across this important topic.

Regulations and standards have been, and will remain, critical components of a safe, efficient, and harmonized global aviation industry. To the question of whether respondents perceived current aviation-cybersecurity regulations as effective, clear, and well understood or well communicated, the responses indicate they are not.  It could be easy to conclude that more regulation is the answer; however, there is a need to find a balance. Excessive regulations and standards can slow growth and innovation, while too few can result in technical divergence and little understanding of the creation of systemic risk. As seen with cybersecurity in other sectors, like finance or power generation, heavy regulation without balance can lead to a compliance culture, especially at board level—chasing yearly audit goals for shareholder reports. Efforts must be made to find a balanced regulatory approach for the aviation sector that promotes good behavior and an appropriate culture to manage aviation-cybersecurity risk.

In minimizing risk, aviation already has an effective model in flight safety, where there is never enough effort, and risk is always being driven down. Therefore, aviation-cybersecurity regulations and standards must not be seen as a minimum to be achieved, but as measures that sit alongside a culture of cybersecurity, driven by senior leadership and spread throughout the organization.

There is a clear contributor perception that aviation-cybersecurity regulation should be led globally, placing ICAO in a strong leadership position. To maximize this position and accelerate progress in an increasingly crowded international field, ICAO will need to take on this role with the strong support of its members and the aviation sector. Such an approach would create an environment for global coherency across aviation-cybersecurity regulations, as well as the internationally agreed-upon best practices desired by all stakeholders.

Respondents were asked where they look for advice on aviation cybersecurity, and it is clear that industry bodies, government departments, regulators, and vendors have roles in that advisory capacity. Respondents felt only somewhat supported by these stakeholder groups. A follow-on question about how satisfied respondents were with their ability to access advice on aviation-cybersecurity best practices and guidance showed a large degree of dissatisfaction; overall, there is much to be done. At this stage, with increased focus on aviation cybersecurity and the rush to further develop regional and national regulations, standards, and best practices, there is also a critical risk of divergence. With the new ICAO Cybersecurity Strategy (and the associated action plan, which at the time of this report’s publication is under development), and the updated DO-326 and ED-202, there is potential for a structured global effort that can also apply to the appropriate organizations. Too many standard-setting bodies or proffered best practices risk incoherency and complexity.

All of this makes future investment in cybersecurity critical. Though many of the respondents agreed that their organizations plan to invest more in aviation cybersecurity, the question remains: invest in what? It has become clear that confusion remains about aviation-cybersecurity standards, regulations, and best practices. For organizations willing to spend more money in this area, it is challenging to make informed decisions. Arguably, this area is underserved from a commercial perspective, with a sectoral desire for improvement and additional budget available. The critical challenge will be ensuring not just the creation of an aviation-cybersecurity support industry, but one that supports the aviation industry in synergy with its current strengths. Much like a technical solution in isolation is never the answer to flight safety, it is also not the answer to aviation cybersecurity. Improving aviation cybersecurity must be approached holistically—across people, processes, and technology—and in synergy with already-robust safety culture and current aviation best practices.

The International Civil Aviation Organization (ICAO) headquartered in Montreal, Canada.

7: Conclusions

It is clear from the survey results, workshops, and interviews that, although the aviation sector continues to have multiple and critical cybersecurity challenges, progress is being made in understanding and managing them. The diversity of voices and perspectives remains, but that should not be interpreted as a negative thing. Strength lies in diversity, and the challenge is finding a way to work together through that diversity. From an adversary’s perspective, the burgeoning cyberattack surface presented by increased digitization and connectivity, stretching from land to space, makes it both an attractive target for and an enabler of adversary action. 

7.1 MANAGING RISK

A key element of cybersecurity is accepting the reality that vulnerabilities exist, proactively identifying them, and then fixing them before they can be exploited by adversaries. Achieving this requires acceptance, management, and organizational processes, just as much as technical capability. It is clear from the report contributions that cybersecurity vulnerabilities and risks exist in the aviation sector. With flight safety, the aviation sector demonstrably has the right mindset in managing risk. Now it must apply this mindset to managing the reality of its cybersecurity challenges.

Although progress is being made, a key theme brought up by many contributors to this report is a perceived lack of knowledge, understanding, and leadership in tackling what is now a critical aviation-cybersecurity challenge. To move forward and overcome inertia will require significant leadership from international organizations, governments, and industry, which must raise awareness of the critical nature of the challenge, tangible mitigations, and efforts under way. Even the simple step of highlighting where organizations can start on their aviation-cybersecurity journey will help—especially as this is a multinational issue with diverse starting points.

Many sectors have silos across which governance, accountability, and management of risk prove difficult, and the aviation sector is no different. For years, flight safety and security have evolved into effective, but understandably separate, elements within the aviation sector. Across aviation cybersecurity and enterprise IT, the collaboration between all these elements will be challenging, but is at the heart of future success.

The role and challenge of cyber insurance 

There is an increasing market for cyber insurance as companies incorporate it as an element of risk management. The survey explored whether respondents’ organizations included cyber insurance as an element of managing their aviation-cybersecurity risk, and how easily respondents believed insurance underwriters could assess that risk. It is clear that, although there is significant usage of cyber insurance in aviation, assessing the risk exposure is challenging. Managing the tension between increasing coverage and a potentially limited understanding of risk will require increased collaboration between underwriters, insurers, and the insured.

The importance of proactive cybersecurity management and transparency throughout the lifecycle of aviation products, services, and software is critical. It cannot be considered with only the first buyer in mind, but must also consider the second, third, and fourth users, as well as end of life and disposal. This will need both suppliers (original equipment manufacturers (OEM) or other) and customers to incorporate such thinking into contractual agreements, so that risks and their management are accounted for and transparent.

Managing aviation-cybersecurity risk across such a complex system is challenging. Some of this challenge may be due to organizational issues, but much of it also lies with system design and architecture, and with the difficulty of gaining objective insight into risks. For flight-safety-critical systems, objective and independent testing for assurance is the norm. Embracing this cultural approach across digitized and connected systems should be seen as a standard methodology in managing aviation-cybersecurity risk, whether it is conducted by the OEM, the end user, or the regulator.

At every step of aviation-system design, building, manufacturing, and operation, cyber-secure and resilient-by-design must be the default for every process, along with strenuous efforts to minimize attack surface. Such system design must also enable operators to quickly restore systems to airworthiness after a compromise (or test). Additionally, with a considerable number of legacy systems already in use, efforts must be made to fully understand their potential attack surface and develop mitigations for existing systems.

In the tumult and excitement of rapid technological innovation, considerations around the privacy and cybersecurity of passengers cannot be forgotten. Other sectors have learned the hard way about losing consumer trust through poor transparency or security, alongside compliance frameworks that reactively change due to consumer pressure. To maintain consumer trust, there must be an informed and transparent dialogue about consumer data and privacy, encompassing everything from the increased use of biometrics for security, airborne Wi-Fi, and the use of cameras both on the ground and in the air.

Finally, it will not be possible to manage aviation-cybersecurity risk without personnel who can meet the challenge. Achieving this across the aviation sector will require a great deal of skills diversity, going beyond technology into policy and strategy, and crossing multiple stakeholder groups.

7.2 GAINING INSIGHT INTO RISK

Contributors to the report strongly believe that aviation-cybersecurity risks exist, and that actively identifying risks and preparing for their potential realization are critical.

To achieve this, there must be increased dialogue and contributions from all aviation-sector stakeholders, including manufacturers, end users, governments, and regulators. As part of this dialogue, there must be a willingness to hear different perspectives on risk, test assumptions, and be unafraid of what might be found. Wherever possible, this learning must also be collective, with stakeholders sharing hard-earned aviation-cybersecurity knowledge and best practices as if they were flight-safety-critical information, rather than cybersecurity information.

Even outside the aviation sector, there are relevant potential challenges for how cybersecurity vulnerabilities are shared and actioned. In the ongoing class-action suit FCA US LLC v. Flynn, consumer (customer) plaintiffs claim that a manufacturer knew of, but did not fix, a cybersecurity vulnerability. The plaintiffs allege an overpayment theory; that is, had the plaintiffs known about the vulnerability, they would not have paid as much or bought the product at all. 27 Until this suit is settled, there is a risk that vulnerability disclosure and management will become increasingly limited, when they should be increasing in transparency and collaboration. To overcome these challenges, partnerships must develop between all stakeholders focused on minimizing risk, rather than legal jeopardy, as the priority.

Achieving this will require a cultural shift. Wherever there is a requirement for a flight-safety or security culture, there must also be an aviation-cybersecurity culture that stretches from supply chain to operator. Once achieved, this will drive improvement across the sector, from passive and reactive to positive and proactive.

7.3 INCIDENT MANAGEMENT

Adversaries will exploit organizational boundaries, utilizing confusion and miscommunication as means of obscuring or amplifying their attack. In flight safety, the aviation sector has a proven model of how good leadership, governance, accountability, information sharing, and safety culture can make a significant difference in reducing risk. The sector must strive for the same model regarding cybersecurity.

Recognizing and responding to cyberattacks within the aviation sector is a whole-of-sector responsibility. The nature of the potential attacks is such that the frontline of aviation cyber defense stretches internationally across service-provider personnel including pilots, air-traffic controllers, maintainers, security-operations centers (SOC), contractors, and more. 

Therefore, all the personnel in this chain must be trained to recognize and manage potential cyber incidents. This must not be seen as an opportunity for system design that pushes responsibility for cybersecurity to the end user or service provider. Inherent system design should focus on making systems secure and resilient, with the end user, irrespective of their role, as protected as possible in their need to make safe and timely decisions.

There must also be an industry-wide assessment of the cybersecurity investigatory aspects of post-accident and post-incident investigations. This must be led in partnership with post-crash-investigation (PCI) bodies, with findings subsequently incorporated into industry best practices. This may also highlight gaps in how relevant aviation-cybersecurity data are collected and protected, and these findings must be fed back into standards, manufacturers, and operators.

Following a cybersecurity incident or exercise, returning aircraft to a safe and flying state must be as efficient and safe as possible. Much of current practice is to replace hardware in the event of software issues. But, in the event of a widescale incident, such an approach may slow progress in restoring full operational capabilities, especially at scale. Finding a way to improve this must be a priority for the sector—not just on aircraft, but across all operations, including ATM. To improve resilience, the sector needs plans that prevent critical failure, but also minimize impacts, restore full operations as soon as possible, and rebuild trust.

7.4 REGULATIONS, STANDARDS, AND BEST PRACTICES

Regulations and standards exist for aviation cybersecurity, and considerable effort has been put into creating them. Nevertheless, somewhere along the line, there has been a disconnect resulting in significant contributors disagreeing with the effectiveness, clarity, and communication on these regulations and standards. This situation may improve with the updated DO-326 and ED-202 documents, but changing current perceptions will likely take considerable effort. It was made very clear during the workshops that the bodies writing such standards were keen for input from across the whole sector. This is not just a case of increasing awareness of standards, but also increasing collaboration on them.

For organizations in a technical industry looking to manage their cybersecurity risk, there will be a strong temptation to reach for technical solutions, and understandably so. The analogy to this is that flight-safety success does not come through technical solutions in isolation. It is more a matter of how people, processes, technology, and culture are woven together that brings success; aviation cybersecurity must be approached in the same manner.

As a global sector, aviation absolutely depends on global coherency for interoperability, collective understanding of risk, efficiencies, and considerably more. Varied standards, best practices, and complex demands on the supply chain will increase costs for all parties, as well as make it considerably harder to coherently understand risk. Having an international body such as ICAO brings the leadership and ability to maintain that coherency. Respondents see that leadership role as crucial; with initiatives on aviation cybersecurity under way across multiple regions, there are risks that this global coherency could be compromised. A concerted effort will be required to bring coherent global, regional, and national structure to aviation-cybersecurity regulation and best practice. A balance must be found between regulation and culture, as burdensome regulation could create a compliance culture that undermines the principles and culture of flight safety, which is continually striving to drive risk down.

Finally, a structured approach to aviation cybersecurity—either through regulations or standards—enables the development of aviation-cybersecurity roles and qualifications that will promote the building of a critically needed workforce. Until the sector can define such roles and qualifications, the building of a global aviation-cybersecurity workforce will be based more on luck than structured planning. The roles within this workforce cannot be purely technical; they must be created with depth through the critical non-technical roles, such as policy and strategy.

There is no single solution, role, or action to solve aviation-cybersecurity challenges. It will require a collaborative and proactive effort to develop the right regulations, standards, and best practices across this global industry. Even with clear leadership, this task will be a challenge—but it is possible.

8: Suggested Next Actions

This report recommends the following next steps for the aviation ecosystem.

8.1 Global standards for a global industry

With publication of the ICAO Cybersecurity Strategy, there is now a vision for how aviation cybersecurity can advance globally.

“ICAO’s vision for global cybersecurity is that the civil aviation sector is resilient to cyber-attacks and remains safe and trusted globally, whilst continuing to innovate and grow.” 28

To coherently gain insight, understand and manage aviation-cybersecurity risk, and bring swift, globally aligned, and effective change, all aviation stakeholders—including states, international bodies, regulators, manufacturers, and service providers—are strongly encouraged to act in unison and support the new ICAO Cybersecurity Strategy, as called for in ICAO Assembly Resolution A40-10, Addressing Cybersecurity in Civil Aviation.

8.2 Increasing transparency and trust

Trust in aviation cybersecurity will only come with increased transparency. Limited or ineffectual information sharing is leading to opacity of risk among stakeholders, and arguably obfuscates the scale of the aviation-cybersecurity challenge and the way forward. Actions to improve this fall into two key areas.

8.2.1 Contracts
All contracts between aviation stakeholders must include cybersecurity considerations, such as through-life risk management, vulnerability management, and data sharing. These must be clearly and transparently agreed upon, to ensure that all stakeholders are able to make informed decisions on cybersecurity risk.

8.2.2 System design
Aviation-system design must be approached from the perspective of not only securing systems, but also increasing cybersecurity risk transparency and objectivity for both manufacturer and customer. All stakeholders must be able to access and analyze their cybersecurity-relevant data. Additionally, efforts must be taken to reduce the rapidly expanding digital attack surface of the aviation sector, with a default of designing for simplicity, security, and resiliency—not complexity. 

8.3 Building bridges

The scale and complexity of the cybersecurity challenge facing the industry is such that all stakeholders must be encouraged to support and learn from each other. There are three key areas.

8.3.1 Diverse stakeholders
The scale, nature, and variety of the aviation sector is such that there are a number of diverse stakeholder groups that can productively collaborate to help understand and manage risk. Creating a rich and positive dialogue, including those from other sectors and cybersecurity researchers, will accelerate both the understanding of the challenge and potential solutions.

8.3.2 Regulations and standards
ICAO, states, and standards bodies must be supported in the creation of informed and balanced aviation-cybersecurity regulations through input from diverse stakeholders, as a collaborative and structured effort to promote global coherency.

8.3.3 Safety, security, enterprise cybersecurity, and aviation cybersecurity
Where aviation cybersecurity crosses the traditional elements of aviation security, safety, and enterprise IT, efforts must be made to break down silos and create a shared vision of risk.

8.4 Information sharing

Cybersecurity information sharing must be approached in the same way as information sharing on the topic of flight safety. Moving to a “learn once, share widely” model will promote rapid visibility, mitigation, and management of risk across the entire sector. Blockers to information sharing on aviation cybersecurity must be critically assessed, and standards must promote the sharing of cybersecurity-relevant information in a timely and responsive manner that gets defenders ahead of vulnerabilities and adversaries.

8.5 Communications

Aviation cybersecurity is a critical and complex topic that is still little discussed outside the sector, leading to risks of misperception and inaccuracy. Increasing external dialogue on the topic, and helping create informed positions, will go a considerable way to increase understanding and trust across multiple stakeholders.

8.6 People

The global scale of the aviation-cybersecurity challenge means that it now touches every single element of the sector. Already, the sector does not have enough cybersecurity staff, and this shortage will only become more acute as initiatives and efforts increase. Global, sector-wide, coordinated efforts must be made to increase the cybersecurity skills of those already in the sector, as well as to create pathways and incentives for those wanting to embark on an aviation-cybersecurity career.

8.7 Passenger privacy and cybersecurity

How the aviation sector protects passenger privacy and cybersecurity must be a proactive and transparent dialogue. Starting discussions now on the topic of passenger privacy and security will also make it easier to develop appropriate supporting frameworks, reduce noncompliance risks, and scale technology such as biometrics.

Flight taking off from Queen Alia International Airport in Zizya, Jordan.

9: Final Thoughts

The intent of this report was to explore multiple perspectives on the nature of the cybersecurity challenges facing the global aviation sector. It found that, although a multitude of perspectives remain, there is hope for—and some progress toward—building a shared understanding of aviation-cybersecurity risk.

With increasing digitization and connectivity, adversaries have significant attack surface and opportunity. The growing complexity of systems, process, and supply chain, alongside increasing wireless connectivity, adds to the potential weakening of the physical controls that have protected the aviation sector for so long. Combined with increasingly capable threat actors, ranging from terrorists to nation states, that means the aviation sector faces a significant task.

Where possible, the sector must seek quick wins, but also acknowledge that the challenge of securing the aviation sector from cyber adversaries is now a persistent problem. Therefore, the sector must be prepared to tackle large, systemic challenges, even if they are global in scale and will take considerable time to change. ICAO must be seen as the global lead on this topic—and it now has the strategy to deliver tangible change. It cannot drive change in isolation, and will require assistance and cooperation from states, industry bodies, and those contributing to the aviation sector if change is to be effective and long lasting.

Although progress is being made, significant challenges remain to both gaining insight into aviation-cybersecurity risk and globally managing it. Cultural change to better position the aviation industry to manage these cybersecurity challenges will take leadership and time. Measures must be taken to accelerate this process of improvement, increase transparency and trust, and develop objectivity and collaboration.

There is no single solution to aviation cybersecurity, and it will take positive collaboration across diverse stakeholders. Building partnerships across safety, security, cybersecurity, and enterprise IT will also be challenging, but will lead to greatly increased understanding of holistic risk, better reflecting the nature of the complex attack surface being defended.

Along with all this effort, it must be remembered that the aviation sector is a global one, in which national and regional maturity and capability vary. Improving aviation cybersecurity will be a journey, and bringing along all stakeholders is essential if global, systemic risk is to be reduced. ICAO promotes this from a capacity-building perspective, with a tagline of “No Country Left Behind.” As global aviation-cybersecurity efforts ramp up, adopting the tagline of “No Vulnerability Left Behind” is a fitting update that describes where and how focus must be applied if the sector is to remain safe, secure, and resilient.

About the Authors

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    “Aviation Cybersecurity Strategy,” International Civil Aviation Organization, October 2019, https://www.icao.int/cybersecurity/Pages/Cybersecurity-Strategy.aspx.
2    Pete Cooper, Aviation Cybersecurity—Finding Lift, Minimizing Drag, Atlantic Council, November 7, 2017, https://www.atlanticcouncil.org/in-depth-research-reports/report/aviation-cybersecurity-finding-lift-minimizing-drag/.
3    “Aviation Cybersecurity Strategy,” International Civil Aviation Organization.
4    “Flight Standards Information Management System (FSIMS),” US Department of Transportation, Federal Aviation Administration, 2007, 633, https://www.faa.gov/documentLibrary/media/Order/8900.1.pdf.
5    “A40 SkyTalks: Aviation Benefits,” Uniting Aviation, October 24, 2019, https://www.unitingaviation.com/video/skytalks/a40-skytalks-aviation-benefits/.
6    Ken Munro, “Mapping the Attack Surface of an Airport,” Pen Test Partners, October 11, 2019, https://www.pentestpartners.com/security-blog/mapping-the-attack-surface-of-an-airport/.
7    “5G Market in Aviation by End Use (5G Infrastructure for Aircraft and Airport), Technology (EMBB, FWA, URLLC/MMTC), Communication Infrastructure (Small Cell, DAS), 5G Services (Aircraft Operations, Airport Operations), Region—Global Forecast to 2026,” Markets and Markets, August 2019, https://www.marketsandmarkets.com/Market-Reports/5g-market-aviation-152979610.html.
8    Douglas Busvine, “Huawei Shows off All-in-One 5G System on a Chip,” Disruptive.Asia, September 9, 2019, https://disruptive.asia/huawei-5g-system-on-a-chip/.
9    “EU Coordinated Risk Assessment of the Cybersecurity of 5G Networks,” NIS Cooperation Group, October 2019, https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=62132.
10    Cooper, Aviation Cybersecurity—Finding Lift, Minimizing Drag.
11    “Strategy for Cybersecurity in Aviation,” European Strategic Coordination Platform, September 2019, https://www.easa.europa.eu/sites/default/files/dfu/Cybersecurity%20Strategy%20-%20First%20Issue%20-%2010%20September%202019.pdf; “Aviation Cybersecurity Strategy,” UK Department for Transport, Civil Aviation Authority, July 12, 2018, https://www.gov.uk/government/publications/aviation-cyber-security-strategy.
12    Aharon David, “How DO-326 and ED-202 Are Becoming Mandatory for Airworthiness,” Aviation Today, May 1, 2019, https://www.aviationtoday.com/2019/05/01/326-ed-202-becoming-mandatory-airworthiness/.
13    Robert McMillan and Dustin Volz, “U.S. Steps Up Scrutiny of Airplane Cybersecurity,” Wall Street Journal, September 29, 2019, https://www.wsj.com/articles/u-s-government-steps-up-scrutiny-of-airplane-cybersecurity-11569764123.
14    “National Strategy for Aviation Security of the United States of America,” White House, December 2018, https://www.whitehouse.gov/wp-content/uploads/2019/02/NSAS-Signed.pdf; McMillan and Volz, “U.S. Steps Up Scrutiny of Airplane Cybersecurity.”
15    “Safety Management Systems,” UK Department for Transport, Civil Aviation Authority, last visited November 19, 2019, https://www.caa.co.uk/Safety-initiatives-and-resources/Working-with-industry/Safety-management-systems/Safety-management-systems/.
16    “Ethics and Compliance,” Boeing, last visited November 19, 2019.
17    “Vulnerability Disclosure—Request for Assistance,” European Centre for Cyber Security in Aviation, last visited November 19, 2019, https://www.easa.europa.eu/eccsa/eccsa-request-assistance-vulnerability-disclosure.
18    “Aviation Village,” last visited November 19, 2019, https://aviationvillage.org/.
19    “Cyber Security in Aviation,” Aviation Intelligence Unit, EuroControl, August 2019, https://www.eurocontrol.int/sites/default/files/2019-08/cybersecurity-in-aviation-eurocontrol-think-paper-3.pdf.
20    “‘F*ck you’: In Odessa, Hackers Staged a Cyber Attack, Airport Operation Is Paralyzed,” October 17, 2019, https://odesa.znaj.ua/ru/270882-f-ck-you-v-odesi-hakeri-vlashtuvali-kiberataku-robota-aeroportu-paralizovana.
21    Stan Goff, “U.S. Flights Canceled as FAA Looks into GPS, ADS-B System Errors,” Inside GNSS, June 10, 2019, https://insidegnss.com/u-s-flights-canceled-as-faa-looks-into-gps-ads-b-system-errors/.
22    “Resolution 2341: Threats to International Peace and Security Caused by Terrorist Acts,” United Nations Security Council, February 13, 2017, http://unscr.com/en/resolutions/2341.
23    “UNSCR 2341 and the Role of Civil Aviation in Protecting Critical Infrastructure from Terrorist Attacks,” International Civil Aviation Organization, 2017, https://www.icao.int/Meetings/AVSEC2019/Pages/Critical-Infrastructure.aspx.
24    “USAF Announces Hack the Air Force 3.0,” US Air Force, November 5, 2018, https://www.af.mil/News/Article-Display/Article/1682502/usaf-announces-hack-the-air-force-30/; “Department of Defense Expands ‘Hack the Pentagon’ Crowdsourced Digital Defense Program,” US Department of Defense, October 24, 2018, https://www.defense.gov/Newsroom/Releases/Release/Article/1671231/department-of-defense-expands-hack-the-pentagon-crowdsourced-digital-defense-pr/.
25    “Aviation Cybersecurity Strategy,” International Civil Aviation Organization.
26    “Impact Assessment of Cybersecurity Threats (IACT): EASA_REP_RESEA_2016_1,” European Union Aviation Safety Agency, July 31, 2018, https://www.easa.europa.eu/document-library/research-reports/easarepresea20161.
27    Megan L. Brown and Boyd Garriott, “Supreme Court Declines Connected Vehicle Lawsuit, Leaving Standing Issues in Tech and Security for Future Resolution,” Federalist Society, February 4, 2019, https://fedsoc.org/commentary/blog-posts/supreme-court-declines-connected-vehicle-lawsuit-leaving-standing-issues-in-tech-and-security-for-future-resolution.
28    “Aviation Cybersecurity Strategy,” International Civil Aviation Organization.


A digital agenda for the new European Commission: issues and opportunities https://www.atlanticcouncil.org/commentary/blog-post/a-digital-agenda-for-the-new-commission-issues-and-opportunities/ Thu, 03 Oct 2019 17:10:36 +0000 https://atlanticcouncil.org/?p=186361 Under the outgoing European Commission of President Jean-Claude Juncker, the European Union has emerged as the regulatory superpower of the global digital economy. Early signs indicate Brussels will continue to leverage the power of the single market and its regulatory approach when a new Commission takes office in November 2019.

The text below is an excerpt from the Future Europe Initiative’s recent report, A transatlantic agenda for the new European Commission.

Under the outgoing European Commission of President Jean-Claude Juncker, the European Union has emerged as the regulatory superpower of the global digital economy. Having initially made it its priority to empower the digitalization of the EU economy and complete the digital single market, the Juncker Commission quickly found itself dealing with a series of controversies involving data leaks, disinformation campaigns, and digital companies’ tax practices. Amid a shift in public attitudes toward big tech, the EU shifted gears to take a more aggressive regulatory approach toward the digital sector. The implementation of the General Data Protection Regulation, or GDPR, sent ripples far beyond the EU. Similarly, investigations of large US tech firms by Competition Commissioner Margrethe Vestager ended in record fines.

The European public seems to have grown only more skeptical of large tech companies. The accelerated uptake of new technologies, such as artificial intelligence (AI), has prompted regulatory brainstorming, with the EU publishing a series of reports on the matter. Even on digital tax, where an initial EU-wide proposal failed in 2018, the debate is far from over, and a patchwork of national frameworks may still prompt further Commission action. While the leadership of the Commission’s digital portfolios will change, there is also important continuity, with Margrethe Vestager likely to be promoted to executive vice president and keen to see her hawkish stance on competition issues persist.

The following provides an overview of some of the key digital-policy issues that the new Commissioners will find in their briefing books, as well as the challenges and opportunities for transatlantic cooperation.

Digital Taxation

Issue: The July 2019 passage of a new digital-services tax (DST) by France has elevated the issue of how services delivered digitally across borders within the EU’s single market are taxed. The unilateral action by Paris seems to contravene long-established international tax rules, such as the principle of permanent establishment, and has prompted the United States to examine retaliatory tariffs over what it considers discrimination aimed specifically at US tech firms under the new French law. Similar digital-service levies are in advanced stages in Italy, Spain, and Austria, and other Member States are contemplating national measures. But, the fault lines within the EU over the issue are complex. A group of European countries that are home to EU headquarters of big US digital companies and have vibrant domestic tech sectors helped sink a French-led initiative for an EU-wide DST framework in 2018. Any new push by the Commission will also be complicated by the perennially contentious nature of taxation issues at the EU level, as Member States on all sides of the issue seek to protect their revenue bases and a core element of their sovereignty from Brussels.

Opportunity: The European Union and the United States cannot afford to add another costly spat to the list of existing transatlantic trade disputes. A proliferation and patchwork of national digital taxes would also further complicate any drive by the new Commission to advance the integration of the EU’s digital single market and seize synergies for the European economy from the next stages of digitization. The new Commission should, therefore, work with the United States to accelerate existing negotiations within the Organisation for Economic Cooperation and Development (OECD) on a long-overdue update of international tax rules to account for the realities of the digital economy. These have been slow to progress, and part of the declared intent behind the French DST is to push for progress in the OECD talks. A joint EU-US initiative at the OECD could both avoid a costly escalation of tariff measures and yield a real prospect of a new US-EU-brokered global gold standard for taxation in the digital age.

Artificial Intelligence

Issue: Commission President-elect Ursula von der Leyen doubled down on the EU’s AI strategy in her agenda for Europe, promising legislation to address the human and ethical implications of AI during her first one hundred days in office.

There is growing concern on both sides of the Atlantic, especially among private-sector leaders, that the EU is putting the cart before the horse in seeking to regulate a technology that is still in its infancy, thereby threatening to fall further behind AI leaders such as China and the United States.

AI will shape the coming era of great power competition. As Vladimir Putin famously said of the emerging technology, “the one who becomes the leader in this sphere will be the ruler of the world.” Unlike its direct competitors, the EU views AI through a socioeconomic lens, rather than from a purely economic or geopolitical perspective.

While the United States and China have poured significant resources into military AI, this subfield is absent from the EU’s strategy, and instead left to individual Member States. The limitations of the EU, which prevent it from incorporating military AI, present an immense challenge for Europe. Fundamental differences in the priorities, resource allocation, and military and technological capabilities of Member States may make European coordination on the issue difficult, and could exacerbate the gap between Europe and other great powers.

Opportunity: The EU has several notable strengths in the field of AI that it must play to, including: its five-hundred-million-consumer market, its education system and research quality, its quantity of talent, and its normative institutional power. The EU also boasts balanced coverage of the four main subdomains of the AI landscape when compared to other AI leaders: machine learning methods, connected and automated vehicles, speech recognition and natural language processing, and face recognition. The EU must leverage these strengths in order to establish itself as a legitimate challenger and to lead in a tangible field of AI. The EU will not be able to set institutional standards from a severely disadvantaged position.

Privacy and Data Sharing

Issue: A fundamental right under the EU Charter of Fundamental Rights, privacy is a crucial pillar of EU digital policy. The implementation of the GDPR, which helped solidify the EU’s status as a digital superpower, has been hailed as one of the Juncker Commission’s biggest legacies. That Commission took office during the fallout from the Edward Snowden scandal, ensuring that there was a strong political will and sense of urgency from the public for action on data-privacy issues.

While the hysteria over Snowden has subsided, scandals such as Cambridge Analytica, companies’ data-harvesting practices, and large-scale leaks have eroded public trust in large technology companies and helped maintain continued public support in Europe for privacy legislation.

While not yet fully understood, 5G and AI also pose emerging challenges. Despite serious security risks, certain EU Member States have cooperated with Chinese company Huawei, a global leader in 5G technology, which is blacklisted by the US Commerce Department. As the global AI competition heats up, so will the increasingly cutthroat demand for big data, the fuel which feeds the AI fire.

Opportunity: In its honeymoon phase, the new Commission could inherit a landmark court ruling that could disrupt transatlantic data transfers. A pending case before the European Court of Justice (ECJ) challenges the legitimacy of transatlantic data transfers under the Privacy Shield framework. It contests the transfer of personal data from Facebook’s servers in Ireland to Facebook’s servers in the United States, on the grounds that data transferred to the United States is subject to surveillance even under Privacy Shield, and therefore not adequately protected under EU equivalence rules.

The new Commission’s outlook on privacy issues may well hinge on this case. In 2015, the ECJ ruled Privacy Shield’s predecessor, Safe Harbor, invalid. If Privacy Shield follows suit, the new Commission will have to rewrite the playbook on privacy and cross-border data flows, the lifeblood of the digital economy. It could also have big implications for the private sector and big tech. The Silicon Valley tech giants view themselves as global companies, and the European single market is vital to their continued success. As a result, the European Union has leverage and can continue to uphold—or even strengthen—its norms and values surrounding privacy. But, as the Juncker Commission has shown, a five-year term is long, and much can change in EU digital policy.

Jörn Fleck is associate director at the Atlantic Council’s Future Europe Initiative.

Alex Baker is a project assistant at the Atlantic Council’s Future Europe Initiative. Follow him on Twitter @alexpieterbaker.

Europe’s new commission: The outlook for digital policy https://www.atlanticcouncil.org/blogs/new-atlanticist/europes-new-commission-the-outlook-for-digital-policy/ Wed, 02 Oct 2019 20:48:27 +0000 https://atlanticcouncil.org/?p=186089 A quick look at how the new European Commission will line up on digital policy.

On September 10, European Commission President-elect Ursula von der Leyen outlined the structure of the new European Commission and announced the assignment of commissioners-elect to their specific portfolios. The main actors working on digital policy will be:

  • President-elect Ursula von der Leyen, who has made clear that digital policy will be a top priority of her administration (see below);
  • Executive Vice President Margrethe Vestager, responsible for “A Europe Fit for a Digital Age,” and with oversight of Directorate General for Competition (DG COMP);
  • Commissioner Sylvie Goulard, responsible for the “Internal Market,” and with oversight of the DG for Internal Market, Industry, Entrepreneurship, and Small and Medium Enterprises (DG GROW); DG Communications Networks, Content, and Technology (DG CONNECT); and the new DG Defense Industry and Space;
  • Commissioner Mariya Gabriel, responsible for “Innovation and Youth,” and with oversight of Directorate General for Research and Innovation (DG RTD).

It is also possible that Valdis Dombrovskis, the executive vice president-designate responsible for “An Economy that Works for People,” will have a role in digital policy, but the extent of his involvement is unclear. Didier Reynders, responsible for “Justice,” will also have input on law enforcement and consumer issues.

The commissioners-elect must now appear before the relevant committees of the European Parliament to answer questions in a process similar to US cabinet-level confirmations. Gabriel had her hearing on Monday, September 30, before the Industry (ITRE) and Culture committees, and Goulard appeared on Wednesday, October 2, before the ITRE and Internal Market (IMCO) committees. The three executive vice presidents will follow, with Vestager’s hearing on October 8, before the ITRE, IMCO, and the Economic and Monetary Affairs (ECON) committees. The Parliament will vote its approval (or disapproval) of the entire Commission en bloc, rather than individual commissioners. That vote is scheduled for October 23.

In the past, one or two commissioners-elect have encountered serious opposition in the Parliament, and some have been forced to withdraw; in fact, the Hungarian and Romanian commissioners-designate have already withdrawn and been replaced. Whether this will affect the commissioners working on digital policy is not clear; Goulard faced close scrutiny in her hearing due to ongoing investigations into her use of parliamentary funds, and will probably have to answer additional questions. Her future is far from clear. If approved, the new Commission is expected to take office on November 1 and will guide European policy until October 31, 2024.

In her Political Guidelines for the Next European Commission, von der Leyen identifies “A Europe fit for the digital age” as the third of six major priorities to guide the Commission’s work. She called for Europe to “lead the transition to… a new digital world,” and in particular for:

  • The EU to coordinate a joint approach on the “human and ethical implications of artificial intelligence;”
  • A new Digital Services Act that will update the 2000 E-Commerce Directive by providing “liability and safety rules for digital platforms, services, and products, and complete our digital single market;”
  • The EU to define standards for new technologies that will become the global norm;
  • And for the creation of a European Union (EU) joint Cyber Unit that will “speed up information sharing and better protect ourselves.”

In addition, the mission letters that von der Leyen provided to Vestager, Goulard, and Gabriel call for:

  • Finding an international consensus on digital taxation;
  • Strengthening competition enforcement and possibly launching a review of competition in the digital arena;
  • Fostering digital research and education;
  • Building a single market for cybersecurity.

Von der Leyen specifically prioritized taxation of the big tech companies, and she made it clear that the EU will act alone at the end of 2020 if there is no multilateral solution by then. Her Political Guidelines and mission letters also frame digital policy as a key element of the EU’s drive to develop greater economic and technological sovereignty. In outlining Vestager’s responsibilities, von der Leyen writes that while “striving for digital leadership,” the EU must support industry and “…need[s] companies that compete on equal terms.” She has charged Goulard with “enhancing Europe’s technological sovereignty,” linking this to Europe being a leader in the next frontier of technologies.

Who does what?

Vestager will clearly lead the way on competition policy and any digital-services tax. On AI and the Digital Services Act, she will coordinate the Commission’s work, while Goulard has been charged with taking the “lead.” While the headlines have focused on Vestager, Goulard is likely to be the most important player on digital regulation, apart from the specific areas of tax and competition. Gabriel, with oversight of the research DG, will be an important player for industry and academia, but will be less involved in digital policy, except when discussing the need for greater innovation in Europe. As with the last Commission, specific issues might well bring other commissioners and DGs into the digital-policy space. Working out who the major players are on key issues, and how overlapping portfolios will be managed, will take time. Given the pace of technological and market change, we should not forget that the digital agenda of this Commission could well look very different even a year or two into its five-year term.

Frances G. Burwell is a distinguished fellow at the Atlantic Council and a senior adviser at McLarty Associates.

Manning in GlobalAsia: Techno-Nationalism vs. the Fourth Industrial Revolution https://www.atlanticcouncil.org/insight-impact/in-the-news/manning-in-globalasia-techno-nationalism-vs-the-fourth-industrial-revolution/ Thu, 28 Mar 2019 15:37:12 +0000 https://www.atlanticcouncil.org/?p=237407

Remes in McKinsey Global Institute: Smarter cities are resilient cities https://www.atlanticcouncil.org/insight-impact/in-the-news/remes-in-mckinsey-global-institute-smarter-cities-are-resilient-cities/ Tue, 01 Jan 2019 17:25:00 +0000 https://www.atlanticcouncil.org/?p=237419

Building a smart partnership for the Fourth Industrial Revolution https://www.atlanticcouncil.org/in-depth-research-reports/report/building-a-smart-partnership-for-the-fourth-industrial-revolution/ Fri, 27 Apr 2018 23:00:21 +0000 http://live-atlanticcouncil-wr.pantheonsite.io/building-a-smart-partnership-for-the-fourth-industrial-revolution/ Along with greater prospects for human advancement and progress, advancements in emerging technologies have the potential to be dramatically disruptive, threatening existing assumptions around national security, rules for international cooperation, and a thriving global commerce.


The emerging technologies of the Fourth Industrial Revolution offer unprecedented avenues to improve quality of life, advance society, and contribute to global economic growth. Yet along with greater prospects for human advancement and progress, these technologies also have the potential to be dramatically disruptive, threatening existing assumptions around national security, rules for international cooperation, and thriving global commerce. This report by the Atlantic Council’s Scowcroft Center for Strategy and Security and the Korea Institute for Advancement of Technology (KIAT) addresses emerging technologies in key areas of the Fourth Industrial Revolution and explores innovative ways by which the United States and the Republic of Korea can cooperate around advancements in artificial intelligence and robotics; biotechnology; and the Internet of Things.

Each chapter focuses on one of these scientific advancements, with two authors exploring the technology from the perspective of the United States and the Republic of Korea, respectively. Building off the work already underway in both countries, the authors of this report examine opportunities for continued growth and development in these key areas, offering concrete, distinct recommendations for increasing US-ROK cooperation around each technology as the world moves further into the Fourth Industrial Revolution.

Aviation cybersecurity—Finding lift, minimizing drag https://www.atlanticcouncil.org/in-depth-research-reports/report/aviation-cybersecurity-finding-lift-minimizing-drag/ Tue, 07 Nov 2017 12:32:09 +0000 http://live-atlanticcouncil-wr.pantheonsite.io/aviation-cybersecurity-finding-lift-minimizing-drag/ The aviation industry is faced with a complex and critical challenge to carefully balance costs with evolving business imperatives, customer demands, and safety standards.


The aviation industry is faced with a complex and critical challenge to carefully balance costs with evolving business imperatives, customer demands, and safety standards. The increasing use of new technologies in the movement towards automation has yielded efficiencies and enhanced the customer experience. Yet, it has also inadvertently created vulnerabilities for exploitation. As a central component of commerce, trade, and transportation infrastructure, the aviation industry is indispensable to the global economy. The consequences of failure would carry direct public safety and national security implications.

Pete Cooper’s “Aviation Cybersecurity—Finding Lift, Minimizing Drag” indicates that the aviation industry will likely experience cybersecurity challenges similar to other industries that have embraced the “digital revolution.” As the industry moves forward, will it be able to maintain stakeholder trust by accurately perceiving the risks and opportunities as well as understanding adversary threats?

Confronting transatlantic cybersecurity challenges in the internet of things https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/confronting-transatlantic-cybersecurity-challenges-in-the-internet-of-things-2/ Fri, 07 Apr 2017 15:10:01 +0000 https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/ Beau Woods authors 'Confronting Transatlantic Cybersecurity Challenges in the Internet of Things,' in which he explains how the society is only one cyber crisis away from proving how unimaginative policy makers have been.


Beau Woods authors ‘Confronting Transatlantic Cybersecurity Challenges in the Internet of Things,’ in which he explains how society is only one cyber crisis away from proving how unimaginative policy makers have been. The issue brief and its recommendations are based on a series of discussions around Europe with policy makers, private sector leaders, academics, and cybersecurity researchers identifying ways to confront cybersecurity challenges facing the transatlantic community in 2017 and beyond.

Introduction

In 2016, a series of highly impactful and publicized disruptions provided a wake-up call to societies on both sides of the Atlantic making obvious their dependence on inherently unpredictable technology. Just before the year began, a targeted attack disrupted the Ukrainian energy grid, forcing its operators to fall back on decades-old manual processes, and a similar attack followed late in the year. The Hollywood Presbyterian Hospital in Los Angeles was forced to shut down for weeks as a critical patient-care system was unintentionally disrupted by ransomware—a common plague that impacted many other parts of societal infrastructure through the year, including San Francisco’s Bay Area Rapid Transit (BART), US electricity providers, and hospitals in the United States and across Europe. At the same time, a botnet of poorly secured devices disrupted large portions of the US Internet and knocked more than one million German households offline. And while the Russian breach of the Democratic National Committee (DNC) and the associated influence campaign continue to shock many in the United States and beyond, the specter of hackable voting computers also cast doubt on the US electoral system in the lead-up to and aftermath of the presidential election.

These events illustrated a general trend of increasing risk, from an increasing number of adversary types. High-capability adversaries, such as Russia, showed growing willingness to engage in cyberattacks. Meanwhile, high-intent adversaries, such as cyber-criminal groups and the Islamic State of Iraq and al-Sham (ISIS), have access to increasingly sophisticated toolkits to strengthen their capabilities. The line between nation-state and non-state hostility in cyberspace is blurring, while the United States and its allies are becoming more susceptible to adversaries of all types. Society is only one cyber crisis away from proving how unimaginative policy makers have been.1 In the face of high-consequence cybersecurity failures, a higher standard of care is merited.

Against this backdrop, the Atlantic Council’s Cyber Statecraft Initiative of the Brent Scowcroft Center on International Security, in collaboration with the Howard Baker Forum and CSC, initiated a series of conversations on these uncomfortable topics, anchored by dinners in Berlin in November 2016 and in Brussels in January 2017. These off-the-record discussions with policy makers, private sector leaders, academics, and cybersecurity researchers were meant to identify ways to confront cybersecurity challenges facing the transatlantic community, in 2017 and beyond. This issue brief synthesizes key observations, insights, and approaches from the series. The emergent theme was that the transatlantic community must come together at this critical moment in history to preserve trust through trustworthiness with cyber hygiene, societal and technical resilience, market transparency, and people-to-people connection.

Public safety and the internet of things

Connected technology holds both great promise and great peril for international security, prosperity, and stability. Increased integration of technological and social systems unlocks new capabilities for prosperity, growth, health, safety, and resilience. The Internet of Things (IoT) is bringing life-changing capabilities to more people, faster and cheaper, than would be possible otherwise. Public safety and human lives are improved by automotive safety features, medical therapies, logistics enhancements, utility services, and other advances.

At the same time, societies’ dependence on connected technology is increasing faster than their ability to build defensive capabilities and resilience against accidents and adversaries. This dependency represents potential threats to: national and international security, where low-capability adversaries like terrorists and hacktivists gain new capabilities to cause physical harm; trustworthiness of democratic institutions, where poor cyber hygiene contributes to undermining confidence in the electoral process; and stability of global prosperity, where cybersecurity incidents reveal unreliable technological dependencies in key market segments.

Where cybersecurity failures impact human life and public safety, the consequences will manifest much more broadly. Exotic sources of potential harm, such as aviation disasters or terrorism, play an outsized role in shaping consumer confidence in key markets. Similarly, national security depends on reliable transportation, energy, and military capabilities—all of which are rapidly adopting technology and the associated vulnerabilities. Where cybersecurity impacts public safety—cyber safety, as framed by the Atlantic Council’s Cyber Statecraft Initiative—the level of care must be commensurate with the level of harm.

Global supply chains and markets make IoT issues inherently international. Concern for public safety, prosperity, and national security transcends borders and unites international citizens and governments. Laws in one jurisdiction impact suppliers and consumers in distant reaches of the globe. Trust built by states, cooperating where their interests and incentives are aligned, can facilitate more trustworthy dialogues on other issues that might otherwise be contentious.

A technical literacy gap

There exists a policy knowledge gap in connected technology and the Internet of Things. Information technology and the Internet are relatively new fields, and doctrine is still being formed. The growing dependence on the Internet of Things further widens this gap, as even cybersecurity and cyber-policy experts have struggled to come to grips with this new wave of connected technology.

Policy makers and other stakeholders vary widely in their cybersecurity background and their access to consistent, credible advice. Predominant mental models for understanding these technologies are inconsistent, and even the most faithful analogies diverge from technical fact in key areas. Policy makers’ current ability to collectively anticipate, identify, and address cybersecurity issues is therefore inadequate. There are four key aspects to consider in evaluating the impact of the IoT wave on cybersecurity: a cognitive cultural gap that has emerged due to three waves of connectivity; increased scale of the attack surface; added complexity for defenders, due to the diversity in functionality and security approaches; and increased potential to transfer the impact of cyberattacks from the virtual into the physical domain (i.e., increased potential for physical damage).2 Based on these principles, it would be instructive to expand on this framework and extend it more broadly to identify material differences between cyber safety and more conventional domains.

  • Cultural Cognitive Gap—Connected technology has undergone no fewer than three distinct generational shifts over the past thirty years. Cognitive capacity to understand and adapt to those changes cannot keep pace with the need for technically literate policymaking. Cultural practices and awareness may take even longer to identify and adjust to optimal mental models.
  • Scale of Attack Surface—The number of distinct hardware and software components in connected technology often exceeds the ability of one person to identify, account for, and understand implications. Connectivity brings many orders of magnitude more interactions, with more potential hazards or hostile actors. In combination, the number of vulnerable, exposed components exceeds societies’ ability to anticipate, particularly in a domain that changes so quickly.
  • Complexity—Connected technologies have vastly different composition, economics, operational environments, and timescales. Limited capabilities and economic considerations in IoT constrain options for securing these systems, while operational contexts may require more rigorous and preventive approaches than have yet been achieved even in traditional information technology. Timescales can be more extreme in IoT as well, with lifetimes measured in decades rather than years, and irreparable harm can manifest in milliseconds.
  • Consequences—When safety-critical systems rely on software and connectivity, cybersecurity failure modes include direct harm to human life and public safety. Public unfamiliarity with the causes, bounds, and extant efforts can amplify impact on trust in markets and governments. Widespread dependence on these technologies in critical infrastructure poses a national security threat, if those technologies remain vulnerable and exposed to adversaries who can use these systems’ scale, speed, and connectivity to undermine them.
  • Adversaries—The Internet can bring adversaries from across the globe into private homes and critical public infrastructure, whereas only the world’s superpowers could match that reach thirty years ago. Different adversaries have different goals, motivations, methods, and capabilities. While some adversaries may be chastened by potential harm from safety-impacting systems, others may seek those systems out. Low cybersecurity hygiene and high connectivity open the door for actors who lack others’ skill and restraint (i.e., those who “like the boom”).

Solutions to transatlantic cybersecurity challenges in the internet of things

In the face of uncomfortable situations, it may be time to consider uncomfortable approaches. Among these, several seem most promising in bringing a level of care to match the level of potential harm. These approaches center on cyber hygiene, resilience, market changes, and people-to-people connection.

Hygiene-focused approaches to cybersecurity deny low-capability adversaries by raising the level of defense higher than they can overcome. Many of the highest-intent adversaries also have the lowest capabilities, yet they are still highly successful. Hacktivists generally do not have a high degree of skill, yet even their simple tactics achieve high-profile consequences; ISIS has adapted and extended this playbook. State actors often use the tools and tradecraft of a much lower-skilled adversary to avoid tipping their hand and to confound investigators attempting to identify them. The technical tradecraft used in the Russian attack on the DNC was not much more sophisticated than Nigerian scam emails. Raising the bar causes higher-capability adversaries to increase resources and reduces the field of actors, increasing confidence in attribution. Many of these hygiene practices are captured in existing standards or are knowable by owners and operators.

Resilience can prevent harm and permit society to recover in the wake of a large-scale event. While some techniques focus on avoiding failure from cyberattacks, others ensure that any failures are evident and can be accounted for in operations. Awareness and education on smartly consuming information in the Internet era serves as an inoculation against information operations, and other measures can reduce the impact and time to recover at a societal level.

Market transparency and software liability allow buyers to size and bound their risk, and can shape market choices. Owners and operators who can identify known points of risk, such as published software defects or configuration weaknesses, can account for these in planning and resourcing, to better defend themselves from cyberattack. Manufacturer attestation of security capabilities improves confidence in the brand and in entire markets. Liability regimes for public safety and software conflict in the Internet of Things; reconciliation of those differences guides manufacturers, business customers, and consumers to clearly define roles, responsibilities, and accountabilities.
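As a minimal sketch of the kind of transparency described above, the example below shows how an owner or operator might cross-reference a device’s published software inventory against a list of disclosed defects to size its exposure. All component names, vulnerability identifiers, and severity scores are hypothetical placeholders invented for illustration; they do not describe any real product, advisory, or standard tool.

```python
# Minimal, hypothetical sketch: sizing risk from published software defects.
# Component names, vulnerability IDs, and scores are invented placeholders.

from dataclasses import dataclass


@dataclass
class Vulnerability:
    vuln_id: str      # a published advisory identifier (hypothetical)
    component: str    # the affected software component
    severity: float   # 0.0 (low) to 10.0 (critical)


# Defects a manufacturer has publicly disclosed (hypothetical).
published_defects = [
    Vulnerability("ADV-0001", "web-admin-console", 9.1),
    Vulnerability("ADV-0002", "firmware-updater", 6.4),
    Vulnerability("ADV-0003", "bluetooth-stack", 4.2),
]

# Software inventory for one connected device (hypothetical).
device_components = {"web-admin-console", "bluetooth-stack", "logging-agent"}


def assess_exposure(components, defects, threshold=7.0):
    """Return the defects affecting this device and flag high-severity ones."""
    relevant = [d for d in defects if d.component in components]
    high = [d for d in relevant if d.severity >= threshold]
    return relevant, high


relevant, high = assess_exposure(device_components, published_defects)
print(f"{len(relevant)} known defects affect this device; "
      f"{len(high)} are high severity and need immediate planning.")
```

The point of the sketch is simply that published defect data turns an unknown liability into a quantity an operator can plan and budget against, which is the mechanism the paragraph above describes.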

Connecting people from different backgrounds and experiences in the transatlantic community shrinks the cultural cognitive gap. Many perspectives are compatible where knowledge, principles, and doctrine are nascent; policy makers and others are well served to optimize solutions for the multiple truths that make up reality in the Internet of Things. Accounting for the broad diversity of background, perspective, and experience builds better mental models, which allows the transatlantic community to generate and promulgate more effective policies.

Conclusion

Dependence on connected technology is growing faster than societies’ ability to secure it against the rising capabilities and intent of diverse adversaries. The Internet of Things has great power to transform societies, but only if the trust placed in these technologies is merited. A public crisis of confidence may delay benefits for years or decades, and susceptibility to remote attack may undermine national and international security. Solutions call for engagement across the transatlantic community—to build bridges between disparate communities, to embrace preventive resilience, to realign market forces, and to return to effective cybersecurity practices. Where domains of expertise and areas of interest overlap, societies can preserve trust through trustworthiness and be safer, sooner, together.

Beau Woods is deputy director of the Cyber Statecraft Initiative at the Brent Scowcroft Center on International Security, where he focuses on the intersection of cyber security and the human condition, primarily around cyber safety. He also works closely with the I Am The Cavalry civil society initiative, ensuring the connected technology that can impact life and safety is worthy of our trust.

The Howard Baker Forum was founded by former Senator Howard Baker in Washington, DC to provide a platform for examining specific, immediate, critical issues affecting the nation’s progress at home and its relations abroad. Under the leadership of its president, Scott Campbell, the Forum organizes a variety of programs and research projects to examine and illuminate public policy challenges facing the nation today. The Howard Baker Forum is a public and international affairs affiliate of Baker, Donelson, Bearman, Caldwell, and Berkowitz, P.C.

CSC leads clients on their digital transformation journey, providing innovative next-generation technology solutions and services that leverage deep industry expertise, global scale, technology independence and an extensive partner community. CSC helps commercial and international public sector clients solve their toughest challenges by modernizing their business processes, applications and infrastructure with next-generation technology solutions.

 
1    Fran Burwell, Distinguished Fellow at the Atlantic Council, made this observation. Used with permission.
2    This framework was volunteered by Ambassador Sorin Ducaru, Assistant Secretary General of NATO for Emerging Security Challenges. Used with permission.

Cyber Risk Wednesday: Software Liability—the Good, the Bad, and the Uncomfortable https://www.atlanticcouncil.org/commentary/event-recap/cyber-risk-wednesday-software-liability-the-good-the-bad-and-the-uncomfortable/ Wed, 30 Nov 2016 16:19:54 +0000 http://live-atlanticcouncil-wr.pantheonsite.io/cyber-risk-wednesday-software-liability-the-good-the-bad-and-the-uncomfortable/ With more cars and medical devices connecting to the internet, what happens if automakers and health care companies don’t start prioritizing digital security? Many cybersecurity experts worry that faulty code in the so-called Internet of Things (IoT) won’t just cause systems to malfunction and freeze. Instead, they say, flaws inside connected cars or pacemakers could […]


With more cars and medical devices connecting to the internet, what happens if automakers and health care companies don’t start prioritizing digital security?

Many cybersecurity experts worry that faulty code in the so-called Internet of Things (IoT) won’t just cause systems to malfunction and freeze. Instead, they say, flaws inside connected cars or pacemakers could lead to serious injury or death.

As a result, leading digital security experts are calling on US policymakers to hold manufacturers liable for software vulnerabilities in their products in an effort to prevent the bugs commonly found in smartphones and desktops from pervading the emerging IoT space.

But can that strategy work? Or will more government regulation stifle innovation?

Those were the big questions at an event Wednesday at the Atlantic Council in Washington. Passcode was a media partner of the event. Here are a few things we learned:

1. Everything is a computer. Act like it

To lay the legal foundation for the Digital Age, policymakers need to start wrapping their minds around the idea that we’re living in an era of technology, where everything we depend on is a computer that may be connected to the internet, says cryptographer Bruce Schneier, a fellow at Harvard Law School’s Berkman Klein Center for Internet and Society.

“The way to think about the world is that we’re creating technology where everything is a computer,” he said. “Your smartphone is a computer that makes calls. Your car is a 100-computer network with an engine. That’s the Internet of Things.”

Though the US government hasn’t adopted regulations for the burgeoning space, the Obama administration last month released guidelines for IoT devices that called on engineers to build secure features into the design of connected products. That followed a similar strategy from the Department of Homeland Security that said manufacturers should prioritize security features for the most harmful functions that could be breached.

But creating a legal regime that determines who’s responsible for security flaws in those computers or software, Mr. Schneier says, will require the country to enact consumer protection laws that can more effectively respond to rapid changes in technology. More safety regulation is needed, he added, because consumers still might buy harmful products if they tend to work well, regardless of the potential dangers to their safety.

“The market can’t fix this because neither the buyer and the seller care,” he said. “Until now, we’ve given programmers the right to code the world that they saw fit. We need to figure out the policy.”

2. Data rules everything around you

In the era of big data, companies can measure many digital security metrics, from the cost of cyberattacks to the susceptibility of employees to phishing and other hacking tricks. But there’s still not enough data on IoT breaches, because the technology’s spread is so new, says John Soughan, who heads up business in the cyberinsurance division at Zurich North America, a Switzerland-based insurance company.

“Right now, there’s not enough data around what are the causes of these breaches, all of the liabilities in there. That’s problematic for insurance companies, because that’s part of the market,” he said. “That’s why we’re supportive of efforts to collect breach data to make sure we know what the cost of that risk is.”

The lack of information on data breaches is also problematic as courts begin to determine how to settle cases where consumers are harmed by internet-connected products. Since there have been few efforts to categorically track the harmful impact of faulty internet-connected products, legal cases against manufacturers are often based on ambiguous threats, which may not be enough to get a ruling – let alone create a precedent for future cases.

What’s more, added Wendy Knox Everette, a legal fellow at the technology-focused law firm ZwillGen, “the amorphous threat of some future non-physical harm is not enough for a court to address right now.”

3. Learn to live with risk

Even if there is a legal framework for IoT that’s designed to protect consumers, people still may need to accept some risk with these types of devices, the experts said.

“We don’t want perfectly unbreakable door locks because they’d be too expensive. We choose to bear that risk,” said Eli Dourado, director of the Technology Policy Program at George Mason University’s Mercatus Center. “You never get rid of externalities. We’re trying to get to the most efficient result – the least harm.”

So to strike a balance between keeping consumers secure and enabling technology to advance, experts say, policymakers would do well to find ways to get the riskiest products off the market.

“The IoT makes people think about software liability,” said Ms. Everette. “Instead of being locked inside desktop computers, [software] is now inside physical devices that can now interact with us and possibly harm us… . You can buy knives, but we no longer have lawn darts on the market. That’s a really good way to see how product liability helps you determine your risk.”

Cyber Risk Wednesday: Healthcare Internet of Things https://www.atlanticcouncil.org/commentary/event-recap/cyber-risk-wednesday-healthcare-internet-of-things/ Wed, 24 Feb 2016 17:30:52 +0000 http://live-atlanticcouncil-wr.pantheonsite.io/cyber-risk-wednesday-healthcare-internet-of-things/ The Atlantic Council revisited its groundbreaking work on the Healthcare Internet of Things and updated the discussion to see what the likely implications are, how to proceed, and what the future of healthcare cyber safety might look like.

The latest medical advances lie at the intersection of patient care and connected technology. Integration of new technology enables innovations that improve patient outcomes, reduce the cost of care delivery, and advance medical research. However, new technology also introduces new classes of accidents and adversaries that must be anticipated and addressed proactively. Where cyber security issues can affect patient safety, an appropriate standard of care is warranted. In recent weeks, cyber threats have become the top-of-mind topic for the healthcare and security communities; an open letter from Senator Barbara Boxer called on manufacturers to improve the cybersecurity of their products, and the US Food and Drug Administration (FDA) began a public push for good-faith hacking of medical devices.

In light of these developments, the Atlantic Council revisited its groundbreaking work on the Healthcare Internet of Things and updated the discussion to see what the likely implications are, how to proceed, and what the future of healthcare cyber safety might look like. The moderated panel discussion gathered a group of prominent cybersecurity experts, including Dr. Suzanne Schwartz, Associate Director for Science and Strategic Partnerships at the FDA; Mara Tam, Director of Government Affairs at HackerOne; and Beau Woods, Deputy Director of the Atlantic Council’s Cyber Statecraft Initiative.

Cyber risk Wednesday: Rewards and risks of the healthcare Internet of Things https://www.atlanticcouncil.org/commentary/event-recap/cyber-risk-wednesday-rewards-and-risks-of-the-healthcare-internet-of-things/ Thu, 19 Mar 2015 15:20:14 +0000 http://live-atlanticcouncil-wr.pantheonsite.io/cyber-risk-wednesday-rewards-and-risks-of-the-healthcare-internet-of-things/ In order to examine the balance of the security challenges and societal opportunities of networked healthcare devices, on March 18, 2015 the Atlantic Council's Cyber Statecraft Initiative gathered a group of experts for a panel discussion and an accompanied report release. Jason Healey, Director of the Cyber Statecraft Initiative, moderated the discussion between Pat Calhoun, Senior Vice President and General Manager of Network Safety at McAfee, Suzanne B. Schwartz, Director of Emergency Preparedness, Operations, and Medical Countermeasures at US Food and Drug Administration, and Joshua Corman, Chief Technology Officer at Sonatype.


The medical industry has been evolving rapidly over the past few years, creating new technologies connected to the larger Internet with the potential to unlock benefits for both users and the broader society. This healthcare Internet of Things, comprising personal medical devices and hospital machines, holds the promise of improving patient care, empowering users, and cutting skyrocketing healthcare costs by an estimated $60 billion over fifteen years.

However, as healthcare devices are as open to misuse as any other networked technology, the rewards come with difficult obstacles to ensuring security for patients and society at large. In order to examine the balance of the security challenges and societal opportunities of networked healthcare devices, on March 18, 2015, the Atlantic Council’s Cyber Statecraft Initiative gathered a group of experts for a panel discussion and an accompanying report release. Jason Healey, Director of the Cyber Statecraft Initiative, moderated the discussion among Pat Calhoun, Senior Vice President and General Manager of Network Safety at McAfee; Suzanne B. Schwartz, Director of Emergency Preparedness, Operations, and Medical Countermeasures at the US Food and Drug Administration; and Joshua Corman, Chief Technology Officer at Sonatype.

The event was opened by a keynote address by Representative Diana DeGette, from the First District of Colorado, one of Congress’ leading experts on cutting-edge healthcare research. In her remarks, Rep. DeGette highlighted the need to strike a balance between necessary oversight to assure user safety on the one hand, and regulatory flexibility to enable innovation and incorporate future developments on the other. She also called for smart budgeting that takes into account long-term benefits instead of short-term savings.


The subsequent panel discussion began by debating how to help industry and manufacturers find innovative ways to produce networked healthcare devices while effectively regulating the security of the new technologies. All panelists agreed that it is crucial to act proactively by incorporating security into product development from the start, rather than adding it as a mere afterthought or a response to security breaches. The panel also discussed the importance of developing comprehensive security infrastructures instead of risk-specific and isolated protections. Further, the panelists debated the possibility of providing the industry with common security platforms to be utilized when developing new products. This would allow manufacturers to focus on their fields of specialty, while providing users with dependable protections.


The discussion echoed many of the recommendations included in a report launched at the event, titled “The Healthcare Internet of Things: Rewards and Risks.” Authored by Jason Healey, Neal Pollard, and Beau Woods, the report is a collaboration between Intel Security and the Atlantic Council’s Cyber Statecraft Initiative. It explores the opportunities and challenges of networked healthcare devices, with practical recommendations for governments and the healthcare and security industries.
