Cybersecurity - Atlantic Council
https://www.atlanticcouncil.org/issue/cybersecurity/

Ukraine’s tech sector is playing vital wartime economic and defense roles
https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-tech-sector-is-playing-vital-wartime-economic-and-defense-roles/
Thu, 20 Jul 2023 16:35:49 +0000

The Ukrainian tech industry has been the standout performer of the country’s hard-hit economy following Russia’s full-scale invasion and continues to play vital economic and defense sector roles, writes David Kirichenko.

The Ukrainian tech industry has been the standout sector of the country’s hard-hit economy during the past year-and-a-half of Russia’s full-scale invasion. It has not only survived but has adapted and grown. Looking ahead, Ukrainian tech businesses will likely continue to play a pivotal role in the country’s defense strategy along with its economic revival.

While Ukraine’s GDP plummeted by 29.1% in 2022, the country’s tech sector still managed to outperform all expectations, generating an impressive $7.34 billion in annual export revenues, which represented 5% year-on-year growth. This positive trend has continued into 2023, with IT sector monthly export volumes up by nearly 10% in March.

This resilience reflects the combination of technical talent, innovative thinking, and tenacity that has driven the remarkable growth of the Ukrainian IT industry for the past several decades. Since the 2000s, the IT sector has been the rising star of the Ukrainian economy, attracting thousands of new recruits each year with high salaries and exciting growth opportunities. With the tech industry also more flexible than most in terms of distance working and responding to the physical challenges of wartime operations, IT companies have been able to make a major contribution on the economic front of Ukraine’s resistance to Russian aggression.


Prior to the onset of Russia’s full-scale invasion in February 2022, the Ukrainian tech sector boasted around 5,000 companies. Ukrainian IT Association data for 2022 indicates that just two percent of these companies ceased operations as a result of the war, while software exports actually grew by 23% during the first six months of the year, underlining the sector’s robustness. Thanks to this resilience, the Ukrainian tech sector has been able to continue business relationships with its overwhelmingly Western clientele, including many leading international brands and corporations. According to a July 2022 New York Times report, Ukrainian IT companies managed to maintain 95% of their contracts despite the difficulties presented by the war.

In a world where digital skills are increasingly defining military outcomes, Ukraine’s IT prowess is also providing significant battlefield advantages. Of the estimated 300,000 tech professionals in the country, around three percent are currently serving in the armed forces, while between 12 and 15 percent are contributing to the country’s cyber defense efforts. Meanwhile, Ukraine’s IT ecosystem, hardened by years of defending against Russian cyber aggression, is now integral to the nation’s defense.

A range of additional measures have been implemented since February 2022 to enhance Ukrainian cyber security and safeguard government data from Russian attacks. Steps have included the adoption of cloud infrastructure to back up government data. Furthermore, specialized teams have been deployed to government data centers with the objective of identifying and mitigating Russian cyber attacks. To ensure effective coordination and information sharing, institutions like the State Service for Special Communications and Information Protection serve as central hubs, providing updates on Russian activities and the latest threats to both civilian and government entities.

Today’s Ukraine is often described as a testing ground for new military technologies, but it is important to stress that Ukrainians are active participants in this process who are in many instances leading the way with new innovations ranging from combat drones to artillery apps. This ethos is exemplified by initiatives such as BRAVE1, which was launched by the Ukrainian authorities in 2023 as a hub for cooperation between state, military, and private sector developers to address defense issues and create cutting-edge military technologies. BRAVE1 has dramatically cut down the amount of time and paperwork required for private sector tech companies to begin working directly with the military; according to Ukraine’s defense minister, this waiting period has been reduced from two years to just one-and-a-half months.

One example of Ukrainian tech innovation for the military is the Geographic Information System for Artillery (GIS Arta) tool developed in Ukraine in the years prior to Russia’s 2022 full-scale invasion. This system, which some have dubbed the “Uber for artillery,” optimizes across variables like target type, position, and range to assign “fire missions” to available artillery units. Battlefield insights of this nature have helped Ukraine to compensate for its significant artillery hardware disadvantage. The effectiveness of tools like GIS Arta has caught the attention of Western military planners, with a senior Pentagon official saying Ukraine’s use of technology in the current war is a “wake-up call.”
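The dispatch logic behind the “Uber for artillery” description can be illustrated with a toy sketch. GIS Arta’s actual algorithm is not public; the code below is a hypothetical simplification that scores each available artillery unit against a target by range and readiness time, then assigns the mission to the best match. All names, coordinates, and parameters are invented for illustration.

```python
# Illustrative sketch only: GIS Arta's real logic is not public.
# This toy "fire mission" dispatcher filters units by range, then
# picks the one with the shortest setup time (ties broken by distance).
import math
from dataclasses import dataclass


@dataclass
class Unit:
    name: str
    x: float               # hypothetical grid position, km
    y: float
    max_range_km: float    # maximum engagement range
    setup_time_min: float  # minutes to ready a fire mission


def assign_fire_mission(units, target_x, target_y):
    """Return the in-range unit with the shortest setup time,
    breaking ties by distance to the target; None if no unit can reach."""
    candidates = []
    for u in units:
        dist = math.hypot(u.x - target_x, u.y - target_y)
        if dist <= u.max_range_km:
            candidates.append((u.setup_time_min, dist, u.name, u))
    if not candidates:
        return None
    return min(candidates)[3]


units = [
    Unit("Battery A", 0.0, 0.0, max_range_km=20.0, setup_time_min=4.0),
    Unit("Battery B", 15.0, 5.0, max_range_km=30.0, setup_time_min=2.5),
]
best = assign_fire_mission(units, 25.0, 10.0)
print(best.name if best else "no unit in range")
```

A production system would weigh many more variables (target type, munition stocks, counter-battery risk), but the core pattern of matching a request to the best available responder is the same.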

Alongside intensifying cooperation with the state and the military, members of Ukraine’s tech sector are also taking a proactive approach on the digital front of the war with Russia. A decentralized IT army, consisting of over 250,000 IT volunteers at its peak, has been formed to counter Russian digital threats. Moreover, the country’s underground hacktivist groups have shown an impressive level of digital ingenuity. For example, Ukraine’s IT army claims to have targeted critical Russian infrastructure such as railways and the electricity grid.

Ukraine’s tech industry has been a major asset in the fightback against Russia’s invasion, providing a much-needed economic boost while strengthening the country’s cyber defenses and supplying the Ukrainian military with the innovative edge to counter Russia’s overwhelming advantages in manpower and military equipment.

This experience could also be critical to Ukraine’s coming postwar recovery. The Ukrainian tech industry looks set to emerge from the war stronger than ever with a significantly enhanced global reputation. Crucially, the unique experience gained by Ukrainian tech companies in the defense tech sector will likely position Ukraine as a potential industry leader, with countries around the world eager to learn from Ukrainian specialists and access Ukrainian military tech solutions. This could serve as a key driver of economic growth for many years to come, while also improving Ukrainian national security.

David Kirichenko is an editor at Euromaidan Press, an online English language media outlet in Ukraine. He tweets @DVKirichenko.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


Global Strategy 2023: Winning the tech race with China
https://www.atlanticcouncil.org/content-series/atlantic-council-strategy-paper-series/global-strategy-2023-winning-the-tech-race-with-china/
Tue, 27 Jun 2023 13:00:00 +0000

The United States and the People’s Republic of China (PRC) are engaged in a strategic competition surrounding the development of key technologies. Both countries seek to out-compete the other to achieve first-mover advantage in breakthrough technologies, and to be the best country in terms of the commercial scaling of emerging and existing technologies.


As strategic competition between the United States and China continues across multiple domains, the Scowcroft Center for Strategy and Security, in partnership with the Global China Hub, has spent the past year hosting a series of workshops aimed at developing a coherent strategy for the United States and its allies and partners to compete with China on technology. Based on these workshops and additional research, we developed our strategy for the US to retain its technological advantage over China while competing alongside its allies and partners.

Strategy Paper Editorial board

Executive editors

Frederick Kempe
Alexander V. Mirtchev

Editor-in-chief

Matthew Kroenig

Editorial board members

James L. Jones
Odeh Aburdene
Paula Dobriansky
Stephen J. Hadley
Jane Holl Lute
Ginny Mulberger
Stephanie Murphy
Dan Poneman
Arnold Punaro

Executive summary

The United States and the People’s Republic of China (PRC) are engaged in a strategic competition surrounding the development of key technologies. Both countries seek to out-compete the other to achieve first-mover advantage in breakthrough technologies, and to be the best country in terms of the commercial scaling of emerging and existing technologies.

Until recently, the United States was the undisputed leader in the development of breakthrough technologies, and in the innovation and commercial scaling of emerging and existing technologies, while China was a laggard in both categories. That script has changed dramatically. China is now the greatest single challenger to US preeminence in this space. 

For the United States, three goals are paramount. The first is to preserve the US advantage in technological development and innovation relative to China. The second is to harmonize US strategy and policy with those of US allies and partners, while gaining favor with nonaligned states. The third is to retain international cooperation around trade in technology and in scientific research and exploration.

The strategy outlined in these pages has three major elements: the promotion of technologically based innovation; the protection of strategically valuable science and technology (S&T) knowhow, processes, machines, and technologies; and the coordination of policies with allies and partners. The shorthand for this triad is “promote, protect, and coordinate.”

On the promotion side, if the United States wishes to remain the leading power in scientific research and in translating that research into transformative technologies, then the US government—in partnership with state and local governments, the private sector, and academia—will need to reposition and recalibrate its policies and investments. On the protect side, a coherent strategy requires mechanisms to protect and defend a country’s S&T knowledge and capabilities from malign actors, including trade controls, sanctions, investment screening, and more. Smartly deploying these tools, however, is exceedingly difficult and requires the United States to hone its instruments in a way that yields only intended results. The coordination side focuses on “tech diplomacy,” given the need to ensure US strategy and policy positively influence as many allies, partners, and even nonaligned states as possible, while continuing to engage China on technology-related issues. The difficulty lies in squaring the interests and priorities of the United States with those of its allies and partners, as well as nonaligned states, and even China itself. 

This strategy assumes that China will remain a significant competitor to the United States for years to come. It also assumes that relations between the United States and China will remain strained at best or, at worst, devolve into antagonism or outright hostility. Even if a thaw were to reset bilateral relations entirely, the US interest in maintaining its advantage in technological development would remain. 

Any successful long-term strategy will require that the US government pursue policies that are internally well coordinated, are based on solid empirical evidence, and are flexible and nimble in the short run, while being attentive to longer-run trends and uncertainties. 

There are two major sets of risks accompanying this strategy. One is overreach: decoupling to preserve geopolitical advantages can be at odds with economic interests. The second involves harm to global governance, including failure to sustain cooperation on the norms and standards that guide S&T research, and failure to continue international scientific cooperation focused on solving global-commons challenges such as pandemics and climate change.

The recommendations that follow from this analysis include the following, all directed at US policymakers.

  1. Restore and sustain public research and development (R&D) funding for scientific and technological advancement.
  2. Improve and sustain STEM (science, technology, engineering, and math) education and skills training across K–12, university, community college, and technical schools.
  3. Craft a more diverse tech sector.
  4. Attract and retain highly skilled talent from abroad.
  5. Support whole-of-government strategy development.
  6. Ensure private-sector firms remain at the cutting edge of global competitiveness. 
  7. Improve S&T intelligence and counterintelligence.
  8. Ensure calibrated development and application of punitive measures. 
  9. Build out and sustain robust multilateral institutions.
  10. Engage with China, as it cannot be avoided.


A 2033 What If…

Imagine that it is the year 2033. Imagine that China has made enormous strides forward in the technology arena at the expense of the United States and its allies and partners. Suppose that this outcome occurred because, between 2023 and 2033, China’s economy not only does not weaken substantially but instead goes from strength to strength, including (importantly) increasing its capabilities in technological development and innovation. Suppose, too, that the US government failed to craft and maintain the kinds of investments and policies that are needed to sustain and enhance its world-leading tech-creation machine—its “innovation ecosystem”—to stay ahead of China. Suppose that the US government also failed to properly calibrate the punitive measures designed to prevent China from acquiring best-in-class technologies from elsewhere in the world—where calibration means the fine-tuning of policies to achieve prescribed objectives without spillover consequence. Finally, suppose that the United States and its allies and partners around the world failed to align with one another in terms of strategies and policies regarding how to engage China and, just as critically, about alignment of their own ends. What might that world look like?

Looking at that world from the year 2033, a first observation is that the era of US scientific and technological (S&T) advantage, which lasted from 1945 to the 2020s, has come to an end. In its place is a world where China’s government labs, universities, and firms are often the first to announce breakthrough scientific developments and the first to turn them into valuable technologies.

For the US government and for allies and partners in the Indo-Pacific region, the strategic consequences are severe, as China has not only closed much of the defense spending gap by 2033 but is also able to employ weaponry as advanced as, and in some cases more advanced than, that of the United States and its allies.1 Military planners from Washington to New Delhi watch China’s rising capabilities with much anxiety, given the geostrategic leverage that such changes have given Beijing across the region.

Nor is this problem the only headache for the United States and its coalition of partners in 2033. For a variety of reasons, many of China’s tech firms are outcompeting those elsewhere in the world, including some of the United States’ biggest and most important firms. Increasingly, the world looks as much to Shenzhen as to Silicon Valley for the latest tech-infused products and services.

China’s long-standing ambition to give its tech firms an advantage has paid off. The Chinese state has successfully pursued its strategy of commercial engagement with other countries, one that has been well known for decades and is characterized by direct and indirect financial and technical aid for purchases of Chinese hardware and software. This approach, while imperfect, drove adoption of Chinese technology abroad, with much of that adoption happening in the Global South.2 Across much of Africa, Latin America, South and Southeast Asia, and the Middle East, China has grown into the biggest player in the tech space, with its technologies appealing both to consumers and to many governments looking for financial assistance in upgrading their tech infrastructure. Moreover, China’s tech assistance has aided authoritarian governments seeking the means to control access to information, especially online, and the desire to surveil citizens and suppress dissent.3 China’s efforts have been a major reason why the internet has fractured in many countries around the world. The ideal of the internet as an open platform is largely gone, replaced by a system of filtered access to information—in many instances, access that is controlled by authoritarian and illiberal states.

In 2033, even the biggest US-based tech firms struggle to keep pace with Chinese firms, as do tech firms based in Europe, Asia, and elsewhere. Although still formidable, Western firms find themselves at a disadvantage in both domestic and foreign markets. China’s unfair trading practices have continued to give its firms an edge, even in the markets of mature, wealthy economies. These practices include massive direct and indirect state subsidies and regulatory support for its firms; suspect acquisition—often outright theft—of intellectual property (IP) from firms abroad; and requirements that foreign firms transfer technology to China in exchange for access to its enormous domestic consumer market, by 2033 the biggest in the world.4 When added to the real qualitative leaps that China has made in terms of the range and sophistication of its tech-based products and services, foreign firms are often on the back foot even at home. In sector after sector, China is capturing an increasingly large share of global wealth.

Nor is this all. China’s rising influence means that the democratic world has found it impossible to realize its preferences concerning the global governance of technology. This problem extends beyond China’s now significant influence on technical-standards development within the range of international organizations that are responsible for standards.5 The problem is much larger than even that. Since the early 2020s, because of decreasing interest in scientific cooperation, the United States, China, and Europe have been unable to agree on the basic norms and principles that should guide the riskiest forms of advanced tech development. As a result, big gaps have appeared in how the major players approach such development. This patchwork, incomplete governance architecture has meant that countries, firms, and even individual labs have forged ahead without common ethical-normative frameworks to guide research and development. In such fields as artificial intelligence (AI), China has increased its implementation of AI-based applications that have eroded individual rights and privacies—for example, AI-driven facial-recognition technologies used by the state to monitor individual activity—not only within China, but in parts of the world where its technologies have been adopted.6

Nor is even this long list all that is problematic in the year 2033. Scientific cooperation between the United States and China—and, by extension, China and many US allies and partners—has declined precipitously since 2023. Cross-national collaboration among the world’s scientists has always been a proud hallmark of global scientific research, delivering progress on issues ranging from cancer treatments to breakthrough energy research. Collaboration between China on the one hand, and Western states on the other, used to be a pillar of global science. Now, unfortunately, much of that collaboration has disappeared, given the rising suspicions and antagonism and the resulting policies that were implemented to limit and, in some cases, even block scientific exchange.7

From the perspective of developments that led to this point in the year 2033, the United States and its allies and partners failed to pursue a coherent, cooperative, and united strategy vis-à-vis strategic competition with China. Policymakers were unable to articulate, and then implement, policies that were consistent over time and across national context. Various international forums were created for engagement on strategy and policy questions, but they proved of low utility as policy harmonization bodies or tech trade-dispute mechanisms.

Opening session of US-China talks at the Captain Cook Hotel in Anchorage, Alaska, US March 18, 2021. REUTERS/Frederic J. Brown


Strategic context

The above scenario, which sketches a world in 2033 where China has gained the upper hand at the expense of the United States and its allies and partners, is not inevitable. As this strategy paper articulates, there is much that policymakers in the United States and elsewhere can do to ensure that more benign futures, from their perspectives, are possible. However, as this strategy paper also articulates, their success is far from a given.

The United States and the People’s Republic of China (PRC) are engaged in a strategic competition surrounding the development of key technologies, including advanced semiconductors (“chips”), AI, advanced computing (including quantum computing), a range of biotechnologies, and much more. Both countries seek to out-compete the other to achieve first-mover advantage in breakthrough technologies, and to be the best country at the commercial scaling of emerging and existing technologies.

These two capabilities—the first to develop breakthrough technologies and the best at tech-based innovation—overlap in important respects, but they are not identical and should not be regarded as the same thing. The first country to build a quantum computer for practical application (such as advanced decryption) is an example of the former capability; the country that is best at innovating on price, design, application, and functionality of electric vehicles (EVs) is an example of the latter capability. The former will give the inventing country a (temporary) strategic and military advantage; the latter will give the more innovative country a significant economic edge, indirectly contributing to strategic and military advantage. The outcome of this competition will go a long way toward determining which country—China or the United States—has the upper hand in the larger geostrategic competition between them in the coming few decades.

For China, the primary goal is to build an all-encompassing indigenous innovation ecosystem, particularly in sectors that Chinese leadership has deemed critical. Beijing views technology as the main arena of competition and rivalry with the United States, with many high-level policies and strategy documents released under Xi Jinping’s tenure emphasizing technology across all aspects of society. Under Xi’s direction, China has intensified its preexisting efforts to achieve self-sufficiency in key technology sectors, centering on indigenous innovation and leapfrogging the United States. 

On the US side, the Joe Biden administration and Congress have emphasized the need to maintain leadership in innovation and preserve US technological supremacy. Although there are many similarities between the Donald Trump and Biden administrations’ approaches to competition with China, one of the primary differences has been the Biden administration’s focus on bringing allies and partners onboard and making policies as coordinated and multilateral as possible. While a laudable goal, seamless coordination with allies and partners is proving difficult to implement.

Until recently, the United States was the undisputed leader in the development of breakthrough technologies, and in the innovation and commercial scaling of emerging and existing technologies. Until recently, China was a laggard in both categories, falling well behind the United States and most, if not all, of the world’s advanced economies in both the pace of scientific and technological (S&T) development and the ability to innovate around technologically infused products and services.

That script has changed dramatically as a result of China’s rapid ascension up the S&T ladder, starting with Deng Xiaoping’s reforms in the 1970s and 1980s and continuing through Xi Jinping’s tenure.8

Although analysts disagree about how best to measure China’s current S&T capabilities and its progress in innovating around tech-based goods and services, there is no dispute that China is now the greatest single challenger to US preeminence in this space. In some respects, China may already have important advantages over the United States and all other countries—for example, in its ability to apply what has been labeled “process knowledge,” rooted in the country’s vast manufacturing base, to improve upon existing tech products and invent new ones.9

Chinese President Xi Jinping speaks at the military parade marking the 70th founding anniversary of People’s Republic of China, on its National Day in Beijing, China October 1, 2019. REUTERS/Jason Lee

This competition represents a new phase in the two countries’ histories. The fall of the Berlin Wall and the decade that followed saw US leadership seek to include China as a member of the rules-based international order. In a March 2000 speech, President Bill Clinton spoke in favor of China’s entry into the World Trade Organization (WTO), arguing that US support of China’s new permanent normal trade relations (PNTR) status was “clearly in our larger national interest” and would “advance the goal America has worked for in China for the past three decades.”10 China’s leadership returned the favor, with President Jiang Zemin later stating that China “would make good on [China’s] commitments…and further promote [China’s] all-directional openness to the outside world.”11

Despite some US concerns, the period from 2001 through most of the Barack Obama administration saw Sino-American relations at their best.12 The lure of the Chinese market was strong, with bilateral trade in goods exploding from less than $8 billion in 1986 to more than $578 billion in 2016.13 People-to-people exchanges increased dramatically as well, with tourism from China increasing from 270,000 in 2005 to 3.17 million in 2017, and the number of student F-visas granted to PRC students increasing tenfold, from approximately 26,000 in 2000 to nearly 250,000 in 2014.14 US direct investment in China also grew significantly after 2000, as US companies saw the vast potential of the Chinese market and workforce. Notably, overall US investment in China continued to grow even after the COVID-19 pandemic.15

So what changed? In a 2018 essay titled “The China Reckoning,” China scholars Ely Ratner and Kurt Campbell—now both members of the Biden administration—described how the US plan for China and its role in the international system had not gone as hoped. 

Neither carrots nor sticks have swayed China as predicted. Diplomatic and commercial engagement have not brought political and economic openness. Neither US military power nor regional balancing has stopped Beijing from seeking to displace core components of the US-led system. And the liberal international order has failed to lure or bind China as powerfully as expected. China has instead pursued its own course, belying a range of American expectations in the process.

Campbell and Ratner, “The China Reckoning.”

These sentiments were shared by many others in Washington. Many felt like China was taking advantage of the United States as the Obama administration transitioned to its “pivot to Asia.” For example, in 2014 China sent an uninvited electronic-surveillance ship alongside four invited naval vessels to the US-organized Rim of the Pacific (RIMPAC) military exercises, damaging what had appeared to be improving military-to-military relations.16 On the economic side, despite the two sides signing an agreement in April 2015 not to engage in industrial cyber espionage, it soon became clear that China did not plan to uphold its side of the bargain. In 2017, the US Department of Justice indicted three Chinese nationals for cyber theft from US firms, including Moody’s Analytics, Siemens AG, and Trimble.17

Within China, political developments were also driving changes in the relationship. Xi Jinping assumed power in November 2012, and most expected him to continue on his predecessors’ trajectory. However, in 2015 a slew of Chinese policies caught the eye of outside observers, especially the “Made in China 2025” strategy that caused a massive uproar in Washington and other global capitals, given its explicit focus on indigenization of key sectors, including the tech sector. 

On the US side, when President Trump took office in 2017, the bilateral economic relationship came under further fire, sparked by growing concerns surrounding China’s unfair trade practices, IP theft, and the growing trade deficit between the two countries. For the first time, frustration over these issues brought about strong US policy responses, including tariffs on steel, aluminum, soybeans, and more, a Section 301 investigation of Chinese economic practices by the US trade representative, and unprecedented export controls on the Chinese firms Huawei and ZTE. On the Chinese side, a growing emphasis on self-reliance, in conjunction with narratives surrounding the decline of the West, has dominated the conversation at the highest levels of government. In many instances, some of these statements—like China’s relatively unachievable indigenization goals in the semiconductor supply chain—have pushed the US policy agenda closer toward one centering on zero-sum tech competition.

In 2023, the Biden administration continued some Trump-era policies toward China, often reaching for export controls as a means to prevent US-origin technology from making its way to China. The Biden administration is even considering restricting outbound investment into China, stemming from concerns around everything from pharmaceutical supply chains to military modernization. The bottom line is that US-China competition is intense, and is here to stay for the foreseeable future. 


Goals

There are three underlying goals for policymakers in the United States to consider when developing a comprehensive strategy. 

  1. Preserve the US advantage in technological development and innovation relative to China. Although the United States has historically led the world in the development of cutting-edge technologies, technological expertise, skills, and capabilities have proliferated worldwide and eroded this advantage. While the United States arguably retains first position, it can no longer claim to be the predominant global S&T power across the board. As a result, US leadership will have to approach this issue with a clear-eyed understanding of US capabilities and strengths, as well as weaknesses. 

    Further, it is impractical to believe that the United States alone can lead in all critical technology areas. US policymakers must determine (with the help of the broader scientific community) not only which technologies are critical to national security but also how these technologies are directly relevant in a national security context. This point suggests the need for aligning means with ends—what is the US objective in controlling or promoting a specific technology? Absent strong answers to this question, technology controls or promotion efforts will likely yield unintended results, both good and bad. 

    Moreover, the United States’ capacity to transform basic research into applications and commercial products is an invaluable asset that has propelled its innovation ecosystem for decades. In contrast, Chinese leadership is keenly aware of its deficiencies in this area. 

    First-mover advantage in laboratory scientific research is not the same thing as innovation excellence. A country needs both if it seeks predominance. A country can have outstanding scientific capabilities but poor innovation capacity (or vice versa). Claims that China is surpassing the United States and other advanced countries in critical technology areas are premature, and often fail to consider how metrics to assess innovative capacity interact with one another (highly cited publications, patents, investment trends, market shares, governance, etc.).18 Assessing a country’s ability to preserve or maintain its technological advantage requires a holistic approach that takes all of these factors into account.
  2. Harmonize strategy and policy with allies and partners, while gaining favor with nonaligned states. With respect to strategic competition vis-a-vis China, the interests of the United States are not always identical to those of its allies and partners. Any strategy designed to compete in the tech space with China needs to align with the strategies and interests of US allies and partners. Simultaneously, US strategy should offer benefits to nonaligned states within the context of this strategic competition with China, so as to curry favor with them.

    This goal is especially important, given that the United States relies on and benefits from a network of allies and partners, whereas China aspires to self-sufficiency in S&T development. To preserve the United States’ advantage, US leadership must first recognize that its network is one of the strongest weapons in the US arsenal.

    US allies and partners, of which there are many, want to maintain and strengthen their close diplomatic, security, and economic ties to the United States. The problem is that most also have substantial, often critical, economic relationships with China. Hence, they are loath to jeopardize their relationships with either the United States or China. 

    This strategic dilemma has become a significant one for US allies in both the transpacific and transatlantic arenas. As examples, Japan and South Korea, the two most advanced technology-producing countries in East Asia, are on the front lines of this dilemma. Their challenging situation owes to their geographic proximity to China on the one hand—and, hence, proximity to China’s strategic ambitions in the East and South China Seas, as well as Taiwan—and to their close economic ties to both China and the United States on the other.19 Although both have been attempting an ever-finer balancing act between the United States and China for years, the challenge is becoming more difficult.20 In January 2023, Japan reportedly joined the United States and the Netherlands to restrict sales of advanced chipmaking lithography machines to China, despite the policy being against its clear economic interests.21 In April and May 2023, even before China banned sales of chips from Micron Technology, a US firm, the US government was urging the South Korea government to ensure that Micron’s principal rivals, South Korea’s Samsung Electronics and SK Hynix, did not increase their sales in China.22

    For nonaligned states, many of which are in the Global South, their interests are manifold and not easily shoehorned into a US-versus-China bifurcation. Many states in this category have generalized concerns about a world that is dominated by either Washington or Beijing, and, as such, are even more interested in hedging than are the closest US allies and partners. Their governments and business communities seek trade, investment, and access to technologies that can assist with economic development, while their consumers seek affordable and capable tech. Although China has made enormous strides with respect to technological penetration of markets in the Global South, there also is much opportunity for the United States and its allies and partners, especially given widespread popular appetite for Western ideals, messaging, and consumer-facing technologies.23
  3. Retain cooperation around trade and scientific exploration. One of the risks that is inherent in a fraught Sino-American bilateral relationship is that global public-goods provision will be weakened. Within the context of rising tensions over technological development, there are two big concerns: first, that global trade in technologically based goods and services will be harmed, and second, that global scientific cooperation will shrink. 

    An open trading system has been an ideal of the rules-based international order since 1945, built on the premise that fair competition within established trading rules is best for global growth and exchange. The US-led reforms at the end of World War II and in the early postwar period gave the world the Bretton Woods system, which established the International Monetary Fund (IMF), plus the Marshall Plan and the General Agreement on Tariffs and Trade (GATT). Together, these reforms enabled unprecedented multi-decade growth in global trade.24 China’s accession to the WTO in 2001, which the US government supported, marked a high point, as many interpreted China’s entry as an endorsement of the global trade regime based on liberal principles. However, since then—and for reasons having much to do with disagreements over China’s adherence to WTO trading rules—this global regime has come under significant stress. In 2023, with few signs that the Sino-American trade relationship will improve, there is significant risk of damage to the global trading system writ large.25

    Any damage done to the global trading system also risks harm to trade between the two countries, which is significant given its ongoing scale (in 2022, bilateral trade in goods measured a record $691 billion).26 Tech-based trade and investment remain significant for both countries, as illustrated by the February 2023 announcement of a $3.5-billion partnership between Ford Motor Company and Contemporary Amperex Technology Limited (CATL) to build an EV-battery plant in Michigan using CATL-licensed technology.27 A priority for US policymakers should be to preserve trade competition in tech-infused goods and services, at least for those goods and services that are not subject to national security-based restrictions and where China’s trade practices do not result in unfair advantages for its firms. 

    Beyond trade, there are public-goods benefits resulting from bilateral cooperation in the S&T domain. These benefits extend to scientific research that can hasten solutions to global-commons challenges—for example, climate change. China and the United States are the two most active countries in global science, and are each other’s most important scientific-research partner.28 Any harm done to their bilateral relationship in science is likely to decrease the quality of global scientific output. Further, the benefits from cooperation also extend to creation and enforcement of international norms and ethics surrounding tech development in, for example, AI and biotechnology.
A worker conducts a quality check of a solar module at a factory of monocrystalline silicon solar equipment manufacturer LONGi Green Technology Co, in Xian, Shaanxi province, China, December 10, 2019. REUTERS/Muyu Xu

Major elements of the strategy

The strategy outlined in these pages has three major elements: the promotion of technologically based innovation, sometimes labeled “running faster”; the protection of strategically valuable S&T knowhow, processes, machines, and technologies; and the coordination of policies with allies and partners. This triad—promote, protect, and coordinate—is also shorthand for the most basic underlying challenge facing strategists in the US government and in the governments of US allies and partners. In the simplest terms, strategists should aim to satisfy the “right balance between openness and protection,” in the words of the National Academies of Sciences, Engineering, and Medicine.29 This strategic logic holds for both the United States and its allies and partners.

  1. Promote: The United States has been the global leader in science and tech-based innovation since 1945, if not earlier. However, that advantage has eroded, in some areas significantly, in particular since the end of the Cold War. If the United States wishes to remain the leading power in scientific research and in translating that research into transformative technologies (for military and civilian application), then the US government, in partnership with state and local governments, the private sector, and academia, will have to reposition and recalibrate its policies and investments.

    The preeminence of America’s postwar innovation ecosystem resulted from several factors, including: prewar strengths across several major industries; massive wartime investments in science, industry, and manufacturing; and even larger investments made by the US government in the decades after the war to boost US scientific and technological capabilities. The 1940s through 1960s were especially important, owing to the whole-of-society effort behind prosecuting World War II and then the Cold War. The US government established many iconic S&T-focused institutions, including the National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), the National Aeronautics and Space Administration (NASA), and most of the country’s national laboratories (e.g., Sandia National Laboratories and Lawrence Livermore National Laboratory), and it dramatically boosted funding for science education, public-health research, and academic scientific research.30

    This system, and the enormous investments made by the US government to support it, spurred widespread and systematized cooperation among government, academic science, and the private sector. This cooperation led directly to a long list of breakthrough technologies for military and civilian purposes, and to formation of the United States’ world-leading tech hubs, Silicon Valley most prominent among them.31

    The trouble is that after the Cold War ended, “policymakers [in the US government] no longer felt an urgency and presided over the gradual and inexorable shrinking of this once preeminent system,” in particular through allowing federal spending on research and development (R&D) and education to flatline or even atrophy.32 From a peak of around 2.2 percent of national gross domestic product (GDP) in the early 1960s, federal R&D spending has declined since, reaching a low of 0.66 percent in 2017 before rebounding slightly to 0.76 percent in 2023.33

    Today, US competitors, including China, have figured out the secrets to growing their own innovation ecosystems (including the cultural dimensions that historically have been key to separating the United States from its competition) and are investing the necessary funding to do so. For example, several countries, especially China, have outpaced the United States in R&D spending. Between 1995 and 2018, China’s R&D spending grew at an astonishing 15 percent per annum, about double the rate of the next-fastest country, South Korea, and about five times that of the United States. By 2018, China’s total R&D spending (from public and private sources) was in second place behind the United States and had surpassed the total for the entire European Union.34 From the US perspective, other metrics are equally concerning. A 2021 study by Georgetown University’s Center for Security and Emerging Technology (CSET) projected that China will produce nearly twice as many STEM PhDs as the United States by 2025 (counting only US citizens graduating with a PhD in STEM, the figure would be three times as many). This projection is based, in part, on China’s government doubling its investment in STEM higher education during the 2010s.35

    The United States retains numerous strengths, including the depth and breadth of its scientific establishment, number and sizes of its Big Tech firms, robust startup economy and venture capital to support it, numerous world-class educational institutions, dedication to protection of intellectual property, relatively open migration system for high-skilled workers, diverse and massive consumer base, and its still-significant R&D investments from public and private sources.36

    In addition, over the past few years there have been encouraging signs of a shift in thinking among policymakers, away from allowing the innovation model that won the Cold War to further erode and toward increased bipartisan recognition that the federal government has a critical role to play in updating that system. As was the case with the Soviet Union, this newfound interest in strengthening the US innovation ecosystem owes much to a recognition that China is a serious strategic competitor to the United States in the technology arena.37 The Biden administration’s passage of several landmark pieces of legislation, including the CHIPS and Science Act, the Inflation Reduction Act (IRA), and the Infrastructure Investment and Jobs Act (IIJA), increased the amount of federal government spending on S&T, STEM education and skills training, and various forms of infrastructure (digital and physical), all of which are concrete evidence of the degree to which this administration and much of Congress recognize the stiff challenge from China.
  2. Protect: A coherent strategy requires mechanisms to protect and defend a country’s S&T knowledge and capabilities from malign actors. Policy documents and statements from US officials over the past decade have called out the many ways in which the Chinese state orchestrates technology transfer through licit and illicit means, ranging from talent-recruitment programs and strategic mergers and acquisitions (M&A) to outright industrial espionage via cyber intrusion and other tactics.38

    On the protect side, tools include trade controls, sanctions, investment screening, and more. On the export-control side, both the Trump and Biden administrations have relied on dual-use export-control authorities to both restrict China’s access to priority technologies and prevent specific Chinese actors (those deemed problematic by the US government) from accessing US-origin technology and components.39 Investment screening has also been a popular tool; in 2018, Congress passed the bipartisan Foreign Investment Risk Review Modernization Act (FIRRMA) that strengthened and modernized the Committee on Foreign Investment in the United States (CFIUS)—an interagency body led by the Treasury Department that reviews inbound foreign investment for national security risks.40 Under the Biden administration, a new emphasis on the national security concerns associated with US outbound investment into China has arisen, with an executive order focused on screening outbound tech investments in the works for almost a year.41 On sanctions, although the United States has so far been wary of deploying them against China, the Biden administration has, in conjunction with thirty-eight other countries, imposed a harsh sanctions regime on Russia and Belarus following Russia’s unprovoked invasion of Ukraine.42

    Trade controls can be effective tools, but they need to be approached with a clear alignment between means and ends. For decades, an array of export controls and other regulations have worked to prevent rivals from accessing key technologies. However, historical experience (such as that of the US satellite industry) shows that, absent such alignment, trade controls can have massive implications for the competitiveness of US industries and, by extension, US national security.43

    Before deploying these tools, it is critical for policymakers to first identify what China is doing—both within and outside its borders—in its attempts to acquire foreign technology, an evaluation that should allow the United States to hone more targeted controls that can yield intended results. Trade controls that are too broad and ambiguous tend to backfire, as they create massive uncertainties that lead to overcompliance on the part of industry, in turn causing unintended downside consequences for economic competitiveness.

    Understanding China’s strategy for purposes of creating effective trade controls is not as difficult as it once appeared. For instance, a 2022 report from CSET compiled and reviewed thirty-five articles on China’s technological import dependencies.44 This series of open-source articles, published in Chinese in 2018, provides specific and concrete examples of Chinese S&T vulnerabilities that can be used by policymakers to assess where and how to apply trade controls. Other similar resources exist. Although the Chinese government appears to be systematically tracking and removing these as they receive attention, there are ways for US government analysts and scholars to continue making use of these materials that preserve the original sources.
  3. Coordinate: The final strategy pillar is outward facing, focused on building and sustaining relationships with other countries in and around the tech strategy and policy space. This pillar might be labeled “tech diplomacy,” given the need to ensure US strategy and policy positively influences as many allies, partners, and even nonaligned states as possible, while continuing to engage China on technology-related issues. As with the other two pillars, this pillar is simple to state as a priority, but difficult to realize in practice.

    In a May 2022 speech, US Secretary of State Antony Blinken said that the administration’s shorthand formula is to “invest, align, [and] compete” vis-a-vis China.45 Here, he meant “invest” to refer to large public investments in US competitiveness, “align” to refer to closer coordination with allies and partners on tech-related strategy and policy, and “compete” to refer largely to geostrategic competition with China over Taiwan, the East and South China Seas, and other areas.

    Blinken’s remarks underscore the Biden administration’s priority for allies and partners to view the United States as a trusted interlocutor. When it comes to technology policy on China, the trouble lies in the execution—in particular, overcoming the tensions inherent in the “invest, align, compete” formula. After Blinken’s speech, for example, the IRA became law, triggering a firestorm of protest among the United States’ closest transpacific and transatlantic allies. Viewing the IRA’s ample support for domestic production and manufacturing of electric vehicles and renewable-energy technologies—designed to boost the US economy and tackle climate change while taking on China’s advantages in these areas—as protectionist, the European Union (EU) went so far as to formulate a Green Deal Industrial Plan, widely seen as an industrial policy response to the IRA.46 Much of the row over the IRA resulted from the perception—real or not—that the United States had failed to properly consider allies’ and partners’ interests while formulating the legislation. In the words of one observer, “amid the difficult negotiations at home on the CHIPS Act and the IRA, allies and partners were not consulted, resulting in largely unintended negative consequences for these countries.”47

    Long-term investment by US policymakers in multilateral institutions focused on technology will be a critical element of any successful strategy. The Biden administration is already making strides on this front through several multilateral arrangements, including the resurrection of the Quadrilateral Security Dialogue (the Quad) and the establishment of the US-EU Trade and Technology Council (TTC) and the AUKUS trilateral pact. All three of these arrangements have dedicated time and resources to specific technological issues in both the military/geopolitical and economic spheres, and all three have the potential to be massively impactful in terms of technology competition.

    However, history has shown that these types of arrangements are only effective as long as high-level political leadership remains involved and dedicated to the cause. Cabinet officials and other high-level leaders from all participating countries—especially the United States—will have to demonstrate continued interest in and commitment to these arrangements if they want them to produce more than a handful of documents with broad strategic visions.

Assumptions

The strategy outlined in these pages rests on two plausible assumptions. First, this strategy assumes that China will not follow the Soviet Union into decline, collapse, and disintegration anytime soon, which, in turn, means that China should remain a significant competitor to the United States for a long time to come.

China’s leadership has studied the collapse of the Soviet Union closely and learned from it, placing enormous weight on delivering economic performance through its brand of state capitalism while avoiding the kind of reforms that Mikhail Gorbachev instituted during the 1980s, which included freer information flows, freer political discourse, and ideological diversity within the party and state—all of which Chinese leadership believes to have been key to the Soviet Union’s undoing.48 China also does not have analogous centrifugal forces that threaten an internal breakup along geographic lines as did the Soviet Union, which had been constructed from the outset as a federation of republics built upon the contours of the tsarist empire. (The Soviet Union, after all, was a union of Soviet Socialist republics scattered across much of Europe and Asia).49

These factors weigh against an assessment that China will soon collapse. Nicholas Burns, the US ambassador to China, has said recently that China is “infinitely stronger” than the Soviet Union ever was, “based on the extraordinary strength of the Chinese economy” including “its science and technology research base [and] innovative capacity.” He concluded that the Chinese challenge to the United States and its allies and partners “is more complex and more deeply rooted [than was the Soviet Union] and a greater test for us going forward.”50

A more realistic long-term scenario is one in which the United States and its allies and partners would need to manage a China that will either become stronger or plateau, rather than one that will experience a steep decline. Both variants of this scenario are worrisome, and both underscore the need to hew to the strategy outlined in this paper. A stronger China brings with it obvious challenges. A plateaued China is a more vexing case, owing to the very real possibility that Chinese leadership might conclude that, as economic stagnation portends a future decline and fall, the case for military action (e.g., against Taiwan) is more, rather than less, pressing. The strategist Hal Brands, for example, has suggested that a China that has plateaued will become more dangerous than it is now, requiring a strategy that is militarily firm, economically wise (including maintenance of the West’s advantages in the tech-innovation space), and diplomatically flexible.51

Second, the strategy outlined here assumes that relations between the United States and China will remain strained at best or, at worst, devolve into antagonism or outright hostility. In 2023, the assumption of ongoing strained relations appears wholly rational, based on a straightforward interpretation of all available diplomatic evidence.

How this strategy should shift if the United States and China were to have a rapprochement would depend greatly on the durability and contours of that shift. Even if a thaw were to reset bilateral relations to where they were at the beginning of the century (an unlikely prospect), the US interest in maintaining a first-mover advantage in technological development would remain. As reviewed in this paper, there was a long period during which the United States and China traded technologically based goods and services in a more open-ended trading regime than is currently the case. During that period, the United States operated on two presumptions: that China’s S&T capabilities were nowhere near as developed as its own, and that the US system could stay ahead owing to its many strengths compared with China’s.

The trouble with returning to this former state is that both presumptions no longer hold. China has become a near-peer competitor in science and technological development, and its innovative capabilities are considerable.

If China and the United States were to thaw their relationship, the policy question would concern the degree to which the United States would reduce its “protect” measures—the import and export restrictions, sanctions, and other policies designed to keep strategic technologies and knowhow from China, while protecting its own assets from espionage, sabotage, and other potential harms.

Guidelines for implementation

As emphasized throughout this paper, any successful long-term strategy will require that the US government pursue policies that are internally well coordinated, are based on solid empirical evidence, and are flexible and nimble in the short run, while being attentive to longer-run trends and uncertainties. The government will need to improve its capabilities in three areas.

  1. Improved intelligence and counterintelligence: The US government will need to reassess, improve, and extend its intelligence and counterintelligence capabilities regarding tech development. The Intelligence Community will need to be able to conduct ongoing, comprehensive assessments of tech trends and uncertainties relevant to the strategic competition with China. To properly gauge the full range of relevant and timely information about China’s tech capabilities, the Intelligence Community’s practice of relying on classified materials will need to be augmented by stressing unclassified open-source material. Classified sources, which the Intelligence Community always has prioritized, do not provide a full picture of what is happening in China. Patent filings, venture-capital investment levels and patterns, scientific and technical literature, and other open sources can be rich veins of material for analysts looking to assess where China is making progress, or seeking to make progress, in particular S&T areas. The US government’s prioritization of classified material contrasts with the Chinese government’s approach. For decades, China has employed “massive, multi-layered state support” for the “monitoring and [exploitation] of open-source foreign S&T.”52 There is recognition that the US government needs to upgrade its capabilities in this respect. In 2020, the House Permanent Select Committee on Intelligence observed that “open-source intelligence (OSINT) will become increasingly indispensable to the formulation of analytic products” about China.53

    An intelligence pillar will need a properly calibrated counterintelligence element to identify where China might be utilizing its means and assets—including legal, illegal, and extralegal ones—to obtain intellectual property in the United States and elsewhere (China has a history of utilizing multiple means, including espionage, to gain IP that is relevant to its S&T development).54 Here, “properly calibrated” refers to how counterintelligence programs must ensure that innocent individuals, including Chinese nationals who are studying or researching in the United States, are not brought under undue or illegitimate scrutiny. At the same time, these programs must be able to identify, monitor, and then handle as appropriate those individuals who might be engaging in industrial espionage or other covert activities. The Trump administration’s China Initiative was criticized both for its name (it implied that Chinese nationals and anyone of East Asian descent were suspect) and for the perception of overzealous enforcement (the program resulted in several high-profile cases ending in dismissal or exoneration for the accused). In 2022, the Biden administration shuttered this initiative and replaced it with “a broader strategy aimed at countering espionage, cyberattacks and other threats posed by a range of countries.”55
  2. Improved foresight: Strategic-foresight capabilities assist governments in understanding and navigating complex and fast-moving external environments. Foresight offices in government and the private sector systematically examine long-term trends and uncertainties and assess how these will shape alternative futures. These processes often challenge deeply held assumptions about where the world is headed, and can reveal where existing strategies perform well or poorly.

    This logic extends to the tech space, where the US government should develop a robust foresight apparatus to inform tech-focused strategies and policies at the highest levels. The purpose of this capability would be to enhance and deepen understanding of where technological development might take the United States and the world. Such a foresight capability within the US government would integrate tech-intelligence assessments, per above, into comprehensive foresight-based scenarios about how the world might unfold in the future. The US government has impressive foresight capabilities already, most famously those provided by the National Intelligence Council (NIC). However, for a variety of reasons, including distance from the center of executive power, neither the NIC nor other foresight offices within the US government currently perform the foresight function described here. The US government should institutionalize a foresight function within or closely adjacent to the White House—for example, within the National Security Council or as a presidentially appointed advisory board. Doing so would give foresight the credibility and mandate to engage the most critical stakeholders from across the entire government and from outside of it, a model followed by leading public foresight offices around the world.56 This recommendation is consistent with numerous others put forward by experts over the past decade, which stress how the US government needs to give foresight more capabilities while bringing it closer to the office of the president.57
  3. Improved S&T strategy and policy coordination: One of the major challenges facing the US government concerns internal coordination around S&T strategy and policy. As technology is a broad and multidimensional category, the government’s activities are equally broad, covered by numerous statutes, executive orders, and administrative decisions. One of many results is a multiplicity of departments and agencies responsible for administering the many different pieces of the tech equation, from investment to development to monitoring, regulation, and enforcement. In just the area of critical technology oversight and control, for example, numerous departments including Commerce, State, Defense, Treasury, Homeland Security, and Justice, plus agencies from the Intelligence Community, all have responsibilities under various programs.58

    Moreover, the US government’s approach to tech oversight tends to focus narrowly on control of specific technologies, which leads to an underappreciation of the broader contexts in which technologies are used. A report issued in 2022 by the National Academies of Sciences, Engineering, and Medicine argued that the US government has historically addressed tech-related risks by assessing individual critical technologies, defining the risks associated with each, and then attempting to restrict who can access each type of technology. Given that technologies now are “ubiquitous, shared, and multipurpose,” the National Academies asserted, a smarter approach would be to focus on the motives of bad-faith actors to use technologies and then define the accompanying risks.59 This approach “requires expertise that goes beyond the nature of the technology to encompass the plans, actions, capabilities, and intentions of US adversaries and other bad actors, thus involving experts from the intelligence, law enforcement, and national defense communities in addition to agency experts in the technology.”60


US Secretary of State Antony Blinken meets with Chinese President Xi Jinping in the Great Hall of the People in Beijing, China, June 19, 2023. REUTERS

Major risks

There are two major sets of risks accompanying this strategy, both of which involve the potential damage that might result from failure to keep the strategic competition within acceptable boundaries. 

  1. Decoupling run amok: Overreach is one of the biggest risks associated with this strategy. Geopolitical and economic goals can contradict one another, and it is difficult to determine where to draw the line between them. Reconciling this dilemma will be the hardest part of a coherent and effective competition strategy.

    Technology decoupling to preserve geopolitical advantages can be at odds with economic interests, as the United States is currently experiencing in the context of semiconductors. The October 7, 2022, export controls were deemed necessary for geopolitical reasons, as the White House’s official rationale for the policy centered on the use of semiconductors for military modernization and violations of human rights. However, limiting the ability of US companies like Nvidia, Applied Materials, KLA, and Lam Research to export their products and services to China, in addition to imposing complex compliance burdens on these firms, has the potential to undermine their ability to compete in the global semiconductor industry.

    In addition, the continued deployment of decoupling tactics like export controls can put allies and partners in a position where they feel forced to choose sides between the United States and China. On the October 7 export controls, it took months to convince the Netherlands and Japan—two key producer nations in the semiconductor supply chain whose participation is critical to the success of these export controls—to get on board with US policy.61 Even now, although media reporting says an agreement has been reached, no details of the agreement have been made public, likely due to concerns surrounding Chinese retaliation.

    These issues are not exclusive to trade controls or protect measures. On the promote side, the IRA has also put South Korea in a difficult position as it relates to EVs and related components. When first announced, many on the South Korean side argued that the EV provisions of the IRA violated trade rules. At one point in late 2022, the South Korean government considered filing a complaint with the WTO over the issue.62 Although tensions seem to have cooled between Washington and Seoul—and the Netherlands and Japan have officially, albeit privately, agreed to join the US on semiconductor controls—these two instances should be lessons for US policymakers in how to approach technology policies going forward. Policies that push allies and partners too hard to decouple from the Chinese market are likely to be met with resistance, as many (if not all) US allies have deeply woven ties with Chinese industry, and often lack the domestic capabilities and resources that insulate the United States from potential harm. China is acutely aware of this dynamic, and will likely continue to exploit it to convince US allies not to join US decoupling efforts. China has historically leveraged economic punishments against countries for a variety of reasons, so US policymakers should incorporate this reality into their planning to avoid putting allies in untenable positions.

    Recently, government officials within the Group of Seven (G7) have been using the term “de-risking” instead of “decoupling.” The term was first used by a major public official in March 2023, when Ursula von der Leyen, president of the European Commission, called for an “open and frank” discussion with China on contentious issues.63 It appeared again in the G7 communique of May 2023, which stated that economic security should be “based on diversifying and deepening partnerships and de-risking, not de-coupling.”64 This rhetorical shift represents a recognition that full economic decoupling from China is unwise, and perhaps impossible. It is also a tacit admission that decoupling sends the wrong signals not only to China, but to the private sector in the West as well.

    In the authors’ opinion, de-risking is superior to decoupling as a rhetorical device—but changes in phrasing do not solve the underlying problem for policymakers in the United States, Europe, East Asia, and beyond. That problem is to define and then implement a coherent strategy, coordinated across national capitals, that enables these countries to stay a step ahead of China in developing cutting-edge technology while preventing an economically disastrous trade war.
  2. Harm to global governance: Another major set of risks involves the harms to global governance should the strategic competition between the United States and China continue on its current trajectory. Although the strategy outlined in these pages emphasizes, under the coordination pillar, maintenance of global governance architecture—the norms, institutions, pathways, laws, good-faith behavior, and so on that guide technology development—there is no guarantee that China and the United States, along with other important state and nonstate actors, will be able to do so given conflicting pressures to reduce or eliminate cooperative behavior. 

    Tragic outcomes of this strategic competition, therefore, would be: failure to continue cooperation regarding development of norms and standards that should guide S&T research; and failure to continue S&T research cooperation focused on solving global-commons challenges such as pandemics and climate change. 

    Any reduction in cooperation among the United States, China, and other leading S&T-research countries will harm the ability to establish norms and standards surrounding tech development in sensitive areas—for instance, in AI or biotechnology. As recent global conversations about the risks associated with rapid AI development show, effective governance of these powerful emerging technologies is no idle issue.65

    Even under the best of circumstances, however, global governance of such technologies is exceedingly difficult. For example, Gigi Kwik Gronvall, an immunologist and professor at Johns Hopkins University, has written that biotechnology development is “inherently international and cannot be controlled by any international command and control system” and that, therefore, “building a web of governance, with multiple institutions and organizations shaping the rules of the road, is the only possibility for [effective] governance.”66 By this, she meant that—although a single system of rules for governing biotechnology development is impossible to create, given the speed of biotech research and the multiplicity of research actors involved (private and public-sector labs, etc.) around the world—it is possible to support a “web of governance” made up of institutions, such as the WHO, that set norms and rules. Although this system is imperfect, as she admits, it is much better than the alternative, which is to have no governance web at all. The risk of a weak or nonexistent web becomes much more real if the United States, China, and other S&T leaders fail to cooperate in strengthening it.


Conclusions and recommendations

The arguments advanced in this paper provide an overview of the range and diversity of policy questions that must be taken into consideration when formulating strategies to compete with China in science and technology. This final section offers a set of recommendations that follow from this analysis.

  1. Restore and sustain public R&D funding for scientific and technological advancement. As noted in this paper, public investment in R&D—most critically, federal-government investment in R&D—has been allowed to atrophy since the end of the Cold War. Although private-sector investment was then, and is now, a critical component of the nation’s R&D spending, public funding is also imperative for pure scientific research (versus applied research) and for funneling R&D toward ends that are in the public interest (defense, public health, etc.). Although the CHIPS and Science Act and the IRA both pledge massive increases in the amount of federal R&D investment, there is no guarantee that increased funding will be sustained over time. Less than a year after the CHIPS Act was signed into law, funding levels proposed in Congress and by the White House have fallen well short of amounts specified in the act.67
  2. Improve and sustain STEM education and skills training across K–12, university, community college, and technical schools. It is widely recognized that the United States has fallen behind peer nations in STEM education and training at all levels, from K–12 through graduate training.68 Although the Biden administration’s signature pieces of legislation, including the CHIPS Act, address this problem through increased funding vehicles for STEM education and worker-training programs, the challenge for policymakers will be to sustain interest in, and levels of funding for, such programs well into the future, analogous to the federal R&D spending challenge. Other related problems include the high cost of higher education, driven in part by reduced funding from US states, which pushes students into long-term indebtedness, and the need to boost participation in (and reduce stigma around) STEM-related training at community colleges and technical schools.69 Germany’s well-established, well-funded, and highly respected technical apprenticeship programs offer one model.70
  3. Craft a more diverse tech sector. A closely related challenge is to ensure that the tech sector in the United States reflects the country’s diversity, defined in terms of gender, ethnicity, class, and geography. This is a long-term challenge that has multiple roots and many different pathways to success, including public investment in education, training, and apprenticeship programs, among other things.71 Among the most challenging problems (with potentially the most beneficial solutions) are those rooted in economic geography—specifically, regional imbalances in the knowledge economy, where places like Silicon Valley and Boston steam ahead and many other places fall behind. As in other areas, recent legislation including the IRA, CHIPS Act, and IIJA has called for billions in funding to spread the knowledge economy to a greater number of “tech hubs” around the country. As with other pieces of the investment equation, however, there is no guarantee that these billions will be allocated under current legislation.72
  4. Attract and retain high-skilled talent from abroad. One of the United States’ enduring strengths is its ability to attract and retain the world’s best talent, which has been of enormous benefit to its tech sector. A December 2022 survey conducted by the National Bureau of Economic Research (NBER), for example, found that between 1990 and 2016, about 16 percent of all inventors in the United States were immigrants, who, in turn, were responsible for 23 percent of all patents filed during the same period.73 Although the United States is still the top destination for high-skilled migrants, other countries have become more attractive in recent years, owing both to their tech-savvy immigration policies and to persistent problems with the US H-1B visa system.74
  5. Support whole-of-government strategy development. This paper stresses the need to improve strategic decision-making regarding technology through improving (or relocating) interagency processes and foresight and intelligence capabilities. One recommendation is to follow the suggestion by the National Academies of Sciences, Engineering, and Medicine, and bring a whole-of-government strategic perspective together under the guidance of the White House.75 Such a capability would bring under its purview, or draw upon, a tech-focused foresight capacity, as well as an improved tech-focused intelligence apparatus (see below). The CHIPS Act contains provisions that call for development of quadrennial S&T assessments followed by technology strategy formulation, both to be conducted by the White House’s Office of Science and Technology Policy (OSTP).76 A bill that was introduced in June 2022 by Senators Michael Bennet, Ben Sasse, and Mark Warner (and reintroduced in June 2023) would, if passed, create an Office of Global Competition Analysis, the purpose of which would be to “fuse information across the federal government, including classified sources, to help us better understand U.S. competitiveness in technologies critical to our national security and economic prosperity and inform responses that will boost U.S. leadership.”77
  6. Ensure private sector firms remain at the cutting edge of global competitiveness. Policymakers will need to strengthen the enabling environment to allow US tech firms to meet and exceed business competition from around the world. Doing so will require constant monitoring of best-practice policy development elsewhere, based on the presumption that other countries are tweaking their own policies to outcompete the United States. Policymakers will need to properly recalibrate, as appropriate and informed by best practices, an array of policy instruments including labor market and immigration policies, types and level of infrastructural investments, competition policies, forms of direct and indirect support, and more. An Office of Global Competition Analysis, as referred to above, might be an appropriate mechanism to conduct the horizon scanning tasks necessary to support this recommendation.
  7. Improve S&T intelligence and counterintelligence. Consistent with the observations about shortcomings in the US Intelligence Community regarding S&T collection, analysis, and dissemination, some analysts have floated creation of an S&T intelligence capability outside the Intelligence Community itself. This capability would be independent of other agencies and departments within the government and would focus on collection and analysis of S&T intelligence for stakeholders within and outside of the US government, as appropriate.78
  8. Ensure calibrated development and application of punitive measures. As this paper has stressed at multiple points, although the US government has powerful protect measures at its disposal, implementing those measures often comes with a price, including friction with allies and partners. The US government should create an office within the Bureau of Industry and Security (BIS) at the Commerce Department to monitor the economic impact (intended and unintended) of its export-control policies on global supply chains before they are implemented (including impacts on allied and partner economies).79 This office would have a function that is similar in intent to the Sanctions Economic Analysis Unit, recently established at the US Treasury to “research the collateral damage of sanctions before they’re imposed, and after they’ve been put in place to see if they should be adjusted.”80
  9. Build out and sustain robust multilateral institutions. This paper has stressed that any effort by the United States to succeed in its tech-focused competition with China will require that it successfully engage allies and partners in multilateral settings such as the EU-TTC, Quad, and others. As with so many other recommendations on this list, success will be determined by the degree to which senior policymakers can stay focused over the long run (i.e., across administrations) on this priority and in these multilateral forums. In addition, US policymakers might consider updating multilateral forums based on new realities. For example, some analysts have called for the creation of a new multilateral export-control regime that would have the world’s “techno-democracies…identify together the commodities, software, technologies, end uses, and end users that warrant control to address shared national security, economic security, and human rights issues.”81
  10. Maintain engagement with China. The downturn in bilateral relations between the United States and China should not obscure the need to continue engaging China on S&T as appropriate, and as opportunities arise. There are zero-sum tradeoffs involved in the strategic competition with China over technology. At the same time, there are also positive-sum elements within that competition that need to be preserved or even strengthened. As the Ford-CATL Michigan battery-plant example underscores, trade in nonstrategic technologies (EVs, batteries, etc.) benefits both countries, assuming trade occurs on a level playing field. The same is true of science cooperation: global research on climate change and disease prevention risks shrinking if Sino-American scientific exchange falls dramatically. Policymakers in the United States will need to accept some amount of S&T collaboration risk with China. They will need to decide what is (and is not) of highest risk and communicate that effectively to US allies and partners around the world, the scientific community, and the general public.


The authors would like to thank Noah Stein for his research assistance with this report.


The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

Global China Hub

The Global China Hub researches and devises allied solutions to the global challenges posed by China’s rise, leveraging and amplifying the Atlantic Council’s work on China across its 15 other programs and centers.

1    Although China likely will not close the spending gap with the United States by the mid-2030s, current spending trajectories strongly suggest that China will have narrowed the gap considerably. See the US-China bilateral comparison in: “Asia Power Index 2023,” Lowy Institute, last visited June 13, 2023, https://power.lowyinstitute.org; “China v America: How Xi Jinping Plans to Narrow the Military Gap,” Economist, May 8, 2023, https://www.economist.com/china/2023/05/08/china-v-america-how-xi-jinping-plans-to-narrow-the-military-gap.
2    See, e.g., the arguments presented by: Bryce Barros, Nathan Kohlenberg, and Etienne Soula, “China and the Digital Information Stack in the Global South,” German Marshall Fund, June 15, 2022, https://securingdemocracy.gmfus.org/china-digital-stack/.
3    For a brief overview of China’s efforts in this regard, see: Bulelani Jili, China’s Surveillance Ecosystem and the Global Spread of Its Tools, Atlantic Council, October 17, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/chinese-surveillance-ecosystem-and-the-global-spread-of-its-tools/.
4    For background to these practices, see: Karen M. Sutter, “‘Made in China 2025’ Industrial Policies: Issues for Congress,” Congressional Research Service, March 10, 2023, https://sgp.fas.org/crs/row/IF10964.pdf; Gerard DiPippo, Ilaria Mazzocco, and Scott Kennedy, “Red Ink: Estimating Chinese Industrial Policy Spending in Comparative Perspective,” Center for Strategic and International Studies, May 23, 2022, https://www.csis.org/analysis/red-ink-estimating-chinese-industrial-policy-spending-comparative-perspective; “America Is Struggling to Counter China’s Intellectual Property Theft,” Financial Times, April 18, 2022, https://www.ft.com/content/1d13ab71-bffd-4d63-a0bf-9e9bdfc33c39; “USTR Releases Annual Report on China’s WTO Compliance,” Office of the United States Trade Representative, February 16, 2022, press release, 3, https://ustr.gov/about-us/policy-offices/press-office/press-releases/2022/february/ustr-releases-annual-report-chinas-wto-compliance.
5     On China and technical standards, see: Matt Sheehan, Marjory Blumenthal, and Michael R. Nelson, “Three Takeaways from China’s New Standards Strategy,” Carnegie Endowment for International Peace, October 28, 2021, https://carnegieendowment.org/2021/10/28/three-takeaways-from-china-s-new-standards-strategy-pub-85678.
6    China’s current (2023) AI regulations are generally seen as more developed than those in either Europe or the United States. However, analysts argue that the individual rights and corporate responsibilities to protect them, as outlined in China’s regulations, will be selectively enforced, if at all, by the state. See: Ryan Heath, “China Races Ahead of U.S. on AI Regulation,” Axios, May 8, 2023, https://www.axios.com/2023/05/08/china-ai-regulation-race.
7    The scientific community has warned that this scenario is a real risk, owing to heightened Sino-American tension. James Mitchell Crow, “US–China partnerships bring strength in numbers to big science projects,” Nature, March 9, 2022, https://www.nature.com/articles/d41586-022-00570-0.
8    Deng Xiaoping’s reforms included pursuit of “Four Modernizations” in agriculture, industry, science and technology, and national defense. In the S&T field, his reforms included massive educational and worker-upskilling programs, large investments in scientific research centers, comprehensive programs to send Chinese STEM (science, technology, engineering, and math) students abroad for advanced education and training, experimentation with foreign technologies in manufacturing and other production processes, and upgrading of China’s military to include a focus on development of dual-use technologies. Bernard Z. Keo, “Crossing the River by Feeling the Stones: Deng Xiaoping in the Making of Modern China,” Education About Asia 25, 2 (2020), 36, https://www.asianstudies.org/publications/eaa/archives/crossing-the-river-by-feeling-the-stones-deng-xiaoping-in-the-making-of-modern-china/.
9    Dan Wang, “China’s Hidden Tech Revolution: How Beijing Threatens U.S. Dominance,” Foreign Affairs, March/April 2023, https://www.foreignaffairs.com/china/chinas-hidden-tech-revolution-how-beijing-threatens-us-dominance-dan-wang.
10    “Full Text of Clinton’s Speech on China Trade Bill,” Federal News Service, March 9, 2000, https://www.iatp.org/sites/default/files/Full_Text_of_Clintons_Speech_on_China_Trade_Bi.htm.
11    “Speech by President Jiang Zemin at George Bush Presidential Library,” Ministry of Foreign Affairs of the PRC, October 24, 2002, https://perma.cc/7NYS-4REZ; G. John Ikenberry, “The Rise of China and the Future of the West: Can the Liberal System Survive?” Foreign Affairs 87, 1 (2008), https://www.jstor.org/stable/20020265.
12    Elizabeth Economy, “Changing Course on China,” Current History 102, 665, China and East Asia (2003), https://www.jstor.org/stable/45317282; Thomas W. Lippman, “Bush Makes Clinton’s China Policy an Issue,” Washington Post, August 20, 1999, https://www.washingtonpost.com/wp-srv/politics/campaigns/wh2000/stories/chiwan082099.htm.
13     Kurt M. Campbell and Ely Ratner, “The China Reckoning: How Beijing Defied American Expectations,” Foreign Affairs, February 18, 2018, https://www.foreignaffairs.com/articles/china/2018-02-13/china-reckoning.
14     “Number of Tourist Arrivals in the United States from China from 2005 to 2022 with Forecasts until 2025,” Statista, April 11, 2023, https://www.statista.com/statistics/214813/number-of-visitors-to-the-us-from-china/; and “Visa Statistics,” U.S. Department of State, https://travel.state.gov/content/travel/en/legal/visa-law0/visa-statistics.html.
15    “Direct Investment Position of the United States in China from 2000 to 2021,” Statista, January 26, 2023, https://www.statista.com/statistics/188629/united-states-direct-investments-in-china-since-2000/.
16     Robbie Gramer, “Washington’s China Hawks Take Flight,” Foreign Policy, February 15, 2023, https://foreignpolicy.com/2023/02/15/china-us-relations-hawks-engagement-cold-war-taiwan/; Sam LaGrone, “China Sends Uninvited Spy Ship to RIMPAC,” USNI News, July 18, 2014, https://news.usni.org/2014/07/18/china-sends-uninvited-spy-ship-rimpac.
17    “Findings of the Investigations into China’s Acts, Policies, and Practices Related to Technology Transfer, Intellectual Property, and Innovation Under Section 301 of the Trade Act of 1974,” Office of the United States Trade Representative, March 22, 2018, https://ustr.gov/sites/default/files/Section%20301%20FINAL.PDF. When asked in November 2018 if China was violating the 2015 cyber-espionage agreement, senior National Security Agency cybersecurity official Rob Joyce said, “it’s clear that they [China] are well beyond the bounds today of the agreement that was forged between our countries.” See: “U.S. Accuses China of Violating Bilateral Anti-Hacking Deal,” Reuters, November 8, 2018, https://www.reuters.com/article/us-usa-china-cyber/u-s-accuses-china-of-violating-bilateral-anti-hacking-deal-idUSKCN1NE02E.
18    Jacob Feldgoise et al., “Studying Tech Competition through Research Output: Some CSET Best Practices,” Center for Security and Emerging Technology, April 2023, https://cset.georgetown.edu/article/studying-tech-competition-through-research-output-some-cset-best-practices.
19    The World Intellectual Property Organization’s annual “Global Innovation Index,” considered the gold standard rankings assessment of the world’s tech-producing economies, ranks South Korea sixth and Japan thirteenth in the 2022 edition. “Global Innovation Index 2022. What Is the Future of Innovation-Driven Growth?” World Intellectual Property Organization, 2022, https://www.globalinnovationindex.org/analysis-indicator.
20    For a general review of the Japanese case, see: Mireya Solis, “Economic Security: Boon or Bane for the US-Japan Alliance?,” Sasakawa Peace Foundation USA, November 5–6, 2022, https://spfusa.org/publications/economic-security-boon-or-bane-for-the-us-japan-alliance/#_ftn19. For the South Korean case, see: Seong-Ho Sheen and Mireya Solis, “How South Korea Sees Technology Competition with China and Export Controls,” Brookings, May 17, 2023, https://www.brookings.edu/blog/order-from-chaos/2023/05/17/how-south-korea-sees-technology-competition-with-china-and-export-controls/.
21    Jeremy Mark and Dexter Tiff Roberts, United States–China Semiconductor Standoff: A Supply Chain under Stress, Atlantic Council, February 23, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/united-states-china-semiconductor-standoff-a-supply-chain-under-stress/.
22    Yang Jie and Megumi Fujikawa, “Tokyo Meeting Highlights Democracies’ Push to Secure Chip Supplies,” Wall Street Journal, May 18, 2023, https://www.wsj.com/articles/tokyo-meeting-highlights-democracies-push-to-secure-chip-supplies-54e1173d?mod=article_inline; “US Urges South Korea not to Fill Chip Shortfalls in China if Micron Banned, Financial Times Reports,” Reuters, April 23, 2023, https://www.reuters.com/technology/us-urges-south-korea-not-fill-china-shortfalls-if-beijing-bans-micron-chips-ft-2023-04-23/.
23    See, e.g., the arguments in: Matias Spektor, “In Defense of the Fence Sitters. What the West Gets Wrong about Hedging,” Foreign Affairs, May/June 2023, https://www.foreignaffairs.com/world/global-south-defense-fence-sitters.
24    On the expansion of trade under Bretton Woods during the first postwar decades, see: Tamim Bayoumi, “The Postwar Economic Achievement,” Finance & Development, June 1995, https://www.elibrary.imf.org/view/journals/022/0032/002/article-A013-en.xml.
25    For a review of the history of the bilateral trade relationship, see: Anshu Siripurapu and Noah Berman, “Backgrounder: The Contentious U.S.-China Trade Relationship,” Council on Foreign Relations, December 5, 2022, https://www.cfr.org/backgrounder/contentious-us-china-trade-relationship.
26    Eric Martin and Ana Monteiro, “US-China Goods Trade Hits Record Even as Political Split Widens,” Bloomberg, February 7, 2023, https://www.bloomberg.com/news/articles/2023-02-07/us-china-trade-climbs-to-record-in-2022-despite-efforts-to-split?sref=a9fBmPFG#xj4y7vzkg.
The 5×5—Cyber conflict in international relations: A scholar's perspective
https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cyber-conflict-in-international-relations-a-scholars-perspective/
June 20, 2023

Leading scholars provide insights on cyber conflict's role in international relations, how the topic can best be taught to students, and how scholars and policymakers can better incorporate each other's perspectives.
This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Over the past decade, scholarly debate over cyber conflict's place in international relations has evolved significantly. The idea that cyber tools would fundamentally change the nature of war and warfare has largely given way to the idea that cyber conflict is merely a different way of doing the same old things, one primarily suited to an intelligence contest. Other, less settled questions range from whether cyber operations are useful tools of signaling to whether these operations lead to escalation. These unsettled questions remain active in scholarly literature and, critically, inform policymaking approaches. 

We brought together a group of leading scholars to provide insights on cyber conflict’s role in international relations, how the topic can best be taught to students, and how scholars and policymakers can better incorporate each other’s perspectives.

#1 What, in your opinion, is the biggest misconception about cyber conflict’s role in international relations theory?

Andrew Dwyer, lecturer in information security, Department of Information Security, Royal Holloway, University of London; steering committee lead, Offensive Cyber Working Group

“[The biggest misconception is] that cyberspace is malleable and controllable. The environment is often presented tangentially and, when it is, it is often about how people use the terrains of computation. I think that a lack of attention on how the environment ‘shapes’ people is one of the greatest missing parts of international relations thought. Simply, the environment and terrain have much more impact than is typically accounted for.” 

Melissa Griffith, lecturer in technology and national security, Johns Hopkins University School of Advanced International Studies (SAIS) and the Alperovitch Institute for Cybersecurity Studies; non-resident research fellow, University of California, Berkeley’s Center for Long-Term Cybersecurity (CLTC)

“Much of the scholarship focused on the intersection between cyber conflict and international relations theory has concentrated on capturing the nature of the evolving cyber threat. This has led, in turn, to ongoing and vibrant debates over whether (a) deterrence strategies are feasible or valuable, (b) cyberspace favors the offense or defense, (c) cyber operations are useful tools for coercion, (d) cyberspace is escalatory, or (e) strategic competition in cyberspace is best understood as an intelligence contest, for example. While these are important areas of focus, they have previously overshadowed other lines of inquiry, such as why we see variation in how states respond in practice, a question that requires leveraging international relations theories beyond those focused on grappling with what best captures the dynamics of this new threat space as a whole.” 

Richard Harknett, professor & director, School of Public and International Affairs (SPIA); chair, Center for Cyber Strategy and Policy (CCSP), University of Cincinnati

“[The biggest misconception is] that the most salient impact of cyber operations should be in conflict; that is, the equivalent of armed attack and warfighting. Much of cybersecurity studies, itself, has focused on the construct of cyber war and thus international relations theory has primarily treated ‘cyber’ as another form of war, when the majority of state cyber activity is actually a strategic attempt to gain relative power via an alternative to war. I argue from a realist-structuralist perspective that the most fascinating theoretical question is the interplay between states’ struggle for autonomy and the organizing principle of interconnectedness that defines the cyber strategic environment.” 

Jenny Jun, research fellow, CyberAI Project, Center for Security and Emerging Technology; nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; Ph.D. candidate, Department of Political Science, Columbia University

“It is much more useful to think that conflict and competition have cyber dimensions to them, rather than to think that cyber conflict occurs in isolation.” 

Jon Lindsay, associate professor, School of Cybersecurity and Privacy, Sam Nunn School of International Affairs, Georgia Institute of Technology

“The biggest misconception remains that cyber operations are a revolution in military affairs akin to the invention of nuclear weapons. Cyber ‘conflict’ is better understood as the digital dimension of intelligence competition, between both state and nonstate competitors, which is an increasingly important and still understudied dimension of international relations.” 

Michael Poznansky, associate professor, Strategic & Operational Research Department, US Naval War College; core faculty member, Cyber & Innovation Policy Institute

Disclaimer: The opinions expressed below are the author’s alone and do not represent those of the U.S. Naval War College, the Department of Navy, the Department of Defense, or any government entity. 

“One potential misconception is that we need entirely new theoretical frameworks to understand cyber conflict. Are there distinctive attributes of cyberspace that should give us pause from unthinkingly applying existing international relations theories to it? You bet. But the real task—which many have been doing and are continuing to do—is to figure out where we can apply existing theories, perhaps with certain modifications, and where novel frameworks are genuinely needed.”

#2 What would you like to see change about how cyber conflict is widely taught?

Dwyer: “As much as there is frequent discussion about ‘interdisciplinarity’ in the study of cyber conflict, all too often we teach through and in silos. That is, we teach ‘from’ an angle, whether that be international relations, computer science, psychology, and so on. I think this does a disservice to the study of cyber conflict. I am not claiming for a wholly radical empiricism here, but about one that is less grounded in theory as a starting place for exploration.” 

Griffith: “Notably, in this field, perhaps far more so than others, there is no uniform or widely pursued approach across classrooms, as pointed out by Herr, Laudrian, and Smeets in ‘Mapping the Known Unknowns of Cybersecurity Education‘. That said, students entering this field armed with social science and policy leaning coursework should be comfortable engaging with technical and private sector reporting alongside academic, government, and legal documents. At risk of straying beyond the focus of this topic (i.e., theory), I favor introducing students first to core technical foundations and operational realities—what is cyberspace and how has it evolved; how, when, and which groups hack; and how, where, and when does defense play out—before turning to policy, strategy, or theoretical debates. In my experience, this approach allows subsequent discussions of systemic, international, national, and subnational questions to be firmly grounded in the realities of the space.” 

Harknett: “Cybersecurity is not a technical problem, but a political, economic, organizational, and behavioral challenge in a technically fluid environment. Thus, how cyber insecurity can be reduced and state competition in and through cyberspace can be stabilized should be taught from multiple perspectives across the computing and social sciences and humanities. Basically, a more multidisciplinary integrated, rather than segmented, approach to courses and curriculum.” 

Jun: “There should be a greater effort to integrate literature on cyber conflict as part of bigger international relations themes such as coercion, signaling, trade, etc., and move away from viewing dynamics in cyberspace as monolithic. In many international relations syllabi, cyber conflict often appears at the very end (if at all) in about week thirteen as a standalone module. Often, the discussion question then becomes, “To what extent is cyber different from all of the traditional stuff we learned so far?” This not only leads to overgeneralizations about cyberspace and cyber conflict, but also nudges students into viewing cyber as something separate and distinct from other major themes and dynamics in international relations.” 

Lindsay: “Two things that would improve cybersecurity education would be 1) to situate it in the history of intelligence and covert action, and 2) to give more attention to the political economy of cyberspace, which fundamentally shapes the dynamics of cyber conflict.” 

Poznansky: “My hunch is that cyber conflict is often included in many international relations courses as part of a module on emerging technologies alongside space, autonomous systems, quantum, and so forth. Because cyberspace has relevance for almost all aspects of modern statecraft—warfighting, coercion, commerce, diplomacy—a better approach may be to consciously integrate it into modules on all these broader topics. Stand-alone courses also have high upside by allowing for a deep dive, but infusing cyber throughout discussions of major international relations concepts would offer a better foundation.”

#3 What is a piece of literature on cyber conflict theory that you recommend aspiring policymakers read closely and why?

Dwyer: “I think one of the best and underacknowledged written pieces is by JD Work, ‘Balancing on the Rail – considering responsibility and restraint in the July 2021 Iran Railways incident.’ In this piece, Work examines an incident on Iranian railways in July 2021. The explication of responsibility and restraint in offensive cyber operations is a must-read for anyone interested in the area.” 

Griffith: “Whether or not readers agree with them, Michael Fischerkeller, Emily Goldman, and Richard Harknett’s Cyber Persistence Theory (2022) sets the stage for a productive and ongoing theoretical debate over the structural conditions animating cyberspace. Though an exercise in theory development rather than policy prescription, the book is not merely of interest to academics. Echoes of the underlying logic can be found animating US Cyber Command’s Persistent Engagement and the UK National Cyber Force’s recently released, ‘Responsible Cyber Power in Practice,’ for example.” 

Harknett: “As Alexander George correctly wrote, to bridge the gap between policy and theory, it is the theoretician that must cross over the bridge to meet policymakers on their own turf. Two recent books that do a good job at this are Max Smeets’ No Short Cuts: Why States Struggle to Develop a Military Cyber-Force and a just-released volume edited by Smeets and Robert Chesney, Deter, Disrupt, or Deceive, which examines the debate between those who posit cyberspace as strategic competition and those who view it as an intelligence contest and thus apply research from intelligence studies. Misconceiving this fundamental categorization would have profound impact on policy development, and thus grappling with the difference between the two perspectives is important.” 

Jun: “Aspiring policymakers should be familiar with the arguments made in Cyber Persistence Theory by Goldman, Fischerkeller, and Harknett, as well as the back-and-forth debate leading up to the publication of this book in various journals and opinion pieces. Ideas laid out in this book embody much of the thinking behind the 2018 US government pivot toward Persistent Engagement and Defend Forward, and away from a strategy based on deterrence by punishment. Reading the book as well as the debate around it will allow an aspiring policymaker to trace how certain characterizations of cyberspace and its functions lead to corresponding theoretical predictions, and how such assessments are translated into strategy documents by various agencies.” 

Lindsay: “I highly recommend the new volume by Robert Chesney and Max Smeets exploring the debate over cyber as an intelligence contest or something else. I also recommend that international relations scholars become more familiar with the Workshop on the Economics of Information Security community, which produces fascinating papers every year.” 

Poznansky: “I am going to cheat and highlight two. First is an article by Jordan Branch looking at how the military’s use of familiar metaphors to understand and describe cyberspace affected investments and policy decisions. Branch shows that the comparisons we invoke to understand new phenomena have real-world impacts. Second is a new book by Erica Lonergan and Shawn Lonergan on the dynamics of escalation in cyberspace. It tackles one of the most pressing issues in cyber conflict in a way that appeals to scholars and practitioners alike.”


#4 How has the theory of cyber conflict evolved in the last five years and where do you see the field evolving in the next five years?

Dwyer: “Undoubtedly, the greatest transformation has been the demise of ‘cyber war’ and ‘cyber weapons’ in both theory and practice. This has steadily been replaced (albeit over much more than the past five years) by cyber conflict as an ‘intelligence contest.’ In many ways, this is a welcome development. For the next five years, one might ask what then is distinct about cyber conflict; is it simply a transplant of conventional intelligence-related activity with new tools? I would wager not, and I hope that the cyber conflict studies community examines the role that technology plays that does not simply reduce computation to a tool with none of its own agency.” 

Griffith: “Two significant shifts stand out. One of the biggest was the pivot away from the early focus on war toward a recognition of the diversity of activity that occurs in the absence of and below the threshold of war. In the process, the theories and disciplines cyber conflict scholars brought to bear expanded beyond security studies approaches, which had largely dominated the field, to increasingly include intelligence studies, history, economics, law, etc. In the next five years, I hope to see that aperture continue to widen as we continue to move beyond those early ‘cyber war’ framings to an array of questions stemming from a diversity of disciplines and examining a greater diversity of countries.” 

Harknett: “Along with the work above, Ben Buchanan’s The Hacker and the State and Daniel Moore’s Offensive Cyber Operations have begun to examine the operational space as it is, rather than how people thought it would be. I think there is a significant pivot away from the cyber war construct occurring. Of course, my own bias is that Cyber Persistence Theory as presented by myself, Emily Goldman, and Michael Fischerkeller offers a foundational piece of theory that explains a lot of the shifting in state strategy and behavior. I think the utility of the constructs of initiative persistence, campaigning, and strategic competition will garner debate, and these constructs may take hold or be challenged as further research with this focus develops.” 

Jun: “In the past five years, there has been a shift away from efforts to study cyber deterrence to focus on the dynamics of cyber incidents and/or campaigns below the threshold of armed conflict that occur on a regular basis. The field is also becoming more methodologically diverse. In the next five years, the field is likely to focus on getting at the nuances of cyber activity occurring below the threshold of armed conflict. The scholarly community may seek to answer questions such as: when a state takes certain offensive or defensive actions in cyberspace, what do these actions signal, and how are they interpreted on the receiving side? How do we measure or evaluate the effectiveness of cyber campaigns? As other states acquire cyber capabilities and respond to cyber threats, what accounts for how their cyber strategies evolve?” 

Lindsay: “In the last five years, the field has taken a decidedly empirical turn. Cyber is no longer an emerging technology. It has emerged. We have decades of data to explore. This empirical turn complements the theoretical emphasis on intelligence that I mentioned above.” 

Poznansky: “There has been an explosion of work over the last few years devoted to better understanding what exactly cyberspace represents. Is it yet another arena of warfare with some new bells and whistles or is it more akin to an intelligence contest? How we understand the nature of cyberspace has major implications for how we theorize cyber conflict and, equally important, what sorts of policy implications we arrive at. There is much more to be done here.”

#5 How can scholars and policymakers of cyber conflict better incorporate perspectives from each other’s work?

Dwyer: “This is by far the hardest question; however, it is about understanding the needs and goals of both academics and policymakers. This simply requires 1) a firm commitment and foundation from policymakers to fund critical social science and humanities work that can sustain positive engagement and trust building; 2) recognition and support for academics in the translation of their work and impact in ways that are visible to their institutions; and 3) for academics not to enter a room with preconceived notions of the solutions to policymakers’ problems.” 

Griffith: “There are a variety of models at our disposal, but one approach of note is on full display in Robert Chesney and Max Smeets’ recent edited volume, Deter, Disrupt, or Deceive, which explicitly puts authors who disagree—and who spearhead emerging schools of thought—in direct conversation with each other. This volume represents the culmination of roughly four years of formal and informal debate and has actively sought to continue the conversation through an ongoing, global series of workshops in the wake of its publication. Another model can be found in the field-building work of the Cyber Conflict Studies Association in the United States and the European Cyber Conflict Research Initiative in Europe.” 

Harknett: “Again, the bridge between policy and theory has never been easy to traverse, but one essential element is adopting an agreed upon lexicon. There is, currently, this interesting phenomenon in which the UK National Cyber Force’s Responsible Cyber Power in Practice document and the US Defense Department’s approaches of cyber persistent engagement and defend forward, as well as the broader 2023 US National Cybersecurity Strategy, align with the logic of initiative persistence and the structural reasoning of cyber persistence theory, with growing focus on continuous campaigns and seizing the initiative, rather than legacy constructs such as deterrence threats. Although full lexicon consensus has yet to solidify, it will be interesting to observe whether it occurs over time.” 

Jun: “[Scholars and policymakers of cyber conflict can better incorporate perspectives from each other’s work with] more frequent conversations that raise good new policy-relevant research questions, efforts to ground theoretical and empirical research in what is actually going on, and efforts to turn conclusions from scholarly analysis into actionable policy agendas.” 

Lindsay: “This question is tricky because there are several different groups on either side of the gap, and it is important for all of them to talk. On the policy side, there are government policymakers and intelligence professionals, but also the hugely important commercial sector. And on the academic side you have international relations scholars, computer scientists, and many other social scientists and engineers working in related areas. Cybersecurity is a pretty wicked interdisciplinary problem.” 

Poznansky: “For scholars, being open to the possibility that many of the things we often bracket, in part because they can be hard to measure—bureaucratic politics, organizational culture, leadership, and so forth—is valuable. These factors probably explain more about cyber conflict than we care to admit. For practitioners, remaining open minded to debates that might sound purely academic in nature at first blush but in fact have immense practical relevance is also valuable. Whether cyberspace is mainly an arena for intelligence competition or warfighting—a debate, as mentioned, that is happening right now—matters for the prospect of developing norms, the utility of coercion, the dynamics of escalation, and more.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Cyber conflict in international relations: A scholar’s perspective appeared first on Atlantic Council.

Activists and experts assemble in Costa Rica to protect human rights in the digital age https://www.atlanticcouncil.org/content-series/360os/activists-and-experts-assemble-in-costa-rica-to-protect-human-rights-in-the-digital-age/ Wed, 07 Jun 2023 20:21:18 +0000 https://www.atlanticcouncil.org/?p=652275 Our Digital Forensic Research Lab is convening top tech thinkers and human-rights defenders at RightsCon to collaborate on an agenda for advancing rights globally.

The post Activists and experts assemble in Costa Rica to protect human rights in the digital age appeared first on Atlantic Council.

Will the world’s human-rights defenders be able to match the pace of quickly moving technological challenges arising from artificial intelligence, information wars, and more?

Rights activists, tech leaders, and other stakeholders are meeting at RightsCon Costa Rica on June 5-8 to collectively set an agenda for advancing human rights in this digital age.

Our experts at the Digital Forensic Research Lab are coordinating part of that effort, with a slate of RightsCon events as part of their 360/Open Summit: Around the World global programming. Below are highlights from the events at RightsCon, which cover digital frameworks in Africa, disinformation in Ukraine, online harassment of women globally, and more.


The latest from San José

Rethinking transparency reporting

Human rights must be central in the African Union’s Digital Transformation Strategy

Day two wraps with a warning about dangerous threats, from militant accelerationism to violence toward women

What’s behind today’s militant accelerationism?

The digital ecosystem’s impact on women’s political participation

Day one wraps with recommendations for Africa’s digital transformation, Venezuela’s digital connectivity, and an inclusionary web

What does a trustworthy web look like?

Mapping—and addressing—Venezuela’s information desert

Where open-source intelligence meets human-rights advocacy


Rethinking transparency reporting

On Day 3 of RightsCon Costa Rica, Rose Jackson, director of the DFRLab’s Democracy & Tech Initiative, joined panelists Frederike Kaltheuner, director for technology and human rights at Human Rights Watch, and David Green, civil liberties director at Electronic Frontier Foundation, for a panel on rethinking transparency reporting. The discussion was led and moderated by Gemma Shields, Online Safety Policy Lead at the United Kingdom’s Office of Communications (Ofcom).

Shields opened the session by describing the online safety bill currently making its way through the UK parliament and the role of Ofcom in its implementation. The bill will give new powers to Ofcom to test mandatory platform transparency reporting requirements. Through these efforts, Ofcom hopes that “good, effective meaningful transparency reporting might encourage proactive action from the platforms,” Shields explained.

The panelists then discussed what will be central to the implementation of the online safety bill, including what effective transparency reporting looks like. Kaltheuner emphasized the complexity of defining meaningful transparency when the use cases vary across end users, regulators, civil society, journalists, and academics. Green underscored the importance of centering user needs in the conversation and the need to tailor reporting mandates to specific platforms.

Jackson noted that it is a strategic imperative for the UK government to consult experts from the global majority and consider how regulations and norms could be potentially used for harm by non-democratic actors. As Jackson put it, “what happens in the most unprotected spaces is the beta test for what will show up in your backyard.” She also highlighted the importance of global civil society engaging with the UK Online Safety Bill and European transparency regulations, such as the Digital Services Act, because these policies are first movers in codifying more regulation, and future policies will refer back to these efforts.

Human rights must be central in the African Union’s Digital Transformation Strategy

The DFRLab gathered stakeholders from the policy-making, democracy, rights, and tech communities across the African continent to discuss the African Union’s Digital Transformation Strategy. Participants compared notes and identified opportunities for increasing the strategy’s human-rights focus as it approaches its mid-mandate review. Participants also agreed that trusted conveners, such as watchdog agencies within national governments, can play a critical facilitating role in ensuring effective communication between experts, users, and civil society on one hand and policymakers and elected officials on the other. Specific concerns with the strategy, and recommendations for centering human rights more fully in it, will be taken up in future gatherings.

Day two wraps with a warning about dangerous threats, from militant accelerationism to violence toward women

The DFRLab kicked off day two at RightsCon with a conversation on how Russian information operations, deployed ahead of the full-scale invasion of Ukraine, were used to build false justifications for the war, deny responsibility for the war of aggression, and mask Russia’s military build-up. The panel also highlighted two DFRLab reports, released in February 2023, that examine Russia’s justifications for the war and Russia’s attempts to undermine Ukraine’s resistance and support from the international community.


Jun 8, 2023

Mapping the last decade of Russia’s disinformation and influence campaign in Ukraine

By Atlantic Council

Since its full-scale invasion of Ukraine, Russia has continued its information operations, targeting more than just Ukraine, say speakers at a RightsCon event hosted by the Digital Forensic Research Lab.


While at RightsCon, the DFRLab participated in a discussion on militant accelerationism, its impact on minority communities, and how bad actors can be held accountable. The event, hosted by the United Kingdom’s Office of Communications and Slovakia’s Council of Media Services, featured panelists who discussed the ways in which policy can hold all voices, including those of the powerful, accountable. During the panel, DFRLab Research Fellow Meghan Conroy discussed how such violent narratives have become increasingly commonplace in some American ideologies and how extremist individuals and groups sympathetic to these narratives have been mobilized.

To close out the day, the DFRLab and the National Democratic Institute co-hosted a panel featuring global experts from civil society, government, and industry on how the threat of violence and harassment online has impacted women’s ability to participate in politics. As the panelists noted, online abuse is meant strictly to intimidate and silence those who want to get involved; it is therefore all the more important that these women, along with those already established in public life, stand up and speak out, serving as role models and protecting diversity and equity in politics, tech, and beyond.

What’s behind today’s militant accelerationism?

By Meghan Conroy

While at RightsCon, I—a DFRLab research fellow and co-founder of the Accelerationism Research Consortium—joined an event hosted by the UK Office of Communications and Slovakia’s Council of Media Services on militant accelerationism.

My co-panelists and I provided an overview of militant accelerationism and an explanation of the marginalized groups that have been targets of militant accelerationist violence. I discussed accelerationist narratives that have not only permeated mainstream discourse but have also mobilized extremists to violence. Hannah Rose, research fellow and PhD candidate at King’s College London’s International Centre for the Study of Radicalization, zeroed in on the role of conspiracy theories in enabling the propagation of these extreme worldviews.

Stanislav Matějka, head of the Analytical Department at the Slovakian Council of Media Services, delved into the October 2022 attack in Bratislava. He flagged the role of larger, more mainstream platforms as well as filesharing services in enabling the spread of harmful content preceding the attack. Murtaza Shaikh, principal at the UK Office of Communications for illegal harms and hate and terrorism, highlighted the office’s work on the May 2022 attack in Buffalo, New York. He noted that these attacks result, in part, from majority populations framing themselves as under threat by minority populations, and then taking up arms against those minority populations.

Attendees then broke into groups to discuss regulatory solutions and highlight obstacles that may stand in the way of those solutions’ implementation or effectiveness. Key takeaways included the following:

  • Powerful voices need to be held to account. Politicians, influencers, and large platforms have played an outsized role in enabling the mainstreaming and broad reach of these worldviews.
  • Bad actors will accuse platforms and regulators of censorship, regardless of the extent to which content is moderated. As aforementioned, they’ll often position themselves as victims of oppression, and doing so in the context of content moderation policies is no different—even if the accusations are not rooted in reality.
  • Regulators must capitalize on existing expertise. A host of experts who monitor these actors, groups, and narratives across platforms, as well as their offline activities, can help regulators and platforms craft creative, adaptive, and effective policies to tackle the nebulous set of problems linked to militant accelerationism.

This conversation spurred some initial ideas that are geared toward generating more substantial discussion. Introducing understudied and misunderstood concepts, like militant accelerationism, to those unfamiliar with them is of the utmost importance to combat online harms and their offline manifestations more effectively—especially those that have proven deadly.

Meghan Conroy is a US research fellow with the Atlantic Council’s Digital Forensic Research Lab.

The digital ecosystem’s impact on women’s political participation

By Abigail Wollam

The DFRLab and the National Democratic Institute (NDI) co-hosted a panel that brought together four global experts from civil society, government, and industry to discuss a shared and prevalent issue: the threat of digital violence and harassment that women face online, and the impact that it has on women’s participation in political life.

The panel was facilitated by Moira Whelan, director for democracy and technology at NDI; she opened the conversation by highlighting how critical these conversations are, outlining the threat to democracy posed by digital violence. She noted that as online harassment towards women becomes more prevalent, women are self-censoring and removing themselves from online spaces. “Targeted misogynistic abuse is designed to silence voices,” added panelist Julie Inman Grant, the eSafety commissioner of Australia.

Both Neema Lugangira (chairperson for the African Parliamentary Network on Internet Governance and member of parliament in Tanzania) and Tracy Chou (founder and chief executive officer of Block Party) spoke about their experiences with online harassment and how those experiences spurred their actions in the space. Lugangira found, through her experience as a female politician in Tanzania, that the more outspoken or visible a woman is, the more abuse she gets. She observed that women might be less inspired to participate in political life because they see the abuse other women face—and the lack of defense or support these women get from other people. “I decided that since we’re a group that nobody speaks for… I’m going to speak for women in politics,” said Lugangira.

Chou said that she faced online harassment when she became an activist for diversity, equity, and inclusion in the tech community. She wanted to address the problem that she was facing herself and founded Block Party, a company that builds tools to combat online harassment.  

Despite these challenges, the panelists discussed potential solutions and ways forward. Australia is leading by example with its eSafety commissioner and Online Safety Act, which provide Australians with an avenue through which to report online abuses and receive assistance. Fernanda Martins, director of InternetLab, discussed the need to change how marginalized communities that face gendered abuse are seen and talked about; instead of talking about the community as a problem, it’s important to see them as part of the solution and bring them into the discussions.

Abigail Wollam is an assistant director at the Atlantic Council’s DFRLab


Jun 8, 2023

The international community must protect women politicians from abuse online. Here’s how.

By Atlantic Council

At RightsCon, human-rights advocates and tech leaders who have faced harassment online detail their experiences—and ways the international community can support women moving forward.


Day one wraps with recommendations for Africa’s digital transformation, Venezuela’s digital connectivity, and an inclusionary web

This year at RightsCon Costa Rica, the DFRLab previewed its forthcoming Task Force for a Trustworthy Future Web report and gathered human-rights defenders and tech leaders to talk about digital frameworks in Africa, disinformation in Latin America and Ukraine, the impact online harassment has on women in political life, and what’s to come with the European Union’s Digital Services Act.


Jun 8, 2023

The European Commission’s Rita Wezenbeek on what comes next in implementing the Digital Services Act and Digital Markets Act

By Atlantic Council

At a DFRLab RightsCon event, Wezenbeek spoke about the need to get everyone involved in the implementation of the DSA and DMA.


The programming kicked off on June 5 with the Digital Sherlocks training program in San José, which marked the first time the session was conducted in both English and Spanish. The workshop aimed to provide human-rights defenders with the tools and skills they need to build movements that are resilient to disinformation.  

On June 6, the programming opened with a meeting on centering human rights in the African Union’s Digital Transformation Strategy. The DFRLab gathered stakeholders from democracy, rights, and tech communities across the African continent to discuss the African Union’s Digital Transformation Strategy. Participants compared notes and identified opportunities for impact as the strategy approaches its mid-mandate review. 

Next, the DFRLab, Venezuela Inteligente, and Access Now hosted a session on strengthening Venezuela’s digital information ecosystem, a coalition-building meeting with twenty organizations. The discussion drew from a DFRLab analysis of Venezuela’s needs and capabilities related to the country’s media ecosystems and digital security, literacy, and connectivity. The speakers emphasized ways to serve vulnerable groups.

Following these discussions, the DFRLab participated in a dialogue previewing findings from the Task Force for a Trustworthy Future Web. The DFRLab’s Task Force is convening a broad cross-section of industry, civil-society, and government leaders to set a clear and action-oriented agenda for future online ecosystems. As the Task Force wraps up its report, members discussed one of the group’s major findings: the importance of inclusionary design in product, policy, and regulatory development. To close out the first day of DFRLab programming at RightsCon Costa Rica, the task force told the audience that it will launch its report in the coming weeks.

What does a trustworthy web look like?

By Jacqueline Malaret and Abigail Wollam

The DFRLab’s Task Force for a Trustworthy Future Web is charting a clear and action-oriented roadmap for future online ecosystems to protect users’ rights, support innovation, and center trust and safety principles. As the Task Force is wrapping up its report, members joined Task Force Director Kat Duffy to discuss one of the Task Force’s major findings—the importance of inclusionary design in product, policy, and regulatory development—on the first day of RightsCon Costa Rica.

In just eight weeks, Elon Musk took over Twitter, the cryptocurrency market crashed, ChatGPT launched, and major steps were made in the development of augmented reality and virtual reality, fundamentally shifting the landscape of how we engage with technology. Framing the panel, Duffy highlighted how not only has technology changed at a breakneck pace, but the development and professionalization of the trust and safety industry have also unfolded rapidly in tandem, bringing risks, harms, and opportunities to make the digital world safer for all.

The three panelists—Agustina del Campo, director of the Center for Studies on Freedom of Expression; Nighat Dad, executive director of the Digital Rights Foundation; and Victoire Rio, a digital-rights advocate—agreed that the biggest risk, which could yield the greatest harm, is shaping industry practices through a Western-centric lens, without allowing space for the global majority. Excluding populations from the conversation around tech only solidifies the mistakes of the past and risks creating a knowledge gap. Additionally, the conversation touched on the risk of losing sight of the role of government, entrenching self-regulation as an industry norm, and absolving both companies and the state of responsibility for harms that can occur through the adoption of these technologies.

Where there is risk, there is also an opportunity to build safer and rights-respecting technologies. Panelists said that they found promise in the professionalization and organization of industry, which can create a space for dialogue and for civil society to engage and innovate in the field. They are also encouraged that more and more industry engagements are taking place within the structures of international law and universal human rights. The speakers were encouraged by new opportunities to shape regulation in a way that coalesces action around systemic and forward-looking solutions.

But how can industry, philanthropy, and civil society maximize these opportunities? There is an inherent need to support civil society that is already deeply engaged in this work and to help develop this field, particularly in the global majority. There is also a need to pursue research that can shift the narrative to incentivize investment in trust and safety teams and articulate a clear case for the existence of this work.

Jacqueline Malaret is an assistant director at the Atlantic Council’s DFRLab

Abigail Wollam is an assistant director at the Atlantic Council’s DFRLab

Mapping—and addressing—Venezuela’s information desert

By Iria Puyosa and Daniel Suárez Pérez

On June 6, the DFRLab, Venezuela Inteligente, and Access Now (which runs RightsCon) hosted a coalition-building meeting with twenty organizations that are currently working on strengthening Venezuela’s digital information ecosystem. The discussion was built on an analysis, conducted by the DFRLab, of the country’s media ecosystems and digital security, literacy, and connectivity; the speakers focused on ways to serve vulnerable groups such as grassroots activists, human-rights defenders, border populations, and populations in regions afflicted by irregular armed groups. 

Participants discussed the idea of developing a pilot project in an information desert that combines four dimensions: connectivity, relevant information, security, and literacy. They agreed that projects should pair technical solutions that increase access to connectivity and generate relevant information for communities with a human-rights focus. In addition, projects should include a digital- and media-literacy component and continuous support for digital security.

Iria Puyosa is a senior research fellow at the Atlantic Council’s DFRLab

Daniel Suárez Pérez is a research associate for Latin America at the Atlantic Council’s DFRLab

Where open-source intelligence meets human-rights advocacy

By Ana Arriagada

On June 5, the DFRLab hosted a Digital Sherlocks workshop on strengthening human-rights advocacy through open-source intelligence (OSINT) and countering disinformation.

I co-led the workshop with DFRLab Associate Researchers Jean le Roux, Daniel Suárez Pérez, and Esteban Ponce de León.

In the session, attendees discussed the worrying rise of antidemocratic governments in Latin America—such as in Nicaragua and Guatemala—that are using open-source tools for digital surveillance and criminalizing the work of journalists and human-rights defenders. In the face of these challenges, it is imperative for civil-society organizations to acquire and use investigative skills to produce well-documented reports and investigations.

During the workshop, DFRLab researchers shared their experiences investigating paid campaigns that spread disinformation or promote violence or online harassment. They recounted having used an array of tools to analyze the origin and behavior of these paid advertisements. 

DFRLab researchers also discussed tools that helped them detect suspicious activity on platforms such as YouTube, where, for example, some gamer channels spread videos related to disinformation campaigns or political violence. The workshop attendees also discussed how policy changes at Twitter have made the platform increasingly challenging to investigate, but they added that open-source researchers are still investigating, thanks to the help of available tools and the researchers’ creative methodologies. 

The workshop also showcased the DFRLab’s work with the Action Coalition on Meaningful Transparency (ACT). Attendees received a preview of ACT’s upcoming portal launch, for which the DFRLab has been offering guidance. The new resource will offer access to a repository of transparency reporting, policy documents, and analysis from companies, governments, and civil society. It will also include a registry of relevant actors and initiatives, and it will allow users to establish links between entries to see the connections between organizations, the initiatives they are involved in, and the reports they have published. 

The workshop ended with the DFRLab explaining that social network analysis—the study of social relationships and structures using graph theory—is important because it allows researchers to investigate suspicious activity or unnatural behavior exhibited by users on social media platforms.
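To give a flavor of what such analysis involves, the sketch below illustrates one common, simplified technique: projecting an account-to-URL sharing map onto pairs of accounts and flagging pairs that repeatedly share the same links as candidates for closer scrutiny. All account names and share data here are invented for the example; this is a minimal illustration of the general idea, not a description of the DFRLab’s actual tooling.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical data: which URLs each account shared (all names invented).
shares = {
    "acct_a": {"url1", "url2", "url3"},
    "acct_b": {"url1", "url2", "url3"},
    "acct_c": {"url2"},
}

# Project the bipartite account-to-URL map onto account pairs:
# the weight of a pair is how many URLs both accounts shared.
cosharing = defaultdict(int)
for a, b in combinations(sorted(shares), 2):
    overlap = len(shares[a] & shares[b])
    if overlap:
        cosharing[(a, b)] = overlap

# Pairs with unusually high overlap are candidates for coordinated behavior.
suspicious = [pair for pair, weight in cosharing.items() if weight >= 3]
print(suspicious)  # [('acct_a', 'acct_b')]
```

In real investigations the same projection is run over thousands of accounts, and the resulting weighted graph is then examined for dense clusters rather than single pairs.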

Ana Arriagada is an assistant director for Latin America at the Atlantic Council’s DFRLab

Will the debt ceiling deal mean less for homeland security? https://www.atlanticcouncil.org/blogs/new-atlanticist/will-the-debt-ceiling-deal-mean-less-for-homeland-security/ Wed, 31 May 2023 19:00:12 +0000 https://www.atlanticcouncil.org/?p=650792 Congress needs to ensure that the Department of Homeland Security has the resources it needs to defend the nation against nonmilitary threats.

The post Will the debt ceiling deal mean less for homeland security? appeared first on Atlantic Council.

What the new budget deal to raise the federal debt ceiling means for homeland security is only slowly coming into focus. Very few of the initial statements out of the White House or House Republican leadership about the Fiscal Responsibility Act of 2023 mention what the new budget cap means for the Department of Homeland Security (DHS) or for homeland security more broadly. A close look, however, leaves reason for concern. DHS will be competing for fewer civilian budget dollars against the full range of the nation’s domestic needs and priorities. This puts the United States’ defenses at risk in areas where the threats are increasing, as in cybersecurity, border and immigration security, and domestic counterterrorism. 

US President Joe Biden and House Speaker Kevin McCarthy deserve praise for avoiding a catastrophic default on the United States’ fiscal obligations that otherwise would have disrupted debt payments, Social Security payments to seniors, and the federal payroll that includes everyone who keeps the United States safe. Most commentators on the budget part of the deal have focused on the contrast between “defense spending,” where the agreement largely endorses the Biden administration’s requested increase for the Department of Defense, and domestic programs, which are slated for a cut from the previous year’s levels. However, it is important to remember that DHS leads the defense of the United States against nonmilitary threats. DHS is responsible for border, aviation, and maritime security, as well as cybersecurity. It also helps protect critical infrastructure, oversees immigration, builds resilience, restores communities after disasters, and combats crimes of exploitation. As the third-largest cabinet department in the federal government, DHS has a budget that is intrinsically linked to the security of the United States. Yet DHS’s budget for fiscal year (FY) 2024 is not getting the same treatment as the budget for the Department of Defense (DOD).

When security is “nonsecurity”

The Fiscal Responsibility Act of 2023 classifies most of DHS’s budget as “nonsecurity.” This is paradoxical but true. Barring future changes to the deal, which are always possible, DHS will be in a zero-sum competition in the FY 2024 budget negotiations against other civilian programs such as nutrition programs for children, domestic law enforcement, housing programs, community grants programs, and national parks. Whereas the federal government should be spending more on cybersecurity, border and immigration security, and community programs to prevent violent extremism and domestic terrorism, the Fiscal Responsibility Act of 2023 will make this harder because the overall pot of money for nondefense programs for FY 2024 will be less than in FY 2023. This appears to be the case even though more spending on cybersecurity and border security has strong bipartisan support.

The Fiscal Responsibility Act of 2023 follows the legislative language of the Budget Control Act of 2011 (the first of several debt ceiling deals in the Obama administration), which divided so-called “discretionary” federal spending into two different two-way splits. First, there is the “security category” and the “nonsecurity category.” The security category includes most of the budgets of the departments of Defense, Homeland Security, and Veterans Affairs. It also includes the National Nuclear Security Administration, the intelligence community management account, and the so-called “150 account” for international programs such as military aid, development assistance, and overseas diplomatic operations. The nonsecurity category is essentially everything else, such as the departments of Justice, Health and Human Services, Commerce, Housing and Urban Development, and Interior. 

Central to the 2011 budget deal was that it did not apply to nondiscretionary programs such as Social Security and fee-based programs such as citizenship and visa applications, which are not considered “discretionary” spending. Emergency spending, narrowly defined, was exempt from the budget caps, as was most of the war against al-Qaeda, which was categorized as “Overseas Contingency Operations” and exempt from the budget caps that began in 2011.

DHS will be competing for fewer civilian budget dollars against the full range of the nation’s domestic needs and priorities.

The second split in budget law, which originated in a budget deal in December 2013, is between the “revised security category” and the “revised nonsecurity category.” The revised security category includes only budget account 050, roughly 96 percent of which is the Department of Defense (budget code 051). About 3 percent is for nuclear programs run by the Department of Energy (code 053), and about 1 percent is for national defense-related programs at DHS, the Federal Bureau of Investigation (mainly counterintelligence programs), and parts of the Central Intelligence Agency.

The main DHS programs funded under this revised security category (budget code 054) are extremely limited: emergency management functions of the Federal Emergency Management Agency on things like emergency communications systems and alternate sites the federal government could use in case of emergency or an extreme event such as a nuclear attack, as well as some functions of the Cybersecurity and Infrastructure Security Agency.

Thus, since 2013, most of the budgets of DHS, the Department of Veterans Affairs, and foreign military assistance have been in the “security category” but have also paradoxically been in the “revised nonsecurity category.”

In the May 2023 budget deal, the $886.3 billion spending cap agreed to by the White House and the House Republican leadership for FY 2024 covers only the “revised security category.” Most of DHS, the Department of Veterans Affairs, and military assistance are lumped in with the $703.6 billion cap for “revised nonsecurity” civilian parts of the federal government. Of that $703.6 billion, $121 billion is earmarked for veterans’ programs. After several other adjustments and offsets, as the White House calculates it, this leaves $637 billion for all other “revised nonsecurity” programs. That is a nominal cut of one billion dollars from what those departments got in the FY 2023 budget passed in December 2022. Because inflation over the past year was 4.9 percent, the effective cut to “revised nonsecurity” programs is far greater than one billion dollars. The House Republicans calculate an even deeper cut, to $583 billion, by not including the adjustments and offsets.
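A back-of-envelope sketch makes the gap between the nominal and effective cuts concrete. It uses only the White House figures cited above; the $638 billion FY 2023 baseline is an assumption implied by the stated one-billion-dollar nominal cut, not a number given directly in the deal.

```python
# Back-of-envelope arithmetic on the budget figures above (billions of USD).
# Assumption: the FY 2023 "revised nonsecurity" level was $638B, implied by
# the stated $1B nominal cut from FY 2023 to the FY 2024 level of $637B.
fy2023_level = 638.0   # implied FY 2023 baseline
fy2024_cap = 637.0     # FY 2024 level after White House adjustments and offsets
inflation = 0.049      # year-over-year inflation cited in the text

# Nominal cut: the simple dollar difference between the two years.
nominal_cut = fy2023_level - fy2024_cap

# Flat-real benchmark: what FY 2024 would need to be just to keep pace
# with 4.9 percent inflation, i.e., to hold purchasing power constant.
flat_real = fy2023_level * (1 + inflation)

# Effective cut: shortfall of the FY 2024 cap against that benchmark.
effective_cut = flat_real - fy2024_cap

print(f"Nominal cut: ${nominal_cut:.1f}B; effective real-terms cut: ${effective_cut:.1f}B")
```

Under these assumptions, the one-billion-dollar nominal reduction translates into a real-terms squeeze of roughly $32 billion, which is the sense in which the effective cut is “greater than one billion dollars.”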

Flash back to 2011 and forward to 2024

In 2011, the debate between the Obama administration and the Republicans in Congress could be simplified into the idea that Democrats wanted more spending on social programs in the “nonsecurity category,” while Republicans wanted more money spent on “security,” principally defense spending but also including homeland security.

The debate in 2023 does not break down so neatly. There is increasing, bipartisan agreement that the United States needs to be spending more on border and immigration security, and that waiting until the start of FY 2024 to address this shortfall is not going to enable the administration’s strategy to succeed. There is also bipartisan agreement that the federal government as a whole should spend more on cybersecurity. And as the Bipartisan Safer Communities Act showed, mental health and community grants to address the causes of school shootings have bipartisan support. There is also bipartisan support for military assistance to help Ukraine defend itself from Russian aggression and to help Taiwan build up its defenses to deter a possible Chinese invasion. These programs are all funded mostly or wholly from “revised nonsecurity” programs. It is not clear how these programs will fare in the budget environment created by the Fiscal Responsibility Act of 2023.


Other departments and agencies can reallocate funds when priorities change, but not DHS. After DOD successfully led international efforts to take away the Islamic State of Iraq and al-Sham’s territory in Iraq and Syria, the military was able to pivot to Asia, redeploying drones and personnel out of the Middle East to defend the Indo-Pacific. For DHS, however, as the 2023 Quadrennial Homeland Security Review made clear, old threats seldom go away even as the homeland faces new ones. Commercial aviation and borders still need to be protected, even while cyber threats mount and increased quantities of fentanyl come through ports of entry.

As valid as these concerns are, they are no reason to torpedo the Fiscal Responsibility Act of 2023. To the contrary, failure to pass the bill would gravely jeopardize national and homeland security, not to mention the economic security of the United States.

Nor do these concerns mean that other departments and agencies do not have their own justifications for increased resources in FY 2024. But the Fiscal Responsibility Act of 2023 is not going to make it easier for homeland security. Congress needs to recognize this as it works toward the final budget for FY 2024, and, perhaps more urgently, when it considers whether to pass an emergency supplemental appropriations bill for border and immigration security. Congress needs to ensure, as it provided for military security in the “security category” of the Fiscal Responsibility Act, that DHS has the resources it needs to defend the nation against nonmilitary threats.


Thomas S. Warrick is the director of the Future of DHS project at the Scowcroft Center for Strategy and Security’s Forward Defense program and a nonresident senior fellow with the Scowcroft Middle East Security Initiative at the Atlantic Council. He is a former DHS deputy assistant secretary for counterterrorism policy.

The post Will the debt ceiling deal mean less for homeland security? appeared first on Atlantic Council.

Ukraine’s Diia platform sets the global gold standard for e-government https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-diia-platform-sets-the-global-gold-standard-for-e-government/ Wed, 31 May 2023 01:30:31 +0000 https://www.atlanticcouncil.org/?p=650569 Ukraine's Diia app is widely seen as the world's first next-generation e-government platform, and is credited with implementing what many see as a more human-centric government service model, writes Anatoly Motkin.

Several thousand people gathered at the Warner Theater in Washington DC on May 23 for a special event dedicated to Ukraine’s award-winning e-governance platform Diia. “Ukrainians are not only fighting. For four years behind the scenes, they have been creating the future of democracy,” USAID Administrator Samantha Power commented at the event.

According to Power, users of Diia can digitally access the kinds of state services that US citizens can only dream of, including crossing the border using a smartphone application as a legal ID, obtaining a building permit, and starting a new business. The platform also reduces the potential for corruption by removing redundant bureaucracy, and helps the Ukrainian government respond to crises such as the Covid pandemic and the Russian invasion.

Since February 2022, the Diia platform has played a particularly important part in Ukraine’s response to Russia’s full-scale invasion. According to Ukraine’s Minister of Digital Transformation Mykhailo Fedorov, in the first days of the invasion the platform made it possible to provide evacuation documents along with the ability to report property damage. Other features have since been added. The e-enemy function allows any resident of Ukraine to report the location and movement of Russian troops. Radio and TV functions help to inform people who find themselves cut off from traditional media in areas where broadcasting infrastructure has been damaged or destroyed.

Today, the Diia ecosystem offers the world’s first digital passport and access to 14 other digital documents along with 25 public services. It is used by more than half the Ukrainian adult population. In addition to consumer-oriented functions, the system collects information for the national statistical office and serves as a digital platform for officials. Diia is widely seen as the world’s first next-generation e-government platform, and is credited with implementing what many see as a more human-centric government service model.


In today’s increasingly digital environment, governments may find that they have a lot of siloed systems in place, with each system based on its own separate data, infrastructure, and even principles. As a result, people typically suffer from additional bureaucracy and need to deal repeatedly with different official organizations. Most e-government initiatives are characterized by the same problems worldwide, such as technical disparity of state systems, inappropriate data security and data protection systems, absence of unified interoperability, and inefficient interaction between different elements. Ukraine is pioneering efforts to identify more human-centric solutions to these common problems.

One of the main challenges on the path to building sustainable e-government is to combine user friendliness with a high level of cyber security. If we look at the corresponding indices such as the Online Services Index and Baseline Cyber Security Index, we see that only a handful of European countries have so far managed to achieve the right balance: Estonia, Denmark, France, Spain, and Lithuania. Beyond Europe, only Singapore and Malaysia currently meet the necessary standards.

Ukraine has a strong record in terms of security. Since the onset of the Russian invasion, the Diia system has repeatedly been attacked by Russian cyber forces and has been able to successfully resist these attacks. This is an indication that the Ukrainian platform has the necessary reserve of cyber security along with a robust and secure digital public infrastructure.

The success of the IT industry in Ukraine over the past decade has already changed international perceptions of the country. Instead of being primarily seen as an exporter of metals and agricultural products, Ukraine is now increasingly viewed as a trusted provider of tech solutions. The Ministry of Digital Transformation is now working to make Diia the global role model for human-centric GovTech. According to Samantha Power, the Ukrainian authorities are interested in sharing their experience with the international community so that others can build digital infrastructure for their citizens based on the same human-centric principles.

USAID has announced a special program to support countries that, inspired by Diia, develop their own e-government systems based on it. This initiative will be launched initially in Colombia, Kosovo, and Zambia. Ukraine’s Diia system could soon be serving as a model throughout the transitional world.

As they develop their own e-government systems based on Ukraine’s experience and innovations, participating governments should be able to significantly reduce corruption tied to bureaucratic obstacles. By deploying local versions of Diia, transitional countries will also develop a large number of their own high-level IT specialists with expertise in e-government. This is an important initiative that other global development agencies may also see value in supporting.

Anatoly Motkin is president of the StrategEast Center for a New Economy, a non-profit organization with offices in the United States, Ukraine, Georgia, Kazakhstan, and Kyrgyzstan.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


The post Ukraine’s Diia platform sets the global gold standard for e-government appeared first on Atlantic Council.

The 5×5—Cross-community perspectives on cyber threat intelligence and policy https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cross-community-perspectives-on-cyber-threat-intelligence-and-policy/ Tue, 30 May 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=649392 Individuals with experience from the worlds of cyber threat intelligence and cyber policy share their insights and career advice.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

A core objective of the Atlantic Council’s Cyber Statecraft Initiative is to shape policy in order to better secure users of technology by bringing together stakeholders from across disciplines. Cybersecurity is strengthened by ongoing collaboration and dialogue between policymakers and practitioners, including cyber threat intelligence analysts. Translating the skills, products, and values of each community for the other can be challenging, but the payoff is real: it helps drive intelligence requirements and keeps policymakers abreast of the latest developments and realities regarding threats. For younger professionals, jumping from one community to another can appear to be a daunting challenge.

We brought together five individuals with experience from both the worlds of cyber threat intelligence and cyber policy to share their experiences, perspectives on the dynamics between the two communities, and advice to those interested in transitioning back and forth.

#1 What’s one bad piece of advice you hear for threat intelligence professionals interested in making a transition to working in cyber policy?

Winnona DeSombre Bernsen, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“I have not heard bad pieces of advice specifically geared toward threat intelligence professionals, but I was told by someone once that if I wanted to break into policy, I could not focus on cyber. This is mostly untrue: the number of cyber policy jobs in both the public and the private sectors is growing rapidly, because so many policy problems touch cybersecurity. Defense acquisition? Water safety? Civil rights? China policy? All of these issues (and many more!) touch upon cybersecurity in some way. However, cyber cannot be your only focus! As most threat intelligence professionals know, cybersecurity does not operate in a vacuum. A company’s security protocols are only as good as the least aware employee, and a nation-state’s targets in cyberspace usually are chosen to further geopolitical goals. Understanding the issues that are adjacent to cyber in a way that creates sound policy is important when making the transition.” 

Sherry Huang, program fellow, Cyber Initiative and Special Projects, William and Flora Hewlett Foundation

“I would not count this as advice, but the emphasis on getting cybersecurity certifications that is persistent in the cyber threat intelligence community is not directly helpful to working in the cyber policy space. Having technical knowledge and skills is always a plus, but in my view, having the ability to translate between policymakers and technical experts is even more valuable in the cyber policy space, and there is not a certification for that.” 

Katie Nickels, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; director of intelligence, Red Canary

“I think there is a misconception that to work in cyber policy, you need to have spent time on Capitol Hill or at a think tank. I have found that to be untrue, and I think that misconception might make cybersecurity practitioners hesitant to weigh in on policy matters. The way I think of it is that cyber policy is the convergence of two fields: cybersecurity and policymaking. Whichever field is your primary one, you will have to learn about the other. Practitioners can absolutely learn about policy.” 

Christopher Porter, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“When intelligence professionals think about policy work, they often experience a feeling of personal control—‘now I get to make the decisions!’ So there is a temptation to start applying your own pet theories or desired policy outcomes and start working on persuasion. That is part of it, but in reality policymaking looks a lot like intelligence work in one key aspect—it is still a team sport. You have to have buy-in from a lot of stakeholders, many of whom will have different perspectives or intellectual approaches to the same problem. Even if you share the same goal, they may have very different tools. So just as intelligence is a team sport, policymaking is too. That is a reality that is not reflected in a lot of academic preparation, which emphasizes theoretical rather than practical policymaking.” 

Robert Sheldon, director of public policy & strategy, Crowdstrike

“I sometimes hear people treating technical career paths and policy career paths as binary–and I do not think that is the direction that we are headed as a community. People currently working in technical cybersecurity disciplines, including threat intelligence, should consider gaining exposure to policy work without fully transitioning and leaving their technical pursuits behind. This is a straightforward way to make ongoing, relevant contributions to a crowded cyber policy discourse.”

#2 What about working in threat intelligence best prepared you for a career in cyber policy, or vice versa?

Desombre Bernsen: “Threat intelligence gave me two key skills. The first is the ability to analyze a large-scale problem. Just like threat intelligence analysts, cyber policymakers must look through large systems to find chokepoints and potential vulnerabilities, while also making sure that the analytic judgments one makes about the system are sound. This skill enables one to craft recommendations that best fit the problem. The second skill is the ability to tailor briefings to different principal decisionmakers. Threat intelligence is consumed by network defenders and C-suite executives alike, so understanding at what level you are briefing is key. A chief information security officer does not care about implementing YARA rules, just like a network defender does not want their time wasted with a recommendation on altering their company-wide phishing policies. Being able to figure out what the principal cares about, and to tailor recommendations to the audience best able to act on them, is applicable to the cyber policy field as well. When briefing a company or government agency, knowing their risk tolerance and organization mission, for example, helps tailor the briefing so they understand what they can do about the problem.” 

Huang: “Being a cyber threat intelligence analyst gave me exposure to a wide variety of issues that are top of mind for government and corporate clients. In a week, I could be writing about nation-state information operations, briefing clients on cybersecurity trends in a certain industry, and sorting through data dumps on dark web marketplaces. Knowing a bit about numerous cyber topics made it easier for me to identify interest areas that I wanted to pursue in the cyber policy space and, more importantly, allows me to easily understand and interact with experts on different cyber policy issue areas, which is helpful in my current role.” 

Nickels: “The ability to communicate complex information in an accessible way is a skill I learned from my threat intelligence career that has translated well to policy work. Threat intelligence is all about informing decisions, so there are many overlaps with writing to inform policy.” 

Porter: “In Silicon Valley, it is typical to have a position like ‘chief solutions architect.’ I have spent most of my career in intelligence being the ‘chief problems architect.’ It is the nature of the job to look for threats, problems, and shortcomings. Policymakers have the inverse task—to imagine a better future and build it, even if that is not the path we are on currently. But still, I think policymakers need to keep in mind how their plans might fail or lead to unintended consequences. When it comes to cybersecurity, new policies almost never eliminate a threat, they only change its shape. Much like the end to Ghostbusters, you get to choose the kind of problem you are going to face, but not whether or not you face one. Anyone with a background in intelligence will be ready for that step, where you have to imagine second- and third-order implications beyond the first-order effect you are seeking to have.” 

Sheldon: “Working as an analyst early in my career taught me a lot about analytical methods and rigor, evidence quality, and constructing arguments. Each of these competencies apply directly to policy work.”

#3 What realities of working in the threat intelligence world do you believe are overlooked by the cyber policy community?

Desombre Bernsen: “The cyber policy community has not yet realized that threat intelligence researchers and parts of the security community themselves—similarly to high level cyber policy decisionmakers—are targets of cyberespionage and digital transnational repression. North Korea, Russia, China, and Iran have all targeted researchers and members of civil society in cyberspace. Famously, North Korea would infect Western vulnerability researchers, likely to steal capabilities. In addition, threat intelligence researchers lack the government protections many policymakers have. Researchers that publicly lambast US adversaries can be targeted and threatened online by state-backed trolls. Protections for these individuals are few and far between—CISA just this year rolled out a program for protecting civil society members targeted by transnational repression, so I hope it gets expanded soon.” 

Huang: “Most of the time, threat intelligence analysts (at least in the private sector) do not hear from clients after a report has gone out and do not have visibility into whether their analysis and recommendations are helpful or have real-world impact. Feedback, whether positive or constructive, can help analysts fine-tune their craft and improve future analysis.” 

Nickels: “I think the cyber policy community largely considers threat intelligence to be information to be shared about breaches, often in the form of indicators like IP addresses. While that can be one aspect of it, they may not recognize that threat intelligence analysts consider much more than that. Broadly, threat intelligence is about using an understanding of how cyber threats work to make decisions. Under that broad definition, cyber policymakers have a significant need for threat intelligence—if policymakers do not know how the threats operate, they cannot determine how to create policies to help organizations better protect against them.” 

Porter: “There are aspects of the work—such as attribution—that are more reliable and not as difficult as imagined. Conversely, there are critical functions, like putting together good trends data or linking together multiple different pieces of evidence, that can be very difficult and time-intensive but seem simple to those outside the profession. So there is always a little bit of education that needs to take place before getting into a substantive back-and-forth, where the cyber intelligence community needs to explain a little bit about how they are doing their work, and the strengths and limitations of that so that everyone has the same assumptions and understands one another’s perspective.” 

Sheldon: “The policy community sometimes lacks understanding of the sources and methods that threat intelligence practitioners leverage in their analysis. This informs the overall quality of their work, the skill needed to produce it, timeliness, extensibility, the possibility for sharing, and so on. All of these are good reasons for the two communities to talk more about how they do their work.”

More from the Cyber Statecraft Initiative:

#4 What is the biggest change in writing for a threat intelligence audience vs. policymakers? 

Desombre Bernsen: “The scope is much broader. Threats to a corporate system are confined largely to the corporate system itself, but the world of geopolitics has far more players and many more first- and second-order effects of the policies you recommend.” 

Huang: “Not having to be as diligent about confidence levels! Jokes aside, it is similar in that being precise in wording and being brief and to the point are appreciated by both audiences. However, I do find that a policy audience often cares more about the forward-looking aspect and the ‘so what?’” 

Nickels: “The biggest difference is that when writing for policymakers, you are expected to express your opinion! As part of traditional intelligence doctrine, threat intelligence analysts avoid injecting personal opinions into their assessments and try to minimize the effects of their cognitive biases. Intelligence analysts might write about potential outcomes of a decision, but should not weigh in on which decision should be made. However, policymakers want to hear what you recommend. It can feel freeing to be able to share opinions, and it remains valuable to try to hedge against cognitive biases because it allows for sounder policy recommendations.” 

Porter: “Threat intelligence professionals are going to be very interested in how the work gets done, as the culture—to some degree—borrows from academic work, in terms of rewarding reproducibility of results and sharing of information. But, strictly speaking, policymakers do not care about that. Their job is to link the findings in those reports to the broader strategic context. One really only needs to show enough of how the intelligence work was done to give the policymaker confidence and help them use the intelligence appropriately without understating or overstating the case. The result is that for policy audiences you end up starting from the end of the story—instead of a blog post or white paper building up to a firm conclusion, you talk about the conclusion and, depending on the level of technical understanding and skepticism on the part of the policymaker, may or may not get into the story of how things were pieced together at all.” 

Sheldon: “Good writing in both disciplines has much in common. Each should be concise, include assertions and evidence, provide context, and make unknowns clear. But there are perhaps fewer ‘product types’ relevant to core threat intelligence consumers and, in some settings, analysts can assume some fundamental knowledge base among their audience.” 

#5 Where is one opportunity to work on policy while still in industry that most people miss?

Desombre Bernsen: “You absolutely can work on policy issues while working in threat intelligence! I cannot just choose one, but I highly recommend searching for non-resident fellowship programs in think tanks (ECCRI, Atlantic Council, etc.), speaking at conferences on threat trends and their policy implications, and doing more policy through corporate threat wargaming internally.” 

Huang: “Volunteering at conferences that involve the cyber policy community, such as Policy@DEF CON and IGF-USA. These are great opportunities to support policy-focused discussions and to have deeper interactions with peers in the cyber policy space.” 

Nickels: “In the United States, one commonly missed opportunity is to reach out to elected representatives with opinions on cybersecurity legislation. Cybersecurity practitioners can also be on the lookout for opportunities to provide comments that help shape proposed regulations affecting the industry. For example, the Commerce Department invited public comments to proposed changes to the Wassenaar Arrangement around export controls of security software, and cybersecurity practitioners weighed in on how they felt the changes would influence tool development.” 

Porter: “That will vary greatly from company to company; almost universally though, you will have the opportunity to help your colleagues and future generations by providing mentorship and career development opportunities. Personnel is policy, so in addition to thinking about particular policies you might want to shape, think also about how you can shape the overall policymaking process by helping others make the most of their talents. It will take years, but, in the long run, those are the kinds of changes that are most lasting.” 

Sheldon: “Regardless of your current role, you can read almost everything relevant to the policy discourse. National strategies, executive orders, bills, commission and think tank reports, and so on are all publicly available. Unfortunately, many in the policy community are only skimming, but reading these sources deeply and internalizing them is a great basis to distinguish yourself in a policy discussion. Also, there are more opportunities than ever to read and respond to Requests for Comment from the National Institute of Standards and Technology and other government agencies, and these frequently include very technical questions.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Cross-community perspectives on cyber threat intelligence and policy appeared first on Atlantic Council.

Iran is using its cyber capabilities to kidnap its foes in the real world https://www.atlanticcouncil.org/blogs/iransource/iran-cyber-warfare-kidnappings/ Wed, 24 May 2023 16:28:19 +0000 https://www.atlanticcouncil.org/?p=649191 This new form of transnational repression by Iran has alarmed security professionals and governments worldwide. 

In November 2020, as results for the closely watched and hotly contested United States presidential and congressional elections began to emerge, hackers gained access to at least one website announcing results. They were thwarted, but it took the resources of the US military and the Department of Homeland Security to block what could have turned into another attempt to spread doubts and confusion about a vote that would eventually threaten to undermine US democracy some weeks later. 

The culprit in the attack, according to US officials and tech professionals cited by The Washington Post, was a hacking group operating out of or at the direction of Iran—an increasingly powerful state actor in the world of cyber warfare. 

The Islamic Republic has been steadily improving and sharpening its cyber warfare, cyber espionage, and electronic sabotage abilities, staging complex operations that, while not always successful, show what experts in the field describe as devious inventiveness. 

In addition to its nuclear ambitions, its refining of missile technologies, and cultivation of armed ideologically motivated proxy paramilitary groups, Iran’s electronic warfare and intelligence operations are emerging as yet another worry about the country’s international posture. 

The cyber realm fits snugly into Iran’s security arsenal. It is characterized by the asymmetry, clandestinity, and plausible deniability that complement the proxy and shadow operations that have long been the Islamic Republic’s favored tools. 

Iran’s most aggressive cyber realm actions are also powered by a sense of righteous grievance and resentment, emotional and ideological motivations that have long energized the clerical establishment. After all, it was US and Israeli spy agencies that, according to many experts, launched the era of cyber warfare by deploying the Stuxnet virus against the country’s controversial nuclear program in 2010, damaging hundreds of its centrifuges. Tehran is proud that its growing army of techies is catching up with and, in some ways, surpassing the West at its own games. 

Iran’s cyber efforts have been steadily broadening. They range from attempting to hack into defense, civil society, and private systems abroad to harassment campaigns against opponents in the diaspora. Experts closely watching Iran’s Internet and electronic warfare activities have detected an escalation of its abilities and ambitions in recent months. In early May, Microsoft issued a warning about Iran’s increasingly aggressive and sophisticated tactics. 

“Iranian cyber actors have been at the forefront of cyber-enabled influence operations, in which they combine offensive cyber operations with multi-pronged influence operations to fuel geopolitical change in alignment with the regime’s objectives,” said the report by Microsoft’s Clint Watts, a former FBI cybersecurity expert. 

In particular, Iran appears to be building complex tactics that merge cyber and real-world operations to lure people into kidnappings. This new form of transnational repression has alarmed security professionals and governments worldwide.

“We’re seeing an evolution over time of this actor evolving and using their techniques in ever more complex ways,” Sherrod DeGrippo, a former head of threat research and detection at the cybersecurity firm Proofpoint, told me in January. “Iran is seen in the big four of the main actors. It is really stepping onto the stage and evolving what it’s doing.”

One particularly nefarious tactic is the creation of fake personas, in the form of researchers who approach targets to glean information or lure them out into the open for suspected kidnapping attempts. Through my research in Turkey, I learned that it is quite possible Iranian intelligence operatives have infiltrated the Turkish mobile phone networks and are using the data to track dissidents in the country. In one instance, a vocal dissident journalist received a message identifying a cafe near her home that she walked past every day. She was so terrified that she refused to leave her home for months and wound up obtaining asylum in a Western country.

In another instance, a dissident living in Turkey received messages with photographs of tourist sites he had recently visited on a trip to Istanbul. The speculation is that Iran had managed to purchase or surreptitiously access tracking data from his phone and used it to intimidate him.

According to a December 2022 report by Proofpoint, Iran’s cyber activities have gone beyond anonymous hacks and phishing campaigns to include made-up personas meant to lure people out into the open and, in at least one alleged case, a kidnapping attempt. Sometimes alleged Iranian operatives use US or Western phone numbers to register WhatsApp accounts, which can obscure their identities.

Last year, Israel’s domestic security service Shin Bet uncovered an alleged plot to use false identities with robust and complex legends to lure businessmen and scholars abroad in what security officials suspect were Iranian kidnapping plots. In one case, an operative pretending to be a prominent Swiss political scientist invited Israelis to a conference abroad. A number of Israelis were on the verge of traveling before the plot was exposed. 

Experts are also noticing that Iran is getting better and better at creating virtual honey traps. “They’re evolving their ability to create personas,” said DeGrippo, who has since moved to Microsoft. “They’ve used these personas that are mildly attractive. They like to use women’s names, as they have learned that they get a bit more interaction and success when they use female personas.”

The US and other Western countries are well aware of the threat posed by Iranian cyber operations and have taken steps to counter them. But Iran’s state-sponsored program continues to evolve. Tehran likely believes its cyber capabilities give it leverage to extract information without the messiness of a hostage crisis, the headlines of a boat seizure, the riskiness of a human intelligence operation, or the potential retribution of a missile strike.

In January, the London-based cybersecurity firm Secureworks published a report on the emergence of a new, likely Iranian hacking collective called Abraham’s Ax, which aimed to use leaks and hacks to prevent the expansion of the Abraham Accords normalizing ties between Israel and some Arab states. The collective leaked data allegedly stolen from the Saudi Ministry of the Interior and a recording said to be an intercepted phone conversation between Saudi ministers.

“There are clear political motivations behind this group with information operations designed to destabilize delicate Israeli-Saudi Arabian relations,” Rafe Pilling, a researcher at Secureworks, was quoted as saying.

Less than two months later, in March, Saudi Arabia signed a deal to resume ties with Iran rather than commence them with Israel, as many in Washington and Jerusalem were expecting.  

While Prime Minister Benjamin Netanyahu’s hardline government and his rightwing policies likely played a major role in Saudi Arabia’s decision to hold off on joining the Abraham Accords, Riyadh’s hope that it could rein in Iran’s diverse array of threats—including its increasing cyber warfare capabilities—was likely also a factor in its decision to pen the China-brokered deal with Tehran.

Iran invests in its cyber warfare program because it works.

Borzou Daragahi is an international correspondent for The Independent. He has covered the Middle East and North Africa since 2002. He is also a nonresident fellow with the Atlantic Council’s Middle East Security Initiative. Follow him on Twitter: @borzou.

The post Iran is using its cyber capabilities to kidnap its foes in the real world appeared first on Atlantic Council.

]]>
Regional cyber powers are banking on a wired future. Expanding the Abraham Accords to cybersecurity will help. https://www.atlanticcouncil.org/blogs/menasource/cybersecurity-iran-abraham-accords-israel/ Fri, 19 May 2023 19:44:07 +0000 https://www.atlanticcouncil.org/?p=647942 The Abraham Accord countries face threats from hostile actors, and defending their technology and their peoples is a challenge.

The post Regional cyber powers are banking on a wired future. Expanding the Abraham Accords to cybersecurity will help. appeared first on Atlantic Council.

]]>

The Abraham Accords rank among the major diplomatic achievements of the last five years. This historic agreement normalized relations between Israel and the Arab countries of Bahrain, Morocco, Sudan, and the United Arab Emirates (UAE), in partnership with the United States. Following the initial burst of activity late in the Donald Trump administration, the accords’ first expansion under the Joe Biden administration was announced in Tel Aviv on January 31, when Bahrain, Israel, the UAE, and the United States said they would widen the scope of the accords to include cybersecurity.

The January announcement by US Department of Homeland Security Under Secretary for Strategy, Policy, and Plans Robert Silvers was, like the accords themselves, a surprise that seems perfectly logical in hindsight. Israel and the Arab countries that participated in the announcement are among the Middle East and North Africa (MENA) region’s most dynamic economies, with substantial public and private investment in high tech an important factor in each country. These countries face threats from hostile actors, and defending their technology and their peoples is a challenge. A challenge shared can lead to a challenge overcome.

Cyberattacks from nation-states and cybercriminals affect everyone

Each of the countries involved, with the possible exception of Morocco, has recent historical reason to be concerned about protecting its people and its industrial base—cyber and non-cyber—against cyberattacks. The greatest threats come from the Islamic Republic of Iran and cybercriminals—and the two overlap like Venn diagram circles.

Iran displays a well-documented, peculiar sense of symmetry in how it conducts cyberattacks, and it maintains an especially aggressive offensive state cyber capability for a country its size. Most of Iran’s nearby peers in population (e.g., Turkey, Congo, Thailand, and Tanzania) or GDP per capita (e.g., Bosnia, Namibia, Paraguay, and Ecuador) do not mount offensive cyberattacks or information operations against other countries on the scale that Tehran does. Iran and Israel have been engaged in “gray zone” cyberattacks against each other for more than a decade, and Iran has carried out various kinds of cyber operations against Israel, Saudi Arabia, Bahrain, most of the Arab countries of the Gulf, and the United States.

Cybercrime is another threat that has increased in recent years. The United States has convened two international conferences on ransomware, with the most recent held in October-November 2022. The UAE and Saudi Arabia were the main targets in the Gulf for ransomware attacks, according to media reports, but other Gulf Arab countries are also at risk.

Complicating the picture is the fact that Iran often uses private contractors to carry out cyber operations—sometimes those entities carry out cyberattacks for profit as well. This complicates attribution and gives Tehran a patina of plausible deniability.

These factors make deterring cyberattacks especially difficult in the Middle East. The United States has sometimes retaliated against Iranian cyberattacks by carrying out operations against the perpetrators. However, the logic of deterrence requires an ability to impose costs that surpass the adversary’s perceived gains from the conduct in question. Iran has shown limited susceptibility thus far to being deterred by the US or others’ cyber operations. This makes cyber defense even more important.

Setting aside old rivalries to work together on cybersecurity is now in everyone’s interest

Iranian cyber behavior, the rising threat of cybercrime, and the inability so far to deter these behaviors have made it imperative that Israel, the Gulf countries, and the United States work more closely on civilian cyber defense.

Network imperatives make it important that this collaboration be both at network speeds and peer-to-peer. Cybersecurity needs to move quickly to be effective at addressing threats, which means that governments facing common threats should work together. The architecture of pre-Internet times allowed for hub-and-spoke information sharing in a situation where several governments were regional rivals but all had a common ally they could trust (usually, an ally that was considerably far away).

As a result, the United States could simultaneously be an ally of Israel and most Arab countries in the Middle East, and each of the countries would be willing to share information with the United States, even if they wouldn’t do so with each other (France and the United Kingdom have played similar roles with different sets of countries). Each country could trust the United States to protect its sources and methods while working for the common good, which, in earlier days, was focused on keeping the Soviet Union at bay.

For a time, this approach worked in cybersecurity. But this is no longer the case. Al-Qaeda and the Islamic State of Iraq and al-Sham (ISIS) were social-media savvy but lacked the resources and deep bench of a nation-state, allowing the United States and MENA governments to limit terrorists’ efforts to raise funds and recruit new fighters.

Today, Iran, even under sanctions, has far more resources than al-Qaeda ever did to use cyber tools to target Israel and the Gulf Arab states. While there are signs that a lack of funds holds back some of Iran’s cyber operations, cyberattacks are still remarkably cost-effective. Cybercrime raises enough funds to enrich organized gangs to run their own 24/7 ransomware help desks. “Ransomware-as-a-service” is now an actual thing.

The countries in the MENA region still face a number of challenges in the cyber domain. The use of Chinese technology by some countries raises fears of possible network penetration. Each country needs to work out how privacy norms and expectations should govern electronic surveillance tools, because the abuse of those tools has become an international concern. US concerns over “spyware” have already led to an executive order against the use of commercial tools that pose a risk to national security or have been misused to enable human rights abuses around the world.

A number of countries in MENA—Israel, Bahrain, and the UAE included—are increasingly becoming regional cyber powers and are banking on a wired future. Many governments in the region are trying to stimulate local investment in the digital sector, and protecting small but growing companies from cyber threats is becoming a significant business, with market research experts estimating a doubling of dollar volume in five years. The UAE’s new National Security Strategy aims to train more than forty thousand cybersecurity professionals and encourages Emirati students to pursue a career in this field.

To the private sector, an agreement among Abraham Accords members is more than just a sign of possible government-to-government cooperation. The agreement gives a valuable green light for direct business-to-business exchanges that could benefit the economy of the region. It may also heighten the value of joining the accords for other nations facing cyber threats, such as Saudi Arabia.

Given the importance of a closer cybersecurity partnership among Israel, key Gulf Arab states, and the United States, broadening the Abraham Accords to include cybersecurity is an eminently sensible approach. Like other parts of the accords, expanding them to include cybersecurity will have a lasting impact if cooperation leads to real benefits in security and commerce, making the Middle East more secure and prosperous than ever before.

Thomas S. Warrick is the director of the Future of DHS project at the Scowcroft Center for Strategy and Security’s Forward Defense practice, and a senior fellow with the Scowcroft Middle East Security Initiative at the Atlantic Council.

The post Regional cyber powers are banking on a wired future. Expanding the Abraham Accords to cybersecurity will help. appeared first on Atlantic Council.

]]>
The 5×5—Cryptocurrency hacking’s geopolitical and cyber implications https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cryptocurrency-hackings-geopolitical-and-cyber-implications/ Wed, 03 May 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=641955 Experts explore the cybersecurity implications of cryptocurrencies, and how the United States and its allies should approach this challenge.

The post The 5×5—Cryptocurrency hacking’s geopolitical and cyber implications appeared first on Atlantic Council.

]]>
This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

In January 2023, a South Korean intelligence service and a team of US private investigators conducted an operation to interdict $100 million worth of stolen cryptocurrency before its hackers could successfully convert the haul into fiat currency. The operation was the culmination of a roughly seven-month hunt to trace and retrieve the funds, stolen in June 2022 from a US-based cryptocurrency company, Harmony. The Federal Bureau of Investigation (FBI) attributed the theft to a team of North Korean state-linked hackers—one in a string of massive cryptocurrency hauls aimed at funding the hermit kingdom’s illicit nuclear and missile programs. According to blockchain analysis firm Chainalysis, North Korean hackers stole roughly $1.7 billion worth of cryptocurrency in 2022—a large percentage of the approximately $3.8 billion stolen globally last year.

North Korea’s operations have brought attention to the risks surrounding cryptocurrencies and how state and non-state groups can leverage hacking operations against cryptocurrency wallets and exchanges to further their geopolitical objectives. We brought together a group of experts to explore cybersecurity implications of cryptocurrencies, and how the United States and its allies should approach this challenge.

#1 What are the cybersecurity risks of decentralized finance (DeFi) and cryptocurrencies? What are the cybersecurity risks to cryptocurrencies?

Eitan Danon, senior cybercrimes investigator, Chainalysis

Disclaimer: Any views and opinions expressed are the author’s alone and do not reflect the official position of Chainalysis. 

“DeFi is one of the cryptocurrency ecosystem’s fastest-growing areas, and DeFi protocols accounted for 82.1 percent of all cryptocurrency stolen (totaling $3.1 billion) by hackers in 2022. One important way to mitigate against this trend is for protocols to undergo code audits for smart contracts. This would prevent hackers from exploiting vulnerabilities in protocols’ underlying code, especially for cross-chain bridges, a popular target for hackers that allows users to move funds across blockchains. As far as the risk to cryptocurrencies, the decentralized nature of cryptocurrencies increases their security by making it extraordinarily difficult for a hostile actor to take control of permissionless, public blockchains. Transactions associated with illicit activity continue to represent a minute portion (0.24 percent) of the total crypto[currency] market. On a fundamental level, cryptocurrency is a technology—like data encryption, generative artificial intelligence, and advanced biometrics—and thus a double-edged sword.” 

Kimberly Donovan, director, Economic Statecraft Initiative, and Ananya Kumar, associate director of digital currencies, GeoEconomics Center, Atlantic Council

“We encourage policymakers to think about cybersecurity vulnerabilities of crypto-assets and services in two ways. The first factor is the threat of cyberattacks for issuers, exchanges, custodians, or wherever user assets are pooled and stored. Major cryptocurrency exchanges like Binance and FTX have had serious security breaches, which have led to millions of dollars being stolen. The second factor to consider is the use of crypto-assets and crypto-services in money laundering. Attackers often use cryptocurrencies to receive payments because financial trails can be hidden or obfuscated, as is frequently seen in ransomware attacks. Certain kinds of crypto-services such as DeFi mixers and aggregators allow for a greater degree of anonymity to launder money for criminals, who are interested in hiding money and moving it quickly across borders.” 

Giulia Fanti, assistant professor of electrical and computer engineering, Carnegie Mellon University

“The primary cybersecurity risks (and benefits) posed by DeFi and cryptocurrencies are related to lack of centralized control, which is inherent to blockchain technology and the philosophy underlying it. Without centralized control, it is very difficult to control how these technologies are used, including for nefarious purposes. Ransomware, for example, enables the flow of money to cybercriminal organizations. The primary cybersecurity risks to cryptocurrencies, on the other hand, can occur at many levels. Cryptocurrencies are built on various layers of technology, ranging from an underlying peer-to-peer network to a distributed consensus mechanism to the applications that run atop the blockchain. Attacks on cryptocurrencies can happen at any of these layers. The most widely documented attacks—and those with the most significant financial repercussions—are happening at the application layer, usually exploiting vulnerabilities in smart contract code (or in some cases, private code supporting cryptocurrency wallets) to steal funds.” 

Zara Perumal, chief technology officer, Overwatch Data

“Decentralized means no one person or institution is in control. It also means that no one person can easily step in to enforce rules. In cases like Glupteba, fraudulent servers or data listed on a blockchain can be hard to take down, in comparison to cloud-hosted servers, where companies can intervene. Cybersecurity risks to cryptocurrencies include endpoint risk: since there is no centralized party to handle restoring accounts, standard forms of credential theft are a risk to cryptocurrency users. There is a bigger risk in cases like crypto[currency] lending, where one wallet or owner holds a lot of keys and is a large target. In 2022, there were numerous high-profile protocol attacks, including the Wormhole, Ronin, and BitMart attacks. These attacks highlight the risks associated with fundamental protocol vulnerabilities in blockchains, smart contracts, or user interfaces.”

#2 What organizations are most active and capable of cryptocurrency hacking and what, if any, geopolitical impact does this enable for them?

Danon: “North Korea- and Russia-based actors remain at the forefront of crypto[currency] crime. North Korea-linked hackers, such as those in the Lazarus Group cybercrime syndicate, stole an estimated $1.7 billion in 2022 in crypto[currency] hacks that the United Nations and others ­­have assessed the cash-strapped regime uses to fund its weapons of mass destruction and ballistic missile programs. Press reporting about Federation Tower East—a skyscraper in Moscow’s financial district housing more than a dozen companies that convert crypto[currency] to cash—has highlighted links between some of these companies and money laundering associated with the ransomware industry. Last year’s designations of Russia-based cryptocurrency exchanges Bitzlato and Garantex for laundering hundreds of millions of dollars’ worth of crypto[currency] for Russia-based darknet markets and ransomware actors cast the magnitude of this problem into starker relief and shed light on a diverse constellation of cybercriminals. Although many pundits have correctly noted that Russia cannot ‘flip a switch’ and run its G20 economy on the blockchain, crypto[currency] can enable heavily sanctioned countries, such as Russia, North Korea, and others, to project power abroad while generating sorely needed revenue.” 

Donovan and Kumar: “We see actors from North Korea, Iran, and Russia using both kinds of cybersecurity threats described above to gain access to money and move it around without compliance. Geopolitical implications include sanctioned state actors or state-sponsored actors using the technology to generate revenue and evade sanctions. Hacking and cyber vulnerabilities are not specific to the crypto-industry and exist across digital infrastructures, specifically payments architecture. These threats can lead to national security implications for the private and public entities accessing or relying on this architecture.” 

Perumal: “Generally, there are state-sponsored hacking groups that are targeting cryptocurrencies for financial gain, but also those, like the Lazarus Group, that are disrupting the cryptocurrency industry. Criminal hacking groups may use cryptocurrency to receive ransom payments or attack on-chain protocols. These groups may or may not be associated with a government or political agenda. Many actors are purely financially motivated, while other government actors may hack to attack adversaries without escalating to kinetic impact.”

#3 How are developments in technology shifting the cryptocurrency hacking landscape?

Danon: “The continued maturation of the blockchain analytics sector has made it harder for hackers and other illicit actors to move their ill-gotten funds undetected. The ability to visualize complex crypto[currency]-based money laundering networks, including across blockchains and smart contract transactions, has been invaluable in enabling financial institutions and crypto[currency] businesses to comply with anti-money laundering and know-your-customer requirements, and empowering governments to investigate suspicious activity. In some instances, hackers have chosen to let stolen funds lie dormant in personal wallets, as sleuths on crypto[currency] Twitter and in industry forums publicly track high-profile hacks and share addresses in real-time, complicating efforts to off-ramp stolen funds. In other instances, this has led some actors to question whether this transparency risks unnecessary scrutiny from authorities. For example, in late April, Hamas’s military wing, the Izz al-Din al-Qassam Brigades, publicly announced that it was ending its longstanding cryptocurrency donation program, citing successful government efforts to identify and prosecute donors.” 
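The core of the fund-flow tracing described above can be sketched as a reachability search over transaction records. The addresses and transfers below are entirely invented for illustration; real blockchain analytics works over full on-chain data with amounts, timestamps, and address-clustering heuristics.

```python
from collections import deque

# Toy transaction graph: (sender, receiver) records. A breadth-first
# search from a hacked wallet finds which known exchange addresses the
# stolen funds can reach, and in how many hops.
transfers = [
    ("hacked_wallet", "hop1"),
    ("hop1", "hop2a"), ("hop1", "hop2b"),   # funds split across chains of hops
    ("hop2a", "mixer"), ("mixer", "hop3"),
    ("hop2b", "exchange_A"),
    ("hop3", "exchange_B"),
    ("unrelated", "exchange_C"),
]
exchanges = {"exchange_A", "exchange_B", "exchange_C"}

def trace(source):
    """Return exchange addresses reachable from `source`, with hop counts."""
    hops = {source: 0}
    reached = {}
    queue = deque([source])
    while queue:
        addr = queue.popleft()
        for sender, receiver in transfers:
            if sender == addr and receiver not in hops:
                hops[receiver] = hops[addr] + 1
                if receiver in exchanges:
                    reached[receiver] = hops[receiver]
                queue.append(receiver)
    return reached

print(trace("hacked_wallet"))  # {'exchange_A': 3, 'exchange_B': 5}
```

Note that `exchange_C` never appears in the result because it is only funded by an unrelated address—the same logic that lets investigators separate stolen funds from ordinary flows, while mixers (here just one extra hop) try to make the graph harder to follow.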

Donovan and Kumar: “Industry is responding and innovating in this space to develop technology to protect and/or trace cyber threats and cryptocurrency hacks. We are also seeing the law enforcement, regulatory, and other government communities develop the capability and expertise to investigate these types of cybercrimes. These communities are taking steps to make public the information gathered from their investigations, which further informs the private sector to safeguard against cyber operations as well as technology innovations to secure this space.” 

Fanti: “They are not really. For the most part, hacks on cryptocurrencies are not increasing in frequency because of sophisticated new hacking techniques, but rather because of relatively mundane vulnerabilities in smart contracts. There has been some research on using cutting-edge tools such as deep reinforcement learning to try to gain funds from smart contracts and other users, particularly in the context of DeFi. However, it is unclear to what extent DeFi users are using such tools; on-chain records do not allow observers to definitively conclude whether such activity is happening.” 

Perumal: “As the rate of ransomware attacks rises, cryptocurrency is more often used as a mechanism to pay ransoms. For both ransom payments and stolen cryptocurrency, defenders aim to track actors across the blockchain, and threat actors increase their usage of mixers and microtransactions to hide their tracks. A second trend is crypto-jacking, using cloud computing from small to large services to fund mining. The last development is not new. Sadly, phishing and social engineering for crypto[currency] logins is still a pervasive threat, and there is no technical solution to easily address human error.”

More from the Cyber Statecraft Initiative:

#4 What has been the approach of the United States and allied governments toward securing this space? How should they be approaching it?

Danon: “The US approach toward securing the space has centered on law enforcement actions, including asset seizures and takedowns with partners of darknet markets, such as Hydra Market and Genesis Market. Sanctions in the crypto[currency] space, which have dramatically accelerated since Russia’s invasion of Ukraine last February, have generated awareness about crypto[currency]-based money laundering. However, as is the case across a range of national security problems, the United States has at times over-relied on sanctions, which are unlikely to change actors’ behavior in the absence of a comprehensive strategy. The United States and other governments committed to AML should continue to use available tools and data offered by companies like Chainalysis to disrupt and deter bad actors from abusing the international financial system through the blockchain. Given the blockchain’s borderless and unclassified nature, the United States should also pursue robust collaboration with other jurisdictions and in multilateral institutions.” 

Donovan and Kumar: “The United States and its allies are actively involved in this space to prevent regulatory arbitrage and increase information sharing on cyber risks and threats. They have also increased communication with the public and private sectors to make them aware of cyber risks and threats, and are making information available to the public and industry to protect consumers against cybercrime. Government agencies and allies should continue to approach this issue by increasing public awareness of the threats and enabling industry innovation to protect against them.” 

Fanti: “One area that I think needs more attention from a consumer protection standpoint is smart contract security. For example, there could be more baseline requirements and transparency in the smart contract ecosystem about the practices used to develop and audit smart contracts. Users currently have no standardized way to evaluate whether a smart contract was developed using secure software development practices or tested prior to deployment. Standards bodies could help set up baseline requirements, and marketplaces could be required to report such details. While such practices cannot guarantee that a smart contract is safe, they could help reduce the prevalence of some of the most common vulnerabilities.” 

Perumal: “Two recent developments from the US government are the White House cybersecurity strategy and the Cybersecurity and Infrastructure Security Agency’s (CISA) move to ‘secure by default.’ They both emphasize cooperation with the private sector to move security of this ecosystem to cloud providers. While the system is inherently decentralized, if mining or credential theft is happening on major technology platforms, these platforms have an opportunity to mitigate risk. The White House emphasized better tracing of transactions to ‘trace and interdict ransomware payments,’ and CISA emphasizes designing software and crypto[currency] systems to be secure by default so smaller actors and users bear less of the defensive burden. At a high level, I like that this strategy moves protections to large technology players that can defend against state actors. I also like the focus on flexible frameworks that prioritize economics (e.g., cyber liability) to set the goal while letting the market be flexible on the solution—as opposed to a prescriptive regulatory approach that cannot adapt to new technologies. In some of these cases, I think cost reduction may be a better lever than liability, which promotes fear on a balance sheet; however, I think the push toward financially motivated goals and flexible solutions is the right direction.”

#5 Has the balance of the threats between non-state vs. state actors against cryptocurrencies changed in the last five years? Should we be worried about the same entities as in 2018?

Danon: “Conventional categories of crypto[currency]-related crime, such as fraud shops, darknet markets, and child abuse material, are on the decline. Similarly, the threat from non-state actors, such as terrorist groups, remains extremely low relative to nation states, with actors such as North Korea and Russia continuing to leverage their technical sophistication to acquire and move cryptocurrency. With great power competition now dominating the policy agenda across many capitals, analysts should not overlook other ways in which states are exercising economic statecraft in the digital realm. For example, despite its crypto[currency] ban, China’s promotion of its permissioned, private blockchain, the Blockchain-based Service Network, and its central bank digital currency, the ‘digital yuan,’ deserve sustained research and analysis. Against the backdrop of China’s rise and the fallout from the war on Ukraine, it will also be instructive to monitor the efforts of Iran, Russia, and others to support non-dollar-pegged stablecoins and other initiatives aimed at eroding the dollar’s role as the international reserve currency.” 

Donovan and Kumar: “More is publicly known now on the range of actors in this space than ever. Agencies such as CISA, FBI, and the Departments of Justice and the Treasury and others have made information available and provided a wide array of resources for people to get help or learn—such as stopransomware.gov. Private blockchain analytics firms have also enabled tracing and forensics, which in partnership with enforcement can prevent and punish cybercrime in the crypto[currency] space. Both the knowledge about ransomware and awareness of ransomware attacks have increased since 2018. As the popularity of Ransomware as a Service rises, both state and non-state actors can cause destruction. We should continue to be worried about cybercrime in general and remain agnostic of the actors.” 

Perumal: “State actors continue to get more involved in this space. As cryptocurrencies and some digital currencies based on the blockchain become more mainstream, attacking it allows a more targeted geopolitical impact. In addition to attacks by governments (like Lazarus Group), a big recent development was China’s ban on cryptocurrency, which moved mining power from China to other parts of the world, especially the United States and Russia. This changed attack patterns and targets. At a high level, we should be worried about both financially-motivated and government-backed groups, but as the crypto[currency] market grows so does the sophistication of attacks and attackers.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Cryptocurrency hacking’s geopolitical and cyber implications appeared first on Atlantic Council.

Practice makes perfect: What China wants from its digital currency in 2023 https://www.atlanticcouncil.org/blogs/econographics/practice-makes-perfect-what-china-wants-from-its-digital-currency-in-2023/ Mon, 24 Apr 2023 16:58:55 +0000 https://www.atlanticcouncil.org/?p=639365 The e-CNY network has expanded over the last year, and China's goals have only become clearer. Domestically, the People's Bank of China is still in test-and-learn mode; globally, China is more focused on setting international standards.

The post Practice makes perfect: What China wants from its digital currency in 2023 appeared first on Atlantic Council.

It’s been a year since the Beijing Olympics, where China’s central bank digital currency (CBDC), the e-CNY, debuted in front of an international audience. As the e-CNY network has expanded over the last 12 months, China’s goals have become clearer. Domestically, the People’s Bank of China (PBOC) is still in test-and-learn mode, prioritizing experimentation over adoption. Globally, China is less focused on internationalizing the RMB than it is on setting technical and regulatory standards that will define how other countries’ central bank digital currencies will work going forward. 

Domestic ambitions 

Even with its persistently low adoption rates, the e-CNY is by far the largest CBDC pilot in the world by both the amount of currency in circulation—13.61 billion RMB—and the number of users—260 million wallets. As the pilot regions have expanded to 25 cities, so have the real-world use cases tested through the pilots. From the start, the PBOC's objective within its borders has been not just to compete in China's domestic payments landscape, which is dominated by two "private" players—AliPay and TencentPay/WePay—but to expand the universe of economic activities included in the state-enabled payments network. So far, common use cases being tested include public transportation, public health checkpoints including COVID test centers, integrated identification cards to receive and pay utilities such as retirement benefits and school tuition payments, as well as tax payments and refunds. The pilots have also begun testing technical and programmability functions like smart contracts for B2B and B2C functions, e-commerce, and credit provision. Some of these projects are described in the table below.
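The "programmability" functions mentioned above are easiest to picture as conditional payments: money that carries rules about where and when it can be spent. The sketch below is a purely hypothetical illustration of that idea; the class name, merchant categories, and rules are invented for this example and do not reflect the PBOC's actual design.

```python
# Hypothetical sketch of a "programmable payment" of the kind CBDC pilots
# test, e.g., a subsidy spendable only at approved merchants before an
# expiry date. This is NOT the PBOC's actual implementation.
from dataclasses import dataclass
from datetime import date

@dataclass
class ProgrammableVoucher:
    amount: float
    approved_merchants: set[str]
    expires: date

    def can_spend(self, merchant: str, on: date) -> bool:
        """A transfer clears only if the programmed conditions hold."""
        return merchant in self.approved_merchants and on <= self.expires

voucher = ProgrammableVoucher(100.0, {"transit", "utilities"}, date(2023, 12, 31))
print(voucher.can_spend("transit", date(2023, 6, 1)))      # allowed merchant, in date
print(voucher.can_spend("electronics", date(2023, 6, 1)))  # merchant not approved
```

The same conditional-transfer pattern is what makes such a network attractive for targeted stimulus, and what raises the surveillance concerns discussed below: every rule the issuer can program, the issuer can also observe being triggered.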

These domestic test cases are likely to expand this year to cover a broader range of activities and regions. Already, the PBOC is looking to reach the margins of society: the e-CNY is being tested among elderly populations and in broader rural connectivity schemes initiated to improve digitization. It is also aiming to reach AliPay and TencentPay/WePay customers by integrating their wallet and e-commerce functions for e-CNY distribution. Over the last few years, the PBOC, like the broader Chinese state apparatus, has displayed a tendency toward centralizing regulatory authority over the two sectors at the intersection of CBDCs—finance and technology. The universe of expanded economic networks enabled by the e-CNY has rightly created concerns about the centralization of authority by the PBOC and the resulting impacts on users' freedom of choice and exposure to state surveillance. The expanded network of use cases across applications that collect data on personal identification, health information, and consumption habits and behavior should also raise concerns about the vulnerability of such data to cyber threats at home and abroad.

Recent developments on regulation

Interestingly, on the regulatory side, several changes to China's financial regulators were announced at the National People's Congress in early March. The PBOC has lost its authority over financial holding companies and financial consumer protection regulation to a new regulator, the State Administration of Financial Supervision, which will also oversee banking and insurance regulation. The PBOC is also opening 31 new provincial-level branches, signaling deeper coordination between the PBOC and provincial-level authorities. This reshuffle signals further centralization of power under the party apparatus. Unlike other central banks, the PBOC is not fully independent: it requires the State Council to sign off on decisions relating to money supply and interest rates, and the State Council has been tracking the PBOC's research into the e-CNY since approving the initial plan in 2016.

From a monetary policy perspective, the e-CNY infrastructure could be a handy tool in the hands of the PBOC, with which it can increase or decrease the money supply. As China devises a strategy to stimulate consumer spending this year, there is an opportunity to do so by using and expanding the e-CNY network. China has already increased banks' short-term liquidity by $118 billion and long-term liquidity by $72 billion through reducing reserve ratio requirements this year.
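As a rough illustration of the mechanism behind those liquidity figures: cutting the reserve requirement ratio (RRR) frees deposits that banks previously had to park as reserves. The deposit base and ratios below are hypothetical, chosen only to show the arithmetic, not actual PBOC figures.

```python
# Illustrative sketch (hypothetical figures): how a reserve requirement
# ratio cut releases liquidity into the banking system.
def liquidity_released(deposit_base: float, old_rrr: float, new_rrr: float) -> float:
    """Reserves freed for lending = deposit base x (old ratio - new ratio)."""
    return deposit_base * (old_rrr - new_rrr)

# Example: a 0.25-percentage-point cut on a hypothetical
# 280-trillion-RMB deposit base
freed = liquidity_released(280e12, 0.1100, 0.1075)
print(f"Liquidity released: {freed / 1e12:.2f} trillion RMB")
```

The same lever works in reverse: raising the ratio pulls liquidity back out, which is why a widely adopted e-CNY network would give the PBOC an unusually direct transmission channel for such adjustments.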

PBOC’s ambition for an all-encompassing domestic network of e-CNY infrastructure raises questions about the state’s ability and reach in controlling citizens’ activities. The pilots test real-world scenarios for CBDC use cases, and while adoption has been low, the broad range of applications suggest that testing, not adoption, is the priority for now.

e-CNY around the world

Use of the word “e-CNY” commonly refers to this domestic, retail payments infrastructure. However, much of the discussion in Washington references the cross-border, wholesale capabilities that the PBOC has been testing publicly for a while now. The PBOC participates in a joint experiment with the Hong Kong Monetary Authority, the Bank of Thailand, the Central Bank of the UAE and the Bank for International Settlements named Project mBridge, the purpose of which is to create a common infrastructure across borders to facilitate real-time and cheap transaction settlement. Last October, the project successfully conducted 164 transactions in collaboration with 20 banks across the 4 countries, settling a total of $22 million. Instead of relying on correspondent banking networks, banks were able to link with their foreign counterparts directly to conduct payments, FX settlements, redemptions and issuance across e-HKD, e-AED, e-THB and e-CNY. Interestingly, almost half of all transactions were in e-CNY, which amounted to approximately $1,705,453 issued, $3,410,906 used in payments and FX settlements and $6,811,812 redeemed. Both issuance and redemption transactions were highest in e-CNY, and as stated by the BIS, it was likely because of the automatic integration of the retail e-CNY system and the higher share of RMB in regional trade settlements. 
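The reported per-currency figures can be sanity-checked against the roughly $22 million total. Summing the e-CNY amounts (the dollar values are taken from the paragraph above; the $22 million total is approximate) shows e-CNY accounted for roughly half of the value settled:

```python
# Cross-checking the reported mBridge figures: e-CNY share of total value
# settled. Figures are the approximate amounts cited in the text.
ecny_usd = {
    "issued": 1_705_453,
    "payments_and_fx": 3_410_906,
    "redeemed": 6_811_812,
}
total_settled = 22_000_000  # approximate total across all four currencies

ecny_total = sum(ecny_usd.values())
share = ecny_total / total_settled
print(f"e-CNY value: ${ecny_total:,} (~{share:.0%} of the total settled)")
```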

Analysts have characterized wholesale cross-border arrangements like mBridge as an effort toward de-dollarization and the internationalization of the RMB. The e-CNY, much like its physical counterpart, faces the same liquidity constraints due to capital controls on offshore transactions and holdings. This was reflected in the mBridge experiment, as a main piece of feedback from participants was the need for greater liquidity from FX market makers and other liquidity providers to improve the FX transaction capabilities of the platform. Additionally, even if the e-CNY were to become freely traded in the future, it could lead to significant appreciation of the RMB and balance-of-payments issues for the PBOC. This is likely not a desirable outcome for the PBOC, which is why currency arrangements like mBridge can only have a limited impact on the role of the dollar.

If winning the currency competition is an unlikely short-term objective of the PBOC, what has raised national security concerns regarding the e-CNY? China has long used the rhetoric of international cooperation and "do no harm" principles in its cross-border CBDC engagements. However, these cross-border experiments require months of preparation and coordination between central and commercial banks to ensure that regulatory and jurisdictional requirements are aligned. They highlight the need for legal pathways and standards for data sharing, privacy, and risk frameworks between heretofore unsynchronized jurisdictions. Similarly, the experiments rely on technological prototypes that interact with the existing domestic e-CNY framework, creating de facto technical standards for cross-border transactions that are likely to be replicated by other jurisdictions. What can potentially emerge is a set of technical and regulatory standards built in the image of the e-CNY, and with that comes the baggage of surveillance and unauthorized access by the Chinese state. mBridge's platform, for instance, can be utilized for domestic CBDC infrastructure if required by any jurisdiction.

Already, the Chinese company Red Date Technology—which, along with China Mobile, UnionPay, the State Information Center, and others, is behind the creation of the Blockchain-based Service Network (BSN), a blockchain infrastructure service that connects different payment networks—has launched a similar product under the name of the Universal Digital Payments Network. At an event at the World Economic Forum in January 2023, it targeted emerging markets experimenting with CBDCs and stablecoins, since the project aims to build an interconnected global architecture in the vein of BSN's ambitions.

Technological and regulatory replication by country blocs, enabled by Chinese state and private actors, could create a parallel system of financial networks outside of the dollar, especially where there is a high volume of transactions. The United States relies on the dollar’s dominance to establish global anti-money laundering standards and achieve effective and broad implementation of financial sanctions. The emergence of alternate currency-blocs—enabled by e-CNY-like technology—has the potential to chip away at the primacy of the dollar in global finance and trade, as the dollar will not be the only available option. 

Therefore, even though the development of the e-CNY is unlikely to lead to a broader share for the RMB as a payment or reserve currency, replication of the e-CNY's technical and regulatory model could further a payments infrastructure that is not only inherently incompatible with the dollar, but also exacerbates the privacy and surveillance concerns of the retail e-CNY by exporting the problem to the world. China's domestic motivations of greater control and surveillance are thus intertwined with its global ambitions, and the consequences will be dire in the absence of a competing, privacy-preserving, dollar-enabling payments infrastructure.


Ananya Kumar is the associate director for digital currencies with the GeoEconomics Center.

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

Central Bank Digital Currency Tracker

Our flagship Central Bank Digital Currency (CBDC) Tracker takes you inside the rapid evolution of money all over the world. The interactive database now features 130 countries—triple the number of countries we first identified as being active in CBDC development in 2020.

Russia’s invasion of Ukraine is also being fought in cyberspace https://www.atlanticcouncil.org/blogs/ukrainealert/russias-invasion-of-ukraine-is-also-being-fought-in-cyberspace/ Thu, 20 Apr 2023 16:30:09 +0000 https://www.atlanticcouncil.org/?p=638524 While the war in Ukraine often resembles the trench warfare of the twentieth century, the battle for cyber dominance is highly innovative and offers insights into the future of international aggression, writes Vera Mironova.

The post Russia’s invasion of Ukraine is also being fought in cyberspace appeared first on Atlantic Council.

The Russian invasion of Ukraine is the first modern war to feature a major cyber warfare component. While the conventional fighting in Ukraine often resembles the trench warfare of the early twentieth century, the evolving battle for cyber dominance is highly innovative and offers important insights into the future of international aggression.

The priority for Ukraine’s cyber forces is defense. This is something they have long been training for and are excelling at. Indeed, Estonian PM Kaja Kallas recently published an article in The Economist claiming that Ukraine is “giving the free world a masterclass on cyber defense.”

When Russian aggression against Ukraine began in 2014 with the invasion of Crimea and eastern Ukraine’s Donbas region, Russia also began launching cyber attacks. One of the first attacks was an attempt to falsify the results of Ukraine’s spring 2014 presidential election. The following year, an attempt was made to hack into Ukraine’s electricity grid. In 2017, Russia launched a far larger malware attack against Ukraine known as NotPetya that Western governments rated as the most destructive cyber attack ever conducted.

In preparation for the full-scale invasion of 2022, Russia sought to access Ukraine’s government IT platforms. One of the goals was to obtain the personal information of Ukrainians, particularly those working in military and law enforcement. These efforts, which peaked in January 2022 in the weeks prior to the invasion, failed to seriously disrupt Ukraine’s state institutions but provided the country’s cyber security specialists with further important experience. “With their nonstop attacks, Russia has effectively been training us since 2014. So by February 2022, we were ready and knew everything about their capabilities,” commented one Ukrainian cyber security specialist involved in defending critical infrastructure who was speaking anonymously as they were not authorized to discuss details.


Ukrainian specialists say that while Russian hackers previously tried to disguise their origins, many now no longer even attempt to hide their IP addresses. Instead, attacks have become far larger in scale and more indiscriminate in nature, with the apparent goal of seeking to infiltrate as many systems as possible. However, the defenders of Ukraine’s cyberspace claim Russia’s reliance on the same malware and tactics makes it easier to detect them.

The growing importance of digital technologies within the Ukrainian military has presented Russia with an expanding range of high-value targets. However, efforts to access platforms like Ukraine's Delta situational awareness system have so far proved unsuccessful. Speaking off the record, Ukrainian specialists charged with protecting Delta say Russian hackers have used a variety of different methods. "They tried phishing attacks, but this only resulted in our colleagues having to work two extra hours to block them. They have also created fake interfaces to gain passwords and login details."

Ukrainian security measures that immediately detect and block unauthorized users requesting information have proved effective for the Delta system and similar platforms. Russian hackers have had more success targeting the messaging platforms and situation reports of various individual Ukrainian military units. However, due to the fast-changing nature of the situation along the front lines, this information tends to become outdated very quickly and therefore is not regarded as a major security threat.

Ukraine’s cyber efforts are not exclusively focused on defending the country against Russian attack. Ukrainians have also been conducting counterattacks of their own against Russian targets. One of the challenges they have encountered is the comparatively low level of digitalization in modern Russian society compared to Ukraine. “We could hack into Russia’s railway IT systems, for example, but what information would this give us? We would be able to access train timetables and that’s all. Everything else is still done with paper and pens,” notes one Ukrainian hacker.

This has limited the scope of Ukrainian cyber attacks. Targets have included the financial data of Russian military personnel via Russian banks, while hackers have penetrated cartographic and geographic information systems that serve as important infrastructure elements of the Ukraine invasion. Ukrainian cyber attacks have also played a role in psychological warfare efforts, with Russian television and radio broadcasts hacked and replaced with content revealing suppressed details of the invasion including Russian military casualties and war crimes against Ukrainian civilians.

While Ukraine’s partners throughout the democratic world have provided the country with significant military aid, the international community has also played a role on the cyber front. Many individual foreign volunteers have joined the IT Army of Ukraine initiative, which counts more than 200,000 participants. Foreign hacker groups are credited with conducting a number of offensive operations against Russian targets. However, the large number of people involved also poses significant security challenges. Some critics argue that the practice of making Russian targets public globally provides advance warning and undermines the effectiveness of cyber attacks.

Russia has attempted to replicate Ukraine’s IT Army initiative with what they have called the Cyber Army of Russia, but this is believed to have attracted fewer international recruits. Nevertheless, Russia’s volunteer cyber force is thought to have been behind a number of attacks on diverse targets including Ukrainian government platforms and sites representing the country’s sexual minorities and cultural institutions.

The cyber front of the Russo-Ukrainian War is highly dynamic and continues to evolve. With a combination of state and non-state actors, it is a vast and complex battlefield full of gray zones and new frontiers. Both combatant countries have powerful domestic IT industries and strong reputations as hacker hubs, making the cyber front a particularly fascinating aspect of the wider war. The lessons learned are already informing our knowledge of cyber warfare and are likely to remain a key subject of study in the coming decades for anyone interested in cyber security.

Vera Mironova is an associate fellow at Harvard University’s Davis Center and author of Conflict Field Notes. You can follow her on Twitter at @vera_mironov.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


Critical infrastructure cybersecurity prioritization: A cross-sector methodology for ranking operational technology cyber scenarios and critical entities https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/critical-infrastructure-cybersecurity-prioritization/ Wed, 19 Apr 2023 13:01:19 +0000 https://www.atlanticcouncil.org/?p=636290 As critical infrastructure becomes increasingly targeted by malicious adversaries, how can we effectively prioritize criticality?

The post Critical infrastructure cybersecurity prioritization: A cross-sector methodology for ranking operational technology cyber scenarios and critical entities appeared first on Atlantic Council.


Executive summary

“Cyber policy today has created a world in which seemingly everything non-military can be held at risk—hospitals, trains, dams, energy, water—and nothing is off limits.”1

Policy experts have long looked to other fields to gain a better understanding of cyber issues—natural disasters, terrorism, insurance and finance, and even nuclear weapons—due to the “always/never” rule. The always/never concept stipulates that weapons must always work correctly when they are supposed to and never be launched or detonated by accident or sabotage. The application of the always/never rule to process control systems across an increasingly digitized critical infrastructure landscape is incredibly difficult to master.

Threading the tapestry of risk across critical infrastructure requires a more granular and purposeful model than the current approach to classifying critical infrastructure can deliver. Failing to contextualize the broad problem set that is critical infrastructure cybersecurity risks increasing the cost of compliance-based cybersecurity to the extent that small- and medium-sized businesses cannot afford the expense and/or expect the government to provide managed cybersecurity services for designated concentrations of risk across multiple sectors—an imprudent, expensive, and unsustainable outcome.

Informing decision-makers requires deeper analysis of critical infrastructure targets (through available open-source intelligence, criticality and vulnerability data, the degradation of operations by cyber means, and mean time to recover from cyber impacts) that does not yet exist at scale. This paper offers an initial step focused on cyber-physical operations, discussing the limitations of current methods for prioritizing critical infrastructure cybersecurity and outlining a methodology for prioritizing scenarios and entities across sectors and local, state, and federal jurisdictions.

This methodology has two primary use cases:

  1. It provides a way for asset owners to rank relevant cyber scenarios, enabling a single entity, organization, facility, or site in scope to prioritize a tabletop exercise scenario that maps cyber-physical impacts from control failures to localized cascading impacts.
  2. It generates a standardized priority score, which can be used by government and industry stakeholders to compare entities, locations, facilities, or sites within any jurisdiction (by geography, sector, regulatory body, etc.)—e.g., to compare 1,000 entities in a single sector or to compare a prison to a water utility or a rail operator to a hospital.
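The brief does not specify how the standardized priority score is computed. As a purely hypothetical sketch, a weighted combination of the dimensions the paper names (criticality, vulnerability, degradation of operations, and mean time to recover) might look like the following; the weights, scales, and example inputs are invented for illustration only.

```python
# Hypothetical sketch of a cross-sector priority score. The brief does not
# publish a formula; this simply combines the dimensions it names into a
# weighted 0-100 score so unlike entities can be ranked together.
def priority_score(criticality: float, vulnerability: float,
                   degradation: float, recovery_days: float,
                   max_recovery_days: float = 90.0) -> float:
    """First three inputs are in [0, 1]; recovery time is capped and
    normalized. Returns a 0-100 score (higher = higher priority)."""
    recovery = min(recovery_days / max_recovery_days, 1.0)
    weights = {"criticality": 0.40, "vulnerability": 0.20,
               "degradation": 0.25, "recovery": 0.15}
    score = (weights["criticality"] * criticality
             + weights["vulnerability"] * vulnerability
             + weights["degradation"] * degradation
             + weights["recovery"] * recovery)
    return round(100 * score, 1)

# Comparing unlike entities on one scale, e.g., a water utility vs. a hospital
print(priority_score(0.9, 0.6, 0.8, recovery_days=30))  # water utility
print(priority_score(0.8, 0.5, 0.7, recovery_days=14))  # hospital
```

A real scoring model would need agreed definitions and data sources for each input; the point of the sketch is only that a common scale is what makes a prison, a water utility, and a rail operator comparable within one jurisdiction.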

Introduction

The Department of Homeland Security’s National Incident Management System includes five components: plan, organize and equip, train, exercise, and evaluate and improve.2 Cybersecurity conversations are stuck in a limited cycle of buy a product, run a tabletop exercise, and check compliance boxes, often skipping key steps for organization, failing to exercise function-specific responsibilities, and almost never exercising to failure like a real emergency might require. Collectively, cyber-physical security requires new strategic and tactical thinking to better inform decision-makers in cyber policy, planning, and preparedness.

Critical infrastructure sectors and operations depend on equipment, communications, and business operations to supply goods, services, and resources to populations and interdependent commercial industries each day around the clock. Over the last decade, distributed operations, including manual and analog components that were originally not accessible via the internet, have increasingly become digitized and connected as networked technology connects systems to systems, sites to sites, and people to everything.

Owners and operators of critical infrastructure are responsible for securing their operations and processes from the inside out according to assorted regulatory and compliance requirements within and across each sector. The U.S. government is responsible for protecting citizens, national security, and the economy. Despite the tactical understanding of critical infrastructure equipment, communications, and business operations, critical infrastructure cybersecurity remains ambiguous. Several agencies across the U.S. government are working together to develop cybersecurity performance standards, baseline metrics, incident reporting mechanisms, information sharing tools, and liability protections.

Nevertheless, critical infrastructure cybersecurity presents a massive needle-in-a-haystack problem. Where information technology (IT) sees many vulnerabilities, likely to be exploited in similar ways across mainstream and ubiquitous systems, operational technology (OT) security is often a proprietary, case-by-case distinction. The oversimplification of their differences leads to a contextual gap when translating roles and responsibilities into tasks and capabilities for government, and business continuity and disaster recovery for industry.

Essential critical infrastructure sectors

Source: cisa.gov

What is eating critical infrastructure is not a talent gap, the convergence of IT and OT, or even the lack of investment in cybersecurity products and solutions. It is the improbability of determining all possible outcomes from single points of dependence and the failure that exists between and beyond business continuity, physical equipment, and secure data and communications.

One consistently repeated recommendation from high-level decision-makers is that organizations, entities, and/or facilities carry out tabletop exercises and scenario planning to prepare for cyber situations that could have disruptive and devastating outcomes, especially those that threaten human life and national and economic security. However, there is no standardized way to develop or run these exercises or to decide which scenarios to simulate for teams based on size, location, scope, operational specifics, security maturity, and resource capacity.

All of it is critical, so what matters?

“Systems of economic exchange that promote patterns of civil society depend on the sustainable availability and equitable use of natural and social resources necessary for constructing a satisfying and ‘satisficing’ life by present and future generations.”3

Critical infrastructure is critical not only because the disruption, degradation, or destruction of entities/operations will impact life, the economy, or national security, but also because critical infrastructure sectors form the backbone of U.S. civil society. Some critical infrastructure sectors are also transactionally dependent on one another. The water sector depends heavily on operations and outputs from the energy, transportation, finance, and manufacturing sectors. Transportation depends on operations and outputs from the energy, finance, communications, and manufacturing sectors, and so on.4

There are indicators to suggest that government will likely continue tasking industry with cybersecurity requirements. Recent European Commission legislation sheds light on the due diligence of cybersecurity activities. The Network and Information Security 2 directive suggests that entities assess the proportionality of their risk management activities according to their individual degree of exposure to risks, size, likelihood and severity of incidents, and the societal and economic impacts of potential incidents.

According to retired National Cyber Director Chris Inglis, the Biden administration’s National Cybersecurity Strategy drills into “affirmative intentionality,” asking industry to raise the bar on cyber responsibility, liability, and resilience building. This comes at a time when best practices are numerous but implementation specifics are scarce. The strategy is positioned to expand mandated policies at sector risk management agencies and to double down on broader information sharing, combined with international law enforcement, to quell undeterred cyber criminals and threat-actor groups.

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) uses the National Critical Functions Framework to define and assess critical functions across sectors. Critical functions, including the fifty-five published by CISA, are defined as “vital to the security, economy, and public health and safety of the nation.”5 Critical assets are prioritized as those which “if destroyed or disrupted, would cause national or regional catastrophic effects.”6

According to a review by the U.S. Government Accountability Office, this approach has fallen short in three major ways: Stakeholders found it difficult to prioritize the framework given competing planning and operations considerations, struggled with implementing the goals and strategies, and required more tailored information to use the framework in a meaningful way. As a result, only fourteen states out of fifty-six have provided updates to the National Critical Infrastructure Prioritization Program since 2017.7

Entities determined to be the most essential of all critical infrastructure are categorized as Section 9 entities, defined as “critical infrastructure where a cybersecurity incident could reasonably result in catastrophic regional or national effects on public health or safety, economic security, or national security.”8 A recommended definition of systemically important critical infrastructure (SICI) in proposed legislation suggests the secretary of the U.S. Department of Homeland Security could declare a facility, system, or asset as “systemically important critical infrastructure” if the compromise, damage, and/or destruction of that entity would result in the following:

  • The interruption of critical services, including the energy supply, water supply, electricity grid, and/or emergency services, that could cause mass casualties or lead to mass evacuations.
  • The perpetuation of catastrophic damage to the U.S. economy, including the disruption of the financial market, disruption of transportation systems, and the unavailability of critical technology services.
  • The degradation and/or disruption of defense, aerospace, military, intelligence, and national security capabilities.
  • The widespread compromise or malicious intrusion of technologies, devices, or services across the cyber ecosystem.9

Regardless of scoping for SICI, there is a lack of understanding about the inventory of industrial assets and technologies that are in use across critical sectors today and the configuration contingencies for risk management for that inventory. There is a similar absence of holistic awareness about the realistic, cascading impacts or the fallout analysis for entities with varying characteristics and demographics.

Operational technology

OT and industrial control system (ICS) technologies include a wide range of machines and equipment, such as pumps, compressors, valves, turbines and similar equipment, interface computers and workstations, programmable logic controllers, and many diagnostics, safety, and metering and monitoring systems that enable or report the status of variables, processes, and operations.

Supervisory control and data acquisition (SCADA) systems encompass operations management and supervisory control of local or physical OT controls and are programmed and monitored to direct one or more processes operating at scale—i.e., machines and devices command process controls that are involved in directing and manipulating physical sensors and actuators.

Sectors operating OT and ICS on a daily basis include oil and gas, power and utilities, water treatment and purification facilities, manufacturing, transportation, hospitals, and connected buildings. OT devices tend to be legacy devices with lifecycles of fifteen to twenty years or more, operating 24-7 with maintenance windows for software patches and updates that are rarely scheduled or available. These devices often lack robust security controls by design and feature proprietary communication protocols and varying connectivity and networking requirements.

OT cybersecurity aims to prevent attacks that target process control equipment that reads data, executes logic, and sends outputs back to the machine or equipment. However, IT cybersecurity practices, analytics, forensics, and detection tools do not match the unique data and connectivity requirements and various configurations of OT environments.

A single operation or location might have more than a dozen different types of vendor technologies—SCADA, distributed control systems, programmable logic controllers, remote terminal units, human-machine interfaces, and safety instrumented systems—running with proprietary code and industry specific protocols. Prioritizing availability and data in motion, each asset and system will have unique parameters for identification and communication on a network, making it nearly impossible to manually log granular session- and packet-level details about each asset or system.

Attacks involving OT and ICS come predominantly in two forms. Some are tailored specifically for a single target with the intent of establishing prolonged, undetected access to manipulate view and/or control scenarios that could result in physical disruption or destruction. Others involve “living off the land” techniques that target common denominators across organizations based on opportunistic activities, such as using established social engineering; tactics, techniques and procedures (TTPs); credential harvesting; and the purchase of intelligence and access from threat actors and groups conducting continuous reconnaissance and acting as initial access brokers.

Risks and vulnerabilities in operational technology and critical infrastructure

It is increasingly difficult to contextualize critical infrastructure both operationally—based on specific products, services, resources, processes, and technologies—and functionally—based on centralized versus distributed risks, dependencies, and interdependencies. Attempts at contextualization have led to a debate over asset-specific (things, such as technologies, systems, and equipment) versus function-specific (actions, such as connecting, distributing, managing, and supplying) cybersecurity prioritization. This dichotomy is also characterized as “threats from” a threat actor and their capabilities to impact functions, instead of “threats to” specific assets as explained in product-specific vulnerability disclosures.10

Today there are thousands of known product vulnerabilities in OT and ICS systems from each vendor that produces machines and equipment in those categories. While each vulnerability is published with an associated common vulnerability score, that designated severity alone cannot tell an entity or organization how severe the vulnerability will be in the context of its own risk profile. Vulnerabilities must be compared with operational status to understand their significance and to prioritize the actions and procedures that will reduce the severity of the vulnerability’s potential impacts.

Unfortunately, “threats from” actors cannot easily be mapped to the exploitation of threats to OT and ICS. The assets versus functions distinction that is commonplace in the current debate over critical infrastructure typically leads to a hyper focus on either systems impact analysis (asset-specific) or business continuity (function-specific) outcomes and limits holistic fallout analysis for four main reasons:

  1. The plethora of existing product vulnerabilities in critical OT do not translate directly into manipulation of view or manipulation of control scenarios.
  2. The severity scoring for vulnerabilities is too vague to determine cascading impacts or relevant fallout analysis for a specific facility or operation.
  3. The loss of function outcomes and consequences are often not well scoped in terms of realistic cyber scenarios that would lead to and produce cascading impacts.
  4. Cyber incidents that impact physical processes are less repeatable than IT attacks, and accessible cyber threat intelligence for threat actors and TTPs that specifically target OT and ICS is less widely available, as there are fewer known and analyzed incidents.

Many OT and ICS systems have known vulnerabilities and unsophisticated, yet complex, designs; the security complexity is in the attack path or “kill chain,” targeting simplistic systems that can be configured in a myriad of ways. Critical infrastructure entities can be targeted by threat actors to exploit and extort their IT and OT or ICS systems, but OT and ICS systems—traditionally designed with mission state and continuity in mind—also risk having their native functionality targeted and hijacked in cyber scenarios.11

Risks to cyber-physical systems include:

  • the use of legacy technologies with well-known vulnerabilities
  • the widespread availability of technical information about control systems
  • the connectivity of control systems to other networks
  • constraints on the use of existing security technologies and practices
  • insecure remote connections
  • a lack of visibility into network connectivity
  • complex and just-in-time supply chains
  • human error, neglect, and accidents.

If the core of cybersecurity is a calculation of threats, vulnerabilities, and likelihood, critical infrastructure sectors and technologies represent an exponential number of probabilistic outcomes for cyber scenarios with physical consequences. Despite increased awareness, pressure, and oversight from governments, boards, and insurance providers, the scale and complexity of the problem set quickly intensifies given the entanglement of

  • similar, but not identical, industries and technologies
  • inconsistent change management and documentation
  • reliance on third-party systems and components
  • external threat actors and TTPs
  • risk management and security best practices
  • compensating controls and security policy enforcement
  • compliance, standards, and regulations.

Table for potential escalation of consequences

This complexity results in four types of general OT and ICS cyber scenarios in critical infrastructure. The two most commonly discussed, but not necessarily the most commonly experienced, are if/when an adversary accesses an OT environment and intentionally causes effects within the scope of their objectives or causes unintended consequences beyond the scope of their objectives. These general scenarios can be further dissected and understood by referencing the specific attack paths and impacts outlined in the MITRE Corporation’s ATT&CK Matrix for ICS.12

A scoring methodology for cross-sector entity prioritization

Today, critical infrastructure cyber protection spans sixteen different sectors, with no way to compare a standardized risk metric for a municipal water facility in Wyoming with one for a large commercial energy provider in Florida, or for a rural hospital in Texas with one for a train operator in New York. This section proposes a scoring methodology for cross-sector entity prioritization using qualitative scenario planning and quantitative indicators for severity scoring, assessing the potential for scenarios to cause public panic and to stress or overwhelm local, state, and federal response capacity.

Prioritizing critical infrastructure cybersecurity requires robust planning—comprehensive in scope, yet flexible enough to account for contingencies. Tasha Jhangiani and Graham Kennis note that “a risk-based approach to national security requires that the U.S. must prioritize its resources in areas where it can have the greatest impact to prevent the worst consequences.”13 Owners and operators of critical infrastructure have relayed to the U.S. government a need for more “regionally specific information” to address cyber threats.14

A recent report on the ownership of various utilities in the United States found that “a better indicator of how to approach [cyber] regulations is to look at how many people a utility services,” a direct indicator for fallout analysis when OT systems are impacted.15 Where progress should start can be determined by expanding fallout analysis to identify the most at-risk environments across any given jurisdiction regardless of sector, location, ownership, or cybersecurity policy enforcement.

Scoring entities according to the prioritization methodology outlined below requires a well-executed thought exercise. The results are a way to determine the most consequential scenarios for facilities and operations, as well as the most at-risk facilities and operations within a given jurisdiction. The scoring can be performed at a local, state, or federal level. This type of prioritization offers an accessible way for entities to grapple with cybersecurity concerns in a local and regional context. The ranking also allows prioritization from an effects-based (impacts), rather than a means-based (capabilities), approach.

This methodology has two primary use cases:

  1. The scoring matrix provides a way to rank and prioritize relevant cyber scenarios for a single entity, organization, facility, or site in scope.
    a. The ranking, based on weighted scores, will allow any entity, organization, facility, or site to choose scenarios to exercise based on a choice of two real-world impacts (impact A, impact B) or to assess both impacts when choosing a tabletop scenario.
    i. This ranking has the potential to prioritize scenarios that will cause public panic and/or overwhelm response resources over scenarios that simply have a higher cyber severity rating (see Table 1).
  2. The standardized priority score provides an overall priority score for the entity, organization, facility, or site.
    a. This score can be used to compare and rank different entities, locations, facilities, or sites within a given jurisdiction—city or local, state, federal, sector-specific, etc.

This methodology can be incorporated into assessments, training, and tabletop exercises in the planning phase of cyber risk mitigation and incident response. It can also be used by leaders to prioritize multiple critical infrastructure sectors or locations in their jurisdiction from a cybersecurity perspective.

How to use the methodology

Prioritizing cybersecurity efforts across critical infrastructure can borrow from the suggested fallout analysis applied to the public and local response capacity of a given target. When a weapon of mass destruction is used as an act of terror, according to the 2002 Federal Emergency Management Agency’s Interim Planning Guide for State and Local Governments, “Managing the Emergency Consequences of Terrorist Incidents,” there are two additional possible outcomes:16

  • Impact A—the creation of chaos, confusion, and public panic
  • Impact B—increased stress on local, state, and federal response resources.

Weighting cyber severity scores for scenarios based on impact A and impact B is essential, as each scenario will impact the level of public panic and available resources differently depending on the sector and that sector’s assets and functions, location, and region. For example, a hospital ransomware attack may not cause widespread public panic in an urban area, but it may overwhelm response resources in a rural one. Conversely, an attack on the financial sector may result in public panic, but it may be less likely to overwhelm response resources.

An IT system interruption might cause business disruptions and downtime that result primarily in public panic, while manipulation of control at a water facility could have major impacts on both public panic and response resources. The 2021 Colonial Pipeline ransomware incident, though it compromised IT systems, led to a shutdown of OT and ICS operations and unforeseen local and regional impacts. The scoring methodology used here works to manage uncertainty, identifying four essential components in consultation with informed cybersecurity experts, owners and operators, and local and regional stakeholders.

  1. Scenario planning: Six scenarios will be outlined according to their potential to result in either manipulation of view (three scenarios) or manipulation of control (three scenarios) outcomes for OT.17
  2. Severity scoring: The scoring will be based on cybersecurity severity (see Tables 1 and 2).
  3. Weighting and ranking scenarios: The scenarios will be weighted and ranked based on their potential to cause public panic and/or to stress or overwhelm response capacity.
  4. Final scoring: The standardized priority score will be calculated for the entire entity/operation.
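Taken together, the four steps reduce to a small calculation. The sketch below is a hypothetical illustration in Python: the scenario names, severity scores (on the 10-50 scale from Table 2), and impact weights are all invented for demonstration, and the combination of severity with the impact A and impact B weights follows the formulas shown in Figures 2-4.

```python
# Hypothetical sketch of the four-step scoring methodology.
# Severity values (10-50, per Table 2) and impact weights are invented
# for illustration; real values come from stakeholder assessment.

scenarios = {
    # scenario name: severity score (Step 2, range 10-50)
    "MOV 1": 22, "MOV 2": 18, "MOV 3": 30,
    "MOC 1": 35, "MOC 2": 40, "MOC 3": 45,
}

# Step 3: weights for likelihood of public panic (impact A) and of
# overwhelming response resources (impact B). Each set must sum to 1.
panic = {"MOV 1": 0.10, "MOV 2": 0.05, "MOV 3": 0.15,
         "MOC 1": 0.15, "MOC 2": 0.25, "MOC 3": 0.30}
resources = {"MOV 1": 0.05, "MOV 2": 0.05, "MOV 3": 0.10,
             "MOC 1": 0.20, "MOC 2": 0.35, "MOC 3": 0.25}

assert abs(sum(panic.values()) - 1) < 1e-9
assert abs(sum(resources.values()) - 1) < 1e-9

# Step 4: combined weighted score per scenario (Severity * Panic * Resources),
# then a standardized priority score for the whole entity/operation.
weighted = {s: scenarios[s] * panic[s] * resources[s] for s in scenarios}
ranked = sorted(weighted, key=weighted.get, reverse=True)
priority_score = sum(weighted.values())

print(ranked[0])                  # highest-priority scenario to exercise
print(round(priority_score, 2))   # standardized score for the entity
```

Ranking by the combined weighted score, rather than by raw severity, is what allows a lower-severity scenario with outsized panic or resource impacts to rise to the top of the exercise list.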

The methodology complements the SICI definition of critical infrastructure outlined above and can also be used to enhance the following CISA recommendations:18

  • develop primary, alternate, contingency, and emergency plans to mitigate the most severe effects of prolonged disruptions, including the ability to operate manually without the aid of control systems in the event of a compromise
  • ensure redundancies of critical components and data systems to prevent single points of failure that could produce catastrophic results
  • conduct exercises to provide personnel with effective and practical mechanisms to identify best practices, lessons learned, and areas for improvement in plans and procedures.

The resulting scenarios could further be compared using CISA’s National Cyber Incident Scoring System, designed to provide a repeatable and consistent mechanism for estimating the risk of an incident. In the future, this methodology can potentially be used together with a Diamond Model of Intrusion Analysis applied to cyber-physical incidents to better understand how adversaries demonstrate and use certain capabilities and techniques against critical infrastructure targets. This may allow for better nation-state level analysis and more robust information for decision-makers who struggle to understand the likelihood of attacks against specific operations or facilities today.

Analysis and calculations

Step 1: Scenario planning: Six scenarios will be outlined for their potential to result in either manipulation of view (three scenarios) or manipulation of control (three scenarios) outcomes for OT.19

Scenarios can include incidents in which the threat, vulnerability, or exploitation originate in the IT/corporate or enterprise side of operations. First, the top three most realistic manipulation of view scenarios for a target are identified based on impacts to OT, with severity indicators outlined in Table 1. Then, the top three most realistic manipulation of control scenarios for a target are identified based on impacts to OT, with indicators outlined in Table 1.

Table 1: Severity indicators

Qualitative assessment to determine severity score in Table 2

SOURCE: Adapted from the Center for Regional Disaster Resilience “Washington Cybersecurity Situational Awareness Concept of Operations (CONOPS)” guidance document.20

Step 2: Severity scoring: The scoring will be based on cybersecurity severity indicators (see Table 1). Each scenario is scored based on a severity rating in Table 2 (scores for each scenario range from 10 to 50).

Table 2: Severity rating

(does not have to equal 100)


SOURCE: Adapted from the Center for Regional Disaster Resilience “Washington Cybersecurity Situational Awareness Concept of Operations (CONOPS)” guidance document.21

Step 3: Weighting and ranking scenarios: The scenarios will be weighted and ranked based on their potential to cause public panic and/or to stress or overwhelm response capacity.

The scenarios will be ranked based on impact A and impact B. All six scenarios will be ranked separately by both likelihood of causing public panic and ability to overwhelm local response resources (see Table 3).

Table 3: Weighting likelihood to cause public panic and to overwhelm resources

(total weights must = 1)

Step 4: Final scoring: The standardized priority score will be calculated for the entire entity/operation. The weighted scores for both impact A and impact B are combined and the standardized priority score is calculated (see Figure 4).

Case study: Prison cybersecurity

In November 2022, the Atlantic Council’s Cyber Statecraft Initiative brought together cybersecurity experts to apply this scoring methodology to a mock tabletop exercise focused on a prison. A prison environment includes many functional OT and ICS systems and helps illustrate the utility of cybersecurity scenario planning beyond what is traditionally considered critical infrastructure. U.S. prisons also offer a real-world environment where experts who specialize in OT and ICS cybersecurity for any Section 9 entities or existing critical infrastructure sectors can address the problem set on equal footing, without speaking directly to any sector they serve or have worked in or with.

Prisons, often referred to as correctional facilities, operate across the United States. Twenty-six states and the Federal Bureau of Prisons rely heavily on private facilities to house incarcerated inmates.22 These facilities depend on a myriad of IT and OT systems for safe, healthy, and continuous 24-7 operations. Examples of IT systems in prisons include telephone and email, video, telemedicine, radios, and management platforms (i.e., access to computers or tablets for entertainment, education, job skills, and reentry planning). Examples of OT systems include security platforms, surveillance cameras, access control points, perimeter intrusion detection, cell doors, and health and safety platforms, such as fire alarms and heating, ventilation, and air conditioning (HVAC) systems.23 These OT and ICS systems are exposed to the threats and vulnerabilities that were previously discussed.

Consider one potential OT scenario in which a threat actor gains access to the system that controls the cell doors, which are programmed not to open or close simultaneously. Access to the controllers that incrementally open and close the cell doors could be achieved and a threat actor could override the incremental interval, directing all doors to move at once, potentially surging the power and/or destroying electronics and components of the cyber-physical system. Researchers have discovered prison control rooms with internet access and commissaries connected to OT networks where programmable logic controllers are operating.24 This scenario represents a potential manipulation of control that would likely produce some level of public panic, but may not necessarily overwhelm local response capabilities.

Tabletop participants conducted a 90-minute exercise to develop six potential scenarios—three specifying manipulation of view impacts to OT and three specifying manipulation of control impacts to OT. The guidelines specified that each scenario must be a realistic, technically feasible, worst-case scenario based on cyber-physical impacts. Scenarios could not be duplicative and had to be considered irrespective of network segmentation and best-practice compensating controls. Scenarios could have initial access vectors in traditional information technologies, directly or indirectly impacting OT.

The prison specifics indicated that the facility opened in 1993 as a supermax prison in upstate New York. The mock facility housed 300 male inmates and had about 500 employees. Visiting hours were reportedly weekends and holidays between 9:00am and 3:15pm. The facility was said to be located five miles outside of a city of 27,000 people. The immediate town had twenty-seven police officers and fourteen civilian support staff. The nearest hospital, with 125 beds, was five miles away and in similar proximity to two large elementary schools. The facility itself was described as a hub-and-spoke model for operations, with a central command center monitoring and operating the facility and control systems located on premises but removed from the command center.

Access vectors were potentially numerous, including technicians with equipment and inventory access, universal serial bus (USB) drives and other transient devices, internet-connected control systems and networks, software updates, remote access, and remote exploitation, leading to the example scenarios outlined below. The scenarios and scoring that follow are a snapshot of this mock exercise and the application of the methodology in this paper. The example demonstrates bounded knowledge of a simulated exercise and is meant to showcase how an organization or facility might use the methodology for an entity or operation. Participants were cybersecurity experts; however, the scenario planning and thought exercise is meant to include all relevant stakeholders.

Mock prison example scenarios: Manipulation of view and manipulation of control


MOV = manipulation of view, MOC = manipulation of control.

Figure 1. Priority based on severity rating alone (Table 1)


NOTE that based on cybersecurity severity alone, MOC 3 ranks highest as a cyber scenario worth preparing and executing a tabletop exercise for.

Figure 2. Weighted priority for impact A (panic)


FORMULA: Score = Severity * Panic
NOTE that based on the cybersecurity severity score and the ability to cause public panic, MOC 3 still ranks highest as a cyber scenario worth preparing and executing a tabletop exercise for.

Figure 3. Weighted priority for impact B (resources)


FORMULA: Score = Severity * Resources
NOTE that based on the cybersecurity severity score and the ability to overwhelm local response capacity, MOC 2 now ranks highest as a cyber scenario worth preparing and executing a tabletop exercise for.

Figure 4. Weighted priority for impact A and B (both panic and resources)


FORMULA: Score = Severity * Panic * Resources

Manipulation of control scenario two—communications distributed denial-of-service, internally and externally, with capacity/threat to manipulate, modify, and disrupt process control systems—became potentially more impactful than manipulation of control scenario three—third-party access to takeover process control systems of cell block doors only—as a cyber scenario worth preparing for. Planning and training for a scenario that cuts off internal and external communications and includes uncertainty surrounding cyber-physical impacts is a more robust scenario than direct access to a limited OT/ICS asset or a potential ransomware situation that has limited cascading impacts.
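The rank reversal between Figures 1 and 4 can be reproduced with two mock scenarios. The severity scores and weights below are invented for illustration (they are not the exercise's actual values); they show how a scenario with lower raw severity can overtake a higher-severity one once the resource weighting, and then the combined weighting, is applied.

```python
# Invented values for two mock scenarios, chosen only to illustrate
# how weighting can reverse a severity-based ranking.
moc2 = {"severity": 40, "panic": 0.25, "resources": 0.35}
moc3 = {"severity": 45, "panic": 0.30, "resources": 0.25}

def score(scenario, *weights):
    """Multiply severity by any of the impact weights requested."""
    out = scenario["severity"]
    for w in weights:
        out *= scenario[w]
    return out

by_severity  = score(moc3) > score(moc2)                    # Figure 1: MOC 3 leads
by_panic     = score(moc3, "panic") > score(moc2, "panic")  # Figure 2: MOC 3 still leads
by_resources = score(moc2, "resources") > score(moc3, "resources")  # Figure 3: MOC 2 overtakes
by_both      = score(moc2, "panic", "resources") > score(moc3, "panic", "resources")  # Figure 4: MOC 2 leads
```

Under these mock numbers, MOC 3 wins on severity alone and on severity times panic, but MOC 2 wins once the resource weight, and the combined product, are considered, mirroring the qualitative pattern reported for the exercise.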

The standardized priority score can be used to compare entities from various sectors based on likely real-world scenarios, expected severity, and impacted populations. Another entity with different severity and impact calculations may have a total score of 4.35, for example. It is scalable; a company can compare different facilities and a city or sector or agency can work to enhance protections for the top 10 percent of entities in their purview of responsibility or scope, creating a starting point for addressing the most critical of critical targets and building cross-sector resilience.
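As a sketch of the cross-entity use case, the standardized priority scores below are invented for a hypothetical jurisdiction of ten entities; a real comparison would use scores produced by the four-step methodology for each entity.

```python
import math

# Invented standardized priority scores for entities in one mock
# jurisdiction; names and values are hypothetical.
entity_scores = {
    "Municipal water facility": 8.53,
    "Commercial energy provider": 4.35,
    "Rural hospital": 6.10,
    "Transit operator": 2.80,
    "Wastewater plant": 7.20,
    "Regional airport": 3.95,
    "Food distribution hub": 5.40,
    "County 911 dispatch": 6.85,
    "School district data center": 1.90,
    "Natural gas compressor station": 7.95,
}

# Rank all entities regardless of sector, then focus protection
# efforts on the top 10 percent within the jurisdiction.
ranked_entities = sorted(entity_scores, key=entity_scores.get, reverse=True)
top_10_percent = ranked_entities[: math.ceil(0.10 * len(ranked_entities))]
print(top_10_percent)
```

The same ranking works at any scope: a company comparing its own facilities, a city comparing utilities and hospitals, or a sector agency comparing regulated entities.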

Conclusion

When considering whether assets or functions are more important, the answer is invariably somewhere in between—it always depends on the operation, product, or service. Evaluating entities and sectors against how well they implement cybersecurity requirements and best practices is abundant in complexity but limited in scope. Meanwhile, focusing on technology regulation leads to time-consuming and expensive audits, and standardizing unrelated sectors yields vague guidance that becomes difficult to implement and enforce. Hypothetical cyber-physical scenarios quickly become convoluted with technical contingencies, competing priorities, overlapping authorities, and analysis gaps.

A standardized way to prioritize what is most important from a cyber perspective, akin to the CARVER Target Analysis and Vulnerability Assessment tool, is needed, and it must include impact analysis that goes beyond the cyber incident itself to consider scenarios that also cause public panic and overwhelm local response capabilities.25 The methodology proposed in this paper is a simple scoring system that provides a repeatable mechanism suitable for prioritization based on real-world cyber scenarios, cyber-physical impacts, and fallout analysis.

Some sector-specific target and attack data exists, but there is still too much fear, uncertainty, and doubt driving tabletop exercises. Hopefully in the future, cyber policy and preparedness will have processes akin to the Homeland Security Exercise and Evaluation Program, with the key ingredient being a common approach.26 This methodology will not resolve all critical infrastructure cybersecurity and systemically critical infrastructure debates. It will take widespread adoption to be most useful, offering a strategic way to scope and prepare for effective tabletop exercises and to compare entities across various sectors and jurisdictions.

About the author

Danielle Jablanski is a nonresident fellow at the Cyber Statecraft Initiative under the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and an OT cybersecurity strategist at Nozomi Networks, responsible for researching global cybersecurity topics and promoting operational technology (OT) and industrial control systems (ICS) cybersecurity awareness throughout the industry. Jablanski serves as a staff and advisory board member of the nonprofit organization Building Cyber Security, leading cyber-physical standards development, education, certifications, and labeling authority to advance physical security, safety, and privacy in public and private sectors. Since January 2022, Jablanski has also served as the president of the North Texas Section of the International Society of Automation, organizing monthly member meetings, training, and community engagements. She is also a member of the Cybersecurity Apprenticeship Advisory Taskforce with the Building Apprenticeship Systems in Cybersecurity Program sponsored by the U.S. Department of Labor.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Danielle Jablanski, “Why Cyber Holds the Entire World at Risk,” National Interest, April 5, 2022, https://nationalinterest.org/blog/techland-when-great-power-competition-meets-digital-world/why-cyber-holds-entire-world-risk.
2    “National Preparedness Cycle,” Homeland Security Emergency Management Center of Excellence, https://www.coehsem.com/emergency-management-cycle/.
3    Benjamin R. Barber, A Place for Us: How to Make Society Civil and Democracy Strong (New York: Hill and Wang, 1998).
4    Tyson Macaulay, Critical Infrastructure: Understanding Its Component Parts, Vulnerabilities, Operating Risks, and Interdependencies (Boca Raton: CRC Press, 2009).
5    “Critical Infrastructure Protection: CISA Should Improve Priority Setting, Stakeholder Involvement, and Threat Information Sharing,” U.S. Government Accountability Office, March 1, 2022, https://www.gao.gov/products/gao-22-104279.
6     “Critical Infrastructure Protection,” 2022.
7    “Critical Infrastructure Protection,” 2022.
8    Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, May 11, 2017.
9    Tasha Jhangiani and Graham Kennis, “Protecting the Critical of Critical: What Is Systemically Important Critical Infrastructure?” Lawfare, June 15, 2021, https://www.lawfareblog.com/protecting-critical-critical-what-systemically-important-critical-infrastructure.
10    Tyson Macaulay and Bryan Singer, Cybersecurity for Industrial Control Systems: SCADA, DCS, PLC, HMI, and SIS (Boca Raton: CRC Press, 2012), 57.
11    Michael J. Assante and Robert M. Lee, “The Industrial Control System Cyber Kill Chain,” SANS Institute, October 2015, https://na-production.s3.amazonaws.com/documents/industrial-control-system-cyber-kill-chain-36297.pdf.
12    “MITRE ATT&CK Matrix for ICS,” MITRE Corporation, last modified May 6, 2022, https://attack.mitre.org/matrices/ics/.
13    Jhangiani and Kennis, 2021.
14    “Critical Infrastructure Protection,” 2022.
15    Jacob Azrilyant, Melissa Sidun, and Mariami Dolashvili, “Fact and Fiction: Demystifying the Myth of the 85%,” capstone project, George Washington University, May 6, 2022, https://www.scribd.com/document/575971848/Fact-and-Fiction-85-and-Critical-Infrastructure.
16    “Managing the Emergency Consequences of Terrorist Incidents: Interim Planning Guide for State and Local Governments,” Federal Emergency Management Agency, July 2002, https://www.fema.gov/pdf/plan/managingemerconseq.pdf.
17    View and/or control cannot be recovered automatically or remotely from manipulation. The potential for sabotage can come through misinformation delivered to control room personnel or through malicious instructions sent to production infrastructure. Macaulay and Singer, 2012.
18    “Sector Spotlight: Cyber-Physical Security Considerations for the Electricity Sub-Sector,” Cybersecurity and Infrastructure Security Agency, https://www.cisa.gov/sites/default/files/publications/Sector%20Spotlight%20Cyber-Physical%20Security%20Considerations%20Electricity%20Sub-Sector%20508%20compliant.pdf.
19    View and/or control cannot be recovered automatically or remotely from manipulation. The potential for sabotage can come through misinformation delivered to control room personnel or through malicious instructions sent to production infrastructure. Macaulay and Singer, 2012.
20    “Washington Cybersecurity Situational Awareness Concept of Operations (CONOPS),” Center for Regional Disaster Resilience, https://www.regionalresilience.org/uploads/2/3/2/9/23295822/washington_cybersecurity_situational_awareness_conops.pdf.
21    “Washington Cybersecurity Situational Awareness,” Center for Regional Disaster Resilience.
22    Mackenzie Buday and Ashley Nellis, “Private Prisons in the United States,” The Sentencing Project, August 23, 2022, https://www.sentencingproject.org/reports/private-prisons-in-the-united-states/.
23    Teague Newman, Tiffany Rad, and John Strauchs, “SCADA & PLC Vulnerabilities in Correctional Facilities,” Wired, July 30, 2011, https://www.wired.com/images_blogs/threatlevel/2011/07/PLC-White-Paper_Newman_Rad_Strauchs_July22_2011.pdf.
24    Newman, Rad, and Strauchs, 2011.
25    “What is the CARVER Target Analysis and Vulnerability Assessment Methodology?” SMI Consultancy, https://www.smiconsultancy.com/what-is-carver.
26    “Homeland Security Exercise and Evaluation Program,” Federal Emergency Management Agency, https://training.fema.gov/programs/nsec/hseep/.

The post Critical infrastructure cybersecurity prioritization: A cross-sector methodology for ranking operational technology cyber scenarios and critical entities appeared first on Atlantic Council.

Russian War Report: Russian army presses on in Bakhmut despite losses https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-russian-army-presses-on-in-bakhmut-despite-losses/ Fri, 14 Apr 2023 17:34:44 +0000 https://www.atlanticcouncil.org/?p=636784 Bakhmut remains a major conflict zone with dozens of attacks on Ukrainian forces there, despite Russian forces sustaining heavy losses.

The post Russian War Report: Russian army presses on in Bakhmut despite losses appeared first on Atlantic Council.

As Russia continues its assault on Ukraine, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) is keeping a close eye on Russia’s movements across the military, cyber, and information domains. With more than seven years of experience monitoring the situation in Ukraine—as well as Russia’s use of propaganda and disinformation to undermine the United States, NATO, and the European Union—the DFRLab’s global team presents the latest installment of the Russian War Report. 

Security

Russian army presses on in Bakhmut despite losses

Russia enacts “e-drafting” law

Drone imagery locates new burial site east of Soledar

Russian hackers target NATO websites and email addresses

Russian army presses on in Bakhmut despite losses

The General Staff of the Ukrainian Armed Forces recorded fifty-eight attacks on Ukrainian troop positions on April 9 and 10. Of these attacks, more than thirty were in the direction of Bakhmut, and more than twenty were in the direction of Marinka and Avdiivka. Russian forces also attempted to advance toward Lyman, south of Dibrova.

Documented locations of fighting April 1-13, 2023; data gathered from open-source resources. (Source: Ukraine Control Map, with annotations by the DFRLab)

On April 10, Commander of the Eastern Group of Ukrainian Ground Forces Oleksandr Syrskyi said that Russian forces in Bakhmut increasingly rely on government special forces and paratroopers because Wagner units have suffered losses in the recent battles. Syrskyi visited Bakhmut on April 9 to inspect defense lines and troops deployed to the frontline. According to the United Kingdom’s April 10 military intelligence report, Russian troops are intensifying tank attacks on Marinka but are still struggling with minimal advances and heavy losses. 

On April 13, Deputy Chief of the Main Operational Directorate of Ukrainian Forces Oleksiy Gromov said that Bakhmut remains the most challenging section on the frontline as Russian forces continue to storm the city center, trying to encircle it from the north and south through Ivanivske and Bohdanivka. According to Ukrainian estimates, during a two-week period, Russian army and Wagner Group losses in the battle for Bakhmut amounted to almost 4,500 people killed or wounded. To restore the offensive potential in Bakhmut, Russian units that were previously attacking in the direction of Avdiivka were transferred back to Bakhmut.

On April 8, Commander of the Ukrainian Air Forces Mykola Oleshchuk lobbied for Ukraine to obtain F-16 fighter jets. According to his statement, Ukrainian pilots are now “hostages of old technologies” that render all pilot missions “mortally dangerous.” Oleshchuk noted that American F-16 jets would help strengthen Ukraine’s air defense. He said that even with a proper number of aircraft and pilots, Ukrainian aviation, which is composed of Soviet aircraft and missiles, may be left without weapons at some point; the F-16, by contrast, can carry a huge arsenal of modern bombs and missiles. The commander also discussed the need for superiority in the air and control of the sea. Currently, Russian aviation is more technologically advanced and outnumbers Ukraine’s, meaning Ukraine cannot adequately protect its airspace. In order for the Ukrainian army to advance and recapture territory occupied by Russia, it will require substantial deliveries of aviation and heavy equipment like tanks, howitzers, and shells.

On April 10, Ukrainian forces reported they had spotted four Russian ships on combat duty in the Black Sea, including one armed with Kalibr missiles. Another Russian ship was spotted in the Sea of Azov, along with seven in the Mediterranean, including three Kalibr cruise missile carriers.

Meanwhile, according to Ukrainian military intelligence, Russia plans to produce Kh-50 cruise missiles in June. If confirmed, this could potentially lead to increased missile strikes against Ukraine in the fall. The Kh-50 missiles in the “715” configuration are intended to be universal, meaning they can be used by many Russian strategic bombers, including the Tu-22M3, Tu-95MS, and Tu-160.

Ruslan Trad, Resident Fellow for Security Research, Sofia, Bulgaria

Russia enacts “e-drafting” law

On April 11, the Russian State Duma approved a bill allowing for the online drafting of Russian citizens using the national social service portal Gosuslugi. One day later, the Russian Federal Council adopted the law. The new law enables military commissariats, or voenkomat, to send mobilization notices to anyone registered in the Gosuslugi portal. Unlike the traditional in-person delivery of paper notices, the digital mobilization order will be enforced immediately upon being sent out to the user; ordinarily, men drafted for mobilization could dispute receipt of the notice during the twenty-one-day period after the notice was sent. As of 2020, 78 million users were reportedly registered in the Gosuslugi portal, nearly two-thirds of the Russian population.

Alongside the adoption of the digital mobilization notices are newly adopted restrictions regarding unresponsive citizens. Those who fail to appear at their local military commissariat in the twenty-day period following notice will be barred from leaving the country and banned from receiving new credit or driving a car. Of the 164 senators who took part in the vote, only one voted against the bill; Ludmila Narusova argued that the law had been adopted exceptionally hastily and that the punishments against “deviants” who do not respond to the notice are “inadequate.”

As explained by Riga-based Russian news outlet Meduza, the law also states that reserves could be populated with those who legally abstained from military service until the age of twenty-seven, due to an amendment in the bill that allows for personal data to be shared with the Russian defense ministry in order to establish “reasonable grounds” for mobilization notices to be sent out. Several institutions across the country will be subject to the data exchange, including the interior ministry, the federal tax office, the pension and social fund, local and federal institutions, and schools and universities.

Valentin Châtelet, Research Associate, Security, Brussels, Belgium

Drone imagery locates new burial site east of Soledar

Images released by Twitter user @externalPilot revealed a new burial site, located opposite the town cemetery in the village of Volodymyrivka, southeast of Soledar, Donetsk Oblast. The DFRLab collected aerial imagery and assessed that the burial site emerged during the last week of March and the first week of April. The city of Soledar has been under Russian control since mid-January. Drone footage shows several tombs with no apparent Orthodox crosses or ornaments; analysis of the drone imagery indicates around seventy new graves have been dug on this site. A DFRLab assessment of satellite imagery estimates the surface area of the burial site amounts to around thirteen hectares.

Location of new burial site east of Soledar, Volodymyrivka, Donetsk Oblast. (Source: PlanetLab, with annotations by the DFRLab)

Valentin Châtelet, Research Associate, Security, Brussels, Belgium

Russian hackers target NATO websites and email addresses

On April 8, the pro-war Russian hacktivist movement Killnet announced they would target NATO in a hacking operation. On April 10, they said they had carried out the attack. The hacktivists claimed that “40% of NATO’s electronic infrastructure has been paralyzed.” They also claimed to have gained access to the e-mails of NATO staff and announced they had used the e-mails to create user accounts on LGBTQ+ dating sites for 150 NATO employees.

The hacktivists forwarded a Telegram post from the KillMilk channel showing screenshots of one NATO employee’s e-mail being used to register an account on the website GayFriendly.dating. The DFRLab searched the site for an account affiliated with the email but none was found.

Killnet also published a list of e-mails it claims to have hacked. The DFRLab cross-checked the e-mails against publicly available databases of compromised e-mails, like Have I Been Pwned, Avast, Namescan, F-Secure, and others. As of April 13, none of the e-mails had been linked to the Killnet hack, though this may change as the services update their datasets.
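The kind of cross-check described above can be scripted against Have I Been Pwned’s public v3 API, whose breached-account endpoint returns HTTP 200 with breach records for a known-compromised address and 404 for an address absent from its dataset. The sketch below is illustrative only: the helper names and the sample address are ours, and a real run requires an HIBP API key.

```python
from urllib.parse import quote

# Illustrative sketch of a breach-database cross-check against the
# Have I Been Pwned v3 "breachedaccount" endpoint. Helper names and the
# sample address are hypothetical; a real query needs an HIBP API key.
HIBP_ENDPOINT = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def build_lookup(email: str, api_key: str):
    """Return the URL and headers for one breached-account lookup."""
    url = HIBP_ENDPOINT + quote(email)          # '@' becomes '%40'
    headers = {
        "hibp-api-key": api_key,                # required by the v3 API
        "user-agent": "leak-crosscheck-demo",   # HIBP rejects blank agents
    }
    return url, headers

def interpret_status(status_code: int) -> str:
    """Map the HTTP status to a result: 200 means breach records exist,
    404 means the address is absent from the dataset."""
    if status_code == 200:
        return "found in breach data"
    if status_code == 404:
        return "not found"
    return "inconclusive (rate limit or error)"
```

Because each lookup is a single GET request, a list of addresses can be checked in a loop with a short delay between calls to respect the API’s rate limits; as the article notes, results change over time as the underlying breach datasets are updated.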

In addition, the DFRLab checked the downtime of the NATO websites that Killnet claims to have targeted with distributed denial of service (DDoS) attacks. According to IsItDownRightNow, eleven of the forty-four NATO-related websites (25 percent) were down at some point on April 10.  

Nika Aleksejeva, Resident Fellow, Riga, Latvia

Banning TikTok alone will not solve the problem of US data security https://www.atlanticcouncil.org/blogs/new-atlanticist/banning-tiktok-alone-will-not-solve-the-problem-of-us-data-security/ Fri, 31 Mar 2023 16:24:22 +0000 https://www.atlanticcouncil.org/?p=631176 TikTok is just a symptom of a much bigger problem involving China-based technology. Here are some steps US policymakers can take now.

The post Banning TikTok alone will not solve the problem of US data security appeared first on Atlantic Council.

Last week, the TikTok chief executive officer, Shou Zi Chew, appeared before the US House of Representatives Energy and Commerce Committee. The media and political perception within the Washington Beltway is that it did not go well, and it didn’t. Chew’s answers were unconvincing and at times disingenuous, including when he downplayed accusations that the company had spied on journalists critical of the company. On social media, including on TikTok, the perception of the hearing by users was equally decisive, but not in Congress’s favor.

TikTok has 150 million US users, and the contrast between the creative, often viral clips produced on the platform—including those defending Chew—and the stodgy nature of C-SPAN’s fixed camera positions, pre-planned talking points, and members demanding “yes” or “no” answers to their questions did not flatter committee members. US policymakers considering a ban on TikTok need to think about the very serious ramifications for people and small businesses whose livelihoods rely, at least in part, on the app. Those Americans who use the app for professional and business purposes should have their legitimate concerns addressed by policymakers in a meaningful manner alongside any sort of ban.

But TikTok users’ usage of the social media app, even if only to generate business, does not mitigate the potential threats to US national security associated with it. In December, Director of National Intelligence Avril Haines warned about the potential uses of TikTok by Beijing stemming from the data the app collects and the possibility of using it to influence public opinion. TikTok’s algorithm, for example—which experts view as more advanced than that of Facebook parent company Meta—could be used by China to create propaganda that seeks to influence or manipulate elections and the broader information environment.

TikTok’s connections to China’s government stem from it being a wholly owned subsidiary of the Beijing-based company ByteDance. Chew testified that “ByteDance is not owned or controlled by the Chinese government.” However, Article VII of China’s National Intelligence Law of 2017 makes clear the mandated responsibility for private sector companies (and any Chinese organization) to “support, assist, and cooperate” with China’s intelligence community. ByteDance, therefore, has an absolute obligation to turn over to China’s intelligence apparatus any data it requests.

There are significant reasons to be skeptical of Chew’s claims that “Project Texas”—TikTok’s effort to wall off US user data from Chinese authorities by solely storing it in the United States—will prevent China from having access to US user data in the future. Worse, even if one takes Chew at his word that “Project Texas” will accomplish this feat, it defies logic to believe that ByteDance would not—independently or compelled by China’s intelligence agencies—retain a copy of all 150 million current US users’ data.

At the same time, TikTok is just a symptom of a much bigger problem. The United States and its allies have a more fundamental issue when it comes to their citizens using China-based apps, programs, or any technology that collects their data. All China-based companies have the same obligations to provide data information to China’s intelligence services whenever requested.

What the US government can do

TikTok’s ban would mitigate the immediate threat posed by the ByteDance subsidiary, but there’s far more work that needs to be done. The Committee on Foreign Investment in the United States (CFIUS) has, up until now, been the most prominent tool used to prevent foreign governments, or individuals associated with them, from making investments in the United States that could be used to ultimately undermine US national security. CFIUS has a specific and meaningful role focused on investments, but nowadays it has too often become the default instrument for reconciling an increasingly broad swath of national security challenges. This is in part because it has a track record of success, but also because it’s one of the only meaningful tools available to policymakers. But it is not an ideal tool for every situation, something best demonstrated by CFIUS’s challenge in resolving TikTok’s ongoing review that has stretched on for more than two years now.

The bipartisan RESTRICT Act—which would give the Department of Commerce the right to review foreign technologies and ban them in the United States or force their sale—is a thoughtful place from which to begin discussions about additional ways to mitigate the US national security challenges related to information and communications platforms available for mass use. But that act alone would not solve the broader data challenges as they exist today.

The lack of federal regulation related to commercial data brokers, which today can and do legally collect and resell the data of millions of Americans, is a glaring gap that needs to be filled immediately. A ban on TikTok, for example, would do nothing to prevent data brokers from aggregating the same consumer data from other apps and re-selling it to commercial entities, including those in China. 

The threat posed by China to US national security, and to Americans’ individual data, is acute. The good news is the United States can deal with these challenges, but it will take more than just banning TikTok.


Jonathan Panikoff is a senior fellow in the Atlantic Council’s GeoEconomics Center and the former director of the Investment Security Group, overseeing the intelligence community’s CFIUS efforts, at the Office of the Director of National Intelligence.

The views expressed in this publication are the author’s and do not imply endorsement by the Office of the Director of National Intelligence, the intelligence community, or any other US government agency.

What to expect from the world’s democratic tech alliance as the Summit for Democracy unfolds https://www.atlanticcouncil.org/blogs/new-atlanticist/what-to-expect-from-the-worlds-democratic-tech-alliance-as-the-summit-for-democracy-unfolds/ Wed, 29 Mar 2023 17:37:06 +0000 https://www.atlanticcouncil.org/?p=630003 Ahead of the Biden administration’s second Summit for Democracy, stakeholders from the Freedom Online Coalition gave a sneak peek at what to expect on the global effort to protect online rights and freedoms.

The post What to expect from the world’s democratic tech alliance as the Summit for Democracy unfolds appeared first on Atlantic Council.

Watch the full event

Ahead of the Biden administration’s second Summit for Democracy, US Deputy Secretary of State Wendy Sherman gave a sneak peek at what to expect from the US government on its commitments to protecting online rights and freedoms.

The event, hosted by the Atlantic Council’s Digital Forensic Research Lab on Monday, came on the same day that US President Joe Biden signed an executive order restricting the US government’s use of commercial spyware that may be abused by foreign governments or enable human-rights abuses overseas.

But there’s more in store for this week, Sherman said, as the United States settles into its role as chair of the Freedom Online Coalition (FOC)—a democratic tech alliance of thirty-six countries working together to support human rights online. As chair, the United States needs “to reinforce rules of the road for cyberspace that mirror and match the ideals of the rules-based international order,” said Sherman. She broke that down into four top priorities for the FOC:

  1. Protecting fundamental freedoms online, especially for often-targeted human-rights defenders
  2. Building resilience against digital authoritarians who use technology to achieve their aims
  3. Building a consensus on policies designed to limit abuses of emerging technologies such as artificial intelligence (AI)
  4. Expanding digital inclusion  

“The FOC’s absolutely vital work can feel like a continuous game of catch-up,” said Sherman. But, she added, “we have to set standards that meet this moment… we have to address what we see in front of us and equip ourselves with the building blocks to tackle what we cannot predict.”

Below are more highlights from the event, during which a panel of stakeholders also outlined the FOC’s role in ensuring that the internet and emerging technologies—including AI—adhere to democratic principles.

Deepening fundamental freedoms

  • Sherman explained that the FOC will aim to combat government-initiated internet shutdowns and ensure that people can “keep using technology to advance the reach of freedom.”
  • Boye Adegoke, senior manager of grants and program strategy at the Paradigm Initiative, recounted how technology was supposed to help improve transparency in Nigeria’s recent elections. But instead, the election results came in inconsistently and after long periods of time. Meanwhile, the government triggered internet shutdowns around the election period. “Bad actors… manipulate technology to make sure that the opinions and the wishes of the people do not matter at the end of the day,” he said.
  • “It’s very important to continue to communicate the work that the FOC is doing… so that more and more people become aware” of internet shutdowns and can therefore prepare for the lapses in internet service and in freely flowing, accurate information, Adegoke said.
  • The FOC also provides a place for dialogue on human rights in the online space, said Alissa Starzak, vice president and global head of public policy at Cloudflare. Adegoke, who also serves in the FOC advisory network, stressed that “human rights [are] rarely at the center of the issues,” so the FOC offers an opportunity to mainstream that conversation into policymakers’ discussions on technology.
  • On a practical level, once industry partners expose where disruptions are taking place, the FOC offers a mechanism by which democratic “governments can work together to sort of pressure other governments to say these [actions] aren’t acceptable,” Starzak argued.

Building resilience against digital authoritarianism

  • “Where all of [us FOC countries] may strive to ensure technology delivers for our citizens, autocratic regimes are finding another means of expression,” Sherman explained, adding that those autocratic regimes are using technologies to “divide and disenfranchise; to censor and suppress; to limit freedoms, foment fear, and violate human dignity.” New technologies are essentially “an avenue of control” for authoritarians, she explained.
  • At the FOC, “we will focus on building resilience against the rise of digital authoritarianism,” Sherman said, which has “disproportionate and chilling impacts on journalists, activists, women, and LGBTI+ individuals” who are often directly targeted for challenging the government or expressing themselves.
  • One of the practices digital authoritarians often abuse is surveillance. Sherman said that as part of the Summit for Democracy, the FOC and other partners will lay out guiding principles for the responsible use of surveillance tech.
  • Adegoke recounted how officials in Nigeria justified their use of surveillance tech by saying that the United States also used the technology. “It’s very important to have some sort of guiding principle” from the United States, he said.
  • After Biden signed the spyware executive order, Juan Carlos Lara, executive director at Derechos Digitales, said he expects other countries “to follow suit and hopefully to expand the idea of bans on spyware or bans on surveillance technology” that inherently pose risks to human rights.

Addressing artificial intelligence

  • “The advent of AI is arriving with a level of speed and sophistication we haven’t witnessed before,” warned Sherman. “Who creates it, who controls it, [and] who manipulates it will help define the next phase of the intersection between technology and democracy.”
  • Some governments, Sherman pointed out, have used AI to automate their censorship and suppression practices. “FOC members must build a consensus around policies to limit these abuses,” she argued.
  • Speaking from an industry perspective, Starzak acknowledged that sometimes private companies and governments “are in two different lanes” when it comes to figuring out how they should use AI. But setting norms for both good and bad AI use, she explained, could help get industry and the public sector in the same lane, moving toward a world in which AI is used in compliance with democratic principles.
  • Lara, who also serves in the FOC advisory network, explained that the FOC has a task force to specifically determine those norms on government use of AI and to identify the ways in which AI contributes to the promise—or peril—of technology in societies worldwide.

Improving digital inclusion

  • “The internet should be open and secure for everyone,” said Sherman. That includes “closing the gender gap online” by “expanding digital literacy” and “promoting access to safe online spaces” that make robust civic participation possible for all. Sherman noted that the FOC will specifically focus on digital inclusion for women and girls, LGBTI+ people, and people with disabilities.
  • Starzak added that in the global effort to cultivate an internet that “builds prosperity,” access to the free flow of information for all is “good for the economy and good for the people.” Attaining that version of the internet will require a “set of controls” to protect people and their freedoms online, she added.
  • Ultimately, there are major benefits to be had from expanded connectivity. According to Sherman, it “can drive economic growth, raise standards of living, create jobs, and fuel innovative solutions” for global challenges such as climate change, food insecurity, and good governance.

Katherine Walla is an associate director of editorial at the Atlantic Council.

Watch the full event

Wendy Sherman on the United States’ priorities as it takes the helm of the Freedom Online Coalition https://www.atlanticcouncil.org/news/transcripts/wendy-sherman-on-the-united-states-priorities-as-it-takes-the-helm-of-the-freedom-online-coalition/ Tue, 28 Mar 2023 14:22:55 +0000 https://www.atlanticcouncil.org/?p=628865 US Deputy Secretary of State Wendy Sherman outlined the priorities for the world's democratic tech alliance, from protecting fundamental freedoms online to building resilience against digital authoritarianism.

The post Wendy Sherman on the United States’ priorities as it takes the helm of the Freedom Online Coalition appeared first on Atlantic Council.

Watch the event

Event transcript

Uncorrected transcript: Check against delivery

Introduction
Rose Jackson
Director, Democracy & Tech Initiative, Digital Forensic Research Lab

Opening Remarks
Wendy Sherman
Deputy Secretary of State, US Department of State

Panelists
Boye Adegoke
Senior Manager, Grants and Program Strategy, Paradigm Initiative

Juan Carlos Lara
Executive Director, Derechos Digitales

Alissa Starzak
Vice President, Global Head of Public Policy, Cloudflare

Moderator
Khushbu Shah
Nonresident Fellow, Digital Forensic Research Lab

ROSE JACKSON: Hello. My name is Rose Jackson, and I’m the director of the Democracy + Tech Initiative here at the Atlantic Council in Washington, DC.

I’m honored to welcome you here today for this special event, streaming to you in the middle of the Freedom Online Coalition, or the FOC’s first strategy and coordination meeting of the year.

For those of you watching at home or many screens elsewhere, I’m joined here in this room by representatives from thirty-one countries and civil-society and industry leaders who make up the FOC’s advisory network. They’ve just wrapped up the first half of their meeting and wanted to bring some of the conversation from behind closed doors to the community working everywhere to ensure the digital world is a rights-respecting one.

It’s a particularly important moment for us to be having this conversation. As we get ready for the second Summit for Democracy later this week, the world’s reliance and focus on the internet has grown, while agreement [on] how to further build and manage it frays.

I think at this point it’s a bit of a throwaway line that the digital tools mediate every aspect of our lives. But the fact that most of the world has no choice but to do business, engage with their governments, or stay connected with friends and family through the internet makes the rules and norms around how that internet functions a matter of great importance. And even more because the internet is systemic and interconnected, whether it is built and imbued with the universal human rights we expect offline will determine whether our societies can rely on those rights anywhere.

Antidemocratic laws have a tendency of getting copied. Troubling norms are established in silence. And a splintering of approach makes it easier for authoritarians to justify their sovereign policies used to shutter dissent, criminalize speech, and surveil everyone. These are the core democratic questions of our time, and ensuring that the digital ecosystem is a rights-respecting one requires democracies [to row] in the same direction in their foreign policy and domestic actions.

The now twelve-year-old FOC, as the world’s only democratic tech alliance, presents an important space for democratic governments to leverage their shared power to this end, in collaboration with civil society and industry around the world.

We were encouraged last year when Secretary of State Antony Blinken announced at our open summit conference in Brussels that the US would take over as chair of the FOC in 2023 as part of its commitment to reinvest in the coalition and its success. Just over an hour ago, the US announced a new executive order limiting its own use of commercial spyware on the basis of risks to US national security and threats to human rights everywhere, which really brings home the stakes and potential of this work.

So today we’re honored to have Deputy Secretary of State Wendy Sherman here to share more about the US government’s commitment to these issues and its plans for the coming year as chair.

We’ll then turn to a panel of civil-society and industry leaders from around the world to hear more about how they view the role and importance of the FOC in taking action on everything from internet shutdowns to surveillance tech and generative AI. That session will be led by our nonresident fellow and the former managing editor of Rest of World Khushbu Shah.

Now, before I turn to the deputy secretary, I want to thank the FOC support unit, the US State Department, and our Democracy and Tech team here for making this event possible. And I encourage you in Zoomland to comment on and engage liberally with the content of today’s event on your favorite social media platforms, following at @DFRLab, and using the hashtags #SummitforDemocracy, #S4D too, or #PartnersforDemocracy.

For those tuning in remotely in need of closed captioning, please view today’s program on our YouTube channel through the link provided in the chat.

It is now my distinct honor to pass the podium to Deputy Secretary of State Wendy Sherman, who needs no introduction as one of our nation’s most experienced and talented diplomats.

Deputy Secretary, thank you so much for joining us.

WENDY SHERMAN: Good afternoon. It’s terrific to be with you, and thank you, Rose, for your introduction and for all of the terrific work that the Freedom Online Coalition is doing.

It is fitting to be here at the Atlantic Council for this event because your mission sums up our purpose perfectly: shaping the global future together. That is our fundamental charge in the field of technology and democracy: how we use modern innovations to forge a better future.

That’s what the DFRLab strives to achieve, through your research and advocacy, and that’s what the Freedom Online Coalition, its members, observers, and advisory network seek to accomplish through our work. Thank you for your partnership.

More than five decades ago—seems like a long time ago, but really very short—the internet found its origins in the form of the first online message ever sent, all of two letters in length, delivered from a professor at UCLA to colleagues at Stanford. It was part of a project conceived in university labs and facilitated by government. It was an effort meant to test the outer limits of rapidly evolving technologies and tap into the transformative power of swiftly growing computer networks.

What these pioneers intended at the time was actually to devise a system that could allow people to communicate in the event of a nuclear attack or another catastrophic event. Yet what they created changed everything—how we live and work, how we participate in our economy and in our politics, how we organize movements, how we consume media, read books, order groceries, pay bills, run businesses, conduct research, learn, write, and do nearly everything we can think of.

Change didn’t happen overnight, of course, and that change came with both promise and peril. This was a remarkable feat of scientific discovery, and it upended life as we know it for better, and sometimes, worse.

Over the years, as we went from search engines to social media, we started to face complicated questions as leaders, as parents and grandparents, as members of the global community—questions about how the internet can best be used, how it should be governed, who might misuse it, how it impacts our children’s mental and emotional health, who could access it, and how we can ensure that access is equitable—benefitting people in big cities, rural areas, and everywhere in between. Big-picture questions arose about these tectonic shifts. What would they mean for our values and our systems of governance? Whether it’s the internet as we understand it today or artificial intelligence revolutionizing our world tomorrow, will digital tools create more democracy or less? Will they be deployed to maximize human rights or limit them? Will they be used to enlarge the circle of freedom or constraint and contract it?

For the United States, the Freedom Online Coalition, and like-minded partners, the answer should point in a clear direction. At a basic level, the internet should be open and secure for everyone. It should be a force for free enterprise and free expression. It should be a vast forum that increases connectivity, that expands people’s ability to exercise their rights, that facilitates unfettered access to knowledge and unprecedented opportunities for billions.

Meeting that standard, however, is not simple. Change that happens this fast in society and reaches this far into our lives rarely yields a straightforward response, especially when there are those who seek to manipulate technology for nefarious ends. The fact is, where all of us may strive to ensure technology delivers for our citizens, autocratic regimes are finding another means of [oppression]. Where democracies seek to tap into the power of the internet to lift individuals up to their highest potential, authoritarian governments seek to deploy these technologies to divide and disenfranchise, to censor and suppress, to limit freedoms, [to] foment fear and [to] violate human dignity. They view the internet not as a network of empowerment but as an avenue of control. From Cuba and Venezuela to Iran, Russia, the PRC, and beyond, they see new ways to crush dissent through internet shutdowns, virtual blackouts, restricted networks, blocked websites, and more.

Here in the United States, alongside many of you, we have acted to sustain connections to internet-based services and the free flow of information across the globe, so no one is cut off from each other, from the outside world, or from the truth. Yet even with these steps, none of us are perfect. Every day, almost everywhere we look, democracies grapple with how to harness data for positive ends while preserving privacy; how to bring out the best in modern innovations without amplifying their worst possibilities; how to protect the most vulnerable online while defending the liberties we hold dear. It isn’t an easy task, and in many respects, as I’ve said, it’s only getting harder. The growth of surveillance capabilities is forcing us to constantly reevaluate how to strike the balance between using technologies for public safety and preserving personal liberties.

The advent of AI is arriving with a level of speed and sophistication we haven’t witnessed before. It will not be five decades before we know the impact of AI. That impact is happening now. Who creates it, who controls it, [and] who manipulates it will help define the next phase of the intersection between technology and democracy. By the time we realize AI’s massive reach and potential, the internet’s influence might really pale in comparison. The digital sphere is evolving at a pace we can’t fully fathom and in ways at least I can’t completely imagine. Frankly, we have to accept the fact that the FOC’s absolutely vital work can feel like a continuous game of catchup. We have to acknowledge that the guidelines we adopt today might seem outdated as soon as tomorrow.

Now let me be perfectly clear: I am not saying we should throw up our hands and give up. To the contrary, I’m suggesting that this is a massive challenge we have to confront and a generational change we have to embrace. We have to set standards that meet this moment and that lay the foundation for whatever comes next. We have to address what we see in front of us and equip ourselves with the building blocks to tackle what we cannot predict.

To put a spin on a famous phrase, with the great power of these digital tools comes great responsibility to use that power for good. That duty falls on all our shoulders and the stakes could not be higher for internet freedom, for our common prosperity, for global progress, because expanded connectivity, getting the two billion unconnected people online can drive economic growth, raise standards of living, create jobs, and fuel innovative solutions for everything from combating climate change to reducing food insecurity, to improving public health, to promoting good governance and sustainable development.

So we need to double down on what we stand for: an affirmative, cohesive, values-driven, rights-respecting vision for democracy in a digital era. We need to reinforce rules of the road for cyberspace that mirror and match the ideals of the rules-based international order. We need to be ready to adapt our legal and policy approaches for emerging technologies. We need the FOC—alongside partners in civil society, industry, and elsewhere—to remain an essential vehicle for keeping the digital sphere open, secure, interoperable, and reliable.

The United States believes in this cause as a central plank of our democracy and of our diplomacy. That’s why Secretary Blinken established our department’s Bureau of Cyberspace and Digital Policy, and made digital freedom one of its core priorities. That’s why the Biden-Harris administration spearheaded and signed onto the principles in the Declaration for the Future of the Internet alongside sixty-one countries ready to advance a positive vision for digital technologies. That’s why we released core principles for tech-platform accountability last fall and why the president called on Congress to take bipartisan action in January.

That’s why we are committed to using our turn as FOC chair as a platform to advance a series of key goals.

First, we will deepen efforts to protect fundamental freedoms, including human rights defenders online and offline, many of whom speak out at grave risk to their own lives and to their families’ safety. We will do so by countering disruptions to internet access, combating internet shutdowns, and ensuring everyone’s ability to keep using technology to advance the reach of freedom.

Second, we will focus on building resilience against the rise of digital authoritarianism, the proliferation of commercial spyware, and the misuse of technology, which we know has disproportionate and chilling impacts on journalists, activists, women, and LGBTQI+ individuals. To that end, just a few hours ago President Biden issued an executive order that for the first time will prohibit our government’s use of commercial spyware that poses a risk to our national security or that’s been misused by foreign actors to enable human rights abuses overseas.

On top of that step, as part of this week’s Summit for Democracy, the members of the FOC and other partners will lay out a set of guiding principles on government use of surveillance technologies. These principles describe responsible practices for the use of surveillance tech. They reflect democratic values, the rule of law, and adherence to international obligations; they strive to address the disparate effects on certain communities and to minimize the data collected.

Our third objective as FOC chair focuses on artificial intelligence and the way emerging technologies respect human rights. As some try to apply AI to help automate censorship of content and suppression of free expression, FOC members must build a consensus around policies to limit these abuses.

Finally, we will strengthen our efforts on digital inclusion—on closing the gender gap online; on expanding digital literacy and skill-building; on promoting access to safe online spaces and robust civic participation for all, particularly women and girls, LGBTQI+ persons, those with disabilities, and more.

Here’s the bottom line: The FOC’s work is essential and its impact will boil down to what we do as a coalition to advance a simple but powerful idea, preserving and promoting the value of openness. The internet, the Web, the online universe is at its best when it is open for creativity and collaboration, open for innovation and ideas, open for communication and community, debate, discourse, disagreement, and diplomacy.

The same is true for democracy—a system of governance, a social contract, and a societal structure is strongest when defined by open spaces to vote, deliberate, gather, demonstrate, organize, and advocate. This openness could not be more important, because when the digital world is transparent, when democracy is done right, that’s when everyone has a stake in our collective success. That’s what makes everyone strive for a society that is free and fair in our politics and in cyberspace. That’s what will give everyone reason to keep tapping into the positive potential of technology to forge a future of endless possibility and boundless prosperity for all.

So good luck with all your remaining work; lots ahead. And thank you so much for everything that you all do. Thank you.

KHUSHBU SHAH: Hello, everybody. Thank you so much for joining us. I’m Khushbu Shah, a journalist and a nonresident fellow at the Atlantic Council’s DFRLab.

We’re grateful to have these three experts here with us today to discuss rights in the digital world and the Freedom Online Coalition’s role in those rights. I’ll introduce you to these three experts.

This is Adeboye Adegoke, who is the senior manager of grants and program strategy at Paradigm Initiative. We have Alissa Starzak, the vice president and global head of public policy at Cloudflare, and Juan Carlos, known as J.C., Lara, who’s the executive director of Derechos Digitales. And so I will mention that both J.C. and Adeboye are also on the FOC’s Advisory Network, which was created as a strong mechanism for ongoing multi-stakeholder engagement.

And so I’ll start with the thirty-thousand-foot view. So we’ve heard—we’ve just heard about the FOC and its continued mission with the United States at the helm as chair this year in an increasingly interconnected and online world. More than five billion people are online around the world. That’s the majority of people [on] this planet. We spend nearly half of our waking hours online, more than 40 percent.

We as a global group of internet users have evolved in our use of the internet, as you’ve heard, since the creation of the FOC in 2011.

So Adeboye, why do you think so many people are now suddenly focused on technology as a key democratic issue? And speaking, you know, from your own personal experience in Nigeria, should we be?

ADEBOYE ADEGOKE: Yeah. I mean, I think the reasons are very clear, not just [looking out] to any region of the world, but, you know, generally speaking, I mean, the Cambridge Analytica, you know, issue comes to mind.

But also just speaking, you know, very specifically to my experience, on my reality as a Nigerian and as an African, I mean, we just concluded our general elections, and technology was meant to play a huge role in ensuring transparency, you know, the integrity of the elections, which unfortunately didn’t achieve that objective.

But besides that, there are also a lot of concerns around how technology could be manipulated or has been manipulated in order to literally alter potential outcomes of elections. We’re seeing issues of microtargeting; you know, misinformation campaigns around [the] election period to [denigrate], you know, certain candidates.

But what’s even most concerning for me is how technology has been sometimes manipulated to totally alter the outcome of the election. And I’ll give you a very clear example in terms of the just-concluded general elections in Nigeria. So technology was supposed to play a big role. Results were supposed to be transmitted to a central server right from the point of voting. But unfortunately, those results were not transmitted.

As a matter of fact, three or four days after the election, 50 percent of the results had not been uploaded. As of the time that the election results were announced, less than 50 percent of the results had been transmitted, which then begins to, you know, lead to questioning of the integrity of those outcomes. Results are supposed to be transmitted, like, on the spot. So, you know, it becomes concerning.

The electoral panel [gave] an excuse that there was a technical glitch around, you know, their server and all of that. But then the question is, was there actually a technical glitch, or was there a compromise or a manipulation by certain, you know, bad actors to be able to alter the outcome of the election? [This] used to be the order of the day in many supposedly, you know, democratic countries, especially from the part of the world that I come from, where people really doubt whether what they see as the outcomes of their election is the actual outcome or somebody just writing something that they want.

So technology has become a big issue in elections. On one side, technology has the potential to improve on [the] integrity of elections. But on the other side, bad actors also have the tendency to manipulate technology to make sure that the opinions or the wishes of the people do not matter at the end of the day. So that’s very important here.

KHUSHBU SHAH: And you just touched on my next question for Alissa and J.C. So, as you mentioned, digital authoritarians have used tech to abuse human rights and limit internet freedoms. We’re seeing this in Russia and Myanmar, Sudan, and Libya. Those are some examples. [The] deputy secretary mentioned a few others. For example, in early 2022, at the start of its invasion of Ukraine, Russia suppressed domestic dissent by closing or forcing into exile the handful of remaining independent media outlets. In at least fifty-three countries, users have faced legal repercussions for expressing themselves online, often leading to prison terms, according to a report from Freedom House. It’s a trend that leaves people on the frontlines, including journalists and activists alike, defenseless.

And so, J.C., what have you seen globally? What are the key issues we must keep an eye on? And what—and what are some practical steps to mitigate some of these issues?

JUAN CARLOS LARA: Yeah. I think it’s difficult to think about the practical steps without first addressing what those issues are. And I think Boye was pointing out basically what has been perceived as a problem by many in the body politic, or even by many activists throughout the world. But I think it’s important to also note that these broader issues about the threats to democracy, about the threats to human rights, [they] manifest sometimes differently. And that includes how they are seen in my region, in Latin America, where, for instance, the way in which you see censorship might differ from country to country.

While some have been able to pass laws, authoritarian laws that restrict speech and that restrict how expression is represented online and how it’s penalized, some other countries have resorted to the use of existing censorship tools. Like, for instance, some governments [are] using [Digital Millennium Copyright Act] notice and technical mechanisms to delete or to remove some content from the online sphere. So that also becomes a problematic issue.

So when we speak about, like, how do we go into, like, the practical ways to address this, we really need to identify… some low-level practices [that] connect with the higher-level standards that we aspire to for democracies; and how bigger commitments to the rule of law and to fair elections and to addressing and facing human rights threats go to the lower level of what governments are actually doing, what people are actually doing when they are presented with the possibility of exercising some power that can affect the human rights of the population in general. So to summarize a bit of that point, we still see a lot of censorship, surveillance, internet blockings, and also, increasingly, the use of emerging technologies as things that might be threatening to human rights.

And while some of those are not necessarily exclusive to the online sphere, they have certainly been evolving [for] several years. So we really need to address how those are represented today.

KHUSHBU SHAH: Thank you. Alissa, as our industry expert I want to ask you the same question. And especially I want you to maybe touch upon what J.C. was saying about low-level practices that might be practical.

ALISSA STARZAK: You know, I think I actually want to step back and think about all of this, because I think—I think one of the challenges that we’ve seen, and we certainly heard this in Deputy Secretary Sherman’s remarks—is that technology brings opportunities and risks. And some of the challenges, I think, that we’ve touched on are part of the benefit that we saw initially. So the drawback that comes from having broad access is that it can be cut off.

And I think that as we go forward, thinking about the Freedom Online Coalition and sort of how this all fits together, the idea is to have conversations about what it looks like long term, what are the drawbacks that come from those low-level areas, making sure that there is an opportunity for activists to bring up the things that are coming up, for industry, sort of folks in my world, to do the same. And making sure that there’s an opportunity for governments to hear it in something that actually looks collaborative.

And so I think that’s our big challenge. We have to find a way to make sure [that] those conversations are robust, that there is dialogue between all of us, and [that] we can both identify the risks that come from low-level practices like that and then also figure out how to mitigate them.

KHUSHBU SHAH: Thank you. And so, back to you—both of you. I’d like to hear from you both about, as part of civil society—we can start with you, Adegoke—what role as an organization, such as the Freedom Online Coalition, what kind of role can it play in all of these issues that we’re talking about as it expands and it grows in its own network?

ADEBOYE ADEGOKE: Yeah. So I think the work of the Freedom Online Coalition is very critical in such a time as this. So when you look at most international or global [platforms] where conversations around technology [and] its impact are being held, human rights is rarely at the center of the issues. And I think that is where the advocacy comes in terms of highlighting and spotlighting, you know, the relevance of human rights to this issue. And as a matter of fact, not just the relevance but the importance of human rights to this issue.

I think the work of the FOC is relevant even more to the Global South than probably it is to the Global North because in the Global South our engagement with technology, and I mean at the government level, is largely from the perspective of… economics and… security. [Human rights] is, sadly, in an early part of the conversation. So, you know, with a platform like the FOC, it’s an opportunity to mainstream human rights into the technology, you know, conversation generally, and it’s a great thing that some of us from that part of the world are able to engage at this level and also bring those lessons back to our work, you know, domestically in terms of how we engage the policy process in our countries.

And that’s why it’s very important for the work of the FOC to be expanded—you know, to have real impact in terms of how deliberate it is in influencing not just regional processes, but also national processes, because the end goal—and I think the beauty of all the beautiful work that is being done by the coalition—is to see how that reflects on how governments are engaging technology, in terms of how governments are consciously taking into cognizance the human rights implications of, you know, new emerging technologies and even existing technologies. So I think the FOC is a very, very important stakeholder in the technology conversation globally.

KHUSHBU SHAH: J.C., I want to ask you the same question, especially as Chile joined the FOC in recent years. And [I’d] love to hear what you think.

JUAN CARLOS LARA: Yeah. I think it’s important to also note what Boye was saying in the larger context of when this has happened for the FOC. Since its creation, we have seen what has happened in terms of shutdowns, in terms of war, in terms of surveillance revelations. So it’s important to also connect what the likemindedness of certain governments and the high-level principles have to do with the practice of those same governments, as well as their policy positions both in foreign policy forums and internally, as the deputy secretary was mentioning.

I think it’s—that vital role that Boye was highlighting, it’s a key role but it’s constantly a work in progress. In which way? Throughout the process of the FOC meeting and producing documents and statements—that’s when the advisory network that Boye and myself are members of was created. Throughout that work, we’ve been able to see what happens inside the coalition and the discussions they’re having, to some degree, because I understand that some of them might be behind closed doors, and how the process of those statements comes to be.

So we have seen that very important role [in] how it’s produced and how it’s presented by the governments and their dignitaries. However, I still think that it’s a work in progress because we still need to be able to connect that with the practice of governments, including those that are members of the coalition, including my own government that recently joined, and how that is presented in internal policy. And at the same time, I think that key role still has a big role to play in terms of creating those principles; in terms of developing them into increasingly detailed points of action for the countries that are members; but also then trying to influence other countries, those that are not members of the coalition, in order to create, like, better standards for human rights for all internet users.

KHUSHBU SHAH: Any thoughts, Alissa?

ALISSA STARZAK: Yeah. You know, I think J.C. touched on something that is—that is probably relevant for everyone who’s ever worked in government, which is the reality that governments are complicated and there often isn’t one voice, and frequently what you see is that the people who are focused on one issue may not have the same position as people who are working on it from a different angle. And I think the interesting thing for me about the FOC is not that you have to change that as a fundamental reality, but that it’s an opportunity for people to talk about a particular issue with a focus on human rights and take that position back. So for everybody sitting in this room who has an understanding of what human rights online might look like, to be able to say, hey, this is relevant to my government in these ways if you’re a government actor, or for civil society to be able to present a position, that is really meaningful because it means that there’s a voice into each of your governments. It doesn’t mean that you’re going to come out with a definitive position that’s always going to work for everyone or that it’s going to solve all the problems, but it’s a forum. And it’s a forum that’s focused on human rights, and it’s focused on the intersection of those two, which really matters.

So, from an FOC perspective, I think it’s an opportunity. It’s not going to ever be the be all and end all. I think we all probably recognize that. But you need—I think we need a forum like this that really does focus on human rights.

KHUSHBU SHAH: An excellent point, and it brings me to my next question for you three. Let’s talk specifics, speaking of human rights: internet shutdowns. So we’ve mentioned Russia. Iran comes to mind as well during recent months, during protests, and very recently the Indian government cut tens of millions of people off in the state of Punjab as it searched for a Sikh separatist.

So what else can this look like, J.C.? Those are some really sort of very basic, very obvious examples of internet shutdowns. And how can the FOC and its network of partners support keeping people online?

JUAN CARLOS LARA: Yes, thank you for that question, because specifically for Latin America, the way in which shutdowns may present themselves is not necessarily a huge cutting off of the internet for many people. It sometimes presents in other ways. For instance, we have seen the case of one country in South America in which the telecommunications network has been basically abandoned, and therefore all of the possibilities of using the internet are lost, not because the government has decided to cut the cable, but rather because it has let it rot; or it presents in the form of partial and locally focused cutting off of services for certain platforms.

I think the idea of internet shutdowns has provided awareness about the problems that come with losing access to the internet, but that also can be taken by governments to be able to say they have not shut access to the internet; it’s just that there’s either too much demand in a certain area or that a certain service has failed to continue working, or that it’s simply failures by telecommunication companies, or that a certain platform has not complied with its legal or judicial obligations and therefore it needs to be taken off the internet. So it’s important that when we speak about shutdowns we consider the broader picture and not just the idea of cutting off all of the internet.

KHUSHBU SHAH: Adeboye, I’d like to hear what your thoughts are on this in the context of Nigeria.

ADEBOYE ADEGOKE: Yeah. It’s really very interesting. And to the point, you know, he was making about, you know, in terms of when we talk about shutdown, I think the work around [understanding shutdowns] has been great and it’s really helped the world to understand what is happening globally. But just as he said, I think there are also some other forms of exclusion that [happen] because of government actions and inactions that probably wouldn’t fall on that thematic topic of shutdown, but it, in a way, is some sort of exclusionary, you know, policy.

So an example is in some remote areas in Nigeria, for example, for most of the technology companies who are laying cables, providing internet services, it doesn’t make a lot of business sense for them to be, you know, present in those locations. And to make the matter worse for them, the authorities, the local governments, are imposing huge taxes on those companies to be able to lay their fiber cables into those communities, which means that for the businesses, for the companies, it doesn’t make any economic sense to invest in such locations. And so, by extension, those [kinds] of people are shut out from the internet; they are not able to access communication networks and all of that.

But I also think it’s very important to highlight the fact that—I mean, I come from the continent where the internet is shut down for the silliest reason that you can imagine. I mean, there have been [shutdowns] because [the] government was trying to prevent students from cheating in exams, you know? Shutdowns are common during elections, you know? [Shutdowns] happen because [the] government was trying to prevent gossip. So it’s the silliest of reasons why there have been internet [shutdowns] in the area, you know, in the part of the world that I am from.

But what I think—in the context of the work that the FOC does, I think something that comes to mind is how we are working to prevent future [shutdowns]. I spoke about the election that just ended in Nigeria. One of the things that we did was to, shortly before the election, organize, like, a stakeholder meeting of government representatives, of fact checkers, of, you know, the platforms, the digital companies, civil society [organizations], and electoral [observers]… to say that, OK—if you are from Africa, any time an election is coming you are expecting a shutdown. So it’s to have a conversation and say: An election is coming. There is going to be a lot of misinformation. There’s going to be heightened risk online. But what do we need to do to ensure that we don’t have to shut down the internet?

So, for Nigeria, we were able to have that conversation a few weeks before the election, and luckily the [internet was] not shut down. So I mean, I would describe that as a win. But just to emphasize that it is helpful when you engage in a platform like the FOC to understand the dimensions that [shutdowns] take across the world. It kind of helps you to prepare for a potential shutdown, especially if you are in the kind of situation that we were in. And also I think it’s good to spotlight the work that Access Now has done with respect to spotlighting the issue of shutdowns, because it helps to get their perspective.

So, for example, I’m from Nigeria. We have never really experienced a widespread shutdown in Nigeria, but because we are seeing it happen in our sister—in our neighboring countries—we are kind of conscious of that and were able to engage ahead of elections to see, oh, during the election in Uganda, [the] internet was shut down. In Ethiopia, [the] internet was shut down. So it’s likely [the] internet will be shut down in Nigeria. And then to say to the authorities: No, you know what? We don’t have to shut down the internet. This is what we can do. These are the mechanisms on [the] ground to identify risks online and address those risks. And also, holding technology platforms accountable to make sure that they put mechanisms in place, to make sure they communicate those mechanisms clearly during elections.

So it’s interesting how much work needs to go into that, but I think it’s… important work. And I think for the FOC, it’s also—it’s also very important to continue to communicate the work that the FOC is doing in that regard so that more and more people become aware of it, and sort of more people are prepared, you know, to mitigate it, especially where you feel is the highest risk of shutdown.

KHUSHBU SHAH: Thank you. I’m going to jump across to the other side of that spectrum, to surveillance tech, almost literally the opposite, and I wanted to start with the news that Deputy Secretary Sherman mentioned, the news that the Biden administration announced just this afternoon: a new executive order that would broadly ban US federal agencies from using commercially developed spyware that poses threats to human rights and national security.

The deputy secretary also mentioned, Alissa, some guiding principles that they were going to announce later this week with the FOC. What are some—what are some things—what are some principles or what are some ambitions that you would hope to see later this week?

ALISSA STARZAK: So I think there’s a lot coming is my guess. Certainly the surveillance tech piece is an important component, but I think there are lots of broad guidelines.

I actually want to go back to shutdowns for a second, if you don’t mind…. Because I think it’s a really interesting example of how the FOC can work well together and how you take all of the different pieces—even at this table—of how you sort of help work on an internet problem or challenge, right? So you have a world where you have activists on the ground who see particular challenges and who would then work with their local government. You have industry partners like Cloudflare who can actually show what’s happening. So is there a shutdown? Is there a network disruption? You can take the industry component of it, and that provides some information for governments, and then governments can work together to sort of pressure other governments to say these aren’t acceptable. These are the norms: you can’t shut down because you are worried about gossip, or cheating in an exam, right? There’s a set of broad international norms that become relevant in that space, and I think you take that as your example. So you have the players—you have the government to government, you have the civil society to government, you have the industry which provides information to government and civil society. And those are the pieces that can get you to a slightly better place.

And so when I look at the norms coming out later this week, what I’m going to be looking for are that same kind of triangulation of using all of the players in the space to come to a better—to come to a better outcome. So whether that’s surveillance tech, sort of understanding from civil society how it has been used, how you can understand it from other tech companies, how you can sort of mitigate against those abuses, working with governments to sort of address their own use of it to make sure that that doesn’t become a forum—all of those pieces are what you want from that model. And I think—so that’s what I’m looking for in the principles that come out. If they have that triangulation, I’m going to be—I’m going to be very happy.

KHUSHBU SHAH: What would you both be looking for, as well? J.C., I’ll start with you.

JUAN CARLOS LARA: Yeah, as part of the [FOC advisory network], of course, there might be some idea of what’s coming in when we speak about principles for governments for the use of surveillance capabilities.

However, there are two things that I think are very important to consider for this type of issue: first of all, which principles and which rules are adopted by the states. I mean, it’s a very good—it’s very good news that we have this executive order as a first step towards thinking how states refrain from using surveillance technology disproportionately or indiscriminately. That’s a good sign in general. That’s a very good first step. But secondly, within this same idea, we would expect other countries to follow suit and hopefully to expand the idea of bans on spyware or bans on surveillance technology that by itself may pose grave risks to human rights, and not just in the case of this, or that, or the fact that it’s commercial spyware, which is a very important threat including for countries in Latin America who are regular customers for certain spyware producers and vendors.

But separately from that, I think it’s very important to also understand how this ties into the purposes of the Freedom Online Coalition and its principles, and how to have further principles that hopefully pick up on the learnings that we have had from several years of discussion on the deployment of surveillance technologies, especially by academia and civil society. If those are picked up by the governments themselves as principles, we would expect them to exist in practice.

One of the key parts of the discussion on commercial spyware is that I can easily think of a couple of Latin American countries that are regular customers. And one of them is an FOC member. That’s very problematic, when we speak about whether they are abiding by these principles and by human rights obligations or not, and therefore whether these principles will generate any kinds of restraint in the use and the procurement of such surveillance tools.

KHUSHBU SHAH: So I want to follow up on that. Do you think that there—what are the dangers and gaps of having this conversation without proposing privacy legislation? I want to ask both of our—

JUAN CARLOS LARA: Oh, very briefly. Of course, enforcement and the fact that rules may not have the institutional framework to operate I think is a key challenge. That is also tied to capacities, like having people with enough knowledge and, of course, enough exchange of information between governments. And resources. I think it’s very important that governments are also able to enact the laws that they put in the books, that they are able to enforce them, but also to train every operator, every official that might be in contact with any of these issues. So that kind of principle is not just adopted as a common practice, but also reflected in the enforcement of the law, so it gets into the books. Among other things, I think capacities and resources—and collaboration—are key for those things.

KHUSHBU SHAH: Alissa, as our industry expert, I’d like to ask you that same question.

ALISSA STARZAK: You know, I think one of the interesting things about the commercial spyware example is that there is a—there is a government aspect on sort of restricting other people from doing certain things, and then there is one that is a restriction on themselves. And so I think that’s what the executive order is trying to tackle. And I think that the restricting others piece, and sort of building agreement between governments that this is the appropriate thing to do, is—it’s clearly the objective here, right?

So, no, it’s not that every government does this. I think that there’s a reality of surveillance foreign or domestic, depending on what it looks like. But thinking about building rulesets of when it’s not OK, because I think there is—there can be agreement if we work together on what that ruleset looks like. So we—again, this is the—we have to sort of strive for a better set of rules across the board on when we use certain technologies. And I think—clearly, I think what we’ve heard, the executive order, it’s the first step in that process. Let’s build something bigger than ourselves. Let’s build something that we can work across governments for. And I think that’s a really important first step.

ADEBOYE ADEGOKE: OK. Yeah, so—yeah, so, I think, yeah, the executive order, it’s a good thing. Because I was, you know, thinking to myself, you know, looking back to many years ago when in my—in our work when we started to engage our government regarding the issue of surveillance and, you know, human rights implications and all of that, I recall very vividly a minister at the time—a government minister at the time saying that even the US government is doing it. Why are you telling us not to do it? So I think it’s very important.

Leadership is very key. The founding members of the FOC—if you look at the FOC, the principles and all of that—those texts are beautiful. Those texts are great. But then there has to be a demonstration of—you know, of application of those texts even by the governments leading, you know, the FOC so that it makes the work of people like us easier, to say these are the best examples around and you don’t get the kind of feedback you got many years ago; like, oh, even the US government is doing it. So I think the executive order is a very good place to start from, to say, OK, so this is what the US government is doing right now and this is how it wants to define its engagement with spyware.

But, of course, like, you know, he said, it has to be, you know, expanded beyond just, you know, concerns around spyware. It has to be expanded to different ways in which advanced technology [is] applied in government. I come from a country that has had to deal with the issues of, you know, terrorism very significantly in the past ten years, thereabout, and so every justification you need for surveillance tech is just on the table. So whenever you want to have the human rights conversation, somebody’s telling you that, you want terrorists to kill all of us? You know? So it’s very important to have some sort of guiding principle.

Yeah, we understand [the] importance of surveillance to security challenges. We understand how it can be deployed for good uses. But we also understand that there are risks to human-rights defenders, to journalists, you know, to people who hold [governments] accountable. And those have to be factored into how these technologies are deployed.

And in terms of, you know, peculiar issues that we have to face, basically you are dealing with issues around oversight. You are dealing with issues around transparency. You are dealing with issues around [a] lack of privacy frameworks, et cetera. So you see African governments, you know, acquiring similar technologies, trying, you know, in the—I don’t want to say in the guise, because there are actually real problems where those technologies might be justified. But then, because of the lack of these principles, these issues around transparency, oversight, legal oversight, human-rights considerations, it then becomes problematic, because these tools then become abused: it’s true that they are used against human-rights defenders. It’s true that they are used against opposition political parties. It’s true that they are used against activists and dissidents in the society.

So it’s very important to say that we look at the principle that has been developed by the FOC, but we want to see FOC government demonstrate leadership in terms of how they apply those principles to the reality. It makes our work easier if that happens, to use that as an example, you know, to engage our government in terms of how this is—how it is done. And I think these examples help a lot. It makes the work very easy—I mean, much easier; not very easy.

KHUSHBU SHAH: Well, you mentioned a good example; so the US. So you reminded me of the biometric data that countries share in Central and North America as they monitor refugees, asylum seekers, migrants. Even the US partakes. And so, you know, what can democracies do to address the issue when they’re sometimes the ones leveraging these same tools? Obviously, it’s not the same as commercial spyware, but—so what are the boundaries of surveillance and appropriate behavior of governments?

J.C., can I throw that question to you?

JUAN CARLOS LARA: Happy to. And we saw a statement by several civil-society organizations on the use of biometric data with [regard] to migrants. And I think it’s very important that we address that as a problem.

I really appreciated that Boye mentioned, like, countries leading by example, because that’s something that we are often expecting from countries that commit themselves to high-level principles and that sign on to human-rights instruments, that sign declarations by the Human Rights Council and the General Assembly of the [United Nations] or some regional forums, including to the point of signing on to FOC principles.

I think that it’s very problematic that things like biometric data are being used—are being collected from people that are in situations of vulnerability, as is the case of very—many migrants and many people that are fleeing from situations of extreme poverty and violence. And I think it’s very problematic also that also leads to [the] exchange of information between governments without proper legal safeguards that prevent that data from falling into the hands of the wrong people, or even that prevent that data from being collected from people that are not consenting to it or without legal authorization.

I think it’s very problematic that countries are allowing themselves to do that under the idea that this is an emergency situation without proper care for the human rights of the people who are suffering from that emergency and that situations of migrations are being treated like something that must be stopped or contained or controlled in some way, rather than addressing the underlying issues or rather than also trying to promote forms of addressing the problems that come with it without violating human rights or without infringing upon their own commitments to human dignity and to human privacy and to the freedom of movement of people.

I think it’s—that it’s part of observing legal frameworks and refraining from collecting data that they are not allowed to, but also to obeying their own human-rights commitments. And that often leads to refraining from taking certain action. And in that regard, I think the discussions that there might be on any kind of emergency still needs to take a few steps back and see what countries are supposed to do and what obligations they are supposed to abide [by] because of their previous commitments.

KHUSHBU SHAH: So thinking about what you’ve just said—and I’m going to take a step back. Alissa, I’m going to ask you kind of a difficult question. We’ve been talking about specific examples of human rights and what it means to have online rights in the digital world. So what does it mean in 2023? As we’re talking about all of this, all these issues around the world, what does it mean to have freedom online and rights in the digital world?

ALISSA STARZAK: Oh, easy question. It’s really easy. Don’t worry; we’ve got that. Freedom Online’s got it; you’ve just got to come to their meetings.

No, I think—I think it’s a really hard question, right? I think that we have—you know, we’ve built something that is big. We’ve built something where we have sort of expectations about access to information, about the free flow of information across borders. And I think that, you know, what we’re looking at now is finding ways to maintain it in a world where we see the problems that sometimes come with it.

So when I look at the—at the what does it mean to have rights online, we want to—we want to have that thing that we aspire to, I think that Deputy Secretary Sherman mentioned, the sort of idea that the internet builds prosperity, that the access to the free flow of information is a good thing that’s good for the economy and good for the people. But then we have to figure out how we build the set of controls that go along with it that are—that protect people, and I think that’s where the rule of law does come into play.

So thinking about how we build standards that are respect—that respect human rights in the—when we’re collecting all of the information of what’s happening online, right, like, maybe we shouldn’t be collecting all of that information. Maybe we should be thinking of other ways of addressing the concerns. Maybe we should be building [a] framework that countries can use that are not us, right, or that people at least don’t point to the things that a country does and say, well, if they can do this, I can do this, right, using it for very different purposes.

And I think—I think that’s the kind of thing that we’re moving—we want to move towards, but that doesn’t really answer the underlying question is the problem, right? So what are the rights online? We want as many rights as possible online while protecting security and safety, which is, you know, also—they’re also individual rights. And it’s always a balance.

KHUSHBU SHAH: It seems like what you’re touching on—J.C., would you like to—

JUAN CARLOS LARA: No. Believe me.

KHUSHBU SHAH: Well, it seems like what you’re talking about—and we’re touching—we’ve, like, talked around this—is, like, there’s a—there’s a sense of impunity, right, when you’re on—like in the virtual world, and that has led to what we’ve talked about for the last forty minutes, right, misinformation/disinformation. And if you think about what we’ve all been talking about for the last few weeks, which is AI—and I know there have been some moments of levity. I was thinking about—I was telling Alissa about how there was an image of the pope wearing a white puffer jacket that’s been being shown around the internets, and I think someone pointed out that it was fake, that it was AI-generated. And so that’s one example. Maybe it’s kind of a fun example, but it’s also a little bit alarming.

And I think about the conversation we’re having, and what I really want to ask all of you is, so, how might these tools—like the AI, the issue of AI—further help or hurt [human rights] activists and democracies as we’re going into uncharted territories, as we’re seeing sort of the impact of it in real time as this conversation around it evolves and how it’s utilized by journalists, by activists, by politicians, by academics? And what should the FOC do—I know I’m asking you again—what can the FOC do? What should we aim for to set the online world on the right path for this uncharted territory? I don’t know who wants to start and attempt.

ADEBOYE ADEGOKE: OK, I’ll start. Yeah.

So I think it’s great that, you know, the FOC has, you know, different task [forces] working on different thematic issues, and I know there is a task force on the issue of artificial intelligence and human rights. So I think for me that’s a starting point, you know, providing core leadership on how emerging technology generally impacts… human rights. I think that’s the starting point in terms of what we need to do because, like the deputy secretary said, you know, technology’s moving at such a pace that we can barely catch up on it. So we cannot—we cannot afford to wait one minute, one second before we start to work on this issue and begin to, you know, investigate the human rights implications of all of those issues. So it’s great that the FOC’s doing that work.

I would just say that it’s very important for—and I think this [speaks] generally to the capacities of the FOC. I think the FOC needs to be further capacitated so that this work can be brought to bear on real-life issues, in regional and national engagement, so that some of the hard work that has been put into those processes can really be reflected in real, you know, national and regional processes.

ALISSA STARZAK: Yeah. So I definitely agree with that.

I think—I think on all of these issues I think we have a reality of trying to figure out what governments do and then what private companies do, or what sort of happens in industry, and sometimes those are in two different lanes. But in some ways figuring out what governments are allowed to do, so thinking about the sort of negative potential uses of AI may be a good start for thinking about what shouldn’t happen generally. Because if you can set a set of norms, if you can start with a set of norms about what acceptable behavior looks like and where you’re trying to go to, you’re at least moving in the direction of the world that you think you want together, right?

So understanding that you shouldn’t be generating it for the purpose of misinformation or, you know, that—for a variety of other things, at least gets you started. It’s a long—it’s going to be a long road, a long, complicated road. But I think there’s some things that can be done there in the FOC context.

JUAN CARLOS LARA: Yes. And I have to agree with both of you. Specifically, because the idea that we have a Freedom Online Coalition to set standards, or to set principles, and a taskforce that can devote some resources, some time, and discussion to that, can also identify which is the part of the promise and which is the part of the peril. And how governments are going to react in a way that promotes prosperity, that promotes interactivity, and promotes commerce—exercise of human rights, the rights of individuals and groups—and which sides of it become problematic from the side of the use of AI tools, for instance, for detecting certain speech for censorship or for identifying people in the public sphere because they’re out on the streets, or to collect and process people’s data without consent.

I think because that type of expertise and that type of high political debate can be held at the FOC, that can promote the type of norms that we need in order to understand, like, what’s the role of governments in order to steer this somewhere. Or whether they should refrain from doing certain actions that—with the good intention of preventing the spread of AI-generated misinformation or disinformation—may end up stopping these important tools from being used creatively or in constructive ways, or in ways that can allow more people to be active participants in the digital economy.

KHUSHBU SHAH: Thank you. Well, I want to thank all three of you for this robust conversation around the FOC and the work that it’s engaging in. I want to thank Deputy Secretary Sherman and our host here at the Atlantic Council for this excellent conversation. And so if you’re interested in learning more about the FOC, there’s a great primer on it on the DFRLab website. I recommend you check it out. I read it. It’s excellent. It’s at the bottom of the DFRLab’s registration page for this event.

Watch the full event

The post Wendy Sherman on the United States’ priorities as it takes the helm of the Freedom Online Coalition appeared first on Atlantic Council.

Modernizing critical infrastructure protection policy: Seven perspectives on rewriting PPD21 https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/modernizing-critical-infrastructure-protection-policy-seven-perspectives-on-rewriting-ppd21/ Wed, 22 Mar 2023 12:30:00 +0000

In February 2013, then-President Obama signed a landmark presidential policy directive—Presidential Policy Directive 21 (PPD-21)—that defined how US departments and agencies would pursue a unity of government effort to strengthen and maintain US critical infrastructure. Almost a decade later, evolutions in both the threat landscape and the interagency community invite the US government to revise this critical policy.

As the current administration looks to modernize this essential directive, particular emphasis must be placed on two key steps: first, deconflicting and clarifying the specific roles and responsibilities within the ever-growing interagency, particularly the SRMA-CISA relationship; second, helping policymakers better understand and implement a risk-based approach to critical infrastructure protection (if everything is critical, what gets prioritized?).

To dive deeper on this topic, we asked seven experts to offer their perspectives on critical infrastructure and how we can rebalance the interagency to better secure that infrastructure:

If the US government were to change the way it categorizes or prioritizes critical infrastructure, what’s a better alternative to the current approach?

“Over time, the phrase “critical infrastructure” has become overused. This overuse has led to varying definitions of the phrase, and the analyses conducted to better categorize the concept have led to inconsistent focus and findings across the sectors. The baseline definition (assets, systems, and networks, whether physical or virtual, [that] are considered so vital to the United States that their incapacitation or destruction would have a debilitating effect on security, national economic security, national public health or safety, or any combination thereof) does not lend clarity, because there is a definitional tension between infrastructure that is critical for sustaining and supporting Americans’ daily lives and the economy, and infrastructure that might be dangerous (e.g., chemical or nuclear facilities) but not necessarily critical.

The only way to resolve what should consistently be used as the underlying definition of “critical infrastructure” is to clarify the goals or desired end state for these national risk management efforts. For instance, there are stated end goals to support continuity of government objectives, but it is not clear that there is a similar set of national resiliency goals to support the nation’s critical infrastructure. Recent CSAC recommendations (September 2022) made this point directly: “Clear national-level goals in the areas of national security, economic continuity, and health and human safety would help organize public and private critical infrastructure stakeholders in the analysis of what it would take to accomplish those objectives.” Whatever end goal is articulated, it must be sustained consistently for a long time (10 years or more). This will create the continuity necessary to marshal the resources of both industry and government to carry out these goals.

Government does not need to begin from nothing to carry out this work. The sector structure is in place, and the National Critical Functions are understood. Whatever end goal is articulated, the initial analysis can be mobilized using the current sector structure and what we already know about the critical functions, in the following sequential approach:

  • Foundational/lifeline sector analysis: Energy, communications, transportation, and water & wastewater. All are dependent on these critical functions, and existing analysis has shown that disruption impacts are felt at once; all are precursors to community restoration post-disaster.  
  • Middle-level infrastructure: Chemical, financial services, food and agriculture, healthcare/public health, and information technology (IT). The critical functions performed in these sectors are reliant on foundational infrastructure, are complex systems-of-systems, and are necessary for continuity of the economy/society.
  • Higher-level infrastructure (end users, producers of goods and services): Commercial & government facilities, critical manufacturing, defense industrial base (DIB), and emergency services. In some ways, these sectors are consumers of infrastructure and not really providers of it.  This is not to suggest that the services provided by these sectors are NOT critical, but that they rely upon infrastructure provided by others.  

With clearly articulated, long-term national goals, leveraging structures and analysis completed to date, the means to identify, categorize and prioritize which infrastructure is “critical” will be a logical outcome of the analysis.”    

Kathryn Condello, Senior Director, National Security/Emergency Preparedness, Lumen Technologies

In theory, what is a Sector Risk Management Agency (SRMA)? In practice, how should an SRMA’s role change depending on what kind of organization plays that role?

“In theory, an SRMA should be the day-to-day, substantively deep operational partner within USG for private sector critical infrastructure partners. These SRMAs should be the entity that is in the trenches with critical infrastructure operators—working to better understand the threat environment, lift up and support those who lack sufficient resources or capabilities, and guide our partners to acceptable and more sustainable levels of risk management and resilience.

In practice, an organization’s resources and capabilities—and the role that they are able to play—varies a lot depending on the type of organization in this role. I’ll provide two examples here. First, some SRMAs—like the Coast Guard—have regulatory capabilities to help apply pressure to owner/operators in their sector to raise their baselines for security. Others, like the Department of Energy (DoE), need to rely on other agencies to do so or use other, more incentive-based programs to achieve these objectives. Second, SRMAs may bring a different balance of resources and substantive sector knowledge to the table. As an example, CISA—which serves as the SRMA for several sectors—may bring far more resources and manpower to the table than another single agency but may lack the deep sector knowledge and partnerships of an organization like DoE.”

Will Loomis, Associate Director, Digital Forensics Research Lab, Cyber Statecraft Initiative, Atlantic Council

Where are some of the biggest existing fault lines in the relationship between CISA and the SRMAs? How might any future revision to PPD-21 better address these?

“Current PPD-21 guidance is based on the model of the 16 critical infrastructure sectors where roles and responsibilities fall under the designated leads for each sector. This model works well when it comes to directing congressional funding to a particular agency or knowing which agency leads the response to an incident in a specific sector.

In reality, significant challenges to the security and safety of the nation’s critical infrastructure are typically complex, multi-faceted events that are rarely limited to just one sector. This holds true for both a single, catastrophic incident and the simple, daily work necessary to mitigate risks. Actions in both situations depend on and have impacts well outside a single sector.

PPD-21 guidance is purposely not prescriptive, which leaves certain elements open to interpretation when it comes to the SRMA’s primacy compared to CISA. Additionally, current guidance does not account for an agency’s capability to fulfill its SRMA responsibilities. The expertise and capabilities of some SRMAs are generally agreed to be more mature than others. I experienced firsthand, during my time at CISA as part of the COVID Task Force, the friction created by these differing views and capabilities. Disagreements on roles and responsibilities during the response to ransomware at a hospital or regarding the security of information systems in portions of the vaccine supply chain induced unnecessary challenges during an already difficult national pandemic.

I am not advocating for more detail on roles and responsibilities, since no amount of guidance could cover every situation and account for the differences in each agency’s expertise and capabilities. I do think a different approach where PPD-21 guidance has an increased focus towards national functions and an emphasis on greater collaboration and integration would better serve the ability of federal agencies to fulfill their missions.”

Steve Luczynski, Senior Manager – Critical Infrastructure Security, Accenture Federal Services

What responsibilities should SRMAs be investing in to be better operational partners for the private sector?

“SRMAs should look to prioritize those assets most significant to national security, begin processes to analyze risk, and ultimately buy down that risk by utilizing experts within those sectors and cross-training them in cyber. It’s time we refocus on nationally critical assets vs. trying to be everything to every asset, almost like a helpdesk approach to critical infrastructure protection. This includes clearly defining roles for state and local entities, as well as setting objectives for performance. Finally, the government should cross-train the private sector in a common language for coordination, like the Incident Command System, to work together better on a day-to-day basis, as well as during response and recovery from cyber events.”

Megan Samford, Non-Resident Senior Fellow, Cyber Statecraft Initiative, Atlantic Council; VP & Chief Product Security Officer – Energy Management, Schneider Electric

How should any future revision of PPD-21 think holistically about SRMA capabilities?

“In a perfect world there would be a dedicated cybersecurity SME at the federal level for each critical infrastructure sector, either within each SRMA or at CISA as a main technical liaison. In the absence of that ideal, and with ‘near-future’ capabilities in mind, SRMAs’ cybersecurity maturity and mandates should capture the entire supply chain—security management of suppliers, enterprise content management, development environment, products and services, upstream supply chain, operational technology (OT), and downstream supply chain—aligned to the CISA Cybersecurity Performance Goals as a baseline. As the SRMAs designate required tools and capabilities at the asset owner level, they should continue vendor-neutral evaluations of designated and required tools and capabilities. These agencies should represent the boots-on-the-ground approach to the reframing sections above. SRMAs also need to identify the level of cybersecurity and risk management that asset owners can afford to own vs. what government can reasonably subsidize and augment. I don’t believe this can be effectively done without addressing the point above. Lastly, SRMAs should reevaluate the definition and efficacy of information sharing capabilities within each sector, as information sharing ≠ situational awareness ≠ incident prevention.

Regardless of commonalities, no two attacks on OT/industrial control systems (ICS) are ever exactly the same, making automated response and remediation difficult. Unfortunately, this reality means that every operation and facility must wait to see another organization victimized before there can be shared signatures, detections, and fully baked intelligence for threat hunting to ensue. In terms of the threat landscape, there is no way to standardize and correlate threat and vulnerability research produced by competitive market leaders. Information sharing lacks trust and verification, and has been siloed into sector-specific, private-sector, or government-agency-specific mechanisms—creating single sources of information without much consensus. This is a major roadblock for efficacy across SRMAs and their situational awareness/strategic planning.”

Danielle Jablanski, Non-Resident Senior Fellow, Cyber Statecraft Initiative, Atlantic Council; OT Cybersecurity Strategist, Nozomi Networks

How can the US government address risks associated with cross-sector interdependencies in the naturally siloed SRMA model?

“When addressing cyber risks to critical infrastructure, the US government—and industry—need to reframe thinking around jurisdiction and impact. The SRMA model hinges on federal agencies, which creates a governance gap and cognitive blind spot for interdependence. In the same way that the National Security Council drives the interagency process, the US government needs a coordinating body to prioritize and manage the competing and corollary agencies. Whether that is CISA or ONCD, one office must take the strategic, systemic view of critical infrastructure.”

Munish Walter-Puri, Senior Director of Critical Infrastructure, Exiger

In any future policy, how could the US government preserve the ability to regularly adjust the boundaries of critical infrastructure classifications or sectors?

“Presidential Policy Directive 21 identified 16 critical infrastructure sectors and their associated sector-specific agencies (now called SRMAs) and called upon the Secretary of Homeland Security to “periodically evaluate the need for and approve changes to critical infrastructure sectors” and to “consult with the Assistant to the President for Homeland Security and Counterterrorism before changing a critical infrastructure sector or a designated [SRMA] for that sector.” Since the issuance of PPD-21, changes to the Homeland Security Act have required a reassessment of the current sector structure and SRMA designations at least every five years. The National Defense Authorization Act for Fiscal Year 2021 required the Secretary of Homeland Security to evaluate the sectors and SRMA designations and provide recommendations for revisions to the President. In fulfillment of this mandate, the Department of Homeland Security delivered a report to Congress and the President, assessing that the absence of a statutory basis for the definition of a “sector” has “created a challenge in clarifying and building criteria for clarifying and rationalizing the sector structure.” The report cites the National Infrastructure Protection Plan as the origin of the current operating definition of a “sector”: “[A] logical collection of assets, systems, or networks that provide a common function to the economy, government, or society.”

In evaluating critical infrastructure sector classifications or structure, the federal government should minimize the overall number of sectors to allow for productive engagement to accomplish specific efforts. Focusing on creating structures to enable cross-sector engagement scoped around specific risk management concerns prioritizes the work to be performed with flexibility and who needs to be there to support it. The current statutory requirement to regularly evaluate sector classifications would be sufficient provided the federal government creates a mechanism to convene critical infrastructure owners and operators independent of sector designations. In its September 2022 recommendations to the Director of the Cybersecurity and Infrastructure Security Agency (CISA), the Cybersecurity Advisory Committee Subcommittee on Systemic Risk recommended that CISA “[scope] its national resilience efforts around focus areas like national security, health and human safety, and economic prosperity” with the goal of enabling CISA “to use resources and personnel more efficiently to prioritize the appropriate [National Critical Functions]–and [systemically important entities]–and orient national resilience programming within each scope.” Within each of these focus areas, CISA, in its role as the national coordinator of sector risk management agencies, should periodically assess the challenges facing critical infrastructure owners and operators and identify workstreams to organize relevant entities that measurably contribute to the risk management effort. For example, under the broad focus area of national security, CISA might organize a cross-sector effort to address small unmanned aerial system surveillance of critical infrastructure sites, an issue for which the White House has organized a task force. 
These assessments should align with the cadence that the Homeland Security Act requires for reassessments of the sector/SRMA designations or in conjunction with the five-year term granted to the CISA Director. The federal government should also ensure that there is a mechanism for leadership of both the Sector Coordinating Councils and Government Coordinating Councils that provides decision-making authority for workstreams as the risk landscape evolves and new challenges arise.”

Jeffrey Baumgartner, Vice President, National Security and Resilience, Berkshire Hathaway Energy

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post Modernizing critical infrastructure protection policy: Seven perspectives on rewriting PPD21 appeared first on Atlantic Council.

]]>
The 5×5—Conflict in Ukraine’s information environment https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-conflict-in-ukraines-information-environment/ Wed, 22 Mar 2023 04:01:00 +0000 https://www.atlanticcouncil.org/?p=625738 Experts provide insights on the war being waged through the Ukrainian information environment and take away lessons for the future.

The post The 5×5—Conflict in Ukraine’s information environment appeared first on Atlantic Council.

]]>
This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Just over one year ago, on February 24, 2022, Russia launched a full-scale invasion of neighboring Ukraine. The ensuing conflict, Europe’s largest since World War II, has besieged Ukraine not only physically but also through the information environment. Through kinetic, cyber, and influence operations, Russia has placed Ukraine’s digital and physical information infrastructure—including its cell towers, networks, data, and the ideas that traverse them—in its crosshairs as it seeks to cripple Ukraine’s defenses and bring its population under Russian control.

Because technology companies privately own and operate much of the infrastructure underpinning the cyber and information domains, a range of local and global companies have played a significant role in defending the information environment in Ukraine. From Ukrainian telecommunications operators to global cloud and satellite internet providers, the private sector has been woven into Ukrainian defense and resilience. For example, Google’s Threat Analysis Group reported disrupting over 1,950 instances of Russian information operations in 2022 aimed at degrading support for Ukraine, undermining its government, and building support for the war within Russia. The present conflict in Ukraine offers lessons for states as well as private companies on why public-private cooperation is essential to building resilience in this space, and how these entities can work together more effectively.

We brought together a group of experts to provide insights on the war being waged through the Ukrainian information environment and take away lessons for the United States and its allies for the future. 

#1 How has conflict in the information environment associated with the war in Ukraine compared to your prior expectations?

Nika Aleksejeva, resident fellow, Baltics, Digital Forensic Research Lab (DFRLab), Atlantic Council

“As the war in Ukraine started, everyone was expecting to see Russia conducting offensive information influence operations targeting Europe. Yes, we have identified and researched Russia’s coordinated information influence campaigns on Meta’s platforms and Telegram. These campaigns targeted primarily European countries, and their execution was unprofessional, sloppy, and without much engagement on respective platforms.” 

Silas Cutler, senior director for cyber threat research, Institute for Security and Technology (IST)

“A remarkable aspect of this conflict has been how Ukraine has maintained communication with the rest of the world. In the days leading up to the conflict, there was a significant concern that Russia would disrupt Ukraine’s ability to report on events as they unfolded. Instead of losing communication, Ukraine has thrived while continuously highlighting through social media its ingenuity within the conflict space. Both the mobilization of its technical workforce through the volunteer IT_Army and its ability to leverage consumer technology, such as drones, have shown the incredible resilience and creativity of the Ukrainian people.” 

Roman Osadchuk, research associate, Eurasia, Digital Forensic Research Lab (DFRLab), Atlantic Council

“The information environment was chaotic and tense even before the invasion, as Russia had waged a hybrid war since at least the annexation of Crimea and the war in Eastern Ukraine in 2014. Therefore, the post-invasion dynamic did not bring significant surprises, but it intensified tension and resistance from Ukrainian civil society and government toward Russia’s attempts to explain its unprovoked invasion and muddy the waters around its war crimes. The only things that exceeded expectations were the abuse of the fact-checking toolbox by WarOnFakes and the intensified globalization of the Kremlin’s attempts to tailor messages about the war to their favor globally.”

Emma Schroeder, associate director, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“The information environment has been a central space and pathway throughout which this war is being fought. Russian forces are reaching through that space to attack and spread misinformation, as well as attacking the physical infrastructure underpinning this environment. The behavior, while novel in its scale, is the continuation of Russian strategy in Crimea, and is very much living up to expectations set in that context. What has surpassed expectations is the effectiveness of Ukrainian defenses, in coordination with allies and private sector partners. The degree to which the international community has sprung forward to provide aid and assistance is incredible, especially in the information environment where such global involvement can be so immediate and transformative.” 

Gavin Wilde, senior fellow, Technology and International Affairs Program, Carnegie Endowment for International Peace

“The volume and intensity of cyber and information operations has roughly been in line with my prior expectations, though the degree of private and commercial activity was something that I might not have predicted a year ago. From self-selecting out of the Russian market to swarming to defend Ukrainian networks and infrastructure, the outpouring of support from Western technology and cybersecurity firms was not on my bingo card. Sustaining it and modeling for similar crises are now key.” 

#2 What risks do private companies assume in offering support or partnership to states engaged in active conflict?

Aleksejeva: “Fewer and fewer businesses are betting on Russia’s successful economic future. Additionally, supporting Russia in this conflict in any way is morally unacceptable for most Western companies. Chinese and Iranian companies are different. As for Ukraine, supporting it is morally encouraged, but is limited by many practicalities, such as supply chain disruptions amid Russia’s attacks.”

Cutler: “By providing support during conflict, companies risk becoming a target themselves. Technology companies such as Microsoft, SentinelOne, and Cloudflare, which have publicly reported their support for Ukraine, have been historically targeted by Russian cyber operations and are already familiar with the increased risk. Organizations with pre-conflict commercial relationships may fall under new scrutiny by nationally-aligned hacktivist groups such as Killnet. This support for one side over the other—whether actual or perceived—may result in additional risk.” 

Osadchuk: “An important risk of continuing business as usual [in Russia] is that it may damage a company’s public image and test its declared values, since the continuation of paying taxes within the country-aggressor makes the private company a sponsor of these actions. Another risk for a private company is financial, since the companies that leave a particular market are losing their profits, but this is incomparable to human suffering and losses caused by the aggression. In the case of a Russian invasion, one of the ways to stop the war is to cut funding for and, thus, undermine the Russian war machine and support Ukraine.” 

Schroeder: “Private companies have long provided goods and services to combatants outside of the information environment. The international legal framework restricting combatants to targeting ‘military objects’ provides normative protection, as objects are defined as those ‘whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage’ in a manner proportional to the military gain foreseen by the operation. This definition, however, is still subject to the realities of conflict, wherein combatants will make those decisions to their own best advantage. In the information environment, this question becomes more complicated, as cyber products and services often do not fall neatly within standard categories and private companies themselves own and operate the very infrastructure over and through which combatants engage. The United States and its allies, whether on a unilateral or supranational basis, should work to better define the boundaries of civilian ‘participation’ in war and conflict, as the very nature of the space means that their involvement will only increase.”

Wilde: “On one hand, it is important not to falsely mirror onto others the constraints of international legal and normative frameworks around armed conflict to which responsible states strive to adhere. Like Russia, some states show no scruples about violating these frameworks in letter or spirit, and seem unlikely to be inhibited by claims of neutrality from companies offering support to victimized states. That said, clarity about where goods and services might be used for civilian versus military objectives is advisable to avoid the thresholds of ‘direct participation’ in war outlined in International Humanitarian Law.”

#3 What useful lessons should the United States and its allies take away from the successes and/or failures of cyber and information operations in Ukraine?

Aleksejeva: “As for cyber operations, so far, we have not seen Russia achieve successful disruptions of Ukraine or its Western allies. Yes, we are seeing constant attacks, but cyber defense is much more developed on both sides than before 2014. As for information operations, the United States and its allies should become less self-centered and have a clear view of Russia’s influence activities in the so-called Global South, where many of the narratives are rooted in anti-Western sentiment.”

Cutler: “Prior to the start of the conflict, it was strongly believed that a cyber operation, specifically against energy and communication sectors, would act as a precursor to kinetic action. While a WannaCry or NotPetya-scale attack did not occur, the AcidRain attack against the Viasat satellite communication network and other attacks targeting Ukraine’s energy sector highlight that cyber operations of varying effectiveness will play a role in the lead up to a military conflict.” 

Osadchuk: “First, cyber operations coordinate with other attack types, like kinetic operations on the ground, disinformation, and influence operations. Therefore, cyberattacks might be a precursor of an upcoming missile strike, information operation, or any other action in the physical and informational dimensions, so allies could use cyber to model and analyze multi-domain operations. Finally, preparation for and resilience to information and cyber operations are vital in mitigating the consequences of such attacks; thus, updating defense doctrines and improving cyber infrastructure and social resilience are necessary.” 

Schroeder: “Expectations for operations in this environment have exposed clear fractures in how different communities define success in a wartime operation. Specifically, there is a tendency to equate success with direct or kinetic battlefield impact. One of the biggest lessons that has been both a success and a failure throughout this war is the role that this environment can play. Those at war, from ancient to modern times, have leveraged every asset at their disposal and chosen the tool they see as the best fit for each challenge that arises—cyber is no different. While there is ongoing debate surrounding this question, if cyber operations have not been effective on a battlefield, that does not mean that cyber is ineffective, just that expectations were misplaced. Understanding the myriad roles that cyber can and does play in defense, national security, and conflict is key to creating an effective cross-domain force.”

Wilde: “Foremost is the need to check the assumption that these operations can have decisive utility, particularly in a kinetic wartime context. Moscow placed great faith in its ability to convert widespread digital and societal disruption into geopolitical advantage, only to find years of effort backfiring catastrophically. In other contexts, better trained and resourced militaries might be able to blend cyber and information operations into combined arms campaigns more effectively to achieve discrete objectives. However, it is worth reevaluating the degree to which we assume offensive cyber and information operations can reliably be counted on to play pivotal roles in hot war.”

More from the Cyber Statecraft Initiative:

#4 How do comparisons to other domains of conflict help and/or hurt understanding of conflict in the information domain?

Aleksejeva: “Unlike conventional warfare, information warfare uses information and psychological operations during peacetime as well. Masked behind sock puppet or anonymous social media accounts, information influence operations might be perceived as legitimate internal issues that polarize society. A country might be unaware that it is under attack. At the same time, as the goal of conventional warfare is to break an adversary’s defense line, information warfare fights societal resilience by breaking its unity. ‘Divide and rule’ is one of the basic information warfare strategies.”

Cutler: “When looking at the role of cyber in this conflict, I think it is critical to examine the history of Hacktivist movements. This can be incredibly useful for understanding the influences and capabilities of groups like the IT_Army and Killnet.” 

Osadchuk: “The information domain sometimes reflects the kinetic events on the ground, so comparing these two is helpful and could serve as a behavior predictor. For instance, when the Armed Forces of Ukraine liberate new territories, they also expose war crimes, civilian casualties, and damages inflicted by occupation forces. In reaction to these revelations, the Kremlin propaganda machine usually launches multiple campaigns to distance themselves, blame the victim, or even denounce allegations as staged to muddy the waters for certain observers.” 

Schroeder: “It is often tricky to carry comparisons across different environments and contexts, but the practice persists because, well, that is just what people do—look for patterns. The ability to carry over patterns and lessons is essential, especially in new environments and with the constant development of new tools and technologies. Where these comparisons cause problems is when they are used not as a starting point, but as a predetermined answer.”

Wilde: “It is problematic, in my view, to consider information a warfighting ‘domain,’ particularly because its physical and metaphorical boundaries are endlessly vague and evolving—certainly relative to air, land, sea, and space. The complexities and contingencies in the information environment are infinitely more than those in the latter domains. However talented we may be at collecting and analyzing millions of relevant datapoints with advanced technology, these capabilities may lend us a false sense of our ability to control or subvert the information environment during wartime—from hearts and minds to bits and bytes.”

#5 What conditions might make the current conflict exceptional and not generalizable?

Aleksejeva: “This war is neither ideological nor a war for territories and resources. Russia does not have any ideology that backs up its invasion of Ukraine. It also has a hard time maintaining control of its occupied territories. Instead, Russia has many disinformation-based narratives or stories that justify the invasion to as many Russian citizens as possible, including Kremlin officials. Narratives are general and diverse enough that everyone can find an explanation of the current invasion—be it the alleged rebirth of Nazism in Ukraine, the fight against US hegemony, or the alleged historical right to bring Ukraine back to Russia’s sphere of influence. Though local, the war has global impact and makes countries around the world pick sides. Online and social media platforms, machine translation tools, and big data products provide a great opportunity to bombard any internet user in any part of the world with pro-Russia messaging, often tailored to echo historical, racial, and economic resentments, especially those rooted in a colonial past.”

Cutler: “During the Gulf War, CNN and other cable news networks were able to provide live coverage of military action as it was unfolding. Now, real-time information from conflict areas is more broadly accessible. Telegram and social media have directly shaped the information and narratives from the conflict zone.” 

Osadchuk: “The main difference is the enormous amount of war content, ranging from professional pictures and amateur videos after missile strikes to drone footage of artillery salvos and bodycam footage of fighting in the frontline trenches—all making this conflict the most documented. Second, this war demonstrates the need for drones, satellite imagery, and open-source intelligence for successful operations, which distances it from previous conflicts and wars. Finally, it is exceptional due to the participation of Ukrainian civil society in developing applications, like the one alerting people about incoming shelling or helping find shelter; launching crowdfunding campaigns for vehicles, medical equipment, and even satellite image services; and debunking Russian disinformation on social media.” 

Schroeder: “One of the key lessons we can take from this war is the centrality of the global private sector to conflict in and through the information environment. From expedited construction of cloud infrastructure for the Ukrainian government, to Ukrainian telecommunications companies defending and restoring services along the front lines, to distributed satellite devices providing flexible connectivity to civilians and soldiers alike, private companies have undoubtedly played an important role in shaping both the capabilities of the Ukrainian state and the information battlespace itself. While we do not entirely understand the incentives that drove these actions, an undeniable motivation that will be difficult to replicate in other contexts is the combination of Russian outright aggression and comparative economic weakness. Companies and their directors felt motivated to act due to the first and, likely, free to act due to the second. Private sector centrality is unlikely to diminish and, in future conflicts, it will be imperative for combatants to understand the opportunities and dependencies that exist in this space within their own unique context.”

Wilde: “My sense is that post-war, transatlantic dynamics—from shared norms to politico-military ties—lent significant tailwinds to marshaling resources and support for Ukraine (though not as quickly or amply from some quarters as I had hoped). The shared memory of the fight for self-determination in Central and Eastern Europe in the late 1980s and early 1990s still has deep resonance among the publics and capitals of the West. These are unique dynamics, and the degree to which they could be replicated in other theaters of potential conflict is a pretty open question.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Conflict in Ukraine’s information environment appeared first on Atlantic Council.

]]>
Building a shared lexicon for the National Cybersecurity Strategy https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/building-a-shared-lexicon-for-the-national-cybersecurity-strategy/ Thu, 16 Mar 2023 12:00:00 +0000 https://www.atlanticcouncil.org/?p=621766 The 2023 National Cybersecurity Strategy, released on March 3, represents the ambitions of the Biden Administration to chart a course within and through the cyber domain, staking out a critical set of questions and themes. These ambitions are reflected within the strategy’s pillars and titled sections, but also key words and phrases scattered throughout the […]

The post Building a shared lexicon for the National Cybersecurity Strategy appeared first on Atlantic Council.

]]>

The 2023 National Cybersecurity Strategy, released on March 3, represents the ambitions of the Biden Administration to chart a course within and through the cyber domain, staking out a critical set of questions and themes. These ambitions are reflected within the strategy’s pillars and titled sections, but also key words and phrases scattered throughout the document. As we and others have said, the success of this strategy will hinge largely on the practical implementation of its boldest ideas. The details of that implementation will depend on how the administration chooses to interpret or define many of the key terms found within the strategy.

To begin the creation of a shared lexicon to interpret these terms and the policy questions and implications that flow from each, this series identifies seven terms used throughout the strategy that represent pivotal ideas and priorities of this administration: “best-positioned actors,” “realign incentives,” “shift liability,” “build in security,” “modernize federal systems,” “privacy,” and “norms of responsible state behavior.” This article digs into the meaning behind these phrases and how they serve as waypoints in debates over the future of cybersecurity policy.

Strategy terms

“Best-positioned actors”

Throughout the National Cybersecurity Strategy, there are various iterations of the idea of “best-positioned actors” to describe and delineate the private actors expected, or at the least encouraged, to play a larger role in building and reinforcing a secure cyberspace. The repetition of this term represents a larger trend within the 2023 NCS: the central role of the private sector. The prior strategy certainly represented a step in this process, but its successor signals a more fundamental move toward addressing the significant role of private sector players in shaping cybersecurity.

According to this strategy, a keystone in this effort will be increased responsibility by the “best positioned actors” within the private sector. But what does this term mean? At its most basic level, a best-positioned company is one whose product(s) or service(s) represents a considerable portion of a key structural point identified within a pillar of US cyber strategy and, therefore, a company whose manner of operation will be decisive in determining cybersecurity outcomes for a large number of users. The strategy explains that “protecting data and assuring the reliability of critical systems must be the responsibility of the owners and operators of the systems that hold our data and make our society function, as well as of the technology providers that build and service these systems.” Though specific sectors or companies are not tied to this category in the strategy, the definition appears to include primarily the owners and operators of both traditional physical infrastructure, especially critical infrastructure, as well as digital infrastructure, like cloud computing services. It may also point to entities who operate as crucial intermediary nodes in the software stack or software development life cycle, whose privileged positions allow the implementation of security protections for downstream resources at scale, such as operating systems, app stores, browsers, and code-hosting platforms.

The strategy appears to distinguish these best-positioned actors as a subset of the category of actors whose action, or inaction, has the greatest potential consequences. The strategy further stipulates that a company’s resourcing partially determines its designation as best-positioned. This distinction reflects an issue throughout the digital ecosystem: an entity responsible for a critical product or service might have insufficient resources (falling under what Cyber Statecraft Initiative Senior Fellow Wendy Nather terms the “cyber poverty line”) to act as a best-positioned actor. Such entities may not be “best positioned,” but they are important to security and resilience if the products or services for which they are responsible are depended on by a significant proportion of technology users or would, if compromised, create a large blast radius of effect because they play a connecting role within a large number of other products and services.

The strategy’s emphasis on shifting responsibility is crucial to reducing the impact of security failures on users and serves to support many of the other concepts, including “build in security,” “privacy,” and “realign incentives.” As a result, who that responsibility shifts to, the “best-positioned actors,” will have material influence on the outcomes of these policies. Establishing a common understanding of which companies fall within that category is imperative.

“Realign incentives”

Another term found throughout the National Cybersecurity Strategy—particularly within Pillar Three (Shape Market Forces)—is “incentivizing responsibility,” in its various iterations. It describes how the US government can shape the security ecosystem by motivating actors—chiefly the private sector owners, producers, and operators of critical technologies—toward a sense of heightened responsibility in securing US digital infrastructure. The previous strategy discussed incentives at a very high level—how to incentivize investments, innovation, and so on—but lacked a coherent sense of objective. The 2023 strategy moves closer to stating a goal but still falls short of articulating a plan to achieve it. The repetition of this term represents a larger trend within the 2023 strategy: the desire to shift the onus of security failures away from users and onto the private sector. This term is a major driver of the strategic objectives of the National Cybersecurity Strategy.

These incentives are primarily divided into four categories: investment, procurement, regulation, and liability (discussed in Shift Liability). Investment sits at the heart of Pillar Four (Invest in a Resilient Future), but this approach is common across the pillars. Using investment as an incentive includes creating or building upon existing funds and grant programs for critical and innovative technologies, especially those that are secure and resilient by design (discussed further in Build in Security). Bridging investment and regulation is the strategy's emphasis on using federal purchasing power to create positive incentives within the market to adopt stricter cybersecurity design standards.

More prominent throughout the strategy, however, is a regulatory approach that seeks to balance increased resilience with the realities of the free market. This inclusion is important—resilience investment is not maximally efficient. By design, a resilient system may have multiple channels for the same information or control. Building resilience into a system may also involve costly engineering and research programs that add no new (and marketable) functionality and might even raise the cost of goods sold. Public policy can incentivize these less efficient investments and behaviors, but it may also need to mandate them, especially where markets are most dysfunctional or risk is most concentrated. Regulatory tools are intrinsic to a properly functioning market and suffer in equal measure from neglect and overuse.

The strategy hints that making security and resilience the preferred market choice requires making inadequate security approaches more difficult and costly. The strategy recognizes the critical role that private companies play in creating a secure and resilient cyber ecosystem—they are acknowledged even more frequently than allied and partner states. The various approaches to incentivizing responsibility illustrate the careful balancing act that a more robust public-private relationship will require, creating both opportunity and consequence for the private sector.

The strategy tasks the federal government with creating regulation responsibly, with "modern and nimble regulatory frameworks for cybersecurity tailored for each sector's risk profile, harmonized to reduce duplication, complementary to public-private collaboration, and cognizant of the cost of implementation." This specific and flexible approach marks progress in the government's approach to regulation, yet it raises questions about the US government's capacity to create and regularly update a suite of regulatory statutes with sufficient agility. Finding specific and actionable ways to realign incentives and responsibilities will be essential to achieving the goals set by the 2023 strategy. Doing so, however, requires better identifying both what these regulations seek to achieve and how best to design them, moving past the unproductive debate over whether regulation is friend or foe.

“Shift liability”

The 2023 National Cybersecurity Strategy has an entire subsection dedicated to software liability—one of the strategy's most explicit endorsements of a specific, new policy mechanism to shift responsibility and realign incentives for better cybersecurity. Creating a clear liability framework for software products and services would incentivize vendors to uphold baseline standards for secure software development and production in order to protect themselves from legal action over damages caused by flaws in their products.

In the US legal landscape, software, by itself, is rarely considered a product (in contrast to physical goods with embedded software, such as smart TVs or smart cars). This limits the ability of a user to bring claims under traditional product-liability law against the manufacturer in the event of a security flaw or other problem with the software. In addition, many software vendors disclaim liability by contract—when a consumer clicks “I Agree” on a software license to install a program, they often agree to a contract that forfeits their right to sue the maker. Indeed, the strategy explicitly calls out this tactic.

Taken in tandem, these facts mean that software manufacturers can often insulate themselves from legal liability for failures of their products, removing a strong incentive that has motivated physical-goods manufacturers to put their products through rigorous safety testing. The Federal Trade Commission (FTC) retains broad enforcement powers against unfair and deceptive practices, which it has used to bring judgments against businesses for abysmal security failures, as well as certain authorities to regulate security practices in specific software-reliant sectors like the financial-services industry. However, a broader liability framework specific to software is conspicuously absent.

The strategy, recognizing that even the best-intentioned software manufacturers cannot anticipate all potential security vulnerabilities, leads with a safe-harbor-based approach, in which software manufacturers are insulated from security-related product liability claims if they have adhered to a set of baseline secure development practices. This is a negligence liability standard—where manufacturers are held accountable only if they fail to meet an accepted baseline of adequate care—in contrast to a strict liability standard, in which manufacturers are liable for harms regardless of the precautions they took. The National Cybersecurity Strategy also makes explicit mention of the need to protect open-source developers from any form of liability for participating in open-source development, given that open-source software is more akin to free speech than to the offering of a final product. This recognition is both correct and important in light of the different paradigm within which open-source development operates and its widespread integration into most software products.

The strategy does not explicitly state whether such a standard should be enforced solely by an executive branch agency, such as the FTC, or whether the framework would allow individuals to directly sue software manufacturers whose products harmed them through a private right of action. The acknowledgement of the need to refine the software liability framework is a crucial step toward the strategy's goals of realigning public-private incentives for security and resilience. The strategy is silent on whether existing federal authorities would be sufficient, through the FTC or even the Department of Justice's Civil Cyber Fraud Initiative, or whether a private right of action is still necessary (see here for more context on this distinction and liability as a cybersecurity policy issue). This could be a defining question, especially where congressional action may be needed to establish such a program rather than merely sustain it.

“Build in security”

While discussing ways to shape market forces for improved security and resilience, the National Cybersecurity Strategy dedicates two sections of Pillar Three (Shape Market Forces) to adapting federal grants and other incentives to "build in security" throughout the cyber ecosystem. This is one of the more mature interpretations of the document's focus on reshaping incentives and responsibilities to improve security. As far as individual technologies and products are concerned, vendor incentives to rush to market can leave security features as an afterthought or add-on—worse, they can remove security considerations from design processes entirely. The implementation of secure-by-design technology is especially important in light of the interconnectedness of this space, as the integration of new technology alongside old systems can create points of weakness and transitive risk.

While much policymaking discussion considers how to punish or disincentivize poor practices, rewarding security incorporated at the outset of design is just as useful. Software built to be difficult to compromise (rather than layered with post-facto security features) can be easier, and sometimes cheaper, to defend in daily use, and offers both vendors and users a more defensible product. These benefits multiply when such standards are in place early in the development of an industry, as seen in the administration's desire to implement a National Cyber-Informed Engineering Strategy for the new generation of clean energy infrastructure. The challenge will lie in whether the administration can define what it means to build in security (Is it a set of specific practices, such as using memory-safe languages, or a set of process considerations that must be accounted for and documented?) with enough specificity to build policy incentive structures, such as regulation, around the concept.

The next logical step is to consider how to build in security not just for granular products but for systems writ large. The ever-increasing complexity of cloud infrastructure and other large-scale networked systems is an enormous strain on vendors and service providers, which have already gone to great lengths to engineer processes and software around navigating that complexity. Unchecked, those systems and their increasing importance will put users and government on their heels, forcing them to defend an extremely sophisticated and inherently insecure landscape.

Government is well-positioned to create incentives to help industry avoid race-to-the-bottom market pressures that lead towards insecurity and unmanaged complexity, and the strategy does well to tee up that priority even if it views the cyber landscape through a somewhat narrow product lens. Moving toward incentivizing secure design, architectural review processes, and buying down risk at the systems scale can convert “building in security” from an operational feature of federal funding to a strategic reshaping of the cyber landscape.

“Modernize federal systems”

Section Five within Pillar One (Defend Critical Infrastructure) of the National Cybersecurity Strategy focuses on modernizing what it terms the federal enterprise. The recognition of the federal civilian executive branch agencies (FCEB) as a singular enterprise from a security perspective is valuable and hints at broader themes for the Office of the National Cyber Director’s (ONCD) conception of modernization: streamlined points of contact, better coordinated security posturing and policymaking, and more evenly distributed and accessible resourcing and tooling among other gains.

At the most abstract level, modernization means appropriately adjusting the federal enterprise to the challenges inherent in digital security: complexity, speed, and scale. Perhaps the most important contribution of the strategy here is the simple recognition that the federal government is outmatched—with infrastructure that has so far proven inadequate. The strategy's approach to modernization commits to alleviating the government's dependence on legacy systems that create too porous a foundation for US cybersecurity. Specific adaptations mentioned include the implementation of zero-trust architecture, a migration to cloud-based services, and progress toward "quantum-resistant cryptography-based environments." Notably, "zero trust" remains a phrase of the moment after its starring turn in Executive Order 14028, and its use as a rhetorical catch-all for "modern" security tools and approaches has only increased.

The strategy directly appoints the Office of Management and Budget (OMB), in coordination with the Cybersecurity and Infrastructure Security Agency (CISA), as the lead for FCEB cybersecurity planning and the custodian of shared services for constituent agencies. Though direct implementation plans are not laid out within this document, the specific tasking of the OMB to lead this process, assuming that the office receives the necessary resources, does create accountability and measurability for the pillar.

Another key component of FCEB modernization is a parallel workforce modernization. Any plan to create a modern, resilient federal cyber environment will require fostering a talented, diverse cyber workforce. The ONCD is spearheading this effort, and work on a workforce-specific strategy is underway. The National Cybersecurity Strategy's treatment of the cyber workforce provides a strong foundation for ONCD's more detailed plan to address what is a significant problem for the US government. That plan has an opportunity to go further: not just building the cyber workforce necessary for the problems of today, but ensuring that workforce development proceeds in parallel with government efforts to reshape the federal cyber environment into one that is more secure by design.

Modernization of federal systems is a gargantuan challenge, and one that will never be complete. To effectuate real change, modernization must become an ingrained and cyclical process. This process does not have to mean pursuing the most cutting-edge technology for wholesale implementation across the FCEB, but it must prioritize raising the security baseline by targeting widespread dependencies and reducing risk for the most insecure and critical system components.

“Privacy”

One of the central themes of the inaugural National Cyber Director’s tenure was that cybersecurity must amount to more than creating an absence of threats. Securing the devices and services surrounding us should enable their use toward positive social, political, and economic ends. The security of data on these devices and running through these services is as much a question of protection against its appropriation and misuse by entities to whom it was entrusted as it is a question of preventing theft by malicious adversaries.

It is only a little surprising, then, and very much welcome, to see the National Cybersecurity Strategy repeatedly highlight the importance of privacy as a key component of the United States' cyber posture. Security and privacy are tightly intermeshed, both as a practical issue, where security features can function as guarantors of some privacy policies and protections, and as a policy issue—witness certain European Union (EU) member states' agita over US surveillance and intelligence-collection authorities as they impact the privacy of EU data and the perceived security of US-based cloud services. The inclusion of privacy is an overdue recognition of the fact that, if we succeed at preventing adversaries from stealing data from US networks but then allow the same data to be freely bought and sold on the open web, we have gained little protection from espionage or targeting.

The recurring inclusion of privacy also marks an overdue move to wield the tools of cybersecurity policy and corporate accountability in concert—taking the efforts of entities like the FTC, the Securities and Exchange Commission (SEC), and CISA together to drive change in private-sector behavior. The strategy supports "legislative efforts to impose robust, clear limits on the ability to collect, use, transfer, and maintain personal data and provide strong protections for sensitive data like geolocation and health information," but stops short of acknowledging that Congress' ongoing failure to pass a comprehensive federal privacy law is harming US national cyber posture. Given that such a law would likely include mandatory minimum security standards for entities processing personal data, it would also provide new enforcement tools for the executive branch to penalize companies for poor security practices, going a long way toward creating incentives to fix some of the market failures identified by the administration throughout the strategy. The strategy also arrives as the intelligence community and Congress more publicly recognize the national security importance of data security and the risks posed by the widespread proliferation of surveillance tools.

Privacy has many definitions, but perhaps the most significant implied here is control over information and the right to exercise that control in the service of individual liberty. Strengthening users’ control over the data they produce, its use in digital technologies, and the integrity of those technologies against harm is a means of giving greater power back to users. These acknowledgments are fundamentally important—however, without going further, policy risks falling back into the broken “notice and choice” model of privacy, which has demonstrated its insufficiency in the proliferation of cookie banners under GDPR. The strategy would have gone further if it had acknowledged the need to preclude companies from collecting, processing, and reselling consumer data beyond the minimum required to deliver requested goods and services, which would more fundamentally limit the collection and propagation of Americans’ data.

The embrace of privacy as a key component of cyber posture is a large step, but the strategy still lacks concrete operational plans for implementing this vision. Hopefully, this is a sign of policy action still to come. Using this strategy as another important marker, policymakers should continue to address cybersecurity and privacy issues by bringing individual users back into the conversation and restoring a measure of ownership over their digital footprint along the way.

“Norms of responsible state behavior”

Within the 2023 National Cybersecurity Strategy, the drafters highlight the need for the United States and its like-minded allies and partners to work toward a free, fair, and open cyber domain aligned with US cyber norms and values. This concept, as a guiding principle for strategy, is not new; indeed, it was a central pillar of the 2018 strategy. The continued emphasis placed on norms- and values-guided cyber strategy signals the ongoing importance of this conversation.

This strategy specifically calls out the Declaration of the Future of the Internet (DFI) as creating a foundation for “a common, democratic vision for an open, free, global, interoperable, reliable, and secure digital future.” The strategy also highlights the importance of international institutions and agreements in developing a framework and set of norms for this vision, including the United Nations (UN) Group of Governmental Experts and Open-Ended Working Group and the Budapest Convention on Cybercrime.

While there is agreement among the United States and allies on a set of cyber norms, these norms do not encompass all state behavior in cyberspace. Important differences in approach might impede the level of cooperation sought by the United States and its allies. One such tension, briefly mentioned, is the question of data localization requirements. Pillar Five (Forge International Partnerships) discusses a series of goals surrounding international collaboration, including counter-threat coalitions, partner capacity building, and supply chain security. This pillar also discusses many existing efforts toward enhancing international cooperation, yet it lacks a clear, cohesive set of actions for moving the United States and the global cyber ecosystem toward an "open, free, global, interoperable, reliable, and secure Internet." Without such a bridge, US allies and partners around the globe, especially those with immature or nonexistent relationships with the US government on cyber issues, might struggle to move toward the kind of cyber ecosystem the US government seeks to create.

As the US government builds on and operationalizes the strategy, the cyber norms and values used as its frame will require clear specification as more than just platitudes. The internet is not merely a topic of foreign policy, and there are opportunities throughout the document to better connect the discussions of shifting responsibility and securing the internet, including these important normative dimensions, through domestic implementation. It is simple to claim the pursuit of a free, fair, open, and secure cyber domain. However, if norms are truly to serve as the foundation of cyber strategy, the US government must do more than allude—it must lead the way in integrating specific ideals into its strategy, operations, and tactics.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post Building a shared lexicon for the National Cybersecurity Strategy appeared first on Atlantic Council.

Atkins on ICS Pulse Podcast https://www.atlanticcouncil.org/insight-impact/in-the-news/atkins-on-ics-pulse-podcast/ Fri, 10 Mar 2023 20:47:33 +0000 https://www.atlanticcouncil.org/?p=641195 On March 7, IPSI Nonresident Senior Fellow Victor Atkins was interviewed on the Industrial Cybersecurity Pulse Podcast on protecting critical infrastructure.

The post Atkins on ICS Pulse Podcast appeared first on Atlantic Council.

How will the US counter cyber threats? Our experts mark up the National Cybersecurity Strategy https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/the-us-national-cybersecurity-strategy-mark-up/ Sat, 04 Mar 2023 02:15:46 +0000 https://www.atlanticcouncil.org/?p=576755 On March 2, the White House released the 2023 US National Cybersecurity Strategy. Read along with CSI staff, fellows, and experts for commentary on the document and its relationship with larger cybersecurity policy issues.

The post How will the US counter cyber threats? Our experts mark up the National Cybersecurity Strategy appeared first on Atlantic Council.

On March 2, the Biden administration released its 2023 National Cybersecurity Strategy (NCS), an attempt to chart a course through the stormy waters of cyberspace, where the private sector, peer-competitor states, and nonstate actors navigate around and with each other in ways growing more complex—and dangerous—by the day. The Atlantic Council’s Cyber Statecraft Initiative (CSI), which is housed within the Digital Forensic Research Lab, gathered a group of experts from government and private-sector cyber backgrounds to dive into the document and offer context, commentary, and concerns to help decipher the strategy. Commenters include Maia Hamin, Trey Herr, Danielle Jablanski, Amelie Koran, Will Loomis, Jeff Moss, Katie Nickels, Marc Rogers, Stewart Scott, and Chris Wysopal.

CSI’s key takeaways from the strategy

  1. The strategy offers the much-needed beginnings of an ambitious shift in US cybersecurity policy, but it often falls short on implementation details and on addressing past failures. The actionable outputs it does identify are fundamentally cautious.
  2. The strategy’s greatest virtues might be its focus on the pressing need to grapple with market incentives driving insecurity and to reallocate responsibility for security.
  3. By deferring rigorous treatment of allied and partner states’ role in its strategic vision for cybersecurity, the strategy gives short shrift to cybersecurity’s fundamentally global nature across all pillars.

NCS table of contents

A steady course in stormy seas: How to read the Biden administration’s new cyber strategy

Far before the age of steam, in the earliest days of sailing ships, captains knew to keep their vessels close to shore. Out in deeper water lay the vicissitudes of storms and faithless winds. Safety lay in the often more arduous, lengthier voyages hugging the coastline. Trading speed for the safety of their ship, crew, and cargo, captains steered carefully through the rocks on a conservative course to their destination. Sailors might tell tales of the exotic lands they planned to visit, but reliable routes close to shore kept them far from the perils of such journeys.

The 2023 National Cybersecurity Strategy (NCS), released March 2, reflects this cautious reality in the actual commitments it makes under a bolder vision to “rebalance the responsibility to defend cyberspace” and “realign incentives to favor long-term investments.” The strategy’s greatest contribution in years to come will likely hinge on its success reframing cyber policy toward explicit discussion of the market—and its failure to adequately distribute responsibility and risk while still clinging to weak incentives for good security practices. This will serve future policy efforts well and open discussions about material changes in the complexity and defensibility of digital technologies. A market lens for cyber policy also serves to integrate privacy into mainstream cybersecurity discussions and heartily embraces the notion that it is more than just defense against external compromise that determines the security of users and data. The strategy also charts out new horizons in its acknowledgement of the need to address software product liability while protecting open-source developers.

But in its discussion of a liability regime, and throughout, the strategy often hews close to safe harbors, steering away from the specific actions and policies that would implement the thornier parts of its vision. The document’s focus on the market, for instance, is weakened by the absence of efforts to trace the source of market failings. Missing too are efforts to further unpack barriers to federal information-technology modernization or the complex web of cyber authorities that have left security requirements fragmented and inconsistent across sectors.  The document also does little to integrate the international perspective across its discussion of threats or technologies, leaving the topic largely in a single, final pillar (the strategy is organized into five such pillars).

This was a singular opportunity to better address the global business environment in which technology vendors and consumers operate, and the geopolitical significance attached to questions of technology design and security. One need only look through the rapid expansion of activity in the Committee on Foreign Investment in the United States or the recent flurry of debate around TikTok to see the deeply international nature of the market in which the strategy seeks to drive “security and resilience.” The isolation of international issues ignores the reality of global US security partnerships and insufficiently addresses the reality of defense cooperation in cyberspace with both foreign states and private companies.

The Office of the National Cyber Director was handed a mammoth task in drafting this administration’s NCS. The young office could easily have foundered, beset by the interagency demons of the deep. Instead, it seems this captain and crew chose to remain in sight of land while charting in florid prose what could be in these grand adventures. The result is an important framework with some novel and useful policy activities, but also with questions that the cyber policy community must work to answer in the years to come. Important ideas, such as an affirmative statement about what the balance of responsibility for security should look like across the technology ecosystem, are here established in principle—flags left to be carried forward by others. In light of the fraught political winds the drafting team navigated, the result is commendable, but a frank recognition of how much work remains is also important. This text may serve to fire the imaginations of a generation of sailors yet to leave port, but we must ensure they do indeed set sail for distant shores and capture some of the promise presented here.

Authors and contributors

Maia Hamin is an associate director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She works on the Initiative’s Systems Security portfolio, which focuses on policy for open-source software, cloud, and other technologies with important systemic security effects.

Trey Herr is the director of the Atlantic Council’s Cyber Statecraft Initiative. His team works on cybersecurity and geopolitics including cloud computing, the security of the internet, supply chain policy, cyber effects on the battlefield, and growing a more capable cybersecurity policy workforce.

Danielle Jablanski is a nonresident fellow at the Cyber Statecraft Initiative and an operational technology (OT) cybersecurity strategist at Nozomi Networks, responsible for researching global cybersecurity topics and promoting OT and industrial control systems (ICS) cybersecurity awareness throughout the industry. Jablanski serves as a staff and advisory board member of the nonprofit organization Building Cyber Security, leading cyber-physical standards development, education, certifications, and labeling authority to advance physical security, safety, and privacy in the public and private sectors. Since January 2022, Jablanski has also served as the president of the North Texas Section of the International Society of Automation, organizing monthly member meetings, training, and community engagements.

Amelie Koran is a nonresident senior fellow at the Cyber Statecraft Initiative and the current director of external technology partnerships for Electronic Arts, Inc. Koran has a wide and varied background of nearly thirty years of professional experience in technology and leadership in the public and private sectors. During her career, she has supported work across various government agencies and programs including the US Department of the Interior, Treasury Department, and the Office of the Inspector General in the Department of Health and Human Services. In the private sector, she has held various roles including those at the Walt Disney Company, Splunk, Constellation Energy (now Exelon), Mandiant, and Xerox.  

Will Loomis is an associate director with the Cyber Statecraft Initiative. In this role, he manages a wide range of projects at the nexus of geopolitics and national security with cyberspace.

Jeff Moss is a nonresident senior fellow with the Cyber Statecraft Initiative. He is also the founder and creator of both the Black Hat Briefings and DEF CON, two of the most influential information security conferences in the world, attracting over ten thousand people from around the world to learn the latest in security technology from those researchers who create it. DEF CON just had its thirtieth anniversary.

Katie Nickels is the director of intelligence for Red Canary as well as a SANS certified instructor for FOR578: Cyber Threat Intelligence and a nonresident senior fellow for the Cyber Statecraft Initiative. She has worked on cyber threat intelligence (CTI), network defense, and incident response for over a decade for the US Department of Defense, MITRE, Raytheon, and ManTech.

Marc Rogers is currently CSO for Qnetsecurity. He formerly worked at Okta, Cloudflare, Lookout, and Vectra. Rogers is a well-known security researcher (Tesla Model S, TouchID, Google Glass), senior advisor to IST, a member of the Ransomware Taskforce, and co-founder of the CTI League.

Emma Schroeder is an associate director with the Cyber Statecraft Initiative. Her focus in this role is on developing statecraft and strategy for cyberspace that is useful for both policymakers and practitioners.

Stewart Scott is an associate director with the Cyber Statecraft Initiative. He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy.

Chris Wysopal is the co-founder and CTO of Veracode, an application security technology provider for software developers. He was one of the original software vulnerability researchers in the 1990s and has testified before Congress on government cybersecurity.

The post How will the US counter cyber threats? Our experts mark up the National Cybersecurity Strategy appeared first on Atlantic Council.

Makings of the Market: Seven perspectives on offensive cyber capability proliferation https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/makings-of-the-market-seven-perspectives-on-offensive-cyber-capability-proliferation/ Wed, 01 Mar 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=614128 The marketplace for offensive cyber capabilities continues to grow globally. Their proliferation poses an expanding set of risks to national security and human rights, yet these capabilities also have legitimate uses in state security and defense. To dive deeper on this topic, we asked seven experts to offer their perspectives.

The post Makings of the Market: Seven perspectives on offensive cyber capability proliferation appeared first on Atlantic Council.

The marketplace for offensive cyber capabilities (OCC)—the combination of tools; vulnerabilities; and skills, including technical, organizational, and individual capacities used to conduct offensive cyber operations—continues to grow globally. These capabilities, once developed primarily by a small handful of states, are now available for purchase from this international private market, both legal and illegal, to a widening array of both state and nonstate actors. These capabilities, and their proliferation, pose an expanding set of risks to national security and human rights around the globe.

However, these capabilities also have legitimate uses in state security and defense—the boundaries of which are ill-defined. Many states have clear incentives to participate in this market and acquire these capabilities, and more types of actors are finding financial opportunity as the market grows. Regulation, transparency, and reshaping of this market are necessary to counter the threats this unbounded proliferation poses, and states, acting independently and in cooperation, have both the impetus and the opportunity to do so.

To dive deeper on this topic, we asked seven experts to offer their perspectives on these threats and how policymakers can help counter them: 

Briefly, what are the principal equities/interests in the proliferation of cyber capabilities?

“There are five main players interested in the proliferation of cyber capabilities: capability vendors, governments, middlemen and resellers, large technology companies, and civil society organizations.  

Capability vendors (i.e., zero-day brokers, Access-as-a-Service firms, spyware vendors etc.) sell capabilities to governments, occasionally through middlemen or resellers (especially if they do not have pre-existing relationships with people in government technology acquisition programs). These capabilities usually involve abusing platforms and services offered by tech companies—like breaking into phones, exploiting chat platforms, or hosting malware on cloud services. Some of the operations using these capabilities target legitimate national security threats, but others will target civil society organizations, especially if the government has a wide definition of national security and little outside accountability. The privatization of this industry also means that governments who previously could not afford to build spying capabilities at home can now do so, cheaply.  

Because all players are operating in a space full of secrecy and information asymmetry, each part of the system can and will be abused. Some capability vendors sell to governments they shouldn’t sell to, some middlemen will repackage and resell vulnerabilities they’ve already sold to others, and some governments will abuse these tools to target vulnerable populations or engage in “spyware diplomacy”—allowing their domestic spyware companies to sell to a foreign government in order to curry diplomatic favor. Western governments, large technology firms, and civil society have overlapping interests in this space: curbing the abuse enabled by its inherent secrecy, and thereby seeing fewer abuses of human rights, fewer countries engaging in cyber operations, and fewer actors abusing technology services.”

Winnona DeSombre-Bernsen, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

What benefits and risks do companies like Zerodium, along with similar middlemen, pose as their role grows larger in the proliferation of offensive cyber capabilities?

“Zerodium and other middlemen operate as market makers, buying and selling the same product in a marketplace. Market making is not inherently bad—the problem arises when they connect vendors to customers both internationally and domestically without providing any transparency to the people they’re buying from or selling to.  While these firms enable ways to capture supply from sources that wouldn’t be able to reach buyers directly or would be averse to a direct relationship, they also result in a murky supply chain. There is a lack of understanding around vulnerability sourcing, who the talent is, and who else they’re selling things to. Because of that, governments are unable to drive the direction of the supply chain for future assurance.  

Zerodium is able to operate this way because government customers appreciate the lack of transparency: historically, independent exploits have been written by seedy individuals, and the less government has to interact with them, the better. However, this is no longer the case. Exploits are now available from reputable individuals and companies. If government customers continue to want this ambiguity, they will continue to enable brokers like Zerodium to operate outside of the best interests of the US market. Increased transparency from all parties will make sure offensive cyber capabilities end up in the proper hands.” 

Sophia D’Antoine, founder and managing partner, Margin Research

If there is a legitimate state interest in shaping the flow of offensive cyber capabilities to friendly states, how does this activity differ from conventional arms sales? Is the US government signaling differently in the two spaces?

“The differences between conventional arms and offensive cyber capabilities are immense. Deniable ambiguity muddies every step of the way in attempting to meaningfully curtail the sale of offensive cyber capabilities. First, offensive cyber capabilities are often multi-role by nature; they are tools of network breaching, surveillance, and potential attack depending on how they are used. Second, their footprint is substantially smaller than their physical counterparts, which makes interdiction—or threat thereof—challenging to impossible. Third, offensive cyber capabilities require relatively little except for high quality personnel to produce reliable outputs. While experts may be in relatively short supply, manufacturing and supply chains are much thinner and therefore harder to subject to scrutiny, transparency, and enforcement.  

Only a great deal of collaborative international intent and investment can even remotely make a dent in shaping the flow of offensive cyber capabilities. Efforts will need to include incentivizing the positive actors to continue participating responsibly, disincentivizing sales to less desirable users, creating a culture of due diligence on sale and use, exerting diplomatic pressure on “flexible” nations willing to host unscrupulous sellers or to create a pipeline of expat talent, and maintaining a stronger accounting of key human talent in this space and their doings. Considering the quantity of actors benefiting from the existing ambiguities, it is not clear to me that the motivation even exists to support a shift like that, let alone to invest in it strategically.”
 

Dr. Daniel Moore, cyber warfare researcher, author of “Offensive Cyber Operations”

There has been a lot of focus on Israel and NSO Group, but there are plenty of other countries home to similar activities. What kind of effects might Israel changing the character of its regulation of these firms have on where similar companies choose to do business?

“Despite receiving most of the attention, Israel is far from the only nation with a bustling digital surveillance industry. Indeed, over the last decade, the Israeli government has implemented additional controls on the export of hacking tools, which has caused some local companies to consider moving abroad. The most common destination for this relocation effort so far has been Cyprus, but there is also some expansion in the Middle East and Asia—especially into the United Arab Emirates and Singapore.

As Western governments continue to move toward tighter controls on the sale and development of hacking tools, they will likely face internal pressures from their own defense and intelligence communities, which may effectively temper rapid change. The sale of military-related products has long been seen as a key tool in the nation-state diplomatic toolbox for building and maintaining relations with foreign partners.

Assuming some level of significant regulatory progress in the future, however, I expect to see more spyware companies move into tax haven territories that offer greater corporate secrecy. This is already beginning to occur. While the shift so far is limited and only anecdotal, it may lead to a situation where these companies are harder to identify, track and regulate.”  

Christopher Bing, media fellow, Alperovitch Institute

Besides jurisdiction and the fact that many states want some of these companies to operate, what are policymakers’ biggest challenges in imposing penalties and positively shaping the behavior of companies across the marketplace for offensive cyber capabilities?

“The challenges are varied and speak to the core of transferring concepts from the physical to the digital world. The nature of the asset, i.e., data—which can be easily transmitted and transformed—makes transfers difficult to detect or trace across national boundaries. These traits, coupled with the complex and global nature of an ecosystem spanning varying cultures and legal jurisdictions, create an intricate mix. Beyond these foundational aspects there is then:

  • The strategic national advantage and agency that offensive cyber capabilities provide.
  • The historically slow pace of policy response against dynamic, fast-changing, and modular ecosystems reliant on technical definitions at a trans-government level.
  • An assumption that only companies, and not individuals without a legal entity, can be material shapers and makers of the market or its capabilities.
  • A lack of transparency, insight, and monitorability of this global ecosystem when compared with physical equivalents such as small arms and chemical and radiological weapons.
  • A lack of evidence that an ecosystem which in part has its roots in counterculture, creativity, and anti-authoritarianism can be sufficiently shaped and controlled globally to achieve policy aims.
  • Ways in which software can be broken down into component parts, distributed across many suppliers so as not to provide the functionality described in legislation, and then reassembled in the destination country to provide that functionality.
  • The existence of alternative financial systems that are resilient to Western government-imposed sanctions in situations of non-compliance or disagreement.
  • The existence of a vast and growing amount of capability available as open source, which can be integrated to provide functionality, further lowering the bar to entry.

These examples highlight the complexities and competing forces in a market which are only now starting to be contested. Any one of these could be material in its own right but when combined, highlight the enormity and complexity of the challenge to policymakers. Especially so when we recognize this list isn’t comprehensive. 

However, this does not mean we should not try to learn from previous lessons as we look to address the challenge.”

Ollie Whitehouse, founder, BinaryFirefly

For governments and corporations, there is generally more public awareness of this proliferation and its impacts but so far that attention has translated to only limited action from both groups. What role should different kinds of companies play in raising awareness, shaping, and providing appropriate incentives or disincentives to this market for offensive cyber capabilities?

“Microsoft recognizes the urgency of the threat posed by cyber mercenaries and the proliferation of offensive cyber capabilities and believes that progress can only happen through strong multistakeholder partnerships. Therefore, we welcome the growing number of governments that are taking action. The charges brought in the United States against former US intelligence and military personnel accused of being cyber mercenaries are one such example. The European Parliament’s investigation of spyware use in Europe is another. These developments follow years of work by non-governmental organizations (NGOs), which tirelessly support and draw attention to the victims of cyber mercenaries—innocent citizens around the world.

Similarly, industry recognizes its own role in addressing this issue, but acknowledges that more needs to be done. The volume of abuse connected with this market is increasing exponentially and indeed, it seems likely that the current public revelations may only be the tip of the iceberg. Companies have a key role to play and should focus efforts around:  

  1. Taking steps to counter cyber mercenaries’ use of products and services to harm people;
  2. Identifying ways to actively counter the cyber mercenary market;
  3. Investing in the cybersecurity awareness of customers, users, and the general public;
  4. Protecting customers and users by maintaining the integrity and security of products and services; and
  5. Developing processes for handling valid legal requests for information.

Some transformative business practices include adhering to established corporate responsibility principles grounded in the protection of human rights and adopting policies that ensure private sector transparency.” 

Monica Ruiz, program manager, Digital Diplomacy, Microsoft

How do you expect the clients present in the market for offensive cyber capabilities to change over the next three years?

“The market for offensive cyber capabilities has already demonstrated its ability to grow to meet ever-expanding demand. The affordability of these capabilities, relative to the cost of building them domestically, gives governments previously unable to procure surveillance capabilities an avenue to do so. The PEGA committee inquiry particularly calls out governments like Hungary and Greece, which do not have large cyber operations capabilities but were able to purchase spyware for political suppression, among other uses.

Even in cases where governments have attempted to crack down on companies operating within their countries, like Israel, the talent pool shifts to other states like Cyprus, North Macedonia, and Turkey to circumvent regulation. Growth is thus driven by demand and not limited by any highly effective regulatory scheme. The future of real governance over this market is dependent on governments, technology companies, and civil society partners enacting scalable and transparent policies for both vendors and clients. Done right, the international community can still effectively shape this market to greatly reduce widespread human rights abuses and national security harms.”  

Jen Roberts, program assistant, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Tech innovation helps Ukraine even the odds against Russia’s military might https://www.atlanticcouncil.org/blogs/ukrainealert/tech-innovation-helps-ukraine-even-the-odds-against-russias-military-might/ Tue, 28 Feb 2023 22:50:23 +0000 https://www.atlanticcouncil.org/?p=618100 Over the past year, Ukrainians have demonstrated their ability to defeat Russia using a combination of raw courage and innovative military tech, writes Ukraine's Digital Transformation Minister Mykhailo Fedorov.

The post Tech innovation helps Ukraine even the odds against Russia’s military might appeared first on Atlantic Council.

]]>
For more than a year, Ukraine has been fighting for its life against a military superpower that enjoys overwhelming advantages in terms of funding, weapons, and manpower. One of the few areas where Ukraine has managed to stay consistently ahead of Russia is in the use of innovative military technologies.

Today’s Ukraine is often described as a testing ground for new military technologies, but it is important to stress that Ukrainians are active participants in this process who are in many instances leading the way with new innovations. The scale of Russia’s invasion and the intensity of the fighting mean that concepts can often go from the drawing board to the battlefield in a matter of months or sometimes even days. Luckily, Ukraine has the tech talent and flexibility to make the most of these conditions.

With the war now entering its second year, it is clear that military tech offers the best solutions to the threats created by Russia’s invasion. After all, success in modern warfare depends primarily on data and technology, not on the number of 1960s tanks you can deploy or your willingness to use infantry as cannon fodder.

Russian preparations for the current full-scale invasion of Ukraine have been underway for much of the past two decades and have focused on traditional military thinking with an emphasis on armor, artillery, and air power. In contrast, the rapidly modernizing Ukrainian military has achieved a technological leap in less than twelve months. Since the invasion began, Ukraine has demonstrated a readiness to innovate that the more conservative Russian military simply cannot match.


Modern weapons supplied by Ukraine’s international partners have played a crucial role in the Ukrainian military’s battlefield victories during the first year of the war. Western countries have likewise supported Ukraine with a range of tech solutions and assistance. At the same time, Ukrainians have repeatedly demonstrated their ability to develop and adapt new technologies suited to the specific circumstances of Russia’s ongoing invasion. Ukraine has used everything from drones and satellite imagery to artificial intelligence and situational awareness tools in order to inflict maximum damage on Russian forces while preserving the lives of Ukrainian service personnel and civilians.

Drones deserve special attention as the greatest game-changers of Russia’s war in Ukraine. Thanks to the widespread and skillful use of air reconnaissance drones, the Ukrainian military has been able to monitor vast frontline areas and coordinate artillery. Meanwhile, strike drones have made it possible to hit enemy positions directly.

The critical role of drones on the battlefield has helped fuel a wartime boom in domestic production. Over the past six months, the number of Ukrainian companies producing UAVs has increased more than fivefold. This expansion will continue. The full-scale Russian invasion of Ukraine is fast evolving into the world’s first war of robots. In order to win, Ukraine needs large quantities of drones in every conceivable category.

This helps to explain the thinking behind the decision to launch the Army of Drones initiative. This joint project within the framework of the UNITED24 fundraising platform involves the General Staff of the Ukrainian Armed Forces, the State Special Communications Service, and the Ministry of Digital Transformation. Within the space of six months, the Army of Drones initiative resulted in the acquisition of over 1,700 drones worth tens of millions of dollars. This was possible thanks to donations from individuals and businesses in 76 countries.

Ukraine is currently developing its own new types of drones to meet the challenges of the Russian invasion. For example, Ukraine is producing new kinds of naval drones to help the country guard against frequent missile attacks launched from Russian warships. Ukrainian tech innovators are making significant progress in the development of maritime drones that cost hundreds of thousands of dollars and can potentially deter or disable warships costing many millions.

Ukrainian IT specialists are creating software products to enhance the wartime performance of the country’s armed forces. One good example is Delta, a comprehensive situational awareness system developed by the Innovation Center within Ukraine’s Defense Ministry. This tool could best be described as “Google Maps for the military.” It provides real-time views of the battlefield in line with NATO standards by integrating data from a variety of sources including aerial reconnaissance, satellite images, and drone footage.

Such systems allow the Ukrainian military to become increasingly data-driven. This enables Ukrainian commanders to adapt rapidly to circumstances and change tactics as required. The system saves lives and ammunition while highlighting potential opportunities for Ukraine to exploit. This approach has already proven its effectiveness in the defense of Kyiv and during the successful counteroffensives to liberate Kharkiv Oblast and Kherson.

Ukraine has also launched a special chatbot that allows members of the public to report on the movements of enemy troops and military hardware. Integrated within the widely used Diia app, this tool has attracted over 460,000 Ukrainian users. The reports they provide have helped to destroy dozens of Russian military positions along with tanks and artillery.

In addition to developing its own military technologies, Ukraine has also proven extremely adept at taking existing tech solutions and adapting them to wartime conditions. One prominent example is Starlink, which has changed the course of the war and become part of Ukraine’s critical infrastructure. Satellite communication is one of Ukraine’s competitive advantages, providing connections on the frontlines and throughout liberated regions of the country while also functioning during blackouts. Since the start of the Russian invasion, Ukraine has received over 30,000 Starlink terminals.

Ukraine’s effective use of military technologies has led some observers to suggest that the country could become a “second Israel.” This is a flattering comparison, but in reality, Ukraine has arguably even greater potential. Within the next few years, Ukraine is on track to become a nation with top tier military tech solutions.

Crucial decisions setting Ukraine on this trajectory have already been made. In 2023, efforts will focus on the development of a military tech ecosystem with a vibrant startup sector alongside a strong research and development component. There are already clear indications of progress, such as the recent creation of strike drone battalions within the Ukrainian Armed Forces.

The war unleashed by Russia in February 2022 has now entered its second year. Putin had expected an easy victory. Instead, his faltering invasion has highlighted Ukraine’s incredible bravery while also showcasing the country’s technological sophistication. Ukrainians have demonstrated their ability to defeat one of the world’s mightiest armies using a combination of raw courage and modern innovation. This remarkable success offers lessons for military strategy and security policy that will be studied for decades to come.

Mykhailo Fedorov is Ukraine’s Deputy Prime Minister and Minister of Digital Transformation.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


A parallel terrain: Public-private defense of the Ukrainian information environment https://www.atlanticcouncil.org/in-depth-research-reports/report/a-parallel-terrain-public-private-defense-of-the-ukrainian-information-environment/ Mon, 27 Feb 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=615692 The report analyzes Russia’s continuous assaults against the Ukrainian information environment, and examines how Russian offensives and Ukrainian defense both move through this largely privately owned and operated environment. The report highlights key questions that must emerge around the growing role that private companies play in conflict.

The post A parallel terrain: Public-private defense of the Ukrainian information environment appeared first on Atlantic Council.


Executive summary

In the year since the Russian invasion of Ukraine, the conventional assault and advances into Ukrainian territory have been paralleled by a simultaneous invasion of the Ukrainian information environment. This environment, composed of cyber infrastructure, both digital and physical, and the data, networks, and ideas that flow through and across it, is more than a domain through which the combatants engage or a set of tools by which combatants interact—it is a parallel territory that Russia is intent on severing from the global environment and claiming for itself.

Russian assaults on the Ukrainian information environment are conducted against, and through, largely privately owned infrastructure, and Ukrainian defense in this space is likewise bound up in cooperative efforts with those infrastructure owners and other technology companies providing aid and assistance. The role of private companies in this conflict seems likely to grow, along with the scale, complexity, and criticality of the information infrastructure they operate.

Examining and mitigating the risks related to the involvement of private technology companies in the war in Ukraine is crucial. Looking forward, the United States government must also examine the same questions with regard to its own security and defense:

  1. What is the complete incentive structure behind a company’s decision to provide products or services to a state at war?
  2. How dependent are states on the privately held portions of the information environment, including infrastructure, tools, knowledge, data, skills, and more, for their own national security and defense?
  3. How can the public and private sectors work together better as partners to understand and prepare these areas of reliance during peace and across the continuum of conflict in a sustained, rather than ad hoc, nature?

Russia’s war against Ukraine is not over and similar aggressions are likely to occur in new contexts and with new actors in the future. By learning these lessons now and strengthening the government’s ability to work cooperatively with the private sector in and through the information space, the United States will be more effective and resilient against future threats.

Introduction

Russia’s invasion of Ukraine in 2022 held none of the illusory cover of its 2014 operation; instead of “little green men” unclaimed by Moscow, Putin built up his forces on Ukraine’s border for the entire international community to see. His ambitions were clear: To remove and replace the elected government of Ukraine with a figurehead who would pull the country back under Russia’s hold, whether through literal absorption of the state or by subsuming the entire Ukrainian population under Russia’s political and information control. In the year since the Russian invasion, Ukraine’s defense has held back the Russian war machine with far greater strength than many thought possible in the early months of 2022. President Zelenskyy, the Ukrainian government, and the Ukrainian people have repeatedly repelled Russian attempts to topple the state, buttressed in part by the outpouring of assistance from not just allied states, but also local and transnational private sector companies.

Amidst the largest conventional land war in Europe since the fall of the Third Reich, both Russia and Ukraine have directed considerable effort toward the conflict’s information environment, defined as the physical and digital infrastructure over and through which information moves, the tools used to interact with that information, and information itself. This is not only a domain through which combatants engage, but a parallel territory that the Kremlin seeks to contest and claim. Russian efforts in this realm, to destroy or replace Ukraine’s underpinning infrastructure and inhibit the accessibility and reach of infrastructure and tools within the environment, are countered by a Ukrainian defense that prioritizes openness and accessibility.

The information environment, and all the components therein, is not a state- or military-dominated environment; it is largely owned, operated, and populated by private organizations and individuals around the globe. The Ukrainian information environment, referring to Ukrainian infrastructure operators, service providers, and users, is linked to and part of a global environment of state and non-state actors where the infrastructure and the terrain are largely private. Russian operations within the Ukrainian information environment are conducted against, and through, this privately owned infrastructure, and the Ukrainian defense is likewise bound up in cooperative efforts with those infrastructure owners and other technology companies that are providing aid and assistance. These efforts have contributed materially, and in some cases uniquely, to Ukraine’s defense.

The centrality of this environment to the conduct of this war raises important questions about the degree to which states and societies are dependent on information infrastructure and functionalities owned and operated by private actors, and especially transnational private actors. Although private sector involvement in the war in Ukraine has generally been positive, the fact that the conduct of war and other responsibilities in the realm of statehood are reliant on private actors leads to new challenges for these companies, for the Ukrainian government, and for the United States and allies.

The United States government must improve its understanding of, and facility for, joint public-private action to contest over and through the information environment. The recommendations in this report are intended to facilitate the ability of US technology companies to send necessary aid to Ukraine, ensure that the US government has a complete picture of US private-sector involvement in the war in Ukraine, and contribute more effectively to the resilience of the Ukrainian information environment. First, the US government should issue a directive providing assurance and clarification as to the legality of private sector cyber, information, capacity building, and technical aid to Ukraine. Second, a task force pulling from agencies and offices across government should coordinate to track past, current, and future aid from the private sector in these areas to create a better map of US collaboration with Ukraine across the public and private sectors. Third, the US government should increase its facilitation of private technology aid by providing logistical and financial support.

These recommendations, focused on Ukraine’s defense, are borne of and provoke larger questions that will only become more important to tackle. The information environment and attempts to control it have long been a facet of conflict, but the centrality of privately owned and operated technology—and the primacy of some private sector security capabilities in relation to all but a handful of states—pose increasingly novel challenges to the United States and allied policymaking communities. Especially in future conflicts, the risks associated with private sector action in defense of, or directly against, a combatant could be significantly greater and multifaceted, rendering existing cooperative models insufficient.

The Russian information offensive

The Russian Federation Ministry of Foreign Affairs defines information space—of which cyberspace is a part—as “the sphere of activity connected with the formation, creation, conversion, transfer, use, and storage of information and which has an effect on individual and social consciousness, the information infrastructure, and information itself.”1 Isolating the Ukrainian information space is key to both the short- and long-term plans of the Russian government. In the short term, the Kremlin pursues efforts to control both the flow and content of communications across the occupied areas.2 In the longer term, occupation of the information environment represents an integral step in Russian plans to occupy and claim control over the Ukrainian population.

In distinct opposition to the global nature of the information environment, over the past decade or so, the Kremlin has produced successive legislation “to impose ‘sovereignty’ over the infrastructure, content, and data traversing Russia’s ‘information space,’” creating a sectioned-off portion of the internet now known as RuNet.3 Within this space, the Russian government has greater control over what information Russian citizens see and a greater ability to monitor what Russian citizens do online.4 This exclusionary interpretation is an exercise in regime security against what the Kremlin perceives as constant Western information warfare against it.5 As Gavin Wilde, senior fellow with the Carnegie Endowment for International Peace, writes, the Russian government views the information environment “as an ecosystem to be decisively dominated.”6

To the Kremlin, domination of the information environment in Ukraine is an essential step toward pulling the nation into its fold and under its control. Just as Putin views information domination as critical to his regime’s exercise of power within Russia, in Ukraine, Russian forces systematically conduct offensives against the Ukrainian information environment in an attempt to create a similar model of influence and control that would further enable physical domination. This strategy is evident across the Kremlin’s efforts to weaken the Ukrainian state for the last decade at least. In the 2014 and 2022 invasions, occupied, annexed, and newly “independent” regions of Ukraine were variously cut off from the wider information space and pulled into the restricted Russian information space.  

The Crimean precedent – 2014 

The Russian invasion of Ukraine did not begin in 2022, but in 2014. Examining this earlier Russian incursion illustrates the pattern of Russian offensive behavior in and through the information environment going back nearly a decade—a combination of physical, cyber, financial, and informational maneuvers that largely target or move through private information infrastructure. In 2014, although obfuscated behind a carefully constructed veil of legitimacy, Russian forces specifically targeted Ukrainian information infrastructure to separate the Crimean population from the Ukrainian information environment, and thereby the global information environment, and filled that vacuum with Russian infrastructure and information. 

The Russian invasion of Crimea and eastern Ukraine in 2014 was a direct response to the months-long Euromaidan Revolution, which took place across Ukraine in protest of then-President Viktor Yanukovych’s decision to spurn closer relations with the European Union and ignore growing calls to counter Russian influence and corruption within the Ukrainian government. These protests were organized, mobilized, and sustained partially through coordination, information exchange, and message amplification over social media sites like Facebook, Twitter, YouTube, and Ustream—as well as traditional media.7 In February 2014, after Yanukovych fled to Russia, the Ukrainian parliament established a new acting government and announced that elections for a new president would be held in May. Tensions immediately heightened as Russian forces began operating in Crimea with the approval of the Federal Assembly of Russia at the request of “President” Yanukovych, although Putin denied that they were anything other than “local self-defense forces.”8 On March 21, Putin signed the annexation of Crimea.9

During the February 2014 invasion of Crimea, the seizure and co-option of Ukrainian physical information infrastructure was a priority. Reportedly, among the first targets of Russian special forces was the Simferopol Internet Exchange Point (IXP), a network facility that enables internet traffic exchange.10 Ukraine’s state-owned telecommunications company Ukrtelecom reported that armed men seized its offices in Crimea and tampered with fiber-optic internet and telephone cables.11 Following the raid, the company lost the “technical capacity to provide connection between the peninsula and the rest of Ukraine and probably across the peninsula, too.”12 Around the same time, the head of the Security Service of Ukraine (SBU), Valentyn Nalivaichenko, reported that the mobile phones of Ukrainian parliament members, including his own, were blocked from connecting through Ukrtelecom networks in Crimea.13

Over the next three years, and through the “progressive centralization of routing paths and monopolization of Internet Service market in Crimea … the topology of Crimean networks has evolved to a singular state where paths bound to the peninsula converge to two ISPs (Rosetelecom and Fiord),” owned and operated by Russia.14 Russian forces manipulated the Border Gateway Protocol (BGP)—the routing system that connects user traffic flowing from ISPs to the wider internet—modifying routes to force Crimean internet traffic through Russian systems, “drawing a kind of ‘digital frontline’ consistent with the military one.”15 Residents of Crimea found their choices increasingly limited until their internet service could only route through Russia, instead of Ukraine, subject to the same level of censorship and internet controls as in Russia. The Russian Federal Security Service (FSB) monitored communications from residents of Crimea, both within the peninsula and with people in Ukraine and beyond.16 Collaboration between ISPs operating in Crimea through Russian servers and the FSB appears to be a crucial piece of this wider monitoring effort. This claim was partially confirmed by a 2018 Russian decree that forbade internet providers from publicly sharing any information regarding their cooperation with “the authorized state bodies carrying out search and investigative activities to ensure the security of the Russian Federation.”17
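The mechanics of this rerouting follow from how BGP selects among competing routes: routers prefer the most specific advertised prefix, so an operator that announces more-specific prefixes for a region's address space can pull that region's traffic onto its own network. A minimal longest-prefix-match sketch, using only Python's standard `ipaddress` module and purely hypothetical prefixes and provider names:

```python
import ipaddress

def best_route(routes, destination):
    """Return the next hop for a destination using longest-prefix match.

    routes: list of (prefix, next_hop) tuples, e.g. ("203.0.113.0/24", "AS-A").
    """
    dest = ipaddress.ip_address(destination)
    matches = [(prefix, next_hop) for prefix, next_hop in routes
               if dest in ipaddress.ip_network(prefix)]
    if not matches:
        return None
    # The most specific matching prefix (longest mask) wins route selection.
    return max(matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)[1]

# Hypothetical prefixes and provider names, for illustration only.
routes = [("203.0.113.0/24", "ukrainian-upstream")]
print(best_route(routes, "203.0.113.10"))  # -> ukrainian-upstream

# An actor injecting a more-specific announcement captures the same traffic.
routes.append(("203.0.113.0/25", "diverted-path"))
print(best_route(routes, "203.0.113.10"))  # -> diverted-path
```

Real BGP best-path selection also weighs attributes such as local preference and AS-path length; this sketch isolates only the prefix-length rule that makes more-specific announcements so effective at diverting traffic.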

From March to June 2014, Russian state-owned telecom company Rostelecom began and completed construction of the Kerch Strait cable, measuring 46 kilometers (about 28.5 miles) and costing somewhere between $11 and $25 million, to connect the Crimean internet with the Russian RuNet.18 Rostelecom, using a local agent in Crimea called Miranda Media, became the main transit network for several Crimean internet service providers (ISPs), including KCT, ACS-Group, CrimeaCom, and CRELCOM, in a short period of time.19 The transition of customers from the Ukrainian company Datagroup to Russian ISPs was slower, but the number of Datagroup customers in Crimea nonetheless decreased greatly throughout 2014. According to one ISP interviewed by Romain Fontugne, Ksenia Ermoshina, and Emile Aben, “the Kerch Strait cable was used first of all for voice communication … The traffic capacity of this cable was rather weak for commercial communications.”20 But by the end of 2017, remnant usage of Ukrainian ISPs had virtually disappeared, following the completion of a second, better internet cable through the Kerch Strait and a series of restrictions placed on Russian social media platforms, news outlets, and a major search engine by Ukrainian President Petro Poroshenko.21 The combination of the new restrictions and the improved service of Russian ISPs encouraged more Crimeans to move away from Ukrainian ISPs.

Russia’s efforts to control the information environment within Crimea, and the Russian government’s ability to monitor communications and restrict access to non-Russian-approved servers, severely curtailed freedom of expression and belief—earning the region zero out of four in this category from Freedom House.22 Through this physical, formerly private information infrastructure, Russia was able to largely take control of the information environment within Crimea.

A parallel occupation – 2022 

Digital information infrastructure 

Just as in 2014, one of the first priorities of invading Russian forces in 2022 was the assault on key Ukrainian information infrastructure, including digital infrastructure. Before, during, and following the invasion, Russian and Russian-aligned forces targeted Ukrainian digital infrastructure through cyber operations ranging in type, target, and sophistication. Through some combination of Ukrainian preparedness, partner intervention, and Russian planning shortfalls, among other factors, large-scale cyber operations disrupting Ukrainian critical infrastructure, such as those seen previously with BlackEnergy in 2015 and NotPetya in 2017, did not materialize.23 This could be because such cyber operations require significant time and resources, and similar ends can be more cheaply achieved through direct, physical means. Russian cyber operators, however, have not been idle.

Preceding the physical invasion, there was a spate of activity attributed to both Russian and Russian-aligned organizations targeting a combination of state and private organizations.24 From January 13 to 14, for example, hackers briefly took control of seventy Ukrainian government websites, including those of the Ministries of Defense and Foreign Affairs, adding threatening messages to the top of these official sites.25 The following day, January 15, Microsoft’s Threat Intelligence Center reported the discovery of wiper malware, disguised as ransomware, in dozens of Ukrainian government systems, including agencies which “provide critical executive branch or emergency response function,” and an information technology firm that services those agencies.26 A month later, on February 15, Russian hackers targeted several websites with distributed denial of service (DDoS) attacks, forcing Ukrainian defense ministry and armed forces websites, as well as those of PrivatBank and Oschadbank, offline.27 Around the same time, according to Microsoft’s special report on Ukraine, “likely” Russian actors were discovered in the networks of unidentified critical infrastructure in Odessa and Sumy.28 The day before the invasion, cybersecurity companies ESET and Symantec reported that a new destructive wiper was spreading across Ukrainian, Latvian, and Lithuanian networks, as a second round of DDoS attacks again took down a spate of government and financial institution websites.29 This activity centered on information: defacements sent a clear threat to the Ukrainian government and population, DDoS attacks impaired accurate communication, and wiper malware degraded Ukrainian data, while also gaining access to Ukrainian data for Russia.
Although many of these operations targeted Ukrainian government networks, the attacks moved through or against privately operated infrastructure and, notably, the first public notification and detailing of several of these operations was undertaken by transnational technology companies.  

After February 24, Russian cyber activity continued, and the targets included a number of private information infrastructure operators. A March hack of Ukrtelecom—Ukraine’s largest landline operator, which also provides internet and mobile services to civilians and the Ukrainian government and military—resulted in a collapse of the company’s network to just 13 percent capacity, the most severe disruption in service the firm recorded since the invasion began.30 Another such operation targeted Triolan—a Ukrainian telecommunications provider—on February 24, in tandem with the physical offensive, and a second time on March 9. These incursions on the Triolan network took down key nodes and caused widespread service outages. Following the March 9 attack, the company was able to restore service, but these efforts were complicated by the need to physically access some of the equipment located in active conflict zones.31 These attacks against Ukraine-based information infrastructure companies caused service outages concurrent with the physical invasion and afterwards, restricting communications among Ukrainians and impeding the population’s ability to access current and truthful information.

This unacceptable cyberattack is yet another example of Russia’s continued pattern of irresponsible behaviour in cyberspace, which also formed an integral part of its illegal and unjustified invasion of Ukraine.1

Council of the European Union

These types of operations, however, were not restricted to Ukraine-based information infrastructure. A significant opening salvo in Russia’s invasion was a cyber operation directed against ViaSat, a private US-based satellite internet company that provides services to users throughout the world, including the Ukrainian military.32 Instead of targeting the satellites in orbit, Russia targeted the modems in ViaSat’s KA-SAT satellite broadband network that connected users with the internet.33 Specifically, Russia exploited a “misconfiguration in a VPN [virtual private network] appliance to gain remote access to the trusted management segment of the KA-SAT network.”34 From there, the attackers were able to move laterally through the network to the segment used to manage and operate the broader system.35 They then “overwrote key data in flash memory on the modems,” making it impossible for the modems to access the broader network.36 Overall, the effects of the hack were short-lived, with ViaSat reporting the restoration of connectivity within a few days after shipping approximately 30,000 new modems to affected customers.37

SentinelOne, a cybersecurity firm, identified the malware used to wipe the modems and routers of the information they needed to operate.38 The firm assessed “with medium-confidence” that AcidRain, the malware used in the attack, had “developmental similarities” with an older malware, VPNFilter, that the Federal Bureau of Investigation and the US Department of Justice have previously linked to the Russian government.39 The United States, United Kingdom, and European Union all subsequently attributed the ViaSat hack to Russian state-backed actors.40

The effectiveness of the operation is debated, although the logic of the attack is straightforward: Russia wanted to constrain, or preferably eliminate, an important channel of communication for the Ukrainian military during the initial stages of the invasion. Traditional, land-based radios, which the Ukrainian military relies on for most of its communications, only work over a limited geographic range, making it more difficult to use advanced, long-range weapons systems.41 Landline and conventional telephony should be expected to suffer outages during the opening phases of a war and to struggle to keep up with rapidly moving forces.

Initially, it was widely reported that the Russian strike on ViaSat was effective. On March 15, a senior Ukrainian cybersecurity official, Viktor Zhora, was quoted saying that the attack on ViaSat caused “a really huge loss in communications in the very beginning of the war.”42 When asked follow-up questions about his quote, Zhora said at the time that he was unable to elaborate, leading journalists and industry experts to believe that the attack had impacted the Ukrainian military’s ability to communicate.43 However, several months later, on September 26, Zhora revised his initial comments, stating that the hack would have impacted military communications only if satellite communications had been the Ukrainian military’s principal medium of communication. Instead, Zhora stated, the Ukrainian military relies on landlines for communication, with satellites as a backup method. He went on to say that “in the case land lines were destroyed, that could be a serious issue in the first hours of war.”44 The tension, and potential contradictions, in Zhora’s comments underline the inherent complications of analyzing cyber operations during war: long-term consequences can be difficult to infer from short-term effects, and countries actively seek to control the narratives surrounding conflict.

The effectiveness of the ViaSat hack boils down to how the Ukrainian military communicates, and how adaptable it was in the early hours of the invasion. However, it is apparent how such a hack could impact military effectiveness. If Russia, or any other belligerent, was able to simultaneously disrupt satellite communications while also jamming or destroying landlines, forces on the frontlines would be at best poorly connected with their superiors. In such a scenario, an army would be cut off from commanders in other locations and would not be able to report back or receive new directives; they would be stranded until communications could be restored.  

The ViaSat hack had a military objective: to disrupt Ukrainian military access to satellite communications. But the effects were not limited to this objective. The operation had spillover effects that rippled across Europe. In Germany, nearly 6,000 wind turbines were taken offline, with roughly 2,000 of those turbines remaining offline for nearly a month after the initial hack due to the loss of remote connectivity.45 In France, modems used by emergency services vehicles, including firetrucks and ambulances, were also affected.46

ViaSat is not a purely military target. It is a civilian firm that counts the Ukrainian military as a customer. The targeting of civilian infrastructure with dual civilian and military capability and use has occurred throughout history and has been a center of debate in international law, especially when there are cross-border spillover effects in non-combatant countries. International humanitarian law requires the attacker to target only military objects, defined as objects “whose total or partial destruction, capture or neutralization, in the circumstances ruling at the time, offers a definite military advantage,” and the principle of proportionality requires that the harm caused be proportional to the military gain foreseen by the operation.47 In practice, however, the attacker determines whether it deems a target to be a military object and a beneficial target and, therefore, what is legitimate. Konstantin Vorontsov, the head of the Russian delegation to the United Nations, attempted to justify Russian actions in October 2022 by saying that the use of civilian space infrastructure to aid the Ukrainian war effort may be a violation of the Outer Space Treaty, thereby rendering this infrastructure a legitimate military target.48 Operations like that against ViaSat are likely to be the new norm in modern warfare. As Mauro Vignati, adviser on new digital technologies of warfare at the Red Cross, said in November 2022, insofar as private companies own and operate the information infrastructure of the domain, including infrastructure acting as military assets, “when war start[s], those companies, they are inside the battlefield.”49

Physical information infrastructure 

In February 2022, as Russian forces moved to seize airfields and key physical assets in Ukraine, they simultaneously assaulted the physical information infrastructure operating within and beneath the Ukrainian information environment. Russian forces targeted this infrastructure, largely privately operated, by taking control of assets where possible and destroying them where not, including through a series of Russian air strikes targeting Ukrainian servers, cables, and cell phone towers.50 As of June 2022, about 15 percent of Ukrainian information infrastructure had been damaged or destroyed; by July, 12.2 percent of homes had lost access to mobile communication services, 11 percent of base stations for mobile operators were out of service, and approximately 20 percent of the country’s telecommunications infrastructure was damaged or destroyed.51 By August “the number of users connecting to the Internet in Ukraine [had] shrunk by at least 16 percent nationwide.”52

In some areas of Ukraine, digital blackouts were enforced by Russian troops to cut the local population off from the highly contested information space. In Mariupol, the last cell tower connecting the city with the outside world was tirelessly tended by two Kyivstar engineers, who kept it alive with backup generators that they manually refilled with gasoline. Once the Russians entered the city, however, the Ukrainian soldiers who had been protecting the cell tower location left to engage with the enemy, leaving the Kyivstar engineers alone to tend to their charge. For three days the engineers withstood the bombing of the city until March 21, when Russian troops disconnected the tower and it went silent.53

Russian forces coerced occupied Ukrainian territories onto Russian ISPs, once again through Rostelecom’s local agent Miranda Media, and onto Russian mobile service providers.54 Information infrastructure in Ukraine is made up of overlapping networks of mobile service providers and ISPs, a legacy of the country’s complicated post-Soviet modernization process. This complexity may have been a boon for its resilience. Russian forces, observed digital-rights researcher Samuel Woodhams, “couldn’t go into one office and take down a whole region … There were hundreds of these offices and the actual hardware was quite geographically separated.”55 Across eastern and southern Ukraine, including Kherson, Melitopol, and Mariupol, the Russians aimed to subjugate the physical territory, its population, and the Ukrainian information space. In Kherson, Russian forces entered the offices of a Ukrainian ISP and, at gunpoint, forced staff to transfer control to them.56

Russian bombardment of telecommunications antennas in Kyiv (Attribution: Mvs.gov.ua)

Routing the internet and communications access of occupied territories through Russia meant that Moscow could suppress communications to and from these occupied areas, especially through social media and Ukrainian news sites, sever access to essential services in Ukraine, and flood the populations with its own propaganda, as it had demonstrated in Crimea in 2014. Moving forward, Russia could use this dependency to “disconnect, throttle, or restrict access to the internet” in occupied territories, cutting off the occupied population from the Ukrainian government and the wider Ukrainian and international community.57

The Kremlin’s primary purpose in the invasion of Ukraine was, and is, to remove the Ukrainian government and, likely, install a pro-Russian puppet government, bringing an independent Ukraine to an end.58 Isolating the information environment of occupied populations, in concert with anti-Ukrainian government disinformation, such as the multiple false allegations that President Zelenskyy had fled the country and abandoned the Ukrainian people,59 was therefore a means to sway the allegiances, or at least dilute the active resistance, of the Ukrainian people.60 Without connectivity to alternative outlets, the occupying Russians could promote false and largely uncontested claims about the progress of the war. In early May 2022, for example, when Kherson lost connectivity for three days, the deputy of the Kherson Regional Council, Serhiy Khlan, reported that the Russians “began to spread propaganda that they were in fact winning and had captured almost all of Mykolaiv.”61

Russia used its assault on the information environment to undermine the legitimacy of the Ukrainian government and its ability to fulfill its duties to the Ukrainian people. Whether through complete connectivity blackouts or through the restrictions imposed by Russian networks, the Russians blocked communications from the Ukrainian government to occupied populations—not least President Zelenskyy’s June 13, 2022 address, intended above all for those very populations, in which he promised to liberate all occupied Ukrainian land and reassured those populations that they had not been forgotten. Zelenskyy acknowledged the Russian barrier between himself and Ukrainians in occupied territories, saying, “They are trying to make people not just know nothing about Ukraine… They are trying to make them stop even thinking about returning to normal life, forcing them to reconcile.”62

Isolating occupied populations from the Ukrainian information space is intended, in large part, said Stas Prybytko, the head of mobile broadband development within the Ukrainian Ministry of Digital Transformation, to “block them from communicating with their families in other cities and keep them from receiving truthful information.”63 Throughout 2022, much of what the international community knew about the war came—through Twitter, TikTok, Telegram, and more—from Ukrainians themselves. From videos of indiscriminate Russian shelling of civilian neighborhoods to recordings tracking Russian troop movements, Ukrainians used their personal devices to capture and communicate the progress of the war directly to living rooms, board rooms, and government offices around the world.64 The power of this distributed information collection and open-source intelligence relies upon mobile and internet access. The accounts shared after Ukrainian towns and cities were liberated from Russian occupation lay bare just how much suffering, arrest, torture, and murder was kept hidden from international view by the purposeful isolation of the information environment and the constant surveillance of Ukrainians’ personal devices.65 The war in Ukraine has highlighted the growing impact of distributed open-source intelligence during the conduct of war, carried out by civilians in Ukraine and by the wider open-source research community through various social media and messaging platforms.66

Russian operations against digital infrastructure companies, especially transnational ones, can mostly be categorized as disruption, degradation, and information gathering, with Russian or Russian-aligned hackers moving in and through the Ukrainian information environment. The attacks against Ukrainian physical infrastructure, however, are of a slightly different character. Invading forces employed physically mediated cyberattacks, a method defined by Herb Lin as “attacks that compromise cyber functionality through the use of or the threat of physical force,” to pursue the complete destruction or the seizure and occupation of this infrastructure.67 Both approaches begin with the same purpose: to create a vacuum of information between the Ukrainian government, the Ukrainian people, and the global population, effectively severing the connection between the Ukrainian information environment and the global environment. But the seizure of this infrastructure goes a step further: to occupy the Ukrainian information environment and pull its infrastructure and its people into an isolated, controlled Russian information space.

Reclaiming the Ukrainian information environment 

Preparation of the environment 

The Russian assault on the Ukrainian information environment is far from unanswered. Russian efforts have been countered by the Ukrainian government in concert with allied states and with technology companies located both within and outside Ukraine. Russia’s aim to pull occupied Ukrainian territory onto Russian networks to be controlled and monitored has been well understood, and Ukraine has been hardening its information infrastructure since the initial 2014 invasion. Ukraine released its Cyber Security Strategy in 2016, which laid out the government’s priorities in this space, including defense against the range of active cyber threats the country faces, with an emphasis on the “cyber protection of information infrastructure.”68 The government initially focused on centralizing its networks in Kyiv to make it more difficult “for Russian hackers to penetrate computers that store critical data and provide services such as pension benefits, or to use formerly government-run networks in the occupied territories to launch cyberattacks on Kyiv.”69

As part of its digitalization and security efforts, the Ukrainian government also sought out new partners, both public and private, to build and bolster its threat detection and response capabilities. Before and since the 2022 invasion, the Ukrainian government has worked with partner governments and an array of technology companies around the world to create resilience through increased connectivity and digitalization. 

Bolstering Ukrainian connectivity 

Since the 2014 invasion and annexation of Crimea, Ukraine-serving telecommunications operators have developed plans to prepare for future Russian aggression. Lifecell, the third-largest Ukrainian mobile telephone operator, prepared its network for an anticipated Russian attack: the company shifted its office archives, documentation, and critical network equipment from eastern to western Ukraine, where they would be better insulated from violence, added network redundancy, and increased the coordination and response capabilities of its staff.70 Similarly, Kyivstar and Vodafone Ukraine increased their network bandwidth to withstand extreme demand. In October 2021, these three companies initiated an infrastructure-sharing agreement to expand LTE (Long Term Evolution) networks into rural Ukraine and, in cooperation with the Ukrainian government, expanded the 4G telecommunications network to bring “mobile network coverage to an estimated 91.6 per cent of the population.”71

The expansion and improvement of Ukrainian telecommunications continued through international partnerships as well. Datagroup, for example, announced a $20 million partnership in 2021 with Cisco, a US-based digital communications company, to modernize and expand the bandwidth of its extensive networks.72 Since the February 2022 invasion, Cisco has also worked with the French government to provide over $5 million of secure, wireless networking equipment and software, including firewalls, for free to the Ukrainian government.73

This network expansion is an integral part of the Ukrainian government’s digitalization plans for the country, championed by President Zelenskyy. Rather than the invasion putting an end to these efforts, Deputy Prime Minister and Minister for Digital Transformation Mykhailo Fedorov claimed that during the war “digitalization became the foundation of all our life. The economy continues to work … due to digitalization.”74 The digital provision of government services has created an alternate pathway for Ukrainians to engage in the economy and with their government. The flagship government initiative Diia, launched in February 2020, is a digital portal through which its 21.7 million Ukrainian users can access legal identification, make social services payments, register a business, and even register property damage from Russian missile strikes.75 The Russian advance and the consequent physical destruction and displacement of Ukrainians mean that the ability to provide government services through alternate and resilient means is more essential than ever, placing an additional premium on defending Ukrainian information infrastructure.

Backing up a government 

As Russian forces built up along Ukraine’s borders, Ukrainian network centralization may have increased risk, despite the country’s improved defense capabilities. In preparation for the cyber and physical attacks against the country’s information infrastructure, Fedorov moved to amend Ukrainian data protection laws to allow the government to store and process data in the cloud and worked closely with several technology companies, including Microsoft, Amazon Web Services, and Google, to effect the transfer of critical government data to infrastructure hosted outside the country.76 Cloud computing describes “a collection of technologies and organizational processes which enable ubiquitous, on-demand access to a shared pool of configurable computing resources.”77 Cloud computing is dominated by the four hyperscalers—Amazon, Microsoft, Google, and Alibaba—that provide computing and storage at enterprise scale and are responsible for the operation and security of data centers all around the world, any of which could host customer data according to local laws and regulations.78 

According to its April 2022 Ukraine war report, Microsoft “committed at no charge a total of $107 million of technology services to support this effort” and renewed the relationship in November, promising to ensure that “government agencies, critical infrastructure and other sectors in Ukraine can continue to run their digital infrastructure and serve citizens through the Microsoft Cloud” at a value of about $100 million.79 Amazon and Google have also committed to supporting cloud services for the Ukrainian government, for select companies, and for humanitarian organizations focused on aiding Ukraine.80 Confirming the Ukrainian government’s concerns, Russian missile attacks targeted the Ukrainian government’s main data center in Kyiv soon after the invasion, partially destroying the facility, and cyberattacks aggressively tested Ukrainian networks.81

Unlike other lines of aid provided by the international community to strengthen the defense of the Ukrainian information environment, cloud services are provided only by the private sector.82 While this aid has had a transformative effect on Ukrainian defense, that transformative quality has also raised concerns. Microsoft, in its special report on Ukraine, several times cites its cloud services as one of the determining factors that limited the effect of Russian cyber and kinetic attacks on Ukrainian government data centers, and details how its services, in particular, were instrumental in this defense.83 In the same report, Microsoft claims to be most worried about those states and organizations that do not use cloud services, and provides corroborating data.84 Microsoft and other technology companies offering their services at a reduced rate, or for free, are acting—at least in part—out of a belief in the rightness of the Ukrainian cause. However, they are still private companies with responsibilities to shareholders or board members, and they still must seek profit. Services provided, especially foundational information infrastructure like cloud services, are likely to establish long-term business relationships with the Ukrainian government and potentially with other governments and clients, who see the effectiveness of those services illustrated through the defense of Ukraine.

Mounting an elastic defense  

Working for wireless 

Alongside and parallel to the Ukrainian efforts to defend and reclaim occupied physical territory is the fight for Ukrainian connectivity. Ukrainian telecommunications companies have been integral to preserving connectivity to the extent possible. In March 2022, Ukrainian telecom operators Kyivstar, Vodafone Ukraine, and Lifecell made the decision to provide free national mobile roaming services across mobile provider networks, creating redundancy and resilience in the mobile network to combat frequent service outages.85 The free mobile service provided by these companies is valued at more than UAH 980 million (USD 26.8 million).86 In addition, Kyivstar in July 2022 committed to the allocation of UAH 300 million (about USD 8.2 million) for the modernization of Ukraine’s information infrastructure in cooperation with the Ukrainian Ministry of Digital Transformation.87 The statements that accompanied the commitments from Kyivstar and Lifecell—both headquartered in Ukraine—emphasized each company’s dedication to Ukrainian defense and their role in it, regardless of the short-term financial impact.88 These are Ukrainian companies with Ukrainian infrastructure and Ukrainian customers, and their fate is tied inextricably to the outcome of this war.

As Russian forces advanced and attempted to seize control of information infrastructure, in at least one instance, Ukrainian internet and mobile service employees sabotaged their own equipment first. Facing threats of imprisonment and death from occupying Russians, employees in several Ukrtelecom facilities withstood pressure to share technical network details and instead deleted key files from the systems. According to Ukrtelecom Chief Executive Officer Yuriy Kurmaz, “The Russians tried to connect their control boards and some equipment to our networks, but they were not able to reconfigure it because we completely destroyed the software.”89 Without functional infrastructure, Russian forces struggled to pull those areas onto Russian networks.  

The destruction of telecommunications infrastructure has meant that these and many other areas along the war front are without reliable information infrastructure, either wireless or wired. While the Ukrainian government and a bevy of local and international private sector companies battle for control of on-the-ground internet and communications infrastructure, they have also pursued new pathways to connectivity.

Searching for satellite 

Two days after the invasion, Deputy Prime Minister Fedorov tweeted at Elon Musk, the Chief Executive Officer of SpaceX, that “while you try to colonize Mars — Russia try [sic] to occupy Ukraine! While your rockets successfully land from space — Russian rockets attack Ukrainian civil people! We ask you to provide Ukraine with Starlink stations and to address sane Russians to stand.”90 Just another two days later, Fedorov confirmed the arrival of the first shipment of Starlink stations.91  

Starlink, a network of low-orbit satellites working in constellations operated by SpaceX, relies on satellite receivers no larger than a backpack that are easily installed and transported. Because Russian targeting of cellular towers made communications coverage unreliable, says Fedorov, the government “made a decision to use satellite communication for such emergencies” from American companies like SpaceX.92 Starlink has proven more resilient than any other alternative throughout the war. Due to the low orbit of Starlink satellites, they can broadcast to their receivers at relatively higher power than satellites in higher orbits. There has been little reporting on successful Russian efforts to jam Starlink transmissions, and the Starlink base stations—the physical, earthbound infrastructure that communicates directly with the satellites—are located on NATO territory, ensuring any direct attack on them would be a significant escalation in the war.93
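The power advantage of low orbit follows directly from free-space path loss, which grows with the square of distance. A back-of-the-envelope sketch (illustrative figures only: roughly 550 km for Starlink’s orbital altitude, 35,786 km for geostationary orbit, and a 12 GHz Ku-band downlink are assumptions, not values from this report) shows why a LEO constellation can deliver a much stronger signal to the same receiver:

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB, for distance in km and frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

leo = fspl_db(550, 12)      # approximate Starlink altitude, Ku-band downlink
geo = fspl_db(35_786, 12)   # geostationary orbit, same frequency

advantage_db = geo - leo
print(f"LEO path loss: {leo:.1f} dB")
print(f"GEO path loss: {geo:.1f} dB")
print(f"LEO advantage: {advantage_db:.1f} dB "
      f"(~{10 ** (advantage_db / 10):,.0f}x received signal power)")
```

Roughly a 36 dB difference, i.e., several thousand times more received power for the same transmitter, which is why the small, portable Starlink receivers are harder to drown out or jam than terminals for higher-orbit systems.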

Starlink has been employed across sectors almost since the war began. President Zelenskyy has used the devices himself when delivering addresses to the Ukrainian people, as well as to foreign governments and populations.94 Fedorov has said that sustained missile strikes against energy and communication infrastructure have been effectively countered through the deployment of Starlink devices that can restore connection where it is most needed. He even called the system “an essential part of critical infrastructure.”95   

Starlink has also found direct military applications. The portability of these devices means that Ukrainian troops can often, though not always, stay connected to command elements and peer units while deployed.96 Ukrainian soldiers have also used internet connections to coordinate attacks on Russian targets with artillery-battery commanders.97 Aerorozvidka, a specialist air reconnaissance unit within the Ukrainian military that conducts hundreds of information gathering missions every day, has used Starlink devices in areas of Ukraine without functional communications infrastructure to “monitor and coordinate unmanned aerial vehicles, enabling soldiers to fire anti-tank weapons with targeted precision.”98 Reports have also suggested that a Starlink device was integrated into an unmanned surface vehicle discovered near Sevastopol, potentially used by the Ukrainian military for reconnaissance or even to carry and deliver munitions.99 According to one Ukrainian soldier, “Starlink is our oxygen,” and were it to disappear, “our army would collapse into chaos.”100

The initial package of Starlink devices included 3,667 terminals donated by SpaceX and 1,333 terminals purchased by the United States Agency for International Development (USAID).101 SpaceX initially offered free Starlink service for all the devices, although Musk subsequently walked the offer back, then reversed course again. CNN obtained a letter sent by SpaceX to the Pentagon in September 2022 stating that the company would be unable to continue funding Starlink service in Ukraine. The letter requested that the Pentagon pay what would amount to “more than $120 million for the rest of the year and could cost close to $400 million for the next 12 months.” It also clarified that the vast majority of the 20,000 Starlink devices sent to Ukraine were financed at least in part by outside funders like the United States, United Kingdom, and Polish governments.102

After the letter was sent, but before it became public, Musk got into a Twitter spat with Ukrainian diplomat Andrij Melnyk after Musk wrote a tweet on October 3 proposing terms of peace between Russia and Ukraine. Musk’s proposal included Ukraine renouncing its claims to Crimea and pledging to remain neutral, with the only apparent concession from Russia a promise to ensure water supply in Crimea. The plan was rejected by the public poll Musk included in the tweet, and Melnyk replied and tagged Musk, saying “Fuck off is my very diplomatic reply to you @elonmusk.”103 After CNN released the SpaceX letter to the Pentagon, Musk at first seemingly doubled down on his decision to reduce SpaceX funding. He responded on October 14 to a tweet summarizing the incident, justifying possible reduced SpaceX assistance by stating, “We’re just following his [Melnyk’s] recommendation,” even though the letter was sent before the Twitter exchange. Musk then tweeted the following day, “The hell with it … even though Starlink is still losing money & other companies are getting billions of taxpayer $, we’ll just keep funding Ukraine govt for free.”104 Two days later, in response to a Politico tweet reporting that the Pentagon was considering covering the Starlink service costs, Musk stated that “SpaceX has already withdrawn its request for funding.”105 Musk’s characterization of SpaceX’s contribution to the war effort has sparked confusion and reprimand, with his public remarks often implying that his company is entirely footing the bill when, in fact, tens of millions of dollars’ worth of terminals and service are being covered by several governments every month.

The Starlink saga, however, was not over yet. Several weeks later, in late October, 1,300 Starlink terminals in Ukraine, purchased in March 2022 by a British company for use in Ukrainian combat-related operations, were disconnected, allegedly due to lack of funding, causing a communications outage for the Ukrainian military.106 Although service was restored, the entire episode eroded confidence in SpaceX as a guarantor of flexible connectivity in Ukraine. In November 2022, Fedorov noted that while Ukraine has no intention of breaking off its relationship with Starlink, the government is exploring working with other satellite communications operators.107 Starlink is not the only satellite communications network of its kind, but its competitors have not yet reached the same level of operation. Satellite communications company OneWeb, based in London with ties to the British military, is only now launching its satellite constellation, after the Russian invasion of Ukraine required the company to change its launch partner from Roscosmos to SpaceX.108 The US Space Development Agency, within the United States Space Force, will launch the first low earth orbit satellites of the new National Defense Space Architecture in March 2023. Other, more traditional satellite companies cannot provide the same flexibility as Starlink’s small, transportable receivers.

UA Support Forces use Starlink (Attribution: Mil.gov.ua)

With the market effectively cornered for the moment, SpaceX can dictate the terms, including the physical bounds, of Starlink’s operations, thereby wielding immense influence on the battlefield. Starlink devices used by advancing Ukrainian forces near the front, for example, have reported inconsistent reliability.109 Indeed, CNN reported on February 9 that this bounding was a deliberate attempt to separate the devices from direct military use; as SpaceX President Gwynne Shotwell explained, “our intent was never to have them use it for offensive purposes.”110 The bounding decision, similar to the rationale behind the company’s decision to refuse to activate Starlink service in Crimea, was likely made to contain escalation, especially escalation by means of SpaceX devices.111

But SpaceX is not the only satellite company making decisions to bound the area of operation of their products to avoid playing—or being perceived to play—a role in potential escalation. On March 16, 2022, Minister Fedorov tweeted at DJI, a Chinese drone producer, “@DJIGlobal are you sure you want to be a partner in these murders? Block your products that are helping russia to kill the Ukrainians!”112 DJI responded directly to the tweet the same day, saying “If the Ukrainian government formally requests that DJI set up geofencing throughout Ukraine, we will arrange it,” but pointed out that such geofencing would inhibit all users of their product in Ukraine, not just Russians.113
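DJI’s actual geofencing system is proprietary, but the core mechanism it describes—refusing operation inside a defined geographic boundary—is a simple coordinate check. A minimal, purely illustrative sketch (the polygon coordinates and function names below are hypothetical, not DJI’s):

```python
# Illustrative geofencing sketch. The ray-casting point-in-polygon test and the
# rough bounding polygon are hypothetical; DJI's real implementation is proprietary.

def point_in_polygon(lat: float, lon: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: is (lat, lon) inside a polygon of (lat, lon) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Toggle on each edge a ray cast eastward from the point would cross.
        if (lat1 > lat) != (lat2 > lat):
            cross_lon = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < cross_lon:
                inside = not inside
    return inside

# Hypothetical rectangular geofence roughly covering Ukraine (illustrative only).
FENCE = [(44.0, 22.0), (52.5, 22.0), (52.5, 40.5), (44.0, 40.5)]

def takeoff_allowed(lat: float, lon: float) -> bool:
    """Deny takeoff anywhere inside the geofenced region."""
    return not point_in_polygon(lat, lon, FENCE)

print(takeoff_allowed(50.45, 30.52))  # Kyiv: False (inside fence, takeoff blocked)
print(takeoff_allowed(48.85, 2.35))   # Paris: True
```

As DJI’s reply implies, such a check is indiscriminate: any drone inside the polygon is grounded, regardless of who is flying it.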

While Russia continues to bombard the Ukrainian electrical grid, Starlink terminals have grown more expensive for new Ukrainian consumers, increasing from $385 earlier this year to $700, although it is unclear if this price increase also affected government purchasers.114 According to Andrew Cavalier, a technology industry analyst with ABI Research, the indispensability of the devices gives “Musk and Starlink a major head start [against its competitors] that its use in the Russia–Ukraine war will only consolidate.”115 Indeed, the valuation of SpaceX was $127 billion in May 2022, and the company raised $2 billion in the first seven months of 2022.116 For SpaceX, the war in Ukraine has been an impressive showcase of Starlink’s capabilities and has proven the worth of its services to future customers. The company recently launched a new initiative, Starshield, intended to leverage “SpaceX’s Starlink technology and launch capability to support national security efforts. While Starlink is designed for consumer and commercial use, Starshield is designed for government use.”117 It is clear that SpaceX intends to capitalize on the very public success of its Starlink network in Ukraine.

Reclaiming Territory 

The Russian assault is not over, but Ukraine has reclaimed “54 percent of the land Russia has captured since the beginning of the war” and the front line has remained relatively stable since November 2022.118 Videos and reports from reclaimed territory show the exultation of the liberated population. As Ukrainian military forces reclaim formerly occupied areas, the parallel reclamation of the information environment, by or with Ukrainian and transnational information infrastructure operators, follows quickly. 

In newly liberated areas, Starlink terminals are often the first tool for establishing connectivity. In Kherson, the first regional capital to fall to the Russian invasion, reclaimed by Ukrainian troops on November 11, 2022, residents lined up in public spaces to connect to the internet through Starlink.119 The Ministry of Digital Transformation provided Starlink devices to the largest service providers, Vodafone and Kyivstar, to facilitate communication while their engineers repaired the infrastructure necessary for reestablishing mobile and internet service.120 A week after Kherson was recaptured, five Kyivstar base stations were operational and Vodafone had reestablished coverage over most of the city.121

Due to the importance of reclaiming the information space, operators are working just behind Ukrainian soldiers to reconnect populations in reclaimed territories to the Ukrainian and global information environment as quickly as possible, which means working in very dangerous conditions. In the Sumy region, a Ukrtelecom vehicle pulling up to a television tower drove over a land mine, injuring three of the passengers and killing the driver.122 Stanislav Prybytko, the head of the mobile broadband department in the Ukrainian Ministry of Digital Transformation, says, “It’s still very dangerous to do this work, but we can’t wait to do this, because there are a lot of citizens in liberated villages who urgently need to connect.”123 Prybytko and his eleven-person team have been central to the Ukrainian effort to stitch Ukrainian connectivity back together. The team works through a public-private collaboration, coordinating with various government officials and mobile service providers to repair critical nodes in the network and to reestablish communications and connectivity.124 According to Ukrainian government figures, 80 percent of liberated settlements have partially restored internet connection, and more than 1,400 base stations have been rebuilt by Ukrainian mobile operators since April 2022.125

Key Takeaways 

The information environment is a key domain through which this war is being contested. The Russian government has demonstrated for over a decade the importance it places on control of the information environment, both domestically and as part of campaigns to expand the Russian sphere of influence abroad. Yet, despite this Russian focus, the Ukrainian government has demonstrated incredible resilience against physical assaults, cyberattacks, and disinformation campaigns against and within the Ukrainian information environment and has committed to further interlacing government services and digital platforms.  

The centrality of this environment to the conduct of this war means that private actors are necessarily enmeshed in the conflict. As providers of products and services used for Ukrainian defense, these companies are an important part of the buttressing structure of that defense. The centrality of private companies in the conduct of the war in Ukraine brings to light new and increasingly important questions about what it means for companies to act as information infrastructure during wartime, including:  

  1. What is the complete incentive structure behind a company’s decision to provide products or services to a state at war? 
  2. How dependent are states on the privately held portions of the information environment, including infrastructure, tools, knowledge, data, skills, and more, for their own national security and defense?  
  3. How can the public and private sectors work together better as partners to understand and prepare these areas of reliance during peace and across the continuum of conflict in a sustained, rather than ad hoc, nature? 

Incentives 

The war in Ukraine spurred an exceptional degree of cooperation and aid from private companies within Ukraine and from around the globe. Much of the public messaging around the private sector’s assistance to Ukrainian defense centers on the conviction of company leadership and staff that they were compelled by a responsibility to act. This is certainly one factor in their decisions. But the depth of private actor involvement in this conflict demands a more nuanced understanding of the full picture of incentives and disincentives that drive a company’s decision to enter into new, or expand upon existing, business relationships with and in a country at war. What risks, for example, do companies undertake in a war in which Russia has already demonstrated its conviction that private companies are viable military targets? The ViaSat hack was a reminder of the uncertainty that surrounds the designation of dual-use technology, and the impact that such designations have in practice. What role did public recognition play in companies’ decisions to provide products and services, and how might this recognition influence future earnings potential? For example, while their remarks differed in tone, both Elon Musk on Twitter and Microsoft in its special report on Ukraine publicly claimed partial credit for the defense of Ukraine.

As the war continues into its second year, these questions are important to maintaining Ukraine’s cooperation with these entities. With a better understanding of existing and potential incentives, the companies, the United States, and its allies can make the decision to responsibly aid Ukraine much easier.  

Dependencies 

Private companies play an important role in armed conflict, operating much of the infrastructure that supports the information environment through which both state and non-state actors compete for control. The war in Ukraine has illustrated the willingness of private actors, from Ukrainian telecommunications companies to transnational cloud and satellite companies, to participate as partners in the defense of Ukraine. State dependence on privately held physical infrastructure is not unique to the information environment, but state dependence on infrastructure that is headquartered and operated extraterritorially is a particular feature. 

Prior to and throughout the war, the Ukrainian government has coordinated successfully with local telecommunication companies to expand, preserve, and restore mobile, radio, and internet connectivity to its population. This connectivity preserved what Russia was attempting to dismantle—a free and open Ukrainian information environment through which the Ukrainian government and population can communicate and coordinate. The Ukrainian government has relied on these companies to provide service and connectivity, working alongside them before and during the war to improve infrastructure and to communicate priorities. These companies are truly engaging as partners in Ukrainian defense, especially because this information infrastructure is not just a medium through which Russia launches attacks but an environment that Russia is attempting to seize control of. This dependence has not been unidirectional—the companies themselves are inextricably linked to this conflict through their infrastructure, employees, and customers in Ukraine. Each is dependent to some degree on the other and during times of crisis, their incentives create a dynamic of mutual need. 

The Ukrainian government has also relied on a variety of transnational companies through the provision of technology products or services and information infrastructure. As examined in this report, two areas where the involvement of these companies has been especially impactful are cloud services and satellite internet services. Cloud services have preserved data integrity and security by moving information to data centers distributed around the world, outside of Ukrainian territory and under the cyber-protection of those cloud service companies. Satellite services have enabled flexible and resilient connectivity, once again located and run primarily outside of Ukraine. These companies can provide essential services within the information environment and the physical environment of Ukraine, but are not fundamentally reliant on the integrity of the country. This dynamic is heightened by the fact that cloud service providers like Microsoft, Amazon Web Services, and Google, and satellite internet service providers like SpaceX’s Starlink, are operating within a market with global reach and very few competitors. While these companies and others have made the laudable decision to contribute to Ukrainian defense, had they not, there are only a few, if any, other companies with comparable capabilities and infrastructure at scale. Additionally, there is very little Ukraine or even the US government could have done to directly provide the same capabilities and infrastructure.

Coordination 

Built into the discussions around dependency and incentives is the need for government and the private companies who own and operate information infrastructure to coordinate with each other from a more extensive foundation. While coordination with Ukrainian companies and some transnational companies emerged from sustained effort, many instances of private sector involvement were forged on an ad hoc basis and therefore could not be planned in advance. The ad hoc approach can produce rapid results, as seen by Minister Fedorov’s tweet at Elon Musk and receipt of Starlink devices just days later. While this approach has been wielded by the Ukrainian government, and the Ministry for Digital Transformation in particular, to great effect, this very same example illustrates the complexity of transforming ad hoc aid into sustainable partnerships. Sustainability is especially important when states are facing threats outside of open war, across the continuum of insecurity and conflict where many of these capabilities and infrastructures will continue to be relied upon. Security and defense in the information environment require states to work in coordination with a diverse range of local and transnational private actors.

Recommendations 

Key recommendations from this paper ask the US government, in coordination with the Ukrainian government, to better understand the incentives that surround private sector involvement, to delineate states’ dependency on private information infrastructure, and to improve long-term public-private coordination through three pathways: 

  • Define support parameters. Clarify how private technology companies can and should provide aid.
  • Track support. Create a living database to track the patterns of technological aid to Ukraine from US private companies.
  • Facilitate support requests. Add to the resilience of the Ukrainian information environment by facilitating US private aid.

Define support parameters 

Private information infrastructure companies will continue to play a key role in this war. However, there are a number of unresolved questions regarding the decisions these companies are making about if, and how, to provide support to the Ukrainian government to sustain its defense. A significant barrier may be the lack of clarity about the risks of partnership in wartime, which may disincentivize action or may alter existing partnerships. Recent SpaceX statements surrounding the bounding of Starlink use are an example, at least in part, of just such a risk calculus in action. The US government and its allies should release a public directive clarifying how companies can ensure that their involvement is in line with US and international law—especially for dual-use technologies. Reaffirming, with consistent guidelines, how the United States defines civilian participation in times of war will be crucial for ensuring that such actions do not unintentionally legitimize private entities as belligerents and legitimate targets in wartime. At the direction of the National Security Advisor, the US Attorney General and Secretary of State, working through the Office of the Legal Advisor at the State Department, should issue public guidance on how US companies can provide essential aid to Ukraine while avoiding the designation of legitimate military target or combatant under the best available interpretation of prevailing law.

Track support 

While a large amount of support for Ukraine has been given directly by or coordinated through governments, many private companies have started providing technological support directly to the Ukrainian government. Some private companies, especially those with offices or customers in Ukraine, got in touch directly with, or were contacted by, various Ukrainian government offices, often with specific requests depending on the company’s products and services.126 

However, the US government does not have a full and complete picture of this assistance, which limits the ability of US policymakers to track the implications of changing types of support or the nature of the conflict. Policymakers should have access to not only what kind of support is being provided by private US companies, but also the projected period of involvement, what types of support are being requested and denied by companies (in which case, the US government may be able to act as an alternative provider), and what types of support are being supplied by private sector actors without significant government equity or involvement. A more complete mapping of this assistance and its dependency structure would make it possible for policymakers and others to assess its impact and effectiveness. This data, were it or some version of it publicly available, would also help private companies providing the support to better understand how their contributions fit within the wider context of US assistance and to communicate the effect their products or services are having to stakeholders and shareholders. Such information may play a role in a company’s decision to partner or abstain in the future.

The US government should create a collaborative task force to track US-based private sector support to Ukraine. Because of the wide equities across the US government in this area, this team should be led by the State Department’s Bureau of Cyberspace and Digital Policy and include representatives from USAID, the Department of Defense’s Cyber Policy Office, the National Security Agency’s Collaboration Center, and the Cybersecurity and Infrastructure Security Agency’s Joint Cyber Defense Collaborative. This task force should initially focus on creating a picture of public-private support to Ukraine from entities within the United States, but its remit could extend to work with allies and partners, creating a more complete picture of international public-private support.

Facilitate support requests 

Tracking the technical support that is requested, promised, and delivered to the Ukrainian government is an important first step toward gaining a better understanding of the evolving shape of the critical role that the private sector is increasingly playing in conflict. But closer tracking, perhaps by an associated body, could go further by acting as a process facilitator. Government offices and agencies have long been facilitators of private aid, but now states are increasingly able to interact with, and request support from, private companies directly, especially for smaller quantities or more specific products and services. While this pathway can be more direct and efficient, it also requires a near constant churn of request, provision, and renewal actions from private companies and Ukrainian government officials.  

Private organizations have stepped into this breach, including the Cyber Defense Assistance Collaboration (CDAC), founded by Greg Rattray and Matthew Murray, now a part of the US-based non-profit CRDF Global. CDAC works with a number of US private technology companies, as well as the National Security and Defense Council of Ukraine and the Ukrainian think tank Global Cyber Cooperative Center, to match the specific needs of Ukrainian government and state-owned enterprises with needed products and services offered by companies working in coordination.127

The growth and reach of this effort demonstrate the potential impact that a government-housed, or even government-sponsored, mechanism could have in increasing the capacity to facilitate requests from the Ukrainian government, decreasing the number of bureaucratic steps required of Ukrainian government officials while increasing the amount and quality of support they receive. In addition, government facilitation would ease progress toward the previously stated recommendations by building in clarity around what kind of support can be provided and putting facilitation and aid tracking within a single process. As discussed above, this facilitation should start with a focus on US public-private support but could grow to work alongside similar allied efforts. This could include, for example, coordination with the United Kingdom’s Foreign, Commonwealth and Development Office (FCDO) program, which “enables Ukrainian agencies to access the services of commercial cybersecurity companies.”128 Crucially, this task force, helmed by the State Department’s Bureau of Cyberspace and Digital Policy, would act as a facilitator, not a restricting body. Its mission in this task would be to make connections and provide information.

In line with tracking, US government facilitation would enable government entities to communicate where assistance can be most useful, such as shoring up key vulnerabilities or ensuring that essential defense activities do not depend on a single source of private sector assistance. A company’s financial situation or philanthropic priorities are always subject to change, and the US government should be aware of such risks and create resilience through redundancy.

Central to this resilience will be the provision of support to bolster key nodes in Ukraine’s telecommunications infrastructure network against not just cyberattacks but also physical assault, including firewalls, mine-clearing equipment, and power generators. Aiding the Ukrainian government in the search for another reliable satellite communications partner offering flexibility similar to Starlink’s is also necessary, and a Pentagon representative has confirmed that such a process is underway, following Musk’s various and contradictory statements in October regarding the future of SpaceX’s aid to Ukraine.129 Regardless, the entire SpaceX experience illustrates the need to address single dependencies in advance whenever possible.

A roadblock to ensuring assistance redundancy is the financial ability of companies to provide products and services to the Ukrainian government without charge or to the degree necessary. While the US government does provide funding for private technological assistance (as in the Starlink example), creating a pool of funding tied to the aforementioned task force and overseen by the State Department’s Bureau of Cyberspace and Digital Policy would give companies greater flexibility to cover areas of single dependence, even in instances that would require piecemeal rather than one-to-one redundancy. As previously discussed, many companies provide support out of a belief that it is the right thing to do, both for their customers and as members of a global society. Whether that support is paid or free, publicly or privately given, a mechanism that provides government clarity on private sector support, tracks the landscape of US private support to Ukraine, and facilitates support requests would make it easier for companies to decide to start, or continue, providing support when weighed against the costs and potential risks of offering assistance.

Looking forward and inward 

The questions that have emerged from Ukraine’s experience of defense in and through the information environment are not limited to this context. Private companies have a role in armed conflict, and that role seems likely to grow along with the scale, complexity, and criticality of the information infrastructures they own and operate. Companies will, in some capacity, be participants in the battlespace. This is being demonstrated in real time, exposing gaps that the United States and its allies and partners must address in advance of future conflicts.

Russia’s war on Ukraine has created an environment in which both public and private assistance in support of Ukrainian information infrastructure is motivated by a common aversion to Russian aggression, as well as a commitment to the stability and protection of the Ukrainian government and people. This war is not over, and despite any hopes to the contrary, similar aggressions will occur in new contexts and with new actors. It is crucial that, in conjunction with examining and mitigating the risks related to the involvement of private technology companies in the war in Ukraine, the US government also examine these questions with regard to its own national security and defense.

The information environment is increasingly central not just to warfighting but also to the practice of governance and to the daily life of populations around the world. Governments and populations live in part within that environment, and therefore atop infrastructure that is owned and operated by the private sector. As adversaries seek to reshape the information environment to their own advantage, US and allied public and private sectors must confront the challenges of their existing interdependence. This includes defining the ways in which national security and defense plans in and through the information environment depend upon private companies, developing a better understanding of the differing incentive structures that guide private sector decision-making, and working in coordination with private companies to create a more resilient information infrastructure network through redundancy and diversification. It is difficult to know what forms future conflicts and future adversaries will take, or what incentives may exist for companies in those new contexts, but by better understanding the key role that private information and technology companies already play in this domain, the United States and its allies can better prepare for future threats.

About the Authors 

Emma Schroeder is an associate director with the Atlantic Council’s Cyber Statecraft Initiative, within the Digital Forensic Research Lab, and leads the team’s work studying conflict in and through cyberspace. Her focus in this role is on developing statecraft and strategy for cyberspace that is useful for both policymakers and practitioners. Schroeder holds an MA in History of War from King’s College London’s War Studies Department and a BA in International Relations and History from the George Washington University’s Elliott School of International Affairs.

Sean Dack was a Young Global Professional with the Cyber Statecraft Initiative during the fall of 2022. He is now a Researcher at the NATO Parliamentary Assembly, where he focuses on the long-term strategic and economic implications of Russia’s invasion of Ukraine. Dack graduated from Johns Hopkins School of Advanced International Studies in December 2022 with his MA in Strategic Studies and International Economics. 

Acknowledgements 

The authors thank Justin Sherman, Gregory Rattray, and Gavin Wilde for their comments on earlier drafts of this document, and Trey Herr and the Cyber Statecraft team for their support. The authors also thank all the participants, who shall remain anonymous, in multiple Chatham House Rule discussions and one-on-one conversations about the issue.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    The Ministry of Foreign Affairs of the Russian Federation, Convention on International Information Security (2011), https://carnegieendowment.org/files/RUSSIAN-DRAFT-CONVENTION-ON-INTERNATIONAL-INFORMATION-SECURITY.pdf.
2    To learn more about Russian disinformation efforts against Ukraine and its allies, check out the Russian Narratives Reports from the Atlantic Council’s Digital Forensic Research Lab: Nika Aleksejeva et al., Andy Carvin ed., “Narrative Warfare: How the Kremlin and Russian News Outlets Justified a War of Aggression against Ukraine,” Atlantic Council, February 22, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/narrative-warfare/; Roman Osadchuk et al., Andy Carvin ed., “Undermining Ukraine: How the Kremlin Employs Information Operations to Erode Global Confidence in Ukraine,” Atlantic Council, February 22, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/undermining-ukraine/.
3    Previously, the term RuNet described Russian language portions of the global internet accessible anywhere in the world. However, since Russia passed a domestic internet law in May 2019, RuNet has come to refer to a technically isolated version of the internet that services users within the borders of Russia. Gavin Wilde and Justin Sherman, No Water’s Edge: Russia’s Information War and Regime Security, Carnegie Endowment for International Peace, January 4, 2023, https://carnegieendowment.org/2023/01/04/no-water-s-edge-russia-s-information-war-and-regime-security-pub-88644; Justin Sherman, Reassessing Runet: Russian Internet Isolation and Implications for Russian Cyber Behavior, Atlantic Council, July 7, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/reassessing-runet-russian-internet-isolation-and-implications-for-russian-cyber-behavior/.
4    Adam Satariano and Valerie Hopkins, “Russia, Blocked from the Global Internet, Plunges into Digital Isolation,” New York Times, March 7, 2022, https://www.nytimes.com/2022/03/07/technology/russia-ukraine-internet-isolation.html.
5    Gavin Wilde and Justin Sherman, No Water’s Edge: Russia’s Information War and Regime Security, Carnegie Endowment for International Peace, January 4, 2023, https://carnegieendowment.org/2023/01/04/no-water-s-edge-russia-s-information-war-and-regime-security-pub-88644; Stephen Blank, “Russian Information Warfare as Domestic Counterinsurgency,” American Foreign Policy Interests 35, no. 1 (2013): 31–44, https://doi.org/10.1080/10803920.2013.757946.
6    Gavin Wilde, Cyber Operations in Ukraine: Russia’s Unmet Expectations, Carnegie Endowment for International Peace, December 12, 2022, https://carnegieendowment.org/2022/12/12/cyber-operations-in-ukraine-russia-s-unmet-expectations-pub-88607.
7    Tetyana Bohdanova, “Unexpected Revolution: The Role of Social Media in Ukraine’s Euromaidan Uprising,” European View 13, no. 1 (2014), https://doi.org/10.1007/s12290-014-0296-4; Megan MacDuffee Metzger and Joshua A. Tucker, “Social Media and EuroMaidan: A Review Essay,” Slavic Review 76, no. 1 (2017): 169–91, doi:10.1017/slr.2017.16.
8    Jonathon Cosgrove, “The Russian Invasion of the Crimean Peninsula 2014–2015: A Post-Cold War Nuclear Crisis Case Study,” Johns Hopkins (2020), 11–13, https://www.jhuapl.edu/Content/documents/RussianInvasionCrimeanPeninsula.pdf.
9    Steven Pifer, Ukraine: Six Years after the Maidan, Brookings, February 21, 2020, https://www.brookings.edu/blog/order-from-chaos/2020/02/21/ukraine-six-years-after-the-maidan/.
10    Kenneth Geers, ed., Cyber War in Perspective: Russian Aggression Against Ukraine (Tallinn: NATO CCD COE Publications, 2015), 9; Keir Giles, “Russia and Its Neighbours: Old Attitudes, New Capabilities,” in Geers, Cyber War in Perspective, 25; ‘Кримські регіональні підрозділи ПАТ «Укртелеком» офіційно повідомляють про блокування невідомими декількох вузлів зв’язку на півострові’ [Ukrtelekom officially reports blocking of communications nodes on peninsula by unknown actors], Ukrtelekom, February 28, 2014, http://www.ukrtelecom.ua/presscenter/news/official?id=120327.
11    Pavel Polityuk and Jim Finkle, “Ukraine Says Communications Hit, MPs Phones Blocked,” Reuters, March 4, 2014, https://www.reuters.com/article/ukraine-crisis-cybersecurity/ukraine-says-communications-hit-mps-phones-blocked-idINL6N0M12CF20140304.
12    Jen Weedon, “Beyond ‘Cyber War’: Russia’s Use of Strategic Cyber Espionage and Information Operations in Ukraine,” in Geers, Cyber War in Perspective, 76; Liisa Past, “Missing in Action: Rhetoric on Cyber Warfare,” in Geers, Cyber War in Perspective, 91; “Ukrtelecom’s Crimean Sub-Branches Officially Report that Unknown People Have Seized Several Telecommunications Nodes in the Crimea,” Ukrtelecom, February 28, 2014, http://en.ukrtelecom.ua/about/news?id=120467; “Feb. 28 Updates on the Crisis in Ukraine,” New York Times, February 28, 2014, https://archive.nytimes.com/thelede.blogs.nytimes.com/2014/02/28/latest-updates-tensions-in-ukraine/?_r=0; “The Crimean Regional Units of PJSC ‘Ukrtelecom’ Officially Inform About the Blocking by Unknown Persons of Several Communication Nodes on the Peninsula,” Ukrtelecom, February 28, 2014, https://web.archive.org/web/20140305001208/, http://www.ukrtelecom.ua/presscenter/news/official?id=120327.
13    Polityuk and Finkle, “Ukraine Says Communications Hit”; John Leyden, “Cyber Battle Apparently under Way in Russia–Ukraine Conflict,” The Register, April 25, 2018, https://www.theregister.com/2014/03/04/ukraine_cyber_conflict/.
14    Fontugne, Ermoshina, and Aben, “The Internet in Crimea.”
15    Frédérick Douzet et al., “Measuring the Fragmentation of the Internet: The Case of the Border Gateway Protocol (BGP) During the Ukrainian Crisis,” 2020 12th International Conference on Cyber Conflict (CyCon), Tallinn, Estonia, May 26–29, 2020, 157–182, doi: 10.23919/CyCon49761.2020.9131726; Paul Mozur et al., “‘They Are Watching’: Inside Russia’s Vast Surveillance State,” New York Times, September 22, 2022, https://www.nytimes.com/interactive/2022/09/22/technology/russia-putin-surveillance-spying.html.
16    Yaropolk Brynykh and Anastasiia Lykholat, “Occupied Crimea: Victims and Oppressors,” Freedom House, August 30, 2018, https://freedomhouse.org/article/occupied-crimea-victims-and-oppressors.
17    Halya Coynash, “Internet Providers Forced to Conceal Total FSB Surveillance in Occupied Crimea and Russia,” Kyiv Post, February 2, 2018, https://www.kyivpost.com/article/opinion/op-ed/halya-coynash-internet-providers-forced-conceal-total-fsb-surveillance-occupied-crimea-russia.html.
18    Joseph Cox, “Russia Built an Underwater Cable to Bring Its Internet to Newly Annexed Crimea,” VICE, August 1, 2014, https://www.vice.com/en/article/ypw35k/russia-built-an-underwater-cable-to-bring-its-internet-to-newly-annexed-crimea.
19    Cox, “Russia Built an Underwater Cable.”
20    Romain Fontugne, Ksenia Ermoshina, and Emile Aben, “The Internet in Crimea: A Case Study on Routing Interregnum,” 2020 IFIP Networking Conference, Paris, France, June 22–25, 2020, https://hal.archives-ouvertes.fr/hal-03100247/document.
21    Sebastian Moss, “How Russia Took over the Internet in Crimea and Eastern Ukraine,” Data Center Dynamics, January 12, 2023, https://www.datacenterdynamics.com/en/analysis/how-russia-took-over-the-internet-in-crimea-and-eastern-ukraine/; “Ukraine: Freedom on the Net 2018 Country Report,” Freedom House, 2019, https://freedomhouse.org/country/ukraine/freedom-net/2018.
22    “Crimea: Freedom in the World 2020 Country Report,” Freedom House, https://freedomhouse.org/country/crimea/freedom-world/2020.
23    Kim Zetter, “Inside the Cunning, Unprecedented Hack of Ukraine’s Power Grid,” Wired, March 3, 2016, https://www.wired.com/2016/03/inside-cunning-unprecedented-hack-ukraines-power-grid/; Andy Greenberg, “The Untold Story of Notpetya, the Most Devastating Cyberattack in History,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
24    “Special Report: Ukraine An Overview of Russia’s Cyberattack Activity in Ukraine,” Microsoft Digital Security Unit, April 27, 2022, https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4Vwwd; Kyle Fendorf and Jessie Miller, “Tracking Cyber Operations and Actors in the Russia–Ukraine War,” Council on Foreign Relations, March 24, 2022, https://www.cfr.org/blog/tracking-cyber-operations-and-actors-russia-ukraine-war.
25    Jakub Przetacznik and Simona Tarpova, “Russia’s War on Ukraine: Timeline of Cyber-Attacks,” European Parliament, June 2022, https://www.europarl.europa.eu/RegData/etudes/BRIE/2022/733549/EPRS_BRI(2022)733549_EN.pdf; Catalin Cimpanu, “Hackers Deface Ukrainian Government Websites,” The Record, January 14, 2022, https://therecord.media/hackers-deface-ukrainian-government-websites/.
26    Tom Burt, “Malware Attacks Targeting Ukraine Government,” Microsoft, January 15, 2022, https://blogs.microsoft.com/on-the-issues/2022/01/15/mstic-malware-cyberattacks-ukraine-government/.
27    Roman Osadchuk, Russian Hybrid Threats Report: Evacuations Begin in Ukrainian Breakaway Regions, Atlantic Council, February 18, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-hybrid-threats-report-evacuations-begin-in-ukrainian-breakaway-regions/#cyberattack; Sean Lyngaas and Tim Lister, “Cyberattack Hits Websites of Ukraine Defense Ministry and Armed Forces,” CNN, February 15, 2022, https://www.cnn.com/2022/02/15/world/ukraine-cyberattack-intl/index.html.
28    Microsoft, “Special Report Ukraine.”
29    “ESET Research: Ukraine Hit by Destructive Attacks Before and During the Russian Invasion with HermeticWiper and IsaacWiper,” ESET, March 1, 2022, https://www.eset.com/int/about/newsroom/press-releases/research/eset-research-ukraine-hit-by-destructive-attacks-before-and-during-the-russian-invasion-with-hermet/; “Ukraine: Disk-Wiping Attacks Precede Russian Invasion,” Symantec Threat Hunter Team, February 24, 2022, https://symantec-enterprise-blogs.security.com/blogs/threat-intelligence/ukraine-wiper-malware-russia; “Ukraine Computers Hit by Data-Wiping Software as Russia Launched Invasion,” Reuters, February 24, 2022, https://www.reuters.com/world/europe/ukrainian-government-foreign-ministry-parliament-websites-down-2022-02-23/.
30    Britney Nguyen, “Telecom Workers in Occupied Parts of Ukraine Destroyed Software to Avoid Russian Control over Data and Communications,” Business Insider, June 22, 2022, https://www.businessinsider.com/telecom-workers-ukraine-destroyed-software-avoid-russian-control-2022-6; Net Blocks (@netblocks), “Confirmed: A major internet disruption has been registered across #Ukraine on national provider #Ukrtelecom; real-time network data show connectivity collapsing …,” Twitter, March 28, 2022, 10:38 a.m., https://twitter.com/netblocks/status/1508453511176065033; Net Blocks (@netblocks), “Update: Ukraine’s national internet provider Ukrtelecom has confirmed a cyberattack on its core infrastructure. Real-time network data show an ongoing and …,” Twitter, March 28, 2022 11:25 a.m., https://twitter.com/netblocks/status/1508465391244304389; Andrea Peterson, “Traffic at Major Ukrainian Internet Service Provider Ukrtelecom Disrupted,” The Record, March 28, 2022, https://therecord.media/traffic-at-major-ukrainian-internet-service-provider-ukrtelecom-disrupted/; James Andrew Lewis, Cyber War and Ukraine, Center for Strategic and International Studies, January 10, 2023, https://www.csis.org/analysis/cyber-war-and-ukraine.
31    Thomas Brewster, “As Russia Invaded, Hackers Broke into A Ukrainian Internet Provider. Then Did It Again As Bombs Rained Down,” Forbes, March 10, 2022, https://www.forbes.com/sites/thomasbrewster/2022/03/10/cyberattack-on-major-ukraine-internet-provider-causes-major-outages/?sh=51d16b9c6573.
32    “Global Communications: Services, Solutions and Satellite Internet,” ViaSat, accessed November 14, 2022, http://data.danetsoft.com/viasat.com; Matt Burgess, “A Mysterious Satellite Hack Has Victims Far beyond Ukraine,” Wired, March 23, 2022, https://www.wired.com/story/viasat-internet-hack-ukraine-russia/.
33    Michael Kan, “ViaSat Hack Tied to Data-Wiping Malware Designed to Shut down Modems,” PCMag, March 31, 2022, https://www.pcmag.com/news/viasat-hack-tied-to-data-wiping-malware-designed-to-shut-down-modems.
34    “Ka-Sat Network Cyber Attack Overview,” ViaSat, September 12, 2022, https://news.viasat.com/blog/corporate/ka-sat-network-cyber-attack-overview.
35    Lee Mathews, “ViaSat Reveals How Russian Hackers Knocked Thousands of Ukrainians Offline,” Forbes, March 31, 2022, https://www.forbes.com/sites/leemathews/2022/03/31/viasat-reveals-how-russian-hackers-knocked-thousands-of-ukrainians-offline/?sh=4683638b60d6; ViaSat, “Ka-Sat Network.”
36    ViaSat, “Ka-Sat Network.”
37    Andrea Valentina, “Why the Viasat Hack Still Echoes,” Aerospace America, November 2022, https://aerospaceamerica.aiaa.org/features/why-the-viasat-hack-still-echoes.
38    Juan Andres Guerrero-Saade and Max van Amerongen, “Acidrain: A Modem Wiper Rains down on Europe,” SentinelOne, April 1, 2022, https://www.sentinelone.com/labs/acidrain-a-modem-wiper-rains-down-on-europe/.
39    Guerrero-Saade and Van Amerongen, “Acidrain.”
40    Joe Uchill, “UK, US, and EU Attribute Viasat Hack Against Ukraine to Russia,” SC Media, June 23, 2022, https://www.scmagazine.com/analysis/threat-intelligence/uk-us-and-eu-attribute-viasat-hack-against-ukraine-to-russia; David E. Sanger and Kate Conger, “Russia Was Behind Cyberattack in Run-Up to Ukraine War, Investigation Finds,” New York Times, May 10, 2022, https://www.nytimes.com/2022/05/10/us/politics/russia-cyberattack-ukraine-war.html.
41    Kim Zetter, “ViaSat Hack ‘Did Not’ Have Huge Impact on Ukrainian Military Communications, Official Says,” Zero Day, September 26, 2022, https://zetter.substack.com/p/viasat-hack-did-not-have-huge-impact; “Satellite Outage Caused ‘Huge Loss in Communications’ at War’s Outset—Ukrainian Official,” Reuters, March 15, 2022, https://www.reuters.com/world/satellite-outage-caused-huge-loss-communications-wars-outset-ukrainian-official-2022-03-15/.
42    Reuters, “Satellite Outage.”
43    Sean Lyngaas, “US Satellite Operator Says Persistent Cyberattack at Beginning of Ukraine War Affected Tens of Thousands of Customers,” CNN, March 30, 2022, https://www.cnn.com/2022/03/30/politics/ukraine-cyberattack-viasat-satellite/index.html.
44    Zetter, “ViaSat Hack.”
45    Burgess, “A Mysterious Satellite Hack”; Zetter, “ViaSat Hack”; Valentina, “Why the ViaSat Hack.”
46    Jurgita Lapienytė, “ViaSat Hack Impacted French Critical Services,” CyberNews, August 22, 2022, https://cybernews.com/news/viasat-hack-impacted-french-critical-services/.
47    International Committee of the Red Cross, Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I), 1125 UNTS 3 (June 8, 1977), accessed January 18, 2023, https://www.refworld.org/docid/3ae6b36b4.html; Zhanna L. Malekos Smith, “No ‘Bright-Line Rule’ Shines on Targeting Commercial Satellites,” The Hill, November 28, 2022, https://thehill.com/opinion/cybersecurity/3747182-no-bright-line-rule-shines-on-targeting-commercial-satellites/; Anaïs Maroonian, “Proportionality in International Humanitarian Law: A Principle and a Rule,” Lieber Institute West Point, October 24, 2022, https://lieber.westpoint.edu/proportionality-international-humanitarian-law-principle-rule/#:~:text=The%20rule%20of%20proportionality%20requires,destruction%20of%20a%20military%20objective; Travis Normand and Jessica Poarch, “4 Basic Principles,” The Law of Armed Conflict, January 1, 2017, https://loacblog.com/loac-basics/4-basic-principles/.
48    “Statement by Deputy Head of the Russian Delegation Mr. Konstantin Vorontsov at the Thematic Discussion on Outer Space (Disarmament Aspects) in the First Committee of the 77th Session of the Unga,” Permanent Mission of the Russian Federation to the United Nations, October 26, 2022, https://russiaun.ru/en/news/261022_v.
49    Mauro Vignati, “LABScon Replay: Are Digital Technologies Eroding the Principle of Distinction in War?” SentinelOne, November 16, 2022, https://www.sentinelone.com/labs/are-digital-technologies-eroding-the-principle-of-distinction-in-war/.
50    Matt Burgess, “Russia Is Taking over Ukraine’s Internet,” Wired, June 15, 2022, https://www.wired.com/story/ukraine-russia-internet-takeover/.
51    Nino Kuninidze et al., “Interim Assessment on Damages to Telecommunication Infrastructure and Resilience of the ICT Ecosystem in Ukraine.”
52    Adam Satariano and Scott Reinhard, “How Russia Took Over Ukraine’s Internet in Occupied Territories,” The New York Times, August 9, 2022, https://www.nytimes.com/interactive/2022/08/09/technology/ukraine-internet-russia-censorship.html.
53    Thomas Brewster, “The Last Days of Mariupol’s Internet,” Forbes, March 31, 2022, https://www.forbes.com/sites/thomasbrewster/2022/03/31/the-last-days-of-mariupols-internet/.
54    Matt Burgess, “Russia Is Taking over Ukraine’s Internet,” Wired, June 15, 2022, https://www.wired.com/story/ukraine-russia-internet-takeover/; Satariano and Reinhard, “How Russia Took.”
55    Vera Bergengruen, “The Battle for Control over Ukraine’s Internet,” Time, October 18, 2022, https://time.com/6222111/ukraine-internet-russia-reclaimed-territory/.
56    Herbert Lin, “Russian Cyber Operations in the Invasion of Ukraine,” Cyber Defense Review (Fall 2022): 35, https://cyberdefensereview.army.mil/Portals/6/Documents/2022_fall/02_Lin.pdf; Herb Lin, “The Emergence of Physically Mediated Cyberattacks?,” Lawfare, May 21, 2022, https://www.lawfareblog.com/emergence-physically-mediated-cyberattacks; “Invaders Use Blackmailing and Intimidation to Force Ukrainian Internet Service Providers to Connect to Russian Networks,” State Service of Special Communications and Information Protection of Ukraine, May 13, 2022, https://cip.gov.ua/en/news/okupanti-shantazhem-i-pogrozami-zmushuyut-ukrayinskikh-provaideriv-pidklyuchatisya-do-rosiiskikh-merezh; Satariano and Reinhard, “How Russia Took.”
57    Gian M. Volpicelli, “How Ukraine’s Internet Can Fend off Russian Attacks,” Wired, March 1, 2022, https://www.wired.com/story/internet-ukraine-russia-cyberattacks/; Satariano and Reinhard, “How Russia Took.” 
58    David R. Marples, “Russia’s War Goals in Ukraine,” Canadian Slavonic Papers 64, no. 2–3 (March 2022): 207–219, https://doi.org/10.1080/00085006.2022.2107837.
59    David Klepper, “Russian Propaganda ‘Outgunned’ by Social Media Rebuttals,” AP News, March 4, 2022, https://apnews.com/article/russia-ukraine-volodymyr-zelenskyy-kyiv-technology-misinformation-5e884b85f8dbb54d16f5f10d105fe850; Marc Champion and Daryna Krasnolutska, “Ukraine’s TV Comedian President Volodymyr Zelenskyy Finds His Role as Wartime Leader,” Japan Times, June 7, 2022, https://www.japantimes.co.jp/news/2022/02/26/world/volodymyr-zelenskyy-wartime-president/;“Российское Телевидение Сообщило Об ‘Бегстве Зеленского’ Из Киева, Но Умолчало Про Жертвы Среди Гражданских,” Агентство, October 10, 2022, https://web.archive.org/web/20221010195154/https://www.agents.media/propaganda-obstreli/.
60    To learn more about Russian disinformation efforts against Ukraine and its allies, check out the Russian Narratives Reports from the Atlantic Council’s Digital Forensic Research Lab:  Nika Aleksejeva et al., Andy Carvin ed., “Narrative Warfare: How the Kremlin and Russian News Outlets Justified a War of Aggression against Ukraine,” Atlantic Council, February 22, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/narrative-warfare/; Roman Osadchuk et al., Andy Carvin ed., “Undermining Ukraine: How the Kremlin Employs Information Operations to Erode Global Confidence in Ukraine,” Atlantic Council, February 22, 2023, https://www.atlanticcouncil.org/in-depth-research-reports/report/undermining-ukraine/.
61    Oleksandr Yankovskyi, “‘They Fear Resistance’: Why Is Russia Seizing Mobile and Internet Networks in the Kherson Region?” [«Бояться спротиву». Для чого РФ захоплює мобільний зв’язок та інтернет на Херсонщині?], Radio Svoboda, May 7, 2022, https://www.radiosvoboda.org/a/novyny-pryazovya-khersonshchyna-okupatsiya-rosiya-mobilnyy-zvyazok-internet/31838946.html.
62    Volodymyr Zelenskyy, “Tell People in the Occupied Territories about Ukraine, That the Ukrainian Army Will Definitely Come—Address by President Volodymyr Zelenskyy,” President of Ukraine Official Website, June 13, 2022, https://www.president.gov.ua/en/news/govorit-lyudyam-na-okupovanih-teritoriyah-pro-ukrayinu-pro-t-75801. 
63    Satariano and Reinhard, “How Russia Took.”
64    Michael Sheldon, “Geolocating Russia’s Indiscriminate Shelling of Kharkiv,” DFRLab, March 1, 2022, https://medium.com/dfrlab/geolocating-russias-indiscriminate-shelling-of-kharkiv-deaccc830846; Michael Sheldon, “Kharkiv Neighborhood Experienced Ongoing Shelling Prior to February 28 Attack,” DFRLab, February 28, 2022, https://medium.com/dfrlab/kharkiv-neighborhood-experienced-ongoing-shelling-prior-to-february-28-attack-f767230ad6f6; https://maphub.net/Cen4infoRes/russian-ukraine-monitor; Michael Sheldon (@Michael1Sheldon), “Damage to civilian houses in the Zalyutino neighborhood of Kharkiv. https://t.me/c/1347456995/38991 …,” Twitter, February 27, 2022, 4:15 p.m., https://twitter.com/Michael1Sheldon/status/1498044130416594947; Michael Sheldon, “Missile Systems and Tanks Spotted in Russian Far East, Heading West,” DFRLab, January 27, 2022, https://medium.com/dfrlab/missile-systems-and-tanks-spotted-in-russian-far-east-heading-west-6d2a4fe7717a; Jay in Kyiv (@JayinKyiv), “Not yet 24 hours after Ukraine devastated Russian positions in Kherson, a massive Russian convoy is now leaving Melitopol to replace them. This is on Alekseev …,” Twitter, July 12, 2022, 7:50 a.m., https://twitter.com/JayinKyiv/status/1546824416218193921; “Eyes on Russia Map,” Centre for Information Resilience, https://eyesonrussia.org/.
65    Katerina Sergatskova, What You Should Know About Life in the Occupied Areas in Ukraine, Wilson Center, September 14, 2022, https://www.wilsoncenter.org/blog-post/what-you-should-know-about-life-occupied-areas-ukraine; Jonathan Landay, “Village near Kherson Rejoices at Russian Rout, Recalls Life under Occupation,” Reuters, November 12, 2022, https://www.reuters.com/world/europe/village-near-kherson-rejoices-russian-rout-recalls-life-under-occupation-2022-11-11/.
66    Andrew Salerno-Garthwaite, “OSINT in Ukraine: Civilians in the Kill Chain and the Information Space,” Global Defence Technology 137 (2022), https://defence.nridigital.com/global_defence_technology_oct22/osint_in_ukraine; “How Has Open-Source Intelligence Influenced the War in Ukraine?” Economist, August 30, 2022, https://www.economist.com/ukraine-osint-pod; Gillian Tett, “Inside Ukraine’s Open-Source War,” Financial Times, July 22, 2022, https://www.ft.com/content/297d3300-1a65-4793-982b-1ba2372241a3; Amy Zegart, “Open Secrets,” Foreign Affairs, January 7, 2023, https://www.foreignaffairs.com/world/open-secrets-ukraine-intelligence-revolution-amy-zegart?utm_source=twitter_posts&utm_campaign=tw_daily_soc&utm_medium=social
67    Lin, “The Emergence.”
68    “Cyber Security Strategy of Ukraine,” Presidential Decree of Ukraine, March 15, 2016, https://ccdcoe.org/uploads/2018/10/NationalCyberSecurityStrategy_Ukraine.pdf.
69    Eric Geller, “Ukraine Prepares to Remove Data from Russia’s Reach,” POLITICO, February 22, 2022, https://www.politico.com/news/2022/02/22/ukraine-centralized-its-data-after-the-last-russian-invasion-now-it-may-need-to-evacuate-it-00010777.  
70    Kuninidze et al., “Interim Assessment.”
71    Kuninidze et al., “Interim Assessment.”
72    “Datagroup to Invest $20 Million into a Large-Scale Network Modernization Project in Partnership with Cisco,” Datagroup, April 8, 2021, https://www.datagroup.ua/en/novyny/datagrup-investuye-20-mln-dolariv-u-masshtabnij-proyekt-iz-m-314.
73    Lauriane Giet, “Eutech4ukraine—Cisco’s Contribution to Bring Connectivity and Cybersecurity to Ukraine and Skills to Ukrainian Refugees,” Futurium, June 22, 2022, https://futurium.ec.europa.eu/en/digital-compass/tech4ukraine/your-support-ukraine/ciscos-contribution-bring-connectivity-and-cybersecurity-ukraine-and-skills-ukrainian-refugees; “Communiqué de Presse Solidarité Européenne Envers l’Ukraine: Nouveau Convoi d’Équipements Informatiques,” Government of France, May 25, 2022, https://minefi.hosting.augure.com/Augure_Minefi/r/ContenuEnLigne/Download?id=4FFB30F8-F59C-45A0-979E-379E3CEC18AF&filename=06%20-%20Solidarit%C3%A9%20europ%C3%A9enne%20envers%20l%E2%80%99Ukraine%20-%20nouveau%20convoi%20d%E2%80%99%C3%A9quipements%20informatiques.pdf
74    ”Atlantic Council, “Ukraine’s Digital Resilience: A conversation with Deputy Prime Minister of Ukraine Mykhailo Fedorov,” December 2, 2022, YouTube video, https://www.youtube.com/watch?v=Vl75e0QU6uE.
75    “Digital Country—Official Website of Ukraine,” Ukraine Now (Government of Ukraine), accessed January 17, 2023, https://ukraine.ua/invest-trade/digitalization/; Atlantic Council, “Ukraine’s Digital Resilience.”
76    Brad Smith, “Extending Our Vital Technology Support for Ukraine,” Microsoft, November 3, 2022, https://blogs.microsoft.com/on-the-issues/2022/11/03/our-tech-support-ukraine/; “How Amazon Is Assisting in Ukraine,” Amazon, March 1, 2022, https://www.aboutamazon.com/news/community/amazons-assistance-in-ukraine; Phil Venables, “How Google Cloud Is Helping Those Affected by War in Ukraine,” Google, March 3, 2022, https://cloud.google.com/blog/products/identity-security/how-google-cloud-is-helping-those-affected-by-war-in-ukraine.
77    Simon Handler, Lily Liu, and Trey Herr, Dude, Where’s My Cloud? A Guide for Wonks and Users, Atlantic Council, July 7, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/report/dude-wheres-my-cloud-a-guide-for-wonks-and-users/.
78    Handler, Liu, and Herr, “Dude, Where’s My Cloud?” 
79    Brad Smith, “Defending Ukraine: Early Lessons from the Cyber War,” Microsoft On the Issues, November 2, 2022, https://blogs.microsoft.com/on-the-issues/2022/06/22/defending-ukraine-early-lessons-from-the-cyber-war/; Smith, “Extending Our Vital Technology.”
80    Amazon, “How Amazon Is Assisting”; Sebastian Moss, “Ukraine Awards Microsoft and AWS Peace Prize for Cloud Services and Digital Support,” Data Center Dynamics, January 12, 2023, https://www.datacenterdynamics.com/en/news/ukraine-awards-microsoft-and-aws-peace-prize-for-cloud-services-digital-support/; Venables, “How Google Cloud”; Kent Walker, “Helping Ukraine,” Google, March 4, 2022, https://blog.google/inside-google/company-announcements/helping-ukraine/.
81    Catherine Stupp, “Ukraine Has Begun Moving Sensitive Data Outside Its Borders,” Wall Street Journal, June 14, 2022, https://www.wsj.com/articles/ukraine-has-begun-moving-sensitive-data-outside-its-borders-11655199002; Atlantic Council, “Ukraine’s Digital Resilience”; Smith, “Defending Ukraine.”
82    Nick Beecroft, Evaluating the International Support to Ukrainian Cyber Defense, Carnegie Endowment for International Peace, November 3, 2022, https://carnegieendowment.org/2022/11/03/evaluating-international-support-to-ukrainian-cyber-defense-pub-88322.
83    Smith, “Defending Ukraine,” 5, 6, 9.
84    Smith, “Defending Ukraine,” 3, 11.
85    Thomas Brewster, “Bombs and Hackers Are Battering Ukraine’s Internet Providers. ‘Hidden Heroes’ Risk Their Lives to Keep Their Country Online,” Forbes, March 15, 2022, https://www.forbes.com/sites/thomasbrewster/2022/03/15/internet-technicians-are-the-hidden-heroes-of-the-russia-ukraine-war/?sh=be5da1428844.
86    Kuninidze et al., “Interim Assessment,” 40.
87     Kuninidze et al., “Interim Assessment,”40; ““Київстар Виділяє 300 Мільйонів Гривень Для Відновлення Цифрової Інфраструктури України,” Київстар, July 4, 2022, https://kyivstar.ua/uk/mm/news-and-promotions/kyyivstar-vydilyaye-300-milyoniv-gryven-dlya-vidnovlennya-cyfrovoyi.
88    Київстар, “Київстар Виділяє”; “Mobile Connection Lifecell—Lifecell Ukraine,” Lifecell UA, accessed January 17, 2023, https://www.lifecell.ua/en/.
89    Ryan Gallagher, “Russia–Ukraine War: Telecom Workers Damage Own Equipment to Thwart Russia,” Bloomberg, June 21, 2022), https://www.bloomberg.com/news/articles/2022-06-21/ukrainian-telecom-workers-damage-own-equipment-to-thwart-russia.
90    Mykhailo Fedorov (@FedorovMykhailo), Twitter, February 26, 2022, 7:06 a.m., https://twitter.com/FedorovMykhailo/status/1497543633293266944?s=20&t=c9Uc7CDXEBr-e5-nd2hEtw.
91    Mykhailo Fedorov (@FedorovMykhailo), “Starlink — here. Thanks, @elonmusk,” Twitter, February 28, 2022, 3:19 p.m., https://twitter.com/FedorovMykhailo/status/1498392515262746630?s=20&t=vtCM9UqgWRkfxfrEHzYTGg
92    Atlantic Council, “Ukraine’s Digital Resilience.”
93    “How Elon Musk’s Satellites Have Saved Ukraine and Changed Warfare,” Economist, January 5, 2023, https://www.economist.com/briefing/2023/01/05/how-elon-musks-satellites-have-saved-ukraine-and-changed-warfare.
94    Alexander Freund, “Ukraine Using Starlink for Drone Strikes,” Deutsche Welle, March 27, 2022, https://www.dw.com/en/ukraine-is-using-elon-musks-starlink-for-drone-strikes/a-61270528.
95    Mykhailo Fedorov (@FedorovMykhailo), “Over 100 cruise missiles attacked 🇺🇦 energy and communications infrastructure. But with Starlink we quickly restored the connection in critical areas. Starlink …,” Twitter, October 12, 2022 3:12 p.m., https://twitter.com/FedorovMykhailo/status/1580275214272802817.
96    Rishi Iyengar, “Why Ukraine Is Stuck with Elon (for Now),” Foreign Policy, November 22, 2022, https://foreignpolicy.com/2022/11/22/ukraine-internet-starlink-elon-musk-russia-war/.
97    Economist, “How Elon Musk’s.”
98    Freund, “Ukraine Using Starlink”; Nick Allen and James Titcomb, “Elon Musk’s Starlink Helping Ukraine to Win the Drone War,” Telegraph, March 18, 2022, https://www.telegraph.co.uk/world-news/2022/03/18/elon-musks-starlink-helping-ukraine-win-drone-war/; Charlie Parker, “Specialist Ukrainian Drone Unit Picks off Invading Russian Forces as They Sleep,” Times, March 18, 2022, https://www.thetimes.co.uk/article/specialist-drone-unit-picks-off-invading-forces-as-they-sleep-zlx3dj7bb.
99    Matthew Gault, “Mysterious Sea Drone Surfaces in Crimea,” Vice, September 26, 2022, https://www.vice.com/en/article/xgy4q7/mysterious-sea-drone-surfaces-in-crimea.
100    Economist, “How Elon Musk’s.”  
101    Akash Sriram, “SpaceX, USAID Deliver 5,000 Satellite Internet Terminals to Ukraine Akash Sriram,” Reuters, April 6, 2022, https://www.reuters.com/technology/spacex-usaid-deliver-5000-satellite-internet-terminals-ukraine-2022-04-06/.
102    Alex Marquardt, “Exclusive: Musk’s Spacex Says It Can No Longer Pay for Critical Satellite Services in Ukraine, Asks Pentagon to Pick up the Tab,” CNN, October 14, 2022, https://www.cnn.com/2022/10/13/politics/elon-musk-spacex-starlink-ukraine.  
103    Elon Musk (@elonmusk), “Ukraine-Russia Peace: – Redo elections of annexed regions under UN supervision. Russia leaves if that is will of the people. – Crimea formally part of Russia, as it has been since 1783 (until …” Twitter, October 3, 2022 12:15 p.m., https://twitter.com/elonmusk/status/1576969255031296000; Andrij Melnyk (@MelnykAndrij), Twitter, October 3, 2022, 12:46 p.m., https://twitter.com/MelnykAndrij/status/1576977000178208768.
104    Elon Musk (@elonmusk), Twitter, October 14, 2022, 3:14 a.m., https://twitter.com/elonmusk/status/1580819437824839681; Elon Musk (@elonmusk), Twitter, October 15, 2022, 2:06 p.m., https://twitter.com/elonmusk/status/1581345747777179651.
105    Elon Musk (@elonmusk), Twitter, October 17, 2022, 3:52 p.m., https://twitter.com/elonmusk/status/1582097354576265217; Sawyer Merrit (@SawyerMerritt), “BREAKING: The Pentagon is considering paying for @SpaceX ‘s Starlink satellite network — which has been a lifeline for Ukraine — from a fund that has been used …,” Twitter, October 17, 2022, 3:09 p.m., https://twitter.com/SawyerMerritt/status/1582086349305262080.
106    Alex Marquardt and Sean Lyngaas, “Ukraine Suffered a Comms Outage When 1,300 SpaceX Satellite Units Went Offline over Funding Issues” CNN, November 7, 2022, https://www.cnn.com/2022/11/04/politics/spacex-ukraine-elon-musk-starlink-internet-outage/; Iyengar, “Why Ukraine Is Stuck.”
107    Ryan Browne, “Ukraine Government Is Seeking Alternatives to Elon Musk’s Starlink, Vice PM Says,” CNBC, November 3, 2022, https://www.cnbc.com/2022/11/03/ukraine-government-seeking-alternatives-to-elon-musks-starlink.html.
108    William Harwood, “SpaceX Launches 40 OneWeb Broadband Satellites, Lighting up Overnight Sky,” CBS News, January 10, 2023, https://www.cbsnews.com/news/spacex-launches-40-oneweb-broadband-satellites-in-overnight-spectacle/.
109    Marquardt and Lyngaas, “Ukraine Suffered”; Mehul Srivastava et al., “Ukrainian Forces Report Starlink Outages During Push Against Russia,” Financial Times, October 7, 2022, https://www.ft.com/content/9a7b922b-2435-4ac7-acdb-0ec9a6dc8397.
110    Alex Marquardt and Kristin Fisher, “SpaceX admits blocking Ukrainian troops from using satellite technology,” CNN, February 9, https://www.cnn.com/2023/02/09/politics/spacex-ukrainian-troops-satellite-technology/index.html.
111    Charles R. Davis, “Elon Musk Blocked Ukraine from Using Starlink in Crimea over Concern that Putin Could Use Nuclear Weapons, Political Analyst Says,” Business Insider, October 11, 2022, https://www.businessinsider.com/elon-musk-blocks-starlink-in-crimea-amid-nuclear-fears-report-2022-10; Elon Musk (@elonmusk), Twitter, February 12, 2022, 4:00 p.m., https://twitter.com/elonmusk/status/1624876021433368578.
112    Mykhailo Fedorov (@FedorovMykhailo), “In 21 days of the war, russian troops has already killed 100 Ukrainian children. they are using DJI products in order to navigate their missile. @DJIGlobal are you sure you want to be a …,” Twitter, March 16, 2022, 8:14 a.m., https://twitter.com/fedorovmykhailo/status/1504068644195733504; Cat Zakrzewski, “4,000 Letters and Four Hours of Sleep: Ukrainian Leader Wages Digital War,” Washington Post, March 30, 2022, https://www.washingtonpost.com/technology/2022/03/30/mykhailo-fedorov-ukraine-digital-front/
113    DJI Global (@DJIGlobal), “Dear Vice Prime Minister Federov: All DJI products are designed for civilian use and do not meet military specifications. The visibility given by AeroScope and further Remote ID …,” Twitter, March 16, 2022, 5:42 p.m., https://twitter.com/DJIGlobal/status/1504206884240183297
114    Mehul Srivastava and Roman Olearchyk, “Starlink Prices in Ukraine Nearly Double as Mobile Networks Falter,” Financial Times, November 29, 2022, https://www.ft.com/content/f69b75cf-c36a-4ab3-9eb7-ad0aa00d230c.
115    Iyengar, “Why Ukraine Is Stuck.”
116    Michael Sheetz, “SpaceX Raises Another $250 Million in Equity, Lifts Total to $2 Billion in 2022,” CNBC, August 5, 2022, https://www.cnbc.com/2022/08/05/elon-musks-spacex-raises-250-million-in-equity.html.
117    “Starshield,” SpaceX, accessed January 17, 2023, https://www.spacex.com/starshield/; Micah Maidenberg and Drew FitzGerald, “Elon Musk’s Spacex Courts Military with New Starshield Project,” Wall Street Journal, December 8, 2022), https://www.wsj.com/articles/elon-musks-spacex-courts-military-with-new-starshield-project-11670511020.  
118    “Maps: Tracking the Russian Invasion of Ukraine,” New York Times, February 14, 2022, https://www.nytimes.com/interactive/2022/world/europe/ukraine-maps.html#:~:text=Ukraine%20has%20reclaimed%2054%20percent,for%20the%20Study%20of%20War; Júlia Ledur, Laris Karklis, Ruby Mellen, Chris Alcantara, Aaron Steckelberg and Lauren Tierney, “Follow the 600-mile front line between Ukrainian and Russian forces,” The Washington Post, February 21, 2023, https://www.washingtonpost.com/world/interactive/2023/russia-ukraine-front-line-map/.
119    Jimmy Rushton (@JimmySecUK), “Ukrainian soldiers deploying a Starlink satellite internet system in liberated Kherson, allowing local residents to communicate with their relatives in other areas of Ukraine,” Twitter, November 12, 2022, 8:07 a.m., https://twitter.com/JimmySecUK/status/1591417328134402050; José Andrés (@chefjoseandres), “@elonmusk While I don’t agree with you about giving voice to people that brings the worst out of all of us, thanks for @SpaceXStarlink in Kherson, a city with no electricity, or in a train from …,” Twitter, November 20, 2022, 1:58 a.m., https://twitter.com/chefjoseandres/status/1594223613795762176.
120    Mykhailo Fedorov (@FedorovMykhailo), “Every front makes its contribution to the upcoming victory. These are Anatoliy, Viktor, Ivan and Andrii from @Vodafone_UA team, who work daily to restore mobile and Internet communications …,” Twitter, April 25, 2022, 1:13 p.m., https://twitter.com/FedorovMykhailo/status/1518639261624455168; Mykhailo Fedorov (@FedorovMykhailo), “Can you see a Starlink? But it’s here. While providers are repairing cable damages, Gostomel’s humanitarian headquarter works via the Starlink. Thanks to @SpaceX …,” Twitter, May 8, 2022, 9:48 a.m., https://twitter.com/FedorovMykhailo/status/1523298788794052615.
121    Thomas Brewster, “Ukraine’s Engineers Dodged Russian Mines to Get Kherson Back Online–with a Little Help from Elon Musk’s Satellites,” Forbes, November 18, 2022, https://www.forbes.com/sites/thomasbrewster/2022/11/18/ukraine-gets-kherson-online-after-russian-retreat-with-elon-musk-starlink-help/?sh=186e24b0ef1e.  
122    Mark Didenko, ed., “Ukrtelecom Car Hits Landmine in Sumy Region, One Dead, Three Injured,” Yahoo!, October 2, 2022, https://www.yahoo.com/video/ukrtelecom-car-hits-landmine-sumy-104300649.html.
123    Vera Bergengruen, “The Battle for Control over Ukraine’s Internet,” Time, October 18, 2022, https://time.com/6222111/ukraine-internet-russia-reclaimed-territory/.
124    Bergengruen, “The Battle for Control over Ukraine’s Internet.”
125    Atlantic Council, “Ukraine’s Digital Resilience: A conversation with Deputy Prime Minister of Ukraine Mykhailo Fedorov,” December 2, 2022, YouTube video, https://www.youtube.com/watch?v=Vl75e0QU6uE; “Keeping connected: connectivity resilience in Ukraine,” EU4Digital, February 13, 2022, https://eufordigital.eu/keeping-connected-connectivity-resilience-in-ukraine/.
126    Greg Rattray, Geoff Brown, and Robert Taj Moore, “The Cyber Defense Assistance Imperative Lessons from Ukraine,” The Aspen Institute, February 16, 2023, https://www.aspeninstitute.org/wp-content/uploads/2023/02/Aspen-Digital_The-Cyber-Defense-Assistance-Imperative-Lessons-from-Ukraine.pdf, 8
127    CRDF Global, “CRDF Global becomes Platform for Cyber Defense Assistance Collaborative (CDAC) for Ukraine,” News 19, November 14, 2022, https://whnt.com/business/press-releases/cision/20221114DC34776/crdf-global-becomes-platform-for-cyber-defense-assistance-collaborative-cdac-for-ukraine/; Dina Temple-Raston, “EXCLUSIVE: Rounding Up a Cyber Posse for Ukraine,” The Record, November 18, 2022, https://therecord.media/exclusive-rounding-up-a-cyber-posse-for-ukraine/; Rattray, Brown, and Moore, “The Cyber Defense Assistance Imperative Lessons from Ukraine.” 
128    Beecroft, Evaluating the International Support.
129    Lee Hudson, “‘There’s Not Just SpaceX’: Pentagon Looks Beyond Starlink after Musk Says He May End Services in Ukraine,” POLITICO, October 14, 2022, https://www.politico.com/news/2022/10/14/starlink-ukraine-elon-musk-pentagon-00061896.

The post A parallel terrain: Public-private defense of the Ukrainian information environment appeared first on Atlantic Council.

The 5×5—Strengthening the cyber workforce https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-strengthening-the-cyber-workforce/ Thu, 23 Feb 2023 05:01:00 +0000 https://www.atlanticcouncil.org/?p=613977 Experts provide insights into ways for the United States and its allies to bolster their cyber workforces.

The post The 5×5—Strengthening the cyber workforce appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

On July 19, 2022, the White House convened leaders from industry, government, and academia at its National Cyber Workforce and Education Summit. In his remarks at the Summit, recently departed National Cyber Director Chris Inglis committed to developing a National Cyber Workforce and Education Strategy with input from relevant stakeholders to align government resources and efforts toward addressing the many challenges in this area. Among these challenges is finding sufficient talent to fill the United States’ ever-growing number of openings for cyber-related roles across all sectors of the economy. According to research from CyberSeek, US employers posted 714,548 of these job openings in the year leading up to April 2022. While many of the vacancies are oriented toward individuals who are savvy in the more technical aspects of cybersecurity, more organizations are searching for multidisciplinary talent, ranging from international affairs to project management and everything in between.

While we await the White House’s National Cyber Workforce and Education Strategy, we brought together a group of experts to provide insights into bolstering the cyber workforces of the United States and its allies.

#1 What is one assumption about the cyber workforce that is holding the cyber community back?

Nelson Abbott, senior director, advanced program operations, NPower

“‘We cannot find good talent.’ This sentiment is, in my opinion, a result of companies not broadening their talent acquisition strategies. You will not meet the increasing demand for cyber talent by using the same talent pipelines that are not increasing their output to market.” 

Richard Harris, principal cybersecurity policy engineer, MITRE Corporation

“One problematic assumption is that the market, academia, or government alone can solve the problem of cyber workforce shortages. Developing cyber workforces at the right time, in the right quantities, and with the right skills requires purposeful and persistent public, private, and academic partnerships.” 

Ayan Islam, director, cyber workforce, Office of the National Cyber Director

“There is an assumption that there is a single pathway into the cyber workforce when there are many pathways to recruit cyber workforce talent. To open the job pipeline to those for whom a career in cyber or a related field would be out of reach, new pathways need to be created. We need to fully leverage the potential for community colleges to contribute to the workforce, grow work-based learning programs such as apprenticeships, and further explore non-traditional training opportunities. While some exist today, we need many more pathways to allow for more entrants and career changers into the cyber workforce and to demystify those pathways.” 

Eric Novotny, Hurst professor of international relations, emeritus, School of International Service, American University

“One assumption that I have noticed in employment advertising is the posting of entry-level positions in which the Certified Information Systems Security Professional (CISSP) certification is listed as necessary or desirable. This certification, as is well-known in the community, is a cybersecurity management certification that requires five years of experience in the domain. It may be that human resources representatives do not understand the levels or purpose of cybersecurity certifications. Some organizations may lose qualified job candidates if desired certifications are not aligned with job requirements.” 

Merili Soosalu, partner leader and regional coordinator for Latin America and the Caribbean, EU Cyber Resilience for Development Project (Cyber4Dev), Information System Authority of Estonia (RIA)

“Cybersecurity as a topic is on its way to the mainstream. In an increasingly digitalized world, cybersecurity is an integral aspect that cannot be overlooked. This should also be reflected in how cyber careers are presented: they are not limited to highly experienced technical roles, but span a variety of professions and skillsets, from project management and communications to the highly skilled blue- and red-team competencies.”

#2 What government or industry-led programs have had an outsized positive impact on workforce development efforts?

Abbott: “I am of the opinion that there have not been ‘outsized’ positive impacts. There are a lot of great companies and organizations doing good work (NPower, Per Scholas, etc.), but they do not have the capacity to meet the exponential growth in demand for talent. The recent cybersecurity sprint was good to develop interest in that alternative hiring model, but it is still too early to see what the measurable results are.” 

Harris: “Some of the most successful workforce development programs have been in local communities. These programs were the result of local businesses, governments, and academic institutions putting their heads together to meet cybersecurity and other technical skill needs. While these efforts help keep people in their communities, they also support workforce mobility where these same skills are in demand outside of the local community.” 

Islam: “With over seven hundred thousand (approximately 756,000 as of December 2022, per CyberSeek.org) vacancies in cybersecurity positions across the United States, these numbers constitute a national security risk and must be tackled aggressively. Therefore, it is important for government, industry, education, and training providers to all contribute to workforce development efforts, and work in tandem to address our growing needs. For example, the Office of National Cyber Director hosted a National Cyber Workforce and Education Summit at the White House last summer with government and private sector partners to discuss building the United States’ cyber workforce, increasing skill-based pathways to cyber careers, and equipping Americans to thrive in our increasingly digital society. The event resulted in many new commitments. A cybersecurity apprenticeship sprint was also announced at the Summit, which led to an increase in private-sector participation in the Department of Labor’s apprenticeship program, with 194 new registered participants and over seven thousand apprentices getting jobs.” 

Novotny: “Sponsored events to attract new talent into the field, such as Cyber 9/12, AvengerCon, and various Capture the Flag (CTF) exercises are invaluable for stimulating interest in cybersecurity and exposing students and young professionals to executives and experts in the field.” 

Soosalu: “In Estonia in recent years, many positive initiatives have been developed for different age groups. For instance, for adults looking to change their careers to information technology (IT), Kood/Jõhvi, an international coding school, was created, and its first graduates should enter the job market as top IT specialists in the coming months. A private initiative called Unicorn Squad was created in 2018 to popularize technology education among girls. These initiatives, to name some, will hopefully show positive effects in the coming years. The Estonian Information System Authority, responsible for national cybersecurity, prioritizes building critical sectors’ capacity to handle cyber incidents by regularly organizing joint exercises between the national Computer Emergency Response Team (CERT) and the IT teams of different critical service providers.”

#3 Are there any issues or challenges in workforce development that have been overstated or immaterial?

Abbott: “‘Anyone can do cyber.’ While it is true that there is a much broader spectrum of roles in cyber than most people realize (non-technical; governance, risk management, and compliance; policy; etc.), these still require a strong working knowledge of information technology and networking concepts.” 

Harris: “Many people need to move beyond wringing their hands about cyber workforce shortages or hoping that someone else will solve the problem. Organizations can start at the grassroots level and proactively develop partnerships and plans that result in a tangible workforce development achievement at whatever level is feasible, and then build on that success.” 

Islam: “Actually, what is understated and greatly material to the issue of cyber workforce development is the lack of appropriate resourcing and of C-suite appreciation for security program investments. There is still a disconnect in recognizing that cybersecurity is a foundational business risk and not a one-time, niche issue. Without proper investments on the people side of security programs, we will continue to see the same issues or challenges in tackling cybersecurity threats.” 

Novotny: “There are some misconceptions that cybersecurity is an exclusively IT-driven, technical field. That is certainly true for some roles and responsibilities, but cybersecurity solutions also embrace people and processes, as well as technology.  Professionals with highly developed technical skills will need to include management and people skills in their career development.” 

Soosalu: “Today, all studies show that the IT sector, cybersecurity in particular, lacks a qualified workforce. Therefore, all challenges are real and need to be tackled.”


#4 How can different types of organizations better assess their cyber talent needs?

Abbott: “By 1) moving from credential-based job descriptions to competency-based job descriptions; 2) better communicating between hiring managers and talent-acquisition teams; 3) changing job descriptions to remove bias and non-negotiable requirements to encourage more candidates to apply; and 4) considering internal upskilling programs and backfilling entry-level roles with new talent.” 

Harris: “The National Institute of Standards and Technology’s (NIST) National Initiative for Cybersecurity Education (NICE) Framework is an awesome baseline reference for understanding workforce positions and skills. Organizations, however, must do the work to understand their current and future cyber talent needs, then leverage the NICE Framework, or a similar guide, to connect those business needs with the right positions and skill paths, and build a workforce development plan.” 

Islam: “A growing number of organizations are taking advantage of skill-based and aptitude assessments to allow for diverse and multidisciplinary candidates to join the cyber workforce. However, skill-based training and hiring practices are still necessary. Any solution must be inclusive of historically untapped talent, including underserved areas and neurodivergent populations. A cybersecurity career should be within reach for any American who wishes to pursue it, and skills-based training and hiring practices enable inclusive outcomes, give workers a fair shot, and keep the economy strong.” 

Novotny: “The size of the existing IT and cybersecurity internal infrastructure plays a huge role here. Medium and small enterprises will have a more difficult time justifying a large cybersecurity staff in most cases. For these organizations, where many cybersecurity functions are outsourced, the skills shift to management and procurement, rather than technical operations, such as staffing a security operations center. In the government sector, having different standards and compliance rules than in the private sector also drives different necessary skill sets. On the other hand, I would argue that any organization that has network operations and valuable information assets to protect has similar security requirements in principle.” 

Soosalu: “For assessing needs, some forms of standards are needed. In the European Union, the new European Cybersecurity Skills Framework (ECSF) was created to become a useful tool to help identify the profiles and skills that are most needed and valued. This will help create a European framework for recognizing skills and training programs.”

#5 How have cyber workforce needs shifted in the past five years, and where do you see them going from here?

Abbott: “They have only increased, and almost doubled in 2022. More companies are taking cybersecurity seriously, and are now realizing the importance of having those individuals on their teams. I fear that the demand for cyber talent will only continue unless employers start to create new solutions instead of relying on old habits when it comes to talent acquisition.” 

Harris: “Rapid technological change like the current artificial intelligence revolution, and increasingly complex risk dynamics exemplified by greater cyber-physical convergence, require cyber workforces and individuals to embrace continuous learning throughout their careers. More attention needs to be paid to developing interesting and flexible cyber career paths and investing in more career progression training and education.” 

Islam: “We need to broaden our thinking about the importance of cyber across occupations and professions in our interconnected society. There are many occupations and professions that have not traditionally required in-depth cybersecurity knowledge or training, but whose work relies on the use of cyber technologies. Greater attention should be paid to ensuring that cybersecurity training and education are part of the professional preparation of these workers.” 

Novotny: “Several broad trends are noticeable in workforce requirements that have changed over time. First, as more sectors of the economy are identified as critical infrastructure, professionals that have industry sector experience are in higher demand.  Second, the cyber threat intelligence business—in both government and in the private sector—has opened job opportunities for young professionals with language and international relations education. Third, there is an apparent fusion of traditional cybersecurity needs with a growing concern about misinformation, social media, and privacy. A few years ago, these latter issues were largely separate from the cybersecurity domain. That is not the case today.” 

Soosalu: “Estonia was the target of one of the first ever national cyberattacks in 2007, and therefore cybersecurity as an issue is not new to our general public. However, being one of the most digitalized countries in the world, Estonia relies heavily on its digital services and needs to both create awareness and invest in being as cyber resilient as possible. The lack of a skilled workforce is clearly a vector of risk. Compared to five years ago, the legislation has evolved. Today, many more sectors are obliged to follow information and cybersecurity standards, hire information security officers, and dedicate budget to cybersecurity. The topic of cybersecurity is here to stay, and we will need to do our utmost to create an interested and competent workforce for these profiles. Hopefully, the initiatives named above (Question #2) will help contribute to this, and we will soon see more women and more IT and cyber enthusiasts in the job market.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Strengthening the cyber workforce appeared first on Atlantic Council.

]]>
Soofer quoted in Radio Free Asia on North Korean missile developments https://www.atlanticcouncil.org/insight-impact/in-the-news/soofer-quoted-in-radio-free-asia-on-north-korean-missile-developments/ Thu, 09 Feb 2023 19:42:53 +0000 https://www.atlanticcouncil.org/?p=616700 On February 9, Forward Defense Senior Fellow Dr. Robert Soofer was quoted in an article by Radio Free Asia on the recent display of long range missiles by North Korea (DPRK) on February 8th during a military parade to mark the 75th Anniversary of the founding of the DPRK’s army. Soofer stressed the ‘alarming’ development […]

The post Soofer quoted in Radio Free Asia on North Korean missile developments appeared first on Atlantic Council.

]]>

On February 9, Forward Defense Senior Fellow Dr. Robert Soofer was quoted in an article by Radio Free Asia on the recent display of long-range missiles by North Korea (DPRK) on February 8 during a military parade to mark the 75th anniversary of the founding of the DPRK’s army. Soofer stressed the ‘alarming’ development of the DPRK’s nuclear arsenal and missile capabilities.

We know they have the missiles and the nuclear warheads. We don’t know for certain whether they can successfully reach the U.S. homeland and survive reentry into the atmosphere…Implications are big for U.S. homeland missile defense

Robert Soofer
Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

]]>
Avoiding the success trap: Toward policy for open-source software as infrastructure https://www.atlanticcouncil.org/in-depth-research-reports/report/open-source-software-as-infrastructure/ Wed, 08 Feb 2023 14:25:07 +0000 https://www.atlanticcouncil.org/?p=603755 Open-source software (OSS) sits at the center of almost every digital technology moving the world since the early 1980s—laptops, cellphones, widespread internet connectivity, cloud computing, social media, automation, all the rainbow flavors of e-commerce, and even secure communications and anti-censorship tools.

The post Avoiding the success trap: Toward policy for open-source software as infrastructure appeared first on Atlantic Council.

]]>

This report was drafted in collaboration with the Open Source Policy Network, a network of OSS developers, maintainers, and stakeholders convened by the Atlantic Council’s Cyber Statecraft Initiative to develop community-led strategy and policy recommendations for OSS.

Executive summary

High-profile security incidents involving open-source software (OSS) have brought the ubiquity of OSS and the unique challenges its communities face to the attention of policymakers in the United States, EU, and beyond. For policymakers seeking to support the security and sustainability of OSS as a shared resource, this report builds on an important perspective on open-source software: OSS as Infrastructure. OSS is code published under a license that allows anyone to inspect, modify, and re-distribute the source code. This helps developers share and re-use solutions to common problems, creating such efficiencies that some estimate that 97 percent of software depends on OSS. OSS ranges from small components for illustrating graphs to entire operating systems. Contributors include individuals working in their free time, staff at large companies, foundations, and many others. The ecosystem is community-based, with many governance structures to manage contributions and maintenance.

This report compares OSS to three infrastructure systems—water management systems, capital markets, and networks of roads and bridges—and draws on existing policy vehicles from each to suggest policy that supports the sustainability and security of OSS as a communally beneficial resource.

Software borrows metaphors from water systems, including “upstream” and “downstream” relationships between packages and the end products that rely on them. Entities that use water from the ground or rivers do not assume its potability or perpetual availability—instead, they ensure the water is fit for their varying needs. OSS consumers have a responsibility to ensure the OSS they consume is well supported and secure, and the largest OSS users have the most responsibility for supporting ecosystem sustainability. OSS also bears similarity to capital markets, facing compounding, systemic risks, as chains of software dependencies can make a single OSS project a point of failure for many downstream systems. These risks intensify when there is little transparency or accurate reporting available to consumers—or regulators—to evaluate and mitigate risk. Finally, OSS has previously been compared to roads and bridges, and the comparison bears out in how insufficient investment in ongoing support creates risk over time. The collapse of a bridge—or the discovery of a vulnerability in a widely used OSS package—can focus attention and investment, but continuous, mundane maintenance to prevent such crises often falls by the wayside.

Taken together, these infrastructure systems—and the policy vehicles that support them—provide key principles for policymakers looking to support open-source software as infrastructure:

Encouraging responsible OSS consumption:

  1. Get government to “walk the walk” of being a responsible OSS consumer by establishing one or more Open Source Program Offices in the federal government to help agencies manage their OSS strategy, policy, and relationships.
  2. Develop an OSS Best Practices framework through NIST that incorporates risk assessments and contribution back to the OSS ecosystem. Industry and government could use the framework for self-assessment, and government could use it to help inform procurement evaluations.
  3. Develop, through OSS-mature companies and nonprofits, a standard of best practices for contributing to OSS to bring in more OSS Good Samaritans from smaller organizations.

Mitigating Systemic Risks:

  1. Create an Office of Digital Systemic Risk Management (ODSRM) within the Cybersecurity and Infrastructure Security Agency to identify systemic digital risks, including key widely used and at-risk OSS packages for targeted support.

Providing resources with security and sustainability in mind:

  1. Establish a target-of-opportunity funding program to support maintenance and incident-response work for systemically important OSS projects.
  2. Establish an OSS Trust Fund to provide sustainable and long-lasting investments in the security and maintenance of OSS code and the health and size of OSS maintainer communities.
  3. Develop an adopt-a-package program through which companies provide resources to support ongoing maintenance and vulnerability mitigation for OSS packages they depend on. Such a program could encourage more small and non-IT-sector companies to take part.

1. Introduction

Open-source software (OSS) sits at the center of almost every digital technology moving the world since the early 1980s—laptops, cellphones, widespread internet connectivity, cloud computing, social media, automation, all the rainbow flavors of e-commerce, and even secure communications and anti-censorship tools. OSS, developed without exclusive ownership by globe-spanning communities, has enabled engineers, scientists, and entrepreneurs alike to build great things and make momentous technological advances.

Much like the transcontinental rail systems of the nineteenth century and the intermodal shipping container system of the twentieth, OSS is an infrastructure that enables and shapes social, political, and economic activity across the world. Like the shipping container system, and even more than the highly visible railroad, OSS has long gone underrecognized outside of expert communities for the influence its code and developers have on the world.

That lack of recognition began changing only recently as OSS has come to the fore outside technology communities, with interest from philanthropic investors and grantmaking as well as congressional hearings after the December 2021 log4shell vulnerability.1,2 The challenge with much of this attention is its emphasis on there being something wrong with OSS, something “broken” or “inherently weaker” with the code that needs fixing. The mindset of putting out a fire in open source, without critically reevaluating the relationship between OSS developers and consumers as well as the need for material acknowledgment of the importance of open-source code, threatens the long-term sustainability and security of OSS.

Pathbreaking research from Nadia Eghbal3 in 2016 helped present the public-policy challenge regarding OSS used to build essential technology systems. Not just an issue of shortfall in security, the OSS development model poses a basic problem of equity and value. OSS separates sale value, the amount a consumer is willing to pay for a free product, and use value, the amount this consumer gains by using it—an issue called out as early as 1997 by Eric Raymond.4 There is no clear market solution when conventional mechanisms to assign a value at sale and fractionally return that value to developers do not work. This kind of gap in a market opens a clear lane for public policy to do more than just support this infrastructure through the public purse. A survey conducted for this report,5 discussed in more depth in the appendix, shows 65 percent of respondents agreed or strongly agreed on the necessity of a government role for the long-term health of the ecosystem. Moreover, 70 percent saw direct government funding as necessary to ensure this.

Figure 1. Survey response
Figure 2. Survey response

However, this is not to say that government is the only relevant player. Respondents indicated that, while they largely thought a government role in supporting the OSS ecosystem was requisite for its long-term health, they did not necessarily see it as the main party responsible for stepping up to the plate. This reflects a common thread of argument throughout this report: the criticality of OSS projects is determined not by their creators but by those using the package, and accordingly, responsibility for the ecosystem primarily rests in the hands of its largest beneficiaries—here, industry.

What are we doing here?

This report builds on previous research by the Atlantic Council and others, as well as the collected insights of the Open-Source Policy Network,6 to argue that public policy can address the system’s shortfalls by approaching OSS as infrastructure. Making policy to support and sustain OSS as infrastructure helps shift the view of this code from a place of fear of security vulnerabilities to an understanding of OSS as a critical component of an efficient software ecosystem, while still acknowledging the important role policy holds in improving security writ large.

When policy focuses only on terrible, potential outcomes, its ideas tend to reflect that bias toward fear, but this need not be the framing for OSS. Open source enables and solves much more than it imperils. Its security is as much a guarantor of continued value to users large and small, from individuals to national intelligence agencies, as it is a bulwark against malicious intent.

While OSS has come back to attention as an issue of national policy in the European Union (EU), and indeed become one for the first time in the United States, in some ways as a product of fear and calamity, opportunities run much deeper. Infrastructure of such scale and magnitude is supported, reinforced, and amplified—not fixed in a brief whirlwind of activity—much like the consistent provision of clean water, roads and bridges, and healthy capital markets. This report proposes clear models for sustained OSS support and offers guidance on how governments in the United States, the EU, and its member states can implement such models.

Much like roads or bridges, which anyone can walk or drive on, open source code can be used by anyone…This type of code makes up the digital infrastructure of our society today.

Roads and Bridges: Unseen Labor 7

This report identifies key principles of OSS development and use. It relates them to other physical infrastructures for which there are mature policies and laws in an ensemble approach to combine nuance and tangible recommendations. The report points policymakers toward adaptable policies addressing more familiar forms of infrastructure that serve as case studies for government support of OSS. There are two reasons for this work.

First, as tangible as the infrastructure comparison is, OSS also has useful differences from physical infrastructure that offer opportunities for nuance. The open-source ecosystem is far more varied, complex, and dynamic than most physical infrastructure. Eghbal, for example, explains in detail the many differences between OSS and her chosen roads and bridges analogy.8 Obscuring that nuance can lead policymakers to ignore obvious benefits—the substantial human communities involved in building and maintaining OSS, for example. OSS is, ultimately, the product of people with a variety of motivations, not the least of which are pure enthusiasm, curiosity, and a desire for community. Given the ecosystem’s overwhelming variety, it is often more accurate to understand OSS as an expression of social interaction and group problem-solving. Rather than designed top-down, it is infrastructure that emerges.9 OSS is fundamentally free speech in machine-readable form, not exquisite public works produced under a single engineering vision. Dynamic, interwoven groups of individuals produce, modify and maintain the code, rather than it being a commodity, product, or service per se, which carries significant ramifications for law and policy, as well as the infrastructure analogy.10

Second, as policymakers consider OSS in the larger context of significant cybersecurity policy in the United States, a set of guiding principles would help predict and model policies’ impact on OSS. Common physical infrastructure shares similarities to OSS: both support critical functions, provide dependable services, offer subtle and often unseen service delivery, function through systems of decentralized control, and more. Government has long engaged in infrastructure policy, so drawing on those more familiar frameworks offers opportunity to hone engagement with, and support for, the OSS ecosystem.

To better capture the complexity of the OSS ecosystem, this report offers not one but three infrastructure analogies for OSS policy. They are water-management systems, capital markets in the financial services sector, and roads and bridges from Eghbal’s report. The comparison between OSS and water-management systems invokes both systems’ sprawling networks of producers, intermediaries, quality assurers, and varied use cases. It also highlights the relationship between the degrees of usage and responsibility to the overall sustainable functioning of the ecosystem and discusses policy models based on Nevada water law and federal regulations around funding and protecting volunteer clean-up efforts. The comparison to the financial sector focuses on the nature of risk and transparency in both domains, where a variety of modular, interconnected, and aggregated items (projects in OSS, assets in finance) create nodes of risk and leverage and where risk management relies on insight into the location and function of underlying system components. The section looks at policy efforts to identify and manage systemic risk created in these networks of dependence. Last, the roads and bridges comparison builds on Eghbal’s report, highlighting the importance of continual maintenance, funding, and tailored intervention across an interconnected network. It looks to the Highway Trust Fund (HTF) and adopt-a-highway programs for models of funding and support for key infrastructure.

Open source software is part of the foundation of digital infrastructure that promotes a free and open internet.

– S.4913, The Securing Open Source Software Act of 2022 11

For each analogy, the report addresses the prominent characteristics shared with the OSS ecosystem, explores the comparison in depth, and surfaces guiding policy principles before offering examples of relevant US and EU policies as potentially useful models for OSS. Following these analogies is a discussion of some existing government policies toward OSS and specific recommendations.

This report aims to develop tangible example policies for the United States and European Union to support OSS as infrastructure and point policymakers toward existing policy vehicles that government can readily modify and adopt to better support and engage with the OSS ecosystem. The report does not seek to make definitive statements about what open source is or is not through these analogies. Rather, the goal is to capture a snapshot of its most essential features and most consequential participants. Any of the analogies can be extended far past usefulness, and policymakers should approach each keeping in mind the essential truth that, while all models are wrong, some (including, we believe, these) are useful nonetheless. Before diving into the analogies, though, this report first discusses the open-source ecosystem as it is, highlighting key principles and addressing common misconceptions.

2. The open-source ecosystem

While the motives of software developers can vary from securing a paycheck to satisfying personal curiosity, most software itself ultimately strives to carry out a task or solve a problem. Open-source software (OSS) is an acknowledgment that many such problems are similar and repeatedly encountered by developers. OSS works by making one solution to a problem available to all to re-purpose and re-use, which likely results in a strong return on investment (ROI),12 both financially and socially.13 While there are several different legal approaches to defining and licensing what is “open source,” the common OSS philosophy grants users and consumers the rights to inspect, modify, and redistribute software—its source code is “open.”14 In this, OSS generally differs from closed-source or proprietary software by providing these additional rights.

The result is a vast network of overlapping communities principally involved with developing, maintaining, and integrating OSS. These communities range from volunteers to paid professionals, with participants who exist entirely outside the for-profit technology industry and myriad others who are full-time employees from the likes of Google, Microsoft, and Amazon.

While open source as a philosophy predates the internet—witness the chaotic ballet of licensing and development values that characterized the 1969 birth of Unix and its fractured gestation as one example15—the internet proved a tremendous accelerant to OSS development. Indeed, the emergence of online communities developing and maintaining open-source code helped meaningfully differentiate the internet from precursor telecommunications networks and gave tangible form to Licklider and Taylor’s vision of creative communications among thinking machines.16

Figure 3. Dependencies and contributions

There are several key characteristics of the open-source ecosystem for policymakers to keep in mind. First among these is its sheer scale and variety. Though treating open source as a monolithic concept is a convenient abstraction—and for high-level policy, a necessary one at least up to a point—the real landscape is staggeringly diverse. There are communities built around specific programming languages, from commonly known Python to the deliberately esoteric Befunge.17 Some communities center on specific projects like the Linux kernel, and others orbit downstream functions like encrypted communications tools or specialized statistical analysis packages. Some projects serve simple ends like correctly adding characters to the left of a string or number.18 Others provide word-processing programs19 or even entire operating systems, such as Linux and its many distributions.20 There are open cloud platforms such as OpenStack and open container orchestration systems like Kubernetes. There are also open-source code compilers, web servers, media players, and so on—some open source functions as standalone applications, some as deeply buried components for repurposing in different contexts. Some compiles source code into executable binaries, some builds software, some analyzes code for bugs, and so on.

The relationships between OSS projects and the larger software world are also complex and widely varying. A useful term here is “depth in stack,” referring to how deeply buried within an overall product or application OSS and other components can be. The most straightforward use of OSS might be in user-facing applications—for example, instead of purchasing Microsoft Word, one might download and use LibreOffice, an open-source word processor that provides largely the same functions as Word.21 A similar simple example of incorporating OSS into a project could include an academic researcher writing a data-analysis script in R, a commonly used statistics language. They might include the lines “install.packages(ggplot2)” and “library(ggplot2)” at the top of their script, giving them access to a variety of graphing tools and functions as they analyze a dataset.22

Figure 4. Buried OSS relationships

Other instances of OSS reliance run far deeper and are more challenging to map out. A user in the simple act of watching a show on Netflix relies on an immense variety of OSS, from the streaming platform’s own open-sourced projects to the guts of the underlying Amazon Web Services (AWS) cloud instances,23 which include server operating systems, container orchestrators, and innumerable component services. The log4shell incident highlighted just how deeply buried OSS dependence can be and, accordingly, how challenging the task of identifying dependence is. One report found that 60 percent of log4j uses were indirectly rather than directly implemented, challenging remediation efforts.24 One study by Qualys found that as of March 2022, some 30 percent of log4j instances remained unpatched.25 This pattern holds across the ecosystem, where dependence is rarely obvious and easily identified when OSS components lie buried beneath indirect relationships and obscure references.
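The gap between direct and indirect dependence described above can be made concrete with a short sketch. The package names and dependency graph below are hypothetical, chosen only to illustrate the pattern; real software-composition-analysis tools operate on far larger, versioned graphs.

```python
# Hypothetical direct-dependency graph: each package maps to the
# packages it declares directly. All names are illustrative only;
# "log-lib" stands in for a deeply buried package like log4j.
DIRECT_DEPS = {
    "web-app": ["http-server", "report-gen"],
    "http-server": ["log-lib"],
    "report-gen": ["pdf-writer"],
    "pdf-writer": ["log-lib"],
    "log-lib": [],
}

def transitive_deps(package, graph):
    """Return every package reachable from `package`, direct or indirect."""
    seen = set()
    stack = list(graph.get(package, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

# "web-app" never declares log-lib directly, yet relies on it twice over.
print("log-lib" in DIRECT_DEPS["web-app"])                 # False
print("log-lib" in transitive_deps("web-app", DIRECT_DEPS))  # True
```

In this toy graph, a vulnerability report naming only "log-lib" would not match anything in "web-app"'s declared dependencies, which is exactly why the indirect uses of log4j proved so hard to find and remediate.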

While all the above mainly considers the open-source ecosystem through the lens of the code, keeping its human basis in mind is critical. Members of the open-source ecosystem can wear many hats, from running a hobby project to integrating OSS into industry products in their day job, often moving between different communities, contexts, and ecosystems. Even the common roles for a given open-source project are fluid—a developer might open-source one of their projects and act as its maintainer while they continue to contribute.26 Down the line though, either from lost interest in the project or not enough time to dedicate to its maintenance, a developer might call in a well-known contributor as a maintainer, either transferring the project over entirely or creating a team of maintainers. Different communities rely on different governance models, from maintainer-controls-all to elected positions for a project or select individuals relied upon for commit reviews. These OSS participants also distribute geographically, their contributions enabled by the foundational transparency of the ecosystem.

It is helpful to frame open source as many different, interacting ecosystems. They evolve, respond to stimuli, compete, collaborate, have cultures, and follow norms. Actions that impact an open source ecosystem can have ripple effects beyond that ecosystem – and beyond the world of proprietary technology or even technology altogether.

Julia Ferraioli 27

While OSS directly invokes “the code” and its developers, there also exists a staggering array of intermediary entities supporting and shaping the software side of things. Code hosts (sometimes called “forges”) store the actual code in either public or private repositories—for example, Microsoft’s GitHub, though there are myriad other hosts.28 Registries or indices, like Node Package Manager (npm) and the Python Package Index (PyPI), record official versioning and documentation for some packages, though their code might reside on a code host like GitHub or be mirrored there. Package managers like Python’s Preferred Installer Program (PIP) are the tools that, starting with a user command, retrieve the necessary code from a repository. At the more human level, nonprofits—many of them business leagues, like the Linux Foundation or Open Source Collective29—provide financial support for programs, and others, like the Open Source Initiative, manage licensing definitions.30 Some groups might provide security tooling or developer support to specific projects—for instance, the Alpha-Omega project assists maintainers of critical open-source projects.31
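The division of labor between registry and package manager can be sketched in miniature. The toy index and package names below are assumptions for illustration; a real package manager such as pip additionally parses version constraints, verifies hashes, and downloads archives from the registry over the network.

```python
# Toy package index: maps a package name to its published versions,
# newest last. A real registry (e.g., PyPI) also stores metadata,
# hashes, and download URLs; everything here is illustrative.
INDEX = {
    "leftpad": ["1.0.0", "1.1.0", "1.3.0"],
    "graph-tools": ["0.9.0", "2.0.0"],
}

def resolve(name, index):
    """Pick the newest published version of a package, as a minimal
    package manager would before fetching it from the registry."""
    versions = index.get(name)
    if not versions:
        raise LookupError(f"package {name!r} not found in index")
    return versions[-1]

print(resolve("leftpad", INDEX))  # -> 1.3.0
```

The point of the sketch is the separation of roles: the registry records what exists, while the manager decides what to install, and the code itself usually lives on a host like GitHub.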

Figure 5. Maintainer and contributor relationship

All this is to say that the open-source ecosystem is complex. With that complexity comes disagreement, and assuming consensus among the ecosystem’s participants is an oversimplification similar to presuming that the code is uniform and governance structures straightforward. Some of the key debates among OSS communities will have direct policy implications. Some maintainers worry about where their projects might end up used,32 some are wary of corporate involvement in the space shaping project direction and governance,33 and others see OSS as a path toward a digital right to repair.34 The survey conducted for this report reflects this diversity in priorities well. What respondents considered the greatest source of risk for the OSS ecosystem ranged widely, including technical concerns about memory-safe languages, practices for transferring project ownership, government overregulation, misunderstood or disregarded OSS community values, unknown and deeply intertwined dependencies, the insufficiency of economic models, maintainer burnout and overburdening, and even maintainer sabotage. Similarly, there was little consensus on what metric best captured the overall health and well-being of an open-source project community, with the number of active contributors and maintainers being the only standout answer, and not by a wide margin. As this is a policy report first and foremost, many of these discussions are out of scope here, but they are nonetheless important to policymakers.35

Figure 6. Survey response

3. OSS as infrastructure: Three analogies

Defining infrastructure

Infrastructure is the “…vitally important, if not absolutely essential…” component that enables people to thrive, to create, and to build.36 It is the underlying plumbing beneath great ideas. Some definitions lean toward the tangible: roads, bridges, software code, and computer networks. Others emphasize the economic categorization—infrastructure as a public good. However, not all kinds of infrastructure fulfill the strict economic definition of a public good as both non-excludable and non-rivalrous.

Even physical infrastructure is not so easily defined and sees a significant amount of “know it when you see it” classification—for instance, the Cybersecurity and Infrastructure Security Agency (CISA) lists sixteen critical infrastructure sectors, with the selection criteria emphasizing critical far more than infrastructure.37 OSS is present within traditional critical sectors, serving as infrastructure in a very literal sense.38 For this report’s purposes of guiding policy, significant similarity between OSS and infrastructure is sufficient, and there is plenty to find.

First, OSS handles many of the digital world’s unseen, “nitty-gritty” tasks upon which the larger digital ecosystem relies. Take, for instance, any of the following: OpenSSL, OpenStack, Kubernetes, the GNU Compiler Collection, BIRD, and Linux running on most large internet servers—all of these are core to digital services and largely hidden from end users.39 Another striking example is cURL, which stands for client URL and is informally pronounced “curl.”40 It is a command-line tool and library for handling data transfers, residing within internet servers, gaming consoles, automobiles, operating systems, smartphones, and more.41 Consumers rely on digital systems for communications, financial transactions, transportation, healthcare, and other vital services—and many of those digital systems rely on OSS.

Second, beyond this necessary but less visible support, both OSS and physical infrastructure scale massively beyond their immediate surroundings, enabling huge swathes of the economy, end-user products, and more. One frequently cited report from Synopsys found that 78 percent of code in surveyed codebases was open source, while 97 percent of codebases contained at least some OSS.42 Buried in the settings of every iPhone (Settings > General > Legal & Regulatory > Legal Notices) is a four-thousand-line-long, barely navigable list of all the licenses declared by the phone, many of which concern the open-source components it relies on—including, in iconic OSS style, “‘THE BEER-WARE LICENCE’ (Revision 42)…As long as you retain this notice you can do whatever you want with this stuff. If we meet some day, and you think this stuff is worth it, you can buy me a beer in return.”43
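The kind of aggregate license inventory surfaced in those settings screens can be approximated by scanning a source tree for license files, a much-reduced sketch of what software-composition-analysis tools do. The file names checked and the directory layout below are fabricated for illustration.

```python
import pathlib
import tempfile

# Common license-file naming conventions; real scanners check many more.
LICENSE_NAMES = {"LICENSE", "LICENSE.txt", "COPYING"}

def find_licenses(root):
    """Walk a source tree and collect relative paths of license files,
    the raw material for an aggregate notice list like a phone's."""
    root = pathlib.Path(root)
    return sorted(
        p.relative_to(root).as_posix()
        for p in root.rglob("*")
        if p.is_file() and p.name in LICENSE_NAMES
    )

# Fabricated vendored-dependency layout for demonstration.
with tempfile.TemporaryDirectory() as tmp:
    for rel in ("vendor/libfoo/LICENSE", "vendor/libbar/COPYING", "src/main.py"):
        path = pathlib.Path(tmp, rel)
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text("placeholder\n")
    print(find_licenses(tmp))
    # -> ['vendor/libbar/COPYING', 'vendor/libfoo/LICENSE']
```

Even this crude scan makes the report's point visible: the application code is one file, while the declared licenses, each marking an open-source component, accumulate throughout the tree.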

Third, much of what physical infrastructure accomplishes happens out of immediate public view and is easily taken for granted, despite its centrality to a smoothly functioning society. Rarely does the end user think of complex tangles of transmission lines, transformer hubs, and powerplants when flicking on a light switch—except when the lights stay dark. Similarly, most end users are unaware of the role that OSS plays in the digital systems that underpin their daily lives. Likewise, that dependence remains underappreciated until disruption of the end service.

Fourth and finally, the variety of forms of “ownership” or stewardship of OSS mirror the complex web of federal, state, local, and private ownership of physical infrastructure. In physical infrastructure, some sectors see almost complete federal ownership, some feature neat division among state or local governments and industry, and others rely on the many distribution patterns in between these.44 For OSS, some projects are individually maintained, others housed in nonprofits or funded by foundations or trade organizations, some with support from large information technology (IT) vendors, or even maintained and curated by for-profit companies, and more. Some technology companies develop software projects in-house before “open sourcing” them out into the world. The variety of governance models in both domains requires careful, targeted, and flexible policy.

Industry players have repeatedly emphasized that OSS insecurity largely reflects the challenges of securing any kind of software—vulnerabilities are inevitable and agnostic to licensing.45 The US government, meanwhile, has focused its most prominent efforts on OSS through a security lens: the first bill in Congress addressing OSS as an ecosystem, S.4913, is the Securing Open Source Software Act of 2022, while congressional testimony and other spurts of government attention tend to react to security incidents like log4shell and Heartbleed. In one dataset of OSS government policies, security and modernization were the two most popular stated purposes for US policies related to OSS, with security holding the majority in the proposed legislation.46

This security focus does not and should not imply that OSS is in any way less secure than proprietary code. The two are not so easily distinguished, and the ability of anyone to review OSS for vulnerabilities should, at least in theory, make it more securable, if not secure, than obscured proprietary software. Rather, the fact that OSS underpins so much software and modern infrastructure means that its security, which is subject to some different incentives and forces than proprietary offerings, is of notable importance. This mirrors how CISA focuses on securing infrastructure not because it is innately insecure, but because it is critically important to the national interest. OSS is already as commonplace, structurally critical, and hidden from end users as rebar inside the reinforced concrete of a bridge span. It is as critical, mundane, and—in some circles—unappreciated as the water treatment plants that ensure healthy drinking water or the catenary wires above an electric train. Where that criticality exceeds the ability of other policy levers to create change, a security lens helps prioritize action and investment, especially when shaping industry behavior.

Three analogies

Treating OSS as infrastructure also invites other forms of engagement without exclusivity. While some governments might focus on supporting the security of OSS insofar as it is infrastructure, others can focus on investing in it for the holistic benefits to society or for the influence it might provide their countries in shaping the future social impact of important technologies. Infrastructure corresponds to investment and provides a ready framework for international cooperation. An infrastructure framing allows stakeholders to hold independent priorities under common, unifying principles.

Different characteristics of the OSS ecosystem evoke different kinds of infrastructure. This section describes the report’s ensemble model: three analogies each mapping from principles shared by open source and a form of infrastructure to offer policy takeaways for the open-source ecosystem. Each analogy uses the language of tangible infrastructure alongside real-world policies that invest in, and support, this infrastructure. The table below summarizes these shared principles, infrastructure comparisons, and policy takeaways, in addition to the broader commonalities between physical infrastructure and OSS noted so far.

None of these analogies is complete on its own. Taken together, they present a practical view of much of what makes OSS work and work well at that. The takeaways intend to steer policymakers toward practical, considerate models for policy action shaped by lessons previously learned and concepts properly ordered.

Figure 7. Table of shared principles of infrastructure and open source

This section also provides several direct models for the beginnings of government support for OSS—these are not prescriptive policy recommendations but rather tangible examples of how the investment of funds and other resources can help better support OSS. These models highlight effective parallels to OSS policy challenges either through the problems and questions they address, the intervention strategies they offer, the systems dynamics they navigate, or some combination.

Water management systems

Water management and distribution systems share two crucial characteristics with the open-source ecosystem. Most visible are both systems’ continuous, directional relationships. Software development speak already roots itself in hydrologic nomenclature. The “upstream” and “downstream” relationships borrow from literal descriptions of rivers to describe how choices along supply chains impact different participants. Often, though not exclusively, these relationships explain the trickle-down impact of upstream incidents—for instance, the downstream users exposed to the recent log4shell vulnerability, or when the deletion of a little-known package called left-pad briefly broke websites across the world.47 For water management and distribution systems, an upstream issue with a dam might impact water levels downriver, or changes in weather patterns might disrupt aquifer replenishment, causing shortages for downstream users, whether industrial, agricultural, or otherwise.
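The trickle-down dynamic described above can be sketched as a simple graph traversal: given a set of dependency edges, a breadth-first search finds every downstream package transitively exposed to an upstream incident like the left-pad deletion. The package names and edges below are invented for illustration, not real data.

```python
from collections import defaultdict, deque

# Hypothetical dependency edges as (downstream, upstream) pairs, i.e.,
# "framework-x depends on left-pad". All names are illustrative.
EDGES = [
    ("website-a", "framework-x"),
    ("website-b", "framework-x"),
    ("framework-x", "left-pad"),
    ("report-tool", "left-pad"),
]

def downstream_impact(package, edges):
    """Return every package transitively affected if `package` breaks."""
    dependents = defaultdict(set)
    for downstream, upstream in edges:
        dependents[upstream].add(downstream)
    affected, queue = set(), deque([package])
    while queue:
        for d in dependents[queue.popleft()]:
            if d not in affected:
                affected.add(d)
                queue.append(d)
    return affected

print(sorted(downstream_impact("left-pad", EDGES)))
# ['framework-x', 'report-tool', 'website-a', 'website-b']
```

Even in this toy example, a single upstream node reaches every endpoint, which is the shape of exposure that log4shell and left-pad revealed at ecosystem scale.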

Figure 8. Water management and open source

This straightforward language about chains of dependency and shared exposure also describes another similarity between water infrastructure and OSS: the obligation of its users to contribute to the sustainability of the larger ecosystem, from statewide apportionment of the Colorado River to agricultural collectives deciding on the usage of local aquifers. For both water and OSS, a relatively small subset of users relies more heavily on shared resources than others. Hydroelectric facilities and large farms can use more water in an hour than an average household does in a year.48 Likewise, massive IT vendors ship widely used products incorporating numerous open-source projects, while a researcher might rely on only a handful of packages aiding in statistical analysis.

While the policy solutions to protect the sustainability of water and the security of OSS do not map perfectly (a hard quota on industry use of OSS makes little sense, for example, as OSS is a non-rivalrous resource), the general ethos is critical: the largest users carry the largest obligations (and capacity) to contribute back to the sustainability of the ecosystem. Just as growing populations and a changing climate mean that water consumers and policymakers need to invest in conservation and sustainability,49 the growth and increasing criticality of the OSS ecosystem means that OSS consumers and policymakers must understand that the availability and innate usability of the underlying code cannot be guaranteed without support. Few expect water taken directly from a stream or pond to be immediately potable. Neither should consumers assume the security and independent governance capacity of OSS projects pulled into products without some level of security assurance and code review. Again, not because OSS is any less secure than proprietary offerings, but because it is all too likely that projects were developed without specific consumer usage in mind, and therefore consumers should not expect them to cater to their exact management needs. An overriding principle of open-source licenses is that this code is delivered "as is."

Water infrastructure also highlights the immense variety in use, governance, and creation in the open-source ecosystem. Just as water fuels textile production, energy generation, and individual consumption alike, OSS has a wide variety of use cases, including hobbyist tinkering, academic research, internet functionality, and business- and product-critical operations. Open source and water management systems also feature large networks of intermediaries between easily conceptualized endpoints (e.g., developer and end user, mountain spring and sink faucet). Water does not just flow, uninterrupted, from a stream or spring into a residential tap, but instead twists through a series of reservoirs, canals, treatment facilities, and plumbing. In the same way, much OSS is incorporated into software projects, and those projects into others, passing through project maintainers, repository hosts like GitHub, private mirrors within companies, curators like Red Hat, auditors like the Open Source Technology Improvement Fund (OSTIF), transitive dependencies of other projects, and more before ever reaching a user.

Many OSS stakeholders worry that government investment and support will bring onerous obligations and regulations for developers,50 whether in the form of liability or excessive documentation, that risk dissuading developers from producing open-source software. Water management systems provide a clear parallel example of an alternate approach. In the same way that companies and individuals do not assume the purity of water in unknown streams or springs, neither should they assume that volunteer developers, often uncompensated for their work, have provided perfectly secure code and will bear total responsibility for repairs and upkeep. Most open-source licensing bears out this relationship, including something to the effect of the Apache 2.0 license's phrasing: the "licensor provides the work (and each contributor provides its contributions) on an 'as is' basis, without warranties or conditions of any kind."51 OSS users, especially the largest and best-resourced, should bear more of the responsibility for supporting the security, and appropriate selection, of open-source software, rather than using it blithely and trusting warranties never promised. Among more mature OSS consumers—particularly large IT vendors—this relationship is well realized, with vendors like Microsoft, Google, and others investing significant funds and developer time into the OSS ecosystem.52 Governments can participate in similar relationships by funding OSS development and potentially even contributing to projects themselves, setting an example that may spur other large entities to act in kind.

The similarities between water management systems and OSS, including directional dependence, complex webs of intermediaries, and the need for sustainable usage, suggest a paradigm for policymakers weighing potential engagement with the open-source ecosystem. Considering directional dependence prompts a more accurate understanding of the importance of intermediaries in OSS as well as a better starting point for understanding the criticality of different OSS components and how to preempt costly incidents. Instead of expecting open-source software to be perfectly stable, well-maintained, and fully secure upon import, OSS consumers can continue to take more responsibility for their usage and all its benefits, consequences, and attendant obligations. Considering those connections also emphasizes the existing network of intermediaries between developer and end user, which government must engage with rather than disrupt. Finally, the water-management comparison emphasizes that a sustainable ecosystem requires a proactive relationship between large users and the source: an affirmative responsibility to contribute back to the ecosystem. Organizations with high expectations for, and dependence on, OSS, be they public or private sector, should devote substantial resources to supporting the relevant communities in meeting those expectations. Failure to do so will leave the OSS ecosystem perpetually under-supported and increasingly unable to support more complex and systemically critical use cases. That open source might become unsustainable through such overuse, or through integration into critical applications without responsible consideration, would imperil the benefits of OSS to all.

Nevada Water Legislation: Mandate responsible use

Regulations surrounding water use, allocation, and sustainability in the United States are largely the purview of states or multi-state consortiums.53 Even where the federal government does take a more active role in water safety standards, such as with the Clean Water and Safe Drinking Water Acts, considerable room for state governments to take the lead exists, by design.54 Water management legislation in Nevada, the country’s most arid state, offers two examples of policy vehicles well-suited to the OSS ecosystem: Senate Bills (SB) 47 and 74, both passed in 2017. First, in SB 47, Nevada adopted the stance that “it is the policy of this State…To manage conjunctively the appropriation, use, and administration of all waters of this State, regardless of the source of the water.”55

From the OSS perspective, this is a straightforward acknowledgment of how usage drives criticality—that, regardless of the source of code or water, effective policy lies in governing where and how software is consumed as much as or more than how it is developed. In this sense, for OSS particularly, policymaking that takes the existence of OSS as it is, rather than aiming toward an unrealized ideal for the code itself, is useful, and that approach is well suited to Nevada, whose primary sources of water generally originate in other states.56 SB 74 offers more concrete guidance, requiring water suppliers—here, analogized to OSS intermediaries—to develop water conservation plans,57 with some additional requirements for larger providers.58

Both SB 47 and SB 74 put a large burden for the sustainability of the state's water use on intermediary water suppliers—ostensibly those pulling water from its sources and sending it to users for various "municipal, industrial, and/or domestic purposes" downstream.59 For OSS, this compares with ensuring responsibility lies with those who take open-source packages and use them in downstream applications, rather than expecting the river of OSS itself to be clean and self-sustaining to a degree sufficient for uses outside its control (or expecting that of the repositories, which resemble the aquifers and reservoirs here). These bills focus on water suppliers not just as users of the resource but as the intermediaries with much sway over the connective infrastructure, specifically calling out their role in developing "standards for water efficiency for new developments" and reducing leaks, among other provisions.

There is no shortage of OSS,60 but insofar as conservation serves as a synonym for sustainable use, federal OSS policy can draw on this framing. A policy pivot away from just assessing the risks of using OSS—say, as required by many conventional supply chain risk management programs—and toward broader models of enforcing responsible use might include recommending an explicit Sustainable OSS Usage Plan as a signal that large OSS users are interacting responsibly with the ecosystem, inclusive of managing their risk posture but also of deliberate, systemic efforts to identify and support communities around critical OSS dependencies. There is much to be gained in shifting the focus of OSS policy to improve security from the developers and their code ("the source") to the framing of aggregate usage, reliance, and responsibility.

Moreover, the specific requirements of the Nevada conservation plans amount to a call for suppliers to explicitly understand their place and role in the larger ecosystem. Regarding intermediaries, more policies from both government and industry might focus on the ability of large code-hosting platforms to leverage their position as natural bottlenecks in the ecosystem (as the means for many to access repositories and store their code) to provide useful tooling at scale to OSS communities. Some of this work is underway, and this is not a claim that it is insufficient but rather a call for policy to capitalize on those points of outsized returns on tooling investment and integration. Importantly, this is not a call for platforms to be responsible for the safety of all the code they host, but rather for them to aid the distribution and usability of tools to projects—to provide tools and capability for responsible use and security-conscious development. In line with the water analogy, consideration of the context of different use cases is key—just as water powering hydroelectric dams need not be drinkable, different use contexts imply different support obligations and maintenance standards.

Good Samaritan Initiative: Limit liability for volunteers

Federal water law, meanwhile, has useful models for encouraging external support for the OSS ecosystem—specifically, for unmaintained dependencies. The Environmental Protection Agency's Good Samaritan Initiative helps facilitate the cleanup of abandoned mines, a significant source of water pollution, with over half a million abandoned mines estimated throughout the country.61 Volunteers assist in the cleanup of these abandoned mines, providing a great benefit to their communities, which often rely on the same water impacted by the pollution. The Good Samaritan guidance protects those volunteers explicitly from liability for their efforts, effectively lowering the bar to entry for helpful ecosystem contributions. Some federal programs go further by directly funding cleanups of water systems, though these often come within larger spending packages rather than pulled from specific funds.62

There are two OSS parallels here: unmaintained projects, and organizations doing support work (e.g., security auditing or incident response support). On the former, a Tidelift study in 2019 found that between 10 and 20 percent of common OSS packages lacked active maintainers, posing obvious security and sustainability challenges and likely arising as a symptom of limited developer time and resourcing.63 Organizations that support OSS projects are just an extension of this parallel beyond the common language of abandonment.

Government and industry might help improve the overall OSS ecosystem’s health through incentives for Good-Samaritan-style engagement and by continuing to maintain the widely understood protection for OSS developers and maintainers against liability arising from the downstream uses of their components. This comparison points to the importance of policymakers vetting proposed policies relating to security requirements for OSS to ensure they do not create additional compliance-related liability for OSS developers, contributors, or maintainers, which might paradoxically deter individuals and organizations from contributing to the OSS ecosystem.

In addition to liability protection, an OSS policy equivalent could emphasize broader support and investment by funding external support groups (much of which already takes place through the private sector), guiding them toward critical under- or un-supported projects, and rewarding and aiding the "adoption" of orphaned projects still in use. There has already been some consideration of these approaches outside the public sector, such as the Alpha-Omega project and several academic studies64—providing the basis less for reinvention than for renewal of government support as part of a broader engagement with the OSS ecosystem.

Environmental regulations, including water management systems, in the EU are guided by the “polluter pays principle,” which states that polluting entities should be responsible for costs like pollution control and prevention.65 The principle encompasses a wide variety of regulations targeting different industries including agriculture and manufacturing. The types of cost for which polluters are responsible also vary, funding anything from cleanups of pollution they caused to investigations and permitting efforts. The principle is explicitly included in several important pieces of regulation, such as the Water Framework Directive and Waste Framework Directive.66 Not all regulation is in line with the principle yet, but its inclusion in recent regulatory efforts and role guiding future policy demonstrates the EU’s emphasis on ensuring that those who use natural resources, resulting in their degradation, pay for the consequences of their actions so the public need not foot the bill.

Capital markets

A critical feature shared between financial markets and the open-source community is that both liquidity and OSS act as enabling inputs to a wide variety of other industries. Financial backing and loans from investors enable businesses and individuals to raise capital to overcome initial fixed costs, which is vital for getting businesses off the ground. Similarly, OSS allows businesses and individuals to save vast amounts of time and effort that would otherwise be spent re-solving similar problems—a critical input that helps overcome burdensome upfront investment. This enabling-input characteristic is true of many forms of physical infrastructure—in water management systems as noted above, as well as power grids, gas pipelines, transport networks, and more.

Capital markets, however, highlight the relationship between risk and transparency. In capital markets, debt or equity in real-world assets, stocks in companies, and mortgages back numerous financial instruments. Financial actors can manage their risk only by understanding the valuation and risk of these underlying components, and there are many intermediary entities such as ratings agencies that help create and provide this information. The 2008 financial crisis serves as a useful reminder of the consequences of failures in this system—when ratings agencies inaccurately appraised the risk of mortgage-backed securities, huge portions of the financial sector were left holding fundamentally unsound investments believed to be low-risk, leading to disastrous, global consequences.67 Without accurate transparency, sources of systemic risk went unidentified, unaddressed, and unmitigated, fueling a financial meltdown.

There are useful parallels for the OSS ecosystem here. Like financial instruments, OSS often serves as the building blocks for other end products. For consumers and producers, visibility into these components is necessary to improve risk-management practices. The entity that assembles a bundle of financial instruments—or a bundle of software that includes OSS components—holds a better perspective than the end user to understand the risks, as well as to know how to manage that risk through investment in upstream packages and projects. More transparency from assemblers can help recipients better understand the components within a product or a project and adjust their incident response and risk-management practices accordingly. The financial sector has developed procedures for assessing and describing risk, due to a combination of regulation, profit motive, and market demand. Industry-led development of tools and data to enable visibility into the use of OSS and other software components is already underway—software bills of materials (SBOMs) offer point-in-time insight into the components in a given piece of software (including open-source components), and ratings systems and metrics platforms like Supply chain Levels for Software Artifacts (SLSA), Community Health Analytics in Open Source Software (CHAOSS), and Open Source Security Foundation (OpenSSF) Scorecards offer aggregated insight into the security posture and maturity of those component projects.
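As a minimal sketch of the point-in-time insight an SBOM provides, the following parses a hand-written CycloneDX-style JSON fragment and checks it against a known-vulnerable component. The component names and versions are illustrative; real SBOMs carry many more fields, such as package URLs and hashes.

```python
import json

# A minimal, hand-written CycloneDX-style SBOM fragment (illustrative only).
SBOM = json.loads("""
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1"},
    {"type": "library", "name": "openssl", "version": "1.1.1k"}
  ]
}
""")

def components(sbom):
    """List (name, version) pairs declared in the SBOM at this point in time."""
    return [(c["name"], c["version"]) for c in sbom.get("components", [])]

# A downstream consumer can check a shipped product against a newly
# disclosed vulnerable component without access to the vendor's source:
vulnerable = ("log4j-core", "2.14.1")
print(vulnerable in components(SBOM))  # True
```

The value here is exactly the transparency the capital-markets analogy describes: the assembler discloses the underlying components, and recipients can reassess their exposure when conditions change.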

Figure 9. Capital markets and open source

At the systemic level, transparency and visibility into the use of OSS components can highlight where the wider digital ecosystem rests on a small number of critical packages, helping to prioritize support and investment on all fronts. Heartbleed, the left-pad incident, and log4shell illustrate this kind of risk—where disruption in a single upstream component has widespread effects, and in some cases, deep ones.68 The Census II report from the Linux Foundation and the Laboratory for Innovation Science at Harvard offers an example of the benefits of such system-scale analysis. The report used aggregated software-composition-analysis (SCA) data to identify open-source components widely depended upon across industry69—notably, the report identified log4j, the library impacted by the log4shell vulnerability, as one of those widely used packages (after the incident, unfortunately).70
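The kind of system-scale analysis Census II performed can be sketched, under heavy simplification, as counting how many distinct products depend on each package across aggregated manifests. The manifests below are invented for illustration and stand in for real SCA data.

```python
from collections import Counter

# Illustrative stand-in for aggregated software-composition-analysis data:
# each inner list is the set of open-source packages one product depends on.
PRODUCT_MANIFESTS = [
    ["log4j-core", "guava", "jackson-databind"],
    ["log4j-core", "netty"],
    ["log4j-core", "guava"],
    ["requests"],
]

def dependency_counts(manifests):
    """Count how many distinct products depend on each package."""
    return Counter(pkg for manifest in manifests for pkg in set(manifest))

counts = dependency_counts(PRODUCT_MANIFESTS)
print(counts["log4j-core"], counts["guava"])  # 3 2
```

Even this toy aggregation surfaces the risk concentration the report describes: one package appears in most products, so a single upstream vulnerability there has outsized systemic reach.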

The comparison to the financial sector also offers a model for how government might interact with industry and the open-source ecosystem. As noted, the private sector is already developing many of the tools that will help address risk with transparency. Government’s role in that space is best understood as one that supports and provides appropriate incentives, especially for adoption over prescription—for example, through its procurement policies, rather than supplanting these tools or intensive regulation. As in financial markets, government is well-positioned to guide ecosystem-scale efforts toward a better understanding of aggregated risk concentrations. And, as with financial market data, government may also need to consider how to safeguard data collected for that analysis, which may have proprietary or trade-secret sensitivities. For OSS, a list of critical projects would be as useful to attackers in guiding their efforts as to defenders. Finally, at the most abstract level, the relationship between transparency and risk to the larger system can help guide broad government strategy, emphasizing that transparency and openness are not just rhetorical values but practical tenets of extreme, tangible benefit to the stability of the overall ecosystem.

Financial Stability Oversight Council: Transparency toward proactive stability

Many proposed cybersecurity policies require a substantial level of system knowledge and data availability: they require being able to identify critical OSS packages across entities, the most significant users and beneficiaries of OSS, the overlap between projects that are unmaintained or under-resourced and that are key dependencies, and more. Policy vehicles from the financial sector, particularly those born out of the 2008 crisis, offer models for managing risk through transparency and an ecosystem-scale lens. Formed by the Dodd-Frank Act, the Financial Stability Oversight Council (FSOC) within the Department of Treasury works to "address several potential sources of systemic risk…[by] monitoring financial stability and designating…companies…and utilities as systemic[ally important]."71 Where it identifies systemically important financial market utilities (FMUs), it can subject them to additional regulation in concert with the wide array of relevant government offices and regulators.

A parallel office for OSS would serve to identify projects, dependencies, and even entities that constitute systemically important infrastructure, and, in place of regulations, might offer those nodes of risk more targeted and comprehensive support, coordinating among government cyber authorities and industry, in place of financial regulators. Such a federal office would not need to limit its study to OSS dependencies. It could also contribute to analyzing cyber risk within other complex systems like cloud service providers and critical vendors to government.

Identifying points of risk concentration created by system-scale OSS dependencies points policy immediately toward the next mechanism from the financial system: stress testing. For financial entities, stress testing boils down, in part, to liquidity requirements—minimum asset-liability ratios meant to ensure institutional resilience to market shocks, or more simply having enough cash on hand to cope when things get ugly. For the OSS ecosystem, the first steps toward stress testing might include—once critical dependencies are better identified and understood—by-sector requirements for contingency planning in response to the compromise or degradation of important OSS packages. For example, government might start requiring such risk management of critical infrastructure sectors. This could also include exercises to respond to vulnerabilities in deep-in-the-stack packages or active compromise of developer tools or authentication systems widely depended on by identified software.

Critiques of the FSOC, and the larger Dodd-Frank Act (DFA) of which it is a part, illustrate useful considerations for a parallel body overseeing digital risk management concerning the OSS ecosystem. One notable concern for the DFA was its potential to overburden banks—both compared to other parts of the financial system and compared to international banks not covered by the act—to their detriment.72 Crucially for the OSS ecosystem, increasing burdens on open-source project developers and maintainers, already short on time and money, should be a non-starter for any policy. Given the principle that use (rather than the manner of construction) determines the criticality of an OSS project, any responsibilities added to existing regulation will better suit large vendors, and, even there, an OSS FSOC need not create further red tape. Rather, such an entity could focus on gathering data—perhaps initially focused on the federal government's most essential digital systems, the process of which could provide insights used to focus later iterations with other entities such as industry-heavy critical infrastructure sectors.

Metric selection is a significant challenge when assessing the risk of OSS projects, requiring careful consideration both of factors that affect a project's capacity for secure development and of the levels of dependence on that project across a vast digital ecosystem. When asked about the former, survey respondents for this report were generally split across answers, emphasizing the lack of consensus on key risk heuristics, though they consistently devalued the number of sponsors, corporate or individual, that a project had, and weighed project popularity, a history of recent vulnerabilities, and community size more significantly.
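As a purely illustrative sketch, and not a validated methodology, heuristics like those respondents favored could be combined into a weighted attention score. The signal names, normalization, and weights below are assumptions invented for this example, not survey findings.

```python
# Hypothetical weights reflecting the heuristics respondents reportedly
# favored (popularity, recent vulnerabilities, community size). All
# weights and signals here are illustrative assumptions.
WEIGHTS = {
    "downloads_percentile": 0.4,   # popularity signal
    "recent_vulns": 0.35,          # history of recent vulnerabilities
    "contributors_inverse": 0.25,  # small community -> higher risk
}

def risk_score(project):
    """Combine normalized (0-1) signals into a single attention score."""
    return sum(WEIGHTS[k] * project[k] for k in WEIGHTS)

# A widely used package with a tiny community scores higher than a
# popular project backed by a large contributor base:
tiny_utility = {"downloads_percentile": 0.99, "recent_vulns": 0.1,
                "contributors_inverse": 0.9}
big_framework = {"downloads_percentile": 0.95, "recent_vulns": 0.2,
                 "contributors_inverse": 0.1}
print(risk_score(tiny_utility) > risk_score(big_framework))  # True
```

The point of the sketch is the report's caveat in miniature: the ranking flips entirely with different weights, which is why the lack of consensus on risk heuristics matters for any FSOC-style body.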

Focus on identifying risk concentrations, over mandating how to address and manage that risk, would also help a potential OSS FSOC equivalent navigate another concern it would share with its financial counterpart, namely, the complexity of the existing network of relevant authorities. The web of federal financial authorities, not to mention the role states play in other portions of that sector, is a challenge for the FSOC to navigate.73 Moreover, the division of powers and controls among federal cyber entities is even less mature. Many key agencies have come into existence only within the past decade. And unresolved and overlapping cybersecurity authorities in the United States remain divided between CISA, the Office of the National Cyber Director, the Office of Management and Budget, sector-specific agencies, chief information officers of agencies, and a variety of other offices and regulators at the federal and state levels. A digital FSOC’s primary focus on information gathering and collation would avoid stepping on the roles and responsibilities of other entities while providing ecosystem visibility to help them regulate more effectively. A mission of identifying nodes of dependence would help avoid messy interagency conflict while still highlighting systemic risk and helping the federal government get its own (cyber) house in better order.

Operating similarly to the FSOC in the United States, the EU's European Securities and Markets Authority (ESMA) oversees European financial markets. ESMA's four objectives are assessing risks, developing standards for financial entities, ensuring the consistent application of financial regulations across the EU, and directly overseeing specific kinds of financial entities. ESMA releases detailed reports on the European financial markets, with specific releases focused on various securities, derivatives, alternative investment funds, and retail investment products. Like the FSOC, ESMA was created in the aftermath of the 2008 financial crisis as regulators sought more insight into the interactions among complex financial instruments. ESMA focuses more on broader ecosystem risks across the European financial system than on subjecting certain companies or utilities to heightened scrutiny, in line with its advisory role.74

Roads and bridges

The titular comparison of Eghbal's Roads and Bridges report links OSS to critical transportation infrastructure. The comparison draws out key characteristics of the open-source ecosystem, such as the free-rider dynamic and the necessity of consistent, mundane maintenance. The concept of usage driving the need for maintenance deserves particular focus. OSS is used in many varied contexts and is the backbone of most digital technology. Like interstate highways and other transportation infrastructure, open-source software inevitably requires maintenance, and waiting too long to address emerging issues can result in a catastrophic incident down the proverbial road.75 Responding to individual issues, like the collapse of a bridge or a widely publicized vulnerability like log4shell, is essential, but is not enough to ensure the stability of the essential infrastructure of transportation systems or OSS. Coupling a recognition of OSS's essential nature with an understanding that most code is not static and will require additional support over time allows for targeted policies that address the crucial challenges of OSS ecosystems.

Figure 10. Roads and bridges and open source

Relatedly, both physical transportation infrastructure and OSS ecosystems suffer from widely varying support, with no reliable transaction model to capture value from those who use the infrastructure and feed it back to maintenance and support. Eric Raymond identified this issue in The Cathedral and the Bazaar as a discontinuity between sale value and use value—the value of code at the point of transaction vs. its value in use over time.76 Roads are costless to use outside of specific toll schemes and yet valuable to their users, especially when well surveyed and maintained. The widespread assumption of availability means that, without sufficient dedicated efforts to overcome this lack of support through consistent maintenance and funding, roads and bridges would collapse due to damage from use, while essential OSS components may degrade in availability or security as their developers fail to receive support commensurate with the criticality of their code.

The roads and bridges analogy also captures well the variety of use within the open-source ecosystem. In the same way that interstate highways receive more traffic than streets in suburban neighborhoods and some roads provide singular access to remote geographies, certain packages are critical due to either the large number of software packages dependent on them or their service of a particularly niche function, while other packages might be relatively less important to the ecosystem due to a lack of widespread use in downstream applications. Importantly, there is no singular way to use any OSS project—each can serve different users and applications differently, much like how roads rarely require or serve a single destination and are agnostic to the route of drivers.
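The dependents-based notion of criticality can be made concrete. As a rough sketch under hypothetical data (the package names and dependency graph below are invented for illustration), the following Python snippet ranks packages by how many others transitively depend on them:

```python
from collections import defaultdict

def transitive_dependents(dependencies):
    """Given {package: [packages it depends on]}, return
    {package: set of packages that transitively depend on it}."""
    # Invert the edges: dependency -> direct dependents.
    dependents = defaultdict(set)
    for pkg, deps in dependencies.items():
        for dep in deps:
            dependents[dep].add(pkg)

    def collect(pkg, seen):
        for d in dependents[pkg]:
            if d not in seen:
                seen.add(d)
                collect(d, seen)
        return seen

    return {pkg: collect(pkg, set()) for pkg in dependencies}

# Hypothetical ecosystem: 'logging-lib' plays the log4j-like role.
graph = {
    "web-app": ["http-server", "logging-lib"],
    "http-server": ["logging-lib"],
    "batch-job": ["logging-lib"],
    "logging-lib": [],
}
scores = {pkg: len(deps) for pkg, deps in transitive_dependents(graph).items()}
# 'logging-lib' scores highest: every other package transitively depends on it.
for pkg, count in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{pkg}: {count} transitive dependent(s)")
```

A real assessment would of course also weigh the niche-function dimension described above, since a package with few dependents can still provide singular access to a critical capability.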

Government has long worked to close resourcing gaps in transportation infrastructure, for example, through the Highway Trust Fund (HTF). While the exact nature of the most useful forms of support for OSS is up for debate—they might include any combination of funding, developer hours, tooling, security auditing, and more—government is uniquely resourced to bolster efforts in closing that gap and help reset market expectations for contribution by the private sector. None of this is to counter or dispute the original Roads and Bridges report. Rather, this report emphasizes the utility of its analogy of choice, adds others to capture different OSS traits, and below strives to connect extant transportation policy to workable OSS models. Figures 11 and 12 capture survey responses on which methods of external support and investment for open-source projects would be most useful, for open-source maintainers/developers and downstream users respectively. The results are notably consistent across both questions, highlighting the link between upstream resources and downstream benefits.

The Highway Trust Fund: Consistent and sustainable support

For transportation systems, the HTF provides an example of consistent funding to maintain critical infrastructure. Maintaining transportation infrastructure requires preventative, systemic investment instead of reactive disaster response; the Highway Trust Fund provides financial support so that bridges do not have to collapse before they receive maintenance. As such, it provides a useful model for how to fund the maintenance of OSS.

HTF funding is spent largely through grants to state and local governments, suggesting the importance of working with existing entities within an ecosystem with regional expertise.77 The federal government should not depend only on its own knowledge to identify useful recipients of funding—instead, it should work with industry and the existing web of OSS stakeholders including volunteer networks and paying foundations, relying on their expertise in the domain. Like the HTF, OSS funding could support instead of supplant existing efforts.

The HTF’s explicit focus on construction and maintenance is also a model of a solution for a potential shortcoming in existing OSS funding: several previously mentioned examples of funding intermediaries tend to focus on investing in the development and creation of open-source solutions, but support is also needed for the long-term, less glamorous work of maintaining OSS projects—managing contributions, ongoing security engagement and community governance, and so on. The solution might look like a federal OSS Trust explicitly focused on backing extant projects rather than focusing on spinning up new ones. It might directly pay maintainers of critical projects, as well as support the development of tooling, security support organizations, and other scalable means to support a broader ecosystem of OSS components. Relatedly, survey respondents for this report prioritized tooling, with several specifically calling out automated, scalable solutions, and direct funding to OSS developers as most useful for both OSS project support and downstream security.

Figure 11. Survey response
Figure 12. Survey response

It is also worth mentioning the funding source that feeds the HTF: fuel taxes. From an economic perspective, the HTF thus linked (if by happenstance more than economic design) two distinct policy vehicles: a taxed negative externality and a subsidized public good. In a key difference from the HTF’s fuel-tax funding, there is no clear negative externality for OSS usage, and policy should not aim to discourage its use. Instead, it should develop incentives for more responsible usage, such as tax credits for upstream contributions and donations to an OSS fund. Such a model for OSS, a fund supported by consistent contribution premised on use value, would offer another incentive lever for policymakers to encourage large OSS consumers to contribute back to the sustainability of the ecosystem, and could potentially encourage additional industry players heavily reliant on OSS but outside the IT sector to play an increasing role in supporting OSS. These entities might rely just as much on OSS as IT vendors but struggle to mature their own OSS programming and therefore benefit from more general means of upstream support.
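To illustrate the use-value funding idea in the simplest terms, here is a hedged sketch (all firm names and figures are hypothetical, and real schemes would need actual usage metrics and likely tax-credit offsets rather than direct levies) of splitting a fund target across OSS consumers in proportion to their measured usage:

```python
def pro_rata_contributions(usage_hours, fund_target):
    """Split a target OSS fund across consumers in proportion to usage.
    usage_hours: {firm: measured usage of a critical OSS component}."""
    total = sum(usage_hours.values())
    return {firm: round(fund_target * hours / total, 2)
            for firm, hours in usage_hours.items()}

# Hypothetical annual compute-hours running a critical OSS component.
usage = {"BigBank": 700_000, "RetailCo": 200_000, "StartupX": 100_000}
shares = pro_rata_contributions(usage, fund_target=1_000_000)
# BigBank shoulders 70% of the fund, matching its 70% share of use.
print(shares)
```

The arithmetic is trivial by design; the hard policy questions are how usage would be measured and which incentives would make contribution attractive rather than punitive.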

Adopt-a-highway: Incentivize direct local support

Transportation policy also provides a useful model for community-specific support. Adopt-a-highway programs are usually state-run endeavors connecting volunteers with stretches of local roads to remove litter. Aside from the convenient marketing phrase—adopt a package78—programs that link volunteers and funding to the packages those participants rely on and benefit from supporting offer another investment vehicle.

Adopt-a-Highway programs have faced challenges with groups seeking to participate in such programs.79 While parallel lessons are not as direct here as with the HTF, it is worth clarifying the role of any potential adopt-a-package programs (AAPPs) in OSS. One long-running concern for OSS communities has been the role of large corporations in the governance and direction of open-source projects, potentially keeping features behind a paywall with forked proprietary code or swamping independent projects with their sheer volume of contribution.80 While the appeal of adopt-a-highway programs often lies in the optics of supporting local infrastructure, AAPPs can have a more practical purpose—they should instead focus on enabling and regularizing vendors substantively supporting the OSS projects they rely on, a practice already in place in some isolated examples in the IT industry, with public recognition a secondary concern. There is a material benefit to these kinds of relationships, from component familiarity to better-managed and -resourced projects. Moreover, any implementation should healthily delegate to industry, which can better identify what projects require support.

Challenges that the HTF and adopt-a-highway programs have encountered can help pave a path forward for similar investment in the OSS ecosystem. The HTF, funded mainly by fuel-tax proceeds, has faced solvency crises requiring congressional intervention.81 Concerns about the source of funding are pertinent to any potential federal OSS fund. Fortunately, some key differences between OSS and physical infrastructure help here. Road construction is slow and disruptive; maintenance of OSS projects and support for their developers are far less so, which should help such investment remain popular. While ROI studies for OSS and highways are somewhat spotty, the estimates for OSS ROI are promising if realized,82 in addition to the knock-on benefits such investment might provide to national security concerns, workforce shortages, and more. Meanwhile, some OSS incidents can be directly connected to shortcomings in support,83 from unpaid developers pulling down widely used packages to small teams challenged with vulnerability identification and remediation at scale.

Finally, while valid concerns exist about investment in transportation projects leading to government “picking winners,”84 the OSS ecosystem already has winners—projects meriting investment by virtue of their ubiquity, their criticality, or both—and, as noted above, there is much benefit to security in identifying those projects to begin with. Moreover, the extant field of governance and support infrastructure from industry, nonprofits, and philanthropy already prioritizes some projects and modes of support over others—by necessity and often with more expertise and domain-specific knowledge than currently available to the federal enterprise. Working with and through those entities, rather than in parallel or at odds with them, and focusing on support and maintenance as much as or more than project creation is a promising avenue for avoiding the observed shortfalls of some physical infrastructure planning.

These tangible policy vehicles all aim to make the three OSS as infrastructure analogies more readily useful, adding concrete intervention models and consideration of past challenges to the guiding principles and high-level characterizations of the OSS ecosystem already provided. The following section discusses a sampling of existing or proposed initiatives for policy engagement with the OSS ecosystem before converting the analogies into direct recommendations, primarily for government with some items including significant public-private coordination or giving the reins to industry.

Outside the United States, transportation infrastructure also faces a disconnect between the assumption of availability and the lack of support from those depending upon it. To overcome this gap and ensure essential infrastructure is maintained and reliable, the EU has several large funds that provide grants to build or maintain roads and other components of the transportation system. The Connecting Europe Facility (CEF) targets cross-border transport infrastructure, while the Cohesion Fund (CF) provides additional funding to countries in the EU with a Gross National Income per capita below 90 percent of the EU average. These funds help create consistency across the transportation infrastructure of the EU’s member states—difficult to ensure without a coordinating central entity. The CEF and CF are part of the EU’s sustainable development efforts, with both funds committed to ensuring that the infrastructures they build and maintain are energy efficient and cause minimal environmental impact. Though they spend toward slightly different project sets than the HTF—for example, the CEF also supports telecommunications and energy projects—the underlying principle is the same: infrastructure projects generally do not arise sufficiently from industry alone.85

4. Real-world infrastructure policy for OSS

The open-source ecosystem and its many stakeholders have long recognized the need for sustained, stable support to projects and responded with the creation of nonprofits and institutions to provide that. Government support, tailored to both community needs and government priorities such as security or innovation, can provide robust, stable backing for the existing patchwork of organizations and projects in the OSS world. This section describes several existing policies for governments to take inspiration from and work with rather than assuming the whole burden of reinventing the wheel of OSS policy.

This section samples relevant policies—sourced principally from the Center for Strategic and International Studies’ (CSIS) newly updated dataset, Government Open Source Software Policies86—in three categories synthesized from the three analogies above: government support and funding, ecosystem risk practices, and responsible use by OSS consumers. The CSIS dataset also described other kinds of policy outside these three categories—some establishing offices within governments dedicated to managing various OSS functions, often termed Open Source Program Offices (OSPOs), some requiring the open-sourcing of government-developed data and solutions, and others describing procurement practices.

Government support and funding

Policies establishing government support and funding for OSS were the most common of the three categories discussed here from the CSIS dataset, though there were still relatively few instances of these compared to the many procurement advisories and requirements it contained. Support for open-source projects in many ways is a natural extension of several government priorities—a search for non-proprietary solutions, support for acquired systems, and the logical conclusion of education and training programs—so their relative abundance makes sense. However, the fact that more policies discuss OSS procurement than OSS support is telling—just as in industry, it seems that governments are using OSS more than they are contributing back. The reasons for usage are often clearly laid out: “to reduce the dependency on proprietary software,”87 to reduce costs,88 and to improve interoperability. Approaching OSS as infrastructure adds depth to this discussion—there are great benefits to using OSS solutions (and recognizing that the vast majority of proprietary code incorporates OSS as well), but that usage creates a need to support the underlying projects. Though government support lags government usage, there are some models of supporting OSS projects—even those not acquired and used by government—that can help create the increased market choice so many procurement policies seem to desire.

In Germany, several organizations work to channel government funding toward OSS projects. The German Ministry for Economic Affairs and Climate Action funds Germany’s Sovereign Tech Fund, which launched a pilot round for funding open digital infrastructure in October 2022,89 and the Prototype Fund, which supports public interest technology—requiring that it be made available under open-source licensing—with investment coming from Germany’s Federal Ministry of Education and Research.90

There are nascent efforts in the United States too: the National Science Foundation’s Pathways to Enable Open-Source Ecosystems solicitation program launched in May 2022 to support governance organizations at the ecosystem level.91 The Open Technology Fund receives funding from the US Agency for Global Media among other entities, part of which goes toward “advancing global Internet freedom” through supporting open-source projects relevant to its mission.92 NASA’s Open-Source Science Initiative funds and adjusts policies to encourage open and collaborative scientific processes, including through supporting open-source software and related infrastructure.

More broadly across the world, a 2013 Argentinian policy established a fund with over $2 million in initial backing to build OSS projects.93 The Austrian government, in 2016, offered prizes of up to €200,000 for OSS projects in various categories—the first round of funding shelled out €3.6 million across 31 projects.94 One fund in Malaysia, set up in 2003, allocated $36 million for start-ups developing OSS, but further information on the project is scant.95 These funds often support the establishment of OSS projects fulfilling an established need. While such support is generally useful, it is worth noting that, as important as funding project creation is, supporting existing projects is even more vital to the long-term sustainability of the ecosystem.

Ecosystem risk management

Though no government policies in the dataset explicitly focus on assessing ecosystem-wide risk in the OSS world, interest in dedicated open-source offices provides a possible avenue toward this activity. Recently, governments have begun turning an eye toward formal offices dedicated to the many open-source activities they may undertake, such as project support, license compliance, security evaluation, incident response, public awareness, and providing clear points of contact for government employees and OSS developers. These OSPOs originate in industry as departments for coordinating all manner of open-source efforts.96 The World Health Organization recently established an OSPO, for example,97 and the European Commission’s Open Source Software Strategy for 2020–2023 includes establishing an Open Source Program Office within the commission to implement relevant OSS actions of the strategy.98

Other governments are focusing on information gathering. This year, the Japanese Ministry of Economy, Trade, and Industry released a report from a task force on software security that examined private-sector reliance on OSS. Government initiatives that study the open-source ecosystem can provide crucial information which can then guide future investment in and support of OSS.99 Similarly, the proposed bill S.4913, the Securing Open Source Software Act of 2022, includes a requirement for the US government to conduct a study assessing its own reliance on OSS as well as its ability to accurately track those dependencies through SBOM data, existing government programs like the Continuous Diagnostics and Mitigation (CDM) program run by CISA, and other sources of information.

Responsible use

Policies that focus on patterns of responsible use in the OSS landscape were scant. One Armenian document concerning the country’s principles of internet governance noted the central role of decentralization in the development of the internet and held that regulation of OSS, specifically, should be light, if necessary at all.100 Other instances of policy embracing the cultural values of OSS also exist, and the preference of governments to open-source their own solutions and code is notable. However, an explicit discussion of incentive and responsibility structures in the OSS ecosystem is somewhat lacking. Notably, White House conversations about the forthcoming National Cyber Strategy have not included any new mechanisms to explicitly support OSS, addressing little more than a carve-out to protect OSS developers from any potential liability regime: a good and warranted item, but underwhelming against the totality of need in the ecosystem.

While government policies for OSS exist, they focus more on the government as a consumer than as a regulator or supporter. Government procurement preferences seem driven by a desire for autonomy from large vendors and expensive licenses, and existing patterns show little procedural upstream contribution. Though some funding models exist, by and large, government policies explicitly addressing OSS seem to focus on what government purposes it can serve and what values of transparency it might inspire in government practice.

5. Crafting infrastructure policy for OSS

In many respects, OSS is not much different from proprietary software: all code can be developed more securely, and the security risks OSS faces are common across most digital systems. For OSS, the differences come in the relationships between open-source consumers—from government to the private sector to end users—and the projects they rely on. The lack of clear transactional relationships and the deeply influential role of the diverse, ever-changing contributor community are a challenge for policy and industry to navigate and support sufficiently. The result is an ecosystem that has both enabled digital innovation and often suffered from overburdened developers and under-resourced communities and projects.

Encouraging sustainable OSS participation

The recommendations of this section aim to use policy levers and industry collaboration to provide models for sustainable usage of and support for the OSS ecosystem, emphasizing responsibility driven by usage.

Start by improving government consumption

In the United States, the federal government is not just a regulator but also an enormous consumer of OSS. This enormous use case provides a valuable opportunity for the federal government to test many of the recommendations below on its codebases, which is of immediate benefit to the federal enterprise. If the federal government is to truly assign as much importance to the OSS ecosystem as it has recently signaled,101 it might consider creating institutional entities with an explicit mandate to focus on the federal government’s use of and support for OSS, modeled after OSPOs recently established by other organizations. For the United States, a whole-of-government OSPO-like entity could be established within OMB or (with a focus on government procurement) the General Services Administration (GSA). Alternatively, OMB and GSA could provide a coordinating function for smaller OSPO-like entities established in each agency. Such a program could take inspiration from the OPEN Government Data Act, which requires the designation of Chief Data Officers within federal agencies,102 by requiring agencies to designate a Chief Open Source Officer (COSO).

In addition to setting agency policy around the use of OSS and managing relationships with relevant OSS communities and vendors, agency COSOs could also contribute to a whole-of-government OSS strategy through a structure like an inter-agency Chief Open Source Officers Council, modeled after or housed within the Chief Information Officers Council. S.4913, if enacted into law, would pilot OSPO-like programs in the federal government by directing OMB to select agencies to create pilot OSPO-like entities to develop standards for their agency’s use of OSS and engagement with the OSS ecosystem.103 EU member states, where collaboration with the OSS community and consumption of OSS similarly need not tie as closely to cybersecurity regulators, could well replicate this model.

Regardless of whether they have an OSPO or an existing commitment to OSS consumption and development (in the United States, see entities like the Department of Defense (DoD) and the National Aeronautics and Space Administration (NASA)), all agencies should also encourage and fund travel to OSS community forums for government employees engaged with software development, procurement, and/or technology governance. The social graph of a project defines OSS development, maintenance, and growth. The security of this code and its sustainable integration into government software projects would benefit greatly from wider government employee participation in the myriad conferences and governance bodies that populate the OSS ecosystem. While this may be a practical challenge for some defense and intelligence organizations, it is an important, meaningful way to integrate government needs and contributions more fully into OSS communities and help identify risks and opportunities for sustainable use.

Support private-sector consumption

Develop an OSS Usage Best Practices framework through the National Institute of Standards and Technology (NIST) with significant industry input. Such a framework could include and build on the proposed OSS risk assessment guide recommended by S.4913.104 However, it should also incorporate consideration of upstream contribution as a foundational measure of organizational maturity around OSS usage. Included among its recommendations should be an organizational plan for sustainable OSS use.

This document would serve as a reference for further policy attempts to incentivize investment in OSS sustainability. For example, government procurement processes could include consideration of for-profit vendor compliance with the NIST OSS Usage Best Practices framework. By framing compliance as a consideration rather than a hard mandate, the goal would be to incentivize for-profit providers without precluding nonprofit and individual contributors lacking the resources to develop a compliance program. A similar framework, which considers financial contributions to upstream projects, could help guide the application of tax credits used to incentivize donations.

Industry, as well, could take a leading role here, developing a common, voluntary OSS-engagement plan across entities under the auspices of a coordinating nonprofit such as OpenSSF. Important too would be including non-IT companies in these considerations. Though understandably less fluent in the technology sphere, large industry manufacturers and other corporations nonetheless have a considerable dependence on OSS projects. Where such large, non-IT companies have their own robust IT resourcing and capacity in-house, they too should build and contribute to models for risk management based on discarding the assumption of availability or functionality of critical OSS packages.

A NIST guide on best practices for OSS usage could also help guide federal developers and agencies in their relationships with vendors, key projects, and larger risk-management practices. Further, federal developers’ and procurers’ experiences with using such a framework could help inform future iterations of the document and bring industry best practices more fully into the federal enterprise.

Protect OSS Good Samaritans

Private-sector firms with existing investments in the open-source community (e.g., Google, Microsoft/GitHub, and IBM/RedHat) and well-established OSS governance and security organizations (e.g., OSI, the Open Source Collective, OpenSSF, and the Internet Security Research Group) should lead on drafting a best-practice standard for contributing to and supporting OSS projects. This document should help define the standard of care associated with volunteer contributions. This standard is not a form of liability protection but a way for firms to design policies encouraging volunteer contributions to OSS packages in a way that best meets corporate risk appetite. These volunteer commitments are an important way to contribute back to OSS used by companies and are a form of contribution-in-kind to support packages used by others.

Addressing systemic risk

The rapid pace of digital innovation and the informal relationships between OSS dependencies and their downstream beneficiaries have led to a digital ecosystem prone to stacking risk in a relatively small number of critical OSS projects, and have created challenges for nonprofits, governments, or companies seeking to obtain visibility into those points of concentration. These recommendations aim to align government and industry in systematically identifying key dependencies meriting direct support and investment without adding undue regulatory burden. These recommendations take inspiration from the FSOC and ESMA entities in the capital markets analogy.

Establish an Office of Digital Systemic Risk Management (ODSRM)

Modeled after the FSOC or ESMA described above, a central government office would, in close cooperation with industry and OSS community stakeholders, work to identify critical OSS dependencies both in the federal civilian agencies and across critical infrastructure sectors. This office might eventually mature from identifying these points of concentration to stress testing their compromise (whether malicious or otherwise) and the related, wider ecosystem effects, modeling and exercising through variations on future log4shell-style events using real-world dependency information.

In the United States, this office should have broad authority to draw on federal expertise wherever it might reside, from the National Security Agency to CISA, and focus both on identifying specific critical OSS projects or systems and on methods for producing and collating dependency data that can highlight nodes of risk. Such data might, for instance, include pooling SBOMs provided to government during its procurement processes. Given the large mandate this office would eventually assume, implementation might best start in pilot programs focused on mapping out the dependencies of one or more federal IT systems. Existing programs to map federal digital assets and existing federal vendors would be natural partners in the project. However, in the latter case, the implementing agency, perhaps with congressional support, would need to overcome obdurate industry resistance to the inclusion of dependency data about software products in the form of software bills of materials, despite such data already being regularly generated and consumed. While the array of use cases for these SBOMs is still maturing,105 large organizations, like New York Presbyterian Hospital,106 already use them regularly. And there is a healthy supply of software tools, employed by for-profit and nonprofit entities, to generate and process them.107
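As a loose illustration of how pooled SBOM data could highlight such nodes of risk, the sketch below tallies components across CycloneDX-style SBOM documents. The two miniature SBOMs are hypothetical, and real SBOMs carry far richer metadata (suppliers, hashes, license data, and dependency relationships):

```python
import json
from collections import Counter

def components(sbom):
    """Extract (name, version) pairs from a CycloneDX-style SBOM dict."""
    return {(c["name"], c.get("version", ""))
            for c in sbom.get("components", [])}

def concentration(sboms):
    """Count how many systems' SBOMs include each component."""
    tally = Counter()
    for sbom in sboms:
        tally.update(components(sbom))
    return tally

# Hypothetical SBOMs for two federal systems.
system_a = json.loads('{"bomFormat": "CycloneDX", "components": '
                      '[{"name": "log4j-core", "version": "2.14.1"},'
                      ' {"name": "openssl", "version": "3.0.7"}]}')
system_b = json.loads('{"bomFormat": "CycloneDX", "components": '
                      '[{"name": "log4j-core", "version": "2.14.1"}]}')

tally = concentration([system_a, system_b])
# Components appearing in many systems are candidate nodes of systemic risk.
for (name, version), count in tally.most_common():
    print(f"{name} {version}: used by {count} system(s)")
```

Scaled to thousands of procurement SBOMs, even this simple tally would surface the kind of concentration that an ODSRM could then prioritize for stress testing and support.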

Lessons learned from the analysis of one system could inform a widening aperture across other government systems and eventually across the broader digital domain, particularly considering that there may be significant overlap of key OSS dependencies between similar systems. Establishing an ODSRM is an opportunity for government to better map its own digital systems and assets before using lessons learned in that process to inform its approach to a larger, industry-wide attempt at helping to identify key critical dependencies.

Provide resources with security and sustainability in mind

Throwing funds at a problem is rarely a sufficient fix, but where investment shortfalls exist, it can help. These recommendations focus on guiding policymakers toward a resourcing model that helps cover funding gaps, particularly around long-term maintenance and support rather than the creation of new OSS projects, while accounting for non-financial resources (e.g., labor time and expertise) and financial support for important non-technical factors (e.g., contributor community depth and diversity, governance, and good package management policies), and relying on community expertise in directing resources toward critical projects.

There are three important factors to consider in developing schemes for government support to OSS as infrastructure. First, where resources go is as important as how they get there. Direct funding and government-to-project contributions may work well for areas of urgent or existential need, but OSS projects will benefit most from consistent support delivered with local knowledge about the project, its maintainer community, and its user base. Few, if any, government-led schemes will be able to achieve this level of local knowledge on their own, so resources should mostly flow through trusted intermediaries like software foundations (e.g., Apache, Linux, and Eclipse) and nonprofit groups (e.g., Open Source Collective and the Internet Security Research Group) as well as selected university programs.

Second, support must be sustainable. One of the difficulties of private-sector funding for OSS projects and their security is that, outside of a handful of exceptions, crisis has been the catalyst for much of this support. Monies flow to projects and project classes affected by an ugly vulnerability or momentary disaster without the promise of consistent, long-term commitment that project owners can plan and build around. The good work of several software foundations across the OSS ecosystem is a function of both the resources they bring and the stability they offer.

Third, it bears repeating that resources need not be only financial. Dollars and euros are fungible and necessary; volunteer labor can only bring OSS projects so far and might not supply the specific technical skills or experience needed to audit code or improve management and governance processes. Governments generally possess a scale of financial power available to few in the private sector. But governments also have other policy levers. Changes to government policy can reduce barriers to sustainable OSS adoption, open new opportunities for agency- and government-employee-level contributions back to OSS projects, and punish abusive or malicious behavior targeting OSS communities. These are non-monetary contributions to the long-term security and sustainability of OSS and are important alongside financial support.

With that in mind, this report offers three final recommendations on how to shape government support for OSS, keeping security and sustainability as the key goals, instead of massive feature expansion or redevelopment.

Target of opportunity

Governments with the financial and organizational wherewithal should create target-of-opportunity funding programs to support OSS security. The goal of this funding is to award resources in a targeted manner, determined by government need, to OSS projects and activities. These awards should be rooted in criticality and help meet urgent needs, ideally in anticipation of, but perhaps in response to, a crisis. Criticality can be determined by an entity like the ODSRM and used to guide single-agency or cross-government resourcing schemes. Smaller than the OSS Trust discussed below, a target-of-opportunity funding pool should scale into the single millions or tens of millions, allowing governments to resource security and compliance requirements that might fall on OSS programs as well as urgent mitigations and responses to incidents.

In the United States, such a program should be run by the federal agency best positioned to assess and respond to insecurity in technologies supporting critical infrastructure and broad swaths of society—CISA, under the US Department of Homeland Security (DHS). Congress, in S.4913, already views CISA as the logical home for tracking the use of OSS across the federal government and assessing the risks posed to OSS and other software. CISA should have the resources to support the implementation of those efforts and support the OSS projects identified as critical dependencies along the way.

Establish the OSS Trust

Recognition of OSS as the digital infrastructure underneath myriad economic and social activities entails a collective acknowledgment of the failure to date to support it as such. Across national boundaries, open-source code generates and captures considerable value without consistent government backing, whether for the most critical security updates or for long-running code maintenance and improvement. New resources will not solve every problem faced by OSS maintainers, and the intention of government support of this kind is not to rewrite the economic relationship between the maintainers of free and “as-is” code and their users.

The OSS Trust should be a mechanism for governments to provide consistent support for the security of OSS code, the integrity of OSS projects, and the health and size of OSS maintainer communities. These funds should scale into the hundreds of millions, enough to enable broad training and education programs, support security reviews and mitigation for hundreds of projects at a time, and bring more maintainers and contributors into OSS communities. These funds can help facilitate widely useful security research and cover the costs associated with long-term hardening, like rewriting a project in a memory-safe language. The Trust’s thesis of what to support should center on activities that produce sustainable, long-term improvements, as well as less-well-funded aspects of secure OSS projects like effective governance practices.

In the United States, NIST could aid this effort by developing an inclusive list of metrics by which to gauge the health and needs of OSS packages and communities in close cooperation with extant industry initiatives such as OpenSSF’s Scorecard project, SLSA, S2C2F SIG, CHAOSS, and others.108 It might focus on determining what best practices signal project maturity and sufficient resourcing, and what shortfalls are most critical for downstream users and thus worth prioritizing in upstream support. This framework should not supplant, but rather aggregate and synthesize extant industry measurement initiatives and could later be part of vendor assessments and best practices documents in government procurement processes.
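As an illustration of the measurement approach such a framework might synthesize, the sketch below combines a few normalized project-health signals into one weighted composite score. The metric names, weights, and 0-10 scale are purely illustrative assumptions, not NIST guidance or the actual scoring used by Scorecard, CHAOSS, or any other initiative.

```python
# Illustrative only: aggregate normalized project-health signals into a
# composite score. Metric names, weights, and the 0-10 scale are
# assumptions for demonstration, not any standard's actual definitions.

WEIGHTS = {
    "maintained": 0.30,      # recent commit and release activity
    "contributors": 0.20,    # depth and diversity of the contributor base
    "vuln_response": 0.30,   # speed of addressing reported vulnerabilities
    "governance": 0.20,      # documented review and management policies
}

def health_score(metrics):
    """Weighted mean of 0-10 metric scores; a missing metric scores 0."""
    return sum(weight * metrics.get(name, 0.0)
               for name, weight in WEIGHTS.items())

# Example: an actively maintained project with a thin contributor base.
score = health_score({"maintained": 9, "contributors": 4,
                      "vuln_response": 7, "governance": 5})
# 0.3*9 + 0.2*4 + 0.3*7 + 0.2*5 = 6.6
```

In practice, a framework like this would aggregate many more signals, and the hard problem is less the arithmetic than agreeing on which signals matter and how to normalize them consistently across ecosystems.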

In the United States, the OSS Trust should rely on both regular congressional appropriations and the diversion of a small portion of corporate taxes. Depending on the structure of the receiving organization, Congress could also consider incentivizing individuals and corporations to contribute to the fund or similar organizations through tax-credited donations. Given the immense room for improved support in the OSS ecosystem, such a fund need not begin at its final potential size, able to satisfy all needs from day one; it can grow incrementally, refining its grantmaking processes and partner-organization relationships as it grows.

This can and should eventually be an international scheme. The German-government-backed Sovereign Tech Fund already works to fund OSS projects to “support the development, improvement, and maintenance of open digital infrastructure.”109 This and similar initiatives at the EU member state level could be subsumed into a broader international effort in the near future, or they could grow independently and coordinate with US and other national programs absent immediate consolidation.

Like the HTF, CEF, or CF, such a fund should work with intermediaries to identify the best recipients—the central government need not try to locate decrepit concrete and unaddressed potholes itself, but rather can improve the resourcing of organizations with that on-the-ground expertise, relying on the existing web of intermediaries and support groups already present and growing in the OSS ecosystem.

Adopt-a-package

Private sector and nonprofit leaders in OSS should define schemes by which firms and other donors can “adopt” important unmaintained packages and provide resources to support their ongoing maintenance, vulnerability mitigation, and potentially rewrites into memory-safe languages or other structural updates. Rather than the urgent needs met by a target-of-opportunity model or the long-term, research-friendly focus of the OSS Trust, adopt-a-package schemes would provide sustained, package-level stewardship. The government can contribute funding and support to existing initiatives or construct one in parallel, similar to the Federal Emergency Management Agency’s (FEMA) reservist program. Government teams might supplement private-sector groups or focus on assisting incident response and resourcing for projects critical to government functions.

One entity already working toward this end is the for-profit startup thanks.dev, which looks to connect users and patrons of open-source packages with a simple way to fund those packages and the packages they depend on. The company builds several layers of deep dependency graphs using existing bill-of-materials-like data. That part is crucial: because of the web of dependencies across OSS, funding standalone packages is often not enough to drive resources everywhere they are needed. Log4j is a good example of a piece of a whole that turned out to be extremely important in the aggregate but may not have attracted high-profile attention on its own.
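To make the dependency-graph point concrete, here is a minimal sketch of how a donation to one package could be propagated to its transitive dependencies. The package names, the 50/50 keep-versus-pass-on split, and the even division among direct dependencies are all invented for illustration; this is not thanks.dev’s actual allocation model.

```python
# Hypothetical sketch: propagate a donation through a package's dependency
# graph so that transitive dependencies also receive support. The graph,
# the 50/50 split rule, and the even division among direct dependencies
# are illustrative assumptions, not thanks.dev's actual model.

# Direct dependencies per package, e.g. derived from SBOM-like data.
# Assumed acyclic, as package dependency graphs typically are.
DEPS = {
    "my-app": ["web-framework", "logging-lib"],
    "web-framework": ["http-parser", "logging-lib"],
    "logging-lib": [],
    "http-parser": [],
}

def allocate(package, amount, shares=None):
    """Keep half of `amount` for `package`; split the rest among its deps."""
    if shares is None:
        shares = {}
    deps = DEPS.get(package, [])
    kept = amount if not deps else amount / 2
    shares[package] = shares.get(package, 0.0) + kept
    for dep in deps:
        allocate(dep, (amount - kept) / len(deps), shares)
    return shares

shares = allocate("my-app", 100.0)
# Shared dependencies accumulate funds from every path: "logging-lib"
# receives money both directly from "my-app" and via "web-framework".
```

Even this toy version shows why funding standalone packages is insufficient: shared, lower-level packages accumulate support from every dependent path, echoing the Log4j point above.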

6. Conclusion

We do not build most of the code we use. In realizing this and accepting it for the indefinite future, OSS and the many communities developing and maintaining it should loom large in any analysis of cybersecurity and economic health. Open source constitutes the infrastructure to which we trust sensitive data, critical social programs, and cycles of economic development and innovation. That such infrastructure is weakening,110 and in some places crumbling,111 from the weight of demands placed on it should no more shock us than the imagery of bridges collapsing and reports of poisoned groundwater due to inadequate sustainment combined with widespread use.

None of this report reflects a belief that OSS is inherently insecure, but rather that it is uniquely central to modern digital systems and that relationships with the OSS community are necessarily, and substantively, different from those government has grown accustomed to with industry, and industry within itself. Sustainable use emphasizes user responsibility for much of the risk associated with software use, including OSS, and addresses OSS-specific features of development and contribution possible only with open-source code. Addressing systemic risk is an important step for policy efforts to support the security and sustainability of OSS projects with an accurate picture of the considerable interdependency between code bases. Finally, governments must step up to support OSS as the infrastructure that it is. These resources should come alongside expanded private sector support and can manifest in targeted formats as well as a more general support model, the OSS Trust. OSS is infrastructure, and providing support for it as such will permit more rapid adoption and considerable innovation in even critical domains of economic and government activity.

Most of us too often take for granted the everyday things, the problems well solved. Yet, ignorance and the failure to protect them come with hefty price tags. Log4shell, a rash of open-source package incidents,112 and the chorus of concern amongst OSS maintainers about an economic model that extracts value from labor without committing back are symptoms of the choice to remain in such ignorance. The risk is the slow collapse of a vibrant ecosystem and a future riven by falling diversity in and capability for digital development outside a concentrated handful of technology firms, imperiling national security and economic competitiveness in equal measure. The good news is that this collapse is neither necessary nor permanent.

Change is possible, indeed much needed, but it must come in the form of investment as well as policy. For pennies on the dollar of the value a healthy and resilient open-source ecosystem generates, such investments provide a means to secure essential digital infrastructure against myriad threats. Strong investment in and well-informed policy about OSS is, above all, a gift to the present, not just an abstract donation to future generations, one that would impact and protect communities throughout the world.113

About the authors

Sara Ann Brackett is a research assistant at the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She focuses her work on open-source software security, software bills of material, and software supply-chain risk management and is currently an undergraduate at Duke University.

Acknowledgements

The authors owe a continuing debt of gratitude to the members of the Open Source Policy Network whose growing collaboration on open-source security and sustainability policy is an important part of this work. Major thanks to the Open Source as Infrastructure Working Group, including co-sponsor Open Forum Europe and its Executive Director Astor Nummelin Carlberg, whose insights shaped this report across 2022 and 2023. Thank you to Abhishek Arya, Jack Cable, Brian Fox, John Speed Meyers, Sarah Novotny, Jeff Wayman, and David Wheeler for their feedback on this and earlier drafts. Additional thanks to Kathy Butterfield, Estefania Casal Campos, and Abdolhamid Dalili for developing the report’s graphics; to Nancy Messiah and Andrea Raitu for its web design; to Donald Partyka for graphic and document design; and to Jen Roberts for coordinating the project’s many visual and design elements. This work is made possible with support from Craig Newmark Philanthropies, Schmidt Futures, the Open Source Security Foundation, and Omidyar Network.

CSI produced this report’s cover image using in part OpenAI’s DALL-E program, an AI image-generating application. Upon generating the draft image-prompt language, the authors reviewed, edited, and revised the language to their own liking and take ultimate responsibility for the content of this publication.

Appendix: Survey results

As part of this report, the Atlantic Council and the Open Source Policy Network distributed an anonymous survey to several OSS governance, policy, and security communities, including through the OpenSSF’s general Slack channel and Open Forum Europe’s email forum. The survey, which was open from November 20, 2022, through January 8, 2023, aimed to gather attitudes on OSS policy and security from OSS maintainers, developers, and stakeholder communities closer to the problem set than policymakers in Brussels or DC. Despite being open to over two thousand potential respondents, the survey only achieved a sample size of forty-six, limiting the insight into community priorities that it could provide. Nonetheless, there were some noteworthy trends in the responses, and the Atlantic Council and Open Source Policy Network will continue to gather outside perspectives and sentiment trends in this manner.

1. Main respondent affiliation
   Government: 2 (4.3%)
   ICT Vendor: 16 (34.8%)
   Non-ICT Vendor: 4 (8.7%)
   Independent Researcher: 3 (6.5%)
   Academia: 4 (8.7%)
   Non-profit organization: 9 (19.6%)
   Other: 8 (17.4%)
2. Respondent’s primary role with respect to OSS (select all that apply)
   Maintainer: 29 (63.0%)
   Contributor: 32 (69.6%)
   User: 34 (73.9%)
   None of the above: 5 (10.9%)
3. If you had to pick one party to assume more responsibility than they currently do for security outcomes associated with the use of open-source software, which would it be?
   ICT Vendors: 9 (19.6%)
   All Industry: 20 (43.5%)
   OSS devs: 2 (4.3%)
   Foundations/Non-profits: 5 (10.9%)
   Gov: 8 (17.4%)
   Other: 2 (4.3%)
4. Which is the most useful characteristic for assessing the health and well-being of an open-source community, if you had to pick just one?
   Project activity: 7 (15.2%)
   Contributor community: 5 (10.9%)
   Maintainers: 5 (10.9%)
   High-activity contributors and maintainers: 12 (26.1%)
   Community principles: 5 (10.9%)
   Security expert involvement: 4 (8.7%)
   Other: 8 (17.4%)
5. Which is the most critical function of an Open-Source Program Office (OSPO), if you had to pick just one?
   Public education and awareness: 5 (10.9%)
   Public-private coordination management: 4 (8.7%)
   Funding: 14 (30.4%)
   Licensing + auditing policies: 9 (19.6%)
   OSS engagement: 9 (19.6%)
   Other: 5 (10.9%)
6. Where do you see the tooling or information gap that might be most harmful to the OSS ecosystem?
   Project metadata: 1 (2%)
   Usage data: 11 (24%)
   Vulnerability reporting: 3 (7%)
   Vulnerability info access: 2 (4%)
   Security testing: 12 (26%)
   SBOM generation: 5 (11%)
   Other: 12 (26%)
7. Please sort these methods of external support for, and investment in, open-source projects from most useful to least useful to open-source maintainers and developers, in your opinion and relative to each other. (Responses per rank, 1 = most useful, 5 = least useful)
   Security testing/assessments: 14, 14, 12, 5, 1
   Bug-bounty programs: 1, 12, 11, 11, 11
   Security info-sharing and procedures: 2, 17, 13, 11, 3
   Incident response support: 5, 16, 9, 13, 3
   Direct funding: 25, 8, 6, 6, 1
8. Please sort these heuristics for assessing the risk of using a specific OSS package from most useful to least useful, in your opinion. (Responses per rank, 1 = most useful, 7 = least useful)
   Project popularity: 7, 11, 8, 10, 2, 3, 5
   Community size and activity: 13, 13, 11, 6, 1, 1, 1
   Cost of maintenance and usage: 3, 7, 9, 7, 11, 1, 8
   Fulltime developer count: 9, 11, 10, 8, 4, 2, 2
   Recent significant vulnerabilities: 7, 11, 12, 7, 3, 6, 0
   Number of corporate sponsors: 4, 4, 7, 8, 6, 12, 5
   Number of individual sponsors: 2, 3, 5, 5, 14, 9, 8
9. Please sort these methods of external support for, and investment in, open-source projects from most useful to least useful for the security of downstream users. (Responses per rank, 1 = most useful, 5 = least useful)
   Security testing/assessments: 18, 13, 10, 2, 3
   Bug-bounty programs: 1, 12, 10, 12, 11
   Security info-sharing and procedures: 4, 15, 15, 2, 10
   Incident response support: 7, 17, 8, 12, 2
   Direct funding: 22, 8, 6, 6, 4

How much do you agree or disagree with the following statements?  

10. A government role in supporting the open-source ecosystem is necessary for its long-term sustainability and success.
   Strongly Agree: 16 (34.8%)
   Agree: 14 (30.4%)
   Neutral: 8 (17.4%)
   Disagree: 5 (10.9%)
   Strongly Disagree: 3 (6.5%)
11. Government support must include direct financial investment to ensure the open-source ecosystem’s long-term sustainability and success.
   Strongly Agree: 14 (30.4%)
   Agree: 18 (39.1%)
   Neutral: 8 (17.4%)
   Disagree: 4 (8.7%)
   Strongly Disagree: 1 (2.2%)

12. Tell us about your bogeyman – where do you see the most risk across the OSS community? Answers here can reflect either security risks, dangers posed by policy, or other concerns. 

  • Moving software to memory safe languages and actionable OSS supply chain management are the highest risk issues, IMO.
  • Lack of proper project governance, in particular for accepting commits.
  • Rogue maintainers who sabotage their own work for whatever reason.
  • Siloing of information about OSS production and consumption leading to ineffectual allocation of support/resources; lack of coordinated engagement by all relevant stakeholders: the community, foundations and other industry bodies, government, and consumers (especially large global ones).
  • I fear that the burden (via law/policy) of ensuring secure software will be set unrealistically (zero bugs) and fall (with serious consequences) on individual contributors. This would effectively kill the OSS ecosystem by creating huge disincentives for anyone to be involved.
  • The volume of mission critical code that is written in a memory unsafe language is highly alarming – it’s so bug-prone and those bugs are often part of an exploit chain.
  • Lack of security awareness and efforts by OSS developers.
  • Tragedy of the commons, and assumption that someone else will do “it”. Putting too much of the burden on volunteer maintainers. Companies shouldn’t try to require too much of the free projects that they are using. Any interventions must come with strong community incentives.
  • Increasing and poorly tracked dependency on projects, in some cases individuals; misalignment of funding and resources; treating OSS as a public good (gov investment) is maybe sound; consider tax concepts (really); who benefits more should pay/fund more; cui bono; it shouldn’t be a complete gov subsidy.
  • Transitive dependencies, where users evaluate the parent OSS project, but not all of its dependencies, to see if they are well maintained and following best practices.
  • Funding. The world is capitalism, and it is not practical for critical open source maintainers to focus on that job full-time without capital.
  • Users of open source don’t understand that in many/most cases the software isn’t supported in the same way commercial software is. Example: I recently saw a user ask about when some vulnerabilities that have been published would be addressed in the project. This project is widely used and has critical vulnerabilities in it. The single maintainer’s response was “it simply depends on my spare time.” Critical security issues in what is likely critical software for some orgs, and it will be addressed when someone has some spare time. That’s not a formula for highly secure software.
  • Education and knowledge gap.
  • The Jeeper Creeper.
  • Government attempting to regulate an anti-culture which is based entirely on the foundations of helpfulness, novelty, and innovation. Open source is not industry, it is not corporate, and the ideals of it are often at odds with those using it. It’s like volunteer EMTs or Good Samaritans. There should be support and protections for those that do the reasonable right thing without introducing a burden on them.
  • Security loopholes should be addressed with caution and strict measures.
  • Comprehensive, aligned, and equal support for both upstream creators of open source and downstream consumers is critical.
  • Risk: death and burnout. We are currently ignoring both in the name of security and that’s going to bite us.
  • Funding. Governments should require a % of profit – not even revenue, just profit – be invested into their open source stack.
  • Trey Herr really worries me. Don’t let him near a command line.
  • The biggest risk is automation without proper processes and workflows in place. Automating a process incorrectly is a greater risk than not doing it at all.
  • Biggest risk across the OSS community is sustainability – having x-omega and free security trainings is nice, but research has shown most OSS projects have a single maintainer. How can you expect a single maintainer to maintain his/her project and also spend time on security considerations? We need an open source way to give OSS users (especially large enterprise) easy insights into OSS usage so they then can undertake action to support the OSS projects vital/critical to them (have seen people use OSS Review Toolkit for this).
  • One of the most significant issues is a cultural one. Today, most conversations around open-source software still put too much emphasis on the community aspect and define it as some charity. The solutions are usually related to increasing long-term volunteer contributions from corporations or individuals. However, if the open-source initiatives had the necessary financial resources, like any other businesses, they would already do their best to minimize the risks, hire the needed talents, and produce a healthy software solution. Hence, we should recognize the overall economic value of open-source software, see it as a regular business activity in which entrepreneurs contribute to digital public goods, and address investment coordination issues around it. Once the open-source ecosystem receives adequate funding, the competition in the market should sort out the rest.
  • Projects fail or are mismanaged due to lack of organizational support.
  • Not having an asset list of what the enterprise actually has.
  • Insider risk or the malicious maintainer – open source projects can switch hands or be influenced by anyone despite their motivations or backgrounds. This is an incredibly difficult security risk to address for OSS.
  • Government overreach would be a concern. Standards would be helpful.
  • A monopoly on the code hosting services.
  • Security fatigue due to vendors overselling BS; generally, the amount of bad security vendors and products.
  • Lack of direct funding for core nodes, central components in wide use. Lack of practical contributions by ENISA (see, e.g., their analysis of Heartbleed in “someone should have done something” style); bug-bounty programs on a too small scale without larger buy-in of officials; no large-scale strategic investment in open source with regards to platform dependencies of the economy; that is, no learnings from the Putin gas disaster, dependencies on China in the hardware sector.
  • Security risks. Code bases not checked by real security experts.
  • Throwing OSS under the geopolitical bus.
  • Lack of critical thinking and understanding of biased axioms.
  • Government/large corporate business users failing to financially support the open source projects they use. Government stepping in and trying to regulate/control a system that does not want or need this. Government depts. trying to be software developers.
  • Funding, public, security.
  • Software patents.
  • An inability to objectively and discretely measure risk associated with different OSS projects.
  • It’s a fact that very few projects have undergone independent security review. More funding should go into initiatives that can do that. Furthermore, even “well-supported” projects are prone to vulnerabilities and exploits, so projects need to be consistently evaluated and reviewed based on their risk and usage.
  • License changes in existing and widely used open source components and libraries. For example, Akka is changing its license from an OSS license to a commercial license. All the other OSS components and libraries depending on Akka need to change Akka to some other library or try to meet the requirements of the new license (which is not always possible).
  • Regulation that fails to account for the dynamics of the work project that is open source.
  • Most surveys of this ilk have a common top blocker: time available to address this priority with everything else to be done. While there are such time constraints amongst OSS devs and maintainers, the risks are high that security issues won’t get addressed in the optimal way.
  • Blindly relying on projects without ensuring they have long-term viability.
  • The biggest risk I see is in continuity. If the primary maintainer(s) of a popular project leaves the project for whatever reason (burn-out, interest changes, death, etc.), what can the overall open source community do to help that transition?

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    “Critical Digital Infrastructure Research,” Ford Foundation, accessed January 12, 2023, https://www.fordfoundation.org/campaigns/critical-digital-infrastructure-research/.
2    Full Committee Hearing: “Responding to and Learning from the Log4Shell Vulnerability,” US Senate Committee on Homeland Security & Governmental Affairs,  February 8, 2022, https://www.hsgac.senate.gov/hearings/responding-to-and-learning-from-the-log4shell-vulnerability; Hearing: “Securing the Digital Commons: Open-Source Software Cybersecurity,” House Committee on Science, Space, and Technology,  May 11, 2022, https://science.house.gov/2022/5/joint.
3     Nadia now goes by Nadia Asparouhova and more on her work can be found here https://nadia.xyz/
4    Eric S. Raymond, The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary (O’Reilly Media, Inc., 2001).
5    To the reader, as part of this report, the Atlantic Council and the Open-Source Policy Network distributed an anonymous survey to several OSS governance, policy, and security communities, including through the OpenSSF’s general Slack channel and Open Forum Europe’s email forum. The survey, open from November 20, 2022, through January 8, 2023, aimed to gather attitudes on OSS policy and security from OSS maintainers, developers, and stakeholder communities closer to the problem set than policymakers in Brussels or DC. Despite being open to over two thousand potential respondents, the survey only achieved a sample size of forty-six, limiting the insight into community priorities that it could provide. Nonetheless, there were some noteworthy trends in the responses, and the Atlantic Council and Open-Source Policy Network will continue to gather outside perspectives and sentiment trends in this manner.
6    To the reader, this project and the Open Source Policy Network are made possible with support from Craig Newmark Philanthropies, Schmidt Futures, the Open Source Security Foundation, and the Omidyar Network.
7    Nadia Eghbal, “Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure,” Ford Foundation, June 14, 2016, https://www.fordfoundation.org/media/2976/roads-and-bridges-the-unseen-labor-behind-our-digital-infrastructure.pdf.
8    Eghbal, “Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure.”
9    Julia Ferraioli, “Open Source and Social Systems,” (blog), December 7, 2022, https://juliaferraioli.com/blog/2022/open-source-social-systems/.
10    Alison Dame-Boyle, “EFF at 25: Remembering the Case That Established Code as Speech,” Electronic Frontier Foundation, April 16, 2015, https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech.
11    “Securing Open Source Software Act of 2022,” S.4913, 117th Congress (2022), https://www.congress.gov/bill/117th-congress/senate-bill/4913.
12    Frank Nagle, “Government Technology Policy, Social Value, and National Competitiveness,” Harvard Business School Strategy Unit Working Paper No. 19-103, March 3, 2019, https://doi.org/10.2139/ssrn.3355486.
13    Karl Fogel and Cecilia Donnelly, “Open Data for Resilience Initiative and GeoNode: A Case Study on Institutional Investments in Open Source” (Washington, DC: World Bank Group, December 31, 2017), http://documents.worldbank.org/curated/en/713861563520709009/Open-Data-for-Resilience-Initiative-and-GeoNode-A-Case-Study-on-Institutional-Investments-in-Open-Source; Knut Blind et al., “Study about the Impact of Open Source Software and Hardware on Technological Independence, Competitiveness and Innovation in the EU Economy | Shaping Europe’s Digital Future” (Brussels: European Commission, September 6, 2021), https://digital-strategy.ec.europa.eu/en/library/study-about-impact-open-source-software-and-hardware-technological-independence-competitiveness-and; Brian Proffitt, “The ROI of Open Source,” Red Hat Blog, August 26, 2020, https://www.redhat.com/en/blog/roi-open-source. To the reader, while the authors of this report are not aware of replication studies validating these findings, it is worth noting that the sheer ubiquity of OSS already in proprietary offerings indicates the widespread success of the model. Whether that is due to reduced development time, crowd-sourced innovation, or other factors is not clear, however.
14    “The Open Source Definition,” Open Source Initiative, accessed January 13, 2023, https://opensource.org/osd.
15    Peter Salus, The Daemon, The Gnu, and the Penguin, (Reed Media Services, September 2008).
16    Joseph Carl Robnett Licklider and Robert W. Taylor, “The Computer as a Communication Device,” Science and Technology 76 (April 1968), 21–31.
17    befunge, GitHub, accessed January 13, 2023, https://github.com/topics/befunge.
18    Left-Pad, npm, accessed January 13, 2023, https://www.npmjs.com/package/left-pad.
19    LibreOffice, accessed January 13, 2023, https://www.libreoffice.org/.
20    “Linux Distribution Introduction and Overview,” Linux Training Academy, accessed January 13, 2023, https://www.linuxtrainingacademy.com/linux-distribution-intro/.
21    LibreOffice.
22    ggplot2, accessed January 13, 2023, https://ggplot2.tidyverse.org/.
23    Andrew Spyker and Ruslan Meshenberg, “Evolution of Open Source at Netflix,” Netflix Technology Blog, October 28, 2015, https://netflixtechblog.com/evolution-of-open-source-at-netflix-d05c1c788429.
24    Liran Tal, “The Log4j Vulnerability and Its Impact on Software Supply Chain Security,” Snyk, December 13, 2021, https://snyk.io/blog/log4j-vulnerability-software-supply-chain-security-log4shell/.
25    Mehul Revankar, “New Study Reveals 30% of Log4Shell Instances Remain Vulnerable,” Qualys Security Blog, March 18, 2022, https://blog.qualys.com/qualys-insights/2022/03/18/qualys-study-reveals-how-enterprises-responded-to-log4shell.
26    To the reader, using the term “open-source” as a verb means to make the source code available to all, often on a code hosting platform, with GitHub being one of the most commonly used repository hosts.
27    Ferraioli, “Open Source and Social Systems.”
28    Milo Z. Trujillo, Laurent Hébert-Dufresne, and James Bagrow, “The Penumbra of Open Source: Projects Outside of Centralized Platforms Are Longer Maintained, More Academic and More Collaborative,” EPJ Data Science 11, no. 1 (May 21, 2022): 1–19, https://doi.org/10.1140/epjds/s13688-022-00345-7.
29    To the reader, these fall under the 501(c)(6) classification. Their main difference from a 501(c)(3) nonprofit is that where (c)(3) organizations must serve the public, (c)(6) organizations must serve their members. For more detail, see Internal Revenue Service, “Business Leagues,” irs.gov, accessed January 12, 2023, https://www.irs.gov/charities-non-profits/other-non-profits/business-leagues.
30    “Licenses & Standards,” Open Source Initiative, accessed January 13, 2023, https://opensource.org/licenses.
31    Alpha-Omega, Open Source Security Foundation, accessed January 13, 2023, https://openssf.org/community/alpha-omega/.
32    David Gray Widder, “Can You Stop Your Open-Source Project from Being Used for Evil?,” Overflow, August 8, 2022, https://stackoverflow.blog/2022/08/08/can-you-stop-your-open-source-project-from-being-used-for-evil/.
33    John Sullivan, “Thinking Clearly about Corporations,” Free Software Foundation, June 24, 2021, https://www.fsf.org/bulletin/2021/spring/thinking-clearly-about-corporations.
34    Sam Williams, Free as in Freedom: Richard Stallman’s Crusade for Free Software (O’Reilly Media, Inc., 2002), https://www.oreilly.com/library/view/free-as-in/9781449323332/.
35    To the reader, footnote entries offer further readings on the larger ecosystem and some of its defining debates.
36    To the reader, Dr. Tracy Miller defines infrastructure as “facilities, structure, equipment, or similar physical assets…vitally important, if not absolutely essential, to people having the capabilities to thrive…in ways critical to their own well-being and that of their society, and the material and other conditions which enable them to.” See: Tracy Miller, “Infrastructure: How to Define It and Why the Definition Matters,” Mercatus Center, July 12, 2021, https://www.mercatus.org/research/policy-briefs/infrastructure-how-define-it-and-why-definition-matters.
37    “Critical Infrastructure Sectors,” Cybersecurity and Infrastructure Security Agency, accessed January 12, 2023, https://www.cisa.gov/critical-infrastructure-sectors.
38    David Wheeler, “Securing Open Source Software Is Securing Critical Infrastructure,” Open Source Security Foundation (blog), October 11, 2022, https://openssf.org/blog/2022/10/11/securing-open-source-software-is-securing-critical-infrastructure/.
39    Steven Vaughan-Nichols, “Can the Internet Exist without Linux?,” ZDNet, October 15, 2015, https://www.zdnet.com/home-and-office/networking/can-the-internet-exist-without-linux/; “Cloud Infrastructure for Virtual Machines, Bare Metal, and Containers,” OpenStack, accessed January 13, 2023, https://www.openstack.org/; “Welcome to OpenSSL!” Open Secure Sockets Layer (OpenSSL) Project, accessed January 13, 2023, https://www.openssl.org/; Nate Matherson, “26 Kubernetes Statistics to Reference,” ContainIQ, July 3, 2022, https://www.containiq.com/post/kubernetes-statistics; “The BIRD Internet Routing Daemon Project,” BIRD, accessed January 12, 2023, https://bird.network.cz/.
40    Curl, accessed January 13, 2023, https://curl.se/.
41    Daniel Stenberg, “The World’s Biggest Curl Installations,” (blog), September 17, 2018, https://daniel.haxx.se/blog/2018/09/17/the-worlds-biggest-curl-installations/.
42    “Open Source Security and Risk Analysis Report,” (Mountain View, California: Synopsys Inc., 2022), https://www.synopsys.com/content/dam/synopsys/sig-assets/reports/rep-ossra-2022.pdf.
43    To the reader, authors tip their hats to the researchers at Chainguard for pointing this out.
44    Jennifer Bennett et al., “Measuring Infrastructure in the Bureau of Economic Analysis National Economic Accounts” (Suitland, MD: Bureau of Economic Analysis, December 1, 220AD).
45    The United States Securing Open Source Software Act: What You Need to Know,” Open Source Security Foundation (blog), September 27, 2022, https://openssf.org/blog/2022/09/27/the-united-states-securing-open-source-software-act-what-you-need-to-know/.
46    “Government Open Source Policies,” Center for Strategic and International Studies, August 2022, https://www.csis.org/programs/strategic-technologies-program/government-open-source-software-policies.
47    Sean Gallagher, “Rage-Quit: Coder Unpublished 17 Lines of JavaScript and ‘Broke the Internet,’” Ars Technica, March 25, 2016, https://arstechnica.com/information-technology/2016/03/rage-quit-coder-unpublished-17-lines-of-javascript-and-broke-the-internet/.
48    “How We Use Water,” Overviews and Factsheets, US Environmental Protection Agency, accessed January 13, 2023, https://www.epa.gov/watersense/how-we-use-water.
49    Rachel Estabrook and Michael Elizabeth Sakas, “The Colorado River Is Drying up — but Basin States Have ‘No Plan’ on How to Cut Water Use,” Colorado Public Radio, September 17, 2022, https://www.cpr.org/2022/09/17/colorado-river-drought-basin-states-water-restrictions/.
50    Ashwin Ramaswami, “Securing Open Source Software Act of 2022,” Sustain Open Source Forum, October 3, 2022, https://discourse.sustainoss.org/t/securing-open-source-software-act-of-2022/1098.
51    “Apache License, Version 2.0org,” Open Source Initiative, accessed January 13, 2023, https://opensource.org/licenses/Apache-2.0.
52    “Open Source Security Foundation Raises $10 Million in New Commitments to Secure Software Supply Chains,” Open Source Security Foundation (blog), October 13, 2021, https://openssf.org/press-release/2021/10/13/open-source-security-foundation-raises-10-million-in-new-commitments-to-secure-software-supply-chains/.
53    “Water Law Overview – National Agricultural Law Center,” National Agricultural Law Center, accessed January 12, 2023, https://nationalaglawcenter.org/overview/water-law/.
54    “Drinking Water Laws and New Rules,” Overviews & Factsheets, US Environmental Protection Agency, accessed January 12, 2023, https://www3.epa.gov/region1/eco/drinkwater/laws_regs.html.
56    Daniel Rothberg, “Everyone in Nevada Is Talking about Water. Here Are Five Things to Know.,” Nevada Independent, May 19, 2022, https://thenevadaindependent.com/article/everyone-in-nevada-is-talking-about-water-here-are-five-things-to-know-efbfbc.
57    To the reader, though originally required in the 90s, it is more precise to say that this legislation updated the requirements for those plans among other related items.
59    Pub. L. No. SB74 (2016).
60    To the reader, one might argue that there is a shortage of OSS tailored to meet all consumers’ needs, which leads to its constant change.
61    Fact Sheet: Good Samaritan Administrative Tools,” US Environmental Protection Agency, accessed January 13, 2023, https://www.epa.gov/enforcement/fact-sheet-good-samaritan-administrative-tools.
62    Rep. Lori Trahan (D-MA-03), Press Release: “House Passes Comprehensive Legislation to Aid Ukraine, Invest Millions in Third District,” March 9, 2022, https://trahan.house.gov/news/documentsingle.aspx?DocumentID=2411.
63    Havoc Pennington, “Up to 20% of Your Application Dependencies May Be Unmaintained,” Tidelift (blog), April 9, 2019, https://blog.tidelift.com/up-to-20-percent-of-your-application-dependencies-may-be-unmaintained.
64    Théo Zimmermann and Jean-Rémy Falleri, “A Grounded Theory of Community Package Maintenance Organizations-Registered Report,” CoRR 2108.07474 (September 2021), https://dblp.org/rec/journals/corr/abs-2108-07474.html?view=bibtex; Jailton Coelho et al., “Identifying Unmaintained Projects in GitHub,” in Proceedings of the 12th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, 2018, 1–10, https://doi.org/10.1145/3239235.3240501; Jordi Cabot, “Adopt an Open Source Project,” Livable Software, September 21, 2018, https://livablesoftware.com/adopt-abandoned-open-source-project/; Alpha-Omega; Adopt A Project, GitHub, accessed January 13, 2023, https://github.com/jonobacon/adopt-a-project.
65    European Commission, “Specific Principles: Polluter Pays Principle,” Principles of EU Environmental Law,  https://www.era-comm.eu/Introduction_EU_Environmental_Law/EN/module_2/module_2_11.html.
66    “Waste Framework Directive,” European Commission, https://environment.ec.europa.eu/topics/waste-and-recycling/waste-framework-directive_en and “Water Framework Directive,” European Commission, https://environment.ec.europa.eu/topics/water/water-framework-directive_en.
67    United States: Financial Crisis Inquiry Commission, “The Financial Crisis Inquiry Report: Final Report of the National Commission on the Causes of the Financial and Economic Crisis in the United States” (Washington DC: US Government Printing Office, February 25, 2011), https://www.govinfo.gov/app/details/GPO-FCIC.
68    To the reader, examples include Heartbleed and log4shell.
69    To the reader, some companies offer as a service scanning of software products to identify with reasonable but varied accuracy the underlying components within.
70    Frank Nagle et al., “Census II of Free and Open Source Software — Application Libraries,” [Linux Foundation, Laboratory for Innovation Sciences at Harvard (LISH), and Open Source Security Foundation (OpenSSF), March 2, 2022], https://lish.harvard.edu/publications/census-ii-free-and-open-source-software-%E2%80%94-application-libraries.
71    ”Jeffrey M Stupak, “Financial Stability Oversight Council (FSOC): Structure and Activities,” Congressional Research Services, February 12, 2018, (https://digital.library.unt.edu/ark:/67531/metadc1157125/, accessed January 13, 2023, University of North Texas Libraries, UNT Libraries Government Documents Department).
72    Walter Frick, “What You Should Know About Dodd-Frank and What Happens If It’s Rolled Back,” Harvard Business Review, March 2, 2017, https://hbr.org/2017/03/what-you-should-know-about-dodd-frank-and-what-happens-if-its-rolled-back.
73    House Hearing 114th Congress: “Oversight of the Financial Stability Oversight Council” (Washington, DC: US Government Publishing Office, December 8, 2015), https://www.govinfo.gov/content/pkg/CHRG-114hhrg99796/html/CHRG-114hhrg99796.htm.
74    “About ESMA,” European Securities and Markets Authority, accessed January 13, 2023, https://www.esma.europa.eu/about-esma.
75    To the reader, it is not the mere act of using code that creates the need for maintenance—binaries do not degrade like asphalt—but rather the fact that downstream dependencies and integrations make it essential for upstream components to keep pace with evolving language and environment features and security practices.
76    Raymond, The Cathedral & the Bazaar.
77    “Highway Trust Fund: Federal Highway Administration Should Develop and Apply Criteria to Assess How Pilot Projects Could Inform Expanded Use of Mileage Fee Systems” (Washington DC: US Government Accountability Office, January 10, 2022), https://www.gao.gov/products/gao-22-104299.
78    Adopt A Project.
79    Lindsey Bever, “KKK Takes Adopt-a-Highway Case to Georgia Supreme Court,” Washington Post, October 26, 2016, sec. Post Nation, https://www.washingtonpost.com/news/post-nation/wp/2016/02/23/kkk-takes-adopt-a-highway-case-to-georgia-supreme-court/.
80    Morten Rand-Hendriksen, “On the Corporate Takeover of the Cathedral and the Bazaar,” MOR10 (blog), February 4, 2019, https://mor10.com/on-the-corporate-takeover-of-the-cathedral-and-the-bazaar/.
81    Committee for a Responsible Federal Budget, “The Infrastructure Bill’s Impact on the Highway Trust Fund,” CFRB, February 3, 2022, https://www.crfb.org/blogs/infrastructure-bills-impact-highway-trust-fund.
82    Frank Nagle, “Why Congress Should Invest in Open-Source Software,” Brookings (blog), October 13, 2020, https://www.brookings.edu/techstream/why-congress-should-invest-in-open-source-software/.
83    To the reader, wording  considers both security lapses and wider incidents where developers pull down packages. See: “Awful OSS Incidents” (2022; PayDevs), accessed January 12, 2023, https://github.com/PayDevs/awful-oss-incidents for examples.
84    Hearing [archived webcast]: “Equity in Transportation Infrastructure: Connecting Communities, Removing Barriers, and Repairing Networks Across America,” US Senate Committee on Environment and Public Works, May 11, 2021, https://www.epw.senate.gov/public/index.cfm/2021/5/equity-in-transportation-infrastructure-connecting-communities-removing-barriers-and-repairing-networks-across-america.
85    “Cohesion Fund Fact Sheet,” European Parliament, https://www.europarl.europa.eu/factsheets/en/sheet/96/cohesion-fund, and “Connecting Europe Facility,” Innovation and Networks Executive Agency, December 22, 2022, https://wayback.archive-it.org/12090/20221222151902/https://ec.europa.eu/inea/en/connecting-europe-facility.
86    Eugenia Lostri, Georgia Wood, and Meghan Jain, “Government Open Source Software Policies,” Center for Strategic and International Studies, January 10, 2023, https://www.csis.org/programs/strategic-technologies-program/government-open-source-software-policies.
87    Gijs Hillenius, “Norway to Increase Its Use of Open Source,” Open Source Observatory, November 19, 2008, https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/norway-increase-its-use.
89    Sovereign Tech Fund, German Ministry for Economic Affairs and Climate Action, accessed January 13, 2023, https://sovereigntechfund.de/en.
90    Prototype Fund, Open Knowledge Foundation Germany, accessed January 13, 2023, https://prototypefund.de/en/.
91    Program Solicitation, NSF 22-572: “Pathways to Enable Open-Source Ecosystems (POSE),” National Science Foundation, https://www.nsf.gov/pubs/2022/nsf22572/nsf22572.htm.
92    “Supporting Internet Freedom Worldwide,” Open Technology Fund, https://www.opentech.fund/.
93    “The Ministry of Science Creates a Cluster for Free Software Companies,” iProfessional, April 26, 2013, https://www.iprofesional.com/tecnologia/159530-el-ministerio-de-ciencia-crea-cluster-para-empresas-de-software-libre.amp.
94    Gijs Hillenius, “Up to EUR 200,000 for Austria… | Joinup,” Open Source Observatory, August 22, 2016, https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/eur-200000-austria.
95    John Lui, “Malaysia Sets up $36m Open Source Fund – Silicon.Com,” Silicon, October 30, 2003, https://web.archive.org/web/20050411192233/http:/software.silicon.com/os/0,39024651,39116677,00.htm.
96    Chris Anizczyk et al., “Creating an Open Source Program,” Open Source Guides, n.d., https://www.linuxfoundation.org/resources/open-source-guides/creating-an-open-source-program.
97    Astor Nummelin Carlberg, “The WHO Is the Latest Public Administration to Launch an Open Source Programme Office,” Open Source Observatory, March 18, 2022, https://joinup.ec.europa.eu/collection/open-source-observatory-osor/news/who-builds-ospo
98    Think Open, “Communication to the Commission: Open Source Software Strategy 2020 – 2023” (Brussels, October 21, 2020), https://commission.europa.eu/about-european-commission/departments-and-executive-agencies/informatics/open-source-software-strategy_en.
99    “Collection of Use Case Examples Expanded Regarding Management Methods for Utilizing Open Source Software and Ensuring Its Security,” (Tokyo, Japan: Ministry of Economy, Trade and Industry, May 10, 2022), https://www.meti.go.jp/english/press/2022/0510_003.html.
100    “Extract from the Minutes of the Session of the Government of the Republic of Armenia – On the Endorsement of Internet Governance Principles,” http://www.irtek.am, August 2014, http://www.irtek.am/views/act.aspx?aid=77996.
101    Dan Knauss, “Open Source Communities: You May Not Be Interested in CISA, But CISA Is Very Interested in You,” Post Status (blog), October 3, 2022, https://poststatus.com/open-source-communities-you-may-not-be-interested-in-cisa-but-cisa-is-very-interested-in-you/.
103    “Securing Open Source Software Act of 2022,” S.4913.
104    “Securing Open Source Software Act of 2022,” S.4913.
105    Amelie Koran et al., “The Cases for Using the SBOMs We Build,” Atlantic Council (blog), November 22, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/the-cases-for-using-sboms/.
106    Katie Bratman and Adam Kojak, “SBOM Ingestion and Analysis at New York-Presbyterian Hospital” (Open Source Summit North America 2022, Austin, TX, June 21, 2022), https://ossna2022.sched.com/event/11Q0t/sbom-ingestion-and-analysis-at-new-york-presbyterian-hospital-katie-bratman-adam-kojak-newyork-presbyterian-hospital.
107    To the reader, for more examples of SBOMs already for OSS projects, see the bom-shelter dataset built by John Speed Meyers and Chainguard: “bom-shelter” (Chainguard), https://github.com/chainguard-dev/bom-shelter.
108    Open Source Security Foundation, “Secure Supply Chain Consumption Framework (S2C2F) SIG,” GitHub, accessed January 11, 2023, https://github.com/ossf/s2c2f.
109    Sovereign Tech Fund.
110    James Mcbride and Anshu Siripurapu, “The State of US Infrastructure,” Council on Foreign Relations, November 8, 2021, https://www.cfr.org/backgrounder/state-us-infrastructure.
111    Jim Mone, “NTSB: Design Errors Factor in 2007 Bridge Collapse,” USA Today, November 13, 2008, http://usatoday30.usatoday.com/news/world/2008-11-13-628592230_x.htm.
112    Dan Goodin, “Numerous Orgs Hacked after Installing Weaponized Open Source Apps,” Ars Technica, September 29, 2022, https://arstechnica.com/information-technology/2022/09/north-korean-threat-actors-are-weaponizing-all-kinds-of-open-source-apps/.
113    “OpenSSF Annual Report – 2022,” Open Source Security Foundation, December 2022, https://openssf.org/wp-content/uploads/sites/132/2022/12/OpenSSF-Annual-Report-2022.pdf.

The post Avoiding the success trap: Toward policy for open-source software as infrastructure appeared first on Atlantic Council.

]]>
Russia’s cyberwar against Ukraine offers vital lessons for the West https://www.atlanticcouncil.org/blogs/ukrainealert/russias-cyberwar-against-ukraine-offers-vital-lessons-for-the-west/ Tue, 31 Jan 2023 17:47:28 +0000 https://www.atlanticcouncil.org/?p=606930 Ukraine’s experience in countering Russian cyber warfare can provide valuable lessons while offering a glimpse into a future where wars will be waged both by conventional means and increasingly in the borderless realm of cyberspace.

The post Russia’s cyberwar against Ukraine offers vital lessons for the West appeared first on Atlantic Council.

]]>
Vladimir Putin’s full-scale invasion of Ukraine is fast approaching the one-year mark, but the attack actually started more than a month before columns of Russian tanks began pouring across the border on February 24, 2022. In the middle of January, Russia launched a massive cyberattack that targeted more than 20 Ukrainian government institutions in a bid to cripple the country’s ability to withstand Moscow’s looming military assault.

The January 14 attack failed to deal a critical blow to Ukraine’s digital infrastructure, but it was an indication that the cyber front would play an important role in the coming war. One year on, it is no longer possible to separate cyberattacks from other aspects of Russian aggression. Indeed, Ukrainian officials are currently seeking to convince the International Criminal Court (ICC) in The Hague to investigate whether Russian cyberattacks could constitute war crimes.

Analysis of the Russian cyberwarfare tactics used in Ukraine over the past year has identified clear links between conventional and cyber operations. Ukraine’s experience in countering these cyber threats can provide valuable lessons for the international community while offering a glimpse into a future where wars will be waged both by conventional means and increasingly in the borderless realm of cyberspace.


The Russian cyberattack of January 2022 was not unprecedented. On the contrary, Ukraine has been persistently targeted since the onset of Russian aggression with the seizure of Crimea in spring 2014. One year later, Ukraine was the scene of the world’s first major cyberattack on a national energy system. In summer 2017, Ukraine was hit by what many commentators regard as the largest cyberattack in history. These high-profile incidents were accompanied by a steady flow of smaller but nonetheless significant attacks.

Following the launch of Russia’s full-scale invasion one year ago, cyberattacks have frequently preceded or accompanied more conventional military operations. For example, prior to the Russian airstrike campaign against Ukraine’s civilian infrastructure, Ukrainian energy companies experienced months of mounting cyberattacks.

These tactics are an attractive option for Russia in its undeclared war against the West. While more conventional acts of aggression would likely provoke an overwhelming reaction, cyberattacks exist in a military grey zone that makes them a convenient choice for the Kremlin as it seeks to cause maximum mayhem in Europe and North America without risking a direct military response. Russia may not be ready to use tanks and missiles against the West, but Moscow will have fewer reservations about deploying the cyberwarfare tactics honed in Ukraine.

In addition to disrupting and disabling government bodies and vital infrastructure, Russian cyberattacks in Ukraine have also sought to manipulate public opinion and spread malware via compromised email accounts. The Ukrainian authorities have found that it is crucial to coordinate efforts with the public and share information with a wide range of stakeholders in order to counter attacks in a timely manner.

The effects of cyberattacks targeting Ukraine have already been felt far beyond the country’s borders. One attack on the satellite communication system used by the Ukrainian Armed Forces during the initial stages of the Russian invasion caused significant disruption for thousands of users across the European Union including private individuals and companies. Given the borderless nature of the digital landscape, similar scenarios are inevitable as cyberwarfare capabilities continue to expand.

From a Russian perspective, cyberwarfare is particularly appealing as it requires fewer human resources than traditional military operations. While Moscow is struggling to find enough men and military equipment to compensate for the devastating losses suffered in Ukraine during the first year of the invasion, the Kremlin should have no trouble finding enough people with the tech skills to launch cyber offensives against a wide range of countries in addition to Ukraine.

Russia can draw from a large pool of potential recruits including volunteers motivated by Kremlin propaganda positioning the invasion of Ukraine as part of a civilizational struggle against the West. Numerous individual attacks against Western targets have already been carried out by such networks.

At the same time, Ukraine’s experience over the past year has underlined that cyberattacks require both time and knowledge to prepare. This helps explain why there have been fewer high-complexity cyber offensives following the initial failure of Russia’s invasion strategy in spring 2022. Russia simply did not expect Ukraine to withstand the first big wave of cyberattacks and did not have sufficient plans in place for such an eventuality.

Ukraine has already carried out extensive studies of Russian cyberwarfare. Thanks to this hard-won experience, we have increasing confidence in our ability to withstand further attacks. However, in order to maximize defensive capabilities, the entire Western world must work together. This must be done with a sense of urgency. The Putin regime is desperately seeking ways to regain the initiative in Ukraine and may attempt bold new offensives on the cyber front. Even if Russia is defeated, it is only a matter of time before other authoritarian regimes attempt to wage cyberwars against the West.

The democratic world must adapt its military doctrines without delay to address cyberspace-based threats. Cyberattacks must be treated in the same manner as conventional military aggression and should be subject to the same uncompromising responses. Efforts must also be made to prevent authoritarian regimes from accessing technologies that could subsequently be weaponized against the West.

The Russian invasion of Ukraine is in many ways the world’s first cyberwar but it will not be the last. In the interests of global security, Russia must be defeated on the cyber front as well as on the battlefields of Ukraine.

Yurii Shchyhol is head of Ukraine’s State Service of Special Communications and Information Protection.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


The post Russia’s cyberwar against Ukraine offers vital lessons for the West appeared first on Atlantic Council.

]]>
Unlocking a sustainable future by making cybersecurity more accessible https://www.atlanticcouncil.org/blogs/energysource/unlocking-a-sustainable-future-by-making-cybersecurity-more-accessible/ Mon, 30 Jan 2023 20:00:20 +0000 https://www.atlanticcouncil.org/?p=606715 Cybersecurity will be a key feature of the energy transition. Decision-makers will need to be diligent as they look to secure an increasingly digital and interconnected global energy system.

The post Unlocking a sustainable future by making cybersecurity more accessible appeared first on Atlantic Council.

]]>
The world is on its way toward building a sustainable, inclusive energy future. Renewable energy sources have seen rapid growth thanks to technology innovation and declining costs. At the same time, digitalization is making conventional energy infrastructure more efficient. Continuing these trends will be critical to meeting global climate goals while raising prosperity around the world. And because energy transformation will herald a new, digitalized energy system, cybersecurity has a key role to play in unlocking that sustainable, inclusive future.

The energy sector must withstand a constant siege of cyberattacks—including some backed by nation-states. New attacks can propagate at the speed of light, and their consequences can take days or weeks to unfold, disrupting markets, making equipment unsafe to operate, and causing cascading effects that spread beyond the targeted organization.

Every energy sector participant—new or established, private or public—has an interest in maturing cybersecurity across an increasingly interconnected digital energy system. To continue to strengthen resilience and reliability, investments designed to improve the cost-benefit profile for cybersecurity are critical not just for the biggest players, but for everyone.

Both new and old energy technologies depend on cybersecurity. Rapid digitalization across the energy sector has increased efficiency and decreased emissions, but it has also changed and expanded the vulnerabilities the sector must consider. Attackers increasingly target not just information technologies (IT), but operating technologies (OT) as well. Retrofits to existing OT infrastructure like pipelines and legacy generating plants mean these systems are now often network-connected. Newer technologies like wind and solar depend on digital management from the outset.

The cyber threat isn’t limited to big players or the Global North. Recent years have seen successful ransomware attacks against the biggest petroleum products pipeline in the United States, against the biggest electricity supplier in Brazil, and against smaller infrastructure operators like the municipal electricity utility in Johannesburg. We have also seen attacks against subcontractors leveraged to penetrate electric utilities connected to the US grid. This is a global challenge for organizations large and small.

Faced with a continuous onslaught of cyberattacks, the energy sector will need to establish practices and institutions that drive down the cost of deploying strong cybersecurity across the energy value chain. Startups, subcontractors, and small utilities will become a consistently weak link in the energy ecosystem if affordable, effective cybersecurity remains unavailable.

So how can the energy sector ensure that cybersecurity keeps pace with cyber risk, and seize opportunities to get ahead of attackers? How can public and private sector leaders contribute to building a community of trust?

Regulators in the energy sector should ensure they enable—or at a minimum, don’t stifle—technology innovations that enhance cybersecurity. Cyber innovation will need to keep pace with both the new technologies of the energy transformation and the known risks to those technologies, even if slow-moving regulatory processes have not yet accounted for new business models, technologies, or threats.

Similarly, regulators should consider how to encourage rapid information sharing about threat intelligence. Although threat intelligence can help quickly harden targets against novel attacks, operators may be reluctant to share information if they believe it will later lead to legal and financial liabilities. Tabletop exercises that convene public and private organizations can improve incident response, building relationships and providing actionable insights before a crisis occurs.

Public and private sector leaders can both work to expand the pool of cybersecurity talent—one of the chief cost barriers for stronger cybersecurity. Cybersecurity experts are scarce, and experts who are also familiar with the operating technologies enabling the energy transition even more so. Training programs—public or private—will help meet demand. Solutions that expand the scope and power of automation can also help, as can information-sharing that enables security teams to quickly recognize new threats and efficiently apply patches.

For asset operators (public or private), cybersecurity should be part of decision-making on new projects. Considering how to secure new infrastructure or planned retrofits can help reduce the cost and complexity needed to manage risk. Monitoring operations helps operators and cyber analysts understand how systems interact with each other during normal production—and enables earlier detection of malicious activity. Seeking opportunities for automation of routine tasks can reduce the cost of strong cybersecurity. Advancements in machine learning and artificial intelligence make it easier to rapidly draw useful insights from massive data sets.
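The baseline-monitoring idea above can be sketched in a few lines. This is a purely illustrative example—the function name, readings, and threshold are invented, not drawn from any real plant or product: it learns a simple statistical baseline from readings gathered during normal production and flags live readings that deviate sharply from it, which is the kind of routine check that lends itself to automation.

```python
# Illustrative sketch only: learn a baseline from normal operations,
# then flag readings that deviate sharply from it.
from statistics import mean, stdev

def find_anomalies(baseline, readings, threshold=3.0):
    """Return readings more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [r for r in readings if abs(r - mu) > threshold * sigma]

# Hypothetical sensor values (e.g., grid frequency near 50 Hz).
baseline = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 50.1, 49.7]
live = [50.0, 50.2, 73.5, 49.9]
print(find_anomalies(baseline, live))  # → [73.5]
```

Real deployments would use far richer models and data, but the principle is the same: you cannot recognize malicious activity without first characterizing normal production.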

Private sector collaborations can help build trust and cyber maturity across the industry. Common standards and certifications can help spread best practices and build confidence that potential partners or clients will not introduce new vulnerabilities. Threat intelligence can sometimes be more comfortably shared across peer organizations than with regulators.

Private sector leaders can assess and improve their own organizations’ cyber risk posture. Boards that accurately understand their cyber risks will be better able to invest appropriately in managing those risks. Likewise, making clear that cybersecurity is a cross-cutting competency key to performance for every business unit helps build a strong security culture. And of course, recognizing that cybersecurity is an ongoing effort across the sector helps build the collaboration across the energy sector needed to contend with a dynamic, interconnected cyber threat landscape.

Finally, an inclusive energy transformation will also require cyber-inclusivity. Even as the Global North continues to build the connective tissue necessary to meet the cyber risks of a digitalized energy system, passing those lessons forward as the developing world pursues electrification and sustainable energy access will be necessary to ensure that the energy system of the Global South is constructed with cyber-resiliency in mind. Using global convenings like the Atlantic Council Global Energy Forum in Abu Dhabi earlier this month to bring cybersecurity to the table alongside discussions of increasing energy access is critical to building community and advancing shared security in a digital energy system.

Leo Simonovich is the vice president and global head of industrial cyber and digital security at Siemens Energy.

Reed Blakemore is a deputy director at the Atlantic Council Global Energy Center.

Learn more about the Global Energy Center

The Global Energy Center develops and promotes pragmatic and nonpartisan policy solutions designed to advance global energy security, enhance economic opportunity, and accelerate pathways to net-zero emissions.

The post Unlocking a sustainable future by making cybersecurity more accessible appeared first on Atlantic Council.

The 5×5—China’s cyber operations
https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-chinas-cyber-operations/
Mon, 30 Jan 2023
Experts provide insights into China’s cyber behavior, its structure, and how its operations differ from those of other states.

The post The 5×5—China’s cyber operations appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

On October 6, 2022, the Cybersecurity and Infrastructure Security Agency, Federal Bureau of Investigation, and National Security Agency released a joint cybersecurity advisory outlining the top Common Vulnerabilities and Exposures that Chinese state-linked hacking groups have been actively exploiting since 2020 to target US and allied networks. Public reporting indicates that, for the better part of the past two decades, China has consistently engaged in offensive cyber operations, and as the scope of the country’s economic and political ambitions has expanded, so has its cyber footprint. The number of China-sponsored and aligned hacking teams is growing, as these groups develop and deploy offensive cyber capabilities to serve the state’s interests—from economic to national security.

We brought together a group of experts to provide insights into China’s cyber behavior, its structure, and how its operations differ from those of other states.

#1 Is there a particular example that typifies the “Chinese” model of cyber operations?

Dakota Cary, nonresident fellow, Global China Hub, Atlantic Council; consultant, Krebs Stamos Group

“China’s use of the 2021 Microsoft Exchange Server vulnerability to access email servers captures the essence of modern Chinese hacking operations. A small number of teams exploited a vulnerability in a critical system to collect intelligence on their targets. After the vulnerability became public and their operation’s stealth was compromised, the number of hacking teams using the vulnerability exploded. China has established a mature system of operational segmentation and capabilities-sharing, allowing teams to quickly distribute and use a vulnerability once its use has been compromised.”

John Costello, former chief of staff, Office of the National Cyber Director

“No. China’s approach has evolved too quickly; its actors are too heterogeneous and too many. What has remained consistent over time is the principal focus of China’s cyber operations, which, in general, is the economic viability and growth of China’s domestic industry and the advancement of its scientific research, development, and modernization efforts. China does conduct what some would call ‘legitimate’ cyber operations, but these are vastly overshadowed by campaigns that are clearly intended to obtain intellectual property, non-public research, or place Chinese interests in an advantageous economic position.”

Bulelani Jili, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“What is unique is how the party-state promotes surveillance technology and cyber operations abroad. It utilizes diplomatic exchanges, law enforcement cooperation, and training programs in the Global South. These initiatives not only advance the promotion of surveillance technologies and cyber tools but also support the government’s goals with regard to international norm-making in multilateral and regional institutions.” 

Adam Kozy, independent analyst; CEO and founder, SinaCyber; former official with the FBI’s Cyber Team and CrowdStrike’s Asia-Pacific Analysis Team

“There is not one typical example of Chinese cyber operations in my opinion, as operations have evolved over time and are uneven in their distribution of tooling, access to the vulnerability supply chain, and organization. However, one individual who typifies how the Chinese Communist Party (CCP) has co-opted domestic hacking talent for state-driven espionage purposes is Tan Dailin (谭戴林/aka WickedRose) of WICKED PANDA/APT41 fame. He first began as a patriotic hacker during his time at university in 2000-2002, conducting defacements during the US-Sino hacker war, but was talent-spotted by his local People’s Liberation Army (PLA) branch, the Chengdu Military Region Technical Reconnaissance Bureau (TRB), and asked to compete in a hackathon. This was followed by an “internship” where he and his fellow hackers at the NCPH group taught attack/defense courses and appear to have played a role in the initial 2003-2006 Titan Rain attacks probing US and UK government systems. Tan and his friends continued to do contract work for gaming firms, hacking a variety of South Korean, Japanese, and US gaming companies, which gave them experience with high-level vulnerabilities able to manipulate systems at the kernel level, and also afforded them stolen gaming certificates that allowed their malware to evade antivirus detection. After a brief period where he was reportedly arrested by the Ministry of Public Security (MPS) for hacking other domestic Chinese groups, he reemerged with several new contracting entities that have been noted to work for the Ministry of State Security (MSS) in Chengdu. Tan has essentially made a very comfortable living out of being a cyber mercenary for the Chinese state, using his legacy hacking network to constantly improve and upgrade tools, develop new intrusion techniques, and stay relevant for over twenty years.”

Jen Roberts, program assistant, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“While no one case study stands out to typify a “Chinese” model, Chinese cyber operations blend components of espionage and entrepreneurship and capitalize on China’s pervasiveness in the international economy. One example is the Nortel/Huawei case, in which espionage, at least in part, caused the collapse of the Canadian telecommunications company.”

#2 What role do non-state actors play in China’s approach to cyber operations?


Cary: “Chinese security services still have a marked preference for using contracted hacking teams. These groups often raise money from committing criminal acts, in addition to working on behalf of intelligence agencies. Whereas in the United States the government may purchase vulnerabilities to use on an offensive mission or hire a few companies to conduct cyber defense on a network, the US government does not hire firms to conduct specific offensive operations. In China, the government may hire teams for both offensive and defensive work, including offensive hacking operations.”

Costello: “Non-state actors play a myriad of roles. Most notably, Department of Justice and Federal Bureau of Investigation indictments show clear evidence of contractual relationships between the MSS and non-state actors conducting cyber intelligence operations. Less conventionally, Chinese hacktivists have on occasion played a limited but substantive role in certain cases, such as cyberattacks against South Korea’s Lotte Group during the US Terminal High Altitude Area Defense (THAAD) system kerfuffle in 2017. Hypothetically, China’s military strategy calls for a cyber defense militia, but the contours and reality of mobilization, training, and reliability are unclear. China’s concept of ‘people’s war’ in cyberspace—a familiar adoption of Maoist jargon for new concepts—has been discussed but has yet to be seen in practice in any meaningful form.”

Jili: “State investment and procurement of public security systems from private firms are driving the development of China’s surveillance ecosystem. Accordingly, private firm work and collaboration with the state are scaling Beijing’s means to conduct surveillance operations on targeted domestic populations that are perceived threats to regime stability. Crucially, given the financial incentives to collaborate with Beijing, private companies have limited reasons not to support state security prerogatives.” 

Kozy: “This question raises the issue of mirroring bias. We tend to view things through a United States and Western lens when evaluating whether someone is a state actor or not, because we have very defined lines around what an offensive cyber operator can do acting on behalf of the US government. China has thrived in this grey area, relying on patriotic hackers with tacit state approval at times, hackers with criminal businesses, as well as growing its domestic ability to recruit talented researchers from the private sector and universities. The CCP has historically compelled individuals who would be considered traditionally non-state-affiliated actors to aid campaigns when necessary. Under an authoritarian regime like the CCP, any individual who is in China or ethnically Chinese can become a state actor very quickly. Actors like Tan Dailin do constitute a different type of threat because the CCP effectively co-opts their talents while turning a blind eye to their criminal, for-profit side businesses, which are illegal and have worldwide impact.”

Roberts: “Chinese non-state actors are deeply involved in Chinese cyber operations. A wide variety of non-state entities, such as contractors and technology conglomerates (Alibaba, Huawei, etc.), have worked in tandem with the CCP on the research, development, and execution of cyber operations. This relationship is fortified by Chinese disclosure laws and the repercussions of violating them. While Russia’s relationship with non-state actors relies on the opaqueness of those groups’ ties to the government, China’s relationship with non-state entities is much more transparent.”

#3 How do China’s cyber operations differ from those of other states in the region?

Cary: “China has the most hackers and bureaucrats on payroll in Asia. Its operations are not different in kind nor process, but scale. While Vietnam’s or India’s cyber operators are able to have some effect in China, they are not operating at the scale at which China is operating. The most significant differentiator—which is still only speculation—is that China likely collects from the backbone of the Internet via agreements or compromise of telecommunication giants like Huawei, China Unicom, etc., as well as accessing undersea cables.” 

Costello: “Scale. The scale of China’s cyber operations dwarfs those of other countries in the region—the complexity and sheer range of targeting, and the number of domestic technology companies whose increasingly global reach may be utilized for intelligence gain and influence. As China’s influence and global reach expands, so too does its self-perceived need to protect and further expand its interests. Cyber serves as a low-risk and often successful tool to accomplish this in economic and security realms.” 

Jili: “While most regional and global players’ cyber operations have a domestic bent, Beijing also actively promotes surveillance technology and practices abroad through diplomatic exchanges, law enforcement cooperation, and training programs. These efforts not only advance the proliferation of Chinese public security systems, but they also support the government’s goals concerning international norm-making in multilateral and regional institutions.” 

Kozy: “China is by far the most aggressive cyber power in its region. It can be debated that Russian cyber operatives are still more advanced in terms of sophistication, but China aggressively conducts computer network exploitations against all of its regional neighbors with specific advanced persistent threat (APT) groups across the PLA and MSS having regional focuses. Some of its neighbors such as India, Vietnam, Japan, and South Korea have advanced capabilities of their own to combat this, but there are regular public references to successful Chinese cyber campaigns against these countries despite significant defensive spending. Regional countries without cyber capabilities likely have long-standing compromises of critical systems.” 

Roberts: “China has a talent for extracting intellectual property and conducting large-scale espionage. While other threat actors in the region, like North Korea, also conduct espionage operations, North Korea’s primary focus is on operations that prioritize fiscal extraction to fund regime activity, while China seems much more intent on collecting data for a variety of purposes. Despite differing capacities, sophistication, and types of operations, the end goals for both states are not all that different—political survival.”

More from the Cyber Statecraft Initiative:

#4 How have China’s offensive cyber operations changed since 2018?

Cary: “China’s emphasis on developing its domestic pipeline of software vulnerabilities is paying off. China has passed policies that co-opt private research on behalf of the security services, support public software vulnerability competitions, and invest in technology to automate software vulnerability discovery. Together, as outlined by Microsoft’s Threat Intelligence Center’s 2022 analysis, China is combining these forces to use more software vulnerabilities now than ever before.”

Costello: “China’s cyber operations have unsurprisingly grown in scale and sophistication. Actors are less ‘noisy’ and China’s tactical approach to cyber operations appears to have evolved towards more scalable operations, namely supply-chain attacks and targeting service providers. These tactics have the advantage of improving the return on investment for an operation or campaign, as they allow compromise of all customers who use the product or service while minimizing risk of discovery. Supply chain attacks or compromise through third-party services can also be more difficult to detect and identify. China’s cyber landscape is not homogenous, and there remains great variability in sophistication across the range of Chinese actors.

As reported by the Director of National Intelligence in the last few years, China has increasingly turned towards targeting US critical infrastructure, particularly natural gas pipelines. This is an evolution, though whether it is ‘learning by doing,’ operational preparation of the battlespace, or nascent ventures by a more operationally-focused Strategic Support Force (reorganized into a Space and Cyber Corps from 2015-17) is unclear. Time will most certainly tell.”

Jili: “Since 2018, the party-state has been more active in utilizing platforms like BRICS (Brazil, Russia, India, China, and South Africa), an emerging markets organization, and the Forum on China-Africa Cooperation (FOCAC) to promote digital infrastructure products and investments in the Global South. Principally, through multilateral platforms like FOCAC, Beijing has promoted resolutions to increase aid and cooperation in areas like cybersecurity and cyber operations.”

Kozy: “Intrusions from China have continued unabated since 2018, with a select number of Chinese APTs having periods of inactivity due to COVID-19 shutdowns. The Cyber Security Law and National Intelligence Law, both enacted in 2017, provided additional legal authority for China’s intelligence services to access data and co-opt Chinese companies for use in vaguely worded national security investigations. Of note are China’s efforts to increase the number of domestic cybersecurity conferences and nationally recognized cybersecurity universities as part of ongoing recruitment pipelines for cyber talent. Though there was increased focus from the Western cybersecurity community on MSS-affiliated contractors after the formation of the PLA Strategic Support Force (PLASSF) in 2015, more PLA-affiliated APT groups have emerged since the pandemic with new tactics, techniques, and procedures. The new PLASSF organization means these entities may be compromising high-value targets and then assessing them for use in offensive cyber operations in wartime scenarios or cyber espionage operations.”

Roberts: “Since 2018, Chinese offensive cyber operations have increased in scale. China has reinvigorated its workforce capacity-building efforts to increase the overall quantity and quality of workers. It has tightened its legal regime, cracking down on external vulnerability disclosure. It has also begun investing significantly in disinformation campaigns, especially against Taiwan. This is evident in Chinese influence operations surrounding Taiwan’s 2018 and 2020 elections.”

#5 What domestic entities, partnerships, or roles exist in China’s model of cyber operations that are not present in the United States or Western Europe?

Cary: “China’s emphasis on contracted hackers coincides with divergent levels of trust between the central government and some provincial-level MSS hacking teams. Some researchers maintain that one contracted hacking team pwns targets inside China to do internal security prior to visits by central government leaders. While there is scant evidence that these attitudes and beliefs make their way into operations against foreign targets, they do likely impact the distribution of responsibilities and operations in a way not seen in mature democracies. The politicization of intelligence services is particularly risky in China’s political system.”

Costello: “The extralegal influence of the CCP cannot be overstated. Though the National Security Law, National Intelligence Law, and other laws ostensibly establish a legal foundation for China’s security apparatus, the reality is that the party is not bound strictly by these laws—and they only demonstrate a public indicator of what power it may possess. The lack of any independent judiciary suggests unchecked power of the CCP to co-opt or compel assistance from any citizen or company, over which it almost certainly has near-total leverage. While suspicion of Chinese organizations can be overblown, the idea that the CCP has the power to utilize not each but any organization is sobering and the root of many of these concerns. The lack of rigorous rule of law, in these limited circumstances, is certainly a competitive advantage in the intelligence sphere.”

Jili: “Beijing has nurtured a tech industry and environment that actively support the party-state’s aims to bolster government surveillance and cyber capabilities. From large firms to startups, many companies work with the state to conduct vulnerability research, develop threat detection capabilities, and produce security and intelligence products. While these private firms rely on Chinese venture capital and state loans, they have grown to service a global customer base.”

Kozy: “Starting with the 2015 control of WooYun, China’s largest vulnerability site, the CCP has gained an incredible amount of control of the vulnerability supply chain within China, which affords its cyber actors access to high-value vulnerabilities for use in their campaigns. The aforementioned 2017 laws also made it easier for Chinese authorities to prevent domestic researchers from competing in cyber conferences overseas and improved access to companies doing vulnerability research in China. The CCP’s public crackdowns on Jack Ma, Ant Financial, and many others have shown that the CCP fears the influence its tech firms have and has quickly moved to keep its tech giants loyal to the party; a stark contrast to the relationships that the United States and European Union have with tech giants like Google, Facebook, etc.”

Roberts: “While corporate-government partnerships exist everywhere, what separates the United States and Western Europe from China is the scope and scale of the connective tissue that exists between the two entities. In China, this relationship has more explicit requirements in the cyber domain, especially when it comes to vulnerability disclosure.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—China’s cyber operations appeared first on Atlantic Council.

Russian War Report: Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize
https://www.atlanticcouncil.org/blogs/new-atlanticist/russian-war-report-russian-hacker-wanted-by-the-fbi-reportedly-wins-wagner-hackathon-prize/
Fri, 13 Jan 2023
In December 2022, Wagner Group organized a hackathon that was won by a man wanted by the FBI for his connection to computer malware.

The post Russian War Report: Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize appeared first on Atlantic Council.

As Russia continues its assault on Ukraine, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) is keeping a close eye on Russia’s movements across the military, cyber, and information domains. With more than seven years of experience monitoring the situation in Ukraine—as well as Russia’s use of propaganda and disinformation to undermine the United States, NATO, and the European Union—the DFRLab’s global team presents the latest installment of the Russian War Report. 

Security

Russian forces claim control of strategic Soledar

Tracking narratives

Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize

Frenzy befalls French company accused of feeding Russian forces on New Year’s Eve

Former head of Russian space agency injured in Donetsk, mails shell fragment to French ambassador

Sputnik Lithuania’s former chief editor arrested

International response

New year brings new military aid for Ukraine

Ukrainian envoy to Georgia discusses deteriorating relations between nations

Russian forces claim control of strategic Soledar

Russia said on January 13 that its forces had taken control of the contested city of Soledar. Recent fighting has been concentrated in Soledar and Bakhmut, two cities in the Donetsk region that are strategically important to Ukrainian and Russian forces. Moscow has been trying to take control of the two cities since last summer. Over the past week, Russia has increased its presence on the fronts with the support of Wagner units. Russia wants control of the Soledar-Bakhmut axis to cut supply lines to the Ukrainian armed forces.  

On January 10, Russian sources claimed that Wagner forces had advanced into Soledar. Interestingly, Wagner financier Yevgeny Prigozhin denied the claim and said the forces were still engaged in fighting. Wagner’s presence was established in a camp near Bakhmut. Soldiers from the Wagner Group and other special forces deployed to Bakhmut after other military units had failed to break through the Ukrainian defense.  

On January 11, Ukrainian Deputy Defense Minister Anna Malyar said that heavy fighting was taking place in Soledar and that Russian forces had replaced the unit operating in the city with fresh troops and increased the number of Wagner soldiers among them. The same day, Prigozhin claimed that Wagner forces had taken control of Soledar. The Ukrainian defense ministry denied the allegation. On January 12, Ukrainian sources shared unconfirmed footage of soldiers driving on the main road connecting Bakhmut and Soledar with Sloviansk and Kostyantynivka as evidence that the area remained under Ukrainian control.

Elsewhere, on January 11, the Kremlin announced that Valery Gerasimov would replace Sergei Surovikin as commander of Russian forces in Ukraine. The unexpected move could be interpreted as evidence of a struggle for influence in Russian military circles. Surovikin is considered close to Prigozhin’s entourage, which has criticized senior officers recently, including Gerasimov. Some analysts believe that the change signals a possible military escalation from Russia. 

Furthermore, on January 8, Ukrainian forces repelled a Russian offensive in the vicinity of Makiyivka and Stelmakhivka. Further north of Lysychansk, on January 11, Ukraine also repelled an attack on the city of Kreminna. In the neighboring Kharkiv region, aerial threats remain high. On the southern front, the city of Kherson and several cities across the Zaporizhzhia region remain targets of Russian attacks.

Lastly, a new Maxar satellite image from nearby Bakhmut exemplifies the brutality of war on the frontline in Donetsk. The image shows thousands of craters, indicating the intensity of the artillery shelling and exchange of fire between Ukrainian and Russian forces.

Valentin Châtelet, Research Associate, Brussels, Belgium

Ruslan Trad, Resident Fellow for Security Research, Sofia, Bulgaria

Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize

In December 2022, the Wagner Group organized a hackathon at its recently opened headquarters in St. Petersburg for students, developers, analysts, and IT professionals. Wagner announced the hackathon on social media earlier that month. Organizers created the promotional website hakaton.wagnercentr.ru, but the website went offline soon after. A December 8 archive of the website, accessed via the Internet Archive Wayback Machine, revealed that the objective of the hackathon was to “create UAV [unmanned aerial vehicle] positioning systems using video recognition, searching for waypoints by landmarks in the absence of satellite navigation systems and external control.” Hackathon participants were asked to complete the following tasks: display the position of the UAV on the map at any time during the flight; direct the UAV to a point on the map indicated by the operator; provide a search for landmarks in case visual reference points are lost during the flight; and return the UAV to its point of departure in case of a complete loss of communication with the operator.

On December 9, Ukrainian programmers noticed that hakaton.wagnercentr.ru was hosted by Amazon Web Services and asked users to report the website to Amazon. Calls to report the channel also spread on Telegram, where the channel Empire Burns asked subscribers to report the website and provided instructions on how to do so. Empire Burns claims hakaton.wagnercentr.ru first went offline on December 9, which tallies with archival posts. However, there is no evidence that reporting the website to Amazon resulted in it being taken offline.   

Snapshots of hakaton.wagnercentr.ru from the Wayback Machine show the website was created in a Bitrix24 online workspace. A snapshot captured on December 13 shows an HTTP 301 status, which redirects visitors to Wagner’s main website, wagnercentr.ru. The Wagner website appears to be geo-restricted for visitors outside Russia. 
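As a hedged aside (the code below is illustrative and not part of the DFRLab investigation), the kind of redirect captured in that snapshot can be checked with a few lines of standard-library Python: request the URL without following redirects and inspect the status code and Location header.

```python
import urllib.request
import urllib.error

class NoRedirect(urllib.request.HTTPRedirectHandler):
    """Stop urllib from silently following 3xx responses."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def check_redirect(url):
    """Return (status_code, redirect_target_or_None) for a URL."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        resp = opener.open(url, timeout=10)
        return resp.status, None
    except urllib.error.HTTPError as err:
        # With redirects suppressed, a 301/302 surfaces as an HTTPError.
        return err.code, err.headers.get("Location")

# Illustrative only -- the live site is geo-restricted outside Russia:
# status, target = check_redirect("http://hakaton.wagnercentr.ru")
# A 301 status with target "https://wagnercentr.ru/" would match the snapshot.
```

A 301 response found this way records only that the server redirected traffic; establishing when the redirect was put in place still requires archived snapshots like the Wayback Machine captures cited above.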

On December 23, a Wagner Telegram channel posted about the hackathon, claiming more than 100 people applied. In the end, forty-three people divided into twelve teams attended. The two-person team GrAILab Development won first place, the team SR Data-Iskander won second place, and a team from the company Artistrazh received third place. Notably, one of Artistrazh’s co-founders is Igor Turashev, who is wanted by the FBI for his connection to computer malware that the bureau claims infected “tens of thousands of computers, in both North America and Europe, resulting in financial losses in the tens of millions of dollars.” Artistrazh’s team comprised four people who won 200,000 Russian rubles (USD $3,000). OSINT investigators at Molfar confirmed that the Igor Turashev who works at Artistrazh is the same one wanted by the FBI.  

Wagner said that one of the key objectives of the hackathon was the development of IT projects to protect the interests of the Russian army, adding that the knowledge gained during the hackathon could already be applied to clear mines. Wagner said it had also invited some participants to collaborate further. The Wagner Center opened in St. Petersburg in early November 2022; the center’s mission is “to provide a comfortable environment for generating new ideas in order to improve Russia’s defense capability, including information.”

Givi Gigitashvili, DFRLab Research Associate, Warsaw, Poland

Frenzy befalls French company accused of feeding Russian forces on New Year’s Eve

A VKontakte post showing baskets of canned goods produced by the French company Bonduelle being distributed to Russian soldiers on New Year’s Eve has sparked a media frenzy in France. The post alleges that Bonduelle sent Russian soldiers a congratulatory package, telling them to “come back with a win.” The post quotes Ekaterina Eliseeva, the head of Bonduelle’s EurAsia markets. According to a 2019 Forbes article, Eliseeva studied interpretation at a Russian state security academy.

Bonduelle has issued several statements denying the social media post and calling it fake. However, Bonduelle does maintain operations in Russia “to ensure that the population has access to essential foodstuff.”  

French broadcaster TV 5 Monde discovered that Bonduelle’s Russia division participated in a non-profit effort called Basket of Kindness, sponsored by the Fund of Presidential Grants of Russia. Food and supplies were gathered by food banks to be delivered to vulnerable segments of the population. However, during the collection drive, Dmitry Zharikov, governor of the Russian city of Podolsk, posted on Telegram that the collections would also serve military families.   

The story was shared on national television in France and across several international outlets. The Ukrainian embassy in France criticized Bonduelle for continuing to operate in Russia, claiming it was “making profits in a terrorist country which kills Ukrainians.”

Valentin Châtelet, Research Associate, Brussels, Belgium

Former head of Russian space agency injured in Donetsk, mails shell fragment to French ambassador

Dmitry Rogozin, former head of the Russian space agency Roscosmos, said he was wounded in Ukrainian shelling on December 21, 2022, at the Shesh hotel in Donetsk while “celebrating his birthday.” In response, Rogozin sent a letter to Pierre Lévy, the French ambassador to Russia, with a fragment of the shell.   

In the letter, Rogozin accused the French government of “betraying [Charles] De Gaulle’s cause and becoming a bloodthirsty state in Europe.” The shell fragment was extracted from Rogozin’s spine during surgery and allegedly came from a French CAESAR howitzer. Rogozin requested the fragment be sent to French President Emmanuel Macron. His message was relayed by Russian news agencies, and on Telegram by pro-Russian and French-speaking conspiracy channels.  

At the time of the attack, Rogozin was accompanied by two members of his voluntary unit, “Tsar’s wolves,” who were killed in the attack, according to reporting from RT, RIA Novosti, and others.  

Valentin Châtelet, Research Associate, Brussels, Belgium

Sputnik Lithuania’s former chief editor arrested

On January 6, Marat Kasem, the former chief editor of Sputnik Lithuania, was arrested in Riga, Latvia, on suspicion of “providing economic resources” to a Kremlin propaganda resource under EU sanctions.  

The following day, pro-Kremlin journalists held a small demonstration in support of Kasem in front of the Latvian embassy in Moscow. Russian journalist Dmitry Kiselyov and politician Maria Butina attended the event. 

The demonstration was filmed by Sputnik and amplified with the Russian hashtag  #свободуМаратуКасему (#freedomForMaratKasem) on Telegram channels operating in the Baltic states, including the pro-Russian BALTNEWS, Своих не бросаем! | Свободная Балтика!, and on Butina’s personal channel. The news of Kasem’s arrest also reached the Russian Duma’s Telegram channel, which re-shared Butina’s post. 

Valentin Châtelet, Research Associate, Brussels, Belgium

New year brings new military aid for Ukraine

International efforts in support of Ukraine are continuing in full force in 2023. On January 4, Norway announced it had sent Ukraine another 10,000 155mm artillery shells. These shells can be used in several types of artillery units, including the M109 self-propelled howitzer. On January 5, Germany confirmed it would provide Ukraine with Marder fighting vehicles and a Patriot anti-aircraft missile battery. German news outlet Spiegel also reported that talks are underway to supply Ukraine with additional Gepard anti-aircraft guns and ammunition. 

In addition, UK Foreign Secretary James Cleverly said the British government would supply Ukraine with military equipment capable of delivering a “decisive” strike from a distance. At the end of 2022, UK Defense Secretary Ben Wallace discussed the possibility of transferring Storm Shadow cruise missiles, with a range of up to 250 kilometers. Finland also reported that it is preparing its twelfth package of military assistance to Ukraine.  

US aid to Ukraine is also being reaffirmed with a $2.85 billion package on top of weapon deliveries. Additionally, the US plans to deliver fourteen vehicles equipped with anti-drone systems as part of its security assistance package. The company L3Harris is part of the Pentagon’s contract to develop anti-drone kits. This equipment would help protect Ukrainian civil infrastructure, which has been a frequent Russian target since October 2022.  

On January 6, French President Emmanuel Macron announced that France would supply Ukraine with units of the light AMX-10RC armored reconnaissance vehicle. Developed in the 1970s, these vehicles have been used in Afghanistan, the Gulf War, Mali, Kosovo, and Ivory Coast. The French defense ministry also announced that the country would deliver twenty units of the ACMAT Bastion armored personnel carrier.

On January 11, Ukrainian President Volodymyr Zelenskyy met with Presidents Andrzej Duda of Poland and Gitanas Nauseda of Lithuania in Lviv. During the visit, Duda announced that Poland would deliver fourteen of the much-awaited German-made Leopard tanks, and Nauseda announced that his country would provide Ukraine with Zenit anti-aircraft systems.

Meanwhile, the largest manufacturer of containers for the transport of liquified natural gas has ceased operations in Russia. French engineering group Gaztransport & Technigaz (GTT) said it ended operations in Russia after reviewing the latest European sanctions package, which included a ban on engineering services for Russian firms. The group said its contract with Russian shipbuilding company Zvezda to supply fifteen icebreakers to transport liquefied natural gas was suspended effective January 8.

Valentin Châtelet, Research Associate, Brussels, Belgium

Ruslan Trad, Resident Fellow for Security Research, Sofia, Bulgaria

Ukrainian envoy to Georgia discusses deteriorating relations between nations

On January 9, Andrii Kasianov, the Ukrainian Chargé d’Affaires in Georgia, published an article discussing the deteriorating relationship between the two countries. The article stated that the top issues affecting relations were military aid to Ukraine, bilateral sanctions against Russia, visa policies for fleeing Russians, and the legal rights of Mikheil Saakashvili, the imprisoned third president of Georgia, who is also a Ukrainian citizen. 

Kasianov noted that Tbilisi declined Kyiv’s request for military help, specifically for BUK missile systems, which were given to Georgia by Ukraine during Russia’s 2008 invasion. The diplomat said that the weapons request also included Javelin anti-tank systems supplied to Georgia by the United States.  

“Despite the fact that the Georgian government categorically refused to provide military aid, Ukraine opposes the use of this issue in internal political disputes and rejects any accusations of attempts to draw Georgia into a war with the Russian Federation,” Kasianov said. 

Since the Russian invasion of Ukraine, the Georgian Dream-led government has accused Ukraine, the US, and the EU of attempting to drag Georgia into a war with Russia.  

Eto Buziashvili, Research Associate, Tbilisi, Georgia

The post Russian War Report: Russian hacker wanted by the FBI reportedly wins Wagner hackathon prize appeared first on Atlantic Council.

The West reaps multiple benefits from backing Ukraine against Russia https://www.atlanticcouncil.org/blogs/ukrainealert/the-west-reaps-multiple-benefits-from-backing-ukraine-against-russia/ Thu, 12 Jan 2023 16:43:23 +0000

Ukraine is often viewed as being heavily reliant on Western support, but the relationship is mutually beneficial and provides the West with enhanced security along with valuable intelligence, writes Taras Kuzio.

As it continues to fight against Russia’s ongoing invasion, Ukraine is often depicted as being heavily reliant on Western military and economic support. However, this relationship is not as one-sided as it might initially appear. Western backing has indeed been crucial in helping Ukraine defend itself, but the democratic world also reaps a wide range of benefits from supporting the Ukrainian war effort.

Critics of Western support for Ukraine tend to view this aid through a one-dimensional lens. They see only costs and risks while ignoring a number of obvious advantages.

The most important of these advantages are being won on the battlefield. In short, Ukraine is steadily destroying Russia’s military potential. This dramatically reduces the threat posed to NATO’s eastern flank. In time, it should allow the Western world to focus its attention on China.

During the initial period of his presidency, Joe Biden is believed to have felt that the US should “park Russia” in order to concentrate on the far more serious foreign policy challenge posed by Beijing. Ukraine’s military success is now helping to remove this dilemma.

Defeat in Ukraine would relegate Russia from the ranks of the world’s military superpowers and leave Moscow facing years of rebuilding before it could once again menace the wider region. Crucially, by supporting Ukraine, the West is able to dramatically reduce Russia’s military potential without committing any of its own troops or sustaining casualties.

Backing Ukraine today makes a lot more strategic sense than allowing Putin to advance and facing a significantly strengthened Russian military at a later date. As former US Secretary of State Condoleezza Rice and former US Secretary of Defense Robert M. Gates wrote recently in The Washington Post, “The way to avoid confrontation with Russia in the future is to help Ukraine push back the invader now. This is the lesson of history that should guide us, and it lends urgency to the actions that must be taken, before it is too late.”

If this lesson is ignored and Ukraine is defeated, Russia will almost certainly go further and attack NATO member countries such as the Baltic nations, Finland, or Poland. At that point, it will no longer be possible to avoid significant NATO casualties.  

The international response to Russia’s invasion of Ukraine has also reshaped the geopolitical landscape far from the battlefield. Since February 2022, it has reinvigorated the West as a political force.

The war has given NATO renewed purpose and brought about the further enlargement of the military alliance in Scandinavia with the recent membership applications of Sweden and Finland. The EU is also more united than ever and has now overcome a prolonged crisis of confidence brought about by the rise of populist nationalist movements.

In the energy sector, Putin’s genocidal invasion has finally forced a deeply reluctant Europe to confront its debilitating dependency on Russian oil and gas. This has greatly improved European security and robbed the Kremlin of its ability to blackmail Europe with weaponized energy exports. It now looks likely that the era of corrupt energy sector collaboration with the Kremlin is drawing to a close, in Europe at least.

Western support for Ukraine is bringing a variety of practical military gains. While Western partners provide Ukraine with vital battlefield intelligence, Ukraine returns the favor by offering equally valuable intelligence on the quality and effectiveness of Russian troops, military equipment, and tactics. The events of the past ten months have confirmed that pre-war perceptions of the Russian army were wildly inaccurate. Thanks to Ukraine’s unique experience and insights, Western military planners now have a far more credible picture of Moscow’s true military capabilities.

Ukraine’s MacGyver-like ability to adapt and deploy NATO weapons using Soviet-era platforms could prove extremely useful to the alliance in future conflicts. Ukrainian troops have proven quick to learn how to use Western weapons, often requiring far shorter training periods than those allocated to Western troops.

The innovative use of digital technologies by the Ukrainian military also offers invaluable lessons for their Western counterparts. Ukraine’s widespread deployment of Elon Musk’s Starlink system in front line locations is unprecedented in modern warfare and offers rare insights for all NATO countries.

Similarly, the war in Ukraine is highlighting the increasingly critical role of drone technologies. This is building on the experience of the Second Karabakh War in 2020, when Israeli and Turkish drones played an important part in Azerbaijan’s victory over Armenia.

Russia failed to invest sufficient resources into the development of military drones and has been forced to rely on relatively unsophisticated Iranian drones. In contrast, Ukraine enjoys a strong military partnership with Turkey that includes a deepening drone component. Turkey’s Bayraktar drones gained iconic status during the early stages of the Russian invasion. The company has since confirmed plans to build a manufacturing plant in Ukraine. 

In addition to these Turkish drones, Ukraine’s powerful volunteer movement and tech-savvy military have created a myriad of drone solutions to address the challenges of today’s battlefield. Ukraine’s rapidly evolving drone technologies are extremely interesting to Western military planners and will be studied in great detail for years to come.

Ever since the full-scale invasion of Ukraine began on February 24, 2022, Ukrainian forces have fused courageous fighting spirit with advanced intelligence and innovative use of battle management software. “Tenacity, will, and harnessing the latest technology give the Ukrainians a decisive advantage,” noted General Mark Milley, the current US Chairman of the Joint Chiefs of Staff.

The relationship between Ukraine and the country’s Western partners is very much a two-way street bringing significant benefits and strategic advantages to both sides. While Ukraine is receiving critical military and economic support, the Western world is benefiting from improved security along with important intelligence and unique battlefield experience. There is clearly a strong moral case for standing with Ukraine, but it is worth underlining that the strategic argument is equally convincing.

Taras Kuzio is professor of political science at the National University of Kyiv Mohyla Academy and author of the forthcoming “Russia’s War and Genocide Against Ukrainians.”

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

2023 DC Cyber 9/12 Strategy Challenge https://www.atlanticcouncil.org/content-series/cyber-9-12-project/2023-dc-cyber-9-12-strategy-challenge/ Wed, 04 Jan 2023 21:38:23 +0000 https://www.atlanticcouncil.org/?p=599092

The Atlantic Council’s Cyber Statecraft Initiative, in partnership with American University’s School of International Service and Washington College of Law, will hold the eleventh annual Cyber 9/12 Strategy Challenge both virtually and in-person in Washington, DC, on March 17-18, 2023. For the first time in the competition’s history, we will be hosting a hybrid event: teams are welcome to attend either virtually via Zoom or in-person at American University’s Washington College of Law. The agenda and format will look very similar to past Cyber 9/12 Challenges, and plenary sessions will be livestreamed via Zoom.

Held in partnership with:

Frequently Asked Questions: Virtual

How do I log in to the virtual sessions? 

Your team will be sent an invitation to your round’s Zoom meeting the day before the event, using the emails provided during registration.

How will I know where to log in, and where is the schedule? 

We will send out links to Zoom webinars and meetings, along with an agenda, the day before the event. 

How are the virtual sessions being run? 

Virtual sessions will be run very close to the traditional competition structure and rules. Each Zoom meeting will be managed by a timekeeper. This timekeeper will ensure that each team and judge logs on to the conference line and will manage the competition round.  

At the beginning of the round, decision documents will be shared by the timekeeper via Zoom and judges will have 2 minutes to review the documents prior to the competitors’ briefing.  

Teams will have 10 minutes to present their briefing and 10 minutes for Q&A. Judges will be asked to mute themselves for the 10-minute briefing session. 

Judges will then be invited to a digital breakout room and will have 5 minutes to discuss scores and fill out their scorecards via JotForm.  

After the scoring is over, judges will have 15 minutes to provide direct feedback to the team.  

A 10-minute break is scheduled before the start of the next round. Each round has been allotted several minutes of transition time for technical difficulties and troubleshooting. 

What do I need to log into a virtual session?  

Your team will need a computer (recommended), tablet, or smartphone with a webcam, microphone, and speaker or headphones. 

Your team will be provided with a link to the Zoom conference for each competition round your team is scheduled for. If you have any questions about the software, please see Zoom’s internal guide here. 

Will my team get scored the same way on Zoom as in-person? 

Yes, the rules of the competition remain the same, including the rubric for scoring. 

How will my team receive Intel Pack 2 and Pack 3? 

We will send out the intelligence packs via email to all qualifying teams. 

How will the final round be run? 

The final round will be run identically to the traditional final round format, except that the only participants allowed in the competition Zoom conference will be the final round judges and the assigned team.  

Finalists will not be able to watch the presentations of other teams in real time. Final rounds will be recorded and published on the Atlantic Council website after the final round ends. 

Frequently Asked Questions: In-person

Where will the event be held in-person? 

For participants attending in-person, the Cyber 9/12 Strategy Challenge will be held at American University’s Washington College of Law (WCL). On the evening of Friday, March 17, there will be a reception at the Atlantic Council offices downtown. Further information about the Atlantic Council offices can be found here.

What time will the event start and finish? 

While the schedule has yet to be finalized, participants will be expected at American University WCL at 8:00am on Day 1, and the competition will run until approximately 5:00pm, with an evening reception at approximately 6:30pm. Day 2 will commence at approximately 9:00am and will finish at approximately 4:00pm. The organizing team reserves the right to modify the above timing. The official schedule of events will be distributed to teams in advance of the event. All times are EST. 

Can teams who are eliminated on Day 1 still participate in Day 2 events? 

Yes! All teams are welcome at all of the side-programming events. We strongly encourage teams eliminated on Day 1 to attend the competition on Day 2.

Will meals be included for in-person attendees?

Yes, breakfast and lunch will be provided for all participants on both days. Light refreshments & finger foods will be provided at the evening reception on Day 1.

What should I pack/bring to a Cyber 9/12 event?

At the event: Please bring at least 4 printed copies of your decision documents for the judges on Day 1. We will help print documents on Day 2. Name tags will be provided to all participants, judges, and staff at registration on March 17. We ask you to wear these name tags throughout the duration of the competition. Name tags will be printed using the exact first and last name provided upon registration.

Dress Code: We recommend that students dress in at least business casual attire as teams will be conducting briefings. You can learn more about business casual attire here.

Electronic Devices: Cell phones and laptops may not be used during presentations, but we recommend teams bring their laptops, as they will need them to draft their decision documents for Day 2 and to conduct research. Please refer to the competition rules for additional information.

How do we get to American University?

American University is on the DC Metro Red line. Metro service from both Dulles International Airport (IAD) and Reagan National Airport (DCA) connect with the Metro Red Line at Metro Center. 

Zoom

What is Zoom? 

Zoom is a free video conferencing application. We will be using it to host the competition remotely. 

Do I need to pay for Zoom to participate? 

No.  

Do I need a Zoom account? 

You do not need a Zoom account, but we recommend creating one and downloading the desktop application to participate in the Cyber 9/12 Strategy Challenge. 

Please use your real name to register so we can track participation. A free Zoom account is all that is necessary to participate.  

What if I don’t have Zoom? 

Zoom is available for download online. You can also access Zoom conferences through a browser without downloading any software or registering.  

How do I use Zoom on my Mac? Windows? Linux Machine? 

Follow the instructions here and here to get started. Please sign up for Zoom using the same email you provided during registration.

Can I use Zoom on my mobile device? 

Yes, but we recommend that you use a computer or tablet. 

Can each member of my team call into the Zoom conference line independently for our competition round? 

Yes. 

Can other teams listen-in to my team’s session? 

Zoom links to competition sessions are team specific—only your team and your judges will have access to a session and sessions will be monitored once all participants have joined. If an observer has requested to watch your team‘s presentation, your timekeeper will notify you at the start of your round.

Staff will be monitoring all sessions and all meetings will have a waiting room enabled in order to monitor attendance. Any team member or coach in a session they are not assigned to will be removed and disqualified. 

Troubleshooting

What if my team loses internet connection or is disconnected during the competition? 

If your team experiences a loss of internet connection, we recommend following Zoom’s troubleshooting steps listed here. Please remain in contact with your timekeeper.

If your team is unable to rejoin the Zoom conference – please use one of the several dial-in lines included in the Zoom invitation.  

What if there is an audio echo or other audio feedback issue? 

There are three possible causes for audio malfunction during a meeting: 

  • A participant has both the computer and telephone audio active. 
  • A participant’s computer and telephone speakers are too close together.  
  • Multiple participant computers with active audio are in the same room.  

If this is the case, please disconnect the computer’s audio from other devices, and leave the Zoom conference on one computer. To avoid audio feedback issues, we recommend each team use one computer to compete. 

What if I am unable to use a video conference, can my team still participate? 

Zoom has dial-in lines associated with each Zoom conference event and you are able to call directly using any landline or mobile phone. 

We do not recommend using voice-only lines unless absolutely necessary.

Other

Will there be keynotes or any networking activity remotely? 

Keynotes will continue as reflected on our agenda and will be broadcast with links to be shared with competitors the day before the event.  

We also encourage competitors and judges to join the Cyber 9/12 Strategy Challenge Alumni Group on LinkedIn where we will post job vacancies and internship opportunities. 

How should I prepare for a Cyber 9/12?

Check out our preparation materials, which includes past scenarios, playbooks including award-winning policy recommendations and a starter pack for teams that includes templates for requesting coaching support or funding.

Cyber Statecraft Initiative

The 5×5—The cyber year in review https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-cyber-year-in-review/ Wed, 14 Dec 2022 05:01:00 +0000 https://www.atlanticcouncil.org/?p=594701

A group of experts reviews the highs and lows of the year in cybersecurity and looks forward to 2023.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

One year ago, the global cybersecurity community looked back at 2021 as the year of ransomware, as the number of attacks nearly doubled over the previous year and involved high-profile targets such as the Colonial Pipeline—bringing media and policy attention to the issue. Now, a year later, the surge of ransomware has not slowed, with the number of attacks hitting yet another record high—an 80 percent increase over 2021—despite initiatives from the White House and the Cybersecurity and Infrastructure Security Agency (CISA). The persistence of ransomware attacks shows that the challenge will not be solved by one government alone, but through cooperation with friends, competitors, and adversaries. 

Russia’s full-scale invasion of Ukraine, the landmark development of 2022, indicates that this challenge will likely remain unsolved for a while. Roughly three-quarters of all ransomware revenue makes its way back to Russia-linked hacking groups, and cooperation with the Kremlin on countering these groups is unlikely to yield much progress anytime soon. Revelations in the aftermath of Russia’s invasion confirmed suspicions that Russian intelligence services not only tolerate ransomware groups but give some of them direct orders. 

Ransomware was not the only cyber issue to define 2022, as other challenges continued, from operational technology to workforce development, and various public and private-sector organizations made notable progress in confronting them. We brought together a group of experts to review the highs and lows of the year in cybersecurity and look forward to 2023. 

#1 What organization, public or private, had the greatest impact on cybersecurity in 2022?

Rep. Jim Langevin, US Representative (D-RI); former commissioner, Cyberspace Solarium Commission

“I think we have really seen the Joint Cyber Defense Collaborative (JCDC) come into its own this year. We saw CISA, through JCDC, lead impressive and coordinated cyber defense efforts in response to some of the most critical cyber emergencies the Nation faced in 2022, including the Log4Shell vulnerability and the heightened threat of Russian cyberattacks after its invasion of Ukraine.” 

Wendy Nather, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; head of advisory CISOs, Cisco

“I would argue that Twitter has had the most impact on cybersecurity. As a global nexus for public discourse, security research, threat intelligence sharing, media resources, and more, its recent implosion has disrupted essential communications and driven many cybersecurity stakeholders to seek connectivity elsewhere. We will probably continue to see the effects of this disruption well into 2023 and possibly beyond.” 

Sarah Powazek, program director, Public Interest Cybersecurity, UC Berkeley Center for Long-Term Cybersecurity

“CISA. The cross-sector performance goals and the sector-specific 100-Day Cyber Review Sprints this year are paving the way for a more complete understanding and encouragement of cybersecurity maturity in different industries. It is finally starting to feel like we have a federal home for nationwide cybersecurity defense.” 

Megan Samford, nonresident senior fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; vice president and chief product security officer for energy management, Schneider Electric

“I think all of us feel that it has to be the warfighting efforts that are going on in the background of the Ukraine war—these are the ‘known unknown’ efforts. If we take that off the table though, I would say it is not an organization at all, it is a standard (IEC 62443). As boring as it is to say that standards work, right now industry most needs time for the standards to be adopted to reach a minimum baseline. If we fail to achieve standardization, we will see regulation—both achieve the same things at different paces with different tradeoffs.” 

Gavin Wilde, senior fellow, Technology and International Affairs Program, Carnegie Endowment for International Peace

“The State Special Communications Service of Ukraine (SSSCIP), which has deftly defended and mitigated against Russian cyberattacks throughout Moscow’s war. SSSCIP’s ability to juggle those demands while coordinating and communicating with a vast array of state and commercial partners has improved the landscape for everyone.”

#2 What was the most impactful cyber policy or initiative of 2022? 

Langevin: “The Cyber Incident Reporting for Critical Infrastructure Act, or CIRCIA. Its impact lies not only in its effect—which will dramatically improve the federal government’s visibility of cyber threats to critical infrastructure—but also in the example it has set for how Congress, the executive branch, and the private sector can effectively work together to craft major legislation that will make the country fundamentally safer in cyberspace.” 

Nather: “I have to call out CISA’s election security support at this crucial point in our Nation’s fragile and chaotic state. It continues to provide excellent information and resources—particularly the wonderfully named “What to Expect When You are Expecting an Election” and video training to help election workers protect themselves and the democratic process. Reaching out directly to stakeholders and citizens with the education they need is every bit as important as the ‘public-private partnership’ efforts that most citizens never encounter.” 

Powazek: “CISA’s State and Local Cybersecurity Grant Program and Tribal Cybersecurity Grant Program. The programs will dole out $1 billion in cyber funding to state, local, tribal, and territorial governments over four years, with at least 25 percent of those funds earmarked for rural areas. If that money is invested well, it will be an incredible boon to critical public agencies struggling to improve their cybersecurity maturity, and it can better protect millions of people.” 

Samford: “Software bill of materials (SBOM), but not for the reasons people may think. SBOM is a very useful tool in managing risk, provided that organizations already have good asset inventory capability. In operational technology, asset inventory is an area that asset owners continue to struggle with, so the benefit from SBOM is more of a long-term journey. That is why I say SBOM, but not for the reasons people think. In my mind what I think was most impressive around SBOM was that it demonstrated that the industry can successfully rally and rapidly standardize around very specific asks. SBOM came together because it had three things: 1) common industry understanding of the problem; 2) existing tooling that, for the most part, did not require new training; and 3) government policy and right-sized program management.” 

Wilde: “The European Union’s proposed Cyber Resilience Act, which is poised to update and harmonize the regulatory environment across twenty-seven member states and set benchmarks for product and software security—particularly as both cybercrime and Internet-of-Things applications continue to proliferate. The proposals offer a stark contrast between a forward-looking regulatory regime, and a crisis-driven reporting and mitigation one.”

#3 What is the most important yet under-covered cyber incident of 2022?

Langevin: “I think it is worth reminding ourselves just how serious the ransomware attacks were that crippled the Costa Rican government this year. This was covered in the news, but from a policy perspective, I think it warrants a deeper conversation about what the United States can be doing on the international stage to double down on capacity-building and incident response efforts with allies, particularly those more vulnerable to such debilitating attacks. Part of that conversation needs to include a commitment to ensuring that our government actors, like the State Department’s Bureau of Cyberspace and Digital Policy, have the appropriate resources and authorities to effectively provide that assistance.”

Nather: “The Twilio breach (although Wired did a good job covering it). It is important because although SMS is a somewhat-reviled part of our security infrastructure, it is utterly necessary, and will continue to be long into the future.”

Powazek: “The Los Angeles Unified School District (LAUSD) ransomware attack by Vice Society was highly covered in the news, but I think the implications are resounding. LAUSD leaders refused to pay the ransom, maintained transparency with students and parents, and were able to move forward with minimal downtime. It was a masterclass in incident management, and I was thrilled to see a public institution take a stand against ransomware actors and recover quickly.”

Samford: “Uber’s chief information security officer (CISO) going to jail. This has turned the industry on its head and forced people to challenge what it means to be an executive in this industry and make decisions that can land you—not the chief executive officer or chief legal counsel—in jail. What is the compensation structure for this amount of risk taking? I have heard of CISOs being called the ‘chief look around the corner officer’ or the ‘chief translation officer,’ but now has the CISO become the ‘chief scapegoat officer’?”

Wilde: “The US Department of Justice’s use of ‘search and seizure’ authority (Rule 41 of the federal criminal code) to neutralize a botnet orchestrated by the Russian GRU. So many fascinating elements of this story—including the legal and technical implications of the operation, as well as the cultural shift at DOJ—seem to have gone underexamined. Move over, NSPM-13…”


#4 What cybersecurity issue went unaddressed in 2022 but deserves greater attention in 2023?

Langevin: “I am hopeful that this answer proves to be wrong before the end of the year, but right now, it is the lack of a fiscal year (FY) 2023 budget. The federal government has a wide array of new cybersecurity obligations stemming from recent legislation and Biden administration policy, but agencies will struggle to fulfill these responsibilities if Congress does not provide appropriate funding for them to do so. Keeping the government at FY22 funding levels simply is not good enough; if we want to see real progress, we need to pass a budget.” 

Nather: “One trend I see is that there is almost no check on technological complexity, which is the nemesis of security. Simply slapping another ‘pane of glass’ on top of the muddled heap is not a long-term solution. I believe we will see more efforts to consolidate underlying infrastructure for many reasons, among them cost and ease of administration, but cybersecurity will be one of the loudest stakeholders.” 

Powazek: “The United States still does not have a scalable solution for providing proactive cyber assessments to folks who cannot afford to hire a consulting firm. There are lots of toolkits available, but some organizations do not even have the staff or time to consume them, and there is no substitute for face-to-face assistance. We could use more solutions like cybersecurity clinics and regional cyber advisors that address this market failure and help organizations increase resiliency to cyberattacks.” 

Samford: “Coordinated incident response as well as whistleblower protection. If you want safety-level protections in cybersecurity, you need safety-level whistleblower protections. In the culture of safety, based on decades of culture development and nurturing, whistleblowing is a key enabler. It is based on a basic truth that anyone in an organization can ‘stop the line’ if they see unsafe behavior. In cyber, we lack ‘stop the line’ power and, in many cases, individuals fail to report risk because of fear of attribution and retaliation. That is why, in my mind, the topic of whether or not whistleblower protection should become a cyber norm remains something that has gotten little attention but it is a critical decision point in how the cyber community wants to move forward. Will we have more of a tech-based culture or a safety-based culture?  

“As far as coordinated incident response, we estimate that upward of 80 percent of the cyber defense capacity resides in the private sector, yet very few mechanisms exist to coordinate these resources alongside a government-led response. We have not yet figured out how to tap that pool of resources, and I fear that we are going to have to learn quickly one day should attacks occur that require rapid and consistent response coordination, such as a targeted cyberattack campaign linked with physical impact on critical infrastructure. Using the Incident Command System could solve for this, and the ICS4ICS program is picking up this challenge.”

Wilde: “Privacy and data protection. The ‘Wild West’ of data brokerages and opaque harvesting schemes that enables illicit targeting and exploitation of vulnerable groups poses as much a threat to national security as any foreign-owned applications or state intelligence agencies.”

#5 What do the results of the 2022 midterm elections in the United States portend for cybersecurity legislation in the 118th Congress?

Langevin: “The cybersecurity needs of the country are too great for Congress to get bogged down in partisan fighting, and I think there are bipartisan groups of lawmakers in both chambers who understand that. There may be philosophical differences on certain issues that are more pronounced in a divided Congress, but I expect that we will still see room for effective policymaking to improve the Nation’s cybersecurity. The key to progress, as it would have been no matter who controlled Congress, will be continuing to build Members’ policy capacity on these issues, lending a broader base of political support to those Members who understand the issues and can lead the charge on legislation.”

Nather: “Some of the centrist leaders from both parties who led on cybersecurity, such as John Katko (R-NY) and Jim Langevin (D-RI), are retiring. And Will Hurd (R-TX), who held a similar role—working across the aisle on cybersecurity issues—in the previous Congress, is gone. As the work on cybersecurity legislation has historically stayed largely above the political fray, it will be interesting to see who steps up to build consensus on this critical topic.”

Powazek: “The retirement of policy powerhouses Rep. John Katko and Rep. Jim Langevin leaves an opening for more cyber leadership, and the recent elections are our first glimpse of who those leaders may be. As a Californian, I am particularly excited about Rep. Ted Lieu and Senator Alex Padilla, both of whom are poised for cyber policy leadership.”

Samford: “More focus on zero trust, supply chain, and security of build environments. These are efforts that all have bipartisan support and engagement.”

Wilde: “The retirement of several of the most driven and conversant members does not bode well for major cybersecurity initiatives in Congress next session. Diminished expertise is not only a hurdle from a substantive perspective, but it also makes it difficult to avoid cyber issues falling victim to other political and legislative agendas from key committees.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—The cyber year in review appeared first on Atlantic Council.

Wargaming to find a safe port in a cyber storm https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/wargaming-to-find-a-safe-port-in-a-cyber-storm/ Mon, 12 Dec 2022 15:00:00 +0000 https://www.atlanticcouncil.org/?p=593535 With the Maritime Transportation System increasingly reliant on cyberspace, how can cybersecurity be improved within key nodes of this critical infrastructure, particularly cargo ports?


Executive summary

With the Maritime Transportation System increasingly reliant on cyberspace, how can cybersecurity be improved within key nodes of this critical infrastructure, particularly cargo ports? Given the close relationship between the cyber and maritime domains, wargaming provides a useful tool for examining the potential threats and opportunities. This includes the attack surfaces, prioritization challenges, and coordination advantages highlighted by the new maritime cyber wargame Hacking Boundary.

Introduction

Critical infrastructure is rarely headline news—not until something goes very wrong—and the maritime transportation system (MTS) is no exception. The MTS, which is responsible for the safe transport of the majority of international trade, is vital to the global economy.1 From backlogged cargo at port facilities during the COVID-19 pandemic to the Ever Given container ship blocking the Suez Canal, recent events have highlighted the vulnerability of maritime transportation, and how impactful disruptions to that system can be to everyday life.2

Broadly speaking, the MTS consists of all the waterways, vehicles, and ports that are used to move people and goods via water.3 The volume of goods moved in this way is particularly striking, with most of the world’s cargo carried by sea—between 70 and 90 percent, depending on how the cargo is counted. For the United States, the MTS contributes to nearly 25 percent of gross domestic product, totaling around $5.4 trillion.4 It is also essential to the US ability to project military power. Today, as for the past century, sealift—the use of cargo ships to deploy military assets—is responsible for transporting the vast majority of US military matériel around the world.5

Unfortunately, this critical infrastructure is under threat. Along with natural disasters and human errors, cyberattacks are increasingly threatening the MTS. In 2017, a destructive and rapidly propagating piece of malware known as NotPetya spread from Ukraine around the world.6 One of the many NotPetya victims was Maersk, the world’s largest shipping company. This single cyber incident cost the shipping giant approximately $300 million,7 and the price would have been much higher, were it not for a single uninfected server in Ghana. During another cyber incident just last year, foreign government-backed hackers were suspected of breaching information systems at the Port of Houston, further demonstrating that maritime transportation is firmly in the crosshairs.8 NotPetya, the Port of Houston, and other cyberattacks against various kinds of critical infrastructure—including the ransomware attack on Colonial Pipeline in 2021—provide an ominous glimpse into the threat environment.

Learning through gaming

Global and national security depend on understanding and mitigating threats to the MTS. The US government has taken some steps in this direction, including the National Maritime Cybersecurity Plan released in December 2020. More needs to be done, however, and one approach is cyber wargaming, a useful tool for examining the complex and confusing problems posed by cyber and physical threats to critical infrastructure.

Working with Ed McGrady, the Cyber & Innovation Policy Institute (CIPI) at the US Naval War College in Newport, Rhode Island, hosted government officials, military service members, students, and academics to play Hacking Boundary: A Game of Maritime Cyber Operations.9 This wargame addresses a hypothetical cyberattack against a major US port facility, and the first iteration of the game was played at the CIPI Summer Workshop on Maritime Cybersecurity in June 2022.

The game was developed and run by Ed McGrady at the Center for a New American Security.

The second iteration of the game, conducted in partnership with the Atlantic Council’s Cyber Statecraft Initiative, was held at the Industrial Control Systems Village at the DefCon Hacking Conference in August 2022 in Las Vegas, Nevada. This iteration featured participants from across the maritime ecosystem, including active duty US Navy and Coast Guard personnel, penetration testers, private sector operators, and many more.

This brief describes Hacking Boundary, along with several strategic and policy implications illuminated by repeated game play. The core takeaways include: (1) understanding the large attack surfaces of port facilities and the lead times that may be required to attack them; (2) the difficulties of prioritizing how and when to spend scarce resources; and (3) understanding that the tensions between competition and coordination, if navigated wisely, may offer defenders marginal—but valuable—advantages when providing maritime cybersecurity.

Scenario and players

Imagine a major US port facility, modeled on the Port Elizabeth Intermodal Complex of New York and New Jersey, in the year 2027. The facility includes a terminal along the water. Within the terminal are the yard, gantry cranes, cargo containers, scales, semitrucks, inspection sites, gates, and administrative offices needed to load, offload, process, and move 1.8 million twenty-foot equivalent units (TEUs) annually, or approximately 43 million tonnes of cargo. Connecting all of this equipment, and the people operating it, are local area networks, Wi-Fi, radios, phones, and wires, forming a complex web of near constant communication.

Picture from game play in Las Vegas.

When an ultra large container ship carrying 21,000 TEUs enters port, all of this information and operational technology is put to work. Positioning systems and radio communication with pilot ships help steer the container ship into a berth; cargo data files are digitally sent to the port authority; local security contractors screen the cargo; and access control handles the hundreds of trucks required to move the cargo. Work that was once handled by thousands of people is now performed by computers, scanners, remote closed-circuit television cameras, and routers working both autonomously and with human support. Underpinned by cyberspace, this daily routine unfolds at a massive scale and pace.

During the wargame, teams of defenders and attackers face off in this cyber-physical environment. On the defending team, the maritime shipping industry is represented by a fictitious private firm called Worldwide Logistics Operations (WLO), which leases the container terminal. WLO runs the information technology (IT) infrastructure for the terminal. It also cooperates with local authorities and the federal government, played by another team of defenders. The attackers are broken into four groups, each representing different kinds of advanced persistent threats (APTs) with their own background, expertise, and modus operandi. These attackers range from independent cyber criminals to mercenaries to groups with ties to foreign intelligence organizations. Overseeing the contest between the attackers and the defenders is a game master, who helps construct and control the game narrative and, in the process, judges the outcome of each team’s moves.

Game play

This game is played over multiple turns, with each turn representing a month in the real world. At the start of each turn, the attacking and defending teams both draw random event cards. Possible events range from good news (e.g., receiving an unexpectedly large budget) to bad news (e.g., a power outage or having members of your team poached by the competition). These events are intended to represent some of the unpredictable realities faced by both parties in the real world. With a random event card in hand, each team plans their course of action.

The defending team’s objective is to prevent port terminal intrusions and establish resilient systems that fail gracefully, minimizing potential disruption or damage. Given a limited budget, represented in the game as coins, the team must make choices that involve difficult trade-offs. For example, defenders could prioritize security training and upgraded hardware but, as a consequence, they may have insufficient resources to conduct penetration testing to identify other potential vulnerabilities. Or, they could choose to conduct penetration testing, but then lack resources to fix the vulnerabilities they find. It is also important to safeguard port facility physical security against theft and illicit access to critical systems. The networked nature of cyber and physical systems means that neglecting one could expose the other to risk.

The objective of the attacking teams is to secure a profit at the expense of the port and the WLO. Attackers start the game with a set budget. They can earn additional coins by completing missions ranging from exfiltrating data to causing physical damage. To complete a mission successfully, an attacking team must allocate limited resources to hiring the right people for the job, which includes technical experts to defeat defensive measures. For simplicity, the categories of expertise in this game are: social, physical, network, malware, operating system, applications, electronics, and cryptography. Attackers must also acquire the capabilities needed to accomplish their mission, such as tailored malware or radio-frequency identification scanners. This wargame emphasizes the full breadth of the cyber kill chain, including preplanning and lateral movement over time.10 Attackers may also take cyber actions that do not have immediate effects, laying the foundation for success later in the game.
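The hiring constraint described above can be sketched as a simple coverage check: a mission is only attemptable if the hired specialists span every expertise category it requires. Only the eight expertise categories come from the game itself; the example mission requirements below are invented for illustration.

```python
# Illustrative sketch of the attackers' hiring constraint. The eight
# expertise categories are taken from the game; the sample missions
# and the coverage rule's details are assumptions for this sketch.
EXPERTISE = {"social", "physical", "network", "malware",
             "operating system", "applications", "electronics",
             "cryptography"}

def can_attempt(mission_requirements: set[str], hired: set[str]) -> bool:
    """A mission is attemptable only if every required expertise
    category is covered by at least one hired specialist."""
    assert mission_requirements <= EXPERTISE, "unknown expertise category"
    return mission_requirements <= hired

# Hypothetical example: exfiltrating cargo data might require network
# access plus a social-engineering foothold.
print(can_attempt({"network", "social"}, {"network", "social", "malware"}))  # True
print(can_attempt({"cryptography", "physical"}, {"network", "social"}))      # False
```

This kind of set-coverage check mirrors the trade-off players face: every coin spent on a specialist is a coin unavailable for capabilities like tailored malware.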

The respective plans of attackers and defenders—and the logic behind them—interact via the game master, who determines the likelihood of success or failure. Outcomes are determined through discussion, with each team arguing their case about defensive measures taken at the port terminal, the complexity of the attack, and the personnel and capabilities dedicated to the job. This part of the game is where the collective expertise of each team really shines. Based on these discussions, the game master assesses the probability of an attack succeeding.

Chance is incorporated by rolling dice. For example, an attack with a 50 percent probability of success means that the attacking team must roll an eleven or higher on a twenty-sided die. More difficult attacks require a higher roll to succeed; easier attacks can succeed with a lower roll. The dice rolls determine if the attacker successfully completes all or part of their chosen mission.

Successful missions pay off in coins, building a unique narrative for the game. However, there is also the risk of discovery, modeled in the game as another roll of the dice by the team for “forensic points.” Depending on the complexity of the move, attacking teams incur higher or lower forensic points. Too much bravado or sloppy tradecraft risks teams being discovered by defenders and having all of their coins seized by the authorities. As is sometimes the case in real life, a bit of bad luck can mean the difference between striking it rich or losing it all.
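The die-roll adjudication above maps the game master's assessed probability onto a twenty-sided die: a 50 percent chance means rolling eleven or higher. A minimal sketch of that mapping follows; the clamping and rounding to whole die faces are assumptions of this sketch, not published rules.

```python
import random

def roll_needed(p_success: float) -> int:
    """Minimum d20 roll required for a move the game master judges to
    have probability p_success. A 50 percent chance corresponds to
    rolling 11 or higher (10 of the 20 faces succeed). Clamping to at
    least one winning face is an assumption of this sketch."""
    winning_faces = max(1, min(20, round(p_success * 20)))
    return 21 - winning_faces

def resolve_move(p_success: float, rng=random) -> bool:
    """Roll a twenty-sided die against the required threshold."""
    return rng.randint(1, 20) >= roll_needed(p_success)
```

Under this mapping, lower-probability attacks push the required roll toward twenty, mirroring how the game master handicaps more ambitious moves.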

When the success, failure, and payoff of all the teams’ actions have been decided, the next turn of the game begins with another round of event cards, planning, and outcome adjudication. Typically, each turn takes about an hour. There is no constraint on how many turns can be played, with the consideration that higher-stakes missions take longer to accomplish. Whenever the game ends, a victor is determined just for fun. Defender success is measured by the number of attacks successfully repelled relative to attempted intrusions into the port or related networks. For attackers, success depends on the number of missions accomplished, their coin haul, and not getting caught.

Game takeaways from Newport and Las Vegas

Observations from only a few iterations of this game, with different players, do not constitute authoritative evidence. Even so, preliminary takeaways contain potentially important insights for maritime cyber and broader cybersecurity challenges facing critical infrastructure.

Attack surfaces and lead times

Large and varied attack surfaces challenge defenders and provide attackers with numerous opportunities for exploitation. This wargame only captured some of the complexities of real-world maritime infrastructure. Nevertheless, it illustrated the importance of interrelationships and dependencies in a cyber-physical system. Subject matter experts who played the game showed how hypothetical attackers might probe several points of entry that intersected with even this simplified version of a cargo port. The attempted exploits were both physical (e.g., breaking and entering or conducting reconnaissance at a local pub frequented by port security) and cyber (e.g., phishing, injecting malware via flash drives, or hacking shipboard systems using a Raspberry Pi). The various attack options illustrate the myriad vulnerabilities of these complex facilities.

Put another way, no port is an island. Accidents and attacks outside the facility, such as disrupting a pump station or a nearby rail line, could still impact maritime operations by, for example, paralyzing road traffic around the cargo terminal. These interdependencies highlight the need to broaden the conceptual and operational boundaries of maritime cybersecurity as currently and traditionally conceived. In the wargame, defenders overlooked these external relationships, to their detriment.

While the multitude of attack options seemed to afford the attackers endless choices, carrying out the attacks in this complicated environment took time. Successful attacks often required long lead times for planning and execution. In the game, as in real life, the cyber kill chain had multiple links spread out over time and, in some cases, over physical space. For example, some attacking teams probed physical security at the port early on, in an attempt to gather useful intelligence. Later, they exfiltrated data through lateral moves within the target network, exploiting access gained through phishing.

Both the large attack surfaces and the long lead times reaffirmed a well-known argument in cybersecurity that nevertheless bears repeating: defending a network is a lengthy and dynamic process, composed of many different steps. Several attacks crossed multiple systems, spanning three or four moves in the game before a full picture of the offensive operation became apparent. The dramatic image of hackers running a rogue ship aground distracts from much of the preparatory, and seemingly mundane, work that would go into such an attack (e.g., orchestrating a phishing campaign against the cleaning company subcontracted to service the port bathrooms).

Key Takeaway

  • Maritime infrastructure consists of complex systems, which provide numerous opportunities for exploitation but also complicated kill chains.

Prioritization and resilience

The sheer number and variety of vulnerabilities to exploit and defend during game play posed serious challenges for players in choosing how to allocate their scarce resources. Effective prioritization was a deciding factor for both attackers and defenders.

For their part, attackers had to invest in capabilities and staffing to effectively penetrate target systems and accomplish mission objectives. Missteps or bad luck could result in a failed mission, setting attackers back in terms of time and money. For defenders, early investments to bolster security tended to have a large impact on their ability to thwart attacks later in the game. Defenders also needed to retain resources—and acquire skills—to dynamically (re)allocate defensive capabilities and capacities, which were then distributed across physical and network infrastructure, as well as across shipboard and terminal information systems. With limited resources at their disposal, poorly chosen priorities or bad luck could leave defenders struggling to respond to even basic incidents. Lack of defensive planning, or a purely reactive posture, provided attackers with dangerous freedom of movement.

Here again, the wargame only captured some of the real-life complexity, underscoring the very real challenge and necessity of prioritization. While critical infrastructure is, by definition, “critical,” some systems within it are more important than others, and some problems are easier to solve. Prioritizing investments where ease and importance overlap may seem obvious, but many of the tradeoffs are acute, presenting hard choices. As will be discussed, these choices are easier when public agencies and private firms share useful cyber intelligence. Each party may make different decisions about how to prioritize and allocate their respective resources, but both stand to benefit from pooling information about the threat environment.  

Making the right investments and allocating the proper resources to defense is only half the battle. When attacked, organizations also need resilience, namely the “ability to adapt to changing conditions and prepare for, withstand, and rapidly recover from disruption.”11 In this game, as in real life, no defense was perfect: financial data leaked; ransomware jumped from contractor to vendor; and even positioning and navigation systems were compromised. Adapting and responding to unfortunate incidents is difficult, but necessary for minimizing disruptions to the most important MTS administrative and operational functions.

There is little doubt that bolstering the resilience of maritime cybersecurity will remain a challenge. Best practices and high standards can help, such as the US Coast Guard’s Navigation and Vessel Inspection Circular 01-20 and the International Maritime Organization’s guidance on cybersecurity.12 Since so many different operators and information systems intersect at port facilities, best practices within and across sectors are important for forging strong links between the diverse entities involved. By providing a platform for practical learning, wargames can help individuals and organizations synthesize risk, identify priorities, build resilience, and highlight the significant—but often unappreciated—role that these various relationships can play in cybersecurity.

Key Takeaway

  • The range of cyber-physical vulnerabilities in the MTS means that prioritization and resilience are core challenges when allocating scarce resources.

Competition and coordination

Competition and coordination were recurring themes in this wargame, with significant policy implications. Attackers not only competed against defenders, but also against each other. Competition over scarce resources, access points, and cyber exploits fueled tension among the APTs. In addition, some attacking teams were hurt by the actions, misfortunes, or errors of other team members. Attackers were both beneficiaries and potential victims of the difficulties of attribution in cyberspace, as some enterprising attackers tried to disguise their tracks by imitating others in false flag operations.

Instances of attacking teams directly targeting one another—as opposed to defenders—broke the binary concept of purely offensive and defensive roles in the game. These dynamics mirrored real life, helping dispel the notion that offensive and defensive moves in cyberspace inevitably aggregate to the attacker’s advantage. Different attackers have different motives. While a criminal enterprise may hack a port to steal cargo information to sell for financial gain, a state or hybrid actor may attempt to cripple port automation for political reasons. These different, and sometimes competing, objectives limit attackers’ incentives to cooperate with each other, let alone coordinate their actions. Leaked chat records from the Conti ransomware group highlight this discord inside real attacking teams, with interpersonal squabbles compounding conflicts between different APTs.13

Defenders suffered from conflicts of interest as well. The private firms that own and operate port facilities may not have the same incentives as government agencies to share information, especially if doing so invites scrutiny by regulators or law enforcement. These defenders also compete with each other for scarce cybersecurity talent and other resources.

While competition and conflict were evident among both defenders and attackers, Hacking Boundary indicates that defenders enjoy some advantages when it comes to institutionalizing cooperation, including a higher baseline level of trust. Honor among thieves may be harder to come by than even begrudging coordination between industry and government. Although defenders in the government, WLO, and terminal IT security teams had different incentives and threat perceptions, many still found ways to share information and coordinate action. On balance, this coordination gave defenders an edge in the game. Successful defenders established lines of communication sooner rather than later.

Real-world coordination between maritime owners, operators, and government agencies is easier said than done. Nevertheless, the potential payoff is considerable and physical proximity may help. Anecdotal evidence from our wargame suggests that players in the roles of port operators and government representatives conversed more when seated together. Perhaps it is no coincidence that communication between similar organizations in the real world correlates to a significant federal presence—Coast Guard headquarters, Department of Homeland Security regional centers, Federal Bureau of Investigation field offices, and the like—close to port facilities. Cybersecurity is social as well as technical, and face-to-face interaction can make a difference. However these relationships develop, the policies that build them before the next major cyber incident could prove to be invaluable.

Key Takeaway

  • Real-world coordination, whether among attackers or defenders, is a key dynamic in any cyber operation, and is easier said than done.

Conclusion

Cyber wargaming has demonstrated the potential to demystify and clarify threats and opportunities involving critical maritime infrastructure. The game Hacking Boundary engages players with a challenging but realistic scenario that reflects some of the serious risks facing the companies, crews, and government authorities operating port facilities around the country and around the world. The large attack surfaces, the importance of prioritization, and the implications of competition and coordination reinforce many well-established cybersecurity ideas. The relationship of these lessons to the maritime domain warrants further exploration.

The intersection between the maritime and cyber environments will likely grow in the years ahead. How these relationships and dependencies are conceptualized will likely determine our success or failure in protecting the MTS. The same goes for improving systemic resilience, including transportation by road, rail, and air, all of which increasingly rely on automation and networked information technology. Further iterations of this wargame and similar exercises stand to help by encouraging practitioners, academics, corporate executives, and government officials to think through potential threats and responses in order to secure these kinds of critical infrastructure.

About the authors

Daniel Grobarcik is a research associate with the Cyber & Innovation Policy Institute at the U.S. Naval War College.

William Loomis is an associate director at the Atlantic Council’s Cyber Statecraft Initiative, within the Digital Forensic Research Lab.

Dr. Michael Poznansky is an associate professor with the Cyber & Innovation Policy Institute at the U.S. Naval War College.

Dr. Frank Smith is a professor and director of the Cyber & Innovation Policy Institute at the U.S. Naval War College.

The ideas expressed here do not represent the US Naval War College, US Navy, Department of Defense, or US Government.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1     William Loomis et al., Raising the Colors: Signaling for Cooperation on Maritime Cybersecurity, October 4, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/raising-the-colors-signaling-for-cooperation-on-maritime-cybersecurity/.
2     US Library of Congress, Congressional Research Service, Supply Chain Bottlenecks at US Ports, by John Frittelli and Liana Wong, IN11800 (2021), https://crsreports.congress.gov/product/pdf/IN/IN11800; Marc Jones, “Snarled-Up Ports Point to Worsening Global Supply Chain Woes – Report,” Reuters, May 3, 2022, https://www.reuters.com/business/snarled-up-ports-point-worsening-global-supply-chain-woes-report-2022-05-03/; Vivian Yee and James Glanz, “How One of the World’s Biggest Ships Jammed the Suez Canal,” New York Times, July 17, 2021, https://www.nytimes.com/2021/07/17/world/middleeast/suez-canal-stuck-ship-ever-given.html.
3    US Department of Transportation, Maritime Administration, “Maritime Transportation System (MTS),” last updated January 8, 2021, https://www.maritime.dot.gov/outreach/maritime-transportation-system-mts/maritime-transportation-system-mts.
4    William Loomis et al., Introduction: Cooperation on Maritime Cybersecurity, October 27, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/cooperation-on-maritime-cybersecurity-introduction/.
5    Jason Ileto, “Cyber at Sea: Protecting Strategic Sealift in the Age of Strategic Competition,” Modern War Institute, May 10, 2022, https://mwi.usma.edu/cyber-at-sea-protecting-strategic-sealift-in-the-age-of-strategic-competition/; See also https://www.maritime.dot.gov/national-security/strategic-sealift/strategic-sealift/.
6    Andy Greenberg, “The Untold Story of NotPetya, the Most Devastating Cyberattack in History,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
7    Nina Kollars, Sam J. Tangredi, and Chris C. Demchak, “The Cyber Maritime Environment: A Shared Critical Infrastructure and Trump’s Maritime Cyber Security Plan,” War on the Rocks, February 4, 2021, https://warontherocks.com/2021/02/the-cyber-maritime-environment-a-shared-critical-infrastructure-and-trumps-maritime-cyber-security-plan/.
8     Olafimihan Oshin, “Major US Port Target of Attempted Cyber Attack,” The Hill, September 24, 2021, https://thehill.com/homenews/state-watch/573749-major-us-port-target-of-attempted-cyber-attack/.
9    The game was developed and run by Ed McGrady at the Center for a New American Security.
10    US Department of Homeland Security, Management Directorate, OCRSO, Sustainability and Environmental Programs, Providing a roadmap for the Department in Operational Resilience and Readiness, July 2018, https://www.dhs.gov/sites/default/files/publications/dhs_resilience_framework_july_2018_508.pdf.
11    Lockheed Martin, “Cyber Kill Chain,” June 29, 2022, https://www.lockheedmartin.com/en-us/capabilities/cyber/cyber-kill-chain.html.
12    US Department of Homeland Security, United States Coast Guard, Navigation and Vessel Inspection Circular No. 01-20 (Washington, DC, 2002), Commandant Publication P16700.4, https://www.dco.uscg.mil/Portals/9/DCO%20Documents/5p/5ps/NVIC/2020/NVIC_01-20_CyberRisk_dtd_2020-02-26.pdf?ver=2020-03-19-071814-023; International Maritime Organization, “Guidelines on Maritime Cyber Risk Management,” MSC-FAL.1/Circ.3/Rev.1, June 14, 2021, https://wwwcdn.imo.org/localresources/en/OurWork/Security/Documents/MSC-FAL.1-Circ.3%20-%20Guidelines%20On%20Maritime%20Cyber%20Risk%20Management%20(Secretariat).pdf.
13    Gareth Corfield, “60,000 Conti Ransomware Gang Messages Leaked,” The Register, February 28, 2022, https://www.theregister.com/2022/02/28/conti_ransomware_gang_chats_leaked/; Maria Henriquez, “Inside Conti Ransomware Group’s Leaked Chat Logs,” Security Magazine, April 6, 2022, https://www.securitymagazine.com/articles/97379-inside-conti-ransomware-groups-leaked-chat-logs.

The post Wargaming to find a safe port in a cyber storm appeared first on Atlantic Council.

360/StratCom: How policymakers can set a democratic tech agenda for the interconnected world https://www.atlanticcouncil.org/content-series/360stratcom/360-stratcom-how-policymakers-can-set-a-democratic-tech-agenda-for-the-interconnected-world/ Thu, 08 Dec 2022 19:48:31 +0000 https://www.atlanticcouncil.org/?p=593715 The DFRLab assembled policymakers and civil-society leaders together to drive forward a democratic tech agenda that is rights-respecting and inclusive.  

The post 360/StratCom: How policymakers can set a democratic tech agenda for the interconnected world appeared first on Atlantic Council.

On December 7, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) hosted 360/StratCom, its annual government-to-government forum, bringing policymakers and civil-society leaders together to drive forward a democratic tech agenda for the increasingly interconnected world—and ensure that it is rights-respecting and inclusive.

The day kicked off with a panel on anti-lockdown protests and dissent in China moderated by Kenton Thibaut, DFRLab’s resident fellow for China. Following a deadly fire at a residential building in Xinjiang, protests erupted in cities across China, including on almost eighty university campuses. While the protests have been fueled by frustration with China’s strict zero-COVID policy, Xiao Qiang, a research scientist at the University of California, Berkeley, noted the protests have also grown to object to censorship and Xi Jinping’s leadership. The protests mark the failure of Xi’s “prevention and control” security approach, added Sheena Greitens, associate professor at the University of Texas. “It was really interesting, and I imagine troubling, from the standpoint of China’s leaders, to see that model fail initially at multiple places, multiple cities in China when these protests broke out,” she said. While the panelists agreed that China has publicly used a lighter touch in dealing with the protest organizers than it has historically, they expressed concern that this is because surveillance technology provides authorities the ability to identify and target protesters behind closed doors. Maya Wang, associate director of the Asia division at Human Rights Watch, said an important takeaway from the protests is that many people in China seek democracy. 

Next up was a discussion about the Freedom Online Coalition (FOC), a global alliance in pursuit of a democratic tech agenda that ensures a free, open, secure, and interoperable internet for all. With Canada serving as the current chair of the FOC, the session began with remarks from Canadian Deputy Foreign Minister David Morrison. He noted that what unites the FOC is the belief that one of the most pressing challenges is finding a way to benefit from digital technology in a way that protects human rights and upholds democratic values. Morrison noted four essential components of digital inclusion: connectivity, digital literacy, civic participation, and online safety.   

With the United States preparing to serve as the incoming FOC chair, Anne Neuberger, deputy national security adviser for cyber and emerging technologies at the White House, also gave her thoughts on the democratic tech agenda. Neuberger noted that, while the internet has transformed the world, it has also led to a series of troubling developments. “The internet remains a critical tool for those on the front lines of the struggle for human rights, activists; and everyday people from Tehran to Shanghai to Saint Petersburg depend on access to an unblocked, unfiltered internet to communicate and gain information otherwise denied to them by their government.” As FOC chair, the United States will have three main priorities, Neuberger outlined: bolstering existing efforts where the FOC adds unique value, such as condemning governments that misuse technology; strengthening coordination between FOC policies and the foreign assistance that participating states are providing to ensure that national-level technology frameworks around the globe are in alignment with human rights; and strengthening the FOC’s operating mechanics to ensure the organization can have a greater impact in the years to come. 

Another vital goal for the FOC is to recognize and articulate the connection between pluralistic, open societies and a secure, open internet, said Katherine Maher, nonresident senior fellow at the DFRLab and former chief executive officer of the Wikimedia Foundation. In a panel focusing on how the FOC can live up to its promise, Maher noted that an open internet is a means to an end, as it helps people protect human rights. Moderator Jochai Ben-Avie, chief executive of Connect Humanity and a DFRLab nonresident fellow, echoed this sentiment. “Never before has the call been louder for democratic countries to take coordinated action in defense of a free and open and secure and interoperable internet,” he noted.

Read more

Report

Dec 6, 2022

An introduction to the Freedom Online Coalition

By Rose Jackson, Leah Fiddler, Jacqueline Malaret

The Freedom Online Coalition (FOC) is comprised of thirty-four member countries committed to advancing Internet freedom and human rights online.

Digital Policy International Organizations

Later in the day, the discussion shifted to the European Union’s (EU) approach to tech governance in a session moderated by Rose Jackson, director of the DFRLab’s Democracy and Tech Initiative. Gerard de Graaf, the European Union’s first ambassador to Silicon Valley, remarked on recent tech industry layoffs, saying that he had been reassured by some tech companies that the cuts would not affect compliance with European regulations. “In the industry, there is an awareness that it’s probably not so wise to start cutting into the areas where, frankly, you probably now need to step up rather than reduce your resources,” he said.  

Meanwhile, Prabhat Agarwal, one of the lead drafters of EU tech legislation and head of unit at the EU’s DG CONNECT Digital Services and Platforms, said that he is working on designing transparency provisions. He noted three key areas that these provisions will cover: user-facing transparency to ensure tech platforms’ terms and conditions are so clear “that even children can understand”; expert transparency that would allow civil society, journalists, and academics the ability to access data intrinsic to their research; and regulator transparency that would enable governments to inspect what happens “under the hood” of the platforms.  

To close out this year’s 360/StratCom programming, Safa Shahwan Edwards, deputy director of the DFRLab’s Cyber Statecraft Initiative, led a conversation with Camille Stewart Gloster, US deputy national cyber director for technology and ecosystem. The discussion centered on how to define and grow a competitive tech workforce. Stewart Gloster noted that technology underpins each person’s life, and it is imperative to raise the collective level of understanding of the tradeoffs people around the world make daily, from privacy to security.  


Layla Mashkoor is an associate editor at the DFRLab. 

The call for coordinated action for a free, open, and interoperable internet https://www.atlanticcouncil.org/content-series/360stratcom/the-call-for-coordinated-action-for-a-free-open-and-inoperable-internet/ Thu, 08 Dec 2022 14:33:27 +0000 https://www.atlanticcouncil.org/?p=593513 The DFRLab, as part of its annual 360/StratCom event, convened a discussion about the FOC, including the need to coordinate action to protect a free, open, secure, and interoperable internet.

The post The call for coordinated action for a free, open, and interoperable internet appeared first on Atlantic Council.


The Freedom Online Coalition (FOC), founded a decade ago, is one of a number of coalitions, alliances, and forums that exist to advance human rights online. As part of its annual 360/StratCom event, the Atlantic Council’s Digital Forensic Research Lab convened a discussion about the FOC, including the need to coordinate action to protect a free, open, secure, and interoperable internet—and how the FOC should establish itself as a useful vehicle for coordinating digital policy. The panelists also discussed what steps the United States should take as it assumes the FOC leadership position from Canada for the years 2023 and 2024. 

David Morrison, Canadian deputy foreign minister of global affairs, introduced the conversation. Morrison reflected on the work Canada accomplished in 2022 as chair of the FOC, as well as what challenges remain as the United States takes control in 2023.  

This year, the FOC saw crises that required clear pushbacks against repression online, including Russian disinformation campaigns in Ukraine and the Iranian government’s censorship of the internet, both of which proved the value of the FOC. Morrison highlighted how the FOC can play a lead role in speaking out against such infringements of human rights online, in part because the FOC is a collective powered by civil society and industry.  


Morrison then passed the microphone to Anne Neuberger—deputy national security advisor, cyber and emerging technologies—who spoke about US priorities as incoming FOC chair.  

Neuberger highlighted how the United States is happy to build upon Canada’s previous work as chair and revisited the role the United States played in the past, particularly in the organization’s founding. With the support of US President Joe Biden and a strong foundation set by Canada’s leadership in 2022, Neuberger said she is optimistic that the United States can expand the FOC’s role to improve strategic planning, counter the rise of digital misinformation, and promote safe spaces for marginalized groups such as women, LGBTQ communities, and the disability community. In addition, the United States remains committed to speaking out against Russian and Iranian oppression.  

Both Morrison and Neuberger celebrated the expansion of the FOC with the addition of Chile. With membership now at thirty-five countries, Morrison noted how the FOC represents a coalition of countries that believe in responding collectively to digital threats against democracy. 

To follow up on the opening remarks provided by the Canadian and US government representatives, DFRLab nonresident fellow Jochai Ben-Avie moderated a panel featuring Tatiana Tropina, assistant professor in cybersecurity governance at Leiden University; Katherine Maher, nonresident senior fellow at DFRLab; and Jason Pielemeier, executive director of the Global Network Initiative, to provide insight into the role civil society and industry play in the FOC, as well as into ways of improving the coalition’s efficacy. The panelists discussed how the FOC should play a greater role in coordinating countries that believe in using democratic norms to advance human rights, acting as a vehicle to accomplish this because it has expertise, global reach, and a coalition of like-minded countries with the potential to work together. 

Looking at the potential of the FOC, the panelists noted the difference in geopolitical contexts between when the organization was founded and today, and that the FOC’s utility is particularly salient because of democratic backsliding in many parts of the world. The panel asserted that, while the optimism that the internet would be a democratizing force has fallen away due to the use of its technology to repress citizens, this should spur even greater motivation to engage within and beyond the FOC.  

Panelists then discussed another issue facing the FOC: increasing internal coordination. On one hand, they mentioned, the power of the FOC comes from the reach of the countries comprising its membership. On the other hand, there is a disconnect between the norms that the FOC stands for and the difficulties of actualizing these norms. As Tropina noted, the most pressing issue keeping the FOC from being more effective is not membership inclusion but clarifying the FOC’s role; countries cannot play a leading role without doing the work themselves. The FOC should “go back to basics and extend its membership based on some really identified values and principles,” she concluded. 

The panel concluded by acknowledging that, while it feels as if technology constantly outpaces the institutions created in the past, there are core identifying democratic values that stay constant, and that should drive the FOC’s future action.  


Erika Hsu is a young global professional with the Digital Forensic Research Lab.   

The White House’s new deputy cyber director: Tech’s challenges are society’s challenges https://www.atlanticcouncil.org/content-series/360stratcom/the-white-houses-new-deputy-cyber-director-techs-challenges-are-societys-challenges/ Wed, 07 Dec 2022 23:49:25 +0000 https://www.atlanticcouncil.org/?p=593417 Camille Stewart Gloster, the inaugural deputy national cyber director for technology and ecosystem security, spoke at the DFRLab's 360/StratCom about her newly created office's ambitious agenda to address a wide scope of cyber challenges.

The post The White House’s new deputy cyber director: Tech’s challenges are society’s challenges appeared first on Atlantic Council.


The White House is engaging in a “whole-of-society strategy” with its newly created Office of the National Cyber Director (ONCD), which set an ambitious agenda to diagnose and address the implications of everything from regional cybersecurity and quantum computing to Web3 blockchain technologies and sustainably expanding the tech workforce.

That was the message from Camille Stewart Gloster, the inaugural deputy national cyber director for technology and ecosystem security, who has been charged with crafting the scope of the office empowered to bridge a range of tech equities and help better define and grow a competitive tech workforce.

ONCD “is focused on moving us towards an affirmative vision of a thriving digital ecosystem that is secure, equitable, and resilient that we all can share in,” Gloster said at 360/StratCom, the annual government-to-government forum hosted by the Atlantic Council’s Digital Forensic Research Lab (DFRLab).

This year, 360/StratCom focused on the work of civil society to ensure that universal human rights in the physical world are also protected in the virtual realm. Here are just a few of Gloster’s insights into the Biden administration’s approach, in a conversation with Safa Shahwan Edwards, deputy director of the DFRLab’s Cyber Statecraft Initiative.

Security and education add up to resilience

  • Gloster noted that the cyber challenge is “a shared problem across the public sector and the private sector.” As such, her office is working to produce a strategy that focuses on both cyber workforce education and digital safety awareness while striving to fill nearly seven hundred thousand open cybersecurity roles. “We don’t want to engage the same players exclusively,” Gloster said. “Yes, the big players must be a part of the conversation, but we want to make sure [to also include] civil society, academia, the small players, the innovators.”
  • That multifaceted approach led to the administration’s August announcement of a string of partnerships with top tech companies (Google, Apple, IBM, Microsoft), coding credentialing programs (Code.org, Girls Who Code), cyber insurers (Coalition, Resilience), and more than 150 electric utility providers (through its expansion of the Industrial Control Systems Cybersecurity Initiative).
  • Gloster also emphasized that educational institutions will be critical to enhancing cybersecurity at the local level. The University of Texas system recently announced that it would expand its existing short-term credentials in cyber, as well as create new ones, leveraging its UT San Antonio Cybersecurity Manufacturing Innovation Institute.
  • Later this week, Gloster will speak at Whatcom Community College, a remote college between Vancouver and Seattle, which was chosen in August to be the site for a National Science Foundation cybersecurity center providing education to “fast-track students from college to career.” At the heart of the conversation, Gloster said, is one core question: “How does a community college sit at the intersection of regional cyber awareness, regional technology awareness, and how does that catalyze and support the work that’s going on on a local or regional level?”

Expanding cyber policy into the unknown

  • The ONCD isn’t limited to tackling workforce education and training, but it is also well-positioned to address everything from “the emerging tech supply chain, the intersection of human rights and technology, privacy—all of the future-looking pieces of the technology landscape,” Gloster said.
  • Cyber administrators will need to grapple with critical questions around quantum computing, which holds incredible promise for creating more effective vaccines and predicting threat models, as well as significant risk. “There’s a lot of good work that can come out of that, so how do we both prepare for the threats and the opportunities?”
  • The increasing use of Web3 technologies built on blockchains will continue to present new security challenges (beyond financial instability threats, such as recent consumer losses caused by the sudden collapse of the crypto exchange FTX). A number of Web3 companies are built with a “collective contribution model that is open source,” Gloster said, which could leave them more vulnerable to cyberattacks.

The role of all sorts of governments

  • The ONCD will need to work with the State Department and other interagency partners to coordinate around the national-security, economic, and human-rights implications of these new technologies. Groups like the thirty-four-nation Freedom Online Coalition provide an important avenue “to collaborate with our partners to really think about what democracy looks like now and in the future, and how technology underpins that,” Gloster said.
  • International governments will need to be proactive in addressing pending security concerns around new technologies. Both the White House and international organizations like the European Commission have signaled that more regulation is coming in 2023 to address cryptocurrencies and the metaverse, for instance.
  • While people often get lost in the conversation about tech, Gloster said preventing cyber threats in the future will require a significant understanding of human nature—and a workforce equipped with not just tech skills but also expertise in social and cultural contexts. “People create, promulgate, use, and are the malicious actors that weaponize or leverage technology. That means that we have to understand them as people.”

Nick Fouriezos is a writer with more than a decade of journalism experience around the globe.

Evanina testifies to Senate Select Committee on Intelligence https://www.atlanticcouncil.org/insight-impact/in-the-news/evanina-testifies-for-the-senate-committee-on-intelligence/ Fri, 02 Dec 2022 15:11:02 +0000 https://www.atlanticcouncil.org/?p=580656 William Evanina testifies on the growing cyber threat posed to US business and academic institutions.

The post Evanina testifies to Senate Select Committee on Intelligence appeared first on Atlantic Council.


On September 21, the Scowcroft Center for Strategy and Security’s Nonresident Senior Fellow William Evanina testified before the Senate Select Committee on Intelligence. In his testimony, Evanina discussed the growing cyber threat posed to US business and academic institutions.

America faces an unprecedented sophistication and persistence of threats by nation state actors, cyber criminals, hacktivists and terrorist organizations. Corporate America and academia have become the new counterintelligence battlespace for our nation state adversaries, especially the Communist Party of China.

William Evanina
Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The cases for using the SBOMs we build https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/the-cases-for-using-sboms/ Tue, 22 Nov 2022 18:45:12 +0000 https://www.atlanticcouncil.org/?p=587890 Software bills of materials (SBOMs) provide key data suited to many uses. Industry and government can continue to sharpen their demand signals, shape implementation, and drive development and adoption.

The post The cases for using the SBOMs we build appeared first on Atlantic Council.


In the beginning, developers created package manifests and header files. Code was formless and required documentation. Tabs and spaces hovered on the surfaces of the editors, and the spirit of Dennis Ritchie hovered over the code.

And then a developer typed, “git commit” and behold, there was a commit, and the developer saw that the commit was good, so they separated BRANCH from MAIN. They called the BRANCH a version and MAIN the source, and there were pulls and pushes and the first release. And yet lo, users often had little idea what was in any of it. This went on to cause many problems, but that did not make it a bad idea.

Introduction: SBOMS, public policy, and you

Anyone in tech, cyber policy, or security circles has probably heard about software bills of materials (SBOMs) by now and considered how they or their organization might use SBOM data. Many recent efforts strive to answer this question—one good example is Microsoft’s Open-Source Software Secure Supply Chain framework.1 Asking about SBOM use is nonetheless a reasonable act of self-examination given their relatively recent appearance on the policy scene, mostly in the wake of major software supply chain incidents.2

SBOMs themselves are not new. One widely accepted SBOM format, the Software Package Data Exchange (SPDX), dates back to 2011.3 Notably, that original SBOM concept has its roots in the complex physical manufacturing processes of industries like the automotive sector, where bills of materials help untangle intricate supply chains, as well as in legal practices for recording the inheritance of licenses through a business.4 Meanwhile, those who have compiled software from source code are likely familiar with build manifests that indicate all the packages, libraries, and other bits of code needed to properly construct a final piece of software. The bigger a project, from a simple application to an entire operating system, the longer and more complex that manifest becomes. An SBOM is similar—a snapshot in time of each component making up a piece of software, with additional metadata tracking provenance (information about component authors and affiliations) and versioning.5
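To make that snapshot concrete, the core fields an SBOM record carries (supplier, component name, version, a unique identifier, and dependency relationships) can be sketched as plain data. This is a minimal sketch, not any particular SBOM format; every name, version, and identifier below is invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One SBOM component record, loosely following commonly cited minimum elements."""
    supplier: str                                     # who produced the component
    name: str                                         # component name
    version: str                                      # exact version string
    unique_id: str                                    # e.g., a package-URL-style identifier
    dependencies: list = field(default_factory=list)  # names of direct dependencies

# A toy two-component snapshot: an application and one library it bundles.
sbom = [
    Component("Example Co.", "example-app", "2.1.0",
              "pkg:generic/example-app@2.1.0", ["tls-lib"]),
    Component("TLS Project", "tls-lib", "1.0.1",
              "pkg:generic/tls-lib@1.0.1"),
]

for c in sbom:
    print(f"{c.name}@{c.version} (supplier: {c.supplier})")
```

Real formats such as SPDX add richer metadata (licenses, checksums, timestamps), but the shape is the same: a flat, machine-readable list of components plus the relationships between them.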

While SBOMs are intuitively useful and have received some notable policy attention of late—from the National Telecommunications and Information Administration’s (NTIA) minimum-viable elements project to mentions in executive orders (EOs) and Office of Management and Budget (OMB) memoranda—they are just one tool (more precisely, one class of data) in the wider arsenal for managing risk in software systems.6 7 Although conversation about SBOMs has largely (and understandably) focused on their generation, requirements, and format, their growing maturity demands wider consideration of next steps: developing clear use cases for SBOMs. An absence of mature, well-understood use cases for SBOMs threatens their future as an effective risk management tool.

Though SBOMs and their widespread adoption face other, arguably more dire, challenges—for example, the risks of mistimed regulation and disconnects between SBOM designers and consumers—policymakers and the security community can directly address use cases now. Letting the challenges of SBOM generation drown out demand signals from the user side of the pipeline risks inundating purchasers, developers, and acquisition officers alike with a torrent of useless spreadsheets and effete compliance certifications.

Indeed, these uses extend beyond just technology-consuming firms to include governments and other central risk assessment bodies. An absence of well-articulated SBOM use cases and illustrated relevance to communities of SBOM consumers poses twin challenges. First, it risks mission creep, where policymakers might begin to frame SBOMs as a silver bullet for all supply-chain woes without clear demarcation of the problems they are designed to address. Second, it undersells SBOMs to those who would consume them, leading to slower adoption, poor tooling, and the malformation of a potentially powerful data standard into yet more bloated security theater.

To address the opportunity for further usage conversations, this paper offers several grounded applications for SBOMs, focusing particularly on the benefits they offer their consumers, from chief information security officers (CISOs) to acquisition officers and from software consumers to the Cybersecurity and Infrastructure Security Agency (CISA). Incident response may be the most intuitive role for SBOMs—a way to determine impacted software when a widespread component is compromised or found vulnerable—but it is far from the only one. SBOMs can help development teams determine what packages they will be managing. They can feed software composition analysis (SCA), acting as an ingredient and source list. They can help compliance officers streamline licensing acquisition and manage the adoption of components produced by sanctioned or entity-listed companies. At the largest scale, they can map out portions of the software ecosystem, highlighting little-known relationships and concentrations of dependence, while shedding light on the benefits of using extant code and the risks of relying on external repositories. First though, this paper considers the state of contemporary SBOM policy conversations.
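As one hedged illustration of the compliance use cases named above, the sketch below screens a hypothetical SBOM component list against deny-lists of licenses and suppliers. Every component, license choice, and list entry is invented; a real implementation would ingest a standard SPDX or CycloneDX document rather than a hand-built list.

```python
# Hypothetical SBOM components, each carrying license and supplier metadata.
components = [
    {"name": "fast-parser", "version": "3.2", "license": "MIT", "supplier": "Acme OSS"},
    {"name": "crypto-core", "version": "1.9", "license": "GPL-3.0", "supplier": "Acme OSS"},
    {"name": "net-stack", "version": "0.4", "license": "Apache-2.0", "supplier": "Blocked Corp"},
]

DENIED_LICENSES = {"GPL-3.0"}        # licenses this (fictional) firm cannot ship
DENIED_SUPPLIERS = {"Blocked Corp"}  # e.g., sanctioned or entity-listed vendors

def screen(components):
    """Return (name, reasons) pairs for components that trip either deny-list."""
    flagged = []
    for c in components:
        reasons = []
        if c["license"] in DENIED_LICENSES:
            reasons.append("license")
        if c["supplier"] in DENIED_SUPPLIERS:
            reasons.append("supplier")
        if reasons:
            flagged.append((c["name"], reasons))
    return flagged

print(screen(components))  # -> [('crypto-core', ['license']), ('net-stack', ['supplier'])]
```

The same lookup pattern serves the incident-response case: swap the deny-lists for a newly disclosed vulnerable package and version, and the scan answers "do we ship this?" in seconds rather than weeks.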

Still fighting yesterday’s battles

The year 2014 saw one of the first truly widespread, dire software supply-chain events: the OpenSSL “Heartbleed” vulnerability.8 Heartbleed put the many systems that relied on OpenSSL at significant risk, allowing malicious actors to extract sensitive information due to a relatively simple software flaw. The incident catalyzed a small surge in private-sector funding to open-source projects to support security efforts and raised questions about ways to effectively track the use of critical, community-developed software in systems spread around the world, as well as ways to coordinate responses to flaws found in such code. The US government immediately asked all federal agencies, as part of alerting the public through the Department of Homeland Security (DHS), to emphasize where websites and other internet services used OpenSSL libraries.9 However, that was only the tip of the iceberg—in fact, OpenSSL also lived on many mobile devices, embedded hardware systems, and phone and conference-call systems,10 as well as much networking infrastructure.

Collecting data on the usage of OpenSSL protocols among websites to understand Heartbleed exposure was a useful first step to an unwieldy triage process. Wider SBOM adoption at the time would have aided long-tail remediation of false negatives and subtle implementations. Further, had a CISA-style entity been able to ingest and use SBOM information on OpenSSL, the true sprawl of the library would have been more immediately apparent and accessible—perhaps even before the vulnerability was found, leading to a better, more targeted response and, crucially, enabling proactive investment and security before the incident.

Discussions of SBOMs and their development have the opportunity now to match the technical solutions enabled by SBOM data to the policy challenges around transparency, processes, and due diligence they can address, and use case refinement will drive that matching. SBOMs offer a mechanical view into the minutiae of documentation for software, summarizing all the pieces of code that make up modern applications and services. If the end goal is for the digital ecosystem to widely adopt SBOMs—both their production and practical use by recipients—much of the necessary intermediary work in ingesting and interpreting SBOM data remains unfinished. This is understandable: in the early SBOM days, deliberate decisions to limit scope—in the NTIA Minimum Elements for an SBOM process, for instance—helped reduce a sprawling problem set to a tractable project.11 Now that SBOMs are moving toward the mainstream, beginning to address broader use scenarios will help drive their adoption and maturity, and industry, in particular, can play a key role in pushing an aggressive development cycle with clearly defined uses for SBOMs, each contributing to different facets of cybersecurity.

The potential for SBOMs, as snapshots of the composition of software packages, to aid long-term remediation of Heartbleed-style events is clear. However, this security model requires that, upon build and deployment, developers and consumers update and transparently publish SBOMs for consumption. SBOMs only work well if they are common, standardized, and quickly updated—all a considerable way off from the current situation,12 as a February 2022 Linux Foundation (LF) study found that less than half of surveyed organizations were “using” SBOMs.13 This survey likely represents an optimistic upper bound as well—64 percent of respondents were LF member companies, likely skewing toward SBOM maturity; only 74 percent of the organizations classed as using SBOMs were producing and consuming them; and even partial or marginal organizational use counted for the survey (these figures are helpfully broken down within the analysis, which acknowledges and strives to address potential sample bias well).

This is not a critique of adoption speed and progress to date, but rather an acknowledgment that the next steps for SBOMs will require a gear shift that well-articulated use cases and a clear policy demand signal can help accomplish. The same survey queried about adoption plans, forecasting a promising 66 percent increase in the rate of SBOM production and consumption amongst respondents. Anecdotally, some industries at the forefront of SBOM development are already innovating these use cases. For instance, the healthcare sector—which acted as one of the testbeds for NTIA’s SBOM proof-of-concept studies14—uses SBOM processes to highlight relationships with suppliers and OSS communities that merit increased support, as well as to produce human-readable risk analysis information.15

The most common, general message to policymakers about SBOMs is that they are ingredient lists, most useful for assessing the scale of exposure and supporting the recall of tainted, defective components. This describes the minimum viable SBOM: a list of component software, referred to only upon the discovery of a defective part—and in the case of NTIA’s minimum viable SBOM, only one layer of dependencies is tracked.16
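To make the minimum viable SBOM concrete, the sketch below models NTIA's minimum elements (supplier, component name, version, unique identifiers, dependency relationships, SBOM author, and timestamp) as a simple Python data structure. The class and field names are illustrative assumptions, not a formal schema such as SPDX or CycloneDX; note how only one layer of dependencies is recorded.

```python
from dataclasses import dataclass, field

# Hypothetical in-house model of the NTIA minimum elements; field names
# are assumptions for illustration, not a standardized SBOM format.
@dataclass
class Component:
    supplier: str                 # Supplier Name
    name: str                     # Component Name
    version: str                  # Version of the Component
    identifiers: list             # Other Unique Identifiers (e.g., a purl)
    dependencies: list = field(default_factory=list)  # one layer only

@dataclass
class MinimumSBOM:
    author: str                   # Author of SBOM Data
    timestamp: str                # Timestamp of generation
    components: list = field(default_factory=list)

sbom = MinimumSBOM(
    author="example-vendor",
    timestamp="2023-01-01T00:00:00Z",
    components=[
        Component("OpenSSL Project", "openssl", "1.0.1f",
                  ["pkg:generic/openssl@1.0.1f"],
                  dependencies=["zlib"]),  # direct dependencies only
    ],
)
```

Even this minimal record is enough to answer the recall question—"do we ship the defective part?"—but nothing deeper, which is precisely the limitation the use cases below push against.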

This paper is not a call to reinvent SBOM standards. Like so much of government cybersecurity policy, the extreme visibility of SBOMs is a reaction to crisis. Rather, it argues that use cases can and should shape the production and adoption of SBOMs and the tools accompanying them. As mentioned earlier, some of this work is underway,17 but policy conversations can continue focusing on what SBOM data can enable and what tooling and production/adoption incentives will best drive development there at a sufficient pace. Policies can also help match the different methods of SBOM production to the most applicable usage. Use cases strengthen the SBOM value proposition with both code maintainers and consumers, as well as help overcome obdurate resistance from technology vendors with little desire to have their behavior “shaped.”18

Use cases

The policy challenge behind SBOMs is the question of adoption—compelling use cases can motivate adoption while sensibly shaping the circumstances and specifics of regulation. Below are four foundational use cases for SBOMs, each with its respective audiences, outcomes, and positions in the product and incident lifecycles. This is by no means an exhaustive list, but it represents diverse and important usage. Each asks for different levels of SBOM completeness, from a minimum-viable components list to a thorough accounting of support, funding, versioning, and deployment context that no current SBOM standard mandates.

  1. Procurement—for reducing compliance burdens and preventing duplicative purchases.
  2. Vulnerability Management and Threat Intelligence—for tracking compromised components and remediation planning.
  3. Incident Response—for validating liability claims and guiding patch efforts.19
  4. Ecosystem Mapping—for providing a bird’s-eye view of dependencies in an enterprise’s ecosystem and beyond.

Discussing the role for SBOMs in these cases, and the larger impacts of their use, offers clarity for government on how to incentivize and structure SBOM adoption and, for industry, on where to focus tooling development.

1. Guiding software procurement and adoption decisions

SBOMs can prove useful during the procurement process for any third-party software, beyond obvious security functions. Large organizations often make individual purchases rather than coordinating licensing centrally, so creating an inventory and consolidating duplicate purchases or capabilities for cost savings can make a chief financial officer’s (CFO) day. Licensing checks can also surface instances where entities adopt open-source software but cannot legally incorporate it into other products. These are quick wins because the acceptance or rejection of that software can be binary: if licensing prevents use or an existing contract covers a need, the procurement goes no further. Software asset registers and intellectual property scanners already strive to serve these functions, but, given the overlap in their data and that of SBOMs, there is room for tooling to support quick decision-making for all, as well as for the different data sources to support rather than supplant each other.

Binary decisions can form part of a standard procurement process in pre-negotiations with suppliers, but decisions involving judgment calls (such as the relative criticality of a known bug) tend to slow workflows and create angry calls from executives who want to know the reasons behind a derailed purchase. Again, deciding ahead of time which data points in an SBOM are deal-breakers will streamline the process of software procurement. Simple, written policies such as “never adopt or acquire software with components from X supplier”—which can refer to competitors, companies operating in sanctioned nations, entity-listed organizations, known risky projects, or anything else unambiguously identified—can work well here, especially with automation.
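An automated version of such a "never adopt components from X supplier" policy can be very small. The sketch below is one possible gate, assuming a hypothetical list-of-dicts layout for SBOM component data and an illustrative blocklist; real deployments would match on standardized identifiers rather than bare supplier names.

```python
# Hypothetical procurement gate: reject any SBOM listing a component
# from a blocked supplier. Supplier names below are invented examples.
BLOCKED_SUPPLIERS = {"sanctioned-corp", "entity-listed-co"}

def procurement_check(sbom_components):
    """Return (approved, offending_components) for a binary go/no-go."""
    offending = [c for c in sbom_components
                 if c.get("supplier", "").lower() in BLOCKED_SUPPLIERS]
    return (len(offending) == 0, offending)

components = [
    {"name": "libfoo", "version": "2.1", "supplier": "acme"},
    {"name": "libbar", "version": "0.9", "supplier": "sanctioned-corp"},
]
approved, hits = procurement_check(components)
# approved -> False; hits names libbar as the blocking component
```

Because the check is binary and runs before any judgment call is needed, it can sit in pre-negotiation workflows without derailing purchases that would pass anyway.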

When it comes to adopting and integrating open-source software, many of these policies should already exist at most firms, but SBOM use with a standardized format can streamline validation. One must check the status of a project referenced in an SBOM: how healthy, deep, and thorough its community support is, how much investment it enjoys, or, if tied to a proprietary offering, how dedicated the parent company is to supporting it—none of this is included in the SBOM per se, but it is retrievable from tools like OpenSSF Scorecard, SLSA levels, and more once dependencies are identified. Many CISOs already struggle with the need to collect supply-chain data at a granular level for risk management. While insufficient for high-security organizations, SBOMs are workable substitutes for medium or small enterprises that lack the in-house expertise to analyze all their software in depth, and their unaltered data can serve to inform and define procurement standards and policies alongside risk-management posture.

2. Adding smarts to vulnerability management and threat intelligence

One of the main use cases for SBOMs is identifying components affected by vulnerabilities. SBOMs provide visibility into software a level or two deeper than is common today, particularly provenance. They allow for better triage, cross-referencing dependencies, and remediation planning for identified vulnerabilities. SBOMs provide the roadmap through software relationships that enable this degree of dedicated care.
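The cross-referencing step can be illustrated with a minimal sketch: match SBOM components against an advisory feed keyed on package and version. The feed here is a mock dictionary for illustration; in practice the data would come from sources such as NVD or OSV, and matching would use version ranges rather than exact pairs.

```python
# Mock advisory feed mapping (package, version) -> known CVE IDs.
# CVE-2014-0160 is Heartbleed, the example used earlier in this paper.
ADVISORIES = {
    ("openssl", "1.0.1f"): ["CVE-2014-0160"],
}

def triage(sbom_components):
    """Return {component_name: [CVE IDs]} for vulnerable SBOM entries."""
    findings = {}
    for comp in sbom_components:
        key = (comp["name"], comp["version"])
        if key in ADVISORIES:
            findings[comp["name"]] = ADVISORIES[key]
    return findings

report = triage([
    {"name": "openssl", "version": "1.0.1f"},
    {"name": "zlib", "version": "1.2.11"},
])
# report flags openssl with CVE-2014-0160; zlib is clean here
```

The value of the SBOM in this loop is the accurate (name, version) pair; everything downstream—severity, exploitability, remediation owner—layers on top of that lookup.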

One constructive application of SBOMs in this context is improving the usefulness of vulnerability risk ratings to impacted organizations. One organization’s “critical” is not necessarily so for a different environment, use case, or business model. Application security professionals already know this, but wide adoption of SBOMs may change how they design a remediation strategy by clarifying what entity is ultimately responsible for fixing a vulnerability and how those outside an organization’s control might handle that request. Some dependencies may have quite capable maintainers that can be relied on while others might require significant external support. SBOM data highlighting dependencies can help teams identify what external parties they rely on for code support and adjust accordingly and ahead of incidents.

Developers may need to confirm whether a vulnerable component of a package is actually in use. If not, the organization can declare the risk “low” and simply note that policy will change if it incorporates that component in the future. There is a possible resource squeeze in the future for enterprises that need more application development and security staff to investigate the origins of disclosed vulnerabilities, determine remediation responsibilities, pass on notifications and updates to affected parties within the ecosystem, and sign off on version changes to internally generated SBOMs. SBOMs are part of enabling that level of decision-making, allowing better tracking of dependencies and changes to them to provide better insight into actual vulnerability exposure. Again, the data SBOMs provide are just part of the foundation on which to build these processes; that they are complemented by other tools and data, like GitBOM and Vulnerability Exploitability eXchange (VEX), highlights the importance of sharpened demand signals from SBOM consumers.

One of the main questions to ask with any SBOM is whether its source and contents are trustworthy. One useful method involves scanning the binary of the software to validate the accuracy of the SBOM—essentially checking that what is under the hood matches the parts list. Binary scanners are imperfect, and if the same scanners help generate an SBOM in the first place,20 they may not produce reliable SBOMs for consumers using them in their own vulnerability scanning.21 SBOMs and scanning can help each other, mutually improving the accuracy of package component determination.
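One way to picture that mutual improvement is a simple reconciliation between what an SBOM declares and what a binary scanner observes. The sketch below assumes both inputs are already reduced to sets of component names; real tooling would match on standardized identifiers and versions, and neither side should be treated as ground truth.

```python
# Reconcile an SBOM's declared components against binary-scanner output.
# Inputs are plain sets of names here, an illustrative simplification.
def sbom_vs_scan(sbom_names, scan_names):
    return {
        # present in the binary but absent from the SBOM: possible
        # omission in the SBOM, or a scanner false positive
        "undeclared": scan_names - sbom_names,
        # declared in the SBOM but not observed: possible stale SBOM
        # entry, or a scanner false negative
        "unverified": sbom_names - scan_names,
    }

report = sbom_vs_scan({"openssl", "zlib"}, {"openssl", "libpng"})
# report["undeclared"] contains libpng; report["unverified"] contains zlib
```

Each discrepancy is a prompt for investigation rather than an automatic verdict, which is exactly the sense in which SBOMs and scanning improve each other.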

The overall risk rating of a software vulnerability informs the risk of a partial or phased remediation. Whether waiting for a third party to deliver a patch or allocating limited internal resources against dependencies that take longer to resolve and downstream requirements from partners, organizations will be able to monitor SBOM-sourced vulnerability data as part of their infrastructure risk-management practices (in conjunction with centralized data like VEX).22 This monitoring can also help threat intelligence analysts better understand organizational exposure. Better dependency knowledge from an SBOM can help clarify where dependencies might be under-resourced, frequently targeted by adversaries, or otherwise deserving of extra scrutiny and resourcing. The frequency of versioning changes can even provide insight into changes that support critical components. Even simply improving organizational visibility into the attack surface of its dependencies will help prioritize resourcing, direct remediation planning, and expand overall cybersecurity for an organization making full use of its SBOMs.

3. Incident response and building a better packing slip

While the above uses focus on using SBOMs for response planning prior to an incident, SBOMs also have utility right of “boom,” or after the fact. In many cases, SBOMs can initially act as verification for incident reports and recommendations—a pointer to where things went wrong in a compromise. As corroborating evidence, a verified SBOM from an environment, system, or other package can help in the review of an incident and determine the impact on parallel systems or previous system versions. The core value within incident response and forensics is accurately comparing versions, changes, and their respective release times. An SBOM may provide some simple insight—after all, if an organization cannot confirm or deny whether a system was affected, does it have to declare a breach anyway? Having an SBOM that raises unanswerable questions is a business risk to examine with leadership—one that otherwise might not have surfaced.

SBOMs can also aid in crisis communication among partners, affected organizations, and customers during and following an incident. Most product-security organizations already have a workflow to add SBOM information to, but they may require some additional information, such as a timeline matching SBOM versions to the systems under investigation. A challenge with this level of forensics is that organizations rarely have the right level of logging and sufficient log retention to be able to confirm authoritatively which versions of components were in use at the time of an incident. Suppliers may need to help customers determine whether an incident affected them, and sometimes that information may simply be unavailable.

One more use of SBOMs in incident response is to validate an assertion that the contents listed by an SBOM were reasonably accurate at the time of release and that no known and unaddressed vulnerabilities existed. Organizations can reference attestations later if events or evidence indicate something different. While SBOMs are often compared to the ingredient list on a food-product label, another analogy could consider them a packing slip, describing what a supplier claimed was in a box at the time of its sealing. If a checksum to verify the absence of tampering fails, an SBOM can help guide responders in tracking down the discrepancies between shipped and delivered software.
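The packing-slip check can be sketched in a few lines: compare a delivered artifact's hash against the hash recorded for it in an SBOM entry. The flat `"sha256"` field below is an assumption for illustration; real formats record checksums in their own structures (SPDX, for example, carries package checksums).

```python
import hashlib
import os
import tempfile

# Minimal "packing slip" check: does the delivered file match the hash
# the supplier recorded at sealing time? Entry layout is illustrative.
def matches_sbom(path, sbom_entry):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == sbom_entry["sha256"]

# Demo: record an artifact's hash, then tamper with the artifact.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"release build v1.0")
    artifact = f.name
entry = {"name": "widget",
         "sha256": hashlib.sha256(b"release build v1.0").hexdigest()}
ok_before = matches_sbom(artifact, entry)   # matches as shipped
with open(artifact, "ab") as f:
    f.write(b"!tampered")
ok_after = matches_sbom(artifact, entry)    # mismatch after tampering
os.unlink(artifact)
```

When the comparison fails, the SBOM's component-level detail is what lets responders narrow down which shipped piece diverged, rather than treating the whole delivery as suspect.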

4. A systemic view of software risk

In addition to using SBOMs between and within companies, SBOMs can also serve government agencies and other third parties in mapping dependency chains and concentration risk across the software ecosystem. Recent, widespread vulnerabilities, including log4shell, emphasize the degree to which single dependencies can underpin vast quantities of software. Without a systemic view into dependency patterns, government agencies and others will struggle immensely to assess risk across and within sectors. Given access to SBOMs from multiple sources, government could use that aggregated data to assemble a rough map of dependencies across slices of the digital ecosystem—a picture of not just the dependencies of one application, but of many, and more importantly, where they overlap. While contemporary software composition analysis (SCA) can provide similar insight into widely-depended-on software,23 running SCA tools across the far larger set of software considered here would likely prove far less feasible or replicable. To protect both intellectual property and the critical nodes such a map might highlight, government would need to take extra care in protecting this data, but it would prove useful in identifying under-secured or under-resourced dependencies ripe for proactive investment and support. Vulnerabilities in one company’s codebase or within a popular open-source repository can have global impact. Widespread ignorance about software dependencies hampers proactive support that might include security auditing, maintainer funding, development of alternate dependencies, or any other number of methods to reduce the risk of high-leverage dependency.

Governments and private-sector companies currently lack measures that describe the scale of use of different pieces of software. Metrics such as download counts, license purchases, or userbase size do not provide information about deployment or reliance, either upstream or downstream. A package with only a single user could still be critically important if all kinds of different software depend on it. However, without relationship mapping, the entire ecosystem remains blind to that package’s position as an essential link in the supply chain. SBOMs can reduce this problem by providing data, when aggregated from many sources, for an ecosystem-wide view of software dependencies to CISA and other entities, even if only for part of an enterprise. CISA is likely to be tasked with some of this work should the Securing Open Source Software Act of 2022 (S.4913) pass into law, or under III.B.2 of OMB M-22-18 on Enhancing the Security of the Software Supply Chain through Secure Software Development Practices.24 25
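The aggregate measurement the paragraph above describes reduces, in its simplest form, to counting how many distinct SBOMs name a given package. The sketch below is one minimal way to surface concentration risk from a pile of SBOMs, assuming the same illustrative list-of-dicts layout used earlier; a real analysis would deduplicate by standardized identifiers and weight by criticality.

```python
from collections import Counter

# Count, across many SBOMs, how many products depend on each package.
# A package named in most SBOMs is a concentration-risk candidate.
def concentration(sboms):
    counts = Counter()
    for sbom in sboms:
        # count each package once per SBOM, regardless of how many
        # times or how deep it appears within that SBOM
        counts.update({c["name"] for c in sbom["components"]})
    return counts.most_common()

sboms = [
    {"components": [{"name": "log4j"}, {"name": "guava"}]},
    {"components": [{"name": "log4j"}]},
    {"components": [{"name": "log4j"}, {"name": "zlib"}]},
]
ranked = concentration(sboms)
# log4j appears in all three SBOMs, topping the ranking
```

Note that download counts could not produce this ranking: the signal here is breadth of reliance across products, which only relationship data like SBOMs captures.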

As more workloads move into the cloud, understanding and assessing the risk present in those systems is vital. One important use of SBOMs for software-as-a-service (SaaS) consumers is encouraging greater transparency in vulnerability reporting and mitigation inside cloud services. Over time, this information will help support more precise decision-making about the security practices of different vendors. While some companies have made policy choices about what to reveal to customers and what to withhold,26 SBOMs are useful tools for other companies to define their own policies, and for customers to push for what they (or regulators) find most comfortable. As part of this effort, good questions will need clear answers regarding how SBOMs can be most useful amidst widely varying configurations and associated products present in different SaaS deployments. Wider generation, use, and consumption provide incentives to determine and sharpen answers to these.

SBOMs can help, though differences between cloud and on-premises software create challenges. One is the speed at which the cloud changes. If SBOMs change minute-to-minute with cloud configurations, they might produce too much information and impede meaningful use by recipients. However, operating off out-of-date information is also risky. Additionally, cloud instances often utilize many different third-party services, so tracking the versioning of each service for each instance or configuration within an SBOM is difficult. Building SBOMs with this aggregate use case in mind will be important to managing this deluge of data, and a key to that is a clearer demand signal from consumers of cloud SBOMs, in and outside of the public sector, about how they aim to incorporate that data into their risk-management practices.

A standardized method for companies (and other entities) to inform each other of dependencies, used and combined at scale, would ease the task of assessing risk across sectors. For SBOMs to fulfill this role, the information contained within them must be consistently organized, filled, and updated, which might pose a challenge to organizational resources. Such data would be most useful when combined with assessments of the context surrounding any piece of software. Even so, SBOMs, as currently imagined, still provide a valuable piece of the puzzle not otherwise measurable. Better data on the arrangement of and relationships with the larger software ecosystem would allow CISA and other agencies to target resources more effectively toward shoring up mission-critical software.

Why define use cases at all?

Clearly defining the use cases will help guide and preserve the momentum of SBOM adoption and development, from shaping the automated tools for SBOM ingestion to pointing toward new product offerings and molding federal procurement policy. Only considering the challenges of SBOM generation while disregarding the other end of the pipe risks drowning purchasers, developers, and acquisition officers alike in a sea of useless spreadsheets and symbolic compliance certifications.

Though SBOMs and this paper’s considered uses of them are as important to proprietary software components as open-source ones, for the latter, they provide the beginnings of a more fundamental guidance, too. Unlike in traditional supply chains for physical goods or in the exchange of proprietary code, OSS dependence rarely sees an exchange of money or a contractual agreement.27 Rather, there is simply a quick “pip install XX --user” and “import YY as ZZ,” often from the public repository. SBOM adoption can eventually change the nature of that informal incorporation, and policymakers still have a chance to sculpt, for better or for worse, the roles and responsibilities that will redefine the ecosystem.

A key policy challenge is determining exactly which entities are ultimately responsible for producing and publishing SBOMs. Suppliers to finished goods manufacturers, due to various global and national regulations, often must detail the source of their materials—whether from forced or child labor, farmed or created under sustainable practices, acquired legally, and so on. The answers have implications for marketing as well as compliance and legal departments. Someone in the chain of the software development lifecycle must be responsible for the creation of SBOMs, but the trust framework for the completeness and veracity of their claims has yet to be developed, and debate over who, precisely, is responsible for making them and what levers are appropriate for achieving compliance persists. Burdening open-source developers and maintainers with that task, though, is an overreach in the absence of ubiquitous tooling to generate SBOMs automatically.

At the regulatory level, all this is challenging, as countries take multiple approaches to what entity is responsible for providing compliance and conformance assurances. This also complicates how governments support the security of open-source software supply chains, as each may have a different goal or preferred method despite aligned motivations. In the United States, CISA wants to assist, even lead, efforts to secure critical open-source software. However, the culture of open-source communities, the history of their development, and the very tenets that make open source a vital font of innovation all buck against direct government regulation in such stewardship, especially given that open-source code, legally in the United States, is a form of free speech.28 Importantly, governments supporting the open-source ecosystem will not be able to rely on blanket requirements, and their assistance in identifying critical projects, supporting tooling development, and investing in developers and communities will provide more fruitful results.

SBOMs, sufficiently standardized and adopted, offer data that can serve critical policy challenges when combined with appropriate tooling and processes, allowing a better understanding of and investment in dependencies before incidents occur, as well as more complete vulnerability remediation fixes afterward. Applied and used correctly, SBOMs can make the ecosystem’s most capable actors responsible for its coherence. Incorrectly executed, burdensome requirements for SBOM generation could sterilize the open-source world’s thriving innovation.

So, what should you do about it?

SBOM generators have an outsized say in the use cases of SBOMs because they determine what each bill of materials contains. In developing tools for aggregation, analysis, and production of SBOMs, generators could do the following to speed adoption and provide a more complete, practical set of capabilities to SBOM consumers:

  • Develop tooling to convert raw SBOM data into actionable information more intuitively. CISOs will not have the time or resources to parse vast, rapidly changing dependency information by hand, but automated checks with customizable risk-tolerance levels and other policies can make SBOMs a practical tool during acquisition and incorporation decision-making processes. Adding context alongside SBOMs that clearly declares what they do and do not contain and what purposes they serve can help here.
  • Develop tooling to provide more practical and varied information based on SBOM contents. Many of the use cases discussed above require a touch more detail than conveyed by current SBOM formats. This next layer of tooling, in tandem with products that coordinate SBOM consumption, will provide value both to their manufacturers and users.

The OMB and CISA have recently begun moving towards SBOM requirements at the federal level, likely in tandem with updates to government procurement processes and working with critical infrastructure sectors. They face a key challenge:

  • Provide better support for smaller enterprises that cannot easily adopt and produce SBOMs in a compliant manner. CISA might pursue this through added tooling and support in their small-to-medium business (SMB) programs and by tailoring any legal requirements to the unique needs and exposures of different sectors. These need not be new tools adding more complexity and variation to the SBOM landscape, but rather increased funding and guidance for SMBs to access tools normally available only to larger enterprises. Large IT vendors can also act as an intermediary in this provision by offering tooling and support for SMBs with government subsidies.

CISA could model practices to gather SBOM data beyond that used by a single enterprise. Wider collection of SBOM data is necessary for the envisioned aggregate use case. Although this process is more straightforward for open-source systems, there are valid concerns about SBOMs revealing proprietary information and providing attackers with the tools to identify vulnerable targets, particularly among software-as-a-service vendors, whose products are otherwise difficult to scrutinize. Industry could work with government to identify solutions to this information problem; doing so would increase the supply-chain insight SBOMs could provide. Aggregating and analyzing in-house collections of SBOMs first would be a good starting point and force government and industry to directly address the tradeoffs between identifying nodes of systemic risk to better secure them and pointing attackers to those nodes—some of which will be under-supported—through their identification.

SBOM users will need to provide the demand signals to producers that shape the future utility of software bills of materials. Often, users and consumers will be the same party, or at least departments within the same company, but they may also be small firms less focused on tech development, non-profits, or companies without the resources to do more than implement well-documented tooling. This responsibility is also a chance to extract significant value from SBOMs.

  • Accept the imperfect SBOM and iterate: If a complete SBOM must trace dependencies all the way down to another complete SBOM, they will rarely exist except for the simplest of components. Imperfect is not impractical. The processes that develop around SBOM use must not assume or depend on complete information. Industry and government could explicitly discuss how to navigate imperfect SBOMs and thresholds for acceptable inaccuracy while ensuring users can adopt and iterate on necessarily imperfect standards.
  • Innovate your use cases: Depending on the depth of information contained in or pointed to by an SBOM, consuming organizations might highlight the use of memory-unsafe languages, insecure calls, unmaintained libraries, or methods highlighted in Open Web Application Security Project (OWASP) and other “top of” lists to block these technologies from their environment. Risk managers can even develop tools converting detailed SBOMs into tolerable-risk metrics.
  • Build with ease for the user in mind: Part of strengthening the utility and longevity of SBOMs is enabling the use of this rich source of data in a wide range of possible ways. Tooling should reflect the expectation that many users are non-expert and/or lack considerable resources for IT administration and security, prioritizing simplicity and intelligibility over maximal functionality. Enterprise support for SBOM-tool users can help here too.

Conclusion

Businesses and developers hold mixed sentiments toward requiring SBOM production in regulations. Keen observers will find working groups with names like “SBOMs Everywhere” with employees from the very same companies funding letters (thinly veiled by trade associations) denouncing some efforts to promulgate SBOM requirements through policy.29 Part of this fractured view of SBOMs reflects the early stages of SBOM maturity, and part, the variety of opinions and incentives within large organizations too often treated as monolithic entities. More importantly, it reflects a disconnect among available government levers, SBOM functionality, and industry incentives. Procurement requirements are one of government’s most effective levers for shaping cybersecurity practices, and industry insistence that government wait for trivial or even default compliance before regulation is circular—if SBOMs were standard practice already, there would be no need to specifically request them to begin with, and government requiring higher security standards from its vendors is far from aberrant. The mismatch between federal security needs and the state of SBOM adoption and maturity is a significant opportunity for industry to continue to deepen its partnership with government and other would-be SBOM users to keep up the pace on SBOM development while shaping the tools serving SBOMs and the challenges that they can address.

A key question persists: what do SBOM producers stand to gain, short of compliance, from their considerable toil? Many prior requirements of large suppliers and component suppliers—self-attestations or FedRAMP requirements, for example—might have necessitated great expenditure in return for relatively small benefits to individual entities. Without making a clear case for SBOM use and the resultant tools that provide return on investment, policymakers advancing SBOMs risk mortgaging their future as a marketing tool—another sticker slapped on the proverbial product denoting begrudging compliance with federal requirements. Successful policy supporting SBOMs must put them on a sustainable path, tying hard and fast requirements to clear benefits for the ecosystem and the entities within it. Part of this must translate to better articulating how SBOMs can be consumed and used toward a variety of ends and by a diversity of organizational types.

Without a clear, tangible value proposition, tied to considerations like the bottom line, future contracts, operations, or other immediately recognizable benefits, there will be friction between parties that desire to use SBOMs and those that will not willingly provide them, even as governments and other organizations push to make SBOMs a standard part of their procurements. It is worth noting that some of the best analogs to SBOMs share a similarly fraught origin. Nutrition labels, ingredient lists, and food-goods advertising regulations span a century-long tug-of-war between government, industry, and consumer.30 The transition from prepared-from-scratch meals to off-the-shelf purchasing helped spur Food and Drug Administration (FDA) regulation, as consumers required better visibility into their purchases.31 Notably, some companies already use SBOMs or similar data internally, of their own accord, and presumably, for some of the benefits enumerated here—Google and Microsoft are easy enough examples to find public records of this.32 33

This paper aims to remove some of the friction against SBOM adoption and strengthen their long-term utility as a source of data for important risk management decisions, showing potential consumers clear benefits from using SBOMs, nudging producers and tool developers towards new offerings, and making clear to policymakers the importance of decisions they are already considering. SBOMs, initially marketed in cybersecurity as a solution to the fact that one cannot secure dependencies one does not know about, can enable so much more along the way. It is time they were sold as such.

About the authors:

Amélie Koran is a nonresident senior fellow at the Cyber Statecraft Initiative under the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and the current director of external technology partnerships for Electronic Arts, Inc. Koran has a wide and varied background of nearly thirty years of professional experience in technology and leadership in the public and private sectors.

Wendy Nather is a nonresident senior fellow at the Cyber Statecraft Initiative under the Atlantic Council’s Digital Forensic Research Lab (DFRLab) and leads the Advisory CISO team at Cisco.

Stewart Scott is an assistant director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy.

Sara Ann Bracket is a research assistant at the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She focuses her work on open-source software security, software bills of materials, and software supply-chain risk management and is currently an undergraduate at Duke University.

Acknowledgments:

The authors of this paper would like to thank external reviewers John Speed Meyers, Aeva Black, William Bartholomew, and Allan Friedman, who all took significant time to provide input during its development, as well as Donald Partyka and Anais Gonzalez for designing the final document and others who contributed invaluable feedback along the way.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    “Open Source Software (OSS) Secure Supply Chain (SSC) Framework” (2022; repr., GitHub: Microsoft, August 4, 2022), https://github.com/microsoft/oss-ssc-framework/blob/165ba893f2080e75bc69acaa6ea3fc8550315738/specification/Open_Source_Software_(OSS)_Secure_Supply_Chain_(SSC)_Framework.pdf.
2    Incidents, rather than attacks, as several also included valid use cases and functionality leading to cascading failures or vulnerabilities—all important to recognize.
3    Adrian Bridgwater, “Linux Foundation Eases Open Source Licensing Woes,” Computer Weekly, August 19, 2011, https://web.archive.org/web/20210820144000/https:/www.computerweekly.com/blog/Open-Source-Insider/Linux-Foundation-eases-open-source-licensing-woes.
4    The Linux Foundation, “The Linux Foundation’s SPDX™ Workgroup Releases New Version of Software Package Data Exchange™ Standard – Linux Foundation,” August 30, 2012, https://www.linuxfoundation.org/press/press-release/the-linux-foundations-spdx-workgroup-releases-new-version-of-software-package-data-exchange-standard-2.
5    In practice, many real SBOM-generation processes are more complex—build processes might resolve placeholder dependencies, with only the end result reflected in an SBOM, for example.
6    National Telecommunications and Information Administration (NTIA), “The Minimum Elements For a Software Bill of Materials (SBOM)” (Washington, DC: United States Department of Commerce, July 12, 2021), https://www.ntia.doc.gov/report/2021/minimum-elements-software-bill-materials-sbom.
7    Exec. Order. No. 14028 on Improving the Nation’s Cybersecurity, Federal Register, 86 FR 26633 (May 12, 2021), https://www.federalregister.gov/documents/2021/05/17/2021-10460/improving-the-nations-cybersecurity.
8    Timothy B. Lee, “The Heartbleed Bug, Explained,” Vox, May 14, 2015, https://www.vox.com/2014/6/19/18076318/heartbleed.
9    Larry Zelvin, “Reaction on ‘Heartbleed’: Working Together to Mitigate Cybersecurity Vulnerabilities | Homeland Security,” Department of Homeland Security, April 11, 2014 [Updated September 20, 2018], https://www.dhs.gov/blog/2014/04/11/reaction-%E2%80%9Cheartbleed%E2%80%9D-working-together-mitigate-cybersecurity-vulnerabilities-0.
10    Cisco Security, “Cisco Security Advisory: OpenSSL Heartbeat Extension Vulnerability in Multiple Cisco Products,” Cisco, April 9, 2014, http://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20140409-heartbleed.
11    NTIA, “The Minimum Elements For a Software Bill of Materials (SBOM).”
12    The Cybersecurity Coalition, “Comments on NTIA’s Request for Information (RFI) on ‘Software Bill of Materials Elements and Considerations,’” June 17, 2021, https://assets.website-files.com/60cd84aeadd2475c6229482f/60ec9f0a15e85933daa3b5ca_Coalition%20SBOM%20Response-Final%206-17-21.pdf.
13    Stephen Hendrick, “The State of Software Bill of Materials (SBOM) and Cybersecurity Readiness” (The Linux Foundation | Research, January 2022), https://8112310.fs1.hubspotusercontent-na1.net/hubfs/8112310/LF%20Research/State%20of%20Software%20Bill%20of%20Materials%20-%20Report.pdf. The survey does well acknowledging and striving to address the above-mentioned sources of possible bias explicitly, too.
14    National Telecommunications and Information Administration, “Healthcare SBOM Proof of Concept” (NTIA, April 29, 2021), https://www.ntia.doc.gov/files/ntia/publications/ntia_sbom_healthcare_update-2021-04-29.pdf.
15    Sourced from conversations with New York Presbyterian.
16    NTIA, “The Minimum Elements For a Software Bill of Materials (SBOM),” 12.
17    Velichka Atanasova, “Let’s Get SBOM Ready – Open Source Blog,” VMWare, April 14, 2022, https://blogs.vmware.com/opensource/2022/04/14/sbom-ready/.
18    Alliance for Digital Innovation et al., “Cautionary Notes on Codifying Use of SBOMs,” September 14, 2022, https://fcw.com/media/multi_association_letter_on_sbom_final_9.14.2022.pdf.
19    To differentiate vulnerability management and incident response, consider the former tracking vulnerabilities, relevant threat intelligence around dependencies, preemptive response planning, and determining whether a vulnerability impacts an enterprise. The latter comes into play after that determination—guiding patch efforts, outreach to third-party maintainers, mitigation, and tailoring general remediation plans to specific incidents.
20    One of several ways to generate an SBOM.
21    Ariadne Conill, “Not All SBOMs Are Created Equal,” Chainguard, April 22, 2022, https://www.chainguard.dev/unchained/not-all-sboms-are-created-equal.
22    National Telecommunications and Information Administration (NTIA), “Vulnerability-Exploitability eXchange (VEX) – An Overview,” September 27, 2021, https://www.ntia.gov/files/ntia/publications/vex_one-page_summary.pdf.
23    Frank Nagle et al., “Census II of Free and Open Source Software — Application Libraries” (Linux Foundation Research; OpenSSF; Laboratory for Innovation Sciences at Harvard: Harvard Laboratory for Innovation Science (LISH) and Open Source Security Foundation (OpenSSF), March 2, 2022), https://lish.harvard.edu/publications/census-ii-free-and-open-source-software-%E2%80%94-application-libraries.
24    “Securing Open Source Software Act of 2022,” S.4913, 117th Cong. (2022), https://www.congress.gov/bill/117th-congress/senate-bill/4913.
25    Shalanda Young, United States, Office of Management and Budget, OMB Memo to the Heads of Executive Departments and Agencies, M-22-18, “Enhancing the Security of the Software Supply Chain through Secure Software Development Practices,” September 14, 2022, https://www.whitehouse.gov/wp-content/uploads/2022/09/M-22-18.pdf.
26    Kevin Beaumont [@GossiTheDog], “For Anybody Who Doesn’t Know, August 2022’s Windows Patches Included Fixes for NSA and GCHQ Reported Cryptographic Bugs. but MS Didn’t Tell You and Didn’t Issue a CVE.,” Tweet, Twitter, October 12, 2022, https://twitter.com/GossiTheDog/status/1580244775638212608.
27    Iliana Etaoin, “There Is No ‘Software Supply Chain,’” iliana.fyi, September 19, 2022, https://iliana.fyi/blog/software-supply-chain/.
28    Alison Dame-Boyle, “EFF at 25: Remembering the Case That Established Code as Speech,” Electronic Frontier Foundation, April 16, 2015, https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech.
29    Alliance for Digital Innovation et al., “Cautionary Notes on Codifying Use of SBOMs,” September 14, 2022.
30    Institute of Medicine (US) Committee on Examination of Front-of-Package Nutrition Rating Systems and Symbols, “Front-of-Package Nutrition Rating Systems and Symbols: Phase I Report,” in History of Nutrition Labeling, ed. Ellen A. Wartella, Alice H. Lichtenstein, and Caitlin S. Boon (Washington, DC: National Academies Press (US), 2010), https://www.ncbi.nlm.nih.gov/books/NBK209859/.
31    Department of Nutritional Sciences, University of Texas at Austin, “Factual Food Labels: A Closer Look at the History,” April 6, 2018, https://he.utexas.edu/ntr-news-list/food-labels-history.
32    Jessica Lyons Hardcastle, “Google SLSA, Linux Foundation Drops SBOM for Supply Chain Security Boost,” SDxCentral, June 18, 2021, https://www.sdxcentral.com/articles/news/google-slsa-linux-foundation-drops-sbom-for-supply-chain-security-boost/2021/06/.
33    Simon Bisson, “How Microsoft Will Publish Info to Comply with Executive Order on Software Bill of Materials,” TechRepublic, May 6, 2022, https://www.techrepublic.com/article/microsoft-publish-info-comply-executive-order-software-bill-materials/.

The post The cases for using the SBOMs we build appeared first on Atlantic Council.

GRU 26165: The Russian cyber unit that hacks targets on-site https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/the-russian-cyber-unit-that-hacks-targets-on-site/ Fri, 18 Nov 2022 13:44:53 +0000

The post GRU 26165: The Russian cyber unit that hacks targets on-site appeared first on Atlantic Council.

Russian hackers are not always breaching targets from afar, typing on their keyboards in Moscow bunkers or St. Petersburg apartment buildings. For some Russian government hackers, foreign travel is part of the game. They pack up their equipment, get on international flights, and covertly move around abroad to hack into computer systems.  

Enter GRU Unit 26165 (of the military intelligence agency Glavnoye Razvedyvatelnoye Upravlenie), a military cyber unit with hackers operating remotely and on-site. Despite the security risks on-site cyber operations pose to governments and international organizations, and the questions they raise about how the West should track and combat Russian state hacking, Russia’s activities in this realm are not receiving sufficient policy attention. 

GRU Unit 26165, the 85th Main Special Communications Center 

In March 2018, after the GRU tried to murder former Russian intelligence officer Sergei Skripal and his daughter Yulia in Salisbury, England, using a Novichok nerve agent, the Kremlin came under international fire. British intelligence officials blamed the GRU, where Skripal used to work (and later became a British informant); the multinational Organization for the Prohibition of Chemical Weapons (OPCW), which enforces the Chemical Weapons Convention, launched an investigation; and in June of the same year, OPCW countries voted to let the body attribute chemical weapons attacks to particular actors. (A year later, the OPCW would formally ban Novichok nerve agents.) Additional journalistic investigations into the perpetrators, meanwhile, continued to point to the GRU’s involvement. 

Although the OPCW’s investigation was not made public for months, the Russian government decided to move quickly against the organization, turning to a tactical cyber unit to do so. 

OPCW Headquarters

On April 10, 2018, four Russian nationals landed at Amsterdam Schiphol Airport in the Netherlands. With diplomatic passports in hand, they were met by a member of the Russian embassy in The Hague. After loading a car with technical equipment—including a wireless network panel antenna to intercept traffic—the four individuals scouted the OPCW’s headquarters in The Hague for days, taking photos and circling the building before being intercepted by the Dutch General Intelligence and Security Service (Algemene Inlichtingen- en Veiligheidsdienst or AIVD) and sent back to Moscow. Seemingly, the plan had been for the operatives to hack into the OPCW’s systems to disrupt investigations into the attempted GRU chemical weapon attack.  

The Netherlands made all of this public on October 4, 2018, with Dutch intelligence identifying the four operators by name—Aleksei Sergeyevich Morenets and Evgenii Mikhaylovich Serebriakov were described as “cyber operators” and Oleg Mikhaylovich Sotnikov and Alexey Valerevich Minin were described as “HUMINT (human intelligence) support.” The AIVD linked all of these individuals to Russia’s GRU. A Department of Justice (DOJ) indictment issued on the same day went a step further, linking the hackers—Morenets and Serebriakov—to GRU Unit 26165. 

Unit 26165, otherwise known as Fancy Bear, was already known for breaking into systems from afar, including the Democratic National Committee in 2016 and World Athletics (previously the International Amateur Athletic Federation) in 2017. Yet, the revelations around the attempted OPCW hack made clear that Unit 26165 does much more. The full DOJ indictment, subsequently published by the National Security Archive at The George Washington University, alleged that Morenets “was a member of a Unit 26165 team that traveled with technical equipment to locations around the world to conduct on-site hacking operations to target and maintain persistent access to WiFi networks used by victim organizations and personnel.” Serebriakov also belonged to such a team. While Unit 26165 often conducts remote hacks from Russia, the indictment stated that “if the remote hack was unsuccessful or if it did not provide the conspirators with sufficient access to victims’ networks,” Unit 26165 would carry out “‘on-site’ or ‘close access’ hacking operations.” 

The OPCW incident was not the first time these particular hackers went abroad to conduct operations. According to the DOJ, Morenets traveled to Rio de Janeiro, Brazil, and Lausanne, Switzerland, in 2016 to break into the WiFi networks used by people with access to the US Anti-Doping Agency, the World Anti-Doping Agency, and the Canadian Center for Ethics in Sport. Serebriakov, the indictment stated, also participated in these on-site hacking operations. Both individuals allegedly planned to target the Spiez Laboratory in Switzerland after the OPCW hack. The indictment alleged that Ivan Sergeyevich Yermakov, also part of GRU Unit 26165, provided remote reconnaissance support for his colleagues’ on-site hacking operation against the OPCW. 

Additionally, it is speculated that these on-site hackers were supported by another GRU unit, which is where the other two Russians caught in the Netherlands by the AIVD enter the picture. Sotnikov and Minin were described generically by the Dutch as HUMINT support for the two hackers, and as “Russian military intelligence officers” by the DOJ’s full indictment. Neither of these government documents mentions a specific GRU unit associated with Sotnikov or Minin. 

Published in tandem with the October 4, 2018 state disclosures was a new Bellingcat investigation linking Morenets’ Russian car to the Unit 26165 building in Russia. It also linked Minin’s car registration to the GRU “Conservatory.” The Conservatory—formally numbered GRU Unit 22177—is the Russian Defense Ministry’s Military Academy and a training site for the GRU, located in Moscow near GRU headquarters and other GRU training facilities. Due to Minin’s connection to 22177 and the Dutch and US governments’ vague references to Sotnikov and Minin as “HUMINT support” and “Russian military intelligence officers” separate from Unit 26165, numerous articles have speculated that operatives from another GRU unit were tasked to support the mission in The Hague. 

Stepping back, assessing the picture 

Policymakers should use this information as a case study for how Russian government hackers—and, theoretically, state hackers from other adversary countries—move around the world to break into systems. The use of on-site cyber operations abroad seems unique to this GRU team, with many possible motivations at play. It is unclear how high up the oversight chain these on-site operations go. What is clear, though, is that Western governments cannot restrict their hunt for Russian hackers to the digital sphere; they must also remember how Russian hacking fits into broader Russian intelligence activities, including overseas. 

There are several takeaways and implications that result from this information. The on-site, overseas cyber operations of GRU Unit 26165 appear to stand out from those of other Russian government cyber units. Of course, cyber capabilities are a part of intelligence operations more broadly, and many human operations around the world leverage cyber reconnaissance on an ongoing basis. Nonetheless, when the United Kingdom (UK) released its own statement on Russian government cyber activity in October 2018, it clearly differentiated between the activities of Unit 26165 in the Netherlands, Brazil, and Switzerland and those of Unit 74455 (Sandworm), which it stressed “were carried out remotely—by GRU teams based within Russia.” The DOJ indictment appears to suggest, although this is not totally clear, that hackers going abroad are part of at least one specific sub-team within the broader cyber unit. Further, the DOJ indictment lists numerous examples of on-site hacks or hack attempts, but publicly available information has not exposed the same kind of on-site operations by Russia’s Foreign Intelligence Service, the SVR. 

The motivations behind the on-site operations of Unit 26165 are also a key question. Based on publicly available information, its proclivity for “close access” operations leans toward disrupting high-profile investigations into potentially embarrassing Russian government activity. The first set of reported hacks targeted international investigations into allegations of Russian doping at the Olympics; the second set of hacks targeted the international investigation into the attempted murder of the Skripals with chemical weapons. It is possible, therefore, that protecting the Kremlin’s image is a high priority. Simultaneously, the DOJ indictment stated that Unit 26165 carries out on-site operations when remote operations are unsuccessful, suggesting a more functional, effects-oriented motive for sending hackers overseas. 

However, there is another possibility: The GRU may simply be using on-site operations when it needs to draw attention away from its own failures. The botched attempt to murder Sergei and Yulia Skripal was carried out by GRU Unit 29155, a Russian military intelligence and assassination team with close relationships to the Signal Scientific Center federal research facility and the Ministry of Defense’s State Institute for Experimental Military Medicine in St. Petersburg, entities suspected of managing Russia’s Novichok program. GRU operatives are well-known for their high-risk appetites and sometimes overt violence, even relative to other Russian intelligence organs like the Federal Security Service (FSB), Russia’s domestic security agency. (That said, the FSB is a violent organization, too, carrying out repressive tactics in Russia and, in 2019, assassinating a Georgian asylum seeker in Berlin.) 

This tendency is playing out in cyberspace already, given that GRU teams are behind the NotPetya malware attack, shutdowns of Ukrainian power grids, and other more destructive, publicly visible operations. Such cyber activities, in line with broader intelligence cultures, stand in contrast to agencies like the SVR, which appears to place a premium on covertness, both online and offline. It is not out of the question that the GRU, frantic to undermine an investigation into its own failed operation, sent Unit 26165 operatives overseas. That Unit 26165 hackers Morenets and Serebriakov may have had support from other parts of the GRU (HUMINT operators Sotnikov and Minin) in the OPCW plot suggests possible broader intra-agency coordination. But again, it is easy—and sometimes misguided—to assume there is more coordination within the Russian security services than actually occurs. 

All of this raises a final and more interesting question always at play in the Russian cyber ecosystem: How far up the chain does oversight of on-site hacks go? 


Cyber and information operations with high political sensitivity, which Moscow conceptualizes more cohesively than Western governments do, are more likely to be supervised by the Kremlin. The US intelligence community assessed, for example, that the influence actions targeting the 2016 US election were “approved at the highest levels of the Russian government,” and a similar conclusion was reached vis-à-vis President Vladimir Putin and Russia’s election interference in 2020. This may also be true for more traditional intelligence operations. When the UK finished its investigation into the murder of former Russian spy Alexander Litvinenko, who was killed on British soil with the radioactive material Polonium-210, it concluded that Putin and Russian Security Council head Nikolai Patrushev “probably” approved the killing. 

The GRU’s botched murder attempt on the Skripals garnered significant international attention. At the time, Russian officials were already criticizing the OPCW’s investigations into the Assad regime’s use of chemical weapons in Syria, which Russia’s deputy foreign minister called an attempt “to make the OPCW draw hasty but at the same time far-reaching conclusions.” When the investigation into the Skripal poisonings began, senior officials like Russian Foreign Minister Sergei Lavrov falsely claimed that a lab used by the OPCW picked up traces of a nerve agent possessed by NATO countries but not Russia. Putin, meanwhile, has always held particular contempt for people he perceives as betraying the Russian nation, once saying that “traitors always meet a bad end,” suggesting a kind of personal anger directed at individuals like Sergei Skripal who became agents for the West. The Olympic doping investigations, too, proved an embarrassment for Moscow. 

In this vein, it is quite possible that higher-level Kremlin officials directed the GRU to act against investigations like the OPCW’s, prompting the GRU to deploy Unit 26165 hackers to the Netherlands. It is also plausible that the activities of Unit 26165 merely reflect broader intelligence collection priorities, spying on those trying to “hurt” Russia, such as investigators looking into Russian athlete doping. Since there are few publicly known cases of Unit 26165 conducting “close access” operations, perhaps these are not representative samples, with the GRU carrying out these activities on its own after all. 

Regardless, the GRU is clearly sending hackers overseas to carry out operations. Going forward, Western intelligence and law enforcement personnel, as well as multinational organizations, would be wise to pay attention. 

The 5×5—The rise of cyber surveillance and the Access-as-a-Service industry https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-rise-of-cyber-surveillance-and-the-access-as-a-service-industry/ Wed, 16 Nov 2022 05:01:00 +0000 Experts discuss the rise of cyber surveillance and the impact of the Access-as-a-Service industry on the United States and its allies.

The post The 5×5—The rise of cyber surveillance and the Access-as-a-Service industry appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Approximately one year ago, on November 3, 2021, the US Commerce Department added four companies, including Israel-based NSO Group, to its Entity List for supporting cyber surveillance and access-as-a-service activities “that are contrary to the national security or foreign policy interests of the United States.” Foreign governments used NSO Group’s products, notably its Pegasus spyware, to target individuals, such as journalists and activists, and suppress dissent. Just one month later, reporting indicated that Apple tipped off the US Embassy in Uganda that an undisclosed foreign government had targeted the iPhones of eleven embassy employees. 

A New York Times report published on November 12 reveals how close the United States was to using Pegasus for its own investigative purposes. The FBI, which previously acknowledged having acquired a Pegasus license for research and development, contemplated use of the tool in late 2020 and early 2021 and developed guidelines for how federal prosecutors would disclose its use in criminal proceedings. The FBI ultimately decided not to buy from NSO, amid the many stories of abuse of the tool by foreign governments, but the revelation underscores the double-edged nature of cyber surveillance technologies designed to support law enforcement and intelligence missions. 

There are dozens of firms in the Access-as-a-Service industry developing and proliferating a powerful class of surveillance technologies. We brought together a group of experts to discuss the rise of cyber surveillance and the impact of this industry on the United States and its allies. 

#1 What implications can foreign governments’ domestic cyber surveillance programs have on US national security?

Siena Anstis, senior legal advisor, Citizen Lab, Munk School of Global Affairs & Public Policy, University of Toronto

“The proliferation of spyware presents a national security risk to the United States. These technologies facilitate not only the targeting of human rights defenders and civil society, but also provide an across-the-board opportunity to undertake acts of espionage through their ability to exploit vulnerabilities in popular applications and operating systems that impact everyone. This was well-illustrated by the targeting of US diplomats in 2021 with NSO Group’s Pegasus spyware. No one is safe from being targeted with this highly intrusive, silent, and increasingly hard to detect technology. This risk extends to the US government.” 

Winnona DeSombre, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council

“We live in an increasingly interconnected world when it comes to data and surveillance. From an individual perspective, US citizens who work on national security regularly interface with relatives and friends abroad who may be surveilled. US military service members use TikTok, an app whose data flows back to China. Domestic surveillance in another country does not just touch that country’s citizens, but it also touches any US national who interfaces with that country’s people and corporations.” 

Lars Gjesvik, doctoral research fellow, Norwegian Institute of International Affairs

“Way back in ancient 2013, the US intelligence community warned that private companies were developing tools that aided foreign states in targeting US systems. Clearly, this has been of some concern for a decade and has some implications for national security. There is no doubt that such commercially available tools have done great harm when it comes to human rights and targeting civil society, and you have some reported cases like Project Raven where commercial tools start to become a national security problem as well.” 

Kirsten Hazelrig, policy lead, The MITRE Corporation

“There are absolutely direct threats to US interests from the use of cyber surveillance abroad—any newspaper will relay confirmed reports of US officials being targeted abroad by tools such as Pegasus. However, this is simply a new tool for an age-old game of espionage. Perhaps more insidious is how tools and programs can be abused to enable the spread of authoritarianism, degrade human rights, and erode democratic values. I am not sure if anyone fully understands the implications to national security if these capabilities are allowed to spread unchecked.” 

Ole Willers, postdoctoral researcher, Department of Organisation, Copenhagen Business School

“Within the context of cyber surveillance programs, the distinction between domestic and foreign operations is not always as clear-cut. Domestic campaigns oftentimes target individuals located in other jurisdictions, including the United States. The targeting of Canadian-based activist Omar Abdulaziz by Saudi Arabian surveillance operations is a prominent example.”

#2 Where do cyber capabilities fit into the spectrum of surveillance technologies?

Anstis: “Spyware technology provides governments with the ability to undertake highly intrusive surveillance. Sophisticated versions of this technology provide complete entry into targeted devices, including the contents of encrypted communication apps, camera, microphone, documents stored on the phone, and more. This impacts not only targeted individuals, but also exposes those who communicate with these people such as friends, family, and colleagues. Governments have a variety of surveillance technologies at their disposal, and spyware is undoubtedly one of the most stealthy and intrusive tools on the market that makes it difficult, if not impossible, for journalists, human rights defenders, activists, and other members of civil society critical of the government to do their work.” 

DeSombre: “Cyber capabilities that feed into offensive cyber operations are usually far more tailored than surveillance technology writ large, especially compared to dragnet surveillance technologies. The little bit of overlap occurs when governments want to surveil targets who they believe are of higher value or harder to get to, in which case authoritarian governments will break out the more expensive capabilities like zero-days or purchase expensive spyware licenses like those offered by NSO and Candiru.” 

Gjesvik: “The term ‘surveillance technologies’ is quite broad, and it depends greatly on how you define it. But if you think about the capabilities and services provided to intelligence, law enforcement, or military agencies, then it is a question of how sophisticated they are and their scope. The most sophisticated cyber capabilities offered by the top-tier companies probably equal the capabilities of most intelligence agencies, and there is no real difference functionally in them being used domestically or against strategic adversaries.” 

Hazelrig: “Surveillance technologies are broad sets of tools that enable a human actor to achieve an objective, be it to improve traffic, indict a criminal, track terrorist movements, stalk a partner, or steal a competitor’s data. Cyber capabilities can range as widely as these objectives and their targets. They may range from low-end spyware to extremely sophisticated technology, and are almost always paired with additional tools and tradecraft that make them impossible to evaluate devoid of operational context.” 

Willers: “If we define cyber capabilities in terms of the various activities oriented towards gaining stealth access to digital information, their importance for surveillance operations can hardly be overstated. Whereas traditional surveillance technologies continue to play a role, cyber capabilities offer forms of access that are much more comprehensive. Access to a smartphone is fundamentally different from the traditional wiretap and allows for the real-time surveillance of location patterns, communications, web searches, financial transactions, and more.”

#3 What is the Access-as-a-Service industry and what kind of relationship should the United States and its allies have with it?

Anstis: “The Access-as-a-Service industry describes companies that provide services to different actors—often states—to access data or systems. In the past few years, we have seen an acceleration in human rights abuses associated with this industry and a growing formalization of the sector with private investors and states increasingly interested in the growth of these companies. Considering the litany of human rights abuses that follows the growing availability of the technologies and services offered by this industry, the United States and other states have an obligation to regulate and limit the availability of these technologies and the industry’s business practices.” 

DeSombre: “The Access-as-a-Service industry makes offensive cyber operations incredibly simple to pull off—aggregating disparate capabilities that take years of investment to make (zero-days, malware, training, infrastructure, processes) into a single solution that a government can purchase off the shelf and use easily. It is not necessarily a bad industry—the United States and its allies also rely on privatized talent to conduct cyber operations. However, the United States and its allies must be proactive about shaping responsible behavior within the industry to ensure these services are not purchased en masse by authoritarian regimes and adversaries.” 

Gjesvik: “Simply put, it is an industry that sells access to digital data and systems. A wide swathe of technologies and services fits into this definition. Considering what relationship Western states should have with it should start with acknowledging that most states rely on private contractors and capabilities to some extent. There are clear problems of democratic oversight and misuse, but having their intelligence agencies and law enforcement lose access to digital evidence and data is probably not something governments would accept, and smaller states would struggle to develop the capabilities themselves. It is hard to decide on a relationship with a surveillance industry without deciding on the role of surveillance in modern societies, and I do not think we have done that.” 

Hazelrig: “Access-as-a-Service, or the related but more colorfully named “hacker-for-hire” industry, are loose terms for the criminal actors that sell the information, capabilities, and services necessary to conduct cyber intrusions. These actors sell their wares with little regard as to impact and intent, enabling ransomware and other attacks.” 

Willers: “The Access-as-a-Service industry is a niche market that sells data access to state agencies, and it has repeatedly been singled out for facilitating the proliferation of offensive cyber capabilities to authoritarian states. The United States and its allies face a dilemma in that they rely on the Access-as-a-Service industry to provide domestic law enforcement and intelligence agencies with cutting edge technology. Simultaneously, they have a strong incentive to limit the availability of these technologies to other customers. Balancing these interests has proven extremely difficult, which is why I see a need to limit our dependency on the private sector within this context.” 

#4 In what ways does government surveillance compare and contrast with corporate surveillance?

Anstis: “Government surveillance is similar to corporate surveillance in that both exploit the fact that we increasingly live our lives on internet-connected devices. The data we generate in our daily interactions, which is then collected by companies and governments, can be used for a variety of purposes that target and exploit us—from the crafting of targeted advertising to location tracking to the mapping of a human right activist’s network. However, government surveillance differs in at least one important respect: governments have the power to not only surveil, but also to detain, torture, kidnap, or otherwise enact acts of violence against an individual. Spyware technologies facilitate the government’s ability to engage in these activities.” 

DeSombre: “The podcast I help run just made an episode on this! Effectively, corporate surveillance and government surveillance have two separate goals: corporations collect your data to sell (usually to advertisers who then target you with personalized advertisements), while the government collects data for law enforcement or national security purposes. US government surveillance has hard rules it must follow for collecting on US citizens, although some of this is circumvented by buying corporate data. US and EU companies are now getting increasingly constrained by data privacy laws as well. But these types of regulations on both companies and governments differ vastly from country to country.” 

Gjesvik: “When you think about who conducts the surveillance, the big difference would be the extent to which government surveillance is supposedly in the end about protecting its citizens while corporate surveillance is mainly about the interests of the corporation. If it is about who actually does the surveillance then the distinction between governments and private actors can be pretty blurry, as can the level of capabilities.” 

Hazelrig: “The technical aspects of government and commercial surveillance are similar, and often share tools and techniques. However, the practices around their use are widely different. For a large part, democratic states limit surveillance through public opinion and law. There is admittedly misuse and abuse, but an intent and organizational structure to ‘do good.’ This is not necessarily true of commercial capabilities that may be sold without understanding of or care about intended use. As the opaque commercial market evolves, we are just beginning to understand the full spectrum of uses and impacts. Democratic states need to develop norms for law enforcement and other acceptable uses of cyber intrusion and surveillance capabilities, and to enforce actions against those that violate these norms and the industry that supplies them.”

Willers: “Both can be problematic considering that privacy is a fundamental human right in the European Union. Access to personal information has become a key asset across many industries, but the gathering of this information is a purely private and for-profit undertaking, however problematic it may be. State surveillance derives from a desire to provide public safety, which can be a good thing as long as it remains proportional and rooted in democratic norms—conditions that cannot be taken for granted.”

#5 How has the Access-as-a-Service industry evolved over the past two decades and where do you see it going from here?

Anstis: “The Access-as-a-Service industry has become increasingly formalized in the past two decades, with growing interest from investors and states in terms of funding the industry, as well as accessing the services and technologies offered. I see the next few years as a critical turning point in the industry’s development. Countless human rights abuses have brought increased awareness that the services and technologies offered by the Access-as-a-Service industry have serious human rights ramifications—as well as national security concerns—that need to be addressed. With ongoing investigations in the European Parliament, the United States, and elsewhere into companies that participate in this industry, I hope that we will see more specific steps aimed at curbing and controlling it.” 

DeSombre: “Like every part of the cybersecurity ecosystem since the early 2000s, the Access-as-a-Service industry has grown, professionalized, and turned towards mobile, embedded, and other non-desktop systems. Your laptop is not the only place with interesting data!” 

Gjesvik: “This is a pretty opaque industry, and there is not a ton of structured encompassing data available that I am aware of, but there are some broad trends. The first is globalization, a quite substantive expansion of tools and technologies available, and a lot more money to be made as well. Going forward, I am probably most interested in the extent to which the industry is controllable by any state actor. Will recent efforts by the United States and the European Union succeed in limiting the worst excesses? Or will it just accelerate the diversification of suppliers?” 

Hazelrig: “So long as there have been criminal hackers, there have been ways for those with the right connections to procure intrusion services. However, about a decade ago, we started to see the emergence of professional firms that sold these services commercially, primarily to governments around the globe. The past couple of years has brought casual proliferation and a booming ‘consumer’ market—shady companies advertise euphemistically-phrased services on mainstream platforms such as LinkedIn, and many online criminal marketplaces have whole sections of specialty products and services from which to choose.” 

Willers: “The origins of the Access-as-a-Service industry can be traced back to a combination of privatization dynamics in the telecommunication sector during the 1990s, the rise of digital communication systems, and the political focus on surveillance in the aftermath of the September 11 terrorist attacks. Since then, the industry has developed at the speed of technology, and there is good reason to doubt that the United States remains in a position to control it. Limiting access to technology is difficult, especially when it is as mobile as spyware technology. This is why I doubt that the United States or any other country alone can control the operations of the market.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The cyber strategy and operations of Hamas: Green flags and green hats
https://www.atlanticcouncil.org/in-depth-research-reports/report/the-cyber-strategy-and-operations-of-hamas-green-flags-and-green-hats/
Mon, 07 Nov 2022

This report seeks to highlight Hamas as an emerging and capable cyber actor, and help the policy community understand how similar non-state groups may leverage the cyber domain in the future.


Executive summary

Cyberspace as a domain of conflict often creates an asymmetric advantage for comparably less capable or under-resourced actors to compete against relatively stronger counterparts.1 As such, a panoply of non-state actors is increasingly acquiring capabilities and integrating offensive cyber operations into their toolkits to further their strategic aims. From financially driven criminal ransomware groups to politically inspired patriot hacking collectives, non-state actors have a wide range of motivations for turning to offensive cyber capabilities. A number of these non-state actors have histories rooted almost entirely in armed kinetic violence, from professional military contractors to drug cartels, and the United States and its allies are still grappling with how to deal with them in the cyber context.2 Militant and terrorist organizations have their own specific motivations for acquiring offensive cyber capabilities, and their operations therefore warrant close examination by the United States and its allies to develop effective countermeasures.

While most academic scholarship and government strategies on counterterrorism are beginning to recognize and address the integral role of some forms of online activity, such as digital media and propaganda on behalf of terrorist organizations, insufficient attention has been given to the offensive cyber capabilities of these actors. Moreover, US strategy,3 public intelligence assessments, and academic literature on global cyber threats to the United States overwhelmingly focus on the “big four” nation-state adversaries—China, Russia, Iran, and North Korea. Before more recent efforts to address the surge in financially driven criminal ransomware operations, the United States and its allies deployed policy countermeasures overwhelmingly designed for use against state actors.

To the extent that US counterterrorism strategy addresses the offensive cyber threat from terrorist organizations, it is focused on defending critical infrastructure against the physical consequences of a cyberattack. Hamas, despite being a well-studied militant and terrorist organization, is expanding its offensive cyber and information capabilities, a fact that is largely overlooked by counterterrorism and cyber analysts alike. Overshadowed by the specter of a catastrophic cyberattack from other entities, Hamas's real and ongoing cyber activity instead prioritizes espionage and information operations.

This report seeks to highlight Hamas as an emerging and capable cyber actor, first by explaining Hamas’s overall strategy, a critical facet for understanding the group’s use of cyber operations. Next, an analysis will show how Hamas’s cyber activities do not indicate a sudden shift in strategy but, rather, a realignment that augments operations. In other words, offensive cyber operations are a new way for Hamas to do old things better. Finally, the policy community is urged to think differently about how it approaches similar non-state groups that may leverage the cyber domain in the future. This report can be used as a case study for understanding the development and implementation of cyber tools by non-state entities.

As the title of this report suggests, Hamas is like a green hat hacker—a term that is not specific to the group but recognized in the information security community as someone who is relatively new to the hacking world, lacking sophistication but fully committed to making an impact and keen to learn along the way.4 Hamas has demonstrated steady improvement in its cyber capabilities and operations over time, especially in its espionage operations against internal and external targets. At the same time, the organization’s improvisation, deployment of relatively unsophisticated tools, and efforts to influence audiences are all hallmarks of terrorist strategies. This behavior is in some ways similar to the Russian concept of “information confrontation,” featuring a blend of technical, information, and psychological operations aimed at wielding influence over the information environment.5

Understanding these dynamics, as well as how cyber operations fit into the overall strategy, is key to the US development of effective countermeasures against terrorist organizations’ offensive cyber operations.

“Pwn” goal

In the summer of 2018, as teams competed in the International Federation of Association Football (FIFA) World Cup in Russia, Israeli soldiers followed the excitement on their smartphones from an Israel Defense Forces (IDF) base thousands of miles away. Like others in Israel, the soldiers were using a new Android application called Golden Cup, available for free from the Google Play store. The program was promoted in the lead-up to the tournament as “the fastest app for live scores and fixtures for the World Cup.”6 The easy-to-use application delivered as advertised—and more.

Once installed, the application communicated with its command-and-control server to surreptitiously download malicious payloads onto user devices. The payloads infected the target devices with spyware, a variety of malware that discreetly monitors the target’s device and steals its information, usually for harmful use against the target individual.7 In this particular case, the spyware was intentionally deployed after the application was downloaded from the Google Play store in order to bypass Google’s security screening process.8 This allowed the spyware operator to remotely execute code on user smartphones to track locations, access cameras and microphones, download images, monitor calls, and exfiltrate files.

Golden Cup users, which included Israeli civilians and soldiers alike, did not realize that their devices were infected with spyware. As soldiers went about their daily routines on bases, the spyware operators reaped reams of data from the compromised smartphones. In just a few weeks of discreet collection, before discovery by IDF security, the adversary successfully collected non-public information about various IDF bases, offices, and military hardware, such as tanks and armored vehicles.9

The same adversary targeted Israeli soldiers with several other malicious Android applications throughout the summer of 2018. A fitness application that tracks user running routes collected the phone numbers of soldiers jogging in a particularly sensitive geographic location. After collecting these numbers, the adversary targeted the soldiers with requests to download a second application that then installed spyware. Additional targeting of Israeli soldiers that same summer included social engineering campaigns encouraging targets to download various spyware-laced dating applications with names like Wink Chat and Glance Love, prompting the IDF to launch the aptly named Operation Broken Heart in response.10

Surprisingly, this cyber espionage campaign was not the work of a nation-state actor. Although the clever tradecraft exhibited in each operation featured many of the hallmarks of a foreign intelligence service, neither Israel’s geopolitical nemesis Iran nor China,11 an increasingly active Middle East regional player, was involved.12 Instead, the campaign was the work of Hamas.

1. Introduction

The asymmetric advantage afforded by cyberspace is leading a panoply of non-state actors to acquire and use offensive cyber capabilities to compete against relatively stronger counterparts. The cyber threat from criminal ransomware organizations has been well documented, yet a range of other non-state actors traditionally involved in armed kinetic violence, from professional military contractors to drug cartels, are also trying their hand at offensive cyber operations, and the United States and its allies are still grappling with how to respond. Each actor has a discrete motivation for dabbling in cyber activities, and lumping them all into one bucket of non-state actors can complicate efforts to study and address their actions. The operations of militant and terrorist organizations in particular warrant close examination by the United States and its allies in order to develop effective countermeasures.

A robust online presence is essential for modern terrorist organizations. They rely on the internet to recruit members, fund operations, indoctrinate target audiences, and garner attention on a global scale—all key functions for maintaining organizational relevance and for surviving.13 The 2022 Annual Threat Assessment from the US Intelligence Community suggests that terrorist groups will continue to leverage digital media and internet platforms to inspire attacks that threaten the United States and US interests abroad.14 Recent academic scholarship on counterterrorism concurs, acknowledging the centrality of the internet to various organizations, ranging from domestic right-wing extremists to international jihadists, and their efforts to radicalize, organize, and communicate.

The US government has taken major steps in recent years to counter terrorist organizations in and through cyberspace. The declassification of documents on Joint Task Force Ares and Operation Glowing Symphony, which began in 2016, sheds light on complex US Cyber Command efforts to combat the Islamic State in cyberspace, specifically targeting the group’s social media and propaganda efforts and leveraging cyber operations to support broader kinetic operations on the battlefield.15 The latest US National Strategy for Counterterrorism, published in 2018, stresses the need to impede terrorist organizations from leveraging the internet to inspire and enable attacks.16

Indeed, continued efforts to counter the evolving social media and propaganda tools of terrorist organizations will be critical, but this will not comprehensively address the digital threat posed by these groups. Counterterrorism scholarship and government strategies have paid scant attention to the offensive cyber capabilities and operations of terrorist organizations, tools that are related but distinct from other forms of online influence. Activities of this variety do not necessarily cause catastrophic physical harm, but their capacity to influence public perception and, potentially, the course of political events should be cause for concern.

Several well-discussed, politically significant non-state actors with histories rooted almost entirely in kinetic violence are developing, or otherwise acquiring, offensive cyber capabilities to further their interests. More scrutiny of these actors, their motivations, and how they strategically deploy offensive cyber capabilities in conjunction with evolving propaganda and kinetic efforts is warranted to better orient toward the threat.

Hamas, a Palestinian political party and militant terrorist organization that serves as the de facto governing body of the Gaza Strip, is one such actor. The group’s burgeoning cyber capabilities, alongside its propaganda tactics, pose a threat to Israel, the Palestinian Authority, and US interests in the region—especially in tandem with the group’s capacities to fund, organize, inspire, and execute kinetic attacks. This combination of capabilities has historically been the domain of more powerful state actors. However, the integration of offensive cyber capabilities into the arsenals of traditionally kinetic non-state actors, including militant organizations, is on the rise due to partnerships with state guarantors and the general proliferation of these competencies worldwide.

This report seeks to highlight the offensive cyber and information capabilities and behavior of Hamas. First, a broad overview of Hamas’s overall strategy is provided, an understanding of which is key for evaluating its cyber activities. Second, this report analyzes the types of offensive cyber operations in which Hamas engages, showing that the adoption of cyber capabilities does not indicate a sudden shift in strategy but, rather, a realignment of strategy and an augmentation of operations. In other words, offensive cyber operations are a new way to do old things better. Third, this report aims to push the policy community to think differently about its approach to similar non-state groups that may leverage the cyber domain in the future.

2. Overview of Hamas’s strategy

Principles and philosophy

Founded in the late 1980s, Harakat al-Muqawamah al-Islamiyyah, translated as the Islamic Resistance Movement and better known as Hamas, is a Palestinian religious political party and militant organization. After Israel disengaged from the Gaza Strip in 2005, Hamas used its 2006 Palestinian legislative election victory to take over militarily from rival political party Fatah in 2007. The group has served as the de facto ruler of Gaza ever since, effectively dividing the Palestinian Territories into two entities, with the West Bank governed by the Fatah-controlled Palestinian Authority, which Hamas rejects.17

Hamas’s overarching objectives are largely premised on its founding principles—terminating what it views as the illegitimate State of Israel and establishing Islamic, Palestinian rule.18 The group’s grand strategy comprises two general areas of focus: resisting Israel and gaining political clout with the Palestinian people. These objectives are interconnected and mutually reinforcing, as Hamas’s public resistance to Israel feeds Palestinian perceptions of the group as the leader of the Palestinian cause.19

Map of Israel and the Palestinian Territories.
Source: Nations Online Project

Despite Hamas’s maximalist public position on Israel, the organization’s leaders are rational actors who logically understand the longevity and power of the State of Israel. Where the group can make meaningful inroads is in Palestinian politics, trying to win public support from the more secular, ruling Fatah party and positioning itself to lead a future Palestinian state. Looming uncertainty about the future of an already weak Palestinian Authority, led by the aging President Mahmoud Abbas, coupled with popular demand for elections, presents a potential opportunity for Hamas to fill a leadership vacuum.20

To further these objectives, Hamas attracts attention by frequently generating and capitalizing on instability. The group inflames already tumultuous situations to foster an environment of extremism, working against those who are willing to cooperate in the earnest pursuit of a peaceful solution to the Israel–Palestine conflict. Hamas uses terror tactics to influence public perception and to steer political outcomes, but still must exercise strategic restraint to avoid retaliation that could be militarily and politically damaging. Given these self-imposed restraints, Hamas seeks alternative methods of influence that are less likely to result in blowback.

Terrorism strategy

Hamas’s terror tactics have included suicide bombings,21 indiscriminate rocket fire,22 sniper attacks,23 incendiary balloon launches,24 knifings,25 and civilian kidnappings,26 all in support of its larger information strategy to project a strong image and to steer political outcomes. Through these activities, Hamas aims to undermine Israel and the Palestinian Authority27 and challenge the Palestine Liberation Organization’s (PLO)28 standing as the “sole representative of the Palestinian people.”

Terrorism forms the foundation of Hamas’s approach, and the organization’s leadership openly promotes such activities.29 While the group’s terror tactics have evolved over time, they have consistently been employed against civilian targets to provoke fear, generate publicity, and achieve political objectives. Israeli communities targeted by terrorism, as well as Palestinians in Gaza living under Hamas rule, suffer from considerable physical and psychological stress,30 driving Israeli policymakers to carry out military operations, often continuing a vicious cycle that feeds into Hamas’s information campaign.

These terrorist tactics follow a coercive logic that aligns with Hamas’s greater messaging objectives. Robert Pape’s “The Strategic Logic of Suicide Terrorism” specifically names Hamas as an organization with a track record of perpetrating strategically timed suicide terrorist attacks for coercive political effect.31 In 1995, for example, Hamas conducted a flurry of suicide attacks, killing dozens of civilians in an attempt to pressure the Israeli government to withdraw from certain locations in the West Bank. Once negotiations were underway between Israel and the PLO, Hamas temporarily suspended the attacks, only to resume them against Israeli targets when diplomatic progress appeared to stall. Israel would eventually partially withdraw from several West Bank cities later that year.32

Similarly, just several months before Israel’s 1996 general election, incumbent Labor Party Prime Minister Shimon Peres led in the polls by roughly 20 percentage points in his reelection bid against Benjamin Netanyahu and the Likud Party. However, a spate of Hamas suicide bombings cut Peres’s lead and Netanyahu emerged victorious.33 The attacks were designed to weaken the reelection bid of Peres, widely viewed as the candidate most likely to advance the peace process, and strengthen the candidacy of Netanyahu. Deliberate terror campaigns such as these demonstrate the power Hamas wields over Israeli politics.34

The Israeli security establishment has learned lessons from the phenomenon of suicide terrorism, implementing countermeasures to foil attacks. Since the mid-2000s, Hamas has shifted its focus to firing rockets of various ranges and precision from the Gaza Strip at civilian population centers in Israel.35 The rocket attacks became frequent after Israel’s disengagement from Gaza in 2005, ebbing and flowing in alignment with significant political events.36 For instance, the organization targeted towns in southern Israel with sustained rocket fire in the lead-up to the country’s general election in 2009 to discourage Israelis from voting for pro-peace candidates.37

A rocket fired from the Gaza Strip into Israel, 2008.
Source: Flickr/paffairs_sanfrancisco

Strategic restraint

Each of these terror tactics has the powerful potential to generate publicity with Israelis, Palestinians, and audiences elsewhere. However, unrestrained terrorism comes at a cost, something Hamas understands. Hamas must weigh its desire to carry out attacks with the concomitant risks, including an unfavorable international perception, military retaliation, infrastructure damage, and internal economic and political pressures.

Hamas addresses this in a number of ways. First, it limits its operations, almost exclusively, to Israel and the Palestinian Territories. Hamas has learned from the failures of other Palestinian terrorist organizations, whose operations beyond Israel’s borders were often counterproductive, attracting legitimate international criticism of these groups.38 Such operations also run the risk of alienating critical Hamas benefactors like Qatar and Turkey.39 These states, which maintain important relationships with the United States—not to mention burgeoning ties with Israel—could pressure Hamas to course-correct, if not outright withdraw their support for the organization.40 The continued flow of billions of dollars in funding from benefactors like Qatar is critical, not just to Hamas’s capacity to conduct terror attacks and wage war,41 but also to its efforts to reconstruct infrastructure and provide social services in the Gaza Strip, both key factors for building its political legitimacy among Palestinians.42

Second, with each terrorist attack, Hamas must weigh the potential for a forceful Israeli military response. The cycle of terrorism and retaliation periodically escalates into full-scale wars that feature Israeli air strikes and ground invasions of Gaza. These periodic operations are known in the Israeli security establishment as “mowing the grass,” a component of Israel’s strategy to keep Hamas’s arsenal of rockets, small arms, and infrastructure, including its elaborate underground tunnel network, from growing out of control like weeds in an unkempt lawn.43 Hamas’s restraint has been apparent since May 2021, when Israel conducted Operation Guardian of the Walls, a roughly two-week campaign of mostly airstrikes and artillery fire aimed at slashing the group’s rocket arsenal and production capabilities, crippling its tunnels, and eliminating many of its top commanders. Hamas is thought to be recovering and restocking since the ceasefire, carefully avoiding engaging in provocations that could ignite another confrontation before the group is ready.

Third, and critically, since mid-2021, the last year-plus of the Israel–Hamas conflict has been one of the quietest in decades due to the Israeli Bennett–Lapid government’s implementation of a sizable civil and economic program for Gaza.44 The program expands the number of permits for Palestinians from Gaza to work in Israel, where the daily wages of one worker are enough to support an additional ten Palestinians.45 Israel’s Defense Ministry signed off on a plan to gradually increase work permit quotas for Palestinians from Gaza to an unprecedented 20,000, with reports suggesting plans to eventually increase that number to 30,000.46 For an impoverished territory with an unemployment rate of around 50 percent, permits to work in Israel improve the lives of Palestinians and stabilize the economy. The program also introduced economic incentives for Hamas to keep the peace—conducting attacks could result in snap restrictions on permits and border crossing closures, leading to a public backlash, as well as internal political blowback within the group. The power of this economic tool was evident throughout Israel’s Operation Breaking Dawn in August 2022, during which Israel conducted a three-day operation to eliminate key military assets and personnel of the Palestinian Islamic Jihad (PIJ), another Gaza-based terrorist organization. Israel was careful to communicate its intention to target PIJ, not Hamas. Ordinarily a ready-and-willing belligerent in such flare-ups, Hamas made no move to restrain PIJ but itself remained conspicuously on the sidelines, refraining from fighting out of its interest in resuming border crossings as quickly as possible.47

Searching for alternatives

Given these limitations, risks of blowback, and self-imposed restraints, Hamas is finding alternative methods of influence. Under the leadership of its Gaza chief Yahya Sinwar, Hamas is endeavoring to inspire Arab Israelis and West Bank Palestinians to continue the struggle by taking up arms and sparking an intifada while the group nurses itself back to strength.48 To further this effort, Hamas is turning to more insidious means of operating in the information space to garner support and ignite conflagrations without further jeopardizing its public reputation, weapons stockpiles, infrastructure, or the economic well-being of the Palestinians living under its control. Like many state actors working to advance strategic ambitions, Hamas has turned to offensive cyber operations as a means of competing below the threshold of armed conflict.

Deploying offensive cyber capabilities involves exceptionally low risks and costs for operators. For groups like Hamas that are worried about potential retaliation, these operations present an effective alternative to kinetic operations that would otherwise provoke an immediate response. Most national countermeasures against cyber operations are geared toward state adversaries, and finding an appropriate response to non-state actors in this area has proven challenging. Many state attempts to retaliate and deter have been toothless, resulting in little alteration of the adversary’s calculations.49

3. Hamas’s cyber strategy

The nature of the cyber domain allows weak actors, like Hamas, to engage and inflict far more damage on powerful actors, like Israel, than would otherwise be possible in conventional conflict.50 This asymmetry means that cyberspace offers intrinsically covert opportunities to store, transfer, and deploy consequential capabilities with far less need for organizational resources and financial or human capacity than in industrial warfare. Well-suited to support information campaigns, cyber capabilities are useful for influencing an audience without drawing the attention and repercussions of more conspicuous operations, like terrorism. In these ways, cyber operations fit into Hamas’s overall strategy and emphasis on building public perception and influence. Making sense of this strategy allows a greater understanding of past Hamas cyber operations, and how the group will likely operate in the cyber domain going forward.

More than meets the eye

Aerial imagery of a Hamas cyber operations facility destroyed by the Israel Defense Forces in the Gaza Strip in May 2019.
Source: Israel Defense Forces

Hamas’s cyber capabilities, while relatively nascent and lacking the sophisticated tools of other hacking groups, should not be underestimated. It comes as a surprise to many security experts that Hamas—chronically plagued by electricity shortages in the Gaza Strip, with an average of just ten to twelve hours of electricity per day—even possesses cyber capabilities.51 Israel’s control over the telecommunications frequencies and infrastructure of the Gaza Strip raises further doubts about how Hamas could operate a cyber program.52 However, in 2019, Israel deemed the offensive cyber threat to be critical enough that after thwarting an operation, the IDF carried out a strike to destroy Hamas’s cyber headquarters,53 one of the first acknowledged kinetic operations by a military in response to a cyber operation. Yet despite an IDF spokesperson’s claim that “Hamas no longer has cyber capabilities after our strike,” public reporting has highlighted various Hamas cyber operations in the ensuing months and years.54

This dismissive attitude toward Hamas’s cyber threat also overlooks the group’s operations from outside the confines of the Gaza Strip. Turkish President Recep Tayyip Erdoğan and his AKP Party share ideological sympathies with Hamas and have extended citizenship to Hamas leadership.55 The group’s leaders have allegedly used Turkey as a base for planning attacks and even as a safe haven for an overseas cyber facility.56 Hamas maintains even more robust relationships with other state supporters, namely Iran and Qatar, which provide financing, safe havens, and weapons technology.57 With the assistance of state benefactors, Hamas will continue to develop offensive cyber and information capabilities that, if overlooked, could result in geopolitical consequences.

For at least a decade, Hamas has engaged in cyber operations against Israeli and Palestinian targets. These operations can be divided into two broad operational categories that align with Hamas’s overall strategy: espionage and information. The first category, cyber espionage operations, accounts for the majority of Hamas’s publicly reported cyber activity and underpins the group’s information operations.

Espionage operations

Like any state or non-state actor, Hamas relies on quality intelligence to provide its leadership and commanders with decision-making advantages in the political and military arenas. The theft of valuable secrets from Israel, rival Palestinian factions, and individuals within its own ranks provides Hamas with strategic and operational leverage, and is thus prioritized in its cyber operations.

The Internal Security Force (ISF) is Hamas’s primary intelligence organization, comprised of members of the al-Majd security force from within the larger Izz al-Din al-Qassam Brigades, the military wing of Hamas. The ISF’s responsibilities range from espionage to quashing political opposition and dissent from within the party and its security apparatus.58 The range of the ISF’s missions manifests through Hamas’s cyber operations.

Tactical evolution

Naturally, Israel is a primary target of Hamas’s cyber espionage. These operations have become commonplace over the last several years, gradually evolving from broad, blunt tactics into more tailored, sophisticated approaches. The group’s initial tactics focused on a “spray and pray” approach, distributing impersonal emails with malicious attachments to a large number of targets, hoping that a subset would bite. For example, an operation that began in mid-2013 and was discovered in February 2015 entailed Hamas operators luring targets with the promise of pornographic videos that were really malware apps. The operators relied on their victims—which included targets across the government, military, academic, transportation, and infrastructure sectors—withholding information about the incidents from their workplace information technology departments, out of shame for clicking on pornography at work, thereby maximizing access and time on the target.59

Later, Hamas operations implemented various tactical updates to increase their chances of success. In September 2015, the group began including links rather than attachments, non-pornographic lures such as automobile accident videos, and additional encryption of the exfiltrated data.60 Another campaign, publicized in February 2017, involved a more personalized approach using social engineering techniques to target IDF personnel with malware from fake Facebook accounts.61 In subsequent years, the group began rolling out a variety of smartphone applications and marketing websites to surreptitiously install mobile remote access trojans on target devices. In 2018, the group implanted spyware on smartphones by masquerading as Red Alert, a rocket siren application for Israelis.62 Similarly, in 2020, Hamas targeted Israelis through dating apps with names like Catch&See and GrixyApp.63 As previously mentioned, Hamas also cloaked its spyware in a seemingly benign World Cup application that allowed the group to collect information on a variety of IDF military installations and hardware, including armored vehicles. These are all areas of demonstrated interest to Hamas commanders seeking a potential advantage in a future kinetic conflict.64

According to the Israeli threat intelligence firm Cybereason, more recent discoveries indicate a “new level of sophistication” in Hamas’s operations.65 In April 2022, a cyber espionage campaign targeting individuals from the Israeli military, law enforcement, and emergency services used previously undocumented malware featuring enhanced stealth mechanisms, indicating that Hamas is taking more steps than ever to protect its operational security.66 The infection vector for this particular campaign was social engineering on platforms like Facebook, a hallmark of many Hamas espionage operations, to dupe targets into downloading trojanized applications. Once the malware is downloaded, Hamas operators can access a wide range of information from the device’s documents, camera, and microphone, acquiring immense data on the target’s whereabouts, interactions, and more. Information collected from military, law enforcement, and emergency services personnel can be useful on its own or for its potential extortion value.

As part of its power struggle with the Palestinian Authority and rival Fatah party, Hamas targets Palestinian political and security officials with similar operations. In another creative cyber espionage operation targeting the Palestinian Authority, Hamas operators used hidden malware to exfiltrate information from the widely used cloud platform Dropbox.67 The same operation targeted political and government officials in Egypt,68 an actor Hamas is keen to surveil given its shared border with the Gaza Strip and role brokering ceasefires and other negotiations between Israel and Hamas.

Other common targets of Hamas’s cyber espionage campaigns are members of its own organization. One of the ISF’s roles is counterintelligence, a supremely important field to an organization that is rife with internecine political rivalries,69 as well as paranoia about the watchful eyes of Israeli and other intelligence services. According to Western intelligence sources, one of the main missions of Hamas’s cyber facility in Turkey is conducting counterintelligence operations against Hamas dissenters and spies.70 Hamas is sensitive to the possibility of Palestinians within its ranks and others acting as “collaborators” with Israel, and the group occasionally summarily executes individuals on the suspicion of serving as Israeli intelligence informants.71

Information operations

While the bulk of Hamas’s cyber operations place a premium on information gathering, a subset involves using this information to further its efforts to influence the public. This broadly defined category of information operations comprises everything from hack-and-leaks to defacements to social media campaigns that advance beneficial narratives.

Hack-and-leak operations, when hackers acquire secret or otherwise sensitive information and subsequently make it public, are clear attempts to shift public opinion and “simulate scandal.”72 The strategic dissemination of stolen documents, images, and videos—potentially manipulated—at critical junctures can be a windfall for a group like Hamas. In December 2014, Hamas claimed credit for hacking the IDF’s classified network and posting multiple videos taken earlier in the year of Israel’s Operation Protective Edge in the Gaza Strip.73 The clips, which were superimposed with Arabic captions by Hamas,74 depicted sensitive details about the IDF’s operation, including two separate instances of Israeli forces engaging terrorists infiltrating Israel—one group infiltrating by sea en route to Kibbutz Zikim and one group via a tunnel under the border into Kibbutz Ein HaShlosha—to engage in kidnappings. One of the raids resulted in a fight that lasted for roughly six hours and the death of two Israelis.75 By leaking the footage, including images of the dead Israelis, Hamas sought to project itself as a strong leader to Palestinians and to instill fear among Israelis, boasting about its ability to infiltrate Israel, kill Israelis, and return to Gaza. These operations are intended to demonstrate Hamas’s strength on two levels: first, their ability to hack and steal valuable material from Israel and second, their boldness in carrying out attacks to further the Palestinian national cause.

Defacement is another tool in Hamas’s cyber arsenal. This sort of operation, a form of online vandalism that usually involves breaching a website to post propaganda, is not so much devastating as it is a nuisance.76 The operations are intended to embarrass the targets, albeit temporarily, and generate a psychological effect on an audience. In 2012, during Israel’s Operation Pillar of Defense in the Gaza Strip, Hamas claimed responsibility for attacks on Israeli websites, including the IDF’s Homefront Command, asserting that the cyber operations were “an integral part of the war against Israel.”77 Since then, Hamas has demonstrated its ability to reach potentially wider audiences through defacement operations. Notably, in July 2014 during Operation Protective Edge, Hamas gained access to the satellite broadcast of Israel’s Channel 10 television station for a few minutes, broadcasting images purportedly depicting Palestinians injured by Israeli airstrikes in the Gaza Strip. The Hamas hackers also displayed a threat in Hebrew text: “If your government does not agree to our terms, then prepare yourself for an extended stay in shelters.”78

Hamas has conducted defacement operations itself and has also relied on an army of “patriotic hackers.” Patriotic hacking, cyberattacks against a perceived adversary performed by individuals on behalf of a nation, is not unique to the Israeli–Palestinian conflict. States have turned to sympathetic citizens around the world for support, often directing individual hackers to deface adversaries’ websites, as Ukraine did after Russia’s 2022 invasion.79 Similarly, Hamas seeks to inspire hackers from around the Middle East to “resist” Israel, resulting in the defacement of websites belonging to the Tel Aviv Stock Exchange and Israel’s national airline El Al by Arab hackers.80

In tandem with its embrace of patriotic hackers, Hamas seeks to multiply its propaganda efforts by enlisting the help of Palestinians on the street for less technical operations. To some extent, Hamas uses social media in similar ways to other terrorist organizations to inspire violence, urging Palestinians to attack Jews in Israel and the West Bank, for instance.81 However, the group goes a step further, encouraging Palestinians in Gaza to contribute to its efforts by providing guidelines for social media posting. The instructions, provided by Hamas’s Interior Ministry, detail how Palestinians should post about the conflict and discuss it with outsiders, including preferred terminology and practices such as, “Anyone killed or martyred is to be called a civilian from Gaza or Palestine, before we talk about his status in jihad or his military rank. Don’t forget to always add ‘innocent civilian’ or ‘innocent citizen’ in your description of those killed in Israeli attacks on Gaza.” Other instructions include, “Avoid publishing pictures of rockets fired into Israel from [Gaza] city centers. This [would] provide a pretext for attacking residential areas in the Gaza Strip.”82 Information campaigns like these extend beyond follower indoctrination and leave a tangible mark on international public discourse, as well as structure the course of conflict with Israel.

Hamas’s ability to leverage the cyber domain to shape the information landscape can have serious geopolitical implications. Given the age and unpopularity of Palestinian President Mahmoud Abbas—polling shows that 80 percent of Palestinians want him to resign—as well as the fragile state of the Palestinian Authority,83 the Palestinian public’s desire for elections, and general uncertainty about the future, Hamas’s information operations can have a particularly potent effect on a discourse that is already contentious. The same can be said, to some extent, for the information environment in Israel, where political instability has resulted in five elections in just three and a half years.84 When executed strategically, information operations can play an influencing, if not deciding, role in electoral outcomes, as demonstrated by Russia’s interference in the 2016 US presidential election.85 A well-timed hack-and-leak operation, like Russia’s breach of the Democratic National Committee’s networks and dissemination of its emails, could significantly alter the momentum of political events in both Israel and Palestine.86 Continued failure to reach a two-state solution in the Israeli–Palestinian conflict will jeopardize Israel’s diplomatic relationships,87 as well as stability in the wider Middle East.88

4. Where do Hamas’s cyber operations go from here?

As outlined in its founding charter, as long as Hamas exists, it will place a premium on influencing audiences—friendly, adversarial, and undecided—and mobilizing them to bend political outcomes toward its ultimate objectives.89 Terrorism has been a central element of the group’s influence agenda, but cyber and information operations offer alternative and complementary options for engagement. It stands to reason that as Hamas’s cyber capabilities steadily evolve and improve, those of similar organizations will do the same.

Further Israeli efforts to curb terrorism through a cocktail of economic programs and advancements in defensive technologies, such as its integrated air defense system, raise questions about how evolving state countermeasures may alter the calculi of Hamas and similar groups. There is no Iron Dome in cyberspace. By integrating cyber and information operations into their repertoires, militant and terrorist organizations are not changing their strategies; they are finding new means of achieving old goals. Important questions for future research include:

  • If states like Iran transfer increasingly advanced kinetic weaponry to terrorist organizations like Hamas, PIJ, Hezbollah, Kata’ib Hezbollah, and the Houthis, to what extent does this assistance extend to offensive cyber capabilities? What will this support look like in the future, and will these groups depend on state support to sustain their cyber operations?
  • What lessons is Hamas drawing from the past year of relative calm with Israel that may influence the cadence and variety of its cyber operations? How might these lessons influence similar organizations around the world?
  • What sorts of operations, such as financially motivated ransomware and cybercrime, has Hamas not engaged in? Will Hamas and comparable organizations learn from and adopt operations that are similar to other variously motivated non-state actors?
  • What restrictions and incentives can the United States and its allies implement to curb the transfer of cyber capabilities to terrorist organizations?

Cyber capabilities are advancing rapidly worldwide and more advanced technologies are increasingly accessible, enabling relatively weak actors to compete with strong actors like never before. Few controls exist to effectively counter this proliferation of offensive cyber capabilities, and the technical and financial barriers for organizations like Hamas to compete in this domain remain low.90 Either by obtaining and deploying highly impactful tools, or by developing relationships with hacking groups in third-party countries to carry out operations, the threat from Hamas’s cyber and information capabilities will grow.

Just like the group’s rocket terror program, which began with crude, short-range, and inaccurate Qassam rockets that the group cobbled together from scratch, Hamas’s cyber program began with rather unsophisticated tools. Over the years, as the group obtained increasingly sophisticated, accurate, and long-range rockets from external benefactors like Iran, so too have Hamas’s cyber capabilities advanced in scale and sophistication.

Conclusion

Remarking on Hamas’s creative cyber campaigns, a lieutenant colonel in the IDF’s Cyber Directorate noted, “I’m not going to say they are not powerful or weak. They are interesting.”91 Observers should not view Hamas’s foray into cyber operations as an indication of a sudden organizational strategic shift. For its entire existence, the group has used terrorism as a means of garnering public attention and affecting the information environment, seizing strategic opportunities to influence the course of political events. As outside pressures change the group’s incentives to engage in provocative kinetic operations, cyber capabilities present alternative options for Hamas to advance its strategy. Hamas’s cyber capabilities will continue to advance, and the group will likely continue to leverage these tools in ways that will wield maximum influence over the information environment. Understanding how Hamas’s strategy and incentive structure guides its decision to leverage offensive cyber operations can provide insights, on a wider scale, about how non-state actors develop and implement cyber tools, and how the United States and its allies may be better able to counter these trends.

About the author

Acknowledgements

The author would like to thank several individuals, without whose support this report would not look the same. First and foremost, thank you to Trey Herr and Emma Schroeder, director and associate director of the Atlantic Council’s Cyber Statecraft Initiative, respectively, for helping from the start of this effort by participating in collaborative brainstorming sessions and providing extensive editorial feedback throughout. The author also owes a debt of gratitude to several individuals for generously offering their time to review various iterations of this document. Thanks to Ambassador Daniel Shapiro, Shanie Reichman, Yulia Shalomov, Stewart Scott, Madison Cullinan, and additional individuals who shall remain anonymous for valuable insights and feedback throughout the development of this report. Additionally, thank you to Valerie Bilgri for editing and Donald Partyka and Anais Gonzalez for designing the final document.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1     Michael Schmitt, “Normative Voids and Asymmetry in Cyberspace,” Just Security, December 29, 2014, https://www.justsecurity.org/18685/normative-voids-asymmetry-cyberspace/.
2     Emma Schroeder et al., Hackers, Hoodies, and Helmets: Technology and the Changing Face of Russian Private Military Contractors, Atlantic Council, July 25, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/technology-change-and-the-changing-face-of-russian-private-military-contractors; Cecile Schilis-Gallego and Nina Lakhani, “It’s a Free For All: How Hi-Tech Spyware Ends Up in the Hands of Mexico’s Cartels,” Guardian (UK), December 7, 2020, https://www.theguardian.com/world/2020/dec/07/mexico-cartels-drugs-spying-corruption.
3     The White House, National Security Strategy, October 2022, https://www.whitehouse.gov/wp-content/uploads/2022/10/Biden-Harris-Administrations-National-Security-Strategy-10.2022.pdf; Emma Schroeder, Stewart Scott, and Trey Herr, Victory Reimagined: Toward a More Cohesive US Cyber Strategy, Atlantic Council, June 14, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/victory-reimagined/.
4     Clare Stouffer, “15 Types of Hackers + Hacking Protection Tips for 2022,” Norton, May 2, 2022, https://us.norton.com/internetsecurity-emerging-threats-types-of-hackers.html#Greenhat.
5     Janne Hakala and Jazlyn Melnychuk, “Russia’s Strategy in Cyberspace,” NATO Strategic Communications Centre of Excellence, June 2021, https://stratcomcoe.org/cuploads/pfiles/Nato-Cyber-Report_15-06-2021.pdf.
6     Roy Iarchy and Eyal Rynkowski, “GoldenCup: New Cyber Threat Targeting World Cup Fans,” Broadcom Software, July 5, 2018, https://symantec-enterprise-blogs.security.com/blogs/expert-perspectives/goldencup-new-cyber-threat-targeting-world-cup-fans.
7     “Spyware,” MalwareBytes, https://www.malwarebytes.com/spyware.
8     Taylor Armerding, “Golden Cup App Was a World Cup of Trouble,” Synopsys, July 12, 2022, https://www.synopsys.com/blogs/software-security/golden-cup-app-world-cup-trouble/.
9     Yaniv Kubovich, “Hamas Cyber Ops Spied on Hundreds of Israeli Soldiers Using Fake World Cup, Dating Apps,” Haaretz, July 3, 2018, https://www.haaretz.com/israel-news/hamas-cyber-ops-spied-on-israeli-soldiers-using-fake-world-cup-app-1.6241773.
11     J.D. Work, Troubled Vision: Understanding Recent Israeli–Iranian Offensive Cyber Exchanges, Atlantic Council, July 22, 2020, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/troubled-vision-understanding-israeli-iranian-offensive-cyber-exchanges/.
12     Amos Harel, “How Deep Has Chinese Intelligence Penetrated Israel?” Haaretz, February 25, 2022, https://www.haaretz.com/israel-news/.premium-how-deep-has-chinese-intelligence-penetrated-israel-1.10633942.
13     “Propaganda, Extremism and Online Recruitment Tactics,” Anti-Defamation League, April 4, 2016, https://www.adl.org/education/resources/tools-and-strategies/table-talk/propaganda-extremism-online-recruitment.
14     Office of the Director of National Intelligence, Annual Threat Assessment of the US Intelligence Community, February 7, 2022, https://www.dni.gov/files/ODNI/documents/assessments/ATA-2022-Unclassified-Report.pdf.
15     National Security Archive, “USCYBERCOM After Action Assessments of Operation GLOWING SYMPHONY,” January 21, 2020, https://nsarchive.gwu.edu/briefing-book/cyber-vault/2020-01-21/uscybercom-after-action-assessments-operation-glowing-symphony.
16     The White House, National Strategy for Counterterrorism of the United States of America, October 2018, https://www.dni.gov/files/NCTC/documents/news_documents/NSCT.pdf.
17     “Hamas: The Palestinian Militant Group That Rules Gaza,” BBC, July 1, 2022, https://www.bbc.com/news/world-middle-east-13331522.
18    “The Covenant of the Islamic Resistance Movement,” August 18, 1988, https://avalon.law.yale.edu/20th_century/hamas.asp.
19    Gur Laish, “The Amorites Iniquity – A Comparative Analysis of Israeli and Hamas Strategies in Gaza,” Infinity Journal 2, no. 2 (Spring 2022), https://www.militarystrategymagazine.com/article/the-amorites-iniquity-a-comparative-analysis-of-israeli-and-hamas-strategies-in-gaza/.
20     Khaled Abu Toameh, “PA Popularity Among Palestinians at an All-Time Low,” Jerusalem Post, November 18, 2021, https://www.jpost.com/middle-east/pa-popularity-among-palestinians-at-an-all-time-low-685438.
21     “16 Killed in Suicide Bombings on Buses in Israel: Hamas Claims Responsibility,” CNN, September 1, 2004, http://edition.cnn.com/2004/WORLD/meast/08/31/mideast/.
22     “Hamas Rocket Fire a War Crime, Human Rights Watch Says,” BBC News, August 12, 2021, https://www.bbc.com/news/world-middle-east-58183968.
23     Isabel Kershner, “Hamas Militants Take Credit for Sniper Attack,” New York Times, March 20, 2007, https://www.nytimes.com/2007/03/20/world/middleeast/19cnd-mideast.html.
24     “Hamas Operatives Launch Incendiary Balloons into Israel,” AP News, September 4, 2021, https://apnews.com/article/technology-middle-east-africa-israel-hamas-6538690359c8de18ef78d34139d05535.
25     Mai Abu Hasaneen, “Israel Targets Hamas Leader after Call to Attack Israelis with ‘Cleaver, Ax or Knife,’” Al-Monitor, May 15, 2022, https://www.al-monitor.com/originals/2022/05/israel-targets-hamas-leader-after-call-attack-israelis-cleaver-ax-or-knife.
26     Ralph Ellis and Michael Schwartz, “Mom Speaks Out on 3 Abducted Teens as Israeli PM Blames Hamas,” CNN, June 15, 2014, https://www.cnn.com/2014/06/15/world/meast/west-bank-jewish-teens-missing.
27     The Palestinian National Authority (PA) is the official governmental body of the State of Palestine, exercising administrative and security control over Area A of the Palestinian Territories, and only administrative control over Area B of the Territories. The PA is controlled by Fatah, Hamas’s most significant political rival, and is the legitimate ruler of the Gaza Strip, although Hamas exercises de facto control of the territory.

The post The cyber strategy and operations of Hamas: Green flags and green hats appeared first on Atlantic Council.

The 5×5—Non-state armed groups in cyber conflict (Atlantic Council, October 26, 2022)
https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-non-state-armed-groups-in-cyber-conflict/
Five experts from various backgrounds assess the emerging threats posed by non-state armed groups in cyber conflict.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Non-state organizations native to cyberspace, like patriotic hacking collectives and ransomware groups, continue to impact geopolitics through cyber operations. But, increasingly, non-state armed groups with histories rooted entirely in kinetic violence are adopting offensive cyber capabilities to further their strategic objectives. Each of these groups has its own motivations for acquiring these capabilities and its own strategy for employing them, making it difficult for the United States and its allies to develop effective countermeasures. In Ukraine, the Russian government is increasingly outsourcing military activities to private military companies, such as the Wagner Group, and it may continue to do so for cyber and information operations. In Mexico, drug cartels are purchasing state-of-the-art malware to target journalists and other opponents. Elsewhere, militant and terrorist organizations such as Hezbollah and Boko Haram have employed cyber capabilities to bolster their existing operations and their efficacy in violence against various states.

The proliferation of offensive cyber capabilities and low barriers to acquiring and deploying some of these powerful tools suggest that the cyber capacities of non-state armed groups will only continue to grow. We brought together five experts from various backgrounds to assess the emerging cyber threats posed by non-state armed groups and discuss how the United States and its allies can address them.

#1 How significant is the cyber threat posed by non-state armed groups to the United States and its allies? What kinds of entities should they be concerned about?

Sean McFate, nonresident senior fellow, Africa Center, Atlantic Council; professor, Georgetown University’s Walsh School of Foreign Service and the National Defense University:

“Currently, the most powerful non-state armed groups that use cyber do it on behalf of a state, offering a modicum of plausible deniability. For example, the Concord Group in Russia is owned by Yevgeny Prigozhin, an oligarch close to Putin. Under the Concord Group are the Wagner Group (mercenaries) and the Internet Research Agency, also known as “the troll farm.” Outsourcing these capabilities lowers the barrier of entry into modern conflicts and allows the Kremlin to pursue riskier stratagems.”

Steph Shample, non-resident scholar, Cyber Program, Middle East Institute; senior analyst, Team Cymru:

“The cyber threat posed by independent actors or criminal groups—not advanced persistent threats (APTs)—is high, and the first impact is primarily financial. Ransomware flourishes among non-state groups and can make these actors, at times, millions of dollars. Consider the SamSam ransomware operations, carried out by Iranian nationals. According to the publicized indictments, the two actors were not found to have ties to the Iranian government, but they took in $6 million in profit—and that is just what was traceable. The second impact is reputational damage for businesses: once they are hit by a cyber incident, rebuilding user trust is often more difficult than recouping the financial loss. Entities to worry about include fields and industries that do not have robust cyber protection or ample funds, as malicious actors often go after them. These industries include academia, healthcare, and smaller government entities like cities and municipalities.”

Aaron Brantly, associate professor of political science and director, Tech4Humanity lab, Virginia Tech:

“Non-state armed groups do not pose a significant cyber threat at present to the United States and its allies. There are very few examples of non-state actors, unaffiliated with states and not acting as their proxies, that have the capacity to develop and utilize vulnerabilities to achieve substantial effect. The threat posed by these groups increases when they act as proxies and leverage state capacity and motivation. It is conceivable that non-state armed groups may use cyberattacks to engage in criminal activity for financial gain to fund kinetic operations. Yet developing the capacity to carry out armed attacks and cyberattacks requires members with different skillsets.”

Maggie Smith, research scientist and assistant professor, Army Cyber Institute, United States Military Academy:

The views expressed are those of the author, and do not reflect the official position of the Army Cyber Institute, United States Military Academy, Department of the Army, or Department of Defense.

“I find the most confounding factor of non-state groups to be their motivations for attacking particular targets. Motivations can be financial, ideological, religious, or grievance-based, or entities could be targeted for fun—the options are endless, and they are not static. Therefore, our traditional intelligence and the indicators and warnings that typically tip and cue us to threats may not be there. This makes defending against non-state actors that much more unpredictable, confusing, and challenging than defending against states.”

Jon Lindsay, associate professor, School of Cybersecurity and Privacy, Georgia Institute of Technology (Georgia Tech):

“The greatest threat to the United States remains other nuclear-armed states, as well as collective existential threats like climate change and pandemics. Non-state actors are a serious but less severe threat, and cyber is the least severe tool in their kits. Cyber is a minor feature of a minor threat to the United States and its allies.”

#2 How do strategies vary among different types of non-state armed groups and compare with those of states when it comes to cyber capabilities?

Lindsay: “A really interesting feature of the cyber revolution is the democratization of deception. The classic strategies of intelligence—espionage, subversion, disinformation, counterintelligence, and secret diplomacy—that were once practiced mainly by states are now within reach of many actors. The more interesting variation may be in capabilities—states can do more for many reasons—than in strategy. Like it or not, we are all actors, intermediaries, and targets of intelligence.”

McFate: “Outsourcing cyber threats allows states to circumvent international and domestic laws. This creates moral hazard in foreign policymaking because it lessens the likelihood of punishment by the international community.”

Brantly: “Whether terrorist organizations or insurgencies, armed groups have historically used violence to achieve effects. The strategy of armed groups is to shift the public view of an organization or issue in such a way as to compel a state actor to respond. Cyber threats do not achieve the same level of visibility that kinetic violence does and are therefore strategically and tactically less useful to non-state groups. By contrast, state actors seek intelligence and signaling capabilities that control escalation. Because cyberattacks are frequently considered less impactful due to several factors, including reversibility and levels of violence, they are a robust tool to enable broader strategic objectives.”

Shample: “There is often overlap. If we again think about APT groups, or those directly sponsored by state governments—the “big four” US adversaries are Iran, China, North Korea, and Russia. All of these countries have mandatory conscription, so all men (and in select cases, women) have to serve in these countries’ militaries. That mandatory military training can be fulfilled by going through one of their cyber academies and acting as what the United States and the Five Eyes community consider a “malicious cyber actor.” Mandatory service is eventually completed, but then these actors can go and act of their own accord, using the training they received to cover their online tracks. State-trained individuals become part of the non-state actor community. They take their learned skills, they share them with other actors on forums and chat platforms, and voila. With training and sophistication, along with a way to evade tracking from their home countries, these individuals continue to improve their skills and networks online, which is a very serious problem. They are sophisticated and able to keep acting in a criminal capacity. The more sophisticated actors can also sell ready-to-use kits, such as Ransomware-as-a-Service and phishing kits, that are premade and do not take high skill to use. The trained malicious actor can not only act independently but could also have an additional stream of revenue selling kits and supplies to other malicious actors. It is an entire underground ecosystem that I see on closed forums all the time.”

Smith: “One difference is that strategies are more ad hoc or responsive and shift when a non-state group’s motivation for attacking changes. For example, Killnet, the now-infamous pro-Russian hacker group that has been conducting distributed denial-of-service (DDoS) attacks against European nations since March, started off as a DDoS tool that criminal and threat actors could purchase. Just after updating the tool in March, the non-state but pro-Russian criminals behind Killnet pulled it offline and declared that the name was now an umbrella term for hacktivism against Russia’s enemies.”

#3 What makes cyber capabilities attractive (or not) to these kinds of non-state groups?

Lindsay: “The obvious answer: cyber tools are low cost and low risk. Cyber becomes an attractive option to actors that lack the means or resolve to use more effective instruments of power. The more that an actor is concerned about adverse consequences like retaliation, punishment, and law enforcement, the more likely they are to use cyber capabilities.”

McFate: “Cyber is important, but not in ways people often think. It gives us new ways of doing old things: sabotage, theft, propaganda, deception, and espionage. Cyber war’s real power is malign information, not sabotage like Stuxnet. In an information age, disinformation is more important than firepower. Who cares about the sword if you can manipulate the mind that wields it?”

Brantly: “Cyber capabilities are less attractive to non-state armed groups because their cost-to-impact ratio is less than kinetic violence. At present, insurgents are unlikely to win by using cyberattacks, and terrorist organizations are unlikely to draw the desired levels of attention to their cause through cyber means that they would by comparable kinetic means. Where attacks disrupt, embarrass an adversary, or facilitate financial concerns of non-state armed groups, such attacks are more likely.”

Shample: “Pseudo-anonymity, of course. They can act from anywhere, target any entity, use obfuscation technology to cover their tracks, and target cryptocurrency to raise money. First, they can cover their tracks completely or partially. Second, they may have enough obscurity to provide plausible cover and avoid being officially tracked and charged, despite suspicion. Third, they can make a decent amount of money and/or cause damage without any personal harm coming back to themselves. Fourth, they are able to be impactful and gain notoriety among the criminal contingent. The criminal underground is very ego driven, so if an actor can successfully impact a large business or organization, and in so doing make worldwide news, this only helps them gain traction and followers in their community. And they build, keep learning, and repeat, fueled by their financial success and notoriety.”

Smith: “Cyber capabilities are attractive for a lot of reasons—e.g., they can be executed remotely, purchased, and obfuscated, and they are difficult to positively attribute, among other attributes that make them easier to execute than a kinetic attack—but if I were a malicious cyber actor, I would be in the business because nation-states are still figuring out how to respond to cyberattacks. There is no internationally agreed-upon definition of what constitutes a cyberattack or of when a cyberattack becomes an act of war, nor any concrete estimation of what a proportional response to a cyberattack should be. Additionally, the legal mechanisms for prosecuting cyber activities are still being developed, so as a criminal, the fuzziness and the ability to attack an asset within a country without clear consequences are very attractive—especially when law enforcement cyber capabilities are stretched thin and the courts have yet to catch up to technology (or have judges who do not understand the technology used in a case).”

#4 Where does existing theory or policy fall short in addressing the risks posed by the offensive cyber operations of non-state armed groups?

Lindsay: “Generally, we need more theory and empirical research about intelligence contests of any kind. Secret statecraft, and not only by states, is an understudied area in security studies, and it is also a hot research frontier. I do think that the conventional wisdom tends to overstate the threat of cyber from any kind of group, but it is consistent with the paranoid style of American politics.” 

McFate: “How many conferences have you been to where ‘experts’ bicker about whether a cyberattack constitutes war or not? Who cares? US policymakers and academic theorists think about war like pregnancy: you either are or are not. But, in truth, there is no such thing as war or peace; it is really war and peace. Our adversaries do not suffer from this bizarre false dichotomy and exploit our schizoid view of international relations. They wage war but disguise it as peace to us. Cyberattacks are perfect weapons because we spend more time on definitions than on solutions. We need more supple minds at the strategic helm.” 

Brantly: “Many scholars have focused on proxy actors operating in and through cyberspace. The theories and policies developed on the motivations and actions of proxies are robust; this subfield has grown substantially within the last three to four years. Some theorizing has focused on the use of cyber means by terrorist organizations, but most of the research in this area has been speculative. Little theorizing has been done on the use of cyberattacks by non-state armed groups that are not operating as proxies or terrorist organizations. Although there are few examples of such organizations using cyberattacks, increased analysis of this area is potentially warranted.” 

Shample: “The United States and its allies are overly focused on state-sponsored actors. This is because they can issue things like sanctions against state-tied actors and hold press conferences with pomp and circumstance. They ignore the criminal contingent because they usually cannot publicly sanction them. This is short-sighted. The United States needs to combine its intelligence and military efforts to focus on all malicious actors: state-sponsored, criminal groups, and individual/independent actors. Stop worrying about sanctions—malicious APTs often laugh at sanctions from countries without extradition, and the sanctions will quite literally never impact them. They joke about them on underground forums and then continue attacking.” 

Smith: “An area that I am working on is the threats posed by non-state actors during periods of conflict—even ones that we cheer on from afar. The Russian invasion of Ukraine and the subsequent rise of the Ukrainian IT Army and pro-Russian groups like Killnet really complicate the conflict and have shown how organized non-military, non-state-sponsored, and mixed-nationality groups can have a direct impact on the modern battlefield. For entities like US Cyber Command and our foreign counterparts, this is an area of concern, as it is really the modern instantiation of civilians on the battlefield. When do those civilians become enemy combatants, and how do we deal with them? Those questions are not yet answered, and they are further complicated by the various motivations among groups that I discussed above.”

#5 How can the United States and its allies address the cyber threats posed by the many disparate non-state armed groups around the world?

Lindsay: “We should start by accepting that cyber conflict is both inevitable and tolerable. Cyberattacks are part of the societal search algorithm for identifying vulnerabilities that need to be patched, which helps us to build a better society. The United States and its allies should continue to work on the low-hanging fruit of cybercrime, privacy, and intelligence coordination (which are not really hanging that low), rather than focusing on bigger but more mythical threats. The small stuff will help with the big stuff.” 

McFate: “Three ways. First, better defense. Beyond the ‘ones and zeros’ warriors, we need to find ways to make Americans smarter consumers of information. Second, we need to get far more aggressive in our response. I feel like the United States is a goalie at a penalty shootout. If you want to deter cyberattacks, then start punching back hard until the bullies stop. Destroy problematic servers. Go after the people connected to them. Perhaps the United States should explore getting back into the dark arts again, as it once did during the Cold War. Lastly, enlist the private sector. ‘Hack back’ companies can chase down hackers like privateers. It is crazy in 2022 that we do not allow this, especially since the National Security Agency does not protect multinational corporations or civil society’s cybersecurity.”

Brantly: “The United States and its allies have already addressed cyber threats posed by different groups through the establishment of civilian and military organizations designed to identify and counter all manner of cyber threats. The United States has pushed out security standards through the National Institute of Standards and Technology, and US Cyber Command and the military cyber commands have worked to provide continuous intelligence on the cyber activities of potential adversaries. Continuing to strengthen organizations and standards that identify and counter cybersecurity threats remains important. Building norms around what is and is not acceptable behavior in cyberspace and what are critical cybersecurity practices among public and private sector actors will continue to constrain malicious behavior within this evolving domain of interaction. There is no single golden solution. Rather, addressing cybersecurity threats posed by all manner of actors requires multiple ongoing concurrent policy, regulatory, normative, and organizational actions.”

Shample: “If all entities working cyber operations (law enforcement, intelligence, and military) worked together and with the private sector more, the world would benefit. The private sector can move quicker with respect to changing infrastructure and the speed of tracking malicious actors. Cyber criminals know they need to set up, act, and then usually tear down their infrastructure, change, and rebuild from scratch so as to avoid tracking. Cyber truly takes all efforts, all kinds of people working it together to be effective. There is too much focus on state-sponsored vs. criminal, and there is too much information not shared among practitioners. Counterterrorism-focused analysis needs to be combined with combatting weapons and human trafficking and counter-narcotics, which all then come back to a financial focus. Terrorists like ISIS and others have been observed funding their operations by selling weapons, drugs, or humans, and then putting those funds into cryptocurrency. We have pillars of specialists that focus on one area, but there needs to be more combined efforts vs. singular-focused efforts. Underground forums need to be monitored. Telegram, Discord, and dark web forums all need more monitoring. There needs to be a collective effort to combat serious cyber threats, versus dividing efforts and keeping ‘separate’ tracking. Government, military, and law enforcement need to work with the private sector and share the appropriate amount of information to take down criminal networks. There are too many solo efforts vs. a collective one to truly eradicate the malicious cyber criminals.”

Smith: “First, there is no silver bullet because there are so many variables to consider for each threat as it arises—context, composition, etc. are all confounding factors to consider. But I think that international partnerships and domestic partnerships with the private sector and critical infrastructure owners are the key to addressing non-state cyber actors and the threats they pose. The more we communicate and share intelligence and information among partners, the better we will be at anticipating threats and mitigating risk, while also ensuring that we are steadily working to create an ecosystem of support, skills, knowledge, processes and partnerships to combat the multi-modal threats coming from non-state cyber actors.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Non-state armed groups in cyber conflict appeared first on Atlantic Council.

Guevara in El Heraldo de México: on the effectiveness of state’s strategic capabilities (in Spanish) https://www.atlanticcouncil.org/insight-impact/in-the-news/guevara-in-el-heraldo-de-mexico-on-the-effectiveness-of-states-strategic-capabilities-in-spanish/ Tue, 25 Oct 2022 17:07:00 +0000 https://www.atlanticcouncil.org/?p=588159 On October 25, TSI NRSF Inigo Guevara authored an op-ed in El Heraldo de México discussing what makes a state’s strategic capabilities effective (text in Spanish).

The post Guevara in El Heraldo de México: on the effectiveness of state’s strategic capabilities (in Spanish) appeared first on Atlantic Council.


The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

China’s surveillance ecosystem and the global spread of its tools https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/chinese-surveillance-ecosystem-and-the-global-spread-of-its-tools/ Mon, 17 Oct 2022 04:00:00 +0000 https://www.atlanticcouncil.org/?p=567444 This paper seeks to offer insights into how China’s domestic surveillance market and cyber capability ecosystem operate, especially given the limited number of systematic studies that have analyzed its industry objectives.

The post China’s surveillance ecosystem and the global spread of its tools appeared first on Atlantic Council.


Executive summary

This paper seeks to offer insights into how China’s domestic surveillance market and cyber capability ecosystem operate, especially given the limited number of systematic studies that have analyzed its industry objectives. For the Chinese government, investment in surveillance technologies advances both its ambition of becoming a global technology leader and its means of domestic social control. These developments also foster further collaboration between state security actors and private tech firms. Accordingly, the tech firms that support state cyber capabilities range from small cyber research start-ups to leading global tech enterprises. The state promotes surveillance technology and practices abroad through diplomatic exchanges, law enforcement cooperation, and training programs. These efforts encourage the dissemination of surveillance devices, but also support the government’s goals concerning international norm-making in multilateral and regional institutions.

The proliferation of Chinese surveillance technology and cyber tools, and the associated linkages between Chinese state and private entities and their counterparts in other states, especially in the Global South, are valuable components of Chinese state efforts to expand and strengthen Beijing’s political and economic influence worldwide. Although individual governments purchasing Chinese digital tools have their local ambitions in mind, Beijing’s export and promotion of domestic surveillance technologies shape the adoption of these tools in the Global South. As such, investigating how Chinese actors leverage demand factors for their own aims does not undercut the ability of other countries to detect and determine outcomes. Rather, it demonstrates an interplay between Chinese state strategy and local political environments. This paper specifically focuses on key features of China’s surveillance ecosystem, while the companion to this report will focus on the key ‘pull factors’ from African countries and their significance for US interests.

Introduction

Chinese tech companies are among the largest firms in the world. Initially focused on the domestic market, they now sell various surveillance technologies to a global customer base. Increased collaboration between the party-state and private Chinese actors in the sale of surveillance products has raised concerns about the proliferation of China’s surveillance tools and, with it, the rise of unwarranted surveillance. Researchers have scrutinized China’s diplomatic activities, raising questions about the degree to which the government enables surveillance practices abroad. Large Chinese firms and the state amplify these debates and concerns by pushing to change the norms and mechanisms governing the use of public security technology.

This paper seeks to offer insights into how China’s domestic surveillance market and cyber capability ecosystem operate, especially given the limited number of systematic studies on the industry and its growing influence in the Global South. This issue brief focuses on the development of the Chinese surveillance industry and the firms that make it possible, including those firms that sell surveillance tools within the international surveillance market. The brief has four parts. The first discusses the development of China’s surveillance ecosystem. It specifically explores the establishment of the Golden Shield Project (GSP), a national Closed-Circuit Television (CCTV) network intended to digitize the public security sector, and its consequences for surveillance practices in China. The second section investigates China’s conception of “cyber sovereignty,” or wangluo zhuquan, which seeks to influence the governance of cyberspace. This idea and policy prerogative helps Beijing’s promotion of a controlled cyberspace and, therefore, the development of surveillance practices that rely on the use of artificial intelligence, big data, and biometric collection, among other means, to monitor citizens. The third and fourth sections carefully look at how private-public partnerships have empowered China’s cyberpower, while at the same time creating a more restrictive legal and political environment in China. What appears to make the party-state distinct from other exporters is the legal and political system from which these surveillance tools emerge—crucially, how China promotes their use in the Global South.1 The brief concludes by taking a close look at how the spread of Chinese surveillance tools is both a consequence of China’s supply capacity and local demand factors.

China’s domestic tech environment

In 2014, President Xi Jinping declared that there was “no national security without cybersecurity.”2 For the Chinese Communist Party (CCP), surveillance technology research and development support the party’s intention to be a global technology leader while also augmenting its means of domestic social control. Promoting social stability has been the chief policy goal of the party, and therefore the state, for years.3 As early as 1990, the State Council approved a proposal to establish a national information system.4 This includes the Golden Shield, or jindun gongcheng, program, a surveillance initiative launched by the state in 1998. Promoted by public security authorities, the primary aim of the initiative is to create a fully digitized public security sector using a national surveillance network to bolster the means of data management and state security capabilities. Walton’s seminal report, “China’s Golden Shield Corporations and the Development of Surveillance Technology in The People’s Republic of China,” examines the early developments of the GSP.5 It shows how the initiative relied on American- and Canadian-made technology. Recent government bidding documents offer further evidence that American companies supply some of the parts necessary for the project.6

The first phase of the GSP involved the digitization of the ministry, province, and city levels of public security, while the second phase’s intent has been to integrate all three levels of public security networks by establishing the means to foster information sharing between them.7 The project relies on information and communication technology (ICT) systems to enhance unified command, rapid response, and coordinated efforts to address the supposed challenges of crime. In its early stages, it was characterized mostly by surveillance cameras paired with more efficient ways of sharing data within state bureaus. The GSP has grown significantly in size and sophistication since its founding. It now includes 416 million surveillance cameras around the country that utilize artificial intelligence (AI) facial recognition technology.8 These developments also include many ostensibly benign technologies, like geolocation and storage servers, that support social control. The Police Geographic, for example, is a geolocation platform made by Tianjin Troila Technology that offers the police real-time spatial visualization data.9 This project enables the representation of space as grids for surveillance and knowledge building to serve security objectives. Valentin Weber and Vasilis Ververis bolster this argument in research published in August 2021. Their report examines various technologies, including geolocation, which form part of a layered assemblage of surveillance systems that support the tracking of vehicles and people.10

The “safe city model,” or Ping an chengshi, evolved from the GSP.11 Simply put, it is “a computational model of urban planning that promises to optimize operational efficacy and promote economic growth by leveraging ICT systems.”12 It is a commodity sold by Huawei at home, but also offered across the Global South. Currently, the safe city relies on integrating data from multiple sources that include utility companies, retail stores, and formal banks. This biometric data then feeds databases run by public security bureaus, which utilize facial recognition tools. The centralized information systems are known as city brains, or Chengshi danao.13 These efforts in part bring together civil-commercial actors with the state for the sake of data-driven governance. Underlying the turn towards data-driven governance is Beijing’s belief in a scientific outlook on development, or kexue fazhan guan, a notion that assumes that technical interventions can numerically capture and abate social challenges, like crime.14

Cyber sovereignty

In 2016, President Xi maintained that legal and political constraints must be accompanied by the development of technology at home and abroad.15 A goal of Beijing’s lobbying in multilateral institutions like the UN is the adoption of its conception of cyber sovereignty. Simply put, cyber sovereignty refers to respecting a nation’s right to choose the trajectory of its internet development and management.16 These lobbying efforts do not merely focus on technical norms and standards aimed at advancing network security, but also speak to the state’s right to control the flow of information within its borders. As it stands, Beijing’s notion of cyber sovereignty advocates a country’s sovereign right to delimit and control data flows based on its domestic security interests. From this vantage point, states should discourage interference in the internal affairs of others. This privileging of the state offers legitimacy and cover to Beijing’s predilection towards delimiting and controlling online activity, but also stands in contrast to Western commitments to cyber governance. While the United States and its allies “advocate for a more open, free, and multistakeholder approach, which provides open platforms for private actors and civil society organizations, China wishes to promote a complete counterapproach that asserts the interests of the government over non-state actors.”17

China’s domestic environment has nurtured a tech industry that supports the state’s aims to monitor, censor, and condition public opinion. The Golden Shield Project, popularly referred to as the “Great Firewall of China,” is the best illustration of these efforts. The initiative, managed by the Ministry of Public Security, relies crucially on filtering and censorship technologies that operate alongside domestic laws that limit and seek to curate online discourse. Margaret Roberts, in her work titled “Censored: Distraction and Diversion Inside China’s Great Firewall,” offers a systematic analysis that demonstrates how state agencies have created social media accounts that flood the internet with approved state media content intended to influence public opinion.18 The state’s attempt to control discourses also includes a desire to influence international opinion about China. Adam Segal’s in-depth 2020 essay describes how Beijing promotes cyber sovereignty, or wangluo zhuquan, as an organizing principle to prevent the flow of online information that threatens domestic political stability, foster technological supremacy and independence from the United States, and counter US global influence.19

China increasingly acts in accordance with its policy of cyber sovereignty. In 2017, the government told companies like Tencent, a giant internet-based technology company, to shut down websites that host content deemed socially and politically threatening.20 Weibo, a Chinese social media platform, made changes to its platform in 2018 to allow government censors to tag posts as unsubstantiated rumors.21 This corporate complicity has enabled and scaled up surveillance. Likewise, mass surveillance practices in Xinjiang—a matter that Beijing has claimed to be a domestic affair beyond international critique—involve several corporate actors. For instance, H3C has developed an internet protocol (IP) telephone network for the Xinjiang Public Security authorities.22 State agencies employ a multisource and layered surveillance system that uses mobile apps, biometric collection, artificial intelligence, and big data, among other means, to monitor and control thirteen million Turkic Muslims.

Public-private partnerships

State procurement of public security technology and innovation policy is driving China’s surveillance ecosystem. Surveillance tools expand the party-state’s capacity to conduct surveillance operations against populations presumed to threaten social stability, operations that result in both legal and extralegal measures against the supposed security challenge. Chinese tech start-ups are seeking to meet the demands of the country’s security services. Many cybersecurity firms in China focus on vulnerability research, threat detection, and security intelligence products, which they sell to the state.23 While these firms mostly rely on Chinese venture capital, they have grown to service clients globally. For example, Pangu Lab is a cybersecurity research team under Pwnzen Infotech that focuses on advanced security research in offensive and defensive cyber capabilities. Pwnzen Infotech has the backing of Qihoo 360, the largest provider of internet and mobile security products in China.24 Pangu Lab aims to be at the forefront of vulnerability research and to offer insights into the offensive and defensive techniques necessary to combat potential infiltration and exploitation. Pangu Lab’s founder, Han Zhengguang, is well-known in the Chinese cybersecurity industry for cracking the iPhone.25 According to Han, Pangu Lab conducts security research on iOS. Moreover, the team has discovered hundreds of zero-day security vulnerabilities in mainstream operating systems and popular applications, including Android and other leading mobile operating systems.26 Pangu Lab, like many new Chinese cybersecurity research firms, has connections to more established tech firms, but also forms part of an ecosystem of smaller firms and start-ups increasingly used by security services to conduct defensive and offensive cyber operations.27

Drawing attention to the development of China’s cybersecurity industry also means uncovering China’s national cyber ambitions, which are partly contingent on the rapidly advancing sector. Companies operating in this space are increasingly at the forefront of their respective fields, and their insights and products are sold to public security services in China.28 Party-state cyber capacities depend on private-public cooperation, whereby the state procures interception and intrusion technologies. Unlike the Israeli NSO Group, which claims to only sell products to state actors, Chinese start-ups like Pangu offer products to state and non-state actors. They justify their business model by pointing to the need for cybersecurity, but also to how their vulnerability research allows for better software.29

Many tech firms tailor their services to meet the demands of China’s security services. For example, Chinese companies like Haimeng, Jin Ruan, Ruitec, and Goldeweb have developed products to support the police in predictive policing and the management of targeted populations perceived to be threats to social stability.30 Arcvideo, like Megvii, also helps equip public security services and has established relationships with the Beijing Criminal Investigation Corps, the Wuhan Public Security Bureau, and six other local security organs.31 Megvii offers a range of digital solutions, which includes portable video equipment, covert video tracking capabilities, and AI-based analytics software. Western companies like IBM, Intel, Cisco, and Oracle have also provided hardware and software used in China’s surveillance network. Oracle sold the software to Liaoning police, which has enhanced their tracking of key objects, events, and people to better identify potential suspects.32 Scholars have also noted that other Chinese security services—including the Xinjiang police force—use Oracle’s data security service.33

Chinese leaders have criticized Chinese cyber researchers for doing work outside of China. Indeed, they have implored them to stay in China in order for the government to realize the strategic value of software vulnerabilities.34 As a result, Zhou Hongyi, the chairman and CEO of Qihoo 360, delisted the company from the New York Stock Exchange in 2016. Qihoo 360 then relisted in Shanghai in 2018 in part to qualify for Chinese government and military contracts.35 Likewise, Chen Xie, the CEO of Tophant, has claimed that Chinese firms dealing with cloud security, data security, zero trust, and privacy are more likely to receive contracts and funding from Beijing.36 Megvii, a partner of Chinese public security authorities, garnered sixty percent of its revenues from smart city contracts in 2020.37 Additionally, such access to mass population data enables firms like Megvii to better train their algorithms to identify human faces.38 As such, given the financial incentives to work with the CCP, companies have few reasons not to develop and supply technologies for public security officials. Private firms within the technology sector, particularly in the cybersecurity space, are increasingly offering their insights and services to the Chinese government, even as they assert ignorance about their collaborative ventures with the state.39

While encouraging the private-public partnerships that have capacitated its cyber power, the Chinese government has also created a more restrictive environment for researchers. Chinese cyber researchers are now effectively banned from participating in international hacking events and competitions, which they once dominated.40 If researchers wish to participate in an international competition, they must ask for permission, which the state rarely grants.41 Additionally, they must submit their knowledge of software vulnerabilities to security services before attending any international event, giving Chinese security officials a comparative advantage over the United States concerning defensive or offensive hacking operations.

While direct engagement with the private and public sectors varies between firms, Chinese technology firms operate under a more restrictive legal environment. The 2016 cybersecurity law, 2021 data security law, and 2017 national intelligence law form a series of laws that obligate firms to cooperate with state security organs when requested.42 Lucero contends that this environment of increasing rigidity has exacerbated a bureaucratic architecture that prioritizes political stability over economic efficiency.43 Such a move has reportedly resulted in “increased centralization and ideological control with fear and paralysis.”44 Accordingly, these rules establish obligations for firms to cooperate with party-state organs by sharing data that is believed to threaten or promote national security interests. Certainly, it appears that these recent changes to the Chinese system have occurred without any legal recourse or administrative means for firms to decline requests made by state security officials.45

Pointedly, the shift towards public stability and security as the primary objective of the party-state has led to a stricter environment for corporations. For example, the new intelligence law requires companies to contribute to government intelligence work by sharing their data when requested by security officials. Simultaneously, this change is unfolding alongside progress being made in personal consumer rights in China. Two recent legal statements challenge this view of a more restrictive legal environment.46 The first, by the Beijing-based Zhong Lun law firm, was submitted to the Federal Communications Commission during its proceedings regarding concerns around Huawei. At the time, Huawei representatives were sending documents to state officials and organs around the world in support of the company as a safe and reliable vendor. The “Zhong Lun declaration” discusses statutory laws passed by China’s Standing Committee, and crucially contends that the current national cybersecurity law, national intelligence law, and anti-terrorism law do not necessarily require tech firms to cooperate with Beijing or obligate them to offer backdoor access to data. This position is further supported by the second statement, made by the British law firm Clifford Chance, which was employed by Huawei to issue a legal opinion supposedly in concurrence with the Zhong Lun declaration. Despite these notable interventions, it is a misstep to simply focus on what Chinese law says about the party-state and what it can demand of firms. It is more salient, I argue, to know what the government can actually do, regardless of what the law says. These interventions on behalf of Huawei assume that Beijing is meaningfully constrained by law.

In this light, scholars like Donald Clarke contend that these two legal statements offer a misleading conclusion, and that the arguments do not ameliorate US national security concerns.47 While discussing some key features of the intelligence law, the Zhong Lun declaration focuses on a limited subset of mandatory rules and crucially ignores a number of other rules that require cooperation. The declaration contends that companies can simply decline requests from state security officials and, if their legal rights have been violated, can pursue remedy through administrative review and through the court system.48 This view implies that there are judicial checks on state excesses. However, there is as yet no evidence of a case in which an enterprise or citizen received such a remedy as a result of such violations. The rights asserted in the Zhong Lun declaration—and supposedly respected—are not clearly defined and stated. For these reasons, it is unlikely that the CCP is meaningfully and substantially constrained by law.49

The party-state utilizes all-encompassing surveillance practices that mobilize the national CCTV network and cyber researchers to bolster its cyber power. This policy, in part, relies on a more rigid regulatory environment. Strategies ranging from buying company shares to requiring the establishment of party committees within firms allow for state-overseen enterprises. Weber and Ververis contend that the procurement of Chinese surveillance tools may expose Western individuals to privacy risks, as the backdoors used for domestic surveillance in China are exported to foreign markets, unless the tech firms choose to sell a more secure version of public security technologies for international customers.50 Researchers like Honovich have unearthed and warned of various cybersecurity vulnerabilities in Hikvision cameras.51 Currently, there is no empirical evidence from the ground that demonstrates systematic coordination between Beijing and Hikvision in the purposeful theft of personal data. This concern, however, remains an escalating vulnerability. For example, African Union (AU) staffers discovered that the China-based hacking group Bronze President had “rigged a cluster of servers in the basement of an administrative annex to steal surveillance videos from across the AU’s sprawling campus in Addis Ababa, Ethiopia’s capital.”52 As such, it is paramount to promote and advance supply chain integrity given the real risk of designed backdoors in hardware or software.

Conclusion

The global push factors of China’s surveillance tools

Beyond its cyber power ambitions at home, China’s drive for tech and cybersecurity leadership extends globally. Research from Steven Feldstein found that Chinese companies supply AI surveillance technology in sixty-three countries, thirty-six of which have signed onto China’s Belt and Road Initiative.53 Accordingly, these technologies, developed for the GSP, are now exported across the globe. Many of these surveillance programs are established through third parties and subsidiaries of Chinese companies.54 To be clear, the selling of digital monitoring tools and cyber capability technologies is not unique to Chinese vendors. Many non-Chinese enterprises, including Western firms, are involved in the sale of cyber capabilities and surveillance tools.55 This focus on Chinese technology does not aim to obfuscate the broader transnational market of digital surveillance tools, which indubitably includes American actors. Rather, the paper illustrates how the procurement of Chinese technology appears to be a result of both Chinese supply and local demand factors. What is unique about Beijing is how it goes about promoting public security systems in the Global South.

The party-state utilizes multilateral institutions like the BRICS (Brazil, Russia, India, China, and South Africa), an emerging markets group, the Belt and Road Initiative, and the Forum on China-Africa Cooperation (FOCAC) to promote its surveillance platforms across the Global South.56 In particular, through FOCAC and the China-Africa Defense Forum, China has signed resolutions to increase cooperation in areas like counterterrorism, safe city projects, and cybersecurity.57 China also supplements this promise with commitments to offer finance, technical assistance, and training to African governments on topics ranging from digital forensic techniques to cybersecurity.58 These efforts reflect Beijing’s aims to influence international norms through multilateral institutions, which further normalize and seek to legitimize its surveillance practices at home.

These trends are particularly prevalent in a handful of African countries. The China-Africa Internet Development and Cooperation Forum held in August 2021 offers an example of China’s aims to implement a joint China-Africa partnership to advance digitization and promote its notion of cyber sovereignty.59 Additionally, Beijing’s efforts to shape cybersecurity standards and regulations in part garner legitimacy from its digital development aid and projects in Africa.

The proliferation of surveillance technology has, unsurprisingly, had clear effects on law enforcement practices. For example, the use of Chinese surveillance technologies in South Africa has risen largely in tandem with police-to-police training and cooperation—like the 2018 South African delegation tour of Shanghai’s Public Security Bureaus to learn how to improve policing techniques.60 Similarly, the Botswana Police Services enlisted Huawei to install 500 surveillance cameras in Gaborone and Francistown, including inside commercial buildings, as part of a two-year deal with the company’s Safe City Project.61

Utilizing ICT systems and services, the Kenyan government aims to foster a safe city project in which digital surveillance systems are incorporated into Nairobi’s city infrastructure to advance development and security ambitions. Working with Huawei and Safaricom, the government established the first African safe city in Nairobi, connecting 200 high-definition traffic surveillance cameras and 1,800 high-definition cameras.62 These integrated platforms also include a high-speed private broadband network and a command center for the National Police Service, which supports over 9,000 police officers in 195 police stations. Through these digital surveillance systems, the safe city platform aims to meet several service delivery demands, including real-time surveillance, evidence collection, and video browsing, that purportedly support accelerated police response, recovery missions, and crime prevention.

Huawei’s safe city platforms are promoted as solutions to crime and rising terrorist threats, and governments in the Global South procure them on the grounds that expanded surveillance capacity will address growing trepidations around crime and terrorism. Yet, in part due to the dearth of publicly available data, the benefits of the safe city platforms are difficult to verify and appear grossly overstated by Huawei.63 According to the company, crime rates decreased by 46 percent from 2014 to 2015 in areas supported by its platform.64 However, Kenya’s police services report lower crime-reduction rates during those years.65 Indeed, Nairobi and Mombasa, the two cities supported by Huawei’s safe city platforms, saw an increase in reported crime between 2017 and 2018.

While China’s surveillance system is confined to its national borders, the companies that make its surveillance state possible are now actively selling their tools abroad. Given the growing influence of these firms and the spread of digital surveillance tools, scholars like Feldstein contend that the party-state is not only supporting the proliferation of digital public security technologies, but also enabling the rise of authoritarianism. This kind of argument, I contend, presumes a coordinated effort between the party-state and technology firms to export Chinese norms and repressive practices overseas. While this argument draws attention to Chinese push factors, it ignores local demand features. Moreover, it lacks robust empirical evidence from the ground to establish the consequences of Beijing’s promotional efforts.66 For instance, the use of surveillance tools in Kenya, and across Africa, is supposedly a means to improve service delivery and law enforcement; technologies are adopted in order to address such structural and political challenges.67 The extent of technology and regulation diffusion, and indeed whether it undercuts civil liberties, is greatly contingent on the political and legal environment of the recipient African country.

We have yet to observe Beijing promoting party-state solutions for public instability in the Global South; currently, Huawei’s safe city technologies are marketed as solutions to local concerns around crime and terror. Still, China’s active “push” of domestic surveillance technologies is a critical force in shaping African surveillance ecosystems. As such, highlighting how Beijing leverages local demand factors to advance its own geopolitical interests should not be viewed as an attempt to downplay African state agency in determining the application of public security technologies. For these reasons, Africa, and other regions, must be carefully studied both on their own terms and as places enmeshed in wider relations. The companion report to this issue brief will focus on the key “pull factors” from African countries and their significance for US interests. More to the point, we must engender even-handed studies that demonstrate the degree to which local agency is shaping Africa-China relations while also underscoring the interplay between local political commitments and Chinese state ambitions.68 This more proportional analysis expands our understanding and offers insights into the long-term consequences of Beijing’s growing cyber power on the global stage.

Endnotes


1 Bulelani Jili, The Rise of Chinese Surveillance Technology in Africa.

2 President Xi Jinping, “China Must Evolve from a Large Internet Nation to a Powerful Internet Nation,”  习近平:把我国从网络大国建设成 为网络强国, Xinhuanet, February 27, 2014, http://news.xinhuanet.com/politics/2014- 02/27/c_119538788.htm. Also see, William Wan, “Chinese President Xi Jinping takes charge of new cyber effort,” Washington Post, February 27, 2014, https://www.washingtonpost.com/world/chinese-president-takes-charge-of-new-cyber-effort/2014/02/27/a4bffaac-9fc9-11e3-b8d8-94577ff66b28_story.html

3 Samantha Hoffman, Engineering Global Consent: The Chinese Communist Party’s Data-Driven Power Expansion, Australian Strategic Policy Institute, October 14, 2019, https://www.aspi.org.au/report/engineering-global-consent-chinese-communist-partys-data-driven-power-expansion.

4 See, for example: Peter Mattis, “China’s Adaptive Approach to the Information Counter-Revolution,” Jamestown Foundation China Brief, 11 No.10 (June 3, 2011), https://jamestown.org/program/chinas-adaptive-approach-to-the-information-counter-revolution/; Yu Xu and Hongren Zhou, “Analysis and Forecast on China’s Informatization,” 中国信息化形势分析和预测, Beijing: Social Sciences Academic Press, 2010; and Qin Liang, “Public Security Information Industry Overview,” 公安信息化行业概况, Sealand Securities, 2019, https://web.archive.org/web/20201119002413/pg.jrj.com.cn/acc/Res/CN_RES/INDUS/2016/1/26/3d0f5812-fd68-4045-8dd7-c7a2b04862c6.pdf.

5 Greg Walton, China’s Golden Shield Corporations and the Development of Surveillance Technology in The People’s Republic of China, International Centre for Human Rights and Democratic Development, Montreal, 2001, https://ora.ox.ac.uk/objects/uuid:084840ac-b192-407b-ab6c-f8f810310369/download_file?file_format=pdf&safe_filename=CGS_ENG.pdf&type_of_work=Book.

6 Walton, China’s Golden Shield.

7 Yu Xu and Hongren Zhou, Analysis and Forecast on China’s Informatization, 中国信息化形势分析和预测, Beijing: Social Sciences Academic Press, 2010.

8 Valentin Weber and Vasilis Ververis, “China’s Surveillance State: A Global Project,” Top10VPN, August 2021, https://www.top10vpn.com/assets/2021/07/Chinas-Surveillance-State.pdf.

9 Troila Technology, “Police Geographic Information Service Platform,” 警用地理信息服务平台, 2022, https://www.troila.com/jiejuefangan?id=159.

10 Weber and Ververis, “China’s Surveillance State: A Global Project.”

11 Samantha Hoffman, “China’s Tech-Enhanced Authoritarianism.” (Written Testimony before the House Permanent Select Committee on Intelligence), US Congress, May 16, 2019, https://www.congress.gov/116/meeting/house/109462/witnesses/HHRG-116-IG00-Wstate-HoffmanS-20190516.pdf

12 Bulelani Jili, “Chinese ICT and Smart City Initiatives in Kenya,” Asia Policy, 17, no. 3 (July 2022): 44, https://www.nbr.org/wp-content/uploads/pdfs/publications/asiapolicy17.3_africa-china_relations_rt_july2022.pdf; and Shannon Mattern, “A City Is Not a Computer: Other Urban Intelligences,” Princeton: Princeton University Press, 2021, DOI: 10.2307/j.ctv1h9dgtj.

13 See, for example: Sohu, “Shanghai’s ‘Public Security Brain’ Is Upgraded to ‘City Brain’,” 上海“公安大脑”将升级为“城市大脑, 2019, https://www.sohu.com/a/338738299_649849.

14 The concept kexue fazhan guan or scientific outlook on development was initially discussed in CCP circles as early as 2003. However, it was only introduced to the public by Hu Jintao in January 2004. It was later adopted by the National People’s Congress as a new guideline for social and economic development in March 2004.

15 President Xi Jinping, “Speech at the Symposium on Network Security and Informatization,” 习近平在网信工作座谈会上的讲话全文发表, Xinhuanet, April 19, 2016, http://www.xinhuanet.com//politics/2016-04/25/c_1118731175.htm.

16 See, for further elaboration of the concept: President Xi Jinping, “Speech at the Opening Ceremony of the Second World Internet Conference,” 习近平在第二届世界互联网大会开幕式上的讲话, Xinhuanet, December 16, 2015, http://www.xinhuanet.com//politics/2015-12/16/c_1117481089.htm; and Rogier Creemers, “China’s Conception of Cyber Sovereignty: Rhetoric and Realization,” in Governing Cyberspace: Behavior, Power, and Diplomacy, SSRN (February 5, 2020) 1-34, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=353242.

17 Bulelani Jili, The Rise of Chinese Surveillance Technology in Africa; Tian Shaohui, ed., “International Strategy of Cooperation on Cyberspace,” Xinhuanet, March 1, 2017, http://www.xinhuanet.com//english/china/2017-03/01/c_136094371.htm.

18 Margaret Roberts, Censored: Distraction and Diversion Inside China’s Great Firewall, Princeton: Princeton University Press, 2018.

19 Adam Segal, “China’s Vision for Cyber Sovereignty,” National Bureau of Asian Research (NBR) Special Report 87 (2020): 85-117, https://www.nbr.org/publication/chinas-vision-for-cyber-sovereignty-and-the-global-governance-of-cyberspace/.

20 Reuters, “China shuts 128,000 ‘harmful’ websites in 2017 – Xinhua,” January 8, 2018, https://www.reuters.com/article/china-internet/china-shuts-128000-harmful-websites-in-2017-xinhua-idINKBN1EX2GO; and People’s Daily, “Last Year’s Top Ten ‘anti-pornography and Illegal’ Cases Announced,” January 9, 2018, 2018. 去年“扫黄打非”十大案件公布, http://politics.people.com.cn/n1/2018/0109/c1001-29752891.html; Cyberspace Administration of China, “Opinions on Further Intensifying Website Platforms’ Entity Responsibility for Information Content,” China Law Translate, September 15, 2021, https://www.chinalawtranslate.com/en/content-responsibility/.

21 See, for example: Yuan Yang, “Beijing Now Able to Flag Weibo Posts as Rumor,” Financial Times, 2018, https://www.ft.com/content/e21369fe-e0db-11e8-8e70-5e22a430c1ad; Yang Ziyu, “Weibo Gives Media, Government Power to Quash ‘Rumors’,” Sixth Tone, November 3, 2018, https://www.sixthtone.com/news/1003152/weibo-gives-media%2C-government-power-to-quash-rumors.

22 See, for example: H3C, “Xinjiang Public Security Dedicated Line IP Telephone System Project,” 新疆公安专线IP电话系统项目, 2007, https://web.archive.org/web/20210514091329/http:/www.h3c.com/cn/Products___Technology/Products/Router/IP_Voice/Home/Success_Stories/200712/322755_30003_0.htm; Government Procurement of Xinjiang, “Announcement of the Winning Bid for the Upgrade Project of Yili Prefecture Public Security Bureau,” 伊犁州公安局党政军链路升级工程项目中标(成交)结果公告, April 1, 2021, https://web.archive.org/web/20210514094226/http:/www.ccgp-xinjiang.gov.cn/ZcyAnnouncement/ZcyAnnouncement4/ZcyAnnouncement3004/KG3KrdvMzw/o/pLznRbnoQ==.html.

23 Margin Research, “The Chinese Private Sector Cyber Landscape,” April 25, 2022,  https://margin.re/media/the-private-sector-chinese-offensive-cyber-landscape.aspx.

24 Pei Li and Cate Cadell, “At Beijing Security Fair, an Arms Race for Surveillance Tech,” Reuters, May 30, 2018, https://www.reuters.com/article/ctech-us-china-monitoring-tech-insight-idCAKCN1IV0OY-OCATC.

25 Qi Anxin Group, “Interview| Qian Pangu, the Strongest Guardian of Mobile Security,” 专访|奇安盘古,做移动安全的最强守护者, https://www.qianxin.com/news/detail?news_id=2664.

26 Qi Anxin Group, “Interview| Qian Pangu.”

27 Margin Research, “The Chinese Private Sector Cyber Landscape.”

28 Winnona DeSombre, Lars Gjesvik, and Johann Ole Willers, Surveillance Technology at the Fair: Proliferation of Cyber Capabilities in International Arms Markets, Atlantic Council, November 8, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/surveillance-technology-at-the-fair/.

29 Pangu Lab, “Pangu Research Lab,” https://pangukaitian.github.io/pangu/?lg=en  

30 Jin Ruan Science and Technology, “System Solution for the Construction of the Social Security Prevention and Control System,” 社会治安防控体系建设系统解决方案, March 1, 2022, https://archive.ph/Yzlez#selection-381.0-381.16.

31 Arcvideo Tech, “Business Scenario of Security+ AI commercial landing practice,” 安防+AI 业务场景驱动的商业化落地实践, https://web.archive.org/web/20220316140031/http://cpsforum.com.cn/15th/Public/Home/images/dh.pdf.

32 Mara Hvistendahl, “How Oracle Sells Repression in China,” The Intercept, February 18, 2021, https://theintercept.com/2021/02/18/oracle-china-police-surveillance/.

33 Weber and Ververis, “China’s Surveillance State: A Global Project.”

34 See, for example: Cyberspace Administration of China, “Regulations on the Management of Network Product Security Vulnerabilities,” 工业和信息化部 国家互联网信息办公室 公安部关于印发网络产品安全漏洞管理规定的通知, December 7, 2021, https://archive.ph/9cL8j#selection-713.0-713.41; Sina Technology, Zhou Hongyi interview, 周鸿祎接受采访, September 12, 2017, https://tech.sina.cn/i/gn/2017-09-12/detail-ifykusey8931658.d.html?vt=4 ; and Patrick Howell O’ Neill, “How China Built a One-of-a-Kind Cyber-Espionage Behemoth to Last,” MIT Technology Review, February 28, 2022, https://www.technologyreview.com/2022/02/28/1046575/how-china-built-a-one-of-a-kind-cyber-espionage-behemoth-to-last/#:~:text=Computing-,How%20China%20built%20a%20one%2Dof%2Da%2Dkind%20cyber,is%20paying%20off%20for%20China.&text=The%20%E2%80%9Cmost%20advanced%20piece%20of,to%20use%20was%20revealed%20today.

35 See, for example: Laura He, “Chinese Internet Security Firm Coming Home from US Valued at US$62bn, Drops 10pc on Shanghai Debut,” South China Morning Post, February 28, 2018, https://www.scmp.com/business/companies/article/2135098/chinese-internet-security-firm-coming-home-us-valued-us62bn-drops; and Elsa Kania and Lorand Laskai, Myths and Realities of China’s Military-Civil Fusion Strategy, Center for New American Security, January 28, 2021, https://www.cnas.org/publications/reports/myths-and-realities-of-chinas-military-civil-fusion-strategy.

36 Margin Research, “The Chinese Private Sector Cyber Landscape.”

37 Megvii Technology Limited, “IPO prospectus of Megvii Technology Limited,” 2020, https://static.sse.com.cn/stock/information/c/202103/bab29f856dc5431d931548cd27304d80.pdf.

38 Bulelani Jili, The Rise of Chinese Surveillance Technology in Africa, Electric Privacy Information Center (EPIC), May 31, 2022, https://epic.org/the-rise-of-chinese-surveillance-technology-in-africa/.

39 Minghe Hu, “Coronavirus: WeChat, Alipay Deny Helping Government Identify 350,000 Users Who Visited Beijing Food Market,” South China Morning Post, June 15, 2020, https://www.scmp.com/tech/apps-social/article/3089068/coronavirus-wechat-alipay-deny-helping-government-identify-350000.

40 Chris Bing, “China’s Government Is Keeping Its Security Researchers from Attending Conferences,” Cyberscoop, March 8, 2018, https://www.cyberscoop.com/pwn2own-chinese-researchers-360-technologies-trend-micro/.

41 See, for example: Cyberspace Administration of China, “Regulations on the Management of Network Product Security Vulnerabilities.”

42 Standing Committee, National Intelligence Law, 中华人民共和国国家情报法, China Law Translate, June 27, 2017, https://www.chinalawtranslate.com/national-intelligence-law-of-the-p-r-c-2017/.

43 Karman Lucero, “In China, Planning Towards AI Policy Paralysis,” New America, January 15, 2020,  https://www.newamerica.org/cybersecurity-initiative/digichina/blog/china-planning-towards-policy-paralysis/

44  Karman Lucero, “In China, Planning Towards AI policy Paralysis.”

45 Standing Committee, National Intelligence Law.

46 Chen Jihong and Jianwei Fang, “The Zhong Lun Declaration,” 2018,  https://perma.cc/L9BF-4JNY.

47 Donald Clarke, “The Zhong Lun Declaration on the Obligations of Huawei and Other Chinese Companies under Chinese Law,” for George Washington University Law School, SSRN (March 17, 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3354211.

48 Chen and Jianwei, “The Zhong Lun Declaration.”

49 See, for example: Susan Lawrence and Michael Martin, “Understanding China’s Political System,” Congressional Research Service, March 20, 2013, https://sgp.fas.org/crs/row/R41007.pdf.

50 Weber and Ververis, “China’s Surveillance State: A Global Project.”

51 John Honovich, “Hikvision Has ‘Highest Level of Critical Vulnerability,’ Impacting 100+ Million Devices,” IPVM, September 20, 2021, https://ipvm.com/reports/hikvision-36260.

52 Raphael Satter, “Exclusive-Suspected Chinese Hackers Stole Camera Footage from African Union – Memo,” Reuters, December 16, 2020, https://www.reuters.com/article/us-ethiopia-african-union-cyber-exclusiv-idINKBN28Q1DB.

53 Steven Feldstein, The Global Expansion of AI Surveillance, Carnegie Endowment for International Peace, September 17, 2019, https://carnegieendowment.org/2019/09/17/global-expansion-of-ai-surveillance-pub-79847.

54 Weber and Ververis, “China’s Surveillance State: A Global Project.”

55 See, for example: Feldstein, The Global Expansion of AI Surveillance; Walton, China’s Golden Shield; Bulelani Jili, The Rise of Chinese Surveillance Technology in Africa.

56 Bulelani Jili, The Rise of Chinese Surveillance Technology in Africa.

57 Michael Kovrig, China Expands its Peace and Security Footprint in Africa, International Crisis Group, October 24, https://www.crisisgroup.org/asia/north-east-asia/china/china-expands-its-peace-and-security-footprint-africa; Yao Jianing, ed., “China to Host First China-Africa Defense Forum,” China Daily, June 1, 2018, http://eng.mod.gov.cn/news/2018-06/01/content_4815796.htm; Yin Hang, “Wei Fenghe Meets with Representatives of the First China-Africa Defense and Security Forum,” 魏凤和会见首届中非防务安全论坛代表, Ministry of National Defense People’s Republic of China, press release, July 10, 2018, http://www.mod.gov.cn/topnews/2018-07/10/content_4818896.htm.

58 Heidi Swart, “Joburg’s New Hi-Tech Surveillance Cameras: A Threat to Minorities That Could See the Law Targeting Thousands of Innocents,” Daily Maverick, September 28, 2018, https://www.dailymaverick.co.za/article/2018-09-28-joburgs-new-hi-tech-surveillance-cameras-a-threat-to-minorities-that-could-see-the-law-targeting-thousands-of-innocents/.

59 State Council Information Office, “China and Africa in the New Era: A Partnership of Equals,” White Paper, November 26, 2021, http://english.scio.gov.cn/whitepapers/2021-11/26/content_77894768_4.htm; Li Zhengwei, “The China-Africa Internet Development and Cooperation Forum Held,” 中非互联网发展与合作论坛举办, Guangming, August 24, 2021, https://m.gmw.cn/baijia/2021-08/24/35106965.html.

60 Li Wanyi, “Delegation of South African Parliament Police Committee Visits Shanghai,” 南非议会警察委员会代表团访问上海, Jiefang Daily, October 14, 2017, http://shzw.eastday.com/shzw/G/20171014/u1a13342865.html.

61 Frank Hersey, “Digital ID in Africa This Week: Biometrics for Tea Workers, Financial Inclusion with a Thumbprint,” Biometric Update, August 23, 2019, https://www.biometricupdate.com/201908/digital-id-in-africa-this-week-biometrics-for-tea-workers-financial-inclusion-with-a-thumbprint.

62 See, for example: Bulelani Jili, “Chinese Surveillance Tools in Africa,” China, Law, and Development, No. 8, June 30, 2020, https://cld.web.ox.ac.uk/files/finaljilipdf; Huawei, “Video Surveillance as the Foundation of ‘Safe City’ in Kenya,” Huawei Industry Insights, 2019, https://www.huawei.com/us/industry- insights/technology/digital-transformation/video/video- surveillance-as-the-foundation-of-safe-city-in-kenya; Steven Feldstein, “Testimony Before the U.S.-China Economic and Security Review Commission Hearing on China’s Strategic Aims in Africa,” US-China Economic and Security Review Commission, May 8, 2020, https://www.uscc.gov/sites/default/files/2020-06/May_8_2020_Hearing_Transcript.pdf.

63 Rachel Bernstein et al., “Expanding Engagement: Perspectives on the Africa-China Relationship,” 46.

64 See, for example: Huawei, “Huawei Hosts Safe City Summit in Africa to Showcase Industry Best Practices,” news release, October 17, 2016, https://www.huawei.com/us/news/2016/10/safe-city-summit-africa; Integrated Solutions, “Safe City Summit in a Safe City,” Hi-Tech Security Solutions, February 2017, http://www.securitysa.com/56445n.

65 National Police Service, Annual Crime Report 2018, National Police Service of the Republic of Kenya, accessed April 25, 2022, http://www.nationalpolice.go.ke/crime-statistics.html.

66 Bulelani Jili, “Chinese Surveillance Tools in Africa.”

67 Bulelani Jili, “Africa: Regulate Surveillance Technologies and Personal Data,” Nature, 607, No. 7919 (2022), 445–448.

68 Bulelani Jili, The Rise of Chinese Surveillance Technology in Africa.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post China’s surveillance ecosystem and the global spread of its tools appeared first on Atlantic Council.

How the US can focus its fight against foreign influence operations https://www.atlanticcouncil.org/content-series/hybrid-warfare-project/how-the-us-can-focus-its-fight-against-foreign-influence-operations/ Fri, 30 Sep 2022 14:25:49 +0000 https://www.atlanticcouncil.org/?p=569552 Understanding exactly what US adversaries plan to do in the information space is vital to building domestic defenses.

Intelligence is all about decisions: How to allocate limited personnel and technological resources when national security is at stake, and how to convey complex information and resulting assessments to policymakers for awareness and action. The decisions are seemingly endless but are vital to producing the best analysis for key officials on topics that have the greatest impact on national security. 

The United States has a massive intelligence ecosystem that gathers more information on more issues than any other country in the world. The true value of this vast amount of information lies in how it is curated, analyzed, and presented to policymakers. To aid in this vital process, the US government has a guide—the National Intelligence Priorities Framework (NIPF)—to identify intelligence priorities and assist agencies and departments with where to focus their efforts. 

During the Cold War, the NIPF focused on political, economic, and proliferation issues related to the Soviet Union and its allies, from the performance of the Soviet economy to details about new fighter jets being developed by Moscow and deployed to other countries. In a post-September 11 world, the fight against terrorism took center stage, with an emphasis on determining where the next attack against the United States or its allies could come from, as well as gleaning the goals of various organizations and the locations of their leaders. 

The world is now in another new era, one in which information—and what is viewed as truth—is a central national-security concern. As such, the NIPF needs to include requirements that push analysts to discover how adversaries manipulate the information environment to meet their goals. It should task the Intelligence Community with assessing where, how, and to what extent states and organizations weaponize propaganda, mis- and disinformation, as well as political and social manipulation. While conversations on this issue date back to the mid-1990s, the day-to-day impact of such influence campaigns—combined with the technological capability to spread them quickly—means the United States must finally act.

Tweets, Facebook posts, and YouTube videos are not disparate pieces of content, but rather puzzle pieces that, when combined, reveal to intelligence analysts what their adversaries are working toward. From the actors actually carrying out these influence campaigns across the digital media space to the entities that oversee their strategic implementation, the entire system is akin to a completed puzzle—one that analysts and policymakers alike need to see in order to fully understand an adversary’s goals and objectives.

Understanding what adversaries plan to do in the short (one-year) and medium (three-year) term is vital to building domestic defenses. That’s why the following questions should serve as a starting point for developing new NIPF requirements: 

  • What are the strategic goals of an adversary’s use of influence campaigns? 
  • Who are the targets of influence campaigns, and why were they chosen? 
  • What are the objectives of influence campaigns against the United States and its allies, and are there any specific timelines?
  • Who is responsible for crafting each adversary’s influence strategy? 
  • What fiscal allocation is provided to those programs? 
  • What government and non-government ministries, offices, or groups are responsible for conducting influence operations? How and why are they selected?
  • How are influence activities validated, measured, and evaluated? 
  • What training is provided to tactical- and operational-level influence staff? 
  • What tactics are used in influence campaigns? How are they selected based on target audiences? 

Though by no means comprehensive, these basic influence-related requirements in the NIPF can compel the Intelligence Community to allocate resources toward building out a more robust understanding of how adversaries approach influence campaigns and exactly who is calling the shots. Understanding how influence is being used against the United States and its allies could also help the government better position all its agencies—from the State and Commerce departments to members of the Intelligence Community—to build offensive influence campaigns that persuade key audiences of Washington’s own goals and objectives. 

Servicing NIPF priorities is no longer exclusively the domain of human-intelligence collectors at the Central Intelligence Agency (CIA) and Department of Defense, the signals-intelligence collectors at the National Security Agency, and counterintelligence agents at the Federal Bureau of Investigation. Open Source Enterprise and similar US government organizations can use their open-source intelligence (OSINT) resources—both human and technology-based—to support the effort. Because foreign-influence operations often play out in the public domain, they can usually be identified, traced, and evaluated to determine their effectiveness against the targeted audience. Experts can piece together the goals and objectives of a specific campaign through OSINT, saving scarce resources such as a CIA operations officer’s time for higher-level collection on those who are actually conceiving, managing, and implementing influence campaigns.

Currently, the US government does not have a lead organization to manage offensive or defensive influence activities. As the Department of Homeland Security recently found, how a government entity frames intelligence-gathering on adversarial actions against US and allied audiences is politically fraught. Americans are culturally sensitive to any suggestion that the government could manipulate their views on issues or their access to information—from traditional news to social media content. A recent effort to establish a government office that works to limit Americans’ exposure to mis- and disinformation was viewed across the political spectrum as untenable and inappropriate. 

But that does not mean the task is unnecessary or in violation of American civil liberties. Establishing a multi-agency task force of experts could be a viable first step: It would act as a manager tasking intelligence collection to better understand foreign influence operations; as a consumer of the newly gathered intelligence; and as an analyst producing formal reports for policymakers, as well as educational pieces for the US public to understand what it is seeing and hearing in the media, within social movements, and across politics. The goal would be to understand the “how” and “why” of foreign-influence campaigns and identify offensive campaigns in response that could advance US foreign-policy goals.

Difficult decisions need to be made around what is and is not included in the NIPF. Although there are only so many resources available to collect and analyze intelligence, prioritizing foreign-influence activities is vital. The information space is now at least as important—if not more so—than what happens on the physical battlefield.


Jennifer Counter is a nonresident senior fellow in the Forward Defense practice of the Atlantic Council’s Scowcroft Center for Strategy and Security.

The ITU election pitted the United States and Russia against each other for the future of the internet  https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/the-itu-election-and-the-future-of-the-internet/ Thu, 29 Sep 2022 19:17:56 +0000 https://www.atlanticcouncil.org/?p=571527 Earlier this morning, the International Telecommunication Union (ITU) elected American candidate Doreen Bogdan-Martin as the agency's Secretary-General. Even with her election, the future role of the ITU in internet governance remains uncertain, and the organization will face challenges in the future debate over respecting extant internet processes while trying to drive genuine progress—and Beijing and Moscow will certainly not sit on the sidelines.

Earlier this morning, the 193 member states of the International Telecommunication Union (ITU), the United Nations’ (UN) tech agency and oldest institution, elected as Secretary-General the American candidate Doreen Bogdan-Martin, the first-ever woman to head the ITU. Bogdan-Martin is the current head of the ITU’s development bureau, ITU-D. Her now-former opponent, Russian candidate Rashid Ismailov, is president of Russian telecom VimpelCom, former deputy minister of Russia’s Ministry of Communications, and a former executive at Chinese telecom company Huawei. The current Secretary-General is Houlin Zhao, a Chinese citizen who has held the position since 2014.1

Many challenges to an open and global internet lie ahead, but the US win comes as a relief to the internet community. Nonetheless, the way the election for the ITU’s leadership unfolded underscores how internet governance processes, international internet policymaking, and internet standards creation are becoming increasingly political. In an unprecedented move, for instance, both US President Joe Biden and US Secretary of State Antony Blinken posted messages in support of the US candidate.

For the United States, it was evident that this election was a foreign policy issue—and rightly so. Over the years, the Russian and Chinese governments have grown closer in pushing for a state-controlled vision of internet governance, and both have long wished to see the UN play a central role in the management of the internet. Their vision is gaining traction, especially among African countries, which have historically felt excluded from internet governance conversations and see the ITU as one of the few places they can wield political power. In addition, Vladimir Putin’s invasion of Ukraine might have strained Russia’s relationship with the West, but for many other parts of the world, it remains business as usual. 

At the center of the election, therefore, was indeed the future role of the ITU in governing the internet. The organization currently has little involvement, but some governments maintain an interest in the ITU becoming more central to the process. Presently, internet governance is largely the purview of the Internet Engineering Task Force (IETF), a nonprofit, multi-stakeholder internet standards-setting body, and the Internet Corporation for Assigned Names and Numbers (ICANN), a nonprofit that, along with five regional internet registries, manages domain names and internet protocol (IP) addresses globally. This governance system, though imperfect, works because it is agile, inclusive of industry and civil society, and not directly subject to intergovernmental negotiations and maneuvering. It has worked based on a relatively common objective among these institutions: an open, global, and interoperable internet.  

However, not every country buys into this system. A number of countries, including Russia, China, and some in both Africa and the Asia-Pacific, look at the ITU as a more appropriate institution to manage the internet. Its broad development agenda has allowed the organization to become increasingly active on issues as wide-ranging as cybersecurity, connectivity, cybercrime, IP number allocation, and network management. At the same time, for decades, the Chinese government and the Russian government have both pushed for the ITU to have a greater role in governing the internet, from suggesting that the ITU literally take over ICANN to pushing for internet standards-setting to move to the ITU almost entirely. The United States, Japan, Australia, Germany, South Korea, and other open-internet supporters have managed to push back, but the tides may be shifting. More governments are adopting a “cyber sovereignty” approach that seeks to increase their perceived decision-making power or increase government surveillance online (or both).  

The stakes in the election, therefore, were high. A Russian-led, China-friendly ITU would most likely have sought more control over the internet, and judging from Moscow and Beijing’s past efforts, standards development is one of the likeliest routes. The Chinese government already knows this and has been working toward such a goal with its “New IP,” a proposal that seeks to centralize core functions of networking. The proposal has persisted in the ITU’s study groups for the past two years, and it recently moved to another study group dealing with environmental issues. Beijing has even renamed the standard “IPv6+” to repackage the same top-down protocol proposal as merely a technological advancement. In a similar vein, China submitted a proposal at another study group for the standardization of the “metaverse.” In such a volatile environment, an Ismailov victory would have increased the likelihood of the ITU passing proposals that put governments in control of the internet.

Heavy government involvement in standards setting with Russia at the ITU’s helm would be catastrophic for the internet. Presently, internet standards follow an open, participatory process and are voluntarily adopted on a global level; they serve as the building blocks for products and services targeted to meet the needs of consumers and the market. Now, try to imagine 193 different states negotiating standards about, say, privacy or security; the pace and the formality of an organization like the ITU cannot support the technical specificity and informality that internet standards setting requires. Not to mention, the same issues that have plagued UN cyber norms discussions would become more prominent in the ITU: the Russian and Chinese governments pushing for expansive definitions of terms like “information security” or “cybercrime” that allow them to promote censorship and surveillance under the guise of international security.

In order, therefore, to preserve an internet that is relatively open and globally connected while navigating the processes and politics at play, the ITU needed a leader who understands the value of collaboration and bottom-up coordination when it comes to the internet. The United States can deliver on this; Russia cannot. 

For Russia, the UN has always been a core part of its internet governance strategy. Although its pushes over the last thirty-or-so years for more UN involvement were unsuccessful, in 2019 Russia achieved an unexpected win when it secured the votes for a cybercrime treaty at the UN General Assembly. The Kremlin’s tech envoy celebrated this as a significant win and a sign of Russia’s influence in the UN. For Moscow, it was a step closer to a multipolar world in which the Russian government takes a more central role. The US victory means Russia does not yet have the votes to continue on this trajectory.

Even with Bogdan-Martin prevailing, it will be a rough road ahead to maintain an ITU that respects existing internet processes and institutions, while also trying to drive genuine progress in areas like internet development and capacity-building (which Bogdan-Martin presently leads at the ITU). Beijing and Moscow will not sit on the sidelines, as the past decades have shown. Not having a voting bloc to pass resolutions has not stopped the Chinese and Russian governments from “flooding the zone” with proposals before. But without a doubt, navigating a rough road with a US leader at the helm, experienced in internet development and a believer in an open internet, is better than cutting the brakes entirely. 

Authors

Konstantinos Komaitis (@kkomaitis) is an internet policy expert and author. 

Justin Sherman (@jshermcyber) is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative. 

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

[1] Although several candidates, representing different countries, could theoretically have run at once, the US and Russian governments were the only ones to throw a candidate’s hat in the ring.

The 5×5—The Internet of Things and national security https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-internet-of-things-and-national-security/ Wed, 28 Sep 2022 04:01:00 +0000 Five experts from various backgrounds assess the national security challenges posed by IoT and discuss potential solutions.

The post The 5×5—The Internet of Things and national security appeared first on Atlantic Council.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

The connection of mundane household gadgets, industrial machinery, lifesaving healthcare technologies, vehicles, and more to the Internet, a web of connected devices collectively known as the Internet of Things (IoT), has arguably made modern society more convenient and efficient. IoT devices worldwide number over 13 billion, a figure estimated to balloon to over 29 billion by 2030. For all its benefits, however, the IoT has exposed everyday users, as well as entire economic sectors, to cybersecurity threats. For example, criminal groups have exploited IoT product insecurities to infect hundreds of thousands of devices around the world with malware in order to enlist them in distributed denial-of-service attacks against targets.

Inadequate cybersecurity across the IoT ecosystem is inherently a US national security issue due to IoT’s ubiquity, integration across all areas of life, and potential to put an incredible number of individuals’ data and physical safety at risk. We brought together five experts from various backgrounds to assess the national security challenges posed by IoT and discuss potential solutions.

#1 What isn’t the Internet of Things (IoT)?

Irina Brass, associate professor in regulation, innovation, and public policy, Department of Science, Technology, Engineering, and Public Policy (STEaPP), University College London:

“IoT is not just our everyday physical devices embedded with sensing (data capture) or actuation capabilities, like a smart lightbulb or a thermostat. ‘Smart’ devices are just the endpoint of a much more complex ‘infrastructure of interconnected entities, people, systems and information resources together with services, which processes and reacts to information from the physical world and virtual world’ (ISO/IEC 20924: 2021). This consensus-based definition, agreed in an international standard, is particularly telling of the highly dynamic and pervasive nature of IoT ecosystems which capture, transfer, analyze data, and take actions on our behalf. While IoT ecosystems are functional, poor device security specifications and practices in these highly dynamic environments create infrastructures that are not always secure, transparent, or trustworthy.”

Katerina Megas, program manager, Cybersecurity for the Internet of Things (IoT) Program, National Institute of Standards and Technology (NIST):

“Likely very little, which would explain why the US National Cyber Director, Chris Inglis, at a NIST public workshop [on August 17, 2022] referred to the ‘Internet of Everything.’ IoT is the product of the worlds of information technology (IT) and operational technology (OT) converging. The IoT is a system of interconnected components including devices that sense, actuate, collect/analyze/process data, and are connected to the Internet either directly or through some intermediary system. While a shrinking number of systems still fall outside of this definition, what we used to think of as traditional OT systems based on programmable logic controller (PLC) architectures with no connectivity to the Internet are, in fact, more and more connected to the Internet and meet the above definition of IoT systems.”

Bruce Schneier, fellow, Berkman-Klein Center for Internet and Society, Harvard University; adjunct lecturer in public policy, Harvard Kennedy School:

“Ha! A salami sandwich is not the Internet of Things. A sense of comradeship towards your friends is not the Internet of Things. I am not the Internet of Things. Neither are you. The Internet of Things is the connected totality of computers that are not generally interacted with using traditional keyboards and screens. They’re ‘things’ first and computers second: cars, refrigerators, drones, thermostats, pacemakers.”

Justin Sherman, nonresident fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab):

“There is no single definition of IoT, and how to scope IoT is a key policy and technical question. Regardless, basically every definition of IoT rightfully excludes the core underpinnings of the global Internet itself—internet service provider (ISP) networks that bring online connectivity to people’s homes and offices, submarine cables that haul internet traffic between continents, and so on.”

Sarah Zatko, chief scientist, Cyber ITL:

“IoT is not modern or state of the art. The hardware on the outside may look sleek and shiny, but under the hood there is old software built with out-of-date compilers running on old chip architectures. MIPS, a reduced instruction set computer (RISC) architecture, was used in the largest portion of the IoT products that we have tested.”

#2 Why should national security policymakers care about the cybersecurity of IoT products?

Brass: “Many IoT devices currently on the market have known security vulnerabilities, such as default passwords and unclear software update policies. Users are typically unaware of these vulnerabilities, purchase IoT devices, set and forget them. These practices do not occur just at the consumer level, although there are many examples of how insecure and unsafe our ‘smart homes’ have become. They take place in critical sectors of strategic national importance such as our healthcare system. For instance, the Internet of Medical Things (IoMT) is known to be especially vulnerable to cyberattacks, data leaks, and ransomware because a lot of IoMT devices, such as IV pumps, have known security vulnerabilities but continue to be purchased and remain in constant use for a long time, with limited user awareness of their potential exposure to serious compromise.”

Megas: “I think the combination of the nature and ubiquity of IoT technology is the perfect storm. IoT has taken existing concerns and put them on steroids by increasing both the attack surface and the impacts, if you think of risk as the product of likelihood (IoT is everywhere) and impact (automated interactions with the physical world). In traditional IT systems, a compromised system could produce faulty data for the end user; however, there was typically a human in the loop who would take (or prevent) action on the physical world based on this data. With the actuating capabilities we are seeing in most IoT and the associated level of automation (which will only increase as IoT systems incorporate AI), the impact of a compromised IoT system is likely to be higher. As more computing devices are put on the Internet, they become available for botnets to be installed, which can result in significant national economic damage, as in the case of Mirai. Lastly, because this technology is so ubiquitous, the vast amount of data collected—from proprietary information from a factory, to video footage from a recreational drone, to audio from sound sensors around a smart city—can be accessed through a breach, shared, and used by other nations without anyone’s knowledge, even without a cybersecurity failure.”

Schneier: “Because the security of the IoT affects the security of the nation. It’s all one big network, and everything is connected.”

Sherman: “IoT products are used in a number of critical sectors, ranging from healthcare to energy, and hacks of those products could be financially costly and disrupt those sectors’ operations. There are even IoT devices that can produce physical effects, like small internet-linked machines hooked into manufacturing lines, and hackers could exploit vulnerabilities in those devices to cause real-world damage. In general, securing IoT products is also part of securing the overall internet ecosystem: IoT devices plug into many other internet systems and increasingly constitute a greater percentage of all internet devices used in the world.”

Zatko: “IoT is ubiquitous. Even when a ‘smart’ device is not necessary, at this point it is often difficult or impossible to find a ‘dumb’ one. Their presence often punches holes in network environment security, so they are common access points for attacks.”

#3 What kinds of threats are there to the cybersecurity of IoT devices that differ from information technology (IT) or other forms of operational technology (OT)?

Brass: “The kinds of vulnerabilities per se might not differ—ultimately, you still have devices running software that can be exploited by malicious actors. What differs is the scale and, in some cases, the severity of the outcome. IoT ecosystems are highly interconnected. Compromising a single device is often sufficient to gain the foothold necessary to exploit other devices in the system and even the entire system. The transnational dimension of IoT cybersecurity should also not be neglected. The 2016 Mirai attack showed how compromised IoT devices with poor security specifications (default passwords), located around the world, can be very easily exploited to target internet infrastructure in different jurisdictions.”

Megas: “I am not sure whether there are different threats for IoT, OT, and IT systems. They are converging more and more, so it is not meaningful to try to create artificial lines of distinction. This might be one of those instances where I say the dreaded phrase ‘it depends.’  It is possible that there are some loosely coupled IoT systems in which the components that are IoT devices do not sit behind more security capable components, but are more directly accessing the Internet (and therefore more directly accessible by threat actors). This could mean that vulnerabilities in these IoT systems are more easily exploitable and thus easier targets. Also, the nature of IoT systems that can interact with the physical world could affect the motivations of threat actors. The focus on many risks to traditional IT systems is around the data and its potential theft, but attacks on IoT can impact the real world. For instance, modifying the sensors at a water treatment plant can throw off readings and lead the system to incorrectly adjust how much fluoride is added to the water.”

Schneier: “The IoT is where security meets safety. Insecure spreadsheets can compromise your data. Insecure IoT devices can compromise your life.”

Sherman: “Typically, IoT devices use less energy, have less memory, and have much less computing power than traditional IT devices such as laptops, or even smartphones. This can make it more difficult to integrate traditional IT cybersecurity features and processes into IoT devices. To boot, manufacturers often produce IoT devices and products with terrible security—installing default, universal passwords and other bad features on the manufacturing line that end up undermining their cybersecurity once deployed. In part, this happens because smaller manufacturers are essentially pumping IoT devices off the manufacturing line.”

Zatko: “Users often forget to consider IoT devices when they think about their computing environment’s safety, but even if they did, IoT devices cannot always be patched. Sometimes software bugs in IoT operating systems are hard-coded or otherwise inaccessible, as opposed to purely software products, where changes are much easier to effect. This makes getting the software as safe as possible from the get-go particularly important.”


#4 What is the greatest challenge to improving the security of the IoT ecosystem?

Brass: “These days, we very often focus on behavioral change—what can individual users or organizations do to improve their cyber hygiene and general cybersecurity practices? While this is an important step in securing the IoT, it is not sufficient because it places the burden on a large, non-homogenous, distributed set of users. Let us turn the problem around to its origin. Then, the greatest challenge becomes how to ensure that IoT devices and systems produced and sold all over the world have baseline security specifications, that manufacturers have responsible lifecycle care for their products, and that distributors and retailers do not compromise on device security in favor of lower priced items. This is not an easy challenge, but it is not impossible either.”

Megas: “There is a role for everyone in the IoT ecosystem. Setting aside the few organizations developing their own IoT systems for their own use, the majority of IoT technologies are purchased or acquired. One of the challenges that I see is educating everyone that there are two critical roles in supporting cybersecurity of the IoT ecosystem: those of the producers of IoT products and those of the customers, both enterprise and consumer. While this dynamic between producers and buyers is not new, the relationships in IoT lack maturity. While producers need to build securable products that meet the needs and expectations of their customers, the customers are responsible for securing the product that operates in the customer environment. Identifying cybersecurity baselines for IoT products is a start in defining the cybersecurity capabilities producers should build into a product to meet the needs and expectations of their customers. However, one size does not fit all. A baseline is a good start for minimal cybersecurity, but we want to encourage tailoring baselines commensurate with the risk for those products whose use carries higher risk.

Beyond the IoT product manufacturer’s role, there are network-based approaches that can contribute to better cybersecurity (such as device intent signaling) that might be implemented by other ecosystem members. Vendors of IoT can ensure that their customers recognize the importance of cybersecurity. Enterprises should consider using risk management frameworks, such as the NIST Cybersecurity Framework, to manage the risks that arise out of the use of IoT technology. Formalizing and promoting recognition of the Chief Product Security Officer (CPSO) role in product organizations is also critical. Given that most C-suites and boards are starting to recognize the importance of the CISO in securing their organizations’ operations, we need to also promote the visibility of the CPSO, who is responsible for ensuring that the products a company sells have the appropriate cybersecurity features that align with the company’s strategic brand positioning and other factors.”
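The “device intent signaling” mentioned above refers to approaches like the IETF’s Manufacturer Usage Description (MUD, RFC 8520), in which a manufacturer publishes a machine-readable statement of the network access its device actually needs, so the local network can block everything else. The fragment below is a heavily simplified, hypothetical sketch in the spirit of a MUD file; the URL, product name, and access-list name are placeholders, and most required fields are omitted:

```json
{
  "ietf-mud:mud": {
    "mud-version": 1,
    "mud-url": "https://vendor.example/mud/smart-lightbulb.json",
    "cache-validity": 48,
    "is-supported": true,
    "systeminfo": "Example smart lightbulb",
    "from-device-policy": {
      "access-lists": {
        "access-list": [ { "name": "from-bulb" } ]
      }
    }
  }
}
```

A MUD-aware network controller would translate the referenced access list into router or switch rules so that, for instance, the lightbulb can reach only its vendor’s update service and nothing else on the network, shrinking the device’s usefulness to a botnet even if it is compromised.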

Schneier: “Economics. The buyers and sellers of the products don’t care, and no one wants to regulate the industry.”

Sherman: “As with many cybersecurity issues, the greatest challenge is getting companies that have been grossly underinvesting in security to do more, while also producing government regulations and guidance that are technically sound, roughly compatible with regulations and guidance in other countries, and that do not raise the barrier too much so as to cut out small players—though, if we want better security, some barrier-raising is necessary. It is a very boring answer, but there has been a lot of great work done already on IoT security by the National Institute of Standards and Technology, other governments, various industry groups, etc. The central challenge is better coordinating those efforts, fixing bad market incentives, and appropriately filling in the gaps.”

Zatko: “There are so many vendors, and many of them are not capable of producing secure products from scratch. It is currently too hard for even a well-meaning vendor to do the ‘right’ thing.”

#5 How can the United States and its allies promote security across the IoT ecosystem when a large portion of devices are manufactured outside their jurisdictions?

Brass: “Achieving an international baseline of responsible IoT security requires political and diplomatic will to adopt and align legislation that promotes the security of internet-connected devices and infrastructures. The good news is that we are seeing policy change in this direction in several jurisdictions, such as the IoT Cybersecurity Improvement Act in the United States, the Product Security and Telecommunications Infrastructure Bill in the United Kingdom, and several cybersecurity certification and labelling schemes such as the Cybersecurity Labelling Scheme (CLS) in Singapore. As IoT cybersecurity becomes a priority for several governments, the United States and its allies can be the driving force behind international cooperation and convergence towards an agreed set of responsible IoT security practices that underpin legislative initiatives around the world.”

Megas: “Continuing to share lessons learned with others. Educating customers, both consumers as well as enterprise customers, on the importance of seeking out products that support minimum cybersecurity.”

Schneier: “Regulation. It is the same way we handle security and safety with any other product. You are not allowed to sell poisoned baby food or pajamas that catch on fire, even if those products are manufactured outside of the United States.”

Sherman: “US allies and partners are already doing important work on IoT cybersecurity—from security efforts led by the UK government to an emerging IoT labeling scheme in Singapore. The United States can work and collaborate with these other countries to help drive security progress on devices made and sold all around the world. Others have argued that the United States should exert regulatory leverage over whichever US-based companies it can to push progress internationally, too, such as with Nathaniel Kim, Trey Herr, and Bruce Schneier’s “reversing the cascade” idea.”

Zatko: “By open sourcing security-forward tools and secure operating systems for common architectures like MIPS and ARM, the United States could make it easier for vendors to make secure products. Vendors do not intentionally make bad, insecure products—they do it because making secure products is currently too difficult and thus too expensive. However, they often use open-source operating systems, tool kits, and libraries for the base of their products, and securing those resources will do a great deal to improve the whole security stance.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Security in the billions: Toward a multinational strategy to better secure the IoT ecosystem https://www.atlanticcouncil.org/in-depth-research-reports/report/security-in-the-billions/ Mon, 26 Sep 2022 06:30:00 +0000 The explosion of Internet of Things (IoT) devices and services worldwide has amplified a range of cybersecurity risks to individuals’ data, company networks, critical infrastructure, and the internet ecosystem writ large. In light of this systemic risk, this report offers a multinational strategy to enhance the security of the IoT ecosystem. It provides a framework for a clearer understanding of the IoT security landscape and its needs, looks to reduce fragmentation between policy approaches, and seeks to better situate technical and process guidance into cybersecurity policy.

The post Security in the billions: Toward a multinational strategy to better secure the IoT ecosystem appeared first on Atlantic Council.


Executive summary

The explosion of Internet of Things (IoT) devices and services worldwide has driven enormous growth in data processing and interconnectivity. Simultaneously, this interconnection and resulting interdependence have amplified a range of cybersecurity risks to individuals’ data, company networks, critical infrastructure, and the internet ecosystem writ large. Governments, companies, and civil society have proposed and implemented a range of IoT cybersecurity initiatives to meet this challenge, ranging from introducing voluntary standards and best practices to mandating the use of cybersecurity certifications and labels. However, issues like fragmentation among and between approaches, complex certification schemes, and placing the burden on buyers have left much to be desired in bolstering IoT cybersecurity. The ugly knock-on effects for states, the private sector, and users bring risks to individual privacy, physical safety, other parts of the internet ecosystem, and broader economic and national security.

In light of this systemic risk, this report offers a multinational strategy to enhance the security of the IoT ecosystem. It provides a framework for a clearer understanding of the IoT security landscape and its needs—one that focuses on the entire IoT product lifecycle, looks to reduce fragmentation between policy approaches, and seeks to better situate technical and process guidance into cybersecurity policy. Principally, it analyzes and uses as case studies the United States, United Kingdom (UK), Australia, and Singapore, due to the combination of their IoT security maturity, overall cybersecurity capacity, and general influence on the global IoT and internet security conversation. It additionally examines three industry verticals (smart homes, networking and telecommunications, and consumer healthcare), which cover different products and serve as a useful proxy for understanding the broader IoT market because of their market size, their consumer reach, and their varying levels of security maturity.

This report looks to existing security initiatives as much as possible—both to leverage existing work and to avoid counterproductively suggesting an entirely new approach to IoT security—while recommending changes and introducing more cohesion and coordination to regulatory approaches to IoT cybersecurity. It walks through the current state of risk in the ecosystem, analyzes challenges with the current policy model, and describes a synthesized IoT security framework. The report then lays out nine recommendations for government and industry actors to enhance IoT security, broken into three recommendation sets: setting a baseline of minimally acceptable security (or “Tier 1”), incentivizing above the baseline (or “Tier 2” and above), and pursuing international alignment on standards and implementation across the entire IoT product lifecycle (from design to sunsetting). It also includes implementation guidance for the United States, Australia, UK, and Singapore, providing a clearer roadmap for countries to operationalize the recommendations in their specific jurisdictions—and push towards a stronger, more cohesive multinational approach to securing the IoT worldwide.

Implementation plans by country

Introduction

The billions of Internet of Things (IoT) products used worldwide have contributed to an explosion in data processing and the connection of individuals, buildings, vehicles, and physical machines to the global internet. Work-from-home policies and the need for contact tracing during the COVID-19 pandemic have furthered societal dependence on IoT products. All this interconnection and interdependence have amplified a range of cybersecurity risks to individuals’ data, company networks, critical infrastructure, and the internet ecosystem writ large.

Securing IoT products is inherently critical because IoT products increasingly touch all facets of modern life. Citizens have IoT wearables on their bodies and IoT products in their cars, gathering data on their heartbeats, footsteps, and Global Positioning System (GPS) locations. People also have IoT smart products in their homes—speakers awake to every private conversation, internet-connected door locks, devices that control atmospheric systems, and cameras to monitor young children and pets. Hospitals even use IoT products to control medicine dosages to patients. The ever-growing reliance on IoT products increasingly and inescapably ties users to network and telecommunications systems, including the cloud. IoT insecurity, given this degree of interconnection, poses risks to individual privacy, individual safety, and national security.

The IoT explosion is also poised to impact the security of the internet ecosystem writ large. More IoT products deploy each year, meaning IoT products constitute a significant percentage of devices linked to the global internet. For example, IoT Analytics, a market research firm, estimates that IoT products surpassed traditional internet-connected devices in 2019 and projects that the ratio will be around three to one by 2025.1 At that scale, poorly secured products (for instance, those with easy-to-guess passwords or with known and unfixed security flaws) can enable attackers to gain footholds in corporate or otherwise sensitive environments and steal data or cause disruption. For instance, hackers could exploit security problems in IoT cameras to break into a building—digitally or physically.2 Hackers can break into IoT devices at scale to launch distributed denial of service (DDoS) attacks that bring down internet services for hundreds of thousands or even millions of consumers.
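Mirai’s initial foothold came from exactly this class of weakness: devices shipped with factory-default usernames and passwords that were never changed. As a hedged illustration only (the device inventory and the credential list below are hypothetical, not drawn from this report), a defender auditing a fleet for lingering defaults might do something like:

```python
# Hypothetical sketch: audit an inventory of IoT devices for
# factory-default credentials, the weakness Mirai-class botnets exploited.
# The credential pairs and device records are invented for illustration.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("root", "12345"),
    ("admin", "password"),
}

def find_default_credential_devices(inventory):
    """Return IDs of devices whose configured login matches a known default."""
    return [
        device["id"]
        for device in inventory
        if (device["username"], device["password"]) in DEFAULT_CREDENTIALS
    ]

inventory = [
    {"id": "camera-01", "username": "admin", "password": "admin"},
    {"id": "thermostat-02", "username": "ops", "password": "Zr9!kq#1"},
    {"id": "dvr-03", "username": "root", "password": "12345"},
]

print(find_default_credential_devices(inventory))  # ['camera-01', 'dvr-03']
```

In practice such an audit would query devices or a configuration database rather than a hard-coded list, but the point stands: a one-pass check against published default credentials catches the same devices a botnet scanner would.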

In response to these cybersecurity risks, governments, private companies, industry organizations, and civil society groups have developed a myriad of national and industry frameworks to improve IoT security, each addressing considerations in the product design, development, sale and setup, maintenance, and sunsetting phases. These numerous control sets and frameworks, however, are a hodgepodge across and within jurisdictions. Within jurisdictions, some governments are charging ahead with detailed IoT security guidance while others have made little substantive headway or have ambiguous policy goals that confuse and impede industry progress. Between jurisdictions, fragmented requirements have chilled efforts by even some of the most security-conscious vendors to act. Consumers, meanwhile, must grapple with IoT product insecurity, bad security outcomes, and ugly knock-on effects on others in their communities and networks—exacerbated by a lack of security information from vendors. Poor outcomes for users, a lack of cross-national harmonization, and gaps between government and industry efforts impede better security in the IoT ecosystem.

Yet, progress is possible. The number of countries and industry actors that have embraced a single standard—European Norm (EN) 303 645, from the European Telecommunications Standards Institute (ETSI)—as a consensus approach demonstrates that baseline security guidance can help drive real, coordinated change.

This report presents a consolidated approach to IoT cybersecurity to reconcile existing national approaches, balance the interests of the public and private sectors, and ensure that a product recognized as secure in one jurisdiction will be recognized as secure in others. The framework is not prescriptive to the level of individual controls; rather, it seeks to address the structural priorities of approaches taken by industry coalitions and governments in the United States, United Kingdom (UK), Singapore, and Australia. We focus on these countries because of the maturity of their IoT cybersecurity approaches, their mature cyber policy processes, their historical influence on cybersecurity policy in other countries, and the strong precedent for cooperation across all four.

In considering the effects of this consolidated approach, the report also focuses on three verticals: smart homes, networking and telecommunications, and consumer healthcare. These three provide ready critical IoT product use cases, differentiate in the kinds of technology and products available, and serve as useful proxies for understanding the broader IoT market because of their market size, consumer reach, and varying levels of security maturity.

This report draws on research of IoT security best practices, standards, laws, and regulations; conversations with industry stakeholders and policymakers; and convenings with members of the IoT security community. In principle and wherever possible in practice, the report relies on existing approaches, seeking to create as little new information or guidance as practicable to ease implementation. The first section below describes the state of risk in the IoT ecosystem, including challenges with the current model, insecurity across three IoT industry segments, and a brief history of IoT security efforts and control sets across the United States, UK, Australia, and Singapore as well as industry-led efforts. The second section synthesizes these disparate control sets, mapped against every phase of the IoT product lifecycle. The third (and final) section presents a consolidated approach to IoT security across these four countries and the relevant industry partners—with nine recommendations to address gaps in existing IoT security approaches, disincentivize further fragmentation in standard setting or enforcement, and rationalize the balance between public and private sector security interests. These recommendations come with implementation guidance specific to each of the four countries.

While this report describes some key components of an IoT labeling approach, it deliberately does not prescribe a particular label design. The report leaves open many questions that require more work, including “who” sets label design, “how” companies should pair physical and digital labels, and to “what” extent companies and/or governments should harmonize labels across jurisdictions.

There is an overriding public interest in secure IoT products, and industry players—including source manufacturers, integrators/vendors, and retailers—must be responsive to this interest. The highly disharmonized state of IoT security regulations, however, pulls against that public interest. Moreover, a further doubling down on the current national approaches threatens to worsen the problem. What little compromise in national autonomy this or another consolidated approach might require must be weighed against a more coherent and enforceable scheme where such a scheme produces meaningful security gains for users. To comprehend this need, one should begin by understanding the state of affairs.

The current state of IoT risk

The current IoT ecosystem is rife with insecurity. Companies routinely design and develop IoT products with poor cybersecurity practices, including weak default passwords,3 weak encryption,4 limited security update mechanisms,5 and minimal data security processes on devices themselves. Governments, consumers, and other companies then purchase these products and deploy them, often without adequately evaluating or understanding the cybersecurity risk they are assuming. For example, while the US government has worked to develop IoT security considerations for products purchased for federal use, private companies routinely buy and deploy insecure IoT products because there is no mandatory IoT security baseline in the United States.6

Compromising IoT products is often remarkably easy. IoT products have less computing power, smaller batteries, and smaller amounts of memory than traditional information technology devices like laptops or even smartphones. This makes traditional security software (and its computing and power demands) often impractical in—or less immediately transferrable to—IoT systems. Many IoT botnets (networks of devices infected by malware), such as Mirai and Bashlite, capitalize on this insecurity by seeking to weaponize known vulnerabilities or brute-force access to an IoT product using predefined lists of common passwords. Such passwords may include “123456” or even just “password”.7
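To make the mechanics concrete, the dictionary-style attack described above can be sketched in a few lines of Python. This is an illustrative mock, not Mirai's actual code: the device login function and credential list are hypothetical stand-ins for a telnet or HTTP login on a poorly secured product.

```python
# Illustrative mock of a botnet-style dictionary attack (not Mirai's
# actual code). The "device" and credential list are hypothetical.

COMMON_DEFAULTS = [
    ("admin", "admin"),
    ("root", "123456"),
    ("admin", "password"),  # weak defaults like those cited above
    ("user", "user"),
]

def mock_device_login(username, password):
    """Stand-in for a telnet/HTTP login on a poorly secured device."""
    return (username, password) == ("admin", "password")

def dictionary_attack(login):
    """Try each common default; return the first pair that works, else None."""
    for username, password in COMMON_DEFAULTS:
        if login(username, password):
            return (username, password)
    return None

print(dictionary_attack(mock_device_login))  # → ('admin', 'password')
```

A device shipped with credentials on such a list falls within a handful of attempts; a device with a unique, per-unit password never matches, which is why default-password bans feature so prominently in the policy efforts discussed later.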

While these errors seem trivial, they quickly lead to material harm. In late 2016, for example, Mirai infected almost 65,000 IoT devices around the world in its first 20 hours, peaking at 600,000 compromised devices.8 The operators of the Mirai botnet subsequently launched a series of DDoS attacks, including against Dyn, a US-based Domain Name System (DNS) provider and registrar.9 By taking advantage of security problems in IoT devices, the individuals behind the botnet rendered major websites like PayPal, Twitter, Reddit, GitHub, Amazon, Netflix, and Spotify entirely unavailable to parts of the United States.10

Criminals infect IoT products with malware that may use the compromised device to execute DDoS attacks, mine for cryptocurrencies on behalf of the attacker, or hold the device hostage pending a ransom paid to the attackers. In 2018, cybercriminals compromised over 200,000 routers in a cryptojacking campaign. They used the computing power of the compromised routers to mine cryptocurrency.11 States also turn to compromising IoT products to create covert infrastructure. A May 2022 report by security firm Nisos revealed that the Russian Federal Security Service (FSB) employed a botnet made up of compromised IoT products to fuel social media manipulation operations.12

On top of using IoT devices for larger malware operations, hackers can break into IoT products to spy on people’s everyday lives. They could see adjustments made to a smart thermostat, questions asked to a smart speaker, and workouts logged on fitness wearables. This kind of spying can be a threat to individuals’ privacy and physical safety. In the context of intimate partner violence, abusive individuals may control access to or illicitly access IoT products to spy on and exert control over people, raising serious stalking and physical safety risks.13 There are also threats that come from strangers. Trend Micro, in a 2019 report, noted that hackers with access to compromised internet-connected cameras sold subscriptions that allowed others to view the illicitly accessed video streams online. The price of the stream depended on what the camera was looking at, with bedrooms, massage parlors, warehouses, and payments desks at retail shops among the priciest and most sought-after.14 These products can also be launch points from which attackers conduct further malicious activities. Brazilian fraudsters, for instance, are known to use access to compromised routers to change the compromised devices’ DNS settings to redirect victims to phishing pages for major websites, such as banks and retailers.15
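A minimal defensive counterpart to the router DNS-rewriting fraud described above is to audit a device's configured resolvers against a known-good list. The sketch below is an illustration only: the trusted addresses and the shape of the router's configuration record are invented for the example.

```python
# Hypothetical check for DNS hijacking on a home router: flag any
# configured resolver that is not on an operator-approved allowlist.
# Addresses and the config format are illustrative assumptions.

TRUSTED_RESOLVERS = {"192.168.1.1", "1.1.1.1", "8.8.8.8"}

def find_rogue_resolvers(router_config):
    """Return the set of configured DNS servers not on the trusted list."""
    configured = set(router_config.get("dns_servers", []))
    return configured - TRUSTED_RESOLVERS

# A compromised router silently pointing victims at an attacker resolver:
compromised = {"dns_servers": ["8.8.8.8", "203.0.113.66"]}
print(find_rogue_resolvers(compromised))  # → {'203.0.113.66'}
```

In practice, victims rarely run such checks, which is why the attack works: a hijacked resolver silently redirects requests for a bank's domain to a phishing page while every other site behaves normally.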

IoT products, industry segments, and their insecurity

The IoT, on its face, may appear to be a simple concept, but scoping it and understanding the number of systems the IoT touches is more complex. For example, some devices like routers could be “part of” or “separate from” the IoT. There are also questions about whether, and how, the IoT includes the networks, devices, and products touching it—like IoT sensors that link to outside cloud services to process data, connect to a company’s network to enable administrative oversight and control, and connect to the public internet to communicate with application programming interfaces (APIs). For government and industry policies to be effective, scopes must clearly define the products and services they do and do not include.

For instance, EN 303 645 guidance—ETSI’s key standard document for IoT security—defines a “consumer IoT device” as a “network-connected (and network-connectable) device that has relationships to associated services and are used by the consumer typically in the home or as electronic wearables.”16 The US National Institute of Standards and Technology (NIST), meanwhile, defines the IoT in NIST SP 1800-16C as “user or industrial devices that are connected to the internet” including “sensors, controllers, and household appliances.”17 This report focuses primarily on the IoT products themselves, and in part the services directly dependent on IoT products or on which IoT products directly depend (e.g., a cloud software program for managing an IoT device network). 

The IoT constitutes a massive technology ecosystem with clusters of IoT product design and deployment models, each of which present differentiated cybersecurity risks. Several key examples of industry IoT product segments and some of their security challenges are detailed here, based on their wide deployment, impact on consumers, and touchpoints into other parts of the digital world, whether home Wi-Fi networks or hospital medical systems. 

  • Smart Homes: Numerous companies sell IoT products to serve as thermostats, doorbell cameras, window locks, speakers, and other components of so-called smart homes. Apple offers HomeKit integration, a software framework for configuring, communicating with, and controlling smart home appliances.18 Resideo offers a number of smart home-style products, for both consumer environments—such as thermostats, humidifiers, security systems, and programmable light switch timers—as well as professional environments—such as UV treatment systems and fire and burglary alarms.19 Philips sells smart lighting products, and Wink sells smart doorbells.20 On the software side, companies like Tuya offer IoT management services to automatically control robotic vacuums, smart cameras, smart locks, and other IoT products in the home.21 Google and Amazon both manufacture and sell smart home IoT products, from home security products to smart speakers.22 The cybersecurity risks here include spying on individuals in their homes, using IoT products in the home and workplace to break into other systems (e.g., someone’s work laptop on their home Wi-Fi), and harnessing numerous compromised smart products to create a botnet and launch DDoS attacks.23
  • Networking and Telecommunications Gear: Traditional internet and telecommunications companies, which supply the devices and some of the infrastructure that fundamentally underpins the internet, are moving more into IoT services and devices. Cisco offers Industrial Wireless solutions that include wireless backhaul, private cellular connectivity, and embedded networking for industrial IoT products.24 Extreme Networks offers a Defender Adapter service to provide in-line security for vulnerable wired devices.25 Arista offers a Cognitive Campus service that includes IoT edge connectivity, real-time telemetry, and Spline platforms for connection reliability.26 The cybersecurity risks here include spying on traffic going across networks, using networking and telecommunications entry points to break into other systems, and degrading or disrupting the flow of network data altogether. 
  • Consumer Health Products: Companies are offering IoT products and services to support the provision of healthcare and medicine. Philips sells fetal and maternal monitors, MR compatible monitors, patient-worn monitors, and other IoT products to monitor vitals.27 Medtronic sells glucose monitoring and heart monitoring products.28 Honeywell Life Sciences offers embedded products and safety solutions for hospitals.29 Dexcom offers a glucose monitoring smart wearable, and ResMed offers a phone-connected product for sleep apnea.30 The cybersecurity risks here include stealing highly sensitive medical data and manipulating device data or disrupting product operations in ways that physically threaten human life. 

Numerous companies, from telecommunications gear manufacturers to medical equipment suppliers, have a stake in security debates about IoT products. Many industries do as well, from home security to industrial manufacturing, and many of their products and services overlap and integrate. Yet, similarities between sector products and their cybersecurity risks do not change the fact that widespread IoT insecurity merits meaningful improvement.

Policy challenges to addressing IoT risk

The UK, Singapore, United States, and Australia provide a set of case studies for government approaches to IoT security—due to the maturity of their IoT cybersecurity approaches, the maturity of their overall cyber policy processes, their historical influence on cybersecurity policy in other countries, and the strong precedent for cooperation across all four. There is also fragmentation within the countries’ frameworks, where different parts of a country or different government agencies pursue different IoT security policies and processes. The US, for instance, has the Federal Communications Commission (FCC) focused on communications standards for IoT products and the Federal Trade Commission (FTC) focused on the marketing practices of IoT vendors, but has no agency in charge of enforcing IoT security requirements in design. 

At least three key themes stand out across these countries. First, state approaches to IoT security have generally moved from voluntary best practices towards direct intervention. Second, state approaches have predominantly manifested in consumer labeling programs and minimum baseline security legislation. And third, states have made the need for international, agreed-upon standards a key design principle of their IoT security efforts though as yet without sufficient uptake or success.31

UK: Mandatory minimum security standards 

The UK was an early innovator in holistic responses to IoT insecurity. Its Department for Digital, Culture, Media & Sport (DCMS)—which works on digital economy and some broadband and internet issues—published a Secure by Design report in March 2018, setting out how it aims to “work with industry to address the challenges of insecure consumer IoT.”32 As a result of its report, in October 2018, DCMS, along with the UK National Cyber Security Centre (NCSC) and industry partners, published the “Code of Practice for Consumer Internet of Things (IoT) Security,” consisting of “thirteen outcomes-focused guidelines that are considered good practice in IoT security.”33 It aims, as one NCSC official described it, to identify impactful, updatable measures to which a broad coalition could agree34—captured in the thirteen principles below.

Figure 1: Thirteen Principles of Consumer IoT Security

SOURCE: UK Department for Digital, Culture, Media & Sport.

The UK was not alone in this endeavor, working in tandem as a member of ETSI to launch ETSI Technical Specification 303 645, the first “globally-applicable industry standard on internet-connected consumer devices.”35 In June 2020, this Technical Specification became formalized as a European standard (EN 303 645), and now serves as a common underlying source for many countries’ initiatives. 

Despite the initial promise of the Code of Practice, the DCMS found low industry uptake for the guidance and decided to pursue a legislative route. After multiple consultation rounds, the resulting Product Security and Telecommunications Infrastructure (PSTI) Bill was introduced in November 2021, empowering the Secretary of State for DCMS “to specify by regulations security requirements.”36 The new law would require “manufacturers, importers, and distributors to ensure that minimum security requirements are met in relation to consumer connectable products that are available to consumers.”37 Noncompliant firms could face fines up to £10 million or 4 percent of worldwide revenue, and a new regulator—to be delegated following the law’s enactment—would also have the ability to enforce recalls or outright product bans.38 The bill is currently in the Report stage with the House of Lords and would require compliance within twelve months of enactment. 

By empowering the DCMS minister to specify security requirements instead of codifying them, the PSTI Bill allows the mandatory baseline requirements to respond to changing circumstances. The current principles outlined by DCMS focus on the “top three” elements of the UK Code of Practice/ETSI EN 303 645: banning default passwords, requiring a vulnerability disclosure process for products, and transparency for consumers on the duration that products will receive security updates. The UK’s NCSC views these three measures as having outsize importance, arguing they “will make the most fundamental difference to the vulnerability of consumer connectable products in the UK, are proportionate given the threats, and universally applicable to devices within scope.”39 Cognizant that good security requires organizational action, not just device-level changes at the point of design and manufacture, a DCMS official has highlighted the additional appeal of the framework in placing requirements on economic actors, not just devices. Indeed, two of the three requirements involve organizational changes or activity. The UK’s framework allows for the introduction of secondary legislation to build on this baseline over time.
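As a rough illustration of how a “top three” baseline could be audited in practice, the sketch below checks a product-metadata record for each requirement. The field names and record format are assumptions invented for the example; the PSTI framework does not define such a schema.

```python
# Minimal sketch: auditing a product record against the UK's "top three"
# baseline (no universal default password, a vulnerability disclosure
# contact, a declared security-update period). Field names are assumptions.

from datetime import date

def meets_uk_baseline(product):
    """Return a list of failed requirements (empty list = compliant)."""
    failures = []
    if product.get("uses_universal_default_password", True):
        failures.append("universal default password")
    if not product.get("vuln_disclosure_contact"):
        failures.append("no vulnerability disclosure process")
    if not isinstance(product.get("security_updates_until"), date):
        failures.append("no declared update-support period")
    return failures

router = {
    "uses_universal_default_password": False,
    "vuln_disclosure_contact": "security@example.com",
    "security_updates_until": date(2027, 1, 1),
}
print(meets_uk_baseline(router))  # → []
```

Note that only the first check concerns the device itself; the other two are organizational obligations, mirroring the report's observation that two of the three requirements fall on economic actors rather than devices.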

Singapore: IoT product labeling 

In October 2020, Singapore’s Cyber Security Agency (CSA) launched the Cybersecurity Labelling Scheme (CLS), a labeling program for internet-connected devices that describes the level of security included in their design. The CLS aims to help consumers “easily assess the level of security offered and make informed choices in purchasing a device.”40 It also aims to let product manufacturers signal the cybersecurity features of their products—as a senior CSA official put it, “to create the demand” and then “to provide a natural incentive to provide more secure and trusted devices.”

The CLS has four levels of additive and progressively demanding security provision tiers (Figure 2). In the first two levels, developers self-certify, and the CSA can audit compliance. In the third and fourth levels, independent laboratories certified by the nongovernmental International Organization for Standardization (ISO) validate products. At the bottom end, products must have security updates and no universal default passwords, while manufacturers must adhere to secure-by-design principles, such as processes and policies for protecting personal data, securely storing security parameters, and conducting threat risk assessments. At the higher end, authorized labs conduct penetration tests against the product and its communications. Labels are valid as long as developers support the product with security updates, for up to a three-year period.

Figure 2: Singapore’s CLS Four Security Provisions Tiers

SOURCE: Cybersecurity Agency of Singapore.

While the program’s terminology slightly differs, the CLS embraces the same principles as ETSI EN 303 645, doing so in a manner that “groups the clauses and spreads them out across four ranked levels.”41 And while the program’s higher-tier labels incentivize the adoption of stronger security measures, the Singapore Standards Council concedes that the first-tier labeling requirements “will suffice in staving off [sic] large percentage of attacks encountered on the internet today.”42 Finally, Singapore’s CLS shows how a voluntary labeling scheme can work to gradually dial up requirements for products as the market matures. For example, while the CLS is voluntary for most products, new internet routers sold in Singapore must meet the security requirements for the Level 1 label. This “voluntary-mandatory” split can keep evolving over time, both for different product categories as well as specific security measures. 

Interviewees at CSA said vendors have reacted positively to the labeling program (e.g., citing the onboarding of major vendors like Google and Asus). As of July 2022, there were 174 certified products, a total that has more than tripled since the start of 2022, and includes diverse items such as smart lights, video doorbells, locks, appliances, routers, and home hubs.43 Despite these positive signs, it is too soon to tell if the CLS program will be a success, and Singapore must continue to monitor the label’s appeal for consumers and firms as well as its broader security impact. 

US: State initiatives & government procurement 

In the United States, initial action on consumer IoT insecurity began at the state level. The nation’s first IoT security law went into effect in January 2020 with California’s requirement that manufacturers of smart products sold in the state “equip the device with a reasonable security feature or features.” The law explicitly takes aim at universal default passwords, stating that a reasonable security feature could mean “the preprogrammed password is unique to each device manufactured,” or “the device contains a security feature that requires a user to generate a new means of authentication before access is granted to the device for the first time.”44 California’s law—enforced by state attorneys—does not include a private right of action, nor does it put any duties on retailers to ensure that products they sell meet the law’s requirements.
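The first of the law's two example features—a preprogrammed password unique to each device—can be implemented by deriving the default from per-device data and a factory secret. The sketch below is one possible approach, not anything the statute prescribes; the secret and serial-number format are invented for the example.

```python
# Sketch of a device-unique default password, one way to satisfy
# California's "reasonable security feature" options. Illustrative only;
# the factory secret and serial format are assumptions.

import hashlib
import hmac

FACTORY_SECRET = b"per-product-line secret, kept out of firmware dumps"

def per_device_password(serial, length=12):
    """Derive a deterministic, device-unique default password."""
    digest = hmac.new(FACTORY_SECRET, serial.encode(), hashlib.sha256).hexdigest()
    return digest[:length]

# Two devices get different defaults, so there is no universal
# password for a Mirai-style dictionary to hit.
print(per_device_password("SN-0001"))
print(per_device_password("SN-0002"))
```

The second statutory option, forcing the user to set new credentials on first use, avoids the factory-secret management problem entirely, which is one reason many vendors prefer it.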

Oregon joined California with its House Bill (HB) 2395, which has much of the same text (e.g., the same definition of “reasonable security feature,” the same enforcement mechanisms) but limits its scope to only consumer IoT products (“used primarily for personal, family or household purposes”).45 While the two laws may compel companies to adopt better security in all states, it appears that no cases have been brought forward under either law, even though insecure products are doubtlessly still sold in these states.

The United States passed the IoT Cybersecurity Improvement Act into law in December 2020.46 It requires NIST to develop cybersecurity standards and guidelines for federally owned IoT products, consistent with NIST’s understanding of “examples of possible security vulnerabilities” and management of those vulnerabilities.47 48 Thus, the law seeks to strengthen the security of IoT products procured by the government and intends to influence the private sector’s IoT cybersecurity practices through the federal government’s procurement power.49 The 2020 act also shifts the burden of compliance from product vendors to federal agencies,50 prohibiting them “[from] procuring or obtain[ing] IoT devices” that an agency’s chief information officer deems out of compliance with NIST’s standards.51 Finally, the act requires NIST to review and revise its standards at least every five years to ensure that recommendations are current, allowing for technical flexibility.52 The act gives NIST broad latitude in shaping those standards, with only vague guidance to consider “secure development” and other high-level cybersecurity items. Figure 3 offers an overview of the act’s recommendations.

Figure 3: Overview of the IoT Cybersecurity Improvement Act of 2020

SOURCE: Liv Rowley for the Atlantic Council.

On May 12, 2021, the Biden administration issued Executive Order (EO) 14028, “Improving the Nation’s Cybersecurity.” The executive order directed NIST, in consultation with the FTC, to develop cybersecurity criteria for an IoT product labeling program aimed at educating consumers about IoT products’ security capabilities.53 It also tasked NIST with examining how to incentivize IoT manufacturers to get on board with such a program. On February 4, 2022, NIST released its recommended criteria for a consumer IoT labeling scheme.54 However, NIST has been clear that its aim is to describe the ideal components of a labeling scheme, rather than implement this scheme itself.55 While EO 14028 may feel a little toothless at the moment, it effectively outlines specific federal cybersecurity goals. Moreover, it demonstrates a will to move beyond federal procurement power as the sole method for influencing the private sector. 

Australia: Starting with voluntary best practices 

In August 2020, the Australian Department of Home Affairs (DHA) released a voluntary “Code of Practice: Securing the Internet of Things for Consumers” as part of its 2020 cybersecurity strategy. This code of practice highlighted the thirteen principles outlined in ETSI EN 303 645. 

Australia’s voluntary code of practice did not prove to be a panacea. In March 2021, the Australian government published six months of research on the results of its Code of Practice, saying firms “found it difficult to implement voluntary, principles-based guidance,” and many had still not implemented basic security guidelines like a vulnerability disclosure reporting process.56 As such, the Australian government appears intent on conducting more direct regulation of its consumer IoT market. In a request for comments that concluded in fall 2021, the DHA solicited public opinion on both a proposed consumer labeling program and a minimum security standards regime.57

For the minimum security standards approach, the government proposes to base its requirements on ETSI EN 303 645 and is considering either mandating all thirteen guidelines or choosing to focus on just the top three (no default passwords, the existence of vulnerability disclosure programs, and the provision of security updates). The potential regulator within the Australian government is yet to be determined, but it would be empowered to issue fines and other penalties for those who fail to comply. 

The potential labeling approaches consider two scenarios. A voluntary “star rating label,” akin to Singapore’s CLS program, would be based on an existing international standard, such as ETSI EN 303 645, and involve some component of self-certification and testing within the framework of Australian consumer law’s protection against fraudulent claims. Alternatively, a mandatory “expiry date label” would indicate the period over which the product will receive critical security updates. This second option received a higher recommendation from the government. Minimum security standards could complement either of these approaches. 

Industry: Certification models and security standards 

Companies have also advanced numerous security approaches. Common industry approaches to IoT security include secure endpoints and stringent encryption requirements for third-party applications, hardware-based security, and the formalization of vulnerability and software communications protocols. The industry verticals for smart homes, networking and telecommunications, and consumer healthcare (recognizing there is overlap and integration between these verticals) see varying implementations of these measures. 

For example, the ioXt Alliance, which is composed of dozens of product manufacturers and vendors as well as major software companies, offers self-certified and third-party-validated certification for IoT products. Its five compliance tests cover everything from Android to smart speaker device profiles, measured against eight principles: no universal default passwords, secured interfaces, proven cryptography, security by default, verified software, automatic security updates, vulnerability reporting program, and security expiration date.58 The overall certification process has five steps: 

  1. Join the ioXt Alliance and register for certification; 
  2. Select one of the five base profiles for testing, and then opt to self-certify or use one of the ioXt’s approved laboratories (currently, Bureau Veritas, SGS Brightsight, DEKRA, NCC Group, NowSecure, Onward Security, or Bishop Fox59); 
  3. Upload production information and test results to the ioXt portal; 
  4. ioXt reviews the submissions and approves or rejects certification—with approved submitters receiving “the ioXt SmartCert” for their product; and 
  5. “Stay certified with ongoing verification and insights,” like IoT regulatory updates through the Alliance.60

The Alliance’s membership includes companies like IBM, Google, Facebook, Silicon Labs, Logitech, Honeywell, Avast, Asus, Motorola, and Lenovo; other associations like the Consumer Technology Association (CTA) and the Internet Infrastructure Coalition; and non-industry organizations like Consumer Reports. Even the UK’s DCMS is an Alliance member.61 While the membership roster certainly does not cover every IoT product manufacturer or vendor in the United States (where many of its members are based), it does have global representation. The Alliance also certified 245 percent more products in 2021 than in 2020, and its membership grew by 63 percent over the same period.62 

The IoT Security Foundation, a global nonprofit representing many appliance manufacturers, recommends a framework composed of a few hundred security standards for organizations—spanning management governance, engineering, secure networks and applications, and supply chain.63 Its members include smaller product manufacturers as well as larger companies like Honeywell, Huawei, and Arm, plus many more nongovernmental organizations, like academic institutions, than the ioXt Alliance.64 The framework addresses three audiences: (1) managers; (2) developers, engineers, and logistics and manufacturing staff; and (3) supply chain managers.65 While its membership is not as large as that of the ioXt Alliance, the IoT Security Foundation also has global representation, with members such as the University of Southampton, Huawei, the University of Oxford Department of Computer Science, and Eurofins Digital Testing in France.66

The Open Web Application Security Project® (OWASP) is an open-source community effort that provides IoT security standards tailored to three threat models—attacks only against software, attacks only against hardware, and situations where compromise must be avoided at all costs (e.g., medical products or connected vehicles, given their highly sensitive data).67 OWASP then specifies several dozen security standards based on these threat models, such as separate standards for bootloaders, OS configurations, and Linux.68 OWASP is a nonprofit foundation with over 250 local chapters worldwide and tens of thousands of members, and it runs training conferences and other events to bring together experts from industry, academia, and civil society focused on software development and security.69 Its capacity to drive change on IoT security is considerably different from that of the previous two coalitions—for instance, the OWASP community cannot marshal the marketing and lobbying power held by members of the ioXt Alliance or the IoT Security Foundation. However, OWASP draws on its tens of thousands of members around the world and leverages different forms of engagement than the other coalitions; the IoT Security Foundation, for instance, does not run events at the same scale as OWASP. 

The GSM Association, an industry group for mobile network operators, has hundreds of industry members—from Amazon to Coinbase to Audi—and has published numerous guidance documents for IoT security.70 For example, its security considerations range from password policies that protect against hard-coded or default passwords (CLP12_6.11.1.5) to a process for decommissioning endpoint devices (CLP13_8.10.1).71
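To make concrete what a control like CLP12_6.11.1.5 targets, the sketch below shows a first-boot credential check that rejects default, hard-coded, or serial-derived passwords. It is a minimal illustration: the default-password list, length threshold, and function names are hypothetical assumptions, not drawn from the GSMA guidance.

```python
# Hypothetical first-boot password check illustrating the intent of a
# GSMA-style control against hard-coded or default passwords.
KNOWN_DEFAULTS = {"admin", "password", "12345", "root", "default"}

def password_is_acceptable(password: str, device_serial: str) -> bool:
    """Reject default, hard-coded, or serial-derived passwords and
    enforce a minimum length (thresholds are illustrative)."""
    candidate = password.strip().lower()
    if candidate in KNOWN_DEFAULTS:
        return False
    if device_serial.lower() in candidate:
        # Passwords derived from a printed serial number are guessable.
        return False
    return len(password) >= 12

print(password_is_acceptable("admin", "SN-001"))                  # False
print(password_is_acceptable("correct-horse-battery", "SN-001"))  # True
```

A real implementation would also force the user to set a unique credential before the device joins a network, rather than merely rejecting weak ones.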

The CTA, a standards and trade organization with over 1,000 company members, runs an IoT Working Group that supports consumer IoT development. Those efforts include educating consumers about IoT security best practices and improving the security of IoT products.72 The CTA has multiple labeling schemes under development around IoT products, focused on consumer-facing product security descriptions managed through an accreditation system.73 The CTA, in fact, submitted a position paper to NIST in 2021 that described its vision for a cybersecurity labeling system for software and IoT devices—noting, among other recommendations, that labels should reflect consensus industry standards, avoid marketplace fragmentation, and look to risk assessment as much as specific security capabilities.74 It also has global reach, with Cisco, Google, Panasonic, Samsung, Walmart, Alibaba, Nvidia, and ADT among its members.75

The Connectivity Standards Alliance (CSA), which develops and certifies IoT technology standards, has a number of documents and efforts focused on security. For example, the CSA website contains numerous developer resources on IoT security, from security and privacy guidance on the CSA-developed IP-based protocol Matter to documentation around Zigbee, the low-power wireless communication specification.76 The CSA’s product security working group is developing security standards for IoT devices and exploring security options around labeling; the alliance has also recently started an IoT privacy effort. Both of these endeavors focus on consumer-facing security considerations (meanwhile, other CSA efforts focus on less consumer-facing aspects of IoT product security). The CSA has nearly 300 participant companies and dozens of sponsors around the world, and it also has hundreds of corporate adopters—ranging from large retailers like Amazon to device and component developers like Arm, Silicon Labs, Schneider Electric, LG, Huawei, and Google.77

Individual companies have also provided their own guidance, such as Google’s Cloud IoT Core “device security” guidelines,78 Microsoft’s Edge Secured-core criteria,79 and Arm’s Platform Security Architecture for the IoT.80 Each emphasizes different threat models and targets different stakeholders in the IoT process, from product engineers to those in management at product manufacturers.

While beneficial, these approaches in the aggregate present a fragmented industry approach to IoT security. Governments looking to industry standards as a reference point find numerous, very different options; for instance, while the ioXt Alliance’s security approach emphasizes testing against specific device profiles, the OWASP approach emphasizes different kinds of threat models that could, hypothetically, apply across device profiles. There are also implementation differences: the ioXt Alliance points to independent, third-party testing and evaluation, whereas OWASP offers a list of standards that organizations can pair to a particular threat model. Some (like the ioXt Alliance) create new, IoT security-specific approaches, while others (like Arm) offer rough replicas of their overall cybersecurity guidance, with some tailoring to IoT.

Summarizing challenges 

The current government approaches towards IoT security present many challenges—and have many gaps and shortfalls. This matters in the United States, Singapore, Australia, the UK, and many other countries because industry has failed to appropriately invest in IoT security, leaving governments to step in. Simultaneously, some states are leading aggressively on securing IoT while others appear willing, on a structural level, to cede that leadership to industry (or not to act at all). Australia, for example, has put forward an IoT security framework but has long delayed the publication of specific guidance.

Industry organizations have pursued a range of IoT security approaches across labeling, certification, minimum standards, and best practices. This guidance also varies across industry verticals—for instance, embedded IoT healthcare devices face many more regulatory security requirements than smart speakers. All these initiatives represent a substantial effort and reflect years of work from individuals in the security community—yet challenges (Table 1) around enforcement and implementation leave room for greater cohesion in tying security actions to particular parts of the product lifecycle.

On the private sector side, ambiguous requirements and policy goals,81 diverging processes and regulatory requirements across jurisdictions, and duplicative certification schemes all hinder private-sector efforts to boost IoT security. On the user side, individuals grapple with little to no information for selecting more secure products, poor security outcomes, and harmful knock-on effects from IoT insecurity that affect others across society and the internet.

Table 1: Challenges with Current IoT Security Models

SOURCE: Justin Sherman for the Atlantic Council.

State IoT security challenges 

State IoT security policies are fragmented across jurisdictions. While the United States, UK, Singapore, and Australia (as well as the EU bloc) have generally moved from a voluntary best practices approach toward a mandatory approach, the states’ policies do not necessarily integrate well with one another. Each country has different specific cybersecurity best practices and places different levels of regulatory requirements on companies. This state-to-state fragmentation makes it more difficult for governments to agree on IoT security goals and operationalize IoT security cooperation—impeding a multinational approach to systemic risk. 

Further, when states work to increase cooperation, there is a question of selectivity and exclusion: the ten countries with the most infected devices in the 2016 Mirai botnet were primarily in South America and Southeast Asia. Meanwhile, most high-resourced countries principally focus on IoT security collaboration with one another (e.g., UK-Singapore IoT security collaboration), not on building IoT security capacity in lower-resourced countries.82 The latter does happen—for example, Singapore and the Netherlands have engaged the nonprofit, multistakeholder Global Forum on Cyber Expertise on global IoT security issues. Nevertheless, collaboration remains primarily among higher-resourced and higher-capacity states.83

Thus, one set of countries debates solutions while excluding a bevy of impacted stakeholders from the discussion. In doing so, higher-resourced countries may miss important points about their IoT frameworks’ applicability. Notably, cultural contexts matter greatly alongside technical considerations when weighing adoption by a given country, and IoT product reliability may be just as important, if not more so, than cybersecurity per se in a development context.84 In fact, for many countries, increased reliance on information and communication technologies without proper reliability can very well yield suboptimal development outcomes.85 For example, while other governments (e.g., Singapore, Australia) reference the UK’s IoT security recommendations, some of the UK standards may require too much investment for lower-resourced states, and they focus more on security than on reliability per se.

Furthermore, regulatory approaches within countries may still be fragmented and leave gaps. For example, in the United States the FCC regulates IoT products’ network connectivity, and the FTC regulates the marketing practices of IoT products.86 The FCC has broad authority to regulate product manufacturers and sellers. On the flip side, the FTC’s authority mainly concerns consumer protection to ensure IoT product sellers are not being deceptive.87 However, this still leaves gaps, such as not incentivizing security requirements at the device manufacturing stage and leaving national laws to govern IoT cybersecurity for federal agencies, while mostly standards and voluntary guidelines guide the private sector.88 In Australia, to give another example, the state’s “privacy, consumer, and corporations laws were not originally intended to address cybersecurity,” leaving the national government trying to make do with a patchwork of laws to address cybersecurity.89 Country-internal fragmentation, in total, leaves policy and regulatory gaps in promoting IoT security, forces the government to grapple with an ill-formed patchwork of authorities and procedures, and raises costs and increases confusion for businesses and users—especially when different labels are in play. 

Private sector IoT security challenges 

Many IoT security approaches in practice have ambiguous requirements and policy goals that make it difficult for the private sector to both understand and implement the government’s vision—and difficult for the state to require or incentivize the private sector to change. Take government procurement requirements, whose aim can be unclear. One aim could be the use of procurement to directly secure specific products, such as by requiring the military to only buy IoT products with a higher cybersecurity bar. Another possibility is using procurement to signal best practices to industry, such as requiring compliance with NIST’s cybersecurity framework—mandatory for US federal agencies and which more than 30 percent of US organizations have voluntarily adopted.90 And another possibility is not just signaling best practices but incentivizing companies broadly, even those not doing federal contracting, to increase their own product security. As one standards body expert put it, “if the government only buys products meeting certain standards, that sets a bar for the private sector.”91

While the security approach may be similar or identical in each case, there are different policy goals in play that may not be articulated (even if they are not mutually exclusive). If most IoT vendors are not government contractors, the use of federal procurement requirements to secure the broader ecosystem may fail.58 Danielle Kriz, the senior director of global policy at Palo Alto Networks, argues that government procurement on its own is “not enough to result in full-scale IoT security.”92 Using procurement to signal to the broader market could also produce product fragmentation: “If you make the standards too robust,” argues David Hoffman, a Duke University professor of cybersecurity policy, “then you create a situation where there is a profit incentive for contractors to sell two different products: one for government and one for the private sector.”93 Further, if introducing a procurement requirement is meant to signal a coming wave of incentives around that set of security requirements, governments should note that—so industry can begin to get on board. 

Differences in cybersecurity and IoT security processes, levels of maturity, and regulatory requirements across jurisdictions likewise complicate the private sector’s implementation of IoT security approaches. When a country’s internal approach to IoT security is fragmented, it becomes harder to coordinate with the private sector as well as other countries—because there is no clear and cohesive national approach. Companies, for their part, often find themselves caught between multiple competing, if not contradictory, IoT cybersecurity regimes. This increases industry confusion about IoT security best practices (particularly for businesses with less institutionalized cybersecurity capacity) and may force IoT manufacturers and vendors to tailor-make products to meet specific, varied regulatory requirements (discussed in the next section). Disjointed IoT security standards also raise the costs of government interaction for companies, especially for smaller players with less budget and in-house governmental relations capacity. Vendors and manufacturers that have more money and resources could therefore have an even more outsized ability to influence the security conversation. 

For industry, certification schemes also introduce many challenges. The current IoT security certification approach emphasizes independent, third-party product certification—time-consuming and costly (sometimes in the tens of thousands of dollars)—which may be outright prohibitive for smaller manufacturers and vendors. This approach often excludes lower-cost approaches that could work simultaneously, like self-certification to a lower bar of standards. Certification schemes also vary among binary, tiered, and descriptive models; there is no unified approach for companies to implement and understand. For example, Singapore’s CLS has four progressively demanding security level provision tiers (see Figure 2): security baseline requirements (Tier 1), lifecycle requirements (Tier 2), software binary analysis (Tier 3), and penetration testing (Tier 4).94 Others, however, such as many industry certification schemes, are binary, either certifying a product as “secure” under their definition or not certifying it at all.

User IoT security challenges 

The current approach also presents challenges for users. An Ipsos MORI survey in Australia, Canada, France, Japan, the UK, and the United States found that consumers overwhelmingly think that “connected device manufacturers should comply with legal privacy and security standards” (88 percent), “manufacturers should only produce connected devices that protect privacy and security” (81 percent), and “retailers should ensure the connected devices they sell have good privacy and security standards” (80 percent).95 A majority of those that own connected devices (63 percent) “think they are creepy.”96 Despite these findings, by and large, users continue to purchase insecure IoT products. 

Currently, manufacturers and vendors provide users with little to no information to select more secure products. Where labeling and/or certification schemes do exist, they expect that buyers have a fair knowledge of IoT security and will make purchasing decisions based on that knowledge. This assumption of user knowledge is faulty, as all countries surveyed in this report are far from sufficiently educating the public on cybersecurity issues. And in the context of a corporate buyer of IoT products, there is no guarantee that organizations purchasing IoT products have deep, in-house capacity around IoT cybersecurity practices, either.

The current approach also leaves users, and the IoT ecosystem in general, with bad security outcomes and insecurity. Many manufacturers and vendors underinvest in cybersecurity and might not even have any kind of robust cybersecurity processes in place in their organization. This manifests itself in IoT products riddled with bad security practices, like default passwords and weak encryption, which leave products, users, and connected systems vulnerable to data theft and much worse. Merely encouraging organizations to adopt voluntary standards (that some organizations may not even know about) does not widely improve IoT security outcomes, either. Further, the labeling and certification schemes that do exist in some jurisdictions are often expensive—and if manufacturers and vendors choose not to absorb the costs themselves, then they will charge consumers higher prices for IoT products. 

Even if companies wanted to invest and buyers had all this knowledge, the current approach would still negatively impact users, the broader internet ecosystem, and other involved individuals. Given the “paradox of choice,” where increasing the number of options available to someone can make it harder to reach a decision, providing users with many different labels and certifications may do the same. The lack of a unified labeling scheme also makes it difficult for consumers to compare labels (binary versus tiered versus descriptive), and the lack of a single global IoT cybersecurity certification means buyers may not even be able to compare IoT security attestations at all. Moreover, there is little indication that introducing labeling and/or certification would necessarily cause a buyer to look anywhere beyond the price tag. And in the narrow cases where manufacturers or vendors provide labels and certification information to buyers, many users only see that information when the product is already unpacked and undergoing setup in their home or work environment. Overall, current IoT security approaches still place a heavy security burden on individuals, rather than systematically mandating and incentivizing product manufacturers and vendors to consider and build in security from the outset. As one DCMS official described it, labels may be attractive because they can avoid the bureaucracy of legislation—yet they still expect consumers to move the security needle. 

Addressing these challenges should not devolve into championing one national approach over another. The need for harmonization in specific controls is real, and this need extends to control philosophies and enforcement schemes. The section below synthesizes these previous approaches into a single framework based on the general lifecycle of IoT products as a basis for a path forward. 

Creating a synthesized framework 

There is no shortage of IoT security frameworks. As noted in the last section, government agencies, private companies, industry organizations, and civil society groups around the world have developed and published a range of IoT security policy frameworks, design best practices, and security certification schemes. This represents a substantial body of work on IoT security, yet there is more to be done—and the sheer range of frameworks creates complexity despite industry calls for coherence, presenting a meaningful obstacle to international coordination.

Rather than address each of the four jurisdictions of interest (US, UK, Australia, Singapore) in isolation, this section presents a consolidated framework that draws together existing security regulations, standards, and guidance from all four countries.

The framework’s first goal is to reduce fragmentation between policy approaches by highlighting their contributions and limitations. Operating in multiple jurisdictions with different IoT security regimes can drive up product development and legal compliance costs, disincentivize companies from investing in security or widely selling their products, and even create scenarios where companies must tailor-make IoT products to sell in different countries. Reducing fragmentation addresses these cost issues. It also empowers IoT product users, by giving companies and individuals a clearer set of tradeoffs and information rather than numerous, different stamps of security approval from different places. Lastly, reducing fragmentation helps policymakers forge cooperation internationally and cover the entire IoT security landscape at home. 

The framework’s second goal is to better situate technical and process guidance into cybersecurity policy. As previously discussed, some government requirements and guidance on IoT security lack detail and have ambiguous policy goals, which impede the private sector’s progress on better implementing IoT product security. Integrating technical and process details into government policy can help the private sector, especially companies with limited cybersecurity knowledge and capacity, operationalize higher-level IoT security objectives. It would also help governments identify flaws in their own IoT security approaches; for example, an overemphasis on certifications’ policy value has come at the expense of looking at the certification process—which for many organizations is a time-consuming, costly endeavor. 

Table 2 presents a synthesized IoT cybersecurity framework—mapping at what stages of the IoT lifecycle various IoT security actions and policies could be applied. This leads to a discussion in this section of how existing government IoT security approaches have enforced, incentivized, or guided these measures. It then leads to the recommendations section, which discusses ways in which governments can better select from these security action options and appropriately enforce, incentivize, or guide them to achieve better cybersecurity across the IoT ecosystem. 

Overwhelmingly, this framework highlights that the IoT security approaches in the countries studied focus on the design, development, and sale and setup phases of the IoT lifecycle, with significant gaps in security actions and policies for the maintenance and sunsetting phases of an IoT product’s lifespan.

Table 2: Synthesized IoT Security Framework

SOURCE: Liv Rowley and Justin Sherman for the Atlantic Council.

Cybersecurity decisions at each lifecycle phase help determine a product’s ultimate security (Figure 4). 

Design decisions frame how IoT products are ultimately architected, and they can include or exclude certain cybersecurity considerations from the outset. Security action and policy options at this level include following voluntary and/or mandatory technical standards, following voluntary and/or mandatory best practices, and employing best practice security design principles. 

Development decisions begin to put those design ideas into practice, and they impact how higher-level ideas and principles are operationally employed into the creation of products. They also present an opportunity for IoT product manufacturers to tailor additional security requirements based on their product’s risk profile—for instance, adding in extra controls on top of voluntary, minimum best practices for products used in safety-sensitive or critical infrastructure settings. 

Sale and setup decisions focus on IoT products going on the shelf and getting configured in their use environment, and they impact the cybersecurity of those products when first activated. Security action and policy options at this level include implementing vulnerability disclosure policies and processes, implementing mechanisms for regularly updating software, employing labeling schemes, and getting products security-certified. 
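A core piece of the software update mechanisms mentioned above is verifying an update before installing it. The sketch below checks an update blob against a vendor-published digest. It is a simplified illustration: production schemes typically rely on public-key signatures (so the device holds no signing secret), but this version uses an HMAC with a shared key to stay within the Python standard library, and every name in it is a hypothetical assumption.

```python
import hashlib
import hmac

def update_is_authentic(firmware: bytes, expected_digest: str, key: bytes) -> bool:
    """Recompute the keyed digest of the update blob and compare it in
    constant time against the vendor-published value."""
    computed = hmac.new(key, firmware, hashlib.sha256).hexdigest()
    return hmac.compare_digest(computed, expected_digest)

key = b"device-provisioning-key"   # hypothetical shared secret
blob = b"firmware-image-v2.1"
published = hmac.new(key, blob, hashlib.sha256).hexdigest()

print(update_is_authentic(blob, published, key))               # True
print(update_is_authentic(b"tampered-image", published, key))  # False
```

The constant-time comparison matters: a naive string comparison can leak timing information an attacker could exploit to forge a digest byte by byte.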

Maintenance decisions focus on IoT products that have already been configured and deployed, and they impact the security of those products for the rest of their lifetime. The security action and policy options at this level include maintaining vulnerability disclosure policies and processes, issuing regular security updates, updating labeling schemes in line with software security updates and disclosed vulnerabilities, and updating certifications in line with software updates and disclosed vulnerabilities. 

And finally, sunsetting decisions pertain to the end of a product lifecycle—such as when a vendor stops providing security updates—and how product vendors and users should communicate about, prepare for, and navigate the process of retiring an IoT product.97 
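One way vendors could support the sunsetting communication described above is to publish machine-readable end-of-support dates that devices or fleet managers can query. The registry, model names, and status messages below are hypothetical illustrations, not part of any of the frameworks surveyed:

```python
from datetime import date

# Hypothetical vendor-published end-of-support registry.
END_OF_SUPPORT = {
    "cam-100": date(2022, 6, 30),
    "cam-200": date(2026, 1, 15),
}

def support_status(model: str, today: date) -> str:
    """Flag products past their end-of-support date so users can
    replace or isolate them rather than run them unknowingly."""
    eos = END_OF_SUPPORT.get(model)
    if eos is None:
        return "unknown: no published end-of-support date"
    if today > eos:
        return f"unsupported since {eos.isoformat()}: replace or isolate"
    return f"supported until {eos.isoformat()}"

print(support_status("cam-100", date(2023, 1, 1)))
print(support_status("cam-200", date(2023, 1, 1)))
```

Even a check this simple addresses the core sunsetting gap the report identifies: users today often have no way to learn that a product has stopped receiving security updates.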

Figure 4: Overview of Government and Industry Frameworks

SOURCE: Liv Rowley and Justin Sherman for the Atlantic Council.

When applied to the United States, the UK, Australia, and Singapore, the framework shows that most country IoT security approaches concentrate on the earlier parts of the IoT product lifecycle. The design, development, and sale and setup phases are heavily covered. In the United States, existing NIST publications that provide guidance on security-by-design (like NIST SP 800-160) are applicable to IoT.98 The UK’s PSTI Bill, introduced in November 2021 and not yet passed, would require “manufacturers, importers, and distributors to ensure that minimum security requirements are met in relation to consumer connectable products that are available to consumers.”99 The provisions leverage recommendations in the UK Code of Practice/ETSI EN 303 645: banning default passwords, requiring vulnerability disclosure processes for products, and providing transparency for consumers on the duration that products will receive security updates.100 Nonetheless, there are still gaps; the UK PSTI Bill focuses more on design, development, and sale and setup.101

Design and development guidance often overlap in the four countries. The Australian government’s Code of Practice on securing the IoT for consumers uses the 13 principles laid out by the UK and ETSI, including not using default passwords, implementing a vulnerability disclosure policy, and keeping software updated and secure.102 The provisions around not using default passwords, validating input data, and securely storing credentials are articulated in the abstract at the design phase and implemented during the development phase.
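The validate-input-data principle, abstract at design time, becomes concrete at development time. The sketch below shows a hypothetical device command handler that rejects malformed payloads; the command set, field names, and payload shape are assumptions for illustration, not part of the UK/ETSI provisions:

```python
# Hypothetical command handler showing input validation at development time.
ALLOWED_COMMANDS = {"on", "off", "reboot"}

def parse_command(payload: dict) -> str:
    """Accept only well-typed, known commands; reject everything else."""
    cmd = payload.get("command")
    if not isinstance(cmd, str):
        raise ValueError("command must be a string")
    cmd = cmd.strip().lower()
    if cmd not in ALLOWED_COMMANDS:
        raise ValueError(f"unsupported command: {cmd!r}")
    return cmd

print(parse_command({"command": " Reboot "}))  # reboot
```

Allow-listing known commands, rather than deny-listing known-bad input, is the design-phase decision; the type checks and normalization are the development-phase implementation.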

The United States, the UK, Australia, and Singapore also have significant guidance and/or requirements at the product sale and setup phase. For the ETSI guidance—which underpins guidelines in the UK, Australia, and Singapore—the implementation of a vulnerability disclosure policy comes into play during sale and setup. Singapore’s CLS has four levels against which companies can certify products, from baseline requirements, certified based on developer self-declaration, to comprehensive penetration testing conducted by ISO-accredited independent laboratories.103 And in the United States, the IoT Cybersecurity Improvement Act of 2020 requires NIST to publish “standards and guidance” around IoT product purchasing and shifts the compliance burden from vendors onto federal purchasers.104 Moreover, federal agencies must consider such factors as secure development, identity management, and patching when looking at buying an IoT product and then prove that said product satisfies NIST’s guidance.105 E.O. 14028 directs federal agencies to implement secure software verification processes and directs NIST, the FTC, and other agencies to identify “IoT cybersecurity criteria for a consumer labeling program.”106

Regulations enforced by the FTC and FCC likewise focus on IoT product labeling when consumers look to purchase and deploy products (in the FTC’s case) and IoT network design (in the FCC’s case). This is not to say the US security approach entirely neglects the maintenance and sunsetting phases; NIST’s first IoT publication (NISTIR 8259)107 includes a category for “post-market” security considerations as well as general recommendations for establishing communication channels for product updates and customer feedback. A subsequent update to the document (NISTIR 8259A) contains recommendations for security update features.108  

All four government approaches focus less on the maintenance phase of the IoT product lifecycle. The UK’s IoT security approach has gaps in providing manufacturers, vendors, and users with maintenance guidance (e.g., once the security update plan is in place and communicated, how will it be continuously followed?) and sunsetting guidance (e.g., if the company stops providing security updates, how should it inform users, and what options might users have for replacing devices?). While there is some minimal guidance here—for instance, the UK DCMS Code of Practice includes a provision to make the installation and maintenance of products easy—it hardly provides anything substantively useful for manufacturers, vendors, or buyers. The same therefore goes for Australia, which follows the UK’s guidance. Singapore does provide detailed guidance on the maintenance phase at Tiers 2, 3, and 4 of its certification scheme.

Each approach has significant gaps at the sunsetting phase. The United States lacks sunsetting guidance in its IoT security approaches, and regulatory enforcement does not focus on sunsetting (e.g., the FTC focuses on how products are marketed to consumers, not how products are retired). Singapore’s labeling scheme provides little guidance on notifying users about terminated security updates when products reach their “life’s end” and, as a result, pose new and greater security risks. The UK’s IoT security approach also lacks sunsetting guidance, such as what happens if a company stops providing security updates as recommended by the DCMS. This means users, and society writ large, may have some protections against IoT insecurity at the earlier phases of the IoT product lifecycle, such as when companies are designing IoT products sold to the government and used in relation to critical infrastructure, or when vendors are advertising their products on the shelf and regulated. Yet businesses, individuals, and other entities using IoT products long past their supported lifespan expose themselves to insecurity, possibly without even knowing it—and without government policies and security approaches that protect users against the termination of security updates, outdated labels, and other security problems.

It is also important to note that requirements may, in the future, speak to areas outside the device lifecycle as well, concentrating more on an IoT manufacturer’s organizational structure or developer training. NIST notes this in its June 2022 NISTIR 8425 initial public draft, titled “Profile of the IoT Core Baseline for Consumer IoT Products.”109 Developer activities, as outlined in NISTIR 8425, may include Documentation, Information & Query Reception, Information Dissemination, and Education & Awareness.110 Some industry IoT security frameworks include non-device requirements as well. For instance, the IoT Security Foundation’s framework mandates the existence of certain roles at a company (for example, 2.4.3.1 mandates “There is a person or role, accountable to the Board, who takes ownership of and is responsible for product, service and business level security, and mandates and monitors the security policy”); or specific actions to be included in a company’s security policy (for example, “As part of the Security Policy, provide a dedicated security email address and/or secure online page for Vulnerability Disclosure communications”).111 Such standards, which apply to elements outside the scope of the device lifecycle itself, are critical to fostering a stronger security environment overall and should be kept in mind as IoT security practice matures.

Toward a consolidated approach

The framework above underscores how some governments and industry actors are making progress in pushing for greater IoT security—but there is a long road ahead to improving cybersecurity in the IoT ecosystem. Some governments and many industry actors still underinvest in IoT security. Despite their stated concerns, consumers continue to purchase insecure products, and product manufacturers and vendors have yet to deliver meaningful transparency and improvements in user security outcomes. Without the predictability of common security standards that impose pressure on all manufacturers and vendors, proactive firms have little incentive to produce secure products, and there are few penalties for laggards.

Overcoming widespread risks 

Promisingly, the past few years have seen a flurry of activity on IoT security from governments, industry groups, and consumer advocates. The attitude among those interviewed for this report was generally optimism about the direction of travel, with concern over the pace of the trip. Singapore is nearly two years into a voluntary, four-level labeling scheme that will gradually be made mandatory by product type, as it already is for internet routers. Australia appears poised to pursue a labeling approach that mirrors Singapore’s four levels (“graded shield”) or a simpler indicator showing the timeframe during which security updates will be provided (“icon expiry”). The UK has rejected the concept of labels and is instead on the cusp of passing legislation that empowers regulators to set basic cybersecurity requirements for all smart devices, a baseline that can be ratcheted up over time. In the United States, two states have implemented their own minimum security requirements, federal agencies must purchase products with more robust security, and NIST recently recommended a binary label akin to approaches in Germany and Finland. Consensus standards, enforcement measures, and international cooperation across these four jurisdictions are feasible but not yet close. Nevertheless, there are still threats to progress:

  • Risk #1: Regulations, standards, and norms diverge between jurisdictions. Despite today’s promising signs, as more jurisdictions take on the problem of IoT insecurity, there is a risk that regulatory divergence worsens into an ‘every-market-for-itself’ approach, in which duplicative requirements and confusing enforcement schemes burden IoT vendors, who must either support multiple sets of standards or focus on a small set of jurisdictions. 
  • Risk #2: Cybersecurity labels fail to demonstrate value to both manufacturers and consumers. One interviewee summed up the attitude toward cybersecurity labels with an analogy to Churchill’s famous quote about democracy: “the worst option, except for all the others.” Labels are an increasingly popular approach in national IoT security efforts. Despite a clearly articulated demand for greater security by consumers, some observers are doubtful that consumers can or will make informed cybersecurity decisions even with the benefit of an indicator on the box or the webpage. Others question whether it is correct to task consumers with making such security decisions for themselves, comparing insecure IoT products to an unsafe lightbulb: you do not compare lightbulb brands to see which one is least likely to explode. Like other market signals, cybersecurity labels can suffer from a collective action problem, only arising if both sides of a transaction value them. 
  • Risk #3: Product security requirements become watered down as they approach broader adoption. Particularly in the United States, legislation often becomes less potent as it approaches the federal level. Industry resistance was sufficient to kill prior versions of the IoT Cybersecurity Improvement Act and cut some of the provisions that were finally passed in its 2020 version.112 Given federal law’s preemptive power, consumer IoT security legislation could counteract more ambitious measures at the state level. This dynamic may also occur internationally if jurisdictions are driven to the lowest common denominator in pursuit of consensus. 
  • Risk #4: Guidelines become too rigid, locking in outdated security practices. As Brian Russell and Drew van Duren describe, “The greatest challenge in the security industry is finding methods today of defending against tomorrow’s attacks given that many products and systems are expected to operate years or decades into the future.”113 Legislation must define processes and outcomes rather than codifying specific security measures that might soon become irrelevant. 
  • Risk #5: The drive for improved consumer IoT security fails to have an impact on product manufacturers in jurisdictions without strong IoT security laws. Citing the challenge of extraterritorial enforcement, the national initiatives surveyed in this report focus primarily on imposing requirements on products sold in their own jurisdictions rather than on influencing practices where products tend to be manufactured. Interventions must consider the full range of actors who can put pressure further up the supply chain, with retailers, in particular, having the potential to play an influential role. 

The shape of a consolidated approach 

What might a better IoT future look like? One description is: “a world in which every IoT ecosystem stakeholder[’s] choices and actions contribute to overall security of IoT where consumers and benefactors are simply secured by default.”114 It could mean lowering the tolerable level of insecurity to the point where consumers trust IoT products and services as something more than a roll of the dice.

Crucially, this world must reflect different economic incentives for manufacturers, consumers, and attackers. Policy change is necessary to help shape and channel these incentives. When assessing any proposal, one should consider its ability to advance the following outcomes: 

  • Eliminate the most glaring insecurities in consumer IoT products, thus increasing the level of effort and sophistication required for attackers to compromise them. 
  • Promote harmonization across jurisdictions, avoiding needless divergence and duplication, thereby reducing friction for manufacturer uptake. 
  • Sharpen incentives for manufacturers to exceed the minimum baseline of security practices. 
  • Increase consumer awareness of the risks from insecure products and increase interest in security as a feasible and accessible buying criterion. 
  • Provide real impact on user security outcomes in the near term while maintaining flexibility to incorporate new controls through consensus measures as technology evolves. 

To drive the above outcomes and closer alignment in policy across these four states, the team proposes a multi-tiered IoT product labeling and certification scheme with basic, easily understandable labels for consumers (Figure 5). This multi-tiered scheme would ensure that minimum security standards are met, give consumers easily digestible ways of understanding the security of a product, and allow manufacturers that invest in higher security to advertise it understandably.

Figure 5: Overview of IoT Security Tiers

SOURCE: Patrick Mitchell for the Atlantic Council.

Tier 1: Minimum Baseline Features. The first tier should be a set of mandatory, baseline, self-attested IoT security standards created by governments in consultation with industry. For each country, the government agency leading this effort should ideally be the organization already in charge of cybersecurity standards; if there is none, governments should select an organization with a high degree of transparency, technical competence and capacity, and a track record of working with industry and civil society. The recommended baseline security standards should be rooted in widely agreed-upon desirable security outcomes, for instance, the core principles outlined in ETSI EN 303 645, such as eliminating default passwords, mandating a vulnerability reporting contact, and facilitating secure software updates. Once governments set this tier, manufacturers should apply to the agency administering the program and self-attest that they meet these standards. The agency should then provide qualifying products with a label indicating that they have met these baseline requirements, and the manufacturer and product vendor (if different from the manufacturer) should include this label and information about it in the product description. Random audits can assess compliance without the need for a time-consuming and expensive certification process. Examples of national programs in this tier include the UK’s PSTI Bill, Singapore’s CLS Tier 1 requirement for routers, and California and Oregon’s IoT security laws. 

Tier 2: Enhanced Security Features. Building off the first tier of mandatory, baseline, self-attested IoT security standards, governments should then work with industry to set a second tier of security standards—higher, voluntary, and independently tested. The standards to qualify for this second tier should likewise look to the Tier 1 baseline as a starting point, with a particular focus on ensuring products communicate securely and protect consumers’ personal data, inspired by security outcomes that may be drawn from ETSI EN 303 645. Qualifying products will receive a label indicating that they have both met the Tier 1 baseline requirements and the Tier 2 requirements, and the product description should include information about this label. To encourage the uptake of the second tier, securing a label should be a relatively cheap and quick process. Given that some jurisdictions may see more value in a scheme with more than two tiers, national regulators should be able to subdivide this tier into different levels of security. Examples of existing programs that would fall within this tier include Levels 3 and 4 of Singapore’s CLS, Finland’s Cybersecurity Label, Germany’s BSI IT Security Label, and the binary label recommended by NIST in the United States. 

Special Standards for Safety Critical Products. Industry-specific regulators should remain in charge of setting the highest bar of security standards for IoT products that present an imminent threat to human life if compromised. For most smart devices, consumers do not bear the brunt of the consequences if their device is vulnerable to an attacker. This dynamic shifts dramatically when the connected device is an automobile or pacemaker and the consequences become potentially lethal. In these instances, however, consumers still lack the expertise to assess risk. These industries tend to already have specific regulators focused on product safety: for example, the FDA certifies medical devices, and the National Highway Traffic Safety Administration (NHTSA) is charged with enforcing motor vehicle safety standards. In this context, an internet connection is merely another feature that introduces new risks to product safety. These regulators should look to standards bodies such as ETSI and NIST as a starting point for guidance on cybersecurity, but the ultimate requirements for these safety-critical applications must extend to the particular security needs of the industry—which are likely even more stringent than the second tier discussed above. Products that fall into this category need not be certified with a label. Instead, if they fail to meet the regulator’s minimum standards, they should not be approved (or should be recalled if they are already on the market). 

What does the label look like? 

A label for IoT security should consist of a standardized table or graphical description of security features, attached physically to a product box and digitally affixed to product descriptions online. The digital description of an IoT product’s security features is especially important, and—given the constantly changing security landscape—keeping digital labels up-to-date is often easier than doing so for physical labels. Ideally, the standardized-format description of product security features should be mapped to a set of standard IoT security criteria—such as a checklist of product compliance with some NIST security best practices, or a checklist of product compliance with ETSI requirements for IoT security (e.g., does this product use universal default passwords, does it have a security update function in place). Labels, intended for audiences ranging from consumers to enterprise purchasers, should use clear, easily understandable language to describe product security features, rather than referencing specific standards numbers or using highly technical verbiage (such as describing a specific encryption algorithm). 
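To make the idea of a standardized, checklist-style label concrete, the sketch below models a digital label as a small data structure and renders it in plain language. The schema, field names, and product are invented for illustration; no existing labeling program defines this format.

```python
# Hypothetical machine-readable IoT security label (illustrative schema only).
# Field names and the product are invented for this sketch; real programs
# would define their own schema mapped to criteria such as ETSI EN 303 645.
label = {
    "product": "Example Smart Plug",
    "tier": 1,
    "claims": {
        "No universal default passwords": True,
        "Vulnerability reporting contact published": True,
        "Security updates provided until": "2027-12-31",
    },
}

def render_label(label):
    """Render the digital label as plain-language lines a buyer could read."""
    lines = [f"{label['product']} - Tier {label['tier']}"]
    for claim, value in label["claims"].items():
        mark = "[x]" if value is True else f"[{value}]"
        lines.append(f"{mark} {claim}")
    return "\n".join(lines)

print(render_label(label))
```

Note how the rendered text states outcomes ("No universal default passwords") rather than standards numbers or technical verbiage, in line with the guidance above; the same record could also feed a machine-readable format online.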

Related to the label, governments should consider cooperating and coordinating with industry to ensure data on labels is easily accessible—to regulators, researchers, and the public generally. One idea is creating a central repository of manufacturer and vendor label information, perhaps maintained by a country’s cybersecurity standards organization or a standards development organization (SDO), into which vendors and manufacturers can upload independently tested and/or self-certified label information about IoT product security. It may be advantageous to develop a single form containing information of interest to multiple major jurisdictions, inspired by the “Common App” form which allows individuals to fill out one form to apply to multiple US-based universities. This would allow regulators and others to access information on company compliance and broader IoT product security trends in a single place and in a single, accessible format; it also potentially streamlines compliance efforts by IoT vendors, allowing them to file security information about their products in one place that is applicable in multiple jurisdictions. Another idea is having companies make this information available from their systems through a standard API—such that all the information is not stored in one single place, and the government does not have to maintain a central repository of IoT security label data, but that individuals can query manufacturer and vendor APIs to get label information. 
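The standard-API idea above can be pictured as each vendor answering the same query with the same record shape. The sketch below fakes that lookup in-process; the registry contents, field names, and lab name are all hypothetical, and a real deployment would serve an agreed response schema over HTTPS from each manufacturer or a central repository.

```python
# Hypothetical label-lookup "API", sketched as an in-process function.
# In practice, regulators, researchers, and the public would query vendor
# endpoints (or a central repository) that all conform to one shared schema.
REGISTRY = {
    "SP-100": {
        "product_id": "SP-100",
        "tier": 2,
        "certified_by": "Accredited Lab A",   # invented name for illustration
        "self_attested": False,               # Tier 2 = independently tested
        "etsi_en_303_645": ["no-default-passwords", "secure-updates"],
    },
}

def get_label(product_id):
    """Return the label record for a product, or None if it is unregistered."""
    return REGISTRY.get(product_id)

record = get_label("SP-100")
```

Because every responder returns the same record shape, compliance data could be aggregated across manufacturers without a single government-maintained database, which is the appeal of the API variant described above.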

A note on ambitions 

At their simplest, today’s approaches reflect two different philosophies about where governments should focus their efforts: (1) targeting the “low hanging fruit” of higher impact/lower effort measures with mandatory requirements, or (2) setting an optional higher bar and trying to get consumers and industry to care about it. The former arguably views security improvements as a rising tide that fills in the lowest lying areas first, while the latter arguably views it as a distant target that focuses our gaze, even if not everything hits the bullseye. While both strategies have their merits, they need not be mutually exclusive. We cannot content ourselves with merely getting rid of the worst shortcomings. Similarly, the choice for consumers should not be between one class of products that have poor security and another with world-class security. 

It would be counterproductive to suggest that these countries should scrap their national approaches in favor of a new consensus program. Given how recent these efforts are—if they have even yet been implemented—it is still too soon to tell how each country’s approach will fare. A degree of national-level experimentation can help determine what does and does not work. Further, as one interviewee noted, while standards may harmonize internationally, enforcement occurs locally. Many jurisdictions have lined up behind the same set of guidelines in ETSI EN 303 645, with some others pursuing slightly differing approaches that nonetheless seek the same outcomes that the ETSI documentation aims to achieve. But the measures chosen to encourage (or compel) industry to generate products with better security must reflect the jurisdiction’s regulatory and consumer cultures. The silver bullet is not necessarily a new global label, new methods of enforcement, or new standards for IoT products. Instead, the world needs a better way of bringing together these efforts and ensuring they continue to avoid contradiction and duplication. 

Recommendations

This section lays out nine recommendations for government and industry actors to enhance IoT security, broken into three recommendation sets: setting the baseline of minimally acceptable security, incentivizing above the baseline, and pursuing international alignment on standards and implementation across the entire IoT product lifecycle. While many of these recommendations apply generally to those interested in promoting a more secure IoT ecosystem, the report also aims to identify specific actors and the steps they can take to bring about this multi-tier structure for IoT security (Figure 6). Moreover, these recommendations also aim to address the risks and uncertainties described in the prior section. 

Importantly, this report deliberately does not prescribe a particular label design, such as a table or graph. Nor does it prescribe how companies should pair physical and digital labels, or to what extent companies and/or governments should harmonize specific label designs and digital characteristics across jurisdictions. These areas deserve more work, and the optimal approaches remain unclear at this stage. 

Figure 6: Overview of Actors and Actions to Improve IoT Security

SOURCE: Patrick Mitchell for the Atlantic Council.

Recommendation Set: Establish the Baseline of Minimally Acceptable Security (Tier 1): Currently, many governments lack baseline security standards for IoT products, and for some of those that do have such standards enacted, companies must go through a time- and cost-intensive process of independent testing and certification. This substantially raises the barrier to adopting what should be easily achievable and cybersecurity-bolstering baseline standards. By setting this minimum baseline, making it low-cost for companies to comply with, selecting criteria that greatly increase cybersecurity (like no universal default passwords115 and having security updates), and making it mandatory, governments can ensure IoT products within a country have the most basic and critical security measures in place. In some jurisdictions, enforcement might look like a law that requires every IoT manufacturer to implement the government-set IoT security baseline standards; in other countries, enforcement might look like a consumer regulatory agency creating a new rule within its existing authorities. 

IoT products are currently so insecure that hacking them is relatively trivial. The insecurities these products have are so glaring and egregious that even relatively unskilled hackers can get into the game and claim their slice of the pie. Implementing mandatory minimum security standards would have an impact on the state of IoT security by plugging those widely known and easy-to-find holes, which raises the cost of knowledge, time, and resources required to compromise IoT products. In other words, this would help push small fry hackers out of the scene, and the more sophisticated hackers would have to invest energy into developing ways to target more secure products. 

To illustrate this point, the Global Cyber Alliance’s October 2021 report “IoT Policy and Attack Report” provides a glimpse into just how effective some of these minimum security measures can be.116 Using a “honeyfarm” (a large network of IoT device honeypots), the Global Cyber Alliance was able to measure the number of attacks against different classes of IoT products and determine whether the number of successful attacks against the target changed with the implementation of different security standards. For instance, the report found that of over 7,000 malicious login attempts, attackers were able to log in, and thereby compromise a device, in only 79 instances. All 79 of those instances involved devices that used default passwords.
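A quick back-of-the-envelope reading of those figures shows how small the success rate is. Since the report says "over 7,000" attempts, 7,000 is used as a conservative denominator here, so the true rate would be somewhat lower.

```python
# Rough success rate implied by the Global Cyber Alliance honeyfarm figures
# cited above: over 7,000 malicious login attempts, 79 successful compromises,
# every one of which involved a default password.
attempts = 7000      # lower bound; the report only says "over 7,000"
successes = 79
success_rate = successes / attempts

print(f"{success_rate:.2%}")  # roughly a 1% success rate, at most
```

Even this upper-bound rate of about one in a hundred depended entirely on default passwords, which is why banning them is among the cheapest and highest-impact Tier 1 requirements.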

This section describes two recommendations that aim to influence two critical groups of actors in implementing this baseline: product manufacturers and retailers. 

Recommendation 1: Governments should implement regulatory measures to enforce a mandatory baseline on manufacturers selling in their markets (Figure 7). Initially, governments should conduct outreach to encourage compliance and spread awareness among manufacturers about the security requirements. Inevitably, some companies will not implement the Tier 1 security baseline within the required window or in the required way. This could be the result of many factors, including a lack of awareness about the rule (e.g., for smaller IoT manufacturers), foot-dragging, and limited capacity to quickly implement the self-attested label and certification, among others. Governments should therefore develop mechanisms to publicize the new, required security baseline at Tier 1 and encourage companies to implement it within the specified window. Beyond general public education campaigns, for example, this could include such processes as a country’s key standards agency holding sessions with industry to explain new requirements and answer any questions that may arise, well before the requirements go into effect. 

Next, governments should set up random audit mechanisms to ensure firms’ claims are accurate and issue penalties as needed. Some companies may self-attest to a security baseline and then take action that deviates from that attestation (e.g., implementing security updates and then ceasing security updates). Other companies may falsely self-attest to the security baseline altogether. If a product has been falsely attested to and does not meet the minimum security standards, the government should begin by issuing a compliance notice to its manufacturer. The compliance notice (or prompt for change) should outline all corrective actions and set a clear deadline for when these actions must be complete. Should a manufacturer continue to produce a noncompliant product with a falsely advertised security label, the government’s relevant enforcement agency should issue a stop notice that orders the manufacturer to cease selling the product until made compliant. The agency’s stop notice (sent to the company and published publicly) should also demand the recall of the noncompliant product. The agency should also consider additional actions depending on its authorities and typical enforcement processes against other companies domestically, such as fines. In line with other contexts in which companies may hold liability, governments carrying out enforcement should weigh whether a reasonable effort was made to attest in good faith, among other factors. 
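The escalation ladder described above (compliance notice, then stop notice and recall, then further penalties such as fines) can be sketched as a tiny state machine. The step names and the strictly linear progression are simplifications for illustration, not a legal specification of any jurisdiction's process.

```python
# Illustrative model of the enforcement escalation described above.
# Step names are invented shorthand; real processes vary by jurisdiction
# and need not be strictly linear.
ESCALATION = [
    "compliance_notice",        # corrective actions listed, deadline set
    "stop_notice_and_recall",   # cease sales, recall noncompliant product
    "fines_or_other_action",    # further penalties per agency authorities
]

def next_step(current=None):
    """Return the enforcement step after `current` (None means no action yet),
    or None once the ladder is exhausted."""
    if current is None:
        return ESCALATION[0]
    i = ESCALATION.index(current)
    return ESCALATION[i + 1] if i + 1 < len(ESCALATION) else None
```

The point of modeling it this way is the predictability the report calls for: a manufacturer that falsely attests can anticipate exactly which consequence follows continued noncompliance.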

Figure 7: Setting the Baseline of Minimally Acceptable Security (Recommendation 1)

SOURCE: Patrick Mitchell for Atlantic Council

Recommendation 2: Governments should follow the “reversing the cascade” philosophy: instead of trying to influence manufacturers based abroad, governments should put pressure on domestic suppliers and retailers, who may, in turn, put their own pressure on manufacturers to improve security (Figure 8). It is not just governments that make policy decisions affecting product manufacturers; there is considerable power in the terms and conditions for selling through major marketplaces and retailers like Amazon, Walmart, and Target. Many IoT security efforts run into trouble when they try to levy penalties on manufacturers, many of which are based outside the enforcing jurisdiction and may have no incentive to comply with security requirements. Vendors, however, fall within a government’s jurisdiction, making enforcement action more feasible. There are also fewer major IoT vendors than IoT product manufacturers, allowing efforts to be more concentrated. In the US, political leaders and regulatory agencies, such as cybersecurity officials in the Department of Commerce and regulators at the FTC, should call upon major retailers to more proactively police the sale of consumer IoT products that lack basic security features; these retailers currently sell products like smart thermostats, smart speakers, and baby monitors that follow poor security practices and use default passwords. If engagement does not bring about change, retailers could be held accountable through new laws that penalize them for selling noncompliant products. Initially targeting noncompliant smart products that have long sat on the shelf may achieve higher security across products more quickly, without creating barriers to entry for small manufacturers. It is also possible that the FTC could pursue action against specific retailers under its authority over “unfair or deceptive acts or practices.”117

As the world’s largest online retailer, Amazon, for example, could have an outsized impact by expanding its “Restricted Products Policy” to bar unsafe smart devices. When contacted by security researchers about a particularly vulnerable wireless camera (promoted as “Amazon’s Choice”), the firm removed the preferred listing and responded, “We require all products offered in our store to comply with applicable laws and regulations and have developed industry-leading tools to prevent unsafe or non-compliant products from being listed in our stores.”118 In this vein, in its Examples of Prohibited Listings in the Electronics category, Amazon should explicitly prohibit smart home products that fail to meet the Tier 1 requirements. The US government can apply pressure on online retailers (not just Amazon) to do so, such as through public messaging campaigns and convenings with company executives through organizations like NIST. If this fails to stem the presence of insecure products on the site, another measure could be requiring firms to receive approval before listing consumer IoT products, as they must for categories including jewelry, DVDs, and “Made in Italy” items, or just a subset of high-priority items like children’s connected toys. This approval could be as simple as submitting a form attesting that the product does not use universal default passwords and listing a vulnerability reporting point of contact. Amazon’s application form for selling streaming media players could serve as a template. Even without specific laws that force its hand, this policy would be in line with Amazon’s stated goal of allowing customers to buy with confidence on its platform. 

Figure 8: Setting the Baseline of Minimally Acceptable Security (Recommendation 2)

SOURCE: Patrick Mitchell for Atlantic Council

Recommendation Set: Incentivize Above the Baseline (Tier 2): Ensuring that all smart devices meet basic security requirements is valuable, but insufficient relative to the present risk in the IoT ecosystem. Some buyers may wish to achieve security at a higher level, and even more likely, some governments may wish to require manufacturers to adopt security standards above the first-tier baseline. Some manufacturers may also pursue a higher level of security as a differentiator. This section outlines four recommendations that will strengthen the development of this higher tier: setting the higher tier, mandating a more stringent degree of security for government-procured smart devices, expanding label recognition between states, and moving towards a consensus certification and labeling program. These actions will grow demand for secure products, increase consumer awareness, and decrease friction for firms that must otherwise navigate multiple certification regimes. 

Recommendation 3: Governments should support the creation of a voluntary, higher tier of security requirements, indicated via labeling programs in their markets (Figure 9). The objective of this tier is to encourage firms to adopt more advanced security features and design practices in their products. As with the first tier, the specific security provisions that governments select for this tier should consider outcomes-based approaches, perhaps looking to ETSI EN 303 645 for inspiration. Other provisions, such as those from OWASP and ioXt, can supplement such approaches. Unlike the first tier, in which companies self-attest to meeting standards, in this tier companies should have their products evaluated and their status certified by a third-party testing lab. These approved labs should be certified under ISO/IEC 17025, an internationally accepted standard for testing and calibration laboratories, to ensure consistent application of device security testing procedures. Since product certification at this tier is voluntary, manufacturers will likely wish to advertise their products’ enhanced security features. Any device that passes the test, and is therefore shown to meet the Tier 2 requirements, will receive an accompanying Tier 2 label. These labeling schemes can be “binary,” indicating the presence or lack of desired security features (e.g., Finland and Germany’s programs), or multi-level, allowing manufacturers to pursue the certification that meets the desired “grade” of security for their product (e.g., Singapore’s CLS). After issuance, random audits should ensure that devices continue to remain in compliance with the provisions of their label. If a product has received a label but no longer meets its requirements, the government should decertify the product. Depending on the jurisdiction, the government may also pursue legal action against those who willfully make false claims about their product’s security features. 

The existence of the second tier will aid in raising the security of IoT products above the minimum standards set in tier one. As the multi-tier model evolves over time, governments can also migrate effective standards from the second tier over to the first tier. Further, using outcomes-based approaches such as ETSI EN 303 645 as inspiration for these security requirements will ensure continued momentum around many agreed-upon basic security principles, while the employment of public-private cooperation ensures that standards are actionable. To drive the uptake of labeling programs, governments should engage with industry and the public to spread awareness of the programs’ benefits, and they may also consider defraying start-up costs, such as waiving registration fees and subsidizing testing expenses. 

Much like ETSI could serve as a guiding foundation for establishing a set of baseline security requirements for Tier 1, the industry security efforts underway by the CTA and the CSA, among others, could become a foundation for establishing a higher bar of IoT product security paired with a consumer-facing IoT labeling scheme. 

Figure 9: Incentivizing Above the Baseline (Recommendation 3)

SOURCE: Patrick Mitchell for the Atlantic Council.

Recommendation 4: Governments should include Tier 2 requirements as part of government procurement contracts (Figure 10). Technology manufacturers and vendors strongly benefit from government contracts, and the inclusion of cybersecurity standards in government procurement requirements can be one mechanism to incentivize large and small manufacturers to adopt them. The cost-benefit is simple for those companies: if they do not meet the specified cybersecurity requirements, they do not qualify for government contracts. Governments should therefore include Tier 2 (or higher) security standards in their procurement requirements such that any IoT manufacturer or vendor who wishes to do business with them must invest in a higher level of security beyond the Tier 1 baseline. 

The United States provides a recent case study in this approach with its IoT Cybersecurity Improvement Act of 2020, which requires federal agencies to abide by NIST cybersecurity guidelines when procuring IoT products. Thus, companies will not be able to sell their IoT products and services to the US federal government without complying with NIST cybersecurity guidelines. Procurement requirements in the UK, Singapore, and Australia, especially in the defense apparatuses, can similarly provide a mechanism by which the government can incentivize the adoption of a higher tier of cybersecurity practices. Since it tends to be too unwieldy for companies to produce multiple lines of the same product—one suitable for the government’s requirements and a separate less secure model—the entire market would benefit. This measure would not only incentivize companies to act but would also mean that IoT products used by governments will themselves have a higher bar of security. In turn, procurement is a mechanism by which to better protect government systems and, likely, citizen data against cyber risks as well. 

Figure 10: Incentivizing Above the Baseline (Recommendation 4)

SOURCE: Patrick Mitchell for the Atlantic Council.

Recommendation 5: In the short term, governments should reach agreements to mutually recognize each other’s labels. As different national IoT labeling schemes proliferate around the world, it will be important to reduce the burden on manufacturers from duplicative testing and certification requirements. In October 2021, Singapore and Finland agreed to mutually recognize each other’s labels for IoT products, hoping that this agreement will also spur more international collaboration. Through this agreement, companies that receive Finland’s Cybersecurity Label for a product are immediately eligible for Singapore’s Level 3 label, and vice versa. Germany, though not a country focused on in this report, launched its voluntary cybersecurity labeling program in January 2022 and is reportedly in discussions with other countries to further expand mutual recognition. Given ETSI EN 303 645’s role as the backbone of multiple national frameworks, these agreements would likely be relatively simple to establish, recognizing that some agreements might focus on recognizing specific requirements while others might focus on recognizing equivalency, when similar outcomes are achieved with slightly different requirements. Major technology firms that care about improving the security of smart devices can apply for certification, even if it does not immediately benefit them, thus adding to the credibility of labeling programs. Countries with labeling programs already underway should study their impact, consider stakeholder feedback, adjust their schemes as needed, and share lessons learned with other countries interested in adopting this approach. It would be helpful for some of this analysis to focus on how to balance the need to maintain high standards with reducing the administrative burden on firms going through the certification process. 
Major IoT vendors have noted how onerous it is to submit their products to multiple IoT security certification processes; for smaller firms, it can only be more difficult. Solutions like a “Common Application form”—inspired by the innovation that allows individuals to apply to multiple US-based universities by filling out one document—could help address this problem, as can regularly reviewing program-specific requirements and dropping ones that do not add value. 

Recommendation 6: Over the longer term, governments should compare results of their national labeling programs and move towards a single global model for communicating the security characteristics of an IoT product. As regulators in each of the four countries gather performance data on the impact of their approaches, they should work to adopt the attributes of the certification scheme(s) that show the most promise. Labels are already moving well past static data forms with the inclusion of commonly accepted machine-readable formats, and more dynamic data sources like SBOMs might be contemplated. Most fundamentally, any future consensus model to communicate the security characteristics of an IoT product (not its packaging) should include basic, easily understandable information affixed to the product, as well as more detailed and dynamic information found online. 

Oftentimes, currently issued IoT security labels and certifications fail to articulate exactly what certification means and how users should understand security. Further, by the time many consumers read an IoT product label, it is already unboxed and undergoing set-up in their home. These shortcomings impede buyers' ability to understand IoT product labels and certifications—thus undermining their effectiveness. As part of this multi-tier framework, government and industry should ensure that, at their respective tiers, labels issued for IoT products have basic, easily understandable information affixed to the product itself. They should also ensure the same information is available online, supplemented with other details that manufacturers and vendors can more easily update over time. Instead of communicating in the highly technical language used by experts, governments and industry should look to their relevant communicators for help employing the clearest language possible: for instance, for Tier 1, printing "No default passwords" on the box and including a check mark next to it. Doing so will empower buyers to easily make decisions about the security and privacy of a product through easy-to-understand labels. 
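The two-layer structure described above (a few plain-language claims affixed to the product, plus richer and updatable details hosted online) could be sketched as a simple data record. Everything below, including the field names and the `label.example.org` registry URL, is invented for illustration and not drawn from any existing labeling scheme:

```python
import json

# Hypothetical two-layer label record. "on_product" holds the short,
# plain-language claims printed on the device itself; "online_record"
# holds the richer, updatable details a registry could host.
label = {
    "on_product": {
        "no_default_passwords": True,        # e.g., Tier 1: "No default passwords"
        "security_updates_until": "2027-06",
        "details_url": "https://label.example.org/acme-cam-2",  # invented URL
    },
    "online_record": {
        "tier": 1,
        "standard": "ETSI EN 303 645",
        "vulnerability_contact": "security@acme.example",       # invented contact
        "sbom_url": "https://label.example.org/acme-cam-2/sbom",  # dynamic data source
        "last_updated": "2025-01-15",
    },
}

# The online record can change over time; the printed claims cannot.
print(json.dumps(label["on_product"], indent=2))
```

The design choice mirrors the recommendation: static claims must be simple and durable, while anything likely to change (support dates, SBOMs, contact points) belongs in the online layer.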

Recommendation Set: Pursue international alignment on standards and implementation that cover the entire IoT product lifecycle: Coherence between jurisdictions on enforcement mechanisms is important, but consistency in the principles of good security practice that form their foundation is even more critical. Given that security is a moving target, regulators must also be able to adapt as capabilities and threats shift. This section describes three recommendations that are key to these objectives: maintaining consensus on standards and scope, introducing regular reviews to keep IoT security programs up-to-date with technological change, and ensuring that all phases of the IoT lifecycle are appropriately addressed. 

Recommendation 7: Governments should pursue outcomes-based approaches to consumer IoT security rooted in agreed-upon basic security principles and maintain similar definitions for products considered “in-scope.” Efforts to secure consumer IoT should be rooted in widely recognized desirable security outcomes, though countries may find benefits in slightly different standards to achieve those outcomes. This focus on outcomes is already evident in the approaches taken by leading standards bodies: NIST notes that its “baseline product criteria for consumer IoT products are expressed as outcomes rather than as specific statements as to how they would be achieved,”119 while ETSI says that its “provisions are primarily outcome-focused, rather than prescriptive, giving organizations the flexibility to innovate and implement security solutions appropriate for their products.”120 ETSI EN 303 645 already underpins national efforts in the United Kingdom, Singapore, Australia, Finland, Germany, India, Vietnam, and elsewhere, which goes a long way to ensuring a degree of uniformity in this space. As these countries have implemented national programs, they have supplemented the main ETSI EN 303 645 provisions with additional principles from other bodies, such as Singapore’s Infocomm Media Development Authority (IMDA) and Germany’s Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, or BSI). While some variation among requirements is perhaps inevitable, it can risk becoming onerous for IoT vendors as additional provisions proliferate across jurisdictions. This highlights the importance of encouraging countries to strive for similar outcomes and not just standards. 
Other IoT security frameworks may be referenced to bolster specific aspects of IoT security that are outside the scope of guidance found in standards such as ETSI EN 303 645, particularly those that extend beyond the device hardware and into the product’s related software and apps. For instance, the App Defense Alliance has a framework that may be useful to reference while developing apps that are partnered with physical IoT products. 

Similarly, governments must remain aligned on the products they consider “in-scope” for their IoT security efforts. ETSI EN 303 645, for example, covers “consumer IoT products that are connected to network infrastructure (such as the Internet or home network) and their interactions with associated services,” and provides a non-exhaustive list of examples that includes: 

"Connected children's toys and baby monitors; connected smoke detectors, door locks and window sensors; IoT gateways, base stations and hubs to which multiple products connect; smart cameras, TVs and speakers; wearable health trackers; connected home automation and alarm systems, especially their gateways and hubs; connected appliances, such as washing machines and fridges; and smart home assistants."121 

Governments should consider how far to draw the line on systems, devices, and services with which IoT products connect—thinking about IoT cloud applications and other services that might fall under the scope of security baseline enforcement. For instance, the language in the UK’s PSTI Bill—as written—excludes many IoT products from the scope of an IoT device, thus limiting the potential benefits of a mandated security baseline. As a starting point, governments should consider enforcing the baseline on all IoT products as well as on the systems and services on which IoT products depend to function. For example, if an IoT cloud application breaking would stop an IoT product from functioning, governments should consider including that in the scope of a default password mandate. Governments should delegate this task to the relevant cybersecurity standards agency and then embed the recommended definitional scope in legislation, regulation, and other requirements. 

Recommendation 8: Governments and industry should review and, if necessary, update their respective tiers of standards every two years. Technology changes quickly, and future efforts must ensure that security guidance keeps up with the evolving threat landscape. Further, there is a question of "moving goalposts"—once a government, for example, has success in requiring industry to meet the Tier 1 baselines, it should aim to raise the baseline even further through additional updates. Nonetheless, while standards can provide more specific guidance for organizations, governments should also consider mapping those evolving standards to a set of broader, desired security outcomes. Governments and industry should then revisit their respective tiers on that two-year cadence, initiating update processes early enough that the final updated guidance is ready for release at, or ahead of, the end of each two-year interval. Updating requirements every year with appropriate government, industry, and civil society consultation may require too much time and too many resources needed elsewhere, but without regular updates (e.g., every two years), IoT cybersecurity standards will quickly become outdated. On the international stage, standards bodies, including ETSI and ISO, should continue to adapt guidelines as technological circumstances change and new information becomes available. This process should discard standards that are outdated or ineffective (or that even contradict or undermine new security guidance), modify existing standards based on new technologies and risks, and consider adding new standards to each tier given the current rate of progress. To implement these changes into regulation, the UK's approach of empowering the DCMS secretary to define baseline security requirements—rather than "hard coding" them into legal text—provides an excellent model for replication, since law is extremely slow to change. 
However, if the appropriate agency or agencies receive the power to produce regulations and modify enforcement mechanisms within a stated scope of authority—and with appropriate government, industry, and civil society consultation—this would result in more regularly updated and thus more relevant and useful IoT security requirements. 

Recommendation 9: Governments should develop additional guidance around the sunsetting phase of the IoT product lifecycle. As illustrated in this report, many existing IoT security frameworks heavily skew towards the design, development, sale and setup, and maintenance phases of the lifecycle. Across best practice guidance, technical standards, and labeling and certification schemes, there is comparatively little IoT security focus on what happens when products are no longer receiving software security updates or must otherwise reach their end of life—and what manufacturers, vendors, and/or buyers should do to prepare for and handle that eventuality. This is a considerable oversight in the existing IoT security approaches. It also risks replicating a problem seen before with more conventional parts of the internet ecosystem, such as organizations continuing to use old products and systems long after it is no longer reasonably secure to do so (e.g., those running Windows 95). Governments should therefore develop additional guidance around the sunsetting phase, through their respective organizations designated with technical standard-setting. Producing this sunsetting guidance will take time and should not necessarily hold up the development and deployment of the minimum baseline tier of IoT security certification, but it is essential for addressing all parts of the IoT product lifecycle in a security approach. 

These recommendations provide a sensible starting point to address the economic incentive issues that sustain consumer IoT's insecurity while promoting the core policy objectives of eliminating the most glaring vulnerabilities, harmonizing requirements across jurisdictions, encouraging greater prioritization of security by manufacturers, increasing consumer awareness, and making an overdue impact without further delay. Implemented and updated continuously, these measures would help drive towards a world in which IoT product manufacturers build in better security from the start, referencing largely the same sets of baseline security standards, roughly harmonized by consensus across jurisdictions. Every other actor in the supply chain would follow: manufacturers and vendors displaying understandable cybersecurity labels on products, retailers enforcing security requirements on those manufacturers and vendors, buyers looking to labels and other security guidance, and regulators ensuring that IoT security is better implemented across the entire device lifecycle.

Measuring success

As with many cybersecurity issues, simple quantification of the problem is challenging. The discovery of a single vulnerability—whether in the product itself or in commonly used software packages—can mean that millions of IoT products are suddenly at risk. But methods to better understand and quantify IoT security risk are needed, both to better understand the nature of the problem and to measure the success of policy interventions and security standards. Several data points may prove helpful in enhancing understanding of the overall threat ecosystem presented to IoT products. 

  • Information on the number of in-scope products: One widely cited study from Transforma Insights, a market research firm, estimates that the number of active IoT products will grow to 24.1 billion by 2030, up from 7.6 billion in 2019, expanding on average 11 percent per year.122
  • Information on attacks: After coming online, on average, an IoT product is probed within five minutes by tools that scan the web for vulnerable products, and many are targeted by exploits within 24 hours. Attacks on a simulated smart home, constructed by the UK consumer group called “Which?”, reached 12,000 in a single week.123 Kaspersky, a cybersecurity firm, maintains a network of “honeypot” devices to learn more about attacks, and measured 1.5 billion IoT attacks over the first half of 2021, up from 640 million over the same period a year prior.124 Defining an “attack” can be another tricky question, with some definitions including activities that range from a relatively benign probe by a popular scanner tool to an all-out compromise of the device. It would perhaps be most fruitful to focus efforts on activities that hint at active malicious activity, such as brute-forcing attempts or attempts to employ remote code execution exploits. 
  • Information on product insecurities: Unit 42, a team of threat intelligence researchers at Palo Alto Networks, estimates that 57 percent of smart devices are susceptible to medium- or high-severity attacks, while 98 percent lack encryption in their communications, putting confidential personal information at risk.125 Default manufacturer passwords, often the same for thousands of devices, provide some of the simplest entry points in the compromise of a device. In 2017, researchers at Positive Technologies found that five login/password combinations—support/support, admin/admin, admin/0000, user/user and root/12345—granted access to 10 percent of internet-connected devices.126
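As a quick sanity check on the growth figures cited above, the compound annual growth rate implied by moving from 7.6 billion active products in 2019 to 24.1 billion in 2030 can be computed directly; it comes out to roughly 11 percent, matching the Transforma Insights estimate. A minimal sketch, using only the numbers cited in this section:

```python
# Sanity-check the Transforma Insights projection cited above:
# 7.6 billion active IoT products in 2019, 24.1 billion by 2030.
start, end, years = 7.6, 24.1, 2030 - 2019

# Implied compound annual growth rate (CAGR)
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 11%, as the study states

# Forward projection at a flat 11% per year lands close to 24.1 billion
projected = start * 1.11 ** years
print(f"2030 projection at 11%/yr: {projected:.1f} billion")
```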

Measuring the impact of labels, standards, and legislation is harder still. In the UK, DCMS published a cost-benefit analysis in parallel with the filing of the PSTI Bill. This report represents one of the more admirable efforts to quantify this risk and the potential benefits of intervention. But as the NCSC notes, analyzing the cost of intrusions specific to connected consumer products is very difficult today, as the user does not necessarily notice the attack, and the line between what is and is not an attack may be blurry from an outside observer's perspective.127 Better methods to measure the impacts of policy interventions must continue to be the subject of research. An initial—and non-exhaustive—list of these metrics may include: 

  • Percent/number of products that meet various levels of security (as defined by ETSI/NIST/other frameworks). 
  • Percent of products using default passwords. 
  • Number of products infected with Mirai and other IoT malware. 
  • Percent of products sold whose company has a vulnerability reporting contact. 
  • Average response time / patch release time for critical vulnerabilities by product. 
  • Percent/number of unpatchable products in operation. 
  • Percent/number of products no longer receiving security updates in operation. 
  • Percent of customers who say they use product security as a key buying criterion. 
  • Percent of customers who say they trust the security of their IoT products. 
  • Number of IoT product vulnerabilities with high CVSS scores publicly disclosed (the assumption being at first a deluge of reporting as researchers start to focus on these products, and with time the number of found vulnerabilities decreasing). 
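Several of these metrics reduce to simple share-of-population calculations once inventory or scan data exists. The sketch below computes three of them over a small, entirely invented set of device records (all model names and fields are hypothetical, standing in for data that would come from network scans, vendor disclosures, or registry submissions):

```python
from datetime import date

# Invented scan records for illustration only.
devices = [
    {"model": "cam-a",  "default_password": True,  "patchable": True,  "support_ends": date(2023, 1, 1)},
    {"model": "hub-b",  "default_password": False, "patchable": True,  "support_ends": date(2027, 6, 1)},
    {"model": "lock-c", "default_password": False, "patchable": False, "support_ends": date(2026, 3, 1)},
    {"model": "tv-d",   "default_password": True,  "patchable": True,  "support_ends": date(2024, 9, 1)},
]

def pct(predicate, population):
    """Share of the population satisfying the predicate, in percent."""
    return 100 * sum(predicate(d) for d in population) / len(population)

today = date(2025, 7, 1)
print(f"Using default passwords:     {pct(lambda d: d['default_password'], devices):.0f}%")
print(f"Unpatchable in operation:    {pct(lambda d: not d['patchable'], devices):.0f}%")
print(f"No longer receiving updates: {pct(lambda d: d['support_ends'] < today, devices):.0f}%")
```

The harder metrics in the list above (consumer trust, vulnerability disclosure trends) require surveys or longitudinal CVE data rather than point-in-time scans, which is part of why measurement remains an open research problem.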

What’s next for labeling 

Throughout the conversations with government and industry players, one point of worldwide consensus shines through: there is a solid appetite to adopt some sort of labeling scheme for consumer IoT devices. The benefits of such a scheme are plentiful. Collecting information on product security and making that information public offers exciting possibilities. Access to such information empowers purchasers and supports researchers and auditors in doing their work. IoT vendors have also recognized the benefits of labels from a marketing perspective, allowing them to use product security as a clearly articulated, understandable differentiator.   

While the interest in labeling is there, the logistics are still lacking. There is a slew of details that need ironing out down the road. Getting them right is important for the IoT, and, as such, labeling merits future dedicated study. Plenty of questions exist around label design: How should it look? What information should it communicate? Beyond that are the bigger questions of how the system itself should work: Who could issue labels? What information would be needed to award a label? Where would that information be kept and stored, and how could it be accessed? Many details need workable answers—and there are lots of proposed ideas to sort through—before a labeling scheme can roll out on a global scale.  

Conclusion

Inadequate security for consumer IoT products is just one of many difficult emerging technology issues that require global coordination among public and private sector actors. A range of parallel efforts exist to address wide-ranging digital challenges, such as protecting the privacy of personal data, addressing anti-competitive behavior by tech giants, and countering online misinformation. The steady march of technology means that poorly designed interventions risk irrelevance. Worse, their unintended consequences can leave the IoT more vulnerable to the very harms they were meant to prevent. 

Despite the perennially crowded global to-do list, reducing the threats from insecure consumer IoT products is overdue, attainable, and worthy of the world's attention. This report likely gives short shrift to the many benefits of consumer IoT, but fully realizing its potential requires addressing its worst failings. These deficiencies—rooted not merely in technology but, more so, in economic incentives—mean that the IoT demands better policy intervention. A litany of proposals has at last turned into momentum behind some reasonable, consensus measures. As one interviewee said, "we cannot let the perfect become the enemy of the good." 

From botnets that menace internet infrastructure to universal default passwords that allow hackers to invade user privacy, the impact on consumers is real, with risks that multiply in tandem with the number of connected devices. As Nathaniel Kim, Bruce Schneier, and Trey Herr contend, "these attacks are all the byproducts of connecting computing tech to everything, and then connecting everything to the Internet."128 Unlike traditional appliances, which tend to degrade over predictable timescales and stop working individually, "computers fail differently."129 They all work fine until, one day, the discovery of a vulnerability means every product of that particular model needs a fix. As more and more things continue to become computers, they will increasingly fail like computers. The world needs processes, norms, and global standards fit for this new reality. 

Appendix I

Country-specific implementation plans 

This section discusses tangible, high-impact next steps that the UK, Singapore, Australia, and the United States can each take to bring about the global multi-tier system for IoT security detailed in our recommendations. 

As noted earlier, this research seeks to capitalize on existing momentum, whether international or intranational. There are multiple viable paths for governments that are consistent with our vision to (1) rid the world of IoT’s most glaring vulnerabilities and (2) harmonize international efforts to make it easier for firms to manufacture and sell products with even stronger security features. This implementation plan aims to nudge their approaches towards greater consistency, as opposed to calling for dramatic about-faces. 

UK 

Tier 1. Set the Baseline of Minimally Acceptable Security: 

Of the four countries examined in this report, the UK is closest to creating a mandatory baseline for a broad range of IoT products sold in its market. The PSTI Bill, currently advancing in the House of Lords, will set minimum security requirements for manufacturers and couple them with potent enforcement mechanisms. By empowering the DCMS secretary to set these guidelines, this baseline can keep pace with technological change without the need to constantly rewrite legislation. The UK government should take the following actions: 

  • Pass the legislation. The most obvious and immediate next step is for parliament to enact the PSTI Bill. Thus far, the proposed law has made its way through the legislative process with its core provisions intact. While it does not address everything on the wish list of security advocates, it is an ambitious effort that lawmakers should approve. The House of Lords has recommended a sensible amendment that will also protect security researchers conducting legitimate vulnerability research from intimidation and lawsuits by manufacturers.130 Given that the countdown for firms to comply with the new law begins one year after the bill receives Royal Assent—and that it has already been nearly nine months since its filing—consideration of further amendments should take into account the additional time they will add to the process. 
  • Identify a regulator. While the DCMS will define the cybersecurity provisions that manufacturers must abide by, it will not be the agency that enforces them. At the time of publication, the UK government had not publicly named the regulator responsible for enforcing the baseline product requirements. In its 2021 consultation, the DCMS sought recommendations on agencies well-positioned to serve in this role. Multiple respondents highlighted Trading Standards as a natural fit given its consumer protection role under Schedule 5 of the Consumer Rights Act 2015. Another was Ofcom, the UK’s communications regulator.131 The DCMS has also consulted with the Office for Product Safety and Standards in the Department for Business, Energy, and Industrial Strategy, another consumer product safety regulator.132 This report does not have a specific recommendation as to the best-positioned agency to assume this role, but the government should announce this decision and begin to build out the key elements of its enforcement capacity. 

Tier 2. Incentivize Above the Baseline: 

Unlike the other three countries profiled in this report, the UK government has for now explicitly rejected the approach of device labeling, choosing to initially focus the bulk of its efforts on setting the first tier of a mandatory baseline. Despite the challenges with cybersecurity labels, the team views them as the best option for encouraging manufacturers to invest in greater security as well as providing consumers with accessible information. In partnership with NCSC, the DCMS should: 

  • Provide “forward guidance” on provisions that it aims to mandate next. Like a good central bank, the DCMS should provide predictability in its intended future actions while remaining flexible to change in the face of new information. While the UK plans to begin with the so-called “top three” measures in its initial list of mandatory requirements, one of the key design principles of its approach is the ability to gradually ratchet up the baseline with new provisions. Through public announcements and meetings with industry, DCMS can telegraph where regulation is headed and allow security-minded firms to bring their products into compliance before the measures become mandatory. For starters, the DCMS should look to the World Economic Forum (WEF) statement that highlights two additional ETSI principles as the logical next steps: ensure that products communicate securely and safeguard personal data.133 Other impactful measures could include a guideline requiring manufacturers to provide security updates for a minimum period consistent with the average length of time consumers use a product, which can vary by product category. The DCMS could go even further by publishing the planned effective dates of new security requirements years in advance. These provisions can change as cybersecurity threats and commercial considerations change. 
  • Study the impact of cybersecurity labels in other markets and be prepared to reevaluate if they achieve results. Thus far, research on cybersecurity labeling for smart devices remains largely limited to surveys about consumers' hypothetical willingness to pay more for products that have an indicator of greater security. Now that several countries have introduced labeling programs, observers should begin to see "real world" data on their performance, both as it relates to changing consumer behavior and in addressing the downstream ills of insecure devices. If it becomes apparent that one or more of these labeling approaches are achieving success—or gaining traction as an international standard—the UK government should remain open to adopting it in its market. 

Singapore 

Tier 1. Set the Baseline of Minimally Acceptable Security:  

While Singapore’s CLS for consumer IoT is largely voluntary, it provides the regulatory infrastructure for a program that gradually expands to establish a baseline level of security for all devices. Internet routers sold in its market already must meet the provisions of the CLS Tier 1 label, which map directly to the UK’s “top three” requirements that will be enforced with its proposed PSTI Bill. In consultation with IMDA and other partners, the CSA should: 

  • Make the Tier 1 label mandatory for more product categories. Internet routers have been a wise starting point: they have an outsize presence in today’s botnets and can have security knock-on effects that threaten consumers’ other smart home devices. Perhaps unsurprisingly, routers now account for over half of the CLS labels issued.134 The CSA should consider the next highest priority product categories that will need to meet these minimum security measures, incorporating criteria like the (lack of) maturity in the category’s cybersecurity features and the privacy risk to individuals if compromised. IP cameras, connected baby toys, and smart locks are strong candidates. 
  • Add to the security provisions required as part of the Tier 1 label, especially those related to secure development practices. CLS includes 76 security provisions, with roughly half required by one or more of its tiers, while the others are merely recommended. The first tier currently has 13 required provisions. Tier 2, which primarily concerns product lifecycle and secure development practices, has 17 required provisions—eight drawn from ETSI EN 303 645 and nine from the IMDA’s IoT Cyber Security Guide. Over time, the CSA should aim to collapse the most impactful Level 2 requirements into Level 1, while removing those not seen as value-added. Alternatively, the CSA could keep the same provisions in each CLS level and gradually require that devices meet the second level. Since both CLS Levels 1 and 2 rely on manufacturer self-attestation, these changes should not require any operational changes in administering the program. 

Tier 2. Incentivize Above the Baseline: 

CLS has seen dramatic growth since the beginning of 2022, with the number of labels issued tripling during that timeframe. But the gains are not evenly distributed: of the 176 labels issued by CSA as of July 2022, 148 are at the Level 1 designation, an additional 16 are at Level 2, and 10 are for Level 4.135 As mentioned earlier, many of the recipients of labels are internet routers, where the Level 1 label is mandatory. A key selling point of its multi-tier system is the ability to provide manufacturers with a reason to go above and beyond the bare minimum. To this end, CSA should: 

  • Conduct a review of the program’s effectiveness in addressing the core problems associated with IoT insecurity and publish the findings. As the country with the most mature cybersecurity labeling program, Singapore is in a unique position to gather information on the successes and challenges of this regulatory approach. How have consumers adapted their purchasing behavior since its launch? Has the number of insecure devices sold in Singapore decreased? What have been the challenges for firms? Have there been impacts beyond Singapore’s borders? This review could also help improve the structure of the program. For example, it might review the fitness of the CLS tier structure. The inclusion of more levels makes sense if it adds to the range of choice for consumers and manufacturers to select the appropriate certification level that meets their needs. If no one selects it—currently the case for CLS Level 3—it is possible to simplify the scheme. The report’s “Measuring Success” section includes some example metrics that could help gauge a topic that is notoriously difficult to quantify. The results will be helpful for Singapore, but just as critically, for the large number of countries and industry bodies that are experimenting with cybersecurity labels for IoT products. 
  • Pursue an agreement with Germany for mutual recognition of cybersecurity labels. Finland and Singapore’s agreement shows that binary and multi-tier labeling approaches need not conflict. Germany, which recently launched its own binary label in January 2022, should also join the bilateral agreement between Singapore and Finland for mutual recognition. All three countries draw largely from the same list of ETSI EN 303 645 security provisions. Partnering with a market of Germany’s size will add significant momentum for Singapore’s approach to securing IoT, while reducing the burden of duplicate testing and certification for firms. This approach should be pursued for any country that adopts an IoT labeling program found to be largely compatible with the existing Singaporean program.  
  • Consider measures to encourage broader adoption of the labeling scheme. Anecdotal evidence suggests that many security-minded firms have been eager to participate in the program, but the CSA should continue to search for ways to increase its attractiveness. While the program will eventually need to generate revenue to cover its costs, CSA could prolong the moratorium on application fees, or even subsidize testing for devices at higher levels of security. 

Australia 

Tier 1. Set the Baseline of Minimally Acceptable Security: 

Since the conclusion of its Call for Views in August 2021, Australia’s DHA has been relatively quiet in public on its path forward for the regulation of consumer IoT. Whatever its ultimate action, it is evident that Australia aims to take a more hands-on approach than its past voluntary measures. To establish this minimum baseline, the DHA should: 

  • Select a regulatory approach for mandating basic security requirements for devices sold in its market. Australia has multiple approaches at its disposal and should continue to study the benefits and drawbacks of programs in the UK, Singapore, and elsewhere. The options it is most seriously considering are either a mirror image of the UK’s minimum security standards or a four-level “graded shield” that appears very similar to Singapore’s CLS. Australia’s voluntary Code of Practice, which aligns with ETSI EN 303 645, should provide a strong foundation that will have prepared Australian businesses for more stringent enforcement. 
  • If pursuing a minimum security standard, align its approach with the PSTI Bill’s planned enforcement measures. At a minimum, these measures should include the “top three,” banning universal default passwords and mandating vulnerability reporting contacts and transparency on security updates. Preferably, it would also include additional provisions on securing personal data, encrypted communications, and minimum acceptable support periods for security updates. Currently, Australian Consumer Law does not require firms to adhere to any principles meant to reduce cyber risk, “only that they cannot make misleading or deceptive representations about the cyber security of their products.”136 This baseline could be achieved either through a new law, modeled on the UK’s PSTI Bill, or an expansion of Australia’s existing Consumer Law to incorporate protections against the most basic flaws in cybersecurity in its definition of “acceptable quality” and “fit for purpose.”137
  • If pursuing a multi-level labeling approach, follow a strategy of gradual mandates by product category. Given that it seems most drawn to a multi-tier label mirroring CLS, the clearest path for Australia is to follow Singapore’s strategy and gradually mandate a tier 1 label by product type, beginning with high-priority items like internet routers. The labeling scheme should include a broad definition of in-scope products, drawing from ETSI’s definition of smart devices. In addition to expanding mandates by product category, DHA can also raise the baseline over time by advancing along the other “axis” of incorporating more security provisions from higher security levels into its base tier. 
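The “top three” baseline named above is concrete enough to sketch as a pre-market compliance check. The following Python is an illustrative sketch only, not drawn from any regulator’s tooling or from ETSI EN 303 645 itself; the `DeviceProfile` fields and `baseline_findings` function are invented for this example.

```python
# Hypothetical sketch: check a device profile against the "top three"
# baseline provisions discussed above -- no universal default passwords,
# a published vulnerability reporting contact, and transparency on the
# security-update support period. All names here are illustrative.

from dataclasses import dataclass
from typing import Optional

@dataclass
class DeviceProfile:
    default_password: Optional[str]      # factory password shared across units, if any
    vuln_contact: Optional[str]          # published vulnerability disclosure contact
    update_support_until: Optional[str]  # declared end date for security updates

def baseline_findings(profile: DeviceProfile) -> list:
    """Return the list of 'top three' provisions the device fails."""
    findings = []
    if profile.default_password is not None:
        findings.append("universal default password present")
    if not profile.vuln_contact:
        findings.append("no vulnerability reporting contact")
    if not profile.update_support_until:
        findings.append("no declared security-update support period")
    return findings

router = DeviceProfile(default_password="admin",
                       vuln_contact=None,
                       update_support_until="2026-01-01")
print(baseline_findings(router))
# prints two findings: a default password and a missing disclosure contact
```

A real conformance regime would of course test far more than declared metadata, but even this shape of check illustrates why the “top three” are attractive as a mandatory floor: each is cheap to verify and unambiguous to fail.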

Tier 2. Incentivize Above the Baseline: 

The approach for incentivizing security measures above the baseline in Australia’s smart device market is closely tied to its method for enforcing that baseline. As the DHA notes, these measures need not be mutually exclusive. To promote a higher tier of security, it should:

  • Select a cybersecurity labeling approach. A study conducted by the Behavioral Economics Team of the Australian Government compared the effects of multiple label options on consumers, finding that “participants were more likely to choose a device with a cyber security label than one without a label, by 13–19 percentage points.”138 While the graded shield was most impactful, it found that “expiry labels were still effective” and “a high security level or long expiry date increased the likelihood of choosing a device.”139 Each of these options appears likely to have its own benefits and drawbacks, but it is time to choose one and move forward with it. 
  • If pursuing an expiry-date label, study its effect and publish the findings. If it follows through on this proposal, Australia would be the first country to introduce a label indicating how long the manufacturer will provide security updates for the product. Studying this approach can help answer several questions about the impact of cybersecurity labels, particularly around the sunsetting phase. For example, are consumers incentivized to purchase discounted devices that are about to go “off warranty”? As stated earlier, there is nothing wrong with national-level experimentation, as it can be beneficial in formulating new approaches that may be suitable for broader adoption.
  • If pursuing a “graded shield” label, agree to mutual recognition with Singapore and other participating countries. The four-level labeling scheme that Australia appears likely to pursue bears many similarities with Singapore’s CLS. In this case, the two countries should aim to bring their programs into close harmony, including the definitions of in-scope devices, the security provisions included in each tier, and the processes for self-attestation and third-party testing. Over time, the DHA should work with the CSA to ensure that the programs evolve together with consistency. Australia should then join the bilateral agreement with Finland for mutual label recognition, as well as a proposed agreement with Germany. 

United States 

Tier 1. Set the Baseline of Minimally Acceptable Security: 

In comparison to other jurisdictions, the United States has preferred a less interventionist approach. There are two main exceptions: the two states that have enacted legislation imposing minimum security standards on IoT products, and the IoT Cybersecurity Improvement Act of 2020, which requires federal agencies to procure only devices that meet NIST security guidelines. In this context, the team recommends:

  • States should pass and enforce their own IoT security laws. California and Oregon led the way, but their laws should be expanded to offer more specific guidance for organizations and manufacturers less versed in cybersecurity, rather than relying on broad concepts like “reasonable security.” Ideally, they will do so in a way that does not lock specific security measures into legal text but instead points toward a regulatory mechanism that updates standards more easily, such as the UK’s approach of empowering an agency to maintain these standards, or the guidelines NIST sets for federal government agencies. More states should follow in their footsteps, putting forth IoT security laws that incorporate the standards outlined by the US government, as well as considering standards established by others around the world. The states that have implemented these laws should also study their impact. It is not apparent that any enforcement actions have yet occurred, which points to one of two scenarios: either all devices sold in their markets are now compliant, or enforcement has been insufficient. The latter seems more likely than the former.
  • The federal government should adopt the binary labeling approach proposed by NIST. In NIST’s February 2022 publication “Recommended Criteria for Cybersecurity Labeling for Consumer Internet of Things (IoT) Products,” the organization recommends pursuing a binary labeling approach.140 In this scenario, there would exist a single label stating that a product has met baseline security standards. Implementing the binary label would be a first step towards goals such as defining minimum security standards, creating and implementing a labeling program, and starting to broadcast to consumers what they should be looking for when purchasing IoT products. Among other details, this will require identifying an owner for the program, and the FTC would be the strongest candidate. 

Tier 2. Incentivize Above the Baseline: 

President Biden’s 2021 Executive Order 14028 (Improving the Nation’s Cybersecurity) directed NIST to design a labeling program for IoT devices, which should also serve as a mechanism to encourage the adoption of security measures that exceed the minimum baseline. The program’s ultimate owner should: 

  • Provide incentives for industry to obtain labels. The US may look to Singapore and other countries with labeling programs to see how companies have been encouraged to participate and to reach for higher tiers. Fee waivers for label applications may be a good way of incentivizing participation during the first few years of the program. Industry would likely react positively to some form of compensation for the third-party testing required to earn a higher label.
  • Provide liability protection for firms that pursue the higher, tier 2 security standards. Experts have indicated that many players in industry would be incentivized to adopt higher security standards in exchange for liability protections. Various types of liability protection could be considered here, and this report leaves that determination to the regulatory body. Such protection might be implemented through a law passed by Congress outlining the protections, or alternatively through a publicly articulated approach by the FTC.

Authors and acknowledgements

Patrick Mitchell is a consultant with the Atlantic Council’s Cyber Statecraft Initiative. He recently graduated from the Master in Public Policy program at Harvard University’s John F. Kennedy School of Government, where he studied issues at the intersection of emerging technology and global affairs, including a second-year thesis on international efforts to improve IoT security. Prior to this, he interned with the UN Secretary-General’s Office and worked for several years as a consultant with Accenture, where he supported federal, state, and local government agencies on projects related to technology strategy and digital transformation. He also holds a B.S. in Management from Boston College.

Justin Sherman is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative, where his work focuses on the geopolitics, governance, and security of the global Internet. He is also a senior fellow at Duke University’s Sanford School of Public Policy and a contributor at WIRED Magazine.

Liv Rowley is a former assistant director with the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). Prior to joining the Atlantic Council, Liv worked as a threat intelligence analyst in both the US and Europe. Much of her research has focused on threats originating from the cybercriminal underground as well as the Latin American cybercriminal space. Liv holds a BA in International Relations from Tufts University. She is based in Barcelona, Spain.

Acknowledgments: The authors thank Kat Megas, James Deacon, and Rob Spiger for their comments on earlier drafts of this document and Trey Herr and Bruce Schneier for support. Thanks to Nancy Messieh for her support with data visualization. The authors also thank all the participants, who shall remain anonymous, in multiple Chatham House Rule discussions about the issues.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

 

1    Knud Lasse Lueth, “State of the IoT 2020: 12 Billion IoT Connections, Surpassing Non-IoT for the First Time,” IoT-Analytics.com, November 19, 2020, https://iot-analytics.com/state-of-the-iot-2020-12-billion-iot-connections-surpassing-non-iot-for-the-first-time/.
2    Keumars Afifi-Sabet, “Critical Supply Chain Flaw Exposes IoT Cameras to Cyber Attack,” IT Pro, June 16, 2021, https://www.itpro.com/security/vulnerability/359899/critical-supply-chain-flaw-exposes-iot-cameras-to-cyber-attack.
3    “Consumer IoT Security Quick Guide: No Universal Default Passwords,” IoT Security Foundation, 2020, https://www.iotsecurityfoundation.org/wp-content/uploads/2020/08/IoTSF-Passwords-QG_FINAL.pdf.
4    Max Eddy, “Majority of IoT Traffic on Corporate Networks Is Insecure, Report Finds,” PCMag, February 26, 2020, https://www.pcmag.com/news/majority-of-iot-traffic-on-corporate-networks-is-insecure-report-finds.
5    Xu Zou, “IoT Devices Are Hard to Patch: Here’s Why—and How to Deal with Security,” TechBeacon, accessed August 17, 2022, https://techbeacon.com/security/iot-devices-are-hard-patch-heres-why-how-deal-security.
6    Gareth Corfield, “Research Finds Consumer-grade IoT Devices Showing up … On Corporate Networks,” The Register, October 21, 2021, https://www.theregister.com/2021/10/21/iot_devices_corporate_networks_security_warning/.
7    Graham Cluley, “These 60 Dumb Passwords Can Hijack over 500,000 IoT Devices into the Mirai Botnet,” Graham Cluley, October 10, 2016, https://grahamcluley.com/mirai-botnet-password/.
8    Manos Antonakakis et al., “Understanding the Mirai Botnet,” USENIX 26, August 2017, https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-antonakakis.pdf, 1093, 1098.
9    Antonakakis et al., “Understanding the Mirai Botnet,” 1105.
10    Antonakakis et al., “Understanding the Mirai Botnet,” 1105.
11    “Over 200,000 MikroTik Routers Compromised in Cryptojacking Campaign,” Trend Micro, August 03, 2018, https://www.trendmicro.com/vinfo/in/security/news/cybercrime-and-digital-threats/over-200-000-mikrotik-routers-compromised-in-cryptojacking-campaign.
12    “Fronton: A Botnet for Creation, Command, and Control of Coordinated Inauthentic Behavior,” Nisos (blog), May 19, 2022, https://www.nisos.com/blog/fronton-botnet-report/.
13    Donna Lu, “How Abusers Are Exploiting Smart Home Devices,” Vice, October 17, 2019,  https://www.vice.com/en/article/d3akpk/smart-home-technology-stalking-harassment.
14    Stephen Hilt et al., “The Internet of Things in the Cybercrime Underground,” Trend Micro, September 10, 2019, https://documents.trendmicro.com/assets/white_papers/wp-the-internet-of-things-in-the-cybercrime-underground.pdf.
15    Pascal Geenens, “IoT Hackers Trick Brazilian Bank Customers into Providing Sensitive Information,” Radware (blog), August 10, 2018, https://blog.radware.com/security/2018/08/iot-hackers-trick-brazilian-bank-customers/.
16    ETSI EN 303 645 – “Cyber Security for Consumer Internet of Things: Baseline Requirements,” European Telecommunications Standards Institute (ETSI), (Sophia Antipolis Cedex, France: June 2020), 10, https://www.etsi.org/deliver/etsi_en/303600_303699/303645/02.01.00_30/en_303645v020100v.pdf.
17    “Internet of Things (IoT),” National Institute of Standards and Technology (NIST), accessed August 17, 2022, https://csrc.nist.gov/glossary/term/internet_of_things_IoT; Mehwish Akram, et al., “NIST Special Publication 1800-16: Securing Web Transactions,” NIST, June 2020,  https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1800-16.pdf
18    Apple Developer, “Developing apps and accessories for the home,” Apple, accessed August 25, 2022, https://developer.apple.com/apple-home/
19    “All Smart Home Products,” Resideo, accessed August 25, 2022, https://www.resideo.com/us/en/products/; “Resideo Pro,” Residio, accessed August 25, 2022, https://www.resideo.com/us/en/pro/
20    “Philips Hue, Smart Home Lighting Made Brilliant,” Philips, accessed August 25, 2022, https://www.philips-hue.com/en-sg; “Ring Video Doorbell,” Wink, accessed August 25, 2022, https://www.wink.com/products/ring-video-doorbell/.  
21    “Device Management,” Tuya, accessed August 25, 2022, https://www.tuya.com/product/device-management/device-management
22    Google Nest Help, ”Explore what you can do with Google Nest or Home devices,” Google, accessed August 25, 2022, https://support.google.com/googlenest/answer/7130274?hl=en; “Alexa Guard Plus,” Amazon, accessed August 25, 2022, https://www.amazon.com/b?ie=UTF8&node=18021383011
23    Amazon Web Services, “Security challenges and focus areas,” Amazon, accessed August 25, 2022, https://docs.aws.amazon.com/whitepapers/latest/securing-iot-with-aws/security-challenges-and-focus-areas.html; Dave McMillen, “Internet of Threats: IoT Botnets Drive Surge in Network Attacks,” Security Intelligence, April 22, 2021, https://securityintelligence.com/posts/internet-of-threats-iot-botnets-network-attacks/
24    “Outdoor and Industrial Wireless,” Cisco, accessed August 25, 2022, https://www.cisco.com/c/en/us/products/wireless/outdoor-wireless/index.html
25    “Defender Adapter,” Extreme Networks (data sheet), accessed August 25, 2022, https://cloud.kapostcontent.net/pub/679cf2be-16da-4b6c-91ed-7d504b47a5f1/defender-adapter-data-sheet
26    “Cognitive Campus Workspaces,” Arista, accessed August 25, 2022, https://www.arista.com/en/solutions/cognitive-campus.
27    “Maternal and Fetal Monitoring Systems,” Philips, accessed August 25, 2022, https://www.usa.philips.com/healthcare/solutions/mother-and-child-care/fetal-maternal-monitoring;   “Expression MR400,” Philips, accessed August 25, 2022, https://www.usa.philips.com/healthcare/product/HC866185/expression-mr400-mr-patient-monitor;  “Wearable Patient Monitoring Systems,” Philips, accessed August 25, 2022, https://www.usa.philips.com/healthcare/solutions/patient-monitoring/patient-worn-monitoring.
28    “Guardian Connect Continuous Glucose Monitoring,” Medtronic, accessed August 25, 2022, https://www.medtronicdiabetes.com/products/guardian-connect-continuous-glucose-monitoring-system
29    “Healthcare Sensing,” Honeywell, accessed August 25, 2022, https://sps.honeywell.com/us/en/products/advanced-sensing-technologies/healthcare-sensing
30    “Choose Your Country or Region,” Dexcom, accessed August 25, 2022, https://www.dexcom.com/global; “Sleep Apnea – Causes, Symptoms and Treatment,” Resmed, accessed August 25, 2022, https://www.resmed.com/en-us/sleep-apnea/.
31    Patrick Mitchell, “International Cooperation to Secure the Consumer Internet of Things,” (Cambridge: Harvard Kennedy School, April 5, 2022), 14. 
32    “Code of Practice for Consumer IoT Security,” United Kingdom Department for Digital, Culture, Media & Sport (DCMS) 2018, https://www.gov.uk/government/publications/code-of-practice-for-consumer-iot-security/code-of-practice-for-consumer-iot-security.
33    DCMS, “Code of Practice.”
34    PAE interview, United Kingdom National Cyber Security Centre (NCSC), Spring 2022.
35    Sophia Antipolis, “ETSI Releases World-leading Consumer IOT Security Standard,” news release, European Telecommunication Standards Institute (ETSI), June 30, 2020, https://www.etsi.org/newsroom/press-releases/1789-2020-06-etsi-releases-world-leading-consumer-iot-security-standard.
36    “The Product Security and Telecommunications Infrastructure (PSTI) Bill – Product Security Factsheet,” United Kingdom Department for Digital, Culture, Media & Sport (DCMS), 2021, https://www.gov.uk/guidance/the-product-security-and-telecommunications-infrastructure-psti-bill-product-security-factsheet; “Product Security and Telecommunications Infrastructure Bill Explanatory Notes,” UK Parliament, accessed August 17, 2022, https://publications.parliament.uk/pa/bills/cbill/58-02/0199/en/210199en.pdf.
37    DCMS, “PSTI Product Fact Sheet.”
38    James Coker, “UK Introduces New Cybersecurity Legislation for IoT Devices,” Info Security, November 24, 2021, https://www.infosecurity-magazine.com/news/uk-cybersecurity-legislation-iot/.
39    “Regulation of Consumer Connectable Product Cyber Security,” RPC-DCMS-4353(2), United Kingdom Department for Digital, Culture, Media & Sport (DCMS), 2021, https://bills.parliament.uk/publications/43916/documents/1025.
40    Cybersecurity Labelling Scheme (CLS) Updates, Singapore Cyber Security Agency (CSA), 2021, https://www.csa.gov.sg/Programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme/updates.
41    Singapore Standards Council, “Technical Reference 91 – Cybersecurity Labelling for Consumer IoT,” Enterprise Singapore, 2021, https://www.singaporestandardseshop.sg/Product/SSPdtDetail/41f0e637-22d6-4d05-9de3-c92a53341fe5
42    Singapore Standards Council, “Technical Reference 91 – Cybersecurity Labelling.” 
43    Cybersecurity Labelling Scheme (CLS) Product List, Cyber Security Agency (CSA), 2022, https://www.csa.gov.sg/Programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme/product-list.
44    Senate Bill No. 327, Chapter 886, California Legislative Information, 2018, https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201720180SB327.
45    House Bill 2395, Chapter 193, Oregon State Legislature, 2019, https://olis.oregonlegislature.gov/liz/2019R1/Measures/Overview/HB2395. 
46    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020).
47    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020) at §4(a)(1).
48    Deborah George. “New Federal Law Alert: The Internet of Things (IoT) Cybersecurity Improvement Act of 2020 – IoT Security for Federal Government-Owned Device,” National Law Review, December 10, 2020, https://www.natlawreview.com/article/new-federal-law-alert-internet-things-iot-cybersecurity-improvement-act-2020-iot.
49    H.R. 1668 Rep. No. 116-501, Part I (2020), (Proclaiming the purpose of the IoT Cybersecurity Improvement Act of 2020 bill as “to leverage Federal Government procurement power to encourage increased cybersecurity for Internet of Things devices…”), https://www.congress.gov/bill/116th-congress/house-bill/1668/text/rh.
50    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020) at §4(a)(1) & (2)(B)(i)-(iv).
51    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020) at §4(a)(1) & (2)(B)(i)-(iv).
52    IoT Cybersecurity Improvement Act of 2020, Pub. L. No. 116-207 (2020) at §4(c)(1)(A)-(B).
53    President Biden,“Executive Order 14028 on Improving the Nation’s Cybersecurity,” The White House, May 12, 2021, https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/.
54    “IoT Product Criteria,” National Institute of Standards and Technology (NIST), May 24, 2022, https://www.nist.gov/itl/executive-order-14028-improving-nations-cybersecurity/iot-product-criteria. 
55    “NIST Developed an IoT Label. How Do We Get It onto Shelves?” New America, March 1, 2022. https://www.youtube.com/watch?v=ZwDFb3DEkMw.
56    “Voluntary Code of Practice: Securing the Internet of Things for Consumers,” Australian Department of Home Affairs (DHA), [updated March 22, 2022], https://www.homeaffairs.gov.au/reports-and-publications/submissions-and-discussion-papers/code-of-practice. 
57    “Strengthening Australia’s cyber security regulations and incentives,” Australian Department of Home Affairs (DHA), [updated March 22, 2022], https://www.homeaffairs.gov.au/reports-and-publications/submissions-and-discussion-papers/cyber-security-regulations-incentives.
58    “Get ioXt Certified,” ioXt, accessed August 17, 2022, https://www.ioxtalliance.org/get-ioxt-certified.
59    “Authorized Labs,” ioXt, accessed August 17, 2022, https://www.ioxtalliance.org/authorized-labs.
60    “Certifying Your Product,” ioXt, accessed August 17, 2022, https://www.ioxtalliance.org/certifying-your-device 
61    “The Global Standard for IoT Security,” ioXt, accessed August 17, 2022, https://www.ioxtalliance.org.
62    “ioXt Alliance Closes Record Year of Membership Growth and Certifications,” Businesswire, January 19, 2022, https://www.businesswire.com/news/home/20220119005139/en/ioXt-Alliance-Closes-Record-Year-of-Membership-Growth-and-Certifications.
63    “IoT Security Assurance Framework,” IoT Security Foundation, November 2021, https://www.iotsecurityfoundation.org/wp-content/uploads/2021/11/IoTSF-IoT-Security-Assurance-Framework-Release-3.0-Nov-2021-1.pdf.
64    “IoT Security Foundation Members,” IoT Security Foundation, accessed August 17, 2022, https://www.iotsecurityfoundation.org/our-members/.
65    IoT Security Foundation, “IoT Security Assurance Framework.”
66    IoT Security Foundation, “IoT Security Foundation Members”; “Eurofins Digital Testing Your Trusted Partner in Quality,” Eurofins, accessed August 17, 2022, https://www.eurofins-digitaltesting.com
67    “OWASP IoT Security Verification Standard,” Open Web Application Security Project® (OWASP), accessed August 17, 2022, https://owasp.org/www-project-iot-security-verification-standard/; “IoT Security Verification Standard (ISVS),” GitHub, accessed August 17, 2022, https://github.com/OWASP/IoT-Security-Verification-Standard-ISVS.
68    “IoT Security Verification Standard (ISVS),” GitHub, accessed August 17, 2022, https://github.com/OWASP/IoT-Security-Verification-Standard-ISVS.
69    “About the OWASP Foundation,” Open Web Application Security Project® (OWASP), accessed August 17, 2022, https://owasp.org/about/.
70    GSM Association, “GSMA IoT Security Guidelines and Assessment,” Groupe Speciale Mobile, or GSMA, accessed August, 4, 2022, https://www.gsma.com/iot/iot-security/iot-security-guidelines/.
71    GSM Association, “IoT Security Assessment Checklist,” Groupe Speciale Mobile, or GSMA, accessed August 4, 2022, https://www.gsma.com/iot/iot-security-assessment/
73    CTA, “IoT Working Group,” Consumer Technology Association. 
74    CTA, “Cybersecurity Labeling, Conformity Assessment and Self-Attestation (CTA),” Consumer Technology Association, accessed September 22, 2022, https://www.nist.gov/system/files/documents/2021/09/03/CTA%20Position%20Paper%20on%20Cybersecurity%20Label%20Considerations%20Final.pdf.
75    CTA, “Member Directory,” Consumer Technology Association, accessed September 22, 2022, https://members.cta.tech/cta-member-directory?_ga=2.13576244.208474513.1663814734-503620203.1663814734&reload=timezone.
76    Connectivity Standards Alliance, accessed September 22, 2022, https://csa-iot.org/.
77    CSA, “Community, The Power of Membership,” Connectivity Standards Alliance, accessed September 22, 2022, https://csa-iot.org/members/.
78    “Device security,” Google Cloud, accessed August 17, 2022, https://cloud.google.com/iot/docs/concepts/device-security.
79    “Azure Certified Device – Edge Secured-core,” Microsoft, August 11, 2022, https://docs.microsoft.com/en-us/azure/certification/program-requirements-edge-secured-core?pivots=platform-linux.
80    “Architecture Security Features,” Arm, accessed August 17, 2022, https://developer.arm.com/architectures/architecture-security-features/platform-security.
81    To the reader: For instance, the ioXt Alliance has clear requirements and is clear about its desired means of improving IoT cybersecurity—“multi-stakeholder, international, harmonized, and standardized security and privacy requirements, product compliance programs, and public transparency of those requirements and programs”—but is not clear about its policy goals beyond general references to improving IoT cybersecurity, see: https://www.ioxtalliance.org/about-ioxt
82    Mitchell, “International Cooperation to Secure the Consumer Internet of Things,” 21. 
83    “International IoT Security Initiative,” Global Forum on Cyber Expertise (GFCE), accessed April 6, 2022, https://thegfce.org/initiatives/international-iot-security-initiative/
84    Harnessing the Internet of Things for Global Development, (Geneva: International Telecommunication Union, 2015), 7, https://www.itu.int/en/action/broadband/Documents/Harnessing-IoT-Global-Development.pdf.
85    Robert Morgus, Securing Digital Dividends: Mainstreaming Cybersecurity in International Development (Washington, D.C.: New America, April 2018), 38, https://www.newamerica.org/cybersecurity-initiative/reports/securing-digital-dividends/.
86    Nima Agah, “Segmenting Networks and De-segmenting Laws: Synthesizing Domestic Internet of Things Cybersecurity Regulation,” (Durham, NC: Duke University School of Law, 2022), 8–12.
87    Agah, “Segmenting Networks and De-segmenting Laws,” 8–12.
88    Efrat Daskal, “Establishing standards for IoT devices: Recent examples,” Diplo (blog), December 16, 2020, https://www.diplomacy.edu/blog/establishing-standards-for-iot-devices-recent-examples/.
89    DHA, “Strengthening Australia’s Cyber Security Regulations and Incentives.”
90    US National Institute of Standards and Technology, “Cybersecurity “Rosetta Stone” Celebrates Two Years of Success,” National Institute of Standards and Technology, accessed September 22, 2022, https://www.nist.gov/news-events/news/2016/02/cybersecurity-rosetta-stone-celebrates-two-years-success.
91    US National Institute of Standards and Technology, “Cybersecurity “Rosetta Stone” Celebrates Two Years of Success.”
92    Danielle Kriz, “Governments Must Promote Network-Level IoT Security at Scale,” Palo Alto Networks, December 8, 2021, https://www.paloaltonetworks.com/blog/2021/12/network-level-iot-security/.
93    David Hoffman, Interview with report author, April 6, 2022.
94    Cyber Security Agency, “CSA | Cybersecurity Labelling Scheme – For Manufacturers,” Accessed September 22, 2022, https://www.csa.gov.sg/Programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme/for-manufacturers.
95    The Trust Opportunity: Exploring Consumer Attitudes to the Internet of Things, Internet Society and Consumers International, May 1, 2019, https://www.internetsociety.org/resources/doc/2019/trust-opportunity-exploring-consumer-attitudes-to-iot/.
96    Internet Society and Consumers International, The Trust Opportunity.
97    To the reader, it is important to note that users of IoT products must also play a role in ensuring device security. For instance, it is not enough for vendors to make patches; consumers must be sure to apply said patches.
98    Ron Ross, Michael McEvilley, and Janet Oren, “Systems Security Engineering: Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems,” National Institute of Standards and Technology, March 21, 2018, https://doi.org/10.6028/NIST.SP.800-160v1.
99    Department for Digital, Culture, Media & Sport, “The Product Security and Telecommunications Infrastructure (PSTI) Bill – product security factsheet.”
100    Department for Digital, Culture, Media & Sport, “The Product Security and Telecommunications Infrastructure (PSTI) Bill – product security factsheet.”
101    ETSI, “Cyber; Cyber Security for Consumer Internet of Things: Baseline Requirements,” European Telecommunications Standards Institute, accessed September 22, 2022, https://www.etsi.org/deliver/etsi_en/303600_303699/303645/02.01.01_60/en_303645v020101p.pdf. 
102    “Code of Practice: Securing the Internet of Things for Consumers,” the Australian Government, accessed September 22, 2022, https://www.homeaffairs.gov.au/reports-and-pubs/files/code-of-practice.pdf.
103    CSA Singapore, “Cybersecurity Labelling Scheme (CLS),” Cyber Security Agency Singapore, accessed September 22, 2022, https://www.csa.gov.sg/Programmes/certification-and-labelling-schemes/cybersecurity-labelling-scheme/about-cls.
104    IoT Cybersecurity Improvement Act of 2020, H.R.1668, 116th Cong. (2020). https://www.congress.gov/bill/116th-congress/house-bill/1668.
105    IoT Cybersecurity Improvement Act of 2020
106    The White House Briefing Room, “Executive Order on Improving the Nation’s Cybersecurity,” The White House, accessed September 22, 2022, https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/.
107    US National Institute of Standards and Technology. Foundational Cybersecurity Activities for IoT Device Manufacturers. NISTIR 8259. Michael Fagan et al. Gaithersburg: National Institute of Standards and Technology, May 2020. https://csrc.nist.gov/publications/detail/nistir/8259/final. 
108    US National Institute of Standards and Technology. Foundational Cybersecurity Activities for IoT Device Manufacturers. NISTIR 8259A. Michael Fagan et al. Gaithersburg: National Institute of Standards and Technology, May 2020. https://csrc.nist.gov/publications/detail/nistir/8259a/final.
109    Michael Fagan et al., “Profile of the IoT Core Baseline for Consumer IoT Products,” National Institute of Standards and Technology (NIST), June 17, 2022, https://csrc.nist.gov/publications/detail/nistir/8425/draft.
110    Michael Fagan et al., “Profile of the IoT Core Baseline for Consumer IoT Products.”
111    IoT Security Foundation, “IoT Security Assurance Framework.”
112    Robert Lemos, “New IoT Security Bill: Third Time’s the Charm?” Dark Reading, March 2019. https://www.darkreading.com/iot/new-iot-security-bill-third-time-s-the-charm-.
113    Brian Russell and Drew van Duren. Practical Internet of Things Security – Second Edition, Packt Publishing, (Birmingham, UK: 2018).
114    Eustace Asanghanwa, “Solving IoT device security at scale through standards,” Microsoft (blog), September 21, 2020, https://techcommunity.microsoft.com/t5/internet-of-things-blog/solving-iot-device-security-at-scale-through-standards/ba-p/1686066.
115    To the reader, this is not to say that organizations should always use passwords as the go-to authentication mechanism in the future—but that if organizations are doing so now, they should not use universal default ones.
116    GCA Internet Integrity Papers: IoT Policy and Attack Report, Global Cyber Alliance (GCA), October 2021, https://www.globalcyberalliance.org/wp-content/uploads/IoT-Policy-and-Attack-Report_FINAL.pdf.
118    Andrew Laughlin, How a smart home could be at risk from hackers, Which? 2021, https://www.which.co.uk/news/article/how-the-smart-home-could-be-at-risk-from-hackers-akeR18s9eBHU.
119    “Labeling for Consumer Internet of Things (IoT) Products,” National Institute of Standards and Technology (NIST), February 2022. https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.02042022-2.pdf. 
120    ETSI EN 303 645 – “Cyber Security for Consumer Internet of Things: Baseline Requirements.”
121    ETSI EN 303 645 – “Cyber Security for Consumer Internet of Things: Baseline Requirements.”
122    “Global IoT market to grow to 24.1 billion devices in 2030, generating $1.5 trillion annual revenue,” Transforma Insights, May 19, 2020, https://transformainsights.com/news/iot-market-24-billion-usd15-trillion-revenue-2030. 
123    “How a smart home could be at risk from hackers,” Which?, July 2, 2021, https://www.which.co.uk/news/article/how-the-smart-home-could-be-at-risk-from-hackers-akeR18s9eBHU.
124    “Kaspersky Detects 1.5B IoT Cyberattacks This Year,” PYMNTS, September 3, 2021, https://www.pymnts.com/news/security-and-risk/2021/kaspersky-detects-iot-cyberattacks-double-last-year/.
125    “2020 Unit 42 IoT Threat Report,” Unit 42, March 10, 2020, https://unit42.paloaltonetworks.com/iot-threat-report-2020/.
126    Catalin Cimpanu, “15% of All IoT Device Owners Don’t Change Default Passwords,” BleepingComputer, June 19, 2017, https://www.bleepingcomputer.com/news/security/15-percent-of-all-iot-device-owners-dont-change-default-passwords/.
127    DCMS, “Regulation of Consumer Connectable Product Cyber Security,” RPC-DCMS-4353(2).
128    Nathaniel Kim, Trey Herr, and Bruce Schneier, The Reverse Cascade: Enforcing Security on the Global IoT Supply Chain, The Atlantic Council, June 2020, https://www.atlanticcouncil.org/in-depth-research-reports/report/the-reverse-cascade-enforcing-security-on-the-global-iot-supply-chain/
129    Bruce Schneier, “Security in a World of Physically Capable Computers, Schneir (blog), October 12, 2018. https://www.schneier.com/blog/archives/2018/10/security_in_a_w.html.
130    Alex Scroxton, “Lords Move to Protect Cyber Researchers from Prosecution, Computer Weekly, June 2022, https://www.computerweekly.com/news/252521716/Lords-move-to-protect-cyber-researchers-from-prosecution
131    “Government Response to the Regulatory Proposals for Consumer Internet of Things (IoT) Security Consultation.” United Kingdom Department for Digital, Culture, Media & Sport (DCMS), February 2020, https://www.gov.uk/government/consultations/consultation-on-regulatory-proposals-on-consumer-iot-security/outcome/government-response-to-the-regulatory-proposals-for-consumer-internet-of-things-iot-security-consultation
132    “Proposals for Regulating Consumer Smart Product Cyber Security – Call for Views,” United Kingdom Department for Digital, Culture, Media & Sport (DCMS), October 2020, https://www.gov.uk/government/publications/proposals-for-regulating-consumer-smart-product-cyber-security-call-for-views/proposals-for-regulating-consumer-smart-product-cyber-security-call-for-views.  
133    “IoT security: How We Are Keeping Consumers Safe from Cyber Threats,” World Economic Forum, February 2022, https://www.weforum.org/impact/iot-security-keeping-consumers-safe/.
134    CSA, Cybersecurity Labelling Scheme (CLS) Product List.
135    CSA, Cybersecurity Labelling Scheme (CLS) Product List. 
136    DHA, “Strengthening Australia’s Cyber Security Regulations and Incentives.”
137    DHA, “Strengthening Australia’s Cyber Security Regulations and Incentives.”
138    “Stay Smart: Helping Consumers Choose Cyber Secure Smart Devices,” Behavioural Economics Team of the Australian Government (BETA), March 2022, https://behaviouraleconomics.pmc.gov.au/sites/default/files/projects/beta-report-cyber-security-labels.pdf.
139    BETA, “Stay Smart: Helping Consumers Choose.”
140    NIST, “Recommended Criteria for Cybersecurity Labeling.”

The post Security in the billions: Toward a multinational strategy to better secure the IoT ecosystem appeared first on Atlantic Council.

Assumptions and hypotheticals: Second edition https://www.atlanticcouncil.org/content-series/tech-at-the-leading-edge/assumptions-and-hypotheticals-second-edition/ Thu, 22 Sep 2022 14:14:52 +0000 https://www.atlanticcouncil.org/?p=567026 In the second "Assumptions and Hypotheticals," we explore various topics, including the cyber sovereignty debate, the question of an attribution threshold, and the utility of cyber tools in crisis escalation.


When academics, policymakers, and practitioners discuss security and conflict in the cyber domain, they are often hampered by a series of ongoing debates and unarticulated assumptions, some more widely agreed upon than others, that they must nevertheless grapple with to better understand the domain.

We have brought together members of these communities to discuss why these debates are important to the shaping of cybersecurity and strategic plans, and how their outcomes might shape the way that public- and private-sector actors, informed by one position or another, affect the domain, their adversaries, and their own goals.

In this edition, we explore various topics, including the cyber sovereignty debate, the question of an attribution threshold, and the utility of cyber tools in crisis escalation.

Assumption #1

Cyber sovereignty is an unappealing alternative to the Western vision of a “free, open, and interoperable” internet.

Why is this discussion important?

Kenton Thibaut

As governments around the world grapple with how and to what degree the internet should be regulated, countries like China and Russia are advancing a regulatory concept that privileges total state control. This approach, known as “cyber sovereignty,” includes tactics like censorship of political speech online, strict data localization requirements, and the use of internet shutdowns to stifle unwanted activity. At the same time, cyber sovereignty is appealing to many countries across the world for which regulation of the internet poses significant challenges. The current lack of a coherent approach by liberal societies means ceding the debate and risks a world in which a more authoritarian approach to digital governance becomes the norm.

Bulelani Jili

Cyber sovereignty can simply be defined as respecting an individual country’s right to choose its own internet development and management. This vision assumes and demands the recognition of individual countries’ right to craft and employ their own public policies about cyberspace. Crucially, Beijing’s efforts to shape the governance of cyberspace hinge on the promotion of this idea.

Justin Sherman

This assumption is false. When we talk about “cyber sovereignty” in the United States, we often discuss the idea of top-down, authoritarian, repressive control of the internet—which it can be, and in China and Russia’s case, it certainly is—but this can result in mirror-imaging. Sometimes, when Beijing and Moscow approach other states, there is an explicit or strong implicit suggestion that greater state control of the internet enables regimes to control dissent and information flows. But in many cases, that is not the narrative per se. Beijing and Moscow instead emphasize the “sovereignty” word in “cyber sovereignty”—telling others that cyber sovereignty is not about empowering China and Russia, but about empowering individual countries to exert their political rights and push back against United States government and Silicon Valley hegemony online.  

The appeal of “cyber sovereignty” matters because the Chinese and Russian governments are increasingly building coalitions in the United Nations (UN) and other forums to increase state control of the internet globally. Several pieces of evidence bear this out, whether my 2018 study with colleagues identifying “Digital Decider” swing states in the future of the internet or Russia’s successful UN proposal to establish a new cybercrime treaty. If the United States wants to preserve a relatively global, open, interoperable, secure, and resilient model of the internet—in concert with allies, partners, the private sector, and civil society—it must confront the effectiveness of “cyber sovereignty” messaging from those seeking to undermine the global internet. The fall election for the International Telecommunication Union (ITU) secretary-general—the leader, for the next four years, of the UN’s tech agency—may be a strong indicator of how the world falls on this question. 

Joshua Rovner

Cyberspace is neutral. States can use it to share ideas freely – or simply as a vehicle for their ideology. Because not everyone shares the same ideology, it is unsurprising that some have pushed back on the notion that the internet should be free and open. States, including Western states, are always conscious of outside influence. In this respect the debate about “cyber sovereignty” is no different from any other aspect of international politics.  

States are also aware that their rivals can use cyberspace for espionage and sabotage, so they have reason to monitor and restrict the flow of information as a kind of counterintelligence. All of this comes at a cost, of course, because they stand to lose a lot of business if they restrict too much. 

Melissa Griffith

This assumption is particularly important when coupled with a secondary assumption: that a “free, open, and interoperable” internet best serves American interests at home and abroad. The notion that democratic states would inherently prefer, and non-democratic states would be inherently undermined by, a “free, open, and interoperable” internet removed any need to actively evaluate how best to pursue US security, economic, and foreign policy interests in a world increasingly tilting toward cyber sovereignty. That notion, notably, has not held in democratic and non-democratic states alike. It is important to recognize, however, that the phrase “cyber sovereignty” encapsulates a wide range of activity, including China’s Great Firewall and the EU’s vision of Digital Strategic Autonomy.

If smaller states increasingly see more benefit in the idea of cyber sovereignty and improved domestic security than in the open, multistakeholder view championed by the United States, then … 

Kenton Thibaut

The United States and other open societies will need to come together to develop a broader consensus around what a “free and open” internet looks like. This will involve engaging international organizations, civil society actors, and platform operators to address the challenges brought about by an underregulated internet. Support for cyber sovereignty among smaller countries is not necessarily an endorsement of authoritarianism, but rather a strategy to address real issues with internet regulation in their respective political and economic contexts. Instead of thinking of this issue as a binary choice between a free, open internet and a closed authoritarian one, we need to start from a diagnosis of how smaller states believe “cyber sovereignty” can address their regulatory challenges. We can then provide actionable solutions that support the development of a more free and open internet, while steering other governments away from the more extreme forms of cyber sovereignty that countries like Russia and China espouse.

Bulelani Jili

Following this logic of cyber sovereignty, states big and small should discourage cyber hegemony; moreover, they should avoid interfering in the assumed internal affairs of other states. As a result, this vision privileges the actions and ambitions of state actors over private vendors and civil society organizations (CSOs). Such a vision is antithetical to the US government’s position on cyberspace and governance, which advocates for a more open, free, and multistakeholder approach that privileges private actors and CSOs. More to the point, this conception of cyber sovereignty is attractive precisely because it offers legitimacy to state actors who wish to further curb and limit online activity in the name of political and social stability. The embrace of cyber sovereignty, particularly in the Global South, is not simply an outcome of Chinese promotion, but also a corollary of the growing challenge of misinformation and disinformation that appears to be a consequence of an underregulated cyberspace. Given this circumstance, a defense of US interests and values will chiefly rely on addressing disinformation, galvanizing all relevant stakeholders, and promoting the salience of privacy and online free speech.

Justin Sherman

We are going to see increased fragmentation of the internet around the world—fragmentation legally, as more governments introduce top-down internet laws within their borders that cut out industry and civil society voices; fragmentation in content and information, as countries introduce more restrictions on speech, including those enforced by companies in their borders; fragmentation architecturally, even, as some countries seek to further isolate themselves from the global internet (e.g., Russia’s “RuNet” push or Iran’s National Information Network); and so on. It is also likely that it will be harder for the United States and similarly minded countries to build international coalitions to support global and open internet proposals. 

Joshua Rovner

Internet governance will become a great deal more complex. That said, internet governance has never been a binary proposition, with states having a choice between sovereignty and the multistakeholder model. As in other forms of international organization, states balance the gains of institutional cooperation against the desire for autonomy and control. And as in other aspects of international life, there will be no permanent structure that satisfies anyone. Politics is an open-ended negotiation. 

Melissa Griffith

Is it only smaller states? The perceived benefits of cyber sovereignty and improved domestic security, in all their various incarnations, appear to be thriving in small, medium, and large states alike. For the United States, this broader shift requires a recalibration of our security and economic policies. The question is no longer how the United States can ensure the ideal of a “free, open, and interoperable” internet, but rather how it can best capitalize on shared values, mitigate security concerns, and capture economic gains in areas where it is neither the main global architect nor the first mover (e.g., privacy regulations such as GDPR). Concerningly, this is a challenge that we have only recently come to terms with and that we are currently underequipped to meet.

Assumption #2

High confidence attribution often requires significant time and resources, which limits response options.

Why is this discussion important?

Louise Marie Hurel

Attribution should be handled with care, especially political attribution, and it is indeed a process that takes time despite political pressures. It is not a single, final determination; attribution is composed of multiple processes that incrementally build confidence. Countries and private-sector entities will have varying baselines for deciding when to attribute, determined not only by time but also by the resources and capacities available. Even so, that does not mean a government cannot implement crisis communication plans before achieving high confidence, for example.

June Lee

If states are to deter cyber attacks by imposing costs (or credibly threatening to impose costs) on responsible parties, they must be able to identify the responsible actors with high confidence. If high confidence attribution requires significant time and resources, threat actors could exploit the uncertainty that comes with operating in cyberspace to get away with illicit or criminal activity. A failure to quickly respond or call out problematic cyber activity cedes the initiative and can create a norm of impunity in cyberspace. Any joint activity that states wish to take in response to cyber operations (whether collective public attribution or joint operations) will likely require high confidence attribution and sharing of underlying intelligence. While rapid improvements in private sector cyber threat intelligence capabilities have ameliorated some of the constraints imposed by the time- and resource-intensive nature of cyber attribution, governments must continue to develop channels of communication with private entities to take advantage of this trend. 

Joshua Rovner

Attribution is hard when the stakes are low, and easy when the stakes are high. Attributing minor cyberspace operations is hard because attackers can hide, because the signal is lost in the noise, and so on. But significant operations are more likely to leave a trail. And determining responsibility for such operations doesn’t just rely on cyberspace forensics; all sorts of other information might be useful in determining responsibility. 

Melissa Griffith

Attribution is not an end in and of itself, but rather an important, and sometimes critical, input into other goals – processes and outcomes – we care about. The ability to attribute can help incentivize and inform security, increase visibility, and impose costs. As such, the goals can vary, ranging from businesses that find themselves in the midst of incident response, to law enforcement pursuing an indictment, to national security strategies like deterrence.

Moreover, not all cyber operations take the same amount of time or resources to attribute. Operations vary in their use of proxy actors, their degree of operational security, and the prevalence and diversity of operations undertaken by an actor over time. Attribution can be carried out by a diversity of actors (across the private sector and government) using a variety of indicators (technical, political, and clandestine). All of these factors influence the time and resources required for high confidence attribution. Notably, as the resources and time needed to attribute a cyber operation increase, the number and diversity of states and private companies in a position to attribute shrinks.

Importantly, attribution, timely or otherwise, is not always essential even if it is desirable. For example, the ability to impose costs on an adversary – a critical component of deterrence by punishment – requires being able to identify the responsible party. In contrast, the “who” behind an operation is not as critical to bolstering the overall resilience and security of a system. Similarly, while the speedy aspect of attribution may be desirable, it is also not always essential. For example, while rapid attribution was critical to the success of Mutually Assured Destruction (MAD), the same “limited window of opportunity to act before a response potentially becomes impossible” is not equally true when defending against cyber operations. 

If the threshold for actionable attribution is lower than one that requires a long and in-depth process, then …   

Louise Marie Hurel

One needs to examine why the threshold is considered ‘lower’. Most of the time, discussion of attribution being ‘lower’ than the ‘long and in-depth process’ is narrowly associated with countries that conduct political attribution and are called out for not presenting enough evidence to support it (Russia and China, for example). However, ‘actionable attribution’ from countries with fewer resources might sometimes be considered different (in its timeliness and evidence) or even ‘lower’ because of their capacity challenges. We need more of that sensitivity in both the academic and policy discourse on attribution, which too often takes the capacity, resources, and effectiveness of response for granted.

June Lee

First, it’s important to note that there is no threshold of “actionable attribution” for cyber operations in international law – states are not required to provide evidence for any statement of public attribution, and primarily do so (if at all) as a matter of policy. 

At least within the United States, internal thresholds for “actionable attribution” vary depending on government agency and the type of “action” being considered. For instance, the threshold for a US official to speak anonymously to the press in a press leak or planted messaging (what David Pozen coined a “pleak”) might be lower than that needed for the Department of Justice (DOJ) to issue a criminal indictment, which is determined by domestic laws and standards of evidence. Bureaucratic processes and internal prioritization by agency leadership can also affect the speed of public attribution and/or subsequent responses. Lowering the threshold for “actionable attribution” may therefore not have a significant effect on how or when the United States and like-minded states respond to cyber incidents. However, if a lower threshold creates greater expectation for a response, and on a shorter timeline, the United States will need to clarify its thinking on proportional responses to different cyber operations. 

Joshua Rovner

The implications of this are complex. On the one hand, we might expect that states might publicly denounce their adversaries. Yet they might have reason to signal their displeasure quietly, or not at all, if they believe they gain an intelligence advantage by saying nothing. Counterintuitively, they might also downplay cyberspace intrusions in order to reinforce international norms, because alarmism might reduce everyone’s confidence in the system.   

Melissa Griffith

In many instances, this is already the case. While it is true that attribution presents a unique challenge in cyberspace, we have seen a sharp increase in public attribution by states and private companies. “Actionable” also takes on a very different meaning for a company engaged in incident response than it does for a state seeking to deter a subset of malicious hacking through the threat of punishment. Importantly, however, even if the threshold for actionable attribution was low across the board, it would still not be a costless process. For policymakers and industry alike, the question of “what purpose does attribution serve” and “what risks or consequences stem from public attribution” are just as important as “how feasible is high confidence attribution” and “by whom.” 

Assumption #3

Cyber capabilities are a useful tool for signaling de-escalation or intent to deescalate during a militarized crisis.

Why is this discussion important?

June Lee

Scholars are often skeptical of the efficacy of cyber capabilities as a means of signaling. Cyber operations inherently involve ambiguity surrounding intent; the same capabilities can be used for espionage but also destructive effects. There is also uncertainty around states’ perception of cyber capabilities – an operation that some states view as de-escalatory may be perceived as hostile or threatening by others. And in the fog of war, ambiguity surrounding cyber operations may further limit their utility as de-escalatory signals. 

But that’s not to say that cyber capabilities will escalate an ongoing conflict. Cyber operations have temporary, sometimes covert effects and typically avoid casualties; plausible deniability offers states a shield against the pressure to respond and further escalate a conflict. Moreover, states have not yet responded to a cyber operation with military force. Responses have consisted of economic or legal punishment (sanctions, indictments), public attribution, and limited cyber activity. Throughout the Russia-Ukraine conflict, European governments have not responded to several cyber incidents disrupting energy companies other than by publicly attributing the intrusions to Moscow. Cyber incidents throughout the conflict have had primarily disruptive effects and do not appear to have meaningfully shaped the course of the war. The record of the past few years suggests that states are hesitant to respond to cyber activity through kinetic, military means that could further escalate an ongoing conflict. A better understanding of the role cyber capabilities play in a militarized crisis will help policymakers deploy them more effectively as part of their strategic toolkit.

Joshua Rovner

It depends on what we mean by “cyber capabilities.” If we are talking about aggressive intrusions that seem to be aligned with conventional military preparations, then cyberspace operations are probably not good for de-escalation. But in other cases, intrusions and minor acts of sabotage might serve as a useful release valve in a crisis. Covert cyberspace operations allow states to do something without simply backing down, giving some psychological comfort to leaders who worry about their reputations. And they also allow states to act in ways that show their displeasure, but without doing lasting harm. States have a long history of using covert action for this purpose.   

Melissa Griffith

This would be an incredibly risky assumption for policymakers to adopt in practice. In theory, cyber operations might provide policymakers with de-escalatory offramps given that, unlike their kinetic counterparts, they can lack physical violence; can, in some cases, be reversible; and present a greater degree of possible ambiguity. However, unlike the escalatory nature of cyber operations (which, despite a growing range of research, remains a matter of debate), their de-escalatory potential in the context of ongoing militarized conflict has been far less rigorously examined. The consequences of getting this wrong – either (a) having the opposite of the intended effect or (b) misinterpreting an adversary’s cyber operation as de-escalatory during a militarized crisis – could be severe.

If cyber capabilities are used by another state to signal de-escalation but they are seen in the United States as either a continuation, or even an escalation, of conflict then …  

June Lee

It’s unclear that such a circumstance would lead to a significant escalation in conflict. The United States would likely respond with sanctions and public attribution, while pursuing an indictment. Kinetic military action is unlikely, and any cyber response would be relatively constrained (e.g. disrupting adversary offensive cyber capabilities) and consistent with international law. The United States would be careful not to set a precedent for responding to cyber operations that could create destabilizing norms around cyber activity or give adversaries reason to escalate in response to future cyber incidents. Escalation could be possible if a cyber operation were to significantly disrupt critical infrastructure (e.g. targeting nuclear facilities) or vulnerable civilian facilities (e.g. hospitals), but states are unlikely to launch such operations with de-escalatory intent. 

Joshua Rovner

Will the United States view adversary action in a crisis as escalatory or de-escalatory? The answer is hard to predict in advance, because so much will depend on the circumstances of the case, the type of intrusion, the nature of the target, and evidence that its adversary is committed to conflict. 

Melissa Griffith

This issue rests at the heart of signaling challenges. Efforts to signal rely on managing an adversary’s perceptions, and those efforts can easily fall victim to misperception or uncertainty. The best-case scenario here would be if the United States perceived a cyber operation as non-escalatory: just one facet among many that shaped its specific approach to crisis management in real time. If, however, the United States saw a cyber operation as an escalation of the conflict then, depending on the state and militarized crisis in question, the crisis could potentially spill over into war. In either instance, the failed attempt at signaling may also hamstring future offramp efforts.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

Untangling the Russian web: Spies, proxies, and spectrums of Russian cyber behavior  https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/untangling-the-russian-web/ Mon, 19 Sep 2022 09:00:00 +0000 https://www.atlanticcouncil.org/?p=565954 This issue brief analyzes the range of Russian government’s involvement with different actors in the large, complex, and often opaque cyber web, as well as the risks and benefits the Kremlin perceives or gets from leveraging actors in this group. The issue brief concludes with three takeaways and actions for policymakers in the United States, as well as in allied and partner countries.


Executive summary

The number of cyber operations launched from Russia over the last few years is astounding, ranging from the NotPetya malware attack that cost the global economy billions, to the SolarWinds espionage campaign against dozens of US government agencies and thousands of companies. Broad characterizations of these operations, such as “Russian cyberattack,” obscure the very real and entangled web of cyber actors within Russia that receive varying degrees of support from, approval by, and involvement with the Russian government. This issue brief describes the large, complex, and often opaque network of cyber actors in Russia, from front companies to patriotic hackers to cybercriminals. It analyzes the range and ambiguity of the Russian government’s involvement with the different actors in this cyber web, as well as the risks and benefits the Kremlin perceives or gets from leveraging actors in this group. The issue brief concludes with three takeaways and actions for policymakers in the United States, as well as in allied and partner countries: focus on understanding the incentive structure for the different actors in Russia’s cyber web; specify the relationship any given Russian actor has or does not have with the state, and calibrate their responses accordingly; and examine these actors and activities from Moscow’s perspective when designing policies and predicting the Kremlin’s responses. 

Introduction

The number of cyber operations launched from Russia over the last few years is astounding, ranging from the NotPetya malware attack that cost the global economy billions to the SolarWinds espionage campaign against dozens of US government agencies and thousands of companies. Yet broad characterizations of these operations, such as “Russian cyberattack,” obscure the very real and entangled web of cyber actors within Russia that have varying degrees of support from, approval by, and involvement with the Russian government. 

Contrary to popular belief, the Kremlin does not control every single cyber operation run out of Russia. Instead, the regime of President Vladimir Putin has to some extent inherited, and now actively cultivates, a complex web of Russian cyber actors. This network includes: cybercriminals who operate without state backing and inject money into the Russian economy; patriotic hackers and criminal groups recruited by the state on an ad hoc basis; and proxy organizations and front companies created solely for the purpose of conducting government operations, providing the Kremlin a veil of deniability. This web of cyber actors is large, often opaque, and central to how the Russian government organizes and conducts cyber operations, as well as how it develops cyber capabilities and recruits cyber personnel. 

Referring to all cyber activities that take place inside of Russia as “Russian”—and even those launched from outside Russia by “Russian” actors—flattens the complexity of this network and undermines analysis of the range of actors at the Kremlin’s disposal. Likewise, assuming the Putin regime controls every single cyber activity emanating from Russia ignores the government’s spectrum of involvement with various actors and, in turn, the different opportunities the United States and its allies and partners may have to disrupt Moscow’s cultivation and use of this cyber ecosystem. While researchers continue to publish on the “cyber proxies” concept, proxy as a universal term fails to capture the gradations of the state’s involvement with hackers, assuming a top-down hierarchical relationship that is not always present in Russia. Public information about this cyber ecosystem is not perfect or complete, but its relationship with the Russian government demands deeper analysis. 

Untangling this multifaceted web—and understanding how and why so many Russian cyber actors freely operate in, and oscillate between, state and non-state domains—will allow the United States to appropriately target negotiations and track the expansion of Russian cyber operations globally. This is particularly important now, with the Putin regime facing an unprecedented level of sanctions from governments around the world, and the country’s information technology (IT) “brain drain” accelerating since the regime’s (re)invasion of Ukraine in February 2022.1 Before these latest hostilities, the US government was negotiating a curtailment of ransomware attacks coming from within Russia; right after the war began, diplomatic talks between the Biden administration and the Kremlin quickly deteriorated.2 Arguably, understanding and disrupting Russian cyber operations in conflicts in Ukraine and other areas around the world is more important than ever for the US government and its allies and partners. However, the reality is that the US government cannot pursue these objectives effectively or comprehensively without first understanding and shaping its approach around the reality of Russia’s cyber ecosystem. 

This four-part issue brief reviews the complex web of cyber actors in Russia, analyzes the range of Russian government involvement with these actors through specific examples, explains the risks and benefits the Kremlin perceives or gets from cultivating and leveraging this web of cyber actors, and provides three key takeaway-action pairings for US policymakers, as well as for allies and partners.

A complex web: Inheritance meets cultivation 

Russia is home to a convoluted web of cyber actors comprised of government-funded front companies, state-tapped individuals, cybercriminals, and “patriotic hackers,” among others. While some of these entities receive direct orders and financial support from Russian authorities, others have tacit permission to operate independently, so long as they do not upset the Putin regime. The Kremlin’s involvement with each of these actors follows a varied and ambiguous pattern of engagement that the next section discusses in more detail. First, it is necessary to understand why the Russian government values this kind of cyberspace proxy activity, and how this activity has evolved into the convoluted and opaque web that exists in Russia today. 

Political warfare is generally important to the Kremlin. The Putin regime, inside and beyond Russian borders, has carried out assassinations and attempted assassinations, funded propaganda front companies, spread disinformation, and launched disruptive cyber operations, among other activities. While the organizational structures that execute these activities, and the techniques used, vary, the goals are often similar: to disrupt, destroy, sabotage, and subvert enemies of the Russian state (read: enemies of the Putin regime) abroad and at home. This reflects a growing emphasis in Russia’s military doctrine and national security thinking on the importance of information, proxy, and below-threshold-of-war conflict.3 Russia’s 2000 Foreign Policy Concept stated that “while the [sic] military power still retains significance in relations among states, an ever greater role is being played by economic, political, scientific and technological, ecological, and information factors.”4 Prominent Russian military theorists S. G. Chekinov and S. A. Bogdanov underscored this in their 2010 article that appeared in the Russian journal Military Thought, writing that “asymmetric actions, too, will be used extensively to level off the enemy’s superiority in an armed struggle by a combination of political, economic, information, technological, and ecological campaigns in the form of indirect actions and nonmilitary measures.”5 Some of these political warfare actions, like disruptive cyber operations, explicitly target Russia’s enemies, while others have intentional indirect effects. Scholars Adrian Hänni and Miguel Grossmann, for instance, argue that the Putin regime’s “public, theatrical form of murderous attacks on intelligence defectors” is a kind of “signaling through covert action” to Russia’s enemies, Russian defectors, and the Russian public.6

This assessment has its roots in historical actions, bureaucracy, and thinking that inform how Moscow uses cyber and information capabilities today. The Soviet Union conducted political warfare-style operations under an umbrella of “active measures” against foreign and domestic targets. Akin to contemporary political warfare, these actions ranged from assassinating émigré leaders who participated in anti-Soviet activities to manufacturing and spreading the lie that the Pentagon started the AIDS epidemic.11 Of course, the parallels are not perfect, and the information environment today is fundamentally different than it was decades ago. For example, the scale and speed of microtargeting alone, enabled by the internet, is unprecedented. Regardless, the Putin regime and the Russian security apparatus continue to emphasize many of the same Soviet-era, active measures-type ideas, such as deniability, covertness, and the use of proxies, which carries over to cyber operations.12 Russia’s modern structure for information operations reportedly even mirrors the Soviet approach; after the collapse of the Soviet Union, the military transferred its propaganda directorate to the military intelligence agency (Glavnoye Razvedyvatelnoye Upravlenie, or GRU), rebranding it GRU Unit 54777 in 1994.13 This unit still exists today and,14  per the US Department of the Treasury’s 2021 sanctions, falls under Russia’s Information Operations Troops.15 From strategic thinking to operational style to intelligence structure and culture, many similarities exist between the active measures of the Soviet Union and the political warfare activities of contemporary Russia. 

To some extent the Putin regime inherited this convoluted web of cyber actors. Economic decline and political instability following the demise of the Soviet Union contributed to an explosion of crime,16 including cybercriminal activity. Among other reasons, a lack of laws and enforcement related to cybercrime, limited economic opportunities, and “highly educated and technologically empowered segments of [the] population with the capability to conduct sophisticated criminal operations” all accelerated the pace of cybercrime in 1990s Russia.17 This activity evolved from software piracy to more serious forms of profit generation like hacking banks and stealing identities.18 By the time Putin ascended to the presidency in December 1999, there were already numerous nonstate hackers in Russia engaged in criminal behavior. 

Instead of cracking down, the Kremlin actively cultivated this network of cyber actors, and continues to leverage this ecosystem for purposes that extend beyond criminal activity. The Putin regime allows cybercriminals and patriotic hackers to operate freely within Russia, so long as they focus on foreign targets, do not undermine the Kremlin’s objectives, and answer to the state when asked. The Federal Security Service (FSB), Russia’s internal security agency with some foreign purview, recruits cybercriminals to carry out operations on its behalf. The Foreign Intelligence Service (SVR) sets up front organizations to conduct cyber and information operations against foreign targets. The Kremlin permits private military companies (PMCs) to operate around the world and to sell their military and protective services to foreign governments; at least one Russian PMC has developed a cyber unit.19, 20 While Putin did inherit an ecosystem of both legitimate technology companies and technically talented individuals engaged in cybercrime, the regime has purposefully shaped this resource pool of Russian cyber actors to its own benefit, though not without accompanying risks.

It is worth noting that this issue brief focuses primarily on cyber operations as understood by the United States (pertaining to code) but also mentions information operations throughout (pertaining to, in the US view, human-readable content). Russia’s conceptualization of the information space does not make such a firm distinction. Therefore, this issue brief errs toward depicting the Russian understanding of the space, as well as highlighting some of the similarities between the ways Russian actors have conducted cyber and information operations, such as the government setting up cyber and information front organizations in other countries.  

The spectrum of Russian government involvement 

Putin does not control every single cyber operation that occurs within or comes out of Russia. In fact, as Candace Rondeaux writes, the “narrative of a grand chess master, whether Putin, a Kremlin insider, or a mercenary group, singlehandedly orchestrating Russia’s proxy warfare strategy is a useful fiction for the Kremlin.”21 Simply put, “Vladimir Putin is not omnipotent,” as journalist Julia Ioffe remarked in 2013.22 In reality, there are degrees of Russian government involvement with most Russian cyber actors, whether it is through active financing, tacit approval, or another kind of engagement entirely. It is also possible that some activity is entrepreneurial by design, with nonstate hackers and developers auditioning their capabilities to capture the attention of the state.23 Further, for all that Russian doctrines and military thinking emphasize the importance of political warfare and cyber and information operations, there is a great deal of complexity, competition, and internal conflict in how the Russian government bureaucracy attempts to operationalize those doctrines and ideas. Unpacking this spectrum of Russian government involvement with hackers is essential for the United States and its allies and partners to accurately analyze the Russian cyber web, as well as to identify areas to disrupt Russian government or government-directed activity. 

In 2011, Jason Healey described a spectrum of state involvement in cyber activity,24 identifying ten separate types of hacking: state-prohibited, state-prohibited-but-inadequate, state-ignored, state-encouraged, state-shaped, state-coordinated, state-ordered, state-rogue-conducted, state-executed, and state-integrated.25 While Healey’s intention was to enhance the conversation around government responsibility for cyber operations beyond technical attribution, his framework alone illustrates that governments can maintain a range of relationships with hackers to suit their purposes. Putin’s regime has taken—and continues to take—this exact approach. 

The extensive Russian network includes: internal government cyber and information units; front companies established and run by the government; private companies leveraged by the government to develop capabilities and recruit talent; criminals recruited by state officials; industry developers recruited by state officials; independently operating patriotic hackers (often with state encouragement or as cover for state-run action); hackers independently building their capabilities and pitching them to the state; and murky, mafia-style familial entanglements between hackers and Russian government officials. Experts have published excellent research on cyber proxies,26 yet, in Russia’s case, questions remain about the exact nature of those relationships, as they sometimes defy the frequent assumption that proxy activity refers to a top-down hierarchical relationship, with the state as the primary actor. Considerable portions of Russia’s cybercriminal ecosystem operate with a sort of Darwinian entrepreneurialism, akin to the approach of Russian criminal enterprises and protective services in the 1990s.27 Criminals often have substantial agency to drive this activity. And when there are quasi-symbiotic relationships at play with the state—a local FSB official, for instance, taking money on the side to provide a “roof” (krysha) of protection for hackers—these relationships do not entirely follow top-down or state-dominated definitions. It is also important to note, before diving into examples of actors in the Russian cyber web, that each case study raises questions about replicability.28 Some examples may be entirely or somewhat replicable, while others could be one-off cases, shaped by factors such as the Russian government’s operational needs, budgetary resources, technical constraints, and others.

The Russian government has many internal teams carrying out cyber operations. The FSB, GRU, and SVR all have cyber units, in addition to the cyber organizations located within other parts of the Russian military and security service apparatus.29 For example, the FSB’s 16th Center has signals intelligence capabilities, and its 18th Center has been responsible for hacks of Yahoo, Ukrainian targets, and others.30 The GRU has multiple cyber teams, including Unit 26165 (“Fancy Bear”),31 which carried out the 2016 hack of the Democratic National Committee,32 and Unit 74455 (“Sandworm”), which hacked power grids in Ukraine.33 Even though less is known about its internal cyber structure,34 the SVR has also carried out major operations, such as the SolarWinds hack in 2020.35 Often these operations are launched from within Russia, but at other times, state hackers have gone abroad to attack targets. In 2018, for example, operatives from GRU Unit 26165 traveled to the Netherlands to hack into and disrupt the investigation of the Organization for the Prohibition of Chemical Weapons (OPCW) into the poisoning of Sergei Skripal and his daughter.36 Hackers apparently from the same sub-team of Unit 26165 were also on site in Rio de Janeiro, Brazil, and Lausanne, Switzerland, to break into systems of the US Anti-Doping Agency, the World Anti-Doping Agency, and the Canadian Center for Ethics in Sport.37

Moscow finances and directs cyber and information operations through front organizations and websites used by the GRU, the SVR, and the FSB to spread disinformation.38 The Russian government also uses companies like Neobit and AST to technically support cyber and information operations, with some companies acting like contractors but in a covert capacity.39 It is possible that the Russian government is increasingly stationing these cyber and information assets overseas. One of the Russian spies the United States caught and deported in June 2010 was working at Microsoft. The man had no apparent links to the Russian intelligence community. However, federal authorities knew that he had previously worked at Neobit,40 currently linked, per the US Department of the Treasury’s April 2021 sanctions, to the Russian Ministry of Defense, the FSB, and the SVR.41 In 2019, a Czech magazine reported that the Czech Security Information Service had shut down two private IT companies in early 2018 that were fronts for Russian hackers, reportedly part of a broader international network.42 Outside of what the United States considers cyber operations, but well within the Russian government’s cohesive conception of the information space, the Internet Research Agency has since 2016 been setting up overseas offices in Ghana, Nigeria, and Mexico to covertly run information operations.43 Yevgeny Prigozhin, Putin’s “chef” and confidant, heads these operations that, even while coordinated surreptitiously by the Kremlin, may not involve constant or direct government control.

The Russian government also recruits hackers and cybercriminals on an ad hoc basis to conduct operations.44 Authorities allow the Russian cybercriminal apparatus to thrive for a variety of reasons, including the fact that cybercrime brings money into Russia, and the talent base it cultivates gives the Kremlin proxies to tap as needed. It is also part and parcel of the pervasive corruption in the Russian business and government world. Through the “social contract” these hackers have with the Kremlin, they generally get permission to operate freely, as long as they focus mainly on foreign targets and do not undermine the Kremlin’s objectives. They must also be responsive to Russian government requests, even if the motives of these cybercriminals are primarily financial.45 (In the rare, publicly reported instances of Russian authorities arresting cybercriminals, the hackers involved had either stolen from or targeted Russian citizens.46 Even former FSB-linked hackers may not be safe if they violate the Kremlin’s social contract.47) As Nina Kollars and Michael Petersen write, “institutional boundaries have become porous, allowing private citizens and organizations to conduct sanctioned state activities and allowing the state to mine society for autonomous assets to carry out state functions.”48

Several cases underscore how the Russian government recruits programmers and criminal hackers as needed, often through the FSB. In the late 2000s, the FSB reportedly contacted an individual tied to a patriotic hacker website in an attempt to establish a cooperative relationship.49 Around the time of the Russo-Georgian War in 2008, Russian intelligence agencies tried to create an online forum to recruit hackers to attack Georgian targets.50 In September 2015, the independent Russian news website Meduza reported that Alexander Vyarya, who worked at a Russian company building distributed denial-of-service (DDoS) defense software, said Rostec, Russia’s defense conglomerate, approached him requesting his help to improve the government’s DDoS attack capabilities.51 Vyarya noted that, at a meeting in Sofia, Bulgaria, software developers showed him an existing Russian government DDoS capability, which was demonstrated on the websites of the Ukrainian Ministry of Defense and the Russian edition of Slon.ru (an online magazine);52 Vyarya refused to get involved and then left Russia.53 This last example illustrates an additional set of risks and incentives—those of individuals working as company programmers tapped by the Russian government to provide assistance who must assess the consequences of refusal. 

In 2017, the US Department of Justice charged two FSB officers and their criminal collaborators with hacking into Yahoo and millions of email accounts.54 The indictment alleged that the officers “conspired together and with each other to protect, direct, facilitate, and pay criminal hackers to collect information through computer intrusions in the US and elsewhere.”55 The document stated that the officers tasked hackers with targeting Yahoo email accounts; when they wanted information from non-Yahoo emails, they tasked a hacker and paid them a “bounty.”56 The indictment described one officer, in particular, as a hacker’s “handling FSB officer.”57 Yet these FSB officers went a step beyond material direction and financing. In line with other nominally state-sanctioned criminal activities in Russia, the FSB officers allegedly provided one of the hackers with “sensitive FSB law enforcement and intelligence information that would have helped him avoid detection by law enforcement, including information regarding FSB investigations of computer hacking and FSB techniques for identifying criminal hackers.”58

Other accounts describe parts of the Russian government, including the FSB, the GRU, and the Ministry of Internal Affairs, cultivating close relationships with nonstate hackers.59 Positive Technologies, a Russian IT firm sanctioned by the US government, hosts conventions that the FSB and the GRU use as recruiting events.60 The US Treasury Department stated in April 2021 that the FSB cultivated and coopted the ransomware group Evil Corp.61 The FSB had apparently given one of Evil Corp’s alleged members, Igor Turashev, enough cover to register three Russian companies in his name, in a building known for crypto firm money laundering.62 Despite this apparent brazenness, most nonstate hacker recruitment occurs in the more obscure corners of the Russian cyber web. As journalist and Russian intelligence expert Andrei Soldatov has said, “We know there is a huge pool of capable talent, and at least some people who are willing to do things that are suggested to them. We know such things are being done. What we don’t know is how or why such orders are formulated, and who exactly may be involved.”63 To Soldatov’s point, different elements of the Russian security apparatus may tap hackers for different purposes, ranging from strategic to highly tactical; nonstate hacker recruitment does not necessarily originate from the same level of the Russian government. 

Beyond the outright backing and recruitment of nonstate cyber actors, the Kremlin also engages in less direct activities, such as encouraging individuals to carry out cyber operations. Patriotic hacking groups are a prime example. These collectives, ranging from loosely to more formally organized, are composed of technically skilled people who conduct operations in line with government interests (or what they perceive as government interests). Some of these activities began with a domestic bent, such as the policing and targeting of regime critics online,64 but have since expanded into the foreign arena. Following the Russia-originating cyber operations against Estonia in 2007, a representative of the Unified Russia party said his assistant—a member of the pro-Kremlin youth group Nashi—participated in the attacks.65 During the 2008 Russo–Georgian War, patriotic hackers also appear to have taken part in launching DDoS attacks against Georgian websites.66

These individuals genuinely believe they are expressing patriotism for the Russian nation. An analysis of pro-Russian and pro-Ukrainian patriotic hacker Twitter posts between 2014 and 2017, after the Putin regime’s invasion and annexation of Crimea, found that the hackers created a “popular, even populist identity” online based on patriotism.67 In 2007, malicious web queries transmitted to Estonian websites by Russian actors (believed to be patriotic hackers) invoked false claims of fascism in reference to Andrus Ansip, Estonia’s then-prime minister, with phrases such as “ANSIP_PIDOR=FASCIST,”68 echoing a nationalistic narrative espoused by members of the Russian parliament.69 

Meduza reports that several Russian-speaking, nonstate hackers identified the 2008 Russo–Georgian War as a catalyst for Russian intelligence service recruitment of patriotic hackers.70 There has recently been speculation about the Russian government encouraging the patriotic hacking of Ukrainian targets.71 Yet, hacks of this kind are not always state-directed. Something as simple as a Kremlin official getting on TV and criticizing a foreign country might be the only prompt a patriotic hacker needs to act. After browsing online forums that shared software for possible use to attack Georgia, journalist Evgeny Morozov said in August 2008: 

In less than an hour, I had become an internet soldier. I didn’t receive any calls from Kremlin operatives; nor did I have to buy a web server or modify my computer in any significant way.…Paranoid that the Kremlin’s hand is everywhere, we risk underestimating the great patriotic rage of many ordinary Russians, who, having been fed too much government propaganda in the last few days, are convinced that they need to crash Georgian websites.72

Speculation also exists that the Russian government encourages patriotic hacking to provide cover for state-run operations. 

Although these individuals and organizations have permission to operate independently, Moscow does not hide its affinity for these hackers or their cyber capabilities. In a June 2017 meeting with international media, Putin compared patriotic hackers to painters, saying that “hackers are free people. They are like artists. If they are in a good mood, they get up in the morning and begin painting their pictures.”73 He elaborated that “hackers are the same. They wake up in the morning, they read about some developments in international affairs, and if they have a patriotic mindset, then they try to make their own contribution the way they consider right into the fight against those who have bad things to say about Russia.”74 Explicitly directed or not, Putin is well aware that patriotic hackers are a component of the Russian cyber web that the government can leverage at will.

Otherwise, most Russian state involvement with nonstate hackers is ill-defined. The Russian hacking group Evil Corp, indicted by the United States in November 2019 and sanctioned that December, is an illustrative example.75 The group is run by Maxim Yakubets, a Russian hacker reportedly married to Alyona Eduardovna Benderskaya, the daughter of Eduard Bendersky.76 A former FSB Spetsnaz officer, Bendersky owns multiple private Russian security firms and, according to Bellingcat, is a “de-facto spokesman for Department V” or Vympel,77 the FSB’s externally focused “antiterrorist” unit that has carried out multiple overseas assassinations.78 Yakubets has reportedly worked for the FSB since 2017, the year he and Bendersky’s daughter presumably married,79 and has been in the process of obtaining a Russian government security clearance since April 2018.80 He is still at large in Russia, despite alleged Russian arrests of affiliates of a different ransomware group, REvil, in February 2022,81 arrests that had provided a glimmer of (wishful) hope that Moscow was, in fact, actually cracking down on ransomware and other cybercriminal activity. One senior US official, for example, had—quite idealistically—told reporters following the REvil arrests that “these are very important steps, in that they represent the Kremlin taking action against criminals operating from within its borders, and they represent what we’re looking for with regard to continued activities like these in the future.”82

Putin does not control all these groups, and even if the FSB does engage with a hacker on a local level, Putin is (by and large) not involved in the day-to-day minutiae. Nevertheless, the Kremlin clearly allows cybercriminals and other nonstate hackers to thrive in Russia. Moreover, for the largest groups in the cyber web, the regime to a certain extent actively decides to look the other way. Given these circumstances, the next section discusses the benefits the regime gets, or perceives it gets, from leveraging this network of Russian cyber actors. 

The risks and benefits of the cyber web for the Kremlin 

From the Kremlin’s perspective, the web of Russian cyber actors—from nonstate patriotic hackers and cybercriminals to state-funded front companies—can provide numerous benefits. Principally, the returns include deniability, the power to wage covert political warfare below the threshold of outright war, and potentially reduced costs to maintain cyber capabilities. Additionally, the economic benefits should not be downplayed. While exact figures are hard to come by, cybercriminals are clearly bringing money into Russia, with billions of dollars estimated to have been raked in already by 2014.83  In 2021 alone, it was reported that 74 percent of global ransomware revenue went to Russian hackers, to the tune of $400 million in cryptocurrencies.84 That said, this activity also comes with many risks, including having to deal with competence and discipline issues that contribute to political-criminal tensions within hacking groups, undermining effectiveness. Recruiting from overlapping groups can also lead to political problems when the hackers act outside their remit or no longer work for the state but are identified as state actors. There is a simultaneous interplay between all these dynamics. 

As noted, deniability is a pivotal factor in the Kremlin’s strategic and operational decision-making. Putin is not a micromanager.85 Instead, he operates an “adhocracy” that allows elites to “become policy entrepreneurs, seeking and seizing opportunities to develop and even implement ideas that they think will further the Kremlin’s goals.”86 In practice, this creates ambiguity and, from the Kremlin’s perspective, plausible deniability.87 This approach is particularly conducive to cyber and information operations because they can be conducted remotely from behind a computer screen. Some argue that this deniability is implausible, correctly pointing out that Moscow often poorly obscures links between Kremlin officials and supposedly non-state-affiliated proxies,88 such as in the case of the patriotic hackers targeting Estonia, Georgia, and Ukraine. In some instances, Russian officials blatantly lie, even when faced with overwhelming evidence to the contrary. In 2018, when Dutch intelligence caught and publicly exposed the GRU Unit 26165 operatives who flew to The Hague to disrupt the OPCW investigations, one retired Russian lieutenant general said, “You say this is evidence. It’s not evidence to me. Russian intelligence was believed to be among the best in the world. Now you want to present a bunch of fools, absolutely incompetent, absolutely stupid, non-professional idiots? It’s insulting.”89

Regardless, the Kremlin does have periods when it can deny knowledge of, association with, and/or responsibility for cyber and information activities. While the ongoing war in Ukraine is an example of (Western) government intelligence exposing Russian plans and activities in near-real time, there are many prior instances when the state had plenty of time to deny cyber operations emanating from Russia before evidence emerged.90 This ambiguity between the Russian government and cyber actors—whether a GRU front company or a ransomware group working with an FSB officer—gives the Kremlin space, however small, to claim no involvement. The fact that this is sometimes genuinely true, as when the Russian government permits cybercriminals to do what they want without actively supervising or directing them, helps bolster Moscow’s objections. Moscow can engage with other governments knowing that its denials of involvement are sometimes genuine; when they are not (such as when the government is, at minimum, complicit in choosing not to investigate certain cyber operations), officials can lean into the ambiguity that surrounds the state’s control over the Russian cyber web. Leveraging this extensive and opaque web of cyber actors also enables the Kremlin to make absurd demands of the United States, as in June 2021, when Putin said that Russia would allow the extradition of cybercriminals to the United States if the US government would agree to do the same for Russia.91 Touting these bad-faith gestures as genuine attempts at diplomacy is reminiscent of the Kremlin’s legalistic approach to international norms on cyber issues more broadly, with legal concepts about “sovereignty” cited to promote a government-controlled vision of the internet.92 Furthermore, even if deniability is “implausible” to outside observers, that does not mean the claim is worthless. 
As Rory Cormac and Richard Aldrich have argued, implausible deniability can still exploit a target’s decision-making gaps, building powerful narratives (e.g., around Putin’s omnipotence) and signaling resolve, among other benefits.93 

Leveraging the cyber web empowers Moscow to wage political warfare in what the West would call the “gray zone,” below the threshold of armed conflict. The Russian state has a long history of operating in the sphere of political warfare, and recent Russian military thinking has carried this mindset into the modern age. Valery Gerasimov, Chief of the General Staff of the Russian Armed Forces and First Deputy Defense Minister, wrote an article in 2013 arguing that “the role of nonmilitary means of achieving political and strategic goals has grown, and, in many cases, they have exceeded the power of force of weapons in their effectiveness.”94 Though the article is often mislabeled the “Gerasimov doctrine” (it is neither a doctrine nor binding)95 and often cited, incorrectly, as evidence that hybrid warfare is a new kind of Russian thinking,96 it nonetheless recognized the importance of nonmilitary tactics in modern conflict. As Eugene Rumer explains, Russia’s foreign and military policy over the last two decades clearly emphasizes that “military power is the necessary enabler” of what many refer to as hybrid warfare, where “hybrid tools can be an instrument of risk management when hard power is too risky, costly, or impractical, but military power is always in the background.”97

The Russian government can employ these measures continuously by leveraging the Russian cyber web both during war and peace. For decades, the Russian state has leveraged private Russian technology companies and their technical personnel to support state cyber and information operations. Through the FSB and other security agencies, the Kremlin has used hackers to assist with espionage and other activities below the threshold of armed conflict. It has even permitted ransomware and cybercriminal groups to thrive, so long as they toe the Kremlin’s political line and focus on foreign targets. The Kremlin can also leverage cyber operators in gray zone conflicts, such as its illegal invasion and annexation of Crimea in 2014, and its encouragement of patriotic hackers to go after Ukrainian targets. From the Kremlin’s perspective, all of this is an inherent benefit of having a large network of cyber actors to leverage as needed. 

Operating in the gray zone with proxies also conveys the benefit of creating uncertainty for adversaries about how to respond. The cyber and information operations that targeted US elections, for instance, generated intense debates in the United States about whether and how to respond; concerns about escalation ladders and about how any response would be classified under international law left the US government hesitant to take forceful action. According to the Senate Intelligence Committee’s investigation into Russia’s 2016 election interference, Obama administration officials were concerned about “appearing to act politically on behalf of one candidate, undermining public confidence in the election, and provoking additional Russian actions.”98 This reluctance to act, including the associated political concerns, illustrates the benefit the Russian government receives from the below-threshold nature of internet-based political warfare. Individual actors might engage in phishing and ransomware attacks most days of the week, with one day set aside to steal data for a GRU officer. In this way, Moscow effectively blurs the lines between criminal activity, independent technology development, and espionage, muddling Western policy responses. 

Finally, the ability to tap into a nebulous web of cyber actors also means that the Kremlin can leverage capabilities without the need to constantly supervise everything. There is, once again, a spectrum of financial, training, and supervisory costs. The front companies that run FSB, SVR, and GRU cyber and information operations ostensibly pay for those activities themselves, leveraging intelligence personnel (although the arrangements remain unclear). The Internet Research Agency and state-supporting companies like Neobit operate in an undefined zone, where Putin cronies spend state-granted wealth and the Russian government contracts nonstate support and capabilities. Then there are the many cybercriminals, patriotic hackers, legitimate Russian IT company employees, and others who may operate independently, but do so with the state’s permission and may receive requests to redirect resources to government activities. The publicly available evidence is anecdotal, but these efforts sometimes cost the government next to nothing. In the previously mentioned 2017 indictment of two FSB officers, one of the hackers confessed that he was paid about $100 “for each successful hack,” wired by the FSB through PayPal, WebMoney, and other non-Russian online payment systems.99 

While leveraging non-state actors in the Russian cyber web saves the Kremlin resources in some cases, the government may have to deal with competence and discipline issues;100 cybercriminals might not operate with the same diligence as state hackers. Individual programmers recruited to develop capabilities for the state are likely untrained in Russian government methods of secrecy protection. Patriotic hackers might not use very sophisticated tools and instead, as the reporting suggests, use off-the-shelf capabilities posted on web forums. 

Dueling political and criminal dynamics can also generate internal fractures within hacker groups, which affects their ability to operate for the state. Leaked documents from the Russian hacker group Conti, for instance, highlighted divisions over the group’s official position on the war in Ukraine.101 The government itself might not coordinate operations very well either. Analysts still debate whether the GRU and the FSB coordinated the hack on the Democratic National Committee in 2016,102 and the Russian security services, in general, have a long history of turf wars and infighting.103 It is possible that multiple Russian security organizations—or even multiple units within a single Russian security organization—recruit hackers for overlapping purposes, such as developing information interception capabilities or launching destructive cyber operations, generating additional complexities. 

There is also the risk of an actor becoming so closely associated with the government that they create problems when they act in line with their own preferences—the actor or group may no longer be working with the Russian government, but others might assume otherwise. Theoretically, a Russian government agent could be held internally responsible for this kind of activity, with superiors believing that the agent was sanctioning a cybercriminal operation like stealing from Russians or going after politically sensitive targets abroad. Other hypothetical cases could involve an entire government organization being blamed by the Kremlin for how it handled a relationship with a cyber web actor. In this sense, the risk of cyber actors behaving out of line could range from individual-level repercussions to broader ones, generating a different set of issues for government officers to worry about. Some scholars have argued that, in general, governments empowering proxies with “more expansive, or less restrained, political agendas” can lead to escalatory situations,104 although that remains unclear in practice. 

Recommendations and conclusion 

Putin does not control every cyber operation within Russia, nor does the Russian government manage every single cyber actor in the country. It is highly unlikely that senior Kremlin officials are discussing a small-scale Russian phishing ring or a group of Russian hackers targeting Western credit card companies. FSB officers who recruit cybercriminals on an as-needed basis likely have no desire to manage the day-to-day activity of that cybercriminal operation. However, the Putin regime inherited, and now cultivates, an extensive network of cyber actors in Russia. The government rarely engages with some elements of this network, even at a local law enforcement level, but it recruits, encourages, and may even directly finance other constituencies. Moscow creates an environment in which cybercrime thrives (including by permitting corruption to flourish) and, in doing so, protects many cybercriminals in Russia. The United States and its allies and partners must gain a better understanding of this network and of Russian cyber and information capabilities, especially as they try to disrupt operations coming out of Russia. Russia should also act as a case study for how a government can cultivate and leverage a large web of cyber and information actors to augment its power. In particular, the United States and its allies and partners should note and consider the following actions. 

  • Takeaway: The Putin regime perceives that it benefits—and in many cases, does materially benefit—from leveraging the Russian cyber web because it can claim deniability, has more power to wage covert political warfare below the threshold of outright war, and has potentially lower costs for cyber capabilities. Cybercriminals also bring money into Russia, an increasingly important factor for a heavily sanctioned country with a declining economy. Overall, the Putin regime has many incentives for continuing to allow cybercrime to thrive in Russia, as well as for creating front companies, leveraging cybercriminals and patriotic hackers, poaching private company employees, and letting PMCs develop cyber capabilities. 
  • Action: US policymakers, working with allies and partners, should focus more on understanding the incentive structure behind the Russian cyber web, the wide range of actors within it, and the relationships those actors have with the Russian government at different points in time. Some US public messaging—such as policymaker excitement about Moscow’s reported “arrests” of REvil ransomware members—does not reflect (or perhaps does not demonstrate) an understanding of the Russian government’s incentives vis-à-vis these groups. Alongside conversations about how to disrupt particular activities, US policymakers should also focus on understanding these particular incentives. For example, cybercriminals who target individuals in Russia as well as the United States are much more likely to attract Russian government enforcement actions than cybercriminals who just target US individuals. This would be a relatively more effective area to direct US law enforcement cooperation with Russia than, say, ransomware actors who have no impact on the Russian population. Targeting cybercriminals who moonlight as government hackers to “put them out of business” could similarly leverage the incentive structure of the Russian cyber web by indirectly going after the state’s capabilities. If these cybercriminals cannot afford to keep the lights on, then those hackers are also unable (at least in the immediate sense) to use those capabilities for the state’s benefit when the government comes knocking. US policymakers must understand this incentive structure to develop the most effective responses. 
  • Takeaway: Putin does not control every cyber operation conducted within and from Russia. Although he personally ordered the efforts to influence the 2016 US election,105 many cybercriminals (like those conducting phishing scams) do not receive direct instructions from the top levels of the Russian government. There are also many elements of Russia’s security apparatus that recruit nonstate hackers directly (e.g., through a local FSB office), which means that high-level Kremlin knowledge of specific recruitment activities is unclear. Nonetheless, the fact remains that the Putin regime cultivates and actively leverages different actors in the Russian cyber web, and it could take action against specific groups if it chooses. 
  • Action: The US government should be precise about how it specifies and communicates the type of relationship the Russian government has with a given Russian cyber actor. If US policymakers continue to engage with the Putin regime about cracking down on nonstate hackers, particularly cybercriminals, they should identify whether the state actively recruited or engaged with a particular hacking entity before branding it a state-affiliated actor. Within the realm of state-linked actors, the US government should specify in public messaging, internally or in private discussions with Russian counterparts—depending on the case—what that link looks like, such as financing and supervision, ad hoc recruitment, or tacit approval. This matters because establishing any consistency or escalation ladder in the US response will require matching that response to factors such as the group, the group’s actions, and the degree of Russian government involvement. The need for consistency also applies to public messaging, accurately distinguishing between espionage, disruptive attacks, hack-and-leak operations, and other actions. The degree of Russian government involvement in a cyber operation or with a cyber group may determine whether the responses taken by the United States and its allies and its partners target the actor behind the keyboard or specific parts of the Russian government. This is not to say that the Putin regime does not share responsibility for allowing a cybercriminal ecosystem to flourish (it does), nor that the prospects for US–Russian diplomatic engagement on cyber operations are great (they are not),106 but that an effective response must begin with a nuanced grounding in the Kremlin’s spectrum of engagement with hackers. 
  • Takeaway: Even though modern internet capabilities enable unprecedented levels of microtargeting and global reach, Russian government thinking around information technology draws on decades of Russian political and security culture. Russian thinking centers around information security, taking a sweeping view of the modern information environment and how the state should shape it. This view does not make the same, firm distinctions between cyber operations (e.g., in code) and information operations (e.g., in human-readable content) that the United States and its allies and partners do. Cyber and information operations reside within a broad set of Russian government political warfare activities, which, on the whole, emphasize deniability, covertness, the use of proxies, and operations below the threshold of armed conflict, among others. 
  • Action: When talking, writing, and thinking about Russian cyber and information operations, US, ally, and partner policymakers, as well as intelligence analysts, must focus on the Russian government’s unique views on the internet and information space, rather than projecting their own perspectives. Unfortunately, too many publications and analyses from the United States and other governments fail to grasp Russia’s viewpoints, such as dismissing Russian statements about the global internet as mere propaganda and not genuine Kremlin belief. This is not to say that the Kremlin’s more paranoid views about color revolutions or the internet as a CIA project are legitimate, nor that Moscow’s thinking is the most effective in practice. Perhaps the concept of information security is beneficial for its perceived cohesion; or perhaps it becomes so encompassing that it hampers actual operational and tactical action. However, understanding the Kremlin’s view of cyber and information activity, and situating it within other Russian thinking about political warfare and nonmilitary means of conflict, will move the United States and its allies and partners toward a more accurate picture of Russian cyber and information behavior. Arriving at this deeper understanding of Kremlin thinking will help the United States calibrate better policy responses to Russian government behavior, as well as predict how Moscow might respond to certain US actions. 

It is impossible to predict how the Russian cyber ecosystem will evolve in the coming months and years, particularly as Western sanctions continue to erode the Russian economy. Additionally, Russia is facing an IT “brain drain,” with technological talent fleeing the country for more economically stable—as well as freer and safer—work environments. That said, Russia’s web of cyber actors does not appear to be disappearing, which makes deciphering it all the more vital for grappling with the Kremlin’s political warfare and how it uses nonstate actors to augment cyber and information power. 

Author

Justin Sherman is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative, where his work focuses on the geopolitics, governance, and security of the global internet. He is also a senior fellow at Duke University’s Sanford School of Public Policy and a contributor at WIRED Magazine.

Acknowledgments

The author would like to thank Gavin Wilde, John Sipher, Cara Dienst, Dylan Myles-Primakoff, Sean Atkins, and an additional individual who shall remain anonymous for their feedback on an earlier version of this document. The author would also like to thank the individuals who participated in a Chatham House Rule discussion about this issue brief. 

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Dina Temple-Raston and Sean Powers, “‘Cream of the Cream’: Russia’s High-Tech brain drain,” The Record, May 10, 2022, https://therecord.media/cream-of-the-cream-russias-high-tech-brain-drain/.
2    See, for example: “U.S.–Moscow Ties Close to Rupture after Biden’s ‘War Criminal’ Remarks, Russia Says,” Reuters, March 21, 2022, https://www.reuters.com/world/russia-summons-us-envoy-says-ties-close-rupture-after-bidens-putin-comments-2022-03-21/.
3    See, for example: Oscar Jonsson, The Russian Understanding of War: Blurring the Lines Between War and Peace (Washington, DC: Georgetown University Press, 2019).
4    Russian Federation, 2000 Foreign Policy Concept of the Russian Federation, June 2000.
5    S. G. Chekinov and S. A. Bogdanov, “The Nature and Content of a New-Generation War,” Military Thought (Voyennaya Mysl’), no. 3 (2010): 12–23, 16, https://www.usni.org/sites/default/files/inline-files/Chekinov-Bogdanov%20Miltary%20Thought%202013.pdf
6    Adrian Hänni and Miguel Grossmann, “Death to Traitors? The Pursuit of Intelligence Defectors from the Soviet Union to the Putin Era,” Intelligence and National Security 35, no. 3 (2020): 403–423, 404, 407. 
7    See, for example: Oscar Jonsson, The Russian Understanding of War: Blurring the Lines Between War and Peace (Washington, DC: Georgetown University Press, 2019). 
8    Russian Federation, 2000 Foreign Policy Concept of the Russian Federation, June 2000.
9    S. G. Chekinov and S. A. Bogdanov, “The Nature and Content of a New-Generation War,” Military Thought (Voyennaya Mysl’), no. 3 (2010): 12–23, 16, https://www.usni.org/sites/default/files/inline-files/Chekinov-Bogdanov%20Miltary%20Thought%202013.pdf
10    Adrian Hänni and Miguel Grossmann, “Death to Traitors? The Pursuit of Intelligence Defectors from the Soviet Union to the Putin Era,” Intelligence and National Security 35, no. 3 (2020): 403–423, 404, 407.
11    See, for example: US Central Intelligence Agency, Soviet Use of Assassination and Kidnapping, Declassified, 1964, 1, https://carnegieendowment.org/files/SovietUseOfAssassination.pdf; Mark Kramer, “Lessons From Operation ‘Denver,’ the KGB’s Massive AIDS Disinformation Campaign,” MIT Press Reader, May 26, 2020, https://thereader.mitpress.mit.edu/operation-denver-kgb-aids-disinformation-campaign/.
12    Justin Sherman, “Digital Active Measures: Historical Roots of Contemporary Russian Cyber and Information Operations,” Georgetown Security Studies Review 9, no. 2 (Washington, DC: Georgetown University’s Edmund A. Walsh School of Foreign Service, April 2022): 1–9, https://georgetownsecuritystudiesreview.org/wp-content/uploads/2022/04/92_Final-1.pdf.
13    Andrei Soldatov and Michael Weiss, “Inside Russia’s Secret Propaganda Unit,” Newlines Magazine, December 7, 2020, https://newlinesmag.com/reportage/inside-russias-secret-propaganda-unit/.
14    See, for example: Anton Troianovski and Ellen Nakashima, “How Russia’s Military Intelligence Agency Became the Covert Muscle in Putin’s Duels with the West,” Washington Post, December 28, 2018, https://www.washingtonpost.com/world/europe/how-russias-military-intelligence-agency-became-the-covert-muscle-in-putins-duels-with-the-west/2018/12/27/2736bbe2-fb2d-11e8-8c9a-860ce2a8148f_story.html
15    To the reader, GRU Unit 54777 is also known as the 72nd Main Intelligence Information Center (GRITs), which the US Treasury Department identified as belonging to Russia’s Information Operations Troops. US Department of the Treasury, “Treasury Escalates Sanctions Against the Russian Government’s Attempts to Influence U.S. Elections,” April 15, 2021, https://home.treasury.gov/news/press-releases/jy0126.
16    See, for example: Mark Galeotti, “Gangster’s Paradise: How Organized Crime Took Over Russia,” The Guardian, March 23, 2018, https://www.theguardian.com/news/2018/mar/23/how-organised-crime-took-over-russia-vory-super-mafia; Vsevolod Sokolov, “From Guns to Briefcases: The Evolution of Russian Organized Crime,” World Policy Journal 21, no. 1 (Spring 2004): 68–74, https://www.jstor.org/stable/40209904
17    Dmitri Alperovitch and Keith Mularski, “Fighting Russian Cybercrime Mobsters: Report from the Trenches,” BlackHat, July 25–30, 2009, 2, https://www.blackhat.com/presentations/bh-usa-09/ALPEROVITCH/BHUSA09-Alperovitch-RussCybercrime-PAPER.pdf
18    Lucie Kadlecová, “Russian-Speaking Cyber Crime: Reasons Behind Its Success,” The European Review of Organized Crime 2, no. 2 (2015): 104–121, 4, https://standinggroups.ecpr.eu/sgoc/russian-speaking-cyber-crime-reasons-behind-its-success/.
19    Emma Schroeder et al., Hackers, Hoodies, and Helmets: Technology and the Changing Face of Russian Private Military Contractors, Atlantic Council, July 2022.
21    Candace Rondeaux, Decoding the Wagner Group: Analyzing the Role of Private Military Security Contractors in Russian Proxy Warfare, New America, November 7, 2019, 8, https://www.newamerica.org/international-security/reports/decoding-wagner-group-analyzing-role-private-military-security-contractors-russian-proxy-warfare/.
22    Julia Ioffe, “Dear Lawrence O’Donnell, Don’t Mansplain to Me About Russia,” The New Republic, August 8, 2013, https://newrepublic.com/article/114234/lawrence-odonnell-yells-julia-ioffe-about-putin-and-snowden
23    Thanks to Gavin Wilde for discussion of this point.
24    Jason Healey, “The Spectrum of National Responsibility for Cyberattacks,” The Brown Journal of World Affairs 18, no. 1 (Fall/Winter 2011): 57–70, https://www.jstor.org/stable/24590776.
25    Jason Healey, Beyond Attribution: Seeking National Responsibility in Cyberspace, Atlantic Council, February 22, 2012, 2, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/beyond-attribution-seeking-national-responsibility-in-cyberspace/.
26    See, for example: Healey, Beyond Attribution; Tim Maurer, Cyber Mercenaries: The State, Hackers, and Power (Cambridge: Cambridge University Press, 2017); Erica D. Borghard and Shawn W. Lonergan, “Can States Calculate the Risks of Using Cyber Proxies?” Orbis 60, no. 3 (2016): 395–416.
27    See, for example: Vadim Volkov, Violent Entrepreneurs: The Use of Force in the Making of Russian Capitalism (Ithaca: Cornell University Press, 2002).
28    Thanks to a workshop participant for discussion of this point.
29    For a recently published discussion of the Russian government’s cyber units, see: Andrei Soldatov and Irina Borogan, Russian Cyberwarfare: Unpacking the Kremlin’s Capabilities (Washington, D.C.: Center for European Policy Analysis), https://cepa.org/russian-cyberwarfare-unpacking-the-kremlins-capabilities/.
30    US Library of Congress, Congressional Research Service, Russian Cyber Units, by Andrew S. Bowen, IF11718 (2022), 2, https://sgp.fas.org/crs/row/IF11718.pdf; “Russia’s Gamaredon aka Primitive Bear APT Group Actively Targeting Ukraine,” Palo Alto Networks, February 3, 2022 (updated June 22, 2022), https://unit42.paloaltonetworks.com/gamaredon-primitive-bear-ukraine-update-2021/
31    See, for example: “Investigative Report: On The Trail Of The 12 Indicted Russian Intelligence Officers,” RadioFreeEurope/RadioLiberty, July 19, 2018, https://www.rferl.org/a/investigative-report-on-the-trail-of-the-12-indicted-russian-intelligence-officers/29376821.html
32    US Department of Justice, “Grand Jury Indicts 12 Russian Intelligence Officers for Hacking Offenses Related to the 2016 Election,” July 13, 2018, https://www.justice.gov/opa/pr/grand-jury-indicts-12-russian-intelligence-officers-hacking-offenses-related-2016-election
33    See, for example: Andy Greenberg, “Russia’s Sandworm Hackers Attempted a Third Blackout in Ukraine,” WIRED, April 12, 2022, https://www.wired.com/story/sandworm-russia-ukraine-blackout-gru/; Andy Greenberg, Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers (New York: Penguin Random House, 2020).
34    Thanks to individuals who participated in a Chatham House Rule workshop on Russian cyber operations for discussion of this issue.
35    See, for example: US Cybersecurity & Infrastructure Security Agency (CISA), “Russian Foreign Intelligence Service (SVR) Cyber Operations: Trends and Best Practices for Network Defenders,” April 26, 2021, https://www.cisa.gov/uscert/ncas/alerts/aa21-116a
36    “How the Dutch Foiled Russian ‘Cyber-Attack’ on OPCW,” BBC, October 4, 2018, https://www.bbc.com/news/world-europe-45747472
37    United States of America vs. Aleksei Sergeyevich Morenets, et al. (2018), 6, https://nsarchive.gwu.edu/document/17596-united-states-v-alexei-sergeyevich-morenets-et
38    US Department of the Treasury, “Treasury Escalates Sanctions Against the Russian Government’s Attempts to Influence U.S. Elections.” 
39    US Department of the Treasury, “Treasury Sanctions Russia with Sweeping New Sanctions Authority,” April 15, 2021, https://home.treasury.gov/news/press-releases/jy0127.
40    Benjamin Carlson, “Who Was the 12th Russian Spy at Microsoft?” The Atlantic, July 14, 2010, https://www.theatlantic.com/international/archive/2010/07/who-was-the-12th-russian-spy-at-microsoft/344876/; Sébastian Seibt, “Microsoft Entangled in Russian Spy Scandal,” France24, July 15, 2010, https://www.france24.com/en/20100715-microsoft-entangled-russian-spy-scandal-alexey-karetnikov-swap
41    US Department of the Treasury, “Treasury Sanctions Russia with Sweeping New Sanctions Authority.”
42    “Czech Intel Reveals Russian Hackers Using IT Company Front: Media,” UNIAN Information Agency, March 19, 2019, https://www.unian.info/world/10484166-czech-intel-reveals-russian-hackers-using-it-company-front-media.html.
43    Clarissa Ward et. al, “Russian Election Meddling Is Back – via Ghana and Nigeria – and in Your Feeds,” CNN, April 11, 2020, https://www.cnn.com/2020/03/12/world/russia-ghana-troll-farms-2020-ward/index.html; US Office of the Director of National Intelligence, Foreign Threats to the 2020 US Federal Elections, March 2021, 4, https://www.dni.gov/files/ODNI/documents/assessments/ICA-declass-16MAR21.pdf.
44    To the reader, as part of the broader Kremlin recruitment of criminals, see: Mark Galeotti, Crimintern: How the Kremlin Uses Russia’s Criminal Networks in Europe (Berlin: European Council on Foreign Relations, April 2017), https://ecfr.eu/publication/crimintern_how_the_kremlin_uses_russias_criminal_networks_in_europe/.
45    See, for example: Raymond Pompon recounting Russian cybercriminals complaining about the prospect of being coopted by the security services: Raymond Pompon, “Russian Hackers, Face to Face,” F5, August 1, 2017, https://www.f5.com/labs/articles/threat-intelligence/russian-hackers-face-to-face.
46    See, for example: “Russian hacker gang arrested over $25m theft,” BBC, June 2, 2016, https://www.bbc.com/news/technology-36434104; Jeff Stone, “Rare Cybercrime Enforcement in Russia Yields 25 Arrests, Shutters ‘BuyBest’ Marketplace,” CyberScoop, March 25, 2020, https://www.cyberscoop.com/buybest-hackers-arrested-fsb-russia/; Roman Zakharov, “Detentions in the Case of the Largest Group of Hackers Took Place in 11 Regions of the Russian Federation,” Задержания по делу крупнейшей группировки хакеров прошли в 11 регионах РФ, TV Zvevda, March 24, 2020, https://tvzvezda.ru/news/2020324943-i7KCz.html.
47    “Russian Hackers Allegedly Tied to FSB and Hack of U.S. Democratic Party Handed Lengthy Prison Terms,” RadioFreeEurope/RadioLiberty, February 14, 2022, https://www.rferl.org/a/russia-fsb-hackers-sentenced-democratic-party/31703350.html.
48    Nina A. Kollars and Michael B. Petersen, “Feed the Bears, Starve the Trolls: Demystifying Russia’s Cybered Information Confrontation Strategy,” The Cyber Defense Review (2019): 145–158, 148 https://cyberdefensereview.army.mil/Portals/6/Session%203%20Number%202%20CDR-Special%20Edition-2019.pdf.
49    “It’s Our Time to Serve the Motherland,” Meduza, August 7, 2018, https://meduza.io/en/feature/2018/08/07/it-s-our-time-to-serve-the-motherland; Andrei Soldatov, “Cyber Surprise,” Кибер-сюрприз, Novaya Gazeta, May 30, 2007, https://novayagazeta.ru/articles/2007/05/31/33284-kiber-syurpriz
50    Insikt Group, “Dark Covenant: Connections Between the Russian State and Criminal Actors,” (Somerville: Recorded Future, September 2021), 4, https://www.recordedfuture.com/russian-state-connections-criminal-actors.
51    Daniel Turovsky, “Why Did the State Corporation Need a System for Organizing DDoS Attacks,” Грузить по полной программе, Meduza, September 3, 2015, https://meduza.io/feature/2015/09/03/gruzit-po-polnoy-programme; Fred Weir, “In Russia’s Cyberscene: Kremlin Desires, Private Hackers, and Patriotism,” The Christian Science Monitor, October 27, 2016, https://www.csmonitor.com/World/Europe/2016/1027/In-Russia-s-cyberscene-Kremlin-desires-private-hackers-and-patriotism
52    Turovsky, “Why Did the State”; Weir, “In Russia’s Cyberscene.”
53    Turovsky, “Why Did the State”; Weir, “In Russia’s Cyberscene.”
54    US Department of Justice, “U.S. Charges Russian FSB Officers and Their Criminal Conspirators for Hacking Yahoo and Millions of Email Accounts,” Justice.gov, March 15, 2017, https://www.justice.gov/opa/pr/us-charges-russian-fsb-officers-and-their-criminal-conspirators-hacking-yahoo-and-millions
55    United States of America v. Dmitry Dokuchaev, Igor Sushchin, Alexsey Belan, and Karim Baratov, CR17-109 (2017), 2, https://www.justice.gov/opa/press-release/file/948201/download.
56    United States of America v. D. Dokuchaev et al., 3. 
57    United States of America v. D. Dokuchaev et al., 3. 
58    United States of America v. D. Dokuchaev et al., 2–3. 
59    See, for example: Insikt Group, “Dark Covenant”; Joe Cheravitch and Bilyana Lilly, “Russia’s Cyber Limitations in Personnel Recruitment and Innovation, Their Potential Impact on Future Operations and How NATO and Its Members Can Respond,” NATO Cooperative Cyber Defense Center of Excellence, December 2020, https://ccdcoe.org/uploads/2020/12/2-Russias-Cyber-Limitations-in-Personnel-Recruitment-and-Innovation_ebook.pdf, 31–59: 38–39; Flashpoint, “Russia Is Cracking Down on Cybercrime. Here Are the Law Enforcement Bodies Leading the Way,” February 14, 2022, https://flashpoint.io/blog/russian-cybercrime-law-enforcement-bodies-fsb-mvd-deptk/; United States of America vs. Yevgeniy Alexandrovich Nikulin, CR 16-00440 WHA (2020), United States’ Motion in Limine No. Six to Exclude Hearsay Statements by Nikita Kislitsin, 4, https://www.courthousenews.com/wp-content/uploads/2020/07/USANikulin-KislitsinMotion.pdf
60    US Department of the Treasury, “Treasury Sanctions Russia with Sweeping New Sanctions Authority.”
61    US Department of the Treasury, “Treasury Sanctions Russia with Sweeping New Sanctions Authority.”
62    Joe Tidy, “Evil Corp: ‘My hunt for the World’s Most Wanted Hackers,’” BBC, November 17, 2021, https://www.bbc.com/news/technology-59297187; Kartikay Mehrotra and Olga Kharif, “Ransomware HQ: Moscow’s Tallest Tower Is a Cybercriminal Cash Machine,” Bloomberg, November 3, 2021, https://www.bloomberg.com/news/articles/2021-11-03/bitcoin-money-laundering-happening-in-moscow-s-vostok-tower-experts-say
63    Weir, “In Russia’s Cyberscene.”
64    See, for example: Françoise Daucé, Benjamin Loveluck, Bella Ostromooukhova, and Anna Zaytseva, “From Citizen Investigators to Cyber Patrols: Volunteer Internet Regulation in Russia,” Russian Review of Social Research 11, no. 3 (2019): 46–70; “Nashi Denies Cyberattack on Kommersant, Threatens Lawsuit,” Moscow Times, February 9, 2012, https://www.themoscowtimes.com/2012/02/09/nashi-denies-cyberattack-on-kommersant-threatens-lawsuit-a12531. See also, on the Internet Research Agency: Adrian Chen, “The Agency,” New York Times Magazine, June 2, 2015, https://www.nytimes.com/2015/06/07/magazine/the-agency.html.
65    Chloe Arnold, “Russian Group’s Claims Reopen Debate On Estonian Cyberattacks,” RadioFreeEurope/RadioLiberty, March 30, 2009, https://www.rferl.org/a/Russian_Groups_Claims_Reopen_Debate_On_Estonian_Cyberattacks_/1564694.html. On patriotic hacking, see also: Dorothy Denning, “Tracing the Sources of Today’s Russian Cyberthreat,” Scientific American, August 18, 2017, https://www.scientificamerican.com/article/tracing-the-sources-of-today-rsquo-s-russian-cyberthreat/.
66    Stephen W. Korns and Joshua E. Kastenberg, “Georgia’s Cyber Left Hook,” Parameters 38, no. 4 (Winter 2008–2009), https://www.army.mil/article/19351/georgias_cyber_left_hook. See also, on the Russian Business Network criminal group some suspected was involved: Peter Warren, “Hunt for Russia’s Web Criminals,” The Guardian, November 15, 2007, https://www.theguardian.com/technology/2007/nov/15/news.crime
67    Tetyana Lokot, “Public Networked Discourses in the Ukraine-Russia Conflict: ‘Patriotic Hackers’ and Digital Populism,” Irish Studies in International Affairs 28 (2017): 99–116, 113, https://www.jstor.org/stable/10.3318/isia.2017.28.9
68    Rain Ottis, “Analysis of the 2007 Cyber Attacks Against Estonia from the Information Warfare Perspective,” NATO Cooperative Cyber Defense Center of Excellence, 2008, 2, https://ccdcoe.org/uploads/2018/10/Ottis2008_AnalysisOf2007FromTheInformationWarfarePerspective.pdf.
69    Luke Harding, “Russia up in arms after Estonians remove statue of Soviet soldier,” The Guardian, April 27, 2007, https://www.theguardian.com/world/2007/apr/28/russia.lukeharding.
70    Meduza, “It’s Our Time to Serve the Motherland.”
71    See, for example: Joe Tidy, “Russian Vigilante Hacker: ‘I Want to Help Beat Ukraine from My Computer,’” BBC, February 25, 2022, https://www.bbc.com/news/technology-60528594.
72    Evgeny Morozov, “An Army of Ones and Zeros,” Slate Magazine, August 14, 2008, https://slate.com/technology/2008/08/how-i-became-a-soldier-in-the-georgia-russia-cyberwar.html
73    “Putin Compares Hackers To ‘Artists,’ Says They Could Target Russia’s Critics For ‘Patriotic’ Reasons,” RadioFreeEurope/RadioLiberty, June 1, 2017, https://www.rferl.org/a/russia-putin-patriotic-hackers-target-critics-not-state/28522639.html
74    “Putin Compares Hackers To ‘Artists,’” RadioFreeEurope/RadioLiberty.
75    United States of America vs. Maksim V. Yakubets and Igor Turashev, CR 19-342 (W.D. Pa., 2019), https://www.justice.gov/opa/press-release/file/1223586/download; US Department of the Treasury, “Treasury Sanctions Evil Corp, the Russia-Based Cybercriminal Group Behind Dridex Malware,” December 5, 2019, https://home.treasury.gov/news/press-releases/sm845
76    “The FSB’s Personal Hackers,” Meduza, December 12, 2019, https://meduza.io/en/feature/2019/12/12/the-fsb-s-personal-hackers; Mark Krutov and Sergey Dobrynin, “Son in Law for 5 Million,” Зять на 5 миллионов, Svoboda, December 9, 2019, https://www.svoboda.org/a/30315952.html
77    “‘V’ for ‘Vympel’: FSB’s Secretive Department ‘V’ Behind Assassination Of Georgian Asylum Seeker in Germany,” Bellingcat, February 17, 2020, https://www.bellingcat.com/news/uk-and-europe/2020/02/17/v-like-vympel-fsbs-secretive-department-v-behind-assassination-of-zelimkhan-khangoshvili/
78    US Library of Congress, Russian Military Intelligence: Background and Issues for Congress, by Andrew S. Bowen, R46616, Congressional Research Service, November 2021, 13, https://sgp.fas.org/crs/intel/R46616.pdf; “‘V’ for ‘Vympel’”; “FSB’s Magnificent Seven: New Links between Berlin and Istanbul Assassinations,” Bellingcat, June 29, 2020, https://www.bellingcat.com/news/uk-and-europe/2020/06/29/fsbs-magnificent-seven-new-links-between-berlin-and-istanbul-assassinations/
79    US Department of the Treasury, “Treasury Sanctions Evil Corp.”
80    Meduza, “The FSB’s personal hackers”; US Department of the Treasury, “Treasury Sanctions Evil Corp.”
81    Arielle Waldman, “Fallout from REvil arrests shakes up ransomware landscape,” TechTarget, February 14, 2022, https://www.techtarget.com/searchsecurity/news/252513401/Fallout-from-REvil-arrests-shakes-up-ransomware-landscape
82    James Rundle, Catherine Stupp, and Kim S. Nash, “What Russia’s Arrest of REvil Hackers Means for Ransomware,” Wall Street Journal, January 14, 2022, https://www.wsj.com/articles/what-the-russian-crackdown-on-revil-means-for-ransomware-11642188675.  
83    Tim Maurer, Why the Russian Government Turns a Blind Eye to Cybercriminals, Carnegie Endowment for International Peace, February 2, 2018, https://carnegieendowment.org/2018/02/02/why-russian-government-turns-blind-eye-to-cybercriminals-pub-75499.
84    Joe Tidy, “74% of Ransomware Revenue Goes to Russia-Linked Hackers,” BBC, February 14, 2022, https://www.bbc.com/news/technology-60378009
85    Fiona Hill and Clifford G. Gaddy, What Makes Putin Tick, and What the West Should Do, Brookings Institution, January 13, 2017, https://www.brookings.edu/research/what-makes-putin-tick-and-what-the-west-should-do/
86    Mark Galeotti, “Russia Has No Grand Plans, but Lots of ‘Adhocrats,’” Intellinews, January 18, 2017, https://www.intellinews.com/stolypin-russia-has-no-grand-plans-but-lots-of-adhocrats-114014/. See also: Mark Galeotti, “Russia’s Murderous Adhocracy,” Moscow Times, August 22, 2020, https://www.themoscowtimes.com/2020/08/22/russias-murderous-adhocracy-a71219. Thanks as well to Brian Whitmore for discussion of this point during the writing of my Reassessing RuNet report. 
87    Lucian Kim, “In Putin’s Russia, An ‘Adhocracy’ Marked By Ambiguity And Plausible Deniability,” NPR, July 21, 2017, https://www.npr.org/sections/parallels/2017/07/21/538535186/in-putins-russia-an-adhocracy-marked-by-ambiguity-and-plausible-deniability.
88    See, for example: Paul Stronski, Implausible Deniability: Russia’s Private Military Companies, Carnegie Endowment for International Peace, June 2, 2020, https://carnegieendowment.org/2020/06/02/implausible-deniability-russia-s-private-military-companies-pub-81954
89    Sarah Rainsford, “Have Russian Spies Lost Their Touch?” BBC, October 6, 2018, https://www.bbc.com/news/world-europe-45762300.  
90    To the reader, for instance, the Russian government has “vehemently denied accusations” of influence over cyber proxies active in the conflict in Ukraine. However, research by private sector cybersecurity companies has since suggested there are links between Russian cyber proxy groups and the Russian government. Tim Maurer, “Cyber Proxies and the Crisis in Ukraine,” in Kenneth Geers (ed.), Cyber War in Perspective: Russian Aggression Against Ukraine (Tallinn: NATO Cooperative Cyber Defense Center of Excellence, 2015), 85, https://ccdcoe.org/uploads/2018/10/Ch09_CyberWarinPerspective_Maurer.pdf. There are many other examples, for example: Jack Detsch, “How Russia and Others Use Cybercriminals as Proxies,” Christian Science Monitor, June 28, 2017, https://www.csmonitor.com/USA/2017/0628/How-Russia-and-others-use-cybercriminals-as-proxies
91    “Putin Says Russia Ready to Extradite Cyber Criminals to US on Reciprocal Basis,” TASS, June 13, 2021, https://tass.com/russias-foreign-policy/1302315
92    Dennis Broeders, Liisi Adamson, and Rogier Creemers, Coalition of the Unwilling? Chinese and Russian Perspectives on Cyberspace, Social Science Research Network, December 2019, 2, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3493600.
93    Rory Cormac and Richard J. Aldrich, “Grey is the New Black: Covert Action and Implausible Deniability,” International Affairs 94, no. 3 (May 2018): 477–494, 487, 490–491.
94    Valery Gerasimov, “The Value of Science in Foresight,” Ценность науки в предвидении, VPK-News, February 26, 2013, https://vpk-news.ru/articles/14632; Valery Gerasimov, “The Value of Science Is in the Foresight,” Military–Industrial Courier, February 27, 2013, translated June 2014 by Robert Coalson, published in Military Review (January–February 2016), 24, https://www.armyupress.army.mil/portals/7/military-review/archives/english/militaryreview_20160228_art008.pdf.
95    See, for example: Nicole Ng and Eugene Rumer, The West Fears Russia’s Hybrid Warfare. They’re Missing the Bigger Picture, Carnegie Endowment for International Peace, July 3, 2019, https://carnegieendowment.org/2019/07/03/west-fears-russia-s-hybrid-warfare.-they-re-missing-bigger-picture-pub-79412; Mark Galeotti, “I’m Sorry for Creating the ‘Gerasimov Doctrine,’” Foreign Policy, March 5, 2018, https://foreignpolicy.com/2018/03/05/im-sorry-for-creating-the-gerasimov-doctrine/
96    To the reader, for a good treatment of this issue and the terminology “hybrid warfare,” see: Mark Galeotti, Russian Political War: Moving Beyond the Hybrid (London: Routledge, December 2020).
97    Eugene Rumer, The Primakov (Not Gerasimov) Doctrine in Action, Carnegie Endowment for International Peace, June 2019, 1, https://carnegieendowment.org/2019/06/05/primakov-not-gerasimov-doctrine-in-action-pub-79254
98    US Senate Select Committee on Intelligence, Russian Active Measures Campaigns and Interference in the 2016 US Election. Volume 3: U.S. Government Response to Russian Activities, 116th Congress, 116–XX, February 2020, 3, https://www.intelligence.senate.gov/sites/default/files/documents/Report_Volume3.pdf.
99    United States of America v. Karim Baratov, No. 17-CR-103 VC (2017), Plea Agreement, 5, https://www.justice.gov/usao-ndca/page/file/1021221/download. See more at: https://www.justice.gov/usao-ndca/us-v-dmitry-dokuchaev-et-al
100    Galeotti, Russian Political War, 83. To the reader, “deniability and the opportunity to pick up ‘off the shelf’ assets often come at the expense of competence and discipline.”
101    Christopher Whyte, “Leaked Hacker Logs Show Weaknesses of Russia’s Cyber Proxy Ecosystem,” CSO Online, March 29, 2022, https://www.csoonline.com/article/3655075/leaked-hacker-logs-show-weaknesses-of-russia-s-cyber-proxy-ecosystem.html
102    See, for example, Robert Morgus, “Whodunnit? Russia and Coercion Through Cyberspace,” War on the Rocks, October 19, 2016, https://warontherocks.com/2016/10/whodunnit-russia-and-coercion-through-cyberspace/; “It’s a Feature, Not a Bug: Discord Observed in Russian Intelligence Operations,” Horkos, September 20, 2018, https://horkos.medium.com/its-a-feature-not-a-bug-discord-observed-in-russian-intelligence-operations-2e2e79c4c8cc; “CrowdStrike’s Work with the Democratic National Committee: Setting the Record Straight,” Crowdstrike, June 5, 2020, https://www.crowdstrike.com/blog/bears-midst-intrusion-democratic-national-committee/.
103    Jonathan Haslam, Near and Distant Neighbors: A New History of Soviet Intelligence (New York: MacMillan, 2015). See also, in the cyber context specifically: Kimberly Zenz, “Infighting Among Russian Security Services in the Cyber Sphere,” Black Hat USA, 2019, https://i.blackhat.com/USA-19/Thursday/us-19-Zenz-Infighting-Among-Russian-Security-Services-in-the-Cyber-Sphere.pdf.
104    Borghard and Lonergan, “Can States Calculate the Risks of Using Cyber Proxies?”
105    US Office of the Director of National Intelligence, Background to “Assessing Russian Activities and Intentions in Recent US Elections”: The Analytic Process and Cyber Incident Attribution, ICA 2017-01D, January 2017, ii, https://www.dni.gov/files/documents/ICA_2017_01.pdf.
106    See, for example: Andrei Soldatov, “Can the U.S. Still Cooperate with Russia’s Security Agencies?” Moscow Times, May 14, 2021, https://www.themoscowtimes.com/2021/05/14/can-the-us-still-cooperate-with-russias-security-agencies-a73900

The post Untangling the Russian web: Spies, proxies, and spectrums of Russian cyber behavior  appeared first on Atlantic Council.

Policy hackers take Vegas https://www.atlanticcouncil.org/content-series/the-5x5/policy-hackers-take-vegas/ Thu, 15 Sep 2022 20:07:47 +0000 https://www.atlanticcouncil.org/?p=566235



Every year, in the early August heat, thousands of hackers from around the world head to Las Vegas, Nevada for a series of cybersecurity conferences known as Hacker Summer Camp. This year, the Cyber Statecraft Initiative – and a few friends – decided to ship out to see what all the hype is about. Below, they talk about their experience at the DEF CON Hacking Conference, why policy conversations belong at a hacker conference, and much more!

1. Why should a think tank be at a hacker conference?

Stewart Scott, assistant director, Cyber Statecraft Initiative, Atlantic Council:

“Cybersecurity policy is one of those spaces where actual, deep technical expertise and policymaking experience don’t often overlap. Policymakers would be missing out by trying to craft laws and rules about technologies without speaking to the people who make and/or break them.”

Will Loomis, associate director, Cyber Statecraft Initiative, Atlantic Council:

“With recent headline-grabbing security incidents like Colonial Pipeline, SolarWinds, and Log4j, there is finally sufficient momentum to make meaningful change when it comes to cybersecurity policy in the United States. However, these changes cannot be made without input from the folks who will be most affected for decades to come – the hackers and technical practitioners. DEF CON provides the perfect opportunity to bridge this divide and bring these two communities together.” 

Safa Shahwan Edwards, deputy director, Cyber Statecraft Initiative, Atlantic Council:

“Think tanks have a track record of serving as a bridge between government and industry. By connecting security researchers with government, policymakers and hackers can better learn from one another and craft more effective policies.”

Trey Herr, director, Cyber Statecraft Initiative, Atlantic Council:

“How can you make policy about infosec without the people working in infosec? Applied policy research means trying to get to know these issues from the perspective of those building, running, and breaking things.”

Sarah Powazek, program director, Public Interest Cybersecurity at the UC Berkeley Center for Long-Term Cybersecurity:

“To put it simply, hackers make good policy, and they shouldn’t have to travel to or live in DC to contribute to the cyber policy space. Policy@DEF CON aimed to bring the public policy party to hackers where they gather and with topics that are directly applicable to them.” 

2. What policy-focused programming was offered at DEF CON this year?

Scott: “DEF CON ran an entire Policy Village, which was great to be a part of. Some highlights that come to mind: the Meet the Fed Series, where DEF CON attendees got to hang out with different federal cybersecurity officials in a pretty laid-back capacity; and Gavel Battles, which saw some heated debates over beers and giant inflatable gavels.”

Loomis: “DEF CON officially introduced Policy@DEF CON this year – the first time in the conference’s history it has had a space dedicated exclusively to policy content. However, there was also plenty of additional policy-focused programming spread throughout the forum – I was able to catch some awesome maritime cyber policy talks at the ICS Village and a discussion on aerospace cyber regulations at the Aerospace Village.”

Shahwan Edwards: “There was an entire track just devoted to policy at DEF CON, which was cool, but what was even cooler was the amount of interest this track garnered! The policy village held over twenty discussions, but some that stood out to me were Hacking Law is for Hackers; Meet the Feds: ONCD + CISA Editions; and the Offensive Cyber Industry discussion.”

Powazek: “There was an incredible roster of talks this year, all of which were interactive with big Q&A portions and sometimes breakout groups working on specific proposals! My favorite was the Election Security Bridge Building talk—which brought together election security machine vendors, election officials, and security researchers to talk about trust and collaboration. There were also talks on offensive security, hacker law, crazy Gavel Battle debates, and much more.”

3. What surprised you the most about your DEF CON experience?

Scott: “I was surprised at how much the conference crammed into a few days—trying to catch every presentation or workshop I was interested in wasn’t even close to possible.”

Loomis: “As this was my first DEF CON experience, I think I was most surprised both by the sheer scale of people and programming and by how much the core hacker ethos was built into every single aspect of the event.”

Shahwan Edwards: “First, the sheer quantity of programming. I knew this would be a large conference, but I still wasn’t prepared for the sprawl and number of discussions, activities, and receptions. Second, I was surprised by the amount of interest in policy-focused programming and LineCon (the long lines outside any DEF CON programming are called LineCon) at the Policy Village.”

Herr: “The degree to which DEF CON is a celebration of the layered history of the culture of hacking and cyberspace. There’s a historical lens to a lot of what goes on – long running traditions and programming, as well as remembrance of those lost. This is much, much more than another cybersecurity conference in the desert – it’s all the flavors of an online bulletin board system come to life.”

Powazek: “I was shocked and gratified to see how popular the DEF CON policy space was this year. There were lines out the door for the policy team’s two small rooms, and many attendees had never been involved in policy before. There is an incredible appetite for relevant hacker policy content!”

4. What was one thing you missed at Summer Camp this year you’d like to do next year?

Scott: “I would have loved to spend more time at the technical talks. The sheer number and variety of exploits is amazing—I heard there was one talk where a pair of researchers used emojis to deliver shellcode? Wild.”

Loomis: “I would have liked to explore more of the wide array of programming offered at DEF CON, but more broadly, I wish I could have stopped by the B-Sides LV and the Diana Initiative conferences earlier in the week. It looked like there was a plethora of great content presented – it’s not just DEF CON!”

Shahwan Edwards: “The Social Engineering Community for sure. I’d love to learn more about the ways malicious actors can prompt certain actions or behaviors by leveraging soft skills—something often overlooked in cybersecurity.”

Herr: “Lockpicking remains one of the great microcosms of the security mindset and hacking. The lockpicking village is definitely on the list for next time.”

Powazek: “I didn’t get to spend very much time in the Villages, which are in many ways the heart of the con. I’d like to loiter longer in ICS Village, Girls Hack Village, and Aerospace Village to name a few.” 

5. What is your biggest takeaway coming out of Hacker Summer Camp?

Scott: “Don’t even try to see everything! Instead, pick a couple of things you need to be at and then go with the flow the rest of the time.”

Loomis: “Every single person approaches an event like this differently. Tailor your agenda to what YOU want to do – there are talks from 10am-11pm every day, so pace yourself – you won’t be able to do it all!”

Shahwan Edwards: “Have fun, talk to people, learn something new, but also be sure to pace yourself over the weekend.”

Herr: “Expired: Spot the Fed; Tired: Meet the Fed; Wired: Hack with the Feds!”

Powazek: “There is no substitute for meeting folks in person! I’m grateful for the chance to meet wonderful policy and hacker friends at least once a year at DEF CON, and I believe connecting these folks in person goes a long way in pushing forward technically informed and strategic policy proposals.” 

Interested in the work we presented at DEF CON? Check out:

Report

Sep 14, 2022

Dragon tails: Preserving international cybersecurity research

By Stewart Scott, Sara Ann Brackett, Yumi Gambrill, Emmeline Nettles, Trey Herr

A quantitative study on whether legal context can impact the supply of vulnerability research with detrimental effects for cybersecurity writ large through the coordinated vulnerability disclosure process (CVD), using recent regulations in China as a case study.


Contributors:

Will Loomis is an associate director with the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He leads the Initiative’s work on critical infrastructure protection and industrial control systems (ICS) security. Will is also a Certified Bourbon Steward.

Safa Shahwan Edwards is the deputy director of the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). In this role, she manages the administration and external communications of the Initiative, as well as the Cyber 9/12 Strategy Challenge, the Initiative’s global cyber policy and strategy competition.

Dr. Trey Herr is the director of the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). His team works on cybersecurity and geopolitics including cloud computing, the security of the internet, supply chain policy, cyber effects on the battlefield, and growing a more capable cybersecurity policy workforce. 

Stewart Scott is an assistant director with the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He works on the Initiative’s systems security portfolio, which focuses on software supply chain risk management and open source software security policy. 

Sarah Powazek serves as the Program Director of Public Interest Cybersecurity at the UC Berkeley Center for Long-Term Cybersecurity (CLTC), where she leads flagship work on the Citizen Clinic, the Consortium of Cybersecurity Clinics, and public interest cybersecurity research. Sarah previously worked at CrowdStrike Strategic Advisory Services, and as the Program Manager of the Ransomware Task Force. She is also an active member of the hacker community, and helps organize Hackers On The Hill and DEF CON Policy.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.


Dragon tails: Preserving international cybersecurity research https://www.atlanticcouncil.org/in-depth-research-reports/report/preserving-international-cybersecurity-research/ Wed, 14 Sep 2022 09:36:55 +0000 https://www.atlanticcouncil.org/?p=546678



Executive summary

Cybersecurity writ large benefits enormously from an international community of researchers, hackers, and bug hunters. They find and disclose critical vulnerabilities, often responsibly, while working outside affected vendors or codebases. Yet the policy debates that shape the legal environment around vulnerability disclosure often fail to consider cybersecurity as a function of both the supply of vulnerability research and the health of those research communities. This paper analyzes a series of Chinese regulatory changes altering vulnerability disclosure practices to assess their impact on the supply of research from China’s highly productive community. The paper examines disclosure data from a mix of proprietary and open-source codebases, looking across vendor and software types with a simple time-series analysis to gauge the impact of recent Chinese regulations. The study of this data revealed that while national regulations do indeed affect the supply of vulnerability research under some circumstances, the effect is not as large, consistent, or discernible as might first be expected. The prospect of copycat regulations, however, motivates concluding policy recommendations focused on strengthening the health of the global vulnerability-research community and lowering barriers to entry for both research and disclosure.

Introduction

One complicated reality of cybersecurity is the sheer volume of vulnerability disclosure to technology vendors and open-source projects that originates outside these organizations. Indeed, the notion originally articulated by Eric Raymond and credited as Linus’ Law, that “given enough eyeballs, all bugs are shallow,” is a central part of the open-source ecosystem’s security model.1 What often goes underappreciated is that these “eyes” may belong to the same person or people frequently examining the same or related codebases—the global distribution of eyes is uneven. Bug-bounty programs often show disproportionate contributions from a small number of people and countries, as some are home to comparatively more active researcher communities.2

One of the most prolific of these communities is that in China. For at least a decade, Chinese corporate research teams and individual researchers have dominated marquee hacking competitions and corporate bounty programs, scouring everything from browsers and mobile operating systems to networking gear. Their dominance in hacking competitions halted abruptly in 2018, when China blocked its researchers from participating in such events abroad.3 Soon after, the Regulations on the Management of Network Product Security Vulnerabilities, or RMSV for short, took effect in September 2021. The law requires Chinese network product providers to notify the country’s Ministry of Industry and Information Technology (MIIT) about vulnerabilities found in “network products”4 within a few days of reporting them to the appropriate vendor.5 As 2021 wound to a close, the legal environment for Chinese vulnerability research appeared fraught with the potential for a chilling effect caused by the ambiguities and requirements within the RMSV.

Enter Log4j. At the end of 2021, a bug in the popular logging library Log4j came into public view as vendors raced to patch millions of vulnerable devices and applications—by some estimates, 10 percent of digital systems, including servers, web applications, Internet-of-Things (IoT) devices, and more, were vulnerable.6 Amid the ensuing tumult—White House summits,7 Congressional testimony,8 national directives,9 and desperate calls for patching—a somewhat surprising strand in the saga went largely undiscussed. In late November 2021, a researcher at Chinese technology giant Alibaba discovered a severe vulnerability in Log4j and disclosed it privately to the Apache Software Foundation (ASF) team maintaining the library. A month later, Alibaba found itself on the receiving end of government sanction. China’s Ministry of Industry and Information Technology suspended subsidiary Alibaba Cloud from a cyber threat- and information-sharing partnership for six months, apparently for failing to report the Log4j vulnerability, also known as Log4Shell, directly and promptly to the MIIT.10 11

The precise enforcement mechanism behind Alibaba’s suspension remains unclear in the sparse reporting on the incident. The MIIT may have cited a clause in the company’s contract for the government-facing information-sharing platform, or it may have relied on the aforementioned RMSV, published in July 2021.12 Legal mechanisms aside, and disregarding the concurrent rifts between the Chinese government and Alibaba, which has been on the losing end of massive antitrust fines recently,13 an uncomfortable and under-addressed fact remains: the MIIT appears to have punished Alibaba, a titanic cybersecurity entity, for following what were by all accounts best practices, or at least something close to them. This augurs poorly for the supply of vulnerability research originating in China, and thus the security of software including open source and its “many eyes.” The law has the potential to either funnel vulnerability information to the MIIT well ahead of industry-standard timelines or to create “a chilling effect on future coordinated disclosure”14 in one of the world’s largest information technology (IT) hubs.

Community impact, not national intent

Chen Zhaojun, a researcher with Alibaba Cloud’s security team, reported Log4Shell to ASF privately via email on November 24, 2021. Just weeks later—by December 9—he followed up with an alert that discussion of the bug was percolating through cybersecurity fora.15 Coordinated vulnerability disclosure (CVD) refers to the process in which discoverers pass vulnerability information to various vendors, affected entities, and eventually the public. Given the many overlapping stakeholders in any software ecosystem, CVD is messy, and as in the case of Log4Shell, does not end after a single communication. As Carnegie Mellon University’s CERT Coordination Center (CERT CC) puts it, “there is no single ‘right’ way to do this.”16 Certainly though, it is difficult to consider Alibaba’s approach to Log4Shell “wrong” in any sense pertinent to current CVD norms. Alibaba’s researcher disclosed Log4Shell, a vulnerability easily exploitable through many vectors including chat messages on Minecraft servers, to those best suited to remediate it—the ASF team maintaining the library. Alibaba kept the information close for a relatively short time, well within the thirty-day window allowed by, say, Google’s Project Zero, and it communicated important developments regarding public knowledge of the vulnerability.17

Deep, legitimate concerns exist about the intent of requirements to provide advanced notice of vulnerabilities to government agencies. This is especially true where those government agencies are implicated in harvesting vulnerabilities for offensive use from national databases and hacking competitions,18 19 all under a framework conceiving of vulnerabilities as a national resource.20 Concerns over state access to vulnerabilities and influence on disclosure practices are not limited to China.21 22 However, this paper considers these regulations through a different lens: their impact on the supply of vulnerability disclosures worldwide.

CVD: The big picture

Researchers ranging from hobbyists to enterprise lab technicians hunt for vulnerabilities in products, open-source libraries, and embedded software. They have a variety of motivations: profit, prestige, ethical principles, and even entertainment.23 Testaments to the importance of this thriving, distributed community are widespread: bug bounty programs and platforms like HackerOne or EU-FOSSA 2, tomes of acknowledgments for external researchers in common vulnerabilities and exposures (CVE) records, and even the remarkable innovation of the open-source ecosystem itself, premised on the open and free flow of contributions from researchers and developers to projects and their maintainers.

If many eyes can better find vulnerabilities, then the global supply of security research—the product of these “eyes”—is essential to managing the level of risk posed by software to users. Regulations that might constrict this supply or limit its global reach are thus concerning. While much previous work has focused on the intricacies of vulnerability disclosure in a specific, transactional frame,24 this paper takes an aggregate approach, concerned less with edge cases than the net effect of new laws on total vulnerability supply. The focus here is on the effect of the RMSV on research. Because Chinese researchers are such a significant proportion of the supply of vulnerability disclosures, and because the law offers a clear set of exogenous intervention dates, the possible effects of the RMSV are a critical case study for policymakers—if they exist, they should be relatively easy to detect and correlate to underlying events.

This paper seeks to answer whether the RMSV had any measurable effects on the supply of either global or Chinese vulnerability research.25 The analysis measures whether such effects are detectable in publicly available vulnerability-reporting and crediting data from a selection of proprietary vendors and open-source libraries. First, the paper offers a brief, international history of policies and laws impacting vulnerability disclosure before diving into the RMSV. The second section examines statistical findings on the effect of the RMSV and discusses data gathering and analytic methodology. Finally, the paper provides recommendations for the US government and its allies to consider as they update policies impacting vulnerability disclosure in the context of the current administration’s significant efforts to improve the security of software and IT supply chains.26 27 The report does not seek to indict any one country’s approach to cybersecurity. Rather, it attempts to detect fragility in the global supply of vulnerability disclosures through accessible disclosure and acknowledgment data to highlight the subject law, its effects, and the need for policies to better encourage vulnerability disclosure outside of any single, national legal context by bolstering the wider research community’s health and vitality.

Making law for vulnerability disclosure

The development of CVD processes stretches back to the 1990s, iterating through periods of tension and begrudging consensus.28 Roles have evolved, and even some of today's champions of coordinated disclosure and public bounty programs were at best reticent about—and at worst overtly hostile toward—external vulnerability disclosure in the not-too-distant past.29 In its "Guide to Coordinated Vulnerability Disclosure," the CERT CC, affiliated with Carnegie Mellon's Software Engineering Institute, describes CVD as the information management process of moving from initial discovery of a vulnerability to the deployment of remediation measures—patches, most commonly. A good CVD policy strives to establish rules that guide stakeholders through that process along an optimal route—somewhere between "disclose everything you know about a vulnerability to everyone as soon as you know it" and "never disclose anything you know about a vulnerability to anyone."30 Differences in organizational preference, bug severity and publicity, patching timelines, maintenance resources, and much more cause great variation in the execution of CVD. Nonetheless, CERT CC's guide lays out several key principles along the path to patching: reducing harm, presuming researcher benevolence, avoiding surprise, incentivizing desired behavior, making ethical considerations, improving process, and considering CVD as "a wicked problem."31

In practice, there are many implementations of these and other guiding principles. For example, when Google researchers discover an external bug, they disclose it to vendors and provide a ninety-day period before going public with the vulnerability in the absence of a patch, with some wiggle room for slightly delayed patches and adoption, as well as a much more aggressive seven-day deadline for zero-day vulnerabilities under active exploitation.32 Bug-bounty reporting platforms like HackerOne and BugCrowd run a variety of programs as aggregation and coordination platforms, each with their own guidelines—HackerOne’s platform maintains a 180-day final deadline for disclosure in its public programs, for instance,33 and both run a variety of private programs subject to CVD processes customized by participating organizations.34
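The deadline arithmetic behind these policies can be sketched in a few lines. The following is a hypothetical, Project-Zero-style disclosure clock using the ninety-day standard window and seven-day window for actively exploited zero-days described above; the function name and structure are illustrative, not any vendor's actual tooling.

```python
from datetime import date, timedelta

def disclosure_deadline(reported, actively_exploited=False, grace_days=0):
    """Date a researcher may publish details absent a patch.

    A sketch only: real policies add case-by-case flexibility for
    nearly-complete patches, patch adoption lag, and mutual agreement.
    """
    window = 7 if actively_exploited else 90
    return reported + timedelta(days=window + grace_days)

# The Log4Shell report date (November 24, 2021) as a worked example.
standard = disclosure_deadline(date(2021, 11, 24))        # 90-day clock
zero_day = disclosure_deadline(date(2021, 11, 24), True)  # 7-day clock
```

Under these assumptions, a standard report made on November 24, 2021, could be published on February 22, 2022, while an actively exploited bug could go public on December 1, 2021, illustrating how sharply exploitation status compresses the timeline.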

Governments also maintain CVD policies for responsibly disclosing bugs found in their systems. For the US government, the US Cybersecurity and Infrastructure Security Agency (CISA) published Binding Operational Directive (BOD) 20-01 on September 2, 2020, which required all federal agencies to implement their own vulnerability disclosure programs (VDPs) by March 2022.35 36 In April 2022, the European Union Agency for Cybersecurity (ENISA) published a report on CVD policies in the European Union (EU), providing a useful overview of processes across its twenty-seven member states.37 Only four member states—the Netherlands, France, Belgium, and Lithuania—had adopted a national CVD policy at the time of publication. Nine EU members had not begun the process of implementing a national CVD policy at all, and the rest were at some stage of development.38 The United Nations maintains a working group that has written on considerations for encouraging responsible disclosure.39 Meanwhile, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) outline responsible vendor practices for handling external vulnerability disclosure in the ISO/IEC 29147:2018 standard.40

Because governments also control the legal environments that might delineate between security research and prosecutable crime, clear national CVD policies foster a healthy research community, while their absence often disincentivizes or even punishes researchers. For example, reporting by Just Security on the Log4j incident and the RMSV describes the investigation of a German security researcher who found vulnerabilities in a polling application for a political campaign. The authors—Fabiola Schwarz, Jantje Silomon, and Misha Hansel—emphasize that a lack of clear guidance and protections imposes legal concerns on researchers, impeding their ability to contribute research findings.41 Somewhat similarly, in the United States, Missouri threatened legal action against a reporter who found that thousands of social security numbers were publicly accessible on the internet through Missouri’s Department of Education website.42 Fortunately, governments have begun closing these loopholes to some degree. In addition to the CISA BOD 20-01, the US Department of Justice recently announced that it will choose not to charge security researchers acting in what it defines as good faith under the Computer Fraud and Abuse Act,43 which has long been criticized for practical overreach with regards to benevolent research.44

Importantly, where security vulnerabilities are concerned, governments do more than create legal environments and act on researcher findings to patch their systems—many have offensive organizations with an interest in obtaining some of these vulnerabilities for eventual use, adding a third dimension to their CVD interactions. In the United States, the Vulnerability Equities Process (VEP) governs the management of vulnerabilities found by government agencies, reviewing the most severe on a case-by-case basis to decide whether to retain them for offensive use or disclose them to the vendor or maintainer.45 Other countries have made some portion of their equities processes public too, including the United Kingdom, Australia, and Canada.46 While these policies are only tangential to most CVD processes, some US agencies are explicitly required to submit findings to the VEP even when working with open-source code,47 and the handling of vulnerabilities that the US government learns of in information sharing fora is not well understood from public documentation.48

The RMSV

In July 2021, the Cyberspace Administration of China (CAC) published a draft version of the RMSV. The following articles of the RMSV—in effect since September 2021—are the focus of this work:

  • Article 4: “No organization or individual may…illegally collect, sell, or publish information on network product security loopholes…”
  • Article 7: “Network product providers shall perform the following…:
    • i. After discovering or learning that there are security vulnerabilities in the provided network products…it shall immediately notify the relevant product provider…
    • ii. The relevant vulnerability information shall be submitted to the Network Security Threat and Vulnerability Information Sharing Platforms of the Ministry of Industry and Information Technology within 2 days…”
  • Article 9: “Organizations or individuals engaged in the discovery and collection of network product security vulnerabilities shall release information on network product security vulnerabilities to the public through network platforms, media, conferences, competitions, etc.…and abide by the following provisions:
    • i. Vulnerability information shall not be released before network product providers provide network product security vulnerability repair measures…
    • ii. Not to publish the details of the security loopholes in the networks, information systems and equipment used by network operators…
    • iii. …
    • iv. Not to publish or provide programs and tools specially used to exploit the security loopholes…in activities that endanger network security.”

The RMSV creates a few specific concerns, including the potential for the law to chill the disclosure of vulnerabilities from China's research community and thereby shrink the supply of vulnerability disclosures more widely. Because of the difficulty of disclosing the required information within the given two-day timeline, the ambiguity of what is considered a “network product provider,” and the fuzzy borders between a provider, an individual, and individuals funded by a provider, researchers might hesitate to disclose a vulnerability to the vendor at all, turning it over to the government only, if ever, or waiting for further legal clarity before continuing their work. The ambiguity about which entities the RMSV covers and the scope of the mandated notice to government also holds the prospect of legal penalties for individual researchers.

The RMSV is of particular interest in the context of studying the impact of CVD regulation for several reasons. First, it is unambiguous in requiring that some subsets of vulnerabilities be reported from private enterprise to the Chinese government prior to patching, even if there are ambiguities in which entities and which vulnerabilities it regulates. If anything, wider quantitative analysis might reveal the Chinese government's current views of those gaps. Second, the RMSV provides a well-delineated timeline to examine—a “before” and “after” for when its reporting requirements were either publicly known or enforceable. Third, the portion of the global security research community that it regulates is enormous—Chinese researchers, until recently, were prolific contributors at international hacking competitions,49 and the nation's information and communications technology (ICT) sector is one of the largest globally.50 51 52 Any chilling effect would therefore be more likely to emerge in publicly available data, and its impact on global security would be nontrivial, particularly given that the only known, documented instance of potential enforcement was Alibaba's handling of Log4j.

Strategic context

The RMSV is noteworthy in its strategic ambiguity. It is not made explicit how the law applies to multinational companies with offices in China, what kind of entity is considered a network product provider, or what degree of affiliation an individual can maintain with a network product provider without being subject to the law. In addition, it is unclear what level of severity a vulnerability must have to require reporting (and the law provides for no rating or review process), what level of early disclosure to affected entities outside of a product maintainer it allows, if any, what its designs for multi-party products are, or how it intends to regulate vulnerabilities in critical open-source software. Some bugs might be reported to vendors by researchers who do not even realize the presence of significant security impacts. In some ways, all this vagueness might be the point, allowing enforcement at the convenience of the government and hanging a sword of Damocles of legal liability over researching entities. Even the MIIT's ability to cope with the significant volume of vulnerability research the RMSV seems to demand is doubtful.

Strategic ambiguity is a recurring trend in Chinese cybersecurity policy and law. In China, cybersecurity is part of a broader conception of information security—the goal is full control of the information space within China to ensure social stability and regime continuity, including against myriad threats from and through digital technology.53 As a unitary state, and with recent reforms recentralizing the government after the devolution to the local level during earlier phases of reform and opening up,54 China exercises a top-down imposition of government policy. Information security has seen greater national-level consolidation of oversight because of its strategic nature. Moreover, many of China's key enforcement mechanisms relevant to information security are the direct responsibility of state entities such as the MIIT, the Ministry of State Security (MSS), and the CAC.

As China innovates in digital technologies, it has sought to preempt regulatory challenges through a top-down approach common across sectors (China conducts nearly all its development planning at the national level as well). Many of the country’s major technology firms—Ant Group, Baidu, Tencent, etc.—are private companies that nevertheless must function inside this policy environment. Recent fines for monopolistic behavior and blocks against international initial public offerings (IPOs) demonstrate the government’s tightening grip on its IT industry scions,55 56 alongside a drive towards stricter regulations and localization of data centers.57

The RMSV emerges from an approach to information management that focuses on applying regulation with purposeful ambiguity at the source of what is considered a resource, all to create wider and more flexible effects on firms and maybe even individuals. It is precisely this framework that threatens to undermine the supply of vulnerability disclosures abroad.

Effects of the RMSV

With these considerations in mind, this paper looks at a large dataset of publicly reported vulnerability acknowledgments to detect a significant change in contributions from Chinese researchers—from individuals to large companies—over the lifetime of the RMSV at three key points:

  • Public release of Data Security Law draft: July 2020.58 59
  • Publication: July 2021.
  • Enactment: September 2021.

Specifically, this paper proposes that any of those three dates may see either a significant decline in the proportion of vulnerabilities attributed to Chinese researchers or firms or a significant decline in the total reporting of vulnerabilities in the case that most reports were initially unattributed or anonymously made.

Methodology

To look for these effects, the team gathered a variety of publicly available vulnerability data from both proprietary vendors and open-source product managers. These entities included Microsoft, Apple, VMware, F5, and Red Hat, which provides data about a wide variety of open-source packages it is involved with. Specific data organization and entries varied among these entities. Microsoft provides the ability to download a spreadsheet of CVEs reported to the company, which includes external acknowledgments, dates, and affected products.60 Apple maintains records of its security updates, which include information on CVEs and external reporters; the team scraped this data from its website, as with VMware and F5.61 62 63 Red Hat maintains several datasets relevant to the task, including Extensible Markup Language (XML) files of CVEs reported to its open-source projects, CVE acknowledgments, dates, and affected projects.64 The team selected data sources for their ability to represent significant subsections of the technology ecosystem—multi-platform providers (Microsoft), significant infrastructure providers (VMware and F5), companies with massive consumer-facing product lines (Apple), and well-established open-source software (OSS) stakeholders (Red Hat). Other entities considered for study but not included due to processing challenges are Google (specifically its Chrome stable releases) and the Open Source Vulnerability (OSV) schema.65
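The first processing step this implies is folding each vendor's export into one uniform record layout. A minimal sketch, with invented field names and an invented sample row (real vendor exports differ in format and naming), might look like this:

```python
import csv
import io

# Invented sample export in a unified CSV layout.  The URL and column
# names are placeholders for illustration, not any vendor's real schema.
RAW = """cve,url,credits,date,product
CVE-2021-44228,https://example.com/advisory,Chen Zhaojun of Alibaba,2021-12-10,Log4j
"""

def load_log(text):
    """Parse one vendor's export into a list of uniform record dicts."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        # A single acknowledgment field may credit several entities;
        # assume ";" separation for this sketch.
        row["credits"] = [c.strip() for c in row["credits"].split(";")]
        rows.append(row)
    return rows

log = load_log(RAW)
```

In practice each of the five sources would need its own loader feeding this common shape before the country-tagging and month-batching steps described below.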

The publication date of security advisories is not necessarily the same as the corresponding vulnerability's disclosure date, and the lag between the two might vary within and among vendors based on a host of factors. These advisories are nonetheless a useful source of time-stamped data that can make the effects of policy interventions visible. Random distributions or an absence of consistent patterns of change are thus useful findings as well. In addition, the decision to publicly credit a researcher despite the law (as opposed to labeling them “anonymous” or crediting no entity or individual at all) would also reveal something about the respective organization's thinking.66

These datasets were compiled (separately) into a log of CVEs, originating webpages, acknowledged entities (differentiated into individuals and organizations where possible), best-estimated publication date, and affected products.67 From there, the team associated credited companies with a best-estimate country of legal provenance, while identifying credits to open-source projects and multinational organizations as such. A few organizations operated in only two countries, and their credits were split between both. While the paper does not identify the legal environments where individuals operated, some geolocation information volunteered through Twitter, GitHub, or email addresses is available and has been used in other studies.68 This provided the necessary material for a rough analysis of contributions by country over time. Each entry received a month-year tag for batching, especially convenient as two of the three significant dates were on the first and second days of a month (July 2, 2020, and September 1, 2021—the draft RMSV released on July 13, 2021). Each step presented various opportunities for cleaning the datasets, and this paper describes the steps taken for further analysis.
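The month-year batching step can be sketched as follows; the sample entries and country tags are invented, while the intervention dates are those given earlier in the paper.

```python
from collections import Counter
from datetime import date

# Key dates from the paper, used to split monthly batches into
# before/after windows around each possible intervention.
INTERVENTIONS = {
    "DSL draft released": date(2020, 7, 2),
    "RMSV published": date(2021, 7, 13),
    "RMSV in effect": date(2021, 9, 1),
}

def month_tag(d):
    """Month-year batching key, e.g. date(2021, 9, 1) -> '2021-09'."""
    return f"{d.year}-{d.month:02d}"

# Invented (publication date, country tag) entries standing in for the
# real compiled log.
entries = [
    (date(2021, 6, 5), "CN"),
    (date(2021, 6, 9), "US"),
    (date(2021, 7, 20), "CN"),
]
by_month = Counter((month_tag(d), country) for d, country in entries)
```

Counting (month, country) pairs this way yields exactly the per-month contribution series plotted in the figures below, with each intervention date falling cleanly on a batch boundary.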

The data collected by this methodology was unavoidably noisy.69 Many entries were created by hand at the source, leading to an enormous quantity of typos and spelling variations (e.g., Qihoo 360, QIHU360, Qihoo360, or just 360 represent the same company), and different formats and encoding protocols mangled accents and non-English characters. There is also considerable overlap between datasets, where vulnerabilities in a common codebase affected multiple products or where CVEs were retroactively revealed to discuss the same vulnerability. There is no standard process for acknowledgment either—some uncredited CVEs may have been reported anonymously or discovered by researchers internal to the company providing the data. Meanwhile, F5 did not always list CVEs, relying on its own numbering system throughout.
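Cleaning the hand-entered acknowledgment strings amounts to canonicalizing name variants. A minimal sketch, with an alias table built from the variants noted above (the canonical forms and matching rule are illustrative, not the paper's actual cleaning code):

```python
import re

# Illustrative alias table: keys are names with spacing, punctuation,
# and case stripped; values are the chosen canonical form.
ALIASES = {
    "qihoo360": "Qihoo 360",
    "qihu360": "Qihoo 360",
    "360": "Qihoo 360",
}

def canonical(name):
    """Collapse spacing/punctuation/case, then look up a known alias."""
    key = re.sub(r"[\s\-_.]+", "", name).lower()
    return ALIASES.get(key, name.strip())
```

Applied over all acknowledgment strings before country tagging, this keeps one company's contributions from being scattered across several apparent entities, at the cost of requiring a manually curated alias table.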

Dating entries is similarly imprecise, reflecting many possible significant dates: original confidential report to a company, discovery of a known CVE's impact on a product not previously known to be affected, public publishing, addition to another dataset like the US National Vulnerability Database (NVD), and so on. Not all of Red Hat's CVEs were updated with accurate reporting dates in one of its datasets, so this analysis used the company's searchable database to fill in dates for approximately one-third of the entries, supplementing with publication dates from MITRE and Tenable as needed.70 In addition, not all companies batched their data in the same manner. Microsoft seemed to organize reports mainly by CVE, while Apple and F5 focused on reporting CVEs within relevant software updates, leading to double reporting and time-shifts. These discrepancies, alongside other unknown differences in internal policy and variances in sample size, prevent direct comparisons between datasets. Overall, while this data is by no means fully representative of the security research ecosystem, it does present best estimates of small slices of that community and its contributions to various product and project environments.

On the data

The datasets utilized for this study had sample sizes of 14,740 (Apple), 4,355 (Microsoft), 3,307 (Red Hat), 1,363 (VMware), and 335 (F5). In addition to providing data on potential impacts of the RMSV, this data helps illustrate trends in vulnerability disclosure across a portion of the technology ecosystem. Overall, the country-attribution ratios for acknowledgments give a useful sketch of the largest, most active research communities as well as the differences among them.

The following charts detail the total number of records for each dataset and the number and percentage of those where an acknowledgment links back to organizations operating out of the United States, China, an EU member state, or other countries. Each also details the number and percentage of entries with acknowledgments not linked to country-tagged organizations (i.e., organizations that could not be tied to a specific country). Because acknowledgments can credit multiple organizations and thus tag multiple countries, the rows do not add up to 100 percent of entries.
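The multi-country tagging means per-country shares are computed over records independently rather than as a partition, which is why rows can exceed 100 percent in aggregate. A toy example with invented records:

```python
# Toy records: each is the set of country tags credited on one entry.
# The data is invented purely to show the arithmetic.
records = [{"US"}, {"CN"}, {"US", "CN"}, {"EU"}]

total = len(records)
share = {
    country: 100 * sum(1 for r in records if country in r) / total
    for country in ("US", "CN", "EU")
}
# One record credits both a US and a Chinese organization, so the US
# and CN shares each count it and the shares sum past 100 percent.
```

Here US and CN each appear in two of four records (50 percent each) and EU in one (25 percent), so the shares total 125 percent despite there being only four records.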

Table 1: Aggregate contributions by country from the Apple dataset
Table 2: Aggregate contributions by country from the Microsoft dataset
Table 3: Aggregate contributions by country from the Red Hat dataset
Table 4: Aggregate contributions by country from the VMware dataset
Table 5: Aggregate contributions by country from the F5 dataset

These charts help highlight differences in vulnerability disclosure patterns across the datasets included here, but they also underline the variability in disclosure record-keeping practices. For example, all Red Hat and Microsoft entries contained some form of acknowledgment, and most entries in the Apple and F5 datasets did too, yet most VMware entries did not. Similarly, rates of Chinese contribution varied between datasets, from no identifiable contributing organizations in the F5 data to a quarter of Microsoft entries crediting at least one organization based in China. Somewhat surprisingly, the portion of acknowledgments crediting only individual researchers (rather than organizations either affiliated with the researcher or contributing independently) was consistent among the three largest datasets at around one-quarter of entries, but much lower for F5 and VMware entries.

Benchmarking these aggregate measurements against other datasets is useful. An anonymous bug-bounty platform provided Dakota Cary in his February 2022 Congressional testimony with data on the portion of bounty payments paid to researchers in different countries by US firms in 2021 through the platform.71 This paper reproduces that data below in a similar format as the above data.

Country          Funds paid     Percent of total payments
United States    $6,718,923     15%
EU               $6,601,114     15%
China            $4,220,302     10%
Table 6: Anonymized bug-bounty platform data by country through 2021

Notably, these data show less of a gap between the United States and China or the European Union, and greater representation of the European Union overall. Part of this gap originates from the narrower timeframe of the bug-bounty platform data. The datasets gathered by this paper from vendors consistently show US dominance of contributions in earlier years, followed by increasing representation of other countries—particularly China—as their IT sectors develop and the disclosure pipelines become more accessible to non-US researchers. Filtering this paper's datasets for just 2021 reflects that shift, bringing the United States and China to a near parity similar to that in the bug-bounty platform numbers, though the EU still lags. This might reflect selection bias in the countries with which the bug-bounty platform has developed strong relationships. Figure 1 shows the contributions tagged to the United States and China over time in the Apple dataset, illustrating the changes in composition over time.

Figure 1: Contribution counts by country per month, Apple

Findings on the RMSV

The majority of the analysis in this paper looks for an impact timed with one of the three key dates identified for the RMSV. First, it examines both the raw counts and the proportional contributions by country, focusing on China while using the United States as a baseline; these results are discussed below. F5 contained no acknowledgments tagged to Chinese companies, thus producing no finding. Data from Apple and VMware showed no significant impact correlated with the RMSV, though the paper includes basic charts of their raw contribution datasets in the appendices. Data from Red Hat and Microsoft produced more notable results and are analyzed in greater detail below.

Microsoft

Between June and July of 2020, CVE contributions credited to Chinese organizations plummeted from 59 to 11, a level around which they have hovered each month since (see figure 2). Even more surprisingly, this decline occurred as overall contributions increased, breaking a trend of a steadily rising proportion of Chinese contributions (see figure 3).

Figure 2: Contribution counts by country per month, Microsoft
Figure 3: Contribution proportions by country per month, Microsoft

To better analyze this result, this paper uses Google's CausalImpact analysis package for R.72 To do so, it treats the July 2020 date as an intervention for China-tagged contributions, predicting a post-treatment trendline based on pre-treatment China-tagged contributions and using non-China contributions—data points unaffected by the RMSV—as a covariate. This modelling has the advantage of capturing, with considerable nuance, the relationship between China-tagged contributions and overall contributions each month: few overall contributions predict a low number of China-tagged contributions, while a large number of overall contributions makes a large number of China-tagged contributions more likely. Finally, this post-treatment forecast is measured against the actual post-treatment data and tested for statistical significance. The results of this analysis are shown in table, graph, and text form—provided by the CausalImpact package itself—below.
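The counterfactual logic can be approximated in a few lines. The sketch below is a deliberately simplified ordinary-least-squares stand-in for CausalImpact's Bayesian structural time-series model, with invented monthly counts; it is meant only to show the shape of the method, not to reproduce the paper's results.

```python
# Fit China-tagged counts against the non-China covariate over the
# pre-intervention months, forecast the post-period counterfactual,
# and measure the cumulative shortfall.  All numbers are invented.
def ols(x, y):
    """Ordinary least squares for one predictor: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

pre_cov, pre_cn = [100, 120, 110, 130], [50, 60, 55, 65]   # pre-period months
post_cov, post_cn = [140, 150], [12, 11]                   # post-period months

intercept, slope = ols(pre_cov, pre_cn)
counterfactual = [intercept + slope * c for c in post_cov]
cumulative_effect = sum(post_cn) - sum(counterfactual)     # negative = decline
```

With these invented numbers the model learns that China-tagged counts track half the covariate, forecasts 70 and 75 contributions for the post-period, and observes only 23 in total, a cumulative shortfall of 122. CausalImpact performs the same comparison with a far richer model and attaches credible intervals and a posterior probability to the effect.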

As a downside, the use of only one predictor variable is not optimal—the CausalImpact developers recommend somewhere between five and twenty where possible. Others to be considered include total CVEs made public each month and IT-sector size per month among other predictors of general vulnerability disclosure, though their addition is beyond the scope of this paper, which serves mainly as a proof-of-concept analysis. As such, while this is not rigorous statistical evidence of a significant impact from the RMSV, it is moderately convincing and provides clear direction for future analysis.

                                      Average        Cumulative
Actual                                11             213
Prediction (s.d.)                     27 (3.5)       549 (69.4)
95 percent CI                         [21, 34]       [418, 686]
Absolute effect (s.d.)                -17 (3.5)      -336 (69.4)
95 percent CI                         [-24, -10]     [-473, -205]
Relative effect (s.d.)                -61% (13%)     -61% (13%)
Posterior tail-area probability p     0.00102
Posterior prob. of a causal effect    99.89827%
Figure 4: Original and projected China-tagged contributions, pointwise difference, and cumulative difference, Microsoft

The x-axis of the above graphs shows the number of time-steps from the earliest data entry, while the y-axis shows the number of contributions. The top graph shows as a black line the number of China-tagged contributions over time, on top of the predicted number of China-tagged contributions derived from covariate and pre-treatment modelling as a dotted blue line, with confidence intervals shaded in light-blue and the treatment date shown as a vertical dotted gray line. The second graph shows the difference between prediction and observed data, called pointwise causal effects, which the third panel sums up to show the cumulative deviation from the predictive model. The significant drop-off in China-tagged contributions, especially in the context of no corresponding drop-off in contributions from other countries, is statistically significant and described in technical detail below.

During the post-intervention period, the response variable had an average value of approximately 10.65. By contrast, in the absence of an intervention, we would have expected an average response of 27.45. The 95 percent interval of this counterfactual prediction is [20.89, 34.32]. Subtracting this prediction from the observed response yields an estimate of the causal effect the intervention had on the response variable. This effect is -16.80 with a 95 percent interval of [-23.67, -10.24]. For a discussion of the significance of this effect, see below. Summing up the individual data points during the post-intervention period (which can only sometimes be meaningfully interpreted), the response variable had an overall value of 213.00. By contrast, had the intervention not taken place, we would have expected a sum of 548.91. The 95 percent interval of this prediction is [417.75, 686.49]. The above results are given in terms of absolute numbers. In relative terms, the response variable showed a decrease of -61 percent. The 95 percent interval of this percentage is [-86 percent, -37 percent]. This means that the negative effect observed during the intervention period is statistically significant. The probability of obtaining this effect by chance is very small (Bayesian one-sided tail-area probability p = 0.001). This means the causal effect can be considered statistically significant.73

Interestingly, the drop in China-tagged contributions coincides with an increase of similar size and significance in contributions tagged either to individuals, to companies with no known country tag, or to no acknowledgement at all.74 This might suggest that in response to the RMSV, researching entities and disclosure recipients opted to refrain from explicit, public acknowledgments rather than from disclosure altogether.

Red Hat

While no significant change in contributions from Chinese entities occurred in the Red Hat data at any of the three key dates identified above, a significant decline in contributions did occur in April 2017 and has been largely sustained since, even amid a general increase in the overall number of contributions (which nonetheless fell from a high of 128 in September 2017 to a lower mean thereafter; see figure 4). The proportional data reflects that initial drop while also showing an upward-trending resurgence of Chinese contributions beginning in August 2020 (see figure 5). As in the Microsoft, VMware, and Apple data, the trend of early US predominance in contributions, followed by increased participation from China and other countries, persists. If the April 2017 drop resulted from an external intervention, analysis similar to that performed on the Microsoft dataset (included in the appendix) also indicates statistical significance. No clear exogenous event is apparent, however, indicating that the movement reflects either internal policy changes, statistical noise, or a more complex interaction among contributors and stakeholders.

Figure 5: Contribution counts by country per month, Red Hat
Figure 6: Contribution portions by country per month, Red Hat

Product-type breakdown

This paper also looked at product-type breakdowns within each dataset, pulling out information for hypervisor products in VMware and Microsoft and internet browsers in Apple and Microsoft, and examining iOS and macOS updates in Apple. While no significant impact from any of the three RMSV dates arose in these subsets, further analysis of the datasets may reveal interesting areas for future research. The analysis strove to compare Microsoft and Apple operating system trends, but a lack of clear labelling conventions frustrated attempts to identify contributions to Microsoft operating systems. These non-findings may indicate that, while researchers necessarily specialize in specific product types and systems, the combination of large datasets, the small number of true experts, and the difficulty of extracting information on vulnerability severity even with CVSS scores75 drowns out any discernible trends in specialization.

By-Company Breakdown: Microsoft

To help clarify the specific cause of the July 2020 decline in China-based contributions in the Microsoft dataset, this paper analyzed the data’s China-tagged companies more closely. In addition to the first public knowledge of the RMSV, the July 2020 date coincided roughly with several rounds of US sanction activity against Chinese companies as well as significant cyber legislation in China.76 While determining precisely which regulations might have caused the decline is beyond the scope of the paper, it is possible to measure some impact of company-specific sanction activity with this paper’s dataset.

Tracking the contributions of large, China-tagged companies over time was conceptually straightforward: the dataset already tracks disclosures over time, linking them to companies and tagging those companies with a best-approximated country of operation. In practice, though, poor data quality complicates the process, with multiple spellings of the same companies and inconsistent references to subsidiary companies, subdivisions, and research labs. For each of these variations, we created—and then tracked over time—an alias tag, providing a common identity for typos, spelling variations, and subsidiaries. For example, QIHU360 (a misspelling), 360 SkyEye Labs (a subsidiary), and Vulcan Team (also called Qihoo 360 Vulcan Team) all received the same alias: Qihoo 360.
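The aliasing step can be sketched as a simple normalization lookup. The table below is illustrative of the mapping just described, not the project's actual code, and covers only the Qihoo 360 variants named in the text.

```python
# Illustrative alias table: misspellings, subsidiaries, and research teams
# all map to one canonical company name (entries are examples from the text).
ALIASES = {
    "qihu360": "Qihoo 360",               # misspelling
    "360 skyeye labs": "Qihoo 360",       # subsidiary lab
    "vulcan team": "Qihoo 360",           # research team
    "qihoo 360 vulcan team": "Qihoo 360", # alternate team name
}

def canonical(name: str) -> str:
    """Return the canonical company alias for a raw acknowledgment string."""
    # Collapse repeated whitespace and ignore case before lookup.
    key = " ".join(name.split()).lower()
    return ALIASES.get(key, name.strip())
```

Any acknowledgment string not in the table passes through unchanged, so unrecognized contributors keep their original names while known variants collapse into a single tracked identity.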

Sums of pre- and post-July 2020 contributions for each company (both by their original entry names and by their aliases) provide a preliminary analysis. Altogether, Chinese companies contributed 1,090 disclosures in the Microsoft dataset before July 2020, and 230 after. An impressive 691 of the pre-RMSV contributions from China-tagged companies came from Qihoo 360 affiliated groups, followed by 190 from Tencent, thirty from Baidu, and twenty-five from Alibaba (and several other companies contributed similarly or less prior to the RMSV—see the following table for more).
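The pre/post split is itself a grouped sum over dated, aliased records. A minimal sketch (the record shape and dates are hypothetical, chosen only to illustrate the cutoff logic):

```python
from collections import Counter
from datetime import date

# First public knowledge of the RMSV's reporting requirements (July 2020).
CUTOFF = date(2020, 7, 1)

def pre_post_counts(records):
    """Sum disclosures per company alias before and after the RMSV cutoff.

    records: iterable of (disclosure_date, company_alias) pairs.
    Returns (pre_counts, post_counts) as Counters keyed by alias.
    """
    pre, post = Counter(), Counter()
    for when, company in records:
        (pre if when < CUTOFF else post)[company] += 1
    return pre, post

# Hypothetical records illustrating the split.
records = [
    (date(2020, 3, 14), "Qihoo 360"),
    (date(2020, 6, 2), "Qihoo 360"),
    (date(2020, 9, 9), "Tencent"),
]
pre, post = pre_post_counts(records)
```

Because Counters return zero for missing keys, a company that stops contributing after the cutoff—as Qihoo 360 largely did—shows up naturally as a large pre-period count and a near-zero post-period count.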

After July 2020, the data initially tells a different story. While some companies increased their disclosures—for example, DBAPP and Venustech more than doubled their contributions—the largest pre-RMSV contributors fell off precipitously with no other entities filling the gap to the same magnitude. Because Qihoo 360 contributed more than 60 percent of the total pre-RMSV, this paper tracks Qihoo’s year-month contributions to confirm that it was the driving force behind the China-tagged contribution drop-off in July 2020 (see figure 7).

Microsoft Aliases | Pre-RMSV | Post-RMSV | Decrease
Qihoo 360 | 691 | 11 | 680
Tencent | 190 | 11 | 179
Baidu | 30 | 3 | 27
Alibaba | 25 | 5 | 20
Table 8: By-company Microsoft contributions
Figure 7: Contribution counts within Microsoft by month

Indeed, before July 2020, most month-to-month contributions from China-tagged entities came from Qihoo 360 and its affiliated labs and teams. The decline in Qihoo’s contributions accounts for nearly the entire drop in China-tagged contributions in the Microsoft dataset after July 2020, as well as the increase running up to it.

Crucially, on June 5, 2020, the US Department of Commerce added Qihoo 360—along with its UK offices and twenty-three other companies in China and Hong Kong—to the Entity List, a list of foreign entities to which the Department applies moderate trade restrictions.77 This suggests that the decline in China-tagged contributions is primarily a result of Qihoo’s shifting legal status combined with its contribution preeminence, perhaps augmented by the inclusion of its UK branches, rather than the public circulation of the RMSV’s reporting mandates.

Qihoo 360 is a China-based internet software and security company founded in 2005 by the former head of Yahoo’s China operations, Zhou Hongyi. The company has enjoyed a litigious, dynamic history, from lawsuits against Yahoo China, Baidu, Tencent, and others to going private in 2016 as it delisted from the New York Stock Exchange and reshored to China.78 79 It also has close ties to the Chinese government, from executives working with the Cybersecurity Association of China80—which helped pass the RMSV—to its role in finger-pointing disputes with the United States and China’s overall cybersecurity posture.81 Qihoo’s researchers have long dominated hacking competitions like Pwn2Own and the Tianfu Cup as well as Microsoft’s security researcher leaderboards.82 Interestingly, the most recent tweet from one of Qihoo’s subsidiary accounts, 360BugCloud, was made on August 6, 2020, right after the decline in Microsoft contributions from the company—and, of course, the tweet bragged about the company’s preeminence in that research space.83 Despite the company’s decline in contributions to the Microsoft and Red Hat ecosystems, its researchers still seem active elsewhere—for instance, most recently, Steven Seeley discovered CVE-2022-31664 in early August of 2022, which allows for privilege escalation in VMware products.84

Notably, the other large China-tagged entities that demonstrate a similar decline in contributions do not conform to the same time frame. Tencent, for example, continued contributing through the July 2020 date for a few months before falling off, while Baidu and Alibaba contributions dried up earlier, around early 2019. This suggests that the majority of the chilling effect seen in the Microsoft data arises from the Qihoo 360 entity listing, and that the broader reticence of large China-tagged corporations to contribute is part reversion to the mean, part generalized hesitance.

Figure 8: Contribution counts by company by month, Microsoft

The focus on Qihoo 360 might also help provide some context for the uptick in unattributed reports that follows the July 2020 date. Given the expected delay between receiving a disclosure and remediating and reporting it, Microsoft likely held a significant number of vulnerabilities disclosed by Qihoo 360 before the entity listing complicated the two companies’ relationship. The continuity in volume between pre-entity-listing Qihoo 360 contributions and post-entity-listing unattributed contributions might imply the company chose not to disclose the source of those discoveries. The fact that the elevated level of unattributed reports persisted for some five months might hint at the size of the backlog and the time it took to clear. Notably, the decline in unattributed disclosures following the post-July 2020 uptick coincides with a decline in overall contributions to Microsoft, which could imply that the chilling effect caused by the entity listing affected the entire ecosystem, given the noteworthy productivity of the entity-listed company.

Given that this July 2020 trend existed only in the Microsoft ecosystem, checking for possible causal events within that company’s specific context was also necessary. Two possible explanations specific to Microsoft are the changes to its bug bounty reward incentive programs in both April and July 2020,85 when the company reduced or eliminated prizes for certain classes of vulnerabilities that had surged in reporting. In theory, the removal of financial incentives for a specific type of vulnerability research might explain a drop in supply if Qihoo 360 alone focused on that work. There are a few problems with this explanation, however. First, the later bounty change occurred on July 24, 2020—after the July 14, 2020, reporting date that saw the massive Qihoo 360 drop-off. Second, much of the significance of the Qihoo 360 decline derives from the absence of any similar drop-off among other companies or countries, which frustrates the removed-incentives narrative: total reports would likely also have fallen off if they had surged around a handful of vulnerability types that no longer paid out.

Data on which vulnerabilities Qihoo was reporting before July 2020, and which vulnerabilities other contributors reported after that date, can help address the argument that Qihoo 360 alone was affected by the bounty changes. Seventy-five percent of the company’s findings in the months leading up to July 2020 focused on privilege escalation. The April 2020 updates make no explicit mention of privilege escalation (though technical synonyms are certainly possible), and following the July 2020 collapse, other companies continued to report privilege escalation vulnerabilities. In fact, privilege escalation remains the second-most compensated security impact, behind only remote code execution. In simpler terms, the explanation that Qihoo 360 focused on a specific type of bug and stopped contributing entirely when it was no longer lucrative does not hold, insofar as this paper’s researchers can determine with limited technical fluency. Other entities continued to report the vulnerability types Qihoo 360 had focused on long after the bounty rule changes. Moreover, if the changes did drive a research-type-specific decline, Qihoo 360 should in theory have kept reporting the other 25 percent of its original contributions—instead, the company’s reporting effectively collapsed to nothing. In truth, the starkness of Qihoo’s drop, and the complete absence of any other country or company behaving similarly at the same time, suggests that something unique to China or Qihoo 360 was at play, and the timing of the entity listing provides a more compelling explanation. Finally, it is unclear what portion of the MSRC vulnerability dataset is connected to bounty payouts to begin with—doubtless, some disclosures are uncompensated.

Regarding the broader concept of preserving supplies of security information, this finding within the Microsoft data only emphasizes the point: while the RMSV may not have been the causal factor, reliance on a singular legal context, or even one company within a singular legal context, creates a security bottleneck. Changes in law, trade regulation, or even a company’s financial or legal status can have an outsized impact on security if those changes affect vulnerability research and disclosure at a particularly productive node, as shown in the data on Qihoo 360. Diversifying sources of research across national lines, market verticals, revenue sources, and other areas all increase the research community’s resilience and productivity, avoiding single points of failure.

By-Company Breakdown: Red Hat

The Red Hat data saw a similarly significant decline in China-tagged contributions from February to April 2017. This decline did not coincide with any of the critical dates highlighted in analyzing the RMSV—in fact, it predates them all. Accordingly, this paper sought to explain the decline, first looking for potential causal mechanisms around the 2017 window in general news about Red Hat and Chinese technology firms. For example, in April 2017, Red Hat and Huawei signed a collaboration agreement for delivering enterprise Linux.86 Earlier that month, then-President Trump met with Xi Jinping at a general-purpose US-China summit.87 However, the summit concluded on good terms, and the mechanism by which it or a Huawei-Red Hat deal would reduce China-tagged contributions is unclear.

For better insight, the team replicated the aliasing process used on the Microsoft dataset and discovered that, once again, Qihoo 360 was the driving force behind much of the China-tagged contributions in the Red Hat data, at least before April 2017. The company was tagged in almost 80 percent of Chinese contributions prior to April 2017 and a mere 12 percent afterward, and its work was the overwhelming source of trends in the total China-tagged contributions before the steep decline. Huawei, Tencent, Ant Group, NSFOCUS, VenusTech, and a handful of universities were also somewhat significant contributors to the Red Hat ecosystem, but by an order of magnitude less than Qihoo 360, and without noteworthy activity on either side of the key April 2017 date. If anything, their contributions grew years after the Qihoo decline.

Red Hat Aliases | Pre-April 2017 Sum | Post-April 2017 Sum | Decline
Qihoo 360 | 115 | 22 | 93
Zhejiang University | 0 | 21 | -21
VenusTech | 0 | 19 | -19
Tencent | 3 | 16 | -13
NSFOCUS | 0 | 12 | -12
Ant Group | 0 | 11 | -11
SQLab NCTU | 0 | 11 | -11
Huawei | 15 | 10 | 5
Qianxin Group | 0 | 10 | -10
Kunlun Lab | 0 | 6 | -6
UHK | 0 | 6 | -6
Tsinghua University | 0 | 5 | -5
Table 9: By-company Red Hat contributions

The Red Hat data, unlike that from Microsoft, does not show an increase in unattributed contributions coincident with the April 2017 decline. Moreover, the team could find no clear exogenous event that might have caused Qihoo 360 to reduce its contributions to the ecosystem to near-zero. In theory, perceived future competition from Huawei researchers turning to Red Hat products as part of their corporate agreement may have incentivized Qihoo, a much smaller company, to shift its limited resources elsewhere, particularly as its concurrent privatization may have created an environment of financial uncertainty.88 However, this is speculative at best. No other contributing entity was as prolific as Qihoo 360, and no entity took its place either. Most likely, this data captures a change in personnel or internal policy at Red Hat, at Qihoo 360, or at both, and its implications for the fragility of the research community and the dangers of centralized dependence are unchanged.

Figure 9: Contribution counts within Red Hat by month

Discussion

These data suggest that the RMSV has not yet had a significant impact on the supply of vulnerability disclosures in most of these codebases, with the notable possible exception of Microsoft. That is not to suggest, however, that the research community in China is immune to its legal context. First, the potential for a delayed effect outside this study’s timeframe remains, especially given the considerable vagueness in CVE reporting and dating practices—depending on the duration of vulnerability retention, CVEs regulated by the RMSV may simply not have entered the public record in appreciable numbers yet. It is possible that the RMSV will more unambiguously impact vulnerability research in the future as enforcement practices come to light and undisclosed vulnerabilities reported prior to its enactment grow rarer over time. Second, two of the largest datasets utilized in this study do seem to show some form of China-based supply shock, even if only one might be tied to the RMSV, and neither trickles clearly into global reporting numbers. In the Microsoft dataset, that effect may be correlated with the first public knowledge of the RMSV reporting requirements, US sanctioning of Chinese technology firms,89 other Chinese cybersecurity legislation, or some combination of the three, and its impact continues to the present day. In the Red Hat dataset, the cause of the decline in reporting is unclear, as it predates any reported knowledge of the RMSV, and contributions from China appear to have since recovered to earlier levels.

Open-source spotlight

This paper’s findings are of particular importance to the open-source ecosystem. At an abstract level, vulnerability research and disclosure closely mirror the system of open-source contributions: developers, motivated by profit, prestige, personal interest, and other factors, contribute to open-source projects they do not necessarily maintain. Similarly, legal environments can indeed shape this supply of contributors. For example, various sanction regimes led to the blocking of GitHub developers in Iran, Syria, Crimea, and other geographies in 2019.90 And similar to the vulnerability research ecosystem, China is well noted as a significant contributor to and user of open-source projects, where the geographical distribution of developers resembles the distribution of vulnerability disclosures by country.91 92 In his writing on the shifting open-source ecosystem in China, Kevin Xu notes that this trend is likely to continue: “Why the central government would embrace open source is rather straightforward: it prefers to favor flexible technologies that aren’t tied to certain vendors, companies, or countries, so it can control and shape them at will. The thinking here is not that different from the rationale behind any large enterprise’s adoption of open source, in or outside China. ‘Self-reliance’ as a national theme and technological imperative will be front and center for China for many years to come.”93

This sustained interest and the general predominance of open-source software in codebases—both open and proprietary—make the supply of open-source contributors as valuable as the supply of vulnerability research, and there is considerable overlap between the two. Potential threats to that supply are more varied in the open-source ecosystem as well—for example, while China has on the one hand embraced GitHub, it has also worked to establish its own open-source ecosystem, Gitee.94 While there is nothing inherently wrong with competing codebases—and there is even some security to be derived from that diversity95—fracturing open-source contributor bases might drain valuable developers from an already resource-strapped environment.

At the largest scale, these data illustrate a degree of fragility in external contribution ecosystems—the tireless work of security researchers cannot be taken for granted, and imprecise vulnerability-reporting laws do indeed have the potential to limit their contributions. The specifics of the mechanisms involved are less clear—say, whether laws regulating domestic researchers or limiting interactions with foreign entities providing research have more impact. Connecting these supply-side effects to security incidents downstream is almost impossible—one cannot know what vulnerabilities might have been discovered by researchers who would have otherwise been searching for them. Nonetheless, any reduction in the rate of vulnerability discovery or constraint on reporting those vulnerabilities to affected entities and codebase maintainers promises to reduce cybersecurity at large.

Conclusion and recommendations

The passage of the RMSV and its coincidence with increased US government attention to the security and sustainability of open-source software provide a significant opportunity for both government and industry. The risk that laws like the RMSV pose to the cybersecurity of the technology ecosystem is the potential isolation of significant subsets of the research community from the larger global supply of vulnerability disclosures. This kind of fear and fragmentation only adds risk to an already difficult-to-mitigate landscape.

The United States and allied governments can proactively address these kinds of supply-side security effects in coordination with industry by further expanding the supply of disclosure information rather than mimicking such laws to hoard vulnerability disclosures like a scarce resource. Key to this proactive approach is smoothing the journey from discovery to disclosure to patching across jurisdictions, providing better, more consistent tooling for vulnerability discovery, and working to better recognize and countercyclically invest against emerging gaps in global vulnerability disclosure. Three recommendations come to the fore:

1. Harmonize Vulnerability Disclosure across the United States and Allies

The United States, through the National Cyber Director in partnership with CISA’s Computer Emergency Response Team (US-CERT), should seek to lower barriers to vulnerability disclosure in a group of like-minded allies, including Australia, New Zealand, the United Kingdom, Estonia, and the Netherlands. Such an activity should expand to include Japan and other NATO members in short order. Staff-level engagement across these states, organized as an ad-hoc working group, should work to harmonize domestic vulnerability-disclosure laws so that cross-jurisdictional disclosure is less burdensome and uncertain for vulnerability researchers. Harmonization could focus on requirements for companies to publish and adhere to CVD policies and the removal of legal penalties for non-commercial reverse-engineering activities, among other avenues. The working group should not seek to determine a common definition of “good-faith” security research, but rather seek near-term wins to better knit these jurisdictions together into a common disclosure environment.

Properly realized, this harmonization would deepen the supply of vulnerability disclosures to firms and maintainers in the United States and allied states, promoting more effective function as a single disclosure environment. As a second stage, the working group should consider manners of establishing international processes and protections for receiving and validating anonymous vulnerabilities. These efforts should include members of civil society and industry on a limited basis, with Joint Cyber Defense Collaborative (JCDC) members as logical starting partners.

2. Improve the Quality and Consistency of Support of Vulnerability Discovery Tools

Should authorization of the Critical Technology Security Centers (CTSC) finally pass through the National Defense Authorization Act (NDAA) conference process, the director of CISA should include vulnerability-discovery tooling and long-term support for these tools as eligible areas of investment for the Open-Source Software CTSC. Where policy moves threaten to curtail the supply of vulnerability disclosure, wider access to more capable, better-supported vulnerability discovery tools can help counter that effect. Providing these tools as open source for free use by the community will directly benefit open-source software security, and such an approach could well have similar effects on proprietary code. This would achieve the above-stated goals while furthering the administration’s avowed interest in improving the security of software.96 This would also serve as a good example of indirect public-sector investment in the security of open-source software and follows recommendation thirteen of the recently released Cyber Safety Review Board report.97

3. Track Vulnerability Disclosure Patterns and Invest Against Gaps

The National Security Agency’s (NSA) Cybersecurity Directorate should work to track patterns in vulnerability disclosures, collaborating with researcher and industry partners through the Cybersecurity Collaboration Center where possible. While the resulting trends analysis need not be public, it should remain unclassified for maximal usefulness as evidence to compel investment where gaps or the absence of disclosures appears. This monitoring effort should attempt to understand the sourcing patterns of vulnerability disclosures and where disclosures of a similar style, or those against critical software, cluster.98 This tracking program should help identify where those disclosures significantly decline, perhaps as the result of laws impeding disclosure from other jurisdictions.

If such a gap emerges, the Directorate’s leadership should collaborate with the National Cyber Director, leveraging the office’s budgetary review authorities, as well as existing federal bug-bounty programs to offer added incentives, such as doubled payments, for vulnerabilities like those in the identified gap submitted to private bounty programs. This countercyclical investment could help incentivize further disclosure against critical software and offset the effects of policies that limit disclosures. This program would bring greater awareness of important trends in vulnerability disclosure regardless of the reason such disclosure gaps emerged. This funding would be particularly useful for incentivizing the discovery of vulnerabilities in technologies mature enough to have driven vulnerability density toward sparseness—in other words, the discovery of vulnerabilities in well-scrutinized systems is more valuable from a security standpoint,99 and incentives to find them can help economic rewards reflect that.

Conclusion

The supply of vulnerability disclosures is a significant driver of security outcomes in software. Threats to that supply will, over time, reduce the security of software and add risk for individual users and organizations. This is perhaps most important for open-source software, which thrives on disclosures and contributions more generally from outside the original developer network. As the policy community continues to study the effects of the RMSV and other regulations, greater sensitivity to the potentially diverging effects of these policies on open-source and proprietary code should help motivate wider support for public-sector investments in the health and sustainability of the open-source software ecosystem. For China watchers, the future of enforcement of the RMSV and related policies would benefit from better public study of how the laws apply under varying political conditions and to companies and individual researchers. A more concrete understanding of the law’s practical implementation will help counter the seemingly purposeful ambiguity it has created.

The United States and its allies should see the disclosure of Log4Shell as a call to action to improve the scale and resilience of the global supply of vulnerability disclosure. Domestic legal changes to improve vulnerability research in single countries are useful, but they are insufficient to address the strategic ramifications of a potential supply shock. More can be done to proactively limit the harm from such a moment and improve the state of software security along the way. As a closing note, it is particularly important to acknowledge the general goodwill of researchers in this space. In many ways, the Log4j case illustrates this emphatically—a corporate researcher found and responsibly disclosed a crippling vulnerability in an open-source library directly to its maintainers and kept them abreast of events directly relevant to their remediation timeline, all in spite of the RMSV and other legal contexts and with no apparent profit motive. That kind of relationship, writ large across the security ecosystem, is one well worth preserving.

About the authors

Sara Ann Bracket is a research assistant at the Atlantic Council’s Cyber Statecraft Initiative under the Digital Forensic Research Lab (DFRLab). She focuses her work on open-source software security, software bills of material, and software supply-chain risk management and is currently an undergraduate at Duke University.

Yumi Gambrill is a master’s candidate at Georgetown University’s Security Studies Program. She recently moved back to the United States after nearly seven years in the United Arab Emirates, where she completed her BS in chemistry at New York University Abu Dhabi. After graduation, Yumi was a management consultant at Booz Allen Hamilton, focusing on defense and security strategy and public administration reform in the Middle East and North Africa region. Her professional interests include doctrine and norms development for cyber and hybrid conflict, US-East Asia policy, and public-sector organizational transformation. In her free time, she is an avid baker and maker of Japanese sweets, violinist, and scuba diver.

Emmeline Nettles is a research assistant studying international affairs with minors in Chinese and creative technology at the University of Colorado Boulder. Prior to joining the Atlantic Council, she was an undergraduate research assistant in the University of Colorado Boulder’s political science research lab, STUDIO, where she conducted quantitative analysis on regional international organizations in Africa. Emmeline is interested in continuing to study cyber policy, particularly how the lack of norms in censorship of social media affects access to information as well as vulnerabilities found through open-source intelligence. She speaks some Chinese and German and has experience with a variety of programming languages. In her free time, she continues her Chinese studies and practices Muay Thai.

Acknowledgments

The authors of this report would like to thank Winnona DeSombre, Art Manion, Allen Householder, Nick Reese, Dakota Cary, John Speed Meyers, Chris Rohlf, Jarek Stanley, Sasha Romanosky, Jantje Silomon, Mischa Hansel, and Matthew Tompkins for their feedback and insight during the development of this report. Thank you to Nancy Messiah and Andrea Raitu for data visualization and web design and to Donald Partyka for graphic and document design.

CSI produced this report’s cover image in part using OpenAI’s DALL-E program, an AI image-generating application. After generating the draft image-prompt language, the authors reviewed, edited, and revised the language to their own liking and take ultimate responsibility for the content of this publication.

Appendix I: Data challenges

Gathering from organizational feeds rather than CVE datasets allows for more complex multi-vector acknowledgment, most prominently in instances where a researcher showed an existing vulnerability’s impact on a previously unconnected product. It also allows focusing on specific vendors and product types. For example, the VMware Security Advisory (VMSA) VMSA-2011-0004.3 responds to the following CVEs: CVE-2010-3613, CVE-2010-3614, CVE-2010-3762, CVE-2010-3316, CVE-2010-3435, CVE-2010-3853, CVE-2010-2059, and CVE-2010-3609. The VMware security advisory credits Nicolas Gregoire and US CERT for reporting the issue that one of these CVEs created for its Service Location Protocol daemon: vulnerability to a denial-of-service attack. Most likely, this refers specifically to CVE-2010-3609. However, a GitHub CVE list mentions Gregoire in none of those entries,100 101 though an exploit proof of concept on a common exploit database appears to be authored by Gregoire.102

That is not to say that the GitHub CVE data is “bad,” but that, between the changing standards in its recorded fields and the primary focus of its records, it captures a different segment of the research community. Further analyses could gather valuable insight from scraping both the exploit and GitHub database credits and authorship logs. These analyses will face constraints too: for example, using exploit-db.com filters out all researchers who did not upload proof-of-concept code to that specific database, which likely biases against the inclusion of researchers outside the English-speaking world. Accordingly, Google’s Security Research Team has contributed over 1,000 entries to exploit-db.com, compared to just one from Qihoo 360. In contrast, Apple’s security advisories mention Qihoo 360 more than 300 times, versus 2,000 mentions of Google across multiple departments (Google Security Team, Google’s TAG, Google Project Zero, etc.).

It’s difficult to compare the 1:1,000 and 3:20 ratios directly, especially when credits might refer to CVE discovery, CVE application, or other nuanced forms of acknowledgment—some credits even imply that researchers played a role in developing patches. Nonetheless, working within a company’s public-facing ecosystem will help reduce (but not eliminate) bias against international research and deal with concerns about filtering effects where exploit code must be publicly disclosed for inclusion in a dataset.
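The mention counting behind ratios like those above can be sketched roughly as follows. The credit lines and alias patterns here are hypothetical stand-ins, not the report’s dataset, and real advisories credit organizations under far more name variants than this.

```python
# Minimal sketch: tally how often each research organization appears in a
# corpus of advisory credit lines, matching several name variants per org.
import re
from collections import Counter

# Hypothetical credit lines standing in for scraped advisory text.
credit_lines = [
    "We would like to acknowledge Kunlun Lab of Qihoo 360 for reporting this issue.",
    "Reported by Google Project Zero.",
    "Thanks to the Google Security Team and Qihoo 360 Vulcan Team.",
]

# Map an organization to regex patterns for the names it is credited under.
aliases = {
    "Qihoo 360": [r"Qihoo\s*360"],
    "Google": [r"Google (?:Security Team|Project Zero|TAG)"],
}

counts = Counter()
for line in credit_lines:
    for org, patterns in aliases.items():
        # One line can credit several organizations; count every match.
        counts[org] += sum(len(re.findall(p, line)) for p in patterns)

# counts["Qihoo 360"] == 2 and counts["Google"] == 2 for the sample above.
```

Even in this toy form, the alias list illustrates the comparability problem noted above: how finely one splits “Google” into its departments, or whether a credit denotes discovery, reporting, or patch assistance, changes the resulting ratios.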

Appendix II: The full RMSV

Sourced from http://www.cac.gov.cn/2021-07/13/c_1627761607640342.htm, translation provided by Google Translate.

Notice of the Ministry of Industry and Information Technology and the State Internet Information Office of the Ministry of Public Security on Issuing the Provisions on the Management of Security Vulnerabilities of Network Products

July 13, 2021 17:11

Notice of the Ministry of Industry and Information Technology and the State Internet Information Office of the Ministry of Public Security on Issuing the Provisions on the Management of Security Vulnerabilities of Network Products

Ministry of Industry and Information Technology Network Security [2021] No. 66

All provinces, autonomous regions, municipalities directly under the Central Government and Xinjiang Production and Construction Corps industry and informatization departments, Internet Information Offices, and public security departments (bureaus), and communications administrations of all provinces, autonomous regions, and municipalities directly under the Central Government:

The “Regulations on the Management of Security Vulnerabilities of Network Products” are hereby issued and will come into force on September 1, 2021.

Ministry of Industry and Information Technology State Internet Information Office Ministry of Public Security

July 12, 2021

Provisions on the Management of Security Vulnerabilities of Network Products

Article 1 In order to regulate the discovery, reporting, patching, and release of network product security vulnerabilities, and prevent network security risks, these Provisions are formulated in accordance with the “Network Security Law of the People’s Republic of China”.

Article 2 Providers and network operators of network products (including hardware and software) within the territory of the People’s Republic of China, as well as organizations or individuals engaged in activities such as the discovery, collection, and release of network product security vulnerabilities, shall abide by these Provisions.

Article 3 The Cyberspace Administration of China is responsible for the overall coordination of network product security vulnerability management. The Ministry of Industry and Information Technology is responsible for the comprehensive management of network product security vulnerabilities and undertakes the supervision and management of network product security vulnerabilities in the telecommunications and Internet industries. The Ministry of Public Security is responsible for the supervision and management of network product security vulnerabilities and, in accordance with the law, cracks down on illegal and criminal activities that exploit network product security vulnerabilities.

Relevant competent departments strengthen cross-departmental coordination, realize real-time sharing of network product security vulnerability information, and conduct joint assessment and disposal of major network product security vulnerability risks.

Article 4 No organization or individual may use network product security vulnerabilities to engage in activities that endanger network security, and none may illegally collect, sell, or publish information on network product security vulnerabilities; nor may they, knowing that others intend to use network product security vulnerabilities to engage in activities endangering network security, provide them with technical support, advertising promotion, payment settlement, or other assistance.

Article 5 Network product providers, network operators and network product security vulnerability collection platforms shall establish and improve network product security vulnerability information receiving channels and keep them unblocked, and keep network product security vulnerability information receiving logs for no less than 6 months.

Article 6 Relevant organizations and individuals are encouraged to notify network product providers of security vulnerabilities in their products.

Article 7 Network product providers shall perform the following network product security vulnerability management obligations, ensure that their product security vulnerabilities are promptly patched and reasonably released, and guide and support product users to take preventive measures:

(1) After discovering or learning that security vulnerabilities exist in the network products it provides, it shall immediately take measures and organize verification of the security vulnerabilities, and evaluate the degree of harm and the scope of influence of the security vulnerabilities; for security vulnerabilities existing in its upstream products or components, it shall immediately notify the relevant product provider.

(2) The relevant vulnerability information shall be submitted to the Network Security Threat and Vulnerability Information Sharing Platform of the Ministry of Industry and Information Technology within 2 days. The submitted content shall include the name, model, and version of the network products that have the security vulnerability, as well as the vulnerability’s technical characteristics, harm, and scope of influence.

(3) It should organize the repair of network product security vulnerabilities in a timely manner; if it is necessary for product users (including downstream manufacturers) to take measures such as software and firmware upgrades, it should promptly notify potentially affected product users of the network product security vulnerability risks and repair methods, and provide necessary technical support.

The network security threat and vulnerability information sharing platform of the Ministry of Industry and Information Technology simultaneously reports relevant vulnerability information to the National Network and Information Security Information Notification Center and the National Computer Network Emergency Technology Handling Coordination Center.

Network product providers are encouraged to establish security vulnerability reward mechanisms for the network products they provide, rewarding organizations or individuals who discover and report security vulnerabilities in those products.

Article 8 After a network operator discovers or learns of a security vulnerability in its networks, information systems, or equipment, it shall immediately take measures to verify the security vulnerability and complete its repair in a timely manner.

Article 9 Organizations or individuals engaged in the discovery and collection of network product security vulnerabilities that release information on network product security vulnerabilities to the public through network platforms, media, conferences, competitions, and other channels shall follow the principles of necessity, truthfulness, objectivity, and conduciveness to preventing network security risks, and shall abide by the following provisions:

(1) Vulnerability information shall not be released before the network product provider provides repair measures for the network product security vulnerability; if early release is deemed necessary, the releasing party shall jointly evaluate and negotiate with the relevant network product provider and report to the Ministry of Industry and Information Technology and the Ministry of Public Security, and the information shall be published by the Ministry of Industry and Information Technology and the Ministry of Public Security after evaluation.

(2) Not to publish the details of the security loopholes in the networks, information systems and equipment used by network operators.

(3) Do not deliberately exaggerate the harm and risk of network product security vulnerabilities, and do not use information on network product security vulnerabilities to conduct malicious speculation or conduct fraud, extortion and other illegal and criminal activities.

(4) Not to publish or provide programs and tools specially used to exploit the security loopholes of network products to engage in activities that endanger network security.

(5) When releasing network product security loopholes, it shall simultaneously release repairs or preventive measures.

(6) During the period of major national events, without the consent of the Ministry of Public Security, it is not allowed to release information on network product security vulnerabilities without authorization.

(7) Not to provide undisclosed network product security vulnerability information to overseas organizations or individuals other than network product providers.

(8) Other relevant provisions of laws and regulations.

Article 10 Any organization or individual establishing a network product security vulnerability collection platform shall file with the Ministry of Industry and Information Technology. The Ministry of Industry and Information Technology shall promptly notify the Ministry of Public Security and the Cyberspace Administration of China of relevant vulnerability collection platforms, and publish the vulnerability collection platforms that have passed the filing.

Organizations or individuals who find security vulnerabilities in network products are encouraged to report network product security vulnerability information to the Ministry of Industry and Information Technology’s Network Security Threat and Vulnerability Information Sharing Platform, the National Network and Information Security Information Notification Center vulnerability platform, the National Computer Network Emergency Technology Handling Coordination Center vulnerability platform, or the China Information Technology Security Evaluation Center vulnerability database.

Article 11 Organizations engaged in the discovery and collection of network product security vulnerabilities shall strengthen internal management and take measures to prevent information leakage and illegal release of network product security vulnerabilities.

Article 12 If a network product provider fails to take measures to patch or report network product security vulnerabilities in accordance with these regulations, the Ministry of Industry and Information Technology and the Ministry of Public Security shall deal with it according to their respective responsibilities; if the conduct constitutes a circumstance stipulated in the Network Security Law of the People’s Republic of China, punishment shall be imposed in accordance with those provisions.

Article 13 If a network operator fails to take network product security vulnerability repairs or preventive measures in accordance with these regulations, it shall be handled by the relevant competent departments according to law; if the conduct constitutes a circumstance specified in Article 59 of the Network Security Law of the People’s Republic of China, punishment shall be imposed in accordance with that provision.

Article 14 Those who collect and publish network product security vulnerability information in violation of these regulations shall be handled by the Ministry of Industry and Information Technology and the Ministry of Public Security in accordance with their respective responsibilities; if the conduct constitutes a circumstance stipulated in the Network Security Law of the People’s Republic of China, punishment shall be imposed in accordance with those provisions.

Article 15 Those who use network product security vulnerabilities to engage in activities that endanger network security, or who provide technical support for others to use network product security vulnerabilities to engage in activities endangering network security, shall be handled by the public security organs according to law; those who fall under the circumstances stipulated in Article 63 of the Network Security Law of the People’s Republic of China shall be punished in accordance with that provision; if a crime is constituted, criminal responsibility shall be investigated according to law.

Article 16 These regulations shall come into force on September 1, 2021.

Appendix III: Null finding charts

Contribution counts by country per month, Apple
Contribution counts by country per month, VMware
Contributions by country per month, Apple Safari (browsers)
Contributions by country per month, Apple iOS (mobile operating systems)
Contributions by country per month, Apple macOS (operating systems)
Contributions by country per month, Microsoft Edge and Internet Explorer (browsers)
Contributions by country per month, Microsoft Hyper-V (hypervisors)
Contributions by country per month, VMware hypervisors

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Eric S. Raymond, The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary (O’Reilly Media, Inc., 2001).
2    China’s Cyber Capabilities: Warfare, Espionage, and Implications for the United States, Before the US-China Economic and Security Review Commission, 117th Cong. (2022) (statement of Dakota Cary, research analyst, Center for Security and Emerging Technology), https://www.uscc.gov/sites/default/files/2022-02/Dakota_Cary_Testimony.pdf.
3    Chris Bing, “China’s Government Is Keeping Its Security Researchers from Attending Conferences,” CyberScoop, March 8, 2018, https://www.cyberscoop.com/pwn2own-chinese-researchers-360-technologies-trend-micro/.
4    Cyberspace Administration of China (CAC), “Notice of the Ministry of Industry and Information Technology and the State Internet Information Office of the Ministry of Public Security on Issuing the Regulations on the Management of Security Vulnerabilities of Network Products-Office of the Central Committee of the Communist Party of China,” Pub. L. No. 66 (2021), http://www.cac.gov.cn/2021-07/13/c_1627761607640342.htm.
5    To the reader, because there are several translations of the title, one might also find sources calling the RMSV “Regulations on the Management of Network Product Security Vulnerability,” “the vulnerability disclosure provisions of the Data Security Law,” “Provisions on Security Loopholes of Network Products,” or any other number of synonyms.
6    Amit Yoran, “One in 10 Assets Assessed Are Vulnerable to Log4Shell,” Tenable (blog), December 22, 2021, https://www.tenable.com/blog/one-in-10-assets-assessed-are-vulnerable-to-log4shell?utm_source=charge&utm_medium=social&utm_campaign=internal-comms.
7    “Readout of White House Meeting on Software Security,” The White House, January 13, 2022, https://www.whitehouse.gov/briefing-room/statements-releases/2022/01/13/readout-of-white-house-meeting-on-software-security/.
8    “Responding to and Learning from the Log4Shell Vulnerability” (Washington, DC, February 8, 2022), https://www.hsgac.senate.gov/hearings/responding-to-and-learning-from-the-log4shell-vulnerability.
9    Cybersecurity and Infrastructure Security Agency (CISA), “Emergency Directive 22-02 Mitigate Apache Log4J Vulnerability,” ED 22-02 (2022), https://www.cisa.gov/emergency-directive-22-02.
10    Sophie Yu and Eduardo Baptista, “China Regulator Suspends Cyber Security Deal with Alibaba Cloud,” ed. Gerry Doyle, Reuters, December 22, 2021, https://www.reuters.com/world/china/china-regulator-suspends-cyber-security-deal-with-alibaba-cloud-2021-12-22/.
11    Southern Finance and Economics, “Exclusive丨Alibaba Cloud Is Suspended from the Ministry of Industry and Information Technology’s Network Security Threat Information Sharing Platform Cooperation Unit – 21 Finance,” December 22, 2021, https://m.21jingji.com/timestream/html/%7BU9Pjf0FaKEU=%7D.
12    To the reader, an anonymous source familiar with the matter indicated the former possibility was more likely, which was reiterated by reporting from the Wall Street Journal, though the text of the RMSV law, found in Appendix III of this paper, seems equally applicable. See David Uberti and Liza Lin, “Alibaba Employee First Spotted Log4j Software Flaw but Now the Company Is in Hot Water With Beijing,” Wall Street Journal, December 22, 2021, https://www.wsj.com/articles/china-halts-alibaba-cybersecurity-cooperation-for-slow-reporting-of-threat-state-media-says-11640184511. Other reporting refers to enforcement of the RMSV rather than contract clauses—see Phil Muncaster, “Alibaba Suffers Government Crackdown Over Log4j,” Infosecurity Magazine, December 23, 2021, https://www.infosecurity-magazine.com/news/alibaba-suffers-government/. Regardless of the precise legal lever used, the source of the apparent sanction was Alibaba’s failure to share the vulnerability with the MIIT more promptly, per the company’s own statement, cited in Xinmei Shen’s article, “Apache Log4j Bug: Alibaba Cloud Vows to Boost Compliance after Chinese Ministry Pulls Support for Not First Reporting Security Issue to Government,” South China Morning Post, December 23, 2021, https://www.scmp.com/tech/big-tech/article/3160854/apache-log4j-bug-alibaba-cloud-vows-boost-compliance-after-chinese.
13    Raymond Zhong, “China Fines Alibaba $2.8 Billion in Landmark Antitrust Case,” New York Times, April 9, 2021, https://www.nytimes.com/2021/04/09/technology/china-alibaba-monopoly-fine.html.
14    Cyber Safety Review Board, “Review of the December 2021 Log4j Event” (Arlington, VA: Department of Homeland Security, Cybersecurity and Infrastructure Security Agency, July 11, 2022), https://www.cisa.gov/sites/default/files/publications/CSRB-Report-on-Log4-July-11-2022_508.pdf.
15    Uberti and Lin, “Alibaba Employee First Spotted Log4j Software Flaw but Now the Company Is in Hot Water with Beijing.”
16    Allen D. Householder et al., “The CERT Guide to Coordinated Vulnerability Disclosure” (Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon University, August 2017), 4, https://resources.sei.cmu.edu/asset_files/specialreport/2017_003_001_503340.pdf.
17    Elizabeth Montalbano, “Google Project Zero Cuts Bug Disclosure Timeline to a 30-Day Grace Period,” Threatpost, April 16, 2021, https://threatpost.com/google-project-zero-cuts-bug-disclosure-timeline-to-a-30-day-grace-period/165432/.
18    Priscilla Moriuchi and Bill Ladd, “China Altered Public Vulnerability Data to Conceal MSS Influence,” Recorded Future, March 9, 2018, https://go.recordedfuture.com/hubfs/reports/cta-2018-0309.pdf.
19    Patrick Howell O’Neill, “How China Turned a Prize-Winning IPhone Hack against the Uyghurs,” MIT Technology Review, May 6, 2021, https://www.technologyreview.com/2021/05/06/1024621/china-apple-spy-uyghur-hacker-tianfu/.
20    Dakota Cary (@DakotaInDC), “Today Is My Last Day at @CSETGeorgetown…,” Twitter, April 22, 2022, 11:05 a.m., https://twitter.com/DakotaInDC/status/1517519983718256640.
21    Will Loomis and Stewart Scott, “A Role for the Vulnerabilities Equities Process in Securing Software Supply Chains,” Lawfare Institute, January 11, 2021, https://www.lawfareblog.com/role-vulnerabilities-equities-process-securing-software-supply-chains.
22    Mandiant, “Advanced Persistent Threats (APTs) | Threat Actors & Groups,” Mandiant, accessed August 2, 2022, https://www.mandiant.com/resources/insights/apt-groups.
23    Erik Silfversten et al., The Economics of Vulnerability Disclosure (Athens, Greece: ENISA, 2018), 28, https://www.enisa.europa.eu/news/enisa-news/the-economics-of-vulnerability-disclosure.
24    Silfversten et al., The Economics of Vulnerability Disclosure.
25    To the reader, with such effects, the paper also works to understand what, if any, characteristics of these effects varied with respect to vendors, product types, codebases, and contributions rates.
26    President Joseph Biden, Executive Order, “Improving the Nation’s Cybersecurity, Executive Order 14028 of May 12, 2021,” Federal Register, 86, no. 93 (May 17, 2021): 26633–47, https://www.govinfo.gov/content/pkg/FR-2021-05-17/pdf/2021-10460.pdf.
27    Shalanda Young and Chris Inglis, “M-22-16 | Memorandum for the Heads of Executive Departments and Agencies: Administration Cybersecurity Priorities for the FY 2024 Budget,” July 22, 2022, https://www.whitehouse.gov/wp-content/uploads/2022/07/M-22-16.pdf.
28    Haroon Meer and Thu T. Pham, “History of Vulnerability Disclosure,” Duo Security, August 3, 2015, https://duo.com/labs/research/history-of-vulnerability-disclosure.
29    Ars Technica Staff, “When Google Squares off with Microsoft on Bug Disclosure, Only Users Lose,” Ars Technica, January 12, 2015, https://arstechnica.com/information-technology/2015/01/google-sees-a-bug-before-patch-tuesday-but-windows-users-remain-vulnerable/.
30    Householder et al., “The CERT Guide to Coordinated Vulnerability Disclosure,” 6.
31    Householder et al., “The CERT Guide to Coordinated Vulnerability Disclosure,” 8.
32    “How Google Handles Security Vulnerabilities,” Google, accessed August 2, 2022, https://about.google/appsecurity/.
33    “Vulnerability Disclosure Guidelines,” HackerOne, accessed August 2, 2022, https://www.hackerone.com/disclosure-guidelines.
34    To the reader, for example, BugCrowd’s program list can be found at https://BugCrowd.com/programs, and hackerone’s at https://hackerone.com/directory/programs.
35    To the reader, in addition to outlining timelines and reporting requirements, VDPs often also define what types of research are permitted—for example, some VDPs prohibit testing denial-of-service attacks that would disrupt networks and data. They may also be referred to as Responsible Disclosure Programs, or RDPs.
36    Cybersecurity and Infrastructure Security Agency (CISA), “Binding Operational Directive 20-01 – Develop and Publish a Vulnerability Disclosure Policy,” BOD 20-01, (2020), https://www.cisa.gov/binding-operational-directive-20-01.
37    Debora di Giacomo et al., Coordinated Vulnerability Disclosure Policies in the EU, ed. Evangelos Kantas and Marnix Dekker (Athens, Greece: ENISA, 2022), https://www.enisa.europa.eu/publications/coordinated-vulnerability-disclosure-policies-in-the-eu.
38    To the reader, ENISA notes that there is a lack of standardization among member-state CVD policies stemming from differing legal and economic resources. The report also highlights the Network and Information Security Directive 2 (NIS2), which emphasizes the importance of each country creating its own computer emergency response team (CERT) and recommends the establishment of national vulnerability databases.
39    Mar Negreiro, “The NIS2 Directive: A High Common Level of Cybersecurity in the EU,” European Parliamentary Research Service PE 689.333 (June 2022), 13, https://www.europarl.europa.eu/RegData/etudes/BRIE/2021/689333/EPRS_BRI(2021)689333_EN.pdf.
40    “ISO/IEC 29147:2018,” ISO, https://www.iso.org/cms/render/live/en/sites/isoorg/contents/data/standard/07/23/72311.html. To the reader, this standard is currently behind a paywall, though arguments from security researchers Katie Moussouris and Art Manion swayed the ISO to make it freely available for a time. For more on this, read the following: Juha Saarinen, “ISO Vulnerability Disclosure Standard Now Free,” iTnews, April 18, 2016, https://www.itnews.com.au/news/iso-vulnerability-disclosure-standard-now-free-418253, and Katie Moussouris, “Vulnerability Disclosure Deja Vu: Prosecute Crime Not Research,” Dark Reading, May 12, 2015, https://www.darkreading.com/vulnerabilities-threats/vulnerability-disclosure-deja-vu-prosecute-crime-not-research.
41    Fabiola Schwarz, Jantje Silomon, and Micha Hansel, “Empowering Security Researchers Will Improve Global Cybersecurity,” Just Security, May 6, 2022, https://www.justsecurity.org/81293/empowering-security-researchers-will-improve-global-cybersecurity/.
42    Lucas Ropek, “Missouri Governor Accuses Journalist of Hacking for Warning That State Left Teachers’ Data Exposed,” Gizmodo, October 14, 2021, https://gizmodo.com/missouri-governor-wants-to-prosecute-journalist-for-war-1847866414.
43    US Department of Justice, Justice Manual, Title 9: Criminal 9-48.000 – Computer Fraud and Abuse Act, [updated May 19, 2022], accessed August 2, 2022, https://www.justice.gov/opa/press-release/file/1507126/download.
44    Adi Robertson, “Justice Department Pledges Not to Charge Security Researchers with Hacking Crimes,” The Verge, May 19, 2022, https://www.theverge.com/2022/5/19/23130910/justice-department-cfaa-hacking-law-guideline-limits-security-research.
45    “Vulnerabilities Equities Policy and Process for the United States Government,” November 15, 2017, https://trumpwhitehouse.archives.gov/sites/whitehouse.gov/files/images/External%20-%20Unclassified%20VEP%20Charter%20FINAL.PDF.
46    Schwarz, Silomon, and Hansel, “Empowering Security Researchers Will Improve Global Cybersecurity.”
47    John Sherman, “Memorandum for Senior Pentagon Leadership, Commandant of the Coast Guard, Commanders of the Combatant Commands, Defense Agency and DoD Field Activity Directors | Subject: Software Development and Open Source Software,” January 24, 2022, https://dodcio.defense.gov/portals/0/documents/library/softwaredev-opensource.pdf.
48    To the reader, these regulations are notably different from incident-reporting requirements, which have become more common. Incident-reporting requirements, such as nascent legislation covering critical infrastructure incidents in the United States, or India’s recent, more expansive regulations are premised on the fact that attackers have already made forays against (usually) non-government entities, by abusing either known vulnerabilities, unknown ones, or both. Because they are already able to compromise a target, which does not necessarily even know the exploits involved, there is less, if any, risk that reporting will reveal information about a vulnerability that others will exploit, especially when governments responsibly control what portion of the report is made public.
49    Bing, “China’s Government Is Keeping Its Security Researchers from Attending Conferences.”
50    Justina Alexandra Sava, “Global ICT Market Share 2013 – 2022, By Selected Country,” Statista, March 3, 2022, https://www.statista.com/statistics/263801/global-market-share-held-by-selected-countries-in-the-ict-market/.
51    “IT Industry Outlook 2022,” CompTIA, accessed August 2, 2022, http://connect.comptia.org/content/research/it-industry-trends-analysis.
52    To the reader, a similar study of the impact of better legal protections for researchers in the EU, for example, might struggle with its sample size given the patchwork of legal environments across the EU’s many member-states, whereas the Chinese model is conveniently (for the purposes of this study) monolithic.
53    Dean Cheng, Cyber Dragon: Inside China’s Information Warfare and Cyber Operations (Santa Barbara, California: Praeger, 2017).
54    Cheng, Cyber Dragon.
55    Zhong, “China Fines Alibaba $2.8 Billion in Landmark Antitrust Case.”
56    “China Blocks Didi From App Stores Days After Mega U.S. IPO,” Bloomberg News, July 4, 2021, https://www.bloomberg.com/news/articles/2021-07-04/china-regulator-orders-didi-to-be-removed-from-app-stores.
57    Yan Luo, Zhijing Yu, and Vicky Liu, “The Future of Data Localization and Cross-Border Transfer in China: A Unified Framework or a Patchwork of Requirements?,” International Association of Privacy Professionals (IAPP), June 22, 2021, https://iapp.org/news/a/the-future-of-data-localization-and-cross-border-transfer-in-china-a-unified-framework-or-a-patchwork-of-requirements/.
58    Catalin Cimpanu, “Chinese Government Lays out New Vulnerability Disclosure Rules,” The Record by Recorded Future (blog), July 14, 2021, https://therecord.media/chinese-government-lays-out-new-vulnerability-disclosure-rules/. To the reader, the precise timeline, however, of the specific vulnerability reporting requirements is difficult to track through the flurry of recent cybersecurity regulations in China. This best estimate derives from Bill Goodwin’s reporting at ComputerWeekly and Catalin Cimpanu’s at The Record, indicating that the first draft of the law that included the reporting mandate was published by the Standing Committee of the National People’s Congress of China on July 2, 2020, alongside the draft Data Security Law. See also: Bill Goodwin, “Chinese Law May Require Companies to Disclose Cyber-Security Preparations Outside China,” Computer Weekly, July 3, 2020, https://www.computerweekly.com/news/252485674/Chinese-law-may-require-companies-to-disclose-cyber-security-preparations-outside-China. The key quote citing Goodwin’s reporting is: “The provision that product vendors might need to share vulnerability details with Chinese state agencies has been known and in the public domain since at least 2020.”
59    To the reader, this date also collides with significant US sanctions activity and other cyber legislation initiatives in China, making it both the most likely to show an effect and the least conclusive in the case that it does.
60    Microsoft Security Response Center (MSRC), “Security Update Guide – Vulnerabilities,” Microsoft, [updated August 9, 2022], https://msrc.microsoft.com/update-guide/vulnerability.
61    “Apple Security Updates,” Apple Support, https://support.apple.com/en-us/HT201222.
62    “VMware Security Advisories,” VMware, Inc., https://www.vmware.com/security/advisories.html.
63    AskF5, “Security Knowledge Centers,” F5,  https://support.f5.com/csp/knowledge-center/security.
64    Red Hat Customer Portal, “Security Data,” Red Hat, https://access.redhat.com/security/data.
65    “A Distributed Vulnerability Database for Open Source,” OSV, https://osv.dev/.
66    To the reader, this is not to say that the law prohibits crediting, but rather that, if a researching entity fails to comply with the law, public acknowledgment of their disclosure provides an easy source of enforcement information to the MIIT.
67    To the reader, scripts used for scraping and processing for acknowledgments are included. Data cleaning not reflected in these scripts occurred in Excel.
68    Johannes Wachs et al., “The Geography of Open Source Software: Evidence from GitHub,” Technological Forecasting and Social Change 176, no. 121478 (March 2022), https://doi.org/10.1016/j.techfore.2022.121478.
69    To the reader, more discussion of data challenges can be found in the Appendix.
70    “CVEs,” Tenable, accessed August 2, 2022, https://www.tenable.com/cve.
71    Cary, China’s Cyber Capabilities: Warfare, Espionage, and Implications.
72    “An R Package for Causal Inference Using Bayesian Structural Time-Series Models,” CausalImpact, https://cran.r-project.org/web/packages/CausalImpact/vignettes/CausalImpact.html.
73    To the reader, text produced by the summary method of the CausalImpact package.
74    To the reader, statistical analysis of this effect is in the appendix for brevity, though it is essentially a positive version of the negative RMSV-correlated impact on China-tagged contribution.
75    Jacques Chester, “A Closer Look at CVSS Scores,” Theory of Predictable Software, June 19, 2022, https://theoryof.predictable.software/articles/a-closer-look-at-cvss-scores/.
76    US Department of Defense, “DOD Releases List of Additional Companies, in Accordance with Section 1237 of FY99 NDAA,” news release, August 28, 2022, https://www.defense.gov/News/Releases/Release/Article/2328894/dod-releases-list-of-additional-companies-in-accordance-with-section-1237-of-fy/.
77    Bureau of Industry and Security Department of Commerce, “Addition of Entities to the Entity List, Revision of Certain Entries on the Entity List,” Federal Register, June 5, 2020, https://www.federalregister.gov/documents/2020/06/05/2020-10869/addition-of-entities-to-the-entity-list-revision-of-certain-entries-on-the-entity-list.
78    Paul Mozur, “Qihoo 360’s Zhou Hongyi: Taking Aim at China’s Internet,” Wall Street Journal, November 30, 2012, http://online.wsj.com/article/SB10001424052970204707104578094460340552442.html.
79    Qihoo 360 Technology Co Ltd, “Qihoo 360 Announces Completion of Merger,” Cision PR Newswire, July 15, 2016, https://www.prnewswire.com/news-releases/qihoo-360-announces-completion-of-merger-300299435.html.
80     Viola Rothschild and Hongshen Zhu, “A Crack in the Wall? Not So Fast.,” Council on Foreign Relations (blog), October 15, 2020, https://www.cfr.org/blog/crack-wall-not-so-fast.
81    John Feng, “China Accuses CIA of Hacking Beijing for over a Decade,” Newsweek, July 20, 2021, https://www.newsweek.com/china-accuses-cia-hacking-beijing-over-decade-1611321.
82    360BugCloud (@360bugcloud), Twitter, August 6, 2020, https://twitter.com/360bugcloud.
83    360BugCloud (@360bugcloud), “Qihoo 360 swept the top three …,” Twitter, August 6, 2020, https://twitter.com/360bugcloud/status/1291583230332686339/photo/1.
84    Jonathan Grieg, “CISA Urges Defenders to Update after VMware Patches Vulnerabilities in Multiple Products,” The Record by Recorded Future (blog), August 4, 2022, https://therecord.media/cisa-urges-defenders-to-update-after-vmware-patches-vulnerabilities-in-multiple-products/.
85    Microsoft, “Microsoft Windows Insider Preview: Bounty Program,” microsoft.com, https://www.microsoft.com/en-us/msrc/bounty-windows-insider-preview.
86    Huawei, “Huawei Signs Server OEM Agreement with Red Hat Enterprise Linux – Huawei Press Center,” press release, April 26, 2017, https://www.huawei.com/en/news/2017/4/huawei-oem-agreement-redhat.
87    “Trump, Xi End Summit with ‘Tremendous’ Progress,” Aljazeera, April 7, 2017, https://www.aljazeera.com/news/2017/4/7/trump-xi-end-summit-with-tremendous-progress.
88    Henny Sender, “Cayman Lawsuits Challenge Valuations of Delisted Chinese Companies,” Financial Times, February 28, 2017, https://www.ft.com/content/ed8768f4-fd1a-11e6-8d8e-a5e3738f9ae4.
89    US Department of Defense, “DOD Releases List of Additional Companies, in Accordance with Section 1237 of FY99 NDAA,” news release, August 28, 2022, https://www.defense.gov/News/Releases/Release/Article/2328894/dod-releases-list-of-additional-companies-in-accordance-with-section-1237-of-fy/.
90    Rita Liao and Manish Singh, “GitHub Confirms It Has Blocked Developers in Iran, Syria and Crimea,” TechCrunch, July 29, 2019, https://social.techcrunch.com/2019/07/29/github-ban-sanctioned-countries/.
91    Wachs et al., “The Geography of Open Source Software: Evidence from GitHub.”
92    Silver Keskkula, “What Is This Github You Speak Of?” Medium (blog), September 7, 2016, https://medium.com/@keskkyla/https-medium-com-keskkyla-what-is-this-github-you-speak-of-dd457a29771.
93    Kevin Xu, “Open Source in China: The Game,” Interconnected, May 10, 2020, https://interconnected.blog/open-source-in-china-the-game/.
94    Meaghan Tobin, “China Wants to Build an Open Source Ecosystem to Rival GitHub,” Rest of World, January 19, 2021, https://restofworld.org/2021/china-gitee-to-rival-github/.
95    Daniel Geer et al., “CyberInsecurity: The Cost of Monopoly,” Schneier on Security, September 24, 2003, https://www.schneier.com/essays/archives/2003/09/cyberinsecurity_the.html.
96    Biden, Executive Order 14028.
97    Cyber Safety Review Board, “Review of the December 2021 Log4j Event.”
98    Biden, Executive Order 14028.
99    Dan Geer, “For Good Measure: The Undiscovered,” Usenix, ;Login: 40, no. 2 (April 2015), 50–52, https://www.usenix.org/system/files/login/articles/login_apr15_12_geer.pdf.
100    CVE-Team (Auto-merge), “CVEProject / cvelist,” GitHub, accessed August 2, 2022, https://github.com/CVEProject/cvelist.
101    CVE-Team (Synchronized Data), “CVEProject / cvelist,” GitHub, accessed August 2, 2022, https://github.com/CVEProject/cvelist/blob/master/2010/3xxx/CVE-2010-3609.json.
102    CVE-Team (Synchronized Data), “CVEProject / cvelist,” GitHub, accessed August 2, 2022, https://github.com/CVEProject/cvelist/blob/master/2010/3xxx/CVE-2010-3609.json.

The post Dragon tails: Preserving international cybersecurity research appeared first on Atlantic Council.

]]>
The role of electronic warfare, cyber, and space capabilities in the air littoral https://www.atlanticcouncil.org/content-series/airpower-after-ukraine/airpower-after-ukraine-taking-todays-lessons-to-tomorrows-war/ Tue, 30 Aug 2022 13:00:00 +0000 https://www.atlanticcouncil.org/?p=555654 Electronic warfare, cyber, and space operations are critical to successful information operations in the air littoral fight.

The post The role of electronic warfare, cyber, and space capabilities in the air littoral appeared first on Atlantic Council.

]]>
Air Force Colonel Gene Cirillo once said, “The US Army will never control the ground under the sky if the US Air Force does not control the sky over the ground.” The Russia-Ukraine conflict shows that such control may no longer be possible. Months into the conflict, both sides continue to throw drones, loitering munitions (munitions that loiter around a target area and then strike), and missiles into the sky, to little decisive effect. This contest between offensive weapons and countermeasures has given rise to a new focus on the air littoral: the airspace between ground forces and high-altitude fighters and bombers. The air littoral has been critical in the war as a space for conducting strikes, collecting intelligence to guide artillery fire, and gathering and disseminating propaganda.

In contesting and realizing the larger effects of the air littoral, information warfare plays a critical role through attacking and defending command-and-control links, communications channels, the computers controlling air-littoral weapons, and the space-based services the weapons depend upon. According to the Congressional Research Service, “information warfare” has no official definition, but it is essentially “the use and management of information to pursue competitive advantage, including offensive and defensive operations.” For the air-littoral fight, electronic, cyber, and space warfare are critical to successful information operations. A competitor that is able to leverage electronic warfare, cyber, and space will gain an advantage in littoral airspace.

Countermeasures in the air littoral

Electronic warfare (EW): EW—which intercepts, jams, or disrupts signals through use of the electromagnetic spectrum or directed energy—is commonly used to target drones and, to an extent, loitering munitions. Jammers, which comprise 72 percent of counter-drone systems, sever the link between the drone and the operator or the global positioning system (GPS) signals that the drone relies on for navigation. Numerous other countermeasures fall under the broad definition of information warfare, including spoofing and dazzling, as well as employing lasers and high-powered microwaves. Microwave weapons like the US Air Force’s THOR hold particular promise: Aside from being low cost-per-shot, they also have the ability to hit many targets at once by emitting microwave radiation over a wide area. This capability should make them effective at countering future drone swarms. Likewise, Russia claims to have fielded a new laser weapon for downing drones, which offers the same low cost-per-shot advantage. Jamming the missiles themselves is unlikely to have a large effect, but missiles depend on a sensor-shooter relationship that is itself vulnerable: decoys could deceive the sensors, jammers might sever the communication links between sensors and shooters, and artificial intelligence (AI)-generated deepfakes could prompt shooters to fire on empty fields.

Cyberattacks: Drones and loitering munitions are essentially flying computers, and thus vulnerable to cyberattacks. Such attacks could sever the link between controller and platform, alter code to cause malfunctions, or inject malicious code that turns drones against friendly units or hands an adversary full control of the aircraft. The fact that Ukraine and Russia both employ commercial drones makes such attacks easier to implement: each side can acquire its own copies of the commercial drone and analyze the code in flight controllers, motor controllers, and other critical systems for weaknesses. Cyberattacks on missiles are difficult but not impossible to achieve; such attacks can target missile designs, alter software and hardware, or damage command-and-control systems.

Of course, whether (and how) cyberattacks can be launched on drones and loitering munitions during an active war is an open question. Finding an exploitable vulnerability in highly complex, well-guarded weapons code can be time-consuming; fifth-generation aircraft can have millions of lines of code. Likewise, launching an attack requires various support activities, such as identifying and developing mechanisms to exploit vulnerabilities, building specialized malware, and providing operational management and command and control during the attack. All this incurs opportunity costs: If an adversary’s systems can be manipulated, disrupted, or just blown up, why bother with cyberattacks when conventional attacks can be executed much faster? Plus, what if defenders have strong allies helping them to guard cyberspace?

Space warfare: Satellites provide air-littoral weapons with position, navigation, and timing support, as well as longer-range command and control. Drones and loitering munitions often depend on GPS coordinates for navigation and strike. Jamming GPS signals could prevent accurate targeting, while spoofing GPS signals might cause the weapons to detonate in an empty field. A clever adversary could even spoof the GPS signal so that a weapon navigating to its target’s coordinates arrives instead over one of its own side’s bases. Missiles’ GPS links could also be spoofed or jammed, but doing so is difficult, and missiles carry other, non-GPS-based guidance systems; the end result is mostly degraded accuracy—relevant to precise, single strikes, but less so to hitting large targets such as airfields or concentrated forces. More broadly, attacks on satellite systems providing communication and navigation links could inhibit air-littoral munitions over a broad area, along with any other space-dependent systems.
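The physics behind GPS jamming helps explain why it features so prominently: the satellite signal arrives at a receiver extraordinarily weak (roughly -128.5 dBm for the civil L1 signal), so even a modest jammer nearby can bury it. The back-of-the-envelope link budget below is purely illustrative, assuming a hypothetical 1 W jammer at 10 km with ideal free-space propagation and 0 dBi antennas, not any fielded system.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in decibels."""
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / C))

GPS_L1_HZ = 1575.42e6    # GPS L1 carrier frequency
gps_signal_dbm = -128.5  # typical received power of the L1 C/A signal at the surface

# Hypothetical 1 W (30 dBm) ground jammer, 10 km from the drone, 0 dBi antennas.
jammer_dbm = 30.0 - fspl_db(10_000, GPS_L1_HZ)

j_over_s = jammer_dbm - gps_signal_dbm
print(f"Jammer power at receiver: {jammer_dbm:.1f} dBm")  # about -86.4 dBm
print(f"Jammer-to-signal ratio:   {j_over_s:.1f} dB")     # about 42 dB
```

Even under these generous assumptions the jammer arrives more than 40 dB (over 10,000 times) stronger than the satellite signal, which is why unprotected GPS receivers in the air littoral are such soft targets.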

Drone developments in response

Drone and loitering-munitions technology is evolving, too, shifting—but not eliminating—information vulnerabilities. Drones are becoming increasingly autonomous. The TB2, for example, can take off, cruise, and land without human control. Likewise, Russia is seemingly using the Lancet-3 loitering munition in Ukraine, which is reportedly capable of autonomous target selection and engagement. If these systems do not require human input or GPS, then jammers are far less effective. Still, jammers are not necessarily irrelevant: new, jammable communications might be needed as drones integrate into larger swarms. Likewise, increased autonomy could create new information vulnerabilities: AI systems can be tricked, AI training data poisoned, and more complex computer systems mean more opportunities to cause harm and potentially new points of entry for a cyberattack (a larger digital “attack surface”). Plus, if autonomous features in the weapon system rely on GPS signals, the system is more vulnerable to GPS jamming or spoofing, as well as to cyber or physical attacks on GPS infrastructures.
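Autonomy can also be turned toward defense: an onboard navigation filter can sanity-check GPS fixes rather than trust them blindly. The sketch below is a deliberately crude illustration of one such heuristic, flagging consecutive fixes that imply a ground speed the airframe cannot fly, a typical signature of a spoofed position jump; the `Fix` type, the 60 m/s speed cap, and the flat-earth distance are all illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Fix:
    t: float  # time of fix, seconds
    x: float  # metres east of a local origin
    y: float  # metres north of a local origin

def flag_implausible(fixes, max_speed_mps=60.0):
    """Return one flag per consecutive pair of fixes: True if the implied
    ground speed exceeds what the airframe can physically fly."""
    flags = []
    for a, b in zip(fixes, fixes[1:]):
        dt = b.t - a.t
        dist = math.hypot(b.x - a.x, b.y - a.y)
        flags.append(dt > 0 and dist / dt > max_speed_mps)
    return flags

# A 30 m/s cruise, then a sudden 1,970 m "jump" in one second.
track = [Fix(0, 0, 0), Fix(1, 30, 0), Fix(2, 2000, 0)]
print(flag_implausible(track))  # [False, True]
```

A real system would fuse GPS with inertial sensors rather than rely on a single threshold, but even this simple check shows how added onboard computation can blunt, as well as enable, attacks on the navigation link.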

The evolution of drones, loitering munitions, and countermeasures will affect the tactics and strategies needed to contest enemies in the air littoral. Jammers are often small, handheld devices, allowing them to be shared and used broadly by even dismounted infantry in austere terrain. In contrast, microwave and laser weapons are often relatively big, bulky, and vehicle-mounted. Finding, fixing, and engaging such a vehicle is probably much easier than finding, fixing, and engaging a large number of small, dispersed soldiers. The vehicles are also likely to be far more expensive than a handheld system, so there will be fewer of them, allowing them to be more readily tracked and either avoided or defeated. This dimension shapes how to fight in the air littoral: Should countermeasures be targeted and destroyed, or monitored and avoided?

Readying the force

The biggest takeaway for the United States and allied nations is the need to integrate information warfare, air-littoral capabilities, and capabilities on both sides of the littoral (ground and air, or surface and air) to achieve the desired effects. Achieving this requires information sharing; mutual understanding of what each component can and cannot do; established processes and methods for integration; training and exercises to practice; and doctrine to formalize best practices and concepts. Formal efforts, such as a new NATO Centre of Excellence on the air littoral, could explore these issues in greater detail. The United States and its allies should also launch a formal effort, such as a congressional commission, on information warfare. Such a commission could look broadly across the military services and the broader national community to identify and plug information warfare capability, organizational, and policy gaps. For example, the commission could identify opportunities to create new organizations bringing together the elements of information warfare or to make major new investments in electronic warfare. New thought is needed to succeed in a new area of competition.

***

Zachary Kallenborn is a Policy Fellow at the Schar School of Policy and Government, a Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), an officially proclaimed US Army “Mad Scientist,” and a national security consultant.

Read more essays in the series

Airpower after Ukraine: The future of air warfare

Airpower experts and practitioners examine interim lessons from the war in Ukraine and consider applications for twenty-first century air and space forces.

Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.


]]>
A web of partnerships: Ukraine, operational collaboration, and effective national defense in cyberspace https://www.atlanticcouncil.org/content-series/airpower-after-ukraine/a-web-of-partnerships-ukraine-operational-collaboration-and-effective-national-defense-in-cyberspace/ Tue, 30 Aug 2022 13:00:00 +0000 https://www.atlanticcouncil.org/?p=555802 Partnerships strengthen a nation's cyber defense, as Ukraine's effective web of cyber partnership demonstrates.

The post A web of partnerships: Ukraine, operational collaboration, and effective national defense in cyberspace appeared first on Atlantic Council.

]]>
Many longtime cyberwatchers predicted that the current war in Ukraine would offer a glimpse into the future of cyber warfare. They were correct—just not in the way envisioned. The harbinger of the future has not been a parade of paralyzing Russian operations against critical infrastructure and government services. Rather, if current indications are correct, a capable Ukrainian cyber defense points toward a very different outlook for competition in cyberspace. Specifically, it has highlighted the power of international and cross-sector partnerships in defending against the most sophisticated cyber actors. Although such partnerships are not new, something distinct may be occurring in the Ukraine case with implications for how states compete and organize national defense in cyberspace.

What happened?

Observers of the war were surprised by the lack of Russian cyber effects, with some asking whether Russian operators even tried to show up to the fight. Although Russian cyber operations have seemingly produced fewer meaningful strategic, operational, and tactical effects than anticipated, this outcome certainly did not result from a lack of effort. Russian cyber operators have been targeting Ukraine for years (including a notable attack that disrupted Ukrainian electricity distribution in 2015) and were preparing for the current conflict throughout at least the previous year. The resulting cyber campaign began just before the invasion started in February of this year and has been executed by at least six distinct sophisticated threat actors using eight families of advanced malware capable of disruption and destruction. One in particular (Industroyer2) specifically targets operational technology equipment used in physical industrial processes, such as those involved in electricity distribution. Thus far these operations have targeted at least forty-eight distinct Ukrainian public and industry organizations, including critical infrastructure operators and service providers.

Despite this effort, the effects of these operations appear to have been limited. There are certainly some exceptions, including one noteworthy success in disrupting thousands of Viasat terminals (in Ukraine and beyond) an hour before the invasion commenced. Generally speaking, however, Russian cyber operations have largely failed to pay off on the investment made in resources, people, and time. For instance, despite advances in capability and significant preparatory operations, an attempt to repeat and expand their previous successful attack against the Ukrainian electricity grid failed this time around.

At the same time, Ukraine appears to have had an unexpectedly high and growing number of cyber defense successes. The electricity grid operation failure noted above is not unique. Ukraine has effectively resisted or been resilient to a range of sophisticated Russian operations, from destructive industrial-control-system attacks to intelligence-collection infiltrations. Experts from both industry and academia point to Ukraine’s cyber defense as “the primary reason why Russian cyber efforts have had limited effect.” If true, the success of an apparently weaker state against one of the world’s top offensive cyber powers in a domain where offense has long been assumed to be far easier than defense requires explanation.

A web of partnerships

A number of factors have probably contributed to the success of Ukraine’s cyber defense efforts, but chief among these appears to be partnerships. Although cybersecurity partnerships are not new, success is often elusive and limited. In Ukraine’s case, Kyiv has effectively partnered with numerous capable entities across international, industry, and government lines at the operational level (where the business of cyber defense happens). As a result, Ukraine has been able to leverage an operational partnership web that allows dynamic alignment of disparate technical capabilities, expertise, and authorities for collaborative threat visibility and defensive action.

For instance, Microsoft provided successful defense assistance to Ukraine both by building working connections to Ukrainian cyber defenders and by mobilizing its own relationships with other industry and government partners. In one illustrative example, Microsoft observed a Russian GRU (Russian military intelligence) operation in progress, quickly relayed details to the Ukrainian targets to enable their own internal defensive response, and then worked with US Department of Justice partners (a relationship originally built to take down botnets) to gain legal authority to shut down the attack source domains.

Separately, teams composed of US soldiers from US Cyber Command and civilians from American companies deployed prior to the invasion to help prepare Ukrainian defenses. They built working relationships with Ukrainian infrastructure operators, which helped to prevent attacks on the most critical systems—from railroad infrastructure to border control networks. In several cases, they leveraged relationships with private cybersecurity firms and other government entities to provide defensive solutions tailored to the threats they found.

In these cases (and others), successful defense relied on dynamically aligning technical capabilities, expertise, and legal authorities that are internationally distributed across different public and private entities. This ability to collaboratively see and act in a common cyber defense was enabled by a distributed web of operational partnerships between states, between companies, and between governments and private firms.

Implications

If initial indications are correct, the Ukraine case holds several important implications for defense in cyberspace. First, the assumption that offense has the advantage over defense in cyberspace appears to be on shakier ground. Although a handful of scholars have rightly questioned the foundations of the idea of offense dominance in cyberspace, they are very much in the minority; offense dominance is the conventional wisdom in cyber scholarship and policy. As one scholar noted, “unknown vulnerabilities, unpredictable threats, complex defense surface, and supply chain risks add up to costs that far outweigh those of offense.” Ukraine may be the first real-world example of this idea’s fragility at a broad campaign level. As a recent report on Ukraine concluded, “these cyber defenses have proven stronger than offensive cyber capabilities.”

Second, the Ukraine case further opens the door to the potential of denial strategies in cyberspace, though not as traditionally considered. Ukraine’s cyber defense, facilitated through partnerships, involved better-informed traditional cybersecurity activities within targeted systems, as well as defensive actions beyond firewalls. If replicated elsewhere, this would likely expand the range of areas where the cost of pursuing targets for offense outweighs its potential gain (denial strategies can both increase costs and reduce gains in an adversary’s calculus), particularly into areas where current deterrence falls short (primarily activities judged to be below the threshold of armed conflict).

Finally, the Ukraine experience extends a lesson from other domains into cyberspace: how capabilities are organized and used in combination matters as much as, or more than, the characteristics of the capabilities themselves. Investment in cybersecurity continues to grow, yet the number of successful sophisticated hacks that threaten critical systems continues to rise. A key challenge is that these individual investments in cyber defense technology and people are fractured within and between different states and public and private organizations. Organizing these investments to operate in collaborative defense has the potential to counter sophisticated actors whose cyber campaigns often rely on exploiting these fractures. This further implies an important takeaway from this case: those states with the ability to develop, organize, and leverage operational international and cross-sector partnerships will have a significant comparative advantage over those with weaker partnership options and capacity.

Looking forward

Looking beyond Ukraine, these implications suggest the need to refine approaches to international and cross-sector cybersecurity partnerships. First, the United States and its allies (public and private) should consider a revised partnership strategy that focuses on building a thick web of relationships among disparate capable actors (domestic, international, government, industry, civil society, etc.). Leveraging insights from previous partner-building efforts to expand operational interconnections and enable others to do the same would help mitigate the uncertainty of cyber risk and provide increased adaptability in threat visibility and mitigation. Such a strategy would also place greater emphasis on two partnering lessons that are often addressed individually: the need for both leadership buy-in and involvement, and the need for operational-level engagement. Without leadership buy-in, operators must contend with limited resources and scope of authority. Without pushing interactions below the C-suite and E-Ring to the people conducting the actual business of cyber defense, partnerships are more executive chatter than action.

Second, developing greater integration of effort across these partnerships would seem to be a critical parallel effort. Although often driven by individual organizational interests, varying degrees of common interest in cyber defense exist to enable collaborative interaction for mutual benefit. Investing in strategic and operational focal points for collaboration has already proven useful where they exist and would provide ready platforms for cross-community sharing, collaborative analysis, and alignment of effort.

Finally, steering cybersecurity technology development to facilitate this approach would help amplify its effects. New technology solutions should enable easy (where possible, automated) exchange and coordination among organizations with distinct interests, resources, and requirements (as well as trust, policy, and legal demands). One ongoing project, for example, performs large-scale real-time anomaly detection while meeting the various privacy requirements of multiple participants.

Although it is still too early to make comprehensive assessments of cyber conflict in Ukraine, the lessons drawn from the partnerships involved in defense will most likely affect future cyber competition. Cyberwatchers should be closely observing the Ukraine case to discern how these partnerships are developed, how they operate, and what influences their effectiveness. The insights revealed may enable policy makers to reduce the risk of cyberattacks where it matters most and provide comparative advantage to those with the greatest partnership capacity.

***

Sean Atkins is an active duty Air Force officer currently serving as Department Chair of the Joint All Domain Strategist program at the Air Command and Staff College. He holds a PhD in Political Science from the Massachusetts Institute of Technology.

The views expressed in this article are those of the author and do not necessarily reflect the official policy or position of the US Air Force, Department of Defense, or the US government.



]]>
Air denial: The dangerous illusion of decisive air superiority https://www.atlanticcouncil.org/content-series/airpower-after-ukraine/air-denial-the-dangerous-illusion-of-decisive-air-superiority/ Tue, 30 Aug 2022 13:00:00 +0000 https://www.atlanticcouncil.org/?p=555841 The air war in Ukraine challenges traditional paradigms of air superiority. US and allied air forces must instead contemplate air denial strategies.

The post Air denial: The dangerous illusion of decisive air superiority appeared first on Atlantic Council.

]]>
Of all the surprises Ukraine had in store for Russia’s invading forces, perhaps the biggest is Ukraine’s denial of air superiority to a larger and more technologically sophisticated Russian air force. Given that the Russians have shown themselves incapable of conducting complex air operations, it is tempting to conclude that the air war in Ukraine holds few lessons for the United States and other Western air forces. They would surely do better than the Russians in a war like Ukraine. This is a comforting conclusion for Western defense analysts: If Russian failure is mainly self-inflicted, then the air war in Ukraine does not challenge existing doctrine and expensive modernization priorities. Although comforting, such confidence is misplaced.

The air war in Ukraine is a harbinger of air wars to come, when US adversaries will increasingly employ defense in vertical depth, layering the effects of cyber disruptions, electromagnetic jamming, air defenses, drones, and missiles in increasing degrees of strength, from higher to lower altitudes. Even if high-end fighters and bombers manage to gain air superiority in the “blue skies,” the airspace below them remains contested. The “air littoral”—that is, the airspace between ground forces and high-end fighters and bombers—then poses the more challenging and important contest for air control.

Denying manned aircraft—from the blue skies to the air littoral

Ukraine has successfully practiced a strategy of air denial, based on a defense-in-vertical-depth approach that employs multilayered and overlapping systems and integrates their effects across the domain, from the blue skies to the air littoral. As a result, Kyiv has managed to deny Russian manned aircraft freedom of movement over most of Ukraine while simultaneously operating its own, increasingly unmanned assets in the air littoral.

The outer layer of Ukrainian defenses consists of mobile surface-to-air missiles, dating back to the Cold War era, which cover approaches from the blue skies. Ukrainian defenders on the ground have used long-range S-300 series and medium-range Buk-M1 surface-to-air missiles to keep Russian aircraft at bay and under threat in the blue skies. Employing “shoot-and-scoot” tactics, Ukrainian air defense units fire their missiles and quickly turn off the radar and move away—making it difficult for the Russians to find and destroy them. During the 1991 Gulf War, the US-led coalition employed strike aircraft and special forces to hunt Iraq’s truck-mounted Scud missiles, but even with the benefit of air superiority, Iraq’s effective use of maneuver and high-fidelity decoys prevented the Air Force from claiming even a single confirmed kill. Russia’s hunt for Ukraine’s mobile surface-to-air missile defenses is even more challenging: its aircraft are “not only the hunter but also the hunted.” Russian pilots are therefore wary of entering Ukrainian airspace to conduct close-in strikes. As long as Ukraine maintains an active and credible threat against Russian warplanes—an air defense in being—its force is sufficient to deny Russia unfettered use of the blue skies over most of Ukraine.

Desperate to avoid these dangers, Russian warplanes have resorted to flying at low altitudes. Although this tactic allows these aircraft to evade radar detection by high-end surface-to-air missiles, it sends them right into the thick of Ukraine’s inner layer of air defenses—the air littoral. Flying at low altitude, Russian fixed-wing aircraft and helicopters are easy prey for Ukraine’s anti-aircraft artillery and thousands of shoulder-fired air defense systems, including some 1,400 American-supplied Stinger missiles. Ukraine has even reportedly used anti-tank missiles to shoot down low-flying Russian attack helicopters. The Ukraine case offers a glimpse of future wars, where the advantage will shift toward cheap mass and away from small numbers of expensive, exquisite manned aircraft.

Ukrainian defenders enjoy an inherent “home-court” advantage in the air littoral. “Ukraine has been effective in the sky because we operate on our own land,” according to Yuri Ihnat, a spokesman for the Ukrainian Air Force. He explained, “The enemy flying into our airspace is flying into the zone of our air defense systems.” The Ukrainians have intimate knowledge of the local topography, which they have exploited to lure Russian planes into their air defense traps. The compressed size of the air littoral not only restricts a pilot’s field of vision and makes it harder to detect incoming threats, but it also critically reduces the window for deploying evasive countermeasures. Taken together, these factors transform the air littoral into a robust and very lethal inner defensive layer.

Ukraine has also shown that defense in vertical depth is most effective when the defender exploits the interactions between the blue skies and the air littoral. Early in the war, Ukraine used Turkish-made Bayraktar TB2 drones, operating in the air littoral, to strike Russian convoys and ground troops. Before the Russians had set up their air defenses on the battlefield, they had little choice but to use their high-end fighters to hunt these weapon systems. Ukraine might have used the TB2 as a “decoy” to draw these aircraft from the blue skies into the air littoral, where Ukrainian defenders were ready to shoot them down. Russia has now taken a page from the Ukrainian playbook, introducing more Russian S-400 air defense batteries and drones to keep Ukrainian pilots from regularly flying through the Donbas area. The result is a state of mutual air denial: neither Russia’s nor Ukraine’s manned aircraft can operate consistently or effectively near the front lines.

Fighting robotically in the air littoral

Although Ukraine has denied Russia—and Russia has denied Ukraine—the effective use of manned airpower, Ukrainian defenders have exploited cheap and easy robotic access to the air littoral. Since the advent of military aviation, only major powers have been able to surmount the financial, organizational, technological, and scientific barriers to employing large and advanced air forces. Today, however, the democratization of technology—the diffusion of multi-use technologies, rapidly decreasing costs, and the Internet’s global reach—makes cheap but effective robotic airpower available to most countries. The TB2 has placed reconnaissance and precision-strike capabilities in the hands of Ukraine for a fraction of the price of manned intelligence, surveillance, and reconnaissance (ISR) and strike platforms. In addition to military drones, Ukrainian forces also reportedly operate more than 6,000 small commercial drones in a variety of ISR roles, including surveilling Russian movements, spotting for artillery, and inspecting buildings, as well as documenting Russian war crimes. This ability to maneuver in the air littoral adds a “spherical challenge,” with threats in both horizontal and vertical dimensions.

As the fighting has moved east to the Donbas region, both Russia and Ukraine have adapted their tactics. Russia has improved the density and organization of its ground-based air defenses, as well as its electronic warfare capabilities, and ramped up its use of military-grade and commercial drones to surveil the battlefield, retarget weapons, and drop explosives on Ukrainian positions. Ukraine has not stood idle, however; instead, it has adjusted its drone tactics. While Kyiv has had to scale back its use of the TB2, reserving it largely for high-value strikes in other areas, it has turned to expendable “kamikaze drones,” or “loitering munitions,” to strike Russian ground targets from the air littoral.

Increased jamming and a supposed lack of survivability have not rendered drones obsolete, however. Instead, the contested environment in eastern Ukraine has demonstrated the value of leveraging drones as cheap attritable mass. Whereas steep losses in manned aircraft quickly thinned Russia’s ranks of trained and experienced pilots, heavy Ukrainian drone losses are more sustainable—their operators live to fight another day, having gained wartime experience ready for immediate application. Gen. David Goldfein, the former chief of staff of the US Air Force, acknowledged that it takes a decade and between $6 million and $10 million on average to train a fighter pilot. Russia may not have the same exacting standards, but the mounting death toll still limits its force generation and regeneration. The result has been to push the fight further down into the air littoral, where Russia has run short of armed reconnaissance drones and currently lacks the capacity to mass produce these cheap systems at scale.
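The cost-exchange logic behind cheap attritable mass can be made concrete with back-of-the-envelope arithmetic. The pilot-training cost range is the one cited above; the drone price is an assumed figure for a small commercial quadcopter, chosen purely for illustration:

```python
# Rough cost-exchange sketch: losing cheap drones vs. losing trained pilots.
# The pilot-training figure is the midpoint of the cited $6-10 million range;
# the drone price is an illustrative assumption, not a sourced estimate.

PILOT_TRAINING_COST = 8_000_000   # USD, midpoint of the cited range
COMMERCIAL_DRONE_COST = 2_000     # USD, assumed small commercial quadcopter

def drones_per_pilot(pilot_cost: float, drone_cost: float) -> float:
    """How many expendable drones equal the sunk training cost of one pilot."""
    return pilot_cost / drone_cost

ratio = drones_per_pilot(PILOT_TRAINING_COST, COMMERCIAL_DRONE_COST)
print(f"One lost pilot's training cost ~= {ratio:,.0f} lost commercial drones")
# With these assumptions, thousands of drone losses cost less than a single
# downed pilot -- and the drone operators survive to apply what they learned.
```

Under these assumed prices, one pilot's training cost equals roughly 4,000 commercial drones, which is the asymmetry driving the shift described above.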

Preparing for air denial

The United States and other Western air forces need to prepare for this future now. A strategy of air denial might be the smarter and more economical choice when trying to preserve the status quo on NATO’s eastern flank or across the Taiwan Strait. By employing sufficiently large numbers of smaller, cheaper, unmanned systems in a distributed way, the United States and its allies and partners would increase both the costs and uncertainty of Chinese or Russian efforts to quickly seize territory and present their conquest as a fait accompli. Such a strategy requires moving away from the capable but costly and numerically limited high-end fighters and bombers in favor of more unmanned and autonomous systems. It also requires moving away from penetration and precision strike with manned aircraft to swarming tactics of denial with thousands of cheap small-sized drones. Fighter pilots still capture the Western imagination—this year’s highest-grossing box office hit, Top Gun: Maverick, suggests that the mystique of the fighter pilot holds strong—but that kind of aerial combat is the exception to the rule. The future of air warfare is denial.

***

Maximilian K. Bremer is a US Air Force colonel and the director of the Special Programs Division at Air Mobility Command. The opinions expressed here are his own and do not reflect the views of the Department of Defense and/or the US Air Force.

Kelly A. Grieco is a resident senior fellow with the New American Engagement Initiative at the Atlantic Council’s Scowcroft Center for Strategy and Security.

Read more essays in the series

Airpower after Ukraine: The future of air warfare

Airpower experts and practitioners examine interim lessons from the war in Ukraine and consider applications for twenty-first century air and space forces.

Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The post Air denial: The dangerous illusion of decisive air superiority appeared first on Atlantic Council.

Early lessons from the Russia-Ukraine war as a space conflict
https://www.atlanticcouncil.org/content-series/airpower-after-ukraine/early-lessons-from-the-russia-ukraine-war-as-a-space-conflict/ (Tue, 30 Aug 2022)

The Russia-Ukraine war may be remembered as the first two-sided space war, offering four preliminary lessons for future conflicts.

The 1991 Persian Gulf War is often called “the first space war” owing to the American military’s use of global positioning systems and other space-based technologies—the first of several US conflicts against opponents with no space capabilities. Three decades later, the Russia–Ukraine war is perhaps the first two-sided space war.

As a potential harbinger of the future, Russia’s war in Ukraine offers four preliminary lessons for political and military leaders. First, despite having no indigenous space capability, Ukraine has made effective battlefield use of space-based communications and intelligence, surveillance, reconnaissance (ISR) assets from US and European commercial providers. Second, for all the attention on kinetic anti-satellite (ASAT) weapons, Russian counterspace attacks have been limited to the cyber domain—achieving some success and causing collateral damage in NATO countries. Third, commercial space will only grow in importance in conflicts, while policy makers in Western countries have yet to make clear when and how they would protect commercial assets. Last, Russia is gaining surprisingly little advantage from its space capabilities, reflecting the long-term weaknesses of the Russian space industry—weaknesses not shared by China, however.

Combatants can conduct space-enabled operations without owning space assets

In 2022, Ukraine had no national space capability. Nevertheless, space systems, in the form of third-party commercial and government assets, have played an important role in the Ukrainian war effort. The Ukrainian military makes extensive use of commercial satellite communications, in particular using satellite links to share data for its networked artillery system. (GIS Arta, sometimes called “Uber for Artillery,” is an Android app that collects target information from drones, US and NATO intelligence feeds, and conventional forward observers, then distributes orders to fire among multiple artillery units to make counterbattery fire more difficult.) Ukraine obtains high-resolution imagery from Western commercial firms, including synthetic-aperture radar that can “see” at night and through clouds. Specifics on Ukraine’s military use of commercial images are scarce, but the available resolution and timeliness of such images should make them tactically valuable. Commercial imagery can show individual military vehicles, and constellations of multiple satellites can image any target every few hours. This capability provides enough information to enable warfighters to attack fixed targets, or to cue assets such as unmanned aerial vehicles to the vicinity of mobile targets. The United States is also reportedly sharing imagery or signals intelligence from classified collection satellites.
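GIS Arta's actual implementation is not public; as a toy illustration of the dispatch concept described above (collect a target, then fan a fire mission out across several dispersed in-range batteries so counterbattery radar sees no single, predictable firing position), consider this sketch, in which every name, range, and rule is an assumption:

```python
# Toy sketch of an "Uber for Artillery"-style dispatcher. GIS Arta's real
# algorithm is not public; this only illustrates the concept of spreading
# fire missions across multiple in-range batteries. All values are assumed.
import math
import random

def in_range(battery, target, max_range_km):
    """Euclidean distance check on flat (x, y) kilometer coordinates."""
    return math.dist(battery["pos"], target) <= max_range_km

def assign_fire_mission(batteries, target, max_range_km=25.0, shooters=3, seed=None):
    """Pick up to `shooters` randomly chosen in-range batteries for one target."""
    rng = random.Random(seed)
    candidates = [b for b in batteries if in_range(b, target, max_range_km)]
    rng.shuffle(candidates)  # randomize which batteries fire this mission
    return [b["name"] for b in candidates[:shooters]]

batteries = [
    {"name": "B1", "pos": (0.0, 0.0)},
    {"name": "B2", "pos": (10.0, 5.0)},
    {"name": "B3", "pos": (40.0, 40.0)},  # out of range of this target
    {"name": "B4", "pos": (5.0, 12.0)},
]
print(assign_fire_mission(batteries, target=(8.0, 8.0), seed=1))
```

Because the assignment is randomized per mission, repeated strikes on the same sector need not come from the same positions, which is the property that complicates counterbattery fire.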

The war in Ukraine demonstrates that what matters is having access to the products of space systems, not owning the satellites. With the explosion in commercial communications and imaging services, many combatants will have such products. Access will not be universal, however. Western companies are far in the lead in their capabilities and are subject to formal and informal limits on the customers to whom they sell data. Iran or North Korea could not buy the level of space-based services that Ukraine has at any price. Western governments should see this as a comparative advantage in supporting partners relative to what Russia or China can provide to their clients. Facilitating commercial access, supplying funding, and offering training in the use of commercial space products (or sharing classified products) can affect battlefield performance in a tangible way; moreover, such efforts are relatively low cost and perhaps less visibly provocative than weapons shipments.

Counterspace operations are more likely to be cyber or electronic than kinetic

In November 2021, Russia tested its Nudol kinetic ASAT weapon and created a cloud of orbital debris that threatened astronauts and satellites of many nations. Whether or not that demonstration was meant as a warning to NATO regarding Ukraine, there are no reports of physical space attacks being attempted. Russian cyberattacks, however, have succeeded. On the first day of the conflict, a Russian operation used destructive malware to disable tens of thousands of user terminals of ViaSat, a US-based commercial network, requiring factory repair of the devices before they could function again. The Ukrainian military was a heavy ViaSat user and the obvious target. Following that attack, SpaceX collaborated with Ukraine to deploy Starlink terminals. SpaceX leaders report that Russia has also attacked their service, so far unsuccessfully.

Space experts had assessed that cyber and electronic jamming would be more likely than physical space attacks, for several reasons: cyberattacks do not create debris, are less expensive than building interceptor missiles, offer deniability, and are probably less likely to spur armed retaliation. Developments in Ukraine also demonstrate the value of redundancy against ASAT attacks—that is, relying on large numbers of individually expendable satellites instead of a handful of large satellites. Starlink has twenty-five hundred satellites in service—too many for Russia to shoot down with its few, expensive interceptors. Communications and remote sensing services will continue to shift toward these so-called “mega-constellations.” The success of Russia’s attack on ViaSat, however, shows that an invulnerable satellite fleet is irrelevant if cyberattacks can impair its ground-based control systems and user access.
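The redundancy argument is, at bottom, simple arithmetic. The constellation size below is the figure cited above; the interceptor stockpile and kill probability are hypothetical assumptions chosen to illustrate the point:

```python
# Illustrative arithmetic on mega-constellation redundancy vs. kinetic ASAT
# attack. The ~2,500-satellite Starlink figure comes from the essay; the
# interceptor count and kill probability are assumptions for illustration.
CONSTELLATION_SIZE = 2500
INTERCEPTORS = 50         # assumed stockpile of kinetic ASAT interceptors
KILL_PROBABILITY = 1.0    # generously assume every shot destroys a satellite

destroyed = INTERCEPTORS * KILL_PROBABILITY
surviving_fraction = (CONSTELLATION_SIZE - destroyed) / CONSTELLATION_SIZE
print(f"Surviving fraction after attack: {surviving_fraction:.1%}")
# Even a flawless 50-missile salvo leaves 98% of the constellation in service,
# consistent with the essay's point that the ground segment and user access,
# not the satellites themselves, are the softer targets.
```

A single malware operation against the ground segment, by contrast, disabled tens of thousands of ViaSat terminals at once—which is why the redundancy of the space segment does not by itself secure the service.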

Commercial firms as important actors—and targets?

The Russia-Ukraine war highlights the explosive growth of the commercial space sector. Although the US military has long leased bandwidth on commercial satellites, the integration of Starlink at the battlefield level and the tactical use of commercial remote sensing is groundbreaking. Unsurprisingly, Russia says the satellites of companies working directly with the Ukrainian military are legitimate military targets—and the Russians are probably correct under international law. The international community accepts the established principle that third parties directly and knowingly contributing to a combatant’s war effort can be attacked, within the limits of proportionality and when causing minimal collateral damage. Recent articles in Chinese military newspapers suggest that the Chinese also believe Starlink could be a valid target in a future conflict.

It is unclear how the United States and its allies would respond to attacks on commercial space systems, whether by physical or cyber means. Russia’s successful ViaSat attack caused significant property damage to civilians in NATO nations, requiring tens of thousands of terminals to be replaced and causing disruptions, such as knocking thousands of wind turbines off the European electric grid for days. Satellite operators have been asking governments for more assistance in securing their systems and for more clarity about what governments will do to protect them; the current lack of clarity risks causing miscalculation by adversaries.

Evaluating Russian space capabilities (and lessons about China?)

Despite the long history of Soviet and Russian spaceflight, it is not obvious that the Russian military has benefited more from space than the Ukrainian side. Russian command-and-control difficulties, the absence of an apparent ISR advantage, and surprisingly large errors from Russian precision munitions (presumably GLONASS-guided) all hint at less effective employment of space systems than that of the United States or its more capable allies. This is not entirely surprising, however. Russian military communications and surveillance satellites lag far behind those of the United States in numbers and technology; Russia may have only two operational military imaging satellites. Technology sanctions imposed in 2014 set back the development of Russian space capabilities. Some Russian munitions may have been built with chips pulled from consumer appliances, but there is no alternative source for the unique radiation-hardened chips needed in satellites. Strict technology sanctions and the likely decline in Russian government revenues make it doubtful that Russia can close the space gap.

In the future, China would most likely be a more adept military space power than Russia. Beijing has launched dozens of military ISR satellites in the last five years. China has an emerging commercial space sector, and, unlike Russia, it has a sophisticated domestic electronics industry that can supply components for advanced military satellites. Russia might still lead China in ASAT missiles and a few other areas, but in most respects Chinese military space capabilities have surpassed those of Russia in quantity and technology. How the Chinese military fares at exploiting and integrating space capabilities in a real conflict remains to be seen.

Policy recommendations

Several implications flow from these observations:

  1. Space-based information services are a key enabler that the United States and its allies can provide to partner nations, especially “middle powers” with some technical proficiency (as opposed to less developed militaries, as in Afghanistan or Iraq).
  2. Redundant mega-constellations offset adversaries’ kinetic ASAT weapons, but cybersecurity at all levels must be a critical design and operational focus of space systems.
  3. The US commercial space sector is a strategic asset, but the United States and its allies need to develop clear policies for protecting commercial systems, whether through defense or deterrence.
  4. Although China has long been seen as “behind” Russia in space, that view is outdated. US military planners should assume China will likely make more effective use of space capabilities in a future conflict than Russia has in Ukraine.

***

David T. Burbach is an Associate Professor of National Security Affairs, US Naval War College. The ideas expressed in this essay are the author’s personal views and do not represent those of the Naval War College or the US government.

Information warfare in the air littoral: Talking with the world
https://www.atlanticcouncil.org/content-series/airpower-after-ukraine/information-warfare-in-the-air-littoral-talking-with-the-world/ (Tue, 30 Aug 2022)

Information operations play a crucial role in generating mass in the air littoral, the airspace between ground forces.

In the early days of the ongoing war in Ukraine, Kyiv put out calls over Facebook for civilians to donate their drones or sign up to join drone units. Informal donation pages were set up, too, along with online efforts to bring civilian drones into the country. Russian volunteers caught on and tried to emulate the practice, although their attempts were less successful than the Ukrainians’ efforts. Nevertheless, the donation of drones supports both actors in generating and sustaining concentrated military power (or mass in military parlance)—a significant factor in the contest over the air littoral, the airspace between ground forces and high-altitude fighters and bombers.

The importance of mass in the air littoral

The systems that are employed to contest the air littoral—drones, loitering munitions, and low-flying missiles—are often cheap and disposable. Swarming attacks of numerous drones, loitering munitions, and missiles can overwhelm target defenses, but with high attrition rates. If stocks run out and cannot be replenished, the air littoral cannot be used for guiding artillery strikes or for gathering and sharing propaganda. Global public-facing information warfare operations can encourage the building of mass, hinder adversary attempts to build mass, and reduce the strategic effects of air-littoral competition.

The role of information operations in generating mass

Information operations may encourage (or hinder) support from allies in generating mass. The United States provided Ukraine with hundreds of Switchblade loitering munitions. Though American national interest was certainly an influential factor, Ukraine’s success in garnering international sympathy for its unexpected combat prowess and capacity to fight the Russian army also played a big role. The Ukrainians have used memes of “Saint Javelin” and farmers towing away Russian tanks to crowdsource military and humanitarian donations. Lithuania provides the clearest example: the nation crowdfunded five million euros to buy Ukraine a new Bayraktar TB2 drone. Then Turkey gave the TB2 to Ukraine for free, suggesting the funds be used for humanitarian support. Ukraine also generated mass through an unconventional source: civilians. Although not an information operation itself, civilian engagement may support a larger narrative about how all of Ukrainian society is deeply committed to the war effort.

Of course, since early 2014, Russia has also launched its own information operations, often centered on weapons and defenses for contesting the air littoral. Russia continues to push disinformation regarding a fake Ukrainian chemical and biological weapons program to justify the invasion and discourage sympathy and support for Ukraine. The Russian Ministry of Defense has even accused Ukraine of conducting a “drone chemical attack” against Russian forces. In addition, Russia has conducted information operations seemingly designed to degrade Ukraine’s ability to generate mass in the air littoral. For example, Russia claims to have fielded a new anti-drone laser, but the United States has pushed back on the report, with a Department of Defense official saying that he had not seen “anything to corroborate reports of lasers being used” in Ukraine. Although it is possible that the United States might have just not found the evidence, disinformation about fielding a fancy new countermeasure could be intended to discourage Western drone resupply and induce greater caution on Ukrainian drone deployments.

In addition, cyber warfare—another important aspect of information warfare more broadly—can help generate mass while attempting to disrupt the other side’s ability to do the same. For example, the hacking collective Anonymous, furious with Russian actions in Ukraine, claims to have hacked drone manufacturers, capturing various documents on planning and tactics (exactly how useful these documents are remains unclear). Such information could be used to design better countermeasures or improve Ukrainian systems. Alternatively, cyber espionage and attacks could be used to identify potential vulnerabilities—cyber, physical, or electronic—to sabotage supply chains, targeting critical part manufacturers when Russia has few (or no) alternative producers. More broadly, this example illustrates the importance adversaries place on the use of information operations to generate and sustain mass in the air littoral, and the growing importance of physical, electronic, and cyberattacks to interdict air-littoral weapon systems.

Information environment in the air littoral

An open question is how best to counter such efforts. The Russia-Ukraine conflict has seen significant use of Distributed Denial of Service (DDoS) attacks, which could be leveled against websites hosting drone-recruitment messages or against local Internet providers. Alternatively, an adversary could, say, hack into the Facebook account hosting the message, or set up a fake effort to divert some of the drones. Taking down an entire channel would be difficult and would most likely produce only limited effects—the longest Facebook outage in history lasted 14 hours. Nevertheless, the open-source nature of social media websites could allow an adversary to collect useful intelligence. If an adversary knows the manufacturer and model of the drones being provided, it can also know operating parameters, potential vulnerabilities, and which countermeasures are most effective. It could also target supply chains, perhaps through information attacks.

A civilian’s drone-captured footage of Russian troop movements has little impact if the civilian cannot share the footage with those individuals capable of attacking the troops, emplacing obstacles to inhibit movement, avoiding the troops, or otherwise reacting to troop movements. Likewise, the civilian almost certainly will not know which unit to call. That means the military would require the capacity to find the video on the Internet, provide an alternative means for the civilian to upload the video, and relay the video to the appropriate units.

Of course, delays in information sharing can still have meaningful effects. A Ukrainian drone captured footage of a Russian soldier appearing to shoot a civilian who had surrendered. If the operator had to wait weeks or months to share the video, the opportunity for it to have an impact could have been lost: states might have already decided whether to provide or withhold support. The video might go viral, landing on the front pages of world newspapers, but the conflict may be too far along for it to make a difference. Even more modest delays—days or just hours—might prevent action on particularly time-sensitive information. Direct attacks on popular information-sharing channels (Telegram, Twitter, Facebook) might have limited effects if a prolonged outage forces a sharing group to migrate to a new channel. However, because global companies with major information-technology capabilities operate those channels, extended outages are unlikely.

Preparing to wage information warfare in the air littoral

The information environment is compressing the tactical, operational, and strategic levels of warfare, especially in the air littoral. Tactical victories and errors can go viral, spreading from Wellington to Timbuktu. Winning the information warfare contest can mean that the victor receives more missiles, intelligence information, and humanitarian support. Losing can result in cyberattacks from anarchic nonstate actors, and adversaries empowered with outside support. The United States and allied forces need to be prepared: they should hold wargames and exercises to explore how information operations interact with the air littoral; explore ways to use civilian engagement to support air-littoral stocks; ensure that information awareness is baked deeply into military organizations; and strengthen mechanisms for interagency collaboration on information operations. Today, an act of violence can echo throughout the world.

***

Zachary Kallenborn is a Policy Fellow at the Schar School of Policy and Government, a Research Affiliate with the Unconventional Weapons and Technology Division of the National Consortium for the Study of Terrorism and Responses to Terrorism (START), an officially proclaimed US Army “Mad Scientist,” and national security consultant.

Will robotized fire power replace manned air power?
https://www.atlanticcouncil.org/content-series/airpower-after-ukraine/will-robotized-fire-power-replace-manned-air-power/ (Tue, 30 Aug 2022)

Russia's aerospace campaign points toward the increased robotization of deep-strike systems in modern warfare.

Russia’s war in Ukraine entered the summer of 2022 with no clear military victor in sight. What was expected to be a war of bold Russian maneuvers coupled with a paralyzing aerospace and cyber campaign has degenerated into a massive tube-and-rocket-artillery duel, a World War I-style battle of attrition on a battlefield largely confined to the eastern Donbas region and along the Ukrainian border north and west of Crimea.

Although it is important to exercise caution in drawing any major conclusions, some powerful signs about the future of warfare can be derived from this conflict.

Emergent robotized deep-strike operations

At the strategic and operational levels of war, the Russian aerospace campaign points to an ongoing trend toward the increased robotization of deep-strike systems. The extensive use of long-range precision-guided cruise and ballistic missiles gave Russia the ability to strike a wide range of high-value targets without the use of a fleet of Russian manned combat aircraft. In fact, the Russian strategic bomber fleet acted as a standoff launch platform for long-range cruise missile and occasional hypersonic weapons. Noteworthy is the extensive use of ground- and sea-launched long-range cruise missiles, as well as the launching of precision-guided short-range ballistic missiles (SRBMs) to strike high-value targets.

This development is not unprecedented: The Islamic Republic of Iran used similar systems during the late summer of 2019 and conducted a precision SRBM bombardment of a US airfield in Iraq during January 2020. Meanwhile, the US Navy and Air Force have extensively leveraged long-range land attack cruise missiles (LACMs). This began with the NATO aerial campaign against Serbia in 1995. Now the diffusion of this robotized deep-strike capability has spread to major military actors in Eurasia and its periphery.

The development of next-generation long-range strike systems by the United States, China, and Russia—including the rocket-propelled boost-glide vehicle (BGV) and the hypersonic cruise missile (HCM)—portends a far more damaging and sustained nonnuclear bombardment campaign.

Battlefield fires superiority vice air superiority

The character of the Russo-Ukrainian battlefield has revealed several interesting features. First, the mass diffusion of tactical anti-armor and anti-aircraft munitions has imposed very high attrition on ground and air forces that were not protected by a wide range of individual and collective countermeasures. This diffusion of guided anti-aircraft weapons has denied the Russian Aerospace Forces the opportunity to gain operational and tactical air superiority over the battlefield.

Second, the war has witnessed the full operational emergence of Ukrainian and Russian reconnaissance fire complexes—the closed-loop systems that couple robotic aerial surveillance systems with tube and rocket artillery—which can use precision-guided munitions (PGMs). These new-generation artillery systems are now complemented by the employment of increasingly large numbers of loitering munitions that can simultaneously provide infantry with over-the-hill intelligence and a quick direct-strike capability. A further hint of this new feature of twenty-first century combined-arms warfare was the successful use of these systems by the Azerbaijani armed forces during their short 2020 war against the heavily entrenched and armored Armenian forces. This refined indirect fire system has largely replaced the use of combat aircraft armed with PGMs to provide close and direct air support to ground forces—a shift prompted by the presence of proliferated, mobile, and internetted air defense systems.

The Russo-Ukrainian war may answer the question of whether the employment of guided munitions and robotic fighting vehicles has returned disproportionate power to the tactical defense (not unlike the military circumstance the European armies faced in the summer of 1914). The tactical offensive must be reconstituted to respond to a battlefield wherein the main battle tank and its supporting cast of armored fighting vehicles are vulnerable to rapid discovery and destruction by robotic systems.

The answer might be revealed during the current Russo-Ukrainian war. This late summer, the Ukrainians could gain fire superiority over a very badly attrited Russian combined arms force—not unlike the Israeli defeat of the Egyptian Army in the Sinai during the Six Day War of 1967—thereby demonstrating that traditional armored forces have a major role in the future of combined arms operations. The design of future armored fighting vehicles could be radically altered with the widespread use of unmanned fighting vehicles to precede and complement the offensive use of their larger and much more expensive manned systems. This concept is being vigorously explored by air forces in the form of developing increasingly autonomous combat aircraft to act as “loyal wingmen” for the piloted combat aircraft.

The Russian long-range missile bombardment campaign has been severely limited by its rather small prewar inventory and lack of industrial capacity to mass produce these weapons quickly. Overall, the Russian strategic bombardment campaign has not been decisive. On the other hand, NATO and the great powers of Asia will take note of the extreme vulnerability of their critical infrastructure to long-range PGM strikes. One of the pressing defense policy questions is how NATO and Washington’s Asian allies should respond to this clear and present danger. To make critical infrastructure resilient to precision bombardment, for example, the United States and its allies and partners should consider putting a portion of their military industrial production capacity, especially robotized instruments of war, underground to complement any major investment in homeland aerospace defense systems.

***

Peter A. Wilson is an adjunct senior national security researcher at the nonprofit, nonpartisan RAND Corporation and teaches courses on national security policy and the history of military technological innovation at the Osher Lifelong Learning Institute.

Read more essays in the series

Airpower after Ukraine: The future of air warfare

Airpower experts and practitioners examine interim lessons from the war in Ukraine and consider applications for twenty-first century air and space forces.

Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The post Will robotized fire power replace manned air power? appeared first on Atlantic Council.

]]>
The 5×5—The US-Japan-South Korea trilateral cybersecurity relationship https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-the-us-japan-south-korea-trilateral-cybersecurity-relationship/ Mon, 15 Aug 2022 04:01:00 +0000 https://www.atlanticcouncil.org/?p=554969 Five experts share their insights on the future of US-Japan-South Korea trilateral cybersecurity cooperation.

The post The 5×5—The US-Japan-South Korea trilateral cybersecurity relationship appeared first on Atlantic Council.

]]>
This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

On July 27, Deputy National Security Advisor for Cyber and Emerging Technologies Anne Neuberger wrapped up a three-day visit to South Korea aimed at bolstering the United States’ cyber cooperation with the country and its new government in Seoul under President Yoon Suk-yeol. The United States and South Korea have shared interests in cyberspace, ranging from international norm setting to defending critical infrastructure from state-sponsored attacks and countering cybercrime. The visit represents the latest effort by the United States and South Korea to increase connectivity on cybersecurity issues, after South Korea’s National Intelligence Service joined the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE), the Alliance’s cyber defense unit, as a contributing participant in May 2022.

Japan, which joined NATO CCDCOE back in 2018, is another vital pillar in the United States’ Indo-Pacific strategy. The country, however, shares a bitter history with South Korea that affects bilateral cooperation between the two US allies to this day. Given the common cyber threats facing all three countries, which emanate from China, Russia, North Korea, as well as from non-state actors, increased cooperation would bolster cybersecurity across the trilateral relationship.

We brought together five experts with insights on cybersecurity and the US-Japan-South Korea relationship to share their perspectives on the future of trilateral cyber cooperation.

#1 What are the most pressing cyber threats facing the United States, Japan, and South Korea that warrant a joint approach?

Jason Bartlett, research associate, Energy, Economics, and Security Program, Center for a New American Security

“The United States, Japan, and South Korea are three economically and technologically advanced countries that routinely experience state-sponsored cyber threats from countries like China, Russia, and North Korea. Pyongyang, in particular, has leveraged its offensive cyber capabilities to target the global financial market with a notable shift from traditional financial institutions, such as banks, to non-traditional entities like cryptocurrency exchanges and decentralized finance (DeFi) platforms in recent years. This calls for greater integration of cybercrime-related information sharing and capacity building within partnership frameworks among Washington, Tokyo, and Seoul.” 

Jenny Jun, fellow, Cyber Statecraft Initiative, Digital Forensic Research Lab (DFRLab), Atlantic Council; PhD candidate, Department of Political Science, Columbia University: 

“North Korean cybercrime, especially cryptocurrency theft and extortion, merits a joint approach. But the United States, Japan, and South Korea also need cooperation from other countries in Southeast Asia to curb the illicit networks that facilitate the cashing-out process. The three countries can also potentially talk about supply chain security. Military-to-military cyber cooperation is also important, especially considering the possibility of a future crisis in the region.” 

June Lee, consultant, Booz Allen Hamilton

The responses to these questions were prepared by the contributor in her personal capacity. The views and opinions expressed are those of the contributor and do not necessarily reflect the official policy, opinion, or position of her employer. 

“The three countries share an interest in combatting cyber threats from North Korea, including cybercrime, cryptocurrency theft, and cyber-enabled money laundering. North Korea’s illicit cyber activity not only targets systems in the United States, Japan, and South Korea, but revenue earned through cybercriminal schemes funds the regime’s continued development of weapons of mass destruction (the regime allegedly stole $400 million worth of cryptocurrency in 2021). The recent surge in North Korean missile tests reinforces the risks to regional security and the need for a joint approach to cut off its illicit sources of revenue.” 

Adam Segal, Ira A. Lipman Chair, director, Digital and Cyberspace Policy Program, Council on Foreign Relations

“The three countries share the threat of state-backed hackers from North Korea and China, as well as ransomware groups and other non-state actors. Chinese cyberespionage groups target US, Japanese, and South Korean public and private sector networks, and North Korean hackers help the Kim Jong-un regime avoid sanctions and fund weapons programs by conducting financially motivated attacks on banks, online games, cryptocurrency exchanges, and other financial platforms. The US, Japanese, and South Korean militaries also want to prepare for more disruptive and destructive attacks in case there is a conflict on the Korean Peninsula, in the East China Sea, or across the Taiwan Strait.” 

Benjamin Young, assistant professor, Homeland Security & Emergency Preparedness Program, Virginia Commonwealth University

“North Korea’s cyber capabilities should not be underestimated. The rise of North Korea’s hacker army and its technical assistance from the Chinese government is a joint issue for the US, Japanese, and South Korean governments.”

#2 What are the considerations for US Cyber Command in evaluating a “Hunt Forward” approach to Indo-Pacific cyber defense?

Bartlett: “Incorporating “Hunt Forward” operations within US cyber strategy with allies in the Indo-Pacific will most likely agitate already sensitive ties between Southeast Asia and China, but the United States needs to increase its cyber presence in the region due to its constant exposure to illicit cyber activity. Numerous state-sponsored hackers, especially from North Korea, have operated from within Southeast Asia and other regions in the Indo-Pacific for years with little punitive backlash from local and national governments. In particular, securing cyber partnerships with Singapore and Malaysia would be crucial to ensuring a successful and long-lasting US cyber presence in the region.” 

Jun: “US Cyber Command “Hunt Forward” missions are likely to be directed against state-sponsored cyber threats from China, and to a lesser extent North Korea, in geopolitical hotspots in the Indo-Pacific region. While such missions have had successes in the past, it is important not to overgeneralize from those cases and expand the scope and scale of the missions without regard for their implications. States have yet to come to an unambiguous, mutual understanding of how certain actions in cyberspace are to be interpreted, and such sources of misunderstanding may be especially dangerous during a crisis. For example, suppose rival states maintained access to portions of each other’s critical infrastructure to dissuade each other from creating destructive effects on it, and one side chose to unilaterally kick out the adversary’s access without explanation, especially during a crisis. The adversary could easily misinterpret that action, even if the other side swears it was only for defensive purposes.” 

Lee: “US Cyber Command must ensure that any “Hunt Forward” operations in the Indo-Pacific are backed by sustained diplomacy and careful coordination with its counterparts in Seoul and Tokyo. More streamlined information sharing will ensure South Korea and Japan, and other regional partners, are able to rapidly act on any “indications and warnings” of adversary activity in their networks.” 

Segal: “The biggest risks—stepping on or working at cross purposes with a friend’s cyber operations, blowback from public opinion in friendly countries if US Cyber Command operations are revealed, and inadvertent escalation—appear to be taken into consideration. Host countries invite US Cyber Command to conduct “Hunt Forward” missions on their networks, helping address the first two concerns, and what little is known about the actual operations suggests they are non-escalatory, fairly restrained, and often focused on revealing adversaries’ exploits.” 

Young: “Well, Hunt Forward is something that has been done with Lithuania and this cyber partnership approach could also yield benefits for Indo-Pacific cyber defense as well. The problem is that Japan-South Korea relations are historically fraught with tension and mistrust.”

#3 Where does US involvement make cybersecurity more difficult for Japan and South Korea?

Bartlett: “Compared to Seoul and Tokyo, Washington tends to adopt a more publicly hawkish approach towards illicit behavior from Beijing, including cybercrime. However, both Japan and South Korea are prime targets of Chinese hackers that are looking to steal technology and industry-related information. Due to fear of economic retaliation from China, currently the largest trading partner of both Japan and South Korea, the two countries will likely prefer to adopt a more “closed door” approach towards responding to Chinese cyber intrusions.” 

Jun: “China. China has used economic coercion on several occasions to impose costs on South Korea and Japan for pursuing policies it deems unfavorable, such as the placement of THAAD missile defense systems and AN/TPY-2 radars. A formal and public deepening and broadening of cybersecurity cooperation among the three countries—for example, increased joint cyber defense exercises and joint attribution of Chinese state-sponsored threats—may invite Chinese responses to impose costs on South Korea and Japan. China has already expressed discomfort with South Korea joining NATO CCDCOE this year. While this should not necessarily be a hindrance to trilateral cooperation, policymakers should be mindful of the uneven distribution of risks associated with such cooperation during negotiations.”

Lee: “US involvement could complicate Japanese or South Korean efforts to strengthen cybersecurity when cooperation is framed as part of the United States’ competition with China or regional coalition building. Such framing needlessly politicizes cooperation and could cause Seoul or Tokyo to be more hesitant to sign on to potentially beneficial cooperative measures.”

Segal: “Washington’s tendency to frame most digital issues as a competition between democratic and authoritarian systems is likely to alienate many of Tokyo’s and Seoul’s regional partners. Southeast Asian countries, for example, are more focused on workforce development and capacity building than choosing any side in the conflict between the United States and China.”

Young: “If anything, US involvement makes it easier for the two to get along and share cyber knowhow on how to confront Russian, Chinese, and North Korean cyber threats. Even in cyber operations, the legacy of colonialism makes the South Korean-Japanese relationship tense.”

More from the Cyber Statecraft Initiative:

#4 How can the United States facilitate constructive interactions between Japan and South Korea in cybersecurity?

Bartlett: “Both South Korea and Japan are common targets for North Korean, Chinese, and Russian-backed hacking groups, and the United States can help play a mediator role by strengthening joint cybersecurity operations and information sharing within the existing US-South Korea-Japan defense partnership.” 

Jun: “This is a question that involves bigger discussions in the overall diplomatic relationship between Japan and South Korea, including discussions to “normalize” General Security of Military Information Agreement (GSOMIA), a military intelligence sharing agreement between the two countries. The current Yoon administration favors deepening bilateral cooperation on defense and security issues. This may also mean that the United States, Japan, and South Korea may have a window of opportunity to pursue increased cyber threat intelligence sharing. Aside from information sharing, it may be practical to start small by pursuing cooperation in specific issue areas where the problem is well defined and constrained, such as better law enforcement cooperation on cryptocurrency-based cybercrime.” 

Lee: “The United States can facilitate cooperation by extending areas of shared interest and trilateral cooperation to the cyber realm. For instance, senior officials frequently reference the countries’ shared interest in upholding international law in the region, bolstering engagement with the Association of Southeast Asian Nations (ASEAN), and collaborating on workforce development. In future trilateral engagements, the three governments can accordingly reaffirm the application of international law to state activity in cyberspace, collaborate in cyber capacity building efforts with ASEAN states, and discuss strategies for growing their cyber workforces. Washington could also consider expanding bilateral (US-South Korea, US-Japan) efforts to combat cybercrime or conduct military-to-military cybersecurity cooperation to the trilateral context. Finally, regional groups such as the Quad-plus (including South Korea) could lead efforts to strengthen regional cybersecurity, creating a forum for Japanese and South Korean officials to engage constructively.” 

Segal: “The United States should do what it can to help the two sides to address the sensitive political and historical issues that have interrupted intelligence sharing, but in the short term, Washington can facilitate people-to-people exchanges among cybersecurity experts and private cybersecurity firms in the three countries.” 

Young: “I think the United States can do so by expressing a confirmation that it will be a reliable partner in the Indo-Pacific for the foreseeable future and by stressing that cyberattacks from foreign adversaries impact their markets and democratic systems. Shared supply chains and the close economic ties between the three countries highlight why sharing cybersecurity insights is necessary in the digital age.”

#5 What are some of the different opportunities and challenges in US-Japan-South Korea cooperation vis-à-vis threats from states versus those from non-state actors?

Bartlett: “State-sponsored cyber threats pose a more complicated set of challenges because they benefit from government funding and support. Non-state actors often rely on fundraising efforts and other piecemeal activities to generate revenue, whereas state-sponsored actors such as North Korean hackers receive training, funding, and legal protection directly from their government. This also impacts the ability of targeted countries to successfully seek justice against these criminals because foreign governments will not likely punish or extradite state-backed hackers.” 

Jun: “It is often said that the United States-South Korea alliance is the “linchpin” of peace, security, and prosperity in the region. There is potentially an opportunity for the alliance to assume a similar strategic vision for cybersecurity in the region. There are challenges—much depends on the leadership of the three countries and the appropriate alignment of interests, and the three countries must navigate a potentially hostile response from China. At the same time, it could be an opportunity to further mature concepts such as “Hunt Forward” missions, engage in more active cyber diplomacy and norm development, and broaden cybersecurity cooperation to more countries in the Indo-Pacific region.” 

Lee: “The distinction of cyber threats from state versus non-state actors is an interesting one in the Asia Pacific, particularly as North Korean state-sponsored hackers engage in cybercriminal activities that would typically be associated with non-state criminals. The three countries’ shared concern about the North Korean cyber threat and existing information sharing networks create momentum and channels for expanded cooperation. Yet differences in the three countries’ legal frameworks for cybercrime, as well as secrecy within South Korea’s National Intelligence Service (NIS) on any matters relating to North Korea, complicate fluid coordination to address the threat from North Korean hackers.” 

Segal: “There is some degree of overlap, especially when it comes to North Korean ransomware actors, but with state-backed operations, the United States, Japan, and South Korea can work on developing a shared process on attribution and sanctions as well as norms development in regional fora. The cooperation on non-state actors will be more tactical and operational, focused on botnet takedowns and the tracking and recovery of ransom paid in cryptocurrency.” 

Young: “The biggest challenge is that Japan-South Korea relations are ridden with nationalistic tensions and the two governments tend to not trust each other. For example, if a North Korean cyberattack takes down the electrical grid of a Japanese city, how will Japan respond? Given the historical tensions, would South Korea share cyber knowhow with Japan on how to respond effectively to a North Korean cyberattack?” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Digital Forensic Research Lab (DFRLab). He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—The US-Japan-South Korea trilateral cybersecurity relationship appeared first on Atlantic Council.

]]>
2022 Washington, DC Cyber 9/12 Strategy Challenge Playbook https://www.atlanticcouncil.org/content-series/cyber-9-12-project/2022-washington-dc-cyber-9-12-strategy-challenge-playbook/ Wed, 03 Aug 2022 21:18:32 +0000 https://www.atlanticcouncil.org/?p=536874 In November 2022, while Qatar was hosting the FIFA World Cup, staff at the Ras Abu Fontas desalination plant noticed a water-cooling issue within the plant. The staff managed […]

The post 2022 Washington, DC Cyber 9/12 Strategy Challenge Playbook appeared first on Atlantic Council.

]]>
Rockshot, Standingpalm, Software, Shoot!

Scenario Summary

In November 2022, while Qatar was hosting the FIFA World Cup, staff at the Ras Abu Fontas desalination plant noticed a water-cooling issue within the plant. The staff managed to contain and resolve the problem in 11 hours but did not release a notice to the public. Information later leaked, causing mass panic buying of water in Qatar, especially within the migrant community, and backlash online in the Twittersphere. Qatar Electricity and Water Company hired Fenghuang Labs, a Chinese company, to investigate the attack. Fenghuang Labs blamed the United States based on prior attacks exploiting Yuma software capabilities that had been analyzed by the Chinese threat intelligence community. In the second round, it was revealed that the backdoor in Yuma software was developed by a CIA asset working for Nikara Solutions in China.  

In Intelligence Report I, competitors are introduced to the initial attack and its immediate aftermath. During the 2022 FIFA World Cup kickoff, Qatar Electricity and Water Company’s Ras Abu Fontas desalination plant experienced a disruption of their temperature sensors, preventing the water from reaching high enough temperatures and resulting in a low water yield. This malfunction inhibited the plant from meeting the high demand for water from the World Cup attendees. Ras Abu Fontas employees were informed by their Security Operations Center manager that malware caused the sensors to malfunction. The plant received approval to tap into the water reserves and restored the sensors to working order by using backups of the software that controls the sensors. This restoration occurred 11 hours after the activity commenced. While the plant did not release information about the attack, social media was quickly populated by news and rumors surrounding the attack at the plant. Rumors amplified the angry sentiment that the water disruption disproportionally affected the migrant worker community. This community, which comprises a substantial portion of Qatar’s population, was involved in building the infrastructure required to host the 2022 World Cup.  

Fenghuang Labs, a Beijing-based incident response firm, was hired by Qatar to investigate the attack. Chinese-Qatari relations are strong due to China’s economic involvement with the Gulf States through the Belt and Road Initiative. Fenghuang Labs claimed that the software used by Ras Abu Fontas, Yuma software, had a backdoor, dubbed STANDINGPALM, allowing the malware that affected the sensors, dubbed ROCKSHOT, to be installed on the plant’s systems. Fenghuang Labs attributed the attack to the US government and cited previous attacks attributed to the US government as being consistent in nature with this attack. The lab also noted that the systems of many companies around the world using Yuma also contain STANDINGPALM. In response to this accusation, France’s OCLCTIC informed the US government that it will be monitoring the evolution of the investigation of the attack. Both the US and Emirati governments offered their support to Qatar. 

In Intelligence Report II, competitors were provided with further information on the aftermath of the initial attack. In a briefing, the NSA disclosed within the US government that STANDINGPALM, known as PENSIVEPENGUIN within the US government, was a backdoor planted by a CIA asset working within Nikara Solutions for the purposes of collecting intelligence on the People’s Liberation Army’s military infrastructure. The memo stated that Qatar was not an intended target of PENSIVEPENGUIN, that the attack was committed by a non-US government actor, and that the destructive payload used in the attack was not created by the NSA. A news article published after the initial attack in Qatar reported that Nikara Solutions created a patch for STANDINGPALM but only distributed it within China, leaving other nations vulnerable to attacks. In Qatar, an American citizen working for the Ras Abu Fontas plant was detained for allegedly being linked to the Ras Abu Fontas attack. 

In Intelligence Report III, competitors learned that an organization known as the Rabinara Group published their own analysis of the attack and assigned attribution for STANDINGPALM to the US government. In a final Twitter post, the People's Militia claimed responsibility for the attack on Ras Abu Fontas and cited their frustrations with the maltreatment of migrant workers in Qatar as the reason for their attack. The People’s Militia also alluded to an operative of theirs who was instrumental in the attack as being of South Asian heritage. France called for its European allies to cease any collaboration or involvement with the US government in cyberspace, citing US cyber operations, and the fact that the backdoor vulnerability was exploited outside of US containment, as reasons for ceasing joint cyber operations.  

From March 25 to March 26, 2022, participants analyzed how best to A) respond to allegations of US responsibility for the attack, B) retain control of cyber vulnerabilities, and C) maintain international partnerships with Qatar and France. 

Below we feature the policy analysis and recommendations of a handful of teams who succeeded in advancing to the semi-final and final rounds of the 2022 Washington, DC competition.

Find the complete scenario here.

Intelligence Report I


American University Cyber sQuad

Team members: Laila Abdelaziz, Kady Hammer, Taylor Kerr, Alex Neubecker 

Cyber sQuad from American University categorized their recommendations into four courses of actions: investigate, mitigate, communicate, and anticipate. The team provided packages of policy recommendations for the immediate term (72 hours), short term (1-6 weeks), and long term (2+ months).  

The investigation category called for the Office of the Director of National Intelligence (ODNI) to coordinate a whole-of-government response to the backdoor and malware, supported by the National Security Agency (NSA) performing a deep dive into logs for potential attribution. For mitigation, the Cybersecurity and Infrastructure Security Agency (CISA) would contain the potential spread of the malware and backdoor access, then provide open-source patches and bolster the resiliency of critical infrastructure. AU’s Cyber sQuad also stressed that communication with US allies and the media, coordinated by the Department of State (DOS), was critical to smoothing a variety of international tensions. Building on the communication pillar, the anticipate category proposed proactive alerts and the readying of CYBERCOM Protection Teams in case of further backdoor and malware proliferation.  

Read AU Cyber sQuad’s Intelligence Report I written policy brief and decision document.

Harvard University Ghost in the Shellcode

Team members: Winnona DeSombre, Michaela Lee, Emma Plankey, Bethan Saunders 

Harvard University’s Ghost in the Shellcode offered three levels of policy recommendations but endorsed their middle policy package entitled “Aid and Attribution Diplomacy.” This policy package aims to provide solutions for domestic resiliency, concrete steps to engage France and Qatar, and mitigate diplomatic spillover of allegations against the United States.  

Aid and Attribution Diplomacy presents four deliverables. First, it calls for the DOS and the Federal Bureau of Investigation (FBI) to publicly issue a joint statement affirming that the US government is utilizing available resources to investigate the attack. Ghost in the Shellcode recommends the US Intelligence Community work amongst themselves in a classified manner to attribute the actors. Next, the team suggests the FBI engage private sector partners within the US to support the investigation. The final policy recommendation is to have USAID ship bottled water to the region to assist with the ongoing shortage of supply. The team recognized that while this set of policy recommendations would smooth tensions with France and Qatar, it might increase tension with the Chinese government. The investigation into attribution might aggravate Chinese actors who have already attributed the incident to the US government. 

Read Ghost in the Shellcode’s Intelligence Report I written policy brief and decision document.  

New York University APT 2785 (aka PURPLE APPLE) 

Team members: Shagun Nayar, Louie Reckford, Chadwick Shroy, Alastair Whitehead

APT 2785 (aka PURPLE APPLE) from New York University developed policy recommendations that sought to decrease uncertainty by gaining a deeper understanding of the risks, demonstrate that the US was a responsible cyber actor, and limit China’s ability to exploit the situation. To do so, the team broke down their approach into three categories: understand, prepare, and reassure.  

To understand, APT 2785 called on USCYBERCOM and the Central Intelligence Agency (CIA) to investigate the origins of STANDINGPALM and ODNI to investigate the origins of ROCKSHOT malware to reach a potential attribution. To prepare, the team tasked CISA and the Environmental Protection Agency (EPA) with drawing on relevant private sector knowledge to conduct vulnerability assessments of Yuma software and provide a remediation strategy, with the EPA promoting it to the public. Next, APT 2785 suggested the FBI and CISA reassure the international community, publicly offering Qatar technical and remediation assistance while also pressuring the Chinese to publicly share evidence in their possession. The last step for reassurance called for the US government to use classified military and diplomatic channels to reassure allies that it was not involved and that it would share information about the attack as more is uncovered. 

Read APT 2785’s Intelligence Report I written policy brief and decision document.   

University of Colorado, Boulder ZeroDarkNerdy  

Team members: Andrew Nadas, Emmeline Nettles, Katherine McDonald, Nadin Al Milaify

ZeroDarkNerdy identified five pillars of concern and proposed policy actions for each pillar through core actions (short term), expansionary (long term), and escalatory recommendations. The team identified critical risks to address as: the spread of the malware/threat to US infrastructure, international reputation impacts, regional stability, US citizens and installations in Qatar, and humanitarian concerns. 

To mitigate the spread of malware and the threat to US infrastructure, the team called on CISA to support US critical infrastructure, enhance security protocols, investigate how this malware could impact US facilities, and develop contingency and workforce plans. To deter further detrimental reputation impacts, it was recommended that DOS, through the Bureau of Cyberspace and Digital Policy, reaffirm norms and deny US involvement. To promote regional stability, the US should offer resources like the FBI and ODNI to support Qatar's investigation and bolster other potential critical infrastructure protections. To protect US citizens and installations, it is critical to investigate airbases and plan for evacuation contingencies. Finally, to address humanitarian concerns, the team called on the White House and other US organizations to release statements on human rights, labor rights, and immigration rights, and to potentially push Qatar to release a statement condemning the action as well.  

Read ZeroDarkNerdy’s Intelligence Report I written policy brief and decision document.

Intelligence Report II

Intelligence Report II Tabs

Intelligence Report II Top Recommendations

Intelligence Report II Policy Options

American University Cyber sQuad

Cyber sQuad from American University organized their recommendations into three components: investigate, remediate, and anticipate.  

Focusing on the immediate term, in the investigation section, the team tasked the FBI, the Department of Justice (DOJ), and ODNI with drawing in support from the larger intelligence apparatus, both foreign and domestic. The FBI and DOJ were recommended to reach out to Qatari law enforcement and investigate the People’s Militia and other US threat actors in an effort to uncover the threat through a multinational push. ODNI was tasked with calling on the US Intelligence Community to investigate foreign actors, attribute the malware, and obtain a copy of the malware for backdoor patching, continuing to bolster partnerships into the long run. In the next step, remediation, the President of the United States (POTUS) would communicate with leaders in France and Qatar to repair relationships by offering FEMA, CYBERCOM, and National Guard assistance. In the longer term, both were tasked with continuing to bolster multinational relationships. Other US-based agencies were tasked with coordinating patch development. The last step, anticipation, called for proactive measures in the form of alerting private and public sector entities of the incident so they might fortify their resilience. In addition, the team assigned risk levels to its recommendations.  

Read Cyber sQuad’s Intelligence Report II decision document

Harvard University Ghost in the Shellcode

In the second part of the competition, Ghost in the Shellcode from Harvard University’s Kennedy School prioritized protecting US critical infrastructure, ensuring the resilience of the global economy, regaining trust with US allies, and preventing the sacrifice of priorities in the cyber domain. To achieve these strategic goals, the team presented a policy package named “Respond” aimed at providing solutions for domestic and global resiliency, creating avenues for engagement with France and Qatar, and developing opportunities to regain trust without the sacrifice of cyber capabilities.  

To achieve these objectives, the FBI was tasked with launching a public investigation into domestic organizations leveraging STANDINGPALM and the US national who abetted the attack. CISA was to draft vulnerability and patching guidance for Yuma, which DOS was responsible for translating into more than fifty languages. DOS and the White House were to liaise with Qatari and European officials to repair international relationships.  

Read Ghost in the Shellcode’s Intelligence Report II decision document

University of Colorado, Boulder ZeroDarkNerdy 

ZeroDarkNerdy narrowed the scope of their final decision document, focusing on three rather than five components. Zeroing in on intelligence remediation, building global resilience, and communications planning, the team proposed three minimal-risk policy packages.  

For intelligence remediation, the group brought in the Central Intelligence Agency (CIA) to connect with its asset and ensure the asset’s safety and ability to continue missions. They called for the larger Intelligence Community to conduct assessments of short- and long-term effects on national security. To build global resilience, CISA was called on to secure critical infrastructure domestically and abroad through CERT cooperation. DOS and USAID were to offer long-term workforce development to support international cooperation. To bolster communications, ZeroDarkNerdy called on the White House and DOS to deny malware implementation allegations and promote to the public that the US government is bolstering cybersecurity to protect US assets. Further, the group suggested CISA, the FBI, and the Environmental Protection Agency (EPA) develop a joint task force for sharing information and coordinating efforts to repair the water supply.  

Read ZeroDarkNerdy’s Intelligence Report II decision document

New York University APT 2785 (aka PURPLE APPLE) 

In four lines of effort, APT 2785 calls for the US National Security Council to reassert, recover, repair, and restore following the second Intelligence briefing. 

In pursuit of the reassertion strategy, APT 2785 recommends that, as a responsible cyber actor, the United States engage with the United Nations Security Council (UNSC) to establish cyber rules for great powers going forward. The recovery component of the strategy centers on repairing critical relationships. APT 2785 focuses on drawing on the diplomatic channels of DOS to spearhead diplomatic talks for the Gulf states and on USAID to re-establish Qatari water reserves. DOS is also tasked with messaging the UAE, and DOD with smoothing relations with the French, in an overall effort to bolster relationships. To repair, APT 2785 focuses on patching the malware both domestically and abroad through CISA, EPA, and NSA cooperation. In the restoration component of APT 2785’s strategy, the team recommends a review of lessons learned, calling for DOS to request consular access to the US citizen, an update on the US Intelligence Community’s collection capabilities, and an overall review by CISA and the FBI.  

Read APT 2785’s Intelligence Report II decision document.

Intelligence Report III

Intelligence Report III Tabs

Intelligence Report III Briefings

American University – Cyber sQuad

New York University – APT 2785 aka PURPLE APPLE

University of Colorado, Boulder – ZeroDarkNerdy

Intelligence Report III Top Recommendations

The post 2022 Washington, DC Cyber 9/12 Strategy Challenge Playbook appeared first on Atlantic Council.

Behind the rise of ransomware
https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/behind-the-rise-of-ransomware/
Tue, 02 Aug 2022

Executive summary

This issue brief investigates the drivers of the ransomware surge that menaced the United States in the summer of 2021, explains why these attacks remain a persistent threat today, and offers recommendations for mitigating the problem in the future. The 2021 surge in ransomware activity stems from a change in how criminals launch ransomware attacks. Between 2016 and 2019, cybercriminals shifted away from automated ransomware campaigns that emphasized scale to targeted extortion operations against organizations and established businesses. This adaptation made ransomware more disruptive and more profitable, eventually attracting the attention of well-organized cybercrime gangs. The intensification of the ransomware epidemic from that point until the attack on Colonial Pipeline resulted from the growing adoption of this new extortion model among criminals.  

Though the US government has devoted more attention to ransomware over the ensuing months, ransomware remains a significant and long-term threat to the US economy. Three factors drive the persistence of the problem: the presence of a vast pool of security-poor organizations, the availability of a poorly regulated monetization pipeline in the form of cryptocurrency, and criminals’ ability to evade law enforcement by exploiting jurisdictional boundaries. Mitigating just one of these conditions, let alone all three, will demand years of sustained effort.  

Because the US government cannot eliminate ransomware overnight, it must begin planning how to manage the problem over the long term. To do so, it should start by investing in new efforts to improve the defenses of small- to medium-sized entities. The ease of compromising these organizations has been key to fueling the appetite for ransomware attacks. Yet, many of these organizations lack the personnel, incentives, and contracting power to secure their own networks.  

Moreover, the US government should require all US-based organizations to report ransomware payments to the government and publish quarterly reports with anonymized versions of the data. Comprehensive payment transparency offers the best way to measure success against ransomware over the long term. It will ensure that success against targeted ransomware is judged in terms of the overall volume of ransomware payments, not just the absence of attacks on high-risk or high-profile entities. 

Introduction

Since 1989, when an enigmatic evolutionary biologist named Joseph Popp shipped data-encrypting malware, via floppy disk, to the attendees of a scientific conference on AIDS, criminals have sought to leverage the techniques of computer exploitation and attack for the purposes of extortion.1 But the problem—known today as ransomware—has grown exponentially in recent years, whether measured in the volume of attacks, the money flowing to criminals, or the harms inflicted on society. How did ransomware become so dangerous, so fast? Now that it is on the radar of world leaders, why is it proving so difficult to stop? 

This paper makes three central arguments. First, the recent surge in ransomware activity stems from a shift in how criminals launch ransomware attacks, which transformed the digital extortion industry profoundly. From the rise of CryptoLocker in 2013 to the fall of GandCrab in 2018, cybercriminals primarily deployed ransomware in large, “spray-and-pray”-style campaigns targeting individual end users. These attacks demanded small ransoms from a vast pool of victims. Altogether, they pulled in modest revenues and inflicted limited damage. 

Between 2016 and 2019, criminals made a small change that paid vast dividends: they began to burrow within networks and launch targeted extortion campaigns against the organizations that controlled them.2 By creating a greater incentive for criminals to apply pressure to any given victim, this adaptation made ransomware more disruptive. By generating higher profits, it drew top-tier cybercrime gangs into the digital extortion business. The intensification of the ransomware problem from that point until the summer of 2021 reflects the growing attention and investment that this new extortion model generated among criminals. 

Second, despite renewed attention to the problem since the attack on Colonial Pipeline, ransomware remains a long-term threat to the US economy. Targeted ransomware may be new, but the conditions driving the problem are not—indeed, they have stymied the security community for the better part of the last decade. To wit, a sense of impunity among cybercriminals, widespread security deficiencies within the public and private sectors, and the emergence of a poorly regulated payment pipeline in the form of cryptocurrency present vexing hurdles for policymakers. Mitigating just one of these conditions, let alone all three, will demand years of sustained effort. 

Finally, because ransomware cannot be eliminated overnight, the federal government should begin planning how to manage the problem over the long term. It can start by adopting three measures. First, Congress should establish a tax relief program for small- to medium-sized organizations that implement a series of security best practices. Second, drawing on the Work Opportunity Tax Credit (WOTC),3 Congress should draft legislation offering federal tax credits to small- to medium-sized organizations that hire or retain employees with cybersecurity skills. These two measures will provide the most common victims of ransomware attacks—small- to medium-sized organizations—with the incentives and means to improve their own security. 

Finally, it is imperative that policymakers measure success against targeted ransomware in terms of the overall volume of ransomware payments, not just the absence of attacks on high-risk entities. Therefore, Congress should require all US-based organizations to report ransom payments to the Department of Homeland Security (DHS). To encourage compliance, DHS should anonymize the data it shares with the Department of Justice (DOJ), and Congress should offer limited liability protections to victims reporting attacks. To ensure transparency, DHS should be required to publish quarterly reports with the anonymized data. 

While the recent Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) represents a positive step forward, the law only covers entities within critical infrastructure sectors. It therefore fails to rectify the fundamental limitation of existing ransomware data sets, which can delineate broad trends but provide insufficient granularity for effective policy. 

It should not take another shock like the Colonial Pipeline attack for the US government to realize that ransomware has spiraled out of control. It is time to start investing in a more secure future.  

Part 1: The rise of ransomware

The story of the ransomware surge is the story of the discovery, professionalization, and growth of the targeted-attack extortion model. Prior to 2016, most ransomware campaigns targeted a large and effectively random pool of end users.4 This “spray-and-pray” business model privileged quantity over quality, meaning ransomware actors spent less time focusing on how to apply pressure on a given victim and more time trying to reach as many victims as possible.5 Until the tail end of this period, ransomware did not generate enormous profits. Being a second-tier avenue of cybercrime, it failed to attract as much talent or activity as it would in the years to come. 

Ransomware experienced its first period of significant growth between 2013 and 2016, when refinements to ransomware payloads, the emergence of virtual currencies, and enhanced anti-fraud measures from banks and cybersecurity vendors increased the profitability of digital extortion relative to other common avenues of cybercrime.6 What happened next remains unclear, but with more activity concentrating on ransomware, criminals appear to have learned how easy it was to extort organizations before piecing together how lucrative these attacks could be. Regardless, between 2016 and 2019, established cybercriminal gangs entered the targeted ransomware business en masse.7 

From that point until the summer of 2021, cybercriminals invested growing time and resources to improve the targeted extortion model. During this period, digital extortion became more profitable because cybercriminal gangs and cybercrime markets reoriented around a near limitless demand for targeted ransomware. Moreover, as criminals learned how to best extract revenue from victims, they launched increasingly disruptive ransomware attacks.  

Ransomware before 2016

Prior to 2016, ransomware attacks infrequently involved elaborate pressure tactics and protracted negotiation processes. While criminals employed primitive methods of price discrimination among victims by using their Internet protocol (IP) address for geolocation, ransomware demands represented a take-it-or-leave-it proposition.8 Depending on the malware and the victim, most ransoms ranged between $75 and $750.9 
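
Mechanically, this kind of price discrimination is just a lookup table: resolve the victim's IP address to a country code, then map the country to a ransom tier. A minimal sketch (the tiers and country codes here are invented for illustration, not taken from any real campaign):

```python
# Hypothetical ransom tiers keyed by ISO country code, illustrating
# IP-geolocation-based price discrimination. A real campaign would
# first resolve the victim's IP to a country via a geolocation database.
RANSOM_TIERS_USD = {
    "US": 750,  # higher demands for victims in wealthier countries
    "DE": 500,
    "BR": 150,
}
DEFAULT_RANSOM_USD = 75  # floor of the $75-$750 range cited above

def ransom_demand(country_code: str) -> int:
    """Return the ransom amount shown to a victim in a given country."""
    return RANSOM_TIERS_USD.get(country_code, DEFAULT_RANSOM_USD)

print(ransom_demand("US"))  # 750
print(ransom_demand("NG"))  # 75 (falls back to the default tier)
```

The take-it-or-leave-it character of early ransomware follows directly from this design: the price is fixed at infection time, with no mechanism for negotiation.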

So long as ransomware demands remained so low, scale represented the principal means by which ransomware groups could maximize revenue. Consider one of the most successful early ransomware campaigns in history, CryptoLocker. Over a two-month period in the winter of 2013, CryptoLocker generated a remarkable $27 million.10 The malware included several innovations that would become the hallmarks of modern ransomware, such as a robust encryption algorithm and the use of Bitcoin as a payment mechanism. Nonetheless, only 30 percent of CryptoLocker victims paid the ransom, which ranged between $300 and $750.11 

CryptoLocker earned so much, so fast because it had unparalleled access to a vast pool of victims: at that time, the developers also controlled one of the world’s largest botnets.12 Through that botnet, they were able to deliver the malware at a scale and speed few criminals could rival.13 In short, in the early days of ransomware, a fancy payload or sleek payment mechanism mattered less than how to obtain access to a large pool of victims. 

Thus incentivized, ransomware groups before 2016 placed less emphasis on developing innovative extortion strategies or building more destructive payloads. According to one study of more than one-thousand ransomware samples collected between 2006 and 2014, most ransomware families at that time lacked “sophisticated destructive capabilities,” while many “encrypt[ed] or delete[d] the victim’s files using only superficial techniques.”14 A second group of researchers found that as late as 2018, several ransomware variants exhibited basic cryptographic flaws that permitted data recovery.15 Early ransomware groups could get away with this because many users lacked the energy, money, or know-how to remediate a ransomware attack. 
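
The “superficial techniques” the researchers describe can be illustrated with a toy scheme (hypothetical, not modeled on any specific family): a payload that XORs files with a short repeating key. Any victim who knows a few plaintext bytes, such as a standard file-format header, can recover the key and decrypt everything:

```python
def xor_with_key(data: bytes, key: bytes) -> bytes:
    """'Encrypt' (or decrypt) by XORing with a short repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"  # the attacker's weak, reused 6-byte key
plaintext = b"%PDF-1.7 ...rest of the document..."
ciphertext = xor_with_key(plaintext, key)

# Recovery: XOR the ciphertext against a known file-format header to
# leak the key stream, then reuse the repeating key to decrypt.
known_header = b"%PDF-1.7"
key_stream = xor_with_key(ciphertext[: len(known_header)], known_header)
recovered_key = key_stream[:6]  # the key repeats every 6 bytes

assert recovered_key == key
assert xor_with_key(ciphertext, recovered_key) == plaintext
```

Closing this kind of gap is precisely why the robust encryption noted in the CryptoLocker discussion above mattered: once payloads used vetted algorithms with attacker-held keys, victims lost any practical recovery path short of paying or restoring from backup.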

Just as it was less disruptive, early ransomware was less profitable. A 2018 study undertaken by researchers at Google estimated that the two most prolific spray-and-pray-style ransomware families at that time, Locky and Cerber, made $7.8 million and $6.9 million, respectively.16 By contrast, eight of the top ten ransomware groups in 2020 made more than $10 million.17 One of them, REvil, is thought to have made $100 million.18

Early ransomware gangs neither made nearly as much money nor caused nearly as much harm as their successors. Many lacked the prestige and resources to recruit top-tier cybercrime talent. The most sophisticated cybercrime gangs of this era spent most of their time on bank fraud, where they could make larger paydays. 

The pivot to targeted attacks: 2016-2019

The shift to targeted ransomware attacks followed a period of significant growth in spray-and-pray-style ransomware attacks, with some companies estimating that ransomware had become a $1 billion industry by 2016.19 Likely, growth in one form of digital extortion precipitated the shift to the next: as more criminals flowed into the ransomware business, they became bolder, savvier—and, perhaps, luckier.  

Throughout the 2010s, improved anti-fraud defenses at major financial institutions and an exponential increase in the volume of stolen data on the cybercriminal underground squeezed margins in traditional, fraud-based avenues of crime.20 Between 2011 and 2016, for example, the price of a stolen credit card on the black market dropped from $25 to $6.21 

Meanwhile, the combination of file encryption, payment through virtual currency, and at-scale malware delivery offered a profitable business model that criminal groups increasingly emulated in the aftermath of CryptoLocker. The number of new ransomware strains discovered increased each year between 2011 and 2015, when the figure first broke one hundred.22 During this time, criminals developed more effective ways to deliver their payloads, encrypt data, receive payments, and pressure victims.23 

In the mid-2010s, changes introduced by major antivirus providers mitigated the effectiveness of end-user ransomware campaigns. According to Robert McArdle, a cybercrime expert at security company Trend Micro, ransomware gangs reacted by escalating privileges within a network, which allowed them to shut off organizations’ improved defenses at the edge. Once criminals learned how easy it was to establish administrator-level access to an organization, McArdle speculates, it was only a matter of time before they began to deploy ransomware from the core of a network, encrypting thousands of machines at one time.24  

The first ransomware group to focus exclusively on targeted attacks, SamSam, appeared in the winter of 2015.25 Perhaps because the operators behind SamSam came from outside the Eastern European cybercrime scene, it took some time for its approach to catch on elsewhere. But in the unregulated world of cybercrime, when word about a lucrative form of cybercrime gets out, copycats are quick to pile on.  

Starting in 2017, targeted ransomware began to displace end-user ransomware as the attack of choice in the digital extortion industry. Ransomware attacks on enterprises surpassed those on consumers for the first time in 2017.26 By 2018, businesses accounted for 81 percent of all ransomware attacks.27  

Between the summer of 2017 and April 2019, elite cybercrime gangs also jumped into the targeted ransomware game. In July 2017, the operators of the Dridex botnet launched BitPaymer; in August 2018, the operators of the TrickBot banking malware created Ryuk; and in April 2019, the former operators of the GandCrab ransomware spun off to create REvil.28 All three groups specialized in targeted extortion.  

2019-present: The professionalization of ransomware

As top-tier cybercrime groups entered the ransomware business and ransomware revenues climbed upward, criminals invested in new capabilities and cybercrime markets adjusted to meet a growing demand for resources to use in targeted ransomware attacks. Far more than their predecessors, contemporary ransomware actors therefore can purchase products and services or hire partners to make their attacks easier and more effective. 

Perhaps the greatest indicator of how much ransomware has changed cybercrime markets—and how those markets have in turn facilitated the growth of ransomware—is the growth of illicit access brokers and the markets where they trade. Through illicit access markets, criminals that obtain a foothold into an organization can turn around and sell that access to ransomware groups or other cybercriminals. 

Though such markets have existed in some form for years, they have experienced “meteoric growth” as a result of the ransomware surge.29 Once servicing a wider range of criminal activity, access brokers now cater overwhelmingly to targeted ransomware attacks.30 For example, listings on initial access markets now advertise the revenue of the organization a criminal has access to and the level of access that is available—indicators of how much a company might pay in the event of a ransomware incident and how much work criminals would need to do to launch one.31 

By outsourcing this first stage of a ransomware attack, ransomware groups can focus on movement within an organization. That specialization is one reason for the increasing pace of targeted ransomware attacks. According to estimates by cybersecurity company Recorded Future, ransomware groups executed sixty-five thousand targeted ransomware attacks in 2020.32 In the words of ransomware expert Allan Liska, that figure simply would not be possible without underground forums.33 

Increasing specialization across different stages of the ransomware life cycle also is evident in the growth of the ransomware-as-a-service model (RaaS). In a RaaS structure, a core group of criminals manages a ransomware payload while outsourcing ransomware deployment to so-called “affiliates.” The model has the dual benefit of allowing ransomware groups to scale their operation and to off-load risk, with affiliates now drawing increasing attention from law enforcement. According to an interview given by a member of the REvil ransomware gang, the group at one point had sixty affiliates carrying out attacks on its behalf.34 As of October 2021, eight of the ten leading ransomware groups employed an affiliate model to carry out attacks.35

In addition to external partnerships, ransomware groups have tapped increasingly large war chests to strengthen their organizations from within. Some of the wealthiest ransomware groups rent access to botnets, which provision a steady stream of targets for the groups and obviate the need to participate in public access markets.36 Buoyed by past profits, ransomware groups also have thrown more money at recruiting top talent.37 At one point, the REvil group advertised that it was investing $1 million as part of a new recruitment drive.38 

The recent leak of internal communications from the Conti ransomware group demonstrates how years of steady revenue have turned ransomware groups into something that resembles a legitimate business. According to the leaks, at various points over the last two years, the group employed between sixty-five and one hundred salaried employees, whom it paid twice monthly through virtual currency.39 The group had vacation policies for its employees, a human resources department, and a 24/7 support staff.40 With an eye toward the future, the group also reinvested profits to improve its core business, studying weaknesses in popular cybersecurity products, researching new vulnerabilities and exploits, and identifying valuable partners.41 

Whether measured in the volume of attacks or their effectiveness, the gradual professionalization of the market for targeted ransomware attacks has transformed the digital extortion industry. The total value of cryptocurrency received by ransomware addresses grew from less than $25 million in 2016, when mass ransomware predominated, to roughly $692 million in 2020, when targeted ransomware had become the norm.42 Likewise, between 2018 and 2020, the number of ransomware complaints submitted to the Federal Bureau of Investigation’s Internet Crime Complaint Center increased 65.7 percent, while victim losses swelled 705 percent.43  
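
Cryptocurrency totals like those above come from blockchain analysis, which is conceptually straightforward: label a set of addresses as ransomware-linked, then sum the USD value of transfers into those addresses by year. A toy sketch (the transfer records and labels are fabricated for illustration; real methodologies, such as Chainalysis's address clustering, are far more involved):

```python
from collections import defaultdict

# Fabricated transfer records: (year, receiving_address, usd_value).
transfers = [
    (2016, "addr1", 12_000_000), (2016, "addr2", 9_000_000),
    (2016, "addr9", 50_000_000),   # not ransomware-linked; excluded
    (2020, "addr3", 310_000_000), (2020, "addr4", 382_000_000),
]
# Addresses previously attributed to ransomware operations.
ransomware_addresses = {"addr1", "addr2", "addr3", "addr4"}

totals_by_year: dict[int, int] = defaultdict(int)
for year, address, usd in transfers:
    if address in ransomware_addresses:
        totals_by_year[year] += usd

print(dict(totals_by_year))  # {2016: 21000000, 2020: 692000000}
```

A key caveat of this approach, which the brief's reporting recommendations would help address, is that it only counts payments flowing to addresses analysts have already identified, so the true totals are almost certainly higher.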

Targeted ransomware: a more disruptive form of extortion

The pivot to extorting organizations strengthened a dangerous incentive that lay half-dormant within prior iterations of digital extortion: whereas older ransomware campaigns prioritized scale and automation, modern ransomware places a premium on coercion. These dynamics manifest in the increase in average ransom payments over time, as well as the uptick in incidents against vulnerable entities, like schools and hospitals.  

As ransomware gangs spent more time extorting organizations, they developed more effective methods to extract revenue from victims. From the third quarter of 2018 to the third quarter of 2020, the average ransom payment grew from less than $6,000 to nearly $240,000, while the median payment grew from four figures to more than $100,000.44 Three variables have driven the increase in average ransom payments over time: victim size, negotiating leverage, and negotiating savvy.  
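
The divergence between the average and median payment is itself informative: ransom amounts are heavily right-skewed, so a handful of multimillion-dollar payments pulls the mean far above what a typical victim pays. A quick illustration with invented figures:

```python
import statistics

# Invented ransom payments (USD): several typical payments plus one
# very large payment, mimicking the right skew in the Coveware data.
payments = [40_000, 60_000, 90_000, 110_000, 150_000, 2_500_000]

mean = statistics.mean(payments)      # dragged upward by the outlier
median = statistics.median(payments)  # unaffected by the outlier
print(f"mean=${mean:,.0f}, median=${median:,.0f}")
# mean=$491,667, median=$100,000
```

This is why tracking both statistics matters: the median better reflects the typical victim's experience, while the mean tracks overall criminal revenue per attack.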

First, ransomware gangs started to attack larger victims. The median size of victim organizations, measured by the number of employees, jumped from twenty-five in 2018 to 250 by the end of 2020, according to Coveware.45 Likewise, the security firm Sophos found that in 2021, organizations with one thousand to five thousand employees were more likely to suffer ransomware attacks than those with fewer than one thousand employees.46 

Second, attackers began to threaten to leak sensitive data stolen from victims, which is commonly referred to as “double extortion.” Pioneered by the Maze group in November 2019, double-extortion threats provide additional bargaining leverage beyond data encryption. Leaks can lead to the loss of intellectual property and brand damage or, if victims otherwise intend to keep incidents quiet, trigger regulatory investigations.  

The popularity of the data-extortion threat is undeniable. From the fourth quarter of 2019 to the fourth quarter of 2021, the percentage of ransomware attacks involving a threat to release data increased from less than 5 percent to roughly 70 percent, according to data collected by Coveware.47 Likewise, CrowdStrike found an 82 percent increase in ransomware-related data leaks in 2021, as compared with 2020.48 

Nonetheless, ransomware negotiators, cybercrime experts, and cyber insurance providers interviewed for this project insisted that business interruption losses—and not the threat of proprietary data loss or brand damage—represent the most consequential pain point for most victims. To explain the increase in ransom payments, they pointed to a third factor: criminals’ use of open-source business intelligence tools, like ZoomInfo, and on-network reconnaissance of financial and insurance information, to determine how much victims could afford to pay.   

Attacks on larger businesses, efforts to acquire additional leverage over victims, and better business intelligence represent three ways ransomware groups have adapted to the dynamics of targeted ransomware. A fourth involves victim selection: to maximize revenue, ransomware groups increasingly target entities with extensive and time-sensitive IT dependencies, like patient care, payroll, and just-in-time product delivery.   

For example, during the peak of the COVID-19 crisis, ransomware groups deliberately targeted hospitals and healthcare providers in the United States, which had significant consequences for patient care.49 One attack, on Universal Health Services, simultaneously affected 250 healthcare facilities in the United States.50 A month later, US officials learned that a ransomware gang had acquired access to more than four hundred US-based healthcare providers and was threatening to attack a large number of them simultaneously.51 

Part 2: Why ransomware isn’t going away

While the Biden administration has taken major strides in the fight against ransomware since the attack on Colonial Pipeline, available data indicates that the volume of ransomware activity has declined slightly, if at all, during this period.52 As recently as February 2022, cybersecurity authorities in the United States, the United Kingdom, and Australia warned that “if the ransomware criminal business model continues to yield financial returns . . . ransomware incidents will become more frequent.”53

The persistence of the ransomware problem should not come as a surprise. Three factors underlie the persistence of targeted ransomware, and each presents significant hurdles for lawmakers: the presence of a vast pool of security-poor organizations; the availability of a poorly regulated payment vehicle in the form of cryptocurrency; and criminals’ ability to exploit jurisdictional boundaries. Together, these factors ensure that ransomware will present a challenge for years to come.

Taking the fight to ransomware actors

To reduce ransomware attacks, many governments have searched for ways to eliminate the legal sanctuaries where ransomware gangs operate. The proposals for doing so are manifold.54 They include high-level diplomatic negotiations with offending countries, like Russia; offensive cyber operations against ransomware gangs; and more aggressive law enforcement action against affiliate groups and adjacent service providers.

Yet, the fluidity, decentralization, and dynamism of the digital extortion market complicate the process of identifying individual ransomware actors. The relationships that characterize each ransomware group fluctuate constantly, with individuals moving between ransomware gangs, gangs purchasing tools and services from other criminals, and various groups contributing to different elements of an attack. The resulting complexity means that “it is often difficult to identify conclusively the actors behind a ransomware incident,” as cybersecurity authorities in the United States, Australia, and the United Kingdom recently observed.55

Just as the fluidity of the ransomware ecosystem complicates the process of identifying criminals, jurisdictional boundaries hinder enforcement. Historically, many leading cybercriminal groups have operated in Eastern Europe, where local law enforcement agencies lacked the capability or will to bring cybercriminals to justice. But ransomware has attracted criminals from across the world, and it will metastasize if extortion remains so profitable.

Russian noncompliance with transnational cybercrime investigations exacerbates the natural hurdles involved in transnational law enforcement. For more than a decade, major cybercriminal networks have operated with impunity out of Russia.56 Mounting evidence suggests that many of these criminals purchase their immunity through cooperation with Russian intelligence and law enforcement agencies.57

The outbreak of the war in Ukraine reduces the likelihood that Russian authorities will respond constructively to US pressure to rein in domestic cybercrime elements. That presents a significant problem when it comes to ransomware: according to data collected by Chainalysis, 74 percent of money made by ransomware actors in 2021 went to groups that were “highly likely to be affiliated with Russia.”58

Mitigating cryptocurrency cash-out schemes

Since most ransomware payments are made in cryptocurrency, and no other payment vehicle can facilitate pseudonymous, high-value, and high-volume payments across borders, those seeking to address the ransomware threat have increasingly called for greater regulation of global cryptocurrency exchanges.59 The hope is that better enforcement of existing Know-Your-Customer (KYC) and Anti-Money Laundering (AML) requirements would obstruct ransomware payments and empower cross-border law enforcement investigations.

As with law enforcement action, however, cryptocurrency regulation faces significant jurisdictional hurdles. Again, Russia plays a central role in noncompliance. For example, one investigation by Bloomberg traced four cryptocurrency exchanges involved in shady or illicit activity to a single office building in Moscow.60 Subsequent research from Chainalysis found that between 29 percent and 48 percent of all funds received by cryptocurrency businesses in the district that includes that building come from illicit and risky cryptocurrency addresses.61

The regulatory challenge extends beyond Russia. In 2019, more than 50 percent of all funds traced from criminal entities to exchange-hosted wallets ended up in Binance and Huobi, major cryptocurrency exchanges based in China.62 Tether, which claims to offer a dollar-backed stablecoin and is key to modern money laundering tactics, is incorporated in Hong Kong.63 Overall, the global nature of the money laundering networks that support cryptocurrency cash-out schemes inhibits the federal government from enforcing effective regulatory regimes cheaply or quickly.

Since global cryptocurrency reform represents a long-term undertaking, it is also worth asking whether these efforts would undercut the ransomware market as much as some assume. While jurisdictional differences slow regulation, criminals adapt quickly. Between 2011 and 2019, large exchanges helped cash out 60 percent to 80 percent of Bitcoin transactions from known bad actors, according to data collected by blockchain analytics firm Elliptic.64 Then, as exchanges bolstered their anti-money laundering policies, criminals turned elsewhere, to unlicensed exchanges, over-the-counter brokers, and decentralized finance (DeFi) protocols.65

If the goal is to interdict or to seize payments, cryptocurrency regulation also presents practical and ethical trade-offs. Interdicting a ransom payment risks harm to victims, since criminals who do not receive funds might withhold a decryption key. Moreover, if ransomware payments remain legal, interdiction or seizures will be difficult to implement at scale. Many victims will not share information with law enforcement if they fear that doing so will inhibit their ability to acquire a decryption key.

Reducing widespread security deficiencies

Finally, governments have sought to reduce the impact of ransomware attacks by driving or incentivizing the reduction of widespread security deficiencies, in particular among government agencies and critical infrastructure entities. However, the most common victims of ransomware attacks, small- to medium-sized entities, face significant obstacles when it comes to resources, contracting power, and incentives.

First, faced with razor-thin operating margins and business-critical operating dependencies, small- to medium-sized organizations cannot afford to build adequate security programs, which demand steady investments in people, technology, and process. According to a January 2020 study of three thousand small businesses in the United States and the United Kingdom, 20 percent of companies use no endpoint security whatsoever, 33 percent use free consumer-grade software, and 43 percent have no cyber defense plan in place.66 Likewise, one 2019 survey of businesses with between one hundred and one thousand employees found that just 45 percent of respondents described their security posture as “sufficient.”67

Unlike their larger counterparts, small- to medium-sized organizations also lack the contracting power and resources to purchase software products with sufficient security protections, let alone security software. Many software providers upsell customers for baseline security features, like single sign-on (SSO), while many security vendors are too expensive for small businesses. Too often, small- to medium-sized organizations must choose between security and affordability.

Many organizations also confront perverse incentives when it comes to delivering cybersecurity outcomes. Modern organizations rely on a complex network of software and IT suppliers.68 Those suppliers are not held liable if their products are compromised and used to attack their customers. As a result, software providers have weak incentives to improve security for their users, while users are skeptical that cybersecurity investments can adequately reduce risk.

Overall, small- and medium-sized organizations face a nasty predicament when it comes to security. Because it is difficult and costly, security is often deferred in favor of critical operating needs. But the longer security is deferred, the more expensive it becomes to fix. This vicious cycle of cost-cutting and debt accumulation has left many organizations incapable of defending themselves, while easing the way for predatory ransomware groups.

Part 3: Addressing ransomware into the future

To date, the US government has pursued a series of policies aimed at meeting the needs of today’s ransomware threat, such as prosecuting ransomware actors, squeezing cryptocurrency cash-out schemes, and defending national critical infrastructure. But given the unlikelihood of eliminating digital extortion in the near future, the federal government also needs to start investing in better strategies for managing this problem over time.

It can start by considering the following three policies:

RECOMMENDATION 1: Congress should pass new legislation mandating that all US-based organizations report ransom payments to the government. The reports should be sent to the Cybersecurity and Infrastructure Security Agency (CISA) within seventy-two hours of payment, and be shared by CISA, in anonymized form, with the FBI. To encourage compliance, the legislation should include liability protection stipulating that the report cannot form the basis for regulatory or enforcement action against the victim.69

At a minimum, the reports should detail the size of the payment, the date of payment, and the sending and receiving addresses for the transaction. They should also include information on the victim, such as the size of the organization and the industry it is in. The reports should be anonymized, but CISA should be required to publish quarterly reports with the data.
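The minimum fields described above can be pictured as a simple structured record. The sketch below is purely illustrative: the field names and the report class are the author-of-this-edit's hypothetical rendering, not an actual CISA schema or anything proposed in the report itself.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of the minimum fields a ransom-payment report
# might contain under the proposed mandate. All names are illustrative
# assumptions, not drawn from any real CISA reporting schema.
@dataclass
class RansomPaymentReport:
    payment_amount_usd: float   # size of the payment
    payment_date: date          # date of payment
    sending_address: str        # victim-side cryptocurrency address
    receiving_address: str      # criminal-side cryptocurrency address
    victim_org_size: str        # e.g., "small", "medium", "large"
    victim_industry: str        # e.g., "healthcare", "manufacturing"

# Example record (all values fabricated for illustration; addresses
# are deliberately truncated placeholders).
report = RansomPaymentReport(
    payment_amount_usd=250_000.0,
    payment_date=date(2022, 3, 14),
    sending_address="bc1q...",
    receiving_address="bc1q...",
    victim_org_size="small",
    victim_industry="healthcare",
)
print(report.victim_industry)  # prints "healthcare"
```

Standardizing even this handful of fields would make the quarterly aggregation CISA is asked to publish straightforward to produce.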

At present, policymakers must rely on partial or anecdotal data streams about ransomware attacks.70 The methodological limits of these data sets make it difficult to assess the ransomware threat empirically, instead inclining the public to assess the problem via newspaper headlines or press statements. According to one CISA estimate, just one-fourth of all ransomware attacks are reported to the government.71

A comprehensive ransomware reporting mandate would also represent a significant improvement on the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA). Because the law only obliges entities in critical infrastructure sectors to report ransom payments to the government, it fails to resolve the core problem that handicaps lawmakers today: available data does not provide sufficient fidelity or granularity to inform effective policy.

Comprehensive payment visibility would provide multiple benefits. In the short term, it would illuminate how widespread payments are and who is most affected. Moreover, by funneling standardized information to CISA, it would avoid data silos and formatting inconsistencies—two flaws in today’s reporting system highlighted by a recent Senate report.72

Over the long term, a reporting requirement will help law enforcement track the activity of ransomware groups, monitor money flows to, from, and within these organizations, and facilitate enforcement against criminals. It will also allow the government to monitor the ebb and flow of the digital extortion economy. Here, the quarterly reporting requirement is key. By forcing this data out into the open, it will ensure the problem is assessed at regular intervals and on a holistic basis.

Critics might counter that such a system is difficult to enforce. Some organizations may not know or understand their obligations. Others may resist, fearing the consequences of sharing sensitive information with the government.

In part, these objections rest on a misleading analogy to existing data breach notification regimes. Unlike data breaches, which revolve around ambiguous definitions of access to personal data, ransom payments are discrete and verifiable events; no amount of creative lawyering can obscure them. With minimal federal oversight, appropriate liability protections, and sufficient inducements for compliance—such as those outlined below—it stands to reason that most US-based organizations will be inclined to follow the law.

RECOMMENDATION 2: Congress should establish a tax relief program for small- to medium-sized organizations that implement a series of security best practices, including but not limited to the use of backups, the creation of an incident response plan, and the use of multifactor authentication (MFA) for remote access to administrative systems and services.

Mitigating the security deficiencies of smaller organizations represents an urgent public policy challenge. According to digital forensics and incident response (DFIR) firm Coveware, 55 percent of enterprise-targeted ransomware attacks hit companies with fewer than one hundred employees in 2021, and 75 percent of attacks victimized those with less than $50 million in revenue.73 Since small businesses are less likely to have access to expensive DFIR firms or be represented in corporate surveys, it is likely the figure undercounts the problem, perhaps significantly.

One ransomware negotiator interviewed for this project, Kurtis Minder, began advertising free ransomware negotiation services for small organizations on his firm’s website in 2020. In a matter of days, he said, he was flooded with new requests from small-business owners—a trend that continues to this day.74

A tax relief program would provide an incentive for these organizations to secure their networks before an attack occurs; afterward, many victims have few resources to spare on security. The average organization faces twenty-six days of interruption following a ransomware attack, according to Coveware.75 During this period and well after, costs add up fast, including downtime, data recovery, lost business, brand damage, and other emergency expenses. Globally, the average total cost of recovery from a ransomware attack stood at $1.4 million in 2021, according to a recent study by security firm Sophos.76

Because high recovery costs accrue whether or not one pays a ransom, an ounce of prevention is worth a pound of cure.77 Fortunately, research indicates that simple changes can go a long way toward reducing a victim’s likelihood of paying a ransom. A recent study by the Cyentia Institute and Arete found that even partial implementation of MFA reduced a victim’s likelihood of paying a ransom by 12.5 percent, while victims who could successfully recover data were 19.7 percent less likely to pay than those who couldn’t.78
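To make the cited percentages concrete, here is a minimal back-of-the-envelope sketch. Two assumptions are mine, not the study's: the 50 percent baseline payment rate is a placeholder, and the cited reductions are read as relative (proportional) reductions rather than percentage-point drops.

```python
# Illustrative arithmetic using the Cyentia Institute/Arete figures
# cited above. Baseline payment rate is an assumed placeholder, and
# treating "reduced by 12.5 percent" as a relative reduction is an
# interpretive assumption, not a claim from the study.
baseline_pay_rate = 0.50      # assumption for illustration only

mfa_reduction = 0.125         # partial MFA implementation (cited)
backup_reduction = 0.197      # successful data recovery (cited)

pay_rate_with_mfa = baseline_pay_rate * (1 - mfa_reduction)
pay_rate_with_backups = baseline_pay_rate * (1 - backup_reduction)

print(round(pay_rate_with_mfa, 4))      # 0.4375
print(round(pay_rate_with_backups, 4))  # 0.4015
```

Even under these rough assumptions, basic controls shave several points off the likelihood of paying, which compounds with the $1.4 million average recovery cost into meaningful expected savings.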

All told, the collective threat of ransomware to the US economy is enormous—and small organizations are desperate for help. According to a study by Cybersecurity Ventures, ransomware will cost global businesses $265 billion annually by 2031.79

RECOMMENDATION 3: Congress should draft legislation offering federal tax credits to small- to medium-sized organizations that hire or retain employees with cybersecurity expertise. The legislation could be modeled on the Work Opportunity Tax Credit (WOTC), which provides federal tax credits to organizations that hire individuals who have consistently faced barriers to employment.

A WOTC for cyber should be drafted to encourage organizations not just to hire cybersecurity talent, which is in short supply, but also to develop it in-house. Through online education, part-time course work, and vocational training, cybersecurity skills development is increasingly available to the public. Moreover, given that most ransomware attacks exploit basic security deficiencies, like password reuse, the educational requirements would not have to be significant to have an impact.

First and foremost, a tax credit would help address a cybersecurity personnel shortage. A 2019 study sponsored by the National Institute of Standards and Technology (NIST) estimated that 450,000 cybersecurity positions remain unfilled in the United States.80 The gap, already immense, is expected to grow over time. The Bureau of Labor Statistics projects that information security analyst jobs will increase 33 percent between 2020 and 2030.81

Second, the incentive would provide relief to smaller organizations, which are hard-pressed to find cybersecurity talent, let alone afford it.82 These resource constraints are particularly acute in certain industries, such as healthcare. According to the 2017 Health Care Industry Cybersecurity Task Force, most healthcare organizations face operating margins of less than 1 percent and many cannot afford to maintain in-house security personnel.83 These constraints exacerbate the inherent complexities of hospital administration and represent a central reason that healthcare companies became such popular targets for ransomware attacks over the last two years.84

Though part-time security training is no substitute for a professional security vendor or a mature security program, it represents a step in the right direction. It could reinforce the other recommendations included in this report, providing a logical point of liaison with the federal government on ransomware reporting and reducing the cost of proactive security implementations.

Conclusion

The recent surge in ransomware attacks is often explained via an alphabet soup of metallic-sounding acronyms and epithets. Ostensibly, acronyms and terms like “RaaS” (ransomware-as-a-service), “IABs” (initial access brokers), and “double extortion” (the data disclosure threat) identify the phenomena that have supercharged digital extortion.

In truth, ransomware poses an analytic challenge at once simpler and more complex: more complex insofar as several factors contribute to the ransomware problem in different degrees, like weighted variables in some inscrutable algorithm; and simpler insofar as the ransomware surge boils down to a single discovery. Organizations present vulnerable, lucrative, and near limitless targets for digital extortion—something cybercriminals had grasped firmly by 2019.

The shift to extorting organizations, instead of individuals, transformed the digital extortion industry profoundly. By increasing the importance of any single victim in the eyes of the attackers, it made ransomware more disruptive. By making digital extortion so profitable, it attracted a flurry of new activity and investment from cybercriminals.

Since the attack on Colonial Pipeline, the United States and its allies have made stopping ransomware a priority. Thus far, those initiatives have failed to reduce the threat as much as many had hoped.85 But policymakers should not be discouraged.

The factors that made ransomware prolific may be new, but those that made it possible are familiar and deep-seated. To put ransomware in the rearview mirror, the United States will finally have to address the three conditions that have not just fueled digital extortion, but cybercrime, for much of the last decade: a large pool of security-poor victims, a poorly regulated payment vehicle in the form of cryptocurrency, and a sense of impunity among criminals.

Solving these problems will take not just time but understanding. Even though it is tempting to hope that we are just one diplomatic agreement, one technological leap, or one regulation away from eliminating it, targeted ransomware is here to stay. As with other forms of crime, the government can expect better outcomes by planning how to manage the issue over time rather than searching for quick and complete solutions.

When it comes to ransomware, that means investing in the defense of broad swathes of the US economy—in particular small- to medium-sized organizations—and establishing more transparency about the problem. It also involves a continuation of efforts launched by the Biden administration over the past year: more pressure on ransomware groups and money laundering networks, better cryptocurrency regulation, and more support to national critical infrastructure.

If that answer is unsatisfying, so be it. In the words of a wise Jedi who tried, and failed, to solve a serious problem in a single afternoon: “Only a Sith deals in absolutes.”

About the author

John Sakellariadis is a 2021-2022 Fulbright US Student Research Grantee, studying EU cybersecurity policy in Athens, Greece. John has worked as a journalist and researcher, and has written for Slate, The Record, and SupChina. He received a master’s degree in Public Policy from Columbia University and a bachelor’s degree in History & Literature from Harvard University.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Popp’s history is curious, to say the least. For more, see Alina Simone, “The Strange History of Ransomware: Floppy Disks, AIDS Research, and a Panama P.O. Box,” (Blog) Medium, March 26, 2015, https://medium.com/@alinasimone/the-bizarre-pre-internet-history-of-ransomware-bb480a652b4b.
2    There are two other terms that, for the purposes of this paper, might be used interchangeably with “targeted ransomware”: “human-operated ransomware” and “big game hunting.” The author uses “targeted” ransomware because it best captures those characteristics that distinguish modern ransomware from its predecessor. “Human operated ransomware” can generate confusion about the degree of automation resident in certain elements of the current and past ransomware life cycle. It also directs focus away from the most salient aspect of the shift, the movement away from the spray-and-pray approach to malware deployment. “Big game hunting” conveys the latter point, but it describes only a small portion of ransomware actors. It also obscures the degree of opportunism that suffuses the targeted ransomware economy.
3    See “Work Opportunity Tax Credit,” Internal Revenue Service (website), accessed July 5, 2022, https://www.irs.gov/businesses/small-businesses-self-employed/work-opportunity-tax-credit. This tax credit was extended until December 31, 2025, via the Consolidated Appropriation Act of 2021.
4    A. Kharraz et al., “Cutting the Gordian Knot: A Look Under the Hood of Ransomware Attacks,” in Detection of Intrusions and Malware, and Vulnerability Assessment, eds. M. Almgren et al., DIMVA 2015, Lecture Notes in Computer Science 9148, Springer, Cham, https://doi.org/10.1007/978-3-319-20550-2_1.
5    Luca Invernizzi, Kylie McRoberts, and Elie Bursztein, “Tracking Desktop Ransomware Payments End-to-End,” Presentation, BlackHat USA, July 26, 2017.
6    Herb Weisbaum, “Ransomware: Now a Billion Dollar a Year Crime and Growing,” NBC News, January 9, 2017, https://www.nbcnews.com/tech/security/ransomware-now-billion-dollar-year-crime-growing-n704646.
7    This assessment is based on data from multiple sources, including author interviews. Brett Callow (threat analyst, Emsisoft), in discussion with the author, November 24, 2021; John Fokker (head of Cyber Investigations, Trellix Labs), in discussion with the author, January 18, 2022; Robert McArdle (director, Trend Micro’s Forward Looking Threat Research Team), in discussion with the author, January 27, 2022; Allan Liska (intelligence analyst, Recorded Future), in discussion with the author, January 15, 2022; pancak3 (online security researcher), in discussion with the author, March 8, 2022; Azim Khodjibaev (senior intelligence analyst, Cisco Talos), in discussion with the author, January 17, 2022; and Mark Arena (CEO, Intel 471), in discussion with the author, February 18, 2022. Several written sources capture the development, e.g., Kris Oosthoek, Jack Cable, and Georgios Smaragdakis, “Ransomware: A Tale of Two Markets,” arXiv: 2205.05028 [cs.CR], May 2022, https://doi.org/10.48550/arXiv.2205.05028; Institute for Security and Technology, “A Comprehensive Framework for Action: Recommendations from the Ransomware Task Force,” Report, April 2021, https://securityandtechnology.org/ransomwaretaskforce/report/; and Symantec, “Internet Security Threat Report,” February 2019, https://docs.broadcom.com/doc/internet-security-threat-report-volume-24-en.
8    Symantec, “The Evolution of Ransomware,” August 2015, 22.
9    Kharraz et al., “Cutting the Gordian Knot”; and Bitdefender, “Ransomware: A Victim’s Perspective,” Report on US and European Internet Users, 2016, https://download.bitdefender.com/resources/files/News/CaseStudies/study/59/Bitdefender-Ransomware-A-Victim-Perspective.pdf.
10    Josephine Wolff, “The $100 Million Bot Heist,” Nautilus, November 28, 2018, https://nautil.us/the-100-million-bot-heist-7816/.
11    James A. Sherer et al., “Ransomware: Practical and Legal Considerations for Confronting the New Economic Engine of the Dark Web,” Annual Survey, University of Richmond Journal of Law & Technology 23 (2017), http://jolt.richmond.edu/2017/04/30/volume23_annualsurvey_sherer/.
12    Sherer et al., “Ransomware.”
13    Sherer et al., “Ransomware.”
14    Kharraz et al., “Cutting the Gordian Knot.”
15    Pranshu Bajpai, Aditya Sood, and Richard Enbody, “A Key-Management-Based Taxonomy for Ransomware,” 2018 Anti-Phishing Working Group (APWG) Symposium on Electronic Crime Research (eCrime), 1-12.
16    Luca Invernizzi, Kylie McRoberts, and Elie Bursztein, “Tracking Desktop Ransomware Payments,” Presentation, Security and Privacy Conference, 2018, https://elie.net/publication/tracking-ransomware-end-to-end/. The researchers involved in this study characterized their data as lower-bound estimates of the ransomware market because they excluded several transactions without a clear link to ransomware. While there is reason to believe the undercounting is significant—the Federal Bureau of Investigation reported that $209 million in ransom payments were made in the first three months of 2016 —the same could be said of subsequent efforts to track ransom payments.
17    “The Chainalysis 2021 Crypto Crime Report,” Chainalysis.
18    “The Chainalysis 2021 Crypto Crime Report.”
19    Weisbaum, “Ransomware: Now a Billion Dollar a Year Crime.”
20    Brian Krebs, “Rogue Pharma, Fake AV Vendors Feel Credit Card Crunch,” Krebs on Security, October 18, 2012.
21    Verizon, “2016 Data Breach Investigations Report,” verizonenterprise.com/verizon-insights-lab/dbir/2016.
22    Symantec, “An ISTR Special Report: Ransomware and Businesses 2016,” August 10, 2016, https://conferences.law.stanford.edu/cyberday/wp-content/uploads/sites/10/2016/10/5c_ISTR2016_Ransomware_and_Businesses.pdf.
23    Bajpai et al., “A Key-Management-Based Taxonomy.”
24    McArdle, in discussion with the author. Separately, another factor that cut into the profitability of mass ransomware was the increased adoption of cloud-based services on consumer devices; see Symantec, “An ISTR Special Report.”
25    United States v. Savandi and Mansouri, D.N.J. (2018), https://www.justice.gov/opa/press-release/file/1114741/download.
26    United States v. Savandi and Mansouri, D.N.J. (2018), https://www.justice.gov/opa/press-release/file/1114741/download.
27    Symantec, “Internet Security Threat Report.”
28    CrowdStrike, “2020 Global Threat Report,” 2020, 53, https://www.crowdstrike.com/resources/reports/2020-crowdstrike-global-threat-report/; and John Fokker, “McAfee ATR Analyzes Sodinokibi aka REvil Ransomware-as-a-Service–The All-Stars,” McAfee (blog), October 2, 2019, https://www.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-atr-analyzes-sodinokibi-aka-revil-ransomware-as-a-service-the-all-stars/.
29    Allan Liska, Understand, Prevent, Recover (South Carolina: ActualTech Media, November 15, 2021).
30    “Steal, Then Strike: Access Merchants Are First Clues to Future Ransomware Attacks,” Intel471 (blog), December 1, 2020, https://intel471.com/blog/ransomware-attack-access-merchants-infostealer-escrow-service.
31    “Steal, Then Strike,” Intel471.
32    Liska, Understand, Prevent, Recover.
33    Liska, Understand, Prevent, Recover.
34    Dmitry Smilyanets, “‘I Scrounged through the Trash Heaps . . . Now I’m a Millionaire’: An Interview with REvil’s Unknown,” The Record, March 16, 2021, https://therecord.media/i-scrounged-through-the-trash-heaps-now-im-a-millionaire-an-interview-with-revils-unknown/.
35    Fyodor Yarochkin, “Ransomware as a Service: Enabler of Widespread Attacks,” Trend Micro (blog), October 5, 2021, https://www.trendmicro.com/vinfo/us/security/news/cybercrime-and-digital-threats/ransomware-as-a-service-enabler-of-widespread-attacks. There are some signs the affiliate model may be in decline. Numerous instances in which affiliate groups and ransomware operators have tried to steal from one another have created distrust. Moreover, as increased government attention has been directed toward ransomware, groups have become increasingly wary of delegating so much independence to trigger-happy affiliate groups. There are some indications that top-tier ransomware groups are adopting a more centralized and hierarchical model, though the fluidity of these relationships make any prediction uncertain.
36    Yelisey Boguslavskiy and Vitali Kremez, “Corporate Loader ‘Emotet’: History of ‘X’ Project Return for Ransomware,” Advanced Intel (blog), November 19, 2021, https://www.advintel.io/post/corporate-loader-emotet-history-of-x-project-return-for-ransomware.
37    Brian Krebs, “Amid an Embarrassment of Riches, Ransom Gangs Increasingly Outsource Their Work,” Krebs on Security, October 8, 2020, https://krebsonsecurity.com/2020/10/amid-an-embarrassment-of-riches-ransom-gangs-increasingly-outsource-their-work/.
38    Simon Chandler, “REvil Ransom Gang Offers $1 Million as Part of Recruitment Drive,” Forbes, October 6, 2020, https://www.forbes.com/sites/simonchandler/2020/10/06/revil-ransomware-gang-offers-1-million-as-part-of-recruitment-drive/.
39    Brian Krebs, “Conti Ransomware Group Diaries, Part II: The Office,” Krebs on Security, March 2, 2022, https://krebsonsecurity.com/2022/03/conti-ransomware-group-diaries-part-ii-the-office/.
40    Krebs, “Conti Ransomware Group Diaries, Part II.”
41    Krebs, “Conti Ransomware Group Diaries, Part II.”
42    “The Chainalysis 2021 Crypto Crime Report.”
43    Federal Bureau of Investigation, Internet Crime Report 2020, FBI Internet Crime Complaint Center (IC3), March 17, 2021, https://www.ic3.gov/Media/PDF/AnnualReport/2020_IC3Report.pdf.
44    Coveware, “Ransomware Attackers Down Shift to ‘Mid-Game’ Hunting in Q3 2021,” Coveware (blog), October 21, 2021, https://www.coveware.com/blog/2021/10/20/ransomware-attacks-continue-as-pressure-mounts.
45    Coveware, “Ransomware Attackers Down Shift.”
46    Coveware, “Ransomware Attackers Down Shift.”
47    Coveware, “The Marriage of Data Exfiltration and Ransomware,” Coveware (blog), January 10, 2020, https://www.coveware.com/blog/marriage-ransomware-data-breach.
48    CrowdStrike, “2022 Crowdstrike Global Threat Report,” Report, February 15, 2022, https://www.crowdstrike.com/resources/reports/global-threat-report/.
49    The Cyber Peace Institute, Playing with Lives: Cyberattacks on Healthcare Are Attacks on People, March 2021, https://cyberpeaceinstitute.org/report/2021-03-CyberPeaceInstitute-SAR001-Healthcare.pdf.
50    Office of Information Security, US Department of Health and Human Services, “Trickbot, Ryuk, and the HPH Sector,” Bulletin, November 12, 2020, https://www.hhs.gov/sites/default/files/trickbot-ryuk-and-the-hph-sector.pdf.
51    Brian Krebs, “Conti’s Ransomware Toll on the Healthcare Industry,” Krebs on Security, April 18, 2022, https://krebsonsecurity.com/2022/04/contis-ransomware-toll-on-the-healthcare-industry/.
52    Allan Liska, “Are Ransomware Attacks Slowing Down? It Depends on How You Look at It,” Recorded Future (blog), December 20, 2021, accessed April 2022.
53    Cybersecurity and Infrastructure Security Agency, US Department of Homeland Security, “2021 Trends Show Increased Globalized Threat of Ransomware,” Alert, February 9, 2022, https://www.cisa.gov/uscert/ncas/alerts/aa22-040a.
54    Several government and nonprofit groups have studied the ransomware problem extensively and offered thoughtful proposals for fixing it. See Institute for Security and Technology, A Comprehensive Framework for Action: Recommendations from the Ransomware Task Force, April 2021, https://securityandtechnology.org/ransomwaretaskforce/report/; “Use of Cryptocurrency in Ransomware Attacks, Available Data, and National Security Concerns,” US Senate Comm. on Homeland Security & Governmental Affairs, 117th Cong. (2022), https://www.hsgac.senate.gov/imo/media/doc/HSGAC%20Majority%20Cryptocurrency%20Ransomware%20Report_Executive%20Summary.pdf; and Mieke Eoyang, Allison Peters, Ishan Mehta, and Brandon Gaskey, “To Catch a Hacker: Towards a Comprehensive Strategy to Identify, Pursue and Punish Malicious Cyber Actors,” Third Way, October 29, 2018, https://www.thirdway.org/report/to-catch-a-hacker-toward-a-comprehensive-strategy-to-identify-pursue-and-punish-malicious-cyber-actors.
55    Cybersecurity and Infrastructure Security Agency, US Department of Homeland Security, “2021 Trends Show Increased Globalized Threat of Ransomware,” Alert, February 9, 2022, https://www.cisa.gov/uscert/ncas/alerts/aa22-040a.
56    Tim Maurer, “Why the Russian Government Turns a Blind Eye to Cybercriminals,” Slate, February 2, 2018, https://slate.com/technology/2018/02/why-the-russian-government-turns-a-blind-eye-to-cybercriminals.html.
57    Recorded Future, Dark Covenant: Connections Between the Russian State and Criminal Actors, Report, September 9, 2021, https://www.recordedfuture.com/russian-state-connections-criminal-actors/; and John Fokker and Jambul Tologonov, “Conti Leaks: Examining the Panama Papers of Ransomware,” Trellix (blog), March 31, 2022, https://www.trellix.com/en-au/about/newsroom/stories/threat-labs/conti-leaks-examining-the-panama-papers-of-ransomware.html.
58    Chainalysis, “Russian Cybercriminals Drive Significant Ransomware and Cryptocurrency-based Money Laundering Activity,” Chainalysis (blog), February 14, 2022, https://blog.chainalysis.com/reports/2022-crypto-crime-report-preview-russia-ransomware-money-laundering/.
59    Many people often point to business email compromise (BEC) scams and ask: why couldn’t criminals use the traditional banking system to facilitate ransom payments? BEC scams rely on deception. Banks facilitate these payments because they think they are legitimate. By contrast, victims cooperate with criminals to make ransom payments. Because the better regulated banking system is unlikely to support these transactions, and victims could face liability if they lie to financial intermediaries, victims turn to cryptocurrencies.
60    Kartikay Mehrotra and Olga Kharif, “Ransomware HQ: Moscow’s Tallest Tower Is a Cybercriminal Cash Machine,” Bloomberg, November 3, 2021, https://www.bloomberg.com/news/articles/2021-11-03/bitcoin-money-laundering-happening-in-moscow-s-vostok-tower-experts-say.
61    Chainalysis, “Russian Cybercriminals Drive Significant Ransomware.”
62    Chainalysis, “Crypto Money Laundering: How Criminals Cash Out Billions in Bitcoin and Other Cryptocurrencies,” Chainalysis (blog), January 15, 2020, https://blog.chainalysis.com/reports/crypto-money-laundering-2019/.
63    Bruce Scheier and Nicholas Weaver, “How to Cut Down on Ransomware Attacks Without Banning Bitcoin,” Slate, June 17, 2021, https://slate.com/technology/2021/06/banning-cryptocurrencies-bitcoin-ransomware-disruption-exchanges.html.
64    Elliptic, Financial Crime Typologies in Cryptoassets: The Concise Guide for Compliance Leaders, 2020, https://www.elliptic.co/hubfs/Financial%20Crime%20Typologies%20in%20Cryptoassets%20Guides%20(All%20Assets)/Typologies_Concise%20Guide_12-20.pdf.
65    Elliptic, Financial Crime Typologies in Cryptoassets. This bolstering could also be interpreted as a win for law enforcement, in the sense that criminals were forced to resort to riskier exchanges with less liquidity on their balance sheets. However, profits made by ransomware groups continued to grow during this period, suggesting that criminals were not greatly affected by such reform. Decentralized finance refers to financial transactions that do not rely on intermediaries, like brokerages or exchanges, and instead use peer-to-peer technologies to establish trust between parties.
66    BullGuard, “New Study Reveals One in Three SMBs Use Free Consumer Cybersecurity and One in Five Use No Endpoint Security at All,” February 19, 2020, https://www.bullguard.com/press/press-releases/2020/new-study-reveals-one-in-three-smbs-use-free-consu.aspx.
67    Ponemon Institute, 2019 Global State of Cybersecurity in Small- and Medium-sized Businesses, 2019, https://www.keeper.io/hubfs/2019%20Keeper%20Report_Final%20(1).pdf.
68    Trey Herr, William Loomis, Stewart Scott, and June Lee, Breaking Trust: Shades of Crisis Across an Insecure Software Supply Chain, Atlantic Council, July 26, 2020, https://www.atlanticcouncil.org/in-depth-research-reports/report/breaking-trust-shades-of-crisis-across-an-insecure-software-supply-chain/#executive.
69    This recommendation was inspired by the Ransomware Task Force. See Institute for Security and Technology, A Comprehensive Framework for Action: Recommendations from the Ransomware Task Force, April 2021, 47, https://securityandtechnology.org/ransomwaretaskforce/report/.
70    State and federal data breach notification laws revolve around the protection of personally identifiable information. If ransomware actors never access such protected data—or if victims lack evidence of that access—victims do not have to report a breach, let alone the fact of payment.
71    Jonathan Greig, “CISA Exec: Lack of Ransomware Incident Reporting Is Crippling Defense Efforts,” The Record, June 8, 2022, https://therecord.media/cisa-exec-lack-of-ransomware-incident-reporting-is-crippling-defense-efforts/.
72    “Use of Cryptocurrency in Ransomware Attacks,” US Senate Comm. on Homeland Security & Governmental Affairs.
73    Coveware, “Ransomware Attackers Down Shift.”
74    Kurtis Minder (CEO and co-founder, GoodSense, a nonprofit organization) in discussion with the author, January 22, 2022.
75    “Ransomware Actors Pivot from Big Game to Big Shame Hunting,” Coveware (blog), May 3, 2022, https://www.coveware.com/blog/2022/5/3/ransomware-threat-actors-pivot-from-big-game-to-big-shame-hunting.
77    The decryption keys proffered by ransomware groups are often imperfect and must be tested to ensure they do not cause further damage. Even then, the decryption process is slow and some data might prove unrecoverable. For example, it took the Irish Healthcare System four months to recover from a ransomware attack, even though Irish authorities received a decryption key six days after the ransomware was deployed. See PwC’s report for the Health Services Executive, Conti Cyber Attack on the HSE: Independent Post-Incident Review, December 3, 2021, https://www.hse.ie/eng/services/publications/conti-cyber-attack-on-the-hse-full-report.pdf.
78    See “Mitigating Ransomware’s Impact,” Report, Arete and the Cyentia Institute, June 2022, https://areteir.com/report/mitigating-ransomwares-impact-investigative-cybercrime-series-vol-1/.
79    Cybersecurity Ventures, “Global Ransomware Damage Costs Predicted to Exceed $265 Billion By 2031,” June 3, 2021, https://cybersecurityventures.com/global-ransomware-damage-costs-predicted-to-reach-250-billion-usd-by-2031/.
80    Data available via CyberSeek’s “Cybersecurity Supply/Demand Heat Map,” https://www.cyberseek.org/heatmap.html.
81    US Bureau of Labor Statistics, “Occupational Outlook Handbook: Information Security Analysts,” https://www.bls.gov/ooh/computer-and-information-technology/information-security-analysts.htm.
82    National Academy of Public Administration, A Call to Action: The Federal Government’s Role in Building a Cybersecurity Workforce for the Nation, January 2022, https://s3.us-west-2.amazonaws.com/napa-2021/studies/dhs-cybersecurity-workforce/NAPA-Final-CISA-Cybersecurity-Workforce-Report-January-2022.pdf.
83    Health Care Industry Cybersecurity Task Force, Report on Improving Cybersecurity in the Healthcare Industry, June 2017, https://www.phe.gov/preparedness/planning/cybertf/documents/report2017.pdf.
84    CyberPeace Institute, Playing with Lives: Cyberattacks on Healthcare Are Attacks on People, March 2021, https://cyberpeaceinstitute.org/report/2021-03-CyberPeaceInstitute-SAR001-Healthcare.pdf.
85    Liska, “Are Ransomware Attacks Slowing Down?”

The post Behind the rise of ransomware appeared first on Atlantic Council.

]]>
Ukraine’s tech excellence is playing a vital role in the war against Russia https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-tech-excellence-is-playing-a-vital-role-in-the-war-against-russia/ Wed, 27 Jul 2022 16:24:39 +0000 https://www.atlanticcouncil.org/?p=551024 Ukraine's tech sector excellence is playing a key role in the war against Russia by providing rapid solutions to frontline challenges in ways that the more traditional top-down Russian military simply cannot match. 

The post Ukraine’s tech excellence is playing a vital role in the war against Russia appeared first on Atlantic Council.

]]>
Russia’s invasion of Ukraine is now in its sixth month with no end in sight to what is already Europe’s largest conflict since WWII. In the months following the outbreak of hostilities on February 24, the courage of the Ukrainian nation has earned admiration around the world. Many international observers are encountering Ukraine for the first time and are learning that in addition to their remarkable resilience, Ukrainians are also extremely innovative with high levels of digital literacy.

This tech sector strength is driving the Ukrainian response to Russia’s imperial aggression. It is enabling the country to defy and in many instances defeat one of the world’s leading military superpowers. A start-up culture that owes much to Ukraine’s vibrant IT industry is providing rapid solutions to frontline challenges in ways that the more traditional top-down Russian military simply cannot match. 

The tech component of Ukraine’s battlefield success is perhaps not as surprising as it might at first appear. According to the 2022 Global Skills Report by Coursera, the country ranks among the global top ten in terms of technological skills.

This high position reflects the impressive progress made in recent years to support the growth of the country’s IT sector and to foster greater digital literacy throughout Ukrainian society. Since 2019, the Ukrainian authorities have prioritized digital skills and have sought to promote learning through the Diia.Digital Education online platform, which serves as an “educational Netflix” featuring courses conducted by experts and celebrities.

This approach appears to be working. The platform currently boasts a completion rate of 80% among those who sign up for courses. Nor has Russia’s invasion prevented Ukrainians from enhancing their IT skills. Around 60,000 Ukrainians have registered for courses since the start of the war, with the most popular topics being training for new tech sector professions, media literacy, and cyber hygiene.


Ukraine’s emphasis on digital innovation was shaping the country long before Putin launched his full-scale invasion on February 24. In 2021, Ukraine became the first country in the world to give digital passports the same legal status as physical passports for domestic use. Ukraine was the fourth European country to introduce digital driving licenses and also developed the world’s fastest online business registration service. 

Efforts to promote greater digitization continue despite today’s wartime conditions. This is recognized as important for the war effort and is also seen as an essential ingredient for Ukraine’s post-war recovery. I am convinced that tech-focused educational initiatives must remain a strategic priority for the country. By some estimates, 85% of all occupations will require digital skills by 2025.

The Ukrainian authorities are currently supporting a project to train 5,000 internally displaced women for new careers in the creative and tech industries. There is clearly huge demand for such tech-related retraining opportunities, with the application process for the first phase of this initiative attracting around 36,000 candidates.

A pilot project to reform computer studies within the Ukrainian school system is also proceeding against the backdrop of the ongoing Russian invasion. The first stage will begin in September and will feature 50 secondary schools, before being scaled up to the entire country next year. Thanks to this project, an estimated four million Ukrainian schoolchildren will gain access to a state-of-the-art digital education.

Ukraine’s broader transformation into a genuinely digital state is continuing despite the disruption of the war. This progress is perhaps most visible in terms of the Diia.City project. Two weeks before the Russian invasion, Ukraine launched this special economic initiative offering some of the most attractive taxation terms in the world for tech companies. Ukrainian and international IT companies have continued to sign up to the Diia.City project since the outbreak of hostilities, with a total of 260 companies now registered. Clearly, they believe in Ukrainian victory and are confident about the country’s future development as a digital powerhouse.

Digital services have been launched to support those in the combat zone, allowing them to apply online for financial assistance. Likewise, the Diia mobile app allows anyone to financially support the Ukrainian military via a few clicks. Ukrainians can use the country’s digital platforms to report news of Russian military deployments in their localities and can submit digital reports detailing property damage.

The team at the Ministry of Digital Transformation is currently working with thousands of volunteers to wage a digital war against Russia on the information and cyber fronts. The ministry has initiated the creation of Ukraine’s very own IT army, which brings together specialists from Ukraine and other countries around the world. Today, this army consists of more than 250,000 IT volunteers participating in what is widely recognized as the world’s first cyber war.

Ukraine’s innovative use of technology is allowing the country to punch above its weight and defend itself against a much larger enemy. This experience will be studied for years to come as an example of how digital literacy and tech excellence can cancel out the traditional advantages of conventional military strength and transform the modern battlefield. The future of the world will be shaped by technology and today’s Ukraine is leading the way. 

Valeriya Ionan is Ukraine’s Deputy Minister for Eurointegration at the Ministry of Digital Transformation


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


The post Ukraine’s tech excellence is playing a vital role in the war against Russia appeared first on Atlantic Council.

]]>
Panikoff in DefenseOne: Pass the CHIPS Act https://www.atlanticcouncil.org/insight-impact/in-the-news/panikoff-in-defenseone-pass-the-chips-act/ Tue, 26 Jul 2022 20:01:00 +0000 https://www.atlanticcouncil.org/?p=551776 The post Panikoff in DefenseOne: Pass the CHIPS Act appeared first on Atlantic Council.

]]>

The post Panikoff in DefenseOne: Pass the CHIPS Act appeared first on Atlantic Council.

]]>
Cybersecure the future: Ransomware https://www.atlanticcouncil.org/in-depth-research-reports/report/cybersecure-the-future-ransomware/ Mon, 25 Jul 2022 15:00:00 +0000 https://www.atlanticcouncil.org/?p=544221 This report endeavors to examine key challenges in predicting, safeguarding against, and dealing with ransomware attacks, thereby better informing US and international policy to combat such attacks and their perpetrators.

The post Cybersecure the future: Ransomware appeared first on Atlantic Council.

]]>

Executive summary

This report endeavors to examine key challenges in predicting, safeguarding against, and dealing with ransomware attacks, thereby better informing US and international policy to combat such attacks and their perpetrators. To identify these challenges, the Atlantic Council’s GeoTech Center, in partnership with the Digital Forensic Research Lab’s Cyber Statecraft Initiative, held four roundtables that connected government officials from the Department of Justice, the Federal Bureau of Investigation, and the United States Secret Service with executive-level industry experts in cybersecurity and ransomware. The key findings and primary observations of the roundtables are listed below.

Summary of findings and observations

Summary of findings
Finding 1.1: Industry is seeing two parallel trends when it comes to ransomware models. On the one hand, industry is seeing an increase in independent, skilled hackers, as opposed to established hacker gangs. This shift is resulting in friction in the cybercriminal world and could be positive for law enforcement agencies. Alternatively, some industry members are reporting that there is a lowered barrier of entry for inexperienced or nontechnical cybercriminals, therefore expanding the ransomware criminal industry. More research needs to be done to determine which one of these trends is truly on the rise.

Despite the two opposing trends, all of industry agrees that ransomware groups are learning from their mistakes and continually improving their tactics, techniques, and procedures (TTPs) while actively managing their brands and reputations.
Finding 1.2: Ransomware attacks are opportunistic—targeting organizations with vulnerable online systems and/or during key periods when they have pressure to be up and running.
Finding 2.1: Information sharing between government and the private sector, while integral to tackling ransomware, is inconsistent. Federal law enforcement has made it clear that the legal counsels of private companies have repeatedly raised concerns about constraints that limit the sharing of information that could aid in the detection and reporting of illicit activities.
Finding 2.2: The stigma and consequences of being the victim of a cyberattack present a challenge to information sharing. Oftentimes, victims are reluctant to report incidents to government agencies for fear of negative consequences such as double victimization.
Finding 2.3: Because ransomware attacks are happening at increasing speed, the information-sharing framework between law enforcement and industry must become faster as well.
Finding 3.1: The establishment of a national law enforcement team to focus specifically on cryptocurrency, which is increasingly used for cybercrime payments, is a step in the right direction.
Finding 3.2: Law enforcement discourages paying a ransom, but encourages prompt reporting regardless of a decision to pay.
Finding 3.3: Federal law enforcement should work to detail appropriate processes for how e-currency or cryptocurrency service providers work with law enforcement to monitor for criminal activities beyond just ransomware, including use of cryptocurrencies for illicit activities such as human trafficking and other transnational criminal offenses.
Summary of observations
Observation 1.1: An international public-private sector partnership needs to be developed to address the transnational nature of ransomware schemes. Such a partnership should help law enforcement devote more of its energy to tracing and arresting perpetrators within ransomware groups.
Observation 1.2: It is important to implement stronger defense mechanisms and use updated and secure software to make entering a network more difficult, particularly for heavily targeted industries.
Observation 2.1: Better uniform reporting and sharing of information is needed. In particular, standardized timelines, questions, and formats are needed for incident reporting. Even with the Cyber Incident Reporting for Critical Infrastructure Act of 2022, there still remains confusion across government and critical infrastructure entities as to the logistics of reporting incident information. In addition, it is unclear how reported incident information will be shared between departments, government agencies, and the private sector.
Observation 2.2: Safe harbor and shield laws are needed for ransomware reporting; mandated reporting, in particular, requires a safe harbor framework.
Observation 2.3: Establish and strengthen public and private partnerships through joint tabletop exercises and relationship building with law enforcement and the government.
Observation 3.1: The US government and Federal Reserve should work with the National Cryptocurrency Enforcement Team to properly evaluate the strategic implementation of a US central bank digital currency (CBDC).
Observation 3.2: “To pay or not to pay” a ransom is ultimately a business decision. This decision should be made with proposed safe harbor protections in coordination with law enforcement.

Introduction

In an interconnected world, digital threats have become increasingly common. Chief among them is ransomware, a malware-based cyberattack that encrypts files, rendering data inaccessible. Once an attack has successfully been inflicted, hackers promise to restore systems and data in exchange for a ransom.

Ransomware has existed for over two decades but reached new heights in the last few years.1 In 2020, known ransomware payments totaled $400 million globally and topped $81 million in the first quarter of 2021.2 Financial motivations are not the only driver for these cyberattacks. Nation-states, among others, can use ransomware to demonstrate vulnerabilities in the critical infrastructures of their rivals or disguise deliberate destruction of data and information systems. This makes ransomware a potent tool of geopolitical power.3

Ransomware incidents have disrupted critical services and organizations of all sizes including schools, banks, hospitals, and transportation. A high-profile example of this is the 2021 Colonial Pipeline hack. This attack targeted Colonial Pipeline’s billing system and led to the shutdown of the largest fuel pipeline in the United States, causing gas shortages across the East Coast. The hackers were affiliated with a Russian-speaking cybercrime group known as DarkSide and received $4.4 million in ransom from Colonial after the attack,4 part of which was later recovered with the assistance of US law enforcement.5 One of the criminals associated with this attack was later found and charged on January 14, 2022, as a result of US-Russia collaboration. At the request of the United States, Russia dismantled the ransomware crime group REvil in an operation in which it detained and charged the group’s members, one of whom was responsible for the Colonial Pipeline attack.6

As a harbinger of things to come, costs associated with ransomware are expected to reach new heights by 2031. Cybersecurity Ventures, a research firm, predicts that there will be a new ransomware attack every two seconds by 2031 and that global costs are expected to exceed $265 billion.7 Against this backdrop, the Atlantic Council’s GeoTech Center and the Digital Forensic Research Lab’s Cyber Statecraft Initiative held a series of off-the-record, private conversations. The discussions examined the connections among ransomware, cyber threat intelligence, industry insurance, cryptocurrencies, and adversarial actors. Participants included high-level members of the US Department of Justice, the Federal Bureau of Investigation, the US Secret Service, and industry experts. This report highlights the key findings of these conversations, followed by the observations that emerged as a result of those findings.

Building this report and conceptualizing the ransomware lifecycle

In 2021, the Atlantic Council GeoTech Center convened subject matter experts from the cybersecurity industry and federal law enforcement agencies for a series of four off-the-record roundtable conversations. The objective of these roundtables was to allow subject matter experts to speak freely on issues surrounding ransomware and to compile these conversations into a report with concrete findings and corresponding observations. The findings and observations in this report expressly grew out of the views articulated by the private sector and law enforcement officials present for these conversations, and as such not every finding has a corresponding observation. In some cases, findings or observations are supplemented by existing research. Due to the private nature of these conversations, none of these findings or observations will be linked to the specific companies or law enforcement agencies that were present.

Senior executives from the following companies and organizations were in attendance. All participants in these roundtables were given an equal right to participate and share their views and experiences.

Roundtable 1: Ransomware and Cyber Threat Intelligence

Attendance record

  • US Department of Justice
  • Federal Bureau of Investigation
  • McAfee LLC8
  • Crowdstrike Services
  • Flashpoint
  • Accenture
  • Intel471
  • Atlantic Council

Roundtable 2: Ransomware and Cyber Incident Response

Attendance record

  • US Department of Justice
  • Federal Bureau of Investigation
  • McAfee LLC
  • Crowdstrike Services
  • Flashpoint
  • Blue Ridge Networks
  • Intel471
  • Atlantic Council

Roundtable 3: Ransomware and Cryptocurrencies

Attendance record

  • US Department of Justice
  • Federal Bureau of Investigation
  • US Secret Service
  • Flashpoint
  • Crowdstrike Services
  • Andreessen Horowitz
  • Accenture
  • SICPA
  • Atlantic Council

Roundtable 4: Ransomware and On-the-Horizon Threats

Attendance record

  • US Department of Justice
  • Federal Bureau of Investigation
  • Flashpoint
  • Crowdstrike Services
  • Blue Ridge Networks
  • McAfee LLC
  • Maximus
  • Forward Edge-AI
  • System 1 Inc.
  • DataPolicyTrust
  • Accenture
  • Atlantic Council

Key findings

1. Facing ransomware realities 

Finding 1.1: Industry is seeing two parallel trends when it comes to ransomware models. On the one hand, industry is seeing an increase in independent, skilled hackers, as opposed to established hacker gangs. This shift is resulting in friction in the cybercriminal world and could be positive for law enforcement agencies. Alternatively, some industry members are reporting that there is a lowered barrier of entry for inexperienced or nontechnical cybercriminals, therefore expanding the ransomware criminal industry. More research needs to be done to determine which one of these trends is truly on the rise.

Despite the two opposing trends, all of industry agrees that ransomware groups are learning from their mistakes and continually improving their tactics, techniques, and procedures (TTPs) while actively managing their brands and reputations.

According to industry members, ransomware business models are shifting. Historically, ransomware-as-a-service (RaaS) was a hierarchical business model in which established ransomware gangs advertised their RaaS programs and recruited independent hackers to their team by conducting interviews and instituting hiring frameworks. In this model, developers held most of the leverage, as independent hackers were usually less skilled and just needed to generate installations via botnets, exploit kits, or stolen credentials. However, in recent years, some industry members have noted that the required skill set for independent hackers has changed as ransomware gangs have shifted their focus from targeting individuals to targeting organizations, which means penetrating and compromising entire networks. The typical independent hacker is now a highly skilled, sought-after cybercriminal with the freedom to demand higher compensation and greater authority within the group. In many cases, these independent hackers have the skills and motivation to form their own groups, consisting of equally skilled partners.9

According to these industry experts, the onset of the pandemic further strengthened the bargaining power of individual hackers, as the cybercriminal underground increasingly sought out individuals with specific skills and talents.10 Advertisements have appeared for people with different language skills, broad technical abilities, marketing expertise, and more. Analysts have also noticed an uptick in freelancers, indicating a change in the original RaaS model. Under this new dynamic, potential affiliates dictate which ransomware groups they will work with.

Some industry experts predict that the power dynamic between ransomware gangs and individual hackers will continue to shift toward the latter.11 These experts believe that increasing friction between independent hackers and ransomware gangs is likely positive for law enforcement, as it indicates infighting within the criminal marketplace.12 Independent hackers feel that ransomware gangs are not compensating them adequately for their work, or they simply disagree with the developers’ tactics. This is exemplified by the recent Conti leak, in which a disgruntled affiliate published Conti’s playbook after alleging underpayment by the group. The move was a major blow: the leaked documentation could help researchers and law enforcement better understand the TTPs used by this group of criminals, although it could also serve other groups as a guide for their own criminal activities.13 Similarly, the source code for Babuk ransomware was leaked on a Russian-language hacking forum by an alleged member of the group, which has a history of internal disagreements.14 Chief among these was the splintering of the group after the attack on Washington’s Metropolitan Police Department (MPD), in which the group’s “Admin” wanted to leak MPD data for publicity while other members were against it. One threat actor from the group commented, “We’re not good guys, but even for us it was too much.”15 After the MPD data leak, the group fractured and reformed as Babuk V2 without the Admin.16 Given these patterns, several industry experts expect such ransomware groups to be short-lived, and they see this as an opportunity for former gang members to work with law enforcement.17

Alternatively, other industry members have flagged a parallel trend in which the traditional RaaS model has lowered the barrier of entry for inexperienced or nontechnical cybercriminals. This lowered technical barrier, combined with high profit margins, is expanding the ransomware criminal industry. According to these experts, the expansion of ransomware attacks is also fueled by the fact that more victims are willing to pay a hacker’s ransom, while increased media attention around these hacks pressures victims to resolve hostile situations quickly. Not only are more victims willing to pay for decryption of their data, but many also do not want to admit they were victims of ransomware in the first place because of the negative press surrounding victimization. There is still significant hesitancy to report incidents, industry members say. This hesitancy impedes law enforcement agencies: they cannot get accurate and timely information about the scale of attackers, victims, and ransoms paid.18

Despite the two opposing trends, all the industry members present expressed that ransomware groups are learning from their mistakes and are innovating. Ransomware groups are more aware of how they are perceived and are realizing that they need a healthy balance of attention for their business model to succeed. They need to be known and have a reputation to entice a ransom payment out of their victims, but if they get too big or launch a significant attack on critical infrastructure, as seen in the Colonial Pipeline and Kaseya attacks, they face the risk of garnering too much attention and ending up on law enforcement’s radar. When this does happen, they often need to recalibrate strategy and possibly reform.19

When reforming, industry representatives believe that these cybercrime groups do not spend much time or money rebuilding their operations. They often use similar TTPs by recycling and leveraging existing malicious code, tools, and techniques, thereby reducing the amount of investment in research and development. Industry members also believe that ransomware actors put more effort into increasing the speed of their attacks, whether that is encrypting networks in record time or rapidly gaining access to a victim and deploying ransomware, as opposed to attacking covertly through reinvention. This preference exists because the opportunity for great profit and wealth significantly outweighs the risk of repercussions for their attacks. According to one industry expert in the first roundtable, “They are more interested in being up and running fast, than completely obscuring who they are and who they were.”20

Observation 1.1: An international public-private partnership should be developed to address, and conduct further research on, the transnational nature of ransomware schemes, particularly as they continue to innovate. Such a partnership should help law enforcement devote more of its energy to tracing and arresting perpetrators within ransomware groups.

In October 2021, the White House National Security Council facilitated a Counter-Ransomware Initiative over two days and six sessions, starting with a plenary.21 As a result of the sessions in this summit, the ministers and representatives of Australia, Brazil, Bulgaria, Canada, the Czech Republic, the Dominican Republic, Estonia, the European Union, France, Germany, India, Ireland, Israel, Italy, Japan, Kenya, Lithuania, Mexico, the Netherlands, New Zealand, Nigeria, Poland, Republic of Korea, Romania, Singapore, South Africa, Sweden, Switzerland, Ukraine, United Arab Emirates, the United Kingdom, and the United States recognized that ransomware is an escalating global security threat with serious economic and security consequences.22 As part of the agenda, four areas of significant importance were identified: 

  1. Disrupt ransomware infrastructure and actors.
  2. Bolster resilience to withstand ransomware attacks.
  3. Address the abuse of virtual currency to launder ransom payments.
  4. Leverage international cooperation to disrupt the ransomware ecosystem and address safe harbors for ransomware criminals.23

Despite these recent efforts, and although government is a primary entity with the power to act against rogue actors, groups, or nations via diplomatic, intelligence, military, economic, and enforcement actions, industry experts at the roundtable noted that governments cannot act alone on ransomware. They emphasized that no law enforcement, government, or private-sector entity can fully tackle the problem of ransomware by itself, and that the current public-private sector partnerships are limited by the geographic, political, and legal boundaries of the countries in which they reside. To address the transnational nature of ransomware schemes, industry experts say, there should be public-private partnerships both domestically and internationally, particularly with countries like Russia that serve as safe havens for many of these cybercriminals.

Interestingly, although the CRI did not involve Russia, the White House has commented that “the U.S.-Kremlin Experts Group, which is led by the White House, was established by President Biden and President Putin.” This means that the United States engages directly with Russia on ransomware. The White House further added that they “look to the Russian government to address ransomware criminal activity coming from actors within Russia.”24

Such a transnational public-private partnership could potentially commence by including the countries that participated in the CRI. It should look at key questions such as:

  1. How should this partnership address the transnational nature of ransomware schemes and what actions should be taken internationally?
  2. Which nation should lead the organization of this partnership?
  3. Why are the existing international cooperative mechanisms to address ransomware insufficient and what can be learned from prior efforts?

Invitations to participate in such an initiative can also be extended to other countries that have demonstrated a sufficient level of action or intent to act against ransomware attacks and the individuals perpetrating them. A key focus of this partnership should be on global law enforcement agencies coming together to focus on tracing and arresting key perpetrators within ransomware groups.

In the aftermath of ransomware incidents, industry feels that law enforcement does not focus enough on tracing and arresting individual members of ransomware groups. Some of the attention that law enforcement puts toward identifying the latest TTPs or the victims of the crime, industry members argue, could instead be redirected to catching the criminals responsible for the attack.25

However, even if more resources such as this proposed partnership went into targeting the individuals who develop, create, and commit crimes with ransomware variants, there would still be challenges in finding and arresting them. One industry expert pointed to an individual who has been underground for a long time and has been an affiliate of six different ransomware variants. Although law enforcement officials know who he is and where he is, he can operate with impunity in Russia as long as he does not target organizations located in the Commonwealth of Independent States. Further, because investigations are naturally reactive, an initial investigation should begin by focusing on the TTPs of the attack and on the indicators of compromise provided by the victims. After an initial assessment of the variant and victim, however, law enforcement should make greater efforts to focus their investigations on the individual perpetrators of cybercrimes.26

Finding 1.2: Ransomware attacks are opportunistic. They target organizations with vulnerable online systems during periods when the pressure to stay up and running is highest. That pressure often leads victims to pay a ransom, creating a flawed system built on trusting cybercriminals to return data.

Ransomware attacks are essentially attacks of opportunity. According to industry experts, there is a thriving ransomware marketplace and no shortage of individuals or groups called initial access brokers who can sell access into compromised organizations. In fact, industry experts believe that poorly secured remote desktop protocol (RDP) endpoints are one of the most common vectors used to get inside an organization and can be acquired relatively cheaply. Therefore, at the end of the day, any sector or organization with online credentials or online systems is vulnerable.27 Key targeted industries are those that have pressure to be up and running. Specifically, criminals are looking for organizations that have poor security and that quantify downtime in high dollar amounts, creating pressure to pay the ransom as quickly as possible. Heavily targeted sectors that meet these criteria include healthcare, manufacturing, school districts, local governments, technology, media, and telecom services.28

Oftentimes, victim organizations in heavily targeted sectors will pay a ransom to get their data back. However, paying for data and getting a decryptor does not ensure the return of data, according to industry experts. Ransomware criminals might issue a decryptor that simply does not work or takes too long to work. Alternatively, these criminals might simply not respond and disappear with the money. Paying a ransom involves trusting criminals to keep their end of the bargain and this method is flawed.

Observation 1.2: It is important to implement stronger defense mechanisms and use updated and secure software to make entering a network more difficult, particularly for heavily targeted sectors. 

Most industry members pointed to one key recommendation to help organizations better prepare and protect themselves from ransomware attacks. Their primary suggestion was tightening basic defense mechanisms, making it more difficult for an adversary to enter networks. Experts from security software and cybersecurity companies that do real-time tracking of potential cyber threats have found that initial entry vectors such as weak passwords or poorly protected systems are common in most of the incidents that they deal with.29

In many cases, the largest ransomware attacks have been against companies that work in regulated industries and do not follow the established standards set out by the National Institute of Standards and Technology, the federal government, insurance companies, etc. Such standards include patch management and keeping restorable data backups. The WannaCry attack is a perfect example of this. In this attack, ransomware spread through the server message block (SMB) protocol, which is used by Windows machines to communicate with file systems over networks. The ransomware worked by targeting machines that had not received the necessary security patch (MS17-010 Security Bulletin) from Microsoft. Once deployed, the ransomware spread to all the other devices on the same network that lacked the patch, taking control of their files as well. This attack worked so well that within five days the virus spread to more than 150 countries.30
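The patch-dependent propagation described above can be illustrated with a toy simulation. This is a hypothetical sketch, not WannaCry’s actual code: the hosts, network map, and function name are invented for illustration.

```python
from collections import deque

def simulate_spread(network, patched, start):
    """Breadth-first spread of a WannaCry-style worm.

    network: dict mapping each host to the hosts it can reach over SMB.
    patched: set of hosts that applied the MS17-010 patch (immune).
    start:   the initially infected host.
    Returns the set of hosts that end up infected.
    """
    infected = set()
    if start in patched:
        return infected
    infected.add(start)
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for neighbor in network.get(host, []):
            # Only unpatched, not-yet-infected hosts are compromised.
            if neighbor not in infected and neighbor not in patched:
                infected.add(neighbor)
                queue.append(neighbor)
    return infected

# Five hosts on one flat network; only "hr-2" has applied the patch.
hosts = ["dc-1", "hr-1", "hr-2", "eng-1", "eng-2"]
network = {h: [o for o in hosts if o != h] for h in hosts}
print(simulate_spread(network, patched={"hr-2"}, start="eng-1"))
```

The point of the sketch is that a single unpatched foothold is enough: every reachable unpatched host is compromised, while the one patched machine is untouched.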

As illustrated by the WannaCry attack, industry experts emphasized that core infrastructure needs to be kept up to date, and incentives or punitive measures need to be put into place to ensure that standards are kept current. Some industry experts took this argument a step further and criticized some software developers. Although there is no such thing as 100 percent secure software, they pointed out that there are a lot of vendors that are very good at responding to vulnerabilities. However, some vendors dismiss vulnerabilities by claiming that the product has reached its end of life and an upgrade to a newer model is needed, even if the former model has only been on the market for a short period of time. In these cases, some industry experts believe that software updates and support should be provided for a defined period of time; only once that period has passed is it fair to ask the customer to invest in a new product. Another needed adjustment is the timeline between a patch being released and being applied in industry. At present, that timeline is 180 days, which industry experts argue is far too long.31

2. Information sharing and mandated reporting 

Finding 2.1: Information sharing between the government and the private sector, while integral to tackling ransomware, is inconsistent. Federal law enforcement has made it clear that the legal counsels of private companies have repeatedly raised concerns about constraints that limit the sharing of information that could aid in the detection and reporting of illicit activities.

Information sharing and communication between the public and the private sector is key to catching and deterring cybercriminals. Information sharing allows cybersecurity experts in both the public and private sectors to learn about new vulnerabilities in software and about new attack vectors. It also can help to strengthen collective resiliency in and between those sectors. Finally, information sharing allows for the scope of cybercrimes to be defined more accurately and can influence the processes used to anticipate or respond to threats.32

Although information sharing is important, in the event of a breach, there is only so much that the government can share with anyone who is not the victim. Sometimes government officials are unable to share specially protected information, such as criminal locations or identifying factors such as names, with victims. At the same time, to increase such sharing, the private sector needs a framework to safely share information without waiving corporate and legal protections such as attorney-client privilege.33 When a company gets hit by a ransomware attack, the first step often is to engage a lawyer.34

At the time of the roundtables, industry experts explained that in many cases, information sharing with law enforcement was avoided to protect the company’s brand reputation and investor confidence and to avoid the stigma associated with being the victim of a cyberattack. Alternatively, a company might not see any benefit in reporting a crime or sharing any information with law enforcement. Ultimately, it was a business decision as to whether a company should immediately report the incident to law enforcement or handle the matter internally, especially since there were no contractual, regulatory, or statutory requirements. A privately held company could decide not to report a ransomware attack and pay the extortionists. A publicly traded company could also decide not to report the cyber incident to law enforcement and wait until its filing with the Securities and Exchange Commission (SEC). Collaborative investigations between public and private partners take time and resources, which sometimes prompts companies to decide to simply tackle the problem themselves.35

Until recently, there was no legal framework or protections (such as a shield law) for a company to safely share information with law enforcement or other government agencies. The primary method of mandated reporting to the US government for a breach or cybersecurity incident has been through a contractual agreement to follow the Federal Acquisition Regulations (FAR), the Defense Federal Acquisition Regulations (DFAR), or the requirements of the SEC for publicly traded companies.

However, this changed to an extent in the first quarter of 2022 with the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA), which became law in March. CIRCIA requires “critical infrastructure organizations to report cyberattacks to the Cybersecurity and Infrastructure Security Agency (CISA) within seventy-two hours. The law also creates an obligation to report ransomware payments within twenty-four hours.”36 It addresses in part the following observation of participants of the roundtables, which were conducted by the GeoTech Center prior to CIRCIA becoming law; yet there is still room for confusion across government and critical infrastructure entities as to the logistics of reporting incident information.

Observation 2.1: Better uniform reporting and sharing of information is needed. In particular, standardized timelines, questions, and formats are needed for incident reporting. In addition, it is unclear how reported incident information will be shared between departments, government agencies, and the private sector.

The contours of a US government reporting framework are beginning to form, but much work remains in addressing concerns voiced in roundtable discussion about the specifics of threat reporting and the handling of victim information. While CIRCIA designates CISA as the focal point for all private infrastructure owners and operators to report significant cyber incidents, and requires covered entities to report a covered cyber incident to CISA within seventy-two hours after the entity reasonably believes a covered cyber incident has occurred, this law does not define what constitutes “covered entities,” “covered cyber incident,” or “reasonably believes.” Instead, it requires CISA to fill in these blanks through the rulemaking process.37

Additionally, this law does not cover private companies who do not operate in the critical infrastructure sectors, and it is unclear how CISA will report information to law enforcement for action. Moreover, CISA has up to two years to issue proposed rules, and up to eighteen months thereafter to issue final rules.38 In this time, CISA should address the inconsistencies of this act in order for it to be truly effective and cover all the necessary parties.

CISA also should consider frustration points that existed prior to the passage of CIRCIA. Industry experts expressed frustration primarily over having to share the same data multiple times with the federal government, often with different units in the same department. They were also frustrated because information sharing did not seem to be a two-way street. Law enforcement, in turn, explained that not all information gets shared between the different parts of the government, and sometimes it should not be.

Additionally, victims and relevant incident responders were not always sure what should be shared and why it should be shared. This is because government agencies do not provide a uniform list of questions and sometimes different parts of government require drastically different sets of information. Therefore, CISA should consider having a defined set of questions and timelines that should be shared irrespective of the case or company that is attacked. Having a set of detailed, clear questions and timelines would make it much easier to coordinate the responses from the victim.
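A defined question set lends itself naturally to a machine-readable form. The sketch below is purely illustrative; the field names are hypothetical and are not drawn from any CISA rule or existing reporting form.

```python
from dataclasses import dataclass, field, asdict
from typing import List
import json

@dataclass
class IncidentReport:
    """Illustrative fields a standardized form might require of every victim."""
    organization: str
    sector: str
    incident_start_utc: str        # ISO 8601 timestamp
    detected_utc: str
    ransomware_variant: str        # "unknown" is an acceptable answer
    initial_access_vector: str     # e.g., "RDP", "phishing", "unknown"
    ransom_demanded_usd: float
    ransom_paid: bool
    indicators_of_compromise: List[str] = field(default_factory=list)

# A hypothetical filing, serialized to JSON for submission.
report = IncidentReport(
    organization="Example Hospital",
    sector="healthcare",
    incident_start_utc="2022-01-10T03:12:00Z",
    detected_utc="2022-01-10T06:40:00Z",
    ransomware_variant="unknown",
    initial_access_vector="RDP",
    ransom_demanded_usd=250_000.0,
    ransom_paid=False,
    indicators_of_compromise=["185.0.2.10", "bad.example.test"],
)
print(json.dumps(asdict(report), indent=2))
```

Because every agency would receive the same fields in the same format, a victim could file once and route the report anywhere, rather than answering drastically different question sets from each part of government.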

Finding 2.2: The stigma and consequences of being the victim of a cyberattack presents a challenge to information sharing. Oftentimes, victims are reluctant to report incidents to government for fear of negative consequences such as double victimization.

Prior to the passage of CIRCIA, publicly traded companies and critical infrastructure companies had a fiduciary responsibility to their shareholders to report information that may positively or negatively impact the value of the company and its stock. Since CIRCIA’s passage, critical infrastructure companies are now required to also report cybersecurity incidents to the US government. However, noncritical infrastructure companies that are publicly traded still are only bound by fiduciary duty, government regulation, or state law.39

Reporting a ransomware attack or the decision to pay a ransom can have regulatory effects and impact stock value and public trust. Privately held companies that are not classified as critical infrastructure organizations, unless required by contract, regulation, or law, will assess the impact on their bottom line in deciding whether to report a ransomware attack and/or pay the extortion that is demanded. For these companies, the question of whether to pay a ransom is ultimately a business decision. It is the calculus of the impact on business operation, time to resume operations, the amount of the ransom, impact on brand reputation, and risk. The business decision may be as simple as this: if a company does not pay, it will go out of business. In fact, according to leading industry members, some companies would be out of business today if they had not paid a hacker’s ransom.

Industry also voiced a concern that paying a ransom can cause companies to be unfairly targeted by the US Department of the Treasury’s Office of Foreign Assets Control (OFAC). This office “administers and enforces economic and trade sanctions based on US foreign policy and national security goals against targeted foreign countries and regimes, terrorists, international narcotics traffickers, those engaged in activities related to the proliferation of weapons of mass destruction, and other threats to the national security, foreign policy, or economy of the United States.”40

As a result, OFAC maintains lists that often include cybercriminal groups or individuals involved in acts of cybercrime such as ransomware. However, because the true identities of ransomware gangs or individual extortionists are often unknown and are changed intentionally to hide from law enforcement, it is difficult for a company to know if the gang or individual is specifically prohibited or embargoed by an OFAC list.41 Therefore, these lists occasionally put victims in a difficult position: in many cases they have to pay the ransom to remain economically viable, and thus cannot share information with the government because they might face ramifications for paying a criminal group or individual on the OFAC lists.42

Despite all of the aforementioned difficulties related to information sharing, it is important to note that law enforcement and the government can be of great assistance to a company whose systems have been encrypted by ransomware. This is because law enforcement and the government may be in possession of the keys to decrypt the encryption pursuant to previous investigations, which can allow a victim company to speedily resume operations without having to pay a ransom.43

Observation 2.2: Safe harbor and shield laws are needed for ransomware reporting; mandated reporting, in particular, requires a safe harbor framework. 

When asked about the concept of mandated reporting (prior to the passage of the Cyber Incident Reporting for Critical Infrastructure Act of 2022), some industry experts already felt that mandated reporting around payments is a good option. However, since getting companies to share proprietary information regarding a cyberattack is challenging (particularly if that information is unfavorable to their reputation or causes financial risk), they stress the need for a safe harbor to report information to the federal government without fear of repercussions from regulators, investors, the public, etc. Industry experts also think that there needs to be a fundamental change in how ransomware incident reporting and information sharing are approached.

Specifically, they seek a new safe harbor framework that allows victims to recover their information and get back online as quickly as possible without blocking the government’s ability to pursue potential investigatory actions.44 Such a framework should:

  • Be specific about the types of companies, victims, and crimes that it will cover.
  • Include safety net assurances for victim organizations where law enforcement agencies can show how to safely share information, how the information is going to be protected, and how it is going to be used.
  • Determine what kind of limits on disclosure or federal action the framework is intended to forestall.
  • Take into consideration the existing liability protections in the Cybersecurity Information Sharing Act of 2015 and the Cyber Incident Reporting for Critical Infrastructure Act of 2022 to determine where they are insufficient, and build on them.

To increase trust further, an industry expert recommended that law enforcement agencies themselves need to put “skin in the game” through this framework and show how they will be held accountable if the information provided is misused in some way.

The speed at which the information can be shared using this new framework is also a critical factor because so far, the right mechanism to share information at a higher speed does not exist. One industry expert pointed to the example of the WannaCry attack, which occurred within twenty-four hours. If a company waits twenty-four hours before sharing information with law enforcement, then by the time said information is processed and validated it is already far too late. It is integral to find a way to get this information to the relevant parties as quickly as possible because cybercriminals are continually increasing the speed of their operations and are encrypting or stealing data within hours of the initial infection.45

On the law enforcement side, the support for mandated reporting and the need for more information, in general, was extremely clear. Experts from the Department of Justice (DOJ) articulated that required sharing already existed in many contexts, often by regulation in certain sectors or state laws even prior to the passage of CIRCIA. With additional information, CISA might be able to better manage risk as the agency shares cybersecurity information across the private sector so that potential victims are aware of new cyberattacks and vulnerabilities. Law enforcement officials also agreed that they needed information through reporting because if victimization is not reported, there is no way for them to determine the true extent of cybercrime or assist in catching cybercriminals. In particular, law enforcement requires technical indicators of compromise (IoCs) as quickly as possible, because rapid sharing of IoCs can help other organizations preemptively defend themselves while also allowing government to take action.46
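The defensive value of rapid IoC sharing can be sketched with a toy matcher. The indicators, log entries, and function below are invented for illustration and do not reflect any real threat-feed format.

```python
def match_iocs(shared_iocs, local_events):
    """Return local events that touch any shared indicator.

    shared_iocs:  set of indicators (hashes, IPs, domains) received from peers.
    local_events: iterable of (timestamp, indicator) pairs from local logs.
    """
    return [(ts, ind) for ts, ind in local_events if ind in shared_iocs]

# Indicators shared hours after another victim's breach...
shared = {"198.51.100.7", "9f86d081884c7d65"}

# ...checked against this organization's own connection log.
log = [
    ("2022-03-01T10:02Z", "203.0.113.5"),
    ("2022-03-01T10:07Z", "198.51.100.7"),   # matches a shared IoC
    ("2022-03-01T10:09Z", "192.0.2.44"),
]
hits = match_iocs(shared, log)
print(hits)  # [('2022-03-01T10:07Z', '198.51.100.7')]
```

The value of the check decays with time: if the indicators arrive weeks after the breach, the matching connection has long since delivered its payload, which is why speed of sharing matters as much as the sharing itself.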

Finding 2.3: Ransomware attacks are happening at an increasing speed. Information sharing between industry and law enforcement is not keeping pace. 

From a ransomware incident-response perspective, there has been an increase in the speed and effectiveness of bad actors over the past three years. Attacks that used to take days, weeks, and sometimes months to execute can now take under an hour and the information sharing between law enforcement and victim companies is not effective enough to keep up with the increasing speed of these attacks.47

Observation 2.3: Establish and strengthen public and private partnerships through joint tabletop exercises and relationship building with law enforcement and the government.

Public and private partnerships need to be strengthened to keep up with the speed of criminals. Companies that have gone through tabletop exercises have worked out their responses ahead of an attack, and have already developed relationships with their local law enforcement offices; as a result, they are typically much more prepared when it comes to information sharing because they have already developed a certain level of trust with law enforcement.48 A number of such efforts have been initiated and could be further resourced, such as the US Secret Service Cyber Incident Response Simulations series.49 The federal government also trains state and local law enforcement through, for instance, the National Cyber Forensics Institute’s annual Cyber Games.50

When a company has not prepared for such an incident ahead of time, information sharing can be quite messy. In those cases, victim firms often wait to report useful information until weeks to months later, and by that time it is often too late to effectively disrupt cybercriminals.51

3. Ransomware and cryptocurrencies 

Finding 3.1: The establishment of a national law enforcement team to focus specifically on cryptocurrency, which is increasingly used for cybercrime payments, is a step in the right direction.

On October 6, 2021, Deputy Attorney General Lisa O. Monaco announced the formation of a National Cryptocurrency Enforcement Team (NCET). The creation of this new team combines the capabilities of DOJ’s Criminal Division Money Laundering and Asset Recovery Section (MLARS), the Computer Crime and Intellectual Property Section (CCIPS), and other Criminal Division sections. A significant focus for NCET will be understanding how to better stop the usage of cryptocurrency for criminal purposes. Other primary focuses include crimes committed on virtual currency exchanges; mixing and tumbling services, which attempt to disguise the origin of illicit funds with seemingly legitimate sources of funds; and other crimes committed by money launderers.

NCET was created so that the Department of Justice would be able to tackle the criminal misuse of cryptocurrencies and digital assets. It is made up of attorneys from across departments including prosecutors with professional backgrounds in cryptocurrency, cybercrime, money laundering, and forfeiture. The purpose of NCET is to identify, investigate, support, and pursue cases that involve the criminal use of digital assets with an emphasis on virtual currency exchanges, mixing and tumbling services, infrastructure providers, and other entities that are aiding the misuse of cryptocurrency and related technologies to commit or facilitate crimes. NCET will also set strategic priorities on digital asset technologies, classify areas that need higher investigative and prosecutorial focus, and lead the initiatives to coordinate with domestic and international law enforcement partners, regulatory agencies, and private industry to overcome the criminal usage of digital assets. Finally, NCET will improve the DOJ Criminal Division’s current efforts to deliver support and training to federal, state, local, and international law enforcement for the purpose of building capacity to investigate and prosecute cryptocurrency and digital asset crimes in the United States and globally.52

Observation 3.1: US government agencies and the Federal Reserve should work with the National Cryptocurrency Enforcement Team to properly evaluate the strategic implementation of a US central bank digital currency (CBDC).

In March 2022, President Biden signed an executive order, Ensuring Responsible Development of Digital Assets, directing the US government to “assess the technological infrastructure and capacity needs for a potential US CBDC in a manner that protects Americans’ interests.” It also calls on the Federal Reserve to continue to research, develop, and assess efforts for a potential US CBDC.53

Increasingly, victim payments resulting from ransomware attacks are being facilitated using cryptocurrencies. Therefore, evaluating the implementation of a US CBDC should be a top priority for the federal government and law enforcement, in consultation with industry experts.54

Finding 3.2: Law enforcement discourages paying a ransom and encourages prompt reporting regardless of a decision to pay.

While many companies that are hit with ransomware attacks end up paying their attackers using cryptocurrencies, law enforcement strongly discourages paying a ransom for several reasons. There is no way to track what the ransom money is being used for. In many cases, ransomware groups operate like organized crime. The revenues are so substantial that even if a significant part of a group’s operations is disrupted, it is still making millions of dollars. Those funds can be used to invest in infrastructure, to pay people off, and to buy assets.55

Observation 3.2: “To pay or not to pay” is ultimately a business decision. This decision should be made with proposed safe harbor protections in coordination with law enforcement.

If victim organizations stop paying ransom demands, cybercriminals have substantially less incentive to keep launching attacks. Furthermore, paying a ransom can make a company even more of a target for future attacks. According to law enforcement, it might be more effective to rebuild and secure networks and systems than to pay a ransom.

However, this view is overly simplistic once the amount of the ransom demanded is balanced against the cost of rebuilding an entire network. Ultimately, the decision to pay or not pay a ransom resulting from a cyberattack is a business decision, and companies should not be penalized for doing what is best for the company financially.56

Finding 3.3: Federal law enforcement should work to detail appropriate processes for how e-currency or cryptocurrency service providers work with law enforcement to monitor for criminal activities beyond just ransomware, including use of cryptocurrencies for illicit activities such as human trafficking and other transnational criminal offenses.

The first rule of any criminal investigation is to follow the money. This is also the case when it comes to cybercrime involving digital currency. It is important to understand how bad actors are using cryptocurrency as a method of payment for all kinds of criminal activities, and how to disrupt or block this system.

According to industry experts, one way to do this is to use blockchain analysis to ensure that cryptocurrency cannot be converted to fiat currency. However, to truly make this method of interruption effective, there should be a better partnership between the government and digital asset service providers. In fact, there is a better chance of tracking bitcoin than cash, because right now most ransomware payments are quite traceable and most are still being made through bitcoin, the largest cryptocurrency by market capitalization.57
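Following the money on a public ledger amounts to walking a transaction graph outward from a known ransom address. The sketch below uses an invented in-memory ledger rather than a real blockchain API; the addresses and graph structure are hypothetical.

```python
from collections import deque

def trace_funds(ledger, start_address):
    """Follow outgoing transfers from a known ransom address.

    ledger: dict mapping a sender address to a list of recipient addresses
            (a simplified stand-in for parsed blockchain transactions).
    Returns every address reachable from the starting address.
    """
    seen = {start_address}
    queue = deque([start_address])
    while queue:
        addr = queue.popleft()
        for recipient in ledger.get(addr, []):
            if recipient not in seen:
                seen.add(recipient)
                queue.append(recipient)
    return seen

# Ransom address "bc1-ransom" hops through two mixer wallets
# before landing at an exchange, the fiat off-ramp.
ledger = {
    "bc1-ransom": ["bc1-mixer-a", "bc1-mixer-b"],
    "bc1-mixer-a": ["bc1-exchange"],
    "bc1-mixer-b": ["bc1-exchange"],
}
print(trace_funds(ledger, "bc1-ransom"))
```

The exchange at the end of the chain is where the public-private partnership matters: investigators can trace funds to an off-ramp, but only a cooperating service provider can link the deposit address to a real-world identity and block the conversion to fiat.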

Glossary

Babuk ransomware: A ransomware threat discovered in 2021 that currently targets the transportation, healthcare, plastics, electronics, and agriculture sectors. Like other ransomware variants, it is deployed in the networks of enterprises that criminals target and compromise.

Blockchain: A distributed digital ledger that stores data as a chain of individual blocks, each cryptographically linked to the block before it. It underpins nearly all cryptocurrencies and is unique in that it is decentralized.
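The hash-linking this definition describes can be sketched in a few lines (an illustrative toy, not any production blockchain; the block fields and helper names are invented for the example):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    # Each new block stores the hash of its predecessor, so altering
    # any earlier block invalidates every later link.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    # Recompute each link; a single tampered block breaks the chain.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, "alice pays bob 1 coin")
append_block(chain, "bob pays carol 1 coin")
assert verify(chain)

chain[0]["data"] = "alice pays mallory 99 coins"  # tampering
assert not verify(chain)
```

Because every participant can recompute these links, tampering is detectable without a central authority, which is what makes the ledger both decentralized and traceable.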

Colonial Pipeline: In 2021, Colonial Pipeline (the largest fuel pipeline in the United States) was the target of a cyberattack by the DarkSide group. Attackers infiltrated Colonial’s network through a virtual private network and held 100 gigabytes of data hostage, demanding a $4.4 million ransom. Within an hour, the entire pipeline was shut down for the first time in its fifty-seven-year history while the threat was assessed. Colonial ultimately paid the ransom to DarkSide—part of which was later recovered by law enforcement.

DarkSide: A Russia-linked cybercrime group first seen in August 2020 that inflicted ransomware attacks in more than fifteen countries and targeted multiple industry sectors, including financial services, legal services, manufacturing, professional services, retail, and technology.

Decryptor: A tool that transforms data that has been rendered unreadable through encryption back to its unencrypted form.

Indicator of compromise (IoC): Evidence on a computer or network that points to a security breach. IoC data is gathered after a suspicious incident is discovered.

Kaseya attack: Russian ransomware organization REvil carried out a ransomware attack on information technology management software company Kaseya in July 2021. The managed service provider attack paralyzed as many as 1,500 organizations.

OFAC lists: The US Office of Foreign Assets Control (OFAC) publishes lists of individuals and companies owned or controlled by, or acting for or on behalf of, targeted countries. It also lists individuals, groups, and entities, such as terrorists and narcotics traffickers, designated under programs that are not country specific.

WannaCry attack: A 2017 global attack by ransomware containing a worm component, i.e., a self-replicating program able to copy and spread itself without the help of any other program. Such attacks can slow down network traffic, delete files on a system, or send infected documents by email.

CBDCs: Central bank digital currencies are virtual currencies backed and issued by a central bank.

About the authors

Trent R. Teyema is a former FBI special agent and Senior Executive Service retiree who is an independent consultant advising governments and companies on cybersecurity, infrastructure protection, national security, and technology. His firm specifically focuses on blockchain, cyber physical systems, and space security. Most recently, he was the chief information security officer for Novavax Inc., a COVID-19 biotechnology company; the global head of Cyber Threat Management; a business information security officer for the insurance giant AIG; and the senior vice president and chief technology officer in charge of research and development (R&D) and chair of the Intellectual Property Council for Parsons Corp.

Throughout his thirty-three-year investigative career, he served in numerous senior leadership positions including director of cybersecurity policy for the White House’s National Security Council under President Barack Obama, and was detailed under President George W. Bush. He was the special agent in charge of the Cyber and Counterintelligence Divisions for the FBI field office in Los Angeles, and served as the FBI Cyber Division’s chief operating officer. In addition, Mr. Teyema founded and led the National Cyber Investigative Joint Task Force, which is one of the US government’s seven national cybersecurity centers.

Mr. Teyema is a doctoral candidate in cybersecurity at Marymount University and holds a Master of Forensic Science from The George Washington University. He holds numerous professional certifications in cybersecurity, forensics, and risk.

Kiran S. Jivnani is a program assistant at the Atlantic Council’s GeoTech Center. She manages projects at the intersection of geopolitics, security, climate, health, and agriculture. Prior to joining the Atlantic Council, she worked for the United Nations Academic Impact and Millennium Campus Network Millennium Fellowship, where she managed student leaders globally through mentorship on United Nations Sustainable Development Goal-based projects. She later worked for a former member of European Parliament and the Social Democrat Party vice president, Dr. Miriam Dalli. In this role she worked on legislative dossiers of the European Parliament’s Environment, Public Health, and Food Safety Committee; Industry, Research, and Energy Committee; and the Beating Cancer Committee. She holds a bachelor’s degree from Northeastern University in Boston, where she studied criminal justice, international affairs, and law and public policy.

David A. Bray, PhD, is a distinguished fellow with the Atlantic Council. He is the founding principal at LeadDoAdapt Ventures and has served in a variety of leadership roles in turbulent environments, including bioterrorism preparedness and response with the Centers for Disease Control and Prevention and the broader US government from 2000 to 2005; executive director for a bipartisan US intelligence community commission on R&D; nonpartisan leadership as a federal agency senior executive; work with the US Navy and Marines on improving organizational adaptability; and efforts with US Special Operations Command on the challenges of countering disinformation online. He has received the Joint Civilian Service Commendation Award, Roger W. Jones Award for exceptional federal executive leadership, and the National Intelligence Exceptional Achievement Medal. 

He also provides strategy to both boards and start-ups espousing human-centric principles to technology-enabled decision making in complex environments. He was named a senior fellow with the Institute for Human-Machine Cognition, starting in 2018. Business Insider named him one of the top “24 Americans Who Are Changing the World” under 40, and he was named a Young Global Leader by the World Economic Forum. He has served in roles such as president, chief strategy officer, and strategic adviser for twelve different start-ups. He has been an invited keynote speaker before audiences of CEOs and world leaders and at events with more than three thousand participants in India, Vietnam, Australia, Taiwan, Dubai, South Africa, Brazil, Colombia, Mexico, Canada, Belgium, Sweden, Switzerland, and the United Kingdom.

Acknowledgments

The GeoTech Center would like to extend its thanks to the panelists, reviewers, and teammates whose expert insights and support proved invaluable to this project.

Editorial acknowledgments

  • Stephanie Wander, Director of Programs, GeoTech Center
  • Trey Herr, Director, Cyber Statecraft Initiative, Digital Forensic Research Lab
  • Safa Shahwan Edwards, Deputy Director, Cyber Statecraft Initiative, Digital Forensic Research Lab
  • Olivia Rowley, Assistant Director, Cyber Statecraft Initiative, Digital Forensic Research Lab
  • Emily Sespico, Project Assistant, GeoTech Center

Recognition of industry and government panelists

  • US Department of Justice
  • Federal Bureau of Investigation
  • US Secret Service 
  • McAfee LLC
  • CrowdStrike Services
  • Flashpoint
  • Accenture 
  • Intel471
  • Blue Ridge Networks
  • Andreessen Horowitz
  • SICPA 
  • Maximus
  • Forward Edge-AI
  • System 1, Inc.
  • DataPolicyTrust
  • Mandiant 

1    Chuck Brooks, “Ransomware on a Rampage; a New Wake-Up Call,” Forbes, August 21, 2021, https://www.forbes.com/sites/chuckbrooks/2021/08/21/ransomware-on-a-rampage-a-new-wake-up-call/?sh=524d9d822e81.
2    “FACT SHEET: Ongoing Public U.S. Efforts to Counter Ransomware,” White House Briefing Room (website), October 13, 2021, https://www.whitehouse.gov/briefing-room/statements-releases/2021/10/13/fact-sheet-ongoing-public-u-s-efforts-to-counter-ransomware/.
3    Commodification of Cyber Capabilities: A Grand Cyber Arms Bazaar, 2019 Public-Private Analytic Exchange Program, Department of Homeland Security, https://www.dhs.gov/sites/default/files/publications/ia/ia_geopolitical-impact-cyber-threats-nation-state-actors.pdf.
4    William Turton and Kartikay Mehrotra, “Hackers Breached Colonial Pipeline Using Compromised Password,” Bloomberg, June 4, 2021, https://www.bloomberg.com/news/articles/2021-06-04/hackers-breached-colonial-pipeline-using-compromised-password.
5    “Department of Justice Seizes $2.3 Million in Cryptocurrency Paid to the Ransomware Extortionists Darkside,” Justice Department Office of Public Affairs, June 7, 2021, https://www.justice.gov/opa/pr/department-justice-seizes-23-million-cryptocurrency-paid-ransomware-extortionists-darkside.
6    Tom Balmforth and Maria Tsvetkova, “Russia Takes Down REvil Hacking Group at U.S. Request: FSB,” Reuters, January 14, 2022, https://www.reuters.com/technology/russia-arrests-dismantles-revil-hacking-group-us-request-report-2022-01-14/.
7    Steve Morgan, “Top 6 Cybersecurity Predictions and Statistics for 2021 to 2025,” Cybercrime Magazine, December 30, 2021, https://cybersecurityventures.com/top-5-cybersecurity-facts-figures-predictions-and-statistics-for-2021-to-2025/#:~:text=The%20frequency%20of%20ransomware%20attacks,exceed%20200%20zettabytes%20by%202025.
8    Following the firm’s participation in roundtables, McAfee merged with FireEye and is known as Trellix.
9    Max Kersten, John Fokker, and Thibault Seret, “How Groove Gang Is Shaking Up the Ransomware-as-a-Service Market to Empower Affiliates,” Trellix (website), blog co-authored with Intel 471 and McAfee Enterprise Advanced Threat Research, September 08, 2021, https://www.mcafee.com/blogs/enterprise/mcafee-enterprise-atr/how-groove-gang-is-shaking-up-the-ransomware-as-a-service-market-to-empower-affiliates/.
10     Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence, September 2021.
11    Kersten, Fokker, and Seret, “How Groove Gang.”
12    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
13    Ionut Ilascu, “Translated Conti Ransomware Playbook Gives Insight into Attacks,” Bleeping Computer, September 2, 2021, https://www.bleepingcomputer.com/news/security/translated-conti-ransomware-playbook-gives-insight-into-attacks/.
14    Lawrence Abrams, “Babuk Ransomware’s Full Source Code Leaked on Hacker Forum,” Bleeping Computer, September 3, 2021, https://www.bleepingcomputer.com/news/security/babuk-ransomwares-full-source-code-leaked-on-hacker-forum/?&web_view=true.
15    “The Source Code for the Babuk Ransomware Leaked on a Hacker Forum,” Cyber Intel Magazine, September 7, 2021, https://cyberintelmag.com/malware-viruses/the-source-code-for-the-babuk-ransomware-leaked-on-a-hacker-forum/.
16    “The Source Code for the Babuk Ransomware Leaked.”
17    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
18    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
19    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
20    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
21    White House, “Background Press Call on the Virtual Counter-Ransomware Initiative Meeting,” Via Teleconference, October 12, 2021, https://www.whitehouse.gov/briefing-room/press-briefings/2021/10/13/background-press-call-on-the-virtual-counter-ransomware-initiative-meeting/.
22    Joint Statement of the Ministers and Representatives from the Counter Ransomware Initiative Meeting October 2021, https://s3.documentcloud.org/documents/21085090/joint-statement-international-counter-ransomware-initiative.pdf; also available at White House, Briefing Room, Statements and Releases, October 14, 2021,  https://www.whitehouse.gov/briefing-room/statements-releases/2021/10/14/joint-statement-of-the-ministers-and-representatives-from-the-counter-ransomware-initiative-meeting-october-2021/.
23    “FACT SHEET: Ongoing Public U.S. Efforts to Counter Ransomware.”
24    White House, “Background Press Call on the Virtual Counter-Ransomware Initiative Meeting.”
25    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
26    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
27    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
28    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
29    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
30    Samantha Donaldson, “Wannacry Ransomware: Who It Affected and Why It Matters,” Red Hat Developer (blog), https://developers.redhat.com/blog/2017/05/19/wannacry-ransomware-who-it-affected-and-why-it-matters#how_was_this_ransomware_stopped.
31    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
32    “Why Is Information Sharing Important in Cybersecurity?,” nstec.com (website), accessed April 29, 2022, https://www.nstec.com/network-security/cybersecurity/why-is-information-sharing-important-in-cybersecurity/#qa_3.
33    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response, September 2021.
34    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Threat Intelligence.
35    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
36    The Cyber Incident Reporting for Critical Infrastructure Act of 2022 passed Congress as part of an omnibus spending bill in mid-March 2022. For more information about the law, see Scott Carlson and Danny Riley, “President Biden Signs Bill Mandating Cyber Reporting for Critical Infrastructure Entities,” Seyfarth Shaw LLP (article), JDSupra (website), April 14, 2022, https://www.jdsupra.com/legalnews/president-biden-signs-bill-mandating-1882190/. For the text of the act, see Consolidated Appropriations Act, P.L. No: 117-103 § Division Y (2022), https://www.congress.gov/bill/117th-congress/house-bill/2471/text.
37    Shardul Desai et al., “Cyber Incident Reporting Requirement for Critical Infrastructure Sectors Signed into Law,” Holland & Knight LLP (website), March 16, 2022, https://www.hklaw.com/en/insights/publications/2022/03/cyber-incident-reporting-requirements-for-critical-infrastructure.
38    Jena M. Valdetero, “Congress Passes 72-hour Federal Breach Reporting Law for Critical Infrastructure,” Greenberg Traurig LLP, Lexology (website), March 29, 2022, https://www.lexology.com/library/detail.aspx?g=b70dd100-5026-4494-8b5a-7050ea4b5632.
39    Consolidated Appropriations Act, P.L. No: 117-103 § Division Y (2022).
40    “Office of Foreign Assets Control–Sanctions Programs and Information,” US Department of Treasury (website), https://home.treasury.gov/policy-issues/office-of-foreign-assets-control-sanctions-programs-and-information.
41    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
42    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
43    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
44    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
45    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
46    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
47    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
48    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
49    US Secret Service, “Secret Service Hosts Cyber Incident Response Simulation,” Media Relations News Release, July 2, 2021, https://www.secretservice.gov/newsroom/releases/2021/07/secret-service-hosts-cyber-incident-response-simulation-0.
50    US Secret Service, “U.S. Secret Service Announces the Winner of the Nationwide Cyber Games,” Media Relations News Release, October 21, 2021, https://www.secretservice.gov/newsroom/releases/2021/10/us-secret-service-announces-winner-nationwide-cyber-games.
51    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Cyber Incident Response.
52    Department of Justice, “Deputy Attorney General Lisa O. Monaco Announces National Cryptocurrency Enforcement Team,” Office of Public Affairs News Release, October 6, 2021, https://www.justice.gov/opa/pr/deputy-attorney-general-lisa-o-monaco-announces-national-cryptocurrency-enforcement-team.
53    White House, “FACT SHEET: President Biden to Sign Executive Order on Ensuring Responsible Development of Digital Assets,” White House Briefing Room (website), March 9, 2022, https://www.whitehouse.gov/briefing-room/statements-releases/2022/03/09/fact-sheet-president-biden-to-sign-executive-order-on-ensuring-responsible-innovation-in-digital-assets/.
54    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Ransomware and Cryptocurrencies, October 2021.
55    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Ransomware and Cryptocurrencies.
56    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Ransomware and Cryptocurrencies.
57    Atlantic Council, GeoTech Center, Cybersecure the Future Roundtable, Ransomware and Cryptocurrencies.

The post Cybersecure the future: Ransomware appeared first on Atlantic Council.

Hackers, Hoodies, and Helmets: Technology and the changing face of Russian private military contractors

Table of contents

  • Introduction
  • PMCs in Russian international security strategy and the influence of technology
  • Training military forces abroad
  • Resource security
  • Combat missions
  • Political warfare
  • Accessing offensive cyber and information technologies in the PMC community
  • Where do PMCs go from here?
  • About the authors

Introduction

The first time Russia invaded Ukraine in the twenty-first century, the Wagner Group was born. The now widely profiled private military company (PMC) played an important role in exercising Russian national power over Crimea and portions of the Donbas—while giving Moscow a semblance of plausible deniability. In the near decade since, the Russian PMC sector has grown considerably, and is active in more than a dozen countries around the world. PMCs are paramilitary organizations established and run as private companies—though they often operate under contract with one or more states. They are profit-motivated, expeditionary groups that make a business of the conduct of war.1 PMCs are in no way a uniquely Russian phenomenon, yet the expanding footprint of Russian PMCs and their links to state interests call for a particularly Russian-focused analysis of the industry. The growth of these firms and their direct links to the Kremlin’s oligarch network, as well as to Moscow’s foreign media, industrial, and cyber activities, presents a challenge to the United States and its allies as they seek to counter Russian malicious activities abroad. 

As signals intelligence and offensive cyber capabilities, drones and counter-drone systems, and encrypted communications become more accessible, these technologies will prove ever more decisive to both battlefield outcomes and statecraft. More exhaustive research on these issues is necessary. The ongoing conflict resulting from Russia’s second invasion of Ukraine in this young century seems likely to shape the conduct of Russian foreign policy and security behavior for years to come—and these firms will play a part. 

The activities of these PMCs include high-intensity combat operations, as evidenced in Syria in 2018 and Ukraine in 2022, and a mix of population control, escort and close protection, and local direct-action activities, as seen in Libya, Mali, and elsewhere.2 Given that Russian PMCs source from, and depend on, Russian military service personnel, and given the considerable influence of Russian doctrine, it is reasonable to ask: How do changes in the Russian conduct of war and adoption of new technologies influence these PMCs? Moreover, how might these technological changes influence the role these PMCs play in Russian strategic goals and activity abroad? 
 
The accelerating frequency of PMCs found operating around the world and the proliferation of private hacking, surveillance, and social media manipulation tools suggest that Russian PMCs will pose diverse policy challenges to the United States and allies going forward. This issue brief seeks to offer an initial exploration of these questions in the context of how these PMCs came about and how they are employed today. The section below addresses the origin and operations of PMCs in Russian international security strategy, and also profiles the changing role of technology in conflict and the activities of these PMCs. The last section closes with a set of open research questions. 

PMCs in Russian international security strategy and the influence of technology

Historically, Moscow has benefited from using mercenaries to advance its aims abroad. Imperial Russia extensively deployed Cossack brigades in the Napoleonic wars and, domestically, to quell peasant uprisings. Tsar Aleksandr II used them as a tool to balance pan-Slavic fervor against the imperial policy of nonintervention in the burgeoning Balkan-Ottoman conflict of the 1870s.3 Joseph Stalin rallied sympathetic brigades in support of the Republican faction in the late 1930s Spanish Civil War.4 More recent conflicts demonstrate the abiding imperatives which make PMCs an attractive tool of Russian statecraft. 

The number and prevalence of Russian PMCs as a turnkey model deployed in service of Moscow’s niche foreign objectives have increased over the past decade. Russian PMCs provide the Russian government and, if applicable, their overseas clients (foreign governments and/or companies) with a range of capabilities to augment or mimic Russian military and intelligence activity. This includes training foreign armed forces and groups, providing armed security/protection, conducting “political warfare” (from assassinations to running drones), and performing military-style functions. It also potentially includes surveillance and cyber(ed) activities that could be reliant on industry capabilities or further built out in the future. Moscow exercises control and provides support for these capabilities to varying degrees, and each of these capabilities feeds into benefits for the PMCs and for the oligarchs at their helm. 

Training military forces abroad

Russian PMCs train foreign armed forces and groups. In the early 1990s, for instance, Rubikon, a security firm based in St. Petersburg and “supervised by Russian security services,” helped organize volunteers to fight for the Serbs in then-Yugoslavia.5 This trend has continued through to recent times, with Russia’s Vladimir Putin even publicly stating in 2012 that Russian private military companies could be used to train foreign military personnel.6 Recently, it appears that Russian PMC ENOT Corp has run “military-type training camps for right-wing activists from foreign countries.”7 Russian PMCs in Libya have trained Libyan National Army (LNA) forces and even repaired their military equipment.8 And a July 2021 assessment from the US Office of the Director of National Intelligence found that some “Russian private paramilitary groups” are “trying to recruit and train Western RMVEs [racially and ethnically motivated violent extremists] to expand their reach into the West, increase membership, and raise money.”9

These organizations also provide armed security/protection to government, corporate, and individual clients. Indeed, part of the Russian PMC industry outgrowth stems from the chaos in the post-Soviet period of the 1990s, when former Soviet soldiers, intelligence personnel, and other members of the security apparatus formed companies to provide security for businesses.10 In the early days of Gazprom, Rosatom, Rosneft, and Russian Railways—all state-owned enterprises—Russian PMCs protected their assets overseas.11 Years later,  then-Prime Minister Putin noted that PMCs could act as extensions of Russian influence in conducting such protection operations at important facilities abroad, outside of Russian enterprises.12 Russian PMCs have provided protective services in the Central African Republic,13 in Mali,14 and to energy fields in Syria,15 in addition to other countries.  

The Wagner Group deployed to Mali in December 2021, following the withdrawal of French forces from the country, to train the Malian Armed Forces (FAMa) and provide protection for senior officials. At the time, the French government attempted to stop the reportedly $10.8 million deal, but the Malian government defended the prospect of closer cooperation with Russia.16 Immediately upon the Wagner Group’s arrival, it began to construct a base near a Malian air force installation at Bamako’s Modibo Keita International Airport.17 FAMa, according to a Mali army spokesperson, “had new acquisitions of planes and equipment from [the Russians] . . . It costs a lot less to train us on site than for us to go over there.”18 Less than a month after the Wagner Group’s arrival, French reporting indicated that at least one Wagner member was injured when a FAMa convoy was attacked in the center of the country—where insurgents ambushed the convoy and employed an improvised explosive device against one of the armored vehicles, leading to a firefight.19 Though the Wagner Group’s mission in Mali is training local forces for direct combat, not engaging in it itself, the mission is clearly one that requires it to work in parallel with local forces and thus consistently places Wagner forces in combat situations. 

Resource security

While the Kremlin realizes strategic benefits from PMC operations worldwide, the PMCs themselves and PMC proprietors—often members of Putin’s inner circle of oligarchs—reap financial windfalls. Through opaque ownership structures and cutouts, the model essentially provides paramilitary muscle and political support in exchange for preferential access to—if not control over—mineral rights and other sources of rent extraction for Moscow and its oligarch class.20 Particularly in areas where the main sources of Russian economic might—arms and energy—are already prevalent like in Syria, PMCs act as a force multiplier and reinforce Moscow as an indispensable partner for regime stability. For instance, in Africa—where Russian arms comprise half the continent’s market, and Moscow looks to invest big in oil, gas, and nuclear projects—PMCs act as an insurance policy.21 

In the Central African Republic, the Wagner Group has been used to bolster support for President Faustin-Archange Touadéra’s government—training local soldiers, protecting leaders, and providing security services at the country’s diamond mines—following the exit of French peacekeeping forces in 2017.22 Yevgeniy Prigozhin, the Russian oligarch known as “Putin’s Chef,” runs the Wagner Group,23 a military force that is neither a single entity nor truly private or independent. The group also has close ties with the GRU,24 and its direction appears dictated by the state, which aids in the procurement of contracts internationally. The group is funded partially through Prigozhin, but Wagner also receives direct foreign funding through its contracts. The Touadéra contract is a prime example. Many of the Central African Republic’s diamond mines have passed back and forth between government and rebel hands—a key source of funding for both the Touadéra government and the rebel groups. These mines, back in government hands, now fund Wagner. A portion of Wagner’s payment is provided in diamonds, avoiding formal financial systems and therefore international sanctions, and in resource extraction permits to Russian companies linked to Prigozhin.25 Wagner, however, does not deal just with the government: it has also made deals with the rebels themselves to obtain illegally mined diamonds, cashing in on and likely exacerbating the conflict.26 Kimberley Marten, a scholar studying the Wagner Group, has suggested that Prigozhin may also use these connections and contracts to “engage in money-laundering or other criminal activity like smuggling, with the full knowledge and support of the Kremlin.”27 

It is quite possible, as the Russian government outsources more activities to PMCs, that it increasingly does so with cyber and information operations. For the PMCs, especially those with foreign government and foreign corporate clients, it is likely that market demands for these capabilities—as part of protective services, military combat augmentation, or something else—will drive them to increasingly develop or procure newer surveillance and cyber capabilities as well. 

In operations less closely tied to Russian forces, PMCs may pursue or build on technical capabilities in a different manner, likely focusing on expanding their political warfare tool kit rather than combat adjacent capabilities. Security deployments to resource extraction sites are already profitable for the PMCs, but they also provide a wealth of strategic opportunities. PMCs in Africa, for instance, already conduct or work in tandem with Russian influence operations and the integration of additional technological capabilities may heighten their effects.28 More advanced capabilities, such as cyber intrusion, represent an opportunity for PMCs to add or strengthen the political warfare layer of their operations while reaping profit. 

Combat missions

In Ukraine in 2014, soldiers without insignia, dubbed little green men, illegally invaded, attacked, and occupied territory, laying the path for a full-on Russian invasion of the country in 2022. This incursion into Crimea and the Donbas region of Ukraine leveraged a loose confederation of militia members and nonuniformed volunteers in mostly ancillary roles like diversion and sabotage.29 Ukraine’s Security Service accused the Wagner Group of assassinating Luhansk rebel leaders who disobeyed Russian orders.30 The conflict served, in many ways, as a proving ground for PMCs that would later deploy to other theaters like Syria and Libya—where their combat and support roles would become far more substantial and integrated with the Russian military, and where Wagner would prove the most professional, capable, and best equipped. 

PMCs like the Wagner Group perform military-style functions, engaging in armed combat, sometimes alongside the Russian military. In the fall of 2015, the Putin regime formally began its own intervention in Syria; by then, it had already sent hundreds of Wagner fighters into the country.31 Wagner forces have fought repeatedly in battles in Syria on behalf of Bashar al-Assad’s regime,32 both in the course of providing protection services and, in at least one instance, while Wagner fighters stayed at a GRU base in the country.33 Former Wagner fighters have described the PMC’s equipment in Syria as including “mortars, howitzers, tanks, infantry fighting vehicles, and armored personnel carriers” as well as man-portable surface-to-air missiles, anti-tank systems, and grenade launchers34—conventional military equipment for the battlefield. Wagner took part in training and equipping Syrian regime forces alongside—but distinct from—uniformed Russian soldiers. 

As part of these operations, Russian PMCs leverage a range of surveillance-, cyber-, and intelligence-related capabilities—which appear to be growing in number. RSB Group set up a cyber attachment in 2016 that was reportedly capable of both defensive and offensive activities.35 Russian PMCs in Syria have placed “intelligence specialists” on the front lines of armed combat to “better direct Russian airstrikes and enable pro-regime ground maneuvers.”36 Other PMC units “recruit human intelligence sources, guide [intelligence, surveillance, and reconnaissance] platforms and systems, collect signals intelligence, and analyze intelligence and open-source information,” according to a Center for Strategic & International Studies report (citing a presentation by Kiril Avramov, a nonresident fellow at the Intelligence Studies Project at University of Texas at Austin).37 

The widening adoption of surveillance and other technologies also poses a challenge to traditional PMC staffing and training, and may pull these companies further in toward the Russian state. The classic pipeline for Russian service members to many PMCs begins in elite military units such as the VDV (abbreviation for Vozdushno-desantnye voyska, Russian Airborne Forces), Russian special forces, and various Spetsnaz38 formations—enabling them to serve a broad range of familiar functions, both embedded within and alongside Russian military forces. While these units may provide a range of useful kinetic skills and small-unit combat training, they are more likely to produce specialized combat and maneuver skills like parachuting, covert insertion, and marksmanship than electronic warfare or cyber operations. The pipeline for PMCs to support the acquisition and use of these technologies must therefore look appreciably different, sourcing from new communities across the Russian armed forces. 

In Syria, Wagner has also taken contracts to secure resource extraction, specifically oil and gas. However, the presence of Western forces in the multi-front conflict has complicated the mission, and members of the group have engaged in direct combat with the intention of protecting and preserving oil and gas access for the Assad regime. Wagner’s presence in Syria is perhaps best known for a 2018 incident near a Conoco gas plant in the eastern part of the country. A pro-Assad group that included Wagner forces launched an attack on a US-supported Kurdish outpost where US soldiers were present, resulting in the death of hundreds of pro-Assad fighters.39 The Pentagon later reported that in the hours leading up to the assault, US officials were in contact with their Russian counterparts and alerted them to an impending counterattack, but that the Russian command asserted that there were no Russians present. There is no evidence of Russian attempts to warn or interdict the Wagner forces on the ground. In the aftermath, the Russian Foreign Ministry said that “about five people who were ‘presumably Russian citizens’ may have been killed.” Yet, other reports pointed to “substantial losses.”40 Despite expectations that Wagner would lessen its presence in the region following the incident, companies linked to Prigozhin have gained contracts to develop and guard new oil and gas fields in Syria, including in the same region where the firefight with US forces took place.41 The additional contracts with the Assad regime follow, in no small part, from the fact that Wagner receives payment at least partially in oil and gas, enabling it to skirt sanctions and financial regulations.42 

Building on battlefield successes in both countries, Wagner emerged as Moscow’s premier PMC, as evidenced by Prigozhin’s appearance alongside Defense Minister Sergey Shoygu in deliberations with the LNA commander, Khalifa Haftar, in 2018.43 Reported tensions between Shoygu’s defense ministry and Wagner notwithstanding,44 by the February 2022 invasion of Ukraine, the integration of PMCs—particularly Wagner—in Russian military operations had matured significantly. The Digital Forensic Research Lab has monitored Wagner activity across Ukraine, including in Zaporizhzhia, Volodymyrivka, and Klynove. Wagner activities in Ukraine appear to be intertwined with the Russian military, including Spetsnaz special forces.45 According to the UK Ministry of Defence, the Wagner Group engaged in direct combat in Ukraine to reinforce front-line Russian military forces in the capture of Popasna and Lysychansk. Wagner has suffered heavy casualties in combat, and lost troops are increasingly being replaced with minimally qualified and trained recruits, including convicts.46 Indeed, Wagner’s experience in the comparatively permissive Syrian and Libyan theaters has proven insufficient to repeat its battlefield successes, as it faces far better trained and equipped Ukrainian forces.47 

To the extent plausible deniability was ever a motivation for the Kremlin to rely on PMCs, their notoriety since 2014—Wagner’s in particular—reveals an equally likely imperative: expendability. Contracted mercenaries simply require less accountability from the state, cost far less than training and outfitting conscripts, and entail fewer potential domestic constraints. 

Moscow has long had to contend with the mothers of soldiers lost to war, and has a poor track record of transparency regarding conflict casualties.48 In Donbas earlier this year, Ukrainian officials allege that Russia deployed mobile crematoria to dispose of its fallen soldiers, rather than sending them home.49 The Kremlin was slow to acknowledge any casualties whatsoever, and the Defense Ministry has sought to classify the notification process for families.50 While he is unlikely to face substantial public backlash for the Russian military’s catastrophic performance in Ukraine, Putin’s continued insistence on characterizing the war as a “special military operation,” and his apparent reticence to call for a general mobilization to support it, signal some wariness of the war’s political ramifications.51 Meanwhile, as the war in Ukraine looks to grind further on, the demand for expendable forces is likely to increase. 

Against that backdrop, PMCs like Wagner are an attractive option because they shift at least some of the burden of war away from the state—particularly as they cast combat operations as a commercial enterprise, versus a political one.52 As Putin stated in late 2018, “We can ban the private security business altogether, but once we do that, I think you will get a lot of petitions to protect this labor market. As for their presence somewhere abroad, if, I repeat again, they do not violate Russian law, they have the right to work and push their business interests anywhere in the world.”53 

Political warfare

Russian PMCs are also increasingly involved in conducting “political warfare” activities, ranging from subversion to assassination, reminiscent of the kinds of “active measures” that Soviet intelligence services deployed throughout the Cold War. In Syria in 2015, the Russian government spread propaganda prior to its involvement54 and used PMCs on the ground to augment its forces once in the country. In the Central African Republic in 2018, three Russian journalists investigating Wagner’s activities in Africa were killed; while the killers have never been conclusively identified, the journalists’ driver that day was in contact with a police officer working with a member of the Wagner Group.55 Other reports describe PMCs as conducting political warfare activities such as kidnapping, sabotage, subversion, and blackmail.56 Moscow is increasingly placing cyber and information proxies overseas, to launch operations from within other countries and ostensibly to create deniability—such as establishing Russian Internet Research Agency (IRA) facilities in Ghana, Nigeria, and Mexico.57 In the Central African Republic, Prigozhin’s profit-seeking activities do not end with the Wagner Group. The oligarch has also built hospitals through his mining companies, created a Russian radio station with a wider reach than the state station, and produced a children’s cartoon featuring a Russian bear saving its animal friends in Africa.58 Such activities exemplify the duality of PMCs’ role in expanding Russian influence—pairing profit with propaganda. 

Prigozhin, in addition to heading the Wagner Group, is also at least partially responsible for the activities of the IRA, better known within the United States as the Russian Troll Factory. The US government has both sanctioned and indicted Prigozhin and associated companies in connection with IRA support of the 2014 invasion of Ukraine and its attempts to influence the 2016 US presidential election.59 Though this agency and the Wagner Group are not officially aligned, IRA activity has been uncovered in tandem with Wagner operations. A 2022 Twitter disclosure, for example, exposed a coordinated campaign within the Central African Republic of pro-Russian propaganda from both real and fake Twitter accounts linked to the IRA.60 In addition, Wagner’s activities in Mali appear closely buttressed by IRA efforts. In preparation for Wagner’s deployment to the country, “a coordinated network of Facebook pages in Mali promoted Russia as a ‘viable partner’ and ‘alternative to the West,’ encouraged postponement of democratic elections, and attempted to create local support for Wagner.”61 This disinformation machine also deployed earlier this year to deny and deflect responsibility for massacres tied to the Wagner Group in Mali, such as those in Mourah and Gossi.62 

Accessing offensive cyber and information technologies in the PMC community

The fusion of several quasi-state models of digital subversion with the paramilitary prowess of Russian PMCs should also not be ruled out. One dimension of Russian PMCs acquiring these capabilities is the possibility that they might access existing public/private relationships established by organs of Russian intelligence or even the commercial market. The commercial development, sale, and support of offensive cyber capabilities and electronic surveillance services includes dozens of firms, some of which have access to the latest security vulnerabilities and considerable technical design and development talent.63 By adding boutique cyber-surveillance tools, such as those developed by commercial outfits like NSO Group and DarkMatter, to disruptive attacks-as-a-service brokered by ransomware collectives like REvil, PMCs could vastly expand their clientele among global autocrats and oligarchs—thus substantially enhancing their utility to the Kremlin. These latter companies could provide access to technology systems and are well positioned to provide PMCs with intelligence-gathering and ongoing high-value-target surveillance capacity across the world.64 

An alternative, especially in the case of offensive cyber capabilities, may be for these PMCs to partner with Russian private companies or state labs working as proxies for Russian military and intelligence organizations. In 2018, FireEye Intelligence pointed to Russia’s Central Scientific Research Institute of Chemistry and Mechanics as likely supporting the deployment of Triton, an operational technology-focused malware, and the US government later sanctioned the lab.65 The US government claims that a private Russian firm, Positive Technologies—which the US Treasury identified as supporting the Russian Federal Security Service (FSB) and sanctioned—continues to develop offensive cyber capabilities on behalf of the Russian government.66 Leveraging the capabilities of such organizations would spare PMCs both the cost of developing significant new in-house talent and the added scrutiny of Russian government authorities. 

Where do PMCs go from here?

Major course corrections in Russia’s geopolitical trajectory seem unlikely so long as Putin remains in power, and any forecast of Moscow’s war effort in Ukraine remains speculative at best. Importantly, the driving forces for Russian PMC involvement in locations like Libya, Syria, Ukraine, Mali, and the Central African Republic appear diverse. In some instances, PMCs act alongside, or immediately in lieu of, uniformed Russian forces. In other cases, these firms appear to operate with greater independence, often with a clear profit motive. 

Putin’s inner circle of oligarchs controls, and holds interests in, a wide range of industries, and the oligarchs and their close relatives are often involved in various companies. These companies have several lines of revenue: thinly veiled authorized theft from the state, direct business revenue, and unofficially sanctioned criminal activity. In oil and gas, entertainment, finance, and similar industries, this breakdown of oligarch profit is fairly straightforward. Private military companies and those at their helm, however, have a more complicated relationship with the workings of the Kremlin. 

The involvement of Russian PMCs in extractive and more purely profit-seeking activities raises questions about how their incentive structure will change in the aftermath of the ongoing war in Ukraine and in the face of the adoption and employment of new technologies in conflict. These include: 

  • What levers (sanctions, export controls, etc.) can the transatlantic community use to curb the flow of illicit kinetic and digital arms alike, not only to the Russian state, but to commercial entities or third countries that might enable PMCs? 
  • How can the United States and its allies and partners work together to disincentivize the use of PMCs for regime and mineral-deposit security among leaders in Africa and elsewhere? What alternatives can they offer? 
  • What lessons is the Kremlin drawing and not drawing from its full-on war on Ukraine? How might that shape future decision-making about PMCs and conflict? 
  • If the Russian military and state defense apparatus is involved with supplying PMCs, does that extend to technological and cyber capabilities today? Might it in the future, and if so, how? What do those relationships and dependencies look like? 

These quasi-private military forces are a useful tool that Russia can deploy to manage risk, foment instability, and exploit geopolitical and economic opportunities around the world in advance of, in addition to, or instead of Russian state capabilities. These groups, often run by Russian oligarchs, are employed in a wide range of operations that support, sometimes directly and sometimes more opaquely, Russian strategic objectives. The Russian state benefits from having a nominally independent additional reserve that can project force in places where state-tied operations may carry additional risk—from conflict zones where the state’s forces require additional support to areas of insecurity where PMCs can enrich themselves while projecting Russian power and influence abroad. 

The technological capabilities that these companies develop may serve as an indication of Russian strategic priority and perhaps its points of perceived weakness in the years to come. The wide remit of operations under the PMC umbrella means that there exists a foundation for these companies to develop in myriad ways. A more combat-focused PMC, for example, will not pursue the same technologies as a PMC focused on political warfare in non-warfare zones. The unique position of Russian PMCs—motivated by both profit and policy—exemplifies the ongoing tension in Russia’s kleptocratic leadership, and may thus offer the United States and its allies an effective way to understand Russian priorities and engage with them in a more persistent manner. 

About the authors

Emma Schroeder is an assistant director with the Atlantic Council’s Cyber Statecraft Initiative within the Scowcroft Center for Strategy and Security. Her focus in this role is on developing statecraft and strategy for cyberspace that are useful for both policymakers and practitioners.

Gavin Wilde is a senior fellow at the Carnegie Endowment for International Peace and a nonresident fellow at Defense Priorities. He previously served as director for Russia, Baltic, and Caucasus affairs at the National Security Council, where his focus areas included election security and countering foreign malign influence and disinformation.

Justin Sherman is a nonresident fellow at the Atlantic Council’s Cyber Statecraft Initiative, where his work focuses on the geopolitics, governance, and security of the global Internet. He is also a research fellow at the Tech, Law & Security Program at American University Washington College of Law, a fellow at Duke University’s Sanford School of Public Policy, and a contributor at WIRED magazine.

Dr. Trey Herr is the director of the Cyber Statecraft Initiative under the Scowcroft Center for Strategy and Security at the Atlantic Council. His team works on the role of the technology industry in geopolitics, cyber conflict, the security of the internet, cyber safety, and growing a more capable cybersecurity policy workforce.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Sean McFate, Mercenaries and Privatized Warfare Current Trends and Developments, Office of the United Nations High Commissioner for Human Rights (OHCHR), April 24, 2020, https://www.ohchr.org/sites/default/files/Documents/issues/Mercenaries/WG/OtherStakeholders/sean-mcfate-submission.pdf.
2    Ministry of Defence (@DefenceHQ), “Latest Defence Intelligence update on the situation in Ukraine – 18 July 2022,” Twitter, July 18, 2022, 2:12 a.m., https://twitter.com/DefenceHQ/status/1548913656410226688; Ruslan Trad, “Wagner Group Continues Involvement in Russian Operations in Eastern Ukraine,” Digital Forensic Research Lab (DFRLab), July 8, 2022, https://medium.com/dfrlab/wagner-group-continues-involvement-in-russian-operations-in-eastern-ukraine-4c1c9b07e954; “Russian Troops Ill-Prepared for Ukraine War, Says Ex-Kremlin Mercenary,” Reuters, May 12, 2022, https://www.reuters.com/world/us/russian-troops-ill-prepared-ukraine-war-says-ex-kremlin-mercenary-2022-05-10/; Miriam Berger, “What Is the Wagner Group, The Russian Mercenary Entity in Ukraine?” Washington Post, April 9, 2022, https://www.washingtonpost.com/world/2022/04/09/wagner-group-russia-uraine-mercenaries/; Thomas Gibbons-Neff, “How a 4-Hour Battle Between Russian Mercenaries and U.S. Commandos Unfolded in Syria,” New York Times, May 24, 2018, https://www.nytimes.com/2018/05/24/world/middleeast/american-commandos-russian-mercenaries-syria.html; Ilya Barabanov and Nader Ibrahim, “Wagner: Scale of Russian Mercenary Mission in Libya Exposed,” BBC News, August 11, 2021, https://www.bbc.com/news/world-africa-58009514; and Jason Burke and Emmanuel Akinwotu, “Russian Mercenaries Linked to Civilian Massacres in Mali,” Guardian (US edition), May 4, 2022, https://www.theguardian.com/world/2022/may/04/russian-mercenaries-wagner-group-linked-to-civilian-massacres-in-mali.
3    Alexis Heraclides and Ada Dialla, “The Balkan Crisis of 1875–78 and Russia: Between Humanitarianism and Pragmatism,” in Humanitarian Intervention in the Long Nineteenth Century: Setting the Precedent (United Kingdom: Manchester University Press, 2015), 173, https://www.jstor.org/stable/j.ctt1mf71b8.14?seq=5.
4    Matthew Wills, “The International Brigades,” JSTOR Daily (online magazine), JSTOR (digital library), April 20, 2022, https://daily.jstor.org/the-international-brigades/.
5    Tor Bukkvoll and Åse Gilje Østensen, “The Emergence of Russian Private Military Companies: A New Tool of Clandestine Warfare,” Norwegian Defense Research Establishment, 2020, 3, https://publications.ffi.no/nb/item/asset/dspace:6751/1811576.pdf.
6    András Rácz, “Band of Brothers: The Wagner Group and the Russian State,” Center for Strategic and International Studies (blog), September 21, 2020, https://www.csis.org/blogs/post-soviet-post/band-brothers-wagner-group-and-russian-state.
7    Bukkvoll and Østensen, The Emergence of Russian Private Military Companies, 14.
8    R. Kim Cragin and Lachlan MacKenzie, “Russia’s Escalating Use of Private Military Companies in Africa,” Strategic Insights, Institute for National Strategic Studies, November 24, 2020, https://inss.ndu.edu/Media/News/Article/2425797/russias-escalating-use-of-private-militarycompanies-in-africa/.
9    US Office of the Director of National Intelligence, Russian Federation Support of Racially and Ethnically Motivated Violent Extremists, Office of the Director of National Intelligence, July 2021, https://www.scribd.com/document/558091662/ODNI-Report-Russian-Federation-Support-of-Racially-and-Ethnically-Motivated-Violent-Extremists#fullscreen&from_embed. Published as part of: Zach Dorfman and Jana Winter, “U.S. Intelligence Report Details ‘Indirect’ Russian Government Support for Western Neofascist Groups,” Yahoo! News, February 10, 2022, https://news.yahoo.com/us-intelligence-report-details-indirect-russian-government-support-for-western-neo-fascist-groups-233831082.html.
10    Andrew S. Bowen, “Russian Private Military Companies (PMCs),” In Focus (series), US Congressional Research Service, September 16, 2020, 1, https://sgp.fas.org/crs/row/IF11650.pdf.
11    Asymmetric Warfare Group Study, Russian Private Military Companies: Their Use and How to Consider Them in Operations, Competition, and Conflict (Fort Eustis: US Army, October 2020), 13.
12    Rácz, “Band of Brothers.”
13    Raphael Parens, The Wagner Group’s Playbook in Africa: Mali, Foreign Policy Research Institute, March 2022, 6, https://www.fpri.org/article/2022/03/the-wagner-groups-playbook-in-africa-mali/.
14    Parens, The Wagner Group’s Playbook in Africa, 9.
15    Swedish Defence Research Agency (FOI), “Russia’s (Not So) Private Military Companies,” FOI Memo 6653, January 2019, 2, https://www.foi.se/rest-api/report/FOI%20MEMO%206653.
16    John Irish and David Lewis, “Exclusive: Deal Allowing Russian Mercenaries into Mali Is Close–Sources,” Reuters, September 13, 2021, https://www.reuters.com/world/africa/exclusive-deal-allowing-russian-mercenaries-into-mali-is-close-sources-2021-09-13/.
17    Jared Thompson, Catrina Doxsee, and Joseph Bermudez, “Tracking the Arrival of Russia’s Wagner Group in Mali,” Commentary, Center for Strategic and International Studies (CSIS), February 2, 2022,  https://www.csis.org/analysis/tracking-arrival-russias-wagner-group-mali.
18    “Russian Troops Deploy to Timbuktu in Mali After French Withdrawal,” Reuters, January 6, 2022, https://www.reuters.com/article/mali-security-russia-idAFL8N2TM47J.
19    Tanguy Berthemet, “Au Mali, premiers accrochages entre Wagner et djihadistes,” Le Figaro, last updated June 1, 2022, https://www.lefigaro.fr/international/au-mali-premiers-accrochages-entre-wagner-et-djihadistes-20220105.
20    US Department of the Treasury, “Treasury Targets Financier’s Illicit Sanctions Evasion Activity,” News Release, July 15, 2020, https://home.treasury.gov/news/press-releases/sm1058; and Kimberly Marten, “Russia’s Use of Semi-State Security Forces: The Case of the Wagner Group,” Post-Soviet Affairs 35, no. 3 (March 2019): 181-204, doi:10.1080/1060586x.2019.1591142.
21    Pieter Wezeman et al., “Trends in International Arms Transfers, 2019,” Stockholm International Peace Research Institute, SIPRI Fact Sheet, March 2020, doi:10.55163/YJYW4676; and Eklavya Gupte and Rosemary Griffin, “Analysis: Russia Looks to Africa to Broaden Its Global Energy Influence,” S&P Global, October 22, 2019, https://www.spglobal.com/commodityinsights/en/market-insights/latest-news/oil/102219-analysis-russia-looks-to-africa-to-broaden-its-global-energy-influence.
22    Eric Schmitt, “Russia’s Military Mission Creep Advances to a New Front: Africa,” New York Times, March 31, 2019, https://www.nytimes.com/2019/03/31/world/africa/russia-military-africa.html; United Nations Security Council, Final Report of the Panel of Experts on the Central African Republic Extended Pursuant to Security Council Resolution 2399 (2018), with Cover Letter Dated 14 December 2018 to the President of the Security Council, United Nations Security Council, https://www.securitycouncilreport.org/atf/cf/%7B65BFCF9B-6D27-4E9C-8CD3-CF6E4FF96FF9%7D/s_2018_1119.pdf; and Dionne Searcey, “Gems, Warlords and Mercenaries: Russia’s Playbook in Central African Republic,” New York Times, last updated May 4, 2020, https://www.nytimes.com/2019/09/30/world/russia-diamonds-africa-prigozhin.html.
23    “Wagner Group, Yevgeniy Prigozhin, and Russia’s Disinformation in Africa,” US Department of State (website), May 24, 2022, https://www.state.gov/disarming-disinformation/wagner-group-yevgeniy-prigozhin-and-russias-disinformation-in-africa/.
24    Glavnoye Razvedyvatelnoye Upravlenie, Russia’s main intelligence directorate.
25    “The Wagner Group: A Russian Symphony of Profit and Politics,” Cipher Brief, accessed June 24, 2022, https://www.thecipherbrief.com/column_article/thewagner-group-a-russian-symphony-of-profit-and-politics.
26    Searcey, “Gems, Warlords and Mercenaries”; Federica Saini Fasanotti, “Russia’s Wagner Group in Africa: Influence, Commercial Concessions, Rights Violations, and Counterinsurgency Failure,” Order From Chaos (blog), Brookings Institution, February 8, 2022, https://www.brookings.edu/blog/order-from-chaos/2022/02/08/russias-wagner-group-in-africa-influence-commercial-concessions-rights-violations-and-counterinsurgency-failure/; and Luke Harding and Jason Burke, “Russian Mercenaries Behind Human Rights Abuses in CAR, Say UN Experts,” Guardian (US edition), March 30, 2021, https://www.theguardian.com/world/2021/mar/30/russian-mercenaries-accused-of-human-rights-abuses-in-car-un-group-experts-wagner-group-violence-election.
27    Kimberly Marten, “Where’s Wagner? The All-New Exploits of Russia’s ‘Private’ Military Company,” Program on New Approaches to Research and Security in Eurasia, PONARS Eurasia Policy Memo, September 15, 2020, https://www.ponarseurasia.org/where-s-wagner-the-all-new-exploits-of-russia-s-private-military-company/.
28    Jean Le Roux, “Pro-Russian Facebook Assets in Mali Coordinated Support for Wagner Group, Anti-Democracy Protests,” DFRLab, Atlantic Council, February 17, 2022, https://medium.com/dfrlab/pro-russian-facebook-assets-in-mali-coordinated-support-for-wagner-group-anti-democracy-protests-2abaac4d87c4; Wassim Nasr, “France Says Mercenaries from Russia’s Wagner Group Staged ‘French Atrocity’ in Mali,” France 24, April 22, 2022, https://www.france24.com/en/africa/20220422-france-says-mercenaries-from-russia-s-wagner-group-staged-french-atrocity-in-mali.
29    Sergey Sukhankin, Unleashing the PMCs and Irregulars in Ukraine: Crimea and Donbas, Jamestown Foundation, September 3, 2019, https://jamestown.org/program/unleashing-the-pmcs-and-irregulars-in-ukraine-crimea-and-donbas/.
30    Owen Matthews, “Putin’s Secret Armies Waged War in Syria—Where Will They Fight Next?” Newsweek, January 17, 2018, https://www.newsweek.com/2018/01/26/putin-secret-army-waged-war-syria-782762.html.
31    Nathaniel Reynolds, Putin’s Not-So-Secret Mercenaries: Patronage, Geopolitics, and the Wagner Group, Carnegie Endowment for International Peace, July 2019, 3, https://carnegieendowment.org/2019/07/08/putin-s-not-so-secret-mercenaries-patronage-geopolitics-and-wagner-group-pub-79442.
32    See, for example, “How ‘Wagner’ Came to Syria,” Economist, November 2, 2017,   https://www.economist.com/europe/2017/11/02/how-wagner-came-to-syria; and Reynolds, Putin’s Not-So-Secret Mercenaries, 5.
33    Rinat Sagdiev, Anton Zverev, and Maria Tsvetkova, “Exclusive: Kids’ Camp on a Defense Base? How Russian Firms Masked Secret Military Work,” Reuters, April 4, 2019, https://www.reuters.com/article/us-mideast-crisis-syria-russia-prigozhin/exclusive-kids-camp-on-a-defense-base-how-russian-firms-masked-secret-military-work-idUSKCN1RG1QT.
34    Justin Bristow, Russian Private Military Companies: An Evolving Set of Tools in Russian Military Strategy (Fort Leavenworth: US Foreign Military Studies Office, August 2019), 8-9; and Bukkvoll and Østensen, The Emergence of Russian Private Military Companies, 11.
35    Margarete Klein, Private Military Companies–A Growing Instrument in Russia’s Foreign and Security Policy Toolbox, European Centre of Excellence for Countering Hybrid Threats (Helsinki), June 2019, 3-4; and Bukkvoll and Østensen, The Emergence of Russian Private Military Companies, 14.
36    Seth G. Jones et al., Russia’s Corporate Soldiers: The Global Expansion of Russia’s Private Military Companies, A Report of the CSIS Transnational Threats Project (Lanham, Maryland: Rowman & Littlefield, July 2021), 18, 20, https://www.csis.org/analysis/russias-corporate-soldiers-global-expansion-russias-private-military-companies.
37    Jones et al., Russia’s Corporate Soldiers, 18-20. Avramov, an assistant professor at UT-Austin, also serves as director of its Global (Dis)Information Lab.
38    Spetsialnogo naznacheniya, meaning special purpose.
39    Kimberly Marten, “The Puzzle of Russian Behavior in Deir al-Zour,” War on The Rocks, July 5, 2018, https://warontherocks.com/2018/07/the-puzzle-of-russian-behavior-in-deir-al-zour/.
40    Marten, “The Puzzle of Russian”; Mike Eckel, “Pentagon Says U.S. Was Told No Russians Involved in Syria Attack,” Radio Free Europe/Radio Liberty, February 23, 2018, https://www.rferl.org/a/syria-deir-zor-attack-pentagon-russians-involved/29058555.html, and Gibbons-Neff, “How a 4-Hour Battle Between Russian Mercenaries.”
41    Marten, “Where’s Wagner? The All-New Exploits.”
42    “The Wagner Group: A Russian Symphony,” Cipher Brief.
43    Kate Baughman, “Russia’s Not-So-Invisible Role in the Libyan Conflict,” in-depth (blog), CNA, November 12, 2019, https://www.cna.org/our-media/indepth/2019/11/russias-not-so-invisible-role-in-the-libyan-conflict.
44    Warsaw Institute, “Shoigu’s Revenge,” Russia Monitor, February 25, 2018, https://warsawinstitute.org/shoigus-revenge/.
45    Trad, “Wagner Group Continues Involvement”; and Rob Lee (@RALee85), “Russian spetsnaz and Wagner private military contractors reportedly in Svitlodarsk and Myronivskyi,” Twitter, May 24, 2022, 6:59 p.m., https://twitter.com/RALee85/status/1529235651094360064.
46    Ministry of Defence (@DefenceHQ), “Latest Defence Intelligence update on the situation in Ukraine – 18 July 2022,” Twitter, July 18, 2022, 2:12 a.m., https://twitter.com/DefenceHQ/status/1548913656410226688.
47    Reuters, “Russian Troops Ill-Prepared for Ukraine War.”
48    Reuters, “Russian Troops Ill-Prepared for Ukraine War”; “‘Private Pivovarov Is on Assignment’: How Russia Hides Its Military Casualties,” Moscow Times, April 6, 2022, https://www.themoscowtimes.com/2022/04/06/private-pivovarov-is-on-assignment-how-russia-hides-its-military-casualties-a77247.
49    “Russia Abandons Its Dead Soldiers on the Battlefield, Claims Ukraine,” Times (United Kingdom), March 30, 2022, https://www.thetimes.co.uk/article/russia-abandons-its-dead-soldiers-on-the-battlefield-claims-ukraine-wh8c092n2.
50    Lisa Kim, “Putin Spokesperson Admits ‘Significant Losses’ of Russian Troops in Ukraine,” Forbes, April 7, 2022, https://www.forbes.com/sites/lisakim/2022/04/07/putin-spokesperson-admits-significant-losses-of-russian-troops-in-ukraine/?sh=15deb12e2cfb; and “Russia to Classify Information on Ukraine Troop Deaths,” Moscow Times, April 20, 2022, https://www.themoscowtimes.com/2022/04/20/russia-to-classify-information-on-ukraine-troop-deaths-a77416.
51    Andrew Osborn and Polina Nikolskaya, “Russia’s Putin Authorises ‘Special Military Operation’ against Ukraine,” Reuters, February 24, 2022, https://www.reuters.com/world/europe/russias-putin-authorises-military-operations-donbass-domestic-media-2022-02-24/; and Jay Beecher, “ISW Russian Offensive Campaign Assessment, July 4,” Kyiv Post, July 5, 2022, https://www.kyivpost.com/russias-war/isw-russian-offensive-campaign-assessment-july-4.html.
52    “A mercenaries’ war: How Russia’s invasion of Ukraine led to a ‘secret mobilization’ that allowed oligarch Evgeny Prigozhin to win back Putin’s favor,” Meduza, July 14, 2022, https://meduza.io/en/feature/2022/07/14/a-mercenaries-war.
53    “Big Press Conference of Vladimir Putin,” Interfax-Russia (news agency), December 20, 2018, https://www.interfax.ru/russia/643241.
54    Peter Pomerantsev, “Inside the Kremlin’s Hall of Mirrors,” The Guardian, April 9, 2015, https://www.theguardian.com/news/2015/apr/09/kremlin-hall-of-mirrors-military-information-psychology.
55    Tim Lister and Sebastian Shukla, “Murdered journalists were tracked by police with shadowy Russian links,” CNN, January 10, 2019, https://www.cnn.com/2019/01/10/africa/russian-journalists-car-ambush-intl/index.html.
56    Jones et al., Russia’s Corporate Soldiers, 18.
57    US Office of the Director of National Intelligence, Foreign Threats to the 2020 US Federal Elections, ICA 2020-00078D (Washington, DC: Office of the Director of National Intelligence, March 2021), https://www.dni.gov/files/ODNI/documents/assessments/ICA-declass-16MAR21.pdf, 4.
58    Searcey, “Gems, Warlords and Mercenaries”; Afrique Média, “Reportage sur la Radio Lengo Songo RCA Ngadi Kwa Vanessa,” YouTube video, January 31, 2019, https://www.youtube.com/watch?v=CQ9qWX3bQfYn; Улыбаемся Машем, Lionbear, YouTube video, July 18, 2019, https://www.youtube.com/watch?v=NCZ0YSyWVhk&t=4s.
59    “U.S. Widens Sanctions Net Against Kremlin-Connected Backer of ‘Troll Factory,’ Mercenary Group,” Radio Free Europe/Radio Liberty, September 23, 2020, https://www.rferl.org/a/u-s-widens-sanctions-net-against-kremlin-connected-backer-of-troll-factory-mercenary-group/30854350.html; US Department of Treasury, “Treasury Increases Pressure on Russian Financier,” News Release, September 23, 2020, https://home.treasury.gov/news/press-releases/sm1133; “U.S. Imposes New Sanctions Targeting Russian ‘Troll Farm,’ Owner Prigozhin,” Radio Free Europe/Radio Liberty, September 30, 2019, https://www.rferl.org/a/us-imposes-new-sanctions-targeting-russian-troll-farm-owner-prigozhin/30191701.html; and United States v. Internet Research Agency, No. 1:18-cr-00032-DLF (D.D.C. 2018), https://www.justice.gov/file/1035477/download.
60    Twitter Safety (@TwitterSafety), “Disclosing State-Linked Information Operations We’ve Removed,” Twitter, December 2, 2021, https://archive.ph/ZXw4k; and US Department of State, “Wagner Group, Yevgeniy Prigozhin, and Russia’s Disinformation in Africa.”
61    US Department of State, “Wagner Group, Yevgeniy Prigozhin, and Russia’s Disinformation in Africa”; Le Roux, “Pro-Russian Facebook Assets in Mali”; and Nasr, “France Says Mercenaries.”
62    US Department of State, “Wagner Group, Yevgeniy Prigozhin, and Russia’s Disinformation in Africa”; Emmanual Akinwotu, “Russian Mercenaries and Mali Army Accused of Killing 300 Civilians,” Guardian (US edition), April 5, 2022, https:/www.theguardian.com/world/2022/apr/05/russian-mercenaries-and-mali-army-accused-of-killing-300-civilians; “Mali: l’armée annonce avoir tué plus de 200 «combattants» terroristes lors d’une opération,” RT France (Russian state-controlled media), April 2, 2022, https://archive.ph/pYOJT; Sam Mednick, “French Accuse Russian Mercenaries of Staging Burials in Mali,” Washington Post, April 22, 2022, https://web.archive.org/web/20220425175005/https:/www.washingtonpost.com/world/russians-accused-of-staging-french-burial-of-bodies-in-mali/2022/04/22/c6b768a4-c228-11ec-b5df-1fba61a66c75_story.html; and “La pensée de l’expert russe, Maxime Shugaley, sur les atrocités à Gossi,” Mali ACTU, April 28, 2022, https://maliactu.net/la-pensee-de-lexpert-russe-maxime-shugaley-sur-les-atrocites-a-gossi/.
63    Winnona DeSombre et al., “Surveillance Technology at the Fair: Proliferation of Cyber Capabilities in International Arms Markets,” Atlantic Council, Issue Brief, November 8, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/surveillance-technology-at-the-fair/.
64    Winnona DeSombre et al., Countering Cyber Proliferation: Zeroing in on Access-as-a-Service, Atlantic Council, March 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/countering-cyber-proliferation-zeroing-in-on-access-as-a-service/.
65    FireEye Intelligence, “TRITON Attribution: Russian Government-Owned Lab Most Likely Built Custom Intrusion Tools for TRITON Attackers,” Mandiant, October 23, 2018 (FireEye is now part of Symphony Technology Group), https://www.mandiant.com/resources/triton-attribution-russian-government-owned-lab-most-likely-built-tools; and Catalin Cimpanu, “US Treasury Sanctions Russian Research Institute Behind Triton Malware,” ZDNet, October 23, 2020, https://www.zdnet.com/article/us-treasury-sanctions-russian-research-institute-behind-triton-malware/.
66    US Department of the Treasury, “Treasury Sanctions Russia with Sweeping New Sanctions Authority,” Press Release, April 15, 2021, https://home.treasury.gov/news/press-releases/jy0127; and Patrick Howell O’Neill, “The $1 Billion Russian Cyber Company That the US Says Hacks for Moscow,” MIT Technology Review, April 15, 2021, https://www.technologyreview.com/2021/04/15/1022895/us-sanctions-russia-positive-hacking/.

The post Hackers, Hoodies, and Helmets: Technology and the changing face of Russian private military contractors appeared first on Atlantic Council.

Securing the energy transition against cyber threats https://www.atlanticcouncil.org/in-depth-research-reports/report/securing-the-energy-transition-against-cyber-threats/ Tue, 12 Jul 2022 04:01:00 +0000 https://www.atlanticcouncil.org/?p=545118 This report recommends a suite of key actions that government can take to shore up the US energy sector against future cybersecurity threats.

The post Securing the energy transition against cyber threats appeared first on Atlantic Council.


Report launch

As the US energy sector’s reliance on digitalization grows, its vulnerability to cyberattacks also increases. To better understand current and future threats, the Atlantic Council Global Energy Center convened the Atlantic Council Task Force on Cybersecurity and the Energy Transition to develop a cybersecurity framework designed to protect US energy infrastructure—and by extension, national security—against cyberattacks.  

Former Secretary of the US Department of Homeland Security Michael Chertoff and former General Wesley Clark served as co-chairmen of the task force, which produced this report, “Securing the Energy Transition against Cyber Threats.”  

The task force found that existing efforts to strengthen cybersecurity are insufficient to meet the demands the energy transition will bring. The fragmented, sometimes rivalrous set of institutions regulating and coordinating current cyber defenses leaves many gaps, ambiguities, and weak links. 

This report recommends a suite of key actions that government can take to shore up the US energy sector against future cyber threats. 


Missing Key report cited in PanaTimes on CBDC trials using untested security protocols https://www.atlanticcouncil.org/insight-impact/in-the-news/missing-key-report-cited-in-panatimes-on-cbdc-trials-using-untested-security-protocols/ Tue, 12 Jul 2022 01:35:07 +0000 https://www.atlanticcouncil.org/?p=545362 Read the full article here.

The post Missing Key report cited in PanaTimes on CBDC trials using untested security protocols appeared first on Atlantic Council.

Cary quoted in Bloomberg: Shanghai Data Breach Exposes Dangers of China’s Trove https://www.atlanticcouncil.org/insight-impact/in-the-news/cary-quoted-in-bloomberg-shanghai-data-breach-exposes-dangers-of-chinas-trove/ Tue, 05 Jul 2022 19:46:00 +0000 https://www.atlanticcouncil.org/?p=543942 On July 5, 2022, Global China Hub fellow Dakota Cary was quoted in a Bloomberg article titled, “Shanghai Data Breach Exposes Dangers of China’s Trove”. “The PRC government is likely in crisis mode right now. It seems obvious to ask why Shanghai MPS needed access to all this data, but this is the exact system […]

The post Cary quoted in Bloomberg: Shanghai Data Breach Exposes Dangers of China’s Trove appeared first on Atlantic Council.


On July 5, 2022, Global China Hub fellow Dakota Cary was quoted in a Bloomberg article titled “Shanghai Data Breach Exposes Dangers of China’s Trove.”

“The PRC government is likely in crisis mode right now. It seems obvious to ask why Shanghai MPS needed access to all this data, but this is the exact system of surveillance and detail about individuals that the government wants,” Cary said.


Investing in Ukraine’s brains is vital for the country’s post-war prosperity https://www.atlanticcouncil.org/blogs/ukrainealert/investing-in-ukraines-brains-is-vital-for-the-countrys-post-war-prosperity/ Sun, 03 Jul 2022 12:30:22 +0000 https://www.atlanticcouncil.org/?p=543734 International support for the development of Ukraine's education and tech sectors could hold the key to a strong and sovereign Ukrainian state once the current war with Putin's Russia is over, writes Gerson S. Sher.

The post Investing in Ukraine’s brains is vital for the country’s post-war prosperity appeared first on Atlantic Council.

In America’s recent USD 40 billion military and humanitarian assistance package for Ukraine, there was not a word about support for scientific research, higher education or industrial high-tech innovation in Ukraine. And yet these areas are absolutely vital if Ukraine is to be a sustainable, sovereign, and independent country.

For the past thirty years, Ukraine has experienced a massive brain drain of young, talented and dynamic scientific researchers, students, and innovators to the more attractive and lucrative laboratories and industries of Europe, Asia, and North America. This loss has been severely exacerbated by the current Russian invasion of the country. While there have been large-scale efforts to accommodate Ukrainian refugees in temporary positions abroad, it can be assumed that many will never return to their homeland.

It suffices to look to the wartime role of the Ukrainian IT sector to understand why advanced scientific research, education, and high-tech entrepreneurship are so essential to the country’s military and economic security. Since the outbreak of hostilities just over four months ago, young Ukrainian cyber warriors have stunningly upended expectations that Russian military and criminal hackers (which may be one and the same) would destroy Ukraine through cyber warfare.


Ukrainian science goes much deeper than cyber-defense. In materials science, physics, mathematical modeling, engineering and a range of other areas, Ukraine’s advanced scientific research has made a significant impact not only in terms of international scientific publications but also in the world of technology and commerce.

Importantly, the strength of Ukrainian science and technology is not limited to the civilian sphere and has historically been closely tied to defense production. The famed Paton Electric Welding Institute in Kyiv has not only conducted leading-edge research on metallurgy and welding; it was also a primary producer of Soviet tanks as well as special metals for submarines and aircraft. Ukraine must also venture in new directions in the life sciences, in part to research countermeasures to biological warfare as well as to prevent the spread of disease among farm animals.

Major restructuring is necessary in order to get the most out of Ukraine’s tech potential. Much like other countries throughout the former Soviet bloc, Ukraine has inherited Russia’s heavily top-down, bureaucratic and inefficient research system. The concept of the modern research university, combining advanced research and education at all levels and contributing to technological innovation through linkages with industry, is still largely absent in Ukraine. Due to the thirty-year post-Soviet brain drain, a very high priority must now be placed on the education of the next generations of Ukrainian scientists and engineers.

Other post-Soviet countries have realigned their research and higher education systems in diverse ways and to varying degrees. Reforms have gained some traction in Ukraine since the country’s 2014 Revolution of Dignity, but in most areas progress remains painfully slow.

In this light, the radical devastation of war presents both deep challenges and major opportunities. In the words of the seventeenth century English historian Thomas Fuller, as put to music by the Mamas and the Papas, “the darkest hour of the night comes just before the dawn.”

As of now, the humanitarian and reconstruction assistance agenda of the United States government is silent on directed support for Ukraine’s science and technology sector. There are multiple opportunities for USAID and others to make a difference. There are also multiple opportunities not only to fail through inaction or lack of vision, but also for US assistance to fall behind support from other sources such as the European Union.

It is time for the US government to wake up and realize that for Ukraine to enter the modern world of knowledge economies, action is required now. Even before major physical reconstruction is underway, or simultaneously with it, there is an urgent need to address the immediate financial crisis in Ukrainian science, directing short-term support especially to those young scientists remaining in Ukraine. It is also essential to strengthen and realign the Ukrainian research, higher education, and technological innovation systems to succeed in the world of economic competitiveness and global cooperation.

There is now an ideal opportunity to look at these key issues in a fresh light and take bold steps. Without such thought and planning, throwing money at the problem will not have a lasting impact. As a senior Ukrainian member of the RESET-Ukraine working group informally remarked, “We need to change the system. If you go back to the old system, nothing will change.”

Only a willingness to engage in significant system change, the kind that will lift Ukrainian research, education, and innovation to standards and practices resembling those in Europe and elsewhere, will inspire the confidence external funders need in order to provide financing at the magnitude required to make a long-term and sustainable difference.

Direct assistance grants, support for Ukrainian STEM programs especially in higher education, and jump-start grants to small high-tech Ukrainian businesses along the lines of the US Small Business Innovation Research (SBIR) program can go a long way, dollar for dollar, toward ensuring the long-term economic health and vitality of a sovereign and independent Ukraine. But these efforts must go hand in hand with a blueprint for systemic change, one that works for all Ukrainian stakeholders.

That is why the US National Academies of Sciences, Engineering, and Medicine have created an informal working group called Rebuilding Engineering, Science, Education, and Technology (RESET-Ukraine) to work together with our Ukrainian colleagues to consider the best practices of other countries in realigning their national research systems for the twenty-first century. Our work is already underway. But such efforts cannot fully reach their potential unaccompanied by a clear commitment, first and foremost, from the United States and other governments to put material assistance in this field high on the agenda.

Gerson S. Sher is a retired civil servant and foundation executive whose forty-year career involved leadership of scientific cooperation with the countries of the former Soviet Union. He is the author of “From Pugwash to Putin: A Critical History of US-Soviet Scientific Cooperation” (Bloomington, Indiana University Press, 2019), and is co-chair of the US National Academies of Science, Engineering, and Medicine’s informal working group, RESET-Ukraine.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.


Nikoladze cited in Foundation for the Defense of Democracies on cybersecurity export controls https://www.atlanticcouncil.org/insight-impact/in-the-news/nikoladze-cited-in-foundation-for-the-defense-of-democracies-on-cybersecurity-export-controls/ Fri, 01 Jul 2022 21:29:37 +0000 https://www.atlanticcouncil.org/?p=543264 Read the full article here.

The post Nikoladze cited in Foundation for the Defense of Democracies on cybersecurity export controls appeared first on Atlantic Council.

Russian War Report: Russia claims Snake Island losses were ‘gesture of goodwill’ https://www.atlanticcouncil.org/content-series/russian-hybrid-threats-report/russia-claims-snake-island-goodwill-gesture/ Fri, 01 Jul 2022 16:09:40 +0000 https://www.atlanticcouncil.org/?p=543416 Plus, Russian diplomatic accounts spread a questionable story about anti-Russian stickers placed at an Auschwitz memorial.

The post Russian War Report: Russia claims Snake Island losses were ‘gesture of goodwill’ appeared first on Atlantic Council.

As Russia continues its assault on Ukraine, the Atlantic Council’s Digital Forensic Research Lab (DFRLab) is keeping a close eye on Russia’s movements across the military, cyber, and information domains. With more than seven years of experience monitoring the situation in Ukraine—as well as Russia’s use of propaganda and disinformation to undermine the United States, NATO, and the European Union—the DFRLab’s global team presents the latest installment of the Russian War Report. 

Security

Russia claims Snake Island losses were “gesture of goodwill”

Tracking Narratives

Russian diplomatic accounts spread questionable story about anti-Russian stickers placed at Auschwitz memorial

Comment from former Latvian minister sparks outrage from Russian politicians

Russian hackers exaggerate impact of Lithuanian cyberattack

Pro-Kremlin Telegram channels blame Ukraine for Kremenchuk mall attack

Pro-Kremlin Telegram channels spread disinformation about Ukraine selling weapons, again

Media Policy

Ukraine says YouTube removed hundreds of pro-Russian channels

Russia claims Snake Island losses were “gesture of goodwill”

On June 30, reports surfaced that Russian forces had abandoned Zmiinyi Island, better known as Snake Island, fleeing in small speedboats after Ukrainian forces shelled the island. 

The Ukrainian Ministry of Defense posted a drone video that day of the island being attacked. The footage appears to show Ukraine targeting a Pantsir-C1 Russian air defense system. Ukrainian media shared a similar video the previous day, claiming that on June 27, ten precision strikes hit Snake Island, destroying a Pantsir-C1 system. Given this context, the exact date and number of attacks remain unconfirmed.

Video posted by the Ukrainian Ministry of Defense, showing an attack on the Snake Island. (Source: @DefenceU/Archive) 

In high-resolution satellite images taken on June 29, large smoke plumes can be seen on the island. This coincides with other images of the island also taken on June 29 showing large smoke plumes.

Satellite imagery (top) from June 29 compared with a photo (bottom) of Snake Island taken on the same day. Large smoke plumes are visible in both images. (Source: @SputnikATONews/Archive)

The Russian Ministry of Defense claimed that the withdrawal from the island was planned as a “gesture of goodwill.” In a statement, the ministry said, “The Russian Federation does not interfere with UN efforts to organize a humanitarian corridor for the export of Ukrainian grain.” Ukraine and the West have accused Russia of fueling a global food crisis by blockading Ukrainian ports.

A video of Russian Ministry of Defense Spokesman Igor Konashenkov claiming that Russia’s withdrawal of forces from Snake Island was a “gesture of goodwill.” (Source: @rian_ru/Archive) 

Lukas Andriukaitis, DFRLab Associate Director, Brussels, Belgium

Russian diplomatic accounts spread questionable story about anti-Russian stickers placed at Auschwitz memorial

On June 23, the Russian Arms Control Delegation in Vienna claimed that anti-Russian stickers had appeared at the Auschwitz-Birkenau museum in southern Poland. The alleged stickers said in English, “Russia and Russians: The only gas you and your country deserve is Zyklon B.” Zyklon B is the poisonous gas used by Nazi Germany in the gas chambers at Auschwitz. The tweet also contained photos of the stickers allegedly displayed around the Auschwitz memorial.

The tweet was retweeted by other Russian diplomatic Twitter accounts, including the Russian Ministry of Foreign Affairs and the Russian embassies in Canada, Israel, and North Macedonia. The first tweet about the stickers was posted on Twitter around 6 pm CET on June 22 by @politoptimist, which appears to be a pro-Kremlin Twitter account.  

Pro-Kremlin Russian media also reported on the stickers. Eurasia Daily reported that the event took place on June 22 as a symbolic act, as this is the day when Russia commemorates the beginning of World War II. Pravda alleged that the stickers were placed by Ukrainian refugees. Other pro-Kremlin online outlets also reported on the incident, including Polit Rossiya, Vesti Podmoskovie, Russian Federal News Agency, Fin journal.info and Politika Segodnya.  

Polish fact-checking platform Konkret24 found that Twitter posts with photos of the stickers were published in multiple languages, including French, Spanish, Portuguese, Polish, English, and German.

On June 24, the official Twitter account of the Auschwitz Museum commented on the tweet posted by the Russian Arms Control Delegation, saying, “the photographs are simply a manipulation and the incident should be treated in terms of primitive and gross propaganda.” 

The DFRLab analyzed the photos posted by the Russian Arms Control Delegation using error level analysis on FotoForensics. The analysis suggests that the sections of each image containing the sticker had a different error level than the rest of the image, which can indicate digital editing or compositing. When a photo is modified and re-saved, the altered areas acquire a different error level, which error level analysis renders as distinct hues. This indicates that the stickers were likely composited into the images. The DFRLab also found a very similar sticker online, which could have been edited to create the manipulated sticker.
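The error-level idea can be sketched in miniature. The following is a hypothetical, self-contained illustration that substitutes coarse pixel quantization for real JPEG re-compression (tools such as FotoForensics operate on actual JPEG encoding), so the numbers are illustrative only: regions pasted into an already-compressed image accumulate different error levels than the untouched background.

```python
# Toy illustration of error level analysis (ELA): re-compress an image
# lossily, then compare per-pixel error levels. Regions pasted in after
# the original compression tend to stand out. Coarse quantization is
# used here as a stand-in for JPEG re-compression (an assumption of
# this sketch, not how FotoForensics is implemented).

def lossy(pixels, step=16):
    """Simulate lossy compression by snapping pixel values to a grid."""
    return [[(v // step) * step for v in row] for row in pixels]

def error_levels(pixels, step=16):
    """Per-pixel absolute difference between an image and its re-compressed copy."""
    recompressed = lossy(pixels, step)
    return [[abs(a - b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(pixels, recompressed)]

# A 4x8 grayscale "photo" that has already been through one round of
# lossy compression (every value sits on the quantization grid)...
base = [[16 * ((x + y) % 5) for x in range(8)] for y in range(4)]
# ...with a "sticker" pasted in afterwards (values off the grid).
tampered = [row[:] for row in base]
tampered[1][2:5] = [37, 91, 143]

ela = error_levels(tampered)
print(ela[0])  # untouched row: all zeros
print(ela[1])  # pasted region shows nonzero error levels
```

Running the sketch, the untouched rows re-compress with zero error while the pasted pixels produce nonzero error levels, which is the kind of contrast that shows up as different hues in a real ELA rendering.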

Forensic photo analysis revealed that some parts of the photos were likely altered. (Source: FotoForensics)

Givi Gigitashvili, Research Associate, Warsaw, Poland

Comment from former Latvian minister sparks outrage from Russian politicians

On June 27, former Latvian Interior Minister Maris Gulbis commented on Lithuania’s decision to ban Russia from transiting sanctioned goods through the country to Kaliningrad. Gulbis told the television program Preses Klubs, “I am now thinking that [the ban] is the first step by NATO and the European Union to separate Kaliningrad…from Russia.” He added that “a very clear signal was sent to Russia – if you will keep fooling around, we will take and give Konigsberg [the former Prussian name of Kaliningrad] back to Germany.”  

Gulbis’s comment was noted by former Russian President and Prime Minister Dmitry Medvedev, who now serves as deputy chairman of Russia’s Security Council. Medvedev took to Telegram to threaten Gulbis, writing, “When he comes to his senses, he will be afraid of every rustle at the door. And rightly so. We have a good memory.”

Screenshots of Medvedev’s Telegram post threatening Gulbis. (Source: Dmitry Medvedev/archive)

Medvedev has about 589,500 Telegram subscribers; his post about Gulbis received more than two million views, about four times more views than subscribers, according to TGStat, a Telegram analysis tool. The post was shared 565 times to public groups and channels. Medvedev frequently receives high engagement on his Telegram channel; according to TGStat, his posts typically reach around 240 percent of his subscriber count, which suggests an audience much broader than his subscriber base.

Screenshots of Medvedev’s post threatening Gulbis (left) and TGStat analytics about the post (bottom left and right). (Source: Dmitry Medvedev/archive, top left; DFRLab via TGStat, bottom left and right)

Additionally, RBK, a Kremlin-controlled media outlet in Russia, shared statements from three Russian politicians commenting on Gulbis’s suggestion that NATO and the European Union will take Kaliningrad away from Russia. Dmitry Lyskov, press secretary for the governor of the Kaliningrad region, said, “This is the private opinion of some individual Lithuanian [sic], and nothing more.” Andrey Klimov, deputy chairman of the Federation Council Committee on International Affairs, added, “They may want to chop off a lot of things, but there is a big difference between wanting and being able to do that.” Vladimir Dzhabarov, first deputy chairman of the Federation Council Committee on International Affairs, was more hostile. “Any attempt to tear Kaliningrad away from us will end with a military clash with Russia,” he said. “I don’t think NATO doesn’t understand this. Lithuania, Poland should think, because they are the first to seriously fall into this meat grinder.” 

Both Medvedev and Dzhabarov’s comments were picked up by other Russian language media outlets. A query conducted using the social media analysis tool Meltwater revealed that 191 news publications mentioned Dzhabarov’s comment, and ninety-nine publications mentioned Medvedev’s comment. In both cases, it was primarily Russian media that amplified the hostile comments from Medvedev and Dzhabarov.

Screenshots from Meltwater show the media mentions count and top locations for Russian language media coverage of Medvedev and Dzhabarov‘s comments. (Source: DFRLab via Meltwater)

Nika Aleksejeva, lead researcher, Riga, Latvia

Russian hackers exaggerate impact of Lithuanian cyberattack

On June 23, the Lithuanian Cyber Security Center (NKSC) announced an increase in distributed denial of service (DDoS) attacks that were “leading to temporary disruptions of services” impacting public authorities and the transport and finance sector in Lithuania. On June 25, Killnet, a pro-Kremlin hacker collective, claimed responsibility for the attack, saying it was carried out to pressure Lithuania to reverse a ban on Russian goods being transported through the country.  

Killnet had threatened that more intense cyberattacks would take place on June 27. By noon on June 27, Killnet claimed “1,089 Lithuanian web resources are disabled due to local ISP [internet service provider] failure.” The post did not name any provider and said to wait for official sources to comment on the disruption. The DFRLab did not find any Lithuanian media reporting on internet provider disruptions. The next day, Killnet claimed that “In 39 hours, we achieved the isolation of 70 percent of the entire Lithuanian network infrastructure.”  

The claims from Killnet appear to be exaggerated. The DFRLab reviewed Killnet’s Telegram channel and identified sixteen instances between June 27 and 29 in which the hacker group claimed to have taken down websites; when the DFRLab reviewed those websites on June 30, fourteen were online. On June 30, Lithuanian police announced an investigation into the disruption of “more than 130 websites” between June 20 and June 29, roughly one-eighth of the 1,089 web resources Killnet claimed on June 27.
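The verification step described above, revisiting each site a group claims to have disabled and counting how many still respond, amounts to a simple availability audit. A minimal sketch follows; the URLs and function names are hypothetical (the DFRLab's actual tooling is not public), and the fetch function is injectable so the logic can be exercised without network access.

```python
# Minimal sketch of auditing takedown claims: probe each website a
# group claims to have disabled and count how many actually respond.
# Illustrative only; `fetch` defaults to urllib but can be swapped
# out for a stub in testing.
from urllib.request import urlopen
from urllib.error import URLError

def is_online(url, fetch=urlopen):
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        with fetch(url, timeout=10) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

def audit_claims(urls, fetch=urlopen):
    """Compare the number of sites claimed down against those still up."""
    still_online = sum(1 for u in urls if is_online(u, fetch))
    return {"claimed_down": len(urls), "still_online": still_online}
```

Run against a list of claimed-down sites, a result like `{"claimed_down": 16, "still_online": 14}` is the kind of discrepancy that signals an exaggerated claim.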

Giedrius Meskauskas, a cyber security expert in Lithuania, published an op-ed in a regional news outlet saying that “these attacks are like a gang of teenagers running through the district at night, say, pouring five liters of fuel from each car. Unpleasant? Yes. For those for whom it was the last five liters of fuel, it is very unpleasant. Those who had a full tank of fuel would probably not even notice the incident.” Meskauskas explained that “DDoS attacks are unpleasant, but they are among the most primitive tools of cyber warfare: their effects last a limited time and they do no lasting damage to an organization’s IT resources.”

Nika Aleksejeva, Lead Researcher, Riga, Latvia 

Pro-Kremlin Telegram channels blame Ukraine for Kremenchuk mall attack

On June 27, Russian troops launched X-22 missiles from a TU-22M3 bomber, striking a busy shopping mall in Kremenchuk. In response, Ukrainian President Volodymyr Zelenskyy described Russia as a “terrorist state.” Russian President Vladimir Putin denied the attack. As the shopping mall was still burning, the Russian propaganda machine began obfuscating the truth. Several state officials, state media outlets, and pro-Kremlin Telegram channels rushed to disseminate “alternative” explanations of what happened, most of which were debunked by fact-checkers.

The first claim suggested that the attack struck a weapons depot and the resulting fire spread to a neighboring shopping mall. There is a concrete mixing plant near the mall, but no weapons depot has been identified. Satellite imagery also shows extensive damage to the mall but not to any surrounding buildings. The only other impact site, from a second strike, is located roughly 500 meters from the mall. However, the buildings located between the two impact sites appear largely undamaged, further debunking the claim that a fire spread through the area. Lastly, Zelenskyy published surveillance footage of a missile hitting the shopping mall. Bellingcat geolocated and verified the footage, but pro-Kremlin propagandists continue to cast doubt on the video’s authenticity.

Another narrative claimed that Ukraine had staged the attack. The narrative echoed debunked Russian denials made after the atrocities in Bucha. Some pro-Kremlin sources even dubbed the Kremenchuk tragedy “Bucha 2.0.” Meanwhile, other commenters cited the Latin phrase “cui bono,” suggesting that crimes are committed to benefit their perpetrator. One Telegram channel claimed that Kremenchuk was chosen because it has a small population, meaning fewer people were around to film the attack. The same pro-Kremlin channel twisted a statement by Ukrainian Minister of Internal Affairs Denys Monastyrsky to suggest that Ukraine was planning more attacks.

Some pro-Kremlin Telegram channels claimed that the mall had been shuttered since March, while others suggested that since there were not many cars in the parking lot, the mall must have been empty. However, multiple stores confirmed that their employees were injured in the attack. Shoppers even published receipts with the date to further prove the mall was open on June 27. Bellingcat also analyzed historical satellite imagery and found that the mall’s parking lot is typically not very full on Monday afternoons, which would explain the small number of cars.

Another conspiracy theory appeared on the channel of pro-Kremlin blogger Zergulio, who claimed that Google had indexed news stories about Kremenchuk from France24 and The Guardian before the event had occurred. However, both France24 and The Guardian run daily live blogs covering the war, which are created in the morning and updated throughout the day. A webpage indexed by Google in the morning can have its content updated later without the indexation time changing.

Roman Osadchuk, Research Associate

Pro-Kremlin Telegram channels spread disinformation about Ukraine selling weapons, again

On June 25, the Kremlin-tied Telegram channel Spletnitsa (“Gossip Girl”) published a post accusing two Ukrainian businessmen of assisting the Presidential Office of Ukraine in selling weapons to Middle Eastern countries. The Telegram channel declared that firearms were being marked in official documents as destroyed to hide the sales. The channel also stated that the Armed Forces of Ukraine would have benefited from possessing the weapons, as their current supplies were dwindling.

The channel did not provide any evidence to support its claims. The DFRLab previously covered attempts by pro-Kremlin Telegram channels to promote similar disinformation using forged government letters and doctored images of dark web sites where transactions allegedly took place.

The original post was amplified by another pro-Kremlin Telegram channel, MediaKiller, which added that the alleged arms dealers also earn money from the Presidential Office. MediaKiller also cited an unrelated investigation conducted by the Security Service of Ukraine examining the theft of aircraft components in the Odesa region. Spletnitsa then forwarded the post from MediaKiller, using it as evidence that another channel agreed with its claims.

Roman Osadchuk, Research Associate

Ukraine says YouTube removed hundreds of pro-Russian channels 

The Security Service of Ukraine (SBU) said on June 25 that YouTube had responded to a request and blocked around 500 pro-Russian channels that had a combined audience of more than 15 million subscribers. YouTube has not commented on the matter.

An SBU statement said that Russia is spreading “insane fakes about ‘biolabs’ and radio-controlled geese, hatred of Ukrainians, and refined nonsense for their citizens,” producing “an endless stream of delusion.”  

According to the statement, in addition to YouTube, the SBU also requested the blockage of 1,529 Telegram channels and bots, 426 Instagram accounts, 93 Facebook accounts, and 1,050 TikTok accounts. The SBU called on Telegram users to use the chatbot @Traitor_Search_bot to report information about “internet agents” who leak “important data” to the enemy.  

On June 29, Russia’s Communist Party reported that YouTube had taken down their channel Красная Линия (Red Line). According to the statement, the YouTube channel was deleted due to “the videos with General Secretary of the Communist Party Gennady Zyuganov and the patriotic politics of the TV channel.” On the same day, TASS reported that YouTube had previously deleted a video on the Red Line channel in which Zyuganov laid flowers at the Tomb of the Unknown Soldier in Moscow and made assessments about the situation in Ukraine. 

In May, YouTube removed more than 70,000 videos and 9,000 channels related to Russia’s war in Ukraine. Despite threatening YouTube with a ban, Russia has not acted against the platform, which remains popular in the country.

Eto Buziashvili, Research Associate, Washington DC

The post Russian War Report: Russia claims Snake Island losses were ‘gesture of goodwill’ appeared first on Atlantic Council.

]]>
Dr. Akhtar featured in Business Recorder: ISSI holds seminar on ‘ensuring traditional security through technology optimisation’ https://www.atlanticcouncil.org/insight-impact/in-the-news/dr-akhtar-featured-in-business-recorder-issi-holds-seminar-on-ensuring-traditional-security-through-technology-optimisation/ Wed, 29 Jun 2022 21:00:00 +0000 https://www.atlanticcouncil.org/?p=543647 The post Dr. Akhtar featured in Business Recorder: ISSI holds seminar on ‘ensuring traditional security through technology optimisation’ appeared first on Atlantic Council.

]]>

The 5×5—Cybercrime and national security https://www.atlanticcouncil.org/content-series/the-5x5/the-5x5-cybercrime-and-national-security/ Wed, 29 Jun 2022 04:01:00 +0000 https://www.atlanticcouncil.org/?p=541720 Five experts weigh in on emerging trends in cybercrime and their impacts on national security. 

The post The 5×5—Cybercrime and national security appeared first on Atlantic Council.

]]>
This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

From bank fraud to malware to romance scams, cybercrime is everywhere. The Federal Bureau of Investigation’s 2021 Internet Crime Report cited $6.9 billion in cybercrime-related losses, nearly double the losses reported in 2019. The totality of these losses has a major impact on the US economy, in addition to the lives of affected individuals and businesses that may watch their bank accounts drained and confidential information stolen.

But cybercrime is far from a purely economic problem; real national security concerns are wrapped up in the issue as well. Just as cybercriminals learn from each other, state hacking groups learn from cybercriminals, and vice versa. Cybercriminal infrastructure and even cybercriminals themselves have been coopted by governments in the past, and there is evidence of states potentially acquiring tooling from the cybercriminal underground.

Cybercrime is, of course, not a uniquely US problem. As with all forms of crime, cybercriminals seek to connect with and learn from each other. Criminal forums, marketplaces, group chats, and even Facebook pages are watering holes for this underground economy, allowing threat actors to adapt techniques to their unique environments and targets all around the world. British fraudsters have targeted customers’ sensitive personal information online in order to commit tax fraud. Brazilian malware developers have manipulated electronic invoices issued in the country so that payments are made out in their own names. Financially motivated threat actors have targeted Australian superannuation accounts.

We brought together five experts with a range of perspectives to weigh in on emerging trends in cybercrime and their impacts on national security. 

#1 How does cybercrime impact national security?

Marina E. Nogales Fulwood, global head – cyber external engagement, global response & intelligence, Santander Group:

“Cybercrime impacts national security in different ways, including by offering a fertile ground for organized crime and hostile nation states to obtain and launder illicit profits; threatening the economic stability of households, enterprises and governments; and, in some cases, disrupting supply chains and leaving critical sectors paralyzed. The paradigm shift ‘from online criminal activity to national security threat’ was bolstered by the recent ransomware attacks against Colonial Pipeline and Kaseya that prompted the classification of ransomware as a national security matter. The nationwide Conti ransomware attacks against Costa Rica’s public and private sector, and the country’s subsequent state of emergency declaration, is another clear example.”

Ian W. Gray, senior director of intelligence, Flashpoint:

“To understand how cybercrime impacts national security, it is important to have a proper understanding of the motivations of cybercriminals and adversaries alike. There also may be substantial overlap with the tactics, techniques, and procedures (TTPs) employed by various threat actors, regardless of motivation. Cybercrime is often financially motivated. However, the same threat actors that are monetizing initial access to a network may also be selling that access to a state-sponsored adversary, whether they know it or not. State-sponsored adversaries may be employing proxies to deflect attribution attempts, thereby providing plausible deniability. The same TTPs that are often associated with less sophisticated cybercrime—social engineering, credential stealing malware, brute-forcing or credential stuffing—are also effective in state-sponsored attacks that can have a larger impact on national security.”

Matthew Noyes, cyber policy and strategy director, US Secret Service:

The views presented are his own and do not necessarily reflect the views of any agency of the United States Government.

“For over forty years, cybercrime has presented the risk of unauthorized access to national security information and associated information systems. Today, this risk is heightened by the growth of highly profitable transnational cybercriminal networks. These transnational criminal networks have both conducted and enabled highly disruptive cyber incidents that have impacted the operation of critical infrastructure and essential services. These criminal networks may serve as proxies for malicious foreign government activities or provide a degree of plausible deniability to foreign government security services for their own malicious cyber activities.”

Mario Rojas, cybersecurity and threat intelligence subject matter expert, Maltego:

“Cybercrime impacts our society on all levels, and national security is not exempt from the reach of cyber criminals, who target government agencies for financial gain, cyber warfare, or simply as a challenge. These cyber criminals undermine the security of our countries by attacking critical infrastructure such as hospitals, gas pipelines, and even military networks.”

Dmitry Smilyanets, principal product manager, Recorded Future:

“Espionage, attacks on critical infrastructure, account takeover (ATO) targeting government officials and employees, election meddling, and disinformation are among the top threats to national security that I can see coming from financially motivated actors.”

#2 Given limited resources, should counter-cybercrime efforts focus on a particular country/region or does the issue warrant a holistic approach?

Fulwood: “Cybercrime is borderless, and combatting it requires the widest level of international cooperation possible, encompassing stakeholders from government, law enforcement, and the private sector. As an example of this, most successful law enforcement counter-cybercrime operations have benefitted from internationally-coordinated frameworks, while many private sector companies have acquired a leading role in disrupting and providing investigative support to the public sector.”

Gray: “Holistic. Employing a fractured approach to countering cybercrime would have detrimental effects on developing internet standards. The globe is already interconnected, save for a few countries that choose to isolate in order to impose state control over internet usage. While certain countries are often associated with specific cybercrimes (like Russia and ransomware, or China and intellectual property theft), it is vital that defensive efforts are implemented in a coordinated manner, even if attack vectors or objectives are varied. As a result, improving the defense of domestic networks, including strong public-private partnerships, is the best approach to countering cybercrime. This should be followed by building the capabilities of our multinational partners, including best practices and intelligence sharing.”

Noyes: “Resource allocation is the key question. Ross Anderson et al. captured it well in a 2012 paper: ‘As for the more direct question of what should be done, our figures suggest that we should spend less in anticipation of cybercrime (on antivirus, firewalls, etc.) and more in response, that is, on the prosaic business of hunting down cyber-criminals and throwing them in jail.’ This analysis still holds up when you consider estimates of $1.75 billion in global spending on cybersecurity products and services, relative to the modest investments in law enforcement efforts and the overall decline in fraud prosecutions. Transnational cybercriminal networks are global, and a holistic approach is necessary to deter their criminal activity, reduce the profitability of their crimes, and successfully arrest and prosecute those who engage in these crimes.”

Rojas: “Governments and private institutions should cooperate, not only sharing knowledge and resources but also creating and supporting organizations to fight cybercrime and help educate the public.”

Smilyanets: “This decision should be made after the proper evaluation of risk is done, as well as the assessment of potential losses. Human life is first, but then, I believe the priority should be aligned with expectations of future damages.”

#3 What is an emerging cybercrime trend that we should be keeping an eye on?

Fulwood: “An emerging trend commonly observed is the symbiotic relationship that access brokers and ransomware groups enjoy. According to industry experts, in 2021, the average time between a network access offer and a ransomware group breaching the same company was seventy-one days. Therefore, closely monitoring access sales in underground forums and other channels used by cybercriminals can provide invaluable early-warning alerts for soon-to-be-breached companies.”

Gray: “The types of ways to steal someone’s identity have changed significantly over the last few years. Whereas username and password may have once been sufficient to gain access to an individual’s account and personal information, increased user awareness, multi-factor authentication, and cybersecurity have mitigated these types of attacks. The introduction of log shops that sell browser fingerprints, new methods of bypassing multi-factor authentication—like social engineering, SIM swapping, and more automated bypass methods like OTP bots, for example—all demonstrate the evolution of identity fraud that could result in account takeover.”

Noyes: “The growing illicit value transfer through the theft and illicit use of digital assets. Kevin Werbach’s 2022 testimony before the Senate highlighted this risk, stating, ‘When digital asset and DeFi firms demonstrate their inability to safeguard assets, and engage in behavior that suggests ill-intent or inconsistency, it should result in a drop in trust. The fact that many such firms, and the market as a whole, do not experience such a reaction, indicates that investors may not rationally be assessing risks. This could be a recipe for disaster.’”

Rojas: “Supply chain attacks are an emerging threat that targets software developers and suppliers intending to access source codes, build processes, or update tools by infecting legitimate applications to spread malware. A great example of these attacks was the one that involved SolarWinds and affected thousands of customers, including government agencies around the world.”

Smilyanets: “Credential stealers such as RedLine, Vidar, and Raccoon pose a very serious threat to corporations, governments, and individuals. We see steady growth in that market, as well as a strong correlation with the growth of ransomware attacks. Fifty percent of ransomware attacks start from the ATO of network access credentials previously compromised by information stealers.”

More from the Cyber Statecraft Initiative:

#4 What forms of cybercrime are impactful but do not get enough attention?

Fulwood: “While sophisticated and emerging forms of cyberattacks are widely reported by industry and news outlets, other types of cybercrime, like phishing, have been normalized. Despite its simple nature, phishing is a pervasive threat that every year yields countless economic losses.”

Gray: “Synthetic Identity Fraud (SIF). This crime involves leveraging legitimate personally identifiable information (PII) to create a false identity that can be used for several malicious purposes, including establishing lines of credit or committing financial fraud. During the COVID-19 pandemic, threat actors would leverage stolen PII to take advantage of the US government relief programs, like the CARES Act. Some agencies estimate that over $100 billion in taxpayer money was stolen by fraudsters stealing or creating fake identities to claim unemployment benefits from state workforce agencies. 

Attacks like ransomware and business email compromise (BEC) generally attract a lot of attention for their high payouts and business disruptions. However, “smaller” forms of fraud are more common and also generate major losses when employed en masse.”

Noyes: “More attention is warranted on BEC and similar fraud schemes, which are the economic foundation for transnational cybercriminals. While ransomware understandably gets significant attention due to its potential to disrupt critical infrastructure and essential services, the known and estimated financial losses to BEC and related cyber-fraud schemes are far greater. For example, in 2021 the Internet Crime Complaint Center received 19,954 BEC complaints with adjusted losses of $2.4 billion, relative to 3,729 ransomware complaints with adjusted losses of $49.2 million.”

Rojas: “SIM swapping is a technique utilized by cybercriminals for diverse purposes, more recently to sidestep two-factor authentication solutions, granting them access to resources that otherwise would be out of reach; a passive reaction from service providers increases the efficacy of this technique.”

Smilyanets: “With every year, a digital identity becomes more and more valuable. The average internet user has approximately fifty passwords saved in their browser. Threat actors steal not just your passwords, but also your browser fingerprints and cookies with session tokens. That allows them to create synthetic identities, impersonate victims with high fidelity, and gain access to corporate infrastructure protected by multi-factor authentication.”

#5 How can the United States and its allies encourage cooperation from other countries on combatting cybercrime? 

Fulwood: “The United States and its allies can encourage cooperation by enabling more public-private collaboration and incorporating industry expertise in task forces and initiatives.”

Gray: “The relationship between international cybercrime, state-sponsored threat actors, and a burgeoning effort to establish coordinated and like-minded initiatives to thwart cybercrime is quite complicated. However, existing international treaties like the Budapest Convention on Cybercrime aim to establish a cooperative framework to combat cyber threats, and non-binding efforts like the Tallinn Manual actively aim to address international legal issues that arise when operating in cyberspace. Russia, meanwhile, has pushed back on the Budapest Convention and proposed its own cybercrime treaty to the United Nations (UN Resolution 74/247), broadening the definition of cybercrime and the scope of its authority. Suffice it to say, it is extremely important for the United States and its allies to establish a firm understanding of the threat landscape and their shared security goals.”

Noyes: “Skillful diplomacy, public engagement, and the coordinated application of various forms of sanctions and incentives have proven effective at fostering international law enforcement cooperation on a range of issues. Even when some states limit their cooperation, or actively interfere in the law enforcement activities of other countries, law enforcement agencies have proven effective at apprehending persons and seizing assets when they are in cooperative jurisdictions. For example, consider the arrest of Alexander Vinnik coupled with the shutdown of and civil complaint against BTC-e, which was described as a major exchange converting ransomware payments from cryptocurrency to fiat currency. Enforcing the law in this manner not only helps to deter and disrupt transnational cybercriminals, but also reinforces norms of the rule of law and international stability, and encourages further international law enforcement cooperation.”

Rojas: “Sharing resources, tools, case studies, and white papers has proven invaluable for the private sector, as cybersecurity professionals learn from them and can prevent and even disrupt the work of cybercriminals. Governments can also take advantage of these techniques to get other countries and organizations involved in the fight against cybercrime.”

Smilyanets: “Leading by great example in investigations and prosecutions will encourage partner states.”

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Scowcroft Center for Strategy and Security. He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

Liv Rowley is an assistant director at the Atlantic Council’s Cyber Statecraft Initiative.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Cybercrime and national security appeared first on Atlantic Council.

]]>
CBDC cybersecurity report cited in Inside Cybersecurity on CBDC design https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-cybersecurity-report-cited-in-inside-cybersecurity-on-cbdc-design/ Fri, 17 Jun 2022 03:24:28 +0000 https://www.atlanticcouncil.org/?p=538297 Read the full article here.

The post CBDC cybersecurity report cited in Inside Cybersecurity on CBDC design appeared first on Atlantic Council.

]]>
Visualizing the NATO Strategic Concept: Five ways to look at the Alliance’s future https://www.atlanticcouncil.org/commentary/trackers-and-data-visualizations/visualizing-the-nato-strategic-concept/ Thu, 16 Jun 2022 21:08:26 +0000 https://www.atlanticcouncil.org/?p=526858 We asked our experts: With so much happening in the global arena, what topics will be featured in NATO's Strategic Concept - and how should the Alliance think about addressing them?

The post Visualizing the NATO Strategic Concept: Five ways to look at the Alliance’s future appeared first on Atlantic Council.

]]>

This section is part of the Transatlantic Security Initiative’s Stronger with Allies series, which charts the course forward for the Alliance in conjunction with the 2022 NATO Summit.

At the upcoming NATO Summit in Madrid, the Alliance’s attention will be on the Russian invasion of Ukraine. President Vladimir Putin’s unprovoked and illegal war is transforming how the Euro-Atlantic—not to mention global—community views its security environment. The war is having a profound effect on NATO’s strategy, which is due for a refresh at the summit, with Alliance members set to agree on their new Strategic Concept—a critical document that will guide NATO’s political and military development for the foreseeable future.

Yet even before the invasion, NATO faced a dramatically changing security landscape. The systemic challenge from China, the existential threat of climate change, the emergence of disruptive technologies, the use of cyberattacks as a core instrument of power, supply-chain problems, democratic backsliding among allies and partners, questions about adequate defense investment, and more all combine to present a complex and unsettling future for the Alliance.

NATO’s forthcoming Strategic Concept will need to grapple with all of these issues while finding commonality among the diverse perspectives and priorities of its thirty members (with two more likely on the way).

So we asked our experts: With so much happening in the global arena, what critical but underappreciated topics will be featured in the Strategic Concept—and how should NATO think about addressing them?

Dual-use technologies

Natasha Lander Finch is a nonresident senior fellow at the Scowcroft Center’s Transatlantic Security Initiative and former advisor on countering weapons of mass destruction to the US Department of Defense.

The Future of NATO’s Partnerships

As NATO reconceptualizes its role to focus on defense and deterrence while also addressing non-traditional challenges such as emerging technologies and climate change, the Alliance should look for opportunities to strengthen climate and technology cooperation with partners, especially with its closest partner states and like-minded international organizations.

NATO’s network of partners extends to forty states around the world, and it includes some of the most innovative economies and global leaders in addressing climate change. According to the United Nations World Intellectual Property Organization Global Innovation Index, eight of the top twenty most innovative global economies are NATO partners. And according to the MIT Green Future Index, which evaluates countries’ ability to transition to a low-carbon future, six of the top twenty states are also NATO partners.

New Partnership Priorities

NATO should identify a set of priorities for cooperation that leverages not only its allies but the strengths of its partners. As evidenced in the data, partner states are international leaders on climate policy, sustainability, and clean technology. They also manage sophisticated markets and innovation ecosystems. They invest heavily in research and development. And they possess world-class human capital. They have as much to offer the Alliance as NATO can offer them in conversations about emerging and disruptive technologies, building climate resilience, science and technology standards, and responding to natural disasters and crises, among others.

The Madrid Strategic Concept will redefine the Alliance’s core tasks. The focus will be on defense and deterrence in the Euro-Atlantic, but cooperative security and relations with partners are still relevant given the myriad non-traditional challenges posed by climate, technology, and authoritarianism. Cooperative security is a means of strengthening the Alliance’s relationships with these global innovation and climate leaders, and leveraging their strengths and experiences to help shape and sustain the rules-based international order. 

Lisa Aronsson is a nonresident senior fellow in the Scowcroft Center’s Transatlantic Security Initiative and a research fellow at National Defense University.

Brett Swaney is an associate research fellow at National Defense University focused on NATO, Europe, and the Baltic Sea region. The views expressed are the authors’ own and do not necessarily reflect those of the National Defense University, the Department of Defense, or the US Government.

Threat perceptions across the alliance

Russia’s invasion of Ukraine has helped to sharpen the focus on the threat posed by the Kremlin, but it is not the only security challenge confronting NATO. To discern the diversity of allied threat perceptions and how the next Strategic Concept should address them, we studied the security strategies (produced before Russia’s war in Ukraine) from France, Germany, Italy, Poland, the United States, and the United Kingdom to see what the word count in each strategy might say about each country’s perceived greatest threats (e.g. words like China and cyber) as well its priorities (e.g. words like Europe/European and NATO).

Geographical concerns abound, with Poland squarely focused on Russia, Germany very Europe-centric, Italy oriented toward the Mediterranean, and France particularly invested in Africa. France and the United Kingdom made the only mentions of the Arctic among the group. China was of some concern to all of these allies, with the United States and France most invested in Indo-Pacific security, which reinforces why France was so bruised by the AUKUS agreement, as the region is a definite priority for Paris. Germany made the most mentions of NATO, alliances, and Europe, and its strategy very much reflects the long-held self-image of a Federal Republic nestled at the heart of Europe and multilateral institutions. The challenge for the drafters of the NATO Strategic Concept will be to reconcile US interest in the Asia-Pacific region with the more local interests of other allies. What role, if any, does NATO have regarding great-power competition in Asia? How exactly does the Alliance square the circle of requirements stretching from the Arctic to the Mediterranean?

The regional divergence was somewhat offset by similar perceptions of the primary challenges, with cyber issues featuring across the board. Terrorism and societal resilience to terrorist attacks remain prominent issues. The rise of authoritarianism and concerns about the strength of democratic societies are shared by many, but such concerns are not mentioned by Poland, which is unsurprising considering its own democratic backsliding. Nearly all the documents, especially the more recent ones, assert the challenge to the “liberal international order” and call for reinforcement of and support for global norms and international law. Nuclear weapons proliferation is a worry for some but not all, and migration featured in the documents of countries that expressed more concern with instability in NATO’s near abroad.

Michael John Williams is a nonresident senior fellow with the Scowcroft Center’s Transatlantic Security Initiative and director of the international relations program at the Maxwell School for Citizenship and Public Affairs at Syracuse University.

Natalie Petit is a graduate student in international relations at the Maxwell School for Citizenship and Public Affairs at Syracuse University. 

NATO’s Military Capacity Post-Ukraine

Moscow’s war against Ukraine has altered the European security environment. As allies reorient NATO’s focus back toward collective defense in the Strategic Concept, it is time for the Alliance to get serious about defense spending and move the discussion beyond rhetoric and toward measurable contributions to defense and deterrence. As this graphic indicates, though a number of allies already spend above 2 percent of gross domestic product (GDP) on defense, if all allies were to meet or exceed the pledge (agreed to in 2014), they would have nearly one hundred billion dollars more to invest where it’s needed most: readiness, capabilities, and capacity. Not to mention what Finland and Sweden can bring to the Alliance.

Readiness

Unit and individual readiness should be dramatically increased. Expanded NATO training and exercise programs should integrate advanced command and control, logistics support, and military mobility initiatives.

Capabilities

Technology applications should be accelerated, particularly cyber defense, artificial intelligence, autonomy, precision engagement, power, energy, and logistics.

Capacity

NATO’s enhanced Forward Presence in Poland and the Baltics should be expanded beyond battalion strength, leveraging $1-2 billion of US European Deterrence Initiative funding. Naval operations in the High North, Mediterranean, and Black Sea should be expanded, providing NATO with opportunities to increase maritime presence and awareness. 

Numerous current and future allies have renewed the 2 percent pledge and already committed substantial new resources to defense. Yet allies have far more capacity to act, and the Strategic Concept must both reassert this pledge and clearly prioritize for a public audience where these new resources should be spent. With a substantial and focused increase in defense investment, NATO could enhance European defense and deterrence by responding to the increased Russian threat with essential readiness, capability, and capacity upgrades. NATO allies must summon the will to respond to the new security environment Putin has created. Spending at the 2 percent level should be considered a floor, and not a ceiling, as we move toward the new NATO Strategic Concept. At this moment, NATO must lay out a clear level of ambition to realign national defense programs to the actual needs of transatlantic security.

Wayne Schroeder is a nonresident senior fellow in the Scowcroft Center’s Transatlantic Security Initiative and a former US deputy undersecretary of defense for resource planning and management.

Attributing Russian cyber activity

It is a common saying among cyber practitioners that there are two types of victims: “those who know that they have been hacked and those who have, but don’t know it yet.” Attribution of an attack through cyberspace requires technical information and the willingness to name names. Attribution can be tricky, though it happens with increasing frequency in hints and outright statements from governments as well as a sea of claims from private sector firms. To establish attribution, analysts might try to determine whether the cyberattack looks like, or originated from similar places in cyberspace as, attacks on other targets; whether the software used in the attack shares similarities with other known programs; or even what clues the language and time zone of the program provide (as simple as that may sound).
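This kind of indicator matching can be illustrated with a toy sketch. Everything below is hypothetical: the indicator names, weights, network range, and hash placeholders are invented for demonstration and do not come from the article; real attribution weighs far more evidence, far more carefully.

```python
# Toy indicator-matching sketch; all indicator names, weights, and values
# are hypothetical and for illustration only.

KNOWN_ACTOR_PROFILE = {
    "c2_infrastructure": {"185.86.148.0/24"},   # network ranges seen in past attacks
    "malware_hashes": {"ab3f...", "9c21..."},   # truncated placeholder hashes
    "source_language": "ru",                    # language artifacts in the binary
    "compile_timezone": "UTC+3",                # where build timestamps cluster
}

WEIGHTS = {
    "c2_infrastructure": 0.4,
    "malware_hashes": 0.4,
    "source_language": 0.1,
    "compile_timezone": 0.1,
}

def attribution_score(incident):
    """Return a rough 0..1 overlap score between an incident and the profile."""
    score = 0.0
    # Set-valued indicators match if the incident shares any element.
    for key in ("c2_infrastructure", "malware_hashes"):
        if incident.get(key, set()) & KNOWN_ACTOR_PROFILE[key]:
            score += WEIGHTS[key]
    # Scalar indicators match on equality.
    for key in ("source_language", "compile_timezone"):
        if incident.get(key) == KNOWN_ACTOR_PROFILE[key]:
            score += WEIGHTS[key]
    return round(score, 2)

incident = {
    "c2_infrastructure": {"185.86.148.0/24"},
    "malware_hashes": {"9c21..."},
    "source_language": "ru",
    "compile_timezone": "UTC+2",  # does not match the profile
}
print(attribution_score(incident))  # prints 0.9
```

Even a high score is only circumstantial, which is why public attribution ultimately requires the political will the paragraph describes, not just technical overlap.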

While government attribution against other states is more common now than even five years ago, it is still seen as a significant action in part because of the political will necessary to publicly decry offending states. This map identifies the NATO governments that have attributed an incident of cyber espionage and reconnaissance to Russia. As can be seen, the majority of NATO governments have publicly attributed cyber operations targeting sensitive official files and government personnel to Russia in recent years. In particular, the United States, Germany, France, the United Kingdom, Italy, and Poland have all reported breaches, and in some cases a multitude of them. Russia’s continued efforts to spy on the computer networks and classified systems of NATO governments, even when revealed in public, would suggest that the Kremlin is impervious to “naming and shaming” for these activities in cyberspace.

While cyberspace has taken its place firmly with air, land, sea, and space as one of the domains of modern warfare, the ease of connecting digitally across borders, significant role of the private sector, and a host of other factors can make cyberspace a challenging domain to manage. This is especially so when attacks are so common and, seemingly, useful to attackers. Until the United States and its NATO allies either increase the risks or lower the rewards for such attacks, Russia has no incentive to change course.

Paul Gebhard is a nonresident senior fellow in the Scowcroft Center’s Transatlantic Security Initiative and a vice president at the Cohen Group in Washington, DC.

The Transatlantic Security Initiative, in the Scowcroft Center for Strategy and Security, shapes and influences the debate on the greatest security challenges facing the North Atlantic Alliance and its key partners.

Vortex vector created by liuzishan – www.freepik.com

The post Visualizing the NATO Strategic Concept: Five ways to look at the Alliance’s future appeared first on Atlantic Council.

Fanti, CBDC cybersecurity report cited in Nextgov on Biden Administration’s approach to CBDCs https://www.atlanticcouncil.org/insight-impact/in-the-news/fanti-cbdc-cybersecurity-report-cited-in-nextgov-on-biden-administrations-approach-to-cbdcs/ Thu, 16 Jun 2022 21:05:51 +0000 https://www.atlanticcouncil.org/?p=538309 Read the full article here.

The post Fanti, CBDC cybersecurity report cited in Nextgov on Biden Administration’s approach to CBDCs appeared first on Atlantic Council.

Report on CBDC cybersecurity cited in investing.com regarding CBDC design https://www.atlanticcouncil.org/insight-impact/in-the-news/report-on-cbdc-cybersecurity-cited-in-investing-com-regarding-cbdc-design/ Thu, 16 Jun 2022 21:02:25 +0000 https://www.atlanticcouncil.org/?p=538265 Read the full article here.

The post Report on CBDC cybersecurity cited in investing.com regarding CBDC design appeared first on Atlantic Council.

CBDC cybersecurity report cited in CoinTelegraph on CBDC design options https://www.atlanticcouncil.org/insight-impact/in-the-news/cbdc-cybersecurity-report-cited-in-cointelegraph-on-cbdc-design-options/ Thu, 16 Jun 2022 21:00:02 +0000 https://www.atlanticcouncil.org/?p=538244 Read the full article here.

The post CBDC cybersecurity report cited in CoinTelegraph on CBDC design options appeared first on Atlantic Council.

White in CyberScoop on Russia and information operations https://www.atlanticcouncil.org/insight-impact/in-the-news/white-in-cyberscoop-on-russia-and-information-operations/ Thu, 16 Jun 2022 20:37:00 +0000 https://www.atlanticcouncil.org/?p=540749

The post White in CyberScoop on Russia and information operations appeared first on Atlantic Council.


On June 16, Forward Defense nonresident senior fellow Timothy J. “TJ” White was quoted in CyberScoop saying that the Russian focus on information operations has been unyielding. White, however, argued that Russian information war objectives have been thwarted to a large degree by Starlink satellite internet and by the fact that many Ukrainians have virtual private networks. White remarked that, despite the centrality of information operations to the present conflict, the US defense community still lacks coherent definitions in this area.

“[W]e haven’t decided yet what is or isn’t information operations, information warfare, cyberspace operations, operations in cyberspace that enable information operations”

TJ White
Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

Vladimir Putin’s Ukraine invasion is the world’s first full-scale cyberwar https://www.atlanticcouncil.org/blogs/ukrainealert/vladimir-putins-ukraine-invasion-is-the-worlds-first-full-scale-cyberwar/ Wed, 15 Jun 2022 14:53:00 +0000 https://www.atlanticcouncil.org/?p=537587 The current Russo-Ukrainian War is a major milestone in our developing understanding of cyber security. It is now clear that the invasion unleashed by Vladimir Putin on February 24 is the world’s first full-scale cyberwar.

The post Vladimir Putin’s Ukraine invasion is the world’s first full-scale cyberwar appeared first on Atlantic Council.

Ever since the dawn of the Internet Age, the potential to weaponize digital technologies as tools of international aggression has been known. This was underlined by Russia’s 2007 cyber-attack on Estonia, which was widely recognized as the first such act by one state against another. In 2016, NATO officially recognized cyberspace as a field of military operations alongside the more traditional domains of land, sea and air.

The current Russo-Ukrainian War represents the next major milestone in our rapidly developing understanding of cyber security. It is now becoming increasingly apparent that the invasion unleashed by Vladimir Putin on February 24 is the world’s first full-scale cyberwar.

It will take many years to fully digest the lessons of this landmark conflict and assess the implications for the future of international security. However, it is already possible to draw a number of preliminary conclusions that have consequences for individuals, organizations and national governments around the world.


The current war has confirmed that while Russian hackers often exist outside of official state structures, they are highly integrated into the country’s security apparatus and their work is closely coordinated with other military operations. Much as mercenary military forces such as the Wagner Group are used by the Kremlin to blur the lines between state and non-state actors, hackers form an unofficial but important branch of modern Russia’s offensive capabilities.

One month before the current invasion began, hackers hit Ukraine with a severe cyber-attack designed to weaken government structures and prepare the ground for the coming offensive. Critical infrastructure was targeted along with private data in a bid to undermine Ukraine’s ability to defend itself.

Again and again during the first few months of the conflict, we have witnessed the coordination of cyber operations with more conventional forms of warfare. On one entirely typical occasion, a cyber-attack on the Odesa City Council in southern Ukraine was timed to coincide with cruise missile strikes against the city.  

Just as the Russian army routinely disregards the rules of war, Russian hackers also appear to have no boundaries regarding legitimate targets for cyber-attacks. Popular targets have included vital non-military infrastructure such as energy and utilities providers. Hospitals and first responders have been subjected to cyber-attacks designed to disrupt the provision of emergency services in the immediate aftermath of airstrikes. As millions of Ukrainian refugees fled the fighting during the first month of the war, hackers attacked humanitarian organizations.

Individuals are also targets. Every Ukrainian citizen is potentially at risk of cyber-attack, with hacked personal data providing the Russian security services with opportunities to gain backdoor access to Ukrainian organizations and identify potential opponents or prepare tailored propaganda campaigns.

The scale of the cyber warfare currently being conducted against Ukraine is unprecedented but not entirely unexpected. Large-scale attacks began during the 2013-14 Euromaidan protests and initially enjoyed considerable success. This was followed by more ambitious attempts to hack into the Ukrainian electricity grid and spark power blackouts. Then came the Petya and NotPetya international cyber-attacks of 2016-17, which centered on Ukraine and caused huge global disruption.

It is clear that Russia’s current cyber offensive involves cybercriminals working in cooperation with military personnel while enjoying access to official intelligence data. This approach is relatively cheap, with cybercriminals often able to finance their operations using standard cyber fraud techniques. The idea of collaboration between the state and criminal elements is also nothing new. However, it is noteworthy that in this case, the state in question has a permanent seat on the United Nations Security Council. 

Perhaps the single most important outcome of the cyberwar so far is that we now have a much better picture of the enemy. We are able to see the threats posed by Russia and also assess Moscow’s limitations. Just as naval threats are countered by missiles and mines, cyber security is achievable given sufficient knowledge and resources.

Ukraine has come under unprecedented cyber-attack on a daily basis for more than a quarter of a year, but the Ukrainian authorities have managed to maintain basic utility services for the vast majority of the country. Even more striking is the fact that disruption to mobile communications and internet connections has been minimal. In many instances, Ukrainians have been able to access online information while under Russian bombardment.

One key lesson from the past few months is the need for everyone to take responsibility for their own cyber security. This applies to individuals and organizations alike. Neglecting cyber security risks creating weak links in wider systems which can have disastrous consequences for large numbers of people. Likewise, businesses should not rely on the state to take care of cyber security and should be prepared to invest in sensible precautions. This can no longer be viewed as an optional extra.

International cooperation is also vital for strong cyber security. Ukraine has received invaluable support from a number of partner countries while sharing its own experience and expertise. Much as the internet itself does not recognize national boundaries, the most successful cyber security efforts are also international in nature.

The Russian invasion of Ukraine has underlined the expansion of the modern battlefield to include almost every aspect of everyday life. The rise of the internet and the increasing ubiquity of digital technologies means that virtually anything from water supplies to banking services can and will be weaponized.

For years, the Kremlin has been developing the tools to carry out such attacks. The international community was slow to recognize the true implications of this strategy and is now engaged in a desperate game of catchup. The war in Ukraine has highlighted the military functions performed by hackers and the centrality of cyber-attacks to modern warfare. Restricting Russian access to modern technologies should therefore be viewed as an international security priority.

The Russo-Ukrainian War is the world’s first full-scale cyberwar, but it will not be the last. On the contrary, all future conflicts will have a strong cyber component. In order to survive, countries will need to treat cyber security as no less important than maintaining a strong conventional military.

Yurii Shchyhol is head of Ukraine’s State Service of Special Communications and Information Protection.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


Missing Key: The challenge of cybersecurity and central bank digital currency https://www.atlanticcouncil.org/in-depth-research-reports/report/missing-key/ Wed, 15 Jun 2022 13:00:00 +0000 https://www.atlanticcouncil.org/?p=535275 New research on the cybersecurity challenges posed by digital currencies and design models that can provide a more secure financial system.

The post Missing Key: The challenge of cybersecurity and central bank digital currency appeared first on Atlantic Council.


Key takeaways

  • Cybersecurity concerns should not prevent creation of a CBDC. It is up to policy makers to make the appropriate foundational design choices that will enable central banks and private payment service providers to develop safe CBDCs. All of this is possible under current technological systems.
  • Current US payment systems, administered by the Fed, industry associations, and commercial banks, face a complex cybersecurity landscape and represent a major point of attack for both organized crime and state-sponsored actors. 
  • Deploying a CBDC would create new cybersecurity risks for the financial system and its participants. The exact set of new risks will depend on the digital currency variant that a country chooses for its CBDC system. 
  • Each CBDC currency design variant presents different trade-offs in terms of performance, security, and privacy. Legislatures, finance ministries, and central banks should choose which CBDC design variant to deploy based on a country’s policy priorities.
  • The design space for CBDCs is larger than the often-presented trifecta of centralized databases, distributed ledgers, and token models. This offers policy makers and regulators ample options to choose a technological design that is both reasonably secure and leverages the unique benefits a CBDC can provide. Our report encourages the use of best practices from system design, such as proven consensus protocols and cryptographic primitives, as key components of CBDC deployments.
  • A privacy-preserving currency design can strengthen security. In a privacy-preserving CBDC deployment that initially declines to collect or subsequently restricts sensitive user data even from trusted system insiders, breaches will have significantly less severe security consequences. Given the overlap of privacy and security in CBDCs, Congress should consider certain key areas in future legislation including data collection and deletion, universal searches, Fourth Amendment protections, and potential penalties.
  • Cash-like privacy and regulatory oversight do not have to be at odds in a CBDC. It is possible to design systems where users enjoy reasonable levels of payment privacy and regulatory authorities can at the same time advance other important policy goals. For example, a CBDC could keep payment details fully private if the total value of all payments by the same individual does not exceed a certain predefined threshold value (e.g., $10,000 per month).
  • To address cross-border cybersecurity risks of introducing a CBDC, policymakers should promote global interoperability between CBDCs through international coordination on standard setting. Crafting international CBDC cybersecurity and privacy regulations with democratic values is in the United States’ national security interest. It will help safer models proliferate as opposed to alternatives which do not protect privacy and do not have the most advanced cybersecurity technology.
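The threshold rule in the takeaways above (payment details stay fully private while a user's monthly total stays at or below a cap, e.g., $10,000 per month) can be sketched as a simple policy check. This is a hypothetical illustration only: the class, method names, and plaintext bookkeeping are invented here, and a real privacy-preserving design would track running totals with cryptographic commitments rather than a readable table.

```python
# Hypothetical sketch of the report's threshold rule: payments stay fully
# private while a user's monthly total is at or below a cap; above it,
# details must be disclosed. A real design would use cryptographic
# commitments instead of this plaintext table.

MONTHLY_PRIVACY_THRESHOLD = 10_000  # dollars, per the report's example

class PrivacyBudget:
    def __init__(self):
        self.spent_this_month = {}  # anonymous credential id -> running total

    def process_payment(self, credential_id, amount):
        total = self.spent_this_month.get(credential_id, 0) + amount
        self.spent_this_month[credential_id] = total
        if total <= MONTHLY_PRIVACY_THRESHOLD:
            return "private"            # no payment details revealed
        return "disclosure_required"    # regulator sees details above the cap

budget = PrivacyBudget()
print(budget.process_payment("user-a", 9_500))   # prints private
print(budget.process_payment("user-a", 1_000))   # prints disclosure_required
```

The point of the sketch is that privacy and oversight are not binary: the same system can behave like cash below the cap and like a supervised payment rail above it.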

Foreword

The challenge of securing the dollar dates back to the earliest days of the United States. Benjamin Franklin famously printed currency with the phrase “to counterfeit is death”—and the British used counterfeit currency to try to devalue the Continental Dollar during the American Revolution.

In the modern era, security issues have multiplied with the rise of the Internet and the threat of cyberattacks. The United States Federal Reserve (Fed) considers cybersecurity a top priority and sees securing both the dollar and the international financial system as a core national security challenge. We are entering a new era of security and currency, one that requires responsible innovations in digital currency. This report examines the novel cybersecurity implications that could emerge if the United States issues a government-backed digital currency—known as a central bank digital currency (CBDC) or “digital dollar.”

This topic is fast-moving, consequential, and still somewhat nascent.

CBDCs have quickly landed on the international policy landscape. As of June 2022, according to Atlantic Council research, 105 countries representing 95 percent of the global GDP are researching and exploring the possible issuance of CBDCs. In the United States, spurred on by various domestic and international factors, the Fed has begun studying the issue and published a white paper in January 2022 that examines the potential benefits and risks of issuing a CBDC. In February 2022, the Federal Reserve Bank of Boston, in collaboration with the Massachusetts Institute of Technology, released test code and key findings on what a possible US CBDC might look like. But the government has so far demurred on whether it will actually issue a digital dollar, calling upon Congress to authorize such a major decision. Further complicating matters is the rapid ascendance of privately issued crypto dollars, sometimes referred to as stablecoins, which now surpass $130 billion in total market capitalization. As Fed Vice Chair Lael Brainard testified to the US House of Representatives’ Committee on Financial Services in May 2022, the recent collapse of the stablecoin TerraUSD raises new questions about the ways in which a CBDC could stabilize the digital asset ecosystem. 

The security of CBDCs has real-world import and is one of the major challenges to overcome if a CBDC is to be issued in the United States. Not just because of the classical counterfeiting scenarios or the possibility of a hacker looting the digital equivalent of Fort Knox, but also because a government-administered digital currency system could—depending on how it is designed—collect, centralize, and store massive amounts of sensitive data about individual Americans and granular details of millions of everyday transactions. For example, a CBDC could contain large volumes of personally identifiable information ranging from what prescription drugs you buy to where you travel each day. This could become a rich trove of data that could be stolen by advanced hackers or nation-states (similar to the reams of personal data collected from federal employees that were stolen in 2016). Separately, other security issues could arise, for example, misuse or exfiltration of data by inside employees, smaller-scale identity theft, or “gray” charges via opaque fees. However, as our analysis shows, many of these risks already exist in the current system and could be mitigated through an effectively designed CBDC.

The security of CBDCs has real-world import and is one of the major challenges to overcome if a CBDC is to be issued in the United States.

The debate around CBDCs in the United States is also, relatively speaking, in its infancy, with the Fed and Treasury Department often taking the lead thus far, and several CBDC-related bills percolating through Congress. Part and parcel of the conversation about how and whether to develop a CBDC in the United States is what it will look like and how secure it could be. These intertwined questions of policy, design, and security should be an increasing focus of the conversation, both among federal agencies and between the executive branch and Congress. The United States can, and should, play a leading role in international standard setting. US President Joseph R. Biden, Jr.’s recent executive order highlighted the importance of digital assets protecting democratic values. 

This report introduces key concepts, potential design trade-offs, and some policy principles that we hope can help federal stakeholders make foundational decisions around the future of CBDCs in the years ahead. While it is too early for a CBDC to be designed with ideal cybersecurity, efforts to dismiss a CBDC as uniquely and categorically vulnerable to cyberattacks have overstated the risk. This report puts forward a road map for policy makers to build secure CBDCs. 

Executive summary 

This report examines the novel cybersecurity implications that could emerge if the United States or another country issues a Central Bank Digital Currency (CBDC). Central banks consider cybersecurity a major challenge to address before issuing a CBDC. The United States Federal Reserve (Fed) sees securing both the dollar and the international financial system as a core national security imperative. According to Atlantic Council research, currently 105 countries have been researching and exploring the possible issuance of CBDCs, with fifteen in pilot stage and ten fully launched.1 Of the Group of Twenty (G20) economies, nineteen are exploring a CBDC with the majority already in pilot or development. This raises immediate questions about cybersecurity and privacy. A government-issued digital currency system could, but does not necessarily need to, collect, centralize, and store massive amounts of individuals’ sensitive data, creating significant privacy concerns. It could also become a prime target for those seeking to destabilize a country’s financial system. 

This report analyzes the intertwined questions of policy, design, and security to focus policy makers on how to build secure CBDCs that protect users’ data and maintain financial stability. Our analysis shows that privacy-preserving CBDC designs are not only possible, but also come with inherent security advantages, compared to current payment systems, that may reduce the risk of cyberattacks. Divided into three chapters, the report:

(1) provides a brief background on the Fed’s process as a baseline for central banks’ current cybersecurity measures;

(2) explores the novel cybersecurity implications of different potential CBDC designs in depth; and 

(3) outlines legislative and regulatory principles for policy makers in the United States and beyond to set the conditions for secure CBDCs.

Payment systems’ status quo: How the Federal Reserve currently secures payments

Current wholesale and retail payment systems face a complex cybersecurity landscape and represent a major point of attack for both organized crime and state-sponsored actors. Cybersecurity risks posed by CBDCs must be assessed relative to this landscape.

A targeted attack on wholesale payment infrastructures, such as the Fed’s domestic funds transfer system, Fedwire, could cause major global financial shocks, including severe liquidity shortfalls, commercial bank defaults, and system-wide outages that would affect most daily transactions and financial stability. There would also be secondary effects, including severe market volatility. To minimize the risk of cyberattacks and reduce the impact of successful hacks, the Fed’s current measures include regular contingency testing for high-volume and high-value Fedwire participants; redundancy requirements, such as backup data centers and out-of-region staff; and transaction value limits. Other risks for the wholesale payments infrastructure include attacks on the Society for Worldwide Interbank Financial Telecommunication (SWIFT) messaging system. After recent attacks revealed significant vulnerabilities, SWIFT and its member banks have taken several steps to shore up their defenses, focusing on stronger security standards and quicker response.

Key cybersecurity risks for retail payment systems include credit and debit fraud, which collectively caused nearly $25 billion in damages in 2018 worldwide; fraud by system insiders affecting platforms like the Automated Clearing House (ACH) in the United States; and user error, such as falling prey to phishing scams. Risk management strategies for retail payments often rely on voluntary industry standards, such as the Payment Card Industry Data Security Standard (PCI DSS). To counter phishing and other types of user error, ACH and other platforms require unique user credentials and offer merchants additional steps like micro validation, tokenization and encryption, and secure vault payments.
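One of the merchant-side protections mentioned above, tokenization, can be sketched as follows. This is a hypothetical illustration, not any real PCI DSS implementation: the `TokenVault` class and its methods are invented here. The core idea is that merchant systems only ever hold a random surrogate, while the mapping back to the real card number lives in a separately secured vault.

```python
# Hypothetical sketch of payment-card tokenization: the merchant stores only
# a random token; the mapping to the real card number lives in the vault.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_pan = {}  # token -> real primary account number (PAN)

    def tokenize(self, pan):
        token = secrets.token_hex(8)  # random; carries no card information
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token):
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
# A breach of merchant systems exposes only the token, which is useless
# without the vault's mapping.
assert vault.detokenize(token) == "4111111111111111"
assert token != "4111111111111111"
```

This separation is why tokenization shrinks the attack surface: stealing tokens from a merchant yields no usable card data unless the vault itself is also compromised.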

In sum, the various technical systems administered by the Fed, industry associations, and private banks already face considerable cybersecurity challenges.

Cybersecurity of CBDCs—Threats and design options

While a CBDC would be subject to many of the same cybersecurity risks as the existing financial systems, deployment of a CBDC would also create new risks. Depending on the choice of CBDC design, potential new cybersecurity risks include (but are not limited to): 

  • Increased centralization of payment processing and sensitive user data. It is possible a central bank would store user activity and transactions. 
  • Reduced regulatory oversight of financial systems
  • Increased difficulty reversing fraudulent or erroneous transactions 
  • Challenges in payment credential management and key custody
  • Susceptibility to erroneous or malicious transactions enabled by complex, automated financial applications
  • Increased reliance on third parties (e.g., non-banks)

The exact set of new cybersecurity risks depends largely on the digital currency variant that a country chooses for its CBDC system. Each digital currency variant also provides different properties in terms of system scalability, system robustness, user privacy, and networking requirements. Since each currency design variant presents different trade-offs in terms of performance, security, and privacy, the choice of which digital currency design variant to deploy as a CBDC is a policy choice for finance ministries, central banks, and legislatures. It should be driven by a thorough analysis of the relative technical trade-offs. This report reviews various possible digital currency design variants and compares the benefits and risks of each. Our analysis also challenges the prevailing thinking in several ways and outlines the following findings:

Finding 1: The design space for CBDCs is larger than the often-presented trifecta of centralized databases, distributed ledgers, and token models.

  • For example, this means that both ledger and token-based payments can embody robust privacy protection through certain cryptographic measures.

Finding 2: CBDCs can enable both strong user privacy and (some level of) regulatory oversight at the same time.

  • It is possible to design systems where users enjoy reasonable levels of payment privacy and regulatory authorities can at the same time advance other important policy goals.

Finding 3: A privacy-preserving currency design can inherently provide security advantages.

  • In a privacy-preserving CBDC deployment that initially declines to collect or subsequently restricts sensitive user data even from trusted system insiders, breaches will have significantly less severe security consequences.

Finding 4: It is critical to use best practices from system design, such as proven consensus protocols and cryptographic primitives.

  • Distributed security protocols, such as those used to secure distributed ledgers, can introduce subtle new design challenges and security trade-offs. This report encourages the use of well-tested protocols with provable security guarantees as key components of CBDC deployments.

Principles for future legislation and regulation

With most governments, including in the United States, still weighing whether to develop a CBDC, this report identifies key principles to help guide policy makers and regulators on how to deploy a CBDC with robust cybersecurity protections in mind.

Principle 1: Where possible, use existing risk management frameworks and regulations.

  • Depending on the CBDC design, policy makers and regulators should assess which areas of a new CBDC ecosystem will be covered by current laws and regulations and where novel statutes—or new technical frameworks—might be necessary to provide adequate protection.
  • When crafting new regulations for a CBDC, policy makers and regulators should set the conditions for a safe digital currency ecosystem that enables financial intermediaries to innovate and compete.

Principle 2: Privacy can strengthen security.

  • Privacy-preserving CBDC designs can have security benefits because they reduce the risk and potential harmful consequences of cyberattacks associated with data exfiltration and the centralization of detailed personally identifiable information.
  • CBDCs can offer cash-like privacy, while potentially providing reasonable oversight options to regulatory authorities. A CBDC’s level of privacy is a legislative and political choice that will filter through to the digital currency’s design and determine its cybersecurity profile.

Principle 3: Test, test, and test some more.

  • Governments should ensure that they have full access to, and can directly oversee, security testing and audits for all CBDC implementation instances. To enable extensive testing and security audits, the US Congress should consider appropriating funds accordingly as part of next year’s budget process and allocating them to a pilot project.
  • Open-source CBDC code bases may be valuable for various reasons, including because they allow for more participation in the security testing process, especially when combined with longer-term bug bounty programs. Nonetheless, they still require due attention, funding, and staffing to maintain and monitor the code base over the long run.

Principle 4: Ensure accountability.

  • The overall framework governing CBDCs needs to establish clear rules and policies surrounding accountability for errors, breaches, and resulting consequences (both technical and financial).
  • For CBDCs that rely on distributed ledger technology (DLT), it is paramount to clearly establish accountability requirements among validators on the blockchain.

Principle 5: Promote interoperability.

  • To increase the resiliency of countries’ existing financial systems, policy makers should develop rules to ensure that a CBDC is interoperable with the country’s relevant financial infrastructure.
  • To strengthen the security of CBDC systems, US leadership is critical to promote global interoperability between CBDCs through international coordination on regulation and standard setting through fora like the Group of Seven (G7), the G20, the Financial Stability Board (FSB), and the Financial Action Task Force (FATF).

Principle 6: When new legislation is appropriate, make it technology neutral.

  • The US Congress can help study and oversee the application of federal cybersecurity laws to a potential CBDC with the goal of developing laws that apply evenhandedly to different technologies over time.
  • Congress may consider using incentives and accountability measures for CBDC development, or set security requirements by empowering a federal agency to develop a cybersecurity framework for a CBDC as part of a pilot project.

Background: How the United States currently secures its payment systems 

Cybersecurity is an area of concern not only for CBDCs but also the current financial and payment systems. Any study of CBDCs’ cybersecurity must assess them relative to this current infrastructure and recognize how they will interact to alter and potentially remedy existing vulnerabilities. Additionally, it must draw lessons from how central banks currently handle payments’ cybersecurity.

The Fed has recognized the immense risks posed by cyberattacks to the current financial system. Asked in April 2021 about the chances for a systemic breakdown like the 2008 financial crisis, Federal Reserve Chairman Jerome Powell said that “the risk that we keep our eyes on the most now is cyber risk.” He specifically singled out a scenario in which a “large payment utility…breaks down and the payment system can’t work” or “a large financial institution would lose the ability to track the payments that it’s making.”2 At a conference in October, Loretta J. Mester, president of the Federal Reserve Bank of Cleveland, argued “there is no financial stability without cybersecurity.”3 Because the Fed issues the world’s reserve currency, its cybersecurity models hold outsized importance for the global economy. The Fed’s standards have also become models for cybersecurity across central banks.

Payments overview

The current payment system comprises three categories: retail, wholesale, and cross-border.4 Retail payments are what the vast majority of Americans interact with: purchasing groceries with a credit card, buying Cracker Jack at a baseball game with a five-dollar bill, or shopping online with payment service providers. The wholesale system operates in the background, serving as the plumbing of the financial system by enabling the transfer and settlement of funds between financial institutions. Cross-border payments are between different countries and require international coordination to bridge national systems.

All three of these systems could be impacted or overhauled by a CBDC. A CBDC is the digital form of a country’s fiat currency that is also a claim on the central bank. Instead of printing money, the central bank issues electronic coins or accounts backed by the full faith and credit of the government. This differs from current “e-money” because it is a direct liability of the central bank, like paper cash. A CBDC could take multiple forms: a retail CBDC would be issued to the public to enable fast and secure payment, while a wholesale CBDC would only be accessible by banks and would facilitate large-scale transfers. According to the Atlantic Council’s CBDC tracker, forty-five of the 105 countries pursuing a CBDC are focused on its retail use, while eight are exclusively developing it for a wholesale purpose, and twenty-three are doing both (with the remaining twenty-nine undecided).5 On the cross-border payments front, multiple partnerships between countries, such as Project Dunbar among South Africa, Singapore, Malaysia, and Australia, are piloting cross-border payments using CBDCs.

The key components of the United States’ current payment systems are described below.6

  • Fedwire is the Fed’s domestic and international funds transfer system that handles both messaging and settlement. 
  • The Clearing House Interbank Payments System (CHIPS), privately operated and run by its member banks, handles dollar-denominated domestic and international funds transfers.7
  • The Society for Worldwide Interbank Financial Telecommunication (SWIFT), operated as a consortium by member financial institutions, is a global messaging system that interfaces with Fedwire and CHIPS for the actual settlement of payments.8
  • FedNow will complement the Fed’s Fedwire with instant, around-the-clock settlement and service. A full rollout is planned over the next two years. 
  • The Automated Clearing House (ACH) is a network operated by the National Automated Clearing House Association (Nacha) that aggregates US transactions for processing and enables bank-to-bank money transfers.

While CBDCs will likely play a role in all three levels of the payment system, this background chapter as well as the report’s appendix predominantly examine risks to payment systems in which central banks are involved. That currently means the wholesale system. The Fed’s approach to securing wholesale payments sheds light on its current cybersecurity practices and how it might handle a CBDC. We also briefly examine the retail payment system to understand cyber risks that a retail CBDC could impact.

A targeted attack on wholesale payment infrastructures, such as Fedwire, could cause major global financial shocks, including severe liquidity shortfalls, commercial bank defaults, and system-wide outages that would affect most daily transactions and financial stability. There would also be secondary effects, including severe market volatility. To prevent cyberattacks and reduce the impact of successful hacks, the Fed’s current measures include regular contingency testing for high-volume and high-value Fedwire participants; redundancy requirements, such as backup data centers and out-of-region staff; and transaction value limits. Other risks for the wholesale payments infrastructure include attacks on the SWIFT messaging system. After recent attacks revealed significant vulnerabilities, SWIFT and its member banks have taken several steps to shore up their defenses, focusing on stronger security standards and quicker response times. 

Key cybersecurity risks for retail payment systems include credit and debit card fraud, which collectively caused nearly $25 billion in damages in 2018 worldwide; fraud by system insiders affecting platforms like ACH in the United States; and user error, such as falling prey to phishing scams.9 Risk management strategies for retail payments often rely on voluntary industry standards, such as the Payment Card Industry Data Security Standard (PCI DSS). To counter phishing and other types of user error, ACH and other platforms require unique user credentials and offer merchants additional steps like micro validation, tokenization and encryption, and secure vault payments.

Information security is generally assessed along three core principles known as the CIA triad: confidentiality, integrity, and availability.10 Confidentiality requires that data are only accessible to those who are authorized.11 For payments, this means that data about participants and their transactions are kept private. Countermeasures to ensure confidentiality focus on areas like authentication, encryption, and educating users.12 Integrity means that data are “correct, authentic, and reliable” and can thus be trusted to not have been tampered with.13 This is accomplished via hashing and controlling access.14 In payments, integrity is linked to the need for non-repudiation: the payor cannot deny sending the payment, and the payee cannot pretend to have not received it.15 Finally, availability means that the system is up and running, allowing users to have timely and reliable access.16 In payments, this could be hampered by an attack on a specific institution or by the failure of supporting infrastructure like data centers. Securing availability can be done by hardening systems against attacks and building in redundancy.17
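As a concrete illustration of the integrity countermeasures mentioned above, the sketch below uses a keyed hash (HMAC) to detect tampering with a payment message. The field names and hard-coded key are hypothetical; note also that an HMAC alone does not provide non-repudiation, since the key is shared between payor and payee, so digital signatures would be used where that property is required.

```python
import hashlib
import hmac

# Hypothetical payment message; in practice the key would be managed
# by the payment infrastructure, never hard-coded.
key = b"demo-shared-key"
payment = b"payer=A;payee=B;amount=100.00;nonce=42"
tag = hmac.new(key, payment, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    """Recompute the tag; any tampering with the message changes it."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

# An attacker who alters the amount cannot produce a valid tag
# without knowing the key.
tampered = b"payer=A;payee=B;amount=900.00;nonce=42"
```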

Looking ahead to CBDCs

Chapter 1 assesses the cybersecurity risks facing CBDCs and how design choices will shape vulnerabilities using a framework derived from the CIA triad but customized to the challenges of CBDCs. Understanding how CBDCs will fit into the existing landscape is crucial for turning this insight into actionable steps for policy makers, which we explore in Chapter 2. 

Chapter 1: Cybersecurity of CBDCs—Threats and design options

This chapter discusses the cybersecurity of CBDCs. A central theme, which pervades all aspects of this chapter, is how CBDCs may centralize data and control over the financial system. Although the current financial system is already relatively centralized (e.g., in the United States, more than 50 percent of banking assets in 2022 were controlled by just four banks),18 CBDCs have the potential to significantly increase centralization by storing a single ledger or similar data repository that aggregates transaction data from all participants. The ledger could even include data from payment modalities that are currently difficult to monitor, such as cash. Such dramatic centralization of CBDCs could have downstream effects that are difficult to predict or manage. For example, a database containing an entire nation’s financial transactions would represent an unprecedented target for cybercriminals. It could also provide unscrupulous regimes with a mechanism for mass surveillance. Such threats can be mitigated in part through technical design choices, but every design comes with implications (and trade-offs) regarding security, privacy, performance, and usability, to name a few. This chapter discusses a landscape of possible design variants, while highlighting the relevant trade-offs.19

We start our discussion by introducing the different roles that would be involved in a typical CBDC deployment, their primary tasks, and trust assumptions. After that, we introduce a threat model for CBDCs by discussing the main security requirements and involved threat actors. Then, we review common digital currency variants and analyze them with respect to the established threat model. We complete our analysis with a comparison that shows the main advantages and drawbacks of different currency designs. Finally, through case studies, we show how a few noteworthy CBDC pilot projects fit into our classification. The key contributions of this chapter are as follows.

Key contributions of this chapter

Systemize knowledge: We define a framework for systematically analyzing and comparing digital currency designs. We show the main pros and cons of common digital currency variants and explain how noteworthy existing CBDC pilot projects fit into our classification. We also identify potential cybersecurity risks involved in each currency variant.

Highlight recent research advances: As part of our review, we also highlight recent developments from the research community and possible digital currency design alternatives that are not yet typically considered in most CBDC reports. Such designs can enable improved user privacy or transaction validation scalability, for example.

Clarify common misconceptions: Throughout our discussion, we also point out common misconceptions, recurring harmful practices, or otherwise bad patterns related to the design and deployment of digital currencies. 

Roles and trust assumptions

Currency issuer. Every CBDC system needs an entity that creates money. We call this role currency issuer. In most envisioned CBDC deployments, this role would be played by a central bank. In a private digital currency, this role could also be played by a private company. The currency issuer should be trusted by all system participants for the correctness of money creation. That is, the money created by the issuer is considered valid by everyone involved in the system. This entity does not necessarily need to be trusted for all other aspects of the system, such as user privacy or payment validation.

Payment validator. CBDC systems require entities that keep the system running and provide the needed infrastructure for other participants. One such infrastructure role is the payment validator that approves payments and records them into data storage, such as a database or ledger. The role of the payment validator could be distributed among several nodes for increased security and performance, as will be discussed later in this chapter. The role of the payment validator could be taken by the central bank, or alternatively, it could be delegated to another public authority or to commercial banks. The payment validator needs to be trusted to verify the correctness of payments, but not necessarily for other properties, such as money creation or user privacy.

Account provider. Another infrastructure role in a typical retail CBDC system is an account provider that allows users to register, obtain payment credentials (e.g., in the form of a digital wallet), and start making CBDC payments. In most retail CBDC deployments, the account provider would need to verify the identity of the user before account creation. Most likely, central banks would not want to interface with users directly and, therefore, this role would be better served by commercial banks that already have existing customer relationships. The account provider, such as a commercial bank, would need to be trusted for the verification of users’ identities. In a custodial solution, the account provider could also be trusted with the management of users’ payment credentials and could control users’ monetary assets. In a non-custodial solution, the account provider would not control any monetary assets on behalf of the users. The role of account provider may not be needed in a wholesale CBDC deployment where the end users are financial institutions like commercial banks.

Payment sender and recipient. We consider two types of end users: payment senders and payment recipients. In a retail CBDC system, such users could be private individuals, commercial companies, or other legal entities. Such users would typically perform payments through a client device, such as a smartphone, that holds the payment credentials obtained from the account provider. For specific use cases, such as visiting tourists, other solutions are likely to be needed for obtaining payment credentials. In a wholesale CBDC, the payment sender and recipient could be commercial banks performing an inter-bank settlement. Payment senders and recipients are generally not trusted by other system participants. Instead, it is assumed that users may behave arbitrarily or even fully maliciously.

Regulator. Another role that we consider is the regulator. The task of the regulator is to ensure that all payments in the system conform to requirements such as anti-money laundering rules. For example, in the United States, the recipients of a cash payment worth more than $10,000 are required to report the payment details to the Internal Revenue Service (IRS). In a CBDC deployment, all payments that exceed a similar threshold amount could be automatically forwarded to the regulator for audit. While the regulator is trusted to examine specific payments and report non-conforming payments, in a well-designed CBDC system, all details of all payments do not necessarily need to be visible to the regulator. For example, receiving $50 fully anonymously (i.e., such that even the regulator cannot see the payment details of the transaction) should be possible. We discuss the challenges involved in realizing such privacy-preserving regulation later in this chapter.
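The threshold-based reporting described above can be sketched in a few lines. Only the $10,000 threshold comes from the text; the data model is invented for illustration:

```python
# Hypothetical sketch of privacy-preserving, threshold-based regulatory
# reporting: only payments above the threshold are forwarded to the
# regulator, while smaller payments never reach it and can remain
# fully anonymous even toward the regulator.
REPORT_THRESHOLD = 10_000

def requires_report(amount: int) -> bool:
    """Return True if a payment must be forwarded to the regulator."""
    return amount > REPORT_THRESHOLD

payments = [50, 9_999, 10_001, 250_000]
forwarded_to_regulator = [p for p in payments if requires_report(p)]
```

The real design challenge, discussed later in the chapter, is enforcing such a rule cryptographically so that below-threshold payments stay hidden even from system insiders, rather than relying on a trusted filter as this sketch does.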

Technology provider. In a retail CBDC, the needed payment application could be provided by a technology company. For example, in the digital yuan pilot in China, the CBDC payment functionality is integrated into popular smartphone payment applications, such as Alipay from Ant Group and WeChat Pay from Tencent. In addition to providing the payment application, in a custodial deployment, the technology provider may assist the user in payment credential management. The end users (i.e., payment senders and recipients) need to trust the technology provider for the correctness of the payment application and potentially also for the management of payment credentials. In a wholesale CBDC, the payment senders and receivers (commercial banks) could obtain the needed (settlement) technology from an external software vendor.

Figures 1a and 1b below illustrate the typical relationships between these roles in retail and wholesale CBDC deployments, respectively. 

Figure 1a. Main roles involved in a retail CBDC system

Source: Figure created by Kari Kostiainen with icons licensed from Freepik Company. 

Figure 1b. Main roles involved in a wholesale CBDC system

Source: Figure created by Kari Kostiainen with icons licensed from Freepik Company.

Threat model

To understand the cybersecurity implications of CBDCs, it is important to first specify the threat model. In this section, we will highlight the security requirements and the threat actors that are relevant to CBDCs. 

Requirements

CBDCs should satisfy a number of properties, both security and performance related. These requirements are intertwined: different design variants can have different implications for each of these requirements. 

Integrity. The integrity of a financial system refers to its ability to ensure that money transfers and money creation are correct. In other words, it should not be possible to create or delete money out of thin air. It should also not be possible to transfer funds that do not belong to the sender.
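This requirement can be expressed as a simple invariant, sketched below with invented account data: every valid transfer conserves the total money supply and never overdraws the sender.

```python
# Minimal sketch of the integrity invariant. Account names and
# balances are illustrative; a real system would enforce this at
# the validation layer, not in application code.
balances = {"alice": 100, "bob": 50}
total_before = sum(balances.values())

def transfer(sender: str, recipient: str, amount: int) -> None:
    """Move funds only if the sender actually owns them."""
    if amount <= 0 or balances[sender] < amount:
        raise ValueError("invalid transfer")  # would create money or overdraw
    balances[sender] -= amount
    balances[recipient] += amount

transfer("alice", "bob", 30)
# Total supply is unchanged: no money created or destroyed.
```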

Authentication and authorization. Only the legitimate owner of money should be able to transfer said money. In current payment systems, this is typically achieved through a two-step process. Authentication refers to the process of verifying a user’s identity.20 Authorization refers to the process of verifying the transaction details, such as the recipient’s identity and the amount to be paid. In some CBDC design variants, these two processes can be intertwined, so we address them jointly in this report. 
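A minimal sketch of this two-step process follows, with an invented credential store and account table; real systems would use cryptographic credentials rather than plaintext lookups.

```python
# Hypothetical illustration separating authentication (verifying who
# the user is) from authorization (verifying that this exact
# transaction may proceed). All data here is invented.
credentials = {"alice": "credential-123"}   # stand-in for real credentials
account_owner = {"acct-1": "alice"}
balances = {"acct-1": 500}

def authenticate(user: str, cred: str) -> bool:
    """Step 1: verify the user's identity."""
    return credentials.get(user) == cred

def authorize(user: str, account: str, amount: int) -> bool:
    """Step 2: verify the transaction details (ownership and funds)."""
    return account_owner.get(account) == user and balances[account] >= amount
```

In some CBDC variants these two checks collapse into one, e.g., when spending a token is authorized by a signature from the key that also identifies its owner.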

Confidentiality. Transactions should not be visible to unauthorized parties (e.g., telecommunications providers). Confidentiality is typically achieved via encryption of data in transport over untrusted channels. Such techniques are widely used in the banking industry today, and we do not expect them to vary significantly across different CBDC variants (though they may need to be updated due to emerging technologies, such as quantum computing). Because of this, we will not analyze confidentiality separately in the remainder of this document. 

Privacy. Whereas confidentiality aims to protect data from unauthorized parties, privacy aims to protect user information (e.g., payment transaction details) from authorized parties, such as payment validators. While these two concepts are closely related, we treat them as separate. Deciding what level of privacy to provide is a political decision as well as a technical one, and has repercussions for the architecture and design of the CBDC. 

Incorporating privacy protections into a CBDC design is important for two main reasons. The first reason is that the privacy of end users is valuable in itself. CBDCs will inevitably aggregate tremendous amounts of financial data, and consequently some central banks have indicated that their goal is not to build a tool of mass surveillance.21 Additionally, the successful adoption of CBDC technology may require that the deployed system meets the privacy expectations of end users. In a recent survey on the digital euro, participants rated privacy as the most important feature of a possible CBDC deployment.22 The second reason is that a system with strong privacy protections is also inherently more secure. If a system that collects huge amounts of sensitive user data does not include privacy protections and is breached, then all the sensitive information will be disclosed to the attacker and, potentially, to other unauthorized parties, which violates confidentiality. In a privacy-preserving design that hides sensitive user data even from trusted system insiders, a similar breach or insider attack will have significantly less severe consequences for security and confidentiality.

Takeaway: Privacy-conscious design can also provide security benefits
If a CBDC deployment without privacy protections is breached, either by an external attacker or a malicious insider, then all the sensitive user information is disclosed to unauthorized parties. In a privacy-preserving CBDC deployment that hides sensitive user data even from trusted system insiders, breaches will have less severe security consequences.

Resilience. The system should be robust to faults, or failures, of different components of the system. Typical faults include infrastructure failures (e.g., a server crashes), software-level failures (e.g., a program stops executing), and protocol-level failures (e.g., a validator node misbehaves). Faults can be either accidental (e.g., random infrastructure failures) or intentional (e.g., caused by misbehaving nodes). 

An important aspect of resilience is availability. System availability is often specified in terms of uptime; a common goal is “five nines,” i.e., the system is operational 99.999 percent of the time. As a result, the system must be able to process payments even if some parties are offline, including back-end infrastructure, the payment sender, or the payment recipient.
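The downtime budget implied by a "five nines" target is easy to compute:

```python
# Downtime allowed per year under a "five nines" availability target.
minutes_per_year = 365 * 24 * 60        # 525,600 minutes in a non-leap year
availability = 0.99999
downtime_minutes = minutes_per_year * (1 - availability)
# Roughly 5.3 minutes of total downtime permitted per year, which is
# why offline payment support and redundant infrastructure matter.
```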

Another relevant dimension of resilience is transaction revertability. Fraudulent transactions are very common in financial systems. Ideally, if a transaction can be shown to be fraudulent, authorized parties, such as payment validators, should be able to revert the transaction, i.e., add the paid amount back to the payment sender’s account balance and deduct the paid amount from the recipient’s balance. 

Network performance and costs. The system must be highly performant to process nation-scale financial transactions. Common performance metrics include throughput (number of transactions that can be processed per second) and latency (time to transaction confirmation). For comparison, the Visa credit card network currently processes 1,700 transactions per second on average and is capable of processing up to 24,000 transactions per second.23 Meanwhile, typical transaction latencies for digital payments are in the order of seconds. 

At the same time, CBDCs will inherently incur communication (or bandwidth) and computation costs. These costs are divided between the back-end infrastructure and end users. In general, a CBDC is expected to impose high costs on back-end infrastructure, both in terms of computation and communication. As such, we do not focus further on back-end resource costs in this report. However, certain potential designs (e.g., privacy-preserving ledgers) require access to the entire ledger, in encrypted form, to verify the validity of transactions. This imposes significant bandwidth requirements on end users, as well as substantial computational requirements. These costs must be weighed against the associated privacy benefits.

Governance. The maintenance of a CBDC may involve the participation of multiple parties, including application developers, hardware manufacturers, cloud service providers, and transaction validators. It is important to ensure that these parties have well-designed guidelines for managing operations and conflicts. In addition, all parties should be incentivized to behave correctly and reliably. For example, in the case of distributed transaction validation pipelines, validators should be incentivized to validate transactions promptly and correctly (e.g., in the order they were received), and there should be clear policies in place for managing unfulfilled commitments.

Layers of the technical stack

Attackers can exploit different components of a CBDC to achieve their goals. In this section, we outline the CBDC technical stack, illustrated in Figure 2. In other words, these are the conceptual components that an attacker could target using different vulnerabilities and offensive capabilities. These layers are not exhaustive, and attackers can launch cross-layer attacks.

Human. Although end users are not part of a technical CBDC implementation, they can be exploited to affect system security at large. Users can be both a vector for launching attacks and victims. Examples of relevant attacks include fraud and money laundering. Operators of the CBDC can also be a source of vulnerability, e.g., if phishing attacks give adversaries access to the CBDC’s control mechanisms.

Application. CBDCs are expected to usher in an ecosystem of new applications that can interface seamlessly with the digital payment system. Potential use cases include mobile applications for seamless disaster relief, more efficient tax processing, and everyday transaction processing.

Figure 2: CBDC technical stack

Source: Authors.

Many of these applications will likely be developed independently of the underlying CBDC infrastructure, just as mobile application developers are typically independent of device issuers. This has several security implications. In particular, it may be difficult to control the security specifications and properties of applications. Developers can introduce vulnerabilities (consciously or not) that can be exploited to steal money or exfiltrate data. While application-level threats or failures may not be directly the fault of the CBDC, they can affect the viability of the CBDC as a whole, as seen in the early release of the eNaira in Nigeria, for example.24 The application ecosystem is, therefore, an important layer in the CBDC stack from a security perspective.

Consensus. In order to provide redundancy against unforeseen factors like faulty devices, compromised infrastructure, and resource outages, many proposed CBDC designs involve the use of consensus protocols: decentralized processes for determining the validity of financial transactions among multiple payment validators, for example. Consensus protocols can be designed with varying degrees of robustness to adversaries of varying strengths. At a high level, they provide robustness through redundancy: transactions are approved only pending the approval of multiple parties, according to specific, carefully designed protocols. The participants in consensus protocols could be different stakeholders in the systems (e.g., different banks running validator nodes) or they could be different servers controlled by the central bank but running on different infrastructure (e.g., in different data centers). For example, the Swedish e-krona uses distributed ledger technology (DLT) for consensus in which different stakeholders like banks run their own payment validator nodes.25

Attacks on the consensus protocol typically involve the corruption of one or more parties. Good protocols are designed to be robust up to some threshold number of corruptions. However, consensus protocols are notoriously subtle; to provide true robustness to malicious faults, they should be accompanied by mathematical security guarantees. Further, even when those security guarantees exist, they rely on assumptions about the adversary that may not hold in practice (e.g., many protocols assume the adversary can only corrupt up to one-third of all validator nodes). 
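The one-third bound mentioned here comes from the classical requirement that Byzantine fault-tolerant consensus among n validators tolerates at most f faults when n ≥ 3f + 1. A minimal sketch of the resulting fault budget:

```python
# Fault budget under the classical BFT bound n >= 3f + 1.
# This is the standard threshold for protocols like PBFT; specific
# consensus designs may assume different bounds.
def max_faulty(n: int) -> int:
    """Largest number of Byzantine validators f tolerated by n nodes."""
    return (n - 1) // 3

# A 4-node deployment tolerates a single Byzantine validator;
# adding nodes raises the budget only in steps of three.
```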

Computation and storage. CBDCs require back-end infrastructure to maintain a secure and functional payment system. For example, they may require distributed computation nodes to parallelize transaction processing in the face of stringent performance requirements. To the extent possible, ledger storage may also be distributed to reduce the load on any single node. However, some security mechanisms are easier to parallelize than others. For example, ledger-based systems typically require the full ledger to ascertain transaction validity; hence splitting the ledger into shards can affect the system’s ability to correctly validate transactions.

Network. The validation of transactions, issuance, deletion of money, and all other events in a CBDC will be communicated to the relevant parties via an underlying network. This network will very likely rely at least in part on private infrastructure to communicate updates among payment validators and CBDC internal parties. Interactions between account providers and end users will likely occur on the public Internet. These networks can be used to launch attacks such as denial of service, censorship attacks, or even partitioning attacks that cause different parts of the network to have different views of the global state. This causes the network layer to interact with the consensus layer. 

Hardware. CBDCs will ultimately run on hardware, including mobile devices, hardware wallets, and servers that maintain the state and functionality of the system. Hardware can become an attack vector through insecure firmware and/or vulnerabilities that are hard-coded into the products (e.g., backdoors in a hardware wallet). Such vulnerabilities tend to be difficult to exploit by all but the most sophisticated adversaries.

Threat actors

Security is defined with respect to a particular adversary. In a CBDC, there are several potentially adversarial actors of interest. We consider the following, in increasing order of strength.26

Users. Users are typically limited in their ability to affect the internal mechanics of the CBDC. They are generally able to access and exploit applications only to the extent that they can manipulate other users. End users may be motivated to steal money from other users. 

Third parties. Various types of third parties can threaten a CBDC, including scammers, application developers, or hardware manufacturers. Such adversaries are generally more powerful than typical end users, with more resources to attack the CBDC at layers ranging from hardware to application. For example, they may release malicious applications into the ecosystem, or manufacture backdoored hardware wallets. Their motives may range from stealing money to destabilizing the currency (e.g., particularly at the behest of a nation-state). 

System insiders. Insiders refer to individuals (or groups of individuals) who have access to the internal operations of a CBDC, including infrastructure operators or CBDC developers; their capabilities range from modifying system-critical code to exfiltrating data to bringing down key infrastructure (e.g., unplugging servers). Such attackers are notoriously difficult to defend against. Their motivations can be political, financial, or even personal. Common goals of malicious insiders include stealing resources or simply bringing the system to a halt. 

Foreign nation-states. Foreign nation-states are among the most powerful adversaries that a CBDC must defend against. Such adversaries may have effectively limitless resources to spend on offensive tactics, including the development of zero-day attacks as well as deployment of sophisticated attacks on applications, operating systems, and hardware. Additionally, they may coerce third-party producers of hardware or software to hard-code backdoors into products, thus giving easier downstream access. Such attacks can affect payment validator nodes, end user wallets, and custodial wallets hosted by account providers, to name a few. Their motivations are typically assumed to be political in nature.

Attack matrix

Different threat actors have different capabilities for infiltrating a CBDC. Table 1 indicates which attackers have access to which portions of the CBDC stack. Here, solid circles indicate that there exists the potential for full corruption of at least some portion of a given layer, whereas half-filled circles indicate the potential for partial corruption. Notice that all of the adversaries have only partial access to the network layer because CBDCs will rely in part on the public Internet. As such, full corruption is believed to be infeasible even for foreign nation-states. On the other hand, hardware is most easily corrupted through supply chain attacks, which can be executed by third parties as well as nation-states. 

Table 1: Which layers of the CBDC stack can different adversaries access or corrupt?

Source: Table created by Giulia Fanti. 
Note: Solid circles indicate (the potential for) full access, whereas half-filled circles indicate the potential for partial access.

CBDC design variants

In this section, we discuss major design choices related to cybersecurity for CBDC systems. The space of CBDC designs is vast, with each design presenting its own trade-offs.27 We present six digital currency variants that could form the basis of a CBDC system. This review does not attempt to cover all possible designs, but rather to give representative examples of different styles of digital currency schemes. For each design variant, we summarize the security, privacy, and performance trade-offs according to our requirements from the previous section. The design variants we discuss are reflected in Figure 3; orange boxes represent design variants, and blue boxes represent differentiating factors. Additionally, each design variant is annotated with one or more new cybersecurity challenges that arise in this CBDC design compared to the current financial system. These challenges are summarized below.

New cybersecurity challenges for CBDCs


The design variants discussed here pose various cybersecurity challenges that differ from challenges seen in the current digital financial system. 

  1. Financial data can be more centralized. Some design variants rely on a single, centralized database of financial transactions that is visible to system operators. This presents a central point of failure and a unified target for potential attackers. Although such databases exist with digital payments today (e.g., credit cards), CBDCs present an even greater potential for data centralization, and hence increased cybersecurity risk. 
  2. Regulatory agencies have less visibility into data. Some design variants prevent regulatory or law enforcement agencies from accessing transaction data, typically because said data is encrypted or stored only on local devices. This reduces regulators’ visibility into financial transaction flows compared to the current digital financial system and has implications for tracking illicit transactions, for example.
  3. Security hinges on the integrity of third-party validators. Some design variants use third-party validators (e.g., banks, telecommunications providers) to validate transactions. Transaction integrity is dependent on a (super-)majority of these validators not being compromised. This poses new challenges in terms of auditing and monitoring validators, as well as coordinating incident responses across validators, who may have different policies and procedures for dealing with breaches.
  4. Client key custody becomes more complicated. Some design variants require transactions to remain encrypted to provide client privacy. Custodial key management solutions, which are commonly used in the current financial system, would, therefore, compromise the promised privacy guarantees because the custodian could access client financial data. This requires client-side key management tools, which can present significant usability challenges. This problem has materialized in many cryptocurrencies and remains prevalent. 
  5. Security relies on trusted hardware manufacturers. Some design variants use trusted hardware to enforce transaction integrity. This places an increased supply chain risk specifically with trusted hardware manufacturers compared to the current financial system. 
  6. Transaction revocation is more difficult. Some design variants prevent an authority from unilaterally revoking fraudulent or contested transactions. This could be because client keys are stored locally, because there are multiple validators, or because data is encrypted so the central database is unable to ascertain the amount and endpoints of a contested transaction. 
  7. Programmable transactions can amplify the scope and scale of errors. Applications built on CBDCs are expected to rely on programmable transactions, or smart contracts (these are explained in more detail at the end of this chapter in the section on Additional Key Design Choices). Incorrectly specified smart contracts could result in misdirected funds at a massive scale, especially if these smart contracts are deployed naively. When coupled with Risk 6 (difficulty revoking transactions), this could lead to substantial financial losses.

Figure 3. CBDC design variants discussed in this chapter

Source: Figure created by Giulia Fanti. 
Note: Each variant is annotated with cybersecurity challenges that are new or elevated compared to the current financial system.

Takeaway: The design space for digital currencies is large
The discussion in many CBDC reports focuses on currency designs that are based on a centralized database, distributed ledger, or token model. We argue that the design space for digital currencies is larger than that. As will be discussed below, a digital currency can also be realized as signed balance updates or as a set of trusted hardware modules, and both the distributed ledger variant and the token model can support privacy-preserving transactions in addition to plaintext ones.

Database with account balances (status quo)

We start our review with a simple payment system that we call database with account balances. This design variant captures the payment approach used by existing credit card payments, mobile payments, and bank account transfers. We assume that both the payment sender and the payment recipient have already established an account and obtained the needed payment credentials. We also assume a database (payment records in Figures 1a and 1b) that maintains an account balance for each user.

To initiate a payment, the sender first requests the payment details, such as an account number, from the payment recipient. In the case of a card payment, this happens during interaction with the recipient's payment terminal; in the case of a bank account transfer, the payment details could be obtained manually or by scanning a QR code. The payment sender then creates a payment request that specifies the identity of the recipient and the payment amount, signs the payment request using their payment credentials, and sends it to the payment validators. The payment validators check the database to confirm that the sender has sufficient funds in their account and, if so, update the account balances of the sender and the recipient accordingly. This process may be distributed among multiple nodes for resilience and performance reasons, as in the recent Project Hamilton proposal.28 Finally, the payment validators send a payment completion acknowledgment to the payment recipient.
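
The validation step just described can be sketched in a few lines. This is a minimal illustration only: the class name `BalanceDatabase` is invented for this example, and HMAC over a per-user secret stands in for the public-key signatures a real deployment would use.

```python
import hashlib
import hmac

class BalanceDatabase:
    """Toy sketch of a 'database with account balances' validator."""

    def __init__(self):
        self.balances = {}  # account id -> balance
        self.keys = {}      # account id -> signing secret (HMAC stand-in)

    def open_account(self, user: str, key: bytes, deposit: int = 0):
        self.balances[user] = deposit
        self.keys[user] = key

    def sign(self, user: str, msg: bytes) -> str:
        return hmac.new(self.keys[user], msg, hashlib.sha256).hexdigest()

    def pay(self, sender: str, recipient: str, amount: int, signature: str) -> bool:
        msg = f"{sender}->{recipient}:{amount}".encode()
        # 1. Authorization: the request must verify under the sender's key.
        if not hmac.compare_digest(signature, self.sign(sender, msg)):
            return False
        # 2. Integrity: the sender must have sufficient funds.
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
        return True

# Demo: Alice pays Bob 30 out of an opening balance of 100.
db = BalanceDatabase()
db.open_account("alice", b"k1", 100)
db.open_account("bob", b"k2")
sig = db.sign("alice", b"alice->bob:30")
assert db.pay("alice", "bob", 30, sig)
assert db.balances == {"alice": 70, "bob": 30}
```

Note that in this sketch the validator holds every user's signing secret, which is only acceptable because HMAC is a placeholder; with real digital signatures the validator would hold only public verification keys.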

In this approach, all payment details necessary for validation are visible to the payment validators. The payment database stores the latest account balance for each user and such account balances are not disclosed to the public (only validators know the balance of each user account). 

Analysis

Integrity. The integrity of the database is entirely governed by the payment validator(s), who must (collectively) check that users do not overdraw their accounts. Assuming the currency issuer and payment validators perform these operations correctly, no money can be created out of thin air and no money will disappear from the system. To violate payment integrity, an adversarial insider would need to manipulate the operations of the currency issuer or a sufficient number of payment validator nodes, or an external adversary would need to compromise these entities through remote attacks.

Authentication and authorization. Users are authenticated upon logging into the system. Payments are authorized when an (authenticated) sender approves a transaction within a secure payment application. The easiest way for an adversary to break payment authorization is to compromise the initial authentication process, for example, through phishing attacks or malware. These threats can be mitigated through multi-factor authentication (MFA), including the use of hardware tokens. 

Privacy. The database model inherently provides no privacy to users. In terms of privacy, this design variant is comparable to current credit card and smartphone payments where the payment processors learn all transaction details. Any party with access to the database (i.e., payment validators) can see all transaction details: sender, receiver, amount, and time. If privacy is desired, it must be accomplished through non-technical means, such as implementing strict access control policies that prevent internal operators from accessing this data without approval. Hence, the primary attacks on privacy will be at the human layer, by corrupting operators and processes. 

Resilience. To process a payment, the sender only needs to submit the payment to the validator nodes. The receiver does not need to be online and can retrieve the funds the next time they access their wallet. However, to preserve availability, the validator infrastructure must be active at all times to confirm incoming transactions. Attacks on availability in this model are likely to target underlying infrastructure layers (e.g., network, storage, and/or compute). Transaction revocation is straightforward and can be executed unilaterally by the database operator (similar to credit card payments today).

Network performance. In terms of throughput, this design is very scalable and flexible. In particular, it can be implemented in a fully centralized fashion. This removes a major bottleneck to scaling throughput: communication bandwidth constraints. In this setting, we can feasibly achieve throughput comparable to existing financial services like banks or credit cards. 

This model has potentially the lowest communication costs overall. If implemented as a centralized service, transactions do not need to be validated by multiple parties. This reduces back-end communication costs. End users do not need to store any data except their own; this minimizes user-facing communication costs. Note that a “centralized design” can still boost throughput through parallelization.29

Governance. The governance requirements for this design are equivalent to those of the current financial system. In particular, as the system is centralized, there is no need to manage the threat of misbehaving validators. However, there is still a need for well-documented policies governing incidents at various layers of the stack, including insider attacks.

Takeaway: CBDC deployment might centralize user data collection
The main difference between a database with account balance CBDC and the current financial system is that the CBDC may result in a greater centralization of user data and financial infrastructure. This can have advantages, such as greater efficiency in implementing monetary policy. It can also have disadvantages, including the privacy threat of storing a single database containing users’ (or banks’) every transaction.


Case study: JAM-DEX (Jamaica)

In 2021, the Bank of Jamaica ran a pilot of a retail CBDC with vendor eCurrency Mint Inc. The Bank of Jamaica specifically chose to avoid blockchain technology for this pilot not because of technical misgivings, but in order to seamlessly interface with existing payment structures within the nation.30 Over the course of the pilot, the bank issued CBDC to banks and financial institutions as well as small retailers and individuals. After continuing these trials in early 2022 to test interoperability and transactions between clients and wallet providers, the Bank of Jamaica announced a phased launch of the Jamaican Digital Exchange (JAM-DEX) in May 2022.31

Advantages

Centralized databases are a mature technology and can in many cases be more easily integrated with existing infrastructure. 

Risks

A primary risk is related to privacy; this architecture exposes all users’ transactions in plaintext to the Bank of Jamaica. Even if the bank itself does not abuse this information, the transaction database poses an attractive target for hackers. The consequences of a data breach in a centralized setting may be very serious.

Distributed ledger with plaintext transactions 

Another popular design variant that we consider captures the way payments work in currently popular public blockchain systems like Bitcoin and Ethereum. We call this approach distributed ledger with plaintext transactions. As above, the payment process starts with the sender obtaining the payment address of the recipient. The payment sender prepares a transaction that includes the payment details (sender and recipient identities, payment amount) in plaintext, authorizes the payment by signing the transaction with their payment credentials, and sends it to the validators. The validators check that the sender has sufficient funds (a trivial check, because all payment details appear in plaintext in the transaction) and then append the payment transaction to a ledger that records all the transactions in the system.

In a public payment scheme like Bitcoin or Ethereum, the sender (or any other third party) can verify that the payment was approved by checking that it appears in the public ledger. In a CBDC deployment, the ledger would most likely be private and accessible only to authorized parties like the payment validators, currency issuer, and regulator. In such a private-ledger deployment, the payment recipient could verify the completion of the payment by querying the payment validators instead of reading the payment directly from the ledger.

Analysis

Integrity. The integrity of a ledger with plaintext transactions relies on two properties. First, regular transactions must not draw upon funds that have already been spent. This is verified by checking the transaction source against the set of all unspent transactions from the ledger. To bypass such a check, a sufficient number of validator nodes need to be manipulated or compromised. Second, the payer must be authorized to spend the transaction; the next paragraph explains how to verify this. In the case of minting new money, the first condition is not relevant, as the money is being created; authorization is still essential, though. 
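
The first check, verifying a transaction against the set of unspent funds, can be sketched as follows. This is a simplified illustration in the style of a UTXO (unspent transaction output) model; the class name `UnspentSet` and the string-valued output identifiers are invented for the example, and signature verification is omitted.

```python
class UnspentSet:
    """Toy double-spend check for a plaintext ledger: each transaction
    consumes previously unspent outputs and creates new ones."""

    def __init__(self, genesis_outputs):
        self.unspent = set(genesis_outputs)  # ids of spendable outputs
        self.ledger = []                     # append-only transaction log

    def apply(self, tx_id, inputs, outputs) -> bool:
        # Reject any transaction drawing on spent (or unknown) funds.
        if not set(inputs) <= self.unspent:
            return False
        self.unspent -= set(inputs)
        self.unspent |= set(outputs)
        self.ledger.append(tx_id)
        return True

# Demo: spending the same output twice fails the second time.
ledger = UnspentSet(["utxo0"])
assert ledger.apply("tx1", ["utxo0"], ["utxo1", "utxo2"])
assert not ledger.apply("tx2", ["utxo0"], ["utxo3"])  # double spend rejected
```

Because `apply` mutates shared state, this check is inherently order-dependent, which is the root of the sequential-processing bottleneck discussed under network performance below.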

Authentication and authorization. This design variant can involve separate authentication and authorization processes, but they can also be merged. Payment authorization requires a cryptographic signature on the transaction. Hence, for each payment, validators must verify that the signature is valid (itself a form of authentication, as cryptographic keys are meant to be linked to a specific user) and that the signer is authorized to spend the money in question. The easiest attack on authorization is for an adversary to steal a user's private keys, for example via phishing attacks. More recent “ice phishing” attacks trick users into signing a transaction that delegates the right to spend a user's tokens.32

Privacy. Ledgers with plaintext transactions do not inherently provide privacy to the transaction sender or receiver. Payments will be visible to any party with access to the ledger, including (at least) account issuers. At best, the system can provide pseudonymity with respect to parties that have access to the ledger; in other words, users are represented by pseudonymous public keys, and privacy is maintained only as long as these keys cannot be linked to a real-world identity. However, pseudonymity guarantees are known to be easily broken.33 Moreover, providing pseudonymity with respect to account issuers inherently complicates Anti-Money Laundering (AML) and Know Your Customer (KYC) efforts, and may, therefore, be less favored.34

Resilience. To process a payment, the sender only needs to submit the payment to the validator nodes. Notably, the receiver does not need to be online and can retrieve the funds the next time they access their wallet. However, to preserve availability, the validator infrastructure must be active at all times to confirm incoming transactions. In this design variant, transaction revocation can be more complex. For example, suppose a transaction sender requests that the transaction be revoked by appealing to their bank (which happens to be operating a validator node). However, the transaction receiver may argue to their bank (also a validator) that the transaction should stand. In this case, no bank has the authority to unilaterally revoke the transaction, absent legal or policy frameworks for handling such situations. Such challenges can be mitigated if a central authority (in this case, the central bank) is given the authority to revoke transactions and freeze assets. However, this requires the central bank to be directly involved in dispute resolution. Moreover, it changes the core threat model by involving a central trusted party in the validation process, thereby introducing a central point of failure.

Network performance. Ledger-based designs inherently require sequential processing that can limit throughput. In particular, validators must verify that each transaction is not drawing on previously spent funds. The only fully safe way to ensure this is by serially processing every transaction. Although there has been work in the research community showing how to achieve high throughput in such a setting,35 these systems tend to add implementation complexity. Alternatively, account issuers (e.g., banks) may be willing to parallelize the processing of smaller transactions to achieve higher throughput, at the risk of allowing double-spending. This risk can be managed through non-technical means, such as insurance. 

Communication costs will depend in part on architectural decisions. In the lowest-trust setting, validators should download the entire ledger to verify correctness. Some designs involve a tiered system, where certain nodes store the full ledger history, whereas others store only the system state that is relevant to them (e.g., a single user’s set of unspent transactions); in these tiered systems, so-called light clients may store only information relevant to their own needs, and outsource transaction verification to third parties to avoid the storage and bandwidth costs of maintaining the full ledger. In a CBDC, users are likely to want this light client functionality to transact from lightweight devices like a mobile phone; in this case, account provider(s) may play the role of the trusted third party, much as in the current financial system.

Governance. This design introduces the need for independent validators. As such, it is important to establish policies that govern situations in which one or more validators misbehave (e.g., approving invalid transactions, changing the order of transactions, or not meeting promised availability or latency guarantees). These policies can be retroactive, punishing entities that misbehave. They can also be proactive, by establishing mechanisms that incentivize validators to correctly and promptly validate transactions. Common examples of such mechanisms include transaction fees, which reward validators for each transaction processed, and block fees, which reward validators for processing a batch of transactions. A third possibility is to allow validators to accrue interest from a reserve pool, which is invested independent of the currency; this was the approach suggested for Libra, now Diem, Meta’s proposed digital currency.36 Another important governance issue is related to the interface between central banks and independent validators, such as banks or other financial institutions. For example, in the event of policy changes internally to the CBDC, do validators have a say, or will changes be imposed unilaterally by the central bank? How much information should validators share with each other and with the central bank, and at what timescales? We touch on these questions in Chapter 2.


Case study: Digital Won (South Korea) 

In 2021, the Bank of Korea announced plans to pilot a digital won. This pilot study, which started in late 2021, is an example of a distributed ledger with plaintext transactions. It is running on a Klaytn ledger,37 which uses a custom DLT consensus protocol that was initially proposed for the Ethereum blockchain.38 The validators in this blockchain are currently being run by various companies, including banks and payment providers. The technology is being provided by GroundX, which is the blockchain unit of Korean communications giant Kakao.

Advantages

The use of DLT technology can provide better integrity against certain adversaries. Specifically, decentralized validation protects against the threat of corrupt insiders arbitrarily modifying, rejecting, or creating transactions. 

Risks

The DLT consensus protocol used by Klaytn, while derived from well-established consensus protocols, is relatively untested and has not been publicly peer reviewed (to the best of our knowledge). Indeed, early versions of this consensus protocol had design errors that affected the integrity and robustness of the system.39 DLT consensus protocols are notoriously subtle to design, and care should be taken with new, untested protocols. 

User privacy may be limited, as Klaytn user accounts are associated with (internally visible) user-selected addresses. However, exploring privacy implications is one of the objectives of Phase 2 of the pilot study, scheduled to terminate in June 2022.

Distributed ledger with private transactions 

The next design alternative that we consider captures how private blockchain systems like Monero and Zcash work. We call this design alternative distributed ledger with private transactions. In this approach, the payment sender prepares the payment transaction such that payment details like identities and amounts are hidden. In practice, payment details can be hidden using encryption or cryptographic commitments. Additionally, the payment sender computes a zero-knowledge proof that allows the payment validators to verify that the transaction updates the user’s funds correctly without learning the payment details. More precisely, the zero-knowledge proof shows to the verifier that the sender has sufficient funds, the balances of the sender and the recipient are updated correctly by the transaction, and the proof is created by the legitimate owner of the funds (i.e., payment integrity and authorization hold). 

The payment sender uploads such private transactions to the payment validators who will verify the zero-knowledge proof without learning any payment details. If the proof is correct, the validators include the transaction in the ledger. As above, the payment recipient can verify the completion of the payment either by reading it directly from a public ledger (as is done in systems like Zcash and Monero) or by querying the payment validators (as would be more likely in a CBDC deployment with a private ledger).
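
The commitment primitive underlying this design can be illustrated with a deliberately simplified sketch. A hash-based commitment hides a payment amount until its opening (the amount and a random nonce) is revealed; real private ledgers pair such commitments with zero-knowledge proofs, which this toy example does not attempt to model.

```python
import hashlib
import secrets

def commit(amount: int):
    """Toy hash commitment: the digest hides the amount (hiding) and
    cannot later be opened to a different amount (binding)."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + amount.to_bytes(8, "big")).hexdigest()
    return digest, nonce

def verify_opening(digest: str, amount: int, nonce: bytes) -> bool:
    """Check that (amount, nonce) is a valid opening of the commitment."""
    return digest == hashlib.sha256(nonce + amount.to_bytes(8, "big")).hexdigest()

# Demo: the commitment opens only to the committed amount.
digest, nonce = commit(250)
assert verify_opening(digest, 250, nonce)
assert not verify_opening(digest, 999, nonce)
```

In an actual private-ledger design, the validator never sees the opening; instead, a zero-knowledge proof convinces the validator that the committed amounts balance correctly without revealing them.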

Takeaway: Strong user privacy is possible
Recent reports on CBDCs imply that a CBDC would inherently provide weaker privacy than cash. To some extent, we agree that such a view is justified: in any CBDC realization, a payment transaction would leave some digital trace (e.g., a communication channel opened between the payer and the payment infrastructure). However, we argue that such a view is also an oversimplification. Modern cryptographic protections, such as encryption, commitments, and zero-knowledge proofs, enable digital currency designs in which even the payment validators who process and approve transactions do not learn the identities involved in a payment or the payment amount, and cannot link payments from the same individual together. For many practical purposes, such strong privacy protection is comparable to the privacy of cash.

Analysis

Integrity. The integrity of a private ledger relies on the same two properties as public ledgers: the transaction should draw on valid funds, and the sender should be authorized to send (or create) the funds in question. In this model, payments are accompanied by cryptographic proof (e.g., a zero-knowledge proof) that proves the funds can be spent. Hence, verifying integrity involves checking that a zero-knowledge proof is valid. Creating and checking these proofs incurs additional computational overhead compared to plaintext ledgers, but these overhead costs have been falling in recent years thanks to innovations in applied cryptography.40

Authentication and authorization. Unlike public ledgers, this design variant aims to break the linkage between users and their transactions. As with public ledgers, payment authorization requires a signature or a similar cryptographic operation using a key or credential that is only known to the owner of the assets (payment sender). The cryptographic operation is such that system insiders, such as payment validators, cannot link this payment authorization to the identity of the payment sender. Therefore, in this design variant there is no explicit authentication process. 

Privacy. Ledgers with private transactions are designed to protect both the transaction sender and receiver. Such approaches can prevent an observer from linking the sender or receiver to a given transaction, while also hiding the amount of a given transaction. In this model, the ledger is still fully available to all validation nodes, but in encrypted form. Notably, these techniques do not protect against privacy attacks at the network layer—only at the consensus and application layers. Generally speaking, ledger-based private transactions cannot be easily reverted, because the payment validators do not learn the identities of the transacting parties in the fraudulent transaction. 

Resilience. As with public ledgers, only validators and the sender need to be online to process a transaction. The receiver does not need to be online and can retrieve the funds the next time they access their wallet. However, validators must be active at all times to keep the system operational. As with plaintext distributed ledgers, transaction revocation can be complicated by the presence of multiple validators. Additional challenges arise when revoking transactions on distributed private ledgers because validators do not have visibility into the amounts and/or parties involved in a transaction.

Network performance. As before, the sequential processing associated with ledgers can limit throughput. In particular, validators must verify that each transaction is not drawing on previously spent funds. As mentioned above, checking zero-knowledge proofs does incur some extra computational overhead compared to checking the validity of plaintext transactions. Today, the Zcash cryptocurrency uses schemes that require a few seconds to generate a proof (needed to create a new transaction), whereas transaction validation takes only milliseconds.41 These schemes are close to the state-of-the-art today. This additional processing time primarily affects transaction latency rather than throughput. 

The communication costs of this model are high. As with plaintext ledgers, transaction validation requires access to the (encrypted) system state, so validator nodes must continually download large quantities of data. However, unlike in plaintext ledgers, light clients are difficult to implement, as existing designs effectively break the promised privacy guarantees.

Governance. This design has all the same governance requirements as the ledger with plaintext transactions, particularly regarding the interactions between private validators and the central bank. Although the ledger is encrypted in this setting, many types of validator misbehavior can be detected just as easily as in the plaintext setting. For example, validators who validate conflicting transactions (thereby violating integrity) can still be detected as their digital signatures are visible to other validators and can be linked to the originator. 

Plaintext payment tokens

The next design variant that we consider is a token-based payment system. In such a system, the payment sender withdraws digital coins (that function as payment tokens) from the currency issuer. This withdrawal operation is authenticated using the sender’s credentials and the currency issuer updates the account balance of the user based on the withdrawn amount. Each coin (token) has a specific denomination and a unique serial number. 

To create a payment, the payment sender passes an appropriate number of coins (tokens) to the payment recipient who verifies that each coin is correctly signed and that their total denomination corresponds to the expected payment amount. To prevent double-spending of coins, the payment recipient deposits the coins to the payment validators immediately. The payment validators maintain records of the already used serial numbers and check that the serial numbers in the deposited coins have not been already used. After that, the payment validators add the amount of the deposited coins to the balance of the payment recipient and inform the recipient that the payment has been accepted.
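The validator-side deposit logic described above (verify each coin's issuer signature, check serial numbers against the double-spending record, then credit the recipient) can be sketched as follows. This is an illustrative toy, not a reference implementation: the issuer "signature" is modeled with an HMAC under a shared key, whereas a real deployment would use asymmetric signatures so that validators hold no signing secret, and all names (`PaymentValidator`, `deposit`, `sign_coin`) are hypothetical.

```python
import hashlib
import hmac

# Toy issuer "signature": HMAC under a key shared with validators.
# Stand-in for a real asymmetric signature scheme (illustration only).
ISSUER_KEY = b"issuer-secret"

def sign_coin(serial: str, denomination: int) -> str:
    msg = f"{serial}:{denomination}".encode()
    return hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()

class PaymentValidator:
    def __init__(self):
        self.used_serials = set()   # record of already spent serial numbers
        self.balances = {}          # recipient account balances

    def deposit(self, recipient: str, coins: list) -> bool:
        # 1) every coin must carry a valid issuer signature
        for serial, denom, sig in coins:
            if not hmac.compare_digest(sig, sign_coin(serial, denom)):
                return False
        # 2) no serial number may have been spent before (double-spend check)
        serials = {serial for serial, _, _ in coins}
        if len(serials) != len(coins) or serials & self.used_serials:
            return False
        # 3) accept: record the serials and credit the recipient's balance
        self.used_serials |= serials
        self.balances[recipient] = self.balances.get(recipient, 0) + sum(
            d for _, d, _ in coins)
        return True
```

A replayed coin is rejected at step 2, which is exactly the double-spending protection the deposit step provides.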

Analysis

Integrity. The integrity of a digital cash scheme relies on the correctness of the following two operations. First, when the payment sender withdraws coins from the currency issuer, the issuer must update the account balance of the user with an amount that matches the denomination of the withdrawn coins. Second, when the payment recipient deposits the received coins, the payment validators must check that the serial numbers of the coins have not already been used, and then update the account balance of the recipient with the denomination of the deposited coins. Assuming that the currency issuer and payment validators perform these operations correctly, no money can be created out of thin air and no money will disappear from the system. To violate payment integrity, either an insider adversary would need to manipulate the operation of the currency issuer or a sufficient number of payment validators, or an external adversary should be able to compromise these entities through remote attacks.

Authentication and authorization. In this design variant, anyone who holds coins (tokens) is able to authorize a payment by simply passing coins to a payment recipient. Sender authentication occurs when a user withdraws coins. Recipient authentication occurs when the user deposits received tokens. It is noteworthy that, unlike in most digital currency solutions, payment authorization does not require an explicit cryptographic operation such as signing (only passing tokens from one entity to another). To break payment authorization, the adversary would need to steal coins from the user. Assuming that the coins are stored in the user's wallet hosted on their smartphone, this might be possible by either stealing the device or tricking the user into installing malicious software on it. 

Privacy. Plaintext payment token systems do not provide privacy for the end users. The payment validators learn the payment amount and identity of the payment receiver during the coin deposit operation. Due to unique serial numbers, payment validators can link deposit operations to previous withdrawal operations, and thus also learn the identity of the payment sender.

Resilience. To perform a payment, the payment sender needs to contact the payment recipient, and the payment recipient needs to be online in order to deposit the received coins. In principle, the payment recipient can accept coins fully offline (and deposit them later), but in such a case, there is no double-spending protection, and thus payment acceptance is not safe. Because payments are processed by distributed validators in this design variant, transaction revocation may be more complicated. 

Network performance. Digital cash solutions are easy to scale for high throughput. The payment validators need to check a signature and serial number for each deposited coin. Because each payment and coin deposit is essentially independent of each other, such operations can be easily run by independent payment validators in parallel. For example, each payment validator can be responsible for one range of possible serial numbers. This is in contrast to ledger-based solutions where typically all payment validators need to communicate and share a common view of all payments in the system, which makes scaling more complicated.
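The parallelism described above — each validator responsible for one range of serial numbers — amounts to a simple routing rule. A minimal sketch, assuming fixed-width 32-bit serial numbers and a hypothetical `validator_for` routing function:

```python
NUM_VALIDATORS = 4

def validator_for(serial: int, num_validators: int = NUM_VALIDATORS) -> int:
    # Partition the serial-number space into contiguous ranges; deposits
    # whose serials fall in different ranges can be checked fully in
    # parallel, since each validator owns a disjoint slice of the
    # used-serial set. Assumes 32-bit serials (illustrative choice).
    RANGE = 2**32 // num_validators
    return min(serial // RANGE, num_validators - 1)
```

Because no coordination is needed between ranges, adding validators increases throughput almost linearly, which is the contrast with consensus-based ledgers drawn in the text.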

Communication costs in this model are low. The payment sender needs to download a number of coins that depends on the total amount of payments that the sender wants to make. 

Governance. This design is effectively centralized and, therefore, poses similar governance requirements to databases with account balances.


Case study: E-Krona (Sweden) 

In 2019, Sveriges Riksbank began planning the possible design of a CBDC, called e-krona, and investigating the regulatory implications of such a deployment. In 2020, together with Accenture as the technology provider, Riksbank started a CBDC pilot where one possible design alternative was tested.42

The piloted design follows the plaintext payment token approach where users withdraw coins (tokens), then make payments by passing them to the payment recipient who deposits them back to the payment infrastructure to verify the coins have not already been used (double-spending protection). 

Advantages

One advantage of the piloted design is that it is easy to scale. Separate payment validators can verify separate ranges of coin serial numbers without having to run a complicated and expensive consensus protocol. This makes payment verification fast and easy to scale for a large number of parallel validators. 

Risks

Compared to ledger and database variants, a token or coin-based design places a higher burden on the user for wallet management. If the wallet that stores the coins is lost, the user will lose all funds. In most other currency variants it is sufficient to securely manage and back up one key that is used to authorize payments. 

Additionally, the piloted design provides no privacy protection for the users, and, therefore, the payment infrastructure operator who runs the validator nodes learns the identities of the payment recipient and sender, and the amount of each payment.

Privacy-preserving payment tokens 

A privacy-preserving variant of the above token-based payment system was proposed by David Chaum.43 As above, the payment sender withdraws coins from the currency issuer. The main difference is that the coin withdrawal process leverages a cryptographic technique called a blind signature. The user who withdraws the coins picks random serial numbers for each coin, and the use of blind signatures allows the currency issuer to sign the coins without learning their serial numbers.
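Chaum's blinding step can be illustrated with textbook RSA blind signatures: the user multiplies the coin's (hashed) serial number by a random blinding factor raised to the public exponent, the issuer signs the blinded value without learning the serial, and the user divides the blinding factor back out. The sketch below uses deliberately tiny, insecure parameters for illustration only; the function names are hypothetical.

```python
from hashlib import sha256
from math import gcd
from secrets import randbelow

# Textbook RSA blind signature (after Chaum). Toy parameters: insecure,
# for illustration only -- a real system would use proper key sizes and
# padding.
p, q = 1009, 1013
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))           # issuer's private exponent

def withdraw(serial: bytes):
    """User side: have the issuer sign the coin's serial number blindly."""
    m = int.from_bytes(sha256(serial).digest(), "big") % n
    while True:                              # pick a blinding factor coprime to n
        r = randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n         # the issuer sees only this value
    s_blinded = pow(blinded, d, n)           # issuer signs blindly
    s = (s_blinded * pow(r, -1, n)) % n      # user unblinds: s = m^d mod n
    return m, s

def verify(m: int, s: int) -> bool:
    return pow(s, e, n) == m                 # anyone can check the signature
```

The unblinding works because (m · r^e)^d = m^d · r (mod n), so dividing by r leaves an ordinary RSA signature on m — yet the issuer never saw m, which is what breaks the link between withdrawal and later deposit.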

To create a payment, the payment sender passes an appropriate number of coins to the payment recipient who forwards them to payment validators for double-spending checks and for updating the payment recipient’s account balance. The main difference from plaintext tokens is that such a payment validation scheme preserves the privacy of the payment sender. The payment validators learn the payment amount and the identity of the payment recipient, but due to the use of blind signatures, the validators cannot link the deposit operation to a previous withdrawal operation and thus they cannot learn the identity of the payment sender. 

Analysis

The main differences between this currency variant and the previous one are the level of end-user privacy that is achieved, as well as the authentication process.

Authentication and authorization. Unlike in plaintext payment tokens, this design variant does not reveal the sender’s identity to the validator(s). As such, there is not an explicit sender authentication process at the time of payment (only at the time of coin withdrawal). The identity of the payment recipient is authenticated at the time of payment so that the recipient’s account balance can be updated accordingly. Payment authorization is similar to the plaintext setting (passing coins from the sender to the recipient).

Privacy. This currency variant provides privacy for the payment sender. The payment recipient can accept coins fully anonymously (i.e., without knowing the identity of the sender) and when the coins are deposited, the payment validators who may communicate with the currency issuer cannot link them to the identity of the sender either, due to the use of blind signatures during coin withdrawal. Private payment token systems do not provide privacy for the payment recipient. When the received coins are deposited, the recipient must authenticate their identity to the payment validators so that the validators can update the account balance of the recipient correctly. Such systems also do not ensure payment amount privacy, since the payment validators learn the denominations of the deposited coins, and thus the amount of the payment.


Case study: Swiss National Bank (Switzerland)

In 2021, the Swiss National Bank (SNB) released a working paper that outlines one possible design for a CBDC system.44 This working paper follows the private payment token approach with the use of cryptographic blind signatures during coin (token) withdrawal. To the best of our knowledge, there is no pilot project yet, but the working paper indicates that this currency variant is also being considered.

Advantages

Compared to the plaintext payment token scheme (used in the e-krona pilot), the main advantage is added privacy. More precisely, it is possible to perform payments where the identity of the payment sender remains private to the payment validators. For example, in a practical retail setting this would mean that the payment validators learn the payment amount and the identity of the merchant who accepts the payment, but not the identity of the customer who made the payment. Good scalability is another noteworthy advantage.

Risks

As discussed above, a token-based design places a higher burden on the user for wallet management, compared to ledger and database variants.

Signed balance updates

Next, we consider a hybrid payment approach proposed in recent research.45 This approach combines centralized signing used in digital cash schemes with the account model and zero-knowledge proofs commonly used in private ledger transactions. We call this approach signed balance updates.

To join the system, each user creates a cryptographic commitment to a randomly chosen serial number and their current account balance value and requests the payment validators to sign this commitment. To create a new payment, both the payment sender and the payment recipient create new commitments to fresh serial numbers and the updated account balances that add the payment value to the recipient’s balance and deduct the payment value from the sender’s balance. The payment sender will also create a zero-knowledge proof that shows that both commitments are updated with the correct amount and the payment sender has sufficient funds in their current commitment. The payment recipient then sends the new commitments and the proof to the payment validators.

Similar to digital cash, the payment validators maintain a database of already used serial numbers. The validators will verify the proof and check that the serial numbers associated with the commitments have not already been used. If that is the case, the payment validators sign the new commitments (that represent balance updates) and return them to the payment sender and recipient who can consider the payment completed. 
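The validator-side steps just described — verify the proof, check that the serial numbers are fresh, then sign the new commitments — can be sketched as follows. This is a heavily simplified stand-in: the zero-knowledge verification is abstracted as an opaque placeholder check, commitments are modeled as plain hashes rather than the hiding/binding commitments (e.g., Pedersen) a real scheme needs, and the validator "signature" is an HMAC. All names are hypothetical.

```python
import hashlib
import hmac

VALIDATOR_KEY = b"validator-signing-key"    # stand-in for a real signing key

def commit(serial: bytes, balance: int, blinding: bytes) -> bytes:
    # Hash-based commitment stand-in; a real scheme uses commitments
    # compatible with zero-knowledge proofs (e.g., Pedersen commitments).
    return hashlib.sha256(serial + balance.to_bytes(8, "big") + blinding).digest()

def verify_proof(proof, old_serials, new_commits) -> bool:
    # Opaque placeholder for the zero-knowledge check that the new
    # commitments conserve value and the sender had sufficient funds.
    return proof == "valid"                  # NOT real cryptography

class BalanceUpdateValidator:
    def __init__(self):
        self.used_serials = set()            # double-spending record

    def process(self, proof, old_serials, new_commits):
        if not verify_proof(proof, old_serials, new_commits):
            return None
        if any(s in self.used_serials for s in old_serials):
            return None                      # double-spend attempt
        self.used_serials.update(old_serials)
        # Sign the fresh commitments; these signed values are all the
        # validator stores about the payment (no identities, no amounts).
        return [hmac.new(VALIDATOR_KEY, c, hashlib.sha256).hexdigest()
                for c in new_commits]
```

Note that the validator's state is just a set of opaque serial numbers and signatures over opaque commitments, which is why this design leaks neither identities nor amounts to the infrastructure.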

Analysis

Integrity. The integrity of payments relies on two mechanisms. First, the payment sender creates a zero-knowledge proof that allows the payment validator to verify that the cryptographic commitments that represent account balance values are updated correctly. Assuming that the payment sender cannot forge such a proof, the integrity of each individual payment holds. Second, the payment validators check that each commitment serial number is used only once. This prevents double-spending the same funds multiple times in the system. So, there is no double-spending as long as the payment validator who approves the payment is not compromised. Here we assume that the used zero-knowledge scheme cannot be forged, and thus the only way to violate integrity is to compromise (a sufficient number of) payment validators (either remotely or locally through insider attacks).

Authentication and authorization. As with private distributed ledgers, there is no explicit authentication process at transaction time, as this design variant aims to break the link between users and their transactions. Payment authorization is based on zero-knowledge proofs. Each proof shows that the payment sender holds a private key (payment credential) that is associated with the used commitments. Payments cannot be created by unauthorized parties, as long as they cannot steal the payment credentials of legitimate users. As before, typical attack vectors for stealing user credentials would include stealing the user's device and tricking the user into installing malicious software on their device.

Privacy. This approach provides sender privacy, recipient privacy, amount privacy, and payment unlinkability at the protocol level. The identities of the payment sender and recipient and the payment amount are hidden from the payment validators (and all other parties) because the used commitments hide all such details. Also, the used zero-knowledge proofs leak no information to the payment validators. Since fresh serial numbers are randomly chosen for each commitment, such payments also provide unlinkability. This means that payment validators, or another party, cannot connect one payment with another. (Linking of payments and construction of transaction graphs is a common technique used to de-anonymize ledger-based payments.) Network-level de-anonymization of users remains a potential privacy threat.

Resilience. To create a payment, both the payment sender and the payment recipient need to be online. The payment sender needs to communicate with the recipient and the validators. Due to the strong privacy protections provided by this design, fraudulent transactions cannot be easily reverted; in this regard, this design variant functions similar to cash.

Network performance. This approach provides good scalability. There can be several payment validators who are each in charge of separate ranges of commitment serial numbers and validate payments independently. A simple consistency check is needed between two validators (one who checks the sender commitment serial number and another who checks the recipient commitment serial number). The communication requirements of this scheme are moderate. Users upload commitments and proofs that they create, and download commitments signed by the payment validators. Users do not need to download the entire ledger that contains all transactions.

Governance. As before, the encryption in this design is primarily protecting the privacy of user transactions, not validator actions. As such, this design is effectively centralized and poses similar governance requirements to databases with account balances. 

Secure hardware on clients 

Finally, we consider a design alternative that assumes that every client has a trusted hardware module, such as a smart card or secure chip on a smartphone. This trusted hardware module maintains an account balance for the owner of the module. Payment is simple: the trusted hardware modules of the sender and the recipient execute a protocol where the payment amount is deducted from the balance in the sender’s module and the same amount is added to the balance in the recipient’s module. 
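The module-to-module protocol described above can be sketched in a few lines. The sketch is only a software model of the behavior: in a real deployment, this state and logic live inside tamper-resistant hardware, and the transfer would be a mutually authenticated protocol between the two modules (e.g., over NFC). The class and method names are illustrative.

```python
class TrustedModule:
    """Illustrative model of a secure element holding an account balance.
    The integrity of the whole scheme rests on this logic being
    unmodifiable -- which is exactly the assumption analyzed below."""

    def __init__(self, balance: int = 0):
        self._balance = balance

    @property
    def balance(self) -> int:
        return self._balance

    def pay(self, recipient: "TrustedModule", amount: int) -> bool:
        # The two modules execute the transfer directly, with no
        # validator or other third party involved.
        if amount <= 0 or amount > self._balance:
            return False                 # insufficient funds: refuse
        self._balance -= amount          # deduct from the sender's module
        recipient._balance += amount     # credit the recipient's module
        return True
```

A user who can tamper with their own module can skip the deduction in `pay`, which is precisely the unlimited double-spending risk discussed in the integrity analysis below.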

Analysis

Integrity. The integrity of such a solution relies on the assumption that every hardware module used in the system remains uncompromised. If even a single user is able to break their own module (to which they naturally have physical access), they can double-spend the same funds an unlimited number of times. Also, if an external adversary is able to compromise even one of the deployed hardware modules, unlimited double-spending is possible. Another risk is a malicious hardware vendor or supply chain attack. If some of the deployed hardware modules are already malicious during the deployment phase, this design variant cannot guarantee the integrity of the currency. For these reasons, this variant is commonly seen as too risky for many deployments.

Authentication and authorization. User authentication can be conducted when transferring funds to the secure hardware; at transaction time, the hardware itself acts as an identifier. Similarly, simple payment authorization could be based on physical access to the trusted hardware module. That is, anyone who has the module can perform a payment. Such authorization would be vulnerable to module theft. Another approach is to require local user authentication for each payment. For example, the owner of the trusted hardware token provides a PIN code or fingerprint to the hardware module to authorize a payment. 

Privacy. While this approach provides weak integrity guarantees, it offers strong privacy protections. Because payments happen directly between the sender and the recipient, there is no information leakage to validators or any other parties. Thus, such payments are fully anonymous and unlinkable (and leave no electronic trace to any payment infrastructure). Therefore, this design variant provides similar privacy guarantees as cash payments.

Resilience. This design variant supports offline payments. That is, payments are possible between the sender and the recipient even if both parties are offline, as long as they can communicate with each other (e.g., using a local communication channel such as near-field communication; NFC). Performing safe offline payments without trusted hardware is currently an open problem, and thus no other design variant discussed in this chapter provides similar offline-payment capability. In this design variant, fraudulent transactions cannot be easily reverted (similar to cash).

Network performance. Such a design is extremely scalable, as there is no centralized authority, such as payment validators, that would need to approve each payment. Payments require only minimal communication between the payment sender and the recipient. 

Governance. The use of secure hardware introduces new challenges related to the responsibilities of hardware manufacturers. For example, policies must be put in place for managing the implications of possible security vulnerabilities (intentional or otherwise) in trusted hardware modules. 

Summary

While the design space is large, many central banks have narrowed their scope to three of the discussed design variants: databases with balances, distributed ledgers with plaintext transactions, and variants of digital cash. Although there are no central banks that have committed to the other three design choices (to the best of our knowledge), there could be hybrid architectures that allow for combinations of technologies. 

Figure 4. Breakdown of current adoption/exploration of different CBDC variants globally

The table below summarizes our analysis in this chapter. Due to space limitations, governance considerations are not included in the table; for differences in governance models, see the discussion of the individual design variants earlier in this chapter.

Table 2. Summary of currency variant analysis 

Source: Authors.
Note: Text highlighted in green represents a well-supported requirement or an advantage of the analyzed currency variant. Text highlighted in red represents a requirement that is not well supported or an aspect of the currency variant that is a disadvantage compared to other variants.

Additional key design choices

The previous section described possible design alternatives for a CBDC and their analysis. In this section, we discuss other common design choices that (possibly) span different designs, including consensus, wallets, and privacy together with compliance. 

Consensus mechanism

System designers must choose a consensus mechanism, which determines how transactions are confirmed by the validator node(s). The choice of consensus mechanism requires understanding trade-offs between robustness and efficiency. At one extreme, we have a single validator to confirm the validity of each transaction. This is efficient because it requires no coordination between multiple validators, but it is not robust; if the validator goes offline or misbehaves, integrity and/or availability are lost. At the other extreme, we can design consensus schemes with hundreds or thousands of validators, as in public cryptocurrencies like Bitcoin or Ethereum. Such approaches tend to be much less efficient, as they are fundamentally limited by the bandwidth and latency of the underlying network. However, these mechanisms tend to be much more robust to misbehaving or unavailable validator nodes. In practice, we expect CBDCs are likely to operate in an intermediate regime, with, for example, tens of validators. 

Fault models. In this regime, two types of robust consensus mechanisms are typically considered: crash fault-tolerance and Byzantine fault-tolerance. Crash fault-tolerance means that the protocol is robust to some fraction of validators going offline, for example, due to a disruption in power or network infrastructure. Byzantine fault-tolerance is a stronger concept; in addition to tolerating crash faults, it is additionally robust to a fraction of validators actively misbehaving, for example, by deviating arbitrarily from protocol. Byzantine fault-tolerance requires additional communication costs compared to simple crash fault-tolerance; this accordingly increases latency and can reduce throughput. However, in a CBDC, the financial incentives for misbehavior are high; there is a compelling case to be made for building in robustness to Byzantine faults.
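The fault thresholds implied by these two models are standard: in the partially synchronous setting, Byzantine fault-tolerant protocols of the PBFT family tolerate f faulty validators out of n ≥ 3f + 1 and commit on quorums of 2f + 1 votes, while crash fault-tolerant protocols (e.g., Raft-style) need only a simple majority, n ≥ 2f + 1. A small sketch of these well-known formulas (the function name is illustrative):

```python
def fault_tolerance(n: int):
    """Standard thresholds for n validators.
    Byzantine (PBFT-family, partial synchrony): tolerate f_bft faults
    with n >= 3*f_bft + 1, committing on quorums of 2*f_bft + 1 votes.
    Crash-only (e.g., Raft-style majority protocols): tolerate f_crash
    faults with n >= 2*f_crash + 1."""
    f_bft = (n - 1) // 3
    bft_quorum = 2 * f_bft + 1
    f_crash = (n - 1) // 2
    return f_bft, bft_quorum, f_crash
```

So a CBDC running, say, ten validators could tolerate three Byzantine validators (or four crashed ones) — concrete numbers that make the robustness/efficiency trade-off above easier to reason about.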

When evaluating consensus mechanisms, it is essential to consider the precise security assumptions and guarantees of each mechanism and ensure that back-end infrastructure is designed to match those assumptions. For example, many Byzantine fault-tolerant consensus protocols are robust up to some fraction of malicious parties (e.g., one-third or half). This means that (for example) up to one-third of the validators can be compromised without affecting the system’s integrity. These attractive security guarantees have led some countries to consider adopting such consensus mechanisms (e.g., the digital euro).

Takeaway: Use of proven protocols is important
Byzantine fault-tolerant consensus mechanisms (e.g., those underlying DLT systems) are notoriously difficult to design and implement securely. Consensus protocols should be carefully evaluated (e.g., through peer review) and run on fully independent infrastructure to give meaningful security guarantees.

Deployment considerations. Despite the apparent security benefits, Byzantine fault-tolerant distributed validation protocols should be implemented with care. For the security guarantees to be meaningful, it is essential that validators be independent. That is, the corruption of one validator should minimally (or not at all) affect the likelihood of another validator being corrupted. At a minimum, this means that validator nodes should be run on servers in different locations, using different power sources and network infrastructure. Ideally, they should be hosted and managed by independent entities. This is meant to avoid situations where, for example, an adversary manages to compromise the integrity of a single validator, and then uses the same exploit to compromise the remaining validators. In such settings, the security guarantees provided by Byzantine fault-tolerant consensus would be vacuous. 

Another important consideration is the ability to identify misbehaving nodes in a consensus protocol. That is, suppose some fraction greater than half of validators misbehave. In such scenarios, it is important to be able to identify which nodes misbehaved to punish them appropriately. However, some consensus protocols make such reidentification difficult (e.g., PBFT-MAC), whereas others naturally support it (e.g., LibraBFT).46 Such questions of consensus protocol forensics are an important consideration when selecting a consensus mechanism.

Consensus and fairness. The choice of consensus mechanism can also have implications for the fairness of the CBDC. For example, some consensus mechanisms choose a single validator node to be the “leader” at each instant; the leader’s job is to order incoming transactions and commit them to the ledger. However, such leader-based protocols can undermine fair transaction ordering; the leader can be bribed to place some transactions before others, leading to the risk of financial manipulation. It is an active research area today to identify (efficient) consensus protocols that preserve the natural ordering of transactions in the presence of malicious validators.47

Wallets

In most currency variants reviewed above, the payment sender authorizes the payment by signing a transaction with their payment credential, such as a digital signature key, obtained from an account provider (typically a commercial bank). The secure storage and use of the payment credential are important — if unauthorized parties learn the payment credential, they can spend the user’s funds; or if the legitimate user loses the payment credential, they are no longer able to spend their funds. The data storage and computing environment where the credential is stored and used is commonly called a digital wallet.
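The sign-to-authorize pattern described above can be sketched with textbook RSA signatures standing in for the payment credential: the wallet hashes the transaction and signs the hash with a private key that never leaves the wallet, and the account provider verifies with the corresponding public key. Toy parameters, insecure and illustrative only; the function names are hypothetical.

```python
from hashlib import sha256

# Textbook RSA signing as a stand-in for the wallet's payment credential.
# Toy parameters: insecure, for illustration only.
p, q = 1117, 1123
n, e = p * q, 65537                  # public key, known to the bank
d = pow(e, -1, (p - 1) * (q - 1))    # private key, held only in the wallet

def wallet_sign(tx: bytes) -> int:
    # The wallet authorizes the payment by signing the transaction hash.
    h = int.from_bytes(sha256(tx).digest(), "big") % n
    return pow(h, d, n)

def bank_verify(tx: bytes, sig: int) -> bool:
    # The account provider checks the signature with the public key.
    h = int.from_bytes(sha256(tx).digest(), "big") % n
    return pow(sig, e, n) == h
```

Everything that follows in this section — custody, secure hardware, backups — is about where `d` lives and how it is protected, since anyone holding it can authorize payments from the account.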

Custodial wallets. One possible deployment alternative is one where the account providers (commercial banks) support the users in the management of their payment credentials. Such deployments are commonly called custodial wallets or custodial solutions. For example, a bank can create the payment credential and provision it to the user’s digital wallet on their smartphone. If the user loses the device, and thus the payment credential, the bank can issue a new payment credential and send it to the user (upon successful authentication). 

Another custodial alternative is that the payment credential is stored only at the bank. In such a solution, payment creation requires contacting the bank with user authentication. One benefit of this approach is that if the user’s device is lost or stolen, the payment credential is not leaked. Another benefit is that if the user has multiple devices, the payment credential can be conveniently accessed from any of the other devices without the user having to replicate or synchronize credentials across multiple devices. The main drawback of custodial solutions is that a malicious insider at the bank is able to use the user’s payment credential without permission. Also, if the relevant IT system of the bank is compromised, or if the bank is subject to a data leak, a large number of payment credentials may be leaked. 

Non-custodial wallets. The alternative approach is a non-custodial wallet, where the user maintains the payment credential themselves. The payment credential can be created and stored on the user's smartphone that hosts the wallet software. A simple wallet software could store the payment credential in ordinary data storage such as flash memory. The main benefit of non-custodial wallets is that the payment credential is not directly accessible to any other party besides the owner of the funds. The downside of non-custodial wallets is that the user needs to manage backups themselves. Safe backups can be difficult to organize in practice (paper backups may get lost, online backups are not safe, and many users might forget to create a backup altogether).

Secure hardware. Because smartphones can get lost, stolen, or infected with malware, a more secure approach is to store the payment credential inside a protected environment such as a hardware-assisted Trusted Execution Environment (TEE). Most modern smartphones support a TEE technology called ARM TrustZone, while PC platforms such as laptops typically support a TEE technology called Intel SGX. Both TEE technologies allow storage and use of the payment credential such that the credential is not accessible to any software other than the wallet software on the same device. While TEE wallets increase the robustness of credential storage significantly, recent research has shown that TEEs can be vulnerable to sophisticated attacks such as side-channel analysis. 

Another option is to store the payment credential in a separate hardware token, such as a USB dongle. Hardware tokens offer strong security guarantees because the payment credential is physically isolated from potentially unsafe devices like the user’s smartphone or laptop. A common challenge with hardware tokens is how to safely back up a payment credential. Another typical challenge is the limited user interface on small hardware dongles, and thus safe payment detail input or verification can be difficult with hardware tokens.

In all wallet solutions, safe storage of the payment credential relies on the trustworthiness of the hardware that hosts the wallet software. Nation-state adversaries could coerce hardware manufacturers to implement backdoors that would allow the adversary to learn any secrets stored on that hardware. While such attacks are only possible from the most powerful adversaries, such threats should be considered as part of an extensive threat profile for CBDCs.

Social engineering attacks. Social engineering attacks are currently one of the most widely used and successful attack vectors in IT systems. In traditional email phishing, the victim receives a benign-looking but malicious email from the adversary. The goal of the email is to convince the victim to enter their login credentials into a fake website controlled by the adversary.

Phishing attacks are becoming increasingly common also in the context of decentralized cryptocurrencies. A possible attack vector tricks the victim into revealing the “recovery seed” of their wallet.48 If the adversary obtains such information, they can recreate the victim’s payment credentials and steal all of the victim’s funds (most hardware wallets support a recovery seed so that the legitimate owner of the wallet can recover funds in case the hardware token is lost or damaged). Another possible attack is to trick the user into performing a fraudulent transaction that transfers some of their cryptocurrency assets to the adversary.49 Such spoofing attacks work even if the adversary does not obtain the user’s payment credentials—the adversary merely tricks the victim into using their credentials to the benefit of the adversary. As payments cannot be easily reverted in current decentralized cryptocurrencies, such attacks are difficult to recover from. 

Similar adversarial strategies could apply to future CBDC deployments. The fact that CBDC will be more centralized may alleviate some concerns (e.g., the possibility to revert fraudulent transactions can be built into certain currency designs), but in general, the same attack concepts apply to both decentralized and centralized systems. While certain countermeasures and defensive techniques are well-known (e.g., multi-factor authorization, safe wallet UI design practices), such attacks most likely cannot be fully eliminated by technical means alone. As in any large and complex IT system, humans and social engineering remain a viable threat vector, and security awareness and user training are probably required to limit the effectiveness of these threats. How exactly such attacks may manifest in future CBDCs will depend on how such systems will be implemented, what kinds of wallet user interfaces will become common, and various other similar factors. Therefore, performing a detailed analysis on this topic is not yet possible, but designers of CBDC systems are, nonetheless, advised to consider such adversarial strategies.

Smart contract design and deployment

Many CBDC deployments (especially of the retail variety) are expected to support applications that allow users to interact with the underlying CBDC infrastructure. Examples include payment or banking applications. These applications will likely be backed by smart contracts, which are computer programs that govern the transfer of digital assets. First proposed in the context of decentralized cryptocurrencies, the concept is much more general and can be applied to centralized financial services as well. To give an example, a smart contract could be used to programmatically implement disaster relief programs by specifying that every registered citizen should receive $500 at a particular date and time. Smart contracts can also specify conditions under which transfers should occur; for example, a smart contract could specify that every time a user (Alice) receives a payment from another user (Bob), 30 percent of that payment will be transferred to Alice’s family member (Carol). Finally, smart contracts can be composed with each other to create complex dependencies between events in a financial system.
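The 30-percent forwarding rule from the example above can be expressed as a small piece of contract logic. The sketch below uses Python purely for illustration; in an actual deployment, such a rule would be written in a smart contract language (e.g., one targeting the EVM), and the class and method names here are hypothetical.

```python
class SplitPaymentContract:
    """Sketch of the rule described above: whenever the beneficiary
    (Alice) receives a payment, a fixed share is forwarded to a family
    member (Carol). Integer arithmetic avoids rounding surprises."""

    def __init__(self, beneficiary: str, family_member: str,
                 share_percent: int = 30):
        self.beneficiary = beneficiary
        self.family_member = family_member
        self.share_percent = share_percent
        self.balances = {beneficiary: 0, family_member: 0}

    def receive_payment(self, sender: str, amount: int) -> int:
        # Forward the fixed share, credit the rest to the beneficiary.
        forwarded = amount * self.share_percent // 100
        self.balances[self.family_member] += forwarded
        self.balances[self.beneficiary] += amount - forwarded
        return forwarded
```

Note that the rule executes automatically on every incoming payment with no manual step — which is both the appeal of smart contracts and, as discussed below, the reason bugs in them scale so badly.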

Smart contracts have been particularly powerful in the cryptocurrency space because of standardization: although different parties have differing goals and requirements and can use different programming languages, all parties utilize the same set of rules for specifying and processing smart contracts. The most widely known set of such rules is the Ethereum Virtual Machine (EVM). The EVM has enabled the deployment of complicated applications between parties that would otherwise require either manual effort or custom-built services governing the logic and automated transfer of funds.

Smart contracts raise a number of issues that are likely to pose new cybersecurity challenges for CBDCs.

Managing vulnerabilities or errors. Because smart contracts are computer programs, it is inevitable that many smart contracts will have bugs: errors in the logic or implementation of the contract. Software bugs can lead to (sometimes catastrophic) security vulnerabilities. The main concern in a CBDC is that these errors could erroneously transfer large amounts of money to the wrong recipient, or enable malicious agents to steal money by exploiting vulnerabilities in a smart contract. While software vulnerabilities have always been a concern for financial institutions, the main risk that arises with smart contracts is greater scale: smart contracts enable large-scale, nearly instantaneous transactions, which can also set off a chain of downstream-dependent transactions. In decentralized systems (e.g., cryptocurrencies), smart contract bugs have been particularly problematic because contracts are immutable, meaning they cannot be changed once they are deployed. In a more centralized CBDC setting, contracts do not necessarily have to be immutable. However, as mentioned earlier, some CBDC design variants make it more difficult to revert transactions (Figure 3); these designs may complicate full recovery from bugs.

To some extent, these vulnerabilities can be managed through a combination of technical and procedural means. On the technical side, software engineering best practices call for testing all code before deployment: smart contracts should be exercised under a wide range of inputs to determine whether they contain vulnerabilities prior to deployment. While there are tools for software (and smart contract) testing, these are not error-proof. The problem is particularly complicated for smart contracts because of the complex dependencies between them. An input to one contract may not obviously cause errors but could have cascading effects that cause errors in another contract invoked through a chain of downstream dependencies.
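As a minimal sketch of what testing a contract under a wide range of inputs can look like, the following example exercises a toy (hypothetical) contract rule across normal, boundary, and invalid inputs; real smart contract test suites would be far more extensive:

```python
# Minimal illustration of pre-deployment testing: exercise a hypothetical
# contract function across normal, boundary, and adversarial inputs.

def withdraw(balance, amount):
    """Toy contract rule: allow a withdrawal only if funds suffice."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def run_tests():
    assert withdraw(100, 40) == 60          # normal case
    assert withdraw(100, 100) == 0          # boundary: exact balance
    for bad in (0, -5, 101):                # invalid / adversarial inputs
        try:
            withdraw(100, bad)
        except ValueError:
            pass                            # rejection is the expected outcome
        else:
            raise AssertionError(f"accepted invalid amount {bad}")
    return "all tests passed"

print(run_tests())
```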

On the procedural side, CBDCs could implement staging or test environments where smart contracts are deployed and evaluated at a small scale before being fully deployed on the main CBDC network. This is again analogous to best practices in software engineering for deploying new updates to complex software systems.

Privacy 

Any deployed CBDC system is likely to collect significant amounts of data since such systems would process a large number of payments every day. Some central banks have indicated that their goal is not to build a tool of mass surveillance50 and, therefore, CBDC deployments should carefully consider and incorporate at least some privacy protections. 

System designers have a number of mechanisms for preserving privacy in a CBDC. Process and policy can be an important tool for enforcing privacy with respect to system insiders. Here, it is important to follow the principle of “least privilege”: operators should be given access only to the data they require to do their jobs. For example, an account operator should not have access to portions of the ledger that are not relevant to its own customers. Even when access control policies are in place, insiders within the currency issuer (or account issuer) can still have access to large quantities of sensitive financial data. Digital currency variants with built-in privacy protections, as described and analyzed in the previous section, provide a significantly stronger foundation for user privacy, as in such designs the privacy of end users does not rely on the trustworthiness of system insiders.
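The principle of least privilege described above can be sketched as a simple query filter; the record layout and operator names below are hypothetical:

```python
# Sketch of "least privilege" on a ledger: each account operator may read
# only the records belonging to its own customers. Record fields and
# operator names are illustrative.

RECORDS = [
    {"operator": "bank_a", "account": "a1", "balance": 250},
    {"operator": "bank_a", "account": "a2", "balance": 90},
    {"operator": "bank_b", "account": "b1", "balance": 400},
]

def read_ledger(requesting_operator):
    """Return only the ledger entries the requesting operator administers."""
    return [r for r in RECORDS if r["operator"] == requesting_operator]

visible = read_ledger("bank_a")
# bank_a sees its two accounts; bank_b's record is filtered out
```

In practice such filtering would be enforced by the database or access-control layer (not by each application), so that no single code path can bypass it.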

Privacy and compliance. At the same time, most countries have rules and laws like AML regulations that would need to be appropriately enforced. For example, in the United States, it is mandatory to report the receipt of more than $10,000 in cash payments to the IRS. Obviously, policy makers do not want a CBDC system to become widely used for illicit activities or to create materially new problems for the enforcement of criminal law (more than what exists with cash). Another example of concern is that if holding large amounts of CBDC money is made safe and easy, users might be tempted to migrate their savings from commercial banks to a CBDC format.51 There are some concerns that this could threaten the safe operation of commercial banks (e.g., increase the possibility of bank runs during financial crises) and, under certain conditions, the stability of the monetary system. 

For various reasons, it may be desirable to create a system where users enjoy some measure of privacy, but at the same time authorities are still able to enforce laws such as how much CBDC money can be spent, received, or held. These two requirements are to some extent in tension, since most currency variants are able to provide one or the other, but not both. For example, a database that holds account balances, or a ledger that records plaintext transactions, is easy to regulate but provides no end-user privacy. A ledger that records private transactions, similar to systems like Zcash or Monero, provides privacy but is hard to regulate because payment details like identities and amounts are not disclosed even to infrastructure nodes like validators who process the payments.

One of the most promising approaches to providing both privacy and compliance is to use cryptographic zero-knowledge proofs to construct payments that preserve user privacy (as much as possible) but can be verified to conform to specific regulatory rules. One example is a solution where the amount of each payment is hidden (e.g., encrypted) and each transaction must be accompanied by a zero-knowledge proof showing to the regulator that the combined value of all payments received by the same user within the current time period (e.g., one month) remains below a certain allowed limit (like $10,000).52
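The shape of such a scheme can be sketched as follows. This Python fragment contains no real cryptography; `prove_below_limit` and `verify` are transparent stand-ins for the prover and verifier of an actual zero-knowledge proof system, shown only to illustrate the interface (the verifier accepts or rejects without ever seeing individual payment amounts):

```python
# Interface sketch (NOT real cryptography) of a zero-knowledge
# "receiving limit" check: the prover shows that the sum of payments
# received this period stays below a limit, without revealing the
# individual amounts to the verifier.

LIMIT = 10_000

def prove_below_limit(amounts, limit=LIMIT):
    """Prover side. A real system would output a succinct ZK proof over
    encrypted amounts; here the proof is a transparent stand-in."""
    return {"claim": "sum_below_limit", "ok": sum(amounts) < limit}

def verify(proof):
    """Verifier (validator/regulator) side: accepts or rejects the proof
    without access to the underlying payment amounts."""
    return proof["claim"] == "sum_below_limit" and proof["ok"]

received = [1_200, 3_500, 4_000]                 # hidden from the verifier
assert verify(prove_below_limit(received))       # total 8,700 is in bounds
assert not verify(prove_below_limit(received + [2_000]))  # 10,700 exceeds the limit
```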

Another example is a solution where the zero-knowledge proof shows that the updated account balance of the recipient is below a certain limit (say, $50,000) without revealing the exact account balance to the payment validators or the regulator. The first technique could mimic the current rules regarding the reporting obligation for large cash payments, while the second technique could be used to address excessive migration of bank deposits to protect the stability of banks.

Takeaway: Privacy and compliance can coexist
Providing users with strong privacy protections and regulators with the extensive oversight they may desire are two inherently conflicting requirements. However, recent research developments have shown that it is possible to design digital currencies where these two requirements may coexist, at least to some extent. For example, it is possible to realize a digital currency where payment details remain fully private as long as the total value of all payments by the same individual does not exceed a certain predefined threshold value (say, $10,000 per month). In such a system, fully private payments are allowed up to a certain monthly limit and if the individual exceeds that limit, the regulatory authority is able to see the details of payment transactions. As discussed further below, privacy issues must be squarely addressed at the legislative level.

In addition to zero-knowledge proofs, other privacy-preserving techniques have also been studied and proposed in the research literature. Fully homomorphic encryption and private set intersection are two examples. Such techniques are not used or required in the design variants that are the focus of this chapter but may enable new privacy-preserving currency designs in the future. 

Privacy and network traces. Third-party adversaries with access to the network layer (e.g., ISPs) or compute layer (e.g., cloud service providers) can potentially de-anonymize transactions.53 This threat can (and should) be mitigated in part by encrypting all traffic between validators and end users. This is not possible in permissionless cryptocurrencies, where all transactions are meant to be publicly broadcast. In a CBDC, though, there is no reason for third parties to have access to transaction packet contents. 

Privacy and performance. Cryptographic privacy protections can have important implications for other security and efficiency properties. For example, zero-knowledge proofs increase the computational overhead of creating and validating transactions. This overhead can impact latency and, if implemented poorly, throughput (e.g., if the system requires interactive zero-knowledge proofs). More generally, the use of cryptography limits the kinds of operations that can be performed by validators on encrypted transactions. Thus, the system needs to be designed much more carefully to anticipate the kinds of computation that may be necessary down the line, for example, related to regulatory compliance.

Summary. Cryptography-based privacy solutions like zero-knowledge proofs for AML/KYC compliance are still an active area of research. The performance implications of initial research proposals need further validation, and more sophisticated solutions are likely to be proposed in the near future. Fortunately, however, the initial results indicate that reconciling regulation and privacy is not an impossible task, and central banks can consider solutions that ensure both as part of their technology road map.

Cybersecurity frameworks

Over the past few decades, best practices and expert recommendations on how to build and deploy secure IT systems have been collected in various cybersecurity frameworks and standards. The ISO 27000 series and the National Institute of Standards and Technology’s Cybersecurity Framework (NIST CSF) are two popular examples.

Regarding the design and build phases of IT systems, such cybersecurity frameworks may, for example, mandate how the system design process should be documented or what kind of testing methods should be used. The frameworks can also provide security-related checklists that system designers and programmers may follow. Regarding the operation of a deployed IT system, such frameworks may provide organizational advice, such as who should be allowed to access confidential user data or how the organization should respond to possible security incidents. Many experts agree that leveraging such frameworks can be useful (if the added cost is acceptable for the project at hand).

For the creation of a CBDC, cybersecurity frameworks can provide similar benefits (and costs) as for other IT systems. Careful application of a chosen cybersecurity standard can, for example, help to ensure that the design process is documented appropriately, the software testing phase is performed based on industry best practices, and appropriate measures are in place to respond to possible security incidents.

However, the existing cybersecurity frameworks and standards do not provide advice regarding some of the most challenging and fundamental design choices related to the creation of a CBDC. As we have discussed earlier in this chapter, each digital currency variant provides a different security, privacy, and performance trade-off and comes with its unique set of risks and challenges. The currently available cybersecurity frameworks do not explicitly help system designers make critical choices such as which digital currency variant to choose. Therefore, the designers of future CBDC systems may need to consult a broader set of resources (such as the analysis presented previously in this chapter) during the design process.

Chapter 1 summary

In this chapter, we have analyzed the cybersecurity aspects of CBDCs. Our discussion first identified the main roles and entities involved in a CBDC deployment. After that, we discussed possible threat models and the key security requirements. Using such a framework, we then analyzed various possible digital currency design alternatives and compared their main advantages, drawbacks, and cybersecurity challenges. The main takeaways of this chapter are as follows.


The main takeaways of this chapter

  • CBDC deployment may introduce new cybersecurity risks. While a CBDC would be subject to many of the same cybersecurity risks as the existing financial systems, deployment of a CBDC would also create new cybersecurity risks, such as increased centralization, reduced regulatory oversight, increased difficulty of reversing fraudulent transactions, challenges in payment credential management, malicious transactions enabled by automated financial applications, and increased reliance on non-bank third parties. The exact set of risks depends largely on the design and deployment of a given CBDC. 
  • The design space for CBDCs is large. While most CBDC reports identify centralized databases, distributed ledgers, and token models as possible digital currency designs, our discussion shows that the design space for digital currency systems is actually larger than that. Currencies can also be realized as signed balance updates or as a set of trusted hardware modules. Both ledger and token-based payments can be made private through cryptographic protections. 
  • CBDC deployment might centralize user data collection. The main difference between a (centralized) CBDC deployment and the current financial system is that the CBDC may result in a greater centralization of user data and financial infrastructure. This can have advantages, such as new options for implementing monetary policy, but it can also have serious privacy and security disadvantages.
  • Privacy-preserving designs can also be more secure. If a CBDC deployment without privacy protections gets breached, either by an external attacker or a malicious insider, then large amounts of sensitive user information are disclosed to unauthorized parties. In a privacy-preserving CBDC deployment that avoids collecting sensitive user data in the first place, or that restricts access to it even for trusted system insiders, breaches will have significantly less severe security consequences.
  • Strong user privacy protection is possible. While some recent reports imply that CBDCs would inherently reduce the privacy of users, our review of recent research developments has shown that it is possible to design a digital currency system where transaction details are hidden even from the payment validators and infrastructure. We argue that such systems would provide a level of privacy that is comparable to cash.
  • Privacy and compliance can coexist. User privacy protection and enforcement of compliance rules are at odds with each other, and simple system designs can typically achieve only one or the other. Our review of recent research advancements indicates that it is possible to design systems where users enjoy reasonable levels of payment privacy and regulatory authorities can at the same time enforce common compliance rules. 
  • The use of proven protocols is important. Distributed security protocols, such as Byzantine fault-tolerant consensus protocols, are notoriously difficult to design securely. Our discussion shows that several current CBDC pilot projects rely on consensus protocols that lack strong, peer-reviewed security proofs. We discourage the use of such potentially unsafe protocols as key components of CBDC deployments.

Chapter 2: Policy recommendations—Principles for future legislation and regulation

At this early stage of CBDC research and development, the precise nature of the cybersecurity risks presented by CBDCs will depend significantly on the design and implementation decisions made by governments, legislatures, and central banks around the world. In the US context, there is no concrete decision on a CBDC design, let alone a definitive prototype, set of corresponding public policies, or authorizing legislation. That makes it somewhat more challenging to offer detailed recommendations at this early juncture.54 This chapter identifies key principles to help guide policy makers and regulators as they continue to explore, and potentially deploy, a CBDC with robust cybersecurity protections in mind.

Principle 1: Where possible, use existing risk management frameworks and regulations 

Cybersecurity policy around CBDCs need not entirely reinvent the wheel. There are already a variety of laws, safeguards, and requirements in place to protect the traditional banking sector and consumers from cyberattacks, some of which might directly apply (in the case of a CBDC administered by a nationally chartered bank) or which might serve as a useful model for future adaptation. 

For example, in the United States, a combination of bank and non-bank regulators, federal statutes, state laws, and private sector standards shape cybersecurity in the traditional financial services sector.55 These include the Gramm-Leach-Bliley Act of 1999 (on data privacy and security practices), the Sarbanes-Oxley Act of 2002 (reporting requirements), the Fair and Accurate Credit Transactions Act of 2003 (regarding identity theft guidelines), and the Bank Service Company Act of 1962 (regarding onsite examinations and proactive reporting of cybersecurity incidents).56 The Federal Deposit Insurance Corporation (FDIC) alone offers detailed guidance and resources on cyber risks and examinations for banks.57

Depending on how a CBDC was designed and deployed, some of these laws might apply directly or indirectly. For example, particularly to the extent a two-tier CBDC would be administered or held by banks, regulators would likely have to carefully review compliance with existing security frameworks and standards. Likewise, to the extent that a CBDC is administered or held by a fintech company—such as a mobile payments app, neobank, or hot wallet—then a number of existing laws would probably apply.58 In some instances, it will be prudent to streamline or deconflict preexisting regulations that overlap and apply to CBDCs in needlessly complex ways.

As a first step, policy makers and regulators should assess which areas of a new CBDC ecosystem will be covered by current regulations and where novel statutes—or new technical frameworks—might be necessary to provide adequate protection. Examples of existing cybersecurity frameworks include the NIST CSF, which “provides a comprehensive framework for critical infrastructure owners and operators to manage cybersecurity risks,”59 and the Committee on Payments and Market Infrastructures and the Board of the International Organization of Securities Commissions’ Guidance on Cyber Resilience for Financial Market Infrastructures.60 The G7’s “Fundamental Elements of Cybersecurity for the Financial Sector” offers policy makers another measuring stick to compare a CBDC’s necessary regulations against.61 Chapter 1 of this report details the benefits of using these frameworks, but stresses that none of the current schemes fully address the most challenging and fundamental choices related to designing a secure and resilient CBDC. Therefore, we encourage policy makers to begin collaborating with industry associations and leveraging international fora to update current frameworks using resources such as this report. The European Union (EU) provides a useful example as its current banking and stability provisions will cover certain aspects of new FinTech innovations.62

Regulators may have to balance old and new regulation, as well as weigh potentially competing policy values, such as security, innovation, competition, and speed of deployment. When crafting new regulations for a CBDC, policy makers and regulators should set the conditions for a safe digital currency ecosystem that enables financial intermediaries to innovate and compete.63 For a two-tier retail CBDC system, which according to the Atlantic Council’s CBDC Tracker is the most popular architecture choice,64 regulators will have to devise rules for private payment service providers (PSPs) that extend beyond commercial banks to cover activities by nontraditional financial firms involved in operating the CBDC. Alipay and WeChat’s important role as technology providers in the rollout of China’s e-CNY underscores this point.65 A CBDC’s design choices will determine where policy makers and regulators need to step in to provide new frameworks that protect participants from cyber risks. For example, as explained in Chapter 1, a CBDC with token-based wallets would both place a higher burden on consumers to keep their money safe and require policy makers to develop “a regulatory framework for custodial wallets with the necessary consumer and insolvency protections.”66 Related to wallets’ vulnerabilities, policy makers should consider putting in place consumer protections for data custody, including rules on the storage redundancy (and data retention limits) of transaction records and wallet balances. Doing so could insulate consumers and banks from the long-term impacts of breaches, technical failures, and fraud, enabling more rapid recovery and response from such incidents.

Given that certain CBDC designs might put a potentially higher burden on consumers to protect themselves against cyber fraud and theft, governments should engage PSPs and consumer protection groups to roll out cyber risk education campaigns well before launching a CBDC. As discussed in the background chapter, the credit card industry offers a cautionary tale of phishing and other cyber scams’ severe costs for consumers and the industry. A successful educational campaign would raise awareness among CBDC users about how to identify and protect themselves against a wide variety of cyberattacks. In addition to learning appropriate cyber hygiene when using wallets and other CBDC applications, consumers must be informed of their legal rights and responsibilities that come with holding and transacting in digital currency.67  At the same time, a CBDC must not offload all (or most) of the responsibility for cybersecurity onto its users. 

Principle 2: Privacy can strengthen security 

One of Chapter 1’s key findings is that privacy-preserving CBDC designs may also be more secure because they reduce, for example, the risk and potential harmful consequences of cyberattacks involving data exfiltration. CBDCs with stronger privacy rules may generate and store less sensitive data in the first place. In turn, potential attackers have a smaller incentive to infiltrate the system, and if an attack is successful, the impact would be less severe. Our research also shows that CBDCs can offer cash-like privacy while potentially providing more efficient oversight options to regulatory authorities. To build a CBDC, policy makers in the US Congress and their colleagues around the world should carefully examine the relationship between privacy and security. They should weigh the findings of this report before making foundational decisions about a CBDC’s level of privacy, which will filter through to the digital currency’s design and determine its cybersecurity profile.

As part of the privacy question, policy makers must decide when, whether, and how users will prove their digital identity to access a potential CBDC. This report outlines how different CBDC designs can rely, among other access solutions, on conventional digital versions of current identification credentials, knowledge-based cryptographic keys, or a mix of different approaches. Policy makers’ decisions regarding digital identities are broader than CBDCs, but the design choices will once again determine what type of CBDC architectures are possible. Thus, policy makers should include considerations about the cybersecurity profile of a potential CBDC when deliberating the future of digital identification.68 Should the US Congress, for example, decide to create an entirely new digital identity infrastructure, such a system would need to be integrated at the outset with the cybersecurity frameworks of a potential digital dollar. Moreover, as explained in the below principle on interoperability, US policy makers would need to ensure that any domestic digital identity schemes are compatible with future global standards. To mitigate risks of accepting and sending foreign transactions, US policy makers and regulators would need to work with their global counterparts to make sure any transactions involving third countries comply with the appropriate US digital identity standards and safeguards. As a result, global standard-setting efforts to create secure, interoperable CBDC ecosystems could also help lead a push on harmonizing international digital identity regulations. The G7’s “Roadmap for Cooperation on Data Free Flow with Trust,” which focuses on “data localization, regulatory cooperation, and data sharing,” could provide a high-level blueprint for harmonizing countries’ digital identity approaches.69

To address privacy risks from a CBDC’s increased centralization of payment processing and sensitive user data, governments must establish clear rules around who has access to which data, for what specific reason, and for how long. This includes explicitly delineating responsibilities of Anti-Money Laundering/Combating the Financing of Terrorism (AML/CFT) compliance between the private and public sector stakeholders of a CBDC. There are a range of actions that Congress could take in authorizing legislation for a possible CBDC. The Biden administration’s 2022 Executive Order on Ensuring Responsible Development of Digital Assets directs the “Attorney General, in consultation with the Secretary of the Treasury and the Chairman of the Federal Reserve” to “provide to the President . . . an assessment of whether legislative changes would be necessary to issue a United States CBDC, should it be deemed appropriate and in the national interest.”70 This assessment, and any related or competing legislation that members of the US House of Representatives or the US Senate draft in the coming months, could advance important requirements regarding the overlap between security and privacy.

Specifically, Congress could consider the following measures in legislation related to a CBDC:

  • Original collection: Delineate or limit what personal information/consumer data is originally collected from consumers as part of a CBDC system and in daily transactions—and what should not be collected. For example, limit data related to the underlying item purchased, the location of the transaction (GPS coordinates), or other metadata available to the Fed or other actors in a disintermediated system.
  • Subsequent deletion: Set out a data retention or deletion policy, for example, requiring the periodic deletion (and/or meaningful anonymization) of CBDC data after a set period of time.
  • Universal searches: Establish internal security standards (including logs and audit procedures) about which personnel can search repositories of CBDC data—as well as how often and how extensively they may do so, and under what forms of supervision. By way of comparison, other government databases have experienced problems when a rogue government employee has complete discretion to perform universal search queries across millions of sensitive records, for example, about a former spouse, an ex-girlfriend, or fellow employee.
  • Fourth Amendment: Apply Fourth Amendment protections (and federal case law about unreasonable searches and seizures), including to personally identifiable information contained in CBDC repositories. Practically, this would mean that prosecutors would need a warrant to access certain personal records.
  • Subpoenas and review: When civil subpoenas are applicable, consider transparency mechanisms and procedures that would allow citizens to seek review before a CBDC system or administrators discloses personally identifiable information.
  • Remedies: Consider penalties or remedies that should be available if and when a privacy violation should occur (particularly when it is severe or pervasive).
  • Reports: Require annual reports on privacy-related issues (including a review of breaches or relevant inspector general reports), for example, to the Privacy and Civil Liberties Oversight Board, with a courtesy copy to relevant House or Senate oversight committee(s).
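As a minimal sketch of how a data retention rule such as the one suggested above might be enforced mechanically, the following example sweeps records older than an assumed five-year window (the window length and field names are illustrative, not proposals):

```python
# Illustrative retention-policy sweep: drop transaction records older than
# a configured retention window. The 5-year window and record fields are
# assumptions made for this example only.

from datetime import datetime, timedelta

RETENTION = timedelta(days=5 * 365)

def sweep(records, now):
    """Keep only records within the retention window."""
    return [r for r in records if now - r["timestamp"] <= RETENTION]

now = datetime(2030, 1, 1)
records = [
    {"id": 1, "timestamp": datetime(2024, 6, 1)},   # older than the window
    {"id": 2, "timestamp": datetime(2029, 6, 1)},   # within the window
]
kept = sweep(records, now)  # only record 2 survives the sweep
```

A legislated policy might instead require anonymization rather than deletion; the mechanical shape (a periodic sweep driven by a statutory window) would be similar.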

Principle 3: Test, test, and test some more 

Governments should ensure that they have full access to, and can directly oversee, security testing and audits for all CBDC implementation instances. There are also security and procurement benefits to making the relevant code bases open-source, which the Federal Reserve Bank of Boston has chosen to do with its current collaboration with MIT’s Digital Currency Initiative.71

When it comes to selecting a technical platform for pilot CBDC programs, policy makers should carefully consider the key contractual terms they negotiate with vendors: who will own and have access to the code base, and who will be responsible for testing and auditing that code. Regulators may find advantages in using multiple implementations and code bases to avoid relying on a single vendor (or a single, closed-source code base) in a way that could create a single point of failure, but for each instance or implementation, governments will have to carefully negotiate these code ownership, maintenance, and testing responsibilities.

The recent executive order on cybersecurity highlighted the importance of testing as a tool for efficiently and automatically identifying vulnerabilities.72 In the context of a CBDC, testing will be important at multiple layers of the stack. For example, at the hardware and application layers, wallet software and hardware should be tested for vulnerabilities that could enable attackers to steal funds from users, exfiltrate data, or prevent the execution of transactions. At the same time, central banks may be using smart contracts to govern the dissemination of funds. Smart contracts, which digitally facilitate the execution and storage of an agreement, will be critical to many future CBDC applications. Take government stimulus payments as a use case. For aid distribution to be governed by smart contracts in the future, in-person reviews conducted by engineers, who read the smart contract code and grant approval, may not be sufficient to ensure accountability. Bugs in smart contracts, which could incorrectly execute the dissemination of funds, have already caused massive losses in cryptocurrencies.73 To complement in-person reviews, there is a strong case for instituting automated reviews to verify smart contracts. One option is the use of formal methods, mathematical techniques for checking that code conforms to its specification. In addition, regulators should consider lessons from other smart contract designs, for instance, in the Ethereum ecosystem, to craft policies for a gradual rollout that are designed to catch implementation or design errors early in smart contracts’ development. Smart contract testing is itself an active area of research and may hinge upon the specific code in use.
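One simple form of automated review is an invariant check run over many randomized transaction sequences; the sketch below (with an invented `transfer` function) checks a "money conservation" property, a much weaker cousin of full formal verification:

```python
# Sketch of an automated invariant check ("money conservation") that a
# review pipeline could run against a contract before rollout. The
# transfer function and random-input harness are illustrative.

import random

def transfer(balances, sender, recipient, amount):
    if amount <= 0 or balances[sender] < amount:
        return balances  # reject invalid transfers, leaving state unchanged
    out = dict(balances)
    out[sender] -= amount
    out[recipient] += amount
    return out

def check_conservation(trials=1000, seed=0):
    """Property: no sequence of transfers may create or destroy money."""
    rng = random.Random(seed)
    balances = {"a": 500, "b": 500, "c": 500}
    total = sum(balances.values())
    for _ in range(trials):
        sender, recipient = rng.sample(list(balances), 2)
        balances = transfer(balances, sender, recipient, rng.randint(-50, 200))
        assert sum(balances.values()) == total, "money not conserved"
    return "invariant holds"

print(check_conservation())
```

Formal methods go further by proving such properties for all possible inputs rather than a random sample, but randomized invariant checks of this kind are cheap to automate and catch many implementation errors early.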

At the consensus layer, third-party vendors may provide software that implements the database management software and/or consensus management software for validators. This software should be thoroughly tested for call sequences that can induce faults in the liveness and/or correctness of the system. 

Especially in the early days of pilot programs, CBDCs will require extensive testing and security audits. Governments will either need in-house expertise to conduct these audits or will need to contract with additional vendors to perform the necessary testing and security assessment. Open-source CBDC code bases may allow for broader participation in the security testing process, especially when combined with longer-term bug bounty programs, but they still require due attention to the security testing process. To enable this extensive testing and auditing, the US Congress must provide for the necessary appropriations as part of the budget process.74

Principle 4: Ensure accountability

Establishing accountability across all parts of a CBDC’s technical design is a necessary precondition for a secure and resilient CBDC ecosystem in the face of cyberattacks. The previous principle illustrated the importance of testing software (including smart contracts) prior to deployment. However, testing alone is not enough. Every major piece of software deployed in practice has bugs, and the same will be true of CBDCs. Given this, CBDCs need to establish clear rules and policies surrounding accountability for errors, and resulting consequences. For example, if a CBDC deploys a smart contract that allows citizens to withdraw twice as much money as was initially intended, who is responsible? The developer of the smart contract? The company that hired the developer? The central bank? Such accountability policies should be determined ahead of time, along with a plan for dealing with eventual challenges and disclosing the relevant vulnerabilities if and when they arise. Similar problems might occur with certain CBDC designs that make it impossible to revoke fraudulent or contested transactions. Policy makers should establish clear lines of responsibility for public authorities, PSPs, and users to cover potential losses and refund payments. To minimize the risk of attackers using hardware vulnerabilities to infiltrate CBDCs, policy makers might also consider processes to certify hardware suppliers and collaborate with the private sector to secure all parts of the supply chain.

Another important need for accountability arises at the consensus layer. Particularly with CBDCs that rely on DLT,75 it is paramount to clearly establish accountability requirements among validators on the blockchain. In DLT-based CBDCs, security hinges on most of the participating validators behaving correctly. If one validator or node is compromised, that compromise may have exploited a vulnerability that remains unpatched among other validators as well. For this reason, robust reporting requirements must ensure that all other stakeholders learn about security breaches as quickly as possible to reduce the risk of attackers exploiting the same vulnerability across multiple validators. This, in turn, mitigates the risk of validators approving faulty transactions. Concretely, there may be a need for baseline requirements determining how quickly validators must notify other stakeholders upon discovery of a breach or malfunction. Analogous requirements exist in trade finance, but the timescales for notifying other parties of a breach are much slower there. In a DLT-based CBDC, validators' accountability, particularly with regard to reporting and vulnerability disclosure, becomes much more urgent because of the potential for cascading effects across the blockchain and CBDC ecosystem. 

Liability considerations

Another set of important questions for policy makers to answer revolves around the issue of liability for CBDCs and who will be legally responsible for covering the costs of cybersecurity incidents (i.e., theft of consumer data or funds). The liability question illustrates different options policy makers have at their disposal to approach CBDC cybersecurity regulation. This is an area where existing financial regulation for traditional banking lays out clear and largely pro-consumer rules for financial fraud and theft. On the one hand, policy makers could aim to implement similarly specific consumer protection-oriented rules for CBDC implementation at the outset of their development, especially because these rules will not inhibit specific innovations in the technical design of the CBDCs. By placing some responsibility or liability for fraud on the operators of a CBDC implementation, policy makers can incentivize the groups designing these systems to invest in greater security and oversight without dictating exactly how those goals should be achieved. This approach potentially allows for greater flexibility than security regulations that dictate specific standards or controls, but it also might provide less concrete security guidance to the vendors responsible for designing these systems. 

Standard setting

A different approach for policy makers would be to set concrete technical standards for CBDCs that include security and privacy protections. These standards do not yet exist, and they are unlikely to emerge until specific CBDC implementations have been piloted long enough to move policy makers toward concrete decisions on CBDCs. 

In many circumstances, it may be more effective for the federal government to consult with—or expressly rely upon—private or nonprofit consortiums that develop and maintain technical standards. Policy makers and industry stakeholders may find some useful road maps in the existing standards, like the EMV standard for chip credit cards or the Data Security Standard published by the Payment Card Industry Security Standards Council. Voluntary technical security standards and protocols, like SSL and TLS, provide another model for standards development, though because they are not mandated or accompanied by liability regimes that incentivize their implementation, these models may be of more limited utility for securing a large-scale CBDC implementation. For early stage, small-scale pilot projects, however, voluntary technical standards may suffice to help provide some security guidance to initial vendors and provide some early data on which standards are most effective at preventing security breaches. 

Principle 5: Promote interoperability 

In a domestic context, policy makers should develop rules to ensure that a CBDC is interoperable with the country’s relevant financial infrastructure and can serve as an “effective substitute.”76 This will increase the resiliency of countries’ financial systems against failures due to cyberattacks and is a key benefit of adopting a CBDC. 

To strengthen the security of CBDC systems, it is also critical to promote global interoperability between CBDCs through international coordination on regulation and standard setting. Through its body of research, the Atlantic Council has long stressed the need for US leadership “to shape the trajectory of CBDC”77 and specifically develop strong international cybersecurity standards through fora, including the G20, Financial Action Task Force (FATF), and Financial Stability Board (FSB), to “ensure countries create digital currencies that are both safe from attack and can safeguard citizens’ data.”78 The Biden administration’s recent executive order on digital assets,79 which outlined the US government’s goal to take a more active role in global standard-setting bodies for CBDCs and encouraged US participation in cross-border CBDC pilots, is a welcome step forward. US policy makers should explore a transatlantic CBDC cross-border wholesale trial with an explicit focus on standards development and mitigation of cyber threats. By involving FATF and FSB in such a CBDC pilot, regulators could ascertain where current international standards provide sufficient protections, in what areas new rules are necessary, and what new regulations might look like. 

Regulators should also study ongoing and completed cross-border CBDC trials, including Project Dunbar and mCBDC Bridge, to build on these projects’ cybersecurity findings for future tests. Based on our research, we understand that several countries are interested in collaborating with the United States on cross-border pilot projects using both wholesale and retail CBDCs. Through its innovation hub and linkages with the banking industry, the Federal Reserve Bank of New York may be particularly well placed to lead on wholesale testing. Given the Federal Reserve Bank of Boston’s continued work on a retail-based CBDC, it could facilitate retail testing with other central banks. 

The cyberattack on Bangladesh Bank, as detailed in the appendix, illustrates the risk of attackers using cross-border financial infrastructure, in this case SWIFT, to infiltrate a central bank. While cross-border payments via CBDCs will be settled differently, the case of Bangladesh Bank underscored the importance of incorporating cybersecurity considerations into payment verification mechanisms from the outset. A question of central importance is how to handle incoming international transactions that are validated and confirmed using different, possibly weaker, security standards. Accepting such transactions (and building upon them) can have cascading effects at a faster timescale than in the traditional financial system. 

It is important to note that the United States does not need to reach a final decision on issuing a CBDC to have enormous influence on the design of CBDCs around the world. If Congress were to authorize a limited cross-border testing project with the goal of determining cybersecurity vulnerabilities and protecting user privacy, this alone would send a strong signal to central banks that are further along in the CBDC process. 


Principle 6: When new legislation is appropriate, make it technology neutral

In the United States, Congress has considered a sizable number of bills related to cryptocurrency, including several directly about CBDCs. For example, the bipartisan Responsible Financial Innovation Act introduced by Senators Lummis and Gillibrand requires an interagency report on cybersecurity standards and guidance for all digital assets, including CBDCs.

Few of these draft bills have moved out of committee or reached a successful floor vote in Congress, so it is difficult to make nuanced recommendations about granular legislative changes or comparisons at this point.

Still, two overarching points are worth highlighting:

First, Congress is still in a prime position to study and oversee the application of federal cybersecurity laws to a potential CBDC. Past or pending legislation scarcely mentions cybersecurity in any depth. One of the more detailed provisions is in H.R. 1030, titled the “Automatic Boost to Communities Act,” introduced by US Rep. Rashida Tlaib (D-MI),80 which states that:

“(i) (1) (G) Digital dollar account wallets shall comply with the relevant portions of the Bank Secrecy Act in establishing and maintaining digital dollar account wallets and shall impose privacy obligations on providers under the Privacy Act of 1974 that mirror those applicable to Federal tax returns under sections 6103, 7213(a)(1), 7213A, and 7431 of the Internal Revenue Code of 1986…

“(i) (3) (C) a Digital Financial Privacy Board shall be— (i) established by the Secretary to oversee, monitor, and report on the design and implementation of the digital dollar cash wallet system; (ii) maintained thereafter to provide ongoing oversight over its administration; and (iii) designed in such a way as to replicate the privacy and anonymity-respecting features of physical currency transactions as closely as possible, including prohibition of surveillance or censorship-enabling backdoor features.”81

Also relevant is H.R. 2211, the “Central Bank Digital Currency Study Act of 2021,” introduced by US Rep. Bill Foster (D-IL), which commissions a study including:

“(1) consumers and small businesses, including with respect to financial inclusion, accessibility, safety, privacy, convenience, speed, and price considerations (emphasis added);

“(7) data privacy and security issues (emphasis added) related to CBDC, including transaction record anonymity and digital identity authentication;

“(8) the international technical infrastructure and implementation of such a system, including with respect to interoperability, cybersecurity, resilience, offline transaction capability, and programmability (emphasis added).”82

However, this bill, in particular, may have been overcome by the Biden administration’s issuance of the Executive Order on Ensuring Responsible Development of Digital Assets on March 9, 2022.83 That executive order commissions upwards of nine separate reports and repeatedly emphasizes the importance of privacy and developing a CBDC that comports with democratic values.84 Of particular relevance to cybersecurity are the portions of the executive order that ask “the Director of the Office of Science and Technology Policy and the Chief Technology Officer of the United States, in consultation with the Secretary of the Treasury, the Chairman of the Federal Reserve, and the heads of other relevant agencies” to study “how the inclusion of digital assets in Federal processes may affect the work of the United States Government and the provision of Government services, including risks and benefits to cybersecurity.”85

Second, Congress should keep in mind the overarching principle of technology neutrality, which favors laws that apply evenhandedly to different technologies over time—as opposed to a specific technological product or feature that may exist today (but be upgraded or overtaken by other innovations tomorrow).86 In the context of CBDCs, that may mean relying on incentives and accountability (described above) rather than setting precise numerical thresholds (such as an acceptable number of cyber incidents per year) or mandating specific NIST standards. Alternatively, Congress may consider setting CBDC security requirements at a fairly high level of abstraction and empowering a federal agency or private consortium to use its expertise to develop and periodically update the details.

Conclusion

This report seeks to shine a light on the novel cybersecurity risks that introducing CBDCs poses for governments, the private sector, and consumers. Our research demonstrates, however, that the design space for CBDCs is large and offers policy makers and regulators ample options to choose a technological design that is both reasonably secure and leverages the unique benefits a CBDC can provide.

According to recent surveys about using CBDCs, privacy is consumers’ number one concern.87 Our analysis shows that privacy-preserving CBDC designs are not only possible, but also come with inherent security advantages that reduce the risks of cyberattacks. At the same time, the report explains that CBDCs can offer authorities regulatory oversight while providing strong user privacy. In short, cybersecurity concerns alone need not halt the development of a CBDC. It is up to policy makers to make the appropriate foundational design choices that will enable central banks and PSPs to develop safe CBDCs.


To address other, cross-border cybersecurity risks of introducing a CBDC, policy makers should promote global interoperability between CBDCs through international coordination on standard setting. This applies to all governments irrespective of whether they decide to develop a digital fiat. Imbuing the process to craft global CBDC regulations with democratic values is in the United States’ national security interest. With more than 100 countries actively researching, developing, or piloting CBDCs, it is time to act to ensure domestic and international systems are prepared for the rapidly evolving digital currency ecosystems. The United States can and should play a leading role in shaping standards around the future of money.

Appendix

For understandable security reasons, the Federal Reserve (the Fed) has shared little detail about the vulnerabilities of its current systems and of the broader payments landscape. While this makes an exact evaluation of current dangers difficult, this report uses public information to outline cyber risks across the financial and payment systems. We focus on public and private wholesale layers and especially on Fed services since the central bank would presumably be the issuer of a digital dollar.

Fedwire, operated by the Fed, is the dominant domestic funds transfer system, handling both messaging and settlement. The Clearing House Interbank Payments System (CHIPS), privately operated and run by its member banks, fills a similar role for dollar-denominated international funds transfers.88 The Society for Worldwide Interbank Financial Telecommunication (SWIFT), operated as a consortium by member financial institutions, is a global messaging system that interfaces with Fedwire and CHIPS for the actual settlement of payments.89 On the horizon, the Fed's FedNow promises instant, around-the-clock settlement and service, with a full rollout over the next two years.

While Chapter 1 assesses CBDC cybersecurity from a global perspective, this appendix focuses on the US payment system given the dollar's reserve and vehicle currency status, the Fed's centrality to the wholesale payment system, and the diversity of layers. Studying the Fed's cybersecurity system also sheds light on other countries' approaches, as the Fed's payment cybersecurity practices are largely analogous to, and often the model for, those of other central banks considering the deployment of a CBDC.

Public wholesale layers

Fedwire: The Fedwire Funds Service is a real-time gross settlement (RTGS) system that enables "financial institutions and businesses to send and receive same-day payments."90 RTGS means that payments are processed immediately and are irrevocable, rather than being netted out over a longer time period. It operates twenty-two hours a day every business day and has thousands of participants who use it for "large-value, time-critical payments."91 To make a transfer, the master account of the sending institution is debited by its Federal Reserve Bank, and the master account of the recipient institution is credited.92 Payments are final, which makes it difficult to fix mistakes. In 2021, Fedwire handled more than 204 million transfers with a total value greater than $991 trillion, a sum more than forty times the United States' 2021 GDP.93 This translates into an average value of roughly $4.9 million per transfer, a mean that is reportedly skewed by a small number of high-value payments.94

In assessing Fedwire's cybersecurity, the Fed aims to satisfy the core principle that the system should possess "a high degree of security and operational reliability and should have contingency arrangements for timely completion of daily processing."95 On the reliability front, Fedwire has an availability standard of 99.9 percent. In 2013, it exceeded this standard for all forms of access.96 Any wholesale CBDC must achieve similar results to underpin the financial system. To preserve continuity of operations, the Fed focuses on both its own systems and those of Fedwire participants. The Fed requires high-volume and high-value Fedwire participants (core nodes) to participate in multiple contingency tests each year, including for their backup sites.97 To preserve the functionality of the core Fedwire service, the Federal Reserve Banks "maintain multiple out-of-region backup data centers and redundant out-of-region staffs for the data centers."98 Thus, Fedwire's availability is secured both through redundant systems and endpoint security.

The Fedwire network has a "core-periphery" structure: the top five banks are responsible for around half of the payment volume, and the most important banks have a far greater number of network connections.99 This concentration makes it a scale-free network: one with "most nodes having few connections but with highly connected hub nodes."100 As a scale-free network, Fedwire has "significant tolerance for random failures but [is] highly vulnerable to targeted attacks." A random failure is likely to happen at a small institution, while a targeted attack on a core node could impact large amounts of transfers and severely reduce liquidity.101 In this case, the Fedwire network could become "a coupled system where payments cannot be initiated until other payments complete," causing the entire system to grind to a halt.102
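The random-versus-targeted asymmetry of scale-free networks can be demonstrated with a small, purely illustrative simulation (all parameters arbitrary): grow a preferential-attachment graph, then compare the size of the largest connected component after deleting the same number of random nodes versus the highest-degree hubs.

```python
import random
from collections import deque

def preferential_attachment(n, m=2, seed=0):
    """Grow a scale-free graph: each new node links to m existing nodes chosen
    with probability proportional to their degree (Barabasi-Albert style)."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    pool = list(range(m))  # node ids repeated in proportion to their degree
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(pool))
        for t in chosen:
            adj[new].add(t)
            adj[t].add(new)
            pool.extend([new, t])  # repetition encodes the degree weighting
    return adj

def giant_component(adj, removed):
    """Size of the largest connected component after deleting `removed` nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

adj = preferential_attachment(500)
k = 25  # remove 5 percent of nodes
hubs = sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]  # targeted attack
rand = random.Random(1).sample(sorted(adj), k)                   # random failures
print("after targeted attack:", giant_component(adj, hubs))
print("after random failures:", giant_component(adj, rand))
```

Deleting the hubs shrinks the giant component more than deleting the same number of random nodes, mirroring Fedwire's exposure to attacks on core institutions.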

The Federal Reserve Bank of New York conducted a “pre-mortem” assessing how cyberattacks could disrupt Fedwire, specifically focusing on the type of targeted attack the network is vulnerable to.103 The researchers assessed how a cyberattack impacting the availability or integrity (core elements of the CIA triad) of a top-five financial institution ripples through the wholesale payments network. They found that, excluding the target bank, “6 percent of institutions breach their end-of-day reserves threshold.” When weighted by assets, this is equivalent to 38 percent of bank assets.104 Breaching the reserves threshold means that reserves fall significantly below a bank’s average level, impairing its liquidity and thereby its financial stability. The seizing up of the payments network is partly due to Fedwire’s structure, which enables the receiving of payments even if an institution cannot send or observe them, which means the impacted institution could become a “liquidity black hole.”105 Such an institution would receive payments, and, therefore, liquidity, from the rest of the financial system but not send any payments to other institutions, thereby draining liquidity from other institutions. This spillover is magnified further if banks strategically hoard liquidity in response to the disruption. If the attack lasts for several days, liquidity shortfalls could grow to reach $1 trillion by the fifth day, requiring a massive intervention from the Fed.106 Additionally, any attack on Fedwire could harm liquidity in financial market utilities (FMUs) like CHIPS and CLS, which are crucial to wholesale payments and foreign exchange markets, respectively.107 Since Fedwire operates as the plumbing of these other forms of infrastructure (meaning it handles the final settlement of payments), any compromise of Fedwire would impact them.
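The "liquidity black hole" dynamic can be captured in a deliberately stylized model (our own illustration, not the New York Fed's simulation): banks pass a fixed payment around a ring, and one compromised bank receives liquidity but never sends any.

```python
def simulate(n_banks=5, reserves=100, payment=10, rounds=30, black_hole=0):
    """Ring of banks; each round every healthy bank pays a fixed amount to its
    neighbor. The black-hole bank receives payments but never sends any."""
    bal = [reserves] * n_banks
    for _ in range(rounds):
        for i in range(n_banks):
            if i == black_hole:
                continue  # compromised: receives liquidity but sends nothing
            if bal[i] >= payment:  # can only pay out of available reserves
                bal[i] -= payment
                bal[(i + 1) % n_banks] += payment
    return bal

print(simulate())  # -> [400, 0, 0, 0, 100]
```

Bank 0 ends up hoarding the system's liquidity while the banks upstream of it drain to zero, which is the qualitative mechanism behind the spillovers described above.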

Eisenbach, Kovner, and Lee document how escalating levels of private information about network interconnectedness (breaches of confidentiality) and days with large payment volumes allow attackers to maximize damage and systemic risk.108 For example, an attacker who lingers in the network of a financial institution for months can observe payment patterns and choose the day when maximum damage will be inflicted.109 One additional vulnerability of Fedwire is that third-party service providers are often shared across institutions, making them attractive targets for attackers looking to take down the network.110

The history of disruptions to Fedwire paints a mixed picture of its resilience. In the aftermath of the September 11 attacks, payment volumes rebounded despite financial infrastructure failing in Lower Manhattan and core nodes essentially ceasing to function.111

Perhaps no incident better captures the vulnerability of Fedwire, and the broader public-private wholesale payment system, than the attempted heist of Bangladesh Bank in 2016. Hackers infiltrated the network of Bangladesh Bank, which lacked a firewall and was poorly secured. The attackers used the bank’s SWIFT messaging system to send fraudulent payment orders to the Federal Reserve Bank of New York. Despite issues with the messages that led them to be returned and resent, the differing time zones, work schedules, and absence of a communications channel between the two banks prevented Bangladesh Bank from being able to stop the New York Fed from transferring funds.112 It took four days after the attack for communication to be established, and the Fed had already sent $101 million of funds through Fedwire via correspondent banks.113 Nearly $1 billion could have been lost if not for the “total fluke” that the address of one recipient bank had the word “Jupiter,” which was the name of an oil tanker and a sanctioned Athens-based shipping company, triggering further scrutiny.114

While not a direct attack on the Fedwire network, the Bangladesh Bank incident illustrates how the integrity of the current wholesale payment system depends on the practices of individual nodes. While the 2016 attack aimed at monetary gain and not explicitly at systemic disruption, a successful theft of nearly $1 billion could easily have shaken confidence in the entire system. Additionally, the attack revealed the shocking reliance of the payment system on outdated technology. To its credit, the Federal Reserve Bank of New York set up a "24-hour hotline for emergency calls from some 250 account holders, mostly central banks" to prevent future miscommunication.115 As discussed in Chapter 1, the ledger technology of CBDCs could enable innovations to reduce the likelihood of unauthenticated, fraudulent payments and enable faster communication. That said, quicker final settlement could add risks, since there is less time available for catching mistakes.

The cybersecurity challenges and approach of Fedwire are similar to those of other major central bank payment systems. For example, the European Central Bank’s (ECB’s) TARGET2 RTGS system relies on SWIFT for payment messages, exposing it to the vulnerabilities of that system.116 Similar to the Fed, the ECB has focused on risks from TARGET2 participants via self-certification of information security and implementation of SWIFT’s Customer Security Programme.117

FedNow: After a long series of delays, the Fed is planning to launch FedNow in 2023 or 2024. This is an RTGS system that, unlike Fedwire, will operate twenty-four hours a day, three hundred and sixty-five days a year, and offer instant payments that are irrevocable. As discussed earlier, instant payments, while convenient, limit chances to retract fraudulent payments. The service will be available to financial institutions with accounts at Federal Reserve Banks through the FedLine network, meaning it will not be available to nonbanks.118 End users will encompass both individuals and businesses.119

While details are still limited, the Fed has promised to include fraud prevention tools to protect integrity, including transaction value limits (with a maximum set by the Federal Reserve Banks), conditions for rejecting transactions, and reporting features. Future features that may be implemented include aggregate transaction limits and centralized monitoring.120

Private wholesale layers

SWIFT: The Society for Worldwide Interbank Financial Telecommunication (SWIFT) system is a messaging system used for international payments and run by a consortium of member banks. While Fedwire and CHIPS handle both messaging and settlement, SWIFT only acts as a uniform messaging service for funds transfer instructions. Financial institutions can then "map" a SWIFT message into a Fedwire or CHIPS message for the actual transfer of funds.

From January 2015 to January 2018, at least ten hacks were based on SWIFT, leading to initial losses of $336 million and actual losses of around $87 million.121 As highlighted in the section on Fedwire, one of these attacks was on Bangladesh Bank and relied on infiltrating its SWIFT messaging system. This is the chief vulnerability of the SWIFT system: attackers gain access to the messaging capability of a member bank, observe payment patterns, and then begin sending fraudulent payment messages. Since the Bangladesh Bank hack, SWIFT has taken several steps to shore up its defenses, focusing on stronger security standards and quicker response.122

Quicker response means that attacks can be stopped during the preparation period, before fraudulent transaction instructions are sent out. However, banks further down the payment chain can also stop transactions.123 During the Bangladesh Bank hack, this was possible due to the lag in actual settlement. In a cybersecurity report, SWIFT notes that the role of other institutions will become even more important "as the speed of cash pay-outs increases."124 Wholesale CBDCs could offer even faster payments, decreasing the time available to retract a payment and requiring quick action by banks involved in settlement to stop fraud.

Following the Bangladesh Bank attack, SWIFT introduced the Customer Security Programme (CSP) with three pillars: "(1) securing your local environment, (2) preventing and detecting fraud in your commercial relationships, and (3) continuously sharing information and preparing to defend against future cyber threats."125 Most recently, in 2019, SWIFT introduced the Customer Security Controls Framework (CSCF) as part of the CSP, which requires member banks to implement certain levels of security standards. The CSP has been successful in reducing successful attacks and securing SWIFT's integrity.126

While wholesale CBDCs will reshape the messaging and settlement functions of international payments, the SWIFT network’s vulnerabilities illustrate the vital role of banks in securing their own systems.

Retail payments

Physical cash: The most basic form of retail payments, and the only current public layer, is paper money. It is worth noting that cash also has security risks, even if these risks are not in the cyber realm. While the confidentiality and availability of cash are not of concern, cash can be counterfeited or physically stolen, damaging its integrity. The Treasury Department devotes technical effort to developing anti-counterfeiting features, such as holograms, paper selection, ink formulation, and artistic design, as well as other security measures, including serial numbers and storage at regional Federal Reserve Banks.127 Federal Reserve Banks screen currency to identify possible counterfeits and send these to the Secret Service for investigation.128 Additionally, physical cash provides perspective on the privacy trade-offs of CBDCs. While cash is largely anonymous, any cash transaction over $10,000 must be reported to the Internal Revenue Service (IRS) on Form 8300 to assist in combatting money laundering.129

Payment cards: Credit and debit cards are highly targeted by cybercriminals. In 2018, nearly $25 billion was lost to payment card fraud worldwide.130 Such fraud, which often is part of identity theft, increased by more than 40 percent in 2020.131 With new data breaches emerging faster than consumers can keep track of, enormous amounts of stolen credit card information are available for purchase. In the past, payment card fraud often occurred in person, with criminals using "skimmers" to collect data at ATMs or gas stations and then replicating cards for use at point-of-sale terminals. More recently, online fraud has become more prevalent due to chip cards and the movement to e-commerce. Hackers now use digital skimmers, installing malware in a merchant's website to collect data for use in online purchases.132 While credit card companies often offer fraud protection tools to shield consumers from losses, the prevalence and annoyance of credit card and identity theft show that the current retail payment system is far from risk-free in terms of confidentiality and integrity.

The industry has taken steps to address this problem, with several major companies founding the Payment Card Industry Data Security Standard (PCI DSS) in 2006. PCI DSS is a set of twelve requirements, with penalties for noncompliance, to protect payment card data, and it applies to anyone storing, processing, or transmitting this data. While PCI DSS is proven to reduce cyber risk, compliance is declining.133 PCI standards can help payment providers work toward the CIA triad: the standards focus heavily on eliminating insecure protocols with the aim of protecting cardholder confidentiality.134

ACH: The Automated Clearing House (ACH) is a network operated by the National Automated Clearing House Association (Nacha) that aggregates transactions for processing and enables bank-to-bank money transfers.135 In 2020, ACH handled more than $60 trillion in payments.136 ACH is used for direct deposit of paychecks, paying bills, transferring money between banking and brokerage accounts, and paying vendors, and also underpins apps like Venmo.137 This makes it a competitor to functions that a retail CBDC could fulfill, such as direct deposits of Social Security payments or tax refunds to individuals.

ACH is subject to fraud risks, though there are safeguards in place. Users must register with a username, password, bank account number, and routing number. While these steps are similar to payment cards, ACH payments are not subject to the same PCI standards. That said, merchants can take additional steps such as micro validation, tokenization, encryption, and secure vault payments.138 As with any retail payment system, many risks also stem from user behavior, such as falling prey to phishing scams. Overall, ACH payment fraud is relatively rare, accounting for only 0.08 basis points of all funds transferred.139
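Of the merchant safeguards listed above, tokenization is the most mechanical: the merchant stores an opaque token in place of the customer's bank account details, and only the payment processor's vault can reverse the mapping. A minimal sketch in Python (the vault class, account numbers, and method names are illustrative, not any specific ACH provider's API):

```python
import secrets

class TokenVault:
    """Illustrative token vault: maps opaque tokens to bank account
    details so a merchant never stores raw account numbers."""

    def __init__(self):
        # token -> (routing_number, account_number); a real vault would
        # persist this mapping in hardened, encrypted storage.
        self._store = {}

    def tokenize(self, routing_number: str, account_number: str) -> str:
        # Replace the real account data with an unguessable random token.
        token = secrets.token_urlsafe(16)
        self._store[token] = (routing_number, account_number)
        return token

    def detokenize(self, token: str) -> tuple:
        # Only the vault operator (e.g., the payment processor) can map a
        # token back to the underlying account for settlement.
        return self._store[token]

vault = TokenVault()
tok = vault.tokenize("021000021", "123456789")
assert tok != "123456789"  # the merchant keeps the token, not the account
assert vault.detokenize(tok) == ("021000021", "123456789")
```

A breach of the merchant's database then exposes only tokens, which are useless without access to the vault itself.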

Digital payments: Payment services play a major role in facilitating online payments, and services like Stripe and Circle enable merchants to easily accept payments. While it is impossible to cover the cybersecurity risks of all these services, each has undergone security challenges and adaptations. For example, researchers recently found that attackers could target Apple Pay and bypass iPhone security through contactless messages that could drain a user’s funds.140 Two-tiered retail CBDCs would likely operate through many of the same digital payments platforms, so security vulnerabilities and fraud opportunities could affect the rollout of a CBDC.

As discussed in this appendix, current wholesale and retail payment systems face a complex cybersecurity landscape and represent a major point of attack for both criminals and geopolitically motivated actors. Cybersecurity risks posed by CBDCs must be assessed relative to this landscape and how the technology could remedy existing vulnerabilities.

At the intersection of economics, finance, and foreign policy, the GeoEconomics Center is a translation hub with the goal of helping shape a better global economic future.

About the authors

Giulia Fanti is an Assistant Professor of Electrical and Computer Engineering at Carnegie Mellon University. Her research interests span the security, privacy, and efficiency of distributed systems. She is a two- time fellow of the World Economic Forum’s Global Future Council on Cybersecurity and a member of NIST’s Information Security and Privacy Advisory Board. Her work has been recognized with best paper awards, a Sloan Research Fellowship, an Intel Rising Star Faculty Research Award, a U.S. Air Force Research Laboratory Young Investigator Grant, and faculty research awards from Google and JP Morgan Chase.

Kari Kostiainen is Senior Scientist at ETH Zurich and Director of Zurich Information Security Center (ZISC). Before joining ETH, Kari was a researcher at Nokia. He has a PhD in computer science from Aalto. Kari’s research focuses on system security. Recent topics include trusted computing, blockchain security, and human factors of security.

William Howlett is currently a senior at Stanford University, where he is studying economics and international relations and writing an honors thesis on US financial diplomacy towards China’s current account surplus from 2009-13. On campus, he also conducts research for LTG H.R. McMaster on national security and economics. After graduation, William will be joining the Treasury Department as a junior fellow in the Office of International Monetary Policy, where he will work on IMF and G7/20 issues. He previously worked at the Atlantic Council’s GeoEconomics Center and on the legislative team in California Governor Newsom’s office.

Josh Lipsky is the senior director of the Atlantic Council’s GeoEconomics Center. He previously served as an advisor at the International Monetary Fund (IMF) and Speechwriter to Christine Lagarde. Prior to joining the IMF, Josh was an appointee at the State Department, serving as Special Advisor to the Under Secretary of State for Public Diplomacy. Before joining the State Department, Josh worked in the White House and was tasked with helping plan President Obama’s participation at the G-20 and other global summits. He is a term-member at the Council on Foreign Relations and an Economic Diplomacy Fellow at Harvard University’s Belfer Center for Science and International Affairs.

Ole Moehr is a nonresident fellow and consultant with the Atlantic Council’s GeoEconomics Center. Previously, he served as the GeoEconomics Center’s associate director. In Ole’s current capacity, he contributes to the Center’s future of money work and conducts research on global finance, growth, and trade. Ole’s project portfolio includes work on global monetary policy, central bank digital currencies, global value chains, the EU’s economic architecture, and economic sanctions. Prior to joining the Council, Ole served as a Brent Scowcroft Award Fellow at the Aspen Institute.

John Paul Schnapper-Casteras is a nonresident senior fellow with the GeoEconomics Center, focusing on financial technology, central bank digital currency, and cryptocurrency. JP is the founder and managing partner of Schnapper-Casteras, PLLC, a boutique law firm that advises technology companies, non-profits, and individuals about cutting-edge regulatory issues, litigation, and compliance. Previously, he worked on a broad array of constitutional and civil cases as Special Counsel for Appellate and Supreme Court Advocacy to the NAACP Legal Defense Fund and in the appellate practice of Sidley Austin LLP.

Josephine Wolff is a nonresident fellow with the Atlantic Council’s Cyber Statecraft Initiative, an associate professor of cybersecurity policy at the Tufts University Fletcher School of Law and Diplomacy, and a contributing opinion writer for the New York Times. Her research interests include the social and economic costs of cybersecurity incidents, cyber-insurance, internet regulation, and security responsibilities and liability of online intermediaries. Her book “You’ll See This Message When It Is Too Late: The Legal and Economic Aftermath of Cybersecurity Breaches” was published by MIT Press in 2018.

Acknowledgements

This report was made possible by the generous support of PayPal.

The GeoEconomics Center would like to thank Erinmichelle Perri, Ali Javaheri, Eli Clemens, Victoria (Hsiang Ning) Lin, Nathaniel Low, Claire (Ning) Yan, Thomas Rowland, and Jerry (Xinyu) Zhao for their important contributions to this report.

The GeoEconomics Center would also like to express its gratitude to Trey Herr and Safa Shahwan Edwards from the Scowcroft Center’s Cyber Statecraft Initiative for their close collaboration in developing this report.

1    “Central Bank Digital Currency Tracker,” Atlantic Council, last updated June 2022, https://www.atlanticcouncil.org/cbdctracker/.
2    Scott Pelley and Jerome Powell, “Jerome Powell: Full 2021 60 Minutes Interview Transcript,” CBS News, April 11, 2021, https://www.cbsnews.com/news/jerome-powell-full-2021-60-minutes-interview-transcript/.
3    Federal Reserve Bank of Cleveland President Loretta J. Mester, “Cybersecurity and the Federal Reserve,” speech to the Fourth Annual Managing Cyber Risk from the C-Suite Conference, October 5, 2021, https://www.clevelandfed.org/newsroom-and-events/speeches/sp-20211005cybersecurity-and-the-federalreserve.aspx.
4    Eswar S. Prasad, The Future of Money: How the Digital Revolution Is Transforming Currencies and Finance (Cambridge, Massachusetts: The Belknap Press of Harvard University Press, 2021), 45–48.
5    “Central Bank Digital Currency Tracker.”
6    See the appendix of this report for a detailed analysis of US payment system providers’ current cybersecurity measures.
7    “CHIPS,” Clearing House, accessed January 14, 2022, https://www.theclearinghouse.org/payment-systems/chips.
8    Financial Crimes Enforcement Network, Feasibility of a Cross-Border Electronic Funds Transfer Reporting System under the Bank Secrecy Act, US Department of the Treasury, October 2006, https://www.fincen.gov/sites/default/files/shared/CBFTFS_Complete.pdf.
9    “Credit Card Fraud Statistics,” SHIFT Credit Card Processing, last updated September 2021, https://shiftprocessing.com/credit-card-fraud-statistics/.
10    “The Three Essentials Pillars of Cybersecurity: Preventing Losses from Cyber Attack,” Lexology, https://www.lexology.com/library/detail.aspx?g=03734e1f-98d0-47ef-908f-f29ad6f69a7b.
11    Debbie Walkowski, “What Is the CIA Triad?” F5 Labs, July 9, 2019, https://www.f5.com/labs/articles/education/what-is-the-cia-triad.
12    Ibid.
13    Ibid.
14    Ibid.
15    Ibid.
16    Ibid.
17    Ibid.
18    “Large Commercial Banks,” Federal Reserve Statistical Release, accessed March 13, 2021, https://www.federalreserve.gov/releases/lbr/current/default.htm.
19    Federal Reserve Bank of Boston, The Federal Reserve Bank of Boston and Massachusetts Institute of Technology release technological research on a central bank digital currency, press release, February 3, 2022, https://www.bostonfed.org/news-and-events/press releases/2022/frbb-and-mit-open-cbdcphase-one.aspx#resources-tab.
20    The initial process of linking a digital identifier to a user can be achieved through offline channels, for example. A detailed discussion of this topic is beyond the scope of this report.
21    David Chaum, Christian Grothoff, and Thomas Moser, How to Issue a Central Bank Digital Currency, Swiss National Bank Working Papers, March 2021, https://www.snb.ch/n/mmr/reference/working_paper_2021_03/source/working_paper_2021_03.n.pdf.
22    Eurosystem Report on the Public Consultation on a Digital Euro, European Central Bank, April 2021, https://www.ecb.europa.eu/pub/pdf/other/Eurosystem_report_on_the_public_consultation_on_a_digital_euro~539fa8cd8d.en.pdf.
23    “Visa Acceptance for Retailers,” Visa, accessed May 16, 2022, https://usa.visa.com/run-your-business/small-business-tools/retail.html.
24    Alexander Onukwue, “Nigeria’s eNaira Digital Currency Had an Embarrassing First Week,” Quartz, October 28, 2021, https://qz.com/africa/2080949/nigeriasenaira-android-wallet-deleted-days-after-launch/.
25    “E-kronapiloten – test av teknisk lösning för e-krona” [“The e-Krona Pilot – Test of Technical Solution for the e-Krona”], Sveriges Riksbank, last updated April 6, 2021, https://www.riksbank.se/sv/betalningar–kontanter/e-krona/teknisk-losning-for-e-kronapiloten/.
26    Note that in security analysis, threat modeling typically considers an adversary’s means (what are their capabilities?) as well as their motives (what are they trying to achieve?). We, therefore, discuss both.
27    James Lovejoy et al., “Project Hamilton Phase 1: A High Performance Payment Processing System Designed for Central Bank Digital Currencies,” Federal Reserve Bank of Boston, February 3, 2022, https://www.bostonfed.org/-/media/Documents/Project-Hamilton/Project-Hamilton-Phase-1-Whitepaper.pdf.
28    Lovejoy et al. “Project Hamilton Phase 1: A High Performance.”
29    Federal Reserve Bank of Boston, The Federal Reserve Bank of Boston and Massachusetts Institute of Technology.
30    Natalie Haynes, “A Primer on BOJ’s Central Bank Digital Currency,” Bank of Jamaica, accessed March 31, 2022, https://boj.org.jm/aprimer-on-bojs-central-bank-digital-currency/.
31    Bank of Jamaica, “Bank of Jamaica’s CBDC Pilot Project a Success,” Jamaica Information Service, December 31, 2021, https:// jis.gov.jm/bank-of-jamaicas-cbdc-pilot-project-a-success/.
32    Microsoft 365 Defender Research Team, “‘Ice Phishing’ on the Blockchain,” Microsoft, February 16, 2022, https://www.microsoft.com/security/blog/2022/02/16/ice-phishing-on-the-blockchain/.
33    Josyula R. Rao and Pankaj Rohatgi, “Can Pseudonymity Really Guarantee Privacy?” 9th USENIX Security Symposium Paper, 2000, https://www.usenix.org/events/sec2000/full_papers/rao/rao_html.
34    Bank of England, “Central Bank Digital Currency: Opportunities, Challenges and Design,” Discussion Paper, March 12, 2020, https://www.bankofengland.co.uk/paper/2020/central-bank-digital-currency-opportunities-challenges-and-design-discussion-paper.
35    Vivek Bagaria et al., Prism: Deconstructing the Blockchain to Approach Physical Limits, CCS ’19, November 11-15, 2019, London, United Kingdom, https://dl.acm.org/doi/pdf/10.1145/3319535.3363213; Haifeng Yu et al., “OHIE: Blockchain Scaling Made Simple,” 2020 IEEE Symposium on Security and Privacy, https://ieeexplore.ieee.org/iel7/9144328/9152199/09152798.pdf; and Ittai Abraham, Dahlia Malkhi, and Alexander Spiegelman, “Asymptotically Optimal Validated Asynchronous Byzantine Agreement,” proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, July, 2019, 337–346, https://doi.org/10.1145/3293611.3331612.
36    Zachary Amsden et al., The Libra Blockchain, MIT Sloan School of Management, revised July 23, 2019, https://mitsloan.mit.edu/shared/ods/documents?PublicationDocumentID=5859.
37    “Consensus Mechanism,” Klaytn, accessed May 16, 2022, https://docs.klaytn.foundation/klaytn/design/consensus-mechanism.
38    ConsenSys, “Scaling Consensus for Enterprise: Explaining the IBFT Algorithm,” June 22, 2018, https://consensys.net/ blog/enterprise-blockchain/scaling-consensus-for-enterpriseexplaining-the-ibft-algorithm/.
39    Roberto Saltini, “IBFT Liveness Analysis,” 2019 IEEE International Conference on Blockchain, 245–252, 10.1109/Blockchain.2019.00039.
40    Paige Peterson, “Reducing Shielded Proving Time in Sapling,” Electric Coin Co., December 17, 2018, https://electriccoin.co/blog/reducing-shielded-provingtime-in-sapling/.
41    Ibid.
42    Sveriges Riksbank, E-Krona Pilot Phase 1, April 2021, https://www.riksbank.se/globalassets/media/rapporter/e-krona/2021/e-krona-pilot-phase-1.pdf.
43    David Chaum, “Blind Signatures for Untraceable Payments: Advances in Cryptology,” proceedings of the Springer-Verlag Crypto ’82 conference, 3, 1983, 199–203.
44    David Chaum, Christian Grothoff, and Thomas Moser, “How to Issue a Central Bank Digital Currency,” SNB (Swiss National Bank) Working Papers, March 2021, https://www.snb.ch/en/mmr/papers/id/working_paper_2021_03.
45    Karl Wüst, Kari Kostiainen, and Srdjan Capkun, “Platypus: A Central Bank Digital Currency with Unlinkable Transactions and Privacy Preserving Regulation,” Cryptology ePrint Archive, October 27, 2021, https://eprint.iacr.org/2021/1443.
46    Peiyao Sheng et al., “BFT Protocol Forensics,” CCS ’21: Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, November 2021, 1722–1743, https://doi.org/10.1145/3460120.3484566.
47    Mahimna Kelkar et al., Order-Fairness for Byzantine Consensus, August 9, 2020, https://eprint.iacr.org/2020/269.pdf.
48    M. Moon, “Crypto Scammers Stole $500K from Wallets Using Targeted Google Ads,” Engadget, November 4, 2021, https://www.engadget.com/cryptoscammers-google-ads-phishing-campaign-100044007.html.
49    Charlie Osborne, “Microsoft Warns of Emerging ‘Ice Phishing’ Threat on Blockchain, DeFi Networks,” ZDNet, February 17, 2022, https://www.zdnet.com/article/microsoft-warns-of-ice-phishing-on-blockchain-networks/.
50    Bank of England, “Central Bank Digital Currency.”
51    Stan Higgins, “Central Bank Digital Currencies Could Fuel Bank Runs, BIS Says,” CoinDesk, updated September 13, 2021, https://www.coindesk.com/markets/2018/03/12/central-bank-digital-currencies-could-fuel-bank-runs-bis-says/.
52    Karl Wüst et al., “PRCash: Fast, Private and Regulated Transactions for Digital Currencies,” https://fc19.ifca.ai/preproceedings/5-preproceedings.pdf.
53    Alex Biryukov, Dmitry Khovratovich, and Ivan Pustogarov, “Deanonymisation of Clients in Bitcoin P2P Network,” CCS ’14: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, 15–29, https://doi.org/10.1145/2660267.2660379.
54    We will closely watch the Federal Reserve Bank of Boston and MIT’s CBDC project for code samples.
55    M. Maureen Murphy and Andrew P. Scott, Financial Services and Cybersecurity: The Federal Role, U.S. Library of Congress, Congressional Research Service, R44429, updated March 23, 2016, https://crsreports.congress.gov/product/pdf/R/R44429. See also Jeff Kosseff, “New York’s Financial Cybersecurity Regulation: Tough, Fair, and a National Model,” Georgetown Law Technology Review, April 2017, https://georgetownlawtechreview.org/new-yorks-financialcybersecurity-regulation-tough-fair-and-a-national-model/GLTR-04-2017/.
56    Andrew P. Scott and Paul Tierno, “Introduction to Financial Services: Financial Cybersecurity,” US Library of Congress, Congressional Research Service, IF11717, updated January 13, 2022, https://sgp.fas.org/crs/misc/IF11717.pdf.
57    “Banker Resource Center, Information Technology (IT) and Cybersecurity,” Federal Deposit Insurance Corporation, accessed February 15, 2022, https://www.fdic.gov/resources/bankers/information-technology/.
58    See generally Chris Brummer, Fintech Law in a Nutshell (St. Paul, Minnesota: West Academic), 461–538. Brummer summarizes the cybersecurity regulations that apply to fintech services, for example, the Cybersecurity Act of 2015, the Gramm-Leach-Bliley Act, and other rules.
59    Tarik Hansen and Katya Delak, “Security Considerations for a Central Bank Digital Currency,” FEDS Notes, Board of Governors of the Federal Reserve System, February 3, 2022, https://doi.org/10.17016/2380-7172.2970.
60    Committee on Payments and Market Infrastructures and the Board of the International Organization of Securities Commissions, Guidance on Cyber Resilience for Financial Market Infrastructures, June 2016, https://www.bis.org/cpmi/publ/d146.pdf.
61    G7 (Group of Seven), “Fundamental Elements of Cybersecurity for the Financial Sector,” October 2016, https://www.ecb.europa.eu/paym/pol/shared/pdf/G7_Fundamental_Elements_Oct_2016.pdf.
62    See, for example, Juan Carlos Crisanto and Jermy Prenio, Regulatory Approaches to Enhance Banks’ Cyber-Security Frameworks, FSI Insights on policy implementation No. 2, Financial Stability Institute, August 2017, https://www.bis.org/fsi/publ/insights2.pdf.
63    “Digital Currency Consumer Protection Risk Mapping,” Digital Currency Governance Consortium White Paper Series, World Economic Forum, November 2021, 17, https://www3.weforum.org/docs/WEF_Digital_Currency_Consumer_Protection_2021.pdf.
64    “Central Bank Digital Currency Tracker.”
65    Arjun Kharpal, “China’s Digital Currency Comes to Its Biggest Messaging App WeChat, Which Has over a Billion Users,” CNBC, January 6, 2022, https://www.cnbc.com/2022/01/06/chinas-digital-currency-comes-to-tencents-wechat-in-expansion-push.html.
66    “Digital Currency Consumer Protection Risk Mapping,” 18.
67    Ibid., 17.
68    “Privacy and Confidentiality Options for Central Bank Digital Currency,” Digital Currency Governance Consortium White Paper Series, World Economic Forum, November 2021, 17, https://www3.weforum.org/docs/WEF_Privacy_and_Confidentiality_Options_for_CBDCs_2021.pdf.
69    Fumiko Kudo, Ryosuke Sakabi, and Jonathan Soble, “Every Country Has Its Own Digital Laws. How Can We Get Data Flowing Freely between Them?” World Economic Forum, May 20, 2022, https://www.weforum.org/agenda/2022/05/cross-border-data-regulation-dfft/.
70    “Executive Order 14067 of March 9, 2022: Ensuring Responsible Development of Digital Assets,” Code of Federal Regulations, 87 FR 14143, https://www.whitehouse.gov/briefing-room/presidential-actions/2022/03/09/executive-order-on-ensuring-responsible-development-of-digital-assets/.
71    “Central Bank Digital Currencies,” Federal Reserve Bank of Boston, accessed February 15, 2022, https://www.bostonfed.org/payments-innovation/centralbank-digital-currencies.aspx.
72    “Executive Order 14028 of May 12, 2021: Improving the Nation’s Cybersecurity,” Code of Federal Regulations, 86 FR 26633, https://www.whitehouse.gov/briefing-room/presidential-actions/2021/05/12/executive-order-on-improving-the-nations-cybersecurity/.
73    Simon Joseph Aquilina et al., “EtherClue: Digital Investigation of Attacks on Ethereum Smart Contracts,” Blockchain: Research and Applications, 2 (4) (2021), 100028, https://doi.org/10.1016/j.bcra.2021.100028.
74    See Principle 6 below for additional details on pending congressional legislation.
75    Eighteen central banks are currently exploring CBDCs using DLT.
76    “CBDC Technology Considerations,” Digital Currency Governance Consortium White Paper Series, World Economic Forum, November 2021, 9, https://www3.weforum.org/docs/WEF_CBDC_Technology_Considerations_2021.pdf.
77    The Promises and Perils of Central Bank Digital Currencies, US House Committee on Financial Services, 117th Cong. (2021) (statement of Julia Friedlander, Atlantic Council’s C. Boyden Gray senior fellow and GeoEconomics Center deputy director), https://financialservices.house.gov/uploadedfiles/hhrg-117-ba10-wstate-friedlanderj-20210727.pdf.
78    Ibid., 9.
79    “Executive Order 14067 of March 9, 2022.”
80    Automatic Boost to Communities Act, H.R.1030, 117th Cong., 1st Session (2021), https://www.congress.gov/bill/117th-congress/house-bill/1030/text.
81    Ibid.
82    Central Bank Digital Currency Study Act of 2021, H.R.2211, 117th Cong., 1st Session (2021), https://www.congress.gov/bill/117th-congress/house-bill/2211/text?format=txt.
83    “Ensuring Responsible Development of Digital Assets.”
84    “What does Biden’s executive order on crypto actually mean? We gave it a close read,” New Atlanticist (Atlantic Council), March 11, 2022, https://www.atlanticcouncil.org/blogs/new-atlanticist/what-does-bidens-executive-order-on-crypto-actually-mean-we-gave-it-a-close-read/.
85    Ibid.
86    See, for example, Rajab Ali, Technological Neutrality, Lex Electronica, 14 (2) (Fall 2009), https://www.lex-electronica.org/files/sites/103/14-2_ali.pdf.
87    Eurosystem Report on the Public Consultation on a Digital Euro, European Central Bank, April 2021, https://www.ecb.europa.eu/pub/pdf/other/Eurosystem_report_on_the_public_consultation_on_a_digital_euro~539fa8cd8d.en.pdf.
88    Ibid.
89    Financial Crimes Enforcement Network, Feasibility of a Cross-Border Electronic Funds Transfer.
90    “Fedwire Funds Service,” Federal Reserve Bank Services, accessed January 30, 2022, https://www.frbservices.org/binaries/content/assets/crsocms/financialservices/wires/funds.pdf.
91    “Fedwire Funds Services,” Board of Governors of the Federal Reserve System, last updated May 7, 2021, https://www.federalreserve.gov/paymentsystems/fedfunds_about.htm.
92    “Fedwire Funds Service.”
93    “Fedwire Funds Service – Annual Statistics,” Federal Reserve Bank Services, last updated February 15, 2022, https://www.frbservices.org/resources/financialservices/wires/volume-value-stats/annual-stats.html; and Bureau of Economic Analysis, Gross Domestic Product, Fourth Quarter and Year 2021 (Advance Estimate), news release, January 27, 2022, https://www.bea.gov/news/2022/gross-domestic-product-fourth-quarter-and-year-2021-advance-estimate.
94    Anton Badev et al., “Fedwire Funds Service: Payments, Balances, and Available Liquidity,” Finance and Economics Discussion Series (Washington, DC: Board of Governors of the Federal Reserve System, October 5, 2021), 12, https://www.federalreserve.gov/econres/feds/files/2021070pap.pdf.
95    Board of Governors of the Federal Reserve System, The Fedwire Funds Service: Assessment of Compliance with the Core Principles for Systemically Important Payment Systems, revised July 2014, 26, https://www.federalreserve.gov/paymentsystems/files/fedfunds_coreprinciples.pdf.
96    Ibid., 27.
97    The Fedwire Funds Service: Assessment of Compliance, 27.
98    Ibid.
99    Thomas M. Eisenbach, Anna Kovner, and Michael Junho Lee, Cyber Risk and the U.S. Financial System: A Pre-Mortem Analysis, Federal Reserve Bank of New York, No. 909, January 2020, revised May 2021, https://www.newyorkfed.org/medialibrary/media/research/staff_reports/sr909.pdf.
100    Mark J. Bilger, “Cyber-Security Risks of Fedwire,” Journal of Digital Forensics, Security, and Law 14 (4) (April 2020): 4, https://doi.org/10.15394/jdfsl.2019.1590.
101    Ibid., 4–5.
102    Ibid., 5.
103    Eisenbach, Kovner, and Lee, Cyber Risk.
104    Ibid., 2–3.
105    Ibid., 14.
106    Ibid., 41.
107    Ibid., 37.
108    Ibid., 24–26.
109    Ibid., 25.
110    Ibid., 32–33.
111    Bilger, “Cyber-Security Risks,” 5.
112    Krishna N. Das and Jonathan Spicer, “How the New York Fed Fumbled over the Bangladesh Bank Cyber-Heist,” Reuters, July 21, 2016, https://www.reuters.com/investigates/special-report/cyber-heist-federal/#:~:text=When%20hackers%20broke%20into%20the,into%20paying%20out%20%24101%20million.
113    Joshua Hammer, “The Billion-Dollar Bank Job,” New York Times, May 3, 2018, https://www.nytimes.com/interactive/2018/05/03/magazine/money-issuebangladesh-billion-dollar-bank-heist.html.
114    Das and Spicer, “How the New York Fed.”
115    Ibid.
116    “Factbox: How Do Bank Payments Work in the Euro Zone?” Reuters, May 20, 2016, https://www.reuters.com/article/us-cyber-heist-ecb/factbox-how-do-bankpayments-work-in-the-euro-zone-idUSKCN0YB29H.
117    Jere Virtanen, “Endpoint Security in TARGET2,” European Central Bank, Frankfurt, December 4, 2019, https://www.ecb.europa.eu/paym/groups/shared/docs/01eec-ami-pay-2019-12-04-item-5.2-endpoint-security-in-target2.pdf.
118    Margaret Tahyar, Jai Massari, and Andrew Samuel, “FedNow: The Federal Reserve’s Planned Instant Payments Service,” Harvard Law School Forum on Corporate Governance, August 31, 2020, https://corpgov.law.harvard.edu/2020/08/31/fednow-the-federal-reserves-planned-instant-payments-service/.
119    “Use Case Series: Unlock Instant Payment Use Cases with the FedNow Service,” Federal Reserve Bank Services, accessed January 31, 2022, https://www.frbservices.org/binaries/content/assets/crsocms/financial-services/fednow/general-use-case.pdf.
120    Tahyar, Massari, and Samuel, “FedNow: The Federal Reserve’s.”
121    Antoine Bouveret, “Cyber Risk for the Financial Sector: A Framework for Quantitative Assessment,” IMF Working Paper WP/18/143, International Monetary Fund, June 2018, 13, https://www.imf.org/-/media/Files/Publications/WP/2018/wp18143.ashx.
122    Three Years on from Bangladesh: Tackling the Adversaries, SWIFT, April 10, 2019, https://www.swift.com/news-events/news/swift-report-shares-insightsevolving-cyber-threats.
123    Ibid., 2.
124    Ibid.
125    “SWIFT Customer Security Program,” KPMG, 2021, https://assets.kpmg/content/dam/kpmg/qa/pdf/2021/04/swift-customer-security-program.pdf.
126    Adrian Nish, Saher Naumann, and James Muir, Enduring Cyber Threats and Emerging Challenges to the Financial Sector, Carnegie Endowment for International Peace, November 18, 2020, https://carnegieendowment.org/2020/11/18/enduring-cyber-threats-and-emerging-challenges-to-financial-sectorpub-83239.
127    “The Latest in U.S. Currency Design,” U.S. Currency Education Program, accessed January 31, 2022, https://www.uscurrency.gov/sites/default/files/downloadable-materials/files/en/multinote-booklet-en.pdf.
128    Allison Chase, “Fed’s Counterfeiting Experts Fight Flow of Fake Money,” Federal Reserve Bank of Boston, October 15, 2019, https://www.bostonfed.org/news-and-events/news/2019/10/counterfeiting-experts-at-boston-fed-fight-flow-of-fake-money.aspx.
129    “Form 8300 and Reporting Cash Payments of Over $10,000,” Internal Revenue Service, accessed February 23, 2022, https://www.irs.gov/businesses/smallbusinesses-self-employed/form-8300-and-reporting-cash-payments-of-over-10000.
130    “Credit Card Fraud Statistics,” SHIFT Credit Card Processing, last updated September 2021, https://shiftprocessing.com/credit-card-fraud-statistics/.
131    “25 Credit Card Fraud Statistics to Know in 2021 + 5 Steps for Reporting Fraud,” Intuit Mint, last modified December 17, 2021, https://mint.intuit.com/blog/planning/credit-card-fraud-statistics/.
132    2021 Banking and Financial Services Industry Cyber Threat Landscape Report, Intsights, accessed March 31, 2022, https://intsights.com/resources/2021-banking-and-financial-services-industry-cyber-threat-landscape-report.
133    Leonard Wills, “The Payment Card Industry Data Security Standard,” American Bar Association, January 3, 2019, https://www.americanbar.org/groups/litigation/committees/minority-trial-lawyer/practice/2019/the-payment-card-industry-data-security-standard/.
134    Alara Basul, “How PCI Compliance Is the First Step in Achieving the ‘CIA Triad,’” Payment Eye, June 21, 2017, https://www.paymenteye.com/2017/06/21/howpci-compliance-is-the-first-step-in-achieving-the-cia-triad/.
135    Rebecca Lake, “ACH Transfers: What Are They and How Do They Work?” Investopedia, April 30, 2021, https://www.investopedia.com/ach-transfers-what-arethey-and-how-do-they-work-4590120.
136    “ACH Network Volume and Value Statistics,” Nacha, accessed January 30, 2022, https://www.nacha.org/content/ach-network-volume-and-value-statistics.
137    Lake, “ACH Transfers.”
138    “Is ACH Secure?” Clover, accessed January 31, 2022, https://blog.clover.com/is-ach-secure/.
139    “Understanding the Basics of ACH Fraud,” Sila, October 23, 2020, https://silamoney.com/ach/understanding-the-basics-of-ach-fraud.
140    Pieter Arntz, “Apple Pay Vulnerable to Wireless Pickpockets,” Malwarebytes Labs, October 1, 2021, https://blog.malwarebytes.com/exploits-andvulnerabilities/2021/10/apple-pay-vulnerable-to-wireless-pickpockets/.

The post Missing Key: The challenge of cybersecurity and central bank digital currency appeared first on Atlantic Council.

Victory reimagined: Toward a more cohesive US cyber strategy https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/victory-reimagined/ Tue, 14 Jun 2022 17:03:54 +0000 https://www.atlanticcouncil.org/?p=535753 US policy is on two potentially divergent paths: one that prioritizes the protection of American infrastructure through the pursuit of US cyber superiority, and one that seeks an open, secure cyber ecosystem.


Executive summary

A strategy to defeat US adversaries in cyberspace is not the same as, nor sufficient for, securing cyberspace. US policy is on two potentially divergent paths: one that prioritizes the protection of US infrastructure through the pursuit of US cyber superiority, and one that seeks an open, secure cyber ecosystem. Defend Forward was a compelling and necessary shift in thinking, but it is just one of many policy tools available to implement the US cyber strategy. In the new National Cyber Strategy, policymakers and practitioners should heed the costly lessons of a generation of counterinsurgency and ensure that efforts to defeat adversaries in cyberspace do not displace efforts to secure it. In an article published by Foreign Affairs, National Cyber Director Chris Inglis and Harry Krejsa, assistant national cyber director for strategy and research, emphasized, “security is a prerequisite for prosperity in the physical world, and cyberspace is no different.”1 A revised national cyber strategy should: (1) enhance security in the face of a wider range of threats than just the most strategic adversaries, (2) better coordinate efforts toward protection and security with allies and partners, and (3) focus on bolstering the resilience of the cyber ecosystem, rather than merely reducing harm.

Introduction

With a new US cyber strategy in the offing, policymakers will have the chance to readjust to meet the demands of the constantly changing cyber environment. The stakes are high. Even on the most tranquil days in cyberspace, millions of malicious emails flicker and fall against Department of Defense (DoD) firewalls,2 security firms track salvos of hundreds of thousands of attacks across the planet,3 and attackers scan the entire internet for vulnerable targets within hours of bugs becoming public.4 Markets trade in tools and certificates for offensive use and churn billions of dollars’ worth of products ranging from basic keyloggers to exploit suites built by the National Security Agency (NSA).5 Meanwhile, legislation aims to harvest zero-days at their source, diverting them from industry to government use.6 All this activity persists—and by most accounts is increasing—despite vast investments of time, effort, and money from government and industry alike.

The 2018 National Cyber Strategy embedded a central dissonance between the defense of US assets and interests and the security of a safer cyber ecosystem. While efforts toward each of these policies are not mutually exclusive, protection is not sufficient for security, and if improperly balanced, their implementations risk working against each other. US cyber-protection operations are organized on the assumption that protecting US assets in cyberspace through establishing superiority is a necessary and constructive step toward a more secure digital ecosystem at large. Defend Forward is a manifestation of this pursuit, developed by the DoD to create friction as close as possible to the source of malicious activity to prevent, and eventually disincentivize, attacks against US cyber assets.

Defend Forward has garnered much attention in debates over strategy in cyberspace. Its advocates cheer the agility and proactive stance it affords the military, anticipating a greater ability to disrupt adversaries before they can cause harm and even behavioral changes brought about by better imposing costs on malicious actors.7 Its critics, meanwhile, raise concerns. Some worry about the systemic risk to the cyber ecosystem that might accrue through more frequent exploitation.8 Others warn that the relatively constrained level of conflict in cyberspace—having not yet escalated to the equivalent of armed attacks—is a product of the world’s current geopolitical context more than anything inherent to the domain.9

The effort to Defend Forward has pulled policy attention and resources to counter high-end and strategic adversaries, while leaving pervasive insecurities in commonly used technology systems and a permissive operating environment for a host of other threats. There is a similar dissonance between protection and security in the conduct of counterinsurgencies, and the parallels are instructive. Unlike conventional war, the central goal of a counterinsurgency is not to render an adversary incapable of further resistance but to create a secure environment for a society.10 Pursuing protection requires offensive action against enemy forces, but those operations alone are not sufficient to achieve the larger strategic aim of security. While an important concept in the current US strategy in cyberspace, Defend Forward must work within a broader effort to secure, rather than merely protect, the United States in cyberspace.

Taking these lessons to heart can build on the successes of Defend Forward, including pulling the US policy community past latent Cold War assumptions and rhetoric while ensuring US strategy is adequate to the goal of creating security and not merely removing significant harm. By producing a national cybersecurity strategy that redoubles a commitment to security and accounts for the important but auxiliary role of protecting itself and its allies, the United States will be better able to secure cyberspace and all the social and economic activities within it.

This paper uses the analogy of counterinsurgency operations to help frame a more cohesive implementation of Defend Forward. In some ways, the lessons learned from the counterinsurgent military operations that sought to disrupt and degrade an insurgency’s ability to mount attacks apply to US Defend Forward efforts. Like destroying insurgent forces, offensive cyber operations and the maintenance of access to third-party systems necessary to sustain Defend Forward are a useful and distinctly martial step in an ongoing campaign, but on their own they are insufficient to achieve “victory.” In real-world analogs, such as the United States’ recent engagement in Afghanistan, that “victory” is something close to nation building, and in cyberspace, it is achieving markedly improved security writ large. To avoid burdening Defend Forward (and the DoD) with too much of the responsibility for improving the security of cyberspace, the next cybersecurity strategy, in close conversation with the national security strategy, should strive toward its integration with a suite of complementary policy tools.

Current US cyber strategy: Goals and means

Strategy is the effort to achieve policy goals through the application of appropriate means on the adversary and on the operating environment. As these factors change and policymakers better understand those changes, strategy must evolve to ensure that the foundation of action remains that intended outcome (i.e., ends)—not the means themselves. This tension between means, especially those of the offensive military variety, and political goals has long stood as one of the central domestic debates during times of war and conflict. Assigning priority to a goal that should be subsidiary, such as claiming territory or killing enemy forces, risks constant, strategically stagnant conflict.

This tension is not unique to cyberspace. A telling manifestation is the cooperative and contentious relationship between Prussian Chancellor Otto von Bismarck and Helmuth von Moltke, chief of the Prussian General Staff during the mid-nineteenth century. Moltke believed the military commander must have total freedom of decision within the operation of war itself, as “no plan of operations extends with certainty beyond the first encounter with the enemy’s main strength.”11 As a statesman, Bismarck was more concerned with the utility of war to realize state goals. He considered the objectives and capabilities of the armed forces to be of extreme importance, but only inasmuch as they could contribute to achieving the overall political objective of a war.12 Killing the enemy, in Bismarck’s eyes, was a means rather than a goal in and of itself. Moltke and Bismarck were often at odds with one another’s views, yet in practice, the Prussian government benefited from this push-and-pull between political intent and means on strategy. Though these three—strategy, means, and politics—are all inseparably linked, politics must act as the driving force, determining the desired outcomes of action, while means must act as the constraining force, determining which actions are possible and effective.

There is a similar tension in the conception and execution of US cyber strategy. The 2018 National Cyber Strategy lays out four pillars to strive towards:

  1. Defend the homeland by protecting networks, systems, functions, and data;
  2. Promote American prosperity by nurturing a secure, thriving digital economy and fostering strong domestic innovation;
  3. Preserve peace and security by strengthening the ability of the United States—in concert with allies and partners—to deter and, if necessary, punish those who use cyber tools for malicious purposes; and
  4. Expand American influence abroad to extend the key tenets of an open, interoperable, reliable, and secure Internet.13

While the first and third pillars focus on the relationship between the cyber domain and the United States or entities within it, the more expansive, final pillar illustrates how the United States conceives of desired goals for the entire domain and how it imagines getting there.

US policy is on two potentially divergent paths: one that prioritizes the protection of US infrastructure through the pursuit of US cyber superiority, and one that seeks an open, secure cyber ecosystem.

Many policies can contribute constructively to both US cyber superiority and an open, secure ecosystem. However, stability and offensive prowess do not always perfectly align, particularly in cyberspace, where patching vulnerabilities or exploiting them are often in direct tension. The method of pursuing the first and third pillars—defending forward—is to some degree incompatible with, and possibly counter to, the goal of the fourth unless complemented by other efforts. That tension expresses itself in the incomplete Defend Forward doctrine, the competing equities of US offensive and defensive cyber elements, and the vagueness of achieving “cyber superiority.” Clarifying US strategic cyber objectives and grounding them in the domain’s dynamics and actor interactions is key to incorporating Defend Forward within a cohesive, national cyber strategy based on achievable, constructive, and proactive national security goals. The protection of US infrastructure is crucial, but the central goal of US operations in the cyber domain must be a secure ecosystem.

Defend Forward as cyber protection

Defend Forward undergirds the first and third pillars of the 2018 National Cyber Strategy: protecting US entities by disrupting and imposing costs on malicious actors. General Paul Nakasone, commander of US Cyber Command, wrote in 2019 that “we must take this fight to the enemy … to compete with and contest adversaries globally, continuously, and at scale, engaging more effectively in the strategic competition that is already under way.”14 Defend Forward includes actions in cyberspace during day-to-day competition but focuses on strategic threats, specifically calling out China and Russia.15 The repeated back-and-forth between actors in cyberspace, termed competitive interaction, helps players locate the line between the acceptable and the escalatory.16 If the United States wants to play a role in shaping the development of this threshold, explain Michael Fischerkeller and Richard Harknett, “it can do so only through active cyber operations.”17

Importantly, while some of the debate over Defend Forward focuses on its potential to deter by imposing costs on adversary operations, those effects are second-order and highly speculative. For example, Erica Lonergan writes, “Defend Forward hypothesizes the US can change adversary behavior through making attacks less effective and, cumulatively, by altering the adversary’s decision calculus regarding the perceived benefits, costs and risks of conducting malicious campaigns against the United States.”18 That may be the case, but knowing an enemy’s risk calculus and how it changes over time is extremely fuzzy, especially in the cyber domain where many operations confound observation. This cumulative change in behavior may also run in unanticipated and harmful directions.19 Defend Forward finds tangible value in breaking up enemy attacks ahead of time—incentivizing behavioral change is possible but also extremely difficult to measure and further outside the remit of DoD operations. Evaluating Defend Forward by its impact on adversary choice sets a high bar for the concept and makes success far less measurable while downplaying the significance of its main goal—protecting US cyber interests by interrupting attacks before they can cause harm.

There are circumstances that lend themselves to successful Defend Forward operations. Knowing who the adversary is, which networks the adversary operates on and against, and the adversary’s general objectives and timelines all help calibrate operations. In the physical analogy, the most disruptive, successful offensive sweeps will anticipate when and where an enemy is gathering its forces and disrupt them at the source ahead of any campaigns they may undertake. Narrowly tailored, these proactive efforts to disrupt and defend outside the ‘wire’ can be an effective way to avoid costly pitched battles close to home.

For example, offensive cyber operations were reportedly used to counter Russian cyber campaigns targeting the United States’ 2018 midterm and 2020 presidential elections.20 In his testimony before the Senate Armed Services Committee, Nakasone told the panel that US Cyber Command had conducted more than two dozen operations across nine different countries “to get ahead of foreign threats before they interfered with or influenced our elections in 2020.”21 The operations to counter malign foreign influence on US elections appeared successful, establishing the Russian Small Group, disrupting botnets,22 and stemming Russian information operations at their source in concert with Department of Justice efforts across both election cycles.23 Similarly, while the public record of the US role in the cyber component of the war in Ukraine is extremely limited, simply being able to identify the most likely targets—Ukrainian infrastructure—and match them to the general timeline of on-the-ground offensives is likely to have contributed to successful Defend Forward operations.

Securing cyberspace

Viewing Defend Forward as protective offense casts it more accurately as one of several contributors to overall security. In real-world counterinsurgencies, the ability of the United States to directly degrade an enemy’s capacity through preemptive strikes is critical. However, this requires careful coordination with the overall counterinsurgency effort and clear scoping—for instance, the ability of Defend Forward to disrupt endemic crime (cyber or physical) where rule of law is unenforced or to build resilient infrastructure for civilian use is limited, but both are key to successful counterinsurgency. The central task of counterinsurgency efforts cannot be limited to destroying the enemy but must aim primarily to create a secure environment.24

Various military historians and practitioners have written over the years about the US preference, across several counterinsurgencies, for attrition to the detriment of strategy. According to British Brigadier Nigel Aylwin-Foster, for example, US forces in Iraq were “inclined ‘to consider offensive operations as the key’ without understanding the penalties of that approach.”25 Similarly, Defend Forward literature claims contributions to the broader security of the cyber ecosystem. Yet, these generated effects may not be as beneficial to the security of the domain as they are to the day-to-day defense of US infrastructure. As Max Smeets and Herb Lin argue, even though the US government may believe that US superiority in cyberspace is a precondition for security throughout the cyber domain, this is far from the only possible truth.26

Defend Forward does not lend itself to securing cyberspace from all malicious activity and seems better suited to addressing the most dangerous campaigns emanating from strategic adversaries or those aimed at selected high-consequence targets within the United States. In fact, the actions required for successful forward defense may necessitate a degree of insecurity throughout the domain. This entails persistent preparation of, and operation through, cyberspace to identify and maintain access for future operations, as well as disrupting, degrading, and eavesdropping through those gaps indefinitely. This is not to say that Defend Forward is a bad strategy so much as it is not a strategy on its own and not a means of fully realizing the goals of the current US cybersecurity strategy. Indeed, its place as the paramount concept of US cyber strategy is in tension with broader US objectives of a secure and stable cyberspace.

A new US National Cyber Strategy should explicitly set the improved security of the cyber domain as its central strategic goal, supported by other tactical priorities—not least of which is the direct and proactive protection of US assets and interests. This paper argues that such a strategy should focus on three areas: enhancing security across the spectrum of threats, better coordinating with allies and partners at a strategic level, and improving the resilience of the cyber ecosystem.

1. Security across the spectrum of threats

There is a wide spectrum of threats that exceeds the capacity of the DoD to address alone. Military operations intended to claim territory or damage adversary capacity cannot create a secure environment without cooperation across all the levers of power. As General James Mattis famously offered in a 2013 Senate Armed Services Committee hearing, “If you don’t fund the State Department fully, then I need to buy more ammunition.”27 Defend Forward seeks out adversary activity that may become a threat to the security and interests of the United States as close to the source of that activity as possible. Operationally, that means making decisions about the type of adversaries and potential targets to prioritize. The Command Vision for US Cyber Command explicitly focuses on the actions of Russia and China, and relegates its considerations of a broader set of adversary operations impacting overall economic prosperity to a footnote.28

Even against nation-state actors, against whom Defend Forward takes primary aim, the United States cannot repel or even detect every operation. For example, though Defend Forward operations helped stymie Russia’s interference with the 2018 and 2020 US elections, around the same time Russia conducted a massive espionage operation in US cyberspace. Russian operators successfully inserted a backdoor into a SolarWinds Orion software update and methodically gained access to hundreds of targets across the US private sector and within the US government itself, discovered only when a private cybersecurity company disclosed the compromise.29 This does not mean that ongoing Defend Forward efforts failed—just that the success was in countering a known threat. But seeking out adversaries where and when they are suspected to strike is insufficient on its own, especially in a domain where perfect adversary awareness is impossible and operations ubiquitous and constant. Defend Forward is an effective strategy for higher levels of threat—especially when their timing and targeting can be “guessed”—but the United States needs a strategy that underpins defense against malicious cyber activity across the entire spectrum.

However, state actors are not the only threats to US interests or the security of cyberspace. Ransomware—a relatively simple but effective tactic that features persistently in cybercriminal activity and against which sub-Fortune 100 businesses are deeply vulnerable—remains a scourge against organizations large and small. While individual ransomware perpetrators may pose little threat to the entire digital ecosystem, in aggregate their impact is staggering, with most estimates putting the net global cost of cybercrime over the past few years at trillions of dollars.30 States like Russia and North Korea sponsor or provide safe haven for ransomware attackers and may even partake at the state level themselves. A thriving black market in exploits, certificates, and data links malicious actors at all levels of sophistication.31 Precisely because of their decentralized nature, private-sector targets, and widely varied tactics, ransomware and similar incidents of cybercrime are a systemic source of insecurity that the current Defend Forward strategy is not well equipped to counter.

Assuring security requires action to counter a range of threats, not merely those most strategic or capable. There are similar limits on the applicability of Defend Forward-like tactics in counterinsurgency operations. According to the 2009 US Counterinsurgency Guide, counterinsurgency centers on “securing and controlling a given population” rather than “defeating a particular enemy group.”32 In environments of insecurity, tools aimed at creating and encouraging a broad environment of security may better serve the strategic purpose than those aimed only at countering the most sophisticated actors. In counterinsurgencies, these measures, often diplomatic and economic in nature, prioritize the construction of resource pathways intended to incentivize cooperative behavior and sustain healthy local security, governance, and political processes. In cyberspace, achieving security against the entire spectrum of threats requires finding balance across a wide range of policy tools. As it updates its cyber strategy, the United States must set the security of the domain as its central strategic goal, with defense within and across that domain as subordinate but still critical priorities.

Recommendations33

Impose cost, but also deny benefit: The vast majority of malicious cyber activities at the lower end of the cyber conflict spectrum are financially motivated. With relatively inexpensive purchases of capability, malicious actors can reap impressive profits—a Deloitte study found that penetration tools costing an average of just $3,800 a month could net cybercriminals $1 million over the same period.34 And while much criminal activity, such as phishing, is relatively unsophisticated, individual incidents can still threaten security on a national scale, as in the ransomware attacks that disrupted fuel pipeline services and meat production in recent years.35 Raising the baseline of defensibility should make cybercrime less profitable at a greater scale than Defend Forward can.

The next US Cyber Strategy should take account of ongoing policy changes and redouble efforts to support public-private partnerships that invest in defensive capabilities and infrastructure rather than just incident response. To aid smaller, less well-resourced companies, the US government should fund security tooling access and professional education for small-to-medium enterprises (SMEs) while working to improve the size and capacity of the cybersecurity workforce at a national scale. There have been several legislative efforts to effect such a change: HR 4515, the Small Business Development Center Cyber Training Act,36 and the cybersecurity provisions within HR 5376, the Build Back Better Act.37 In addition, further legislation is required to make permanent the cybersecurity grant program under the recently passed infrastructure bill (Public Law 117-58) with added guidance from the Cybersecurity and Infrastructure Security Agency (CISA).38

CISA, in cooperation with its Joint Cyber Defense Collaborative (JCDC), the Department of Justice, and the Treasury Department, should compile clear, updated guidance for victims of ransomware, including how victims unable or unwilling to make ransomware payments can request aid from the Cyber Response and Recovery Fund.39 Further legislation should focus on federal subsidies for access to basic, managed cybersecurity services like email filtering, secure file transfers, and identity and access management services.40

2. Work across allies and partners

Defend Forward operations themselves must move through allied and partner networks—known as the “grey space”—to target adversaries.41 The Defend Forward imperative to operate “as close as possible to adversaries and their operations” means carrying out US operations and likely causing friction within this grey space. Even allied states may not accept US operations affecting and degrading infrastructure in their territory, especially without their prior knowledge. Prior knowledge of specific operations through open, honest bilateral dialogue is unlikely in a domain where vulnerabilities are fleeting and secrecy central. The states and private sector entities that comprise this grey space would likely have concerns ranging from sovereignty and privacy to threats to their own ongoing cyber operations and services. For instance, a 2019 French Ministry of Armed Forces communique asserted that “any cyberattack against French digital systems, or any effects produced on French territory by digital means by a State organ … constitutes a breach of sovereignty.”42 US interference could very likely impede the ability of others to pursue their own strategies, persistent or not, in cyberspace. Commentators have noted that such operations risk “undermining allies’ trust and confidence in ways that are subtle and not easily observable.”43 Adversaries, knowing this point of friction, would then benefit from moving through this grey space, pairing their operational goals with the strategic impact of forcing the United States to move against the interests of US allies.

The parallels to counterinsurgency are apparent here as well. When a foreign power assists or intervenes in a domestic counterinsurgency effort, its ability to move through the space is hindered by the acceptance of the local population and governance structures as much as geography and resource limitations. Even with explicit requests from the local government, the presence of a foreign power exerting influence and control may be viewed locally as occupation and a challenge to independence. The presence of US forces abroad, in places like Iraq and Afghanistan, can ignite the very tensions their presence was intended to douse.44

As in cyberspace, this tension is an attractive point of vulnerability for exploitation by adversaries, who have little incentive to prioritize security. Inviting retribution against the population, or incentivizing actions that create insecurity and friction, may incur tactical or operational losses but can translate into strategic gains. Certain American offensive operations in Iraq and Afghanistan—such as US drone strikes accused of killing or injuring civilians—undoubtedly contributed to the lack of popular support for these wars.45 Cyber conflict, despite its technical and largely intangible nature, shares a population-centric element with insurgencies—the population is attacker, target, bystander, and an intrinsic part of the medium through which these conflicts are waged. Friction and discord with local populations, or at the points where cyber meets the physical world, only benefit the adversary.

Coordination with allies and partners is not only crucial in denying adversaries potential opportunities for exploitation, but in creating a more free and secure domain through positive and constructive interaction. Cyberspace, an interconnected and relatively borderless domain, requires coordinated effort from states and the private sector alike to effect change. In areas of persistent conflict where state control is not absolute, like cyberspace and conflict zones, improving security is an exercise in cooperation. US efforts toward stabilization through cooperation in Afghanistan were exceptionally complex. However, there are lessons learned, both positive and negative, that apply to the problem of cooperation within the cyber domain: stabilization was most successful where there was continuous dialogue among US and allied forces and local governance structures, where long-term programs were not usurped by those looking for quick gains, and where local governments saw clear evidence of the benefits of their participation.46

This means a shared, or at least commonly understood, vision for the state of the domain, as well as agreement and understanding as to the acceptable methods of operation outside a state’s “territory” and through privately owned infrastructure. It also requires entities through which such agreements can be discussed and amended. The vast majority of activity in the cyber domain is conducted through and with infrastructure and tools operated by the private sector. Private sector companies, in a way, are both the native human terrain and the deepest network of information on the activity within that terrain. This means that true coordination cannot be an imposition of the state or states but must more closely resemble a partnership between countries, companies, and civil society organizations.

Recommendations

US strategic cohesion: The United States government must ensure that its operations in cyberspace are consistent with an overall strategy to enhance security. The intent of the recently established Department of State Bureau of Cyberspace and Digital Policy (CDP) is to “encourage responsible state behavior in cyberspace and advance policies that protect the integrity and security of the infrastructure of the Internet, serve U.S. interests, promote competitiveness, and uphold democratic values.”47 The CDP should have the explicit priority of promoting US and allied policies that improve the security of the cyber domain at large while expanding internet freedom. This work must align with the security of cyberspace as much as the promotion of free, open, and secure technologies such as the internet and liberal information environments that have more traditionally been the State Department’s focus. As the CDP engages with allies and partners, the bureau should act as a credible, capable partner within the US government to support the “cooperative security” strategic perspective. Rebalancing equities in the US National Cyber Strategy demands a more even handling of roles for the Defense and State Departments as much as it does continued improvements in State’s capacity to operate effectively as an interagency partner on these security issues.

Coordination in strategy: Cyberspace is an inherently cooperative domain. The US government must work in tandem with allies and partners in the private sector to improve domain security. The US National Cyber Strategy must build this coordination into its foundation. In the previous National Cyber Strategy, engagement with allies and partners focused primarily on establishing and encouraging norms of responsible behavior, securing critical infrastructure, combating cybercrime, and promoting US economic prosperity. The new strategy needs a more robust framework for how US government agencies and entities will engage with and consider consequences for allies and partners while pursuing their respective missions. United States Cyber Command (CYBERCOM) should coordinate explicitly with the defense entities of US allies to set expectations and parameters for Defend Forward operations. These should include agreed-on standards for disclosure of operations and upper limits on operational freedom to an appropriate degree, recognizing that such decisions are rarely black and white. Similarly, DoD should work with CISA’s JCDC to coordinate its offensive action with the largest private-sector entities through whose networks and technologies retaliatory blows, and subsequent operations, are likely to pass. This coordination should strive to establish a precedent for communication and cooperation wherever possible, recognizing the significant effect that offensive activities can have on defenders.

3. Ensuring a more resilient cyber ecosystem

In emphasizing offensive operations outside of home networks, US forces must continually prepare the potential battlespace—cyber operations cannot launch on a whim. The 2018 US Cyber Command Vision defines cyberspace superiority as “the degree of dominance in cyberspace by one force that permits the secure, reliable conduct of operations.”48 Essentially, it defines superiority as freedom of movement. To give policymakers and military leaders options, operators must locate and retain vulnerabilities, developing exploits beforehand and carefully retaining and cultivating them. These capabilities require monitoring, maintenance, and management, as the shifting cyber domain makes tools and capabilities temporary.49 However, an exploit maintained is an exploit unremedied. Vulnerabilities and exploit tools—despite best intentions—can also serve adversaries.

This freedom of movement, often considered an enabling factor of victory in counterinsurgencies, is “always a reversible condition.”50 There is no end in sight and no set of metrics on which to measure progress or success. While US cyber strategy might not explicitly seek to protract cyber conflict, it nonetheless implies perpetual action taken toward an insufficiently outlined goal. The sophisticated, intense cyber operations conceived of by the current US strategy rely on considerable effort: the “defender” must identify a target, find useful vulnerabilities, create tools to exploit them, maintain control over the tools’ launch and continued effects, and orchestrate many complex, interdependent operations. Therefore, Defend Forward requires preserving some measure of technical insecurity.

The threat posed by continued conflict to ecosystem security is familiar in insurgency literature. In his 2007 study of modern insurgency, Stephen Metz, a professor of national security and strategy at the US Army War College, wrote, “Protracted conflict, not insurgent victory, is the threat … the deleterious effects of sustained conflict, and if it is part of systemic failure and pathology in which key elites and organizations develop a vested interest in sustaining the conflict.”51 The same risk exists for cyberspace. Unsurprisingly, vague cyber policy aims mirror the lack of endgame often critiqued in US counterinsurgency operations, and for good reason—adversary goals of eroding an asymmetry, compromising an organization, or undermining a government are tangible and finite. Preserving advantage is weakly defined, difficult to measure, and potentially without end.

The EternalBlue debacle, while predating Defend Forward, exemplifies this tension between shoring up the cyber ecosystem and preserving offensive capabilities inherent to the current strategy. The NSA developed an exploit, EternalBlue, that allowed it to carry out reportedly effective and widespread intelligence collection.52 The agency debated whether to disclose information about the pervasive, deep-reaching vulnerability in Microsoft’s software to the Redmond giant or to use it offensively, opting for the latter.53 However, in April 2017, a group calling itself the Shadow Brokers released information about EternalBlue and other NSA tools online.54 Though the NSA privately disclosed the existence of EternalBlue to Microsoft after discovery of the theft and shortly before it became public, adoption of the company’s security update lagged, as is common.55 Microsoft’s decision to initially restrict the distribution of the patch to customers with paid support contracts exacerbated the lag.56 Within two months, North Korean government hackers converted the exploit into a worm and then used the borrowed capability to launch the massive ransomware operation, WannaCry.57

Despite continued efforts by Microsoft to patch its software, EternalBlue is now a ubiquitous malware feature used by state-affiliated groups like Russia’s Fancy Bear and Iran’s Chafer, as well as myriad non-state and unsophisticated criminal actors.58 Much of the response to the incident, however, focused on the failure to secure the capability, rather than its development—cries of “how could you lose control?” reigned.59 However, the fallout from the incident also inspired questions about the apparent paradox of securing cyberspace by preparing weapons to compromise it.60

US cyber strategy must combine judicious offensive activity with deep investment in both the architecture of and distribution of risk across the cyber ecosystem. Attacker speed, incident impact, and the opportunity for exploitation increasingly outpace the efforts of cyber defenders throughout the software and basic technologies forming the fabric of cyberspace. These security challenges are widespread and must be confronted through resilient architectures, standards, and practices to address the risk at the root of widely used technology systems. The centrality of the private sector in this effort requires government to create policy that enables and incentivizes industry to manage risk across the ecosystem.

The NSA has since updated its policies on releasing discovered vulnerabilities. In cooperation with several other entities across the United States government, the agency engages in the Vulnerabilities Equities Process (VEP) to determine “which software vulnerabilities it discloses, and which ones it withholds for its own use in espionage, law enforcement, and cyber warfare.”61 However, this balance is fragile: it operates under an impermanent executive policy and still lacks legislative investiture.62 The improved transparency is a welcome step, but the government needs to do more to instill confidence in the public that US efforts balance the need to create and preserve offensive capabilities against the potentially competing desire to create a more secure ecosystem by reducing overall vulnerability.

Recommendations

Balance security and maintained vulnerability: Retaining and managing vulnerabilities that enable Defend Forward operations necessitates a purposeful balance, neither radical transparency nor opacity. The practice of Defend Forward is complex and might include instances where adversarial access is degraded preemptively rather than disrupted. Therefore, the government should encourage time-delayed declassification of VEP disclosure decisions where possible to increase transparency with allies and partners. Moreover, the government should require an automatic review of decisions to withhold vulnerabilities from disclosure no later than one year after a decision. This automatic process would encourage consistent enforcement and should include input from defensive collaboration vehicles like CISA’s JCDC to better contextualize the security benefits of disclosure.

Improve measurement: While there is no shortage of headline-grabbing figures about insecurity in the cyber ecosystem, many of those metrics lack standardization and transparency, leading to great variety among measures and reducing insight into their fluctuations over time. Many figures rely on private datasets and voluntary reporting. Understanding the state of security across the domain will require more rigorous measurement. The National Science Foundation (NSF) should fund select universities and research institutions to develop more rigorous statistics and measurement methods for cybersecurity (and insecurity). Multiple parallel studies undertaken across academia will prevent overreliance on single methodology sources while driving greater statistical rigor. This grant program should closely coordinate with improved information sharing among the private sector through CISA’s JCDC to give academic studies some access to proprietary datasets. It should coordinate similarly with Department of Justice mandatory reporting standards.

Invest proactively in resilience: Excessive strategic focus on offensive and forward defense activity privileges preemptive disruption over defensibility. While there is language about the importance of improved ecosystem resilience throughout US cyber strategy documents, this topic deserves far richer treatment than a framing device. Just as a successful counterinsurgency campaign needs to create secure governance as much as it requires degrading insurgent capabilities, policy must reduce the adversary’s ability to do harm—both by disrupting criminal activities and by bolstering social, economic, and political resilience. In the cyber ecosystem, this should start with prioritizing secure designs in information-technology (IT) systems. Starting at the root of common technologies gives policymakers the widest possible impact and helps nudge complex, dynamic systems towards security at scale.63 A strong example of this would be public-private investment in memory-safe code that can reduce the prevalence of entire classes of vulnerability while providing the opportunity to prioritize mission-critical code in government and industry.64 Refocusing on resilience and strengthening the security of core technology architectures should improve the lot of cyber defenders and users, producing systems that are innately easier to defend, more costly to compromise, and better able to improve over time—driving security without compromising efforts at protection.
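To make the memory-safety point concrete, the following minimal Rust sketch (an illustrative example added here, not drawn from the report) shows how a memory-safe language turns an out-of-bounds read—the root of many exploited vulnerability classes, including buffer-handling flaws like the one EternalBlue abused—into a recoverable error rather than silent access to adjacent memory:

```rust
fn main() {
    // A fixed-size buffer standing in for memory handled by a parser
    // or network service.
    let buf: [u8; 4] = [10, 20, 30, 40];

    // In C, `buf[9]` would read past the end of the array—undefined
    // behavior that attackers can often turn into information leaks
    // or code execution. Rust's checked access returns an Option,
    // so the out-of-bounds case must be handled explicitly.
    assert_eq!(buf.get(2), Some(&30)); // in bounds: value returned
    assert_eq!(buf.get(9), None);      // out of bounds: no memory read

    println!("out-of-bounds access handled safely");
}
```

Eliminating this class of bug by construction, rather than patching each instance after discovery, is the kind of root-level investment the recommendation describes.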

Conclusion

Many policies can contribute constructively to both US cyber superiority and an open, secure ecosystem. However, stability and offensive prowess do not always perfectly align. Put more directly—US victory over major adversaries is not sufficient to ensure a secure and stable cyberspace. Neither withdrawing from nor completely pacifying the digital domain is possible—all that remains is to secure it incrementally and continually. The work to develop a new US cybersecurity strategy can help reshape the balance of policy toward greater security and help ensure an important but complementary role for the offensive and Defend Forward activities at the center of the current strategic concept.

Defend Forward operations have a key role to play in disrupting adversaries before they can do harm, especially when targets and timelines are known; that entails, to some extent, preserving and exploiting insecurity. Yet, the observable shortcomings of these efforts resemble common critiques of US counterinsurgency efforts—mission creep, unquantifiable objectives, and indefinite timelines—precisely because of the assumption that effectively protecting US assets from adversaries in cyberspace is the same as creating a secure digital ecosystem at large. Instead, it is one step among many toward that end and may indeed be counterproductive to the larger goal if applied poorly or to excess. In the same way that killing insurgents is necessary to, but insufficient for, winning a counterinsurgency, offensive and forward defensive activities can realize much more strategic value alongside efforts that better address the full spectrum of cyber threats, improve coordination with allies, and encourage a more resilient cyber ecosystem. All these will contribute to a key pivot in framing: from Defend Forward as a whole-of-nation endeavor to one piece in a whole-of-nation strategy.65

As the United States redevelops its national cyber strategy, the question of overall political intent must stand at the forefront. This strategy needs to clearly address the dissonance between the stated policy goals of protection and domain security—a tall order, but a feasible one. Proactive offensive cyber operations that protect US infrastructure and interests are, and will continue to be, necessary. But just as in counterinsurgencies of the past, the United States must ensure that it does not fall into a “strategy of tactics,”66 losing the war by winning the battles.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

1    Chris Inglis and Harry Krejsa, “The Cyber Social Contract,” Foreign Affairs, February 21, 2022, https://www.foreignaffairs.com/articles/united-states/2022-02-21/cyber-social-contract.
2    Frank Konkel, “Pentagon Thwarts 36 Million Email Breach Attempts Daily,” Nextgov.com, January 11, 2018, https://www.nextgov.com/cybersecurity/2018/01/pentagon-thwarts-36-million-email-breach-attempts-daily/145149/.
3    “Cyber Threat Map,” FireEye, accessed May 27, 2022, https://www.fireeye.com/cyber-map/threat-map.html.
4    Sergiu Gatlan, “Attackers Scan for Vulnerable VMware Servers after PoC Exploit Release,” BleepingComputer, February 25, 2021, https://www.bleepingcomputer.com/news/security/attackers-scan-for-vulnerable-vmware-servers-after-poc-exploit-release/.
5    Winnona DeSombre, et al., Countering Cyber Proliferation: Zeroing in on Access-as-a-Service, Atlantic Council, March 1, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/countering-cyber-proliferation-zeroing-in-on-access-as-a-service/.
6    Robert Lemos, “China’s Claim on Vulnerability Details Could Chill Researchers,” Dark Reading, July 20, 2021, https://www.darkreading.com/vulnerabilities-threats/china-s-claim-on-vulnerability-details-could-chill-researchers.
7    Erica Lonergan and Mark Montgomery, “Defend Forward as a Whole-of-Nation Effort,” Lawfare, March 11, 2020, https://www.lawfareblog.com/defend-forward-whole-nation-effort.
8    Jason Healey, “The Implications of Persistent (and Permanent) Engagement in Cyberspace,” Journal of Cybersecurity 5, 1 (2019), https://doi.org/10.1093/cybsec/tyz008; Max Smeets and Herbert Lin, “An Outcome-Based Analysis of US Cyber Strategy of Persistence & Defend Forward,” Lawfare, November 28, 2018, https://www.lawfareblog.com/outcome-based-analysis-us-cyber-strategy-persistence-defend-forward; Max Smeets, “Cyber Command’s Strategy Risks Friction With Allies,” Lawfare, May 28, 2019, https://www.lawfareblog.com/cyber-commands-strategy-risks-friction-allies.
9    Jason Healey and Robert Jervis, “The Escalation Inversion and Other Oddities of Situational Cyber Stability,” Texas National Security Review 3, 4 (2020), 30–53, http://dx.doi.org/10.26153/tsw/10962.
10    Carl von Clausewitz, On War, Michael Howard and Peter Paret, eds. (Princeton, NJ: Princeton University Press, 1976), 75, https://www.usmcu.edu/Portals/218/EWS%20On%20War%20Reading%20Book%201%20Ch%201%20Ch%202.pdf; “US Government Counterinsurgency Guide,” Department of State, Bureau of Political-Military Affairs, January 2009, 3, https://2009-2017.state.gov/documents/organization/119629.pdf; Heather S. Gregg, “Beyond Population Engagement: Understanding Counterinsurgency,” US Army, December 29, 2009, https://www.army.mil/article/32363/beyond_population_engagement_understanding_counterinsurgency.
11    Helmuth von Moltke, “On Strategy,” in Moltke on the Art of War: Selected Writings, ed. Daniel J. Hughes (Novato, CA: Presidio Press, 1995), 46.
12    Otto von Bismarck, Bismarck: the Man and the Statesman: Being the Reflections and Reminiscences of Otto Prince Von Bismarck, vol. 2 (London: Smith, Elder & Co., 1898), 105.
13    “National Cyber Strategy of the United States of America,” Office of the President of the United States, September 2018, https://trumpwhitehouse.archives.gov/wp-content/uploads/2018/09/National-Cyber-Strategy.pdf.
14    Paul Nakasone, “A Cyber Force for Persistent Operations,” Joint Force Quarterly, no. 92 (January 22, 2019): 10–14.
15    “Summary: Department of Defense Cyber Strategy,” US Department of Defense, September 2018), https://media.defense.gov/2018/Sep/18/2002041658/-1/-1/1/CYBER_STRATEGY_SUMMARY_FINAL.PDF.
16    Michael P. Fischerkeller and Richard J. Harknett, “Deterrence Is Not a Credible Strategy for Cyberspace,” Orbis 61, 3 (2017), 381–393, https://doi.org/10.1016/j.orbis.2017.05.003.
17    Fischerkeller and Harknett, “Deterrence Is Not a Credible Strategy for Cyberspace.”
18    Erica Lonergan, “Operationalizing Defend Forward: How the Concept Works to Change Adversary Behavior,” Lawfare, March 12, 2020, https://www.lawfareblog.com/operationalizing-defend-forward-how-concept-works-change-adversary-behavior.
19    Jenny Jun, “Preparing for the Next Phase of US Cyber Strategy,” Atlantic Council, March 30, 2022, https://www.atlanticcouncil.org/in-depth-research-reports/issue-brief/preparing-the-next-phase-of-us-cyber-strategy/
20    Eric O’Neill, “Defend Forward Amid a New Era of Cyber Espionage,” Newsweek, April 23, 2021, https://www.newsweek.com/defend-forward-amid-new-era-cyber-espionage-opinion-1585854.
21    United States Special Operations Command and United States Cyber Command Before the Senate Armed Services Committee, 117th Congress (2021) (Gen. Paul Nakasone, Commander, US Cyber Command), https://www.armed-services.senate.gov/imo/media/doc/Nakasone_03-25-21.pdf.
22    Erica Lonergan, “Cyber Command’s Role in Election Defense: Important, But Not a Panacea,” Lawfare, October 30, 2020, https://www.lawfareblog.com/cyber-commands-role-election-defense-important-not-panacea.
23    Mark Pomerleau, “The US Military Is Targeting Foreign Actors to Defend the Presidential Election,” C4ISRNet, October 30, 2020, https://www.c4isrnet.com/cyber/2020/10/30/the-us-military-is-targeting-foreign-actors-to-defend-the-presidential-election/.
24    David Kilcullen, The Accidental Guerrilla: Fighting Small Wars in the Midst of a Big One (Oxford: Oxford University Press, 2011), 266.
25    John Mackinlay and Alison Al-Baddawy, “Rethinking Counterinsurgency: RAND Counterinsurgency Study—Volume 5,” RAND, April 15, 2008, https://www.rand.org/pubs/monographs/MG595z5.html; Nigel Aylwin-Foster, “Changing the Army for Counterinsurgency Operations,” Military Review LXXXV, 6 (2005), https://www.hsdl.org/?view&did=484927; John A. Nagl, Learning to Eat Soup with a Knife: Counterinsurgency Lessons from Malaya and Vietnam (Chicago: University of Chicago Press, 2005), 116; Colin S. Gray, “Irregular Enemies and the Essence of Strategy: Can the American Way of War Adapt,” Strategic Studies Institute, US Army War College, March 1, 2006, https://www.jstor.org/stable/resrep11464.
26    Max Smeets and Herbert Lin, “Chapter 4: A Strategic Assessment of the U.S. Cyber Command Vision,” in Bytes, Bombs, and Spies: The Strategic Dimensions of Offensive Cyber Operations, Amy Zergart and Herbert Lin, eds., (Washington, DC: Brookings Institution Press, 2019), 81–104, https://www.brookings.edu/book/bytes-bombs-and-spies/.
27    Mattis on Ammunition (Washington, DC, 2013), https://www.c-span.org/video/?c4658822/user-clip-mattis-ammunition.
28    “Achieve and Maintain Cyberspace Superiority: Command Vision for US Cyber Command,” US Cyber Command, April 2020, https://www.cybercom.mil/Portals/56/Documents/USCYBERCOM%20Vision%20April%202018.pdf.
29    Trey Herr et al., Broken Trust: Lessons from Sunburst, Atlantic Council, 2021, https://www.atlanticcouncil.org/in-depth-research-reports/report/broken-trust-lessons-from-sunburst/.
30    Kelly Bissell, Ryan LaSalle, and Paolo Dal Cin, “The Cost of Cyber Crime: Ninth Annual Cost of Cybercrime Study—Unlocking the Value of Improved Cybersecurity Protraction,” AccentureSecurity, Ponemon Institute, March 6, 2019, https://www.accenture.com/us-en/insights/security/cost-cybercrime-study; James Andrew Lewis, Zhanna Malekos Smith, and Eugenia Lostri, “The Hidden Costs of Cybercrime,” (San Jose, CA: CSIS, McAfee, December 9, 2020), https://www.mcafee.com/enterprise/en-us/assets/reports/rp-hidden-costs-of-cybercrime.pdf.
31    Winnona DeSombre et al., Countering Cyber Proliferation: Zeroing in on Access-as-a-Service, Atlantic Council, March 1, 2021, https://www.atlanticcouncil.org/programs/scowcroft-center-for-strategy-and-security/cyber-statecraft-initiative/the-proliferation-of-offensive-cyber-capabilities/.
32    David Kilcullen, Matt Porter, and Carlos Burgos, “U.S. Government Counterinsurgency Guide,” Washington, DC: US Department of State, January 1, 2009, 12, https://apps.dtic.mil/sti/citations/ADA494660.
33    For context on several of these recommendations and more, see the Buying Down Risk series from the Cyber Statecraft Initiative: https://www.atlanticcouncil.org/content-series/buying-down-risk/home/
34    Andrew Morrison, Emily Mossburg, and Ed Powers, “Black-Market Ecosystem: Estimating the Cost of ‘Pwnership’” (New York, NY: Deloitte, December 14, 2018), https://www2.deloitte.com/us/en/pages/about-deloitte/articles/press-releases/deloitte-announces-new-cyber-threat-study-on-criminal-operational-cost.html.
35    Jacob Bunge, “JBS Paid $11 Million to Resolve Ransomware Attack,” Wall Street Journal, June 9, 2021, https://www.wsj.com/articles/jbs-paid-11-million-to-resolve-ransomware-attack-11623280781; William Turton and Kartikay Mehrotra, “Hackers Breached Colonial Pipeline Using Compromised Password,” Bloomberg, June 4, 2021, https://www.bloomberg.com/news/articles/2021-06-04/hackers-breached-colonial-pipeline-using-compromised-password.
36    US Congress, Senate Small Business and Entrepreneurship Committee and House Small Business Committee, Small Business Development Center Cyber Training Act, 117 Cong., 2021, https://www.congress.gov/bill/117th-congress/house-bill/4515.
37    US Congress, House Budget Committee, Build Back Better Act, 117 Cong., 2021, https://www.congress.gov/bill/117th-congress/house-bill/5376?q=%7B%22search%22%3A%22hr+5376%22%7D&s=2&r=3.
38    US Congress, House Transportation and Infrastructure Committee, Infrastructure Investment and Jobs Act, 117 Cong., 2022, https://www.congress.gov/bill/117th-congress/house-bill/3684/text.
39    “Combating Ransomware: A Comprehensive Framework for Action: Key Recommendations from the Ransomware Task Force,” Institute for Security and Technology, April 2021, https://securityandtechnology.org/ransomwaretaskforce/report/.
40    Stewart Scott et al., “Buying down risk: Cyber poverty line,” Atlantic Council, May 3, 2022, https://www.atlanticcouncil.org/content-series/buying-down-risk/cyber-poverty-line/.
41    Max Smeets, “U.S. Cyber Strategy of Persistent Engagement & Defend Forward: Implications for the Alliance and Intelligence Collection,” Intelligence and National Security 35, 3 (2020), 444–453, https://doi.org/10.1080/02684527.2020.1729316.
42    Harriet Moynihan, “The Vital Role of International Law in the Framework for Responsible State Behaviour in Cyberspace,” Journal of Cyber Policy 6, 3 (2021), 394–410, https://doi.org/10.1080/23738871.2020.1832550.
43    Max Smeets, “Cyber Command’s Strategy Risks Friction With Allies,” Lawfare, May 28, 2019, https://www.lawfareblog.com/cyber-commands-strategy-risks-friction-allies.
44    Angela O’Mahony, et al., “U.S. Presence and the Incidence of Conflict,” RAND, 2018, https://www.rand.org/pubs/research_reports/RR1906.html; Eric Neumayer and Thomas Plümper, “Foreign Terror on Americans,” Journal of Peace Research 48, 1 (2010), 3–17, https://doi.org/10.1177/0022343310390147; Alex Braithwaite, “Transnational Terrorism as an Unintended Consequence of a Military Footprint,” Security Studies 24, 2 (2015), 349–375, https://doi.org/10.1080/09636412.2015.1038192; Alexander Cooley, “Base Politics,” Foreign Affairs 84, 6 (2005), 79–92; Thomas Gries, Daniel Meierrieks, and Margarete Redlin, “Oppressive Governments, Dependence on the USA, and Anti-American Terrorism,” Oxford Economic Papers 67, 1 (2015), 83–103, https://doi.org/10.1093/oep/gpu038.
45    Amrit Singh, “Death by Drone: Civilian Harm Caused by U.S. Targeted Killings in Yemen,” Open Society Justice Initiative, Mwatana Organization for Human Rights, 2015, https://www.justiceinitiative.org/publications/death-drone; Sarah Kreps, Paul Lushenko, and Shyam Raman, “Biden Can Reduce Civilian Casualties during US Drone Strikes. Here’s How,” Brookings, January 19, 2022, https://www.brookings.edu/articles/biden-can-reduce-civilian-casualties-during-us-drone-strikes-heres-how; Chantal Grut and Naureen Shah, “Counting Drone Strike Deaths,” Columbia Law School Human Rights Clinic, October 2012, https://web.law.columbia.edu/sites/default/files/microsites/human-rights-institute/files/COLUMBIACountingDronesFinal.pdf; “Drone Warfare,” Bureau of Investigative Journalism, last visited May 27, 2022, https://www.thebureauinvestigates.com/projects/drone-war.
46    John Sopko, “Stabilization: Lessons from the U.S. Experience in Afghanistan,” Special Inspector General for Afghanistan Reconstruction, May 2018, https://www.sigar.mil/interactive-reports/stabilization/index.html.
47    Jennifer Bachus, “Bureau of Cyberspace and Digital Policy,” United States Department of State (blog), accessed May 27, 2022, https://www.state.gov/bureaus-offices/deputy-secretary-of-state/bureau-of-cyberspace-and-digital-policy/.
48    “Achieve and Maintain Cyberspace Superiority,” 6.
49    Lonergan, “Operationalizing Defend Forward.”
50    Ben Connable, et al., “Assessing Freedom of Movement for Counterinsurgency Campaigns,” RAND, 2012, 23, https://www.rand.org/pubs/technical_reports/TR1014.html.
51    Steven Metz, “Rethinking Insurgency” (Strategic Studies Institute, US Army War College, June 1, 2007), Summary vi, https://www.jstor.org/stable/resrep11642.
52    Ellen Nakashima and Craig Timberg, “NSA Officials Worried about the Day Its Potent Hacking Tool Would Get Loose. Then It Did.,” Washington Post, May 16, 2017, https://www.washingtonpost.com/business/technology/nsa-officials-worried-about-the-day-its-potent-hacking-tool-would-get-loose-then-it-did/2017/05/16/50670b16-3978-11e7-a058-ddbb23c75d82_story.html.
53    Nakashima and Timberg, “NSA Officials Worried about the Day.”
54    Andy Greenberg, “Major Leak Suggests NSA Was Deep in Middle East Banking System,” Wired, April 14, 2017, https://www.wired.com/2017/04/major-leak-suggests-nsa-deep-middle-east-banking-system/.
55    Dan Goodin, “Fearing Shadow Brokers Leak, NSA Reported Critical Flaw to Microsoft,” Ars Technica, May 17, 2017, https://arstechnica.com/information-technology/2017/05/fearing-shadow-brokers-leak-nsa-reported-critical-flaw-to-microsoft/.
56    Richard Waters and Hannah Kuchler, “Microsoft held back free patch that could have slowed WannaCry,” Financial Times, May 17, 2017, https://www.ft.com/content/e2786cbe-3a97-11e7-821a-6027b8a20f23.
57    Ali Islam, Nicole Oppenheim, and Winny Thomas, “SMB Exploited: WannaCry Use of ‘EternalBlue,’” Mandiant, May 26, 2017, https://www.mandiant.com/resources/smb-exploited-wannacry-use-of-eternalblue; Nakashima and Timberg, “NSA Officials Worried about the Day Its Potent Hacking Tool Would Get Loose. Then It Did.”; Lily Hay Newman, “How Leaked NSA Spy Tool ‘EternalBlue’ Became a Hacker Favorite,” Wired, March 7, 2018, https://www.wired.com/story/eternalblue-leaked-nsa-spy-tool-hacked-world/.
58    Nakashima and Timberg, “NSA Officials Worried about the Day Its Potent Hacking Tool Would Get Loose. Then It Did.”; “Chafer: Latest Attacks Reveal Heightened Ambitions,” Symantec, February 28, 2018, https://symantec-enterprise-blogs.security.com/blogs/threat-intelligence/chafer-latest-attacks-reveal-heightened-ambitions.; Andy Greenberg, “A Russian Hacker Group Used a Leaked NSA Tool to Spy on Hotel Guests,” Wired, August 11, 2017, https://www.wired.com/story/fancy-bear-hotel-hack/; “Fancy Bear Hackers (APT28): Targets & Methods,” CrowdStrike, February 12, 2019, https://www.crowdstrike.com/blog/who-is-fancy-bear/.
59    Nakashima and Timberg, “NSA Officials Worried about the Day Its Potent Hacking Tool Would Get Loose. Then It Did.”; Thomas Brewster, “An NSA Cyber Weapon Might Be Behind A Massive Global Ransomware Outbreak,” Forbes, May 12, 2017, https://www.forbes.com/sites/thomasbrewster/2017/05/12/nsa-exploit-used-by-wannacry-ransomware-in-global-explosion/?sh=40a8274ae599.
60    Brad Smith, “The Need for Urgent Collective Action to Keep People Safe Online: Lessons from Last Week’s Cyberattack,” Microsoft, May 14, 2017, https://blogs.microsoft.com/on-the-issues/2017/05/14/need-urgent-collective-action-keep-people-safe-online-lessons-last-weeks-cyberattack/#sm.0001g1c7g94cgcqzpqt24knkjj2ra; Amy Zegart, “The NSA Confronts a Problem of Its Own Making,” Atlantic, June 29, 2017, https://www.theatlantic.com/international/archive/2017/06/nsa-wannacry-eternal-blue/532146/.
61    Lily Hay Newman, “Feds Explain Their Software Bug Stash—But Don’t Erase Concerns,” Wired, November 15, 2017, https://www.wired.com/story/vulnerability-equity-process-charter-transparency-concerns/.
62    William Loomis and Stewart Scott, “A Role for the Vulnerabilities Equities Process in Securing Software Supply Chains, Lawfare, January 11, 2021, https://www.lawfareblog.com/role-vulnerabilities-equities-process-securing-software-supply-chains.
63    Loomis and Scott, ”A Role for the Vulnerabilities Equities Process.”
64    Stewart Scott et al., “Buying down risk: Memory safety,” Atlantic Council, May 3, 2022, https://www.atlanticcouncil.org/content-series/buying-down-risk/memory-safety/.
65    Lonergan and Montgomery, “Defend Forward as a Whole-of-Nation Effort.”
66    Gray, “Irregular Enemies and the Essence of Strategy,” 20.

The post Victory reimagined: Toward a more cohesive US cyber strategy appeared first on Atlantic Council.

These organizations are on the front lines of internet equity—and countering China’s influence
https://www.atlanticcouncil.org/news/transcripts/these-organizations-are-on-the-front-lines-of-internet-equity-and-countering-chinas-influence/
June 7, 2022

Will democracies make the investments in digital equity needed to expand access to health and education, political participation, and economic opportunity?

Watch the panel

360/Open Summit: Contested Realities | Connected Futures

June 6-7, 2022

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) hosts 360/Open Summit: Contested Realities | Connected Futures in Brussels, Belgium.

Event transcript

Uncorrected transcript: Check against delivery

Speakers

Mark Gitenstein
Ambassador of the United States to the European Union, US Mission to the EU

Catalina Escobar
Co-Founder and Chief Strategy Officer, Makaia

Malavika Jayaram
Executive Director, Digital Asia Hub

Moderator

Jochai Ben-Avie
Co-Founder and Chief Executive, Connect Humanity

JOCHAI BEN-AVIE: Good morning, good afternoon, good evening wherever you are. Thank you for being with us.

My name’s Jochai Ben-Avie. I am the co-founder and chief executive of Connect Humanity—we are a fund to accelerate digital equity—and, as you heard, a nonresident fellow at the Atlantic Council’s DFRLab.

Let’s start with a sobering truth. Even under the most generous accounting, three billion people lack access to the internet. That number from the ITU, which you might hear sometime, counts anyone who has used the internet at least once in the last three months. When you add people who don’t have reliable access, the number goes up. When you count people who don’t have high-speed access, that number goes substantially up. When you add people who can’t afford the internet, the number goes way up. When you add the people who lack the digital literacy to meaningfully use the internet to improve their lives, the number goes even higher still. And estimates of the cost to connect everyone range from $428 billion to in excess of $2.2 trillion, and committing those funds has never been more important.

During the pandemic, we’ve seen just how important the internet is, sometimes painfully so, right? Those numbers hide the fact that what we’re talking about is kids being able to go to school. It’s about folks being able to work remotely and provide for their families, to engage with their communities, to talk to their doctor and find access about the vaccine, to participate in democracy, and so much more. What we have is a world where billions of people are falling further behind just by staying where they are.

Connect Humanity, the organization I’m privileged to lead, was started by a group of colleagues brought together by the overwhelming feeling that it doesn’t have to be this way. We generally know how to connect the unconnected. That’s not the hard part. And we generally know that traditional telecom operators—your AT&Ts and Vodafones of the world—have not and will not connect everyone. It’s simply not in their business model to do so. It’s not a problem the market is going to solve.

And so at Connect Humanity we focus on nontraditional operators—the sort of community networks, cooperatives, municipal networks, smaller operators—folks who are more grounded in their communities, often community-owned, who have different business models and different incentives. But most of these communities and most of these operators struggle nearly universally with access to capital. They’re often too big for philanthropy and microfinance and too small for direct foreign investment, international aid organizations, and commercial bank loans. Our existing funding mechanisms, simply put, have a much easier time funding a large company to build a billion-dollar submarine cable than to give a million dollars or even a hundred thousand dollars to an underserved community.

And even for the many governments of the world who are looking to connect their people, there are few choices. Indeed, often the only choice for the governments who want to invest in connecting their people is Chinese financing. In Africa, Chinese investment in ICT infrastructure surpasses spending from African governments, G7 nations, and multilateral agencies combined. Chinese financing often comes with Chinese vendors and construction companies, Chinese hardware, Chinese software, and Chinese control. For many if not most of the world, their first online steps will be on Chinese infrastructure owned by Chinese-controlled operators, with all that data available to the Chinese government. In doing so, China’s expanding not just their surveillance capabilities but also their influence over vast swaths of the world.

A huge part of this is through the Chinese Belt and Road Initiative, and at least 146 countries are currently receiving support from China through the Belt and Road Initiative. And as we’ll hear from our panel today, this raises real challenges for democracy, human rights, economic opportunity, and ultimately national security. And I should add that at a time when climate change is increasingly being recognized as a national-security threat and network equipment produces as much carbon-dioxide emissions as the airline sector, we must also think about the energy resources that will be used to connect the other half of the world.

And whether it’s climate change or awareness raised by the pandemic about the need to expand reliable, affordable access to the internet, the democratic world largely has not come up with a compelling answer to Chinese money—with democracy, human rights, economic development, and national security hanging in the balance.

And with that, let me bring in our panel. I’m joined today and honored to be joined by US Ambassador to the EU Mark Gitenstein; Malavika Jayaram, the executive director of the Digital Asia Hub; and Catalina Escobar, the co-founder and chief strategy officer of Makaia.

Let’s start with Catalina. I know it’s late for you. Appreciate you being here with us today. Your organization, Makaia, has been doing great work to connect the unconnected and underserved communities across Colombia, especially in formerly FARC-controlled territories. Maybe you can set the stage for us by describing what internet access in Colombia has meant for the peace and reconciliation process, economic development, and political participation. And why is it that a nonprofit organization like yours is connecting people and not the large telecoms in your country?

CATALINA ESCOBAR: Well, hi, everybody. Jochai, thank you for the introduction and for having me here. Also, hi to my fellow panelists.

It’s very great to be here. Actually, it’s not late. It’s early in the morning. It’s 4:30 in the morning, so. I’m based here in Medellin, Colombia, but super excited to be virtually here.

So, to answer your question, so a little bit of background. So Makaia is a nonprofit organization created and based here in Colombia. We have been up and running for sixteen years and our purpose is to build capacities for social development using technology and innovation.

So why did we end up doing a connectivity project in the peace zones in Colombia? So we actually have worked in two of the municipalities that have been determined to be essential for the peace process in Colombia. But the reason why we ended up doing connectivity is because, I mean, at the beginning it wasn’t like, oh, let’s do our connectivity project in these two municipalities. The reason why it ended up there is because we wanted to do a technical support process for coffee growers in those municipalities. So the purpose was not connectivity itself; the purpose was to bring tech capacities, digital skills to coffee growers. We started in one of the peace zones. So it started as a—as a digital skills project.

When we went to the zone in the first visit—I actually went to that first visit—we realized that it was impossible to do a digital skills and tech capacity project because there was no connectivity. And when I say no connectivity, it’s like no connectivity at all. Our cell phones didn’t work when we were visiting the coffee growers. So we sort of had to go back to the basics and said, OK, what are we going to have to do here?

We were super fortunate that the funder was flexible, because when funders are not flexible these types of unexpected circumstances are very difficult to manage. And the funder was the Lavazza Foundation—Lavazza, like the coffee company—because they wanted to engage coffee growers more and better in their value chain. So we went to Lavazza and we said, hey, there's no connectivity; we need to start from the basics. And they said, OK, do whatever you have to do. So I think that there's one lesson learned, and it's that funder flexibility is super important in these types of projects.

So I guess—so we ended up connecting five coffee farms and some schools using TV white spaces. The legislation had recently passed in Colombia, so it was like a really, really good moment to do a pilot using TV white spaces.

And I think another lesson is that it was connectivity with a purpose. It wasn’t connectivity just for providing connectivity; it was connectivity to improve the quality and the efficiency of the coffee value chain. So we were connecting coffee growers to teach them about prices; about quality; about how to engage with Lavazza, which is their final purchaser of the coffee; and things like that.

And then we replicated this project in another peace zone, in another peace municipality in Colombia. And I can talk more later about the small cooperatives that are starting to provide internet access.

JOCHAI BEN-AVIE: Wonderful.

CATALINA ESCOBAR: But going back, Jochai, to your other question: why don't large telecoms engage? Actually, our first thought was let's go to the telecoms and ask them to bring internet access to these municipalities. It was not possible. After many conversations, we realized that the answer from them is, basically, there is no market. And when they just say that, it sort of closes the door for any possible future conversations. And that's why we are so aligned with the Connect Humanity purpose: that small operators, cooperatives, coffee-grower cooperatives are really the solutions for these isolated communities around the world.

So that’s why a nonprofit ended up doing a connectivity project, because we needed connectivity for a specific purpose. And now it’s been transferred to local, small cooperatives that are doing the connectivity. And as I said, I can talk more about that later.

JOCHAI BEN-AVIE: Thank you, Catalina. I appreciate it.

And I think this really emphasizes, again, how it's not just network operators who can build the internet, right? People seek out the internet not for connectivity's sake, but to improve their lives. And so we see coffee growers who are developing their own internet networks to meet their needs, often where the market won't otherwise solve that problem.

To continue our sort of scene-setting, Mr. Ambassador, maybe I can turn to you. During your time as ambassador in Romania, you supported some extremely successful civic tech programs. Could you share a bit about what happened and why, and how important the internet was to those efforts?

MARK GITENSTEIN: Well, thank you, Jochai.

First of all, I want to give credit to your partner, Chris Worman, who helped me come up with this idea. But it was 2010. I came to Romania as US ambassador in 2009 with a mandate to deal with the issue of anticorruption and rule of law, which is still a problem in Romania but there’s been tremendous progress—primarily, by the way, because of its membership in the EU. It’s one of the reasons I wanted this job. So it’s late 2010. The Arab Spring had just started. About a couple of months before that, I had seen something very surprising in Romania. By the way, the infrastructure in Romania for the internet was actually pretty good. But social media was just emerging in Romania, and I became aware of the fact that the fastest-growing Facebook market probably in the world, certainly in Europe, was in Bucharest. But the other thing is around that same time there had been an activist nongovernmental-organization-sponsored event countrywide in Romania. It was actually designed to clean up the trash on a single day, and they got hundreds of thousands of Romanians on the street in a single day.

So I thought, watching what was happening in the Arab Spring where social media drove a lot of activism, I said, why can’t we do this with anticorruption in Romania? And into my office popped Chris Worman, who was then running TechSoup Romania, which we can talk about later if you want. TechSoup’s a great organization that focuses on many of these issues. And I told him, I said, why can’t we have a MoveOn.org in Romania that’s focused on anticorruption? He says, well, here’s an idea. Let’s find some money. Turned out there was ninety thousand dollars available—just shows you how a small amount of money can have a huge impact—in one of our accounts at the embassy. And he says, why don’t we do what he called a competition. And so we sent out a communication to almost every activist in Romania. Before it was over—it turns out within a couple of weeks we had reached 1.2 million Romanians out of eighteen—there’s only eighteen million people in Romania. That was pretty remarkable.

And they came back—the idea of the competition was: give us an idea for how you can use social media to fight corruption. It came back with 150 ideas, and then we had a conclave and a voting system where, if you were invited, you got to vote on the best ideas. We narrowed it down to ten ideas, and then we had a vote, and I think we funded five ideas. And between the cost of getting everybody to Bucharest—paying for their travel, et cetera—and the money, we had maybe ten thousand dollars for each of these internet websites, which in Romania was a lot of money. And they were all very successful, but one of them you may have heard of. Funky Citizens is a great organization that has since become one of the most important activist organizations in Romania. I think one-third of all people online are on the Funky Citizens website.

And after I left in 2012, I learned that not only did they create their own MoveOn.org—it’s called Click It, I think, or Click On; it has 1.2 million people online in 2017—you may have read about this—there was an effort by the new majority party to undo all of the work we had done on anticorruption. And through this organization, between Click It and Funky Citizens, they got six hundred thousand people on the street in Bucharest—it was on the front page of the Wall Street Journal—and they basically reversed what happened—what was happening in Romania. And it was all spontaneous. But we built—what we did is we built social-media capability to fight corruption and it’s had a huge impact.

By the way, Funky Citizens—I just learned this this morning; Elena Calistru, who runs Funky Citizens, had mentioned this to me, but I was stunned by it—within two days of the war, they raised six hundred thousand dollars online, which in Romania is a huge amount of money. They sent fifteen semis full of food and aid into Ukraine, and they have moved out tens of thousands of refugees through that organization together with Sean Penn's organization.

So here ninety thousand dollars, and that’s what we were able to do with ninety thousand dollars, a small amount of money, using infrastructure. Now, the Arab Spring didn’t turn out so well—but it certainly served as a good model.

JOCHAI BEN-AVIE: Maybe I can ask you about another model. I mean, this is—I think really demonstrates the power of civil society and civic tech, but really only possible in a country with ubiquitous access.

MARK GITENSTEIN: Yeah.

JOCHAI BEN-AVIE: And you know, I know Chris and you have spoken about sort of the role of the SEED Act in helping to sort of catalyze that kind of investment in Eastern and sort of Central Europe. I wonder if you could speak briefly about what the SEED Act is/was and sort of the role it played, and then we’ll—

MARK GITENSTEIN: Yeah. I can't remember what the actual letters in the SEED [Support for East European Democracy] Act stand for, but I can tell you what it did. It was passed in 1989—I actually just researched the legislative history coming over in the car just now—back in an era when there really was bipartisan collaboration in the Senate, unlike today. And I actually read through the debate.

By the way, Biden then was the chairman of the European Affairs Subcommittee of the Foreign Relations Committee. It was his substitute that actually passed the Senate, and it had two big elements in it that were very relevant. One, it was designed to provide money to Central and Eastern Europe—this is right after the wall came down—for things like social media and the internet, which was in its infancy in those days, but also money for what were known as enterprise funds. And enterprise funds were, in effect, venture capital funds created initially in just Poland and Hungary, and then expanded to all of Central and Eastern Europe. And these funds were designed to fund, in effect, tech companies, but also other enterprises.

One of the companies that they funded in Romania was Bitdefender. I don’t know if you know what Bitdefender is, one of the top cybersecurity companies in the world now. That money, in turn, when the money came back into the venture fund, was used to fund a foundation. That’s where Chris is today; he’s on the board of that foundation. And so that’s become—and this happened throughout Central and Eastern Europe. The most successful, actually, was in Poland, where they actually created equity markets, powerful equity markets which are driving democracy and free markets in Poland. And it’s now happening in Romania.

So the SEED Act was a small amount of money that was put in each of these countries to fund private-sector but also public-sector, but very targeted and focused on capacity building. By the way, the ninety thousand dollars was SEED Act money. So it’s an incredibly smart investment by the US in both free markets and democracy capacity building.

JOCHAI BEN-AVIE: Thank you, Mr. Ambassador. I think that’s a fascinating model and one we can learn from, and I think also speaks to the type of flexible funding that Catalina was speaking of earlier. It’s a model that I wish we would see more of, you know, in—from democracies around the world.

MARK GITENSTEIN: Can I—can I just add something to that? The reason it worked is because Romanian activists were making the decisions. They were not being made in Washington. Very important.

JOCHAI BEN-AVIE: I think that’s essential, right? I think it’s about empowering local communities to be, you know, in control of their digital futures, and we need to always be keeping that in mind as we think about funding these kinds of efforts.

Speaking of people who are not thinking about that, you know, to the extent that any nation is really investing the billions if not trillions of dollars that it’s going to take to connect everyone, frankly, it’s China. Which brings me to our final speaker, Malavika. I was wondering if you could hopefully comment on sort of: What is the scale of China’s investment in global internet infrastructure, and how does that fit into the country’s broader geopolitical strategy? And how worried should we be about this from a national security and human rights perspective?

MALAVIKA JAYARAM: Thanks, Jochai. Thanks, Rose and everyone else, for having me here.

This is a really contested and polarizing topic, so I’m really glad we’re discussing it here. Even in the framing of the question, you go straight from mentioning China to going to what does this mean for human rights, right? So the fact that those two things are so intimately tied I think makes this a really important conversation.

On the question of scale, I mean, you can look this up and find all kinds of data on, you know, Wikimedia and all of the interwebs, so I’m not going to bore you with statistics. But I think qualitatively when we think of scale you’ve all heard of what’s now the Belt and Road Initiative, what was formerly known as One Belt, One Road. And I want to bring that up first because I think even the name change is really significant when it comes to understanding China’s political ambitions.

There was criticism of the original name—even though "One Belt, One Road" is the literal translation of the Mandarin "yidai yilu"—and a sort of PR job was done to it, because thinking of it as a singular idea—a singular belt, a singular road—was seen as problematic both as a narrative and as an actual fact, because there will be five roads, there will be many belts, many roads. So I think moving away from the idea of the singular was also very powerful, because it implied that there is something pluralistic about this idea, that there is some kind of element of inclusion attached to it, and that it wasn't China trying to capture the single road or the single narrative but that it was open to influence, open to negotiation, open to conversation and dialogue. And so it was felt that this would be a much better name for the initiative—Belt and Road was less contested.

And I think that's also interesting because even the word "initiative" implies that it's a work in progress, right? It implies that it's not fully baked, that it's open to what partners want. And you can cite all kinds of statistics—a hundred-and-some countries, 180 countries, have signed deals; it affects so many countries; so many places already have infrastructure—but I think what's very compelling is this narrative that pushes it into the context of a trade war, right? You're bringing it into the context of a competition not just for who provides and funds infrastructure, but a competition around ideas and ideologies, right? Do you go with the American, human-rights-respecting, freedom-oriented approach, or do you want the authoritarian, human-rights-violating Chinese approach—which, as all binaries are, is a very reductive, terrible way to start, but that's often how it's seen, right? And I think you'll find the truth might be somewhere in the middle, or it may be skewed a little closer to one side.

But despite the fact that it’s a very binary narrative, you’ll see a lot of stuff about: Why is China buying up all the ports? Why is Sri Lanka, you know, now in thrall to global debt-trap diplomacy? Why is China using Trojan horses to enter Europe? Or Macron saying roads are not a one-way street, right, or words to that effect, that it’s a two-way thing?

So you still have these very emotive, very powerful narratives that pitch the Chinese effort as sort of, you know, dead on arrival, which belies the actual influence it has in the region. And I think that’s particularly dangerous because in an era when there’s a lot of love of strongmen in Asia, a lot of love for dictators, a lot of feeling that, you know, we need to stand up to countries, especially as America’s power in the world is seen as a little diminished relative to what it used to be. I mean, nature abhors a vacuum. Apparently, so does China, right? It sees an infrastructure deficit and says: Why don’t we go plug that gap? America is involved with internal affairs, domestic politics, not so outward-looking as it used to be—or at least so goes the narrative, which we can dispute—why don’t we go and plug this gap and actually start building out infrastructure that America currently isn’t interested in, Europe isn’t interested in, right? They’re busy drafting the GDPR. Why don’t we just go and, like, flout data-protection policies around the world and build out this infrastructure? So I think that’s kind of the context in which we’re seeing this.

I also want to point out a couple of other things, which is that we act as if surveillance is the monopoly of a country like China, except that as we’ve been talking about throughout this conference it isn’t, right? We have the term “surveillance capitalism” as being one of the most-touted words we use today. That’s a very American phenomenon. It’s very linked to a particular economic model, a very particular political ideology. You don’t have surveillance authoritarianism. You don’t have surveillance neocolonialism or neoimperialism, right? So the fact that surveillance as a business model is so closely tied to a Silicon Valley approach, to capitalistic ideas around data extraction/exploitation, I think we need to sort of name the fact that it’s not an us versus them. Surveillance is an endemic problem the world over.

Having said that, to what extent do backdoors that China might provide or Chinese telecom companies provide or the fact that the data is available for the mothership to view, to what extent does that—to the second part of your question—to what extent does that affect human rights? And I think that is a very, very grave danger. We’ve seen with, you know, the hacking of the African Union headquarters, entirely built and financed by China, and so many other examples that the idea of, like, ET phoning home is not, you know, something in the movies; it actually happens. So I think that risk is inherent.

And I think the other risk that's really, really important is the extent to which civil society is placed under personal physical risk even to work in this space—to advocate for inclusion, to advocate for connectivity, even just connecting coffee growers, right, when there are political interests bent on snuffing out coffee growers and handing over that land to powerful property barons. So I think the danger of data infrastructure's surveillance capabilities is very, very real, and the extent to which we adopt infrastructure while being agnostic to the social construct in which it's built, the politics in which it's embedded—I think that's really key to understanding what China's trying to do here.

JOCHAI BEN-AVIE: Absolutely.

I want to pick up, though, on the point you raised about the perception that the US and the EU are more focused on domestic concerns at the moment, and that in that void we are seeing China and other authoritarians step in. It's not just China, although China is, obviously, massive compared to anyone else. Ambassador Gitenstein, Secretary Blinken recently called China, quote, "the most serious long-term challenge to the international order." In the face of this kind of massive investment, these kinds of partnerships with apparently 180-some countries, and the resulting increase in influence, in surveillance capability, in control over the online experience of billions of people—what is the US government's approach to countering the Chinese government in the internet-infrastructure space?

MARK GITENSTEIN: Well, first of all, picking up on a point that both of you just made about the investment in Belt and Road, 146 countries I think you said? Yeah.

JOCHAI BEN-AVIE: A hundred fifty-six.

MARK GITENSTEIN: A hundred fifty-six. Well, if you took the names of those countries and laid them against the last UN vote, I’ll bet you it’s almost completely coterminous. That tells you part of the answer.

I mean, remember what that vote was on. There were two votes, actually. One was on publicly sanctioning the Russians for what they had done in Ukraine, and the other was on the Human Rights Council. The notion that Russia could sit on the Human Rights Council when they're murdering people in Bucha is appalling. And yet, many of the recipients of that money voted against us. And if you count up the population of all those countries, 60 or 70 percent of the world actually disagrees with our position on Ukraine. What does that tell you?

And you know, it’s not simply an issue of democracy versus autocracy, which my friend the president likes to use—I use it myself—but it’s rule-based order, as that great diplomat in Kenya points out. You know, if boundaries are no longer sanctified, if they’re no longer respected, any country in the world is subject to being basically taken over by the neocolonialists. And to be a little demagogic here, the real colonialists in the world right now are the Russians and the Chinese. They’re doing what the Europeans did in the nineteenth century. They’re buying countries, taking over countries. And the Belt and Road Initiative is not some—done out of the kindness of Xi’s heart; it’s done to take control of these countries and the narrative in these countries and to counter the democratic and humanitarian interests of the West and the Europeans and the United States. And it’s extremely dangerous.

And what is the United States doing? Not enough, that's all I can tell you. I would like to see the Global South be at least where Romania was when I got there in 2009, which means an independent internet infrastructure, and I don't know if that's possible right now. I think we're never going to match—you know, I've seen the numbers—the ten billion, twenty billion, thirty billion dollars spent by the Chinese in the Global South, or in Africa alone. We're not going to match that. But maybe if the private sector gets engaged and we take the issue more seriously at every mission in Africa with every US ambassador—which we're not doing right now—it would have an impact.

JOCHAI BEN-AVIE: We’re going to run out of time on this panel, unfortunately. I feel like we could talk about this for hours.

And so maybe we can close by asking what would it look like. What would—if you could wave a magic wand to really help meet the sort of funding need that exists here—as I say pretty much every day, connecting the unconnected is an incredibly capital-intensive business. And so I wonder, maybe Catalina first to you, you know, if you could wave a magic wand to help support folks like yourself who are connecting communities that the markets and governments have often sort of left behind, what kind of support do you need?

CATALINA ESCOBAR: Thank you. I think there’s—I’ve been thinking a lot about this, and I think there’s two things.

I think one is—I don't know if this is the right word in English—demystifying the access issue. Because, for example, here in Colombia people believe that everybody's connected, because, yes, the big cities and the main municipalities are connected. So I think we need to demystify this idea that everybody's connected—to talk more, to advocate more, about all the unconnected people.

The recent report from the Alliance for Affordable Internet talks about a topic that I really like, and it is meaningful connectivity. So it's not just having people connected, but connected with the adequate device, with the adequate skills, and all the capacities to really take advantage of the connectivity. In Colombia, only 26 percent of the people have meaningful connectivity. But when you go to the Ministry of ICT or you go to the telecoms, they believe that everybody's connected. So we need to talk more about the unconnected.

And the other thing that I feel that we need, of course, is funding, but funding for the fundamentals, because it seems like digital skills are not attractive enough for the—for the international community, and people think that youth are digital natives. But, yes, they could use technology, but are they using it for the right things and for the right reasons? So I think that we need to demystify a lot of things.

And the other thing about funding—and I’m going to finish with this—is that we seem to be living, like, in two worlds. So it’s like when you talk about funds, it seems like the big investors are trying to put their money in the Metaverse and in other, like, super high-technology solutions, but that is just going to be increasing the gaps because there is this whole investment around the super-high-end technologies that is needed but we can never forget the other end of the world that has no connectivity, no skills, no knowledge.

So I would summarize it, Jochai, in those two things: a lot of advocacy and funding for the fundamentals—digital skills, digital access that needs to—needs to be on the agenda again because it seems to be out of the agenda lately.

JOCHAI BEN-AVIE: Sounds like you have a lot on your wish list, Catalina, but I think that—

CATALINA ESCOBAR: Yes, I do.

JOCHAI BEN-AVIE:—demonstrates the complexity of this topic.

We’re going to run out of time in just a few seconds at this point, but, Malavika, maybe final word sort of responding to what would you like to see if you had that magic wand. What would you do?

MALAVIKA JAYARAM: I think I would like to see people in the Global South treated as equal collaborators and participants in their own futures, like you mentioned—not as victims to be saved by someone else, not as, you know, passive people, not as recipients of largesse that someone else decides somewhere far away—and actually help design the products, the solutions, the services that they need. So I think that that’s kind of biggest on my wish list.

But I think the last thing is also that we need to look at how people actually use the internet after it's been provided. We often see connectivity as a destination, and once the connection's been switched on it's like, OK, our work here is done, without actually looking at what happens—how they actually use it, where they meet roadblocks—what Jonathan Donner calls an after-access lens. I think that's really important, to see where actual problems exist so we can keep iterating and improving on them.

JOCHAI BEN-AVIE: Absolutely. And that’s why at Connect Humanity we often talk about digital equity and not just connectivity, right?

Thank you to our panel. Thank you to our hosts for a fascinating conversation. I think what we've heard is that digital equity is one of the great challenges of our time. And if we care about advancing democracy, protecting human rights, expanding economic opportunity, and defending national security, we must confront this challenge and offer viable alternatives and substantially more funding to solve it. And yet, despite virtually all democratic countries having interests affected by increasing Chinese control of the internet, the democracies of the world have largely been on the sidelines. That said, there are good models, like the SEED Act we heard about earlier today, that we can learn from and leverage as we think about how to meet this funding gap.

And it's not just a matter of pouring money into this issue. Our existing funding mechanisms and telecom models, as we've heard, won't be sufficient to connect the unconnected and achieve digital equity. And so we need to evolve the conventional understanding of what a network operator looks like—it might be coffee growers—and deliver funding in the sizes and structures that operators require to meet the needs of their underserved communities.

Thank you so much for this conversation, and looking forward to working with many of you as we work on this challenge. Thank you.

CATALINA ESCOBAR: Thank you.

Watch the full event

The post These organizations are on the front lines of internet equity—and countering China’s influence appeared first on Atlantic Council.

The war in Ukraine shows the disinformation landscape has changed. Here’s what platforms should do about it. https://www.atlanticcouncil.org/news/transcripts/the-war-in-ukraine-shows-the-disinformation-landscape-has-changed-heres-what-platforms-should-do-about-it/ Tue, 07 Jun 2022 21:11:23 +0000 https://www.atlanticcouncil.org/?p=534071 Panelists at the 360/Open Summit discuss whether information threats like malign foreign influence or disinfo-for-hire firms are becoming more diffuse.


Watch the panel

360/Open Summit: Contested Realities | Connected Futures

June 6-7, 2022

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) hosts 360/Open Summit: Contested Realities | Connected Futures in Brussels, Belgium.

Event transcript

Uncorrected transcript: Check against delivery

Speakers

David Agranovich
Director, Global Threat Disruption, Meta

Alicia Wanless
Director, Partnership for Countering Influence Operations, Carnegie Endowment for International Peace

Min Hsuan Wu (aka “Ttcat”)
Co-Founder and CEO, Doublethink Labs

Moderator

Lizza Dwoskin
Silicon Valley Correspondent, the Washington Post

LIZZA DWOSKIN: So first I’d like to introduce—right next to me, I’d like to introduce David Agranovich. He is the director of global threat disruption at Meta, the company formerly known as Facebook. So he coordinates the identification and disruption of influence-operation networks across Facebook. And prior to joining Facebook, he served as the director for intelligence at the National Security Council at the White House, where he led the US government’s efforts to address foreign interference.

Next to him, we have Alicia Wanless, who we’re meeting for the first time. Alicia is the director of the Partnership for Countering Influence Operations at the Carnegie Endowment for International Peace. She researches how people shape and are shaped by the changing information space. She conducts content and network analyses and has developed original models for identifying and analyzing digital propaganda campaigns.

And then we have Ttcat. You’ll have to tell me how to pronounce your last name. How do I pronounce your last name?

MIN HSUAN WU: My last name is Wu. It’s kind of easier. But my first name is Min Hsuan.

LIZZA DWOSKIN: Perfect. Better than I.

MIN HSUAN WU: All right, just go back to Ttcat. All right.

LIZZA DWOSKIN: Better than I. He’s the co-founder and CEO of Doublethink Lab, and they’re really at the forefront of the effort to track Chinese and Chinese-language disinformation. He’s an activist and a campaigner in a number of social movements in Taiwan, including the anti-nuclear, environmental, LGBTQ, and human rights movements, and, as I said, is at the forefront of tracking disinformation by China.

So I want to jump in and say when Graham asked me to do this panel, he said it came out of a conversation he and I had last year when I had just come back from Israel. And I was tracking a disinformation-for-hire company—not going to say the name because it’s related to a forthcoming article in the Washington Post—but this company essentially is one of many that have proliferated around the world that governments or political actors can hire if they want to run a disinformation campaign and they want to outsource it somewhere. And I called Graham because I was thinking about how much the world has changed since we first—myself and other journalists first started reporting and uncovering Russian interference in the 2016 election and the platforms’ very weak response to it; they just kind of weren’t prepared for it.

So I wanted to spend some time chatting with you guys today, you guys and gal, about how different the world looks today than the way it did, how different the defenses are, how different the attackers are, and how the landscape has changed. And then what are the responses to that changing landscape, both from governments and from platforms and from civil society?

So I want to start with the question we’re all talking about, the most pressing reality, which is the war in Ukraine, and David, ask you, how does the world look different from where you sit at Meta than it did before February 24, before the war?

DAVID AGRANOVICH: Thanks for kicking us off. I think it’s a really topical question, particularly given how much Ukraine has focused on the conversations here at the conference over the last few days.

Maybe just for a little bit of grounding, my team has been working across the company with our threat investigative teams to look for, identify, disrupt, and then build kind of resilience into our systems around influence operations for the last several years. I joined the company back in mid-2018. That effort was already underway after the 2016 elections.

And so some of the things I’ll talk about in terms of what we saw from particularly Russian influence operations around the February 24 invasion of Ukraine are predicated on the trends we’ve observed over the last four or so years of Russian activity.

And I’ll break this up maybe into three main categories: First, kind of what looks different from a preparation perspective, what looks different from a response perspective, and then what looks different from a capabilities-across-society perspective.

On the preparation piece, I think one of the biggest differences here was in the weeks leading up to the 24th of February, you saw a substantial shift in the ways that both platform companies prepared for, you know, Russia crossing the line of control in eastern Ukraine as well as the way that governments and civil society were engaging around the possibility of influence operations, disinformation, surrounding the crisis.

When I was still at the White House, I was working on the global response to the poisonings in Salisbury in the UK of Sergei Skripal and his daughter. And at the time it was really hard for governments to share information about what we thought people were going to push as disinformation narratives, and it was very difficult to kind of get ahead of what at the time felt like a very agile disinformation apparatus surrounding the Russian government.

Ahead of the 24th of February, you saw these somewhat unprecedented strategic disclosures that narrowed the operating space of Russian disinformation operators by the US government, by NATO, by the Ukrainian government and others.

On the platform side, several platform companies spent the weeks in the run-up to the 24th of February preparing for what we expected to see, what would we need to detect, refreshing our investigations into known Russian-linked disinformation operations we had previously detected. And so when the 24th rolled around, there was already this very constrained operating space. I mean, this was the response piece. And there were platforms ready to look for them, civil-society researchers… who were already out there with capacity to look for this stuff.

And so though we saw several influence operations linked to known Russia-linked disinfo networks, they didn’t seem to get much traction, either on the platform or in the broader media ecosystem. That’s not to say that there isn’t a threat there, but rather that the defenders were more prepared.

The last thing I wanted to touch on was the capabilities piece, the strategic disclosures, the preparation work, that gave us fertile ground to continue our work in kind of constraining this type of influence-operations activity. But now that we are in the post-initial-invasion phase of the operation, the war isn’t over, right. Neither is it over on the ground nor is it over in the information space.

And so I think what we’ll need to focus on is ensuring that these early victories of essentially constraining the success of some of these operations aren’t lost as kind of global attention continues to shift from issue to issue. And so that’s an area I think, I hope, we’ll have a chance to focus on a bit here.

LIZZA DWOSKIN: Right, because the world, of course, was actively debunking Russian disinformation in the beginning of the war. And there were so many—you know, the whole of the world was responding. And now that the world isn’t paying as much attention, that’s where perhaps these influence operations then can get more traction.

Alicia, what do you think?

ALICIA WANLESS: Well, I’ve been looking at problems like propaganda and disinformation since about 2014. And so the longer tale of that is that I think that the bigger change now, even since 2014 but not necessarily because of Ukraine, was a greater awareness that we have problems in an information space.

When it comes to Ukraine, I think what it’s demonstrated is a lack of a multistakeholder response, that we really didn’t have a strategy, particularly in the West, that could bridge the gap between, say, industry, civil society, and governments. And in that way they were working in their own field, their own sector. But even within each one, they tend to work in their own area and were broken up by topics. So one team over here might be working on disinformation. It might be foreign-originating. Another one might be strategic communications. Another one would be cybersecurity. And all of these things are part of the information environment.

And even within companies, they work on single-policy enforcements. They’ve got teams that do singular and different things. And those don’t necessarily come together. But then between those stakeholders, the trust between them, the languages that they’re speaking, they are not usually the same, and they haven’t really collaborated. There’s been more tension than not before the conflict.

So what we do have here is a unique opportunity, if there is a will, to build a multistakeholder response that actually helps create efficiencies in terms of how things are coming in. So, for example, what we see is maybe governments making multiple requests to companies and not coming together. Well, maybe multilateral institutions would be the better bet to do a singular briefing, but also companies providing greater information to stakeholders like civil society and the government as well in advance, to be able to get ahead of a threat.

But the key here is that we have to find standards and systems that make this safe and collaborative and that there is some sort of an outcome with lines in the sand, because ultimately this is the thing we’re missing the most, rules of engagement and a strategy.

LIZZA DWOSKIN: Well, it’s interesting, because I—you know, I saw, actually—and David and I were talking about this before the panel—that the companies were willing—at least the platforms were willing to draw a line in the sand, you know, and take a side, which is different.

But to your point, you know, you have Google that decides they’re going to ban any content that distorts real-world events. And Facebook has a different policy and they’re going to, you know, allow people to criticize Putin and potentially Russians. And, you know, they were all—you know, there wasn’t a uniform response from the companies, even though, in some ways, there was maybe more uniform response than we’ve seen in the past.

What do you think, David?

DAVID AGRANOVICH: So—

LIZZA DWOSKIN: You brought that up with me before.

DAVID AGRANOVICH: I do think that there’s coordination between kind of the threat-investigative sides of companies that’s grown out of the 2016 period. And so you saw this around elections, whether it was in the US or in the Philippines or in Brazil or in India.

But in particular, I think one of the challenges around setting these types of content-moderation policies—and I know Emerson and Katie talked about this yesterday—is in these fast-changing periods of potentially global conflicts or ethnic strife, it’s difficult and, I think, perhaps not always the best position to rely on the platform companies to be the leading indicator of where we want those lines to be drawn.

This is actually, I think, a place where civil society, where governments, particularly can lead, because these types of decisions have effects on people’s lives. And having a clear kind of norm-setting across the industry would be really useful.

LIZZA DWOSKIN: But I feel like in this case there was a war. And pretty much the whole world, civil society and the companies, were against it.

DAVID AGRANOVICH: I think that’s right.

LIZZA DWOSKIN: So is this something that—you know, we talked before about how this is unusual for platforms to draw a line in the sand like this politically.

DAVID AGRANOVICH: I think that—I mean, there’s some helpful guiding principles here. And I’d be interested in kind of Alicia’s take in particular of how we take this from just, like, platform policy to, like, strategic.

But the guiding principle is how do you protect the people who are using your platform. And in the context of people in Ukraine, right, that is how do you protect their accounts? How do you give them tools to lock their profiles down so that if the city that they’re in is taken over by an invader, they can quickly hide the information that might get them into trouble?

But it also means how do you protect, for example, dissenting voices in Russia, where talking openly about the war might result in physical-security risks or risks of imprisonment and the like? And so I think that that guiding principle, that I would argue pretty much all platforms should have—how do you protect the people who are using your platform—can help, you know, bridge some of the differences in how the platforms approach these types of problems.

LIZZA DWOSKIN: Mmm hmm. What do you think?

ALICIA WANLESS: I’m not going to comment on that specifically. But I do think that there are also other areas where it makes it painfully apparent that we aren’t really coordinated. Now, stepping aside from Ukraine, bringing Ttcat in—this is something that we talk about quite a bit—in terms of even just the research community.

So you have a very wide and diverse group of people who are working on research related to influence operations. They might be in civil society, nonprofits, think tanks. They might be academics. But all of them are almost entirely working in isolation, building up their own data pipelines that don’t necessarily get reused. And we’re talking about research that’s really engineering- and resource-heavy, and that’s extremely costly. And we haven’t really found a mechanism to come together to be able to share that type of resource, build up datasets that we can use together, and have representative samples. And this is just one example where we lack coordination.

MIN HSUAN WU: Me? All right.

ALICIA WANLESS: Yeah. Just—I’m giving you the floor.

MIN HSUAN WU: Yeah, all right.

LIZZA DWOSKIN: She’s looking at you.

MIN HSUAN WU: So thank you so much. All right. So especially when you’re talking about the data, it’s not only on Facebook or Twitter, which have granted research groups certain access to the API. We are also talking about platforms like Weibo or WeChat or TikTok or Douyin, which are even harder, right? And they are consistently changing their rules for people collecting that data.

And in fact, actually, I know that there is a business model in industry. When they collect that data, they actually exchange those datasets from company to company so they can build up more data for their clients. And we don’t have that kind of exchange mechanism in our community. So I think that’s also very hard.

Just to quote one very good Ukrainian partner here whom I just met last night: he said he found four different groups of people at this summit collecting the same dataset as them, right?

LIZZA DWOSKIN: Really?

MIN HSUAN WU: So we are all collecting data and spending money and also building up those dashboards, and we definitely need some more coordinated effort from our community.

LIZZA DWOSKIN: So that’s a really interesting idea, like creating a central repository of all information influence operations and evidence of them across the platforms that any researcher can use. Is it feasible?

MIN HSUAN WU: Yeah, but we also spend a lot of money collecting that data because we think it will help us investigate what’s happening inside those datasets in the future. But actually, we are collecting more data than we can analyze, because we are also facing a capacity issue in the analysis of this data. So, yeah.

LIZZA DWOSKIN: This is a question actually on that, which is, you know, Russia will be taken to court for war crimes. And what happens to all the—all the content that platforms deleted because they were fighting these influence operations during the war? Can that be retrieved in legal cases?

ALICIA WANLESS: I was just going to say, this here is another massive gap that we have in terms of regulation that governs how we actually deal with our modern information environment and the information within it and who actually gets to dictate that. Most laws—I’m not a lawyer, so I’m going to qualify this—tend to happen at a national level, but we don’t necessarily even have that in place right now, much less some sort of international agreement of what could happen and where. So, again, we have this massive, gaping hole that we weren’t prepared for. And yes, it takes years to build up for that.

My hope is that with something like Ukraine it’s enough of a force-multiplying factor that we come together and we’re aware of this—we’re aware of the wider information environment, the lack of guidelines that we have, the lack of norms, et cetera, and that we suddenly, hopefully, have impetus from governments to take a charge on that and do something.

DAVID AGRANOVICH: Maybe just to plus-one that, I think it’s—to Ttcat’s point, the industry’s responses to particularly the question of how do you archive and enable research are very different platform by platform, in no small part, as Alicia noted, because there hasn’t been a lot of clear guidance from regulators or democratic governments of, like, what people actually want to see and how they want that data shared and with whom they want that data shared.

Similarly, right—so my background is more on the traditional security/cybersecurity space. The law and the norms around information sharing for, like, cybersecurity threat indicators, as folks who work on the cybersecurity front know, is much clearer, right? There are vehicles explicitly designed to enable companies and research organizations to share information about cybersecurity threats. We don’t have that in any clear form whatsoever around issues like influence operations.

ALICIA WANLESS: We don’t even have it for data sharing for research purposes, although I’m really hopeful for that EDMO report.

MIN HSUAN WU: If I may—the bottom line is that those public opinions, or whatever content is pushed on your platform or other platforms, essentially are not data owned by the tech company. They are data about our own society, our countries, what’s happening there, what people are talking about there. So they should be publicly available, or at least available to research groups, so we can understand what is happening to our citizens, what they are talking about, what they are producing, right?

LIZZA DWOSKIN: You know, you’re right. It’s a societal record. But then it actually comes back to the question that, I think, sparked this panel and my conversation with Graham, which is: OK, we’re just starting to have these global frameworks and laws for cyber weapons, but disinformation and influence operations are weapons too, and there’s no framework—neither for sharing data nor for how governments should handle it. It seems like a void, and this is, you know, one example.

Do you all think that governments or international bodies should come in, therefore, and, like, mandate, for example, that platforms archive influence operations and share them publicly in uniform ways? Do you think that should be mandated by governments?

ALICIA WANLESS: Can I start?

LIZZA DWOSKIN: Yeah.

ALICIA WANLESS: I think we should start with the first step, which would be transparency or operational reporting by online services, to understand what data they even have, because influence operations and disinformation are just one part of the problem.

I mean—I’m sorry, David, I’m picking on you now—we don’t understand how the policies are developed. Well, the people who didn’t work for the companies don’t understand how the policies were developed. We don’t necessarily understand how they’re enforced or what research is happening, and then this research comes out in leaks and erodes more trust in our information environment. These things need to be rectified.

So the first step, I would say, is that governments should require operational reporting on how companies are working. It would be ideal if a number of countries came together and broadly harmonized that—maybe a place like the OECD leads on this—and that would be extremely helpful and would expedite things.

That would inform researchers on what information is available to research and also inform policymakers on how we can do regulation to actually control things and archive stuff like that.

LIZZA DWOSKIN: Then it wouldn’t be as fun for journalists because we depend on leaks.

What about—you know, we haven’t talked yet about the disinformation-for-hire industry, but it’s something, David, that Meta has actually talked a lot about in your reports—which is that it used to be, you know, that governments would do this directly. Now they’re increasingly outsourcing it.

Tell me—tell us about that world, and how does that world get regulated? What can prevent this from happening, this gray space?

DAVID AGRANOVICH: So it’s a difficult question in no small part, I think, because disinformation for hire definitionally and PR agency are not hugely different definitions, right. It’s more what those companies end up doing.

But that said, right, we—so our teams put out a report last year about surveillance for hire industry, right—your NSO Groups, your Black Cubes of the world—and one of the things that, I think, worried us the most about these surveillance companies was not only are they engaged in these egregious abuses of people’s privacy by hacking their phones, hacking their accounts, hacking their email addresses, they do so for commercial gain for any customer that’s willing to pay and in doing so they hide the people behind them, right.

Oftentimes, if you look at our surveillance for hire report you’ll notice that in almost all of those cases we weren’t really able to identify the clients. We could tell you exactly what company was providing the services but the whole business model is hiding who that ultimate client is.

Whereas if you look at our influence operations reporting going back a few years, there’s a ton of this very specific attribution to governments, to intelligence services, including some of the very sophisticated services in Russia.

And so one of the big risks around disinfo for hire is that it creates this whole industry that, essentially, just hides from all of our views, whether you’re an OSINT researcher or an investigator at a platform company, who is actually paying for it, who is driving these operations, and why are they targeting the people they’re targeting.

How do you regulate them? Some of the challenge here is that we’ve taken down a handful of disinfo for hire firms. We’ve banned them from our platform when we find them because their business model violates our policies.

But I can’t think of a single example where the people who ran the operations at the firms or where the firms themselves faced any meaningful business impact for doing so, right. Those people still work in the PR industry. The firms themselves still have very large clients all over the world.

Until there are some actual costs for engaging in this behavior beyond Facebook taking down your accounts and then trying to embarrass you in a public blog post, it’s hard to imagine that a profitable business model isn’t going to continue driving that type of PR and ad agency activity.

ALICIA WANLESS: And the politicians benefit from using it. This happens quite a lot. Maybe not the politicians, but in Taiwan—

MIN HSUAN WU: Yes. So there are lots of different tactics in the Chinese information operation model—we published a report last year. There are four different ones that people commonly notice. Those state-funded media do a lot of the propaganda, working with other media outlets. Or you see a lot of patriotic trolls or the cyber troops trolling people. But those are the easy ones, right?

But the hard one is the one you just mentioned: when they hire people actually in your society, in your country—when the people who create that content are Taiwanese, or the people who promote that content are also Taiwanese—how do you mount a defense? Or are they just people who have a different idea or a different political opinion from you within your democratic society?

So increasing the cost of those activities, and also shrinking their business or profit model, I think is essential to preventing those things. Because at the end of the day, whatever they do—for a politician, for a business, for a makeup company’s products—what they do is inject a lot of inauthentic content and opinions and pretend it’s genuine to the audience in your society. So I think there should at least be a social norm that you don’t engage with those PR firms or marketing companies that provide those services at all.

LIZZA DWOSKIN: Yeah. I’ve done some reporting on this in the Philippines, and I really felt like being a disinfo-for-hire worker is, like, a hot new job for a twenty-something in the global south. Because you can make money, you can be online, you can become an influencer. Or, if you were already an influencer, you can get paid for political sponsorships. But, yeah, from what you all are saying, it doesn’t sound like there’s really any incentive from any government to actually stop this.

ALICIA WANLESS: I would like to distinguish between the people who work in the bureaucracies and the politicians. Because my experience has been those inside the government would like to do things, and they would like to clean up the information environment and make it more reliable. Politicians don’t have the vested interest, usually.

LIZZA DWOSKIN: I want to open it up for questions in a minute. So would love to see your questions, if anyone wants to come up to the mic, or you can send a question already. Ttcat, while people are teeing up their questions, I did want to go—I did want to go back to Russia and Ukraine, because you’ve done so much research on China’s involvement in that conflict. And I wanted to ask you about how you see China walking a fine line in terms of the disinformation it will echo, and where it diverges.

MIN HSUAN WU: Right. So ever since the war started—well, back on February 22nd—our team started a special taskforce, everybody working overtime, and published a digest every day looking at how Chinese state media, influencers, and also those nationalist media outlets are pushing those narratives against Ukraine. They copy a lot of things from Russia. They translate a lot of things. And they twist whatever Zelensky says into another meaning, and push that to Chinese-speaking citizens.

First of all, I want to say two things here. One is that oftentimes when you hear something like this, it feels very exhausting, right? It’s like something far away. But actually those disinformation or propaganda campaigns in the Chinese language are not only about people in China. They’re also about the Chinese-speaking world—in Malaysia, in Singapore, in Taiwan, in Australia, in Canada, everywhere there is a diaspora community. Ask your friends what news outlets they are reading in New York, in Vancouver. All over the place they read the news on WeChat and also, you know, whatever Chinese news is available there. So, first of all, it’s not only about the people within China.

Second, think about what they have done on the war until now—it’s over one hundred days. They are pushing this narrative, dragging the Chinese audience away from Western countries, Western values. They are attacking that… Whatever they do, they are preparing the environment—the information environment. That’s exactly what Russia did in 2014. They started to demonize Ukraine and prepare that propaganda. Of course, some people don’t believe it. But that’s just right now, at one hundred days, right? How about two years later? How about four years later, when they keep pushing those narratives?

LIZZA DWOSKIN: So you’re talking about preparing for an invasion of—laying the groundwork for an invasion of Taiwan?

MIN HSUAN WU: I don’t want to jump to that conclusion, but I would say they are preparing for whatever it is they want to do, because it’s all pre-justified. They don’t need to explain to their citizens why we don’t want to help Ukraine anymore, right, or why we want to help Russia today. Yeah, because there’s already a lot of narrative and justification out there from that disinformation.

LIZZA DWOSKIN: And then you—yeah.

ALICIA WANLESS: They see the information environment as a system and have for a long time. They’re not quibbling over definitions like we are and debating this. They have a center of gravity to understand it and they have a strategy. We don’t.

LIZZA DWOSKIN: But, you know, Ttcat, when we were talking earlier, I thought it was real interesting how you said, you know, there’s so many limits to where Chinese disinformation will go in support of Russia. How you said that they will not—they will not mimic the narrative around independence in the Donbas region.

MIN HSUAN WU: Yeah, right. There’s an ecosystem, right? So there’s an ecosystem for—if you want to make a profit, I can recommend this new gig for you guys, because we have lots of White people here: make a video or a TikTok video that promotes how great China is. Then you will become an influencer. That’s how it works. So this nationalism created a huge nationalist interest and became a new business model. The Chinese government doesn’t have to pay you as an influencer. Once you follow their narrative, follow their state media, whatever they are talking about today—you open the People’s Daily, CGTN, whatever the hot topic is today, you just follow it—then you gain followers. You gain traffic. You gain profit. That’s how it works. So this whole bottom-up, decentralized network is what we’re dealing with right now in this space.

LIZZA DWOSKIN: And why is it not as profitable to be an anti-government influencer?

MIN HSUAN WU: Oh, yes. That’s a good question. So I think—we don’t have that much of it yet, but I do see a lot of people going in that direction right now in Taiwan or in other places, in the diaspora community. They also do that, but they are not as profitable as the pro-China ones, yes. I don’t know why.

LIZZA DWOSKIN: If no one else is itching to jump in, we can go to a question. So we have a question here which says: It feels like the discussion around accountability by social media platforms happens only in reference to Western companies. What leverage does the democratic world have over platforms like VKontakte, Telegram, and WeChat? Great question.

MIN HSUAN WU: Right. That’s a question I also want to ask. I don’t have the answers, yes.

LIZZA DWOSKIN: Does anyone want to take that?

ALICIA WANLESS: Well, yeah, no, it’s—I mean, it’s the same as, like, GDPR and the EU. It will apply to where that law is placed. So, I mean, the West has the same options, I wouldn’t advocate for it, that Russia has taken, and China has taken, in kicking out companies that don’t comply with the way that they’ve decided they’re going to regulate their information space. So it’s possible. It’s there. I think the emphasis for a long time has been on the major American ones because they’re there at home and they’ve taken a central role in our own information ecosystem.

DAVID AGRANOVICH: I do think one thing that can help here is—so, one of the things that we’ve been trying to do more and more of in our own analytical reporting is calling out the platforms that we see content spread to, right? I think more and more—and I imagine most of the Sherlocks in the room would agree—these operations are inherently cross-platform. And so one thing we’ve done, particularly in the operations around Ukraine, is call out the fact that we saw, you know, Facebook profiles that were designed to backstop content written on websites that were primarily amplified on VKontakte and OdnoKlassniki, for example. So in some ways, hopefully, just raising some of this awareness of how these other platforms play in the global information ecosystem, in hopes that it will then inform some of the regulatory conversations.

ALICIA WANLESS: We need to look at things as a system.

LIZZA DWOSKIN: I’m just laughing because you said that before, so. Because you believe it.

ALICIA WANLESS: Yeah, I do. I think that’s the only way we can get out of this. The information environment is like the physical environment. If we don’t start looking at the systemic, we have no way out of this. We will just constantly be reacting as we are.

LIZZA DWOSKIN: But what is systemic? You know, WeChat, they’re not going to face pressure from their government the way that the American platforms face pressure from their governments to crack down on this stuff. They’re just not.

ALICIA WANLESS: No, but they may not be operating in the environments that they are right now. I mean, they can be banned. We see that they can—things can be banned. Russia banned. China’s banned. I mean, I’m not advocating for—

LIZZA DWOSKIN: Or TikTok could be banned in the United States.

ALICIA WANLESS: Exactly. That’s what I’m saying.

LIZZA DWOSKIN: Yeah.

MIN HSUAN WU: I don’t want to pour cold water on you, but what they can do is separate off a company and promote a different version, like what TikTok and Douyin do. And actually, WeChat—Weibo also—they have an international version. So whatever you download is actually different—you probably see different stuff, or you face different content moderation standards.

ALICIA WANLESS: Yeah. TikTok US is, technically, separate, I believe.

MIN HSUAN WU: Yes.

ALICIA WANLESS: But, again, global information ecosystem.

LIZZA DWOSKIN: There was someone who raised their hand over there. Yes. I think—oh, OK.

Q: Hi. So my name is Omri Preiss. I’m the director of Alliance4Europe and also part of the DISARM Foundation. I want to thank you for the really interesting panel and also a great discussion that we had at a session yesterday.

And DISARM stands for Disinformation Analysis and Risk Management. It’s exactly the kind of framework, it’s a common language on disinformation that we’re talking about here, basically, applying cybersecurity approaches to share information. It’s based on MITRE ATT&CK, for those who are familiar, and it’s something that we’ve been working on to bring stakeholders together around how we get this off the ground in a way that really enables information flows in a way that is, you know, transparent to the community and really is able to engage, you know, those in this space.

Now, Alliance4Europe has been working on this kind of cooperation building for the last several years and what we see is that there is a reason why everyone wants to have their own thing and want to invest their resources in one specific space or one specific project.

Everyone wants to have their funding, their branding, and the right to do so. Everyone wants to have their own great idea. And so the genuine question, I think, that we face as an organization and as a community and in establishing these common resources is how do we do that in a way that is a win-win for everyone.

How do we enable everyone to have a common interest to use these tools together, to share information together, and not feel like oh, well, I just lost a bit of funding to that guy because they’re going to steal my idea, or, you know, how do I shine through?

How do we really solve this collective action problem and show everyone, like, you can buy into this forum and feel that you’re going to gain for it for your own advancement as well as advancing the community and the common cause that we have, which is to have, you know, a democracy that is safe in the digital world and being able to really communicate together?

So over to you.

LIZZA DWOSKIN: I’m going to ask one person to address that for a minute so we can get to some more questions, whoever wants to take it.

MIN HSUAN WU: I can.

ALICIA WANLESS: If you want.

MIN HSUAN WU: Well, we started our work by—we thought we wanted to build a cross-platform database so that our analysts could just put in a keyword and gather all the data from Weibo, from all these, like, China junk news sites. And it turns out, well, we did it in just a few months, then they changed. Then we kept spending the money, trying to adapt it, and it’s never done.

So I would suggest that maybe we can develop our competitive strengths in analysis or in other ways. If we had a joint resource—if we didn’t need to bother with collecting that data—we could spend our money and our time on developing an algorithm, or on training our analysts, or on building up our capacity, yeah, because we will never be better than whoever owns the data, right? Yeah.

LIZZA DWOSKIN: I see another question on the board, which is how does the model of surveillance capitalism driving major social media platforms enable the disinformation for hire industry, and what challenges do the design of the platforms pose in formulating lasting change? Which, I’m assuming, has to do with the fact that disinformation can be controversial and enraging and get clicks.

Who wants to take that for a minute?

ALICIA WANLESS: That’s a full-on research paper question. To answer in less than three minutes, I think, would be a little bit much.

I mean, I think it’s not just surveillance capitalism. It’s the role of influence in our society that we are just not having a frank conversation about. I mean, this goes beyond influence operations and disinformation to the very fundamental basis of our legitimacy.

I mean, we have influence happening everywhere to sell us things, to get us to vote for somebody, and for some reason in democracies we have not had that moment to come and really discuss how far is too far, at what point do people lose their agency, and to get to that we need to accelerate research around the impact of these things, and we’re not going to do that unless we start to pool resources and have shared engineering infrastructure, something as big as a CERN for the information environment.

LIZZA DWOSKIN: OK. You have had a question for a while.

Q: Hey, yeah. My name’s Justin, Code for Africa. We track a lot of this stuff across twenty-one countries in Africa at the moment, and you’ve hit on a lot of important points that we’ve been trying to hit on with our partners.

Disinformation’s super profitable. It’s a boom industry in places like Kenya. It’s not just disinfo for hire; there’s a whole subset of sub-economies inside there. But we’re seeing the same kind of playbooks being used everywhere from Sudan through to Ethiopia, Burkina Faso, Mali, you name it, regardless of language or audience. It’s cross-platform. Wherever possible it tries to use vernacular to avoid algorithmic detection. It’s franchise-driven—specifically in the cases that we monitor, Russian protagonists franchising out to local implementers.

What are we doing—and so I’ve got a three-part quick question. What are we doing to stop the fragmentation that’s happened, where even within the platforms the fact-checking teams and the people trying to debunk misleading information were completely separate from the threat-disruption teams? There’s this firewall between them, and we’re seeing that play out in the rest of the ecosystem now as well. Fact-checkers are not speaking to the guys who are doing, you know, the kind of work that DFRLab or others do, or that we do ourselves. So that’s the first question, because the people driving the disinformation don’t see this distinction. They’re leveraging all of that. So that’s the first one.

The second one is that the enablers who are building this wish-fulfillment infrastructure are not just the political kind of PR click-for-hire people. It’s the scams—the scam artists who are building mass audiences, almost like an Amazon delivery service for disinformation operators. What are we doing to take them down, or if not taking them down to map them out? At the moment in Africa, we’re seeing there’s a massive campaign to drive everyone on Facebook and Twitter onto dark social, specifically because enforcement’s getting better.

And then the third question was kind of slightly self-serving. Ttcat mentioned it. It’s local nuance, understanding the local ecosystem. Most of the people doing work in the space are in the North. What are we doing to support kind of in-country, in-region analysts, researchers, and the people who join the dots?

ALICIA WANLESS: I’m not sure that was so much as questions as important statements that needed to be heard because it, again, reiterates the lack of coordination, the lack of bringing all of the different bits of knowledge that we have generated together and the lack of an international, interconnected approach to this. I don’t have answers in that amount of time.

LIZZA DWOSKIN: It looks—is anyone else itching to take that?

MIN HSUAN WU: Yeah. I don’t have an answer for the others, but I echo what Justin just said about the local context. And in some regions, like the region where I’m from, I feel like we need more digital Sherlocks. We need more capacity building—training more people who understand their local context, local language, and local political context, and who can also do that analysis work.

Frankly, lots of people have asked me, do you know what information operations China does in Thailand or in the Middle East? How am I supposed to know, right? We don’t live there, and as long as we don’t have the chance to send people actually there, whatever tools or knowledge we have in this community, we will never find out what they do there.

So that’s my kind of response, or, yeah.

LIZZA DWOSKIN: Did you want to?

DAVID AGRANOVICH: Maybe just—knowing that we’re almost out of time.

So I did want to echo, I think, Alicia and Ttcat, right? A lot of those points are really important, particularly the scams piece, the fact that I think we’ve seen this growth in these kind of scam and spam actors trying to get into this business.

But the most important takeaway of those three points is the importance of enabling communities like Sherlocks all over the world, in particular people who have that ability to dive really deep in local context, understand not just what’s happening on the internet in a particular country but what’s happening on the ground.

And I know one of the priorities of the folks on my team is not just building some of the tools that I know some of the folks here are familiar with to archive and share information about influence operations; it’s also working directly with some of these teams. So hopefully we’ll have a chance, for those of you I haven’t met, to talk after this panel, because it’s something I think we really do want to do more of.

LIZZA DWOSKIN: Well, I just want to thank all of you because I learned so much from the panel. I was thinking very quickly about the theme that we—oh yeah, I want to remind everyone that you can get this content and other relevant event information, the agenda, on the DFRLabs website and also their social media account, so go check that out.

Yeah, I learned a ton. Thinking about the—going back to the beginning where I asked how is the world different from six years ago when there was the IRA infiltrating American social media companies in the US election, and now it’s like a million small IRAs with all sorts of different motives paid by different actors. And it’s really fascinating to hear the collective knowledge in this room, actually, about how to tackle this problem, so it helps my coverage a lot. So thank you so much.

Watch the full event

The post The war in Ukraine shows the disinformation landscape has changed. Here’s what platforms should do about it. appeared first on Atlantic Council.

Blinken on protecting human rights online: It’ll take ‘day-in and day-out vigilance’ https://www.atlanticcouncil.org/news/transcripts/blinken-on-protecting-human-rights-online-itll-take-day-in-and-day-out-vigilence/ Tue, 07 Jun 2022 21:10:27 +0000 https://www.atlanticcouncil.org/?p=534014 Secretary of State Antony Blinken joins Maria Ressa for a conversation on stopping democratic backsliding online at 360/Open Summit.

The post Blinken on protecting human rights online: It’ll take ‘day-in and day-out vigilance’ appeared first on Atlantic Council.

Watch the keynote

360/Open Summit: Contested Realities | Connected Futures

June 6-7, 2022

The Atlantic Council’s Digital Forensic Research Lab (DFRLab) hosts 360/Open Summit: Contested Realities | Connected Futures in Brussels, Belgium.

Event transcript

Uncorrected transcript: Check against delivery

Speakers

Robert G. Berschinski
Senior Director for Democracy and Human Rights, US National Security Council

Maria Ressa
CEO and Nobel Peace Prize Winner, Rappler

Secretary Antony J. Blinken
Secretary of State, US Department of State

ROBERT G. BERSCHINSKI: Hi, everybody. I feel like that was kind of the equivalent of what we’ve all experienced in terms of giving our speech on Zoom with the—with the mute button still on. So thanks. Thanks, Rose. Thanks, Melissa, if you’re still out there, and to everybody at RightsCon. And thanks to DFRLab for putting on this session and for the opportunity to join you to introduce Secretary of State Antony Blinken and Maria Ressa.

It’s been a real pleasure to have spent the last two days participating in discussions at the forefront of democracy and human rights. And I say that, really, with everyone here in mind, but particularly with respect to those truly on the frontlines who have felt the impact for the struggle for human rights and democracy in deeply personal ways. These are women like… Carine Kanimba, someone I had the chance to speak in depth with a couple of nights ago, and also the woman that we’ll hear from shortly, Maria Ressa.

Before we turn to that interview, I want to take a few moments to reflect on what President Biden and so many of you both in the room and at home know is a key challenge of our time: demonstrating that democracy, rather than autocracy, is best poised to deliver for its citizens. In December, as I hope most in the room know, President Biden hosted one hundred governmental leaders, democratic opposition figures, activists, and business and civil society leaders from around the world in what we termed the first Summit for Democracy. Both Secretary Blinken and Maria Ressa spoke at the summit on a panel focused on media freedom and sustainability, and that issue alone reflects the ramifications that technology has had on the world around us.

A free media is, of course, the bedrock of pluralistic discourse and a healthy democratic society. But the digital age, as many of you also know, has fundamentally altered the business model that has sustained and enabled independent journalism for decades. One recent study suggests that the move to digital advertising alone eliminated nearly $24 billion in annual advertising revenue for public-interest media between 2017 and last year, 2021. The economic vulnerability of the media has resulted in its capture and closure around the world. And this trend has, of course, been further compounded by governments who seek to silence critical voices through internet shutdowns, censorship, digital harassment, and political and regulatory pressure that incentivizes acquiescence or leads to media capture. At the same time, digital technologies have enabled individuals, groups, and governments to create, disseminate, and amplify manipulated information for their own political, ideological, and commercial interests.

So now we’re at a point in time where the costs of producing high-quality journalism are high while the costs of disseminating false information and silencing critical voices, like the one we’ll hear from shortly, are relatively low. And communities around the world are being impacted by this every day—not least in the United States, where an estimated quarter of newspapers have closed in just the last fifteen years. And that means fewer local trusted voices informing our debate.

So all of us joining in the 360/OS and in RightsCon are keenly aware of the human-rights impacts of this and other technology-enabled challenges. And while this could be a moment of despair, the breadth of debate, discussion, and participation at events like this reflects another new trend, one where governments and activists and companies are increasingly working together trying to break down their silos to productively design for and mitigate the risks from new technologies. And we know authoritarian governments and other actors will continue to develop and abuse technologies for their own political and financial benefit. We know they seek to rewrite the rules of the international system and the norms that govern technology.

So that’s why the Biden administration is driving an agenda in which critical and emerging technologies work for and not against democratic societies. To give one example, two months ago the United States launched, with sixty of our partners around the world, the Declaration for the Future of the Internet. This is a political commitment among declaration partners to advance a positive vision for the internet and digital technologies.

We’re backing our political commitment with expanded investments to support internet freedom as well as digital safety and security for targeted groups while improving cybersecurity, and in parallel, under the auspices of the Summit for Democracy, we’ve launched hundreds of millions of new dollars in programming to expand our support for free and independent media, to fight corruption, to bolster democratic reformers, and defend free and fair election processes, and in the wake of Russia’s aggression against Ukraine we further expanded our investments in Europe and Eurasia in these thematic areas.

We’re also working to more effectively hold to account those who abuse technology to unlawfully surveille and harass human rights defenders, journalists, and opposition leaders, just as Melissa was mentioning in the intro in terms of the discussion at RightsCon.

Yesterday, panelists stood on this stage and detailed harrowing accounts of being targeted via commercial spyware technology among other forms of what we in the US government are increasingly referring to as transnational repression.

The United States views the unlawful or inappropriate use of this technology as a national security issue. So in October of last year, we updated our export control rules governing items that can be used for malicious cyber activities, and then in November we added four foreign companies, including but not limited to NSO Group, to the Department of Commerce’s Entity List, based on evidence that these firms developed and supplied spyware to foreign governments that then used the tools provided to maliciously target government officials, journalists, business people, activists, and embassy workers, and we intend to do much more in this space using all the tools at our disposal.

At the same time, we’re placing renewed emphasis on supporting multistakeholder initiatives like the Freedom Online Coalition and the OECD’s work on reinforcing democracy.

Just over one year ago, we joined the Christchurch Call to eliminate terrorist and violent extremist content online, and then in November we announced our support for the Paris Call for Trust and Security in Cyberspace. And we’re working also with key allies and partners on new initiatives like the global partnership for action to end online harassment and abuse, and, as those here in Brussels know well, the US-EU Trade and Technology Council.

Yet, we know that no single commitment, program, or action is going to resolve all of the challenges that we’ve been discussing over the course of the last few days and that we’ll hear momentarily from the US Secretary of State.

Russia’s aggression in Ukraine underscores the importance of taking a holistic approach to continuing threats to democracy diplomatically, militarily, economically, and in the information realm. But by working together, by doing exactly what all of you are here doing today, governments, advocates, researchers in the private sector together across disciplines, regions, and responsibilities, we can and we are driving change that’s going to prove to be asymmetrically advantageous for democracies.

We’re pursuing efforts to close the gap in digital access and driving innovation in ways that are going to foster inclusion, equity, and accountability, and support human rights rather than undermining them.

So, momentarily, Secretary Blinken will provide more on the breadth of efforts that the US is taking to advance this agenda in his interview with Maria Ressa. Maria and her team at Rappler and so many other journalists, human rights defenders, and activists, including many of you here in Brussels and online, have demonstrated courage and commitment against a global tide of democratic backsliding.

So with that, I’m very pleased to announce a woman who epitomizes courage and conviction, Nobel Peace Prize-winning journalist Maria Ressa, in conversation with the US Secretary of State Tony Blinken.

Thank you.

MARIA RESSA: Hello, everyone. Thank you so much for joining us. I am Maria Ressa from the Philippines. What an honor to have US Secretary of State Antony Blinken with us today at a crucial moment for all of us working for a better digital rights world.

Secretary Blinken, thank you for joining us.

ANTONY J. BLINKEN: Maria, great to be with you and great to be with everyone.

This is really a pleasure for me. I’m thrilled to be hosted by RightsCon, to be talking to you. I want to say greetings to everyone from the 360/Open Summit and from around the world who is, in one way or another, logged on, tuned in, and joining this conversation.

You know, it’s so important from our perspective that the United States, likeminded governments—but especially with civil society, with nongovernmental organizations, with think tanks, with the private sector—work to protect human rights online, work to demonstrate that our democracies can deliver for people as we navigate this extraordinary digital transformation that is having an impact on the lives of virtually everyone on this planet.

One thing I wanted to say at the outset, before we get into the conversation, is I am very pleased to announce that for the first time the United States will become chair of the Freedom Online Coalition in 2023. We want to strengthen the coalition. We want to bring more members on board. We want to make it a center of action for ensuring a free and open digital future. And this in part is going to be building on Canada’s terrific work as the current chair and really trying to carry it forward. So I’m really pleased to do that, to be able to announce that.

And Maria, it’s great to be with you. You have been, you are, an extraordinarily courageous champion of freedom of speech, freedom of press and media, and freedom for a digital future that we all want and we hope to build together. So thank you for being willing to have this conversation today.

MARIA RESSA: Well, you know, that’s really great to hear from you, Mr. Secretary, exactly at this moment in time when, you know, it seems at times hopeless. And you never want to be hopeless, right.

So let me ask you—you’ve been very outspoken about the way digital authoritarians have used tech to abuse human rights; you know, a growing trend that leaves people like us on the front lines increasingly defenseless. I mean, what have you seen globally? And what can you do about it?

ANTONY J. BLINKEN: So you’re right. Unfortunately, that’s exactly what we’re seeing.

Look, I think, as in so many ways, when we saw the emergence of a lot of this technology, starting mostly in the 1990s, the early 2000s, I think there was great hope that it would be inexorably a force for openness, transparency, freedom. And, of course, in many ways it is. But we’re also seeing, of course, the abuse of this technology in various ways, including by repressive governments trying to control populations, to stifle dissent, to surveil and censor. We see that, of course, in the PRC with technology being used, for example, for mass surveillance, including of the Uyghurs and other minorities.

So the question is, what is to be done? What do we do about it? And there are a number of things that we need to do and, in fact, that we are doing.

One is to start by calling things out. That’s often the basis for everything. We have to call out the abusive technology, including digital authoritarianism.

Second, as I mentioned, we’re going to be taking on the chairmanship of the Freedom Online Coalition. We’re working to strengthen it. And this is an important vehicle to try to protect and advance internet freedom and to push back against digital authoritarianism.

Very practically speaking, there are a number of things that we—countries, nongovernmental organizations, and others—are doing to, for example, get anti-censorship technology into the hands of people who need it so that they have the tools to push back against the misuse of technology in an authoritarian way. We set up a multinational fund to do that at the Summit for Democracy that we hosted last year; and then, for example, putting export controls on surveillance technology to make sure that technology that we and other countries are producing that could have a dual use and be misused for the surveillance of populations, that doesn’t get into the wrong hands.

That takes working together. One country alone can’t do it. And, in fact, governments alone can’t effectively do it. We need to build these coalitions to make sure that we identify where technology should not go because it’s being misused, and then we’re together to make sure that it doesn’t get there.

MARIA RESSA: Yeah, I agree with working together. Mr. Secretary, you know that early on I said that the tech platforms that took control—that took over the gatekeeping role from journalists—abdicated responsibility for protecting the public sphere. And in some ways it’s taken so long to get government regulations that, in a way, governments have also abdicated responsibility. We’re just starting to see the beginning of this rollout in the spring from the EU, right.

And yet we know the impact of disinformation. In the Philippines we have seen disinformation repeatedly change our history. It’s that Milan Kundera quote, you know: the struggle of man against power is the struggle of memory against forgetting. Well, we’ve forgotten really quickly. And disinformation is being used to manipulate our biology.

Where do you see—what can you do about this? And how do we fight back, given that there are more than thirty elections this year? And you can’t have integrity of elections if you don’t have integrity of facts.

ANTONY J. BLINKEN: Couldn’t agree with you more. And, you know, this has been one of the other changes that we thought was going to be totally for the good, but, of course, that hasn’t been the case.

In the United States a few decades ago, information that most people used in their daily lives, there was a common foundation, because there were actually a fairly limited number of sources of the information that people got. We had three television networks back then. We didn’t have cable. We didn’t have an internet. We didn’t have talk radio, et cetera, et cetera. And the hope, of course, was that the democratization of information would be a good thing overall. And fundamentally I believe that’s still the case.

But as a result of this, as a result of this disaggregation, you’ve lost exactly what you said, which are sort of the trusted mediators who can make sure that information, to the greatest extent possible, is actually backed up by the facts. And at the same time, the technology itself has allowed the abuse and the spreading of misinformation and disinformation in ways that we probably didn’t fully anticipate or imagine.

So we see authoritarian governments using this. We see it, for example, right now in the Russian aggression against Ukraine. We saw it in 2014 when Russia initially went at Ukraine and was using information as a weapon of war. So in that particular instance and in this instance, we’ve actually reversed this on them, precisely by using information, real information, to call out what we saw them preparing and working to do.

And being able to do that and to bring to the world everything that we were seeing about the planned Russian aggression, and to lay out exactly the steps they were likely to take, and which unfortunately they did, I think, has done a profound service to making sure that credible information is what carries the day and disinformation is undermined.

But there are a number of things that we can here again and we are doing to combat the misuse of information. Again, we start by exposing it. And we start by sharing the information that we have, working with others, again, in a coordinated way. We have at the State Department something called the Global Engagement Center, which is focused intensely on finding, exposing disinformation, the techniques that are used by those who are propagating it, and in a coordinated way, working with other countries, pushing back on it and giving people the tools to do it.

It’s critical for us that we also build the capacity of partners around the world, both governments but also journalists, nongovernmental organizations, civil society. There are a number of things that we’re doing. We have initiatives to help give people fact-checking tools to make sure that the information that they’re getting actually is backed up by the facts and to show when it’s not.

Digital literacy training, which is so critical to understanding what people are consuming and being able to separate the wheat from the chaff, the true from the misinformation and disinformation.

Bolstering independent media. This is so critical. The single best check and balance against misinformation and disinformation is an effective, independent media. And we have initiatives to do that, including, as appropriate, financing and other things.

We see that there’s a deliberate attack to take down independent media, to take down nongovernmental organizations that are operating in this space. So we’re putting in place protections. For example, countries actually try to use legal means—or I should say legal in quotation marks—legal means through lawsuits, as you know very well—

MARIA RESSA: Yes.

ANTONY J. BLINKEN:—and through regulatory challenge. Well, we’re putting in place programs, funding, to enable people, institutions, media organizations, to actually push back on that. All of these things together are part of what we need to do.

And finally, it’s so critical that we and you, this entire community, work with the platforms to find ways to more effectively ensure that they’re not being abused and used as a means of propagating misinformation, disinformation. Of course, it’s primarily on the platforms themselves to take the steps necessary to push back against that. I hope very much that we can continue to do that in a collaborative fashion. And sharing the information, what we’re seeing, for example, with the platforms, we’ve found that when we’ve been able to point them to malicious actors using the platforms in abusive ways, they’ve been responsive in making sure those actors can’t do it. But of course, it’s a moving target. And for every bad actor that you take off, maybe it comes back under another guise or something else pops up. So we have to be vigilant. We have to be relentlessly focused on this. And I hope that we can do this in a cooperative or collaborative way.

MARIA RESSA: Well, that’s certainly what we’ve been trying to do. But what we’ve seen in the last—you mentioned 2014 until now, right? The disinformation, the splintered reality that allowed Russia to annex Crimea, and then eight years later to invade Ukraine, those meta-narratives were seeded, the platforms were told about it, not much was done. And the question, of course, is would we be at this place if more was done, right?

But I guess this goes to the last—the crucial question, which is: We have had impunity in the virtual world. And that impunity—you have a one-thousand-page document from the Senate that outlines what Russian disinformation did in 2016 in the United States. That impunity has filtered into the real world, and really severed the checks and balances that are there. I guess the—and here, to quote Shoshana Zuboff, where she just says: We live in one world.

And if you don’t have rule of law in the virtual world, you know, how can you have rule of law in the real world? And this goes back to: What is your democratic vision? I think that’s what’s been missing, is that we don’t have a democratic vision for the twenty-first century with this technology that we have. What is it that you have?

ANTONY J. BLINKEN: Yeah, Maria, I think you’re exactly right. And first, let me say, look, we’ve been awoken to this challenge over the last years. And I think for me it certainly started particularly in 2014 with the initial Russian aggression against Ukraine, and the use of misinformation and disinformation as a weapon of war, as critical to their campaign. And then, of course, we saw the interference in our elections. And all of that has created a—I think, an increasingly greater consciousness of the challenge, and the need to do something about it.

But doing something about it starts with exactly what you said, which is advancing a positive vision, an affirmative vision of what this future should look like. A vision of an open, free, global, interoperable, secure, reliable internet. One of the ways we’ve done that is with this declaration for the future of the internet that now some sixty countries have joined onto, that actually lays out what this positive vision is. We’re working in concrete ways, though, not just to put out the vision, but to realize it.

MARIA RESSA: So what are the concrete steps that you’re taking?

ANTONY J. BLINKEN: So much of the work that we’re doing is to make sure that we, and other likeminded countries, are at the table when so many of the rules and norms that are going to shape the future of the internet are being decided. And we’re doing that in a variety of ways. We’ve come together with the European Union through something we’ve stood up called the Trade and Technology Council, to make sure that we’re working together to advance these different norms and standards. There’s growing convergence between the United States and the European Union on this vision for the future. Now we put that in practice by bringing our combined weight together everywhere these rules and norms are being shaped.

We’re making sure that we’re investing in our own capacity to do that. Here at the State Department over just six months we stood up a new bureau for cyberspace and digital policy. We will soon have a senior envoy to deal with emerging technologies to make sure that to the extent values are infused in technology, they’ll be liberal values not illiberal ones, and making sure that technology is used for the good and to advance democracy, not to—not to undermine it.

We’ve been working to make sure that after last year’s Summit for Democracy we make this year a year for action in terms of implementing many of the concrete initiatives that were announced at the summit, including some that I mentioned a short while ago in terms of supporting independent media, giving people the tools they need to combat censorship, making sure that journalists and other organizations under siege can fight back and have the tools and the means to do so.

We, as I mentioned, have initiated a declaration for the future of the internet with sixty countries so far, making sure that we’re all aligned in a shared vision and trying to advance it. And finally, the institutions that are actually doing this work, that are deciding how all the technology that we share is being used, it’s hugely important that people who share this vision, share these values, are running these institutions.

There’s a hugely important election for the head of the International Telecommunications Union coming up. And the candidate we support, Doreen Bogdan-Martin, is someone of vision and of value who can help advance this shared perspective that we have. So it’s one of those—one of those things where probably 99.999 percent of people have no idea what the ITU is or how important this election is, but we’re very focused on it, and making sure that someone with a shared vision can drive this forward.

Last thing I’ll say, Maria, is this: I think everyone present today is at the heart of this effort. Civil society, nongovernmental organizations, the private sector, independent media working together, holding governments to account, and then ideally all of us joining forces. When you put all that together, it’s a very powerful force, and it’s one that I’m convinced can carry the day in making sure that the future of technology and the future of the internet is one that actually advances freedom, that advances democratic principles, and that makes sure that together we can build a future that reflects the values that we share.

So the work that every single one of you is doing in ways big and small, that’s what really counts. And I’m just pleased for the opportunity to spend a few minutes talking about how we see it, how we think about it. Especially, Maria, with you. So thank you.

MARIA RESSA: No, thank you so much, Secretary Blinken. Can I quick—just one quick question, because you—

ANTONY J. BLINKEN: Of course.

MARIA RESSA: So you mentioned leaning in. Sheryl Sandberg just said that she would be leaving Meta this—at the end of this year. These are American companies that did have values that were infused into their design—and, again, probably not by their design—but encouraged the death of democracies in many parts of the world. In Norway just last week, I kind of thought the next two years will be critical for the survival of democracy. And there were people from Kyiv, from Ukraine who really said that they received the most help from ordinary people. You’ve just asked us all to work together. I guess, you know, is there a timetable? You know, long term, yes, education. Medium term, yes, laws. In the short term, how can we stop what Anne Applebaum called Autocracy, Inc. from taking over in this period of chaos?

ANTONY J. BLINKEN: Maria, I think we all have to be seized with the fierce urgency of now. And, yes, many of the things that we’re talking about will play out over time. Much of this is not flipping a light switch or turning on or off a computer. It does take time. But if we bring to it together a sense of—a sense of urgency and a sense of determination, that’s hugely important. And if this entire community is galvanized, I think we can make—we can make a real difference. But that requires day-in and day-out vigilance. It requires day-in, day-out action. And I think what we’ll see, if we—if we do it right and do it in a sustained way, is you take a step and you look and it doesn’t look like you’ve traveled very far. But my hope and expectation is that over the next few years we will take many steps together and we’ll actually recognize that we traveled a great distance.

The hard reality that we face, and it’s a cliché but it’s profoundly true, technology itself isn’t inherently good or bad. How it’s used determines whether it’s for the good or for the bad. And if we marshal all of our forces together, I think we carry a great weight into this fight to make sure, to the best of our ability, that technology is used for the good. That it’s used to advance a more open, more free, more democratic world. And that it’s not misused and abused to undermine those basic principles. But I think we have to have exactly what you’ve said, a real sense of urgency about that, a real sense of vigilance, a determination to call out misuse and abuse, a determination on the part of nongovernmental organizations and civil society to hold governments and hold the private sector to account. And I’m—I remain optimistic that, marshalling all of these forces together with that sense of urgency, we can make a difference and we can shape a future that is more open, more tolerant, and actually supports and defends freedom and democracy and doesn’t undermine it. That’s the objective.

But look, we have to show, all of us in different ways, that we can actually deliver on this. So I recognize declarations are good. Calling things out are good. But what really counts is action that makes a change, action that deals with the problem. None of that is easy, but we’re determined to do it and we’re determined to do it together.

MARIA RESSA: Fantastic. Thank you so much for your time, Secretary Blinken.

ANTONY J. BLINKEN: Thanks, Maria.

Watch the full event

The post Blinken on protecting human rights online: It’ll take ‘day-in and day-out vigilance’ appeared first on Atlantic Council.

Spyware like Pegasus is a warning: Digital authoritarianism can happen in democracies, too https://www.atlanticcouncil.org/blogs/new-atlanticist/spyware-like-pegasus-is-a-warning-digital-authoritarianism-can-happen-in-democracies-too/ Mon, 06 Jun 2022 20:21:37 +0000 https://www.atlanticcouncil.org/?p=533539 Journalists and citizens targeted by spyware warn the audience at the Digital Forensic Research Lab's 360/Open Summit about the proliferation of state-sponsored digital surveillance.

The post Spyware like Pegasus is a warning: Digital authoritarianism can happen in democracies, too appeared first on Atlantic Council.


When Szabolcs Panyi discovered he had been targeted by Pegasus spyware, his reaction was understandable: “Well, I freaked out,” the Hungarian journalist said, as he was in the middle of investigating the powerful, Russian-controlled International Investment Bank. He wondered why he had been targeted and how the malware had been installed. “What’s going to happen to my sources?”

For Panyi and many other journalists in Hungary, it was the first direct evidence of something they had long suspected: that they were being watched by the Viktor Orbán government in Budapest. And they weren’t alone, as was revealed by an extensive coordinated global investigation by journalists and nonprofits.

Carine Kanimba—a dual US-Belgian citizen working to free her father, Paul Rusesabagina, the imprisoned Rwandan activist who inspired the film Hotel Rwanda—was one of the fifty thousand phone surveillance targets revealed in the investigation. Studying the data, Kanimba and Amnesty International discovered that the software had been active during a meeting she had with the Belgian foreign minister. “From the moment I walked in to the moment I walked out, the software was active—not only spying on me, but spying on the [Belgian] government and the other officials I’m interacting with to free my father.”

Kanimba and Panyi spoke Monday at a panel discussion on “Digital Authoritarianism on the Open Market,” hosted by the Atlantic Council’s Digital Forensic Research Lab at this year’s 360/Open Summit in Brussels. Here are some more key takeaways from the conversation. 

The lay of the land in the shadows

  • It’s not just governments known for privacy abuses that are using digital surveillance tools like Pegasus, warned panel moderator Miranda Patrucic, the deputy editor in chief for regional stories and Central Asia for the Organized Crime and Corruption Reporting Project: “These tools are open for misuses, not just by different authoritarian governments, but also by democracies worldwide.”
  • State-sponsored digital surveillance is not a new industry, added Donncha Ó Cearbhaill, acting head of Security Lab at Amnesty International. For example, such surveillance tools were used against civil society during the early days of the Arab Spring, and the National Security Agency carried out an illegal spying program in the United States. 
  • Some spy tech programs have been successfully exposed. Milan-based Area SpA was raided by Italian authorities in 2016 after being accused of working with Syria. Munich-based FinFisher was raided by German authorities in 2020 after its tech was used by the Turkish government and others, and the company has since shut down. 
  • While the tech changes, the targets often stay the same: “The same individuals the states see as a threat are being targeted again and again, by new companies and new software that is getting more and more sophisticated over time,” Ó Cearbhaill said.

What is being done?

  • Panyi and other Hungarian journalists are taking legal action to discover why they were targeted, as well as suing Israeli government officials for approving the sale of Pegasus to Hungary, given its record of cracking down on the media. While he’s not confident they will succeed, Panyi says the goal is to spread awareness. “If a relatively unknown journalist from a small country can become a target, you can imagine what can happen to others,” he said. 
  • France and Israel opened investigations into the NSO Group after the Pegasus Project was published, while the US Department of Commerce added the NSO Group to its Entity List for trade restrictions. Companies responded, too, with Apple suing the NSO Group and Amazon Web Services shutting down infrastructure and accounts linked to the company. WhatsApp now sends notifications to those who may have been exposed to Pegasus software, which has led to new spyware cases discovered in Jordan and El Salvador. “Activists, journalists, we have power. We were able to make a difference, even with tools like that, and obviously we need more support, and more action,” Patrucic said.
  • Still, direct policy action has been limited, outside of a European Union parliamentary inquiry. “Several states, while they are critical of activists in their own countries getting targeted, they have so far been reluctant to put in meaningful regulation on these tools because they also benefit from an open system where they can apply these tools without much transparency,” Ó Cearbhaill said, adding that better export controls would help states and the public track the use of surveillance tech as it is sold across borders.

Trying to protect against surveillance

  • All of the panel speakers have adjusted their behaviors since discovering the surveillance. Kanimba got rid of her surveilled phone only to find tracking on newer devices, too. That’s led her family to somewhat drastic measures when talking about sensitive topics. “Since everyone is frightened, we put all our phones in the microwave,” she said. “I don’t know that it works, but at least it makes everyone feel safe. But unfortunately, there’s no way until there is more work done by our governments.”
  • Panyi has changed the ways he works with sensitive information, especially as his team prepares legal actions, which could be ruined if Hungarian intelligence hacked their communications and shared them with Israel. Teaming with Forbidden Stories, Amnesty International, and large international outlets like the Guardian and the Washington Post gave smaller newsrooms like his some tech and legal cover, Panyi said: “With the PR firm employed by NSO group, or the legal threats, you can imagine what kind of power and money that involves.”
  • Source protection has taken on even greater importance to journalists. Panyi relies more often on “old-school methods” to schedule meetings and gather information, such as using code words and conducting interviews in public spaces. “I’m pretty sure that as technology develops, there are going to be new Pegasuses, but if you just leave your phone behind… I think, relatively, you should be fine.” 

Nick Fouriezos is an Atlanta-based writer with bylines from every US state and six continents. Follow him on Twitter @nick4iezos.

Watch the panel

Eftimiades in UBAA Global on Chinese espionage focused on US industries https://www.atlanticcouncil.org/insight-impact/in-the-news/eftimiades-in-ubaa-global-on-chinese-espionage-focused-on-us-industries/ Mon, 06 Jun 2022 18:22:00 +0000 https://www.atlanticcouncil.org/?p=534666 Forward Defense nonresident senior fellow Nicholas Eftimiades discusses Chinese espionage and the ineffective US response.

The post Eftimiades in UBAA Global on Chinese espionage focused on US industries appeared first on Atlantic Council.


On June 6, Forward Defense nonresident senior fellow Nicholas Eftimiades was quoted in UBAA Global on the US government’s inability to counter Chinese espionage. 

The U.S. government is not well-structured to counter Chinese espionage efforts.

Nicholas Eftimiades
Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

The Scowcroft Center for Strategy and Security works to develop sustainable, nonpartisan strategies to address the most important security challenges facing the United States and the world.

European VP Schinas: ‘We have never been closer with our American friends and partners as we are now’ https://www.atlanticcouncil.org/commentary/event-recap/european-vp-schinas-we-have-never-been-closer-with-our-american-friends-and-partners-as-we-are-now/ Fri, 03 Jun 2022 21:17:30 +0000 https://www.atlanticcouncil.org/?p=532862 On June 2, 2022, Margaritis Schinas, vice-president of the European Commission, joined Frances Burwell, distinguished fellow at the Atlantic Council’s Europe Center, for a conversation on transatlantic relations through the prism of Russia’s aggression against Ukraine, highlighting key aspects of the vice president’s portfolio, promoting the European way of life, including strengthening resilience in critical […]

The post European VP Schinas: ‘We have never been closer with our American friends and partners as we are now’ appeared first on Atlantic Council.

On June 2, 2022, Margaritis Schinas, vice-president of the European Commission, joined Frances Burwell, distinguished fellow at the Atlantic Council’s Europe Center, for a conversation on transatlantic relations through the prism of Russia’s aggression against Ukraine, highlighting key aspects of the vice president’s portfolio, promoting the European way of life, including strengthening resilience in critical sectors, cybersecurity, and migration.

The next phase for cybersecurity

“The era of European naivety about cybersecurity is over . . . we’re building a cyber shield at different levels.”

Margaritis Schinas
  • The vice-president noted that the European Union (EU) is changing its regulatory framework while at the same time building capacity. Specifically, he mentioned that the Commission has proposed two complementary directives: the Network and Information Systems Directive and the Critical Infrastructure Directive. In addition to regulatory changes, the Commission has set up EU expert teams to protect the institutions themselves and to work with member states on cybersecurity.
  • The vice-president outlined two concerns facing the EU: member states’ lack of trust in the institutions, and the lack of a talent pool with sufficient domain expertise.
  • Lastly, the vice-president applauded the efforts of Ukraine not only for its military operations but also its cybersecurity efforts, with successes achieved with the help of the EU and the United States.

New challenges for migration and asylum

“The French presidency is working very actively now on a draft agreement on the bulk of the pact asylum proposals. And I’m very confident that they will make it. But I also understand that the situation we’re having now, and with the Ukrainian [refugees] in the European Union, is also a de facto accelerator.”

Margaritis Schinas
  • With almost six million Ukrainian refugees having escaped to other European countries, the vice-president elaborated on the blanket protections that offer access to the job market, education and health systems, and residence permits across the EU. He mentioned that the EU is currently working to create a system of validating the qualifications of Ukrainian refugees in Europe.
  • When asked about the different approach to the refugee crisis now and in 2013, the vice-president noted that geography and timing account for it, and reiterated that the principles of asylum in the EU remain the same.
  • Lastly, he mentioned that the Commission is working on a proposal for legal migration, with member states being ready to move forward with the idea once there is an agreement on the EU Migration Pact.

Where US-EU cooperation is headed

“Of course, the issues of international crime, migration, [and] cyber are at the heart of our cooperation, but we proposed, two weeks ago, a new initiative to fight child sexual abuse online, which is necessary, on which we would need the cooperation of the US government, but also the platforms—the companies.”

Margaritis Schinas
  • While in Washington, the vice-president is meeting with US Secretary of Homeland Security Alejandro Mayorkas to discuss transatlantic cooperation on various issues with a special focus on this new initiative to fight child sexual abuse. The EU will need the support of the digital platforms for this effort, and the technology companies seem open to the idea. 
  • This issue is especially important to the EU since 70 percent of such content is hosted on EU servers. The challenge, however, is difficult to address since this is the first time the Commission is venturing into issues of privacy and encryption following passage of EU privacy laws. He underlined that a transatlantic approach to this issue is crucial.

Watch the full event

Europe Center

Providing expertise and building communities to promote transatlantic leadership and a strong Europe in turbulent times.

The Europe Center promotes the transatlantic leadership and strategies required to ensure a strong Europe.

Now is the right time to launch a Digital Marshall Plan for Ukraine  https://www.atlanticcouncil.org/blogs/ukrainealert/now-is-the-right-time-to-launch-a-digital-marshall-plan-for-ukraine/ Mon, 30 May 2022 12:02:27 +0000 https://www.atlanticcouncil.org/?p=530449 As the world explores the challenges of rebuilding Ukraine, one smart option may be to initiate a Digital Marshall Plan that will play to Ukraine's existing tech strengths while securing the country's modernization.

The post Now is the right time to launch a Digital Marshall Plan for Ukraine  appeared first on Atlantic Council.

The Russo-Ukrainian War is now in its fourth month. While there is currently no end in sight to the carnage, discussions are already underway over what kind of Ukraine should emerge during the post-war period.

The war unleashed by Vladimir Putin on February 24 is widely acknowledged as the largest and most destructive European conflict since WWII. Tens of thousands of Ukrainians have been killed during the first three months of the invasion, while the damage done by Putin’s forces has been estimated at hundreds of billions of dollars. Whole cities have already been destroyed, while Russia’s heavy reliance on airstrikes and artillery bombardments means the tragic toll will continue to rise.  

Clearly, rebuilding Ukraine will be a Herculean task requiring unprecedented financing and the full participation of the international community. It is also vital that plans for the new Ukraine should reflect the country’s immediate needs and competitive advantages. This is why it makes sense to begin work without delay on a Digital Marshall Plan that will harness Ukraine’s tech excellence and enable Ukrainians to continue the important progress made in recent years.

The war is being fought on multiple fronts, including the informational and digital spaces. While soldiers defend Ukraine on the battlefield, the country’s Ministry of Digital Transformation and the Ukrainian IT community are developing a sustainable digital rear. 

From the first days of the war, the Ministry has been working with other state bodies to actively increase Ukraine’s digital resilience. These efforts have included creating a layered system of cyber defense for state IT infrastructure and adapting public e-services.

Digital diplomacy has become a critically important field of activity for the Ministry. Minister of Digital Transformation Mykhailo Fedorov has appealed to hundreds of technology companies, asking them to join the technological blockade of Russia, stop paying taxes into the Russian budget, and develop a presence in Ukraine instead.

An enormous amount of work has also been done to improve the digital defense capability of the Ukrainian state. In cooperation with the Ukrainian blockchain community, the Ministry of Digital Transformation launched a large-scale fundraising campaign gathering crypto donations for ammunition purchases. The Ministry of Digital Transformation and Ukrainian IT companies have launched a number of specific projects to protect civilians such as the Air Alarm App, while also supporting refugees and those living in Russian-occupied regions of the country.


Anyone familiar with the pre-war structure of the Ukrainian economy will not be surprised by the prominent role of the Ministry of Digital Transformation and the country’s IT industry as a whole in the current conflict. President Zelenskyy came to power in spring 2019 promising a digital transformation. As soon as he took office, he began implementing the “Country in a Smartphone” program.

During the pre-war years of Zelenskyy’s presidency, hundreds of public services were digitalized. The Diya smartphone application was central to these efforts and became the main personal ID for millions of Ukrainians. By digitizing government services, the authorities were able to simplify bureaucratic processes and dramatically reduce the scope for corruption within state agencies.

It is important to note that the digital transformation of Ukraine has never relied on the purchase of imported solutions. Instead, it has been based almost exclusively on the tailored work of Ukrainian IT engineers. This is only natural given the remarkable rise of the Ukrainian IT sector over the past few decades.  

The Ukrainian IT industry has been the main driver of rising export revenues for a number of years. In 2021, Ukrainian IT exports grew 36% year-on-year to total USD 6.8 billion, representing 10% of the country’s total exports. Meanwhile, the number of Ukrainians employed in the IT industry increased from 200,000 to 250,000. This growth was set to accelerate further in 2022 until the war intervened.

According to current World Bank forecasts, Ukraine’s GDP in 2022 will fall by more than 45%. Depending on the course of the war, this figure could rise significantly. Ukraine’s Western partners are well aware of the need to keep the Ukrainian economy afloat while also preparing for the massive rebuilding project that will eventually follow. British Prime Minister Boris Johnson has publicly backed a new Marshall Plan for Ukraine. Other world leaders have also voiced their support for this initiative.

The immediate priority will be to repair the catastrophic damage done to Ukrainian homes, hospitals, schools, roads, bridges, airports, industries, and other vital elements of national infrastructure. At the same time, the most effective long-term use of resources may be to focus on strengthening Ukraine’s digital economy and the country’s IT industry. Investing in this sector will have an immediate economic impact and will create the largest number of jobs. After all, global studies consistently indicate that every new job in the IT industry creates five more jobs in unrelated service industries.

What should a Digital Marshall Plan for Ukraine look like? First of all, it should feature large-scale strategic investment in the digital transformation of Ukraine including all public services, healthcare, and education. This will lead to the radical modernization of the Ukrainian public sector while creating huge demand for the services of Ukrainian IT companies, many of which have lost their Western customer bases due to the war.

Meanwhile, investment into the rapid retraining of Ukrainians from other professional backgrounds will help to drastically reduce unemployment. Even before the war, the Ukrainian IT industry consistently suffered from a shortage of personnel. With industrial facilities across the country destroyed and whole sectors of the economy on pause as a result of the Russian invasion, unemployment is a major issue in today’s Ukraine. Comprehensive training programs can enable tens of thousands of Ukrainians to become qualified IT specialists and find new work during the initial post-war period or possibly even sooner. 

The Ukrainian IT industry must not only be preserved but also brought to the next level. To make this happen, Ukraine and the country’s partners should work together to create attractive financial conditions that will encourage more of the world’s leading tech companies to open Ukrainian hubs and R&D centers.

It is also necessary to establish a large-scale “fund of funds” for IT entrepreneurs that will invest in venture funds operating in Ukraine. Startups were fast becoming the most important growth point of the Ukrainian tech sector before the war and have huge potential for the future. Once the conflict is over, Ukrainian innovators will bring the many tech solutions created during the war to global markets. International interest is likely to be intense.

Work on a Digital Marshall Plan needs to begin now. The rebuilding of Ukraine will necessarily take many years and looks set to be one of the most challenging international undertakings of the twenty-first century. Investing in the Ukrainian IT industry now will provide an immediate and significant economic boost. It will also enable the country to develop an optimized digital infrastructure that will lay the foundations for future prosperity and help secure Ukraine’s place among the community of European democracies.

Anatoly Motkin is president of the StrategEast Center for a New Economy, a non-profit organization with offices in the United States, Ukraine, Georgia, and Kyrgyzstan.


The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.


Eye to eye in AI: Developing artificial intelligence for national security and defense https://www.atlanticcouncil.org/in-depth-research-reports/report/eye-to-eye-in-ai/ Wed, 25 May 2022 17:29:12 +0000 https://www.atlanticcouncil.org/?p=527708 As artificial intelligence transforms national security and defense, it is imperative for the Department of Defense, Congress, and the private sector to closely collaborate in order to advance major AI development priorities. However, key barriers remain. As China and Russia develop their own capabilities, the stakes of the military AI competition are high, and time is short.


As artificial intelligence (AI) transforms national security and defense, it is imperative for the Department of Defense (DoD), Congress, and the private sector to closely collaborate in order to advance major AI development priorities.

However, key barriers remain. Bureaucracy, acquisition processes, and organizational culture continue to inhibit the military’s ability to bring in external innovation and move more rapidly toward AI integration and adoption. As China—and, to a lesser extent, Russia—develop their own capabilities, the stakes of the military AI competition are high, and time is short.

It is now well past time to see eye to eye in AI. Therefore, Forward Defense’s latest report, generously supported by Accrete AI, addresses these key issues and more.

Executive summary

Over the past several years, militaries around the world have increased interest and investment in the development of artificial intelligence (AI) to support a diverse set of defense and national security goals. However, general comprehension of what AI is, how it factors into the strategic competition between the United States and China, and how to optimize the defense-industrial base for this new era of deployed military AI is still lacking. It is now well past time to see eye to eye in AI, to establish a shared understanding of modern AI between the policy community and the technical community, and to align perspectives and priorities between the Department of Defense (DoD) and its industry partners. Accordingly, this paper addresses the following core questions.

What is AI and why should national security policymakers care?

AI-enabled capabilities hold the potential to deliver game-changing advantages for US national security and defense, including

  • greatly accelerated and improved decision-making;
  • enhanced military readiness and operational competence;
  • heightened human cognitive and physical performance;
  • new methods of design, manufacture, and sustainment of military systems;
  • novel capabilities that can upset delicate military balances; and
  • the ability to create and detect strategic cyberattacks, disinformation campaigns, and influence operations.

Recognition of the indispensable nature of AI as a horizontal enabler of the critical capabilities necessary to deter and win the future fight has gained traction within the DoD, which has made notable investments in AI over the past five years.

But, policymakers beyond the Pentagon—as well as the general public and the firms that are developing AI technologies—require a better understanding of the capabilities and limitations of today’s AI, and a clear sense of both the positive and the potentially destabilizing implications of AI for national security.

Why is AI essential to strategic competition?

The Pentagon’s interest in AI must also be seen through the lens of intensifying strategic competition with China—and, to a lesser extent, Russia—with a growing comprehension that falling behind on AI and related emerging technologies could compromise the strategic, technological, and operational advantages retained by the US military since the end of the Cold War. Some defense leaders even argue that the United States has already lost the military-technological competition to China.1

While this paper does not subscribe to such a fatalist perspective, it argues that the stakes of the military AI competition are high—and that time is short.

What are the obstacles to DoD AI adoption?

The infamous Pentagon bureaucracy, an antiquated acquisition and contracting system, and a risk-averse organizational culture continue to inhibit the DoD’s ability to bring in external innovation and move more rapidly toward widespread AI integration and adoption. Solving systemic problems of this caliber is a tall order. But, important changes are already under way to facilitate DoD engagement with the commercial technology sector and innovative startups, and there seems to be a shared sense of urgency to solidify these public-private partnerships in order to ensure sustained US technological and military advantage. Still, much remains to be done in aligning the DoD’s and its industry partners’ perspectives about the most impactful areas for AI development, as well as articulating and implementing common technical standards and testing mechanisms for trustworthy and responsible AI.

Key takeaways and recommendations

The DoD must move quickly to transition from a broad recognition of AI's importance to the creation of pathways, processes, practices, and principles that will accelerate adoption of the capabilities enabled by AI technologies. Without intentional, coordinated, and immediate action, the United States risks falling behind competitors in the ability to harness game-winning technologies that will dominate the kinetic and non-kinetic battlefield of the future. This report identifies three courses of action for the DoD that can help ensure the US military retains its global leadership in AI by catalyzing the internal changes necessary for more rapid AI adoption and capitalizing on the vibrant and diverse US innovation ecosystem, including:

  • prioritizing safe, secure, trusted, and responsible AI development and deployment;
  • aligning key priorities for AI development and strengthening coordination between the DoD and industry partners to help close AI capability gaps; and
  • promoting coordination between leading defense-technology companies and nontraditional vendors to accelerate DoD AI adoption.

This report is published at a time that is both opportune and uncertain in terms of the future trajectory of the DoD's AI adoption efforts and global geopolitics. The ongoing conflict in Ukraine has placed in stark relief the importance of constraining authoritarian impulses to control territory, populations, standards, and narratives, and the role that alliances committed to maintaining long-standing norms of international behavior can play in this effort. As a result, the authors urge the DoD to engage and integrate the United States' allies and trusted partners at governmental and, where possible, industry levels to better implement the three main recommendations of this paper.

Introduction

AI embodies a significant opportunity for defense policymakers. The ability of AI to process and fuse information, and to distill data into insights that augment decision-making, can lift the "fog of war" in a chaotic, contested environment in which speed is king. AI can also unlock the possibility of new types of attritable and single-use uncrewed systems that can enhance deterrence.2 It can help safeguard the lives of US service members, for example, by powering the navigation software that guides autonomous resupply trucks in conflict zones.3 While humans remain in charge of making the final decision on targeting, AI algorithms are increasingly playing a role in helping intelligence professionals identify and track malicious actors, with the aim of "shortening the kill chain and accelerating the speed of decision-making."4

AI development and integration are also imperative due to the broader geostrategic context in which the United States operates—particularly the strategic competition with China.5 The People’s Liberation Army (PLA) budget for AI seems to match that of the US military, and the PLA is developing AI technology for a similarly broad set of applications and capabilities, including training and simulation, swarming autonomous systems, and information operations—among many others—all of which could abrogate the US military-technological advantage.6

As US Secretary of Defense Lloyd Austin noted in July 2021, “China’s leaders have made clear they intend to be globally dominant in AI by the year 2030. Beijing already talks about using AI for a range of missions, from surveillance to cyberattacks to autonomous weapons.”7 The United States cannot afford to fall behind China or other competitors.

To accelerate AI adoption, the Pentagon must confront its demons: a siloed bureaucracy that frustrates efficient data-management efforts and thwarts the technical infrastructure needed to leverage DoD data at scale; antiquated acquisition and contracting processes that inhibit the DoD’s ability to bring in external innovation and transition successful AI technology prototypes to production and deployment; and a risk-averse culture at odds with the type of openness, experimentation, and tolerance for failure known to fuel innovation.8

Several efforts are under way to tackle some of these problems. The chief data and artificial intelligence officer (CDAO), a new role reporting directly to the under secretary of defense, was recently announced to consolidate the office of the chief data officer, the Joint Artificial Intelligence Center (JAIC), and the Defense Digital Service (DDS). This reorganization brings the DoD's data and AI efforts under one roof to deconflict overlapping authorities that have made it difficult to plan and execute AI projects.9 Through expanded use of alternative acquisition methods, organizations like the Defense Innovation Unit (DIU) and the Air Force's AFWERX are bridging the gap with the commercial technology sector, particularly startups and nontraditional vendors. Still, some tech leaders believe these efforts are falling short, warning that "time is running out."10

As the DoD shifts toward adoption of AI at scale, this report seeks to provide insights into outstanding questions regarding the nature of modern AI, summarize key advances in China’s race toward military AI development, and highlight some of the most compelling AI use cases across the DoD. It also offers a brief assessment of the incongruencies between the DoD and its industry partners, which continue to stymie the Pentagon’s access to the game-changing technologies the US military will need to deter adversary aggression and dominate future battlefields.

The urgency of competition, however, must not overshadow the commitment to the moral code that guides the US military as it enters the age of deployed AI. As such, the report reiterates the need to effectively translate the DoD’s ethical AI guidelines into common technical standards and evaluation metrics for assessing trustworthiness, and to enhance cooperation and coordination with the DoD’s industry partners—especially startups and nontraditional vendors across these critical issues.

We conclude this report with a number of considerations for policymakers and other AI stakeholders across the national security ecosystem. Specifically, we urge the DoD to prioritize safe, secure, trusted, and responsible AI development and deployment, align key priorities for AI development between the DoD and industry to help close the DoD’s AI capability gaps, and promote coordination between leading defense technology companies and nontraditional vendors to accelerate the DoD’s AI adoption efforts.

Defining AI

Artificial intelligence, machine learning, and big-data analytics

The term “artificial intelligence” encompasses an array of research approaches, techniques, and technologies spread across a wide range of fields, from computer science and engineering to medicine and philosophy.

The 2018 DoD AI Strategy defined AI as "the ability of machines to perform tasks that normally require human intelligence—for example, recognizing patterns, learning from experience, drawing conclusions, making predictions, or taking action."11 This ability to execute tasks traditionally thought to be possible only by humans is central to many definitions of AI, although others are less prescriptive. The National Artificial Intelligence Act of 2020 merely describes AI as machine-based systems that can "make predictions, recommendations or decisions" for a given set of human-defined objectives.12 Others have emphasized rationality, rather than fidelity to human performance, in their definitions of artificial intelligence.13

As the list of tasks that computers can perform at human or near-human levels continues to grow, the bar for what is considered “intelligent” rises, and the definition of AI evolves accordingly.14 The task of optical character recognition (OCR), for instance, once stood at the leading edge of AI research, but implementations of this technology, such as automated check processing, have long since become routine, and most experts would no longer consider such a system an example of artificial intelligence. This constant evolution of the definition is, in part, responsible for the confusion surrounding modern AI.15

This report adopts the Defense Innovation Board’s (DIB) definition by considering AI as “a variety of information processing techniques and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task.”16 These techniques, as the DIB explains, can include, but are not limited to, symbolic logic, expert systems, machine learning (ML), and hybrid systems. We use the term “AI” when referring to the broad range of relevant techniques and technologies, and “ML” when dealing with this subset of systems more specifically. For alternative conceptualizations, the 2019 RAND study on the DoD’s posture for AI offers a useful sample of relevant definitions put forth by federal, academic, and technical sources.17

Much of the progress made in AI over the past decade has come from ML, a modern AI paradigm that differs fundamentally from the human-driven expert systems that dominated in the past. Rather than following a traditional software-development process, in which programs are designed and then coded by human engineers, “machine learning systems use computing power to execute algorithms that learn from data.”18

Figure 1. The progression from, and variance among, big-data analytics, predictive big-data analytics, and machine learning, three terms that are occasionally conflated in discussions of AI. Source: Authors.

Three elements—algorithms, data, and computing power—are foundational to modern AI technologies, although their relative importance changes depending on particular methods used and, inherently, the trajectory of technological development.

Given that the availability of very large data sets has been critical to the development of ML and AI, it is worth noting that, while the fields of big-data analytics and AI are closely related, there are important differences between the two. Big-data analytics look for patterns, define and structure large sets of data, and attempt to gain insights, but are an essentially descriptive technique unable to make predictions or act on results. Predictive data analytics go a step further, and use collected data to make predictions based on historical information. Such predictive insights have been extremely useful in commercial settings such as marketing or business analytics, but the practice is nonetheless reliant on the assumption that future patterns will follow past trends, and depends on human data analysts to create and test assumptions, query the data, and validate patterns. Machine-learning systems, on the other hand, are able to autonomously generate assumptions, test those assumptions, and learn from them.19
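The distinction can be sketched in a few lines of Python (purely illustrative; the incident counts and the linear-trend model are hypothetical, not drawn from the report). Descriptive analytics only summarizes history; predictive analytics requires the analyst to supply a model form, here a straight-line trend, and assumes the future will follow it; a machine-learning system would instead learn a model from the data itself.

```python
import statistics

# Descriptive big-data analytics: summarize historical data; no prediction.
monthly_incidents = [12, 15, 11, 18, 21, 19, 24, 26]  # hypothetical counts
mean_incidents = statistics.mean(monthly_incidents)   # 18.25

# Predictive analytics: the analyst assumes a linear trend and fits it to
# history with least squares; the model form is a human-supplied assumption.
xs = list(range(len(monthly_incidents)))
x_mean = statistics.mean(xs)
slope = sum((x - x_mean) * (y - mean_incidents)
            for x, y in zip(xs, monthly_incidents)) / sum((x - x_mean) ** 2 for x in xs)
intercept = mean_incidents - slope * x_mean

# Extrapolate one month ahead -- valid only if the past trend continues.
forecast = intercept + slope * len(monthly_incidents)
print(round(forecast, 1))
```

The human-chosen trend model is exactly what the paragraph above identifies as the limit of predictive analytics: the analyst, not the system, generates and validates the assumption.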

ML is, therefore, a subset of AI techniques that have allowed researchers to tackle many problems previously considered impossible, with numerous promising applications across national security and defense, as discussed later in the report.

Limitations of AI

There are, however, important limitations and drawbacks to AI systems—particularly in operational environments—in large part, because of their brittleness. These systems perform well in stable simulation and training settings, but they can struggle to function reliably or correctly if the data inputs change, or if they encounter uncertain or novel situations.

ML systems are also particularly vulnerable to adversarial attacks aimed at the algorithms or data upon which the system relies. Even small changes to data sets or algorithms can cause the system to malfunction, reach wrong conclusions, or fail in other unpredictable ways.20
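This brittleness can be made concrete with a toy sketch (illustrative only; the classifier weights and inputs are hypothetical, not drawn from any deployed system). For a simple linear classifier, shifting each feature by just 0.05 in the direction that most lowers the score, the same gradient-sign idea behind fast adversarial attacks on larger models, flips the predicted label.

```python
# Toy linear classifier: score = w . x + b; predict 1 if score > 0.
w = [0.6, -0.4, 0.8]   # hypothetical learned weights
b = -0.5

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

x = [0.5, 0.2, 0.4]    # original input: score is 0.04, so label 1
epsilon = 0.05         # tiny perturbation budget per feature

# Nudge each feature against the sign of its weight to lower the score
# (for a linear model this is exactly the gradient-sign attack direction).
x_adv = [xi - epsilon * (1 if wi > 0 else -1) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))  # label flips from 1 to 0
```

No single feature moves by more than 0.05, yet the output changes entirely, which is why small, targeted corruptions of data or inputs can cause the unpredictable failures described above.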

Another challenge is that AI/ML systems do not typically have the capacity to explain their own reasoning, or the processes by which they reach certain conclusions, provide recommendations, and take action, in a way that is evident or understandable to humans. Explainability—or what some have referred to as interpretability—is critical for building trust in human-AI teams, and is especially important as advances in AI enable greater autonomy in weapons, which raises serious ethical and legal concerns about human control, responsibility, and accountability for decisions related to the use of lethal force.

A related set of challenges includes transparency, traceability, and integrity of the data sources, as well as the prevention or detection of adversary attacks on the algorithms of AI-based systems. Having visibility into who trains these systems, what data are used in training, and what goes into an algorithm’s recommendations can mitigate unwanted bias and ensure these systems are used appropriately, responsibly, and ethically. All these challenges are inherently linked to the question of trust explored later in the report.

Figure 2. Understanding AI limitations. Source: Authors. Lower right icon created by Ranah Pixel Studio.

Military competition in AI innovation and adoption

Much of the urgency driving the DoD's AI development and adoption efforts stems from the need to ensure the United States and its allies outpace China in the military-technological competition that has come to dominate the relationship between the two nations. Russia's technological capabilities are far less developed, but its aggression undermines global security and threatens US and NATO interests.

China

China has prioritized investment in AI for both defense and national security as part of its efforts to become a “world class military” and to gain advantage in future “intelligentized” warfare—in which AI (alongside other emerging technologies) is more completely integrated into military systems and operations through “networked, intelligent, and autonomous systems and equipment.”21

While the full scope of China's AI-related activities is not widely known, an October 2021 review of three hundred and forty-three AI-related Chinese military contracts by the Center for Security and Emerging Technology (CSET) estimates that the PLA "spends more than $1.6 billion each year on AI-related systems and equipment."22 The National Security Commission on Artificial Intelligence's (NSCAI) final report assessed that "China's plans, resources, and progress should concern all Americans. It is an AI peer in many areas and an AI leader in some applications."23

CSET’s review and other open-source assessments reveal that China’s focus areas for AI development, like those of the United States, are broad, and include24

  • intelligent and autonomous vehicles, with a particular focus on swarming technologies;
  • intelligence, surveillance, and reconnaissance (ISR);
  • predictive maintenance and logistics;
  • information, cyber, and electronic warfare;
  • simulation and training (to include wargaming);
  • command and control (C2); and
  • automated target recognition.

Progress in each of these areas constitutes a challenge to the United States’ capacity to keep pace in a military-technological competition with China. However, it is worth examining China’s advancing capabilities in two areas that could have a particularly potent effect on the military balance.

Integration

First, AI can help the PLA bridge gaps in operational readiness by artificially enhancing military integration and cross-domain operations. Many observers have pointed to the PLA's lack of operational experience in conflict as a critical vulnerability. As impressive as China's advancing military modernization has been from a technological perspective, none of the PLA's personnel have been tested under fire in a high-end conflict in the same ways as the US military over the last twenty years. The PLA's continuing efforts to increase its "jointness" from an organizational and doctrinal standpoint are also nascent and untested.

The use of AI to improve the quality, fidelity, and complexity of simulations and wargames is one way the PLA is redressing this area of concern. A 2019 report by the Center for a New American Security observed that "[for] Chinese military strategists, among the lessons learned from AlphaGo's victory was the fact that an AI could create tactics and stratagems superior to those of a human player in a game that can be compared to a wargame," and that such AI-enabled wargaming could more rigorously test PLA decision-makers and improve command decision-making.25 In fact, the CSET report found that six percent of the three hundred and forty-three contracts surveyed were for the use of AI in simulation and training, including use of AI systems to wargame a Taiwan contingency.26

During the Defense Advanced Research Projects Agency's (DARPA) AlphaDogfight Trials, an operational F-16 pilot flies in a virtual reality simulator against the champion F-16 AI agent developed by Heron Systems. The Heron AI agent defeated the human pilot in five straight dogfights to conclude the trials. Source: DARPA, https://www.darpa.mil/news-events/2020-08-26.

The focus on AI integration to reduce perceived vulnerabilities in experience also applies to operational and tactical training. In July 2021, the Chinese Communist Party mouthpiece publication Global Times reported that the PLA Air Force (PLAAF) has started to deploy AI as simulated opponents in pilots' aerial combat training to "hone their decision-making and combat skills against fast-calculating computers."27

Alongside virtual simulations, China is also aiming to use AI to support pilot training in real-world aircraft. In a China Central Television (CCTV) program that aired in November 2020, Zhang Hong, the chief designer of China’s L-15 trainer, noted that AI onboard training aircraft can “identify different habits each pilot has in flying. By managing them, we will let the pilots grow more safely and gain more combat capabilities in the future.”28

Notably, the PLAAF's July 2021 AI–human dogfight was similar to DARPA's September 2020 AlphaDogfight Trials, in which an AI agent defeated a human pilot in a series of five simulated dogfights.29 Similarly, the United States announced in September 2021 the award of a contract to training-and-simulation company Red 6 to integrate the company's Airborne Tactical Augmented Reality System (ATARS)—which allows a pilot flying a real-world plane to train against AI-generated virtual aircraft using an augmented-reality headset—into the T-38 Talon trainer, with plans to eventually install the system in fourth-generation aircraft.30 AI-enabled training and simulation are, therefore, key areas in which the US military is in direct competition with the PLA. As the Chinese military leverages AI to enhance readiness, the DoD cannot afford to fall behind.

Autonomy

A second area of focus for Chinese AI development is autonomous systems, especially swarming technologies, in which several systems operate independently or in conjunction with one another to confuse and overwhelm opponent defensive systems. China's interest in, and capacity for, developing swarm technologies have been well demonstrated, including the then-record-setting launch of one hundred and eighteen small drones in a connected swarm in June 2017.31

In September 2020, the China Academy of Electronics and Information Technology (CAEIT) reportedly launched a swarm of two hundred fixed-wing CH-901 loitering munitions from a modified Dongfeng Mengshi light tactical vehicle.32 A survey of the UMEX 2022 show in Abu Dhabi in February 2022 revealed not only a strong Chinese presence—both the China National Aero-Technology Import and Export Corporation (CATIC) and the China North Industries Corporation (NORINCO) had large pavilions—but also a focus on "collaborative" operations and intelligent swarming.33

An example of collaborative swarming drones on display at the UMEX 2022 exhibition in Abu Dhabi in February. Source: Tate Nurkin.

This interest in swarming is not limited to uncrewed aerial vehicles (UAVs). China is also developing the ability to deploy swarms of autonomous uncrewed surface vehicles (USVs) to "intercept, besiege and expel invasive targets," according to the Global Times.34 In November 2021, Chinese company Yunzhou Tech—which in 2018 carried out a demonstration of a swarm of fifty-six USVs—released a video showing six USVs engaging in a "cooperative confrontation" as part of an effort to remove a crewed vessel from Chinese waters.35

It is not difficult to imagine how such cooperative confrontation could be deployed against US or allied naval vessels, or even commercial ships, to develop or maintain sea control. This capability is especially powerful in a gray-zone contingency in which escalation concerns may limit response options.

Russia

Russia lags behind the United States and China in terms of investments and capabilities in AI. The sanctions imposed over the war in Ukraine are also likely to take a massive toll on Russia's science and technology sector. That said, US national decision-makers should not discount Russia's potential to use AI-enabled technologies in asymmetric ways to undermine US and NATO interests. The Russian Ministry of Defense has numerous autonomy and AI-related programs at different stages of development and experimentation related to military robotics, unmanned systems, swarming technology, early-warning and air-defense systems, ISR, C2, logistics, electronic warfare, and information operations.36

Russian military strategists see immense potential in greater autonomy and AI on future battlefields to speed up information processing, augment decision-making, enhance situational awareness, and safeguard the lives of Russian military personnel. The development and use of autonomous and AI-enabled systems are also discussed within the broader context of Russia’s military doctrine. Its doctrinal focus is on employing these technologies to disrupt and destroy the adversary’s command-and-control systems and communication capabilities, and use non-military means to establish information superiority during the initial period of war, which, from Russia’s perspective, encompasses periods of non-kinetic conflict with adversaries like the United States and NATO.37

The trajectory of Russia’s AI development is uncertain. But, with continued sanctions, it is likely Russia will become increasingly dependent on China for microelectronics and fall further behind in the technological competition with the United States.

Overview of US military progress in AI

The Pentagon's interest and urgency related to AI is due both to the accelerating pace of technological development and, increasingly, to the transformative capabilities it can enable. Indeed, AI is poised to fundamentally alter how militaries think about, prepare for, carry out, and sustain operations. Drawing on the "Five Revolutions" framework for classifying the potential impact of AI across five broad capability areas, outlined in a previous Atlantic Council report, Figure 3 below illustrates the different ways in which AI could augment human cognitive and physical capabilities, fuse networks and systems for optimal efficiency and performance, and usher in a new era of cyber conflict and chaos in the information space, among other effects.38

The DoD currently has more than six hundred AI-related efforts in progress, with a vision to integrate AI into every element of the DoD’s mission—from warfighting operations to support and sustainment functions to the business operations and processes that undergird the vast DoD enterprise.39 A February 2022 report by the US Government Accountability Office (GAO) has found that the DoD is pursuing AI capabilities for warfighting that predominantly focus on “(1) recognizing targets through intelligence and surveillance analysis, (2) providing recommendations to operators on the battlefield (such as where to move troops or which weapon is best positioned to respond to a threat), and (3) increasing the autonomy of uncrewed systems.”40 Most of the DoD’s AI capabilities, especially the efforts related to warfighting, are still in development, and not yet aligned with or integrated into specific systems. And, despite notable progress in experimentation and some experience with deploying AI-enabled capabilities in combat operations, there are still significant challenges ahead for wide-scale adoption.

In September 2021, the Air Force’s first chief software officer, Nicolas Chaillan, resigned in protest of the bureaucratic and cultural challenges that have slowed technology adoption and hindered the DoD from moving fast enough to effectively compete with China. In Chaillan’s view, in twenty years, the United States and its allies “will have no chance competing in a world where China has the drastic advantage in population.”41 Later, he added that China has essentially already won, saying, “Right now, it’s already a done deal.”42 Chaillan’s assessment of the United States engaged in a futile competition with China is certainly not shared across the DoD, but it reflects what many see as a lack of urgency within the risk-averse and ponderous culture of the department.

Lt. General Michael Groen, the head of the JAIC, agreed that “inside the department, there is a cultural change that has to occur.”43 However, he also touted the innovative capacity of the United States and highlighted the establishment of an AI accelerator and the finalization of a Joint Common Foundation (JCF) for AI development, testing, and sharing of AI tools across DoD entities.44 The cloud-enabled JCF is an important step forward that will allow for AI development based on common standards and architectures. This should help encourage sharing between the military services and DoD components and, according to the JAIC, ensure that “progress by one DoD AI initiative will build momentum across the entire DoD enterprise.”45

Figure 3. The potential impact of AI across five broad capability areas.

Toward perfect situational awareness: Perception, processing, and cognition

  • Speeding up processing, integration, and visualization of large and complex datasets to improve situational awareness and decision-making
  • Predictive analysis to anticipate likely contingencies, crises, or pandemic outbreaks

Hyper-enabled platforms and people: Human and machine performance enhancement

  • Making training more accessible and less costly, while improving the complexity and fidelity of simulations and wargaming
  • Enhancing cognitive and physical capacities of humans
  • Human-machine teaming and symbiosis, including brain-computer interfaces and AI agents performing mundane tasks to allow humans to focus on mission management

The impending design age: Manufacturing, supply chain, and logistics

  • Enabling digital engineering, advanced manufacturing, and new supply chain management tools to speed up and reduce costs associated with defense production
  • Predictive maintenance to enhance platform and system readiness and increase efficiency of sustainment

Connectivity, lethality, and flexibility: Communication, navigation, targeting, and strike

  • Cognitive sensing, spectrum management, threat detection and categorization, cognitive electronic warfare
  • Autonomous systems
  • AI enabled or supported targeting
  • Swarms

Monitoring, manipulation, and weaponization: Cyber and information operations

  • Detecting and defending against cyberattacks and disinformation campaigns
  • Offensive cyber and information operations

While progress should be commended, obstacles remain that are slowing the adoption of AI capabilities critical to deterring threats in the near future, and to meeting China’s competitive challenges in this decade and beyond.

The three case studies below provide examples of the technological, bureaucratic, and adoption advancements that have occurred in DoD AI efforts. These cases also highlight the enduring issues hindering the United States’ ability to bring its national innovation ecosystem fully to bear in the intensifying military-technological competition with China and, to a lesser extent, Russia.

Figure 4: The stages of the Joint Artificial Intelligence Center’s (JAIC’s) AI adoption journey. Source: JAIC, https://www.ai.mil/.

Use case 1: The irreversible momentum, grand ambition, and integration challenges of JADC2

Among the Pentagon’s most important modernization priorities is the Joint All-Domain Command and Control (JADC2) program, described as a “concept to connect sensors from all the military services…into a single network.”46 According to the Congressional Research Service, “JADC2 intends to enable commanders to make better decisions by collecting data from numerous sensors, processing the data using AI algorithms to identify targets, then recommending the optimal weapon—both kinetic and non-kinetic—to engage the target.”47 If successful, JADC2 holds the potential to eliminate silos between service C2 networks that previously slowed the transfer of relevant information across the force and, as a result, generate more comprehensive situational awareness upon which commanders can make better and faster decisions.

Figure 5. The JADC2 Placemat reflects the complexity and ambition associated with the Department of Defense’s JADC2 Implementation Plan. Source: US Department of Defense.

AI is essential to this effort, and the DoD is exploring how best to safely integrate it into the JADC2 program.48 In December 2021, reports emerged that the JADC2 cross-functional team (CFT) would start up an “AI for C2” working group, which would examine how to leverage responsible AI to enhance and accelerate command and control, reinforcing the centrality of responsible AI to the project.49

In March 2022, the DoD released an unclassified version of its JADC2 Implementation Plan, a move that represented, in the words of General Mark Milley, chairman of the Joint Chiefs of Staff, “irreversible momentum toward implementing” JADC2.50

However, observers have highlighted several persistent challenges to implementing JADC2 along the urgent timelines required to maintain (or regain) advantage in perception, processing, and cognition, especially vis-à-vis China.

Data security and cybersecurity, data-governance and sharing issues, interoperability with allies, and issues associated with integrating the services’ networks have all been cited as obstacles to realizing the ambitious promise of JADC2’s approach. Some observers consider that all-encompassing ambition itself to be a challenge.
The Hudson Institute’s Bryan Clark and Dan Patt argue that “the urgency of today’s threats and the opportunities emerging from new technologies demand that Pentagon leaders flip JADC2’s focus from what the US military services want to what warfighters need.”51

To be sure, grand ambition is not necessarily something to be avoided in AI development and integration programs. However, pathways to adoption will need to balance difficult-to-achieve, bureaucratically entrenched, time-consuming, and expensive objectives with developing systems that can deliver capability and advantage along the more immediate threat timelines facing US forces.

Use case 2: Brittle AI and the ethics and safety challenges of integrating AI into targeting

Demonstrating that the age of deployed AI is indeed here, in September 2021 Secretary of the Air Force Frank Kendall announced that the Air Force had “deployed AI algorithms for the first time to a live operational kill chain.”52 According to Kendall, the objective of incorporating AI into the targeting process is to “significantly reduce the manpower-intensive tasks of manually identifying targets—shortening the kill chain and accelerating the speed of decision-making.”53 The successful use of AI to support targeting constitutes a milestone for AI development, though there remain ethical, safety, and technical challenges to more complete adoption of AI in this role.

For example, a 2021 DoD test highlighted the problem of brittle AI. According to reporting from Defense One, the AI-enabled targeting used in the test was accurate only about 25 percent of the time in environments in which the AI had to decipher data from different angles—though it believed it was accurate 90 percent of the time—revealing a lack of ability “to adapt to conditions outside of a narrow set of assumptions.”54 These results illustrate the limitations of today’s AI technology in security-critical settings, and reinforce the need for aggressive and extensive real-world and digital-world testing and evaluation of AI under a range of conditions.
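The failure mode reported in that test, a model that is wrong most of the time while reporting high confidence, is a miscalibration problem. A minimal sketch of how such a confidence-accuracy gap can be quantified (the numbers below are illustrative stand-ins, not the DoD's actual test data):

```python
# Quantify the gap between a model's stated confidence and its
# measured accuracy -- the "brittle AI" symptom described above.
def confidence_accuracy_gap(predictions):
    """predictions: list of (confidence, was_correct) pairs.

    Returns (mean confidence, accuracy, overconfidence gap)."""
    mean_conf = sum(c for c, _ in predictions) / len(predictions)
    accuracy = sum(1 for _, ok in predictions if ok) / len(predictions)
    return mean_conf, accuracy, mean_conf - accuracy

# A model that reports ~90% confidence but is right only ~25% of the time:
preds = [(0.9, True)] * 25 + [(0.9, False)] * 75
conf, acc, gap = confidence_accuracy_gap(preds)
print(f"mean confidence {conf:.2f}, accuracy {acc:.2f}, gap {gap:.2f}")
```

Tracking this gap across test environments (different viewing angles, weather, sensor modalities) is one concrete way the "aggressive and extensive" testing called for above can surface brittleness before deployment.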

The ethics and safety of AI targeting could also constitute a challenge to further adoption, especially as confidence in AI algorithms grows. The Air Force operation involved automated target recognition in a supporting role, assisting “intelligence professionals”—i.e., human decision-makers.55 Of course, DoD has a rigorous targeting procedure in place, of which AI-enabled targeting algorithms would form one part and through which, looking further ahead, autonomous systems would also have to pass. Still, even as they are part of this process and designed to support human decisions, a high error rate combined with a high level of confidence in AI outputs could potentially lead to undesirable or grave outcomes.

Use case 3: The limits of AI adoption in the information domain

Intensifying competition with China and Russia is increasingly playing out in the information and cyber domains with real, enduring, and disruptive implications for US security, as well as the US economy, society, and polity.

For cyber and information operations, AI technologies and techniques are central to the future of both offensive and defensive operations, highlighting both the peril and promise of AI in the information domain.

Concern is growing about the threat of smart bots, synthetic media such as deepfakes—realistic video or audio productions that depict events or statements that did not take place—and large language models that can create convincing prose and text.56 And, these are just the emerging AI-enabled disinformation weapons that can be conceived of today. While disinformation is a challenge that requires a societal and whole-of-government response, DoD will undoubtedly play a key role in managing and responding to this threat—due to its prominence in US politics and society, the nature of its functional role, and the impact of its ongoing activities.

AI is at the forefront of Pentagon and other US government efforts to detect bots and synthetic media. DARPA’s Media Forensics (MediFor) program is using AI algorithms to “automatically quantify the integrity of an image or video,” for example.57

Still, there is concern about the pace at which this detection happens, given the speed of diffusion of synthetic media via social media. As Lt. General Dennis Crall, the Joint Staff’s chief information officer, observed, “the speed at which machines and AI won some of these information campaigns changes the game for us…digital transformation, predictive analytics, ML, AI, they are changing the game…and if we don’t match that speed, we will make it to the right answer and that the right answer will be completely irrelevant.”58

Accelerating DoD AI adoption

As the discussion above illustrates, the DoD has a broad set of AI-related initiatives across different stages of development and experimentation, building on the successful deployment of AI-enabled information-management and decision-support tools. As the focus shifts toward integration and scaling, accelerating these adoption efforts is critical for maintaining US advantage in the strategic competition against China, as well as effectively containing Russia.

In this section, the paper highlights some of the incongruencies in the relationship between the DoD and its industry partners that may cause lost opportunities for innovative and impactful AI projects, the positive impact of expanding the use of alternative acquisition methods, and the growing urgency to align processes and timelines to ensure that the US military has access to high- caliber technological capabilities for future warfare. Additionally, this section discusses the DoD’s approach to implementing ethical AI principles, and issues related to standards and testing of trusted and responsible systems.

DoD and industry partnerships: Aligning perspectives, processes, and timelines

Although the DoD has issued a number of high-level documents outlining priority areas for AI development and deployment, the market’s ability to meet, or even understand, these needs is far from perfect. A recent IBM survey of two hundred and fifty technology leaders from global defense organizations reveals some important differences in how defense-technology leaders and the DoD view the value of AI for the organization and the mission.59 For instance, only about one-third of the technology leaders surveyed said they see significant potential value in AI for military logistics, medical and health services, and information operations and deepfakes. When asked about the potential value of AI-enabled solutions to business and other noncombat applications, less than one-third mentioned maintenance, procurement, and human resources.60

These views are somewhat incongruent with the DoD’s goals in AI. For example, military logistics and sustainment functions that encompass equipment maintenance and procurement are among the top DoD priorities for implementing AI. Leidos’ work with the Department of Veterans Affairs also illustrates the potential of AI in medical and health services.61 Finally, with the use of AI in disinformation campaigns already under way, and as the discussion in the previous section highlights, there is an urgent need to develop technical measures and AI-enabled tools for detecting and countering AI-powered information operations.62

The DoD and its industry partners have different priorities and incentives based on their respective problem sets and missions. But, divergent perspectives on valuable and critical areas for AI development could result in lost opportunities for impactful AI projects. That said, even when the Pentagon and its industry partners see eye to eye on AI, effective collaboration is often thwarted by a clumsy bureaucracy that is too often tethered to legacy processes, structures, and cultures.

The DoD’s budget planning, procurement, acquisition, and contracting processes are, by and large, not designed for buying software. These institutional barriers, coupled with the complex and protracted software-development and compliance regulations, are particularly hard on small startups and nontraditional vendors that lack the resources, personnel, and prior knowledge required to navigate the system in the same way that defense primes do.63

The DoD is well aware of these challenges. Since 2015, the Office of the Secretary of Defense and the military services have set up several entities—such as DIU, AFWERX, NavalX, and Army Applications Laboratory—to interface with the commercial technology sector, especially startups and nontraditional vendors, with the aim of accelerating the delivery of best-in-class technology solutions. Concurrently, the DoD has taken other notable steps to promote the use of alternative authorities for acquisition and contracting, which provide greater flexibility to structure and execute agreements than traditional procurement.64 These include “other transaction authorities, middle-tier acquisitions, rapid prototyping and rapid fielding, and specialized pathways for software acquisition.”65

The DIU has been at the forefront of using some of these alternative acquisition pathways to source AI solutions from the commercial technology sector. The Air Force’s AFWERX has also partnered with the Air Force Research Lab and the National Security Innovation Network to make innovative use of the Small Business Innovation Research (SBIR) and Small Business Technology Transfer (STTR) funding to “increase the efficiency, effectiveness, and transition rate” of programs.66 In June 2021, for instance, the USAF SBIR/STTR AI Pitch Day awarded more than $18 million to proposals on the topic of “trusted artificial intelligence, which indicates systems are safe, secure, robust, capable, and effective.”67

These are steps in the right direction, and it has indeed become easier to receive DoD funding for research, development, and prototyping. Securing timely funding for production, however, remains a major challenge. This “valley of death” problem—the gap between the research-and-development phase and an established, funded program of record—is particularly severe for nontraditional defense firms, because of the disparity between venture-capital funding cycles for startups and how long it takes to get a program into the DoD budget.68

The Pentagon understands that bridging the “valley of death” is crucial for advancing and scaling innovation, and has recently launched the Rapid Defense Experimentation Reserve to deal with these issues.69 Still, the systemic changes necessary to align budget planning, acquisition, and contracting processes with the pace of private capital require congressional action and could take years to implement. Delays in implementing such reforms are undermining the DoD’s ability to access cutting-edge technology that could prove essential on future battlefields.

Building trusted and responsible AI systems

Ensuring that the US military can field safe and reliable AI-enabled and autonomous systems and use them in accordance with international humanitarian law will help the United States maintain its competitive advantage against authoritarian countries, such as China and Russia, that are less committed to ethical use of AI. An emphasis on trustworthy AI is also crucial because the majority of the DoD’s AI programs entail elements of human-machine teaming and collaboration, and their successful implementation depends, in large part, on operators trusting the system enough to use it. Finally, closer coordination between DoD and industry partners on shared standards and testing requirements for trustworthy and responsible AI is critical for moving forward with DoD AI adoption.

Alongside the DoD’s existing weapons-review and targeting procedures, including protocols for autonomous weapons systems, the department is also looking to address the ethical, legal, and policy ambiguities and risks raised more specifically by AI.70 In February 2020, the Pentagon adopted five ethical principles to guide the development and use of AI, calling for AI that is responsible, equitable, traceable, reliable, and governable. Looking to put these principles into practice, Deputy Secretary of Defense Kathleen Hicks issued a memorandum directing a “holistic, integrated, and disciplined approach” for integrating responsible AI (RAI) across six tenets: governance, warfighter trust, product-and-acquisition lifecycle, requirements validation, responsible AI ecosystem, and AI workforce.71 While JAIC was tasked with the implementation of the RAI strategy, it is unclear how this effort will unfold now that it has been integrated into the new CDAO office.

Meanwhile, in November 2021, the DIU released its responsible-AI guidelines, responding to the memo’s call for “tools, policies, processes, systems, and guidance” that integrate the ethical AI principles into the department’s acquisition policies.72 These guidelines are a tangible step toward operationalizing and implementing ethics in DoD AI programs, building on DIU’s experience working on AI solutions in areas such as predictive health, underwater autonomy, predictive maintenance, and supply-chain analysis. They are meant to be actionable, adaptive, and useful while ensuring that AI vendors, DoD stakeholders, and DIU program managers take fairness, accountability, and transparency into account during the planning, development, and deployment phases of the AI system lifecycle.73

The success of the DoD’s AI programs will depend, in large part, on ensuring that humans develop and maintain the appropriate level of trust in their intelligent-machine teammates. The DoD’s emphasis on trusted AI is, therefore, increasingly echoed throughout some of its flagship AI projects. In August 2020, for instance, DARPA’s Air Combat Evolution (ACE) program attracted a great deal of attention when an AI system beat one of the Air Force’s top F-16 fighter pilots in a simulated aerial dogfight contest.74 Rather than pitting humans against machines, a key question for ACE is “how to get the pilots to trust the AI enough to use it.”75 ACE selected the dogfight scenario, in large part, because this type of air-to-air combat encompasses many of the basic flight maneuvers necessary for becoming a trusted wing-mate within the fighter-pilot community. Getting the AI to master the basic flight maneuvers that serve as the foundation to more complex tasks, such as suppression of enemy air defenses or escorting friendly aircraft, is only one part of the equation.76 The AlphaDogfight Trials, according to the ACE program manager, are “all about increasing trust in AI.”77

AI development is moving fast, making it difficult to design and implement a regulatory structure that is sufficiently flexible to remain relevant without being so restrictive that it stifles innovation. Companies working with the DoD are seeking guidelines for the development, deployment, use, and maintenance of AI systems compliant with the department’s ethical principles for AI. Many of these industry partners have adopted their own frameworks for trusted and responsible AI solutions, highlighting attributes such as safety, security, robustness, resilience, accountability, transparency, traceability, auditability, explainability, fairness, and other related qualities.78 That said, there are important divergences in risk-management approaches, organizational policies, bureaucratic processes, performance benchmarks, and standards for integrating trustworthiness considerations across the AI system lifecycle.

Currently, there are no shared technical standards for what constitutes ethical or trustworthy AI systems, which can make it difficult for nontraditional AI vendors to set expectations and navigate the bureaucracy. The DoD is not directly responsible for setting standards. Rather, the 2021 National Defense Authorization Act (NDAA) expanded the National Institute of Standards and Technology (NIST) mission “to include advancing collaborative frameworks, standards, guidelines for AI, supporting the development of a risk mitigation framework for AI systems, and supporting the development of technical standards and guidelines to promote trustworthy AI systems.”79 In July 2021, the NIST issued a request for information from stakeholders as it develops its AI Risk Management Framework, meant to help organizations “incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.”80

A US Army soldier uses the tactical robotic controller to control the expeditionary modular autonomous vehicle as a practice exercise in preparation for Project Convergence at Yuma Proving Ground, Arizona, on October 19, 2021. During Project Convergence 21, soldiers experimented with using this vehicle for semi-autonomous reconnaissance and re-supply. Both on and beyond the battlefield, trust in AI-enabled capabilities like autonomous and semi-autonomous vehicles is crucial to success. Source: US Army photo by Sgt. Marita Schwab, US Army flickr. https://www.flickr.com/photos/35703177@N00/51690959553/

There are no easy solutions to this challenge. But, a collaborative process that engages stakeholders across government, industry, academia, and civil society could help prevent AI development from going down the path of social media, where public policy failed to anticipate and was slow to respond to the risks and damages caused by disinformation and other malicious activity on these platforms.

Related to standards are the challenges linked to testing, evaluation, verification, and validation (TEVV). Testing and verification processes are meant to “help decision-makers and operators understand and manage the risks of developing, producing, operating, and sustaining AI-enabling systems,” and are essential for building trust in AI.81 The DoD’s current TEVV protocols and infrastructure are meant primarily for major defense acquisition programs like ships, airplanes, or tanks; they are linear, sequential, and, ultimately, finite once the program transitions to production and deployment. With AI systems, however, “development is never really finished, so neither is testing.”82 Adaptive, continuously learning emerging technologies like AI, therefore, require a more agile and iterative development-and-testing approach—one that, as the NSCAI recommended, “integrates testing as a continuous part of requirements specification, development, deployment, training, and maintenance and includes run-time monitoring of operational behavior.”83
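The run-time monitoring the NSCAI recommends can be as simple as continuously comparing live input statistics against a baseline captured during testing and flagging divergence. A minimal sketch, in which the feature values, window sizes, and z-score threshold are illustrative assumptions rather than any DoD standard:

```python
# Sketch of run-time monitoring of operational behavior: flag when
# live inputs drift from the distribution the system was tested on.
import statistics

class DriftMonitor:
    def __init__(self, baseline, threshold=3.0):
        self.mean = statistics.mean(baseline)
        self.stdev = statistics.stdev(baseline)
        self.threshold = threshold  # z-score beyond which we alarm

    def check(self, window):
        """Return True if the live window has drifted from baseline."""
        z = abs(statistics.mean(window) - self.mean) / (self.stdev or 1.0)
        return z > self.threshold

baseline = [10.0, 10.5, 9.8, 10.2, 9.9, 10.1]  # statistics seen in testing
monitor = DriftMonitor(baseline)
print(monitor.check([10.0, 10.3, 9.9]))   # in-distribution window
print(monitor.check([15.2, 16.1, 15.8]))  # drifted window
```

An alarm of this kind does not fix a degraded model, but it tells operators that the system is now running outside the conditions under which it was validated, which is precisely the gap a one-time, finite TEVV process cannot close.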

An integrated and automated approach to development and testing, which builds upon the commercial best practice of development, security, and operations (DevSecOps), is much better suited for AI/ML systems. While the JAIC’s JCF has the potential to enable a true AI DevSecOps approach, scaling such efforts across the DoD is a major challenge because it requires significant changes to the current testing infrastructure, as well as more resources such as bandwidth, computing support, and technical personnel. That said, failing to develop new testing methods better suited to AI, and not adapting the current testing infrastructure to support iterative testing, will stymie efforts to integrate and adopt trusted and responsible AI at scale.
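In a DevSecOps-style pipeline, iterative testing typically takes the form of an automated gate that every candidate model must clear before deployment. A minimal sketch of such a gate, where the `evaluate` contract, suite names, and threshold values are hypothetical illustrations rather than actual JCF requirements:

```python
# Sketch of an automated release gate in an AI DevSecOps pipeline:
# a candidate model must meet minimum scores on both a nominal test
# suite and a perturbed (off-nominal) suite before it can deploy.
def release_gate(evaluate, nominal_suite, perturbed_suite,
                 min_nominal=0.90, min_perturbed=0.75):
    """evaluate(suite) -> accuracy in [0, 1]. Returns (passed, report)."""
    nominal = evaluate(nominal_suite)
    perturbed = evaluate(perturbed_suite)
    passed = nominal >= min_nominal and perturbed >= min_perturbed
    report = {"nominal": nominal, "perturbed": perturbed, "passed": passed}
    return passed, report

# A toy model that degrades badly on perturbed inputs fails the gate:
scores = {"nominal": 0.93, "perturbed": 0.40}
passed, report = release_gate(lambda s: scores[s], "nominal", "perturbed")
print(passed, report)
```

Because the gate runs on every model update rather than once at program transition, it embodies the continuous, iterative testing approach described above; scaling it across the DoD is where the infrastructure and personnel challenges arise.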

The above discussion of standards and TEVV encapsulates the unique challenges modern AI systems pose to existing DoD frameworks and processes, as well as the divergent approaches commercial technology companies and the DoD take to AI development, deployment, use, and maintenance. To accelerate AI adoption, the DoD and its industry partners need to better align on concrete, realistic, operationally relevant standards and performance requirements, testing processes, and evaluation metrics that incorporate ethical AI principles. A defense-technology ecosystem oriented around trusted and responsible AI could promote the cross-pollination of best practices and lower the bureaucratic and procedural barriers faced by nontraditional vendors and startups.

Key takeaways and recommendations

Fully exploiting AI’s capacity to drive efficiencies in cost and time, support human decision-makers, and enable autonomy will require more than technological advancement or development of novel operational concepts. Below, we outline three key areas of prioritized effort necessary to more successfully integrate AI across the DoD enterprise and ensure the United States is able to deter threats and maintain a strategic, operational, and tactical advantage over its competitors and potential adversaries.

Prioritize safe, secure, trusted, and responsible AI development and deployment

The intensifying strategic competition with China, the promise of exquisite technological and operational capabilities, and repeated comparisons to the rapid pace of technology development and integration in the private sector are all putting pressure on the DoD to move faster toward fielding AI systems. There is much to gain from encouraging greater risk tolerance in AI development to enable progress toward adopting AI at scale. But, rushing to field AI-enabled systems that are vulnerable to a range of adversary attacks, and likely to fail in an operational environment, simply to “one-up” China will prove counterproductive.

The ethical code that guides the US military reflects a fundamental commitment to abiding by the laws of war at a time when authoritarian countries like China and Russia show little regard for human rights and humanitarian principles. Concurrently, the DoD’s rigorous approach to testing and assurance of new capabilities is designed to ensure that new weapons are used responsibly and appropriately, and to minimize the risk from accidents, misuse, and abuse of systems and capabilities that can have dangerous, or even catastrophic, effects. These values and principles that the United States shares with many of its allies and partners are a strategic asset in the competition against authoritarian countries as they field AI-enabled military systems. To cement the DoD’s advantage in this arena, we recommend the following steps.

  • The DoD should integrate DIU’s Responsible AI Guidelines into relevant requests for proposals, solicitations, and other materials that require contractors to demonstrate how their AI products and solutions implement the DoD’s AI ethical principles. This will set a common and clear set of expectations, helping nontraditional AI vendors and startups navigate the Pentagon’s proposal process. There is recent precedent of the DoD developing acquisition categories for programs that required industry to pivot its development process to meet evolving DoD standards. In September 2020, for example, the US Air Force developed the e-series acquisition designation for all procurement efforts that required vendors to use digital engineering practices—rather than building prototypes—as part of their bid to incentivize industry to embrace digital engineering.84
  • DoD industry partners, especially nontraditional AI vendors, should actively engage with NIST as the institute continues its efforts to develop standards and guidelines to promote trustworthy AI systems, to ensure their perspectives inform subsequent frameworks.
  • Among the challenges to effective AI adoption referenced in this paper were brittle AI and the potential for adversary cyberattacks designed to corrupt the data on which AI algorithms are based. Overcoming these challenges will require a continued commitment within the DoD to increase the speed, variety, and capability of test and evaluation of DoD AI systems to ensure that these AI systems function as intended under a broader range of different environments. Some of this testing will need to take place in real-world environments, but advances in model-based simulations can allow for an increasing amount of validation of AI system performance in the digital/virtual world, reducing the costs and timelines associated with this testing.
  • Moreover, the DoD should also leverage the under secretary of defense for research and engineering’s (USDR&E) testing practices and priorities to ensure planned and deployed AI systems are hardened against adversary attacks, including data pollution and algorithm corruption.
  • The DoD should leverage allies and foreign partners to develop, deploy, and adopt trusted AI. Engagement of this nature is vital for coordination on common norms for AI development and use that contain and counter China and Russia’s authoritarian technology models. Pathways for expanding existing cooperation modes and building new partnerships can include the following.
  1. Enhancing an emphasis on ethical, safe, and responsible AI as part of the JAIC’s Partnership for Defense, through an assessment of commonalities and differences in the members’ approaches to identify concrete opportunities for future joint projects and cooperation.
  2. Cross-sharing and implementing joint ethics programs with Five Eyes, NATO, and AUKUS partners.85 In addition to supporting interoperability, this will add a diversity of perspectives and experiences, as well as help to ensure that AI development efforts limit various forms of bias. As one former general officer interviewed for this project noted, “diversity is how we ensure reliability. It is essential.”86
  3. Broadening outreach to allies and partners of varying capabilities and geographies, including India, South Africa, Vietnam, and Taiwan, to explore opportunities for bilateral and multilateral research-and-development efforts and technology-sharing programs that address the technical attributes of trusted and responsible AI.87

Align key priorities for AI development and strengthen coordination between the DoD and industry partners to help close DoD AI capability gaps.

The DoD will not be able to fulfill its ambitions in AI and compete effectively with the Chinese model of sourcing technology innovation through military-civil fusion without close partnerships with a broad range of technology companies. This includes defense-industry leaders with long-standing ties to the Pentagon, technology giants at the forefront of global innovation, commercial technology players seeking to expand their government portfolio, and startups at the cutting edge of AI development. But, the DoD’s budget-planning, procurement, acquisition, contracting, and compliance processes will likely need to be fundamentally restructured to effectively engage with the entirety of this vibrant and diverse technology ecosystem.

Systemic change is a slow, arduous process. But, delaying this transition risks the US military falling behind on exploiting the advantages AI promises to deliver, from operational speed to decision dominance. In the meantime, the following actions could help improve coordination with industry partners to accelerate the DoD’s AI adoption efforts.

  • The DoD should assess its communications and outreach strategy to clarify and streamline messaging around the department’s priorities in AI.
  • The DoD should partner with technology companies to reexamine their assessments regarding the potential value of AI solutions in certain categories, including, but not limited to, logistics, medical and health services, and information operations.
  • The DoD should implement the NSCAI’s recommendation to “accelerate efforts to train acquisition professionals on the full range of available options for acquisition and contracting, and incentivize their use for AI and digital technologies.”88 Moreover, such acquisition-workforce training initiatives should ensure that acquisition professionals have a sufficient understanding of the DoD’s ethical principles for AI and the technical dimensions of trusted and responsible AI. The DIU’s ethical guidelines can serve as the foundation for this training.


Promote coordination between leading defense technology companies and nontraditional vendors to accelerate DoD AI adoption.

Rather than building entirely new AI-enabled systems, in the short to medium term, the DoD will be integrating AI into a range of existing software and hardware systems—from cyberdefense architectures to fighter jets to C2. Progress toward implementing AI will, therefore, also depend upon streamlining collaboration between the startups and nontraditional AI vendors that the DoD has been courting for their innovative and cutting-edge technologies and the defense primes responsible for integrating new capabilities into legacy systems.

The NSCAI recommends identifying “new opportunities for defense primes to team with non-traditional firms to adopt AI capabilities more quickly across existing platforms.”89 We echo this recommendation: improved coordination between defense primes and nontraditional firms can help ensure AI solutions are robust, resilient, and operationally relevant, as well as usher promising prototypes through the “valley of death.”

Without a doubt, moving from concept to practice can be tricky. This paper’s research revealed a significant disconnect in perspectives on where the main challenges reside in moving innovative new technologies from the lab to adoption in programs of record. Startups tend to view system integrators as resistant to engaging, while integrators often view startups as lacking understanding of the acquisition process and as developing technologies that are difficult to integrate into, or scale for, programs of record.90

Bridging this gap will require new government approaches to resolving the intellectual-property concerns of nontraditional suppliers, most of whom are reluctant to cede ownership of sensitive technologies that they sell largely to customers outside the defense market. It will also involve the DoD helping small businesses navigate the federal acquisition process through steps such as speeding up cyber certification and the Authority To Operate (ATO) process, as well as helping interested companies develop use cases for different components of the DoD. Such proactive facilitation will help nontraditional suppliers that have worked with the DoD through research-and-development grants come to a partnership with systems integrators better prepared.

Most importantly, optimizing the benefits of both large systems integrators and smaller innovators will require the DoD to play a more active interlocutor role in connecting small companies with those that are running programs of record. There is currently some understandable hesitancy for the DoD to demand that companies work together, largely for fear of running afoul of Federal Acquisition Regulations (FAR). But, as one industry expert interviewed for this project argued, the DoD could be more aggressive in understanding what is permissible under the FAR and helping companies connect, especially to meet a specific acquisition priority or program.

Conclusion

Over the last several years, interest and investment in AI have gained momentum. This is especially true in the national security and defense community, as strategists, policymakers, and executives seek decisive advantages amid rising geostrategic competition and prepare for future operating environments characterized by complexity, uncertainty, and, most importantly, speed. AI is now at the center of military-technological competition between the United States and China, and both countries, as well as other militaries throughout the world, are already deploying AI-enabled systems with the goal of dominating the battlefield of the future.

The United States cannot risk falling behind China—not in AI innovation, not in AI adoption, and not in the full-scale integration of AI across the national defense enterprise. Urgency is required in addressing the range of technical, bureaucratic, and cultural issues that have, to date, dampened the pace of AI adoption within the DoD. Specifically, the DoD should prioritize the following.

  • Building trust in AI: Rather than replacing humans, DoD AI efforts are primarily centered on technologies that augment human understanding, decision-making, and performance. Building trust and confidence between humans and their intelligent-machine teammates is, therefore, a critical aspect of the successful development and deployment of military AI.
  • Developing and implementing standards for trusted and responsible AI: Currently, there are no commonly held standards or system- performance requirements for what constitutes trusted and responsible AI. The Pentagon and its industry partners must, therefore, work collaboratively with bodies like NIST to develop and implement operationally relevant standards, testing processes, and evaluation metrics that incorporate ethical, trustworthy, and responsible AI principles. This will help advance successful AI research prototypes into production-ready solutions.
  • Facilitating the optimization of the US innovation ecosystem and defense industrial base: Bringing cutting-edge AI technologies into the DoD also requires the Pentagon to reduce the bureaucratic challenges frequently associated with the DoD acquisition process, especially for innovative companies that are outside the traditional defense-industrial base. Developing new means of supporting and incentivizing engagement of these companies and promoting intra-industry partnerships between leading defense-technology companies and startups and nontraditional suppliers will be crucial.
  • Engaging allies and partners: As noted at the outset of this paper, the war in Ukraine has reinforced the importance of allies and partners in enforcing geopolitical norms and standards. The same is likely to be true of the future of AI development and adoption. The DoD will benefit not only from collaboration across industry and the national security community, but also with allies and foreign partners to ensure establishment and promulgation of norms and standards that will enable trusted, responsible, and interoperable AI development and deployment.

Acknowledgments

This report is the culmination of an eight-month research project on the national security and defense implications of AI, conducted under the supervision of FD Deputy Director Clementine Starling and Assistant Director Christian Trotti, and enabled by research and editing support from FD Young Global Professionals Timothy Trevor and Caroline Steel. It is made possible through the generous support of Accrete AI.

To produce this report, the authors conducted a number of interviews and consultations. They list below, alphabetically and with gratitude, some of the individuals consulted whose insights informed this report. The analysis and recommendations presented in this report are those of the authors alone, and do not necessarily represent the views of the individuals consulted. Moreover, the named individuals participated in a personal, not institutional, capacity.

  • Mr. Prashant Bhuyan, Founder and CEO, Accrete AI
  • Gen James Cartwright, USMC (Ret.), Board Director, Atlantic Council; Former Vice Chairman, US Joint Chiefs of Staff; Former Commander, US Strategic Command
  • Mr. Jonathan Doyle, Partner, Axion Partners
  • Mr. Brian Drake, Federal Chief Technology Officer, Accrete AI
  • Ms. Evanna Hu, Nonresident Senior Fellow, Forward Defense, Scowcroft Center for Strategy and Security, Atlantic Council
  • Mr. Ron Keesing, Senior Vice President for Technology Integration, Leidos
  • Mr. Stephen Rodriguez, Senior Advisor, Scowcroft Center for Strategy and Security, Atlantic Council

The authors would also like to thank the following individuals for their peer review of various drafts of this report, listed below in alphabetical order. The analysis and recommendations presented in this report are those of the authors alone, and do not necessarily represent the views of the peer reviewers. Moreover, the named individuals participated in a personal, not institutional, capacity.

  • Gen James Cartwright, USMC (Ret.), Board Director, Atlantic Council; Former Vice Chairman, US Joint Chiefs of Staff; Former Commander, US Strategic Command
  • Mr. Jonathan Doyle, Partner, Axion Partners
  • Mr. Brian Drake, Federal Chief Technology Officer, Accrete AI
  • Ms. Evanna Hu, Nonresident Senior Fellow, Forward Defense, Scowcroft Center for Strategy and Security, Atlantic Council
  • Mr. Justin Lynch, Director, Research and Analysis, Special Competitive Studies Project
  • Ms. Kelley Sayler, Analyst, Advanced Technology and Global Security, Congressional Research Service

Watch the launch event

Featuring keynote remarks by the Director of the Pentagon’s Joint Artificial Intelligence Center, Lieutenant General Michael S. Groen, and by the CEO of Accrete AI, Prashant Bhuyan, as well as panel discussions on DoD’s and industry’s roles in AI development.
Forward Defense

Forward Defense, housed within the Scowcroft Center for Strategy and Security, generates ideas and connects stakeholders in the defense ecosystem to promote an enduring military advantage for the United States, its allies, and partners. Our work identifies the defense strategies, capabilities, and resources the United States needs to deter and, if necessary, prevail in future conflict.

1    Katrina Manson, “US Has Already Lost AI Fight to China, Says Ex-Pentagon Software Chief,” Financial Times, October 10, 2021, https://www.ft.com/content/f939db9a-40af-4bd1-b67d-10492535f8e0.
2    Yuna Huh Wong, et al., Deterrence in the Age of Thinking Machines, RAND, 2020, https://www.rand.org/content/dam/rand/pubs/research_reports/RR2700/RR2797/RAND_RR2797.pdf.
3    Maureen Thompson, “Utilizing Semi-Autonomous Resupply to Mitigate Risks to Soldiers on the Battlefield,” Army Futures Command, October 26, 2021, https://www.army.mil/article/251476/utilizing_semi_autonomous_resupply_to_mitigate_risks_to_soldiers_on_the_battlefield.
4    Amy Hudson, “AI Efforts Gain Momentum as US, Allies and Partners Look to Counter China,” Air Force Magazine, July 13, 2021, https://www.airforcemag.com/dods-artificial-intelligence-efforts-gain-momentum-as-us-allies-and-partners-look-to-counter-china.
5    On AI and the strategic competition, see: Michael C. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review 1, 3 (May 2018), https://repositories.lib.utexas.edu/bitstream/handle/2152/65638/TNSR-Vol-1-Iss-3_Horowitz.pdf; Michael C. Horowitz, et al., “Strategic Competition in an Era of Artificial Intelligence,” Center for a New American Security, July 2018, http://files.cnas.org.s3.amazonaws.com/documents/CNAS-Strategic-Competition-in-an-Era-of-AI-July-2018_v2.pdf.
6    Ryan Fedasiuk, Jennifer Melot, and Ben Murphy, “Harnessed Lightning: How the Chinese Military is Adopting Artificial Intelligence,” Center for Security and Emerging Technology, Georgetown University, October 2021, https://cset.georgetown.edu/publication/harnessed-lightning.
7    C. Todd Lopez, “Ethics Key to AI Development, Austin Says,” DOD News, July 14, 2021, https://www.defense.gov/News/News-Stories/Article/Article/2692297/ethics-key-to-ai-development-austin-says/.
8    Danielle C. Tarraf, et al., The Department of Defense Posture for Artificial Intelligence: Assessment and Recommendations, RAND, 2019, https://www.rand.org/pubs/research_reports/RR4229.html.
9    Brian Drake, “A To-Do List for the Pentagon’s New AI Chief,” Defense One, December 14, 2021, https://www.defenseone.com/ideas/2021/12/list-pentagons-new-ai-chief/359757.
10    Valerie Insinna, “Silicon Valley Warns the Pentagon: ‘Time Is Running Out,’” Breaking Defense, December 21, 2021, https://breakingdefense.com/2021/12/silicon-valley-warns-the-pentagon-time-is-running-out.
11    “Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity,” US Department of Defense, 2018, https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF.
12    “Artificial Intelligence,” US Department of State, accessed May 4, 2022, https://www.state.gov/artificial-intelligence.
13    Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Fourth Edition (Hoboken, NJ: Pearson, 2021), 1. For further definitions of AI, see, for example: Nils J. Nilsson, The Quest for Artificial Intelligence: A History of Ideas and Achievements (Cambridge: Cambridge University Press, 2010); Shane Legg and Marcus Hutter, “A Collection of Definitions of Intelligence,” Dalle Molle Institute for Artificial Intelligence, June 15, 2007, https://arxiv.org/pdf/0706.3639.pdf.
14    Robert W. Button, Artificial Intelligence and the Military, RAND, September 7, 2017, https://www.rand.org/blog/2017/09/artificial-intelligence-and-the-military.html.
15    Ibid.
16    “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense,” Defense Innovation Board, October 2019, https://admin.govexec.com/media/dib_ai_principles_-supporting_document-embargoed_copy(oct_2019).pdf.
17    Danielle C. Tarraf, William Shelton, Edward Parker, Brien Alkire, Diana Gehlhaus, Justin Grana, Alexis Levedahl, Jasmin Léveillé, Jared Mondschein, James Ryseff, et al., The Department of Defense Posture for Artificial Intelligence, RAND, 2019, https://www.rand.org/pubs/research_reports/RR4229.html.
18    Ben Buchanan, “The AI Triad and What It Means for National Security Strategy,” Center for Security and Emerging Technology, Georgetown University, August 2020, iii, https://cset.georgetown.edu/publication/the-ai-triad-and-what-it-means-for-national-security-strategy.
19    Ibid.
20    Alexey Kurakin, Ian Goodfellow, and Samy Bengio, “Adversarial Machine Learning at Scale,” Arxiv, Cornell University, February 2017, https://arxiv.org/abs/1611.01236.
21    Fedasiuk, Melot, and Murphy, “Harnessed Lightning,” 4.
22    Ibid., iv.
23    “Final Report,” National Security Commission on AI, 2021, https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf.
24    Fedasiuk, Melot, and Murphy, “Harnessed Lightning,” 13.
25    Elsa Kania, “Learning Without Fighting: New Developments in PLA Artificial Intelligence War-Gaming,” Jamestown Foundation, China Brief, 19, 7 (2019), https://jamestown.org/program/learning-without-fighting-new-developments-in-pla-artificial-intelligence-war-gaming.
26    Fedasiuk, Melot, and Murphy, “Harnessed Lightning,” 22–23.
27    Liu Xuanzun, “PLA Deploys AI in Mock Warplane Battles, ‘Trains Both Pilots and AIs,’” Global Times, June 14, 2021, https://www.globaltimes.cn/page/202106/1226131.shtml.
28    Liu Xuanzun, “China’s Future Fighter Trainer Could Feature AI to Boost Pilot’s Combat Capability: Top Designer,” Global Times, November 16, 2020, http://en.people.cn/n3/2020/1116/c90000-9780437.html.
29    Joseph Trevithick, “Chinese Pilots Are Also Dueling With AI Opponents in Simulated Dogfights and Losing: Report,” Drive, June 18, 2021, https://www.thedrive.com/the-war-zone/41152/chinese-pilots-are-also-dueling-with-ai-opponents-in-simulated-dogfights-and-losing-report.
30    “Red 6 to Continue Support ATARS Integration into USAF T-38 Talon,” Air Force Technology, February 3, 2022, https://www.airforce-technology.com/news/red-6-atars-integration.
31    Xiang Bo, “China Launches Record Breaking Drone Swarm,” XinhuaNet, June 11, 2017, http://www.xinhuanet.com/english/2017-06/11/c_136356850.htm.
32    David Hambling, “China Releases Video Of New Barrage Swarm Drone Launcher,” Forbes, October 14, 2020, https://www.forbes.com/sites/davidhambling/2020/10/14/china-releases-video-of-new-barrage-swarm-drone-launcher/?sh=29b76fa12ad7.
33    An author of this paper attended the exhibition.
34    Cao Siqi, “Unmanned High-Speed Vessel Achieves Breakthrough in Dynamic Cooperative Confrontation Technology: Developer,” Global Times, November 28, 2021, https://www.globaltimes.cn/page/202111/1240135.shtml.
35    Ibid.
36    Jeffrey Edmonds, et al., “Artificial Intelligence and Autonomy in Russia,” CNA, May 2021, https://www.cna.org/CNA_files/centers/CNA/sppp/rsp/russia-ai/Russia-Artificial-Intelligence-Autonomy-Putin-Military.pdf.
37    “Advanced Military Technology in Russia,” Chatham House, September 2021, https://www.chathamhouse.org/2021/09/advanced-military-technology-russia/06-military-applications-artificial-intelligence.
38    Tate Nurkin, The Five Revolutions: Examining Defense Innovation in the Indo-Pacific, Atlantic Council, November 2020, https://www.atlanticcouncil.org/in-depth-research-reports/report/the-five-revolutions-examining-defense-innovation-in-the-indo-pacific-region.
39    Hudson, “AI Efforts Gain Momentum as US, Allies and Partners Look to Counter China.”
40    “Artificial Intelligence: Status of Developing and Acquiring,” US Government Accountability Office, February 2022, 17, https://www.gao.gov/assets/gao-22-104765.pdf.
41    Nicolas Chaillan, “It’s Time to Say Goodbye,” LinkedIn, September 2, 2021, https://www.linkedin.com/pulse/time-say-goodbye-nicolas-m-chaillan.
42    Manson, “US Has Already Lost AI Fight to China, Says Ex-Pentagon Software Chief.”
43    Patrick Tucker, “Pentagon AI Chief Responds to USAF Software Leader Who Quit in Frustration,” Defense One, October 26, 2021, https://www.defenseone.com/technology/2021/10/pentagon-ai-chief-responds-usaf-software-leader-who-quit-frustration/186368.
44    Ibid.
46    Jackson Bennett, “2021 in Review: JADC2 Has Irreversible Momentum, but What Does That Mean?” FedScoop, December 29, 2021, https://www.fedscoop.com/2021-in-review-jadc2-has-irreversible-momentum.
47    “Joint All-Domain Command and Control (JADC2) In Focus Briefing,” Congressional Research Service, January 21, 2022.
48    Ibid.
49    Jackson Bennett, “JADC2 Cross Functional Team to Stand Up AI-Focused Working Group,” FedScoop, December 16, 2021, https://www.fedscoop.com/jadc2-cft-stands-up-ai-working-group.
50    “DoD Announces Release of JADC2 Implementation Plan,” US Department of Defense, press release, March 17, 2022, https://www.defense.gov/News/Releases/Release/Article/2970094/dod-announces-release-of-jadc2-implementation-plan.
51    Bryan Clark and Dan Patt, “The Pentagon Should Focus JADC2 on Warfighters, Not Service Equities,” Breaking Defense, March 30, 2022, https://breakingdefense.com/2022/03/the-pentagon-should-focus-jadc2-on-warfighters-not-service-equities.
52    Amanda Miller, “AI Algorithms Deployed in Kill Chain Target Recognition,” Air Force Magazine, September 21, 2021, https://www.airforcemag.com/ai-algorithms-deployed-in-kill-chain-target-recognition.
53    Ibid.
54    Patrick Tucker, “Air Force Targeting AI Thought It Had a 90% Success Rate. It Was More Like 25%,” Defense One, December 9, 2021, https://www.defenseone.com/technology/2021/12/air-force-targeting-ai-thought-it-had-90-success-rate-it-was-more-25/187437.
55    Miller, “AI Algorithms Deployed in Kill Chain Target Recognition.”
56    Alex Tamkin and Deep Ganguli, “How Large Language Models Will Transform Science, Society, and AI,” Stanford University Human-Centered Artificial Intelligence, February 21, 2021, https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai.
57    Matt Turek, “Media Forensics (MediFor),” Defense Advanced Research Projects Agency, accessed May 4, 2022, https://www.darpa.mil/program/media-forensics.
58    Patrick Tucker, “Joint Chiefs’ Information Officer: US is Behind on Information Warfare. AI Can Help,” Defense One, November 5, 2021, https://www.defenseone.com/technology/2021/11/joint-chiefs-information-officer-us-behind-information-warfare-ai-can-help/186670.
59    “Deploying AI in Defense Organizations: The Value, Trends, and Opportunities,” IBM, May 2021, https://www.ibm.com/downloads/cas/EJBREOMX.
60    Ibid.
61    Authors’ interview with a defense technology industry executive.
62    Katerina Sedova, et al., “AI and the Future of Disinformation Campaigns, Part 1: The RICHDATA Framework,” Center for Security and Emerging Technology, Georgetown University, December 2021, https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns/; Katerina Sedova, et al., “AI and the Future of Disinformation Campaigns, Part 2: A Threat Model,” Center for Security and Emerging Technology, Georgetown University, December 2021, 1, https://cset.georgetown.edu/wp-content/uploads/CSET-AI-and-the-Future-of-Disinformation-Campaigns-Part-2.pdf; Ben Buchanan, et al., “Truth, Lies, and Automation: How Language Models Could Change Disinformation,” Center for Security and Emerging Technology, Georgetown University, May 2021, https://cset.georgetown.edu/publication/truth-lies-and-automation.
63    Daniel K. Lim, “Startups and the Defense Department’s Compliance Labyrinth,” War on the Rocks, January 3, 2022, https://warontherocks.com/2022/01/startups-and-the-defense-departments-compliance-labyrinth.
64    Moshe Schwartz and Heidi M. Peters, “Department of Defense Use of Other Transaction Authority: Background, Analysis, and Issues for Congress,” Congressional Research Service, February 22, 2019, https://sgp.fas.org/crs/natsec/R45521.pdf.
65    “Final Report.”
66    “SBIR Open Topic,” US Department of the Air Force, Air Force Research Laboratory, https://afwerx.com/sbirsttr.
67    “Trusted AI at Scale,” Griffiss Institute, July 26, 2021, https://www.griffissinstitute.org/about-us/events/ev-detail/trusted-ai-at-scale-1.
68    Insinna, “Silicon Valley Warns the Pentagon: ‘Time is Running Out.’”
69    Jory Heckman, “DoD Seeks to Develop New Career Paths to Stay Ahead of AI Competition,” Federal News Network, July 13, 2021, https://federalnewsnetwork.com/artificial-intelligence/2021/07/dod-seeks-to-develop-new-career-paths-to-stay-ahead-of-ai-competition.
70    “DOD Adopts Ethical Principles for Artificial Intelligence,” US Department of Defense, February 24, 2020, https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.
71    “DOD Adopts Ethical Principles for Artificial Intelligence,” US Department of Defense, February 24, 2020, https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.
72    Ibid.
73    Jared Dunnmon, et al., “Responsible AI Guidelines in Practice,” Defense Innovation Unit, https://assets.ctfassets.net/3nanhbfkr0pc/acoo1Fj5uungnGNPJ3QWy/3a1dafd64f22efcf8f27380aafae9789/2021_RAI_Report-v3.pdf.
74    Margarita Konaev and Husanjot Chahal, “Building Trust in Human-Machine Teams,” Brookings, February 18, 2021, https://www.brookings.edu/techstream/building-trust-in-human-machine-teams/; Theresa Hitchens, “AI Slays Top F-16 Pilot in DARPA Dogfight Simulation,” Breaking Defense, August 20, 2020, https://breakingdefense.com/2020/08/ai-slays-top-f-16-pilot-in-darpa-dogfight-simulation.
75    Sue Halpern, “The Rise of A.I. Fighter Pilots,” New Yorker, January 17, 2022, https://www.newyorker.com/magazine/2022/01/24/the-rise-of-ai-fighter-pilots.
76    Adrian P. Pope, et al., “Hierarchical Reinforcement Learning for Air-to-Air Combat,” Lockheed Martin, June 11, 2021, https://arxiv.org/pdf/2105.00990.pdf.
77    “AlphaDogfight Trials Go Virtual for Final Event,” Defense Advanced Research Projects Agency, July 2020, https://www.darpa.mil/news-events/2020-08-07.
78    A recent IBM survey of two hundred and fifty technology leaders from global defense organizations revealed that about 42 percent have a framework for deploying AI ethically and safely; notably, formalized plans for the ethical application of AI are more common in organizations whose mission functions include combat and fighting arms than organizations with non-combat missions. These leaders surveyed represent organizations from a broad range of mission functions, including combat and fighting arms (18 percent), combat support (44 percent), and combat service-support (37 percent) organizations. “Deploying AI in Defense Organizations,” 4, https://www.ibm.com/downloads/cas/EJBREOMX; “Autonomy and Artificial Intelligence: Ensuring Data-Driven Decisions,” C4ISR, January 2021, https://hub.c4isrnet.com/ebooks/ai-autonomy-2020; “How Effective and Ethical Artificial Intelligence Will Enable JADC2,” Breaking Defense, December 2, 2021, https://breakingdefense.com/2021/12/how-effective-and-ethical-artificial-intelligence-will-enable-jadc2.
79    Pub. L. 116-283, William M. (Mac) Thornberry National Defense Authorization Act for Fiscal Year 2021, 134 Stat. 3388 (2021), https://www.congress.gov/116/plaws/publ283/PLAW-116publ283.pdf.
80    “Summary Analysis of Responses to the NIST Artificial Intelligence Risk Management Framework (AI RMF)—Request for Information (RFI),” National Institute of Standards and Technology, October 15, 2021, https://www.nist.gov/system/files/documents/2021/10/15/AI%20RMF_RFI%20Summary%20Report.pdf.
81    Michele A. Flournoy, Avril Haines, and Gabrielle Chefitz, “Building Trust through Testing: Adapting DOD’s Test & Evaluation, Validation & Verification (TEVV) Enterprise for Machine Learning Systems, including Deep Learning Systems,” WestExec, October 2020, 3–4, https://cset.georgetown.edu/wp-content/uploads/Building-Trust-Through-Testing.pdf.
82    Flournoy, Haines, and Chefitz, “Building Trust through Testing,” 3.
83    “Final Report,” 384.
84    “Air Force Acquisition Executive Unveils Next E-Plane, Publishes Digital Engineering Guidebook,” US Department of the Air Force, January 19, 2021, https://www.af.mil/News/Article-Display/Article/2476500/air-force-acquisition-executive-unveils-next-e-plane-publishes-digital-engineer.
85    Zoe Stanley-Lockman, “Responsible and Ethical Military AI: Allies and Allied Perspectives,” Center for Security and Emerging Technology, Georgetown University, August 2021, https://cset.georgetown.edu/wp-content/uploads/CSET-Responsible-and-Ethical-Military-AI.pdf.
86    Authors’ interview with a former US military general.
87    Zoe Stanley-Lockman, “Military AI Cooperation Toolbox: Modernizing Defense Science and Technology Partnerships for the Digital Age,” Center for Security and Emerging Technology, Georgetown University, August 2021, https://cset.georgetown.edu/publication/military-ai-cooperation-toolbox/.
88    “Final Report,” 65.
89    Ibid., 305.
90    Authors’ interview with a defense technology executive.

The post Eye to eye in AI: Developing artificial intelligence for national security and defense appeared first on Atlantic Council.

The 5×5—Reflections on trusting trust: Securing software supply chains
https://www.atlanticcouncil.org/content-series/the-5x5/the-55reflections-on-trusting-trust-securing-software-supply-chains/
Thu, 12 May 2022

Five experts discuss the implications of insecure software supply chains and realistic paths to securing them.

This article is part of The 5×5, a monthly series by the Cyber Statecraft Initiative, in which five featured experts answer five questions on a common theme, trend, or current event in the world of cyber. Interested in the 5×5 and want to see a particular topic, event, or question covered? Contact Simon Handler with the Cyber Statecraft Initiative at SHandler@atlanticcouncil.org.

Nearly every bit of technology and infrastructure that enables modern society to function runs on software. Lines of code—in some cases millions of them—underpin systems ranging from smart electronic kettles to fifth-generation fighter jets. A significant portion of software is cobbled together with and dependent on many pieces of open-source, non-proprietary code that, by and large, is built and maintained by volunteer software engineers for whom security may not be a top priority. As such, vulnerabilities abound in widely used systems, and securing the entirety of the increasingly complex software supply chain is no easy feat. 
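The scale of that dependence is easy to underestimate: one direct import can quietly pull in a long tail of transitive dependencies. As a rough illustration, the short Python sketch below walks a toy dependency graph; the package names are invented, and real tools would resolve these edges from package manifests and lockfiles rather than a hard-coded dictionary.

```python
# Toy dependency graph, illustrative only: the package names are invented,
# and a real tool would resolve these edges from manifests and lockfiles.
DEPENDENCIES = {
    "webapp": ["http-client", "templating"],
    "http-client": ["url-parser", "tls-lib"],
    "templating": ["sandbox"],
    "url-parser": [],
    "tls-lib": ["crypto-core"],
    "sandbox": [],
    "crypto-core": [],
}

def transitive_dependencies(package, graph):
    """Breadth-first walk: everything the root package ultimately pulls in."""
    seen, queue = set(), [package]
    while queue:
        current = queue.pop(0)
        for dep in graph.get(current, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return sorted(seen)

# Two declared dependencies become six packages to vet.
print(transitive_dependencies("webapp", DEPENDENCIES))
```

Even in this toy graph, trusting one application means trusting six packages; real dependency trees routinely run to hundreds, each one maintained on its own schedule and security posture.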

Notable examples like the Sunburst/SolarWinds cyber-espionage campaign shed light on how adversaries are increasingly exploiting vulnerabilities in the software supply chain to compromise critically important public- and private-sector systems, and yet software supply chain security remains an underdeveloped aspect of public policy with meaningful gains starting to be made only more recently. On May 12, 2021, US President Joe Biden signed Executive Order (EO) 14028 on Improving the Nation’s Cybersecurity, a portion of which addressed the need to enhance software supply chain security. On May 5, 2022, in accordance with the EO, the National Institute of Standards and Technology (NIST) released its updated Cybersecurity Guidance for Supply Chain Risk Management.  

Between 2010 and 2021, according to the Atlantic Council’s Breaking Trust dataset, at least forty-two attacks or vulnerability disclosures involved open-source projects and repositories. Public policy still has significant room to address how both government and software developers improve the security of software supply chains, especially the wider health of the open-source ecosystem. 

We brought together five experts with a range of perspectives to discuss the implications of insecure software supply chains and realistic paths to securing them. 

#1 How does the security of software supply chains impact national security? 

Jennifer Fernick SVP & global head of research, NCC Group; member of the Governing Board, Open Source Security Foundation (OpenSSF)

“Software is the core infrastructure of almost every aspect of contemporary life, and a security vulnerability in any aspect of a system or network that touches government, intelligence, defense operations, financial transaction networks, energy companies, telecommunications, food and pharmaceutical supply chains, or other public- or private-sector critical infrastructure can have devastating consequences in the physical world. The now-popular notion of securing the ‘software supply chain’ ultimately reflects a growing apprehension that the security risks of computing systems are many layers deeper, more invisible, and more interdependent and impactful than they initially seemed.”

Amélie Koran, senior fellow, Cyber Statecraft Initiative; director of external technology partnerships, Electronic Arts, Inc.

“Security does not always mean reliability. It is an outcome in most cases, as resiliency is, but for most, the typical CIA triad of confidentiality, integrity, and availability drives assessments of basic system and software security. While we can develop a very secure national security system—from defense to utility and other infrastructure—the additional security components sometimes hurt reliability or resiliency, as such systems must perform extra steps or forgo the typical convenience features of non-critical systems, features that for national security systems may simply be bad security choices.

In this case, the selection of components for ensuring a secure system, including their construction, building, and testing, as well as the operation of those utilized for national security purposes, often becomes very opaque and even harder to manage. For many years, those building these systems eschewed open-source software for fear that unexpected or unpredictable inclusions in the code base would make systems less secure. Over time, however, they have realized that, in most cases, the transparency of such software sources results in more rapid fixes, which closes the windows of opportunity for potential attackers versus closed-source or even more bespoke systems that do not come under as much scrutiny. In other words, some of the most trusted national security solutions may actually be the most insecure, lacking iterative testing and rigor applied by others. A test is only as good as the test developer, and once you cut those creators and testers out, fewer issues tend to get caught and resolved.”

John Speed Meyers, security data scientist, Chainguard

“It is not too much of a stretch to say that the functioning of most digital systems—including those of western militaries, governments, and societies—have become deeply reliant on a hard-to-understand and hard-to-secure software supply chain. It is like we built a digital Manhattan on a foundation of quicksand and swamps.” 

Wendy Nather, senior fellow, Cyber Statecraft Initiative; head of advisory CISOs, Cisco

“No matter how they are compromised or by whom, software supply chains have an outsized network effect on the security and stability of everything from utilities to emergency response, transportation, healthcare, aviation, and public safety. As everything digitally transforms, the attack surface grows in subtle, remotely accessible ways, and it potentially affects even those populations without access to technology.” 

Stewart Scott, assistant director, Cyber Statecraft Initiative

“Software is eating the world, or so I am told, so securing any system or application relevant to national security is critical—everything from computer systems on fighter jets to government-adjacent email accounts. Securing one’s own systems is challenging enough, but modern software services are a patchwork of in-house programs, purchased products, imported libraries, cloud applications, open-source components, and even copy-and-pasted code. Security for software supply chains extends the challenge to using others’ code securely, identifying what products are developed and maintained securely, and even figuring out what dependencies exist. It is incredibly complicated and difficult for security everywhere, and national security especially, as incidents like the Sunburst campaign and log4j illustrate.” 

#2 What are the challenges to building more secure software supply chains? Do developers, intermediaries, and consumers know what they need to do? 

Fernick: “Vulnerabilities are cheap to create but expensive to find—even the best programmers in the world regularly write code with security vulnerabilities, and even the best security tools on earth will fail to find all of these vulnerabilities without the time-intensive intervention of human experts. Yet, even theoretically perfectly secure code would more likely than not depend on another piece of software that is full of exploitable vulnerabilities, will run on an insecure operating system on top of flawed hardware, and be deployed through a build pipeline that can be compromised by attackers. Security is very hard, and yet I feel like as an industry we push too much of this responsibility downstream to other developers and users who are not equipped to face it. Instead, we need to ‘shift left’ and assume that vulnerabilities are present, but find ways of reducing them at scale or detecting and remediating them early, through improved programming languages and frameworks, scalable vulnerability-finding tools, and other systematic investments in improving the ecosystem as a whole.” 

Koran: “Complexity is by far one of the greatest challenges to securing software and digital supply chains. Most tools and systems available to organizations provide a patchwork of awareness of the overall risk, and they require a high level of competence in the minutiae of the software or system in order to generate a plan of action beyond a basic “update this code with something deemed more secure.” One of the major issues in all of this is also consumer-based: in most cases, it takes extra effort to trace back the health of the code or source components that may be used to build systems. These may be stitched together, but once combined they could increase the insecurity or risk of operations in certain configurations.  

Put it this way: while a software bill of materials (SBOM) may tell you what is in the box, it does not tell you whether the ingredients are good for you. It is only one portion of a chain of decision-support processes necessary to build safer and more secure software supply chains. Imagine standing in the cereal aisle at a grocery store: you can compare the ingredients of a toasted whole wheat cereal and “sugar bombs,” but what appetite are you satisfying by chasing one over the other? Technically, the base grain components within them may be very close to one another, but if one contains more of a bad item (e.g., preservatives or sugar), then while both solve the same task of giving you a breakfast, the satisfaction on consumption may differ. The same goes for software development. Pick a quick and dirty “all-in-one” suite that solves problems quickly but may be opaque as to what is in it and how it was built, or make an artisan selection of bespoke code—choosing the turn-key opportunity of the former over the latter leaves you a lot of potential work ahead.” 
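Koran's cereal-aisle point can be made concrete in a few lines of Python. In the sketch below, the CycloneDX-style SBOM fragment and the advisory table are invented for illustration: the SBOM only lists what is in the box, and judging whether the ingredients are "good for you" requires joining it against outside data such as a vulnerability feed.

```python
import json

# A tiny, invented CycloneDX-style SBOM fragment: it lists what is "in the box".
SBOM = json.loads("""
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "jackson-databind", "version": "2.13.4"}
  ]
}
""")

# Hypothetical advisory table; in practice this would come from a vulnerability feed.
ADVISORIES = {("log4j-core", "2.14.1"): "CVE-2021-44228"}

def flag_components(sbom, advisories):
    """Cross-reference SBOM components against known advisories."""
    flagged = []
    for comp in sbom.get("components", []):
        key = (comp["name"], comp["version"])
        if key in advisories:
            flagged.append((comp["name"], comp["version"], advisories[key]))
    return flagged

print(flag_components(SBOM, ADVISORIES))
```

The SBOM alone said nothing about risk; the advisory join did, and real decision support layers many such joins (licensing, maintenance health, provenance) on top of the ingredient list.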

Meyers: “There are many. If you are reading this article on a computer of some sort, ask yourself why you trust the chain of software that allows you to read this article. If you have no answer, try to find a little solace in the knowledge that most experts do not have a great answer either. And nobody really knows what they need to do, although software supply chain integrity frameworks like SLSA (Supply chain Levels for Software Artifacts, pronounced “salsa”) are a good start.” 
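One small, concrete piece of what frameworks like SLSA formalize is artifact integrity: consumers should be able to check that what they received is exactly what the build produced. A toy sketch (not SLSA tooling itself) of pinning and verifying a SHA-256 digest:

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact's SHA-256 digest matches the pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# In a real pipeline the pinned digest would come from signed provenance metadata,
# not be computed from the artifact itself as it is here for demonstration.
artifact = b"example release tarball contents"
pinned = hashlib.sha256(artifact).hexdigest()

assert verify_artifact(artifact, pinned)
assert not verify_artifact(artifact + b" tampered", pinned)
```

SLSA layers provenance (who built it, from what source, on what system) on top of this basic check; the digest alone answers only "is it unaltered," not "should I trust it."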

Nather: “Software is organic and dynamic; it changes faster than humans can follow, as it is the result of human contributions on a worldwide scale. Software challenges cannot be compared directly to a hardware supply chain or a manufacturing line because of these additional complexities. Only with specific expertise can developers and intermediaries know what they need to do, and many developers do not come from the traditional educational pipelines any more. The answer is NOT to rely on consumers to have this expertise to make their own market-driven choices; it is to ensure that security is baked into the software standards, protocols, tools, and automation at scale.” 

Scott: “Writ large, the challenge is identifying and managing an incredible number of rapidly changing relationships that range everywhere from import lines to massive government acquisition contracts. There is a huge range of understanding about and capability to address software supply chain security. If developers, maintainers, vendors, CIOs, and consumers aren’t all on the same page about how to improve supply chain security—let alone provided with sufficient resources to get things moving—there’s going to be progress in some places and awkward lapses elsewhere. For example, GitHub is moving towards universal multi-factor authentication, which will help secure massive amounts of open-source components, but many entities will not even know the degree to which they are relying on and/or contributing to that code, especially at higher organizational levels.” 

#3 How would you describe the state of public-private sector collaboration on securing software supply chains? Compared to where it could be? 

Fernick: “I am optimistic and encouraged by the proactive and collaborative engagement between senior US government officials and the Open-Source Security Foundation (OpenSSF), a cross-industry effort to improve the security of the open-source ecosystem, which I helped to establish with colleagues across the industry in early 2020. The January 2022 White House meeting on Software Security brought together a powerful alliance of public- and private-sector organizations, including OpenSSF, to discuss initiatives to: (1) prevent open-source software security vulnerabilities, (2) improve coordinated vulnerability disclosure, and (3) reduce vulnerability remediation times. In mid-May, we will be returning to Washington, DC with a bold mobilization plan for exactly that: several carefully defined initiatives that will together help radically improve the security of the world’s most critical open-source software.” 

Koran: “As much as somebody would want to rehash “I’m from the government, I’m here to help” as an opening line, developers and other technicians tend not to like bureaucracy meddling in their creations and orbits, especially if it means more under-resourced work for them to perform. Compliance is not the best carrot for achieving results either, but a hybrid model of standards, much like NIST’s Special Publication series, which both public and private sectors use to reach a reasonable level of assurance, could be accepted as a method to align. A similar Commerce Department/NIST-driven guidance could account for scale and complexity as well as operating methods, for example by making recommendations that support DevOps and can be better integrated. Input for this guidance should incorporate not only broad industry input and support but also address the needs of small- and medium-sized businesses as well as enterprises. Most regulations and guidance are selectively adopted because of the lack of timely, affordable, and manageable actions to comply. In short, “keep it simple, stupid” (KISS) should be the name of the game to achieve better-than-average uptake, and all of this should also be iterated upon.” 

Meyers: “Nascent, especially when it comes to the security of the open-source software supply chain. Anecdotally, when I worked at In-Q-Tel, a strategic investor for the US intelligence community, many intelligence community staff used to look at me cross-eyed when my colleagues and I would suggest that they should devote their time and resources to open-source software supply chain security. Log4j and the recent White House meeting on open-source software security suggest, however, that the times are changing.” 

Nather: “There have been excellent ad hoc responses to specific events, but we need to create a more repeatable process that includes not just the ‘biggest and loudest’ private sector companies, but also the ones below the Security Poverty Line, which are equally likely to be victimized by software supply chain attacks. For example, the Blackbaud ransomware incident at last count affected over a thousand organizations, many of them nonprofits who provide critical services. Beyond response, we need to create a way for every organization to understand its supply chain risk. SBOMs are a start in this direction, but it is an after-the-fact report at this point, not a demonstration of secure software development practices. Make no mistake: this is not a chain; it is a vast web in which we are all somewhere in the middle.” 

Scott: “There is building momentum in the federal government around software supply chains, and fora like the Cybersecurity and Infrastructure Security Agency’s (CISA) Joint Cyber Defense Collaborative are a good start at bringing industry to the table. Parts of the private sector, too, have recently started pouring a lot of resources into the issue, especially open source, but it is patchwork. Some companies are piling millions of dollars onto the issue, and some are not ready to take seriously how much it affects their security. It is also unfortunate that a lot of that momentum seems to be a response to recent incidents—I would love to see more proactive security collaboration capitalize on a well-intentioned reaction to compromise. In that vein, existing public-private partnerships should expand their scope to include industry consumers outside of the usual IT vendors and, regarding open source, the nonprofits, maintainers, repositories, and package managers responsible for a lot of the actual code in question. Continued formalization of those ventures would be great too—supply chain and open-source security need to be ongoing discussions among all stakeholders in the cyber policy world.” 


#4 How can the United States and European Union most effectively contribute to the security of open-source software?  

Fernick: “Coordination among major stakeholders is key, as open-source software is a vast, complex global ecosystem. Our success at securing the supply chain hinges upon having a singular place where representatives from government, the private sector, and open-source software projects alike come together to work on and make impact-prioritized, coordinated investments in things like security audits of critical software, improving vulnerability disclosure and remediation, and coordinating vendor-neutral emergency response teams to support open-source software maintainers in times of security crisis. Piecemeal initiatives and investments cannot comprehensively solve a lifecycle problem like securing the software supply chain—attackers will simply exploit the weakest remaining link.” 

Koran: “Coordinate. There are a number of international organizations that have attempted to bite off various pieces of the open-source, software, and digital product security pie. Organizations may be looking at anything from vulnerability disclosure treatments to secure coding practices, but many of them attempt to reinvent their own wheel for their areas of influence. Because of that, there is no single or coordinated voice or guidance as to what to do. The community, the private sector, and individuals either strike out on their own or pick and choose the pieces of guidance, frameworks, or regulations best suited to them, or, in the worst case, those that set the minimum bar for their work to comply. Although the question asks how to “contribute,” that does not always mean directly providing technical contributions like code, infrastructure, or other bits and bobs; it can mean doing what governments do best, which is to get the right people talking to one another to share information at various levels. That is where they need to start—get on the same page and the right heads sharing knowledge and experiences.” 

Meyers: “While I welcome transatlantic cooperation on open-source software security, each government probably needs to examine its own software supply chain security practices before there can be an open-source software Atlantic Charter.” 

Nather: “Ensure alignment in the standards, practices, regulations, etc. that are being generated. Like everything else in cyber, open-source software development is cross-border, and world-wide coordination is required to ensure security is effective and aligned with the motivations of the open-source project contributors and maintainers. Tracking the dependencies in open-source software and identifying those ‘linchpin’ components that cross a certain threshold of impact would be a good start, as would a coordinated effort to fill resource gaps for software projects that are under-resourced or abandoned.” 

Scott: “Governments can start by recognizing open source as infrastructure—it is everywhere, comprising 70 to 90 percent of many codebases, and it supports a huge part of the innovation and functionality in the digital economy. Ideally, that kind of framing will lead to regular, proactive investment from government alongside industry and help keep a transatlantic approach from getting bogged down by different approaches towards licensing and privacy. Open-source security is much more about how responsibly consumers are using (and tracking their use of) components, contributing to them, and supporting the ecosystem. An infrastructure approach should also help move away from the simple narrative that its maintainers are not getting paid enough—sometimes that is true, and sometimes maintainers have immense corporate support or even work for premiere IT companies. Usage, tooling, and self-knowledge are all hugely important and point to a much broader solution set than throwing money towards developers and hoping they can ‘fix everything.’” 

#5 What single proposal or idea to better secure software supply chains would you like to see Congress pass in the next year? 

Fernick: “In the legislation that it passes, I would like to see Congress work to incentivize companies to work collaboratively with good-faith security researchers who choose to responsibly report security vulnerabilities that they find in software products to the affected vendors, without fear of retribution or legal risk on behalf of the researcher. Many companies still lack the maturity to see good-faith security vulnerability reports for what they are—a free gift, and an actionable opportunity to improve their own products to help protect their customers, and the world, from threat actors.” 

Koran: “There needs to be focus. Most of the directives issued since the recent change in administration have merely addressed the federal government rather than the larger sphere. While that is an admirable focus, it is very top-down and narrowly scoped. If Congress is to engage, it needs to take a wider and more comprehensive action where, possibly driven by some federal stewardship, the onus really lives within the community—among developers, private organizations, and individuals. This also should not be a “thou shalt” type of direction, but a way to structure guidance, oversight, and engagement. It could either designate which government agency or commission has the lead for certain areas, or establish something new to subsume a number of critical roles. This is not just about addressing a technical compliance issue, but also about governance and sustainment, possibly by providing grant-making capabilities and interfaces to the private sector, academia, and international partners. It will also need to be funded. Good ideas without a budget are just good ideas, not actions that can be relied upon for an outcome.” 

Meyers: “The creation of an open-source software security center within the federal government, perhaps within the Department of Homeland Security, seems like a promising first step. This center could help assess and improve the security of the open-source software that the federal government and critical infrastructure rely upon, and even contribute to the security of the overall open-source software ecosystem.” 

Nather: “Securing the software supply chain goes beyond just creating secure software, as the Executive Order points out in calling for basic controls such as multi-factor authentication, monitoring and alerting to secure the development, distribution and production environments as well. The underlying problem is still assessing risk and impact, and prompting those conversations among suppliers and consumers. As this is all currently being piloted by the Biden administration, the next logical step would be for Congress to put it in a statutory framework to ensure effective oversight.” 

Scott: “I would love to see Congress stand up offices in government dedicated to open-source security and sustainability. An official office in CISA, even a small one, would provide a clear place for industry and maintainers to turn to and interface with, and the Critical Technology Security Centers would be great outputs to channel grantmaking. Having that formal infrastructure in place would make it much easier to involve developers and maintainers in open-source policy discussions in which they have not been included much yet. It would also help put to rest any lingering misconceptions that open source is inherently less secure than proprietary code or that open source is something that should—or even could—be avoided. Digital infrastructure everywhere depends in large part on open source, so the challenge is not securing or fixing that community or ecosystem—it is figuring out proactive, leveraged investments that government and industry can regularly make in its security.” 

Simon Handler is a fellow at the Atlantic Council’s Cyber Statecraft Initiative within the Scowcroft Center for Strategy and Security. He is also the editor-in-chief of The 5×5, a series on trends and themes in cyber policy. Follow him on Twitter @SimonPHandler.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post The 5×5—Reflections on trusting trust: Securing software supply chains appeared first on Atlantic Council.

Assumptions and hypotheticals: First edition https://www.atlanticcouncil.org/commentary/article/cyber-strategy-assumptions-and-hypotheticals/ Mon, 09 May 2022 13:54:51 +0000 The first edition of "Assumptions and Hypotheticals" considers several ongoing debates, including the escalatory potential of cyber operations, the measure of deniability created through the use of proxies, and the offense-defense balance in cyber engagements. 


When academics, policymakers, and practitioners discuss security and conflict within the cyber domain, they are often hampered by a series of ongoing debates and unarticulated assumptions, some more commonly agreed upon than others, which they must nevertheless grapple with to better understand the domain.  

We have brought together members of these communities to discuss why these debates matter for shaping cybersecurity and strategic plans, as well as how their outcomes might influence the ways that public- and private-sector actors, informed by one side of a debate or another, affect the domain, their adversaries, and their own goals. 

The first edition of this series considers several ongoing debates, including the escalatory potential of cyber operations, the measure of deniability created through the use of proxies, and the offense-defense balance in cyber engagements. 

Assumption #1: Cyber operations are not escalatory, and are even de-escalatory

Assumption #2: States can effectively rely on ‘cyber proxies’ to create deniability

Assumption #3: No one has an overview of US R/D on cyber

Cyber Strategy Series

The Cyber Strategy Series presents new perspectives on the most pressing topics in cybersecurity strategy. This series is intended to challenge assumptions and spark productive debate, to contribute to a better understanding of how the United States and its allies and partners can and should operate in the cyber domain.

Assumption: Cyber operations are not escalatory, and are even de-escalatory.

Why is this discussion important?

Kenton Thibaut

Cyber operations (like operations in any other domain) are not inherently escalatory or de-escalatory. Escalation/de-escalation only exists in the context of an engagement with an adversary/competitor. Owing to the ability of cyber operations to create reversible damage/effects, they present a broader range of options at the lower end of the spectrum of conflict. This can provide nonescalatory or de-escalatory options, even face-saving measures, but it can also mean that the United States is always in a higher level of conflict with, say, the People’s Republic of China, than would otherwise be the case. Ambiguity about thresholds or even credibility when it comes to defining thresholds to US adversaries might also encourage escalatory behavior. The range of views on what constitutes proportionality (especially when trying to weigh cyber actions against actions in other domains or hybrid activities) makes it difficult to accurately understand other actors’ logic and to communicate Washington’s. This can lead to complicated escalatory dynamics that are not yet well understood. A nuanced understanding of these dynamics is hampered by generalizations such as this assumption. 

Bulelani Jili

The debate over cyber operations’ escalatory potential is important because policymakers need to understand under what conditions cyber campaigns and/or operations bolster or undermine national security. Treating the statement above as a generalizable assumption for the basis of policy across the full spectrum of strategic competition—day-to-day competition short of militarized crisis and armed conflict, militarized crisis, and armed conflict—risks unintended escalation, either accidental or inadvertent, or military failure.  

For example, the empirical record suggests that this assumption is valid in day-to-day competition (operations that tend not to generate armed-attack equivalent effects). Under this condition, cyber campaigns can inhibit opponent gains while advancing US interests, set the conditions for deterrence success should a crisis emerge, and set conditions for military victory should armed conflict erupt—all without risking escalation out of competition to conflict.  

On the other hand, there is no empirical evidence to suggest the assumption is valid during the qualitatively different condition of militarized crises. Moreover, there is a sound, deductive argument based on crisis-decision-making theory that the core features of crises and the character of cyber operations interact to increase the risk of unintended escalation. Thus, accepting this assumption for policy under a condition of crisis could lead to a disastrous outcome.  

Although there are few cases of cyber operations being used in armed conflict, events in the Russia-Ukraine conflict challenge the validity of the assumption that cyber operations in war are being perceived by either belligerent as signaling a desire to de-escalate. Were Ukraine to relax its pressure on Russian forces as a de-escalatory gesture after being subject to Russian distributed denial-of-service or wiperware attacks, Kyiv’s move would most likely further encourage Russian aggression. 

Justin Sherman

The main reason this debate is important is that it also comes with a secondary assumption that some potential cyber scenario will cross a so far imaginary redline that might lead to land warfare between great powers. Without considering the intent of the operator, the assumption can only focus on the impact of a given cyber operation . . . but we know that the only difference between many reconnaissance missions and offensive payloads is often the intent of the operator or operators and their own affiliations. The escalation conversation for cyber operations is better met with debates related to defining the rules of engagement in cyberspace and the military practice of effects-based, rather than means-based, analysis and response. 

Joshua Rovner

The escalatory nature of cyber operations remains a matter of debate. In fact, the prevailing assumption among practitioners and academics has been that cyber operations are dangerously escalatory, rather than the opposite. It is only recently that the consensus has begun to shift in the opposite direction, largely driven by academic research—through war games, statistical analysis, surveys, and case studies—that has found little empirical support for the contention that cyber operations cause escalation. Instead, there is emerging evidence that such operations could facilitate the de-escalation of crises because they lack the physical violence associated with kinetic military capabilities, and their ambiguity and plausible deniability can create breathing room for crises to resolve short of war. This is a critical debate because it has direct implications for the stability of cyber rivalries and the international system as a whole. 

If purely cyber operations launched by Russia against Ukraine affect systems within neighboring NATO countries and those governments deem them to be proportional to an armed attack, then . . .

Kenton Thibaut

Does the US deem such attacks to be proportional to an armed attack (what was the effect? Were those NATO countries intended targets, and does it matter? Etc.)? If so, should the US encourage an Article 5 invocation—arguably, one of the worst strategic outcomes would be an Article 5 invocation that does not get unanimous support from NATO members—or should it instead aim for a non-NATO, bi-/multilateral response? What kind of precedent does that set? How can the US help build consensus in the rest of NATO on both how to characterize and how to respond to the attack, and what are the starting positions of other members?  

If the US does not deem the attacks to be proportional to an armed attack, what other actions could the US take to assure the affected nations and deter Russia from continuing this behavior? To the extent that it might take longer to reach consensus on key issues within NATO than it would in a non-cyber attack scenario, what do we do in that critical window between the attack and any kind of response? How can NATO members’ cyber defensive posture be improved and what immediate defensive actions would need to take place? What if the affected nations are not satisfied by NATO/US actions, or consensus is too difficult, and they act unilaterally or in an alliance that excludes the US and NATO? In the event of an allied (either NATO or bi-lateral/other) response, would sensitivities about sharing offensive cyber capabilities make retaliation in other domains more attractive? If the response is disjointed or weak, does an emboldened Russia escalate its cyber attacks against eastern flank and other NATO countries? How else might Russia exploit or capitalize on new divisions within the alliance? What does the PRC (and others) learn from this? 

Bulelani Jili

If those governments attribute the operations to Russia, the states have the right to invoke Article 5 and then to decide based on their particular national circumstances how to respond. Whether or not they would invoke, however, would likely be informed by conditionality (day-to-day competition, militarized crises, or armed conflict). Were the affected NATO member states not, at that time, in a militarized crisis with Russia (presuming they are not in an armed conflict), the states may be inclined to consider the operations an accident, albeit a costly one. Some have argued that NotPetya, due to its substantial economic damage, could be considered as causing armed-attack equivalent effects. The NATO response to NotPetya could, then, be instructive. Although the NATO Cooperative Cyber Defence Centre of Excellence stated that a state actor was most likely behind NotPetya, it did not attribute the operation to Russia, and, instead, called for an international investigation. On the other hand, under a condition of militarized crisis with Russia, if the operation were attributed to Russia, NATO would be more inclined to invoke Article 5. 

Justin Sherman

Such attacks would probably warrant either a cyber in-kind retaliation, increased sanctions, broader law-enforcement activity and cooperation, or a justifiable military response, depending on the impact they have (especially in the physical world) and the population they impact. 

Joshua Rovner

NATO will face an important test. However, it’s important to note that the plausibility of NATO countries defining a spillover cyber attack as proportional to an armed attack is low—this is a pretty high bar. That said, this scenario would raise a critical issue for NATO: the applicability in practice of Article 5 (collective defense) to cyber attacks. Since the 2014 Wales Summit, NATO has stated that Article 5 applies to cyberspace. The alliance has reaffirmed this at subsequent summits, most recently in Brussels last year. However, leaders have hedged when it comes to clarifying what type of cyber attack would actually trigger Article 5. During a press conference in February of this year about the Ukraine conflict, NATO Secretary General Jens Stoltenberg stated, “We have never gone into the position where we give a potential adversary the privilege of defining exactly when we trigger Article 5.” Therefore, this hypothetical case would be a significant test of the credibility of Article 5—in cyberspace and beyond. The immediate outcome would be deliberations within the North Atlantic Council about if and how to respond. Any NATO response would require consensus, potentially creating the conditions for allied unity to be undermined if allies fail to agree. Sustained and public disagreements could have negative implications for the credibility of NATO deterrence and collective defense more broadly. 


Assumption: States can effectively rely on ‘cyber proxies’ to create deniability

Why is this discussion important?

Louise Marie Hurel

This assumption is important because, if true, aggressor states will conclude they can pursue their interests through illegal or unacceptable acts without facing meaningful consequences. If an aggressor state successfully conceals its participation in an activity through the use of a proxy, it is unlikely that the state can be held accountable for its actions. Under international law, a victim state cannot respond with force in self-defense or by use of countermeasures against a state that is not responsible for a proxy’s actions. If a victim has intelligence or other evidence that an aggressor has direction or control over a proxy, and asserts so publicly, the aggressor may challenge the victim to produce the information on which it relies. This might force the victim state to reveal information that could make future operations ineffective, compromise sources that produced the information, or risk leaving its accusation unsubstantiated. It appears that states might have greater success hiding their connections to proxies conducting activities in cyberspace than in other domains, making this form of competition desirable.

Joshua Rovner

It is often assumed that plausible deniability is one of the main reasons states outsource to proxies, yet the relationship between proxies and plausible deniability is anything but straightforward. Cyberspace is already secretive and deterrence is questionable. Attacks often go undiscovered for extended periods, and bringing foreign perpetrators to justice is difficult. So what is the value-add of extra deniability? Researchers have pointed out that states don’t always bother to deny their involvement in cyber operations, and that the logic of plausible deniability is questionable even in physical domains. For scholars and practitioners, questioning the motives of cyber adversaries is important because altering their behavior depends on understanding why they do what they do. 

Melissa Griffith

Whether or not states can effectively rely on cyber proxies has major implications for escalation dynamics, especially in times of crisis. If cyber proxies can create deniability for states, states are afforded a range of options while reducing their own exposure to any potential retaliation. States that can plausibly deny their proxies’ actions might act more aggressively through those proxies without fear of the consequences. States could use their cyber proxies to conduct attacks, as well as to collect intelligence and steal money on their behalf. Without this deniability, defenders can treat all hostile acts—whether they emanate from the adversary state or its proxy—as equally state-backed.

If the ‘deniability’ provided through the use of cyber proxies is not sufficiently countered by the attribution capability of states and private companies, then . . .

Louise Marie Hurel

Cyberspace will become a more lawless space where actors, particularly highly capable state actors, can act without fear of meaningful consequences. Power alone will determine the rules for acceptable cyberspace behavior rather than deliberation and consensus. The rule of law will apply to malicious state cyber actors in few circumstances, and states seeking to use capabilities to advance their interests will face few restraints. Without the ability to attribute discrete malicious cyberspace activities, the only meaningful limit on an individual actor will be the fear that eventually the sum total of one’s malicious acts may be discovered, connected, and attributed. This fear will most likely be remote compared to the rewards of continued malicious activity, especially if the actor is directed, protected, sponsored, or at least ignored by the state from which the actor operates. If directed by the state, it is likely that the proxy’s protection from domestic consequences will be comprehensive; the only meaningful threats an individual actor serving as a state proxy may face are denial of international travel and sanctions. With proxies obfuscating the role of states in their activities, states can pursue national agendas freely.

Joshua Rovner

Reliance on cyber proxies would be widespread—and this is the conventional wisdom for why it already is. Nevertheless, deniability may no longer be the main reason states outsource to proxies. After all, when discussing states that rely on proxies, researchers tend to point to the usual suspects. If the United States already has a good idea of which country is behind those proxies, how deniable are they really?

In an article recently accepted by the Journal of Cybersecurity, I argue that targets are increasingly willing to go public, regardless of whether proxies or one’s own agents are used. One consequence is that state sponsors are using in-house personnel to conduct attacks more than they used to. The reasons for this should be obvious. Outsourcing to proxies that do not convey plausible deniability means sponsors get the worst of both worlds: proxies can be difficult to control, might have their own agenda, and do not offer any additional political cover vis-à-vis a target who is not fooled about who really stands to gain. Capable sponsors who find that they will take the heat either way learn that they may as well do it themselves. Of course, states do sometimes outsource to proxies, as others have described. But if using proxies is still beneficial, it is probably not because they offer much plausible deniability. Targets are ultimately the ones who get to decide what is plausible, and not only has attribution gotten better, but—because cyber conflict is not a courtroom—suspicion is evidence enough.

Melissa Griffith

It is not difficult to imagine a scenario in which even a small degree of deniability can create real dilemmas for defenders. Take the current war in Ukraine. NATO Secretary General Jens Stoltenberg asserted that spillover from the conflict, including a serious Russian cyberattack against a NATO member state, would trigger the Alliance’s Article 5 collective-defense measures. But what if such an attack was conducted by a nonstate proxy at the behest of the Kremlin? Even with a little plausible deniability, Russia might employ proxy operations as a means of sowing divides among NATO member states on what necessitates collective defense. In addition to technical attribution, the need for accurate intelligence on the nature of proxy relationships and their chains of command is critical to countering this threat. 

Assumption: The offense has the advantage over defense 

Why is this discussion important?

June Lee

Offensive advantage and defensive advantage are the two extreme poles that define a dynamic competitive space. The US position at any given moment in that space depends on the actors, technologies, organizational/ecosystem posture (the nature of public-private partnerships or how stakeholders cooperate, for example), and goals, among other things. If the goal is intelligence collection (where encryption technology is one driver), the United States might be in a different place on that spectrum than if the goal is irreversible destructive effects, or influence operations, etc. The nature of the goal in question will drive what technological competitions, organizational structures, systems, etc. are most relevant in defining where the United States is in the competitive space. Conversely, the state of the technological competition, systems, etc. will shape what goals are possible.

Joshua Rovner

The debate is valuable, although it is not important for cyberspace. It is valuable because the concept of offense-defense balance has informed a number of policy debates regarding the nuclear and conventional strategic contexts, including but not limited to arms races, preemptive attack, and expectations of war duration. Additionally, history has shown that pursuing policy not aligned with the strategic environment can be catastrophic. It seemed reasonable, then, to apply the concept to the cyber environment to possibly discover useful policy insights. Those who have attempted to do so, however, have found the concept wanting, with some having to dive into state-level attributes (or even deeper) in order to suggest any prescriptive value. But offense-defense theory is a structural theory of international relations where core features of the strategic environment are argued to be determinative. For example, the nuclear strategic environment—where nuclear weapons capabilities ensure that the offense wins every time—is an offense-dominant environment. In the conventional strategic environment, offense-defense advantage is determined by the combination of technology, operations, and tactics. Neither of these frames apply to the cyber strategic environment, which comprises a set of technologies that are macro-resilient and yet micro-vulnerable, where defense is possible but always at risk. Consequently, the debate does not account for the primary mechanism for achieving advantage in cyberspace—initiative persistence—which requires a persistent, fluid operational approach for precluding or inhibiting opponents’ gains by exploiting adversary vulnerabilities and reducing the potential for exploitation of one’s own.

Melissa Griffith

This is less of an assumption and more of a modus operandi. Criminals often work to outsmart or subvert rules and norms. Transnational criminals have long conducted operations in a way that implies an understanding that the scale of those operations could overwhelm response capabilities. Customs officials cannot search every shipping container at a port without significantly delaying deliveries and harming trade and economies. As countries continue to battle smuggling at land borders, cigarettes smuggled to evade taxes remain among the most widely trafficked illicit products bought and sold in the United States. There are no real parameters for what counts as a “felony” or “violent” crime in cyberspace as compared to petty theft or misdemeanors. If cyber operations are viewed as a monolith, offense has the advantage. But when cyber is viewed in terms of risk tolerance versus risk mitigation, the picture is more varied: nuclear weapons are very secure, for very good reasons, and banking and finance do a great job of staying ahead of evolving tactics, techniques, and procedures (TTPs), thwarting widespread attacks and cascading impacts. The offense/defense divide depends very much on what is being defended and by whom.

Erica Lonergan

The assumption that offense has the advantage over defense is deeply linked to debates about whether cyberspace is truly dangerous and escalatory. In the traditional security-studies literature, when offense has the advantage, arms races and spirals are likely, conquest is perceived to be easy, and states see an incentive to strike first. Political scientist Robert Jervis measures offensive advantage as follows: “If the state has one dollar to spend on increasing its security, should it be put into offensive or defensive forces? Second, with a given inventory of forces, is it better to attack or to defend?” Therefore, an essential element of measuring the offense-defense balance is relative cost; it is not simply whether an attacker can get through, but at what cost. In conventional warfare, defense typically has the advantage over offense, measured by factors such as force-to-force or force-to-space ratios. Extending this logic to cyberspace, many experts argue that the attacker has a significant advantage over the defender. However, others are more skeptical, noting the investments in time, skill, and resources that are required for offensive cyber operations—particularly against strategic targets—and their unpredictable and limited results.

If, when considering the offense-defense balance, defense has the strategic advantage, then …

June Lee

Actors are incentivized to develop disruptive technologies and TTPs to shift the balance toward offense; breakthroughs in offensive technologies/TTPs/operational concepts may be particularly surprising; actors are incentivized to develop novel ways to achieve their goals, using cyber tools as part of a hybrid approach; increased difficulty of using the cyber domain for intelligence collection increases the risk of operational and strategic surprise in other domains.  

Here, it would also be important to explore: what does this defensive strategic advantage look like? What is the source of the advantage? For example, if intrusions are easy to mitigate once detected, that incentivizes the development of tools and operational concepts that focus on rapid actions on target to achieve effects prior to detection. On the other hand, if penetration itself is almost impossibly difficult but once achieved mitigation is not easy, that might drive very different behavior and goals. Those examples are oversimplifications, just intended to illustrate the point. 

Joshua Rovner

Given the balance of incentives (ease of use vs. security) and technology production trends, it is difficult to imagine a future cyber strategic environment in which the technology favors the defense. Even many who look to, for example, the promise of artificial intelligence/machine learning (AI/ML) to someday give defense the upper hand in cyberspace admit that AI/ML algorithms are as likely to make the offense more capable. Leaning on the offense-defense frame to inform future policy (including technology investments) regarding cybersecurity therefore constrains the solution space. Policy solutions supporting initiative persistence are, for the foreseeable future, the most promising route to security.

Melissa Griffith

It will take the best and the brightest talent to defend the systems and information most critical to a mission, a nation, a company, etc. Defense in cyber is more than a strategy or an activity in which monotonous practice and lots of spending yield results; it is a daily evolution of analyzing and responding to changing TTPs, and a tradecraft that is practiced and perfected continuously over time.

Erica Lonergan

Cyberspace is not a dangerous domain: escalatory spirals are less likely and, when they occur, less severe. States get more out of investing in capabilities that support defense and resilience than they do out of investing in offensive capabilities that are expensive and often net less-than-desirable results. What is fascinating about this debate, however, is that many of the same experts who claim that cyberspace is escalatory and offense-dominant also argue that states should invest in defense—when the reverse should be true, according to the logic of offense-defense theory. If the attacker has the advantage in cyberspace, then from a purely strategic perspective states should be leaning into offensive strategies.

The Atlantic Council’s Cyber Statecraft Initiative, under the Digital Forensic Research Lab (DFRLab), works at the nexus of geopolitics and cybersecurity to craft strategies to help shape the conduct of statecraft and to better inform and secure users of technology.

The post Assumptions and hypotheticals: First edition appeared first on Atlantic Council.

Putin’s Ukraine War: Desperate Belarus dictator strikes back
https://www.atlanticcouncil.org/blogs/belarusalert/putins-ukraine-war-desperate-belarus-dictator-strikes-back/ (May 4, 2022)

Belarus dictator Alyaksandr Lukashenka is seeking to introduce the death penalty for anti-war activists who are sabotaging Russian troop movements in protest over Belarus's supporting role in Putin's Ukraine invasion.

The autocratic regime of Alyaksandr Lukashenka has apparently decided to take off the gloves when dealing with Belarusian citizens who have been acting to disrupt Russia’s invasion of Ukraine.

The Belarusian parliament is considering legislation that would impose the death penalty on those convicted of attempted acts of terrorism, including the sabotage of rail lines to disrupt the movement of Russian troops to Ukraine. If enacted, this would mark a dramatic expansion of Belarus’s existing death penalty law, which currently restricts capital punishment to a small number of serious offenses including acts of terrorism that result in loss of life, particularly brutal killings, and multiple murders.

Vladimir Andreychenko, the speaker of Belarus’s lower house of parliament, made clear whom the change in the death penalty statute was directed against. “Destructive forces are continuing terrorist extremist activity by trying to rock the situation in Belarus, provoking domestic instability and conflicts,” Andreychenko said. “Actions are being taken to disable railway equipment and tracks, objects of strategic importance. There can be no justification for the actions of terrorists.”

The measure passed Belarus’s lower house of parliament on April 27. It must still pass the upper house and be signed by Lukashenka before it becomes law.

The threat of capital punishment for railway saboteurs is not the only way the Lukashenka regime is seeking to deter anti-war activists. Ukrainian military intelligence says that internet connections in Belarus are being disabled in some regions in order to conceal the movement of Russian military equipment through the country, Ukrayinska Pravda reported. This appears designed to make it more difficult for Belarusian activists to monitor, expose and disrupt the movement of Russian troops and military hardware.

“The KGB of Belarus and the FSB of Russia are trying to limit the communication of patriotic citizens and prevent the dissemination of information on social networks about the movement of Russian military equipment across the territory of the republic,” the Main Intelligence Directorate of the Ukrainian Ministry of Defense announced on Telegram.

In one sense, the latest moves by the Lukashenka regime are a case of the dictator striking back against the surprising, and surprisingly effective, campaign by Belarusian citizens to disrupt Vladimir Putin’s ability to use Belarus as a platform for his war in Ukraine. In addition to railway workers sabotaging rail lines carrying Russian troops, activist groups like the Cyber Partisans have hacked the country’s transport system. Meanwhile, hundreds of Belarusian volunteers have joined brigades to fight on the Ukrainian side in the war.

Lukashenka’s recent steps are also an act of desperation. Public opinion polls consistently show that Belarusians are opposed to Russia’s war in Ukraine and staunchly object to Belarus facilitating or participating in the invasion. Lukashenka’s supporting role in Putin’s war has breathed new life into the Belarusian opposition, which had been largely muzzled by a brutal Kremlin-backed crackdown in the wake of massive street protests following the stolen Belarusian presidential election in August 2020.

The Belarusian opposition is clearly aware of the opportunities presented by domestic disquiet over Putin’s war. “The fate of Ukraine and the fate of Belarus are interconnected and we will stand with Ukrainians through these perilous times,” exiled opposition leader Sviatlana Tsikhanouskaya said in a speech at the Ottawa Conference on Defense and Security in March. “History will judge us, not for our words but for our deeds. History will show whether the world united in defense of freedom and democracy, or whether we allowed aggression to win because we acted too late or too little. Let’s write history today and stand together against darkness.”

The Russian invasion of Ukraine has shifted the tectonic plates of geopolitics. This is most immediately apparent in the former Soviet Union. The war is shifting and recalibrating the calculus of authoritarian regimes like Lukashenka’s as well as the activists who oppose them. And it is changing everybody’s estimation of Russia’s real power in the region.

Being the wily survivor that he is, Lukashenka certainly understands this. He’s been a master gamer for decades, but the game has suddenly changed. Having thrown in his lot with Putin since August 2020, Lukashenka is now stuck with him. His old tactic of pretending to cozy up to the West is no longer an option. If Putin fails in Ukraine, Lukashenka almost certainly understands that the game will also be up for him.

“Mr. Lukashenko is an accomplice to Mr. Putin’s war, and should end up in the dock with him as a war criminal,” The Washington Post opined in an editorial on May 2. With nowhere else to turn, Lukashenka is now reverting to his default setting of repressing his own people and tightly embracing his master in the Kremlin.

Brian Whitmore is a nonresident senior fellow at the Atlantic Council’s Eurasia Center, an Assistant Professor of Practice at the University of Texas at Arlington, and host of The Power Vertical Podcast.

The views expressed in UkraineAlert are solely those of the authors and do not necessarily reflect the views of the Atlantic Council, its staff, or its supporters.

The Eurasia Center’s mission is to enhance transatlantic cooperation in promoting stability, democratic values and prosperity in Eurasia, from Eastern Europe and Turkey in the West to the Caucasus, Russia and Central Asia in the East.

Buying down risk: Container security
https://www.atlanticcouncil.org/content-series/buying-down-risk/container-security/ (May 3, 2022)

Industry's move towards container architectures provides great promise for dynamic systems and service provision, but it also brings up new concerns and opportunities for the cybersecurity ecosystem.

Containers are a form of virtualization that provides distinct advantages over more traditional virtual machine (VM) architecture. Although every VM has its own operating system, containers do away with this added overhead, instead providing only the dependencies necessary for a specific application to run and sitting directly on a host server’s OS. The two designs are not mutually exclusive, as containers can be built on top of virtual machines too. Containers lend themselves to increased segregation of services, file systems, input/output streams, and more.

Containers have unique design and development requirements. Every instance of a container is created from a static image, which is stored in a registry. When containers fail or need rebuilding, they are reconstructed from the image. To update containers, developers first update an image and then push the new version to the registry. In environments running many containers at once, orchestrators coordinate the building and allocation of resources among containers, as well as container-to-container communication. Orchestrators typically sit between registries and the servers that host containers, converting images into containers according to workload needs and available resources while acting as the go-to coordinator for the system’s disparate parts. An additional layer of servers, called master nodes, might sit between the orchestrator and groups of host servers, facilitating communication among containers, host servers, and the orchestrator.

Based on diagram in NIST SP 800-190 (https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-190.pdf)
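
The image-to-registry-to-container lifecycle described above can be sketched in a few lines of Python. This is a toy model, not any real runtime's API; `Image`, `Registry`, `Container`, and `rebuild` are illustrative names. The property it demonstrates is the one that matters for security: because running containers are always reconstructed from their registry image, runtime tampering does not survive a rebuild.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Image:
    """An immutable image: a name, a tag, and file contents fixed at build time."""
    name: str
    tag: str
    files: tuple  # (path, content) pairs baked into the image

class Registry:
    """Stores images; 'updating' a container means pushing a new image here first."""
    def __init__(self):
        self._images = {}

    def push(self, image):
        self._images[(image.name, image.tag)] = image

    def pull(self, name, tag):
        return self._images[(name, tag)]

class Container:
    """A running instance created from an image; its writable state is ephemeral."""
    def __init__(self, image):
        self.image = image
        self.fs = dict(image.files)  # writable layer starts as a copy of the image

def rebuild(container, registry):
    """When a container fails or needs updating, it is reconstructed from the image."""
    fresh = registry.pull(container.image.name, container.image.tag)
    return Container(fresh)

reg = Registry()
reg.push(Image("web", "v1", (("/app/main.py", "print('hi')"),)))
c = Container(reg.pull("web", "v1"))
c.fs["/tmp/implant"] = "malware"  # runtime tampering in the writable layer
c = rebuild(c, reg)
print("/tmp/implant" in c.fs)  # False: the rebuilt container reflects only the image
```

The flip side of this property is that anything an attacker manages to bake into the image itself (via the registry) propagates to every rebuilt container, which is why the sections below treat registries as a critical chokepoint.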

Benefits of containerization

Containers provide several performance and security benefits. Because they contain all the requisite dependencies for the software they run, containers are extremely portable. They are generally less resource-intensive than VMs because containers do not require an entire OS, making the creation of a new container instance easier. These attributes make containers extremely well suited to and thus popular across the many flavors of cloud services. Registries provide a natural chokepoint in the software development process: images must first upload to registries before percolating out into updated containers, so strict, automated reviews can enhance container quality downstream. Moreover, the registry system and clear insight into container contents and versioning allow for fine-grained control over patching. System administrators can choose to let updated containers percolate through a network naturally, to schedule batches of containers for updates, or to reinitialize all containers at once. Implemented well, this process can ensure service continuity and provide a more agile update schedule.
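
The patching options just described (natural percolation, scheduled batches, or all-at-once reinitialization) amount to choosing how to partition running containers into update waves. A minimal sketch, assuming a hypothetical `batch_update_plan` helper (no real orchestrator exposes this exact function):

```python
def batch_update_plan(container_ids, batch_size):
    """Partition running containers into ordered waves for staged reinitialization,
    so some replicas keep serving traffic while others restart from the new image."""
    return [container_ids[i:i + batch_size]
            for i in range(0, len(container_ids), batch_size)]

containers = [f"web-{n}" for n in range(7)]

# Reinitialize all at once: a single wave (and a brief full outage).
all_at_once = batch_update_plan(containers, len(containers))

# Scheduled batches: three at a time yields three waves and service continuity.
waves = batch_update_plan(containers, 3)
print(len(all_at_once), len(waves))  # 1 3
```

Real orchestrators layer health checks and rollback triggers on top of this basic partitioning, but the trade-off is the same: smaller waves preserve capacity, larger waves close the vulnerability window faster.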

Security in a containerized world

Container architecture has several key security considerations. Registries and image repositories are attractive, centralized targets for attackers seeking to corrupt container instances. Ensuring the integrity of images, both when added to registries and when built into containers, is critical. Moreover, developers often rely on popular container images distributed across repositories, opening an avenue for supply-chain attacks reminiscent of exploits of popular open source packages. Attackers with access to a container want to move into other environments, requiring strict segregation for security. Because containers can sit directly on the OS of their server without the separation provided for VMs by the hypervisor, an attacker may try to move down into the server to gain access to all its containers. The attacker could also move upward in the system hierarchy toward either the orchestrator or the intermediary servers that coordinate between orchestrator and container servers (also called worker nodes).
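
Image integrity checks of the kind described above typically rest on content addressing: OCI-style registries identify an image by the SHA-256 hash of its content, so a consumer that pins a digest can detect any modification on pull. A simplified sketch of that check follows; `image_digest` and `verify_pull` are illustrative helpers, not a real registry client.

```python
import hashlib

def image_digest(image_bytes):
    """Content-addressable identifier, in the style of OCI 'sha256:<hex>' digests."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def verify_pull(image_bytes, expected_digest):
    """Reject any pulled image whose content no longer matches the pinned digest."""
    return image_digest(image_bytes) == expected_digest

original = b"FROM scratch\nCOPY app /app\n"
pinned = image_digest(original)  # recorded when the image was first vetted

print(verify_pull(original, pinned))                 # True: untouched image
print(verify_pull(original + b"#backdoor", pinned))  # False: any change is caught
```

Digest pinning catches tampering in transit or in the registry, but not a malicious image that was vetted and pinned in the first place, which is why supply-chain review of upstream images remains necessary.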

Containers lend themselves excellently to the dynamic workloads and agile development of cloud systems such as AWS and Azure. Yet Unit 42, the cybersecurity consulting arm of Palo Alto Networks, found vulnerabilities in those systems resulting from container architecture, as have other researchers. During their analysis of Azure’s container-as-a-service (CaaS) offering, Azure Container Instances (ACI), Unit 42 analysts managed to deploy a malicious container to ACI and escape the container environment to run as root on their host node. From there, the analysts spread to other nodes through the master node sitting above their host, abusing quirks in node communication with the master. More broadly, Red Hat research on common container systems found that 94 percent of users of one of the most common orchestrators, Kubernetes, reported experiencing security incidents, with a third of those reporting major vulnerabilities and 47 percent reporting fears of exposure due to misconfiguration.
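
Given how often misconfiguration drives these incidents, even a crude automated audit of container specs catches common problems. The sketch below flags a few risky settings in a parsed spec; the field names loosely mirror Kubernetes `securityContext` options, but the checker itself is hypothetical and far simpler than real admission controllers or policy engines.

```python
def audit_container_spec(spec):
    """Flag a few well-known risky settings in a parsed container spec.
    Field names loosely mirror Kubernetes securityContext; illustrative only."""
    findings = []
    sec = spec.get("securityContext", {})
    if sec.get("privileged"):
        findings.append("privileged: container gets broad access to host devices")
    if spec.get("hostPID") or spec.get("hostNetwork"):
        findings.append("host namespace sharing weakens container isolation")
    if sec.get("runAsUser", 0) == 0:
        findings.append("runs as root: a container escape yields root on the node")
    return findings

risky = {"securityContext": {"privileged": True}, "hostNetwork": True}
safe = {"securityContext": {"runAsUser": 1000}}
print(len(audit_container_spec(risky)))  # 3
print(audit_container_spec(safe))        # []
```

Running checks like these in the image-to-registry pipeline, rather than on live clusters, exploits the registry chokepoint discussed earlier: a misconfigured spec is rejected before it ever percolates into running containers.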

Recommendations

  1. Voluntary codes of practice: Key industry players should develop, publicize, and adopt voluntary best practices for container security. These codes of practice might eventually develop into acquisition requirements and certifications. The largest cloud providers—Microsoft, Google, and AWS—are well situated to develop these standards in collaboration with container technology entities like Kubernetes and Docker. NIST Special Publication 800-190 “Application Container Security Guide” and the “Security Guidance for 5G Cloud Infrastructures” series published jointly by CISA and the National Security Agency (NSA) serve as useful starting points, providing crucial security practices at the technical and administrative levels. Meanwhile, the Cloud Native Computing Foundation (CNCF), a Linux Foundation project focused on fostering the development of cloud-native open source projects, is already well positioned to coordinate an industry-led effort, with membership from Microsoft, AWS, Oracle, Cisco, and SAP, and oversight of dozens of the most critical cloud and container technologies. Industry input should provide more detailed and practical technical approaches to container security and recommend tooling needs. Other industry input should include:
    • standards for vetting both image integrity and origin as well as for registry management and security,
    • best practices for cross-container communication and the development-to-registry pipeline, and
    • standards for providing configuration best practices and resources to customers. This last point is particularly critical because containerization is part of the shared-responsibility paradigm in cloud service provision. If vendors do not make security transparent and straightforward for users, the consequences could easily spiral outside the bounds of a single compromised environment. Recent attacks on Docker containers to deploy cryptocurrency mining malware aptly illustrate the dangers of misconfiguration, as do the results of the aforementioned Red Hat Foundation survey. Codes of practice must include sections specific to empowering users of container services with relevant security tooling and configuration guidance.
  2. Leverage customers’ buying power: Although the purveyors of critical, container-enabled technologies can voluntarily adopt codes of practice, their main customers have equal power and responsibility to demand such assurances from vendors as a condition of doing business. Large corporate entities not usually considered cybersecurity vendors—major investment and commercial banks, retail companies, and manufacturing firms—should require that their IT vendors establish and adhere to the set of standards and practices discussed above in order to inspire a baseline of customer confidence, using their immense buying power as leverage to better secure their own systems and networks while also improving the ecosystem as a whole. Likewise, the federal government should use its acquisition security levers—FedRAMP and applicable DOD processes—to further incentivize these codes of practice.
  3. Secure critical open source container infrastructure: Most container systems rely on just a handful of orchestrator, registry, and image technologies—many of which, including Kubernetes and Docker, are open source. Many proprietary service offerings depend on these linchpin technologies, yet Red Hat found that in 2021, 94 percent of survey respondents still encountered security incidents in their container environments. As more cloud and computing systems rely on containerized environments, the security of those underlying infrastructures will become increasingly critical.
    • Industry, including Microsoft, Google, IBM, and Amazon, should commit resources to the development of security tooling for container environments, particularly automated configuration-management tools—misconfiguration accounted for 60 percent of security incidents in the aforementioned survey. The CNCF, already linked to many of these tooling projects, should serve as the conduit between projects and enterprise-sourced resources. Google’s recent submission of its Knative tool for Kubernetes to the CNCF illustrates a viable path for the incubation and provision of marquee industry tooling. Industry, CISA, and the NSA can collaborate to identify additional tooling useful for adhering to the security practices recommended in the jointly published “Security Guidance for 5G Cloud Infrastructures” as well.
    • CISA should identify Kubernetes and similar services as critical linchpin technologies in line with the language in Executive Order 14028 and increase resource commitments as needed. Cloud-services security does not need to be limited to products as used but should address technologies as designed and deployed. In this area, container management looms large.
    • The establishment of Critical Technology Security Centers (CTSCs) added to the House-passed COMPETES Act (HR 4521) offers a model for federal outreach to open source container components. The provisions would create at least four CTSCs for the security of network technologies, connected industrial control systems, open source software, and federal critical software. These CTSCs would work with the input of the DHS Under Secretary of Science and Technology and the Director of CISA to study, test the security of, coordinate community funding for, and generally support CISA’s work regarding their respective technologies. The Center for Open Source Software Security should work with CISA to coordinate the protection of critical open source container infrastructure.
  4. Establish an architectural resilience review process for service of services: The continued containerization of cloud systems and other services will speed up change in already dynamic environments, improving innovation and development. However, that swiftness can compromise long-term architectural planning and structural review of the vast, critical cloud systems it enables. Industry should coordinate with CISA, the Office of the National Cyber Director (ONCD), and the NSA on long-term architectural reviews of the largest, most critical containerized systems in the form of biannual architecture review meetings to review case studies as well as best and worst industry practices. ONCD should be responsible for the organization and strategic vision for this process, executed through CISA in partnership with NSA. NIST should develop a publication series on long-term architectural practices based on these industry-wide fora within two years of their start, which several compliance and acquisition regimes could incorporate down the road.