Problem 2: "Arms Race" Framing Treats AI as a Single Technology
Relatedly, the arms race framing often prompts discussion of artificial intelligence as a single technology, but this, too, is inaccurate and could lead to bad policymaking. There is no true consensus definition of artificial intelligence, even among experts, but it's clear that AI is not one technology. Instead, artificial intelligence is "a catch-all concept alluding to a range of techniques with varied applications in enabling new capabilities,"1 from image recognition to disease prediction. However, thinking of AI as a single thing threatens to guide policymakers to mishandle AI risks and miss out on upsides.
To provide context: much of what today's commentators refer to as "AI" is just machine learning. While the term is often used without definition, the premise is relatively simple: computers can identify patterns in data and use that pattern analysis to make decisions on their own.2 For instance, a computer that receives labeled images of cats and dogs can, with the right algorithm, learn to distinguish between the two classes of images. Researchers feed labeled images of the two animals, perhaps hundreds or thousands at a time, to the model. As this occurs, the model begins to make observations about what characterizes each type of image, as well as what distinguishes one type from the other (often through statistical techniques whose learned details are opaque even to the programmer). Eventually, this so-called training process is complete, at which point the human programmers feed the machine unlabeled photos of cats and dogs, hoping it can now identify which is which. The computer's performance on these tests provides benchmarks against which further improvements can be made.3
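The train-then-test loop described above can be sketched in a few lines. This is a deliberately toy illustration, not how production systems are built: synthetic four-number feature vectors stand in for real images, and a simple nearest-class-mean rule stands in for a real learning algorithm. The pattern is the same, though: summarize labeled examples during training, then label unseen examples and measure accuracy.

```python
import random

random.seed(0)

def make_example(label):
    """Synthetic stand-in for an image: 'cats' and 'dogs' get feature
    vectors drawn around different means (purely illustrative)."""
    center = 0.3 if label == "cat" else 0.7
    return [random.gauss(center, 0.1) for _ in range(4)], label

# "Training": feed the model labeled examples and let it summarize each
# class -- here, just the per-class mean of every feature.
training_data = [make_example(label) for label in ["cat", "dog"] * 100]

means = {}
for label in ("cat", "dog"):
    vecs = [features for features, lab in training_data if lab == label]
    means[label] = [sum(col) / len(col) for col in zip(*vecs)]

def classify(features):
    """Predict the class whose learned mean is closest (squared distance)."""
    return min(means, key=lambda lab: sum((f - m) ** 2
                                          for f, m in zip(features, means[lab])))

# "Testing": unlabeled examples the model has never seen before.
test_data = [make_example(label) for label in ["cat", "dog"] * 20]
accuracy = sum(classify(features) == lab for features, lab in test_data) / len(test_data)
print(f"accuracy: {accuracy:.2f}")
```

On data this cleanly separated the toy classifier scores near-perfect accuracy; the point is only to make concrete what "training" and "testing" mean, and why both the algorithm and the dataset are specific to one narrow task.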
This is precisely why "AI" should not be used in reference to one technology: the process for teaching a computer to identify pictures of household pets is different from the process for teaching it to comprehend human language. Similarly, machine learning models for facial recognition are designed differently than machine learning models used to generate risk scores for convicts or estimate someone's likelihood of defaulting on a loan. Depending on the task, the code itself (the machine learning model and/or its specific properties) will differ between AI implementations. The same goes for the dataset, which is often tailored to a single use case. Even within a single application area of AI like image recognition, detecting enemy combatants' faces would require a notably different dataset than the one used for a cat-versus-dog image classifier.
Discussing a single arms race, though, makes the development of AI sound as if it's focused on one technology.4 Commentators, in turn, talk of China "beating" the United States in AI5 without any particular understanding of what that means: what the winning consists of, or what it means for China to do it. Will Chinese tech giant Tencent develop a more accurate facial recognition system than the FBI? Will Chinese company Baidu minimize the bias6 of its AI systems compared to bias in systems built by Amazon? Will China's military drones fly faster than ours, or its intelligence service's natural language processors better spy on phone calls than an American private security firm's?
In reality, it's difficult to answer these questions, since the underlying logic of a single AI arms race, with a clear winner and a clear loser, is flawed.7 Further, referring to China as a single entity, even granting the government's relative control of industry compared to the United States, still loses much important nuance. This could have many potentially damaging effects. As the U.S. government still needs to develop a cohesive national AI strategy, putting many forms of AI in a single bucket may yield disastrous risk management. Skin cancer predictors carry different economic, legal, social, political, and ethical risks than facial recognition systems deployed in poor urban centers. Relatedly, some forms of artificial intelligence, like a system that's world-class at playing games such as Dota 2 or Go, could make for grabby news headlines but likely have much less strategic value in U.S.-China great power competition than, say, an AI system in a lethal autonomous weapon.
Artificial intelligence is not a single technology, and policymakers must recognize that AI is instead a catch-all term that refers to many distinct technologies, many of which have their own development methods and timelines. Different applications of AI will develop at different speeds and with different levels of accuracy and effectiveness. Other factors, such as the computing power needed to run particular functionalities or the data used to test a given system, will also vary. For policymakers to better invest government resources in AI development (and to better coordinate non-governmental efforts), it's essential that artificial intelligence's many forms not be thrown into a single bucket and treated in the same fashion. Doing so will almost certainly result in U.S. policymakers mishandling AI risks while missing out on critical AI upsides. Treating AI as one thing greatly oversimplifies AI development at the peril of the United States' economic, technological, and strategic leadership.
Citations
- Elsa B. Kania, "The Pursuit of AI Is More Than an Arms Race," Defense One, April 19, 2018.
- For more on this, see: Robert D. Hof, "Deep Learning," MIT Technology Review, n.d.; and Karen Hao, "What is machine learning? We drew you another flowchart," MIT Technology Review, November 17, 2018.
- This is a deliberate oversimplification. Once again, see the previous endnote for a more detailed primer on some machine learning basics.
- Political scientist Michael Horowitz argues that "[t]here will not be one exclusively military AI arms race" but instead "many AI arms races, as countries (and, sometimes, violent nonstate actors) develop new algorithms or apply private sector algorithms to help them accomplish particular tasks." Again, the concept of an "arms race" is perhaps too constraining and too winner-takes-all, yet his point about multiple, overlapping, intersecting AI development tracks stands true. See: Michael C. Horowitz, "The Algorithms of August," Foreign Policy, September 12, 2018.
- See previous references on the permeation of the winner-takes-all "arms race" rhetoric in commentary from journalists, policymakers, and analysts. Also, it's worth noting that even in some interviews where artificial intelligence experts bring out more nuance in the conversation, headlines reframe the conversation in the context of a winner-takes-all AI "arms race." See, for instance: Phred Dvorak, "Which Country Is Winning the AI Race—the U.S. or China?" The Wall Street Journal, November 12, 2018.
- While trying to remain relatively high-level, it's still worth noting that understandings of "bias" take many forms. For just some discussion of different fairness definitions, see: Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel, "Fairness Through Awareness," arxiv.org, November 29, 2011; Moritz Hardt, Eric Price, and Nathan Srebro, "Equality of Opportunity in Supervised Learning," arxiv.org, October 7, 2016; Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, "Inherent Trade-Offs in the Fair Determination of Risk Scores," arxiv.org, November 17, 2016; Matt Kusner, Joshua Loftus, Chris Russell, and Ricardo Silva, "Counterfactual Fairness," 31st Conference on Neural Information Processing Systems, 2017; and Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Schölkopf, "Avoiding Discrimination through Causal Reasoning," arxiv.org, January 21, 2018. These are pulled from a syllabus developed by Duke University's Ashwin Machanavajjhala.
- This framing is flawed for the aforementioned reasons, but also because technological "superiority is not synonymous with security." When it comes to technologies like artificial intelligence, "the most reasonable expectation is that the introduction of complex, opaque, novel, and interactive technologies will produce accidents, emergent effects, and sabotage" that will cause "the American national security establishment [to] lose control of what it creates." See: Richard Danzig, "Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority," Center for a New American Security, May 30, 2018, p. 2.