As a category, “tech” emerged in its current form in the mid-1980s, relying on the conflation of economic and national security made tangible in the form of high-tech products like semiconductors. As an industry, tech has since its inception been marked by governmental intervention, which has sustained the industry and upheld particular players, priorities, and uses. The ways in which academia, industry, and government have enmeshed have changed over time; the fact of their imbrication and interdependency has not.

Tech and the industries associated with it have rearranged governance and political economy around redeeming the promises of speculative futures. At various moments, tech has come to represent the health of the US state—its prestige, its ability to project global power, and its economic and national security. Tech’s symbolic and material importance has meant that the US government and information industries have remained intertwined.

Hailed by many as the return of industrial policy and government intervention, the 2022 CHIPS Act and Biden’s chip-focused executive orders are continuous with older forms of US industrial policy. The model of this policy changed significantly during the eighties under Reagan, creating a closer synthesis of the tech industry and the US national security state.1 It is hard to tell the story of one set of institutions without the other. The Clinton administration entrenched and extended Reagan-era institutional experiments, which became norms after 9/11 took defense-industrial cuts off the table. As industries that relied on cheap chips ascended and US leadership in microelectronics was taken for granted in the Obama era, government focus and support waned. Until events in 2016 convinced the defense world and 2020 COVID-era shortages convinced politicians to reengage with the industry, semiconductors were not a central focus for policymakers. However, the political economy inaugurated through this history persisted.

Artificial intelligence has significantly benefited from, and been shaped by, government intervention not just in AI itself but crucially in semiconductors. From the early Cold War to the present, “AI” has referred to many disparate sets of practices. In particular, the meaning of intelligence in “artificial intelligence” has carried numerous and shifting connotations, complementing the assumptions its practitioners adopted and their contexts conditioned. Broadly speaking, the label has encompassed any attempt to make machines display human capabilities such as understanding language (e.g., speech recognition and translation), learning, and problem-solving. Federal funding was, and continues to be, essential for the development of AI. Until recently, in fact, the US federal government provided the bulk of funding for research into AI and AI-related fields. When industry at various points abandoned AI for fear that commercial implementation was distant, federal funding filled the gap in areas like expert systems, speech recognition, natural-language processing, and image processing. Moreover, significant portions of what is recognized today as AI originated in other fields. Speech recognition, graphical models, and natural-language processing all use techniques borrowed from mathematics, statistics, and physics rather than from what has traditionally been labeled AI.

The history of AI is inseparable from the history of semiconductors. Advances in what now gets called AI (formerly termed “machine learning” or “statistical prediction”) are entirely dependent on computing power that in turn derives from exponential improvements in semiconductors.2 Guido Appenzeller, Matt Bornstein, and Martin Casado, “Navigating the High Cost of AI Compute,” Andreessen Horowitz, April 27, 2023, https://a16z.com/navigating-the-high-cost-of-ai-compute. See also Jai Vipra and Sarah Myers West, “Computational Power and AI,” AI Now Institute, September 27, 2023, https://ainowinstitute.org/publication/policy/compute-and-ai. Advances in chips have also undergirded advances and profits in personal computing, graphics, communications, networked computing (e.g., the internet, the cloud), and most other information technologies. According to the National Academies, for example, “[o]nly after continued increases in processing power and memory capacity did hidden Markov models become feasible for use in recognizing continuous speech on PCs” in the 1990s.3 National Research Council, Funding a Revolution: Government Support for Computing Research (Washington, DC: National Academy Press, 1999), 151, https://doi.org/10.17226/6323.
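To make the compute dependence concrete, consider the forward algorithm at the core of hidden Markov model speech recognizers like those the National Academies describe. The sketch below is a minimal illustration, not code from any source; the model sizes and probabilities are invented. Its cost grows as O(T·N²) in the number of audio frames T and hidden states N, which is why continuous-speech recognition on PCs had to wait for faster chips and larger memories.

```python
# Minimal, illustrative forward algorithm for a hidden Markov model.
# All parameters below are toy assumptions, not historical values.
import numpy as np

def forward_likelihood(pi, A, B, obs):
    """Probability of an observation sequence under an HMM.

    pi:  (N,)   initial state distribution
    A:   (N, N) state-transition probabilities
    B:   (N, M) emission probabilities
    obs: (T,)   observed symbol indices

    Each step costs O(N^2) multiplies; a T-frame utterance costs
    O(T * N^2), which is what made continuous speech compute-bound.
    """
    alpha = pi * B[:, obs[0]]               # initialize with first observation
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]  # propagate one frame and re-weight
    return alpha.sum()

# Toy example: 2 hidden states, 3 observable symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1],
              [0.1, 0.3, 0.6]])
print(forward_likelihood(pi, A, B, np.array([0, 1, 2, 1])))
```

Recognizers of the era chained together far larger state spaces and ran this recursion frame by frame in real time, multiplying the arithmetic above many times over.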

The massive quantities of resources, complex coordination, and global negotiation needed to make ever-improving semiconductors inevitably require considerable state involvement, partnership, and active intervention. Despite this fact, government intervention is rarely given its due. A coup of bipartisan American propaganda promoting the myth of the lone American entrepreneurial tech genius has been to veil the equally bipartisan support for tech industrial policy. The American state has created the conditions that make Bill Gates’s massive profits possible—including, for example, an extremely permissive antitrust policy.

Government support for these infrastructures has largely emerged from the national security state. Defense needs have always shaped these industries in one way or another. The Pentagon’s oft-renewed strategic focus on high tech has led to consistent defense funding and defense interest in information industries. The relative emphasis on a given information technology and the means through which defense needs have shaped technology have changed over time, but the overbearing impact of national security agencies has not. During the Cold War, these were tools of the state’s imagined electronic battlefield.4 Paul N. Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, MA: MIT Press, 1996), 300. That vision, as many have noted, still shapes the DoD’s approach to information technologies.5 Ibid. See also Rebecca Slayton, Arguments That Count: Physics, Computing, and Missile Defense, 1949–2012 (Cambridge, MA: MIT Press, 2013); Gian Gentile, Michael Shurkin, Alexandra T. Evans, Michelle Grisé, Mark Hvizda, and Rebecca Jensen, “A History of the Third Offset, 2014–2018,” RAND Corporation, March 31, 2021, https://www.rand.org/pubs/research_reports/RRA454-1.html.

Cold War Status Quo (Pre-1970)

The Cold War US state institutionalized support for technology development in agencies derived from WWII institutions and projects. The National Science Foundation (NSF), the Defense Advanced Research Projects Agency (DARPA), and other elements of the national security state emerged from the early Cold War, for example. Physicists dominated these new institutions, and the state prioritized the production of such physicists. Dangerous and increasingly taboo tests of nuclear weapons and their components drove physicists6 David Kaiser, “Cold War Requisitions, Scientific Manpower, and the Production of American Physicists after World War II,” Historical Studies in the Physical and Biological Sciences 33, no. 1 (September 1, 2002): 131–59. toward computer simulations. As Peter Galison shows in his article “Computer Simulations and the Trading Zone,” prominent physicists began to view computers less as tools and more as reflections of nature itself.7 Peter Galison, “Computer Simulations and the Trading Zone,” in The Disunity of Science: Boundaries, Contexts, and Power, eds. Peter Galison and David J. Stump (Stanford: Stanford University Press, 1996), 118–57. Through new questions about the limits of computation (and with that, the limits of physical reality), physicists became increasingly concerned with and involved in computing (especially theories of computation). Carver Mead, for example, a major Moore’s law promoter, developed Very Large Scale Integration (VLSI) with Lynn Conway and worked closely with Gordon Moore. A physicist by training, he engaged directly with the fields of the limits of computation and the physics of computing. Likewise, by the 1970s, famous physicists like Richard Feynman and John Wheeler, who had close relationships with the Cold War national security state, began to pursue the physics of computation. Perhaps because of this theoretical orientation, these physicists tended to have an exaggerated view of the capabilities of computer systems.8 In her book Arguments That Count, Rebecca Slayton compares their approach to the much more circumspect views of those who worked on implementation and software, who would become known by the 1980s as software engineers. The power physicists wielded and the esteem in which they held computing led to its imbrication in more areas of government—especially defense.

At the same time, the less-practically-realized technofuturist fields of cybernetics and AI emerged from an interdisciplinary attempt to create master sciences across minds and machines. AI was one of many fields in the soup of economics, physics, neuroscience, information theory, systems theory, operations research, and game theory in Cold War defense institutions like RAND, the DoD, and other elements of the military-industrial complex.9 Warren McCulloch and Walter Pitts’s neural networks, for example, emerged from this context. The advent of nuclear weapons during WWII required secrecy, motivated the creation of the national security state, and necessarily consolidated power within the executive-controlled national security apparatus.10 Garry Wills, Bomb Power: The Modern Presidency and the National Security State (New York: Penguin Press, 2010). Nukes, with their demonstrated capability for massive destruction, arrived with multiple rapid technological changes. These swift-moving technological shifts produced a unique “Cold War rationality” in state institutions—a desire for subjectivity-free knowledge and mechanized decision-making.11 Paul Erickson et al., How Reason Almost Lost Its Mind: The Strange Career of Cold War Rationality (Chicago: University of Chicago Press, 2013), 1–26. Such “trading zones” made AI and other technological dreams of the present thinkable and desirable. The same circumstances convinced pioneers of AI like Herbert Simon to identify options pricing theory, when it emerged in the early 1970s, as closely resembling the kind of random walk–style optimization he imagined for AI.12 Orit Halpern, Beautiful Data: A History of Vision and Reason since 1945 (Durham: Duke University Press, 2014), 176–7. Meanwhile, what would lay the foundations for the modern machine learning version of AI was developed during the same period as a branch of physics called statistical mechanics.

Significant components of the tech sector emerged from the Cold War state and both shaped and were shaped by the contours of its history. From the dawn of the space race in the 1950s, semiconductors have been at the heart of US defense strategy. In the 1970s, this found a formal articulation in the Offset Strategy: Pentagon leaders believed they could offset the Soviet advantage in sheer numbers of soldiers with superior technological capability.13 Specifically, this technological capability would be used for things like surveillance, reconnaissance, intelligence, precision-guided munitions, sensors, and targeting. US technology leadership in microelectronics at the time served as the basis for this strategy, which was an explicit declaration and extension of the relationship between defense and computing.14 As Paul Edwards writes in The Closed World, Cold War politics and computing cocreated each other; Cold War computers served as a support for Cold War culture, politics, and worldview. Command and control as a paradigm shaped both computers and military strategy: “[T]he key theme of closed world discourse was global surveillance and control through high technology military power. Computers made the closed world work simultaneously as technology, as political system, and as ideological mirage.” This culture and its institutional effects created the conditions for the perpetuation of the Offset Strategy, and therefore for the centrality of information technology to conceptions of national security. See Edwards, The Closed World: Computers and the Politics of Discourse in Cold War America (Cambridge, MA: MIT Press, 1996), 1–2.

AI similarly owes a significant debt to the Cold War national security state. “The establishment in 1962 of DARPA’s Information Processing Techniques Office (IPTO),” for example, “radically changed the scale of research in AI, propelling it from a collection of small projects into a large-scale, high-profile domain.”15 DARPA “supported work in problem-solving, natural-language processing, pattern recognition, heuristic programming, automatic theorem proving, graphics, and intelligent automata. Various problems relating to human-machine communication—tablets, graphic systems, hand-eye coordination—were all pursued with IPTO support.”16 This support “rapidly advanced the emergence of a formal discipline” and legitimized the field. Because AI objectives often took a very long time to accomplish, federal support was necessary; private companies had little patience or financial incentive to fund long-term research.

Nixon–Carter: Economic Conversion from Vietnam and the Remaking of Tech Policy Infrastructure (1970s–1980s)

The Cold War triple helix17 Henry Etzkowitz and Loet Leydesdorff, “The Triple Helix – University-Industry-Government Relations: A Laboratory for Knowledge Based Economic Development,” EASST Review 14, no. 1 (1995): 14–19, https://ssrn.com/abstract=2480085. of national security state, academia, and industry began to unravel in the late 1960s and early 1970s as the Vietnam War wound down. Unlike after WWII, the US did not demilitarize significantly after the end of the Korean War, due to the exigencies of the Cold War. Therefore, the demilitarization that occurred in the wake of the Vietnam War created significant economic and social problems—for example, the massive unemployment of engineers, computer scientists, and technicians.

This and the earlier end of cost-plus contracting in Robert McNamara’s Pentagon led to a sudden drop in defense spending without any substitute (despite several prospective plans). Mathematics, computer science, and AI were hit especially hard. DoD funding for mathematics and computer science reached a two-decade low in 1975.18 National Research Council, Funding a Revolution, 112. The Nixon administration pushed for an emphasis on discrete applications in federal research. Together with the short-lived 1969 Mansfield Amendment (which forbade military funding for research without military applications), this decimated funding for long-term or speculative projects. This trajectory was reinforced by Ford’s DARPA director George Heilmeier (1975–77), who created “tremendous pressure to produce stuff that looked like it had a short applications horizon.”19 Ibid., 113.

The decrease mobilized two related groups in defense of their fields: venture capitalists and their allies; and scientists, engineers, and technicians working in defense. The effects were not evenly spread. Massachusetts, California (Silicon Valley and SoCal), and the Sunbelt especially suffered because of the concentration of the defense industry in those areas.20 For example, the Massachusetts company Raytheon went from thirty thousand jobs to 3,500 jobs. See Lily Geismer, Don’t Blame Us: Suburban Liberals and the Transformation of the Democratic Party (Princeton: Princeton University Press, 2014), 150–2. In Massachusetts, this meant that influential Democrats like Edward Kennedy, Paul Tsongas, John Kerry, Robert Drinan, Barney Frank, Michael Dukakis, and others elected to state and federal office were closely tied to this new coalition. Underemployed defense workers were also important to the McGovern campaign, which promised that McGovern would not eliminate any aerospace and defense jobs until comparable civilian jobs existed.21 Ibid., 169. The reconversion promised by McGovern and sought by these scientists and technicians stressed public-private partnerships, as well as public support for small innovative new businesses (what would later be termed “startups”) through R&D spending.

The mobilizations of venture capitalists and scientists were not immediately linked. In response to both genuine opposition to the Vietnam War and the social stigma experienced by those working in defense, scientists, engineers, and technicians organized around a conversion of the defense industry to civilian uses. These hopes were seemingly dashed by McGovern’s loss.

Venture capitalists (VCs) like William J. Casey22 Casey was a famous theorist of tax havens and the CIA director during the Iran–Contra affair. mobilized instead to secure subsidies and benefits for “small businesses” and eventually “small innovative businesses” from state and federal governments.23 Molly Sauter, “A Businessman’s Risk: The Construction of Venture Capital at the Center of U.S. High Technology” (PhD diss., McGill University, 2020), https://escholarship.mcgill.ca/concern/theses/j6731853z. He and others in the conservative finance world saw the nexus of security state and industry in high tech as a vehicle for their ends, tying VCs to “tech” and securing numerous regulatory and tax benefits and state backing, as well as promoting “tech” narratives. This coalition also created political support and infrastructure for a broader high-tech-focused deregulatory project as well as state-industry transformation.24 Ibid., 51. Casey’s own firm Vanguard Ventures, formed in 1968, also operated as a tax shelter for investors—not surprising for a theorist of tax havens. Casey’s agitation led to the Small Business Administration Task Force on Venture and Equity Capital in 1976, later known as the Casey Task Force. According to historian Mols Sauter, this report “largely invented and certainly normalized the view that the venture capital funding structure, particularly as manifest in the limited partnership organizational model, is a basic and inextricable part of what would come to be identified as the ‘innovation economy.’”25 Ibid., 86–7.

The two groups, however, came together for the January 1980 White House conference on small business. The fundamental thesis of this meeting was that small business was not getting its fair share of institutional support.26 But what did they mean by “small business”? The term was intended to produce a patriotic update on the Jeffersonian yeoman farmer ideal—the report that came from the meeting claims that small business creates truly free citizens “with a direct stake in fortifying democratic government.” See Hearing before the Committee on Small Business, Ninety-Sixth Cong., Second Session (1980). Though the authors intended to evoke mom-and-pop enterprises and offered small business as a vehicle for women and minorities to get ahead, the top legislative priority coming out of the conference was the Small Business Innovation Research Program (SBIR). This program benefited both VCs and the small innovative businesses (proto-startups) that former defense workers thought would be a long-term solution to their funding and employment problems. The report offered small business as a solution to all the era’s issues: flagging productivity, inflation, innovation and competitiveness, postindustrialism, high unemployment, general American decline, and the difficulty of maintaining the US’s position in high tech and automobiles. Stressing the need for a supply-side approach, the report called for, among other things, tax cuts,27 For example, removing or lowering the capital gains tax and estate tax, as well as corporate taxes for “small businesses.” slashing regulations, shrinking government, reversing antimonopoly legislation, and lowering or eliminating the minimum wage.28 Ralph L. Stanley et al., The White House Conference on Small Business: A Report to the President of the United States (Washington, DC: US Government Printing Office, 1980).

Following the conference, members of the small business coalition expected President Carter to implement their recommendations. Instead, he cut funding for SBIR and other high-tech small-business priorities. Irate, members shifted their allegiances to the right for the 1980 election. 

In the late 1970s, with the rise of civilian computing, tech industries experienced major structural shifts. Civilians began consuming vastly larger numbers of chips than the military, which caused large organizations such as Bell Labs—which maintained major research arms and subsisted on large government contracts—to give way to smaller startups that targeted the civilian market. The immense growth of civilian computing meant that companies had more incentive to focus on that market at the expense of defense—especially in the wake of Vietnam and public pressure to stay away from military projects.29 Linda Weiss, America Inc.? Innovation and Enterprise in the National Security State (Ithaca: Cornell University Press, 2014), 38. Arati Prabhakar, who served as the director of DARPA, where she headed coordination with SEMATECH and NIST, and who now heads the White House Office of Science and Technology Policy, stated that DARPA made concerted efforts to reduce its dependence on scale in semiconductor manufacturing, but that these endeavors ultimately proved unsuccessful. Consequently, she claimed, DARPA had to rely on civilian firms, dual-use technology, and industry consolidation to continue making advances in this field.30 Arati Prabhakar interview with author, May 19, 2021.

Carter and Reagan: The 1980s and the Japan Crisis

Upon entering office, Reagan, unlike Carter, delivered for the small business–VC coalition by supporting SBIR, a variety of tax breaks, subsidies, antitrust benefits, new public and private initiatives to assist small business, export controls, and other measures. In supporting SBIR and similar government benefits for high-tech innovative small businesses, Reagan defied members of his coalition like paleocon Dennis Prager and more libertarian organizations like the Heritage Foundation, as well as universities and big electronics companies represented by the American Electronics Association (AEA). His defense buildup, together with his aggressive trade policy, benefited high-tech companies, especially chipmakers, who faced a vigorous challenge from Japan. According to organizers, a majority of the sixty recommendations from the 1980 White House Conference on Small Business were acted upon.

These policies delivered for defense hubs—for example, Massachusetts received one-third of the total SBIR funding. Such programs kept startups afloat.31 Chris Miller, Chip War: The Fight for the World’s Most Critical Technology (New York: Scribner, 2022), 139. They also worked for Silicon Valley. As Victor Reis, Deputy DARPA director from 1989 to 1990, and then DARPA director from 1990 to 1991, claimed: “DARPA [was] very integral in getting a lot of that Silicon Valley stuff […] going at Stanford. And all the spin-offs that went with that were, in large measure, from DARPA.”32 Victor Reis, interview by DARPA, January 17, 2007, https://www.esd.whs.mil/Portals/54/Documents/FOID/Reading%20Room/DARPA/15-F-0751_DARPA_Director_Victor_Reis.pdf.

The SBIR, similar programs, and other Reagan-era changes institutionalized the role of VCs in the federal research apparatus. Programs like SBIR meant that VCs bore much less risk. SBIR not only provided billions in funding but also multiple non-monetary benefits.33 Tom Nicholas, VC: An American History (Cambridge, MA: Harvard University Press, 2019), 246–7. The government did the early technology development and evaluation, significantly cutting the time from investment to payout. These programs and their extensions have created an environment where, contrary to the public narrative, “federal programs, not private VC, provide the majority of the high-risk startup and early-stage capital for U.S. innovation.”34 Weiss, America Inc.?, 74. The SBIR created a motor for the VC industry: the program was structured such that government would fund and oversee the first two phases of startup development and VCs would invest in the third phase. This created a permanent role for VC in industrial policy. Small businesses, startups, and VCs were also much more integrated into federal governance—in policymaking, grant evaluation, and the selection process for contracting. This in turn led to a greater emphasis on commercialization and economic criteria in awarding funding. Similarly, pressure from this coalition convinced Reagan to lean on other sources of funding like Federal Focus to support applied research. This was an effective subsidy for tech businesses.

The institutionalization of VCs in the same framework as startup and small-business funding solidified the coalition of right-leaning financial interests and liberal tech and defense interests. It also led to the expansion of the SBIR model: the Defense Small Business Advanced Technology Program was structured in the same way as the SBIR. This later became the Advanced Technology Program (ATP) under Bush and influenced other industrial policy programs like the Technology Reinvestment Project (TRP).35 According to Tony Tether, DARPA director under Bush II, the idea behind TRP was: “Hey, we got a bunch of smart guys that have really done great in the ’80s. Let’s have them do venture capital types of things—commercially.” See Tony Tether, interview by DARPA, May 1, 2007, https://www.esd.whs.mil/Portals/54/Documents/FOID/Reading%20Room/DARPA/15-F-0751_DARPA_Director_Tony_Tether.pdf. The Democratic side of this coalition, moreover, reined in the ambitions of Atari Democrats36 The term “Atari Democrats” came into use in the 1980s to refer to young Democratic legislators who championed tech and believed that it and efficiency through market mechanisms would stimulate the economy and create jobs. I follow Lily Geismer’s definition in Left Behind: The Democrats’ Failed Attempt to Solve Inequality (New York: Public Affairs, 2022), 18–19, 29, 40. Atari Democrats as a group predated the “Reagan Revolution” and aimed to reformulate liberalism and the traditional precepts of the party with the belief that “the market and private sector [can] do social good.” This meant “fusing government reform and economic growth with opportunity and equality.” They likewise believed that “the future for the economy and the Democrats lay in a new model of growth that focused on bolstering trade and the postindustrial sector, especially high-tech entrepreneurship.” like Paul Tsongas, who wanted to pursue policies modeled after Japan’s Ministry of International Trade and Industry (MITI).

The transition from the Cold War into the “unipolar moment” (i.e., US hegemony) was also the beginning of a period where science and technology played a more central role in politics; the coalition forged in the wake of the Vietnam War by Midwest and Eastern financial interests and tech liberals ensured that. As historians like Lily Geismer have documented, ex-Defense scientists, engineers, and technicians bound the Democratic Party to the interests of the emerging tech sector—then made up of small, often spin-off science-and-engineering-focused government contractors—in a political alliance that has only recently begun to fray.37 John Ganz, “The Emerging Tech-Lash: The Politics of Tech Oligarchy,” Unpopular Front (blog), April 26, 2022, https://www.unpopularfront.news/p/the-emerging-tech-lash. 

This bipartisan coalition emerged at the same time that intellectual justifications for favoring high tech blossomed in economic policy circles. Prominent MIT economist Lester Thurow, for example, posited that 1970s economic crises could be resolved by accelerating productivity through boosting “sunrise industries” (e.g., computing and biotechnology) and offshoring “sunset industries” (automobiles, steel, textiles, consumer electronics). Sunset industries could offshore to cheaper countries, which would then supposedly be elevated to the next stage of development through these industries.38 These political choices were, by the end of the 1990s, portrayed as inevitabilities. A large literature premised on the belief that the US “lost” the auto industry flourished. See National Research Council, Funding a Revolution; and Alex Roland and Philip Shiman, Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983–1993 (Cambridge, MA: MIT Press, 2002), 91.

Proponents of New Growth theory on the center-left and supply-side economists on the right could find common cause in championing a transition to a new postindustrial economic order in which control of sunrise industries would determine global power. This belief guided the design of US industrial policy through the Clinton administration. Moreover, this theory broadened the coalition around industrial policy for high-tech industries to include not just high-tech industry and VCs, but also foreign-policy hawks interested in the maintenance of American power projection.

The coalition of Atari Democrats, defense scientists, foreign-policy realists, and VC-related financial interests proved powerful enough to withstand the pressures of more radical groups like the Heritage Foundation and the Gingrich Congress elected in 1994. The policies pursued by this coalition maintained and even intensified the military’s reliance on high tech despite its frequent failures in practice.39 Consider, for example, US interventions in Vietnam and Iraq, and US use of imprecise or poorly targeted drone strikes and precision-guided munitions both in war and in other, more ambiguous contexts. Tech not only offered appealing fantasies of control, but moreover functioned as a central engine of the US economy that could survive right-wing attacks on the state.

The symbolically laden economic conflict with Japan peaked from the mid-1980s to the early 1990s. For many observers, it confirmed the importance of high tech as the centerpiece of the next stage of economic development, and therefore the need for government support for this sector. Japan’s 1976 VLSI Program aimed to improve the manufacturability of devices through a collaborative research effort involving the country’s five largest industrial chipmakers.40 The program was apparently sparked by rumors in 1975 that IBM was working on a new line of computers that would use VLSI. The Japanese VLSI program was such a success that IBM modeled later initiatives after it. See Kiyonori Sakakibara, “R&D Cooperation Among Competitors: Lessons from the VLSI Semiconductor Research Project in Japan” (Working Paper #650, University of Michigan School of Business Administration, January 1991), https://deepblue.lib.umich.edu/bitstream/handle/2027.42/36069/b1425067.0001.001.pdf. The program was widely acknowledged as one of the most successful national cooperative research efforts in the history of the industry, and its success inspired similar collaborative research efforts.41 Robert Schaller, “Technological Innovation in the Semiconductor Industry: A Case Study of the International Technology Roadmap for Semiconductors (ITRS)” (PhD diss., George Mason University, 2004), 437.

Amid Japanese success in the semiconductor market, bookings (orders received) for the US semiconductor industry dropped suddenly in December 1984.42 The industry had been blindsided by the challenge to American dominance in semiconductor markets and was “in full crisis mode.”43 Arati Prabhakar interview with author, May 19, 2021. See also “CEO Sees End to ‘Cowboy’ Chip Purchasing,” Electronic Business Buyer 21, no. 7–8, July 1995. Industry lobbyists swarmed Washington, urging legislators to help resolve the issue. The lobbyists and industry leaders framed the problem of their dwindling market share as a national security one necessitating urgent state action. The government’s formal responses to this problem were extensive.44 Harry Sello and Daryl Hatano, “Oral History of Paolo Gargini,” transcript, Computer History Museum, 20, July 27, 2011, https://archive.computerhistory.org/resources/access/text/2012/08/102714338-05-01-acc.pdf. They aimed to help the US regain technological parity with Japanese commercial industry and advance integrated circuit (IC) technology for the benefit of the US semiconductor industry, all while making sure the DoD’s needs were met in a context where the semiconductor industry no longer relied on government contracts.45 The VHSIC Program had technology targets based on device feature sizes, and despite being focused on defense, it made important contributions to industrial integrated circuit technology. The VLSI program, on the other hand, was very open and less focused on specific military applications. Larry Sumney, the director of the VHSIC Program, became director of the industry group Semiconductor Research Corporation (SRC) in 1982. Sumney remained as the director and later president for the entire life of the SRC. See Robert M. Burger, “Cooperative Research: The New Paradigm,” Semiconductor Research Corporation, March 1, 2001, 26; and Schaller, “Technological Innovation in the Semiconductor Industry,” 438.

When the Japanese government announced its Fifth Generation Computer System (focused on AI and logic programming) and SuperSpeed (focused on supercomputing) programs in the early 1980s, many in Congress found this threat more urgent than anything related to Communist states.46 Roland and Shiman, Strategic Computing, 320. In response, funding for a wide variety of computing projects dramatically increased with the 1983 Strategic Computing Initiative (SCI). The technical community, however, had a more varied assessment of this announcement’s potential. Many saw this moment instead as a means to increase federal funding for computing research.47 Ibid. In some cases it was.48 “Cooper, Kahn, and others, who had gone to Japan to see for themselves what kind of threat the Fifth Generation posed, came back with a very different view than the one that Feigenbaum had sold to Congress. They thought the Japanese were far behind the US in computer development and AI. What is more, they were sure that the Japanese were headed down the wrong path. But if playing the Japan card would help sell SC, then they would play it. ‘We trundled out the Japanese as the arch-enemies,’ Cooper later confessed, noting that in private conversations with congresspeople and senators he ‘used it . . . unabashedly.’ In fact, Cooper went so far as to assert that he went to Japan specifically to gather material for this argument. The tactic worked. Congress formally approved the SCI in the Defense Appropriations Act of 1984. President Reagan signed it on the day it was passed, 8 December 1983.” Roland and Shiman, Strategic Computing, 91. In others, computing and microelectronics figures genuinely saw foreign competition as their biggest threat.49 Roland and Shiman, Strategic Computing, 300. The DoD was deeply concerned about “plac[ing] technology critical to American security interests in the hands of foreigners.”50 Ibid., 287.

AI was a key early focus of the Reagan administration, along with other technofuturist endeavors like the president’s efforts to construct space lasers. In 1981, “the Defense Science Board, a panel of civilian experts advising the Department of Defense, ranked AI second from the top of its list of which technologies had the most potential to make an order-of-magnitude impact on defense in the 1990s.”51 Emma Salisbury, “A Cautionary Tale on Ambitious Feats of AI: The Strategic Computing Program,” War on the Rocks, May 22, 2020, http://warontherocks.com/2020/05/cautionary-tale-on-ambitious-feats-of-ai-the-strategic-computing-program. This group recognized that the “key limiting factor on progress towards AI was clearly computing power, and this spurred calls for research into the development of faster and more powerful interactive computer systems.”52 Ibid.

The SCI aimed to create an “industrial base for artificial intelligence.”53 National Research Council, Funding a Revolution, 123. In AI, the SCI focused research around concrete military applications “intended to spark the military services’ interest in developing AI technology based on fundamental research.”54 Ibid., 214. This applied vision “altered [the] character [of AI research].”55 Ibid. The SCI “attracted a tremendous amount of industry investment and venture capital to AI research and development,”56 Ibid. and sent close to half of its research funds to industry hoping for spin-offs.57 Ibid. The SCI’s planners wanted to produce a true AI industry that could be embedded into “central roles in military equipment and command.”58 Edwards, The Closed World, 295; National Research Council, Funding a Revolution, 123. Advances in chips during this period allowed ideas like John Hopfield’s neural nets to be tested in practice; once again, increases in AI capabilities relied on government intervention in semiconductors.
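For a sense of why chip advances mattered here: a Hopfield network stores patterns in an N × N weight matrix and recalls them through repeated matrix-vector passes, so memory capacity and multiply throughput bound what can be tested. The sketch below is a minimal illustration with invented patterns and sizes, not Hopfield’s own formulation or any SCI code.

```python
# Minimal, illustrative Hopfield network: Hebbian storage of +/-1
# patterns and asynchronous recall. All data here are toy assumptions.
import numpy as np

def train(patterns):
    """Hebbian weight matrix for +/-1 patterns; zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=5):
    """Asynchronous updates; each sweep costs O(N^2) multiplies."""
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(state)):
            h = W[i] @ state          # local field at unit i
            if h != 0:                # keep current value on a tie
                state[i] = 1 if h > 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])  # corrupted copy of pattern 0
print(recall(W, noisy))                   # recovers the stored pattern
```

Storing patterns this way requires O(N²) weights and O(N²) multiplies per recall sweep, which is why such nets remained largely paper exercises until cheaper memory and faster chips arrived.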

The Japan conflict convinced pundits and national-security intellectuals that economies were the battlefields of the future. They embraced the term “economic security,” an expansion of national security to include “disposable capital in lieu of firepower, civilian innovation in lieu of military-technical advancement, and market penetration in lieu of garrisons and bases […] the logic of war in the grammar of commerce.”59 Mario Daniels and John Krige, Knowledge Regulation and National Security in Postwar America (Chicago: University of Chicago Press, 2022), 200. What mattered most was “control of markets, investment, and technology.”60 Ibid., 199. See also Reis, interview by DARPA: “So, how do we deal with that? He [DARPA director Craig Fields] felt that the thrust was going to be in things like, high-definition television, advanced electronics, and advanced computing. And it was important for DARPA to stay ahead in those sorts of things. In other words, the interaction between the commercial world and the military world was going to get more and more blurred as time goes on. So, it was important for the nation to stay ahead in the commercial world, as well as in the national security world.”

US policymakers and various companies arranged for frequent visits to Japanese industry to learn about their methods and technologies; this was formalized by the Clinton administration’s Commerce Department as the Japan Technology Project. These trips and facilitated information exchanges led to the professionalization of US chip manufacturing. Where manufacturing facilities had previously resembled research labs, they became more like profit-maximizing factories as a result of Japanese competition.61 Miller, Chip War, 126.

The chip industry, in particular, closely copied Japanese organizations and methods. The US government–industry collaboration SEMATECH, for example, was explicitly modeled on Japan’s MITI. The National Cooperative Research Act of 1984 “exempted research consortia from some antitrust laws and facilitated […] mergers.”62 National Research Council, Funding a Revolution, 113. This made initiatives like SEMATECH possible. Defense needs were also represented. “DARPA’s objectives,” for example, “were mentioned in SEMATECH’s strategic plan, including efforts to rapidly convert manufacturing technology into practice and to develop technology for more flexible semiconductor production.”63 Ibid., 130. SEMATECH coordination “allowed equipment manufacturers to meet one set of industry specifications rather than a variety of company specifications.”64 Ibid., 129–30.

The Japan conflict justified coercive trade agreements for the realist school. This is particularly interesting because it runs counter to the common understanding of the Reagan era as a period defined by the hegemony of neoclassical economics. Instead, the reactions to this conflict demonstrate the power of the national security state in alliance with semiconductor firms (contra the near-term interests of the computing industry).

The extent to which the US conflict with Japan reshaped understandings of war and peace in an imagined postindustrial age is clearly apparent in ex-DARPA director Craig Fields’s remarks at a 1995 White House forum:

[W]e are in a new age [of national security]. We cannot quite tell the difference between peace and war. It is not now black and white, it is shades of gray. It is not so clear who are friends and who are foes. […] There are lots of different kinds of aggression other than direct military aggression[:] indirect, trade, and so on. It is not so clear what a country is anymore, and companies are more and more global.65 Clinton White House (archive), “National Security Science and Technology Strategy,” Strengthening Economic Security, 1995, https://clintonwhitehouse3.archives.gov/WH/EOP/OSTP/forum/html/fields.html.

This blurring moreover meant that realists increasingly viewed globalizing industries and multinational firms as extensions of US state power: “even if markets were populated by private actors, the ‘security issues do not disappear’; they only ‘become submerged and hidden by market relations.’”66 Daniels and Krige, Knowledge Regulation and National Security in Postwar America, 201.

As industry and VCs moved away from early-stage, high-risk ventures in the 1980s, the federal government increasingly filled the gap. Moves like this one intensified what Daniela Gabor terms the “de-risking state.” The 1988 Omnibus Trade and Competitiveness Act, for example, transformed NIST and the federal labs around the needs of industry in the name of US “competitiveness.”67 “New Directions: Overview,” NIST, July 23, 2001, https://www.nist.gov/pao/nist-100-foundations-progress/new-directions-overview. While some accounts depict this as corporate capture, the reality is somewhat more complex.

The Japan panic of the 1980s, and the belief that future post–Cold War conflicts would resemble it, convinced many that US corporations were extensions of the US state and state power.68 The federal government helped create a new ecosystem around high tech that shifted risk from private to public institutions and simultaneously shifted profits from public to private ones. Other benefits were more subtle: where defense and federal money used to focus on the production of physicists during the Cold War,69 it now began to focus on the production of chip designers—aiming to keep Moore’s law going and to produce related technicians. In return for shouldering the burden of high-risk investments, federal agencies got seats on the boards of new tech companies, access to and a role in shaping new technologies, and influence over the system as a whole.

Bush I: Semi Chips & Potato Chips

Reagan-era tech policy marked a shift in focus toward civilian industry. This shift became a site of conflict under the Bush and Clinton administrations as Heritage-style conservatives became more organized and gained political power on the right. New Right stalwarts were critical of Reagan’s aggressive trade policy on behalf of semiconductor firms. They were also furious at the government intervention involved in programs like SBIR and the practices of agencies like NIST. The semiconductor industry was at the heart of this dispute. In the words of one analyst writing about SEMATECH, “the half-billion-dollar federal commitment marks a major shift in U.S. technology policy: a turn toward explicit support for commercially oriented R&D carried out in the private sector.”70 National Research Council, Funding a Revolution, 129. As the Cold War wound down in the late 1980s, some imagined “a civilian DARPA that could do for U.S. economic competitiveness what the old DARPA had done for military competitiveness.”71 Roland and Shiman, Strategic Computing, 7. This view significantly shaped programs like the SCI.

Reagan’s defiance of institutions like Heritage on aggressive state intervention for high-tech industries led to significant pressure on his successor, George H.W. Bush. New Right Republicans trusted Bush much less than Reagan and could vent their frustration more easily because Bush was not an emblem of their movement’s success. The end of the Cold War added fuel to the orthodox New Right case as well. Was defense spending on the scale of the Cold War necessary? The National Academies, writing in 1999, outlined the novelty of this debate. According to this body, the conflicts of the 1990s constituted “the first time in which fundamental questions are being raised about the infrastructural commitments and organizational principles that have guided federal support for research.”72 National Research Council, Funding a Revolution, 34.

The dismissal of DARPA’s director Craig Fields by the George H.W. Bush administration was a pivotal moment in this struggle over the role of the state. Fields’s firing was significant because DARPA had a long history of promoting precisely the kind of innovation he pursued. The pressure on H.W. Bush to make this move came from libertarian-leaning groups. Bush dismissed Fields for pursuing ventures “deemed to be more concerned with improving US commercial competitiveness than enhancing military preparedness.”73 Weiss, America Inc.?, 164. Specifically, Fields was fired for providing too much obvious aid to the semiconductor industry—through dual-use ventures and investments in semiconductor firms. He was “appearing to stray too far into the commercial arena, after having taken DARPA into a series of new dual-use ventures. But the final straw came when he authorized a $4 million equity investment in a company making semiconductor devices with advanced materials,” which was an “obvious breach of the state-market divide.”74 Ibid. Fields subsequently became the president of the Microelectronics and Computer Technology Corporation (MCC) and played a major role in Clinton administration tech and defense policy.75 Roland and Shiman, Strategic Computing, 310.

Even after Fields’s dismissal, key figures in the Bush administration were deeply concerned about industrial policy and its effect on the budget deficit.76 Ibid., 315. According to Alex Roland and Philip Shiman’s book on the SCI, “[s]everal of the president’s close advisers, particularly Richard Darman, the budget director, and Michael Boskin, the chairman of the Council of Economic Advisors, were particularly opposed to any interference in the functioning of the free market.”77 Ibid. Boskin is famous for his (possibly apocryphal) comment on chips: “Potato chips, semiconductor chips, what is the difference? They are all chips.”78 Carl Cannon, “Letter From Washington: The Bill Comes Due,” Forbes, September 10, 2001, https://www.forbes.com/asap/2001/0910/032.html. Darman, a Reagan holdover, similarly demonstrated his commitment to “the free market” when he showed little concern over Japan dumping DRAM chips, claiming: “What’s wrong with dumping? It is a gift to chip users because they get cheap chips. If our guys can’t hack it, let them go.”79 Ibid.

Nonetheless, the Bush administration continued Reagan’s significant and enthusiastic support for computing, especially the microelectronics industry. For example, the High-Performance Computing and Communications Initiative (HPCCI) began in 1989 as an Office of Science and Technology Policy (OSTP) initiative and was formally legislated in 1991.80 National Research Council, Funding a Revolution, 130–1. This program coordinated DOE, NASA, NSF, NSA, EPA, NIH, NIST, NOAA, and the VA around supercomputing. Due to the pace of microelectronics improvements, infrastructure developed for high-end computers was rapidly diffused to everyday civilian applications, so the program had considerable impact.

The semiconductor industry served as a model for government involvement in other industries as well. As a result of the greater emphasis on industrial needs initiated because of small-business and VC organizing, the federal role in research and development continued to transform under Bush. NSF, for example, “established a number of Engineering Research Centers (ERCs) to better link academic research to industrial needs, and the National Institute of Standards and Technology began its Advanced Technology Program, which funded consortia working on precompetitive research projects of mutual interest,” in the model of SEMATECH.81 Ibid., 154. Likewise, interdisciplinary science and technology centers (STCs) focusing on areas in computer science82 These areas included computer graphics and scientific visualization, discrete mathematics and theoretical computer science, parallel computing, and research in cognitive science. began appearing in 1989, funded by multiple agencies, universities, and industries.83 National Research Council, Funding a Revolution, 124–6.

The Gulf War created the impression that the technological dreams of Vietnam had been realized, and convinced many in the realist foreign policy camp that their support for tech industrial policy had been worth it. In particular, for AI, a “report by the American Association for Artificial Intelligence (1994) paraphrased a former director of ARPA in saying that DART (the intelligent system used for troop and materiel deployment for Operation Desert Shield and Operation Desert Storm in 1990 and 1991) ‘justified ARPA’s entire investment in artificial-intelligence technology.’”84 Ibid., 225. This use of AI mirrors modern uses. Israel in its present war on Gaza uses AI in similar ways85 Ben Reiff, “‘A Mass Assassination Factory’: Inside Israel’s Calculated Bombing of Gaza,” +972 Magazine, November 30, 2023, https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza. to generate targets that are legitimated by the public’s trust in numbers86 Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life (Princeton: Princeton University Press, 2001). and in tech’s infallibility.87 It looks as though the US is adopting the same approach. See Katrina Manson, “AI Warfare Is Already Here,” Bloomberg, February 28, 2024, https://www.bloomberg.com/features/2024-ai-warfare-project-maven/. These military ambitions and uses have shaped the form, funding, and development of information technologies.

The Clinton Administration: Gingrich versus Atari Democrats and the Information Industry Coalition

The Clinton administration was the purest articulation of Atari Democrat orthodoxy, further binding the party ideologically and materially to the tech sector. In their 1992 run, Clinton and Gore focused on seducing tech executives, who typically skewed Republican.88 Lily Geismer, Left Behind: The Democrats’ Failed Attempt to Solve Inequality (New York: Public Affairs, 2022), 236. They aimed to replicate Reagan’s industrial policy focused on civilian industry rather than defense—with a few additional tweaks. As industrial policy favoring information industries matured and unsupported industries collapsed and consolidated, information industries occupied an ever-larger position in the nation’s economy, and this favoritism became a matter of common sense and political survival. Exponential growth, underlain by exponential improvements in chip technology, made this strategy even more entrenched and easily justified. By the end of the 1990s, the National Academies could write narratives like the following:

The computer revolution is not simply a technical change; it is a sociotechnical revolution comparable to an industrial revolution. The British Industrial Revolution of the late 18th century not only brought with it steam and factories, but also ushered in a modern era characterized by the rise of industrial cities, a politically powerful urban middle class, and a new working class. So, too, the sociotechnical aspects of the computer revolution are now becoming clear. Millions of workers are flocking to computing-related industries. Firms producing microprocessors and software are challenging the economic power of firms manufacturing automobiles and producing oil. Detroit is no longer the symbolic center of the U.S. industrial empire; Silicon Valley now conjures up visions of enormous entrepreneurial vigor. Men in boardrooms and gray flannel suits are giving way to the casually dressed young founders of start-up computer and Internet companies. Many of these entrepreneurs had their early hands-on computer experience as graduate students conducting federally funded university research.89 National Research Council, Funding a Revolution, 1–2. 

Not only did the Clinton administration desire a closer relationship with tech industries like the semiconductor industry, but the semiconductor industry also wanted a closer relationship with the government.90 Craig Barrett to John Gibbons and John Deutch, July 1, 1993, William J. Clinton Presidential Library and Museum, National Archives. Craig Barrett was here acting in his capacity as chair of SIA strategy. Clinton administration figures correctly identified the extent to which Republicans were constrained by their right flank in support for the tech industry and made explicit promises to deliver where Bush could not. For example, in 1993 talking points for an upcoming meeting with Semiconductor Industry Association (SIA) figures, Clinton’s OSTP writes that “despite industry’s concerns, this administration will provide a more favorable environment than the Bush administration did for NACs.”91 Mark Hartney to John Gibbons and Kitty Gillman, July 16, 1993, William J. Clinton Presidential Library and Museum, National Archives. Both industry and the Clinton administration wanted to extend the SEMATECH model within and beyond the semiconductor industry.92 Bill Spencer to D.A. Henderson, March 22, 1993, William J. Clinton Presidential Library and Museum, National Archives. Clinton administration figures attributed chip industry resurgence to Reagan-era policies such as SEMATECH and US government efforts to “open the Japanese market.”93 John Gibbons to Ron Brown, Hazel O’Leary, William Perry, and Neal Lane, January 13, 1993, William J. Clinton Presidential Library and Museum, National Archives.

The SIA roadmap created a vehicle for more closely coordinating government and industry, as well as for major changes in industry itself. SIA, SEMATECH, and the Semiconductor Research Corporation (SRC) adapted their structures to roadmap needs and began collaborating more closely. Likewise, the document and attendant planning and implementation processes gave industry the occasion to coordinate with numerous agencies (e.g., DoD, DoE, DoC, NSF, NIST, OSTP, and NEC) around roadmap goals. The administration formalized this collaboration by creating the Semiconductor Technology Council, which replaced the SEMATECH oversight committee. The Clinton administration also made industry partnership with agencies and labs easier.94 Via Cooperative Research and Development Agreements (CRADAs). Industry actively and urgently sought this collaboration.95 Bill Spencer, for example, wrote to Gore: “To be effective, it [the road map] will require an interagency perspective from the government as well as a capability to act on cross-cutting initiatives that go beyond the mission of individual agencies or departments. […] A partnership is now needed to mobilize our nation-wide talent and to address the entire range of semiconductor technology complexities that will confront us as we face the challenges of leadership in the information age.” Bill Spencer to Al Gore, March 10, 1993, William J. Clinton Presidential Library and Museum, National Archives.

Government partnerships with information industries became even more extensive and formalized in the first half of the Clinton administration. Government adoption of business practices is often commented upon; the inverse rarely is. Yet both were products of the revolving door and other forms of public-private blurring, in addition to ideology.96 This collaboration further blurred the public-private divide, and institutionalized this blurring in personnel decisions and in many industry-government initiatives. Government and industry practices came to resemble each other more and more: industry adopted governmental features and practices, while government did the same with industry’s. As Clinton figures articulated, the “magnitude [of government-industry cooperation in microelectronics] masked its dispersal across various agencies and firms.”97 Mark Hartney to Skip Johns, April 1, 1993, William J. Clinton Presidential Library and Museum, National Archives. The Clinton administration, for example, formalized access to Japanese techniques and technologies, along with other foreign tech assessments, which industry repeatedly asked for.

The Clinton administration, moreover, presided over and shaped the construction of a new global order in the wake of the collapse of the Soviet Union, an order built around the needs of high-tech industries. As Chris Miller argues, the US had replaced its early Cold War order in Asia, centered on the Korean and Vietnam Wars, with a post-Vietnam, US-centered order organized around chip production.98 Miller, Chip War, 78, 112–14, 132, 149, 163–67. While the ascendancy of Japanese high-tech companies endangered this order in the 1980s, by 1993 the Japanese threat to American technological supremacy had faded. The Clinton administration formalized, extended, and expanded this strategy as computing and information industries gained greater shares of the US economy—a natural outcome of policies pursued under Reagan and the elder Bush.

The Clinton administration and the tech industry worked closely on the trade deals, rules, and institutions that shaped the post–Cold War international order, as well as on domestic policy. The administration also gave industry other benefits, like lax antitrust regulation. AI, despite declines in federal funding, received an effective subsidy from US government funding, planning, and foreign policy for semiconductors and other information technologies.

The SIA road map delivered benefits not just to the semiconductor industry but to all industries that relied on cheap, predictable improvements in chips. It coordinated vast swaths of the industry, including suppliers and peripheral entities, and institutionalized Moore’s law, which delivered relatively predictable advances in chip technologies. Other industries could plan around, and reap the benefits of, this predictable advance in capability. In the late 1990s, when engineering challenges and fears of international competition pushed the road map to internationalize, these benefits to related industries like AI increased.

In tandem with this industrial-state coordination, narratives about “the New Economy” were developed and disseminated through networks of politicians, pundits, and executives:

[T]he rapid integration of computing and telecommunications technologies into international economic life, coupled with dramatic rounds of corporate layoffs and restructuring,99 The defense industry, most notably, imploded. had given rise to a new economic era. Individuals could now no longer count on the support of their employers; they would instead have to become entrepreneurs, moving flexibly from place to place, sliding in and out of collaborative teams, building their knowledge bases and skill sets in a process of constant self-education. The proper role of government in this environment, many argued, was to pull back, to deregulate the technology industries that were ostensibly leading the transformation, and, while they were at it, business in general.100 Fred Turner, From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism (Chicago: University of Chicago Press, 2006), 7.  

Accounts like these, distilled here by Fred Turner, were undergirded by myths of the self-made tech entrepreneur, who supposedly started lucrative multinational corporations from his garage.101 And it was almost always “his.” Such myths have been punctured time and time again. Is it any wonder that these elite-flattering narratives were originally produced to sell tech-anxious elites consulting services and access to elite networks?102 Stewart Brand’s Global Business Network, for example. See Turner, From Counterculture to Cyberculture, Chapter 6; and Geismer, Left Behind, 237.

At the same time, these narratives, and others that painted tech as a tide that would lift all boats, provided cover for the Clinton administration to significantly cut welfare. As a result, people who were in no way freed from the banalities and rootedness of their jobs (unless they were ex-factory workers unlucky enough to be freed from employment entirely by new tech-friendly trade deals) suffered. Nonsense techno-optimist narratives, self-flattery, and visions of liberation from material conditions and “nonhierarchical meritocracy” for the new elite; cheap credit, “access” to banking, and trimmed-down welfare (which had actually provided some protection from the vicissitudes of the market) for everyone else.103 Geismer, Left Behind, Chapter 5.

Clinton’s reelection campaign deepened the administration’s ties to Silicon Valley; software firms and new Silicon Valley businesses began flexing their political muscles. Seventy-six prominent tech executives, including Steve Jobs, backed Clinton; Marc Andreessen gushed about Gore.104 Ibid., 238–9. After Republicans regained control of both houses of Congress in the 1994 midterm elections, they attacked the mainstays of Reagan-era industrial policy. These Republicans rejected the idea that the “federal science establishment” had much to do with US technological competitiveness.105 Weiss, America Inc.?, 44; United States, Department of Commerce Dismantling Act of 1995: Joint Hearing before the Subcommittee on Commerce, Trade, and Hazardous Materials and the Subcommittee on Telecommunications and Finance of the Committee on Commerce, House of Representatives, One Hundred Fourth Congress, First Session, on H.R. 1756, July 24, 1995 (Washington, DC: US Government Publishing Office, 1995), 7. They even objected to the public-private partnerships that had become a staple of Clintonite industrial policy: “promoting government industry partnerships to advance technology for which the government is not the primary customer.”106 Ibid. They claimed all foreign industrial policy efforts had failed.107 That was demonstrably not the case; cf. Japan’s VLSI program.

Upon his election as Speaker of the House in 1995, Republican congressman Newt Gingrich took up the mantle of Reagan with a greater allegiance to the libertarian New Right elements of the party. He and his allies espoused an even more techno-utopian ideology than the Atari Democrats. They imagined the internet as the mechanism through which to present the aims of the party—“welfare reform,” tough-on-crime policies, tax cuts, and deregulation—as policies of the future.108 Turner, From Counterculture to Cyberculture, 231. Gingrich believed technology would obviate the need for the state economically and politically.109 “The elections of 1994 usher in the first Republican majority in both houses of Congress for forty years. Led by Newt Gingrich, the House of Representatives in the mid 1990s pushed for the downsizing of government and widespread deregulation—especially in the telecommunications sector. Together with Alvin Toffler, George Gilder, and technology journalist and entrepreneur Esther Dyson, Gingrich argues that America was about to enter a new era, one in which technology would do away with the need for bureaucratic oversight of both markets and politics. As Gingrich and others saw it, deregulation would free markets to become the engines of political and social change that they were meant to be.” Turner, From Counterculture to Cyberculture, 215. It is notable, then, that high-technology industries for the most part aligned with the Atari Democrats—on the side of industrial policy. In particular, the Gingrich House’s scorched-earth campaign against the Advanced Technology Program (ATP) and the Technology Reinvestment Project (TRP), which were designed in large part to help the semiconductor and electronics industries, forced industrial planners to hide their work. The Gingrich House likewise tried to dismantle the Department of Commerce (home of NIST) and the federal laboratory system. Industry and government agencies banded together and successfully blocked most of the proposed changes.

In coordination with industrial partners, Clinton implemented a shadow industrial policy for the information sector (exemplified by the semiconductor industry) and extended methods pioneered there to other service industries like banking.110 Banking transformed dramatically as a result of information industries and relaxed antitrust rules: in the 1990s the industry consolidated, and its practices changed as computers and the internet were integrated into everyday life. See David P. Leech, Albert N. Link, John T. Scott, and Leon S. Reed, NIST Planning Report 98-2: The Economics of a Technology-Based Service Sector (Arlington, VA: TASC, Inc., January 1998). The boundaries between public and private blurred significantly, a result both of the political necessity of concealing industrial policy and of close coordination with information industries. AI benefited from the information revolution and the industrial policies put in place by the Clinton administration. Several AI initiatives funded by DARPA in the 1960s and 1970s found applications in the “emerging national information infrastructure and electronic commerce” of the 1990s.111 National Research Council, Funding a Revolution, 216. Although funding for AI was significant, it was hidden by its dispersal across a number of programs and budget lines, like the Intelligent Systems and Software program, the Intelligent Integration of Information program, and basic research in the information sciences.112 Ibid., 219.

The Second Bush Administration: Privatization, VCs, Neglect, and Military-Industrial Consolidation

Following Gingrich and the libertarian right, the second Bush administration originally sought to dismantle Clinton-era government support for industrial policy. 9/11, however, made cuts to the national security state and its industrial complex politically impossible. The collapse of the defense industry through massive mergers and cuts during the 1990s created an opportunity for enormous tech profits in the 2000s; new tech companies and startups filled the void when the US invaded Iraq in 2003. Following the apparent success of Desert Storm, and in line with Rumsfeld’s aspiration to put the DoD in charge of tech policy, Iraq War military funding emphasized information technologies. The second Bush administration also advanced the privatization of more military functions through outsourcing and contracting.113 Jennifer Mittelstadt and Mark Wilson, eds., The Military and the Market (Philadelphia: University of Pennsylvania Press, 2022), 30. Tech-focused defense conglomerates like Booz Allen Hamilton gained prominence; surviving older defense contractors developed tech capabilities of their own.114 Ibid., 54–5. Another outcome of Bush II-era defense policies was the proliferation of agency venture capital initiatives modeled on the corporate VC arms developed by companies like Intel in the 1990s. These funds allowed US agencies like the CIA to shape technology development much as Intel’s VC arm had. They proliferated just as most VCs retreated from semiconductors and biotech and moved into software, internet services, and (as many alleged in the wake of the collapses of Pets.com and WeWork) vaporware.115 Nicholas, VC, 268–9.

The second Bush administration was characterized by a permissive attitude toward tech firms, devolving control to the security state in the wake of 9/11. At the behest of Intel, the administration further relaxed export controls and allowed the transfer of extreme ultraviolet (EUV) lithography technology to the Dutch firm ASML.116 Miller, Chip War, 187–9. Similarly, the administration did nothing to prevent technology transfers to Korean, Taiwanese, Singaporean, Japanese, and eventually Chinese chip producers. As companies like Intel felt assured of their continued hegemony via the international road map, they invested less in lobbying the government.

Firms like Microsoft and Intel benefited significantly from the government’s relaxation and consistently lax enforcement of antitrust rules. Supplying the chips for PCs running Microsoft Windows helped give Intel its technological lead in the 1990s and early 2000s. The collaboration, known as Wintel, meant that most computers were sold with Intel chips and Windows software, producing massive profits and near-monopoly status for both companies.117 Ibid., 127.

The Obama Era: Neglect and the Fabless Model

The Obama administration followed in the footsteps of the Clinton-era Atari Democrats. It was politically cozy with tech elites and ignored chip-producing firms in favor of design firms and (at least in the short term) cheap consumer electronics. Another marker of this era was the proliferation of agency “ARPAs” to fund technology, like the Intelligence Advanced Research Projects Activity (IARPA) for the intelligence community.118 This may have been for both ideological and practical political reasons: agencies preferred to echo DARPA rather than VCs, which had become increasingly associated with vaporware and valuation collapse.

The expense of new lithography techniques separated design and fabrication firms, a division epitomized by the partnership between Apple and the Taiwan Semiconductor Manufacturing Company (TSMC). The separation was accelerated by a shift in who headed tech firms: the proliferation of MBAs rather than engineers at the top sped the adoption of the fabless model, which cut operating costs in the short term by outsourcing fabrication. During Obama’s term, a focus on short-term profits over long-term research and sustainability spread like wildfire via firms like McKinsey, which had no special expertise119 Laleh Khalili, “In Clover,” London Review of Books 44, no. 24 (December 15, 2022), https://www.lrb.co.uk/the-paper/v44/n24/laleh-khalili/in-clover. in the complicated field of chip production.120 Miller, Chip War, 215. TSMC, incidentally, went in the opposite direction during this period and, unlike Intel, reinvested significantly in production.121 Ibid., 220.

Following Clinton-era thinkers like Craig Fields, the Obama administration believed that tech diffusion and globalization were inevitable and could only be slowed. This belief led it to misdiagnose problems in the chip industry as products of globalization rather than of monopolization.122 Ibid., 297. The configuration seemed to work: new, improved chips continued to provide the basis for other monopolistic firms like Google and to yield new AI-esque products, such as improved natural-language processing and virtual assistants like Siri.

Conclusion

The end of the Obama administration brought a number of major interrelated changes: the shift to costly EUV lithography; the end of the semiconductor road map and the inauguration of the less influential International Roadmap for Devices and Systems (IRDS); Intel’s inability to keep up with competition; the closure of IBM’s fortress-model fab for defense; and Intel’s panic about Chinese subsidies and interference (not unlike the 1980s Japan panic), subsidies of the sort that had already dashed dreams of a US solar panel industry. Together these events created a renewed chip panic within the defense-industrial complex by 2016. COVID-era supply chain issues in 2020 and 2021 made politicians pay attention; they conflated defense concerns with these short-term, visible shocks. That conflation led to the passage of the CHIPS Act in 2022.

The rise of cloud computing and the more recent growth of AI workloads in cloud facilities are leading to new vertical integration. Because specialized chips save energy (and thus money) for data center firms like Amazon, such firms are buying Nvidia-style chips for now but beginning to design their own chips for machine learning applications.123 Ibid., 238. This innovation has cut into the profits of general-purpose chip producers like Intel.124 Ibid., 237. Yearly conferences on how to continue or move beyond Moore’s law likewise frequently float special-purpose chips as a means of maintaining Moore’s law-like improvements.

Defense needs have shaped chips and information technologies like AI for their entire existence. Most major tech companies do at least some significant work with defense; the fact that they do not function exclusively as defense contractors shields them from the stigma that typically attaches to the defense-industrial complex. Many technologies are developed to be dual-use, with both civilian and military applications imagined from the start. The US military is presently imagining a new offset strategy based not on microchips but on AI (though this would, like Reagan’s Strategic Defense Initiative, require advances and investments in chips).125 Ibid., 287. The collaboration with tech executives cuts both ways. Democratic—and increasingly Republican—affiliations with tech companies have given those companies’ executives outsize influence on seemingly unrelated policy. For example, tech executives provided the major impetus for charter schools and “education reform” under Clinton and Obama.126 Geismer, Left Behind, 239.


As I’ve detailed throughout this chapter, the histories of AI and compute power (especially semiconductors) are closely intertwined. Often, the technofuturist promises of AI have provided cover for the funding of more banal improvements in chips and chip infrastructure. This was true of SCI funding and continues to be true of present AI funding—for example, Governor Hochul’s recent promotion of New York as an AI hub.127 Governor Kathy Hochul, “Governor Hochul Unveils Fifth Proposal of 2024 State of the State: Empire AI Consortium to Make New York the National Leader in AI Research and Innovation,” press release, January 8, 2024, https://www.governor.ny.gov/news/governor-hochul-unveils-fifth-proposal-2024-state-state-empire-ai-consortium-make-new-york. With the rise of special-purpose chips and cloud computing facilities, the fates of AI and the chip industry are entwined more closely than ever. Several semiconductor firms contest this, insisting AI is a short-term bubble that does not require sustained investment in new kinds of chips. But whether or not AI produces anything like the promised revolution, the volume of money the industry is directing at AI chip production will shape the trajectory and production of ever-improving semiconductors. In turn, those chips, their cost, and their capabilities will shape the political economy of tech and will determine how sustainable the political order built around access to cheap and regularly improving semiconductors proves to be.