
Authored by Mihir Kshirsagar
Observers invoke railroad, electricity, and telecom precedents when contextualizing the current generative artificial intelligence (GenAI) infrastructure boom—usually to debate whether or when we are heading for a crash. But these discussions miss an important pattern that held across all three prior cycles: when the bubbles burst, investors lost money, but the infrastructure they left behind enabled productivity gains that monopolistic owners could not fully capture. Investors lost, but society won.
GenAI threatens to break this pattern. Whether or not the bubble bursts as many anticipate, we may not get the historical consolation prize. There are two reasons to doubt that GenAI will follow this pattern:
First, as I discuss in my prior post, the chips powering today’s systems have short asset lives compared to the decades-long lifespans of past cycles’ infrastructure. Companies are also actively pursuing software optimization techniques that could dramatically shrink hardware requirements. In either case, the infrastructure left behind after a correction is not likely to become a cheap commodity for future growth.
Second, the current market is shaped by hyperscaler-led coalitions that enable surplus extraction at multiple layers. As I discuss below, usage-based API pricing captures application value, information asymmetry enables direct competition with customers, and coalition structures subordinate model developers to infrastructure owners. If productivity gains materialize at scale, these rent extraction capabilities may enable hyperscalers to realize the revenues that justify their infrastructure investment—something past infrastructure owners could not do. Either way, whether through sustained profitability or post-bust consolidation, the structural conditions that enabled broad diffusion of benefits in past cycles are absent.
Now, there are some countervailing considerations. The research and development supporting open-weight models might allow productivity gains to spread more broadly, and those models could even serve as the “stranded assets” that enable future innovation if the bubble bursts. But the regulatory environment needs to support such initiatives.
Railroads
The railroad industry consolidated dramatically after the Panic of 1873. By the early 1900s, seven financial groups controlled two-thirds of the nation’s railroad mileage. J.P. Morgan’s syndicate reorganized bankrupt roads into the Southern Railway and consolidated eastern trunk lines. Edward Harriman controlled the Union Pacific and Southern Pacific systems. James J. Hill dominated northern routes through the Great Northern and Northern Pacific. This concentration raised serious antitrust concerns—the Sherman Act was passed in 1890 largely in response to railroad monopoly power.
But even with consolidation, railroad owners still struggled to pay their debts because the infrastructure’s economic benefits were dispersed across the economy and did not flow back directly to the owners. Richard Hornbeck and Martin Rotemberg’s important work shows that when economies have input distortions—misallocated labor, capital stuck in less productive uses, frictions in resource allocation—railroad infrastructure can generate substantial economy-wide productivity gains. These gains persisted over decades regardless of which financial group controlled the local rail lines. Farmers in Iowa shipping grain to Chicago paid freight rates, but the productivity improvements from market access—crop specialization, mechanization investments justified by larger markets, fertilizer access—stayed with the agricultural sector.
The infrastructure that enabled these gains had useful lives measured in decades. Railroad tracks laid in the 1880s remained economically viable into the 1920s and beyond. Rolling stock, locomotives, and terminal facilities similarly had useful lives of twenty to forty years. When railroads consolidated, the long-lived infrastructure continued enabling agricultural productivity gains. The consolidation was anticompetitive, but the economic benefits didn’t concentrate entirely with the infrastructure owners.
Three structural constraints, first introduced with the Interstate Commerce Act of 1887 but only effectively enforced nearly two decades later, prevented railroad owners from capturing the economic surplus generated by their investments. First, bound by common carrier obligations, railroads charged fixed rates for shipping based on weight and distance, not a share of crop value. The railroad recovered infrastructure costs plus a margin, but could not discriminate based on agricultural productivity. Second, railroads had no visibility into which farms were most productive, or which crops were most profitable beyond what could be inferred from shipping volumes. As a result, they could not observe and selectively advantage their own agricultural ventures. Third, railroads faced substantial barriers to entering agriculture; directly operating farms required different expertise, capital, and management than operating rail networks. Now, railroads did try to move upstream, but regulatory actions prevented them from extending their dominant position.
Electricity
Samuel Insull built a utility empire in the 1920s that collapsed spectacularly in 1932, taking over $2 billion in investor wealth with it (nearly $50 billion today). The subsequent restructuring produced regional utility monopolies—by the 1940s, electricity generation and distribution were recognized as natural monopolies requiring either public ownership or regulated private provision. This consolidation was problematic enough that Congress passed the Public Utility Holding Company Act in 1935 to break up remaining utility combinations.
Despite the market correction, the generating plants and transmission infrastructure built in the 1920s and 1930s had useful lives of forty to fifty years. Even as utility ownership consolidated into regional monopolies, the long-lived infrastructure continued enabling manufacturing productivity gains: utilities sold the electricity that powered those gains but could not capture the surplus.
Cheap electricity transformed American manufacturing in ways the utilities could not fully capture. Paul David’s foundational work on the “dynamo problem” shows that electrification enabled factory reorganization—moving from centralized steam power with belt drives to distributed electric motors allowed flexible factory layouts, continuous-process manufacturing, and eventually assembly-line production. Manufacturing productivity gains from electrification were substantial and persistent, but utilities sold kilowatt-hours at regulated rates. They could not price discriminate based on which manufacturers were most innovative or extract ongoing surplus from manufacturing productivity improvements.
The constraints preventing electric utilities from capturing the surplus paralleled railroads in important respects, and were also eventually imposed through regulation. Utilities charged volumetric rates for electricity consumed, not a share of manufacturing output. A factory paid based on kilowatt-hours used, whether it was producing innovative products or commodity goods. Regulation eventually standardized rate structures, limiting even the ability to price discriminate across customer classes. Utilities had minimal visibility into how electricity was being used productively—they knew aggregate consumption but couldn’t observe which production processes were most valuable. And while some utilities did integrate forward into consumer appliances to stimulate residential demand, this was primarily about increasing electricity consumption rather than controlling downstream markets. Utilities faced prohibitive barriers to entering manufacturing directly; operating generating plants and distribution networks required different capabilities than running factories.
Telecom
In more recent memory, the telecom bust following the dot-com crash was severe. Several competitive local exchange carriers went bankrupt between 2000 and 2003. WorldCom filed for the largest corporate bankruptcy of its time in 2002. The resulting consolidation was substantial—Level 3 Communications acquired multiple bankrupt competitors’ assets, Verizon absorbed MCI/WorldCom, AT&T was reconstituted through acquisitions. By the mid-2000s, broadband infrastructure was concentrated among a handful of major carriers.
But the fiber deployed in the 1990s—much of it still in use today—enabled the internet economy to flourish. The economic productivity gains from internet access are well-documented: e-commerce, SaaS businesses, remote work, streaming services, cloud computing, and so on.
The constraints limiting telecom value capture were similar to earlier cycles. Carriers primarily sold bandwidth based on monthly subscriptions or per-gigabyte charges, not revenue shares from application success. A startup building on fiber infrastructure paid the same rates as established businesses. Carriers had limited visibility into which applications were succeeding and could not easily observe application-layer innovation. And telecom providers faced substantial technical and regulatory barriers to competing at the application layer during the critical formation period. Network operators were not positioned to compete with e-commerce sites, SaaS platforms, or streaming services in the late 1990s through early 2010s when the web economy was taking shape.
There were exceptions that tested these boundaries. AT&T’s acquisition of Time Warner and Verizon’s forays into media ventures showed carriers trying vertical integration. And the important net neutrality debates centered on whether carriers could favor their own services or extract rents from application providers. Regardless, during the critical period when the web economy rose to prominence, telecom companies were not vertically integrated into applications, so their infrastructure was available on relatively neutral, horizontal terms.
The pattern across all three historical cases is consistent. Infrastructure consolidation happened and proved sticky, raising legitimate competition concerns. But structural constraints meant even monopolistic infrastructure owners could not fully capture application-layer surplus. They charged for access to infrastructure—shipping, kilowatt-hours, bandwidth—but the productivity gains from using that infrastructure diffused broadly through the economy. The long useful lives of the infrastructure meant these spillovers persisted for decades, even as ownership consolidated.
GenAI’s Obsolescence Trap
As I’ve discussed here previously, the chips powering today’s AI systems have useful lives of one to three years due to rapid technological obsolescence and physical wear from high-utilization AI workloads. This short useful life means that even if AI infrastructure spending produces excess capacity, that capacity will not be available for new entrants to acquire and leverage effectively. In railroads, electricity, and telecom, stranded assets with decades of remaining useful life became resources that others could access. Three-year-old GPUs do not provide a competitive foundation when incumbent coalitions are running current-generation hardware. Put differently, in a hypothetical 2027 GenAI bust, an over-leveraged data center stocked with two-year-old H100s will be comparatively worthless. That compute cannot be bought for pennies on the dollar to fuel new competition. The only entities that can survive are those hyperscalers with the massive, continuous free cash flow to stay on the “GPU treadmill”—namely, Microsoft, Google, Amazon, and Meta. (Dramatic increases in software efficiency could break this hardware moat, but the hyperscalers’ control over distribution channels is difficult to overcome.)
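To make the contrast concrete, here is a minimal back-of-the-envelope sketch in Python, using straight-line depreciation and round numbers I have assumed purely for illustration (they are not anyone’s actual costs or accounting treatment):

```python
# Back-of-the-envelope comparison of residual asset value after a bust.
# All figures are hypothetical round numbers chosen for intuition only.

def residual_value(cost: float, useful_life_years: float, age_years: float) -> float:
    """Straight-line residual value, floored at zero once fully depreciated."""
    remaining = max(useful_life_years - age_years, 0.0)
    return cost * remaining / useful_life_years

# A $1B GPU fleet on a 3-year life vs. $1B of fiber on a 40-year life,
# each two years old at the moment of a hypothetical correction.
for label, life in [("GPU fleet (3-yr life)", 3), ("Fiber plant (40-yr life)", 40)]:
    value = residual_value(1_000_000_000, life, age_years=2)
    print(f"{label}: ~${value / 1e6:,.0f}M residual after 2 years")

# GPU fleet (3-yr life): ~$333M residual after 2 years
# Fiber plant (40-yr life): ~$950M residual after 2 years
```

And straight-line accounting, if anything, flatters the GPUs: it ignores the competitive obsolescence of trailing-edge chips against current-generation hardware, which is the deeper problem.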
The combination is what changes the outcome: vertical integration that enables surplus extraction, information position that enables direct competition, coalition structure that subordinates model developers to infrastructure owners, and short asset life that prevents the emergence of reusable infrastructure that others can access.
GenAI’s Vertical Integration Overcomes Prior Constraints
The GenAI infrastructure buildout is producing market concentration through coalition structures: Microsoft-OpenAI, Amazon-Anthropic, Google-DeepMind. These are not loose partnerships—they are deeply integrated arrangements where the hyperscaler’s infrastructure economics directly enable their coalition’s competitive positioning at the application layer. Microsoft has invested billions in OpenAI and provides exclusive Azure infrastructure. Amazon is heavily invested in Anthropic. Google acquired DeepMind and is developing Gemini models that are integrated across Google Workspace and Cloud.
This vertical integration attacks all three constraints that limited value capture in past cycles.
First, usage-based GenAI pricing captures application-layer surplus through uncapped rates. Historically, railroads also charged based on usage—more cargo meant higher bills—but regulators eventually imposed the requirement to charge “reasonable and just” rates. Similarly, electric utilities charge per kilowatt-hour but face state commission oversight that caps rates at cost-plus-reasonable-return. These regulatory firewalls prevented infrastructure providers with natural monopoly characteristics from extracting surplus beyond what regulators deemed justified by their costs. GenAI providers charge uniform per-token rates, but they have no common carrier obligations. Moreover, while enterprise pricing remains opaque, the structure of published rates suggests that customers’ costs scale in close proportion to usage. This pricing structure, unconstrained by rate regulation or transparent volume pricing, allows concentrated infrastructure providers to capture ongoing application-layer surplus as successful applications scale.
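A stylized sketch makes the difference visible. The per-token rate, marginal cost, and usage volumes below are invented for illustration; the point is the structure, not the figures:

```python
# Stylized contrast: uncapped per-token pricing vs. a railroad/utility-style
# cost-plus cap, as a successful application scales 100x.
# All rates and volumes are invented for illustration.

PER_TOKEN_RATE = 10 / 1_000_000    # assumed: $10 per million tokens
MARGINAL_COST = 2 / 1_000_000      # assumed: provider's cost per token
REGULATED_MARGIN = 0.10            # assumed: a 10% cost-plus cap

for monthly_tokens in (1e9, 10e9, 100e9):   # the application grows 100x
    uncapped = monthly_tokens * PER_TOKEN_RATE
    cost_plus = monthly_tokens * MARGINAL_COST * (1 + REGULATED_MARGIN)
    print(f"{monthly_tokens / 1e9:>5.0f}B tokens/mo: "
          f"uncapped ${uncapped:>9,.0f} vs. cost-plus ${cost_plus:>9,.0f}")

# Both bills scale with usage, but the uncapped margin lets the provider's
# absolute take grow in lockstep with the application's success, while a
# cost-plus cap would leave most of that surplus downstream.
```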
This capability has implications beyond just who benefits. In past cycles, infrastructure owners couldn’t capture application-layer surplus, which meant projected revenues never materialized and bubbles burst. If GenAI’s rent extraction model works, it changes the financial calculus and hyperscalers may actually generate sufficient revenues to cover their capital expenditures. But this “success” would come at the cost of concentrating gains rather than diffusing them broadly.
Second, API usage patterns reveal application-layer innovation. Railroads could not easily observe crop profitability, utilities could not see manufacturing processes, and telecom providers in the 1990s-2000s could not easily monitor which web applications were succeeding. Hyperscalers can see which applications are working through API call patterns, token usage, and query types. This information asymmetry could allow them to identify promising use cases and compete directly. For example, Microsoft can observe what enterprises build with OpenAI, and Google can see which applications gain traction on Gemini. The infrastructure position provides comprehensive competitive intelligence about the application layer.
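To illustrate the mechanism, here is a hedged sketch of how even coarse API metadata could surface fast-growing application categories without any access to content. The schema, tags, and threshold below are entirely hypothetical; no provider’s actual telemetry is public:

```python
# Hypothetical illustration: spotting fast-growing use cases from API
# metadata alone (customer, use-case tag, monthly token volume).
# All records and field names are invented.
from collections import defaultdict

usage_log = [  # (customer_id, use_case_tag, month, tokens)
    ("cust_a", "contract-review", 1, 2e8),
    ("cust_a", "contract-review", 2, 9e8),
    ("cust_b", "chat-support", 1, 5e8),
    ("cust_b", "chat-support", 2, 6e8),
]

volume = defaultdict(dict)
for customer, tag, month, tokens in usage_log:
    volume[(customer, tag)][month] = tokens

# Flag any customer/use-case pair whose volume more than doubled.
for (customer, tag), by_month in volume.items():
    if by_month.get(2, 0.0) > 2 * by_month.get(1, float("inf")):
        print(f"fast-growing segment: {customer} / {tag}")

# -> fast-growing segment: cust_a / contract-review
```

An infrastructure owner with this vantage point, unlike a railroad watching boxcars, sees the growth curve of every application built on top of it.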
Third, hyperscalers are positioned to compete at the application layer. Railroads did not enter farming, utilities did not run factories, and while some telecom providers in the 1990s tried to compete with web startups through “walled gardens,” that strategy failed. By contrast, hyperscalers are already application-layer competitors. Microsoft competes in enterprise software. Google competes in productivity tools through Workspace. They can leverage GenAI capabilities to enhance existing products while simultaneously selling API access to would-be competitors. The integration runs both directions—infrastructure enables their own applications while extracting value from others’ applications. Indeed, the software industry has a long history of platforms cannibalizing or “sherlocking” the applications they enable.
Moreover, this dynamic differs fundamentally from how cloud services were used in the last decade. When Netflix or Uber ran on AWS, they used the cloud as a commodity utility to host their own proprietary code and business logic. Amazon provided the servers, but it was not the “brain” of the application. In the GenAI stack the application logic—the reasoning, the content generation, the analysis—resides within the infrastructure provider’s model, not the customer’s code. This shifts the relationship from hosting a business to “renting cognition,” allowing the infrastructure owner to capture a significantly higher share of the value creation.
The coalition structure reinforces vertical control. OpenAI is the public face of AI innovation but is structurally dependent on Microsoft’s infrastructure. Anthropic operates primarily on AWS and is tied to Amazon’s ecosystem. Even the most prominent model developers lack true independence—they’re subordinate partners in coalitions where the hyperscaler captures value through multiple channels while retaining the option to marginalize or compete with the model developer if advantageous.
Consolidation Without Spillovers
In past infrastructure cycles, the implicit social bargain was clear: while investors lost, society gained. Railroad, electricity, and telecom markets all concentrated substantially after their corrections, but the infrastructure continued enabling broad economic gains that owners could not fully capture. GenAI breaks this pattern. Whether through sustained profitability (enabled by rent extraction) or through post-bust consolidation (without reusable stranded assets), we may not get the historical consolation prize.
The primary counter-narrative rests with the open-weight ecosystem. A robust, competitive landscape of open models could directly challenge the structural constraints that enable surplus extraction. This open-model path, therefore, represents a critical mechanism for realizing the broad, decentralized “spillover” benefits that characterized past infrastructure cycles. Thus, supporting this ecosystem, whether through public access to compute or pro-competitive interoperability rules, should be a strategic imperative for ensuring that the productivity gains from AI diffuse broadly rather than concentrating within the hyperscaler coalitions.
Author Note: Thanks to Sander McComiskey for his excellent research assistance and critical feedback. Also thanks to Andrew Shi and Arvind Narayanan for invaluable feedback.
Mihir Kshirsagar directs Princeton CITP’s technology policy clinic, where he focuses on how to shape a digital economy that serves the public interest. Drawing on his background as an antitrust and consumer protection litigator, his research examines the consumer impact of digital markets and explores how digital public infrastructure can be designed for public benefit.

