On October 27, 2025, OpenAI submitted a letter to the White House Office of Science and Technology Policy (OSTP) requesting that the U.S. government widen the scope of the Advanced Manufacturing Investment Credit (AMIC) under the CHIPS Act to include AI-server production, data-centres and electrical-grid components. A week later, CEO Sam Altman publicly reiterated the message: the U.S. must re-industrialise “across the entire stack” in order to maintain its AI leadership.

Why it matters now: as AI models grow in size and complexity, infrastructure bottlenecks, from semiconductors to data-centre power to grid capacity, threaten the U.S.'s ability to compete globally.

This report will: (1) lay out the background of the CHIPS Act and OpenAI’s request; (2) analyse the strategic rationale and implications; and (3) examine lesser-covered angles, including workforce and supply-chain risk, and ask where this leaves smaller players.
The CHIPS Act and U.S. industrial policy
The CHIPS and Science Act (enacted in 2022) was designed to reverse decades of offshoring of semiconductor manufacturing by offering large subsidies, tax incentives and support for domestic fabrication. A core element is the Advanced Manufacturing Investment Credit (AMIC), a tax credit aimed at building fab capacity in the U.S. However, as OpenAI notes in its “Economic Blueprint”, this framework was built primarily with semiconductors in mind, not the full stack of AI infrastructure (power, data-centres, grid, networking). In other words: manufacturing chips is one thing; building the complete ecosystem that trains and runs large AI models is another.
OpenAI’s request
OpenAI’s letter to the OSTP asks that the existing 35 % tax credit be expanded to include:
AI server production (not just semiconductor fabs)
AI data-centres themselves
Grid and infrastructure components (transformers, specialized steel, turbines) needed for large-scale data-centres.

In a public post on X (formerly Twitter), Altman emphasised “re-industrialisation across the entire stack: fabs, turbines, transformers, steel, and much more”. He also clarified that the tax-credit request is distinct from any request for direct federal loan guarantees or bail-outs.
Strategic Analysis: Why This Request Matters
1. Infrastructure scale and urgency
OpenAI has committed to investing approximately US$1.4 trillion over the next eight years to build computing resources. At the same time, its internal analysis projects that the first US$1 trillion of AI-infrastructure investment could yield more than 5 % GDP growth over a three-year span. This underscores two pressures: (a) AI is entering a growth phase characterised by enormous capital intensity, and (b) there is a narrow window of opportunity for the U.S. to scale infrastructure so as not to cede leadership to other countries (notably China).
2. Emergence of bottlenecks beyond semiconductors
OpenAI’s argument draws attention to lesser-discussed constraints:
Electrical-grid capacity: in 2024, China added 429 gigawatts of new power capacity while the U.S. added only ~51 GW, creating what OpenAI calls an “electron gap”.
Supply chains for materials: the specialised steel, large transformers, turbines, etc., that underpin data-centres and chip fabs. The letter emphasises “the entire stack”.
Thus, tax credits that focus only on chip fabrication may leave the data-centre ecosystem hindered by grid or supply-chain shortfalls.
3. National-security and geopolitical dimensions
The request aligns with a broader U.S. strategy of “AI-as-infrastructure”. Maintaining dominance in large-scale training and deployment of AI models has become a matter of national competitiveness, and OpenAI explicitly frames its infrastructure proposals in national-security language. From a policy lens, widening the incentive net would effectively signal that the build-out of AI infrastructure merits the same treatment as chip fabs, enhancing domestic capacity and reducing dependency on foreign supply chains or overseas compute.
4. The investment risk and private-public interplay
OpenAI is asking for a tax-credit expansion, not a bail-out, and Altman stated the company is not seeking to become “too big to fail”. But the scale of its commitment (US$1.4 trillion) raises a question: who bears the risk if the infrastructure build-out stalls or overshoots demand? The underlying logic is that lowering the effective cost of capital (through tax credits) de-risks early-stage infrastructure investment, making private capital deployment more likely.
5. Implications for competition, labour and smaller players
A side-effect of this push is that large players able to mobilise billions may gain an even greater advantage. Meanwhile, workforce and localisation challenges remain: OpenAI notes that supporting data-centres and energy infrastructure will require an estimated 20 % of the current U.S. skilled-trades workforce over the next five years.
Thus, failure to pair infrastructure incentives with workforce and regional development could exacerbate geographic and economic inequalities.
Lesser-Covered Angle: Small Players, Regional Disparities & Supply-Chain Fragility
Impact on smaller AI firms and regional ecosystems
Much of the public discussion focuses on large tech players (e.g., OpenAI, Nvidia, chip manufacturers). But what about smaller AI startups, university labs, or regional data-centres? If tax incentives become skewed toward large-scale infrastructure, the result could be a “two-tier” ecosystem: big incumbents build massive facilities while smaller players continue to rent, or are excluded altogether. Furthermore, regional disparities (between states, or between metropolitan and rural areas) could deepen. States with favourable permitting, abundant power and grid capacity (e.g., Texas, Arizona) will likely host the large data-centres; states without those advantages may be left behind.
Supply-chain resilience and single-point failures
While building data-centres and chips is essential, there are risks of single-point dependencies: reliance on one provider of grid equipment, one foundry, or one region with cheap power. This is rarely discussed in depth, yet if a supply-chain disruption hits (steel, rare earths, transformers), the entire AI build-out slows. OpenAI’s mention of “transformers, steel, turbines” signals awareness of this, but policy focus tends to stay on chip fabs.
Workforce transition and structural challenges
Training large AI models and running massive compute farms is not just about capital expenditure; it is about skilled labour, infrastructure maintenance, and energy management. Historically, U.S. domestic semiconductor manufacturing has faced workforce bottlenecks. If this next era of AI infrastructure accelerates faster than workforce training, the gap will be in labour, not just hardware.
What This Means Going Forward
For U.S. policy
If the U.S. government extends the tax credit as requested, it could signal a shift in industrial policy: from supporting chip fabs alone to underwriting the broader AI compute stack (hardware, data-centres, grid). That could accelerate the build-out, attract private investment, and enhance U.S. resilience. However, it also raises questions. Will Congress approve such an expansion? Will the focus on large-scale infrastructure divert attention from regulation of AI risks (data privacy, algorithmic bias, surveillance)? And how will the U.S. balance investment with avoiding subsidy capture by a few winners?
For the private sector
Large AI firms like OpenAI are betting that massive infrastructure will be a competitive moat. Smaller players may need to find cost-efficient ways to access compute (cloud, partnerships) or pursue niche specialisations rather than build ex nihilo. Hardware suppliers, power companies, grid-equipment firms and local construction trades are likely to benefit from this push, but only if permitting and logistics keep pace.
For global dynamics
China, Europe and other regions are not idle. OpenAI’s submission points to China’s aggressive build-out of power capacity and data-centres. The U.S. accelerating its AI ecosystem means competition will shift more into the compute-and-infrastructure domain rather than pure algorithms.
For broader society
Massive infrastructure means power draw, environmental footprint, and regional impacts on employment, housing, and land use. If the build-out is concentrated in a few regions, there may be local knock-on effects (energy-price spikes, workforce stress, real-estate inflation) that are rarely discussed in tech-news framing. Moreover, access to that infrastructure (who gets to use it?) will determine how widely the benefits of AI growth are shared across communities.
OpenAI’s push to expand the CHIPS Act’s tax credit to encompass AI servers, data-centres and grid components is more than a corporate ask; it is a strategic move to reshape U.S. industrial policy around the next era of AI. It signals that the challenge is no longer just chips, but the full stack of compute, power, infrastructure and supply chains. If successful, this could accelerate U.S. AI leadership, but it also raises risks: concentration of infrastructure in the hands of large players, deepening regional divides, workforce bottlenecks, and under-addressed supply-chain fragilities. The critical question now: *Will the U.S. government meet the moment and expand incentives accordingly, and if so, how will policy ensure that the benefits of this infrastructure build-out are widely shared?*
