Jensen Huang secures positions early: On the eve of the CPO boom, optical modules enter an "arms race"
Source: U.S. Stock Research Agency
Amid the rapid advance of AI, the market tends to focus on the iteration of GPU compute power while overlooking the "nervous system" that ties this massive compute network together.
While everyone is still arguing over the production allocation of H100 and B200, NVIDIA founder Jensen Huang has already turned his gaze to deeper infrastructure—optical interconnects.
Recently, it has been reported that NVIDIA has invested $2 billion each in leading optical communication companies Lumentum and Coherent, securing large purchase commitments and future production capacity rights.
This move may appear as a financial investment, but it's essentially a "strategic encirclement" of compute power infrastructure.
This marks a fundamental shift in the competitive logic of the AI industry chain: from a simple race for chip performance to capital binding and capacity locking in the upstream core supply chain.
Optical modules, especially Co-Packaged Optics (CPO) technology, are on the verge of an explosive growth, signaling the silent start of an "arms race" for optical interconnects.
It's Not an Investment, It's Capacity Locking: NVIDIA Is Preparing for an "Optical Module Shortage"
To understand the real significance of NVIDIA's move, one must look back at the past two years when profit margins for AI servers were squeezed by HBM (High Bandwidth Memory) supply chain constraints.
At the early stage of AI compute power explosion, the market generally saw GPU manufacturing capacity as the bottleneck. However, as demand surged, real constraints quickly shifted to HBM.
The three memory giants (SK Hynix, Samsung, Micron) held strong pricing power due to their monopoly over HBM packaging technology.
Continuous rise in HBM prices directly eroded profit margins for GPU vendors, and even resulted in the inability to deliver some high-end graphics cards due to memory shortages.
This passive situation is intolerable for NVIDIA, a company that pursues extreme efficiency and cost control.
It was a lesson learned the hard way. NVIDIA knows well that, as compute density increases exponentially, the next bottleneck will not be memory but the bandwidth and power consumption of data transmission.
As AI clusters scale from tens of thousands to hundreds of thousands of cards, traditional electrical interconnect technology has reached the limits of physics.
In high-speed signal transmission, loss and heat generation in copper cables rise sharply with signal rate and distance, and the Power Wall has become the biggest obstacle to unleashing compute power.
Data shows that in high-speed scenarios, I/O interface energy consumption can even exceed the energy the chip spends on computation itself. At this point, optical interconnects are no longer optional, but essential.
Co-Packaged Optics (CPO) technology, by packaging optical engines together with switching chips, greatly shortens electrical signal transmission distance, making it the only energy-saving solution for the next generation of "gigawatt-level AI factories".
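To make the power-wall argument concrete, here is a back-of-envelope comparison of interconnect power at cluster scale. The cluster size, per-GPU optical bandwidth, and picojoule-per-bit figures below are illustrative assumptions for the sketch, not NVIDIA or vendor data:

```python
# Back-of-envelope: optical interconnect power at cluster scale.
# All numeric inputs are illustrative assumptions, not vendor specifications.

def interconnect_power_mw(num_gpus, gbps_per_gpu, pj_per_bit):
    """Total interconnect I/O power in megawatts for a cluster."""
    bits_per_sec = num_gpus * gbps_per_gpu * 1e9
    watts = bits_per_sec * pj_per_bit * 1e-12  # convert pJ/bit to J/bit
    return watts / 1e6

# Assumed: 100,000 GPUs, each with 3.2 Tb/s of optical I/O.
pluggable = interconnect_power_mw(100_000, 3200, 15)  # assumed ~15 pJ/bit pluggables
cpo       = interconnect_power_mw(100_000, 3200, 5)   # assumed ~5 pJ/bit co-packaged

print(f"pluggable optics: {pluggable:.1f} MW")   # 4.8 MW
print(f"co-packaged optics: {cpo:.1f} MW")       # 1.6 MW
```

Even under these rough assumptions, shortening the electrical path via co-packaging cuts interconnect power by a factor of roughly three, megawatts that a gigawatt-scale AI factory can redirect to computation.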
Whoever controls high-end laser and silicon photonics capacity, controls the rhythm of data center evolution.
However, the CPO industry chain is far less mature than traditional optical modules, with slower capacity ramp-up and higher technical barriers. If NVIDIA simply waited for the supply chain to mature on its own, it could repeat the HBM dilemma, facing core component shortages or supplier price hikes amid a demand surge.
Therefore, NVIDIA has chosen not to wait, but to become a joint builder of the supply chain through capital investment.
Investing in Lumentum and Coherent while locking in capacity is essentially a "defensive offense." This is not a simple financial investment seeking short-term share price returns, but for long-term supply chain security and cost control.
Through capital ties, NVIDIA elevates the originally loose business cooperation into a "capacity community" with closely aligned interests.
This is an almost blatant strategy: before optical modules become scarce resources, NVIDIA draws quality capacity under its own umbrella in advance, ensuring priority “navigational rights” for its compute clusters when CPO explodes.
From 35% and Beyond: LITE's "Super Order" Is Practically Written on the Timeline
In this fusion of capital and technology, Lumentum (LITE) is undoubtedly one of the biggest beneficiaries.
Financial data reveals the depth of their relationship. In fiscal year 2024, NVIDIA already contributed about 35% of Lumentum’s revenue.
This means LITE is already a core supplier deeply integrated with the GPU giant. This $2 billion equity injection, combined with several billion dollars in purchasing commitments, marks an upgrade from “important customer” to “strategic symbiosis”.
We can make a conservative estimate of future order sizes. If, over the next four years, total purchasing commitments and new orders surpass $5 billion, that’s approximately $1.25 billion in new annual revenue.
Given Lumentum’s current annual revenue of about $2 billion, this means its theoretical revenue could climb toward $3.25 billion, an increase of more than 60%.
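The back-of-envelope above can be written out explicitly. The $5 billion total commitment, the four-year window, and the $2 billion revenue base are the article's assumed figures, not disclosed terms:

```python
# Sketch of the article's back-of-envelope estimate, using its assumed figures.
total_commitment_b = 5.0   # assumed total commitments and new orders, $B
years = 4                  # assumed order window
current_revenue_b = 2.0    # Lumentum's approximate current annual revenue, $B

new_annual_revenue_b = total_commitment_b / years               # incremental $/year
theoretical_ceiling_b = current_revenue_b + new_annual_revenue_b  # simple sum

print(f"incremental annual revenue: ${new_annual_revenue_b:.2f}B")   # $1.25B
print(f"theoretical revenue ceiling: ${theoretical_ceiling_b:.2f}B") # $3.25B
```

Note this is a straight sum; it ignores ramp timing, existing NVIDIA revenue overlap, and any organic growth or decline in the telecom business.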
But it’s not just a linear numerical increase—it’s also a structural amplification of profit quality.
First, the ASP (average selling price) and gross margin structure of AI data center optical modules are much better than traditional telecom products. Telecom optical modules depend on carriers’ capex cycles, face brutal price competition, and have thin margins.
AI optical modules serve high-performance computing infrastructure, with extremely high demands on stability and speed. Customers are less price-sensitive and focus more on delivery capability.
Thus, LITE’s orders from NVIDIA will significantly raise its overall gross margin.
Second, CPO is a frontier technology with high barriers to entry and limited competition. Silicon photonics involves complex chip design and packaging processes, and only a handful of global companies can mass-produce at scale. With supply limited, LITE, as a core supplier, will have greater bargaining power.
Third, the deployment cycle of NVIDIA’s next-generation Rubin architecture coincides with CPO adoption. The market expects Rubin to significantly increase demand for optical interconnects to support massive memory bandwidth and cluster communication. Orders are almost predetermined and “written on the timeline.”
If LITE was previously just an ordinary optical module company following market cycles, it now looks far more like a strategically reserved node enterprise by NVIDIA. Its performance volatility will no longer depend only on the macro telecom market, but directly aligns with the progress of global AI compute construction.
Capital markets often react most truthfully. The recent strong performance of LITE’s stock is not just a literal interpretation of the news, but a direct pricing of the visible order pipeline.
Investors understand the logic: in the AI arms race, shovel sellers may face competition, but those whose shovels are reserved and invested in by the “arms dealer” enjoy an irreplaceable position.
The Real Question: Will Optical Interconnect Become the Next HBM?
As NVIDIA enters the game, controversy follows. Market expectations for optical modules have diverged.
Some conservative investors worry that CPO technology is still in the deployment phase, the ecosystem has yet to mature, and significant volume may take years, making near-term profits unlikely.
Others believe that once the “gigawatt-level AI factory” phase begins, demand elasticity for optical modules will far exceed the market's imagination, potentially replicating or even surpassing the explosive path of HBM.
The real debate is whether optical interconnect will become the next HBM: the core bottleneck constraining compute development, triggering a drastic redistribution of prices and profits.
There are two key variables.
First, the exponential expansion of AI cluster scale. Current AI training clusters are moving from tens of thousands to hundreds of thousands of cards. At this size, bandwidth requirements for card-to-card and server-to-server communication rise exponentially.
Electrical interconnects are nearing limits in power consumption and signal integrity and cannot support longer distance or higher-speed transmission. Once CPO penetration breaks through, it will quickly replicate HBM’s path—from "optional configuration" to "standard configuration", with demand exploding.
Second, NVIDIA’s investment reshapes the supply-demand dynamic. Capacity locked in advance means that in the coming years, effective supply of high-end optical modules will be "strategically reserved". This creates implicit pressure on other GPU manufacturers.
AMD, Intel, and other custom chip vendors may face "money can’t buy supply" or "paying a higher premium" when competing for high-end optical module capacity. Such supply-demand mismatch will further push up the overall prosperity of the optical module industry.
From an investment perspective, this is a clear directional signal.
When the giants of compute power begin to extend capital upstream, it means the industry’s profit center is shifting.
In the early days of GPU development, profits were concentrated in chip design; when HBM exploded, storage captured massive profits; now, optical interconnects are rising as the new high ground.
NVIDIA is not betting on a single company, but paving the "light-speed channel" for next-generation compute infrastructure, ensuring its ecosystem fortress is both deep and wide.
For Lumentum, this is almost a written growth script. But what capital markets truly care about is not “will it grow,” but—when the optical module boom arrives, will valuations have already lagged behind?
Current market pricing may only reflect short-term purchase expectations, not fully factoring in the long-term monopoly premium after CPO’s widespread penetration.
Once the industry enters a capacity-constrained "seller’s market," companies with capacity lock-in advantages will see their valuation logic shift from "manufacturing" to "resources".
Conclusion: The “Second Half” of the Compute Power War
NVIDIA’s capacity lock with Lumentum and Coherent is a milestone signalling the “second half” of the AI compute power war.
The first half was the competition for chip compute power, based on transistor counts and architectural efficiency; the second half is about system efficiency—data transfer speed and energy control.
In this phase, optical modules are no longer peripheral components, but the core hubs determining the efficiency of compute clusters.
Jensen’s preemptive “capacity lock” is not only a response to the HBM bottleneck, but also a firm bet on future technology directions.
It sends a clear message to the entire industry chain: in the age of AI, supply chain security and controllability are as valuable as technical innovation itself.
For investors, attention should shift from mere GPU vendors to upstream companies that are deeply integrated with the giants and master core optical interconnect technologies.
Because, when the tide recedes and the lights of the compute factories come on, it is these reserved beams of “light” that illuminate the entire system.
The optical module arms race has already started, and the starting gun was NVIDIA’s capital injection. In this race, whoever controls light, controls the speed of AI’s future.
Disclaimer: The content of this article solely reflects the author's opinion and does not represent the platform in any capacity. This article is not intended to serve as a reference for making investment decisions.