As Nvidia prepares to report quarterly earnings on February 25, the supply chain undergirding AI accelerator production is flashing signals that are at once bullish and cautionary: strong demand, tightening capacity, and execution risk concentrated in lead times that can stretch well past a year.
Aehr Test Systems, whose FOX-XP and Sonoma platforms handle wafer-level and packaged-part burn-in for high-power semiconductors, issued second-half FY2026 bookings guidance of $60M to $80M, a figure that stands out against the $6.2M in bookings the company posted in Q2. The surge is driven almost entirely by AI wafer-level and packaged-part burn-in demand, with silicon carbide contributing only minimally. CEO Gayn Erickson disclosed that the company's lead Sonoma production customer, an AI ASIC manufacturer, has provided a "very large forecast," with shipments expected to begin in Q1 FY2027 (starting May 30, 2026).
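For scale, a rough back-of-the-envelope sketch helps. Assuming the $60M to $80M guidance covers the two remaining fiscal quarters and splits evenly between them (an assumption of this illustration, not something the company has stated), the implied per-quarter run rate is roughly five to six times Q2's $6.2M:

```python
# Back-of-the-envelope on Aehr's guided bookings surge, using only the
# figures cited in the article. Assumes the $60M-$80M guidance spans two
# fiscal quarters (H2 FY2026) and that bookings land evenly across them.
q2_bookings_m = 6.2          # reported Q2 FY2026 bookings, $M
h2_guide_m = (60.0, 80.0)    # guided H2 FY2026 bookings range, $M

for h2 in h2_guide_m:
    per_quarter = h2 / 2                  # naive even split across Q3/Q4
    multiple = per_quarter / q2_bookings_m
    print(f"H2 guide ${h2:.0f}M -> ~${per_quarter:.0f}M/quarter, "
          f"~{multiple:.1f}x the Q2 run rate")
```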
The Sonoma system, which now supports configurations up to 2,000 watts per device, is purpose-built for the thermal and electrical demands of modern AI accelerators. Aehr claims production capacity exceeding 20 systems per month, with the ability to ship 20 wafer-level and 20 packaged-part units simultaneously if demand requires. That headroom matters: the WaferPak consumables central to wafer-level testing carry an eight-week turnaround, and development of new HBF (high-bandwidth flash) products requires more than a year, timelines that cannot easily be compressed regardless of how large a purchase order arrives.
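Those constraints can be made concrete with a toy model. The sketch below treats an order as gated by whichever finishes later: the system build-out at the claimed ~20-systems-per-month rate, or the eight-week WaferPak turnaround. The order sizes and the no-pipelining assumption are hypothetical simplifications, not figures from Aehr:

```python
# Illustrative lead-time model for the constraints described above. The
# 20-systems/month capacity and 8-week WaferPak turnaround come from the
# article; the order sizes and no-pipelining assumption are hypothetical.
SYSTEMS_PER_MONTH = 20
WAFERPAK_LEAD_WEEKS = 8

def months_to_fulfill(order_size: int) -> float:
    """Months to ship an order, gated by whichever finishes later:
    the system build-out or the consumable turnaround."""
    build_months = order_size / SYSTEMS_PER_MONTH
    consumable_months = WAFERPAK_LEAD_WEEKS / 4.33  # avg weeks per month
    return max(build_months, consumable_months)

for n in (10, 40, 100):  # hypothetical order sizes
    print(f"{n:>3} systems -> ~{months_to_fulfill(n):.1f} months to fulfill")
```

Even in this crude model, consumable turnaround rather than system assembly is the binding constraint for small orders, which is consistent with why the company emphasizes its headroom on the system side.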
The broader infrastructure picture is similarly constructive. Credo Technology, which supplies high-speed active electrical cable and optical interconnect solutions for data center switching fabrics, guided Q3 revenue to $335M to $345M, a figure that reflects sustained capital expenditure from hyperscale customers building out AI training clusters. Interconnect bandwidth is increasingly the limiting factor in large-scale GPU deployments, and Credo's trajectory suggests that constraint is translating directly into orders.
On the memory side, HBM3e adoption continues to accelerate. High-bandwidth memory is integral to Nvidia's Hopper and Blackwell architectures, and supply tightness from DRAM manufacturers has been a recurring theme in recent quarters. The combination of HBM3e demand, burn-in test equipment backlogs, and interconnect buildout paints a picture of an AI hardware ecosystem that is maturing but not yet in equilibrium.
Meanwhile, Groq's licensing relationship with Nvidia for AI chip intellectual property underscores a quieter consolidation dynamic: even companies building differentiated inference accelerators are navigating an IP landscape increasingly shaped by Nvidia's foundational patents. This adds another layer of complexity to supply chain planning, as licensing terms and cross-dependencies can affect product roadmaps and vendor relationships across the ecosystem.
The convergence of these signals ahead of Nvidia's February 25 report sets a telling backdrop. Investors and operators watching the AI infrastructure buildout should note that the bottlenecks are no longer primarily about chip design or fab capacity; they are increasingly about the unglamorous but essential equipment, consumables, and interconnects that turn raw silicon into deployable AI compute. Execution on those lead times will determine how quickly the next generation of accelerators reaches data center floors.