CoreWeave reported $1.4 billion in Q3 revenue, a 134% year-over-year increase driven by enterprise demand for GPU compute capacity. The specialized cloud provider added over $25 billion in backlog during the quarter, bringing total commitments to $55 billion.
The company reached $50 billion in remaining performance obligations (RPO) faster than any cloud provider in history. The number of customers that spent over $100 million in the trailing 12 months tripled year-over-year, signaling enterprise-scale AI infrastructure adoption.
This growth occurs in a supply-constrained market where GPU availability limits expansion. CoreWeave's vertical integration with NVIDIA positions it to capture demand from enterprises scaling AI workloads beyond proof-of-concept stages.
The backlog surge reflects long-term infrastructure commitments rather than speculative capacity reservations. Companies are locking in multi-year GPU access because training larger models and running production AI systems both demand sustained compute resources.
Traditional cloud providers face capacity constraints in meeting AI workload demand. AWS, Azure, and Google Cloud prioritize their existing customer bases, creating openings for specialized providers like CoreWeave to serve enterprises that require dedicated GPU clusters.
NVIDIA's GPU business underpins this infrastructure expansion. The company continues to outgrow the broader semiconductor industry as enterprises shift from CPU-based workloads to GPU-accelerated AI systems.
CoreWeave's $55 billion backlog represents roughly 10 years of revenue at the current run rate. That visibility supports infrastructure investment and signals sustained enterprise AI spending through 2026 and beyond.
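A quick back-of-envelope check of that figure, assuming "run rate" means Q3 revenue annualized (the article does not specify the exact basis):

```python
# Sanity check: years of backlog coverage at the current run rate.
# Dollar figures are from the article; the annualization method is an assumption.
q3_revenue_b = 1.4    # Q3 revenue, $B
backlog_b = 55.0      # total backlog / commitments, $B

annual_run_rate_b = q3_revenue_b * 4              # annualized Q3 revenue: 5.6 $B/year
coverage_years = backlog_b / annual_run_rate_b    # backlog divided by yearly revenue

print(f"Backlog coverage: {coverage_years:.1f} years")  # ≈ 9.8 years
```

The result, about 9.8 years, is consistent with the "approximately 10 years" figure; actual coverage would shrink as quarterly revenue grows.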
The market for AI cloud services is fragmenting. Hyperscalers offer broad platform capabilities while specialized providers deliver optimized GPU infrastructure for training and inference workloads. Enterprises increasingly adopt multi-cloud strategies to secure capacity across providers.

