GOOGL/META/AMZN: Insights into AI Capex from Hot Chips 2025 and NVDA earnings
What’s New: GOOGL and META presented technical details of several AI investment projects earlier this week at the Hot Chips 2025 conference, and NVDA results provided insights into the AI infrastructure supply chain.
Our Take: GOOGL’s Hot Chips presentations reinforced our conviction in the company’s well-established custom silicon/TPU development strategy. META is not only spending big (well understood, and we think reinforced by NVDA results) but also working to adapt systems to its existing footprint, as its presentation at Hot Chips showed. AMZN wasn’t at the conference, but NVDA’s disclosures this quarter offered potentially mixed signals on supply allocation, and thus on the potential for AWS revenue re-acceleration.
GOOGL is reaping the advantages of early TPU and datacenter design
GOOGL was very visible at Hot Chips, with multiple sessions on AI-related topics including software for low-level AI coding on TPUs (such as JAX and Pallas), water cooling for TPUs, a keynote by Noam Shazeer (the recently repatriated co-founder of Character.ai and now Gemini co-lead), and a final session on TPUs.
Shazeer’s presentation, entitled “Predictions for the Next Phase of AI,” was the most insightful, detailing how GOOGL’s early work in AI model research led to advancements in hardware and datacenter design that are now paying dividends in well-suited, purpose-built infrastructure. Some of the choices made over time were laid out in other presentations at the event, including GCP’s toroidal network topology and early adoption of optical interconnects.
We came away with greater conviction in the cumulative advantages of GOOGL’s early planning for AI accelerators (including custom silicon) and datacenter development, and with a better appreciation of the intrinsic performance advantages of the TPU architecture powering GOOGL’s increasing lead in frontier models.
Gemini 2.5 Pro retaking first place on the LMArena leaderboard from GPT-5 is the latest evidence that GOOGL is finding the capacity to support both large-scale inference workloads, like the global rollout of AI Mode, and high-impact frontier model training. We believe early infrastructure choices and TPUs have been and will remain key to enabling GOOGL’s success.
META is moving fast and finding hacks to support increased GPU capacity…
META also had multiple presentations at Hot Chips focused on datacenter layouts and accelerators, but in contrast to GOOGL’s many years of iterative development, META is finding new and innovative ways to bring infrastructure capacity online quickly without starting from scratch.
Presentations showed META’s work to adapt water cooling into rack-based units that can sit side-by-side with computing racks, bringing the advantages of water cooling into existing datacenter shells. We also learned a lot more about META’s efforts to modify NVDA’s NVL72 racks into META’s Catalina configurations, which have more CPU and RAM capacity to better suit META’s needs.
There was no presentation on META’s custom AI chip efforts at the event, which may be a slight negative read on its progress in this area. Reports from March indicated that META had begun testing its first training-focused chips, but the lack of updates since then may indicate growing pains, or at least a lower likelihood of a major push into production workloads in the near term.
…just as an NVDA disclosure tweak could indicate META taking the #2 position?
NVDA disclosed two indirect customers, each representing greater than 10% of Compute & Networking segment revenues, in both F1Q26 and F2Q26, but only one in F1H26. We think this could imply there are two indirect customers vying for #2 behind the presumptive largest customer, MSFT, and that they likely swapped positions from 1Q to 2Q to round out the Top 3.
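The arithmetic behind this inference can be sketched with hypothetical numbers (all figures below are illustrative, not NVDA’s actual revenue or customer purchases): a customer above the 10% threshold in only one quarter can fall below 10% when the two quarters are combined, so two quarterly >10% customers can collapse to one for the half.

```python
# Illustrative sketch of the 10%-customer disclosure arithmetic.
# All figures are hypothetical, chosen only to show how each quarter can
# surface two >10% customers while the half-year view surfaces only one.

q1_total, q2_total = 44.0, 47.0  # hypothetical quarterly revenue, $B
h1_total = q1_total + q2_total

# Hypothetical customer purchases, $B
customers = {
    "#1 (MSFT-like)": {"Q1": 9.0, "Q2": 9.5},  # >10% in every period
    "Customer X":     {"Q1": 5.0, "Q2": 3.5},  # >10% in Q1 only
    "Customer Y":     {"Q1": 3.5, "Q2": 5.2},  # >10% in Q2 only
}

for name, buys in customers.items():
    q1_share = buys["Q1"] / q1_total
    q2_share = buys["Q2"] / q2_total
    h1_share = (buys["Q1"] + buys["Q2"]) / h1_total
    print(f"{name}: Q1 {q1_share:.1%}, Q2 {q2_share:.1%}, H1 {h1_share:.1%}")

# With these numbers, each quarter shows two >10% customers (#1 plus one
# of X/Y), but over the half only #1 clears 10% -- matching the pattern
# of NVDA's F1Q26, F2Q26, and F1H26 disclosures.
```

The key point is that the half-year threshold is applied to combined revenue, so a customer near the 10% line in one quarter and below it in the other dilutes out of the disclosure.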
META being that rising #2 customer would align with its aggressive, publicly announced AI infrastructure projects and rising capex intensity (from 24% of revenue in 2024 to 36% in 2025). It would also align with our AI Capex estimates for GOOGL, META, and AMZN, where we model META increasing its share of NVDA revenue to match that of AMZN and remaining ahead of GOOGL in 2025. We await Zuck’s next Reel sharing more on META’s GPU unit purchases (they have come the day before 4Q earnings in each of the past two years), and we think the numbers will be well above last year’s.
The disclosure is also relevant for the presumptive former #2 and new #3, particularly if it is AMZN/AWS, given investor concerns about AWS’s share of NVDA GPUs (especially following the 2Q25 print). It could also be GOOGL, which would suggest its capacity constraints may linger longer, though it would make 2Q25 GCP revenue acceleration all the more impressive if GOOGL is indeed falling in GPU delivery share.
For AWS, a falling share of NVDA GPU deliveries is baked into our AI capex estimates, which underpin our view of the installed base available to AWS, and we remain confident that AWS revenue will accelerate in 2H25 on easing capacity constraints for generative AI. We think AMZN is pushing hard to increase capacity across the board, including the broad embrace of NVDA infrastructure that we saw at the AWS Summit NYC, while this fall likely brings the introduction of Trainium3 at the flagship re:Invent event in Las Vegas, starting with keynotes on December 1st.
For reference, below are the NVDA disclosures:
F1Q26 10-Q: For the first quarter of fiscal year 2026, two indirect customers which primarily purchase our products through system integrators and distributors, including through Direct Customers A and B, are estimated to represent 10% or more of total revenue, attributable to the Compute & Networking segment.
F2Q26 10-Q: For the second quarter of fiscal year 2026, two indirect customers—primarily purchasing our products through Direct Customers A and B—are each estimated to represent 10% or more of total revenue and attributable to the Compute & Networking segment.
For the first half of fiscal year 2026, one indirect customer—primarily purchasing through Direct Customer A—is estimated to represent 10% or more of total revenue and attributable to the Compute & Networking segment.