Meta (META.US) terminates in-house advanced chip development, expands collaboration with NVIDIA and AMD


Bloomberg reports that Meta Platforms (META.US) has run into difficulties developing its internal AI chips and has abandoned its most advanced chip design in favor of a simplified one. The company officially terminated its cutting-edge AI model training chip project last week after hitting technical obstacles in the design process, and informed employees in its AI infrastructure department of the adjusted technical roadmap at the same time.

Meta’s decision to abandon in-house chip development reveals the common difficulties faced by companies attempting to design AI chips capable of competing with market leader NVIDIA (NVDA.US).

This chip roadmap adjustment follows recent collaborations with AMD (AMD.US), NVIDIA, and Google (GOOGL.US) under Alphabet. Reports indicate that the company has signed multi-billion dollar agreements to lease AI chips from Google.

Earlier this week, AMD announced a partnership with Meta to deploy up to 6 gigawatts of AMD Instinct chips to support its next-generation AI infrastructure. Additionally, Meta reached a “cross-generational” strategic partnership with NVIDIA earlier this month, committing to large-scale deployment of NVIDIA chips in its data centers.

Meta’s self-developed AI chips are part of its Meta Training and Inference Accelerator (MTIA) program. The core goal of the initiative is to cut long-term operating costs by vertically integrating chip design and to strengthen Meta’s control over its data center infrastructure.

A Meta spokesperson stated, “We continue to invest in building a diversified chip supply portfolio to meet business needs, with advancing the MTIA product line as a key strategic direction. This year, we will disclose more about the development progress and deployment plans for this product line.”

Reports indicate that Meta has abandoned its second-generation training chip codenamed Iris, and the more advanced Olympus project, which was initiated afterward, has also been terminated.

An internal source involved in Meta’s chip development said there is broad caution within the company about plans to build chips that can match NVIDIA’s performance, mainly out of concern over project delays or the need for redesigns. The person noted that such chip development requires assembling a large team of engineers for design, debugging, and power management, and that if power consumption cannot be brought under control, the in-house chips may not be worth deploying against NVIDIA’s mature products.

The Iris training chip uses a Single Instruction Multiple Data (SIMD) architecture. While this architecture is easier for hardware engineers to design, software engineers face significant programming challenges when training AI models. Reports disclose that Olympus adopts a Single Instruction Multiple Thread (SIMT) architecture similar to NVIDIA’s AI chips—this architecture is easier for software engineers to program but demands higher technical requirements for hardware design.
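The programming-model gap the report describes can be sketched in plain Python. This is a hypothetical illustration only, not code from either chip's toolchain: it contrasts the two styles using a simple ReLU operation. In the SIMD style, one instruction stream operates on a whole vector, so per-element decisions must become masks and selects; in the SIMT style, each logical thread runs ordinary scalar code with normal branches on its own element.

```python
# Illustrative sketch (assumed example, not Meta's or NVIDIA's actual code):
# the same element-wise ReLU written in the two styles the report contrasts.

def relu_simd(xs):
    # SIMD style (as described for Iris): one instruction stream over a
    # whole vector. There is no per-element branching; control flow is
    # expressed as a mask plus a select, which is what makes this style
    # harder for software engineers writing model-training code.
    mask = [x > 0.0 for x in xs]            # whole-vector comparison
    return [x if m else 0.0 for x, m in zip(xs, mask)]  # vector select

def relu_simt_thread(x):
    # SIMT style (as on NVIDIA GPUs and the planned Olympus chip): each
    # logical thread executes ordinary scalar code with plain if/else on
    # its own element; the hardware manages divergence between threads.
    if x > 0.0:
        return x
    return 0.0

data = [-1.0, 2.5, -3.0, 4.0]
print(relu_simd(data))                      # masked, whole-vector form
print([relu_simt_thread(x) for x in data])  # one "thread" per element
```

Both forms compute the same result; the difference is who carries the complexity. The SIMD version pushes it onto the programmer (every branch becomes mask arithmetic), while the SIMT version pushes it onto the hardware designer, matching the report's claim that SIMT is easier to program but harder to build.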

The SIMT architecture promoted by NVIDIA offers greater flexibility and is better suited to modern AI model training, which is why many tech companies favor it. Meta had initially planned to complete the Olympus chip design by Q4 2026, but reports add that moving from initial R&D to mass production typically takes nine months or longer, so volume production would likely have slipped well past that date.

The core component responsible for AI computation in Olympus—the Graphics Processing Unit (GPU)—was originally planned to use designs from Rivos, a chip startup acquired by Meta last year. Rivos claimed its GPUs could efficiently run NVIDIA’s CUDA software code, which is the dominant software framework for training and running AI models.

The report also notes that Meta initially planned to build large server clusters using Olympus, but company executives believed that, during the critical period of competing with OpenAI and Google, this move could pose potential risks to training new models.

Specifically, the training software supporting these chips is inherently less stable than NVIDIA’s offerings, and Olympus’s complex design could further hinder volume production. Meta therefore plans to keep using training chips from other manufacturers, whose more mature and reliable software better supports AI model training.
