NVIDIA has reached an agreement with Taiwan's Nanya Technology for its Vera Rubin artificial intelligence platform. The LPDDR5X memory is expected to offer three times the capacity and higher bandwidth.
In a strategic move, NVIDIA has partnered with Taiwan-based Nanya Technology for the Vera Rubin platform, a pairing that could shift the balance of power in artificial intelligence. Under this collaboration, Nanya will supply LPDDR5X memory, a critical component of NVIDIA's next-generation agentic AI systems.
With this move, the Taiwanese manufacturer becomes the first local company to enter NVIDIA's main memory supply chain, offering a new alternative to the Korean and American dominance in the sector. The new platform is expected to triple memory capacity and deliver a more than 50 percent increase in bandwidth over the previous Grace Blackwell servers.
NVIDIA Diversifies Its Supply Chain with Taiwanese Manufacturers
With the rapid evolution of artificial intelligence technologies, NVIDIA aims to make its hardware supply chain more resilient. The Vera Rubin platform requires two different types of memory: high-bandwidth HBM4 DRAM for the Rubin GPUs and energy-efficient LPDDR5X DRAM for the Vera CPUs. While HBM4 production is controlled by giants such as Samsung, SK Hynix and Micron, the supplier pool on the LPDDR5X side is larger.
By opening its doors to Taiwanese manufacturers, NVIDIA is managing strategic risk and reducing its dependence on the dominant players in the global memory market.
Nanya Technology’s involvement in this process shows that Taiwan is asserting itself not only in foundry and packaging, but also in memory technologies. TSMC’s guidance to local companies on optimizing their manufacturing processes has enabled Nanya to produce memory for one of the world’s fastest artificial intelligence systems.
Vera Rubin Platform Pushes Performance Boundaries
The Vera Rubin platform offers revolutionary improvements in AI processing capacity. Each Vera Rubin superchip will be equipped with 1.5TB of memory running at 1.2TB/s.
These figures represent a huge leap over the previous-generation Grace Blackwell systems. In rack-scale artificial intelligence solutions in particular, combining 256 Vera chips can yield up to 400 TB of memory and up to 315 TB/s of total bandwidth.
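As a rough sanity check on these figures, the snippet below simply scales the per-superchip numbers (1.5 TB at 1.2 TB/s) linearly across 256 chips. This is a back-of-the-envelope sketch, not an official NVIDIA calculation; the quoted rack-scale totals come out slightly higher than linear scaling, which may reflect additional memory pools or rounding in the source figures.

```python
# Linear scaling of the per-superchip figures quoted in the article.
# Assumption: each of the 256 chips contributes the same memory and bandwidth.
PER_CHIP_MEMORY_TB = 1.5       # memory per Vera Rubin superchip, TB
PER_CHIP_BANDWIDTH_TBPS = 1.2  # bandwidth per superchip, TB/s
CHIPS_PER_RACK = 256           # chips combined at rack scale

total_memory_tb = PER_CHIP_MEMORY_TB * CHIPS_PER_RACK
total_bandwidth_tbps = PER_CHIP_BANDWIDTH_TBPS * CHIPS_PER_RACK

print(f"Aggregate memory:    {total_memory_tb:.0f} TB")      # 384 TB
print(f"Aggregate bandwidth: {total_bandwidth_tbps:.1f} TB/s")  # 307.2 TB/s
```

Linear scaling lands at 384 TB and 307.2 TB/s, in the same ballpark as the "up to 400 TB" and "up to 315 TB/s" headline numbers.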
The Vera Rubin platform will dramatically increase the processing speed of large language models with a threefold increase in memory capacity.
The proliferation of agentic AI models is shifting part of the processing load from GPUs to CPUs. Even as data compression techniques improve in newer models, growing demand for processing power requires more memory capacity.
NVIDIA plans to meet this huge demand and consolidate its leadership in the artificial intelligence market by working with new partners such as Nanya.
How does NVIDIA’s new partnership with Taiwanese manufacturers affect the future of AI hardware? Share your opinions with us in the comments section.