NVIDIA Disinterested in HBF Technology

While NVIDIA remains uninterested in HBF memory technology, which offers capacities of up to 4 TB, Google is preparing to use this new-generation technology in its artificial intelligence projects.

While High Bandwidth Flash (HBF) memory technology is being positioned as an alternative to HBM in the artificial intelligence world, NVIDIA is reportedly uninterested in this new-generation solution. HBF, developed by SanDisk and SK Hynix and capable of offering capacities of up to 4 TB, is expected to enter the sampling phase in the second half of 2026.

While Google is reportedly willing to use this technology to expand its own TPU ecosystem, NVIDIA prefers to focus on eSSD solutions to overcome current capacity and speed limitations. This strategic divergence marks a critical turning point in how memory standards will take shape in high-performance artificial intelligence infrastructure.

  • HBF technology, unlike HBM memory, can reach high capacities of up to 4 TB.
  • NVIDIA relies on Kioxia-developed PCIe Gen7 SSDs instead of HBF to meet high bandwidth requirements.
  • HBF samples developed by SK Hynix are set to be released in the second half of 2026.
  • Google aims to be one of the main customers adopting HBF technology to meet the increasing processing power needs in artificial intelligence projects.

HBF technology could deliver significant efficiency gains on the server side by replacing traditional DDR memory.

Why Doesn’t NVIDIA Prefer HBF Technology?

NVIDIA is following a different roadmap to solve memory bottlenecks in AI workloads. The company argues that the high bandwidth HBF offers can already be adequately delivered by eSSD technologies.

In particular, the collaboration with Kioxia aims to produce PCIe Gen7 SSDs that are 100 times faster than standard models. This approach allows NVIDIA to increase performance without making any major changes to the existing hardware architecture.

Google Aims to Increase TPU Capacity with HBF

On the Google side, the situation is quite different. The company, which is rapidly growing its own TPU ecosystem, is evaluating the high-capacity advantages of HBF technology for its next-generation processing units.

This technology promises to eliminate memory limitations, especially in intensive artificial intelligence inference tasks. HBF's multi-layer stacked structure has the potential to reduce energy consumption while saving PCB space.

The widespread use of HBF could lead to a revolutionary change in server memory architectures.

Do you think NVIDIA's SSD-focused strategy will be sufficient in the long run compared with HBF? Share your opinions on the future of memory technology in artificial intelligence infrastructure below.
