
HBM4 will offer a huge increase in performance. Transfers will be astronomical

HBM4 is getting closer, and it looks like it will bring plenty of changes: we will see both larger capacities and higher transfer speeds.

Both graphics cards and AI accelerators need fast memory where the most important data is temporarily stored. In the first case we mostly see GDDR chips, while the second more often uses the more expensive HBM. As it happens, the JEDEC organization has just established the preliminary specification for the next generation.

HBM4 is being built with AI and HPC in mind

The new HBM4 standard describes stacks of 4, 8, 12 or 16 TSV-connected layers, each layer with a capacity of 24 or 32 Gb. Initial speeds are specified at up to 6.4 GT/s, but discussions about higher values are ongoing.
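As a quick illustration of those configurations, the minimal Python sketch below enumerates the per-stack capacities that follow from the layer counts and densities quoted above (nothing beyond those published figures is assumed).

```python
# Per-stack HBM4 capacity for the configurations quoted above:
# 4, 8, 12 or 16 TSV-stacked layers, each 24 Gb or 32 Gb.
LAYER_COUNTS = (4, 8, 12, 16)
LAYER_DENSITIES_GBIT = (24, 32)

for layers in LAYER_COUNTS:
    for density_gbit in LAYER_DENSITIES_GBIT:
        capacity_gbyte = layers * density_gbit / 8  # 8 bits per byte
        print(f"{layers} layers x {density_gbit} Gb -> {capacity_gbyte:.0f} GB per stack")
```

The capacities range from 12 GB (4 layers of 24 Gb) up to 64 GB per stack for the top configuration discussed next.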

A single stack of 16 layers of 32 Gb will therefore offer a capacity of up to 64 GB, so a graphics card or accelerator with the usual four stacks will provide up to 256 GB of VRAM with a peak throughput of around 6.55 TB/s over an 8192-bit data bus.
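To make that arithmetic explicit, here is a short sketch of the flagship example, using only the figures stated above (four stacks of 16 x 32 Gb layers, an 8192-bit bus in total, 6.4 GT/s):

```python
# Worked example for the four-stack configuration described above.
# Figures taken from the article: 4 stacks of 16 x 32 Gb layers,
# an 8192-bit data bus in total, transfers at 6.4 GT/s.
STACKS = 4
LAYERS_PER_STACK = 16
LAYER_DENSITY_GBIT = 32
TOTAL_BUS_BITS = 8192
TRANSFER_RATE_GTPS = 6.4

capacity_gb = STACKS * LAYERS_PER_STACK * LAYER_DENSITY_GBIT / 8     # Gb -> GB
bandwidth_tbs = TOTAL_BUS_BITS * TRANSFER_RATE_GTPS / 8 / 1000       # Gb/s -> GB/s -> TB/s

print(f"Total capacity : {capacity_gb:.0f} GB")       # 256 GB
print(f"Peak bandwidth : {bandwidth_tbs:.2f} TB/s")   # ~6.55 TB/s
```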

HBM4 memory is expected to offer more channels per stack and to take up more physical space. Interestingly, JEDEC did not mention anything about integrating the memory directly into processors.

Tech giants SK hynix and TSMC announced their cooperation on HBM4 earlier this year. Shortly afterwards, during the European Technology Symposium 2024, the Taiwanese foundry confirmed that it would use its 12FFC+ and N5 processes for this purpose, i.e. 12-nanometer and 5-nanometer class lithography, respectively.

It should also be remembered that HBM4 is aimed primarily at the requirements of AI and HPC, so it will appear in AMD, NVIDIA and Intel products designed for server rooms and workstations. You shouldn't expect this memory in gaming graphics cards.
