NVIDIA dropped a bomb. Their new GPU is 4× more efficient
The Greens officially presented the new family of NVIDIA Blackwell graphics processors. Unfortunately, we will have to wait a while for consumer systems.
Although quite a few companies are seriously involved in artificial intelligence, they all rely on the same hardware. Giants such as OpenAI, Meta, and Microsoft mainly choose systems from NVIDIA. The reason? Much higher performance than comparable AMD or Intel offerings.
The consumer segment will see new GPUs in 2025
Models such as the NVIDIA H100 and H200 power the most powerful supercomputers and server rooms in the world. However, the Greens already have a successor to the Hopper generation. Jensen Huang today officially presented the NVIDIA Blackwell B200, a graphics processor designed from the ground up with artificial intelligence in mind.
Compared to the H100, the NVIDIA B200 more than doubles the transistor count – a jump from 80 to 208 billion. It also delivers 20 PFLOPS of compute in AI tasks, whereas the H100 manages “barely” 4 PFLOPS. The icing on the cake is larger and faster VRAM – 192 GB of HBM3e with bandwidth of up to 8 TB/s.
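For a rough sense of the generational jump, here is a minimal back-of-the-envelope sketch in Python using only the figures quoted above; the variable names and dictionary layout are illustrative, not any official spec sheet, and the PFLOPS values are vendor headline numbers whose precision settings are not specified here.

```python
# Back-of-the-envelope ratios based solely on the figures quoted in the article.
# Treat the results as indicative only.

h100 = {"transistors_billion": 80, "ai_pflops": 4}
b200 = {"transistors_billion": 208, "ai_pflops": 20}

transistor_ratio = b200["transistors_billion"] / h100["transistors_billion"]
compute_ratio = b200["ai_pflops"] / h100["ai_pflops"]

print(f"Transistor count: {transistor_ratio:.1f}x more")  # ~2.6x
print(f"AI compute:       {compute_ratio:.1f}x more")     # 5.0x
```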
However, it is worth emphasizing that the NVIDIA B200 is not a monolithic chip. Instead, it combines two dies that act as a single unified CUDA GPU. They are connected via the NV-HBI (NVIDIA High Bandwidth Interface) link with a bandwidth of 10 TB/s.
The chip is manufactured on TSMC's 4NP lithography, an improved version of the existing 4N process used by the Hopper and Ada Lovelace chips. This is probably also the reason for the dual-die construction – only a slight improvement in density can be assumed, so with a single die already at the full reticle size, two dies had to be joined together.
NVIDIA is preparing at least five different products for its customers – GB200, B200, B100, HGX B200 and HGX B100. The first of these is a superchip with power consumption of up to 2700 W, equipped with two of the previously described B200 GPUs and an additional chip from the Grace family.