Nvidia promises to improve the quality of YouTube and Netflix videos even more with this update
Nvidia announces several updates and optimizations for its AI-related tools and technologies. Topping the list is an update to RTX Video Super Resolution that promises notable image quality improvements.
Nvidia continues to evolve its AI-powered tools to take advantage of the latest hardware innovations in its GPUs, aimed at professionals and consumers alike. After unveiling DLSS 3.5 to significantly improve ray-traced rendering in games, the manufacturer is now turning to generative AI with this update.
But the update that interests us the most here is the one for RTX Video Super Resolution, its video upscaling technology that works on all streamed content (YouTube, Netflix, Twitch, Prime Video, etc.) and even on local video files.
Better image quality, even at native resolution
With version 1.5 of its technology, Nvidia says it has improved its machine learning model to better distinguish a video's finest details from compression artifacts. Videos on YouTube and Twitch are often heavily compressed so they can stream over all kinds of connections. RTX VSR thus aims to improve image quality without increasing bandwidth usage.

And like DLSS for video games, RTX Video Super Resolution now also works on videos played at their native resolution. While it was previously necessary to play a video at a resolution lower than that of your screen (1080p content on a 4K monitor, for example), the technology can now sharpen the image and remove as many artifacts as possible without any upscaling at all.
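To make the idea of upscaling concrete, here is a minimal sketch of naive nearest-neighbor scaling, the kind of blocky pixel duplication a display falls back on when no smarter scaler is available. This is purely illustrative and does not reflect Nvidia's actual model, which instead uses a neural network to infer missing detail:

```python
def nearest_neighbor_upscale(frame, factor):
    """Naive upscaling: each source pixel is simply duplicated.

    `frame` is a 2D list of pixel values. An AI upscaler like RTX VSR
    tries to reconstruct plausible detail instead of copying pixels
    like this, which is why it can look sharper at the same bandwidth.
    """
    out = []
    for row in frame:
        scaled_row = [px for px in row for _ in range(factor)]
        out.extend([scaled_row[:] for _ in range(factor)])
    return out

# A 2x2 "frame" blown up to 4x4: larger, but no new detail is created.
frame = [[10, 20],
         [30, 40]]
print(nearest_neighbor_upscale(frame, 2))
```

The blocky result is exactly what AI upscaling (and, in version 1.5, artifact removal at native resolution) is meant to avoid.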

This 1.5 update for RTX Video Super Resolution is already available with Nvidia's new Game Ready driver released today, and will be integrated into the Studio driver next month. Follow our guide to enable Video Super Resolution.
Stable Diffusion, LLM: more reliable and faster generative AI
While Tensor cores are very useful for upscaling operations like DLSS and RTX Video Super Resolution, they are also essential for accelerating all kinds of AI and machine learning tasks. Here, Nvidia has focused on generative AI, with new advances in reliability and performance.
The manufacturer is thus announcing a notable acceleration in image generation with the open source Stable Diffusion tool. Thanks to the Tensor cores in its RTX GPUs, Nvidia claims generation up to seven times faster on an RTX 4090 than on a Mac with an M2 Ultra chip. All you need to do is install the TensorRT extension for Stable Diffusion, available on GitHub.
Finally, Nvidia also highlights optimizations to TensorRT-LLM, its open source library designed to run large language models optimized for the Tensor cores of the brand's GPUs. The result is faster and more accurate responses on Windows for popular language models like Meta's Llama 2. This update will soon be available on the NVIDIA Developer site for TensorRT-LLM.