The GPU market is dominated by Nvidia and AMD, both of which frequently unveil new products to stay on top of the food chain. Both companies maintain huge catalogs aimed at serving the needs of every user imaginable. At a recent event, AMD launched its new MI300X accelerator, which may just topple Nvidia from its current top spot.

At the launch event, AMD CEO Lisa Su announced that the new accelerator would offer up to 192 GB of memory. The revelation took the internet by storm, as the competing Nvidia H100 offers only 80 GB of onboard memory.

AMD’s MI300X looks to change the game

GPUs have seen recent market growth thanks to the development of cutting-edge AI platforms like ChatGPT, which require powerful chips to process vast amounts of data. Even though AMD is best known for its traditional computer processors, the MI300X could be a compelling choice for server makers and developers, raising the possibility that Nvidia chips could be substituted for good.

During the demonstration, Lisa Su highlighted that the new accelerator can work with complex AI language models containing 40 billion parameters. The news brings the GPU industry to a crossroads, as the MI300X would significantly reduce the need to use multiple GPUs to run larger AI language models. The new generation of GPUs, or "accelerators" as AMD calls them, will be available to customers later this year.
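
To put that memory figure in perspective, here is a rough back-of-envelope sketch (not from the presentation itself; the 2-bytes-per-parameter assumption is ours) of why a 40-billion-parameter model's weights can fit on a single 192 GB accelerator:

```python
# Back-of-envelope estimate (illustrative only, not from AMD's presentation):
# memory needed just to store a 40-billion-parameter model's weights at
# 16-bit (2 bytes per parameter) precision.
params = 40e9           # 40 billion parameters, as cited in the demo
bytes_per_param = 2     # fp16/bf16 weight storage (our assumption)
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB of weights")  # ~80 GB, comfortably under 192 GB
```

Inference also needs memory beyond the raw weights, but the gap between roughly 80 GB of weights and 192 GB of capacity illustrates why a single accelerator can host a model of that size.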

AMD challenges Nvidia’s dominance, moves to outperform its rival’s solution

AMD’s new accelerator for generative AI may pose a serious threat to its main competitor, Nvidia. Thanks to AMD’s Infinity Architecture, users can combine as many as eight MI300X accelerators in a single system. Even though Nvidia still dominates the AI chip market, its hardware is tied to the proprietary CUDA software ecosystem, which locks developers into Nvidia’s platform in a way AMD’s new launch aims to avoid.

To let developers build applications for its new generation of AI chips, AMD has opted for the ROCm platform, an open software ecosystem. The MI300 series combines AMD’s Zen 4 and CDNA 3 architectures, with the MI300X pairing CDNA 3 compute with HBM3 memory stacks. With such advanced tech going into its creation, AMD’s latest AI accelerator also surpasses Nvidia’s solution in memory bandwidth.
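
As a minimal sketch of what that open ecosystem means in practice, assuming a ROCm build of PyTorch on an AMD GPU system (the framework choice is our illustration, not part of AMD’s announcement), existing GPU code can run largely unchanged because PyTorch’s ROCm backend reuses the torch.cuda API:

```python
# Minimal sketch, assuming a ROCm build of PyTorch on an AMD GPU system.
# PyTorch's ROCm backend exposes AMD devices through the familiar torch.cuda
# API, so existing GPU code typically runs without modification.
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))        # reports the detected AMD GPU
    x = torch.randn(1024, 1024, device="cuda")  # allocate on the accelerator
    print((x @ x).shape)                        # run a simple matmul on-device
else:
    print("No ROCm-capable device detected")
```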

With so many options on the market, it remains to be seen how AMD’s new chip performs and whether it can really take the AI processor throne from Nvidia.
