Jensen Huang, co-founder and CEO of Nvidia, speaks in Taipei on June 2, 2024.
Taipei/Hong Kong CNN —

Nvidia, AMD and Intel have separately unveiled the next generation of their artificial intelligence (AI) chips in Taiwan, as a three-way race heats up.

Jensen Huang, CEO of Nvidia (NVDA), said Sunday that the company would roll out its most advanced AI chip platform, called Rubin, in 2026.

The Rubin platform will succeed Blackwell, which supplies chips for data centers and was announced only in March. Nvidia dubbed it at the time the “world’s most powerful chip.”

The Rubin will feature new graphics processing units (GPUs), a new central processing unit (CPU) called Vera and advanced networking chips, Huang said in an address at National Taiwan University in Taipei.

“Today, we’re at the cusp of a major shift in computing,” Huang told the audience ahead of the opening of Computex, a tech trade show organized annually in Taiwan. “The intersection of AI and accelerated computing is set to redefine the future.”

He revealed a roadmap for new semiconductors that will arrive on a “one-year rhythm.”

Investors have been driving up shares in chip firms riding the generative AI boom. Shares of Nvidia, the market leader, have more than doubled over the past year.

“Nvidia clearly intends to keep its dominance for as long as possible and in the current generation, there is nothing really on the horizon to challenge that,” said Richard Windsor, founder of Radio Free Mobile, a research company focusing on the digital and mobile ecosystem.

Nvidia accounts for around 70% of AI semiconductor sales. But competition is growing, with AMD (AMD) and Intel (INTC) introducing new products in an effort to challenge Nvidia’s dominance.

On Monday, AMD CEO Lisa Su unveiled the company’s latest AI processors in Taipei and a plan to develop new products over the next two years.

Its next-generation MI325X accelerator will be made available in the fourth quarter of this year, she said.

A day later, Intel CEO Patrick Gelsinger announced the sixth generation of its Xeon chips for data centers and its Gaudi 3 AI accelerator chips. He touted the latter, which competes with Nvidia’s H100, as being one-third cheaper than its rivals.

Number one priority

The global competition to create generative AI applications has led to soaring demand for the cutting-edge chips used in data centers to support these programs.

Both Nvidia and AMD, which are run by Taiwan-born American CEOs who are distantly related, were once best known among gamers for selling the GPUs that bring video game visuals to life.

While the two still compete in that space, their GPUs are now also being used to power generative AI, the technology that underpins newly popular systems such as ChatGPT.

“AI is our number one priority, and we’re at the beginning of an incredibly exciting time for the industry,” Su said.

“We launched MI300X last year with leadership inference performance, memory size and compute capabilities, and we have now expanded our roadmap so it’s now on an annual cadence; that means a new product family every year,” she said.

The new chip will succeed the MI300 and feature more memory, higher memory bandwidth and better compute performance, Su added. The company will launch a new product family every year, with the MI350 set for 2025 and the MI400 a year later.