RTX 5090 early benchmarks show underwhelming performance uplift over the RTX 4090

DragonSlayer101

Posts: 577   +3
Staff
Something to look forward to: Nvidia announced its next-gen RTX 50 series graphics cards earlier this month at CES 2025. While gamers are still waiting for the official launch, early benchmarks suggest that the RTX 5090 will see only a moderate performance boost over its predecessor, the RTX 4090. For a full understanding of how the RTX 5090 performs in popular games, wait for our exhaustive review later this week.

Update (Jan 23): Our Nvidia RTX 5090 review is now live.

The RTX 5090 was put through its paces in Geekbench 5, where it notched up impressive scores in both the OpenCL and Vulkan tests (via BenchLeaks). In the former, the 5090 scored 367,740 points, which is 15 percent more than what the RTX 4090 achieved. In the latter, the new flagship chalked up 359,742 points, which is 37 percent higher than the RTX 4090's score.

In the CUDA test, the card scored 542,157 points, nearly 28 percent higher than the 424,332 points racked up by the RTX 4090. While that is an impressive score, it's not as significant a generational leap as some had expected, given that the new card has 32 percent more CUDA cores than its predecessor.

The RTX 5090 was also recently tested in Blender 3.6.0, where it notched up a median score of 17,822.17. This makes it roughly 36 percent faster than the RTX 4090, which scored 13,064.17 in the same test on the same version of the app. The China-exclusive RTX 5090D scored 14,706.65 in Blender v4.3.0, beating the RTX 4090D's score of 10,516.64 points by 40%.
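As a rough sanity check, the generational gaps can be recomputed from the score pairs the leaks actually provide (a quick calculation on our part, not an additional benchmark):

# Recompute the uplift percentages from the leaked score pairs quoted above.
pairs = {
    "Geekbench CUDA, 5090 vs 4090": (542_157, 424_332),
    "Blender 3.6, 5090 vs 4090": (17_822.17, 13_064.17),
    "Blender 4.3, 5090D vs 4090D": (14_706.65, 10_516.64),
}
for name, (new, old) in pairs.items():
    print(f"{name}: +{(new / old - 1) * 100:.1f}%")
# Prints roughly +27.8%, +36.4%, and +39.8% respectively.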

It is worth noting that these scores should be taken with a grain of salt as synthetic benchmarks do not always reflect real-world gaming performance, and the Blender benchmark only shows the performance metrics in a single app.

For a better understanding of how the RTX 5090 performs in various popular games and how it compares to its predecessor, wait for our exhaustive review later this week. In the meantime, you can also watch Steve unbox our 5090 review sample below:

Alongside the RTX 5090, Nvidia also announced the RTX 5080, RTX 5070 Ti and RTX 5070 at CES 2025. The 5090 and 5080 are set to launch on January 30, while the two RTX 5070 models will likely be available next month. The new cards offer many upgrades over the Ada Lovelace generation, but they're also priced higher, with the flagship 5090 costing $1,999.

 
I'm curious as to why Tensor cores aren't mentioned - Nvidia states a 40% Tensor core increase - which should offload the base CUDA cores, lifting performance in all games with ray tracing well above the 4090 - frame gen not included.
Take games like Indiana Jones, built from the ground up with ray tracing (you can't even load the game without a ray tracing capable card) - that should see a massive increase if what Nvidia states is correct.
 
Create FUD that the price is $2,499 or higher, create FUD of scarcity to cause mass hysteria. Nvidia tactics 101!
Also

What if Nvidia's goal is to make the FE edition only slightly better than last gen, so that the AIB cards, which will probably be more expensive, look more attractive and performant in comparison? 🤔

Paper launch the FE edition, then flood the market with AIB cards that are more expensive but potentially clock higher.

Bait and switch!
 
I'm curious as to why Tensor cores aren't mentioned - Nvidia states a 40% Tensor core increase - which should offload the base CUDA cores, lifting performance in all games with ray tracing well above the 4090 - frame gen not included.
What? Tensor cores and CUDA cores are completely different things.

CUDA cores handle general-purpose parallel computing, while Tensor cores are specialized units designed for deep learning and AI workloads.

The increase in Tensor cores is specifically there to run Multi Frame Gen, that's all so far.
Take games like Indiana Jones, built from the ground up with ray tracing (you can't even load the game without a ray tracing capable card) - that should see a massive increase if what Nvidia states is correct.
Well, no? Ray tracing performance is mainly dictated by the RT cores, which are specifically optimized for real-time ray tracing calculations.

All Nvidia has stated is that DLSS 4 looks better than older versions of DLSS and that Multi Frame Gen will boost framerates dramatically by inserting guesstimated frames in between real frames.

Actual raw performance though? It was never going to be a big leap. If you look at the core count and power usage of the 4090 versus the 5090, it'll be, at most, 30% faster, but probably 20-25% on average. There's a good reason Nvidia has only hyped the AI stuff in its marketing and not the raw power: it hasn't really changed all that much.
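The napkin math behind that, using the publicly listed specs (a rough sketch, not a measurement):

# Naive scaling estimate from the spec sheet: 21,760 vs 16,384 CUDA cores,
# 575 W vs 450 W total board power.
cores_5090, cores_4090 = 21_760, 16_384
power_5090, power_4090 = 575, 450
print(f"Extra CUDA cores: +{(cores_5090 / cores_4090 - 1) * 100:.1f}%")   # ~+32.8%
print(f"Extra board power: +{(power_5090 / power_4090 - 1) * 100:.1f}%")  # ~+27.8%
# Real games rarely scale linearly with core count on the same node,
# so 20-30% on average is about what you'd expect.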
 
I'm curious as to why Tensor cores aren't mentioned - Nvidia states a 40% Tensor core increase - which should offload the base CUDA cores, lifting performance in all games with ray tracing well above the 4090 - frame gen not included.
Tensor cores do not affect ray tracing performance. They are used to accelerate the matrix multiplication operations used for machine learning-related tasks.
 
Calling a 15-40% increase 'underwhelming' is just ridiculous, especially considering these are pre-release drivers with zero optimization. A 40% jump in Blender and 37% in Vulkan is huge. Even the CUDA and OpenCL scores are nothing to scoff at. People are acting like they expected a 100% leap in every benchmark. Let’s wait for real-world gaming tests before writing it off, but honestly, these early numbers are already pretty impressive.
 
Calling a 15-40% increase 'underwhelming' is just ridiculous, especially considering these are pre-release drivers with zero optimization. A 40% jump in Blender and 37% in Vulkan is huge. Even the CUDA and OpenCL scores are nothing to scoff at. People are acting like they expected a 100% leap in every benchmark. Let’s wait for real-world gaming tests before writing it off, but honestly, these early numbers are already pretty impressive.
Compared to the 4090 vs 3090 though, it's nowhere near the same kind of jump.

Edit: Also, going from the 3090 Ti to the 4090, the price actually went down, whereas the 5090 is 25% more expensive than the 4090 while offering roughly that same 25% performance gain.

The 5090 is a nothing burger when you look at price vs performance compared to the 4090.
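Quick math on that (assuming the $1,599 and $1,999 MSRPs and taking ~25% as the raw uplift):

# Perf-per-dollar, generation over generation, under those assumptions.
msrp_4090, msrp_5090 = 1_599, 1_999
assumed_uplift = 1.25  # roughly what the leaked benchmarks suggest
price_increase = msrp_5090 / msrp_4090  # ~1.25
perf_per_dollar = assumed_uplift / price_increase
print(f"Price increase: +{(price_increase - 1) * 100:.0f}%")
print(f"Perf per dollar vs 4090: {perf_per_dollar:.2f}x")  # ~1.00x, essentially flat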
 
Well, the 1080 Ti had a street price of $700 and the 4090 has a street price of $2,000, so a far more select audience of users might remember it fondly...
Yeah, but the 1080 Ti was more like a Corvette, while the 4090 is more of a Lamborghini. As much as I hate Nvidia, I won't deny that the 4090 is an amazing piece of tech, even if I think the price is stupid.
 
The 50 series was the generation I was planning to upgrade to, but the real-world performance I'm seeing has me pumping the brakes. I was hoping for great AI performance (which the 5090 has), but I'm very disappointed that I need the 5090 for what feels like subpar generational rasterization improvements... my 1080 may need to stretch a few more months until I see something I can be more excited about.
 
That is BECAUSE:
Blackwell is not designed/engineered for games the way RDNA is.
The Blackwell architecture and the chip (GB202-300) that the RTX 5090 is based on is not even the full die. The RTX 5090 is based off a PRO card (full die) that sells for $3,499.

CUDA is not for gaming; Nvidia just markets around its lack of raw power with gimmicks to sell to gamers, who don't do creative content or enterprise work.

The RTX 5080 is going to illustrate that^ even more, because it's an even smaller chip with even less gaming die space than the 5090.


If you are a GAMER, then RTX is not for you; that is why Sony, Microsoft, Valve (Steam), etc. all chose RDNA as the architecture for their gaming hardware. AMD is for gaming.
 
There wasn't a node shrink this gen, so uplift is limited to architecture and power increases.

Further, the 5090 is too good at AI to even be considered in the same conversation as normal gaming GPUs. It's essentially an AI lite card that also games, and therefore, the price is based more on competing AI cards than anything else.

Nvidia's 50 series will be a refinement of the product positioning started in the 40 series:

5090: AI GPU / halo product (now irrelevant for other GPU positioning)
5080: +20% over the 4080, $1,000
5070 Ti: +20% over the 4070 Ti (~4080), $750
5070: +20% over the 4070 (~4070 Ti), $550

New gens will now be one step up from the last gen, with about 20% between models and a similar price increase at each step. It seems the market will bear this (barely), so it is unlikely to change until AI slows or competition increases.
 
There wasn't a node shrink this gen, so uplift is limited to architecture and power increases.

Further, the 5090 is too good at AI to even be considered in the same conversation as normal gaming GPUs. It's essentially an AI lite card that also games, and therefore, the price is based more on competing AI cards than anything else.

Nvidia's 50 series will be a refinement of the product positioning started in the 40 series:

5090: AI GPU / halo product (now irrelevant for other GPU positioning)
5080: +20% over the 4080, $1,000
5070 Ti: +20% over the 4070 Ti (~4080), $750
5070: +20% over the 4070 (~4070 Ti), $550

New gens will now be one step up from the last gen, with about 20% between models and a similar price increase at each step. It seems the market will bear this (barely), so it is unlikely to change until AI slows or competition increases.

Don't forget that AMD is going two-pronged: chiplet and monolithic.

AMD is about to humiliate Jensen for trying to push so much non-gaming hardware onto consumers instead of prosumers.



The RX 9080 is going to show the power of RDNA and demonstrate the price/performance/power supremacy of their gaming architecture.

And then later this year, AMD will announce their top-tier chiplet design using their prosumer XDNA architecture, allowing AMD to compete directly with Nvidia's $3k "gaming card", but with custom chiplet designs catering to individual needs.

If you need more AI, then choose the RX chiplet that has more tensor/XDNA cores, etc. If you want all raster, then pick the chiplet that has your best interests at heart.


Blackwell is a joke for gaming!
 
That is BECAUSE:
Blackwell is not designed/engineered for games the way RDNA is.
The Blackwell architecture and the chip (GB202-300) that the RTX 5090 is based on is not even the full die. The RTX 5090 is based off a PRO card (full die) that sells for $3,499.

CUDA is not for gaming; Nvidia just markets around its lack of raw power with gimmicks to sell to gamers, who don't do creative content or enterprise work.

The RTX 5080 is going to illustrate that^ even more, because it's an even smaller chip with even less gaming die space than the 5090.


If you are a GAMER, then RTX is not for you; that is why Sony, Microsoft, Valve (Steam), etc. all chose RDNA as the architecture for their gaming hardware. AMD is for gaming.
Interesting take I hadn't considered, thank you. I haven't owned an AMD card since my dual HD 5670s in CrossFire... and now I feel old.
 
Calling a 15-40% increase 'underwhelming' is just ridiculous, especially considering these are pre-release drivers with zero optimization. A 40% jump in Blender and 37% in Vulkan is huge. Even the CUDA and OpenCL scores are nothing to scoff at. People are acting like they expected a 100% leap in every benchmark. Let’s wait for real-world gaming tests before writing it off, but honestly, these early numbers are already pretty impressive.
It's as disappointing as expected because there are basically zero architectural improvements here. You can always make a faster GPU just by adding more compute units. However, that also increases power consumption, so the net efficiency gain is basically zero.
 
30% is in line with the game benchmarks posted in the press release that did not use frame generation: Far Cry 6 and A Plague Tale: Requiem.

Without a real process upgrade, there is not much "free" performance to be had. The roughly 30% increase comes from growing the die from 600 to 750 mm² and the power consumption from 450 to 575 W. Like it or not, with the available process and a maximum practical chip size of about 800 mm², the 5090 is about the most that can be built.
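The same point in numbers, using the figures above (a quick calculation, not a measurement):

# Same node, so gains track die area and board power almost one-for-one.
die_4090, die_5090 = 600, 750       # mm², as quoted above
power_4090, power_5090 = 450, 575   # W
print(f"Die area: +{(die_5090 / die_4090 - 1) * 100:.0f}%")         # +25%
print(f"Board power: +{(power_5090 / power_4090 - 1) * 100:.0f}%")  # ~+28%
# A ~30% performance uplift is roughly what that extra silicon and power buy.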
 
I'm honestly curious who expected more raw performance than this, and how they thought that might come to pass.

Hoping for more in the price/performance area was a potential long shot, I guess, but given AMD's statements that they won't be competing at this tier and continuing strong demand for AI applications, it never seemed that likely to me.
 
Tensor cores do not affect ray tracing performance. They are used to accelerate the matrix multiplication operations used for machine learning-related tasks.
Actually - CUDA cores are used for calculating light traversal, then Tensor cores come into play for AI-accelerated denoising of the noisy ray-traced image.
This means the Tensor cores are cleaning up the picture; with a 40% increase, this should offload the GPU and increase ray tracing effectiveness.
But of course, we'll have to see at release. This card is a monster - and with 32 GB of GDDR7 memory, we should also see less need to purge memory, as it can store massive amounts of data.
This card is clearly meant for 4K gaming, though - it would probably be a waste on a 1440p setup.
 
That is BECAUSE:
Blackwell is not designed/engineered for games the way RDNA is.
The Blackwell architecture and the chip (GB202-300) that the RTX 5090 is based on is not even the full die. The RTX 5090 is based off a PRO card (full die) that sells for $3,499.

CUDA is not for gaming; Nvidia just markets around its lack of raw power with gimmicks to sell to gamers, who don't do creative content or enterprise work.

The RTX 5080 is going to illustrate that^ even more, because it's an even smaller chip with even less gaming die space than the 5090.
It doesn't really matter if it's developed for gaming first or not, as long as it does well for the price, and once you reach a certain price level, AMD simply stops offering products. They have nothing to compete with the RTX 4090 or 5090, so technically the 'best' gaming experience (the most frames at the highest resolution) is on NVIDIA.
--
Now, most people don't want to or can't spend that much, so there's definitely a place for AMD.

If you are a GAMER, then RTX is not for you; that is why Sony, Microsoft, Valve (Steam), etc. all chose RDNA as the architecture for their gaming hardware. AMD is for gaming.
According to (credible) rumors, Sony and Microsoft went with AMD at least initially because NVIDIA wasn't willing to drop its margins low enough. Luckily for AMD, that provided a lifeline to keep the company afloat during the dark Bulldozer days.
Once they were locked into the x86 architecture with the PS4 (and its Xbox counterpart), AMD just made sense for backwards compatibility in the PS5 (and its Xbox counterpart). I think Intel was briefly considered as well.

Valve (Steam) might have had many reasons to go with AMD.
* AMD can offer x86, which makes things easier when the goal is to run PC games; NVIDIA doesn't have a license to do so
* By extension, that means AMD can provide an SoC and NVIDIA can't, which has some major power advantages over the usual dedicated CPU + GPU combo. Nice to have in a handheld.
* They could get AMD to provide, pretty cheaply, a chip that seems to have initially been targeted at Microsoft's Surface line

Another reason might be that, according to rumors (interviews with people working at tech companies), NVIDIA has a very arrogant attitude, whilst AMD is much more willing to listen and make concessions.

All that said, NVIDIA got the Switch contract, and the Switch outsold the Xbox and PlayStation combined. I'm guessing Nintendo just wanted something very low power, and NVIDIA could offer exactly that. With backwards compatibility in mind, the Switch 2 will once again be NVIDIA.
 
No shocker. Anyone with half a brain realizes this is an interim series... the last big leap was the 4090, and you aren't getting quantum leaps in two consecutive generations.
 