
Herne

Member
Dec 10, 2017
5,321
Expect Nvidia to make everything about ray tracing with the GeForce 4xxx series. RDNA3 will likely have massively improved ray tracing performance over RDNA2, but surpassing Lovelace is doubtful, so that's where Nvidia will tell everyone to look. Nvidia will say that raster performance is perfectly fine but ray tracing is where the industry is moving, and people will eat it up.

Not that that will be incorrect, mind you, but it's still going to take a few years for ray tracing to really take off and become absolutely necessary. Nvidia is just going to emphasise it over everything else, because this rumour of massive RDNA3 raster performance and much better power efficiency keeps coming back, and it's very likely there's something to it. We've all heard about Nvidia pushing their TDP to try and match, hence the ridiculous 600W+ numbers we've been seeing. So they will pivot the performance conversation to where they are strong.
 

Spork4000

Avenger
Oct 27, 2017
8,552
It'll be neat to see the 100+ TFLOP Series X 2 and PS6 in 6 years. Then we'll actually get games that use this power.
 

BreakAtmo

Member
Nov 12, 2017
12,865
Australia
People, stop worrying about RDNA 3's power consumption:


If correct, that's 35W for a 9.2 TF iGPU (6 WGPs @ 3GHz) plus 8 Zen 4 cores in a mobile APU. The top-end card will very likely consume around 375W.

The big problem is y'all hearing about test AD102 boards consuming 900W because Jensen's ego demands it and Nvidia engineers are testing stuff for shits and giggles.


Does that mean we could get a Zen4/RDNA3 Steam Deck 2 in a couple of years that hits 4TF?
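
For reference, a rough sketch of where that 9.2 TF figure could come from, assuming RDNA3 keeps 128 shaders per WGP and the rumoured dual-issue FP32 is real (both are assumptions here, not confirmed specs):

```python
# Peak FP32 throughput = shaders x ops/clock x clock.
# Assumes 128 shaders per WGP and dual-issue FP32 (2 FMA pipes per shader),
# neither of which is confirmed at this point.
wgps = 6
shaders_per_wgp = 128
clock_ghz = 3.0
ops_per_shader_per_clock = 2 * 2  # FMA counts as 2 ops, dual-issue doubles it

tflops = wgps * shaders_per_wgp * ops_per_shader_per_clock * clock_ghz / 1000
print(f"{tflops:.1f} TF")  # -> 9.2 TF
```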
 

Kromis

Member
Oct 29, 2017
6,519
SoCal
So I should toss my brand new 3060ti away then?

LMAO, I asked a similar question. I think it will depend on how often you're playing games. If you're like me and don't game as much anymore due to other obligations, then I think selling is fine; just pray you break even. Otherwise, enjoy it. I think I want to sell my FE, but I'm just too lazy to meet up with someone.
 
Nov 8, 2017
13,130
Goddamn my 4090 is going to have such an incredibly high framerate average (don't look at the stutters on the frametime graph).
 

CreepingFear

Banned
Oct 27, 2017
16,766
When I build a new PC this fall as a birthday present to myself, I'm going from an 850W PSU to 1200-1300W, and also from a quiet case to the Fractal Design Torrent, in response to the increasing power usage of higher-end CPUs and GPUs.
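
For anyone doing the same math, a back-of-the-envelope sketch of that kind of PSU sizing (the component wattages are placeholder guesses, not measurements):

```python
# Rough PSU sizing: sum estimated sustained draw, then leave headroom for
# transient spikes and efficiency. All wattages are placeholder estimates
# for a hypothetical high-end build.
components = {
    "GPU (rumoured high-end, incl. spikes)": 600,
    "CPU (high-end desktop under load)": 250,
    "Board, RAM, storage, fans, pump": 100,
}
load_w = sum(components.values())
headroom = 1.3  # keep the PSU comfortably below full load
print(f"~{load_w} W sustained -> at least a {load_w * headroom:.0f} W PSU")
# -> ~950 W sustained -> at least a 1235 W PSU
```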
 

Spoit

Member
Oct 28, 2017
4,007
Yep... call me when I can get 3080 Ti performance for 30 to 50 less watts
This is actually an interesting question. The literal laws of physics limit how much smaller we can go. It literally might not be possible.
I mean, with the node shrink, and especially going to TSMC over Samsung, I wouldn't be surprised if that were the performance target of something like the 4070 Ti, while still hitting it sub-300W.
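
Rough math on what that target implies for efficiency (350 W is the 3080 Ti's stock board power; the wattage targets are just the ones floated above):

```python
# Required perf-per-watt gain to match 3080 Ti performance at lower power.
# 350 W is the 3080 Ti's stock board power; targets are illustrative.
ref_watts = 350
for target_watts in (320, 300, 250):
    gain = ref_watts / target_watts - 1
    print(f"Same perf at {target_watts} W needs ~{gain:.0%} better perf/W")
# -> ~9%, ~17%, ~40% respectively
```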
 

brain_stew

Member
Oct 30, 2017
4,736
Does that mean we could get a Zen4/RDNA3 Steam Deck 2 in a couple of years that hits 4TF?

The RDNA2 portion of Steam Deck is already efficient enough to hit peak clocks within a 5w budget at 7nm, so 4TF at 5nm feels possible without any architectural improvements and just a dumb die shrink. RDNA2 is already ridiculously efficient at lower clock speeds @7nm.

It's the 7nm Zen 2 cores that eat the power budget on Steam Deck, 5nm Zen 4 should go a long way towards fixing that.

Memory bandwidth still needs to be worked around, of course, but including a ~32/64MB Infinity Cache could go an awfully long way towards resolving that with the small render targets @720p. 128MB seems to work well for 1440p render targets, so 32MB should translate well to 720p render targets.

With memory bandwidth resolved through an infinity cache, a genuine portable Series S could be possible.
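
A quick sanity check on that cache scaling, assuming a simple 4-bytes-per-pixel render target (real frames use several targets, so this is only a proportional argument):

```python
# 1440p has 4x the pixels of 720p, so a 32MB cache at 720p keeps roughly
# the same cache-to-framebuffer ratio as 128MB at 1440p.
def rt_mib(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(round(rt_mib(1280, 720), 1))   # ~3.5 MiB per RGBA8 target at 720p
print(round(rt_mib(2560, 1440), 1))  # ~14.1 MiB per RGBA8 target at 1440p
```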
 

AwakenedCloud

Member
Oct 27, 2017
1,817
So if we're keeping with Dragon Ball power scaling, are we officially in Super territory? Is this card SSG or SSGSS?
 

Spark

Member
Dec 6, 2017
2,543
Actual top-end competition, plus a new competitor at the mid level with Intel, means the GPU market could be fantastic over the next two years.
 

JackDT

Member
Oct 27, 2017
1,123
I hope NVIDIA somehow goes to 48GB on the 4090. Insane value compared to the existing 48GB GPUs you sometimes need for AI.
 

Bowl0l

Member
Oct 27, 2017
4,608
I just need an AMD GPU at MSRP. Somehow, Nvidia GPUs are within MSRP in my country, but they go full ham on AMD GPU pricing...
 

BreakAtmo

Member
Nov 12, 2017
12,865
Australia
The RDNA2 portion of Steam Deck is already efficient enough to hit peak clocks within a 5w budget at 7nm, so 4TF at 5nm feels possible without any architectural improvements and just a dumb die shrink. RDNA2 is already ridiculously efficient at lower clock speeds @7nm.

It's the 7nm Zen 2 cores that eat the power budget on Steam Deck, 5nm Zen 4 should go a long way towards fixing that.

Memory bandwidth still needs to be worked around, of course, but including a ~32/64MB Infinity Cache could go an awfully long way towards resolving that with the small render targets @720p. 128MB seems to work well for 1440p render targets, so 32MB should translate well to 720p render targets.

With memory bandwidth resolved through an infinity cache, a genuine portable Series S could be possible.

Wait, are you sure about that? I was under the impression that the Deck GPU at its max clock of 1.6GHz consumed like 10w or so, with the rest for the CPU. Yes, technically the CPU can eat up 15w by itself if you max it out, but my understanding was that the system prioritises the GPU overall.
 

Dreamwriter

Member
Oct 27, 2017
7,461
I really wish I could use Radeon cards; they are so much more efficient than Nvidia's. Unfortunately, I ran into a lot of problems running VR with a Radeon card. Apparently Radeon makes use of part of the DisplayPort standard that Nvidia cards don't, and Nvidia has been the popular card for so long that developers don't even test their games (or VR headsets!) on Radeon cards, or they do test, see that there are problems, and decide not to fix them. I ran into two VR headsets that actually said they don't support AMD GPUs.
 
Jul 26, 2018
2,464
How many watts can a build handle in a Cali/Texas summer without any fancy cooling system (say, a water pump)? Just to get a sense of how much is "reasonable".
 

DieH@rd

Member
Oct 26, 2017
10,594
We always buy pre-release hype of 2x performance leaps, and they never happen.
I'm fairly certain that this time the 2x will happen. Nvidia is moving from Samsung to TSMC [and is willing to transform PC cases into fire hazards], and AMD is moving toward chiplets/3D stacking with an architecture that is getting very strong early rumors.

IMO the more important thing is that the power level of the 6900XT/3090 will now move into midrange-priced chips. PC will truly outpace consoles in all GPU price ranges, and multiplat games will become much easier to run in GPU-limited scenarios. On top of that, we will also get plenty of great upscaling middleware, eliminating the need to render native 4K.

And RDNA4/RTX 50 will be quickly upon us, most likely bringing an additional +50% [we won't get a 100% boost there, for sure]. It's wild to think that 1.5-2 years from now we will have GPUs on sale that are 3x a 6900XT.
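
The compounding behind that 3x figure, taking both rumoured jumps at face value:

```python
# Compounding two rumoured generational jumps (neither is a confirmed spec).
rdna3_vs_6900xt = 2.0  # rumoured ~2x this generation
rdna4_vs_rdna3 = 1.5   # speculated +50% the generation after
print(rdna3_vs_6900xt * rdna4_vs_rdna3)  # -> 3.0, i.e. ~3x a 6900 XT
```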
 

rafiii

Member
Feb 7, 2019
498
Holy hell, they are finally moving away from 4 shader engines (since Hawaii / R9 290) to 6!
Going to be a big boon for rasterization / geometry fixed-function performance!
 

Ra

Rap Genius
Moderator
Oct 27, 2017
12,240
Dark Space
Pascal was almost exactly that.
It wasn't though. Not close, go back and check. The legendary Pascal was 60-70% faster.

 

NaDannMaGoGo

Member
Oct 25, 2017
5,968
An all-around 2x performance increase would indeed be special, and if the rumors are to be believed, we might be heading there. Obviously, best not to have too high expectations, but it's exciting nevertheless.

Granted, the bang-for-buck ratio is probably not going to see as drastic an improvement. First of all, these newer process nodes are plainly more expensive, if I'm not mistaken. They're most certainly significantly supply-limited, too, with essentially all the big companies going for TSMC now. Then there's the general chip shortage and China's insane Covid measures still ongoing.

High-end is going to cost a very pretty penny, and the low end, and probably even plenty of the mid-tier, will be relegated to RTX 3000 / RX 6000 cards for a very long time, I reckon. Only if supply outstrips demand relatively quickly, which might happen with less interest in crypto and simply too few people able to afford those high prices, do I think we'll see a drastically better price/performance ratio.

I just highly doubt we'll see, say, an RTX 4600 in Q2 2023 that costs $300-400 and is roughly on par with an RTX 3080.
 

Cross-Section

Member
Oct 27, 2017
6,875
Get a 3080ti and undervolt it? That's basically what you're asking for.
I've seen a few replies now suggesting this; it's something I've definitely tried, but the core issue is stability. Even a light undervolt (like 1830 MHz @ 0.875V on my FTW3 Ultra) produces issues in some games. Heat-wise, it's also only a few degrees cooler, if that. It's easier to set a power limit, but at that point I feel like I'm not quite getting my money's worth.

It'd just be nice to have a card with the same level of performance that doesn't chug power like water from a fountain on a hot day, undervolt or no.
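
For context on why undervolting cuts power at all: dynamic power scales roughly with frequency times voltage squared. A minimal sketch; only the 1830 MHz @ 0.875 V point comes from the post, the stock figures are assumed for illustration:

```python
# Dynamic power ~ f * V^2, so a small voltage drop saves a lot of power
# for a small clock loss. Stock values below are assumptions.
def relative_power(freq_mhz, volts, ref_freq_mhz, ref_volts):
    return (freq_mhz / ref_freq_mhz) * (volts / ref_volts) ** 2

uv = relative_power(1830, 0.875, ref_freq_mhz=1900, ref_volts=1.0)
print(f"~{uv:.0%} of stock power at ~{1830 / 1900:.0%} of the clock")
# -> ~74% of stock power at ~96% of the clock
```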
 

tokkun

Member
Oct 27, 2017
5,418
It wasn't though. Not close, go back and check. The legendary Pascal was 60-70% faster.


The difference was even less than that in practice. At the time of its release, the 1080 was competing with the 980 Ti in price, and the 980 Ti had much more overclocking headroom. After overclocking, the difference was more like 30-40%. Basically in the typical range of what we see with a flagship generational improvement.

This is around the time when chips began incorporating much more sophisticated DVFS and the amount of improvement you could get from manually overclocking shrank dramatically. As a result, when you look at comparisons of stock performance, you see an unusually large performance jump for this generation. It was great for people who were not comfortable overclocking, but I'd guess most enthusiasts buying cards in this price range were overclocking at the time, so it was less of a leap for them.
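
To make that arithmetic explicit (the overclocking percentages are illustrative assumptions, not benchmark data):

```python
# How overclocking headroom compresses a stock performance gap.
stock_gap = 1.65  # 1080 ~65% faster than 980 Ti at stock (midpoint of 60-70%)
oc_980ti = 1.25   # assume ~25% manual OC headroom on the 980 Ti
oc_1080 = 1.05    # assume ~5% headroom on the 1080 (boost already near the limit)
print(f"{stock_gap * oc_1080 / oc_980ti:.2f}x")  # -> 1.39x, i.e. in the 30-40% range
```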
 

DieH@rd

Member
Oct 26, 2017
10,594
Prices will go up, that's for sure. Those late 2020 prices [3070 for $500, 3080 for $700] are not repeating again.

I expect something like a 4070 [perf range of the 3090] for $700, a 4080 [perf range of the 3090 + 30%] for $900-1000, and then who knows what the beefier models will cost.

AMD could press Nvidia into more reasonable prices, but since everything is selling immediately, most likely Radeons will also be pricey. Alchemist is coming late, they will provide entry-level cards [all up to 3070 perf].
 

NaDannMaGoGo

Member
Oct 25, 2017
5,968
edit: quoted you before you added that last bit

AMD could press Nvidia into more reasonable prices, but since everything is selling immediately, most likely Radeons will also be pricey.

I also wonder about Intel and if they could invigorate the low-end market a tiny bit.

Like, for the time being, it's 2000% obvious that Intel is not going to be particularly competitive overall for a long time. That's sort of fine and expected, as an entry into the graphics card market can't be easy or cheap, especially with both Nvidia and AMD making great strides recently.

But they'll have to claw their way into the market somehow, and perhaps they can do so by offering competitive low-end cards, even if it means eating some losses. For us consumers, even just having that would be nice. I don't have high hopes for it, however. Igor's Lab did a video about their current driver performance roughly a month ago (I forget how exactly he got some legit insight there, but it was legit IIRC), and it wasn't great yet, with a good number of games not running or running poorly. And Intel is really dragging their feet with putting anything out at all there, anyway. If AMD and Nvidia now deliver one of their strongest generational jumps yet, one can only wonder if Intel can offer anything even for some tiny niche.
 

VariantX

Member
Oct 25, 2017
16,903
Columbia, SC
Prices will go up, that's for sure. Those late 2020 prices [3070 for $500, 3080 for $700] are not repeating again.

I expect something like a 4070 [perf range of the 3090] for $700, a 4080 [perf range of the 3090 + 30%] for $900-1000, and then who knows what the beefier models will cost.

AMD could press Nvidia into more reasonable prices, but since everything is selling immediately, most likely Radeons will also be pricey. Alchemist is coming late, they will provide entry-level cards [all up to 3070 perf].

I'm guessing that when Intel's desktop stuff finally releases, it won't be too competitive unless they're really desperate to gain market share for the long game. They'd be leaving money on the table that they could make right now.
 

Joe White

Member
Oct 27, 2017
3,047
Finland
Prices will go up, that's for sure. Those late 2020 prices [3070 for $500, 3080 for $700] are not repeating again.

I expect something like a 4070 [perf range of the 3090] for $700, a 4080 [perf range of the 3090 + 30%] for $900-1000, and then who knows what the beefier models will cost.

I'll keep buying used components from friends. With that price/perf range for the 4000 series, I might have a good opportunity to buy a used 3090 (a ~36 TFLOP GPU) for 600€.
 

PLASTICA-MAN

Member
Oct 26, 2017
23,684
Wow, all that GPU clock speed, which could be kinda dangerous, but it still ends up with fewer TFlops than the RTX 4090, and it's still on GDDR6 and not GDDR6X? I bet even the bandwidth is slower too (surprisingly still not mentioned there for either).
 
Nov 2, 2017
2,275
It wasn't though. Not close, go back and check. The legendary Pascal was 60-70% faster.

GP102 was 85% faster on average than GM200. Hence I said almost 2x. These comparisons are not that relevant anyway given the massive increase in TDP we're going to be seeing. Like a 350W 1080ti would probably be 2x faster than a 980Ti.

[relative performance chart, 3840×2160]
 

Herne

Member
Dec 10, 2017
5,321
Wow, all that GPU clock speed, which could be kinda dangerous

What?

but it still ends up with fewer TFlops than the RTX 4090, and it's still on GDDR6 and not GDDR6X? I bet even the bandwidth is slower too (surprisingly still not mentioned there for either).

Teraflop performance is not usually equal between similar architectures, to say nothing of products from entirely different companies.
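
A classic illustration of this: the RX Vega 64 had far more paper FP32 throughput than the GTX 1080, yet the two traded blows in actual games. A sketch of the paper math (boost clocks are approximate):

```python
# Theoretical FP32 TFLOPs = shaders x 2 ops (FMA) x clock. Boost clocks are
# approximate; real game performance does not track these numbers.
def peak_tflops(shaders, boost_ghz):
    return shaders * 2 * boost_ghz / 1000

print(round(peak_tflops(4096, 1.55), 1))  # RX Vega 64: ~12.7 TF
print(round(peak_tflops(2560, 1.73), 1))  # GTX 1080:   ~8.9 TF
# Despite the ~40% paper gap, the two delivered broadly similar framerates
# in most games at launch.
```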
 