Chinese factory workers obviously.
lol
console prices will be the least of people's worries if this shit continues
Wireless charging on the top sides would be a nice touch.
That would be a completely pointless feature. If you are using it, you can't play and charge simultaneously, and if you are going to have to drop your controller on a console to charge, you might as well just plug it in via USB and at least be able to game and charge at the same time.
Has your opinion shifted on performance or price targets since then? Do you see tariffs having an impact?
Did you know at that time that an 8 TF Navi would be faster than Vega 64's 12.5 TF?
I was aware there were going to be inherent performance benefits over the existing chips so that the TFLOPS would not be 1:1 with current gen. I was not aware of the details, but it was expected.
However, I'm dubious of these 40% performance increase numbers that are being mentioned, on fundamentally compatible architecture. Seems like something that maybe happens in a benchmark, but these systems need to be silicon compatible with the past generations so it seems a stretch we're going to see that level of performance increase unless you're including Ray Tracing in the calculation.
That said, I feel like I've always trended towards pessimistic in these threads, so I'll be happy to be wrong.
The compatibility is served by the GPU Instruction Set Architecture. On a hardware level, however, RDNA's scheduling and cache performance VASTLY outperform GCN's. Just as the poster above me mentions, the transistor budget per compute unit is much higher, so the performance increase "per flop" isn't just speculation or hearsay, it's baked into the hardware itself and displayed in the many benchmarks using real games for the two RDNA desktop PC cards available at the moment.
I think your pessimism on this point is somewhat unwarranted.
Awesome. I'll be curious to see how this manifests itself when devs start coding specifically for it. Good references to see these benchmarks? I'm interested to read up.

Digital Foundry has a great article showing the performance of the two Navi GPUs available now, the 5700 and 5700 XT. The former is largely a match for the RTX 2060 Super, while the latter is a match for the 2070 in most games. Here's the link: https://www.eurogamer.net/articles/...-review-head-to-head-with-nvidia-super?page=6
Thanks for the response. No, I just compared the RX 5700 (7.5 TF) to Vega 64 (12.56 TF) performance.
Hoping this! The rest of the world shouldn't suffer from downgraded specs due to Trump and can still be offered the originally planned storage size.

If we non-Americans would suffer as well, a 10-20% increase is doable for a gamer, I guess? We are talking about going from 499 USD to 549 at 10%, or 599 at 20%.
Never underestimate the utter lack of perspective about issues outside of gaming here.
I never understand these comments; this is a gaming forum... that's the prism almost everything is viewed through.
Call of Duty: Modern Warfare has managed to provoke an interesting response. Standard shadows are replaced with hardware-accelerated RT versions, which has led to some interesting observations from some of our audience on a perceived 'downgrade' - when the reality is that a realistically rendered ray traced shadow is obviously more accurate than a razor-sharp, non-graduated shadow map. It's a fascinating example of how some aspects of video game graphics rendering have become the norm to the point where more realistic replacements are considered inferior!
I am writing a basic ray tracer and have implemented normal maps. However, when using normal maps, sometimes the rays generated point away from the surface's geometric normal, so they reach light sources behind the object. If I kill the ray, the shading looks faceted and the surface normals become apparent. If I don't kill the ray, it reaches light sources on the opposite side of the geometry.
How is this handled normally?
That is, to my knowledge, a problem without a proper solution. You're seeing the discrepancy between the shading normal and the geometric normal, and it becomes obvious that the shading normal is just a trick. PBRT has a paragraph on this; their solution is to look at the geometric normal to determine whether to call the BRDF (reflection) or the BTDF (transmission), then to pass the shading normal to the BxDF. Still, this doesn't work robustly in all situations.
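A minimal sketch of that idea in Python (hypothetical helper signature, not PBRT's actual API): the geometric normal decides which lobe to evaluate, while the shading normal is what the lobe actually sees.

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def evaluate(wo, wi, n_geom, n_shade, brdf, btdf):
        # Classify reflection vs. transmission using the GEOMETRIC normal, so a
        # shading normal bent by the normal map cannot flip the decision.
        lobe = brdf if dot(wo, n_geom) * dot(wi, n_geom) > 0.0 else btdf
        # Evaluate the chosen lobe (and its cosine term) with the SHADING normal.
        return lobe(wo, wi, n_shade) * abs(dot(wi, n_shade))

For an opaque surface with no BTDF, the transmission branch just returns black, which is exactly the faceting described above, so as noted this only contains the problem rather than solving it.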
This problem is also known and described for production-proven render engines: http://blog.irayrender.com/post/29042276644/shadow-acne-and-the-shadow-terminator The solution suggested for iRay is to use displacement mapping instead of normal maps. This way, shading and geometry normal are in agreement again.
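On the shadow-acne half of that linked post, the usual mitigation in a basic ray tracer is simply to push the shadow-ray origin off the surface along the geometric normal before tracing. A hedged sketch (the epsilon value is arbitrary and scene-scale dependent):

    EPSILON = 1e-4  # too small brings the acne back, too large leaks light

    def shadow_ray_origin(hit_point, n_geom, light_dir):
        # Offset along the geometric normal, flipped so we always move toward
        # the side the shadow ray actually leaves from.
        facing = sum(l * n for l, n in zip(light_dir, n_geom))
        sign = 1.0 if facing >= 0.0 else -1.0
        return [p + sign * EPSILON * n for p, n in zip(hit_point, n_geom)]

This addresses self-intersection acne only; the normal-map terminator mismatch discussed above still needs the displacement (or geometric-normal) approach.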
Very funny, but maybe the problem is coming from mixing ray-traced shadows and normal maps, Dictator? No idea if the perceived downgrade comes from this...

Heh. This comes from comments under the video where people explicitly wrote how "shadows disappear" with RT on in some scenes, when in reality they are still there, but have contact hardening so they terminate gradually. I think people are attached to fake looks in VG rendering at times, like how cubemaps make surfaces decidedly less reflective than they really are.
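For anyone wondering what contact hardening means mechanically, here's a rough sketch (a hypothetical occluded visibility helper is assumed): instead of one shadow ray toward a point light, you shoot several toward jittered points on the light's area, and the visible fraction naturally gives shadows that are sharp at contact and soften with distance from the occluder.

    import random

    def soft_shadow(hit_point, light_center, light_radius, occluded, samples=16):
        # Fraction of jittered shadow rays that reach the area light:
        # 1.0 = fully lit, 0.0 = fully shadowed, in between = penumbra.
        visible = 0
        for _ in range(samples):
            # Crude jitter: random point in a box around the light center.
            target = [c + random.uniform(-light_radius, light_radius)
                      for c in light_center]
            if not occluded(hit_point, target):
                visible += 1
        return visible / samples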
The benchmarks already prove you wrong. RDNA is a new architecture while still being compatible with GCN...the die space of 1 CU of RDNA is larger than that of GCN
People are pulling figures from different GPU sources. If you base it on comparisons with Vega 64, even the RX 5700 is faster in some cases. If you base it on the RX 580, which is 6.2 TFLOPS, the RX 5700 XT is slightly less than 2 times faster. Heck, I see some people saying it's on par with a 14 TFLOPS GCN GPU... when it's on average 10% slower than the Radeon VII, which is 13.8 TFLOPS.

I think the ones I've seen bring up the 14 TF number are basing it on a 10+ TF RDNA GPU, not the 5700 cards.
My RDNA to previous-architecture comparison comes from a test that compared the RX 590 clocked at 1500 MHz (in the middle between base and boost clock) with the RX 5700 also clocked at 1500 MHz (again a bit above its base clock). Both GPUs have 36 active compute units, so the same TFLOPS in the same configuration. Across multiple game tests, the RX 5700 performed 39% faster than the RX 590 even though they share clock speed and compute unit count. While it's true this isn't a perfect test, since it doesn't account for memory bandwidth constraints, I think it's the closest we can get to comparing RDNA1 IPC with Polaris IPC.
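For reference, the paper math behind that iso-clock comparison, as a quick sketch (64 FP32 ALUs per CU and 2 FLOPs per FMA are the usual assumptions for both Polaris and RDNA):

    def tflops(cus, clock_ghz, alus_per_cu=64):
        # Peak FP32 throughput = CUs x ALUs x 2 FLOPs (FMA) x clock, in TFLOPS.
        return cus * alus_per_cu * 2 * clock_ghz / 1000.0

    print(tflops(36, 1.5))  # RX 590 at 1.5 GHz  -> ~6.9 TF
    print(tflops(36, 1.5))  # RX 5700 at 1.5 GHz -> ~6.9 TF, identical on paper
    # Same paper TFLOPS, yet the RDNA card measured ~39% faster in games,
    # which is the per-flop gain being argued about in this thread.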
Those 40% are RDNA vs Polaris. It references a couple of tests made by computerbase.de with a normalized setup for the compared PC GPUs, and the average performance gain they found across the games tested. By no means the most accurate testing, but a very good indication of what you can already achieve with current games that aren't optimized for RDNA yet. The test also confirmed AMD's published number of about 1.25 times better IPC than their Vega architecture.
Yep, it depends on which AMD GPU you compare against; bigger GPUs with a large number of CUs are less efficient and more bandwidth dependent.
Other posters have already made the points well with links, but the takeaways are this:
Bit late but...
You're not going to be able to make a good approximation of possible perf because we have never seen games built around a system with this much power before. It's going to be nuts when we start getting true next-gen games.
It's like when we were closing in on the current gen: people would have claimed that next gen would look like Assassin's Creed 3 on ultra, when AC Odyssey looks like it comes from another universe.
Even the best-looking current-gen game on PC will not come close to what Xbox and PS5 exclusives will pull off, especially after 2 years (and the PC versions of most AAA games).
To be honest, I even expect launch titles to be insane considering the SSD and CPU jump. I can't wait for Cerny to showcase Knack 3. :D
4x jump in CPU over X, 2 times graphics, and 50+ times jump in storage speeds... If someone told me any of that a year ago, I would've slapped them.
lmao. They aren't going to go with a smaller die, start removing cooling, or bottleneck their entire system with less RAM or fewer RT cores, especially since these tariffs will only affect U.S. sales. Just build those consoles elsewhere.
There is a good chance Trump won't be president come launch either. Do we know if the tariffs have actually increased the cost of electronics in recent months? I haven't heard of any price increases for iPhones, smartphones, cameras or TVs. Granted, they have a bigger markup than consoles, but Sony makes most of their money from royalties, PS+ subs and the PSN store anyway.
Besides, Trump isn't stupid. He isn't going to have this tariff war going on come election season next year. I expect this to calm down by June of next year.

I couldn't disagree more. He'll continue his stupid trade war and he'll try to pin it on others (Democrats).
Something like an 8-core Zen 2, 16 GB GDDR6, 4 GB DDR4 for the OS, a GPU with RX 5700 + RT hardware at RTX 2060 level, and an NVMe SSD will have great synergies together. I will be very happy if we get something like that.

RTX 2060 level is probably the lowest we can get.
We at least know that Sony has put an ARM chip in their consoles before, along with DDR3 RAM. Based on that, I am hoping they go with an ARM chip with 4-6 GB of LPDDR4 RAM all in one package exclusively for the OS. Then the APU with GDDR6 RAM would be exclusively for games.
He doesn't talk about it in the video but:
1) It uses RT.
2) It's his own implementation, he doesn't use DXR or any other RT API.
3) It's running on a 1070 GTX.
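Worth noting why that's possible without DXR: ray tracing is just intersection math that can run in ordinary compute or pixel shaders on any GPU (or on the CPU). A purely illustrative sketch of the core operation, ray vs. sphere, not taken from the implementation in the video:

    import math

    def intersect_sphere(origin, direction, center, radius):
        # Solve |origin + t*direction - center|^2 = radius^2 for the nearest t > 0.
        # 'direction' is assumed normalized, so the quadratic's 'a' term is 1.
        oc = [o - c for o, c in zip(origin, center)]
        b = 2.0 * sum(d * e for d, e in zip(direction, oc))
        c = sum(e * e for e in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None            # the ray misses the sphere entirely
        t_near = (-b - math.sqrt(disc)) / 2.0
        t_far = (-b + math.sqrt(disc)) / 2.0
        if t_near > 1e-6:
            return t_near          # nearest hit in front of the origin
        if t_far > 1e-6:
            return t_far           # origin is inside the sphere
        return None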
But will an ARM CPU be enough for the OS? Wouldn't it be better to allocate one core of the Ryzen to the OS instead?

ARM CPUs run Android comfortably nowadays. There is no reason one can't run a FreeBSD with a basic GUI.
ARM CPUs have advanced miles since the 2013 consoles: 32 to 64 bit and probably 2.5x IPC or greater. Snapdragon is running real Windows now.
If these consoles are more than $499 due to tariffs, I will just wait; it's not like the Xbox One X won't already play these games great.

Yeah, I thought the same thing too... until I tried Destiny 2 on PC this weekend in its 60 FPS glory. Even the X is significantly outdated because of its shitty CPU. I'll be a day-one Scarlett buyer no matter what the cost.
For a wireless next gen VR headset.