I totally agree...and moreover, I think 1440p-1800p is the sweet spot for typical large 4K TVs and normal viewing distances
To truly resolve most of the detail of full 4K you need to sit extremely close to the TV, and that makes games practically unplayable
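For a rough sense of the numbers (my own back-of-envelope, assuming roughly 20/20 acuity of about one arcminute and a hypothetical 65-inch panel, neither of which comes from the post above), the distance inside which individual 4K pixels are even resolvable is surprisingly short:

```cpp
// Back-of-envelope check on the "sit extremely close" claim. Assumes ~20/20
// acuity (about one arcminute per resolvable detail) and a 65" 16:9 panel;
// both numbers are illustrative assumptions, not from the thread.
#include <cmath>
#include <cstdio>

int main() {
    const double pi           = 3.14159265358979323846;
    const double diagonalIn   = 65.0;                                   // assumed TV size
    const double aspect       = 16.0 / 9.0;
    const double widthIn      = diagonalIn * aspect / std::sqrt(aspect * aspect + 1.0);
    const double pixelPitchIn = widthIn / 3840.0;                       // size of one 4K pixel

    const double arcminRad     = (1.0 / 60.0) * pi / 180.0;             // ~20/20 acuity
    const double maxDistanceIn = pixelPitchIn / std::tan(arcminRad);

    std::printf("To resolve individual 4K pixels on a 65\" panel you'd need to sit "
                "within about %.1f feet.\n", maxDistanceIn / 12.0);
}
```

That works out to roughly four feet, which is the basis for the "you'd have to sit uncomfortably close" argument.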
I cannot really tell a huge difference between resolutions beyond 1440p or so.
I want next gen to hit the resolution sweet spot and then use all those TFs for graphical fidelity or framerate.
Which is also why I'm not a huge fan of a two-SKU strategy; I want one high-end SKU as the baseline
I'm guessing you don't have a 1X or a high-end PC and a Pro? You don't need to sit close to see the detail of a native 4K scene, and there certainly is a massive difference between 1440p and 4K.
Play nothing but a native 4K (or 4K CBR even) game for a few weeks straight then load up that same game at 1440p and still tell us there isn't a big difference.
Ok, but what exactly is the RT hardware?
They are specialized cores that walk the scene as a set of nested bounding boxes (a bounding volume hierarchy) to work out whether or not a ray hits an object. Each box the ray enters gets broken down into smaller boxes, and so on, until the core determines exactly which triangles (and therefore pixels) the ray hits, and that hit information is handed back to the shader cores for shading. The noisy result from the limited number of rays is then cleaned up by a denoising pass, which on Turing can be run on the tensor cores.
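To make that concrete, here's a minimal sketch of the idea in C++ (my own toy structures, not NVIDIA's actual RT-core interface): rays are tested against nested bounding boxes, whole subtrees the ray misses are skipped, and only the surviving leaves hand candidate triangles back to the shaders.

```cpp
// Toy sketch of what RT hardware accelerates: walking a bounding volume
// hierarchy (BVH) so only boxes a ray actually enters get tested further.
// Hypothetical structures for illustration, not any vendor's real API.
#include <algorithm>
#include <cstdio>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, invDir; };        // invDir = 1 / direction, precomputed
struct AABB { Vec3 lo, hi; };

// Classic slab test: does the ray pass through this box at all?
bool hitsBox(const Ray& r, const AABB& b) {
    float tmin = 0.0f, tmax = std::numeric_limits<float>::max();
    const float ro[3]  = { r.origin.x, r.origin.y, r.origin.z };
    const float inv[3] = { r.invDir.x, r.invDir.y, r.invDir.z };
    const float lo[3]  = { b.lo.x, b.lo.y, b.lo.z };
    const float hi[3]  = { b.hi.x, b.hi.y, b.hi.z };
    for (int i = 0; i < 3; ++i) {
        float t0 = (lo[i] - ro[i]) * inv[i];
        float t1 = (hi[i] - ro[i]) * inv[i];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;       // ray misses this box entirely
    }
    return true;
}

struct BvhNode {
    AABB bounds;
    std::vector<BvhNode> children;           // empty => leaf
    std::vector<int> triangleIds;            // geometry stored at the leaves
};

// Traversal: skip whole subtrees whose box the ray misses, and collect
// candidate triangles at the leaves for the shader cores to shade.
void traverse(const Ray& ray, const BvhNode& node, std::vector<int>& hits) {
    if (!hitsBox(ray, node.bounds)) return;
    if (node.children.empty()) {
        hits.insert(hits.end(), node.triangleIds.begin(), node.triangleIds.end());
        return;
    }
    for (const BvhNode& child : node.children) traverse(ray, child, hits);
}

int main() {
    // One parent box with two child boxes; the ray only enters the first child.
    BvhNode root{ {{0,0,0},{4,4,4}}, {
        BvhNode{ {{0,0,0},{2,2,2}}, {}, {1, 2} },
        BvhNode{ {{2,2,2},{4,4,4}}, {}, {3} } }, {} };
    Ray ray{ {0.5f, 0.5f, -1.0f}, {1e9f, 1e9f, 1.0f} };  // pointing straight down +z
    std::vector<int> hits;
    traverse(ray, root, hits);
    std::printf("candidate triangles: %zu\n", hits.size());
}
```

The point of dedicated hardware is that this box-testing and traversal loop runs in fixed-function units instead of eating shader time.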
Out of curiosity, what data are you using to work out that 99% of users can't tell the difference?
Rendering techniques don't all scale linearly with resolution, and it's possible to combine different resolution buffers to create a final native 4K image.
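As a rough illustration of mixing buffer resolutions (my own sketch, not any particular engine's pipeline): a costly pass like particles can be rendered at half resolution and then upsampled into the full-resolution target, so the final frame is still output at native 4K even though not every pass ran at 4K.

```cpp
// Sketch of mixing buffer resolutions: opaque geometry at full 4K, a costly
// effect (e.g. particles) at half resolution, bilinearly upsampled and blended
// into the full-resolution target. Single-channel and additive purely for brevity.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Image {
    int w, h;
    std::vector<float> px;                   // one channel for brevity
    float at(int x, int y) const { return px[y * w + x]; }
};

// Bilinear fetch from the low-resolution effect buffer at normalized coords.
float sampleBilinear(const Image& img, float u, float v) {
    float fx = u * (img.w - 1), fy = v * (img.h - 1);
    int x0 = (int)fx, y0 = (int)fy;
    int x1 = std::min(x0 + 1, img.w - 1), y1 = std::min(y0 + 1, img.h - 1);
    float tx = fx - x0, ty = fy - y0;
    float top = img.at(x0, y0) * (1 - tx) + img.at(x1, y0) * tx;
    float bot = img.at(x0, y1) * (1 - tx) + img.at(x1, y1) * tx;
    return top * (1 - ty) + bot * ty;
}

int main() {
    const int W = 3840, H = 2160;                 // full 4K output target
    Image opaque  { W,     H,     std::vector<float>(W * H, 0.25f) };
    Image effects { W / 2, H / 2, std::vector<float>((W / 2) * (H / 2), 0.5f) };

    std::vector<float> final4k(W * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            float u = x / float(W - 1), v = y / float(H - 1);
            // Additive blend of the upsampled half-res effect over full-res opaque.
            final4k[y * W + x] = opaque.at(x, y) + sampleBilinear(effects, u, v);
        }
    std::printf("composited %dx%d output, sample value %.2f\n", W, H, final4k[0]);
}
```

A real engine would use an alpha-aware blend and a smarter upsample, but the output resolution stays native either way.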
Once again, developers will choose what is right for their titles. People are winding themselves up for no reason by stating what developers should or should not do with the potential of next-generation hardware.
It's perfectly feasible to imagine a 4K 60 or 120fps 1vs1 arena shooter happily living alongside a 30fps title that uses 4K CB or dynamic resolution, and there are plenty of other paths for developers to go down.
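For the dynamic resolution part, the usual idea is roughly this (a toy controller with made-up numbers, not any console's actual scaler): nudge the internal render resolution up or down each frame based on how close the last frame came to its time budget, so the game holds its target framerate.

```cpp
// Toy dynamic resolution controller: scale the internal render resolution up or
// down based on the previous frame's GPU time versus the frame budget.
// The thresholds and step sizes are invented for illustration.
#include <algorithm>
#include <cstdio>

struct DynamicRes {
    float scale = 1.0f;                        // 1.0 = native 4K internal resolution
    const float minScale = 0.7f;               // never drop below ~70% per axis
    const float budgetMs = 16.6f;              // 60 fps target

    void update(float lastGpuMs) {
        if (lastGpuMs > budgetMs)              // over budget: shrink a little
            scale -= 0.05f;
        else if (lastGpuMs < budgetMs * 0.9f)  // comfortably under: grow back
            scale += 0.02f;
        scale = std::clamp(scale, minScale, 1.0f);
    }

    void resolution(int& w, int& h) const {
        w = int(3840 * scale);
        h = int(2160 * scale);
    }
};

int main() {
    DynamicRes dr;
    // Simulated GPU times for a few frames: a heavy scene, then it calms down.
    const float gpuTimes[] = { 18.0f, 19.5f, 17.0f, 15.0f, 13.0f, 12.5f };
    for (float t : gpuTimes) {
        dr.update(t);
        int w, h;
        dr.resolution(w, h);
        std::printf("gpu %.1f ms -> internal res %dx%d\n", t, w, h);
    }
}
```

Smarter controllers predict the next frame's cost instead of only reacting to the last one, but the principle is the same.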
Thank you. There is no one-size-fits-all solution, and we shouldn't be so focused on whether a developer uses a reconstruction technique or not. What potentially happens then is that people start questioning developers when their preferred technique isn't used (e.g. "this game would look better if they used CBR" or "this game would look sharper if they went native"). We need to trust the developers to implement the best methods for their games.
Checkerboard 4K resolves a native 4K image. That's a fact. The only people who will complain are fanboys who want some kind of "gotcha" notch in their belt.
Besides, it didn't exactly work out for the guy who sued Sony over Killzone's resolution.
SMH. It resolves a native 4K image in a static scene, but elements can break down once the scene is in motion, creating artifacts. The people slamming 4K CBR as some ugly alternative are no different from people like you claiming CBR is the better route to take. So basically PlayStation gamers versus PC and Xbox gamers. It's no surprise that people's preferred platforms will shape their rendering preferences.
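To show why motion is where checkerboarding struggles (a toy 1D sketch of the general idea, not Sony's actual reconstruction filter): each frame only half the pixels are freshly rendered, and the gaps are filled from the previous frame, reprojected along motion vectors. When those vectors are wrong, or the content wasn't visible last frame, the filled-in pixels don't match and you get the artifacts described above.

```cpp
// Toy 1D checkerboard reconstruction: each frame renders only half the pixels
// and fills the rest by reprojecting last frame's samples along a motion
// vector. When the estimated motion is wrong, the filled pixels diverge from
// ground truth -- the break-up people notice once the scene moves.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

const int N = 16;

// "Ground truth": a simple gradient that slides sideways by `velocity` each frame.
float scene(int x, int frame, int velocity) { return float(x - frame * velocity); }

int main() {
    const int trueVelocity = 2;    // how the scene really moves
    const int motionVector = 1;    // what the reconstruction believes (deliberately wrong)

    std::vector<float> prev(N), curr(N);
    for (int x = 0; x < N; ++x) prev[x] = scene(x, 0, trueVelocity);  // frame 0: fully rendered

    // Frame 1: only even pixels are freshly rendered; odd pixels are filled by
    // reprojecting frame 0 along the (estimated) motion vector.
    float maxError = 0.0f;
    for (int x = 0; x < N; ++x) {
        if (x % 2 == 0) {
            curr[x] = scene(x, 1, trueVelocity);
        } else {
            int src = std::clamp(x - motionVector, 0, N - 1);
            curr[x] = prev[src];
            maxError = std::max(maxError, std::fabs(curr[x] - scene(x, 1, trueVelocity)));
        }
    }
    std::printf("max error on reprojected pixels: %.1f (0 would be a perfect reconstruction)\n",
                maxError);
}
```

With a correct motion vector the error drops to zero, which is why checkerboarded stills and slow pans can look indistinguishable from native while fast or erratic motion shows seams and shimmer.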
Still, what you're saying is silly and missing context. I'm perfectly fine with CBR; games will still use the technique next gen and likely look great. However, you're fooling yourself if you think it's the same as a native 4K game. It can be close, and surely AA techniques can further blur that line (no pun intended), but it's not the same. We shouldn't begrudge any developer that chooses a native technique over a reconstructed one, or vice versa.