This may be sort of off topic, but when HDR was first announced for consoles everyone was saying that it would be really easy to implement and would basically just be flipping a switch since most games "render in hdr anyways, but just convert down to sdr." Where did all of that come from, because I cannot imagine it was devs?
It is not, and I doubt it came from devs. The HDR handling within an engine can, and does, sit at various points in the pipeline. The entire frame and its buffers will not be in an HDR colour space (Rec. 2020); any conversion/tone mapping down into a clipped Rec. 709 space happens on the relevant HDR elements, post-processed bloom being a good example. That conversion likely happens before this stage, and then you have to factor in any AA options. MSAA is a good example, but the game's TAA will be a huge factor here, not only for the HDR option but also for the abundance of stochastic render patterns the game uses for alpha, not even getting into transparencies and the alpha-to-coverage work for those. This is why engine pipelines are complex, and with Rage being a deferred engine with multiple G-buffers, that adds even more complexity.
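To make the "tone map down into a clipped Rec. 709 space" step concrete, here is a minimal Python sketch. This is not Rage's (or any engine's) actual code, and the function names are my own; it just shows the two basic operations involved: a linear-light gamut conversion from Rec. 2020 to Rec. 709 primaries (the standard BT.2087 matrix), then a simple Reinhard tone map and clip to get values an SDR target can display.

```python
# Hypothetical sketch of one HDR -> SDR pixel, not real engine code.

# ITU-R BT.2087 linear-light Rec. 2020 -> Rec. 709 conversion matrix.
BT2020_TO_BT709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def reinhard(c: float) -> float:
    """Simple Reinhard tone map: compresses [0, inf) into [0, 1)."""
    return c / (1.0 + c)

def hdr_to_sdr(rgb2020):
    """Map one linear Rec. 2020 HDR pixel to a clipped Rec. 709 SDR pixel."""
    # Gamut conversion: 3x3 matrix multiply in linear light.
    rgb709 = [sum(m * c for m, c in zip(row, rgb2020))
              for row in BT2020_TO_BT709]
    # Tone map, then clip negatives produced by out-of-gamut colours.
    return [min(max(reinhard(c), 0.0), 1.0) for c in rgb709]

# A bright HDR white (4x reference) ends up well below 1.0 after tone mapping.
print(hdr_to_sdr([4.0, 4.0, 4.0]))
```

Real pipelines differ a lot from this (filmic/ACES curves, per-channel vs luminance tone mapping, where in the frame the conversion sits), which is exactly why it is not "just flipping a switch".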
I covered some of this in my video and earlier in this thread, but this is a PR tick box IMO; as such they were better off shipping some form of HDR than none, as the outcry would have been worse otherwise, damned if you do, damned if you don't. In the grand scheme of things it does not diminish the game's visual quality, even on a 4K screen, and so long as you tweak your TV it should not look worse than SDR, but it almost certainly does not look better.