I was a bit worried about how this would look, as judging from the clips I'd seen posted, the video sequences have lost detail too.
Why didn't they run the pre-rendered backdrops through an AI upscaler like they did the cutscenes? I feel like it would have given much better results than sticking a 5% blur on them in Photoshop. Characters stand out way too much now.
Are we sure they didn't already but didn't do a great job?
The problem with AI upscalers, especially with images/video containing pixelation/dithering that isn't simply caused by the original resolution, is that it can be hard to strike the right balance between preserving detail, avoiding weird artefacts, and producing something that actually looks higher res. Trying to preserve detail can end up with nasty artefacts where the upscaler interprets pixelation as actual scene detail and enhances it. The alternatives are to do a preprocessing pass to smooth things out, use a model that performs noise reduction before upscaling, or use a model that's more subtle and less sensitive to false positives from the pixelation/dithering. Of course, depending on the model, that tends to end up with detail being erased, or an upscale that looks almost identical to the original input.
With enough time and working frame by frame I'm sure good results are possible, but I guess it would depend on how long that would take and what it would cost.
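To give a rough idea of the "smooth first, then upscale" approach mentioned above, here's a minimal sketch. It assumes OpenCV, uses made-up file names, and stands in a plain Lanczos resize for whatever AI upscaler you'd actually run, just so the example is self-contained.

```python
# Sketch: light denoise pass before handing a frame to an upscaler, so
# dithering/pixelation isn't mistaken for scene detail and enhanced.
# File names are placeholders.
import cv2

src = cv2.imread("backdrop_frame.png")  # hypothetical input frame

# Non-local means is gentler on edges than a plain Gaussian blur, so it
# knocks down dithering without flattening as much real detail.
smoothed = cv2.fastNlMeansDenoisingColored(
    src, None, h=5, hColor=5, templateWindowSize=7, searchWindowSize=21
)

# Stand-in for the actual AI upscaler (Real-ESRGAN, waifu2x, whatever);
# Lanczos here just keeps the sketch runnable end to end.
upscaled = cv2.resize(smoothed, None, fx=2, fy=2, interpolation=cv2.INTER_LANCZOS4)

cv2.imwrite("backdrop_frame_2x.png", upscaled)
```

How hard you push the denoise (the h values) is exactly the balance problem: too low and the upscaler chews on the dithering, too high and you've erased the detail you were trying to keep.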
It's a problem I've had trying to upscale animated DVDs with crappy transfers. I want to remove some of the noise and sharpen everything up, but rain scenes especially become tricky, as AI models with heavier denoising tend to confuse the rain effect with noise. If I dial it back to retain the rain, other scenes end up looking strange or plain nasty. Also, in old NTSC-to-PAL transfers the upscaler seems to be able to detect and enhance pixelation caused by that slight upscale, even though it's almost invisible to the naked eye.
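One crude way to spot where a denoise pass is eating rain along with the noise is to measure how much high-frequency detail it removed per frame and flag the outliers. The sketch below is just that rough heuristic, with hypothetical frame paths, not a proper quality metric.

```python
# Rough heuristic: how much fine structure (grain, rain streaks) did the
# denoiser strip from a frame? Frames with unusually high scores are worth
# re-checking with a gentler setting. Paths are placeholders.
import cv2
import numpy as np

def detail_removed(original_path, denoised_path):
    orig = cv2.imread(original_path, cv2.IMREAD_GRAYSCALE)
    den = cv2.imread(denoised_path, cv2.IMREAD_GRAYSCALE)
    # High-pass both frames; the drop in residual energy is a crude proxy
    # for how much fine detail the denoiser removed.
    hp_orig = orig.astype(np.float32) - cv2.GaussianBlur(orig, (0, 0), 2).astype(np.float32)
    hp_den = den.astype(np.float32) - cv2.GaussianBlur(den, (0, 0), 2).astype(np.float32)
    return float(np.mean(np.abs(hp_orig)) - np.mean(np.abs(hp_den)))

print(detail_removed("frame_0421_orig.png", "frame_0421_denoised.png"))
```

It won't tell you whether what was removed was rain or genuine noise, but it narrows down which scenes to eyeball instead of scrubbing through the whole disc.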