Thanks for the reply.
Could you elaborate on how exactly this process works for you (as the developer)? And do you think the cert process could be improved in any way?
Companies are strict and I can't really get into specific details. But the general process is that there's a large, LARGE list of specific testing scenarios that must be handled in a satisfactory way. For instance, if the game detects a corrupt save file, the player should be alerted and the game should not crash. If that doesn't happen, you fail cert. Repeat for hundreds of other specific tests.
It's actually a very valuable service, because console manufacturers typically don't charge for it. Honestly, I don't see much of a solution, though. It's a TON of work (it has to be done *numerous* times for every single title on the platform, regardless of how large the title is), and the manufacturers already devote a lot of resources to it. It's definitely not perfect, and sometimes things slip through, because ultimately people are people and software is enormously complex and impossible to test perfectly, especially for a remote team that doesn't know the first thing about you, your game, or the specifics of how it was built and coded.
Were I to suggest something (and this is just off the top of my head, really), I'd say to try to develop automated tooling built into the API so we could run checks on a lot of the simpler save/load/OS behavior automatically before we send it off. Typically any cert failure gets the submission sent back with a mandatory waiting period (so you can't just overload and spam the team with submissions), but sometimes you make one tiny mistake and it flubs the whole submission. Having an automated testing suite to catch some of the easier stuff could save a lot of time.
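The pre-submission suite being proposed could be as simple as a harness that runs a set of named checks and reports every failure before you submit. This is purely a sketch of the idea; the check names and the harness itself are made up, and a real version would hook into the platform SDK's save/load/OS events:

```python
from typing import Callable, Dict, List


def run_precert_checks(checks: Dict[str, Callable[[], bool]]) -> List[str]:
    """Run each named check and return the names of all failures.

    A check passes if it returns True; returning False or raising an
    exception counts as a failure (a crash during a check is itself
    exactly the kind of thing cert would fail you for).
    """
    failures = []
    for name, check in checks.items():
        try:
            ok = check()
        except Exception:
            ok = False
        if not ok:
            failures.append(name)
    return failures


# Toy stand-ins for real scenarios an SDK-integrated suite might cover.
example_checks = {
    "corrupt_save_shows_alert": lambda: True,
    "suspend_resume_restores_state": lambda: True,
}
```

Running the full list locally and getting back every failing check at once is the win: you fix the "one tiny mistake" before submission instead of burning a round trip and a mandatory waiting period on it.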