A couple of things about this, some of which the user I'm quoting below also notes. A/B tests are not as simple as you might think; there's a reason an entire profession, User Research, is dedicated to these kinds of design experiments.
You need answers to the following questions before you even consider an A/B test:
- Experimental Design:
- As the user below points out, how are you going to sample the ResetERA population?
- If you're including stakeholders, like Mods, why, and do the benefits outweigh the risks?
- Will the tests be opt-in or opt-out?
- What are the pros and cons to each of these strategies?
- Will you do a pilot study with the Prominent Members?
- Before you even considered a redesign, did you speak with your users to collect feedback and get a direction?
- How many variables should you change in a single A/B test in order to draw valid conclusions?
- How long do you need to let participants use the experimental design in order to collect adequate, accurate data?
- Note: The time allotted for this "experiment" was not enough. The data was biased by poor user onboarding, shock from the change, and the limited timeframe.
- Data:
- How many users will you need to collect data from before you feel confident that you're making the right decision?
- How are you collecting usage metrics?
- Are you basing decisions solely on quantitative metrics, or are you doing qualitative follow-ups on the trends you see?
- Will you publish all, or some, of the data to ResetERA?
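To make two of these questions concrete — how you sample the population, and how many users you need before you can trust the result — here is a minimal sketch. Everything in it is hypothetical (the user records, the strata, the metric names); it is not ResetERA's actual tooling, just an illustration of proportional stratified sampling and a standard two-proportion sample-size calculation.

```python
import math
import random
from collections import defaultdict
from statistics import NormalDist


def stratified_sample(users, stratum_of, total_n, seed=0):
    """Sample users proportionally from each stratum (e.g. account-age band),
    rather than naively grabbing whoever happens to be online at test start."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for user in users:
        strata[stratum_of(user)].append(user)
    sample = []
    for members in strata.values():
        # allocate seats in proportion to the stratum's share of the population
        k = round(total_n * len(members) / len(users))
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample


def users_per_arm(p_baseline, p_expected, alpha=0.05, power=0.80):
    """Minimum users in each A/B arm to detect a shift in a binary metric
    (e.g. daily-return rate) from p_baseline to p_expected."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    return math.ceil(
        (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_expected) ** 2
    )
```

The point of the second function is that intuition about "enough users" is usually wrong: detecting even a 5-point shift in a 50% baseline metric (`users_per_arm(0.50, 0.55)`) requires on the order of 1,500 users *per arm* at conventional significance and power, which is exactly why a short, small test produces noise rather than answers.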
People often misunderstand how to do design research and over-rely on quantitative data, assuming that alone means they're doing it right. I implore you: if you're going to attempt this, make sure you do it correctly. Otherwise stakeholders will just double down on bad decisions, using flawed data to plug their ears against feedback.