We've been planning some changes to User Reviews for a while now, and we thought it'd be useful to give you some insight into our thinking on review bombing, which is one of the problems we're tackling. Review bombing is where players post a large number of reviews in a very compressed time frame, aimed at lowering the Review Score of a game. At the same time, they upvote each other's reviews and downvote all the other reviews. To understand why we think this is a problem at all, we first need to talk a bit about the goals of User Reviews, and the associated Review Score.
For User Reviews, the goal is fairly obvious - allow people who've played a game to tell potential purchasers whether or not they should buy the game, and why. When we first implemented reviews, we hoped their existence would be enough. But it became clear very quickly that many potential purchasers wanted a summary of some kind, so we created the Review Score. It shows you the ratio of positive to negative reviews for the people who've purchased the game, and from that, it has the goal of trying to provide a potential purchaser with an estimate of how likely it is that they'll be happy with that purchase.
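To make the idea concrete, here's a minimal sketch of a Review Score computed the way the post describes it, as the share of positive reviews among purchasers. The `Review` type and `review_score` function are illustrative assumptions for this post, not Steam's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Review:
    positive: bool   # thumbs up (True) or thumbs down (False)
    timestamp: int   # Unix time the review was posted

def review_score(reviews):
    """Percentage of positive reviews, or None if there are no reviews yet."""
    if not reviews:
        return None
    positive = sum(1 for r in reviews if r.positive)
    return 100.0 * positive / len(reviews)

reviews = [Review(True, 0), Review(True, 1), Review(False, 2), Review(True, 3)]
print(review_score(reviews))  # 75.0
```

In practice the score would be computed over a window of reviews (the post later mentions a 30-day Recent value and an all-time Overall value), but the core calculation is this ratio.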
So why is review bombing a problem? On the one hand, the players doing the bombing are fulfilling the goal of User Reviews - they're voicing their opinion as to why other people shouldn't buy the game. But one thing we've noticed is that the issue players are concerned about can often be outside the game itself. It might be that they're unhappy with something the developer has said online, or about choices the developer has made in the Steam version of their game relative to other platforms, or simply that they don't like the developer's political convictions. Many of these out-of-game issues aren't very relevant when it comes to the value of the game itself, but some of them are real reasons why a player may be unhappy with their purchase.
When it comes to the Review Score itself, however, it's even less clear that these out-of-game reasons are relevant. When we look at what happens with the Review Score after a review bomb, we see that it generally recovers, in some cases fully back to where it was beforehand. When we took some time to examine the data more closely, measuring the weekly positive-to-negative ratio of new reviews in the time periods around the review bomb, the pattern was even clearer: the review bomb ends up being a temporary distortion of the Review Score.
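The measurement described above can be sketched as a simple bucketing exercise: group reviews into week-long windows and compute the positive fraction in each. This is an illustrative reconstruction, assuming reviews arrive as `(positive, timestamp)` pairs; a bomb shows up as a sharp dip in one window with the surrounding windows unaffected.

```python
from collections import defaultdict

WEEK = 7 * 24 * 3600  # seconds in a week

def weekly_positive_ratio(reviews):
    """Map each week index to the fraction of positive reviews posted that week.
    reviews: iterable of (positive: bool, timestamp: int) pairs."""
    buckets = defaultdict(lambda: [0, 0])  # week index -> [positive, total]
    for positive, timestamp in reviews:
        week = timestamp // WEEK
        buckets[week][1] += 1
        if positive:
            buckets[week][0] += 1
    return {week: pos / total for week, (pos, total) in sorted(buckets.items())}
```

With synthetic data where week 1 is bombed (2 positive, 18 negative) and weeks 0 and 2 run 9-to-1 positive, the ratios come out roughly 0.9, 0.1, 0.9: a temporary distortion, exactly the shape described above.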
This implies that, while the review bombers were unhappy with a decision the developer made, the purchasers of the product afterwards were often as happy with the game as the players before them. In the cases where the Review Score didn't return fully to its prior level, we believe the issue behind the review bomb genuinely did affect the happiness of future purchasers of the game, and ended up being accurately reflected in the regular ongoing reviews submitted by new purchasers. In some review bomb cases, the developers made changes in response to the community dissatisfaction, and in others they didn't - but there didn't seem to be much correlation between whether they did and what happened to their Review Score afterwards.
In short, review bombs make it harder for the Review Score to achieve its goal of accurately representing the likelihood that you'd be happy with your purchase if you bought a game. We thought it would be good to fix that, if we could do it in a way that didn't stop players from being able to voice their opinions.
So what solutions did we explore? An obvious one would be to simply remove the Review Score. Then, as a potential customer you'd be forced to read the User Reviews themselves to see if the product sounded interesting. If you saw reviews talking about something outside the scope of the game, you could decide for yourself if it was an issue that would affect your happiness with the purchase. Unfortunately, we're pretty certain that this isn't really an option - scores were added in response to player demand in the past, and that demand for a summary of some kind is likely to still be there, even if players know it isn't always accurate.
Another idea was a temporary lock on reviewing, similar to how stock markets halt trading on specific stocks when abnormal behavior is detected. Based on the theory that review bombs are temporary distortions, we could prevent reviews for short periods of time whenever we detect massive distortions in submissions. In the cases where the cause of the community's dissatisfaction truly affects new customers' likely happiness with the game, we're confident the Review Score would still move down after the lock period ended.
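A trading-halt-style trigger like the one considered above could be as simple as comparing today's submission volume to a recent baseline. This is a hypothetical sketch with made-up thresholds (`baseline_days`, `spike_multiplier`), not a system Steam built:

```python
def should_lock(counts_per_day, baseline_days=30, spike_multiplier=10):
    """Return True when the latest day's review count massively exceeds
    the average over the preceding baseline window.
    counts_per_day: list of daily review-submission counts, oldest first."""
    if len(counts_per_day) <= baseline_days:
        return False  # not enough history to establish a baseline
    today = counts_per_day[-1]
    baseline = counts_per_day[-baseline_days - 1:-1]  # the days before today
    average = sum(baseline) / len(baseline)
    return average > 0 and today > spike_multiplier * average
```

A steady 10 reviews a day never trips it; a day with 500 does. The hard part, as the next paragraph discusses, isn't detection but deciding what a lock should actually do to players' ability to post.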
But if we lock reviews on a product for a short period of time, what does that mean exactly? Are players no longer able to post reviews at all during that time? Or should they be able to post them, but we ignore them for the purpose of calculating the Review Score? In the end, we didn't like the way this ultimately meant restricting the ability for players to voice their opinions. We don't want to stop the community having a discussion about the issue they're unhappy about, even though there are probably better places to have that conversation than in Steam User Reviews.
We could change the way the Review Score is calculated, focusing on much more recent data. One of the reasons a review bomb can distort a game's Review Score for a significant period of time is because the score is based on reviews over a period of 30 days for the Recent value, and all time for the Overall value. But doing this would likely result in more fluctuation and potentially less accuracy for all games, not just review bombed ones.
In the end, we decided not to change the ways that players can review games, and instead focused on how potential purchasers can explore the review data. Starting today, each game page contains a histogram of the positive-to-negative ratio of reviews over the entire lifetime of the game, and by clicking on any part of the histogram you're able to read a sample of the reviews from that time period. This makes it easy for a potential purchaser to spot temporary distortions in the reviews, investigate why a distortion occurred, and decide for themselves whether it's something they care about. This approach has the advantage of never preventing anyone from submitting a review, but it does require slightly more effort on the part of potential purchasers.
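The data behind such a histogram can be sketched as bucketing reviews over the game's lifetime and keeping a small random sample per bucket for the click-through. The bucket size and field names here are illustrative assumptions, not Steam's schema:

```python
import random
from collections import defaultdict

MONTH = 30 * 24 * 3600  # illustrative bucket width, in seconds

def review_histogram(reviews, bucket_seconds=MONTH, sample_size=3):
    """Build per-bucket positive/negative counts plus a random sample of
    reviews, so a reader can drill into any bar of the histogram.
    reviews: list of (positive: bool, timestamp: int, text: str) tuples."""
    buckets = defaultdict(list)
    for review in reviews:
        buckets[review[1] // bucket_seconds].append(review)
    bars = []
    for key in sorted(buckets):
        group = buckets[key]
        positive = sum(1 for p, _, _ in group if p)
        bars.append({
            "bucket": key,
            "positive": positive,
            "negative": len(group) - positive,
            "sample": random.sample(group, min(sample_size, len(group))),
        })
    return bars
```

A review bomb then appears as one bar with an unusually large negative count, and the attached sample lets the reader see for themselves what the complaints were about.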
It also has the benefit of allowing you to see how a game's reviews have evolved over time, which is great for games that are operating as services. One subtlety that's not obvious at first is that most games slowly trend downwards over time, even if they haven't changed in any way. We think this makes sense when you realize that, generally speaking, earlier purchasers of a game are more likely to enjoy it than later purchasers. In the pool of players who are interested in a game, the ones who are more confident that they'll like the game will buy it first, so as time goes on the potential purchasers left are less and less certain that they'll like the game. So if you see a game's reviews trending up over time, it may be an even more powerful statement about the quality of work its developers are doing.
Hopefully this post has been useful. It's quite possible that we'll need to revisit this when we move to personalized review scores, where our prediction of your happiness with a purchase is based upon the games you've enjoyed in the past. In the meantime, we'll keep a close eye on the community conversation around reviews.