Revealing Minerva and addressing toxicity and abusive behavior in matches

Hi everyone,
We love competitive gaming, but it isn’t perfect. One of the key issues of any competitive gaming environment, and something players experience on our platform, is toxicity: players misbehaving during matches, affecting and often ruining the experience of others.
We understand that there is nothing worse than spending your time in a match with throwers, griefers, haters and racists.
This is why we’ve been thinking about how to solve this problem on a large scale, and we’re excited to finally start sharing some news about what we’ve been working on.
TL;DR
- Announcing Minerva, our Admin AI trained through machine learning to address toxicity at scale
- The first practical implementation is live and taking action on the platform, with positive results (a 20% reduction in toxic messages and a 10% reduction in the share of toxic messages across all messages)
- Introducing SMS verification, which prevents accounts flagged for smurfing, boosting, cheating or general toxicity from playing in any official FACEIT competition until they verify their phone number
OUR APPROACH TO ADDRESSING TOXIC BEHAVIOR
For the past year, we have listened closely to the community’s feedback on player abuse and toxicity in order to find a solution to this issue.
This feedback made it clear that toxicity expresses itself in many ways, and detecting and addressing it is no easy task.

A large part of the community asked for an impartial admin to observe and judge every match happening on FACEIT, which at the current volume of matches is simply not feasible. We therefore started looking for a solution that could have the same effect while remaining scalable.
To do this, we looked at what an in-game admin actually accomplishes, which we summarized in three main areas:
- Promptly identify any kind of toxic behavior happening in a match;
- Ensure that if a player behaves in a toxic way, action is taken immediately;
- Provide players with feedback on what they did wrong so they can correct their behavior.
To address this feedback in the short term we could have built a half-baked solution (e.g. a report-based “karma” system, a revamp of the current FBI system, etc.), but this clearly would not have achieved the desired results in the long term, as it would not identify these behaviors accurately and quickly enough to take precise, immediate action on them.
We therefore decided to embark on a long-term investment: building an admin-like AI powered by machine learning. With the help of Google Cloud and Jigsaw, the team that developed the Perspective API, we started building our artificial intelligence: Minerva, the Admin AI.
MINERVA 0.1 IS LIVE
Today, we are excited to share our first milestone: the first version of Minerva is live in production and already having an impact on the platform.
For the first iteration, we decided to focus on messages in the text chat of CS:GO matches. If a message is perceived as toxic in the context of the conversation, Minerva issues a warning for verbal abuse, while similar messages repeated in a chat are flagged as spam.
Minerva is able to take a decision just a few seconds after a match has ended: if abuse is detected, she sends the offender a notification containing a warning or a ban.
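Minerva’s production model itself isn’t public, but since it was built with Jigsaw, the team behind the Perspective API, a minimal sketch of scoring a single chat message against the public Perspective API gives a feel for the general approach. The endpoint and request shape below are Perspective’s public v1alpha1 API; the 0.9 threshold is purely illustrative and not Minerva’s actual cutoff.

```python
# Minimal sketch: scoring one chat message with Jigsaw's public
# Perspective API. Minerva's real model and thresholds are internal;
# the 0.9 cutoff below is illustrative only.
import requests

PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def toxicity_score(message: str, api_key: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0)."""
    body = {
        "comment": {"text": message},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    score = toxicity_score("you are garbage, uninstall", api_key="YOUR_API_KEY")
    if score > 0.9:  # illustrative threshold, not Minerva's actual one
        print(f"flagged as toxic (score={score:.2f})")
```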

Additionally, to address repeat offenders, punishments get harsher each time a player repeats the same abuse within a short period of time.
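We haven’t published the exact penalty ladder, so the escalation logic below is only a sketch under assumed steps and an assumed offense window; the real durations and thresholds Minerva uses are internal.

```python
# Illustrative escalation ladder for repeat offenders. The steps and
# the 30-day offense window are assumptions, not Minerva's real values.
from datetime import datetime, timedelta

OFFENSE_WINDOW = timedelta(days=30)                      # assumed "short period"
LADDER = ["warning", "24h ban", "72h ban", "7-day ban"]  # assumed steps

def next_punishment(prior_offenses: list[datetime], now: datetime) -> str:
    """Escalate based on how many offenses fall inside the recent window."""
    recent = sum(1 for t in prior_offenses if now - t <= OFFENSE_WINDOW)
    return LADDER[min(recent, len(LADDER) - 1)]
```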
After months of training aimed at minimizing false positives, we enabled Minerva’s automation in late August; since then she has been making warning and ban decisions without manual intervention.
If you want to know more about how Minerva’s chat model works, you can read this case study by Jigsaw.

To share some insights with you: over the last few months, more than 200,000,000 chat messages were analyzed, of which 7,000,000 were marked as toxic.
In her first month and a half of activity, Minerva issued 90,000 warnings and 20,000 bans for verbal abuse and spam. Toxic messages dropped from 2,280,769 in August to 1,821,723 in September, a 20.13% decrease.

Overall, fewer unique players are sending toxic messages. In August, 247,243 unique players sent at least one message that was flagged as toxic; in September that figure fell to 227,676, a decrease of about 8%.
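For anyone who wants to check the math, both reductions follow directly from the reported figures:

```python
# Reproducing the reported reductions from the monthly figures above.
aug_toxic, sep_toxic = 2_280_769, 1_821_723
aug_players, sep_players = 247_243, 227_676

msg_drop = (aug_toxic - sep_toxic) / aug_toxic * 100
player_drop = (aug_players - sep_players) / aug_players * 100

print(f"toxic messages: -{msg_drop:.2f}%")       # -20.13%
print(f"unique offenders: -{player_drop:.2f}%")  # -7.91%, i.e. about 8%
```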
SMS VERIFICATION FOR SMURFS, BOOSTERS AND MORE
Another growing concern within our community is the presence of smurf accounts and boosters on the platform.
Here, we decided to take a more preventive approach by introducing SMS verification for flagged accounts: every account suspected of being a smurf, booster, potential cheater or generally toxic is required to verify a phone number before being able to play in any official FACEIT competition.
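In logic terms the gate is simple; the sketch below is only an illustration, and the flag names and account fields are hypothetical rather than our internal ones.

```python
# Hypothetical gate for official competitions: flagged accounts must
# have a verified phone number before they can queue. Flag names are
# illustrative, not FACEIT's internal taxonomy.
FLAGS_REQUIRING_VERIFICATION = {"smurf", "booster", "suspected_cheater", "toxic"}

def can_join_official_competition(account_flags: set[str], phone_verified: bool) -> bool:
    """Unflagged accounts pass; flagged ones need a verified phone number."""
    if account_flags & FLAGS_REQUIRING_VERIFICATION:
        return phone_verified
    return True
```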
Over the past two months, more than 250,000 accounts were required to verify their phone number. Of those, 50,000 illegitimate accounts were blocked before being able to play.
In the near future, we plan to make this feature available to Organizers, allowing them to restrict access to their tournaments and hubs to users with a verified phone number.
WHAT’S NEXT
In-game chat detection is only the first and simplest application of Minerva, and more of a case study that serves as a first step toward our vision for this AI.
We’re really excited about this foundation, as it represents a strong base that will allow us to improve Minerva until she can detect and address all kinds of abusive behavior in real time.
In the coming weeks we will announce new systems that will support Minerva in her training.
Stay tuned for updates,
The FACEIT team.
WE’RE HIRING
If you have interest and experience in developing these types of solutions, or in development in general, have a look at the open positions on our Jobs page.