Riot Games is rolling out new, more stringent measures to combat toxic behavior in Valorant.
Starting July 13, the developer will begin a background launch of a voice chat evaluation system in North America. The system will help train its language models (in English only, for now) and get the technology ready for a beta launch later this year.
The goal is to eventually have a reliable method of collecting "clear evidence" that can verify violations of behavioral policies before Riot takes action. It will also let Riot give players more concrete reasons why a particular action resulted in a penalty.
Riot will listen in to your voice chat to train its language models
Back in April 2021, Riot updated its Privacy Notice and Terms of Service to allow it to record and evaluate in-game voice communications when a report is submitted.
It's finally ready to take advantage of that change. The move is part of a larger effort to combat what Riot calls "disruptive behavior," launching in Valorant first. This suggests the technology could eventually make its way to other Riot games, such as League of Legends.
However, voice evaluation during this initial period will not be used for disruptive behavior reports. That will only begin in a future beta, according to Riot.
“We know that before we can even think of expanding this tool, we’ll have to be confident it’s effective, and if mistakes happen, we have systems in place to make sure we can correct any false positives (or negatives for that matter),” said Riot.
As with many online games, toxicity remains a big problem in Valorant. Riot hopes to create a safer and more inclusive environment for everyone, and the new voice chat evaluation system is a big part of that vision.