The announcement comes via a post on the Riot Games website (via PC Gamer), which reads: “As part of a larger effort to combat disruptive behaviour, Riot Games recently updated its Privacy Notice and Terms of Service to allow us to record and evaluate in-game voice communications when a report for that type of behaviour is submitted.”
A voice evaluation system will launch on July 13 to “help train our language models and get the tech in a good enough place for a Beta launch later this year.” However, during that period voice evaluation “will not be used for disruptive behaviour reports”; that will only come with a future Beta.
The post continues: “We know that before we can even think of expanding this tool, we’ll have to be confident it’s effective, and if mistakes happen, we have systems in place to make sure we can correct any false positives (or negatives for that matter).”
It closes by reiterating that this is “brand new tech and there will for sure be growing pains”, but that “the promise of a safer and more inclusive environment for everyone who chooses to play is worth it.”
This comes on the heels of Riot Games’ previous efforts to combat toxicity in Valorant’s voice chat. The developer stated in February that “voice chat abuse is significantly harder to detect compared to text”, though it clarified that efforts to tackle it were already underway at the time.