Our commitment to foster a safe, welcoming, and inclusive gaming community is powered by the ongoing investments and innovations we make to protect players and promote positive play. Today, we are releasing our fourth Xbox Transparency Report, which demonstrates our advancements in promoting player safety and our commitment to transparency.
At Xbox, we remain dedicated to advancing the responsible application of AI to amplify our human expertise in detecting potential toxicity. By leveraging Microsoft’s investments that combine the latest generative AI advancements with human expertise and judgment, we have a unique opportunity to create industry-leading safety innovations that better protect our communities.
Our progressive approach to leveraging AI to categorize and identify harmful content is guided by Microsoft’s Responsible AI Standard and allows our human moderators to focus on the more complex and nuanced harms, ensuring that our actions to safeguard the gaming community are accurate, consistent, and fair.
Early AI-supported investments include:
Auto Labeling helps classify conversational text by identifying words or phrases that match the criteria and characteristics of potentially harmful content. This approach uses AI to analyze reported text and helps community moderators quickly filter out false reports, so human moderation efforts can focus on the content that is most critical.
Image Pattern Matching, powered by advanced database and image-matching techniques, enables rapid removal of known harmful content and identification of emergent toxic imagery.
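As a loose illustration only (not Xbox’s actual system, which uses trained AI classifiers rather than a fixed phrase list), the auto-labeling idea above can be sketched as a simple triage function: reported text that matches known harmful phrases is escalated to a human moderator, while non-matching reports are deprioritized. The phrase list and label names here are hypothetical.

```python
# Illustrative sketch of keyword-based auto labeling for moderation triage.
# HARMFUL_PHRASES is a hypothetical stand-in for the "criteria and
# characteristics of potentially harmful content" mentioned above.
HARMFUL_PHRASES = {"example slur", "example threat"}

def auto_label(reported_text: str) -> str:
    """Label reported text so moderators can triage it quickly."""
    text = reported_text.lower()
    if any(phrase in text for phrase in HARMFUL_PHRASES):
        return "needs-human-review"   # matched: escalate to a moderator
    return "likely-false-report"      # no match: deprioritize

print(auto_label("that was an example threat"))  # needs-human-review
print(auto_label("good game everyone"))          # likely-false-report
```

In a production system this matching step would be one early filter, with AI models and human judgment handling the nuanced cases.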
Among the key takeaways in the report:
Effective New Ways of Protecting Players: Player behavior on Xbox voice chat has meaningfully improved since the launch of our new voice reporting feature. The feature has been effective in enabling players to report inappropriate verbal behavior with minimal impact on their gameplay. Since its launch, 138k voice records have been captured utilizing our ‘capture now, report later’ system. When those reports resulted in an Enforcement Strike, 98% of players showed improvement in their behavior and did not receive subsequent enforcements. Additionally, we have updated our proactive approach to more effectively prevent harmful content from reaching players, blocking 3.2 million more lines of text than in the last report, a 67% increase, and allowing players to engage in a positive way. We will continue to invest in features that protect and enhance the Xbox player experience.
Understanding of Enforcements Leads to a Safer Community: The Enforcement Strike System was launched last year to promote positive play while helping players understand the severity of a violation. Since its launch, 88% of players who received an Enforcement Strike did not go on to violate our Community Standards and receive another enforcement. We also reduced overall suspension lengths for minor offenses: of enforcements that would previously have resulted in a suspension of three days or more, 44% were given a reduced length. Together, these results show that the majority of players choose to improve their behavior after only one suspension, even when it is short.
Blocking Inauthentic Accounts Before They Have Impact: We have been continuously investing in ways to spot inauthentic accounts, allowing us to quickly block many of them as soon as they’re created and prevent them from affecting our players. Our improved methods prevented millions of inauthentic accounts from being used as soon as they were created and have led to a decline in the number of proactive enforcements on inauthentic accounts for the first time in two years, with enforcements dropping from 16.3 million in the last report to 7.3 million.
Beyond the Transparency Report, our team continues to drive innovation in safety and improve our players’ experience:
Launch of Microsoft Family Safety Toolkit: Microsoft understands that parents and caregivers are busy, and that the tech landscape keeps evolving. This toolkit provides guidance on how to leverage Microsoft’s safety features and family safety settings to support and enhance digital parenting, plus guidance for families looking to navigate the world of generative AI together. We’ve also included links to a selection of informational resources already made for parents, such as the Family Online Safety Institute’s How To Be A Good Digital Parent Toolkit.
Microsoft’s Annual Global Online Safety Survey: This survey seeks to better understand the digital ecosystem and individuals’ experiences online. We share our results publicly so that others can benefit from the insights as we collectively strive to create a safer online environment. The survey looks at how people of all ages perceive the opportunities and risks posed by technology.
Building a better online community with Minecraft Education’s Good Game: This recent addition to the CyberSafe collection of immersive learning worlds is a story-based adventure aimed at helping players ages 8-18 understand the responsibilities, tools, responses, and strategies that foster empathy and enable healthy online interactions. Minecraft Education users can find CyberSafe: Good Game in the in-game lesson library. The world is also available for free to Minecraft Bedrock players in the Minecraft Marketplace.
Together with our players, we continue to build a strong and supportive community for everyone, free from intimidation and distractions. We are dedicated to improving our safety features, strengthening our connection with players through feedback and in-game reporting, and applying layers of protection more effectively. The AI solutions we release are all part of the journey we are taking to build a safer and more welcoming environment for all, because everyone deserves to play comfortably and experience the joy that gaming has to offer. Thank you for taking part in this journey with us.