Every year, Safer Internet Day provides an opportunity to pause and reflect on the state of online safety – how far we’ve come and how we can continue to improve. For almost a decade, Microsoft has marked the occasion by releasing research on how individuals of all ages perceive and experience risk online. Last year, we highlighted the growing importance of AI. This year, in our ninth Global Online Safety Survey, we’ve dug deeper to understand how people view and are using this technology, plus how well they can identify AI-generated content.
Our findings show that while AI use has grown globally (51% of respondents have used AI, up from 39% in 2023), worries about the technology have also increased: 88% of people said they were concerned about generative AI, compared with 83% last year. Further, our data confirms that people have difficulty identifying AI-generated content, which may amplify the risks posed by abusive AI content.
Announcing new resources to empower the responsible use of AI
At Microsoft, we are committed to advancing AI responsibly to realize its benefits. Fundamental to this is the work we do to build a strong safety architecture and to safeguard our services from abuse. Unfortunately, we know that the creation of harmful content is one of the ways in which AI can be subject to abuse, which is why we are taking a comprehensive approach to addressing this issue. That approach includes public awareness and education – and this year’s research underscored the need for media literacy and guidance on the responsible use of AI. Building on the launch of our Family Safety Toolkit last year, we’re pleased to announce new resources:
Partnership with Childnet: We are proud to partner with Childnet, a leading UK organization dedicated to making the internet a safer place for children. Together, we are developing educational materials aimed at preventing the misuse of AI, such as the creation of deepfakes. These resources will be available to schools and families, providing valuable information on how to protect children from online risks. This partnership underscores our comprehensive approach to tackling non-consensual intimate imagery (NCII) risks, including through education for teens.
Minecraft “CyberSafe AI: Dig Deeper”: We are thrilled to announce the release of “CyberSafe AI: Dig Deeper,” a new educational game in Minecraft and Minecraft Education that focuses on the responsible use of AI. This game is designed to engage young minds and foster curiosity while teaching important lessons about AI in a safe and controlled game environment. Players will embark on exciting adventures, solving puzzles and challenges that highlight the ethical considerations of AI and prepare them to navigate real-world digital safety scenarios at home and at school. While players don’t engage with generative AI technology directly in the game, they work through challenges and scenarios that simulate the use of AI and learn how to use it responsibly. “Dig Deeper” is the fourth installment in a series of CyberSafe worlds from Minecraft, created in partnership with Xbox Family Safety, that have been downloaded more than 80 million times.
AI Guide for Older Adults: We are also proud to partner with Older Adults Technology Services (OATS) from AARP, whose programs and partners collectively engage over 500,000 older adults each year with free technology and AI training. As part of the partnership, OATS released an AI Guide for Older Adults that helps people age 50+ understand the benefits and risks of AI, including guidance on staying safe. Training for OATS call center staff to handle AI-related questions is also helping increase older adults’ confidence in their ability to use the technology and spot scams.
Additional resources for educators to help students navigate the digital world can be found here.
A deeper dive into this year’s Global Online Safety Survey findings
As the digital landscape evolves, we adapt our global survey questions to reflect these changes. This year, we took the opportunity to test people’s ability to identify AI-generated content using images from Microsoft’s “Real or Not” quiz, asking respondents about their confidence in spotting deepfakes before and after viewing a series of images. We found that 73% of respondents said spotting AI-generated images is hard, and only 38% of images were identified correctly. We also asked people about their concerns: the most common worries about generative AI included scams (73%), sexual or online abuse (73%) and deepfakes (72%).
Our research also shows that people worldwide continue to be exposed to a variety of online risks, with 66% exposed to at least one risk over the last year. You can find the full results, including additional data on teen and parent experiences and perceptions of life online here.
Reaffirming our commitment to online safety
Our approach at Microsoft is centered on empowering users by advancing safety and human rights. We know we have a responsibility to take steps to protect our users from illegal and harmful online content and conduct, as well as to contribute to a safer online ecosystem. We also have a responsibility to protect human rights, including critical values such as freedom of expression, privacy, and access to information. At Microsoft, we achieve this balance through carefully tailoring our safety interventions across our different consumer services, depending on the nature of the service and of the harm.
Our approach to advance online safety has always been grounded in privacy and free expression. We advocate for proportionate and tailored safety regulations, supporting risk-based approaches while cautioning against over-broad measures that hinder privacy or freedom of speech. We will continue to engage closely with policymakers and regulators around the world on ways to tackle the biggest risks, especially to children, in thoughtful ways: productivity software like Microsoft Word, for example, should not be subject to the same requirements as a social media service. And finally, we will continue our advocacy for modernized legislation to protect the public from abusive AI-generated content in support of a safer digital environment for all.
Global Online Safety Survey Methodology
Microsoft has published annual research since 2016 examining how people of varying ages use and view online technology. This latest consumer report is based on a survey of nearly 15,000 teens (ages 13-17) and adults, conducted this past summer in 15 countries, examining people’s attitudes and perceptions about online safety tools and interactions. Attitudes toward online safety vary by country. Full results can be accessed here.