We would love for every member of our community to have a positive experience while using the WHO app. That means being as safe and secure as possible.
That's why we've built several robust safety features into the app, including age verification, facial recognition, AI-based content moderation, and human-assisted anonymized content moderation.
Connect with a real person
Both users who have matched on the app must keep their faces visible on camera during a video chat. If not, our content moderation features will automatically detect the problem and intervene. This ensures that you are always connected with a real person.
Manual and Artificial Intelligence Based Content Moderation
While we cannot control what each user does, we work hard to ensure that you're not exposed to offensive or unpleasant content on our platform. To keep our community safe, we have built tools to detect and stop content that violates our community guidelines, such as violence, nudity, or offensive images.
Reports from our users are also analyzed by our AI-based tools, with assistance from human moderators.
Age verification upon signing up
Our community is strictly for those who are 18 years of age or older. When you sign up - whether through Facebook, Google Play, the App Store, or by entering a phone number - you will need to verify your age.
Block and Manual Reports
In your chat settings, you can block a user you've added or a user who has messaged you directly. Once blocked, that user is added to your blacklist and can no longer send you messages.
You can also choose to "Block & Report" a user. This generates a report for the blocked user, which our automatic and manual content moderation systems will then assess before taking action to protect our community.