Sunday, November 17, 2024
Purpose:
The Content Moderation Policy outlines how content is reviewed, flagged, and moderated across the Oxy ecosystem. Our goal is to create a balanced environment where users feel safe to express themselves while ensuring community standards are upheld.
1. Moderation Methods
Oxy uses a combination of AI-based tools and human moderators to review content. AI filtering helps identify potentially harmful or inappropriate content at scale, while human moderators provide context-sensitive reviews to ensure fairness.
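As a rough illustration of how such a hybrid approach can be wired together, the sketch below routes content based on an AI-assigned risk score. The score, thresholds, and routing rules are illustrative assumptions, not Oxy's actual configuration.

```python
# Illustrative sketch of a hybrid AI + human triage pipeline.
# Thresholds and routing rules are hypothetical, not Oxy's real config.

AUTO_FLAG_THRESHOLD = 0.95     # assumed: very high confidence -> automatic flag
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: medium confidence -> human review queue

def triage(content_id: str, ai_risk_score: float) -> str:
    """Route a piece of content based on an AI-assigned risk score (0.0-1.0)."""
    if ai_risk_score >= AUTO_FLAG_THRESHOLD:
        return "auto_flag_for_removal"   # clear-cut cases handled at scale
    if ai_risk_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review_queue"      # ambiguous cases get context-sensitive review
    return "no_action"                   # low-risk content is left alone

print(triage("post-123", 0.72))  # -> human_review_queue
```

The point of the two-tier split is that automation handles unambiguous volume while human judgment is reserved for the cases where context matters.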
2. Content Flagging by Users
Users can flag content that they believe violates community guidelines. When content is flagged, it is automatically sent for review by our moderation team. We take all reports seriously and encourage the community to use flagging responsibly.
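A minimal sketch of that flow, assuming hypothetical field names and an in-memory queue, might look like this:

```python
# Minimal sketch of a user flagging flow: a report is recorded and
# automatically enqueued for moderator review. Field names are assumptions.

from dataclasses import dataclass
from collections import deque

@dataclass
class FlagReport:
    content_id: str
    reporter_id: str
    reason: str  # e.g. "harassment", "spam"

review_queue: deque = deque()

def flag_content(content_id: str, reporter_id: str, reason: str) -> None:
    """Record a user report and send it straight to the moderation queue."""
    review_queue.append(FlagReport(content_id, reporter_id, reason))

flag_content("post-123", "user-42", "harassment")
print(len(review_queue))  # -> 1
```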
3. Review and Removal Process
When flagged content is reviewed, it may be removed if it violates Oxy's Community Guidelines. Depending on the nature of the violation, the responsible user may receive a warning, a temporary suspension, or a permanent ban.
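One way to picture the warning / suspension / ban ladder is as an explicit severity-to-action mapping. The severity levels and escalation rule below are assumptions for illustration only.

```python
# Hypothetical mapping from violation severity to enforcement action,
# mirroring the warning / temporary suspension / permanent ban ladder.

from enum import Enum

class Severity(Enum):
    MINOR = 1
    SERIOUS = 2
    SEVERE = 3

def enforcement_action(severity: Severity, prior_violations: int) -> str:
    """Choose an action; repeat offenses escalate the response (assumed rule)."""
    if severity is Severity.SEVERE or prior_violations >= 3:
        return "permanent_ban"
    if severity is Severity.SERIOUS or prior_violations >= 1:
        return "temporary_suspension"
    return "warning"

print(enforcement_action(Severity.MINOR, prior_violations=0))  # -> warning
```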
4. Appeals Process
If users believe their content was wrongly removed or if their account was unjustly penalized, they have the right to appeal. Appeals are reviewed by a different moderator to ensure impartiality. Decisions on appeals will be communicated to the user in a timely manner.
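The impartiality requirement amounts to a routing constraint: the appeal must land with someone other than the original decision-maker. A minimal sketch, with hypothetical names:

```python
# Sketch of impartial appeal assignment: the appeal is routed to any
# moderator other than the one who made the original decision.
# Names and structures are illustrative assumptions.

import random

def assign_appeal_reviewer(original_moderator: str, moderators: list) -> str:
    """Pick a reviewer for an appeal, excluding the original decision-maker."""
    eligible = [m for m in moderators if m != original_moderator]
    if not eligible:
        raise RuntimeError("no impartial reviewer available")
    return random.choice(eligible)

team = ["mod-a", "mod-b", "mod-c"]
print(assign_appeal_reviewer("mod-a", team))  # -> mod-b or mod-c, never mod-a
```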
5. Transparency and Criteria for Moderation
We believe in transparency regarding our moderation practices. Our moderators follow a strict set of criteria to evaluate content, including its context, intent, and overall impact on the community. We are committed to reducing bias and maintaining fair decision-making.
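For illustration, those criteria could be captured as an explicit checklist so every reviewer weighs the same dimensions. The structure and decision rule below are assumptions, not Oxy's internal tooling.

```python
# One way to represent the stated review criteria (context, intent,
# community impact) as an explicit checklist. Purely illustrative.

from dataclasses import dataclass

@dataclass
class ReviewAssessment:
    context_ok: bool      # is the content acceptable given its surrounding context?
    intent_benign: bool   # does the apparent intent fall within guidelines?
    low_impact: bool      # is the impact on the community negligible?

    def violates_guidelines(self) -> bool:
        # Assumed rule: a violation means failing at least one dimension.
        return not (self.context_ok and self.intent_benign and self.low_impact)

print(ReviewAssessment(True, True, False).violates_guidelines())  # -> True
```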
6. Content Types Reviewed
The moderation process includes reviewing posts, comments, images, videos, and links shared across all Oxy services. Any content that violates our Community Guidelines or legal requirements will be acted upon.
7. Communication with Users
Users whose content has been moderated will receive a notification explaining the reason for the action taken. We aim to communicate every moderation decision clearly so that affected users understand our processes.
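A sketch of the kind of information such a notification might carry follows; all field names are assumptions.

```python
# Sketch of a moderation notice sent to the affected user.
# Field names and structure are assumed for illustration.

import json
from datetime import datetime, timezone

def build_moderation_notice(content_id: str, action: str, reason: str) -> str:
    """Assemble a user-facing notification explaining a moderation action."""
    return json.dumps({
        "content_id": content_id,
        "action": action,                 # e.g. "removed", "warning"
        "reason": reason,                 # guideline the content violated
        "appeal_available": True,         # per the appeals process above
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

print(build_moderation_notice("post-123", "removed", "harassment"))
```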
8. Moderation Timeline
We strive to review flagged content as swiftly as possible. While most flagged content is reviewed within 24 hours, complex cases requiring deeper review may take slightly longer.
9. Preventive Measures
To minimize harmful content, we use preventive measures such as proactive monitoring tools and user education about community standards. These measures help maintain a safe environment and reduce violations before they occur.
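As one example of proactive monitoring, a lightweight pre-publication screen can check drafts before they go live. The blocklist and matching rule here are placeholder assumptions, not a description of Oxy's actual tooling.

```python
# Illustrative pre-publication screen: a lightweight check run before
# content goes live. The blocklist pattern is a placeholder assumption.

import re

BLOCKED_PATTERNS = [re.compile(r"\bexample-banned-term\b", re.IGNORECASE)]

def passes_pre_screen(text: str) -> bool:
    """Return False if the draft matches any known harmful pattern."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)

print(passes_pre_screen("hello world"))  # -> True
```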
10. Updates to the Policy
This moderation policy may be updated to reflect new challenges or improve existing practices. We encourage users to periodically review these policies to stay informed about how we keep Oxy a secure and welcoming space for all.