# Child Safety Standards and Anti‑Exploitation Policy

Version: 1.1.0
Last updated: 2026-01-09

This document sets out VerseLab’s published standards for protecting children and preventing child sexual abuse and exploitation (CSAE) across our products and services. These standards satisfy Google Play’s Child Safety requirements for published standards and apply to all users, content, and activities within VerseLab.

## 1) Scope and Definitions

- Child/Minor: Any person under the age of 18.
- CSAM/CSAE: Child Sexual Abuse Material and any sexual exploitation of a minor, including grooming, solicitation, sexualization, trafficking, or predatory behavior.
- Content: Any text, image, audio, video, profile information, links, and messages shared or created in the app.

These standards apply globally and complement applicable laws in local jurisdictions. Where local law is stricter, we comply with the stricter requirement.

## 2) Zero‑Tolerance Policy

We absolutely prohibit:

- Any CSAM or sexual content involving a minor (real or simulated), including any depiction, fantasy, or sexualization of minors.
- Grooming, solicitation, extortion, trafficking, or attempts to arrange contact for sexual purposes with a minor.
- Nudity or sexualized imagery where a minor is involved or appears to be a minor.
- Attempts to use the platform to distribute, request, or trade CSAM.

Violations result in immediate content removal, account suspension or termination, preservation of relevant data, and reporting to appropriate authorities as legally required.

## 3) Child‑Safety‑by‑Design

- A minimum age requirement is enforced by our Terms and in‑app gating mechanisms.
- We minimize data collection for younger users and limit features that could expose them to risk (e.g., restricted discovery and messaging patterns when necessary).
- Sensitive features (DMs, media uploads) include protective defaults, rate limiting, and abuse detection signals.
- Public‑facing content is moderated and may be limited in visibility pending review when abuse signals are triggered.

## 3A) Age Signals and Enhanced Safety (Under 18)

- Age Signals: At sign‑up we collect a birthdate to determine whether the user is a minor and apply age‑appropriate defaults and protections. Where supported, we may use platform‑level age signals (e.g., Apple/Google account age) to help reduce falsification. We do not require government IDs.
- Parental Notice (plain language): We use the birthdate solely to set age‑appropriate safety and privacy settings. It is not shown publicly.
- Enhanced Safety Policy (minors) — see the illustrative sketch after this list:
  - Privacy locks: Minors with Enhanced Safety enabled cannot broaden certain content visibility (e.g., friends‑only content cannot be made public) unless Enhanced Safety is disabled or the user reaches adulthood, consistent with local law.
  - Messaging limits: We may limit who can send or receive DMs with minors, require mutual follows, and apply strict rate limits to reduce unwanted contact.
  - Discovery limits: Minors may be excluded from certain public rankings/recommendations and are shown less broadly by default.
  - Comments and interactions: Tighter defaults for who can comment on or mention minors; expanded block/report prompts.
  - Ads: If ads are present, minors receive contextual or limited ads only; behavioral profiling is disabled as required by youth privacy laws.
  - Immediate block effects: When a minor blocks a user, the blocked user and their content are hidden promptly across surfaces (e.g., feed and story slider) to reduce exposure.
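For illustration only, the sketch below shows how an age signal and privacy lock of the kind described in this section could be expressed: a collected birthdate drives age‑appropriate defaults, and the Enhanced Safety flag prevents broadening friends‑only content to public. All names (`SafetySettings`, `defaultsFor`, `canChangeVisibility`) are hypothetical and do not describe VerseLab’s actual implementation.

```kotlin
import java.time.LocalDate
import java.time.Period

// Illustrative sketch only: hypothetical types and names, not VerseLab's actual code.

enum class Visibility { FRIENDS_ONLY, PUBLIC }

data class SafetySettings(
    val enhancedSafety: Boolean,            // privacy locks plus stricter minor defaults
    val dmsLimitedToMutuals: Boolean,       // messaging limits (Section 3A)
    val excludedFromPublicDiscovery: Boolean,
    val personalizedAdsEnabled: Boolean
)

fun isMinor(birthDate: LocalDate, today: LocalDate = LocalDate.now()): Boolean =
    Period.between(birthDate, today).years < 18

// Age-appropriate defaults applied at sign-up from the collected birthdate.
fun defaultsFor(birthDate: LocalDate): SafetySettings {
    val minor = isMinor(birthDate)
    return SafetySettings(
        enhancedSafety = minor,
        dmsLimitedToMutuals = minor,
        excludedFromPublicDiscovery = minor,
        personalizedAdsEnabled = !minor      // minors receive contextual or limited ads only
    )
}

// Privacy lock: with Enhanced Safety active, a request to broaden friends-only
// content to public is rejected rather than applied.
fun canChangeVisibility(settings: SafetySettings, from: Visibility, to: Visibility): Boolean =
    !(settings.enhancedSafety && from == Visibility.FRIENDS_ONLY && to == Visibility.PUBLIC)
```

The same guard is what surfaces in the product behaviors listed under “Alignment with Enhanced Safety Functionality in the App” below.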
## 4) Proactive Detection and Moderation

- Automated signals: Text/image/video heuristics, keyword and pattern checks, and rate anomalies are used to surface suspicious behavior for review. We do not intentionally view or download illegal content; hashes and metadata are used where permissible to detect known CSAM.
- Human review: Trained moderators review escalations promptly and follow strict handling and privacy protocols.
- Recidivism mitigation: Device, IP, and account signals may be used to prevent the return of bad actors, consistent with applicable laws and platform policies.

## 5) Reporting Mechanisms (In‑App, Email, and Web Form)

Users can report any content or user via:

- In‑app: Tap ••• (More) → Report → Select “Child safety or exploitation.”
- Email: safety@verselab.app
- Web form: https://www.verselabapp.com/report.html

Reports should include URLs, usernames, timestamps, and any additional context. We accept reports from users, guardians, NGOs, and law enforcement.

## 6) Response and Enforcement

Upon receiving a report or automated escalation, we:

1. Triage immediately and prioritize CSAM/CSAE issues.
2. Remove or block access to content that violates policy.
3. Restrict, suspend, or permanently ban involved accounts.
4. Preserve evidence securely (hashes, metadata, and minimal necessary content pointers) for legal obligations.
5. Where legally required or appropriate, report to relevant authorities (e.g., NCMEC in the U.S., IWF or local equivalents) and cooperate with law enforcement.
6. Notify reporters, when appropriate and safe, that action has been taken (we do not share sensitive details).

## 7) Appropriate Action to Address CSAM

“Appropriate action” includes immediate removal, account termination, evidence preservation, and timely reporting to competent authorities or hotlines. We block re‑uploads (e.g., via hashing where legally permissible, sketched below) and take steps to prevent evasion.
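The following is a minimal, purely illustrative sketch of hash‑based re‑upload blocking: only a cryptographic hash of actioned content is retained (consistent with Section 8) and compared against new uploads. The names (`ReuploadGuard`, `recordRemoval`, `isKnownReupload`) are hypothetical; production matching against known CSAM typically relies on vetted industry tooling and perceptual hashing rather than the exact‑match set shown here.

```kotlin
import java.security.MessageDigest

// Illustrative sketch only: hypothetical names, not VerseLab's actual code.
object ReuploadGuard {
    // Hashes of previously actioned content; the content itself is not retained (Section 8).
    private val blockedHashes = mutableSetOf<String>()

    private fun sha256(bytes: ByteArray): String =
        MessageDigest.getInstance("SHA-256").digest(bytes).joinToString("") { "%02x".format(it) }

    // Record a hash when content is removed, so identical re-uploads can be blocked.
    fun recordRemoval(content: ByteArray) {
        blockedHashes += sha256(content)
    }

    // Returns true if an incoming upload exactly matches previously actioned content.
    fun isKnownReupload(upload: ByteArray): Boolean =
        sha256(upload) in blockedHashes
}
```

An exact cryptographic match only catches byte‑identical re‑uploads, which is why Section 4 also references broader heuristics and metadata signals.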
## 8) Data Handling and Privacy

- We do not knowingly store, download, or distribute CSAM. If identified, we preserve only what is required under law (e.g., cryptographic hashes, minimal metadata) to enable lawful reporting and investigation.
- Access to escalated cases is strictly limited to trained personnel under need‑to‑know and audit controls.
- We retain records only for as long as needed for legal compliance and platform integrity.
- For minors, we further minimize data collection and retention consistent with delivering core functionality and safety protections.

## 9) User Education and Guardian Guidance

- We provide plain‑language tips and links to trusted resources on online safety for minors.
- We encourage guardians to report suspicious behavior promptly via in‑app tools, email, or the web form.

## 10) Appeals

- If action is taken against your content or account and you believe it was in error, you may appeal via the in‑app Help & Support page or safety@verselab.app.
- For safety reasons, we may restrict appeals for the most severe violations involving CSAE.

## 11) Third‑Party Services and Ads

- All third‑party SDKs and ad partners must comply with equivalent child safety standards.
- We block ads and recommendations that sexualize minors or could reasonably be considered exploitative.
- For minors, we disable personalized advertising and permit contextual or non‑personalized ads only, where applicable.

## 12) Transparency and Accountability

- We may publish transparency notes summarizing actions taken against child‑safety violations, respecting privacy and legal constraints.
- We conduct periodic internal audits and update these standards as laws and best practices evolve.
- All employees and contractors with potential exposure to escalated content receive role‑appropriate safety training.

## 13) Alignment with Enhanced Safety Functionality in the App

The following in‑product behaviors implement these standards:

- Privacy setting guard: Minors with Enhanced Safety cannot change friends‑only poems to public while the guard is active.
- DM protections: Rate limits and mutual‑connection requirements may apply by default to minors.
- Faster block propagation: Blocking a user removes them from the feed and story slider immediately.
- Safety surfacing: Inline education, clearer report reasons, and a consistent “Child safety or exploitation” option in report flows.

## 14) Contact

- In‑app: Report flow via ••• (More) → Report → “Child safety or exploitation.”
- Email: safety@verselab.app
- Web form: https://www.verselabapp.com/report.html
- Urgent law‑enforcement contact: Please use official channels; we will cooperate promptly.

## 15) Alignment with Global Norms

These standards are informed by guidance from NCMEC, IWF, industry codes of conduct, and the Google Play Developer Program policies. We adapt promptly to new legal requirements and regional best practices.

---

Changelog

- 1.1.0 (2026‑01‑09): Added Age Signals, Parental Notice, and Enhanced Safety policy and functionality alignment.
- 1.0.0 (2025‑11‑07): Initial publication of VerseLab Child Safety Standards.