This safety and reporting policy explains how Skyline Media LLC (“we,” “us,” or “our”) maintains user safety, manages content moderation, and handles reports of potential violations of law or platform rules. It applies to all users and all versions of our platforms, including both web and mobile applications.
Our goal is to provide a lawful, respectful, and secure environment for live and interactive communication.
Purpose and Scope
This policy describes:
How users can report illegal, harmful, or inappropriate behavior;
How we review, moderate, and enforce our Community Guidelines;
How we cooperate with law enforcement and regulatory authorities; and
How we protect user safety while respecting privacy and due process.
This policy applies in addition to our Community Guidelines and Terms of Service. Together, these documents form the foundation of our trust and safety framework.
Safety Principles
We are committed to maintaining a safe platform through:
Prevention: Using technology and user education to deter harm before it occurs.
Detection: Monitoring and moderation systems that identify potential violations.
Response: Swift, fair action to address illegal, abusive, or unsafe conduct.
Transparency: Clear reporting mechanisms and accountability for enforcement actions.
Safety on the platform is a shared responsibility. Every user is expected to act lawfully and respectfully and to report suspected violations promptly.
Reporting a Violation
How to Report
If you experience or witness behavior that violates our Community Guidelines or applicable law, you may report it by:
Using the in-session Report button (available during or immediately after a chat); or
Emailing [email protected] with a description of the incident and the approximate date and time it occurred.
If you are in immediate danger or believe someone else is at risk of harm, contact local law enforcement before submitting a report.
What to Include
To help us investigate efficiently, please include:
The approximate date and time of the incident;
The type of interaction (video or text chat);
A brief description of what occurred; and
Any additional details that may help us locate the session in our moderation logs (for example, country, gender filter, or other session context).
Reports involving suspected minors, criminal activity, or credible threats to safety are reviewed immediately and may result in referral to law enforcement.
Moderation and Enforcement Process
We use automated systems, session monitoring, and trained moderation staff to detect and address potential violations of our Community Guidelines and applicable law. Because most sessions occur between randomly connected users without public profiles or visible usernames, moderation relies primarily on session data, device identifiers, and automated detection signals rather than user identity.
Moderation may occur during a live session or after a report has been filed. Limited metadata (such as timestamps, country or region, connection information, or moderation flags) may be retained temporarily to support investigations, enforce platform rules, or comply with legal obligations.
Moderation outcomes may include:
Immediate disconnection from a live chat;
Temporary access restrictions or cooldown periods;
Device- or connection-level blocking;
Account suspension or termination (if the user is registered); or
Referral to law enforcement or regulatory authorities when required by law.
Enforcement decisions are based on the nature and severity of the violation, as well as any pattern of repeated misconduct. Since usernames are not visible during chats, most moderation actions are applied at the session or device level. When a registered account is involved, enforcement may also apply to that account to prevent future misuse.
Appeals
If you believe a moderation or enforcement decision was made in error, you may appeal within 14 days by emailing [email protected].
Please include:
The approximate date and time of the enforcement action;
A brief explanation of why you believe the decision was incorrect; and
If you have a registered account, the email address or login method (e.g., Google or Apple) you used to access the platform.
Appeals are reviewed by a separate moderation team to ensure fairness. Because many users do not register accounts, we may rely on session or device information to locate and review the relevant moderation action. All decisions following review are final.
Cooperation with Law Enforcement
We cooperate with law enforcement and child protection authorities when necessary to prevent or respond to criminal conduct, including exploitation, trafficking, or child sexual abuse.
Requests for user information must be lawful, specific, and issued by a competent authority. We may act without a formal request only when there is a clear and immediate threat to human life or child safety, or a risk of serious physical harm.
Confirmed reports of child sexual abuse material (CSAM) or imminent threats of violence are referred without delay to the National Center for Missing & Exploited Children (NCMEC) and other relevant agencies. We maintain records of those referrals as required by law.
Data Handling and Privacy
Reports and moderation records are stored securely and accessed only by authorized personnel. We retain data only as long as necessary to:
investigate and resolve the report;
comply with legal obligations; or
protect user safety and platform integrity.
We handle all personal data in accordance with our Privacy Policy and applicable privacy laws.
Transparency and Accountability
We maintain internal logs of all moderation actions and, where required by law, publish periodic transparency reports summarizing enforcement activity.
Transparency reports may include aggregated data on:
number of user reports received;
types of violations detected;
actions taken; and
average response times.
We do not publish personally identifiable information in these reports.
Emergency Safety Response
If moderators or automated systems detect conduct that presents an imminent risk of harm, such as threats of self-harm or suicide, or credible threats of violence, we may take immediate steps to protect the safety of users and others. Depending on the situation, we may:
Contact relevant emergency or law enforcement authorities;
Terminate or restrict access from a specific device or connection to prevent further harm; and
Preserve and share limited session or connection data with competent authorities strictly for the purpose of protecting human life.
Because most users do not have registered accounts, our emergency response relies on available technical information (such as timestamps, connection identifiers, or session data) rather than personal identity. These actions are taken in good faith, in compliance with applicable law, and without waiving user privacy rights except as legally required.
Contact Information
For safety, reporting, or moderation matters, please contact the appropriate channel:
Abuse and Safety Reports: [email protected]
Appeals and Moderation Review: [email protected]
Law Enforcement Requests: [email protected]
General Support: [email protected]
Postal Correspondence: Skyline Media LLC
30 North Gould Street, Suite R, Sheridan, Wyoming 82801
All communications are handled confidentially by trained staff. We respond as promptly as possible, prioritizing reports that involve safety or legal risks.
Policy Updates
We may update this policy periodically to reflect changes in law, technology, or operational practice. Updates will be published on our website and within the platform. Continued use of the Service after updates take effect constitutes acceptance of the revised policy.