What will get you removed from the suite
This is the list of things that get you quarantined, removed, or publicly flagged. Written for clarity: no ambiguous "community standards" language, no hidden tiers. What is on the list is enforced. What is not on the list is allowed. The same policy applies across the founder suite (viralr, networkr, outboxr). A removal on one surface removes you from all three.
1. What is prohibited
We group prohibited behaviour by severity. Severe violations skip the 24-hour notification step and go straight to permanent removal.
Severe. Immediate permanent removal
Child sexual abuse material (CSAM)
Any material depicting child sexual abuse is reported to NCMEC, the account is removed from the suite, the operator is permanently banned, and all associated data is preserved for law enforcement.
Credible threats of violence
Content threatening specific people or groups with physical harm is reported to relevant authorities and results in immediate permanent removal.
Doxxing and targeted harassment
Publishing private personal information to harass, intimidate, or enable targeted violence. Includes coordinated pile-on campaigns.
Malware / phishing distribution
Posting, linking to, or amplifying malicious software, credential-harvesting pages, scam shops, or social engineering content designed to defraud users.
Illegal market content
Drugs, weapons, stolen goods, fake IDs, counterfeit currency, or other material whose distribution is criminal in the target jurisdiction.
High. Quarantine then permanent removal on confirmation
Platform policy laundering
Using Viralr to bypass a ban or suspension on X, LinkedIn, Threads, TikTok, or any other connected platform. If a platform removes you, you do not get to post there through us.
Deceptive authenticity
Fake testimonials, fabricated customer quotes, invented endorsements, or Builder-log threads that cite commits that do not exist. The Builder-log engine checks SHAs.
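The commit check could be sketched like this. A minimal sketch only: the helper name, the regex, and the local-clone assumption are ours, not the production Builder-log engine.

```python
import re
import subprocess

# A full or abbreviated hex SHA (7-40 chars), per git convention.
SHA_RE = re.compile(r"^[0-9a-f]{7,40}$")

def commit_exists(repo_path: str, sha: str) -> bool:
    """Return True only if `sha` resolves to a real commit in the repo.

    Hypothetical sketch of the Builder-log check: a cited SHA must be
    well-formed hex and must resolve via `git cat-file -e` in a local
    clone of the cited repository.
    """
    if not SHA_RE.match(sha.lower()):
        return False
    result = subprocess.run(
        ["git", "-C", repo_path, "cat-file", "-e", f"{sha}^{{commit}}"],
        capture_output=True,
    )
    return result.returncode == 0
```

A thread citing a SHA that fails this check would be treated as a fabricated citation under this rule.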
Engagement farming through deception
Ragebait engineered to provoke outrage, fake personal stories presented as real, and misleading hooks that contradict the body of the post.
Undisclosed paid promotion
Sponsored content not labelled per the advertising standards of the applicable jurisdiction. If it's an ad, it must say so.
Misleading health, financial, or legal claims
Medical advice, investment promises, or legal shortcuts presented as verified when they are not. Viralr is not a medical, financial, or legal publisher.
Cross-suite abuse
Using outboxr email lists to seed harassment on viralr, or using networkr articles as doorway pages to funnel ad clicks. The suite shares a backend. Abuse on one surface is abuse on all three.
Coordinated inauthentic behaviour
Running multiple Viralr accounts to amplify a single message while presenting them as independent. Includes sock-puppet networks and artificial engagement rings.
Scraped or plagiarised content
Posting text, media, or data copied from third parties without license or substantial transformation. Applies to content you supply and content generated on your behalf that you choose to publish.
Standard. Quarantine with full review and appeal
Persistent quality-gate failures
Over 30% of pipeline runs failing pre-publish gates in a 30-day window indicates a configuration or strategy problem. The pipeline pauses pending review, not removal.
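The 30%-in-30-days gate above reduces to simple window arithmetic. A minimal sketch, assuming each run record carries a timestamp and a pass/fail flag (the field and function names are hypothetical):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class PipelineRun:
    finished_at: datetime   # when the run completed
    passed_gates: bool      # did it clear all pre-publish gates?

def should_pause(runs: list[PipelineRun], now: datetime,
                 window_days: int = 30, threshold: float = 0.30) -> bool:
    """Pause the pipeline when more than `threshold` of the runs in the
    trailing window failed their pre-publish gates. Hypothetical sketch
    of the 30%-in-30-days rule."""
    cutoff = now - timedelta(days=window_days)
    recent = [r for r in runs if r.finished_at >= cutoff]
    if not recent:
        return False
    failures = sum(1 for r in recent if not r.passed_gates)
    return failures / len(recent) > threshold
```

Note the strict inequality: exactly 30% failures does not trip the gate, matching "over 30%" above.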
Drift into prohibited categories
A connected site whose output gradually shifts away from its original audit. A dev-tool account pivoting to crypto-shilling, for example, triggers re-audit rather than immediate removal.
Platform warning thresholds
A single platform returning repeated warnings (rate limits, policy flags, account restrictions) for posts shaped by Viralr. Pipeline pauses that surface pending review.
Abuse of the appeals process
Frivolous or bad-faith appeals filed repeatedly to exhaust reviewer time. Three bad-faith appeals in 90 days triggers manual review of all future appeals from the operator.
2. Enforcement timeline
For standard and high-severity violations, this is the timeline. Severe violations skip directly to permanent removal.
- Freeze. Post queue frozen across every platform. Ad creative paused. Existing posts stay live. The account is not deleted; it is frozen.
- Notify. Within 24 hours, the operator receives an email containing the rule triggered, the specific evidence (post IDs, signal flags, evidence hashes), and a case number for appeal.
- Review. Appeals are processed by a human reviewer, and reviewer decisions are published with case number and reasoning. If you do not appeal within 48h, the case advances automatically.
- Remove. Account removed from the whole suite. A public entry is added with reason codes and evidence hashes. The decision is linkable, auditable, and (if new information emerges) reversible.
3. How to appeal
Every enforcement action comes with a case number. To appeal:
- Email heimlandr.io with the case number in the subject line.
- Describe why the flag is wrong. Include post URLs, screenshots, log references, or counter-evidence where relevant.
- A human reviewer (not the system that flagged you) evaluates the case within 48 hours of receipt.
- The reviewer's decision, reasoning, and any adjustments to our detection rules are published publicly in the transparency report with the case number.
Appeals do not automatically unfreeze the account. That depends on the reviewer's finding. Successful appeals restore full access and remove any public entry within 2 hours.
4. Our commitments back to you
- Evidence before action. We do not enforce on rumour, competitive report, or vibes. Every flag requires a signal logged by our detection system with a reproducible fingerprint.
- False positives cost us more than false negatives. Detection thresholds err toward not-flagging. Borderline cases go to human review before quarantine, not after.
- No competitive-report escalation. An account will not be flagged or removed because another account complained. Only signal-driven detection counts.
- Every overturned flag is published. When we are wrong, the transparency report says so, names the rule that misfired, and links the rule update that prevents recurrence.
- Self-policing first. Any Heimlandr-operated account that trips the same detection is quarantined under the same timeline. We hold our own accounts to the same standard.
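The "reproducible fingerprint" commitment above could be implemented as a content hash over a flag's evidence. A sketch under the assumption that SHA-256 over canonical JSON is the scheme; the actual fingerprinting scheme is not specified here.

```python
import hashlib
import json

def evidence_fingerprint(evidence: dict) -> str:
    """Hash a flag's evidence so identical inputs always yield the same
    fingerprint. Sorted keys and fixed separators canonicalise the JSON,
    making the hash reproducible regardless of dict ordering.
    The SHA-256-over-JSON scheme is an assumption, not the spec."""
    canonical = json.dumps(evidence, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Anyone holding the same evidence can recompute the fingerprint and check it against the published entry.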
5. Reporting another account
If you believe a Viralr account is violating this policy:
- Email heimlandr.io with the post URL and the specific policy clause you believe is violated.
- Include evidence: post URLs, screenshots, archive captures, or reproduction steps.
- We acknowledge receipt within 24 hours. Reports with clear evidence trigger a detection pass within 48 hours.
- Reports drive detection, not decisions. The same signal-based detection rules apply. We do not quarantine on your say-so, but a credible report often surfaces a signal our crawler missed.
For copyright (DMCA) concerns, reach us at heimlandr.io with the full DMCA notice components required by 17 U.S.C. §512(c)(3).
6. Repeat infringers
Operators whose accounts are removed for confirmed abuse are permanently blocked from registering new accounts under the same ownership. Detection uses multiple signals (email, payment instrument, infrastructure fingerprint) and (consistent with §4 above) errs toward false negatives. An operator who believes they were wrongly blocked can appeal under the same process in §3.
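A minimal sketch of the multi-signal match described above, assuming a block requires at least two independent signals to agree. The signal names and the two-signal threshold are assumptions, chosen to illustrate erring toward false negatives.

```python
def same_ownership(candidate: dict, blocked: dict, min_matches: int = 2) -> bool:
    """Treat a new registration as the same owner only when at least
    `min_matches` independent signals agree with a blocked operator.
    Requiring multiple matches errs toward false negatives: a single
    coincidental match (e.g. shared infrastructure) is not enough.
    Signal names are hypothetical."""
    signals = ("email_hash", "payment_fingerprint", "infra_fingerprint")
    matches = sum(
        1 for s in signals
        if candidate.get(s) and candidate.get(s) == blocked.get(s)
    )
    return matches >= min_matches
```

A match on email and payment instrument would block the registration; a lone infrastructure match would not.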
7. Changes
Material changes are announced by email to active tenants 14 days before taking effect. The same policy is mirrored on networkr.dev/acceptable-use and outboxr.dev/acceptable-use; all three update together.
8. Contact
Appeals: heimlandr.io. Abuse reports: heimlandr.io. Legal: heimlandr.io.
Acceptable Use Policy v1.0 · Published 2026-04-22 · Applies across the founder suite (viralr.dev, networkr.dev, outboxr.dev).