The Viralr Standards

What we enforce. What we reject. How you can verify.

Version 1.0 · Published 2026-04-22

This page describes what Viralr enforces on every post, thread, clip, and ad creative shipped through its pipeline. We treat it as a binding commitment, not marketing. Every claim ends with a way for you to verify it yourself. The same standards apply on networkr.dev (SEO) and outboxr.dev (email); the three surfaces share one backend and one gate system.

What we are

Viralr generates platform-native posts, threads, short-form clips, and paid-ad creative. We shape each angle for each feed, schedule for peak-engagement windows per audience, and pipe engagement signals back into the topic engine. We are AI-assisted. We disclose it on posts where the platform requires disclosure.

What we reject, by name
  • Invented specifics in posts. Numbers, customer names, revenue figures, or feature claims that are not already public on your site are stripped pre-publish.
  • Unverifiable factual claims. Extracted by a second-pass model and checked against live sources; low-confidence claims block publish.
  • Slop phrasing. The banned phrase filter blocks "leverage", "unlock", "game-changing", "truly", and other tells. Posts that trip the filter are regenerated or rejected.
  • Fake Builder-log claims. Threads that cite commit SHAs the engine cannot match against the real repo are rejected; a minimal version of this check is sketched after this list. You cannot fabricate a build log through Viralr.
  • Platform policy violations. Posts that trip the connected platform's own content rules (hate speech, sexual content, violent content, medical claims) are blocked before the outbound call.
  • Engagement-bait hooks. Hooks that contradict the body of the post (ragebait, ambiguity-on-purpose) are flagged by the hook/body consistency gate.
  • Strategy leakage from the operator's own site. Posts may only reference what is publicly visible on the operator's homepage and README. Drafts that reference not-yet-exposed material are logged, never published.
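
The Builder-log check is the most mechanical of these, so here is a minimal sketch of how it could work, assuming the engine holds a local clone of the operator's repo. The helper names are hypothetical; `git cat-file -e` is real Git plumbing that exits non-zero for unknown objects.

```ts
// Sketch: every commit SHA cited in a thread must resolve to a real commit
// in the operator's repo. Helper names are hypothetical.
import { execFileSync } from "node:child_process";

function shaExistsInRepo(repoPath: string, sha: string): boolean {
  try {
    // `git cat-file -e <sha>^{commit}` exits 0 only if the object exists
    // and is a commit; execFileSync throws on a non-zero exit.
    execFileSync("git", ["-C", repoPath, "cat-file", "-e", `${sha}^{commit}`]);
    return true;
  } catch {
    return false;
  }
}

function builderLogGate(repoPath: string, citedShas: string[]): boolean {
  // One unverifiable SHA rejects the whole thread.
  return citedShas.every((sha) => shaExistsInRepo(repoPath, sha));
}
```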

The pre-publish quality gates

Every post, thread, clip, and ad creative passes all of these before publish. Any single failure blocks publication. No manual override.
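
As a minimal sketch of that contract (all type and function names here are hypothetical, not Viralr's actual API): run every gate, log every result to the lineage, and publish only if all pass.

```ts
// Sketch of the all-or-nothing gate contract. Every gate runs so the
// lineage page can show each result, but a single failure blocks publication.
type Draft = { platform: string; hook: string; body: string };
type GateResult = { gate: string; pass: boolean; details?: string };
type Gate = (draft: Draft) => GateResult;

function runGates(draft: Draft, gates: Gate[]): { publish: boolean; results: GateResult[] } {
  const results = gates.map((gate) => gate(draft));
  return { publish: results.every((r) => r.pass), results };
}
```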

01 · Platform-native shape

X gets a threaded hook. LinkedIn gets a narrative. Threads gets a single scroll-stop. Bluesky gets a quieter version. Each platform has its own shape spec, and the post must match.
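
A shape spec can be as small as a per-platform record. The character caps below are the platforms' public limits, but the spec shape and names are illustrative, not Viralr's actual format:

```ts
// Illustrative per-platform shape specs; the real specs are versioned at
// /standards/<gate-name>.
const shapeSpecs: Record<string, { maxChars: number; threaded: boolean }> = {
  x: { maxChars: 280, threaded: true },          // threaded hook
  linkedin: { maxChars: 3000, threaded: false }, // single narrative
  threads: { maxChars: 500, threaded: false },   // one scroll-stop
  bluesky: { maxChars: 300, threaded: false },   // quieter, single post
};

// A post is an array of segments; multi-segment posts need a threaded platform.
function matchesShape(platform: string, segments: string[]): boolean {
  const spec = shapeSpecs[platform];
  if (!spec) return false;
  if (!spec.threaded && segments.length > 1) return false;
  return segments.every((s) => s.length <= spec.maxChars);
}
```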

02 · Voice match

Every post is scored against the learned voice profile (persona, tone, cadence, banned words). Drift beyond threshold blocks publish.
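
One way to read "drift beyond threshold", assuming the voice profile reduces to an embedding (the real profile also covers persona, tone, cadence, and banned words; the threshold is hypothetical):

```ts
// Sketch: score a draft against the learned voice profile via cosine
// similarity of embeddings. Representation and threshold are hypothetical.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const VOICE_THRESHOLD = 0.85; // hypothetical; similarity below this blocks publish

function voiceMatchGate(draftEmbedding: number[], profileEmbedding: number[]): boolean {
  return cosine(draftEmbedding, profileEmbedding) >= VOICE_THRESHOLD;
}
```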

03 · Hook / body consistency

The hook must actually describe what the body delivers. Ragebait, misleading openers, and ambiguity-on-purpose are detected and rejected.
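
A plausible implementation is an entailment-style check: does the body support what the hook claims? The scorer below is a stub, purely to show the contract:

```ts
// Sketch of the consistency gate, assuming an entailment scorer exists.
// Body as premise, hook as hypothesis: the body must deliver the hook.
declare function entailmentScore(premise: string, hypothesis: string): Promise<number>; // 0..1, hypothetical stub

const CONSISTENCY_THRESHOLD = 0.7; // hypothetical

async function hookBodyGate(hook: string, body: string): Promise<boolean> {
  return (await entailmentScore(body, hook)) >= CONSISTENCY_THRESHOLD;
}
```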

04 · Fact check

Claims are extracted by a second-pass model and verified against live sources; low-confidence claims block publish. Source citations are attached to the lineage.
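
The flow, with every model call stubbed out (extraction, verification, and the confidence threshold are all hypothetical here):

```ts
// Sketch of the fact-check gate: extract claims, verify each against live
// sources, block on low confidence, attach citations to the lineage.
type Claim = { text: string; confidence: number; sources: string[] };
declare function extractClaims(body: string): Promise<Claim[]>;          // second-pass model (stub)
declare function verifyAgainstLiveSources(claim: Claim): Promise<Claim>; // stub

const MIN_CONFIDENCE = 0.9; // hypothetical threshold

async function factCheckGate(body: string): Promise<{ pass: boolean; citations: string[] }> {
  const claims = await Promise.all((await extractClaims(body)).map((c) => verifyAgainstLiveSources(c)));
  return {
    pass: claims.every((c) => c.confidence >= MIN_CONFIDENCE),
    citations: claims.flatMap((c) => c.sources), // recorded on the post's lineage
  };
}
```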

05 · Exposure-manifest compliance

No post references a product feature, tech decision, or claim that is not already publicly visible on the operator's own homepage or README. Rejected drafts are logged, never published.
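
In sketch form, this is a set-membership check against a manifest built from the operator's public homepage and README (the reference extractor is a stub; all names hypothetical):

```ts
// Sketch: every product feature, tech decision, or claim a draft references
// must already be in the public exposure manifest.
declare function extractReferences(draft: string): string[]; // stub

function exposureGate(draft: string, publicManifest: Set<string>): { pass: boolean; leaked: string[] } {
  const leaked = extractReferences(draft).filter((ref) => !publicManifest.has(ref));
  return { pass: leaked.length === 0, leaked }; // leaks are logged, never published
}
```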

06 · Slop-phrase filter

Per-account configurable blocklist. Common slop phrases ("leverage", "unlock", "truly", "game-changing") can be banned and enforced at generation.
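
The filter itself is simple enough to show almost exactly; only the default list and function name below are illustrative:

```ts
// Sketch of the per-account blocklist: case-insensitive, word-boundary
// matches so "unlock" trips the filter but "unlocked" does not.
const defaultBlocklist = ["leverage", "unlock", "truly", "game-changing"];

function slopGate(post: string, blocklist: string[] = defaultBlocklist): string[] {
  const escape = (s: string) => s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  // Returns the phrases that tripped; a non-empty result means regenerate or reject.
  return blocklist.filter((phrase) => new RegExp(`\\b${escape(phrase)}\\b`, "i").test(post));
}
```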

07 · Platform-policy pre-check

Before the outbound call, posts are checked against the target platform's content rules. Likely violations block the call so the platform does not see a rejected post.
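
Sketched with a stubbed classifier; the flag taxonomy mirrors the categories named earlier, and everything else is hypothetical:

```ts
// Sketch: classify the post against the target platform's content rules
// before the outbound API call. Classifier is a stub.
type PolicyFlag = "hate-speech" | "sexual-content" | "violent-content" | "medical-claims";
declare function classifyAgainstPolicy(post: string, platform: string): Promise<PolicyFlag[]>; // stub

async function policyPrecheck(post: string, platform: string): Promise<boolean> {
  const flags = await classifyAgainstPolicy(post, platform);
  return flags.length === 0; // any likely violation blocks the call entirely
}
```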

08 · Send-window validation

Scheduling lands inside the peak-engagement window for the connected audience. Out-of-window posts are queued, not fired immediately.
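
A minimal version of the queue-rather-than-fire rule, assuming a window expressed as UTC hours (the real windows are learned per audience):

```ts
// Sketch: send now if inside the peak-engagement window, otherwise queue
// for the next window start. Window shape is hypothetical.
type Window = { startHourUtc: number; endHourUtc: number };

function scheduleSend(now: Date, window: Window): { action: "send" | "queue"; at: Date } {
  const hour = now.getUTCHours();
  if (hour >= window.startHourUtc && hour < window.endHourUtc) {
    return { action: "send", at: now };
  }
  const next = new Date(now);
  next.setUTCHours(window.startHourUtc, 0, 0, 0);
  if (hour >= window.endHourUtc) next.setUTCDate(next.getUTCDate() + 1); // today's window already closed
  return { action: "queue", at: next };
}
```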

The post-publish audit

After publish, each post is monitored through the connected platform's engagement surface:

  • Within 24 hours, we pull reach, engagement, and reply volume. Posts that outperform are bookmarked for re-amplification across the suite.
  • Posts that fall more than two standard deviations below the account's engagement mean trigger a voice review; a minimal version of this check is sketched after this list. Recurring misses retune the voice profile.
  • Best-performing threads feed networkr's next SEO article on the same topic, and outboxr's newsletter lineup. The three surfaces compound.
  • Platform-side issues (rate limits, policy warnings, account restrictions) are reported to the operator, never silently retried.
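
The underperformance trigger is plain arithmetic (function and parameter names here are illustrative):

```ts
// Sketch: flag a post that lands more than two standard deviations below
// the account's engagement mean, triggering a voice review.
function underperforms(engagement: number, accountHistory: number[]): boolean {
  const n = accountHistory.length;
  const mean = accountHistory.reduce((sum, x) => sum + x, 0) / n;
  const stdDev = Math.sqrt(accountHistory.reduce((sum, x) => sum + (x - mean) ** 2, 0) / n);
  return engagement < mean - 2 * stdDev;
}
```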

The result: we iterate based on each platform's own judgment, not ours. Every iteration is logged and publicly viewable on the post's lineage page.

Safety enforcement timeline

Accounts join the suite only after passing a trust audit. Admitted accounts are re-audited continuously. When an account trips detection, this timeline runs publicly with a case number.

T+0 -> T+1h · Quarantine

Post queue frozen across every platform. Ad creative paused. Existing posts remain live. The account is not deleted. It is frozen.

T+24h · Notification

The operator receives an email with specific signal flags, evidence hashes, and a case number for appeal.

T+48h · Human review + appeal

Appeals are processed by a human reviewer. Reviewer decisions are logged publicly with case number and reasoning.

On confirmed abuse · Permanent removal

Account removed from the whole suite. Public entry with reason codes and evidence hashes. Decision linkable, auditable, reversible.

False positives cost us more than false negatives. Detection thresholds err toward not-flagging. Every flag that gets appealed and reversed is published in the transparency report as an error we made, not hidden.

What we publish, publicly, for accountability
  • Per-post lineage (prompts, gates, model, timestamps, engagement signals, reshape log): /posts/<id>/lineage
  • Flagged accounts with reason codes: /spam-index
  • Quarterly transparency report (accounts admitted, quarantined, removed, appeals): /transparency-report
  • Every quality-gate specification, versioned: /standards/<gate-name>
  • Every model change that affects generation: /changelog?tag=model

Platform policy alignment

Each entry pairs a platform requirement with the Viralr mechanism that enforces it. This is the line-by-line proof that we build to the specs the platforms already publish.

  • Platform automation rules (X, LinkedIn, Meta): OAuth scopes limited to post + schedule · no DM automation · no follow-follower scrapers
  • Spam policies (coordinated inauthentic behaviour): per-account voice uniqueness · no sock-puppet amplification · public lineage
  • Advertising standards: paid ads always labelled · fact-check gate applies to ad creative · no third-party audience data purchased
  • AI-content disclosure (where required): disclosure inserted per platform policy · logged on the post's lineage
  • Rate limits: cadence-gated per platform per account · queues respect published API limits
  • Authenticity (Builder-log mode): Git-verifiable commit SHAs required · fake build-log claims cannot pass the gate

How to verify any of this

Pick any post from any account in the suite. Open its /posts/<id>/lineage page. You will see the following (one possible record shape is sketched after this list):

  • The signals used to pick the topic
  • The model and pipeline version
  • Each quality gate's result (pass / fail + details)
  • The hook/body consistency report
  • The publish timestamp, engagement signals, and any post-publish events
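
As a reading aid only, here is a hypothetical record shape covering those fields; the authoritative schema is whatever the lineage page itself serves:

```ts
// Hypothetical shape of a /posts/<id>/lineage record; field names are
// illustrative, not Viralr's published schema.
interface Lineage {
  topicSignals: string[];                                     // signals used to pick the topic
  model: string;                                              // model version
  pipelineVersion: string;
  gates: { gate: string; pass: boolean; details?: string }[]; // per-gate results
  hookBodyReport: { consistent: boolean; notes?: string };
  publishedAt: string;                                        // ISO timestamp
  engagement: Record<string, number>;                         // per-signal counts
  postPublishEvents: string[];                                // e.g. re-amplification, voice review
}
```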

If anything on this page doesn't match what you see in a lineage page, reach us via heimlandr.io. Confirmed inconsistencies are corrected within 72 hours.

What this page doesn't promise
  • We do not promise engagement or follower growth. We promise that we do not violate platform policies and that we publish the audit trail to prove it.
  • We do not promise zero errors. We promise public accountability when we make them.
  • We do not promise every post will land. We promise that we respond to each platform's verdicts and iterate.

Standards v1.0 · Published 2026-04-22 · Applies across the founder suite (viralr.dev, networkr.dev, outboxr.dev).