Centuries Mutual Trust
How the trust system works
Trust is not a single badge—it is an in-house rating and decision layer that runs across roommate matching, listings, messaging, documents, and claims. This page walks through each major component in depth. Every block below is a full section you can read top to bottom; scroll to move through the story in order.
The system is built for transparency toward members: you should understand what signals feed your score, how scores influence what you see, and what happens when something goes wrong. Numeric outputs are paired with policies and human review where stakes are high—especially around housing, payments, and safety.
Purpose, boundaries, and design principles
The trust system exists to reduce harm in real-world outcomes: bad roommates, fraudulent listings, abandoned obligations, and repeated platform abuse. It is not a credit score in the banking sense, though some inputs may resemble financial hygiene where we offer payment-backed products. It is a brokerage-context trust model tuned for people sharing housing, documents, and money on Centuries Mutual.
Fairness. Where possible, scores emphasize behaviors members can influence over time: completing identity steps, honoring agreements, resolving disputes in good faith, and keeping communications inside secure channels. We avoid “secret sauce” punishment without remedy—serious penalties tie to clear policy violations or sustained risk signals.
Proportionality. A first-time friction is not the same as a permanent ban. The engine surfaces graduated responses: warnings, feature throttles, manual review, or hard blocks when automated detection crosses risk thresholds aligned with legal and safety duties.
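The graduated ladder above can be sketched as a simple decision function. This is an illustrative sketch only: the thresholds, action names, and the `risk`/`prior_strikes` inputs are invented for the example, not Centuries Mutual's actual policy values.

```python
def graduated_response(risk: float, prior_strikes: int) -> str:
    """Map a risk estimate in [0, 1] plus account history to an escalating action.

    Thresholds are hypothetical; the point is the ordering: first-time
    friction gets a warning, only sustained or severe risk gets a block.
    """
    if risk >= 0.9:
        return "hard_block"        # clear legal/safety duty: block immediately
    if risk >= 0.7 or prior_strikes >= 3:
        return "manual_review"     # a human decides before anything permanent
    if risk >= 0.4 or prior_strikes >= 1:
        return "feature_throttle"  # limit high-stakes actions, keep account usable
    if risk >= 0.2:
        return "warning"           # first-time friction is not a permanent ban
    return "no_action"
```

Note that history escalates the response even at low instantaneous risk: a member with repeated strikes is routed to manual review rather than silently warned again.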
Explainability at the product level. Internally, models blend rules with statistical signals. Externally, you see summaries (“verification status”, “recent completion”, “community standing”) rather than raw coefficients—so guidance stays human-readable while engineering retains room to improve precision.
The 1–300 trust scale
Members receive a trust score on a 1–300 integer scale. The scale is wide enough to separate “new but clean” profiles from long-tenured, highly reliable members without collapsing everyone into a narrow band. The number is produced by our recommendation and risk stack: rule checks, verified attributes, transaction history on the platform, dispute outcomes, and behavioral signals (such as spam patterns or policy breaches).
Lower band (approx. 1–100). New accounts, incomplete verification, or elevated risk markers. Product surfaces may limit high-stakes actions until milestones are cleared (for example, identity confirmation or first successful lease completion).
Mid band (approx. 101–200). Typical active members with stable participation. Adequate for most roommate searches, messaging, and standard document flows.
Upper band (approx. 201–300). Strong pattern of trustworthy completions, longer positive history, and fewer contested events. These members may receive better placement in discovery, broader eligibility for premium tools, or faster automated approvals where policy allows—always subject to manual override in edge cases.
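The three bands can be expressed as a small classifier over the 1–300 integer scale. The band boundaries below come straight from this section; the function name and error handling are illustrative.

```python
def trust_band(score: int) -> str:
    """Classify a 1-300 trust score into the bands described above."""
    if not 1 <= score <= 300:
        raise ValueError("trust score must be an integer in 1-300")
    if score <= 100:
        return "lower"   # new accounts, incomplete verification, elevated risk
    if score <= 200:
        return "mid"     # typical active members with stable participation
    return "upper"       # strong pattern of trustworthy completions
```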
What feeds the score: signals and stewardship
Inputs fall into buckets. Not every member has data in every bucket; the model is built to handle sparsity without punishing newcomers beyond what prudence requires.
Identity & verification
Government-aligned checks, selfie liveness where used, phone and email possession, device reputation, and linkage to payment instruments. These reduce impersonation and multi-account abuse.
Transactional integrity
Successful rent flows, on-time documented payments in our rails, completion of eDocument milestones, and low rates of chargebacks or contested reversals where applicable.
Community & social graph
Invites from higher-trust members, mutual connections with good standing, and healthy message patterns—as opposed to blast spam or coercion red flags surfaced by classifiers.
Listing and inventory behavior
Accuracy of photos and copy, duplication detection, price manipulation signals, and landlord/host responsiveness tied to inquiry SLAs.
Safety & policy adherence
Reports validated by moderation, harassment findings, discrimination policy breaches, illegal-sublet patterns, or attempts to move payments off-platform to evade protections.
Disputes and claims
Outcomes from structured resolution: good-faith participation, evidence quality, repeated losses on materially similar facts, or escalation to claims workflows.
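One way to picture how sparse buckets are handled without punishing newcomers is to renormalize weights over only the buckets that have data. The bucket names mirror this section, but the weights, the [0, 1] signal normalization, and the mapping to the 1–300 scale are all invented for the sketch.

```python
# Hypothetical per-bucket weights; the real model blends rules and
# statistical signals and is not a fixed linear combination like this.
WEIGHTS = {
    "identity": 0.25,
    "transactional": 0.25,
    "community": 0.15,
    "listings": 0.10,
    "safety": 0.15,
    "disputes": 0.10,
}

def trust_score(buckets: dict[str, float]) -> int:
    """buckets maps bucket name -> normalized signal in [0, 1].

    Missing buckets are skipped and weights are renormalized over the
    buckets present, so a newcomer with only identity data is not
    dragged down by empty history buckets.
    """
    present = {k: v for k, v in buckets.items() if k in WEIGHTS}
    if not present:
        return 1  # no signal at all: floor of the scale
    total_w = sum(WEIGHTS[k] for k in present)
    blended = sum(WEIGHTS[k] * v for k, v in present.items()) / total_w
    return max(1, min(300, round(1 + blended * 299)))
```

Under this scheme a brand-new member with a perfect identity bucket and nothing else scores as well, per unit of available evidence, as a veteran with every bucket filled.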
Recommendation engine: from score to surfaced experience
The trust score is one input to the ranking and eligibility services. When you search roommates or listings, candidates are not sorted by score alone: we blend fit (preferences, budget, location), availability, and trust-adjusted confidence, so a perfect match is not buried beneath a marginal one simply because of a few points' difference.
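The blending idea can be sketched as a weighted ranking key where fit dominates and trust only nudges confidence. The 0.8/0.2 split, the `Candidate` fields, and the rescaling are illustrative assumptions, not the production weighting.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    fit: float   # preference/budget/location match, normalized to [0, 1]
    trust: int   # 1-300 trust score

def ranking_key(c: Candidate) -> float:
    """Blend fit with trust-adjusted confidence; weights are hypothetical."""
    trust_confidence = (c.trust - 1) / 299       # rescale 1-300 to [0, 1]
    return 0.8 * c.fit + 0.2 * trust_confidence  # fit dominates by design

candidates = [
    Candidate("great_fit_mid_trust", fit=0.95, trust=120),
    Candidate("weak_fit_high_trust", fit=0.60, trust=290),
]
ranked = sorted(candidates, key=ranking_key, reverse=True)
```

With these weights, the strong match with a mid-band score still outranks a marginal match with a near-perfect score, which is exactly the "not buried because of a few points" behavior described above.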
Certain workflows are gated: for example, bulk messaging, instant booking, or high-value contracts may require minimum trust tiers or supplemental verification. Gates exist to protect both sides—especially in asymmetric markets where one party sends money before move-in.
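A minimal sketch of such gating is a lookup table from workflow to minimum tier, with supplemental verification able to substitute for some points. Every threshold and the 30-point substitution value are invented for the example; the real gates are policy-defined.

```python
# Hypothetical gate table: workflow -> minimum trust score required.
GATES = {
    "messaging": 1,
    "bulk_messaging": 150,
    "instant_booking": 180,
    "high_value_contract": 220,
}

def can_use(workflow: str, score: int, extra_verified: bool = False) -> bool:
    """Check a member against a workflow gate; unknown workflows fail closed."""
    minimum = GATES.get(workflow)
    if minimum is None:
        return False
    # Supplemental verification substitutes for up to 30 points here, mirroring
    # the "minimum trust tiers OR supplemental verification" language above.
    effective = score + (30 if extra_verified else 0)
    return effective >= minimum
```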
The engine re-evaluates continuously as events stream in (new reviews, dispute resolutions, document signatures). Stale scores are periodically refreshed so a member who repairs their standing sees improvement after consistent positive behavior—not only at arbitrary calendar intervals.
How trust interacts with documents, blockchain, and claims
Trust is orthogonal but linked to immutability. A blockchain-backed lease hash does not, by itself, prove someone is trustworthy—but it proves which version of an agreement existed at a point in time. Together, trust scoring and document tooling let counterparties reason about both who they are dealing with and what was agreed.
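The "which version existed" guarantee can be demonstrated with a content hash: digest the signed lease bytes, anchor the digest, and later verify any presented copy against it. The sample document text and SHA-256 choice are assumptions for illustration; the anchoring mechanism itself is out of scope here.

```python
import hashlib

def lease_digest(document: bytes) -> str:
    """Return the hex SHA-256 digest of a document's exact bytes."""
    return hashlib.sha256(document).hexdigest()

signed = b"Lease v3: 12-month term, rent due on the 1st."
anchored = lease_digest(signed)  # this value is what would be anchored on-chain

tampered = b"Lease v3: 12-month term, rent due on the 15th."
authentic_ok = lease_digest(signed) == anchored      # the true copy verifies
tampered_ok = lease_digest(tampered) == anchored     # any edit breaks the match
```

The digest says nothing about whether the counterparty is trustworthy; it only pins down the agreement's contents, which is precisely why the trust score and the document hash answer complementary questions.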
When claims fire, trust history informs prioritization and settlement posture: repeated bad-faith actors may face faster escalation, while members with clean records and strong documentation may see expedited paths where policy allows.
Privacy, appeals, and human oversight
Sensitive attributes used for verification are handled under our Privacy Policy. We retain audit logs for security and dispute resolution, with retention tuned to legal needs and product functionality.
If you believe a score or restriction is wrong, you may open a review through Help Desk. Human operators can override automated decisions when presented with compelling evidence; machine decisions that affect eligibility for housing or payments receive heightened scrutiny.
We monitor the trust stack for drift and bias: periodic evaluations compare outcomes across cohorts, and rule changes are versioned so engineering and compliance can explain what changed and why.
Put trust to work on your account
See product-specific explainers, numeric deep-dives, and onboarding paths. The Trust System and Trust Score pages complement this narrative with UI-oriented walkthroughs.