Privacy & Security
A trust document describing how customer data and evidence are handled.
This page is written as an operational summary, not as decorative legal copy. The goal is to make the product's trust posture easy to understand.
Trust model at a glance
No training on client data
Targets, steps-to-reproduce, payloads, and verification results are not used as model training data.
Isolated execution
Verification runs in isolated browser environments with no browser context reuse across tenants.
Evidence integrity
Artifacts are tied back to the retest through integrity checks so the output remains defensible.
Tenant separation
Customer data is separated at the storage and application levels to avoid cross-tenant leakage.
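The evidence-integrity point above can be made concrete. One common way to tie an artifact back to its retest is to record a cryptographic digest of the artifact alongside the retest identifier, and recompute it on review. This is a minimal sketch of that general technique; the `retest_id` field, the manifest shape, and the choice of SHA-256 are illustrative assumptions, not RiftX's actual implementation.

```python
import hashlib


def fingerprint(artifact_bytes: bytes) -> str:
    # A SHA-256 digest uniquely identifies the artifact contents.
    return hashlib.sha256(artifact_bytes).hexdigest()


def manifest_entry(retest_id: str, name: str, artifact_bytes: bytes) -> dict:
    # Bind the artifact's hash to the retest it came from.
    return {
        "retest_id": retest_id,
        "artifact": name,
        "sha256": fingerprint(artifact_bytes),
    }


def verify(entry: dict, artifact_bytes: bytes) -> bool:
    # Recompute the digest at review time; any change to the
    # artifact bytes changes the hash and fails verification.
    return fingerprint(artifact_bytes) == entry["sha256"]


capture = b"GET /login HTTP/1.1"
entry = manifest_entry("rt-123", "network-capture.har", capture)
assert verify(entry, capture)
assert not verify(entry, capture + b"tampered")
```

Because the digest is stored with the retest identifier, a reviewer can later confirm both that the evidence is unmodified and which retest produced it.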
Operational summary
The product is designed for bounded verification, not loose data collection.
Collected inputs
For each retest, RiftX processes the target URL, the pentester's steps-to-reproduce, the reported vulnerability type, and any optional supporting payload or context required to execute the verification workflow.
Generated artifacts
The system produces evidence such as network captures, browser artifacts, and verification reports so the resulting verdict can be reviewed and defended by a pentester.
Isolation and storage
Execution is isolated per retest and customer data remains separated by tenant. Evidence storage and artifact handling are designed around retaining the proof required for verification while avoiding unnecessary exposure.
Security controls
Target validation, isolated execution, safety limits, and auditable system actions are part of the default trust model. The product is designed to do bounded verification work, not unrestricted exploration.
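Target validation is the first of these controls in practice: before any verification traffic is sent, the target URL is checked against the customer-approved scope. A minimal sketch of that idea follows; the function name, the HTTPS-only rule, and the host allowlist are illustrative assumptions rather than RiftX's exact policy.

```python
from urllib.parse import urlparse

# Assumption for this sketch: only HTTPS targets are in scope.
ALLOWED_SCHEMES = {"https"}


def validate_target(url: str, scoped_hosts: set[str]) -> bool:
    # Reject anything outside the approved scope before a single
    # request is made against the target.
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    return parsed.hostname in scoped_hosts


scope = {"app.example.com"}
assert validate_target("https://app.example.com/login", scope)
assert not validate_target("http://app.example.com/login", scope)   # wrong scheme
assert not validate_target("https://evil.example.net/login", scope)  # out of scope
```

Bounding verification this way is what makes the rest of the controls (isolation, safety limits, audit logs) meaningful: the system only ever acts inside a scope the customer defined.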
Model usage
LLM support is used as a constrained part of the workflow and not as a replacement for deterministic verification logic. Client data is not reused to train the underlying models.
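The constraint described above can be sketched as a simple rule: model output may guide exploration, but the verdict itself comes only from deterministic checks over captured evidence. The function names and the `expected_signature_seen` field below are hypothetical, used only to illustrate the separation.

```python
def deterministic_verdict(evidence: dict) -> bool:
    # The verdict depends only on observable evidence, e.g. whether
    # the expected response signature was actually captured.
    return evidence.get("expected_signature_seen") is True


def retest_verdict(evidence: dict, llm_hint: str) -> bool:
    # The LLM hint may shape upstream exploration, but it is never
    # consulted here: the verdict is the deterministic check alone.
    return deterministic_verdict(evidence)


assert retest_verdict({"expected_signature_seen": True}, "looks exploitable")
assert not retest_verdict({}, "looks exploitable")
```

Keeping the model outside the verdict path is what lets the result stay reproducible and defensible regardless of how the model behaves.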
Contact
Security concerns
security@riftx.io
Privacy requests
privacy@riftx.io