Verification is not advocacy. It is evidence. What institutions do with that evidence is their decision.
Traditional due diligence happens after capital is committed — reviewing documents, interviewing management, stress-testing projections. In frontier markets with thin data, informal economies, and no comparable transactions, this produces expensive guesswork. The diligence looks thorough. The underlying evidence does not exist.
Venture studios and accelerators solve a different problem entirely. They build companies. RV Design does not build companies — it tests whether specific underwriting questions have answers. The output is evidence, not a startup.
Impact measurement frameworks (IRIS+, IMP, SDG alignment) track outcomes after the fact. They tell you what happened. They do not tell an institution whether to deploy capital in the first place. Pre-underwriting produces the evidence before the commitment, when it actually affects the decision.
RV Design sits in the gap between these approaches: producing institutional-grade verification data during a structured operational test, so that when capital is finally deployed, the underwriting is grounded in evidence rather than narrative.
Every outcome claimed by any RV Design venture must pass three independent tests before counting as verified. Outcomes that pass receive a Verification Receipt — a standardized, auditable record containing outcome description, methodology, findings, and verifier identity.
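The receipt fields named above suggest a simple record type. A minimal sketch in Python (the class name, the `issued` date field, and the example values are assumptions; only the four listed fields come from the source):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)  # frozen: a receipt is an auditable record, not mutable state
class VerificationReceipt:
    """Standardized, auditable record for one verified outcome."""
    outcome_description: str  # the outcome, stated in measurable terms
    methodology: str          # how the outcome was measured and tested
    findings: str             # results of the three independent tests
    verifier_identity: str    # who performed the verification
    issued: date = field(default_factory=date.today)
```

Freezing the dataclass means any correction requires issuing a new receipt rather than silently editing an old one, which is the property an audit trail needs.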
Would this have happened anyway?
Would this outcome have occurred without the intervention? Matched comparison groups where feasible. Pre-intervention baseline measurement where comparison groups are impractical. Contribution analysis for systemic interventions. Documented in a Pilot Verification Plan before first disbursement.
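In its simplest form, the matched-comparison logic above is a difference-in-differences estimate: the change in the treated group minus the change in the matched comparison group. A minimal sketch (the function name and all figures are hypothetical, for illustration only):

```python
def counterfactual_effect(treated_before: float, treated_after: float,
                          comparison_before: float, comparison_after: float) -> float:
    """Difference-in-differences: the treated group's change minus the
    matched comparison group's change over the same period."""
    treated_change = treated_after - treated_before
    comparison_change = comparison_after - comparison_before
    return treated_change - comparison_change

# Hypothetical figures: employment rate rises 40 -> 70 in the treated
# cohort but also 42 -> 50 in the comparison cohort, so only the
# 22-point difference is attributable to the intervention.
effect = counterfactual_effect(40, 70, 42, 50)  # 30 - 8 = 22
```

The comparison group answers "would this have happened anyway?": whatever change it shows is subtracted before anything is counted.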
Does it last?
Does the outcome persist beyond the measurement period? Tracked at 6-month and 12-month post-intervention windows. Outcomes decaying below 70% of initial measurement within 12 months are reclassified as "provisional" and excluded from verified counts.
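The durability rule above is mechanical enough to state in code. A minimal sketch of the 70%-at-12-months reclassification (the function name and the handling of a non-positive baseline are assumptions):

```python
def durability_status(initial: float, month_12: float,
                      threshold: float = 0.70) -> str:
    """Reclassify an outcome as 'provisional' if its 12-month measurement
    has decayed below 70% of the initial measurement."""
    if initial <= 0:
        return "provisional"  # no positive baseline to persist from
    return "verified" if month_12 >= threshold * initial else "provisional"
```

Provisional outcomes are excluded from verified counts; they can be re-tested, but they never silently re-enter the total.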
Who caused it?
Can the outcome be credibly linked to RV Design activity versus confounding factors? Contribution analysis maps inputs against other actors and external conditions. Only "primary" and "contributing" outcomes count toward verified totals.
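The attribution rule above amounts to a filter on contribution ratings before summing. A minimal sketch (the "incidental" label and the `(value, rating)` tuple shape are assumptions; only "primary" and "contributing" come from the source):

```python
COUNTABLE = {"primary", "contributing"}

def verified_total(outcomes: list[tuple[float, str]]) -> float:
    """Sum only outcomes rated 'primary' or 'contributing' in the
    contribution analysis; all other ratings are excluded."""
    return sum(value for value, rating in outcomes if rating in COUNTABLE)
```

An outcome that contribution analysis attributes mainly to other actors or external conditions simply never reaches the verified total.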
Three-layer governance ensures methodology integrity. We don't grade our own homework.
Established evaluation organizations review methodology design, sampling protocols, and findings. Independent of RV Design management and LP interests.
Annual independent review of verification processes using agreed-upon procedures. Designed to provide institutional-grade assurance on process integrity and data reliability.
The full verification methodology is published as Commons IP, enabling external scrutiny. Anyone can examine, challenge, or improve the verification framework. Transparency is structural, not optional.
Live verification deploys from Month 1, producing sector-specific toolkits deliverable across all verticals and transforming verification from a cost center into a revenue stream.
Turns verification into a revenue line.
Ventures test Youth & Employment, Data Sovereignty & Innovation, and Green & Ecological underwriting questions. Cross-vertical learning compounds verification methodology strength.
One method, three sectors, compounding evidence.
A portion of facility spread and equity profit returns to the Commons. The most junior claim in the waterfall. Capital structure feature, not philanthropy.
Regeneration built into the capital stack.
Equity in each venture is allocated to community stakeholders. Capital circulates for community benefit without extraction, funded primarily by Reflow revenues.
Community upside without extractive structure.