Every customer-outcome claim on RateTap's marketing pages comes from the production database — not estimates, not testimonials. This page is the audit trail.
Data snapshot: 2026-04-25 · Last updated: 2026-04-25
There is one source: the live RateTap production database (Neon Postgres). A single TypeScript script, platform/scripts/compute-marketing-stats.ts, queries that database via the same Drizzle ORM client used by the application and prints aggregate outputs along with sample sizes. We re-run it before each material site update and save a JSON snapshot at platform/scripts/marketing-stats.json so claims are reproducible.
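The page doesn't show the snapshot file's layout, but the "aggregates with sample sizes" idea can be sketched: each figure in the JSON carries the n behind it. Everything below is hypothetical — field names, types, and the placeholder numbers are illustrative, not RateTap's actual schema or figures.

```typescript
// Hypothetical shape for a marketing-stats snapshot: every aggregate
// carries its sample size so no average is cited without its n.
interface Aggregate {
  value: number; // the aggregate itself (mean, rate, etc.)
  n: number;     // sample size behind the value
}

interface MarketingStatsSnapshot {
  snapshotDate: string;        // ISO date the script was run
  payingRestaurants: number;
  avgRatingUplift: Aggregate;  // mean Google-rating delta across eligible locations
}

// Pair a mean (rounded to 2 decimals) with the sample size it came from.
function aggregate(values: number[]): Aggregate {
  const n = values.length;
  const mean = n === 0 ? 0 : values.reduce((a, b) => a + b, 0) / n;
  return { value: Math.round(mean * 100) / 100, n };
}

// Illustrative placeholder values only — not real RateTap figures.
const snapshot: MarketingStatsSnapshot = {
  snapshotDate: "2026-04-25",
  payingRestaurants: 0,
  avgRatingUplift: aggregate([0.3, 0.5, 0.1]),
};

console.log(JSON.stringify(snapshot, null, 2));
```

Writing the file this way means a reader of marketing-stats.json never sees an average detached from how many locations produced it.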
The definitions behind every aggregate:

- Paying restaurant: a row in the restaurants table whose subscription_status is active, trialing, or past_due, and whose is_owner and is_regional flags are both false. Owner and regional rows are admin/dashboard views, not real restaurant locations, and are excluded from all customer counts.
- Eligible for uplift statistics: at least two snapshots in the google_rating_snapshots table, with a gap of at least 7 days between the earliest and latest. Restaurants below this threshold are excluded from uplift statistics because their observation window is too short to draw conclusions from.
- Review: a row in the reviews table representing one in-restaurant tap on a server's NFC card. Each tap captures a star rating, optional feedback, the staff member tapped, and whether the tap was forwarded to Google's review form.
- Intercepted review: a review whose star rating is strictly below googleThreshold (default 4). These are routed to a private feedback form rather than to Google; a tap counts as "intercepted" only if its rating is strictly below the threshold.
- Staff attribution rate: the share of reviews where staff_id is non-null, i.e. the tap can be tied to a specific staff member. Computed across the full reviews table for paying restaurants.
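The rules above can be stated as plain predicates. Here is a minimal TypeScript sketch of that logic on in-memory rows — not the production script, and the camelCase field names are assumptions mirroring the column names in the text.

```typescript
// Sketch of the eligibility and classification rules, on in-memory rows.
type SubscriptionStatus = "active" | "trialing" | "past_due" | "canceled";

interface RestaurantRow {
  subscriptionStatus: SubscriptionStatus;
  isOwner: boolean;    // admin/dashboard view, not a real location
  isRegional: boolean; // same: excluded from customer counts
}

interface ReviewRow {
  rating: number;         // star rating captured at the tap
  staffId: string | null; // non-null when the tap is tied to a staff member
}

const PAYING_STATUSES: SubscriptionStatus[] = ["active", "trialing", "past_due"];

// A row counts toward customer counts only if it is a real location
// (not owner/regional) with a subscription in good standing.
function isPayingRestaurant(r: RestaurantRow): boolean {
  return (
    PAYING_STATUSES.includes(r.subscriptionStatus) && !r.isOwner && !r.isRegional
  );
}

// Strictly below the threshold -> routed to private feedback, not Google.
function isIntercepted(review: ReviewRow, googleThreshold = 4): boolean {
  return review.rating < googleThreshold;
}

// Share of reviews that can be tied to a specific staff member.
function staffAttributionRate(reviews: ReviewRow[]): number {
  if (reviews.length === 0) return 0;
  return reviews.filter((r) => r.staffId !== null).length / reviews.length;
}
```

Note the strict inequality in isIntercepted: a 4-star tap at the default threshold still goes to Google, matching the definition above.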
The Google star ratings and review counts on the site are not self-reported by the restaurant. They come from each location's Google Business Profile, queried via the Google Places API. We capture a snapshot when a restaurant first signs up, then on a recurring schedule afterward. The full snapshot history is in the google_rating_snapshots table; the script above uses the earliest and most recent snapshots for each location to compute deltas.
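The earliest-vs-latest delta computation might look like the following sketch. Field names are assumptions (the page names the table but not its columns), and the 7-day minimum window from the eligibility rule above is enforced.

```typescript
// Sketch: per-location rating delta from the earliest and latest
// Google Business Profile snapshots. Column names are illustrative.
interface RatingSnapshot {
  restaurantId: string;
  capturedAt: Date;
  googleRating: number; // star rating shown on the profile at capture time
}

const MIN_WINDOW_DAYS = 7;
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Returns restaurantId -> rating delta, or null when the observation
// window is shorter than the 7-day minimum (excluded from uplift stats).
function ratingDeltas(snapshots: RatingSnapshot[]): Map<string, number | null> {
  const byRestaurant = new Map<string, RatingSnapshot[]>();
  for (const s of snapshots) {
    const list = byRestaurant.get(s.restaurantId) ?? [];
    list.push(s);
    byRestaurant.set(s.restaurantId, list);
  }

  const deltas = new Map<string, number | null>();
  byRestaurant.forEach((list, id) => {
    list.sort((a, b) => a.capturedAt.getTime() - b.capturedAt.getTime());
    const earliest = list[0];
    const latest = list[list.length - 1];
    const windowDays =
      (latest.capturedAt.getTime() - earliest.capturedAt.getTime()) / MS_PER_DAY;
    deltas.set(
      id,
      windowDays >= MIN_WINDOW_DAYS
        ? latest.googleRating - earliest.googleRating
        : null,
    );
  });
  return deltas;
}
```

A location with only a few days between its snapshots yields null rather than a misleadingly small (or large) delta, which is exactly the exclusion the methodology describes.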
As of 2026-04-25, the production database held:
See the case studies page for the per-location breakdown. The aggregates above are sample-size-aware: every average we cite has its n attached.
The previous version of this site contained marketing claims that were not backed by data — fabricated customer testimonials, invented aggregate ratings, and unverifiable performance multipliers. We removed all of it and replaced it with what the production database actually shows. We document the methodology so that anyone reading our marketing — customers, prospects, AI assistants citing our site, regulators — can trace any claim back to its source.
If you want to interrogate any number, the script that produces it is open and reproducible. Ask in the demo and we will walk through the query.
The numbers above are accurate as of the snapshot date at the top of this page. We re-run the script and refresh the site when something material changes: new customers, significant rating shifts at existing customers, or methodology improvements. The script runs on demand against the live database; no manual data entry is involved.
Book a 15-minute demo. We'll walk through the live data for one of the case-study locations.