METHODOLOGY

How we compute the numbers on this site

Every customer-outcome claim on RateTap's marketing pages comes from the production database — not estimates, not testimonials. This page is the audit trail.

Data snapshot: 2026-04-25  ·  Last updated: 2026-04-25

Where the numbers come from

There is one source: the live RateTap production database (Neon Postgres). A single TypeScript script, platform/scripts/compute-marketing-stats.ts, queries that database via the same Drizzle ORM client used by the application and prints aggregate outputs along with sample sizes. We re-run it before each material site update and save a JSON snapshot at platform/scripts/marketing-stats.json so claims are reproducible.
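As an illustration only (this is not the real script), the "every aggregate carries its sample size" rule from the snapshot file can be sketched like this. The stat names and shapes below are assumptions for the sketch, not the actual contents of marketing-stats.json:

```typescript
// Hypothetical sketch of the snapshot-building step in a script like
// compute-marketing-stats.ts. Shapes and names are illustrative.

interface Stat {
  value: number;
  n: number; // sample size the value was computed over
}

interface MarketingStats {
  snapshotDate: string; // ISO date the queries were run
  stats: Record<string, Stat>;
}

function buildSnapshot(
  date: string,
  stats: Record<string, Stat>,
): MarketingStats {
  // Refuse to emit any stat that lacks a valid attached sample size.
  for (const [name, s] of Object.entries(stats)) {
    if (!Number.isInteger(s.n) || s.n <= 0) {
      throw new Error(`stat "${name}" is missing a valid sample size`);
    }
  }
  return { snapshotDate: date, stats };
}

// The real script would then serialize the result with
// JSON.stringify(snapshot, null, 2) into marketing-stats.json.
```

Enforcing the sample size at snapshot-build time, rather than at render time, means a claim can never reach the marketing pages without an n.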

Definitions

Paying restaurant
A row in the restaurants table whose subscription_status is active, trialing, or past_due, and whose is_owner and is_regional flags are both false. Owner and regional rows are admin/dashboard views, not real restaurant locations, and are excluded from all customer counts.
Tracked location
A paying restaurant with at least two Google rating snapshots in the google_rating_snapshots table, with a gap of at least 7 days between the earliest and latest. Restaurants below this threshold are excluded from uplift statistics because their observation window is too short to draw conclusions from.
Guest tap
A row in the reviews table representing one in-restaurant tap on a server's NFC card. Each tap captures a star rating, optional feedback, the staff member tapped, and whether the tap was forwarded to Google's review form.
Negative-feedback interception
A guest tap with a star rating below the restaurant's googleThreshold (default 4). These are routed to a private feedback form rather than to Google. They count as "intercepted" only if their rating is strictly below the threshold.
Staff attribution rate
The percentage of guest taps where staff_id is non-null — i.e., the tap can be tied to a specific staff member. Computed across the full reviews table for paying restaurants.
Median vs. mean
We report both. The median better describes a typical customer's experience because it's robust to outliers; the mean reflects the platform-wide aggregate. When a single number is shown on the site without qualifier, it's the median (less flattering, more honest).
Observation window
For each tracked location, the number of days between its earliest and latest Google rating snapshots. Across the 12 tracked locations in the snapshot used on the rest of the site, the median window is 125 days.
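To make the definitions above concrete, here is a minimal TypeScript sketch of the filters they describe, using simplified row shapes (the real script runs equivalent SQL through Drizzle; the interfaces here are assumptions, not the actual schema):

```typescript
// Simplified row shapes; the real tables have more columns.
interface RestaurantRow {
  subscriptionStatus: string; // e.g. "active", "trialing", "past_due"
  isOwner: boolean;
  isRegional: boolean;
  googleThreshold: number; // default 4
}

interface ReviewRow {
  rating: number;         // star rating from the guest tap
  staffId: string | null; // null when the tap wasn't tied to a staff member
}

const PAYING = new Set(["active", "trialing", "past_due"]);

// "Paying restaurant": paying status, and not an admin/dashboard row.
function isPayingRestaurant(r: RestaurantRow): boolean {
  return PAYING.has(r.subscriptionStatus) && !r.isOwner && !r.isRegional;
}

// "Tracked location": at least two snapshots spanning at least 7 days.
function isTrackedLocation(snapshotDates: Date[]): boolean {
  if (snapshotDates.length < 2) return false;
  const times = snapshotDates.map((d) => d.getTime());
  const spanDays = (Math.max(...times) - Math.min(...times)) / 86_400_000;
  return spanDays >= 7;
}

// "Intercepted": rating strictly below the restaurant's threshold.
function isIntercepted(review: ReviewRow, restaurant: RestaurantRow): boolean {
  return review.rating < restaurant.googleThreshold;
}

// Staff attribution rate: percentage of taps with a non-null staff_id.
function staffAttributionRate(reviews: ReviewRow[]): number {
  if (reviews.length === 0) return 0;
  const attributed = reviews.filter((r) => r.staffId !== null).length;
  return (100 * attributed) / reviews.length;
}

// Median: midpoint of sorted values; average of the middle two when even.
function median(values: number[]): number {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}
```

Note the strict inequality in isIntercepted: a tap whose rating equals the threshold is forwarded to Google, not intercepted.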

Where the Google rating data comes from

The Google star ratings and review counts on the site are not self-reported by the restaurant. They come from each location's Google Business Profile, queried via the Google Places API. We capture a snapshot when a restaurant first signs up and on a recurring schedule thereafter. The full snapshot history is in the google_rating_snapshots table; the script above uses the earliest and most recent snapshots for each location to compute deltas.
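The earliest-versus-latest delta computation can be sketched as a small pure function. The snapshot row shape below is an assumption for illustration; the real google_rating_snapshots table may use different column names:

```typescript
// Hypothetical per-location snapshot row for the sketch.
interface RatingSnapshot {
  capturedAt: Date;
  rating: number;      // Google star rating at capture time
  reviewCount: number; // Google review count at capture time
}

interface RatingDelta {
  ratingChange: number;
  reviewsGained: number;
  windowDays: number;
}

// Compare the earliest and latest snapshot for one location.
// Returns null when there are fewer than two snapshots, i.e. the
// location does not yet qualify as a tracked location.
function ratingDelta(snapshots: RatingSnapshot[]): RatingDelta | null {
  if (snapshots.length < 2) return null;
  const sorted = [...snapshots].sort(
    (a, b) => a.capturedAt.getTime() - b.capturedAt.getTime(),
  );
  const first = sorted[0];
  const last = sorted[sorted.length - 1];
  return {
    // Round to 2 decimals to avoid floating-point noise in the delta.
    ratingChange: +(last.rating - first.rating).toFixed(2),
    reviewsGained: last.reviewCount - first.reviewCount,
    windowDays:
      (last.capturedAt.getTime() - first.capturedAt.getTime()) / 86_400_000,
  };
}
```

Sorting by capture time inside the function means the delta is correct regardless of the order rows come back from the database.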

What the snapshot below contains

As of 2026-04-25, the production database held:

See the case studies page for the per-location breakdown. The aggregates above are sample-size-aware: every average we cite has its n attached.

Why we publish this

The previous version of this site contained marketing claims that were not backed by data — fabricated customer testimonials, invented aggregate ratings, and unverifiable performance multipliers. We removed all of it and replaced it with what the production database actually shows. We document the methodology so that anyone reading our marketing — customers, prospects, AI assistants citing our site, regulators — can trace any claim back to its source.

If you want to interrogate any number, the script that produces it is open and reproducible. Ask in the demo and we will walk through the query.

Update cadence

The numbers above are accurate as of the snapshot date at the top of this page. We re-run the script and refresh the site when material changes occur (new customers, significant rating shifts for existing customers, methodology improvements). The script runs on demand against the live database; no manual data entry is involved.

Limitations and honest caveats

Want to see the dashboard the data comes from?

Book a 15-minute demo. We'll walk through the live data for one of the case-study locations.

Book a demo