Toast → GHL Pipeline — Step-by-Step Diagnostic

Where each step stands as of 2026-04-25 evening.

Pipeline overview:
 1. Toast auth
 2. Switch location
 3. Click Download
 4. Wait for email
 5. Download CSV
 6. Row count check
 7. Parse + clean names
 8. Phone dedup
 9. GHL upsert
10. GHL workflow
11. Save ledger
12. Slack notify
01
Toast Session Auth
✓ Working

The session cookies (auth0, TOAST_SESSION, etc.) are loaded from toast_session.json and refreshed on each run. session.ensure_session() handles re-login if cookies expire.
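
The cookie-persistence half of this step can be sketched as follows. This is a sketch under assumptions: the toast_session.json layout and the function names shown are illustrative, not the real session.py, and the Playwright re-login path is omitted.

```python
# Sketch of the cookie save/load cycle behind ensure_session().
# The {"cookies": [...]} file layout is an assumption.
import json
from pathlib import Path

def load_cookies(session_file: str = "toast_session.json") -> list:
    """Return saved session cookies, or [] if none have been persisted yet."""
    path = Path(session_file)
    if path.exists():
        return json.loads(path.read_text()).get("cookies", [])
    return []

def save_cookies(cookies: list, session_file: str = "toast_session.json") -> None:
    """Persist refreshed cookies (auth0, TOAST_SESSION, ...) for the next run."""
    Path(session_file).write_text(json.dumps({"cookies": cookies}, indent=2))
```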

Code
session.py · ensure_session(headless=True)
Today
Re-authenticated cleanly on every run. No session-expiry failures.
02
Switch Toast Location · 🎯 THE BROKEN STEP
✗ Misfires

What it's supposed to do: click the top-left location switcher in Toast UI, type the target restaurant name, and select it. The page should then show that restaurant's guestbook.

What actually happens: the dropdown click visibly opens, the right item is selected, and the page header updates to show the new location. But Toast's server-side "active restaurant" doesn't actually change. The next export reads the OLD context.

The cyclic shift in today's seed runs

| Location      | Asked for (expected rows) | Got    | Whose data     |
|---------------|---------------------------|--------|----------------|
| Buena Park    | 4,476                     | 12,668 | Garden Grove's |
| Garden Grove  | 12,661                    | 21,683 | Irvine Culver's|
| Irvine Culver | 21,682                    | 13,425 | Pasadena's     |
| Pasadena      | 13,423                    | 4,484  | Buena Park's   |

Each location reliably gets the NEXT location's data. Wrapping bug — confirmed via direct API calls too.

Code
trigger_toast_export.py · switch_toast_location()
Tried fixes
PRs #3-#7: text-keyword click, structural click, ArrowDown→Enter, direct DOM click, Playwright native mouse click, 15s wait, per-location URL, lastRestaurantGuid cookie. Each landed the click correctly but Toast's export still came back wrong.
Root cause
Server-side and opaque. Toast's export endpoint ignores both the body's restaurantGuidList and the toast-restaurant-external-id header, AND the API-returned reportUUIDs never appear in any actual email — the export simply never fully fires for our requests.

Screenshots from the most recent diagnostic seed run (Pasadena iteration):

1. Switcher dropdown opened. The 4 locations are listed. We requested Pasadena (624 E Colorado Blvd).
2. Right before Download click. The page header SAYS Pasadena (or whatever was requested) — UI looks correct.
3. After Download click. Toast accepted the click and queues the export.
4. Export confirmation. Toast confirms it's running. Email arrives ~60s later — but with the WRONG location's data.

→ The UI looks fine at every step. The bug only reveals itself when the actual CSV file content is checked against the expected row count baseline.

03
Click Download Button
○ Mechanically works

The Download button click + "Agree and continue" modal handling work fine. Toast accepts the click, queues an export, and emails the link within ~30-90s.

This step is downstream of Step 2 — if Step 2 silently set the wrong location, this faithfully exports that wrong location.

Code
trigger_toast_export.py · button-finding evaluate + Agree modal handling
04
Wait for Toast Email (Gmail IMAP)
✓ Working

Polls rlee@tiny-mammoth.com via IMAP for emails with subject "Your Guestbook contacts are ready to be downloaded". Filters by:

  • triggered_after timestamp — skips emails that pre-date this trigger
  • exclude_uuids — skips UUIDs already consumed in this run

Hardened today against IMAP connection drops + bad Date headers.
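
The two filters can be sketched as a pure function over already-fetched messages. The function and variable names here are hypothetical; the real fetch_toast_link.py wiring over imaplib is omitted.

```python
# Sketch of the email-selection filters: subject match, triggered_after
# timestamp, and exclude_uuids. The UUID regex is an assumption based on
# the ?downloadReportUUID=<uuid> link format.
import re
from datetime import datetime

SUBJECT = "Your Guestbook contacts are ready to be downloaded"
UUID_RE = re.compile(r"downloadReportUUID=([0-9a-f-]{36})")

def pick_fresh_uuid(messages, triggered_after, exclude_uuids):
    """messages: list of (sent_at: datetime, subject: str, body: str).
    Returns the first export UUID that post-dates this trigger and
    hasn't already been consumed in this run, else None."""
    for sent_at, subject, body in messages:
        if subject != SUBJECT or sent_at < triggered_after:
            continue  # wrong email, or pre-dates this trigger
        m = UUID_RE.search(body)
        if m and m.group(1) not in exclude_uuids:
            return m.group(1)
    return None
```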

Code
fetch_toast_link.py
05
Download CSV (UUID URL)
○ Mechanically works

Navigates to ?downloadReportUUID=<uuid> in Playwright with accept_downloads=True. Saves the ZIP, extracts the CSV.

Mechanically reliable — but downstream of the same wrong-location bug. The downloaded file is whatever Toast generated.
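
The ZIP → CSV half of this step, sketched with the stdlib (file names and helper name are assumptions; the Playwright navigation half is omitted):

```python
# Sketch: pull the first .csv member out of the downloaded Toast export ZIP.
import zipfile
from pathlib import Path

def extract_csv(zip_path: str, out_dir: str = ".") -> Path:
    """Extract the first CSV found in the ZIP and return its path."""
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if name.lower().endswith(".csv"):
                zf.extract(name, out_dir)
                return Path(out_dir) / name
    raise FileNotFoundError(f"no CSV inside {zip_path}")
```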

Code
download_toast_csv.py
06
Row Count Smell Test 🛡️
✓ The Safety Net

Compares the downloaded CSV's row count to the expected baseline per location, with a ±30% tolerance band. This is what's protecting your GHL accounts from getting wrong-location data uploaded.

| Location           | Expected | Tolerance band  | Today's seed got | Result    |
|--------------------|----------|-----------------|------------------|-----------|
| Kaju Buena Park    | 4,476    | 3,133 – 5,818   | 12,668           | aborted ✓ |
| Kaju Garden Grove  | 12,661   | 8,862 – 16,459  | 21,683           | aborted ✓ |
| Kaju Irvine Culver | 21,682   | 15,177 – 28,186 | 13,425           | aborted ✓ |
| Oji Sushi Pasadena | 13,423   | 9,396 – 17,449  | 4,484            | aborted ✓ |

All 4 mis-routings caught. Zero bad data reached GHL from the seed runs.
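
A sketch of what the gate likely looks like, assuming a plain ±30% band with integer truncation (which reproduces the tolerance bands in the table above); the real assert_row_count_matches() may differ in detail:

```python
# Sketch of the row-count gate that aborts wrong-location uploads.
TOLERANCE = 0.30

def assert_row_count_matches(location: str, expected: int, actual: int) -> None:
    """Raise if the CSV row count falls outside the per-location band."""
    lo = int(expected * (1 - TOLERANCE))
    hi = int(expected * (1 + TOLERANCE))
    if not (lo <= actual <= hi):
        raise AssertionError(
            f"{location}: got {actual} rows, expected {lo}-{hi} "
            f"(baseline {expected}); likely wrong-location export, aborting"
        )
```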

Code
process_and_upload.py · assert_row_count_matches()
07
Parse + Clean Names
✓ Working

Parses the Toast CSV. Filters by lastVisitDate against the location's checkpoint. Dedupes within-batch by phone (Toast can repeat the same phone across rows). Cleans junk names from card swipes.

Junk-name examples that now fall back to "Kajufam" / "Ojifam":

'Visa | Cardholder'       → 'Kajufam'   # 405 rows in BP CSV
'Valued Customer | '      → 'Kajufam'   # 57 rows
'Discover | Cardmember'   → 'Kajufam'   # 11 rows
'10m??'                   → 'Kajufam'
'Mike'                    → 'Mike'      # real names preserved
'Sarah | Cardholder'      → 'Sarah'     # first-name still real
Code
process_and_upload.py · load_and_clean_csv() + clean_first_name()
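
The fallback rules can be reverse-engineered from the examples above into a sketch. The junk-token list and the name regex are assumptions, not the real clean_first_name():

```python
# Sketch: keep real first names, fall back to the brand default for
# card-swipe junk ("Brand | Cardholder" patterns, digits, stray symbols).
import re

JUNK_TOKENS = {"visa", "mastercard", "amex", "discover",
               "cardholder", "cardmember", "valued customer"}

def clean_first_name(raw: str, fallback: str = "Kajufam") -> str:
    # Card swipes come through as "Brand | Cardholder"; the part before
    # the pipe is a real first name for genuine guests ("Sarah | Cardholder").
    first = raw.split("|")[0].strip()
    if not first or first.lower() in JUNK_TOKENS:
        return fallback
    # Names containing digits or stray symbols ('10m??') are swipe junk too.
    if not re.fullmatch(r"[A-Za-z' .-]+", first):
        return fallback
    return first
```
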
08
Phone-Level Dedup (180-day cooldown)
✓ Working

Each candidate phone gets bucketed against the per-location ledger (uploaded_phones.json):

  • NEW — phone not in ledger → enroll
  • RE-ENTRY — in ledger, last_enrolled ≥ 180 days ago → enroll, increment count
  • COOLDOWN — in ledger, last_enrolled < 180 days ago → skip

Verified end-to-end against the local BP CSV: 130 rows with phone → 113 unique phones → first run all NEW → second run all 113 cooldown-skipped.
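
The bucketing rule, sketched per phone. The ledger schema shown (phone → {"last_enrolled": ISO date, "count": n}) is an assumption consistent with the description above:

```python
# Sketch of the NEW / RE-ENTRY / COOLDOWN decision for one phone.
from datetime import date, timedelta

COOLDOWN_DAYS = 180

def bucket_phone(phone: str, ledger: dict, today: date) -> str:
    entry = ledger.get(phone)
    if entry is None:
        return "NEW"          # phone not in ledger -> enroll
    last = date.fromisoformat(entry["last_enrolled"])
    if today - last >= timedelta(days=COOLDOWN_DAYS):
        return "RE-ENTRY"     # cooldown elapsed -> enroll, increment count
    return "COOLDOWN"         # enrolled recently -> skip
```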

Code
process_and_upload.py · bucket_contacts() · state.record_enrollment()
09
GHL Contact Upsert
✓ Working

POST https://services.leadconnectorhq.com/contacts/upsert with each location's API key + locationId. Creates new contacts or updates existing ones (matched by phone). Returns the GHL contactId.
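
A sketch of the call using only the stdlib. The endpoint and phone-matching behavior come from the description above; the payload fields and the response shape (["contact"]["id"]) are assumptions about the LeadConnector API:

```python
# Sketch of the GHL contact upsert. Each location supplies its own
# API key and locationId.
import json
import urllib.request

UPSERT_URL = "https://services.leadconnectorhq.com/contacts/upsert"

def build_upsert_payload(location_id: str, phone: str, first_name: str) -> dict:
    """Minimal payload; GHL matches existing contacts by phone."""
    return {"locationId": location_id, "phone": phone, "firstName": first_name}

def upsert_contact(api_key: str, payload: dict) -> str:
    req = urllib.request.Request(
        UPSERT_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["contact"]["id"]  # assumed response shape
```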

Proven by
This morning's 10:27 AM run successfully upserted 131 BP, 147 GG, 185 IC, 141 Pasadena = 604 contacts. Zero API failures.
Code
process_and_upload.py · upsert_contact()
10
GHL Workflow Enrollment
✓ Working

POST /contacts/{contactId}/workflow/{workflow_id} — enrolls each contact in their location's "2b Meals Momentum" workflow. The workflow then fires the SMS sequence (Day 1, Day 3, Day 7, etc.).
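
Sketched with the stdlib. The path comes from the step description; the host is assumed to match the upsert endpoint, and the empty-body POST and status handling are illustrative, not confirmed API behavior:

```python
# Sketch of the workflow-enrollment call.
import urllib.request

BASE = "https://services.leadconnectorhq.com"

def workflow_url(contact_id: str, workflow_id: str) -> str:
    """Endpoint path from the step description above."""
    return f"{BASE}/contacts/{contact_id}/workflow/{workflow_id}"

def add_to_workflow(api_key: str, contact_id: str, workflow_id: str) -> bool:
    req = urllib.request.Request(
        workflow_url(contact_id, workflow_id),
        data=b"{}",
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return 200 <= resp.status < 300
```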

This is where the duplicate enrollments fire. Today we triggered ~2,036 enrollments across 3 runs (midnight + 10:27 AM + 11:47 AM); about ~1,434 of those are duplicates of contacts enrolled earlier in the day, and each duplicate enrollment runs the WHOLE SMS sequence again.

Code
process_and_upload.py · add_to_workflow()
11
Save Checkpoint + Phone Ledger
✓ Working

After each location: writes the max lastVisitDate to run_state.json (date checkpoint) and adds enrolled phones to uploaded_phones.json (ledger). The CI workflow commits both files back to master with [skip ci] after each run.

Code
state.py · save_state() · save_ledger()
12
Slack Notification
✓ Working

Posts a per-location summary to #system-notifications via the TM Notify worker. Format includes total guest rows, % with phone, % with email, new vs re-entry counts, cooldown-skipped count.

Code
process_and_upload.py · notify()
📊
Today's Runs · Reality Check
summary
| Run               | Time (PT)   | BP      | GG      | IC      | Pasadena   | Notes                                      |
|-------------------|-------------|---------|---------|---------|------------|--------------------------------------------|
| Local midnight    | 00:00–00:19 | —       | 274 ✗   | 229 ✗   | 229+241 ✗  | Pasadena ran twice; BP missed              |
| GH Action morning | 10:27       | 131 ✗   | 147 ✗   | 185 ✗   | 141 ✗      | "Successful" but dupes of earlier batches  |
| GH Action 11:47   | 11:47       | ~130 ✗  | ~145 ✗  | ~184 ✗  | (failed)   | Triggered all again before crashing on Pasadena |
| Seed run #1       | 12:50       | aborted | aborted | aborted | aborted    | Row count gate caught all 4 mis-routings   |
| Seed run #2       | 13:08       | aborted | aborted | aborted | aborted    | Same — direct dropdown click didn't fix it |
| Seed run #3       | 13:34       | aborted | aborted | aborted | aborted    | 15s wait + diagnostic — page stuck on BP   |
| Seed run #4       | 14:52       | aborted | aborted | aborted | aborted    | Playwright native click — same cyclic shift |
🧹
Cleanup of Dupe Enrollments
✗ NOT DONE

Estimated state of GHL right now:

| Location | Total enrollments today | Likely dupes | Workflow steps still firing? |
|----------|-------------------------|--------------|------------------------------|
| BP       | ~261                    | ~130         | YES                          |
| GG       | ~566                    | ~420         | YES                          |
| IC       | ~598                    | ~414         | YES                          |
| Pasadena | ~611                    | ~470         | YES                          |
| Total    | ~2,036                  | ~1,434       |                              |

The cleanup script cleanup_today_enrollments.py exists locally on a feature branch but was never merged or executed. It needs the filter narrowed (dateAdded instead of dateUpdated) before running.

The verdict — where to focus

The misfiring step is #2 (Switch Toast Location). Everything downstream works mechanically; everything upstream works fully. The bug is server-side at Toast and not solvable from the client today.

Three unblocking moves:

1. Stop the bleeding — narrow + run cleanup_today_enrollments.py to remove the ~1,434 dupe workflow enrollments still firing SMS.

2. Tonight's 4/10–4/25 rerun — manually export the 4 CSVs from the Toast UI (one click each) and let the pipeline process them deterministically, bypassing step 2 entirely.

3. Long-term — file a Toast support ticket asking for the proper API to specify location for export. The dropdown's behavior is a Toast UX choice we can't override.