The session cookies (auth0, TOAST_SESSION, etc.) are loaded from toast_session.json and refreshed on each run. session.ensure_session() handles re-login if cookies expire.
session.py · ensure_session(headless=True)

Each location gets its own isolated browser session with a fresh OAuth login. The browser switches to the target restaurant, waits for Toast's full re-authorization to complete, and verifies that the correct restaurant GUID appears in network traffic before proceeding.
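A minimal sketch of the isolation check, assuming Playwright's sync API; the admin URL and the header carrying the restaurant GUID are assumptions, and the real flow lives in ensure_session() and switch_location():

```python
# Hedged sketch of per-location isolation, assuming the Playwright sync API.
# The admin URL and the GUID-carrying header name are assumptions.
from playwright.sync_api import sync_playwright

def open_isolated_session(target_guid: str, state_file: str = "toast_session.json"):
    pw = sync_playwright().start()  # cleanup/stop omitted for brevity
    browser = pw.chromium.launch(headless=True)
    # A fresh context per location: cookies and tokens are never shared
    # across restaurants, which is what fixed the April 25 mis-routing.
    ctx = browser.new_context(storage_state=state_file)
    page = ctx.new_page()

    seen: set[str] = set()
    page.on(
        "request",
        lambda req: seen.add(req.headers.get("toast-restaurant-external-id", "")),
    )

    page.goto("https://www.toasttab.com/restaurants/admin/home")  # illustrative
    # ... switch to target_guid here, then wait for re-auth to settle ...
    page.wait_for_load_state("networkidle")

    if target_guid not in seen:
        raise RuntimeError(f"wrong restaurant in network traffic: {seen}")
    return ctx  # reused for the export trigger and the CSV download
```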
Smoke test results (2026-04-26):
| Location | Switch needed? | Verified XHRs | Export triggered? |
|---|---|---|---|
| Kaju Buena Park | No (default) | 37 | Yes |
| Kaju Garden Grove | Yes (same mgmt group) | 39 | Yes |
| Kaju Irvine Culver | Yes (same mgmt group) | 39 | Yes |
| Oji Sushi Pasadena | Yes (different mgmt group) | 39 | Yes |
unified_export.py · run_export() + switch_location()

The export is now triggered via a direct API call in the same isolated browser session, using that session's restaurant-scoped OAuth token. Toast accepts the request, queues the export, and emails the download link within ~30-90s.
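A hedged sketch of the trigger, reusing the isolated context from the previous sketch; the export endpoint and payload shape are assumptions:

```python
# Hedged sketch: fire the export from inside the isolated context so the
# request carries that session's restaurant-scoped OAuth token.
def trigger_export(ctx, restaurant_guid: str) -> None:
    resp = ctx.request.post(
        "https://www.toasttab.com/api/service/guestbook/export",  # hypothetical path
        data={"restaurantGuid": restaurant_guid},
    )
    if not resp.ok:
        raise RuntimeError(f"export request rejected: {resp.status}")
```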
unified_export.py · run_export()

Polls rlee@tiny-mammoth.com via IMAP for emails with subject "Your Guestbook contacts are ready to be downloaded". Filters by:
- triggered_after timestamp — skips emails that pre-date this trigger
- exclude_uuids — skips UUIDs already consumed in this run

Hardened against IMAP connection drops and bad Date headers.
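A minimal sketch of the polling loop using only the stdlib (the retry/reconnect hardening is omitted); the UUID pattern in the email body is an assumption:

```python
# Minimal IMAP polling sketch; triggered_after must be timezone-aware so it
# compares cleanly against parsed Date headers.
import email
import imaplib
import re
from email.utils import parsedate_to_datetime

SUBJECT = "Your Guestbook contacts are ready to be downloaded"

def poll_for_export(host, user, password, triggered_after, exclude_uuids):
    imap = imaplib.IMAP4_SSL(host)
    try:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, f'SUBJECT "{SUBJECT}"')
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(msg_data[0][1])
            try:
                sent = parsedate_to_datetime(msg["Date"])
            except (TypeError, ValueError):
                continue  # bad Date header: skip instead of crashing
            if sent < triggered_after:
                continue  # pre-dates this trigger
            body = ""
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    body = part.get_payload(decode=True).decode("utf-8", "replace")
                    break
            m = re.search(r"downloadReportUUID=([0-9a-f\-]+)", body)  # assumed pattern
            if m and m.group(1) not in exclude_uuids:
                return m.group(1)
    finally:
        imap.logout()
    return None
```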
fetch_toast_link.py

Downloads the CSV in the same browser session that triggered the export. The browser navigates to ?downloadReportUUID=<uuid> using the session's existing cookies, ensuring the correct restaurant's data is returned.
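A sketch of the cookie-scoped download, again assuming the Playwright context's request client; the URL path is an assumption:

```python
# Hedged sketch: download through the same context so the request reuses the
# verified session's cookies.
def download_csv(ctx, uuid: str, dest: str) -> str:
    resp = ctx.request.get(
        f"https://www.toasttab.com/restaurants/admin/guestbook?downloadReportUUID={uuid}"
    )
    if not resp.ok:
        raise RuntimeError(f"download failed: {resp.status}")
    with open(dest, "wb") as f:
        f.write(resp.body())
    return dest
```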
unified_export.py · BrowserSession.download()

Compares the downloaded CSV's row count to the expected baseline per location, with a ±30% tolerance band. This gate caught all 4 mis-routings on April 25 before any bad data reached GHL. It remains as a safety net even with the fix in place; a sketch of the check follows the table below.
| Location | Expected | Tolerance band |
|---|---|---|
| Kaju Buena Park | 4,476 | 3,133 – 5,818 |
| Kaju Garden Grove | 12,661 | 8,862 – 16,459 |
| Kaju Irvine Culver | 21,682 | 15,177 – 28,186 |
| Oji Sushi Pasadena | 13,423 | 9,396 – 17,449 |
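A sketch of the gate; the baselines mirror the table above, and int() truncation reproduces the published bands exactly:

```python
# Sketch of the ±30% row-count gate; expected baselines come from the table.
EXPECTED_ROWS = {
    "Kaju Buena Park": 4_476,
    "Kaju Garden Grove": 12_661,
    "Kaju Irvine Culver": 21_682,
    "Oji Sushi Pasadena": 13_423,
}
TOLERANCE = 0.30

def assert_row_count_matches(location: str, actual_rows: int) -> None:
    expected = EXPECTED_ROWS[location]
    lo = int(expected * (1 - TOLERANCE))
    hi = int(expected * (1 + TOLERANCE))
    if not lo <= actual_rows <= hi:
        raise ValueError(
            f"{location}: got {actual_rows} rows, outside {lo}-{hi}; "
            "refusing to upload (possible mis-routed export)"
        )
```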
process_and_upload.py · assert_row_count_matches()

Parses the Toast CSV. Filters by lastVisitDate against the location's checkpoint. Dedupes within-batch by phone. Cleans junk names from card swipes:
```
'Visa | Cardholder'   → 'Kajufam'   # credit card "name"
'Valued Customer | '  → 'Kajufam'   # generic placeholder
'10m??'               → 'Kajufam'   # digit-prefix garbage
'Mike'                → 'Mike'      # real names preserved
'Sarah | Cardholder'  → 'Sarah'     # first-name still real
```
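An illustrative reconstruction of the rule implied by these examples (the junk-word list is an assumption beyond what the examples show):

```python
# Reconstructed cleaning rule; the real logic lives in clean_first_name().
import re

FALLBACK = "Kajufam"
JUNK_NAMES = {"visa", "mastercard", "amex", "cardholder", "valued customer"}

def clean_first_name(raw: str) -> str:
    first = raw.split("|")[0].strip()      # keep the text before the pipe
    if (
        not first                          # empty after stripping
        or first.lower() in JUNK_NAMES     # card-network / placeholder "names"
        or re.match(r"\d", first)          # digit-prefix garbage like '10m??'
    ):
        return FALLBACK
    return first
```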
process_and_upload.py · load_and_clean_csv() + clean_first_name()

Each candidate phone gets bucketed against the per-location ledger (uploaded_phones.json) as new, re-entry, or cooldown-skipped; a hedged sketch follows.
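A hedged sketch of the bucketing, assuming the ledger maps phone to last enrollment date and a 30-day cooldown; the bucket names come from the counts the notify() step reports:

```python
# Hedged sketch; the ledger shape (phone -> ISO date) and the 30-day
# cooldown are assumptions.
import json
from datetime import date, timedelta

COOLDOWN_DAYS = 30  # assumption

def bucket_contacts(contacts, ledger_path="uploaded_phones.json"):
    with open(ledger_path) as f:
        ledger = json.load(f)  # phone -> ISO date of last enrollment
    cutoff = date.today() - timedelta(days=COOLDOWN_DAYS)

    new, re_entry, skipped = [], [], []
    for c in contacts:
        last = ledger.get(c["phone"])
        if last is None:
            new.append(c)                      # never uploaded for this location
        elif date.fromisoformat(last) < cutoff:
            re_entry.append(c)                 # cooldown has elapsed
        else:
            skipped.append(c)                  # still inside the cooldown window
    return new, re_entry, skipped
```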
process_and_upload.py · bucket_contacts() · state.record_enrollment()

POST https://services.leadconnectorhq.com/contacts/upsert with each location's API key + locationId. Creates new contacts or updates existing ones (matched by phone). Returns the GHL contactId.
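A sketch of the upsert call; the endpoint and locationId come from the description above, the Version header follows LeadConnector's public API, and the response field path is an assumption:

```python
# Sketch of the GHL upsert; response shape is an assumption.
import requests

def upsert_contact(api_key: str, location_id: str, contact: dict) -> str:
    resp = requests.post(
        "https://services.leadconnectorhq.com/contacts/upsert",
        headers={"Authorization": f"Bearer {api_key}", "Version": "2021-07-28"},
        json={
            "locationId": location_id,
            "phone": contact["phone"],            # match key for dedupe
            "firstName": contact["first_name"],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["contact"]["id"]           # the GHL contactId
```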
process_and_upload.py · upsert_contact()

POST /contacts/{contactId}/workflow/{workflowId} — enrolls each contact in their location's "2b Meals Momentum" workflow. The workflow then fires the SMS sequence.
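A matching sketch of the enrollment call; the path comes straight from the description, and the headers mirror the upsert sketch:

```python
# Sketch of the workflow enrollment call.
import requests

def add_to_workflow(api_key: str, contact_id: str, workflow_id: str) -> None:
    resp = requests.post(
        f"https://services.leadconnectorhq.com/contacts/{contact_id}/workflow/{workflow_id}",
        headers={"Authorization": f"Bearer {api_key}", "Version": "2021-07-28"},
        timeout=30,
    )
    resp.raise_for_status()
```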
process_and_upload.py · add_to_workflow()

After each location: writes the max lastVisitDate to run_state.json (date checkpoint) and adds enrolled phones to uploaded_phones.json (ledger). The CI workflow commits both files back to master with [skip ci] after each run.
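A minimal sketch of the checkpoint and ledger writes; both JSON shapes are assumptions:

```python
# Sketch of the state writes; the real writers are state.save_state and
# state.save_ledger.
import json
from datetime import date

def _load(path: str) -> dict:
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

def save_state(location: str, max_last_visit: str, path: str = "run_state.json") -> None:
    state = _load(path)
    state[location] = max_last_visit              # ISO date checkpoint
    with open(path, "w") as f:
        json.dump(state, f, indent=2)

def save_ledger(phones, path: str = "uploaded_phones.json") -> None:
    ledger = _load(path)                          # phone -> ISO enrollment date
    today = date.today().isoformat()
    for p in phones:
        ledger[p] = today
    with open(path, "w") as f:
        json.dump(ledger, f, indent=2)
```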
state.py · save_state() · save_ledger()

Posts a per-location summary to #system-notifications via the TM Notify worker. The format includes total guest rows, % with phone, % with email, new vs re-entry counts, and the cooldown-skipped count.
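An illustrative rendering of the summary line; the fields mirror the description above, but the exact wording is an assumption:

```python
# Illustrative formatting of the per-location summary.
def build_summary(location, rows, pct_phone, pct_email, new, re_entry, skipped):
    return (
        f"{location}: {rows} guest rows | "
        f"{pct_phone:.0%} with phone | {pct_email:.0%} with email | "
        f"{new} new / {re_entry} re-entry / {skipped} cooldown-skipped"
    )
```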
process_and_upload.py · notify()

The location-switching bug (Step 2) that had blocked the pipeline since April 25 is resolved. The fix uses isolated browser sessions per location, each with its own OAuth-scoped login token. Smoke tests verified that all 4 locations produce the correct restaurant GUID in network traffic.
Remaining items:
1. Merge PR — the feature/isolated-sessions branch is ready for review.
2. First production run — re-enable the daily cron in .github/workflows/daily-upload.yml after the merge. Run with seed=true first to verify that row counts match before enabling live GHL enrollment.
3. Dupe cleanup — ~1,434 duplicate workflow enrollments from April 25 still need to be removed in GHL (a separate task from the pipeline fix).