This repository contains:
- Paper templates (LaTeX and Markdown)
- Experiment framework with synthetic-data scripts to reproduce the figures for:
  - E1: Compute scaling curves
  - E2: Domain-knowledge ablation
  - E3: Cross-channel transfer via adapters
  - E4: Offline→Online correlation
  - E5: Safety/compliance under search
- Mind-map JSON export for your website
Core idea: Follow Sutton’s “bitter lesson.” Keep domain specifics as adapters + rewards and put performance in general methods (search, learning, retrieval).
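To make the adapters-plus-general-methods split concrete, a domain adapter can be as small as a typed schema plus a reward function; everything else (search, learning, retrieval) stays domain-agnostic. The names below are hypothetical illustrations, not the actual contents of `src/adapters/schemas.py`:

```python
from dataclasses import dataclass

@dataclass
class CampaignObservation:
    """Hypothetical domain schema: what a marketing channel reports back."""
    channel: str      # e.g. "ga4", "gads", "dv360"
    spend: float      # cost in account currency
    clicks: int
    conversions: int

def reward(obs: CampaignObservation, target_cpa: float = 50.0) -> float:
    """Hypothetical reward: conversions, penalized when CPA exceeds target."""
    if obs.conversions == 0:
        return -obs.spend / target_cpa  # pure cost, no return
    cpa = obs.spend / obs.conversions
    return obs.conversions - max(0.0, cpa - target_cpa) / target_cpa

print(reward(CampaignObservation("gads", spend=100.0, clicks=40, conversions=4)))  # 4.0
```

A general planner or policy only ever sees `CampaignObservation` and `reward`; swapping channels means swapping adapters, not methods.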
```bash
# 1) Create a virtual env (optional)
python3 -m venv .venv && source .venv/bin/activate
# 2) Install dependencies
pip install -r requirements.txt
# 3) Run all experiments
python src/experiments/run_all.py
# 4) See outputs
ls outputs/
```

Outputs are saved to `outputs/` as PNG figures and CSVs. You can tweak experiment knobs in each script.
```
paper/
  main.tex
  references.bib
  paper.md
mindmap/
  llm_marketing_mindmap.json
src/
  adapters/
    schemas.py
  connectors/
    ga4.py  gads.py  dv360.py  crm.py
  retrieval/     (index + RAG stubs)
  planner/       (search + evolution stubs)
  policy/        (bandit, offline replay, budget RL stubs)
  sim/           (click/conversion simulator)
  eval/          (CUPED, sequential testing, metrics)
  actuators/     (API writers)
  governance/    (guardrails + versioning)
  experiments/   (E1–E5 + run_all)
  utils/         (logging + config)
outputs/
requirements.txt
.gitignore
LICENSE
Makefile
```
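For context on the CUPED stub in `eval/`: CUPED reduces the variance of an experiment metric using a pre-experiment covariate, which tightens confidence intervals without changing the metric's mean. A minimal sketch on synthetic data (not the repo's actual implementation):

```python
import numpy as np

def cuped_adjust(y: np.ndarray, x: np.ndarray) -> np.ndarray:
    """CUPED: y_adj = y - theta * (x - mean(x)), theta = cov(x, y) / var(x).

    y: in-experiment metric per unit; x: pre-experiment covariate per unit.
    The adjusted metric keeps the same mean but has lower variance whenever
    x is correlated with y.
    """
    theta = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    return y - theta * (x - x.mean())

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                # pre-period covariate
y = 2.0 * x + rng.normal(size=10_000)      # correlated in-experiment metric
y_adj = cuped_adjust(y, x)
print(np.var(y_adj, ddof=1) < np.var(y, ddof=1))  # True: variance shrinks
```

Because the correction term has zero mean, the treatment-effect estimate is unchanged; only its variance drops.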
- All experiment scripts run on synthetic data to validate methodology, plots, and logging.
- Replace the stubs in `adapters/`, `actuators/`, and `governance/` with real platform code and policies.
- Figures are referenced in `paper/main.tex` and `paper.md` (update paths if needed).
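As a starting point when replacing the `policy/` bandit stub, an epsilon-greedy allocator over arms (e.g. creatives or bid levels) is one simple baseline. This is a sketch under assumed interfaces, not the repo's code; the click rates below are made up:

```python
import random

class EpsilonGreedy:
    """Epsilon-greedy bandit: explore with prob. eps, else exploit the best arm."""

    def __init__(self, n_arms: int, eps: float = 0.1, seed: int = 0):
        self.eps = eps
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms          # running mean reward per arm
        self.rng = random.Random(seed)

    def select(self) -> int:
        if self.rng.random() < self.eps:
            return self.rng.randrange(len(self.counts))
        return max(range(len(self.counts)), key=self.values.__getitem__)

    def update(self, arm: int, reward: float) -> None:
        self.counts[arm] += 1
        # incremental running-mean update
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsilonGreedy(n_arms=3, eps=0.1)
true_ctr = [0.02, 0.05, 0.03]                 # hypothetical per-arm click rates
sim = random.Random(1)
for _ in range(5_000):
    arm = bandit.select()
    bandit.update(arm, 1.0 if sim.random() < true_ctr[arm] else 0.0)
print(max(range(3), key=bandit.values.__getitem__))  # arm the policy currently prefers
```

Offline replay (also stubbed in `policy/`) would feed logged impressions through `update` instead of a simulator.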