This Docker image reproduces the Architecture of Coherent Reality (ACR) analysis workflow on the loophole-free NIST 2015 Bell test data. It converts native `.dat` files into Parquet, builds compact HDF5 "mini" files, and computes
- the normalized Clauser-Horne (CH) statistic, and
- the third-order T3 statistic (radius scan + permutation / bootstrap).
All steps are scriptable and deterministic; anyone can rebuild the numbers published in the ACR CH/T3 report.
- Docker >= 24 and Docker Compose v2 (the `docker compose` command).
- ~15 GB of free disk space (raw files + intermediate Parquet/HDF5 + output) for a single data pair (`*_sync.T1.dat`, `*_sync.T2.dat`, `*alice.dat`, `*bob.dat`).
- ~105 GB total if you plan to run all 8 recommended ACR analysis runs.
- Place the NIST raw `.dat` files and the corresponding `*_find_sync.T{1,2}.dat` files in the `./data` directory.
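If you plan the full set of eight runs, it may be worth confirming the host has the ~105 GB of free space first. A minimal sketch, assuming GNU coreutils `df` (as on most Linux/WSL 2 hosts):

```bash
# Free space, in 1 GB blocks, on the filesystem holding ./data and ./out
df -BG --output=avail .
```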
List of `.dat` files used for all 8 recommended ACR analysis runs:
| Series | NIST raw .dat files |
|---|---|
| run00_44 | `00_03_find_sync.T1.dat`, `00_03_find_sync.T2.dat`, `00_44_CH_pockel_100kHz.run3.alice.dat`, `00_43_CH_pockel_100kHz.run3.bob.dat` |
| run02_54 | `02_24_find_sync.T1.dat`, `02_24_find_sync.T2.dat`, `02_54_CH_pockel_100kHz.run4.afterTimingfix2.alice.dat`, `02_54_CH_pockel_100kHz.run4.afterTimingfix2.bob.dat` |
| run03_31 | `02_24_find_sync.T1.dat`, `02_24_find_sync.T2.dat`, `03_31_CH_pockel_100kHz.run4.afterTimingfix2_training.alice.dat`, `03_31_CH_pockel_100kHz.run4.afterTimingfix2_training.bob.dat` |
| run03_43 | `02_24_find_sync.T1.dat`, `02_24_find_sync.T2.dat`, `03_43_CH_pockel_100kHz.run4.afterTimingfix2_afterfixingModeLocking.alice.dat`, `03_43_CH_pockel_100kHz.run4.afterTimingfix2_afterfixingModeLocking.bob.dat` |
| run19_45 | `19_44_find_sync.T1.dat`, `19_44_find_sync.T2.dat`, `19_45_CH_pockel_100kHz.run.nolightconeshift.alice.dat`, `19_44_CH_pockel_100kHz.run.nolightconeshift.bob.dat` |
| run21_15 | `21_05_find_sync.T1.dat`, `21_04_find_sync.T2.dat`, `21_15_CH_pockel_100kHz.run.200nsadditiondelay_lightconeshift.alice.dat`, `21_15_CH_pockel_100kHz.run.200nsadditiondelay_lightconeshift.bob.dat` |
| run22_20 | `23_27_find_sync.T1.dat`, `23_26_find_sync.T2.dat`, `22_20_CH_pockel_100kHz.run.200nsreduceddelay_lightconeshift.alice.dat`, `22_20_CH_pockel_100kHz.run.200nsreduceddelay_lightconeshift.bob.dat` |
| run23_55 | `23_44_find_sync.T1.dat`, `23_44_find_sync.T2.dat`, `23_55_CH_pockel_100kHz.run.ClassicalRNGXOR.alice.dat`, `23_55_CH_pockel_100kHz.run.ClassicalRNGXOR.bob.dat` |
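Before running, you may want to confirm that every file from the table is actually present in `./data`. A minimal sketch (the file names are copied from the table above, abbreviated here):

```bash
#!/usr/bin/env bash
# Check that the raw .dat files listed in the table above exist in ./data.
set -euo pipefail

files=(
  00_03_find_sync.T1.dat 00_03_find_sync.T2.dat
  00_44_CH_pockel_100kHz.run3.alice.dat
  00_43_CH_pockel_100kHz.run3.bob.dat
  # ...add the remaining rows from the table here...
)

missing=0
for f in "${files[@]}"; do
  [[ -f "data/$f" ]] || { echo "MISSING: data/$f"; missing=1; }
done

if (( missing )); then
  echo "Some listed files are missing from ./data" >&2
  exit 1
fi
echo "All listed files are present."
```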
Tested under Windows 11 (WSL 2) with Docker version 27.0.3, build 7d4bcd8.
- Build the image (Python 3.12-slim base):

  ```bash
  docker compose build   # produces image "nist-acr:latest"
  ```

- Prepare host folders:

  ```bash
  mkdir -p data out   # make sure the "data" and "out" directories exist
  ```

- Download and copy all NIST raw `*.dat` and `*find_sync.T*.dat` files listed in the table above into the `./data` directory.

- Run all tests + diagnostics:

  ```bash
  # Run CH test pipeline for all 8 recommended ACR analysis runs
  docker compose run --rm nist-acr \
      bash run_all_ch.sh

  # Then run T3 test pipeline for all 8 recommended ACR analysis runs
  docker compose run --rm nist-acr \
      bash run_all_t3.sh

  # Then run all diagnostics (optional)
  docker compose run --rm nist-acr \
      bash run_all_diag.sh
  ```

Main CH test pipeline: by default, the container executes `pipeline.py`, the main CH pipeline, so you don't have to specify the script manually.
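Because `pipeline.py` is the default entrypoint, naming it explicitly is optional. Assuming the script exposes the usual argparse `--help`, the two invocations below should be equivalent:

```bash
# Default entrypoint: pipeline.py is implied
docker compose run --rm nist-acr --help

# Explicit form, parallel to how t3_pipeline.py is invoked below
docker compose run --rm nist-acr pipeline.py --help
```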
Run a single data pair -> CH statistic:

```bash
docker compose run --rm nist-acr \
    --find-t1 /data/00_03_find_sync.T1.dat \
    --find-t2 /data/00_03_find_sync.T2.dat \
    --raw-alice /data/00_44_CH_pockel_100kHz.run3.alice.dat \
    --raw-bob /data/00_43_CH_pockel_100kHz.run3.bob.dat \
    --name run00_44 \
    --out-dir /out
```

The standard list of supported parameters for the CH test pipeline:
```bash
--find-t1 /data/${fT1} \        # location of the find sync T1 .dat file
--find-t2 /data/${fT2} \        # location of the find sync T2 .dat file
--raw-alice /data/${aliceRAW} \ # location of the raw Alice .dat file
--raw-bob /data/${bobRAW} \     # location of the raw Bob .dat file
--name ${tag} \                 # name to use for output result files
--out-dir /out \                # output directory
--pk ${PK} \                    # pk value, default is 90
```

Two more parameters are available, but they are mutually exclusive; only one of them can be used at a time:
```bash
--radius 0.05                                  # phase window radius, default is 0.05
# or
--scan-radius "0.02,0.03,0.04,0.05,0.06,0.07"  # comma-separated list of radii to scan
```

Additionally, a couple of extra parameters are available:
```bash
--shuffle 5000    # permutation test iterations
--bootstrap 5000  # bootstrap iterations
--threads 16      # number of worker processes, used only in shuffle/bootstrap mode; default is 16 or the maximum available CPU count
--seed 42         # RNG seed for shuffle / bootstrap; default: None
```
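For example, a single CH run combining the optional parameters above. This is a sketch only: the flags are the ones documented above, and the file names come from the run00_44 row of the table; whether a given combination is useful depends on the analysis you are reproducing:

```bash
docker compose run --rm nist-acr \
    --find-t1 /data/00_03_find_sync.T1.dat \
    --find-t2 /data/00_03_find_sync.T2.dat \
    --raw-alice /data/00_44_CH_pockel_100kHz.run3.alice.dat \
    --raw-bob /data/00_43_CH_pockel_100kHz.run3.bob.dat \
    --name run00_44 \
    --out-dir /out \
    --pk 90 \
    --radius 0.05 \
    --shuffle 5000 \
    --bootstrap 5000 \
    --threads 16 \
    --seed 42
```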
The T3 pipeline expects that the CH run has already completed; it uses the files in the `/out` directory as input for the T3 test.

Base run (radius 0.05, r-mode any):

```bash
docker compose run --rm nist-acr \
t3_pipeline.py \
--parquet /out/run00_44_raw.parquet \
--sync /out/run00_44_sync.json \
--name run00_44_any \
--out-dir /out/t3 \
--radius 0.05 \
--r-mode any \
--shuffle 5000 \
--bootstrap 5000 \
--seed 42
```

Checking the sign (Alice-only):

```bash
docker compose run --rm nist-acr \
t3_pipeline.py \
--parquet /out/run00_44_raw.parquet \
--sync /out/run00_44_sync.json \
--name run00_44_alice \
--out-dir /out/t3 \
--radius 0.05 \
--r-mode alice \
--shuffle 5000 \
--bootstrap 5000 \
--seed 42
```

Checking the sign (Bob-only):

```bash
docker compose run --rm nist-acr \
t3_pipeline.py \
--parquet /out/run00_44_raw.parquet \
--sync /out/run00_44_sync.json \
--name run00_44_bob \
--out-dir /out/t3 \
--radius 0.05 \
--r-mode bob \
--shuffle 5000 \
--bootstrap 5000 \
--seed 42
```

Radius scan (produces `run00_44_scan_t3_counts.json`):

```bash
docker compose run --rm nist-acr \
t3_pipeline.py \
--parquet /out/run00_44_raw.parquet \
--sync /out/run00_44_sync.json \
--name run00_44_scan \
--out-dir /out/t3 \
--scan-radius 0.02,0.03,0.04,0.05,0.06,0.07 \
--r-mode any
```

Additionally, a couple of extra parameters are available:

```bash
--cluster 50         # cluster-robust sigma with non-overlapping blocks of 50 trials (0 = off)
--azuma              # append Azuma-Hoeffding two-sided tail-bound p-value
--shuffle-mode pair  # shuffle mode, "pair" or "side"; default is "pair"
--threads 16         # number of worker processes, used only in shuffle/bootstrap mode; default is 16 or the maximum available CPU count
--seed 42            # RNG seed for shuffle / bootstrap; default: None
```
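For instance, a base T3 run extended with the extra parameters above. A sketch only: the run name `run00_44_any_robust` is hypothetical, and whether these options are useful together depends on the analysis you are reproducing:

```bash
docker compose run --rm nist-acr \
    t3_pipeline.py \
    --parquet /out/run00_44_raw.parquet \
    --sync /out/run00_44_sync.json \
    --name run00_44_any_robust \
    --out-dir /out/t3 \
    --radius 0.05 \
    --r-mode any \
    --shuffle 5000 \
    --bootstrap 5000 \
    --cluster 50 \
    --azuma \
    --shuffle-mode pair \
    --threads 16 \
    --seed 42
```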
After the T3 runs complete, you can run the `combine_t3.py` script to generate a combined T3 results report:

```bash
docker compose run --rm nist-acr \
    combine_t3.py '/out/t3/run*_any_t3_data.npz'
```
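The glob above collects only the `any` r-mode results. Assuming the per-run outputs follow the `<name>_t3_data.npz` pattern visible above, each r-mode could be combined in turn with a small loop (a sketch):

```bash
# Combine T3 results separately for each r-mode
for mode in any alice bob; do
  docker compose run --rm nist-acr \
      combine_t3.py "/out/t3/run*_${mode}_t3_data.npz"
done
```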
The following scripts are available for diagnostics and plot generation:

- `gps_jitter_check.py`
- `pk_overlap_mc.py`
- `scan_pk_overlap.py`
- `phase_peak_scan.py`
- `check_covariance.py`
- `cumulative_ch_plot.py`
- `bitmask_coverage.py`
- `scan_ch_plot.py`
The following command executes seven of the diagnostic scripts, i.e. all of them except `scan_ch_plot.py`:
```bash
docker compose run --rm nist-acr \
    diagnostic_pipeline.py run02_54_0.050_mini.hdf5 run02_54_sync.json --radius 0.05 --outdir /out/diagnostics
```
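To repeat the diagnostics for each of the 8 recommended runs, a loop over the run tags can help. A sketch, assuming each run's mini file follows the `<tag>_0.050_mini.hdf5` naming seen above:

```bash
# Run the diagnostic pipeline for every recommended ACR run
for tag in run00_44 run02_54 run03_31 run03_43 run19_45 run21_15 run22_20 run23_55; do
  docker compose run --rm nist-acr \
      diagnostic_pipeline.py "${tag}_0.050_mini.hdf5" "${tag}_sync.json" \
      --radius 0.05 --outdir /out/diagnostics
done
```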
To run `scan_ch_plot.py` separately:

```bash
docker compose run --rm nist-acr \
    scan_ch_plot.py --reports /out/scan --outdir /out/diagnostics/ch_scan
```

Helper scripts:

- `run_all_ch.sh` – runs all 8 recommended ACR CH test runs.
- `run_all_t3.sh` – runs all 8 recommended ACR T3 test runs.
- `run_all_diag.sh` – runs all 7 diagnostics on the 8 recommended ACR CH test runs, then runs `scan_ch_plot.py`.
- `generate_checksums.sh` – generates checksums for `dat`, `json`, `parquet`, `hdf5`, and `txt` files in `./data` and `./out`, and stores them in `./out/checksums.txt`. To prevent overwriting, the script stops with an error if `./out/checksums.txt` already exists, so make sure that file is absent before running.
- `verify_checksums.sh` – verifies that the SHA256 sums in `./out/checksums.txt` match the actual SHA256 sum of each listed file.
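Since `generate_checksums.sh` refuses to overwrite an existing `./out/checksums.txt`, regenerating checksums after a new run looks like this (a sketch; the `rm` runs on the host, where `./out` is mounted as `/out`):

```bash
# Remove the stale checksum file on the host first
rm -f out/checksums.txt

docker compose run --rm nist-acr \
    bash generate_checksums.sh

# Later, verify that nothing has changed
docker compose run --rm nist-acr \
    bash verify_checksums.sh
```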
Usage example:

```bash
# Run all 8 CH tests
docker compose run --rm nist-acr \
bash run_all_ch.sh
# Then run all 8 T3 tests
docker compose run --rm nist-acr \
bash run_all_t3.sh
# Then run all diagnostics (optional)
docker compose run --rm nist-acr \
bash run_all_diag.sh
# Generate checksums
docker compose run --rm nist-acr \
bash generate_checksums.sh
# Verify checksums
docker compose run --rm nist-acr \
bash verify_checksums.sh
```

This pipeline re-analyses the public-domain data set "Bell Test Research Software and Data" published by NIST (2015).
- Upstream algorithms: the original NIST script `calc_ch_from_hdf5.py`.
- The complete NIST Software Disclaimer is reproduced in `NOTICE_NIST.txt`; see the NIST page: https://www.nist.gov/oism/copyrights#software
Raw `.dat` streams are not redistributed here; download the files listed in the table above from NIST and run the pipeline to reproduce every result.
Preliminary code skeletons in this repository were generated with ChatGPT (OpenAI; versions o1-pro, o3, o4-mini-high, and GPT-4o, used between May 2024 and June 2025). All final scripts were reviewed, tested, and approved by the human author (A. Ahmedov). No AI model is listed as a co-author.
All code in this repository is released under the MIT License.
The upstream NIST scripts are in the public domain (NIST-PD); see `NOTICE_NIST.txt` for the full disclaimer.