# glint_fitting_gpu6.py
# -*- coding: utf-8 -*-
"""
This code determines the astrophysical null depth either by the NSC method
(https://ui.adsabs.harvard.edu/abs/2011ApJ...729..110H/abstract),
which does not necessarily require a calibrator, or by a classic measurement
(https://ui.adsabs.harvard.edu/abs/2011ApJ...729..110H/abstract), which must be calibrated.
The fit uses the Trust Region Reflective algorithm of the scipy function least_squares (https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.least_squares.html).
In order to be more resilient to local minima, basin hopping is performed 10 times by slightly changing the initial conditions (see the documentation of the function ``basin_hoppin_values`` below).
NOTE: the code is not fully stabilised yet, so some discarded functions still exist but are no
longer used. Similarly, the execution is highly customisable due to the different explorations done.
The data to load and the settings about the space parameters are handled by the
script ``glint_fitting_config6.py``.
This script handles the way the null depth is determined and on which baseline.
The library is ``glint_fitting_function6.py``. However, some core functions
are coded here:
* **gaussian**: to generate Gaussian curves;
* **MCfunction**: the core function to create and fit the histogram of synthetic null depths to the real histograms;
* **Logger**: class logging all the console outputs into a log file;
* **basin_hoppin_values**: function changing initial condition to do basin hopping on the fitting algorithm.
Fitted values are stored in npy files (one per basin-hopping iteration, in case the program breaks down).
Pickle files store the results from all iterations.
Plots of the fit of the histogram and plots of the intermediate quantities
(histogram of (anti-)null outputs, synthetic or real, photometries, injection)
are also saved.
The settings of the script are defined by:
* **linear_model**: bool, use the linear model :math:`N = N_a + N_{instr}`;
* **activate_use_antinull**: bool. If ``True``, it uses the antinull output;
* **activate_select_frames**: bool. If ``True``, it keeps frames for which the null depths are between boundaries for every spectral channel;
* **activate_random_init_guesses**: bool. If ``True``, it activates the basin hopping. It is automatically set to ``True`` after the first iteration;
* **activate_spectral_binning**: bool. If ``True``, it bins the spectral channels;
* **activate_use_photometry**: bool. If ``True``, it uses the photometric outputs instead of simulating sequences of photometries for the fit. It is advised to keep it at ``False`` as its use gives non-convincing results and more investigation is required;
* **activate_dark_correction**: bool. If ``True``, it corrects the distribution of the injection for the contribution of the detector noise, assuming they all follow normal distributions. It is advised to keep it at ``False``;
* **activate_save_basic_esti**: bool. If ``True``, it gives a broadband null depth to calibrate in the old-school way;
* **activate_frame_sorting**: bool. If ``True``, it keeps the frames for which the OPD follows a normal distribution (anti-LWE filter). It is strongly recommended to keep it ``True``;
* **activate_preview_only**: bool. If ``True``, the script stops after sorting the frames according to the phase distribution and plots the sequences of frames, highlighting the kept ones, to tune the parameters of the filter;
* **nb_frames_sorting_binning**: integer, number of frames to bin to set the frame sorting parameters. Typical value is 100;
* **activate_time_binning_photometry**: bool. If ``True``, it bins the photometric outputs temporally. It is strongly recommended to keep it ``True`` to mitigate the contribution of the detector noise;
* **nb_frames_binning_photometry**: integer, number of frames to bin the photometric outputs. Typical value is 100;
* **global_binning**: integer, frames to bin before anything starts in order to increase SNR;
* **wl_min**: float, lower bound of the bandwidth to process;
* **wl_max**: float, upper bound of the bandwidth to process;
* **n_samp_total**: int, number of total Monte-Carlo values to generate. Typical value is 1e+8;
* **n_samp_per_loop**: int, number of Monte-Carlo values to generate per loop (to not saturate the memory). Typical value is 1e+7;
* **phase_bias_switch**: bool. If ``True``, it implements a non-null achromatic phase in the null model;
* **opd_bias_switch**: bool. If ``True``, it implements an offset OPD in the null model;
* **zeta_switch**: bool. If ``True``, it uses the measured zeta coefficients. If ``False``, values are set to 0.5;
* **oversampling_switch**: bool. If ``True``, it includes the loss of coherence due to non-monochromatic spectral channels;
* **skip_fit**: bool. If ``True``, the histograms of data and model given the initial guesses are plotted without doing the fit. Useful to just display the histograms and tune the settings;
* **chi2_map_switch**: bool. If ``True``, it grids the parameter space in order to set the boundaries for the fitting algorithm;
* **nb_files_data**: 2-int tuple. Load the data files whose indexes lie between the values in the tuple;
* **nb_files_dark**: 2-int tuple. Load the dark files whose indexes lie between the values in the tuple;
* **basin_hopping_nloop**: 2-int tuple. Lower and upper bounds of the iteration loop for the basin hopping method;
* **which_nulls**: list. List of baselines to process;
* **map_na_sz**: int. Number of values of astrophysical null depth in the grid of the parameter space;
* **map_mu_sz**: int. Number of values of :math:`\mu` in the grid of the parameter space;
* **map_sig_sz**: int. Number of values of :math:`\sigma` in the grid of the parameter space.
"""
import numpy as np
import cupy as cp
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
from timeit import default_timer as time
import h5py
import os
import sys
import glint_fitting_functions6 as gff
from scipy.special import erf
from scipy.io import loadmat
from scipy.stats import norm, skewnorm
import pickle
from datetime import datetime
from glint_fitting_config6 import prepareConfig
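# The NSC fit below compares the histogram of measured null depths with histograms
# of synthetic null depths generated from random OPD draws. A minimal NumPy sketch
# of that principle, using a toy (hypothetical) null model rather than the GLINT one:

```python
import numpy as np

def toy_null_depth(opd, wl=1550.0, na=0.01):
    # Toy interferometric leakage: tan^2(phase/2) plus the astrophysical
    # term na. Illustrative only -- not the model used by this pipeline.
    phase = 2 * np.pi * opd / wl
    return na + (1 - np.cos(phase)) / (1 + np.cos(phase) + 1e-12)

rng = np.random.default_rng(0)
rv_opd = rng.normal(50.0, 20.0, 100_000)   # OPD draws (assumed nm)
rv_null = toy_null_depth(rv_opd)
bins = np.linspace(0, 0.5, 101)
pdf, _ = np.histogram(rv_null, bins)
pdf = pdf / pdf.sum()                      # normalised, like ``accum`` below
```

# The fitting algorithm then adjusts (na, mu_opd, sig_opd) until such a
# synthetic ``pdf`` matches the measured one.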
def gaussian(x, a, x0, sig):
"""
Computes a Gaussian curve.
:Parameters:
**x**: values where the curve is estimated.
**a**: amplitude of the Gaussian.
**x0**: location of the Gaussian.
**sig**: scale of the Gaussian.
:Returns:
Gaussian curve.
"""
return a * np.exp(-(x-x0)**2/(2*sig**2))
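# A quick sanity check of ``gaussian``: the peak equals the amplitude, and the
# curve drops to a*exp(-1/2) one sigma away from the centre:

```python
import numpy as np

def gaussian(x, a, x0, sig):
    # Same expression as above: unnormalised Gaussian of amplitude a.
    return a * np.exp(-(x - x0)**2 / (2 * sig**2))

peak = gaussian(0.0, 2.0, 0.0, 1.0)       # -> 2.0
one_sigma = gaussian(1.0, 2.0, 0.0, 1.0)  # -> 2*exp(-0.5) ~ 1.213
```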
def MCfunction(bins0, na, mu_opd, sig_opd, *args):
"""
    Monte-Carlo function for the NSC method. It fits polychromatic data to give one achromatic null depth.
    This function runs on the GPU with cupy (https://cupy.dev/).
    The arguments of the function are the parameters to fit.
    The other values required to calculate the synthetic sequences of null depths are imported
    via global variables listed at the beginning of the function.
    The verbose output of this function displays the input parameters to track the convergence of the fitting algorithm.
    Global variables include quantities necessary to fit the histograms and intermediate products to control the efficiency of the fit.
    The function loops over the spectral channels to create independent histograms for each of them, and loops over the Monte-Carlo samples to avoid saturating the memory.
    For the latter, the histograms are added together then averaged to get the normalisation to 1.
:Parameters:
**bins0**: array
Bins of the histogram of the null depth to fit
**na**: float
Astrophysical null depth, quantity of interest.
**mu_opd**: float
Instrumental OPD of the considered baseline.
**sig_opd**: float
Sigma of the normal distribution describing the fluctuations of the OPD.
        **args**: optional
            Array, used to fit the parameters ``mu`` and ``sig`` of the normal distribution of Ir (https://ui.adsabs.harvard.edu/abs/2011ApJ...729..110H/abstract) if the antinull output is not used. ``args`` can be either a 2-element array or a :math:`2 \times n_{channels}`-element array (with :math:`n_{channels}` the number of spectral channels), depending on whether a unique Ir or one per spectral channel is fitted.
:Returns:
**accum**: array
Histogram of the synthetic sequences of null depth.
:TODO:
Implement the input of **args**.
"""
    # Imported into the function
global data_IA_axis, cdf_data_IA, data_IB_axis, cdf_data_IB, spectra # On GPU
global zeta_minus_A, zeta_minus_B, zeta_plus_A, zeta_plus_B, data_IA, data_IB
global dark_Iminus_cdf, dark_Iminus_axis, dark_Iplus_cdf, dark_Iplus_axis # On GPU
global spec_chan_width, activate_use_photometry, linear_model, activate_use_antinull
# generated by the function
global rv_IA, rv_IB, rv_opd, rv_dark_Iminus, rv_dark_Iplus, rv_null, rv_interfminus, rv_interfplus, interfminus, interfplus # On GPU
global phase_bias, bins, dphase_bias
global interfminus_binned, interfplus_binned
# Parameters of the MC process
global n_samp_per_loop, count, wl_scale0, nloop, number_of_Ir
global oversampling_switch, switch_invert_null
global rv_IA_list
# Test and diagnostic
global Iplus, liste_rv_interfminus, liste_rv_interfplus, liste_rv_dark_Iminus, liste_rv_dark_Iplus
global injection_mean, injection_corrected_std
    if not activate_use_antinull and len(args) == 0:
        raise Exception('NoAntinull mode requires the inputs mu_Ir and sig_Ir to work')
    count += 1
    text_input = (int(count), na, mu_opd, sig_opd, *args)
    print(text_input)
accum = cp.zeros((bins0.shape[0], bins0.shape[1]-1), dtype=cp.float32) # Axes: spectral channel, number of bins
if wl_scale0.size > 1:
spec_chan_width = abs(np.mean(np.diff(wl_scale0)))
else:
spec_chan_width = 5
''' Number of samples to simulate is high and the memory is low so we iterate to create an average histogram '''
for _ in range(nloop):
liste_rv_interfminus = cp.zeros((wl_scale0.size, n_samp_per_loop), dtype=cp.float32)
liste_rv_interfplus = cp.zeros((wl_scale0.size, n_samp_per_loop), dtype=cp.float32)
liste_rv_dark_Iminus = cp.zeros((wl_scale0.size, n_samp_per_loop), dtype=cp.float32)
liste_rv_dark_Iplus = cp.zeros((wl_scale0.size, n_samp_per_loop), dtype=cp.float32)
rv_opd = cp.random.normal(mu_opd, sig_opd, n_samp_per_loop)
rv_opd = rv_opd.astype(cp.float32)
# rv_opd = skewnorm.rvs(skew, loc=mu_opd, scale=sig_opd, size=n_samp_per_loop) # Skewd Gaussian distribution
# rv_opd = cp.asarray(rv_opd, dtype=cp.float32) # Load the random values into the graphic card
if activate_spectral_binning:
interfminus_binned, interfplus_binned = cp.zeros(n_samp_per_loop, dtype=cp.float32), cp.zeros(n_samp_per_loop, dtype=cp.float32)
# Generate random values of injection
if not activate_use_photometry:
rv_injectionA = gff.rv_generator(data_IA_axis, cdf_data_IA, n_samp_per_loop) # random values for photometry A
rv_injectionB = gff.rv_generator(data_IB_axis, cdf_data_IB, n_samp_per_loop) # random values for photometry B
# rv_injectionA = cp.random.normal(injection_mean[0], injection_corrected_std[0], size=n_samp_per_loop, dtype=cp.float32)
# rv_injectionB = cp.random.normal(injection_mean[1], injection_corrected_std[1], size=n_samp_per_loop, dtype=cp.float32)
for k in range(wl_scale0.size): # Iterate over the wavelength axis
# bin_width = cp.mean(cp.diff(bins), dtype=cp.float32)
# random values for dark noise
rv_dark_Iminus = gff.rv_generator(dark_Iminus_axis[k], dark_Iminus_cdf[k], n_samp_per_loop)
rv_dark_Iminus = rv_dark_Iminus.astype(cp.float32)
rv_dark_Iplus = gff.rv_generator(dark_Iplus_axis[k], dark_Iplus_cdf[k], n_samp_per_loop)
rv_dark_Iplus = rv_dark_Iplus.astype(cp.float32)
# rv_dark_Iminus -= rv_dark_Iminus.mean()
# rv_dark_Iplus -= rv_dark_Iplus.mean()
if len(args) == 2*wl_scale0.size:
mu_Ir = args[k]
sig_Ir = args[wl_scale0.size+k]
rv_Ir = cp.random.normal(mu_Ir, sig_Ir, n_samp_per_loop)
elif len(args) == 2:
mu_Ir = args[0]
sig_Ir = args[1]
rv_Ir = cp.random.normal(mu_Ir, sig_Ir, n_samp_per_loop)
if not activate_use_photometry:
rv_IA = rv_injectionA * spectra[0][k]
rv_IB = rv_injectionB * spectra[1][k]
else:
rv_IA = cp.asarray(data_IA, dtype=cp.float32)
rv_IB = cp.asarray(data_IB, dtype=cp.float32)
# Computing synthetic null depths
if activate_use_antinull:
if linear_model:
rv_null, rv_interfminus, rv_interfplus = gff.computeNullDepthLinear(na, rv_IA, rv_IB, wl_scale0[k], rv_opd, phase_bias, dphase_bias, rv_dark_Iminus, rv_dark_Iplus,
zeta_minus_A[k], zeta_minus_B[k], zeta_plus_A[k], zeta_plus_B[k],
spec_chan_width, oversampling_switch, switch_invert_null)
else:
rv_null, rv_interfminus, rv_interfplus = gff.computeNullDepth(na, rv_IA, rv_IB, wl_scale0[k], rv_opd, phase_bias, dphase_bias, rv_dark_Iminus, rv_dark_Iplus,
zeta_minus_A[k], zeta_minus_B[k], zeta_plus_A[k], zeta_plus_B[k],
spec_chan_width, oversampling_switch, switch_invert_null)
if activate_spectral_binning:
interfminus_binned += rv_interfminus
interfplus_binned += rv_interfplus
else:
rv_null = rv_null[~np.isnan(rv_null)] # Remove NaNs
rv_null = cp.sort(rv_null)
''' Compute the average histogram over the nloops iterations '''
bins = cp.asarray(bins0[k], dtype=cp.float32)
pdf_null = cp.histogram(rv_null, bins)[0]
accum[k] += pdf_null / cp.sum(pdf_null)
''' Save some debug stuff '''
liste_rv_interfminus[k] = rv_interfminus
liste_rv_interfplus[k] = rv_interfplus
liste_rv_dark_Iminus[k] = rv_dark_Iminus
liste_rv_dark_Iplus[k] = rv_dark_Iplus
else:
rv_interfminus, rv_interfplus = gff.computeNullDepthNoAntinull(rv_IA, rv_IB, wl_scale0[k], rv_opd, rv_dark_Iminus, rv_dark_Iplus,
zeta_minus_A[k], zeta_minus_B[k], zeta_plus_A[k], zeta_plus_B[k],
spec_chan_width, oversampling_switch, switch_invert_null)
''' Save some debug stuff '''
liste_rv_interfminus[k] = rv_interfminus
liste_rv_interfplus[k] = rv_interfplus
liste_rv_dark_Iminus[k] = rv_dark_Iminus
liste_rv_dark_Iplus[k] = rv_dark_Iplus
if activate_spectral_binning:
interfminus_binned += rv_interfminus
interfplus_binned += rv_interfplus
else:
if switch_invert_null:
rv_null = rv_interfplus / rv_interfminus
else:
rv_null = rv_interfminus / rv_interfplus
rv_null = rv_Ir * (rv_null + na)
rv_null = rv_null[~np.isnan(rv_null)] # Remove NaNs
rv_null = cp.sort(rv_null)
''' Compute the average histogram over the nloops iterations '''
bins = cp.asarray(bins0[k], dtype=cp.float32)
pdf_null = cp.histogram(rv_null, bins)[0]
accum[k] += pdf_null / cp.sum(pdf_null)
        # Inverting the fake sequences if the null and antinull outputs are swapped.
        # Non-spectrally-binned sequences are already swapped in the "computeNullDepth" function above
if activate_spectral_binning:
# interfminus_binned = interfminus_binned / wl_scale0.size
# interfplus_binned = interfplus_binned / wl_scale0.size
if switch_invert_null:
rv_null = interfplus_binned / interfminus_binned
else:
rv_null = interfminus_binned / interfplus_binned
if linear_model or not activate_use_antinull:
rv_null = rv_null + na
if not activate_use_antinull:
rv_null = rv_null * rv_Ir
rv_null = rv_null[~np.isnan(rv_null)] # Remove NaNs
rv_null = cp.sort(rv_null)
pdf_null = cp.histogram(rv_null, cp.asarray(bins0[0], dtype=cp.float32))[0]
accum[0] += pdf_null / cp.sum(pdf_null)
accum = accum / nloop
if cp.all(cp.isnan(accum)):
accum[:] = 0
accum = cp.asnumpy(accum)
return accum.ravel()
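# ``MCfunction`` draws its random photometry and dark values through
# ``gff.rv_generator``, whose implementation lives in the library file. Assuming
# it performs inverse-transform sampling from an empirical CDF (the name and
# sketch below are illustrative, not the real GLINT code), a NumPy version is:

```python
import numpy as np

def rv_generator_sketch(axis, cdf, n_samp, rng=None):
    # Inverse-transform sampling: draw uniforms in [0, 1) and map them back
    # onto the data axis through the (monotonic) empirical CDF.
    rng = rng or np.random.default_rng()
    u = rng.random(n_samp)
    return np.interp(u, cdf, axis)

# Example: re-sample an (approximately) standard normal from its empirical CDF.
rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 50_000)
axis = np.sort(data)
cdf = np.arange(1, axis.size + 1) / axis.size
sample = rv_generator_sketch(axis, cdf, 10_000, rng)
```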
class Logger(object):
"""
    Class duplicating the console output into a txt log file.
"""
def __init__(self, log_path):
self.orig_stdout = sys.stdout
self.terminal = sys.stdout
self.log = open(log_path, "a")
def write(self, message):
self.terminal.write(message)
self.log.write(message)
def flush(self):
#this flush method is needed for python 3 compatibility.
#this handles the flush command by doing nothing.
#you might want to specify some extra behavior here.
pass
def close(self):
sys.stdout = self.orig_stdout
self.log.close()
print('Stdout closed')
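# Typical use of ``Logger``: redirect stdout at the start of a run and restore
# it at the end, so every print lands both on screen and in the log file
# (the log path below is an illustrative temp file, not the pipeline's path):

```python
import sys
import os
import tempfile

class Logger(object):
    """Duplicate console output into a log file (same pattern as above)."""
    def __init__(self, log_path):
        self.orig_stdout = sys.stdout
        self.terminal = sys.stdout
        self.log = open(log_path, "a")
    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)
    def flush(self):
        pass
    def close(self):
        sys.stdout = self.orig_stdout
        self.log.close()

log_path = os.path.join(tempfile.gettempdir(), "glint_demo.log")
sys.stdout = Logger(log_path)
print("basin hopping iteration 0")   # goes to both console and file
sys.stdout.close()
```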
def basin_hoppin_values(mu_opd0, sig_opd0, na0, bounds_mu, bounds_sig, bounds_na):
"""
    Create as many initial guesses as there are basin hopping iterations to do.
    These values are drawn from normal distributions.
    :Parameters:
        **mu_opd0**: float
            Instrumental OPD around which random initial guesses are created.
        **sig_opd0**: float
            Standard deviation of the OPD around which random initial guesses are created.
        **na0**: float
            Astrophysical null depth around which random initial guesses are created.
**bounds_mu**: 2-tuple
Lower and upper bounds between which the random values of ``mu_opd`` must be.
**bounds_sig**: 2-tuple
Lower and upper bounds between which the random values of ``sig_opd`` must be.
**bounds_na**: 2-tuple
Lower and upper bounds between which the random values of ``na`` must be.
:Returns:
**mu_opd**: float
New initial guess for ``mu_opd``.
**sig_opd**: float
New initial guess for ``sig_opd``.
**na**: float
New initial guess for ``na``.
"""
# orig_seed = np.random.get_state()
# np.random.seed(1)
    print('Random drawing of init guesses')
for _ in range(1000):
mu_opd = np.random.normal(mu_opd0, 20)
if mu_opd > bounds_mu[0] and mu_opd < bounds_mu[1]:
break
if _ == 1000-1:
print('mu_opd: no new guess, take initial one')
mu_opd = mu_opd0
for _ in range(1000):
sig_opd = abs(np.random.normal(sig_opd0, 20))
if sig_opd > bounds_sig[0] and sig_opd < bounds_sig[1]:
break
if _ == 1000-1:
print('sig opd: no new guess, take initial one')
sig_opd = sig_opd0
for _ in range(1000):
na = np.random.normal(na0, 0.03)
if na > bounds_na[0] and na < bounds_na[1]:
break
if _ == 1000-1:
print('na: no new guess, take initial one')
na = na0
print('Random drawing done')
# np.random.set_state(orig_seed)
return mu_opd, sig_opd, na
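# The rejection pattern used above, in isolation: redraw from a normal until
# the value lands inside the bounds, falling back to the seed value otherwise.
# ``draw_in_bounds`` is an illustrative helper, not part of the pipeline:

```python
import numpy as np

def draw_in_bounds(center, scale, bounds, n_try=1000, rng=None):
    # Rejection sampling: keep the first draw strictly inside `bounds`;
    # fall back to `center` if no draw succeeds (mirrors basin_hoppin_values).
    rng = rng or np.random.default_rng()
    for _ in range(n_try):
        value = rng.normal(center, scale)
        if bounds[0] < value < bounds[1]:
            return value
    return center

mu_opd_guess = draw_in_bounds(400.0, 20.0, (0.0, 1000.0))
na_guess = draw_in_bounds(0.01, 0.03, (-0.1, 0.1))
```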
if __name__ == '__main__':
''' Settings '''
linear_model = False
activate_use_antinull = True
activate_select_frames = False
activate_random_init_guesses = True
activate_spectral_binning = False
activate_use_photometry = False
activate_dark_correction = False
activate_save_basic_esti = False
activate_frame_sorting = True
activate_preview_only = False
nb_frames_sorting_binning = 100
activate_time_binning_photometry = True
nb_frames_binning_photometry = 100
global_binning = 1
    wl_min = 1525 # lower bound of the bandwidth to process
    wl_max = 1575 # upper bound of the bandwidth to process
wl_mid = (wl_max + wl_min)/2 # Centre wavelength of the bandwidth
n_samp_total = int(1e+8)
n_samp_per_loop = int(1e+7) # number of samples per loop
nloop = n_samp_total // n_samp_per_loop
phase_bias_switch = True # Implement a non-null achromatic phase in the null model
opd_bias_switch = True # Implement an offset OPD in the null model
zeta_switch = True # Use the measured zeta coeff. If False, value are set to 0.5
oversampling_switch = True # Include the loss of coherence when OPD is too far from 0, according to a sinc envelop
skip_fit = False # Do not fit, plot the histograms of data and model given the initial guesses
chi2_map_switch = False
# Map the parameters space over astronull, DeltaPhi mu and sigma
nb_files_data = (0, 4000) # Which data files to load
nb_files_dark = (0, 4000) # Which dark files to load
    basin_hopping_nloop = (20, 30) # lower and upper bound of the iteration loop for basin hopping method
which_nulls = ['null1', 'null4', 'null5', 'null6'][1:2]
map_na_sz = 10
map_mu_sz = 200
map_sig_sz = 10
config = prepareConfig()
nulls_to_invert = config['nulls_to_invert'] # If one null and antinull outputs are swapped in the data processing
nulls_to_invert_model = config['nulls_to_invert_model'] # If one null and antinull outputs are swapped in the data processing
''' Set the bounds of the parameters to fit '''
bounds_mu0 = config['bounds_mu0'] # bounds for DeltaPhi mu, one tuple per null
bounds_sig0 = config['bounds_sig0'] # bounds for DeltaPhi sig
bounds_na0 = config['bounds_na0'] # bounds for astronull
diffstep = config['diffstep'] # differential step to apply to the TRF fitting algorithm, used for computing the finite difference
xscale = config['xscale'] # scale factor of the parameters to fit, see least_squares doc for more details
    bin_bounds0 = config['bin_bounds0'] # Boundaries of the histogram, to be set manually after checking the histogram shape with "skip_fit = True"
''' Set the initial conditions '''
mu_opd0 = config['mu_opd0'] # initial guess of DeltaPhi mu for the 6 baselines
sig_opd0 = config['sig_opd0'] # initial guess of DeltaPhi sig for the 6 baselines
na0 = config['na0'] # initial guess of astro null for the 6 baselines
''' Import real data '''
    datafolder = config['datafolder']
    darkfolder = config['darkfolder']
root = config['root']
file_path = config['file_path']
save_path = config['save_path']
data_list = config['data_list'][nb_files_data[0]:nb_files_data[1]]
dark_list = config['dark_list'][nb_files_dark[0]:nb_files_dark[1]]
calib_params_path = config['calib_params_path']
zeta_coeff_path = config['zeta_coeff_path']
factor_minus0, factor_plus0 = config['factor_minus0'], config['factor_plus0']
# =============================================================================
# Rock 'n roll
# =============================================================================
print('Loaded configuration for %s, at the date %s'%(config['starname'], config['date']))
if len(data_list) == 0 or len(dark_list) == 0:
raise UserWarning('data list or dark list is empty')
if not os.path.exists(save_path):
os.makedirs(save_path)
''' Some constants/useless variables kept for retrocompatibility '''
dphase_bias = 0. # constant value for corrective phase term
phase_bias = 0. # Phase of the fringes
    # Lists in dictionary: index of the null output, indexes of beams A and B, zeta label for the antinull output
null_table = {'null1':[0,[0,1], 'null7'], 'null2':[1,[1,2], 'null8'], 'null3':[2,[0,3], 'null9'], \
'null4':[3,[2,3], 'null10'], 'null5':[4,[2,0], 'null11'], 'null6':[5,[3,1], 'null12']}
''' Specific settings for some configurations '''
if chi2_map_switch:
n_samp_per_loop = int(1e+5) # number of samples per loop
nloop = 1
basin_hopping_nloop = (basin_hopping_nloop[0], basin_hopping_nloop[0]+1)
if skip_fit:
basin_hopping_nloop = (basin_hopping_nloop[0], basin_hopping_nloop[0]+1)
''' Fool-proof '''
if not chi2_map_switch and not skip_fit:
check_mu = np.any(mu_opd0 <= np.array(bounds_mu0)[:,0]) or np.any(mu_opd0 >= np.array(bounds_mu0)[:,1])
check_sig = np.any(sig_opd0 <= np.array(bounds_sig0)[:,0]) or np.any(sig_opd0 >= np.array(bounds_sig0)[:,1])
check_null = np.any(na0 <= np.array(bounds_na0)[:,0]) or np.any(na0 >= np.array(bounds_na0)[:,1])
if check_mu or check_sig or check_null:
raise Exception('Check boundaries: the initial guesses (marked as True) are not between the boundaries (null:%s, mu:%s, sig:%s).'%(check_null, check_mu, check_sig))
total_time_start = time()
for key in which_nulls: # Iterate over the null to fit
# =============================================================================
# Import data
# =============================================================================
print('****************')
print('Processing %s \n'%key)
# plt.ioff()
start_loading = time()
if key in nulls_to_invert_model:
switch_invert_null = True
else:
switch_invert_null = False
''' Pick up the correct indexes for loading data'''
idx_null = null_table[key][0] # Select the index of the null output to process
idx_photo = null_table[key][1] # Select the indexes of the concerned photometries
key_antinull = null_table[key][2] # Select the index of the antinull output to process
''' Load data about the null to fit '''
dark = gff.load_data(dark_list, (wl_min, wl_max), key, nulls_to_invert, frame_binning=global_binning)
data = gff.load_data(data_list, (wl_min, wl_max), key, nulls_to_invert, dark, frame_binning=global_binning)
# data0 = gff.load_data(data_list, (wl_min, wl_max), key, nulls_to_invert, dark)
stop_loading = time()
if activate_frame_sorting or activate_preview_only:
data, idx_good_frames = gff.sortFrames(data, nb_frames_sorting_binning, 0.1, factor_minus0[idx_null], factor_plus0[idx_null], key, plot=activate_preview_only, save_path=save_path)
if activate_preview_only:
continue
        wl_scale0 = data['wl_scale'] # Wavelength axis. One histogram per value in this array will be created. The set of histograms will be fitted at once.
        ''' Load the zeta coeff we need. If the "wl_bounds" key is passed, the returned zeta coeffs are the average over the bandwidth set by the tuple of this key '''
zeta_coeff = gff.get_zeta_coeff(zeta_coeff_path, wl_scale0, False)
if not zeta_switch:
for zkey in zeta_coeff.keys():
if zkey != 'wl_scale':
zeta_coeff[zkey][:] = 1.
        ''' Get histograms of intensities and dark current in the pair of photometry outputs '''
        data_IA, data_IB = data['photo'][0], data['photo'][1] # Set photometries into dedicated variables for clarity. A and B are the generic ids of the beams for the processed baseline
dark_IA, dark_IB = dark['photo'][0], dark['photo'][1]
zeta_minus_A, zeta_minus_B = zeta_coeff['b%s%s'%(idx_photo[0]+1, key)], zeta_coeff['b%s%s'%(idx_photo[1]+1, key)] # Set zeta coeff linking null and photometric outputs into dedicated variables for clarity
zeta_plus_A, zeta_plus_B = zeta_coeff['b%s%s'%(idx_photo[0]+1, key_antinull)], zeta_coeff['b%s%s'%(idx_photo[1]+1, key_antinull)] # Set zeta coeff linking antinull and photometric outputs into dedicated variables for clarity
# =============================================================================
# Get CDF of dark
# =============================================================================
''' Get CDF of dark currents in interferometric outputs for generating random values in the MC function '''
dark_size = [len(np.linspace(dark['Iminus'][i].min(), dark['Iminus'][i].max(), \
np.size(np.unique(dark['Iminus'][i])), endpoint=False)) for i in range(len(wl_scale0))]
dark_Iminus_axis = cp.array([np.linspace(dark['Iminus'][i].min(), dark['Iminus'][i].max(), \
min(dark_size), endpoint=False) for i in range(len(wl_scale0))], \
dtype=cp.float32)
dark_Iminus_cdf = cp.array([cp.asnumpy(gff.computeCdf(dark_Iminus_axis[i], dark['Iminus'][i], 'cdf', True)) \
for i in range(len(wl_scale0))], dtype=cp.float32)
        dark_size = [len(np.linspace(dark['Iplus'][i].min(), dark['Iplus'][i].max(), \
                                     np.size(np.unique(dark['Iplus'][i])), endpoint=False)) for i in range(len(wl_scale0))] # NB: uses Iplus here (the previous Iminus looked like a copy-paste slip)
dark_Iplus_axis = cp.array([np.linspace(dark['Iplus'][i].min(), dark['Iplus'][i].max(), \
min(dark_size), endpoint=False) for i in range(len(wl_scale0))], \
dtype=cp.float32)
dark_Iplus_cdf = cp.array([cp.asnumpy(gff.computeCdf(dark_Iplus_axis[i], dark['Iplus'][i], 'cdf', True)) \
for i in range(len(wl_scale0))], dtype=cp.float32)
# =============================================================================
# Get the values of injection and spectrum
# =============================================================================
''' Handling photometry: either get the CDF for rv generation or just keep the temporal sequence '''
injection, spectra = gff.getInjectionAndSpectrum(data_IA, data_IB, wl_scale0)
injection = np.array(injection)
injectionbefore = injection.copy()
injection_dark, spectra_dark = gff.getInjectionAndSpectrum(dark_IA, dark_IB, wl_scale0)
injection_dark = np.array(injection_dark)
if activate_use_photometry:
n_samp_per_loop = data_IA.shape[1]
nloop = max((n_samp_total // n_samp_per_loop, 1))
# =============================================================================
# Compute the null
# =============================================================================
if activate_spectral_binning:
Iminus, dummy = gff.binning(data['Iminus'], data['Iminus'].shape[0], 0, False)
Iplus, dummy = gff.binning(data['Iplus'], data['Iplus'].shape[0], 0, False)
wl_scale, dummy = gff.binning(wl_scale0, wl_scale0.size, 0, True)
Iminus_dk, dummy = gff.binning(dark['Iminus'], dark['Iminus'].shape[0], 0, False)
Iplus_dk, dummy = gff.binning(dark['Iplus'], dark['Iplus'].shape[0], 0, False)
else:
Iminus = data['Iminus']
Iplus = data['Iplus']
wl_scale = wl_scale0
Iminus_dk = dark['Iminus']
Iplus_dk = dark['Iplus']
if activate_use_antinull:
if key in nulls_to_invert:
data_null = Iplus / Iminus
else:
data_null = Iminus / Iplus
else:
IA = data_IA.copy()
IB = data_IB.copy()
# IA = injection[0][None,:] * spectra[0][:,None]
# IB = injection[1][None,:] * spectra[1][:,None]
IA[IA<=0] = 0
IB[IB<=0] = 0
if key in nulls_to_invert:
Iminus = IA*zeta_plus_A[:,None] + IB*zeta_plus_B[:,None] + 2 * np.sqrt(IA * IB) * np.sqrt(zeta_plus_A[:,None]*zeta_plus_B[:,None])
data_null = Iplus / Iminus
else:
Iplus = IA*zeta_minus_A[:,None] + IB*zeta_minus_B[:,None] + 2 * np.sqrt(IA * IB) * np.sqrt(zeta_minus_A[:,None]*zeta_minus_B[:,None])
data_null = Iminus / Iplus
# =============================================================================
# Compute the histogram
# =============================================================================
print('Compute survival function and error bars')
data_null0 = data_null.copy()
bin_bounds = bin_bounds0[idx_null]
if activate_select_frames:
mask_null = np.array([(d>=bin_bounds[0])&(d<=bin_bounds[1]) for d in data_null])
mask_frames = np.unique(np.where(~mask_null)[1]) # Frames in which the spectral channels are not all simultaneously in range
mask_null[:, mask_frames] = False # Reject frames with at least one spectral channel whose null depth lies outside the studied range
else:
mask_null = np.ones_like(data_null, dtype=bool)
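The frame selection above keeps only frames in which every spectral channel has a null depth inside the bounds. On toy data (illustrative numbers, not GLINT output) the same logic reads:

```python
import numpy as np

# Toy null depths: 2 spectral channels x 4 frames (illustrative values)
data_null = np.array([[0.10, 0.50, 0.20, 0.30],
                      [0.20, 0.10, 0.90, 0.40]])
lo, hi = 0.0, 0.6

mask = (data_null >= lo) & (data_null <= hi)  # per-sample in-range flag
bad_frames = np.unique(np.where(~mask)[1])    # frames with any out-of-range channel
mask[:, bad_frames] = False                   # reject those frames in every channel
```

Here only frame 2 is rejected: its second channel (0.90) falls outside the bounds, so the whole frame is dropped in all channels.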
sz_list = np.array([np.size(d[(d>=bin_bounds[0])&(d<=bin_bounds[1])]) for d in data_null]) # Size of the sample of measured null depths, per spectral channel
sz = np.max(sz_list)
sz = 1000**2 # Hard-coded override: forces int(sz**0.5+1) = 1001 bin edges regardless of the measured sample size
''' Creation of the x-axis of the histogram (one per wavelength)'''
data_null = np.array([data_null[k][mask_null[k]] for k in range(data_null.shape[0])])
# sz = data_null.shape[1]
null_axis = np.array([np.linspace(bin_bounds[0], bin_bounds[1], int(sz**0.5+1), retstep=False, dtype=np.float32) for i in range(data_null.shape[0])])
null_pdf = []
null_pdf_err = []
for wl in range(len(wl_scale)): # Compute the histogram per spectral channel
''' Create the histogram (one per wavelength) '''
pdf = np.histogram(data_null[wl], null_axis[wl], density=False)[0]
pdf_size = np.sum(pdf)
bin_width = null_axis[wl][1]-null_axis[wl][0]
pdf = pdf / np.sum(pdf)
null_pdf.append(pdf)
# pdf_err = gff.getErrorPDF(data_null[wl], data_null_err[wl], null_axis[wl]) # Barnaby's method, tends to underestimate the error
pdf_err = gff.getErrorBinomNorm(pdf, pdf_size) # Classic method
null_pdf_err.append(pdf_err)
null_pdf = np.array(null_pdf)
null_pdf_err = np.array(null_pdf_err)
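`gff.getErrorBinomNorm` is assumed here to return the binomial standard error of each normalized bin, sqrt(p(1-p)/N); if the actual helper differs, treat the following as an illustrative stand-in for the per-channel histogram loop above:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(0.2, 0.05, 10_000)  # toy null-depth sample
edges = np.linspace(0.0, 0.4, 41)       # 41 edges -> 40 bins

counts = np.histogram(sample, edges, density=False)[0]
n_tot = counts.sum()
pdf = counts / n_tot                           # normalized frequencies, as above
pdf_err = np.sqrt(pdf * (1.0 - pdf) / n_tot)   # assumed binomial error per bin
```

Empty bins get a zero error under this form, which is why well-populated bins dominate the chi2 of the later fit.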
# =============================================================================
# Basic estimation for classic calibration
# =============================================================================
if activate_save_basic_esti:
data_null2 = [d[d>=0] for d in data_null]
rms = [d.std() for d in data_null2]
data_null3 = [data_null2[k][(data_null2[k]>=data_null2[k].min())&(data_null2[k]<=data_null2[k].min()+rms[k])] for k in range(len(data_null2))]
avg = np.array([np.mean(elt) for elt in data_null3])
std = np.array([np.std(elt) for elt in data_null3])
basic_esti = np.array([avg, std, wl_scale])
basic_save = save_path+key+'_basic_esti'+'_'+str(wl_min)+'-'+str(wl_max)+'_files_'+str(nb_files_data[0])+'_'+str(nb_files_data[1])+'_'+os.path.basename(datafolder[:-1])
if activate_spectral_binning:
basic_save = basic_save+'_sp'
np.save(basic_save, basic_esti)
continue
# =============================================================================
# Cropping photometries, Iminus and Iplus to the kept values of null depths
# =============================================================================
if activate_spectral_binning:
mask_null2 = np.tile(mask_null, (data_IA.shape[0],1))
else:
mask_null2 = mask_null
data_IA = np.array([data_IA[k][mask_null2[k]] for k in range(data_IA.shape[0])])
data_IB = np.array([data_IB[k][mask_null2[k]] for k in range(data_IB.shape[0])])
Iminus = np.array([Iminus[k][mask_null2[k]] for k in range(Iminus.shape[0])])
Iplus = np.array([Iplus[k][mask_null2[k]] for k in range(Iplus.shape[0])])
injection, dummy = gff.getInjectionAndSpectrum(data_IA, data_IB, wl_scale0)
injection = np.array(injection)
if activate_spectral_binning:
data_IA = data_IA.sum(0).reshape((1,-1))
data_IB = data_IB.sum(0).reshape((1,-1))
# =============================================================================
# Get the distribution of the photometries and select values in the range of null depths
# =============================================================================
''' Handling photometry: either get the CDF for rv generation or just keep the temporal sequence '''
if activate_time_binning_photometry:
if nb_frames_binning_photometry < 0:
injection = injectionbefore.mean(axis=-1).reshape(2,-1)
else:
injection, dummy = gff.binning(injection, nb_frames_binning_photometry, axis=1, avg=True)
injection_dark, dummy = gff.binning(injection_dark, nb_frames_binning_photometry, axis=1, avg=True)
if activate_dark_correction and not activate_use_photometry:
injection_saved = injection.copy()
# mean_data, var_data = np.mean(injection, axis=-1), np.var(injection, axis=-1)
# mean_dark, var_dark = np.mean(injection_dark, axis=-1), np.var(injection_dark, axis=-1)
# injection = (injection - mean_data[:,None]) * \
# ((var_data[:,None]-var_dark[:,None])/var_data[:,None])**0.5 + mean_data[:,None] - mean_dark[:,None]
# if np.any(np.isnan(injection)) or np.any(np.isinf(injection)):
# print('Restore injection')
# injection = injection_saved.copy()
histo_injectionA = list(np.histogram(injection[0], int(injection[0].size**0.5)+1, density=True))
histo_injectionB = list(np.histogram(injection[1], int(injection[1].size**0.5)+1, density=True))
abscA = histo_injectionA[1][:-1] + np.diff(histo_injectionA[1])/2
abscB = histo_injectionB[1][:-1] + np.diff(histo_injectionB[1])/2
poptA, pcovA = curve_fit(gaussian, abscA, histo_injectionA[0], p0=[np.max(histo_injectionA[0]), abscA[np.argmax(histo_injectionA[0])], 100])
poptB, pcovB = curve_fit(gaussian, abscB, histo_injectionB[0], p0=[np.max(histo_injectionB[0]), abscB[np.argmax(histo_injectionB[0])], 100])
injection_saved = injection.copy()
injection_mean = np.array([poptA[1], poptB[1]])
injection_var = np.array([poptA[2]**2, poptB[2]**2])
# injection_mean = injection.mean(axis=1)
# injection_var = injection.var(axis=1)
injection_dk_var = injection_dark.var(axis=1)
injection_corrected_std = (injection_var - injection_dk_var)**0.5
injection = np.array([np.random.normal(injection_mean[k], injection_corrected_std[k], injection.shape[1]) for k in range(injection.shape[0])])
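The dark correction above fits a Gaussian to the injection histogram, subtracts the dark variance in quadrature, and redraws samples with the corrected width. A self-contained sketch with synthetic numbers (the `gaussian` model mirrors the three-parameter one fitted in the script):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sig):
    return amp * np.exp(-(x - mu)**2 / (2 * sig**2))

rng = np.random.default_rng(1)
inj = rng.normal(100.0, 12.0, 50_000)    # measured injection (signal + dark noise)
inj_dark = rng.normal(0.0, 5.0, 50_000)  # dark-only sequence

counts, edges = np.histogram(inj, int(inj.size**0.5) + 1, density=True)
absc = edges[:-1] + np.diff(edges) / 2
popt, _ = curve_fit(gaussian, absc, counts,
                    p0=[counts.max(), absc[np.argmax(counts)], inj.std()])

mu_fit, var_fit = popt[1], popt[2]**2
std_corr = np.sqrt(var_fit - inj_dark.var())  # remove dark variance in quadrature
resampled = rng.normal(mu_fit, std_corr, inj.size)
```

With these numbers the corrected width comes out near sqrt(12**2 - 5**2) ~ 10.9, i.e. the injection fluctuation with the dark contribution removed.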
data_IA_axis = cp.linspace(injection[0].min(), injection[0].max(), np.size(np.unique(injection[0])), dtype=cp.float32)
cdf_data_IA = gff.computeCdf(data_IA_axis, injection[0], 'cdf', True)
cdf_data_IA = cp.array(cdf_data_IA, dtype=cp.float32)
data_IB_axis = cp.linspace(injection[1].min(), injection[1].max(), np.size(np.unique(injection[1])), dtype=cp.float32)
cdf_data_IB = gff.computeCdf(data_IB_axis, injection[1], 'cdf', True)
cdf_data_IB = cp.array(cdf_data_IB, dtype=cp.float32)
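The CDFs built above feed the random draws of photometry inside the model. A CPU-only sketch of the same inverse-transform idea with NumPy in place of CuPy (`gff.computeCdf` itself is GPU code and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)
sample = rng.exponential(2.0, 20_000)  # toy photometry sequence

# Empirical CDF evaluated on a regular axis, as in the script
axis = np.linspace(sample.min(), sample.max(), np.unique(sample).size)
cdf = np.searchsorted(np.sort(sample), axis, side='right') / sample.size

# Inverse-transform sampling: map uniform deviates through the inverse CDF
u = rng.uniform(cdf[0], cdf[-1], 10_000)
draws = np.interp(u, cdf, axis)
```

The drawn values reproduce the empirical distribution of the original sequence, which is what the Monte-Carlo model relies on.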
# =============================================================================
# Section where the fit is done.
# =============================================================================
''' Select the bounds of the baseline (null) to process '''
bounds_mu = bounds_mu0[idx_null]
bounds_sig = bounds_sig0[idx_null]
bounds_na = bounds_na0[idx_null]
# Compile them into a readable tuple called by the TRF algorithm
bounds_fit = ([bounds_na[0], bounds_mu[0], bounds_sig[0]],
[bounds_na[1], bounds_mu[1], bounds_sig[1]])
if not activate_use_antinull:
number_of_Ir = 1 # wl_scale0.size
bounds_fit = ([bounds_na[0], bounds_mu[0], bounds_sig[0]]+[0]*number_of_Ir+[1e-6]*number_of_Ir,
[bounds_na[1], bounds_mu[1], bounds_sig[1]]+[10]*number_of_Ir+[10]*number_of_Ir)
diffstep = diffstep + [0.1]*number_of_Ir+[0.1]*number_of_Ir
xscale = np.append(xscale, [[1]*number_of_Ir, [1]*number_of_Ir])
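`gff.curvefit` is assumed to wrap a bounded trust-region (TRF) least-squares fit; `scipy.optimize.curve_fit` accepts the same `bounds`, `diff_step` and `x_scale` keywords (forwarded to `least_squares` when `method='trf'`). A toy three-parameter example, not the actual null-depth model:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, na, mu, sig):  # toy model with the same parameter count as (na, mu, sig)
    return na + mu * np.exp(-x**2 / (2 * sig**2))

rng = np.random.default_rng(3)
x = np.linspace(-5, 5, 200)
y = model(x, 0.01, 2.0, 1.5) + rng.normal(0.0, 0.01, x.size)

bounds_fit = ([0.0, 0.0, 0.1], [1.0, 10.0, 5.0])  # ([lows], [highs]), as in the script
popt, pcov = curve_fit(model, x, y, p0=[0.1, 1.0, 1.0],
                       bounds=bounds_fit, method='trf',
                       diff_step=[1e-3]*3, x_scale=[0.1, 1.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on the fitted parameters
```

`diff_step` sets the relative step of the finite-difference Jacobian and `x_scale` rescales the parameters so the trust region treats them comparably, which matters when na, mu and sig differ by orders of magnitude.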
wl_scale_saved = wl_scale.copy()
for idx_basin, basin_hopping_count in enumerate(range(basin_hopping_nloop[0], basin_hopping_nloop[1])):
if not skip_fit and not chi2_map_switch:
plt.close('all')
if not chi2_map_switch:
sys.stdout = Logger(save_path+'%s_%03d_basin_hop.log'%(key, basin_hopping_count)) # Save the content written in the console into a txt file
chi2_liste = [] # Save the reduced Chi2 of the different basin hop
popt_liste = [] # Save the optimal parameters of the different basin hop
uncertainties_liste = [] # Save the errors on fitted parameters of the different basin hop
init_liste = [] # Save the initial guesses of the different basin hop
pcov_liste = [] # Save the covariance matrix given by the fitting algorithm of the different basin hop
termination_liste = [] # Save the termination condition of the different basin hop
print('-------------')
print(basin_hopping_count)
print('-------------')
print('Fitting '+key)
print('%s-%02d-%02d %02dh%02d'%(datetime.now().year, datetime.now().month, datetime.now().day, datetime.now().hour, datetime.now().minute))
print('Spectral bandwidth (in nm):%s,%s'%(wl_min, wl_max))
print('Spectral binning %s'%activate_spectral_binning)
print('Time binning of photometry %s (%s)'%(activate_time_binning_photometry, nb_frames_binning_photometry))
print('Bin bounds and number of bins %s %s'%(bin_bounds, int(sz**0.5)))
print('Boundaries (na, mu, sig):')
print(str(bounds_na)+str(bounds_mu)+str(bounds_sig))
print('Number of elements ', n_samp_total)
print('Number of loaded points', data_null.shape[1], nb_files_data)
print('Number of loaded dark points', dark['Iminus'].shape[1], nb_files_dark)
print('Global binning of frames:', global_binning)
print('Zeta file', os.path.basename(config['zeta_coeff_path']))
print('Time to load files: %.3f s'%(stop_loading - start_loading))
print('activate_frame_sorting', activate_frame_sorting)
print('activate_preview_only', activate_preview_only)
print('factor_minus, factor_plus', factor_minus0[idx_null], factor_plus0[idx_null])
print('nb_frames_sorting_binning', nb_frames_sorting_binning)
print('')
# model fitting initial guess
''' Create the set of initial guess for each hop '''
if idx_basin > 0 and activate_random_init_guesses:
mu_opd, sig_opd, na = basin_hoppin_values(mu_opd0[idx_null], sig_opd0[idx_null], na0[idx_null], bounds_mu, bounds_sig, bounds_na)
else:
mu_opd = mu_opd0[idx_null]
sig_opd = sig_opd0[idx_null]
na = na0[idx_null]
start_basin_hoppin = True
''' Model fitting '''
if not chi2_map_switch:
guess_na = na
initial_guess = [guess_na, mu_opd, sig_opd]
if not activate_use_antinull:
initial_guess = [guess_na, mu_opd, sig_opd] + [1]*number_of_Ir + [1]*number_of_Ir
initial_guess = np.array(initial_guess, dtype=np.float64)
if skip_fit:
''' No fit is performed here; just load the values of the initial guess and compute the histogram '''
print('Direct display')
count = 0.
start = time()
out = MCfunction(null_axis, *initial_guess)
stop = time()
print('Duration:', stop-start)
# out = z.reshape(null_cdf.shape)
na_opt = na
uncertainties = np.zeros(3)
popt = (np.array([na, mu_opd, sig_opd]), np.ones((3,3)))
chi2 = 1/(null_pdf.size-popt[0].size) * np.sum((null_pdf.ravel() - out)**2/null_pdf_err.ravel()**2)
term_status = None
print('chi2', chi2)
else:
''' Fit is done here '''
print('Model fitting')
count = 0.
init_liste.append(initial_guess)
start = time()
''' Save the content of the console generated by this function into a txt file'''
popt = gff.curvefit(MCfunction, null_axis, null_pdf.ravel(), p0=initial_guess, sigma=null_pdf_err.ravel(),
bounds=bounds_fit, diff_step = diffstep, x_scale=xscale)
res = popt[2] # All outputs of the fitting function except the optimal parameters and covariance matrix (see scipy.optimize.least_squares doc)
popt = popt[:2] # Optimal parameters found
print('Termination:', res.message) # Termination condition
term_status = res.status # Value associated by the termination condition
stop = time()
print('Termination', term_status)
print('Duration:', stop - start)
out = MCfunction(null_axis, *popt[0]) # Histogram computed from the optimal parameters
uncertainties = np.diag(popt[1])**0.5 # Errors on the optimal parameters
chi2 = 1/(null_pdf.size-popt[0].size) * np.sum((null_pdf.ravel() - out)**2/null_pdf_err.ravel()**2) # Reduced Chi2
print('chi2', chi2)
''' Display in an easy-to-read way this key information (optimal parameters, error and reduced Chi2) '''
na_opt = popt[0][0]
print('******')
print(popt[0])
print(uncertainties*chi2**0.5)
print(chi2)
print('******')
''' Save input and the outputs of the fit into a npz file. One per basin hop '''
np.savez(save_path+'%s_%03d_'%(key, basin_hopping_count)+os.path.basename(file_path[:-1]),
chi2=chi2, popt=[na_opt]+[elt for elt in popt[0][1:]], uncertainties=uncertainties, init=[guess_na]+list(initial_guess[1:]),
termination=np.array([term_status]), nsamp=np.array([n_samp_per_loop]), wl=wl_scale)
chi2_liste.append(chi2)
popt_liste.append([na_opt]+[elt for elt in popt[0][1:]])
uncertainties_liste.append(uncertainties)
termination_liste.append(term_status)
pcov_liste.append(popt[1])
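The reduced chi2 scored for each hop above is the error-weighted sum of squared residuals divided by the degrees of freedom (number of histogram points minus number of fitted parameters). A minimal sketch of that expression:

```python
import numpy as np

def reduced_chi2(obs, fit, err, n_params):
    """Error-weighted sum of squared residuals over the degrees of freedom."""
    dof = obs.size - n_params
    return np.sum((obs - fit)**2 / err**2) / dof

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
fit = np.array([1.1, 1.9, 3.0, 4.2, 4.8])
err = np.full(5, 0.1)
chi2_r = reduced_chi2(obs, fit, err, n_params=2)  # -> 10/3 for these numbers
```

A value near 1 means the model matches the data within the stated error bars; much larger values flag either a poor model or underestimated errors.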
''' Display the results of the fit for publishing purpose'''
def update_label(old_label, exponent_text):
if exponent_text == "":
return old_label
try:
units = old_label[old_label.index("(") + 1:old_label.rindex(")")]
except ValueError:
units = ""
label = old_label.replace("({})".format(units), "")
exponent_text = exponent_text.replace("\\times", "")
if units != "":
return "{} ({} {})".format(label, exponent_text, units)
else:
return "{} ({})".format(label, exponent_text)
def format_label_string_with_exponent(ax, save_path, axis='both'):
""" Format the label string with the exponent from the ScalarFormatter """
ax.ticklabel_format(axis=axis, style='sci', useMathText=True, scilimits=(0,0))
axes_instances = []
if axis in ['x', 'both']:
axes_instances.append(ax.xaxis)
if axis in ['y', 'both']:
axes_instances.append(ax.yaxis)
for axe in axes_instances:
# axe.major.formatter._useMathText = True
plt.draw() # Update the text
plt.savefig(save_path+'deleteme.png') # Had to add it otherwise the figure is not updated
exponent_text = axe.get_offset_text().get_text()
label = axe.get_label().get_text()
axe.offsetText.set_visible(False)
axe.set_label_text(update_label(label, exponent_text))
nb_rows_plot = (wl_scale.size//2) + (wl_scale.size%2)
wl_idx0 = np.arange(wl_scale.size)[::-1]
wl_idx0 = list(gff.divide_chunks(wl_idx0, nb_rows_plot*2)) # Subsets of wavelengths, each subset displayed in one figure
labelsz = 28
for wl_idx in wl_idx0:
# f = plt.figure(figsize=(19.20,3.60*nb_rows_plot/0.95))
f = plt.figure(figsize=(19.20,3.60*nb_rows_plot))
# txt3 = '%s '%key+'Fitted values: ' + 'Na$ = %.2E \pm %.2E$, '%(na_opt, uncertainties[0]) + \
# r'$\mu_{OPD} = %.2E \pm %.2E$ nm, '%(popt[0][1], uncertainties[1]) + '\n'+\
# r'$\sigma_{OPD} = %.2E \pm %.2E$ nm,'%(popt[0][2], uncertainties[2])+' Chi2 = %.2E '%(chi2)+'(Duration = %.3f s)'%(stop-start)
count = 0
axs = []
for wl in wl_idx:
if len(wl_idx) > 1:
ax = f.add_subplot(nb_rows_plot,2,count+1)
else:
ax = f.add_subplot(1,1,count+1)
axs.append(ax)
ax.ticklabel_format(axis='y', style='sci', useMathText=True, scilimits=(0,0))
# ax.set_title('%.0f nm'%(wl_scale[wl]), size=35)
ax.errorbar(null_axis[wl][:-1], null_pdf[wl], yerr=null_pdf_err[wl], fmt='.', markersize=20, label='Data')
ax.errorbar(null_axis[wl][:-1], out.reshape((wl_scale.size,-1))[wl], markersize=5, lw=7, alpha=0.8, label='Fit')
ax.grid()
# plt.legend(loc='best', fontsize=25)
if list(wl_idx).index(wl) >= len(wl_idx)-2 or len(wl_idx) == 1:
ax.set_xlabel('Null depth', size=labelsz)
if count %2 == 0:
ax.set_ylabel('Frequency', size=labelsz)
plt.savefig('deleteme.png')
exponent_text = ax.yaxis.get_offset_text().get_text()
label = ax.yaxis.get_label().get_text()
ax.yaxis.offsetText.set_visible(False)
ax.yaxis.set_label_text(update_label(label, exponent_text))
else:
ax.yaxis.offsetText.set_visible(False)
ax.set_xticks(ax.get_xticks()[::2]) # size=30
ax.tick_params(axis='both', labelsize=labelsz-2)
exponent = np.floor(np.log10(null_pdf.max()))
ylim = null_pdf.max() * 10**(-exponent)
ax.set_ylim(-10**(exponent)/10, null_pdf.max()*1.05)
ax.text(0.7, 0.8, '%.0f nm'%(wl_scale[wl]), va='center', transform = ax.transAxes, fontsize=labelsz)
count += 1
plt.tight_layout()
string = key+'_'+'%03d'%(basin_hopping_count)+'_'+str(wl_min)+'-'+str(wl_max)+'_'+os.path.basename(datafolder[:-1])+'_%s'%int(np.around(wl_scale[wl_idx[-1]]))
if not oversampling_switch: string = string + '_nooversamplinginmodel'
if not zeta_switch: string = string + '_nozetainmodel'
if not skip_fit:
string = string + '_fit_pdf'
if activate_spectral_binning: string = string + '_sb'
plt.savefig(save_path+string+'_compiled.png', dpi=300)
plt.savefig(save_path+string+'_compiled.pdf', format='pdf')
plt.close()
''' Display for archive and monitoring purpose '''
nb_rows_plot = 3
wl_idx0 = np.arange(wl_scale.size)[::-1]
wl_idx0 = list(gff.divide_chunks(wl_idx0, nb_rows_plot*2)) # Subsets of wavelengths, each subset displayed in one figure
for wl_idx in wl_idx0:
f = plt.figure(figsize=(19.20,3.60*nb_rows_plot))
txt3 = '%s '%key+'Fitted values: ' + r'Na$ = %.2E \pm %.2E$, '%(na_opt, uncertainties[0]) + \
r'$\mu_{OPD} = %.2E \pm %.2E$ nm, '%(popt[0][1], uncertainties[1]) + '\n'+\
r'$\sigma_{OPD} = %.2E \pm %.2E$ nm,'%(popt[0][2], uncertainties[2])+' Chi2 = %.2E '%(chi2)+'(Duration = %.3f s)'%(stop-start)
# txt3 = '%s '%key+'Fitted values: ' + 'Na$ = %.2E \pm %.2E$, '%(na_opt, uncertainties[0]) +' Chi2 = %.2E '%(chi2)+'(Last = %.3f s)'%(stop-start)
count = 0
axs = []
for wl in wl_idx:
if len(wl_idx) > 1:
ax = f.add_subplot(nb_rows_plot,2,count+1)
else:
ax = f.add_subplot(1,1,count+1)
axs.append(ax)
plt.title('%.0f nm'%wl_scale[wl], size=30)
plt.errorbar(null_axis[wl][:-1], null_pdf[wl], yerr=null_pdf_err[wl], fmt='.', markersize=10, label='Data')
plt.errorbar(null_axis[wl][:-1], out.reshape((wl_scale.size,-1))[wl], markersize=5, lw=5, alpha=0.8, label='Fit')
plt.grid()