Pipeline_Offline.py
In each epoch, why do you compute the loss and gradient on only one minibatch instead of iterating over the whole dataset? I also noticed that the variable Batch_Optimizing_LOSS_sum is never used, and I didn't see a loop like "for y_training, train_target in train_data" that training code normally has.
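To make the question concrete, here is a minimal sketch (plain Python, no framework, all names and data invented for illustration) of the loop structure I expected: every epoch visits every minibatch, and a running sum like Batch_Optimizing_LOSS_sum accumulates the per-batch losses.

```python
def make_minibatches(xs, ys, batch_size):
    """Split the dataset into consecutive minibatches."""
    return [(xs[i:i + batch_size], ys[i:i + batch_size])
            for i in range(0, len(xs), batch_size)]

def train(xs, ys, epochs=50, lr=0.1, batch_size=2):
    w = 0.0  # single weight of a toy model y = w * x
    for epoch in range(epochs):
        batch_loss_sum = 0.0  # analogue of Batch_Optimizing_LOSS_sum
        # inner loop: one gradient step per minibatch, covering the FULL dataset
        for x_batch, y_batch in make_minibatches(xs, ys, batch_size):
            n = len(x_batch)
            # mean squared error and its gradient w.r.t. w for this batch
            loss = sum((w * x - y) ** 2 for x, y in zip(x_batch, y_batch)) / n
            grad = sum(2 * (w * x - y) * x for x, y in zip(x_batch, y_batch)) / n
            w -= lr * grad          # update once per minibatch
            batch_loss_sum += loss  # accumulate across all batches of the epoch
    return w

# Data generated by y = 3 * x, so w should converge close to 3
xs = [0.0, 1.0, 2.0, 3.0]
ys = [3 * x for x in xs]
print(train(xs, ys))
```

In contrast, if only a single minibatch is drawn per epoch, the rest of the dataset is never seen that epoch, which is what I'm asking about.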
Thank you