diff --git a/Codes/Deeplab_network/LICENSE b/Codes/Deeplab_network/LICENSE new file mode 100644 index 0000000..9de37c7 --- /dev/null +++ b/Codes/Deeplab_network/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2017 Zhengyang Wang + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. \ No newline at end of file diff --git a/Codes/Deeplab_network/README.md b/Codes/Deeplab_network/README.md new file mode 100644 index 0000000..4a7dac3 --- /dev/null +++ b/Codes/Deeplab_network/README.md @@ -0,0 +1,154 @@ +# Deeplab v2 ResNet for Semantic Image Segmentation + +This is a (re-)implementation of [DeepLab v2 (ResNet-101)](http://liangchiehchen.com/projects/DeepLabv2_resnet.html) in TensorFlow for semantic image segmentation on the [PASCAL VOC 2012 dataset](http://host.robots.ox.ac.uk/pascal/VOC/). It builds on [DrSleep's implementation](https://github.com/DrSleep/tensorflow-deeplab-resnet) (many thanks!).
We do not use Caffe-to-TensorFlow conversion packages such as kaffe, so you only need TensorFlow 1.3.0+ to run this code. + +The DeepLab pre-trained ResNet-101 ckpt files (pre-trained on MSCOCO) are provided by DrSleep -- [here](https://drive.google.com/drive/folders/0B_rootXHuswsZ0E4Mjh1ZU5xZVU). Thanks again! + +Created by [Zhengyang Wang](http://www.eecs.wsu.edu/~zwang6/) and [Shuiwang Ji](http://www.eecs.wsu.edu/~sji/) at Washington State University. + +## Update + +**12/13/2017**: + +* The test code now outputs the mIoU as well as the IoU for each class. + +**12/12/2017**: + +* Added a 'predict' function; you can now use '--option=predict' to save your outputs (both the raw prediction, where each pixel value is between 0 and 20, and a visual one, where each class has its own color). + +* Added multi-scale training, testing, and prediction. Check main_msc.py and model_msc.py and use them just as you would main.py and model.py. + +* Added plot_training_curve.py, which uses log.txt to plot training curves. + +* This is now a 'full' (re-)implementation of [DeepLab v2 (ResNet-101)](http://liangchiehchen.com/projects/DeepLabv2_resnet.html) in TensorFlow. Thank you for the support. You are welcome to report your settings and results, as well as any bugs! + +**11/09/2017**: + +* The new version supports the original ImageNet pre-trained ResNet models (without pre-training on MSCOCO). Change the arguments 'encoder_name' and 'pretrain_file' in main.py to use the corresponding pre-trained models. The original pre-trained ResNet ckpt files are provided officially by TensorFlow -- [res101](http://download.tensorflow.org/models/resnet_v1_101_2016_08_28.tar.gz) and [res50](http://download.tensorflow.org/models/resnet_v1_50_2016_08_28.tar.gz). + +* To help those who want to use this model on the CityScapes dataset, I shared the corresponding txt files and the python file which generates them.
Note that you first need to use the tools [here](https://github.com/mcordts/cityscapesScripts) to generate labels with trainID. Hope they are helpful. Do not forget to change IMG_MEAN in model.py and other settings in main.py. + +* The 'is_training' argument has been removed and 'self._batch_norm' has changed. Basically, for a small batch size it is better to keep the statistics of the BN layers (running means and variances) frozen, i.e., not to update the values provided by the pre-trained model, which is done by setting 'is_training=False'. Note that is_training=False still updates the BN parameters gamma (scale) and beta (offset) if they are present in the var_list of the optimiser definition. Set 'trainable=False' in the BN functions to remove them from trainable_variables. + +* Added a 'phase' argument in network.py for future development. 'phase=True' means training. It mainly controls batch normalization (if any) in the non-pre-trained part. +``` +Example: If you have a batch normalization layer in the decoder, you should use + +outputs = self._batch_norm(inputs, name='g_bn1', is_training=self.phase, activation_fn=tf.nn.relu, trainable=True) +``` +* Some changes to make the code more readable and easier to modify for future research. + +* I plan to add a 'predict' function to enable saving predicted results for offline evaluation, post-processing, etc. + +## System requirement + +#### Programming language +``` +Python 3.5 +``` +#### Python Packages +``` +tensorflow-gpu 1.3.0 +``` +## Configure the network + +All network hyperparameters are configured in main.py.
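For reference, the 'poly' learning rate policy controlled by learning_rate, power, and num_steps can be sketched as follows. This is the standard formula used by DeepLab, written out as a hypothetical helper rather than copied from this repo's code:

```python
def poly_learning_rate(base_lr, step, num_steps, power=0.9):
    # 'poly' policy: decay smoothly from base_lr at step 0 to 0 at num_steps.
    return base_lr * (1 - step / num_steps) ** power

# e.g. with the defaults base_lr=2.5e-4, power=0.9, num_steps=100000:
lrs = [poly_learning_rate(2.5e-4, s, 100000) for s in (0, 50000, 100000)]
```

With power close to 1 the decay is nearly linear; smaller values keep the learning rate higher for longer before dropping off.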
+ +#### Training +``` +num_steps: how many iterations to train + +save_interval: save the model every this many steps + +random_seed: random seed for TensorFlow + +weight_decay: l2 regularization parameter + +learning_rate: initial learning rate + +power: parameter for the poly learning rate policy + +momentum: momentum + +encoder_name: name of pre-trained model, res101, res50 or deeplab + +pretrain_file: the initial pre-trained model file for transfer learning + +data_list: training data list file + +grad_update_every (msc only): accumulate the gradients for this many steps before updating the weights. Note that in the msc case, this is the effective training batch size. +``` +#### Testing/Validation +``` +valid_step: checkpoint number for testing/validation + +valid_num_steps: number of testing/validation samples + +valid_data_list: testing/validation data list file +``` +#### Data +``` +data_dir: data directory + +batch_size: training batch size + +input_height: height of input image + +input_width: width of input image + +num_classes: number of classes + +ignore_label: label pixel value that should be ignored + +random_scale: whether to perform random scaling data-augmentation + +random_mirror: whether to perform random left-right flipping data-augmentation +``` +#### Log +``` +modeldir: where to store saved models + +logfile: where to store the training log + +logdir: where to store logs for TensorBoard +``` +## Training and Testing + +#### Start training + +After configuring the network, we can start training. Run +``` +python main.py +``` +The training of Deeplab v2 ResNet will start. + +#### Training process visualization + +We employ TensorBoard for visualization. + +``` +tensorboard --logdir=log --port=6006 +``` + +You may visualize the graph of the model and (training images + ground truth labels + predicted labels). + +To visualize the training loss curve, write your own script to make use of the training log.
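A minimal sketch of such a script is shown below. It only parses (step, loss) pairs; the line format (`step <int> ... loss <float>`) is a hypothetical example, so adjust the pattern to whatever your log.txt actually contains:

```python
import re

def parse_training_log(lines):
    """Extract (step, loss) pairs from training-log lines.

    Assumes each relevant line contains 'step <int>' and 'loss <float>';
    adjust the pattern to match your actual log format.
    """
    pattern = re.compile(r'step\s+(\d+).*?loss[\s=:]+([0-9.eE+-]+)')
    pairs = []
    for line in lines:
        m = pattern.search(line)
        if m:
            pairs.append((int(m.group(1)), float(m.group(2))))
    return pairs

log_lines = [
    'step 100 loss 0.8532',
    'saving checkpoint ...',   # non-matching lines are skipped
    'step 200 loss 0.6411',
]
steps_losses = parse_training_log(log_lines)
```

The resulting pairs can be fed to any plotting tool, e.g. matplotlib's `plt.plot(*zip(*steps_losses))`.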
+ +#### Testing and prediction + +Select a checkpoint to test/validate your model in terms of pixel accuracy and mean IoU. + +Set valid_step in main.py to the checkpoint number you want to test. Change valid_num_steps and valid_data_list accordingly. Run + +``` +python main.py --option=test +``` + +The final output includes pixel accuracy and mean IoU. + +Run + +``` +python main.py --option=predict +``` +The outputs will be saved in the 'output' folder. \ No newline at end of file diff --git a/Codes/Deeplab_network/__pycache__/model.cpython-34.pyc b/Codes/Deeplab_network/__pycache__/model.cpython-34.pyc new file mode 100644 index 0000000..046f5dd Binary files /dev/null and b/Codes/Deeplab_network/__pycache__/model.cpython-34.pyc differ diff --git a/Codes/Deeplab_network/__pycache__/model.cpython-35.pyc b/Codes/Deeplab_network/__pycache__/model.cpython-35.pyc new file mode 100644 index 0000000..20b5a59 Binary files /dev/null and b/Codes/Deeplab_network/__pycache__/model.cpython-35.pyc differ diff --git a/Codes/Deeplab_network/__pycache__/network.cpython-34.pyc b/Codes/Deeplab_network/__pycache__/network.cpython-34.pyc new file mode 100644 index 0000000..0461179 Binary files /dev/null and b/Codes/Deeplab_network/__pycache__/network.cpython-34.pyc differ diff --git a/Codes/Deeplab_network/__pycache__/network.cpython-35.pyc b/Codes/Deeplab_network/__pycache__/network.cpython-35.pyc new file mode 100644 index 0000000..ef5f9ed Binary files /dev/null and b/Codes/Deeplab_network/__pycache__/network.cpython-35.pyc differ diff --git a/Codes/Deeplab_network/main.py b/Codes/Deeplab_network/main.py new file mode 100644 index 0000000..ba240d3 --- /dev/null +++ b/Codes/Deeplab_network/main.py @@ -0,0 +1,137 @@ +import argparse +import os +import tensorflow as tf +from model import Model + + + +""" +This script defines hyperparameters.
+""" + + + +def configure(test_data_list_, out_dir_, test_step_, test_num_steps_, modeldir_, data_dir_, num_steps_, + save_interval_, learning_rate_, pretrain_file_, data_list_, batch_size_, input_height_, input_width_, + num_classes_, print_color_, log_dir_, log_file_,encoder_name_): + flags = tf.app.flags + + # training + flags.DEFINE_integer('num_steps', num_steps_, 'maximum number of iterations') + flags.DEFINE_integer('save_interval', save_interval_, 'number of iterations for saving and visualization') + flags.DEFINE_integer('random_seed', 1234, 'random seed') + flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') + flags.DEFINE_float('learning_rate', learning_rate_, 'learning rate') + flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') + flags.DEFINE_float('momentum', 0.9, 'momentum') + flags.DEFINE_string('encoder_name', encoder_name_, 'name of pre-trained model, res101, res50 or deeplab') + flags.DEFINE_string('pretrain_file', pretrain_file_, 'pre-trained model filename corresponding to encoder_name') + flags.DEFINE_string('data_list', data_list_, 'training data list filename') + + # validation + flags.DEFINE_integer('valid_step', 217000, 'checkpoint number for validation') + flags.DEFINE_integer('valid_num_steps', 81605, '= number of validation samples') + flags.DEFINE_string('valid_data_list', './dataAugment/val.txt', 'validation data list filename') + + # prediction / saving outputs for testing or validation + flags.DEFINE_string('out_dir', out_dir_, 'directory for saving outputs') + flags.DEFINE_integer('test_step', test_step_, 'checkpoint number for testing/validation') + flags.DEFINE_integer('test_num_steps', test_num_steps_, '= number of testing/validation samples') + flags.DEFINE_string('test_data_list', test_data_list_, 'testing/validation data list filename') + flags.DEFINE_boolean('visual', False, 'whether to save predictions for visualization') + + # data + flags.DEFINE_string('data_dir', data_dir_, 'data 
directory') + flags.DEFINE_integer('batch_size', batch_size_, 'training batch size') + flags.DEFINE_integer('input_height', input_height_, 'input image height') + flags.DEFINE_integer('input_width', input_width_, 'input image width') + flags.DEFINE_integer('num_classes', num_classes_, 'number of classes') + flags.DEFINE_integer('ignore_label', 255, 'label pixel value that should be ignored') + flags.DEFINE_boolean('random_scale', False, 'whether to perform random scaling data-augmentation') + flags.DEFINE_boolean('random_mirror', False, 'whether to perform random left-right flipping data-augmentation') + + # log + flags.DEFINE_string('modeldir', modeldir_, 'model directory') + flags.DEFINE_string('logfile', log_file_, 'training log filename') + flags.DEFINE_string('logdir', log_dir_, 'training log directory') + + # text color + flags.DEFINE_string('print_color', print_color_, 'color of printed outputs') + + + + flags.FLAGS.__dict__['__parsed'] = False + return flags.FLAGS + +def main(_): + if args.option not in ['train', 'test', 'predict']: + print('invalid option: ', args.option) + print("Please input an option: train, test, or predict") + else: + # Set up tf session and initialize variables.
+ # config = tf.ConfigProto() + # config.gpu_options.allow_growth = True + # sess = tf.Session(config=config) + sess = tf.Session() + # Run + model = Model(sess, configure(test_data_list_=args.test_data_list, out_dir_=args.out_dir, test_step_=args.test_step, + test_num_steps_=args.test_num_steps, modeldir_=args.modeldir, data_dir_=args.data_dir, + num_steps_=args.num_steps, save_interval_=args.save_interval, learning_rate_=args.learning_rate, + pretrain_file_=args.pretrain_file, data_list_=args.data_list, batch_size_=args.batch_size, + input_height_=args.input_height, input_width_=args.input_width, num_classes_=args.num_classes, + print_color_=args.print_color, log_dir_=args.log_dir, log_file_=args.log_file, encoder_name_=args.encoder_name)) + getattr(model, args.option)() + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + + parser.add_argument('--option', dest='option', type=str, default='train', + help='actions: train, test, or predict') + parser.add_argument('--test_data_list', dest='test_data_list', type=str, default='./dataset/test.txt', + help='testing/validation data list filename') + parser.add_argument('--out_dir', dest='out_dir', type=str, default='output', + help='directory for saving testing outputs') + parser.add_argument('--test_step', dest='test_step', type=int, default=350000, + help='checkpoint number for testing/validation') + parser.add_argument('--test_num_steps', dest='test_num_steps', type=int, default=81605, + help='number of testing/validation samples') + parser.add_argument('--modeldir', dest='modeldir', type=str, default='modelAugment', + help='model directory') + parser.add_argument('--data_dir', dest='data_dir', type=str, default='/hdd/wsi_fun/ImageAugCustom/AugmentationOutput', + help='data directory') + parser.add_argument('--gpu', dest='gpu', type=str, default='0', + help='specify which GPU to use') + parser.add_argument('--num_steps', dest='num_steps', type=int, default=100000, + help='maximum number of 
iterations') + parser.add_argument('--save_interval', dest='save_interval', type=int, default=15000, + help='number of iterations for saving and visualization') + parser.add_argument('--learning_rate', dest='learning_rate', type=float, default=2.5e-4, + help='learning rate') + parser.add_argument('--pretrain_file', dest='pretrain_file', type=str, default='deeplab_resnet.ckpt', + help='pre-trained model filename corresponding to encoder_name') + parser.add_argument('--data_list', dest='data_list', type=str, default='./dataAugment/train.txt', + help='training data list filename') + parser.add_argument('--batch_size', dest='batch_size', type=int, default=15, + help='training batch size') + parser.add_argument('--input_height', dest='input_height', type=int, default=256, + help='input image height') + parser.add_argument('--input_width', dest='input_width', type=int, default=256, + help='input image width') + parser.add_argument('--num_classes', dest='num_classes', type=int, default=2, + help='number of classes in images') + parser.add_argument('--log_dir', dest='log_dir', type=str, default="log", + help='directory for saving log files') + parser.add_argument('--log_file', dest='log_file', type=str, default="log.txt", + help='training log filename') + parser.add_argument('--print_color', dest='print_color', type=str, default="\033[0;37;40m", + help='color of printed text') + parser.add_argument('--encoder_name', dest='encoder_name', type=str, default=" ", + help='name of pre-trained model, res101, res50 or deeplab') + + + args = parser.parse_args() + + # Choose which gpu or cpu to use + os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu + tf.app.run() diff --git a/Codes/Deeplab_network/main.py~ b/Codes/Deeplab_network/main.py~ new file mode 100644 index 0000000..c503f45 --- /dev/null +++ b/Codes/Deeplab_network/main.py~ @@ -0,0 +1,82 @@ +import argparse +import os +import tensorflow as tf +from model import Model + + + +""" +This script defines hyperparameters.
+""" + + + +def configure(): + flags = tf.app.flags + + # training + flags.DEFINE_integer('num_steps', 250000, 'maximum number of iterations') + flags.DEFINE_integer('save_interval', 5000, 'number of iterations for saving and visualization') + flags.DEFINE_integer('random_seed', 1234, 'random seed') + flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') + flags.DEFINE_float('learning_rate', 2.5e-4, 'learning rate') + flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') + flags.DEFINE_float('momentum', 0.9, 'momentum') + flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') + flags.DEFINE_string('pretrain_file', 'model3/', 'pre-trained model filename corresponding to encoder_name') + flags.DEFINE_string('data_list', './dataset/train.txt', 'training data list filename') + + # validation + flags.DEFINE_integer('valid_step', 200000, 'checkpoint number for validation') + flags.DEFINE_integer('valid_num_steps', 32659, '= number of validation samples') + flags.DEFINE_string('valid_data_list', './dataset/val.txt', 'validation data list filename') + + # prediction / saving outputs for testing or validation + flags.DEFINE_string('out_dir', 'output', 'directory for saving outputs') + flags.DEFINE_integer('test_step', 60000, 'checkpoint number for testing/validation') + flags.DEFINE_integer('test_num_steps', 48209, '= number of testing/validation samples') + flags.DEFINE_string('test_data_list', './dataset/test.txt', 'testing/validation data list filename') + flags.DEFINE_boolean('visual', True, 'whether to save predictions for visualization') + + # data + flags.DEFINE_string('data_dir', '/hdd/wsi_fun/wsi_data/wsi_blocks/', 'data directory') + flags.DEFINE_integer('batch_size', 15, 'training batch size') + flags.DEFINE_integer('input_height', 256, 'input image height') + flags.DEFINE_integer('input_width', 256, 'input image width') + flags.DEFINE_integer('num_classes', 2, 'number of classes') + 
flags.DEFINE_integer('ignore_label', 254, 'label pixel value that should be ignored') + flags.DEFINE_boolean('random_scale', True, 'whether to perform random scaling data-augmentation') + flags.DEFINE_boolean('random_mirror', True, 'whether to perform random left-right flipping data-augmentation') + + # log + flags.DEFINE_string('modeldir', 'model3', 'model directory') + flags.DEFINE_string('logfile', 'log.txt', 'training log filename') + flags.DEFINE_string('logdir', 'log', 'training log directory') + + flags.FLAGS.__dict__['__parsed'] = False + return flags.FLAGS + +def main(_): + parser = argparse.ArgumentParser() + parser.add_argument('--option', dest='option', type=str, default='train', + help='actions: train, test, or predict') + args = parser.parse_args() + + if args.option not in ['train', 'test', 'predict']: + print('invalid option: ', args.option) + print("Please input a option: train, test, or predict") + else: + # Set up tf session and initialize variables. + # config = tf.ConfigProto() + # config.gpu_options.allow_growth = True + # sess = tf.Session(config=config) + sess = tf.Session() + # Run + model = Model(sess, configure()) + getattr(model, args.option)() + + +if __name__ == '__main__': + # Choose which gpu or cpu to use + os.environ['CUDA_VISIBLE_DEVICES'] = '0' + tf.app.run() diff --git a/Codes/Deeplab_network/mainT.py b/Codes/Deeplab_network/mainT.py new file mode 100644 index 0000000..739fa50 --- /dev/null +++ b/Codes/Deeplab_network/mainT.py @@ -0,0 +1,98 @@ +import argparse +import os +import tensorflow as tf +from model import Model + + + +""" +This script defines hyperparameters. 
+""" + + + +def configure(test_data_list_, out_dir_, test_step_, test_num_steps_, modeldir_, data_dir_): + flags = tf.app.flags + + # training + flags.DEFINE_integer('num_steps', 350000, 'maximum number of iterations') + flags.DEFINE_integer('save_interval', 5000, 'number of iterations for saving and visualization') + flags.DEFINE_integer('random_seed', 1234, 'random seed') + flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') + flags.DEFINE_float('learning_rate', 2.5e-4, 'learning rate') + flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') + flags.DEFINE_float('momentum', 0.9, 'momentum') + flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') + flags.DEFINE_string('pretrain_file', './modelAugment/model.ckpt-18000', 'pre-trained model filename corresponding to encoder_name') + flags.DEFINE_string('data_list', './dataset/train.txt', 'training data list filename') + + # validation + flags.DEFINE_integer('valid_step', 18000, 'checkpoint number for validation') + flags.DEFINE_integer('valid_num_steps', 32659, '= number of validation samples') + flags.DEFINE_string('valid_data_list', './dataset/val.txt', 'validation data list filename') + + # prediction / saving outputs for testing or validation + flags.DEFINE_string('out_dir', out_dir_, 'directory for saving outputs') + flags.DEFINE_integer('test_step', test_step_, 'checkpoint number for testing/validation') + flags.DEFINE_integer('test_num_steps', test_num_steps_, '= number of testing/validation samples') + flags.DEFINE_string('test_data_list', test_data_list_, 'testing/validation data list filename') + flags.DEFINE_boolean('visual', True, 'whether to save predictions for visualization') + + # data + flags.DEFINE_string('data_dir', data_dir_, 'data directory') + flags.DEFINE_integer('batch_size', 15, 'training batch size') + flags.DEFINE_integer('input_height', 256, 'input image height') + flags.DEFINE_integer('input_width', 256, 
'input image width') + flags.DEFINE_integer('num_classes', 2, 'number of classes') + flags.DEFINE_integer('ignore_label', 254, 'label pixel value that should be ignored') + flags.DEFINE_boolean('random_scale', False, 'whether to perform random scaling data-augmentation') + flags.DEFINE_boolean('random_mirror', False, 'whether to perform random left-right flipping data-augmentation') + + # log + flags.DEFINE_string('modeldir', modeldir_, 'model directory') + flags.DEFINE_string('logfile', 'log.txt', 'training log filename') + flags.DEFINE_string('logdir', 'log', 'training log directory') + + flags.FLAGS.__dict__['__parsed'] = False + return flags.FLAGS + +def main(_): + if args.option not in ['train', 'test', 'predict']: + print('invalid option: ', args.option) + print("Please input an option: train, test, or predict") + else: + # Set up tf session and initialize variables. + # config = tf.ConfigProto() + # config.gpu_options.allow_growth = True + # sess = tf.Session(config=config) + sess = tf.Session() + # Run + model = Model(sess, configure(test_data_list_=args.test_data_list, out_dir_=args.out_dir, test_step_=args.test_step, test_num_steps_=args.test_num_steps, modeldir_=args.modeldir, data_dir_=args.data_dir)) + getattr(model, args.option)() + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + + parser.add_argument('--option', dest='option', type=str, default='predict', + help='actions: train, test, or predict') + parser.add_argument('--test_data_list', dest='test_data_list', type=str, default='./dataset/test.txt', + help='testing/validation data list filename') + parser.add_argument('--out_dir', dest='out_dir', type=str, default='outputAugmentTest', + help='directory for saving testing outputs') + parser.add_argument('--test_step', dest='test_step', type=int, default=18000, + help='checkpoint number for testing/validation') + parser.add_argument('--test_num_steps', dest='test_num_steps', type=int, default=32659, + help='number of
testing/validation samples') + parser.add_argument('--modeldir', dest='modeldir', type=str, default='modelAugment', + help='model directory') + parser.add_argument('--data_dir', dest='data_dir', type=str, default='/hdd/wsi_fun/wsi_data/boundary_blocks/for_training', + help='data directory') + parser.add_argument('--gpu', dest='gpu', type=str, default='1', + help='specify which GPU to use') + + args = parser.parse_args() + + # Choose which gpu or cpu to use + os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu + tf.app.run() diff --git a/Codes/Deeplab_network/mainT.py~ b/Codes/Deeplab_network/mainT.py~ new file mode 100644 index 0000000..c955a7b --- /dev/null +++ b/Codes/Deeplab_network/mainT.py~ @@ -0,0 +1,98 @@ +import argparse +import os +import tensorflow as tf +from model import Model + + + +""" +This script defines hyperparameters. +""" + + + +def configure(test_data_list_, out_dir_, test_step_, test_num_steps_, modeldir_, data_dir_): + flags = tf.app.flags + + # training + flags.DEFINE_integer('num_steps', 350000, 'maximum number of iterations') + flags.DEFINE_integer('save_interval', 5000, 'number of iterations for saving and visualization') + flags.DEFINE_integer('random_seed', 1234, 'random seed') + flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') + flags.DEFINE_float('learning_rate', 2.5e-4, 'learning rate') + flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') + flags.DEFINE_float('momentum', 0.9, 'momentum') + flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') + flags.DEFINE_string('pretrain_file', './modelAugment/model.ckpt-217000', 'pre-trained model filename corresponding to encoder_name') + flags.DEFINE_string('data_list', './dataAugment/train.txt', 'training data list filename') + + # validation + flags.DEFINE_integer('valid_step', 217000, 'checkpoint number for validation') + flags.DEFINE_integer('valid_num_steps', 81605, '= number of validation samples') + 
flags.DEFINE_string('valid_data_list', './dataAugment/val.txt', 'validation data list filename') + + # prediction / saving outputs for testing or validation + flags.DEFINE_string('out_dir', out_dir_, 'directory for saving outputs') + flags.DEFINE_integer('test_step', test_step_, 'checkpoint number for testing/validation') + flags.DEFINE_integer('test_num_steps', test_num_steps_, '= number of testing/validation samples') + flags.DEFINE_string('test_data_list', test_data_list_, 'testing/validation data list filename') + flags.DEFINE_boolean('visual', True, 'whether to save predictions for visualization') + + # data + flags.DEFINE_string('data_dir', data_dir_, 'data directory') + flags.DEFINE_integer('batch_size', 15, 'training batch size') + flags.DEFINE_integer('input_height', 256, 'input image height') + flags.DEFINE_integer('input_width', 256, 'input image width') + flags.DEFINE_integer('num_classes', 2, 'number of classes') + flags.DEFINE_integer('ignore_label', 254, 'label pixel value that should be ignored') + flags.DEFINE_boolean('random_scale', False, 'whether to perform random scaling data-augmentation') + flags.DEFINE_boolean('random_mirror', False, 'whether to perform random left-right flipping data-augmentation') + + # log + flags.DEFINE_string('modeldir', modeldir_, 'model directory') + flags.DEFINE_string('logfile', 'log.txt', 'training log filename') + flags.DEFINE_string('logdir', 'log', 'training log directory') + + flags.FLAGS.__dict__['__parsed'] = False + return flags.FLAGS + +def main(_): + if args.option not in ['train', 'test', 'predict']: + print('invalid option: ', args.option) + print("Please input a option: train, test, or predict") + else: + # Set up tf session and initialize variables. 
+ # config = tf.ConfigProto() + # config.gpu_options.allow_growth = True + # sess = tf.Session(config=config) + sess = tf.Session() + # Run + model = Model(sess, configure(test_data_list_=args.test_data_list, out_dir_=args.out_dir, test_step_=args.test_step, test_num_steps_=args.test_num_steps, modeldir_=args.modeldir, data_dir_=args.data_dir)) + getattr(model, args.option)() + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + + parser.add_argument('--option', dest='option', type=str, default='predict', + help='actions: train, test, or predict') + parser.add_argument('--test_data_list', dest='test_data_list', type=str, default='./dataAugment/val.txt', + help='testing/validation data list filename') + parser.add_argument('--out_dir', dest='out_dir', type=str, default='outputAugmentTest', + help='directory for saving testing outputs') + parser.add_argument('--test_step', dest='test_step', type=int, default=217000, + help='checkpoint number for testing/validation') + parser.add_argument('--test_num_steps', dest='test_num_steps', type=int, default=81605, + help='number of testing/validation samples') + parser.add_argument('--modeldir', dest='modeldir', type=str, default='modelAugment', + help='model directory') + parser.add_argument('--data_dir', dest='data_dir', type=str, default='/hdd/wsi_fun/ImageAugCustom/AugmentationOutput', + help='data directory') + parser.add_argument('--gpu', dest='gpu', type=str, default='1', + help='specify which GPU to use') + + args = parser.parse_args() + + # Choose which gpu or cpu to use + os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu + tf.app.run() diff --git a/Codes/Deeplab_network/main_backup_12282017.py b/Codes/Deeplab_network/main_backup_12282017.py new file mode 100644 index 0000000..19f0b7a --- /dev/null +++ b/Codes/Deeplab_network/main_backup_12282017.py @@ -0,0 +1,82 @@ +import argparse +import os +import tensorflow as tf +from model import Model + + + +""" +This script defines hyperparameters. 
+""" + + + +def configure(): + flags = tf.app.flags + + # training + flags.DEFINE_integer('num_steps', 250000, 'maximum number of iterations') + flags.DEFINE_integer('save_interval', 5000, 'number of iterations for saving and visualization') + flags.DEFINE_integer('random_seed', 1234, 'random seed') + flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') + flags.DEFINE_float('learning_rate', 2.5e-4, 'learning rate') + flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') + flags.DEFINE_float('momentum', 0.9, 'momentum') + flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') + flags.DEFINE_string('pretrain_file', 'model3/model.ckpt-200000', 'pre-trained model filename corresponding to encoder_name') + flags.DEFINE_string('data_list', './dataset/train.txt', 'training data list filename') + + # validation + flags.DEFINE_integer('valid_step', 200000, 'checkpoint number for validation') + flags.DEFINE_integer('valid_num_steps', 32659, '= number of validation samples') + flags.DEFINE_string('valid_data_list', './dataset/val.txt', 'validation data list filename') + + # prediction / saving outputs for testing or validation + flags.DEFINE_string('out_dir', 'output', 'directory for saving outputs') + flags.DEFINE_integer('test_step', 200000, 'checkpoint number for testing/validation') + flags.DEFINE_integer('test_num_steps', 48209, '= number of testing/validation samples') + flags.DEFINE_string('test_data_list', './dataset/test.txt', 'testing/validation data list filename') + flags.DEFINE_boolean('visual', True, 'whether to save predictions for visualization') + + # data + flags.DEFINE_string('data_dir', '/hdd/wsi_fun/wsi_data/wsi_blocks/', 'data directory') + flags.DEFINE_integer('batch_size', 15, 'training batch size') + flags.DEFINE_integer('input_height', 256, 'input image height') + flags.DEFINE_integer('input_width', 256, 'input image width') + flags.DEFINE_integer('num_classes', 2, 'number 
of classes') + flags.DEFINE_integer('ignore_label', 254, 'label pixel value that should be ignored') + flags.DEFINE_boolean('random_scale', True, 'whether to perform random scaling data-augmentation') + flags.DEFINE_boolean('random_mirror', True, 'whether to perform random left-right flipping data-augmentation') + + # log + flags.DEFINE_string('modeldir', 'model3', 'model directory') + flags.DEFINE_string('logfile', 'log.txt', 'training log filename') + flags.DEFINE_string('logdir', 'log', 'training log directory') + + flags.FLAGS.__dict__['__parsed'] = False + return flags.FLAGS + +def main(_): + parser = argparse.ArgumentParser() + parser.add_argument('--option', dest='option', type=str, default='train', + help='actions: train, test, or predict') + args = parser.parse_args() + + if args.option not in ['train', 'test', 'predict']: + print('invalid option: ', args.option) + print("Please input a option: train, test, or predict") + else: + # Set up tf session and initialize variables. + # config = tf.ConfigProto() + # config.gpu_options.allow_growth = True + # sess = tf.Session(config=config) + sess = tf.Session() + # Run + model = Model(sess, configure()) + getattr(model, args.option)() + + +if __name__ == '__main__': + # Choose which gpu or cpu to use + os.environ['CUDA_VISIBLE_DEVICES'] = '0' + tf.app.run() diff --git a/Codes/Deeplab_network/main_msc.py b/Codes/Deeplab_network/main_msc.py new file mode 100644 index 0000000..0c17e1d --- /dev/null +++ b/Codes/Deeplab_network/main_msc.py @@ -0,0 +1,84 @@ +import argparse +import os +import tensorflow as tf +from model_msc import Model_msc + + + +""" +This script defines hyperparameters. 
+""" + + + +def configure(): + flags = tf.app.flags + + # training + flags.DEFINE_integer('num_steps', 20000, 'maximum number of iterations') + flags.DEFINE_integer('save_interval', 1000, 'number of iterations for saving and visualization') + flags.DEFINE_integer('random_seed', 1234, 'random seed') + flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') + flags.DEFINE_float('learning_rate', 2.5e-4, 'learning rate') + flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') + flags.DEFINE_float('momentum', 0.9, 'momentum') + flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') + flags.DEFINE_string('pretrain_file', '../reference model/deeplab_resnet_init.ckpt', 'pre-trained model filename corresponding to encoder_name') + flags.DEFINE_string('data_list', './dataset/train.txt', 'training data list filename') + flags.DEFINE_integer('grad_update_every', 10, 'gradient accumulation step') + # Note: grad_update_every = true training batch size + + # validation + flags.DEFINE_integer('valid_step', 20000, 'checkpoint number for validation') + flags.DEFINE_integer('valid_num_steps', 1449, '= number of validation samples') + flags.DEFINE_string('valid_data_list', './dataset/val.txt', 'validation data list filename') + + # prediction / saving outputs for testing or validation + flags.DEFINE_string('out_dir', 'output', 'directory for saving outputs') + flags.DEFINE_integer('test_step', 20000, 'checkpoint number for testing/validation') + flags.DEFINE_integer('test_num_steps', 1449, '= number of testing/validation samples') + flags.DEFINE_string('test_data_list', './dataset/val.txt', 'testing/validation data list filename') + flags.DEFINE_boolean('visual', True, 'whether to save predictions for visualization') + + # data + flags.DEFINE_string('data_dir', '/tempspace2/zwang6/VOC2012', 'data directory') + flags.DEFINE_integer('batch_size', 1, 'training batch size') + flags.DEFINE_integer('input_height', 
321, 'input image height')
+ flags.DEFINE_integer('input_width', 321, 'input image width')
+ flags.DEFINE_integer('num_classes', 21, 'number of classes')
+ flags.DEFINE_integer('ignore_label', 255, 'label pixel value that should be ignored')
+ flags.DEFINE_boolean('random_scale', True, 'whether to perform random scaling data-augmentation')
+ flags.DEFINE_boolean('random_mirror', True, 'whether to perform random left-right flipping data-augmentation')
+
+ # log
+ flags.DEFINE_string('modeldir', 'model', 'model directory')
+ flags.DEFINE_string('logfile', 'log.txt', 'training log filename')
+ flags.DEFINE_string('logdir', 'log', 'training log directory')
+
+ flags.FLAGS.__dict__['__parsed'] = False
+ return flags.FLAGS
+
+def main(_):
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--option', dest='option', type=str, default='train',
+ help='actions: train, test, or predict')
+ args = parser.parse_args()
+
+ if args.option not in ['train', 'test', 'predict']:
+ print('invalid option: ', args.option)
+ print("Please input an option: train, test, or predict")
+ else:
+ # Set up tf session and initialize variables.
+ # config = tf.ConfigProto()
+ # config.gpu_options.allow_growth = True
+ # sess = tf.Session(config=config)
+ sess = tf.Session()
+ # Run
+ model = Model_msc(sess, configure())
+ getattr(model, args.option)()
+
+
+if __name__ == '__main__':
+ # Choose which gpu or cpu to use
+ os.environ['CUDA_VISIBLE_DEVICES'] = '7'
+ tf.app.run()
diff --git a/Codes/Deeplab_network/main_msc.py~ b/Codes/Deeplab_network/main_msc.py~
new file mode 100644
index 0000000..72fb7f4
--- /dev/null
+++ b/Codes/Deeplab_network/main_msc.py~
@@ -0,0 +1,85 @@
+import argparse
+import os
+import tensorflow as tf
+from model_msc import Model_msc
+
+
+
+"""
+This script defines hyperparameters.
+""" + + + +def configure(): + flags = tf.app.flags + + # training + flags.DEFINE_integer('num_steps', 100000, 'maximum number of iterations') + flags.DEFINE_integer('save_interval', 5000, 'number of iterations for saving and visualization') + flags.DEFINE_integer('random_seed', 1234, 'random seed') + flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') + flags.DEFINE_float('learning_rate', 2.5e-4, 'learning rate') + flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') + flags.DEFINE_float('momentum', 0.9, 'momentum') + flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') + flags.DEFINE_string('pretrain_file', 'deeplab_resnet.ckpt', 'pre-trained model filename corresponding to encoder_name') + flags.DEFINE_string('data_list', './dataset/train.txt', 'training data list filename') + flags.DEFINE_integer('grad_update_every', 15, 'gradient accumulation step') + # Note: grad_update_every = true training batch size + + # validation + flags.DEFINE_integer('valid_step', 100000, 'checkpoint number for validation') + flags.DEFINE_integer('valid_num_steps', 48209, '= number of validation samples') + flags.DEFINE_string('valid_data_list', './dataset/test.txt', 'validation data list filename') + + # prediction / saving outputs for testing or validation + flags.DEFINE_string('out_dir', 'output', 'directory for saving outputs') + flags.DEFINE_integer('test_step', 100000, 'checkpoint number for testing/validation') + flags.DEFINE_integer('test_num_steps', 48209, '= number of testing/validation samples') + flags.DEFINE_string('test_data_list', './dataset/test.txt', 'testing/validation data list filename') + flags.DEFINE_boolean('visual', True, 'whether to save predictions for visualization') + + # data + flags.DEFINE_string('data_dir', '/hdd/wsi_fun/wsi_data/train_wsi/train_wsi_blocks/', 'data directory') + flags.DEFINE_integer('batch_size', 15, 'training batch size') + 
flags.DEFINE_integer('input_height', 256, 'input image height') + flags.DEFINE_integer('input_width', 256, 'input image width') + flags.DEFINE_integer('num_classes', 2, 'number of classes') + + flags.DEFINE_integer('ignore_label', 255, 'label pixel value that should be ignored') + flags.DEFINE_boolean('random_scale', False, 'whether to perform random scaling data-augmentation') + flags.DEFINE_boolean('random_mirror', False, 'whether to perform random left-right flipping data-augmentation') + + # log + flags.DEFINE_string('modeldir', 'model', 'model directory') + flags.DEFINE_string('logfile', 'log.txt', 'training log filename') + flags.DEFINE_string('logdir', 'log', 'training log directory') + + flags.FLAGS.__dict__['__parsed'] = False + return flags.FLAGS + +def main(_): + parser = argparse.ArgumentParser() + parser.add_argument('--option', dest='option', type=str, default='train', + help='actions: train, test, or predict') + args = parser.parse_args() + + if args.option not in ['train', 'test', 'predict']: + print('invalid option: ', args.option) + print("Please input a option: train, test, or predict") + else: + # Set up tf session and initialize variables. 
+ # config = tf.ConfigProto() + # config.gpu_options.allow_growth = True + # sess = tf.Session(config=config) + sess = tf.Session() + # Run + model = Model_msc(sess, configure()) + getattr(model, args.option)() + + +if __name__ == '__main__': + # Choose which gpu or cpu to use + os.environ['CUDA_VISIBLE_DEVICES'] = '7' + tf.app.run() diff --git a/Codes/Deeplab_network/model.py b/Codes/Deeplab_network/model.py new file mode 100644 index 0000000..633f3e8 --- /dev/null +++ b/Codes/Deeplab_network/model.py @@ -0,0 +1,409 @@ +from datetime import datetime +import os +import sys +import time +import numpy as np +import tensorflow as tf +from PIL import Image + +from network import * +from utils import ImageReader, decode_labels, inv_preprocess, prepare_label, write_log, read_labeled_image_list + + + +""" +This script trains or evaluates the model on augmented PASCAL VOC 2012 dataset. +The training set contains 10581 training images. +The validation set contains 1449 validation images. + +Training: +'poly' learning rate +different learning rates for different layers +""" + + + +IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32) + +class Model(object): + + def __init__(self, sess, conf): + self.sess = sess + self.conf = conf + + # train + def train(self): + normal_color = "\033[0;37;40m" + self.train_setup() + + self.sess.run(tf.global_variables_initializer()) + + # Load the pre-trained model if provided + if self.conf.pretrain_file is not None: + self.load(self.loader, self.conf.pretrain_file) + + # Start queue threads. + threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) + + # Train! 
+ for step in range(self.conf.num_steps+1): + start_time = time.time() + feed_dict = { self.curr_step : step } + + if step % self.conf.save_interval == 0: + loss_value, images, labels, preds, summary, _ = self.sess.run( + [self.reduced_loss, + self.image_batch, + self.label_batch, + self.pred, + self.total_summary, + self.train_op], + feed_dict=feed_dict) + self.summary_writer.add_summary(summary, step) + self.save(self.saver, step) + else: + loss_value, _ = self.sess.run([self.reduced_loss, self.train_op], + feed_dict=feed_dict) + + duration = time.time() - start_time + print(self.conf.print_color + 'step {:d} \t loss = {:.3f}, ({:.3f} sec/step)'.format(step, loss_value, duration) + normal_color) + write_log('{:d}, {:.3f}'.format(step, loss_value), self.conf.logfile) + + # finish + self.coord.request_stop() + self.coord.join(threads) + + # evaluate + def test(self): + normal_color = "\033[0;37;40m" + self.test_setup() + + self.sess.run(tf.global_variables_initializer()) + self.sess.run(tf.local_variables_initializer()) + + # load checkpoint + checkpointfile = self.conf.modeldir+ '/model.ckpt-' + str(self.conf.valid_step) + self.load(self.loader, checkpointfile) + + # Start queue threads. + threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) + + # Test! 
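The evaluation loop below accumulates a confusion matrix over all validation images; per-class IoU and the mean IoU are then derived directly from that matrix, exactly as in compute_IoU_per_class further down. A self-contained sketch of that derivation, using a hypothetical 3-class matrix:

```python
import numpy as np

# Hypothetical accumulated confusion matrix for 3 classes;
# rows/columns are indexed as in compute_IoU_per_class below.
confusion = np.array([[50, 2, 3],
                      [4, 40, 1],
                      [6, 8, 30]], dtype=np.float64)

ious = []
for i in range(confusion.shape[0]):
    tp = confusion[i, i]
    fp = confusion[:, i].sum() - tp   # column sum minus the diagonal
    fn = confusion[i, :].sum() - tp   # row sum minus the diagonal
    ious.append(tp / (tp + fp + fn))  # IoU = TP / (TP + FP + FN)

miou = float(np.mean(ious))
```

Summing per-image matrices before dividing (rather than averaging per-image IoUs) is what makes the final numbers dataset-wide.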
+ confusion_matrix = np.zeros((self.conf.num_classes, self.conf.num_classes), dtype=np.int) + for step in range(self.conf.valid_num_steps): + preds, _, _, c_matrix = self.sess.run([self.pred, self.accu_update_op, self.mIou_update_op, self.confusion_matrix]) + confusion_matrix += c_matrix + if step % 100 == 0: + print(self.conf.print_color + 'step {:d}'.format(step) + normal_color) + print(self.conf.print_color + 'Pixel Accuracy: {:.3f}'.format(self.accu.eval(session=self.sess)) + normal_color) + print(self.conf.print_color + 'Mean IoU: {:.3f}'.format(self.mIoU.eval(session=self.sess)) + normal_color) + self.compute_IoU_per_class(confusion_matrix) + + # finish + self.coord.request_stop() + self.coord.join(threads) + + # prediction + def predict(self): + normal_color = "\033[0;37;40m" + self.predict_setup() + + self.sess.run(tf.global_variables_initializer()) + self.sess.run(tf.local_variables_initializer()) + + # load checkpoint + checkpointfile = self.conf.modeldir+ '/model.ckpt-' + str(self.conf.test_step) + self.load(self.loader, checkpointfile) + + # Start queue threads. + threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) + + # img_name_list + image_list, _ = read_labeled_image_list('', self.conf.test_data_list) + + # Predict! + for step in range(self.conf.test_num_steps): + preds = self.sess.run(self.pred) + + img_name = image_list[step].split('/')[2].split('.')[0] + # Save raw predictions, i.e. each pixel is an integer between [0,20]. + im = Image.fromarray(preds[0,:,:,0], mode='L') + filename = '/%s_mask.png' % (img_name) + im.save(self.conf.out_dir + '/prediction' + filename) + + # Save predictions for visualization. + # See utils/label_utils.py for color setting + # Need to be modified based on datasets. 
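decode_labels (imported from utils) performs the color mapping mentioned in the comments above; the core idea is simply indexing a per-class RGB palette with the integer mask. A minimal sketch with a hypothetical two-class palette (the real, dataset-specific colors live in the utils code):

```python
import numpy as np

# Hypothetical palette: class 0 -> black, class 1 -> red.
palette = np.array([[0, 0, 0], [255, 0, 0]], dtype=np.uint8)

def colorize(mask_batch, palette):
    """Map a [batch, h, w, 1] integer mask to [batch, h, w, 3] RGB images."""
    return palette[mask_batch[..., 0]]

mask = np.zeros((1, 2, 2, 1), dtype=np.int64)
mask[0, 0, 1, 0] = 1
rgb = colorize(mask, palette)  # shape (1, 2, 2, 3)
```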
+ if self.conf.visual:
+ msk = decode_labels(preds, num_classes=self.conf.num_classes)
+ im = Image.fromarray(msk[0], mode='RGB')
+ filename = '/%s_mask_visual.png' % (img_name)
+ im.save(self.conf.out_dir + '/visual_prediction' + filename)
+
+ if step % 100 == 0:
+ print(self.conf.print_color + 'step {:d}'.format(step) + normal_color)
+
+ print(self.conf.print_color + 'The output files have been saved to {}'.format(self.conf.out_dir) + normal_color)
+
+ # finish
+ self.coord.request_stop()
+ self.coord.join(threads)
+
+ def train_setup(self):
+ tf.set_random_seed(self.conf.random_seed)
+
+ # Create queue coordinator.
+ self.coord = tf.train.Coordinator()
+
+ # Input size
+ input_size = (self.conf.input_height, self.conf.input_width)
+
+ # Load reader
+ with tf.name_scope("create_inputs"):
+ reader = ImageReader(
+ self.conf.data_dir,
+ self.conf.data_list,
+ input_size,
+ self.conf.random_scale,
+ self.conf.random_mirror,
+ self.conf.ignore_label,
+ IMG_MEAN,
+ self.coord)
+ self.image_batch, self.label_batch = reader.dequeue(self.conf.batch_size)
+
+ # Create network
+ if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']:
+ print('encoder_name ERROR!')
+ print("Please input: res101, res50, or deeplab")
+ sys.exit(-1)
+ elif self.conf.encoder_name == 'deeplab':
+ net = Deeplab_v2(self.image_batch, self.conf.num_classes, True)
+ # Variables that load from pre-trained model.
+ restore_var = [v for v in tf.global_variables() if 'fc' not in v.name]
+ # Trainable Variables
+ all_trainable = tf.trainable_variables()
+ # Fine-tune part
+ encoder_trainable = [v for v in all_trainable if 'fc' not in v.name] # lr * 1.0
+ # Decoder part
+ decoder_trainable = [v for v in all_trainable if 'fc' in v.name]
+ else:
+ net = ResNet_segmentation(self.image_batch, self.conf.num_classes, True, self.conf.encoder_name)
+ # Variables that load from pre-trained model.
+ restore_var = [v for v in tf.global_variables() if 'resnet_v1' in v.name]
+ # Trainable Variables
+ all_trainable = tf.trainable_variables()
+ # Fine-tune part
+ encoder_trainable = [v for v in all_trainable if 'resnet_v1' in v.name] # lr * 1.0
+ # Decoder part
+ decoder_trainable = [v for v in all_trainable if 'decoder' in v.name]
+
+ decoder_w_trainable = [v for v in decoder_trainable if 'weights' in v.name or 'gamma' in v.name] # lr * 10.0
+ decoder_b_trainable = [v for v in decoder_trainable if 'biases' in v.name or 'beta' in v.name] # lr * 20.0
+ # Check
+ assert(len(all_trainable) == len(decoder_trainable) + len(encoder_trainable))
+ assert(len(decoder_trainable) == len(decoder_w_trainable) + len(decoder_b_trainable))
+
+ # Network raw output
+ raw_output = net.outputs # [batch_size, h, w, 21]
+
+ # Output size
+ output_shape = tf.shape(raw_output)
+ output_size = (output_shape[1], output_shape[2])
+
+ # Ground Truth: ignoring all labels greater than or equal to n_classes
+ label_proc = prepare_label(self.label_batch, output_size, num_classes=self.conf.num_classes, one_hot=False)
+ raw_gt = tf.reshape(label_proc, [-1,])
+ indices = tf.squeeze(tf.where(tf.less_equal(raw_gt, self.conf.num_classes - 1)), 1)
+ gt = tf.cast(tf.gather(raw_gt, indices), tf.int32)
+ raw_prediction = tf.reshape(raw_output, [-1, self.conf.num_classes])
+ prediction = tf.gather(raw_prediction, indices)
+
+ # Pixel-wise softmax_cross_entropy loss
+ loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction, labels=gt)
+ # L2 regularization
+ l2_losses = [self.conf.weight_decay * tf.nn.l2_loss(v) for v in all_trainable if 'weights' in v.name]
+ # Loss function
+ self.reduced_loss = tf.reduce_mean(loss) + tf.add_n(l2_losses)
+
+ # Define optimizers
+ # 'poly' learning rate
+ base_lr = tf.constant(self.conf.learning_rate)
+ self.curr_step = tf.placeholder(dtype=tf.float32, shape=())
+ learning_rate = tf.scalar_mul(base_lr, tf.pow((1 - self.curr_step / self.conf.num_steps),
self.conf.power)) + # We have several optimizers here in order to handle the different lr_mult + # which is a kind of parameters in Caffe. This controls the actual lr for each + # layer. + opt_encoder = tf.train.MomentumOptimizer(learning_rate, self.conf.momentum) + opt_decoder_w = tf.train.MomentumOptimizer(learning_rate * 10.0, self.conf.momentum) + opt_decoder_b = tf.train.MomentumOptimizer(learning_rate * 20.0, self.conf.momentum) + # To make sure each layer gets updated by different lr's, we do not use 'minimize' here. + # Instead, we separate the steps compute_grads+update_params. + # Compute grads + grads = tf.gradients(self.reduced_loss, encoder_trainable + decoder_w_trainable + decoder_b_trainable) + grads_encoder = grads[:len(encoder_trainable)] + grads_decoder_w = grads[len(encoder_trainable) : (len(encoder_trainable) + len(decoder_w_trainable))] + grads_decoder_b = grads[(len(encoder_trainable) + len(decoder_w_trainable)):] + # Update params + train_op_conv = opt_encoder.apply_gradients(zip(grads_encoder, encoder_trainable)) + train_op_fc_w = opt_decoder_w.apply_gradients(zip(grads_decoder_w, decoder_w_trainable)) + train_op_fc_b = opt_decoder_b.apply_gradients(zip(grads_decoder_b, decoder_b_trainable)) + # Finally, get the train_op! + update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # for collecting moving_mean and moving_variance + with tf.control_dependencies(update_ops): + self.train_op = tf.group(train_op_conv, train_op_fc_w, train_op_fc_b) + + # Saver for storing checkpoints of the model + self.saver = tf.train.Saver(var_list=tf.global_variables(), max_to_keep=0) + + # Loader for loading the pre-trained model + self.loader = tf.train.Saver(var_list=restore_var) + + # Training summary + # Processed predictions: for visualisation. + raw_output_up = tf.image.resize_bilinear(raw_output, input_size) + raw_output_up = tf.argmax(raw_output_up, axis=3) + self.pred = tf.expand_dims(raw_output_up, dim=3) + # Image summary. 
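The 'poly' schedule assembled above is simply base_lr * (1 - step / num_steps) ** power; a plain-Python sketch of the same curve:

```python
def poly_lr(base_lr, step, num_steps, power=0.9):
    """'poly' decay: full rate at step 0, smoothly down to 0 at num_steps."""
    return base_lr * (1.0 - float(step) / num_steps) ** power

# With the defaults in main.py (2.5e-4 over 250000 steps), the rate starts
# at 2.5e-4 and reaches exactly 0 on the final step.
```

Feeding curr_step through feed_dict (as the train loop does) is what lets the graph re-evaluate this expression every iteration.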
+ images_summary = tf.py_func(inv_preprocess, [self.image_batch, 2, IMG_MEAN], tf.uint8) + labels_summary = tf.py_func(decode_labels, [self.label_batch, 2, self.conf.num_classes], tf.uint8) + preds_summary = tf.py_func(decode_labels, [self.pred, 2, self.conf.num_classes], tf.uint8) + self.total_summary = tf.summary.image('images', + tf.concat(axis=2, values=[images_summary, labels_summary, preds_summary]), + max_outputs=2) # Concatenate row-wise. + if not os.path.exists(self.conf.logdir): + os.makedirs(self.conf.logdir) + self.summary_writer = tf.summary.FileWriter(self.conf.logdir, graph=tf.get_default_graph()) + + def test_setup(self): + # Create queue coordinator. + self.coord = tf.train.Coordinator() + + # Load reader + with tf.name_scope("create_inputs"): + reader = ImageReader( + self.conf.data_dir, + self.conf.valid_data_list, + None, # the images have different sizes + False, # no data-aug + False, # no data-aug + self.conf.ignore_label, + IMG_MEAN, + self.coord) + image, label = reader.image, reader.label # [h, w, 3 or 1] + # Add one batch dimension [1, h, w, 3 or 1] + self.image_batch, self.label_batch = tf.expand_dims(image, dim=0), tf.expand_dims(label, dim=0) + + # Create network + if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: + print('encoder_name ERROR!') + print("Please input: res101, res50, or deeplab") + sys.exit(-1) + elif self.conf.encoder_name == 'deeplab': + net = Deeplab_v2(self.image_batch, self.conf.num_classes, False) + else: + net = ResNet_segmentation(self.image_batch, self.conf.num_classes, False, self.conf.encoder_name) + + # predictions + raw_output = net.outputs + raw_output = tf.image.resize_bilinear(raw_output, tf.shape(self.image_batch)[1:3,]) + raw_output = tf.argmax(raw_output, axis=3) + pred = tf.expand_dims(raw_output, dim=3) + self.pred = tf.reshape(pred, [-1,]) + # labels + gt = tf.reshape(self.label_batch, [-1,]) + # Ignoring all labels greater than or equal to n_classes. 
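The masking below (and the equivalent gather in train_setup) keeps the metrics from being polluted by ignore_label pixels: invalid labels get metric weight 0 and are mapped onto a dummy class. The same logic in NumPy, with hypothetical values:

```python
import numpy as np

num_classes = 21

# Hypothetical flattened ground truth; 255 is the ignore_label.
gt = np.array([0, 5, 255, 20, 255, 3], dtype=np.uint8)

valid = gt <= num_classes - 1        # like tf.less_equal(gt, num_classes - 1)
weights = valid.astype(np.int32)     # ignored pixels get zero metric weight
gt_clean = np.where(valid, gt, 0)    # replace ignored labels with dummy class 0
```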
+ temp = tf.less_equal(gt, self.conf.num_classes - 1) + weights = tf.cast(temp, tf.int32) + + # fix for tf 1.3.0 + gt = tf.where(temp, gt, tf.cast(temp, tf.uint8)) + + # Pixel accuracy + self.accu, self.accu_update_op = tf.contrib.metrics.streaming_accuracy( + self.pred, gt, weights=weights) + + # mIoU + self.mIoU, self.mIou_update_op = tf.contrib.metrics.streaming_mean_iou( + self.pred, gt, num_classes=self.conf.num_classes, weights=weights) + + # confusion matrix + self.confusion_matrix = tf.contrib.metrics.confusion_matrix( + self.pred, gt, num_classes=self.conf.num_classes, weights=weights) + + # Loader for loading the checkpoint + self.loader = tf.train.Saver(var_list=tf.global_variables()) + + def predict_setup(self): + # Create queue coordinator. + self.coord = tf.train.Coordinator() + + # Load reader + with tf.name_scope("create_inputs"): + reader = ImageReader( + self.conf.data_dir, + self.conf.test_data_list, + None, # the images have different sizes + False, # no data-aug + False, # no data-aug + self.conf.ignore_label, + IMG_MEAN, + self.coord) + image, label = reader.image, reader.label # [h, w, 3 or 1] + # Add one batch dimension [1, h, w, 3 or 1] + image_batch, label_batch = tf.expand_dims(image, dim=0), tf.expand_dims(label, dim=0) + + # Create network + if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: + print('encoder_name ERROR!') + print("Please input: res101, res50, or deeplab") + sys.exit(-1) + elif self.conf.encoder_name == 'deeplab': + net = Deeplab_v2(image_batch, self.conf.num_classes, False) + else: + net = ResNet_segmentation(image_batch, self.conf.num_classes, False, self.conf.encoder_name) + + # Predictions. 
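The prediction head that follows is resize-then-argmax: logits are bilinearly upsampled to the input resolution and the highest-scoring channel becomes the class index. The same post-processing in NumPy on a hypothetical logits tensor (the resize step is omitted):

```python
import numpy as np

rng = np.random.RandomState(0)
raw_output = rng.randn(1, 4, 4, 21).astype(np.float32)  # [batch, h, w, classes]

pred = np.argmax(raw_output, axis=3)                  # [1, h, w] class indices
pred = np.expand_dims(pred, axis=3).astype(np.uint8)  # [1, h, w, 1], like self.pred
```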
+ raw_output = net.outputs + raw_output = tf.image.resize_bilinear(raw_output, tf.shape(image_batch)[1:3,]) + raw_output = tf.argmax(raw_output, axis=3) + self.pred = tf.cast(tf.expand_dims(raw_output, dim=3), tf.uint8) + + # Create directory + if not os.path.exists(self.conf.out_dir): + os.makedirs(self.conf.out_dir) + os.makedirs(self.conf.out_dir + '/prediction') + if self.conf.visual: + os.makedirs(self.conf.out_dir + '/visual_prediction') + + # Loader for loading the checkpoint + self.loader = tf.train.Saver(var_list=tf.global_variables()) + + def save(self, saver, step): + ''' + Save weights. + ''' + model_name = 'model.ckpt' + checkpoint_path = os.path.join(self.conf.modeldir, model_name) + if not os.path.exists(self.conf.modeldir): + os.makedirs(self.conf.modeldir) + saver.save(self.sess, checkpoint_path, global_step=step) + print('The checkpoint has been created.') + + def load(self, saver, filename): + ''' + Load trained weights. + ''' + saver.restore(self.sess, filename) + print("Restored model parameters from {}".format(filename)) + + def compute_IoU_per_class(self, confusion_matrix): + mIoU = 0 + for i in range(self.conf.num_classes): + # IoU = true_positive / (true_positive + false_positive + false_negative) + TP = confusion_matrix[i,i] + FP = np.sum(confusion_matrix[:, i]) - TP + FN = np.sum(confusion_matrix[i]) - TP + IoU = TP / (TP + FP + FN) + print ('class %d: %.3f' % (i, IoU)) + mIoU += IoU / self.conf.num_classes + print ('mIoU: %.3f' % mIoU) diff --git a/Codes/Deeplab_network/model_msc.py b/Codes/Deeplab_network/model_msc.py new file mode 100644 index 0000000..03b42e8 --- /dev/null +++ b/Codes/Deeplab_network/model_msc.py @@ -0,0 +1,502 @@ +from datetime import datetime +import os +import sys +import time +import numpy as np +import tensorflow as tf +from PIL import Image + +from network import * +from utils import ImageReader, decode_labels, inv_preprocess, prepare_label, write_log, read_labeled_image_list + + + +""" +This script trains or 
evaluates the model on augmented PASCAL VOC 2012 dataset. +The training set contains 10581 training images. +The validation set contains 1449 validation images. + +Training: +'poly' learning rate +different learning rates for different layers +""" + + + +IMG_MEAN = np.array((104.00698793,116.66876762,122.67891434), dtype=np.float32) + +class Model_msc(object): + + def __init__(self, sess, conf): + self.sess = sess + self.conf = conf + + # train + def train(self): + self.train_setup() + + self.sess.run(tf.global_variables_initializer()) + + # Load the pre-trained model if provided + if self.conf.pretrain_file is not None: + self.load(self.loader, self.conf.pretrain_file) + + # Start queue threads. + threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess) + + # Train! + for step in range(self.conf.num_steps+1): + start_time = time.time() + feed_dict = { self.curr_step : step } + loss_value = 0 + + # Clear the accumulated gradients. + self.sess.run(self.zero_op, feed_dict=feed_dict) + + # Accumulate gradients. + for i in range(self.conf.grad_update_every): + _, l_val = self.sess.run([self.accum_grads_op, self.reduced_loss], feed_dict=feed_dict) + loss_value += l_val + + # Normalise the loss. + loss_value /= self.conf.grad_update_every + + # Apply gradients. 
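The train() loop above runs one zero_op, then grad_update_every accumulation steps, then a single train_op, so each parameter update sees an effective batch of batch_size * grad_update_every examples. The cycle, stripped down to a scalar parameter with a hand-derived gradient (no TensorFlow):

```python
def grad(w, x):
    return 2.0 * (w - x)  # d/dw of the toy loss (w - x) ** 2

w, lr, update_every = 0.0, 0.1, 4
micro_batches = [1.0, 2.0, 3.0, 4.0]  # hypothetical per-step inputs

accum = 0.0                          # zero_op: clear the accumulator
for x in micro_batches:              # accum_grads_op, run update_every times
    accum += grad(w, x) / update_every
w -= lr * accum                      # train_op: apply the averaged gradient
```

Dividing each contribution by update_every inside the loop matches the assign_add(grad / grad_update_every) used in train_setup.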
+ if step % self.conf.save_interval == 0:
+ images, labels, summary, _ = self.sess.run(
+ [self.image_batch,
+ self.label_batch,
+ self.total_summary,
+ self.train_op],
+ feed_dict=feed_dict)
+ self.summary_writer.add_summary(summary, step)
+ self.save(self.saver, step)
+ else:
+ self.sess.run(self.train_op, feed_dict=feed_dict)
+
+ duration = time.time() - start_time
+ print('step {:d} \t loss = {:.3f}, ({:.3f} sec/step)'.format(step, loss_value, duration))
+ write_log('{:d}, {:.3f}'.format(step, loss_value), self.conf.logfile)
+
+ # finish
+ self.coord.request_stop()
+ self.coord.join(threads)
+
+ # evaluate
+ def test(self):
+ self.test_setup()
+
+ self.sess.run(tf.global_variables_initializer())
+ self.sess.run(tf.local_variables_initializer())
+
+ # load checkpoint
+ checkpointfile = self.conf.modeldir+ '/model.ckpt-' + str(self.conf.valid_step)
+ self.load(self.loader, checkpointfile)
+
+ # Start queue threads.
+ threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess)
+
+ # Test!
+ confusion_matrix = np.zeros((self.conf.num_classes, self.conf.num_classes), dtype=np.int)
+ for step in range(self.conf.valid_num_steps):
+ preds, _, _, c_matrix = self.sess.run([self.pred, self.accu_update_op, self.mIou_update_op, self.confusion_matrix])
+ confusion_matrix += c_matrix
+ if step % 100 == 0:
+ print('step {:d}'.format(step))
+ print('Pixel Accuracy: {:.3f}'.format(self.accu.eval(session=self.sess)))
+ print('Mean IoU: {:.3f}'.format(self.mIoU.eval(session=self.sess)))
+ self.compute_IoU_per_class(confusion_matrix)
+
+ # finish
+ self.coord.request_stop()
+ self.coord.join(threads)
+
+ # prediction
+ def predict(self):
+ self.predict_setup()
+
+ self.sess.run(tf.global_variables_initializer())
+ self.sess.run(tf.local_variables_initializer())
+
+ # load checkpoint
+ checkpointfile = self.conf.modeldir+ '/model.ckpt-' + str(self.conf.test_step)
+ self.load(self.loader, checkpointfile)
+
+ # Start queue threads.
+ threads = tf.train.start_queue_runners(coord=self.coord, sess=self.sess)
+
+ # img_name_list
+ image_list, _ = read_labeled_image_list('', self.conf.test_data_list)
+
+ # Predict!
+ for step in range(self.conf.test_num_steps):
+ preds = self.sess.run(self.pred)
+
+ img_name = image_list[step].split('/')[2].split('.')[0]
+ # Save raw predictions, i.e. each pixel is an integer between [0,20].
+ im = Image.fromarray(preds[0,:,:,0], mode='L')
+ filename = '/%s_mask.png' % (img_name)
+ im.save(self.conf.out_dir + '/prediction' + filename)
+
+ # Save predictions for visualization.
+ # See utils/label_utils.py for color setting
+ # Need to be modified based on datasets.
+ if self.conf.visual:
+ msk = decode_labels(preds, num_classes=self.conf.num_classes)
+ im = Image.fromarray(msk[0], mode='RGB')
+ filename = '/%s_mask_visual.png' % (img_name)
+ im.save(self.conf.out_dir + '/visual_prediction' + filename)
+
+ if step % 100 == 0:
+ print('step {:d}'.format(step))
+
+ print('The output files have been saved to {}'.format(self.conf.out_dir))
+
+ # finish
+ self.coord.request_stop()
+ self.coord.join(threads)
+
+ def train_setup(self):
+ tf.set_random_seed(self.conf.random_seed)
+
+ # Create queue coordinator.
+ self.coord = tf.train.Coordinator() + + # Input size + h, w = (self.conf.input_height, self.conf.input_width) + input_size = (h, w) + + # Load reader + with tf.name_scope("create_inputs"): + reader = ImageReader( + self.conf.data_dir, + self.conf.data_list, + input_size, + self.conf.random_scale, + self.conf.random_mirror, + self.conf.ignore_label, + IMG_MEAN, + self.coord) + self.image_batch, self.label_batch = reader.dequeue(self.conf.batch_size) + image_batch_075 = tf.image.resize_images(self.image_batch, [int(h * 0.75), int(w * 0.75)]) + image_batch_05 = tf.image.resize_images(self.image_batch, [int(h * 0.5), int(w * 0.5)]) + + # Create network + if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: + print('encoder_name ERROR!') + print("Please input: res101, res50, or deeplab") + sys.exit(-1) + elif self.conf.encoder_name == 'deeplab': + with tf.variable_scope('', reuse=False): + net = Deeplab_v2(self.image_batch, self.conf.num_classes, True) + with tf.variable_scope('', reuse=True): + net075 = Deeplab_v2(image_batch_075, self.conf.num_classes, True) + with tf.variable_scope('', reuse=True): + net05 = Deeplab_v2(image_batch_05, self.conf.num_classes, True) + # Variables that load from pre-trained model. 
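train_setup runs the same network (weights shared through variable-scope reuse) at scales 1.0, 0.75 and 0.5, then fuses the three logit maps with an element-wise maximum after resizing them to a common shape. The fusion itself, in NumPy with hypothetical 1x1x1x2 logits:

```python
import numpy as np

# Per-scale logits, already resized to the same [batch, h, w, classes] shape.
logits_100 = np.array([[[[0.2, 0.8]]]])
logits_075 = np.array([[[[0.9, 0.1]]]])
logits_05 = np.array([[[[0.4, 0.4]]]])

# Element-wise max across scales, like tf.reduce_max(tf.stack([...]), axis=0).
fused = np.max(np.stack([logits_100, logits_075, logits_05]), axis=0)
```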
+ restore_var = [v for v in tf.global_variables() if 'fc' not in v.name] + # Trainable Variables + all_trainable = tf.trainable_variables() + # Fine-tune part + encoder_trainable = [v for v in all_trainable if 'fc' not in v.name] # lr * 1.0 + # Decoder part + decoder_trainable = [v for v in all_trainable if 'fc' in v.name] + else: + with tf.variable_scope('', reuse=False): + net = ResNet_segmentation(self.image_batch, self.conf.num_classes, True, self.conf.encoder_name) + with tf.variable_scope('', reuse=True): + net075 = ResNet_segmentation(image_batch_075, self.conf.num_classes, True, self.conf.encoder_name) + with tf.variable_scope('', reuse=True): + net05 = ResNet_segmentation(image_batch_05, self.conf.num_classes, True, self.conf.encoder_name) + # Variables that load from pre-trained model. + restore_var = [v for v in tf.global_variables() if 'resnet_v1' in v.name] + # Trainable Variables + all_trainable = tf.trainable_variables() + # Fine-tune part + encoder_trainable = [v for v in all_trainable if 'resnet_v1' in v.name] # lr * 1.0 + # Decoder part + decoder_trainable = [v for v in all_trainable if 'decoder' in v.name] + + decoder_w_trainable = [v for v in decoder_trainable if 'weights' in v.name or 'gamma' in v.name] # lr * 10.0 + decoder_b_trainable = [v for v in decoder_trainable if 'biases' in v.name or 'beta' in v.name] # lr * 20.0 + # Check + assert(len(all_trainable) == len(decoder_trainable) + len(encoder_trainable)) + assert(len(decoder_trainable) == len(decoder_w_trainable) + len(decoder_b_trainable)) + + # Network raw output + raw_output100 = net.outputs + raw_output075 = net075.outputs + raw_output05 = net05.outputs + raw_output = tf.reduce_max(tf.stack([raw_output100, + tf.image.resize_images(raw_output075, tf.shape(raw_output100)[1:3,]), + tf.image.resize_images(raw_output05, tf.shape(raw_output100)[1:3,])]), axis=0) + + # Groud Truth: ignoring all labels greater or equal than n_classes + label_proc = prepare_label(self.label_batch, 
tf.stack(raw_output.get_shape()[1:3]), num_classes=self.conf.num_classes, one_hot=False) # [batch_size, h, w] + label_proc075 = prepare_label(self.label_batch, tf.stack(raw_output075.get_shape()[1:3]), num_classes=self.conf.num_classes, one_hot=False) + label_proc05 = prepare_label(self.label_batch, tf.stack(raw_output05.get_shape()[1:3]), num_classes=self.conf.num_classes, one_hot=False) + + raw_gt = tf.reshape(label_proc, [-1,]) + raw_gt075 = tf.reshape(label_proc075, [-1,]) + raw_gt05 = tf.reshape(label_proc05, [-1,]) + + indices = tf.squeeze(tf.where(tf.less_equal(raw_gt, self.conf.num_classes - 1)), 1) + indices075 = tf.squeeze(tf.where(tf.less_equal(raw_gt075, self.conf.num_classes - 1)), 1) + indices05 = tf.squeeze(tf.where(tf.less_equal(raw_gt05, self.conf.num_classes - 1)), 1) + + gt = tf.cast(tf.gather(raw_gt, indices), tf.int32) + gt075 = tf.cast(tf.gather(raw_gt075, indices075), tf.int32) + gt05 = tf.cast(tf.gather(raw_gt05, indices05), tf.int32) + + raw_prediction = tf.reshape(raw_output, [-1, self.conf.num_classes]) + raw_prediction100 = tf.reshape(raw_output100, [-1, self.conf.num_classes]) + raw_prediction075 = tf.reshape(raw_output075, [-1, self.conf.num_classes]) + raw_prediction05 = tf.reshape(raw_output05, [-1, self.conf.num_classes]) + + prediction = tf.gather(raw_prediction, indices) + prediction100 = tf.gather(raw_prediction100, indices) + prediction075 = tf.gather(raw_prediction075, indices075) + prediction05 = tf.gather(raw_prediction05, indices05) + + # Pixel-wise softmax_cross_entropy loss + loss = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction, labels=gt) + loss100 = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction100, labels=gt) + loss075 = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction075, labels=gt075) + loss05 = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=prediction05, labels=gt05) + # L2 regularization + l2_losses = [self.conf.weight_decay * tf.nn.l2_loss(v) for v in 
all_trainable if 'weights' in v.name] + # Loss function + self.reduced_loss = tf.reduce_mean(loss) + tf.reduce_mean(loss100) + tf.reduce_mean(loss075) + tf.reduce_mean(loss05) + tf.add_n(l2_losses) + + # Define optimizers + # 'poly' learning rate + base_lr = tf.constant(self.conf.learning_rate) + self.curr_step = tf.placeholder(dtype=tf.float32, shape=()) + learning_rate = tf.scalar_mul(base_lr, tf.pow((1 - self.curr_step / self.conf.num_steps), self.conf.power)) + # We use several optimizers here to handle the different lr_mult values, + # a Caffe-style parameter that controls the actual learning rate of each + # layer. + opt_encoder = tf.train.MomentumOptimizer(learning_rate, self.conf.momentum) + opt_decoder_w = tf.train.MomentumOptimizer(learning_rate * 10.0, self.conf.momentum) + opt_decoder_b = tf.train.MomentumOptimizer(learning_rate * 20.0, self.conf.momentum) + + # Gradient accumulation + # Define a variable to accumulate gradients. + accum_grads = [tf.Variable(tf.zeros_like(v.initialized_value()), + trainable=False) for v in encoder_trainable + decoder_w_trainable + decoder_b_trainable] + # Define an operation to clear the accumulated gradients for the next batch. + self.zero_op = [v.assign(tf.zeros_like(v)) for v in accum_grads] + # To make sure each group of layers is updated with its own learning rate, we do not use 'minimize' here. + # Instead, we separate the steps compute_grads+update_params. + # Compute grads + grads = tf.gradients(self.reduced_loss, encoder_trainable + decoder_w_trainable + decoder_b_trainable) + # Accumulate and normalise the gradients.
+ self.accum_grads_op = [accum_grads[i].assign_add(grad / self.conf.grad_update_every) for i, grad in enumerate(grads)] + + grads_encoder = accum_grads[:len(encoder_trainable)] + grads_decoder_w = accum_grads[len(encoder_trainable) : (len(encoder_trainable) + len(decoder_w_trainable))] + grads_decoder_b = accum_grads[(len(encoder_trainable) + len(decoder_w_trainable)):] + # Update params + train_op_conv = opt_encoder.apply_gradients(zip(grads_encoder, encoder_trainable)) + train_op_fc_w = opt_decoder_w.apply_gradients(zip(grads_decoder_w, decoder_w_trainable)) + train_op_fc_b = opt_decoder_b.apply_gradients(zip(grads_decoder_b, decoder_b_trainable)) + # Finally, get the train_op! + update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) # for collecting moving_mean and moving_variance + with tf.control_dependencies(update_ops): + self.train_op = tf.group(train_op_conv, train_op_fc_w, train_op_fc_b) + + # Saver for storing checkpoints of the model + self.saver = tf.train.Saver(var_list=tf.global_variables(), max_to_keep=0) + + # Loader for loading the pre-trained model + self.loader = tf.train.Saver(var_list=restore_var) + + # Training summary + # Processed predictions: for visualisation. + raw_output_up = tf.image.resize_bilinear(raw_output, input_size) + raw_output_up = tf.argmax(raw_output_up, axis=3) + self.pred = tf.expand_dims(raw_output_up, dim=3) + # Image summary. + images_summary = tf.py_func(inv_preprocess, [self.image_batch, 1, IMG_MEAN], tf.uint8) + labels_summary = tf.py_func(decode_labels, [self.label_batch, 1, self.conf.num_classes], tf.uint8) + preds_summary = tf.py_func(decode_labels, [self.pred, 1, self.conf.num_classes], tf.uint8) + self.total_summary = tf.summary.image('images', + tf.concat(axis=2, values=[images_summary, labels_summary, preds_summary]), + max_outputs=1) # Concatenate row-wise.
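The accumulate-then-apply scheme above (zero the buffers, add grad / grad_update_every per mini-batch, then apply) reproduces the gradient of the mean loss over the effective batch. A minimal NumPy sketch; the toy squared-error gradient, the data, and the micro-batch count `k` are illustrative only, not part of the repo:

```python
import numpy as np

def grad(w, x, y):
    # Gradient of the squared error 0.5 * (w*x - y)**2 with respect to w.
    return (w * x - y) * x

w = 0.3
xs = np.array([1.0, 2.0, 3.0, 4.0])
ys = np.array([2.0, 4.0, 6.0, 8.0])

# Full-batch gradient of the mean loss.
full = np.mean([grad(w, x, y) for x, y in zip(xs, ys)])

# Accumulated gradient: clear, then add grad/k for each micro-batch of size 1,
# mirroring the zero_op / accum_grads_op pair above.
k = len(xs)
accum = 0.0                      # zero_op
for x, y in zip(xs, ys):
    accum += grad(w, x, y) / k   # accum_grads_op

assert np.isclose(accum, full)
```

Applying the accumulated value once per k micro-batches is thus equivalent to one step on the k-times-larger batch, which is what makes the small-GPU training above match a larger effective batch size.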
+ if not os.path.exists(self.conf.logdir): + os.makedirs(self.conf.logdir) + self.summary_writer = tf.summary.FileWriter(self.conf.logdir, graph=tf.get_default_graph()) + + def test_setup(self): + # Create queue coordinator. + self.coord = tf.train.Coordinator() + + # Load reader + with tf.name_scope("create_inputs"): + reader = ImageReader( + self.conf.data_dir, + self.conf.valid_data_list, + None, # the images have different sizes + False, # no data-aug + False, # no data-aug + self.conf.ignore_label, + IMG_MEAN, + self.coord) + image, label = reader.image, reader.label # [h, w, 3 or 1] + # Add one batch dimension [1, h, w, 3 or 1] + self.image_batch, self.label_batch = tf.expand_dims(image, dim=0), tf.expand_dims(label, dim=0) + h_orig, w_orig = tf.to_float(tf.shape(self.image_batch)[1]), tf.to_float(tf.shape(self.image_batch)[2]) + image_batch_075 = tf.image.resize_images(self.image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.75)), tf.to_int32(tf.multiply(w_orig, 0.75))])) + image_batch_05 = tf.image.resize_images(self.image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.5)), tf.to_int32(tf.multiply(w_orig, 0.5))])) + + # Create network + if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: + print('encoder_name ERROR!') + print("Please input: res101, res50, or deeplab") + sys.exit(-1) + elif self.conf.encoder_name == 'deeplab': + with tf.variable_scope('', reuse=False): + net = Deeplab_v2(self.image_batch, self.conf.num_classes, False) + with tf.variable_scope('', reuse=True): + net075 = Deeplab_v2(image_batch_075, self.conf.num_classes, False) + with tf.variable_scope('', reuse=True): + net05 = Deeplab_v2(image_batch_05, self.conf.num_classes, False) + else: + with tf.variable_scope('', reuse=False): + net = ResNet_segmentation(self.image_batch, self.conf.num_classes, False, self.conf.encoder_name) + with tf.variable_scope('', reuse=True): + net075 = ResNet_segmentation(image_batch_075, self.conf.num_classes, False, 
self.conf.encoder_name) + with tf.variable_scope('', reuse=True): + net05 = ResNet_segmentation(image_batch_05, self.conf.num_classes, False, self.conf.encoder_name) + + # predictions + # Network raw output + raw_output100 = net.outputs + raw_output075 = net075.outputs + raw_output05 = net05.outputs + raw_output = tf.reduce_max(tf.stack([raw_output100, + tf.image.resize_images(raw_output075, tf.shape(raw_output100)[1:3,]), + tf.image.resize_images(raw_output05, tf.shape(raw_output100)[1:3,])]), axis=0) + raw_output = tf.image.resize_bilinear(raw_output, tf.shape(self.image_batch)[1:3,]) + raw_output = tf.argmax(raw_output, axis=3) + pred = tf.expand_dims(raw_output, dim=3) + self.pred = tf.reshape(pred, [-1,]) + # labels + gt = tf.reshape(self.label_batch, [-1,]) + # Ignoring all labels greater than or equal to n_classes. + temp = tf.less_equal(gt, self.conf.num_classes - 1) + weights = tf.cast(temp, tf.int32) + + # fix for tf 1.3.0 + gt = tf.where(temp, gt, tf.cast(temp, tf.uint8)) + + # Pixel accuracy + self.accu, self.accu_update_op = tf.contrib.metrics.streaming_accuracy( + self.pred, gt, weights=weights) + + # mIoU + self.mIoU, self.mIou_update_op = tf.contrib.metrics.streaming_mean_iou( + self.pred, gt, num_classes=self.conf.num_classes, weights=weights) + + # confusion matrix + self.confusion_matrix = tf.contrib.metrics.confusion_matrix( + self.pred, gt, num_classes=self.conf.num_classes, weights=weights) + + # Loader for loading the checkpoint + self.loader = tf.train.Saver(var_list=tf.global_variables()) + + def predict_setup(self): + # Create queue coordinator. 
+ self.coord = tf.train.Coordinator() + + # Load reader + with tf.name_scope("create_inputs"): + reader = ImageReader( + self.conf.data_dir, + self.conf.test_data_list, + None, # the images have different sizes + False, # no data-aug + False, # no data-aug + self.conf.ignore_label, + IMG_MEAN, + self.coord) + image, label = reader.image, reader.label # [h, w, 3 or 1] + # Add one batch dimension [1, h, w, 3 or 1] + image_batch, label_batch = tf.expand_dims(image, dim=0), tf.expand_dims(label, dim=0) + h_orig, w_orig = tf.to_float(tf.shape(image_batch)[1]), tf.to_float(tf.shape(image_batch)[2]) + image_batch_075 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.75)), tf.to_int32(tf.multiply(w_orig, 0.75))])) + image_batch_05 = tf.image.resize_images(image_batch, tf.stack([tf.to_int32(tf.multiply(h_orig, 0.5)), tf.to_int32(tf.multiply(w_orig, 0.5))])) + + + # Create network + if self.conf.encoder_name not in ['res101', 'res50', 'deeplab']: + print('encoder_name ERROR!') + print("Please input: res101, res50, or deeplab") + sys.exit(-1) + elif self.conf.encoder_name == 'deeplab': + with tf.variable_scope('', reuse=False): + net = Deeplab_v2(image_batch, self.conf.num_classes, False) + with tf.variable_scope('', reuse=True): + net075 = Deeplab_v2(image_batch_075, self.conf.num_classes, False) + with tf.variable_scope('', reuse=True): + net05 = Deeplab_v2(image_batch_05, self.conf.num_classes, False) + else: + with tf.variable_scope('', reuse=False): + net = ResNet_segmentation(image_batch, self.conf.num_classes, False, self.conf.encoder_name) + with tf.variable_scope('', reuse=True): + net075 = ResNet_segmentation(image_batch_075, self.conf.num_classes, False, self.conf.encoder_name) + with tf.variable_scope('', reuse=True): + net05 = ResNet_segmentation(image_batch_05, self.conf.num_classes, False, self.conf.encoder_name) + + # predictions + # Network raw output + raw_output100 = net.outputs + raw_output075 = net075.outputs + raw_output05 
= net05.outputs + raw_output = tf.reduce_max(tf.stack([raw_output100, + tf.image.resize_images(raw_output075, tf.shape(raw_output100)[1:3,]), + tf.image.resize_images(raw_output05, tf.shape(raw_output100)[1:3,])]), axis=0) + raw_output = tf.image.resize_bilinear(raw_output, tf.shape(image_batch)[1:3,]) + raw_output = tf.argmax(raw_output, axis=3) + self.pred = tf.cast(tf.expand_dims(raw_output, dim=3), tf.uint8) + + # Create output directories (also when out_dir itself already exists) + if not os.path.exists(self.conf.out_dir + '/prediction'): + os.makedirs(self.conf.out_dir + '/prediction') + if self.conf.visual and not os.path.exists(self.conf.out_dir + '/visual_prediction'): + os.makedirs(self.conf.out_dir + '/visual_prediction') + + # Loader for loading the checkpoint + self.loader = tf.train.Saver(var_list=tf.global_variables()) + + def save(self, saver, step): + ''' + Save weights. + ''' + model_name = 'model.ckpt' + checkpoint_path = os.path.join(self.conf.modeldir, model_name) + if not os.path.exists(self.conf.modeldir): + os.makedirs(self.conf.modeldir) + saver.save(self.sess, checkpoint_path, global_step=step) + print('The checkpoint has been created.') + + def load(self, saver, filename): + ''' + Load trained weights.
+ ''' + saver.restore(self.sess, filename) + print("Restored model parameters from {}".format(filename)) + + def compute_IoU_per_class(self, confusion_matrix): + mIoU = 0 + for i in range(self.conf.num_classes): + # IoU = true_positive / (true_positive + false_positive + false_negative) + TP = confusion_matrix[i,i] + FP = np.sum(confusion_matrix[:, i]) - TP + FN = np.sum(confusion_matrix[i]) - TP + IoU = float(TP) / (TP + FP + FN) # float() avoids integer division under Python 2 + print('class %d: %.3f' % (i, IoU)) + mIoU += IoU / self.conf.num_classes + print('mIoU: %.3f' % mIoU) \ No newline at end of file diff --git a/Codes/Deeplab_network/network.py b/Codes/Deeplab_network/network.py new file mode 100644 index 0000000..ca582b9 --- /dev/null +++ b/Codes/Deeplab_network/network.py @@ -0,0 +1,358 @@ +import tensorflow as tf +import numpy as np +import six +import sys + + + +""" +This script defines the segmentation network. + +The encoding part is a pre-trained ResNet. This script supports several settings (you need to specify them in main.py): + + Deeplab v2 pre-trained model (pre-trained on MSCOCO) ('deeplab_resnet_init.ckpt') + Deeplab v2 pre-trained model (pre-trained on MSCOCO + PASCAL_train+val) ('deeplab_resnet.ckpt') + Original ResNet-101 ('resnet_v1_101.ckpt') + Original ResNet-50 ('resnet_v1_50.ckpt') + +You may find the download links in the README. + +To use the pre-trained models, the name of each layer is kept the same as in the .ckpt file.
+""" + + + +class Deeplab_v2(object): + """ + Deeplab v2 pre-trained model (pre-trained on MSCOCO) ('deeplab_resnet_init.ckpt') + Deeplab v2 pre-trained model (pre-trained on MSCOCO + PASCAL_train+val) ('deeplab_resnet.ckpt') + """ + def __init__(self, inputs, num_classes, phase): + self.inputs = inputs + self.num_classes = num_classes + self.channel_axis = 3 + self.phase = phase # train (True) or test (False), for BN layers in the decoder + self.build_network() + + def build_network(self): + self.encoding = self.build_encoder() + self.outputs = self.build_decoder(self.encoding) + + def build_encoder(self): + print("-----------build encoder: deeplab pre-trained-----------") + outputs = self._start_block() + print("after start block:", outputs.shape) + outputs = self._bottleneck_resblock(outputs, 256, '2a', identity_connection=False) + outputs = self._bottleneck_resblock(outputs, 256, '2b') + outputs = self._bottleneck_resblock(outputs, 256, '2c') + print("after block1:", outputs.shape) + outputs = self._bottleneck_resblock(outputs, 512, '3a', half_size=True, identity_connection=False) + for i in six.moves.range(1, 4): + outputs = self._bottleneck_resblock(outputs, 512, '3b%d' % i) + print("after block2:", outputs.shape) + outputs = self._dilated_bottle_resblock(outputs, 1024, 2, '4a', identity_connection=False) + for i in six.moves.range(1, 23): + outputs = self._dilated_bottle_resblock(outputs, 1024, 2, '4b%d' % i) + print("after block3:", outputs.shape) + outputs = self._dilated_bottle_resblock(outputs, 2048, 4, '5a', identity_connection=False) + outputs = self._dilated_bottle_resblock(outputs, 2048, 4, '5b') + outputs = self._dilated_bottle_resblock(outputs, 2048, 4, '5c') + print("after block4:", outputs.shape) + return outputs + + def build_decoder(self, encoding): + print("-----------build decoder-----------") + outputs = self._ASPP(encoding, self.num_classes, [6, 12, 18, 24]) + print("after aspp block:", outputs.shape) + return outputs + + # blocks + def 
_start_block(self): + outputs = self._conv2d(self.inputs, 7, 64, 2, name='conv1') + outputs = self._batch_norm(outputs, name='bn_conv1', is_training=False, activation_fn=tf.nn.relu) + outputs = self._max_pool2d(outputs, 3, 2, name='pool1') + return outputs + + def _bottleneck_resblock(self, x, num_o, name, half_size=False, identity_connection=True): + first_s = 2 if half_size else 1 + assert num_o % 4 == 0, 'Bottleneck number of output ERROR!' + # branch1 + if not identity_connection: + o_b1 = self._conv2d(x, 1, num_o, first_s, name='res%s_branch1' % name) + o_b1 = self._batch_norm(o_b1, name='bn%s_branch1' % name, is_training=False, activation_fn=None) + else: + o_b1 = x + # branch2 + o_b2a = self._conv2d(x, 1, num_o // 4, first_s, name='res%s_branch2a' % name) # // keeps channel counts integral under Python 3 + o_b2a = self._batch_norm(o_b2a, name='bn%s_branch2a' % name, is_training=False, activation_fn=tf.nn.relu) + + o_b2b = self._conv2d(o_b2a, 3, num_o // 4, 1, name='res%s_branch2b' % name) + o_b2b = self._batch_norm(o_b2b, name='bn%s_branch2b' % name, is_training=False, activation_fn=tf.nn.relu) + + o_b2c = self._conv2d(o_b2b, 1, num_o, 1, name='res%s_branch2c' % name) + o_b2c = self._batch_norm(o_b2c, name='bn%s_branch2c' % name, is_training=False, activation_fn=None) + # add + outputs = self._add([o_b1,o_b2c], name='res%s' % name) + # relu + outputs = self._relu(outputs, name='res%s_relu' % name) + return outputs + + def _dilated_bottle_resblock(self, x, num_o, dilation_factor, name, identity_connection=True): + assert num_o % 4 == 0, 'Bottleneck number of output ERROR!'
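The encoder above downsamples only three times (the 7x7/2 conv, the 3x3/2 pool, and the single half_size block); the later blocks switch to dilation at stride 1. A small sketch of the SAME-padding size arithmetic, assuming the common 321x321 DeepLab training crop (an illustrative value, not read from this code):

```python
import math

def same_out(size, stride):
    # Output size of a SAME-padded conv/pool: ceil(size / stride).
    return math.ceil(size / stride)

size = 321                  # typical DeepLab training crop (assumed here)
size = same_out(size, 2)    # conv1, 7x7 stride 2       -> 161
size = same_out(size, 2)    # pool1, 3x3 stride 2       -> 81
size = same_out(size, 2)    # first stride-2 bottleneck -> 41
# Blocks 3 and 4 use dilated convolutions at stride 1, so the spatial size
# stays 41, giving an output stride of 8 (321 / 41 is roughly 8).
assert size == 41
```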
+ # branch1 + if not identity_connection: + o_b1 = self._conv2d(x, 1, num_o, 1, name='res%s_branch1' % name) + o_b1 = self._batch_norm(o_b1, name='bn%s_branch1' % name, is_training=False, activation_fn=None) + else: + o_b1 = x + # branch2 + o_b2a = self._conv2d(x, 1, num_o // 4, 1, name='res%s_branch2a' % name) + o_b2a = self._batch_norm(o_b2a, name='bn%s_branch2a' % name, is_training=False, activation_fn=tf.nn.relu) + + o_b2b = self._dilated_conv2d(o_b2a, 3, num_o // 4, dilation_factor, name='res%s_branch2b' % name) + o_b2b = self._batch_norm(o_b2b, name='bn%s_branch2b' % name, is_training=False, activation_fn=tf.nn.relu) + + o_b2c = self._conv2d(o_b2b, 1, num_o, 1, name='res%s_branch2c' % name) + o_b2c = self._batch_norm(o_b2c, name='bn%s_branch2c' % name, is_training=False, activation_fn=None) + # add + outputs = self._add([o_b1,o_b2c], name='res%s' % name) + # relu + outputs = self._relu(outputs, name='res%s_relu' % name) + return outputs + + def _ASPP(self, x, num_o, dilations): + o = [] + for i, d in enumerate(dilations): + o.append(self._dilated_conv2d(x, 3, num_o, d, name='fc1_voc12_c%d' % i, biased=True)) + return self._add(o, name='fc1_voc12') + + # layers + def _conv2d(self, x, kernel_size, num_o, stride, name, biased=False): + """ + Conv2d without BN or relu. + """ + num_x = x.shape[self.channel_axis].value + with tf.variable_scope(name) as scope: + w = tf.get_variable('weights', shape=[kernel_size, kernel_size, num_x, num_o]) + s = [1, stride, stride, 1] + o = tf.nn.conv2d(x, w, s, padding='SAME') + if biased: + b = tf.get_variable('biases', shape=[num_o]) + o = tf.nn.bias_add(o, b) + return o + + def _dilated_conv2d(self, x, kernel_size, num_o, dilation_factor, name, biased=False): + """ + Dilated conv2d without BN or relu.
+ """ + num_x = x.shape[self.channel_axis].value + with tf.variable_scope(name) as scope: + w = tf.get_variable('weights', shape=[kernel_size, kernel_size, num_x, num_o]) + o = tf.nn.atrous_conv2d(x, w, dilation_factor, padding='SAME') + if biased: + b = tf.get_variable('biases', shape=[num_o]) + o = tf.nn.bias_add(o, b) + return o + + def _relu(self, x, name): + return tf.nn.relu(x, name=name) + + def _add(self, x_l, name): + return tf.add_n(x_l, name=name) + + def _max_pool2d(self, x, kernel_size, stride, name): + k = [1, kernel_size, kernel_size, 1] + s = [1, stride, stride, 1] + return tf.nn.max_pool(x, k, s, padding='SAME', name=name) + + def _batch_norm(self, x, name, is_training, activation_fn, trainable=False): + # For a small batch size, it is better to keep + # the statistics of the BN layers (running means and variances) frozen, + # and to not update the values provided by the pre-trained model by setting is_training=False. + # Note that is_training=False still updates BN parameters gamma (scale) and beta (offset) + # if they are presented in var_list of the optimiser definition. + # Set trainable = False to remove them from trainable_variables. 
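As the comment above notes, the BN layers run with frozen statistics. A minimal NumPy sketch of what such a frozen batch norm computes at inference time; the moving statistics and the gamma/beta (scale/offset) values below are made up for illustration:

```python
import numpy as np

def frozen_batch_norm(x, moving_mean, moving_var, gamma, beta, eps=1e-5):
    # is_training=False: normalize with the stored moving statistics
    # (never the batch's own mean/variance), then apply scale and offset.
    return gamma * (x - moving_mean) / np.sqrt(moving_var + eps) + beta

x = np.array([1.0, 2.0, 3.0])
y = frozen_batch_norm(x, moving_mean=2.0, moving_var=1.0, gamma=1.0, beta=0.0)

# With gamma=1 and beta=0 this is plain standardization by the stored stats.
assert np.allclose(y, (x - 2.0) / np.sqrt(1.0 + 1e-5))
```

Keeping the statistics frozen is what makes small-batch fine-tuning stable here: only gamma and beta could change, and setting trainable=False removes even those from the optimizer's variable list.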
+ with tf.variable_scope(name) as scope: + o = tf.contrib.layers.batch_norm( + x, + scale=True, + activation_fn=activation_fn, + is_training=is_training, + trainable=trainable, + scope=scope) + return o + + + +class ResNet_segmentation(object): + """ + Original ResNet-101 ('resnet_v1_101.ckpt') + Original ResNet-50 ('resnet_v1_50.ckpt') + """ + def __init__(self, inputs, num_classes, phase, encoder_name): + if encoder_name not in ['res101', 'res50']: + print('encoder_name ERROR!') + print("Please input: res101, res50") + sys.exit(-1) + self.encoder_name = encoder_name + self.inputs = inputs + self.num_classes = num_classes + self.channel_axis = 3 + self.phase = phase # train (True) or test (False), for BN layers in the decoder + self.build_network() + + def build_network(self): + self.encoding = self.build_encoder() + self.outputs = self.build_decoder(self.encoding) + + def build_encoder(self): + print("-----------build encoder: %s-----------" % self.encoder_name) + scope_name = 'resnet_v1_101' if self.encoder_name == 'res101' else 'resnet_v1_50' + with tf.variable_scope(scope_name) as scope: + outputs = self._start_block('conv1') + print("after start block:", outputs.shape) + with tf.variable_scope('block1') as scope: + outputs = self._bottleneck_resblock(outputs, 256, 'unit_1', identity_connection=False) + outputs = self._bottleneck_resblock(outputs, 256, 'unit_2') + outputs = self._bottleneck_resblock(outputs, 256, 'unit_3') + print("after block1:", outputs.shape) + with tf.variable_scope('block2') as scope: + outputs = self._bottleneck_resblock(outputs, 512, 'unit_1', half_size=True, identity_connection=False) + for i in six.moves.range(2, 5): + outputs = self._bottleneck_resblock(outputs, 512, 'unit_%d' % i) + print("after block2:", outputs.shape) + with tf.variable_scope('block3') as scope: + outputs = self._dilated_bottle_resblock(outputs, 1024, 2, 'unit_1', identity_connection=False) + num_layers_block3 = 23 if self.encoder_name == 'res101' else 6 + for i 
in six.moves.range(2, num_layers_block3+1): + outputs = self._dilated_bottle_resblock(outputs, 1024, 2, 'unit_%d' % i) + print("after block3:", outputs.shape) + with tf.variable_scope('block4') as scope: + outputs = self._dilated_bottle_resblock(outputs, 2048, 4, 'unit_1', identity_connection=False) + outputs = self._dilated_bottle_resblock(outputs, 2048, 4, 'unit_2') + outputs = self._dilated_bottle_resblock(outputs, 2048, 4, 'unit_3') + print("after block4:", outputs.shape) + return outputs + + def build_decoder(self, encoding): + print("-----------build decoder-----------") + with tf.variable_scope('decoder') as scope: + outputs = self._ASPP(encoding, self.num_classes, [6, 12, 18, 24]) + print("after aspp block:", outputs.shape) + return outputs + + # blocks + def _start_block(self, name): + outputs = self._conv2d(self.inputs, 7, 64, 2, name=name) + outputs = self._batch_norm(outputs, name=name, is_training=False, activation_fn=tf.nn.relu) + outputs = self._max_pool2d(outputs, 3, 2, name='pool1') + return outputs + + def _bottleneck_resblock(self, x, num_o, name, half_size=False, identity_connection=True): + first_s = 2 if half_size else 1 + assert num_o % 4 == 0, 'Bottleneck number of output ERROR!' 
+ # branch1 + if not identity_connection: + o_b1 = self._conv2d(x, 1, num_o, first_s, name='%s/bottleneck_v1/shortcut' % name) + o_b1 = self._batch_norm(o_b1, name='%s/bottleneck_v1/shortcut' % name, is_training=False, activation_fn=None) + else: + o_b1 = x + # branch2 + o_b2a = self._conv2d(x, 1, num_o // 4, first_s, name='%s/bottleneck_v1/conv1' % name) + o_b2a = self._batch_norm(o_b2a, name='%s/bottleneck_v1/conv1' % name, is_training=False, activation_fn=tf.nn.relu) + + o_b2b = self._conv2d(o_b2a, 3, num_o // 4, 1, name='%s/bottleneck_v1/conv2' % name) + o_b2b = self._batch_norm(o_b2b, name='%s/bottleneck_v1/conv2' % name, is_training=False, activation_fn=tf.nn.relu) + + o_b2c = self._conv2d(o_b2b, 1, num_o, 1, name='%s/bottleneck_v1/conv3' % name) + o_b2c = self._batch_norm(o_b2c, name='%s/bottleneck_v1/conv3' % name, is_training=False, activation_fn=None) + # add + outputs = self._add([o_b1,o_b2c], name='%s/bottleneck_v1/add' % name) + # relu + outputs = self._relu(outputs, name='%s/bottleneck_v1/relu' % name) + return outputs + + def _dilated_bottle_resblock(self, x, num_o, dilation_factor, name, identity_connection=True): + assert num_o % 4 == 0, 'Bottleneck number of output ERROR!'
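The dilated branches used in these blocks enlarge the receptive field without adding parameters: a k-tap kernel at rate d covers (k - 1) * d + 1 input positions. A 1-D NumPy sketch with toy data, using 'valid' padding for simplicity rather than the SAME padding used above:

```python
import numpy as np

def dilated_conv1d_valid(x, w, rate):
    # 'Valid' 1-D atrous convolution: tap the input every `rate` samples,
    # so a kernel of size k spans (k - 1) * rate + 1 input positions.
    k = len(w)
    span = (k - 1) * rate + 1
    return np.array([np.dot(x[i:i + span:rate], w)
                     for i in range(len(x) - span + 1)])

x = np.arange(8, dtype=float)    # [0, 1, ..., 7] (toy signal)
w = np.array([1.0, 1.0, 1.0])    # 3-tap summing kernel

# rate=1 is an ordinary convolution; rate=2 skips every other sample,
# so each output sees a window of 5 input positions instead of 3.
assert np.allclose(dilated_conv1d_valid(x, w, 1), [3, 6, 9, 12, 15, 18])
assert np.allclose(dilated_conv1d_valid(x, w, 2), [6, 9, 12, 15])
```

This is why the ASPP head can probe the feature map at rates 6, 12, 18, and 24 with identically sized 3x3 kernels.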
+ # branch1 + if not identity_connection: + o_b1 = self._conv2d(x, 1, num_o, 1, name='%s/bottleneck_v1/shortcut' % name) + o_b1 = self._batch_norm(o_b1, name='%s/bottleneck_v1/shortcut' % name, is_training=False, activation_fn=None) + else: + o_b1 = x + # branch2 + o_b2a = self._conv2d(x, 1, num_o // 4, 1, name='%s/bottleneck_v1/conv1' % name) + o_b2a = self._batch_norm(o_b2a, name='%s/bottleneck_v1/conv1' % name, is_training=False, activation_fn=tf.nn.relu) + + o_b2b = self._dilated_conv2d(o_b2a, 3, num_o // 4, dilation_factor, name='%s/bottleneck_v1/conv2' % name) + o_b2b = self._batch_norm(o_b2b, name='%s/bottleneck_v1/conv2' % name, is_training=False, activation_fn=tf.nn.relu) + + o_b2c = self._conv2d(o_b2b, 1, num_o, 1, name='%s/bottleneck_v1/conv3' % name) + o_b2c = self._batch_norm(o_b2c, name='%s/bottleneck_v1/conv3' % name, is_training=False, activation_fn=None) + # add + outputs = self._add([o_b1,o_b2c], name='%s/bottleneck_v1/add' % name) + # relu + outputs = self._relu(outputs, name='%s/bottleneck_v1/relu' % name) + return outputs + + def _ASPP(self, x, num_o, dilations): + o = [] + for i, d in enumerate(dilations): + o.append(self._dilated_conv2d(x, 3, num_o, d, name='aspp/conv%d' % (i+1), biased=True)) + return self._add(o, name='aspp/add') + + # layers + def _conv2d(self, x, kernel_size, num_o, stride, name, biased=False): + """ + Conv2d without BN or relu. + """ + num_x = x.shape[self.channel_axis].value + with tf.variable_scope(name) as scope: + w = tf.get_variable('weights', shape=[kernel_size, kernel_size, num_x, num_o]) + s = [1, stride, stride, 1] + o = tf.nn.conv2d(x, w, s, padding='SAME') + if biased: + b = tf.get_variable('biases', shape=[num_o]) + o = tf.nn.bias_add(o, b) + return o + + def _dilated_conv2d(self, x, kernel_size, num_o, dilation_factor, name, biased=False): + """ + Dilated conv2d without BN or relu.
+ """ + num_x = x.shape[self.channel_axis].value + with tf.variable_scope(name) as scope: + w = tf.get_variable('weights', shape=[kernel_size, kernel_size, num_x, num_o]) + o = tf.nn.atrous_conv2d(x, w, dilation_factor, padding='SAME') + if biased: + b = tf.get_variable('biases', shape=[num_o]) + o = tf.nn.bias_add(o, b) + return o + + def _relu(self, x, name): + return tf.nn.relu(x, name=name) + + def _add(self, x_l, name): + return tf.add_n(x_l, name=name) + + def _max_pool2d(self, x, kernel_size, stride, name): + k = [1, kernel_size, kernel_size, 1] + s = [1, stride, stride, 1] + return tf.nn.max_pool(x, k, s, padding='SAME', name=name) + + def _batch_norm(self, x, name, is_training, activation_fn, trainable=False): + # For a small batch size, it is better to keep + # the statistics of the BN layers (running means and variances) frozen, + # and to not update the values provided by the pre-trained model by setting is_training=False. + # Note that is_training=False still updates BN parameters gamma (scale) and beta (offset) + # if they are presented in var_list of the optimiser definition. + # Set trainable = False to remove them from trainable_variables. 
+ with tf.variable_scope(name+'/BatchNorm') as scope: + o = tf.contrib.layers.batch_norm( + x, + scale=True, + activation_fn=activation_fn, + is_training=is_training, + trainable=trainable, + scope=scope) + return o diff --git a/Codes/Deeplab_network/plot_training_curve.py b/Codes/Deeplab_network/plot_training_curve.py new file mode 100644 index 0000000..7c95b1f --- /dev/null +++ b/Codes/Deeplab_network/plot_training_curve.py @@ -0,0 +1,44 @@ +import matplotlib.pyplot as plt +import numpy as np + +LOG_FILE = './log.txt' + +def get_log(log): + f = open(log, 'r') + lines = f.readlines() + f.close() + + loss = [] + for line in lines: + loss.append(float(line.strip('\n').split(' ')[1])) + + return loss + +def plot_iteration(log): + loss = get_log(log) + plt.plot(range(len(loss)), loss) + plt.xlabel('Iteration') + plt.ylabel('Loss') + plt.title('Training Curve') + plt.show() + +def plot_epoch(log, num_samples, batch_size): + """Avg for each epoch + num_samples: number of samples in the training dataset + batch_size: training batch size + """ + loss = get_log(log) + epochs = len(loss) * batch_size // num_samples + iters_per_epochs = num_samples // batch_size + x = range(0, epochs+1) + y = [loss[0]] + for i in range(epochs): + y.append(np.mean(np.array(loss[i*iters_per_epochs+1: (i+1)*iters_per_epochs+1]))) + plt.plot(x, y) + plt.xlabel('Epoch') + plt.ylabel('Loss') + plt.title('Training Curve') + plt.show() + +if __name__ == '__main__': + plot_epoch(LOG_FILE, 10582, 10) \ No newline at end of file diff --git a/Codes/Deeplab_network/utils/__init__.py b/Codes/Deeplab_network/utils/__init__.py new file mode 100644 index 0000000..df26f9e --- /dev/null +++ b/Codes/Deeplab_network/utils/__init__.py @@ -0,0 +1,3 @@ +from .image_reader import ImageReader, read_labeled_image_list +from .label_utils import decode_labels, inv_preprocess, prepare_label +from .write_to_log import write_log \ No newline at end of file diff --git a/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-34.pyc b/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-34.pyc new file mode 100644 index 0000000..b04a04d Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-34.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-35.pyc b/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-35.pyc new file mode 100644 index 0000000..ff399f8 Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-35.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-36.pyc b/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-36.pyc new file mode 100644 index 0000000..1dd53a9 Binary files /dev/null and
b/Codes/Deeplab_network/utils/__pycache__/__init__.cpython-36.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-34.pyc b/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-34.pyc new file mode 100644 index 0000000..ed2879d Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-34.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-35.pyc b/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-35.pyc new file mode 100644 index 0000000..faf4a14 Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-35.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-36.pyc b/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-36.pyc new file mode 100644 index 0000000..4f2b254 Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/image_reader.cpython-36.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-34.pyc b/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-34.pyc new file mode 100644 index 0000000..785f608 Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-34.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-35.pyc b/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-35.pyc new file mode 100644 index 0000000..5253535 Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-35.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-36.pyc b/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-36.pyc new file mode 100644 index 0000000..ccc6e10 Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/label_utils.cpython-36.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-34.pyc 
b/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-34.pyc new file mode 100644 index 0000000..c4cb556 Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-34.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-35.pyc b/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-35.pyc new file mode 100644 index 0000000..6413f6d Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-35.pyc differ diff --git a/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-36.pyc b/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-36.pyc new file mode 100644 index 0000000..53eee96 Binary files /dev/null and b/Codes/Deeplab_network/utils/__pycache__/write_to_log.cpython-36.pyc differ diff --git a/Codes/Deeplab_network/utils/image_reader.py b/Codes/Deeplab_network/utils/image_reader.py new file mode 100644 index 0000000..4686736 --- /dev/null +++ b/Codes/Deeplab_network/utils/image_reader.py @@ -0,0 +1,180 @@ +import os + +import numpy as np +import tensorflow as tf + +def image_scaling(img, label): + """ + Randomly scales the images between 0.5 and 1.5 times the original size. + + Args: + img: Training image to scale. + label: Segmentation mask to scale. + """ + + scale = tf.random_uniform([1], minval=0.5, maxval=1.5, dtype=tf.float32, seed=None) + h_new = tf.to_int32(tf.multiply(tf.to_float(tf.shape(img)[0]), scale)) + w_new = tf.to_int32(tf.multiply(tf.to_float(tf.shape(img)[1]), scale)) + new_shape = tf.squeeze(tf.stack([h_new, w_new]), squeeze_dims=[1]) + img = tf.image.resize_images(img, new_shape) + label = tf.image.resize_nearest_neighbor(tf.expand_dims(label, 0), new_shape) + label = tf.squeeze(label, squeeze_dims=[0]) + + return img, label + +def image_mirroring(img, label): + """ + Randomly mirrors the images. + + Args: + img: Training image to mirror. + label: Segmentation mask to mirror.
+ """ + + distort_left_right_random = tf.random_uniform([1], 0, 1.0, dtype=tf.float32)[0] + mirror = tf.less(tf.stack([1.0, distort_left_right_random, 1.0]), 0.5) + mirror = tf.boolean_mask([0, 1, 2], mirror) + img = tf.reverse(img, mirror) + label = tf.reverse(label, mirror) + return img, label + +def random_crop_and_pad_image_and_labels(image, label, crop_h, crop_w, ignore_label=255): + """ + Randomly crop and pads the input images. + + Args: + image: Training image to crop/ pad. + label: Segmentation mask to crop/ pad. + crop_h: Height of cropped segment. + crop_w: Width of cropped segment. + ignore_label: Label to ignore during the training. + """ + + label = tf.cast(label, dtype=tf.float32) + label = label - ignore_label # Needs to be subtracted and later added due to 0 padding. + combined = tf.concat(axis=2, values=[image, label]) + image_shape = tf.shape(image) + combined_pad = tf.image.pad_to_bounding_box(combined, 0, 0, tf.maximum(crop_h, image_shape[0]), tf.maximum(crop_w, image_shape[1])) + + last_image_dim = tf.shape(image)[-1] + # last_label_dim = tf.shape(label)[-1] + combined_crop = tf.random_crop(combined_pad, [crop_h, crop_w, 4]) + img_crop = combined_crop[:, :, :last_image_dim] + label_crop = combined_crop[:, :, last_image_dim:] + label_crop = label_crop + ignore_label + label_crop = tf.cast(label_crop, dtype=tf.uint8) + + # Set static shape so that tensorflow knows shape at compile time. + img_crop.set_shape((crop_h, crop_w, 3)) + label_crop.set_shape((crop_h,crop_w, 1)) + return img_crop, label_crop + +def read_labeled_image_list(data_dir, data_list): + """Reads txt file containing paths to images and ground truth masks. + + Args: + data_dir: path to the directory with images and masks. + data_list: path to the file with lines of the form '/path/to/image /path/to/mask'. + + Returns: + Two lists with all file names for images and masks, respectively. 
+ """ + f = open(data_list, 'r') + images = [] + masks = [] + for line in f: + try: + image, mask = line.strip("\n").split(' ') + except ValueError: # Adhoc for test. + image = mask = line.strip("\n") + images.append(data_dir + image) + masks.append(data_dir + mask) + return images, masks + +def read_images_from_disk(input_queue, input_size, random_scale, random_mirror, ignore_label, img_mean): # optional pre-processing arguments + """Read one image and its corresponding mask with optional pre-processing. + + Args: + input_queue: tf queue with paths to the image and its mask. + input_size: a tuple with (height, width) values. + If not given, return images of original size. + random_scale: whether to randomly scale the images prior + to random crop. + random_mirror: whether to randomly mirror the images prior + to random crop. + ignore_label: index of label to ignore during the training. + img_mean: vector of mean colour values. + + Returns: + Two tensors: the decoded image and its mask. + """ + + img_contents = tf.read_file(input_queue[0]) + label_contents = tf.read_file(input_queue[1]) + + img = tf.image.decode_jpeg(img_contents, channels=3) + img_r, img_g, img_b = tf.split(axis=2, num_or_size_splits=3, value=img) + img = tf.cast(tf.concat(axis=2, values=[img_b, img_g, img_r]), dtype=tf.float32) + # Extract mean. + img -= img_mean + + label = tf.image.decode_png(label_contents, channels=1) + + if input_size is not None: + h, w = input_size + + # Randomly scale the images and labels. + if random_scale: + img, label = image_scaling(img, label) + + # Randomly mirror the images and labels. + if random_mirror: + img, label = image_mirroring(img, label) + + # Randomly crops the images and labels. + img, label = random_crop_and_pad_image_and_labels(img, label, h, w, ignore_label) + + return img, label + +class ImageReader(object): + '''Generic ImageReader which reads images and corresponding segmentation + masks from the disk, and enqueues them into a TensorFlow queue. 
+ ''' + + def __init__(self, data_dir, data_list, input_size, + random_scale, random_mirror, ignore_label, img_mean, coord): + '''Initialise an ImageReader. + + Args: + data_dir: path to the directory with images and masks. + data_list: path to the file with lines of the form '/path/to/image /path/to/mask'. + input_size: a tuple with (height, width) values, to which all the images will be resized. + random_scale: whether to randomly scale the images prior to random crop. + random_mirror: whether to randomly mirror the images prior to random crop. + ignore_label: index of label to ignore during the training. + img_mean: vector of mean colour values. + coord: TensorFlow queue coordinator. + ''' + self.data_dir = data_dir + self.data_list = data_list + self.input_size = input_size + self.coord = coord + + self.image_list, self.label_list = read_labeled_image_list(self.data_dir, self.data_list) + self.images = tf.convert_to_tensor(self.image_list, dtype=tf.string) + self.labels = tf.convert_to_tensor(self.label_list, dtype=tf.string) + self.queue = tf.train.slice_input_producer([self.images, self.labels], + shuffle=input_size is not None) # not shuffling if it is val + self.image, self.label = read_images_from_disk(self.queue, self.input_size, random_scale, random_mirror, ignore_label, img_mean) + + def dequeue(self, num_elements): + '''Pack images and labels into a batch. + + Args: + num_elements: the batch size. 
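`ImageReader` hands `tf.train.slice_input_producer` the image and label lists together, so a single permutation shuffles both and the image/mask pairing survives (and shuffling is disabled for validation via `shuffle=input_size is not None`). A plain-Python sketch of why joint shuffling matters; the file names are illustrative:

```python
import random

# Shuffle image/mask pairs jointly, as slice_input_producer does;
# shuffling each list independently would break the pairing.
images = ['a.jpg', 'b.jpg', 'c.jpg']
masks = ['a.png', 'b.png', 'c.png']

pairs = list(zip(images, masks))
random.Random(0).shuffle(pairs)   # one permutation applied to both lists

for img, msk in pairs:
    assert img.split('.')[0] == msk.split('.')[0]   # pairing intact
```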
+ + Returns: + Two tensors of size (batch_size, h, w, {3, 1}) for images and masks.''' + image_batch, label_batch = tf.train.batch([self.image, self.label], + num_elements) + return image_batch, label_batch diff --git a/Codes/Deeplab_network/utils/label_utils.py b/Codes/Deeplab_network/utils/label_utils.py new file mode 100644 index 0000000..bd824e6 --- /dev/null +++ b/Codes/Deeplab_network/utils/label_utils.py @@ -0,0 +1,78 @@ +from PIL import Image +import numpy as np +import tensorflow as tf + +# colour map +label_colours = [(0,0,0) + # 0=background + ,(128,0,0),(0,128,0),(128,128,0),(0,0,128),(128,0,128) + # 1=aeroplane, 2=bicycle, 3=bird, 4=boat, 5=bottle + ,(0,128,128),(128,128,128),(64,0,0),(192,0,0),(64,128,0) + # 6=bus, 7=car, 8=cat, 9=chair, 10=cow + ,(192,128,0),(64,0,128),(192,0,128),(64,128,128),(192,128,128) + # 11=diningtable, 12=dog, 13=horse, 14=motorbike, 15=person + ,(0,64,0),(128,64,0),(0,192,0),(128,192,0),(0,64,128)] + # 16=potted plant, 17=sheep, 18=sofa, 19=train, 20=tv/monitor + +def decode_labels(mask, num_images=1, num_classes=2): + """Decode batch of segmentation masks. + + Args: + mask: result of inference after taking argmax. + num_images: number of images to decode from the batch. + num_classes: number of classes to predict (including background). + + Returns: + A batch with num_images RGB images of the same size as the input. + """ + n, h, w, c = mask.shape + assert(n >= num_images), 'Batch size %d should be greater or equal than number of images to save %d.' % (n, num_images) + outputs = np.zeros((num_images, h, w, 3), dtype=np.uint8) + for i in range(num_images): + img = Image.new('RGB', (len(mask[i, 0]), len(mask[i]))) # Size is given as a (width, height)-tuple. 
+ pixels = img.load() + for j_, j in enumerate(mask[i, :, :, 0]): + for k_, k in enumerate(j): + if k < num_classes: + pixels[k_,j_] = label_colours[k] + outputs[i] = np.array(img) + return outputs + +def prepare_label(input_batch, new_size, num_classes, one_hot=True): + """Resize masks and perform one-hot encoding. + + Args: + input_batch: input tensor of shape [batch_size H W 1]. + new_size: a tensor with new height and width. + num_classes: number of classes to predict (including background). + one_hot: whether to perform one-hot encoding. + + Returns: + Outputs a tensor of shape [batch_size h w num_classes] + with last dimension comprised of 0's and 1's only. + """ + with tf.name_scope('label_encode'): + input_batch = tf.image.resize_nearest_neighbor(input_batch, new_size) # as labels are integer numbers, need to use NN interp. + input_batch = tf.squeeze(input_batch, squeeze_dims=[3]) # reducing the channel dimension. + if one_hot: + input_batch = tf.one_hot(input_batch, depth=num_classes) + return input_batch + +def inv_preprocess(imgs, num_images, img_mean): + """Inverse preprocessing of the batch of images. + Add the mean vector and convert from BGR to RGB. + + Args: + imgs: batch of input images. + num_images: number of images to apply the inverse transformations on. + img_mean: vector of mean colour values. + + Returns: + A batch of num_images images with the same spatial dimensions as the input. + """ + n, h, w, c = imgs.shape + assert(n >= num_images), 'Batch size %d should be greater than or equal to number of images to save %d.'
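The per-pixel loop in `decode_labels` above can be collapsed into a single fancy-indexing lookup: store the colour map as an array and index it with the class map directly. A sketch using only the first three VOC colours (the tiny 2x2 mask is illustrative):

```python
import numpy as np

# Vectorized alternative to decode_labels' nested pixel loop:
# index a colour palette array with the class-index map.
palette = np.array([(0, 0, 0), (128, 0, 0), (0, 128, 0)], dtype=np.uint8)

mask = np.array([[0, 1], [2, 1]])   # H x W map of class indices
rgb = palette[mask]                 # H x W x 3 colour image in one step

assert rgb.shape == (2, 2, 3)
assert (rgb[0, 1] == (128, 0, 0)).all()
```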
% (n, num_images) + outputs = np.zeros((num_images, h, w, c), dtype=np.uint8) + for i in range(num_images): + outputs[i] = (imgs[i] + img_mean)[:, :, ::-1].astype(np.uint8) + return outputs diff --git a/Codes/Deeplab_network/utils/write_to_log.py b/Codes/Deeplab_network/utils/write_to_log.py new file mode 100644 index 0000000..5d06196 --- /dev/null +++ b/Codes/Deeplab_network/utils/write_to_log.py @@ -0,0 +1,3 @@ +def write_log(str, filename): + with open(filename, 'a') as f: + f.write(str + "\n") diff --git a/Codes/Deeplab_network/viewPredictions.py b/Codes/Deeplab_network/viewPredictions.py new file mode 100644 index 0000000..f2f0e82 --- /dev/null +++ b/Codes/Deeplab_network/viewPredictions.py @@ -0,0 +1,55 @@ +import glob +import cv2 + + + + +predictionAddress='/hdd/wsi_fun/Codes/Deeplab-v2--ResNet-101/outputAugmentTest/prediction/*.png' +imageAddress='/hdd/wsi_fun/ImageAugCustom/AugmentationOutput/Images/' +maskList=glob.glob(predictionAddress) +Total=len(maskList) + + +for im in range(0,Total): + fileID=maskList[im].split('/') + fileID=fileID[-1] + fileID=fileID.split('_mask') + fileID=fileID[0] + if 'K14' in fileID: + + mask=(cv2.imread(maskList[im],0)+1)*.5 + image=cv2.imread(imageAddress + fileID + '.jpeg',1) + image[:,:,0]=image[:,:,0]*(mask) + image[:,:,1]=image[:,:,1]*(mask) + image[:,:,2]=image[:,:,2]*(mask) + cv2.imshow('image',image) + cv2.waitKey(500) + elif 'K17' in fileID: + + mask=(cv2.imread(maskList[im],0)+1)*.5 + image=cv2.imread(imageAddress + fileID + '.jpeg',1) + image[:,:,0]=image[:,:,0]*(mask) + image[:,:,1]=image[:,:,1]*(mask) + image[:,:,2]=image[:,:,2]*(mask) + cv2.imshow('image',image) + cv2.waitKey(500) + elif 'K13' in fileID: + + mask=(cv2.imread(maskList[im],0)+1)*.5 + image=cv2.imread(imageAddress + fileID + '.jpeg',1) + image[:,:,0]=image[:,:,0]*(mask) + image[:,:,1]=image[:,:,1]*(mask) + image[:,:,2]=image[:,:,2]*(mask) + cv2.imshow('image',image) + cv2.waitKey(500) + elif 'K16' in fileID: + + 
mask=(cv2.imread(maskList[im],0)+1)*.5 + image=cv2.imread(imageAddress + fileID + '.jpeg',1) + image[:,:,0]=image[:,:,0]*(mask) + image[:,:,1]=image[:,:,1]*(mask) + image[:,:,2]=image[:,:,2]*(mask) + cv2.imshow('image',image) + cv2.waitKey(500) + else: + continue diff --git a/Codes/InitializeFolderStructure.py b/Codes/InitializeFolderStructure.py new file mode 100644 index 0000000..66c1dc2 --- /dev/null +++ b/Codes/InitializeFolderStructure.py @@ -0,0 +1,124 @@ +import os +import numpy as np +from glob import glob +from shutil import rmtree,move,copyfile,copy + +def purge_training_set(args): + rmtree(args.base_dir + '/' + args.project + '/Permanent/') + rmtree(args.base_dir + '/' + args.project + '/TempHR/') + rmtree(args.base_dir + '/' + args.project + '/TempLR/') + initFolder(args=args) + +def prune_training_set(args): + # prune HR dataset + regions_path = args.base_dir + '/' + args.project + '/Permanent/HR/regions/' + masks_path = args.base_dir + '/' + args.project + '/Permanent/HR/masks/' + prune_percent = args.prune_HR + prune_data(regions_path, masks_path, prune_percent, args) + + # prune LR dataset + regions_path = args.base_dir + '/' + args.project + '/Permanent/LR/regions/' + masks_path = args.base_dir + '/' + args.project + '/Permanent/LR/masks/' + prune_percent = args.prune_LR + prune_data(regions_path, masks_path, prune_percent, args) + +def prune_data(regions_path, masks_path, prune_percent, args): + imgs = glob(regions_path + '*' + args.imBoxExt) + if imgs == None: + return + + keep = (np.random.rand(len(imgs))) >= prune_percent + for idx,img in enumerate(imgs): + if keep[idx] == False: + filename = os.path.basename(img) + mask = masks_path + '/' + os.path.splitext(filename)[0] + '.png' + os.remove(img) # remove region + os.remove(mask) # remove mask + print(img) + + +def initFolder(args): + dirs = {'imExt': '.jpeg'} + dirs['basedir'] = args.base_dir + dirs['maskExt'] = '.png' + dirs['modeldir'] = '/MODELS/' + dirs['tempdirLR'] = '/TempLR/' + 
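The four branches of viewPredictions.py differ only in the keyword they test ('K14', 'K17', 'K13', 'K16'); the overlay itself can live in one helper. A self-contained NumPy sketch of that helper (`overlay` is a hypothetical refactoring; cv2 I/O and display are omitted so the example runs anywhere):

```python
import numpy as np

def overlay(image, mask):
    """Dim each channel by (mask + 1) * 0.5, as viewPredictions.py does."""
    weight = (mask.astype(np.float32) + 1) * 0.5
    return (image.astype(np.float32) * weight[..., None]).astype(np.uint8)

image = np.full((2, 2, 3), 100, dtype=np.uint8)
mask = np.array([[0, 1], [1, 0]], dtype=np.uint8)  # binary prediction map
out = overlay(image, mask)

assert (out[0, 0] == 50).all()   # mask 0 -> weight 0.5, pixels dimmed
assert (out[0, 1] == 100).all()  # mask 1 -> weight 1.0, pixels kept
```

With the helper in place, the branch chain reduces to a single membership test such as `if any(k in fileID for k in ('K14', 'K17', 'K13', 'K16')):`.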
dirs['tempdirHR'] = '/TempHR/' + dirs['pretraindir'] = '/Deeplab_network/' + dirs['training_data_dir'] = '/TRAINING_data/' + dirs['validation_data_dir'] = '/HOLDOUT_data/' + dirs['project']= '/' + args.project + dirs['data_dir_HR'] = args.base_dir + args.project + '/Permanent/HR/' + dirs['data_dir_LR'] = args.base_dir + args.project + '/Permanent/LR/' + initializeFolderStructure(dirs,args) + print('Please add xmls/svs files to the newest TRAINING_data folder.') + print('Please add model(s) to the MODELS/0/ folder.') + + +def initializeFolderStructure(dirs,args): + make_folder(dirs['basedir'] +dirs['project'] + dirs['modeldir'] + str(0) + '/LR/') + make_folder(dirs['basedir'] +dirs['project']+ dirs['modeldir'] + str(0) + '/HR/') + + if args.transfer==' ': + pass + else: + + modelsCurrent=os.listdir(dirs['basedir'] + '/' + args.transfer + dirs['modeldir']) + gens=list(map(int,modelsCurrent)) # list() needed in Python 3: np.max cannot consume a map object + modelOrder=np.argsort(gens) + modelLast=np.max(gens) + pretrainsLR=glob(dirs['basedir']+ '/' + args.transfer + dirs['modeldir'] + str(modelLast) + '/LR/' + 'model*') + pretrainsHR=glob(dirs['basedir']+ '/' + args.transfer + dirs['modeldir'] + str(modelLast) + '/HR/' + 'model*') + + maxmodelLR=0 + maxmodelHR=0 + for modelfiles in pretrainsLR: + modelID=int(modelfiles.split('.')[-2].split('-')[1]) + if modelID>maxmodelLR: + maxmodelLR=modelID + for modelfiles in pretrainsHR: + modelID=int(modelfiles.split('.')[-2].split('-')[1]) + if modelID>maxmodelHR: + maxmodelHR=modelID + + pretrain_filesLR=glob(dirs['basedir']+ '/' + args.transfer + dirs['modeldir'] + str(modelLast) + '/LR/' + 'model.ckpt-' + str(maxmodelLR) + '*') + pretrain_filesHR=glob(dirs['basedir']+ '/' + args.transfer + dirs['modeldir'] + str(modelLast) + '/HR/' + 'model.ckpt-' + str(maxmodelHR) + '*') + for file in pretrain_filesLR: + copy(file,dirs['basedir'] +dirs['project']+ dirs['modeldir'] + str(0) + '/LR/') + + for file in pretrain_filesHR: + copy(file,dirs['basedir'] +dirs['project']+ dirs['modeldir'] + str(0) + '/HR/') + + + 
make_folder(dirs['basedir']+dirs['project'] + dirs['training_data_dir'] + str(0)) + + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/regions') + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/masks') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/Augment' +'/regions') + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/Augment' +'/masks') + + make_folder(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/regions') + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirHR'] + '/masks') + + make_folder(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/Augment' +'/regions') + make_folder(dirs['basedir']+dirs['project']+ dirs['tempdirHR'] + '/Augment' +'/masks') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['modeldir']) + make_folder(dirs['basedir'] +dirs['project']+ dirs['training_data_dir']) + make_folder(dirs['basedir'] +dirs['project']+ dirs['validation_data_dir']) + + + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/LR/'+ 'regions/') + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/LR/'+ 'masks/') + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/HR/'+ 'regions/') + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/HR/'+ 'masks/') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['training_data_dir']) + + make_folder(dirs['basedir'] + '/Codes/Deeplab_network/datasetLR') + make_folder(dirs['basedir'] + '/Codes/Deeplab_network/datasetHR') + + +def make_folder(directory): + if not os.path.exists(directory): + os.makedirs(directory) # make directory if it does not exit already # make new directory # Check if folder exists, if not make it diff --git a/Codes/InitializeFolderStructure.pyc b/Codes/InitializeFolderStructure.pyc new file mode 100644 index 0000000..39c85ba Binary files /dev/null and b/Codes/InitializeFolderStructure.pyc differ diff --git a/Codes/IterativePredict.py 
b/Codes/IterativePredict.py new file mode 100644 index 0000000..0719094 --- /dev/null +++ b/Codes/IterativePredict.py @@ -0,0 +1,741 @@ +import cv2 +import numpy as np +import os +import sys +import argparse +import multiprocessing +import lxml.etree as ET +import warnings +import time + +sys.path.append(os.getcwd()+'/Codes') + +from glob import glob +from subprocess import call +from joblib import Parallel, delayed +from skimage.io import imread, imsave +from skimage.transform import resize +from scipy.ndimage.measurements import label +from skimage.segmentation import clear_border +from skimage.morphology import remove_small_objects +from skimage import color +from PIL import Image +from shutil import rmtree +from IterativeTraining import get_num_classes +from get_choppable_regions import get_choppable_regions +from get_network_performance import get_perf +from matplotlib import pyplot as plt + + +# define xml class colormap +xml_color = [65280, 65535, 255, 16711680, 33023] + +def validate(args): + # define folder structure dict + dirs = {'outDir': args.base_dir + '/' + args.project + args.outDir} + dirs['txt_save_dir'] = '/txt_files/' + dirs['img_save_dir'] = '/img_files/' + dirs['final_output_dir'] = '/boundaries/' + dirs['final_boundary_image_dir'] = '/images/' + dirs['mask_dir'] = '/wsi_mask/' + dirs['chopped_dir'] = '/originals/' + dirs['crop_dir'] = '/wsi_crops/' + dirs['save_outputs'] = args.save_outputs + dirs['modeldir'] = '/MODELS/' + dirs['training_data_dir'] = '/TRAINING_data/' + dirs['validation_data_dir'] = '/HOLDOUT_data/' + + # find current iteration + if args.iteration == 'none': + iteration = get_iteration(args=args) + else: + iteration = int(args.iteration) + + # get all WSIs + WSIs = [] + for ext in [args.wsi_ext]: + WSIs.extend(glob(args.base_dir + '/' + args.project + dirs['validation_data_dir'] + '/*' + ext)) + + if iteration == 'none': + print('ERROR: no trained models found \n\tplease use [--option train]') + + else: + for iter in range(1,iteration+1): + 
dirs['xml_save_dir'] = args.base_dir + '/' + args.project + dirs['validation_data_dir'] + str(iter) + '_Predicted_XMLs/' + + + # check main directory exists + make_folder(dirs['outDir']) + + if not os.path.exists(dirs['xml_save_dir']): + make_folder(dirs['xml_save_dir']) + print('working on iteration: ' + str(iter)) + + with open(args.base_dir + '/' + args.project + dirs['validation_data_dir'] + 'validation_stats.txt', 'a') as f: + f.write('\niteration: \t'+str(iter)+'\n') + f.write('\twsi\t\t\tsensitivity\t\t\tspecificity\t\t\tprecision\t\t\taccuracy\t\t\tprediction time\n') + + for wsi in WSIs: + # predict xmls + startTime = time.time() + predict_xml(args=args, dirs=dirs, wsi=wsi, iteration=iter) + predictTime = time.time() - startTime + # test performance + gt_xml = os.path.splitext(wsi)[0] + '.xml' + predicted_xml = gt_xml.split('/') + predicted_xml = dirs['xml_save_dir'] + predicted_xml[-1] + sensitivity,specificity,precision,accuracy = get_perf(wsi=wsi, xml1=gt_xml, xml2 = predicted_xml, args=args) + + with open(args.base_dir + '/' + args.project + dirs['validation_data_dir'] + 'validation_stats.txt', 'a') as f: + f.write('\t'+wsi.split('/')[-1]+'\t\t'+str(sensitivity)+'\t\t'+str(specificity)+'\t\t'+str(precision)+'\t\t'+str(accuracy)+'\t\t'+str(predictTime)+'\n') + + print('\n\n\033[92;5mDone validating: \n\t\033[0m\n') + +def predict(args): + # define folder structure dict + dirs = {'outDir': args.base_dir + '/' + args.project + args.outDir} + dirs['txt_save_dir'] = '/txt_files/' + dirs['img_save_dir'] = '/img_files/' + dirs['final_output_dir'] = '/boundaries/' + dirs['final_boundary_image_dir'] = '/images/' + dirs['mask_dir'] = '/wsi_mask/' + dirs['chopped_dir'] = '/originals/' + dirs['crop_dir'] = '/wsi_crops/' + dirs['save_outputs'] = args.save_outputs + dirs['modeldir'] = '/MODELS/' + dirs['training_data_dir'] = '/TRAINING_data/' + + # find current iteration + if args.iteration == 'none': + iteration = get_iteration(args=args) + else: + iteration = 
int(args.iteration) + + print(iteration) + dirs['xml_save_dir'] = args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration) + '/Predicted_XMLs/' + + if iteration == 'none': + print('ERROR: no trained models found \n\tplease use [--option train]') + + else: + # check main directory exists + make_folder(dirs['outDir']) + make_folder(dirs['xml_save_dir']) + + # get all WSIs + WSIs = [] + for ext in [args.wsi_ext]: + + WSIs.extend(glob(args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration)+ '/*' + ext)) + + for wsi in WSIs: + #try: + predict_xml(args=args, dirs=dirs, wsi=wsi, iteration=iteration) + #except KeyboardInterrupt: + # break + #except: + # print('!!! Prediction on ' + wsi + ' failed\nmoving on...') + print('\n\n\033[92;5mPlease correct the xml annotations found in: \n\t' + dirs['xml_save_dir']) + print('\nthen place them in: \n\t'+ args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration) + '/') + print('\nand run [--option train]\033[0m\n') + + +def predict_xml(args, dirs, wsi, iteration): + # reshape regions calc + downsample = int(args.downsampleRateLR**.5) + downsample_HR = int(args.downsampleRateHR**.5) + region_size = int(args.boxSizeLR*(downsample)) + step = int(region_size*(1-args.overlap_percentLR)) + + + # figure out the number of classes + if args.classNum == 0: + annotatedXMLs=glob(args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration-1) + '/*.xml') + classes = [] + for xml in annotatedXMLs: + classes.append(get_num_classes(xml)) + print(classes) + print(args.classNum) + classNum_LR = max(classes) + classNum_HR = max(classes) + else: + classNum_LR = args.classNum + if args.classNum_HR != 0: + classNum_HR = args.classNum_HR + else: + classNum_HR = classNum_LR + + # chop wsi + if args.chop_data == 'True': + # chop wsi + fileID, test_num_steps = chop_suey(wsi, dirs, downsample, region_size, step, args) + dirs['fileID'] = fileID + print('Chop SUEY!\n') + 
else: + basename = os.path.splitext(wsi)[0] + + if wsi.split('.')[-1] != 'tif': + slide=getWsi(wsi) + # get image dimensions + dim_x, dim_y=slide.dimensions + else: + im = Image.open(wsi) + dim_x, dim_y=im.size + + fileID=basename.split('/') + dirs['fileID'] = fileID=fileID[len(fileID)-1] + test_num_steps = file_len(dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt") + + # call DeepLab for prediction (Low resolution) + print('finding Glom locations ...\n') + + make_folder(dirs['outDir'] + fileID + dirs['img_save_dir'] + 'prediction') + + test_data_list = fileID + '_images' + '.txt' + modeldir = args.base_dir + '/' + args.project + dirs['modeldir'] + str(iteration) + '/LR' + test_step = get_test_step(modeldir) + print("\033[1;32;40m"+"starting prediction using model: \n\t" + modeldir + str(test_step) + "\033[0;37;40m"+"\n\n") + + call(['python3.5', args.base_dir+'/Codes/Deeplab_network/main.py', + '--option', 'predict', + '--test_data_list', dirs['outDir']+fileID+dirs['txt_save_dir']+test_data_list, + '--out_dir', dirs['outDir']+fileID+dirs['img_save_dir'], + '--test_step', str(test_step), + '--test_num_steps', str(test_num_steps), + '--modeldir', modeldir, + '--data_dir', dirs['outDir']+fileID+dirs['img_save_dir'], + '--num_classes', str(classNum_LR), + '--gpu', str(args.gpu), + '--encoder_name',args.encoder_name]) + + # un chop + print('\nreconstructing wsi map ...\n') + wsiMask = un_suey(dirs=dirs, args=args) + + + # save hotspots + if dirs['save_outputs'] == True: + #reduce the resolution of the image + wsidims=wsiMask.shape + wsiMask_save=resize(wsiMask,(int(wsidims[0]/4),int(wsidims[1]/4)),order=0,preserve_range=True) + + make_folder(dirs['outDir'] + fileID + dirs['mask_dir']) + print('saving to: ' + dirs['outDir'] + fileID + dirs['mask_dir'] + fileID + '.png') + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + fileID + dirs['mask_dir'] + fileID + '.png', wsiMask_save*255) + + # find glom 
locations in reconstructed map + print('\ninterpreting prediction map ...') + test_num_steps, labeledArray, label_offsets = find_suey(wsiMask, dirs, downsample, args, wsi) + print('\n\nthe cropped regions have been saved to: ' + dirs['outDir'] + fileID + dirs['img_save_dir'] + fileID + dirs['crop_dir']) + + # call DeepLab to predict Glom boundaries (High resolution) + print('\ngetting Glom boundaries ...\n') + make_folder(dirs['outDir'] + fileID + dirs['final_output_dir'] + 'prediction') + + test_data_list = fileID + '_crops.txt' + modeldir = args.base_dir + '/' + args.project + dirs['modeldir'] + str(iteration) + '/HR' + test_step = get_test_step(modeldir) + print("\033[1;32;40m"+"starting prediction using model: \n\t" + modeldir + str(test_step) + "\033[0;37;40m"+"\n\n") + + call(['python3.5', args.base_dir+'/Codes/Deeplab_network/main.py', + '--option', 'predict', + '--test_data_list', dirs['outDir']+fileID+dirs['txt_save_dir']+test_data_list, + '--out_dir', dirs['outDir']+fileID+dirs['final_output_dir'], + '--test_step', str(test_step), + '--test_num_steps', str(test_num_steps), + '--modeldir', modeldir, + '--data_dir', dirs['outDir']+fileID+dirs['img_save_dir'], + '--num_classes', str(classNum_HR), + '--gpu', str(args.gpu), + '--encoder_name',args.encoder_name]) + + print('\nsaving final images ...') + print('\nstitching high resolution WSI mask:') + + wsiMask_HR=unstitch_HR(dirs=dirs,args=args,wsi=wsi) + wsidims=wsiMask_HR.shape + d1=int(wsidims[0]*(1/args.approx_downsample)) + d2=int(wsidims[1]*(1/args.approx_downsample)) + + if args.approx_downsample!=1: + print('\nDownsampling high resolution mask for prediction smoothing...') + wsiMask_HR=resize(wsiMask_HR,(d1,d2),order=0,preserve_range=True) + #mask_display=resize(wsiMask_HR,(d1,d2)) + #plt.imshow(mask_display/np.max(wsiMask)) + #plt.show() + print('\nGenerating XML annotations from WSI mask...') + make_folder(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + dirs['final_boundary_image_dir']) 
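Note that masks are resized with `order=0` (nearest neighbour) and `preserve_range=True` throughout this file: smooth interpolation between class indices would manufacture labels that exist in neither neighbouring pixel. A small demonstration of both behaviours using plain NumPy:

```python
import numpy as np

mask = np.array([[0, 0], [2, 2]], dtype=np.uint8)  # two-class label map

# Nearest neighbour upsampling only repeats existing class indices.
rows = np.array([0, 0, 1, 1])
nearest = mask[rows][:, rows]
assert set(np.unique(nearest)) == {0, 2}

# Smooth interpolation would average the two rows and invent class 1.
averaged = (mask[0].astype(float) + mask[1]) / 2
assert averaged.tolist() == [1.0, 1.0]
```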
+ crop_suey(wsiMask_HR,label_offsets, dirs, args, classNum_HR, downsample_HR) + + + # clean up + if dirs['save_outputs'] == False: + print('\ncleaning up') + rmtree(dirs['outDir']+fileID) + + +def get_iteration(args): + currentmodels=os.listdir(args.base_dir + '/' + args.project + '/MODELS/') + if not currentmodels: + return 'none' + else: + currentmodels=map(int,currentmodels) + Iteration=np.max(list(currentmodels)) + return Iteration + +def get_test_step(modeldir): + pretrains=glob(modeldir + '/*.ckpt*') + maxmodel=0 + for modelfiles in pretrains: + modelID=modelfiles.split('.')[-2].split('-')[1] + try: + modelID = int(modelID) + if modelID>maxmodel: + maxmodel=modelID + except: pass + return maxmodel + +def make_folder(directory): + if not os.path.exists(directory): + os.makedirs(directory) # make directory if it does not exit already # make new directory + +def restart_line(): # for printing chopped image labels in command line + sys.stdout.write('\r') + sys.stdout.flush() + +def getWsi(path): #imports a WSI + import openslide + slide = openslide.OpenSlide(path) + return slide + +def file_len(fname): # get txt file length (number of lines) + with open(fname) as f: + for i, l in enumerate(f): + pass + + if 'i' in locals(): + return i + 1 + + else: + return 0 + + +def chop_suey(wsi, dirs, downsample, region_size, step, args): # chop wsi + print('\nopening: ' + wsi) + basename = os.path.splitext(wsi)[0] + + slide=getWsi(wsi) + + fileID=basename.split('/') + dirs['fileID'] = fileID=fileID[len(fileID)-1] + print('\nchopping ...\n') + + # make txt file + make_folder(dirs['outDir'] + fileID + dirs['txt_save_dir']) + f_name = dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + ".txt" + f2_name = dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt" + f = open(f_name, 'w') + f2 = open(f2_name, 'w') + f2.close() + + make_folder(dirs['outDir'] + fileID + dirs['img_save_dir'] + dirs['chopped_dir']) + + # get image dimensions + dim_x, 
dim_y=slide.dimensions + f.write('Image dimensions:\n') + + # make index for iters + index_y=range(0,dim_y-step,step) + index_x=range(0,dim_x-step,step) + + f.write('X dim: ' + str((index_x[-1]+region_size)/downsample) +'\n') + f.write('Y dim: ' + str((index_y[-1]+region_size)/downsample) +'\n\n') + f.write('Regions:\n') + f.write('image:xStart:xStop:yStart:yStop\n\n') + f.close() + + # get non white regions + choppable_regions = get_choppable_regions(wsi=wsi, index_x=index_x, index_y=index_y, boxSize=region_size,white_percent=args.white_percent) + + print('saving region:') + + num_cores = multiprocessing.cpu_count() + + Parallel(n_jobs=num_cores, backend='threading')(delayed(chop_wsi)(yStart=i, xStart=j, idxx=idxx, idxy=idxy, f_name=f_name, f2_name=f2_name, dirs=dirs, downsample=downsample, region_size=region_size, args=args, wsi=wsi, choppable_regions=choppable_regions) for idxy, i in enumerate(index_y) for idxx, j in enumerate(index_x)) + + test_num_steps = file_len(dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt") + print('\n\n' + str(test_num_steps) +' image regions chopped') + + return fileID, test_num_steps + +def chop_wsi(yStart, xStart, idxx, idxy, f_name, f2_name, dirs, downsample, region_size, args, wsi, choppable_regions): # perform cutting in parallel + if choppable_regions[idxy, idxx] != 0: + slide = getWsi(wsi) + + yEnd = yStart+region_size + #print(yEnd) + xEnd = xStart+region_size + #print(xEnd) + xLen=xEnd-xStart + yLen=yEnd-yStart + + subsect= np.array(slide.read_region((xStart,yStart),0,(xLen,yLen))) + subsect=subsect[:,:,:3] + + #print(whiteRatio) + imageIter = str(xStart)+str(yStart) + + f = open(f_name, 'a+') + f2 = open(f2_name, 'a+') + + # append txt file + f.write(imageIter + ':' + str(xStart/downsample) + ':' + str(xEnd/downsample) + + ':' + str(yStart/downsample) + ':' + str(yEnd/downsample) + '\n') + + # resize images ans masks + c=(subsect.shape) + s1=int(c[0]/(args.downsampleRateLR**.5)) + 
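The tiling arithmetic used by `predict_xml` and `chop_suey` above: tiles are `region_size` pixels on a side and advance by `step = region_size * (1 - overlap)`, so neighbouring tiles share `overlap_percent` of their width. A worked example with illustrative argument values (not the project's defaults):

```python
# Illustrative values for boxSizeLR, downsampleRateLR, overlap_percentLR.
boxSizeLR, downsampleRateLR, overlap_percentLR = 500, 4, 0.5

downsample = int(downsampleRateLR ** 0.5)            # linear downsample factor
region_size = int(boxSizeLR * downsample)            # tile size at full resolution
step = int(region_size * (1 - overlap_percentLR))    # stride between tile origins

dim_x = 4000
index_x = list(range(0, dim_x - step, step))         # tile x-origins, as in chop_suey

assert (downsample, region_size, step) == (2, 1000, 500)
assert index_x[-1] + region_size <= dim_x            # last tile fits in the slide
```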
s2=int(c[1]/(args.downsampleRateLR**.5)) + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + subsect=(resize(subsect,(s1,s2), mode='constant')*255).astype('uint8') + + # save image + directory = dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['chopped_dir'] + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(directory + dirs['fileID'] + str(imageIter) + args.imBoxExt,subsect) + + f2.write(dirs['chopped_dir'] + dirs['fileID'] + str(imageIter) + args.imBoxExt + '\n') + f.close() + f2.close() + + sys.stdout.write(' <'+str(xStart)+':'+str(xEnd)+' '+str(yStart)+':'+str(yEnd)+'> ') + sys.stdout.flush() + restart_line() + +def unstitch_HR(dirs,args,wsi):# reconstruct wsi from predicted masks, high resolution + image_coordinate_file=dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + dirs['fileID'] + '_crops.txt' + slide=getWsi(wsi) + dim_x, dim_y=slide.dimensions + wsiMask=np.zeros([dim_y,dim_x]) + f = open(image_coordinate_file, 'r') + lines = f.readlines() + f.close() + lines = np.array(lines) + + for regionNum in range(0, np.size(lines)): + + sys.stdout.write(' <'+str(regionNum)+'/'+ str(np.size(lines)-1)+ '> ') + sys.stdout.flush() + restart_line() + # get region + uniqueImageID=lines[regionNum].split('/')[-1] + maskID=uniqueImageID.split('.')[0] + region = maskID.split('_')[-4:] + # read mask + mask = imread(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + 'prediction/'+ maskID + '_mask.png') + + # get region bounds + xStart = np.int32(region[2]) + #print('xStart: ' + str(xStart)) + xStop = np.int32(region[3]) + #print('xStop: ' + str(xStop)) + yStart = np.int32(region[0]) + if yStart < 0: + yStart = 0 + #print('yStart: ' + str(yStart)) + yStop = np.int32(region[1]) + #print('yStop: ' + str(yStop)) + + mask_part = wsiMask[yStart:yStop, xStart:xStop] + ylen, xlen = np.shape(mask_part) + mask = mask[:ylen, :xlen] + + # populate wsiMask with max + #print(np.shape(wsiMask)) + wsiMask[yStart:yStop, 
xStart:xStop] = np.maximum(mask_part, mask) + #wsiMask[yStart:yStop, xStart:xStop] = np.ones([yStop-yStart, xStop-xStart]) + + return wsiMask + +def un_suey(dirs, args): # reconstruct wsi from predicted masks, low resolution + txtFile = dirs['fileID'] + '.txt' + + # read txt file + f = open(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + txtFile, 'r') + lines = f.readlines() + f.close() + lines = np.array(lines) + + # get wsi size + xDim = np.int32(float((lines[1].split(': ')[1]).split('\n')[0])) + yDim = np.int32(float((lines[2].split(': ')[1]).split('\n')[0])) + #print('xDim: ' + str(xDim)) + #print('yDim: ' + str(yDim)) + + # make wsi mask + wsiMask = np.zeros([yDim, xDim]) + + # read image regions + for regionNum in range(7, np.size(lines)): + # get region + region = lines[regionNum].split(':') + region[4] = region[4].split('\n')[0] + + # read mask + mask = imread(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + 'prediction/' + dirs['fileID'] + region[0] + '_mask.png') + + # get region bounds + xStart = np.int32(float(region[1])) + #print('xStart: ' + str(xStart)) + xStop = np.int32(float(region[2])) + #print('xStop: ' + str(xStop)) + yStart = np.int32(float(region[3])) + if yStart < 0: + yStart = 0 + #print('yStart: ' + str(yStart)) + yStop = np.int32(float(region[4])) + #print('yStop: ' + str(yStop)) + + # populate wsiMask with max + #print(np.shape(wsiMask)) + wsiMask[yStart:yStop, xStart:xStop] = np.maximum(wsiMask[yStart:yStop, xStart:xStop], mask) + #wsiMask[yStart:yStop, xStart:xStop] = np.ones([yStop-yStart, xStop-xStart]) + + return wsiMask + +def find_suey(wsiMask, dirs, downsample, args, wsi): # locates the detected glom regions in the reconstructed wsi mask + # clean up mask + print(' removing small objects') + cleanMask = remove_small_objects(wsiMask.astype(bool), (args.min_size)/(downsample*downsample)) + + # find all unconnected regions + labeledArray, num_features = label(cleanMask) + print('found: '+ str(num_features-1) + ' 
regions') + + # save cleaned mask + if dirs['save_outputs'] == True: + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['mask_dir'] + dirs['fileID'] + '_cleaned.png', cleanMask*255) + + make_folder(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['crop_dir']) + f_name = dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + dirs['fileID'] + '_crops.txt' + f = open(f_name, 'w') + f.close() + + # run crop_region in parallel + print('\nsaving:') + #num_cores = multiprocessing.cpu_count() + #Parallel(n_jobs=num_cores)(delayed(crop_region)(region_iter=i, labeledArray=labeledArray, fileID=fileID, f_name=f_name) for i in range(1, num_features)) + label_offsets = [] + for region_iter in range(1, num_features+1): + label_offsets = crop_region(region_iter=region_iter, labeledArray=labeledArray, f_name=f_name, dirs=dirs, downsample=downsample, args=args, wsi=wsi, label_offsets=label_offsets) + + test_num_steps = file_len(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + dirs['fileID'] + '_crops' + ".txt") + return test_num_steps, labeledArray, label_offsets + +def crop_region(region_iter, labeledArray, f_name, dirs, downsample, args, wsi, label_offsets): # crop selected region from wsi and save // location defined by labeledArray + slide = getWsi(wsi) + slide_size = slide.dimensions + + # get list of locations for pixels == region_iter + mask_region = np.argwhere(labeledArray == region_iter) + # calculate the region bounds, clamped to the slide extent + yStart = max(0, (min(mask_region[:,0]) * downsample) - args.LR_region_pad) + yLen = min((max(mask_region[:,0]) * downsample) - yStart + args.LR_region_pad, slide_size[1]*downsample - yStart) + xStart = max(0, (min(mask_region[:,1]) * downsample) - args.LR_region_pad) + xLen = min((max(mask_region[:,1]) * downsample) - xStart + args.LR_region_pad, slide_size[0]*downsample - xStart) + + region = 
np.array(slide.read_region((xStart,yStart),0,(xLen,yLen))) + region = region[:,:,0:3] + + dims=region.shape + max_block_dim=args.max_block_dim + sub_box_size=args.boxSizeHR + sub_region_iter=0 + if (xLen*yLen)>(max_block_dim*max_block_dim): + stepHR = int(sub_box_size*(1-args.overlap_percentHR)) + # offsets within the region (Idx*) and in absolute slide coordinates (Idx*_o) + Idx1=np.array(range(xStart,xStart+xLen+sub_box_size,stepHR))-xStart + Idx2=np.array(range(yStart,yStart+yLen+sub_box_size,stepHR))-yStart + Idx1_o=np.array(range(xStart,xStart+xLen+sub_box_size,stepHR)) + Idx2_o=np.array(range(yStart,yStart+yLen+sub_box_size,stepHR)) + + Ovl1=xLen-Idx1[-2] + Ovl2=yLen-Idx2[-2] + Idx1[-1]=xLen + Idx2[-1]=yLen + Idx1_o[-1]=xStart+xLen + Idx2_o[-1]=yStart+yLen + + for index1,ii in enumerate(Idx1): + for index2,jj in enumerate(Idx2): + sys.stdout.write(' <' + str(region_iter)+'_'+str(sub_region_iter) + '> ') + sys.stdout.flush() + restart_line() + if ii==Idx1[-1]: + continue + if jj==Idx2[-1]: + continue + if ii>xLen: + continue + if jj>yLen: + continue + + if (ii+sub_box_size)>xLen: + IdxEndx=xLen + else: + IdxEndx=ii+sub_box_size + + if (jj+sub_box_size)>yLen: + IdxEndy=yLen + else: + IdxEndy=jj+sub_box_size + + im_name=dirs['fileID']+'_'+str(Idx2_o[index2])+'_'+str(IdxEndy+yStart)+'_'+str(Idx1_o[index1])+'_'+str(IdxEndx+xStart)+args.imBoxExt + sub_block=region[jj:IdxEndy,ii:IdxEndx,:] + sub_region_iter=sub_region_iter+1 + + f = open(f_name, 'a+') + f.write(dirs['crop_dir'] + im_name+ '\n') + f.close() + + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['crop_dir'] + im_name, sub_block) + + label_offsets.append({'Y':Idx2_o[index2],'X': Idx1_o[index1]}) + else: + sub_region_iter=0 + # print output + sys.stdout.write(' <' + str(region_iter)+'_'+str(sub_region_iter) + '> ') + sys.stdout.flush() + restart_line() + im_name=dirs['fileID']+'_'+str(yStart)+'_'+str(yStart+yLen)+'_'+str(xStart)+'_'+str(xStart+xLen)+args.imBoxExt + # write image path to text 
file + f = open(f_name, 'a+') + f.write(dirs['crop_dir'] + im_name + '\n') + f.close() + + + # save image region + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['crop_dir'] + im_name, region) + label_offsets.append({'Y': yStart, 'X': xStart}) + return label_offsets + + +def crop_suey(wsiMask,label_offsets, dirs, args, classNum, downsample): + # make xml + Annotations = xml_create() + # add annotation + for i in range(classNum)[1:]: # exclude background class + Annotations = xml_add_annotation(Annotations=Annotations, annotationID=i) + + + for classregion in range(1,classNum): + binaryMask = np.zeros(np.shape(wsiMask)).astype('uint8') + binaryMask[wsiMask == classregion] = 1 + pointsList = get_contour_points(binaryMask, args=args, downsample=downsample) + + for i in range(np.shape(pointsList)[0]): + pointList = pointsList[i] + + Annotations = xml_add_region(Annotations=Annotations, pointList=pointList,args=args, annotationID=classregion) + + #txtFile = dirs['fileID'] + '_crops.txt' + + + #make_folder(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + dirs['final_boundary_image_dir'][1:]) + txtFile = dirs['fileID'] + '_crops.txt' + + # read txt file with img paths + lines=[] + f = open(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + txtFile, 'r') + lines = f.readlines() + f.close() + lines = np.array(lines) + + for line in range(0, np.size(lines)): + image_path = lines[line].split('\n')[0] + + file_name = (image_path.split('.')[0]).split(dirs['crop_dir'])[1] + mask_image = imread(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + 'prediction/' + + file_name + '_mask.png') + # save mask images + if dirs['save_outputs'] == True: + glom_image = imread(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + image_path[1:]) + if np.sum(mask_image) != 0: + # remove background in images + for i in range(3): + glom_image[:,:,i] = glom_image[:,:,i] * 
(mask_image * (1-args.bg_intensity) + args.bg_intensity) + + # save resulting image + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + dirs['final_boundary_image_dir'][1:] + file_name + args.finalImgExt, glom_image) + + # save xml + xml_save(Annotations=Annotations, filename=dirs['xml_save_dir']+'/'+dirs['fileID']+'.xml') + +def get_contour_points(mask, args, downsample, offset={'X': 0,'Y': 0}): + # returns a dict pointList with point 'X' and 'Y' values + # input greyscale binary image + # note: the 3-value unpacking assumes the OpenCV 3.x findContours signature + _, maskPoints, contours = cv2.findContours(np.array(mask), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS) + + pointsList = [] + for j in range(np.shape(maskPoints)[0]): + if cv2.contourArea(maskPoints[j]) > ((args.min_size)/(downsample*downsample*args.approx_downsample)): + pointList = [] + for i in range(0,np.shape(maskPoints[j])[0]): + point = {'X': (maskPoints[j][i][0][0] * downsample) + offset['X'], 'Y': (maskPoints[j][i][0][1] * downsample) + offset['Y']} + pointList.append(point) + pointsList.append(pointList) + return pointsList + +### functions for building an xml tree of annotations ### +def xml_create(): # create new xml tree + # create new xml Tree - Annotations + Annotations = ET.Element('Annotations', attrib={'MicronsPerPixel': '0.252000'}) + return Annotations + +def xml_add_annotation(Annotations, annotationID=None): # add new annotation + # add new Annotation to Annotations + # defaults to new annotationID + if annotationID is None: # not specified + annotationID = len(Annotations.findall('Annotation')) + 1 + Annotation = ET.SubElement(Annotations, 'Annotation', attrib={'Type': '4', 'Visible': '1', 'ReadOnly': '0', 'Incremental': '0', 'LineColorReadOnly': '0', 'LineColor': str(xml_color[annotationID-1]), 'Id': str(annotationID), 'NameReadOnly': '0'}) + Regions = 
ET.SubElement(Annotation, 'Regions') + return Annotations + +def xml_add_region(Annotations, pointList,args, annotationID=-1, regionID=None): # add new region to annotation + # add new Region to Annotation + # defualts to last annotationID and new regionID + Annotation = Annotations.find("Annotation[@Id='" + str(annotationID) + "']") + Regions = Annotation.find('Regions') + if regionID == None: # not specified + regionID = len(Regions.findall('Region')) + 1 + Region = ET.SubElement(Regions, 'Region', attrib={'NegativeROA': '0', 'ImageFocus': '-1', 'DisplayId': '1', 'InputRegionId': '0', 'Analyze': '0', 'Type': '0', 'Id': str(regionID)}) + Vertices = ET.SubElement(Region, 'Vertices') + for point in pointList: # add new Vertex + ET.SubElement(Vertices, 'Vertex', attrib={'X': str(point['X']*args.approx_downsample), 'Y': str(point['Y']*args.approx_downsample), 'Z': '0'}) + # add connecting point + ET.SubElement(Vertices, 'Vertex', attrib={'X': str(pointList[0]['X']*args.approx_downsample), 'Y': str(pointList[0]['Y']*args.approx_downsample), 'Z': '0'}) + return Annotations + +def xml_save(Annotations, filename): + xml_data = ET.tostring(Annotations, pretty_print=True) + #xml_data = Annotations.toprettyxml() + f = open(filename, 'wb') + f.write(xml_data) + f.close() + +def read_xml(filename): + # import xml file + tree = ET.parse(filename) + root = tree.getroot() diff --git a/Codes/IterativePredict.pyc b/Codes/IterativePredict.pyc new file mode 100644 index 0000000..d7d0371 Binary files /dev/null and b/Codes/IterativePredict.pyc differ diff --git a/Codes/IterativePredict_1X.py b/Codes/IterativePredict_1X.py new file mode 100644 index 0000000..8fcb695 --- /dev/null +++ b/Codes/IterativePredict_1X.py @@ -0,0 +1,521 @@ +import cv2 +import numpy as np +import os +import sys +import argparse +import multiprocessing +import lxml.etree as ET +import warnings +import time +from PIL import Image +from glob import glob +from subprocess import call +from joblib import Parallel, 
delayed +from skimage.io import imread +import imageio +from skimage.transform import resize +from shutil import rmtree + +sys.path.append(os.getcwd()+'/Codes') + +from IterativeTraining import get_num_classes +from get_choppable_regions import get_choppable_regions +from get_network_performance import get_perf + +""" +Pipeline code to segment regions from WSI + +""" + +# define xml class colormap +xml_color = [65280, 65535, 255, 16711680, 33023] + +def validate(args): + # define folder structure dict + dirs = {'outDir': args.base_dir + '/' + args.project + args.outDir} + dirs['txt_save_dir'] = '/txt_files/' + dirs['img_save_dir'] = '/img_files/' + dirs['mask_dir'] = '/wsi_mask/' + dirs['chopped_dir'] = '/originals/' + dirs['save_outputs'] = args.save_outputs + dirs['modeldir'] = '/MODELS/' + dirs['training_data_dir'] = '/TRAINING_data/' + dirs['validation_data_dir'] = '/HOLDOUT_data/' + + # find current iteration + if args.iteration == 'none': + iteration = get_iteration(args=args) + else: + iteration = int(args.iteration) + + # get all WSIs (extend keeps the list flat; append would nest one list per extension) + WSIs = [] + for ext in [args.wsi_ext]: + WSIs.extend(glob(args.base_dir + '/' + args.project + dirs['validation_data_dir'] + '/*' + ext)) + + if iteration == 'none': + print('ERROR: no trained models found \n\tplease use [--option train]') + + else: + for iter in range(1,iteration+1): + dirs['xml_save_dir'] = args.base_dir + '/' + args.project + dirs['validation_data_dir'] + str(iter) + '_Predicted_XMLs/' + + + # check main directory exists + make_folder(dirs['outDir']) + + if not os.path.exists(dirs['xml_save_dir']): + make_folder(dirs['xml_save_dir']) + + print('working on iteration: ' + str(iter)) + + with open(args.base_dir + '/' + args.project + dirs['validation_data_dir'] + 'validation_stats.txt', 'a') as f: + f.write('\niteration: \t'+str(iter)+'\n') + f.write('\twsi\t\t\tsensitivity\t\t\tspecificity\t\t\tprecision\t\t\taccuracy\t\t\tprediction time\n') + + for wsi in WSIs: + # predict xmls + startTime = time.time() + 
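The `filename=` assignment that follows rebuilds the predicted-XML path with chained `split('/')` and `split('.')` calls. The same mapping can be sketched with `os.path` (hypothetical paths for illustration; note that `os.path.splitext` strips only the final extension, whereas `split('.')[0]` would also truncate a file name containing an extra dot):

```python
import os

def predicted_xml_path(xml_save_dir, wsi_path):
    # basename without its extension, e.g. '/data/slides/case01.svs' -> 'case01'
    stem = os.path.splitext(os.path.basename(wsi_path))[0]
    return os.path.join(xml_save_dir, stem + '.xml')

path = predicted_xml_path('/out/1_Predicted_XMLs', '/data/slides/case01.svs')
```
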
+ filename=dirs['xml_save_dir']+'/'+ (wsi.split('/')[-1]).split('.')[0] +'.xml' + if not os.path.isfile(filename): + predict_xml(args=args, dirs=dirs, wsi=wsi, iteration=iter) + + predictTime = time.time() - startTime + # test performance + gt_xml = os.path.splitext(wsi)[0] + '.xml' + predicted_xml = gt_xml.split('/') + predicted_xml = dirs['xml_save_dir'] + predicted_xml[-1] + sensitivity,specificity,precision,accuracy = get_perf(wsi=wsi, xml1=gt_xml, xml2 = predicted_xml, args=args) + + with open(args.base_dir + '/' + args.project + dirs['validation_data_dir'] + 'validation_stats.txt', 'a') as f: + f.write('\t'+wsi.split('/')[-1]+'\t\t'+str(sensitivity)+'\t\t'+str(specificity)+'\t\t'+str(precision)+'\t\t'+str(accuracy)+'\t\t'+str(predictTime)+'\n') + + print('\n\n\033[92;5mDone validating: \n\t\033[0m\n') + +def predict(args): + # define folder structure dict + dirs = {'outDir': args.base_dir + '/' + args.project + args.outDir} + dirs['txt_save_dir'] = '/txt_files/' + dirs['img_save_dir'] = '/img_files/' + dirs['mask_dir'] = '/wsi_mask/' + dirs['chopped_dir'] = '/originals/' + dirs['save_outputs'] = args.save_outputs + dirs['modeldir'] = '/MODELS/' + dirs['training_data_dir'] = '/TRAINING_data/' + + # find current iteration + if args.iteration == 'none': + iteration = get_iteration(args=args) + else: + iteration = int(args.iteration) + + print(iteration) + dirs['xml_save_dir'] = args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration) + '/Predicted_XMLs/' + + if iteration == 'none': + print('ERROR: no trained models found \n\tplease use [--option train]') + + else: + # check main directory exists + make_folder(dirs['outDir']) + make_folder(dirs['xml_save_dir']) + + # get all WSIs + WSIs = [] + for ext in [args.wsi_ext]: + WSIs.extend(glob(args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration) + '/*' + ext)) + + for wsi in WSIs: + #try: + predict_xml(args=args, dirs=dirs, wsi=wsi, iteration=iteration) + #except 
KeyboardInterrupt: + # break + #except: + #print('!!! Prediction on ' + wsi + ' failed\nmoving on...') + print('\n\n\033[92;5mPlease correct the xml annotations found in: \n\t' + dirs['xml_save_dir']) + print('\nthen place them in: \n\t'+ args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration) + '/') + print('\nand run [--option train]\033[0m\n') + + +def predict_xml(args, dirs, wsi, iteration): + # reshape regions calc + downsample = int(args.downsampleRateHR**.5) + region_size = int(args.boxSizeHR*(downsample)) + step = int(region_size*(1-args.overlap_percentHR)) + + # figure out the number of classes + if args.classNum == 0: + annotatedXMLs=glob(args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration-1) + '/*.xml') + classes = [] + for xml in annotatedXMLs: + classes.append(get_num_classes(xml)) + classNum = max(classes) + else: + classNum = args.classNum + + # chop wsi + if args.chop_data == 'True': + # chop wsi + fileID, test_num_steps = chop_suey(wsi, dirs, downsample, region_size, step, args) + dirs['fileID'] = fileID + print('Chop SUEY!\n') + else: + basename = os.path.splitext(wsi)[0] + + if wsi.split('.')[-1] != 'tif': + slide=getWsi(wsi) + # get image dimensions + dim_x, dim_y=slide.dimensions + else: + im = Image.open(wsi) + dim_x, dim_y=im.size + + fileID=basename.split('/') + dirs['fileID'] = fileID=fileID[len(fileID)-1] + test_num_steps = file_len(dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt") + # call DeepLab for prediction + print('Segmenting tissue ...\n') + + make_folder(dirs['outDir'] + fileID + dirs['img_save_dir'] + 'prediction') + + test_data_list = fileID + '_images' + '.txt' + modeldir = args.base_dir + '/' + args.project + dirs['modeldir'] + str(iteration) + '/HR' + test_step = get_test_step(modeldir) + print("\033[1;32;40m"+"starting prediction using model: \n\t" + modeldir + '/' + str(test_step) + "\033[0;37;40m"+"\n\n") + + call(['python3.5', 
args.base_dir+'/Codes/Deeplab_network/main.py', + '--option', 'predict', + '--test_data_list', dirs['outDir']+fileID+dirs['txt_save_dir']+test_data_list, + '--out_dir', dirs['outDir']+fileID+dirs['img_save_dir'], + '--test_step', str(test_step), + '--test_num_steps', str(test_num_steps), + '--modeldir', modeldir, + '--data_dir', dirs['outDir']+fileID+dirs['img_save_dir'], + '--num_classes', str(classNum), + '--gpu', str(args.gpu), + '--encoder_name',args.encoder_name]) + + # un chop + print('\nreconstructing wsi map ...\n') + wsiMask = un_suey(dirs=dirs, args=args) + + # save hotspots + if dirs['save_outputs'] == True: + #reduce the resolution of the image + wsidims=wsiMask.shape + wsiMask_save=resize(wsiMask,(int(wsidims[0]/4),int(wsidims[1]/4)),order=0,preserve_range=True) + + make_folder(dirs['outDir'] + fileID + dirs['mask_dir']) + print('saving to: ' + dirs['outDir'] + fileID + dirs['mask_dir'] + fileID + '.png') + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imageio.imwrite(dirs['outDir'] + fileID + dirs['mask_dir'] + fileID + '.png', wsiMask_save) + + print('\n\nStarting XML construction: ') + + xml_suey(wsiMask=wsiMask, dirs=dirs, args=args, classNum=classNum, downsample=downsample,glob_offset=[0,0]) + + # clean up + if dirs['save_outputs'] == False: + print('cleaning up') + rmtree(dirs['outDir']+fileID) + + +def get_iteration(args): + currentmodels=os.listdir(args.base_dir + '/' + args.project + '/MODELS/') + + if not currentmodels: + return 'none' + else: + currentmodels=map(int,currentmodels) + Iteration=np.max(list(currentmodels)) + return Iteration + +def get_test_step(modeldir): + pretrains=glob(modeldir + '/*.ckpt*') + + maxmodel=0 + for modelfiles in pretrains: + modelID=modelfiles.split('.')[-2].split('-')[1] + try: + modelID = int(modelID) + if modelID>maxmodel: + maxmodel=modelID + except: pass + + return maxmodel + +def make_folder(directory): + if not os.path.exists(directory): + os.makedirs(directory) # make directory if 
it does not exist already + +def restart_line(): # for printing chopped image labels in command line + sys.stdout.write('\r') + sys.stdout.flush() + +def getWsi(path): # imports a WSI + import openslide + slide = openslide.OpenSlide(path) + return slide + +def file_len(fname): # get txt file length (number of lines) + with open(fname) as f: + for i, l in enumerate(f): + pass + + if 'i' in locals(): + return i + 1 + + else: + return 0 + + +def chop_suey(wsi, dirs, downsample, region_size, step, args): # chop wsi + print('\nopening: ' + wsi) + basename = os.path.splitext(wsi)[0] + + if wsi.split('.')[-1] != 'tif': + slide=getWsi(wsi) + # get image dimensions + dim_x, dim_y=slide.dimensions + else: + im = Image.open(wsi) + dim_x, dim_y=im.size + + fileID=basename.split('/') + dirs['fileID'] = fileID=fileID[len(fileID)-1] + print('\nchopping ...\n') + + # make txt file + make_folder(dirs['outDir'] + fileID + dirs['txt_save_dir']) + f_name = dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + ".txt" + f2_name = dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt" + f = open(f_name, 'w') + f2 = open(f2_name, 'w') + f2.close() + + make_folder(dirs['outDir'] + fileID + dirs['img_save_dir'] + dirs['chopped_dir']) + + f.write('Image dimensions:\n') + + # make index for iters + index_y=np.array(range(0,dim_y,step)) + index_x=np.array(range(0,dim_x,step)) + index_y[-1]=dim_y-step + index_x[-1]=dim_x-step + + f.write('X dim: ' + str((index_x[-1]+region_size)/downsample) +'\n') + f.write('Y dim: ' + str((index_y[-1]+region_size)/downsample) +'\n\n') + f.write('Regions:\n') + f.write('image:xStart:xStop:yStart:yStop\n\n') + f.close() + + # get non white regions + choppable_regions = get_choppable_regions(wsi=wsi, index_x=index_x, index_y=index_y, boxSize=region_size, white_percent=args.white_percent) + + print('saving region:') + + num_cores = multiprocessing.cpu_count() + + Parallel(n_jobs=num_cores, 
backend='threading')(delayed(chop_wsi)(yStart=i, xStart=j, idxx=idxx, idxy=idxy, f_name=f_name, f2_name=f2_name, dirs=dirs, downsample=downsample, region_size=region_size, args=args, wsi=wsi, choppable_regions=choppable_regions) for idxy, i in enumerate(index_y) for idxx, j in enumerate(index_x)) + + test_num_steps = file_len(dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt") + print('\n\n' + str(test_num_steps) +' image regions chopped') + + return fileID, test_num_steps + +def chop_wsi(yStart, xStart, idxx, idxy, f_name, f2_name, dirs, downsample, region_size, args, wsi, choppable_regions): # perform cutting in parallel + if choppable_regions[idxy, idxx] != 0: + yEnd = yStart+region_size + xEnd = xStart+region_size + xLen=xEnd-xStart + yLen=yEnd-yStart + + if wsi.split('.')[-1] != 'tif': + slide = getWsi(wsi) + subsect= np.array(slide.read_region((xStart,yStart),0,(xLen,yLen))) + subsect=subsect[:,:,:3] + + else: + subsect_ = imread(wsi)[yStart:yEnd, xStart:xEnd, :3] + subsect = np.zeros([region_size,region_size,3]) + subsect[0:subsect_.shape[0], 0:subsect_.shape[1], :] = subsect_ + + imageIter = str(xStart)+str(yStart) + + f = open(f_name, 'a+') + f2 = open(f2_name, 'a+') + + # append txt file + f.write(imageIter + ':' + str(xStart/downsample) + ':' + str(xEnd/downsample) + + ':' + str(yStart/downsample) + ':' + str(yEnd/downsample) + '\n') + + # resize images and masks + if downsample > 1: + c=(subsect.shape) + s1=int(c[0]/downsample) + s2=int(c[1]/downsample) + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + subsect=resize(subsect,(s1,s2), mode='constant') + + # save image + directory = dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['chopped_dir'] + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imageio.imwrite(directory + dirs['fileID'] + str(imageIter) + args.imBoxExt,subsect) + + f2.write(dirs['chopped_dir'] + dirs['fileID'] + 
str(imageIter) + args.imBoxExt + '\n') + f.close() + f2.close() + + sys.stdout.write(' <'+str(xStart)+':'+str(xEnd)+' '+str(yStart)+':'+str(yEnd)+'> ') + sys.stdout.flush() + restart_line() + +def un_suey(dirs, args): # reconstruct wsi from predicted masks + txtFile = dirs['fileID'] + '.txt' + + # read txt file + f = open(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + txtFile, 'r') + lines = f.readlines() + f.close() + lines = np.array(lines) + + # get wsi size + xDim = int(float((lines[1].split(': ')[1]).split('\n')[0])) + yDim = int(float((lines[2].split(': ')[1]).split('\n')[0])) + + # make wsi mask + wsiMask = np.zeros([yDim, xDim]).astype(np.uint8) + + # read image regions + for regionNum in range(7, np.size(lines)): + # print regionNum + sys.stdout.write(' <'+str(regionNum-7)+ ' of ' + str(np.size(lines)-8) +'> ') + sys.stdout.flush() + restart_line() + + # get region + region = lines[regionNum].split(':') + region[4] = region[4].split('\n')[0] + + # read mask + mask = imread(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + 'prediction/' + dirs['fileID'] + region[0] + '_mask.png') + + # get region bounds (clamp before the unsigned cast -- a np.uint32 can never be negative) + xStart = np.uint32(float(region[1])) + xStop = np.uint32(float(region[2])) + yStart = np.uint32(max(float(region[3]), 0)) + yStop = np.uint32(float(region[4])) + + mask_part = wsiMask[yStart:yStop, xStart:xStop] + ylen, xlen = np.shape(mask_part) + mask = mask[:ylen, :xlen] + + # populate wsiMask with max + wsiMask[yStart:yStop, xStart:xStop] = np.maximum(mask_part, mask).astype(np.uint8) + + return wsiMask + +def xml_suey(wsiMask, dirs, args, classNum, downsample,glob_offset): + # make xml + Annotations = xml_create() 
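The `xml_*` helpers used below assemble an ImageScope-style annotation tree: `Annotations` → `Annotation` (one per class) → `Regions` → `Region` (one per contour) → `Vertices`. A minimal standalone sketch of that structure, using the stdlib `ElementTree` in place of `lxml` (attributes trimmed; the square outline is illustrative, not real model output):

```python
import xml.etree.ElementTree as ET

annotations = ET.Element('Annotations')
annotation = ET.SubElement(annotations, 'Annotation', attrib={'Id': '1'})
regions = ET.SubElement(annotation, 'Regions')
region = ET.SubElement(regions, 'Region', attrib={'Id': '1'})
vertices = ET.SubElement(region, 'Vertices')

# a made-up square contour; the pipeline writes one Region per detected contour
outline = [{'X': 0, 'Y': 0}, {'X': 100, 'Y': 0}, {'X': 100, 'Y': 100}, {'X': 0, 'Y': 100}]
for point in outline + [outline[0]]:  # repeat the first vertex to close the polygon
    ET.SubElement(vertices, 'Vertex', attrib={'X': str(point['X']), 'Y': str(point['Y']), 'Z': '0'})

xml_data = ET.tostring(annotations, encoding='unicode')
```
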
+ # add annotation + for i in range(classNum)[1:]: # exclude background class + Annotations = xml_add_annotation(Annotations=Annotations, annotationID=i) + + unique_mask = [] + for i in range(0, len(wsiMask), 7000): + unique_mask.extend(np.unique(wsiMask[i:i + 7000])) + + print(np.unique(wsiMask)) + for value in np.unique(unique_mask)[1:]: + # print output + print('\t working on: annotationID ' + str(value)) + # get only 1 class binary mask + binary_mask = np.zeros(np.shape(wsiMask)).astype('uint8') + binary_mask[wsiMask == value] = 1 + print('binary_mask ==', np.unique(binary_mask)) + + # add mask to xml + pointsList = get_contour_points(binary_mask, args=args, downsample=downsample,value=value,offset={'X':glob_offset[0],'Y':glob_offset[1]}) + for i in range(np.shape(pointsList)[0]): + pointList = pointsList[i] + Annotations = xml_add_region(Annotations=Annotations, pointList=pointList, annotationID=value) + + # save xml + xml_save(Annotations=Annotations, filename=dirs['xml_save_dir']+'/'+dirs['fileID']+'.xml') + +def get_contour_points(mask, args, downsample,value, offset={'X': 0,'Y': 0}): + # returns a dict pointList with point 'X' and 'Y' values + # input greyscale binary image + maskPoints, contours = cv2.findContours(np.array(mask), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_KCOS) + pointsList = [] + + for j in np.array(range(len(maskPoints))): + if len(maskPoints[j])>2: + if cv2.contourArea(maskPoints[j]) > args.min_size: + pointList = [] + for i in np.array(range(0,len(maskPoints[j]),4)): + point = {'X': (maskPoints[j][i][0][0] * downsample) + offset['X'], 'Y': (maskPoints[j][i][0][1] * downsample) + offset['Y']} + pointList.append(point) + pointsList.append(pointList) + return pointsList + + +### functions for building an xml tree of annotations ### +def xml_create(): # create new xml tree + # create new xml Tree - Annotations + Annotations = ET.Element('Annotations') + return Annotations + +def xml_add_annotation(Annotations, annotationID=None): # add new 
annotation + # add new Annotation to Annotations + # defaults to new annotationID + if annotationID is None: # not specified + annotationID = len(Annotations.findall('Annotation')) + 1 + Annotation = ET.SubElement(Annotations, 'Annotation', attrib={'Type': '4', 'Visible': '1', 'ReadOnly': '0', 'Incremental': '0', 'LineColorReadOnly': '0', 'LineColor': str(xml_color[annotationID-1]), 'Id': str(annotationID), 'NameReadOnly': '0'}) + Regions = ET.SubElement(Annotation, 'Regions') + return Annotations + +def xml_add_region(Annotations, pointList, annotationID=-1, regionID=None): # add new region to annotation + # add new Region to Annotation + # defaults to last annotationID and new regionID + Annotation = Annotations.find("Annotation[@Id='" + str(annotationID) + "']") + Regions = Annotation.find('Regions') + if regionID is None: # not specified + regionID = len(Regions.findall('Region')) + 1 + Region = ET.SubElement(Regions, 'Region', attrib={'NegativeROA': '0', 'ImageFocus': '-1', 'DisplayId': '1', 'InputRegionId': '0', 'Analyze': '0', 'Type': '0', 'Id': str(regionID)}) + Vertices = ET.SubElement(Region, 'Vertices') + for point in pointList: # add new Vertex + ET.SubElement(Vertices, 'Vertex', attrib={'X': str(point['X']), 'Y': str(point['Y']), 'Z': '0'}) + # add connecting point + ET.SubElement(Vertices, 'Vertex', attrib={'X': str(pointList[0]['X']), 'Y': str(pointList[0]['Y']), 'Z': '0'}) + return Annotations + +def xml_save(Annotations, filename): + xml_data = ET.tostring(Annotations, pretty_print=True) + f = open(filename, 'wb') + f.write(xml_data) + f.close() + +def read_xml(filename): + # import xml file + tree = ET.parse(filename) + root = tree.getroot() diff --git a/Codes/IterativePredict_1X.pyc b/Codes/IterativePredict_1X.pyc new file mode 100644 index 0000000..e66ec3c Binary files /dev/null and b/Codes/IterativePredict_1X.pyc differ diff --git a/Codes/IterativeTraining.py b/Codes/IterativeTraining.py new file mode 
100644 index 0000000..c6af646 --- /dev/null +++ b/Codes/IterativeTraining.py @@ -0,0 +1,611 @@ +import numpy as np +import multiprocessing +import os +import sys +import cv2 +import matplotlib.pyplot as plt +import time +import random +import warnings +import argparse + +from PIL import Image +from skimage.transform import resize +from skimage.io import imread, imsave +from skimage.morphology import remove_small_objects +from skimage.color import rgb2lab +from scipy.ndimage.measurements import label +from scipy.ndimage.morphology import binary_fill_holes +from glob import glob +from getWsi import getWsi +from xml_to_mask import xml_to_mask,get_num_classes +from joblib import Parallel, delayed +from shutil import rmtree,move,copyfile +from imgaug import augmenters as iaa +from randomHSVshift import randomHSVshift +from generateTrainSet import generateDatalists +from subprocess import call +from get_choppable_regions import get_choppable_regions +""" + +Code for - cutting / augmenting / training CNN + +This uses WSI and XML files to train 2 neural networks for semantic segmentation + of histopath tissue via human in the loop training + +""" + +# define geometric augmentation strategies +seq=iaa.Sequential([ +iaa.Fliplr(0.5), +iaa.Flipud(0.5), +iaa.PiecewiseAffine(scale=(0.01, 0.05),order=0), +]) + +#Record start time +totalStart=time.time() + +def IterateTraining(args): + ## calculate low resolution block params + downsampleLR = int(args.downsampleRateLR**.5) #down sample for each dimension + region_sizeLR = int(args.boxSizeLR*(downsampleLR)) #Region size before downsampling + stepLR = int(region_sizeLR*(1-args.overlap_percentLR)) #Step size before downsampling + ## calculate high resolution block params + downsampleHR = int(args.downsampleRateHR**.5) #down sample for each dimension + region_sizeHR = int(args.boxSizeHR*(downsampleHR)) #Region size before downsampling + stepHR = int(region_sizeHR*(1-args.overlap_percentHR)) #Step size before downsampling + + + global 
classEnumLR,classEnumHR + dirs = {'imExt': '.jpeg'} + dirs['basedir'] = args.base_dir + dirs['maskExt'] = '.png' + dirs['modeldir'] = '/MODELS/' + dirs['tempdirLR'] = '/TempLR/' + dirs['tempdirHR'] = '/TempHR/' + dirs['pretraindir'] = '/Deeplab_network/' + dirs['training_data_dir'] = '/TRAINING_data/' + dirs['model_init'] = 'deeplab_resnet.ckpt' + dirs['project']= '/' + args.project + dirs['data_dir_HR'] = args.base_dir +'/' + args.project + '/Permanent/HR/' + dirs['data_dir_LR'] = args.base_dir +'/' +args.project + '/Permanent/LR/' + + + ##All folders created, initiate WSI loading by human + #raw_input('Please place WSIs in ') + + ##Check iteration session + + currentmodels=os.listdir(dirs['basedir'] + dirs['project'] + dirs['modeldir']) + + currentAnnotationIteration=check_model_generation(dirs) + + print('Current training session is: ' + str(currentAnnotationIteration)) + + ##Create objects for storing class distributions + annotatedXMLs=glob(dirs['basedir'] + dirs['project'] + dirs['training_data_dir'] + str(currentAnnotationIteration) + '/*.xml') + classes=[] + if args.classNum == 0: + for xml in annotatedXMLs: + classes.append(get_num_classes(xml)) + + classNum_LR = max(classes) + classNum_HR = max(classes) + else: + classNum_LR = args.classNum + if args.classNum_HR != 0: + classNum_HR = args.classNum_HR + else: + classNum_HR = classNum_LR + classEnumLR=np.zeros([classNum_LR,1]) + classEnumHR=np.zeros([classNum_HR,1]) + + + ##for all WSIs in the initiating directory: + if args.chop_data == 'True': + print('Chopping') + + start=time.time() + for xmlID in annotatedXMLs: + + #Get unique name of WSI + fileID=xmlID.split('/')[-1].split('.xml')[0] + + #create memory addresses for wsi files + for ext in [args.wsi_ext]: + wsiID=dirs['basedir'] + dirs['project']+ dirs['training_data_dir'] + str(currentAnnotationIteration) +'/'+ fileID + ext + + #Stop at the first extension with an existing WSI file + if os.path.isfile(wsiID)==True: + break + + #Warn if no WSI file was found + if 
os.path.isfile(wsiID)==False: + print('\nError - missing wsi file: ' + wsiID + ' Please provide.\n') + + #Load openslide information about WSI + if ext != '.tif': + slide=getWsi(wsiID) + #WSI level 0 dimensions (largest size) + dim_x,dim_y=slide.dimensions + else: + im = Image.open(wsiID) + dim_x, dim_y=im.size + wsi_mask=xml_to_mask(xmlID, [0,0], [dim_x,dim_y]) + print('Loaded mask') + + #Generate iterators for parallel chopping of WSIs in low resolution + index_yLR=np.array(range(0,dim_y,stepLR)) + index_xLR=np.array(range(0,dim_x,stepLR)) + + index_yLR[-1]=dim_y-stepLR + index_xLR[-1]=dim_x-stepLR + #Create memory address for chopped images low resolution + outdirLR=dirs['basedir'] + dirs['project'] + dirs['tempdirLR'] + + #Enumerate cpu core count + num_cores = multiprocessing.cpu_count() + + #Perform low resolution chopping in parallel and return the number of + #images in each of the labeled classes + chop_regions=get_choppable_regions(wsi=wsiID, + index_x=index_xLR,index_y=index_yLR,boxSize=region_sizeLR,white_percent=args.white_percent) + + + classEnumCLR=Parallel(n_jobs=num_cores)(delayed(return_region)(args=args, + wsi_mask=wsi_mask, wsiID=wsiID, + fileID=fileID, yStart=j, xStart=i, idxy=idxy, + idxx=idxx, downsampleRate=args.downsampleRateLR, + outdirT=outdirLR, region_size=region_sizeLR, + dirs=dirs, chop_regions=chop_regions,classNum=classNum_LR) for idxx,i in enumerate(index_xLR) for idxy,j in enumerate(index_yLR)) + print('Time for low res WSI chopping: ' + str(time.time()-start)) + + #Add number of images in each class to the global count low resolution + CSLR=(sum(classEnumCLR)) + for c in range(0,CSLR.shape[0]): + classEnumLR[c]=classEnumLR[c]+CSLR[c] + + #Print enumerations for each class + + #Generate iterators for parallel chopping of WSIs in high resolution + index_yHR=np.array(range(0,dim_y,stepHR)) + index_xHR=np.array(range(0,dim_x,stepHR)) + index_yHR[-1]=dim_y-stepHR + index_xHR[-1]=dim_x-stepHR + #Create memory address for chopped images 
high resolution + outdirHR=dirs['basedir'] + dirs['project'] + dirs['tempdirHR'] + + #Perform high resolution chopping in parallel and return the number of + #images in each of the labeled classes + chop_regions=get_choppable_regions(wsi=wsiID, + index_x=index_xHR,index_y=index_yHR,boxSize=region_sizeHR,white_percent=args.white_percent) + + classEnumCHR=Parallel(n_jobs=num_cores)(delayed(return_region)(args=args, + wsi_mask=wsi_mask, wsiID=wsiID, + fileID=fileID, yStart=j, xStart=i, idxy=idxy, + idxx=idxx, downsampleRate=args.downsampleRateHR, + outdirT=outdirHR, region_size=region_sizeHR, + dirs=dirs, chop_regions=chop_regions,classNum=classNum_HR) for idxx,i in enumerate(index_xHR) for idxy,j in enumerate(index_yHR)) + print('Time for high res WSI chopping: ' + str(time.time()-start)) + + #Add number of images in each class to the global count high resolution + CSHR=(sum(classEnumCHR)) + for c in range(0,CSHR.shape[0]): + classEnumHR[c]=classEnumHR[c]+CSHR[c] + + #classEnumHR=[float(6334),float(488)] + #Print enumerations for each class + + print('Time for WSI chopping: ' + str(time.time()-start)) + + ##Augment low resolution data + #Location of augmentable data + + #Output location for augmented data + dirs['outDirAI']=dirs['basedir']+dirs['project'] + dirs['tempdirLR'] + '/Augment' + '/regions/' + dirs['outDirAM']=dirs['basedir']+dirs['project'] + dirs['tempdirLR'] + '/Augment' + '/masks/' + + #Enumerate low resolution class distributions for augmentation ratios + classDistLR=np.zeros(len(classEnumLR)) + + for idx,value in enumerate(classEnumLR): + classDistLR[idx]=value/sum(classEnumLR) + + #Define number of augmentations per class + if args.aug_LR > 0: + augmentOrder=np.argsort(classDistLR) + classAugs=(np.round(args.aug_LR*(1-classDistLR))+1) + classAugs=classAugs.astype(int) + print('Low resolution augmentation distribution:') + print(classAugs) + imagesToAugmentLR=dirs['basedir']+dirs['project'] + dirs['tempdirLR'] + 'regions/' + 
masksToAugmentLR=dirs['basedir']+dirs['project'] + dirs['tempdirLR'] + 'masks/' + augmentList=glob(imagesToAugmentLR + '*.jpeg') + + #Parallel iter + augIter=range(0,len(augmentList)) + + auglen=len(augmentList) + #Augment images in parallel using inverted class distributions for augmentation iterations + num_cores = multiprocessing.cpu_count() + start=time.time() + + Parallel(n_jobs=num_cores)(delayed(run_batch)(augmentList,masksToAugmentLR, + batchidx,classAugs,args.boxSizeLR,args.hbound,args.lbound, + augmentOrder,dirs,classNum_LR,auglen) for batchidx in augIter) + + moveimages(dirs['outDirAI'], dirs['basedir']+dirs['project'] + '/Permanent/LR/regions/') + moveimages(dirs['outDirAM'], dirs['basedir']+dirs['project'] + '/Permanent/LR/masks/') + + moveimages(dirs['basedir']+dirs['project'] + dirs['tempdirLR']+ '/regions/', dirs['basedir']+dirs['project'] + '/Permanent/LR/regions/') + moveimages(dirs['basedir']+dirs['project'] + dirs['tempdirLR']+ '/masks/',dirs['basedir']+dirs['project'] + '/Permanent/LR/masks/') + end=time.time()-start + print('Time for low resolution augmenting: ' + str((time.time()-totalStart)/60) + ' minutes.') + ##High resolution augmentation + #Enumerate high resolution class distribution + + classDistHR=np.zeros(len(classEnumHR)) + for idx,value in enumerate(classEnumHR): + classDistHR[idx]=value/sum(classEnumHR) + + + #Define number of augmentations per class + if args.aug_HR >0: + augmentOrder=np.argsort(classDistHR) + classAugs=(np.round(args.aug_HR*(1-classDistHR))+1) + classAugs=classAugs.astype(int) + print('High resolution augmentation distribution:') + print(classAugs) + #High resolution input augmentable data + imagesToAugmentHR=dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + 'regions/' + masksToAugmentHR=dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + 'masks/' + augmentList=glob(imagesToAugmentHR + '*.jpeg') + + #Parallel iterator + augIter=range(0,len(augmentList)) + auglen=len(augmentList) + + #Output for augmented 
data + dirs['outDirAI']=dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/Augment' + '/regions/' + dirs['outDirAM']=dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/Augment' + '/masks/' + + #Augment in parallel + num_cores = multiprocessing.cpu_count() + start=time.time() + Parallel(n_jobs=num_cores)(delayed(run_batch)(augmentList,masksToAugmentHR, + batchidx,classAugs,args.boxSizeHR,args.hbound,args.lbound, + augmentOrder,dirs,classNum_HR,auglen) for batchidx in augIter) + end=time.time()-start + #augamt=len(glob(dirs['outDirAI'] + '*' + dirs['imExt'])) + + + moveimages(dirs['outDirAI'], dirs['basedir']+dirs['project'] + '/Permanent/HR/regions/') + moveimages(dirs['outDirAM'], dirs['basedir']+dirs['project'] + '/Permanent/HR/masks/') + + moveimages(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/regions/', dirs['basedir']+dirs['project'] + '/Permanent/HR/regions/') + moveimages(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/masks/',dirs['basedir']+dirs['project'] + '/Permanent/HR/masks/') + + + #Total time + print('Time for high resolution augmenting: ' + str((time.time()-totalStart)/60) + ' minutes.') + + #Generate training and validation arguments + training_args_list = [] + training_args_LR = [] + training_args_HR = [] + + ##### LOW REZ ARGS ##### + dirs['outDirAILR']=dirs['basedir']+'/'+dirs['project'] + '/Permanent/LR/regions/' + dirs['outDirAMLR']=dirs['basedir']+'/'+dirs['project'] + '/Permanent/LR/masks/' + + ########fix this + trainOutLR=dirs['basedir'] + '/Codes' + '/Deeplab_network/datasetLR/train.txt' + valOutLR=dirs['basedir'] + '/Codes' + '/Deeplab_network/datasetLR/val.txt' + + generateDatalists(dirs['outDirAILR'],dirs['outDirAMLR'],'/regions/','/masks/',dirs['imExt'],dirs['maskExt'],trainOutLR) + numImagesLR=len(glob(dirs['outDirAILR'] + '*' + dirs['imExt'])) + + numStepsLR=int((args.epoch_LR*numImagesLR)/ args.CNNbatch_sizeLR) + pretrain_LR=get_pretrain(currentAnnotationIteration,'/LR/',dirs) + modeldir_LR 
=dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration +1) + '/LR/' + + + + pretrain_HR=get_pretrain(currentAnnotationIteration,'/HR/',dirs) + + modeldir_HR = dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration+1) + '/HR/' + + # assign to dict + training_args_LR = { + 'numImages': numImagesLR, + 'data_list': trainOutLR, + 'batch_size': args.CNNbatch_sizeLR, + 'num_steps': numStepsLR, + 'save_interval': np.int(round(numStepsLR/args.saveIntervals)), + 'pretrain_file': pretrain_LR, + 'input_height': args.boxSizeLR, + 'input_width': args.boxSizeLR, + 'modeldir': modeldir_LR, + 'num_classes': classNum_LR, + 'gpu': args.gpu, + 'data_dir': dirs['data_dir_LR'], + 'print_color': "\033[3;37;40m", + 'log_file': modeldir_LR + 'log_'+ str(currentAnnotationIteration+1) +'_LR.txt', + 'log_dir': modeldir_LR + 'log/', + 'learning_rate': args.learning_rate_LR, + 'encoder_name':args.encoder_name + } + training_args_list.append(training_args_LR) + + + ##### HIGH REZ ARGS ##### + dirs['outDirAIHR']=dirs['basedir']+'/'+dirs['project'] + '/Permanent/HR/regions/' + dirs['outDirAMHR']=dirs['basedir']+'/'+dirs['project'] + '/Permanent/HR/masks/' + + #######Fix this + trainOutHR=dirs['basedir'] + '/Codes' +'/Deeplab_network/datasetHR/train.txt' + valOutHR=dirs['basedir'] + '/Codes' + '/Deeplab_network/datasetHR/val.txt' + + generateDatalists(dirs['outDirAIHR'],dirs['outDirAMHR'],'/regions/','/masks/',dirs['imExt'],dirs['maskExt'],trainOutHR) + numImagesHR=len(glob(dirs['outDirAIHR'] + '*' + dirs['imExt'])) + + numStepsHR=int((args.epoch_HR*numImagesHR)/ args.CNNbatch_sizeHR) + # assign to dict + training_args_HR={ + 'numImages': numImagesHR, + 'data_list': trainOutHR, + 'batch_size': args.CNNbatch_sizeHR, + 'num_steps': numStepsHR, + 'save_interval': np.int(round(numStepsHR/args.saveIntervals)), + 'pretrain_file': pretrain_HR, + 'input_height': args.boxSizeHR, + 'input_width': args.boxSizeHR, + 'modeldir': modeldir_HR, + 
'num_classes': classNum_HR, + 'gpu': args.gpu + args.gpu_num - 1, + 'data_dir': dirs['data_dir_HR'], + 'print_color': "\033[1;32;40m", + 'log_file': modeldir_HR + 'log_'+ str(currentAnnotationIteration+1) +'_HR.txt', + 'log_dir': modeldir_HR + 'log/', + 'learning_rate': args.learning_rate_HR, + 'encoder_name': args.encoder_name + } + training_args_list.append(training_args_HR) + + # train networks in parallel + num_cores = args.gpu_num # GPUs + Parallel(n_jobs=num_cores, backend='threading')(delayed(train_net)(training_args,dirs) for training_args in training_args_list) + + + + finish_model_generation(dirs,currentAnnotationIteration) + + print('\n\n\033[92;5mPlease place new wsi file(s) in: \n\t' + dirs['basedir'] + dirs['project']+ dirs['training_data_dir'] + str(currentAnnotationIteration+1)) + print('\nthen run [--option predict]\033[0m\n') + + + + +def moveimages(startfolder,endfolder): + filelist=glob(startfolder + '*') + for file in filelist: + fileIDi=file.split('/')[-1] + move(file,endfolder + fileIDi) + +def train_net(training_args,dirs): + ''' + Receives a dictionary of variables: training_args + [data_list, num_steps, save_interval, pretrain_file, input_height, input_width, batch_size, num_classes, modeldir, data_dir, gpu] + ''' + + print('Running [' + str(training_args['num_steps']) + '] iterations') + print('Saving every [' + str(training_args['save_interval']) + '] iterations') + + call(['python3.5', dirs['basedir'] +'/Codes/Deeplab_network/main.py', '--option', 'train', + '--data_list', training_args['data_list'], + '--num_steps', str(training_args['num_steps']), + '--save_interval',str(training_args['save_interval']), + '--pretrain_file', training_args['pretrain_file'], + '--input_height',str(training_args['input_height']), + '--input_width',str(training_args['input_width']), + '--batch_size',str(training_args['batch_size']), + '--num_classes',str(training_args['num_classes']), + '--modeldir', training_args['modeldir'], + '--data_dir', 
training_args['data_dir'], + '--log_file', training_args['log_file'], + '--log_dir', training_args['log_dir'], + '--gpu', str(training_args['gpu']), + '--learning_rate', str(training_args['learning_rate']), + '--print_color', training_args['print_color'], + '--encoder_name',training_args['encoder_name']]) + + +def check_model_generation(dirs): + modelsCurrent=os.listdir(dirs['basedir'] + dirs['project'] + dirs['modeldir']) + gens=map(int,modelsCurrent) + modelOrder=np.sort(list(gens))[::-1] + + for idx in modelOrder: + modelsChkptsLR=glob(dirs['basedir'] + dirs['project'] + dirs['modeldir']+str(idx) + '/LR/*.ckpt*') + modelsChkptsHR=glob(dirs['basedir'] + dirs['project'] + dirs['modeldir']+ str(idx) +'/HR/*.ckpt*') + + if modelsChkptsLR == []: + continue + elif modelsChkptsHR == []: + continue + else: + return idx + +def finish_model_generation(dirs,currentAnnotationIteration): + make_folder(dirs['basedir'] + dirs['project'] + dirs['training_data_dir'] + str(currentAnnotationIteration + 1)) + +def get_pretrain(currentAnnotationIteration,res,dirs): + + if currentAnnotationIteration==0: + pretrain_file = glob(dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration) + res + '*') + pretrain_file=pretrain_file[0].split('.')[0] + '.' 
+ pretrain_file[0].split('.')[1] + + else: + pretrains=glob(dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration) + res + 'model*') + print(pretrains) + maxmodel=0 + for modelfiles in pretrains: + modelID=modelfiles.split('.')[-2].split('-')[1] + if int(modelID)>maxmodel: + maxmodel=int(modelID) + pretrain_file=dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration) + res + 'model.ckpt-' + str(maxmodel) + return pretrain_file + +def restart_line(): # for printing chopped image labels in command line + sys.stdout.write('\r') + sys.stdout.flush() + +def file_len(fname): # get txt file length (number of lines) + with open(fname) as f: + for i, l in enumerate(f): + pass + return i + 1 + +def make_folder(directory): + if not os.path.exists(directory): + os.makedirs(directory) # make the directory if it does not exist already + +def make_all_folders(dirs): + + + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/regions') + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/masks') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/Augment' +'/regions') + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/Augment' +'/masks') + + make_folder(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/regions') + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirHR'] + '/masks') + + make_folder(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/Augment' +'/regions') + make_folder(dirs['basedir']+dirs['project']+ dirs['tempdirHR'] + '/Augment' +'/masks') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['modeldir']) + make_folder(dirs['basedir'] +dirs['project']+ dirs['training_data_dir']) + + + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/LR/'+ 'regions/') + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/LR/'+ 'masks/') + 
make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/HR/'+ 'regions/') + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/HR/'+ 'masks/') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['training_data_dir']) + + make_folder(dirs['basedir'] + '/Codes/Deeplab_network/datasetLR') + make_folder(dirs['basedir'] + '/Codes/Deeplab_network/datasetHR') + +def return_region(args, wsi_mask, wsiID, fileID, yStart, xStart, idxy, idxx, downsampleRate, outdirT, region_size, dirs, chop_regions,classNum): # perform cutting in parallel + sys.stdout.write(' <'+str(xStart)+'/'+ str(yStart)+ '> ') + sys.stdout.flush() + restart_line() + + if chop_regions[idxy,idxx] != 0: + + uniqID=fileID + str(yStart) + str(xStart) + if wsiID.split('.')[-1] != 'tif': + slide=getWsi(wsiID) + Im=np.array(slide.read_region((xStart,yStart),0,(region_size,region_size))) + Im=Im[:,:,:3] + else: + yEnd = yStart + region_size + xEnd = xStart + region_size + Im = np.zeros([region_size,region_size,3], dtype=np.uint8) + Im_ = imread(wsiID)[yStart:yEnd, xStart:xEnd, :3] + Im[0:Im_.shape[0], 0:Im_.shape[1], :] = Im_ + + mask_annotation=wsi_mask[yStart:yStart+region_size,xStart:xStart+region_size] + + o1,o2=mask_annotation.shape + if o1 !=region_size: + mask_annotation=np.pad(mask_annotation,((0,region_size-o1),(0,0)),mode='constant') + if o2 !=region_size: + mask_annotation=np.pad(mask_annotation,((0,0),(0,region_size-o2)),mode='constant') + + if downsampleRate !=1: + c=(Im.shape) + s1=int(c[0]/(downsampleRate**.5)) + s2=int(c[1]/(downsampleRate**.5)) + Im=(resize(Im,(s1,s2),mode='reflect')*255).astype('uint8') + mask_annotation=resize(mask_annotation,(s1,s2),order=0,preserve_range=True) + + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(outdirT + '/regions/' + uniqID + dirs['imExt'],Im) + imsave(outdirT + '/masks/' + uniqID +dirs['maskExt'],np.uint8(mask_annotation)) + classespresent=np.unique(mask_annotation) + classes=range(0,classNum) + 
classEnumC=np.zeros([classNum,1]) + + for index,chk in enumerate(classes): + if chk in classespresent: + classEnumC[index]=classEnumC[index]+1 + return classEnumC + else: + + + classes=range(0,classNum) + classEnumC=np.zeros([classNum,1]) + return classEnumC + +def load_batch(imageList,maskDir,batchindex,batch_augs,boxsize,dirs): + + X_data=[] + mask_data=[] + for b in range(0,batch_augs): + fileIDi = imageList[batchindex] + X_data.append(imread(fileIDi)) + fileIDi=fileIDi.split('/')[-1].split('.')[0] + mask_data.append(imread(maskDir+fileIDi+dirs['maskExt'])) + + + return X_data,mask_data #Load N copies of current image based on class distributions + +def save_batch(imageblock,maskblock,imageList,batchindex,dirs): + + fileIDi = imageList[batchindex] + fileIDi=fileIDi.split('/')[-1].split('.')[0] + for index in range(0,len(imageblock)): + + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDirAI'] + fileIDi +'_'+ str(index) + dirs['imExt'],np.uint8(imageblock[index]*255)) + imsave(dirs['outDirAM'] + fileIDi +'_'+ str(index) + dirs['maskExt'],np.uint8(maskblock[index])) #Save N copies of current image + +def run_batch(imageList, maskDir, batchindex, class_augs, box_size, + hbound, lbound, augmentOrder,dirs,classNum,auglen): + sys.stdout.write(' <'+str(batchindex)+'/'+str(auglen)+ '> ') + sys.stdout.flush() + restart_line() + #Load image, determine augmentation probability, augment image, augment colorspace, save images + global seq + seq_det = seq.to_deterministic() + + imageblock,maskblock=load_batch(imageList,maskDir,batchindex,1,box_size,dirs) + + classespresent=np.unique(maskblock) + classes=range(0,classNum) + + for idx in augmentOrder: + if idx in classespresent: + prob=class_augs[idx] + break + imageblock,maskblock=load_batch(imageList,maskDir,batchindex,prob,box_size,dirs) + + + imageblock=seq_det.augment_images(imageblock) + imageblock=colorshift(imageblock,hbound,lbound) + + maskblock=seq_det.augment_images(maskblock) + 
save_batch(imageblock,maskblock,imageList,batchindex,dirs) + +def colorshift(imageblock, hbound, lbound): #Shift Hue of HSV space and Lightness of LAB space + for im in range(0,len(imageblock)): + hShift=np.random.normal(0,hbound) + lShift=np.random.normal(1,lbound) + imageblock[im]=randomHSVshift(imageblock[im],hShift,lShift) + return imageblock diff --git a/Codes/IterativeTraining.pyc b/Codes/IterativeTraining.pyc new file mode 100644 index 0000000..1a449bf Binary files /dev/null and b/Codes/IterativeTraining.pyc differ diff --git a/Codes/IterativeTraining_1X.py b/Codes/IterativeTraining_1X.py new file mode 100644 index 0000000..9721637 --- /dev/null +++ b/Codes/IterativeTraining_1X.py @@ -0,0 +1,493 @@ +import numpy as np +import multiprocessing +import os +import sys +import cv2 +import matplotlib.pyplot as plt +import time +import random +import warnings +import argparse + +from skimage.transform import resize +from skimage.io import imread, imsave +from skimage.morphology import remove_small_objects +from skimage.color import rgb2lab +from scipy.ndimage.measurements import label +from scipy.ndimage.morphology import binary_fill_holes +from glob import glob +from getWsi import getWsi +from xml_to_mask import xml_to_mask,get_num_classes +from joblib import Parallel, delayed +from shutil import rmtree,move,copyfile +from imgaug import augmenters as iaa +from randomHSVshift import randomHSVshift +from generateTrainSet import generateDatalists +from subprocess import call +from get_choppable_regions import get_choppable_regions +from PIL import Image +""" + +Code for - cutting / augmenting / training CNN + +This uses WSI and XML files to train 2 neural networks for semantic segmentation + of histopath tissue via human in the loop training + +""" + +global seq #Define geometric augmentation strategies +seq=iaa.Sequential([ +iaa.Fliplr(0.5), +iaa.Flipud(0.5), +iaa.PiecewiseAffine(scale=(0.01, 0.05),order=0), +]) + +#Record start time +totalStart=time.time() + +def 
IterateTraining(args): + ## calculate low resolution block params + downsampleLR = int(args.downsampleRateLR**.5) #down sample for each dimension + region_sizeLR = int(args.boxSizeLR*(downsampleLR)) #Region size before downsampling + stepLR = int(region_sizeLR*(1-args.overlap_percentLR)) #Step size before downsampling + ## calculate high resolution block params + downsampleHR = int(args.downsampleRateHR**.5) #down sample for each dimension + region_sizeHR = int(args.boxSizeHR*(downsampleHR)) #Region size before downsampling + stepHR = int(region_sizeHR*(1-args.overlap_percentHR)) #Step size before downsampling + + + global classNum,classEnumLR,classEnumHR + dirs = {'imExt': '.jpeg'} + dirs['basedir'] = args.base_dir + dirs['maskExt'] = '.png' + dirs['modeldir'] = '/MODELS/' + dirs['tempdirLR'] = '/TempLR/' + dirs['tempdirHR'] = '/TempHR/' + dirs['pretraindir'] = '/Deeplab_network/' + dirs['training_data_dir'] = '/TRAINING_data/' + dirs['model_init'] = 'deeplab_resnet.ckpt' + dirs['project']= '/' + args.project + dirs['data_dir_HR'] = args.base_dir +'/' + args.project + '/Permanent/HR/' + dirs['data_dir_LR'] = args.base_dir +'/' +args.project + '/Permanent/LR/' + + + currentmodels=os.listdir(dirs['basedir'] + dirs['project'] + dirs['modeldir']) + + currentAnnotationIteration=check_model_generation(dirs) + + print('Current training session is: ' + str(currentAnnotationIteration)) + + ##Create objects for storing class distributions + annotatedXMLs=glob(dirs['basedir'] + dirs['project'] + dirs['training_data_dir'] + str(currentAnnotationIteration) + '/*.xml') + + if args.classNum == 0: + classNum=get_num_classes(annotatedXMLs[0]) + else: + classNum = args.classNum + + classEnumLR=np.zeros([classNum,1]) + classEnumHR=np.zeros([classNum,1]) + + ##for all WSIs in the initiating directory: + if args.chop_data == 'True': + print('Chopping') + + start=time.time() + for xmlID in annotatedXMLs: + + #Get unique name of WSI + fileID=xmlID.split('/')[-1].split('.xml')[0] + + 
#create memory addresses for wsi files + for ext in [args.wsi_ext]: + wsiID=dirs['basedir'] + dirs['project']+ dirs['training_data_dir'] + str(currentAnnotationIteration) +'/'+ fileID + ext + + #Stop at the first extension with an existing WSI file + if os.path.isfile(wsiID)==True: + break + + + #Load openslide information about WSI + if ext != '.tif': + slide=getWsi(wsiID) + #WSI level 0 dimensions (largest size) + dim_x,dim_y=slide.dimensions + else: + im = Image.open(wsiID) + dim_x, dim_y=im.size + + wsi_mask=xml_to_mask(xmlID, [0,0], [dim_x,dim_y]) + print('Loaded mask') + + + #Enumerate cpu core count + num_cores = multiprocessing.cpu_count() + + + #Generate iterators for parallel chopping of WSIs in high resolution + index_yHR=np.array(range(0,dim_y,stepHR)) + index_xHR=np.array(range(0,dim_x,stepHR)) + index_yHR[-1]=dim_y-stepHR + index_xHR[-1]=dim_x-stepHR + + #Create memory address for chopped images high resolution + outdirHR=dirs['basedir'] + dirs['project'] + dirs['tempdirHR'] + + #Perform high resolution chopping in parallel and return the number of + #images in each of the labeled classes + chop_regions=get_choppable_regions(wsi=wsiID, + index_x=index_xHR,index_y=index_yHR,boxSize=region_sizeHR,white_percent=args.white_percent) + + classEnumCHR=Parallel(n_jobs=num_cores)(delayed(return_region)(args=args, + wsi_mask=wsi_mask, wsiID=wsiID, + fileID=fileID, yStart=j, xStart=i, idxy=idxy, + idxx=idxx, downsampleRate=args.downsampleRateHR, + outdirT=outdirHR, region_size=region_sizeHR, + dirs=dirs, chop_regions=chop_regions,classNum_HR=classNum) for idxx,i in enumerate(index_xHR) for idxy,j in enumerate(index_yHR)) + CSHR=(sum(classEnumCHR)) + for c in range(0,CSHR.shape[0]): + classEnumHR[c]=classEnumHR[c]+CSHR[c] + + print('Time for WSI chopping: ' + str(time.time()-start)) + + ##High resolution augmentation + #Enumerate high resolution class distribution + classDistHR=np.zeros(len(classEnumHR)) + for idx,value 
in enumerate(classEnumHR): + classDistHR[idx]=value/sum(classEnumHR) + + #Define number of augmentations per class + if args.aug_HR >0: + augmentOrder=np.argsort(classDistHR) + classAugs=(np.round(args.aug_HR*(1-classDistHR))+1) + classAugs=classAugs.astype(int) + print('Augmentation distribution:') + print(classAugs) + #High resolution input augmentable data + imagesToAugmentHR=dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + 'regions/' + masksToAugmentHR=dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + 'masks/' + augmentList=glob(imagesToAugmentHR + '*.jpeg') + + #Parallel iterator + auglen=len(augmentList) + augIter=range(0,auglen) + + #Output for augmented data + dirs['outDirAI']=dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/Augment' + '/regions/' + dirs['outDirAM']=dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/Augment' + '/masks/' + + #Augment in parallel + num_cores = multiprocessing.cpu_count() + start=time.time() + + Parallel(n_jobs=num_cores)(delayed(run_batch)(augmentList,masksToAugmentHR, + batchidx,classAugs,args.boxSizeHR,args.hbound,args.lbound, + augmentOrder,dirs,classNum,auglen) for batchidx in augIter) + end=time.time()-start + #augamt=len(glob(dirs['outDirAI'] + '*' + dirs['imExt'])) + + + moveimages(dirs['outDirAI'], dirs['basedir']+dirs['project'] + '/Permanent/HR/regions/') + moveimages(dirs['outDirAM'], dirs['basedir']+dirs['project'] + '/Permanent/HR/masks/') + + moveimages(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/regions/', dirs['basedir']+dirs['project'] + '/Permanent/HR/regions/') + moveimages(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/masks/',dirs['basedir']+dirs['project'] + '/Permanent/HR/masks/') + + + #Total time + print('Time for high resolution augmenting: ' + str((time.time()-totalStart)/60) + ' minutes.') + + + #Generate training and validation arguments + training_args_list = [] # list of training argument directories low res and high res + training_args_LR = [] + 
training_args_HR = [] + + ##### LOW REZ ARGS ##### + dirs['outDirAILR']=dirs['basedir']+'/'+dirs['project'] + '/Permanent/LR/regions/' + dirs['outDirAMLR']=dirs['basedir']+'/'+dirs['project'] + '/Permanent/LR/masks/' + + ########fix this + trainOutLR=dirs['basedir'] + '/Codes' + '/Deeplab_network/datasetLR/train.txt' + + + + pretrain_HR=get_pretrain(currentAnnotationIteration,'/HR/',dirs) + + modeldir_HR = dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration+1) + '/HR/' + + + ##### HIGH REZ ARGS ##### + dirs['outDirAIHR']=dirs['basedir']+'/'+dirs['project'] + '/Permanent/HR/regions/' + dirs['outDirAMHR']=dirs['basedir']+'/'+dirs['project'] + '/Permanent/HR/masks/' + + #######Fix this + trainOutHR=dirs['basedir'] + '/Codes' +'/Deeplab_network/datasetHR/train.txt' + + generateDatalists(dirs['outDirAIHR'],dirs['outDirAMHR'],'/regions/','/masks/',dirs['imExt'],dirs['maskExt'],trainOutHR) + numImagesHR=len(glob(dirs['outDirAIHR'] + '*' + dirs['imExt'])) + + numStepsHR=int((args.epoch_HR*numImagesHR)/ args.CNNbatch_sizeHR) + # assign to dict + training_args_HR={ + 'numImages': numImagesHR, + 'data_list': trainOutHR, + 'batch_size': args.CNNbatch_sizeHR, + 'num_steps': numStepsHR, + 'save_interval': np.int(round(numStepsHR/args.saveIntervals)), + 'pretrain_file': pretrain_HR, + 'input_height': args.boxSizeHR, + 'input_width': args.boxSizeHR, + 'modeldir': modeldir_HR, + 'num_classes': classNum, + 'gpu': args.gpu, + 'data_dir': dirs['data_dir_HR'], + 'print_color': "\033[1;32;40m", + 'log_file': modeldir_HR + 'log_'+ str(currentAnnotationIteration+1) +'_HR.txt', + 'log_dir': modeldir_HR + 'log/', + 'learning_rate': args.learning_rate_HR, + 'encoder_name': args.encoder_name + } + training_args_list.append(training_args_HR) + + # train networks in parallel + num_cores = args.gpu_num # GPUs + #Parallel(n_jobs=num_cores, backend='threading')(delayed(train_net)(training_args,dirs) for training_args in training_args_list) + 
train_net(training_args_HR,dirs) + + + finish_model_generation(dirs,currentAnnotationIteration) + + print('\n\n\033[92;5mPlease place new wsi file(s) in: \n\t' + dirs['basedir'] + dirs['project']+ dirs['training_data_dir'] + str(currentAnnotationIteration+1)) + print('\nthen run [--option predict]\033[0m\n') + + + + +def moveimages(startfolder,endfolder): + filelist=glob(startfolder + '*') + for file in filelist: + fileID=file.split('/')[-1] + move(file,endfolder + fileID) + +def train_net(training_args,dirs): + ''' + Receives a dictionary of variables: training_args + [data_list, num_steps, save_interval, pretrain_file, input_height, input_width, batch_size, num_classes, modeldir, data_dir, gpu] + ''' + + print('Running [' + str(training_args['num_steps']) + '] iterations') + print('Saving every [' + str(training_args['save_interval']) + '] iterations') + + call(['python3.5', dirs['basedir'] +'/Codes/Deeplab_network/main.py', '--option', 'train', + '--data_list', training_args['data_list'], + '--num_steps', str(training_args['num_steps']), + '--save_interval',str(training_args['save_interval']), + '--pretrain_file', training_args['pretrain_file'], + '--input_height',str(training_args['input_height']), + '--input_width',str(training_args['input_width']), + '--batch_size',str(training_args['batch_size']), + '--num_classes',str(training_args['num_classes']), + '--modeldir', training_args['modeldir'], + '--data_dir', training_args['data_dir'], + '--log_file', training_args['log_file'], + '--log_dir', training_args['log_dir'], + '--gpu', str(training_args['gpu']), + '--learning_rate', str(training_args['learning_rate']), + '--print_color', training_args['print_color'], + '--encoder_name',training_args['encoder_name']]) + +def check_model_generation(dirs): + modelsCurrent=os.listdir(dirs['basedir'] + dirs['project'] + dirs['modeldir']) + + gens=map(int,modelsCurrent) + + modelOrder=np.sort(list(gens))[::-1] + + for idx in modelOrder: + 
#modelsChkptsLR=glob(dirs['basedir'] + dirs['project'] + dirs['modeldir']+str(modelsCurrent[idx]) + '/LR/*.ckpt*') + modelsChkptsHR=glob(dirs['basedir'] + dirs['project'] + dirs['modeldir']+ str(idx) +'/HR/*.ckpt*') + if not modelsChkptsHR: + continue + return idx + +def finish_model_generation(dirs,currentAnnotationIteration): + make_folder(dirs['basedir'] + dirs['project'] + dirs['training_data_dir'] + str(currentAnnotationIteration + 1)) + +def get_pretrain(currentAnnotationIteration,res,dirs): + + if currentAnnotationIteration==0: + pretrain_file = glob(dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration) + res + '*') + pretrain_file=pretrain_file[0].split('.')[0] + '.' + pretrain_file[0].split('.')[1] + + else: + pretrains=glob(dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration) + res + 'model*') + print(pretrains) + maxmodel=0 + for modelfiles in pretrains: + modelID=modelfiles.split('.')[-2].split('-')[1] + if int(modelID)>maxmodel: + maxmodel=int(modelID) + pretrain_file=dirs['basedir']+dirs['project'] + dirs['modeldir'] + str(currentAnnotationIteration) + res + 'model.ckpt-' + str(maxmodel) + return pretrain_file + +def restart_line(): # for printing chopped image labels in command line + sys.stdout.write('\r') + sys.stdout.flush() + +def file_len(fname): # get txt file length (number of lines) + i = -1 # guard against empty files + with open(fname) as f: + for i, l in enumerate(f): + pass + return i + 1 + +def make_folder(directory): + if not os.path.exists(directory): + os.makedirs(directory) # make the directory if it does not exist already + +def make_all_folders(dirs): + + + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/regions') + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/masks') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirLR'] + '/Augment' +'/regions') + make_folder(dirs['basedir'] 
+dirs['project']+ dirs['tempdirLR'] + '/Augment' +'/masks') + + make_folder(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/regions') + make_folder(dirs['basedir'] +dirs['project']+ dirs['tempdirHR'] + '/masks') + + make_folder(dirs['basedir']+dirs['project'] + dirs['tempdirHR'] + '/Augment' +'/regions') + make_folder(dirs['basedir']+dirs['project']+ dirs['tempdirHR'] + '/Augment' +'/masks') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['modeldir']) + make_folder(dirs['basedir'] +dirs['project']+ dirs['training_data_dir']) + + + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/LR/'+ 'regions/') + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/LR/'+ 'masks/') + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/HR/'+ 'regions/') + make_folder(dirs['basedir'] +dirs['project']+ '/Permanent' +'/HR/'+ 'masks/') + + make_folder(dirs['basedir'] +dirs['project']+ dirs['training_data_dir']) + + make_folder(dirs['basedir'] + '/Codes/Deeplab_network/datasetLR') + make_folder(dirs['basedir'] + '/Codes/Deeplab_network/datasetHR') + + +def return_region(args, wsi_mask, wsiID, fileID, yStart, xStart, idxy, idxx, downsampleRate, outdirT, region_size, dirs, chop_regions,classNum_HR): # perform cutting in parallel + sys.stdout.write(' <'+str(xStart)+'/'+ str(yStart)+'/'+str(chop_regions[idxy,idxx] != 0)+ '> ') + sys.stdout.flush() + restart_line() + + if chop_regions[idxy,idxx] != 0: + + uniqID=fileID + str(yStart) + str(xStart) + if wsiID.split('.')[-1] != 'tif': + slide=getWsi(wsiID) + Im=np.array(slide.read_region((xStart,yStart),0,(region_size,region_size))) + Im=Im[:,:,:3] + else: + yEnd = yStart + region_size + xEnd = xStart + region_size + Im = np.zeros([region_size,region_size,3], dtype=np.uint8) + Im_ = imread(wsiID)[yStart:yEnd, xStart:xEnd, :3] + Im[0:Im_.shape[0], 0:Im_.shape[1], :] = Im_ + + mask_annotation=wsi_mask[yStart:yStart+region_size,xStart:xStart+region_size] + + o1,o2=mask_annotation.shape + if o1 
!=region_size: + mask_annotation=np.pad(mask_annotation,((0,region_size-o1),(0,0)),mode='constant') + if o2 !=region_size: + mask_annotation=np.pad(mask_annotation,((0,0),(0,region_size-o2)),mode='constant') + + + if downsampleRate !=1: + c=(Im.shape) + s1=int(c[0]/(downsampleRate**.5)) + s2=int(c[1]/(downsampleRate**.5)) + Im=(resize(Im,(s1,s2),mode='reflect')*255).astype('uint8') + mask_annotation=resize(mask_annotation,(s1,s2),order=0,preserve_range=True) + + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(outdirT + '/regions/' + uniqID + dirs['imExt'],Im) + imsave(outdirT + '/masks/' + uniqID +dirs['maskExt'],np.uint8(mask_annotation)) + classespresent=np.unique(mask_annotation) + classes=range(0,classNum_HR) + classEnumC=np.zeros([classNum_HR,1]) + + for index,chk in enumerate(classes): + if chk in classespresent: + classEnumC[index]=classEnumC[index]+1 + return classEnumC + else: + + + classes=range(0,classNum_HR) + classEnumC=np.zeros([classNum_HR,1]) + return classEnumC + +def load_batch(imageList,maskDir,batchindex,batch_augs,boxsize,dirs): + + X_data=[] + mask_data=[] + for b in range(0,batch_augs): + fileID = imageList[batchindex] + X_data.append(imread(fileID)) + fileID=fileID.split('/')[-1].split('.')[0] + mask_data.append(imread(maskDir+fileID+dirs['maskExt'])) + + + return X_data,mask_data #Load N copies of current image based on class distributions + +def save_batch(imageblock,maskblock,imageList,batchindex,dirs): + + fileID = imageList[batchindex] + fileID=fileID.split('/')[-1].split('.')[0] + for index in range(0,len(imageblock)): + + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDirAI'] + fileID +'_'+ str(index) + dirs['imExt'],np.uint8(imageblock[index]*255)) + imsave(dirs['outDirAM'] + fileID +'_'+ str(index) + dirs['maskExt'],np.uint8(maskblock[index])) #Save N copies of current image + +def run_batch(imageList, maskDir, batchindex, class_augs, box_size, + hbound, lbound, 
augmentOrder,dirs,classNum_HR,auglen): + sys.stdout.write(' <'+str(batchindex)+'/'+str(auglen)+ '> ') + sys.stdout.flush() + restart_line() + #Load image, determine augmentation probability, augment image, augment colorspace, save images + global seq + seq_det = seq.to_deterministic() + + imageblock,maskblock=load_batch(imageList,maskDir,batchindex,1,box_size,dirs) + + classespresent=np.unique(maskblock) + classes=range(0,classNum_HR) + + for idx in augmentOrder: + if idx in classespresent: + prob=class_augs[idx] + break + imageblock,maskblock=load_batch(imageList,maskDir,batchindex,prob,box_size,dirs) + + + imageblock=seq_det.augment_images(imageblock) + imageblock=colorshift(imageblock,hbound,lbound) + + maskblock=seq_det.augment_images(maskblock) + save_batch(imageblock,maskblock,imageList,batchindex,dirs) + + +def colorshift(imageblock, hbound, lbound): #Shift Hue of HSV space and Lightness of LAB space + for im in range(0,len(imageblock)): + hShift=np.random.normal(0,hbound) + lShift=np.random.normal(1,lbound) + imageblock[im]=randomHSVshift(imageblock[im],hShift,lShift) + return imageblock diff --git a/Codes/IterativeTraining_1X.pyc b/Codes/IterativeTraining_1X.pyc new file mode 100644 index 0000000..732fe35 Binary files /dev/null and b/Codes/IterativeTraining_1X.pyc differ diff --git a/Codes/__pycache__/InitializeFolderStructure.cpython-310.pyc b/Codes/__pycache__/InitializeFolderStructure.cpython-310.pyc new file mode 100644 index 0000000..8215b76 Binary files /dev/null and b/Codes/__pycache__/InitializeFolderStructure.cpython-310.pyc differ diff --git a/Codes/__pycache__/InitializeFolderStructure.cpython-311.pyc b/Codes/__pycache__/InitializeFolderStructure.cpython-311.pyc new file mode 100644 index 0000000..b1759ae Binary files /dev/null and b/Codes/__pycache__/InitializeFolderStructure.cpython-311.pyc differ diff --git a/Codes/__pycache__/InitializeFolderStructure.cpython-35.pyc b/Codes/__pycache__/InitializeFolderStructure.cpython-35.pyc new file mode 
100644 index 0000000..35372b7 Binary files /dev/null and b/Codes/__pycache__/InitializeFolderStructure.cpython-35.pyc differ diff --git a/Codes/__pycache__/InitializeFolderStructure.cpython-36.pyc b/Codes/__pycache__/InitializeFolderStructure.cpython-36.pyc new file mode 100644 index 0000000..453f0ca Binary files /dev/null and b/Codes/__pycache__/InitializeFolderStructure.cpython-36.pyc differ diff --git a/Codes/__pycache__/InitializeFolderStructure.cpython-37.pyc b/Codes/__pycache__/InitializeFolderStructure.cpython-37.pyc new file mode 100644 index 0000000..810b5ef Binary files /dev/null and b/Codes/__pycache__/InitializeFolderStructure.cpython-37.pyc differ diff --git a/Codes/__pycache__/IterativePredict.cpython-37.pyc b/Codes/__pycache__/IterativePredict.cpython-37.pyc new file mode 100644 index 0000000..88ef5a3 Binary files /dev/null and b/Codes/__pycache__/IterativePredict.cpython-37.pyc differ diff --git a/Codes/__pycache__/IterativePredict_1X.cpython-311.pyc b/Codes/__pycache__/IterativePredict_1X.cpython-311.pyc new file mode 100644 index 0000000..5350dd8 Binary files /dev/null and b/Codes/__pycache__/IterativePredict_1X.cpython-311.pyc differ diff --git a/Codes/__pycache__/IterativePredict_1X.cpython-35.pyc b/Codes/__pycache__/IterativePredict_1X.cpython-35.pyc new file mode 100644 index 0000000..6108969 Binary files /dev/null and b/Codes/__pycache__/IterativePredict_1X.cpython-35.pyc differ diff --git a/Codes/__pycache__/IterativePredict_1X.cpython-37.pyc b/Codes/__pycache__/IterativePredict_1X.cpython-37.pyc new file mode 100644 index 0000000..e4f53d0 Binary files /dev/null and b/Codes/__pycache__/IterativePredict_1X.cpython-37.pyc differ diff --git a/Codes/__pycache__/IterativeTraining.cpython-310.pyc b/Codes/__pycache__/IterativeTraining.cpython-310.pyc new file mode 100644 index 0000000..1f8433f Binary files /dev/null and b/Codes/__pycache__/IterativeTraining.cpython-310.pyc differ diff --git a/Codes/__pycache__/IterativeTraining.cpython-311.pyc 
b/Codes/__pycache__/IterativeTraining.cpython-311.pyc new file mode 100644 index 0000000..5d07bd6 Binary files /dev/null and b/Codes/__pycache__/IterativeTraining.cpython-311.pyc differ diff --git a/Codes/__pycache__/IterativeTraining.cpython-35.pyc b/Codes/__pycache__/IterativeTraining.cpython-35.pyc new file mode 100644 index 0000000..ccd3076 Binary files /dev/null and b/Codes/__pycache__/IterativeTraining.cpython-35.pyc differ diff --git a/Codes/__pycache__/IterativeTraining.cpython-37.pyc b/Codes/__pycache__/IterativeTraining.cpython-37.pyc new file mode 100644 index 0000000..65ba3c0 Binary files /dev/null and b/Codes/__pycache__/IterativeTraining.cpython-37.pyc differ diff --git a/Codes/__pycache__/IterativeTraining_1X.cpython-310.pyc b/Codes/__pycache__/IterativeTraining_1X.cpython-310.pyc new file mode 100644 index 0000000..a8bc327 Binary files /dev/null and b/Codes/__pycache__/IterativeTraining_1X.cpython-310.pyc differ diff --git a/Codes/__pycache__/IterativeTraining_1X.cpython-311.pyc b/Codes/__pycache__/IterativeTraining_1X.cpython-311.pyc new file mode 100644 index 0000000..5d5d1bc Binary files /dev/null and b/Codes/__pycache__/IterativeTraining_1X.cpython-311.pyc differ diff --git a/Codes/__pycache__/IterativeTraining_1X.cpython-35.pyc b/Codes/__pycache__/IterativeTraining_1X.cpython-35.pyc new file mode 100644 index 0000000..44bdca7 Binary files /dev/null and b/Codes/__pycache__/IterativeTraining_1X.cpython-35.pyc differ diff --git a/Codes/__pycache__/IterativeTraining_1X.cpython-36.pyc b/Codes/__pycache__/IterativeTraining_1X.cpython-36.pyc new file mode 100644 index 0000000..42404ef Binary files /dev/null and b/Codes/__pycache__/IterativeTraining_1X.cpython-36.pyc differ diff --git a/Codes/__pycache__/IterativeTraining_1X.cpython-37.pyc b/Codes/__pycache__/IterativeTraining_1X.cpython-37.pyc new file mode 100644 index 0000000..afd7b59 Binary files /dev/null and b/Codes/__pycache__/IterativeTraining_1X.cpython-37.pyc differ diff --git 
a/Codes/__pycache__/evolve_predictions.cpython-310.pyc b/Codes/__pycache__/evolve_predictions.cpython-310.pyc new file mode 100644 index 0000000..072071b Binary files /dev/null and b/Codes/__pycache__/evolve_predictions.cpython-310.pyc differ diff --git a/Codes/__pycache__/evolve_predictions.cpython-36.pyc b/Codes/__pycache__/evolve_predictions.cpython-36.pyc new file mode 100644 index 0000000..ff4204b Binary files /dev/null and b/Codes/__pycache__/evolve_predictions.cpython-36.pyc differ diff --git a/Codes/__pycache__/evolve_predictions.cpython-37.pyc b/Codes/__pycache__/evolve_predictions.cpython-37.pyc new file mode 100644 index 0000000..78a547b Binary files /dev/null and b/Codes/__pycache__/evolve_predictions.cpython-37.pyc differ diff --git a/Codes/__pycache__/generateTrainSet.cpython-311.pyc b/Codes/__pycache__/generateTrainSet.cpython-311.pyc new file mode 100644 index 0000000..8aec1ab Binary files /dev/null and b/Codes/__pycache__/generateTrainSet.cpython-311.pyc differ diff --git a/Codes/__pycache__/generateTrainSet.cpython-35.pyc b/Codes/__pycache__/generateTrainSet.cpython-35.pyc new file mode 100644 index 0000000..dab26db Binary files /dev/null and b/Codes/__pycache__/generateTrainSet.cpython-35.pyc differ diff --git a/Codes/__pycache__/generateTrainSet.cpython-37.pyc b/Codes/__pycache__/generateTrainSet.cpython-37.pyc new file mode 100644 index 0000000..5bb85bd Binary files /dev/null and b/Codes/__pycache__/generateTrainSet.cpython-37.pyc differ diff --git a/Codes/__pycache__/getWsi.cpython-310.pyc b/Codes/__pycache__/getWsi.cpython-310.pyc new file mode 100644 index 0000000..bc0d80e Binary files /dev/null and b/Codes/__pycache__/getWsi.cpython-310.pyc differ diff --git a/Codes/__pycache__/getWsi.cpython-311.pyc b/Codes/__pycache__/getWsi.cpython-311.pyc new file mode 100644 index 0000000..efe9a08 Binary files /dev/null and b/Codes/__pycache__/getWsi.cpython-311.pyc differ diff --git a/Codes/__pycache__/getWsi.cpython-35.pyc 
b/Codes/__pycache__/getWsi.cpython-35.pyc new file mode 100644 index 0000000..f315e5a Binary files /dev/null and b/Codes/__pycache__/getWsi.cpython-35.pyc differ diff --git a/Codes/__pycache__/getWsi.cpython-37.pyc b/Codes/__pycache__/getWsi.cpython-37.pyc new file mode 100644 index 0000000..b2ed7a9 Binary files /dev/null and b/Codes/__pycache__/getWsi.cpython-37.pyc differ diff --git a/Codes/__pycache__/get_choppable_regions.cpython-311.pyc b/Codes/__pycache__/get_choppable_regions.cpython-311.pyc new file mode 100644 index 0000000..f16aa43 Binary files /dev/null and b/Codes/__pycache__/get_choppable_regions.cpython-311.pyc differ diff --git a/Codes/__pycache__/get_choppable_regions.cpython-35.pyc b/Codes/__pycache__/get_choppable_regions.cpython-35.pyc new file mode 100644 index 0000000..8107672 Binary files /dev/null and b/Codes/__pycache__/get_choppable_regions.cpython-35.pyc differ diff --git a/Codes/__pycache__/get_choppable_regions.cpython-37.pyc b/Codes/__pycache__/get_choppable_regions.cpython-37.pyc new file mode 100644 index 0000000..5fbb0b0 Binary files /dev/null and b/Codes/__pycache__/get_choppable_regions.cpython-37.pyc differ diff --git a/Codes/__pycache__/get_network_performance.cpython-311.pyc b/Codes/__pycache__/get_network_performance.cpython-311.pyc new file mode 100644 index 0000000..62f08f2 Binary files /dev/null and b/Codes/__pycache__/get_network_performance.cpython-311.pyc differ diff --git a/Codes/__pycache__/get_network_performance.cpython-35.pyc b/Codes/__pycache__/get_network_performance.cpython-35.pyc new file mode 100644 index 0000000..bb212ce Binary files /dev/null and b/Codes/__pycache__/get_network_performance.cpython-35.pyc differ diff --git a/Codes/__pycache__/get_network_performance.cpython-37.pyc b/Codes/__pycache__/get_network_performance.cpython-37.pyc new file mode 100644 index 0000000..2525485 Binary files /dev/null and b/Codes/__pycache__/get_network_performance.cpython-37.pyc differ diff --git 
a/Codes/__pycache__/randomHSVshift.cpython-310.pyc b/Codes/__pycache__/randomHSVshift.cpython-310.pyc new file mode 100644 index 0000000..8025ede Binary files /dev/null and b/Codes/__pycache__/randomHSVshift.cpython-310.pyc differ diff --git a/Codes/__pycache__/randomHSVshift.cpython-311.pyc b/Codes/__pycache__/randomHSVshift.cpython-311.pyc new file mode 100644 index 0000000..a174ace Binary files /dev/null and b/Codes/__pycache__/randomHSVshift.cpython-311.pyc differ diff --git a/Codes/__pycache__/randomHSVshift.cpython-35.pyc b/Codes/__pycache__/randomHSVshift.cpython-35.pyc new file mode 100644 index 0000000..ef11463 Binary files /dev/null and b/Codes/__pycache__/randomHSVshift.cpython-35.pyc differ diff --git a/Codes/__pycache__/randomHSVshift.cpython-37.pyc b/Codes/__pycache__/randomHSVshift.cpython-37.pyc new file mode 100644 index 0000000..62d7fb4 Binary files /dev/null and b/Codes/__pycache__/randomHSVshift.cpython-37.pyc differ diff --git a/Codes/__pycache__/xml_to_mask.cpython-310.pyc b/Codes/__pycache__/xml_to_mask.cpython-310.pyc new file mode 100644 index 0000000..514ba88 Binary files /dev/null and b/Codes/__pycache__/xml_to_mask.cpython-310.pyc differ diff --git a/Codes/__pycache__/xml_to_mask.cpython-311.pyc b/Codes/__pycache__/xml_to_mask.cpython-311.pyc new file mode 100644 index 0000000..753753c Binary files /dev/null and b/Codes/__pycache__/xml_to_mask.cpython-311.pyc differ diff --git a/Codes/__pycache__/xml_to_mask.cpython-35.pyc b/Codes/__pycache__/xml_to_mask.cpython-35.pyc new file mode 100644 index 0000000..1c23c52 Binary files /dev/null and b/Codes/__pycache__/xml_to_mask.cpython-35.pyc differ diff --git a/Codes/__pycache__/xml_to_mask.cpython-37.pyc b/Codes/__pycache__/xml_to_mask.cpython-37.pyc new file mode 100644 index 0000000..4064425 Binary files /dev/null and b/Codes/__pycache__/xml_to_mask.cpython-37.pyc differ diff --git a/Codes/evolve_predictions.py b/Codes/evolve_predictions.py new file mode 100644 index 0000000..01ce4aa --- 
/dev/null +++ b/Codes/evolve_predictions.py @@ -0,0 +1,546 @@ +import cv2 +import numpy as np +import os +import sys +import argparse +import multiprocessing +import lxml.etree as ET +import warnings +import time + +sys.path.append(os.getcwd()+'/Codes') + +from glob import glob +from subprocess import call +from joblib import Parallel, delayed +from skimage.io import imread, imsave +from skimage.transform import resize +from scipy.ndimage.measurements import label +from skimage.segmentation import clear_border +from skimage.morphology import remove_small_objects +from skimage import color +from shutil import rmtree +from IterativeTraining import get_num_classes +from get_choppable_regions import get_choppable_regions +from get_network_performance import get_perf + +""" +Code to test a WSI using all saved models in project +Saves a .gif image of the predictions from the specified region + +""" + +# define xml class colormap +xml_color = [65280, 65535, 255, 16711680, 33023] + +def evolve(args): + # define folder structure dict + dirs = {'outDir': args.base_dir + '/' + args.project + args.outDir} + dirs['txt_save_dir'] = '/txt_files/' + dirs['img_save_dir'] = '/img_files/' + dirs['final_output_dir'] = '/boundaries/' + dirs['final_boundary_image_dir'] = '/images/' + dirs['mask_dir'] = '/wsi_mask/' + dirs['chopped_dir'] = '/originals/' + dirs['crop_dir'] = '/wsi_crops/' + dirs['save_outputs'] = args.save_outputs + dirs['modeldir'] = '/MODELS/' + dirs['training_data_dir'] = '/TRAINING_data/' + dirs['validation_data_dir'] = '/HOLDOUT_data/' + + # find current iteration + iteration = get_iteration(args=args) + + # get all WSIs + WSIs = glob(args.base_dir + '/' + args.project + dirs['validation_data_dir'] + + '/*.svs') + + if iteration == 'none': + print('ERROR: no trained models found \n\tplease use [--option train]') + + else: + for iter in range(1,iteration+1): + dirs['xml_save_dir'] = args.base_dir + '/' + args.project + dirs['validation_data_dir'] + 'evolved_XMLs/' + + 
+ # check main directory exists + make_folder(dirs['outDir']) + + if not os.path.exists(dirs['xml_save_dir']): + make_folder(dirs['xml_save_dir']) + print('working on iteration: ' + str(iter)) + + for wsi in WSIs: + # predict xmls + predict_all_xmls(args=args, dirs=dirs, wsi=wsi, iteration=iter) + + print('\n\n\033[92;5mDone evolving: \n\t\033[0m\n') + + +def predict_all_xmls(args, dirs, wsi, iteration): + # reshape regions calc + downsample = int(args.downsampleRateLR**.5) + region_size = int(args.boxSizeLR*(downsample)) + step = int(region_size*(1-args.overlap_percentLR)) + + # figure out the number of classes + annotatedXMLs=glob(args.base_dir + '/' + args.project + dirs['training_data_dir'] + str(iteration-1) + '/*.xml') + classes = [] + for xml in annotatedXMLs: + classes.append(get_num_classes(xml)) + classNum = max(classes) + + # chop wsi + fileID, test_num_steps, slide = chop_suey(wsi, dirs, downsample, region_size, step, args) + dirs['fileID'] = fileID + print('Chop SUEY!\n') + + # call DeepLab for prediction (Low resolution) + print('finding Glom locations ...\n') + + make_folder(dirs['outDir'] + fileID + dirs['img_save_dir'] + 'prediction') + + modeldir = args.base_dir + '/' + args.project + dirs['modeldir'] + str(iteration) + '/LR' + test_steps_LR = get_all_test_steps(modeldir) + + + for idx, test_step in enumerate(test_steps_LR): + # test all saved models in current iteration + + modeldir = args.base_dir + '/' + args.project + dirs['modeldir'] + str(iteration) + '/LR' + test_data_list = fileID + '_images' + '.txt' + print("\033[1;32;40m"+"starting prediction using model: \n\t" + modeldir + str(test_step) + "\033[0;37;40m"+"\n\n") + + call(['python3', args.base_dir+'/Codes/Deeplab_network/main.py', + '--option', 'predict', + '--test_data_list', dirs['outDir']+fileID+dirs['txt_save_dir']+test_data_list, + '--out_dir', dirs['outDir']+fileID+dirs['img_save_dir'], + '--test_step', str(test_step), + '--test_num_steps', str(test_num_steps), + '--modeldir', 
modeldir, + '--data_dir', dirs['outDir']+fileID+dirs['img_save_dir'], + '--num_classes', str(classNum), + '--gpu', '0']) + + # un chop + print('\nreconstructing wsi map ...\n') + wsiMask = un_suey(dirs=dirs, args=args) + + # save hotspots + if dirs['save_outputs']: + make_folder(dirs['outDir'] + fileID + dirs['mask_dir']) + print('saving to: ' + dirs['outDir'] + fileID + dirs['mask_dir'] + fileID + '.png') + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + fileID + dirs['mask_dir'] + fileID + '.png', wsiMask) + + # find glom locations in reconstructed map + print('\ninterpreting prediction map ...') + test_num_steps, labeledArray, label_offsets = find_suey(wsiMask, dirs, downsample, args, wsi) + print('\n\nthe cropped regions have been saved to: ' + dirs['outDir'] + fileID + dirs['img_save_dir'] + fileID + dirs['crop_dir']) + + # call DeepLab to predict Glom boundaries (High resolution) + print('\ngetting Glom boundaries ...\n') + make_folder(dirs['outDir'] + fileID + dirs['final_output_dir'] + 'prediction') + + test_data_list = fileID + '_crops.txt' + modeldir = args.base_dir + '/' + args.project + dirs['modeldir'] + str(iteration) + '/HR' + + test_steps_HR = get_all_test_steps(modeldir) + if idx >= len(test_steps_HR): # fall back to the last HR checkpoint; idx == len would otherwise raise an IndexError + test_step = test_steps_HR[-1] + else: + test_step = test_steps_HR[idx] + + + print("\033[1;32;40m"+"starting prediction using model: \n\t" + modeldir + str(test_step) + "\033[0;37;40m"+"\n\n") + + call(['python3', args.base_dir+'/Codes/Deeplab_network/main.py', + '--option', 'predict', + '--test_data_list', dirs['outDir']+fileID+dirs['txt_save_dir']+test_data_list, + '--out_dir', dirs['outDir']+fileID+dirs['final_output_dir'], + '--test_step', str(test_step), + '--test_num_steps', str(test_num_steps), + '--modeldir', modeldir, + '--data_dir', dirs['outDir']+fileID+dirs['img_save_dir'], + '--num_classes', str(classNum), + '--gpu', '0']) + + print('\nsaving final glom images ...') + print('\nworking 
on:') + + crop_suey(label_offsets, dirs, args, classNum, iteration, idx) + + + + # clean up + if not dirs['save_outputs']: + print('cleaning up') + rmtree(dirs['outDir']+fileID) + + +def get_iteration(args): + currentmodels=os.listdir(args.base_dir + '/' + args.project + '/MODELS/') + if not currentmodels: + return 'none' + else: + currentmodels=list(map(int,currentmodels)) # materialize the map object so np.max works in python3 + Iteration=np.max(currentmodels) + return Iteration + +def get_all_test_steps(modeldir): + pretrains=glob(modeldir + '/*.ckpt*') + model_IDs = [] + for modelfiles in pretrains: + modelID=modelfiles.split('.')[-2].split('-')[1] + model_IDs.append(int(modelID)) + model_IDs = np.sort(np.unique(np.array(model_IDs))) + print('Found models: ') + print(model_IDs[1:]) + + return model_IDs[1:] + +def make_folder(directory): + if not os.path.exists(directory): + os.makedirs(directory) # make the directory if it does not exist already + +def restart_line(): # for printing chopped image labels in command line + sys.stdout.write('\r') + sys.stdout.flush() + +def getWsi(path): #imports a WSI + import openslide + slide = openslide.OpenSlide(path) + return slide + +def file_len(fname): # get txt file length (number of lines) + with open(fname) as f: + for i, l in enumerate(f): + pass + + if 'i' in locals(): + return i + 1 + + else: + return 0 + + +def chop_suey(wsi, dirs, downsample, region_size, step, args): # chop wsi + print('\nopening: ' + wsi) + basename = os.path.splitext(wsi)[0] + + slide=getWsi(wsi) + + fileID=basename.split('/') + dirs['fileID'] = fileID = fileID[-1] + print('\nchopping ...\n') + + # make txt file + make_folder(dirs['outDir'] + fileID + dirs['txt_save_dir']) + f_name = dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + ".txt" + f2_name = dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt" + f = open(f_name, 'w') + f2 = open(f2_name, 'w') + f2.close() + + make_folder(dirs['outDir'] + fileID + dirs['img_save_dir'] + 
dirs['chopped_dir']) + + # get image dimensions + dim_x, dim_y=slide.dimensions + f.write('Image dimensions:\n') + + # make index for iters + index_y=range(0,dim_y-step,step) + index_x=range(0,dim_x-step,step) + + # integer division here so the values can be parsed back as ints in un_suey + f.write('X dim: ' + str((index_x[-1]+region_size)//downsample) +'\n') + f.write('Y dim: ' + str((index_y[-1]+region_size)//downsample) +'\n\n') + f.write('Regions:\n') + f.write('image:xStart:xStop:yStart:yStop\n\n') + f.close() + + # get non white regions + choppable_regions = get_choppable_regions(wsi=wsi, index_x=index_x, index_y=index_y, boxSize=region_size) + + print('saving region:') + + num_cores = multiprocessing.cpu_count() + + Parallel(n_jobs=num_cores, backend='threading')(delayed(chop_wsi)(yStart=i, xStart=j, idxx=idxx, idxy=idxy, f_name=f_name, f2_name=f2_name, dirs=dirs, downsample=downsample, region_size=region_size, args=args, wsi=wsi, choppable_regions=choppable_regions) for idxy, i in enumerate(index_y) for idxx, j in enumerate(index_x)) + + test_num_steps = file_len(dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt") + print('\n\n' + str(test_num_steps) +' image regions chopped') + + return fileID, test_num_steps, slide + +def chop_wsi(yStart, xStart, idxx, idxy, f_name, f2_name, dirs, downsample, region_size, args, wsi, choppable_regions): # perform cutting in parallel + if choppable_regions[idxy, idxx] != 0: + slide = getWsi(wsi) + + yEnd = yStart+region_size + xEnd = xStart+region_size + xLen=xEnd-xStart + yLen=yEnd-yStart + + subsect= np.array(slide.read_region((xStart,yStart),0,(xLen,yLen))) + subsect=subsect[:,:,:3] + + imageIter = str(xStart)+str(yStart) + + f = open(f_name, 'a+') + f2 = open(f2_name, 'a+') + + # append txt file (integer division so un_suey can parse the bounds as ints) + f.write(imageIter + ':' + str(xStart//downsample) + ':' + str(xEnd//downsample) + + ':' + str(yStart//downsample) + ':' + str(yEnd//downsample) + '\n') + + # resize images and masks + c=(subsect.shape) + 
s1=int(c[0]/(args.downsampleRateLR**.5)) + s2=int(c[1]/(args.downsampleRateLR**.5)) + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + subsect=resize(subsect,(s1,s2), mode='constant') + + # save image + directory = dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['chopped_dir'] + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(directory + dirs['fileID'] + str(imageIter) + args.imBoxExt,subsect) + + f2.write(dirs['chopped_dir'] + dirs['fileID'] + str(imageIter) + args.imBoxExt + '\n') + f.close() + f2.close() + + sys.stdout.write(' <'+str(xStart)+':'+str(xEnd)+' '+str(yStart)+':'+str(yEnd)+'> ') + sys.stdout.flush() + restart_line() + +def un_suey(dirs, args): # reconstruct wsi from predicted masks + txtFile = dirs['fileID'] + '.txt' + + # read txt file + f = open(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + txtFile, 'r') + lines = f.readlines() + f.close() + lines = np.array(lines) + + # get wsi size + xDim = np.int32((lines[1].split(': ')[1]).split('\n')[0]) + yDim = np.int32((lines[2].split(': ')[1]).split('\n')[0]) + #print('xDim: ' + str(xDim)) + #print('yDim: ' + str(yDim)) + + # make wsi mask + wsiMask = np.zeros([yDim, xDim]) + + # read image regions + for regionNum in range(7, np.size(lines)): + # get region + region = lines[regionNum].split(':') + region[4] = region[4].split('\n')[0] + + # read mask + mask = imread(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + 'prediction/' + dirs['fileID'] + region[0] + '_mask.png') + + # get region bounds + xStart = np.int32(region[1]) + #print('xStart: ' + str(xStart)) + xStop = np.int32(region[2]) + #print('xStop: ' + str(xStop)) + yStart = np.int32(region[3]) + if yStart < 0: + yStart = 0 + #print('yStart: ' + str(yStart)) + yStop = np.int32(region[4]) + #print('yStop: ' + str(yStop)) + + # populate wsiMask with max + #print(np.shape(wsiMask)) + wsiMask[yStart:yStop, xStart:xStop] = np.maximum(wsiMask[yStart:yStop, xStart:xStop], mask) + 
#wsiMask[yStart:yStop, xStart:xStop] = np.ones([yStop-yStart, xStop-xStart]) + + return wsiMask + +def find_suey(wsiMask, dirs, downsample, args, wsi): # locates the detected glom regions in the reconstructed wsi mask + # clean up mask + print(' removing small objects') + cleanMask = remove_small_objects(wsiMask.astype(bool), args.min_size) + print(' separating Glom objects\n') + # find all unconnected regions + labeledArray, num_features = label(cleanMask) + print('found: '+ str(num_features) + ' regions') + + # save cleaned mask + if dirs['save_outputs']: + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['mask_dir'] + dirs['fileID'] + '_cleaned.png', cleanMask*255) + + make_folder(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['crop_dir']) + + f_name = dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + dirs['fileID'] + '_crops.txt' + f = open(f_name, 'w') + f.close() + + # run crop_region on each labeled region (a parallel version is commented out below) + print('\nsaving:') + #num_cores = multiprocessing.cpu_count() + #Parallel(n_jobs=num_cores)(delayed(crop_region)(region_iter=i, labeledArray=labeledArray, fileID=fileID, f_name=f_name) for i in range(1, num_features)) + label_offsets = [] + for region_iter in range(1, num_features+1): # labels run from 1 to num_features inclusive + label_offset = crop_region(region_iter=region_iter, labeledArray=labeledArray, f_name=f_name, dirs=dirs, downsample=downsample, args=args, wsi=wsi) + label_offsets.append(label_offset) + + test_num_steps = file_len(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + dirs['fileID'] + '_crops' + ".txt") + return test_num_steps, labeledArray, label_offsets + +def crop_region(region_iter, labeledArray, f_name, dirs, downsample, args, wsi): # crop selected region from wsi and save // location defined by labeledArray + slide = getWsi(wsi) + + # get list of locations for pixels == region_iter + mask_region = np.argwhere(labeledArray == region_iter) + # calculate the region bounds + yStart = 
(min(mask_region[:,0]) * downsample) - args.LR_region_pad + yLen = (max(mask_region[:,0]) * downsample) - yStart + args.LR_region_pad + xStart = (min(mask_region[:,1]) * downsample) - args.LR_region_pad + xLen = (max(mask_region[:,1]) * downsample) - xStart + args.LR_region_pad + + region = np.array(slide.read_region((xStart,yStart),0,(xLen,yLen))) + region = region[:,:,0:3] + + # print output + sys.stdout.write(' <' + str(region_iter) + '> ') + sys.stdout.flush() + restart_line() + + # write image path to text file + f = open(f_name, 'a+') + f.write(dirs['crop_dir'] + dirs['fileID'] + str(region_iter) + args.imBoxExt + '\n') + f.close() + + # save image region + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['crop_dir'] + dirs['fileID'] + str(region_iter) + args.imBoxExt, region) + label_offset = {'Y': yStart, 'X': xStart} + return label_offset + + +def crop_suey(label_offsets, dirs, args, classNum, iter, idx): + txtFile = dirs['fileID'] + '_crops.txt' + + # read txt file with img paths + f = open(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + txtFile, 'r') + lines = f.readlines() + f.close() + lines = np.array(lines) + + make_folder(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + dirs['final_boundary_image_dir'][1:]) + + # make xml + Annotations = xml_create() + # add annotation + for i in range(classNum)[1:]: # exclude background class + Annotations = xml_add_annotation(Annotations=Annotations, annotationID=i) + + for line in range(0, np.size(lines)): + image_path = lines[line].split('\n')[0] + + # get glom and corresponding mask + file_name = (image_path.split('.')[0]).split(dirs['crop_dir'])[1] + mask_image = imread(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + 'prediction/' + + file_name + '_mask.png') + + # print output + sys.stdout.write(' <' + file_name + '> ') + sys.stdout.flush() + restart_line() + + for value in 
np.unique(mask_image)[1:]: + # get only 1 class binary mask + binary_mask = np.zeros(np.shape(mask_image)).astype('uint8') + binary_mask[mask_image == value] = 1 + + # add mask to xml + label_offset = label_offsets[line] + pointsList = get_contour_points(binary_mask, args=args, offset=label_offset) + for i in range(np.shape(pointsList)[0]): + pointList = pointsList[i] + Annotations = xml_add_region(Annotations=Annotations, pointList=pointList, annotationID=value) + + # save mask images + if dirs['save_outputs'] == True: + glom_image = imread(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + image_path[1:]) + if np.sum(mask_image) != 0: + # remove background in images + for i in range(3): + glom_image[:,:,i] = glom_image[:,:,i] * (mask_image * ((1-args.bg_intensity)) + args.bg_intensity) + + # save resulting image + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + dirs['final_boundary_image_dir'][1:] + file_name + '_glom' + args.finalImgExt, glom_image) + + # save xml + xml_save(Annotations=Annotations, filename=dirs['xml_save_dir']+'/'+dirs['fileID']+'_'+str(iter)+'_'+str(idx)+'.xml') + +def get_contour_points(mask, args, offset={'X': 0,'Y': 0}): + # returns a dict pointList with point 'X' and 'Y' values + # input greyscale binary image + _, maskPoints, contours = cv2.findContours(np.array(mask), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) + pointsList = [] + + for j in range(np.shape(maskPoints)[0]): + if cv2.contourArea(maskPoints[j]) > args.min_size: + pointList = [] + for i in range(np.shape(maskPoints[j])[0]): + point = {'X': maskPoints[j][i][0][0] + offset['X'], 'Y': maskPoints[j][i][0][1] + offset['Y']} + pointList.append(point) + pointsList.append(pointList) + return pointsList + +### functions for building an xml tree of annotations ### +def xml_create(): # create new xml tree + # create new xml Tree - Annotations + Annotations = ET.Element('Annotations', 
attrib={'MicronsPerPixel': '0.252000'}) + return Annotations + +def xml_add_annotation(Annotations, annotationID=None): # add new annotation + # add new Annotation to Annotations + # defaults to new annotationID + if annotationID == None: # not specified + annotationID = len(Annotations.findall('Annotation')) + 1 + Annotation = ET.SubElement(Annotations, 'Annotation', attrib={'Type': '4', 'Visible': '1', 'ReadOnly': '0', 'Incremental': '0', 'LineColorReadOnly': '0', 'LineColor': str(xml_color[annotationID-1]), 'Id': str(annotationID), 'NameReadOnly': '0'}) + Regions = ET.SubElement(Annotation, 'Regions') + return Annotations + +def xml_add_region(Annotations, pointList, annotationID=-1, regionID=None): # add new region to annotation + # add new Region to Annotation + # defaults to last annotationID and new regionID + Annotation = Annotations.find("Annotation[@Id='" + str(annotationID) + "']") + Regions = Annotation.find('Regions') + if regionID == None: # not specified + regionID = len(Regions.findall('Region')) + 1 + Region = ET.SubElement(Regions, 'Region', attrib={'NegativeROA': '0', 'ImageFocus': '-1', 'DisplayId': '1', 'InputRegionId': '0', 'Analyze': '0', 'Type': '0', 'Id': str(regionID)}) + Vertices = ET.SubElement(Region, 'Vertices') + for point in pointList: # add new Vertex + ET.SubElement(Vertices, 'Vertex', attrib={'X': str(point['X']), 'Y': str(point['Y']), 'Z': '0'}) + # add connecting point + ET.SubElement(Vertices, 'Vertex', attrib={'X': str(pointList[0]['X']), 'Y': str(pointList[0]['Y']), 'Z': '0'}) + return Annotations + +def xml_save(Annotations, filename): + xml_data = ET.tostring(Annotations, pretty_print=True) + #xml_data = Annotations.toprettyxml() + f = open(filename, 'w') + f.write(xml_data) + f.close() + +def read_xml(filename): + # import xml file + tree = ET.parse(filename) + root = tree.getroot() diff --git a/Codes/evolve_predictions.pyc b/Codes/evolve_predictions.pyc new file mode 100644 index 0000000..84d8b60 Binary files /dev/null 
and b/Codes/evolve_predictions.pyc differ diff --git a/Codes/generateTrainSet.py b/Codes/generateTrainSet.py new file mode 100644 index 0000000..4612d7c --- /dev/null +++ b/Codes/generateTrainSet.py @@ -0,0 +1,22 @@ +import glob +import numpy as np +import os +from cv2 import imread,imwrite + + +def generateDatalists(images,masks,imfolder,maskfolder,imExt,maskExt,f_name1): + if os.path.exists(f_name1): + os.remove(f_name1) + f1=open(f_name1,'w') + f1.close() + + trainingNames=glob.glob(images + '*' + imExt) + totalImages=len(trainingNames) + + f1=open(f_name1,'a') + for im in range(0,totalImages): + fileID=trainingNames[im].split('/')[-1].split('.')[0] + imagename=imfolder + fileID + '.jpeg' + maskname=maskfolder + fileID + '.png' + f1.write(imagename + ' ' + maskname + '\n') + f1.close() diff --git a/Codes/generateTrainSet.pyc b/Codes/generateTrainSet.pyc new file mode 100644 index 0000000..d681b46 Binary files /dev/null and b/Codes/generateTrainSet.pyc differ diff --git a/Codes/getWsi.py b/Codes/getWsi.py new file mode 100644 index 0000000..0a7e221 --- /dev/null +++ b/Codes/getWsi.py @@ -0,0 +1,5 @@ + +def getWsi(path): #imports a WSI + import openslide + wsi = openslide.OpenSlide(path) + return wsi diff --git a/Codes/getWsi.pyc b/Codes/getWsi.pyc new file mode 100644 index 0000000..4a00026 Binary files /dev/null and b/Codes/getWsi.pyc differ diff --git a/Codes/get_choppable_regions.py b/Codes/get_choppable_regions.py new file mode 100644 index 0000000..0e5ad32 --- /dev/null +++ b/Codes/get_choppable_regions.py @@ -0,0 +1,58 @@ + +import numpy as np +from getWsi import getWsi +from skimage.filters import threshold_otsu +from skimage.morphology import binary_closing, disk, remove_small_objects,label +from scipy.ndimage.morphology import binary_fill_holes +import matplotlib.pyplot as plt +from skimage.color import rgb2hsv +from skimage.filters import gaussian +from skimage.morphology import binary_dilation, diamond +def get_choppable_regions(wsi,index_x, index_y, 
boxSize,white_percent): + if wsi.split('.')[-1] != 'tif': + slide=getWsi(wsi) + slide_level = slide.level_count-1 + + fullSize=slide.level_dimensions[0] + resRatio= 16 + ds_1=fullSize[0]/16 + ds_2=fullSize[1]/16 + Im=np.array(slide.get_thumbnail((ds_1,ds_2))) + + ID=wsi.split('.svs')[0] + + hsv=rgb2hsv(Im) + + g=gaussian(hsv[:,:,1],20) + + + binary=(g>0.05).astype('bool') + binary2=binary_dilation(binary,selem=diamond(20)) + binary2=binary_fill_holes(binary2) + + ''' + Im2=Im + ax1=plt.subplot(121) + ax1=plt.imshow(Im) + ax1=plt.subplot(122) + Im2[binary2==0,:]=0 + ax1=plt.imshow(Im2) + + plt.savefig(ID+'.png') + ''' + + choppable_regions=np.zeros((len(index_y),len(index_x))) + for idxy,yi in enumerate(index_y): + for idxx,xj in enumerate(index_x): + yStart = int(np.round((yi)/resRatio)) + yStop = int(np.round((yi+boxSize)/resRatio)) + xStart = int(np.round((xj)/resRatio)) + xStop = int(np.round((xj+boxSize)/resRatio)) + box_total=(xStop-xStart)*(yStop-yStart) + if np.sum(binary2[yStart:yStop,xStart:xStop])>(white_percent*box_total): + choppable_regions[idxy,idxx]=1 + + else: + choppable_regions=np.ones((len(index_y),len(index_x))) + + return choppable_regions diff --git a/Codes/get_choppable_regions.pyc b/Codes/get_choppable_regions.pyc new file mode 100644 index 0000000..7fa4f39 Binary files /dev/null and b/Codes/get_choppable_regions.pyc differ diff --git a/Codes/get_network_performance.py b/Codes/get_network_performance.py new file mode 100644 index 0000000..cdf52df --- /dev/null +++ b/Codes/get_network_performance.py @@ -0,0 +1,54 @@ +import numpy as np +import getWsi +from xml_to_mask import xml_to_mask +from joblib import Parallel, delayed +import multiprocessing +from PIL import Image + +def get_perf(wsi,xml1,xml2,args): + if args.wsi_ext != '.tif': + WSIinfo=getWsi.getWsi(wsi) + dim_x, dim_y=WSIinfo.dimensions + else: + im = Image.open(wsi) + dim_x, dim_y=im.size + + totalPixels=np.float(dim_x*dim_y) + + # annotated xml + mask_gt = xml_to_mask(xml1, (0,0), 
(dim_x,dim_y), 1, 0) + # predicted xml + mask_pred = xml_to_mask(xml2, (0,0), (dim_x,dim_y), 1, 0) + + np.place(mask_pred,mask_pred>0,1) + np.place(mask_gt,mask_gt>0,1) + + TP = float(np.sum(np.multiply(mask_pred, mask_gt))) + FP = float(np.sum(mask_pred) - TP) + + mask_pred = abs(mask_pred - 1) + mask_gt = abs(mask_gt - 1) + np.place(mask_pred,mask_pred>0,1) + np.place(mask_gt,mask_gt>0,1) + + TN = float(np.sum(np.multiply(mask_pred,mask_gt))) + FN = float(np.sum(mask_pred) - TN) + + if TP+FP==0: + precision = 1 + else: + precision = (TP/(TP+FP)) + + accuracy = ((TP + TN) / (TN+FN+TP+FP)) + + if TN+FP == 0: + specificity = 1 + else: + specificity = (TN/(FP+TN)) + + if TP+FN ==0: + sensitivity= 1 + else: + sensitivity = (TP / (TP+FN)) + + return sensitivity,specificity,precision,accuracy diff --git a/Codes/get_network_performance.pyc b/Codes/get_network_performance.pyc new file mode 100644 index 0000000..cd2506b Binary files /dev/null and b/Codes/get_network_performance.pyc differ diff --git a/Codes/get_network_performance_folder.py b/Codes/get_network_performance_folder.py new file mode 100644 index 0000000..33877f5 --- /dev/null +++ b/Codes/get_network_performance_folder.py @@ -0,0 +1,107 @@ +import numpy as np +import getWsi +from xml_to_mask import xml_to_mask +from joblib import Parallel, delayed +import multiprocessing +from glob import glob +from matplotlib import pyplot as plt +from PIL import Image +#def get_network_performance(WSI_location,xml_annotation,xml_prediction): +block_size=2000 +anotDir='/home/bgbl/H-AI-L/IFTAKuang/Annotations/' +predDir='/home/bgbl/H-AI-L/IFTAKuang/TRAINING_data/1/Predicted_XMLs/' +dataDir='/home/bgbl/H-AI-L/IFTAKuang/wsi/' +txtDir='/home/bgbl/H-AI-L/IFTAKuang/' +savelist=[]; +f_name1=txtDir + 'performance.txt' +f1=open(f_name1,'w') +f1.close() + +def main(): + xmlAnnotation=glob(anotDir + '*.xml') + xmlPrediction=glob(predDir + '*.xml') + print(xmlAnnotation) + for idx,xml in enumerate(xmlAnnotation): + + 
annotationID=xml.split('/')[-1] + x1=anotDir + annotationID + x2=predDir + annotationID + w=dataDir + annotationID.split('.xml')[0] + '.svs' + f1=open(f_name1,'a') + f1.write(str(get_perf(w,x1,x2)) + '\n') + f1.close() + +# +#r=Parallel(n_jobs=num_cores)(delayed(inspect_mask)(yStart=i, xStart=j, xml_annotation=xml_annotation, f_name=f_name, f2_name=f2_name) for i in index_y for j in index_x) +def get_perf(wsi,xml1,xml2,args): + #specs=inspect_mask(index_y[0],index_x[0],block_size,xml_annotation,xml_prediction) + + if args.wsi_ext != '.tif': + WSIinfo=getWsi.getWsi(wsi) + dim_x, dim_y=WSIinfo.dimensions + else: + im = Image.open(wsi) + dim_x, dim_y=im.size + + totalPixels=np.float(dim_x*dim_y) + index_y=range(0,dim_y-block_size,block_size) + index_x=range(0,dim_x-block_size,block_size) + + num_cores = multiprocessing.cpu_count() + r=Parallel(n_jobs=num_cores)(delayed(inspect_mask)(yStart=i, xStart=j, block_size=block_size, annotation_xml=xml1,prediction_xml=xml2) for i in index_y for j in index_x) + + TN=np.zeros((1,5)); + TP=np.zeros((1,5)); + FP=np.zeros((1,5)); + FN=np.zeros((1,5)); + sensitivity=np.zeros((1,5)) + specificity=np.zeros((1,5)) + precision=np.zeros((1,5)) + + for classID in range(0,5): + for t in range(0,len(r)): + + currentspecs=r[t] + + TP[0,classID]=TP[0,classID]+currentspecs[classID,0] + FP[0,classID]=FP[0,classID]+currentspecs[classID,1] + FN[0,classID]=FN[0,classID]+currentspecs[classID,2] + TN[0,classID]=TN[0,classID]+currentspecs[classID,3] + + if (TP[0,classID]+FN[0,classID])==0: + sensitivity[0,classID]=0 + else: + sensitivity[0,classID]=np.float(TP[0,classID])/np.float(TP[0,classID]+FN[0,classID]) + + if (TN[0,classID]+FP[0,classID])==0: + specificity[0,classID]=0 + else: + specificity[0,classID]=np.float(TN[0,classID])/np.float(TN[0,classID]+FP[0,classID]) + #precision[0,classID]=np.float(TP[0,classID])/np.float(TP[0,classID]+FP[0,classID]) + return sensitivity,specificity + +def inspect_mask(yStart, xStart,block_size, 
annotation_xml,prediction_xml): # perform cutting in parallel + performance=np.zeros((5,4)) + yEnd = yStart+block_size + #print(yEnd) + xEnd = xStart+block_size + #print(xEnd) + xLen=xEnd-xStart + yLen=yEnd-yStart + mask_annotation=xml_to_mask(annotation_xml,[xStart,yStart],[xLen,yLen],1,0) + prediction_annotation=xml_to_mask(prediction_xml,[xStart,yStart],[xLen,yLen],1,0) + for classID in range(0,5): + annotation=mask_annotation==classID + prediction=prediction_annotation==classID + + TP=(np.sum(np.multiply(annotation,prediction))) + FP=(np.sum(np.multiply((1-annotation),(prediction)))) + FN=(np.sum(np.multiply((annotation),(1-prediction)))) + TN=(np.sum(np.multiply((1-annotation),(1-prediction)))) + performance[classID,:]=[TP,FP,FN,TN] + + return performance + + +if __name__ == '__main__': + main() diff --git a/Codes/main.py b/Codes/main.py new file mode 100644 index 0000000..4627682 --- /dev/null +++ b/Codes/main.py @@ -0,0 +1,118 @@ +import argparse +import os +import tensorflow as tf +from model import Model + + + +""" +This script defines hyperparameters. 
+""" + + + +def configure(test_data_list_, out_dir_, test_step_, test_num_steps_, modeldir_, data_dir_, num_steps_, save_interval_, learning_rate_, pretrain_file_, data_list_, batch_size_, input_height_, input_width_, num_classes_): + flags = tf.app.flags + + # training + flags.DEFINE_integer('num_steps', num_steps_, 'maximum number of iterations') + flags.DEFINE_integer('save_interval', save_interval_, 'number of iterations for saving and visualization') + flags.DEFINE_integer('random_seed', 1234, 'random seed') + flags.DEFINE_float('weight_decay', 0.0005, 'weight decay rate') + flags.DEFINE_float('learning_rate', learning_rate_, 'learning rate') + flags.DEFINE_float('power', 0.9, 'hyperparameter for poly learning rate') + flags.DEFINE_float('momentum', 0.9, 'momentum') + flags.DEFINE_string('encoder_name', 'deeplab', 'name of pre-trained model, res101, res50 or deeplab') + flags.DEFINE_string('pretrain_file', pretrain_file_, 'pre-trained model filename corresponding to encoder_name') + flags.DEFINE_string('data_list', data_list_, 'training data list filename') + + # validation + flags.DEFINE_integer('valid_step', 217000, 'checkpoint number for validation') + flags.DEFINE_integer('valid_num_steps', 81605, '= number of validation samples') + flags.DEFINE_string('valid_data_list', './dataAugment/val.txt', 'validation data list filename') + + # prediction / saving outputs for testing or validation + flags.DEFINE_string('out_dir', out_dir_, 'directory for saving outputs') + flags.DEFINE_integer('test_step', test_step_, 'checkpoint number for testing/validation') + flags.DEFINE_integer('test_num_steps', test_num_steps_, '= number of testing/validation samples') + flags.DEFINE_string('test_data_list', test_data_list_, 'testing/validation data list filename') + flags.DEFINE_boolean('visual', False, 'whether to save predictions for visualization') + + # data + flags.DEFINE_string('data_dir', data_dir_, 'data directory') + flags.DEFINE_integer('batch_size', batch_size_, 
'training batch size') + flags.DEFINE_integer('input_height', input_height_, 'input image height') + flags.DEFINE_integer('input_width', input_width_, 'input image width') + flags.DEFINE_integer('num_classes', num_classes_, 'number of classes') + flags.DEFINE_integer('ignore_label', 255, 'label pixel value that should be ignored') + flags.DEFINE_boolean('random_scale', False, 'whether to perform random scaling data-augmentation') + flags.DEFINE_boolean('random_mirror', False, 'whether to perform random left-right flipping data-augmentation') + + # log + flags.DEFINE_string('modeldir', modeldir_, 'model directory') + flags.DEFINE_string('logfile', 'log.txt', 'training log filename') + flags.DEFINE_string('logdir', 'log', 'training log directory') + + flags.FLAGS.__dict__['__parsed'] = False + return flags.FLAGS + +def main(_): + if args.option not in ['train', 'test', 'predict']: + print('invalid option: ', args.option) + print("Please input an option: train, test, or predict") + else: + # Set up tf session and initialize variables. 
+ # config = tf.ConfigProto() + # config.gpu_options.allow_growth = True + # sess = tf.Session(config=config) + sess = tf.Session() + # Run + model = Model(sess, configure(test_data_list_=args.test_data_list, out_dir_=args.out_dir, test_step_=args.test_step, test_num_steps_=args.test_num_steps, modeldir_=args.modeldir, data_dir_=args.data_dir, num_steps_=args.num_steps, save_interval_=args.save_interval, learning_rate_=args.learning_rate, pretrain_file_=args.pretrain_file, data_list_=args.data_list, batch_size_=args.batch_size, input_height_=args.input_height, input_width_=args.input_width, num_classes_=args.num_classes)) + getattr(model, args.option)() + + +if __name__ == '__main__': + parser = argparse.ArgumentParser() + + parser.add_argument('--option', dest='option', type=str, default='train', + help='actions: train, test, or predict') + parser.add_argument('--test_data_list', dest='test_data_list', type=str, default='./dataset/test.txt', + help='testing/validation data list filename') + parser.add_argument('--out_dir', dest='out_dir', type=str, default='output', + help='directory for saving testing outputs') + parser.add_argument('--test_step', dest='test_step', type=int, default=350000, + help='checkpoint number for testing/validation') + parser.add_argument('--test_num_steps', dest='test_num_steps', type=int, default=81605, + help='number of testing/validation samples') + parser.add_argument('--modeldir', dest='modeldir', type=str, default='modelAugment', + help='model directory') + parser.add_argument('--data_dir', dest='data_dir', type=str, default='/hdd/wsi_fun/ImageAugCustom/AugmentationOutput', + help='data directory') + parser.add_argument('--gpu', dest='gpu', type=str, default='0', + help='specify which GPU to use') + parser.add_argument('--num_steps', dest='num_steps', type=int, default=100000, + help='maximum number of iterations') + parser.add_argument('--save_interval', dest='save_interval', type=int, default=15000, + help='number of iterations 
for saving and visualization') + parser.add_argument('--learning_rate', dest='learning_rate', type=float, default=2.5e-4, + help='learning rate') + parser.add_argument('--pretrain_file', dest='pretrain_file', type=str, default='deeplab_resnet.ckpt', + help='pre-trained model filename corresponding to encoder_name') + parser.add_argument('--data_list', dest='data_list', type=str, default='./dataAugment/train.txt', + help='training data list filename') + parser.add_argument('--batch_size', dest='batch_size', type=int, default=15, + help='training batch size') + parser.add_argument('--input_height', dest='input_height', type=int, default=256, + help='input image height') + parser.add_argument('--input_width', dest='input_width', type=int, default=256, + help='input image width') + parser.add_argument('--num_classes', dest='num_classes', type=int, default=2, + help='number of classes in images') + + + + args = parser.parse_args() + + # Choose which gpu or cpu to use + os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu + tf.app.run() diff --git a/Codes/predict_xml.py b/Codes/predict_xml.py new file mode 100644 index 0000000..2470540 --- /dev/null +++ b/Codes/predict_xml.py @@ -0,0 +1,486 @@ +import cv2 +import numpy as np +import os +import sys +from subprocess import call +import argparse +from joblib import Parallel, delayed +import multiprocessing +from skimage.io import imread, imsave +from skimage.transform import resize +from scipy.ndimage.measurements import label +from skimage.segmentation import clear_border +from skimage.morphology import remove_small_objects +import lxml.etree as ET +import warnings +from shutil import rmtree + +""" +Pipeline code to find gloms from WSI + +Call this: +python get_gloms.py --wsi + +""" + +def main(args): + # define folder structure dict + dirs = {'outDir': args.outDir} + dirs['xml_save_dir'] = args.xml_save_dir + dirs['txt_save_dir'] = '/txt_files/' + dirs['img_save_dir'] = '/img_files/' + dirs['final_output_dir'] = '/boundaries/' + 
dirs['final_boundary_image_dir'] = '/images/' + dirs['mask_dir'] = '/wsi_mask/' + dirs['chopped_dir'] = '/originals/' + dirs['crop_dir'] = '/wsi_crops/' + dirs['save_outputs'] = args.save_outputs + + # reshape regions calc + downsample = int(args.downsampleRate**.5) + region_size = int(args.boxSize*(downsample)) + step = int(region_size*(1-args.overlap_percent)) + + # check main directory exists + make_folder(dirs['outDir']) + + if args.wsi == ' ': + print('\nPlease specify the whole slide image path\n\nUse flag:') + print('--wsi \n') + + else: + # chop wsi + fileID, test_num_steps = chop_suey(dirs, downsample, region_size, step, args) + dirs['fileID'] = fileID + print('Chop SUEY!\n') + + # call DeepLabv2_resnet for prediction + print('finding Glom locations ...\n') + + make_folder(dirs['outDir'] + fileID + dirs['img_save_dir'] + 'prediction') + + test_data_list = fileID + '_images' + '.txt' + + call(['python3', '/hdd/wsi_fun/Codes/Deeplab-v2--ResNet-101/main.py', '--option', 'predict', + '--test_data_list', dirs['outDir']+fileID+dirs['txt_save_dir']+test_data_list, + '--out_dir', dirs['outDir']+fileID+dirs['img_save_dir'], '--test_step', str(args.test_step), + '--test_num_steps', str(test_num_steps), '--modeldir', args.modeldir, + '--data_dir', dirs['outDir']+fileID+dirs['img_save_dir'], '--gpu', '1']) + + # un chop + print('\nreconstructing wsi map ...\n') + wsiMask = un_suey(dirs=dirs) + + # save hotspots + if dirs['save_outputs'] == True: + make_folder(dirs['outDir'] + fileID + dirs['mask_dir']) + print('saving to: ' + dirs['outDir'] + fileID + dirs['mask_dir'] + fileID + '.png') + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + fileID + dirs['mask_dir'] + fileID + '.png', wsiMask) + + # find glom locations in reconstructed map + print('\ninterpreting prediction map ...') + test_num_steps, labeledArray, label_offsets = find_suey(wsiMask, dirs, downsample, args) + print('\n\nthe cropped regions have been saved to: ' + 
dirs['outDir'] + fileID + dirs['img_save_dir'] + dirs['crop_dir']) + + # call network 2 to predict Glom boundaries + print('\ngetting Glom boundaries ...\n') + make_folder(dirs['outDir'] + fileID + dirs['final_output_dir'] + 'prediction') + + test_data_list = fileID + '_crops.txt' + + call(['python3', '/hdd/wsi_fun/Codes/Deeplab-v2--ResNet-101/main.py', '--option', 'predict', + '--test_data_list', dirs['outDir']+fileID+dirs['txt_save_dir']+test_data_list, + '--out_dir', dirs['outDir']+fileID+dirs['final_output_dir'], '--test_step', str(args.test_step_2), + '--test_num_steps', str(test_num_steps), '--modeldir', args.modeldir_2, + '--data_dir', dirs['outDir']+fileID+dirs['img_save_dir'], '--gpu', '1']) + + print('\nsaving final glom images ...') + print('\nworking on:') + + + crop_suey(label_offsets, dirs, args) + + # clean up + if dirs['save_outputs'] == False: + print('cleaning up') + rmtree(dirs['outDir']+fileID) + + print('\nall done.\n') + +def make_folder(directory): + if not os.path.exists(directory): + os.makedirs(directory) # make new directory if it does not exist already + +def restart_line(): # for printing chopped image labels in command line + sys.stdout.write('\r') + sys.stdout.flush() + +def getWsi(path): #imports a WSI + import openslide + slide = openslide.OpenSlide(path) + return slide + +def file_len(fname): # get txt file length (number of lines) + with open(fname) as f: + for i, l in enumerate(f): + pass + return i + 1 + +def chop_suey(dirs, downsample, region_size, step, args): # chop wsi + wsi = args.wsi + print('\nopening: ' + wsi) + basename = os.path.splitext(wsi)[0] + + global slide + slide=getWsi(wsi) + + fileID=basename.split('/') + dirs['fileID'] = fileID=fileID[len(fileID)-1] + print('\nchopping ...\n') + + # make txt file + make_folder(dirs['outDir'] + fileID + dirs['txt_save_dir']) + f_name = dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + ".txt" + f2_name = dirs['outDir'] + fileID + 
dirs['txt_save_dir'] + fileID + '_images' + ".txt" + f = open(f_name, 'w') + f2 = open(f2_name, 'w') + f2.close() + + make_folder(dirs['outDir'] + fileID + dirs['img_save_dir'] + dirs['chopped_dir']) + + # get image dimensions + dim_x, dim_y=slide.dimensions + f.write('Image dimensions:\n') + + # make index for iters + index_y=range(0,dim_y-step,step) + index_x=range(0,dim_x-step,step) + + f.write('X dim: ' + str((index_x[-1]+region_size)/downsample) +'\n') + f.write('Y dim: ' + str((index_y[-1]+region_size)/downsample) +'\n\n') + f.write('Regions:\n') + f.write('image:xStart:xStop:yStart:yStop\n\n') + f.close() + + print('saving region:') + num_cores = multiprocessing.cpu_count() + Parallel(n_jobs=num_cores)(delayed(chop_wsi)(yStart=i, xStart=j, f_name=f_name, f2_name=f2_name, dirs=dirs, downsample=downsample, region_size=region_size, args=args) for i in index_y for j in index_x) + + test_num_steps = file_len(dirs['outDir'] + fileID + dirs['txt_save_dir'] + fileID + '_images' + ".txt") + print('\n\n' + str(test_num_steps) +' image regions chopped') + + return fileID, test_num_steps + +def chop_wsi(yStart, xStart, f_name, f2_name, dirs, downsample, region_size, args): # perform cutting in parallel + yEnd = yStart+region_size + #print(yEnd) + xEnd = xStart+region_size + #print(xEnd) + xLen=xEnd-xStart + yLen=yEnd-yStart + + subsect= np.array(slide.read_region((xStart,yStart),0,(xLen,yLen))) + subsect=subsect[:,:,:3] + grayImage=cv2.cvtColor(subsect,cv2.COLOR_BGR2GRAY) + np.place(grayImage,grayImage==0, 255) + whiteRatio=(np.sum(grayImage)/(grayImage.size*255)) + + if whiteRatio < args.whiteMax: + #print(whiteRatio) + imageIter = str(xStart)+str(yStart) + + f = open(f_name, 'a+') + f2 = open(f2_name, 'a+') + + # append txt file + f.write(imageIter + ':' + str(xStart/downsample) + ':' + str(xEnd/downsample) + + ':' + str(yStart/downsample) + ':' + str(yEnd/downsample) + '\n') + + # resize images ans masks + c=(subsect.shape) + s1=int(c[0]/(args.downsampleRate**.5)) + 
s2=int(c[1]/(args.downsampleRate**.5)) + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + subsect=resize(subsect,(s1,s2), mode='constant') + + # save image + directory = dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['chopped_dir'] + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(directory + dirs['fileID'] + str(imageIter) + args.imBoxExt,subsect) + + f2.write(dirs['chopped_dir'] + dirs['fileID'] + str(imageIter) + args.imBoxExt + '\n') + f.close() + f2.close() + + sys.stdout.write(' <'+str(xStart)+':'+str(xEnd)+' '+str(yStart)+':'+str(yEnd)+'> ') + sys.stdout.flush() + restart_line() + +def un_suey(dirs): # reconstruct wsi from predicted masks + txtFile = dirs['fileID'] + '.txt' + + # read txt file + f = open(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + txtFile, 'r') + lines = f.readlines() + f.close() + lines = np.array(lines) + + # get wsi size + xDim = np.int32((lines[1].split(': ')[1]).split('\n')[0]) + yDim = np.int32((lines[2].split(': ')[1]).split('\n')[0]) + #print('xDim: ' + str(xDim)) + #print('yDim: ' + str(yDim)) + + # make wsi mask + wsiMask = np.zeros([yDim, xDim]) + + # read image regions + for regionNum in range(7, np.size(lines)): + # get region + region = lines[regionNum].split(':') + region[4] = region[4].split('\n')[0] + + # read mask + mask = imread(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + 'prediction/' + dirs['fileID'] + region[0] + '_mask.png') + + # get region bounds + xStart = np.int32(region[1]) + #print('xStart: ' + str(xStart)) + xStop = np.int32(region[2]) + #print('xStop: ' + str(xStop)) + yStart = np.int32(region[3]) + #print('yStart: ' + str(yStart)) + yStop = np.int32(region[4]) + #print('yStop: ' + str(yStop)) + + # populate wsiMask with max + #print(np.shape(wsiMask)) + wsiMask[yStart:yStop, xStart:xStop] = np.maximum(wsiMask[yStart:yStop, xStart:xStop], mask) + #wsiMask[yStart:yStop, xStart:xStop] = np.ones([yStop-yStart, xStop-xStart]) + + 
return wsiMask + +def find_suey(wsiMask, dirs, downsample, args): # locates the detected glom regions in the reconstructed wsi mask + # clean up mask + print(' removing small objects') + cleanMask = remove_small_objects(wsiMask.astype(bool), args.min_size) + print(' separating Glom objects\n') + # find all unconnected regions + labeledArray, num_features = label(cleanMask) + print('found: '+ str(num_features-1) + ' regions') + + # save cleaned mask + if dirs['save_outputs'] == True: + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['mask_dir'] + dirs['fileID'] + '_cleaned.png', cleanMask*255) + + make_folder(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['crop_dir']) + + f_name = dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + dirs['fileID'] + '_crops.txt' + f = open(f_name, 'w') + f.close() + + # run crop_region in parallel + print('\nsaving:') + #num_cores = multiprocessing.cpu_count() + #Parallel(n_jobs=num_cores)(delayed(crop_region)(region_iter=i, labeledArray=labeledArray, fileID=fileID, f_name=f_name) for i in range(1, num_features)) + label_offsets = [] + for region_iter in range(1, num_features): + label_offset = crop_region(region_iter=region_iter, labeledArray=labeledArray, f_name=f_name, dirs=dirs, downsample=downsample, args=args) + label_offsets.append(label_offset) + + test_num_steps = file_len(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + dirs['fileID'] + '_crops' + ".txt") + return test_num_steps, labeledArray, label_offsets + +def crop_region(region_iter, labeledArray, f_name, dirs, downsample, args): # crop selected region from wsi and save // location defined by labeledArray + # get list of locations for pixels == region_iter + mask_region = np.argwhere(labeledArray == region_iter) + # calculate the region bounds + yStart = min(mask_region[:,0]) * downsample + yLen = (max(mask_region[:,0]) * downsample) - yStart + xStart = min(mask_region[:,1]) * 
downsample + xLen = (max(mask_region[:,1]) * downsample) - xStart + + region = np.array(slide.read_region((xStart,yStart),0,(xLen,yLen))) + + # print output + sys.stdout.write(' <' + str(region_iter) + '> ') + sys.stdout.flush() + restart_line() + + # write image path to text file + f = open(f_name, 'a+') + f.write(dirs['crop_dir'] + dirs['fileID'] + str(region_iter) + args.imBoxExt + '\n') + f.close() + + # save image region + with warnings.catch_warnings(): + warnings.simplefilter("ignore") + imsave(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + dirs['crop_dir'] + dirs['fileID'] + str(region_iter) + args.imBoxExt, region) + label_offset = {'Y': yStart, 'X': xStart} + return label_offset + + +def crop_suey(label_offsets, dirs, args): + txtFile = dirs['fileID'] + '_crops.txt' + + # read txt file with img paths + f = open(dirs['outDir'] + dirs['fileID'] + dirs['txt_save_dir'] + txtFile, 'r') + lines = f.readlines() + f.close() + lines = np.array(lines) + + make_folder(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + dirs['final_boundary_image_dir'][1:]) + + # make xml + Annotations = xml_create() + # add annotation + Annotations = xml_add_annotation(Annotations=Annotations) + + for line in range(0, np.size(lines)): + image_path = lines[line].split('\n')[0] + + # get glom and corresponding mask + file_name = (image_path.split('.')[0]).split(dirs['crop_dir'])[1] + glom_image = imread(dirs['outDir'] + dirs['fileID'] + dirs['img_save_dir'] + image_path[1:]) + mask_image = imread(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + 'prediction/' + + file_name + '_mask.png') + + # print output + sys.stdout.write(' <' + file_name + '> ') + sys.stdout.flush() + restart_line() + + # add mask to xml + label_offset = label_offsets[line] + pointsList = get_contour_points(mask_image, offset=label_offset) + for i in range(np.shape(pointsList)[0]): + pointList = pointsList[i] + Annotations = xml_add_region(Annotations=Annotations, pointList=pointList) 
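The `get_contour_points` helper plus the `xml_*` builders above turn each predicted binary mask into Aperio ImageScope-style annotation XML. A minimal, self-contained sketch of that XML structure is below; it uses the stdlib `xml.etree.ElementTree` in place of the repo's `lxml.etree`, and `build_annotation_xml` is a hypothetical condensed helper fed a toy square contour (in the pipeline the point lists come from `cv2.findContours`, shifted by each crop's `label_offset`):

```python
# Sketch of the Annotations tree built by xml_create / xml_add_annotation /
# xml_add_region, using stdlib ElementTree instead of lxml (an assumption
# for portability; attribute sets are abbreviated).
import xml.etree.ElementTree as ET

def build_annotation_xml(point_lists):
    annotations = ET.Element('Annotations', attrib={'MicronsPerPixel': '0.252000'})
    annotation = ET.SubElement(annotations, 'Annotation',
                               attrib={'Id': '1', 'LineColor': '65280'})
    regions = ET.SubElement(annotation, 'Regions')
    for region_id, point_list in enumerate(point_lists, start=1):
        region = ET.SubElement(regions, 'Region', attrib={'Id': str(region_id)})
        vertices = ET.SubElement(region, 'Vertices')
        # repeat the first point at the end to close the contour,
        # mirroring the "connecting point" in xml_add_region
        for point in point_list + [point_list[0]]:
            ET.SubElement(vertices, 'Vertex',
                          attrib={'X': str(point['X']), 'Y': str(point['Y']), 'Z': '0'})
    return annotations

# toy square contour, already shifted by a label_offset of (X=100, Y=200)
square = [{'X': 100, 'Y': 200}, {'X': 150, 'Y': 200},
          {'X': 150, 'Y': 250}, {'X': 100, 'Y': 250}]
tree = build_annotation_xml([square])
print(ET.tostring(tree).decode())
```

Each region's vertex list ends by repeating its first vertex, so viewers that draw polylines render a closed boundary.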
+
+    # save mask images
+    if dirs['save_outputs']:
+        if np.sum(mask_image) != 0:
+            # remove background in images
+            for i in range(3):
+                glom_image[:,:,i] = glom_image[:,:,i] * (mask_image * (1-args.bg_intensity) + args.bg_intensity)
+
+            # save resulting image
+            with warnings.catch_warnings():
+                warnings.simplefilter("ignore")
+                imsave(dirs['outDir'] + dirs['fileID'] + dirs['final_output_dir'] + dirs['final_boundary_image_dir'][1:] + file_name + '_glom' + args.finalImgExt, glom_image)
+
+    # save xml
+    xml_save(Annotations=Annotations, filename=dirs['xml_save_dir']+'/'+dirs['fileID']+'.xml')
+
+def get_contour_points(mask, offset={'X': 0,'Y': 0}):
+    # returns a list of pointLists with point 'X' and 'Y' values
+    # input: greyscale binary image
+    # note: this two-value unpacking matches OpenCV 2.x/4.x; OpenCV 3.x returns (image, contours, hierarchy)
+    maskPoints, hierarchy = cv2.findContours(np.array(mask), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
+    pointsList = []
+    for j in range(np.shape(maskPoints)[0]):
+        pointList = []
+        for i in range(np.shape(maskPoints[j])[0]):
+            point = {'X': maskPoints[j][i][0][0] + offset['X'], 'Y': maskPoints[j][i][0][1] + offset['Y']}
+            pointList.append(point)
+        pointsList.append(pointList)
+    return pointsList
+
+### functions for building an xml tree of annotations ###
+def xml_create(): # create new xml tree
+    # create new xml Tree - Annotations
+    Annotations = ET.Element('Annotations', attrib={'MicronsPerPixel': '0.252000'})
+    return Annotations
+
+def xml_add_annotation(Annotations, annotationID=None): # add new annotation
+    # add new Annotation to Annotations
+    # defaults to a new annotationID
+    if annotationID is None: # not specified
+        annotationID = len(Annotations.findall('Annotation')) + 1
+    Annotation = ET.SubElement(Annotations, 'Annotation', attrib={'Type': '4', 'Visible': '1', 'ReadOnly': '0', 'Incremental': '0', 'LineColorReadOnly': '0', 'LineColor': '65280', 'Id': str(annotationID), 'NameReadOnly': '0'})
+    Regions = ET.SubElement(Annotation, 'Regions')
+    return Annotations
+
+def xml_add_region(Annotations, pointList,
annotationID=-1, regionID=None): # add new region to annotation
+    # add new Region to Annotation
+    # defaults to the last annotationID and a new regionID
+    Annotation = Annotations[annotationID]
+    Regions = Annotation.find('Regions')
+    if regionID is None: # not specified
+        regionID = len(Regions.findall('Region')) + 1
+    Region = ET.SubElement(Regions, 'Region', attrib={'NegativeROA': '0', 'ImageFocus': '-1', 'DisplayId': '1', 'InputRegionId': '0', 'Analyze': '0', 'Type': '0', 'Id': str(regionID)})
+    Vertices = ET.SubElement(Region, 'Vertices')
+    for point in pointList: # add new Vertex
+        ET.SubElement(Vertices, 'Vertex', attrib={'X': str(point['X']), 'Y': str(point['Y']), 'Z': '0'})
+    # add connecting point to close the polygon
+    ET.SubElement(Vertices, 'Vertex', attrib={'X': str(pointList[0]['X']), 'Y': str(pointList[0]['Y']), 'Z': '0'})
+    return Annotations
+
+def xml_save(Annotations, filename):
+    xml_data = ET.tostring(Annotations, pretty_print=True)
+    # lxml's ET.tostring returns bytes, so write in binary mode
+    f = open(filename, 'wb')
+    f.write(xml_data)
+    f.close()
+
+def read_xml(filename):
+    # import xml file and return the root element
+    tree = ET.parse(filename)
+    root = tree.getroot()
+    return root
+
+
+if __name__ == '__main__':
+
+    parser = argparse.ArgumentParser()
+
+    parser.add_argument('--wsi', dest='wsi', default=' ', type=str,
+        help='path to the whole slide image')
+
+    ### Params for saving results ###
+    parser.add_argument('--outDir', dest='outDir', default='/hdd/IterativeAnnotation/Gloms/', type=str,
+        help='output directory')
+    parser.add_argument('--xml_save_dir', dest='xml_save_dir', default='/hdd/IterativeAnnotation/Gloms/', type=str,
+        help='directory where the xml file is saved')
+    # note: argparse with type=bool treats any non-empty string as True; omit the flag to keep False
+    parser.add_argument('--save_outputs', dest='save_outputs', default=False, type=bool,
+        help='save outputs from chopping etc. 
[final image masks]')
+    parser.add_argument('--bg_intensity', dest='bg_intensity', default=0.5, type=float,
+        help='background intensity [0-1] used when saving output classifications [save_outputs = True]')
+
+    ### Params for cutting wsi ###
+    parser.add_argument('--overlap_percent', dest='overlap_percent', default=0.5, type=float,
+        help='overlap percentage of blocks [0-1]')
+    parser.add_argument('--boxSize', dest='boxSize', default=750, type=int,
+        help='size of blocks [in pixels]')
+    parser.add_argument('--downsampleRate', dest='downsampleRate', default=16, type=int,
+        help='downsample rate for the low-resolution network')
+    parser.add_argument('--imBoxExt', dest='imBoxExt', default='.jpeg', type=str,
+        help='extension of saved image blocks')
+    parser.add_argument('--finalImgExt', dest='finalImgExt', default='.jpeg', type=str,
+        help='extension of final saved images')
+    parser.add_argument('--whiteMax', dest='whiteMax', default=0.9, type=float,
+        help='whiteness threshold [0-1] used to exclude white blocks')
+
+    ### Params for network to test with
+    parser.add_argument('--modeldir', dest='modeldir', default='/home/wsi_fun/Codes/model1', type=str,
+        help='prepass model folder')
+    parser.add_argument('--test_step', dest='test_step', default=217000, type=int,
+        help='prepass model iteration')
+    parser.add_argument('--modeldir_2', dest='modeldir_2', default='/home/wsi_fun/Codes/model2', type=str,
+        help='second pass model folder')
+    parser.add_argument('--test_step_2', dest='test_step_2', default=234000, type=int,
+        help='second pass model iteration')
+
+    ### Params for optimizing wsi mask cleanup
+    parser.add_argument('--min_size', dest='min_size', default=625, type=int,
+        help='min size region to be considered after prepass [in pixels]')
+
+
+    args = parser.parse_args()
+
+    main(args=args)
diff --git a/Codes/randomHSVshift.py b/Codes/randomHSVshift.py
new file mode 100644
index 0000000..04c6b9b
--- /dev/null
+++ b/Codes/randomHSVshift.py
@@ -0,0 +1,17 @@
+import numpy as np
+from skimage.color import rgb2hsv,hsv2rgb,rgb2lab,lab2rgb
+import
matplotlib.pyplot as plt
+import cv2
+from skimage import exposure
+
+def randomHSVshift(x,hShift,lShift):
+    I=x
+    # shift the hue channel in HSV space
+    I=rgb2hsv(I)
+    I[:,:,0]=(I[:,:,0]+hShift)
+    I=hsv2rgb(I)
+    # gamma-adjust the lightness channel in LAB space
+    I=rgb2lab(I)
+    I[:,:,0]=exposure.adjust_gamma(I[:,:,0],lShift)
+    I=lab2rgb(I)
+    return I
diff --git a/Codes/randomHSVshift.pyc b/Codes/randomHSVshift.pyc
new file mode 100644
index 0000000..b0ab261
Binary files /dev/null and b/Codes/randomHSVshift.pyc differ
diff --git a/Codes/xmlCheck.py b/Codes/xmlCheck.py
new file mode 100644
index 0000000..ac65a15
--- /dev/null
+++ b/Codes/xmlCheck.py
@@ -0,0 +1,11 @@
+from xml_to_mask import xml_to_mask
+from getWsi import getWsi
+from matplotlib import pyplot as plt
+
+slide=getWsi('/hdd/bg/HAIL2/DeepZoomPrediction/TRAINING_data/0/52483.svs')
+[d1,d2]=slide.dimensions
+x='/hdd/bg/HAIL2/DeepZoomPrediction/TRAINING_data/0/52483.xml'
+wsiMask=xml_to_mask(x,(0,0),(d1,d2),16,0)
+
+plt.imshow(wsiMask*255)
+plt.show()
diff --git a/Codes/xml_to_mask.py b/Codes/xml_to_mask.py
new file mode 100644
index 0000000..f1a3f8c
--- /dev/null
+++ b/Codes/xml_to_mask.py
@@ -0,0 +1,150 @@
+import numpy as np
+import sys
+import lxml.etree as ET
+import cv2
+
+def get_num_classes(xml_path):
+    # parse xml and get root
+    tree = ET.parse(xml_path)
+    root = tree.getroot()
+
+    annotation_num = 0
+    for Annotation in root.findall("./Annotation"): # for all annotations
+        annotation_num = annotation_num + 1
+
+    # add one (presumably for the background class)
+    return annotation_num + 1
+
+
+def xml_to_mask(xml_path, location, size, downsample_factor=1, verbose=0):
+    """
+    location (tuple) - (x, y) tuple giving the top left pixel in the level 0 reference frame
+    size (tuple) - (width, height) tuple giving the region size
+    """
+    # parse xml and get root
+    tree = ET.parse(xml_path)
+    root = tree.getroot()
+
+    # calculate region bounds
+    bounds = {'x_min' : location[0], 'y_min' : location[1], 'x_max' : location[0] + size[0], 'y_max' : location[1] + size[1]}
+
+    IDs = regions_in_mask(root=root, bounds=bounds, verbose=verbose)
+
+    if
verbose != 0:
+        print('\nFOUND: ' + str(len(IDs)) + ' regions')
+
+    # find regions in bounds
+    Regions = get_vertex_points(root=root, IDs=IDs, verbose=verbose)
+
+    # fill regions and create mask
+    mask = Regions_to_mask(Regions=Regions, bounds=bounds, IDs=IDs, downsample_factor=downsample_factor, verbose=verbose)
+    if verbose != 0:
+        print('done...\n')
+
+    return mask
+
+def restart_line(): # for printing labels in the command line
+    sys.stdout.write('\r')
+    sys.stdout.flush()
+
+def regions_in_mask(root, bounds, verbose=1):
+    # find regions to save
+    IDs = []
+
+    for Annotation in root.findall("./Annotation"): # for all annotations
+        annotationID = Annotation.attrib['Id']
+
+        for Region in Annotation.findall("./*/Region"): # iterate over all Regions
+
+            if verbose != 0:
+                sys.stdout.write('TESTING: ' + 'Annotation: ' + annotationID + '\tRegion: ' + Region.attrib['Id'])
+                sys.stdout.flush()
+                restart_line()
+
+            for Vertex in Region.findall("./*/Vertex"): # iterate over all Vertices in the Region
+                # get points
+                x_point = np.int32(np.float64(Vertex.attrib['X']))
+                y_point = np.int32(np.float64(Vertex.attrib['Y']))
+                # save the region Id if any vertex falls inside the bounds
+                if bounds['x_min'] <= x_point <= bounds['x_max'] and bounds['y_min'] <= y_point <= bounds['y_max']:
+                    IDs.append({'regionID' : Region.attrib['Id'], 'annotationID' : annotationID})
+                    break
+    return IDs
+
+def get_vertex_points(root, IDs, verbose=1):
+    Regions = []
+
+    for ID in IDs: # for all IDs
+        if verbose != 0:
+            sys.stdout.write('PARSING: ' + 'Annotation: ' + ID['annotationID'] + '\tRegion: ' + ID['regionID'])
+            sys.stdout.flush()
+            restart_line()
+
+        # get all vertex attributes (points)
+        Vertices = []
+
+        for Vertex in root.findall("./Annotation[@Id='" + ID['annotationID'] + "']/Regions/Region[@Id='" + ID['regionID'] + "']/Vertices/Vertex"):
+            # make array of points
+            Vertices.append([int(float(Vertex.attrib['X'])), int(float(Vertex.attrib['Y']))])
+
+
+        
Regions.append(np.array(Vertices))
+
+    return Regions
+
+def Regions_to_mask(Regions, bounds, IDs, downsample_factor, verbose=1):
+    downsample = int(np.round(downsample_factor**(.5)))
+
+    if verbose != 0:
+        print('\nMAKING MASK:')
+
+    if len(Regions) != 0: # regions present
+        # get min/max vertex coordinates; columns of each Region are [X, Y]
+        min_sizes = np.empty(shape=[2,0], dtype=np.int32)
+        max_sizes = np.empty(shape=[2,0], dtype=np.int32)
+        for Region in Regions: # over all regions
+            min_bounds = np.reshape((np.amin(Region, axis=0)), (2,1))
+            max_bounds = np.reshape((np.amax(Region, axis=0)), (2,1))
+            min_sizes = np.append(min_sizes, min_bounds, axis=1)
+            max_sizes = np.append(max_sizes, max_bounds, axis=1)
+        min_size = np.amin(min_sizes, axis=1)
+        max_size = np.amax(max_sizes, axis=1)
+
+        # pad the requested bounds so every region is fully covered
+        # (index 0 is X and index 1 is Y, matching the vertex arrays)
+        bounds['x_min_pad'] = min(min_size[0], bounds['x_min'])
+        bounds['y_min_pad'] = min(min_size[1], bounds['y_min'])
+        bounds['x_max_pad'] = max(max_size[0], bounds['x_max'])
+        bounds['y_max_pad'] = max(max_size[1], bounds['y_max'])
+
+        # make blank mask
+        mask = np.zeros([ int(np.round((bounds['y_max_pad'] - bounds['y_min_pad']) / downsample)), int(np.round((bounds['x_max_pad'] - bounds['x_min_pad']) / downsample)) ], dtype=np.int8)
+
+        # fill mask polygons
+        index = 0
+        for Region in Regions:
+            # shift and downsample the vertex coordinates
+            Region[:,1] = np.int32(np.round((Region[:,1] - bounds['y_min_pad']) / downsample))
+            Region[:,0] = np.int32(np.round((Region[:,0] - bounds['x_min_pad']) / downsample))
+            # use the annotation ID as the mask color
+            ID = IDs[index]
+            cv2.fillPoly(mask, [Region], int(ID['annotationID']))
+            index = index + 1
+
+        # crop the mask back to the requested bounds
+        x_start = np.int32(np.round((bounds['x_min'] - bounds['x_min_pad']) / downsample))
+        y_start = np.int32(np.round((bounds['y_min'] - bounds['y_min_pad']) / downsample))
+        x_stop = np.int32(np.round((bounds['x_max'] - bounds['x_min_pad']) / downsample))
+        y_stop = np.int32(np.round((bounds['y_max'] - bounds['y_min_pad']) / downsample))
+        # pull center mask region
+        mask =
mask[ y_start:y_stop, x_start:x_stop ]
+
+    else: # no Regions
+        mask = np.zeros([ int(np.round((bounds['y_max'] - bounds['y_min']) / downsample)), int(np.round((bounds['x_max'] - bounds['x_min']) / downsample)) ], dtype=np.int8)
+
+    return mask
diff --git a/Codes/xml_to_mask.pyc b/Codes/xml_to_mask.pyc
new file mode 100644
index 0000000..19edc39
Binary files /dev/null and b/Codes/xml_to_mask.pyc differ
diff --git a/H-AI-L_pipeline_overview.pdf b/H-AI-L_pipeline_overview.pdf
new file mode 100644
index 0000000..e83194d
Binary files /dev/null and b/H-AI-L_pipeline_overview.pdf differ
diff --git a/IFTA.def b/IFTA.def
new file mode 100644
index 0000000..15d59bf
--- /dev/null
+++ b/IFTA.def
@@ -0,0 +1,50 @@
+Bootstrap: docker
+From: tensorflow/tensorflow:1.7.0-rc1-devel-gpu-py3
+%help
+    Singularity image (version 3) for Jupyter Notebook with deep learning and image processing modules including OpenSlide.
+
+%post
+    export DEBIAN_FRONTEND=noninteractive
+    apt-get update && apt-get install -y --no-install-recommends \
+        build-essential \
+        curl \
+        libfreetype6-dev \
+        libpng-dev \
+        libzmq3-dev \
+        pkg-config \
+        libffi-dev \
+        rsync \
+        software-properties-common \
+        unzip \
+        ffmpeg \
+        libsm6 \
+        libxext6 \
+        openslide-tools \
+        libopenslide-dev \
+        llvm \
+        python3-tk
+
+    pip install --upgrade pip
+    pip --no-cache-dir install \
+        numba \
+        llvmlite==0.31.0 \
+        scikit-build \
+        opencv-python \
+        imgaug \
+        wheel \
+        scikit-learn \
+        scipy \
+        tensorboard \
+        torch_geometric \
+        einops \
+        numpy==1.16.4 \
+        matplotlib \
+        joblib \
+        openslide-python \
+        tk-tools \
+        Pillow \
+        lxml \
+        imageio
+
+    apt-get clean && \
+    rm -rf /var/lib/apt/lists/*
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index f288702..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,674 +0,0 @@
- GNU GENERAL PUBLIC LICENSE
- Version 3, 29 June 2007
-
- Copyright (C) 2007 Free Software Foundation, Inc.
- Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - Preamble - - The GNU General Public License is a free, copyleft license for -software and other kinds of works. - - The licenses for most software and other practical works are designed -to take away your freedom to share and change the works. By contrast, -the GNU General Public License is intended to guarantee your freedom to -share and change all versions of a program--to make sure it remains free -software for all its users. We, the Free Software Foundation, use the -GNU General Public License for most of our software; it applies also to -any other work released this way by its authors. You can apply it to -your programs, too. - - When we speak of free software, we are referring to freedom, not -price. Our General Public Licenses are designed to make sure that you -have the freedom to distribute copies of free software (and charge for -them if you wish), that you receive source code or can get it if you -want it, that you can change the software or use pieces of it in new -free programs, and that you know you can do these things. - - To protect your rights, we need to prevent others from denying you -these rights or asking you to surrender the rights. Therefore, you have -certain responsibilities if you distribute copies of the software, or if -you modify it: responsibilities to respect the freedom of others. - - For example, if you distribute copies of such a program, whether -gratis or for a fee, you must pass on to the recipients the same -freedoms that you received. You must make sure that they, too, receive -or can get the source code. And you must show them these terms so they -know their rights. - - Developers that use the GNU GPL protect your rights with two steps: -(1) assert copyright on the software, and (2) offer you this License -giving you legal permission to copy, distribute and/or modify it. 
- - For the developers' and authors' protection, the GPL clearly explains -that there is no warranty for this free software. For both users' and -authors' sake, the GPL requires that modified versions be marked as -changed, so that their problems will not be attributed erroneously to -authors of previous versions. - - Some devices are designed to deny users access to install or run -modified versions of the software inside them, although the manufacturer -can do so. This is fundamentally incompatible with the aim of -protecting users' freedom to change the software. The systematic -pattern of such abuse occurs in the area of products for individuals to -use, which is precisely where it is most unacceptable. Therefore, we -have designed this version of the GPL to prohibit the practice for those -products. If such problems arise substantially in other domains, we -stand ready to extend this provision to those domains in future versions -of the GPL, as needed to protect the freedom of users. - - Finally, every program is threatened constantly by software patents. -States should not allow patents to restrict development and use of -software on general-purpose computers, but in those that do, we wish to -avoid the special danger that patents applied to a free program could -make it effectively proprietary. To prevent this, the GPL assures that -patents cannot be used to render the program non-free. - - The precise terms and conditions for copying, distribution and -modification follow. - - TERMS AND CONDITIONS - - 0. Definitions. - - "This License" refers to version 3 of the GNU General Public License. - - "Copyright" also means copyright-like laws that apply to other kinds of -works, such as semiconductor masks. - - "The Program" refers to any copyrightable work licensed under this -License. Each licensee is addressed as "you". "Licensees" and -"recipients" may be individuals or organizations. 
- - To "modify" a work means to copy from or adapt all or part of the work -in a fashion requiring copyright permission, other than the making of an -exact copy. The resulting work is called a "modified version" of the -earlier work or a work "based on" the earlier work. - - A "covered work" means either the unmodified Program or a work based -on the Program. - - To "propagate" a work means to do anything with it that, without -permission, would make you directly or secondarily liable for -infringement under applicable copyright law, except executing it on a -computer or modifying a private copy. Propagation includes copying, -distribution (with or without modification), making available to the -public, and in some countries other activities as well. - - To "convey" a work means any kind of propagation that enables other -parties to make or receive copies. Mere interaction with a user through -a computer network, with no transfer of a copy, is not conveying. - - An interactive user interface displays "Appropriate Legal Notices" -to the extent that it includes a convenient and prominently visible -feature that (1) displays an appropriate copyright notice, and (2) -tells the user that there is no warranty for the work (except to the -extent that warranties are provided), that licensees may convey the -work under this License, and how to view a copy of this License. If -the interface presents a list of user commands or options, such as a -menu, a prominent item in the list meets this criterion. - - 1. Source Code. - - The "source code" for a work means the preferred form of the work -for making modifications to it. "Object code" means any non-source -form of a work. - - A "Standard Interface" means an interface that either is an official -standard defined by a recognized standards body, or, in the case of -interfaces specified for a particular programming language, one that -is widely used among developers working in that language. 
- - The "System Libraries" of an executable work include anything, other -than the work as a whole, that (a) is included in the normal form of -packaging a Major Component, but which is not part of that Major -Component, and (b) serves only to enable use of the work with that -Major Component, or to implement a Standard Interface for which an -implementation is available to the public in source code form. A -"Major Component", in this context, means a major essential component -(kernel, window system, and so on) of the specific operating system -(if any) on which the executable work runs, or a compiler used to -produce the work, or an object code interpreter used to run it. - - The "Corresponding Source" for a work in object code form means all -the source code needed to generate, install, and (for an executable -work) run the object code and to modify the work, including scripts to -control those activities. However, it does not include the work's -System Libraries, or general-purpose tools or generally available free -programs which are used unmodified in performing those activities but -which are not part of the work. For example, Corresponding Source -includes interface definition files associated with source files for -the work, and the source code for shared libraries and dynamically -linked subprograms that the work is specifically designed to require, -such as by intimate data communication or control flow between those -subprograms and other parts of the work. - - The Corresponding Source need not include anything that users -can regenerate automatically from other parts of the Corresponding -Source. - - The Corresponding Source for a work in source code form is that -same work. - - 2. Basic Permissions. - - All rights granted under this License are granted for the term of -copyright on the Program, and are irrevocable provided the stated -conditions are met. This License explicitly affirms your unlimited -permission to run the unmodified Program. 
The output from running a -covered work is covered by this License only if the output, given its -content, constitutes a covered work. This License acknowledges your -rights of fair use or other equivalent, as provided by copyright law. - - You may make, run and propagate covered works that you do not -convey, without conditions so long as your license otherwise remains -in force. You may convey covered works to others for the sole purpose -of having them make modifications exclusively for you, or provide you -with facilities for running those works, provided that you comply with -the terms of this License in conveying all material for which you do -not control copyright. Those thus making or running the covered works -for you must do so exclusively on your behalf, under your direction -and control, on terms that prohibit them from making any copies of -your copyrighted material outside their relationship with you. - - Conveying under any other circumstances is permitted solely under -the conditions stated below. Sublicensing is not allowed; section 10 -makes it unnecessary. - - 3. Protecting Users' Legal Rights From Anti-Circumvention Law. - - No covered work shall be deemed part of an effective technological -measure under any applicable law fulfilling obligations under article -11 of the WIPO copyright treaty adopted on 20 December 1996, or -similar laws prohibiting or restricting circumvention of such -measures. - - When you convey a covered work, you waive any legal power to forbid -circumvention of technological measures to the extent such circumvention -is effected by exercising rights under this License with respect to -the covered work, and you disclaim any intention to limit operation or -modification of the work as a means of enforcing, against the work's -users, your or third parties' legal rights to forbid circumvention of -technological measures. - - 4. Conveying Verbatim Copies. 
- - You may convey verbatim copies of the Program's source code as you -receive it, in any medium, provided that you conspicuously and -appropriately publish on each copy an appropriate copyright notice; -keep intact all notices stating that this License and any -non-permissive terms added in accord with section 7 apply to the code; -keep intact all notices of the absence of any warranty; and give all -recipients a copy of this License along with the Program. - - You may charge any price or no price for each copy that you convey, -and you may offer support or warranty protection for a fee. - - 5. Conveying Modified Source Versions. - - You may convey a work based on the Program, or the modifications to -produce it from the Program, in the form of source code under the -terms of section 4, provided that you also meet all of these conditions: - - a) The work must carry prominent notices stating that you modified - it, and giving a relevant date. - - b) The work must carry prominent notices stating that it is - released under this License and any conditions added under section - 7. This requirement modifies the requirement in section 4 to - "keep intact all notices". - - c) You must license the entire work, as a whole, under this - License to anyone who comes into possession of a copy. This - License will therefore apply, along with any applicable section 7 - additional terms, to the whole of the work, and all its parts, - regardless of how they are packaged. This License gives no - permission to license the work in any other way, but it does not - invalidate such permission if you have separately received it. - - d) If the work has interactive user interfaces, each must display - Appropriate Legal Notices; however, if the Program has interactive - interfaces that do not display Appropriate Legal Notices, your - work need not make them do so. 
- - A compilation of a covered work with other separate and independent -works, which are not by their nature extensions of the covered work, -and which are not combined with it such as to form a larger program, -in or on a volume of a storage or distribution medium, is called an -"aggregate" if the compilation and its resulting copyright are not -used to limit the access or legal rights of the compilation's users -beyond what the individual works permit. Inclusion of a covered work -in an aggregate does not cause this License to apply to the other -parts of the aggregate. - - 6. Conveying Non-Source Forms. - - You may convey a covered work in object code form under the terms -of sections 4 and 5, provided that you also convey the -machine-readable Corresponding Source under the terms of this License, -in one of these ways: - - a) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by the - Corresponding Source fixed on a durable physical medium - customarily used for software interchange. - - b) Convey the object code in, or embodied in, a physical product - (including a physical distribution medium), accompanied by a - written offer, valid for at least three years and valid for as - long as you offer spare parts or customer support for that product - model, to give anyone who possesses the object code either (1) a - copy of the Corresponding Source for all the software in the - product that is covered by this License, on a durable physical - medium customarily used for software interchange, for a price no - more than your reasonable cost of physically performing this - conveying of source, or (2) access to copy the - Corresponding Source from a network server at no charge. - - c) Convey individual copies of the object code with a copy of the - written offer to provide the Corresponding Source. 
This - alternative is allowed only occasionally and noncommercially, and - only if you received the object code with such an offer, in accord - with subsection 6b. - - d) Convey the object code by offering access from a designated - place (gratis or for a charge), and offer equivalent access to the - Corresponding Source in the same way through the same place at no - further charge. You need not require recipients to copy the - Corresponding Source along with the object code. If the place to - copy the object code is a network server, the Corresponding Source - may be on a different server (operated by you or a third party) - that supports equivalent copying facilities, provided you maintain - clear directions next to the object code saying where to find the - Corresponding Source. Regardless of what server hosts the - Corresponding Source, you remain obligated to ensure that it is - available for as long as needed to satisfy these requirements. - - e) Convey the object code using peer-to-peer transmission, provided - you inform other peers where the object code and Corresponding - Source of the work are being offered to the general public at no - charge under subsection 6d. - - A separable portion of the object code, whose source code is excluded -from the Corresponding Source as a System Library, need not be -included in conveying the object code work. - - A "User Product" is either (1) a "consumer product", which means any -tangible personal property which is normally used for personal, family, -or household purposes, or (2) anything designed or sold for incorporation -into a dwelling. In determining whether a product is a consumer product, -doubtful cases shall be resolved in favor of coverage. 
For a particular -product received by a particular user, "normally used" refers to a -typical or common use of that class of product, regardless of the status -of the particular user or of the way in which the particular user -actually uses, or expects or is expected to use, the product. A product -is a consumer product regardless of whether the product has substantial -commercial, industrial or non-consumer uses, unless such uses represent -the only significant mode of use of the product. - - "Installation Information" for a User Product means any methods, -procedures, authorization keys, or other information required to install -and execute modified versions of a covered work in that User Product from -a modified version of its Corresponding Source. The information must -suffice to ensure that the continued functioning of the modified object -code is in no case prevented or interfered with solely because -modification has been made. - - If you convey an object code work under this section in, or with, or -specifically for use in, a User Product, and the conveying occurs as -part of a transaction in which the right of possession and use of the -User Product is transferred to the recipient in perpetuity or for a -fixed term (regardless of how the transaction is characterized), the -Corresponding Source conveyed under this section must be accompanied -by the Installation Information. But this requirement does not apply -if neither you nor any third party retains the ability to install -modified object code on the User Product (for example, the work has -been installed in ROM). - - The requirement to provide Installation Information does not include a -requirement to continue to provide support service, warranty, or updates -for a work that has been modified or installed by the recipient, or for -the User Product in which it has been modified or installed. 
Access to a -network may be denied when the modification itself materially and -adversely affects the operation of the network or violates the rules and -protocols for communication across the network. - - Corresponding Source conveyed, and Installation Information provided, -in accord with this section must be in a format that is publicly -documented (and with an implementation available to the public in -source code form), and must require no special password or key for -unpacking, reading or copying. - - 7. Additional Terms. - - "Additional permissions" are terms that supplement the terms of this -License by making exceptions from one or more of its conditions. -Additional permissions that are applicable to the entire Program shall -be treated as though they were included in this License, to the extent -that they are valid under applicable law. If additional permissions -apply only to part of the Program, that part may be used separately -under those permissions, but the entire Program remains governed by -this License without regard to the additional permissions. - - When you convey a copy of a covered work, you may at your option -remove any additional permissions from that copy, or from any part of -it. (Additional permissions may be written to require their own -removal in certain cases when you modify the work.) You may place -additional permissions on material, added by you to a covered work, -for which you have or can give appropriate copyright permission. 
- - Notwithstanding any other provision of this License, for material you -add to a covered work, you may (if authorized by the copyright holders of -that material) supplement the terms of this License with terms: - - a) Disclaiming warranty or limiting liability differently from the - terms of sections 15 and 16 of this License; or - - b) Requiring preservation of specified reasonable legal notices or - author attributions in that material or in the Appropriate Legal - Notices displayed by works containing it; or - - c) Prohibiting misrepresentation of the origin of that material, or - requiring that modified versions of such material be marked in - reasonable ways as different from the original version; or - - d) Limiting the use for publicity purposes of names of licensors or - authors of the material; or - - e) Declining to grant rights under trademark law for use of some - trade names, trademarks, or service marks; or - - f) Requiring indemnification of licensors and authors of that - material by anyone who conveys the material (or modified versions of - it) with contractual assumptions of liability to the recipient, for - any liability that these contractual assumptions directly impose on - those licensors and authors. - - All other non-permissive additional terms are considered "further -restrictions" within the meaning of section 10. If the Program as you -received it, or any part of it, contains a notice stating that it is -governed by this License along with a term that is a further -restriction, you may remove that term. If a license document contains -a further restriction but permits relicensing or conveying under this -License, you may add to a covered work material governed by the terms -of that license document, provided that the further restriction does -not survive such relicensing or conveying. 
- - If you add terms to a covered work in accord with this section, you -must place, in the relevant source files, a statement of the -additional terms that apply to those files, or a notice indicating -where to find the applicable terms. - - Additional terms, permissive or non-permissive, may be stated in the -form of a separately written license, or stated as exceptions; -the above requirements apply either way. - - 8. Termination. - - You may not propagate or modify a covered work except as expressly -provided under this License. Any attempt otherwise to propagate or -modify it is void, and will automatically terminate your rights under -this License (including any patent licenses granted under the third -paragraph of section 11). - - However, if you cease all violation of this License, then your -license from a particular copyright holder is reinstated (a) -provisionally, unless and until the copyright holder explicitly and -finally terminates your license, and (b) permanently, if the copyright -holder fails to notify you of the violation by some reasonable means -prior to 60 days after the cessation. - - Moreover, your license from a particular copyright holder is -reinstated permanently if the copyright holder notifies you of the -violation by some reasonable means, this is the first time you have -received notice of violation of this License (for any work) from that -copyright holder, and you cure the violation prior to 30 days after -your receipt of the notice. - - Termination of your rights under this section does not terminate the -licenses of parties who have received copies or rights from you under -this License. If your rights have been terminated and not permanently -reinstated, you do not qualify to receive new licenses for the same -material under section 10. - - 9. Acceptance Not Required for Having Copies. - - You are not required to accept this License in order to receive or -run a copy of the Program. 
Ancillary propagation of a covered work -occurring solely as a consequence of using peer-to-peer transmission -to receive a copy likewise does not require acceptance. However, -nothing other than this License grants you permission to propagate or -modify any covered work. These actions infringe copyright if you do -not accept this License. Therefore, by modifying or propagating a -covered work, you indicate your acceptance of this License to do so. - - 10. Automatic Licensing of Downstream Recipients. - - Each time you convey a covered work, the recipient automatically -receives a license from the original licensors, to run, modify and -propagate that work, subject to this License. You are not responsible -for enforcing compliance by third parties with this License. - - An "entity transaction" is a transaction transferring control of an -organization, or substantially all assets of one, or subdividing an -organization, or merging organizations. If propagation of a covered -work results from an entity transaction, each party to that -transaction who receives a copy of the work also receives whatever -licenses to the work the party's predecessor in interest had or could -give under the previous paragraph, plus a right to possession of the -Corresponding Source of the work from the predecessor in interest, if -the predecessor has it or can get it with reasonable efforts. - - You may not impose any further restrictions on the exercise of the -rights granted or affirmed under this License. For example, you may -not impose a license fee, royalty, or other charge for exercise of -rights granted under this License, and you may not initiate litigation -(including a cross-claim or counterclaim in a lawsuit) alleging that -any patent claim is infringed by making, using, selling, offering for -sale, or importing the Program or any portion of it. - - 11. Patents. 
- - A "contributor" is a copyright holder who authorizes use under this -License of the Program or a work on which the Program is based. The -work thus licensed is called the contributor's "contributor version". - - A contributor's "essential patent claims" are all patent claims -owned or controlled by the contributor, whether already acquired or -hereafter acquired, that would be infringed by some manner, permitted -by this License, of making, using, or selling its contributor version, -but do not include claims that would be infringed only as a -consequence of further modification of the contributor version. For -purposes of this definition, "control" includes the right to grant -patent sublicenses in a manner consistent with the requirements of -this License. - - Each contributor grants you a non-exclusive, worldwide, royalty-free -patent license under the contributor's essential patent claims, to -make, use, sell, offer for sale, import and otherwise run, modify and -propagate the contents of its contributor version. - - In the following three paragraphs, a "patent license" is any express -agreement or commitment, however denominated, not to enforce a patent -(such as an express permission to practice a patent or covenant not to -sue for patent infringement). To "grant" such a patent license to a -party means to make such an agreement or commitment not to enforce a -patent against the party. 
- - If you convey a covered work, knowingly relying on a patent license, -and the Corresponding Source of the work is not available for anyone -to copy, free of charge and under the terms of this License, through a -publicly available network server or other readily accessible means, -then you must either (1) cause the Corresponding Source to be so -available, or (2) arrange to deprive yourself of the benefit of the -patent license for this particular work, or (3) arrange, in a manner -consistent with the requirements of this License, to extend the patent -license to downstream recipients. "Knowingly relying" means you have -actual knowledge that, but for the patent license, your conveying the -covered work in a country, or your recipient's use of the covered work -in a country, would infringe one or more identifiable patents in that -country that you have reason to believe are valid. - - If, pursuant to or in connection with a single transaction or -arrangement, you convey, or propagate by procuring conveyance of, a -covered work, and grant a patent license to some of the parties -receiving the covered work authorizing them to use, propagate, modify -or convey a specific copy of the covered work, then the patent license -you grant is automatically extended to all recipients of the covered -work and works based on it. - - A patent license is "discriminatory" if it does not include within -the scope of its coverage, prohibits the exercise of, or is -conditioned on the non-exercise of one or more of the rights that are -specifically granted under this License. 
You may not convey a covered -work if you are a party to an arrangement with a third party that is -in the business of distributing software, under which you make payment -to the third party based on the extent of your activity of conveying -the work, and under which the third party grants, to any of the -parties who would receive the covered work from you, a discriminatory -patent license (a) in connection with copies of the covered work -conveyed by you (or copies made from those copies), or (b) primarily -for and in connection with specific products or compilations that -contain the covered work, unless you entered into that arrangement, -or that patent license was granted, prior to 28 March 2007. - - Nothing in this License shall be construed as excluding or limiting -any implied license or other defenses to infringement that may -otherwise be available to you under applicable patent law. - - 12. No Surrender of Others' Freedom. - - If conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot convey a -covered work so as to satisfy simultaneously your obligations under this -License and any other pertinent obligations, then as a consequence you may -not convey it at all. For example, if you agree to terms that obligate you -to collect a royalty for further conveying from those to whom you convey -the Program, the only way you could satisfy both those terms and this -License would be to refrain entirely from conveying the Program. - - 13. Use with the GNU Affero General Public License. - - Notwithstanding any other provision of this License, you have -permission to link or combine any covered work with a work licensed -under version 3 of the GNU Affero General Public License into a single -combined work, and to convey the resulting work. 
The terms of this -License will continue to apply to the part which is the covered work, -but the special requirements of the GNU Affero General Public License, -section 13, concerning interaction through a network will apply to the -combination as such. - - 14. Revised Versions of this License. - - The Free Software Foundation may publish revised and/or new versions of -the GNU General Public License from time to time. Such new versions will -be similar in spirit to the present version, but may differ in detail to -address new problems or concerns. - - Each version is given a distinguishing version number. If the -Program specifies that a certain numbered version of the GNU General -Public License "or any later version" applies to it, you have the -option of following the terms and conditions either of that numbered -version or of any later version published by the Free Software -Foundation. If the Program does not specify a version number of the -GNU General Public License, you may choose any version ever published -by the Free Software Foundation. - - If the Program specifies that a proxy can decide which future -versions of the GNU General Public License can be used, that proxy's -public statement of acceptance of a version permanently authorizes you -to choose that version for the Program. - - Later license versions may give you additional or different -permissions. However, no additional obligations are imposed on any -author or copyright holder as a result of your choosing to follow a -later version. - - 15. Disclaimer of Warranty. - - THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY -APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT -HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY -OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, -THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR -PURPOSE. 
THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM -IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF -ALL NECESSARY SERVICING, REPAIR OR CORRECTION. - - 16. Limitation of Liability. - - IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING -WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS -THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY -GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE -USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF -DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD -PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), -EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF -SUCH DAMAGES. - - 17. Interpretation of Sections 15 and 16. - - If the disclaimer of warranty and limitation of liability provided -above cannot be given local legal effect according to their terms, -reviewing courts shall apply local law that most closely approximates -an absolute waiver of all civil liability in connection with the -Program, unless a warranty or assumption of liability accompanies a -copy of the Program in return for a fee. - - END OF TERMS AND CONDITIONS - - How to Apply These Terms to Your New Programs - - If you develop a new program, and you want it to be of the greatest -possible use to the public, the best way to achieve this is to make it -free software which everyone can redistribute and change under these terms. - - To do so, attach the following notices to the program. It is safest -to attach them to the start of each source file to most effectively -state the exclusion of warranty; and each file should have at least -the "copyright" line and a pointer to where the full notice is found. 
-
-    <one line to give the program's name and a brief idea of what it does.>
-    Copyright (C) <year>  <name of author>
-
-    This program is free software: you can redistribute it and/or modify
-    it under the terms of the GNU General Public License as published by
-    the Free Software Foundation, either version 3 of the License, or
-    (at your option) any later version.
-
-    This program is distributed in the hope that it will be useful,
-    but WITHOUT ANY WARRANTY; without even the implied warranty of
-    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-    GNU General Public License for more details.
-
-    You should have received a copy of the GNU General Public License
-    along with this program.  If not, see <https://www.gnu.org/licenses/>.
-
-Also add information on how to contact you by electronic and paper mail.
-
-  If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
-
-    <program>  Copyright (C) <year>  <name of author>
-    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
-    This is free software, and you are welcome to redistribute it
-    under certain conditions; type `show c' for details.
-
-The hypothetical commands `show w' and `show c' should show the appropriate
-parts of the General Public License.  Of course, your program's commands
-might be different; for a GUI interface, you would use an "about box".
-
-  You should also get your employer (if you work as a programmer) or school,
-if any, to sign a "copyright disclaimer" for the program, if necessary.
-For more information on this, and how to apply and follow the GNU GPL, see
-<https://www.gnu.org/licenses/>.
-
-  The GNU General Public License does not permit incorporating your program
-into proprietary programs.  If your program is a subroutine library, you
-may consider it more useful to permit linking proprietary applications with
-the library.  If this is what you want to do, use the GNU Lesser General
-Public License instead of this License.  But first, please read
-<https://www.gnu.org/licenses/why-not-lgpl.html>.
diff --git a/Pipeline.txt b/Pipeline.txt new file mode 100644 index 0000000..59521e0 --- /dev/null +++ b/Pipeline.txt @@ -0,0 +1 @@ +singularity exec IFTA.sif python3 segmentation_school.py --option predict --project TxR01 --encoder_name deeplab --one_network true --classNum 1 \ No newline at end of file diff --git a/README.md b/README.md index 5a5c90f..e70ab7c 100644 --- a/README.md +++ b/README.md @@ -1,34 +1,27 @@ -# IFTA and glomerulosclerosis segmentation +### H-AI-L (Human-A.I.-Loop) for semantic segmentation of WSI (whole slide images) -This readme contains information on accessing the shared materials for the JASN manuscript entitled "Automated Computational Detection of Interstitial Fibrosis, Tubular Atrophy, and Glomerulosclerosis", submitted to JASN in November 2020. Shared materials include a pre-trained CNN model and corresponding codes to perform whole slide segmentation of IFTA and glomerulosclerosis on new renal biopsies. Also included are example whole slide images and their CNN segmented output. +This readme contains information on running IFTA segmentation on HiperGator -# Whole slide images -To view the CNN segmentation on whole slide images, you must have Aperio ImageScope (https://www.leicabiosystems.com/digital-pathology/manage/aperio-imagescope/) installed on your computer. Then, simply place the whole slide image (.svs file) and its corresponding annotations (.xml file) in the same directory on your computer. Upon opening the .svs file with ImageScope, you will see the CNN predictions overlaid on the whole slide. We provide some example pre-generated segmentations on whole slide images here: https://buffalo.box.com/s/thlo5vry0ii8sutvke9bmva0gm5aos0e. These segmentation outputs were morphologically post-processed to remove IFTA regions with size < 1730µm2 glomerular regions with size < 1500µm2 from the whole slide mask. 
+Clone the IFTA_segmentation repository (https://github.com/SarderLab/IFTA_segmentation) and switch to the sumanth_ifta_hpg branch:
-# Performing segmentation on new data
-Before segmenting your own whole slides, you will need to:
+    git clone https://github.com/SarderLab/IFTA_segmentation.git
+    cd IFTA_segmentation
+    git checkout sumanth_ifta_hpg
-1) Configure HAIL (https://github.com/SarderLab/H-AI-L) and its dependencies on your computer that will perform the segmentation
-2) Download the 3 pre-trained model files available at https://buffalo.box.com/s/thlo5vry0ii8sutvke9bmva0gm5aos0e
-After dependencies are installed, navigate to the directory where you have downloaded the HAIL codes and call the following command:
+Create a folder with the project name specified in the slurm file. For example, if the project name is 'TxR01', create '/orange/pinaki.sarder/sdevarasetty/IFTA_segmentation/TxR01'.
+In '/orange/pinaki.sarder/sdevarasetty/IFTA_segmentation/TxR01', create 'IFTA_segmentation/TxR01/TRAINING_data/0' and put the whole slide images for prediction there.
-    python segmentation_school.py --option new --project your_project_name
+Create '/orange/pinaki.sarder/sdevarasetty/IFTA_segmentation/TxR01/TRAINING_data/Predicted_XMLs' to save the output annotation files.
-Where --option tells HAIL to create a new directory for a new project which has name specified by --project
+Create a MODELS directory '/orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR' and upload the model files to it.
-Navigate inside the newly created folder "your_project_name", and find the folder MODELS/0/HR/. Put the three files you downloaded from step 2) above into this folder.
-
-Go back up to the directory "your_project_name". Find the folder TRAINING_data/0/, and place any whole slide images for segmentation (.svs files) inside this folder.
- -From a terminal prompt, while inside the directory that contains "segmentation_school.py" (i.e., where the HAIL codes are downloaded), call the following command: - - python segmentation_school.py --option predict --project your_project_name --one_network True --encoder_name deeplab --classNum 4 --boxSizeHR 3000 --overlap_percentHR 0.5 - -Adapt the boxSizeHR parameter to a lower value if your GPU does not have enough memory, and use --gpu 0, --gpu 1, etc to change which device on your computer to use for network prediction (out of range device numbers default to the CPU). When the algorithms have finished processing a slide, a new folder will be created: your_project_name/TRAINING_data/0/Predicted_XMLs. Inside, there will be .xml files which correspond to whole slide predictions for each .svs file inside the TRAINING_data/0/ folder. +To run the code + + sbatch run.sh diff --git a/random-useful-codes/check_for_missing_masks_and_patches.py b/random-useful-codes/check_for_missing_masks_and_patches.py new file mode 100644 index 0000000..d4074c2 --- /dev/null +++ b/random-useful-codes/check_for_missing_masks_and_patches.py @@ -0,0 +1,32 @@ +from glob import glob +import os + +def check_missing(start, to, ext): + files = glob(start) + total = len(files) + tot = 0 + for iter, file in enumerate(files): + base = file.split('/')[-1] + base = base.split('.')[0] + print('\r[{} of {}] | {}'.format(iter, total, file), end='') + if not os.path.isfile('{}/{}{}'.format(to, base, ext)): + tot += 1 + os.remove(file) + + return tot + +print('\n1 of 4') +tot = check_missing('./Permanent/LR/masks/*.png', './Permanent/LR/regions/', '.jpeg') +print('\n\tremoved: {}'.format(tot)) + +print('\n2 of 4') +tot = check_missing('./Permanent/LR/regions/*.jpeg', './Permanent/LR/masks/', '.png') +print('\n\tremoved: {}'.format(tot)) + +print('\n3 of 4') +tot = check_missing('./Permanent/HR/masks/*.png', './Permanent/HR/regions/', '.jpeg') +print('\n\tremoved: {}'.format(tot)) + +print('\n4 of 4') 
+tot = check_missing('./Permanent/HR/regions/*.jpeg', './Permanent/HR/masks/', '.png')
+print('\n\tremoved: {}'.format(tot))
diff --git a/random-useful-codes/get_slide_bounds.py b/random-useful-codes/get_slide_bounds.py
new file mode 100644
index 0000000..262adba
--- /dev/null
+++ b/random-useful-codes/get_slide_bounds.py
@@ -0,0 +1,18 @@
+import numpy as np
+import lxml.etree as ET
+
+def get_slide_bounds(file):
+
+    tree = ET.parse(file)
+    root = tree.getroot()
+
+    # get all vertex attributes (points)
+    X = []
+    Y = []
+
+    for Vertex in root.findall("./Annotation[@Id='1']/Regions/Region[@Id='1']/Vertices/Vertex"):
+        # make array of points
+        X.append(int(float(Vertex.attrib['X'])))
+        Y.append(int(float(Vertex.attrib['Y'])))
+
+    return X,Y
diff --git a/random-useful-codes/move_already_predicted.py b/random-useful-codes/move_already_predicted.py
new file mode 100644
index 0000000..f058465
--- /dev/null
+++ b/random-useful-codes/move_already_predicted.py
@@ -0,0 +1,17 @@
+import os
+from glob import glob
+
+dir = 'Predicted_XMLs/'
+print('\n')
+
+xmls = glob('{}*.xml'.format(dir))
+for xml in xmls:
+    base = xml.split('.xml')[-2]
+    base = base.split(dir)[-1]
+
+
+    try:
+        wsi = glob('{}.*'.format(base))[0]
+        print('moving: [{}]'.format(base))
+        os.rename(wsi, '{}/{}'.format(dir,wsi))
+    except IndexError: pass
diff --git a/random-useful-codes/move_patches.py b/random-useful-codes/move_patches.py
new file mode 100644
index 0000000..247e2c7
--- /dev/null
+++ b/random-useful-codes/move_patches.py
@@ -0,0 +1,33 @@
+from glob import glob
+import os
+
+def move_patches(start, to):
+    files = glob(start)
+    total = len(files)
+    for iter, file in enumerate(files):
+        base = file.split('/')[-1]
+        print('\r[{} of {}] | {}'.format(iter, total, file), end='')
+        # print(file, '{}/{}'.format(to, base))
+        os.rename(file, '{}/{}'.format(to, base))
+
+print('\n1 of 8')
+move_patches('./TempLR/Augment/masks/*.png', './Permanent/LR/masks/')
+print('\n2 of 8')
+move_patches('./TempLR/Augment/regions/*.jpeg', './Permanent/LR/regions/') + +print('\n3 of 8') +move_patches('./TempHR/Augment/masks/*.png', './Permanent/HR/masks/') +print('\n4 of 8') +move_patches('./TempHR/Augment/regions/*.jpeg', './Permanent/HR/regions/') + +print('\n5 of 8') +move_patches('./TempLR/masks/*.png', './Permanent/LR/masks/') +print('\n6 of 8') +move_patches('./TempLR/regions/*.jpeg', './Permanent/LR/regions/') + +print('\n7 of 8') +move_patches('./TempHR/masks/*.png', './Permanent/HR/masks/') +print('\n8 of 8') +move_patches('./TempHR/regions/*.jpeg', './Permanent/HR/regions/') + +print('\n\nall done...') diff --git a/run.sh b/run.sh new file mode 100644 index 0000000..d5d4af2 --- /dev/null +++ b/run.sh @@ -0,0 +1,23 @@ +#!/bin/sh +#SBATCH --cpus-per-task=10 +#SBATCH --mem-per-cpu=16gb +#SBATCH --partition=gpu +#SBATCH --gpus=geforce:1 +#SBATCH --time=72:00:00 +#SBATCH --output=./slurm_log.out +#SBATCH --job-name="ifta:t0" +echo "SLURM_JOBID="$SLURM_JOBID +echo "SLURM_JOB_NODELIST="$SLURM_JOB_NODELIST +echo "SLURM_NNODES="$SLURM_NNODES +echo "SLURMTMPDIR="$SLURMTMPDIR + +echo "working directory = "$SLURM_SUBMIT_DIR +ulimit -s unlimited +module load singularity +module load pytorch +ls +ml + +USER=sdevarasetty + +singularity exec --nv -B $(pwd):/exec/, IFTA.sif python3 /exec/segmentation_school.py --option predict --project TxR01 --encoder_name deeplab --one_network True --classNum 4 --boxSizeHR 3000 --overlap_percentHR 0.5 \ No newline at end of file diff --git a/segmentation_school.py b/segmentation_school.py new file mode 100644 index 0000000..5923080 --- /dev/null +++ b/segmentation_school.py @@ -0,0 +1,204 @@ +import os +import argparse +import sys +import numpy as np +import time + +sys.path.append(os.getcwd()+'/Codes') + +""" +main code for training semantic segmentation of WSI iteratively + + --option - code options + [new] - set up a new project + [train] - begin network training with new data + [predict] - use trained network to 
annotate new data
+    [validate] - get the network performance on holdout dataset
+    [evolve] - visualize the evolving network predictions
+    [purge] - remove previously chopped/augmented data from project
+    [prune] - randomly remove saved training images (--prune_HR/LR)
+
+    --project
+        [] - specify the project name
+
+    --transfer
+        [] - pull newest model from specified project
+            for transfer learning
+
+"""
+
+
+def main(args):
+
+    from InitializeFolderStructure import initFolder, purge_training_set, prune_training_set
+    if args.one_network == 'True' or args.one_network=='true' or args.one_network=='TRUE':
+        from IterativeTraining_1X import IterateTraining
+        from IterativePredict_1X import predict, validate
+    else:
+        from evolve_predictions import evolve
+        from IterativeTraining import IterateTraining
+        from IterativePredict import predict, validate
+
+    # for teaching young segmentation networks
+    starttime = time.time()
+
+    if args.project == ' ':
+        print('Please specify the project name: \n\t--project [folder]')
+
+    elif args.option in ['new', 'New']:
+        initFolder(args=args)
+        savetime(args=args, starttime=starttime)
+    elif args.option in ['train', 'Train']:
+        assert(args.one_network!=' '),'You must specify --one_network True for dense prediction or --one_network False for sparse prediction'
+        IterateTraining(args=args)
+        savetime(args=args, starttime=starttime)
+
+    elif args.option in ['predict', 'Predict']:
+        assert(args.one_network!=' '),'You must specify --one_network True for dense prediction or --one_network False for sparse prediction'
+        predict(args=args)
+        savetime(args=args, starttime=starttime)
+
+    elif args.option in ['validate', 'Validate']:
+        validate(args=args)
+    elif args.option in ['evolve', 'Evolve']:
+        evolve(args=args)
+    elif args.option in ['purge', 'Purge']:
+        purge_training_set(args=args)
+    elif args.option in ['prune', 'Prune']:
+        prune_training_set(args=args)
+
+    else:
+        print('Please specify an option in: \n\t--option [new, train, predict,
 validate]')
+
+def savetime(args, starttime):
+    if args.option in ['new', 'New']:
+        with open(args.base_dir + '/' + args.project + '/runtime.txt', 'w') as timefile:
+            timefile.write('option' +'\t'+ 'time' +'\t'+ 'epochs_LR' +'\t'+ 'epochs_HR' +'\t'+ 'aug_LR' +'\t'+ 'aug_HR' +'\t'+ 'overlap_percentLR' +'\t'+ 'overlap_percentHR')
+    if args.option in ['train', 'Train']:
+        with open(args.base_dir + '/' + args.project + '/runtime.txt', 'a') as timefile:
+            timefile.write('\n' + args.option +'\t'+ str(time.time()-starttime) +'\t'+ str(args.epoch_LR) +'\t'+ str(args.epoch_HR) +'\t'+ str(args.aug_LR) +'\t'+ str(args.aug_HR) +'\t'+ str(args.overlap_percentLR) +'\t'+ str(args.overlap_percentHR))
+    if args.option in ['predict', 'Predict']:
+        with open(args.base_dir + '/' + args.project + '/runtime.txt', 'a') as timefile:
+            timefile.write('\n' + args.option +'\t'+ str(time.time()-starttime))
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser()
+
+    ##### Main params (MANDATORY) ##############################################
+    # School subject
+    parser.add_argument('--project', dest='project', default=' ' ,type=str,
+        help='Starting directory to contain training project')
+    # option
+    parser.add_argument('--option', dest='option', default=' ' ,type=str,
+        help='option for [new, train, predict, validate]')
+    parser.add_argument('--transfer', dest='transfer', default=' ' ,type=str,
+        help='name of project for transfer learning [pulls the newest model]')
+    parser.add_argument('--one_network', dest='one_network', default=' ' ,type=str,
+        help='use only high resolution network for training/prediction/validation')
+    parser.add_argument('--encoder_name', dest='encoder_name', default=' ' ,type=str,
+        help='encoder options are res50, res101, or deeplab')
+
+    # automatically generated
+    parser.add_argument('--base_dir', dest='base_dir', default=os.getcwd(),type=str,
+        help='base directory of code folder')
+
+
+    ##### Args for training / prediction
####################################################
+    parser.add_argument('--gpu_num', dest='gpu_num', default=2 ,type=int,
+        help='number of GPUs available')
+    parser.add_argument('--gpu', dest='gpu', default=0 ,type=int,
+        help='GPU to use for prediction')
+    parser.add_argument('--iteration', dest='iteration', default='none' ,type=str,
+        help='Which iteration to use for prediction')
+    parser.add_argument('--prune_HR', dest='prune_HR', default=0.0 ,type=float,
+        help='percent of high rez data to be randomly removed [0-1]-->[none-all]')
+    parser.add_argument('--prune_LR', dest='prune_LR', default=0.0 ,type=float,
+        help='percent of low rez data to be randomly removed [0-1]-->[none-all]')
+    parser.add_argument('--classNum', dest='classNum', default=0 ,type=int,
+        help='number of classes present in the training data plus one (one class is specified for background)')
+    parser.add_argument('--classNum_HR', dest='classNum_HR', default=0 ,type=int,
+        help='number of classes present in the High res training data [USE ONLY IF DIFFERENT FROM LOW RES]')
+
+    ### Params for cutting wsi ###
+    #White level cutoff
+    parser.add_argument('--white_percent', dest='white_percent', default=0.05 ,type=float,
+        help='white level checkpoint for chopping')
+    parser.add_argument('--max_block_dim', dest='max_block_dim', default=2000,type=int,
+        help='maximum block dimension for chopping')
+    #Low resolution parameters
+    parser.add_argument('--overlap_percentLR', dest='overlap_percentLR', default=0.5 ,type=float,
+        help='overlap percentage of low resolution blocks [0-1]')
+    parser.add_argument('--boxSizeLR', dest='boxSizeLR', default=450 ,type=int,
+        help='size of low resolution blocks')
+    parser.add_argument('--downsampleRateLR', dest='downsampleRateLR', default=16 ,type=int,
+        help='reduce image resolution to 1/downsample rate')
+    #High resolution parameters
+    parser.add_argument('--overlap_percentHR', dest='overlap_percentHR', default=0.5 ,type=float,
+        help='overlap percentage of high
 resolution blocks [0-1]')
+    parser.add_argument('--boxSizeHR', dest='boxSizeHR', default=450 ,type=int,
+        help='size of high resolution blocks')
+    parser.add_argument('--downsampleRateHR', dest='downsampleRateHR', default=1 ,type=int,
+        help='reduce image resolution to 1/downsample rate')
+
+    ### Params for augmenting data ###
+    #High resolution
+    parser.add_argument('--aug_HR', dest='aug_HR', default=3 ,type=int,
+        help='augment high resolution set this many magnitudes')
+    #Low resolution
+    parser.add_argument('--aug_LR', dest='aug_LR', default=15 ,type=int,
+        help='augment low resolution set this many magnitudes')
+    #Color space transforms
+    parser.add_argument('--hbound', dest='hbound', default=0.05 ,type=float,
+        help='Gaussian variance defining bounds on Hue shift for HSV color augmentation')
+    parser.add_argument('--lbound', dest='lbound', default=0.025 ,type=float,
+        help='Gaussian variance defining bounds on L* gamma shift for color augmentation [alters brightness/darkness of image]')
+
+    ### Params for training networks ###
+    #Low resolution hyperparameters
+    parser.add_argument('--CNNbatch_sizeLR', dest='CNNbatch_sizeLR', default=2 ,type=int,
+        help='Size of batches for training low resolution CNN')
+    #High resolution hyperparameters
+    parser.add_argument('--CNNbatch_sizeHR', dest='CNNbatch_sizeHR', default=3 ,type=int,
+        help='Size of batches for training high resolution CNN')
+    #Hyperparameters
+    parser.add_argument('--epoch_LR', dest='epoch_LR', default=1 ,type=int,
+        help='training epochs for low resolution network')
+    parser.add_argument('--epoch_HR', dest='epoch_HR', default=1 ,type=int,
+        help='training epochs for high resolution network')
+    parser.add_argument('--saveIntervals', dest='saveIntervals', default=10 ,type=int,
+        help='how many checkpoints get saved during training')
+    parser.add_argument('--learning_rate_HR', dest='learning_rate_HR', default=2.5e-4,
+        type=float, help='High rez learning rate')
+    
parser.add_argument('--learning_rate_LR', dest='learning_rate_LR', default=2.5e-4, + type=float, help='Low rez learning rate') + parser.add_argument('--chop_data', dest='chop_data', default='True', + type=str, help='chop and augment new data before training') + + ### Params for saving results ### + parser.add_argument('--outDir', dest='outDir', default='/Predictions/' ,type=str, + help='output directory') + parser.add_argument('--save_outputs', dest='save_outputs', default=False ,type=bool, + help='save outputs from chopping etc. [final image masks]') + parser.add_argument('--imBoxExt', dest='imBoxExt', default='.jpeg' ,type=str, + help='ext of saved image blocks') + parser.add_argument('--finalImgExt', dest='finalImgExt', default='.jpeg' ,type=str, + help='ext of final saved images') + parser.add_argument('--wsi_ext', dest='wsi_ext', default='.svs' ,type=str, + help='file ext of wsi images') + parser.add_argument('--bg_intensity', dest='bg_intensity', default=.5 ,type=float, + help='if displaying output classifications [save_outputs = True] background color [0-1]') + parser.add_argument('--approximation_downsample', dest='approx_downsample', default=1 ,type=float, + help='Amount to downsample high resolution prediction boundaries for smoothing') + + + ### Params for optimizing wsi mask cleanup ### + parser.add_argument('--min_size', dest='min_size', default=650 ,type=int, + help='min size region to be considered after prepass [in pixels]') + parser.add_argument('--LR_region_pad', dest='LR_region_pad', default=50 ,type=int, + help='padded region for low resolution region extraction') + + + + args = parser.parse_args() + main(args=args) diff --git a/slurm_log.out b/slurm_log.out new file mode 100644 index 0000000..6220c3b --- /dev/null +++ b/slurm_log.out @@ -0,0 +1,1394 @@ +SLURM_JOBID=27795286 +SLURM_JOB_NODELIST=c0309a-s13 +SLURM_NNODES=1 +SLURMTMPDIR= +working directory = /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L +Codes 
+H-AI-L_pipeline_overview.pdf +ifta2.sif +IFTA.def +IFTA.sif +LICENSE +Pipeline.txt +Pretrained_Model +random-useful-codes +README.md +run.sh +segmentation_school.py +slurm_log.out +TxR01 + +Currently Loaded Modules: + 1) ufrc 2) singularity/3.10.4 3) pytorch/1.13.0 + + + +WARNING: underlay of /usr/bin/nvidia-smi required more than 50 (360) bind mounts +0 + +opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57774.svs + +chopping ... + +saving region: + <40500:43500 7500:10500> <43500:46500 6000:9000> <48000:51000 0:3000> <48000:51000 1500:4500> <48000:51000 6000:9000> <40500:43500 6000:9000> <48000:51000 4500:7500> <42000:45000 9000:12000> <46500:49500 6000:9000> <45000:48000 7500:10500> <46500:49500 3000:6000> <45000:48000 4500:7500> <39000:42000 9000:12000> <49500:52500 3000:6000> <48000:51000 7500:10500> <45000:48000 3000:6000> <46500:49500 1500:4500> <49500:52500 1500:4500> <49500:52500 4500:7500> <49500:52500 0:3000> <46500:49500 7500:10500> <43500:46500 4500:7500> <42000:45000 6000:9000> <42000:45000 4500:7500> <40500:43500 9000:12000> <48000:51000 3000:6000> <45000:48000 6000:9000> <46500:49500 4500:7500> <37500:40500 9000:12000> <43500:46500 7500:10500> <39000:42000 7500:10500> <42000:45000 7500:10500> <36000:39000 15000:18000> <40500:43500 10500:13500> <37500:40500 10500:13500> <42000:45000 13500:16500> <34500:37500 15000:18000> <40500:43500 12000:15000> <36000:39000 16500:19500> <37500:40500 13500:16500> <31500:34500 16500:19500> <31500:34500 15000:18000> <43500:46500 10500:13500> <43500:46500 9000:12000> <30000:33000 16500:19500> <45000:48000 9000:12000> <37500:40500 15000:18000> <39000:42000 13500:16500> <42000:45000 10500:13500> <39000:42000 10500:13500> <39000:42000 12000:15000> <34500:37500 12000:15000> <33000:36000 13500:16500> <36000:39000 10500:13500> <40500:43500 13500:16500> <34500:37500 13500:16500> <37500:40500 12000:15000> <34500:37500 16500:19500> <33000:36000 16500:19500> <39000:42000 15000:18000> 
<42000:45000 12000:15000> <36000:39000 12000:15000> <33000:36000 15000:18000> <36000:39000 13500:16500> <37500:40500 16500:19500> <28500:31500 18000:21000> <31500:34500 18000:21000> <28500:31500 21000:24000> <28500:31500 19500:22500> <31500:34500 21000:24000> <30000:33000 21000:24000> <24000:27000 25500:28500> <24000:27000 24000:27000> <28500:31500 24000:27000> <30000:33000 24000:27000> <28500:31500 25500:28500> <25500:28500 25500:28500> <34500:37500 19500:22500> <34500:37500 18000:21000> <28500:31500 22500:25500> <22500:25500 25500:28500> <33000:36000 18000:21000> <27000:30000 21000:24000> <27000:30000 22500:25500> <19500:22500 27000:30000> <36000:39000 18000:21000> <27000:30000 24000:27000> <33000:36000 19500:22500> <31500:34500 24000:27000> <25500:28500 22500:25500> <31500:34500 19500:22500> <30000:33000 19500:22500> <30000:33000 18000:21000> <30000:33000 22500:25500> <25500:28500 24000:27000> <27000:30000 25500:28500> <33000:36000 21000:24000> <21000:24000 27000:30000> <31500:34500 22500:25500> <24000:27000 27000:30000> <27000:30000 27000:30000> <22500:25500 27000:30000> <19500:22500 28500:31500> <18000:21000 33000:36000> <24000:27000 28500:31500> <22500:25500 28500:31500> <24000:27000 30000:33000> <16500:19500 31500:34500> <15000:18000 33000:36000> <18000:21000 28500:31500> <13500:16500 33000:36000> <21000:24000 33000:36000> <22500:25500 30000:33000> <15000:18000 31500:34500> <22500:25500 31500:34500> <21000:24000 28500:31500> <18000:21000 30000:33000> <19500:22500 30000:33000> <24000:27000 31500:34500> <19500:22500 31500:34500> <19500:22500 33000:36000> <27000:30000 28500:31500> <22500:25500 33000:36000> <28500:31500 27000:30000> <16500:19500 30000:33000> <21000:24000 31500:34500> <25500:28500 30000:33000> <21000:24000 30000:33000> <25500:28500 28500:31500> <25500:28500 27000:30000> <16500:19500 33000:36000> <18000:21000 31500:34500> <15000:18000 34500:37500> <13500:16500 34500:37500> <12000:15000 39000:42000> <18000:21000 36000:39000> <16500:19500 
39000:42000> <15000:18000 36000:39000> <10500:13500 40500:43500> <7500:10500 40500:43500> <16500:19500 36000:39000> <12000:15000 40500:43500> <18000:21000 34500:37500> <19500:22500 34500:37500> <13500:16500 36000:39000> <16500:19500 40500:43500> <13500:16500 40500:43500> <15000:18000 37500:40500> <21000:24000 34500:37500> <10500:13500 39000:42000> <12000:15000 36000:39000> <13500:16500 37500:40500> <18000:21000 37500:40500> <15000:18000 39000:42000> <16500:19500 37500:40500> <7500:10500 42000:45000> <10500:13500 37500:40500> <12000:15000 37500:40500> <16500:19500 34500:37500> <9000:12000 42000:45000> <9000:12000 39000:42000> <9000:12000 40500:43500> <15000:18000 40500:43500> <10500:13500 42000:45000> <13500:16500 39000:42000> <19500:22500 36000:39000> <12000:15000 42000:45000> <13500:16500 42000:45000> <7500:10500 43500:46500> <10500:13500 43500:46500> <15000:18000 42000:45000> <6000:9000 43500:46500> <9000:12000 43500:46500> <6000:9000 46500:49500> <6000:9000 45000:48000> <7500:10500 46500:49500> <3000:6000 49500:52500> <13500:16500 43500:46500> <12000:15000 45000:48000> <7500:10500 51000:54000> <10500:13500 46500:49500> <3000:6000 51000:54000> <7500:10500 49500:52500> <4500:7500 46500:49500> <9000:12000 49500:52500> <6000:9000 48000:51000> <12000:15000 43500:46500> <4500:7500 51000:54000> <10500:13500 45000:48000> <9000:12000 46500:49500> <7500:10500 48000:51000> <4500:7500 49500:52500> <9000:12000 48000:51000> <9000:12000 45000:48000> <6000:9000 51000:54000> <1500:4500 52500:55500> <1500:4500 49500:52500> <7500:10500 45000:48000> <1500:4500 51000:54000> <6000:9000 49500:52500> <3000:6000 48000:51000> <4500:7500 45000:48000> <0:3000 52500:55500> <4500:7500 48000:51000> <3000:6000 52500:55500> <6000:9000 52500:55500> <1500:4500 54000:57000> <4500:7500 54000:57000> <0:3000 54000:57000> <3000:6000 55500:58500> <3000:6000 54000:57000> <4500:7500 52500:55500> <1500:4500 55500:58500> 2024-04-06 06:04:43.839774: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your 
CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA +2024-04-06 06:04:43.975877: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties: +name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545 +pciBusID: 0000:1a:00.0 +totalMemory: 10.75GiB freeMemory: 10.44GiB +2024-04-06 06:04:43.976014: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0 +2024-04-06 06:04:49.278116: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix: +2024-04-06 06:04:49.278253: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0 +2024-04-06 06:04:49.278281: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N +2024-04-06 06:04:49.287901: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5) +WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. +Instructions for updating: +Use the retry module or similar alternatives. +2024-04-06 06:05:11.345433: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. +2024-04-06 06:05:11.345565: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 
+-----------build encoder: deeplab pre-trained----------- +after start block: (1, ?, ?, 64) +after block1: (1, ?, ?, 256) +after block2: (1, ?, ?, 512) +after block3: (1, ?, ?, 1024) +after block4: (1, ?, ?, 2048) +-----------build decoder----------- +after aspp block: (1, ?, ?, 4) +Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1 +step 0 +step 100 +step 200 +The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57774/img_files/ + + +213 image regions chopped +Chop SUEY! + +Segmenting tissue ... + +starting prediction using model: + /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1 + + + +reconstructing wsi map ... + + <0 of 212> <1 of 212> <2 of 212> <3 of 212> <4 of 212> <5 of 212> <6 of 212> <7 of 212> <8 of 212> <9 of 212> <10 of 212> <11 of 212> <12 of 212> <13 of 212> <14 of 212> <15 of 212> <16 of 212> <17 of 212> <18 of 212> <19 of 212> <20 of 212> <21 of 212> <22 of 212> <23 of 212> <24 of 212> <25 of 212> <26 of 212> <27 of 212> <28 of 212> <29 of 212> <30 of 212> <31 of 212> <32 of 212> <33 of 212> <34 of 212> <35 of 212> <36 of 212> <37 of 212> <38 of 212> <39 of 212> <40 of 212> <41 of 212> <42 of 212> <43 of 212> <44 of 212> <45 of 212> <46 of 212> <47 of 212> <48 of 212> <49 of 212> <50 of 212> <51 of 212> <52 of 212> <53 of 212> <54 of 212> <55 of 212> <56 of 212> <57 of 212> <58 of 212> <59 of 212> <60 of 212> <61 of 212> <62 of 212> <63 of 212> <64 of 212> <65 of 212> <66 of 212> <67 of 212> <68 of 212> <69 of 212> <70 of 212> <71 of 212> <72 of 212> <73 of 212> <74 of 212> <75 of 212> <76 of 212> <77 of 212> <78 of 212> <79 of 212> <80 of 212> <81 of 212> <82 of 212> <83 of 212> <84 of 212> <85 of 212> <86 of 212> <87 of 212> <88 of 212> <89 of 212> <90 of 212> <91 of 212> <92 of 212> <93 of 212> <94 of 212> <95 of 212> <96 of 212> <97 of 212> <98 of 212> <99 of 212> <100 of 212> <101 
of 212> <102 of 212> <103 of 212> <104 of 212> <105 of 212> <106 of 212> <107 of 212> <108 of 212> <109 of 212> <110 of 212> <111 of 212> <112 of 212> <113 of 212> <114 of 212> <115 of 212> <116 of 212> <117 of 212> <118 of 212> <119 of 212> <120 of 212> <121 of 212> <122 of 212> <123 of 212> <124 of 212> <125 of 212> <126 of 212> <127 of 212> <128 of 212> <129 of 212> <130 of 212> <131 of 212> <132 of 212> <133 of 212> <134 of 212> <135 of 212> <136 of 212> <137 of 212> <138 of 212> <139 of 212> <140 of 212> <141 of 212> <142 of 212> <143 of 212> <144 of 212> <145 of 212> <146 of 212> <147 of 212> <148 of 212> <149 of 212> <150 of 212> <151 of 212> <152 of 212> <153 of 212> <154 of 212> <155 of 212> <156 of 212> <157 of 212> <158 of 212> <159 of 212> <160 of 212> <161 of 212> <162 of 212> <163 of 212> <164 of 212> <165 of 212> <166 of 212> <167 of 212> <168 of 212> <169 of 212> <170 of 212> <171 of 212> <172 of 212> <173 of 212> <174 of 212> <175 of 212> <176 of 212> <177 of 212> <178 of 212> <179 of 212> <180 of 212> <181 of 212> <182 of 212> <183 of 212> <184 of 212> <185 of 212> <186 of 212> <187 of 212> <188 of 212> <189 of 212> <190 of 212> <191 of 212> <192 of 212> <193 of 212> <194 of 212> <195 of 212> <196 of 212> <197 of 212> <198 of 212> <199 of 212> <200 of 212> <201 of 212> <202 of 212> <203 of 212> <204 of 212> <205 of 212> <206 of 212> <207 of 212> <208 of 212> <209 of 212> <210 of 212> <211 of 212> <212 of 212> + +Starting XML construction: +[0 1 2] + working on: annotationID 1 +binary_mask == [0 1] + working on: annotationID 2 +binary_mask == [0 1] +cleaning up + +opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57346.svs + +chopping ... 
+ +saving region: + <13500:16500 3000:6000> <96000:99000 4500:7500> <10500:13500 3000:6000> <12000:15000 3000:6000> <9000:12000 1500:4500> <9000:12000 4500:7500> <97500:100500 4500:7500> <13500:16500 1500:4500> <12000:15000 4500:7500> <16500:19500 6000:9000> <97500:100500 6000:9000> <10500:13500 1500:4500> <96000:99000 6000:9000> <10500:13500 4500:7500> <10500:13500 0:3000> <9000:12000 3000:6000> <12000:15000 6000:9000> <99000:102000 4500:7500> <9000:12000 0:3000> <15000:18000 6000:9000> <12000:15000 0:3000> <7500:10500 1500:4500> <15000:18000 3000:6000> <10500:13500 6000:9000> <9000:12000 6000:9000> <7500:10500 3000:6000> <12000:15000 1500:4500> <15000:18000 4500:7500> <13500:16500 4500:7500> <100500:103500 4500:7500> <7500:10500 4500:7500> <13500:16500 6000:9000> <100500:103500 9000:12000> <18000:21000 9000:12000> <94500:97500 7500:10500> <18000:21000 10500:13500> <15000:18000 10500:13500> <102000:105000 9000:12000> <100500:103500 7500:10500> <15000:18000 7500:10500> <10500:13500 7500:10500> <97500:100500 7500:10500> <96000:99000 9000:12000> <13500:16500 7500:10500> <13500:16500 9000:12000> <97500:100500 9000:12000> <16500:19500 9000:12000> <12000:15000 10500:13500> <99000:102000 7500:10500> <10500:13500 9000:12000> <96000:99000 7500:10500> <13500:16500 10500:13500> <19500:22500 9000:12000> <16500:19500 10500:13500> <16500:19500 7500:10500> <99000:102000 9000:12000> <99000:102000 6000:9000> <102000:105000 7500:10500> <15000:18000 9000:12000> <19500:22500 10500:13500> <12000:15000 9000:12000> <100500:103500 6000:9000> <12000:15000 7500:10500> <18000:21000 7500:10500> <102000:105000 10500:13500> <4500:7500 12000:15000> <21000:24000 12000:15000> <6000:9000 12000:15000> <16500:19500 12000:15000> <15000:18000 12000:15000> <19500:22500 13500:16500> <15000:18000 13500:16500> <100500:103500 12000:15000> <97500:100500 10500:13500> <19500:22500 12000:15000> <4500:7500 13500:16500> <6000:9000 13500:16500> <1500:4500 12000:15000> <102000:105000 12000:15000> <3000:6000 
12000:15000> <99000:102000 12000:15000> <0:3000 13500:16500> <103500:106500 12000:15000> <3000:6000 13500:16500> <22500:25500 12000:15000> <21000:24000 10500:13500> <96000:99000 10500:13500> <99000:102000 10500:13500> <103500:106500 10500:13500> <16500:19500 13500:16500> <1500:4500 13500:16500> <18000:21000 12000:15000> <18000:21000 13500:16500> <97500:100500 12000:15000> <13500:16500 12000:15000> <100500:103500 10500:13500> <16500:19500 15000:18000> <24000:27000 13500:16500> <102000:105000 15000:18000> <22500:25500 15000:18000> <25500:28500 15000:18000> <103500:106500 13500:16500> <22500:25500 13500:16500> <4500:7500 16500:19500> <0:3000 15000:18000> <1500:4500 15000:18000> <105000:108000 15000:18000> <100500:103500 15000:18000> <0:3000 16500:19500> <6000:9000 16500:19500> <18000:21000 15000:18000> <21000:24000 13500:16500> <3000:6000 16500:19500> <103500:106500 15000:18000> <105000:108000 13500:16500> <99000:102000 15000:18000> <97500:100500 13500:16500> <24000:27000 15000:18000> <6000:9000 15000:18000> <4500:7500 15000:18000> <3000:6000 15000:18000> <21000:24000 15000:18000> <7500:10500 15000:18000> <1500:4500 16500:19500> <100500:103500 13500:16500> <19500:22500 15000:18000> <102000:105000 13500:16500> <99000:102000 13500:16500> <108000:111000 16500:19500> <105000:108000 16500:19500> <18000:21000 16500:19500> <21000:24000 16500:19500> <99000:102000 16500:19500> <19500:22500 18000:21000> <25500:28500 16500:19500> <6000:9000 18000:21000> <3000:6000 18000:21000> <102000:105000 16500:19500> <90000:93000 16500:19500> <27000:30000 16500:19500> <24000:27000 18000:21000> <25500:28500 18000:21000> <9000:12000 16500:19500> <7500:10500 16500:19500> <9000:12000 18000:21000> <22500:25500 16500:19500> <24000:27000 16500:19500> <27000:30000 18000:21000> <1500:4500 18000:21000> <7500:10500 18000:21000> <100500:103500 16500:19500> <103500:106500 16500:19500> <88500:91500 16500:19500> <19500:22500 16500:19500> <106500:109500 16500:19500> <22500:25500 18000:21000> <87000:90000 
16500:19500> <91500:94500 16500:19500> <21000:24000 18000:21000> <4500:7500 18000:21000> <85500:88500 19500:22500> <3000:6000 19500:22500> <4500:7500 19500:22500> <30000:33000 19500:22500> <28500:31500 18000:21000> <25500:28500 19500:22500> <91500:94500 18000:21000> <106500:109500 18000:21000> <105000:108000 18000:21000> <100500:103500 18000:21000> <90000:93000 18000:21000> <85500:88500 18000:21000> <108000:111000 18000:21000> <27000:30000 19500:22500> <10500:13500 19500:22500> <87000:90000 18000:21000> <93000:96000 19500:22500> <22500:25500 19500:22500> <88500:91500 18000:21000> <7500:10500 19500:22500> <88500:91500 19500:22500> <6000:9000 19500:22500> <1500:4500 19500:22500> <102000:105000 18000:21000> <21000:24000 19500:22500> <87000:90000 19500:22500> <90000:93000 19500:22500> <28500:31500 19500:22500> <103500:106500 18000:21000> <9000:12000 19500:22500> <91500:94500 19500:22500> <24000:27000 19500:22500> <3000:6000 21000:24000> <10500:13500 21000:24000> <34500:37500 21000:24000> <108000:111000 19500:22500> <88500:91500 21000:24000> <28500:31500 21000:24000> <102000:105000 21000:24000> <93000:96000 21000:24000> <6000:9000 21000:24000> <87000:90000 21000:24000> <30000:33000 21000:24000> <109500:112500 19500:22500> <1500:4500 21000:24000> <9000:12000 21000:24000> <102000:105000 19500:22500> <103500:106500 19500:22500> <12000:15000 21000:24000> <37500:40500 21000:24000> <90000:93000 21000:24000> <7500:10500 21000:24000> <33000:36000 21000:24000> <25500:28500 21000:24000> <105000:108000 19500:22500> <106500:109500 19500:22500> <36000:39000 21000:24000> <27000:30000 21000:24000> <24000:27000 21000:24000> <85500:88500 21000:24000> <22500:25500 21000:24000> <31500:34500 21000:24000> <91500:94500 21000:24000> <4500:7500 21000:24000> <10500:13500 22500:25500> <25500:28500 22500:25500> <34500:37500 22500:25500> <37500:40500 22500:25500> <103500:106500 21000:24000> <30000:33000 22500:25500> <105000:108000 21000:24000> <27000:30000 22500:25500> <93000:96000 22500:25500> 
<3000:6000 22500:25500> <40500:43500 22500:25500> <6000:9000 22500:25500> <9000:12000 22500:25500> <91500:94500 22500:25500> <108000:111000 21000:24000> <31500:34500 22500:25500> <109500:112500 21000:24000> <106500:109500 21000:24000> <12000:15000 22500:25500> <94500:97500 22500:25500> <33000:36000 22500:25500> <85500:88500 22500:25500> <28500:31500 22500:25500> <4500:7500 22500:25500> <7500:10500 22500:25500> <87000:90000 22500:25500> <24000:27000 22500:25500> <88500:91500 22500:25500> <39000:42000 22500:25500> <90000:93000 22500:25500> <36000:39000 22500:25500> <111000:114000 21000:24000> <106500:109500 22500:25500> <105000:108000 22500:25500> <10500:13500 24000:27000> <36000:39000 24000:27000> <40500:43500 24000:27000> <25500:28500 24000:27000> <108000:111000 22500:25500> <7500:10500 24000:27000> <93000:96000 24000:27000> <37500:40500 24000:27000> <28500:31500 24000:27000> <42000:45000 24000:27000> <4500:7500 24000:27000> <31500:34500 24000:27000> <6000:9000 24000:27000> <34500:37500 24000:27000> <111000:114000 22500:25500> <91500:94500 24000:27000> <103500:106500 22500:25500> <109500:112500 22500:25500> <88500:91500 24000:27000> <9000:12000 24000:27000> <13500:16500 24000:27000> <87000:90000 24000:27000> <43500:46500 24000:27000> <12000:15000 24000:27000> <90000:93000 24000:27000> <27000:30000 24000:27000> <39000:42000 24000:27000> <33000:36000 24000:27000> <94500:97500 24000:27000> <30000:33000 24000:27000> <36000:39000 25500:28500> <37500:40500 25500:28500> <13500:16500 25500:28500> <27000:30000 25500:28500> <28500:31500 25500:28500> <43500:46500 25500:28500> <112500:115500 24000:27000> <7500:10500 25500:28500> <31500:34500 25500:28500> <46500:49500 25500:28500> <93000:96000 25500:28500> <39000:42000 25500:28500> <33000:36000 25500:28500> <42000:45000 25500:28500> <10500:13500 25500:28500> <105000:108000 24000:27000> <15000:18000 25500:28500> <91500:94500 25500:28500> <9000:12000 25500:28500> <6000:9000 25500:28500> <40500:43500 25500:28500> <34500:37500 
25500:28500> <12000:15000 25500:28500> <106500:109500 24000:27000> <90000:93000 25500:28500> <109500:112500 24000:27000> <111000:114000 24000:27000> <88500:91500 25500:28500> <45000:48000 25500:28500> <30000:33000 25500:28500> <87000:90000 25500:28500> <108000:111000 24000:27000> <9000:12000 27000:30000> <30000:33000 27000:30000> <10500:13500 27000:30000> <37500:40500 27000:30000> <96000:99000 25500:28500> <45000:48000 27000:30000> <43500:46500 27000:30000> <105000:108000 25500:28500> <33000:36000 27000:30000> <16500:19500 27000:30000> <114000:117000 25500:28500> <111000:114000 25500:28500> <87000:90000 27000:30000> <31500:34500 27000:30000> <36000:39000 27000:30000> <15000:18000 27000:30000> <39000:42000 27000:30000> <13500:16500 27000:30000> <108000:111000 25500:28500> <109500:112500 25500:28500> <34500:37500 27000:30000> <88500:91500 27000:30000> <106500:109500 25500:28500> <48000:51000 27000:30000> <112500:115500 25500:28500> <90000:93000 27000:30000> <46500:49500 27000:30000> <94500:97500 25500:28500> <7500:10500 27000:30000> <42000:45000 27000:30000> <40500:43500 27000:30000> <12000:15000 27000:30000> <91500:94500 27000:30000> <93000:96000 27000:30000> <114000:117000 27000:30000> <90000:93000 28500:31500> <39000:42000 28500:31500> <108000:111000 27000:30000> <115500:118500 27000:30000> <93000:96000 28500:31500> <94500:97500 27000:30000> <109500:112500 27000:30000> <13500:16500 28500:31500> <40500:43500 28500:31500> <88500:91500 28500:31500> <16500:19500 28500:31500> <42000:45000 28500:31500> <112500:115500 27000:30000> <48000:51000 28500:31500> <106500:109500 27000:30000> <91500:94500 28500:31500> <12000:15000 28500:31500> <46500:49500 28500:31500> <45000:48000 28500:31500> <96000:99000 27000:30000> <9000:12000 28500:31500> <49500:52500 28500:31500> <94500:97500 28500:31500> <97500:100500 28500:31500> <96000:99000 28500:31500> <18000:21000 28500:31500> <15000:18000 28500:31500> <111000:114000 27000:30000> <10500:13500 28500:31500> <43500:46500 28500:31500> 
<13500:16500 30000:33000> <18000:21000 30000:33000> <117000:120000 28500:31500> <19500:22500 30000:33000> <42000:45000 30000:33000> <109500:112500 28500:31500> <43500:46500 30000:33000> <45000:48000 30000:33000> <108000:111000 28500:31500> <118500:121500 28500:31500> <51000:54000 30000:33000> <114000:117000 28500:31500> <94500:97500 30000:33000> <15000:18000 30000:33000> <49500:52500 30000:33000> <96000:99000 30000:33000> <9000:12000 30000:33000> <115500:118500 28500:31500> <112500:115500 28500:31500> <40500:43500 30000:33000> <48000:51000 30000:33000> <10500:13500 30000:33000> <91500:94500 30000:33000> <16500:19500 30000:33000> <93000:96000 30000:33000> <46500:49500 30000:33000> <121500:124500 28500:31500> <123000:126000 28500:31500> <12000:15000 30000:33000> <111000:114000 28500:31500> <90000:93000 30000:33000> <120000:123000 28500:31500> <126000:129000 30000:33000> <94500:97500 31500:34500> <115500:118500 30000:33000> <97500:100500 30000:33000> <99000:102000 30000:33000> <18000:21000 31500:34500> <118500:121500 30000:33000> <52500:55500 31500:34500> <121500:124500 30000:33000> <114000:117000 30000:33000> <45000:48000 31500:34500> <46500:49500 31500:34500> <111000:114000 30000:33000> <13500:16500 31500:34500> <16500:19500 31500:34500> <93000:96000 31500:34500> <112500:115500 30000:33000> <123000:126000 30000:33000> <15000:18000 31500:34500> <19500:22500 31500:34500> <91500:94500 31500:34500> <120000:123000 30000:33000> <117000:120000 30000:33000> <124500:127500 30000:33000> <12000:15000 31500:34500> <43500:46500 31500:34500> <51000:54000 31500:34500> <49500:52500 31500:34500> <21000:24000 31500:34500> <48000:51000 31500:34500> <90000:93000 31500:34500> <109500:112500 30000:33000> <121500:124500 31500:34500> <12000:15000 33000:36000> <93000:96000 33000:36000> <51000:54000 33000:36000> <111000:114000 31500:34500> <96000:99000 31500:34500> <118500:121500 31500:34500> <49500:52500 33000:36000> <48000:51000 33000:36000> <117000:120000 31500:34500> <99000:102000 
31500:34500> <22500:25500 33000:36000> <52500:55500 33000:36000> <115500:118500 31500:34500> <123000:126000 31500:34500> <120000:123000 31500:34500> <127500:130500 31500:34500> <18000:21000 33000:36000> <45000:48000 33000:36000> <112500:115500 31500:34500> <19500:22500 33000:36000> <114000:117000 31500:34500> <97500:100500 31500:34500> <91500:94500 33000:36000> <21000:24000 33000:36000> <54000:57000 33000:36000> <126000:129000 31500:34500> <15000:18000 33000:36000> <46500:49500 33000:36000> <16500:19500 33000:36000> <124500:127500 31500:34500> <13500:16500 33000:36000> <118500:121500 33000:36000> <99000:102000 33000:36000> <112500:115500 33000:36000> <48000:51000 34500:37500> <115500:118500 33000:36000> <96000:99000 33000:36000> <124500:127500 33000:36000> <94500:97500 33000:36000> <123000:126000 33000:36000> <114000:117000 33000:36000> <21000:24000 34500:37500> <120000:123000 33000:36000> <129000:132000 33000:36000> <100500:103500 33000:36000> <24000:27000 34500:37500> <15000:18000 34500:37500> <52500:55500 34500:37500> <51000:54000 34500:37500> <117000:120000 33000:36000> <121500:124500 33000:36000> <97500:100500 33000:36000> <93000:96000 34500:37500> <19500:22500 34500:37500> <127500:130500 33000:36000> <16500:19500 34500:37500> <22500:25500 34500:37500> <46500:49500 34500:37500> <54000:57000 34500:37500> <49500:52500 34500:37500> <13500:16500 34500:37500> <126000:129000 33000:36000> <18000:21000 34500:37500> <18000:21000 36000:39000> <16500:19500 36000:39000> <100500:103500 34500:37500> <121500:124500 34500:37500> <19500:22500 36000:39000> <124500:127500 34500:37500> <97500:100500 34500:37500> <114000:117000 34500:37500> <52500:55500 36000:39000> <15000:18000 36000:39000> <130500:133500 34500:37500> <54000:57000 36000:39000> <25500:28500 36000:39000> <51000:54000 36000:39000> <126000:129000 34500:37500> <96000:99000 34500:37500> <123000:126000 34500:37500> <93000:96000 36000:39000> <129000:132000 34500:37500> <24000:27000 36000:39000> <117000:120000 
34500:37500> <21000:24000 36000:39000> <127500:130500 34500:37500> <49500:52500 36000:39000> <99000:102000 34500:37500> <120000:123000 34500:37500> <118500:121500 34500:37500> <94500:97500 34500:37500> <102000:105000 34500:37500> <115500:118500 34500:37500> <48000:51000 36000:39000> <22500:25500 36000:39000> <18000:21000 37500:40500> <51000:54000 37500:40500> <99000:102000 36000:39000> <24000:27000 37500:40500> <130500:133500 36000:39000> <19500:22500 37500:40500> <28500:31500 37500:40500> <132000:135000 36000:39000> <129000:132000 36000:39000> <99000:102000 37500:40500> <25500:28500 37500:40500> <102000:105000 36000:39000> <123000:126000 36000:39000> <103500:106500 37500:40500> <96000:99000 37500:40500> <102000:105000 37500:40500> <121500:124500 36000:39000> <94500:97500 37500:40500> <96000:99000 36000:39000> <97500:100500 37500:40500> <124500:127500 36000:39000> <126000:129000 36000:39000> <100500:103500 36000:39000> <127500:130500 36000:39000> <94500:97500 36000:39000> <27000:30000 37500:40500> <100500:103500 37500:40500> <52500:55500 37500:40500> <97500:100500 36000:39000> <22500:25500 37500:40500> <21000:24000 37500:40500> <54000:57000 37500:40500> <129000:132000 39000:42000> <27000:30000 39000:42000> <105000:108000 39000:42000> <132000:135000 39000:42000> <127500:130500 37500:40500> <22500:25500 39000:42000> <30000:33000 39000:42000> <99000:102000 39000:42000> <124500:127500 37500:40500> <25500:28500 39000:42000> <21000:24000 39000:42000> <97500:100500 39000:42000> <130500:133500 37500:40500> <126000:129000 39000:42000> <135000:138000 39000:42000> <24000:27000 39000:42000> <129000:132000 37500:40500> <96000:99000 39000:42000> <130500:133500 39000:42000> <22500:25500 40500:43500> <19500:22500 39000:42000> <21000:24000 40500:43500> <100500:103500 39000:42000> <103500:106500 39000:42000> <133500:136500 39000:42000> <132000:135000 37500:40500> <126000:129000 37500:40500> <28500:31500 39000:42000> <24000:27000 40500:43500> <133500:136500 37500:40500> 
 [... "saving region" coordinate list truncated: <y_start:y_end x_start:x_end> bounds of 3000 px tiles chopped at a 1500 px stride ...]
2024-04-06 06:32:04.616444: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2024-04-06 06:32:04.730702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:1a:00.0
totalMemory: 10.75GiB freeMemory: 10.44GiB
2024-04-06 06:32:04.730831: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2024-04-06 06:32:05.882287: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2024-04-06 06:32:05.882419: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2024-04-06 06:32:05.882447: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2024-04-06 06:32:05.882554: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
2024-04-06 06:32:17.977574: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2024-04-06 06:32:17.977702: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
-----------build encoder: deeplab pre-trained-----------
after start block: (1, ?, ?, 64)
after block1: (1, ?, ?, 256)
after block2: (1, ?, ?, 512)
after block3: (1, ?, ?, 1024)
after block4: (1, ?, ?, 2048)
-----------build decoder-----------
after aspp block: (1, ?, ?, 4)
Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
step 0
step 100
step 200
step 300
step 400
step 500
step 600
step 700
step 800
The output files have been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57346/img_files/


841 image regions chopped
Chop SUEY!

Segmenting tissue ...

starting prediction using model:
 /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1



reconstructing wsi map ...
 <0 of 840> ... <840 of 840> [... per-tile reconstruction progress counter truncated ...]

Starting XML construction:
[0 1 2 3]
 working on: annotationID 1
binary_mask == [0 1]
 working on: annotationID 2
binary_mask == [0 1]
 working on: annotationID 3
binary_mask == [0 1]
cleaning up

opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57334.svs

chopping ...

saving region:
 [... coordinate list truncated: <y_start:y_end x_start:x_end> bounds of 3000 px tiles chopped at a 1500 px stride ...]
2024-04-06 07:25:29.662998: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
2024-04-06 07:25:29.779633: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:1a:00.0
totalMemory: 10.75GiB freeMemory: 10.44GiB
2024-04-06 07:25:29.779762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
2024-04-06 07:25:31.086150: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
2024-04-06 07:25:31.086284: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
2024-04-06 07:25:31.086311: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
2024-04-06 07:25:31.086411: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Use the retry module or similar alternatives.
2024-04-06 07:25:44.376037: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
2024-04-06 07:25:44.376198: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
-----------build encoder: deeplab pre-trained-----------
after start block: (1, ?, ?, 64)
after block1: (1, ?, ?, 256)
after block2: (1, ?, ?, 512)
after block3: (1, ?, ?, 1024)
after block4: (1, ?, ?, 2048)
-----------build decoder-----------
after aspp block: (1, ?, ?, 4)
Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
step 0
step 100
step 200
step 300
The output files have been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57334/img_files/


399 image regions chopped
Chop SUEY!

Segmenting tissue ...

starting prediction using model:
 /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1



reconstructing wsi map ...
 <0 of 398> ... <398 of 398> [... per-tile reconstruction progress counter truncated ...]

Starting XML construction:
[0 1 2]
 working on: annotationID 1
binary_mask == [0 1]
 working on: annotationID 2
binary_mask == [0 1]
cleaning up

opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57861.svs

chopping ...
+
+saving region:
+ <18000:21000 4500:7500> <24000:27000 4500:7500> <13500:16500 6000:9000> ... <1500:4500 31500:34500> <3000:6000 31500:34500> 2024-04-06 07:50:02.448564: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 07:50:02.564373: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 07:50:02.564504: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 07:50:03.705878: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 07:50:03.706013: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 07:50:03.706041: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 07:50:03.706144: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 07:50:15.684076: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 07:50:15.684211: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57861/img_files/
+
+
+229 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 228> <1 of 228> <2 of 228> ... <227 of 228> <228 of 228>
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57845.svs
+
+chopping ...
+
+saving region:
+ <6000:9000 0:3000> <4500:7500 6000:9000> <6000:9000 3000:6000> ... <19500:22500 27000:30000> <18000:21000 27000:30000> <21000:24000 27000:30000> 2024-04-06 08:03:25.571164: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 08:03:25.686691: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 08:03:25.686819: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 08:03:26.862602: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 08:03:26.862735: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 08:03:26.862762: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 08:03:26.862875: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 08:03:38.783832: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 08:03:38.783967: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57845/img_files/
+
+
+163 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 162> <1 of 162> <2 of 162> ... <161 of 162> <162 of 162>
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57656.svs
+
+chopping ...
+
+saving region:
+ <130500:133500 0:3000> <129000:132000 0:3000> <127500:130500 3000:6000> ... <88500:91500 39000:42000> <3000:6000 39000:42000> 2024-04-06 08:20:14.388493: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 08:20:14.503820: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 08:20:14.503961: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 08:20:15.633717: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 08:20:15.633854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 08:20:15.633881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 08:20:15.633995: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 08:20:27.665528: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 08:20:27.665659: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB.
The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+step 300
+step 400
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57656/img_files/
+
+
+444 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 443> … <443 of 443> (per-region progress counter, repeats truncated)
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57361.svs
+
+chopping ...
+
+saving region:
+ <…3000×3000 px region coordinates (x-start:x-end y-start:y-end pairs), list truncated…>
+2024-04-06 09:01:43.472970: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 09:01:43.589572: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 09:01:43.589702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 09:01:44.884224: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 09:01:44.884363: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 09:01:44.884391: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 09:01:44.884499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 09:01:58.395952: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 09:01:58.396092: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+step 300
+step 400
+step 500
+step 600
+step 700
+step 800
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57361/img_files/
+
+
+895 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 894> ... <894 of 894> [intermediate per-tile progress counters elided]
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57928.svs
+
+chopping ...
+
+saving region:
+ [tile coordinate list elided; 162 regions]
+2024-04-06 09:51:20.031240: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 09:51:20.147774: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 09:51:20.147909: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 09:51:21.451172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 09:51:21.451304: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917]      0
+2024-04-06 09:51:21.451333: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0:   N
+2024-04-06 09:51:21.451437: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 09:51:33.458758: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 09:51:33.458901: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57928/img_files/
+
+
+162 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+	/orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 161> ... <161 of 161> [intermediate per-tile progress counters elided]
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/54254.svs
+
+chopping ...
+
+saving region:
+ [tile coordinate list elided]
+2024-04-06 10:06:57.486436: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 10:06:57.602144: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 10:06:57.602289: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 10:06:58.911247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 10:06:58.911360: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917]      0
+2024-04-06 10:06:58.911388: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0:   N
+2024-04-06 10:06:58.911499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 10:07:12.051288: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 10:07:12.051419: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+step 300
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/54254/img_files/
+
+
+355 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 354> … <354 of 354> (per-tile reconstruction progress elided)
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57882.svs
+
+chopping ...
+
+saving region:
+ <…region coordinate list elided: 3000×3000 px tiles at a 1500 px stride…>
+<TensorFlow device setup and bfc_allocator warnings identical to the first run above, timestamps 10:30:22 to 10:30:35>
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57882/img_files/
+
+
+215 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 214> … <214 of 214> (per-tile reconstruction progress elided)
+
+Starting XML construction:
+[0 1 2]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57854.svs
+
+chopping ...
+
+saving region:
+ <…region coordinate list elided: 3000×3000 px tiles at a 1500 px stride…>
+<TensorFlow device setup and bfc_allocator warnings identical to the first run above, timestamps 10:45:07 to 10:45:20>
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57854/img_files/
+
+
+237 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 236> … <236 of 236> (per-tile reconstruction progress elided)
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/54248.svs
+
+chopping ...
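The `<a:b c:d>` pairs logged under each "saving region:" line above follow a regular sliding-window pattern: 3000-pixel ranges whose start positions advance in 1500-pixel steps (50% overlap between neighboring tiles). A minimal sketch of how such tile bounds can be generated; the slide dimensions and the function name below are illustrative assumptions, not the pipeline's actual chopping code:

```python
def tile_bounds(width, height, size=3000, stride=1500):
    """Yield (x0, x1, y0, y1) bounds of size-by-size tiles at the given
    stride, matching the <x0:x1 y0:y1> pattern seen in the log."""
    for y in range(0, height - size + 1, stride):
        for x in range(0, width - size + 1, stride):
            yield (x, x + size, y, y + size)


if __name__ == "__main__":
    # Hypothetical 6000 x 4500 px region, for illustration only.
    for x0, x1, y0, y1 in tile_bounds(6000, 4500):
        print("<{}:{} {}:{}>".format(x0, x1, y0, y1), end=" ")
```

With a 1500 px stride and 3000 px windows, every interior pixel is covered by up to four overlapping tiles, which is consistent with the overlapping ranges visible in the logged coordinate lists.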
+
+saving region:
+ <21000:24000 1500:4500> <13500:16500 1500:4500> ... <51000:54000 10500:13500>
+2024-04-06 11:00:07.579606: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 11:00:07.692842: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 11:00:07.692982: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 11:00:08.818351: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 11:00:08.818473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 11:00:08.818501: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 11:00:08.818622: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 11:00:20.750613: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 11:00:20.750743: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/54248/img_files/
+
+
+175 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 174> ... <174 of 174>
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57665.svs
+
+chopping ...
+
+saving region:
+ <93000:96000 3000:6000> <30000:33000 3000:6000> ... <4500:7500 28500:31500>
+2024-04-06 11:13:21.921945: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 11:13:22.036473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 11:13:22.036594: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 11:13:23.215141: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 11:13:23.215267: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 11:13:23.215294: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 11:13:23.215403: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 11:13:37.205791: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 11:13:37.205919: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57665/img_files/
+
+
+283 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 282> ... <282 of 282>
+
+Starting XML construction:
+[0 1 2]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/59272.svs
+
+chopping ...
+
+saving region:
+ <13500:16500 12000:15000> <15000:18000 13500:16500> ... <39000:42000 36000:39000>
+2024-04-06 11:32:58.888132: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 11:32:59.004473: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 11:32:59.004610: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 11:33:00.381800: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 11:33:00.381941: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 11:33:00.381969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 11:33:00.382087: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 11:33:12.383720: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 11:33:12.383847: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/59272/img_files/
+
+
+244 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 243> <1 of 243> <2 of 243> ... <242 of 243> <243 of 243>
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/54265.svs
+
+chopping ...
+
+saving region:
+ <30000:33000 7500:10500> <36000:39000 3000:6000> <34500:37500 6000:9000> ... <3000:6000 48000:51000>
+2024-04-06 11:49:43.648796: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 11:49:43.765163: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 11:49:43.765295: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 11:49:44.940152: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 11:49:44.940301: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 11:49:44.940331: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 11:49:44.940440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 11:49:57.081858: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 11:49:57.081988: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/54265/img_files/
+
+
+202 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 201> <1 of 201> <2 of 201> ... <200 of 201> <201 of 201>
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/54232.svs
+
+chopping ...
+
+saving region:
+ <30000:33000 7500:10500> <33000:36000 1500:4500> <33000:36000 6000:9000> ... <16500:19500 42000:45000>
+2024-04-06 12:05:47.006787: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 12:05:47.110731: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 12:05:47.110865: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 12:05:48.269938: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 12:05:48.270084: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 12:05:48.270112: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 12:05:48.270237: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 12:06:00.315055: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 12:06:00.315191: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/54232/img_files/
+
+
+275 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 274> <1 of 274> <2 of 274> ... <273 of 274> <274 of 274>
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/54250.svs
+
+chopping ...
+ +saving region: + <39000:42000 3000:6000> <51000:54000 1500:4500> <48000:51000 1500:4500> <43500:46500 0:3000> <42000:45000 0:3000> <46500:49500 0:3000> <43500:46500 3000:6000> <46500:49500 1500:4500> <51000:54000 3000:6000> <45000:48000 3000:6000> <52500:55500 1500:4500> <49500:52500 1500:4500> <45000:48000 1500:4500> <43500:46500 1500:4500> <42000:45000 3000:6000> <48000:51000 3000:6000> <37500:40500 1500:4500> <37500:40500 3000:6000> <45000:48000 0:3000> <34500:37500 3000:6000> <40500:43500 1500:4500> <40500:43500 0:3000> <33000:36000 4500:7500> <54000:57000 3000:6000> <42000:45000 1500:4500> <49500:52500 3000:6000> <36000:39000 3000:6000> <39000:42000 1500:4500> <40500:43500 3000:6000> <48000:51000 0:3000> <52500:55500 3000:6000> <46500:49500 3000:6000> <34500:37500 4500:7500> <55500:58500 4500:7500> <27000:30000 7500:10500> <37500:40500 6000:9000> <54000:57000 4500:7500> <34500:37500 6000:9000> <42000:45000 6000:9000> <55500:58500 6000:9000> <39000:42000 4500:7500> <37500:40500 4500:7500> <45000:48000 6000:9000> <49500:52500 4500:7500> <49500:52500 6000:9000> <51000:54000 4500:7500> <43500:46500 4500:7500> <45000:48000 4500:7500> <36000:39000 6000:9000> <52500:55500 6000:9000> <39000:42000 6000:9000> <33000:36000 6000:9000> <52500:55500 4500:7500> <40500:43500 4500:7500> <54000:57000 6000:9000> <40500:43500 6000:9000> <51000:54000 6000:9000> <46500:49500 6000:9000> <48000:51000 6000:9000> <31500:34500 6000:9000> <43500:46500 6000:9000> <30000:33000 7500:10500> <42000:45000 4500:7500> <46500:49500 4500:7500> <48000:51000 4500:7500> <36000:39000 4500:7500> <28500:31500 7500:10500> <31500:34500 7500:10500> <33000:36000 7500:10500> <49500:52500 9000:12000> <22500:25500 9000:12000> <51000:54000 9000:12000> <52500:55500 9000:12000> <54000:57000 9000:12000> <12000:15000 9000:12000> <55500:58500 9000:12000> <51000:54000 7500:10500> <19500:22500 9000:12000> <36000:39000 7500:10500> <13500:16500 9000:12000> <16500:19500 9000:12000> <49500:52500 7500:10500> 
<40500:43500 7500:10500> <54000:57000 7500:10500> <48000:51000 7500:10500> <31500:34500 9000:12000> <21000:24000 9000:12000> <15000:18000 9000:12000> <28500:31500 9000:12000> <42000:45000 7500:10500> <39000:42000 7500:10500> <55500:58500 7500:10500> <34500:37500 9000:12000> <30000:33000 9000:12000> <27000:30000 9000:12000> <52500:55500 7500:10500> <18000:21000 9000:12000> <37500:40500 9000:12000> <25500:28500 9000:12000> <24000:27000 9000:12000> <6000:9000 10500:13500> <37500:40500 7500:10500> <34500:37500 7500:10500> <36000:39000 9000:12000> <33000:36000 9000:12000> <7500:10500 10500:13500> <10500:13500 10500:13500> <9000:12000 12000:15000> <16500:19500 12000:15000> <21000:24000 12000:15000> <18000:21000 12000:15000> <19500:22500 12000:15000> <18000:21000 10500:13500> <22500:25500 12000:15000> <54000:57000 10500:13500> <3000:6000 12000:15000> <24000:27000 12000:15000> <15000:18000 10500:13500> <15000:18000 12000:15000> <31500:34500 10500:13500> <7500:10500 12000:15000> <22500:25500 10500:13500> <4500:7500 12000:15000> <21000:24000 10500:13500> <52500:55500 10500:13500> <51000:54000 10500:13500> <25500:28500 10500:13500> <12000:15000 12000:15000> <36000:39000 10500:13500> <13500:16500 10500:13500> <27000:30000 10500:13500> <12000:15000 10500:13500> <28500:31500 10500:13500> <33000:36000 10500:13500> <10500:13500 12000:15000> <19500:22500 10500:13500> <6000:9000 12000:15000> <9000:12000 10500:13500> <13500:16500 12000:15000> <34500:37500 10500:13500> <30000:33000 10500:13500> <24000:27000 10500:13500> <16500:19500 10500:13500> <25500:28500 12000:15000> <28500:31500 12000:15000> <9000:12000 15000:18000> <10500:13500 13500:16500> <10500:13500 15000:18000> <27000:30000 12000:15000> <19500:22500 13500:16500> <30000:33000 13500:16500> <31500:34500 12000:15000> <12000:15000 15000:18000> <16500:19500 13500:16500> <24000:27000 13500:16500> <27000:30000 13500:16500> <13500:16500 15000:18000> <0:3000 15000:18000> <15000:18000 13500:16500> <22500:25500 13500:16500> 
<28500:31500 13500:16500> <18000:21000 13500:16500> <7500:10500 13500:16500> <12000:15000 13500:16500> <1500:4500 15000:18000> <21000:24000 13500:16500> <3000:6000 15000:18000> <9000:12000 13500:16500> <3000:6000 13500:16500> <1500:4500 13500:16500> <4500:7500 13500:16500> <30000:33000 12000:15000> <25500:28500 13500:16500> <7500:10500 15000:18000> <6000:9000 13500:16500> <6000:9000 15000:18000> <13500:16500 13500:16500> <33000:36000 12000:15000> <4500:7500 15000:18000> <6000:9000 16500:19500> <15000:18000 15000:18000> <22500:25500 15000:18000> <1500:4500 19500:22500> <0:3000 16500:19500> <1500:4500 18000:21000> <3000:6000 18000:21000> <4500:7500 18000:21000> <25500:28500 15000:18000> <0:3000 18000:21000> <18000:21000 15000:18000> <24000:27000 15000:18000> <21000:24000 15000:18000> <4500:7500 16500:19500> <16500:19500 15000:18000> <19500:22500 15000:18000> <1500:4500 16500:19500> <3000:6000 16500:19500> 2024-04-06 12:22:46.266031: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA +2024-04-06 12:22:46.379175: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties: +name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545 +pciBusID: 0000:1a:00.0 +totalMemory: 10.75GiB freeMemory: 10.44GiB +2024-04-06 12:22:46.379302: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0 +2024-04-06 12:22:47.558285: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix: +2024-04-06 12:22:47.558426: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0 +2024-04-06 12:22:47.558456: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N +2024-04-06 12:22:47.558572: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB 
memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5) +WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. +Instructions for updating: +Use the retry module or similar alternatives. +2024-04-06 12:23:00.735634: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. +2024-04-06 12:23:00.735773: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. +-----------build encoder: deeplab pre-trained----------- +after start block: (1, ?, ?, 64) +after block1: (1, ?, ?, 256) +after block2: (1, ?, ?, 512) +after block3: (1, ?, ?, 1024) +after block4: (1, ?, ?, 2048) +-----------build decoder----------- +after aspp block: (1, ?, ?, 4) +Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1 +step 0 +step 100 +The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/54250/img_files/ + + +197 image regions chopped +Chop SUEY! + +Segmenting tissue ... + +starting prediction using model: + /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1 + + + +reconstructing wsi map ... 
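The `saving region:` lines above list tile bounds of the form `<x1:x2 y1:y2>`. Judging from the logged ranges, the chopping step slides a 3000×3000 px window across the slide with a 1500 px (50 %) stride. A minimal sketch of that coordinate generation (my own illustration under those inferred parameters, not the pipeline's actual code):

```python
def tile_coords(width, height, tile=3000, stride=1500):
    """Generate '<x1:x2 y1:y2>' strings like the 'saving region:' log lines.

    Assumes 3000 px square tiles with a 1500 px stride, as inferred from
    the logged coordinate ranges; edge handling in the real pipeline may differ.
    """
    coords = []
    for y in range(0, height - tile + 1, stride):
        for x in range(0, width - tile + 1, stride):
            coords.append(f"<{x}:{x + tile} {y}:{y + tile}>")
    return coords
```

For a 6000×4500 px area this yields six overlapping tiles, starting with `<0:3000 0:3000>`.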
+
+ [... per-tile reconstruction progress omitted: <0 of 196> through <196 of 196> ...]
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/59280.svs
+
+chopping ...
+
+saving region:
+ [... saved-region coordinate list omitted: 3000×3000 px tiles at 1500 px stride ...]
2024-04-06 12:36:05.860738: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 12:36:05.974184: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 12:36:05.974306: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0 +2024-04-06 12:36:07.365358: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix: +2024-04-06 12:36:07.365495: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0 +2024-04-06 12:36:07.365521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N +2024-04-06 12:36:07.365629: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5) +WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. +Instructions for updating: +Use the retry module or similar alternatives. +2024-04-06 12:36:19.471292: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. +2024-04-06 12:36:19.471424: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 
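The repeated `Allocator (GPU_0_bfc) ran out of memory` warnings above are informational; the log itself notes they are not failures. If they ever did matter, the usual TF1-era knob is `allow_growth`, which makes TensorFlow claim GPU memory on demand instead of reserving most of it up front. A hedged config fragment; whether this pipeline's session accepts such a config is an assumption:

```python
import tensorflow as tf  # TF 1.x, matching the tf.contrib warnings in this log

# Assumption: the session-creating code can be passed a ConfigProto.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # grow GPU allocation on demand
# sess = tf.Session(config=config)
```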
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/59280/img_files/
+
+
+182 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ [... per-tile reconstruction progress omitted: <0 of 181> through <181 of 181> ...]
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57872.svs
+
+chopping ...
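The `Starting XML construction:` blocks in this log iterate annotation IDs 1 through 3 and report `binary_mask == [0 1]`, i.e. a 0/1 mask per class taken from a class-index map whose values are `[0 1 2 3]`. A toy sketch of that per-class masking step (function and variable names are mine, not the pipeline's):

```python
def binary_masks(class_map, class_ids=(1, 2, 3)):
    """Build a 0/1 mask per annotation ID from a 2-D class-index map.

    Mirrors the 'binary_mask == [0 1]' lines: each mask is 1 where the
    predicted class equals that annotation ID, 0 elsewhere.
    """
    masks = {}
    for cid in class_ids:
        masks[cid] = [[1 if v == cid else 0 for v in row] for row in class_map]
    return masks
```

Each mask would then be traced into XML annotation contours; that tracing step is not shown here.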
+
+saving region:
+ [... saved-region coordinate list omitted: 3000×3000 px tiles at 1500 px stride ...]
2024-04-06 12:51:09.566743: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 12:51:09.686288: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with
properties: +name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545 +pciBusID: 0000:1a:00.0 +totalMemory: 10.75GiB freeMemory: 10.44GiB +2024-04-06 12:51:09.686420: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0 +2024-04-06 12:51:10.919881: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix: +2024-04-06 12:51:10.920027: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0 +2024-04-06 12:51:10.920057: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N +2024-04-06 12:51:10.920172: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5) +WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. +Instructions for updating: +Use the retry module or similar alternatives. +2024-04-06 12:51:23.001270: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. +2024-04-06 12:51:23.001418: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57872/img_files/
+
+
+223 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ [... per-tile reconstruction progress omitted: <0 of 222> through <222 of 222> ...]
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/54269.svs
+
+chopping ...
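The `reconstructing wsi map ...` counters in this log correspond to pasting each predicted tile back into a slide-sized class map. A minimal sketch under the assumption that overlapping pixels are resolved by keeping the larger class index (the actual merge rule is not visible in the log):

```python
def stitch(tiles, wsi_shape):
    """Paste per-tile class maps into one full-slide map.

    `tiles` maps (x, y) tile origins to 2-D lists of class indices.
    Overlap handling (max of class indices) is an assumption for
    illustration, not necessarily what the pipeline does.
    """
    h, w = wsi_shape
    full = [[0] * w for _ in range(h)]
    for (x, y), patch in tiles.items():
        for dy, row in enumerate(patch):
            for dx, v in enumerate(row):
                full[y + dy][x + dx] = max(full[y + dy][x + dx], v)
    return full
```

With 50 % tile overlap, every interior pixel is covered by several tiles, which is why a merge rule is needed at all.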
+
+saving region:
+ [... saved-region coordinate list omitted: 3000×3000 px tiles at 1500 px stride ...]
2024-04-06 13:08:16.971048: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 13:08:17.082978: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 13:08:17.083115: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 13:08:18.222418: I
tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix: +2024-04-06 13:08:18.222558: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0 +2024-04-06 13:08:18.222586: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N +2024-04-06 13:08:18.222700: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5) +WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version. +Instructions for updating: +Use the retry module or similar alternatives. +2024-04-06 13:08:30.577403: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. +2024-04-06 13:08:30.577532: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available. 
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/54269/img_files/
+
+
+211 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 210> <1 of 210> … <209 of 210> <210 of 210>
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/59233.svs
+
+chopping ...
+
+saving region:
+ <46500:49500 6000:9000> <43500:46500 6000:9000> … <16500:19500 28500:31500> <15000:18000 28500:31500> <19500:22500 45000:48000> 2024-04-06 13:25:15.921913: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 13:25:16.038177: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 13:25:16.038311: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 13:25:17.362350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 13:25:17.362487: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 13:25:17.362514: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 13:25:17.362622: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 13:25:29.378656: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 13:25:29.378794: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+step 300
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/59233/img_files/
+
+
+307 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 306> <1 of 306> … <305 of 306> <306 of 306>
+
+Starting XML construction:
+[0 1 2 3]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57603.svs
+
+chopping ...
+
+saving region:
+ <19500:22500 4500:7500> <85500:88500 4500:7500> … <21000:24000 27000:30000> <16500:19500 28500:31500> 2024-04-06 13:48:40.993841: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 13:48:41.110235: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 13:48:41.110365: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 13:48:42.430130: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 13:48:42.430270: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 13:48:42.430298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 13:48:42.430405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 13:48:55.923045: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 13:48:55.923219: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+step 200
+step 300
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57603/img_files/
+
+
+382 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 381> <1 of 381> … <381 of 381> (per-tile reconstruction progress output condensed)
+
+Starting XML construction:
+[0 1 2]
+ working on: annotationID 1
+binary_mask == [0 1]
+ working on: annotationID 2
+binary_mask == [0 1]
+cleaning up
+
+opening: /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/57756.svs
+
+chopping ...
+
+saving region:
+ <43500:46500 4500:7500> <34500:37500 7500:10500> … (remaining 3000×3000 px region coordinates omitted; 167 regions total)
2024-04-06 14:10:46.788505: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
+2024-04-06 14:10:46.903320: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1344] Found device 0 with properties:
+name: NVIDIA GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
+pciBusID: 0000:1a:00.0
+totalMemory: 10.75GiB freeMemory: 10.44GiB
+2024-04-06 14:10:46.903470: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1423] Adding visible gpu devices: 0
+2024-04-06 14:10:48.181749: I tensorflow/core/common_runtime/gpu/gpu_device.cc:911] Device interconnect StreamExecutor with strength 1 edge matrix:
+2024-04-06 14:10:48.181912: I tensorflow/core/common_runtime/gpu/gpu_device.cc:917] 0
+2024-04-06 14:10:48.181942: I tensorflow/core/common_runtime/gpu/gpu_device.cc:930] 0: N
+2024-04-06 14:10:48.182060: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10096 MB memory) -> physical GPU (device: 0, name: NVIDIA GeForce RTX 2080 Ti, pci bus id: 0000:1a:00.0, compute capability: 7.5)
+WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/learn/python/learn/datasets/base.py:198: retry (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
+Instructions for updating:
+Use the retry module or similar alternatives.
+2024-04-06 14:11:00.224018: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB.
The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+2024-04-06 14:11:00.224160: W tensorflow/core/common_runtime/bfc_allocator.cc:219] Allocator (GPU_0_bfc) ran out of memory trying to allocate 3.38GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
+-----------build encoder: deeplab pre-trained-----------
+after start block: (1, ?, ?, 64)
+after block1: (1, ?, ?, 256)
+after block2: (1, ?, ?, 512)
+after block3: (1, ?, ?, 1024)
+after block4: (1, ?, ?, 2048)
+-----------build decoder-----------
+after aspp block: (1, ?, ?, 4)
+Restored model parameters from /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/model.ckpt-1
+step 0
+step 100
+The output files has been saved to /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/Predictions/57756/img_files/
+
+
+167 image regions chopped
+Chop SUEY!
+
+Segmenting tissue ...
+
+starting prediction using model:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/MODELS/0/HR/1
+
+
+
+reconstructing wsi map ...
+
+ <0 of 166> <1 of 166> … <166 of 166> (per-tile reconstruction progress output condensed)
+
+Starting XML construction:
+[0 2 3]
+ working on: annotationID 2
+binary_mask == [0 1]
+ working on: annotationID 3
+binary_mask == [0 1]
+cleaning up
+
+
+Please correct the xml annotations found in:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/Predicted_XMLs/
+
+then place them in:
+ /orange/pinaki.sarder/sdevarasetty/IFTA-Jeong-Running/H-AI-L/TxR01/TRAINING_data/0/
+
+and run [--option train]
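The "Starting XML construction" steps above turn each annotation ID's binary mask into polygon annotations written to the `Predicted_XMLs/` directory for manual correction. A generic sketch of serializing such polygons into an Aperio-style annotation XML with the standard library (a hypothetical schema for illustration; the actual H-AI-L writer and its XML layout may differ):

```python
import xml.etree.ElementTree as ET

def polygons_to_xml(annotations):
    """Build an Aperio-style annotation XML string.
    `annotations` maps an annotation ID (e.g. 2, 3 as in the log)
    to a list of polygons, each polygon a list of (x, y) vertices.
    Hypothetical schema sketch -- not the actual H-AI-L writer."""
    root = ET.Element("Annotations")
    for ann_id, polygons in annotations.items():
        ann = ET.SubElement(root, "Annotation", Id=str(ann_id))
        regions = ET.SubElement(ann, "Regions")
        for i, poly in enumerate(polygons, 1):
            region = ET.SubElement(regions, "Region", Id=str(i))
            verts = ET.SubElement(region, "Vertices")
            for x, y in poly:
                ET.SubElement(verts, "Vertex", X=str(x), Y=str(y))
    return ET.tostring(root, encoding="unicode")

# One triangular region under annotation ID 2:
xml_text = polygons_to_xml({2: [[(0, 0), (10, 0), (10, 10)]]})
```

Keeping one `Annotation` element per class ID is what lets the corrected XMLs be dropped back into `TRAINING_data/0/` and re-chopped into per-class masks for the next `--option train` round.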