Docker is software for creating and running containers. A container is a self-contained unit that bundles an application together with everything it needs to run. In this respect it resembles a virtual machine, but Docker goes one step further: instead of emulating an entire machine, it provides a standardized bridge between the operating system and the software installed in the container. As a result, containerized software runs on any system where Docker is installed. This frees users from the complexities and nightmares usually associated with installing dependencies and from the incompatibilities between different operating systems, settings, and the many other differences between users' computers.
First we need to clone the GitHub repository containing the model using:
git clone https://github.com/igemsoftware2019/iGemMarburg2019.git
cd iGemMarburg2019/AI
Now we build the Docker image using the included training.dockerfile shown below. Its purpose is to install all the required dependencies and run the train.sh script, which performs all the steps elaborated in the AI documentation file. The default number of training steps is 100 and can be changed via the NUM_STEPS parameter in the train.sh file. Images must be labeled (the label needs to be colony), have corresponding .xml files, and be put in their corresponding folders (either in <path to test images> or <path to train images>). Subsequently, the following command needs to be executed:
docker build -f training.dockerfile -t <image tag> .
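The expected host-side layout of the labeled data can be sketched as follows. The folder and file names here are illustrative placeholders, not prescribed by the repository; only the jpg/xml pairing and the train/test split matter:

```shell
# Hypothetical data layout on the host machine:
mkdir -p colony_data/train colony_data/test
# colony_data/train/plate_01.jpg   <- training photo
# colony_data/train/plate_01.xml   <- matching annotation with label "colony"
# colony_data/test/plate_07.jpg
# colony_data/test/plate_07.xml
```

These two folders are what gets mounted into the container later as <path to train images> and <path to test images>.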
# training.dockerfile
# Copyright 2019, iGEM Marburg 2019
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version. This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
FROM tensorflow/tensorflow:1.14.0-py3
RUN pip install --user Cython contextlib2 pillow lxml matplotlib pandas && pip install --user pycocotools
COPY models /tf/models
COPY train.sh /train.sh
RUN chmod +x /train.sh
ENV PYTHONPATH="${PYTHONPATH}:/tf/models/research:/tf/models/research/object_detection:/tf/models/research/slim"
RUN cd /tf/models/research && python setup.py build && python setup.py install
RUN cd /tf/models/research/slim && python setup.py build && python setup.py install
ENTRYPOINT /train.sh
#!/usr/bin/env bash
# train.sh
# Copyright 2019, iGEM Marburg 2019
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version. This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
mkdir /tf/trained_model
cd /tf/models/research/object_detection && python xml_to_csv.py
cd /tf/models/research && python generate_tfrecord.py \
--csv_input=object_detection/images/train_labels.csv \
--image_dir=object_detection/images/train \
--output_path=mscoco_train.record
cd /tf/models/research && python generate_tfrecord.py \
--csv_input=object_detection/images/test_labels.csv \
--image_dir=object_detection/images/test \
--output_path=mscoco_val.record
NUM_STEPS=${NUM_STEPS:-100}
cd /tf/models/research
python model_main.py \
--logtostderr \
--model_dir=/tf/trained_model \
--num_train_steps=${NUM_STEPS} \
--train_dir=object_detection/training/ \
--pipeline_config_path=object_detection/training/faster_rcnn_resnet101_coco.config
suffix=$(ls /tf/trained_model | grep ".index" | tail -1 | cut -d '.' -f 2 | cut -d '-' -f 2)
cd /tf/models/research/object_detection
python export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path=training/faster_rcnn_resnet101_coco.config \
--trained_checkpoint_prefix=/tf/trained_model/model.ckpt-${suffix} \
--output_directory=inference_graph
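The suffix line above recovers the step number of the newest checkpoint from a file name such as model.ckpt-100.index. On a single sample name (hypothetical, from a 100-step run) the pipeline behaves like this:

```shell
# Splitting "model.ckpt-100.index" on '.' gives fields: model / ckpt-100 / index.
# Field 2 is "ckpt-100"; splitting that on '-' and taking field 2 yields the step count.
name="model.ckpt-100.index"
suffix=$(echo "$name" | cut -d '.' -f 2 | cut -d '-' -f 2)
echo "$suffix"   # prints 100
```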
Subsequently, the image can be run as a container using the following command:
docker run \
--rm \
-v <path to train images>:/tf/models/research/object_detection/images/train \
-v <path to test images>:/tf/models/research/object_detection/images/test \
-v <output path of inference graph>:/tf/models/research/object_detection/inference_graph \
<image tag>
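Because train.sh initializes the step count with the shell default `${NUM_STEPS:-100}`, it can also be overridden at run time without editing the file, for example by adding `-e NUM_STEPS=500` to the docker run command above. The expansion itself works like this:

```shell
# ${VAR:-default} falls back to the default only when VAR is unset or empty.
unset NUM_STEPS
echo "${NUM_STEPS:-100}"   # prints 100 (the default)
NUM_STEPS=500
echo "${NUM_STEPS:-100}"   # prints 500 (e.g. set via docker run -e NUM_STEPS=500)
```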
Once running, the model will be trained on your local computer, and the inference graph will be written to <output path of inference graph>.
Once trained, a frozen inference graph is created in the corresponding folder. To use this with an arbitrary colony image, we need to put the image in <path of test images folder> and create a Docker image as before, this time using process.dockerfile, with the command below. It installs all the dependencies and runs the process.py file, which takes the frozen inference graph and applies it to the arbitrary colony image.
docker build -f process.dockerfile -t <image tag> .
# process.dockerfile
# Copyright 2019, iGEM Marburg 2019
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version. This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
FROM tensorflow/tensorflow:1.14.0-py3
RUN pip install --user Cython contextlib2 pillow lxml matplotlib pandas && pip install --user pycocotools
RUN mkdir /processed
COPY models /tf/models
COPY process.py /tf/models/research/object_detection/process.py
ENV PYTHONPATH="${PYTHONPATH}:/tf/models/research:/tf/models/research/object_detection:/tf/models/research/slim"
RUN cd /tf/models/research && python setup.py build && python setup.py install
RUN cd /tf/models/research/slim && python setup.py build && python setup.py install
WORKDIR /tf/models/research/object_detection
ENTRYPOINT ["python", "process.py"]
# process.py
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow.compat.v1 as tf
import zipfile
from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
tf.disable_v2_behavior()
# This is needed since the script is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
if StrictVersion(tf.__version__) < StrictVersion('1.12.0'):
    raise ImportError('Please upgrade your TensorFlow installation to v1.12.*.')
from utils import label_map_util
from utils import visualization_utils as vis_util
# What model to download.
MODEL_NAME = 'inference_graph'
#MODEL_FILE = MODEL_NAME + '.tar.gz'
#DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
#PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
PATH_TO_LABELS = 'training/mscoco_label_map.pbtxt'
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.io.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
category_index = label_map_util.create_category_index_from_labelmap(
    PATH_TO_LABELS, use_display_name=True)
def load_image_into_numpy_array(image):
    (im_width, im_height) = image.size
    return np.array(image.getdata()).reshape(
        (im_height, im_width, 3)).astype(np.uint8)
def absoluteFilePaths(directory):
    for dirpath, _, filenames in os.walk(directory):
        for f in filenames:
            yield os.path.abspath(os.path.join(dirpath, f))
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = absoluteFilePaths(PATH_TO_TEST_IMAGES_DIR)
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
def run_inference_for_single_image(image, graph):
    with graph.as_default():
        with tf.Session() as sess:
            # Get handles to input and output tensors
            ops = tf.get_default_graph().get_operations()
            all_tensor_names = {output.name for op in ops for output in op.outputs}
            tensor_dict = {}
            for key in [
                    'num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks'
            ]:
                tensor_name = key + ':0'
                if tensor_name in all_tensor_names:
                    tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
                        tensor_name)
            if 'detection_masks' in tensor_dict:
                # The following processing is only for single image
                detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
                detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
                # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
                real_num_detection = tf.cast(
                    tensor_dict['num_detections'][0], tf.int32)
                detection_boxes = tf.slice(detection_boxes, [0, 0], [
                    real_num_detection, -1])
                detection_masks = tf.slice(detection_masks, [0, 0, 0], [
                    real_num_detection, -1, -1])
                detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
                    detection_masks, detection_boxes, image.shape[1], image.shape[2])
                detection_masks_reframed = tf.cast(
                    tf.greater(detection_masks_reframed, 0.5), tf.uint8)
                # Follow the convention by adding back the batch dimension
                tensor_dict['detection_masks'] = tf.expand_dims(
                    detection_masks_reframed, 0)
            image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
            # Run inference
            output_dict = sess.run(tensor_dict,
                                   feed_dict={image_tensor: image})
            # all outputs are float32 numpy arrays, so convert types as appropriate
            output_dict['num_detections'] = int(output_dict['num_detections'][0])
            output_dict['detection_classes'] = output_dict[
                'detection_classes'][0].astype(np.int64)
            output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
            output_dict['detection_scores'] = output_dict['detection_scores'][0]
            if 'detection_masks' in output_dict:
                output_dict['detection_masks'] = output_dict['detection_masks'][0]
            return output_dict
c = 0
for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)
    # the array based representation of the image will be used later in order to prepare the
    # result image with boxes and labels on it.
    image_np = load_image_into_numpy_array(image)
    # Expand dimensions since the model expects images to have shape: [1, None, None, 3]
    image_np_expanded = np.expand_dims(image_np, axis=0)
    # Actual detection.
    output_dict = run_inference_for_single_image(
        image_np_expanded, detection_graph)
    # Visualization of the results of a detection.
    vis_util.visualize_boxes_and_labels_on_image_array(
        image_np,
        output_dict['detection_boxes'],
        output_dict['detection_classes'],
        output_dict['detection_scores'],
        category_index,
        instance_masks=output_dict.get('detection_masks'),
        use_normalized_coordinates=True,
        line_thickness=4)
    plt.figure(figsize=IMAGE_SIZE)
    plt.imshow(image_np)
    plt.savefig('/processed/{0}.jpg'.format(c))
    np.savetxt('/processed/boxes{0}.csv'.format(c),
               output_dict['detection_boxes'], delimiter=",")
    np.savetxt('/processed/scores{0}.csv'.format(c),
               output_dict['detection_scores'], delimiter=",")
    c += 1
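The boxes written to the boxes CSV files use the normalized [ymin, xmin, ymax, xmax] convention of the TensorFlow Object Detection API, with each value in [0, 1]. A minimal sketch for converting such boxes back to pixel coordinates (the function name, sample box, and image size here are illustrative assumptions, not part of the repository):

```python
import numpy as np

def boxes_to_pixels(boxes, im_width, im_height):
    # Boxes arrive as rows of [ymin, xmin, ymax, xmax], each value in [0, 1].
    ymin, xmin, ymax, xmax = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    # Scale y coordinates by image height and x coordinates by image width.
    return np.stack([ymin * im_height, xmin * im_width,
                     ymax * im_height, xmax * im_width], axis=1)

# One hypothetical normalized box on a hypothetical 1000x800 image:
boxes = np.array([[0.1, 0.2, 0.5, 0.6]])
pixels = boxes_to_pixels(boxes, im_width=1000, im_height=800)
# pixel-space [ymin, xmin, ymax, xmax] = [80, 200, 400, 600]
```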
Finally, the Docker image can be run with the command:
docker run \
--rm \
-v <path of inference graph>:/tf/models/research/object_detection/inference_graph \
-v <path of test images folder>:/tf/models/research/object_detection/test_images \
-v <output path>:/processed \
<image tag>
The processed images, together with the corresponding box and score CSV files, will be in the folder <output path>.