by @daveluo
In this Google Colab notebook and the accompanying Medium post, we will work through all the code and concepts of a complete workflow for automatically detecting and delineating building footprints (instance segmentation) from drone imagery with cutting-edge deep learning models.
All you'll need is a Google account, an internet connection, and a couple of hours to learn how to build the dataset and the model that learns to produce results like this:
This Colab notebook is our main learning resource - working interactively here is highly recommended!
Code is organized into modular sections, set up to install and import all required dependencies, and executable on either CPU or GPU runtimes (depending on the section). Links to load the files generated at each step are also included, so you can pick up and start from any section. Inline # comments (and references for further reading) are provided within code cells to explain steps or nuances in more detail as needed. Executing all code cells end-to-end takes under 1 hour on GPU.
The Medium post serves as a high-level conceptual walkthrough and maps directly to sections within this Colab notebook. The post works best as a quick overview with handy bookmarks into Colab, or viewed side by side with this notebook as a code-and-concept companion set.
This tutorial assumes you have a working knowledge of Python, data analysis with Pandas, and building training/validation/test sets for machine learning, plus a beginner practitioner's grasp of deep learning concepts, or the motivation to fill in any gaps by following the ample references linked throughout this post and notebook.
Note that the preprocessing section can be run on a CPU runtime:
Change in menu: Runtime > Change runtime type > Hardware Accelerator = None
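As a quick sanity check (not part of the original notebook), a small snippet can report which runtime type is active, under the assumption that the Colab VM exposes `nvidia-smi` on the PATH only when a GPU is attached:

```python
import shutil
import subprocess

def runtime_type():
    """Return 'gpu' if nvidia-smi is available (GPU runtime), else 'cpu'."""
    if shutil.which("nvidia-smi"):
        # List the attached GPU(s), e.g. "GPU 0: Tesla T4 (UUID: ...)"
        print(subprocess.run(["nvidia-smi", "-L"],
                             capture_output=True, text=True).stdout)
        return "gpu"
    print("No GPU detected - a CPU runtime is fine for the preprocessing section.")
    return "cpu"

print(runtime_type())
```

If this reports a CPU runtime but you are about to run the training sections, switch the hardware accelerator back to GPU via the same Runtime menu.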
Pip install
Here we install the required geodata processing packages, test that they import correctly in Colab, and create our output data directories.
!add-apt-repository ppa:ubuntugis/ubuntugis-unstable -y
!apt-get update
!apt-get install python-numpy gdal-bin libgdal-dev python3-rtree
!pip install rasterio
!pip install geopandas
!pip install descartes
!pip install solaris
!pip install rio-tiler
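Once the installs finish, a short sketch like the following can verify that the packages resolve and set up output folders. The directory names here are illustrative placeholders, not the notebook's actual paths, and `find_spec` is used so a missing package is reported rather than crashing the cell:

```python
import importlib.util
from pathlib import Path

# Packages pip-installed above; rio-tiler imports as rio_tiler.
packages = ["rasterio", "geopandas", "descartes", "solaris", "rio_tiler"]
for pkg in packages:
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'ok' if found else 'MISSING'}")

# Output directories for later preprocessing steps (illustrative names).
for d in ["data/tiles", "data/masks", "data/predictions"]:
    Path(d).mkdir(parents=True, exist_ok=True)
print(sorted(p.name for p in Path("data").iterdir()))
```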
(apt-get and pip installation output omitted)
Selecting previously unselected package libgif-dev. Preparing to unpack .../30-libgif-dev_5.1.4-2ubuntu0.1_amd64.deb ... Unpacking libgif-dev (5.1.4-2ubuntu0.1) ... Selecting previously unselected package libnetcdf-dev. Preparing to unpack .../31-libnetcdf-dev_1%3a4.6.0-2build1_amd64.deb ... Unpacking libnetcdf-dev (1:4.6.0-2build1) ... Selecting previously unselected package libhdf4-alt-dev. Preparing to unpack .../32-libhdf4-alt-dev_4.2.13-2_amd64.deb ... Unpacking libhdf4-alt-dev (4.2.13-2) ... Selecting previously unselected package libjson-c-dev:amd64. Preparing to unpack .../33-libjson-c-dev_0.12.1-1.3_amd64.deb ... Unpacking libjson-c-dev:amd64 (0.12.1-1.3) ... Selecting previously unselected package libkmlconvenience1:amd64. Preparing to unpack .../34-libkmlconvenience1_1.3.0-5_amd64.deb ... Unpacking libkmlconvenience1:amd64 (1.3.0-5) ... Selecting previously unselected package libkmlregionator1:amd64. Preparing to unpack .../35-libkmlregionator1_1.3.0-5_amd64.deb ... Unpacking libkmlregionator1:amd64 (1.3.0-5) ... Selecting previously unselected package libkmlxsd1:amd64. Preparing to unpack .../36-libkmlxsd1_1.3.0-5_amd64.deb ... Unpacking libkmlxsd1:amd64 (1.3.0-5) ... Selecting previously unselected package libminizip-dev:amd64. Preparing to unpack .../37-libminizip-dev_1.1-8build1_amd64.deb ... Unpacking libminizip-dev:amd64 (1.1-8build1) ... Selecting previously unselected package liburiparser-dev. Preparing to unpack .../38-liburiparser-dev_0.8.4-1_amd64.deb ... Unpacking liburiparser-dev (0.8.4-1) ... Selecting previously unselected package libkml-dev:amd64. Preparing to unpack .../39-libkml-dev_1.3.0-5_amd64.deb ... Unpacking libkml-dev:amd64 (1.3.0-5) ... Selecting previously unselected package libogdi-dev. Preparing to unpack .../40-libogdi-dev_4.1.0+ds-1~bionic2_amd64.deb ... Unpacking libogdi-dev (4.1.0+ds-1~bionic2) ... Selecting previously unselected package libopenjp2-7-dev. 
Preparing to unpack .../41-libopenjp2-7-dev_2.3.0-2build0.18.04.1_amd64.deb ... Unpacking libopenjp2-7-dev (2.3.0-2build0.18.04.1) ... Selecting previously unselected package libpoppler-dev:amd64. Preparing to unpack .../42-libpoppler-dev_0.62.0-2ubuntu2.10_amd64.deb ... Unpacking libpoppler-dev:amd64 (0.62.0-2ubuntu2.10) ... Selecting previously unselected package libpoppler-private-dev:amd64. Preparing to unpack .../43-libpoppler-private-dev_0.62.0-2ubuntu2.10_amd64.deb ... Unpacking libpoppler-private-dev:amd64 (0.62.0-2ubuntu2.10) ... Selecting previously unselected package libpq-dev. Preparing to unpack .../44-libpq-dev_10.10-0ubuntu0.18.04.1_amd64.deb ... Unpacking libpq-dev (10.10-0ubuntu0.18.04.1) ... Selecting previously unselected package libqhull-r7:amd64. Preparing to unpack .../45-libqhull-r7_2015.2-4_amd64.deb ... Unpacking libqhull-r7:amd64 (2015.2-4) ... Selecting previously unselected package libqhull-dev:amd64. Preparing to unpack .../46-libqhull-dev_2015.2-4_amd64.deb ... Unpacking libqhull-dev:amd64 (2015.2-4) ... Selecting previously unselected package libspatialite-dev:amd64. Preparing to unpack .../47-libspatialite-dev_4.3.0a-6~bionic2_amd64.deb ... Unpacking libspatialite-dev:amd64 (4.3.0a-6~bionic2) ... Selecting previously unselected package libwebp-dev:amd64. Preparing to unpack .../48-libwebp-dev_0.6.1-2_amd64.deb ... Unpacking libwebp-dev:amd64 (0.6.1-2) ... Selecting previously unselected package libxerces-c-dev. Preparing to unpack .../49-libxerces-c-dev_3.2.0+debian-2_amd64.deb ... Unpacking libxerces-c-dev (3.2.0+debian-2) ... Selecting previously unselected package libzstd-dev:amd64. Preparing to unpack .../50-libzstd-dev_1.3.3+dfsg-2ubuntu1.1_amd64.deb ... Unpacking libzstd-dev:amd64 (1.3.3+dfsg-2ubuntu1.1) ... Selecting previously unselected package unixodbc-dev:amd64. Preparing to unpack .../51-unixodbc-dev_2.3.4-1.1ubuntu3_amd64.deb ... Unpacking unixodbc-dev:amd64 (2.3.4-1.1ubuntu3) ... 
Selecting previously unselected package libgdal-dev. Preparing to unpack .../52-libgdal-dev_3.0.2+dfsg-1~bionic2_amd64.deb ... Unpacking libgdal-dev (3.0.2+dfsg-1~bionic2) ... Selecting previously unselected package libspatialindex4v5:amd64. Preparing to unpack .../53-libspatialindex4v5_1.8.5-5_amd64.deb ... Unpacking libspatialindex4v5:amd64 (1.8.5-5) ... Selecting previously unselected package libspatialindex-c4v5:amd64. Preparing to unpack .../54-libspatialindex-c4v5_1.8.5-5_amd64.deb ... Unpacking libspatialindex-c4v5:amd64 (1.8.5-5) ... Selecting previously unselected package proj-bin. Preparing to unpack .../55-proj-bin_6.2.1-1~bionic0_amd64.deb ... Unpacking proj-bin (6.2.1-1~bionic0) ... Selecting previously unselected package python3-pkg-resources. Preparing to unpack .../56-python3-pkg-resources_39.0.1-2_all.deb ... Unpacking python3-pkg-resources (39.0.1-2) ... Selecting previously unselected package libspatialindex-dev:amd64. Preparing to unpack .../57-libspatialindex-dev_1.8.5-5_amd64.deb ... Unpacking libspatialindex-dev:amd64 (1.8.5-5) ... Selecting previously unselected package python3-rtree. Preparing to unpack .../58-python3-rtree_0.8.3+ds-1_all.deb ... Unpacking python3-rtree (0.8.3+ds-1) ... Setting up libgeos-3.8.0:amd64 (3.8.0-1~bionic0) ... Setting up libcfitsio5:amd64 (3.430-2) ... Setting up libspatialindex4v5:amd64 (1.8.5-5) ... Setting up libxerces-c-dev (3.2.0+debian-2) ... Setting up unixodbc-dev:amd64 (2.3.4-1.1ubuntu3) ... Setting up libcfitsio-dev:amd64 (3.430-2) ... Setting up libpq-dev (10.10-0ubuntu0.18.04.1) ... Setting up libpoppler-dev:amd64 (0.62.0-2ubuntu2.10) ... Setting up libmysqlclient-dev (5.7.28-0ubuntu0.18.04.4) ... Setting up libgif-dev (5.1.4-2ubuntu0.1) ... Setting up libepsilon-dev:amd64 (0.9.2+dfsg-2) ... Setting up libopencv-core3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopenjp2-7-dev (2.3.0-2build0.18.04.1) ... Setting up python3-pkg-resources (39.0.1-2) ... 
Setting up libminizip-dev:amd64 (1.1-8build1) ... Setting up libkmlconvenience1:amd64 (1.3.0-5) ... Setting up libwebp-dev:amd64 (0.6.1-2) ... Setting up gdal-data (3.0.2+dfsg-1~bionic2) ... Setting up libcfitsio-doc (3.430-2) ... Setting up libkmlxsd1:amd64 (1.3.0-5) ... Setting up libgeos-c1v5:amd64 (3.8.0-1~bionic0) ... Setting up libblas3:amd64 (3.7.1-4ubuntu1) ... Setting up libspatialindex-c4v5:amd64 (1.8.5-5) ... Setting up libdapserver7v5:amd64 (3.19.1-2build1) ... Setting up libkmlregionator1:amd64 (1.3.0-5) ... Setting up libcharls-dev:amd64 (1.1.0+dfsg-2) ... Setting up libopencv-core-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libzstd-dev:amd64 (1.3.3+dfsg-2ubuntu1.1) ... Setting up libjson-c-dev:amd64 (0.12.1-1.3) ... Setting up libogdi4.1 (4.1.0+ds-1~bionic2) ... Setting up libopencv-ml3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libsqlite3-0:amd64 (3.22.0-1ubuntu0.2) ... Setting up libarpack2-dev:amd64 (3.5.0+real-2) ... Setting up libopencv-ml-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up liburiparser-dev (0.8.4-1) ... Setting up libopencv-imgproc3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-flann3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up default-libmysqlclient-dev:amd64 (1.0.4) ... Setting up libopencv-video3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libfyba-dev:amd64 (4.1.1-3) ... Setting up libnetcdf-dev (1:4.6.0-2build1) ... Setting up libqhull-r7:amd64 (2015.2-4) ... Setting up libopencv-imgproc-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up proj-data (6.2.1-1~bionic0) ... Setting up libfreexl-dev:amd64 (1.0.5-1) ... Setting up libopencv-photo3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libpoppler-private-dev:amd64 (0.62.0-2ubuntu2.10) ... Setting up libopencv-ts-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libogdi-dev (4.1.0+ds-1~bionic2) ... Setting up libopencv-photo-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... 
Setting up libdap-dev:amd64 (3.19.1-2build1) ... Setting up libproj15:amd64 (6.2.1-1~bionic0) ... Setting up libsqlite3-dev:amd64 (3.22.0-1ubuntu0.2) ... Setting up libblas-dev:amd64 (3.7.1-4ubuntu1) ... Setting up libopencv-flann-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libproj13:amd64 (5.2.0-1~bionic0) ... Setting up libgeos-dev (3.8.0-1~bionic0) ... Setting up libgeotiff5:amd64 (1.5.1-2~bionic1) ... Setting up libqhull-dev:amd64 (2015.2-4) ... Setting up libkml-dev:amd64 (1.3.0-5) ... Setting up libhdf4-alt-dev (4.2.13-2) ... Setting up libspatialindex-dev:amd64 (1.8.5-5) ... Setting up libsuperlu-dev:amd64 (5.2.1+dfsg1-3) ... Setting up proj-bin (6.2.1-1~bionic0) ... Setting up libopencv-shape3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libspatialite7:amd64 (4.3.0a-6~bionic2) ... Setting up libopencv-video-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libspatialite-dev:amd64 (4.3.0a-6~bionic2) ... Setting up libopencv-shape-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libarmadillo-dev (1:8.400.0+dfsg-2) ... Setting up libproj-dev:amd64 (6.2.1-1~bionic0) ... Setting up libgdal26 (3.0.2+dfsg-1~bionic2) ... Setting up python3-rtree (0.8.3+ds-1) ... Setting up libgdal20 (2.4.2+dfsg-1~bionic0) ... Setting up libopencv-imgcodecs3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up python3-gdal (3.0.2+dfsg-1~bionic2) ... Setting up libvtk6.3 (6.3.0+dfsg2-2build4~bionic3) ... Setting up gdal-bin (3.0.2+dfsg-1~bionic2) ... Setting up libopencv-videoio3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libgeotiff-dev:amd64 (1.5.1-2~bionic1) ... Setting up libgdal-dev (3.0.2+dfsg-1~bionic2) ... Setting up libopencv-imgcodecs-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-viz3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-superres3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-highgui3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... 
Setting up libopencv-videoio-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-viz-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-objdetect3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-highgui-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-features2d3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-superres-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-features2d-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-calib3d3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-stitching3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-calib3d-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-objdetect-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-videostab3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-stitching-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-contrib3.2:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-videostab-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-contrib-dev:amd64 (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv3.2-jni (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv3.2-java (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Setting up libopencv-dev (3.2.0+dfsg-4ubuntu0.1+bionic3) ... Processing triggers for man-db (2.8.3-2ubuntu0.1) ... Processing triggers for libc-bin (2.27-3ubuntu1) ... 
[pip output condensed] Installs rasterio 1.1.2 (with affine, snuggs, cligj, click-plugins), geopandas 0.6.2 (with fiona 1.8.13, pyproj 2.4.2, munch), descartes, and solaris 0.2.1 — which pulls in tensorflow 1.13.1, albumentations 0.4.3, imgaug 0.2.6, rtree 0.9.3, pyyaml 5.2, rio-cogeo 1.1.8, supermercado, mercantile, tqdm, requests, and urllib3 1.25.7, replacing several preinstalled Colab versions.

Note: pip reports dependency-conflict ERRORs for the preinstalled kaggle, google-colab, and datascience packages (urllib3, requests, and folium version pins); the notebook proceeds regardless.
Collecting rio-tiler ... (pip output trimmed) ...
Successfully built rio-tiler rio-toa
Installing collected packages: rio-mucho, rio-toa, rio-tiler
Successfully installed rio-mucho-1.0.0 rio-tiler-1.3.1 rio-toa-0.3.0
# for bleeding edge version of solaris:
# !pip install git+https://github.com/CosmiQ/solaris/@dev
# restarts runtime
import os
os._exit(00)
import solaris as sol
import numpy as np
import geopandas as gpd
from matplotlib import pyplot as plt
from pathlib import Path
import rasterio
import os
data_dir = Path('data')
data_dir.mkdir(exist_ok=True)
img_path = data_dir/'images-256'
mask_path = data_dir/'masks-256'
img_path.mkdir(exist_ok=True)
mask_path.mkdir(exist_ok=True)
(Importing solaris emits several numpy FutureWarnings from tensorflow/python/framework/dtypes.py about "Passing (type, 1) or '1type' as a synonym of type is deprecated" - these are harmless and can be ignored.)
For this tutorial, we'll use the Tanzania Open AI Challenge dataset of 7-cm resolution drone imagery and building footprint labels over Unguja Island, Zanzibar.
Much thanks to the following organizations for producing, openly licensing, and making this invaluable dataset accessible:
For simplicity of demonstration, we'll create training and validation data from a single drone image (in cloud-optimized geoTIFF format) and its accompanying ground-truth labels of manually traced building outlines (in GeoJSON format).
We'll work with imagery and labels from image grid znz001
which covers the northern tip of Zanzibar's main island of Unguja. Here is a browsable preview of the drone imagery with its building footprint labels, organized per the Spatio-Temporal Asset Catalog (STAC) label extension and visualized in an instance of STAC browser:
After previewing the labeled data and imagery in the STAC browser, let's copy the direct download URLs from its Assets tab and test loading them with our geo-processing tools.
tif_url = 'http://oin-hotosm.s3.amazonaws.com/5afeda152b6a08001185f11a/0/5afeda152b6a08001185f11b.tif'
geojson_url = 'https://www.dropbox.com/sh/ct3s1x2a846x3yl/AAARCAOqhcRdoU7ULOb9GJl9a/grid_001.geojson?dl=1'
rasterio.open(tif_url).meta
{'count': 3, 'crs': CRS.from_epsg(32737), 'driver': 'GTiff', 'dtype': 'uint8', 'height': 34306, 'nodata': None, 'transform': Affine(0.0774800032377243, 0.0, 531847.0, 0.0, -0.0774800032377243, 9367848.0), 'width': 37113}
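As a sanity check on the metadata above, the affine transform encodes the ground resolution: the `a` and `e` coefficients are the pixel sizes in CRS units, and EPSG:32737 is UTM zone 37S, so those units are meters. A minimal sketch, with the values copied from the printed metadata:

```python
# Ground-resolution check using values copied from the printed metadata.
# EPSG:32737 is UTM zone 37S, so the transform's units are meters.
a, e = 0.0774800032377243, -0.0774800032377243  # pixel width / height coefficients
width, height = 37113, 34306                    # image size in pixels
pixel_cm = a * 100                              # centimeters per pixel
extent_km_x = width * a / 1000                  # east-west extent in km
extent_km_y = height * abs(e) / 1000            # north-south extent in km
print(f'{pixel_cm:.1f} cm/pixel, ~{extent_km_x:.1f} x {extent_km_y:.1f} km')
```

This confirms the ~7.7 cm resolution and a roughly 2.9 x 2.7 km footprint for znz001 (the `e` coefficient is negative because row indices increase southward).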
# TODO: bug with rasterio/gdal not loading https urls, workaround by using http: urls or download file locally
# !export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt
!wget -O tmp.tif {tif_url}
--2020-01-16 20:59:19-- http://oin-hotosm.s3.amazonaws.com/5afeda152b6a08001185f11a/0/5afeda152b6a08001185f11b.tif Length: 204496818 (195M) [image/tiff] Saving to: ‘tmp.tif’ ... 2020-01-16 20:59:33 (15.2 MB/s) - ‘tmp.tif’ saved [204496818/204496818]
rasterio.open('tmp.tif').meta
{'count': 3, 'crs': CRS.from_epsg(32737), 'driver': 'GTiff', 'dtype': 'uint8', 'height': 34306, 'nodata': None, 'transform': Affine(0.0774800032377243, 0.0, 531847.0, 0.0, -0.0774800032377243, 9367848.0), 'width': 37113}
# load geojson for znz001 labels
label_df = gpd.read_file(geojson_url)
label_df = label_df[label_df['geometry'].notna()] # drop rows with missing geometry
label_df.plot(figsize=(10,10))
<matplotlib.axes._subplots.AxesSubplot at 0x7fafbe0e7160>
Since we are working with a single image, we need to delineate what sub-areas of the image and labels should be used as training versus validation data for model training.
Using geojson.io, we'll draw our trn and val Areas of Interest (AOI) polygons in GeoJSON format and add dataset:trn or dataset:val to each polygon's properties.
The finished polygons look something like this in geojson.io:
And here is the exact GeoJSON file I created viewable in geojson.io: http://geojson.io/#id=gist:daveluo/8e192744b2aa377db162bc34e0e0ae64&map=15/-5.7314/39.3026
protip: in geojson.io, you can display the drone imagery as a base layer via the menu: Meta > Add map layer > Layer URL: https://tiles.openaerialmap.org/5b100d4b2b6a08001185f344/0/5b100d4b2b6a08001185f345/%7Bz%7D/%7Bx%7D/%7By%7D.png
For demonstration of later steps, I intentionally drew a more complex shape for each AOI, but we could have simply drawn adjacent rectangles instead. In more complex cases, we could draw AOIs over smaller sub-areas that don't encompass the entire image: for instance, to create training data only for specific types of environments (like dense urban areas or sparsely populated rural areas), or to avoid using poorly labeled areas in our training data.
Drawing the AOIs as GeoJSON polygons in this way gives us the flexibility to choose exactly what and where our training and validation data represents.
In this step, we'll use supermercado to generate square polygons representing all the slippy map tiles at a specified zoom level that overlap the geojson training and validation AOIs we created above.
For this tutorial, we'll work with slippy map tiles of tile_size=256 and zoom_level=19, which yields a manageable number of tiles and satisfactory segmentation results without too much preprocessing or model training time.
You could also try a higher or lower zoom_level, which would generate more tiles at higher resolution or fewer tiles at lower resolution, respectively.
Here is an example of different tile zoom_levels over the same area of Zanzibar (see the round, white satellite TV dish for a consistently sized visual reference):
Learn more about slippy maps here, here, and here.
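The slippy map scheme itself is just a bit of spherical-mercator arithmetic. As a dependency-free sketch (the `tile_bounds` helper is illustrative, not part of any library here), converting a tile's (x, y, z) index to its lon/lat bounds reproduces the same numbers that `supermercado`/`mercantile` emit:

```python
import math

def tile_bounds(x, y, z):
    """Return (west, south, east, north) lon/lat bounds of slippy map tile (x, y, z)."""
    def lon(x): return x / 2**z * 360.0 - 180.0
    def lat(y): return math.degrees(math.atan(math.sinh(math.pi * (1 - 2 * y / 2**z))))
    # y increases southward, so row y+1 gives the southern edge
    return lon(x), lat(y + 1), lon(x + 1), lat(y)

# Tile (319380, 270495, 19), which reappears later in this notebook:
west, south, east, north = tile_bounds(319380, 270495, 19)
```

In practice mercantile's `bounds()` does exactly this, but seeing the formula makes it clear why each zoom level halves the tile's ground extent.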
Then we'll merge our supermercado-generated slippy map tile polygons into one GeoDataFrame with geopandas. We'll also check for and reconcile any overlapping train and validation tiles, which would otherwise throw off how we evaluate our progress with model training.
# download pre-made AOI geojson file:
!wget -O aoi.geojson https://www.dropbox.com/s/ojyjvvoer5guadr/znz001_trnval2.geojson?dl=1
--2020-01-16 21:00:07-- https://www.dropbox.com/s/ojyjvvoer5guadr/znz001_trnval2.geojson?dl=1 Length: 1085 (1.1K) [application/binary] Saving to: ‘aoi.geojson’ ... 2020-01-16 21:00:08 (159 MB/s) - ‘aoi.geojson’ saved [1085/1085]
tile_size = 256
zoom_level = 19
aoi_df = gpd.read_file('aoi.geojson')
aoi_df.plot()
<matplotlib.axes._subplots.AxesSubplot at 0x7fafbda30eb8>
aoi_df[aoi_df['dataset']=='trn']['geometry'].to_file('trn_aoi.geojson', driver='GeoJSON')
aoi_df[aoi_df['dataset']=='val']['geometry'].to_file('val_aoi.geojson', driver='GeoJSON')
# see https://github.com/mapbox/supermercado#supermercado-burn
!cat trn_aoi.geojson | supermercado burn {zoom_level} | mercantile shapes | fio collect > trn_aoi_z{zoom_level}tiles.geojson
!cat val_aoi.geojson | supermercado burn {zoom_level} | mercantile shapes | fio collect > val_aoi_z{zoom_level}tiles.geojson
trn_tiles = gpd.read_file(f'trn_aoi_z{zoom_level}tiles.geojson')
val_tiles = gpd.read_file(f'val_aoi_z{zoom_level}tiles.geojson')
trn_tiles['dataset'] = 'trn'
val_tiles['dataset'] = 'val'
# see if there's overlapping tiles between trn and val
fig, ax = plt.subplots(figsize=(10,10))
trn_tiles.plot(ax=ax, color='grey', alpha=0.5, edgecolor='red')
val_tiles.plot(ax=ax, color='grey', alpha=0.5, edgecolor='blue')
<matplotlib.axes._subplots.AxesSubplot at 0x7fafbda54e48>
# merge into one gdf to keep all trn tiles while dropping overlapping/duplicate val tiles
import pandas as pd
tiles_gdf = gpd.GeoDataFrame(pd.concat([trn_tiles, val_tiles], ignore_index=True), crs=trn_tiles.crs)
tiles_gdf.drop_duplicates(subset=['id'], inplace=True)
# check that there's no more overlapping tiles between trn and val
fig, ax = plt.subplots(figsize=(10,10))
tiles_gdf[tiles_gdf['dataset'] == 'trn'].plot(ax=ax, color='grey', edgecolor='red', alpha=0.5)
tiles_gdf[tiles_gdf['dataset'] == 'val'].plot(ax=ax, color='grey', edgecolor='blue', alpha=0.5)
<matplotlib.axes._subplots.AxesSubplot at 0x7fafbd8ebe48>
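The reason the overlap disappears is that `drop_duplicates` keeps only the first occurrence of each `id`, and the trn tiles come first in the concat, so any tile claimed by both sets stays in trn. A toy illustration of that behavior:

```python
import pandas as pd

# Toy version of the merge above: tile 'b' appears in both sets;
# drop_duplicates keeps the first occurrence, so 'b' stays a 'trn' tile.
trn = pd.DataFrame({'id': ['a', 'b'], 'dataset': ['trn', 'trn']})
val = pd.DataFrame({'id': ['b', 'c'], 'dataset': ['val', 'val']})
merged = pd.concat([trn, val], ignore_index=True).drop_duplicates(subset=['id'])
```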
tiles_gdf.head()
| | id | title | geometry | dataset |
|---|---|---|---|---|
0 | (319377, 270487, 19) | XYZ tile (319377, 270487, 19) | POLYGON ((39.29878 -5.71985, 39.29878 -5.71916... | trn |
1 | (319378, 270487, 19) | XYZ tile (319378, 270487, 19) | POLYGON ((39.29947 -5.71985, 39.29947 -5.71916... | trn |
2 | (319379, 270487, 19) | XYZ tile (319379, 270487, 19) | POLYGON ((39.30016 -5.71985, 39.30016 -5.71916... | trn |
3 | (319380, 270487, 19) | XYZ tile (319380, 270487, 19) | POLYGON ((39.30084 -5.71985, 39.30084 -5.71916... | trn |
4 | (319381, 270487, 19) | XYZ tile (319381, 270487, 19) | POLYGON ((39.30153 -5.71985, 39.30153 -5.71916... | trn |
# convert 'id' string to list of ints for z,x,y
def reformat_xyz(tile_gdf):
tile_gdf['xyz'] = tile_gdf.id.apply(lambda x: x.lstrip('(,)').rstrip('(,)').split(','))
tile_gdf['xyz'] = [[int(q) for q in p] for p in tile_gdf['xyz']]
return tile_gdf
tiles_gdf = reformat_xyz(tiles_gdf)
tiles_gdf.head()
| | id | title | geometry | dataset | xyz |
|---|---|---|---|---|---|
0 | (319377, 270487, 19) | XYZ tile (319377, 270487, 19) | POLYGON ((39.29878 -5.71985, 39.29878 -5.71916... | trn | [319377, 270487, 19] |
1 | (319378, 270487, 19) | XYZ tile (319378, 270487, 19) | POLYGON ((39.29947 -5.71985, 39.29947 -5.71916... | trn | [319378, 270487, 19] |
2 | (319379, 270487, 19) | XYZ tile (319379, 270487, 19) | POLYGON ((39.30016 -5.71985, 39.30016 -5.71916... | trn | [319379, 270487, 19] |
3 | (319380, 270487, 19) | XYZ tile (319380, 270487, 19) | POLYGON ((39.30084 -5.71985, 39.30084 -5.71916... | trn | [319380, 270487, 19] |
4 | (319381, 270487, 19) | XYZ tile (319381, 270487, 19) | POLYGON ((39.30153 -5.71985, 39.30153 -5.71916... | trn | [319381, 270487, 19] |
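The string munging inside reformat_xyz is easier to see on a single value: lstrip/rstrip peel off the parentheses, the split produces string fragments (with leading spaces), and int() casts each one:

```python
# What reformat_xyz does to one 'id' string: strip the parentheses, split on
# commas, and cast to ints (int() tolerates the leading spaces).
raw_id = '(319377, 270487, 19)'
xyz = [int(q) for q in raw_id.lstrip('(,)').rstrip('(,)').split(',')]
```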
Now we'll use rio-tiler and the slippy map tile polygons generated by supermercado to test load a single 256x256 pixel tile from our znz001 COG image file. We will also load the znz001 geoJSON labels into a geopandas GeoDataFrame and crop the building geometries to only those that intersect the bounds of the tile image.
Here is a great intro to COGs, rio-tiler, and exciting developments in the cloud-native geospatial toolbox by Vincent Sarago of Development Seed: https://medium.com/devseed/cog-talk-part-1-whats-new-941facbcd3d1
We'll then create our corresponding 3-channel RGB mask by passing these cropped geometries to solaris' df_to_px_mask function. A pixel value of 255 in the generated mask marks, per channel: building footprint (red), building boundary (green), or close contact between adjacent buildings (blue).
from rio_tiler import main as rt_main
# import mercantile
from rasterio.transform import from_bounds
from shapely.geometry import Polygon
from shapely.ops import cascaded_union
idx = 220
tiles_gdf.iloc[idx]['xyz']
[319380, 270495, 19]
tile, mask = rt_main.tile(tif_url, *tiles_gdf.iloc[idx]['xyz'], tilesize=tile_size)
plt.imshow(np.moveaxis(tile,0,2))
<matplotlib.image.AxesImage at 0x7fafb671e2b0>
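A note on the np.moveaxis call above: rio-tiler returns tiles channels-first, in (bands, rows, cols) order, while matplotlib's imshow expects channels-last, (rows, cols, bands). A minimal sketch with a stand-in array:

```python
import numpy as np

# rio-tiler returns tiles channels-first (bands, rows, cols), but matplotlib
# expects channels-last (rows, cols, bands) - hence np.moveaxis(tile, 0, 2).
chw = np.zeros((3, 256, 256), dtype=np.uint8)  # stand-in for a rio-tiler tile
hwc = np.moveaxis(chw, 0, 2)
```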
# redisplay our labeled geojson file
label_df.plot(figsize=(10,10))
<matplotlib.axes._subplots.AxesSubplot at 0x7fafbd6d8c50>
# get the geometries from the geodataframe
all_polys = label_df.geometry
# preemptively fix and merge any invalid or overlapping geoms that would otherwise throw errors during the rasterize step.
# TODO: probably a better way to do this
# https://gis.stackexchange.com/questions/271733/geopandas-dissolve-overlapping-polygons
# https://nbviewer.jupyter.org/gist/rutgerhofste/6e7c6569616c2550568b9ce9cb4716a3
def explode(gdf):
"""
Will explode the geodataframe's muti-part geometries into single
geometries. Each row containing a multi-part geometry will be split into
multiple rows with single geometries, thereby increasing the vertical size
of the geodataframe. The index of the input geodataframe is no longer
unique and is replaced with a multi-index.
The output geodataframe has an index based on two columns (multi-index)
i.e. 'level_0' (index of input geodataframe) and 'level_1' which is a new
zero-based index for each single part geometry per multi-part geometry
Args:
gdf (gpd.GeoDataFrame) : input geodataframe with multi-geometries
Returns:
gdf (gpd.GeoDataFrame) : exploded geodataframe with each single
geometry as a separate entry in the
geodataframe. The GeoDataFrame has a multi-
index set to columns level_0 and level_1
"""
gs = gdf.explode()
gdf2 = gs.reset_index().rename(columns={0: 'geometry'})
gdf_out = gdf2.merge(gdf.drop('geometry', axis=1), left_on='level_0', right_index=True)
gdf_out = gdf_out.set_index(['level_0', 'level_1']).set_geometry('geometry')
gdf_out.crs = gdf.crs
return gdf_out
def cleanup_invalid_geoms(all_polys):
all_polys_merged = gpd.GeoDataFrame()
all_polys_merged['geometry'] = gpd.GeoSeries(cascaded_union([p.buffer(0) for p in all_polys]))
gdf_out = explode(all_polys_merged)
gdf_out = gdf_out.reset_index()
gdf_out.drop(columns=['level_0','level_1'], inplace=True)
all_polys = gdf_out['geometry']
return all_polys
all_polys = cleanup_invalid_geoms(all_polys)
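To see why the buffer(0) trick and the union step matter, here is a small shapely sketch (using unary_union, the current name for the cascaded_union used above): a self-intersecting "bowtie" polygon is the classic invalid geometry that rasterization chokes on, and overlapping polygons get dissolved into one.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union  # current name for cascaded_union

# A self-intersecting "bowtie" - the kind of invalid geometry that would
# otherwise error out during the rasterize step; buffer(0) rebuilds it as valid.
bowtie = Polygon([(0, 0), (2, 2), (2, 0), (0, 2)])
fixed = bowtie.buffer(0)

# unary_union dissolves overlapping polygons into a single geometry:
# two 1x1 squares overlapping by half merge into one shape of area 1.5.
a = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
b = Polygon([(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)])
merged = unary_union([a, b])
```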
# get the same tile polygon as our tile image above
tile_poly = tiles_gdf.iloc[idx]['geometry']
print(tile_poly.bounds)
tile_poly
(39.30084228515625, -5.7253114476101485, 39.30152893066406, -5.724628226958752)
# get affine transformation matrix for this tile using rasterio.transform.from_bounds: https://rasterio.readthedocs.io/en/stable/api/rasterio.transform.html#rasterio.transform.from_bounds
tfm = from_bounds(*tile_poly.bounds, tile_size, tile_size)
tfm
Affine(2.682209014892578e-06, 0.0, 39.30084228515625, 0.0, -2.6688306695166197e-06, -5.724628226958752)
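The transform coefficients can be sanity-checked by hand: `a` is just (east - west) / tile_size in degrees per pixel, and `e` is -(north - south) / tile_size, using the tile bounds printed above:

```python
# The Affine coefficients follow directly from the tile bounds printed above:
# a = (east - west) / tile_size, e = -(north - south) / tile_size.
west, south, east, north = (39.30084228515625, -5.7253114476101485,
                            39.30152893066406, -5.724628226958752)
px_width = (east - west) / 256
px_height = (north - south) / 256
```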
# crop znz001 geometries to what overlaps our tile polygon bounds
cropped_polys = [poly for poly in all_polys if poly.intersects(tile_poly)]
cropped_polys_gdf = gpd.GeoDataFrame(geometry=cropped_polys, crs=4326)
cropped_polys_gdf.plot()
<matplotlib.axes._subplots.AxesSubplot at 0x7fafb66f20f0>
# burn a footprint/boundary/contact 3-channel mask with solaris: https://solaris.readthedocs.io/en/latest/tutorials/notebooks/api_masks_tutorial.html
fbc_mask = sol.vector.mask.df_to_px_mask(df=cropped_polys_gdf,
channels=['footprint', 'boundary', 'contact'],
affine_obj=tfm, shape=(tile_size,tile_size),
boundary_width=5, boundary_type='inner', contact_spacing=5, meters=True)
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(10, 5))
ax1.imshow(np.moveaxis(tile,0,2))
ax2.imshow(fbc_mask)
<matplotlib.image.AxesImage at 0x7fafb6629780>
fig, (ax1, ax2, ax3) = plt.subplots(1,3,figsize=(10, 5))
ax1.imshow(fbc_mask[:,:,0])
ax2.imshow(fbc_mask[:,:,1])
ax3.imshow(fbc_mask[:,:,2])
<matplotlib.image.AxesImage at 0x7fafb657bc50>
Now that we've successfully loaded one tile image from COG with rio-tiler and created its 3-channel RGB mask with solaris, let's generate our full training and validation datasets.
We'll write some functions and loops to run through all of our trn and val tiles at zoom_level=19 and save them as lossless png files in the appropriate folders, using a filename schema of {save_path}/{prefix}{z}_{x}_{y} so we can easily identify and geolocate the tile each file represents.
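Because the schema embeds z, x, and y, any saved tile can be geolocated from its name alone. A small sketch of recovering the indices (the example filename follows the schema; the exact tile ids saved will depend on your AOIs):

```python
# Recovering z, x, y from a saved filename under the {prefix}{z}_{x}_{y} schema
# (illustrative filename, following the convention used in this notebook):
fname = 'znz001trn_19_319380_270495.png'
prefix, z, x, y = fname.rsplit('.', 1)[0].split('_')
z, x, y = int(z), int(x), int(y)
```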
import skimage
from tqdm import tqdm
def save_tile_img(tif_url, xyz, tile_size, save_path='', prefix='', display=False):
x,y,z = xyz
tile, mask = rt_main.tile(tif_url, x,y,z, tilesize=tile_size)
if display:
plt.imshow(np.moveaxis(tile,0,2))
plt.show()
skimage.io.imsave(f'{save_path}/{prefix}{z}_{x}_{y}.png',np.moveaxis(tile,0,2), check_contrast=False)
def save_tile_mask(labels_poly, tile_poly, xyz, tile_size, save_path='', prefix='', display=False):
x,y,z = xyz
tfm = from_bounds(*tile_poly.bounds, tile_size, tile_size)
cropped_polys = [poly for poly in labels_poly if poly.intersects(tile_poly)]
cropped_polys_gdf = gpd.GeoDataFrame(geometry=cropped_polys, crs=4326)
fbc_mask = sol.vector.mask.df_to_px_mask(df=cropped_polys_gdf,
channels=['footprint', 'boundary', 'contact'],
affine_obj=tfm, shape=(tile_size,tile_size),
boundary_width=5, boundary_type='inner', contact_spacing=5, meters=True)
if display: plt.imshow(fbc_mask); plt.show()
skimage.io.imsave(f'{save_path}/{prefix}{z}_{x}_{y}_mask.png',fbc_mask, check_contrast=False)
tiles_gdf[tiles_gdf['dataset'] == 'trn'].shape, tiles_gdf[tiles_gdf['dataset'] == 'val'].shape
((809, 5), (452, 5))
# we'll load our COG locally but could also load directly from url which is slower and subject to potentially more i/o issues
# TODO: try loading from url and catch i/o exceptions
# TODO: multithread/multiprocess this? Took ~3.5 mins to load and save 1261 image tiles on local COG file loading
for idx, tile in tqdm(tiles_gdf.iterrows()):
dataset = tile['dataset']
save_tile_img('tmp.tif', tile['xyz'], tile_size, save_path=img_path, prefix=f'znz001{dataset}_', display=False)
1261it [02:21, 8.91it/s]
# TODO: multiprocess this? Took ~3 mins to burn and save 1261 masks
for idx, tile in tqdm(tiles_gdf.iterrows()):
dataset = tile['dataset']
tile_poly = tile['geometry']
save_tile_mask(all_polys, tile_poly, tile['xyz'], tile_size, save_path=mask_path,prefix=f'znz001{dataset}_', display=False)
# check that tile images and masks saved correctly
start_idx, end_idx = 200,205
for i,j in zip(sorted(img_path.iterdir())[start_idx:end_idx], sorted(mask_path.iterdir())[start_idx:end_idx]):
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(10,5))
ax1.imshow(skimage.io.imread(i))
ax2.imshow(skimage.io.imread(j))
plt.show()
# compress and download
!tar -czf znz001trn.tar.gz data
Colab does not persistently store files created in its runtimes for more than 8-12 hours (or less, depending on inactivity or overall demand on the system), so we'll transfer or download the files we create somewhere else. One option is to copy the tarball to Google Drive with the !cp command: uncomment and run the next two cells, then follow the instructions to authorize access to your Google Drive.
# from google.colab import drive
# drive.mount('/content/drive')
# copy training data compressed tarball to root of your GDrive
# !cp znz001trn.tar.gz /content/drive/My\ Drive/
As our deep learning framework and library of tools, we'll use the excellent fastai library built on top of PyTorch.
For more info:
Let's download, install, and set up the fastai v1 library (1.0.60 as of this run). And if we're not already on it, let's switch Colab to a GPU runtime (this starts a fresh environment and removes locally stored files, so you will have to re-download and untar the training dataset created in the steps above):
SWITCH TO GPU RUNTIME: Menu > Runtime > Change runtime type > Hardware Accelerator = GPU
Colab's free GPU may be a Tesla K80, T4, or another model depending on availability. See the ===Hardware=== section of the show_install() output below for the GPU type and available GPU memory, which affect batch size and training time. For any of these GPUs, a batch size of bs=16 at size=256 should train at under 2 minutes per epoch without out-of-memory errors; if one does come up, lower bs to 8.
!curl https://course.fast.ai/setup/colab | bash
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 322 100 322 0 0 423 0 --:--:-- --:--:-- --:--:-- 422 Updating fastai... Done.
from fastai.vision import *
from fastai.callbacks import *
from fastai.utils.collect_env import *
show_install(True)
```text
=== Software ===
python        : 3.6.9
fastai        : 1.0.60
fastprogress  : 0.2.2
torch         : 1.3.1
nvidia driver : 418.67
torch cuda    : 10.1.243 / is available
torch cudnn   : 7603 / is enabled

=== Hardware ===
nvidia gpus   : 1
torch devices : 1
  - gpu0      : 15079MB | Tesla T4

=== Environment ===
platform      : Linux-4.19.80+-x86_64-with-Ubuntu-18.04-bionic
distro        : #1 SMP Tue Oct 29 21:03:15 PDT 2019
conda env     : Unknown
python        : /usr/bin/python3
sys.path      : /env/python
/usr/lib/python36.zip
/usr/lib/python3.6
/usr/lib/python3.6/lib-dynload
/usr/local/lib/python3.6/dist-packages
/usr/lib/python3/dist-packages
/usr/local/lib/python3.6/dist-packages/IPython/extensions
/root/.ipython

Thu Jan 16 21:12:51 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   44C    P8    10W /  70W |     10MiB / 15079MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
```
Now we'll set up our training dataset of tile images and masks created above to load correctly into fastai for training and validation.
The code in this step tracks closely with that of fastai course's lesson3-camvid so please refer to that lesson video and notebook for more detailed and excellent explanation by Jeremy Howard about the code and fastai's Data Block API.
The main departure from the camvid lesson notebook is the use of filename string parsing to determine which image and mask files comprise the validation data.
We'll also subclass SegmentationLabelList to alter the behavior of open_mask (and the PIL.Image handling underlying it) so that the 3-channel target masks open as RGB images (convert_mode='RGB') instead of the default 1-channel greyscale images (convert_mode='L').
And we'll visually confirm that the image files and the channels of their respective target masks are loaded and paired correctly, using a display function show_3ch.
# if not already present in file storage, download and extract the training/validation dataset created in above sections
!wget -O znz001trn.tar.gz https://www.dropbox.com/s/2a2ikf7m265davv/znz001trn.tar.gz?dl=1
!tar -xf znz001trn.tar.gz
--2020-01-16 21:13:00-- https://www.dropbox.com/s/2a2ikf7m265davv/znz001trn.tar.gz?dl=1 Length: 107963420 (103M) [application/binary] Saving to: ‘znz001trn.tar.gz’ ... 2020-01-16 21:13:21 (5.28 MB/s) - ‘znz001trn.tar.gz’ saved [107963420/107963420]
path = Path('data')
path.ls()
[PosixPath('data/masks-256'), PosixPath('data/images-256')]
path_lbl = path/'masks-256'
path_img = path/'images-256'
fnames = get_image_files(path_img)
lbl_names = get_image_files(path_lbl)
print(len(fnames), len(lbl_names))
fnames[:3], lbl_names[:3]
1261 1261
([PosixPath('data/images-256/znz001trn_19_319389_270493.png'), PosixPath('data/images-256/znz001trn_19_319396_270509.png'), PosixPath('data/images-256/znz001val_19_319368_270513.png')], [PosixPath('data/masks-256/znz001trn_19_319384_270493_mask.png'), PosixPath('data/masks-256/znz001val_19_319369_270498_mask.png'), PosixPath('data/masks-256/znz001trn_19_319372_270498_mask.png')])
get_y_fn = lambda x: path_lbl/f'{x.stem}_mask.png'
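The image-to-mask filename mapping can be sanity-checked in isolation. A minimal sketch using only the stdlib, assuming the naming convention shown above (`<tile>.png` → `<tile>_mask.png`):

```python
from pathlib import Path

path_lbl = Path('data/masks-256')

# same mapping as get_y_fn: mask filename = image stem + '_mask.png'
def get_y_fn(x):
    return path_lbl / f'{Path(x).stem}_mask.png'

print(get_y_fn('data/images-256/znz001trn_19_319389_270493.png'))
# → data/masks-256/znz001trn_19_319389_270493_mask.png
```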
# test that masks are opening correctly with open_mask() settings
img_f = fnames[121]
img = open_image(img_f)
mask = open_mask(get_y_fn(img_f), convert_mode='RGB', div=False)
fig,ax = plt.subplots(1,1, figsize=(10,10))
img.show(ax=ax)
mask.show(ax=ax, alpha=0.5)
plt.hist(mask.data.view(-1), bins=3)
(array([188518., 0., 8090.]), array([ 0., 85., 170., 255.]), <a list of 3 Patch objects>)
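The histogram confirms that mask pixel values sit at 0 or 255 in each channel, with nothing in between. That is why the custom label class below opens masks with `div=True`: dividing by 255 turns each channel into a binary target. A toy illustration of that rescaling:

```python
# mask PNGs store each channel as 0 or 255; div=True in open_mask
# effectively rescales to {0, 1} so channels act as binary targets
raw_pixels = [0, 255, 255, 0, 255]      # one channel of a toy mask
binary = [p / 255 for p in raw_pixels]  # what div=True effectively does
print(binary)  # → [0.0, 1.0, 1.0, 0.0, 1.0]
```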
# define the validation set by filename prefix
holdout_grids = ['znz001val_']
valid_idx = [i for i,o in enumerate(fnames) if any(c in str(o) for c in holdout_grids)]
print(len(valid_idx))
452
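The prefix-based holdout logic is simple enough to test standalone. A sketch with hypothetical tile filenames following the naming scheme above:

```python
# hypothetical tile filenames: the 'znz001val_' prefix marks the holdout grid
fnames = ['znz001trn_19_319389_270493.png',
          'znz001val_19_319368_270513.png',
          'znz001trn_19_319396_270509.png',
          'znz001val_19_319369_270498.png']

holdout_grids = ['znz001val_']
valid_idx = [i for i, o in enumerate(fnames)
             if any(c in str(o) for c in holdout_grids)]
print(valid_idx)  # → [1, 3]
```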
# subclassing SegmentationLabelList to set open_mask(fn, div=True, convert_mode='RGB') for 3 channel target masks
class SegLabelListCustom(SegmentationLabelList):
def open(self, fn): return open_mask(fn, div=True, convert_mode='RGB')
class SegItemListCustom(SegmentationItemList):
_label_cls = SegLabelListCustom
# the classes corresponding to each channel
codes = np.array(['Footprint','Boundary','Contact'])
size = 256
bs = 16
# define image transforms for data augmentation and create databunch. More about image tfms and data aug at https://docs.fast.ai/vision.transform.html
tfms = get_transforms(flip_vert=True, max_warp=0.1, max_rotate=20, max_zoom=2, max_lighting=0.3)
src = (SegItemListCustom.from_folder(path_img)
.split_by_idx(valid_idx)
.label_from_func(get_y_fn, classes=codes))
data = (src.transform(tfms, size=size, tfm_y=True)
.databunch(bs=bs)
.normalize(imagenet_stats))
def show_3ch(imgitem, figsize=(10,5)):
fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=figsize)
ax1.imshow(np.asarray(imgitem.data[0,None])[0])
ax2.imshow(np.asarray(imgitem.data[1,None])[0])
ax3.imshow(np.asarray(imgitem.data[2,None])[0])
ax1.set_title('Footprint')
ax2.set_title('Boundary')
ax3.set_title('Contact')
plt.show()
for idx in range(10,15):
print(data.valid_ds.items[idx].name)
fig, (ax1,ax2) = plt.subplots(1,2, figsize=(10,5))
data.valid_ds.x[idx].show(ax=ax1)
ax2.imshow(image2np(data.valid_ds.y[idx].data*255))
plt.show()
show_3ch(data.valid_ds.y[idx])
znz001val_19_319363_270513.png
znz001val_19_319365_270507.png
znz001val_19_319377_270519.png
znz001val_19_319366_270498.png
znz001val_19_319377_270505.png
# visually inspect data-augmented training images
# TODO: show_batch doesn't display RGB mask correctly, setting alpha=0 to turn off for now
data.show_batch(4,figsize=(10,10), alpha=0.)
data
ImageDataBunch; Train: LabelList (809 items) x: SegItemListCustom Image (3, 256, 256),Image (3, 256, 256),Image (3, 256, 256),Image (3, 256, 256),Image (3, 256, 256) y: SegLabelListCustom ImageSegment (3, 256, 256),ImageSegment (3, 256, 256),ImageSegment (3, 256, 256),ImageSegment (3, 256, 256),ImageSegment (3, 256, 256) Path: data/images-256; Valid: LabelList (452 items) x: SegItemListCustom Image (3, 256, 256),Image (3, 256, 256),Image (3, 256, 256),Image (3, 256, 256),Image (3, 256, 256) y: SegLabelListCustom ImageSegment (3, 256, 256),ImageSegment (3, 256, 256),ImageSegment (3, 256, 256),ImageSegment (3, 256, 256),ImageSegment (3, 256, 256) Path: data/images-256; Test: None
Here we implement loss functions like Dice Loss and Focal Loss, which have been shown to perform well on image segmentation tasks. Then we'll create a MultiChComboLoss class to combine multiple loss functions and compute them across the 3 channels with adjustable weighting.
Combining a Dice or Jaccard loss (which captures image-wide context) with a pixel-focused Binary Cross Entropy or Focal loss, with adjustable weighting across the 3 target mask channels, has been shown to consistently outperform single loss functions. This is well-documented in Nick Weir's deep dive into the recent SpaceNet 4 Off-Nadir Building Detection top results:
https://medium.com/the-downlinq/a-deep-dive-into-the-spacenet-4-winning-algorithms-8d611a5dfe25
Finally, we adapt our model evaluation metrics (accuracy and dice score) to calculate a mean score across all channels or for a single specified channel.
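The channel/loss weighting scheme described above reduces to simple weighted arithmetic. A toy sketch (the loss values are made up, standing in for real focal/dice outputs) of how the combined loss is assembled:

```python
# toy per-channel loss values for (focal, dice), standing in for
# real loss-function outputs on each of the 3 mask channels
ch_losses = [(0.2, 0.5),   # Footprint
             (0.4, 0.7),   # Boundary
             (0.6, 0.9)]   # Contact

loss_wts = [1, 1]     # focal vs dice weighting
ch_wts   = [1, 1, 1]  # per-channel weighting

# weighted sum of loss functions per channel, then weighted mean over channels
total = sum(
    ch_wt * sum(w * l for w, l in zip(loss_wts, losses))
    for ch_wt, losses in zip(ch_wts, ch_losses)
) / sum(ch_wts)
print(round(total, 4))  # → 1.1
```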
import pdb
def dice_loss(input, target):
# pdb.set_trace()
smooth = 1.
input = torch.sigmoid(input)
iflat = input.contiguous().view(-1).float()
tflat = target.contiguous().view(-1).float()
intersection = (iflat * tflat).sum()
    return 1 - ((2. * intersection + smooth) / ((iflat + tflat).sum() + smooth))
# adapted from https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/65938
class FocalLoss(nn.Module):
def __init__(self, alpha=1, gamma=2, reduction='mean'):
super().__init__()
self.alpha = alpha
self.gamma = gamma
self.reduction = reduction
def forward(self, inputs, targets):
BCE_loss = F.binary_cross_entropy_with_logits(inputs, targets.float(), reduction='none')
pt = torch.exp(-BCE_loss)
F_loss = self.alpha * (1-pt)**self.gamma * BCE_loss
if self.reduction == 'mean': return F_loss.mean()
elif self.reduction == 'sum': return F_loss.sum()
else: return F_loss
class DiceLoss(nn.Module):
def __init__(self, reduction='mean'):
super().__init__()
self.reduction = reduction
def forward(self, input, target):
loss = dice_loss(input, target)
if self.reduction == 'mean': return loss.mean()
elif self.reduction == 'sum': return loss.sum()
else: return loss
class MultiChComboLoss(nn.Module):
def __init__(self, reduction='mean', loss_funcs=[FocalLoss(),DiceLoss()], loss_wts = [1,1], ch_wts=[1,1,1]):
super().__init__()
self.reduction = reduction
self.ch_wts = ch_wts
self.loss_wts = loss_wts
self.loss_funcs = loss_funcs
def forward(self, output, target):
# pdb.set_trace()
for loss_func in self.loss_funcs: loss_func.reduction = self.reduction # need to change reduction on fwd pass for loss calc in learn.get_preds(with_loss=True)
loss = 0
channels = output.shape[1]
assert len(self.ch_wts) == channels
assert len(self.loss_wts) == len(self.loss_funcs)
for ch_wt,c in zip(self.ch_wts,range(channels)):
ch_loss=0
for loss_wt, loss_func in zip(self.loss_wts,self.loss_funcs):
ch_loss+=loss_wt*loss_func(output[:,c,None], target[:,c,None])
loss+=ch_wt*(ch_loss)
return loss/sum(self.ch_wts)
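To build intuition for the FocalLoss above: with gamma=0 it reduces to alpha-scaled BCE, and larger gamma down-weights well-classified (high-pt) pixels more strongly. A scalar, pure-Python sketch mirroring the tensor math:

```python
import math

def bce(p, t):
    # binary cross-entropy for a single probability/target pair
    return -(t * math.log(p) + (1 - t) * math.log(1 - p))

def focal(p, t, alpha=1.0, gamma=2.0):
    # focal loss: alpha * (1 - pt)^gamma * BCE, as in the class above
    loss = bce(p, t)
    pt = math.exp(-loss)  # pt = p if t == 1 else 1 - p
    return alpha * (1 - pt) ** gamma * loss

# with gamma=0, focal loss reduces to alpha-scaled BCE
assert abs(focal(0.7, 1, alpha=1, gamma=0) - bce(0.7, 1)) < 1e-12

# with gamma>0, easier examples (pt closer to 1) contribute far less loss
print(focal(0.9, 1) < focal(0.6, 1))  # → True
```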
# calculate metrics on one channel (i.e. ch 0 for building footprints only) or on all 3 channels
def acc_thresh_multich(input:Tensor, target:Tensor, thresh:float=0.5, sigmoid:bool=True, one_ch:int=None)->Rank0Tensor:
"Compute accuracy when `y_pred` and `y_true` are the same size."
# pdb.set_trace()
if sigmoid: input = input.sigmoid()
n = input.shape[0]
if one_ch is not None:
input = input[:,one_ch,None]
target = target[:,one_ch,None]
input = input.view(n,-1)
target = target.view(n,-1)
return ((input>thresh)==target.byte()).float().mean()
def dice_multich(input:Tensor, targs:Tensor, iou:bool=False, one_ch:int=None)->Rank0Tensor:
"Dice coefficient metric for binary target. If iou=True, returns iou metric, classic for segmentation problems."
# pdb.set_trace()
n = targs.shape[0]
input = input.sigmoid()
if one_ch is not None:
input = input[:,one_ch,None]
targs = targs[:,one_ch,None]
input = (input>0.5).view(n,-1).float()
targs = targs.view(n,-1).float()
intersect = (input * targs).sum().float()
union = (input+targs).sum().float()
if not iou: return (2. * intersect / union if union > 0 else union.new([1.]).squeeze())
else: return intersect / (union-intersect+1.0)
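A side note on the `iou` flag above: dice and IoU are monotonic transforms of each other, so ranking models by one ranks them by the other. A quick numeric check with toy pixel counts (ignoring the +1.0 smoothing term in the code above):

```python
# toy overlap counts: I = |A ∩ B|, S = |A| + |B|
I, S = 30.0, 80.0
dice = 2 * I / S       # 2|A∩B| / (|A|+|B|)
iou  = I / (S - I)     # |A∩B| / |A∪B|, since |A∪B| = |A|+|B|-|A∩B|

# each metric determines the other:
# dice = 2*iou / (1 + iou)  and  iou = dice / (2 - dice)
assert abs(dice - 2 * iou / (1 + iou)) < 1e-12
assert abs(iou - dice / (2 - dice)) < 1e-12
print(dice, iou)  # → 0.75 0.6
```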
We'll set up fastai's Dynamic Unet model with an ImageNet-pretrained resnet34 encoder. This architecture, inspired by the original U-net, uses many advanced deep learning techniques by default.
We'll define our MultiChComboLoss function as a balanced combination of Focal Loss and Dice Loss and set our accuracy and dice metrics.
Also note that during training, the right-most 2 metric columns show channel-0 (building footprint channel only) accuracy and dice, while the first accuracy and dice columns (left-hand) show the mean of each metric across all 3 channels.
# set up metrics to show mean metrics for all channels as well as the building-only metrics (channel 0)
acc_ch0 = partial(acc_thresh_multich, one_ch=0)
dice_ch0 = partial(dice_multich, one_ch=0)
metrics = [acc_thresh_multich, dice_multich, acc_ch0, dice_ch0]
# combo Focal + Dice loss with equal channel wts
learn = unet_learner(data, models.resnet34, model_dir='../../models',
metrics=metrics,
loss_func=MultiChComboLoss(
reduction='mean',
loss_funcs=[FocalLoss(gamma=1, alpha=0.95),
DiceLoss(),
],
loss_wts=[1,1],
ch_wts=[1,1,1])
)
Downloading: "https://download.pytorch.org/models/resnet34-333f7ec4.pth" to /root/.cache/torch/checkpoints/resnet34-333f7ec4.pth 100%|██████████| 83.3M/83.3M [00:00<00:00, 244MB/s]
learn.metrics
[<function __main__.acc_thresh_multich>, <function __main__.dice_multich>, functools.partial(<function acc_thresh_multich at 0x7f9b51290378>, one_ch=0), functools.partial(<function dice_multich at 0x7f9b51290268>, one_ch=0)]
learn.loss_func
MultiChComboLoss()
learn.summary()
DynamicUnet summary (layer-by-layer listing truncated for readability):
======================================================================
Layer (type)         Output Shape         Param #     Trainable
======================================================================
Conv2d               [64, 128, 128]       9,408       False
BatchNorm2d          [64, 128, 128]       128         True
ReLU                 [64, 128, 128]       0           False
MaxPool2d            [64, 64, 64]         0           False
...
Conv2d               [3, 256, 256]        300         True
______________________________________________________________________
Total params: 41,221,268
Total trainable params: 19,953,620
Total non-trainable params: 21,267,648
Optimized with 'torch.optim.adam.Adam', betas=(0.9, 0.99)
Using true weight decay as discussed in https://www.fast.ai/2018/07/02/adam-weight-decay/
Loss function : MultiChComboLoss
======================================================================
Callbacks functions applied
First, we'll fine-tune the decoder part of the Unet only (leaving the weights of the ImageNet-pretrained resnet34 encoder frozen) for some epochs. Then we'll unfreeze all the trainable weights/layers of our model and train for some more epochs.
We'll track the valid_loss, acc_..., and dice_... metrics per epoch as training progresses to make sure they continue to improve and we're not overfitting. We also set a SaveModelCallback which tracks the channel-0 dice score, saves a model checkpoint each time it improves, and reloads the best-performing checkpoint file at the end of training.
We'll also inspect our model's results by setting learn.model.eval()
, generating some batches of predictions on the validation set, calculating and reshaping the image-wise loss values, and sorting by highest loss first to see the worst-performing results (as measured by the loss, which may differ in surprising ways from visually gauging results).
Pro-tip: display and view your results every chance you get! You'll pick up on all kinds of interesting clues about your model's behavior and how to make it better.
Finally, we'll export our trained Unet segmentation model for inference purposes as a .pkl
file. Learn more about exporting fastai models for inference in this tutorial: https://docs.fast.ai/tutorial.inference.html
learn.lr_find()
epoch | train_loss | valid_loss | acc_thresh_multich | dice_multich | acc_thresh_multich | dice_multich | time |
---|---|---|---|---|---|---|---|
0 | 1.285457 | #na# | 00:40 |
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
learn.recorder.plot(0,2,suggestion=True)
Min numerical gradient: 4.37E-05 Min loss divided by 10: 2.51E-04
lr = 1e-3
learn.fit_one_cycle(10, max_lr=lr,
callbacks=[
SaveModelCallback(learn,
monitor='dice_multich',
mode='max',
name='znz001trn-focaldice-stage1-best')
]
)
epoch | train_loss | valid_loss | acc_thresh_multich | dice_multich | acc_thresh_multich | dice_multich | time |
---|---|---|---|---|---|---|---|
0 | 0.911005 | 0.845238 | 0.926194 | 0.498503 | 0.856770 | 0.575664 | 00:42 |
1 | 0.766402 | 0.643972 | 0.968963 | 0.619496 | 0.949655 | 0.710738 | 00:40 |
2 | 0.670306 | 0.743911 | 0.969851 | 0.549823 | 0.941488 | 0.611353 | 00:39 |
3 | 0.598255 | 0.555541 | 0.967271 | 0.700767 | 0.951840 | 0.795056 | 00:39 |
4 | 0.542311 | 0.468147 | 0.977242 | 0.761267 | 0.963543 | 0.831779 | 00:39 |
5 | 0.497074 | 0.534367 | 0.968898 | 0.711269 | 0.958442 | 0.813535 | 00:39 |
6 | 0.481502 | 0.442373 | 0.978611 | 0.769515 | 0.970706 | 0.852280 | 00:39 |
7 | 0.460632 | 0.438452 | 0.978743 | 0.773524 | 0.969750 | 0.851324 | 00:39 |
8 | 0.438294 | 0.432549 | 0.978527 | 0.773472 | 0.968852 | 0.848523 | 00:39 |
9 | 0.431336 | 0.422709 | 0.979564 | 0.778634 | 0.970965 | 0.854430 | 00:39 |
Better model found at epoch 0 with dice_multich value: 0.5756635665893555. Better model found at epoch 1 with dice_multich value: 0.7107381820678711. Better model found at epoch 3 with dice_multich value: 0.7950560450553894. Better model found at epoch 4 with dice_multich value: 0.8317791819572449. Better model found at epoch 6 with dice_multich value: 0.85228031873703. Better model found at epoch 9 with dice_multich value: 0.8544298410415649.
learn.model.eval()
outputs,labels,losses = learn.get_preds(ds_type=DatasetType.Valid,n_batch=3,with_loss=True)
losses.shape
torch.Size([48, 1, 256, 256])
losses_reshaped = torch.mean(losses.view(outputs.shape[0],-1), dim=1)
sorted_idx = torch.argsort(losses_reshaped,descending=True)
losses_reshaped.shape
torch.Size([48])
# look at predictions vs actual by channel sorted by highest image-wise loss first
for i in sorted_idx[:10]:
print(f'{data.valid_ds.items[i].name}')
print(f'loss: {losses_reshaped[i].mean()}')
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,5))
data.valid_ds.x[i].show(ax=ax1)
ax1.set_title('Prediction')
ax1.imshow(image2np(outputs[i].sigmoid()), alpha=0.4)
ax2.set_title('Ground Truth')
data.valid_ds.x[i].show(ax=ax2)
ax2.imshow(image2np(labels[i])*255, alpha=0.4)
plt.show()
print('Predicted:')
show_3ch(outputs[i].sigmoid())
print('Actual:')
show_3ch(labels[i])
znz001val_19_319376_270503.png loss: 0.5641329288482666
Predicted:
Actual:
znz001val_19_319373_270499.png loss: 0.5443497896194458
Predicted:
Actual:
znz001val_19_319370_270500.png loss: 0.5268387794494629
Predicted:
Actual:
znz001val_19_319377_270499.png loss: 0.5174726843833923
Predicted:
Actual:
znz001val_19_319371_270501.png loss: 0.5123971700668335
Predicted:
Actual:
znz001val_19_319377_270511.png loss: 0.4875272810459137
Predicted:
Actual:
znz001val_19_319377_270505.png loss: 0.4573276937007904
Predicted:
Actual:
znz001val_19_319369_270499.png loss: 0.4478919208049774
Predicted:
Actual:
znz001val_19_319368_270498.png loss: 0.4472734332084656
Predicted:
Actual:
znz001val_19_319372_270505.png loss: 0.434670627117157
Predicted:
Actual:
learn.load('znz001trn-focaldice-stage1-best')
learn.model.train()
learn.unfreeze()
learn.lr_find()
epoch | train_loss | valid_loss | acc_thresh_multich | dice_multich | acc_thresh_multich | dice_multich | time |
---|---|---|---|---|---|---|---|
0 | 0.429385 | #na# | 00:34 |
LR Finder is complete, type {learner_name}.recorder.plot() to see the graph.
learn.recorder.plot(suggestion=True)
Min numerical gradient: 9.12E-07 Min loss divided by 10: 2.51E-06
learn.fit_one_cycle(20, max_lr=slice(3e-6,3e-4),
callbacks=[
SaveModelCallback(learn,
monitor='dice_multich',
mode='max',
name='znz001trn-focaldice-unfrozen-best')
]
)
epoch | train_loss | valid_loss | acc_thresh_multich | dice_multich | acc_thresh_multich | dice_multich | time |
---|---|---|---|---|---|---|---|
0 | 0.417450 | 0.424920 | 0.978937 | 0.777752 | 0.969837 | 0.853167 | 00:41 |
1 | 0.417881 | 0.422358 | 0.979555 | 0.781190 | 0.971355 | 0.858130 | 00:41 |
2 | 0.419773 | 0.419144 | 0.979988 | 0.780624 | 0.971901 | 0.857470 | 00:41 |
3 | 0.419228 | 0.415875 | 0.980770 | 0.785044 | 0.971328 | 0.855197 | 00:41 |
4 | 0.418023 | 0.423114 | 0.980637 | 0.782647 | 0.971088 | 0.853200 | 00:41 |
5 | 0.425701 | 0.421230 | 0.981116 | 0.779291 | 0.970956 | 0.846995 | 00:41 |
6 | 0.422510 | 0.411189 | 0.980170 | 0.786639 | 0.972095 | 0.862334 | 00:41 |
7 | 0.425119 | 0.409174 | 0.981921 | 0.795296 | 0.971999 | 0.860459 | 00:41 |
8 | 0.417396 | 0.411324 | 0.980118 | 0.787256 | 0.971205 | 0.859728 | 00:41 |
9 | 0.420265 | 0.423728 | 0.978711 | 0.782385 | 0.970431 | 0.859891 | 00:41 |
10 | 0.407220 | 0.400645 | 0.980449 | 0.792857 | 0.971484 | 0.862631 | 00:41 |
11 | 0.396948 | 0.400754 | 0.980687 | 0.792292 | 0.972799 | 0.865240 | 00:41 |
12 | 0.393657 | 0.393783 | 0.980724 | 0.795213 | 0.971120 | 0.862116 | 00:41 |
13 | 0.396129 | 0.390189 | 0.981337 | 0.798204 | 0.972389 | 0.865641 | 00:41 |
14 | 0.383326 | 0.382319 | 0.982289 | 0.804073 | 0.974107 | 0.870913 | 00:41 |
15 | 0.382185 | 0.383700 | 0.981734 | 0.801713 | 0.973677 | 0.870451 | 00:41 |
16 | 0.375103 | 0.380441 | 0.981567 | 0.802609 | 0.972649 | 0.868295 | 00:41 |
17 | 0.374760 | 0.376261 | 0.982032 | 0.805187 | 0.973627 | 0.871092 | 00:41 |
18 | 0.373607 | 0.378467 | 0.981716 | 0.803659 | 0.972845 | 0.868919 | 00:41 |
19 | 0.365126 | 0.378556 | 0.981615 | 0.802897 | 0.972802 | 0.868657 | 00:41 |
Better model found at epoch 0 with dice_multich value: 0.8531665802001953. Better model found at epoch 1 with dice_multich value: 0.8581295609474182. Better model found at epoch 6 with dice_multich value: 0.8623344898223877. Better model found at epoch 10 with dice_multich value: 0.8626308441162109. Better model found at epoch 11 with dice_multich value: 0.8652395606040955. Better model found at epoch 13 with dice_multich value: 0.8656412959098816. Better model found at epoch 14 with dice_multich value: 0.8709132075309753. Better model found at epoch 17 with dice_multich value: 0.8710918426513672.
learn.model.eval()
outputs,labels,losses = learn.get_preds(ds_type=DatasetType.Valid,n_batch=6,with_loss=True)
losses_reshaped = torch.mean(losses.view(outputs.shape[0],-1), dim=1)
sorted_idx = torch.argsort(losses_reshaped,descending=True)

# look at predictions vs actual by channel sorted by highest image-wise loss first
for i in sorted_idx[:10]:
    print(f'{data.valid_ds.items[i].name}')
    print(f'loss: {losses_reshaped[i].mean()}')
    fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,5))
    data.valid_ds.x[i].show(ax=ax1)
    ax1.set_title('Prediction')
    ax1.imshow(image2np(outputs[i].sigmoid()), alpha=0.4)
    ax2.set_title('Ground Truth')
    data.valid_ds.x[i].show(ax=ax2)
    ax2.imshow(image2np(labels[i])*255, alpha=0.4)
    plt.show()
    print('Predicted:')
    show_3ch(outputs[i].sigmoid())
    print('Actual:')
    show_3ch(labels[i])
(Prediction vs. ground truth visualizations shown per tile, sorted by highest loss first:)

znz001val_19_319377_270504.png loss: 0.6615709662437439
znz001val_19_319367_270511.png loss: 0.5069770812988281
znz001val_19_319376_270503.png loss: 0.5039869546890259
znz001val_19_319372_270499.png loss: 0.493091344833374
znz001val_19_319370_270500.png loss: 0.4745260775089264
znz001val_19_319373_270499.png loss: 0.4662070870399475
znz001val_19_319368_270503.png loss: 0.460345983505249
znz001val_19_319377_270499.png loss: 0.4598151743412018
znz001val_19_319377_270511.png loss: 0.4539119005203247
znz001val_19_319369_270498.png loss: 0.45350393652915955
# pickling with custom classes like MultiChComboLoss is a bit tricky
learn.export('../../models/znz001trn-focaldice.pkl')
Colab does not persistently store any files created and saved in its runtimes for more than 8-12 hours (or less, depending on inactivity or overall demand on the system), so we'll transfer or download the files we create somewhere else. We can:
use the `!cp` command (see below cells) to mount and transfer files to GDrive: uncomment and run this and the next cell, then follow the instructions to authorize access to your GDrive
# from google.colab import drive
# drive.mount('/content/drive')
# copy model export .pkl file to root of your GDrive
# !cp models/znz001trn-focaldice.pkl /content/drive/My\ Drive/
With our segmentation model trained and exported for inference use, we will now re-load it as an inference-only model to test on new, unseen imagery. We'll test the generalizability of our trained segmentation model on tiles from drone imagery captured over another part of Zanzibar and in other parts of the world, as well as at varying zoom levels (locations and zoom levels indicated):
We'll also compare our model inference time per tile on GPU versus CPU.
!curl https://course.fast.ai/setup/colab | bash
% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 322 100 322 0 0 1210 0 --:--:-- --:--:-- --:--:-- 1215 Updating fastai... Done.
from fastai.vision import *
from fastai.callbacks import *
from fastai.utils.collect_env import *
show_install(True)
```text === Software === python : 3.6.9 fastai : 1.0.60 fastprogress : 0.2.2 torch : 1.3.1 nvidia driver : 418.67 torch cuda : 10.1.243 / is available torch cudnn : 7603 / is enabled === Hardware === nvidia gpus : 1 torch devices : 1 - gpu0 : 15079MB | Tesla T4 === Environment === platform : Linux-4.19.80+-x86_64-with-Ubuntu-18.04-bionic distro : #1 SMP Tue Oct 29 21:03:15 PDT 2019 conda env : Unknown python : /usr/bin/python3 sys.path : /env/python /usr/lib/python36.zip /usr/lib/python3.6 /usr/lib/python3.6/lib-dynload /usr/local/lib/python3.6/dist-packages /usr/lib/python3/dist-packages /usr/local/lib/python3.6/dist-packages/IPython/extensions /root/.ipython Thu Jan 16 21:49:53 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.44 Driver Version: 418.67 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 | | N/A 45C P8 10W / 70W | 10MiB / 15079MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | +-----------------------------------------------------------------------------+ ``` Please make sure to include opening/closing ``` when you paste into forums/github to make the reports appear formatted as code sections. Optional package(s) to enhance the diagnostics can be installed with: pip install distro Once installed, re-run this utility to get the additional information
# TODO: look into better way of loading export.pkl w/o needing to redefine these custom classes
class SegLabelListCustom(SegmentationLabelList):
    def open(self, fn): return open_mask(fn, div=True, convert_mode='RGB')

class SegItemListCustom(SegmentationItemList):
    _label_cls = SegLabelListCustom

def dice_loss(input, target):
    # pdb.set_trace()
    smooth = 1.
    input = torch.sigmoid(input)
    iflat = input.contiguous().view(-1).float()
    tflat = target.contiguous().view(-1).float()
    intersection = (iflat * tflat).sum()
    return 1 - ((2. * intersection + smooth) / ((iflat + tflat).sum() + smooth))
# adapted from https://www.kaggle.com/c/tgs-salt-identification-challenge/discussion/65938
class FocalLoss(nn.Module):
    def __init__(self, alpha=1, gamma=2, reduction='mean'):
        super().__init__()
        self.alpha = alpha
        self.gamma = gamma
        self.reduction = reduction

    def forward(self, inputs, targets):
        BCE_loss = F.binary_cross_entropy_with_logits(inputs, targets.float(), reduction='none')
        pt = torch.exp(-BCE_loss)
        F_loss = self.alpha * (1-pt)**self.gamma * BCE_loss
        if self.reduction == 'mean': return F_loss.mean()
        elif self.reduction == 'sum': return F_loss.sum()
        else: return F_loss
class DiceLoss(nn.Module):
    def __init__(self, reduction='mean'):
        super().__init__()
        self.reduction = reduction

    def forward(self, input, target):
        loss = dice_loss(input, target)
        if self.reduction == 'mean': return loss.mean()
        elif self.reduction == 'sum': return loss.sum()
        else: return loss
class MultiChComboLoss(nn.Module):
    def __init__(self, reduction='mean', loss_funcs=[FocalLoss(),DiceLoss()], loss_wts=[1,1], ch_wts=[1,1,1]):
        super().__init__()
        self.reduction = reduction
        self.ch_wts = ch_wts
        self.loss_wts = loss_wts
        self.loss_funcs = loss_funcs

    def forward(self, output, target):
        # pdb.set_trace()
        # need to change reduction on fwd pass for loss calc in learn.get_preds(with_loss=True)
        for loss_func in self.loss_funcs: loss_func.reduction = self.reduction
        loss = 0
        channels = output.shape[1]
        assert len(self.ch_wts) == channels
        assert len(self.loss_wts) == len(self.loss_funcs)
        for ch_wt, c in zip(self.ch_wts, range(channels)):
            ch_loss = 0
            for loss_wt, loss_func in zip(self.loss_wts, self.loss_funcs):
                ch_loss += loss_wt * loss_func(output[:,c,None], target[:,c,None])
            loss += ch_wt * ch_loss
        return loss / sum(self.ch_wts)
def acc_thresh_multich(input:Tensor, target:Tensor, thresh:float=0.5, sigmoid:bool=True, one_ch:int=None)->Rank0Tensor:
    "Compute accuracy when `y_pred` and `y_true` are the same size."
    # pdb.set_trace()
    if sigmoid: input = input.sigmoid()
    n = input.shape[0]
    if one_ch is not None:
        input = input[:,one_ch,None]
        target = target[:,one_ch,None]
    input = input.view(n,-1)
    target = target.view(n,-1)
    return ((input>thresh)==target.byte()).float().mean()
def dice_multich(input:Tensor, targs:Tensor, iou:bool=False, one_ch:int=None)->Rank0Tensor:
    "Dice coefficient metric for binary target. If iou=True, returns iou metric, classic for segmentation problems."
    # pdb.set_trace()
    n = targs.shape[0]
    input = input.sigmoid()
    if one_ch is not None:
        input = input[:,one_ch,None]
        targs = targs[:,one_ch,None]
    input = (input>0.5).view(n,-1).float()
    targs = targs.view(n,-1).float()
    intersect = (input * targs).sum().float()
    union = (input+targs).sum().float()
    if not iou: return (2. * intersect / union if union > 0 else union.new([1.]).squeeze())
    else: return intersect / (union-intersect+1.0)
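To build intuition for the dice metric used above, here's a minimal sketch (plain Python on toy 1-D binary masks, not real model output) of the underlying arithmetic: dice = 2·|A∩B| / (|A| + |B|), which rewards overlap with the ground truth rather than raw pixel accuracy.

```python
# Toy illustration of the dice coefficient computed by dice_multich above,
# using plain Python lists instead of torch tensors.
def dice_coeff(pred, targ):
    # pred and targ are flat binary masks (0/1)
    intersect = sum(p * t for p, t in zip(pred, targ))
    union = sum(pred) + sum(targ)
    return 2.0 * intersect / union if union > 0 else 1.0

# 3 of 4 predicted "building" pixels overlap the 4 ground truth pixels
pred = [1, 1, 1, 1, 0, 0, 0, 0]
targ = [1, 1, 1, 0, 1, 0, 0, 0]
print(dice_coeff(pred, targ))  # 2*3 / (4+4) = 0.75
```

Note how an all-background prediction against an all-background target scores 1.0 (the `union > 0` guard), mirroring the `union.new([1.]).squeeze()` branch in `dice_multich`.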
!wget -O models/znz001trn-focaldice.pkl https://www.dropbox.com/s/by3nc1xri8y7t4p/znz001trn-focaldice.pkl?dl=1
--2019-07-24 02:43:06-- https://www.dropbox.com/s/by3nc1xri8y7t4p/znz001trn-focaldice.pkl?dl=1 Resolving www.dropbox.com (www.dropbox.com)... 162.125.1.1, 2620:100:6016:1::a27d:101 Connecting to www.dropbox.com (www.dropbox.com)|162.125.1.1|:443... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: /s/dl/by3nc1xri8y7t4p/znz001trn-focaldice.pkl [following] --2019-07-24 02:43:06-- https://www.dropbox.com/s/dl/by3nc1xri8y7t4p/znz001trn-focaldice.pkl Reusing existing connection to www.dropbox.com:443. HTTP request sent, awaiting response... 302 Found Location: https://uc84533df2e60284d3996caa86ad.dl.dropboxusercontent.com/cd/0/get/AlTefEq5R4jMRWPk6EzkyI9REddBeNgGiULIu0EFNkFENBhi6uIUac7P-DErwm5G_4IB-L2YR1RDsnMj92uGYOLdVaGXST_ZmL21pKyNh_cJNA/file?dl=1# [following] --2019-07-24 02:43:07-- https://uc84533df2e60284d3996caa86ad.dl.dropboxusercontent.com/cd/0/get/AlTefEq5R4jMRWPk6EzkyI9REddBeNgGiULIu0EFNkFENBhi6uIUac7P-DErwm5G_4IB-L2YR1RDsnMj92uGYOLdVaGXST_ZmL21pKyNh_cJNA/file?dl=1 Resolving uc84533df2e60284d3996caa86ad.dl.dropboxusercontent.com (uc84533df2e60284d3996caa86ad.dl.dropboxusercontent.com)... 162.125.1.6, 2620:100:601b:6::a27d:806 Connecting to uc84533df2e60284d3996caa86ad.dl.dropboxusercontent.com (uc84533df2e60284d3996caa86ad.dl.dropboxusercontent.com)|162.125.1.6|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 261536249 (249M) [application/binary] Saving to: ‘models/znz001trn-focaldice.pkl’ models/znz001trn-fo 100%[===================>] 249.42M 38.4MB/s in 6.5s 2019-07-24 02:43:14 (38.4 MB/s) - ‘models/znz001trn-focaldice.pkl’ saved [261536249/261536249]
# if you have your own model .pkl file to load, either:
# upload from computer: Files tab > Upload on left
# or mount GDrive and transfer file to Colab storage: uncomment below, change filepaths to the .pkl file on your GDrive if needed, and run:
# !cp /content/drive/My\ Drive/znz001trn-focaldice.pkl models/
inference_learner = load_learner(path='models/', file='znz001trn-focaldice.pkl')
import skimage.io
import time
def get_pred(learner, tile):
    # pdb.set_trace()
    t_img = Image(pil2tensor(tile[:,:,:3],np.float32).div_(255))
    outputs = learner.predict(t_img)
    im = image2np(outputs[2].sigmoid())
    im = (im*255).astype('uint8')
    return im
# try a different tile by changing or adding your own urls to list
urls = [
    'https://tiles.openaerialmap.org/5b1009f22b6a08001185f24a/0/5b1009f22b6a08001185f24b/19/319454/270706.png',
    'https://tiles.openaerialmap.org/5b1e6fd42b6a08001185f7bf/0/5b1e6fd42b6a08001185f7c0/20/569034/537093.png',
    'https://tiles.openaerialmap.org/5beaaba463f9420005ef8db0/0/5beaaba463f9420005ef8db1/19/313479/283111.png',
    'https://tiles.openaerialmap.org/5d050c3673de290005853a91/0/5d050c3673de290005853a92/18/203079/117283.png',
    'https://tiles.openaerialmap.org/5c88ff77225fc20007ab4e26/0/5c88ff77225fc20007ab4e27/21/1035771/1013136.png',
    'https://tiles.openaerialmap.org/5d30bac2e757aa0005951652/0/5d30bac2e757aa0005951653/19/136700/197574.png'
]
for url in urls:
    t1 = time.time()
    test_tile = skimage.io.imread(url)
    result = get_pred(inference_learner, test_tile)
    t2 = time.time()
    print(url)
    print(f'GPU inference took {t2-t1:.2f} secs')
    fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,5))
    ax1.imshow(test_tile)
    ax2.imshow(result)
    ax1.axis('off')
    ax2.axis('off')
    plt.show()
https://tiles.openaerialmap.org/5b1009f22b6a08001185f24a/0/5b1009f22b6a08001185f24b/19/319454/270706.png GPU inference took 0.11 secs
https://tiles.openaerialmap.org/5b1e6fd42b6a08001185f7bf/0/5b1e6fd42b6a08001185f7c0/20/569034/537093.png GPU inference took 0.12 secs
https://tiles.openaerialmap.org/5beaaba463f9420005ef8db0/0/5beaaba463f9420005ef8db1/19/313479/283111.png GPU inference took 0.11 secs
https://tiles.openaerialmap.org/5d050c3673de290005853a91/0/5d050c3673de290005853a92/18/203079/117283.png GPU inference took 0.11 secs
https://tiles.openaerialmap.org/5c88ff77225fc20007ab4e26/0/5c88ff77225fc20007ab4e27/21/1035771/1013136.png GPU inference took 0.11 secs
https://tiles.openaerialmap.org/5d30bac2e757aa0005951652/0/5d30bac2e757aa0005951653/19/136700/197574.png GPU inference took 0.11 secs
for url in urls:
    t1 = time.time()
    test_tile = skimage.io.imread(url)
    print(url)
    result = get_pred(inference_learner, test_tile)
    t2 = time.time()
    print(f'CPU inference took {t2-t1:.2f} secs')
    fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,5))
    ax1.imshow(test_tile)
    ax2.imshow(result)
    ax1.axis('off')
    ax2.axis('off')
    plt.show()
https://tiles.openaerialmap.org/5b1009f22b6a08001185f24a/0/5b1009f22b6a08001185f24b/19/319454/270706.png CPU inference took 1.58 secs
https://tiles.openaerialmap.org/5b1e6fd42b6a08001185f7bf/0/5b1e6fd42b6a08001185f7c0/20/569034/537093.png CPU inference took 1.60 secs
https://tiles.openaerialmap.org/5beaaba463f9420005ef8db0/0/5beaaba463f9420005ef8db1/19/313479/283111.png CPU inference took 1.58 secs
https://tiles.openaerialmap.org/5d050c3673de290005853a91/0/5d050c3673de290005853a92/18/203079/117283.png CPU inference took 1.60 secs
https://tiles.openaerialmap.org/5c88ff77225fc20007ab4e26/0/5c88ff77225fc20007ab4e27/21/1035771/1013136.png CPU inference took 1.59 secs
https://tiles.openaerialmap.org/5d30bac2e757aa0005951652/0/5d30bac2e757aa0005951653/19/136700/197574.png CPU inference took 1.59 secs
For a good evaluation of model performance against ground truth, we'll use another set of labeled data that the model was not trained on. We'll get this from the larger Zanzibar dataset. Preview the imagery and ground truth labels for znz029 in the STAC browser here:

For demonstration, we'll use this particular tile from znz029 at z=19, x=319454, y=270706:
Using solaris and geopandas, we'll convert our model's prediction, a 3-channel pixel raster output, into a GeoJSON file by thresholding and polygonizing the mask and then georegistering the resulting polygons:
# if not already loaded in runtime:
# install fastai and load inference learner from "Inference on new imagery section"
# and uncomment below and re-install geo packages
# !add-apt-repository ppa:ubuntugis/ubuntugis-unstable -y
# !apt-get update
# !apt-get install python-numpy gdal-bin libgdal-dev python3-rtree
# !pip install rasterio
# !pip install geopandas
# !pip install descartes
# !pip install solaris
# !pip install rio-tiler
import solaris as sol
from affine import Affine
from rasterio.transform import from_bounds
from shapely.geometry import Polygon
import math
import geopandas as gpd
import skimage
def deg2num(lat_deg, lon_deg, zoom):
    lat_rad = math.radians(lat_deg)
    n = 2.0 ** zoom
    xtile = int((lon_deg + 180.0) / 360.0 * n)
    ytile = int((1.0 - math.log(math.tan(lat_rad) + (1 / math.cos(lat_rad))) / math.pi) / 2.0 * n)
    return (xtile, ytile)

def num2deg(xtile, ytile, zoom):
    n = 2.0 ** zoom
    lon_deg = xtile / n * 360.0 - 180.0
    lat_rad = math.atan(math.sinh(math.pi * (1 - 2 * ytile / n)))
    lat_deg = math.degrees(lat_rad)
    return (lat_deg, lon_deg)

def tile_to_poly(z, x, y, size):
    top, left = num2deg(x, y, z)
    bottom, right = num2deg(x+1, y+1, z)
    tfm = from_bounds(left, bottom, right, top, size, size)
    return Polygon.from_bounds(left, top, right, bottom), tfm
z,x,y = 19,319454,270706
url= 'https://tiles.openaerialmap.org/5b1009f22b6a08001185f24a/0/5b1009f22b6a08001185f24b/19/319454/270706.png'
test_tile = skimage.io.imread(url)
result = get_pred(inference_learner, test_tile)
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,5))
ax1.imshow(test_tile)
ax2.imshow(result)
plt.show()
# threshold and polygonize with solaris: https://solaris.readthedocs.io/en/latest/tutorials/notebooks/api_mask_to_vector.html
mask2poly = sol.vector.mask.mask_to_poly_geojson(result,
                                                 channel_scaling=[1,0,-1],
                                                 bg_threshold=245,
                                                 simplify=True,
                                                 tolerance=2)
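If the channel_scaling argument looks opaque: as I understand solaris's API, it scales each channel by the given weight and sums them before thresholding, so [1,0,-1] keeps the footprint channel, ignores boundaries, and subtracts contact points to help split touching buildings. A rough numpy sketch of that combination step on a toy array (not solaris's actual internals):

```python
import numpy as np

# toy 2x2 "prediction" with 3 channels: footprint, boundary, contact
pred = np.zeros((2, 2, 3), dtype=np.float32)
pred[..., 0] = [[255, 255], [255, 0]]   # footprint channel
pred[..., 2] = [[0, 255], [0, 0]]       # contact channel

channel_scaling = [1, 0, -1]
combined = sum(w * pred[..., i] for i, w in enumerate(channel_scaling))

# threshold as with bg_threshold=245: the pixel that is both footprint
# and contact is suppressed, separating would-be merged buildings
mask = combined > 245
print(mask)  # [[ True False] [ True False]]
```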
mask2poly.plot(figsize=(10,10))
mask2poly.head()
| | geometry | value |
|---|---|---|
| 0 | POLYGON ((12.00000 0.00000, 9.00000 3.00000, 1... | 255.0 |
| 1 | POLYGON ((144.00000 0.00000, 142.00000 24.0000... | 255.0 |
| 2 | POLYGON ((44.00000 0.00000, 44.00000 19.00000,... | 255.0 |
| 3 | POLYGON ((2.00000 13.00000, 0.00000 40.00000, ... | 255.0 |
| 4 | POLYGON ((121.00000 11.00000, 110.00000 20.000... | 255.0 |
# get the bounds of the tile and its affine tfm matrix for georegistering purposes
tile_poly, tile_tfm = tile_to_poly(z,x,y,256)
tile_tfm
Affine(2.682209014892578e-06, 0.0, 39.351654052734375, 0.0, -2.6681491150752634e-06, -5.868769539456524)
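The Affine object printed above maps pixel (column, row) coordinates to geographic (lon, lat) via lon = a·col + b·row + c and lat = d·col + e·row + f. Plugging in the printed coefficients shows pixel (0, 0) landing on the tile's top-left geographic corner and (256, 256) on its bottom-right, matching tile_poly.bounds:

```python
# coefficients from the printed Affine(a, b, c, d, e, f) above
a, b, c = 2.682209014892578e-06, 0.0, 39.351654052734375
d, e, f = 0.0, -2.6681491150752634e-06, -5.868769539456524

def px_to_geo(col, row):
    # apply the affine transform: pixel coords -> (lon, lat)
    return (a * col + b * row + c, d * col + e * row + f)

print(px_to_geo(0, 0))      # top-left corner: (39.351654..., -5.868769...)
print(px_to_geo(256, 256))  # bottom-right corner: (39.352340..., -5.869452...)
```

This is exactly what solaris's georegister_px_df does to every polygon vertex in the next cell.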
# convert polys from pixel coords to geo coords: https://solaris.readthedocs.io/en/latest/api/vector.html?highlight=georegister_px_df#solaris.vector.polygon.georegister_px_df
result_polys = sol.vector.polygon.georegister_px_df(mask2poly,
                                                    affine_obj=tile_tfm,
                                                    crs=4326)
# show tile image to raw prediction to georegistered polygons
fig, (ax1, ax2, ax3) = plt.subplots(1,3, figsize=(15,5))
ax1.imshow(test_tile)
ax2.imshow(result)
result_polys.plot(ax=ax3)
result_polys.to_file('result_polys.geojson', driver='GeoJSON')
http://geojson.io/#id=gist:daveluo/3dfe4695e31b2b3a4c7c6e13ada5d1e6&map=19/-5.86910/39.35198
TMS layer link: https://tiles.openaerialmap.org/5ae242fd0b093000130afd38/0/5ae242fd0b093000130afd39/{z}/{x}/{y}.png
Finally with georegistered building predictions as a GeoJSON file, we can evaluate it against the ground truth GeoJSON file for the same tile.
We'll clip the ground truth labels to the bounds of this particular tile and use solaris's Evaluator to calculate the precision, recall, and F1 score. We will also visualize our predicted buildings (in red) against the ground truth buildings (in blue) in this particular tile.
For more information about these common evaluation metrics for models applied to overhead imagery, see the following articles and more by the SpaceNet team:
https://medium.com/the-downlinq/the-spacenet-metric-612183cc2ddb
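As a refresher on how those scores are derived: a predicted footprint counts as a true positive when its IoU with a ground truth footprint exceeds a threshold (0.5 by default in solaris), and precision, recall, and F1 then follow from the TP/FP/FN counts. A minimal sketch of that last step, using counts like those the evaluator reports further below:

```python
def prf1(tp, fp, fn):
    # precision: fraction of predictions that are real buildings
    precision = tp / (tp + fp)
    # recall: fraction of real buildings that were found
    recall = tp / (tp + fn)
    # F1: harmonic mean of the two
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(prf1(tp=21, fp=0, fn=4))  # (1.0, 0.84, 0.9130434782608696)
```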
# get the ground truth labels for all znz029
labels_url = 'https://www.dropbox.com/sh/ct3s1x2a846x3yl/AADHytc8fSCf3gna0wNAW3lZa/grid_029.geojson?dl=1'
gt_gdf = gpd.read_file(labels_url)
print(tile_poly.bounds)
(39.351654052734375, -5.8694525856299835, 39.35234069824219, -5.868769539456524)
# visualize the tile (in red) against the entire labeled znz029 area (in blue)
fig, ax = plt.subplots(figsize=(10,10))
gt_gdf.plot(ax=ax)
gpd.GeoDataFrame(geometry=[tile_poly], crs='epsg:4326').plot(alpha=0.5, color='red', ax=ax)
# clip gt_gdf to the tile bounds
clipped_gt_polys = gpd.overlay(gt_gdf, gpd.GeoDataFrame(geometry=[tile_poly], crs=4326), how='intersection')
clipped_gt_polys.plot()
result_polys.plot()
clipped_gt_polys.to_file('clipped_gt_polys.geojson', driver='GeoJSON')
# solaris tutorial on evaluation: https://solaris.readthedocs.io/en/latest/tutorials/notebooks/api_evaluation_tutorial.html
evaluator = sol.eval.base.Evaluator('clipped_gt_polys.geojson')
evaluator.load_proposal('result_polys.geojson', proposalCSV=False, conf_field_list=[])
evaluator.eval_iou(calculate_class_scores=False)
21it [00:00, 147.88it/s]
[{'F1Score': 0.9130434782608696, 'FalseNeg': 4, 'FalsePos': 0, 'Precision': 1.0, 'Recall': 0.84, 'TruePos': 21, 'class_id': 'all', 'iou_field': 'iou_score_all'}]
# visualize predicted vs ground truth
fig, (ax1, ax2) = plt.subplots(1,2, figsize=(10,5))
ax1.imshow(test_tile)
clipped_gt_polys.plot(ax=ax2, color='blue', alpha=0.5) #gt
result_polys.plot(ax=ax2, color='red', alpha=0.5) #pred
Congratulations, you did it!
You've completed the tutorial and now know how to do everything from producing training data to creating a deep learning model for segmentation to postprocessing and evaluating your model's performance.
To flex your newfound knowledge and potentially make your model perform much better, try implementing some or all of these ideas:
Create and use more training data: there are 13 grids' worth of training data for Zanzibar released as part of the Open AI Tanzania Building Footprint Segmentation Challenge dataset.
Change the zoom_level of your training/validation tiles. Better yet, try using tiles across multiple zooms (e.g. z21, z20, z19, z18). Note that with multiple zoom levels over the same imagery, you should be extra careful about overlapping tiles across those different zoom levels. ← Test your understanding of slippy map tiles by checking that you know what I mean here, or feel free to message me for the answer!
Change the Unet's encoder to a bigger or different architecture (e.g. resnet50, resnet101, densenet).
Change the combinations, weighting, and hyperparameters of the loss functions. Or implement completely new loss functions like Lovasz Loss.
Try different data augmentation combinations and techniques.
Train for more epochs and with different learning rate schedules. Try mixed-precision for faster model training.
Your idea here.
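On the multi-zoom caveat above: in the slippy map scheme, tile (z, x, y) covers exactly the same ground as the 2x2 block of tiles (2x..2x+1, 2y..2y+1) at z+1, so naively mixing zooms can leak the same pixels into both your training and validation sets. A small sketch of one way to check two tiles for overlap (hypothetical helper names, using our z19 test tile as the example):

```python
def parent_tile(z, x, y):
    # the z-1 tile that fully contains tile (z, x, y)
    return (z - 1, x // 2, y // 2)

def overlaps(tile_a, tile_b):
    # walk the deeper tile up to the shallower zoom, then compare indices
    (za, xa, ya), (zb, xb, yb) = sorted([tile_a, tile_b])
    while zb > za:
        zb, xb, yb = parent_tile(zb, xb, yb)
    return (za, xa, ya) == (zb, xb, yb)

# a z20 tile inside our z19 test tile overlaps it; its z19 neighbor does not
print(overlaps((19, 319454, 270706), (20, 638908, 541412)))  # True
print(overlaps((19, 319454, 270706), (19, 319455, 270706)))  # False
```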
I look forward to seeing what you discover!
If you liked this tutorial, look forward to the next ones, which will potentially cover topics like:
Curious about more geospatial deep learning topics? Did I miss something? Share your questions and thoughts in the Medium post so I can add them into this and next tutorials.
Good luck and happy deep learning!
Initiative (OpenDRI) for consultation projects which have inspired & informed.