This tutorial presents a few basic options for preparing 3D-printed visualizations from datasets that are available through the City of Toronto's Open Data Portal.


Most of the processes used are software/language agnostic, but we strongly encourage you to acquire a baseline understanding of the modern Python data science ecosystem. These are a few important pieces that you'll want to set up:

  • Make sure you have a working Python installation! Download Anaconda and follow its setup instructions if you don't already have a Python installation:
  • You will need to install the following Python libraries: pandas, numpy, matplotlib, seaborn, and plotly. If you are using Anaconda, run "conda install -c anaconda package-name" from a command line. Alternatively, use "pip install package-name" or your system's package manager (e.g. "apt-get install python3-package-name").
  • There is a difference between Python 2 and 3. You may need to familiarize yourself with it, depending on which version you decide to install.
  • Download and set up JupyterLab and/or Jupyter Notebook:
  • Download Blender if you don't already use 3D modeling software (e.g. Maya):
  • Download QGIS if you want to create maps and other geospatial data representations:
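
To make the Python 2 vs. 3 point concrete, the difference most likely to bite you first is the division operator:

```python
# In Python 2, 7 / 2 performed integer division and returned 3.
# In Python 3, / always returns a float; // is the floor-division operator.
print(7 / 2)   # 3.5
print(7 // 2)  # 3
```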

Data Sources

We will be using data from the City of Toronto's portal, but there are various other interesting datasets that you might consider working with:

  • You can collect and use your own biometric/self-tracking data if you have a wearable device like a Fitbit.
  • Kaggle provides lots of awesome datasets (the Pokémon one is fun if you're working with kids):
  • Google has a dataset search tool:
  • FiveThirtyEight has plenty of interesting political, social, and sports-related datasets:
  • You might also familiarize yourself with the Open North community:

There are various tools you can use to clean/munge/prepare your data, including Excel, LibreOffice Calc, R, and many more. We prefer pandas, a Python library for data analysis. There are lots of great tutorials that will outline how to import and prepare data with pandas in an IPython/Jupyter notebook. Your best bet is to do the free DataCamp tutorials if you're completely new to this stuff: That said, we recommend either starting with something simple or spending plenty of time familiarizing yourself with a dataset in a spreadsheet application before moving to pandas dataframes (even though we have included some very basic cleaning functions in the notebook cells below).
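
As a taste of what basic cleaning looks like in pandas, here is a minimal sketch; the table and column names are made up for illustration, not taken from the King Street data:

```python
import pandas as pd

# A tiny, made-up table with two common problems: a missing value and an
# inconsistently-cased column name.
df = pd.DataFrame({'Street': ['king', 'queen', 'king'],
                   'count': [2820, 1810, None]})

df = df.rename(columns=str.lower)     # normalize column names
df = df.dropna(subset=['count'])      # drop rows with missing counts
df['count'] = df['count'].astype(int) # counts should be integers
```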

For this exercise, we will be working with pedestrian data from the King Street Pilot Project. We scraped all the data from the monthly .pdf reports that the City of Toronto has made available here: The data is available in the data directory of the repository that contains this file.

Working with Data in a Python/Jupyter Notebook

Start by importing the necessary libraries:

In [1]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.plotly as py  # in Plotly 4+, this module moved to chart_studio.plotly
import plotly.graph_objs as go

If you want, you can set a default figure size for charts:

In [2]:
plt.rcParams['figure.figsize'] = [10, 8]

Next, import your data. We have included a cleaned version of the dataset in the data directory. pvol is what is referred to as a "dataframe" in pandas terminology. You will often see the variable name df (e.g. df = pd.read_csv('example.csv')).

In [3]:
pvol = pd.read_csv('data/king_pedestrian_volume.csv')

Set the index for the pandas dataframe you've created:

In [4]:
pvol.set_index('street')
am_bathurst_baseline am_bathurst_january am_bathurst_february am_bathurst_march am_bathurst_april am_bathurst_may am_bathurst_june am_spadina_baseline am_spadina_january am_spadina_february ... pm_bay_april pm_bay_may pm_bay_june pm_jarvis_baseline pm_jarvis_january pm_jarvis_february pm_jarvis_march pm_jarvis_april pm_jarvis_may pm_jarvis_june
queen 1810 1640 1750 1760 1790 1960 1770 2000 1880 1790 ... 4890 8340 9280 1320 1140 1300 1210 1300 1450 1460
king 2820 2680 2620 2590 2580 2890 2780 4150 3580 3690 ... 5540 8060 8190 3370 3760 4050 3930 4060 3920 4080

2 rows × 56 columns

Create some objects for each street. Normally, we wouldn't want such long names (and wide dataframes), but explaining the hows and whys of reshaping data is not in the scope of this tutorial. If you're interested, read Hadley Wickham's papers on the subject, or follow these instructions for reshaping data in pandas:
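
Reshaping those 56 columns into long format would typically be done with pandas' melt; a small sketch using a two-column slice of the same naming scheme:

```python
import pandas as pd

# Wide format, as in pvol: one column per intersection/month combination.
wide = pd.DataFrame({'street': ['queen', 'king'],
                     'am_bathurst_baseline': [1810, 2820],
                     'am_bathurst_january': [1640, 2680]})

# Long format: one row per (street, measurement) pair.
long = wide.melt(id_vars='street', var_name='measurement', value_name='count')
```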

In [5]:
#### am objects
# bathurst
pvol_bathurst_am = pvol[['street', 'am_bathurst_baseline', 'am_bathurst_january', 'am_bathurst_february',
                         'am_bathurst_march', 'am_bathurst_april', 'am_bathurst_may', 'am_bathurst_june']]

# spadina
pvol_spadina_am = pvol[['street', 'am_spadina_baseline', 'am_spadina_january', 'am_spadina_february',
                        'am_spadina_march', 'am_spadina_april', 'am_spadina_may', 'am_spadina_june']]

# bay
pvol_bay_am = pvol[['street', 'am_bay_baseline', 'am_bay_january', 'am_bay_february',
                    'am_bay_march', 'am_bay_april', 'am_bay_may', 'am_bay_june']]

# jarvis
pvol_jarvis_am = pvol[['street', 'am_jarvis_baseline', 'am_jarvis_january', 'am_jarvis_february',
                       'am_jarvis_march', 'am_jarvis_april', 'am_jarvis_may', 'am_jarvis_june']]

#### pm objects
# bathurst
pvol_bathurst_pm = pvol[['street', 'pm_bathurst_baseline', 'pm_bathurst_january', 'pm_bathurst_february',
                         'pm_bathurst_march', 'pm_bathurst_april', 'pm_bathurst_may', 'pm_bathurst_june']]

# spadina
pvol_spadina_pm = pvol[['street', 'pm_spadina_baseline', 'pm_spadina_january', 'pm_spadina_february',
                        'pm_spadina_march', 'pm_spadina_april', 'pm_spadina_may', 'pm_spadina_june']]

# bay
pvol_bay_pm = pvol[['street', 'pm_bay_baseline', 'pm_bay_january', 'pm_bay_february',
                    'pm_bay_march', 'pm_bay_april', 'pm_bay_may', 'pm_bay_june']]

# jarvis
pvol_jarvis_pm = pvol[['street', 'pm_jarvis_baseline', 'pm_jarvis_january', 'pm_jarvis_february',
                       'pm_jarvis_march', 'pm_jarvis_april', 'pm_jarvis_may', 'pm_jarvis_june']]

Using the standard pandas plotting functions (which rely on matplotlib), you can prepare bare-bones static charts (you might use matplotlib or seaborn if you want greater customization options). There are lots of ways to adjust the colours if you want, but we like our charts to look like life savers ;-)

In [6]:
pvol_bathurst_am.plot(kind='bar', x='street',
                      title='AM Peak Pedestrian Volume Measured at Bathurst');

If you want horizontal charts, you can pass kind='barh' to the plot method:

In [7]:
pvol_bathurst_am.plot(kind='barh', x='street',
                      title='AM Peak Pedestrian Volume Measured at Bathurst')

Far more interesting and useful is the potential for creating interactive charts inside a notebook. There are various libraries you can use (such as Bokeh or Pygal), but we find Plotly to be the most well-developed. It also has an easy-to-use web portal. What we're going to do next is create a grouped bar chart using Plotly's Python library.

In [9]:
#### plotly-based grouped bar charts
# AM Bathurst
bath_baseline = go.Bar(x=pvol['street'], y=pvol['am_bathurst_baseline'],
                       name='AM Bathurst Baseline')
bath_january = go.Bar(x=pvol['street'], y=pvol['am_bathurst_january'],
                      name='AM Bathurst January')
bath_february = go.Bar(x=pvol['street'], y=pvol['am_bathurst_february'],
                       name='AM Bathurst February')
bath_march = go.Bar(x=pvol['street'], y=pvol['am_bathurst_march'],
                    name='AM Bathurst March')
bath_april = go.Bar(x=pvol['street'], y=pvol['am_bathurst_april'],
                    name='AM Bathurst April')
bath_may = go.Bar(x=pvol['street'], y=pvol['am_bathurst_may'],
                  name='AM Bathurst May')
bath_june = go.Bar(x=pvol['street'], y=pvol['am_bathurst_june'],
                   name='AM Bathurst June')

data = [bath_baseline, bath_january, bath_february, bath_march, bath_april, bath_may, bath_june]
layout = go.Layout(
    barmode='group',
    # bargap=0.15,
    # showlegend=False
)

am_bath_pvol_fig = go.Figure(data=data, layout=layout)
py.iplot(am_bath_pvol_fig, filename='am_bath_pvol_grouped-bar')

This chart displays monthly average pedestrian counts for the morning rush at the intersections Bathurst/Queen and Bathurst/King. The dataframe provides options for the 7-10 am and 4-7 pm peak periods at the intersections of Bathurst, Spadina, Bay, and Jarvis (at both King and Queen). Change your arguments accordingly to prepare different - or multiple - charts.

Was Al Carbone Right?

Remember this guy?

In [10]:
<img src="images/al-carbone.jpg">

Now that the Pilot Project has been running for almost a year, is there evidence that King is the "wasteland" Al Carbone claims it to be? Let's look at the data. Here are the evening (4-7 pm) pedestrian counts for Spadina… right around early dinner time:

In [11]:
# PM Spadina
spadina_baseline = go.Bar(x=pvol['street'], y=pvol['pm_spadina_baseline'],
                          name='PM Spadina Baseline')
spadina_january = go.Bar(x=pvol['street'], y=pvol['pm_spadina_january'],
                         name='PM Spadina January')
spadina_february = go.Bar(x=pvol['street'], y=pvol['pm_spadina_february'],
                          name='PM Spadina February')
spadina_march = go.Bar(x=pvol['street'], y=pvol['pm_spadina_march'],
                       name='PM Spadina March')
spadina_april = go.Bar(x=pvol['street'], y=pvol['pm_spadina_april'],
                       name='PM Spadina April')
spadina_may = go.Bar(x=pvol['street'], y=pvol['pm_spadina_may'],
                     name='PM Spadina May')
spadina_june = go.Bar(x=pvol['street'], y=pvol['pm_spadina_june'],
                      name='PM Spadina June')

data = [spadina_baseline, spadina_january, spadina_february, spadina_march, spadina_april, spadina_may, spadina_june]
layout = go.Layout(
    barmode='group',
    # bargap=0.15,
    # showlegend=False
)

pm_spad_pvol_fig = go.Figure(data=data, layout=layout)
py.iplot(pm_spad_pvol_fig, filename='pm_spad_pvol_grouped-bar')

And here's Bay…

In [12]:
# PM Bay
bay_baseline = go.Bar(x=pvol['street'], y=pvol['pm_bay_baseline'],
                      name='PM Bay Baseline')
bay_january = go.Bar(x=pvol['street'], y=pvol['pm_bay_january'],
                     name='PM Bay January')
bay_february = go.Bar(x=pvol['street'], y=pvol['pm_bay_february'],
                      name='PM Bay February')
bay_march = go.Bar(x=pvol['street'], y=pvol['pm_bay_march'],
                   name='PM Bay March')
bay_april = go.Bar(x=pvol['street'], y=pvol['pm_bay_april'],
                   name='PM Bay April')
bay_may = go.Bar(x=pvol['street'], y=pvol['pm_bay_may'],
                 name='PM Bay May')
bay_june = go.Bar(x=pvol['street'], y=pvol['pm_bay_june'],
                  name='PM Bay June')

data = [bay_baseline, bay_january, bay_february, bay_march, bay_april, bay_may, bay_june]
layout = go.Layout(
    barmode='group',
    # bargap=0.15,
    # showlegend=False
)

pm_bay_pvol_fig = go.Figure(data=data, layout=layout)
py.iplot(pm_bay_pvol_fig, filename='pm_bay_pvol_grouped-bar')

Keep in mind that we have incomplete data, and we won't be able to get a really good sense of the Pilot Project's impact without year-over-year data. These numbers also don't account for major events (including TIFF), the impact that the Muskoka chair seating areas have had on public space use, or whether people are spending more or less time waiting for streetcars. While it is easy to read the increase in pedestrian volume as evidence of the Pilot Project's success, how do we know the weather hasn't been the biggest driver (keep in mind that the baseline was captured in October)?

3D Bars in Blender

Now, let's take these same charts that we've prepared for the screen and render them as 3D models. This section uses the csv module (part of the Python standard library) and bpy, Blender's built-in Python API, which is available to scripts run inside Blender.

These steps have already been done, but are included for reference:

  • open king_pedestrian_volume.csv in a spreadsheet application (Calc or Excel) and copy the entire row for king
  • open a new window/file and "paste special" with the transpose option to turn your row of data into a column
  • remove the "king" row at the top, then save as a new file called pvolking.csv
  • repeat these steps for queen
In [76]:
<img src="images/blender.gif">
  • referring to the image above, open up a "text editor" screen in Blender
  • open the script (that has been included in this repository) and use it to create 3D bars using the pvolking.csv and pvolqueen.csv files (you'll need to run them separately)
  • when you are satisfied with the results, export an entire group or the entire street as .obj or .stl files
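
The bar-making script referred to above is included in the repository; as a rough sketch of what such a script does (the file name, bar spacing, and scale factor below are illustrative assumptions, not the repository script itself):

```python
# Sketch of a Blender bar-chart script: read one column of pedestrian
# counts and compute a placement for one scaled cube per value.
import csv

def read_counts(path):
    """Read a single-column CSV (no header) into a list of floats."""
    with open(path, newline='') as f:
        return [float(row[0]) for row in csv.reader(f) if row]

def make_bars(counts, bar_width=1.0, gap=0.5, scale=0.001):
    """Yield an (x, height) placement for each count."""
    for i, count in enumerate(counts):
        yield i * (bar_width + gap), count * scale

# Inside Blender (where bpy is available), each placement becomes a cube:
# import bpy
# for x, h in make_bars(read_counts('pvolking.csv')):
#     bpy.ops.mesh.primitive_cube_add(size=1, location=(x, 0, h / 2))
#     bpy.context.object.scale = (1.0, 1.0, h)
```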

Here are some sample tiles and prototypes of tactile dashboard interfaces:

In [4]:
<img src="images/prototypes.jpg">

Preparing 3D Data Maps

Now, we're going to switch to the recently-released 2016 Neighbourhood Profiles Dataset. (We had done a bunch of work with ward data in anticipation of the upcoming election, but it seems pretty irrelevant in light of recent events!) We're going to compare population growth between 2011 and 2016 (which is originally taken from the 2016 Census - more info here). There are plenty of interesting features of this dataset that you might consider using instead of population - language concentrations, income, citizenship, etc. We've already cleaned and processed the population data so it will play nice with QGIS. The raw and processed .csv files are in the data directory.

If you're going to use Excel or Calc to prep data for import into QGIS, here are some important steps:

Here are some additional things you can do with pandas and numpy:

Depending on the data you use, you might have to re-scale to make it printable. Refer to the following image:

In [77]:
<img src="images/ladder2.gif">

Import the data:

In [78]:
df = pd.read_csv('data/neighbourhood_pop.csv', dtype=str)  # dtype=str keeps the leading zeroes
df.head()
id 2011 2016
0 001 0.0341 0.033312
1 002 0.032788 0.032954
2 003 0.010138 0.01036
3 004 0.010488 0.010529
4 005 0.00955 0.009456

Set index to ID:

In [79]:
df.set_index('id', inplace=True)

Convert strings to floats in order to use numpy functions:

In [80]:
df['2011'] = df['2011'].astype(float)  # the columns were read as strings above
df['2016'] = df['2016'].astype(float)

You can use numpy to convert to square root, logarithmic, or whatever other scale you like:

In [81]:
df['2016'] = np.sqrt(df['2016'])
df['2011'] = np.log10(df['2011'])
df.head()
2011 2016
001 -1.467246 0.182516
002 -1.484285 0.181532
003 -1.994048 0.101784
004 -1.979307 0.102611
005 -2.019997 0.097242
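
If the transformed values are still awkward to print, a linear min-max rescale into a target height range is often all you need. A minimal sketch (the function name and the 2-40 mm default range are assumptions, not from the repository):

```python
import numpy as np

def rescale(values, lo=2.0, hi=40.0):
    """Linearly map an array onto [lo, hi] (e.g. printable heights in mm)."""
    v = np.asarray(values, dtype=float)
    return lo + (v - v.min()) * (hi - lo) / (v.max() - v.min())
```

Applied to the population columns, this would look like df['2016'] = rescale(df['2016']).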

When you're done processing, you can output a new .csv for import into QGIS (this has already been done):

In [82]:
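The export itself is one line of pandas; the output filename below is illustrative, not necessarily the name used in the data directory:

```python
import pandas as pd

# A one-row stand-in for the processed dataframe; keeping the id index in
# the output lets QGIS join on it. Filename is hypothetical.
df = pd.DataFrame({'2011': [-1.467246], '2016': [0.182516]},
                  index=pd.Index(['001'], name='id'))
df.to_csv('neighbourhood_pop_processed.csv')
```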

Working in QGIS

In [83]:
<img src="images/qgis.gif">

Loading your Shapefile into Blender

  • make sure you have the BlenderGIS plugin installed and configured:
  • import the new shapefile that you've just created
  • set extrusion to the specific data column you want to use (in our example, 2011 or 2016 population)
  • set to extrude along z axis
  • if you want, separate the objects and create object names from the id field
  • change coordinates to WGS84 lat/lon
  • you might also add a base or make the objects solid (use solidify modifier) to make printing easier
  • as with the previous example, you'll want to export to .obj or .stl and make sure to set the scale, materials, and other export parameters to work with your 3D printing software (e.g. Cura, if you're using an Ultimaker)
In [84]:
<img src="images/shapefile.gif">

This is what your printed maps will look like:

In [3]:
<img src="images/models.jpg">

3D Printing Considerations

There are numerous software applications that you might use for preparing models prior to setting them up to print. You can likely do most of your prep in Blender, but the learning curve is steep.

  • MeshLab is not very user friendly, but has a million features built into it (it's especially good for working with point clouds):
  • Meshmixer has been the go-to processing tool for a long time, and has everything from sculpting tools to built-in printer export:
  • Cotangent is a new application from the developer of Meshmixer. It has lots of interesting features, including better repair and slicing tools, and is ideal for prepping 3D prints:

Some Useful Blender shortcuts:

keys      function
a         select all
c         circle select
ctrl-lmb  lasso select
b         border select
ctrl-g    group selected objects
m         when object selected, move to specific layer

Some things to think about if you're preparing tactile models for blind users:

Printed tactile models do not have to be static! Think about how to separate your models into individual, reconfigurable/modular chunks in order to create dynamic data representations. It is easy to 3D print lego-like connectors onto the faces of your objects: Additionally, attachable velcro tape gives you lots of options for creating endlessly modular graphics.

Remember, once you have a digital 3D data representation, it can usually be ported to any number of interaction contexts:

  • VR and gaming environments
  • Immersive point clouds and 3D scatter plots (not particularly printable, but they can be really engaging!)
  • Haptic/conductive extensions to tactile models