knitr::opts_chunk$set(echo = TRUE)
This document showcases a completely reproducible particulate matter analysis. It covers the whole workflow: from the measurement devices used, via the software for data hosting, to the analysis and visualisation environment. It stands on the shoulders of communities who provide free and open resources in the spirit of Open Science: R and its packages (rmarkdown, sf, dplyr, ...), Project Jupyter, binder, Rocker, Docker, and many more. The actual analysis is based on the opensensmapR vignette osem-intro.
The code and its environment are published, documented, and packaged to support reproducible research.
The code repository is https://github.com/nuest/sensebox-binder and the git version hash is `r system("git rev-parse HEAD", intern = TRUE)`.
The repository can be opened interactively at http://mybinder.org/v2/gh/nuest/sensebox-binder/master.
The code (this document as either R Markdown or Jupyter Notebook) and environment (a Docker image) are archived with a DOI: 10.5281/zenodo.1135140.
In the remainder of this file, code "chunks" and text are interspersed to provide a transparent and understandable workflow.
The analysis takes a look at fine particulate matter measured in Germany around New Year's Eve 2017/2018.
Note: The data is included in the archive as a backup in JSON format.
By default, this document can only be compiled as long as the openSenseMap API is available.
To use the local backup data, set the variable `online` to `FALSE` in the second code chunk.
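The fallback logic of that chunk can be sketched generically. The following is a minimal sketch only: `fetch_data` is a hypothetical stand-in for the real API call, and the cache file name is arbitrary.

```r
# Hedged sketch of the online/offline caching pattern used in this analysis.
# `fetch_data` stands in for the real API call (e.g. osem_boxes()); the
# cache file name is hypothetical. Unlike the real chunk, this sketch also
# fetches when no cache exists yet, so it runs standalone.
online <- FALSE                                  # flip to TRUE for live API access
cache_file <- file.path(tempdir(), "boxes_cache.rds")
fetch_data <- function() data.frame(id = 1:3, value = c(10, 20, 30))

if (online || !file.exists(cache_file)) {
  dat <- fetch_data()         # retrieve fresh data ...
  saveRDS(dat, cache_file)    # ... and update the local backup
} else {
  dat <- readRDS(cache_file)  # fall back to the cached copy
}
nrow(dat)
```

The real chunk below uses JSON instead of RDS so that the backup stays human-readable and diffable in the repository.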
library("opensensmapr")
library("dplyr")
library("lubridate")
library("units")
library("sf")
library("leaflet")
library("readr")
library("jsonlite")
library("here")
library("maps")
online <- TRUE # access online API or use local data backup?
analysis_date <- lubridate::as_datetime("2018-01-01 00:00:00")

if (online) {
  # retrieve data from openSenseMap API
  all_boxes <- osem_boxes()
  pm25_boxes <- osem_boxes(
    exposure = 'outdoor',
    date = analysis_date, # ±4 hours
    phenomenon = 'PM2.5'
  )

  # update local data backup
  all_json <- toJSON(all_boxes, digits = NA, pretty = TRUE)
  write(all_json, file = here("data/all_boxes.json"))
  pm25_json <- toJSON(pm25_boxes, digits = NA, pretty = TRUE)
  write(pm25_json, file = here("data/pm25_boxes.json"))
} else {
  # load data from file and fix column types
  all_boxes_file <- fromJSON(here("data/all_boxes.json"))
  all_boxes <- type_convert(all_boxes_file,
    col_types = cols(
      exposure = col_factor(levels = NULL),
      model = col_factor(levels = NULL),
      grouptag = col_factor(levels = NULL)))
  class(all_boxes) <- c("sensebox", class(all_boxes))

  pm25_boxes_file <- fromJSON(here("data/pm25_boxes.json"))
  pm25_boxes <- type_convert(pm25_boxes_file,
    col_types = cols(
      exposure = col_factor(levels = NULL),
      model = col_factor(levels = NULL),
      grouptag = col_factor(levels = NULL)))
  class(pm25_boxes) <- c("sensebox", class(pm25_boxes))
}
knitr::kable(data.frame(nrow(all_boxes), nrow(pm25_boxes)),
col.names = c(
"# senseBoxes",
paste("# senseBoxes with PM2.5 measurements around", format(analysis_date, "%Y-%m-%d %T %Z"))))
The openSenseMap currently provides access to `r nrow(all_boxes)` senseBoxes, of which `r nrow(pm25_boxes)` provide measurements of PM2.5 around `r format(analysis_date, "%Y-%m-%d %T %Z")`.
The following map shows the PM2.5 sensor locations, which are mostly deployed in central Europe.
plot(pm25_boxes)
How many senseBoxes in Münster measure PM2.5?
ms <- st_sfc(st_point(c(7.62571, 51.96236)))
st_crs(ms) <- 4326
pm25_boxes_sf <- st_as_sf(pm25_boxes, remove = FALSE, agr = "identity")
names(pm25_boxes_sf) <- c(names(pm25_boxes), "geometry")
pm25_boxes_sf <- cbind(pm25_boxes_sf, dist_to_ms = st_distance(ms, pm25_boxes_sf))
max_dist <- set_units(7, km) # km from city center
ms_boxes <- pm25_boxes_sf[pm25_boxes_sf$dist_to_ms < max_dist,c("X_id", "name")]
ms_boxes
Where are the sensors in Münster? [Does not work in Jupyter Notebook]
sense_icon <- awesomeIcons(
icon = 'cube',
iconColor = '#ffffff',
library = 'fa',
markerColor = 'green'
)
leaflet() %>%
addTiles() %>%
addAwesomeMarkers(data = ms_boxes,
popup = ~paste0("<b>Name:</b> ", name, "<br><b>Id:</b> ",
"<a href='https://opensensemap.org/explore/", X_id, "' ",
"target='_blank'>", X_id, "</a>"),
label = ~name,
icon = sense_icon)
Now we retrieve data for the `r nrow(ms_boxes)` senseBoxes with values in the area of interest.
if (online) {
  class(ms_boxes) <- c("sensebox", class(ms_boxes))
  ms_data <- osem_measurements(ms_boxes, phenomenon = "PM2.5",
    from = lubridate::as_datetime("2017-12-31 20:00:00"),
    to = lubridate::as_datetime("2018-01-01 04:00:00"),
    columns = c("value", "createdAt", "lat", "lon", "boxId",
                "boxName", "exposure", "sensorId",
                "phenomenon", "unit", "sensorType"))

  # update local data backup
  data_json <- toJSON(ms_data, digits = NA, pretty = TRUE)
  write(data_json, file = here("data/ms_data.json"))
} else {
  # load data from file and fix column types
  ms_data_file <- fromJSON(here("data/ms_data.json"))
  ms_data <- type_convert(ms_data_file,
    col_types = cols(
      sensorId = col_factor(levels = NULL),
      unit = col_factor(levels = NULL)))
  class(ms_data) <- c("sensebox", class(ms_data))
}
summary(ms_data %>%
  select(value, sensorId, unit))
We can now plot the `r nrow(ms_data)` measurements.
plot(value~createdAt, ms_data,
type = "p", pch = '*', cex = 2, # new year's style
col = factor(ms_data$sensorId),
xlab = NA,
ylab = unique(ms_data$unit),
main = "Particulates measurements (PM2.5) on New Year 2017/2018",
sub = paste(nrow(ms_boxes), "stations in Münster, Germany\n",
"Data by openSenseMap.org licensed under",
"Public Domain Dedication and License 1.0"))
As you can see, it was a very "particular" celebration.
Who are the record holders?
top_measurements <- ms_data %>%
arrange(desc(value))
top_boxes <- top_measurements %>%
distinct(sensorId, .keep_all = TRUE)
knitr::kable(x = top_boxes %>%
select(value, createdAt, boxName) %>%
head(n = 3),
caption = "Top 3 boxes")
knitr::kable(top_boxes %>% filter(value == max(top_boxes$value)) %>%
select(sensorId, boxName),
col.names = c("Top sensor identifier", "Top box name"))
Congratulations (?) to these boxes for holding the record values just after the new year started.
Where are the record holding boxes?
Static plot
top_boxes_sf <- top_boxes %>%
filter(value == max(top_boxes$value)) %>%
st_as_sf(coords = c('lon', 'lat'), crs = 4326)
bbox <- sf::st_bbox(top_boxes_sf)
world <- map("world", plot = FALSE, fill = TRUE) %>%
sf::st_as_sf() %>%
sf::st_geometry()
plot(world,
xlim = round(bbox[c(1,3)], digits = 1),
ylim = round(bbox[c(2,4)], digits = 1),
axes = TRUE, las = 1)
plot(top_boxes_sf, add = TRUE, col = "red", cex = 2)
title("senseBox stations in Münster with highest PM2.5 measurements")
Interactive map
fireworks_icon <- makeIcon(
# icon source: https://commons.wikimedia.org/wiki/File:Fireworks_2.png
iconUrl = "320px-Fireworks_2.png", iconWidth = 160)
leaflet(data = top_boxes_sf) %>%
addTiles() %>%
addMarkers(popup = ~as.character(boxName),
label = ~as.character(boxName),
icon = fireworks_icon)
A converted version of this file in Jupyter Notebook format is automatically created with each rendering using `ipyrmd`.
`ipyrmd` is installed together with the other dependencies in the file `install.R`.
The Jupyter Notebook is intended to increase accessibility for users unfamiliar with R Markdown.
Note that the automatic conversion does not handle inline code statements within sentences.
ipyrmd --to ipynb --from Rmd -y -o sensebox-analysis.ipynb sensebox-analysis.Rmd
This document creates a reproducible workflow for open data from a public API. It leverages free and open software to create a transparent analysis, which can easily be opened, investigated, and even developed further in a web browser by opening the public code repository on a free cloud platform.

To increase reproducibility, the data is cached manually as JSON files (i.e. a text-based data format) and stored next to the analysis file. A user may adjust this workflow to their own needs, such as a different location or time period, by adjusting the R code and deleting the data files. In case the exploration platform ceases to exist, users may still recreate the environment themselves based on the files in the code repository. A snapshot of the files from the code repository, i.e. data, code, and runtime environment (as a Docker image), is stored in a reliable data repository. While the manual workflow of building and running the image is very likely to work in the future, the archived image captures the exact versions of the software the original author used.
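Rebuilding the environment locally from the repository could, for example, look as follows. This is a sketch only: the image tag `sensebox-binder` is an arbitrary local name, and the port and entrypoint depend on the base image actually used by the repository's Dockerfile.

```shell
# Clone the public code repository
git clone https://github.com/nuest/sensebox-binder.git
cd sensebox-binder

# Build the runtime environment locally; "sensebox-binder" is an arbitrary tag
docker build --tag sensebox-binder .

# Run the container interactively, exposing a local port for the
# notebook/RStudio interface (port depends on the base image)
docker run --rm -it -p 8888:8888 sensebox-binder
```

Should even Docker stop working, the Dockerfile itself still documents, in plain text, which system libraries and R packages the analysis depends on.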
The presented solution might seem complex, but it caters to many different levels of expertise (one-click opening in the browser vs. self-building of images and local inspection) and has several fail-safes (binder may disappear, the GitHub repository may be lost, Docker may stop working). The additional work is far outweighed by the advantages in transparency and openness.
This document is licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0).
devtools::session_info()