# 1. Settings¶

## Setup¶

I recommend running this notebook in a conda environment, which can be created from the environment.yml file provided with this notebook.
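
For example, assuming conda is installed (the environment name is whatever environment.yml defines):

    conda env create -f environment.yml
    conda activate <environment-name>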

## Import Python libraries¶

In [ ]:
# Python modules
import os
import shutil
import pandas as pd
from time import time
from datetime import date

# Custom scripts
import scripts.download as download
import scripts.read as read
import scripts.preprocess as preprocess
import scripts.demand as demand
import scripts.cop as cop
import scripts.write as write
import scripts.metadata as metadata

%matplotlib inline


## Version and changes¶

In [ ]:
version = '2019-08-06'
changes = 'Minor revisions'


## Make directories¶

In [ ]:
home_path = os.path.realpath('.')

input_path = os.path.join(home_path, 'input')
interim_path = os.path.join(home_path, 'interim')
output_path = os.path.join(home_path, 'output', version)

for path in [input_path, interim_path, output_path]:
    os.makedirs(path, exist_ok=True)


## Select geographical and temporal scope¶

In [ ]:
all_countries = ['AT', 'BE', 'BG', 'CZ', 'DE', 'FR', 'GB', 'HR',
                 'HU', 'IE', 'LU', 'NL', 'PL', 'RO', 'SI', 'SK']  # available
countries = all_countries  # selected for calculation

In [ ]:
year_start = 2008
year_end = 2018


## Set ECMWF access key¶

In the following, this notebook downloads weather data from the ECMWF server. To access this server, follow the steps below:

1. Register at https://apps.ecmwf.int/registration/.
2. Retrieve your key at https://api.ecmwf.int/v1/key/.

If your ECMWF key is already stored on your machine, this step is skipped.

In [ ]:
if not os.path.isfile(os.path.join(os.environ['USERPROFILE'], ".ecmwfapirc")):
    # Note: USERPROFILE is Windows-specific; on Linux/macOS the key file lives under HOME
    os.environ["ECMWF_API_URL"] = "https://api.ecmwf.int/v1"
    os.environ["ECMWF_API_KEY"] = "XXXXXXXXXXXXXXXXXXXXXX"
    os.environ["ECMWF_API_EMAIL"] = "[email protected]"


# 2. Download¶

In the following, weather and population data is downloaded from the respective sources. For all years and countries, this takes around 45 minutes to run.

Note that standard load profile parameters from BGW/BDEW and energy statistics from the EU Building Database are already provided with this notebook in the input directory.

## Weather data¶

As mentioned above, weather data is downloaded from ECMWF, more specifically from the ERA-Interim archive. The following data is retrieved:

• Wind: wind speed at 10 m above ground for the heating seasons (October-April) of 1979-2016, in monthly resolution
• Temperature: ambient air temperature at 2 m above ground for the selected years, in six-hourly resolution

In [ ]:
download.wind(input_path)
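
The temperature data listed above is downloaded analogously. The call below mirrors the wind call; its name and signature are an assumption about the scripts.download module:

In [ ]:
# Assumed companion call for the six-hourly temperature data
download.temperatures(input_path, year_start, year_end)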


## Population data¶

In [ ]:
download.population(input_path)


# 3. Preprocessing¶

Population and weather data is preprocessed. This takes around 10 minutes to run.

## Re-mapping population data¶

The population data from Eurostat features a 1 km² grid, which is transformed country by country to the 0.75 x 0.75° grid of the weather data in the following. Interim results are saved to and loaded from disk.
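
For intuition, a minimal sketch of the regridding idea, assuming the fine grid arrives as a table of cell coordinates with a population column (the actual implementation is preprocess.map_population):

In [ ]:
# Sketch only: aggregate a fine population grid onto a coarse grid by
# flooring coordinates to 0.75° cell corners and summing population
import numpy as np

def to_coarse_grid(fine, res=0.75):
    coarse = fine.assign(lat=np.floor(fine['lat'] / res) * res,
                         lon=np.floor(fine['lon'] / res) * res)
    return coarse.groupby(['lat', 'lon'])['population'].sum()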

In [ ]:
mapped_population = preprocess.map_population(input_path, countries, interim_path)

In [ ]:
mapped_population['LU']


## Preparing weather data¶

The temporal resolution of the weather data is changed as follows:

• Temperatures (air and soil): from six-hourly to hourly resolution
• Wind: from monthly resolution to the average over all heating periods from 1979 to 2016

To speed up the calculation, all weather data is filtered by the selected countries.
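
A plausible sketch of the temperature upsampling step, assuming a six-hourly DatetimeIndex (the actual logic lives in preprocess.temperature):

In [ ]:
# Sketch only: linear interpolation from six-hourly to hourly resolution
def upsample_to_hourly(sixhourly):
    return sixhourly.resample('h').interpolate(method='linear')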

In [ ]:
wind = preprocess.wind(input_path, mapped_population)

In [ ]:
temperature = preprocess.temperature(input_path, year_start, year_end, mapped_population)


# 4. Heat demand time series¶

For all years and countries, the calculation of heat demand time series takes around 20 minutes to run.

## Reference temperature¶

To capture the thermal inertia of buildings, the daily reference temperature is calculated as the weighted mean of the ambient air temperature of the actual and the three preceding days.
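
A minimal sketch of this weighting, assuming daily mean temperatures in a DataFrame and the geometric weights commonly used with the BDEW method (the actual implementation is demand.reference_temperature):

In [ ]:
# Sketch only: weighted mean over the current and three preceding days
def reference_temperature_sketch(daily_air):
    weights = [1, 0.5, 0.25, 0.125]  # assumed geometric weighting
    weighted = sum(w * daily_air.shift(i) for i, w in enumerate(weights))
    return weighted / sum(weights)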

In [ ]:
reference_temperature = demand.reference_temperature(temperature['air'])


## Daily demand¶

Daily demand factors are derived from the reference temperatures using profile functions as described in BDEW.
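
These profile functions are sigmoids of the reference temperature. A sketch of the conventional functional form, where A, B, C and D are building-type-specific parameters read from daily_parameters below:

In [ ]:
# Sketch only: BDEW sigmoid mapping a reference temperature to a daily factor
def sigmoid(theta_ref, A, B, C, D, theta_0=40.0):
    # theta_0 = 40 °C is the conventional reference point
    return A / (1 + (B / (theta_ref - theta_0)) ** C) + D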

In [ ]:
daily_parameters = read.daily_parameters(input_path)

In [ ]:
daily_heat = demand.daily_heat(reference_temperature,
                               wind,
                               daily_parameters)

In [ ]:
daily_water = demand.daily_water(reference_temperature,
                                 wind,
                                 daily_parameters)


## Hourly demand¶

Hourly demand factors are calculated from the daily demand based on hourly factors from BGW.
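
A simplified sketch of the disaggregation, assuming 24 hourly weights per day (in the actual implementation, the BGW factors also depend on the temperature level):

In [ ]:
# Sketch only: spread daily demand over 24 hours with normalized weights
import numpy as np

def to_hourly(daily, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # preserve each day's total
    index = pd.date_range(daily.index[0], periods=len(daily) * 24, freq='h')
    values = np.repeat(daily.to_numpy(), 24) * np.tile(w, len(daily))
    return pd.Series(values, index=index)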

In [ ]:
hourly_parameters = read.hourly_parameters(input_path)

In [ ]:
hourly_heat = demand.hourly_heat(daily_heat,
                                 reference_temperature,
                                 hourly_parameters)

In [ ]:
hourly_water = demand.hourly_water(daily_water,
                                   reference_temperature,
                                   hourly_parameters)

In [ ]:
hourly_space = (hourly_heat - hourly_water).clip(lower=0)


## Weight and scale¶

The spatial time series are weighted with the population and normalized to a yearly demand of 1 TWh each. Years included in the building database are scaled accordingly. The time series are not spatially aggregated yet because the spatial time series are needed for the COP calculation.
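
The normalization itself reduces to dividing by the annual total; a sketch assuming demand values in MWh (1 TWh = 10^6 MWh):

In [ ]:
# Sketch only: scale a profile so it sums to 1 TWh per year (values in MWh)
def normalize_to_1twh(profile):
    return profile / profile.sum() * 1e6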

In [ ]:
building_database = read.building_database(input_path)

In [ ]:
spatial_space = demand.finishing(hourly_space, mapped_population, building_database['space'])

In [ ]:
spatial_water = demand.finishing(hourly_water, mapped_population, building_database['water'])


## Safepoint¶

The following cells can be used to save and reload the spatial hourly time series.

In [ ]:
spatial_space.to_pickle(os.path.join(interim_path, 'spatial_space'))
spatial_water.to_pickle(os.path.join(interim_path, 'spatial_water'))

In [ ]:
spatial_space = pd.read_pickle(os.path.join(interim_path, 'spatial_space'))[countries]
spatial_water = pd.read_pickle(os.path.join(interim_path, 'spatial_water'))[countries]


## Aggregate and combine¶

All heat demand time series are aggregated country-wise and combined into one data frame.

In [ ]:
final_heat = demand.combine(spatial_space, spatial_water)


# 5. COP time series¶

For all years and countries, the calculation of the coefficient of performance (COP) of heat pumps takes around 5 minutes to run.

## Source temperature¶

For air-sourced, ground-sourced and groundwater-sourced heat pumps (ASHP, GSHP and WSHP), the relevant heat source temperatures are calculated.

In [ ]:
source_temperature = cop.source_temperature(temperature)


## Sink temperatures¶

Heat sink temperatures, i.e. the temperature level at which the heat pumps have to provide heat, are calculated for floor heating, radiator heating and warm water.
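
For illustration, a hypothetical heating-curve sketch: sink temperatures fall as the ambient temperature rises for space heating, while the hot water temperature stays constant. All set points and slopes below are illustrative assumptions, not the notebook's parameters:

In [ ]:
# Sketch only: illustrative heating curves (all values are assumptions)
def sink_temperature_sketch(t_ambient):
    return {'radiator': 40 - 1.0 * t_ambient,  # high-temperature system
            'floor': 30 - 0.5 * t_ambient,     # low-temperature system
            'water': 50.0}                     # constant set point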

In [ ]:
sink_temperature = cop.sink_temperature(temperature)


## COP¶

The COP is derived from the temperature difference between heat sources and sinks using COP curves.
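
A common choice is a quadratic fit in the temperature difference between sink and source; the coefficients are read from cop_parameters below, so the signature here is only a sketch:

In [ ]:
# Sketch only: quadratic COP curve in delta_t = t_sink - t_source
def cop_curve(delta_t, a, b, c):
    return a + b * delta_t + c * delta_t ** 2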

In [ ]:
cop_parameters = read.cop_parameters(input_path)

In [ ]:
spatial_cop = cop.spatial_cop(source_temperature, sink_temperature, cop_parameters)


## Safepoint¶

The following cells can be used to save and reload the spatial hourly time series.

In [ ]:
spatial_cop.to_pickle(os.path.join(interim_path, 'spatial_cop'))

In [ ]:
spatial_cop = pd.read_pickle(os.path.join(interim_path, 'spatial_cop'))[countries]


## Aggregation and correction¶

The spatial COP time series are weighted with the spatial heat demand and aggregated into national time series. The national time series are corrected for part-load losses.
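
The demand weighting amounts to a weighted mean over grid cells; a sketch assuming grid cells in the columns and hours in the index:

In [ ]:
# Sketch only: heat-demand-weighted spatial mean of the COP
def aggregate_cop(spatial_cop, spatial_demand):
    return (spatial_cop * spatial_demand).sum(axis=1) / spatial_demand.sum(axis=1)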

In [ ]:
final_cop = cop.finishing(spatial_cop, spatial_space, spatial_water)


## COP averages¶

COP averages (performance factors) are calculated and saved to disk for validation purposes.
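
Such a performance factor is the ratio of total heat to total electricity, where hourly electricity is demand divided by the hourly COP; a sketch:

In [ ]:
# Sketch only: seasonal performance factor from hourly demand and COP
def performance_factor(heat, cop_series):
    electricity = heat / cop_series  # hourly electricity consumption
    return heat.sum() / electricity.sum()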

In [ ]:
cop.validation(final_cop, final_heat, interim_path, 'corrected')

In [ ]:
cop.validation(cop.finishing(spatial_cop, spatial_space, spatial_water, correction=1),
               final_heat, interim_path, 'uncorrected')


# 6. Writing¶

For data and metadata, this takes around 5 minutes to run.

## Data¶

As for the OPSD "Time Series" package, data are provided in three different "shapes":

• SingleIndex (easy to read for humans, compatible with datapackage standard, small file size)
  • File format: CSV, SQLite
• MultiIndex (easy to read into GAMS, not compatible with datapackage standard, small file size)
  • File format: CSV, Excel
• Stacked (compatible with datapackage standard, large file size, many rows, too many for Excel)
  • File format: CSV

The different shapes are created before they are saved to files.
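
For intuition, a sketch of how the SingleIndex and Stacked shapes can be derived from a frame with MultiIndex columns (write.shaping is the actual implementation):

In [ ]:
# Sketch only: derive SingleIndex and Stacked shapes from MultiIndex columns
def shape_sketch(multiindex_df):
    single = multiindex_df.copy()
    single.columns = ['_'.join(map(str, col)) for col in multiindex_df.columns]
    stacked = multiindex_df.stack(list(range(multiindex_df.columns.nlevels)))
    return single, stacked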

In [ ]:
shaped_dfs = write.shaping(final_heat, final_cop)


Write data to an SQLite database, ...

In [ ]:
write.to_sql(shaped_dfs, output_path, home_path)


and to CSV.

In [ ]:
write.to_csv(shaped_dfs, output_path)


Writing to Excel is extremely slow. As a workaround, a copy of the multi-indexed data is written to CSV and manually converted to Excel.

The metadata is reported in a JSON file.

In [ ]:
metadata.make_json(shaped_dfs, version, changes, year_start, year_end, output_path)


## Copy input data¶

In [ ]:
shutil.copytree(input_path, os.path.join(output_path, 'original_data'))


## Checksums¶

In [ ]:
metadata.checksums(output_path, home_path)
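
A plausible stand-in for what this step produces, hashing every file in the output directory (metadata.checksums is the actual implementation; algorithm and layout are assumptions):

In [ ]:
# Sketch only: print a SHA-256 digest for every file in a directory
import hashlib

def checksums_sketch(directory):
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path):
            with open(path, 'rb') as f:
                print(hashlib.sha256(f.read()).hexdigest(), name)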