I recommend running this notebook in a conda environment, which can be created from the environment.yml file provided alongside it.
# Python modules
import os
import shutil
import pandas as pd
from time import time
from datetime import date

# Custom scripts
import scripts.download as download
import scripts.read as read
import scripts.preprocess as preprocess
import scripts.demand as demand
import scripts.cop as cop
import scripts.write as write
import scripts.metadata as metadata

%load_ext autoreload
%autoreload 2
%matplotlib inline
version = '2019-08-06'
changes = 'Minor revisions'
home_path = os.path.realpath('.')
input_path = os.path.join(home_path, 'input')
interim_path = os.path.join(home_path, 'interim')
output_path = os.path.join(home_path, 'output', version)

for path in [input_path, interim_path, output_path]:
    os.makedirs(path, exist_ok=True)
all_countries = ['AT', 'BE', 'BG', 'CZ', 'DE', 'FR', 'GB', 'HR',
                 'HU', 'IE', 'LU', 'NL', 'PL', 'RO', 'SI', 'SK']  # available

countries = all_countries  # selected for calculation
year_start = 2008
year_end = 2018
In the following, this notebook downloads weather data from the ECMWF server. Accessing this server requires an ECMWF account and API key, which the cell below sets as environment variables.
If you have already installed your ECMWF key (as an .ecmwfapirc file in your home directory), this step is skipped.
# Note: USERPROFILE points to the home directory on Windows only;
# on Linux/macOS, use os.path.expanduser('~') instead.
if not os.path.isfile(os.path.join(os.environ['USERPROFILE'], ".ecmwfapirc")):
    os.environ["ECMWF_API_URL"] = "https://api.ecmwf.int/v1"
    os.environ["ECMWF_API_KEY"] = "XXXXXXXXXXXXXXXXXXXXXX"
    os.environ["ECMWF_API_EMAIL"] = "[email protected]"
In the following, weather and population data is downloaded from the respective sources. For all years and countries, this takes around 45 minutes to run.
As mentioned above, weather data is downloaded from ECMWF, more specifically from the ERA-Interim archive. The following data is retrieved: long-term average wind conditions and temperature time series for the selected years (see the two download calls below).
All data is downloaded for the whole of Europe. Data that already exists on your computer is skipped in the download process.
download.wind(input_path)
download.temperatures(input_path, year_start, year_end)
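For orientation, a single ERA-Interim request via the ecmwf-api-client looks roughly like the sketch below. The actual requests live in scripts/download.py; the 'param' (2 m temperature) and 'area' values here are illustrative assumptions, not necessarily the ones used by this notebook.

from ecmwfapi import ECMWFDataServer

server = ECMWFDataServer()
server.retrieve({
    'class': 'ei',                # ERA-Interim
    'dataset': 'interim',
    'stream': 'oper',
    'type': 'an',                 # analysis fields
    'levtype': 'sfc',             # surface level
    'param': '167.128',           # 2 m temperature (illustrative choice)
    'date': '2008-01-01/to/2008-12-31',
    'time': '00/06/12/18',        # 6-hourly
    'grid': '0.75/0.75',
    'area': '75/-15/30/42.5',     # N/W/S/E bounding box (assumed)
    'format': 'netcdf',
    'target': os.path.join(input_path, 'temperature_2008.nc'),
})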
The population data from Eurostat comes on a 1 km² grid, which is transformed country by country to the 0.75 x 0.75° grid of the weather data in the following. Interim results are saved to and loaded from disk.
mapped_population = preprocess.map_population(input_path, countries, interim_path)
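Conceptually, the mapping aggregates the fine population cells into the coarse weather cells. A minimal sketch, assuming columns 'lat', 'lon' and 'inhabitants' (the real implementation in scripts/preprocess.py additionally handles country masks and interim caching):

def map_population_sketch(population):
    # Snap every 1 km² cell to its enclosing 0.75 x 0.75 degree weather
    # cell and sum the inhabitants per coarse cell.
    cell_lat = (population['lat'] / 0.75).round() * 0.75
    cell_lon = (population['lon'] / 0.75).round() * 0.75
    return population.groupby([cell_lat, cell_lon])['inhabitants'].sum()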
The temporal resolution of the weather data is adjusted in the preprocessing steps below.
To speed up the calculation, all weather data is filtered by the selected countries.
wind = preprocess.wind(input_path, mapped_population)
temperature = preprocess.temperature(input_path, year_start, year_end, mapped_population)
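The resampling boils down to pandas upsampling; a minimal sketch, assuming 6-hourly input (the actual logic, including the country filter, is in scripts/preprocess.py):

def to_hourly_sketch(six_hourly):
    # Upsample a 6-hourly time series to hourly resolution by linear
    # interpolation between the original time steps.
    return six_hourly.resample('H').interpolate(method='linear')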
For all years and countries, the calculation of heat demand time series takes around 20 minutes to run.
To capture the thermal inertia of buildings, the daily reference temperature is calculated as the weighted mean of the ambient air temperature on the actual day and the three preceding days.
reference_temperature = demand.reference_temperature(temperature['air'])
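To illustrate the weighting, a sketch with geometrically decaying weights (factor 0.5 per day, as in the BDEW standard load profile method); the exact weights are an assumption here, and demand.reference_temperature is authoritative:

def reference_temperature_sketch(daily_air_temperature):
    # Weighted mean of the actual day and the three preceding days
    # with geometrically decaying weights 1, 0.5, 0.25, 0.125.
    weights = [1, 0.5, 0.25, 0.125]
    weighted = sum(daily_air_temperature.shift(i) * w for i, w in enumerate(weights))
    return weighted / sum(weights)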
daily_parameters = read.daily_parameters(input_path)
daily_heat = demand.daily_heat(reference_temperature, wind, daily_parameters)
daily_water = demand.daily_water(reference_temperature, wind, daily_parameters)
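The daily profiles are, in spirit, BDEW-style sigmoid functions of the reference temperature (with the parameters additionally depending on wind conditions). A sketch with made-up parameter values; the real ones come from daily_parameters:

def daily_demand_sketch(t_ref, A=3.0, B=-37.2, C=5.7, D=0.1):
    # Sigmoid profile: high heat demand for cold reference temperatures,
    # flattening towards a base load D (hot water) in summer. Valid for
    # t_ref below the 40 °C pivot; all parameter values are illustrative.
    return A / (1 + (B / (t_ref - 40.0)) ** C) + D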
hourly_parameters = read.hourly_parameters(input_path)
hourly_heat = demand.hourly_heat(daily_heat, reference_temperature, hourly_parameters)
hourly_water = demand.hourly_water(daily_water, reference_temperature, hourly_parameters)
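The hourly shares depend on the hour of day and on how cold the day is. A rough sketch of the disaggregation, in which the 5 °C temperature binning and the table layout are assumptions (see read.hourly_parameters and demand.hourly_heat for the real logic):

import pandas as pd

def hourly_demand_sketch(daily_demand, reference_temperature, hourly_shares):
    # hourly_shares: one row per 5 °C temperature class, 24 columns
    # (hours of the day), each row summing to one.
    temperature_class = (reference_temperature // 5 * 5).astype(int)
    pieces = []
    for day, total in daily_demand.items():
        shares = hourly_shares.loc[temperature_class[day]]
        hours = pd.date_range(day, periods=24, freq='H')
        pieces.append(pd.Series(total * shares.to_numpy(), index=hours))
    return pd.concat(pieces)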
hourly_space = (hourly_heat - hourly_water).clip(lower=0)
The spatial time series are weighted with the population and normalized to a yearly demand of 1 TWh each. Years included in the building database are scaled accordingly. The time series are not spatially aggregated yet because the spatial detail is still needed for the COP calculation.
building_database = read.building_database(input_path)
spatial_space = demand.finishing(hourly_space, mapped_population, building_database['space'])
spatial_water = demand.finishing(hourly_water, mapped_population, building_database['water'])
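A sketch of the population weighting and the per-year normalization, assuming demand in MWh so that 1 TWh = 1e6 MWh (the scaling with the building database happens inside demand.finishing):

def finishing_sketch(hourly_demand, population):
    # Weight each grid cell's profile with its population (columns are
    # assumed to align with the population index), then scale every
    # calendar year to a total of 1 TWh (1e6 MWh).
    weighted = hourly_demand * population
    yearly_totals = weighted.groupby(weighted.index.year).transform('sum')
    return weighted / yearly_totals * 1e6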
spatial_space.to_pickle(os.path.join(interim_path, 'spatial_space'))
spatial_water.to_pickle(os.path.join(interim_path, 'spatial_water'))
spatial_space = pd.read_pickle(os.path.join(interim_path, 'spatial_space'))[countries]
spatial_water = pd.read_pickle(os.path.join(interim_path, 'spatial_water'))[countries]
All heat demand time series are aggregated country-wise and combined into one data frame.
final_heat = demand.combine(spatial_space, spatial_water)
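Conceptually, the aggregation sums all grid-cell columns that share a country label. An illustrative one-liner, assuming a 'country' level in the column MultiIndex (demand.combine additionally concatenates the space and water heat series):

national_space = spatial_space.groupby(level='country', axis=1).sum()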
For all years and countries, the calculation of the coefficient of performance (COP) of heat pumps takes around 5 minutes to run.
For air-sourced, ground-sourced and groundwater-sourced heat pumps (ASHP, GSHP and WSHP), the relevant heat source temperatures are calculated.
source_temperature = cop.source_temperature(temperature)
Heat sink temperatures, i.e. the temperature level at which the heat pumps have to provide heat, are calculated for floor heating, radiator heating and warm water.
sink_temperature = cop.sink_temperature(temperature)
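A sketch of such heating curves; the slopes, offsets and the fixed hot-water set point below are illustrative assumptions, not necessarily the values inside cop.sink_temperature:

def sink_temperature_sketch(t_amb):
    # Radiators need hotter water the colder it is outside; floor heating
    # runs at lower temperatures; hot water is prepared at a fixed set
    # point. All numbers are illustrative assumptions.
    return {
        'radiator': 40 - 1.0 * t_amb,
        'floor': 30 - 0.5 * t_amb,
        'water': 50.0,
    }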
The COP is derived from the temperature difference between heat sources and sinks using COP curves.
cop_parameters = read.cop_parameters(input_path)
spatial_cop = cop.spatial_cop(source_temperature, sink_temperature, cop_parameters)
spatial_cop.to_pickle(os.path.join(interim_path, 'spatial_cop'))
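The curves are quadratic in the temperature lift between sink and source; a sketch with illustrative coefficients (the real ones are read from cop_parameters above):

def cop_curve_sketch(t_sink, t_source, a=6.08, b=-0.09, c=0.0005):
    # Quadratic fit of the COP over the temperature lift delta_t; the
    # coefficients are illustrative, roughly in the range of published
    # fits for air-sourced heat pumps.
    delta_t = t_sink - t_source
    return a + b * delta_t + c * delta_t ** 2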
spatial_cop = pd.read_pickle(os.path.join(interim_path, 'spatial_cop'))[countries]
The spatial COP time series are weighted with the spatial heat demand and aggregated into national time series. The national time series are corrected for part-load losses.
final_cop = cop.finishing(spatial_cop, spatial_space, spatial_water)
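Conceptually, this weighting is a demand-weighted harmonic mean: per-cell electricity consumption is heat divided by COP, and the national COP is total heat over total electricity. A sketch without the part-load correction:

def national_cop_sketch(heat, cop):
    # heat and cop: frames with identical spatial columns.
    electricity = heat / cop
    return heat.sum(axis=1) / electricity.sum(axis=1)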
COP averages (performance factors) are calculated and saved to disk for validation purposes.
cop.validation(final_cop, final_heat, interim_path, 'corrected')
cop.validation(cop.finishing(spatial_cop, spatial_space, spatial_water, correction=1),
               final_heat, interim_path, 'uncorrected')
As for the OPSD "Time Series" package, data are provided in three different "shapes": single-index (one flat header per column), multi-index (hierarchical columns, convenient in pandas), and stacked (long format).
The different shapes are created before they are saved to files.
shaped_dfs = write.shaping(final_heat, final_cop)
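A sketch of how the three shapes relate, starting from a frame with MultiIndex columns (write.shaping is the authoritative implementation):

def shapes_sketch(multi):
    # singleindex: flatten the column MultiIndex into underscore-joined
    # headers; stacked: move all column levels into the row index,
    # which yields one long series.
    single = multi.copy()
    single.columns = ['_'.join(map(str, col)) for col in multi.columns]
    stacked = multi.stack(list(range(multi.columns.nlevels)))
    return {'multiindex': multi, 'singleindex': single, 'stacked': stacked}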
Write data to an SQL-database, ...
write.to_sql(shaped_dfs, output_path, home_path)
and to CSV.
Writing to Excel takes extremely long. As a workaround, a copy of the multi-indexed data is written to CSV and manually converted to Excel.
metadata.make_json(shaped_dfs, version, changes, year_start, year_end, output_path)
shutil.copytree(input_path, os.path.join(output_path, 'original_data'))