User manual¶
Installation¶
First, clone the git repository in a directory of your choice using a Command Prompt window:
~\directory-of-my-choice> git clone https://github.com/tum-ens/pyPRIMA.git
We recommend using conda and installing the environment from the file gen_mod.yml
that you can find in the repository. In the Command Prompt window, type:
$ cd pyPRIMA\env\
$ conda env create -f gen_mod.yml
Then activate the environment:
$ conda activate gen_mod
In the folder code, you will find multiple files:
File | Description
---|---
config.py | used for configuration, see below.
runme.py | main file, which will be run later.
lib\initialization.py | used for initialization.
lib\input_maps.py | used to generate input maps for the scope.
lib\generate_models.py | used to generate the model files from intermediate files.
lib\generate_intermediate_files.py | used to generate intermediate files from raw data.
lib\spatial_functions.py | contains helping functions related to maps, coordinates and indices.
lib\correction_functions.py | contains helping functions for data correction/cleaning.
lib\util.py | contains minor helping functions and the necessary python libraries to be imported.
config.py¶
This file contains the user preferences, the links to the input files, and the paths where the outputs should be saved. The paths are initialized in a way that follows a particular folder hierarchy. However, you can change the hierarchy as you wish.
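As an illustration of what "a particular folder hierarchy" can look like, the sketch below builds a path dictionary anchored at a root folder. The dictionary keys and folder names are made up for this example; the actual names used in config.py may differ.

```python
import os

def initialize_paths(root, region, year):
    """Sketch of a path dictionary following a fixed folder hierarchy.

    The keys and folder names here are illustrative, not the actual
    ones defined in config.py.
    """
    paths = {}
    # Raw inputs live under one common folder ...
    paths["raw"] = os.path.join(root, "01 Raw inputs")
    # ... intermediate files under another, split by region and year ...
    paths["intermediate"] = os.path.join(root, "02 Intermediate files", region, str(year))
    # ... and generated model files under a third one.
    paths["model"] = os.path.join(root, "03 Model files", region, str(year))
    return paths
```

Because all outputs are derived from a single root, moving the whole hierarchy only requires changing one value.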
runme.py¶
runme.py calls the main functions of the code:
```python
from lib.initialization import initialization
from lib.generate_intermediate_files import *
from lib.correction_functions import *
from lib.generate_models import *

if __name__ == "__main__":

    paths, param = initialization()

    ## Clean raw data
    clean_residential_load_profile(paths, param)
    clean_commercial_load_profile(paths, param)
    clean_industry_load_profile(paths, param)
    clean_agriculture_load_profile(paths, param)
    clean_streetlight_load_profile(paths, param)
    clean_GridKit_Europe(paths, param)
    clean_sector_shares_Eurostat(paths, param)
    clean_load_data_ENTSOE(paths, param)
    distribute_renewable_capacities_IRENA(paths, param)
    clean_processes_and_storage_FRESNA(paths, param)

    ## Generate intermediate files
    generate_sites_from_shapefile(paths, param)
    generate_load_timeseries(paths, param)
    generate_transmission(paths, param)
    generate_intermittent_supply_timeseries(paths, param)
    generate_processes(paths, param)
    generate_storage(paths, param)
    generate_commodities(paths, param)

    ## Generate model files
    generate_urbs_model(paths, param)
    generate_evrys_model(paths, param)
```
Recommended input sources¶
Load time series for countries¶
ENTSO-E publishes (or used to publish - the service was discontinued in November 2019) hourly load profiles for each European country that is part of ENTSO-E.
Sectoral load profiles¶
The choice of the load profiles is not too critical, since the sectoral load profiles will be scaled according to their shares in the yearly demand, and their shapes edited to match the hourly load profile. Nevertheless, examples of load profiles for Germany can be obtained from the BDEW.
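The scaling step described above can be sketched as follows. The sector names, profile values, and shares are made up for illustration; the actual functions also reshape the profiles so that their sum matches the measured hourly country load.

```python
def scale_sectoral_profiles(profiles, sector_shares, total_demand):
    """Scale normalized sectoral load profiles so that each sector's
    yearly energy matches its share of the total yearly demand.

    profiles      : dict mapping sector name to a list of hourly values
                    (arbitrary units, e.g. a normalized BDEW profile)
    sector_shares : dict mapping sector name to its share of yearly demand
    total_demand  : total yearly demand (e.g. in MWh)
    """
    scaled = {}
    for sector, profile in profiles.items():
        target = sector_shares[sector] * total_demand  # yearly energy of this sector
        factor = target / sum(profile)                 # rescaling factor
        scaled[sector] = [v * factor for v in profile]
    return scaled
```

Because only the scale changes, the shape of the chosen profile is preserved, which is why the exact source of the profiles is not too critical.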
Power plants and storage units¶
The powerplantmatching package within FRESNA extracts a standardized power plant database that combines several other databases covering Europe. In this repository, all non-renewable power plants, all storage units, and some renewable power plants (e.g. geothermal) are obtained from this database. Since the capacities for most renewable technologies are inaccurate, they are obtained from another source (see below).
Renewable installed capacities¶
Renewable electricity capacity and generation statistics are obtained from the Query Tool of IRENA. The user has to create a query that includes all countries (but no groups of countries, such as continents) and all technologies (but no groups of technologies) for a particular year, and name the file IRENA_RE_electricity_statistics_allcountries_alltech_YEAR.csv.
This dataset has global coverage; however, it does not provide the exact location of each project. The code includes an algorithm to distribute the renewable capacities spatially.
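A minimal sketch of such a distribution, allocating a national capacity over pixels proportionally to a potential map, could look like this. The function name and the proportional rule are assumptions for illustration; the actual algorithm in the code may apply additional rules (e.g. placing units one by one at the best sites).

```python
def distribute_capacity(total_capacity_mw, potential_map):
    """Distribute a country's installed capacity (as reported by IRENA)
    over pixels, proportionally to a renewable potential map.

    total_capacity_mw : national installed capacity in MW
    potential_map     : list of pixel potential values for that country
    """
    total_potential = sum(potential_map)
    if total_potential == 0:
        # No potential information: spread the capacity uniformly.
        return [total_capacity_mw / len(potential_map)] * len(potential_map)
    return [total_capacity_mw * p / total_potential for p in potential_map]
```

The allocation conserves the national total by construction, so the country statistics remain consistent after the spatial disaggregation.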
Renewable potential maps¶
These maps are needed to distribute the renewable capacities spatially, since IRENA does not provide their exact locations.
You can use any potential maps, provided that they have the same extent as the geographic scope. Adjust the resolution parameters in config.py accordingly. Such maps can be generated using the GitHub repository tum-ens/pyGRETA.
Renewable time series¶
Similarly, the renewable time series can be generated using the GitHub repository tum-ens/pyGRETA. This repository is particularly useful if the model regions are unconventional.
Transmission lines¶
High-voltage power grid data for Europe and North America can be obtained from GridKit, which used OpenStreetMap as a primary data source. In this repository, we only use the file with the lines (links.csv). In general, the minimum requirements for any data source are that the coordinates for the line vertices and the voltage are provided.
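To see why vertex coordinates are the minimum requirement, note that the line length (needed, for example, to estimate impedances and costs) can be derived from them. The sketch below computes a great-circle length with the haversine formula; the coordinate format in links.csv may differ from the simple vertex list assumed here.

```python
from math import radians, sin, cos, asin, sqrt

def line_length_km(vertices):
    """Approximate the length of a transmission line from its vertex
    coordinates, given as (longitude, latitude) pairs in degrees.

    Illustrative only: it sums the great-circle (haversine) distances
    between consecutive vertices.
    """
    R = 6371.0  # mean Earth radius in km
    length = 0.0
    for (lon1, lat1), (lon2, lat2) in zip(vertices[:-1], vertices[1:]):
        lon1, lat1, lon2, lat2 = map(radians, (lon1, lat1, lon2, lat2))
        a = sin((lat2 - lat1) / 2) ** 2 \
            + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        length += 2 * R * asin(sqrt(a))
    return length
```

The voltage attribute is then used to classify each line (e.g. by voltage level), which is why both pieces of information are required from any data source.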
Other assumptions¶
Currently, other assumptions are provided in tables filled by the modelers. Ideally, machine-readable datasets providing the missing information are collected and new modules are written to read them and extract that information.
Recommended workflow¶
The script is designed to be modular and split into three main modules: lib.correction_functions, lib.generate_intermediate_files, and lib.generate_models.
Warning
The outputs of each module serve as inputs to the following module. Therefore, the user will have to run the script sequentially.
The use cases associated with each module are presented below, in the order in which the user will have to run them.
It is recommended to thoroughly read through the configuration file config.py and modify the input paths and computation parameters before starting the runme.py script. Once the configuration file is set, open the runme.py file to define what use case you will be using the script for.
Correction and cleaning of raw input data¶
Each function in this module is designed for a specific data set (usually mentioned at the end of the function name). The pre-processing steps include filtering, filling in missing values, correcting/overwriting erroneous values, aggregating and disaggregating entries, and deleting/converting/renaming the attributes.
At this stage, the obtained files are valid for the whole geographic scope, and do not depend on the model regions.
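One of the typical cleaning steps mentioned above, filling in missing values, can be sketched as a linear interpolation between the nearest valid neighbours. This is a generic illustration; the actual cleaning functions are data-set specific.

```python
def fill_gaps(values):
    """Fill missing entries (None) in a time series by linear
    interpolation between the nearest valid neighbours; gaps at the
    edges are filled by extending the nearest valid value.
    """
    filled = list(values)
    n = len(filled)
    for i in range(n):
        if filled[i] is None:
            # Find the nearest valid values to the left and right.
            j = i - 1
            while j >= 0 and filled[j] is None:
                j -= 1
            k = i + 1
            while k < n and filled[k] is None:
                k += 1
            if j >= 0 and k < n:
                frac = (i - j) / (k - j)
                filled[i] = filled[j] + frac * (filled[k] - filled[j])
            elif j >= 0:
                filled[i] = filled[j]  # gap at the end of the series
            elif k < n:
                filled[i] = filled[k]  # gap at the start of the series
    return filled
```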
Generation of intermediate files¶
The functions in this module read the cleaned input data and adapt it to the model regions. They also expand the attributes based on assumptions, to cover the data needs of all the supported models. The results are saved in individual CSV files that are model-independent. These files can be shared with modelers whose models are not supported; they may be able to adjust the files to their model input requirements and use them.
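Adjusting the model-independent files to another model's input requirements often amounts to renaming attributes. A sketch of such an adaptation is shown below; the attribute names in the mapping are made up for illustration and are not the actual urbs/evrys names.

```python
def adapt_columns(records, column_map):
    """Rename the attributes of model-independent records to the naming
    convention of a specific model framework.

    records    : list of dicts, e.g. rows read from an intermediate CSV
    column_map : mapping from model-independent attribute names to the
                 framework-specific ones; unmapped names are kept as-is
    """
    return [{column_map.get(key, key): value for key, value in row.items()}
            for row in records]
```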
Generation of model input files¶
Here, the input files are adapted to the requirements of the supported model frameworks (currently urbs and evrys). Input files as needed by the scripts of urbs and evrys are generated at the end of this step.