Normaußentemperatur auf Gemeindebasis (NAT): standard outdoor temperature per municipality
Note
The calculation of the NAT is the basis for sizing heating systems. The current calculation method (and data basis in Austria) is heavily outdated and does not take into account temperature changes caused by the climate crisis.
This repo introduces a corrected calculation of the NAT that accounts for temperature change. This correction generally leads to higher NAT values, creating a new basis for heating system planning and helping to prevent overdimensioning.
- 10th lowest value of the minima of 2-day mean temperatures over a 20-year period
- Areas at lower elevations are expected to show larger changes
- Above ~600 m, municipalities show negligible differences due to the correction
- Across all elevation ranges, a large spread in the magnitude of the impact can be observed
- Overall, the NAT changes more strongly for areas with currently high values when applying the correction factor than for areas with low values
- Within a certain range of variability, the change appears relatively homogeneous
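The definition above can be sketched in code. This is a minimal reading of it, assuming "minima" means annual minima of the 2-day running mean; the function name and input layout are illustrative, not the repo's actual API:

```python
import numpy as np

def nat_from_daily_means(tm, years=20):
    """Sketch of the NAT definition: per year, take the minimum of the
    2-day running mean temperature; over the 20-year period, take the
    10th-lowest of those annual minima.

    tm: list of 1-D arrays, one array of daily mean temperatures (TM)
    per year of the period.
    """
    assert len(tm) == years
    annual_minima = []
    for year in tm:
        two_day = (year[:-1] + year[1:]) / 2.0  # 2-day running mean
        annual_minima.append(two_day.min())     # coldest 2-day spell of the year
    return sorted(annual_minima)[9]             # 10th-lowest value
```

With 20 identical years the result is simply that year's coldest 2-day mean; the correction described in this repo would then be applied on top of values computed this way.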
Warning
This repo is mirrored from the GeoSphere GitLab, which may lead to errors when building or running it outside that ecosystem.
- Temperature data basis: SPARTACUS dataset from GeoSphere Austria
- Municipality data basis: Dataset of municipality centroids from Statistics Austria
Municipality data is read from a CSV file. The evaluation is carried out at the respective municipality centroid (specified by lat/lon). These input data were extracted directly from the municipality table on SYBKLIM. Other points can also be used; they do not have to be municipality data. However, the input file must have the following structure and be encoded as UTF-16:
"GKZ","BUNDESLAND","NAME","PLZ","SEEHOEHE","LATITUDE","LONGITUDE"
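A file with this structure can be written and read with the standard library alone. The filename and the sample row below are hypothetical; only the header and the UTF-16 encoding come from the requirements above:

```python
import csv

FIELDS = ["GKZ", "BUNDESLAND", "NAME", "PLZ", "SEEHOEHE", "LATITUDE", "LONGITUDE"]

# Hypothetical sample row; real files come from the SYBKLIM municipality table.
rows = [
    {"GKZ": "10101", "BUNDESLAND": "Burgenland", "NAME": "Eisenstadt",
     "PLZ": "7000", "SEEHOEHE": "182", "LATITUDE": "47.8456", "LONGITUDE": "16.5259"},
]

# Write a UTF-16 CSV with all fields quoted, matching the header shown above.
with open("gemeinden_demo.csv", "w", encoding="utf-16", newline="") as f:
    w = csv.DictWriter(f, fieldnames=FIELDS, quoting=csv.QUOTE_ALL)
    w.writeheader()
    w.writerows(rows)

# Read it back and pull out the evaluation coordinates and elevation.
with open("gemeinden_demo.csv", encoding="utf-16", newline="") as f:
    points = [(float(r["LATITUDE"]), float(r["LONGITUDE"]), float(r["SEEHOEHE"]))
              for r in csv.DictReader(f)]
```

Opening the file without `encoding="utf-16"` typically fails on the byte-order mark, which is the most common pitfall with this input format.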
- All paths are defined in the config file
- SPARTACUS files are cached to avoid dependency on climate normal periods
- Conda environment: Python 3.10 (use the env file)
- Extra package: CDO, required to merge all SPARTACUS files. If CDO is not available on the VM, a meaningful error is printed along with the command that can be executed on a VM where CDO is available.
- Multiprocessing is used to extract temperature data from the raster dataset. The number of cores must be adapted to the cores available on the machine; the default setting is 2 cores.
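The pooling pattern behind this can be sketched as follows. `math.sqrt` stands in for the repo's actual per-point extraction function (which is not shown here), and `N_CORES` mirrors the default of 2:

```python
import math
from multiprocessing import Pool

N_CORES = 2  # default from the notes; adapt to the cores available on the machine

# Hypothetical per-point inputs; in the real pipeline each task would be one
# municipality centroid to evaluate against the SPARTACUS raster.
tasks = [4.0, 9.0, 16.0]

if __name__ == "__main__":
    # Each worker processes one task; map preserves the input order.
    with Pool(processes=N_CORES) as pool:
        results = pool.map(math.sqrt, tasks)
```

Since each point is evaluated independently, the work parallelizes cleanly and the speedup is roughly linear in the core count until I/O dominates.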
- Preprocessing: Prepares the SPARTACUS files for evaluation. Calculates TM from TX and TN and stores the result in `temp`.
- Get timeseries from raster: Reads lat/lon values from the municipality input file and transforms them into the SPARTACUS projection. Evaluates the TM parameter (a different parameter can also be specified) at the given points and stores the results per point in a folder. Before running this step, a folder must be created manually for each point where the script will write its output. The output folder path is constructed as implemented here.
- Merge timeseries files: Merges the timeseries generated in step 2 into one file per site and creates an additional file with formatted output. Requires imports of the classes used in `merge_timeseries_files` and uses the same folder structure.
- Order files per state: Sorts the TM output files into files per federal state. Depending on the number of municipalities per state, multiple files are created so that each file contains at most 30 municipalities, while avoiding files with only 3 or 4 municipalities. All files are stored in the `TM_output` folder within `output`.
- get_normaußentemperatur: Calculates the standard outdoor temperature (NAT) for the municipalities in the input file. Reuses classes from the previous scripts. Saves the result as a UTF-16 encoded CSV file in `output`.
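The per-state splitting constraint (at most 30 municipalities per file, no near-empty remainder files) can be met by balancing chunk sizes instead of filling files greedily. A sketch, where the function name and the exact balancing rule are assumptions:

```python
import math

def split_sizes(n, max_per_file=30):
    """Return chunk sizes for n municipalities such that every chunk has
    at most max_per_file entries and sizes differ by at most 1, so a
    greedy split's tiny trailing file (e.g. 3 or 4 entries) never occurs.
    """
    n_files = math.ceil(n / max_per_file)
    base, extra = divmod(n, n_files)
    # 'extra' chunks get one additional entry; the rest get 'base'.
    return [base + 1] * extra + [base] * (n_files - extra)
```

For example, 61 municipalities become files of 21, 20, and 20 entries rather than 30, 30, and 1.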
To evaluate the raster at a point, a radius of 1e9 around the point (in the SPARTACUS projection) is applied to reduce the dataset. From this reduced dataset, the 15 grid points closest to the evaluation point are selected. These 15 points are then sorted by elevation difference, and the mean over the 9 best-suited points is calculated. This method is not particularly fast, so running it with multiprocessing on a VM with multiple cores is recommended; for all municipalities over a 30-year period, the computation took approximately 60 hours using 12 cores.
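The selection scheme above (15 nearest by distance, then the 9 best by elevation difference) can be sketched with NumPy. All argument names are illustrative; the initial radius cut is omitted since it only pre-filters the grid:

```python
import numpy as np

def evaluate_point(x, y, z, grid_xy, grid_z, grid_values,
                   n_nearest=15, n_best=9):
    """Evaluate a raster at point (x, y) with elevation z.

    grid_xy:     (N, 2) projected grid coordinates
    grid_z:      (N,)   grid-point elevations
    grid_values: (N,)   parameter values (e.g. TM) at the grid points
    """
    # Horizontal distance from the evaluation point to every grid point.
    dist = np.hypot(grid_xy[:, 0] - x, grid_xy[:, 1] - y)
    nearest = np.argsort(dist)[:n_nearest]        # 15 closest grid points
    # Re-rank those by absolute elevation difference to the point.
    dz = np.abs(grid_z[nearest] - z)
    best = nearest[np.argsort(dz)[:n_best]]       # 9 best elevation matches
    return float(grid_values[best].mean())
```

The elevation re-ranking is what makes the scheme usable in Alpine terrain: a horizontally close grid point on the wrong side of a ridge is discarded in favour of one at a similar elevation.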



