
Code verification & benchmarking

The procedure of testing the overall functionality of the PALM-4U components is organized into three categories:

Fig 1: Testing categories.

Simulation setups for category 1

Category 1 includes checks of the PALM-4U source code for compliance with the general Fortran coding standard as well as with the specific PALM formatting rules. Further, the code is tested for run-time errors and compiler-dependent functionality issues, and simulation results are checked for elementary plausibility. For this purpose, (S)mall setups are sufficient, suitable to run on desktop PCs (2-4 processor cores) within a time frame of O(1-10 min).
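As an illustration of such an elementary plausibility check, the following minimal Python sketch reads a 3D NetCDF output file of a small test run and verifies that wind speed and potential temperature stay within physically reasonable bounds. The file and variable names (test_urban_3d.nc, u, theta) are assumptions and may need to be adapted to the actual output configuration of the run.

  # Minimal plausibility check for a small category-1 test run.
  # File and variable names ("test_urban_3d.nc", "u", "theta") are assumptions
  # and may need to be adapted to the actual output configuration.
  from netCDF4 import Dataset
  import numpy as np

  def check_plausibility(path="test_urban_3d.nc"):
      with Dataset(path) as nc:
          # Masked points (e.g. inside buildings) are converted to NaN
          u = np.ma.filled(nc.variables["u"][:], np.nan)          # wind component (m/s)
          theta = np.ma.filled(nc.variables["theta"][:], np.nan)  # potential temperature (K)

      assert not np.isnan(u).all(), "u contains only fill values"
      assert np.nanmax(np.abs(u)) < 50.0, "unrealistically high wind speed"
      assert 250.0 < np.nanmin(theta) < np.nanmax(theta) < 330.0, "unrealistic potential temperature"
      print("elementary plausibility checks passed")

  if __name__ == "__main__":
      check_plausibility()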

Since the release of PALM 5.0, PALM-4U supports NetCDF input files, the so-called static and dynamic drivers, as input data for the PALM model system. The drivers are based on the PALM Input Data Standard (PIDS). The former generic_* setups no longer work. Instead, we created a new street-crossing setup (see Fig. 2), which contains all PALM-4U components available so far and is driven by a static driver. To summarize the setup:

  • Domain: 40 x 40 x 120 m³
  • Grid size: 2m
  • Moderate wind from west
  • Cyclic lateral boundary conditions
  • 4 buildings of different heights
  • Orography (terrain elevation)
  • 2 street types
  • 2 trees
  • Land surface model (LSM)
  • Urban surface model (USM)
  • Plant canopy model (PCM)
  • Clear-sky radiation model
  • Chemistry model

Fig 2: (S)mall street-crossing setup (test_urban) with two trees.

The test_urban.zip archive contains the required INPUT files

  • test_urban_p3d (parameter file)
  • test_urban_static (static driver)

as well as the MONITORING and OUTPUT files, an NCL script for creation of the static driver, and a README file with the setup documentation. The previous test_urban setups that worked with PALM r2770 and PALM r2957 are also still available (test_urban_r2770.zip, test_urban_r2957.zip).
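For orientation, the following Python sketch shows how a strongly reduced, PIDS-like static driver could be written with the netCDF4 package, analogous to what the NCL script in the archive does. Variable and attribute names follow the PALM Input Data Standard, but the selection shown here is incomplete and all values are placeholders, not the actual test_urban configuration.

  # Strongly reduced sketch of a PIDS-like static driver (not the actual
  # test_urban driver). Values are placeholders; the full driver also
  # describes building_id, building_type, street_type, lad for the trees, etc.
  from netCDF4 import Dataset
  import numpy as np

  nx, ny, dx = 20, 20, 2.0   # 40 m x 40 m horizontal domain at 2 m grid spacing

  with Dataset("static_driver_sketch.nc", "w", format="NETCDF4") as nc:
      # Global attributes required by PIDS (coordinates are invented)
      nc.origin_lat = 52.5
      nc.origin_lon = 13.3
      nc.origin_time = "2019-06-21 12:00:00 +00"

      nc.createDimension("x", nx)
      nc.createDimension("y", ny)
      nc.createVariable("x", "f4", ("x",))[:] = (np.arange(nx) + 0.5) * dx
      nc.createVariable("y", "f4", ("y",))[:] = (np.arange(ny) + 0.5) * dx

      # Terrain height (orography) in metres, flat here
      zt = nc.createVariable("zt", "f4", ("y", "x"), fill_value=-9999.0)
      zt.units = "m"
      zt[:, :] = 0.0

      # One 10 m high building block in the domain centre
      b2d = nc.createVariable("buildings_2d", "f4", ("y", "x"), fill_value=-9999.0)
      b2d.units = "m"
      b2d.lod = np.int32(1)
      b2d[:, :] = -9999.0
      b2d[8:12, 8:12] = 10.0

      # Short grass everywhere that is not covered by the building
      veg = nc.createVariable("vegetation_type", "i1", ("y", "x"), fill_value=np.int8(-127))
      veg[:, :] = 3
      veg[8:12, 8:12] = np.int8(-127)

The actual test_urban driver additionally describes the four buildings, the terrain elevation, the two street types, and the two trees (via leaf area density), as listed above.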

NOTE: Additional, more feature-specific setups can be created on an as-needed basis.

Simulation setups for category 2

Two differently sized setups are defined:

  • (M)edium setup (see Fig. 3) for run-time optimization tests and verification of the code efficiency on the MOSAIK demonstration PC
  • (L)arge setup (see Fig. 4) for code scalability tests on a high-performance computing system (HLRN) with up to tens of thousands of processor cores.

PALM, the base of PALM-4U, is highly optimized and its performance scales well up to 40,000 processor cores. PALM-4U shall maintain this high level of performance optimization.
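As a rough illustration of how such scalability tests are typically evaluated, the short Python sketch below computes speedup and parallel efficiency from wall-clock times of strong-scaling runs. The core counts and timings are invented placeholders, not PALM-4U benchmark results.

  # Evaluation of (hypothetical) strong-scaling measurements: speedup and
  # parallel efficiency relative to the smallest core count. All numbers
  # below are invented placeholders, not actual PALM-4U benchmark results.
  cores     = [1024, 4096, 16384, 40960]
  wallclock = [3600.0, 950.0, 260.0, 120.0]   # seconds, invented timings

  base_cores, base_time = cores[0], wallclock[0]
  for n, t in zip(cores, wallclock):
      speedup = base_time / t
      efficiency = speedup / (n / base_cores)
      print(f"{n:6d} cores: speedup {speedup:6.1f}, parallel efficiency {efficiency:4.2f}")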

The final formulation of the setup files is postponed until the new INPUT data structure is implemented in PALM.


Fig. 3: (M)edium simulation domain with O(10^6) grid points, suitable for tests on the demonstration PC.


Fig. 4: (L)arge simulation domain with O(10^10) grid points, suitable for scalability tests.

Simulation setups for category 3

These setups will be formulated once a decision has been made (in coordination with module B & C partners) on which UCMs to apply. On MOSAIK's side, ENVIMET, FITNAH, and MUKLIMO_3 are on the shortlist. Selected setups must be able to run with PALM-4U as well as with the UCM of choice.
