Changeset 4649 for palm/trunk/SOURCE/pmc_interface_mod.f90
- Timestamp: Aug 25, 2020, 12:11:17 PM
- File: 1 edited
Legend:
- Unmodified
- Added
- Removed
palm/trunk/SOURCE/pmc_interface_mod.f90
r4629 r4649

!> @file pmc_interface_mod.f90
!--------------------------------------------------------------------------------------------------!
! This file is part of the PALM model system.
!
! PALM is free software: you can redistribute it and/or modify it under the terms of the GNU General
! Public License as published by the Free Software Foundation, either version 3 of the License, or
! (at your option) any later version.
!
! PALM is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the
! implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
! Public License for more details.
!
! You should have received a copy of the GNU General Public License along with PALM. If not, see
! <http://www.gnu.org/licenses/>.
!
! Copyright 1997-2020 Leibniz Universitaet Hannover
!--------------------------------------------------------------------------------------------------!
!
!
! Current revisions:
! -----------------
!
!
…
! -----------------
! $Id$
! File re-formatted to follow the PALM coding standard
!
!
! 4629 2020-07-29 09:37:56Z raasch
! support for MPI Fortran77 interface (mpif.h) removed
!
! 4508 2020-04-24 13:32:20Z raasch
! Salsa variable name changed
!
! 4444 2020-03-05 15:59:50Z raasch
! Bugfix: cpp-directives and variable declarations for serial mode added
!
! 4413 2020-02-19 15:52:19Z hellstea
! All the USE-statements within subroutines moved up to the module declaration section.
!
! 4385 2020-01-27 08:37:37Z hellstea
! Error messages PA0425 and PA0426 made more specific
!
! 4360 2020-01-07 11:25:50Z suehring
! Introduction of wall_flags_total_0, which currently sets bits based on static topography
! information used in wall_flags_static_0
!
! 4329 2019-12-10 15:46:36Z motisi
! Renamed wall_flags_0 to wall_flags_static_0
!
! 4273 2019-10-24 13:40:54Z monakurppa
! Add a logical switch nesting_chem and rename nest_salsa to nesting_salsa
!
! 4260 2019-10-09 14:04:03Z hellstea
! Rest of the possibly round-off-error sensitive grid-line matching tests changed to round-off-error
! tolerant forms throughout the module.
!
! 4249 2019-10-01 12:27:47Z hellstea
! Several grid-line matching tests changed to a round-off-error tolerant form in pmci_setup_parent,
! pmci_define_index_mapping and pmci_check_grid_matching.
!
! 4182 2019-08-22 15:20:23Z scharf
! Corrected "Former revisions" section
!
! 4168 2019-08-16 13:50:17Z suehring
! Replace function get_topography_top_index by topo_top_ind
!
! 4029 2019-06-14 14:04:35Z raasch
! nest_chemistry switch removed
!
! 4026 2019-06-12 16:50:15Z suehring
! Masked topography at boundary grid points in mass conservation, in order to avoid that mean
! velocities within topography are imposed
!
! 4011 2019-05-31 14:34:03Z hellstea
! Mass (volume) flux correction included to ensure global mass conservation for child domains.
!
! 3987 2019-05-22 09:52:13Z kanani
! Introduce alternative switch for debug output during timestepping
!
! 3984 2019-05-16 15:17:03Z hellstea
! Commenting improved, pmci_map_fine_to_coarse_grid renamed as pmci_map_child_grid_to_parent_grid,
! set_child_edge_coords renamed as pmci_set_child_edge_coords, some variables renamed, etc.
!
! 3979 2019-05-15 13:54:29Z hellstea
! Bugfix in pmc_interp_1sto_sn. This bug had effect only in case of 1-d domain decomposition with
! npex = 1.
!
! 3976 2019-05-15 11:02:34Z hellstea
! Child initialization also for the redundant ghost points behind the nested boundaries added
! (2nd and 3rd ghost-point layers and corners).
!
! 3948 2019-05-03 14:49:57Z hellstea
! Some variables renamed, a little cleaning up and some commenting improvements
!
! 3947 2019-05-03 07:56:44Z hellstea
! The checks included in 3946 are extended for the z-direction and moved into its own subroutine
! called from pmci_define_index_mapping.
!
! 3946 2019-05-02 14:18:59Z hellstea
! Check added for child domains too small in terms of number of parent-grid cells so that
! anterpolation is not possible. Checks added for too wide anterpolation buffer for the same reason.
! Some minor code reformatting done.
!
! 3945 2019-05-02 11:29:27Z raasch
!
! 3932 2019-04-24 17:31:34Z suehring
! Add missing if statements for call of pmc_set_dataarray_name for TKE and dissipation.
!
! 3888 2019-04-12 09:18:10Z hellstea
! Variables renamed, commenting improved etc.
!
! 3885 2019-04-11 11:29:34Z kanani
! Changes related to global restructuring of location messages and introduction of additional debug
! messages
!
! 3883 2019-04-10 12:51:50Z hellstea
! Checks and error messages improved and extended. All the child index bounds in the parent-grid
! index space are made module variables. Function get_number_of_childs renamed
! get_number_of_children. A number of variables renamed and quite a lot of other code reshaping made
! all around the module.
!
! 3876 2019-04-08 18:41:49Z knoop
! Implemented nesting for salsa variables.
!
! 3833 2019-03-28 15:04:04Z forkel
! replaced USE chem_modules by USE chem_gasphase_mod
!
! 3822 2019-03-27 13:10:23Z hellstea
! Temporary increase of the vertical dimension of the parent-grid arrays and workarrc_t is cancelled
! as unnecessary.
!
! 3819 2019-03-27 11:01:36Z hellstea
! Adjustable anterpolation buffer introduced on all nest boundaries, it is controlled by the new
! nesting_parameters parameter anterpolation_buffer_width.
!
! 3804 2019-03-19 13:46:20Z hellstea
! Anterpolation domain is lowered from kct-1 to kct-3 to avoid excessive kinetic energy from
! building up in CBL flows.
!
! 3803 2019-03-19 13:44:40Z hellstea
! A bug fixed in lateral boundary interpolations. Dimension of val changed from 5 to 3 in
! pmci_setup_parent and pmci_setup_child.
!
! 3794 2019-03-15 09:36:33Z raasch
! Two remaining unused variables removed
!
! 3792 2019-03-14 16:50:07Z hellstea
! Interpolations improved. Large number of obsolete subroutines removed.
! All unused variables removed.
!
! 3741 2019-02-13 16:24:49Z hellstea
! Interpolations and child initialization adjusted to handle set ups with child pe-subdomain
! dimension not integer divisible by the grid-spacing ratio in the respective direction. Set ups
! with pe-subdomain dimension smaller than the grid-spacing ratio in the respective direction are
! now forbidden.
!
! 3708 2019-01-30 12:58:13Z hellstea
! Checks for parent / child grid line matching introduced.
! Interpolation of nest-boundary-tangential velocity components revised.
!
! 3697 2019-01-24 17:16:13Z hellstea
! Bugfix: upper k-bound in the child initialization interpolation pmci_interp_1sto_all corrected.
! Copying of the nest boundary values into the redundant 2nd and 3rd ghost-node layers is added to
! the pmci_interp_1sto_*-routines.
!
! 3681 2019-01-18 15:06:05Z hellstea
! Linear interpolations are replaced by first order interpolations. The linear interpolation
! routines are still included but not called. In the child initialization the interpolation is
! also changed to 1st order and the linear interpolation is not kept.
! Subroutine pmci_map_fine_to_coarse_grid is rewritten.
! Several changes in pmci_init_anterp_tophat.
! Child's parent-grid arrays (uc, vc, ...) are made non-overlapping on the PE-subdomain boundaries
! in order to allow grid-spacing ratios higher than nbgp. Subroutine pmci_init_tkefactor is removed
! as unnecessary.
!
! 3655 2019-01-07 16:51:22Z knoop
! Remove unused variable simulated_time
!
! 1762 2016-02-25 12:31:13Z hellstea
! Initial revision by A. Hellsten
…
! Description:
! ------------
! Domain nesting interface routines. The low-level inter-domain communication is conducted by the
! PMC-library routines.
!
! @todo Remove array_3d variables from USE statements that are not used in the routine
! @todo Data transfer of qc and nc is prepared but not activated
!--------------------------------------------------------------------------------------------------!
 MODULE pmc_interface

#if ! defined( __parallel )
!
!-- Serial mode does not allow nesting, but requires the following variables as steering quantities
    USE kinds

…

    PUBLIC

    CHARACTER(LEN=8), SAVE :: nesting_mode = 'none'  !< steering parameter for 1- or 2-way nesting

    INTEGER(iwp), SAVE :: comm_world_nesting  !< Global nesting communicator
    INTEGER(iwp), SAVE :: cpl_id = 1          !<

    LOGICAL, SAVE :: nested_run = .FALSE.  !< general switch

…

    USE arrays_3d, &
        ONLY:  diss, &
               diss_2, &
               dzu, &
               dzw, &
               e, &
               e_p, &
               e_2, &
               nc, &
               nc_2, &
               nc_p, &
               nr, &
               nr_2, &
               pt, &
               pt_2, &
               q, &
               q_2, &
               qc, &
               qc_2, &
               qr, &
               qr_2, &
               s, &
               s_2, &
               u, &
               u_p, &
               u_2, &
               v, &
               v_p, &
               v_2, &
               w, &
               w_p, &
               w_2, &
               zu, &
               zw

    USE chem_gasphase_mod, &
        ONLY:  nspec

    USE chem_modules, &
        ONLY:  chem_species, &
               ibc_cs_b, &
               nesting_chem

    USE chemistry_model_mod, &
        ONLY:  spec_conc_2

    USE control_parameters, &
        ONLY:  air_chemistry, &
               bc_dirichlet_l, &
               bc_dirichlet_n, &
               bc_dirichlet_r, &
               bc_dirichlet_s, &
               child_domain, &
               constant_diffusion, &
               constant_flux_layer, &
               coupling_char, &
               debug_output_timestep, &
               dt_restart, &
               dt_3d, &
               dz, &
               end_time, &
               humidity, &
               ibc_pt_b, &
               ibc_q_b, &
               ibc_s_b, &
               ibc_uv_b, &
               message_string, &
               neutral, &
               passive_scalar, &
               rans_mode, &
               rans_tke_e, &
               restart_time, &
               roughness_length, &
               salsa, &
               time_restart, &
               topography, &
               volume_flow

    USE cpulog, &
        ONLY:  cpu_log, &
               log_point_s

    USE grid_variables, &
        ONLY:  dx, &
               dy

    USE indices, &
        ONLY:  nbgp, &
               nx, &
               nxl, &
               nxlg, &
               nxlu, &
               nxr, &
               nxrg, &
               ny, &
               nyn, &
               nyng, &
               nys, &
               nysg, &
               nysv, &
               nz, &
               nzb, &
               nzt, &
               topo_top_ind, &
               wall_flags_total_0

    USE bulk_cloud_model_mod, &
        ONLY:  bulk_cloud_model, &
               microphysics_morrison, &
               microphysics_seifert

    USE particle_attributes, &
        ONLY:  particle_advection

…

    USE MPI

    USE pegrid, &
        ONLY:  collective_wait, &
               comm1dx, &
               comm1dy, &
               comm2d, &
               myid, &
               myidx, &
               myidy, &
               numprocs, &
               pdims, &
               pleft, &
               pnorth, &
               pright, &
               psouth, &
               status

    USE pmc_child, &
        ONLY:  pmc_childinit, &
               pmc_c_clear_next_array_list, &
               pmc_c_getnextarray, &
               pmc_c_get_2d_index_list, &
               pmc_c_getbuffer, &
               pmc_c_putbuffer, &
               pmc_c_setind_and_allocmem, &
               pmc_c_set_dataarray, &
               pmc_set_dataarray_name

    USE pmc_general, &
        ONLY:  da_namelen, &
               pmc_max_array

    USE pmc_handle_communicator, &
        ONLY:  pmc_get_model_info, &
               pmc_init_model, &
               pmc_is_rootmodel, &
               pmc_no_namelist_found, &
               pmc_parent_for_child, &
               m_couplers

    USE pmc_mpi_wrapper, &
        ONLY:  pmc_bcast, &
               pmc_recv_from_child, &
               pmc_recv_from_parent, &
               pmc_send_to_child, &
               pmc_send_to_parent

    USE pmc_parent, &
        ONLY:  pmc_parentinit, &
               pmc_s_clear_next_array_list, &
               pmc_s_fillbuffer, &
               pmc_s_getdata_from_buffer, &
               pmc_s_getnextarray, &
               pmc_s_setind_and_allocmem, &
               pmc_s_set_active_data_array, &
               pmc_s_set_dataarray, &
               pmc_s_set_2d_index_list

#endif

    USE salsa_mod, &
        ONLY:  aerosol_mass, &
               aerosol_number, &
               gconc_2, &
               ibc_aer_b, &
               mconc_2, &
               nbins_aerosol, &
               ncomponents_mass, &
               nconc_2, &
               nesting_salsa, &
               ngases_salsa, &
               salsa_gas, &
               salsa_gases_from_chem

    USE surface_mod, &
        ONLY:  bc_h, &
               surf_def_h, &
               surf_lsm_h, &
               surf_usm_h
    IMPLICIT NONE

…

!
!-- Constants
    INTEGER(iwp), PARAMETER :: child_to_parent = 2            !< Parameter for pmci_parent_datatrans indicating the
                                                              !< direction of transfer
    INTEGER(iwp), PARAMETER :: interpolation_scheme_lrsn = 2  !< Interpolation scheme to be used on lateral boundaries
    INTEGER(iwp), PARAMETER :: interpolation_scheme_t = 3     !< Interpolation scheme to be used on top boundary
    INTEGER(iwp), PARAMETER :: parent_to_child = 1            !< Parameter for pmci_parent_datatrans indicating the
                                                              !< direction of transfer

    REAL(wp), PARAMETER :: tolefac = 1.0E-6_wp  !< Relative tolerance for grid-line matching tests and comparisons
!
!-- Coupler setup
    CHARACTER(LEN=32), SAVE :: cpl_name  !<

    INTEGER(iwp), SAVE :: comm_world_nesting  !< Global nesting communicator
    INTEGER(iwp), SAVE :: cpl_id = 1          !<
    INTEGER(iwp), SAVE :: cpl_npe_total       !<
    INTEGER(iwp), SAVE :: cpl_parent_id       !<
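As an aside, the round-off-error tolerant grid-line matching tests mentioned in the revision history (r4260, r4249) rest on the relative tolerance tolefac declared above. A minimal sketch of the idea, in Python for illustration only (the function name and exact usage are assumptions; the actual tests live in pmci_setup_parent, pmci_define_index_mapping and pmci_check_grid_matching):

```python
# Sketch of a round-off-error tolerant grid-line matching test.
# TOLEFAC mirrors the module's tolefac = 1.0E-6 relative tolerance;
# grid_lines_match is a hypothetical name, not PALM's actual routine.

TOLEFAC = 1.0e-6  # relative tolerance for grid-line matching tests

def grid_lines_match(child_coord, parent_coord, dx_parent):
    """Compare grid-line coordinates with a tolerance scaled by the grid
    spacing instead of testing floating-point values for exact equality."""
    return abs(child_coord - parent_coord) < TOLEFAC * dx_parent

# A coordinate built by repeated summation may drift by round-off yet
# should still count as lying on the parent grid line:
x_child = sum([12.5] * 4)   # nominally 50.0
print(grid_lines_match(x_child, 50.0, dx_parent=25.0))
```

An exact-equality comparison would make such tests sensitive to how the coordinates were computed; scaling the tolerance by the parent grid spacing keeps the test meaningful at any resolution.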
!
!-- Control parameters
    CHARACTER(LEN=7), SAVE :: nesting_datatransfer_mode = 'mixed'  !< steering parameter for data-transfer mode
    CHARACTER(LEN=8), SAVE :: nesting_mode = 'two-way'             !< steering parameter for 1- or 2-way nesting

    INTEGER(iwp), SAVE :: anterpolation_buffer_width = 2  !< Boundary buffer width for anterpolation

    LOGICAL, SAVE :: nested_run = .FALSE.        !< general switch
    LOGICAL, SAVE :: rans_mode_parent = .FALSE.  !< mode of parent model (.F. - LES mode, .T. - RANS mode)
!
!-- Geometry
    REAL(wp), SAVE, DIMENSION(:), ALLOCATABLE, PUBLIC :: coord_x  !< Array for the absolute x-coordinates
    REAL(wp), SAVE, DIMENSION(:), ALLOCATABLE, PUBLIC :: coord_y  !< Array for the absolute y-coordinates
    REAL(wp), SAVE, PUBLIC :: lower_left_coord_x  !< x-coordinate of the lower left corner of the domain
    REAL(wp), SAVE, PUBLIC :: lower_left_coord_y  !< y-coordinate of the lower left corner of the domain
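To make the geometry variables concrete: the absolute coordinate arrays are tied to the lower-left corner of each domain, and the integer grid-spacing ratios declared further below (igsr, jgsr, kgsr) relate parent and child spacings. A hedged Python sketch, using names that mirror the module's variables but logic that is an illustration, not PALM's actual code:

```python
# Sketch of absolute coordinate construction and integer grid-spacing
# ratio derivation. TOLEFAC mirrors the module's tolefac; the function
# names are hypothetical.

TOLEFAC = 1.0e-6  # relative tolerance for round-off tolerant comparisons

def absolute_coords(lower_left_coord_x, dx, n):
    """Absolute x-coordinates of grid lines 0..n of an equidistant grid."""
    return [lower_left_coord_x + i * dx for i in range(n + 1)]

def integer_grid_spacing_ratio(dx_parent, dx_child):
    """Parent/child spacing ratio, required to be an integer within a
    round-off tolerance (an igsr-like quantity)."""
    ratio = dx_parent / dx_child
    nearest = round(ratio)
    if abs(ratio - nearest) > TOLEFAC * ratio:
        raise ValueError("grid spacings are not integer divisible")
    return nearest

coord_x = absolute_coords(0.0, 2.5, 4)        # [0.0, 2.5, 5.0, 7.5, 10.0]
igsr = integer_grid_spacing_ratio(10.0, 2.5)  # 4
```

The integer-divisibility requirement reflects the restrictions noted in the r3741 entry above: a child pe-subdomain must not be smaller than the grid-spacing ratio in the respective direction.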
!
!-- Children's parent-grid arrays
    INTEGER(iwp), SAVE, DIMENSION(5), PUBLIC :: parent_bound  !< subdomain index bounds for children's parent-grid arrays

    INTEGER(idp), SAVE, DIMENSION(:,:), ALLOCATABLE, TARGET, PUBLIC :: nr_partc   !<
    INTEGER(idp), SAVE, DIMENSION(:,:), ALLOCATABLE, TARGET, PUBLIC :: part_adrc  !<

    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: dissc  !< Parent-grid array on child domain - dissipation rate
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: ec     !< Parent-grid array on child domain - SGS TKE
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: nrc    !< Parent-grid array on child domain -
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: ncc    !< Parent-grid array on child domain -
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: ptc    !< Parent-grid array on child domain - potential temperature
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: q_c    !< Parent-grid array on child domain -
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: qcc    !< Parent-grid array on child domain -
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: qrc    !< Parent-grid array on child domain -
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: sc     !< Parent-grid array on child domain -
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: uc     !< Parent-grid array on child domain - velocity component u
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: vc     !< Parent-grid array on child domain - velocity component v
    REAL(wp), SAVE, DIMENSION(:,:,:), ALLOCATABLE, TARGET :: wc     !< Parent-grid array on child domain - velocity component w

    REAL(wp), SAVE, DIMENSION(:,:,:,:), ALLOCATABLE, TARGET :: aerosol_mass_c    !< Aerosol mass
    REAL(wp), SAVE, DIMENSION(:,:,:,:), ALLOCATABLE, TARGET :: aerosol_number_c  !< Aerosol number
    REAL(wp), SAVE, DIMENSION(:,:,:,:), ALLOCATABLE, TARGET :: chem_spec_c       !< Parent-grid array on child domain
                                                                                 !< - chemical species
    REAL(wp), SAVE, DIMENSION(:,:,:,:), ALLOCATABLE, TARGET :: salsa_gas_c       !< SALSA gases
!
!-- Grid-spacing ratios.
    INTEGER(iwp), SAVE :: igsr  !< Integer grid-spacing ratio in i-direction
    INTEGER(iwp), SAVE :: jgsr  !< Integer grid-spacing ratio in j-direction
    INTEGER(iwp), SAVE :: kgsr  !< Integer grid-spacing ratio in k-direction
!
!-- Global parent-grid index bounds
    INTEGER(iwp), SAVE :: iplg  !< Leftmost parent-grid array ip index of the whole child domain
    INTEGER(iwp), SAVE :: iprg  !< Rightmost parent-grid array ip index of the whole child domain
    INTEGER(iwp), SAVE :: jpsg  !< Southmost parent-grid array jp index of the whole child domain
    INTEGER(iwp), SAVE :: jpng  !< Northmost parent-grid array jp index of the whole child domain
!
!-- Local parent-grid index bounds. Different sets of index bounds are needed for parent-grid arrays
!-- (uc, etc), for index mapping arrays (iflu, etc) and for work arrays (workarr_lr, etc). This is
!-- because these arrays have different dimensions depending on the location of the subdomain
!-- relative to boundaries and corners.
    INTEGER(iwp), SAVE :: ipl   !< Left index limit for children's parent-grid arrays
    INTEGER(iwp), SAVE :: ipla  !< Left index limit for allocation of index-mapping and other auxiliary arrays
    INTEGER(iwp), SAVE :: iplw  !< Left index limit for children's parent-grid work arrays
    INTEGER(iwp), SAVE :: ipr   !< Right index limit for children's parent-grid arrays
    INTEGER(iwp), SAVE :: ipra  !< Right index limit for allocation of index-mapping and other auxiliary arrays
    INTEGER(iwp), SAVE :: iprw  !< Right index limit for children's parent-grid work arrays
    INTEGER(iwp), SAVE :: jpn   !< North index limit for children's parent-grid arrays
    INTEGER(iwp), SAVE :: jpna  !< North index limit for allocation of index-mapping and other auxiliary arrays
    INTEGER(iwp), SAVE :: jpnw  !< North index limit for children's parent-grid work arrays
    INTEGER(iwp), SAVE :: jps   !< South index limit for children's parent-grid arrays
    INTEGER(iwp), SAVE :: jpsa  !< South index limit for allocation of index-mapping and other auxiliary arrays
    INTEGER(iwp), SAVE :: jpsw  !< South index limit for children's parent-grid work arrays
!
!-- Highest prognostic parent-grid k-indices.
    INTEGER(iwp), SAVE :: kcto  !< Upper bound for k in anterpolation of variables other than w.
    INTEGER(iwp), SAVE :: kctw  !< Upper bound for k in anterpolation of w.
!
!-- Child-array indices to be precomputed and stored for anterpolation.

…

!
!-- Number of child-grid nodes within anterpolation cells to be precomputed for anterpolation.
526 INTEGER(iwp), SAVE, ALLOCATABLE, DIMENSION(:,:,:) :: ijkfc_s !< number of child grid points contributing to a parent grid 527 !< node in anterpolation, scalar-grid 408 528 INTEGER(iwp), SAVE, ALLOCATABLE, DIMENSION(:,:,:) :: ijkfc_u !< number of child grid points contributing to a parent grid 409 529 !< node in anterpolation, u-grid … … 412 532 INTEGER(iwp), SAVE, ALLOCATABLE, DIMENSION(:,:,:) :: ijkfc_w !< number of child grid points contributing to a parent grid 413 533 !< node in anterpolation, w-grid 414 INTEGER(iwp), SAVE, ALLOCATABLE, DIMENSION(:,:,:) :: ijkfc_s !< number of child grid points contributing to a parent grid 415 !< node in anterpolation, scalar-grid 416 ! 417 !-- Work arrays for interpolation and user-defined type definitions for horizontal work-array exchange 418 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: workarr_lr 419 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: workarr_sn 420 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: workarr_t 421 422 INTEGER(iwp) :: workarr_lr_exchange_type 423 INTEGER(iwp) :: workarr_sn_exchange_type 424 INTEGER(iwp) :: workarr_t_exchange_type_x 425 INTEGER(iwp) :: workarr_t_exchange_type_y 426 427 INTEGER(iwp), DIMENSION(3) :: parent_grid_info_int !< Array for communicating the parent-grid dimensions 428 !< to its children. 429 430 REAL(wp), DIMENSION(6) :: face_area !< Surface area of each boundary face 431 REAL(wp), DIMENSION(7) :: parent_grid_info_real !< Array for communicating the real-type parent-grid 432 !< parameters to its children. 534 ! 535 !-- Work arrays for interpolation and user-defined type definitions for horizontal work-array exchange 536 INTEGER(iwp) :: workarr_lr_exchange_type !< 537 INTEGER(iwp) :: workarr_sn_exchange_type !< 538 INTEGER(iwp) :: workarr_t_exchange_type_x !< 539 INTEGER(iwp) :: workarr_t_exchange_type_y !< 540 541 INTEGER(iwp), DIMENSION(3) :: parent_grid_info_int !< Array for communicating the parent-grid dimensions to its children. 
542 543 REAL(wp), DIMENSION(6) :: face_area !< Surface area of each boundary face 544 REAL(wp), DIMENSION(7) :: parent_grid_info_real !< Array for communicating the real-type parent-grid parameters to its 545 !< children. 546 547 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: workarr_lr !< 548 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: workarr_sn !< 549 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: workarr_t !< 433 550 434 551 TYPE parentgrid_def 435 INTEGER(iwp) :: nx!<436 INTEGER(iwp) :: ny!<437 INTEGER(iwp) :: nz!<438 REAL(wp) :: dx!<439 REAL(wp) :: dy!<440 REAL(wp) :: dz!<441 REAL(wp) :: lower_left_coord_x!<442 REAL(wp) :: lower_left_coord_y!<443 REAL(wp) :: xend!<444 REAL(wp) :: yend!<445 REAL(wp), DIMENSION(:), ALLOCATABLE :: coord_x 446 REAL(wp), DIMENSION(:), ALLOCATABLE :: coord_y 447 REAL(wp), DIMENSION(:), ALLOCATABLE :: dzu 448 REAL(wp), DIMENSION(:), ALLOCATABLE :: dzw 449 REAL(wp), DIMENSION(:), ALLOCATABLE :: zu 450 REAL(wp), DIMENSION(:), ALLOCATABLE :: zw 552 INTEGER(iwp) :: nx !< 553 INTEGER(iwp) :: ny !< 554 INTEGER(iwp) :: nz !< 555 REAL(wp) :: dx !< 556 REAL(wp) :: dy !< 557 REAL(wp) :: dz !< 558 REAL(wp) :: lower_left_coord_x !< 559 REAL(wp) :: lower_left_coord_y !< 560 REAL(wp) :: xend !< 561 REAL(wp) :: yend !< 562 REAL(wp), DIMENSION(:), ALLOCATABLE :: coord_x !< 563 REAL(wp), DIMENSION(:), ALLOCATABLE :: coord_y !< 564 REAL(wp), DIMENSION(:), ALLOCATABLE :: dzu !< 565 REAL(wp), DIMENSION(:), ALLOCATABLE :: dzw !< 566 REAL(wp), DIMENSION(:), ALLOCATABLE :: zu !< 567 REAL(wp), DIMENSION(:), ALLOCATABLE :: zw !< 451 568 END TYPE parentgrid_def 452 569 453 TYPE(parentgrid_def), SAVE, PUBLIC :: pg!< Parent-grid information package of type parentgrid_def570 TYPE(parentgrid_def), SAVE, PUBLIC :: pg !< Parent-grid information package of type parentgrid_def 454 571 ! 
455 572 !-- Variables for particle coupling 456 573 TYPE, PUBLIC :: childgrid_def 457 INTEGER(iwp) :: nx!<458 INTEGER(iwp) :: ny!<459 INTEGER(iwp) :: nz!<460 REAL(wp) :: dx!<461 REAL(wp) :: dy!<462 REAL(wp) :: dz!<463 REAL(wp) 464 REAL(wp) 465 REAL(wp) 466 REAL(wp) 467 REAL(wp) 574 INTEGER(iwp) :: nx !< 575 INTEGER(iwp) :: ny !< 576 INTEGER(iwp) :: nz !< 577 REAL(wp) :: dx !< 578 REAL(wp) :: dy !< 579 REAL(wp) :: dz !< 580 REAL(wp) :: lx_coord, lx_coord_b !< ! split onto separate lines 581 REAL(wp) :: rx_coord, rx_coord_b !< 582 REAL(wp) :: sy_coord, sy_coord_b !< 583 REAL(wp) :: ny_coord, ny_coord_b !< 584 REAL(wp) :: uz_coord, uz_coord_b !< 468 585 END TYPE childgrid_def 469 586 470 587 TYPE(childgrid_def), SAVE, ALLOCATABLE, DIMENSION(:), PUBLIC :: childgrid !< 471 588 472 INTEGER(idp), ALLOCATABLE,DIMENSION(:,:), PUBLIC,TARGET :: nr_part !<473 INTEGER(idp), ALLOCATABLE,DIMENSION(:,:), PUBLIC,TARGET :: part_adr !<474 475 589 INTEGER(idp), ALLOCATABLE,DIMENSION(:,:), PUBLIC,TARGET :: nr_part !< 590 INTEGER(idp), ALLOCATABLE,DIMENSION(:,:), PUBLIC,TARGET :: part_adr !< 591 592 476 593 INTERFACE pmci_boundary_conds 477 594 MODULE PROCEDURE pmci_boundary_conds 478 595 END INTERFACE pmci_boundary_conds 479 596 480 597 INTERFACE pmci_check_setting_mismatches 481 598 MODULE PROCEDURE pmci_check_setting_mismatches … … 534 651 END INTERFACE pmci_set_swaplevel 535 652 536 PUBLIC child_to_parent, comm_world_nesting, cpl_id, nested_run, & 537 nesting_datatransfer_mode, nesting_mode, parent_to_child, rans_mode_parent 653 PUBLIC child_to_parent, & 654 comm_world_nesting, & 655 cpl_id, & 656 nested_run, & 657 nesting_datatransfer_mode, & 658 nesting_mode, & 659 parent_to_child, & 660 rans_mode_parent 538 661 539 662 PUBLIC pmci_boundary_conds … … 548 671 PUBLIC pmci_ensure_nest_mass_conservation 549 672 PUBLIC pmci_ensure_nest_mass_conservation_vertical 550 673 551 674 CONTAINS 552 675 553 676 
!--------------------------------------------------------------------------------------------------! 677 ! Description: 678 ! ------------ 679 !> @Todo: Missing subroutine description. 680 !--------------------------------------------------------------------------------------------------! 554 681 SUBROUTINE pmci_init( world_comm ) 555 682 556 683 IMPLICIT NONE 557 684 558 INTEGER(iwp), INTENT(OUT) :: world_comm 685 INTEGER(iwp), INTENT(OUT) :: world_comm !< 559 686 560 687 #if defined( __parallel ) 561 688 562 INTEGER(iwp) :: pmc_status 563 564 565 CALL pmc_init_model( world_comm, nesting_datatransfer_mode, nesting_mode, 689 INTEGER(iwp) :: pmc_status !< 690 691 692 CALL pmc_init_model( world_comm, nesting_datatransfer_mode, nesting_mode, & 566 693 anterpolation_buffer_width, pmc_status ) 567 694 … … 578 705 ! 579 706 !-- Check steering parameter values 580 IF ( TRIM( nesting_mode ) /= 'one-way' .AND. 581 TRIM( nesting_mode ) /= 'two-way' .AND. 582 TRIM( nesting_mode ) /= 'vertical' ) 707 IF ( TRIM( nesting_mode ) /= 'one-way' .AND. & 708 TRIM( nesting_mode ) /= 'two-way' .AND. & 709 TRIM( nesting_mode ) /= 'vertical' ) & 583 710 THEN 584 711 message_string = 'illegal nesting mode: ' // TRIM( nesting_mode ) … … 586 713 ENDIF 587 714 588 IF ( TRIM( nesting_datatransfer_mode ) /= 'cascade' .AND. 589 TRIM( nesting_datatransfer_mode ) /= 'mixed' .AND. 590 TRIM( nesting_datatransfer_mode ) /= 'overlap' ) 715 IF ( TRIM( nesting_datatransfer_mode ) /= 'cascade' .AND. & 716 TRIM( nesting_datatransfer_mode ) /= 'mixed' .AND. & 717 TRIM( nesting_datatransfer_mode ) /= 'overlap' ) & 591 718 THEN 592 719 message_string = 'illegal nesting datatransfer mode: ' // TRIM( nesting_datatransfer_mode ) … … 594 721 ENDIF 595 722 ! 596 !-- Set the general steering switch which tells PALM that it s a nested run723 !-- Set the general steering switch which tells PALM that it is a nested run 597 724 nested_run = .TRUE. 598 725 ! 
599 !-- Get some variables required by the pmc-interface (and in some cases in the 600 !-- PALM code out of the pmci) out of the pmc-core 601 CALL pmc_get_model_info( comm_world_nesting = comm_world_nesting, & 602 cpl_id = cpl_id, cpl_parent_id = cpl_parent_id, & 603 cpl_name = cpl_name, npe_total = cpl_npe_total, & 604 lower_left_x = lower_left_coord_x, & 726 !-- Get some variables required by the pmc-interface (and in some cases in the PALM code out of the 727 !-- pmci) out of the pmc-core 728 CALL pmc_get_model_info( comm_world_nesting = comm_world_nesting, cpl_id = cpl_id, & 729 cpl_parent_id = cpl_parent_id, cpl_name = cpl_name, & 730 npe_total = cpl_npe_total, lower_left_x = lower_left_coord_x, & 605 731 lower_left_y = lower_left_coord_y ) 606 732 ! 607 !-- Set the steering switch which tells the models that they are nested (of 608 !-- course the root domainis not nested)733 !-- Set the steering switch which tells the models that they are nested (of course the root domain 734 !-- is not nested) 609 735 IF ( .NOT. pmc_is_rootmodel() ) THEN 610 736 child_domain = .TRUE. … … 614 740 ! 615 741 !-- Message that communicators for nesting are initialized. 616 !-- Attention: myid has been set at the end of pmc_init_model in order to 617 !-- guarantee that only PE0 ofthe root domain does the output.742 !-- Attention: myid has been set at the end of pmc_init_model in order to guarantee that only PE0 of 743 !-- the root domain does the output. 618 744 CALL location_message( 'initialize model nesting', 'finished' ) 619 745 ! … … 622 748 #else 623 749 ! 624 !-- Nesting cannot be used in serial mode. cpl_id is set to root domain (1) 625 !-- because no location messages would be generated otherwise. 626 !-- world_comm is given a dummy value to avoid compiler warnings (INTENT(OUT) 627 !-- must get an explicit value). 628 !-- Note that this branch is only to avoid compiler warnings. 
The actual 629 !-- execution never reaches here because the call of this subroutine is 630 !-- already enclosed by #if defined( __parallel ). 750 !-- Nesting cannot be used in serial mode. cpl_id is set to root domain (1) because no location 751 !-- messages would be generated otherwise. world_comm is given a dummy value to avoid compiler 752 !-- warnings (INTENT(OUT) must get an explicit value). 753 !-- Note that this branch is only to avoid compiler warnings. The actual execution never reaches 754 !-- here because the call of this subroutine is already enclosed by #if defined( __parallel ). 631 755 cpl_id = 1 632 756 nested_run = .FALSE. … … 637 761 638 762 639 763 !--------------------------------------------------------------------------------------------------! 764 ! Description: 765 ! ------------ 766 !> @Todo: Missing subroutine description. 767 !--------------------------------------------------------------------------------------------------! 640 768 SUBROUTINE pmci_modelconfiguration 641 769 642 770 IMPLICIT NONE 643 771 644 INTEGER(iwp) :: ncpl !<number of nest domains645 646 772 INTEGER(iwp) :: ncpl !< number of nest domains 773 774 647 775 #if defined( __parallel ) 648 776 CALL location_message( 'setup the nested model configuration', 'start' ) … … 650 778 ! 651 779 !-- Compute absolute coordinates for all models 652 CALL pmci_setup_coordinates ! CONTAIN THIS 780 CALL pmci_setup_coordinates ! CONTAIN THIS 653 781 ! 654 782 !-- Determine the number of coupled arrays … … 656 784 ! 657 785 !-- Initialize the child (must be called before pmc_setup_parent) 658 !-- Klaus, extend this comment to explain why it must be called before 786 !-- Klaus, extend this comment to explain why it must be called before 659 787 CALL pmci_setup_child ! CONTAIN THIS 660 788 ! … … 666 794 CALL pmci_check_setting_mismatches ! CONTAIN THIS 667 795 ! 
668 !-- Set flag file for combine_plot_fields for pr ecessing the nest output data669 OPEN( 90, FILE ='3DNESTING', FORM='FORMATTED' )796 !-- Set flag file for combine_plot_fields for processing the nest output data 797 OPEN( 90, FILE = '3DNESTING', FORM = 'FORMATTED' ) 670 798 CALL pmc_get_model_info( ncpl = ncpl ) 671 799 WRITE( 90, '(I2)' ) ncpl … … 679 807 680 808 681 809 !--------------------------------------------------------------------------------------------------! 810 ! Description: 811 ! ------------ 812 !> @Todo: Missing subroutine description. 813 !--------------------------------------------------------------------------------------------------! 682 814 SUBROUTINE pmci_setup_parent 683 815 … … 685 817 IMPLICIT NONE 686 818 687 INTEGER(iwp) :: child_id !< Child id-number for the child m 688 INTEGER(iwp) :: ierr !< MPI-error code 689 INTEGER(iwp) :: kp !< Parent-grid index n the z-direction 690 INTEGER(iwp) :: lb = 1 !< Running index for aerosol size bins 691 INTEGER(iwp) :: lc = 1 !< Running index for aerosol mass bins 692 INTEGER(iwp) :: lg = 1 !< Running index for SALSA gases 693 INTEGER(iwp) :: m !< Loop index over all children of the current parent 694 INTEGER(iwp) :: msib !< Loop index over all other children than m in case of siblings (parallel children) 695 INTEGER(iwp) :: n = 1 !< Running index for chemical species 696 INTEGER(iwp) :: nx_child !< Number of child-grid points in the x-direction 697 INTEGER(iwp) :: ny_child !< Number of child-grid points in the y-direction 698 INTEGER(iwp) :: nz_child !< Number of child-grid points in the z-direction 699 INTEGER(iwp) :: sibling_id !< Child id-number for the child msib (sibling of child m) 700 819 CHARACTER(LEN=32) :: myname !< String for variable name such as 'u' 820 821 INTEGER(iwp) :: child_id !< Child id-number for the child m 822 INTEGER(iwp) :: ierr !< MPI-error code 823 INTEGER(iwp) :: kp !< Parent-grid index n the z-direction 824 INTEGER(iwp) :: lb = 1 !< Running index for aerosol size bins 
825 INTEGER(iwp) :: lc = 1 !< Running index for aerosol mass bins 826 INTEGER(iwp) :: lg = 1 !< Running index for SALSA gases 827 INTEGER(iwp) :: m !< Loop index over all children of the current parent 828 INTEGER(iwp) :: msib !< Loop index over all other children than m in case of siblings (parallel children) 829 INTEGER(iwp) :: n = 1 !< Running index for chemical species 830 INTEGER(iwp) :: nx_child !< Number of child-grid points in the x-direction 831 INTEGER(iwp) :: ny_child !< Number of child-grid points in the y-direction 832 INTEGER(iwp) :: nz_child !< Number of child-grid points in the z-direction 833 INTEGER(iwp) :: sibling_id !< Child id-number for the child msib (sibling of child m) 834 701 835 INTEGER(iwp), DIMENSION(3) :: child_grid_dim !< Array for receiving the child-grid dimensions from the children 702 703 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_x_left !< Minimum x-coordinate of the child domain including the ghost 704 !< point layers 705 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_x_right !< Maximum x-coordinate of the child domain including the ghost 706 !< point layers 707 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_y_south !< Minimum y-coordinate of the child domain including the ghost 708 !< point layers 709 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_y_north !< Maximum y-coordinate of the child domain including the ghost 710 !< point layers 711 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_coord_x !< Child domain x-coordinate array 712 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_coord_y !< Child domain y-coordinate array 713 714 REAL(wp), DIMENSION(5) :: child_grid_info !< Array for receiving the child-grid spacings etc from the children 715 836 837 LOGICAL :: m_left_in_msib !< Logical auxiliary parameter for the overlap test: true if the left border 838 !< of the child m is within the x-range of the child msib 839 LOGICAL :: m_right_in_msib !< Logical auxiliary parameter for the overlap test: true if the right border 840 !< of the 
child m is within the x-range of the child msib 841 LOGICAL :: msib_left_in_m !< Logical auxiliary parameter for the overlap test: true if the left border 842 !< of the child msib is within the x-range of the child m 843 LOGICAL :: msib_right_in_m !< Logical auxiliary parameter for the overlap test: true if the right border 844 !< of the child msib is within the x-range of the child m 845 LOGICAL :: m_south_in_msib !< Logical auxiliary parameter for the overlap test: true if the south border 846 !< of the child m is within the y-range of the child msib 847 LOGICAL :: m_north_in_msib !< Logical auxiliary parameter for the overlap test: true if the north border 848 !< of the child m is within the y-range of the child msib 849 LOGICAL :: msib_south_in_m !< Logical auxiliary parameter for the overlap test: true if the south border 850 !< of the child msib is within the y-range of the child m 851 LOGICAL :: msib_north_in_m !< Logical auxiliary parameter for the overlap test: true if the north border 852 !< of the child msib is within the y-range of the child m 853 716 854 REAL(wp) :: child_height !< Height of the child domain defined on the child side as zw(nzt+1) 717 855 REAL(wp) :: dx_child !< Child-grid spacing in the x-direction 718 856 REAL(wp) :: dy_child !< Child-grid spacing in the y-direction 719 857 REAL(wp) :: dz_child !< Child-grid spacing in the z-direction 720 REAL(wp) :: left_limit !< Left limit for the absolute x-coordinate of the child left boundary 858 REAL(wp) :: left_limit !< Left limit for the absolute x-coordinate of the child left boundary 721 859 REAL(wp) :: north_limit !< North limit for the absolute y-coordinate of the child north boundary 722 860 REAL(wp) :: right_limit !< Right limit for the absolute x-coordinate of the child right boundary 723 REAL(wp) :: south_limit !< South limit for the absolute y-coordinate of the child south boundary 724 REAL(wp) :: upper_right_coord_x !< Absolute x-coordinate of the upper right corner of the child 
domain 725 REAL(wp) :: upper_right_coord_y !< Absolute y-coordinate of the upper right corner of the child domain 861 REAL(wp) :: south_limit !< South limit for the absolute y-coordinate of the child south boundary 862 REAL(wp) :: upper_right_coord_x !< Absolute x-coordinate of the upper right corner of the child domain 863 REAL(wp) :: upper_right_coord_y !< Absolute y-coordinate of the upper right corner of the child domain 726 864 REAL(wp) :: xez !< Minimum separation in the x-direction required between the child and 727 865 !< parent boundaries (left or right) 728 866 REAL(wp) :: yez !< Minimum separation in the y-direction required between the child and 729 867 !< parent boundaries (south or north) 730 REAL(wp) :: tolex !< Tolerance for grid-line matching in x-direction 731 REAL(wp) :: toley !< Tolerance for grid-line matching in y-direction 732 REAL(wp) :: tolez !< Tolerance for grid-line matching in z-direction 733 734 CHARACTER(LEN=32) :: myname !< String for variable name such as 'u' 735 736 LOGICAL :: m_left_in_msib !< Logical auxiliary parameter for the overlap test: true if the left border 737 !< of the child m is within the x-range of the child msib 738 LOGICAL :: m_right_in_msib !< Logical auxiliary parameter for the overlap test: true if the right border 739 !< of the child m is within the x-range of the child msib 740 LOGICAL :: msib_left_in_m !< Logical auxiliary parameter for the overlap test: true if the left border 741 !< of the child msib is within the x-range of the child m 742 LOGICAL :: msib_right_in_m !< Logical auxiliary parameter for the overlap test: true if the right border 743 !< of the child msib is within the x-range of the child m 744 LOGICAL :: m_south_in_msib !< Logical auxiliary parameter for the overlap test: true if the south border 745 !< of the child m is within the y-range of the child msib 746 LOGICAL :: m_north_in_msib !< Logical auxiliary parameter for the overlap test: true if the north border 747 !< of the child m is 
within the y-range of the child msib 748 LOGICAL :: msib_south_in_m !< Logical auxiliary parameter for the overlap test: true if the south border 749 !< of the child msib is within the y-range of the child m 750 LOGICAL :: msib_north_in_m !< Logical auxiliary parameter for the overlap test: true if the north border 751 !< of the child msib is within the y-range of the child m 868 REAL(wp) :: tolex !< Tolerance for grid-line matching in x-direction 869 REAL(wp) :: toley !< Tolerance for grid-line matching in y-direction 870 REAL(wp) :: tolez !< Tolerance for grid-line matching in z-direction 871 872 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_coord_x !< Child domain x-coordinate array 873 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_coord_y !< Child domain y-coordinate array 874 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_x_left !< Minimum x-coordinate of the child domain including the ghost 875 !< point layers 876 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_x_right !< Maximum x-coordinate of the child domain including the ghost 877 !< point layers 878 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_y_north !< Maximum y-coordinate of the child domain including the ghost 879 !< point layers 880 REAL(wp), DIMENSION(:), ALLOCATABLE :: child_y_south !< Minimum y-coordinate of the child domain including the ghost 881 !< point layers 882 883 REAL(wp), DIMENSION(5) :: child_grid_info !< Array for receiving the child-grid spacings etc from the children 752 884 753 885 ! … … 755 887 tolex = tolefac * dx 756 888 toley = tolefac * dy 757 tolez = tolefac * dz(1) 889 tolez = tolefac * dz(1) 758 890 ! 759 891 !-- Initialize the current pmc parent. 760 892 CALL pmc_parentinit 761 893 ! 762 !-- Corners of all children of the present parent. Note that 763 !-- SIZE( pmc_parent_for_child ) = 1 if wehave no children.764 IF ( ( SIZE( pmc_parent_for_child ) - 1 > 0 ) .AND. 
myid == 0 ) THEN 765 ALLOCATE( child_x_left(1:SIZE( pmc_parent_for_child ) - 1) )894 !-- Corners of all children of the present parent. Note that SIZE( pmc_parent_for_child ) = 1 if we 895 !-- have no children. 896 IF ( ( SIZE( pmc_parent_for_child ) - 1 > 0 ) .AND. myid == 0 ) THEN 897 ALLOCATE( child_x_left(1:SIZE( pmc_parent_for_child ) - 1) ) 766 898 ALLOCATE( child_x_right(1:SIZE( pmc_parent_for_child ) - 1) ) 767 899 ALLOCATE( child_y_south(1:SIZE( pmc_parent_for_child ) - 1) ) … … 772 904 ENDIF 773 905 ! 774 !-- Get coordinates from all children and check that the children match the parent 775 !-- domain and each others. Note that SIZE( pmc_parent_for_child ) = 1776 !-- if we have no children, thence the loop is not executed at all.906 !-- Get coordinates from all children and check that the children match the parent domain and each 907 !-- others. Note that SIZE( pmc_parent_for_child ) = 1 if we have no children, hence the loop is 908 !-- not executed at all. 777 909 DO m = 1, SIZE( pmc_parent_for_child ) - 1 778 910 … … 781 913 IF ( myid == 0 ) THEN 782 914 783 CALL pmc_recv_from_child( child_id, child_grid_dim, SIZE(child_grid_dim), 0, 123, ierr ) 784 CALL pmc_recv_from_child( child_id, child_grid_info, SIZE(child_grid_info), 0, 124, ierr ) 785 915 CALL pmc_recv_from_child( child_id, child_grid_dim, SIZE( child_grid_dim ), 0, 123, & 916 ierr ) 917 CALL pmc_recv_from_child( child_id, child_grid_info, SIZE( child_grid_info ), 0, 124, & 918 ierr ) 919 786 920 nx_child = child_grid_dim(1) 787 921 ny_child = child_grid_dim(2) … … 792 926 ! 793 927 !-- Find the highest child-domain level in the parent grid for the reduced z transfer 794 DO kp = 1, nzt 795 IF ( zw(kp) - child_height > tolez ) THEN 928 DO kp = 1, nzt 929 IF ( zw(kp) - child_height > tolez ) THEN 796 930 nz_child = kp 797 931 EXIT 798 932 ENDIF 799 933 ENDDO 800 ! 934 ! 
801 935 !-- Get absolute coordinates from the child 802 936 ALLOCATE( child_coord_x(-nbgp:nx_child+nbgp) ) 803 937 ALLOCATE( child_coord_y(-nbgp:ny_child+nbgp) ) 804 938 805 939 CALL pmc_recv_from_child( child_id, child_coord_x, SIZE( child_coord_x ), 0, 11, ierr ) 806 940 CALL pmc_recv_from_child( child_id, child_coord_y, SIZE( child_coord_y ), 0, 12, ierr ) 807 941 808 942 parent_grid_info_real(1) = lower_left_coord_x 809 943 parent_grid_info_real(2) = lower_left_coord_y … … 821 955 parent_grid_info_int(3) = nz_child 822 956 ! 823 !-- Check that the child domain matches its parent domain. 957 !-- Check that the child domain matches its parent domain. 824 958 IF ( nesting_mode == 'vertical' ) THEN 825 959 ! 826 !-- In case of vertical nesting, the lateral boundaries must match exactly. 960 !-- In case of vertical nesting, the lateral boundaries must match exactly. 827 961 right_limit = upper_right_coord_x 828 962 north_limit = upper_right_coord_y 829 963 IF ( ABS( child_coord_x(nx_child+1) - right_limit ) > tolex ) THEN 830 WRITE 831 ') domain right edge does not match its parent right edge'964 WRITE( message_string, "(a,i2,a)" ) 'nested child (id: ',child_id, & 965 ') domain right edge does not match its parent right edge' 832 966 CALL message( 'pmci_setup_parent', 'PA0425', 3, 2, 0, 6, 0 ) 833 967 ENDIF 834 968 IF ( ABS( child_coord_y(ny_child+1) - north_limit ) > toley ) THEN 835 WRITE 836 ') domain north edge does not match its parent north edge'969 WRITE( message_string, "(a,i2,a)" ) 'nested child (id: ',child_id, & 970 ') domain north edge does not match its parent north edge' 837 971 CALL message( 'pmci_setup_parent', 'PA0425', 3, 2, 0, 6, 0 ) 838 972 ENDIF 839 ELSE 840 ! 841 !-- In case of 3-D nesting, check that the child domain is completely 842 !-- inside its parent domain.843 xez = ( nbgp + 1 ) * dx 844 yez = ( nbgp + 1 ) * dy 973 ELSE 974 ! 975 !-- In case of 3-D nesting, check that the child domain is completely inside its parent 976 !-- domain. 
977 xez = ( nbgp + 1 ) * dx 978 yez = ( nbgp + 1 ) * dy 845 979 left_limit = lower_left_coord_x + xez 846 980 right_limit = upper_right_coord_x - xez … … 848 982 north_limit = upper_right_coord_y - yez 849 983 IF ( left_limit - child_coord_x(0) > tolex ) THEN 850 WRITE 851 ') domain does not fit in its parent domain, left edge is either too ' //&852 'close or outside its parent left edge'984 WRITE( message_string, "(a,i2,a)" ) 'nested child (id: ',child_id, & 985 ') domain does not fit in its parent domain, left edge is either too ' // & 986 'close or outside its parent left edge' 853 987 CALL message( 'pmci_setup_parent', 'PA0425', 3, 2, 0, 6, 0 ) 854 988 ENDIF 855 989 IF ( child_coord_x(nx_child+1) - right_limit > tolex ) THEN 856 WRITE 857 ') domain does not fit in its parent domain, right edge is either too ' //&858 'close or outside its parent right edge'990 WRITE( message_string, "(a,i2,a)" ) 'nested child (id: ',child_id, & 991 ') domain does not fit in its parent domain, right edge is either too ' // & 992 'close or outside its parent right edge' 859 993 CALL message( 'pmci_setup_parent', 'PA0425', 3, 2, 0, 6, 0 ) 860 994 ENDIF 861 995 IF ( south_limit - child_coord_y(0) > toley ) THEN 862 WRITE 863 ') domain does not fit in its parent domain, south edge is either too ' //&864 'close or outside its parent south edge'996 WRITE( message_string, "(a,i2,a)" ) 'nested child (id: ',child_id, & 997 ') domain does not fit in its parent domain, south edge is either too ' // & 998 'close or outside its parent south edge' 865 999 CALL message( 'pmci_setup_parent', 'PA0425', 3, 2, 0, 6, 0 ) 866 1000 ENDIF 867 1001 IF ( child_coord_y(ny_child+1) - north_limit > toley ) THEN 868 WRITE 869 ') domain does not fit in its parent domain, north edge is either too ' //&870 'close or outside its parent north edge'1002 WRITE( message_string, "(a,i2,a)" ) 'nested child (id: ',child_id, & 1003 ') domain does not fit in its parent domain, north edge is either too ' // & 1004 'close 
or outside its parent north edge' 871 1005 CALL message( 'pmci_setup_parent', 'PA0425', 3, 2, 0, 6, 0 ) 872 1006 ENDIF 873 1007 ENDIF 874 ! 875 !-- Child domain must be lower than the parent domain such that the top ghost 876 !-- layer of thechild grid does not exceed the parent domain top boundary.1008 ! 1009 !-- Child domain must be lower than the parent domain such that the top ghost layer of the 1010 !-- child grid does not exceed the parent domain top boundary. 877 1011 IF ( child_height - zw(nzt) > tolez ) THEN 878 WRITE 879 880 'close or above its parent top edge'1012 WRITE( message_string, "(a,i2,a)" ) 'nested child (id: ',child_id, & 1013 ') domain does not fit in its parent domain, top edge is either too ' // & 1014 'close or above its parent top edge' 881 1015 CALL message( 'pmci_setup_parent', 'PA0425', 3, 2, 0, 6, 0 ) 882 1016 ENDIF 883 1017 ! 884 !-- If parallel child domains (siblings) do exist ( m > 1 ), 885 !-- check that they do not overlap. 1018 !-- If parallel child domains (siblings) do exist ( m > 1 ), check that they do not overlap. 886 1019 child_x_left(m) = child_coord_x(-nbgp) 887 1020 child_x_right(m) = child_coord_x(nx_child+nbgp) … … 891 1024 IF ( nesting_mode /= 'vertical' ) THEN 892 1025 ! 893 !-- Note that the msib-loop is executed only if ( m > 1 ). 894 !-- Also note that the tests have to be made both ways (m vs msib and msib vs m)895 !-- in orderto detect all the possible overlap situations.1026 !-- Note that the msib-loop is executed only if ( m > 1 ). 1027 !-- Also note that the tests have to be done both ways (m vs msib and msib vs m) in order 1028 !-- to detect all the possible overlap situations. 896 1029 DO msib = 1, m - 1 897 1030 ! 898 !-- Set some logical auxiliary parameters to simplify the IF-condition. 899 m_left_in_msib = ( child_x_left(m) >= child_x_left(msib) - tolex ) .AND. 1031 !-- Set some logical auxiliary parameters to simplify the IF-condition. 
1032 m_left_in_msib = ( child_x_left(m) >= child_x_left(msib) - tolex ) .AND. & 900 1033 ( child_x_left(m) <= child_x_right(msib) + tolex ) 901 m_right_in_msib = ( child_x_right(m) >= child_x_left(msib) - tolex ) .AND. 1034 m_right_in_msib = ( child_x_right(m) >= child_x_left(msib) - tolex ) .AND. & 902 1035 ( child_x_right(m) <= child_x_right(msib) + tolex ) 903 msib_left_in_m = ( child_x_left(msib) >= child_x_left(m) - tolex ) .AND. 1036 msib_left_in_m = ( child_x_left(msib) >= child_x_left(m) - tolex ) .AND. & 904 1037 ( child_x_left(msib) <= child_x_right(m) + tolex ) 905 msib_right_in_m = ( child_x_right(msib) >= child_x_left(m) - tolex ) .AND. 1038 msib_right_in_m = ( child_x_right(msib) >= child_x_left(m) - tolex ) .AND. & 906 1039 ( child_x_right(msib) <= child_x_right(m) + tolex ) 907 m_south_in_msib = ( child_y_south(m) >= child_y_south(msib) - toley ) .AND. 1040 m_south_in_msib = ( child_y_south(m) >= child_y_south(msib) - toley ) .AND. & 908 1041 ( child_y_south(m) <= child_y_north(msib) + toley ) 909 m_north_in_msib = ( child_y_north(m) >= child_y_south(msib) - toley ) .AND. 1042 m_north_in_msib = ( child_y_north(m) >= child_y_south(msib) - toley ) .AND. & 910 1043 ( child_y_north(m) <= child_y_north(msib) + toley ) 911 msib_south_in_m = ( child_y_south(msib) >= child_y_south(m) - toley ) .AND. 1044 msib_south_in_m = ( child_y_south(msib) >= child_y_south(m) - toley ) .AND. & 912 1045 ( child_y_south(msib) <= child_y_north(m) + toley ) 913 msib_north_in_m = ( child_y_north(msib) >= child_y_south(m) - toley ) .AND. 1046 msib_north_in_m = ( child_y_north(msib) >= child_y_south(m) - toley ) .AND. & 914 1047 ( child_y_north(msib) <= child_y_north(m) + toley ) 915 916 IF ( ( m_left_in_msib .OR. m_right_in_msib .OR. & 917 msib_left_in_m .OR. msib_right_in_m ) & 918 .AND. & 919 ( m_south_in_msib .OR. m_north_in_msib .OR. & 1048 1049 IF ( ( m_left_in_msib .OR. m_right_in_msib .OR. & 1050 msib_left_in_m .OR. msib_right_in_m ) .AND. & 1051 ( m_south_in_msib .OR. 
m_north_in_msib .OR. & 920 1052 msib_south_in_m .OR. msib_north_in_m ) ) THEN 921 1053 sibling_id = pmc_parent_for_child(msib) 922 WRITE 923 child_id, ' and ', sibling_id, ') overlap'1054 WRITE( message_string, "(a,i2,a,i2,a)" ) 'nested parallel child domains (ids: ',& 1055 child_id, ' and ', sibling_id, ') overlap' 924 1056 CALL message( 'pmci_setup_parent', 'PA0426', 3, 2, 0, 6, 0 ) 925 1057 ENDIF 926 1058 927 1059 ENDDO 928 ENDIF 1060 ENDIF 929 1061 930 1062 CALL pmci_set_child_edge_coords … … 933 1065 DEALLOCATE( child_coord_y ) 934 1066 ! 935 !-- Send information about operating mode (LES or RANS) to child. This will be 936 !-- used tocontrol TKE nesting and setting boundary conditions properly.937 CALL pmc_send_to_child( child_id, rans_mode, 1, 0, 19, ierr ) 1067 !-- Send information about operating mode (LES or RANS) to child. This will be used to 1068 !-- control TKE nesting and setting boundary conditions properly. 1069 CALL pmc_send_to_child( child_id, rans_mode, 1, 0, 19, ierr ) 938 1070 ! 939 1071 !-- Send parent grid information to child 940 CALL pmc_send_to_child( child_id, parent_grid_info_real, & 941 SIZE( parent_grid_info_real ), 0, 21, & 942 ierr ) 943 CALL pmc_send_to_child( child_id, parent_grid_info_int, 3, 0, & 944 22, ierr ) 1072 CALL pmc_send_to_child( child_id, parent_grid_info_real, SIZE( parent_grid_info_real ), & 1073 0, 21, ierr ) 1074 CALL pmc_send_to_child( child_id, parent_grid_info_int, 3, 0, 22, ierr ) 945 1075 ! 946 1076 !-- Send local grid to child 947 CALL pmc_send_to_child( child_id, coord_x, nx+1+2*nbgp, 0, 24, & 948 ierr ) 949 CALL pmc_send_to_child( child_id, coord_y, ny+1+2*nbgp, 0, 25, & 950 ierr ) 1077 CALL pmc_send_to_child( child_id, coord_x, nx+1+2*nbgp, 0, 24, ierr ) 1078 CALL pmc_send_to_child( child_id, coord_y, ny+1+2*nbgp, 0, 25, ierr ) 951 1079 ! 
!--       Also send the dzu-, dzw-, zu- and zw-arrays here
…
          CALL pmc_send_to_child( child_id, zu, nz_child + 2, 0, 28, ierr )
          CALL pmc_send_to_child( child_id, zw, nz_child + 2, 0, 29, ierr )

       ENDIF  ! ( myid == 0 )

       CALL MPI_BCAST( nz_child, 1, MPI_INTEGER, 0, comm2d, ierr )

       CALL MPI_BCAST( childgrid(m), STORAGE_SIZE( childgrid( 1 ) ) / 8, MPI_BYTE, 0, comm2d, ierr )
!
!--    Set up the index-list, which is an integer array that maps the child index space onto the
!--    parent index- and subdomain spaces.
       CALL pmci_create_index_list
!
!--    Include couple arrays into parent content.
!--    The addresses of the PALM 2D or 3D arrays (here parent grid) which are candidates for
!--    coupling are stored once into the pmc context. During data transfer, the arrays do not have
!--    to be specified again.
       CALL pmc_s_clear_next_array_list
       DO  WHILE ( pmc_s_getnextarray( child_id, myname ) )
          IF ( INDEX( TRIM( myname ), 'chem_' ) /= 0 )  THEN
             CALL pmci_set_array_pointer( myname, child_id = child_id, nz_child = nz_child, n = n )
             n = n + 1
          ELSEIF ( INDEX( TRIM( myname ), 'an_' ) /= 0 )  THEN
             CALL pmci_set_array_pointer( myname, child_id = child_id, nz_child = nz_child, n = lb )
             lb = lb + 1
          ELSEIF ( INDEX( TRIM( myname ), 'am_' ) /= 0 )  THEN
             CALL pmci_set_array_pointer( myname, child_id = child_id, nz_child = nz_child, n = lc )
             lc = lc + 1
          ELSEIF ( INDEX( TRIM( myname ), 'sg_' ) /= 0  .AND.  .NOT. salsa_gases_from_chem )  THEN
             CALL pmci_set_array_pointer( myname, child_id = child_id, nz_child = nz_child, n = lg )
…
       CALL pmc_s_setind_and_allocmem( child_id )

    ENDDO  ! m

    IF ( ( SIZE( pmc_parent_for_child ) - 1 > 0 )  .AND.  myid == 0 )  THEN
       DEALLOCATE( child_x_left )
       DEALLOCATE( child_x_right )
…
    ENDIF


 CONTAINS

!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_create_index_list

    IMPLICIT NONE

    INTEGER(iwp) :: ilist            !< Index-list index running over the child's parent-grid jc,ic-space
    INTEGER(iwp) :: index_list_size  !< Dimension 2 of the array index_list
    INTEGER(iwp) :: ierr             !< MPI error code
    INTEGER(iwp) :: ip               !< Running parent-grid index on the child domain in the x-direction
    INTEGER(iwp) :: jp               !< Running parent-grid index on the child domain in the y-direction
    INTEGER(iwp) :: n                !< Running index over child subdomains
    INTEGER(iwp) :: nrx              !< Parent subdomain dimension in the x-direction
    INTEGER(iwp) :: nry              !< Parent subdomain dimension in the y-direction
    INTEGER(iwp) :: pex              !< Two-dimensional subdomain (pe) index in the x-direction
    INTEGER(iwp) :: pey              !< Two-dimensional subdomain (pe) index in the y-direction
    INTEGER(iwp) :: parent_pe        !< Parent subdomain index (one-dimensional)

    INTEGER(iwp), DIMENSION(2) :: pe_indices_2d                          !< Array for two-dimensional subdomain (pe)
                                                                         !< indices needed for MPI_CART_RANK
    INTEGER(iwp), DIMENSION(2) :: size_of_childs_parent_grid_bounds_all  !< Dimensions of childs_parent_grid_bounds_all

    INTEGER(iwp), DIMENSION(:,:), ALLOCATABLE :: childs_parent_grid_bounds_all  !< Array that contains the child's
                                                                                !< parent-grid index bounds for all its
                                                                                !< subdomains (pes)
    INTEGER(iwp), DIMENSION(:,:), ALLOCATABLE :: index_list                     !< Array that maps the child index space on
                                                                                !< the parent index- and subdomain spaces

    IF ( myid == 0 )  THEN

       CALL pmc_recv_from_child( child_id, size_of_childs_parent_grid_bounds_all, 2, 0, 40, ierr )
       ALLOCATE( childs_parent_grid_bounds_all(size_of_childs_parent_grid_bounds_all(1),           &
                                               size_of_childs_parent_grid_bounds_all(2)) )
       CALL pmc_recv_from_child( child_id, childs_parent_grid_bounds_all,                          &
                                 SIZE( childs_parent_grid_bounds_all ), 0, 41, ierr )
!
!--    Compute size (dimension) of the index_list.
       index_list_size = 0
       DO  n = 1, size_of_childs_parent_grid_bounds_all(2)
          index_list_size = index_list_size +                                                      &
               ( childs_parent_grid_bounds_all(4,n) - childs_parent_grid_bounds_all(3,n) + 1 ) *   &
               ( childs_parent_grid_bounds_all(2,n) - childs_parent_grid_bounds_all(1,n) + 1 )
       ENDDO

       ALLOCATE( index_list(6,index_list_size) )

       nrx = nxr - nxl + 1
       nry = nyn - nys + 1
       ilist = 0
!
!--    Loop over all children PEs
       DO  n = 1, size_of_childs_parent_grid_bounds_all(2)
!
!--       Subspace along y required by actual child PE
          DO  jp = childs_parent_grid_bounds_all(3,n), childs_parent_grid_bounds_all(4,n)  ! jp = jps, jpn of child PE# n
!
!--          Subspace along x required by actual child PE
             DO  ip = childs_parent_grid_bounds_all(1,n), childs_parent_grid_bounds_all(2,n)  ! ip = ipl, ipr of child PE# n

                pex = ip / nrx
                pey = jp / nry
                pe_indices_2d(1) = pex
                pe_indices_2d(2) = pey
                CALL MPI_CART_RANK( comm2d, pe_indices_2d, parent_pe, ierr )

                ilist = ilist + 1
!
!--             First index in parent array  ! TO_DO: Klaus, please explain better
                index_list(1,ilist) = ip - ( pex * nrx ) + 1 + nbgp
!
!--             Second index in parent array  ! TO_DO: Klaus, please explain better
                index_list(2,ilist) = jp - ( pey * nry ) + 1 + nbgp
!
!--             x index of child's parent grid
                index_list(3,ilist) = ip - childs_parent_grid_bounds_all(1,n) + 1
!
!--             y index of child's parent grid
                index_list(4,ilist) = jp - childs_parent_grid_bounds_all(3,n) + 1
!
!--             PE number of child
                index_list(5,ilist) = n - 1
!
!--             PE number of parent
                index_list(6,ilist) = parent_pe

             ENDDO
          ENDDO
       ENDDO
!
!--    TO_DO: Klaus: comment what is done here
       CALL pmc_s_set_2d_index_list( child_id, index_list(:,1:ilist) )

    ELSE
!
!--    TO_DO: Klaus: comment why this dummy allocation is required
       ALLOCATE( index_list(6,1) )
       CALL pmc_s_set_2d_index_list( child_id, index_list )
    ENDIF

    DEALLOCATE( index_list )

 END SUBROUTINE pmci_create_index_list


!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_set_child_edge_coords

    IMPLICIT NONE

    INTEGER(iwp) :: nbgp_lpm = 1  !< Number of ghost-point layers used for lpm (Klaus, is this correct?)


    nbgp_lpm = MIN( nbgp_lpm, nbgp )

    childgrid(m)%nx = nx_child
    childgrid(m)%ny = ny_child
    childgrid(m)%nz = nz_child
    childgrid(m)%dx = dx_child
    childgrid(m)%dy = dy_child
    childgrid(m)%dz = dz_child

    childgrid(m)%lx_coord   = child_coord_x(0)
    childgrid(m)%lx_coord_b = child_coord_x(-nbgp_lpm)
    childgrid(m)%rx_coord   = child_coord_x(nx_child) + dx_child
    childgrid(m)%rx_coord_b = child_coord_x(nx_child+nbgp_lpm) + dx_child
    childgrid(m)%sy_coord   = child_coord_y(0)
    childgrid(m)%sy_coord_b = child_coord_y(-nbgp_lpm)
    childgrid(m)%ny_coord   = child_coord_y(ny_child) + dy_child
    childgrid(m)%ny_coord_b = child_coord_y(ny_child+nbgp_lpm) + dy_child
    childgrid(m)%uz_coord   = child_grid_info(2)
    childgrid(m)%uz_coord_b = child_grid_info(1)

 END SUBROUTINE pmci_set_child_edge_coords

#endif
…
!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
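The mapping built in pmci_create_index_list above pairs each parent-grid point required by a child PE with the parent PE that owns it, plus local indices on both sides. The core of that pairing can be illustrated with a small Python sketch (function and variable names are hypothetical, not part of PALM); it mirrors the integer divisions `pex = ip / nrx`, `pey = jp / nry` and the ghost-layer offset `+ 1 + nbgp` from the Fortran loop, assuming equally sized parent subdomains:

```python
def map_to_subdomain(ip, jp, nrx, nry, nbgp):
    """Map a global parent-grid index pair (ip, jp) to the owning
    2-d subdomain coordinates and the local indices on that subdomain.
    nrx/nry: subdomain dimensions, nbgp: number of ghost layers."""
    pex, pey = ip // nrx, jp // nry        # 2-d subdomain (pe) coordinates
    i_local = ip - pex * nrx + 1 + nbgp    # local index incl. ghost offset
    j_local = jp - pey * nry + 1 + nbgp
    return pex, pey, i_local, j_local
```

In the actual code, the pair (pex, pey) is then passed to MPI_CART_RANK to obtain the one-dimensional parent rank stored in index_list(6,:).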
 SUBROUTINE pmci_setup_child

…
    IMPLICIT NONE

    CHARACTER(LEN=da_namelen) :: myname      !< Name of the variable to be coupled
    CHARACTER(LEN=5)          :: salsa_char  !< Name extension for the variable name in case of SALSA variable

    INTEGER(iwp) :: ierr  !< MPI error code
    INTEGER(iwp) :: lb    !< Running index for aerosol size bins
    INTEGER(iwp) :: lc    !< Running index for aerosol mass bins
    INTEGER(iwp) :: lg    !< Running index for SALSA gases
    INTEGER(iwp) :: n     !< Running index for number of chemical species

    INTEGER(iwp), DIMENSION(3) :: child_grid_dim  !< Array for sending the child-grid dimensions to parent

    REAL(wp), DIMENSION(5) :: child_grid_info  !< Array for sending the child-grid spacings etc to parent

!
!-- Child setup
…
    IF ( .NOT. pmc_is_rootmodel() )  THEN
!
!--    Klaus, add a description here what pmc_childinit does
       CALL pmc_childinit
!
!--    The arrays, which actually will be exchanged between child and parent are defined Here AND
!--    ONLY HERE. If a variable is removed, it only has to be removed from here. Please check, if
!--    the arrays are in the list of POSSIBLE exchange arrays in subroutines:
!--    pmci_set_array_pointer (for parent arrays)
!--    pmci_create_childs_parent_grid_arrays (for child's parent-grid arrays)
…
       CALL pmc_set_dataarray_name( 'parent', 'w', 'child', 'w', ierr )
!
!--    Set data array name for TKE. Please note, nesting of TKE is actually only done if both
!--    parent and child are in LES or in RANS mode. Due to design of the model coupler, however,
!--    data array names must already be available at this point.
       IF ( ( rans_mode_parent  .AND.  rans_mode )  .OR.                                           &
            ( .NOT. rans_mode_parent  .AND.  .NOT. rans_mode  .AND.  .NOT. constant_diffusion ) )  &
       THEN
          CALL pmc_set_dataarray_name( 'parent', 'e', 'child', 'e', ierr )
       ENDIF
!
!--    Nesting of dissipation rate only if both parent and child are in RANS mode and TKE-epsilon
!--    closure is applied. Please see also comment for TKE above.
       IF ( rans_mode_parent  .AND.  rans_mode  .AND.  rans_tke_e )  THEN
          CALL pmc_set_dataarray_name( 'parent', 'diss', 'child', 'diss', ierr )
…
       IF ( bulk_cloud_model  .AND.  microphysics_morrison )  THEN
          CALL pmc_set_dataarray_name( 'parent', 'qc', 'child', 'qc', ierr )
          CALL pmc_set_dataarray_name( 'parent', 'nc', 'child', 'nc', ierr )
       ENDIF

       IF ( bulk_cloud_model  .AND.  microphysics_seifert )  THEN
          CALL pmc_set_dataarray_name( 'parent', 'qr', 'child', 'qr', ierr )
          CALL pmc_set_dataarray_name( 'parent', 'nr', 'child', 'nr', ierr )
       ENDIF

    ENDIF
…
       CALL pmc_set_dataarray_name( 'parent', 'part_adr', 'child', 'part_adr', ierr )
    ENDIF

    IF ( air_chemistry  .AND.  nesting_chem )  THEN
       DO  n = 1, nspec
          CALL pmc_set_dataarray_name( 'parent', 'chem_' // TRIM( chem_species(n)%name ),          &
                                       'child', 'chem_' // TRIM( chem_species(n)%name ), ierr )
       ENDDO
    ENDIF

    IF ( salsa  .AND.  nesting_salsa )  THEN
       DO  lb = 1, nbins_aerosol
          WRITE( salsa_char,'(i0)' )  lb
          CALL pmc_set_dataarray_name( 'parent', 'an_' // TRIM( salsa_char ),                      &
                                       'child', 'an_' // TRIM( salsa_char ), ierr )
       ENDDO
       DO  lc = 1, nbins_aerosol * ncomponents_mass
          WRITE( salsa_char,'(i0)' )  lc
          CALL pmc_set_dataarray_name( 'parent', 'am_' // TRIM( salsa_char ),                      &
                                       'child', 'am_' // TRIM( salsa_char ), ierr )
       ENDDO
       IF ( .NOT. salsa_gases_from_chem )  THEN
          DO  lg = 1, ngases_salsa
             WRITE( salsa_char,'(i0)' )  lg
             CALL pmc_set_dataarray_name( 'parent', 'sg_' // TRIM( salsa_char ),                   &
                                          'child', 'sg_' // TRIM( salsa_char ), ierr )
          ENDDO
…
!
!--    Receive parent-grid information.
       CALL pmc_recv_from_parent( parent_grid_info_real, SIZE( parent_grid_info_real ), 0, 21,     &
                                  ierr )
       CALL pmc_recv_from_parent( parent_grid_info_int, 3, 0, 22, ierr )

    ENDIF

    CALL MPI_BCAST( parent_grid_info_real, SIZE( parent_grid_info_real ), MPI_REAL, 0, comm2d,     &
                    ierr )
    CALL MPI_BCAST( parent_grid_info_int, 3, MPI_INTEGER, 0, comm2d, ierr )

…
    CALL MPI_BCAST( pg%zu, pg%nz+2, MPI_REAL, 0, comm2d, ierr )
    CALL MPI_BCAST( pg%zw, pg%nz+2, MPI_REAL, 0, comm2d, ierr )
    CALL MPI_BCAST( rans_mode_parent, 1, MPI_LOGICAL, 0, comm2d, ierr )
!
!-- Find the index bounds for the nest domain in the parent-grid index space
…
    DO  WHILE ( pmc_c_getnextarray( myname ) )
!
!--    Note that pg%nz is not the original nz of parent, but the highest parent-grid level needed
!--    for nesting. Note that in case of chemical species or SALSA variables an additional
!--    parameter needs to be passed. The parameter is required to set the pointer correctly to
!--    the chemical-species or SALSA data structure. Hence, first check if the current variable
!--    is a chemical species or a SALSA variable. If so, pass the index id of the respective
!--    sub-variable (species or bin) and increment this subsequently.
       IF ( INDEX( TRIM( myname ), 'chem_' ) /= 0 )  THEN
          CALL pmci_create_childs_parent_grid_arrays ( myname, ipl, ipr, jps, jpn, pg%nz, n )
          n = n + 1
       ELSEIF ( INDEX( TRIM( myname ), 'an_' ) /= 0 )  THEN
          CALL pmci_create_childs_parent_grid_arrays ( myname, ipl, ipr, jps, jpn, pg%nz, lb )
…
       CALL pmci_define_index_mapping
!
!--    Check that the child and parent grid lines do match
       CALL pmci_check_grid_matching
!
!--    Compute surface areas of the nest-boundary faces
       CALL pmci_compute_face_areas

    ENDIF
…
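The prefix-based dispatch in the DO WHILE loop above (shared, in essence, with the parent-side loop over pmci_set_array_pointer) can be sketched in a few lines of Python. This is an illustrative aside, not PALM code; `pick_counter` and the `counters` dictionary are hypothetical stand-ins for the Fortran counters n, lb, lc and lg:

```python
def pick_counter(name, counters, salsa_gases_from_chem=False):
    """Return which running index a coupled-array name advances:
    'n' for chemistry species, 'lb' for aerosol number bins ('an_'),
    'lc' for aerosol mass bins ('am_'), 'lg' for SALSA gases ('sg_').
    Plain dynamic variables (u, v, w, ...) need no extra index."""
    if 'chem_' in name:
        key = 'n'
    elif 'an_' in name:
        key = 'lb'
    elif 'am_' in name:
        key = 'lc'
    elif 'sg_' in name and not salsa_gases_from_chem:
        key = 'lg'
    else:
        return None
    counters[key] += 1   # mirrors n = n + 1, lb = lb + 1, ...
    return key
```

As in the Fortran, the counter is incremented only after it has been passed on, so successive chem_/an_/am_/sg_ arrays receive consecutive sub-variable ids.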
1375 !-- Determine index bounds of interpolation/anterpolation area in the parent-grid index space 1376 IMPLICIT NONE 1377 1378 INTEGER(iwp), DIMENSION(5,numprocs) :: parent_bound_all !< Transfer array for parent-grid index bounds 1379 1380 INTEGER(iwp), DIMENSION(4) :: parent_bound_global !< Transfer array for global parent-grid index bounds 1381 INTEGER(iwp), DIMENSION(2) :: size_of_array !< For sending the dimensions of parent_bound_all to parent 1382 1383 INTEGER(iwp) :: ip !< Running parent-grid index in the x-direction 1384 INTEGER(iwp) :: iauxl !< Offset between the index bound ipl and the auxiliary index bound ipla 1385 INTEGER(iwp) :: iauxr !< Offset between the index bound ipr and the auxiliary index bound ipra 1386 INTEGER(iwp) :: ijaux !< Temporary variable for receiving the index bound from the neighbouring subdomain 1387 INTEGER(iwp) :: jp !< Running parent-grid index in the y-direction 1388 INTEGER(iwp) :: jauxs !< Offset between the index bound jps and the auxiliary index bound jpsa 1389 INTEGER(iwp) :: jauxn !< Offset between the index bound jpn and the auxiliary index bound jpna 1390 1391 REAL(wp) :: tolex !< Tolerance for grid-line matching in x-direction 1392 REAL(wp) :: toley !< Tolerance for grid-line matching in y-direction 1393 REAL(wp) :: xexl !< Parent-grid array exceedance behind the left edge of the child PE subdomain 1394 REAL(wp) :: xexr !< Parent-grid array exceedance behind the right edge of the child PE subdomain 1395 REAL(wp) :: yexs !< Parent-grid array exceedance behind the south edge of the child PE subdomain 1396 REAL(wp) :: yexn !< Parent-grid array exceedance behind the north edge of the child PE subdomain 1397 REAL(wp) :: xpl !< Requested left-edge x-coordinate of the parent-grid array domain (at the internal boundaries 1398 !< the real edge may differ from this in some cases as explained in the comment block below) 1399 REAL(wp) :: xpr !< Requested right-edge x-coordinate of the parent-grid array domain (at the internal 
boundaries 1400 !< the real edge may differ from this in some cases as explained in the comment block below) 1401 REAL(wp) :: yps !< Requested south-edge y-coordinate of the parent-grid array domain (at the internal boundaries 1402 !< the real edge may differ from this in some cases as explained in the comment block below) 1403 REAL(wp) :: ypn !< Requested south-edge y-coordinate of the parent-grid array domain (at the internal boundaries 1404 !< the real edge may differ from this in some cases as explained in the comment block below) 1405 1406 ! 1407 !-- Determine the index limits for the child's parent-grid arrays (such as uc for example). 1408 !-- Note that at the outer edges of the child domain (nest boundaries) these arrays exceed 1409 !-- the boundary by two parent-grid cells. At the internal boundaries, there are no 1410 !-- exceedances and thus no overlaps with the neighbouring subdomain. If at least half 1411 !-- of the parent-grid cell is within the current child sub-domain, then it is included 1412 !-- in the current sub-domain's parent-grid array. Else the parent-grid cell is 1413 !-- included in the neighbouring subdomain's parent-grid array, or not included at all if 1414 !-- we are at the outer edge of the child domain. This may occur especially when a large 1415 !-- grid-spacing ratio is used. 1416 ! 1417 !-- Tolerances for grid-line matching. 1418 tolex = tolefac * dx 1419 toley = tolefac * dy 1420 ! 1421 !-- Left boundary. 1422 !-- Extension by two parent-grid cells behind the boundary, see the comment block above. 1423 IF ( bc_dirichlet_l ) THEN 1424 xexl = 2.0_wp * pg%dx 1425 iauxl = 0 1509 !--------------------------------------------------------------------------------------------------! 1510 ! Description: 1511 ! ------------ 1512 !> @Todo: Missing subroutine description. 1513 !--------------------------------------------------------------------------------------------------! 1514 SUBROUTINE pmci_map_child_grid_to_parent_grid 1515 1516 ! 
1517 !-- Determine index bounds of interpolation/anterpolation area in the parent-grid index space 1518 IMPLICIT NONE 1519 1520 INTEGER(iwp) :: ip !< Running parent-grid index in the x-direction 1521 INTEGER(iwp) :: iauxl !< Offset between the index bound ipl and the auxiliary index bound ipla 1522 INTEGER(iwp) :: iauxr !< Offset between the index bound ipr and the auxiliary index bound ipra 1523 INTEGER(iwp) :: ijaux !< Temporary variable for receiving the index bound from the neighbouring subdomain 1524 INTEGER(iwp) :: jp !< Running parent-grid index in the y-direction 1525 INTEGER(iwp) :: jauxs !< Offset between the index bound jps and the auxiliary index bound jpsa 1526 INTEGER(iwp) :: jauxn !< Offset between the index bound jpn and the auxiliary index bound jpna 1527 1528 INTEGER(iwp), DIMENSION(4) :: parent_bound_global !< Transfer array for global parent-grid index bounds 1529 INTEGER(iwp), DIMENSION(2) :: size_of_array !< For sending the dimensions of parent_bound_all to parent 1530 1531 INTEGER(iwp), DIMENSION(5,numprocs) :: parent_bound_all !< Transfer array for parent-grid index bounds 1532 1533 REAL(wp) :: tolex !< Tolerance for grid-line matching in x-direction 1534 REAL(wp) :: toley !< Tolerance for grid-line matching in y-direction 1535 REAL(wp) :: xexl !< Parent-grid array exceedance behind the left edge of the child PE subdomain 1536 REAL(wp) :: xexr !< Parent-grid array exceedance behind the right edge of the child PE subdomain 1537 REAL(wp) :: xpl !< Requested left-edge x-coordinate of the parent-grid array domain (at the internal boundaries 1538 !< the real edge may differ from this in some cases as explained in the comment block below) 1539 REAL(wp) :: xpr !< Requested right-edge x-coordinate of the parent-grid array domain (at the internal boundaries 1540 !< the real edge may differ from this in some cases as explained in the comment block below) 1541 REAL(wp) :: yexs !< Parent-grid array exceedance behind the south edge of the child PE 
subdomain 1542 REAL(wp) :: yexn !< Parent-grid array exceedance behind the north edge of the child PE subdomain 1543 REAL(wp) :: yps !< Requested south-edge y-coordinate of the parent-grid array domain (at the internal boundaries 1544 !< the real edge may differ from this in some cases as explained in the comment block below) 1545 REAL(wp) :: ypn !< Requested south-edge y-coordinate of the parent-grid array domain (at the internal boundaries 1546 !< the real edge may differ from this in some cases as explained in the comment block below) 1547 1548 ! 1549 !-- Determine the index limits for the child's parent-grid arrays (such as uc for example). 1550 !-- Note that at the outer edges of the child domain (nest boundaries) these arrays exceed the 1551 !-- boundary by two parent-grid cells. At the internal boundaries, there are no exceedances and 1552 !-- thus no overlaps with the neighbouring subdomain. If at least half of the parent-grid cell is 1553 !-- within the current child sub-domain, then it is included in the current sub-domain's 1554 !-- parent-grid array. Else the parent-grid cell is included in the neighbouring subdomain's 1555 !-- parent-grid array, or not included at all if we are at the outer edge of the child domain. 1556 !-- This may occur especially when a large grid-spacing ratio is used. 1557 ! 1558 !-- Tolerances for grid-line matching. 1559 tolex = tolefac * dx 1560 toley = tolefac * dy 1561 ! 1562 !-- Left boundary. 1563 !-- Extension by two parent-grid cells behind the boundary, see the comment block above. 1564 IF ( bc_dirichlet_l ) THEN 1565 xexl = 2.0_wp * pg%dx 1566 iauxl = 0 1567 ELSE 1568 xexl = 0.0_wp 1569 iauxl = 1 1570 ENDIF 1571 xpl = coord_x(nxl) - xexl 1572 DO ip = 0, pg%nx 1573 IF ( pg%coord_x(ip) + 0.5_wp * pg%dx >= xpl - tolex ) THEN 1574 ipl = MAX( 0, ip ) 1575 EXIT 1576 ENDIF 1577 ENDDO 1578 ! 1579 !-- Right boundary. 1580 !-- Extension by two parent-grid cells behind the boundary, see the comment block above. 
1581 IF ( bc_dirichlet_r ) THEN 1582 xexr = 2.0_wp * pg%dx 1583 iauxr = 0 1584 ELSE 1585 xexr = 0.0_wp 1586 iauxr = 1 1587 ENDIF 1588 xpr = coord_x(nxr+1) + xexr 1589 DO ip = pg%nx, 0 , -1 1590 IF ( pg%coord_x(ip) + 0.5_wp * pg%dx <= xpr + tolex ) THEN 1591 ipr = MIN( pg%nx, MAX( ipl, ip ) ) 1592 EXIT 1593 ENDIF 1594 ENDDO 1595 ! 1596 !-- South boundary. 1597 !-- Extension by two parent-grid cells behind the boundary, see the comment block above. 1598 IF ( bc_dirichlet_s ) THEN 1599 yexs = 2.0_wp * pg%dy 1600 jauxs = 0 1601 ELSE 1602 yexs = 0.0_wp 1603 jauxs = 1 1604 ENDIF 1605 yps = coord_y(nys) - yexs 1606 DO jp = 0, pg%ny 1607 IF ( pg%coord_y(jp) + 0.5_wp * pg%dy >= yps - toley ) THEN 1608 jps = MAX( 0, jp ) 1609 EXIT 1610 ENDIF 1611 ENDDO 1612 ! 1613 !-- North boundary. 1614 !-- Extension by two parent-grid cells behind the boundary, see the comment block above. 1615 IF ( bc_dirichlet_n ) THEN 1616 yexn = 2.0_wp * pg%dy 1617 jauxn = 0 1618 ELSE 1619 yexn = 0.0_wp 1620 jauxn = 1 1621 ENDIF 1622 ypn = coord_y(nyn+1) + yexn 1623 DO jp = pg%ny, 0 , -1 1624 IF ( pg%coord_y(jp) + 0.5_wp * pg%dy <= ypn + toley ) THEN 1625 jpn = MIN( pg%ny, MAX( jps, jp ) ) 1626 EXIT 1627 ENDIF 1628 ENDDO 1629 ! 1630 !-- Make sure that the indexing is contiguous (no gaps, no overlaps). This is a safety measure 1631 !-- mainly for cases with high grid-spacing ratio and narrow child subdomains. 1632 IF ( pdims(1) > 1 ) THEN 1633 IF ( nxl == 0 ) THEN 1634 CALL MPI_SEND( ipr, 1, MPI_INTEGER, pright, 717, comm2d, ierr ) 1635 ELSE IF ( nxr == nx ) THEN 1636 CALL MPI_RECV( ijaux, 1, MPI_INTEGER, pleft, 717, comm2d, status, ierr ) 1637 ipl = ijaux + 1 1426 1638 ELSE 1427 xexl = 0.0_wp 1428 iauxl = 1 1429 ENDIF 1430 xpl = coord_x(nxl) - xexl 1431 DO ip = 0, pg%nx 1432 IF ( pg%coord_x(ip) + 0.5_wp * pg%dx >= xpl - tolex ) THEN 1433 ipl = MAX( 0, ip ) 1434 EXIT 1435 ENDIF 1436 ENDDO 1437 ! 1438 !-- Right boundary. 
1439 !-- Extension by two parent-grid cells behind the boundary, see the comment block above. 1440 IF ( bc_dirichlet_r ) THEN 1441 xexr = 2.0_wp * pg%dx 1442 iauxr = 0 1639 CALL MPI_SEND( ipr, 1, MPI_INTEGER, pright, 717, comm2d, ierr ) 1640 CALL MPI_RECV( ijaux, 1, MPI_INTEGER, pleft, 717, comm2d, status, ierr ) 1641 ipl = ijaux + 1 1642 ENDIF 1643 ENDIF 1644 1645 IF ( pdims(2) > 1 ) THEN 1646 IF ( nys == 0 ) THEN 1647 CALL MPI_SEND( jpn, 1, MPI_INTEGER, pnorth, 719, comm2d, ierr ) 1648 ELSE IF ( nyn == ny ) THEN 1649 CALL MPI_RECV( ijaux, 1, MPI_INTEGER, psouth, 719, comm2d, status, ierr ) 1650 jps = ijaux + 1 1443 1651 ELSE 1444 xexr = 0.0_wp 1445 iauxr = 1 1446 ENDIF 1447 xpr = coord_x(nxr+1) + xexr 1448 DO ip = pg%nx, 0 , -1 1449 IF ( pg%coord_x(ip) + 0.5_wp * pg%dx <= xpr + tolex ) THEN 1450 ipr = MIN( pg%nx, MAX( ipl, ip ) ) 1451 EXIT 1452 ENDIF 1453 ENDDO 1454 ! 1455 !-- South boundary. 1456 !-- Extension by two parent-grid cells behind the boundary, see the comment block above. 1457 IF ( bc_dirichlet_s ) THEN 1458 yexs = 2.0_wp * pg%dy 1459 jauxs = 0 1652 CALL MPI_SEND( jpn, 1, MPI_INTEGER, pnorth, 719, comm2d, ierr ) 1653 CALL MPI_RECV( ijaux, 1, MPI_INTEGER, psouth, 719, comm2d, status, ierr ) 1654 jps = ijaux + 1 1655 ENDIF 1656 ENDIF 1657 1658 WRITE( 9,"('pmci_map_child_grid_to_parent_grid. Parent-grid array bounds: ',4(i4,2x))" ) & 1659 ipl, ipr, jps, jpn 1660 FLUSH(9) 1661 1662 parent_bound(1) = ipl 1663 parent_bound(2) = ipr 1664 parent_bound(3) = jps 1665 parent_bound(4) = jpn 1666 parent_bound(5) = myid 1667 ! 1668 !-- The following auxiliary index bounds are used for allocating index mapping and some other 1669 !-- auxiliary arrays. 1670 ipla = ipl - iauxl 1671 ipra = ipr + iauxr 1672 jpsa = jps - jauxs 1673 jpna = jpn + jauxn 1674 ! 1675 !-- The index-bounds parent_bound of all subdomains of the current child domain must be sent to the 1676 !-- parent in order for the parent to create the index list. 
For this reason, the parent_bound 1677 !-- arrays are packed together in a single array parent_bound_all using MPI_GATHER. Note that 1678 !-- MPI_Gather receives data from all processes in rank order. This fact is exploited in creating 1679 !-- the index list in pmci_create_index_list. 1680 CALL MPI_GATHER( parent_bound, 5, MPI_INTEGER, parent_bound_all, 5, MPI_INTEGER, 0, comm2d, & 1681 ierr ) 1682 1683 IF ( myid == 0 ) THEN 1684 size_of_array(1) = SIZE( parent_bound_all, 1 ) 1685 size_of_array(2) = SIZE( parent_bound_all, 2 ) 1686 CALL pmc_send_to_parent( size_of_array, 2, 0, 40, ierr ) 1687 CALL pmc_send_to_parent( parent_bound_all, SIZE( parent_bound_all ), 0, 41, ierr ) 1688 ! 1689 !-- Determine the global parent-grid index bounds 1690 parent_bound_global(1) = MINVAL( parent_bound_all(1,:) ) 1691 parent_bound_global(2) = MAXVAL( parent_bound_all(2,:) ) 1692 parent_bound_global(3) = MINVAL( parent_bound_all(3,:) ) 1693 parent_bound_global(4) = MAXVAL( parent_bound_all(4,:) ) 1694 ENDIF 1695 ! 1696 !-- Broadcast the global parent-grid index bounds to all current child processes 1697 CALL MPI_BCAST( parent_bound_global, 4, MPI_INTEGER, 0, comm2d, ierr ) 1698 iplg = parent_bound_global(1) 1699 iprg = parent_bound_global(2) 1700 jpsg = parent_bound_global(3) 1701 jpng = parent_bound_global(4) 1702 WRITE( 9, "('pmci_map_child_grid_to_parent_grid. Global parent-grid index bounds iplg, iprg, jpsg, jpng: ',4(i4,2x))" ) & 1703 iplg, iprg, jpsg, jpng 1704 FLUSH( 9 ) 1705 1706 END SUBROUTINE pmci_map_child_grid_to_parent_grid 1707 1708 1709 !--------------------------------------------------------------------------------------------------! 1710 ! Description: 1711 ! ------------ 1712 !> @Todo: Missing subroutine description. 1713 !--------------------------------------------------------------------------------------------------! 1714 SUBROUTINE pmci_define_index_mapping 1715 ! 1716 !-- Precomputation of the mapping of the child- and parent-grid indices.
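The MINVAL/MAXVAL reduction above, applied on rank 0 to the gathered per-rank `parent_bound` records `(ipl, ipr, jps, jpn, rank)`, can be illustrated without MPI. A hedged Python sketch (names are illustrative, not from the PALM source):

```python
def global_parent_bounds(parent_bound_all):
    """Combine per-rank (ipl, ipr, jps, jpn, rank) records, as gathered on
    rank 0 in rank order, into global parent-grid index bounds, mirroring
    the MINVAL/MAXVAL step over parent_bound_all."""
    ipls, iprs, jpss, jpns, _ranks = zip(*parent_bound_all)
    return (min(ipls), max(iprs), min(jpss), max(jpns))

# Two child PE subdomains side by side in the x-direction
bounds = [(0, 7, 0, 15, 0), (8, 15, 0, 15, 1)]
print(global_parent_bounds(bounds))  # (0, 15, 0, 15)
```

The subsequent MPI_BCAST then distributes exactly these four numbers (`iplg`, `iprg`, `jpsg`, `jpng`) back to every child process.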
1717 1718 IMPLICIT NONE 1719 1720 INTEGER(iwp) :: i !< Child-grid index in the x-direction 1721 INTEGER(iwp) :: ii !< Parent-grid index in the x-direction 1722 INTEGER(iwp) :: istart !< 1723 INTEGER(iwp) :: ir !< 1724 INTEGER(iwp) :: iw !< Child-grid index limited to -1 <= iw <= nx+1 for wall_flags_total_0 1725 INTEGER(iwp) :: j !< Child-grid index in the y-direction 1726 INTEGER(iwp) :: jj !< Parent-grid index in the y-direction 1727 INTEGER(iwp) :: jstart !< 1728 INTEGER(iwp) :: jr !< 1729 INTEGER(iwp) :: jw !< Child-grid index limited to -1 <= jw <= ny+1 for wall_flags_total_0 1730 INTEGER(iwp) :: k !< Child-grid index in the z-direction 1731 INTEGER(iwp) :: kk !< Parent-grid index in the z-direction 1732 INTEGER(iwp) :: kstart !< 1733 INTEGER(iwp) :: kw !< Child-grid index limited to kw <= nzt+1 for wall_flags_total_0 1734 1735 REAL(wp) :: tolex !< Tolerance for grid-line matching in x-direction 1736 REAL(wp) :: toley !< Tolerance for grid-line matching in y-direction 1737 REAL(wp) :: tolez !< Tolerance for grid-line matching in z-direction 1738 1739 ! 1740 !-- Grid-line tolerances. 1741 tolex = tolefac * dx 1742 toley = tolefac * dy 1743 tolez = tolefac * dz(1) 1744 ! 1745 !-- Allocate child-grid work arrays for interpolation. 1746 igsr = NINT( pg%dx / dx, iwp ) 1747 jgsr = NINT( pg%dy / dy, iwp ) 1748 kgsr = NINT( pg%dzw(1) / dzw(1), iwp ) 1749 WRITE(9,"('igsr, jgsr, kgsr: ',3(i3,2x))") igsr, jgsr, kgsr 1750 FLUSH(9) 1751 ! 1752 !-- Determine index bounds for the parent-grid work arrays for interpolation and allocate them. 1753 CALL pmci_allocate_workarrays 1754 ! 1755 !-- Define the MPI-datatypes for parent-grid work array exchange between the PE-subdomains. 1756 CALL pmci_create_workarray_exchange_datatypes 1757 ! 1758 !-- First determine kcto and kctw which refer to the uppermost parent-grid levels below the child 1759 !-- top-boundary level. 
Note that these comparison tests are not round-off-error sensitive and 1760 !-- therefore tolerance buffering is not needed here. 1761 kk = 0 1762 DO WHILE ( pg%zu(kk) <= zu(nzt) ) 1763 kk = kk + 1 1764 ENDDO 1765 kcto = kk - 1 1766 1767 kk = 0 1768 DO WHILE ( pg%zw(kk) <= zw(nzt-1) ) 1769 kk = kk + 1 1770 ENDDO 1771 kctw = kk - 1 1772 1773 WRITE( 9, "('kcto, kctw = ', 2(i3,2x))" ) kcto, kctw 1774 FLUSH( 9 ) 1775 ! 1776 !-- In case of two-way coupling, check that the child domain is sufficiently large in terms of the 1777 !-- number of parent-grid cells covered. Otherwise anterpolation is not possible. 1778 IF ( nesting_mode == 'two-way') THEN 1779 CALL pmci_check_child_domain_size 1780 ENDIF 1781 1782 ALLOCATE( iflu(ipla:ipra) ) 1783 ALLOCATE( iflo(ipla:ipra) ) 1784 ALLOCATE( ifuu(ipla:ipra) ) 1785 ALLOCATE( ifuo(ipla:ipra) ) 1786 ALLOCATE( jflv(jpsa:jpna) ) 1787 ALLOCATE( jflo(jpsa:jpna) ) 1788 ALLOCATE( jfuv(jpsa:jpna) ) 1789 ALLOCATE( jfuo(jpsa:jpna) ) 1790 ALLOCATE( kflw(0:pg%nz+1) ) 1791 ALLOCATE( kflo(0:pg%nz+1) ) 1792 ALLOCATE( kfuw(0:pg%nz+1) ) 1793 ALLOCATE( kfuo(0:pg%nz+1) ) 1794 ALLOCATE( ijkfc_u(0:pg%nz+1,jpsa:jpna,ipla:ipra) ) 1795 ALLOCATE( ijkfc_v(0:pg%nz+1,jpsa:jpna,ipla:ipra) ) 1796 ALLOCATE( ijkfc_w(0:pg%nz+1,jpsa:jpna,ipla:ipra) ) 1797 ALLOCATE( ijkfc_s(0:pg%nz+1,jpsa:jpna,ipla:ipra) ) 1798 1799 ijkfc_u = 0 1800 ijkfc_v = 0 1801 ijkfc_w = 0 1802 ijkfc_s = 0 1803 ! 1804 !-- i-indices of u for each ii-index value 1805 istart = nxlg 1806 DO ii = ipla, ipra 1807 ! 1808 !-- The parent and child grid lines do always match in x, hence we use only the local 1809 !-- k,j-child-grid plane for the anterpolation. However, iflu still has to be stored separately 1810 !-- as these index bounds are passed as arguments to the interpolation and anterpolation 1811 !-- subroutines. Note that this comparison test is round-off-error sensitive and therefore 1812 !-- tolerance buffering is needed here. 
1813 i = istart 1814 DO WHILE ( pg%coord_x(ii) - coord_x(i) > tolex .AND. i < nxrg ) 1815 i = i + 1 1816 ENDDO 1817 iflu(ii) = MIN( MAX( i, nxlg ), nxrg ) 1818 ifuu(ii) = iflu(ii) 1819 istart = iflu(ii) 1820 ! 1821 !-- Print out the index bounds for checking and debugging purposes 1822 WRITE( 9, "('pmci_define_index_mapping, ii, iflu, ifuu: ', 3(i4,2x))" ) ii, iflu(ii), & 1823 ifuu(ii) 1824 FLUSH( 9 ) 1825 ENDDO 1826 WRITE( 9, * ) 1827 ! 1828 !-- i-indices of others for each ii-index value. Note that these comparison tests are not 1829 !-- round-off-error sensitive and therefore tolerance buffering is not needed here. 1830 istart = nxlg 1831 DO ii = ipla, ipra 1832 i = istart 1833 DO WHILE ( ( coord_x(i) + 0.5_wp * dx < pg%coord_x(ii) ) .AND. ( i < nxrg ) ) 1834 i = i + 1 1835 ENDDO 1836 iflo(ii) = MIN( MAX( i, nxlg ), nxrg ) 1837 ir = i 1838 DO WHILE ( ( coord_x(ir) + 0.5_wp * dx < pg%coord_x(ii) + pg%dx ) .AND. ( i < nxrg+1 ) ) 1839 i = i + 1 1840 ir = MIN( i, nxrg ) 1841 ENDDO 1842 ifuo(ii) = MIN( MAX( i-1, iflo(ii) ), nxrg ) 1843 istart = iflo(ii) 1844 ! 1845 !-- Print out the index bounds for checking and debugging purposes 1846 WRITE( 9, "('pmci_define_index_mapping, ii, iflo, ifuo: ', 3(i4,2x))" ) ii, iflo(ii), & 1847 ifuo(ii) 1848 FLUSH( 9 ) 1849 ENDDO 1850 WRITE( 9, * ) 1851 ! 1852 !-- j-indices of v for each jj-index value 1853 jstart = nysg 1854 DO jj = jpsa, jpna 1855 ! 1856 !-- The parent and child grid lines do always match in y, hence we use only the local 1857 !-- k,i-child-grid plane for the anterpolation. However, jcnv still has to be stored separately 1858 !-- as these index bounds are passed as arguments to the interpolation and anterpolation 1859 !-- subroutines. Note that this comparison test is round-off-error sensitive and therefore 1860 !-- tolerance buffering is needed here. 1861 j = jstart 1862 DO WHILE ( pg%coord_y(jj) - coord_y(j) > toley .AND. 
j < nyng ) 1863 j = j + 1 1864 ENDDO 1865 jflv(jj) = MIN( MAX( j, nysg ), nyng ) 1866 jfuv(jj) = jflv(jj) 1867 jstart = jflv(jj) 1868 ! 1869 !-- Print out the index bounds for checking and debugging purposes 1870 WRITE( 9, "('pmci_define_index_mapping, jj, jflv, jfuv: ', 3(i4,2x))" ) jj, jflv(jj), & 1871 jfuv(jj) 1872 FLUSH(9) 1873 ENDDO 1874 WRITE( 9, * ) 1875 ! 1876 !-- j-indices of others for each jj-index value 1877 !-- Note that these comparison tests are not round-off-error sensitive and therefore tolerance 1878 !-- buffering is not needed here. 1879 jstart = nysg 1880 DO jj = jpsa, jpna 1881 j = jstart 1882 DO WHILE ( ( coord_y(j) + 0.5_wp * dy < pg%coord_y(jj) ) .AND. ( j < nyng ) ) 1883 j = j + 1 1884 ENDDO 1885 jflo(jj) = MIN( MAX( j, nysg ), nyng ) 1886 jr = j 1887 DO WHILE ( ( coord_y(jr) + 0.5_wp * dy < pg%coord_y(jj) + pg%dy ) .AND. ( j < nyng+1 ) ) 1888 j = j + 1 1889 jr = MIN( j, nyng ) 1890 ENDDO 1891 jfuo(jj) = MIN( MAX( j-1, jflo(jj) ), nyng ) 1892 jstart = jflo(jj) 1893 ! 1894 !-- Print out the index bounds for checking and debugging purposes 1895 WRITE( 9, "('pmci_define_index_mapping, jj, jflo, jfuo: ', 3(i4,2x))" ) jj, jflo(jj), & 1896 jfuo(jj) 1897 FLUSH( 9 ) 1898 ENDDO 1899 WRITE( 9, * ) 1900 ! 1901 !-- k-indices of w for each kk-index value 1902 !-- Note that anterpolation index limits are needed also for the top boundary ghost cell level 1903 !-- because they are used also in the interpolation. 1904 kstart = 0 1905 kflw(0) = 0 1906 kfuw(0) = 0 1907 DO kk = 1, pg%nz+1 1908 ! 1909 !-- The parent and child grid lines do always match in z, hence we use only the local 1910 !-- j,i-child-grid plane for the anterpolation. However, kctw still has to be stored separately 1911 !-- as these index bounds are passed as arguments to the interpolation and anterpolation 1912 !-- subroutines. Note that this comparison test is round-off-error sensitive and therefore 1913 !-- tolerance buffering is needed here. 
1914 k = kstart 1915 DO WHILE ( ( pg%zw(kk) - zw(k) > tolez ) .AND. ( k < nzt+1 ) ) 1916 k = k + 1 1917 ENDDO 1918 kflw(kk) = MIN( MAX( k, 1 ), nzt + 1 ) 1919 kfuw(kk) = kflw(kk) 1920 kstart = kflw(kk) 1921 ! 1922 !-- Print out the index bounds for checking and debugging purposes 1923 WRITE( 9, "('pmci_define_index_mapping, kk, kflw, kfuw: ', 4(i4,2x), 2(e12.5,2x))" ) & 1924 kk, kflw(kk), kfuw(kk), nzt, pg%zu(kk), pg%zw(kk) 1925 FLUSH( 9 ) 1926 ENDDO 1927 WRITE( 9, * ) 1928 ! 1929 !-- k-indices of others for each kk-index value 1930 kstart = 0 1931 kflo(0) = 0 1932 kfuo(0) = 0 1933 ! 1934 !-- Note that anterpolation index limits are needed also for the top boundary ghost cell level 1935 !-- because they are used also in the interpolation. Note that these comparison tests are not 1936 !-- round-off-error sensitive and therefore tolerance buffering is not needed here. 1937 DO kk = 1, pg%nz+1 1938 k = kstart 1939 DO WHILE ( ( zu(k) < pg%zw(kk-1) ) .AND. ( k <= nzt ) ) 1940 k = k + 1 1941 ENDDO 1942 kflo(kk) = MIN( MAX( k, 1 ), nzt + 1 ) 1943 DO WHILE ( ( zu(k) < pg%zw(kk) ) .AND. ( k <= nzt+1 ) ) 1944 k = k + 1 1945 IF ( k > nzt + 1 ) EXIT ! This EXIT is to prevent zu(k) from flowing over. 1946 ENDDO 1947 kfuo(kk) = MIN( MAX( k-1, kflo(kk) ), nzt + 1 ) 1948 kstart = kflo(kk) 1949 ENDDO 1950 ! 1951 !-- Print out the index bounds for checking and debugging purposes 1952 DO kk = 1, pg%nz+1 1953 WRITE( 9, "('pmci_define_index_mapping, kk, kflo, kfuo: ', 4(i4,2x), 2(e12.5,2x))" ) & 1954 kk, kflo(kk), kfuo(kk), nzt, pg%zu(kk), pg%zw(kk) 1955 FLUSH( 9 ) 1956 ENDDO 1957 WRITE( 9, * ) 1958 ! 1959 !-- Precomputation of number of child-grid nodes inside parent-grid cells. Note that ii, jj, and kk 1960 !-- are parent-grid indices. This information is needed in the anterpolation. The indices for 1961 !-- wall_flags_total_0 (kw,jw,iw) must be limited to the range [-1,...,nx/ny/nzt+1] in order to 1962 !-- avoid zero values on the outer ghost nodes. 
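The triple nested loop that follows counts, for every parent-grid cell, the number of child-grid nodes whose topography flag marks them as fluid, clamping the `wall_flags_total_0` indices to the stated ranges. A small Python sketch of that clamp-and-count pattern (the dict of flags, the bit position, and all names are illustrative assumptions, not the PALM data structures):

```python
def count_fluid_nodes(flags, i_range, j_range, k_range, nx, ny, nzt, bit=0):
    """Count child-grid nodes inside one parent-grid cell whose flag bit is
    set, clamping indices to [-1, nx+1], [-1, ny+1] and k <= nzt+1 as the
    ijkfc loops do. Illustrative sketch of the anterpolation bookkeeping."""
    count = 0
    for i in i_range:
        iw = max(min(i, nx + 1), -1)
        for j in j_range:
            jw = max(min(j, ny + 1), -1)
            for k in k_range:
                kw = min(k, nzt + 1)
                if flags[(kw, jw, iw)] & (1 << bit):
                    count += 1
    return count

# 2 x 2 x 2 block of child nodes, all fluid except one wall node
flags = {(k, j, i): 1 for k in (0, 1) for j in (0, 1) for i in (0, 1)}
flags[(0, 0, 0)] = 0  # wall node: bit 0 cleared
print(count_fluid_nodes(flags, (0, 1), (0, 1), (0, 1), nx=1, ny=1, nzt=1))
```

In the Fortran code the same counting is done once per variable class (u, v, w, scalars), testing a different bit of `wall_flags_total_0` for each; the resulting `ijkfc_*` arrays are the divisors used in the anterpolation averages.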
1963 DO ii = ipla, ipra 1964 DO jj = jpsa, jpna 1965 DO kk = 0, pg%nz+1 1966 ! 1967 !-- u-component 1968 DO i = iflu(ii), ifuu(ii) 1969 iw = MAX( MIN( i, nx+1 ), -1 ) 1970 DO j = jflo(jj), jfuo(jj) 1971 jw = MAX( MIN( j, ny+1 ), -1 ) 1972 DO k = kflo(kk), kfuo(kk) 1973 kw = MIN( k, nzt+1 ) 1974 ijkfc_u(kk,jj,ii) = ijkfc_u(kk,jj,ii) & 1975 + MERGE( 1, 0, BTEST( wall_flags_total_0(kw,jw,iw), 1 ) ) 1976 ENDDO 1977 ENDDO 1978 ENDDO 1979 ! 1980 !-- v-component 1981 DO i = iflo(ii), ifuo(ii) 1982 iw = MAX( MIN( i, nx+1 ), -1 ) 1983 DO j = jflv(jj), jfuv(jj) 1984 jw = MAX( MIN( j, ny+1 ), -1 ) 1985 DO k = kflo(kk), kfuo(kk) 1986 kw = MIN( k, nzt+1 ) 1987 ijkfc_v(kk,jj,ii) = ijkfc_v(kk,jj,ii) & 1988 + MERGE( 1, 0, BTEST( wall_flags_total_0(kw,jw,iw), 2 ) ) 1989 ENDDO 1990 ENDDO 1991 ENDDO 1992 ! 1993 !-- Scalars 1994 DO i = iflo(ii), ifuo(ii) 1995 iw = MAX( MIN( i, nx+1 ), -1 ) 1996 DO j = jflo(jj), jfuo(jj) 1997 jw = MAX( MIN( j, ny+1 ), -1 ) 1998 DO k = kflo(kk), kfuo(kk) 1999 kw = MIN( k, nzt+1 ) 2000 ijkfc_s(kk,jj,ii) = ijkfc_s(kk,jj,ii) & 2001 + MERGE( 1, 0, BTEST( wall_flags_total_0(kw,jw,iw), 0 ) ) 2002 ENDDO 2003 ENDDO 2004 ENDDO 2005 ! 2006 !-- w-component 2007 DO i = iflo(ii), ifuo(ii) 2008 iw = MAX( MIN( i, nx+1 ), -1 ) 2009 DO j = jflo(jj), jfuo(jj) 2010 jw = MAX( MIN( j, ny+1 ), -1 ) 2011 DO k = kflw(kk), kfuw(kk) 2012 kw = MIN( k, nzt+1 ) 2013 ijkfc_w(kk,jj,ii) = ijkfc_w(kk,jj,ii) & 2014 + MERGE( 1, 0, BTEST( wall_flags_total_0(kw,jw,iw), 3 ) ) 2015 ENDDO 2016 ENDDO 2017 ENDDO 2018 2019 ENDDO ! kk 2020 ENDDO ! jj 2021 ENDDO ! ii 2022 2023 END SUBROUTINE pmci_define_index_mapping 2024 2025 2026 2027 !--------------------------------------------------------------------------------------------------! 2028 ! Description: 2029 ! ------------ 2030 !> @Todo: Missing subroutine description. 2031 !--------------------------------------------------------------------------------------------------! 2032 SUBROUTINE pmci_check_child_domain_size 2033 ! 
2034 !-- Check if the child domain is so small, in terms of the number of parent-grid cells covered, that 2035 !-- the anterpolation buffers would fill the whole domain, making anterpolation impossible. Also, 2036 !-- check that anterpolation_buffer_width is not so large that it prevents anterpolation. 2037 IMPLICIT NONE 2038 2039 ! 2040 !-- First x-direction 2041 IF ( iplg + 3 + anterpolation_buffer_width > iprg - 3 - anterpolation_buffer_width ) THEN 2042 IF ( iprg - iplg + 1 < 7 ) THEN 2043 ! 2044 !-- Error 2045 WRITE( message_string, * ) 'child domain too narrow for anterpolation in x-direction' 2046 CALL message( 'pmci_check_child_domain_size', 'PA0652', 3, 2, 0, 6, 0 ) 2047 ELSE IF ( iprg - iplg + 1 < 11 ) THEN 2048 ! 2049 !-- Warning 2050 WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to 0' 2051 CALL message( 'pmci_check_child_domain_size', 'PA0653', 0, 1, 0, 6, 0 ) 2052 anterpolation_buffer_width = 0 1460 2053 ELSE 1461 yexs = 0.0_wp 1462 jauxs = 1 1463 ENDIF 1464 yps = coord_y(nys) - yexs 1465 DO jp = 0, pg%ny 1466 IF ( pg%coord_y(jp) + 0.5_wp * pg%dy >= yps - toley ) THEN 1467 jps = MAX( 0, jp ) 1468 EXIT 1469 ENDIF 1470 ENDDO 1471 ! 1472 !-- North boundary. 1473 !-- Extension by two parent-grid cells behind the boundary, see the comment block above. 1474 IF ( bc_dirichlet_n ) THEN 1475 yexn = 2.0_wp * pg%dy 1476 jauxn = 0 2054 ELSE 2055 ! 2056 !-- Informative message 2057 WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to ' // & 2058 'default value 2' 2059 CALL message( 'pmci_check_child_domain_size', 'PA0654', 0, 0, 0, 6, 0 ) 2060 anterpolation_buffer_width = 2 2061 ENDIF 2062 ENDIF 2063 ! 2064 !-- Then y-direction 2065 IF ( jpsg + 3 + anterpolation_buffer_width > jpng - 3 - anterpolation_buffer_width ) THEN 2066 IF ( jpng - jpsg + 1 < 7 ) THEN 2067 !
2067 !-- Error 2068 WRITE( message_string, * ) 'child domain too narrow for anterpolation in y-direction' 2069 CALL message( 'pmci_check_child_domain_size', 'PA0652', 3, 2, 0, 6, 0 ) 2070 ELSE IF ( jpng - jpsg + 1 < 11 ) THEN 2071 ! 2072 !-- Warning 2073 WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to 0' 2074 CALL message( 'pmci_check_child_domain_size', 'PA0653', 0, 1, 0, 6, 0 ) 2075 anterpolation_buffer_width = 0 1477 2076 ELSE 1478 yexn = 0.0_wp 1479 jauxn = 1 1480 ENDIF 1481 ypn = coord_y(nyn+1) + yexn 1482 DO jp = pg%ny, 0 , -1 1483 IF ( pg%coord_y(jp) + 0.5_wp * pg%dy <= ypn + toley ) THEN 1484 jpn = MIN( pg%ny, MAX( jps, jp ) ) 1485 EXIT 1486 ENDIF 1487 ENDDO 1488 ! 1489 !-- Make sure that the indexing is contiguous (no gaps, no overlaps). 1490 !-- This is a safety measure mainly for cases with high grid-spacing 1491 !-- ratio and narrow child subdomains. 1492 IF ( pdims(1) > 1 ) THEN 1493 IF ( nxl == 0 ) THEN 1494 CALL MPI_SEND( ipr, 1, MPI_INTEGER, pright, 717, comm2d, ierr ) 1495 ELSE IF ( nxr == nx ) THEN 1496 CALL MPI_RECV( ijaux, 1, MPI_INTEGER, pleft, 717, comm2d, status, ierr ) 1497 ipl = ijaux + 1 1498 ELSE 1499 CALL MPI_SEND( ipr, 1, MPI_INTEGER, pright, 717, comm2d, ierr ) 1500 CALL MPI_RECV( ijaux, 1, MPI_INTEGER, pleft, 717, comm2d, status, ierr ) 1501 ipl = ijaux + 1 1502 ENDIF 1503 ENDIF 1504 1505 IF ( pdims(2) > 1 ) THEN 1506 IF ( nys == 0 ) THEN 1507 CALL MPI_SEND( jpn, 1, MPI_INTEGER, pnorth, 719, comm2d, ierr ) 1508 ELSE IF ( nyn == ny ) THEN 1509 CALL MPI_RECV( ijaux, 1, MPI_INTEGER, psouth, 719, comm2d, status, ierr ) 1510 jps = ijaux + 1 1511 ELSE 1512 CALL MPI_SEND( jpn, 1, MPI_INTEGER, pnorth, 719, comm2d, ierr ) 1513 CALL MPI_RECV( ijaux, 1, MPI_INTEGER, psouth, 719, comm2d, status, ierr ) 1514 jps = ijaux + 1 1515 ENDIF 1516 ENDIF 1517 1518 WRITE(9,"('pmci_map_child_grid_to_parent_grid. 
Parent-grid array bounds: ',4(i4,2x))") & 1519 ipl, ipr, jps, jpn 1520 FLUSH(9) 1521 1522 parent_bound(1) = ipl 1523 parent_bound(2) = ipr 1524 parent_bound(3) = jps 1525 parent_bound(4) = jpn 1526 parent_bound(5) = myid 1527 ! 1528 !-- The following auxiliary index bounds are used for allocating index mapping and 1529 !-- some other auxiliary arrays. 1530 ipla = ipl - iauxl 1531 ipra = ipr + iauxr 1532 jpsa = jps - jauxs 1533 jpna = jpn + jauxn 1534 ! 1535 !-- The index-bounds parent_bound of all subdomains of the current child domain 1536 !-- must be sent to the parent in order for the parent to create the index list. 1537 !-- For this reason, the parent_bound arrays are packed together in single 1538 !-- array parent_bound_all using MPI_GATHER. 1539 !-- Note that MPI_Gather receives data from all processes in the rank order 1540 !-- This fact is exploited in creating the index list in pmci_create_index_list. 1541 CALL MPI_GATHER( parent_bound, 5, MPI_INTEGER, parent_bound_all, 5, & 1542 MPI_INTEGER, 0, comm2d, ierr ) 1543 1544 IF ( myid == 0 ) THEN 1545 size_of_array(1) = SIZE( parent_bound_all, 1 ) 1546 size_of_array(2) = SIZE( parent_bound_all, 2 ) 1547 CALL pmc_send_to_parent( size_of_array, 2, 0, 40, ierr ) 1548 CALL pmc_send_to_parent( parent_bound_all, SIZE( parent_bound_all ), 0, 41, ierr ) 1549 ! 1550 !-- Determine the global parent-grid index bounds 1551 parent_bound_global(1) = MINVAL( parent_bound_all(1,:) ) 1552 parent_bound_global(2) = MAXVAL( parent_bound_all(2,:) ) 1553 parent_bound_global(3) = MINVAL( parent_bound_all(3,:) ) 1554 parent_bound_global(4) = MAXVAL( parent_bound_all(4,:) ) 1555 ENDIF 1556 ! 
1557 !-- Broadcast the global parent-grid index bounds to all current child processes 1558 CALL MPI_BCAST( parent_bound_global, 4, MPI_INTEGER, 0, comm2d, ierr ) 1559 iplg = parent_bound_global(1) 1560 iprg = parent_bound_global(2) 1561 jpsg = parent_bound_global(3) 1562 jpng = parent_bound_global(4) 1563 WRITE( 9, "('pmci_map_child_grid_to_parent_grid. Global parent-grid index bounds iplg, iprg, jpsg, jpng: ',4(i4,2x))" ) & 1564 iplg, iprg, jpsg, jpng 1565 FLUSH( 9 ) 1566 1567 END SUBROUTINE pmci_map_child_grid_to_parent_grid 1568 1569 1570 1571 SUBROUTINE pmci_define_index_mapping 1572 ! 1573 !-- Precomputation of the mapping of the child- and parent-grid indices. 1574 1575 IMPLICIT NONE 1576 1577 INTEGER(iwp) :: i !< Child-grid index in the x-direction 1578 INTEGER(iwp) :: ii !< Parent-grid index in the x-direction 1579 INTEGER(iwp) :: istart !< 1580 INTEGER(iwp) :: ir !< 1581 INTEGER(iwp) :: iw !< Child-grid index limited to -1 <= iw <= nx+1 for wall_flags_total_0 1582 INTEGER(iwp) :: j !< Child-grid index in the y-direction 1583 INTEGER(iwp) :: jj !< Parent-grid index in the y-direction 1584 INTEGER(iwp) :: jstart !< 1585 INTEGER(iwp) :: jr !< 1586 INTEGER(iwp) :: jw !< Child-grid index limited to -1 <= jw <= ny+1 for wall_flags_total_0 1587 INTEGER(iwp) :: k !< Child-grid index in the z-direction 1588 INTEGER(iwp) :: kk !< Parent-grid index in the z-direction 1589 INTEGER(iwp) :: kstart !< 1590 INTEGER(iwp) :: kw !< Child-grid index limited to kw <= nzt+1 for wall_flags_total_0 1591 1592 REAL(wp) :: tolex !< Tolerance for grid-line matching in x-direction 1593 REAL(wp) :: toley !< Tolerance for grid-line matching in y-direction 1594 REAL(wp) :: tolez !< Tolerance for grid-line matching in z-direction 1595 1596 ! 1597 !-- Grid-line tolerances. 2077 ! 
2078 !-- Informative message 2079 WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to ' // & 2080 'default value 2' 2081 CALL message( 'pmci_check_child_domain_size', 'PA0654', 0, 0, 0, 6, 0 ) 2082 anterpolation_buffer_width = 2 2083 ENDIF 2084 ENDIF 2085 ! 2086 !-- Finally z-direction 2087 IF ( kctw - 1 - anterpolation_buffer_width < 1 ) THEN 2088 IF ( kctw - 1 < 1 ) THEN 2089 ! 2090 !-- Error 2091 WRITE( message_string, * ) 'child domain too shallow for anterpolation in z-direction' 2092 CALL message( 'pmci_check_child_domain_size', 'PA0652', 3, 2, 0, 6, 0 ) 2093 ELSE IF ( kctw - 3 < 1 ) THEN 2094 ! 2095 !-- Warning 2096 WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to 0' 2097 CALL message( 'pmci_check_child_domain_size', 'PA0653', 0, 1, 0, 6, 0 ) 2098 anterpolation_buffer_width = 0 2099 ELSE 2100 ! 2101 !-- Informative message 2102 WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to ' // & 2103 'default value 2' 2104 CALL message( 'pmci_check_child_domain_size', 'PA0654', 0, 0, 0, 6, 0 ) 2105 anterpolation_buffer_width = 2 2106 ENDIF 2107 ENDIF 2108 2109 END SUBROUTINE pmci_check_child_domain_size 2110 2111 2112 !--------------------------------------------------------------------------------------------------! 2113 ! Description: 2114 ! ------------ 2115 !> @Todo: Missing subroutine description. 2116 !--------------------------------------------------------------------------------------------------! 2117 SUBROUTINE pmci_allocate_workarrays 2118 ! 2119 !-- Allocate parent-grid work-arrays for interpolation 2120 IMPLICIT NONE 2121 2122 ! 
2123 !-- Determine and store the PE-subdomain dependent index bounds 2124 IF ( bc_dirichlet_l ) THEN 2125 iplw = ipl + 1 2126 ELSE 2127 iplw = ipl - 1 2128 ENDIF 2129 2130 IF ( bc_dirichlet_r ) THEN 2131 iprw = ipr - 1 2132 ELSE 2133 iprw = ipr + 1 2134 ENDIF 2135 2136 IF ( bc_dirichlet_s ) THEN 2137 jpsw = jps + 1 2138 ELSE 2139 jpsw = jps - 1 2140 ENDIF 2141 2142 IF ( bc_dirichlet_n ) THEN 2143 jpnw = jpn - 1 2144 ELSE 2145 jpnw = jpn + 1 2146 ENDIF 2147 ! 2148 !-- Left and right boundaries. 2149 ALLOCATE( workarr_lr(0:pg%nz+1,jpsw:jpnw,0:2) ) 2150 ! 2151 !-- South and north boundaries. 2152 ALLOCATE( workarr_sn(0:pg%nz+1,0:2,iplw:iprw) ) 2153 ! 2154 !-- Top boundary. 2155 ALLOCATE( workarr_t(0:2,jpsw:jpnw,iplw:iprw) ) 2156 2157 END SUBROUTINE pmci_allocate_workarrays 2158 2159 2160 !--------------------------------------------------------------------------------------------------! 2161 ! Description: 2162 ! ------------ 2163 !> @Todo: Missing subroutine description. 2164 !--------------------------------------------------------------------------------------------------! 2165 SUBROUTINE pmci_create_workarray_exchange_datatypes 2166 ! 2167 !-- Define specific MPI types for workarr-exchange. 2168 IMPLICIT NONE 2169 2170 ! 2171 !-- For the left and right boundaries 2172 CALL MPI_TYPE_VECTOR( 3, pg%nz+2, (jpnw-jpsw+1)*(pg%nz+2), MPI_REAL, workarr_lr_exchange_type, & 2173 ierr ) 2174 CALL MPI_TYPE_COMMIT( workarr_lr_exchange_type, ierr ) 2175 ! 2176 !-- For the south and north boundaries 2177 CALL MPI_TYPE_VECTOR( 1, 3*(pg%nz+2), 3*(pg%nz+2), MPI_REAL, workarr_sn_exchange_type, ierr ) 2178 CALL MPI_TYPE_COMMIT( workarr_sn_exchange_type, ierr ) 2179 ! 2180 !-- For the top-boundary x-slices 2181 CALL MPI_TYPE_VECTOR( iprw-iplw+1, 3, 3*(jpnw-jpsw+1), MPI_REAL, workarr_t_exchange_type_x, & 2182 ierr ) 2183 CALL MPI_TYPE_COMMIT( workarr_t_exchange_type_x, ierr ) 2184 ! 
2185 !-- For the top-boundary y-slices 2186 CALL MPI_TYPE_VECTOR( 1, 3*(jpnw-jpsw+1), 3*(jpnw-jpsw+1), MPI_REAL, & 2187 workarr_t_exchange_type_y, ierr ) 2188 CALL MPI_TYPE_COMMIT( workarr_t_exchange_type_y, ierr ) 2189 2190 END SUBROUTINE pmci_create_workarray_exchange_datatypes 2191 2192 2193 !--------------------------------------------------------------------------------------------------! 2194 ! Description: 2195 ! ------------ 2196 !> @Todo: Missing subroutine description. 2197 !--------------------------------------------------------------------------------------------------! 2198 SUBROUTINE pmci_check_grid_matching 2199 ! 2200 !-- Check that the grid lines of child and parent do match. 2201 !-- Also check that the child subdomain width is not smaller than the parent grid spacing in the 2202 !-- respective direction. 2203 IMPLICIT NONE 2204 2205 INTEGER(iwp) :: non_int_gsr_x = 0 !< Flag for non-integer grid-spacing ratio in x-direction 2206 INTEGER(iwp) :: non_int_gsr_y = 0 !< Flag for non-integer grid-spacing ratio in y-direction 2207 INTEGER(iwp) :: non_int_gsr_z = 0 !< Flag for non-integer grid-spacing ratio in z-direction 2208 INTEGER(iwp) :: non_matching_height = 0 !< Flag for non-matching child-domain height 2209 INTEGER(iwp) :: non_matching_lower_left_corner = 0 !< Flag for non-matching lower left corner 2210 INTEGER(iwp) :: non_matching_upper_right_corner = 0 !< Flag for non-matching upper right corner 2211 INTEGER(iwp) :: too_narrow_pesd_x = 0 !< Flag for too narrow pe-subdomain in x-direction 2212 INTEGER(iwp) :: too_narrow_pesd_y = 0 !< Flag for too narrow pe-subdomain in y-direction 2213 2214 REAL(wp) :: child_ngp_x_l !< Number of gridpoints in child subdomain in x-direction 2215 !< converted to REAL(wp) 2216 REAL(wp) :: child_ngp_y_l !< Number of gridpoints in child subdomain in y-direction 2217 !< converted to REAL(wp) 2218 REAL(wp) :: gridline_mismatch_x !< Mismatch between the parent and child gridlines in the x-direction 2219 REAL(wp) ::
gridline_mismatch_y !< Mismatch between the parent and child gridlines in the y-direction 2220 REAL(wp) :: gsr_mismatch_x !< Deviation of the grid-spacing ratio from the nearest integer value, 2221 !< the x-direction 2222 REAL(wp) :: gsr_mismatch_y !< Deviation of the grid-spacing ratio from the nearest integer value, the 2223 !< y-direction 2224 REAL(wp) :: tolex !< Tolerance for grid-line matching in x-direction 2225 REAL(wp) :: toley !< Tolerance for grid-line matching in y-direction 2226 REAL(wp) :: tolez !< Tolerance for grid-line matching in z-direction 2227 REAL(wp) :: upper_right_coord_x !< X-coordinate of the upper right corner of the child domain 2228 REAL(wp) :: upper_right_coord_y !< Y-coordinate of the upper right corner of the child domain 2229 2230 2231 IF ( myid == 0 ) THEN 2232 1598 2233 tolex = tolefac * dx 1599 2234 toley = tolefac * dy 1600 2235 tolez = tolefac * dz(1) 1601 2236 ! 1602 !-- Allocate child-grid work arrays for interpolation. 1603 igsr = NINT( pg%dx / dx, iwp ) 1604 jgsr = NINT( pg%dy / dy, iwp ) 1605 kgsr = NINT( pg%dzw(1) / dzw(1), iwp ) 1606 WRITE(9,"('igsr, jgsr, kgsr: ',3(i3,2x))") igsr, jgsr, kgsr 1607 FLUSH(9) 1608 ! 1609 !-- Determine index bounds for the parent-grid work arrays for 1610 !-- interpolation and allocate them. 1611 CALL pmci_allocate_workarrays 1612 ! 1613 !-- Define the MPI-datatypes for parent-grid work array 1614 !-- exchange between the PE-subdomains. 1615 CALL pmci_create_workarray_exchange_datatypes 1616 ! 1617 !-- First determine kcto and kctw which refer to the uppermost 1618 !-- parent-grid levels below the child top-boundary level. 1619 !-- Note that these comparison tests are not round-off-error 1620 !-- sensitive and therefore tolerance buffering is not needed here. 
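The kcto/kctw determination that follows scans upward until the parent level first exceeds the child-domain top and then steps back one level. A Python sketch of that DO WHILE scan (illustrative names; a bounds guard is added here that the Fortran loop, which relies on the parent grid always reaching above the child top, does not need):

```python
def uppermost_parent_level(parent_z, child_top):
    """Index of the uppermost parent-grid level whose height does not
    exceed the child-domain top. Mirrors the kcto/kctw DO WHILE scans;
    an illustrative sketch, not the PALM source."""
    kk = 0
    while kk < len(parent_z) and parent_z[kk] <= child_top:
        kk += 1
    return kk - 1

# Parent levels every 50 m; a child top at 230 m lies between levels 4 and 5,
# so the uppermost level at or below it is index 4 (z = 200 m)
print(uppermost_parent_level([k * 50.0 for k in range(10)], 230.0))
```

As the source comment notes, this comparison is not round-off sensitive, so no tolerance buffering is applied, unlike the gridline-matching searches for iflu/jflv/kflw.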
1621 kk = 0 1622 DO WHILE ( pg%zu(kk) <= zu(nzt) ) 1623 kk = kk + 1 1624 ENDDO 1625 kcto = kk - 1 1626 1627 kk = 0 1628 DO WHILE ( pg%zw(kk) <= zw(nzt-1) ) 1629 kk = kk + 1 1630 ENDDO 1631 kctw = kk - 1 1632 1633 WRITE( 9, "('kcto, kctw = ', 2(i3,2x))" ) kcto, kctw 1634 FLUSH( 9 ) 1635 ! 1636 !-- In case of two-way coupling, check that the child domain is sufficiently 1637 !-- large in terms of the number of parent-grid cells covered. Otherwise 1638 !-- anterpolation is not possible. 1639 IF ( nesting_mode == 'two-way') THEN 1640 CALL pmci_check_child_domain_size 1641 ENDIF 1642 1643 ALLOCATE( iflu(ipla:ipra) ) 1644 ALLOCATE( iflo(ipla:ipra) ) 1645 ALLOCATE( ifuu(ipla:ipra) ) 1646 ALLOCATE( ifuo(ipla:ipra) ) 1647 ALLOCATE( jflv(jpsa:jpna) ) 1648 ALLOCATE( jflo(jpsa:jpna) ) 1649 ALLOCATE( jfuv(jpsa:jpna) ) 1650 ALLOCATE( jfuo(jpsa:jpna) ) 1651 ALLOCATE( kflw(0:pg%nz+1) ) 1652 ALLOCATE( kflo(0:pg%nz+1) ) 1653 ALLOCATE( kfuw(0:pg%nz+1) ) 1654 ALLOCATE( kfuo(0:pg%nz+1) ) 1655 ALLOCATE( ijkfc_u(0:pg%nz+1,jpsa:jpna,ipla:ipra) ) 1656 ALLOCATE( ijkfc_v(0:pg%nz+1,jpsa:jpna,ipla:ipra) ) 1657 ALLOCATE( ijkfc_w(0:pg%nz+1,jpsa:jpna,ipla:ipra) ) 1658 ALLOCATE( ijkfc_s(0:pg%nz+1,jpsa:jpna,ipla:ipra) ) 1659 1660 ijkfc_u = 0 1661 ijkfc_v = 0 1662 ijkfc_w = 0 1663 ijkfc_s = 0 1664 ! 1665 !-- i-indices of u for each ii-index value 1666 istart = nxlg 1667 DO ii = ipla, ipra 1668 ! 1669 !-- The parent and child grid lines do always match in x, hence we 1670 !-- use only the local k,j-child-grid plane for the anterpolation. 1671 !-- However, icru still has to be stored separately as these index bounds 1672 !-- are passed as arguments to the interpolation and anterpolation 1673 !-- subroutines. 1674 !-- Note that this comparison test is round-off-error sensitive 1675 !-- and therefore tolerance buffering is needed here. 1676 i = istart 1677 DO WHILE ( pg%coord_x(ii) - coord_x(i) > tolex .AND. 
i < nxrg ) 1678 i = i + 1 1679 ENDDO 1680 iflu(ii) = MIN( MAX( i, nxlg ), nxrg ) 1681 ifuu(ii) = iflu(ii) 1682 istart = iflu(ii) 1683 ! 1684 !-- Print out the index bounds for checking and debugging purposes 1685 WRITE( 9, "('pmci_define_index_mapping, ii, iflu, ifuu: ', 3(i4,2x))" ) & 1686 ii, iflu(ii), ifuu(ii) 1687 FLUSH( 9 ) 1688 ENDDO 1689 WRITE( 9, * ) 1690 ! 1691 !-- i-indices of others for each ii-index value. 1692 !-- Note that these comparison tests are not round-off-error 1693 !-- sensitive and therefore tolerance buffering is not needed here. 1694 istart = nxlg 1695 DO ii = ipla, ipra 1696 i = istart 1697 DO WHILE ( ( coord_x(i) + 0.5_wp * dx < pg%coord_x(ii) ) .AND. ( i < nxrg ) ) 1698 i = i + 1 1699 ENDDO 1700 iflo(ii) = MIN( MAX( i, nxlg ), nxrg ) 1701 ir = i 1702 DO WHILE ( ( coord_x(ir) + 0.5_wp * dx < pg%coord_x(ii) + pg%dx ) .AND. ( i < nxrg+1 ) ) 1703 i = i + 1 1704 ir = MIN( i, nxrg ) 1705 ENDDO 1706 ifuo(ii) = MIN( MAX( i-1, iflo(ii) ), nxrg ) 1707 istart = iflo(ii) 1708 ! 1709 !-- Print out the index bounds for checking and debugging purposes 1710 WRITE( 9, "('pmci_define_index_mapping, ii, iflo, ifuo: ', 3(i4,2x))" ) & 1711 ii, iflo(ii), ifuo(ii) 1712 FLUSH( 9 ) 1713 ENDDO 1714 WRITE( 9, * ) 1715 ! 1716 !-- j-indices of v for each jj-index value 1717 jstart = nysg 1718 DO jj = jpsa, jpna 1719 ! 1720 !-- The parent and child grid lines do always match in y, hence we 1721 !-- use only the local k,i-child-grid plane for the anterpolation. 1722 !-- However, jcnv still has to be stored separately as these index bounds 1723 !-- are passed as arguments to the interpolation and anterpolation 1724 !-- subroutines. 1725 !-- Note that this comparison test is round-off-error sensitive 1726 !-- and therefore tolerance buffering is needed here. 1727 j = jstart 1728 DO WHILE ( pg%coord_y(jj) - coord_y(j) > toley .AND. j < nyng ) 1729 j = j + 1 1730 ENDDO 1731 jflv(jj) = MIN( MAX( j, nysg ), nyng ) 1732 jfuv(jj) = jflv(jj) 1733 jstart = jflv(jj) 1734 ! 
!--      Print out the index bounds for checking and debugging purposes
         WRITE( 9, "('pmci_define_index_mapping, jj, jflv, jfuv: ', 3(i4,2x))" )                   &
                jj, jflv(jj), jfuv(jj)
         FLUSH( 9 )
      ENDDO
      WRITE( 9, * )
!
!--   j-indices of others for each jj-index value.
!--   Note that these comparison tests are not round-off-error sensitive and therefore tolerance
!--   buffering is not needed here.
      jstart = nysg
      DO  jj = jpsa, jpna
         j = jstart
         DO WHILE ( ( coord_y(j) + 0.5_wp * dy < pg%coord_y(jj) )  .AND.  ( j < nyng ) )
            j = j + 1
         ENDDO
         jflo(jj) = MIN( MAX( j, nysg ), nyng )
         jr = j
         DO WHILE ( ( coord_y(jr) + 0.5_wp * dy < pg%coord_y(jj) + pg%dy )  .AND.  ( j < nyng+1 ) )
            j  = j + 1
            jr = MIN( j, nyng )
         ENDDO
         jfuo(jj) = MIN( MAX( j-1, jflo(jj) ), nyng )
         jstart   = jflo(jj)
!
!--      Print out the index bounds for checking and debugging purposes
         WRITE( 9, "('pmci_define_index_mapping, jj, jflo, jfuo: ', 3(i4,2x))" )                   &
                jj, jflo(jj), jfuo(jj)
         FLUSH( 9 )
      ENDDO
      WRITE( 9, * )
!
!--   k-indices of w for each kk-index value.
!--   Note that anterpolation index limits are also needed for the top-boundary ghost-cell level
!--   because they are used in the interpolation as well.
      kstart  = 0
      kflw(0) = 0
      kfuw(0) = 0
      DO  kk = 1, pg%nz+1
!
!--      The parent and child grid lines always match in z, hence only the local j,i-child-grid
!--      plane is needed for the anterpolation. However, kflw and kfuw still have to be stored
!--      separately because these index bounds are passed as arguments to the interpolation and
!--      anterpolation subroutines. Note that this comparison test is round-off-error sensitive,
!--      which is why tolerance buffering is needed here.
         k = kstart
         DO WHILE ( ( pg%zw(kk) - zw(k) > tolez )  .AND.  ( k < nzt+1 ) )
            k = k + 1
         ENDDO
         kflw(kk) = MIN( MAX( k, 1 ), nzt + 1 )
         kfuw(kk) = kflw(kk)
         kstart   = kflw(kk)
!
!--      Print out the index bounds for checking and debugging purposes
         WRITE( 9, "('pmci_define_index_mapping, kk, kflw, kfuw: ', 4(i4,2x), 2(e12.5,2x))" )      &
                kk, kflw(kk), kfuw(kk), nzt, pg%zu(kk), pg%zw(kk)
         FLUSH( 9 )
      ENDDO
      WRITE( 9, * )
!
!--   k-indices of others for each kk-index value
      kstart  = 0
      kflo(0) = 0
      kfuo(0) = 0
!
!--   Note that anterpolation index limits are also needed for the top-boundary ghost-cell level
!--   because they are used in the interpolation as well.
!--   Note that these comparison tests are not round-off-error sensitive and therefore tolerance
!--   buffering is not needed here.
      DO  kk = 1, pg%nz+1
         k = kstart
         DO WHILE ( ( zu(k) < pg%zw(kk-1) )  .AND.  ( k <= nzt ) )
            k = k + 1
         ENDDO
         kflo(kk) = MIN( MAX( k, 1 ), nzt + 1 )
         DO WHILE ( ( zu(k) < pg%zw(kk) )  .AND.  ( k <= nzt+1 ) )
            k = k + 1
            IF ( k > nzt + 1 )  EXIT  ! This EXIT prevents the index of zu(k) from overflowing.
         ENDDO
         kfuo(kk) = MIN( MAX( k-1, kflo(kk) ), nzt + 1 )
         kstart   = kflo(kk)
      ENDDO
!
!--   Print out the index bounds for checking and debugging purposes
      DO  kk = 1, pg%nz+1
         WRITE( 9, "('pmci_define_index_mapping, kk, kflo, kfuo: ', 4(i4,2x), 2(e12.5,2x))" )      &
                kk, kflo(kk), kfuo(kk), nzt, pg%zu(kk), pg%zw(kk)
         FLUSH( 9 )
      ENDDO
      WRITE( 9, * )
!
!--   Precomputation of the number of child-grid nodes inside parent-grid cells.
!--   Note that ii, jj, and kk are parent-grid indices. This information is needed in the
!--   anterpolation. The indices for wall_flags_total_0 (kw,jw,iw) must be limited to the range
!--   [-1,...,nx/ny/nzt+1] in order to avoid zero values on the outer ghost nodes.
      DO  ii = ipla, ipra
         DO  jj = jpsa, jpna
            DO  kk = 0, pg%nz+1
!
!--            u-component
               DO  i = iflu(ii), ifuu(ii)
                  iw = MAX( MIN( i, nx+1 ), -1 )
                  DO  j = jflo(jj), jfuo(jj)
                     jw = MAX( MIN( j, ny+1 ), -1 )
                     DO  k = kflo(kk), kfuo(kk)
                        kw = MIN( k, nzt+1 )
                        ijkfc_u(kk,jj,ii) = ijkfc_u(kk,jj,ii)                                      &
                             + MERGE( 1, 0, BTEST( wall_flags_total_0(kw,jw,iw), 1 ) )
                     ENDDO
                  ENDDO
               ENDDO
!
!--            v-component
               DO  i = iflo(ii), ifuo(ii)
                  iw = MAX( MIN( i, nx+1 ), -1 )
                  DO  j = jflv(jj), jfuv(jj)
                     jw = MAX( MIN( j, ny+1 ), -1 )
                     DO  k = kflo(kk), kfuo(kk)
                        kw = MIN( k, nzt+1 )
                        ijkfc_v(kk,jj,ii) = ijkfc_v(kk,jj,ii)                                      &
                             + MERGE( 1, 0, BTEST( wall_flags_total_0(kw,jw,iw), 2 ) )
                     ENDDO
                  ENDDO
               ENDDO
!
!--            scalars
               DO  i = iflo(ii), ifuo(ii)
                  iw = MAX( MIN( i, nx+1 ), -1 )
                  DO  j = jflo(jj), jfuo(jj)
                     jw = MAX( MIN( j, ny+1 ), -1 )
                     DO  k = kflo(kk), kfuo(kk)
                        kw = MIN( k, nzt+1 )
                        ijkfc_s(kk,jj,ii) = ijkfc_s(kk,jj,ii)                                      &
                             + MERGE( 1, 0, BTEST( wall_flags_total_0(kw,jw,iw), 0 ) )
                     ENDDO
                  ENDDO
               ENDDO
!
!--            w-component
               DO  i = iflo(ii), ifuo(ii)
                  iw = MAX( MIN( i, nx+1 ), -1 )
                  DO  j = jflo(jj), jfuo(jj)
                     jw = MAX( MIN( j, ny+1 ), -1 )
                     DO  k = kflw(kk), kfuw(kk)
                        kw = MIN( k, nzt+1 )
                        ijkfc_w(kk,jj,ii) = ijkfc_w(kk,jj,ii)                                      &
                             + MERGE( 1, 0, BTEST( wall_flags_total_0(kw,jw,iw), 3 ) )
                     ENDDO
                  ENDDO
               ENDDO

            ENDDO  ! kk
         ENDDO  ! jj
      ENDDO  ! ii

   END SUBROUTINE pmci_define_index_mapping



   SUBROUTINE pmci_check_child_domain_size
!
!--   Check that the child domain is not too small in terms of the number of parent-grid cells
!--   covered; otherwise the anterpolation buffers would fill the whole domain and anterpolation
!--   would not be possible. Also check that anterpolation_buffer_width is not so large that it
!--   prevents anterpolation.
      IMPLICIT NONE

!
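Reviewer note on the index searches above: each mapping loop advances a child-grid index monotonically until the child coordinate matches the parent coordinate within a round-off tolerance. A minimal Python sketch of that tolerance-buffered search (invented helper names, not PALM code):

```python
# Sketch of the tolerance-buffered DO WHILE search in pmci_define_index_mapping.
# parent_coord corresponds to pg%coord_x(ii), child_coords to coord_x(:),
# istart to the running start index, and tol to tolex (= tolefac * dx).

def matching_child_index(parent_coord, child_coords, istart, tol):
    """Return the first index i >= istart whose coordinate matches parent_coord
    within tol; the search never runs past the last index."""
    i = istart
    while parent_coord - child_coords[i] > tol and i < len(child_coords) - 1:
        i += 1
    return i

# Child grid with dx = 1.0; grid-spacing ratio 3 to a hypothetical parent grid.
child_x = [i * 1.0 for i in range(13)]
tol = 1.0e-6 * 1.0                       # tolefac * dx
assert matching_child_index(6.0, child_x, 0, tol) == 6
# A coordinate perturbed by round-off is still mapped to the same index:
assert matching_child_index(6.0 + 1.0e-9, child_x, 6, tol) == 6
```

Without the tolerance, a parent coordinate that is larger than the matching child coordinate by one ulp would be mapped to the next child index.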
!--   First x-direction
      IF ( iplg + 3 + anterpolation_buffer_width > iprg - 3 - anterpolation_buffer_width )  THEN
         IF ( iprg - iplg + 1 < 7 )  THEN
!
!--         Error
            WRITE( message_string, * ) 'child domain too narrow for anterpolation in x-direction'
            CALL message( 'pmci_check_child_domain_size', 'PA0652', 3, 2, 0, 6, 0 )
         ELSE IF ( iprg - iplg + 1 < 11 )  THEN
!
!--         Warning
            WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to 0'
            CALL message( 'pmci_check_child_domain_size', 'PA0653', 0, 1, 0, 6, 0 )
            anterpolation_buffer_width = 0
         ELSE
!
!--         Informative message
            WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to default value 2'
            CALL message( 'pmci_check_child_domain_size', 'PA0654', 0, 0, 0, 6, 0 )
            anterpolation_buffer_width = 2
         ENDIF
      ENDIF
!
!--   Then y-direction
      IF ( jpsg + 3 + anterpolation_buffer_width > jpng - 3 - anterpolation_buffer_width )  THEN
         IF ( jpng - jpsg + 1 < 7 )  THEN
!
!--         Error
            WRITE( message_string, * ) 'child domain too narrow for anterpolation in y-direction'
            CALL message( 'pmci_check_child_domain_size', 'PA0652', 3, 2, 0, 6, 0 )
         ELSE IF ( jpng - jpsg + 1 < 11 )  THEN
!
!--         Warning
            WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to 0'
            CALL message( 'pmci_check_child_domain_size', 'PA0653', 0, 1, 0, 6, 0 )
            anterpolation_buffer_width = 0
         ELSE
!
!--         Informative message
            WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to default value 2'
            CALL message( 'pmci_check_child_domain_size', 'PA0654', 0, 0, 0, 6, 0 )
            anterpolation_buffer_width = 2
         ENDIF
      ENDIF
!
!--   Finally z-direction
      IF ( kctw - 1 - anterpolation_buffer_width < 1 )  THEN
         IF ( kctw - 1 < 1 )  THEN
!
!--         Error
            WRITE( message_string, * ) 'child domain too shallow for anterpolation in z-direction'
            CALL message( 'pmci_check_child_domain_size', 'PA0652', 3, 2, 0, 6, 0 )
         ELSE IF ( kctw - 3 < 1 )  THEN
!
!--         Warning
            WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to 0'
            CALL message( 'pmci_check_child_domain_size', 'PA0653', 0, 1, 0, 6, 0 )
            anterpolation_buffer_width = 0
         ELSE
!
!--         Informative message
            WRITE( message_string, * ) 'anterpolation_buffer_width value too high, reset to default value 2'
            CALL message( 'pmci_check_child_domain_size', 'PA0654', 0, 0, 0, 6, 0 )
            anterpolation_buffer_width = 2
         ENDIF
      ENDIF

   END SUBROUTINE pmci_check_child_domain_size



   SUBROUTINE pmci_allocate_workarrays
!
!--   Allocate parent-grid work arrays for the interpolation.
      IMPLICIT NONE

!
!--   Determine and store the PE-subdomain dependent index bounds
      IF ( bc_dirichlet_l )  THEN
         iplw = ipl + 1
      ELSE
         iplw = ipl - 1
      ENDIF

      IF ( bc_dirichlet_r )  THEN
         iprw = ipr - 1
      ELSE
         iprw = ipr + 1
      ENDIF

      IF ( bc_dirichlet_s )  THEN
         jpsw = jps + 1
      ELSE
         jpsw = jps - 1
      ENDIF

      IF ( bc_dirichlet_n )  THEN
         jpnw = jpn - 1
      ELSE
         jpnw = jpn + 1
      ENDIF
!
!--   Left and right boundaries.
      ALLOCATE( workarr_lr(0:pg%nz+1,jpsw:jpnw,0:2) )
!
!--   South and north boundaries.
      ALLOCATE( workarr_sn(0:pg%nz+1,0:2,iplw:iprw) )
!
!--   Top boundary.
      ALLOCATE( workarr_t(0:2,jpsw:jpnw,iplw:iprw) )

   END SUBROUTINE pmci_allocate_workarrays



   SUBROUTINE pmci_create_workarray_exchange_datatypes
!
!--   Define specific MPI types for the workarr exchange.
      IMPLICIT NONE

!
!--   For the left and right boundaries
      CALL MPI_TYPE_VECTOR( 3, pg%nz+2, (jpnw-jpsw+1)*(pg%nz+2), MPI_REAL,                         &
                            workarr_lr_exchange_type, ierr )
      CALL MPI_TYPE_COMMIT( workarr_lr_exchange_type, ierr )
!
!--   For the south and north boundaries
      CALL MPI_TYPE_VECTOR( 1, 3*(pg%nz+2), 3*(pg%nz+2), MPI_REAL,                                 &
                            workarr_sn_exchange_type, ierr )
      CALL MPI_TYPE_COMMIT( workarr_sn_exchange_type, ierr )
!
!--   For the top-boundary x-slices
      CALL MPI_TYPE_VECTOR( iprw-iplw+1, 3, 3*(jpnw-jpsw+1), MPI_REAL,                             &
                            workarr_t_exchange_type_x, ierr )
      CALL MPI_TYPE_COMMIT( workarr_t_exchange_type_x, ierr )
!
!--   For the top-boundary y-slices
      CALL MPI_TYPE_VECTOR( 1, 3*(jpnw-jpsw+1), 3*(jpnw-jpsw+1), MPI_REAL,                         &
                            workarr_t_exchange_type_y, ierr )
      CALL MPI_TYPE_COMMIT( workarr_t_exchange_type_y, ierr )

   END SUBROUTINE pmci_create_workarray_exchange_datatypes



   SUBROUTINE pmci_check_grid_matching
!
!--   Check that the grid lines of the child and parent do match.
!--   Also check that the child subdomain width is not smaller than the parent grid spacing in the
!--   respective direction.
      IMPLICIT NONE

      INTEGER(iwp) ::  non_matching_height = 0              !< Flag for non-matching child-domain height
      INTEGER(iwp) ::  non_matching_lower_left_corner = 0   !< Flag for non-matching lower left corner
      INTEGER(iwp) ::  non_matching_upper_right_corner = 0  !< Flag for non-matching upper right corner
      INTEGER(iwp) ::  non_int_gsr_x = 0                    !< Flag for non-integer grid-spacing ratio in x-direction
      INTEGER(iwp) ::  non_int_gsr_y = 0                    !< Flag for non-integer grid-spacing ratio in y-direction
      INTEGER(iwp) ::  non_int_gsr_z = 0                    !< Flag for non-integer grid-spacing ratio in z-direction
      INTEGER(iwp) ::  too_narrow_pesd_x = 0                !< Flag for too narrow pe-subdomain in x-direction
      INTEGER(iwp) ::  too_narrow_pesd_y = 0                !< Flag for too narrow pe-subdomain in y-direction

      REAL(wp) ::  child_ngp_x_l        !< Number of gridpoints in child subdomain in x-direction converted to REAL(wp)
      REAL(wp) ::  child_ngp_y_l        !< Number of gridpoints in child subdomain in y-direction converted to REAL(wp)
      REAL(wp) ::  gridline_mismatch_x  !< Mismatch between the parent and child gridlines in the x-direction
      REAL(wp) ::  gridline_mismatch_y  !< Mismatch between the parent and child gridlines in the y-direction
      REAL(wp) ::  gsr_mismatch_x       !< Deviation of the grid-spacing ratio from the nearest integer value in the x-direction
      REAL(wp) ::  gsr_mismatch_y       !< Deviation of the grid-spacing ratio from the nearest integer value in the y-direction
      REAL(wp) ::  upper_right_coord_x  !< X-coordinate of the upper right corner of the child domain
      REAL(wp) ::  upper_right_coord_y  !< Y-coordinate of the upper right corner of the child domain
      REAL(wp) ::  tolex                !< Tolerance for grid-line matching in x-direction
      REAL(wp) ::  toley                !< Tolerance for grid-line matching in y-direction
      REAL(wp) ::  tolez                !< Tolerance for grid-line matching in z-direction


      IF ( myid == 0 )  THEN

         tolex = tolefac * dx
         toley = tolefac * dy
         tolez = tolefac * dz(1)
!
!--      First check that the child domain lower left corner matches the parent grid lines.
         gridline_mismatch_x = ABS( NINT( lower_left_coord_x / pg%dx ) * pg%dx - lower_left_coord_x )
         gridline_mismatch_y = ABS( NINT( lower_left_coord_y / pg%dy ) * pg%dy - lower_left_coord_y )
         IF ( gridline_mismatch_x > tolex )  non_matching_lower_left_corner = 1
         IF ( gridline_mismatch_y > toley )  non_matching_lower_left_corner = 1
!
!--      Then check that the child domain upper right corner matches the parent grid lines.
         upper_right_coord_x = lower_left_coord_x + ( nx + 1 ) * dx
         upper_right_coord_y = lower_left_coord_y + ( ny + 1 ) * dy
         gridline_mismatch_x = ABS( NINT( upper_right_coord_x / pg%dx ) * pg%dx - upper_right_coord_x )
         gridline_mismatch_y = ABS( NINT( upper_right_coord_y / pg%dy ) * pg%dy - upper_right_coord_y )
         IF ( gridline_mismatch_x > tolex )  non_matching_upper_right_corner = 1
         IF ( gridline_mismatch_y > toley )  non_matching_upper_right_corner = 1
!
!--      Also check that the child domain height matches the parent grid lines.
         IF ( MOD( zw(nzt), pg%dz ) > tolez )  non_matching_height = 1
!
!--      Check that the grid-spacing ratios in each direction are integer valued.
         gsr_mismatch_x = ABS( NINT( pg%dx / dx ) * dx - pg%dx )
         gsr_mismatch_y = ABS( NINT( pg%dy / dy ) * dy - pg%dy )
         IF ( gsr_mismatch_x > tolex )  non_int_gsr_x = 1
         IF ( gsr_mismatch_y > toley )  non_int_gsr_y = 1
!
!--      In the z-direction, all levels must be checked separately against grid stretching, which
!--      is not allowed.
         DO  n = 0, kctw+1
            IF ( ABS( pg%zw(n) - zw(kflw(n)) ) > tolez )  non_int_gsr_z = 1
         ENDDO

         child_ngp_x_l = REAL( nxr - nxl + 1, KIND=wp )
         IF ( child_ngp_x_l / REAL( igsr, KIND=wp ) < 1.0_wp )  too_narrow_pesd_x = 1
         child_ngp_y_l = REAL( nyn - nys + 1, KIND=wp )
         IF ( child_ngp_y_l / REAL( jgsr, KIND=wp ) < 1.0_wp )  too_narrow_pesd_y = 1

         IF ( non_matching_height > 0 )  THEN
            WRITE( message_string, * ) 'nested child domain height must match ',                   &
                                       'its parent grid lines'
            CALL message( 'pmci_check_grid_matching', 'PA0414', 3, 2, 0, 6, 0 )
         ENDIF

         IF ( non_matching_lower_left_corner > 0 )  THEN
            WRITE( message_string, * ) 'nested child domain lower left ',                          &
                                       'corner must match its parent grid lines'
            CALL message( 'pmci_check_grid_matching', 'PA0414', 3, 2, 0, 6, 0 )
         ENDIF

         IF ( non_matching_upper_right_corner > 0 )  THEN
            WRITE( message_string, * ) 'nested child domain upper right ',                         &
                                       'corner must match its parent grid lines'
            CALL message( 'pmci_check_grid_matching', 'PA0414', 3, 2, 0, 6, 0 )
         ENDIF

         IF ( non_int_gsr_x > 0 )  THEN
            WRITE( message_string, * ) 'nesting grid-spacing ratio ( parent dx / child dx ) ',     &
                                       'must have an integer value'
            CALL message( 'pmci_check_grid_matching', 'PA0416', 3, 2, 0, 6, 0 )
         ENDIF

         IF ( non_int_gsr_y > 0 )  THEN
            WRITE( message_string, * ) 'nesting grid-spacing ratio ( parent dy / child dy ) ',     &
                                       'must have an integer value'
            CALL message( 'pmci_check_grid_matching', 'PA0416', 3, 2, 0, 6, 0 )
         ENDIF

         IF ( non_int_gsr_z > 0 )  THEN
            WRITE( message_string, * ) 'nesting grid-spacing ratio ( parent dz / child dz ) ',     &
                                       'must have an integer value for each z-level'
            CALL message( 'pmci_check_grid_matching', 'PA0416', 3, 2, 0, 6, 0 )
         ENDIF

         IF ( too_narrow_pesd_x > 0 )  THEN
            WRITE( message_string, * ) 'child subdomain width in x-direction must not be ',        &
                                       'smaller than its parent grid dx. Change the PE-grid ',     &
                                       'setting (npex, npey) to satisfy this requirement.'
            CALL message( 'pmci_check_grid_matching', 'PA0587', 3, 2, 0, 6, 0 )
         ENDIF

         IF ( too_narrow_pesd_y > 0 )  THEN
            WRITE( message_string, * ) 'child subdomain width in y-direction must not be ',        &
                                       'smaller than its parent grid dy. Change the PE-grid ',     &
                                       'setting (npex, npey) to satisfy this requirement.'
            CALL message( 'pmci_check_grid_matching', 'PA0587', 3, 2, 0, 6, 0 )
         ENDIF

      ENDIF  !  ( myid == 0 )

   END SUBROUTINE pmci_check_grid_matching


!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
   SUBROUTINE pmci_compute_face_areas

      IMPLICIT NONE

      INTEGER(iwp) ::  i       !< Running index in the x-direction
      INTEGER(iwp) ::  j       !< Running index in the y-direction
      INTEGER(iwp) ::  k       !< Running index in the z-direction
      INTEGER(iwp) ::  k_wall  !< Local topography top k-index
      INTEGER(iwp) ::  n       !< Running index over boundary faces

      REAL(wp) ::  face_area_local  !< Local (for the current pe) integral face area of the left boundary
      REAL(wp) ::  sub_sum          !< Intermediate sum in order to improve the accuracy of the summation

!
!--   Sum up the volume flow through the left boundary
      face_area(1) = 0.0_wp
      face_area_local = 0.0_wp
      IF ( bc_dirichlet_l )  THEN
         i = 0
         DO  j = nys, nyn
            sub_sum = 0.0_wp
            k_wall = topo_top_ind(j,i,1)
            DO  k = k_wall + 1, nzt
               sub_sum = sub_sum + dzw(k)
            ENDDO
            face_area_local = face_area_local + dy * sub_sum
         ENDDO
      ENDIF

#if defined( __parallel )
      IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
      CALL MPI_ALLREDUCE( face_area_local, face_area(1), 1, MPI_REAL, MPI_SUM, comm2d, ierr )
#else
      face_area(1) = face_area_local
#endif
!
!--   Sum up the volume flow through the right boundary
      face_area(2) = 0.0_wp
      face_area_local = 0.0_wp
      IF ( bc_dirichlet_r )  THEN
         i = nx
         DO  j = nys, nyn
            sub_sum = 0.0_wp
            k_wall = topo_top_ind(j,i,1)
            DO  k = k_wall + 1, nzt
               sub_sum = sub_sum + dzw(k)
            ENDDO
            face_area_local = face_area_local + dy * sub_sum
         ENDDO
      ENDIF

#if defined( __parallel )
      IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
      CALL MPI_ALLREDUCE( face_area_local, face_area(2), 1, MPI_REAL, MPI_SUM, comm2d, ierr )
#else
      face_area(2) = face_area_local
#endif
!
!--   Sum up the volume flow through the south boundary
      face_area(3) = 0.0_wp
      face_area_local = 0.0_wp
      IF ( bc_dirichlet_s )  THEN
         j = 0
         DO  i = nxl, nxr
            sub_sum = 0.0_wp
            k_wall = topo_top_ind(j,i,2)
            DO  k = k_wall + 1, nzt
               sub_sum = sub_sum + dzw(k)
            ENDDO
            face_area_local = face_area_local + dx * sub_sum
         ENDDO
      ENDIF

#if defined( __parallel )
      IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
      CALL MPI_ALLREDUCE( face_area_local, face_area(3), 1, MPI_REAL, MPI_SUM, comm2d, ierr )
#else
      face_area(3) = face_area_local
#endif
!
!--   Sum up the volume flow through the north boundary
      face_area(4) = 0.0_wp
      face_area_local = 0.0_wp
      IF ( bc_dirichlet_n )  THEN
         j = ny
         DO  i = nxl, nxr
            sub_sum = 0.0_wp
            k_wall = topo_top_ind(j,i,2)
            DO  k = k_wall + 1, nzt
               sub_sum = sub_sum + dzw(k)
            ENDDO
            face_area_local = face_area_local + dx * sub_sum
         ENDDO
      ENDIF

#if defined( __parallel )
      IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
      CALL MPI_ALLREDUCE( face_area_local, face_area(4), 1, MPI_REAL, MPI_SUM, comm2d, ierr )
#else
      face_area(4) = face_area_local
#endif
!
!--   The top face area does not depend on the topography at all.
      face_area(5) = ( nx + 1 ) * ( ny + 1 ) * dx * dy
!
!--   The 6th element is used for the total area
      face_area(6) = 0.0_wp
      DO  n = 1, 5
         face_area(6) = face_area(6) + face_area(n)
      ENDDO

!
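Reviewer note: the per-face integration above sums the open (above-topography) area of each boundary column before the MPI reduction. A minimal Python sketch of one such face sum (invented names, not PALM code):

```python
# Sketch of the left-boundary face-area sum in pmci_compute_face_areas.
# topo_top holds the local topography top k-index per column (topo_top_ind),
# dzw the w-grid layer thicknesses, dy the horizontal spacing, nzt the top index.

def left_face_area(topo_top, dzw, dy, nzt):
    """Sum dy * dzw(k) over k above the topography top, over all j-columns."""
    area = 0.0
    for k_wall in topo_top:
        sub_sum = 0.0                  # per-column partial sum, as in the code
        for k in range(k_wall + 1, nzt + 1):
            sub_sum += dzw[k]
        area += dy * sub_sum
    return area

dzw = [0.0] + [2.0] * 4                # four 2 m layers, k = 1..4 (nzt = 4)
assert left_face_area([0, 0], dzw, 5.0, 4) == 80.0   # two fully open columns
assert left_face_area([2, 0], dzw, 5.0, 4) == 60.0   # one column half blocked
```

The per-column `sub_sum` mirrors the code's intermediate sum, which keeps the floating-point accumulation better conditioned than adding each thin layer directly into the global total.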
!     write( 9, "(6(e12.5,2x))") ( face_area(n), n = 1, 6 )
!     flush( 9 )

   END SUBROUTINE pmci_compute_face_areas
#endif

 END SUBROUTINE pmci_setup_child


!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_setup_coordinates

…

    IMPLICIT NONE

    INTEGER(iwp) ::  i  !<
    INTEGER(iwp) ::  j  !<

!
…
    ALLOCATE( coord_x(-nbgp:nx+nbgp) )
    ALLOCATE( coord_y(-nbgp:ny+nbgp) )

    DO  i = -nbgp, nx + nbgp
       coord_x(i) = lower_left_coord_x + i * dx
…
 END SUBROUTINE pmci_setup_coordinates

!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> In this subroutine the number of coupled arrays is determined.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_num_arrays

#if defined( __parallel )
    IMPLICIT NONE
!
!-- The number of coupled arrays depends on the model settings. At least 5 arrays need to be
!-- coupled (u, v, w, e, diss). Please note, actually e and diss (TKE and dissipation rate) are
!-- only required if RANS-RANS nesting is applied, but memory is allocated nevertheless. This is
!-- because the information whether they are needed or not is retrieved at a later point in time.
!-- In case e and diss are not needed, they are also not exchanged between parent and child.
    pmc_max_array = 5
!
!-- pt
    IF ( .NOT. neutral )  pmc_max_array = pmc_max_array + 1

    IF ( humidity )  THEN
!
…
!
!--    qc, nc
       IF ( bulk_cloud_model  .AND.  microphysics_morrison )                                       &
          pmc_max_array = pmc_max_array + 2
!
!--    qr, nr
       IF ( bulk_cloud_model  .AND.  microphysics_seifert )                                        &
          pmc_max_array = pmc_max_array + 2
    ENDIF
…
    IF ( air_chemistry  .AND.  nesting_chem )  pmc_max_array = pmc_max_array + nspec
!
!-- SALSA, depends on the number of aerosol size bins and chemical components + the number of
!-- default gases
    IF ( salsa  .AND.  nesting_salsa )  pmc_max_array = pmc_max_array + nbins_aerosol +            &
                                                        nbins_aerosol * ncomponents_mass
…

#endif

 END SUBROUTINE pmci_num_arrays


!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_set_array_pointer( name, child_id, nz_child, n )

    IMPLICIT NONE

    CHARACTER(LEN=*), INTENT(IN) ::  name  !<

    INTEGER(iwp), INTENT(IN) ::  child_id  !<
    INTEGER(iwp), INTENT(IN) ::  nz_child  !<

    INTEGER(iwp), INTENT(IN), OPTIONAL ::  n  !< index of chemical species

#if defined( __parallel )
!
!-- Local variables:
    INTEGER(iwp) ::  ierr  !< MPI error code

    INTEGER(idp), POINTER, DIMENSION(:,:) ::  i_2d  !<

    REAL(wp), POINTER, DIMENSION(:,:)   ::  p_2d      !<
    REAL(wp), POINTER, DIMENSION(:,:,:) ::  p_3d      !<
    REAL(wp), POINTER, DIMENSION(:,:,:) ::  p_3d_sec  !<


    NULLIFY( p_3d )
…
    IF ( TRIM(name) == "nc" )        p_3d => nc
    IF ( TRIM(name) == "s"  )        p_3d => s
    IF ( TRIM(name) == "diss" )      p_3d => diss
    IF ( TRIM(name) == "nr_part" )   i_2d => nr_part
    IF ( TRIM(name) == "part_adr" )  i_2d => part_adr
…
    IF ( INDEX( TRIM(name), "an_" ) /= 0 )  p_3d => aerosol_number(n)%conc
    IF ( INDEX( TRIM(name), "am_" ) /= 0 )  p_3d => aerosol_mass(n)%conc
    IF ( INDEX( TRIM(name), "sg_" ) /= 0  .AND.  .NOT. salsa_gases_from_chem )                     &
       p_3d => salsa_gas(n)%conc
!
!-- Next line is just an example for a 2D array (not active for coupling!)
!-- Please note, that z0 has to be declared as TARGET array in modules.f90.
!    IF ( TRIM(name) == "z0" )  p_2d => z0
…
    IF ( INDEX( TRIM(name), "an_" ) /= 0 )  p_3d_sec => nconc_2(:,:,:,n)
    IF ( INDEX( TRIM(name), "am_" ) /= 0 )  p_3d_sec => mconc_2(:,:,:,n)
    IF ( INDEX( TRIM(name), "sg_" ) /= 0  .AND.  .NOT. salsa_gases_from_chem )                     &
       p_3d_sec => gconc_2(:,:,:,n)

    IF ( ASSOCIATED( p_3d ) )  THEN
…
       CALL MPI_BARRIER( comm2d, ierr )
    ENDIF

    ENDIF

#endif

 END SUBROUTINE pmci_set_array_pointer



 INTEGER FUNCTION get_number_of_children()

    IMPLICIT NONE


#if defined( __parallel )
    get_number_of_children = SIZE( pmc_parent_for_child ) - 1
…



 INTEGER FUNCTION get_childid( id_index )
…
    INTEGER, INTENT(IN) ::  id_index  !<


#if defined( __parallel )
    get_childid = pmc_parent_for_child(id_index)
…


!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
2676 SUBROUTINE get_child_edges( m, lx_coord, lx_coord_b, rx_coord, rx_coord_b, sy_coord, sy_coord_b, & 2677 ny_coord, ny_coord_b, uz_coord, uz_coord_b ) 2678 2513 2679 IMPLICIT NONE 2514 2680 2515 INTEGER,INTENT(IN) :: m!<2681 INTEGER,INTENT(IN) :: m !< 2516 2682 2517 2683 REAL(wp),INTENT(OUT) :: lx_coord, lx_coord_b !< 2684 REAL(wp),INTENT(OUT) :: ny_coord, ny_coord_b !< 2518 2685 REAL(wp),INTENT(OUT) :: rx_coord, rx_coord_b !< 2519 REAL(wp),INTENT(OUT) :: ny_coord, ny_coord_b !<2520 2686 REAL(wp),INTENT(OUT) :: sy_coord, sy_coord_b !< 2521 2687 REAL(wp),INTENT(OUT) :: uz_coord, uz_coord_b !< 2522 2688 2523 2689 2524 2690 #if defined( __parallel ) 2525 2691 2526 2692 lx_coord = childgrid(m)%lx_coord 2527 2693 rx_coord = childgrid(m)%rx_coord … … 2529 2695 ny_coord = childgrid(m)%ny_coord 2530 2696 uz_coord = childgrid(m)%uz_coord 2531 2697 2532 2698 lx_coord_b = childgrid(m)%lx_coord_b 2533 2699 rx_coord_b = childgrid(m)%rx_coord_b … … 2535 2701 ny_coord_b = childgrid(m)%ny_coord_b 2536 2702 uz_coord_b = childgrid(m)%uz_coord_b 2537 2703 2538 2704 #endif 2539 2705 2540 2706 END SUBROUTINE get_child_edges 2541 2707 2542 2708 2543 2709 2710 !--------------------------------------------------------------------------------------------------! 2711 ! Description: 2712 ! ------------ 2713 !> @Todo: Missing subroutine description. 2714 !--------------------------------------------------------------------------------------------------! 
2544 2715 SUBROUTINE get_child_gridspacing( m, dx, dy,dz ) 2545 2716 2546 2717 IMPLICIT NONE 2547 2548 INTEGER, INTENT(IN) :: m!<2549 2550 REAL(wp), INTENT(OUT) 2551 2552 REAL(wp), INTENT(OUT), OPTIONAL :: dz 2718 2719 INTEGER, INTENT(IN) :: m !< 2720 2721 REAL(wp), INTENT(OUT) :: dx,dy !< 2722 2723 REAL(wp), INTENT(OUT), OPTIONAL :: dz !< 2553 2724 2554 2725 2555 2726 #if defined( __parallel ) 2556 2727 2557 2728 dx = childgrid(m)%dx 2558 2729 dy = childgrid(m)%dy … … 2560 2731 dz = childgrid(m)%dz 2561 2732 ENDIF 2562 2733 2563 2734 #endif 2564 2735 2565 2736 END SUBROUTINE get_child_gridspacing 2566 2737 2567 2738 2568 2739 !--------------------------------------------------------------------------------------------------! 2740 ! Description: 2741 ! ------------ 2742 !> @Todo: Missing subroutine description. 2743 !--------------------------------------------------------------------------------------------------! 2569 2744 SUBROUTINE pmci_create_childs_parent_grid_arrays( name, is, ie, js, je, nzc, n ) 2570 2745 2571 2746 IMPLICIT NONE 2572 2747 2573 INTEGER(iwp), INTENT(IN) :: ie !< RENAME ie, is, je, js? 2574 INTEGER(iwp), INTENT(IN) :: is !< 2575 INTEGER(iwp), INTENT(IN) :: je !< 2576 INTEGER(iwp), INTENT(IN) :: js !< 2577 INTEGER(iwp), INTENT(IN) :: nzc !< nzc is pg%nz, but note that pg%nz is not the original nz of parent, 2578 !< but the highest parent-grid level needed for nesting. 2748 CHARACTER(LEN=*), INTENT(IN) :: name !< 2749 2750 INTEGER(iwp), INTENT(IN) :: ie !< RENAME ie, is, je, js? 2751 INTEGER(iwp), INTENT(IN) :: is !< 2752 INTEGER(iwp), INTENT(IN) :: je !< 2753 INTEGER(iwp), INTENT(IN) :: js !< 2754 INTEGER(iwp), INTENT(IN) :: nzc !< nzc is pg%nz, but note that pg%nz is not the original nz of parent, 2755 !< but the highest parent-grid level needed for nesting. 
2579 2756 INTEGER(iwp), INTENT(IN), OPTIONAL :: n !< number of chemical species / salsa variables 2580 2581 CHARACTER(LEN=*), INTENT(IN) :: name !<2582 2757 2583 2758 #if defined( __parallel ) 2584 ! 2759 ! 2585 2760 !-- Local variables: 2586 INTEGER(iwp) :: ierr !< 2587 2588 INTEGER(idp), POINTER,DIMENSION(:,:) :: i_2d !< 2589 2590 REAL(wp), POINTER,DIMENSION(:,:) :: p_2d !< 2591 REAL(wp), POINTER,DIMENSION(:,:,:) :: p_3d !< 2592 2761 INTEGER(iwp) :: ierr !< 2762 2763 INTEGER(idp), POINTER,DIMENSION(:,:) :: i_2d !< 2764 2765 REAL(wp), POINTER,DIMENSION(:,:) :: p_2d !< 2766 2767 REAL(wp), POINTER,DIMENSION(:,:,:) :: p_3d !< 2768 2593 2769 NULLIFY( p_3d ) 2594 2770 NULLIFY( p_2d ) … … 2642 2818 p_3d => chem_spec_c(:,:,:,n) 2643 2819 ELSEIF ( TRIM( name(1:3) ) == "an_" ) THEN 2644 IF ( .NOT. ALLOCATED( aerosol_number_c ) ) &2820 IF ( .NOT. ALLOCATED( aerosol_number_c ) ) & 2645 2821 ALLOCATE( aerosol_number_c(0:nzc+1,js:je,is:ie,1:nbins_aerosol) ) 2646 2822 p_3d => aerosol_number_c(:,:,:,n) 2647 2823 ELSEIF ( TRIM( name(1:3) ) == "am_" ) THEN 2648 IF ( .NOT. ALLOCATED( aerosol_mass_c ) ) &2824 IF ( .NOT. ALLOCATED( aerosol_mass_c ) ) & 2649 2825 ALLOCATE( aerosol_mass_c(0:nzc+1,js:je,is:ie,1:(nbins_aerosol*ncomponents_mass) ) ) 2650 2826 p_3d => aerosol_mass_c(:,:,:,n) 2651 ELSEIF ( TRIM( name(1:3) ) == "sg_" .AND. .NOT. salsa_gases_from_chem ) & 2652 THEN 2653 IF ( .NOT. ALLOCATED( salsa_gas_c ) ) & 2827 ELSEIF ( TRIM( name(1:3) ) == "sg_" .AND. .NOT. salsa_gases_from_chem ) THEN 2828 IF ( .NOT. ALLOCATED( salsa_gas_c ) ) & 2654 2829 ALLOCATE( salsa_gas_c(0:nzc+1,js:je,is:ie,1:ngases_salsa) ) 2655 2830 p_3d => salsa_gas_c(:,:,:,n) … … 2669 2844 !-- Give only one message for the first child domain. 2670 2845 IF ( cpl_id == 2 .AND. 
myid == 0 ) THEN 2671 message_string = 'pointer for array "' // TRIM( name ) // & 2672 '" can''t be associated' 2846 message_string = 'pointer for array "' // TRIM( name ) // '" can''t be associated' 2673 2847 CALL message( 'pmci_create_childs_parent_grid_arrays', 'PA0170', 3, 2, 0, 6, 0 ) 2674 2848 ELSE … … 2684 2858 2685 2859 2860 2861 !--------------------------------------------------------------------------------------------------! 2862 ! Description: 2863 ! ------------ 2864 !> @Todo: Missing subroutine description. 2865 !--------------------------------------------------------------------------------------------------! 2686 2866 SUBROUTINE pmci_parent_initialize 2687 2867 2688 2868 ! 2689 !-- Send data for the children in order to let them create initial 2690 !-- conditions by interpolating theparent-domain fields.2869 !-- Send data for the children in order to let them create initial conditions by interpolating the 2870 !-- parent-domain fields. 2691 2871 #if defined( __parallel ) 2692 2872 IMPLICIT NONE 2693 2873 2694 INTEGER(iwp) :: child_id !< 2695 INTEGER(iwp) :: m !< 2696 REAL(wp) :: waittime !< 2874 INTEGER(iwp) :: child_id !< 2875 INTEGER(iwp) :: m !< 2876 2877 REAL(wp) :: waittime !< 2697 2878 2698 2879 … … 2706 2887 2707 2888 2708 2889 !--------------------------------------------------------------------------------------------------! 2890 ! Description: 2891 ! ------------ 2892 !> @Todo: Missing subroutine description. 2893 !--------------------------------------------------------------------------------------------------! 2709 2894 SUBROUTINE pmci_child_initialize 2710 2895 2711 2896 ! 2712 !-- Create initial conditions for the current child domain by interpolating 2713 !-- the parent-domain fields. 2897 !-- Create initial conditions for the current child domain by interpolating the parent-domain fields. 
2714 2898 #if defined( __parallel ) 2715 2899 IMPLICIT NONE 2716 2900 2717 INTEGER(iwp) :: ic !< Child-grid index in x-direction 2718 INTEGER(iwp) :: jc !< Child-grid index in y-direction 2719 INTEGER(iwp) :: kc !< Child-grid index in z-direction 2720 INTEGER(iwp) :: lb !< Running index for aerosol size bins 2721 INTEGER(iwp) :: lc !< Running index for aerosol mass bins 2722 INTEGER(iwp) :: lg !< Running index for salsa gases 2723 INTEGER(iwp) :: n !< Running index for chemical species 2724 REAL(wp) :: waittime !< Waiting time 2901 INTEGER(iwp) :: ic !< Child-grid index in x-direction 2902 INTEGER(iwp) :: jc !< Child-grid index in y-direction 2903 INTEGER(iwp) :: kc !< Child-grid index in z-direction 2904 INTEGER(iwp) :: lb !< Running index for aerosol size bins 2905 INTEGER(iwp) :: lc !< Running index for aerosol mass bins 2906 INTEGER(iwp) :: lg !< Running index for salsa gases 2907 INTEGER(iwp) :: n !< Running index for chemical species 2908 2909 REAL(wp) :: waittime !< Waiting time 2725 2910 2726 2911 ! … … 2736 2921 CALL pmci_interp_1sto_all ( w, wc, kctw, iflo, ifuo, jflo, jfuo, kflw, kfuw, 'w' ) 2737 2922 2738 IF ( ( 2739 ( 2740 2923 IF ( ( rans_mode_parent .AND. rans_mode ) .OR. & 2924 ( .NOT. rans_mode_parent .AND. .NOT. rans_mode .AND. & 2925 .NOT. constant_diffusion ) ) THEN 2741 2926 CALL pmci_interp_1sto_all ( e, ec, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 'e' ) 2742 2927 ENDIF … … 2755 2940 2756 2941 IF ( bulk_cloud_model .AND. microphysics_morrison ) THEN 2757 CALL pmci_interp_1sto_all ( qc, qcc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 's' ) 2758 CALL pmci_interp_1sto_all ( nc, ncc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 's' ) 2942 CALL pmci_interp_1sto_all ( qc, qcc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 's' ) 2943 CALL pmci_interp_1sto_all ( nc, ncc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 's' ) 2759 2944 ENDIF 2760 2945 … … 2772 2957 IF ( air_chemistry .AND. 
nesting_chem ) THEN 2773 2958 DO n = 1, nspec 2774 CALL pmci_interp_1sto_all ( chem_species(n)%conc, chem_spec_c(:,:,:,n), 2959 CALL pmci_interp_1sto_all ( chem_species(n)%conc, chem_spec_c(:,:,:,n), & 2775 2960 kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 's' ) 2776 2961 ENDDO … … 2779 2964 IF ( salsa .AND. nesting_salsa ) THEN 2780 2965 DO lb = 1, nbins_aerosol 2781 CALL pmci_interp_1sto_all ( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb), 2966 CALL pmci_interp_1sto_all ( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb), & 2782 2967 kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 's' ) 2783 2968 ENDDO 2784 2969 DO lc = 1, nbins_aerosol * ncomponents_mass 2785 CALL pmci_interp_1sto_all ( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc), 2970 CALL pmci_interp_1sto_all ( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc), & 2786 2971 kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 's' ) 2787 2972 ENDDO 2788 2973 IF ( .NOT. salsa_gases_from_chem ) THEN 2789 2974 DO lg = 1, ngases_salsa 2790 CALL pmci_interp_1sto_all ( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg), 2975 CALL pmci_interp_1sto_all ( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg), & 2791 2976 kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 's' ) 2792 2977 ENDDO … … 2800 2985 DO jc = nysg, nyng 2801 2986 DO kc = nzb, nzt 2802 u(kc,jc,ic) = MERGE( u(kc,jc,ic), 0.0_wp, BTEST( wall_flags_total_0(kc,jc,ic), 1 ) ) 2803 v(kc,jc,ic) = MERGE( v(kc,jc,ic), 0.0_wp, BTEST( wall_flags_total_0(kc,jc,ic), 2 ) ) 2804 w(kc,jc,ic) = MERGE( w(kc,jc,ic), 0.0_wp, BTEST( wall_flags_total_0(kc,jc,ic), 3 ) ) 2805 u_p(kc,jc,ic) = MERGE( u_p(kc,jc,ic), 0.0_wp, BTEST( wall_flags_total_0(kc,jc,ic), 1 ) ) 2806 v_p(kc,jc,ic) = MERGE( v_p(kc,jc,ic), 0.0_wp, BTEST( wall_flags_total_0(kc,jc,ic), 2 ) ) 2807 w_p(kc,jc,ic) = MERGE( w_p(kc,jc,ic), 0.0_wp, BTEST( wall_flags_total_0(kc,jc,ic), 3 ) ) 2987 u(kc,jc,ic) = MERGE( u(kc,jc,ic), 0.0_wp, & 2988 BTEST( wall_flags_total_0(kc,jc,ic), 1 ) ) 2989 v(kc,jc,ic) = MERGE( v(kc,jc,ic), 0.0_wp, & 2990 BTEST( 
wall_flags_total_0(kc,jc,ic), 2 ) ) 2991 w(kc,jc,ic) = MERGE( w(kc,jc,ic), 0.0_wp, & 2992 BTEST( wall_flags_total_0(kc,jc,ic), 3 ) ) 2993 u_p(kc,jc,ic) = MERGE( u_p(kc,jc,ic), 0.0_wp, & 2994 BTEST( wall_flags_total_0(kc,jc,ic), 1 ) ) 2995 v_p(kc,jc,ic) = MERGE( v_p(kc,jc,ic), 0.0_wp, & 2996 BTEST( wall_flags_total_0(kc,jc,ic), 2 ) ) 2997 w_p(kc,jc,ic) = MERGE( w_p(kc,jc,ic), 0.0_wp, & 2998 BTEST( wall_flags_total_0(kc,jc,ic), 3 ) ) 2808 2999 ENDDO 2809 3000 ENDDO … … 2816 3007 2817 3008 2818 SUBROUTINE pmci_interp_1sto_all( child_array, parent_array, kct, ifl, ifu, jfl, jfu, kfl, kfu, & 2819 var ) 2820 ! 2821 !-- Interpolation of the internal values for the child-domain initialization 2822 IMPLICIT NONE 2823 2824 INTEGER(iwp), INTENT(IN) :: kct !< The parent-grid index in z-direction just below the boundary value node 2825 2826 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifl !< Indicates start index of child cells belonging to certain 2827 !< parent cell - x direction 2828 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifu !< Indicates end index of child cells belonging to certain 2829 !< parent cell - x direction 2830 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfl !< Indicates start index of child cells belonging to certain 2831 !< parent cell - y direction 2832 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfu !< Indicates end index of child cells belonging to certain 2833 !< parent cell - y direction 2834 INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) :: kfl !< Indicates start index of child cells belonging to certain 2835 !< parent cell - z direction 2836 INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) :: kfu !< Indicates end index of child cells belonging to certain 2837 !< parent cell - z direction 2838 REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(INOUT) :: child_array !< Child-grid array 2839 REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(IN) :: parent_array !< Parent-grid array 2840 2841 CHARACTER(LEN=1), INTENT(IN) 
:: var !< Variable symbol: 'u', 'v', 'w' or 's' 2842 ! 2843 !-- Local variables: 2844 INTEGER(iwp) :: ic !< Running child-grid index in the x-direction 2845 INTEGER(iwp) :: icb !< Index pointing to the first redundant ghost point layer behind the actual boundary 2846 !< ghost point layer in the x-direction 2847 INTEGER(iwp) :: icbc !< Index pointing to the boundary ghost point layer in the x-direction 2848 INTEGER(iwp) :: icfirst !< Leftmost child-grid index initialized by the main loops (usually icfirst == icl_init) 2849 INTEGER(iwp) :: iclast !< Rightmost child-grid index initialized by the main loops (usually iclast == icr_init) 2850 INTEGER(iwp) :: icl_init !< Left child-grid index bound for initialization in the x-direction 2851 INTEGER(iwp) :: icr_init !< Right child-grid index bound for initialization in the x-direction 2852 INTEGER(iwp) :: jc !< Running child-grid index in the y-direction 2853 INTEGER(iwp) :: jcb !< Index pointing to the first redundant ghost point layer behind the actual boundary 2854 !< ghost point layer in the y-direction 2855 INTEGER(iwp) :: jcbc !< Index pointing to the boundary ghost point layer in the y-direction 2856 INTEGER(iwp) :: jcfirst !< Southmost child-grid index initialized by the main loops (usually jcfirst == jcs_init) 2857 INTEGER(iwp) :: jclast !< Northmost child-grid index initialized by the main loops (usually jclast == jcn_init) 2858 INTEGER(iwp) :: jcs_init !< South child-grid index bound for initialization in the y-direction 2859 INTEGER(iwp) :: jcn_init !< North child-grid index bound for initialization in the y-direction 2860 INTEGER(iwp) :: kc !< Running child-grid index in the z-direction 2861 INTEGER(iwp) :: ip !< Running parent-grid index in the x-direction 2862 INTEGER(iwp) :: ipl_init !< Left parent-grid index bound for initialization in the x-direction 2863 INTEGER(iwp) :: ipr_init !< Right parent-grid index bound for initialization in the x-direction 2864 INTEGER(iwp) :: jp !< Running parent-grid index in 
the y-direction 2865 INTEGER(iwp) :: jps_init !< South parent-grid index bound for initialization in the y-direction 2866 INTEGER(iwp) :: jpn_init !< North parent-grid index bound for initialization in the y-direction 2867 INTEGER(iwp) :: kp !< Running parent-grid index in the z-direction 2868 2869 2870 ipl_init = ipl 2871 ipr_init = ipr 2872 jps_init = jps 2873 jpn_init = jpn 2874 icl_init = nxl 2875 icr_init = nxr 2876 jcs_init = nys 2877 jcn_init = nyn 2878 2879 icbc = -1 2880 icb = -2 2881 jcbc = -1 2882 jcb = -2 2883 IF ( var == 'u' ) THEN 2884 icbc = 0 2885 icb = -1 2886 ELSE IF ( var == 'v' ) THEN 2887 jcbc = 0 2888 jcb = -1 2889 ENDIF 2890 2891 IF ( nesting_mode /= 'vertical' ) THEN 2892 IF ( bc_dirichlet_l ) THEN 2893 ipl_init = ipl + 1 2894 icl_init = nxl - 1 2895 ! 2896 !-- For u, nxl is a ghost node, but not for the other variables 2897 IF ( var == 'u' ) THEN 2898 ipl_init = ipl + 2 2899 icl_init = nxl 2900 ENDIF 3009 !--------------------------------------------------------------------------------------------------! 3010 ! Description: 3011 ! ------------ 3012 !> @Todo: Missing subroutine description. 3013 !--------------------------------------------------------------------------------------------------! 3014 SUBROUTINE pmci_interp_1sto_all( child_array, parent_array, kct, ifl, ifu, jfl, jfu, kfl, kfu, & 3015 var ) 3016 ! 
3017 !-- Interpolation of the internal values for the child-domain initialization 3018 IMPLICIT NONE 3019 3020 CHARACTER(LEN=1), INTENT(IN) :: var !< Variable symbol: 'u', 'v', 'w' or 's' 3021 3022 INTEGER(iwp), INTENT(IN) :: kct !< The parent-grid index in z-direction just below the boundary value node 3023 3024 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifl !< Indicates start index of child cells belonging to certain 3025 !< parent cell - x direction 3026 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifu !< Indicates end index of child cells belonging to certain 3027 !< parent cell - x direction 3028 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfl !< Indicates start index of child cells belonging to certain 3029 !< parent cell - y direction 3030 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfu !< Indicates end index of child cells belonging to certain 3031 !< parent cell - y direction 3032 INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) :: kfl !< Indicates start index of child cells belonging to certain 3033 !< parent cell - z direction 3034 INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) :: kfu !< Indicates end index of child cells belonging to certain 3035 !< parent cell - z direction 3036 REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(INOUT) :: child_array !< Child-grid array 3037 3038 REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(IN) :: parent_array !< Parent-grid array 3039 3040 ! 
3041 !-- Local variables: 3042 INTEGER(iwp) :: ic !< Running child-grid index in the x-direction 3043 INTEGER(iwp) :: icb !< Index pointing to the first redundant ghost point layer behind the actual boundary 3044 !< ghost point layer in the x-direction 3045 INTEGER(iwp) :: icbc !< Index pointing to the boundary ghost point layer in the x-direction 3046 INTEGER(iwp) :: icfirst !< Leftmost child-grid index initialized by the main loops (usually icfirst == icl_init) 3047 INTEGER(iwp) :: iclast !< Rightmost child-grid index initialized by the main loops (usually iclast == icr_init) 3048 INTEGER(iwp) :: icl_init !< Left child-grid index bound for initialization in the x-direction 3049 INTEGER(iwp) :: icr_init !< Right child-grid index bound for initialization in the x-direction 3050 INTEGER(iwp) :: ip !< Running parent-grid index in the x-direction 3051 INTEGER(iwp) :: ipl_init !< Left parent-grid index bound for initialization in the x-direction 3052 INTEGER(iwp) :: ipr_init !< Right parent-grid index bound for initialization in the x-direction 3053 INTEGER(iwp) :: jc !< Running child-grid index in the y-direction 3054 INTEGER(iwp) :: jcb !< Index pointing to the first redundant ghost point layer behind the actual boundary 3055 !< ghost point layer in the y-direction 3056 INTEGER(iwp) :: jcbc !< Index pointing to the boundary ghost point layer in the y-direction 3057 INTEGER(iwp) :: jcfirst !< Southmost child-grid index initialized by the main loops (usually jcfirst == jcs_init) 3058 INTEGER(iwp) :: jclast !< Northmost child-grid index initialized by the main loops (usually jclast == jcn_init) 3059 INTEGER(iwp) :: jcs_init !< South child-grid index bound for initialization in the y-direction 3060 INTEGER(iwp) :: jcn_init !< North child-grid index bound for initialization in the y-direction 3061 INTEGER(iwp) :: jp !< Running parent-grid index in the y-direction 3062 INTEGER(iwp) :: jps_init !< South parent-grid index bound for initialization in the y-direction 3063 
INTEGER(iwp) :: jpn_init !< North parent-grid index bound for initialization in the y-direction 3064 INTEGER(iwp) :: kc !< Running child-grid index in the z-direction 3065 INTEGER(iwp) :: kp !< Running parent-grid index in the z-direction 3066 3067 3068 ipl_init = ipl 3069 ipr_init = ipr 3070 jps_init = jps 3071 jpn_init = jpn 3072 icl_init = nxl 3073 icr_init = nxr 3074 jcs_init = nys 3075 jcn_init = nyn 3076 3077 icbc = -1 3078 icb = -2 3079 jcbc = -1 3080 jcb = -2 3081 IF ( var == 'u' ) THEN 3082 icbc = 0 3083 icb = -1 3084 ELSE IF ( var == 'v' ) THEN 3085 jcbc = 0 3086 jcb = -1 3087 ENDIF 3088 3089 IF ( nesting_mode /= 'vertical' ) THEN 3090 IF ( bc_dirichlet_l ) THEN 3091 ipl_init = ipl + 1 3092 icl_init = nxl - 1 3093 ! 3094 !-- For u, nxl is a ghost node, but not for the other variables 3095 IF ( var == 'u' ) THEN 3096 ipl_init = ipl + 2 3097 icl_init = nxl 2901 3098 ENDIF 2902 IF ( bc_dirichlet_s ) THEN2903 jps_init = jps + 12904 jcs_init = nys - 12905 ! 2906 ! -- For v, nys is a ghost node, but not for the other variables2907 IF ( var == 'v' ) THEN 2908 jps_init = jps + 22909 jcs_init = nys2910 ENDIF3099 ENDIF 3100 IF ( bc_dirichlet_s ) THEN 3101 jps_init = jps + 1 3102 jcs_init = nys - 1 3103 ! 
3104 !-- For v, nys is a ghost node, but not for the other variables 3105 IF ( var == 'v' ) THEN 3106 jps_init = jps + 2 3107 jcs_init = nys 2911 3108 ENDIF 2912 IF ( bc_dirichlet_r ) THEN2913 ipr_init = ipr - 12914 icr_init = nxr + 12915 ENDIF2916 IF ( bc_dirichlet_n ) THEN2917 jpn_init = jpn - 12918 jcn_init = nyn +12919 ENDIF2920 ENDIF 2921 2922 child_array(:,:,:) = 0.0_wp 2923 2924 IF ( var == 'u' ) THEN 2925 2926 icfirst = ifl(ipl_init) 2927 iclast = ifl(ipr_init+1) - 12928 jcfirst = jfl(jps_init)2929 jclast = jfu(jpn_init)2930 DO ip = ipl_init, ipr_init2931 DO jp = jps_init, jpn_init2932 DO kp = 0, kct + 12933 2934 DO ic = ifl(ip), ifl(ip+1)-1 2935 DO jc = jfl(jp), jfu(jp)2936 DO kc = kfl(kp), MIN( kfu(kp), nzt+1)2937 child_array(kc,jc,ic) = parent_array(kp,jp,ip)2938 ENDDO3109 ENDIF 3110 IF ( bc_dirichlet_r ) THEN 3111 ipr_init = ipr - 1 3112 icr_init = nxr + 1 3113 ENDIF 3114 IF ( bc_dirichlet_n ) THEN 3115 jpn_init = jpn - 1 3116 jcn_init = nyn + 1 3117 ENDIF 3118 ENDIF 3119 3120 child_array(:,:,:) = 0.0_wp 3121 3122 IF ( var == 'u' ) THEN 3123 3124 icfirst = ifl(ipl_init) 3125 iclast = ifl(ipr_init+1) - 1 3126 jcfirst = jfl(jps_init) 3127 jclast = jfu(jpn_init) 3128 DO ip = ipl_init, ipr_init 3129 DO jp = jps_init, jpn_init 3130 DO kp = 0, kct + 1 3131 3132 DO ic = ifl(ip), ifl(ip+1)-1 3133 DO jc = jfl(jp), jfu(jp) 3134 DO kc = kfl(kp), MIN( kfu(kp), nzt+1 ) 3135 child_array(kc,jc,ic) = parent_array(kp,jp,ip) 2939 3136 ENDDO 2940 3137 ENDDO 2941 2942 3138 ENDDO 2943 ENDDO 2944 ENDDO 2945 2946 ELSE IF ( var == 'v' ) THEN 2947 2948 icfirst = ifl(ipl_init) 2949 iclast = ifu(ipr_init) 2950 jcfirst = jfl(jps_init) 2951 jclast = jfl(jpn_init+1) - 1 2952 DO ip = ipl_init, ipr_init 2953 DO jp = jps_init, jpn_init 2954 DO kp = 0, kct + 1 2955 2956 DO ic = ifl(ip), ifu(ip) 2957 DO jc = jfl(jp), jfl(jp+1)-1 2958 DO kc = kfl(kp), MIN( kfu(kp), nzt+1 ) 2959 child_array(kc,jc,ic) = parent_array(kp,jp,ip) 2960 ENDDO 3139 3140 ENDDO 3141 ENDDO 3142 ENDDO 3143 3144 ELSE 
IF ( var == 'v' ) THEN 3145 3146 icfirst = ifl(ipl_init) 3147 iclast = ifu(ipr_init) 3148 jcfirst = jfl(jps_init) 3149 jclast = jfl(jpn_init+1) - 1 3150 DO ip = ipl_init, ipr_init 3151 DO jp = jps_init, jpn_init 3152 DO kp = 0, kct + 1 3153 3154 DO ic = ifl(ip), ifu(ip) 3155 DO jc = jfl(jp), jfl(jp+1)-1 3156 DO kc = kfl(kp), MIN( kfu(kp), nzt+1 ) 3157 child_array(kc,jc,ic) = parent_array(kp,jp,ip) 2961 3158 ENDDO 2962 3159 ENDDO 2963 2964 3160 ENDDO 2965 ENDDO 2966 ENDDO 2967 2968 ELSE IF ( var == 'w' ) THEN 2969 2970 icfirst = ifl(ipl_init) 2971 iclast = ifu(ipr_init) 2972 jcfirst = jfl(jps_init) 2973 jclast = jfu(jpn_init) 2974 DO ip = ipl_init, ipr_init 2975 DO jp = jps_init, jpn_init 2976 DO kp = 1, kct + 1 2977 2978 DO ic = ifl(ip), ifu(ip) 2979 DO jc = jfl(jp), jfu(jp) 2980 ! 2981 !-- Because the kp-loop for w starts from kp=1 instead of 0 2982 child_array(nzb,jc,ic) = 0.0_wp 2983 DO kc = kfu(kp-1)+1, kfu(kp) 2984 child_array(kc,jc,ic) = parent_array(kp,jp,ip) 2985 ENDDO 3161 3162 ENDDO 3163 ENDDO 3164 ENDDO 3165 3166 ELSE IF ( var == 'w' ) THEN 3167 3168 icfirst = ifl(ipl_init) 3169 iclast = ifu(ipr_init) 3170 jcfirst = jfl(jps_init) 3171 jclast = jfu(jpn_init) 3172 DO ip = ipl_init, ipr_init 3173 DO jp = jps_init, jpn_init 3174 DO kp = 1, kct + 1 3175 3176 DO ic = ifl(ip), ifu(ip) 3177 DO jc = jfl(jp), jfu(jp) 3178 ! 3179 !-- Because the kp-loop for w starts from kp=1 instead of 0 3180 child_array(nzb,jc,ic) = 0.0_wp 3181 DO kc = kfu(kp-1)+1, kfu(kp) 3182 child_array(kc,jc,ic) = parent_array(kp,jp,ip) 2986 3183 ENDDO 2987 3184 ENDDO 2988 2989 3185 ENDDO 2990 ENDDO 2991 ENDDO 2992 2993 ELSE ! 
scalars 2994 2995 icfirst = ifl(ipl_init) 2996 iclast = ifu(ipr_init) 2997 jcfirst = jfl(jps_init) 2998 jclast = jfu(jpn_init) 2999 DO ip = ipl_init, ipr_init 3000 DO jp = jps_init, jpn_init 3001 DO kp = 0, kct + 1 3002 3003 DO ic = ifl(ip), ifu(ip) 3004 DO jc = jfl(jp), jfu(jp) 3005 DO kc = kfl(kp), MIN( kfu(kp), nzt+1 ) 3006 child_array(kc,jc,ic) = parent_array(kp,jp,ip) 3007 ENDDO 3186 3187 ENDDO 3188 ENDDO 3189 ENDDO 3190 3191 ELSE ! Scalars 3192 3193 icfirst = ifl(ipl_init) 3194 iclast = ifu(ipr_init) 3195 jcfirst = jfl(jps_init) 3196 jclast = jfu(jpn_init) 3197 DO ip = ipl_init, ipr_init 3198 DO jp = jps_init, jpn_init 3199 DO kp = 0, kct + 1 3200 3201 DO ic = ifl(ip), ifu(ip) 3202 DO jc = jfl(jp), jfu(jp) 3203 DO kc = kfl(kp), MIN( kfu(kp), nzt+1 ) 3204 child_array(kc,jc,ic) = parent_array(kp,jp,ip) 3008 3205 ENDDO 3009 3206 ENDDO 3010 3011 3207 ENDDO 3012 ENDDO 3013 ENDDO 3014 3015 ENDIF ! var 3016 ! 3017 !-- If the number of grid points in child subdomain in x- or y-direction 3018 !-- (nxr - nxl + 1 and/or nyn - nys + 1) is not integer divisible by the grid spacing 3019 !-- ratio in its direction (igsr and/or jgsr), the above loops will return with 3020 !-- unfilled gaps in the initial fields. These gaps, if present, are filled here. 3021 IF ( icfirst > icl_init ) THEN 3022 DO ic = icl_init, icfirst - 1 3023 child_array(:,:,ic) = child_array(:,:,icfirst) 3024 ENDDO 3025 ENDIF 3026 IF ( iclast < icr_init ) THEN 3027 DO ic = iclast + 1, icr_init 3028 child_array(:,:,ic) = child_array(:,:,iclast) 3029 ENDDO 3030 ENDIF 3031 IF ( jcfirst > jcs_init ) THEN 3032 DO jc = jcs_init, jcfirst - 1 3033 child_array(:,jc,:) = child_array(:,jcfirst,:) 3034 ENDDO 3035 ENDIF 3036 IF ( jclast < jcn_init ) THEN 3037 DO jc = jclast + 1, jcn_init 3038 child_array(:,jc,:) = child_array(:,jclast,:) 3039 ENDDO 3040 ENDIF 3041 ! 3042 !-- Finally, make sure that also the redundant 2nd and 3rd ghost-node layers 3043 !-- including the corners are properly filled up. 
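The scalar branch above copies each parent-grid value onto all child cells belonging to that parent cell (a piecewise-constant, i.e. zeroth-order, transfer), and the subsequent `icfirst`/`iclast` treatment fills edge gaps that appear when the child subdomain size is not divisible by the grid-spacing ratio. A minimal 1-D Python sketch of this idea (hypothetical illustration only, not PALM code; `ifl`/`ifu` mimic the start/end child-index maps of the Fortran routine):

```python
# 1-D sketch of the piecewise-constant parent-to-child copy performed in
# pmci_interp_1sto_all, followed by the edge gap fill applied when the child
# subdomain size is not divisible by the grid-spacing ratio.

def interp_1sto_1d(parent, ifl, ifu, n_child):
    """Copy each parent value onto its child cells, then fill edge gaps."""
    child = [0.0] * n_child
    first = ifl[0]            # leftmost child index touched by the main loop
    last = ifu[-1]            # rightmost child index touched by the main loop
    for ip, val in enumerate(parent):
        # Every child cell ifl(ip)..ifu(ip) receives the parent value.
        for ic in range(ifl[ip], ifu[ip] + 1):
            child[ic] = val
    # Fill possible gaps at the subdomain edges by copying the nearest
    # initialized column, as in the icfirst/iclast treatment above.
    for ic in range(0, first):
        child[ic] = child[first]
    for ic in range(last + 1, n_child):
        child[ic] = child[last]
    return child
```

For example, with `parent = [1.0, 2.0, 3.0]` and child-index maps `ifl = [1, 3, 5]`, `ifu = [2, 4, 6]` on a 9-cell child line, cells 0, 7 and 8 fall outside the main loop and are filled from their nearest initialized neighbours.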
3044 IF ( nys == 0 ) THEN 3045 DO jc = -nbgp, jcb ! jcb = -2 if var == v, else jcb = -1 3046 child_array(0:nzt+1,jc,nxlg:nxrg) = child_array(0:nzt+1,jcbc,nxlg:nxrg) 3047 ENDDO 3048 ENDIF 3049 IF ( nyn == ny ) THEN 3050 DO jc = ny+2, ny+nbgp 3051 child_array(0:nzt+1,jc,nxlg:nxrg) = child_array(0:nzt+1,ny+1,nxlg:nxrg) 3052 ENDDO 3053 ENDIF 3054 IF ( nxl == 0 ) THEN 3055 DO ic = -nbgp, icb ! icb = -2 if var == u, else icb = -1 3056 child_array(0:nzt+1,nysg:nyng,ic) = child_array(0:nzt+1,nysg:nyng,icbc) 3057 ENDDO 3058 ENDIF 3059 IF ( nxr == nx ) THEN 3060 DO ic = nx+2, nx+nbgp 3061 child_array(0:nzt+1,nysg:nyng,ic) = child_array(0:nzt+1,nysg:nyng,nx+1) 3062 ENDDO 3063 ENDIF 3064 3065 END SUBROUTINE pmci_interp_1sto_all 3208 3209 ENDDO 3210 ENDDO 3211 ENDDO 3212 3213 ENDIF ! var 3214 ! 3215 !-- If the number of grid points in child subdomain in x- or y-direction 3216 !-- (nxr - nxl + 1 and/or nyn - nys + 1) is not integer divisible by the grid spacing ratio in its 3217 !-- direction (igsr and/or jgsr), the above loops will return with unfilled gaps in the initial 3218 !-- fields. These gaps, if present, are filled here. 3219 IF ( icfirst > icl_init ) THEN 3220 DO ic = icl_init, icfirst - 1 3221 child_array(:,:,ic) = child_array(:,:,icfirst) 3222 ENDDO 3223 ENDIF 3224 IF ( iclast < icr_init ) THEN 3225 DO ic = iclast + 1, icr_init 3226 child_array(:,:,ic) = child_array(:,:,iclast) 3227 ENDDO 3228 ENDIF 3229 IF ( jcfirst > jcs_init ) THEN 3230 DO jc = jcs_init, jcfirst - 1 3231 child_array(:,jc,:) = child_array(:,jcfirst,:) 3232 ENDDO 3233 ENDIF 3234 IF ( jclast < jcn_init ) THEN 3235 DO jc = jclast + 1, jcn_init 3236 child_array(:,jc,:) = child_array(:,jclast,:) 3237 ENDDO 3238 ENDIF 3239 ! 3240 !-- Finally, make sure that also the redundant 2nd and 3rd ghost-node layers including the corners 3241 !-- are properly filled up. 3242 IF ( nys == 0 ) THEN 3243 DO jc = -nbgp, jcb ! 
jcb = -2 if var == v, else jcb = -1 3244 child_array(0:nzt+1,jc,nxlg:nxrg) = child_array(0:nzt+1,jcbc,nxlg:nxrg) 3245 ENDDO 3246 ENDIF 3247 IF ( nyn == ny ) THEN 3248 DO jc = ny+2, ny+nbgp 3249 child_array(0:nzt+1,jc,nxlg:nxrg) = child_array(0:nzt+1,ny+1,nxlg:nxrg) 3250 ENDDO 3251 ENDIF 3252 IF ( nxl == 0 ) THEN 3253 DO ic = -nbgp, icb ! icb = -2 if var == u, else icb = -1 3254 child_array(0:nzt+1,nysg:nyng,ic) = child_array(0:nzt+1,nysg:nyng,icbc) 3255 ENDDO 3256 ENDIF 3257 IF ( nxr == nx ) THEN 3258 DO ic = nx+2, nx+nbgp 3259 child_array(0:nzt+1,nysg:nyng,ic) = child_array(0:nzt+1,nysg:nyng,nx+1) 3260 ENDDO 3261 ENDIF 3262 3263 END SUBROUTINE pmci_interp_1sto_all 3066 3264 3067 3265 #endif … … 3069 3267 3070 3268 3071 3269 !--------------------------------------------------------------------------------------------------! 3270 ! Description: 3271 ! ------------ 3272 !> @Todo: Missing subroutine description. 3273 !--------------------------------------------------------------------------------------------------! 3072 3274 SUBROUTINE pmci_check_setting_mismatches 3073 3275 ! 3074 !-- Check for mismatches between settings of root and child variables 3075 !-- (e.g., all children have to follow the end_time settings of the root model). 3076 !-- The root model overwrites variables in the other models, so these variables 3077 !-- only need to be set once in file PARIN. 3276 !-- Check for mismatches between settings of root and child variables (e.g., all children have to 3277 !-- follow the end_time settings of the root model). The root model overwrites variables in the 3278 !-- other models, so these variables only need to be set once in file PARIN. 3078 3279 3079 3280 #if defined( __parallel ) 3080 3281 IMPLICIT NONE 3081 3282 3082 INTEGER :: ierr !< MPI error code 3083 3084 REAL(wp) :: dt_restart_root !< 3085 REAL(wp) :: end_time_root !< 3086 REAL(wp) :: restart_time_root !< 3087 REAL(wp) :: time_restart_root !< 3088 3089 ! 3090 !-- Check the time to be simulated. 
3091 !-- Here, and in the following, the root process communicates the respective 3092 !-- variable to all others, and its value will then be compared with the local 3093 !-- values. 3283 INTEGER :: ierr !< MPI error code 3284 3285 REAL(wp) :: dt_restart_root !< 3286 REAL(wp) :: end_time_root !< 3287 REAL(wp) :: restart_time_root !< 3288 REAL(wp) :: time_restart_root !< 3289 3290 ! 3291 !-- Check the time to be simulated. Here, and in the following, the root process communicates the 3292 !-- respective variable to all others, and its value will then be compared with the local values. 3094 3293 IF ( pmc_is_rootmodel() ) end_time_root = end_time 3095 3294 CALL MPI_BCAST( end_time_root, 1, MPI_REAL, 0, comm_world_nesting, ierr ) … … 3097 3296 IF ( .NOT. pmc_is_rootmodel() ) THEN 3098 3297 IF ( end_time /= end_time_root ) THEN 3099 WRITE( message_string, * ) 'mismatch between root model and ', & 3100 'child settings:& end_time(root) = ', end_time_root, & 3101 '& end_time(child) = ', end_time, '& child value is set', & 3102 ' to root value' 3103 CALL message( 'pmci_check_setting_mismatches', 'PA0419', 0, 1, 0, 6, & 3104 0 ) 3298 WRITE( message_string, * ) 'mismatch between root model and child settings:& ' // & 3299 'end_time(root) = ', end_time_root, & 3300 '& end_time(child) = ', end_time, '& child value is set', & 3301 ' to root value' 3302 CALL message( 'pmci_check_setting_mismatches', 'PA0419', 0, 1, 0, 6, 0 ) 3105 3303 end_time = end_time_root 3106 3304 ENDIF … … 3113 3311 IF ( .NOT. 
pmc_is_rootmodel() ) THEN 3114 3312 IF ( restart_time /= restart_time_root ) THEN 3115 WRITE( message_string, * ) 'mismatch between root model and ', & 3116 'child settings: & restart_time(root) = ', restart_time_root, & 3117 '& restart_time(child) = ', restart_time, '& child ', & 3118 'value is set to root value' 3119 CALL message( 'pmci_check_setting_mismatches', 'PA0419', 0, 1, 0, 6, & 3120 0 ) 3313 WRITE( message_string, * ) 'mismatch between root model and child settings: & ' // & 3314 'restart_time(root) = ', restart_time_root, & 3315 '& restart_time(child) = ', restart_time, '& child ', & 3316 'value is set to root value' 3317 CALL message( 'pmci_check_setting_mismatches', 'PA0419', 0, 1, 0, 6, 0 ) 3121 3318 restart_time = restart_time_root 3122 3319 ENDIF … … 3129 3326 IF ( .NOT. pmc_is_rootmodel() ) THEN 3130 3327 IF ( dt_restart /= dt_restart_root ) THEN 3131 WRITE( message_string, * ) 'mismatch between root model and ', & 3132 'child settings: & dt_restart(root) = ', dt_restart_root, & 3133 '& dt_restart(child) = ', dt_restart, '& child ', & 3134 'value is set to root value' 3135 CALL message( 'pmci_check_setting_mismatches', 'PA0419', 0, 1, 0, 6, & 3136 0 ) 3328 WRITE( message_string, * ) 'mismatch between root model and ', & 3329 'child settings: & dt_restart(root) = ', dt_restart_root, & 3330 '& dt_restart(child) = ', dt_restart, '& child ', & 3331 'value is set to root value' 3332 CALL message( 'pmci_check_setting_mismatches', 'PA0419', 0, 1, 0, 6, 0 ) 3137 3333 dt_restart = dt_restart_root 3138 3334 ENDIF … … 3145 3341 IF ( .NOT. 
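The mismatch checks in `pmci_check_setting_mismatches` all follow one pattern: the root model broadcasts its value (`MPI_BCAST` from rank 0 of `comm_world_nesting`), each child compares it with its local value, issues a message on mismatch, and adopts the root value. A minimal Python sketch of that reconciliation step, with the MPI broadcast replaced by an ordinary argument for illustration (the function name is hypothetical, not part of PALM):

```python
def reconcile_with_root(name, root_value, local_value, is_root):
    """Mimic one variable check of pmci_check_setting_mismatches:
    a child model adopts the root model's value and reports a mismatch;
    the root model (and an agreeing child) keeps its value silently."""
    if is_root or local_value == root_value:
        return local_value, None
    warning = (f"mismatch between root model and child settings: "
               f"{name}(root) = {root_value}, {name}(child) = {local_value}, "
               f"child value is set to root value")
    return root_value, warning

# A child whose end_time disagrees with the root's setting:
value, warning = reconcile_with_root('end_time', 3600.0, 1800.0, is_root=False)
print(value)    # child is forced to the root value 3600.0
print(warning is not None)
```

This is why `end_time`, `restart_time`, `dt_restart`, and `time_restart` only need to be set once in the root model's PARIN file: any deviating child setting is silently overwritten after the warning.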
pmc_is_rootmodel() ) THEN 3146 3342 IF ( time_restart /= time_restart_root ) THEN 3147 WRITE( message_string, * ) 'mismatch between root model and ', & 3148 'child settings: & time_restart(root) = ', time_restart_root, & 3149 '& time_restart(child) = ', time_restart, '& child ', & 3150 'value is set to root value' 3151 CALL message( 'pmci_check_setting_mismatches', 'PA0419', 0, 1, 0, 6, & 3152 0 ) 3343 WRITE( message_string, * ) 'mismatch between root model and child settings: & ' // & 3344 'time_restart(root) = ', time_restart_root, & 3345 '& time_restart(child) = ', time_restart, '& child ', & 3346 'value is set to root value' 3347 CALL message( 'pmci_check_setting_mismatches', 'PA0419', 0, 1, 0, 6, 0 ) 3153 3348 time_restart = time_restart_root 3154 3349 ENDIF … … 3160 3355 3161 3356 3162 3357 !--------------------------------------------------------------------------------------------------! 3358 ! Description: 3359 ! ------------ 3360 !> @Todo: Missing subroutine description. 3361 !--------------------------------------------------------------------------------------------------! 3163 3362 SUBROUTINE pmci_synchronize 3164 3363 3165 3364 #if defined( __parallel ) 3166 3365 ! 3167 !-- Unify the time steps for each model and synchronize using 3168 !-- MPI_ALLREDUCE with the MPI_MIN operator over all processes using 3169 !-- the global communicator MPI_COMM_WORLD. 3170 3366 !-- Unify the time steps for each model and synchronize using MPI_ALLREDUCE with the MPI_MIN 3367 !-- operator over all processes using the global communicator MPI_COMM_WORLD. 
3368 3171 3369 IMPLICIT NONE 3172 3370 3173 INTEGER(iwp) :: ierr !< MPI error code 3174 REAL(wp) :: dtl !< Local time step of the current process 3175 REAL(wp) :: dtg !< Global time step defined as the global minimum of dtl of all processes 3371 INTEGER(iwp) :: ierr !< MPI error code 3372 3373 REAL(wp) :: dtl !< Local time step of the current process 3374 REAL(wp) :: dtg !< Global time step defined as the global minimum of dtl of all processes 3176 3375 3177 3376 3178 3377 IF ( debug_output_timestep ) CALL debug_message( 'pmci_synchronize', 'start' ) 3179 3378 3180 3379 dtl = dt_3d 3181 3380 CALL MPI_ALLREDUCE( dtl, dtg, 1, MPI_REAL, MPI_MIN, MPI_COMM_WORLD, ierr ) … … 3186 3385 #endif 3187 3386 END SUBROUTINE pmci_synchronize 3188 3189 3190 3387 3388 3389 !--------------------------------------------------------------------------------------------------! 3390 ! Description: 3391 ! ------------ 3392 !> @Todo: Missing subroutine description. 3393 !--------------------------------------------------------------------------------------------------! 3191 3394 SUBROUTINE pmci_set_swaplevel( swaplevel ) 3192 3395 3193 3396 ! 3194 !-- After each Runge-Kutta sub-timestep, alternately set buffer one or buffer 3195 !-- two active 3397 !-- After each Runge-Kutta sub-timestep, alternately set buffer one or buffer two active 3196 3398 3197 3399 IMPLICIT NONE … … 3199 3401 INTEGER(iwp), INTENT(IN) :: swaplevel !< swaplevel (1 or 2) of PALM's timestep 3200 3402 3201 INTEGER(iwp) :: child_id 3202 INTEGER(iwp) :: m !< Loop index over all children of the current parent3403 INTEGER(iwp) :: child_id !< Child id of the child number m 3404 INTEGER(iwp) :: m !< Loop index over all children of the current parent 3203 3405 3204 3406 #if defined( __parallel ) … … 3211 3413 3212 3414 3213 3214 SUBROUTINE pmci_datatrans( local_nesting_mode ) 3215 ! 
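The `MPI_ALLREDUCE( dtl, dtg, 1, MPI_REAL, MPI_MIN, MPI_COMM_WORLD, ierr )` call in `pmci_synchronize` is equivalent to every process receiving the minimum over all local time steps, so the whole nest advances with one common `dt_3d`. A pure-Python stand-in for that collective operation (not an MPI call, just the reduction semantics):

```python
def synchronize_timestep(local_timesteps):
    """Stand-in for an all-reduce with the MIN operator over MPI_COMM_WORLD:
    every participating model receives the global minimum of all local
    time steps, as pmci_synchronize does for dt_3d."""
    dtg = min(local_timesteps)          # global minimum of all local dtl values
    return [dtg] * len(local_timesteps)  # each rank ends up with the same dt

# Root plus two children with differing locally admissible time steps:
print(synchronize_timestep([2.0, 0.5, 1.25]))  # all proceed with dt = 0.5
```

Using the global communicator rather than a per-nest one is essential here: the most restrictive (finest-grid) model dictates the time step for every model in the hierarchy.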
3216 !-- This subroutine controls the nesting according to the nestpar 3217 !-- parameter nesting_mode (two-way (default) or one-way) and the 3218 !-- order of anterpolations according to the nestpar parameter 3219 !-- nesting_datatransfer_mode (cascade, overlap or mixed (default)). 3220 !-- Although nesting_mode is a variable of this model, pass it as 3221 !-- an argument to allow for example to force one-way initialization 3222 !-- phase. 3223 !-- Note that interpolation ( parent_to_child ) must always be carried 3224 !-- out before anterpolation ( child_to_parent ). 3415 !--------------------------------------------------------------------------------------------------! 3416 ! Description: 3417 ! ------------ 3418 !> @Todo: Missing subroutine description. 3419 !--------------------------------------------------------------------------------------------------! 3420 SUBROUTINE pmci_datatrans( local_nesting_mode ) 3421 ! 3422 !-- This subroutine controls the nesting according to the nestpar parameter nesting_mode (two-way 3423 !-- (default) or one-way) and the order of anterpolations according to the nestpar parameter 3424 !-- nesting_datatransfer_mode (cascade, overlap or mixed (default)). Although nesting_mode is a 3425 !-- variable of this model, pass it as an argument to allow for example to force one-way 3426 !-- initialization phase. Note that interpolation ( parent_to_child ) must always be carried out 3427 !-- before anterpolation ( child_to_parent ). 3225 3428 3226 3429 IMPLICIT NONE … … 3228 3431 CHARACTER(LEN=*), INTENT(IN) :: local_nesting_mode !< Nesting mode: 'one-way', 'two-way' or 'vertical' 3229 3432 3230 #if defined( __parallel ) 3433 #if defined( __parallel ) 3231 3434 3232 3435 IF ( debug_output_timestep ) CALL debug_message( 'pmci_datatrans', 'start' ) … … 3274 3477 3275 3478 3479 !--------------------------------------------------------------------------------------------------! 3480 ! Description: 3481 ! 
------------ 3482 !> @Todo: Missing subroutine description. 3483 !--------------------------------------------------------------------------------------------------! 3276 3484 SUBROUTINE pmci_parent_datatrans( direction ) 3277 3485 3278 3486 IMPLICIT NONE 3279 3487 3280 INTEGER(iwp), INTENT(IN) :: direction 3488 INTEGER(iwp), INTENT(IN) :: direction !< Direction of the data transfer: 'parent_to_child' or 'child_to_parent' 3281 3489 3282 3490 #if defined( __parallel ) 3283 INTEGER(iwp) :: child_id 3284 INTEGER(iwp) :: i 3285 INTEGER(iwp) :: j 3286 INTEGER(iwp) :: k 3287 INTEGER(iwp) :: m !< Loop index over all children of the current parent3491 INTEGER(iwp) :: child_id !< Child id of the child number m 3492 INTEGER(iwp) :: i !< Parent-grid index in x-direction 3493 INTEGER(iwp) :: j !< Parent-grid index in y-direction 3494 INTEGER(iwp) :: k !< Parent-grid index in z-direction 3495 INTEGER(iwp) :: m !< Loop index over all children of the current parent 3288 3496 3289 3497 … … 3306 3514 ! 3307 3515 !-- Inside buildings/topography reset velocities back to zero. 3308 !-- Scalars (pt, q, s, km, kh, p, sa, ...) are ignored at 3309 !-- present, maybe revise later. 3310 !-- Resetting of e is removed as unnecessary since e is not 3311 !-- anterpolated, and as incorrect since it overran the default 3312 !-- Neumann condition (bc_e_b). 3516 !-- Scalars (pt, q, s, km, kh, p, sa, ...) are ignored at present, maybe revise later. 3517 !-- Resetting of e is removed as unnecessary since e is not interpolated, and as incorrect 3518 !-- since it overran the default Neumann condition (bc_e_b). 3313 3519 DO i = nxlg, nxrg 3314 3520 DO j = nysg, nyng … … 3318 3524 w(k,j,i) = MERGE( w(k,j,i), 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 3 ) ) 3319 3525 ! 3320 !-- TO_DO: zero setting of temperature within topography creates 3321 !-- wrong results 3526 !-- TO_DO: zero setting of temperature within topography creates wrong results 3322 3527 ! pt(nzb:nzb_s_inner(j,i),j,i) = 0.0_wp 3323 3528 ! 
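The topography reset in `pmci_parent_datatrans` uses the pattern `MERGE( w(k,j,i), 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 3 ) )`: the velocity is kept where the flag bit marks atmosphere and set to zero inside buildings/terrain. The same bit test in a short Python sketch; bit 3 for `w` is taken from the code above, while the exact bit positions for `u` and `v` are an assumption for illustration:

```python
def mask_velocity(value, wall_flags, bit):
    """Equivalent of Fortran MERGE(value, 0.0, BTEST(wall_flags, bit)):
    keep the value where the topography flag bit is set (atmosphere),
    otherwise reset it to zero (inside buildings/terrain)."""
    return value if wall_flags & (1 << bit) else 0.0

atmosphere_flags = 0b1000  # bit 3 set: grid point is atmosphere for w
building_flags = 0b0000    # no bits set: grid point lies inside topography
print(mask_velocity(5.0, atmosphere_flags, 3))  # kept: 5.0
print(mask_velocity(5.0, building_flags, 3))    # reset to 0.0
```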
IF ( humidity .OR. passive_scalar ) THEN … … 3335 3540 3336 3541 3337 3542 !--------------------------------------------------------------------------------------------------! 3543 ! Description: 3544 ! ------------ 3545 !> @Todo: Missing subroutine description. 3546 !--------------------------------------------------------------------------------------------------! 3338 3547 SUBROUTINE pmci_child_datatrans( direction ) 3339 3548 … … 3344 3553 #if defined( __parallel ) 3345 3554 3346 REAL(wp), DIMENSION(1) :: dtl 3555 REAL(wp), DIMENSION(1) :: dtl !< Time step size 3347 3556 3348 3557 … … 3351 3560 3352 3561 IF ( direction == parent_to_child ) THEN 3353 3562 3354 3563 CALL cpu_log( log_point_s(73), 'pmc child recv', 'start' ) 3355 3564 CALL pmc_c_getbuffer( ) … … 3359 3568 CALL pmci_interpolation 3360 3569 CALL cpu_log( log_point_s(75), 'pmc interpolation', 'stop' ) 3361 3570 3362 3571 ELSE 3363 3572 ! … … 3376 3585 CONTAINS 3377 3586 3378 3379 SUBROUTINE pmci_interpolation 3380 3381 ! 3382 !-- A wrapper routine for all interpolation actions 3383 3384 IMPLICIT NONE 3385 3386 INTEGER(iwp) :: ibgp !< Index running over the nbgp boundary ghost points in i-direction 3387 INTEGER(iwp) :: jbgp !< Index running over the nbgp boundary ghost points in j-direction 3388 INTEGER(iwp) :: lb !< Running index for aerosol size bins 3389 INTEGER(iwp) :: lc !< Running index for aerosol mass bins 3390 INTEGER(iwp) :: lg !< Running index for salsa gases 3391 INTEGER(iwp) :: n !< Running index for number of chemical species 3392 3393 ! 3394 !-- In case of vertical nesting no interpolation is needed for the 3395 !-- horizontal boundaries 3396 IF ( nesting_mode /= 'vertical' ) THEN 3397 ! 
3398 !-- Left border pe: 3399 IF ( bc_dirichlet_l ) THEN 3400 3401 CALL pmci_interp_1sto_lr( u, uc, kcto, jflo, jfuo, kflo, kfuo, 'l', 'u' ) 3402 CALL pmci_interp_1sto_lr( v, vc, kcto, jflv, jfuv, kflo, kfuo, 'l', 'v' ) 3403 CALL pmci_interp_1sto_lr( w, wc, kctw, jflo, jfuo, kflw, kfuw, 'l', 'w' ) 3404 3405 IF ( ( rans_mode_parent .AND. rans_mode ) .OR. & 3406 ( .NOT. rans_mode_parent .AND. .NOT. rans_mode .AND. & 3407 .NOT. constant_diffusion ) ) THEN 3408 ! CALL pmci_interp_1sto_lr( e, ec, kcto, jflo, jfuo, kflo, kfuo, 'l', 'e' ) 3409 ! 3410 !-- Interpolation of e is replaced by the Neumann condition. 3411 DO ibgp = -nbgp, -1 3412 e(nzb:nzt,nys:nyn,ibgp) = e(nzb:nzt,nys:nyn,0) 3413 ENDDO 3414 3587 3588 !--------------------------------------------------------------------------------------------------! 3589 ! Description: 3590 ! ------------ 3591 !> @Todo: Missing subroutine description. 3592 !--------------------------------------------------------------------------------------------------! 3593 SUBROUTINE pmci_interpolation 3594 3595 ! 3596 !-- A wrapper routine for all interpolation actions 3597 3598 IMPLICIT NONE 3599 3600 INTEGER(iwp) :: ibgp !< Index running over the nbgp boundary ghost points in i-direction 3601 INTEGER(iwp) :: jbgp !< Index running over the nbgp boundary ghost points in j-direction 3602 INTEGER(iwp) :: lb !< Running index for aerosol size bins 3603 INTEGER(iwp) :: lc !< Running index for aerosol mass bins 3604 INTEGER(iwp) :: lg !< Running index for salsa gases 3605 INTEGER(iwp) :: n !< Running index for number of chemical species 3606 3607 ! 3608 !-- In case of vertical nesting no interpolation is needed for the horizontal boundaries 3609 IF ( nesting_mode /= 'vertical' ) THEN 3610 ! 
3611 !-- Left border pe: 3612 IF ( bc_dirichlet_l ) THEN 3613 3614 CALL pmci_interp_1sto_lr( u, uc, kcto, jflo, jfuo, kflo, kfuo, 'l', 'u' ) 3615 CALL pmci_interp_1sto_lr( v, vc, kcto, jflv, jfuv, kflo, kfuo, 'l', 'v' ) 3616 CALL pmci_interp_1sto_lr( w, wc, kctw, jflo, jfuo, kflw, kfuw, 'l', 'w' ) 3617 3618 IF ( ( rans_mode_parent .AND. rans_mode ) .OR. & 3619 ( .NOT. rans_mode_parent .AND. .NOT. rans_mode .AND. & 3620 .NOT. constant_diffusion ) ) THEN 3621 ! CALL pmci_interp_1sto_lr( e, ec, kcto, jflo, jfuo, kflo, kfuo, 'l', 'e' ) 3622 ! 3623 !-- Interpolation of e is replaced by the Neumann condition. 3624 DO ibgp = -nbgp, -1 3625 e(nzb:nzt,nys:nyn,ibgp) = e(nzb:nzt,nys:nyn,0) 3626 ENDDO 3627 3628 ENDIF 3629 3630 IF ( rans_mode_parent .AND. rans_mode .AND. rans_tke_e ) THEN 3631 CALL pmci_interp_1sto_lr( diss, dissc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3632 ENDIF 3633 3634 IF ( .NOT. neutral ) THEN 3635 CALL pmci_interp_1sto_lr( pt, ptc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3636 ENDIF 3637 3638 IF ( humidity ) THEN 3639 3640 CALL pmci_interp_1sto_lr( q, q_c, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3641 3642 IF ( bulk_cloud_model .AND. microphysics_morrison ) THEN 3643 CALL pmci_interp_1sto_lr( qc, qcc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3644 CALL pmci_interp_1sto_lr( nc, ncc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3415 3645 ENDIF 3416 3646 3417 IF ( rans_mode_parent .AND. rans_mode .AND. rans_tke_e ) THEN 3418 CALL pmci_interp_1sto_lr( diss, dissc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3647 IF ( bulk_cloud_model .AND. microphysics_seifert ) THEN 3648 CALL pmci_interp_1sto_lr( qr, qrc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3649 CALL pmci_interp_1sto_lr( nr, nrc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3419 3650 ENDIF 3420 3651 3421 IF ( .NOT. 
neutral ) THEN 3422 CALL pmci_interp_1sto_lr( pt, ptc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3423 ENDIF 3424 3425 IF ( humidity ) THEN 3426 3427 CALL pmci_interp_1sto_lr( q, q_c, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3428 3429 IF ( bulk_cloud_model .AND. microphysics_morrison ) THEN 3430 CALL pmci_interp_1sto_lr( qc, qcc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3431 CALL pmci_interp_1sto_lr( nc, ncc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3432 ENDIF 3433 3434 IF ( bulk_cloud_model .AND. microphysics_seifert ) THEN 3435 CALL pmci_interp_1sto_lr( qr, qrc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3436 CALL pmci_interp_1sto_lr( nr, nrc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3437 ENDIF 3438 3439 ENDIF 3440 3441 IF ( passive_scalar ) THEN 3442 CALL pmci_interp_1sto_lr( s, sc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3443 ENDIF 3444 3445 IF ( air_chemistry .AND. nesting_chem ) THEN 3446 DO n = 1, nspec 3447 CALL pmci_interp_1sto_lr( chem_species(n)%conc, chem_spec_c(:,:,:,n), & 3448 kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3652 ENDIF 3653 3654 IF ( passive_scalar ) THEN 3655 CALL pmci_interp_1sto_lr( s, sc, kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3656 ENDIF 3657 3658 IF ( air_chemistry .AND. nesting_chem ) THEN 3659 DO n = 1, nspec 3660 CALL pmci_interp_1sto_lr( chem_species(n)%conc, chem_spec_c(:,:,:,n), & 3661 kcto, jflo, jfuo, kflo, kfuo, 'l', 's' ) 3662 ENDDO 3663 ENDIF 3664 3665 IF ( salsa .AND. nesting_salsa ) THEN 3666 DO lb = 1, nbins_aerosol 3667 CALL pmci_interp_1sto_lr( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb), & 3668 kcto, jflo, jfuo, kflo, kfuo, 'l', 's') 3669 ENDDO 3670 DO lc = 1, nbins_aerosol * ncomponents_mass 3671 CALL pmci_interp_1sto_lr( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc), & 3672 kcto, jflo, jfuo, kflo, kfuo, 'l', 's') 3673 ENDDO 3674 IF ( .NOT. 
salsa_gases_from_chem ) THEN 3675 DO lg = 1, ngases_salsa 3676 CALL pmci_interp_1sto_lr( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg), & 3677 kcto, jflo, jfuo, kflo, kfuo, 'l', 's') 3449 3678 ENDDO 3450 3679 ENDIF 3451 3452 IF ( salsa .AND. nesting_salsa ) THEN 3453 DO lb = 1, nbins_aerosol 3454 CALL pmci_interp_1sto_lr( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb), & 3455 kcto, jflo, jfuo, kflo, kfuo, 'l', 's') 3456 ENDDO 3457 DO lc = 1, nbins_aerosol * ncomponents_mass 3458 CALL pmci_interp_1sto_lr( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc), & 3459 kcto, jflo, jfuo, kflo, kfuo, 'l', 's') 3460 ENDDO 3461 IF ( .NOT. salsa_gases_from_chem ) THEN 3462 DO lg = 1, ngases_salsa 3463 CALL pmci_interp_1sto_lr( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg), & 3464 kcto, jflo, jfuo, kflo, kfuo, 'l', 's') 3465 ENDDO 3466 ENDIF 3680 ENDIF 3681 3682 ENDIF 3683 ! 3684 !-- Right border pe 3685 IF ( bc_dirichlet_r ) THEN 3686 3687 CALL pmci_interp_1sto_lr( u, uc, kcto, jflo, jfuo, kflo, kfuo, 'r', 'u' ) 3688 CALL pmci_interp_1sto_lr( v, vc, kcto, jflv, jfuv, kflo, kfuo, 'r', 'v' ) 3689 CALL pmci_interp_1sto_lr( w, wc, kctw, jflo, jfuo, kflw, kfuw, 'r', 'w' ) 3690 3691 IF ( ( rans_mode_parent .AND. rans_mode ) .OR. & 3692 ( .NOT. rans_mode_parent .AND. .NOT. rans_mode .AND. & 3693 .NOT. constant_diffusion ) ) THEN 3694 ! CALL pmci_interp_1sto_lr( e, ec, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 'r', 'e' ) 3695 ! 3696 !-- Interpolation of e is replaced by the Neumann condition. 3697 DO ibgp = nx+1, nx+nbgp 3698 e(nzb:nzt,nys:nyn,ibgp) = e(nzb:nzt,nys:nyn,nx) 3699 ENDDO 3700 ENDIF 3701 3702 IF ( rans_mode_parent .AND. rans_mode .AND. rans_tke_e ) THEN 3703 CALL pmci_interp_1sto_lr( diss, dissc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3704 ENDIF 3705 3706 IF ( .NOT. 
neutral ) THEN 3707 CALL pmci_interp_1sto_lr( pt, ptc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3708 ENDIF 3709 3710 IF ( humidity ) THEN 3711 CALL pmci_interp_1sto_lr( q, q_c, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3712 3713 IF ( bulk_cloud_model .AND. microphysics_morrison ) THEN 3714 CALL pmci_interp_1sto_lr( qc, qcc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3715 CALL pmci_interp_1sto_lr( nc, ncc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3467 3716 ENDIF 3468 3717 3718 IF ( bulk_cloud_model .AND. microphysics_seifert ) THEN 3719 CALL pmci_interp_1sto_lr( qr, qrc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3720 CALL pmci_interp_1sto_lr( nr, nrc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3721 ENDIF 3722 3469 3723 ENDIF 3470 ! 3471 !-- Right border pe 3472 IF ( bc_dirichlet_r ) THEN 3473 3474 CALL pmci_interp_1sto_lr( u, uc, kcto, jflo, jfuo, kflo, kfuo, 'r', 'u' ) 3475 CALL pmci_interp_1sto_lr( v, vc, kcto, jflv, jfuv, kflo, kfuo, 'r', 'v' ) 3476 CALL pmci_interp_1sto_lr( w, wc, kctw, jflo, jfuo, kflw, kfuw, 'r', 'w' ) 3477 3478 IF ( ( rans_mode_parent .AND. rans_mode ) .OR. & 3479 ( .NOT. rans_mode_parent .AND. .NOT. rans_mode .AND. & 3480 .NOT. constant_diffusion ) ) THEN 3481 ! CALL pmci_interp_1sto_lr( e, ec, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, 'r', 'e' ) 3482 ! 3483 !-- Interpolation of e is replaced by the Neumann condition. 3484 DO ibgp = nx+1, nx+nbgp 3485 e(nzb:nzt,nys:nyn,ibgp) = e(nzb:nzt,nys:nyn,nx) 3724 3725 IF ( passive_scalar ) THEN 3726 CALL pmci_interp_1sto_lr( s, sc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3727 ENDIF 3728 3729 IF ( air_chemistry .AND. nesting_chem ) THEN 3730 DO n = 1, nspec 3731 CALL pmci_interp_1sto_lr( chem_species(n)%conc, chem_spec_c(:,:,:,n), & 3732 kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3733 ENDDO 3734 ENDIF 3735 3736 IF ( salsa .AND. 
nesting_salsa ) THEN 3737 DO lb = 1, nbins_aerosol 3738 CALL pmci_interp_1sto_lr( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb), & 3739 kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3740 ENDDO 3741 DO lc = 1, nbins_aerosol * ncomponents_mass 3742 CALL pmci_interp_1sto_lr( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc), & 3743 kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3744 ENDDO 3745 IF ( .NOT. salsa_gases_from_chem ) THEN 3746 DO lg = 1, ngases_salsa 3747 CALL pmci_interp_1sto_lr( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg), & 3748 kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3486 3749 ENDDO 3487 3750 ENDIF 3488 3489 IF ( rans_mode_parent .AND. rans_mode .AND. rans_tke_e ) THEN 3490 CALL pmci_interp_1sto_lr( diss, dissc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3751 ENDIF 3752 3753 ENDIF 3754 ! 3755 !-- South border pe 3756 IF ( bc_dirichlet_s ) THEN 3757 3758 CALL pmci_interp_1sto_sn( v, vc, kcto, iflo, ifuo, kflo, kfuo, 's', 'v' ) 3759 CALL pmci_interp_1sto_sn( w, wc, kctw, iflo, ifuo, kflw, kfuw, 's', 'w' ) 3760 CALL pmci_interp_1sto_sn( u, uc, kcto, iflu, ifuu, kflo, kfuo, 's', 'u' ) 3761 3762 IF ( ( rans_mode_parent .AND. rans_mode ) .OR. & 3763 ( .NOT. rans_mode_parent .AND. .NOT. rans_mode .AND. & 3764 .NOT. constant_diffusion ) ) THEN 3765 ! CALL pmci_interp_1sto_sn( e, ec, kcto, iflo, ifuo, kflo, kfuo, 's', 'e' ) 3766 ! 3767 !-- Interpolation of e is replaced by the Neumann condition. 3768 DO jbgp = -nbgp, -1 3769 e(nzb:nzt,jbgp,nxl:nxr) = e(nzb:nzt,0,nxl:nxr) 3770 ENDDO 3771 ENDIF 3772 3773 IF ( rans_mode_parent .AND. rans_mode .AND. rans_tke_e ) THEN 3774 CALL pmci_interp_1sto_sn( diss, dissc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3775 ENDIF 3776 3777 IF ( .NOT. neutral ) THEN 3778 CALL pmci_interp_1sto_sn( pt, ptc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3779 ENDIF 3780 3781 IF ( humidity ) THEN 3782 CALL pmci_interp_1sto_sn( q, q_c, kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3783 3784 IF ( bulk_cloud_model .AND. 
microphysics_morrison ) THEN 3785 CALL pmci_interp_1sto_sn( qc, qcc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3786 CALL pmci_interp_1sto_sn( nc, ncc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3491 3787 ENDIF 3492 3788 3493 IF ( .NOT. neutral ) THEN 3494 CALL pmci_interp_1sto_lr( pt, ptc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3789 IF ( bulk_cloud_model .AND. microphysics_seifert ) THEN 3790 CALL pmci_interp_1sto_sn( qr, qrc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3791 CALL pmci_interp_1sto_sn( nr, nrc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3495 3792 ENDIF 3496 3793 3497 IF ( humidity ) THEN 3498 CALL pmci_interp_1sto_lr( q, q_c, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3499 3500 IF ( bulk_cloud_model .AND. microphysics_morrison ) THEN 3501 CALL pmci_interp_1sto_lr( qc, qcc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3502 CALL pmci_interp_1sto_lr( nc, ncc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3503 ENDIF 3504 3505 IF ( bulk_cloud_model .AND. microphysics_seifert ) THEN 3506 CALL pmci_interp_1sto_lr( qr, qrc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3507 CALL pmci_interp_1sto_lr( nr, nrc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3508 ENDIF 3509 3510 ENDIF 3511 3512 IF ( passive_scalar ) THEN 3513 CALL pmci_interp_1sto_lr( s, sc, kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3514 ENDIF 3515 3516 IF ( air_chemistry .AND. nesting_chem ) THEN 3517 DO n = 1, nspec 3518 CALL pmci_interp_1sto_lr( chem_species(n)%conc, chem_spec_c(:,:,:,n), & 3519 kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3794 ENDIF 3795 3796 IF ( passive_scalar ) THEN 3797 CALL pmci_interp_1sto_sn( s, sc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3798 ENDIF 3799 3800 IF ( air_chemistry .AND. nesting_chem ) THEN 3801 DO n = 1, nspec 3802 CALL pmci_interp_1sto_sn( chem_species(n)%conc, chem_spec_c(:,:,:,n), & 3803 kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3804 ENDDO 3805 ENDIF 3806 3807 IF ( salsa .AND. 
nesting_salsa ) THEN 3808 DO lb = 1, nbins_aerosol 3809 CALL pmci_interp_1sto_sn( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb), & 3810 kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3811 ENDDO 3812 DO lc = 1, nbins_aerosol * ncomponents_mass 3813 CALL pmci_interp_1sto_sn( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc), & 3814 kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3815 ENDDO 3816 IF ( .NOT. salsa_gases_from_chem ) THEN 3817 DO lg = 1, ngases_salsa 3818 CALL pmci_interp_1sto_sn( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg), & 3819 kcto, iflo, ifuo, kflo, kfuo, 's', 's' ) 3520 3820 ENDDO 3521 3821 ENDIF 3522 3523 IF ( salsa .AND. nesting_salsa ) THEN 3524 DO lb = 1, nbins_aerosol 3525 CALL pmci_interp_1sto_lr( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb), & 3526 kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3527 ENDDO 3528 DO lc = 1, nbins_aerosol * ncomponents_mass 3529 CALL pmci_interp_1sto_lr( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc), & 3530 kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3531 ENDDO 3532 IF ( .NOT. salsa_gases_from_chem ) THEN 3533 DO lg = 1, ngases_salsa 3534 CALL pmci_interp_1sto_lr( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg), & 3535 kcto, jflo, jfuo, kflo, kfuo, 'r', 's' ) 3536 ENDDO 3537 ENDIF 3822 ENDIF 3823 3824 ENDIF 3825 ! 3826 !-- North border pe 3827 IF ( bc_dirichlet_n ) THEN 3828 3829 CALL pmci_interp_1sto_sn( v, vc, kcto, iflo, ifuo, kflo, kfuo, 'n', 'v' ) 3830 CALL pmci_interp_1sto_sn( w, wc, kctw, iflo, ifuo, kflw, kfuw, 'n', 'w' ) 3831 CALL pmci_interp_1sto_sn( u, uc, kcto, iflu, ifuu, kflo, kfuo, 'n', 'u' ) 3832 3833 IF ( ( rans_mode_parent .AND. rans_mode ) .OR. & 3834 ( .NOT. rans_mode_parent .AND. .NOT. rans_mode .AND. & 3835 .NOT. constant_diffusion ) ) THEN 3836 ! CALL pmci_interp_1sto_sn( e, ec, kcto, iflo, ifuo, kflo, kfuo, 'n', 'e' ) 3837 ! 3838 !-- Interpolation of e is replaced by the Neumann condition. 
3839 DO jbgp = ny+1, ny+nbgp 3840 e(nzb:nzt,jbgp,nxl:nxr) = e(nzb:nzt,ny,nxl:nxr) 3841 ENDDO 3842 ENDIF 3843 3844 IF ( rans_mode_parent .AND. rans_mode .AND. rans_tke_e ) THEN 3845 CALL pmci_interp_1sto_sn( diss, dissc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3846 ENDIF 3847 3848 IF ( .NOT. neutral ) THEN 3849 CALL pmci_interp_1sto_sn( pt, ptc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3850 ENDIF 3851 3852 IF ( humidity ) THEN 3853 CALL pmci_interp_1sto_sn( q, q_c, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3854 3855 IF ( bulk_cloud_model .AND. microphysics_morrison ) THEN 3856 CALL pmci_interp_1sto_sn( qc, qcc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3857 CALL pmci_interp_1sto_sn( nc, ncc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3538 3858 ENDIF 3539 3859 3860 IF ( bulk_cloud_model .AND. microphysics_seifert ) THEN 3861 CALL pmci_interp_1sto_sn( qr, qrc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3862 CALL pmci_interp_1sto_sn( nr, nrc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3863 ENDIF 3864 3540 3865 ENDIF 3541 ! 3542 !-- South border pe 3543 IF ( bc_dirichlet_s ) THEN 3544 3545 CALL pmci_interp_1sto_sn( v, vc, kcto, iflo, ifuo, kflo, kfuo, 's', 'v' ) 3546 CALL pmci_interp_1sto_sn( w, wc, kctw, iflo, ifuo, kflw, kfuw, 's', 'w' ) 3547 CALL pmci_interp_1sto_sn( u, uc, kcto, iflu, ifuu, kflo, kfuo, 's', 'u' ) 3548 3549 IF ( ( rans_mode_parent .AND. rans_mode ) .OR. & 3550 ( .NOT. rans_mode_parent .AND. .NOT. rans_mode .AND. & 3551 .NOT. constant_diffusion ) ) THEN 3552 ! CALL pmci_interp_1sto_sn( e, ec, kcto, iflo, ifuo, kflo, kfuo, 's', 'e' ) 3553 ! 3554 !-- Interpolation of e is replaced by the Neumann condition. 3555 DO jbgp = -nbgp, -1 3556 e(nzb:nzt,jbgp,nxl:nxr) = e(nzb:nzt,0,nxl:nxr) 3866 3867 IF ( passive_scalar ) THEN 3868 CALL pmci_interp_1sto_sn( s, sc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3869 ENDIF 3870 3871 IF ( air_chemistry .AND. 
nesting_chem ) THEN 3872 DO n = 1, nspec 3873 CALL pmci_interp_1sto_sn( chem_species(n)%conc, chem_spec_c(:,:,:,n), & 3874 kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3875 ENDDO 3876 ENDIF 3877 3878 IF ( salsa .AND. nesting_salsa ) THEN 3879 DO lb = 1, nbins_aerosol 3880 CALL pmci_interp_1sto_sn( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb), & 3881 kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3882 ENDDO 3883 DO lc = 1, nbins_aerosol * ncomponents_mass 3884 CALL pmci_interp_1sto_sn( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc), & 3885 kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3886 ENDDO 3887 IF ( .NOT. salsa_gases_from_chem ) THEN 3888 DO lg = 1, ngases_salsa 3889 CALL pmci_interp_1sto_sn( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg), & 3890 kcto, iflo, ifuo, kflo, kfuo, 'n', 's' ) 3557 3891 ENDDO 3558 3892 ENDIF 3559 3560 IF ( rans_mode_parent .AND. rans_mode .AND. rans_tke_e ) THEN3561 CALL pmci_interp_1sto_sn( diss, dissc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' )3562 ENDIF3563 3564 IF ( .NOT. neutral ) THEN3565 CALL pmci_interp_1sto_sn( pt, ptc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' )3566 ENDIF3567 3568 IF ( humidity ) THEN3569 CALL pmci_interp_1sto_sn( q, q_c, kcto, iflo, ifuo, kflo, kfuo, 's', 's' )3570 3571 IF ( bulk_cloud_model .AND. microphysics_morrison ) THEN3572 CALL pmci_interp_1sto_sn( qc, qcc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' )3573 CALL pmci_interp_1sto_sn( nc, ncc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' )3574 ENDIF3575 3576 IF ( bulk_cloud_model .AND. microphysics_seifert ) THEN3577 CALL pmci_interp_1sto_sn( qr, qrc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' )3578 CALL pmci_interp_1sto_sn( nr, nrc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' )3579 ENDIF3580 3581 ENDIF3582 3583 IF ( passive_scalar ) THEN3584 CALL pmci_interp_1sto_sn( s, sc, kcto, iflo, ifuo, kflo, kfuo, 's', 's' )3585 ENDIF3586 3587 IF ( air_chemistry .AND. 
nesting_chem )  THEN
          DO  n = 1, nspec
             CALL pmci_interp_1sto_sn( chem_species(n)%conc, chem_spec_c(:,:,:,n),             &
                                       kcto, iflo, ifuo, kflo, kfuo, 's', 's' )
          ENDDO
       ENDIF

       IF ( salsa  .AND.  nesting_salsa )  THEN
          DO  lb = 1, nbins_aerosol
             CALL pmci_interp_1sto_sn( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb),    &
                                       kcto, iflo, ifuo, kflo, kfuo, 's', 's' )
          ENDDO
          DO  lc = 1, nbins_aerosol * ncomponents_mass
             CALL pmci_interp_1sto_sn( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc),        &
                                       kcto, iflo, ifuo, kflo, kfuo, 's', 's' )
          ENDDO
          IF ( .NOT. salsa_gases_from_chem )  THEN
             DO  lg = 1, ngases_salsa
                CALL pmci_interp_1sto_sn( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg),           &
                                          kcto, iflo, ifuo, kflo, kfuo, 's', 's' )
             ENDDO
          ENDIF
       ENDIF

    ENDIF
!
!-- North border pe
    IF ( bc_dirichlet_n )  THEN

       CALL pmci_interp_1sto_sn( v, vc, kcto, iflo, ifuo, kflo, kfuo, 'n', 'v' )
       CALL pmci_interp_1sto_sn( w, wc, kctw, iflo, ifuo, kflw, kfuw, 'n', 'w' )
       CALL pmci_interp_1sto_sn( u, uc, kcto, iflu, ifuu, kflo, kfuo, 'n', 'u' )

       IF ( ( rans_mode_parent  .AND.  rans_mode )  .OR.                                       &
            ( .NOT. rans_mode_parent  .AND.  .NOT. rans_mode  .AND.                            &
              .NOT. constant_diffusion ) )  THEN
!          CALL pmci_interp_1sto_sn( e, ec, kcto, iflo, ifuo, kflo, kfuo, 'n', 'e' )
!
!--       Interpolation of e is replaced by the Neumann condition.
          DO  jbgp = ny+1, ny+nbgp
             e(nzb:nzt,jbgp,nxl:nxr) = e(nzb:nzt,ny,nxl:nxr)
          ENDDO
       ENDIF

       IF ( rans_mode_parent  .AND.  rans_mode  .AND.  rans_tke_e )  THEN
          CALL pmci_interp_1sto_sn( diss, dissc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
       ENDIF

       IF ( .NOT. neutral )  THEN
          CALL pmci_interp_1sto_sn( pt, ptc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
       ENDIF

       IF ( humidity )  THEN
          CALL pmci_interp_1sto_sn( q, q_c, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )

          IF ( bulk_cloud_model  .AND.  microphysics_morrison )  THEN
             CALL pmci_interp_1sto_sn( qc, qcc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
             CALL pmci_interp_1sto_sn( nc, ncc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
          ENDIF

          IF ( bulk_cloud_model  .AND.  microphysics_seifert )  THEN
             CALL pmci_interp_1sto_sn( qr, qrc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
             CALL pmci_interp_1sto_sn( nr, nrc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
          ENDIF

       ENDIF

       IF ( passive_scalar )  THEN
          CALL pmci_interp_1sto_sn( s, sc, kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
       ENDIF

       IF ( air_chemistry  .AND.  nesting_chem )  THEN
          DO  n = 1, nspec
             CALL pmci_interp_1sto_sn( chem_species(n)%conc, chem_spec_c(:,:,:,n),             &
                                       kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
          ENDDO
       ENDIF

       IF ( salsa  .AND.  nesting_salsa )  THEN
          DO  lb = 1, nbins_aerosol
             CALL pmci_interp_1sto_sn( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb),    &
                                       kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
          ENDDO
          DO  lc = 1, nbins_aerosol * ncomponents_mass
             CALL pmci_interp_1sto_sn( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc),        &
                                       kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
          ENDDO
          IF ( .NOT. salsa_gases_from_chem )  THEN
             DO  lg = 1, ngases_salsa
                CALL pmci_interp_1sto_sn( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg),           &
                                          kcto, iflo, ifuo, kflo, kfuo, 'n', 's' )
             ENDDO
          ENDIF
       ENDIF

    ENDIF

 ENDIF   ! IF ( nesting_mode /= 'vertical' )
!
!-- All PEs are top-border PEs
 CALL pmci_interp_1sto_t( w, wc, kctw, iflo, ifuo, jflo, jfuo, 'w' )
 CALL pmci_interp_1sto_t( u, uc, kcto, iflu, ifuu, jflo, jfuo, 'u' )
 CALL pmci_interp_1sto_t( v, vc, kcto, iflo, ifuo, jflv, jfuv, 'v' )

 IF ( ( rans_mode_parent  .AND.  rans_mode )  .OR.                                             &
      ( .NOT. rans_mode_parent  .AND.  .NOT. rans_mode  .AND.                                  &
        .NOT. constant_diffusion ) )  THEN
!    CALL pmci_interp_1sto_t( e, ec, kcto, iflo, ifuo, jflo, jfuo, 'e' )
!
!-- Interpolation of e is replaced by the Neumann condition.
    e(nzt+1,nys:nyn,nxl:nxr) = e(nzt,nys:nyn,nxl:nxr)
 ENDIF

 IF ( rans_mode_parent  .AND.  rans_mode  .AND.  rans_tke_e )  THEN
    CALL pmci_interp_1sto_t( diss, dissc, kcto, iflo, ifuo, jflo, jfuo, 's' )
 ENDIF

 IF ( .NOT. neutral )  THEN
    CALL pmci_interp_1sto_t( pt, ptc, kcto, iflo, ifuo, jflo, jfuo, 's' )
 ENDIF

 IF ( humidity )  THEN
    CALL pmci_interp_1sto_t( q, q_c, kcto, iflo, ifuo, jflo, jfuo, 's' )
    IF ( bulk_cloud_model  .AND.  microphysics_morrison )  THEN
       CALL pmci_interp_1sto_t( qc, qcc, kcto, iflo, ifuo, jflo, jfuo, 's' )
       CALL pmci_interp_1sto_t( nc, ncc, kcto, iflo, ifuo, jflo, jfuo, 's' )
    ENDIF
    IF ( bulk_cloud_model  .AND.  microphysics_seifert )  THEN
       CALL pmci_interp_1sto_t( qr, qrc, kcto, iflo, ifuo, jflo, jfuo, 's' )
       CALL pmci_interp_1sto_t( nr, nrc, kcto, iflo, ifuo, jflo, jfuo, 's' )
    ENDIF
 ENDIF

 IF ( passive_scalar )  THEN
    CALL pmci_interp_1sto_t( s, sc, kcto, iflo, ifuo, jflo, jfuo, 's' )
 ENDIF

 IF ( air_chemistry  .AND.  nesting_chem )  THEN
    DO  n = 1, nspec
       CALL pmci_interp_1sto_t( chem_species(n)%conc, chem_spec_c(:,:,:,n),                    &
                                kcto, iflo, ifuo, jflo, jfuo, 's' )
    ENDDO
 ENDIF

 IF ( salsa  .AND.                                                                             &
      nesting_salsa )  THEN
    DO  lb = 1, nbins_aerosol
       CALL pmci_interp_1sto_t( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb),           &
                                kcto, iflo, ifuo, jflo, jfuo, 's' )
    ENDDO
    DO  lc = 1, nbins_aerosol * ncomponents_mass
       CALL pmci_interp_1sto_t( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc),               &
                                kcto, iflo, ifuo, jflo, jfuo, 's' )
    ENDDO
    IF ( .NOT. salsa_gases_from_chem )  THEN
       DO  lg = 1, ngases_salsa
          CALL pmci_interp_1sto_t( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg),                  &
                                   kcto, iflo, ifuo, jflo, jfuo, 's' )
       ENDDO
    ENDIF
 ENDIF

 END SUBROUTINE pmci_interpolation



 SUBROUTINE pmci_anterpolation

!
!-- A wrapper routine for all anterpolation actions.
!-- Note that TKE is not anterpolated.
    IMPLICIT NONE
    INTEGER(iwp) ::  lb  !< Running index for aerosol size bins
    INTEGER(iwp) ::  lc  !< Running index for aerosol mass bins
    INTEGER(iwp) ::  lg  !< Running index for salsa gases
    INTEGER(iwp) ::  n   !< Running index for number of chemical species


    CALL pmci_anterp_tophat( u, uc, kcto, iflu, ifuu, jflo, jfuo, kflo, kfuo, ijkfc_u, 'u' )
    CALL pmci_anterp_tophat( v, vc, kcto, iflo, ifuo, jflv, jfuv, kflo, kfuo, ijkfc_v, 'v' )
    CALL pmci_anterp_tophat( w, wc, kctw, iflo, ifuo, jflo, jfuo, kflw, kfuw, ijkfc_w, 'w' )
!
!-- Anterpolation of TKE and dissipation rate if parent and child are in RANS mode.
    IF ( rans_mode_parent  .AND.  rans_mode )  THEN
       CALL pmci_anterp_tophat( e, ec, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 'e' )
!
!--    Anterpolation of dissipation rate only if TKE-e closure is applied.
       IF ( rans_tke_e )  THEN
          CALL pmci_anterp_tophat( diss, dissc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo,      &
                                   ijkfc_s, 'diss' )
       ENDIF

    ENDIF

    IF ( .NOT. neutral )  THEN
       CALL pmci_anterp_tophat( pt, ptc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 'pt' )
    ENDIF

    IF ( humidity )  THEN

       CALL pmci_anterp_tophat( q, q_c, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 'q' )

       IF ( bulk_cloud_model  .AND.  microphysics_morrison )  THEN

          CALL pmci_anterp_tophat( qc, qcc, kcto, iflo, ifuo, jflo, jfuo,                      &
                                   kflo, kfuo, ijkfc_s, 'qc' )

          CALL pmci_anterp_tophat( nc, ncc, kcto, iflo, ifuo, jflo, jfuo,                      &
                                   kflo, kfuo, ijkfc_s, 'nc' )

       ENDIF

       IF ( bulk_cloud_model  .AND.  microphysics_seifert )  THEN

          CALL pmci_anterp_tophat( qr, qrc, kcto, iflo, ifuo, jflo, jfuo,                      &
                                   kflo, kfuo, ijkfc_s, 'qr' )

          CALL pmci_anterp_tophat( nr, nrc, kcto, iflo, ifuo, jflo, jfuo,                      &
                                   kflo, kfuo, ijkfc_s, 'nr' )

       ENDIF

    ENDIF

    IF ( passive_scalar )  THEN
       CALL pmci_anterp_tophat( s, sc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
    ENDIF

    IF ( air_chemistry  .AND.  nesting_chem )  THEN
       DO  n = 1, nspec
          CALL pmci_anterp_tophat( chem_species(n)%conc, chem_spec_c(:,:,:,n),                 &
                                   kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
       ENDDO
    ENDIF

    IF ( salsa  .AND.  nesting_salsa )  THEN
       DO  lb = 1, nbins_aerosol
          CALL pmci_anterp_tophat( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb),        &
                                   kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
       ENDDO
       DO  lc = 1, nbins_aerosol * ncomponents_mass
          CALL pmci_anterp_tophat( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc),            &
                                   kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
       ENDDO
       IF ( .NOT. salsa_gases_from_chem )  THEN
          DO  lg = 1, ngases_salsa
             CALL pmci_anterp_tophat( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg),               &
                                      kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
          ENDDO
       ENDIF
    ENDIF

 END SUBROUTINE pmci_anterpolation



 SUBROUTINE pmci_interp_1sto_lr( child_array, parent_array, kct, jfl, jfu, kfl, kfu, edge, var )
!
!-- Interpolation of ghost-node values used as the child-domain boundary conditions.
!-- This subroutine handles the left and right boundaries.
    IMPLICIT NONE

    INTEGER(iwp), INTENT(IN) ::  kct  !< The parent-grid index in z-direction just below the boundary value node

    INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) ::  jfl  !< Indicates start index of child cells belonging to certain
                                                            !< parent cell - y direction
    INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) ::  jfu  !< Indicates end index of child cells belonging to certain
                                                            !< parent cell - y direction
    INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfl  !< Indicates start index of child cells belonging to certain
                                                            !< parent cell - z direction
    INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfu  !< Indicates end index of child cells belonging to certain
                                                            !< parent cell - z direction

    REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(INOUT) ::  child_array   !< Child-grid array
    REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(IN)        ::  parent_array  !< Parent-grid array

    CHARACTER(LEN=1), INTENT(IN) ::  edge  !< Edge symbol: 'l' or 'r'
    CHARACTER(LEN=1), INTENT(IN) ::  var   !< Variable symbol: 'u', 'v', 'w' or 's'
!
!-- Local variables:
    INTEGER(iwp) ::  icb    !< Fixed child-grid index in the x-direction pointing to the node just behind the
                            !< boundary-value node
    INTEGER(iwp) ::  icbc   !< Fixed child-grid index in the x-direction pointing to the boundary-value nodes
    INTEGER(iwp) ::  icbgp  !< Index running over the redundant boundary ghost points in the x-direction
    INTEGER(iwp) ::  ierr   !< MPI error code
    INTEGER(iwp) ::  ipbeg  !< Parent-grid index in the x-direction pointing to the starting point of workarr_lr
                            !< in the parent-grid array
    INTEGER(iwp) ::  ipw    !< Reduced parent-grid index in the x-direction for workarr_lr pointing to
                            !< the boundary ghost node
    INTEGER(iwp) ::  ipwp   !< Reduced parent-grid index in the x-direction for workarr_lr pointing to
                            !< the first prognostic node
    INTEGER(iwp) ::  jc     !< Running child-grid index in the y-direction
    INTEGER(iwp) ::  jp     !< Running parent-grid index in the y-direction
    INTEGER(iwp) ::  kc     !< Running child-grid index in the z-direction
    INTEGER(iwp) ::  kp     !< Running parent-grid index in the z-direction

    REAL(wp) ::  cb          !< Interpolation coefficient for the boundary ghost node
    REAL(wp) ::  cp          !< Interpolation coefficient for the first prognostic node
    REAL(wp) ::  c_interp_1  !< Value interpolated to the flux point in x direction from the parent-grid data
    REAL(wp) ::  c_interp_2  !< Auxiliary value interpolated to the flux point in x direction from the parent-grid data
!
!-- Check which edge is to be handled
    IF ( edge == 'l' )  THEN
!
!--    For u, nxl is a ghost node, but not for the other variables
       IF ( var == 'u' )  THEN
          icbc  = nxl
          icb   = icbc - 1
          ipw   = 2
          ipwp  = ipw   ! This is redundant when var == 'u'
          ipbeg = ipl
       ELSE
          icbc  = nxl - 1
          icb   = icbc - 1
          ipw   = 1
          ipwp  = 2
          ipbeg = ipl
       ENDIF
    ELSEIF ( edge == 'r' )  THEN
       IF ( var == 'u' )  THEN
          icbc  = nxr + 1
          icb   = icbc + 1
          ipw   = 1
          ipwp  = ipw   ! This is redundant when var == 'u'
          ipbeg = ipr - 2
       ELSE
          icbc  = nxr + 1
          icb   = icbc + 1
          ipw   = 1
          ipwp  = 0
          ipbeg = ipr - 2
       ENDIF
    ENDIF
!
!-- Interpolation coefficients
    IF ( interpolation_scheme_lrsn == 1 )  THEN
       cb = 1.0_wp  ! 1st-order upwind
    ELSE IF ( interpolation_scheme_lrsn == 2 )  THEN
       cb = 0.5_wp  ! 2nd-order central
    ELSE
       cb = 0.5_wp  ! 2nd-order central (default)
    ENDIF
    cp = 1.0_wp - cb
!
!-- Substitute the necessary parent-grid data to the work array workarr_lr.
    workarr_lr = 0.0_wp
    IF ( pdims(2) > 1 )  THEN

       IF ( bc_dirichlet_s )  THEN
          workarr_lr(0:pg%nz+1,jpsw:jpnw-1,0:2) = parent_array(0:pg%nz+1,jpsw:jpnw-1,ipbeg:ipbeg+2)
       ELSE IF ( bc_dirichlet_n )  THEN
          workarr_lr(0:pg%nz+1,jpsw+1:jpnw,0:2) = parent_array(0:pg%nz+1,jpsw+1:jpnw,ipbeg:ipbeg+2)
       ELSE
          workarr_lr(0:pg%nz+1,jpsw+1:jpnw-1,0:2)                                              &
               = parent_array(0:pg%nz+1,jpsw+1:jpnw-1,ipbeg:ipbeg+2)
       ENDIF
!
!--    South-north exchange if more than one PE subdomain in the y-direction.
!--    Note that in case of 3-D nesting the south (psouth == MPI_PROC_NULL) and
!--    north (pnorth == MPI_PROC_NULL) boundaries are not exchanged because the
!--    nest domain is not cyclic.
!--    From south to north
       CALL MPI_SENDRECV( workarr_lr(0,jpsw+1,0), 1, workarr_lr_exchange_type, psouth, 0,      &
                          workarr_lr(0,jpnw,0), 1, workarr_lr_exchange_type, pnorth, 0,        &
                          comm2d, status, ierr )
!
!--    From north to south
       CALL MPI_SENDRECV( workarr_lr(0,jpnw-1,0), 1, workarr_lr_exchange_type, pnorth, 1,      &
                          workarr_lr(0,jpsw,0), 1, workarr_lr_exchange_type, psouth, 1,        &
                          comm2d, status, ierr )

    ELSE
       workarr_lr(0:pg%nz+1,jpsw:jpnw,0:2) = parent_array(0:pg%nz+1,jpsw:jpnw,ipbeg:ipbeg+2)
    ENDIF

    IF ( var == 'u' )  THEN

       DO  jp = jpsw, jpnw
          DO  kp = 0, kct

             DO  jc = jfl(jp), jfu(jp)
                DO  kc = kfl(kp), kfu(kp)
                   child_array(kc,jc,icbc) = workarr_lr(kp,jp,ipw)
                ENDDO
             ENDDO

          ENDDO
       ENDDO

    ELSE IF ( var == 'v' )  THEN

       DO  jp = jpsw, jpnw-1
          DO  kp = 0, kct
!
!--          First interpolate to the flux point
             c_interp_1 = cb * workarr_lr(kp,jp,ipw)   + cp * workarr_lr(kp,jp,ipwp)
             c_interp_2 = cb * workarr_lr(kp,jp+1,ipw) + cp * workarr_lr(kp,jp+1,ipwp)
!
!--          Use averages of the neighbouring matching grid-line values
             DO  jc = jfl(jp), jfl(jp+1)
                child_array(kfl(kp):kfu(kp),jc,icbc) = 0.5_wp * ( c_interp_1 + c_interp_2 )
             ENDDO
!
!--          Then set the values along the matching grid-lines
             IF ( MOD( jfl(jp), jgsr ) == 0 )  THEN
                child_array(kfl(kp):kfu(kp),jfl(jp),icbc) = c_interp_1
             ENDIF
          ENDDO
       ENDDO
!
!--    Finally, set the values along the last matching grid-line
       IF ( MOD( jfl(jpnw), jgsr ) == 0 )  THEN
          DO  kp = 0, kct
             c_interp_1 = cb * workarr_lr(kp,jpnw,ipw) + cp * workarr_lr(kp,jpnw,ipwp)
             child_array(kfl(kp):kfu(kp),jfl(jpnw),icbc) = c_interp_1
          ENDDO
       ENDIF
!
!--    A gap may still remain in some cases if the subdomain size is not divisible by the
!--    grid-spacing ratio. In such a case, fill the gap. Note however, this operation may
!--    produce some additional momentum conservation error.
       IF ( jfl(jpnw) < nyn )  THEN
          DO  kp = 0, kct
             DO  jc = jfl(jpnw) + 1, nyn
                child_array(kfl(kp):kfu(kp),jc,icbc) = child_array(kfl(kp):kfu(kp),jfl(jpnw),icbc)
             ENDDO
          ENDDO
       ENDIF

    ELSE IF ( var == 'w' )  THEN

       DO  jp = jpsw, jpnw
          DO  kp = 0, kct + 1   ! It is important to go up to kct+1
!
!--          Interpolate to the flux point
             c_interp_1 = cb * workarr_lr(kp,jp,ipw) + cp * workarr_lr(kp,jp,ipwp)
!
!--          First substitute only the matching-node values
             child_array(kfu(kp),jfl(jp):jfu(jp),icbc) = c_interp_1

          ENDDO
       ENDDO

       DO  jp = jpsw, jpnw
          DO  kp = 1, kct + 1   ! It is important to go up to kct+1
!
!--          Then fill up the nodes in between with the averages
             DO  kc = kfu(kp-1) + 1, kfu(kp) - 1
                child_array(kc,jfl(jp):jfu(jp),icbc) =                                         &
                     0.5_wp * ( child_array(kfu(kp-1),jfl(jp):jfu(jp),icbc)                    &
                              + child_array(kfu(kp),jfl(jp):jfu(jp),icbc) )
             ENDDO

          ENDDO
       ENDDO

    ELSE   ! any scalar

       DO  jp = jpsw, jpnw
          DO  kp = 0, kct
!
!--          Interpolate to the flux point
             c_interp_1 = cb * workarr_lr(kp,jp,ipw) + cp * workarr_lr(kp,jp,ipwp)
             DO  jc = jfl(jp), jfu(jp)
                DO  kc = kfl(kp), kfu(kp)
                   child_array(kc,jc,icbc) = c_interp_1
                ENDDO
             ENDDO

          ENDDO
       ENDDO

    ENDIF   ! var
!
!-- Fill up also the redundant 2nd and 3rd ghost-node layers
    IF ( edge == 'l' )  THEN
       DO  icbgp = -nbgp, icb
          child_array(0:nzt+1,nysg:nyng,icbgp) = child_array(0:nzt+1,nysg:nyng,icbc)
       ENDDO
    ELSEIF ( edge == 'r' )  THEN
       DO  icbgp = icb, nx+nbgp
          child_array(0:nzt+1,nysg:nyng,icbgp) = child_array(0:nzt+1,nysg:nyng,icbc)
       ENDDO
    ENDIF

 END SUBROUTINE pmci_interp_1sto_lr



 SUBROUTINE pmci_interp_1sto_sn( child_array, parent_array, kct, ifl, ifu, kfl, kfu, edge, var )
!
!-- Interpolation of ghost-node values used as the child-domain boundary conditions.
!-- This subroutine handles the south and north boundaries.
    IMPLICIT NONE

    INTEGER(iwp), INTENT(IN) ::  kct  !< The parent-grid index in z-direction just below the boundary value node

    INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) ::  ifl  !< Indicates start index of child cells belonging to certain
                                                            !< parent cell - x direction
    INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) ::  ifu  !< Indicates end index of child cells belonging to certain
                                                            !< parent cell - x direction
    INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfl  !< Indicates start index of child cells belonging to certain
                                                            !< parent cell - z direction
    INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfu  !< Indicates end index of child cells belonging to certain
                                                            !< parent cell - z direction

    REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(INOUT) ::  child_array   !< Child-grid array
    REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(IN)        ::  parent_array  !< Parent-grid array

    CHARACTER(LEN=1), INTENT(IN) ::  edge  !< Edge symbol: 's' or 'n'
    CHARACTER(LEN=1), INTENT(IN) ::  var   !< Variable symbol: 'u', 'v', 'w' or 's'
!
!-- Local variables:
    INTEGER(iwp) ::  ic     !< Running child-grid index in the x-direction
    INTEGER(iwp) ::  ierr   !< MPI error code
    INTEGER(iwp) ::  ip     !< Running parent-grid index in the x-direction
    INTEGER(iwp) ::  jcb    !< Fixed child-grid index in the y-direction pointing to the node just behind the
                            !< boundary-value node
    INTEGER(iwp) ::  jcbc   !< Fixed child-grid index in the y-direction pointing to the boundary-value nodes
    INTEGER(iwp) ::  jcbgp  !< Index running over the redundant boundary ghost points in y-direction
    INTEGER(iwp) ::  jpbeg  !< Parent-grid index in the y-direction pointing to the starting point of workarr_sn
                            !< in the parent-grid array
    INTEGER(iwp) ::  jpw    !< Reduced parent-grid index in the y-direction for workarr_sn pointing to
                            !< the boundary ghost node
    INTEGER(iwp) ::  jpwp   !< Reduced parent-grid index in the y-direction for workarr_sn pointing to
                            !< the first prognostic node
    INTEGER(iwp) ::  kc     !< Running child-grid index in the z-direction
    INTEGER(iwp) ::  kp     !< Running parent-grid index in the z-direction

    REAL(wp) ::  cb          !< Interpolation coefficient for the boundary ghost node
    REAL(wp) ::  cp          !< Interpolation coefficient for the first prognostic node
    REAL(wp) ::  c_interp_1  !< Value interpolated to the flux point in y direction from the parent-grid data
    REAL(wp) ::  c_interp_2  !< Auxiliary value interpolated to the flux point in y direction from the parent-grid data


!
!-- Check which edge is to be handled: south or north
    IF ( edge == 's' )  THEN
!
!--    For v, nys is a ghost node, but not for the other variables
       IF ( var == 'v' )  THEN
          jcbc  = nys
          jcb   = jcbc - 1
          jpw   = 2
          jpwp  = 2   ! This is redundant when var == 'v'
          jpbeg = jps
       ELSE
          jcbc  = nys - 1
          jcb   = jcbc - 1
          jpw   = 1
          jpwp  = 2
          jpbeg = jps
       ENDIF
    ELSEIF ( edge == 'n' )  THEN
       IF ( var == 'v' )  THEN
          jcbc  = nyn + 1
          jcb   = jcbc + 1
          jpw   = 1
          jpwp  = 0   ! This is redundant when var == 'v'
          jpbeg = jpn - 2
       ELSE
          jcbc  = nyn + 1
          jcb   = jcbc + 1
          jpw   = 1
          jpwp  = 0
          jpbeg = jpn - 2
       ENDIF
    ENDIF
!
!-- Interpolation coefficients
    IF ( interpolation_scheme_lrsn == 1 )  THEN
       cb = 1.0_wp  ! 1st-order upwind
    ELSE IF ( interpolation_scheme_lrsn == 2 )  THEN
       cb = 0.5_wp  ! 2nd-order central
    ELSE
       cb = 0.5_wp  ! 2nd-order central (default)
    ENDIF
    cp = 1.0_wp - cb
!
!-- Substitute the necessary parent-grid data to the work array workarr_sn.
    workarr_sn = 0.0_wp
    IF ( pdims(1) > 1 )  THEN

       IF ( bc_dirichlet_l )  THEN
          workarr_sn(0:pg%nz+1,0:2,iplw:iprw-1) = parent_array(0:pg%nz+1,jpbeg:jpbeg+2,iplw:iprw-1)
       ELSE IF ( bc_dirichlet_r )  THEN
          workarr_sn(0:pg%nz+1,0:2,iplw+1:iprw) = parent_array(0:pg%nz+1,jpbeg:jpbeg+2,iplw+1:iprw)
       ELSE
          workarr_sn(0:pg%nz+1,0:2,iplw+1:iprw-1)                                              &
               = parent_array(0:pg%nz+1,jpbeg:jpbeg+2,iplw+1:iprw-1)
       ENDIF
!
!--    Left-right exchange if more than one PE subdomain in the x-direction.
!--    Note that in case of 3-D nesting the left (pleft == MPI_PROC_NULL) and
!--    right (pright == MPI_PROC_NULL) boundaries are not exchanged because
!--    the nest domain is not cyclic.
!--    From left to right
       CALL MPI_SENDRECV( workarr_sn(0,0,iplw+1), 1, workarr_sn_exchange_type, pleft, 0,       &
                          workarr_sn(0,0,iprw), 1, workarr_sn_exchange_type, pright, 0,        &
                          comm2d, status, ierr )
!
!--    From right to left
       CALL MPI_SENDRECV( workarr_sn(0,0,iprw-1), 1, workarr_sn_exchange_type, pright, 1,      &
                          workarr_sn(0,0,iplw), 1, workarr_sn_exchange_type, pleft, 1,         &
                          comm2d, status, ierr )

    ELSE
       workarr_sn(0:pg%nz+1,0:2,iplw:iprw) = parent_array(0:pg%nz+1,jpbeg:jpbeg+2,iplw:iprw)
    ENDIF

    IF ( var == 'v' )  THEN

       DO  ip = iplw, iprw
          DO  kp = 0, kct

             DO  ic = ifl(ip), ifu(ip)
                DO  kc = kfl(kp), kfu(kp)
                   child_array(kc,jcbc,ic) = workarr_sn(kp,jpw,ip)
                ENDDO
             ENDDO

          ENDDO
       ENDDO

    ELSE IF ( var == 'u' )  THEN

       DO  ip = iplw, iprw - 1
          DO  kp = 0, kct
!
!--          First interpolate to the flux point
             c_interp_1 = cb * workarr_sn(kp,jpw,ip)   + cp * workarr_sn(kp,jpwp,ip)
             c_interp_2 = cb * workarr_sn(kp,jpw,ip+1) + cp * workarr_sn(kp,jpwp,ip+1)
!
!--          Use averages of the neighbouring matching grid-line values
             DO  ic = ifl(ip), ifl(ip+1)
                child_array(kfl(kp):kfu(kp),jcbc,ic) = 0.5_wp * ( c_interp_1 + c_interp_2 )
             ENDDO
!
!--          Then set the values along the matching grid-lines
             IF ( MOD( ifl(ip), igsr ) == 0 )  THEN
                child_array(kfl(kp):kfu(kp),jcbc,ifl(ip)) = c_interp_1
             ENDIF

          ENDDO
       ENDDO
!
!--    Finally, set the values along the last matching grid-line
       IF ( MOD( ifl(iprw), igsr ) == 0 )  THEN
          DO  kp = 0, kct
             c_interp_1 = cb * workarr_sn(kp,jpw,iprw) + cp * workarr_sn(kp,jpwp,iprw)
             child_array(kfl(kp):kfu(kp),jcbc,ifl(iprw)) = c_interp_1
          ENDDO
       ENDIF
!
!--    A gap may still remain in some cases if the subdomain size is not divisible by the
!--    grid-spacing ratio. In such a case, fill the gap. Note however, this operation may
!--    produce some additional momentum conservation error.
       IF ( ifl(iprw) < nxr )  THEN
          DO  kp = 0, kct
             DO  ic = ifl(iprw) + 1, nxr
                child_array(kfl(kp):kfu(kp),jcbc,ic) = child_array(kfl(kp):kfu(kp),jcbc,ifl(iprw))
             ENDDO
          ENDDO
       ENDIF

    ELSE IF ( var == 'w' )  THEN

       DO  ip = iplw, iprw
          DO  kp = 0, kct + 1   ! It is important to go up to kct+1
!
!--          Interpolate to the flux point
             c_interp_1 = cb * workarr_sn(kp,jpw,ip) + cp * workarr_sn(kp,jpwp,ip)
!
!--          First substitute only the matching-node values
             child_array(kfu(kp),jcbc,ifl(ip):ifu(ip)) = c_interp_1

          ENDDO
       ENDDO

       DO  ip = iplw, iprw
          DO  kp = 1, kct + 1   ! It is important to go up to kct+1
!
!--          Then fill up the nodes in between with the averages
             DO  kc = kfu(kp-1) + 1, kfu(kp) - 1
                child_array(kc,jcbc,ifl(ip):ifu(ip)) =                                         &
                     0.5_wp * ( child_array(kfu(kp-1),jcbc,ifl(ip):ifu(ip))                    &
                              + child_array(kfu(kp),jcbc,ifl(ip):ifu(ip)) )
             ENDDO

          ENDDO
       ENDDO

    ELSE   ! any scalar

       DO  ip = iplw, iprw
          DO  kp = 0, kct
!
!--          Interpolate to the flux point
             c_interp_1 = cb * workarr_sn(kp,jpw,ip) + cp * workarr_sn(kp,jpwp,ip)
             DO  ic = ifl(ip), ifu(ip)
                DO  kc = kfl(kp), kfu(kp)
                   child_array(kc,jcbc,ic) = c_interp_1
                ENDDO
             ENDDO

          ENDDO
       ENDDO

    ENDIF   ! var
!
!-- Fill up also the redundant 2nd and 3rd ghost-node layers
    IF ( edge == 's' )  THEN
       DO  jcbgp = -nbgp, jcb
          child_array(0:nzt+1,jcbgp,nxlg:nxrg) = child_array(0:nzt+1,jcbc,nxlg:nxrg)
       ENDDO
    ELSEIF ( edge == 'n' )  THEN
       DO  jcbgp = jcb, ny+nbgp
          child_array(0:nzt+1,jcbgp,nxlg:nxrg) = child_array(0:nzt+1,jcbc,nxlg:nxrg)
       ENDDO
    ENDIF

 END SUBROUTINE pmci_interp_1sto_sn



 SUBROUTINE pmci_interp_1sto_t( child_array, parent_array, kct, ifl, ifu, jfl, jfu, var )
!
4315 !-- Interpolation of ghost-node values used as the child-domain boundary 4316 !-- conditions. This subroutine handles the top boundary. 4317 IMPLICIT NONE 4318 4319 INTEGER(iwp), INTENT(IN) :: kct !< The parent-grid index in z-direction just below the boundary value node 4320 4321 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifl !< Indicates start index of child cells belonging to certain 4322 !< parent cell - x direction 4323 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifu !< Indicates end index of child cells belonging to certain 4324 !< parent cell - x direction 4325 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfl !< Indicates start index of child cells belonging to certain 4326 !< parent cell - y direction 4327 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfu !< Indicates end index of child cells belonging to certain 4328 !< parent cell - y direction 4329 4330 REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(INOUT) :: child_array !< Child-grid array 4331 REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(IN) :: parent_array !< Parent-grid array 4332 4333 CHARACTER(LEN=1), INTENT(IN) :: var !< Variable symbol: 'u', 'v', 'w' or 's' 4334 ! 
4335 !-- Local variables: 4336 INTEGER(iwp) :: ic !< Running child-grid index in the x-direction 4337 INTEGER(iwp) :: ierr !< MPI error code 4338 INTEGER(iwp) :: iplc !< Lower parent-grid index limit in the x-direction for copying parent-grid 4339 !< array data to workarr_t 4340 INTEGER(iwp) :: iprc !< Upper parent-grid index limit in the x-direction for copying parent-grid 4341 !< array data to workarr_t 4342 INTEGER(iwp) :: jc !< Running child-grid index in the y-direction 4343 INTEGER(iwp) :: jpsc !< Lower parent-grid index limit in the y-direction for copying parent-grid 4344 !< array data to workarr_t 4345 INTEGER(iwp) :: jpnc !< Upper parent-grid-index limit in the y-direction for copying parent-grid 4346 !< array data to workarr_t 4347 INTEGER(iwp) :: kc !< Vertical child-grid index fixed to the boundary-value level 4348 INTEGER(iwp) :: ip !< Running parent-grid index in the x-direction 4349 INTEGER(iwp) :: jp !< Running parent-grid index in the y-direction 4350 INTEGER(iwp) :: kpw !< Reduced parent-grid index in the z-direction for workarr_t pointing to 4351 !< the boundary ghost node 4352 REAL(wp) :: c31 !< Interpolation coefficient for the 3rd-order WS scheme 4353 REAL(wp) :: c32 !< Interpolation coefficient for the 3rd-order WS scheme 4354 REAL(wp) :: c33 !< Interpolation coefficient for the 3rd-order WS scheme 4355 REAL(wp) :: c_interp_1 !< Value interpolated to the flux point in z direction from the parent-grid data 4356 REAL(wp) :: c_interp_2 !< Auxiliary value interpolated to the flux point in z direction from the parent-grid data 4357 4358 4359 IF ( var == 'w' ) THEN 4360 kc = nzt 4361 ELSE 4362 kc = nzt + 1 4363 ENDIF 4364 kpw = 1 4365 ! 4366 !-- Interpolation coefficients 4367 IF ( interpolation_scheme_t == 1 ) THEN 4368 c31 = 0.0_wp ! 1st-order upwind 4369 c32 = 1.0_wp 4370 c33 = 0.0_wp 4371 ELSE IF ( interpolation_scheme_t == 2 ) THEN 4372 c31 = 0.5_wp ! 2nd-order central 4373 c32 = 0.5_wp 4374 c33 = 0.0_wp 4375 ELSE 4376 c31 = 2.0_wp / 6.0_wp ! 
3rd-order WS upwind biased (default) 4377 c32 = 5.0_wp / 6.0_wp 4378 c33 = -1.0_wp / 6.0_wp 4379 ENDIF 4380 ! 4381 !-- Substitute the necessary parent-grid data to the work array. 4382 !-- Note that the dimension of workarr_t is (0:2,jpsw:jpnw,iplw:iprw), 4383 !-- And the jc?w and ic?w-index bounds depend on the location of the PE- 4384 !-- subdomain relative to the side boundaries. 4385 iplc = iplw + 1 4386 iprc = iprw - 1 4387 jpsc = jpsw + 1 4388 jpnc = jpnw - 1 4389 IF ( bc_dirichlet_l ) THEN 4390 iplc = iplw 4391 ENDIF 4392 IF ( bc_dirichlet_r ) THEN 4393 iprc = iprw 4394 ENDIF 4395 IF ( bc_dirichlet_s ) THEN 4396 jpsc = jpsw 4397 ENDIF 4398 IF ( bc_dirichlet_n ) THEN 4399 jpnc = jpnw 4400 ENDIF 4401 workarr_t = 0.0_wp 4402 workarr_t(0:2,jpsc:jpnc,iplc:iprc) = parent_array(kct:kct+2,jpsc:jpnc,iplc:iprc) 4403 ! 4404 !-- Left-right exchange if more than one PE subdomain in the x-direction. 4405 !-- Note that in case of 3-D nesting the left and right boundaries are 4406 !-- not exchanged because the nest domain is not cyclic. 4407 IF ( pdims(1) > 1 ) THEN 4408 ! 4409 !-- From left to right 4410 CALL MPI_SENDRECV( workarr_t(0,jpsw,iplw+1), 1, workarr_t_exchange_type_y, pleft, 0, & 4411 workarr_t(0,jpsw,iprw), 1, workarr_t_exchange_type_y, pright, 0, & 4412 comm2d, status, ierr ) 4413 ! 4414 !-- From right to left 4415 CALL MPI_SENDRECV( workarr_t(0,jpsw,iprw-1), 1, workarr_t_exchange_type_y, pright, 1, & 4416 workarr_t(0,jpsw,iplw), 1, workarr_t_exchange_type_y, pleft, 1, & 4417 comm2d, status, ierr ) 4418 ENDIF 4419 ! 4420 !-- South-north exchange if more than one PE subdomain in the y-direction. 4421 !-- Note that in case of 3-D nesting the south and north boundaries are 4422 !-- not exchanged because the nest domain is not cyclic. 4423 IF ( pdims(2) > 1 ) THEN 4424 ! 
4425 !--    From south to north
4426        CALL MPI_SENDRECV( workarr_t(0,jpsw+1,iplw), 1, workarr_t_exchange_type_x, psouth, 2,   &
4427                           workarr_t(0,jpnw,iplw), 1, workarr_t_exchange_type_x, pnorth, 2,     &
4428                           comm2d, status, ierr )
4429 !
4430 !--    From north to south
4431        CALL MPI_SENDRECV( workarr_t(0,jpnw-1,iplw), 1, workarr_t_exchange_type_x, pnorth, 3,   &
4432                           workarr_t(0,jpsw,iplw), 1, workarr_t_exchange_type_x, psouth, 3,     &
4433                           comm2d, status, ierr )
4434     ENDIF
4435
4436     IF ( var == 'w' )  THEN
4437        DO  ip = iplw, iprw
4438           DO  jp = jpsw, jpnw
4439
4440              DO  ic = ifl(ip), ifu(ip)
4441                 DO  jc = jfl(jp), jfu(jp)
4442                    child_array(kc,jc,ic) = workarr_t(kpw,jp,ip)
4443                 ENDDO
4444              ENDDO
4445
4446           ENDDO
4447        ENDDO
4448
4449     ELSE IF ( var == 'u' )  THEN
4450
4451        DO  ip = iplw, iprw - 1
4452           DO  jp = jpsw, jpnw
4453 !
4454 !--          First interpolate to the flux point using the 3rd-order WS scheme
4455              c_interp_1 = c31 * workarr_t(kpw-1,jp,ip) + c32 * workarr_t(kpw,jp,ip)             &
4456                         + c33 * workarr_t(kpw+1,jp,ip)
4457              c_interp_2 = c31 * workarr_t(kpw-1,jp,ip+1) + c32 * workarr_t(kpw,jp,ip+1)         &
4458                         + c33 * workarr_t(kpw+1,jp,ip+1)
4459 !
4460 !--          Use averages of the neighbouring matching grid-line values
4461              DO  ic = ifl(ip), ifl(ip+1)
4462                 child_array(kc,jfl(jp):jfu(jp),ic) = 0.5_wp * ( c_interp_1 + c_interp_2 )
4463              ENDDO
4464 !
4465 !--          Then set the values along the matching grid-lines
4466              IF ( MOD( ifl(ip), igsr ) == 0 )  THEN
4467 !
4468 !--             First interpolate to the flux point using the 3rd-order WS scheme
4469                 c_interp_1 = c31 * workarr_t(kpw-1,jp,ip) + c32 * workarr_t(kpw,jp,ip)          &
4470                            + c33 * workarr_t(kpw+1,jp,ip)
4471                 child_array(kc,jfl(jp):jfu(jp),ifl(ip)) = c_interp_1
4472              ENDIF
4473
4474           ENDDO
4475        ENDDO
4476 !
4477 !--    Finally, set the values along the last matching grid-line
4478        IF ( MOD( ifl(iprw), igsr ) == 0 )  THEN
4479           DO  jp = jpsw, jpnw
4480 !
4481 !--          First interpolate to the flux point using the 3rd-order WS scheme
4482              c_interp_1 = c31 * workarr_t(kpw-1,jp,iprw) + c32 * workarr_t(kpw,jp,iprw)         &
4483                         + c33 * workarr_t(kpw+1,jp,iprw)
4484              child_array(kc,jfl(jp):jfu(jp),ifl(iprw)) = c_interp_1
4485           ENDDO
4486        ENDIF
4487 !
4488 !--    A gap may still remain in some cases if the subdomain size is not
4489 !--    divisible by the grid-spacing ratio. In such a case, fill the
4490 !--    gap. Note, however, that this operation may produce some additional
4491 !--    momentum-conservation error.
4492        IF ( ifl(iprw) < nxr )  THEN
4493           DO  jp = jpsw, jpnw
4494              DO  ic = ifl(iprw) + 1, nxr
4495                 child_array(kc,jfl(jp):jfu(jp),ic) = child_array(kc,jfl(jp):jfu(jp),ifl(iprw))
4496              ENDDO
4497           ENDDO
4498        ENDIF
4499
4500     ELSE IF ( var == 'v' )  THEN
4501
4502        DO  ip = iplw, iprw
4503           DO  jp = jpsw, jpnw-1
4504 !
4505 !--          First interpolate to the flux point using the 3rd-order WS scheme
4506              c_interp_1 = c31 * workarr_t(kpw-1,jp,ip) + c32 * workarr_t(kpw,jp,ip)             &
4507                         + c33 * workarr_t(kpw+1,jp,ip)
4508              c_interp_2 = c31 * workarr_t(kpw-1,jp+1,ip) + c32 * workarr_t(kpw,jp+1,ip)         &
4509                         + c33 * workarr_t(kpw+1,jp+1,ip)
4510 !
4511 !--          Use averages of the neighbouring matching grid-line values
4512              DO  jc = jfl(jp), jfl(jp+1)
4513                 child_array(kc,jc,ifl(ip):ifu(ip)) = 0.5_wp * ( c_interp_1 + c_interp_2 )
4514              ENDDO
4515 !
4516 !--          Then set the values along the matching grid-lines
4517              IF ( MOD( jfl(jp), jgsr ) == 0 )  THEN
4518                 c_interp_1 = c31 * workarr_t(kpw-1,jp,ip) + c32 * workarr_t(kpw,jp,ip)          &
4519                            + c33 * workarr_t(kpw+1,jp,ip)
4520                 child_array(kc,jfl(jp),ifl(ip):ifu(ip)) = c_interp_1
4521              ENDIF
4522
4523           ENDDO
4524
4525        ENDDO
4526 !
4527 !--    Finally, set the values along the last matching grid-line
4528        IF ( MOD( jfl(jpnw), jgsr ) == 0 )  THEN
4529           DO  ip = iplw, iprw
4530 !
4531 !--          First interpolate to the flux point using the 3rd-order WS scheme
4532              c_interp_1 = c31 * workarr_t(kpw-1,jpnw,ip) + c32 * workarr_t(kpw,jpnw,ip)         &
4533                         + c33 * workarr_t(kpw+1,jpnw,ip)
4534              child_array(kc,jfl(jpnw),ifl(ip):ifu(ip)) = c_interp_1
4535           ENDDO
4536        ENDIF
4537 !
4538 !--    A gap may still remain in some cases if the subdomain size is not
4539 !--    divisible by the grid-spacing ratio. In such a case, fill the
4540 !--    gap. Note, however, that this operation may produce some additional
4541 !--    momentum-conservation error.
4542        IF ( jfl(jpnw) < nyn )  THEN
4543           DO  ip = iplw, iprw
4544              DO  jc = jfl(jpnw)+1, nyn
4545                 child_array(kc,jc,ifl(ip):ifu(ip)) = child_array(kc,jfl(jpnw),ifl(ip):ifu(ip))
4546              ENDDO
4547           ENDDO
4548        ENDIF
4549
4550     ELSE  ! any scalar variable
4551
4552        DO  ip = iplw, iprw
4553           DO  jp = jpsw, jpnw
4554 !
4555 !--          First interpolate to the flux point using the 3rd-order WS scheme
4556              c_interp_1 = c31 * workarr_t(kpw-1,jp,ip) + c32 * workarr_t(kpw,jp,ip)             &
4557                         + c33 * workarr_t(kpw+1,jp,ip)
4558              DO  ic = ifl(ip), ifu(ip)
4559                 DO  jc = jfl(jp), jfu(jp)
4560                    child_array(kc,jc,ic) = c_interp_1
4561                 ENDDO
4562              ENDDO
4563
4564           ENDDO
4565        ENDDO
4566
4567     ENDIF  ! var
4568 !
4569 !-- Just fill up the redundant second ghost-node layer in case of var == w.
4570     IF ( var == 'w' )  THEN
4571        child_array(nzt+1,:,:) = child_array(nzt,:,:)
4572     ENDIF
4573
4574  END SUBROUTINE pmci_interp_1sto_t
4575
4576
4577
4578  SUBROUTINE pmci_anterp_tophat( child_array, parent_array, kct, ifl, ifu, jfl, jfu, kfl, kfu,   &
4579                                 ijkfc, var )
4580 !
4581 !-- Anterpolation of internal-node values to be used as the parent-domain
4582 !-- values. This subroutine is based on the first-order numerical
4583 !-- integration of the child-grid values contained within the anterpolation
4584 !-- cell (Clark & Farley, Journal of the Atmospheric Sciences 41(3), 1984).
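The first-order tophat integration described in the comment above reduces, per parent-grid cell, to a masked average of the child-grid values it contains. The snippet below is an illustration only, not PALM code: Python stands in for Fortran, the boolean `mask` plays the role of the `BTEST( wall_flags_total_0, var_flag )` topography test, and all names are hypothetical.

```python
# Illustrative sketch of the tophat anterpolation for ONE parent-grid cell,
# following the first-order integration idea of Clark & Farley (1984).

def anterp_tophat_cell(child_vals, mask, parent_val):
    """Average the non-topography child values inside one parent cell.

    child_vals : child-grid values lying inside the parent cell
    mask       : booleans, True where the child point is in the fluid
    parent_val : current parent-grid value, kept if every point is masked
    """
    cellsum = sum(v for v, m in zip(child_vals, mask) if m)
    count = sum(mask)
    # If all child points lie inside topography, keep the parent solution
    # (this mirrors the ijkfc(kp,jp,ip) /= 0 test in the Fortran code).
    if count == 0:
        return parent_val
    return cellsum / count
```

In the Fortran code the same logic runs inside the `ip`/`jp`/`kp` loops, with `cellsum` accumulated via `MERGE` and the divisor taken from the precomputed counter array `ijkfc`.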
4585
4586     IMPLICIT NONE
4587
4588     INTEGER(iwp), INTENT(IN) ::  kct  !< Top boundary index for anterpolation along z
4589
4590     INTEGER(iwp), DIMENSION(0:pg%nz+1,jpsa:jpna,ipla:ipra), INTENT(IN) ::  ijkfc  !< number of child grid points contributing
4591                                                                                   !< to a parent grid box
4592     INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) ::  ifl  !< Indicates start index of child cells belonging to certain
4593                                                             !< parent cell - x direction
4594     INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) ::  ifu  !< Indicates end index of child cells belonging to certain
4595                                                             !< parent cell - x direction
4596     INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) ::  jfl  !< Indicates start index of child cells belonging to certain
4597                                                             !< parent cell - y direction
4598     INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) ::  jfu  !< Indicates end index of child cells belonging to certain
4599                                                             !< parent cell - y direction
4600     INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfl  !< Indicates start index of child cells belonging to certain
4601                                                             !< parent cell - z direction
4602     INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfu  !< Indicates end index of child cells belonging to certain
4603                                                             !< parent cell - z direction
4604
4605     REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(IN) ::  child_array   !< Child-grid array
4606     REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(INOUT) ::  parent_array   !< Parent-grid array
4607
4608     CHARACTER(LEN=*), INTENT(IN) ::  var  !< Variable symbol: 'u', 'v', 'w' or 's'
4609 !
4610 !-- Local variables:
4611     INTEGER(iwp) ::  ic              !< Running index x-direction - child grid
4612     INTEGER(iwp) ::  ipl_anterp      !< Left boundary index for anterpolation along x
4613     INTEGER(iwp) ::  ipr_anterp      !< Right boundary index for anterpolation along x
4614     INTEGER(iwp) ::  jc              !< Running index y-direction - child grid
4615     INTEGER(iwp) ::  jpn_anterp      !< North boundary index for anterpolation along y
4616     INTEGER(iwp) ::  jps_anterp      !< South boundary index for anterpolation along y
4617     INTEGER(iwp) ::  kc              !< Running index z-direction - child grid
4618     INTEGER(iwp) ::  kpb_anterp = 0  !< Bottom boundary index for anterpolation along z
4619     INTEGER(iwp) ::  kpt_anterp      !< Top boundary index for anterpolation along z
4620     INTEGER(iwp) ::  ip              !< Running index x-direction - parent grid
4621     INTEGER(iwp) ::  jp              !< Running index y-direction - parent grid
4622     INTEGER(iwp) ::  kp              !< Running index z-direction - parent grid
4623     INTEGER(iwp) ::  var_flag        !< bit number used to flag topography on respective grid
4624
4625     REAL(wp) ::  cellsum  !< sum of respective child cells belonging to parent cell
4626
4627 !
4628 !-- Define the index bounds ipl_anterp, ipr_anterp, jps_anterp and jpn_anterp.
4629 !-- Note that kpb_anterp is simply zero, while kpt_anterp depends on kct, which enters
4630 !-- here as a parameter and is determined in pmci_define_index_mapping.
4631 !-- Note that the grid points directly used also for interpolation (from parent to
4632 !-- child) are always excluded from anterpolation, i.e. anterpolation extends at most
4633 !-- over 0:kct-1, since kct is directly used for interpolation. A similar restriction is
4634 !-- applied to the lateral boundaries as well. An additional buffer is
4635 !-- also applied (default value for anterpolation_buffer_width = 2) in order
4636 !-- to avoid unphysical accumulation of kinetic energy.
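The index-bound rules spelled out in the comment above can be made concrete with a small helper. This is an illustration only, not PALM code: the offsets follow the comment (vertically `kct - 1` minus the buffer width, laterally 3 parent cells plus the buffer width away from the nest edges `iplg`/`iprg`), and all names are hypothetical.

```python
# Illustrative sketch of how the anterpolation index bounds are narrowed.

def anterp_bounds(ipl, ipr, iplg, iprg, kct, buffer_width, vertical_only):
    """Return (ipl_anterp, ipr_anterp, kpt_anterp) for the x-direction pair."""
    # Vertically: exclude kct itself (used for interpolation) plus the buffer.
    kpt_anterp = kct - 1 - buffer_width
    ipl_anterp, ipr_anterp = ipl, ipr
    if not vertical_only:
        # Laterally: stay 3 + buffer_width parent cells away from the nest
        # edges iplg/iprg, but never outside this PE's subdomain ipl..ipr.
        ipl_anterp = max(ipl, iplg + 3 + buffer_width)
        ipr_anterp = min(ipr, iprg - 3 - buffer_width)
    return ipl_anterp, ipr_anterp, kpt_anterp
```

With the default `buffer_width = 2`, the anterpolated region therefore ends five parent cells inside each lateral nest boundary and three parent levels below `kct`.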
4637     ipl_anterp = ipl
4638     ipr_anterp = ipr
4639     jps_anterp = jps
4640     jpn_anterp = jpn
4641     kpb_anterp = 0
4642     kpt_anterp = kct - 1 - anterpolation_buffer_width
4643
4644     IF ( nesting_mode /= 'vertical' )  THEN
4645 !
4646 !--    Set the anterpolation buffers on the lateral boundaries
4647        ipl_anterp = MAX( ipl, iplg + 3 + anterpolation_buffer_width )
4648        ipr_anterp = MIN( ipr, iprg - 3 - anterpolation_buffer_width )
4649        jps_anterp = MAX( jps, jpsg + 3 + anterpolation_buffer_width )
4650        jpn_anterp = MIN( jpn, jpng - 3 - anterpolation_buffer_width )
4651
4652     ENDIF
4653 !
4654 !-- Set masking bit for topography flags
4655     IF ( var == 'u' )  THEN
4656        var_flag = 1
4657     ELSEIF ( var == 'v' )  THEN
4658        var_flag = 2
4659     ELSEIF ( var == 'w' )  THEN
4660        var_flag = 3
4661     ELSE
4662        var_flag = 0
4663     ENDIF
4664 !
4665 !-- Note that ip, jp, and kp are parent-grid indices and ic, jc, and kc
4666 !-- are child-grid indices.
4667     DO  ip = ipl_anterp, ipr_anterp
4668        DO  jp = jps_anterp, jpn_anterp
4669 !
4670 !--       For simplicity, anterpolate within buildings and under elevated
4671 !--       terrain, too
4672           DO  kp = kpb_anterp, kpt_anterp
4673              cellsum = 0.0_wp
4674              DO  ic = ifl(ip), ifu(ip)
4675                 DO  jc = jfl(jp), jfu(jp)
4676                    DO  kc = kfl(kp), kfu(kp)
4677                       cellsum = cellsum + MERGE( child_array(kc,jc,ic), 0.0_wp,                 &
4678                                                  BTEST( wall_flags_total_0(kc,jc,ic), var_flag ) )
4679                    ENDDO
4680                 ENDDO
4681              ENDDO
4682 !
4683 !--          If all child-grid points are inside topography, i.e.
4684 !--          ijkfc and cellsum are zero, the parent solution would also have
4685 !--          zero values at that grid point, which may cause problems, in
4686 !--          particular for the temperature. Therefore, in case cellsum is
4687 !--          zero, keep the parent solution at this point.
4688              IF ( ijkfc(kp,jp,ip) /= 0 )  THEN
4689                 parent_array(kp,jp,ip) = cellsum / REAL( ijkfc(kp,jp,ip), KIND=wp )
4690              ENDIF
4691
4692           ENDDO
4693        ENDDO
4694     ENDDO
4695
4696  END SUBROUTINE pmci_anterp_tophat
4697
4698 #endif
4699
4700  END SUBROUTINE pmci_child_datatrans
4701
3958        ENDIF
3959     ENDIF
3960
3961  END SUBROUTINE pmci_interpolation
3962
3963
3964
3965 !--------------------------------------------------------------------------------------------------!
4702 3966 ! Description:
4703 3967 ! ------------
4704 !> Set boundary conditions for the prognostic quantities after interpolation
4705 !> and anterpolation at upward- and downward-facing surfaces.
4706 !> @todo: add Dirichlet boundary conditions for pot. temperature, humidity and
4707 !> passive scalar.
4708 !------------------------------------------------------------------------------!
3968 !> A wrapper routine for all anterpolation actions.
3969 !--------------------------------------------------------------------------------------------------!
3970  SUBROUTINE pmci_anterpolation
3971
3972 !
3973 !-- A wrapper routine for all anterpolation actions.
3974 !-- Note that TKE is not anterpolated.
3975     IMPLICIT NONE
3976     INTEGER(iwp) ::  lb  !< Running index for aerosol size bins
3977     INTEGER(iwp) ::  lc  !< Running index for aerosol mass bins
3978     INTEGER(iwp) ::  lg  !< Running index for salsa gases
3979     INTEGER(iwp) ::  n   !< Running index for number of chemical species
3980
3981
3982     CALL pmci_anterp_tophat( u, uc, kcto, iflu, ifuu, jflo, jfuo, kflo, kfuo, ijkfc_u, 'u' )
3983     CALL pmci_anterp_tophat( v, vc, kcto, iflo, ifuo, jflv, jfuv, kflo, kfuo, ijkfc_v, 'v' )
3984     CALL pmci_anterp_tophat( w, wc, kctw, iflo, ifuo, jflo, jfuo, kflw, kfuw, ijkfc_w, 'w' )
3985 !
3986 !-- Anterpolation of TKE and dissipation rate if parent and child are in
3987 !-- RANS mode.
3988     IF ( rans_mode_parent  .AND.  rans_mode )  THEN
3989        CALL pmci_anterp_tophat( e, ec, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 'e' )
3990 !
3991 !--    Anterpolation of dissipation rate only if TKE-e closure is applied.
3992        IF ( rans_tke_e )  THEN
3993           CALL pmci_anterp_tophat( diss, dissc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo,       &
3994                                    ijkfc_s, 'diss' )
3995        ENDIF
3996
3997     ENDIF
3998
3999     IF ( .NOT. neutral )  THEN
4000        CALL pmci_anterp_tophat( pt, ptc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 'pt' )
4001     ENDIF
4002
4003     IF ( humidity )  THEN
4004
4005        CALL pmci_anterp_tophat( q, q_c, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 'q' )
4006
4007        IF ( bulk_cloud_model  .AND.  microphysics_morrison )  THEN
4008
4009           CALL pmci_anterp_tophat( qc, qcc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s,  &
4010                                    'qc' )
4011
4012           CALL pmci_anterp_tophat( nc, ncc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s,  &
4013                                    'nc' )
4014
4015        ENDIF
4016
4017        IF ( bulk_cloud_model  .AND.  microphysics_seifert )  THEN
4018
4019           CALL pmci_anterp_tophat( qr, qrc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s,  &
4020                                    'qr' )
4021
4022           CALL pmci_anterp_tophat( nr, nrc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s,  &
4023                                    'nr' )
4024
4025        ENDIF
4026
4027     ENDIF
4028
4029     IF ( passive_scalar )  THEN
4030        CALL pmci_anterp_tophat( s, sc, kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
4031     ENDIF
4032
4033     IF ( air_chemistry  .AND.  nesting_chem )  THEN
4034        DO  n = 1, nspec
4035           CALL pmci_anterp_tophat( chem_species(n)%conc, chem_spec_c(:,:,:,n),                  &
4036                                    kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
4037        ENDDO
4038     ENDIF
4039
4040     IF ( salsa  .AND.  nesting_salsa )  THEN
4041        DO  lb = 1, nbins_aerosol
4042           CALL pmci_anterp_tophat( aerosol_number(lb)%conc, aerosol_number_c(:,:,:,lb),         &
4043                                    kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
4044        ENDDO
4045        DO  lc = 1, nbins_aerosol * ncomponents_mass
4046           CALL pmci_anterp_tophat( aerosol_mass(lc)%conc, aerosol_mass_c(:,:,:,lc),             &
4047                                    kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
4048        ENDDO
4049        IF ( .NOT. salsa_gases_from_chem )  THEN
4050           DO  lg = 1, ngases_salsa
4051              CALL pmci_anterp_tophat( salsa_gas(lg)%conc, salsa_gas_c(:,:,:,lg),                &
4052                                       kcto, iflo, ifuo, jflo, jfuo, kflo, kfuo, ijkfc_s, 's' )
4053           ENDDO
4054        ENDIF
4055     ENDIF
4056
4057  END SUBROUTINE pmci_anterpolation
4058
4059
4060 !--------------------------------------------------------------------------------------------------!
4061 ! Description:
4062 ! ------------
4063 !> Interpolation of ghost-node values used as the child-domain boundary conditions at the left and right boundaries.
4064 !--------------------------------------------------------------------------------------------------!
4065  SUBROUTINE pmci_interp_1sto_lr( child_array, parent_array, kct, jfl, jfu, kfl, kfu, edge, var )
4066 !
4067 !-- Interpolation of ghost-node values used as the child-domain boundary conditions. This subroutine
4068 !-- handles the left and right boundaries.
4069     IMPLICIT NONE
4070
4071     CHARACTER(LEN=1), INTENT(IN) ::  edge  !< Edge symbol: 'l' or 'r'
4072     CHARACTER(LEN=1), INTENT(IN) ::  var   !< Variable symbol: 'u', 'v', 'w' or 's'
4073
4074     INTEGER(iwp), INTENT(IN) ::  kct  !< The parent-grid index in z-direction just below the boundary value node
4075
4076     INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) ::  jfl  !< Indicates start index of child cells belonging to certain
4077                                                             !< parent cell - y direction
4078     INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) ::  jfu  !< Indicates end index of child cells belonging to certain
4079                                                             !< parent cell - y direction
4080     INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfl  !< Indicates start index of child cells belonging to certain
4081                                                             !< parent cell - z direction
4082     INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfu  !< Indicates end index of child cells belonging to certain
4083                                                             !< parent cell - z direction
4084
4085     REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(INOUT) ::  child_array  !< Child-grid array
4086
4087     REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(IN) ::  parent_array  !< Parent-grid array
4088
4089 !
4090 !-- Local variables:
4091     INTEGER(iwp) ::  icb    !< Fixed child-grid index in the x-direction pointing to the node just behind the
4092                             !< boundary-value node
4093     INTEGER(iwp) ::  icbc   !< Fixed child-grid index in the x-direction pointing to the boundary-value nodes
4094     INTEGER(iwp) ::  icbgp  !< Index running over the redundant boundary ghost points in the x-direction
4095     INTEGER(iwp) ::  ierr   !< MPI error code
4096     INTEGER(iwp) ::  ipbeg  !< Parent-grid index in the x-direction pointing to the starting point of workarr_lr
4097                             !< in the parent-grid array
4098     INTEGER(iwp) ::  ipw    !< Reduced parent-grid index in the x-direction for workarr_lr pointing to
4099                             !< the boundary ghost node
4100     INTEGER(iwp) ::  ipwp   !< Reduced parent-grid index in the x-direction for workarr_lr pointing to
4101                             !< the first prognostic node
4102     INTEGER(iwp) ::  jc     !< Running child-grid index in the y-direction
4103     INTEGER(iwp) ::  jp     !< Running parent-grid index in the y-direction
4104     INTEGER(iwp) ::  kc     !< Running child-grid index in the z-direction
4105     INTEGER(iwp) ::  kp     !< Running parent-grid index in the z-direction
4106
4107     REAL(wp) ::  cb          !< Interpolation coefficient for the boundary ghost node
4108     REAL(wp) ::  cp          !< Interpolation coefficient for the first prognostic node
4109     REAL(wp) ::  c_interp_1  !< Value interpolated to the flux point in x direction from the parent-grid data
4110     REAL(wp) ::  c_interp_2  !< Auxiliary value interpolated to the flux point in x direction from the parent-grid data
4111 !
4112 !-- Check which edge is to be handled
4113     IF ( edge == 'l' )  THEN
4114 !
4115 !--    For u, nxl is a ghost node, but not for the other variables
4116        IF ( var == 'u' )  THEN
4117           icbc  = nxl
4118           icb   = icbc - 1
4119           ipw   = 2
4120           ipwp  = ipw  ! This is redundant when var == 'u'
4121           ipbeg = ipl
4122        ELSE
4123           icbc  = nxl - 1
4124           icb   = icbc - 1
4125           ipw   = 1
4126           ipwp  = 2
4127           ipbeg = ipl
4128        ENDIF
4129     ELSEIF ( edge == 'r' )  THEN
4130        IF ( var == 'u' )  THEN
4131           icbc  = nxr + 1
4132           icb   = icbc + 1
4133           ipw   = 1
4134           ipwp  = ipw  ! This is redundant when var == 'u'
4135           ipbeg = ipr - 2
4136        ELSE
4137           icbc  = nxr + 1
4138           icb   = icbc + 1
4139           ipw   = 1
4140           ipwp  = 0
4141           ipbeg = ipr - 2
4142        ENDIF
4143     ENDIF
4144 !
4145 !-- Interpolation coefficients
4146     IF ( interpolation_scheme_lrsn == 1 )  THEN
4147        cb = 1.0_wp  ! 1st-order upwind
4148     ELSE IF ( interpolation_scheme_lrsn == 2 )  THEN
4149        cb = 0.5_wp  ! 2nd-order central
4150     ELSE
4151        cb = 0.5_wp  ! 2nd-order central (default)
4152     ENDIF
4153     cp = 1.0_wp - cb
4154 !
4155 !-- Substitute the necessary parent-grid data to the work array workarr_lr.
4156     workarr_lr = 0.0_wp
4157     IF ( pdims(2) > 1 )  THEN
4158
4159        IF ( bc_dirichlet_s )  THEN
4160           workarr_lr(0:pg%nz+1,jpsw:jpnw-1,0:2) = parent_array(0:pg%nz+1,jpsw:jpnw-1,ipbeg:ipbeg+2)
4161        ELSE IF ( bc_dirichlet_n )  THEN
4162           workarr_lr(0:pg%nz+1,jpsw+1:jpnw,0:2) = parent_array(0:pg%nz+1,jpsw+1:jpnw,ipbeg:ipbeg+2)
4163        ELSE
4164           workarr_lr(0:pg%nz+1,jpsw+1:jpnw-1,0:2) = parent_array(0:pg%nz+1,jpsw+1:jpnw-1,        &
4165                                                                  ipbeg:ipbeg+2)
4166        ENDIF
4167 !
4168 !--    South-north exchange if more than one PE subdomain in the y-direction. Note that in case of
4169 !--    3-D nesting the south (psouth == MPI_PROC_NULL) and north (pnorth == MPI_PROC_NULL)
4170 !--    boundaries are not exchanged because the nest domain is not cyclic.
4171 !--    From south to north
4172        CALL MPI_SENDRECV( workarr_lr(0,jpsw+1,0), 1, workarr_lr_exchange_type, psouth, 0,        &
4173                           workarr_lr(0,jpnw,0), 1, workarr_lr_exchange_type, pnorth, 0, comm2d,  &
4174                           status, ierr )
4175 !
4176 !--    From north to south
4177        CALL MPI_SENDRECV( workarr_lr(0,jpnw-1,0), 1, workarr_lr_exchange_type, pnorth, 1,        &
4178                           workarr_lr(0,jpsw,0), 1, workarr_lr_exchange_type, psouth, 1, comm2d,  &
4179                           status, ierr )
4180
4181     ELSE
4182        workarr_lr(0:pg%nz+1,jpsw:jpnw,0:2) = parent_array(0:pg%nz+1,jpsw:jpnw,ipbeg:ipbeg+2)
4183     ENDIF
4184
4185     IF ( var == 'u' )  THEN
4186
4187        DO  jp = jpsw, jpnw
4188           DO  kp = 0, kct
4189
4190              DO  jc = jfl(jp), jfu(jp)
4191                 DO  kc = kfl(kp), kfu(kp)
4192                    child_array(kc,jc,icbc) = workarr_lr(kp,jp,ipw)
4193                 ENDDO
4194              ENDDO
4195
4196           ENDDO
4197        ENDDO
4198
4199     ELSE IF ( var == 'v' )  THEN
4200
4201        DO  jp = jpsw, jpnw-1
4202           DO  kp = 0, kct
4203 !
4204 !--          First interpolate to the flux point
4205              c_interp_1 = cb * workarr_lr(kp,jp,ipw) + cp * workarr_lr(kp,jp,ipwp)
4206              c_interp_2 = cb * workarr_lr(kp,jp+1,ipw) + cp * workarr_lr(kp,jp+1,ipwp)
4207 !
4208 !--          Use averages of the neighbouring matching grid-line values
4209              DO  jc = jfl(jp), jfl(jp+1)
4210                 child_array(kfl(kp):kfu(kp),jc,icbc) = 0.5_wp * ( c_interp_1 + c_interp_2 )
4211              ENDDO
4212 !
4213 !--          Then set the values along the matching grid-lines
4214              IF ( MOD( jfl(jp), jgsr ) == 0 )  THEN
4215                 child_array(kfl(kp):kfu(kp),jfl(jp),icbc) = c_interp_1
4216              ENDIF
4217           ENDDO
4218        ENDDO
4219 !
4220 !--    Finally, set the values along the last matching grid-line
4221        IF ( MOD( jfl(jpnw), jgsr ) == 0 )  THEN
4222           DO  kp = 0, kct
4223              c_interp_1 = cb * workarr_lr(kp,jpnw,ipw) + cp * workarr_lr(kp,jpnw,ipwp)
4224              child_array(kfl(kp):kfu(kp),jfl(jpnw),icbc) = c_interp_1
4225           ENDDO
4226        ENDIF
4227 !
4228 !--    A gap may still remain in some cases if the subdomain size is not divisible by the
4229 !--    grid-spacing ratio. In such a case, fill the gap. Note, however, that this operation may
4230 !--    produce some additional momentum-conservation error.
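The gap situation described in the comment above arises when the last matching grid-line `jfl(jpnw)` falls short of the subdomain edge `nyn`; the remaining child rows are then filled by copying that line's values. A minimal, language-agnostic sketch of this fill step (Python for illustration only; names are hypothetical):

```python
# Illustrative sketch of the edge-gap fill: copy the value on the last
# matching grid-line into the remaining child indices up to the subdomain
# edge (inclusive). This trades a small momentum-conservation error for a
# fully defined boundary plane.

def fill_edge_gap(values, last_matching, edge):
    """Copy values[last_matching] into indices last_matching+1 .. edge."""
    for jc in range(last_matching + 1, edge + 1):
        values[jc] = values[last_matching]
    return values
```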
4231        IF ( jfl(jpnw) < nyn )  THEN
4232           DO  kp = 0, kct
4233              DO  jc = jfl(jpnw) + 1, nyn
4234                 child_array(kfl(kp):kfu(kp),jc,icbc) = child_array(kfl(kp):kfu(kp),jfl(jpnw),icbc)
4235              ENDDO
4236           ENDDO
4237        ENDIF
4238
4239     ELSE IF ( var == 'w' )  THEN
4240
4241        DO  jp = jpsw, jpnw
4242           DO  kp = 0, kct + 1  ! It is important to go up to kct+1
4243 !
4244 !--          Interpolate to the flux point
4245              c_interp_1 = cb * workarr_lr(kp,jp,ipw) + cp * workarr_lr(kp,jp,ipwp)
4246 !
4247 !--          First substitute only the matching-node values
4248              child_array(kfu(kp),jfl(jp):jfu(jp),icbc) = c_interp_1
4249
4250           ENDDO
4251        ENDDO
4252
4253        DO  jp = jpsw, jpnw
4254           DO  kp = 1, kct + 1  ! It is important to go up to kct+1
4255 !
4256 !--          Then fill up the nodes in between with the averages
4257              DO  kc = kfu(kp-1) + 1, kfu(kp) - 1
4258                 child_array(kc,jfl(jp):jfu(jp),icbc) = 0.5_wp * ( child_array(kfu(kp-1),         &
4259                                                                   jfl(jp):jfu(jp),icbc)          &
4260                                                        + child_array(kfu(kp),jfl(jp):jfu(jp),icbc) )
4261              ENDDO
4262
4263           ENDDO
4264        ENDDO
4265
4266     ELSE  ! Any scalar
4267
4268        DO  jp = jpsw, jpnw
4269           DO  kp = 0, kct
4270 !
4271 !--          Interpolate to the flux point
4272              c_interp_1 = cb * workarr_lr(kp,jp,ipw) + cp * workarr_lr(kp,jp,ipwp)
4273              DO  jc = jfl(jp), jfu(jp)
4274                 DO  kc = kfl(kp), kfu(kp)
4275                    child_array(kc,jc,icbc) = c_interp_1
4276                 ENDDO
4277              ENDDO
4278
4279           ENDDO
4280        ENDDO
4281
4282     ENDIF  ! var
4283 !
4284 !-- Also fill up the redundant 2nd and 3rd ghost-node layers
4285     IF ( edge == 'l' )  THEN
4286        DO  icbgp = -nbgp, icb
4287           child_array(0:nzt+1,nysg:nyng,icbgp) = child_array(0:nzt+1,nysg:nyng,icbc)
4288        ENDDO
4289     ELSEIF ( edge == 'r' )  THEN
4290        DO  icbgp = icb, nx+nbgp
4291           child_array(0:nzt+1,nysg:nyng,icbgp) = child_array(0:nzt+1,nysg:nyng,icbc)
4292        ENDDO
4293     ENDIF
4294
4295  END SUBROUTINE pmci_interp_1sto_lr
4296
4297
4298 !--------------------------------------------------------------------------------------------------!
4299 ! Description:
4300 ! ------------
4301 !> Interpolation of ghost-node values used as the child-domain boundary conditions at the south and north boundaries.
4302 !--------------------------------------------------------------------------------------------------!
4303  SUBROUTINE pmci_interp_1sto_sn( child_array, parent_array, kct, ifl, ifu, kfl, kfu, edge, var )
4304 !
4305 !-- Interpolation of ghost-node values used as the child-domain boundary conditions. This subroutine
4306 !-- handles the south and north boundaries.
4307     IMPLICIT NONE
4308
4309     CHARACTER(LEN=1), INTENT(IN) ::  edge  !< Edge symbol: 's' or 'n'
4310     CHARACTER(LEN=1), INTENT(IN) ::  var   !< Variable symbol: 'u', 'v', 'w' or 's'
4311
4312     INTEGER(iwp), INTENT(IN) ::  kct  !< The parent-grid index in z-direction just below the boundary value node
4313
4314     INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) ::  ifl  !< Indicates start index of child cells belonging to certain
4315                                                             !< parent cell - x direction
4316     INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) ::  ifu  !< Indicates end index of child cells belonging to certain
4317                                                             !< parent cell - x direction
4318     INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfl  !< Indicates start index of child cells belonging to certain
4319                                                             !< parent cell - z direction
4320     INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) ::  kfu  !< Indicates end index of child cells belonging to certain
4321                                                             !< parent cell - z direction
4322
4323     REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(INOUT) ::  child_array  !< Child-grid array
4324
4325     REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(IN) ::  parent_array  !< Parent-grid array
4326 !
4327 !-- Local variables:
4328     INTEGER(iwp) ::  ic     !< Running child-grid index in the x-direction
4329     INTEGER(iwp) ::  ierr   !< MPI error code
4330     INTEGER(iwp) ::  ip     !< Running parent-grid index in the x-direction
4331     INTEGER(iwp) ::  jcb    !< Fixed child-grid index in the y-direction pointing to the node just behind the
4332                             !< boundary-value node
4333     INTEGER(iwp) ::  jcbc   !< Fixed child-grid index in the y-direction pointing to the boundary-value nodes
4334     INTEGER(iwp) ::  jcbgp  !< Index running over the redundant boundary ghost points in y-direction
4335     INTEGER(iwp) ::  jpbeg  !< Parent-grid index in the y-direction pointing to the starting point of workarr_sn
4336                             !< in the parent-grid array
4337     INTEGER(iwp) ::  jpw    !< Reduced parent-grid index in the y-direction for workarr_sn pointing to
4338                             !< the boundary ghost node
4339     INTEGER(iwp) ::  jpwp   !< Reduced parent-grid index in the y-direction for workarr_sn pointing to
4340                             !< the first prognostic node
4341     INTEGER(iwp) ::  kc     !< Running child-grid index in the z-direction
4342     INTEGER(iwp) ::  kp     !< Running parent-grid index in the z-direction
4343
4344     REAL(wp) ::  cb          !< Interpolation coefficient for the boundary ghost node
4345     REAL(wp) ::  cp          !< Interpolation coefficient for the first prognostic node
4346     REAL(wp) ::  c_interp_1  !< Value interpolated to the flux point in x direction from the parent-grid data
4347     REAL(wp) ::  c_interp_2  !< Auxiliary value interpolated to the flux point in x direction from the parent-grid data
4348
4349
4350 !
4351 !-- Check which edge is to be handled: south or north
4352     IF ( edge == 's' )  THEN
4353 !
4354 !--    For v, nys is a ghost node, but not for the other variables
4355        IF ( var == 'v' )  THEN
4356           jcbc  = nys
4357           jcb   = jcbc - 1
4358           jpw   = 2
4359           jpwp  = 2  ! This is redundant when var == 'v'
4360           jpbeg = jps
4361        ELSE
4362           jcbc  = nys - 1
4363           jcb   = jcbc - 1
4364           jpw   = 1
4365           jpwp  = 2
4366           jpbeg = jps
4367        ENDIF
4368     ELSEIF ( edge == 'n' )  THEN
4369        IF ( var == 'v' )  THEN
4370           jcbc  = nyn + 1
4371           jcb   = jcbc + 1
4372           jpw   = 1
4373           jpwp  = 0  ! This is redundant when var == 'v'
4374           jpbeg = jpn - 2
4375        ELSE
4376           jcbc  = nyn + 1
4377           jcb   = jcbc + 1
4378           jpw   = 1
4379           jpwp  = 0
4380           jpbeg = jpn - 2
4381        ENDIF
4382     ENDIF
4383 !
4384 !-- Interpolation coefficients
4385     IF ( interpolation_scheme_lrsn == 1 )  THEN
4386        cb = 1.0_wp  ! 1st-order upwind
4387     ELSE IF ( interpolation_scheme_lrsn == 2 )  THEN
4388        cb = 0.5_wp  ! 2nd-order central
4389     ELSE
4390        cb = 0.5_wp  ! 2nd-order central (default)
4391     ENDIF
4392     cp = 1.0_wp - cb
4393 !
4394 !-- Substitute the necessary parent-grid data to the work array workarr_sn.
4395     workarr_sn = 0.0_wp
4396     IF ( pdims(1) > 1 )  THEN
4397
4398        IF ( bc_dirichlet_l )  THEN
4399           workarr_sn(0:pg%nz+1,0:2,iplw:iprw-1) = parent_array(0:pg%nz+1,jpbeg:jpbeg+2,iplw:iprw-1)
4400        ELSE IF ( bc_dirichlet_r )  THEN
4401           workarr_sn(0:pg%nz+1,0:2,iplw+1:iprw) = parent_array(0:pg%nz+1,jpbeg:jpbeg+2,iplw+1:iprw)
4402        ELSE
4403           workarr_sn(0:pg%nz+1,0:2,iplw+1:iprw-1) = parent_array(0:pg%nz+1,jpbeg:jpbeg+2,        &
4404                                                                  iplw+1:iprw-1)
4405        ENDIF
4406 !
4407 !--    Left-right exchange if more than one PE subdomain in the x-direction. Note that in case of
4408 !--    3-D nesting the left (pleft == MPI_PROC_NULL) and right (pright == MPI_PROC_NULL) boundaries
4409 !--    are not exchanged because the nest domain is not cyclic.
4410 !--    From left to right
4411        CALL MPI_SENDRECV( workarr_sn(0,0,iplw+1), 1, workarr_sn_exchange_type, pleft, 0,         &
4412                           workarr_sn(0,0,iprw), 1, workarr_sn_exchange_type, pright, 0, comm2d,  &
4413                           status, ierr )
4414 !
4415 !--    From right to left
4416        CALL MPI_SENDRECV( workarr_sn(0,0,iprw-1), 1, workarr_sn_exchange_type, pright, 1,        &
4417                           workarr_sn(0,0,iplw), 1, workarr_sn_exchange_type, pleft, 1, comm2d,   &
4418                           status, ierr )
4419
4420     ELSE
4421        workarr_sn(0:pg%nz+1,0:2,iplw:iprw) = parent_array(0:pg%nz+1,jpbeg:jpbeg+2,iplw:iprw)
4422     ENDIF
4423
4424     IF ( var == 'v' )  THEN
4425
4426        DO  ip = iplw, iprw
4427           DO  kp = 0, kct
4428
4429              DO  ic = ifl(ip), ifu(ip)
4430                 DO  kc = kfl(kp), kfu(kp)
4431                    child_array(kc,jcbc,ic) = workarr_sn(kp,jpw,ip)
4432                 ENDDO
4433              ENDDO
4434
4435           ENDDO
4436        ENDDO
4437
4438     ELSE IF ( var == 'u' )  THEN
4439
4440        DO  ip = iplw, iprw - 1
4441           DO  kp = 0, kct
4442 !
4443 !--          First interpolate to the flux point
4444              c_interp_1 = cb * workarr_sn(kp,jpw,ip) + cp * workarr_sn(kp,jpwp,ip)
4445              c_interp_2 = cb * workarr_sn(kp,jpw,ip+1) + cp * workarr_sn(kp,jpwp,ip+1)
4446 !
4447 !--          Use averages of the neighbouring matching grid-line values
4448              DO  ic = ifl(ip), ifl(ip+1)
4449                 child_array(kfl(kp):kfu(kp),jcbc,ic) = 0.5_wp * ( c_interp_1 + c_interp_2 )
4450              ENDDO
4451 !
4452 !--          Then set the values along the matching grid-lines
4453              IF ( MOD( ifl(ip), igsr ) == 0 )  THEN
4454                 child_array(kfl(kp):kfu(kp),jcbc,ifl(ip)) = c_interp_1
4455              ENDIF
4456
4457           ENDDO
4458        ENDDO
4459 !
4460 !--    Finally, set the values along the last matching grid-line
4461        IF ( MOD( ifl(iprw), igsr ) == 0 )  THEN
4462           DO  kp = 0, kct
4463              c_interp_1 = cb * workarr_sn(kp,jpw,iprw) + cp * workarr_sn(kp,jpwp,iprw)
4464              child_array(kfl(kp):kfu(kp),jcbc,ifl(iprw)) = c_interp_1
4465           ENDDO
4466        ENDIF
4467 !
4468 !--    A gap may still remain in some cases if the subdomain size is not divisible by the
4469 !--    grid-spacing ratio. In such a case, fill the gap. Note, however, that this operation may
4470 !--    produce some additional momentum-conservation error.
4471 IF ( ifl(iprw) < nxr ) THEN 4472 DO kp = 0, kct 4473 DO ic = ifl(iprw) + 1, nxr 4474 child_array(kfl(kp):kfu(kp),jcbc,ic) = child_array(kfl(kp):kfu(kp),jcbc,ifl(iprw)) 4475 ENDDO 4476 ENDDO 4477 ENDIF 4478 4479 ELSE IF ( var == 'w' ) THEN 4480 4481 DO ip = iplw, iprw 4482 DO kp = 0, kct + 1 ! It is important to go up to kct+1 4483 ! 4484 !-- Interpolate to the flux point 4485 c_interp_1 = cb * workarr_sn(kp,jpw,ip) + cp * workarr_sn(kp,jpwp,ip) 4486 ! 4487 !-- First substitute only the matching-node values 4488 child_array(kfu(kp),jcbc,ifl(ip):ifu(ip)) = c_interp_1 4489 4490 ENDDO 4491 ENDDO 4492 4493 DO ip = iplw, iprw 4494 DO kp = 1, kct + 1 ! It is important to go up to kct + 1 4495 ! 4496 !-- Then fill up the nodes in between with the averages 4497 DO kc = kfu(kp-1) + 1, kfu(kp) - 1 4498 child_array(kc,jcbc,ifl(ip):ifu(ip)) = 0.5_wp * ( child_array(kfu(kp-1), & 4499 jcbc,ifl(ip):ifu(ip)) & 4500 + child_array(kfu(kp),jcbc,ifl(ip):ifu(ip)) ) 4501 ENDDO 4502 4503 ENDDO 4504 ENDDO 4505 4506 ELSE ! Any scalar 4507 4508 DO ip = iplw, iprw 4509 DO kp = 0, kct 4510 ! 4511 !-- Interpolate to the flux point 4512 c_interp_1 = cb * workarr_sn(kp,jpw,ip) + cp * workarr_sn(kp,jpwp,ip) 4513 DO ic = ifl(ip), ifu(ip) 4514 DO kc = kfl(kp), kfu(kp) 4515 child_array(kc,jcbc,ic) = c_interp_1 4516 ENDDO 4517 ENDDO 4518 4519 ENDDO 4520 ENDDO 4521 4522 ENDIF ! var 4523 ! 4524 !-- Fill up also the redundant 2nd and 3rd ghost-node layers 4525 IF ( edge == 's' ) THEN 4526 DO jcbgp = -nbgp, jcb 4527 child_array(0:nzt+1,jcbgp,nxlg:nxrg) = child_array(0:nzt+1,jcbc,nxlg:nxrg) 4528 ENDDO 4529 ELSEIF ( edge == 'n' ) THEN 4530 DO jcbgp = jcb, ny+nbgp 4531 child_array(0:nzt+1,jcbgp,nxlg:nxrg) = child_array(0:nzt+1,jcbc,nxlg:nxrg) 4532 ENDDO 4533 ENDIF 4534 4535 END SUBROUTINE pmci_interp_1sto_sn 4536 4537 4538 !--------------------------------------------------------------------------------------------------! 4539 ! Description: 4540 ! 
------------ 4541 !> @Todo: Missing subroutine description. 4542 !--------------------------------------------------------------------------------------------------! 4543 SUBROUTINE pmci_interp_1sto_t( child_array, parent_array, kct, ifl, ifu, jfl, jfu, var ) 4544 ! 4545 !-- Interpolation of ghost-node values used as the child-domain boundary conditions. This subroutine 4546 !-- handles the top boundary. 4547 IMPLICIT NONE 4548 4549 CHARACTER(LEN=1), INTENT(IN) :: var !< Variable symbol: 'u', 'v', 'w' or 's' 4550 4551 INTEGER(iwp), INTENT(IN) :: kct !< The parent-grid index in z-direction just below the boundary value node 4552 4553 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifl !< Indicates start index of child cells belonging to certain 4554 !< parent cell - x direction 4555 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifu !< Indicates end index of child cells belonging to certain 4556 !< parent cell - x direction 4557 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfl !< Indicates start index of child cells belonging to certain 4558 !< parent cell - y direction 4559 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfu !< Indicates end index of child cells belonging to certain 4560 !< parent cell - y direction 4561 4562 REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(INOUT) :: child_array !< Child-grid array 4563 4564 REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(IN) :: parent_array !< Parent-grid array 4565 4566 4567 ! 
!-- Local variables:
    INTEGER(iwp) ::  ic    !< Running child-grid index in the x-direction
    INTEGER(iwp) ::  ierr  !< MPI error code
    INTEGER(iwp) ::  iplc  !< Lower parent-grid index limit in the x-direction for copying parent-grid
                           !< array data to workarr_t
    INTEGER(iwp) ::  iprc  !< Upper parent-grid index limit in the x-direction for copying parent-grid
                           !< array data to workarr_t
    INTEGER(iwp) ::  jc    !< Running child-grid index in the y-direction
    INTEGER(iwp) ::  jpsc  !< Lower parent-grid index limit in the y-direction for copying parent-grid
                           !< array data to workarr_t
    INTEGER(iwp) ::  jpnc  !< Upper parent-grid-index limit in the y-direction for copying parent-grid
                           !< array data to workarr_t
    INTEGER(iwp) ::  kc    !< Vertical child-grid index fixed to the boundary-value level
    INTEGER(iwp) ::  ip    !< Running parent-grid index in the x-direction
    INTEGER(iwp) ::  jp    !< Running parent-grid index in the y-direction
    INTEGER(iwp) ::  kpw   !< Reduced parent-grid index in the z-direction for workarr_t pointing to
                           !< the boundary ghost node

    REAL(wp) ::  c31         !< Interpolation coefficient for the 3rd-order WS scheme
    REAL(wp) ::  c32         !< Interpolation coefficient for the 3rd-order WS scheme
    REAL(wp) ::  c33         !< Interpolation coefficient for the 3rd-order WS scheme
    REAL(wp) ::  c_interp_1  !< Value interpolated to the flux point in z direction from the parent-grid data
    REAL(wp) ::  c_interp_2  !< Auxiliary value interpolated to the flux point in z direction from the parent-grid data


    IF ( var == 'w' ) THEN
       kc = nzt
    ELSE
       kc = nzt + 1
    ENDIF
    kpw = 1
!
!-- Interpolation coefficients
    IF ( interpolation_scheme_t == 1 ) THEN
       c31 =  0.0_wp  ! 1st-order upwind
       c32 =  1.0_wp
       c33 =  0.0_wp
    ELSE IF ( interpolation_scheme_t == 2 ) THEN
       c31 =  0.5_wp  ! 2nd-order central
       c32 =  0.5_wp
       c33 =  0.0_wp
    ELSE
       c31 =  2.0_wp / 6.0_wp  ! 3rd-order WS upwind biased (default)
       c32 =  5.0_wp / 6.0_wp
       c33 = -1.0_wp / 6.0_wp
    ENDIF
!
!-- Substitute the necessary parent-grid data to the work array. Note that the dimension of
!-- workarr_t is (0:2,jpsw:jpnw,iplw:iprw) and the jc?w and ic?w-index bounds depend on the location
!-- of the PE-subdomain relative to the side boundaries.
    iplc = iplw + 1
    iprc = iprw - 1
    jpsc = jpsw + 1
    jpnc = jpnw - 1
    IF ( bc_dirichlet_l ) THEN
       iplc = iplw
    ENDIF
    IF ( bc_dirichlet_r ) THEN
       iprc = iprw
    ENDIF
    IF ( bc_dirichlet_s ) THEN
       jpsc = jpsw
    ENDIF
    IF ( bc_dirichlet_n ) THEN
       jpnc = jpnw
    ENDIF
    workarr_t = 0.0_wp
    workarr_t(0:2,jpsc:jpnc,iplc:iprc) = parent_array(kct:kct+2,jpsc:jpnc,iplc:iprc)
!
!-- Left-right exchange if more than one PE subdomain in the x-direction. Note that in case of 3-D
!-- nesting the left and right boundaries are not exchanged because the nest domain is not cyclic.
    IF ( pdims(1) > 1 ) THEN
!
!--    From left to right
       CALL MPI_SENDRECV( workarr_t(0,jpsw,iplw+1), 1, workarr_t_exchange_type_y, pleft,  0,       &
                          workarr_t(0,jpsw,iprw),   1, workarr_t_exchange_type_y, pright, 0,       &
                          comm2d, status, ierr )
!
!--    From right to left
       CALL MPI_SENDRECV( workarr_t(0,jpsw,iprw-1), 1, workarr_t_exchange_type_y, pright, 1,       &
                          workarr_t(0,jpsw,iplw),   1, workarr_t_exchange_type_y, pleft,  1,       &
                          comm2d, status, ierr )
    ENDIF
!
!-- South-north exchange if more than one PE subdomain in the y-direction. Note that in case of 3-D
!-- nesting the south and north boundaries are not exchanged because the nest domain is not cyclic.
    IF ( pdims(2) > 1 ) THEN
!
!--    From south to north
       CALL MPI_SENDRECV( workarr_t(0,jpsw+1,iplw), 1, workarr_t_exchange_type_x, psouth, 2,       &
                          workarr_t(0,jpnw,iplw),   1, workarr_t_exchange_type_x, pnorth, 2,       &
                          comm2d, status, ierr )
!
!--    From north to south
       CALL MPI_SENDRECV( workarr_t(0,jpnw-1,iplw), 1, workarr_t_exchange_type_x, pnorth, 3,       &
                          workarr_t(0,jpsw,iplw),   1, workarr_t_exchange_type_x, psouth, 3,       &
                          comm2d, status, ierr )
    ENDIF

    IF ( var == 'w' ) THEN
       DO  ip = iplw, iprw
          DO  jp = jpsw, jpnw

             DO  ic = ifl(ip), ifu(ip)
                DO  jc = jfl(jp), jfu(jp)
                   child_array(kc,jc,ic) = workarr_t(kpw,jp,ip)
                ENDDO
             ENDDO

          ENDDO
       ENDDO

    ELSE IF ( var == 'u' ) THEN

       DO  ip = iplw, iprw - 1
          DO  jp = jpsw, jpnw
!
!--          First interpolate to the flux point using the 3rd-order WS scheme
             c_interp_1 = c31 * workarr_t(kpw-1,jp,ip)   + c32 * workarr_t(kpw,jp,ip)              &
                        + c33 * workarr_t(kpw+1,jp,ip)
             c_interp_2 = c31 * workarr_t(kpw-1,jp,ip+1) + c32 * workarr_t(kpw,jp,ip+1)            &
                        + c33 * workarr_t(kpw+1,jp,ip+1)
!
!--          Use averages of the neighbouring matching grid-line values
             DO  ic = ifl(ip), ifl(ip+1)
                child_array(kc,jfl(jp):jfu(jp),ic) = 0.5_wp * ( c_interp_1 + c_interp_2 )
             ENDDO
!
!--          Then set the values along the matching grid-lines
             IF ( MOD( ifl(ip), igsr ) == 0 ) THEN
!
!--             First interpolate to the flux point using the 3rd-order WS scheme
                c_interp_1 = c31 * workarr_t(kpw-1,jp,ip) + c32 * workarr_t(kpw,jp,ip)             &
                           + c33 * workarr_t(kpw+1,jp,ip)
                child_array(kc,jfl(jp):jfu(jp),ifl(ip)) = c_interp_1
             ENDIF

          ENDDO
       ENDDO
!
!--    Finally, set the values along the last matching grid-line
       IF ( MOD( ifl(iprw), igsr ) == 0 ) THEN
          DO  jp = jpsw, jpnw
!
!--          First interpolate to the flux point using the 3rd-order WS scheme
             c_interp_1 = c31 * workarr_t(kpw-1,jp,iprw) + c32 * workarr_t(kpw,jp,iprw)            &
                        + c33 * workarr_t(kpw+1,jp,iprw)
             child_array(kc,jfl(jp):jfu(jp),ifl(iprw)) = c_interp_1
          ENDDO
       ENDIF
!
!--    A gap may still remain in some cases if the subdomain size is not divisible by the
!--    grid-spacing ratio. In such a case, fill the gap. Note however, this operation may produce
!--    some additional momentum conservation error.
       IF ( ifl(iprw) < nxr ) THEN
          DO  jp = jpsw, jpnw
             DO  ic = ifl(iprw) + 1, nxr
                child_array(kc,jfl(jp):jfu(jp),ic) = child_array(kc,jfl(jp):jfu(jp),ifl(iprw))
             ENDDO
          ENDDO
       ENDIF

    ELSE IF ( var == 'v' ) THEN

       DO  ip = iplw, iprw
          DO  jp = jpsw, jpnw-1
!
!--          First interpolate to the flux point using the 3rd-order WS scheme
             c_interp_1 = c31 * workarr_t(kpw-1,jp,ip)   + c32 * workarr_t(kpw,jp,ip)              &
                        + c33 * workarr_t(kpw+1,jp,ip)
             c_interp_2 = c31 * workarr_t(kpw-1,jp+1,ip) + c32 * workarr_t(kpw,jp+1,ip)            &
                        + c33 * workarr_t(kpw+1,jp+1,ip)
!
!--          Use averages of the neighbouring matching grid-line values
             DO  jc = jfl(jp), jfl(jp+1)
                child_array(kc,jc,ifl(ip):ifu(ip)) = 0.5_wp * ( c_interp_1 + c_interp_2 )
             ENDDO
!
!--          Then set the values along the matching grid-lines
             IF ( MOD( jfl(jp), jgsr ) == 0 ) THEN
                c_interp_1 = c31 * workarr_t(kpw-1,jp,ip) + c32 * workarr_t(kpw,jp,ip)             &
                           + c33 * workarr_t(kpw+1,jp,ip)
                child_array(kc,jfl(jp),ifl(ip):ifu(ip)) = c_interp_1
             ENDIF

          ENDDO

       ENDDO
!
!--    Finally, set the values along the last matching grid-line
       IF ( MOD( jfl(jpnw), jgsr ) == 0 ) THEN
          DO  ip = iplw, iprw
!
!--          First interpolate to the flux point using the 3rd-order WS scheme
             c_interp_1 = c31 * workarr_t(kpw-1,jpnw,ip) + c32 * workarr_t(kpw,jpnw,ip)            &
                        + c33 * workarr_t(kpw+1,jpnw,ip)
             child_array(kc,jfl(jpnw),ifl(ip):ifu(ip)) = c_interp_1
          ENDDO
       ENDIF
!
!--    A gap may still remain in some cases if the subdomain size is not divisible by the
!--    grid-spacing ratio. In such a case, fill the gap. Note however, this operation may produce
!--    some additional momentum conservation error.
       IF ( jfl(jpnw) < nyn ) THEN
          DO  ip = iplw, iprw
             DO  jc = jfl(jpnw)+1, nyn
                child_array(kc,jc,ifl(ip):ifu(ip)) = child_array(kc,jfl(jpnw),ifl(ip):ifu(ip))
             ENDDO
          ENDDO
       ENDIF

    ELSE  ! Any scalar variable

       DO  ip = iplw, iprw
          DO  jp = jpsw, jpnw
!
!--          First interpolate to the flux point using the 3rd-order WS scheme
             c_interp_1 = c31 * workarr_t(kpw-1,jp,ip) + c32 * workarr_t(kpw,jp,ip)                &
                        + c33 * workarr_t(kpw+1,jp,ip)
             DO  ic = ifl(ip), ifu(ip)
                DO  jc = jfl(jp), jfu(jp)
                   child_array(kc,jc,ic) = c_interp_1
                ENDDO
             ENDDO

          ENDDO
       ENDDO

    ENDIF  ! var
!
!-- Just fill up the redundant second ghost-node layer in case of var == w.
    IF ( var == 'w' ) THEN
       child_array(nzt+1,:,:) = child_array(nzt,:,:)
    ENDIF

 END SUBROUTINE pmci_interp_1sto_t



!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_anterp_tophat( child_array, parent_array, kct, ifl, ifu, jfl, jfu, kfl, kfu,      &
                                ijkfc, var )
!
This subroutine is 4817 !-- based on the first-order numerical integration of the child-grid values contained within the 4818 !-- anterpolation cell (Clark & Farley, Journal of the Atmospheric Sciences 41(3), 1984). 4819 4820 IMPLICIT NONE 4821 4822 CHARACTER(LEN=*), INTENT(IN) :: var !< Variable symbol: 'u', 'v', 'w' or 's' 4823 4824 INTEGER(iwp), INTENT(IN) :: kct !< Top boundary index for anterpolation along z 4825 4826 INTEGER(iwp), DIMENSION(0:pg%nz+1,jpsa:jpna,ipla:ipra), INTENT(IN) :: ijkfc !< number of child grid points contributing 4827 !< to a parent grid box 4828 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifl !< Indicates start index of child cells belonging to certain 4829 !< parent cell - x direction 4830 INTEGER(iwp), DIMENSION(ipla:ipra), INTENT(IN) :: ifu !< Indicates end index of child cells belonging to certain 4831 !< parent cell - x direction 4832 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfl !< Indicates start index of child cells belonging to certain 4833 !< parent cell - y direction 4834 INTEGER(iwp), DIMENSION(jpsa:jpna), INTENT(IN) :: jfu !< Indicates end index of child cells belonging to certain 4835 !< parent cell - y direction 4836 INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) :: kfl !< Indicates start index of child cells belonging to certain 4837 !< parent cell - z direction 4838 INTEGER(iwp), DIMENSION(0:pg%nz+1), INTENT(IN) :: kfu !< Indicates end index of child cells belonging to certain 4839 !< parent cell - z direction 4840 4841 REAL(wp), DIMENSION(nzb:nzt+1,nysg:nyng,nxlg:nxrg), INTENT(IN) :: child_array !< Child-grid array 4842 4843 REAL(wp), DIMENSION(0:pg%nz+1,jps:jpn,ipl:ipr), INTENT(INOUT) :: parent_array !< Parent-grid array 4844 4845 ! 
!-- Local variables:
    INTEGER(iwp) ::  ic          !< Running index x-direction - child grid
    INTEGER(iwp) ::  ip          !< Running index x-direction - parent grid
    INTEGER(iwp) ::  ipl_anterp  !< Left boundary index for anterpolation along x
    INTEGER(iwp) ::  ipr_anterp  !< Right boundary index for anterpolation along x
    INTEGER(iwp) ::  jc          !< Running index y-direction - child grid
    INTEGER(iwp) ::  jp          !< Running index y-direction - parent grid
    INTEGER(iwp) ::  jpn_anterp  !< North boundary index for anterpolation along y
    INTEGER(iwp) ::  jps_anterp  !< South boundary index for anterpolation along y
    INTEGER(iwp) ::  kc          !< Running index z-direction - child grid
    INTEGER(iwp) ::  kp          !< Running index z-direction - parent grid
    INTEGER(iwp) ::  kpb_anterp = 0  !< Bottom boundary index for anterpolation along z
    INTEGER(iwp) ::  kpt_anterp  !< Top boundary index for anterpolation along z
    INTEGER(iwp) ::  var_flag    !< bit number used to flag topography on respective grid

    REAL(wp) ::  cellsum  !< Sum of respective child cells belonging to parent cell

!
!-- Define the index bounds ipl_anterp, ipr_anterp, jps_anterp and jpn_anterp.
!-- Note that kcb_anterp is simply zero and kct_anterp depends on kct which enters here as a
!-- parameter and it is determined in pmci_define_index_mapping. Note that the grid points directly
!-- used also for interpolation (from parent to child) are always excluded from anterpolation, e.g.
!-- anterpolation is maximally only from 0:kct-1, since kct is directly used for interpolation.
!-- Similar restriction is applied to the lateral boundaries as well. An additional buffer is also
!-- applied (default value for anterpolation_buffer_width = 2) in order to avoid unphysical
!-- accumulation of kinetic energy.
    ipl_anterp = ipl
    ipr_anterp = ipr
    jps_anterp = jps
    jpn_anterp = jpn
    kpb_anterp = 0
    kpt_anterp = kct - 1 - anterpolation_buffer_width

    IF ( nesting_mode /= 'vertical' ) THEN
!
!--    Set the anterpolation buffers on the lateral boundaries
       ipl_anterp = MAX( ipl, iplg + 3 + anterpolation_buffer_width )
       ipr_anterp = MIN( ipr, iprg - 3 - anterpolation_buffer_width )
       jps_anterp = MAX( jps, jpsg + 3 + anterpolation_buffer_width )
       jpn_anterp = MIN( jpn, jpng - 3 - anterpolation_buffer_width )

    ENDIF
!
!-- Set masking bit for topography flags
    IF ( var == 'u' ) THEN
       var_flag = 1
    ELSEIF ( var == 'v' ) THEN
       var_flag = 2
    ELSEIF ( var == 'w' ) THEN
       var_flag = 3
    ELSE
       var_flag = 0
    ENDIF
!
!-- Note that ip, jp, and kp are parent-grid indices and ic, jc, and kc are child-grid indices.
    DO  ip = ipl_anterp, ipr_anterp
       DO  jp = jps_anterp, jpn_anterp
!
!--       For simplicity anterpolate within buildings and under elevated terrain too
          DO  kp = kpb_anterp, kpt_anterp
             cellsum = 0.0_wp
             DO  ic = ifl(ip), ifu(ip)
                DO  jc = jfl(jp), jfu(jp)
                   DO  kc = kfl(kp), kfu(kp)
                      cellsum = cellsum + MERGE( child_array(kc,jc,ic), 0.0_wp,                    &
                                                 BTEST( wall_flags_total_0(kc,jc,ic), var_flag ) )
                   ENDDO
                ENDDO
             ENDDO
!
!--          In case all child grid points are inside topography, i.e. ijkfc and cellsum are zero,
!--          also parent solution would have zero values at that grid point, which may cause
!--          problems in particular for the temperature. Therefore, in case cellsum is zero, keep
!--          the parent solution at this point.
             IF ( ijkfc(kp,jp,ip) /= 0 ) THEN
                parent_array(kp,jp,ip) = cellsum / REAL( ijkfc(kp,jp,ip), KIND = wp )
             ENDIF

          ENDDO
       ENDDO
    ENDDO

 END SUBROUTINE pmci_anterp_tophat

#endif

 END SUBROUTINE pmci_child_datatrans


!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> Set boundary conditions for the prognostic quantities after interpolation and anterpolation at
!> upward- and downward facing surfaces.
!> @todo: add Dirichlet boundary conditions for pot. temperature, humidity and passive scalar.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_boundary_conds

…

    INTEGER(iwp) ::  m  !< Running index for surface type
    INTEGER(iwp) ::  n  !< Running index for number of chemical species


    IF ( debug_output_timestep )  CALL debug_message( 'pmci_boundary_conds', 'start' )

…

!-- Upward-facing surfaces
    DO  m = 1, bc_h(0)%ns
       ic = bc_h(0)%i(m)
       jc = bc_h(0)%j(m)
       kc = bc_h(0)%k(m)
…
!-- Downward-facing surfaces
    DO  m = 1, bc_h(1)%ns
       ic = bc_h(1)%i(m)
       jc = bc_h(1)%j(m)
       kc = bc_h(1)%k(m)
…
!-- Upward-facing surfaces
    DO  m = 1, bc_h(0)%ns
       ic = bc_h(0)%i(m)
       jc = bc_h(0)%j(m)
       kc = bc_h(0)%k(m)
…
!-- Downward-facing surfaces
    DO  m = 1, bc_h(1)%ns
       ic = bc_h(1)%i(m)
       jc = bc_h(1)%j(m)
       kc = bc_h(1)%k(m)
…
       IF ( ibc_pt_b == 1 ) THEN
          DO  m = 1, bc_h(0)%ns
             ic = bc_h(0)%i(m)
             jc = bc_h(0)%j(m)
             kc = bc_h(0)%k(m)
…
          ENDDO
          DO  m = 1, bc_h(1)%ns
             ic = bc_h(1)%i(m)
             jc = bc_h(1)%j(m)
             kc = bc_h(1)%k(m)
             pt(kc+1,jc,ic) = pt(kc,jc,ic)
          ENDDO
       ENDIF
    ENDIF
…
       IF ( ibc_q_b == 1 ) THEN
          DO  m = 1, bc_h(0)%ns
             ic = bc_h(0)%i(m)
             jc = bc_h(0)%j(m)
             kc = bc_h(0)%k(m)
             q(kc-1,jc,ic) = q(kc,jc,ic)
          ENDDO
          DO  m = 1, bc_h(1)%ns
             ic = bc_h(1)%i(m)
             jc = bc_h(1)%j(m)
             kc = bc_h(1)%k(m)
             q(kc+1,jc,ic) = q(kc,jc,ic)
          ENDDO
       ENDIF
       IF ( bulk_cloud_model  .AND.  microphysics_morrison ) THEN
          DO  m = 1, bc_h(0)%ns
             ic = bc_h(0)%i(m)
             jc = bc_h(0)%j(m)
             kc = bc_h(0)%k(m)
             nc(kc-1,jc,ic) = 0.0_wp
             qc(kc-1,jc,ic) = 0.0_wp
          ENDDO
          DO  m = 1, bc_h(1)%ns
             ic = bc_h(1)%i(m)
             jc = bc_h(1)%j(m)
             kc = bc_h(1)%k(m)
…
             nc(kc+1,jc,ic) = 0.0_wp
             qc(kc+1,jc,ic) = 0.0_wp
          ENDDO
       ENDIF

       IF ( bulk_cloud_model  .AND.  microphysics_seifert ) THEN
          DO  m = 1, bc_h(0)%ns
             ic = bc_h(0)%i(m)
             jc = bc_h(0)%j(m)
             kc = bc_h(0)%k(m)
             nr(kc-1,jc,ic) = 0.0_wp
             qr(kc-1,jc,ic) = 0.0_wp
          ENDDO
          DO  m = 1, bc_h(1)%ns
             ic = bc_h(1)%i(m)
             jc = bc_h(1)%j(m)
             kc = bc_h(1)%k(m)
             nr(kc+1,jc,ic) = 0.0_wp
             qr(kc+1,jc,ic) = 0.0_wp
          ENDDO
       ENDIF

…
       IF ( ibc_s_b == 1 ) THEN
          DO  m = 1, bc_h(0)%ns
             ic = bc_h(0)%i(m)
             jc = bc_h(0)%j(m)
             kc = bc_h(0)%k(m)
             s(kc-1,jc,ic) = s(kc,jc,ic)
          ENDDO
          DO  m = 1, bc_h(1)%ns
             ic = bc_h(1)%i(m)
             jc = bc_h(1)%j(m)
             kc = bc_h(1)%k(m)
             s(kc+1,jc,ic) = s(kc,jc,ic)
          ENDDO
       ENDIF
    ENDIF
…
       DO  n = 1, nspec
          DO  m = 1, bc_h(0)%ns
             ic = bc_h(0)%i(m)
             jc = bc_h(0)%j(m)
             kc = bc_h(0)%k(m)
             chem_species(n)%conc(kc-1,jc,ic) = chem_species(n)%conc(kc,jc,ic)
          ENDDO
          DO  m = 1, bc_h(1)%ns
             ic = bc_h(1)%i(m)
             jc = bc_h(1)%j(m)
             kc = bc_h(1)%k(m)
…
          ENDDO
       ENDIF
    ENDIF
!
!-- Set Neumann boundary conditions for aerosols and salsa gases
…
          ENDDO
       ENDIF
    ENDIF

    IF ( debug_output_timestep )  CALL debug_message( 'pmci_boundary_conds', 'end' )

…


!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_ensure_nest_mass_conservation

!
!-- Adjust the volume-flow rate through the nested boundaries so that the net volume flow through
!-- all boundaries of the current nest domain becomes zero.
    IMPLICIT NONE

    INTEGER(iwp) ::  i     !< Running index in the x-direction
    INTEGER(iwp) ::  ierr  !< MPI error code
    INTEGER(iwp) ::  j     !< Running index in the y-direction
    INTEGER(iwp) ::  k     !< Running index in the z-direction
    INTEGER(iwp) ::  n     !< Running index over the boundary faces: l, r, s, n and t

    REAL(wp) ::  dxdy                  !< Surface area of grid cell top face
    REAL(wp) ::  innor                 !< Inner normal vector of the grid cell face
    REAL(wp) ::  sub_sum               !< Intermediate sum for reducing the loss of significant digits in 2-D summations
    REAL(wp) ::  u_corr_left           !< Correction added to the left boundary value of u
    REAL(wp) ::  u_corr_right          !< Correction added to the right boundary value of u
    REAL(wp) ::  v_corr_south          !< Correction added to the south boundary value of v
    REAL(wp) ::  v_corr_north          !< Correction added to the north boundary value of v
    REAL(wp) ::  volume_flux_integral  !< Surface integral of volume flux over the domain boundaries
    REAL(wp) ::  volume_flux_local     !< Surface integral of volume flux over the subdomain boundary face
    REAL(wp) ::  w_corr_top            !< Correction added to the top boundary value of w

    REAL(wp), DIMENSION(5) ::  volume_flux  !< Surface integral of volume flux over each boundary face of the domain


!
!-- Sum up the volume flow through the left boundary
…
          sub_sum = 0.0_wp
          DO  k = nzb+1, nzt
             sub_sum = sub_sum + innor * u(k,j,i) * dzw(k)                                         &
                     * MERGE( 1.0_wp, 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 1 ) )
          ENDDO
          volume_flux_local = volume_flux_local + sub_sum
…
          sub_sum = 0.0_wp
          DO  k = nzb+1, nzt
             sub_sum = sub_sum + innor * u(k,j,i) * dzw(k)                                         &
                     * MERGE( 1.0_wp, 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 1 ) )
          ENDDO
          volume_flux_local = volume_flux_local + sub_sum
…
!
!-- Sum up the volume flow through the south boundary
    volume_flux(3) = 0.0_wp
    volume_flux_local = 0.0_wp
    IF ( bc_dirichlet_s ) THEN
…
          sub_sum = 0.0_wp
          DO  k = nzb+1, nzt
             sub_sum = sub_sum + innor * v(k,j,i) * dzw(k)                                         &
                     * MERGE( 1.0_wp, 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 2 ) )
          ENDDO
          volume_flux_local = volume_flux_local + sub_sum
       ENDDO
    ENDIF

#if defined( __parallel )
    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
…
    volume_flux(3) = volume_flux_local
#endif
!
!-- Sum up the volume flow through the north boundary
    volume_flux(4) = 0.0_wp
…
          sub_sum = 0.0_wp
          DO  k = nzb+1, nzt
             sub_sum = sub_sum + innor * v(k,j,i) * dzw(k)                                         &
                     * MERGE( 1.0_wp, 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 2 ) )
          ENDDO
          volume_flux_local = volume_flux_local + sub_sum
…
       volume_flux_integral = volume_flux_integral + volume_flux(n)
    ENDDO
!
!-- Correction equally distributed to all nest boundaries, area_total must be used as area.
!-- Note that face_area(6) is the total area (=sum from 1 to 5)
…
    v_corr_north = w_corr_top
!!
!!-- Just print out the net volume fluxes through each boundary. Only the root process prints.
!    if ( myid == 0 )  then
!       write( 9, "(5(e14.7,2x),4x,e14.7,4x,e12.5,4x,5(e14.7,2x))" )                               &
!            volume_flux(1), volume_flux(2), volume_flux(3), volume_flux(4), volume_flux(5),       &
!            volume_flux_integral, c_correc,                                                       &
!            u_corr_left, u_corr_right, v_corr_south, v_corr_north, w_corr_top
!       flush( 9 )
!    endif
!
!-- Correct the top-boundary value of w
…
       DO  j = nys, nyn
          DO  k = nzb + 1, nzt
             u(k,j,i) = u(k,j,i) + u_corr_left                                                     &
                      * MERGE( 1.0_wp, 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 1 ) )
          ENDDO
       ENDDO
…
       DO  j = nys, nyn
          DO  k = nzb + 1, nzt
             u(k,j,i) = u(k,j,i) + u_corr_right                                                    &
                      * MERGE( 1.0_wp, 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 1 ) )
          ENDDO
       ENDDO
…
       DO  j = nysg, nys
          DO  k = nzb + 1, nzt
             v(k,j,i) = v(k,j,i) + v_corr_south                                                    &
                      * MERGE( 1.0_wp, 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 2 ) )
          ENDDO
       ENDDO
…
          DO  j = nyn+1, nyng
             DO  k = nzb + 1, nzt
                v(k,j,i) = v(k,j,i) + v_corr_north                                                 &
                         * MERGE( 1.0_wp, 0.0_wp, BTEST( wall_flags_total_0(k,j,i), 2 ) )
             ENDDO
          ENDDO
       ENDDO
    ENDIF


 END SUBROUTINE pmci_ensure_nest_mass_conservation



!--------------------------------------------------------------------------------------------------!
! Description:
! ------------
!> @Todo: Missing subroutine description.
!--------------------------------------------------------------------------------------------------!
 SUBROUTINE pmci_ensure_nest_mass_conservation_vertical

!
!-- Adjust the volume-flow rate through the top boundary so that the net volume flow through all
!-- boundaries of the current nest domain becomes zero.
    IMPLICIT NONE

    INTEGER(iwp) ::  i     !< Running index in the x-direction
    INTEGER(iwp) ::  ierr  !< MPI error code
    INTEGER(iwp) ::  j     !< Running index in the y-direction
    INTEGER(iwp) ::  k     !< Running index in the z-direction

    REAL(wp) ::  dxdy               !< Surface area of grid cell top face
    REAL(wp) ::  sub_sum            !< Intermediate sum for reducing the loss of significant digits in 2-D summations
    REAL(wp) ::  top_area           !< Top boundary face area
    REAL(wp) ::  volume_flux        !< Surface integral of volume flux over the top boundary face
    REAL(wp) ::  volume_flux_local  !< Surface integral of volume flux over the subdomain boundary face
    REAL(wp) ::  w_corr_top         !< Correction added to the top boundary value of w


…

    w_corr_top = volume_flux / top_area
!!
!!-- Just print out the net volume fluxes through each boundary. Only the root process prints.
!    if ( myid == 0 )  then
!       write( 9, "(5(e14.7,2x),4x,e14.7,4x,e12.5,4x,5(e14.7,2x))" )                               &
!            volume_flux(1), volume_flux(2), volume_flux(3), volume_flux(4), volume_flux(5),       &
!            volume_flux_integral, c_correc,                                                       &
!            u_corr_left, u_corr_right, v_corr_south, v_corr_north, w_corr_top
!       flush( 9 )
!    endif
!
!-- Correct the top-boundary value of w
…
       ENDDO
    ENDDO

 END SUBROUTINE pmci_ensure_nest_mass_conservation_vertical
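The top-boundary correction in the two subroutines above amounts to one idea: integrate the net volume flux over the boundary face and subtract a uniform `w_corr_top = volume_flux / top_area` from `w` so the net flux vanishes. The following is a minimal stand-alone Python sketch of that idea; the function name, the 2-D list layout of `w_top`, and the grid spacings are hypothetical illustrations, not PALM variables (topography masking and MPI reduction are omitted).

```python
def correct_top_boundary(w_top, dx, dy):
    """Return a corrected top-boundary w field whose net volume flux is zero.

    w_top : 2-D list of vertical velocities on the top face (hypothetical layout)
    dx, dy: horizontal grid spacings; each cell's top-face area is dx * dy
    """
    n_cells = sum(len(row) for row in w_top)
    top_area = n_cells * dx * dy                        # total top-face area
    volume_flux = sum(sum(row) for row in w_top) * dx * dy  # net outflow through the top
    w_corr_top = volume_flux / top_area                 # uniform correction per unit area
    # Subtracting the correction everywhere drives the surface integral to zero
    return [[w - w_corr_top for w in row] for row in w_top]

w = [[0.5, -1.2, 0.3], [0.1, 0.4, -0.6]]
w_fixed = correct_top_boundary(w, dx=2.0, dy=2.0)
net_flux = sum(sum(row) for row in w_fixed) * 2.0 * 2.0
# net_flux is now zero up to floating-point round-off
```

Because the correction is the area-weighted mean of `w`, this is exactly the uniform redistribution the Fortran code performs, just without the `MERGE`/`BTEST` topography masking and the `MPI_ALLREDUCE` across subdomains.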