Changeset 4626 for palm/trunk/SOURCE/dynamics_mod.f90
- Timestamp: Jul 26, 2020 9:49:48 AM
- Files: 1 edited
palm/trunk/SOURCE/dynamics_mod.f90
r4517 → r4626. The file was re-formatted to follow the PALM coding standard (100-character line length, re-aligned continuation ampersands, alphabetized ONLY lists). Excerpts, with removed lines marked "-" and added lines marked "+"; unchanged hunks are elided with "…":

  ! This file is part of the PALM model system.
  !
- ! PALM is free software: you can redistribute it and/or modify it under the
- ! terms of the GNU General Public License as published by the Free Software
- ! Foundation, either version 3 of the License, or (at your option) any later
- ! version.
- !
- ! PALM is distributed in the hope that it will be useful, but WITHOUT ANY
- ! WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
- ! A PARTICULAR PURPOSE. See the GNU General Public License for more details.
- !
- ! You should have received a copy of the GNU General Public License along with
- ! PALM. If not, see <http://www.gnu.org/licenses/>.
+ ! PALM is free software: you can redistribute it and/or modify it under the terms of the GNU General
+ ! Public License as published by the Free Software Foundation, either version 3 of the License, or
+ ! (at your option) any later version.
+ !
+ ! PALM is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the
+ ! implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
+ ! Public License for more details.
+ !
+ ! You should have received a copy of the GNU General Public License along with PALM. If not, see
+ ! <http://www.gnu.org/licenses/>.
  !
  ! Copyright 1997-2020 Leibniz Universitaet Hannover
…
  ! Former revisions:
  ! -----------------
  ! $Id$
+ ! file re-formatted to follow the PALM coding standard
+ !
+ ! 4517 2020-05-03 14:29:30Z raasch
  ! added restart with MPI-IO for reading local arrays
  !
  ! 4505 2020-04-20 15:37:15Z schwenkel
  ! Add flag for saturation check
  !
  ! 4495 2020-04-13 20:11:20Z resler
  ! restart data handling with MPI-IO added
  !
  ! 4360 2020-01-07 11:25:50Z suehring
  ! Bugfix for last commit.
  !
  ! 4359 2019-12-30 13:36:50Z suehring
  ! Refine post-initialization check for realistically inital values of mixing ratio. Give an error
  ! message for faulty initial values, but only a warning in a restart run.
  !
  ! 4347 2019-12-18 13:18:33Z suehring
  ! Implement post-initialization check for realistically inital values of mixing ratio
  !
  ! 4281 2019-10-29 15:15:39Z schwenkel
  ! Moved boundary conditions in dynamics module
  !
  ! 4097 2019-07-15 11:59:11Z suehring
  ! Avoid overlong lines - limit is 132 characters per line
…
In the USE statements the continuation ampersands are re-aligned to the 100-character column, e.g.:

- USE arrays_3d,                                                            &
-     ONLY:  c_u, c_u_m, c_u_m_l, c_v, c_v_m, c_v_m_l, c_w, c_w_m, c_w_m_l, &
+ USE arrays_3d,                                                                                   &
+     ONLY:  c_u, c_u_m, c_u_m_l, c_v, c_v_m, c_v_m_l, c_w, c_w_m, c_w_m_l,                        &
             dzu, exner, hyp,
             pt, pt_1, pt_2, pt_init, pt_p,
             q, q_1, q_2, q_p,
             s, s_1, s_2, s_p,
             u, u_1, u_2, u_init, u_p, u_m_l, u_m_n, u_m_r, u_m_s,
             v, v_1, v_2, v_p, v_init, v_m_l, v_m_n, v_m_r, v_m_s,
             w, w_1, w_2, w_p, w_m_l, w_m_n, w_m_r, w_m_s
…
In USE control_parameters the ONLY list is additionally alphabetized; humidity, neutral and
passive_scalar move into alphabetical position:

- ...  dt_3d, ibc_pt_b, ibc_pt_t, ibc_q_b, ibc_q_t, ibc_s_b, ibc_s_t, ibc_uv_b, ibc_uv_t,
-      initializing_actions, intermediate_timestep_count, length, message_string, nesting_offline,
-      nudging, restart_string, humidity, neutral, passive_scalar, tsc, use_cmax
+ ...  dt_3d, humidity, ibc_pt_b, ibc_pt_t, ibc_q_b, ibc_q_t, ibc_s_b, ibc_s_t, ibc_uv_b, ibc_uv_t,
+      initializing_actions, intermediate_timestep_count, length, message_string, nesting_offline,
+      neutral, nudging, passive_scalar, restart_string, tsc, use_cmax
…
The same ampersand re-alignment is applied to USE grid_variables (ddx, ddy, dx, dy),
USE indices (nbgp, nx, nxl, nxlg, nxr, nxrg, ny, nys, nysg, nyn, nyng, nzb, nzt),
USE pmc_interface (nesting_mode) and USE surface_mod (bc_h).
…
  !-- Public functions
The PUBLIC list of module procedures is re-aligned in the same way:
dynamics_parin, dynamics_check_parameters, dynamics_check_data_output_ts,
dynamics_check_data_output_pr, dynamics_check_data_output, dynamics_init_masks,
dynamics_define_netcdf_grid, dynamics_init_arrays, dynamics_init, dynamics_init_checks,
dynamics_header, dynamics_actions, dynamics_non_advective_processes, dynamics_exchange_horiz,
dynamics_prognostic_equations, dynamics_boundary_conditions, dynamics_swap_timelevel,
dynamics_3d_data_averaging, dynamics_data_output_2d, dynamics_data_output_3d, dynamics_statistics,
dynamics_rrd_global, dynamics_rrd_local, dynamics_wrd_global, dynamics_wrd_local,
dynamics_last_actions
  !-- Public parameters, constants and initial values
- PUBLIC &
+ PUBLIC                                                                                            &
     dynamics_module_enabled
…
In dynamics_parin the NAMELIST declaration is re-aligned and a blank line is added after it:

-    NAMELIST /dynamics_parameters/ &
+    NAMELIST /dynamics_parameters/                                                                 &
        dynamics_module_enabled
+
     line = ' '
…
In dynamics_check_data_output_ts the CHARACTER declarations move ahead of the INTEGER ones:

+    CHARACTER (LEN=*), DIMENSION(dots_max), INTENT(INOUT) :: dots_label
+    CHARACTER (LEN=*), DIMENSION(dots_max), INTENT(INOUT) :: dots_unit
     INTEGER(iwp), INTENT(IN)    :: dots_max
     INTEGER(iwp), INTENT(INOUT) :: dots_num
-    CHARACTER (LEN=*), DIMENSION(dots_max), INTENT(INOUT) :: dots_label
-    CHARACTER (LEN=*), DIMENSION(dots_max), INTENT(INOUT) :: dots_unit
…
Likewise in dynamics_check_data_output_pr, dopr_unit moves ahead of unit and variable:

+    CHARACTER (LEN=*) :: dopr_unit !< local value of dopr_unit
     CHARACTER (LEN=*) :: unit      !<
     CHARACTER (LEN=*) :: variable  !<
-    CHARACTER (LEN=*) :: dopr_unit !< local value of dopr_unit
…
Divider comments are widened from 80 to 100 characters throughout, e.g. around dynamics_init_masks:

- !------------------------------------------------------------------------------!
+ !--------------------------------------------------------------------------------------------------!
  !
  ! Description:
  ! ------------
  !> Initialize module-specific masked output
- !------------------------------------------------------------------------------!
+ !--------------------------------------------------------------------------------------------------!
  SUBROUTINE dynamics_init_masks( variable, unit )
…
In dynamics_init_checks the comment on the initial-mixing-ratio check is re-wrapped to 100 columns
(content unchanged):

+ !-- Check for realistic initial mixing ratio. This must be in a realistic phyiscial range and must
+ !-- not exceed the saturation mixing ratio by more than 2 percent. Please note, the check is
+ !-- performed for each grid point (not just for a vertical profile), in order to cover also
+ !-- three-dimensional initialization. Note, this check gives an error only for the initial run not
+ !-- for a restart run. In case there are no cloud physics considered, the mixing ratio can exceed
+ !-- the saturation moisture. This case a warning is given.
  IF ( humidity  .AND.  .NOT. neutral  .AND.  check_realistic_q )  THEN
     DO  i = nxl, nxr
…
              q_s = rd_d_rv * e_s / ( hyp(k) - e_s )

              IF ( q(k,j,i) > 1.02_wp * q_s )  realistic_q = .FALSE.
           ENDDO
        ENDDO
     ENDDO
  !
  !-- Since the check is performed locally, merge the logical flag from all mpi ranks,
  !-- in order to do not print the error message multiple times.
  #if defined( __parallel )
…
  !-- Format-descriptors
  100 FORMAT (//' *** dynamic module disabled'/)
- 110 FORMAT (//1X,78('#')                            &
-             //' User-defined variables and actions:'/ &
+ 110 FORMAT (//1X,78('#')                                                                          &
+             //' User-defined variables and actions:'/                                             &
              ' -----------------------------------'//)
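The check shown in this hunk (the mixing ratio q must not exceed the saturation mixing ratio q_s by more than 2 percent, with q_s = rd_d_rv * e_s / ( hyp(k) - e_s )) can be sketched outside of Fortran. A minimal Python sketch, assuming the Magnus formula for the saturation vapour pressure e_s (the PALM `magnus` function itself is not shown in this diff, so its coefficients here are an assumption) and rd_d_rv = R_dry/R_vapour ≈ 0.622:

```python
import math

RD_D_RV = 0.622  # R_dry / R_vapour (assumed value, not taken from the diff)

def magnus_e_s(t_kelvin):
    """Saturation vapour pressure in Pa (Magnus form; coefficients assumed)."""
    t_c = t_kelvin - 273.15
    return 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))

def realistic_q(q, t_kelvin, p_pa, tolerance=1.02):
    """Mimic the per-grid-point check: mixing ratio q (kg/kg) must not
    exceed the saturation mixing ratio q_s by more than 2 percent."""
    e_s = magnus_e_s(t_kelvin)
    q_s = RD_D_RV * e_s / (p_pa - e_s)   # same formula as in the diff
    return q <= tolerance * q_s

# At 20 degC and standard surface pressure, q_s is roughly 14.7 g/kg:
print(realistic_q(0.010, 293.15, 101325.0))  # -> True  (well below saturation)
print(realistic_q(0.020, 293.15, 101325.0))  # -> False (exceeds 1.02 * q_s)
```

In the actual module the local result is additionally merged across all MPI ranks, so the error message is printed only once.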
In dynamics_actions the comments are re-wrapped:

  !-- Here the user-defined actions follow
- !-- No calls for single grid points are allowed at locations before and
- !-- after the timestep, since these calls are not within an i,j-loop
+ !-- No calls for single grid points are allowed at locations before and after the timestep, since
+ !-- these calls are not within an i,j-loop
  SELECT CASE ( location )
…
  CASE ( 'u-tendency' )

+ !
  !-- Next line is to avoid compiler warning about unused variables. Please remove.
     IF ( i + j < 0 )  CONTINUE
…
In dynamics_prognostic_equations the dummy-argument declarations are re-ordered alphabetically:

     INTEGER(iwp), INTENT(IN) :: i           !< grid index in x-direction
+    INTEGER(iwp), INTENT(IN) :: i_omp_start !< first loop index of i-loop in prognostic_equations
     INTEGER(iwp), INTENT(IN) :: j           !< grid index in y-direction
-    INTEGER(iwp), INTENT(IN) :: i_omp_start !< first loop index of i-loop in prognostic_equations
     INTEGER(iwp), INTENT(IN) :: tn          !< task number of openmp task
…
In dynamics_boundary_conditions:

  !-- Vertical nesting: Vertical velocity not zero at the top of the fine grid
- IF (  .NOT.  child_domain  .AND.  .NOT.  nesting_offline  .AND. &
-       TRIM(coupling_mode) /= 'vnested_fine' )  THEN
+ IF (  .NOT.  child_domain  .AND.  .NOT.  nesting_offline  .AND.                                   &
+       TRIM(coupling_mode) /= 'vnested_fine' )  THEN
     !$ACC KERNELS PRESENT(w_p)
     w_p(nzt:nzt+1,:,:) = 0.0_wp  !< nzt is not a prognostic level (but cf. pres)
…
  !-- Temperature at bottom and top boundary.
- !-- In case of coupled runs (ibc_pt_b = 2) the temperature is given by
- !-- the sea surface temperature of the coupled ocean model.
+ !-- In case of coupled runs (ibc_pt_b = 2) the temperature is given by the sea surface temperature
+ !-- of the coupled ocean model.
  !-- Dirichlet
  IF ( .NOT. neutral )  THEN
…
- !-- Boundary conditions for total water content,
- !-- bottom and top boundary (see also temperature)
+ !-- Boundary conditions for total water content, bottom and top boundary (see also temperature)
  IF ( humidity )  THEN
  !
  !-- Surface conditions for constant_humidity_flux
- !-- Run loop over all non-natural and natural walls. Note, in wall-datatype
- !-- the k coordinate belongs to the atmospheric grid point, therefore, set
- !-- q_p at k-1
+ !-- Run loop over all non-natural and natural walls. Note, in wall-datatype the k coordinate
+ !-- belongs to the atmospheric grid point, therefore, set q_p at k-1
     IF ( ibc_q_b == 0 )  THEN
…
  (the analogous comment for the passive scalar and s_p, with ibc_s_b, is re-wrapped identically)
…
- !-- In case of inflow or nest boundary at the south boundary the boundary for v
- !-- is at nys and in case of inflow or nest boundary at the left boundary the
- !-- boundary for u is at nxl. Since in prognostic_equations (cache optimized
- !-- version) these levels are handled as a prognostic level, boundary values
- !-- have to be restored here.
+ !-- In case of inflow or nest boundary at the south boundary the boundary for v is at nys and in
+ !-- case of inflow or nest boundary at the left boundary the boundary for u is at nxl. Since in
+ !-- prognostic_equations (cache optimized version) these levels are handled as a prognostic level,
+ !-- boundary values have to be restored here.
  IF ( bc_dirichlet_s )  THEN
     v_p(:,nys,:) = v_p(:,nys-1,:)
…
- !-- The same restoration for u at i=nxl and v at j=nys as above must be made
- !-- in case of nest boundaries. This must not be done in case of vertical nesting
- !-- mode as in that case the lateral boundaries are actually cyclic.
- !-- Lateral oundary conditions for TKE and dissipation are set
- !-- in tcm_boundary_conds.
+ !-- The same restoration for u at i=nxl and v at j=nys as above must be made in case of nest
+ !-- boundaries. This must not be done in case of vertical nesting mode as in that case the lateral
+ !-- boundaries are actually cyclic.
+ !-- Lateral oundary conditions for TKE and dissipation are set in tcm_boundary_conds.
  IF ( nesting_mode /= 'vertical'  .OR.  nesting_offline )  THEN
     IF ( bc_dirichlet_s )  THEN
…
  !-- Lateral boundary conditions for scalar quantities at the outflow.
  IF ( bc_radiation_s )  THEN
     pt_p(:,nys-1,:) = pt_p(:,nys,:)
…
  !-- Radiation boundary conditions for the velocities at the respective outflow.
- !-- The phase velocity is either assumed to the maximum phase velocity that
- !-- ensures numerical stability (CFL-condition) or calculated after
- !-- Orlanski(1976) and averaged along the outflow boundary.
+ !-- The phase velocity is either assumed to the maximum phase velocity that ensures numerical
+ !-- stability (CFL-condition) or calculated after Orlanski(1976) and averaged along the outflow
+ !-- boundary.
  IF ( bc_radiation_s )  THEN
…
  !-- Calculate the phase speeds for u, v, and w, first local and then average along the outflow
  !-- boundary.
…
The MPI reductions are re-wrapped to the new line length (the south boundary reduces over comm1dx):

- CALL MPI_ALLREDUCE( c_u_m_l(nzb+1), c_u_m(nzb+1), nzt-nzb, MPI_REAL, &
-                     MPI_SUM, comm1dx, ierr )
+ CALL MPI_ALLREDUCE( c_u_m_l(nzb+1), c_u_m(nzb+1), nzt-nzb, MPI_REAL, MPI_SUM, comm1dx,            &
+                     ierr )
  (likewise for c_v_m_l/c_v_m and c_w_m_l/c_w_m)
…
The radiation updates at the south boundary (j = -1 / 0) are re-aligned:

- u_p(k,-1,i) = u(k,-1,i) - dt_3d * tsc(2) * c_u_m(k) * &
-               ( u(k,-1,i) - u(k,0,i) ) * ddy
+ u_p(k,-1,i) = u(k,-1,i) - dt_3d * tsc(2) * c_u_m(k) *                                             &
+               ( u(k,-1,i) - u(k,0,i) ) * ddy
  (and analogously for v_p(k,0,i) with ( v(k,0,i) - v(k,1,i) ) and for w_p(k,-1,i) with
  ( w(k,-1,i) - w(k,0,i) ))
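The boundary update in these hunks, e.g. u_p(k,-1,i) = u(k,-1,i) - dt_3d * tsc(2) * c_u_m(k) * ( u(k,-1,i) - u(k,0,i) ) * ddy, is a one-sided upwind advection of the boundary value with phase speed c, where c is either clipped to the CFL limit or estimated after Orlanski (1976). A minimal 1-D Python sketch of the idea (the phase-speed estimate and the clipping are written here in simplified form; PALM's actual stencils, time levels, and averaging along the outflow boundary are not reproduced):

```python
def orlanski_phase_speed(phi_int_new, phi_int_old, phi_int2_old, dt, dx):
    """Estimate the outgoing phase speed c = -(dphi/dt)/(dphi/dx) at the
    first interior point, clipped to the CFL-stable range [0, dx/dt]."""
    denom = phi_int_old - phi_int2_old          # one-sided spatial difference
    c_max = dx / dt                             # maximum stable phase speed
    if denom == 0.0:
        return c_max                            # degenerate case: assume maximum
    c = -(phi_int_new - phi_int_old) / dt * dx / denom
    return min(max(c, 0.0), c_max)              # keep 0 <= c <= dx/dt

def radiation_update(phi_bound, phi_int, c, dt, dx):
    """Upwind update of the boundary value, mirroring the diff:
    phi_p(bound) = phi(bound) - dt * c * (phi(bound) - phi(interior)) / dx"""
    return phi_bound - dt * c * (phi_bound - phi_int) / dx

# With c at the CFL limit (c = dx/dt) the boundary simply copies the interior value:
print(radiation_update(phi_bound=2.0, phi_int=1.0, c=1.0, dt=1.0, dx=1.0))  # -> 1.0
```

In the module the locally computed phase speeds (c_u_m_l, c_v_m_l, c_w_m_l) are then summed over all ranks along the boundary with MPI_ALLREDUCE and averaged, so every rank applies the same c per vertical level.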
The same re-wrapping is applied in the radiation blocks for the north, left and right outflow
boundaries:

- !-- Calculate the phase speeds for u, v, and w, first local and then
- !-- average along the outflow boundary.
+ !-- Calculate the phase speeds for u, v, and w, first local and then average along the outflow
+ !-- boundary.
…
- CALL MPI_ALLREDUCE( c_u_m_l(nzb+1), c_u_m(nzb+1), nzt-nzb, MPI_REAL, &
-                     MPI_SUM, comm1dy, ierr )
+ CALL MPI_ALLREDUCE( c_u_m_l(nzb+1), c_u_m(nzb+1), nzt-nzb, MPI_REAL, MPI_SUM, comm1dy,            &
+                     ierr )
  (likewise for c_v_m_l/c_v_m and c_w_m_l/c_w_m; the south and north boundaries reduce over
  comm1dx, the left and right boundaries over comm1dy)
…
The re-aligned radiation updates at the north boundary (j = ny+1):

  u_p(k,ny+1,i) = u(k,ny+1,i) - dt_3d * tsc(2) * c_u_m(k) * ( u(k,ny+1,i) - u(k,ny,i) ) * ddy
  v_p(k,ny+1,i) = v(k,ny+1,i) - dt_3d * tsc(2) * c_v_m(k) * ( v(k,ny+1,i) - v(k,ny,i) ) * ddy
  w_p(k,ny+1,i) = w(k,ny+1,i) - dt_3d * tsc(2) * c_w_m(k) * ( w(k,ny+1,i) - w(k,ny,i) ) * ddy
…
at the left boundary (i = 0 / -1):

  u_p(k,j,0)  = u(k,j,0)  - dt_3d * tsc(2) * c_u_m(k) * ( u(k,j,0)  - u(k,j,1) ) * ddx
  v_p(k,j,-1) = v(k,j,-1) - dt_3d * tsc(2) * c_v_m(k) * ( v(k,j,-1) - v(k,j,0) ) * ddx
  w_p(k,j,-1) = w(k,j,-1) - dt_3d * tsc(2) * c_w_m(k) * ( w(k,j,-1) - w(k,j,0) ) * ddx
…
and at the right boundary (i = nx+1):

  u_p(k,j,nx+1) = u(k,j,nx+1) - dt_3d * tsc(2) * c_u_m(k) * ( u(k,j,nx+1) - u(k,j,nx) ) * ddx
  v_p(k,j,nx+1) = v(k,j,nx+1) - dt_3d * tsc(2) * c_v_m(k) * ( v(k,j,nx+1) - v(k,j,nx) ) * ddx
  w_p(k,j,nx+1) = w(k,j,nx+1) - dt_3d * tsc(2) * c_w_m(k) * ( w(k,j,nx+1) - w(k,j,nx) ) * ddx
…
  END SUBROUTINE dynamics_boundary_conditions

- !------------------------------------------------------------------------------!
+ !--------------------------------------------------------------------------------------------------!
  ! Description:
  ! ------------
  !> Swap timelevels of module-specific array pointers
- !------------------------------------------------------------------------------!
+ !--------------------------------------------------------------------------------------------------!
  SUBROUTINE dynamics_swap_timelevel ( mod_count )
…
- !> Sum up and time-average module-specific output quantities
- !> as well as allocate the array necessary for storing the average.
+ !> Sum up and time-average module-specific output quantities as well as allocate the array necessary
+ !> for storing the average.
  SUBROUTINE dynamics_3d_data_averaging( mode, variable )
…
- !> Resorts the module-specific output quantity with indices (k,j,i) to a
- !> temporary array with indices (i,j,k) and sets the grid on which it is defined.
+ !> Resorts the module-specific output quantity with indices (k,j,i) to a temporary array with
+ !> indices (i,j,k) and sets the grid on which it is defined.
  !> Allowed values for grid are "zu" and "zw".
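dynamics_swap_timelevel( mod_count ) alternates the prognostic arrays (pt/pt_p, q/q_p, …) between their two storage buffers (the *_1 and *_2 arrays from arrays_3d) each time it is called. A minimal Python analogue of such pointer flipping, assuming a two-buffer scheme selected by mod_count modulo 2 (the actual PALM pointer assignments are not shown in this hunk):

```python
class TwoLevelField:
    """Two storage buffers; the 'current' and 'next' views flip with mod_count."""

    def __init__(self, buf_1, buf_2):
        self.buffers = [buf_1, buf_2]

    def views(self, mod_count):
        """Return (current, next_timelevel) views without copying any data."""
        i = mod_count % 2
        return self.buffers[i], self.buffers[1 - i]

pt = TwoLevelField(buf_1=[300.0, 301.0], buf_2=[0.0, 0.0])

current, nxt = pt.views(mod_count=0)   # current -> buf_1, next -> buf_2
nxt[0] = current[0] + 0.1              # the prognostic step writes the new level

current, nxt = pt.views(mod_count=1)   # swap: current -> buf_2, next -> buf_1
print(current[0])  # -> 300.1
```

Swapping views instead of copying arrays is what makes the time stepping cheap: no field data moves, only which buffer is considered "current".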
- SUBROUTINE dynamics_data_output_2d( av, variable, found, grid, mode, local_pf, &
-                                     two_d, nzb_do, nzt_do, fill_value )
+ SUBROUTINE dynamics_data_output_2d( av, variable, found, grid, mode, local_pf, two_d, nzb_do,      &
+                                     nzt_do, fill_value )

     CHARACTER (LEN=*)             :: grid     !<
     CHARACTER (LEN=*), INTENT(IN) :: mode     !< either 'xy', 'xz' or 'yz'
     CHARACTER (LEN=*)             :: variable !<

     INTEGER(iwp) :: av !< flag to control data output of instantaneous or time-averaged data
…
- !> Resorts the module-specific output quantity with indices (k,j,i)
- !> to a temporary array with indices (i,j,k).
+ !> Resorts the module-specific output quantity with indices (k,j,i) to a temporary array with
+ !> indices (i,j,k).
  SUBROUTINE dynamics_data_output_3d( av, variable, found, local_pf, fill_value, nzb_do, nzt_do )
…
Superfluous blank lines are removed in the restart-read routines:

  SUBROUTINE dynamics_rrd_global_ftn( found )
-
     LOGICAL, INTENT(OUT) :: found
…
  SUBROUTINE dynamics_rrd_global_mpi
-
  !    CALL rrd_mpi_io( 'global_parameter', global_parameter )
…
The signature of dynamics_rrd_local_ftn (reading restart data from file and distributing it to the
subdomain of the current PE) is re-wrapped:

- SUBROUTINE dynamics_rrd_local_ftn( k, nxlf, nxlc, nxl_on_file, nxrf, nxrc, nxr_on_file, nynf, nync, &
-                                    nyn_on_file, nysf, nysc, nys_on_file, tmp_2d, tmp_3d, found )
+ SUBROUTINE dynamics_rrd_local_ftn( k, nxlf, nxlc, nxl_on_file, nxrf, nxrc, nxr_on_file, nynf,      &
+                                    nync, nyn_on_file, nysf, nysc, nys_on_file, tmp_2d, tmp_3d,     &
+                                    found )
…
  !-- Next line is to avoid compiler warning about unused variables. Please remove.
- IF ( k + nxlc + nxlf + nxrc + nxrf + nync + nynf + nysc + nysf + &
-      tmp_2d(nys_on_file,nxl_on_file) + &
+ IF ( k + nxlc + nxlf + nxrc + nxrf + nync + nynf + nysc + nysf +                                   &
+      tmp_2d(nys_on_file,nxl_on_file) +                                                             &
      tmp_3d(nzb,nys_on_file,nxl_on_file) < 0.0 )  CONTINUE