source: palm/trunk/SOURCE/pres.f90 @ 667

Last change on this file since 667 was 667, checked in by suehring, 14 years ago

summary:


Gryschka:

  • Coupling with different resolution and different numbers of PEs in ocean and atmosphere is available
  • Exchange of u and v from ocean surface to atmosphere surface
  • Mirror boundary conditions for u and v at the bottom are replaced by Dirichlet boundary conditions
  • Inflow turbulence is now defined by fluctuations around the spanwise mean
  • Bugfixes for cyclic_fill and constant_volume_flow

Suehring:

  • New advection scheme added (Wicker and Skamarock, 5th order), therefore:
    • New module advec_ws.f90
    • Modified exchange of ghost boundaries.
    • Modified evaluation of turbulent fluxes
    • New index bounds nxlg, nxrg, nysg, nyng

advec_ws.f90


Advection scheme for scalars and momentum using the flux formulation of
Wicker and Skamarock, 5th order.
Additionally, the module contains a routine used for the initialisation and
steering of the statistical evaluation. The computation of turbulent fluxes
takes place inside the advection routines.
On vector architectures, Dirichlet and radiation boundary conditions are still
outstanding and not available. Furthermore, simulations with topography are not
yet possible. An additional routine, local_diss_ij, is available and is used if
control of the dissipative fluxes is desired.
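
For reference, a minimal sketch of the 5th-order flux in one direction,
following Wicker and Skamarock (2002); the function and variable names
(ws5_flux, u_face, phi) are illustrative and not taken from advec_ws.f90:

   ! Generic sketch of the Wicker & Skamarock 5th-order advective flux at a
   ! cell face: the symmetric 6th-order flux plus an upwind dissipation term
   ! proportional to |u|.
   REAL FUNCTION ws5_flux( u_face, phi, i )
      IMPLICIT NONE
      REAL,    INTENT(IN) ::  u_face   ! velocity at the face i-1/2
      REAL,    INTENT(IN) ::  phi(:)   ! advected quantity along one direction
      INTEGER, INTENT(IN) ::  i        ! face index, phi(i-3:i+2) must exist
      REAL ::  diss5, flux6

      flux6 = u_face / 60.0 * (  37.0 * ( phi(i)   + phi(i-1) )              &
                               -  8.0 * ( phi(i+1) + phi(i-2) )              &
                               +        ( phi(i+2) + phi(i-3) ) )
      diss5 = ABS( u_face ) / 60.0 * (  10.0 * ( phi(i)   - phi(i-1) )       &
                                      -  5.0 * ( phi(i+1) - phi(i-2) )       &
                                      +        ( phi(i+2) - phi(i-3) ) )
      ws5_flux = flux6 - diss5
   END FUNCTION ws5_flux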

check_parameters.f90


Exchange of parameters between ocean and atmosphere via PE0.
Check for an illegal combination of the ws-scheme and the timestep scheme.
Check for topography and the ws-scheme.
Check for non-cyclic boundary conditions in combination with the ws-scheme and
loop_optimization = 'vector'.
Check for call_psolver_at_all_substeps and the ws-scheme for momentum_advec.

Different processor/grid topology in atmosphere and ocean is now allowed!
Bugfixes in checking for conserve_volume_flow_mode.

exchange_horiz.f90


Dynamic exchange of ghost points with nbgp_local to ensure that no useless
ghost points are exchanged in case of multigrid. type_yz(0) and type_xz(0) are
used for the normal grid, the remaining types for the individual grid levels.
The exchange is done via MPI vectors with a dynamic number of ghost points,
which depends on the advection scheme. The exchange between left and right PEs
is 10% faster with MPI vectors than without.
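
For illustration, a hedged sketch of how such an exchange can be set up: an MPI
derived type describing the nbgp ghost planes of a 3d array
ar(nzb:nzt+1,nysg:nyng,nxlg:nxrg) is built once and then used to exchange the
left/right boundaries in a single call. The surrounding declarations (grid
bounds, neighbour ranks pleft/pright, comm2d, status, ierr) are assumed from
the model context; the names are not copied from exchange_horiz.f90.

   ! one yz-plane, including the ghost points in y
   ngp_yz = ( nzt - nzb + 2 ) * ( nyng - nysg + 1 )

   ! nbgp consecutive yz-planes are contiguous in memory, so one derived type
   ! (and one MPI call) moves all ghost planes at once
   CALL MPI_TYPE_VECTOR( 1, nbgp*ngp_yz, ngp_yz, MPI_REAL, type_yz, ierr )
   CALL MPI_TYPE_COMMIT( type_yz, ierr )

   ! send the rightmost nbgp interior planes to the right neighbour and
   ! receive the left ghost planes from the left neighbour
   CALL MPI_SENDRECV( ar(nzb,nysg,nxr-nbgp+1), 1, type_yz, pright, 0,         &
                      ar(nzb,nysg,nxlg),       1, type_yz, pleft,  0,         &
                      comm2d, status, ierr )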

flow_statistics.f90


When advection is computed with the ws-scheme, turbulent fluxes are already
computed in the respective advection routines and buffered in the arrays
sums_xxxx_ws_l(). This ensures a treatment of the statistics that is consistent
with the numerics and avoids unphysical kinks near the surface. Therefore, some
IF conditions are needed to discern between fluxes from the ws-scheme and from
other advection schemes. Furthermore, the computation of z_i is only done if
the heat flux exceeds a minimum value. This affects only simulations of a
neutral boundary layer and is due to the computations in the advection scheme.

inflow_turbulence.f90


Using nbgp recycling planes for a better resolution of the turbulent flow near
the inflow.

init_grid.f90


Definition of the new array bounds nxlg, nxrg, nysg, nyng on each PE.
Furthermore, the allocation of arrays and the steering of loops are done with
these parameters. Calls of exchange_horiz are modified.
In case of a Dirichlet boundary condition at the bottom, zu(0)=0.0.
dzu_mg has to be set explicitly for an equally spaced grid near the bottom.
ddzu_pres added to use an equally spaced grid near the bottom.
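
The new bounds generalize the former nxl-1, nxr+1, nys-1, nyn+1 to nbgp ghost
layers. A minimal sketch of their definition and of an allocation that uses
them (illustrative only; pt is chosen as an arbitrary example array):

   ! ghost-cell array bounds, depending on the advection scheme via nbgp
   nxlg = nxl - nbgp
   nxrg = nxr + nbgp
   nysg = nys - nbgp
   nyng = nyn + nbgp

   ALLOCATE( pt(nzb:nzt+1,nysg:nyng,nxlg:nxrg) )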

init_pegrid.f90


Moved the determination of target_id's from init_coupling.
Determination of the parameters needed for coupling (coupling_topology, ngp_a,
ngp_o) with different grid/processor topologies in ocean and atmosphere.

Adaptation of ngp_xy, ngp_y to a dynamic number of ghost points.
The grid-level counting changed from 1 to 0: 0 is the normal grid, 1 to
maximum_grid_level are the grids for multigrid, in which 0 and 1 are normal
grids. This distinction is made for reasons of data exchange and performance
on the normal grid and the grids in poismg.
The definition of the MPI vectors is adapted to a dynamic number of ghost
points. New MPI vectors for the data exchange between the left and right
boundaries added, for reasons of performance (10% faster).

ATTENTION: nnz_x undefined problem still has to be solved!!!!!!!!
TEST OUTPUT (TO BE REMOVED) logging mpi2 ierr values

parin.f90


Steering parameter dissipation_control added in inipar.

Makefile


Module advec_ws added.

Modules


Removed u_nzb_p1_for_vfc and v_nzb_p1_for_vfc

For coupling with different resolution in ocean and atmosphere:
+nx_a, +nx_o, ny_a, +ny_o, ngp_a, ngp_o, +total_2d_o, +total_2d_a,
+coupling_topology

Buffer arrays for the left-sided advective fluxes added in arrays_3d.
+flux_s_u, +flux_s_v, +flux_s_w, +diss_s_u, +diss_s_v, +diss_s_w,
+flux_s_pt, +diss_s_pt, +flux_s_e, +diss_s_e, +flux_s_q, +diss_s_q,
+flux_s_sa, +diss_s_sa
3d arrays for dissipation control added (only necessary for vector architectures).
+var_x, +var_y, +var_z, +gamma_x, +gamma_y, +gamma_z
Default of momentum_advec and scalar_advec changed to 'ws-scheme'.
+exchange_mg added in control_parameters to steer the data exchange.
Parameters +nbgp, +nxlg, +nxrg, +nysg, +nyng added in indices.
flag array +boundary_flags added in indices to steer the degradation of order
of the advective fluxes when non-cyclic boundaries are used.
MPI-datatypes +type_y, +type_y_int and +type_yz for data_exchange added in
pegrid.
+sums_wsus_ws_l, +sums_wsvs_ws_l, +sums_us2_ws_l, +sums_vs2_ws_l,
+sums_ws2_ws_l, +sums_wspts_ws_l, +sums_wssas_ws_l, +sums_wsqs_ws_l
and +weight_substep added in statistics to steer the statistical evaluation
of turbulent fluxes in the advection routines.
LOGICALS +ws_scheme_sca and +ws_scheme_mom added to get a better performance
in prognostic_equations.
LOGICAL +dissipation_control added to steer the numerical dissipation in the
ws-scheme.

Changed length of string run_description_header

pres.f90


New allocation of tend when the ws-scheme and multigrid are used. This is done
for performance reasons of the data exchange. The same is done with p after
poismg is called.
nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng when no
multigrid is used. Calls of exchange_horiz are modified.

Bugfix: after the pressure correction, no volume flow correction in case of
non-cyclic boundary conditions
(this has to be done only before the pressure correction).

The call of the SOR routine is now referenced with ddzu_pres.

prognostic_equations.f90


Calls of the advection routines with WS5 added.
Calls of ws_statistics added to set the statistical arrays to zero after each
time step.

advec_particles.f90


Declaration of de_dx, de_dy, de_dz adapted to additional ghost points.
Furthermore the calls of exchange_horiz were modified.

asselin_filter.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

average_3d_data.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

boundary_conds.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng
Removed the mirror boundary conditions for u and v at the bottom in case of
ibc_uv_b == 0. Instead, Dirichlet boundary conditions (u=v=0) are set
in init_3d_model.
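
A minimal sketch of this Dirichlet condition (the actual assignment is done in
init_3d_model; indices follow the conventions above, topography ignored for
simplicity):

   ! no-slip Dirichlet condition at the bottom, replacing the former mirror
   ! condition: u and v vanish at k = nzb
   u(nzb,:,:) = 0.0
   v(nzb,:,:) = 0.0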

calc_liquid_water_content.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

calc_spectra.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng for
allocation of tend.

check_open.f90


Output of total array size was adapted to nbgp.

data_output_2d.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and
allocation of arrays local_2d and total_2d.
Calls of exchange_horiz are modified.

data_output_3d.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and
allocation of arrays. Calls of exchange_horiz are modified.
Skip-value skip_do_avs changed to a dynamic adaptation of the ghost points.

data_output_mask.f90


Calls of exchange_horiz are modified.

diffusion_e.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_s.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_u.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_v.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusion_w.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

diffusivities.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.

exchange_horiz_2d.f90


Dynamic exchange of ghost points with nbgp, which depends on the advection
scheme. Exchange between left and right PEs is now done with MPI-vectors.

global_min_max.f90


Adaptation of the index arrays, because MINLOC assumes a lower bound at 1 and
not at nbgp.
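
For illustration: Fortran's MINLOC returns positions counted from 1 in each
dimension, independent of the declared lower bounds, so the offsets have to be
added back. A generic sketch (names not taken from global_min_max.f90):

   ! ar is declared with ghost-point bounds; MINLOC counts from 1, so the
   ! lower bound minus 1 must be added to recover the true grid indices
   REAL    ::  ar(nzb:nzt+1,nysg:nyng,nxlg:nxrg)
   INTEGER ::  minloc_ijk(3), i_min, j_min, k_min

   minloc_ijk = MINLOC( ar )
   k_min = minloc_ijk(1) + nzb  - 1
   j_min = minloc_ijk(2) + nysg - 1
   i_min = minloc_ijk(3) + nxlg - 1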

init_3d_model.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in loops and in
the allocation of arrays. Calls of exchange_horiz are modified.
Call of ws_init to initialize the arrays needed for the statistical evaluation
and for optimization when the ws-scheme is used.
The initial volume flow is now calculated by using the variable hom_sum.
Therefore, the correction of the initial volume flow for non-flat topography
was removed (u_nzb_p1_for_vfc and v_nzb_p1_for_vfc removed).
Changed the surface boundary conditions for u and v in case of ibc_uv_b == 0
from mirror bc to Dirichlet boundary conditions (u=v=0), so that k=nzb is
representative for the height z0.

Bugfix: type conversion of '1' to 64 bit for the MAX function (ngp_3d_inner).

init_coupling.f90


Determination of target_id's moved to init_pegrid.

init_pt_anomaly.f90


Calls of exchange_horiz are modified.

init_rankine.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.

init_slope.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

header.f90


Output of advection scheme.

poismg.f90


Calls of exchange_horiz are modified.

prandtl_fluxes.f90


Changed the surface boundary conditions for u and v from mirror bc to
Dirichlet bc, therefore u(nzb,:,:) and v(nzb,:,:) are now representative for
the height z0.
nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

production_e.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng

read_3d_binary.f90


+/- 1 replaced with +/- nbgp when swapping and allocating variables.

sor.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.
Calls of exchange_horiz are modified.
Bug removed in the declaration of ddzw(): nz replaced by nzt+1.

subsidence.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

sum_up_3d_data.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

surface_coupler.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng in
MPI_SEND() and MPI_RECV().
Additional case for non-equivalent processor and grid topology in ocean and
atmosphere added (coupling_topology = 1).

Added exchange of u and v from the ocean to the atmosphere.
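
A hedged sketch of such a transfer for the simple case of identical grids
(coupling_topology = 0): the ocean PE sends its surface u to the matching
atmosphere PE, which receives it at its bottom boundary. target_id and ngp_xy
appear above; the logical ocean_run, the inter-model communicator comm_inter
and the send/receive roles are assumptions for illustration only, not copied
from surface_coupler.f90.

   INTEGER ::  ierr, status(MPI_STATUS_SIZE)

   IF ( ocean_run )  THEN
      ! ocean: the top level nzt is the sea surface
      CALL MPI_SEND( u(nzt,nysg,nxlg), ngp_xy, MPI_REAL, target_id, 1,        &
                     comm_inter, ierr )
   ELSE
      ! atmosphere: receive the ocean-surface u at the bottom level nzb
      CALL MPI_RECV( u(nzb,nysg,nxlg), ngp_xy, MPI_REAL, target_id, 1,        &
                     comm_inter, status, ierr )
   ENDIF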

time_integration.f90


Calls of exchange_horiz are modified.
Adaptation to sloping surface.

timestep.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

user_3d_data_averaging.f90, user_data_output_2d.f90, user_data_output_3d.f90,
user_actions.f90, user_init.f90, user_init_plant_canopy.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

user_read_restart_data.f90


Allocation with nbgp.

wall_fluxes.f90


nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng.

write_compressed.f90


Array bounds and nx, ny adapted with nbgp.

1 SUBROUTINE pres
2
3!------------------------------------------------------------------------------!
4! Current revisions:
5! -----------------
6 ! New allocation of tend when ws-scheme and multigrid are used. This is due to
7 ! reasons of performance of the data_exchange. The same is done with p after
8! poismg is called.
9! nxl-1, nxr+1, nys-1, nyn+1 replaced by nxlg, nxrg, nysg, nyng when no
10! multigrid is used. Calls of exchange_horiz are modified.
11!
12! bugfix: After pressure correction no volume flow correction in case of
13! non-cyclic boundary conditions
14! (has to be done only before pressure correction)
15!
16! Call of SOR routine is referenced with ddzu_pres.
17!
18! Former revisions:
19! -----------------
20! $Id: pres.f90 667 2010-12-23 12:06:00Z suehring $
21!
22! 622 2010-12-10 08:08:13Z raasch
23! optional barriers included in order to speed up collective operations
24!
25! 151 2008-03-07 13:42:18Z raasch
26! Bugfix in volume flow control for non-cyclic boundary conditions
27!
28! 106 2007-08-16 14:30:26Z raasch
29! Volume flow conservation added for the remaining three outflow boundaries
30!
31! 85 2007-05-11 09:35:14Z raasch
32! Division through dt_3d replaced by multiplication of the inverse.
33! For performance optimisation, this is done in the loop calculating the
34 ! divergence instead of using a separate loop.
35!
36! 75 2007-03-22 09:54:05Z raasch
37! Volume flow control for non-cyclic boundary conditions added (currently only
38! for the north boundary!!), 2nd+3rd argument removed from exchange horiz,
39! mean vertical velocity is removed in case of Neumann boundary conditions
40! both at the bottom and the top
41!
42 ! RCS Log replaced by Id keyword, revision history cleaned up
43!
44! Revision 1.25  2006/04/26 13:26:12  raasch
45! OpenMP optimization (+localsum, threadsum)
46!
47! Revision 1.1  1997/07/24 11:24:44  raasch
48! Initial revision
49!
50!
51! Description:
52! ------------
53! Compute the divergence of the provisional velocity field. Solve the Poisson
54! equation for the perturbation pressure. Compute the final velocities using
55! this perturbation pressure. Compute the remaining divergence.
56!------------------------------------------------------------------------------!
57
58    USE arrays_3d
59    USE constants
60    USE control_parameters
61    USE cpulog
62    USE grid_variables
63    USE indices
64    USE interfaces
65    USE pegrid
66    USE poisfft_mod
67    USE poisfft_hybrid_mod
68    USE statistics
69
70    IMPLICIT NONE
71
72    INTEGER ::  i, j, k, sr
73
74    REAL    ::  ddt_3d, localsum, threadsum
75
76    REAL, DIMENSION(1:2) ::  volume_flow_l, volume_flow_offset
77    REAL, DIMENSION(1:nzt) ::  w_l, w_l_l
78
79
80    CALL cpu_log( log_point(8), 'pres', 'start' )
81
82
83    ddt_3d = 1.0 / dt_3d
84
85!
86!-- Multigrid method expects 1 additional grid point for the arrays
87!-- d, tend and p
88    IF ( psolver == 'multigrid' )  THEN
89     
90       DEALLOCATE( d )
91       ALLOCATE( d(nzb:nzt+1,nys-1:nyn+1,nxl-1:nxr+1) ) 
92       
93       IF ( ws_scheme_mom  .OR. ws_scheme_sca )  THEN
94       
95          DEALLOCATE( tend )
96          ALLOCATE( tend(nzb:nzt+1,nys-1:nyn+1,nxl-1:nxr+1) )
97          DEALLOCATE( p )
98          ALLOCATE( p(nzb:nzt+1,nys-1:nyn+1,nxl-1:nxr+1) )
99         
100       ENDIF
101       
102    ENDIF
103
104!
105!-- Conserve the volume flow at the outflow in case of non-cyclic lateral
106!-- boundary conditions
107!-- WARNING: so far, this conservation does not work at the left/south
108!--          boundary if the topography at the inflow differs from that at the
109!--          outflow! For this case, volume_flow_area needs adjustment!
110!
111!-- Left/right
112    IF ( conserve_volume_flow  .AND.  ( outflow_l  .OR. outflow_r ) )  THEN
113
114       volume_flow(1)   = 0.0
115       volume_flow_l(1) = 0.0
116
117       IF ( outflow_l )  THEN
118          i = 0
119       ELSEIF ( outflow_r )  THEN
120          i = nx+1
121       ENDIF
122
123       DO  j = nys, nyn
124!
125!--       Sum up the volume flow through the left/right boundary
126          DO  k = nzb_2d(j,i) + 1, nzt
127             volume_flow_l(1) = volume_flow_l(1) + u(k,j,i) * dzw(k)
128          ENDDO
129       ENDDO
130
131#if defined( __parallel )   
132       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
133       CALL MPI_ALLREDUCE( volume_flow_l(1), volume_flow(1), 1, MPI_REAL, &
134                           MPI_SUM, comm1dy, ierr )   
135#else
136       volume_flow = volume_flow_l 
137#endif
138       volume_flow_offset(1) = ( volume_flow_initial(1) - volume_flow(1) )    &
139                               / volume_flow_area(1)
140
141       DO  j = nysg, nyng
142          DO  k = nzb_2d(j,i) + 1, nzt
143             u(k,j,i) = u(k,j,i) + volume_flow_offset(1)
144          ENDDO
145       ENDDO
146
147    ENDIF
148
149!
150!-- South/north
151    IF ( conserve_volume_flow  .AND.  ( outflow_n  .OR. outflow_s ) )  THEN
152
153       volume_flow(2)   = 0.0
154       volume_flow_l(2) = 0.0
155
156       IF ( outflow_s )  THEN
157          j = 0
158       ELSEIF ( outflow_n )  THEN
159          j = ny+1
160       ENDIF
161
162       DO  i = nxl, nxr
163!
164!--       Sum up the volume flow through the south/north boundary
165          DO  k = nzb_2d(j,i) + 1, nzt
166             volume_flow_l(2) = volume_flow_l(2) + v(k,j,i) * dzw(k)
167          ENDDO
168       ENDDO
169
170#if defined( __parallel )   
171       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
172       CALL MPI_ALLREDUCE( volume_flow_l(2), volume_flow(2), 1, MPI_REAL, &
173                           MPI_SUM, comm1dx, ierr )   
174#else
175       volume_flow = volume_flow_l 
176#endif
177       volume_flow_offset(2) = ( volume_flow_initial(2) - volume_flow(2) )    &
178                               / volume_flow_area(2)
179
180       DO  i = nxlg, nxrg
181          DO  k = nzb_v_inner(j,i) + 1, nzt
182             v(k,j,i) = v(k,j,i) + volume_flow_offset(2)
183          ENDDO
184       ENDDO
185
186    ENDIF
187
188!
189!-- Remove mean vertical velocity
190    IF ( ibc_p_b == 1  .AND.  ibc_p_t == 1 )  THEN
191       IF ( simulated_time > 0.0 )  THEN ! otherwise nzb_w_inner is not yet known
192          w_l = 0.0;  w_l_l = 0.0
193          DO  i = nxl, nxr
194             DO  j = nys, nyn
195                DO  k = nzb_w_inner(j,i)+1, nzt
196                   w_l_l(k) = w_l_l(k) + w(k,j,i)
197                ENDDO
198             ENDDO
199          ENDDO
200#if defined( __parallel )   
201          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
202          CALL MPI_ALLREDUCE( w_l_l(1), w_l(1), nzt, MPI_REAL, MPI_SUM, comm2d, &
203                              ierr )
204#else
205          w_l = w_l_l 
206#endif
207          DO  k = 1, nzt
208             w_l(k) = w_l(k) / ngp_2dh_outer(k,0)
209          ENDDO
210          DO  i = nxlg, nxrg
211             DO  j = nysg, nyng
212                DO  k = nzb_w_inner(j,i)+1, nzt
213                   w(k,j,i) = w(k,j,i) - w_l(k)
214                ENDDO
215             ENDDO
216          ENDDO
217       ENDIF
218    ENDIF
219
220!
221!-- Compute the divergence of the provisional velocity field.
222    CALL cpu_log( log_point_s(1), 'divergence', 'start' )
223
224    IF ( psolver == 'multigrid' )  THEN
225       !$OMP PARALLEL DO SCHEDULE( STATIC )
226       DO  i = nxl-1, nxr+1
227          DO  j = nys-1, nyn+1
228             DO  k = nzb, nzt+1
229                d(k,j,i) = 0.0
230             ENDDO
231          ENDDO
232       ENDDO
233    ELSE
234       !$OMP PARALLEL DO SCHEDULE( STATIC )
235       DO  i = nxl, nxra
236          DO  j = nys, nyna
237             DO  k = nzb+1, nzta
238                d(k,j,i) = 0.0
239             ENDDO
240          ENDDO
241       ENDDO
242    ENDIF
243
244    localsum  = 0.0
245    threadsum = 0.0
246
247#if defined( __ibm )
248    !$OMP PARALLEL PRIVATE (i,j,k) FIRSTPRIVATE(threadsum) REDUCTION(+:localsum)
249    !$OMP DO SCHEDULE( STATIC )
250    DO  i = nxl, nxr
251       DO  j = nys, nyn
252          DO  k = nzb_s_inner(j,i)+1, nzt
253             d(k,j,i) = ( ( u(k,j,i+1) - u(k,j,i) ) * ddx + &
254                          ( v(k,j+1,i) - v(k,j,i) ) * ddy + &
255                          ( w(k,j,i) - w(k-1,j,i) ) * ddzw(k) ) * ddt_3d
256          ENDDO
257!
258!--       Additional pressure boundary condition at the bottom boundary for
259!--       inhomogeneous Prandtl layer heat fluxes and temperatures, respectively
260!--       dp/dz = -(dtau13/dx + dtau23/dy) + g*pt'/pt0.
261!--       This condition must not be applied at the start of a run, because then
262!--       flow_statistics has not yet been called and thus sums = 0.
263          IF ( ibc_p_b == 2  .AND.  sums(nzb+1,4) /= 0.0 )  THEN
264             k = nzb_s_inner(j,i)
265             d(k+1,j,i) = d(k+1,j,i) + (                                     &
266                                         ( usws(j,i+1) - usws(j,i) ) * ddx   &
267                                       + ( vsws(j+1,i) - vsws(j,i) ) * ddy   &
268                                       - g * ( pt(k+1,j,i) - sums(k+1,4) ) / &
269                                         sums(k+1,4)                         &
270                                       ) * ddzw(k+1) * ddt_3d
271          ENDIF
272
273!
274!--       Compute possible PE-sum of divergences for flow_statistics
275          DO  k = nzb_s_inner(j,i)+1, nzt
276             threadsum = threadsum + ABS( d(k,j,i) )
277          ENDDO
278
279       ENDDO
280    ENDDO
281
282    localsum = ( localsum + threadsum ) * dt_3d
283    !$OMP END PARALLEL
284#else
285    IF ( ibc_p_b == 2 .AND. sums(nzb+1,4) /= 0.0 )  THEN
286       !$OMP PARALLEL PRIVATE (i,j,k)
287       !$OMP DO SCHEDULE( STATIC )
288       DO  i = nxl, nxr
289          DO  j = nys, nyn
290             DO  k = nzb_s_inner(j,i)+1, nzt
291             d(k,j,i) = ( ( u(k,j,i+1) - u(k,j,i) ) * ddx + &
292                          ( v(k,j+1,i) - v(k,j,i) ) * ddy + &
293                          ( w(k,j,i) - w(k-1,j,i) ) * ddzw(k) ) * ddt_3d
294             ENDDO
295          ENDDO
296!
297!--       Additional pressure boundary condition at the bottom boundary for
298!--       inhomogeneous Prandtl layer heat fluxes and temperatures, respectively
299!--       dp/dz = -(dtau13/dx + dtau23/dy) + g*pt'/pt0.
300!--       This condition must not be applied at the start of a run, because then
301!--       flow_statistics has not yet been called and thus sums = 0.
302          DO  j = nys, nyn
303              k = nzb_s_inner(j,i)
304              d(k+1,j,i) = d(k+1,j,i) + (                        &
305                             ( usws(j,i+1) - usws(j,i) ) * ddx   &
306                           + ( vsws(j+1,i) - vsws(j,i) ) * ddy   &
307                           - g * ( pt(k+1,j,i) - sums(k+1,4) ) / &
308                             sums(k+1,4)                         &
309                                        ) * ddzw(k+1) * ddt_3d
310          ENDDO
311       ENDDO
312       !$OMP END PARALLEL
313
314    ELSE
315
316       !$OMP PARALLEL PRIVATE (i,j,k)
317       !$OMP DO SCHEDULE( STATIC )
318       DO  i = nxl, nxr
319          DO  j = nys, nyn
320             DO  k = nzb_s_inner(j,i)+1, nzt
321                d(k,j,i) = ( ( u(k,j,i+1) - u(k,j,i) ) * ddx + &
322                          ( v(k,j+1,i) - v(k,j,i) ) * ddy + &
323                          ( w(k,j,i) - w(k-1,j,i) ) * ddzw(k) ) * ddt_3d
324             ENDDO
325          ENDDO
326       ENDDO
327       !$OMP END PARALLEL
328
329    ENDIF
330
331!
332!-- Compute possible PE-sum of divergences for flow_statistics
333    !$OMP PARALLEL PRIVATE (i,j,k) FIRSTPRIVATE(threadsum) REDUCTION(+:localsum)
334    !$OMP DO SCHEDULE( STATIC )
335    DO  i = nxl, nxr
336       DO  j = nys, nyn
337          DO  k = nzb+1, nzt
338             threadsum = threadsum + ABS( d(k,j,i) )
339          ENDDO
340       ENDDO
341    ENDDO
342    localsum = ( localsum + threadsum ) * dt_3d
343    !$OMP END PARALLEL
344#endif
345
346!
347!-- For completeness, set the divergence sum of all statistic regions to those
348!-- of the total domain
349    sums_divold_l(0:statistic_regions) = localsum
350
351!
352!-- Determine absolute minimum/maximum (only for test cases, therefore as
353!-- comment line)
354!    CALL global_min_max( nzb+1, nzt, nys, nyn, nxl, nxr, d, 'abs', divmax, &
355!                        divmax_ijk )
356
357    CALL cpu_log( log_point_s(1), 'divergence', 'stop' )
358
359!
360!-- Compute the pressure perturbation solving the Poisson equation
361    IF ( psolver(1:7) == 'poisfft' )  THEN
362
363!
364!--    Enlarge the size of tend, used as a working array for the transpositions
365       IF ( nxra > nxr  .OR.  nyna > nyn  .OR.  nza > nz )  THEN
366          DEALLOCATE( tend )
367          ALLOCATE( tend(1:nza,nys:nyna,nxl:nxra) )
368       ENDIF
369
370!
371!--    Solve Poisson equation via FFT and solution of tridiagonal matrices
372       IF ( psolver == 'poisfft' )  THEN
373!
374!--       Solver for 2d-decomposition
375          CALL poisfft( d, tend )
376       ELSEIF ( psolver == 'poisfft_hybrid' )  THEN 
377!
378!--       Solver for 1d-decomposition (using MPI and OpenMP).
379!--       The old hybrid-solver is still included here, as long as there
380!--       are some optimization problems in poisfft
381          CALL poisfft_hybrid( d )
382       ENDIF
383
384!
385!--    Resize tend to its normal size
386       IF ( nxra > nxr  .OR.  nyna > nyn  .OR.  nza > nz )  THEN
387          DEALLOCATE( tend )
388          ALLOCATE( tend(nzb:nzt+1,nysg:nyng,nxlg:nxrg) )
389       ENDIF
390
391!
392!--    Store computed perturbation pressure and set boundary condition in
393!--    z-direction
394       !$OMP PARALLEL DO
395       DO  i = nxl, nxr
396          DO  j = nys, nyn
397             DO  k = nzb+1, nzt
398                tend(k,j,i) = d(k,j,i)
399             ENDDO
400          ENDDO
401       ENDDO
402
403!
404!--    Bottom boundary:
405!--    This condition is only required for internal output. The pressure
406!--    gradient (dp(nzb+1)-dp(nzb))/dz is not used anywhere else.
407       IF ( ibc_p_b == 1 )  THEN
408!
409!--       Neumann (dp/dz = 0)
410          !$OMP PARALLEL DO
411          DO  i = nxlg, nxrg
412             DO  j = nysg, nyng
413                tend(nzb_s_inner(j,i),j,i) = tend(nzb_s_inner(j,i)+1,j,i)
414             ENDDO
415          ENDDO
416
417       ELSEIF ( ibc_p_b == 2 )  THEN
418!
419!--       Neumann condition for inhomogeneous surfaces,
420!--       here currently still in the form of a zero gradient. Actually
421!--       dp/dz = -(dtau13/dx + dtau23/dy) + g*pt'/pt0 would have to be used for
422!--       the computation (cf. above: computation of divergences).
423          !$OMP PARALLEL DO
424          DO  i = nxlg, nxrg
425             DO  j = nysg, nyng
426                tend(nzb_s_inner(j,i),j,i) = tend(nzb_s_inner(j,i)+1,j,i)
427             ENDDO
428          ENDDO
429
430       ELSE
431!
432!--       Dirichlet
433          !$OMP PARALLEL DO
434          DO  i = nxlg, nxrg
435             DO  j = nysg, nyng
436                tend(nzb_s_inner(j,i),j,i) = 0.0
437             ENDDO
438          ENDDO
439
440       ENDIF
441
442!
443!--    Top boundary
444       IF ( ibc_p_t == 1 )  THEN
445!
446!--       Neumann
447          !$OMP PARALLEL DO
448          DO  i = nxlg, nxrg
449             DO  j = nysg, nyng
450                tend(nzt+1,j,i) = tend(nzt,j,i)
451             ENDDO
452          ENDDO
453
454       ELSE
455!
456!--       Dirichlet
457          !$OMP PARALLEL DO
458          DO  i = nxlg, nxrg
459             DO  j = nysg, nyng
460                tend(nzt+1,j,i) = 0.0
461             ENDDO
462          ENDDO
463
464       ENDIF
465
466!
467!--    Exchange boundaries for p
468       CALL exchange_horiz( tend, nbgp )
469     
470    ELSEIF ( psolver == 'sor' )  THEN
471
472!
473!--    Solve Poisson equation for perturbation pressure using SOR-Red/Black
474!--    scheme
475       CALL sor( d, ddzu_pres, ddzw, p )
476       tend = p
477
478    ELSEIF ( psolver == 'multigrid' )  THEN
479
480!
481!--    Solve Poisson equation for perturbation pressure using Multigrid scheme,
482!--    array tend is used to store the residuals, logical exchange_mg is used
483!--    to discern data exchange in multigrid ( 1 ghostpoint ) and normal grid
484!--    ( nbgp ghost points ).
485       exchange_mg = .TRUE.
486       CALL poismg( tend )
487       exchange_mg = .FALSE.
488!
489!--    Restore perturbation pressure on tend because this array is used
490!--    further below to correct the velocity fields
491
492       tend = p
493       IF( ws_scheme_mom .OR. ws_scheme_sca )  THEN
494!       
495!--       Allocate p to its normal size and restore pressure.         
496          DEALLOCATE( p )
497          ALLOCATE( p(nzb:nzt+1,nysg:nyng,nxlg:nxrg) )
498          DO  i = nxl, nxr
499             DO  j = nys, nyn
500                DO  k = nzb_s_inner(j,i), nzt
501                   p(k,j,i) = tend(k,j,i)
502                ENDDO
503             ENDDO
504          ENDDO
505       ENDIF
506
507    ENDIF
508
509!
510!-- Store perturbation pressure on array p, used in the momentum equations
511    IF ( psolver(1:7) == 'poisfft' )  THEN
512!
513!--    Here, only the values from the left and right boundaries are copied
514!--    The remaining values are copied in the following loop due to speed
515!--    optimization
516       !$OMP PARALLEL DO
517       DO  j = nysg, nyng
518          DO  k = nzb, nzt+1
519             p(k,j,nxlg:nxl-1) = tend(k,j,nxlg:nxl-1)
520             p(k,j,nxr+1:nxrg) = tend(k,j,nxr+1:nxrg)
521          ENDDO
522       ENDDO
523    ENDIF
524
525!
526!-- Correction of the provisional velocities with the current perturbation
527!-- pressure just computed
528    IF ( conserve_volume_flow  .AND. &
529         ( bc_lr == 'cyclic'  .OR.  bc_ns == 'cyclic' ) )  THEN
530       volume_flow_l(1) = 0.0
531       volume_flow_l(2) = 0.0
532    ENDIF
533    !$OMP PARALLEL PRIVATE (i,j,k)
534    !$OMP DO
535    DO  i = nxl, nxr
536       IF ( psolver(1:7) == 'poisfft' )  THEN
537          DO  j = nysg, nyng
538             DO  k = nzb, nzt+1
539                p(k,j,i) = tend(k,j,i)
540             ENDDO
541          ENDDO
542       ENDIF
543       DO  j = nys, nyn
544          DO  k = nzb_w_inner(j,i)+1, nzt
545             w(k,j,i) = w(k,j,i) - dt_3d * &
546                                   ( tend(k+1,j,i) - tend(k,j,i) ) * ddzu(k+1)
547          ENDDO
548          DO  k = nzb_u_inner(j,i)+1, nzt
549             u(k,j,i) = u(k,j,i) - dt_3d * ( tend(k,j,i) - tend(k,j,i-1) ) * ddx
550          ENDDO
551          DO  k = nzb_v_inner(j,i)+1, nzt
552             v(k,j,i) = v(k,j,i) - dt_3d * ( tend(k,j,i) - tend(k,j-1,i) ) * ddy
553          ENDDO
554
555!
556!--       Sum up the volume flow through the right and north boundary
557          IF ( conserve_volume_flow  .AND.  bc_lr == 'cyclic'  .AND. &
558               bc_ns == 'cyclic' .AND. i == nx )  THEN
559             !$OMP CRITICAL
560             DO  k = nzb_2d(j,i) + 1, nzt
561                volume_flow_l(1) = volume_flow_l(1) + u(k,j,i) * dzw(k)
562             ENDDO
563             !$OMP END CRITICAL
564          ENDIF
565          IF ( conserve_volume_flow  .AND.  bc_ns == 'cyclic'  .AND. &
566               bc_lr == 'cyclic' .AND. j == ny )  THEN
567             !$OMP CRITICAL
568             DO  k = nzb_2d(j,i) + 1, nzt
569                volume_flow_l(2) = volume_flow_l(2) + v(k,j,i) * dzw(k)
570             ENDDO
571             !$OMP END CRITICAL
572          ENDIF
573
574       ENDDO
575    ENDDO
576    !$OMP END PARALLEL
577
578!
579!-- Resize tend to its normal size in case of multigrid and ws-scheme.
580    IF ( psolver == 'multigrid' .AND. ( ws_scheme_mom        &
581                                   .OR. ws_scheme_sca ) )  THEN
582       DEALLOCATE( tend )
583       ALLOCATE( tend(nzb:nzt+1,nysg:nyng,nxlg:nxrg) )
584    ENDIF
585
586!
587!-- Conserve the volume flow
588    IF ( conserve_volume_flow  .AND. &
589         ( bc_lr == 'cyclic'  .AND.  bc_ns == 'cyclic' ) )  THEN
590
591#if defined( __parallel )   
592       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
593       CALL MPI_ALLREDUCE( volume_flow_l(1), volume_flow(1), 2, MPI_REAL, &
594                           MPI_SUM, comm2d, ierr ) 
595#else
596       volume_flow = volume_flow_l 
597#endif   
598
599       volume_flow_offset = ( volume_flow_initial - volume_flow ) / &
600                            volume_flow_area
601
602       !$OMP PARALLEL PRIVATE (i,j,k)
603       !$OMP DO
604       DO  i = nxl, nxr
605          DO  j = nys, nyn
606             DO  k = nzb_u_inner(j,i) + 1, nzt
607                u(k,j,i) = u(k,j,i) + volume_flow_offset(1)
608                v(k,j,i) = v(k,j,i) + volume_flow_offset(2)
609             ENDDO
610          ENDDO
611       ENDDO
612
613       !$OMP END PARALLEL
614
615    ENDIF
616
617!
618!-- Exchange of boundaries for the velocities
619    CALL exchange_horiz( u, nbgp )
620    CALL exchange_horiz( v, nbgp )
621    CALL exchange_horiz( w, nbgp )
622
623!
624!-- Compute the divergence of the corrected velocity field,
625!-- a possible PE-sum is computed in flow_statistics
626    CALL cpu_log( log_point_s(1), 'divergence', 'start' )
627    sums_divnew_l = 0.0
628
629!
630!-- d must be reset to zero because it can contain nonzero values below the
631!-- topography
632    IF ( topography /= 'flat' )  d = 0.0
633
634    localsum  = 0.0
635    threadsum = 0.0
636
637    !$OMP PARALLEL PRIVATE (i,j,k) FIRSTPRIVATE(threadsum) REDUCTION(+:localsum)
638    !$OMP DO SCHEDULE( STATIC )
639#if defined( __ibm )
640    DO  i = nxl, nxr
641       DO  j = nys, nyn
642          DO  k = nzb_s_inner(j,i)+1, nzt
643             d(k,j,i) = ( u(k,j,i+1) - u(k,j,i) ) * ddx + &
644                        ( v(k,j+1,i) - v(k,j,i) ) * ddy + &
645                        ( w(k,j,i) - w(k-1,j,i) ) * ddzw(k)
646          ENDDO
647          DO  k = nzb+1, nzt
648             threadsum = threadsum + ABS( d(k,j,i) )
649          ENDDO
650       ENDDO
651    ENDDO
652#else
653    DO  i = nxl, nxr
654       DO  j = nys, nyn
655          DO  k = nzb_s_inner(j,i)+1, nzt
656             d(k,j,i) = ( u(k,j,i+1) - u(k,j,i) ) * ddx + &
657                        ( v(k,j+1,i) - v(k,j,i) ) * ddy + &
658                        ( w(k,j,i) - w(k-1,j,i) ) * ddzw(k)
659             threadsum = threadsum + ABS( d(k,j,i) )
660          ENDDO
661       ENDDO
662    ENDDO
663#endif
664
665    localsum = localsum + threadsum
666    !$OMP END PARALLEL
667
668!
669!-- For completeness, set the divergence sum of all statistic regions to those
670!-- of the total domain
671    sums_divnew_l(0:statistic_regions) = localsum
672
673    CALL cpu_log( log_point_s(1), 'divergence', 'stop' )
674
675    CALL cpu_log( log_point(8), 'pres', 'stop' )
676   
677
678
679 END SUBROUTINE pres
Note: See TracBrowser for help on using the repository browser.