source: palm/trunk/SOURCE/init_pegrid.f90 @ 430

Last change on this file since 430 was 392, checked in by raasch, 15 years ago

New:
---

Adapted for machine lck
(mrun, mbuild, subjob)

bc_lr/bc_ns in most subroutines replaced by LOGICAL variables bc_lr_cyc,
bc_ns_cyc for speed optimization
(check_parameters, diffusion_u, diffusion_v, diffusion_w, modules)
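
The change amounts to replacing repeated character comparisons inside time-critical loops by a LOGICAL flag that is evaluated once. A minimal sketch (the flag names are taken from the entry above; the surrounding statements are purely illustrative):

   ! set once during initialization (check_parameters)
   bc_lr_cyc = ( bc_lr == 'cyclic' )
   bc_ns_cyc = ( bc_ns == 'cyclic' )

   ! inside the loops of diffusion_u/v/w the former test
   !    IF ( bc_lr == 'cyclic' )  THEN
   ! becomes the cheaper
   IF ( bc_lr_cyc )  THEN
      ! cyclic treatment of the lateral boundary
   ENDIF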

Additional timestep criterion in case of simulations with plant canopy (timestep)

Check for illegal entries in section_xy|xz|yz that exceed nz+1|ny+1|nx+1
(check_parameters)

Clipping of dvrp output implemented. Default colour table for particles
implemented; particle attributes (color, dvrp_size) can be set with the new
parameters particle_color, particle_dvrpsize, color_interval,
dvrpsize_interval (init_dvrp, data_output_dvrp, modules, user_data_output_dvrp).
Slicer attributes (dvrp) are set with the new routine set_slicer_attributes_dvrp
and are controlled with the existing parameter slicer_range_limits.
(set_slicer_attributes_dvrp)

Ocean-atmosphere coupling now allows the use of independent precursor runs in
order to account for different spin-up times. The time at which the coupling
has to be started is given by the new inipar parameter coupling_start_time. The
precursor ocean run has to be started using the new mrun option "-y" in order
to add the appendix "_O" to all output files.
(check_for_restart, check_parameters, data_output_2d, data_output_3d,
data_output_profiles, data_output_ptseries, data_output_spectra,
data_output_tseries, header, init_coupling, modules, mrun,
parin, read_var_list, surface_coupler, time_integration, write_var_list)

Polygon reduction for topography and ground plate isosurface. Reduction level
for buildings can be chosen with parameter cluster_size. (init_dvrp)

External pressure gradient (check_parameters, header, init_3d_model, modules,
parin, prognostic_equations, read_var_list, write_var_list)

New topography case 'single_street_canyon' (header, init_grid, modules, parin,
read_var_list, user_check_parameters, user_header, user_init_grid, write_var_list)

Option to predefine a target bulk velocity for conserve_volume_flow
(check_parameters, header, init_3d_model, modules, parin, read_var_list,
write_var_list)

Option for user defined 2D data output in xy cross sections at z=nzb+1
(data_output_2d, user_data_output_2d)

xy cross section output of surface heat fluxes (latent, sensible)
(average_3d_data, check_parameters, data_output_2d, modules, read_3d_binary,
sum_up_3d_data, write_3d_binary)

average_3d_data, check_for_restart, check_parameters, data_output_2d, data_output_3d, data_output_dvrp, data_output_profiles, data_output_ptseries, data_output_spectra, data_output_tseries, init_coupling, init_dvrp, init_grid, init_3d_model, header, mbuild, modules, mrun, package_parin, parin, prognostic_equations, read_3d_binary, read_var_list, subjob, surface_coupler, timestep, time_integration, user_check_parameters, user_data_output_2d, user_data_output_dvrp, user_header, user_init_grid, write_3d_binary, write_var_list

New: set_particle_attributes, set_slicer_attributes_dvrp

Changed:


lcmuk changed to lc to avoid problems with Intel compiler on sgi-ice
(poisfft)

For extended NetCDF files, the updated title attribute includes an update of
time_average_text where appropriate. (netcdf)

In case of restart runs without extension, initial profiles are no longer
written to the NetCDF file. (data_output_profiles, modules, read_var_list, write_var_list)

Small change in the formatting of the message handling routine concerning the
output in the job protocol. (message)

initializing_actions='read_data_for_recycling' renamed to 'cyclic_fill', now
independent of turbulent_inflow (check_parameters, header, init_3d_model)

2 NetCDF error numbers changed. (data_output_3d)

A link to the website appendix_a.html is printed for further information
about the possible errors. (message)

Temperature gradient criterion for estimating the boundary layer height
replaced by the gradient criterion of Sullivan et al. (1998). (flow_statistics)

NetCDF unit attribute in timeseries output in case of statistic regions added
(netcdf)

Output of NetCDF messages with aid of message handling routine.
(check_open, close_file, data_output_2d, data_output_3d,
data_output_profiles, data_output_ptseries, data_output_spectra,
data_output_tseries, netcdf, output_particles_netcdf)

Output of messages replaced by message handling routine.
(advec_particles, advec_s_bc, buoyancy, calc_spectra, check_for_restart,
check_open, coriolis, cpu_log, data_output_2d, data_output_3d, data_output_dvrp,
data_output_profiles, data_output_spectra, fft_xy, flow_statistics, header,
init_1d_model, init_3d_model, init_dvrp, init_grid, init_particles, init_pegrid,
netcdf, parin, plant_canopy_model, poisfft_hybrid, poismg, read_3d_binary,
read_var_list, surface_coupler, temperton_fft, timestep, user_actions,
user_data_output_dvrp, user_dvrp_coltab, user_init_grid, user_init_plant_canopy,
user_parin, user_read_restart_data, user_spectra )

Maximum number of tails is calculated from maximum number of particles and
skip_particles_for_tail (init_particles)

Value of vertical_particle_advection may differ for each particle group
(advec_particles, header, modules)

First constant in array den also defined as type double. (eqn_state_seawater)

Parameter dvrp_psize moved from particles_par to dvrp_graphics_par. (package_parin)

topography_grid_convention moved from userpar to inipar (check_parameters,
header, parin, read_var_list, user_check_parameters, user_header,
user_init_grid, user_parin, write_var_list)

Default value of grid_matching changed to strict.

Adjustments for runs on lcxt4 (necessary due to a software update on CRAY) and
for coupled runs on ibmy (mrun, subjob)

advec_particles, advec_s_bc, buoyancy, calc_spectra, check_for_restart, check_open, check_parameters, close_file, coriolis, cpu_log, data_output_2d, data_output_3d, data_output_dvrp, data_output_profiles, data_output_ptseries, data_output_spectra, data_output_tseries, eqn_state_seawater, fft_xy, flow_statistics, header, init_1d_model, init_3d_model, init_dvrp, init_grid, init_particles, init_pegrid, message, mrun, netcdf, output_particles_netcdf, package_parin, parin, plant_canopy_model, poisfft, poisfft_hybrid, poismg, read_3d_binary, read_var_list, sort_particles, subjob, user_check_parameters, user_header, user_init_grid, user_parin, surface_coupler, temperton_fft, timestep, user_actions, user_data_output_dvrp, user_dvrp_coltab, user_init_grid, user_init_plant_canopy, user_parin, user_read_restart_data, user_spectra, write_var_list

Errors:


Bugfix: Initial hydrostatic pressure profile in case of ocean runs is now
calculated in 5 iteration steps. (init_ocean)

Bugfix: wrong sign in the buoyancy production of the ocean part in case the
reference density is not used (only in the 3D routine production_e) (production_e)

Bugfix: output of averaged 2d/3d quantities requires that an averaging
interval has been set; a respective error message is included (check_parameters)

Bugfix: Output on unit 14 only if requested by write_binary.
(user_last_actions)

Bugfix to avoid zero division by km_neutral (production_e)

Bugfix for extended NetCDF files: In order to avoid 'data mode' errors if
updated attributes are larger than their original size, NF90_PUT_ATT is called
in 'define mode' enclosed by NF90_REDEF and NF90_ENDDEF calls. This implies a
possible performance loss; an alternative strategy would be to ensure equal
attribute size in a job chain. (netcdf)
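
The workaround uses the standard NetCDF sequence of re-entering define mode before rewriting an attribute. A minimal sketch, assuming the title is stored as a global attribute (the identifiers id_set, nc_stat and new_title are illustrative only; error handling omitted):

   nc_stat = NF90_REDEF( id_set )                   ! switch back to define mode
   nc_stat = NF90_PUT_ATT( id_set, NF90_GLOBAL, 'title', new_title )
   nc_stat = NF90_ENDDEF( id_set )                  ! return to data mode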

Bugfix: correction of initial volume flow for non-flat topography (init_3d_model)
Bugfix: zero initialization of arrays within buildings for 'cyclic_fill' (init_3d_model)

Bugfix: to_be_resorted => s_av for time-averaged scalars (data_output_2d, data_output_3d)

Bugfix: error in formatting the output (message)

Bugfix: prevent ngp_2dh_s_inner from becoming zero (init_3d_model)

Typographical error: unit of wpt in dots_unit (modules)

Bugfix: error in check, if particles moved further than one subdomain length.
This check must not be applied for newly released particles. (advec_particles)

Bugfix: several tail counters are initialized, particle_tail_coordinates is
only written to file if its third index is > 0, arrays for tails are allocated
with a minimum size of 10 tails if there is no tail initially (init_particles,
advec_particles)

Bugfix: pressure included for profile output (check_parameters)

Bugfix: Type of count and count_rate changed to default INTEGER on NEC machines
(cpu_log)

Bugfix: output of particle time series only if particle advection is switched
on. (time_integration)

Bugfix: qsws was calculated in case of constant_heatflux = .FALSE. (prandtl_fluxes)

Bugfix: averaging along z is not allowed for 2d quantities (e.g. u* and z0) (data_output_2d)

Typographical errors (netcdf)

If the inversion height calculated by the prerun is zero, inflow_damping_height
must be explicitly specified (init_3d_model)

Small bugfix concerning 3d 64bit netcdf output format (header)

Bugfix: dt_fixed removed from the restart file, because otherwise, no change
from a fixed to a variable timestep would be possible in restart runs.
(read_var_list, write_var_list)

Bugfix: initial setting of time_coupling in coupled restart runs (time_integration)

advec_particles, check_parameters, cpu_log, data_output_2d, data_output_3d, header, init_3d_model, init_particles, init_ocean, modules, netcdf, prandtl_fluxes, production_e, read_var_list, time_integration, user_last_actions, write_var_list

  • Property svn:keywords set to Id
File size: 33.9 KB
[1]1 SUBROUTINE init_pegrid
2
3!------------------------------------------------------------------------------!
[254]4! Current revisions:
[1]5! -----------------
[392]6!
[181]7! ATTENTION: nnz_x undefined problem still has to be solved!!!!!!!!
[108]8! TEST OUTPUT (TO BE REMOVED) logging mpi2 ierr values
[77]9!
10! Former revisions:
11! -----------------
12! $Id: init_pegrid.f90 392 2009-09-24 10:39:14Z raasch $
13!
[392]14! 274 2009-03-26 15:11:21Z heinze
15! Output of messages replaced by message handling routine.
16!
[226]17! 206 2008-10-13 14:59:11Z raasch
18! Implementation of an MPI-1 coupling: added __parallel within the __mpi2 part
19! 2d-decomposition is default on SGI-ICE systems
20!
[198]21! 197 2008-09-16 15:29:03Z raasch
22! multigrid levels are limited by subdomains if mg_switch_to_pe0_level = -1,
23! nz is used instead of nnz for calculating mg-levels
24! Collect on PE0 horizontal index bounds from all other PEs,
25! broadcast the id of the inflow PE (using the respective communicator)
26!
[139]27! 114 2007-10-10 00:03:15Z raasch
28! Allocation of wall flag arrays for multigrid solver
29!
[110]30! 108 2007-08-24 15:10:38Z letzel
31! Intercommunicator (comm_inter) and derived data type (type_xy) for
32! coupled model runs created, assign coupling_mode_remote,
33! indices nxlu and nysv are calculated (needed for non-cyclic boundary
34! conditions)
35!
[83]36! 82 2007-04-16 15:40:52Z raasch
37! Cpp-directive lcmuk changed to intel_openmp_bug, setting of host on lcmuk by
38! cpp-directive removed
39!
[77]40! 75 2007-03-22 09:54:05Z raasch
[73]41! uxrp, vynp eliminated,
[75]42! dirichlet/neumann changed to dirichlet/radiation, etc.,
43! poisfft_init is only called if fft-solver is switched on
[1]44!
[3]45! RCS Log replace by Id keyword, revision history cleaned up
46!
[1]47! Revision 1.28  2006/04/26 13:23:32  raasch
48! lcmuk does not understand the !$ comment so a cpp-directive is required
49!
50! Revision 1.1  1997/07/24 11:15:09  raasch
51! Initial revision
52!
53!
54! Description:
55! ------------
56! Determination of the virtual processor topology (if not prescribed by the
57! user) and computation of the grid point number and array bounds of the local
58! domains.
59!------------------------------------------------------------------------------!
60
61    USE control_parameters
62    USE fft_xy
[163]63    USE grid_variables
[1]64    USE indices
65    USE pegrid
66    USE poisfft_mod
67    USE poisfft_hybrid_mod
68    USE statistics
69    USE transpose_indices
70
71
72    IMPLICIT NONE
73
[163]74    INTEGER ::  gathered_size, i, id_inflow_l, id_recycling_l, ind(5), j, k, &
[151]75                maximum_grid_level_l, mg_switch_to_pe0_level_l, mg_levels_x, &
76                mg_levels_y, mg_levels_z, nnx_y, nnx_z, nny_x, nny_z, nnz_x, &
77                nnz_y, numproc_sqr, nx_total, nxl_l, nxr_l, nyn_l, nys_l,    &
78                nzb_l, nzt_l, omp_get_num_threads, subdomain_size
[1]79
80    INTEGER, DIMENSION(:), ALLOCATABLE ::  ind_all, nxlf, nxrf, nynf, nysf
81
82    LOGICAL ::  found
83
84!
85!-- Get the number of OpenMP threads
86    !$OMP PARALLEL
[82]87#if defined( __intel_openmp_bug )
[1]88    threads_per_task = omp_get_num_threads()
89#else
90!$  threads_per_task = omp_get_num_threads()
91#endif
92    !$OMP END PARALLEL
93
94
95#if defined( __parallel )
96!
97!-- Determine the processor topology or check it, if prescribed by the user
98    IF ( npex == -1  .AND.  npey == -1 )  THEN
99
100!
101!--    Automatic determination of the topology
102!--    The default on SMP- and cluster-hosts is a 1d-decomposition along x
[206]103       IF ( host(1:3) == 'ibm'  .OR.  host(1:3) == 'nec'      .OR. &
104            ( host(1:2) == 'lc'  .AND.  host(3:5) /= 'sgi' )  .OR. &
105             host(1:3) == 'dec' )  THEN
[1]106
107          pdims(1) = numprocs
108          pdims(2) = 1
109
110       ELSE
111
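!
!--       Choose a 2d-decomposition that is as close to square as possible:
!--       pdims(1) becomes the largest divisor of numprocs not exceeding
!--       SQRT( numprocs ) (e.g. numprocs = 12 gives pdims = (/ 3, 4 /))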
112          numproc_sqr = SQRT( REAL( numprocs ) )
113          pdims(1)    = MAX( numproc_sqr , 1 )
114          DO  WHILE ( MOD( numprocs , pdims(1) ) /= 0 )
115             pdims(1) = pdims(1) - 1
116          ENDDO
117          pdims(2) = numprocs / pdims(1)
118
119       ENDIF
120
121    ELSEIF ( npex /= -1  .AND.  npey /= -1 )  THEN
122
123!
124!--    Prescribed by user. Number of processors on the prescribed topology
125!--    must be equal to the number of PEs available to the job
126       IF ( ( npex * npey ) /= numprocs )  THEN
[274]127          WRITE( message_string, * ) 'number of PEs of the prescribed ',      & 
128                 'topology (', npex*npey,') does not match & the number of ', & 
129                 'PEs available to the job (', numprocs, ')'
[254]130          CALL message( 'init_pegrid', 'PA0221', 1, 2, 0, 6, 0 )
[1]131       ENDIF
132       pdims(1) = npex
133       pdims(2) = npey
134
135    ELSE
136!
137!--    If the processor topology is prescribed by the user, the number of
138!--    PEs must be given in both directions
[274]139       message_string = 'if the processor topology is prescribed by the ' //   &
140                   'user,& both values of "npex" and "npey" must be given ' // &
141                   'in the &NAMELIST-parameter file'
[254]142       CALL message( 'init_pegrid', 'PA0222', 1, 2, 0, 6, 0 )
[1]143
144    ENDIF
145
146!
147!-- The hybrid solver can only be used in case of a 1d-decomposition along x
148    IF ( pdims(2) /= 1  .AND.  psolver == 'poisfft_hybrid' )  THEN
[254]149       message_string = 'psolver = "poisfft_hybrid" can only be' // &
150                        '& used in case of a 1d-decomposition along x'
151       CALL message( 'init_pegrid', 'PA0223', 1, 2, 0, 6, 0 )
[1]152    ENDIF
153
154!
155!-- If necessary, set horizontal boundary conditions to non-cyclic
156    IF ( bc_lr /= 'cyclic' )  cyclic(1) = .FALSE.
157    IF ( bc_ns /= 'cyclic' )  cyclic(2) = .FALSE.
158
159!
160!-- Create the virtual processor grid
161    CALL MPI_CART_CREATE( comm_palm, ndim, pdims, cyclic, reorder, &
162                          comm2d, ierr )
163    CALL MPI_COMM_RANK( comm2d, myid, ierr )
164    WRITE (myid_char,'(''_'',I4.4)')  myid
165
166    CALL MPI_CART_COORDS( comm2d, myid, ndim, pcoord, ierr )
167    CALL MPI_CART_SHIFT( comm2d, 0, 1, pleft, pright, ierr )
168    CALL MPI_CART_SHIFT( comm2d, 1, 1, psouth, pnorth, ierr )
169
170!
171!-- Determine sub-topologies for transpositions
172!-- Transposition from z to x:
173    remain_dims(1) = .TRUE.
174    remain_dims(2) = .FALSE.
175    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dx, ierr )
176    CALL MPI_COMM_RANK( comm1dx, myidx, ierr )
177!
178!-- Transposition from x to y
179    remain_dims(1) = .FALSE.
180    remain_dims(2) = .TRUE.
181    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dy, ierr )
182    CALL MPI_COMM_RANK( comm1dy, myidy, ierr )
183
184
185!
186!-- Find a grid (used for array d) which will match the transposition demands
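!-- (if grid_matching /= 'strict', nxa, nya and nza may be chosen larger than
!-- nx, ny and nz so that the gridpoint numbers meet the divisibility
!-- requirements of the transpositions; the additional points are used for
!-- the pressure solver arrays only)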
187    IF ( grid_matching == 'strict' )  THEN
188
189       nxa = nx;  nya = ny;  nza = nz
190
191    ELSE
192
193       found = .FALSE.
194   xn: DO  nxa = nx, 2*nx
195!
196!--       Meet conditions for nx
197          IF ( MOD( nxa+1, pdims(1) ) /= 0 .OR. &
198               MOD( nxa+1, pdims(2) ) /= 0 )  CYCLE xn
199
200      yn: DO  nya = ny, 2*ny
201!
202!--          Meet conditions for ny
203             IF ( MOD( nya+1, pdims(2) ) /= 0 .OR. &
204                  MOD( nya+1, pdims(1) ) /= 0 )  CYCLE yn
205
206
207         zn: DO  nza = nz, 2*nz
208!
209!--             Meet conditions for nz
210                IF ( ( MOD( nza, pdims(1) ) /= 0  .AND.  pdims(1) /= 1  .AND. &
211                       pdims(2) /= 1 )  .OR.                                  &
212                     ( MOD( nza, pdims(2) ) /= 0  .AND.  dt_dosp /= 9999999.9 &
213                     ) )  THEN
214                   CYCLE zn
215                ELSE
216                   found = .TRUE.
217                   EXIT xn
218                ENDIF
219
220             ENDDO zn
221
222          ENDDO yn
223
224       ENDDO xn
225
226       IF ( .NOT. found )  THEN
[254]227          message_string = 'no matching grid for transpositions found'
228          CALL message( 'init_pegrid', 'PA0224', 1, 2, 0, 6, 0 )
[1]229       ENDIF
230
231    ENDIF
232
233!
234!-- Calculate array bounds in x-direction for every PE.
235!-- The last PE along x may get less grid points than the others
236    ALLOCATE( nxlf(0:pdims(1)-1), nxrf(0:pdims(1)-1), nynf(0:pdims(2)-1), &
237              nysf(0:pdims(2)-1), nnx_pe(0:pdims(1)-1), nny_pe(0:pdims(2)-1) )
238
239    IF ( MOD( nxa+1 , pdims(1) ) /= 0 )  THEN
[274]240       WRITE( message_string, * ) 'x-direction: gridpoint number (',nx+1,') ',&
241                               'is not an& integral divisor of the number of ', &
242                               'processors (', pdims(1),')'
[254]243       CALL message( 'init_pegrid', 'PA0225', 1, 2, 0, 6, 0 )
[1]244    ELSE
245       nnx  = ( nxa + 1 ) / pdims(1)
246       IF ( nnx*pdims(1) - ( nx + 1) > nnx )  THEN
[274]247          WRITE( message_string, * ) 'x-direction: nx does not match the',    & 
248                       'requirements given by the number of PEs &used',       &
249                       '& please use nx = ', nx - ( pdims(1) - ( nnx*pdims(1) &
250                                   - ( nx + 1 ) ) ), ' instead of nx =', nx
[254]251          CALL message( 'init_pegrid', 'PA0226', 1, 2, 0, 6, 0 )
[1]252       ENDIF
253    ENDIF   
254
255!
256!-- Left and right array bounds, number of gridpoints
257    DO  i = 0, pdims(1)-1
258       nxlf(i)   = i * nnx
259       nxrf(i)   = ( i + 1 ) * nnx - 1
260       nnx_pe(i) = MIN( nx, nxrf(i) ) - nxlf(i) + 1
261    ENDDO
262
263!
264!-- Calculate array bounds in y-direction for every PE.
265    IF ( MOD( nya+1 , pdims(2) ) /= 0 )  THEN
[274]266       WRITE( message_string, * ) 'y-direction: gridpoint number (',ny+1,') ', &
267                           'is not an& integral divisor of the number of',     &
268                           'processors (', pdims(2),')'
[254]269       CALL message( 'init_pegrid', 'PA0227', 1, 2, 0, 6, 0 )
[1]270    ELSE
271       nny  = ( nya + 1 ) / pdims(2)
272       IF ( nny*pdims(2) - ( ny + 1) > nny )  THEN
[274]273          WRITE( message_string, * ) 'y-direction: ny does not match the',    &
274                       'requirements given by the number of PEs &used ',      &
275                       '& please use ny = ', ny - ( pdims(2) - ( nny*pdims(2) &
[254]276                                     - ( ny + 1 ) ) ), ' instead of ny =', ny
277          CALL message( 'init_pegrid', 'PA0228', 1, 2, 0, 6, 0 )
[1]278       ENDIF
279    ENDIF   
280
281!
282!-- South and north array bounds
283    DO  j = 0, pdims(2)-1
284       nysf(j)   = j * nny
285       nynf(j)   = ( j + 1 ) * nny - 1
286       nny_pe(j) = MIN( ny, nynf(j) ) - nysf(j) + 1
287    ENDDO
288
289!
290!-- Local array bounds of the respective PEs
291    nxl  = nxlf(pcoord(1))
292    nxra = nxrf(pcoord(1))
293    nxr  = MIN( nx, nxra )
294    nys  = nysf(pcoord(2))
295    nyna = nynf(pcoord(2))
296    nyn  = MIN( ny, nyna )
297    nzb  = 0
298    nzta = nza
299    nzt  = MIN( nz, nzta )
300    nnz  = nza
301
302!
303!-- Calculate array bounds and gridpoint numbers for the transposed arrays
304!-- (needed in the pressure solver)
305!-- For the transposed arrays, cyclic boundaries as well as top and bottom
306!-- boundaries are omitted, because they are obstructive to the transposition
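!-- Each transposition re-distributes the data such that the direction to be
!-- processed next by the Poisson solver is held completely on one PE; the
!-- index bounds for the steps z --> x, x --> y and y --> z are set below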
307
308!
309!-- 1. transposition  z --> x
310!-- This transposition is not necessary in case of a 1d-decomposition along x,
311!-- unless the upstream-spline method is switched on
312    IF ( pdims(2) /= 1  .OR.  momentum_advec == 'ups-scheme'  .OR. &
313         scalar_advec == 'ups-scheme' )  THEN
314
315       IF ( pdims(2) == 1  .AND. ( momentum_advec == 'ups-scheme'  .OR. &
316            scalar_advec == 'ups-scheme' ) )  THEN
[254]317          message_string = '1d-decomposition along x ' // &
318                           'chosen but nz restrictions may occur' // &
319                           '& since ups-scheme is activated'
320          CALL message( 'init_pegrid', 'PA0229', 0, 1, 0, 6, 0 )
[1]321       ENDIF
322       nys_x  = nys
323       nyn_xa = nyna
324       nyn_x  = nyn
325       nny_x  = nny
326       IF ( MOD( nza , pdims(1) ) /= 0 )  THEN
[274]327          WRITE( message_string, * ) 'transposition z --> x:',                &
328                       '&nz=',nz,' is not an integral divisor of pdims(1)=', &
329                                                                   pdims(1)
[254]330          CALL message( 'init_pegrid', 'PA0230', 1, 2, 0, 6, 0 )
[1]331       ENDIF
332       nnz_x  = nza / pdims(1)
333       nzb_x  = 1 + myidx * nnz_x
334       nzt_xa = ( myidx + 1 ) * nnz_x
335       nzt_x  = MIN( nzt, nzt_xa )
336
337       sendrecvcount_zx = nnx * nny * nnz_x
338
[181]339    ELSE
340!
341!---   Setting of dummy values because otherwise variables are undefined in
342!---   the next step  x --> y
343!---   WARNING: This case has still to be clarified!!!!!!!!!!!!
344       nnz_x  = 1
345       nzb_x  = 1
346       nzt_xa = 1
347       nzt_x  = 1
348       nny_x  = nny
349
[1]350    ENDIF
351
352!
353!-- 2. transposition  x --> y
354    nnz_y  = nnz_x
355    nzb_y  = nzb_x
356    nzt_ya = nzt_xa
357    nzt_y  = nzt_x
358    IF ( MOD( nxa+1 , pdims(2) ) /= 0 )  THEN
[274]359       WRITE( message_string, * ) 'transposition x --> y:',                &
360                         '&nx+1=',nx+1,' is not an integral divisor of ',&
361                         'pdims(2)=',pdims(2)
[254]362       CALL message( 'init_pegrid', 'PA0231', 1, 2, 0, 6, 0 )
[1]363    ENDIF
364    nnx_y = (nxa+1) / pdims(2)
365    nxl_y = myidy * nnx_y
366    nxr_ya = ( myidy + 1 ) * nnx_y - 1
367    nxr_y  = MIN( nx, nxr_ya )
368
369    sendrecvcount_xy = nnx_y * nny_x * nnz_y
370
371!
372!-- 3. transposition  y --> z  (ELSE:  x --> y  in case of 1D-decomposition
373!-- along x)
374    IF ( pdims(2) /= 1  .OR.  momentum_advec == 'ups-scheme'  .OR. &
375         scalar_advec == 'ups-scheme' )  THEN
376!
377!--    y --> z
378!--    This transposition is not necessary in case of a 1d-decomposition
379!--    along x, unless the upstream-spline method is switched on
380       nnx_z  = nnx_y
381       nxl_z  = nxl_y
382       nxr_za = nxr_ya
383       nxr_z  = nxr_y
384       IF ( MOD( nya+1 , pdims(1) ) /= 0 )  THEN
[274]385          WRITE( message_string, * ) 'transposition y --> z:',            &
386                            '& ny+1=',ny+1,' is not an integral divisor of ',&
387                            'pdims(1)=',pdims(1)
[254]388          CALL message( 'init_pegrid', 'PA0232', 1, 2, 0, 6, 0 )
[1]389       ENDIF
390       nny_z  = (nya+1) / pdims(1)
391       nys_z  = myidx * nny_z
392       nyn_za = ( myidx + 1 ) * nny_z - 1
393       nyn_z  = MIN( ny, nyn_za )
394
395       sendrecvcount_yz = nnx_y * nny_z * nnz_y
396
397    ELSE
398!
399!--    x --> y. This condition must be fulfilled for a 1D-decomposition along x
400       IF ( MOD( nya+1 , pdims(1) ) /= 0 )  THEN
[274]401          WRITE( message_string, * ) 'transposition x --> y:',               &
402                            '& ny+1=',ny+1,' is not an integral divisor of ',&
403                            'pdims(1)=',pdims(1)
[254]404          CALL message( 'init_pegrid', 'PA0233', 1, 2, 0, 6, 0 )
[1]405       ENDIF
406
407    ENDIF
408
409!
410!-- Indices for direct transpositions z --> y (used for calculating spectra)
411    IF ( dt_dosp /= 9999999.9 )  THEN
412       IF ( MOD( nza, pdims(2) ) /= 0 )  THEN
[274]413          WRITE( message_string, * ) 'direct transposition z --> y (needed ', &
414                    'for spectra):& nz=',nz,' is not an integral divisor of ',&
415                    'pdims(2)=',pdims(2)
[254]416          CALL message( 'init_pegrid', 'PA0234', 1, 2, 0, 6, 0 )
[1]417       ELSE
418          nxl_yd  = nxl
419          nxr_yda = nxra
420          nxr_yd  = nxr
421          nzb_yd  = 1 + myidy * ( nza / pdims(2) )
422          nzt_yda = ( myidy + 1 ) * ( nza / pdims(2) )
423          nzt_yd  = MIN( nzt, nzt_yda )
424
425          sendrecvcount_zyd = nnx * nny * ( nza / pdims(2) )
426       ENDIF
427    ENDIF
428
429!
430!-- Indices for direct transpositions y --> x (they are only possible in case
431!-- of a 1d-decomposition along x)
432    IF ( pdims(2) == 1 )  THEN
433       nny_x  = nny / pdims(1)
434       nys_x  = myid * nny_x
435       nyn_xa = ( myid + 1 ) * nny_x - 1
436       nyn_x  = MIN( ny, nyn_xa )
437       nzb_x  = 1
438       nzt_xa = nza
439       nzt_x  = nz
440       sendrecvcount_xy = nnx * nny_x * nza
441    ENDIF
442
443!
444!-- Indices for direct transpositions x --> y (they are only possible in case
445!-- of a 1d-decomposition along y)
446    IF ( pdims(1) == 1 )  THEN
447       nnx_y  = nnx / pdims(2)
448       nxl_y  = myid * nnx_y
449       nxr_ya = ( myid + 1 ) * nnx_y - 1
450       nxr_y  = MIN( nx, nxr_ya )
451       nzb_y  = 1
452       nzt_ya = nza
453       nzt_y  = nz
454       sendrecvcount_xy = nnx_y * nny * nza
455    ENDIF
456
457!
458!-- Arrays for storing the array bounds are not needed any more
459    DEALLOCATE( nxlf , nxrf , nynf , nysf )
460
[145]461!
462!-- Collect index bounds from other PEs (to be written to restart file later)
463    ALLOCATE( hor_index_bounds(4,0:numprocs-1) )
464
465    IF ( myid == 0 )  THEN
466
467       hor_index_bounds(1,0) = nxl
468       hor_index_bounds(2,0) = nxr
469       hor_index_bounds(3,0) = nys
470       hor_index_bounds(4,0) = nyn
471
472!
473!--    Receive data from all other PEs
474       DO  i = 1, numprocs-1
475          CALL MPI_RECV( ibuf, 4, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
476                         ierr )
477          hor_index_bounds(:,i) = ibuf(1:4)
478       ENDDO
479
480    ELSE
481!
482!--    Send index bounds to PE0
483       ibuf(1) = nxl
484       ibuf(2) = nxr
485       ibuf(3) = nys
486       ibuf(4) = nyn
487       CALL MPI_SEND( ibuf, 4, MPI_INTEGER, 0, myid, comm2d, ierr )
488
489    ENDIF
490
[1]491#if defined( __print )
492!
493!-- Control output
494    IF ( myid == 0 )  THEN
495       PRINT*, '*** processor topology ***'
496       PRINT*, ' '
497       PRINT*, 'myid   pcoord    left right  south north  idx idy   nxl: nxr',&
498               &'   nys: nyn'
499       PRINT*, '------------------------------------------------------------',&
500               &'-----------'
501       WRITE (*,1000)  0, pcoord(1), pcoord(2), pleft, pright, psouth, pnorth, &
502                       myidx, myidy, nxl, nxr, nys, nyn
5031000   FORMAT (I4,2X,'(',I3,',',I3,')',3X,I4,2X,I4,3X,I4,2X,I4,2X,I3,1X,I3, &
504               2(2X,I4,':',I4))
505
506!
[108]507!--    Receive data from the other PEs
[1]508       DO  i = 1,numprocs-1
509          CALL MPI_RECV( ibuf, 12, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
510                         ierr )
511          WRITE (*,1000)  i, ( ibuf(j) , j = 1,12 )
512       ENDDO
513    ELSE
514
515!
516!--    Send data to PE0
517       ibuf(1) = pcoord(1); ibuf(2) = pcoord(2); ibuf(3) = pleft
518       ibuf(4) = pright; ibuf(5) = psouth; ibuf(6) = pnorth; ibuf(7) = myidx
519       ibuf(8) = myidy; ibuf(9) = nxl; ibuf(10) = nxr; ibuf(11) = nys
520       ibuf(12) = nyn
521       CALL MPI_SEND( ibuf, 12, MPI_INTEGER, 0, myid, comm2d, ierr )       
522    ENDIF
523#endif
524
[206]525#if defined( __parallel )
[102]526#if defined( __mpi2 )
527!
528!-- In case of coupled runs, get the port name on PE0 of the atmosphere model
529!-- and pass it to PE0 of the ocean model
530    IF ( myid == 0 )  THEN
531
532       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
533
534          CALL MPI_OPEN_PORT( MPI_INFO_NULL, port_name, ierr )
[108]535!
536!--       TEST OUTPUT (TO BE REMOVED)
537          WRITE(9,*)  TRIM( coupling_mode ),  &
538               ', ierr after MPI_OPEN_PORT: ', ierr
539          CALL LOCAL_FLUSH( 9 )
540
[102]541          CALL MPI_PUBLISH_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, &
542                                 ierr )
[104]543!
[108]544!--       TEST OUTPUT (TO BE REMOVED)
545          WRITE(9,*)  TRIM( coupling_mode ),  &
546               ', ierr after MPI_PUBLISH_NAME: ', ierr
547          CALL LOCAL_FLUSH( 9 )
548
549!
[104]550!--       Write a flag file for the ocean model and the other atmosphere
551!--       processes.
552!--       There seems to be a bug in MPICH2 which causes hanging processes
553!--       in case that execution of LOOKUP_NAME is continued too early
554!--       (i.e. before the port has been created)
555          OPEN( 90, FILE='COUPLING_PORT_OPENED', FORM='FORMATTED' )
556          WRITE ( 90, '(''TRUE'')' )
557          CLOSE ( 90 )
[102]558
559       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
560
[104]561!
562!--       Continue only if the atmosphere model has created the port.
563!--       There seems to be a bug in MPICH2 which causes hanging processes
564!--       in case that execution of LOOKUP_NAME is continued too early
565!--       (i.e. before the port has been created)
566          INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
567          DO WHILE ( .NOT. found )
568             INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
569          ENDDO
570
[102]571          CALL MPI_LOOKUP_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, ierr )
[108]572!
573!--       TEST OUTPUT (TO BE REMOVED)
574          WRITE(9,*)  TRIM( coupling_mode ),  &
575               ', ierr after MPI_LOOKUP_NAME: ', ierr
576          CALL LOCAL_FLUSH( 9 )
[102]577
[108]578
[102]579       ENDIF
580
581    ENDIF
582
583!
584!-- In case of coupled runs, establish the connection between the atmosphere
585!-- and the ocean model and define the intercommunicator (comm_inter)
586    CALL MPI_BARRIER( comm2d, ierr )
587    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
588
[254]589       PRINT*, '... before COMM_ACCEPT'
[102]590       CALL MPI_COMM_ACCEPT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
591                             comm_inter, ierr )
[254]592       PRINT*, '--- ierr = ', ierr
593       PRINT*, '--- comm_inter atmosphere = ', comm_inter
[102]594
[108]595       coupling_mode_remote = 'ocean_to_atmosphere'
596
[102]597    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
598
599       IF ( myid == 0 )  PRINT*, '*** read: ', port_name, '  ierr = ', ierr
[254]600       PRINT*, '... before COMM_CONNECT'
[102]601       CALL MPI_COMM_CONNECT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
602                              comm_inter, ierr )
[254]603       PRINT*, '--- ierr = ', ierr
604       PRINT*, '--- comm_inter ocean      = ', comm_inter
[102]605
[108]606       coupling_mode_remote = 'atmosphere_to_ocean'
607
[102]608    ENDIF
[206]609#endif
[102]610
611!
612!-- In case of coupled runs, create a new MPI derived datatype for the
613!-- exchange of surface (xy) data.
614!-- Gridpoint number for the exchange of ghost points (xy-plane)
615    ngp_xy  = ( nxr - nxl + 3 ) * ( nyn - nys + 3 )
616
617!
618!-- Define a new MPI derived datatype for the exchange of ghost points in
619!-- y-direction for 2D-arrays (line)
620    CALL MPI_TYPE_VECTOR( ngp_xy, 1, nzt-nzb+2, MPI_REAL, type_xy, ierr )
621    CALL MPI_TYPE_COMMIT( type_xy, ierr )
622#endif
623
[1]624#else
625
626!
627!-- Array bounds when running on a single PE (respectively a non-parallel
628!-- machine)
629    nxl  = 0
630    nxr  = nx
631    nxra = nx
632    nnx  = nxr - nxl + 1
633    nys  = 0
634    nyn  = ny
635    nyna = ny
636    nny  = nyn - nys + 1
637    nzb  = 0
638    nzt  = nz
639    nzta = nz
640    nnz  = nz
641
[145]642    ALLOCATE( hor_index_bounds(4,0:0) )
643    hor_index_bounds(1,0) = nxl
644    hor_index_bounds(2,0) = nxr
645    hor_index_bounds(3,0) = nys
646    hor_index_bounds(4,0) = nyn
647
[1]648!
649!-- Array bounds for the pressure solver (in the parallel code, these bounds
650!-- are the ones for the transposed arrays)
651    nys_x  = nys
652    nyn_x  = nyn
653    nyn_xa = nyn
654    nzb_x  = nzb + 1
655    nzt_x  = nzt
656    nzt_xa = nzt
657
658    nxl_y  = nxl
659    nxr_y  = nxr
660    nxr_ya = nxr
661    nzb_y  = nzb + 1
662    nzt_y  = nzt
663    nzt_ya = nzt
664
665    nxl_z  = nxl
666    nxr_z  = nxr
667    nxr_za = nxr
668    nys_z  = nys
669    nyn_z  = nyn
670    nyn_za = nyn
671
672#endif
673
674!
675!-- Calculate number of grid levels necessary for the multigrid poisson solver
676!-- as well as the gridpoint indices on each level
677    IF ( psolver == 'multigrid' )  THEN
678
679!
680!--    First calculate number of possible grid levels for the subdomains
681       mg_levels_x = 1
682       mg_levels_y = 1
683       mg_levels_z = 1
684
685       i = nnx
686       DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
687          i = i / 2
688          mg_levels_x = mg_levels_x + 1
689       ENDDO
690
691       j = nny
692       DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
693          j = j / 2
694          mg_levels_y = mg_levels_y + 1
695       ENDDO
696
[181]697       k = nz    ! do not use nnz because it might be > nz due to transposition
698                 ! requirements
[1]699       DO WHILE ( MOD( k, 2 ) == 0  .AND.  k /= 2 )
700          k = k / 2
701          mg_levels_z = mg_levels_z + 1
702       ENDDO
703
704       maximum_grid_level = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
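!
!--    Coarsening halves the number of gridpoints per direction, so the loops
!--    above give the number of grid levels that nnx, nny and nz allow
!--    (halving down to a value of 2); the smallest of the three values
!--    limits the number of levels usable on the subdomains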
705
706!
707!--    Find out, if the total domain allows more levels. These additional
708!--    levels are processed on PE0 only.
[197]709       IF ( numprocs > 1  .AND.  mg_switch_to_pe0_level /= -1 )  THEN
[1]710          IF ( mg_levels_z > MIN( mg_levels_x, mg_levels_y ) )  THEN
711             mg_switch_to_pe0_level_l = maximum_grid_level
712
713             mg_levels_x = 1
714             mg_levels_y = 1
715
716             i = nx+1
717             DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
718                i = i / 2
719                mg_levels_x = mg_levels_x + 1
720             ENDDO
721
722             j = ny+1
723             DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
724                j = j / 2
725                mg_levels_y = mg_levels_y + 1
726             ENDDO
727
728             maximum_grid_level_l = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
729
730             IF ( maximum_grid_level_l > mg_switch_to_pe0_level_l )  THEN
731                mg_switch_to_pe0_level_l = maximum_grid_level_l - &
732                                           mg_switch_to_pe0_level_l + 1
733             ELSE
734                mg_switch_to_pe0_level_l = 0
735             ENDIF
736          ELSE
737             mg_switch_to_pe0_level_l = 0
738             maximum_grid_level_l = maximum_grid_level
739          ENDIF
740
741!
742!--       Use switch level calculated above only if it is not pre-defined
743!--       by user
744          IF ( mg_switch_to_pe0_level == 0 )  THEN
745
746             IF ( mg_switch_to_pe0_level_l /= 0 )  THEN
747                mg_switch_to_pe0_level = mg_switch_to_pe0_level_l
748                maximum_grid_level     = maximum_grid_level_l
749             ENDIF
750
751          ELSE
752!
753!--          Check pre-defined value and reset to default, if necessary
754             IF ( mg_switch_to_pe0_level < mg_switch_to_pe0_level_l  .OR.  &
755                  mg_switch_to_pe0_level >= maximum_grid_level_l )  THEN
[254]756                message_string = 'mg_switch_to_pe0_level ' // &
757                                 'out of range and reset to default (=0)'
758                CALL message( 'init_pegrid', 'PA0235', 0, 1, 0, 6, 0 )
[1]759                mg_switch_to_pe0_level = 0
760             ELSE
761!
762!--             Use the largest number of possible levels anyway and recalculate
763!--             the switch level to this largest number of possible values
764                maximum_grid_level = maximum_grid_level_l
765
766             ENDIF
767          ENDIF
768
769       ENDIF
770
771       ALLOCATE( grid_level_count(maximum_grid_level),                   &
772                 nxl_mg(maximum_grid_level), nxr_mg(maximum_grid_level), &
773                 nyn_mg(maximum_grid_level), nys_mg(maximum_grid_level), &
774                 nzt_mg(maximum_grid_level) )
775
776       grid_level_count = 0
777       nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzt_l = nzt
778
779       DO  i = maximum_grid_level, 1 , -1
780
781          IF ( i == mg_switch_to_pe0_level )  THEN
782#if defined( __parallel )
783!
784!--          Save the grid size of the subdomain at the switch level, because
785!--          it is needed in poismg.
786!--          Array bounds of the local subdomain grids are gathered on PE0
787             ind(1) = nxl_l; ind(2) = nxr_l
788             ind(3) = nys_l; ind(4) = nyn_l
789             ind(5) = nzt_l
790             ALLOCATE( ind_all(5*numprocs), mg_loc_ind(5,0:numprocs-1) )
791             CALL MPI_ALLGATHER( ind, 5, MPI_INTEGER, ind_all, 5, &
792                                 MPI_INTEGER, comm2d, ierr )
793             DO  j = 0, numprocs-1
794                DO  k = 1, 5
795                   mg_loc_ind(k,j) = ind_all(k+j*5)
796                ENDDO
797             ENDDO
798             DEALLOCATE( ind_all )
799!
800!--          Calculate the grid size of the total domain gathered on PE0
801             nxr_l = ( nxr_l-nxl_l+1 ) * pdims(1) - 1
802             nxl_l = 0
803             nyn_l = ( nyn_l-nys_l+1 ) * pdims(2) - 1
804             nys_l = 0
805!
806!--          The size of this gathered array must not be larger than the
807!--          array tend, which is used in the multigrid scheme as a temporary
808!--          array
809             subdomain_size = ( nxr - nxl + 3 )     * ( nyn - nys + 3 )     * &
810                              ( nzt - nzb + 2 )
811             gathered_size  = ( nxr_l - nxl_l + 3 ) * ( nyn_l - nys_l + 3 ) * &
812                              ( nzt_l - nzb + 2 )
813
814             IF ( gathered_size > subdomain_size )  THEN
[254]815                message_string = 'not enough memory for storing ' // &
816                                 'gathered multigrid data on PE0'
817                CALL message( 'init_pegrid', 'PA0236', 1, 2, 0, 6, 0 )
[1]818             ENDIF
819#else
[254]820             message_string = 'multigrid gather/scatter impossible ' // &
[1]821                          'in non parallel mode'
[254]822             CALL message( 'init_pegrid', 'PA0237', 1, 2, 0, 6, 0 )
[1]823#endif
824          ENDIF
825
826          nxl_mg(i) = nxl_l
827          nxr_mg(i) = nxr_l
828          nys_mg(i) = nys_l
829          nyn_mg(i) = nyn_l
830          nzt_mg(i) = nzt_l
831
832          nxl_l = nxl_l / 2 
833          nxr_l = nxr_l / 2
834          nys_l = nys_l / 2 
835          nyn_l = nyn_l / 2 
836          nzt_l = nzt_l / 2 
837       ENDDO
838
839    ELSE
840
841       maximum_grid_level = 1
842
843    ENDIF
844
845    grid_level = maximum_grid_level
846
847#if defined( __parallel )
848!
849!-- Gridpoint number for the exchange of ghost points (y-line for 2D-arrays)
850    ngp_y  = nyn - nys + 1
851
852!
853!-- Define a new MPI derived datatype for the exchange of ghost points in
854!-- y-direction for 2D-arrays (line)
855    CALL MPI_TYPE_VECTOR( nxr-nxl+3, 1, ngp_y+2, MPI_REAL, type_x, ierr )
856    CALL MPI_TYPE_COMMIT( type_x, ierr )
857    CALL MPI_TYPE_VECTOR( nxr-nxl+3, 1, ngp_y+2, MPI_INTEGER, type_x_int, ierr )
858    CALL MPI_TYPE_COMMIT( type_x_int, ierr )
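!
!-- type_x thus describes one grid line along x (including the two ghost
!-- columns): nxr-nxl+3 blocks of length 1, separated by the stride ngp_y+2,
!-- which is the y-extent of a 2d-array including its ghost points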
859
860!
861!-- Calculate gridpoint numbers for the exchange of ghost points along x
862!-- (yz-plane for 3D-arrays) and define MPI derived data type(s) for the
863!-- exchange of ghost points in y-direction (xz-plane).
864!-- Do these calculations for the model grid and (if necessary) also
865!-- for the coarser grid levels used in the multigrid method
866    ALLOCATE ( ngp_yz(maximum_grid_level), type_xz(maximum_grid_level) )
867
868    nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzb_l = nzb; nzt_l = nzt
869         
870    DO i = maximum_grid_level, 1 , -1
871       ngp_yz(i) = (nzt_l - nzb_l + 2) * (nyn_l - nys_l + 3)
872
873       CALL MPI_TYPE_VECTOR( nxr_l-nxl_l+3, nzt_l-nzb_l+2, ngp_yz(i), &
874                             MPI_REAL, type_xz(i), ierr )
875       CALL MPI_TYPE_COMMIT( type_xz(i), ierr )
876
877       nxl_l = nxl_l / 2 
878       nxr_l = nxr_l / 2
879       nys_l = nys_l / 2 
880       nyn_l = nyn_l / 2 
881       nzt_l = nzt_l / 2 
882    ENDDO
883#endif
884
885#if defined( __parallel )
886!
887!-- Setting of flags for inflow/outflow conditions in case of non-cyclic
[106]888!-- horizontal boundary conditions.
[1]889    IF ( pleft == MPI_PROC_NULL )  THEN
[73]890       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]891          inflow_l  = .TRUE.
[73]892       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]893          outflow_l = .TRUE.
894       ENDIF
895    ENDIF
896
897    IF ( pright == MPI_PROC_NULL )  THEN
[73]898       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]899          outflow_r = .TRUE.
[73]900       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]901          inflow_r  = .TRUE.
902       ENDIF
903    ENDIF
904
905    IF ( psouth == MPI_PROC_NULL )  THEN
[73]906       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]907          outflow_s = .TRUE.
[73]908       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]909          inflow_s  = .TRUE.
910       ENDIF
911    ENDIF
912
913    IF ( pnorth == MPI_PROC_NULL )  THEN
[73]914       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]915          inflow_n  = .TRUE.
[73]916       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]917          outflow_n = .TRUE.
918       ENDIF
919    ENDIF
920
[151]921!
922!-- Broadcast the id of the inflow PE
923    IF ( inflow_l )  THEN
[163]924       id_inflow_l = myidx
[151]925    ELSE
926       id_inflow_l = 0
927    ENDIF
928    CALL MPI_ALLREDUCE( id_inflow_l, id_inflow, 1, MPI_INTEGER, MPI_SUM, &
929                        comm1dx, ierr )
930
[163]931!
932!-- Broadcast the id of the recycling plane
933!-- WARNING: needs to be adjusted in case of inflows other than from left side!
934    IF ( ( recycling_width / dx ) >= nxl  .AND.  ( recycling_width / dx ) <= nxr ) &
935    THEN
936       id_recycling_l = myidx
937    ELSE
938       id_recycling_l = 0
939    ENDIF
940    CALL MPI_ALLREDUCE( id_recycling_l, id_recycling, 1, MPI_INTEGER, MPI_SUM, &
941                        comm1dx, ierr )
942
[1]943#else
[73]944    IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]945       inflow_l  = .TRUE.
946       outflow_r = .TRUE.
[73]947    ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]948       outflow_l = .TRUE.
949       inflow_r  = .TRUE.
950    ENDIF
951
[73]952    IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]953       inflow_n  = .TRUE.
954       outflow_s = .TRUE.
[73]955    ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]956       outflow_n = .TRUE.
957       inflow_s  = .TRUE.
958    ENDIF
959#endif
[106]960!
[110]961!-- At the outflow, u or v, respectively, have to be calculated for one more
962!-- grid point.
[106]963    IF ( outflow_l )  THEN
964       nxlu = nxl + 1
965    ELSE
966       nxlu = nxl
967    ENDIF
968    IF ( outflow_s )  THEN
969       nysv = nys + 1
970    ELSE
971       nysv = nys
972    ENDIF
[1]973
974    IF ( psolver == 'poisfft_hybrid' )  THEN
975       CALL poisfft_hybrid_ini
[75]976    ELSEIF ( psolver == 'poisfft' )  THEN
[1]977       CALL poisfft_init
978    ENDIF
979
[114]980!
981!-- Allocate wall flag arrays used in the multigrid solver
982    IF ( psolver == 'multigrid' )  THEN
983
984       DO  i = maximum_grid_level, 1, -1
985
986           SELECT CASE ( i )
987
988              CASE ( 1 )
989                 ALLOCATE( wall_flags_1(nzb:nzt_mg(i)+1,         &
990                                        nys_mg(i)-1:nyn_mg(i)+1, &
991                                        nxl_mg(i)-1:nxr_mg(i)+1) )
992
993              CASE ( 2 )
994                 ALLOCATE( wall_flags_2(nzb:nzt_mg(i)+1,         &
995                                        nys_mg(i)-1:nyn_mg(i)+1, &
996                                        nxl_mg(i)-1:nxr_mg(i)+1) )
997
998              CASE ( 3 )
999                 ALLOCATE( wall_flags_3(nzb:nzt_mg(i)+1,         &
1000                                        nys_mg(i)-1:nyn_mg(i)+1, &
1001                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1002
1003              CASE ( 4 )
1004                 ALLOCATE( wall_flags_4(nzb:nzt_mg(i)+1,         &
1005                                        nys_mg(i)-1:nyn_mg(i)+1, &
1006                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1007
1008              CASE ( 5 )
1009                 ALLOCATE( wall_flags_5(nzb:nzt_mg(i)+1,         &
1010                                        nys_mg(i)-1:nyn_mg(i)+1, &
1011                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1012
1013              CASE ( 6 )
1014                 ALLOCATE( wall_flags_6(nzb:nzt_mg(i)+1,         &
1015                                        nys_mg(i)-1:nyn_mg(i)+1, &
1016                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1017
1018              CASE ( 7 )
1019                 ALLOCATE( wall_flags_7(nzb:nzt_mg(i)+1,         &
1020                                        nys_mg(i)-1:nyn_mg(i)+1, &
1021                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1022
1023              CASE ( 8 )
1024                 ALLOCATE( wall_flags_8(nzb:nzt_mg(i)+1,         &
1025                                        nys_mg(i)-1:nyn_mg(i)+1, &
1026                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1027
1028              CASE ( 9 )
1029                 ALLOCATE( wall_flags_9(nzb:nzt_mg(i)+1,         &
1030                                        nys_mg(i)-1:nyn_mg(i)+1, &
1031                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1032
1033              CASE ( 10 )
1034                 ALLOCATE( wall_flags_10(nzb:nzt_mg(i)+1,        &
1035                                        nys_mg(i)-1:nyn_mg(i)+1, &
1036                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1037
1038              CASE DEFAULT
[254]1039                 message_string = 'more than 10 multigrid levels'
1040                 CALL message( 'init_pegrid', 'PA0238', 1, 2, 0, 6, 0 )
[114]1041
1042          END SELECT
1043
1044       ENDDO
1045
1046    ENDIF
1047
[1]1048 END SUBROUTINE init_pegrid